Making language models work for hedge funds

In his recent article "Why know-it-all LLMs make second-rate forecasters" on risk.net, Rob Mannix dismisses large language models as inferior forecasters. This critique, however, does not apply to how we at Aisot Technologies develop and use specialized language models for hedge funds and asset managers.

While OpenAI's August 2025 release of GPT-5 promised PhD-level knowledge across topics, such general-purpose models are indeed poorly suited for quantitative finance. The fundamental flaw in much of the current criticism of LLM forecasting, however, is that it judges general-purpose models on specialized financial prediction tasks, akin to judging a Swiss Army knife's effectiveness as a surgical scalpel. The solution isn't to abandon language models entirely, but to build and use them correctly for financial applications.

Why knowing less predicts better
At aisot, we don't use LLMs to make direct time-series predictions. Instead, our time-series and cross-sectional models leverage time-boxed LLMs to extract additional predictive information from financial texts. aisot's time-boxed language models are trained exclusively on data up to specific historical points, eliminating the "look-ahead bias" that plagued earlier LLM forecasting attempts.

Our time-boxed approach works by:

  • Training separate model instances for different historical periods
  • Ensuring each model only "knows" information available up to that specific time
  • Creating a temporal firewall that prevents future knowledge contamination
  • Enabling backtesting without the “know-it-all problem”
  • Using LLMs to extract predictive information from financial texts that becomes input for specialized time-series and cross-sectional financial models
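To make the temporal firewall concrete, here is a minimal sketch of the time-boxing idea in Python. The function name, data shapes, and example documents are illustrative assumptions, not Aisot's actual pipeline: the point is simply that each model instance trains only on text published on or before its cutoff date.

```python
from datetime import date

def build_timeboxed_corpora(documents, cutoffs):
    """Split a corpus into one training set per historical cutoff date.

    `documents` is a list of (publish_date, text) pairs; `cutoffs` is a
    list of dates. The returned dict maps each cutoff to the documents
    that a model "frozen" at that date is allowed to see, so no future
    knowledge can leak into a backtest.
    """
    return {
        cutoff: [text for publish_date, text in documents if publish_date <= cutoff]
        for cutoff in cutoffs
    }

# Hypothetical example corpus with publication dates.
docs = [
    (date(2020, 3, 1), "Fed cuts rates to near zero"),
    (date(2021, 11, 5), "Inflation hits 30-year high"),
    (date(2023, 3, 10), "Regional bank collapses"),
]
corpora = build_timeboxed_corpora(docs, [date(2020, 12, 31), date(2022, 12, 31)])
# The model boxed at end-2020 sees only the first document;
# the model boxed at end-2022 sees the first two, never the 2023 event.
```

In a real setting the filter would apply to the model's entire pre-training and fine-tuning corpus, but the invariant is the same: a model evaluated on a 2021 forecast must have a 2020 (or earlier) cutoff.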

This addresses the core issue raised by Alexander Denev of Turnleaf Analytics: “They train on as much data as possible going back in time – data that may no longer be relevant.” 

"We use time-boxing and integrate language model outputs with financial time-series and cross-sectional models within a Bayesian framework—statistical methods that update predictions as new information arrives," says Nino Antulov-Fantulin, Co-founder and Head of R&D at Aisot Technologies. "This approach delivers more reliable, real-time insights, which are then incorporated into client portfolios using their specific constraints and risk models to generate actionable, verifiable decisions."
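The Bayesian combination described above can be sketched with the simplest possible case: a Gaussian prior from a quantitative model updated by a noisy text-derived signal, each weighted by its precision. The numbers and variable names below are illustrative assumptions, not Aisot's model; they only show the "update predictions as new information arrives" mechanic.

```python
def bayesian_update(prior_mean, prior_var, signal_mean, signal_var):
    """Conjugate Gaussian update: blend a time-series prior with an
    LLM-derived signal, weighting each by its precision (1/variance).
    The posterior mean lands between the two, closer to the more
    certain source, and posterior variance always shrinks."""
    prior_prec = 1.0 / prior_var
    signal_prec = 1.0 / signal_var
    post_var = 1.0 / (prior_prec + signal_prec)
    post_mean = post_var * (prior_prec * prior_mean + signal_prec * signal_mean)
    return post_mean, post_var

# Hypothetical inputs: the time-series model expects a +0.2% return with
# low variance; a text-derived signal suggests +1.0% but is noisier.
mean, var = bayesian_update(0.002, 0.0001, 0.010, 0.0004)
# The posterior sits between 0.2% and 1.0%, pulled toward the tighter prior.
```

As new text signals arrive, the posterior from one step becomes the prior for the next, which is what makes the framework naturally real-time.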

Teaching language models to speak Wall Street
Generic LLMs fail at forecasting because they're optimized for language understanding, not financial pattern recognition. Fine-tuned models, trained specifically on financial time-series data and market dynamics, develop domain-specific intuitions that general models lack.

The fine-tuning process transforms the model's attention mechanisms to:

  • Focus on financially relevant patterns rather than linguistic structures
  • Understand market-specific temporal relationships
  • Learn the impact of news on prices across different temporal scales
  • Weight recent information appropriately for market conditions

When the article mentions that "LLMs were showing no special understanding of sequential patterns in the data," this reflects the limitations of using general-purpose models. 

“Fine-tuned financial language models demonstrate clear sequential pattern recognition precisely because they're trained to understand temporal market dynamics from news. This is achieved by a hybrid model that combines both fine-tuned LLMs and financial time-series and cross-sectional asset pricing models,” says Nino Antulov-Fantulin.

The hybrid advantage
Rather than completely replacing traditional forecasting methods, our approach creates powerful hybrid Bayesian systems that combine the best of both worlds:

  • Language model components handle complex pattern recognition and natural language processing of market news and alternative data
  • Time-series models are specifically designed to handle dynamical financial systems 
  • Traditional cross-sectional models such as arbitrage pricing theory (APT) provide a robust, adaptive baseline for asset pricing
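The APT baseline in the last bullet is a standard linear factor model, sketched below. The factors, exposures, and premia are hypothetical placeholders (not Aisot's factor set): the expected excess return is just the asset's factor exposures dotted with the factor risk premia, which is what gives the hybrid system a transparent quantitative anchor for the text-derived signals to adjust.

```python
import numpy as np

def apt_expected_return(betas, premia):
    """Stylized APT baseline: expected excess return equals the asset's
    factor exposures (betas) dotted with the factor risk premia."""
    return float(np.dot(betas, premia))

# Hypothetical three-factor example: market, value, momentum.
betas = np.array([1.1, 0.3, -0.2])    # the asset's exposures
premia = np.array([0.05, 0.02, 0.03])  # assumed annualized factor premia
baseline = apt_expected_return(betas, premia)
```

In the hybrid setup, this baseline is what the language-model components perturb rather than replace, keeping the final forecast interpretable in factor terms.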

Conclusion
The critique of LLMs as forecasters is correct when applied to general-purpose models trained on internet-scale data, and when they are used for direct time-series forecasting tasks. 

However, at Aisot Technologies, we develop specialized, time-boxed, fine-tuned, non-instruction-following language models designed specifically for financial applications.

By addressing the core problems through specialized LLMs combined with traditional quantitative financial models, rather than abandoning the approach entirely, we unlock the superior pattern recognition capabilities of language models while maintaining the temporal discipline required for rigorous financial forecasting.

If you are a professional investor who wants to learn more about our fine-tuned and time-boxed language models, book a call.