
Smart Model Combining in Real-Time
Adaptive filtering outperforms static mixing of large language models
MoE-F introduces a dynamic approach to combining multiple LLMs: unlike traditional static mixture methods, it adapts each model's weight in real time based on that model's ongoing performance.
- 17% improvement in F1 measure for financial market movement prediction
- Employs stochastic filtering to forecast optimal model weights at each time step
- Demonstrates superior performance over both individual models and static mixing approaches
- Particularly effective for time-series prediction tasks with streaming data
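The gating idea in the bullets above can be illustrated with a simplified discrete-time sketch: maintain a probability over which expert is currently best, diffuse it slightly each step to allow regime switches, then reweight by each expert's observed likelihood. This is a hedged analogue only; MoE-F's actual filter is derived in continuous time over a hidden Markov model, and all names below are hypothetical.

```python
import math

def filter_weights(weights, losses, transition=0.05):
    """One gating update: a simplified discrete-time analogue of a
    stochastic filter over which expert is currently best.

    weights:    current probability that each expert is the best one
    losses:     per-expert loss observed at this time step
    transition: probability mass diffused uniformly (allows switching)
    """
    k = len(weights)
    # Predict step: let the hidden "best expert" state switch a little.
    predicted = [(1 - transition) * w + transition / k for w in weights]
    # Update step: reweight by each expert's likelihood exp(-loss).
    posterior = [w * math.exp(-l) for w, l in zip(predicted, losses)]
    z = sum(posterior)
    return [p / z for p in posterior]

def combine(predictions, weights):
    """Weighted mixture of per-expert probability forecasts."""
    return sum(w * p for w, p in zip(weights, predictions))

# Usage: three LLM "experts" forecast P(market moves up) on a stream.
weights = [1 / 3, 1 / 3, 1 / 3]
for preds, outcome in [([0.7, 0.4, 0.5], 1), ([0.8, 0.3, 0.6], 1)]:
    # Log loss of each expert against the realised outcome.
    losses = [-math.log(p if outcome == 1 else 1 - p) for p in preds]
    weights = filter_weights(weights, losses)
print(weights)  # the consistently better expert gains weight
```

Because the update runs online, the mixture tracks whichever model is currently accurate on the stream, which is what distinguishes this style of filtering from a fixed ensemble weighting.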
This research has significant implications for financial markets: more reliable prediction of market movements from streaming news can strengthen risk management capabilities for organizations.
Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models