
Foundation Models for Time Series: Extending Transformers Beyond Language

RISE researchers are exploring how transformer-based foundation models can generalize across domains with minimal retraining, tackling challenges in sensor data, healthcare, and climate systems.

January 1, 2025 | State of AI 2025 Report | Page 13
Time series forecasting graphs on monitor
Photograph: GPT-IMAGE-1

While transformer models revolutionized natural language processing, applying them to time series data from sensors, healthcare, and climate systems presents new challenges. RISE researchers are exploring how foundation models can generalize across domains with minimal retraining.

Promising Models

Models such as Informer (sparse attention for long sequences), TS2Vec (contrastive learning on unlabeled data), and MOMENT (masked modeling for multi-domain adaptation) show promise for classification, anomaly detection, and forecasting.
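To make the masked-modeling idea concrete, here is a minimal sketch in PyTorch: random patches of an unlabeled series are hidden behind a learned mask token and a transformer encoder is trained to reconstruct them. The architecture, names, and hyperparameters below are illustrative assumptions, not the implementation of MOMENT or of RISE's work.

```python
# Minimal sketch of masked-modeling pretraining for time series.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedTimeSeriesEncoder(nn.Module):
    """Patch a univariate series, mask random patches, reconstruct them."""
    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)            # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned [MASK] token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)              # token -> patch

    def forward(self, x, mask_ratio=0.3):
        # x: (batch, length), with length divisible by patch_len
        b, length = x.shape
        patches = x.view(b, length // self.patch_len, self.patch_len)
        tokens = self.embed(patches)
        # choose a random subset of patches to hide
        mask = torch.rand(b, tokens.size(1), device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens))
        # reconstruction loss only on the patches that were masked
        return ((recon - patches) ** 2)[mask].mean()

model = MaskedTimeSeriesEncoder()
series = torch.randn(8, 256)   # toy batch of unlabeled series
loss = model(series)
loss.backward()                # ready for an optimizer step
print(f"reconstruction loss: {loss.item():.4f}")
```

Because the pretraining objective needs no labels, the same encoder can in principle be fine-tuned later for classification, anomaly detection, or forecasting.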

Challenges

However, time series data often exhibit non-stationary dynamics, irregular sampling, and noise; these challenges require specialized architectures and pretraining strategies. RISE work focuses on improving these models’ robustness, efficiency, and interpretability for real-world applications.
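One widely used trick for the non-stationarity problem is per-instance normalization (in the spirit of RevIN): each series is rescaled by its own statistics before the model sees it, and the forecast is mapped back afterwards. The sketch below is a hedged illustration of that idea; the names and the stand-in forecaster are assumptions, not a specific RISE architecture.

```python
# Sketch of per-instance normalization for non-stationary series.
# Illustrative only; the forecaster is a stand-in for any model.
import torch
import torch.nn as nn

class InstanceNorm(nn.Module):
    """Normalize each series by its own mean/std, then invert after forecasting,
    so the model sees stationary-looking inputs even when levels drift."""
    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def normalize(self, x):
        # x: (batch, length); statistics are computed per series
        self.mean = x.mean(dim=-1, keepdim=True)
        self.std = x.std(dim=-1, keepdim=True) + self.eps
        return (x - self.mean) / self.std

    def denormalize(self, y):
        # map model outputs back to each series' original scale
        return y * self.std + self.mean

norm = InstanceNorm()
forecaster = nn.Linear(96, 24)                        # stand-in forecasting model

history = torch.cumsum(torch.randn(4, 96), dim=-1)    # toy series with drifting level
forecast = norm.denormalize(forecaster(norm.normalize(history)))
print(forecast.shape)                                 # torch.Size([4, 24])
```

Irregular sampling and noise call for further measures, such as timestamp-aware encodings or denoising objectives, which is part of what makes pretraining strategies for time series an open research question.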

Potential Impact

The potential payoff is significant: foundation models that can adapt across industries could make advanced AI analysis accessible even to organizations with limited data or technical capacity.
