Code : https://github.com/vilhess/PatchFM
A tutorial on how to build a Foundation Model for Univariate Time Series Forecasting
A concise, reproducible recipe for training a transformer-based, patch-to-patch forecasting model for univariate time series. The approach mirrors Large Language Model (LLM) practices (next-token → next-patch) while remaining lightweight and practical compared to a classic LLM.
Highlights
- Next-patch prediction objective (autoregressive, causal)
- Patch-based representation of time series (tokens → patches)
- Causal masking self-attention with RoPE (relative positions)
- RevIN (Reversible Instance Normalization)
- SwiGLU feed-forward networks
- Autoregressive multi-quantile decoding (Moirai 2.0)
- KV-cache for efficient long-horizon inference
- Flip-equivariance during inference (optional)
Quick Start
From source code

- Clone the repository and install dependencies

```bash
git clone https://github.com/vilhess/PatchFM
cd PatchFM
pip install -r requirements.txt
```
- Run inference with a pretrained model from Huggingface Hub

```python
import torch
from configs import PatchFMConfig
from model import Forecaster

# --- Instantiate model ---
config = PatchFMConfig(load_from_hub=True)
model = Forecaster(config)

# --- Inference ---
forecast_horizon = 64
seq = torch.randn(1, 1024)  # (batch, time)
pred_median, pred_quantiles = model(seq, forecast_horizon=forecast_horizon, quantiles=[0.1, 0.5, 0.9], flip_equivariance=True)  # (batch, horizon), (batch, horizon, quantiles)
```
From pip package

- Install the package from PyPI

```bash
pip install patchfm
```

- Run inference with a pretrained model from Huggingface Hub

```python
import torch
from patchfm import Forecaster, PatchFMConfig

# --- Same as above ---
config = PatchFMConfig(load_from_hub=True)
model = Forecaster(config)

forecast_horizon = 64
seq = torch.randn(1, 1024)  # (batch, time)
pred_median, pred_quantiles = model(seq, forecast_horizon=forecast_horizon, quantiles=[0.1, 0.5, 0.9], flip_equivariance=True)  # (batch, horizon), (batch, horizon, quantiles)
```
We provide an extended quick start example in notebooks/tutorial.ipynb. If you don't have suitable hardware, you can also run the extended quick start example in Google Colab.
Method (TL;DR)
- Patching: Split a context signal of length $w$ into $P_{num} = w / P_{len}$ patches of length $P_{len}$.
- Causal RevIN: Normalize input signal and denormalize outputs to the original scale without statistics leakage.
- Architecture: Input residual MLP → stacked Transformer blocks (MHA + SwiGLU FFN, pre-norm, residual) → $|\mathcal{Q}|$ output heads mapping back to patch space.
- Positional encoding: Rotary Position Embeddings (RoPE) applied to queries/keys.
- Training: Multi-quantile (pinball) loss across positions, elements, and quantiles $\mathcal{Q}$.
- Inference: Predict next patch; roll out autoregressively for long horizons.
- KV-cache: during inference, cache keys/values to avoid redundant computations.
- Flip-equivariance: during inference, flip the input sequence and average the predictions to improve robustness (at the cost of doubling the batch size).
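The patching step above is essentially a reshape. A minimal sketch (the actual pipeline also applies causal RevIN before patching; `patchify` is an illustrative name, not the repository's API):

```python
import torch

def patchify(x: torch.Tensor, patch_len: int = 32) -> torch.Tensor:
    """Split a (batch, time) series into (batch, num_patches, patch_len).

    Assumes time is divisible by patch_len, as with the fixed 1024-step context.
    """
    batch, time = x.shape
    assert time % patch_len == 0, "context length must be a multiple of the patch length"
    return x.reshape(batch, time // patch_len, patch_len)

seq = torch.randn(2, 1024)   # (batch, time): w = 1024
patches = patchify(seq)      # (2, 32, 32): P_num = 1024 / 32 patches of length P_len = 32
```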
Problem Formulation
Given context patches $x_{p_1}, \ldots, x_{p_n}$, predict the next patch $x_{p_{i+1}}$ for each position $i$ using only past patches (causality). The model outputs quantiles $\{\hat{x}_{p_{i+1}}^{(q)} : q \in \mathcal{Q}\}$ with the median ($q = 0.5$) as the point forecast.
Loss: Multi-Quantile (Pinball)
For residual $u = x - \hat{x}^{(q)}$, the pinball loss is $\rho_q(u) = \max(q\,u,\ (q - 1)\,u)$. Aggregate over positions, patch elements, and quantiles.
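The aggregation can be sketched as follows (a hedged sketch of the pinball objective, not the repository's `loss.py`; the tensor layout is an assumption):

```python
import torch

def multi_quantile_loss(pred: torch.Tensor, target: torch.Tensor, quantiles: list[float]) -> torch.Tensor:
    """Pinball loss averaged over positions, patch elements, and quantiles.

    pred:   (batch, num_patches, patch_len, num_quantiles)
    target: (batch, num_patches, patch_len)
    """
    q = torch.tensor(quantiles, device=pred.device)   # (Q,)
    u = target.unsqueeze(-1) - pred                   # residual u = x - x_hat^(q)
    loss = torch.maximum(q * u, (q - 1) * u)          # pinball: max(q*u, (q-1)*u)
    return loss.mean()                                # aggregate over everything
```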
Architecture
- Input MLP: $\mathbb{R}^{P_{len}} \to \mathbb{R}^{dim}$ residual 2-layer MLP (ReLU)
- Multi-Head Attention: causal mask, RoPE; queries/keys/values per head
- FFN: SwiGLU (SiLU-gated), pre-norm + residual
- Output heads: |Q| linear maps $\mathbb{R}^{dim} \to \mathbb{R}^{P_{len}}$ (one per quantile)
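A minimal SwiGLU feed-forward block matching the spec above, as one plausible reading (the hidden width and the absence of biases are assumptions, not taken from the repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SiLU-gated FFN: down(silu(gate(x)) * up(x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

ffn = SwiGLU(dim=2048, hidden=5461)    # hidden ~ 8*dim/3, a common SwiGLU choice
out = ffn(torch.randn(1, 32, 2048))    # (batch, patches, dim) -> same shape
```

In a pre-norm Transformer block this sits after the attention sublayer, wrapped in normalization and a residual connection.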
Model Details
- Patch size: 32
- Max context: 32 patches (1024 steps)
- Forecast horizon: 32 steps per forward pass
- Quantiles $\mathcal{Q}$: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}
- Layers: 6
- Attention heads: 64 (head dim 32)
- Model dim: 2048
- Parameters: ~300M
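A back-of-the-envelope parameter count for these settings lands near the stated ~300M (assuming a SwiGLU hidden width of roughly 8·dim/3 and ignoring norms and the input MLP; these are assumptions for the estimate, not repository values):

```python
dim, layers, quantiles, patch_len = 2048, 6, 9, 32

attn_per_layer = 4 * dim * dim            # Q, K, V, and output projections
hidden = 8 * dim // 3                     # assumed SwiGLU hidden width
ffn_per_layer = 3 * dim * hidden          # gate, up, and down projections
heads_out = quantiles * dim * patch_len   # one linear head per quantile

total = layers * (attn_per_layer + ffn_per_layer) + heads_out
print(f"~{total / 1e6:.0f}M parameters")  # close to the reported ~300M
```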
Inference
Single step: predict next patch ($P_{len}$ values)
Long-horizon: append prediction to context and repeat (optionally drop oldest patch to keep window fixed)
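The rollout loop can be sketched as follows (hedged: `model` is assumed to return one 32-step patch per call, and the sliding window keeps the context at its 1024-step maximum; the real `Forecaster` also handles quantiles and KV-caching):

```python
import torch

PATCH_LEN, MAX_CONTEXT = 32, 1024

def rollout(model, context: torch.Tensor, horizon: int) -> torch.Tensor:
    """Autoregressive long-horizon forecast: predict a patch, append, repeat."""
    preds = []
    while sum(p.shape[-1] for p in preds) < horizon:
        patch = model(context[:, -MAX_CONTEXT:])        # next PATCH_LEN values
        preds.append(patch)
        context = torch.cat([context, patch], dim=-1)   # oldest steps fall out of the window
    return torch.cat(preds, dim=-1)[:, :horizon]

dummy = lambda ctx: torch.zeros(ctx.shape[0], PATCH_LEN)   # stand-in model for illustration
forecast = rollout(dummy, torch.randn(1, 1024), horizon=100)  # (1, 100)
```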
Flip-equivariance: optionally flip the input sequence and average the predictions to improve robustness (at the cost of doubling the batch size):
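One way to read the flip trick, assuming "flip" means negating the signal and exploiting approximate sign-equivariance f(-x) ≈ -f(x) (a sketch only; the repository exposes this via the `flip_equivariance` flag, and its exact flip semantics may differ):

```python
import torch

def flip_averaged_forecast(model, context: torch.Tensor) -> torch.Tensor:
    """Average a forecast with the un-flipped forecast of the negated input.

    Both passes are batched together, which is why the batch size doubles.
    """
    both = torch.cat([context, -context], dim=0)   # (2*batch, time)
    preds = model(both)
    normal, flipped = preds.chunk(2, dim=0)
    return 0.5 * (normal - flipped)                # un-flip the second half, then average
```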
Autoregressive Inference with Quantile Forecasting (Moirai 2.0)
During autoregressive inference, the model generates forecasted values patch by patch. At each time step, the predicted patch is fed back into the model as input for the next step. This iterative process continues until the desired forecast horizon is reached.
When performing quantile forecasting, the situation becomes more complex. Instead of producing a single patch per step, the model outputs multiple patches corresponding to different quantiles (e.g., 0.1, 0.5, 0.9). Since the model expects a single patch for the next time step, it is not straightforward to feed all quantile predictions back into the model simultaneously.
A common workaround is to feed only the median prediction (the 0.5 quantile) back into the model at each step. While this approach preserves the autoregressive structure, it discards the uncertainty information captured by the other quantiles.
An alternative approach is autoregressive multi-quantile decoding, as proposed in Moirai 2.0. This method enables consistent autoregressive generation while preserving the full predictive distribution across quantiles. However, it is computationally more expensive than the median-only approach as it requires duplicating the context for each quantile.
Classic Autoregressive Inference
Autoregressive Multi-Quantile Decoding
The algorithm proceeds as follows:
1. Initialization. Start with the initial context window of observed data. Shape: (BS × L). Notation: BS = batch size, L = context length, P = patch size, Q = number of quantiles, H = forecast horizon, i = 1 is the current algorithm step.
2. First Quantile Prediction (Forward Pass). Predict the quantiles for the next patch using the current context. Output shape: (BS × P × Q).
3. Context Duplication. For each predicted quantile, create a separate context by appending the corresponding predicted patch to the current context. This increases the number of contexts by a factor of Q at each step. New context shape: (BS × Q × (L + i·P)).
4. Next Forward Pass. For each duplicated context, predict the quantiles of the next patch. Output shape: (BS × Q × P × Q).
5. Quantile Collapse. Permute and reshape the predictions to aggregate all possible quantile paths; intermediate shape: (BS × P × Q²). Compute the quantiles across the Q² predictions to obtain the final quantile estimates for the next patch; final shape: (BS × P × Q). Increment the step counter: i ← i + 1.
6. Iteration. Repeat Steps 3–5 until the forecast horizon H is reached, i.e., until the total number of predicted time steps satisfies i × P ≥ H.
This procedure preserves predictive uncertainty across quantiles while maintaining the autoregressive structure of the model. Although it is computationally more expensive than feeding only the median prediction (0.5 quantile) back into the model, it remains tractable in practice and enables consistent multi-quantile forecasting.
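The quantile-collapse step can be sketched in isolation. Shapes follow the algorithm above; `torch.quantile` stands in for whatever quantile estimator the implementation actually uses:

```python
import torch

BS, P, Q = 2, 32, 9
quantiles = torch.linspace(0.1, 0.9, Q)

# Next-forward-pass output: for each of the Q duplicated contexts, Q quantile patches.
preds = torch.randn(BS, Q, P, Q)

# Collapse: gather all Q*Q quantile paths per patch element, then re-estimate Q quantiles.
paths = preds.permute(0, 2, 1, 3).reshape(BS, P, Q * Q)   # (BS, P, Q^2)
collapsed = torch.quantile(paths, quantiles, dim=-1)      # (Q, BS, P)
collapsed = collapsed.permute(1, 2, 0)                    # (BS, P, Q)
```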
β οΈ Warning
With this strategy, the median prediction (0.5 quantile) does not necessarily match the prediction obtained by autoregressively feeding only the median patch back into the model at each step.
This discrepancy arises because the quantile collapse step aggregates predictions across all possible quantile paths. As a result, the median is computed from the combined multi-path distribution rather than from a single deterministic trajectory, which can lead to different estimates compared to the single-path (median-only) autoregressive approach.
Datasets
- UTSD (Unified Time Series Dataset) [UTSD]: seven domains (Energy, IoT, Nature, Web, Health, Transport, Environment). We work with UTSD-12G (~18M series after preprocessing).
- GIFT-Eval pretraining dataset [GIFT]: aligned with the GIFT-Eval dataset but without data leakage issues with respect to the benchmark. It contains approximately 71 univariate and 17 multivariate datasets from various domains and frequencies. After preprocessing, this yields approximately 600K univariate series.
- Chronos synthetic datasets [Chronos]: two large synthetic datasets generated with Chronos, one with TSMixup (~10 million univariate series) and one with KernelSynth (~1 million), each signal of length 1024.
- Artificial: ~1M synthetic series (sinusoidal, linear, polynomial, logarithmic) plus mixtures via TSMixup [Chronos]; Gaussian Process samples via KernelSynth (mixtures of RBF/periodic/linear kernels with swept hyperparameters).
Repository Layout
- model/training/ – main PatchFM model class
  - modules.py – core modules (residual layers, MHA, SwiGLU, RoPE, Transformer encoder, ...)
  - revin.py – causal RevIN
  - loss.py – multi-quantile (pinball) loss
  - trainer.py – PyTorch Lightning trainer class
- model/inference/ – main PatchFM model class for inference
  - modules.py – core modules with caching support
  - forecaster.py – forecasting model and rollout logic
- dataset/ – data loading and preprocessing
  - artificial.py – synthetic dataset: artificial signals + TSMixup + KernelSynth
  - utsd.py – Unified Time Series Dataset (UTSD) loading and preprocessing
  - gift.py – GIFT-Eval pretraining dataset loading and preprocessing
  - get_data.py – utility to fetch and preprocess datasets
  - chronosdata.py – loading of the synthetic datasets generated with Chronos (TSMixup and KernelSynth), with integrated download functions
  - generate_data.py – utility to generate and save the KernelSynth dataset (slow to generate)
- configs/ – model and training configurations
- notebooks/inference – how to load a trained model and generate forecasts
- training.py – training script using PyTorch Lightning