Time-Series Forecasting with TimesFM

Plato includes a reference workflow for federated time-series forecasting with Hugging Face time-series models. The initial case study predicts EV charging availability: each client owns one user's charging history and trains on sliding windows from that user's hourly sequence.

Reference files:

  • configs/TimeSeries/timesfm25_ev_charging.toml
  • configs/TimeSeries/timesfm25_ev_charging_top4_mixed.toml
  • configs/TimeSeries/timesfm25_ev_charging_top4_mixed_diloco.toml
  • configs/TimeSeries/patchtsmixer_ev_charging.toml
  • plato/datasources/ev_charging.py

Dataset preparation

The configs use the "Residential electric vehicle charging datasets from apartment buildings" dataset (doi: 10.17632/jbks2rcwyj.1).

Download dataset1_ev_charging_reports.csv and place it at the path used by the configs, for example:

runtime/data/ado1/dataset1_ev_charging_reports.csv

The dataset is not bundled with Plato. The datasource expects the raw semicolon-separated CSV and performs the preprocessing at runtime.
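Reading the raw export boils down to splitting on semicolons and, where the export uses them, converting decimal commas. A minimal sketch with the standard library — the column names and the sample row below are placeholders, not the dataset's actual schema:

```python
import csv
import io

# Illustrative row: semicolon delimiter, decimal comma in the energy column.
raw = io.StringIO(
    "user_id;session_start;session_end;energy_kwh\n"
    "AdO1-1;21.12.2018 14:00;21.12.2018 18:00;5,2\n"
)
rows = list(csv.DictReader(raw, delimiter=";"))

# Normalize the decimal comma before parsing as a float.
energy_kwh = float(rows[0]["energy_kwh"].replace(",", "."))
```

The EVCharging datasource performs the equivalent parsing (plus the hourly-grid construction described below) at runtime, so no offline preprocessing step is needed.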

EVCharging datasource behavior

Use the datasource with:

[data]
datasource = "EVCharging"
datasource_path = "runtime/data/ado1/dataset1_ev_charging_reports.csv"
garage = "AdO1"
users = ["AdO1-1", "AdO1-2", "AdO1-3", "AdO1-4"]
sampler = "all_inclusive"

The datasource:

  • filters to one garage, or uses the whole CSV when garage = "all";
  • preserves the configured users order;
  • maps client IDs to users in that order (client_id = 1 selects the first user);
  • builds a continuous hourly grid for each user's active date range;
  • marks is_charging = 1 when a charging session overlaps an hour;
  • accumulates energy_kwh over active charging hours;
  • scales energy with the training-window maximum;
  • adds cyclic time features for hour-of-day and day-of-week;
  • splits valid sliding-window starts into train / validation / test windows.
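Two of these steps can be sketched compactly. Cyclic encoding places hour-of-day and day-of-week on the unit circle so that 23:00 and 00:00 (or Sunday and Monday) end up close together, and valid window starts are simply the indices that leave room for both the context and the forecast horizon. A minimal sketch (function names are illustrative, not the datasource's API):

```python
import math

def cyclic_time_features(hour, dow):
    """Encode hour-of-day (0-23) and day-of-week (0-6) on the unit circle."""
    return (
        math.sin(2 * math.pi * hour / 24), math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * dow / 7), math.cos(2 * math.pi * dow / 7),
    )

def window_starts(n_hours, context, horizon):
    """Start indices for which `context` input hours plus `horizon`
    target hours still fit inside the user's hourly grid."""
    return list(range(n_hours - context - horizon + 1))
```

With the reference settings (context 672, horizon 128), a user with 1000 hours of history yields 201 valid window starts, which the datasource then splits into train / validation / test.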

The model input feature order is:

is_charging, energy_scaled, hour_sin, hour_cos, dow_sin, dow_cos

The reference configs forecast only the first channel, is_charging.
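In other words, the model consumes all six channels as input, but the loss and test metric are computed only on channel 0. A toy illustration of what prediction_channel_indices = [0] selects (this is a sketch, not the trainer's internals):

```python
FEATURES = ["is_charging", "energy_scaled", "hour_sin", "hour_cos", "dow_sin", "dow_cos"]

def select_target(window, channel_indices=(0,)):
    """Keep only the forecast channels from a window of timesteps.

    window: list of timesteps, each a list of 6 feature values.
    """
    return [[step[c] for c in channel_indices] for step in window]
```

So a two-step window of full feature vectors reduces to a two-step sequence of is_charging values.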

Model choices

TimesFM

Select TimesFM through the HuggingFace trainer:

[trainer]
type = "HuggingFace"
model_name = "google/timesfm-2.5-200m-transformers"
model_type = "timesfm"
context_length = 672
prediction_length = 128
num_input_channels = 6
prediction_channel_indices = [0]
freq = 0

Supported reference variants include:

  • google/timesfm-2.0-500m-pytorch
  • google/timesfm-2.5-200m-pytorch
  • google/timesfm-2.5-200m-transformers

The TimesFM reference configs use prediction_length = 128 because the selected TimesFM checkpoints expose a fixed 128-step native horizon.
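A config that requests a different horizon would fail (or silently truncate) at model level, so it can help to fail fast. A minimal guard, assuming the 128-step native horizon of the selected checkpoints — this helper is hypothetical, not part of Plato:

```python
NATIVE_HORIZON = 128  # fixed horizon of the selected TimesFM checkpoints

def check_prediction_length(requested):
    """Reject config values that do not match the checkpoint's native horizon."""
    if requested != NATIVE_HORIZON:
        raise ValueError(
            f"TimesFM checkpoint exposes a fixed {NATIVE_HORIZON}-step horizon; "
            f"got prediction_length={requested}"
        )
    return requested
```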

PatchTSMixer

PatchTSMixer is useful as a smaller scratch baseline:

[trainer]
type = "HuggingFace"
model_name = "patchtsmixer_scratch"
model_type = "patchtsmixer"
model_task = "forecasting"
context_length = 672
prediction_length = 168
num_input_channels = 6
prediction_channel_indices = [0]
mode = "mix_channel"

Unlike the TimesFM wrapper's channel-independent path, the reference PatchTSMixer config uses mode = "mix_channel" so the model can use the time features jointly.
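The distinction can be shown with a toy contrast (deliberately simplified; this is not PatchTSMixer's actual mixing layer): in a channel-independent path each output channel depends only on its own input channel, while mix_channel lets every output combine all input channels, so is_charging predictions can draw on the cyclic time features.

```python
def channel_independent(x, weights):
    """Each output channel is a function of its own input channel only."""
    return [w * v for w, v in zip(weights, x)]

def mix_channel(x, weight_matrix):
    """Each output channel may combine all input channels."""
    return [sum(w * v for w, v in zip(row, x)) for row in weight_matrix]
```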

Run the reference configs

Install the normal Plato environment first:

uv sync

Then run one of the configs from the repository root.

TimesFM 2.5 on the four AdO1 users:

uv run plato.py --config configs/TimeSeries/timesfm25_ev_charging.toml

TimesFM 2.5 on selected high-data users across garages:

uv run plato.py --config configs/TimeSeries/timesfm25_ev_charging_top4_mixed.toml

PatchTSMixer scratch baseline:

uv run plato.py --config configs/TimeSeries/patchtsmixer_ev_charging.toml

Single-client TimesFM 2.5 transformers smoke run:

uv run plato.py --config configs/TimeSeries/timesfm_transformers_bl1.toml

DiLoCo variant

Plato also includes a TimesFM 2.5 + DiLoCo config:

uv run plato.py --config configs/TimeSeries/timesfm25_ev_charging_top4_mixed_diloco.toml

The config uses:

[server]
type = "diloco"

[server.diloco]
outer_optimizer = "nesterov"
outer_learning_rate = 0.7
outer_momentum = 0.9
aggregation_weighting = "uniform"
apply_outer_optimizer_to = "parameters"

[trainer]
local_steps_per_round = 1500
preserve_optimizer_state = true

local_steps_per_round is counted in completed optimizer steps, not epochs. See the DiLoCo design contract for the mechanics behind this server type.
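As a worked example of the step/epoch distinction (the window count and batch size below are hypothetical, not taken from the reference configs):

```python
import math

def local_steps_as_epochs(local_steps, num_windows, batch_size):
    """How many passes over a client's training windows a given number of
    optimizer steps corresponds to."""
    steps_per_epoch = math.ceil(num_windows / batch_size)
    return local_steps / steps_per_epoch

# A client with 4000 training windows and batch size 16 takes
# 250 optimizer steps per epoch, so 1500 local steps = 6 epochs.
```

A client with fewer windows therefore makes more passes over its data in the same 1500 steps, which is exactly why the contract counts steps rather than epochs.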

Result logging

The time-series configs use MSE as the scalar test metric:

[results]
types = "round, elapsed_time, mse"

A lower MSE is better.
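The scalar metric is the usual mean squared error over the forecast channel; a minimal sketch:

```python
def mse(y_true, y_pred):
    """Mean squared error over a flat sequence of forecast values."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)
```

For the binary is_charging channel, an MSE of 1/3 on three steps means one of the three hours was predicted on the wrong side of charging/not-charging with full confidence.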

Troubleshooting

Dataset file not found

Make sure data.datasource_path points to the downloaded dataset1_ev_charging_reports.csv. Relative paths are resolved from the Plato repository root when using the reference commands above.

User not found

If a configured user is missing, check the garage setting. Users from multiple garages require:

garage = "all"

TimesFM class not available

TimesFM 2.5 requires a recent transformers version that exposes TimesFm2_5ModelForPrediction. If model import fails, update the environment and verify the class can be imported before launching a long run.
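A quick pre-flight check along these lines can save a failed long run; the attribute probe below is an illustration, not a Plato API:

```python
import importlib

def timesfm25_available():
    """True when the installed transformers build exposes
    TimesFm2_5ModelForPrediction (required by the TimesFM 2.5 configs)."""
    try:
        transformers = importlib.import_module("transformers")
    except ImportError:
        return False
    return hasattr(transformers, "TimesFm2_5ModelForPrediction")
```

Run it once in the target environment before launching; if it returns False, update transformers with the usual `uv sync` workflow and retry.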

Metric looks like accuracy in old scripts

For time-series runs, use configs that include:

[results]
types = "round, elapsed_time, mse"

The server and client logs label the primary metric as MSE when the active trainer testing strategy reports metric_name = "mse".