Welcome to PyNNLF¶
PyNNLF (Python for Network Net Load Forecast) is a tool to evaluate net load forecasting model performance in a reliable and reproducible way.
You can access the GitHub repository here.
Objective¶
This tool evaluates net load forecasting models reliably and reproducibly. It includes a library of public net load datasets and common forecasting models, including simple benchmark models. Users define the forecast problem and model specification in a YAML spec and run experiments through the Python package.
It also allows users to add datasets and models, and to modify hyperparameters. Researchers claiming a new or superior model can compare it with existing models on public datasets. The target audience is researchers in academia and industry focused on evaluating and optimizing net load forecasting models.
A visual illustration of the tool workflow is shown below.

Input¶
- Forecast Target: dataset & forecast horizon, defined in the YAML spec at `specs/experiment.yaml`.
- Model Specification: model & hyperparameters, defined in the YAML spec at `specs/experiment.yaml`.
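Both inputs live in the same spec file. As a rough illustration of what such a spec might look like (the field names below are assumptions for illustration, not the package's confirmed schema; consult the repository for the canonical format):

```yaml
# Hypothetical sketch of specs/experiment.yaml -- field names are
# illustrative assumptions; see the PyNNLF repository for the real schema.
dataset: ds0             # which public net load dataset to use
forecast_horizon: fh30   # forecast horizon identifier
model: m6_lr             # model identifier (e.g. linear regression)
hyperparameters: hp1     # hyperparameter set to apply
```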
Output¶
- `a1_experiment_result.csv` – contains accuracy (cross-validated test n-RMSE), stability (accuracy stddev), and training time.
- `a2_hyperparameter.csv` – lists hyperparameters used for each model.
- `a3_cross_validation_result.csv` – detailed results for each cross-validation split.
- `cv_plots/` – folder with plots including: a) observation vs forecast (time plot), b) observation vs forecast (scatter plot), c) residual time plot, and d) residual histogram.
- `cv_test/` and `cv_train/` – folders containing time series of observation, forecast, and residuals for each cross-validation split.
- `experiment_result/a1_experiment_result.csv` – optional recap across multiple experiments (generated by `recap_experiments`).
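The accuracy metric above is a normalized RMSE (n-RMSE). Normalization conventions vary (by mean, range, or installed capacity); the sketch below assumes normalization by the mean of the observations, which is one common choice, not necessarily the one PyNNLF uses:

```python
import math

def n_rmse(observed, forecast):
    """Normalized RMSE: RMSE divided by the mean of the observations.

    Normalizing by the mean is an assumption for illustration;
    PyNNLF's exact normalization may differ.
    """
    n = len(observed)
    rmse = math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, forecast)) / n)
    return rmse / (sum(observed) / n)

# Forecast off by exactly 1 everywhere: RMSE = 1, mean(observed) = 13
obs = [10.0, 12.0, 14.0, 16.0]
fc = [11.0, 13.0, 15.0, 17.0]
print(round(n_rmse(obs, fc), 4))  # → 0.0769
```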
Tool Output Naming Convention¶
Format:
[experiment_no]_[experiment_date]_[dataset]_[forecast_horizon]_[model]_[hyperparameter]
Example:
E00001_250915_ds0_fh30_m6_lr_hp1
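A name in this format can be split back into its fields. Note that the model token itself may contain underscores (as in `m6_lr` above), so a naive split on `_` is not enough; the sketch below takes the model as everything between the four fixed leading fields and the trailing hyperparameter token:

```python
def parse_experiment_name(name):
    """Parse an output name of the form
    [experiment_no]_[experiment_date]_[dataset]_[forecast_horizon]_[model]_[hyperparameter].

    The model field may contain underscores (e.g. "m6_lr"), so it is
    reassembled from the tokens between the fixed prefix and the final
    hyperparameter token. Illustrative helper, not part of PyNNLF's API.
    """
    tokens = name.split("_")
    return {
        "experiment_no": tokens[0],
        "experiment_date": tokens[1],
        "dataset": tokens[2],
        "forecast_horizon": tokens[3],
        "model": "_".join(tokens[4:-1]),
        "hyperparameter": tokens[-1],
    }

fields = parse_experiment_name("E00001_250915_ds0_fh30_m6_lr_hp1")
print(fields["model"])  # → m6_lr
```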