Spanish Grid Scenario Forecasting

Minimal TensorFlow baseline trained on engineered features; compared against a naive 24‑hour lag. All sections below are rendered from the latest JSON export produced by the notebook.

Goal

Forecast Spain’s hourly electricity demand with a small, transparent baseline that outperforms a simple 24‑hour lag.

Approach

  1. Engineer calendar and lag features (1/24/168), plus simple rolling stats.
  2. Train a compact dense network; compare it to a naive 24‑hour lag.
  3. Hold out the last 30 days to report unbiased metrics (the split and baseline are sketched below).
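
A minimal sketch of the split and the naive baseline, assuming an hourly‑indexed pandas DataFrame df with a target column total_load_actual (names are illustrative, not the notebook's exact variables):

    import pandas as pd

    # Assumed shape: df is hourly-indexed with target column 'total_load_actual'.
    holdout_hours = 30 * 24                                  # last 30 days
    train, test = df.iloc[:-holdout_hours], df.iloc[-holdout_hours:]

    # Naive baseline: predict each hour with the value observed 24 h earlier.
    naive_pred = df["total_load_actual"].shift(24).iloc[-holdout_hours:]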

Guiding questions

  • Does the model beat the 24‑hour lag on the hold‑out window?
  • How closely does the forecast follow ramps and peaks?
  • Do training curves suggest a stable, well‑sized architecture?

Data

  • Source: Kaggle ENTSO‑E energy & weather dataset: hourly load, generation, price, and weather for Spain.
  • Features: lags (1/24/168), a 24‑hour rolling mean, and calendar features (hour/day/month, sin/cos encoded); engineering sketched below.
  • Split: last 30 days reserved as hold‑out.
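
A hedged sketch of that feature engineering, continuing from the DataFrame above (column names are illustrative):

    import numpy as np

    target = df["total_load_actual"]

    # Lag features: previous hour, same hour yesterday, same hour last week.
    for lag in (1, 24, 168):
        df[f"lag_{lag}"] = target.shift(lag)

    # Trailing 24 h mean, shifted by one hour so it uses only past values.
    df["roll_mean_24"] = target.shift(1).rolling(24).mean()

    # Cyclical calendar encodings so hour 23 sits next to hour 0, etc.
    for name, values, period in [
        ("hour", df.index.hour, 24),
        ("dow", df.index.dayofweek, 7),
        ("month", df.index.month - 1, 12),
    ]:
        df[f"{name}_sin"] = np.sin(2 * np.pi * values / period)
        df[f"{name}_cos"] = np.cos(2 * np.pi * values / period)

    df = df.dropna()  # drop rows left incomplete by lagging/rolling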

Headline Numbers

  • Model MAE: 360 MW vs naive: 2,500 MW (the model beats naive by 86.0% MAE on the hold‑out).
  • MAPE: 1.24% vs naive: 9.04%
  • R²: 0.9920 vs naive: 0.4877
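
These metrics follow the standard definitions; a sketch assuming hold‑out arrays y_true, y_model, and y_naive (hypothetical names):

    import numpy as np

    def mae(y_true, y_pred):
        return np.mean(np.abs(y_true - y_pred))

    def mape(y_true, y_pred):
        return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

    def r2(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
        return 1 - ss_res / ss_tot

    # Headline comparison: relative MAE improvement over the lag-24 baseline.
    gain = 100 * (1 - mae(y_true, y_model) / mae(y_true, y_naive))
    print(f"Model beats naive by {gain:.1f}% MAE")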

Training loss

Loss tracked per epoch from the notebook run.

Insight: Validation loss fell from 0.02095 to 0.00944 across 26 epochs.

Training MAE

MAE per epoch (train/val).

Insight: Validation MAE improved from 0.1119 to 0.0787.
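
Both curve charts come straight from the Keras History object; a minimal plotting sketch, assuming history is the return value of model.fit:

    import matplotlib.pyplot as plt

    fig, (ax_loss, ax_mae) = plt.subplots(1, 2, figsize=(10, 4))
    ax_loss.plot(history.history["loss"], label="train")
    ax_loss.plot(history.history["val_loss"], label="val")
    ax_loss.set(title="Loss (MSE)", xlabel="epoch")
    ax_mae.plot(history.history["mae"], label="train")   # keys assume metrics=["mae"]
    ax_mae.plot(history.history["val_mae"], label="val")
    ax_mae.set(title="MAE", xlabel="epoch")
    for ax in (ax_loss, ax_mae):
        ax.legend()
    fig.tight_layout()
    plt.show()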

Next 24 hours: model vs naive

Hold‑out overlay comparing the model against the lag‑24 baseline.

Insight: Model beats naive by 86.0% MAE on the hold‑out.
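
A sketch of the overlay, assuming hourly pandas Series y_true, y_model, and y_naive on the hold‑out window (hypothetical names):

    import matplotlib.pyplot as plt

    # First 24 h of the hold-out window.
    plt.figure(figsize=(10, 4))
    plt.plot(y_true.iloc[:24], label="actual", color="black")
    plt.plot(y_model.iloc[:24], label="model")
    plt.plot(y_naive.iloc[:24], label="naive lag-24", linestyle="--")
    plt.ylabel("load (MW)")
    plt.legend()
    plt.show()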

Model Card

Architecture
InputLayer → Normalization → Dense (128, relu) → Dropout → Dense (64, relu) → Dense (1, linear)
Optimizer: Adam @ lr 4.6875 × 10⁻⁶ • Loss: mse
Training
epochs: 50
batch: 128
Features
count: 37
sample: total_load_forecast, price_day_ahead, forecast_solar_day_ahead, forecast_wind_onshore_day_ahead, generation_solar, generation_wind_onshore, generation_wind_offshore, generation_other_renewable, generation_nuclear, generation_biomass
Neural network schematic

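A hedged Keras sketch matching the card above. The Dropout rate, validation split, and callbacks are assumptions: the very low learning rate in the card (4.6875 × 10⁻⁶) and the 26‑epoch curves out of 50 configured epochs are consistent with ReduceLROnPlateau plus EarlyStopping, but the export does not confirm this.

    import tensorflow as tf

    def build_model(train_features):
        # Normalization layer learns per-feature mean/variance from training data.
        norm = tf.keras.layers.Normalization()
        norm.adapt(train_features)
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(train_features.shape[1],)),
            norm,
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dropout(0.2),   # rate assumed; the card omits it
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),       # linear output: load in MW
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse",
                      metrics=["mae"])
        return model

    model = build_model(X_train)
    history = model.fit(
        X_train, y_train,
        validation_split=0.2,               # assumed; not stated in the card
        epochs=50, batch_size=128,
        callbacks=[
            tf.keras.callbacks.EarlyStopping(patience=5,
                                             restore_best_weights=True),
            tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),
        ],
    )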

Takeaways

  • Training converges steadily (loss/MAE curves), suggesting the architecture is well‑sized for this task.
  • The 24‑hour overlay shows the model tracks peaks and troughs better than naive, especially around ramps.
  • Architecture is intentionally small for clarity and speed; hyperparameters are listed above the diagram.

Conclusion

A compact TF model beats the naive day‑lag baseline on the same features and hold‑out window. The JSON‑first export keeps the page reproducible: re‑run the notebook to refresh every number and chart without touching the code.