Course at IMCA, Lima, Peru, 2024
    
Stochastic and Dynamic Optimization
for Optimal Energy Storage and Allocation

    
25–29 November 2024
    

Pierre Carpentier1, Jean-Philippe Chancelier2, Michel De Lara3
École nationale des ponts et chaussées, IP Paris


Abstract:

Energy companies witness a rapidly changing landscape: the increase of intermittent, variable and spatially distributed power sources (wind, sun); the expansion of markets and actors at all spatial and temporal scales; the penetration of telecom technologies (smart grids). These new factors challenge the management of energy systems and push the practice of optimization towards more stochastic and more decentralized formulations. Data on energy demand and on meteorological conditions will become more and more abundant. How to formulate mathematical optimization problems that take into account the variability of data? What are the proper formats to feed such data into optimization problems? What optimization methods are adapted to storage management problems? How to handle multiple stocks? Are there ways to decentralize an optimal solution to local agents? These are the types of questions addressed in this course.


In a deterministic optimization problem, the values of all parameters are supposed known. What happens when this is no longer the case? And when some values are revealed during the stages of decision? We present stochastic optimization both as a framework to formulate problems under uncertainty, and as a collection of methods to solve them, each adapted to a given formulation.

Since multistage stochastic optimization problems are usually too difficult to solve directly, we then present advanced methods like the Stochastic Dual Dynamic Programming (SDDP) algorithm (used in commercial software in the energy industry), which mixes stochastic dynamic programming with cutting-plane algorithms. Depending on time availability, we shed light on decomposition methods that lead to decentralized optimization (especially adapted to micro-grid management). Finally, also depending on time availability, we give an opening on risk measures.

Modeling exercises (and optional computer sessions for self-training) tackle issues like the optimal economic dispatch of energy production units, a storage/delivery optimization problem to buffer an intermittent and variable source of energy, optimal dam management with stochastic water inflows, and optimal battery management with renewable energy inputs.


Keywords: stochastic programming, scenario tree, Progressive Hedging, optimal stochastic control, stochastic dynamic programming, Stochastic Dual Dynamic Programming, decomposition.



IMCA link: https://semana-imca.jimdosite.com/




1 Monday 25 November 2024 (9h00-12h30).
One-stage stochastic programming

Speaker: Michel De Lara [9h00-12h30]

We outline one-stage stochastic programming by means of examples.

     slides (MDL)     (section Working out static examples)

Recommended reading: [SDR09, Chap. 1]

Key insight: the output of a stochastic optimization problem is the distribution of the values taken by the random cost/payoff; it is up to the decision-maker to choose between several distributions according to her/his aversion to risk.

Exercise (1h00)

We work through a modeling exercise on group blood testing. We formulate a stochastic optimization problem that consists in minimizing the average number of tests as a function of the group size. We compare the optimal group size obtained with the one solving the robust optimization problem — which consists in minimizing the worst-case number of tests as a function of the group size. Finally, we compare the distributions of the number of tests for different group sizes.
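
The comparison made in the exercise can be sketched numerically. This is a minimal sketch, assuming Dorfman-style pooling (one test per pool, then individual retests if the pool is positive), independent samples and an assumed prevalence of 2%; these data are illustrative, not those used in class.

```python
def expected_tests_per_person(k, p):
    """Expected number of tests per person with pools of size k:
    one pooled test, plus k individual tests if the pool is positive,
    which happens with probability 1 - (1 - p)**k."""
    if k == 1:
        return 1.0  # individual testing
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def worst_case_tests_per_person(k):
    """Robust criterion: the worst case is a positive pool, costing 1 + k tests."""
    return 1.0 if k == 1 else (1.0 + k) / k

p = 0.02  # assumed prevalence
best_k = min(range(1, 51), key=lambda k: expected_tests_per_person(k, p))
print(best_k)  # expected-cost optimal group size
```

For this assumed prevalence, the expected-cost criterion selects a moderate pool size, while the worst-case criterion selects individual testing (a positive pool always costs strictly more than one test per person) — the two criteria lead to different decisions, which is the point of the comparison.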

Lecture (1h00)

Review of probability calculus: probability space, probability, random variables, law of a random variable, mathematical expectation (linearity), indicator function (law, expectation), independence of random variables, almost-sure convergence and the law of large numbers.

Recommended reading: [SDR09, Chap. 7, § 7.2], [Fel68]

Exercise (1h00)

We present, in the form of an exercise, an example of an optimization problem under uncertainty: “the newsvendor problem”. The newsvendor must decide in the morning the (deterministic) number of newspapers that she/he will order. The demand for newspapers is random (with known probability distribution) and is revealed in the afternoon. We show how the optimal number of ordered newspapers depends on the unit buying cost, the unit selling price and the demand probability distribution.
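
A minimal numerical sketch of the dependence just described, under assumed toy data (demand uniform on {10, …, 19}, unit cost 5, unit price 12, unsold papers worthless): the critical-fractile order can be cross-checked against brute-force maximization of the expected profit.

```python
def newsvendor_order(demand_pmf, cost, price):
    """Smallest order q whose demand CDF reaches the critical fractile
    (price - cost) / price (no salvage value for unsold papers)."""
    fractile = (price - cost) / price
    cdf = 0.0
    for d in sorted(demand_pmf):
        cdf += demand_pmf[d]
        if cdf >= fractile - 1e-12:
            return d
    return max(demand_pmf)

def expected_profit(q, demand_pmf, cost, price):
    """Expected profit when ordering q newspapers."""
    return sum(p * (price * min(q, d) - cost * q) for d, p in demand_pmf.items())

pmf = {d: 0.1 for d in range(10, 20)}  # assumed uniform demand on {10, ..., 19}
q_star = newsvendor_order(pmf, cost=5, price=12)
q_brute = max(pmf, key=lambda q: expected_profit(q, pmf, cost=5, price=12))
print(q_star, q_brute)  # both criteria agree
```

Raising the unit cost (or lowering the price) shrinks the critical fractile and hence the optimal order — the dependence discussed in the exercise.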

Optional computer session

The newsvendor problem (only Section 1, The newsvendor problem (integer formulation))


2 Tuesday 26 November 2024 (9h00-12h30).
From one-stage to two-stage stochastic linear programming

Speaker: Michel De Lara [9h00-10h30]

We move from one-stage to two-stage stochastic programming, in the linear case.

Key insight: when a first decision is made before the realization of a random variable, and a second decision is made after, this is mathematically framed as a nonanticipativity constraint; scenario trees directly encode such a constraint.

     slides (MDL)     (section Two-stage linear stochastic programs)

Recommended reading: [SDR09, Chap. 2, § 2.1, 2.2 and 2.3]

Lecture (1h30)

We extend the newsvendor problem into a two-stage inventory management problem. A company must order a product to satisfy a random demand, but it can place a second (costlier) order once the demand is observed. We show how to go from a one-stage optimization problem — either polyhedral deterministic without constraints, or linear — to a two-stage stochastic optimization problem. Then, we present two-stage linear stochastic programs. We sketch a day-ahead market problem, formulated as a two-stage stochastic optimization problem for determining energy reserves.
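
The two-stage structure can be sketched numerically. This is a minimal sketch under assumed toy data (three demand scenarios, first-stage unit cost 3, costlier recourse unit cost 5); since the recourse is solved in closed form, the first-stage cost is a convex piecewise-linear function whose minimum lies at a kink.

```python
def two_stage_cost(x, scenarios, c1, c2):
    """First-stage cost c1*x, plus the expected optimal recourse cost:
    a second order y = max(d - x, 0) at the higher unit cost c2,
    placed once the demand d is observed."""
    return c1 * x + sum(p * c2 * max(d - x, 0.0) for d, p in scenarios)

# assumed toy data: (demand, probability) scenarios, with c1 < c2
scenarios = [(80.0, 0.3), (100.0, 0.5), (130.0, 0.2)]
candidates = [0.0] + [d for d, _ in scenarios]  # the cost function kinks here
x_star = min(candidates, key=lambda x: two_stage_cost(x, scenarios, c1=3.0, c2=5.0))
print(x_star)
```

The optimal first order hedges between over-ordering now and paying the costlier recourse later — here it covers the median demand but not the high-demand scenario.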


Speaker: Pierre Carpentier [11h00–12h30]

In a non-stochastic setting, we review optimization and convexity from the theoretical and algorithmic points of view, as well as the notion of duality that will be used in the stochastic setting in the subsequent lectures.

     slides: background on convexity and optimization

     additional slides: background on duality (MDL, SR)

Recommended reading: [Roc70, Parts I – VI – VII]

Lecture (1h30)

Background on optimization and convexity.


3 Wednesday 27 November 2024 (9h00-12h30).
Two-stage stochastic programming, stochastic dynamic programming

Speaker: Michel De Lara [9h00-10h30]

We develop two-stage stochastic programming, especially in the case of a finite probability space (scenarios).

Key insight: two-stage stochastic programming is especially suited to resolution by decomposition methods, either temporal (L-shaped) or by scenarios (Progressive Hedging).

     slides (MDL)     (section Two-stage stochastic programs)

Recommended reading: [SDR09, Chap. 2, § 2.1, 2.2 and 2.3]

Lecture (1h30)

In addition to the first (deterministic) decision, we introduce the notion of recourse, or second (random) decision made after the realization of a random variable. We show how the nonanticipativity constraint — which bears on the first (deterministic) decision — can be taken into account by indexing solutions by scenarios and imposing that their first-stage components coincide across scenarios.

We present how to handle scenario decomposition by Lagrangian relaxation (Progressive Hedging [RW91]). We sandwich the optimal value of a two-stage stochastic program between the values obtained by a myopic decision-maker (hardening information constraints, open-loop solution) and by a clairvoyant decision-maker (relaxing information constraints, anticipative solution). We then discuss the extension from two to multiple stages.

     additional slides: two-stage stochastic programming (VL)      additional slides: Progressive Hedging (JPC)
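
A minimal sketch of Progressive Hedging on a toy two-stage problem (assumed data: minimize E[(x − d)²] over a scenario-independent scalar x, whose optimum is E[d]). Each scenario subproblem — the original cost augmented with a multiplier term and a proximal term — is solved in closed form; solutions are then averaged to restore nonanticipativity and the multipliers are updated.

```python
def progressive_hedging(scenarios, r=1.0, iters=60):
    """Progressive Hedging for min E[(x - d)^2] over a nonanticipative x,
    where scenarios is a list of (d, probability) pairs."""
    xbar = 0.0
    w = [0.0] * len(scenarios)  # nonanticipativity multipliers (mean zero)
    for _ in range(iters):
        # scenario subproblem: argmin_x (x - d)^2 + w_s*x + (r/2)*(x - xbar)^2
        xs = [(2.0 * d - w_s + r * xbar) / (2.0 + r)
              for (d, _), w_s in zip(scenarios, w)]
        # project on nonanticipative solutions: probability-weighted average
        xbar = sum(p * x for (_, p), x in zip(scenarios, xs))
        # multiplier update along the nonanticipativity residual
        w = [w_s + r * (x - xbar) for w_s, x in zip(w, xs)]
    return xbar

print(progressive_hedging([(2.0, 0.5), (6.0, 0.5)]))  # converges to E[d] = 4
```

The proximal parameter r trades off per-iteration progress against stability; on this quadratic toy the iterates contract geometrically towards E[d].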


Speaker: Michel De Lara [11h00–12h30]

Lecture (0h45)

We highlight the difficulty of moving from two-stage stochastic programming to the multistage case. We illustrate dynamical models of storage (battery models, dam models). We formulate multistage stochastic optimization problems in the formalism of optimal control, that is, with state, control and noise processes. We describe the stochastic dynamic programming algorithm: Bellman cost-to-go functions, Bellman equation. When the noise process is stagewise independent, we state that the so-called greedy one-step lookahead algorithm provides an optimal solution.

Recommended reading: [Ber00, Chap. 1]

     slides (MDL)      additional slides: stochastic dynamic programming (JPC)
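
The backward Bellman recursion just described can be sketched on a toy storage problem. This is a minimal sketch under assumed toy data (capacity 4, three stages, a price peak at stage 1, stagewise-independent inflow of 0 or 1 with equal probability); the decision is made before the inflow is observed.

```python
def sdp(horizon, capacity, prices, inflows):
    """Backward stochastic dynamic programming for a toy storage:
    state x = stock level, control u = quantity released (reward prices[t]*u),
    stagewise-independent inflow w, dynamics x' = min(capacity, x - u + w)."""
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]  # Bellman value-to-go
    policy = [[0] * (capacity + 1) for _ in range(horizon)]
    for t in reversed(range(horizon)):
        for x in range(capacity + 1):
            best_u, best_val = 0, float("-inf")
            for u in range(x + 1):  # cannot release more than the stock
                val = sum(p * (prices[t] * u + V[t + 1][min(capacity, x - u + w)])
                          for w, p in inflows)
                if val > best_val:
                    best_u, best_val = u, val
            V[t][x], policy[t][x] = best_val, best_u
    return V, policy

V, policy = sdp(horizon=3, capacity=4, prices=[1.0, 3.0, 2.0],
                inflows=[(0, 0.5), (1, 0.5)])
print(V[0][2], policy[0][2])  # value from stock 2; wait for the stage-1 price peak
```

The Bellman functions V[t] summarize the future: once they are computed, the greedy one-step lookahead (the inner loop over u) yields an optimal feedback policy.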


Exercise (0h45)

Optimal growth and reproduction of a plant.

     slides (MDL)

Optional computer session

     Sizing of reserves for the balancing on an electric market
(linear and quadratic optimization on a tree)


4 Thursday 28 November 2024 (9h00-12h30).
Stochastic Dual Dynamic Programming

Speaker: Jean-Philippe Chancelier [9h00–9h45]

EMSx: an Energy Management System numerical benchmark.

     slides (ALF)


Speaker: Jean-Philippe Chancelier [9h45–11h15]

Lecture (1h30)

SDDP (Stochastic Dual Dynamic Programming) is an algorithm that stands at the intersection of dynamic programming and the L-shaped method. Relying on hypotheses of linearity and convexity, SDDP builds lower approximations of the Bellman functions. This approach pushes back the limits of the curse of dimensionality. For this purpose,

  1. we recall the definition and properties of the Bellman operators,
  2. we use duality results to obtain so-called cuts (whose slopes represent the usage values of water in the context of hydraulic stock management),
  3. finally, we present the SDDP algorithm.
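
Step 2 can be sketched on a toy one-dimensional second stage where duality is available in closed form. This is a minimal sketch under assumed data: the second-stage LP is V(x) = E[q·max(d − x, 0)] (buy the shortfall d − x at unit cost q), whose optimal dual multiplier per scenario is q if d > x and 0 otherwise, yielding a subgradient and hence an affine cut; the maximum of the cuts is a polyhedral lower approximation of V, tight at the trial points.

```python
def stage_value_and_slope(x, scenarios, q):
    """Exact second-stage value V(x) = E[q * max(d - x, 0)] together with a
    subgradient, obtained scenario by scenario from LP duality: the optimal
    multiplier of the constraint y >= d - x is q if d > x, and 0 otherwise."""
    value = sum(p * q * max(d - x, 0.0) for d, p in scenarios)
    slope = -sum(p * q for d, p in scenarios if d > x)
    return value, slope

def cut_at(x, scenarios, q):
    """Affine cut (slope, intercept) such that V(z) >= slope*z + intercept."""
    v, g = stage_value_and_slope(x, scenarios, q)
    return g, v - g * x

def lower_approx(z, cuts):
    """Polyhedral lower approximation of V: the maximum of the cuts."""
    return max(g * z + b for g, b in cuts)

scenarios = [(3.0, 0.5), (7.0, 0.5)]  # assumed (demand, probability) pairs
q = 2.0                               # assumed recourse unit cost
cuts = [cut_at(x, scenarios, q) for x in (0.0, 5.0)]  # cuts at two trial points
```

SDDP iterates exactly this mechanism stage by stage along sampled trajectories, each new cut refining the lower approximation where the current policy visits.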

     slides (VL)

Speaker: Pierre Carpentier [11h45–12h30]

Optimal sizing and integration of electricity and heat cogeneration with thermal and electrical storage in an individual house.

     slides (FP)      additional slides (FP)


Optional computer session

     Programming the stochastic dynamic programming algorithm


5 Friday 29 November 2024 (workshop).
Decomposition methods

Speaker: Michel De Lara []
An Overview of Decomposition-Coordination Methods in Multistage Stochastic Optimization

Multistage stochastic optimization problems are, by essence, complex because their solutions are indexed both by stages (time) and by uncertainties (scenarios). Quite often, solutions are also indexed by decision units, like nodes in a graph (space), or agents in a team. Hence, their large-scale nature makes decomposition methods appealing. We present, in a unified framework, three main approaches to decompose multistage stochastic optimization problems for numerical resolution: time decomposition (and state-based resolution methods, like Stochastic Dynamic Programming, in Stochastic Optimal Control); scenario decomposition (like Progressive Hedging in Stochastic Programming); spatial decomposition (price or resource decompositions). We show how writing a dynamic programming equation on the increasing sets of histories paves the way for state reduction at specified stages; this makes it possible to develop what we call time block decomposition. We also show how price or resource decompositions quite naturally provide decomposed lower and upper bounds for minimization problems. Finally, we point to some mathematical questions raised by the mixing (blending) of different decomposition methods to tackle large-scale problems. We hint at the potential of blending for the management of new energy systems (smart grids), as will be developed in the next two talks.

     slides (MDL)
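
The price-decomposition bound mentioned above can be sketched on a static toy problem (a minimal sketch under assumed data: two units with quadratic local costs coupled by a single resource constraint Σ xᵢ = R). Each unit answers a price by solving its local subproblem in closed form; a coordinator updates the price along the coupling-constraint residual.

```python
def price_decomposition(units, R, rho=0.4, iters=200):
    """Price (dual) decomposition of  min sum_i a_i/2 (x_i - c_i)^2
    subject to  sum_i x_i = R, with units a list of (a_i, c_i) pairs.
    Each unit solves min_x a/2 (x - c)^2 + lam*x in closed form: x = c - lam/a."""
    lam = 0.0
    xs = []
    for _ in range(iters):
        xs = [c - lam / a for a, c in units]  # decentralized responses to the price
        lam += rho * (sum(xs) - R)            # coordinator: subgradient step on the dual
    return lam, xs

lam, xs = price_decomposition([(2.0, 5.0), (1.0, 3.0)], R=6.0)
print(lam, xs)  # price and decomposed allocations satisfying the coupling constraint
```

At convergence the price equalizes the marginal costs of the units, and the sum of price-decomposed local values gives a lower bound on the global optimum, as in the talk.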

Speaker: Jean-Philippe Chancelier []
Mixing Time Blocks and Price/Resource Decompositions Methods

We provide a method to decompose multistage stochastic optimization problems by time blocks. This method is based on reducing the so-called history space using a compressed “state” variable. It leads to a reduced dynamic programming equation. Then, we apply the reduction method by time blocks to two-time-scale stochastic optimization problems arising from the long-term storage management of batteries. We present a stochastic optimization model aiming at minimizing the investment and maintenance costs of batteries for a house with solar panels. For any given battery capacity, it is necessary to compute a charge/discharge strategy, as well as a maintenance schedule, to maximize the revenues provided by intraday energy arbitrage while controlling the long-term aging of the storage devices. Long-term aging is a slow process, while the charge/discharge control of a storage handles fast dynamics. For this purpose, we have designed algorithms that take this two-time-scale aspect into account in the decision-making process. We show, on instances with a huge number of time steps, how one of our algorithms can be used for the optimal sizing of a storage, taking into account the charge/discharge strategy as well as aging. Numerical results show that it is economically significant to control aging. We also compare our algorithms to Stochastic Dynamic Programming and to Stochastic Dual Dynamic Programming on small instances, and we observe that they are less computationally costly while displaying similar performance on the control of a storage.

     slides (JPC)

Speaker: Pierre Carpentier []
Mixing Dynamic Programming and Spatial Decomposition Methods

We consider a stochastic optimization problem in which different units are connected together via a network. Each unit is a (small) control system, located at a node. Each unit state evolution is affected by uncertainties and by the controls of the neighboring nodes, transmitted through the edges. Static constraints couple all units at each time. We formulate the associated global stochastic optimization problem. We propose two decomposition methods, according to whether we decouple the constraints by prices or by resources. We show that the optimal value of the global problem can be bounded above by a sum of resource-decomposed nodal values, and below by a sum of price-decomposed nodal values. We provide conditions under which these nodal values can be computed by dynamic programming. We illustrate these results with numerical studies that tackle the optimization of large urban microgrids. Finally, we introduce two different information structures for these microgrids, namely the centralized and the decentralized ones, and we analyze the lower and upper bounds under these information structures.

     slides (PC)

References

Ber00
D. P. Bertsekas.
Dynamic Programming and Optimal Control.
Athena Scientific, Belmont, Massachusetts, second edition, 2000.
Volumes 1 and 2.
CCCD15
P. Carpentier, J.-P. Chancelier, G. Cohen, and M. De Lara.
Stochastic Multi-Stage Optimization. At the Crossroads between Discrete Time Stochastic Control and Stochastic Programming.
Springer-Verlag, Berlin, 2015.
Fel68
W. Feller.
An Introduction to Probability Theory and its Applications, volume 1.
Wiley, New York, third edition, 1968.
Roc70
R. T. Rockafellar.
Convex Analysis.
Princeton University Press, Princeton, N.J., 1970.
RW91
R. T. Rockafellar and R. J.-B. Wets.
Scenarios and policy aggregation in optimization under uncertainty.
Mathematics of Operations Research, 16(1):119–147, 1991.
SDR09
A. Shapiro, D. Dentcheva, and A. Ruszczyński.
Lectures on Stochastic Programming: Modeling and Theory.
Society for Industrial and Applied Mathematics and Mathematical Programming Society, Philadelphia, USA, 2009.

Version 1.



Footnotes

1. Pierre Carpentier: pierre.carpentier@ensta-paris.fr
2. Jean-Philippe Chancelier: jean-philippe.chancelier@enpc.fr
3. Michel De Lara: michel.delara@enpc.fr