Pierre Carpentier,
Jean-Philippe Chancelier,
Michel De Lara
École nationale des ponts et chaussées, IP Paris
In a deterministic optimization problem, the values of all parameters are assumed to be known. What happens when this is no longer the case? And what happens when some values are revealed along the stages of decision? We present stochastic optimization both as a framework to formulate problems under uncertainty and as a set of methods to solve them, depending on the formulation. More precisely, we present one-stage and two-stage stochastic programming, multistage stochastic optimal control with stochastic dynamic programming, Stochastic Dual Dynamic Programming, and decomposition methods.
Modeling exercises (and optional computer sessions for self-training) tackle issues such as the optimal economic dispatch of energy production units, a storage/delivery optimization problem to buffer an intermittent and variable source of energy, optimal dam management with stochastic water inflows, and optimal battery management with renewable energy inputs.
Keywords: stochastic programming, scenario tree, Progressive Hedging, optimal stochastic control, stochastic dynamic programming, Stochastic Dual Dynamic Programming, decomposition.
We outline one-stage stochastic programming by means of examples.
slides (MDL) (section Working out static examples)
Recommended reading: [SDR09, Chap. 1]
Key insight: the output of a stochastic optimization problem is the distribution of the values taken by the random cost/payoff; it is up to the decision-maker to choose between several distributions according to her/his aversion to risk.
We work out a group blood testing modeling exercise. We formulate a stochastic optimization problem consisting in minimizing the average number of tests as a function of the group size. We compare the optimal size thus obtained with the optimal group size solving the robust optimization problem — which consists in minimizing the worst-case number of tests as a function of the group size. Finally, we compare the distributions of the number of tests for different group sizes.
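As a complement, here is a minimal computational sketch of this comparison, assuming independent infections with a hypothetical prevalence p (the value of p and the range of group sizes are illustrative, not the data of the course exercise):

# A minimal sketch of the group blood testing trade-off, assuming independent
# infections with a hypothetical prevalence p; with groups of size k, a group
# costs 1 test if negative and 1 + k tests otherwise.
p = 0.02  # hypothetical prevalence

def expected_tests_per_person(k, p):
    """Average number of tests per person with groups of size k."""
    if k == 1:
        return 1.0
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def worst_case_tests_per_person(k):
    """Worst case: every group tests positive, so each person costs 1/k + 1 tests."""
    return 1.0 if k == 1 else 1.0 / k + 1.0

sizes = range(1, 31)
k_avg = min(sizes, key=lambda k: expected_tests_per_person(k, p))
k_worst = min(sizes, key=worst_case_tests_per_person)
print("average-optimal group size:", k_avg,
      "expected tests/person:", round(expected_tests_per_person(k_avg, p), 3))
print("worst-case-optimal group size:", k_worst)

With these (illustrative) numbers, the robust problem selects individual testing, whereas the stochastic problem selects groups of several persons: the two formulations lead to different decisions.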
Refresher on probability calculus: probability space, probability, random variables, law of a random variable, mathematical expectation (linearity), indicator function (law, expectation), independence of random variables, almost-sure convergence and the law of large numbers.
Recommended reading: [SDR09, Chap. 7, § 7.2], [Fel68]
We present, in the form of an exercise, an example of an optimization problem under uncertainty: “the newsvendor problem”. The newsvendor must decide in the morning the (deterministic) number of newspapers that she/he will order. The demand for newspapers is random (with known probability distribution) and is revealed in the afternoon. We show how the optimal number of ordered newspapers depends on the unit buying cost, the unit selling price and the demand probability distribution.
The newsvendor problem (only Section 1, The newsvendor problem (integer formulation))
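As an illustration, the following minimal sketch computes the optimal (integer) order by enumeration for a hypothetical unit cost, unit price and demand distribution, and checks it against the classical critical-fractile formula (the data are not those of the exercise):

# A minimal sketch of the (integer) newsvendor problem with hypothetical data:
# unit buying cost c, unit selling price s, and a discrete demand distribution.
import numpy as np

c, s = 1.0, 4.0                                  # hypothetical unit cost and price
demand = np.arange(0, 21)                        # possible demand values 0..20
proba = np.ones_like(demand) / len(demand)       # hypothetical uniform demand law

def expected_profit(q):
    """Expected profit when ordering q newspapers: sell min(q, demand), pay c*q."""
    sales = np.minimum(q, demand)
    return float(np.sum(proba * (s * sales - c * q)))

q_star = max(range(0, 21), key=expected_profit)
print("optimal order (by enumeration):", q_star)

# Sanity check: the critical fractile says q* is the smallest q with
# P(demand <= q) >= (s - c) / s.
cdf = np.cumsum(proba)
print("optimal order (critical fractile):", int(demand[np.searchsorted(cdf, (s - c) / s)]))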
We move from one-stage to two-stage stochastic programming, in the linear case.
Key insight: when a first decision is made before the realization of a random variable, and a second decision is made after, this is mathematically framed as a nonanticipativity constraint; scenario trees directly encode such a constraint.
slides (MDL) (section Two-stage linear stochastic programs)
Recommended reading: [SDR09, Chap. 2, § 2.1, 2.2 and 2.3]
We extend the newsvendor problem into a two-stage inventory management problem. A company must order a product to satisfy a random demand, but it can place a second (costlier) order once the demand is observed. We show how to go from a one-stage optimization problem — either polyhedral deterministic without constraints, or linear — to a two-stage stochastic optimization problem. Then, we present two-stage linear stochastic programs. We provide a sketch of a day-ahead market problem, formulated as a two-stage stochastic optimization problem for determining energy reserves.
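To make the two-stage structure concrete, here is a minimal sketch of such an inventory problem written in extensive form over a handful of scenarios and solved as a single linear program with scipy; all numerical data are hypothetical:

# A minimal sketch of a two-stage inventory problem in extensive form,
# solved as one linear program; costs and scenarios are hypothetical.
import numpy as np
from scipy.optimize import linprog

c_first, c_recourse = 1.0, 3.0            # first-stage and (costlier) recourse unit costs
demands = np.array([5.0, 10.0, 15.0])     # demand in each scenario
probas = np.array([0.3, 0.4, 0.3])        # scenario probabilities
S = len(demands)

# Variables: x (first-stage order), then y_1..y_S (recourse orders, one per scenario).
cost = np.concatenate(([c_first], c_recourse * probas))

# Constraints x + y_s >= d_s for each scenario, written as -(x + y_s) <= -d_s.
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
b_ub = -demands

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x, y = res.x[0], res.x[1:]
print("first-stage order:", round(x, 2))
print("recourse orders per scenario:", np.round(y, 2))

With these illustrative numbers, the recourse order is used only in the high-demand scenario: the first decision hedges against the demand distribution, the second adapts to its realization.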
In a non-stochastic setting, we review optimization and convexity from the theoretical and algorithmic points of view, as well as the notion of duality that will be used in the stochastic setting in the subsequent lectures.
slides: background on convexity and optimization
additional slides: background on duality (MDL, SR)
Recommended reading: [Roc70, Parts I – VI – VII]
Background on optimization and convexity.
We develop two-stage stochastic programming, especially in the case of a finite probability space (scenarios).
Key insight: two-stage stochastic programming is especially suited to solution by decomposition methods, either in time (the L-shaped method) or by scenarios (Progressive Hedging).
slides (MDL) (section Two-stage stochastic programs)
Recommended reading: [SDR09, Chap. 2, § 2.1, 2.2 and 2.3]
In addition to the first (deterministic) decision, we introduce the notion of recourse, or second (random) decision, made after the realization of a random variable. We show how the nonanticipativity constraint — which bears on the first (deterministic) decision — can be taken into account by indexing solutions by scenarios and constraining the scenario copies of the first decision to be equal.
additional slides: two-stage stochastic programming (VL)
additional slides: Progressive Hedging (JPC)
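The following minimal sketch illustrates the mechanics of Progressive Hedging — scenario copies of the first-stage decision, a consensus value and nonanticipativity multipliers — on a toy problem whose scenario subproblems have closed-form solutions; all data are hypothetical:

# A minimal sketch of Progressive Hedging on a toy problem: each scenario s carries a
# quadratic cost 0.5 * a_s * (x - t_s)^2 in the first-stage decision x, so every
# scenario subproblem of the algorithm has a closed-form minimizer.
import numpy as np

a = np.array([1.0, 2.0, 4.0])       # scenario curvatures
t = np.array([10.0, 6.0, 2.0])      # scenario-wise preferred decisions
p = np.array([0.2, 0.5, 0.3])       # scenario probabilities
rho = 1.0                           # penalty parameter

x = t.copy()                        # scenario copies of the first-stage decision
w = np.zeros_like(t)                # nonanticipativity multipliers
for _ in range(500):
    xbar = p @ x                    # consensus (implementable) decision
    # Scenario subproblems: min_x 0.5*a_s*(x - t_s)^2 + w_s*x + 0.5*rho*(x - xbar)^2
    x = (a * t - w + rho * xbar) / (a + rho)
    w = w + rho * (x - p @ x)       # multiplier update toward nonanticipativity

print("Progressive Hedging consensus decision:", round(p @ x, 3))
print("direct minimizer of the expected cost :", round((p * a) @ t / (p @ a), 3))

The scenario copies are driven to a common value, which is the minimizer of the expected cost: this is the consensus mechanism that enforces the nonanticipativity constraint.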
We highlight the difficulty of moving from two-stage stochastic programming to the multistage case. We illustrate dynamical models of storage (battery models, dam models). We formulate multistage stochastic optimization problems in the formalism of optimal control, that is, with state, control and noise processes. We describe the stochastic dynamic programming algorithm: Bellman cost-to-go functions, Bellman equation. When the noise process is stagewise independent, we state that the so-called greedy one-step lookahead algorithm provides an optimal solution.
Recommended reading: [Ber00, Chap. 1]
slides (MDL)
additional slides: stochastic dynamic programming (JPC)
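As a minimal sketch (with hypothetical data), the backward Bellman recursion for a toy battery managed under a stagewise independent noise could read as follows:

# A minimal sketch of the stochastic dynamic programming (Bellman) recursion for a
# toy battery over T stages; horizon, prices and inflow law are hypothetical.
import numpy as np

T = 24                                   # number of stages
states = np.arange(0, 11)                # battery level 0..10 (each state is its own index)
controls = np.arange(-3, 4)              # charge (u > 0) or discharge (u < 0)
noises = np.array([0, 1, 2])             # random inflow values
probas = np.array([0.3, 0.5, 0.2])       # stagewise independent noise law
prices = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)

def dynamics(x, u, w):
    """Next battery level, clipped to the admissible range."""
    return int(np.clip(x + u + w, states[0], states[-1]))

def stage_cost(t, x, u, w):
    """Pay the price when charging, earn it when discharging."""
    return prices[t] * u

V = np.zeros((T + 1, len(states)))       # Bellman cost-to-go functions, with V[T, .] = 0
policy = np.zeros((T, len(states)), dtype=int)

for t in reversed(range(T)):             # backward Bellman recursion
    for x in states:
        q_values = [
            sum(p * (stage_cost(t, x, u, w) + V[t + 1, dynamics(x, u, w)])
                for w, p in zip(noises, probas))
            for u in controls
        ]
        V[t, x] = min(q_values)
        policy[t, x] = controls[int(np.argmin(q_values))]

print("optimal expected cost from an empty battery:", round(V[0, 0], 3))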
Optimal growth and reproduction of a plant.
Sizing of reserves for balancing on an electricity market
(linear and quadratic optimization on a tree)
EMSx: an Energy Management System numerical benchmark.
SDDP (Stochastic Dual Dynamic Programming) is an algorithm that stands at the intersection of dynamic programming and the L-shaped method. Relying on linearity and convexity assumptions, SDDP builds lower approximations of the Bellman functions. This approach pushes back the limits of the curse of dimensionality. For this purpose, the algorithm alternates forward passes, which sample scenarios and simulate the current policy, and backward passes, which tighten the lower approximations of the Bellman functions by adding cutting planes (cuts).
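The following minimal sketch (with hypothetical data) illustrates only the core mechanism — a convex cost-to-go function approximated from below by the maximum of affine cuts, each built from a value and a subgradient at a trial point — and not the full forward/backward SDDP algorithm:

# A minimal sketch of the cut approximation at the heart of SDDP: the "exact"
# Bellman function here is a toy expected recourse cost with hypothetical data.
import numpy as np

demands = np.array([2.0, 5.0, 8.0])
probas = np.array([0.25, 0.5, 0.25])
q = 3.0                                    # unit recourse cost

def bellman(x):
    """Toy convex cost-to-go: expected recourse cost given a stock level x."""
    return q * probas @ np.maximum(demands - x, 0.0)

def bellman_subgradient(x):
    """A subgradient of the expected recourse cost at x."""
    return -q * probas @ (demands > x)

cuts = [(0.0, 0.0)]                        # list of (slope, intercept): V(x) >= slope*x + intercept
def lower_approx(x):
    return max(slope * x + intercept for slope, intercept in cuts)

for trial_x in [0.0, 3.0, 6.0, 9.0]:       # trial points (visited in forward passes in SDDP)
    slope = bellman_subgradient(trial_x)
    intercept = bellman(trial_x) - slope * trial_x
    cuts.append((slope, intercept))        # valid cut: stays below the convex function everywhere

grid = np.linspace(0.0, 10.0, 11)
assert all(lower_approx(x) <= bellman(x) + 1e-9 for x in grid)
gap = max(bellman(x) - lower_approx(x) for x in grid)
print("largest gap between the Bellman function and its cut approximation:", round(gap, 3))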
Optimal sizing and integration of electricity and heat cogeneration with thermal and electrical storage in an individual house.
slides (FP)
additional slides (FP)
Programming the stochastic dynamic programming algorithm
Multistage stochastic optimization problems are, by essence, complex because their solutions are indexed both by stages (time) and by uncertainties (scenarios). Quite often, solutions are also indexed by decision units, like nodes in a graph (space), or agents in a team. Hence, their large-scale nature makes decomposition methods appealing. We present, in a unified framework, three main approaches and methods to decompose multistage stochastic optimization problems for numerical resolution: time decomposition (and state-based resolution methods, like Stochastic Dynamic Programming, in Stochastic Optimal Control); scenario decomposition (like Progressive Hedging in Stochastic Programming); spatial decomposition (price or resource decompositions). We show how writing a dynamic programming equation on the increasing sets of histories paves the way for state reduction at specified stages; this makes it possible to develop what we call time block decomposition. We also show how price or resource decompositions quite naturally provide decomposed lower and upper bounds for minimization problems. Finally, we point to some mathematical questions raised by the mixing (blending) of different decomposition methods to tackle large-scale problems. We hint at the potential of blending for the management of new energy systems (smart grids), as they will be developed in the next two talks.
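As a minimal sketch of these bounds, consider a deterministic toy problem with two units coupled by a single constraint (all data hypothetical): dualizing the coupling constraint with a price yields a decomposed lower bound, while fixing a feasible resource allocation yields a decomposed upper bound.

# A minimal sketch of price and resource decomposition bounds on a toy problem
# with two units coupled by the constraint u_1 + u_2 = D; data are hypothetical.
import numpy as np

a = np.array([1.0, 3.0])        # quadratic cost coefficients of the two units
D = 6.0                         # coupling constraint: u_1 + u_2 = D

def unit_subproblem(a_i, price):
    """min_u 0.5*a_i*u**2 - price*u : each unit reacts to the price separately."""
    u = price / a_i
    return 0.5 * a_i * u**2 - price * u, u

def price_lower_bound(price):
    values = [unit_subproblem(a_i, price)[0] for a_i in a]
    return sum(values) + price * D           # Lagrangian dual value: a lower bound

def resource_upper_bound(allocation):
    assert np.isclose(sum(allocation), D)     # feasible split of the common resource
    return sum(0.5 * a_i * r_i**2 for a_i, r_i in zip(a, allocation))

optimal_value = 0.5 * D**2 / np.sum(1.0 / a)  # closed-form optimum of the coupled problem
print("price decomposition lower bound   :", round(price_lower_bound(3.0), 3))
print("optimal value                     :", round(optimal_value, 3))
print("resource decomposition upper bound:", round(resource_upper_bound([3.0, 3.0]), 3))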
We provide a method to decompose multistage stochastic optimization problems by time blocks. This method is based on reducing the so-called history space using a compressed “state” variable. It leads to a reduced dynamic programming equation. Then, we apply the reduction method by time blocks to two-time-scale stochastic optimization problems arising from the long term storage management of batteries. We present a stochastic optimization model aiming at minimizing the investment and maintenance costs of batteries for a house with solar panels. For any given battery capacity, it is necessary to compute a charge/discharge strategy, as well as maintenance, to maximize the revenues provided by intraday energy arbitrage while accounting for the long term aging of the storage devices. Long term aging is a slow process, whereas charge/discharge control of a storage handles fast dynamics. For this purpose, we have designed algorithms that take this two-time-scale aspect into account in the decision-making process. We show, on instances with a large number of time steps, how one of our algorithms can be used for the optimal sizing of a storage, taking into account the charge/discharge strategy as well as aging. Numerical results show that it is economically significant to control aging. We also compare our algorithms to Stochastic Dynamic Programming and to Stochastic Dual Dynamic Programming on small instances, and we observe that they are less computationally costly while displaying similar performances on the control of a storage.
We consider a stochastic optimization problem in which different units are connected together via a network. Each unit is a (small) control system, located at a node. Each unit state evolution is affected by uncertainties and by the controls of the neighboring nodes, transmitted through the edges. Static constraints couple all units at each time step. We formulate the associated global stochastic optimization problem. We propose two decomposition methods, depending on whether we decouple the constraints by prices or by resources. We show that the optimal value of the global problem can be bounded above by a sum of resource-decomposed nodal values, and below by a sum of price-decomposed nodal values. We provide conditions under which these nodal values can be computed by dynamic programming. We illustrate these results with numerical studies that tackle the optimization of large-scale urban microgrids. Finally, we introduce two different information structures for these microgrids, namely the centralized and the decentralized ones, and we analyze the lower and upper bounds under these information structures.