Michel De Lara,
Cermics, École nationale des ponts et chaussées, IP Paris
In a deterministic optimization problem, the values of all parameters are supposed known. What happens when this is no longer the case (as with power prices or market prices and volumes)? And when some values are revealed during the stages of decision? We present stochastic optimization both as a framework to formulate problems under uncertainty and as a collection of methods to solve them numerically, depending on the formulation.
Modeling exercises (and optional computer sessions for self-training) tackle issues such as the optimal economic dispatch of energy production units, a storage/delivery optimization problem to buffer an intermittent and variable source of energy, optimal dam management with stochastic water inflows, and optimal battery management with renewable energy inputs.
Keywords: stochastic programming, scenario tree, Progressive Hedging, optimal stochastic control, stochastic dynamic programming, Stochastic Dual Dynamic Programming, risk.
We outline one-stage stochastic programming by means of examples.
slides (MDL) (section Working out static examples)
Recommended reading: [SDR09, Chap. 1]
Key insight: the output of a stochastic optimization problem is the distribution of the values taken by the random cost/payoff; it is up to the decision-maker to choose between several distributions according to her/his aversion to risk.
We work through a group blood test modeling exercise. We formulate a stochastic optimization problem that consists in minimizing the average number of tests as a function of the group size. We compare the optimal group size with the solution of the robust optimization problem, which consists in minimizing the worst-case number of tests as a function of the group size. Finally, we compare the distributions of the number of tests for different group sizes.
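The comparison above can be sketched numerically. The code below is an illustrative sketch (Dorfman-style pooled testing): one pooled test per group of size k, and individual retests if the pool is positive; the prevalence value p = 0.01 is an assumption, not part of the exercise statement.

```python
# Group blood testing: expected vs worst-case number of tests per person,
# as a function of the group size k, for a prevalence p (assumed value).

def expected_tests_per_person(k: int, p: float) -> float:
    """One pooled test per group of size k; if the pool is positive
    (probability 1 - (1 - p)**k), each member is retested."""
    if k == 1:
        return 1.0  # individual testing, no pooling
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def worst_case_tests_per_person(k: int) -> float:
    """Robust criterion: assume the pooled test is always positive."""
    return 1.0 if k == 1 else 1.0 / k + 1.0

p = 0.01
best_k = min(range(1, 51), key=lambda k: expected_tests_per_person(k, p))
print(best_k, expected_tests_per_person(best_k, p))
# Under the worst case, any k > 1 costs more than 1 test per person,
# so the robust optimum is individual testing (k = 1): the two criteria
# lead to very different optimal group sizes.
```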
Refresher on probability calculus: probability space, probability, random variables, law of a random variable, mathematical expectation (linearity), indicator function (law, expectation), independence of random variables, almost-sure convergence and law of large numbers.
Recommended reading: [SDR09, Chap. 7, § 7.2], [Fel68]
We present, in the form of an exercise, an example of an optimization problem under uncertainty: "the newsvendor problem". The newsvendor must decide in the morning the (deterministic) number of newspapers that she/he will order. The demand for newspapers is random (with known probability distribution) and is revealed in the afternoon. We show how the optimal number of ordered newspapers depends on the unit buying cost, the unit selling price and the demand probability distribution.
The newsvendor problem (only Section 1, The newsvendor problem (integer formulation))
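The dependence on cost, price and demand distribution can be checked numerically. The sketch below uses an illustrative discrete demand law (uniform on 1..10) and illustrative cost/price values, and verifies that the order quantity maximizing expected profit matches the classical critical-fractile rule.

```python
# Newsvendor with discrete demand: the profit-maximizing order q matches
# the critical fractile, the smallest q with F(q) >= (price - cost)/price.
# Cost, price and demand law below are illustrative assumptions.

cost, price = 1.0, 3.0
demand = {d: 0.1 for d in range(1, 11)}  # uniform demand on {1, ..., 10}

def expected_profit(q: int) -> float:
    """Revenue on sold copies minus purchase cost of the order q."""
    return sum(prob * (price * min(q, d) - cost * q)
               for d, prob in demand.items())

best_q = max(range(0, 11), key=expected_profit)

# Critical fractile: smallest q whose cumulative probability reaches
# the ratio (price - cost) / price (= 2/3 here).
ratio = (price - cost) / price
cum, q_fractile = 0.0, None
for d in sorted(demand):
    cum += demand[d]
    if cum >= ratio - 1e-12:
        q_fractile = d
        break

print(best_q, q_fractile)  # both equal 7 here
```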
We move from one-stage to two-stage stochastic programming, in the linear case.
We revisit the newsvendor problem in the case of a scenario tree. More generally, we show how to go from a one-stage optimization problem (polyhedral, deterministic, without constraints) to a two-stage stochastic optimization problem.
We extend the newsvendor problem into a two-stage inventory management problem. A company must order a product to satisfy a random demand, but it can place a second (costlier) order once the demand is observed. We show how to go from a one-stage optimization problem (linear, with constraints) to a two-stage stochastic optimization problem with recourse variables.
Then, we present two-stage linear stochastic programs. We provide a sketch of a day-ahead market problem, formulated as a two-stage stochastic optimization problem for determining energy reserves.
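A minimal numerical sketch of such a two-stage problem with recourse, under illustrative data (first-stage unit cost c1, costlier recourse unit cost c2, three demand scenarios): the extensive form is solved by scanning candidate first-stage orders, using the fact that, for this piecewise-linear convex cost, an optimal order lies at a scenario demand.

```python
# Two-stage ordering with recourse: order x before the demand is known
# (cost c1 per unit); once demand d_s is observed, cover the shortfall
# with a recourse order y_s = max(d_s - x, 0) at cost c2 > c1 per unit.
# All numerical values are illustrative assumptions.

c1, c2 = 1.0, 4.0
scenarios = [(2.0, 0.5), (5.0, 0.3), (8.0, 0.2)]  # (demand, probability)

def total_cost(x: float) -> float:
    """First-stage cost plus expected recourse cost."""
    return c1 * x + sum(p * c2 * max(d - x, 0.0) for d, p in scenarios)

# The cost is piecewise linear and convex in x, so an optimal first-stage
# order can be found among the scenario demands.
best_x = min((d for d, _ in scenarios), key=total_cost)
print(best_x, total_cost(best_x))  # 5.0 with expected total cost 7.4
```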
We develop the two-stage stochastic programming, especially in the case of a finite probability space (scenarios).
Key insight: two-stage stochastic programming is especially suited to resolution by a temporal decomposition method, the L-shaped method, based on duality in linear programming.
Background on optimization and convexity.
Key insight: when a first decision is made before the realization of a random variable, and a second decision is made after, this is mathematically framed as a nonanticipativity constraint; scenario trees directly encode such a constraint.
In addition to the first (deterministic) decision, we introduce the notion of recourse, that is, a second (random) decision made after the realization of a random variable.
We show how the nonanticipativity constraint, which bears on the first (deterministic) decision, can be taken into account by indexing solutions by the scenarios.
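The scenario-indexed viewpoint underlies the Progressive Hedging algorithm (listed in the keywords). Below is a minimal sketch on a toy two-stage ordering problem: each scenario s keeps its own copy x_s of the first-stage decision, and nonanticipativity (all copies equal) is enforced iteratively through multipliers and a quadratic penalty. All numerical data, the penalty rho and the grid-search subproblem solver are illustrative assumptions.

```python
# Progressive Hedging sketch: scenario copies x_s of the first-stage
# decision are driven to consensus xbar, which restores the
# nonanticipativity constraint at convergence. Data are illustrative.

c1, c2, rho = 1.0, 4.0, 2.0
scenarios = [(2.0, 0.5), (5.0, 0.3), (8.0, 0.2)]  # (demand, probability)
grid = [0.01 * i for i in range(1001)]            # candidate x in [0, 10]

def scenario_cost(x: float, d: float) -> float:
    """First-stage cost plus recourse cost in scenario with demand d."""
    return c1 * x + c2 * max(d - x, 0.0)

w = [0.0 for _ in scenarios]                 # multipliers, one per scenario
xbar = sum(p * d for d, p in scenarios)      # initial consensus guess

for _ in range(200):
    xs = []
    for i, (d, p) in enumerate(scenarios):
        # Penalized scenario subproblem, solved by simple grid search.
        xs.append(min(grid, key=lambda x: scenario_cost(x, d)
                      + w[i] * x + 0.5 * rho * (x - xbar) ** 2))
    xbar = sum(p * x for (d, p), x in zip(scenarios, xs))
    for i in range(len(scenarios)):
        w[i] += rho * (xs[i] - xbar)

print(round(xbar, 2))  # approaches the nonanticipative optimum x* = 5
```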
Recommended reading: § 2.1, 2.2 and 2.3 of [SDR09, Chap. 2]
Slides (MDL) Slides (VL) Slides (JPC)
We highlight the difficulty of moving from two-stage stochastic programming to the multistage case. We formulate multistage stochastic optimization problems in the formalism of so-called histories, and we introduce the Bellman operators. Dynamic programming turns the original problem into a sequence of interconnected single-step optimization problems, yielding Bellman functions. Then, the so-called greedy one-step lookahead algorithm provides an optimal solution.
We illustrate dynamical models of storage (battery models, dam models). We formulate multistage stochastic optimization problems in the formalism of optimal control, that is, with state, controls and noise processes. We describe the stochastic dynamic programming algorithm: Bellman cost-to-go functions, Bellman equation. When the noise process is white (that is, stagewise independent), we state that the so-called greedy one-step lookahead algorithm provides an optimal solution.
Finally, we discuss the curse of dimensionality, and we present the quadratic case and modeling issues (delineating state and white noise).
Key insight: the existence of a state that contains sufficient information to make an optimal decision at each stage.
Recommended reading: [Ber00, Chap. 1]
Slides (MDL) Slides (JPC) Slides (VL)
Optimal growth and reproduction of a plant.
Stochastic optimal control with quadratic costs and linear dynamics, without constraints on the control.
Programming the stochastic dynamic programming algorithm
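A minimal sketch of the backward Bellman recursion on a toy dam, in the optimal control formalism of the session (state = stock, control = turbined volume, white noise = stagewise independent inflow). Capacity, prices and the inflow law are illustrative assumptions.

```python
# Stochastic dynamic programming for a toy dam: compute the Bellman
# cost-to-go functions V[t] by backward induction, together with an
# optimal feedback policy. All numerical data are illustrative.

CAP = 5                        # dam capacity (discrete stocks 0..CAP)
prices = [1.0, 2.0, 3.0]       # selling price at stages t = 0, 1, 2
inflows = [(0, 1/3), (1, 1/3), (2, 1/3)]  # (inflow, probability), white noise

T = len(prices)
# V[t][x] = maximal expected future gain from stage t with stock x.
V = [[0.0] * (CAP + 1) for _ in range(T + 1)]   # V[T] = 0 (final value)
policy = [[0] * (CAP + 1) for _ in range(T)]

for t in reversed(range(T)):
    for x in range(CAP + 1):
        best_u, best_val = 0, float("-inf")
        for u in range(x + 1):  # turbine at most the current stock
            # Immediate revenue plus expected cost-to-go;
            # water above capacity overflows and is lost.
            val = prices[t] * u + sum(
                p * V[t + 1][min(x - u + w, CAP)] for w, p in inflows)
            if val > best_val:
                best_u, best_val = u, val
        V[t][x] = best_val
        policy[t][x] = best_u

print(V[0])  # Bellman values at the initial stage
```

The greedy one-step lookahead in the inner loop is exactly what makes the policy optimal here, since the inflow noise is stagewise independent.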
SDDP (Stochastic Dual Dynamic Programming) is an algorithm that stands at the intersection of dynamic programming and the L-shaped method. Relying on linearity and convexity assumptions, SDDP builds up lower approximations of the Bellman functions. This approach makes it possible to push back the curse of dimensionality.
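The cut mechanism at the heart of SDDP can be illustrated in isolation: supporting hyperplanes (Benders-like cuts) computed at trial points build a polyhedral lower approximation of a convex function. This is a toy illustration of the cut machinery only, not the full SDDP algorithm; the function f and the trial points are assumptions.

```python
# Lower polyhedral approximation of a convex function by cuts, the
# building block that SDDP applies to Bellman functions.

def f(x: float) -> float:          # stand-in for a convex Bellman function
    return (x - 2.0) ** 2

def subgradient(x: float) -> float:
    return 2.0 * (x - 2.0)

cuts = []  # list of (slope, intercept): cut(x) = slope * x + intercept

def lower_approx(x: float) -> float:
    """Pointwise maximum of the cuts collected so far."""
    return max((a * x + b for a, b in cuts), default=float("-inf"))

for trial in [0.0, 4.0, 1.0, 3.0, 2.0]:
    a = subgradient(trial)
    b = f(trial) - a * trial       # cut is tangent to f at the trial point
    cuts.append((a, b))

# The polyhedral model is a lower bound everywhere and exact at trial points.
grid = [0.1 * i for i in range(41)]
print(lower_approx(2.0), f(2.0))  # 0.0 0.0
```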
Beyond mathematical expectation and the worst case, seen as forms of aggregation of uncertainties into an objective function, we present risk measures.
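The three aggregations can be compared on a toy discrete distribution of random costs. The sketch below computes expectation, worst case, and Conditional Value-at-Risk (CVaR), here taken as the average of the worst (1 - alpha) probability mass of losses; the loss values and alpha = 0.75 are illustrative assumptions.

```python
# Aggregating a random cost into a single number: expectation, worst case,
# and CVaR at level alpha. Discrete toy distribution (assumed values).

losses = [1.0, 2.0, 3.0, 10.0]   # equiprobable random cost outcomes
probs = [0.25] * 4
alpha = 0.75

expectation = sum(p * x for x, p in zip(losses, probs))
worst_case = max(losses)

def cvar(losses, probs, alpha):
    """Average of the worst (1 - alpha) probability mass of the losses."""
    tail = 1.0 - alpha
    acc, total = 0.0, 0.0
    for x, p in sorted(zip(losses, probs), reverse=True):
        take = min(p, tail - total)   # take mass from the worst outcomes
        acc += take * x
        total += take
        if total >= tail - 1e-12:
            break
    return acc / tail

print(expectation, cvar(losses, probs, alpha), worst_case)  # 4.0 10.0 10.0
```

As alpha varies, CVaR interpolates between the expectation (alpha = 0) and the worst case (alpha close to 1), which is one way a decision-maker can express her/his aversion to risk.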
Examples: Sun'R, optimal purchase of crude oil.