This practical work presents the theory of the SDDP algorithm and applies it to a
simple dam management problem.

The Stochastic Dual Dynamic Programming (SDDP) algorithm is closely related to the Dynamic
Programming method. The main difference is that, instead of computing the Bellman function
at every point, the algorithm constructs a polyhedral approximation of the Bellman function. More
precisely, the algorithm constructs a piecewise linear function that is a lower approximation of the
exact Bellman function.

This document is organized in three parts:

In §1, we present a few results about piecewise linear functions. We also recall the
cutting plane algorithm, known as Kelley's algorithm. In §2, we present the structure of the
SDDP algorithm, first in a deterministic and then in a stochastic setting. In §3 and §4, we
describe a dam management problem and solve it with this algorithm.

1.1 Piecewise linear functions

Recall that the supremum of a finite number of hyperplanes is a convex piecewise linear function.
We consider the function

f(x) = max_{i∈{1,…,k}} ( ⟨a_{i},x⟩ + b_{i} ), (1)

where b_{i}∈ ℝ and a_{i}∈ ℝ^{n} for i ∈ {1,…,k}.
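To fix ideas, here is a minimal sketch of the evaluation of such a function as a maximum of affine functions (in Python rather than the Scilab used in this practical, and with illustrative coefficients):

```python
# Evaluating a convex piecewise linear function given as the maximum of
# k affine functions x -> <a_i, x> + b_i (illustrative coefficients).

def pwl_max(a, b, x):
    """Value of f(x) = max_i (<a_i, x> + b_i)."""
    return max(sum(ai * xi for ai, xi in zip(row, x)) + bi
               for row, bi in zip(a, b))

# three affine functions on R^2
a = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, -1.0, 2.0]

print(pwl_max(a, b, [1.0, 1.0]))   # -> 1.0
print(pwl_max(a, b, [0.0, 0.0]))   # -> 2.0
```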

Question 1 Find a new representation of f as a linear program. That is, f(x) is the value of a minimization problem with linear cost and linear equality or inequality constraints.

1.2 The cutting plane method: Kelley algorithm

The Kelley algorithm approximates the minimum of a convex function f on a
compact set C using only an oracle able to compute, at each point, the value of f and a
derivative (or a subgradient). Recall that a convex function always lies above its tangents.
The cutting plane method consists in constructing a lower approximation of the objective
function f as the maximum of tangents. This lower approximation is a piecewise linear
function, and it is improved throughout the algorithm.

Let us describe the algorithm more precisely. We start at a given point x_{0}. The first lower
approximation is the tangent at this point,

f^{(0)}(x) = f(x_{0}) + ⟨f′(x_{0}), x − x_{0}⟩.

Then, at each step, we compute the equation of the tangent of f at a minimum
x_{k} ∈ argmin_{x∈C} f^{(k)}(x) of the approximate function f^{(k)},

x ↦ f(x_{k}) + ⟨f′(x_{k}), x − x_{k}⟩. (2)

The new lower approximation f^{(k+1)} is given by the maximum of the old one and this
tangent,

f^{(k+1)}(x) = max { f^{(k)}(x), f(x_{k}) + ⟨f′(x_{k}), x − x_{k}⟩ }. (3)
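The loop above can be sketched as follows (a Python illustration, not the Kelley.sce implementation used below; the test function, the interval and the grid search used to minimize the piecewise linear model are illustrative choices):

```python
# Kelley's cutting-plane method on a 1-D convex function. The model
# minimum is found by a simple grid search over [lo, hi]; the test
# function f and the interval are illustrative.

def kelley(f, df, lo, hi, iters=20):
    cuts = []                                  # tangents (slope, intercept)
    x = lo                                     # starting point x_0
    grid = [lo + (hi - lo) * j / 1000 for j in range(1001)]
    for _ in range(iters):
        s = df(x)                              # tangent of f at x
        cuts.append((s, f(x) - s * x))
        # minimize the model max_i (s_i*y + c_i), the lower approximation
        x = min(grid, key=lambda y: max(s * y + c for s, c in cuts))
    lb = max(s * x + c for s, c in cuts)       # model value = lower bound
    return x, lb

f = lambda y: (y - 1.0) ** 2
df = lambda y: 2.0 * (y - 1.0)
xk, lb = kelley(f, df, -2.0, 3.0)
print(xk, lb)   # xk close to the minimizer 1.0, lb a lower bound on 0.0
```

Note that every tangent lies below f, so the model minimum lb is always a valid lower bound on the true minimum.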

Question 2

Run the algorithm in the file Kelley.sce, which plots the initial function and the cutting planes found on the same graph. Evaluate graphically the difference between the minimum found and the real one.

Test the algorithm on different convex functions and for an increasing number of iterations. Comment.

2 The Structure of SDDP

In this section we present the Stochastic Dual Dynamic Programming (SDDP) algorithm.
More precisely, we present the so-called DOASA implementation. For clarity,
we first present it in the deterministic case, and then move to the stochastic
case.

2.1 The Deterministic case

The SDDP algorithm in a deterministic setting is essentially Kelley's algorithm applied to a
multistage problem instead of a static one.

2.1.1 Problem statement

We consider a controlled dynamic system in discrete time that follows the equation

x_{t+1} = f_{t}(x_{t}, u_{t}), t = 0,…,T − 1,
where x_{t}∈ 𝕏 is the state at time t of the system and u_{t}∈ 𝕌 the control applied at time t; f_{t} is the
dynamics of the system. We assume that the sets 𝕌 and 𝕏 are compact subsets of a finite-dimensional
vector space. We also assume that the functions f_{t} are linear. At each time t there is
a convex cost L_{t}(x_{t},u_{t}) that depends on the current state and the chosen control.
Moreover there is a final cost K(x_{T}) that depends on the final state of the system.
A policy is a sequence of functions π = (π_{1},…,π_{T−1}) giving a control u for any state
x.

We consider the following problem

min_{π} Σ_{t=0}^{T−1} L_{t}(x_{t},u_{t}) + K(x_{T}) s.t. x_{t+1} = f_{t}(x_{t},u_{t}), u_{t} = π_{t}(x_{t}). (4)

This problem can be solved by dynamic programming.

Question 3 Write the Dynamic Programming equation associated with this problem. How do we compute the Bellman value function? How do we compute the optimal policy?

2.1.2 Bellman operator

In order to understand the SDDP algorithm it is useful to introduce the Bellman operator 𝒯_{t} of
the problem.

For any time t, and any function A mapping the set of states into ℝ, we define:

(𝒯_{t}A)(x) := min_{u∈𝕌} L_{t}(x,u) + A(f_{t}(x,u)). (5)
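A tabulated illustration of this operator (in Python, with illustrative dynamics, cost and discretization; the dynamics are clipped so that the next state stays in 𝕏):

```python
# The Bellman operator (T_t A)(x) = min_u { L_t(x,u) + A(f_t(x,u)) }
# tabulated on a small grid (illustrative dynamics, cost and sets).

def bellman_op(A, states, controls, f_t, L_t):
    """Return (T_t A) as a dict state -> value."""
    return {x: min(L_t(x, u) + A(f_t(x, u)) for u in controls)
            for x in states}

states = [0.0, 1.0, 2.0]                       # X
controls = [-1.0, 0.0, 1.0]                    # U
f_t = lambda x, u: min(max(x + u, 0.0), 2.0)   # affine dynamics, clipped to X
L_t = lambda x, u: u * u                       # convex instantaneous cost
A = lambda x: abs(x - 2.0)                     # convex next-stage function

print(bellman_op(A, states, controls, f_t, L_t))   # -> {0.0: 2.0, 1.0: 1.0, 2.0: 0.0}
```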

Question 4 Write the Dynamic Programming equation with the Bellman operator.

Let us study a few properties of the Bellman operator.

Monotonicity For any pair of functions (V, Ṽ), we have

V ≤ Ṽ ⟹ 𝒯_{t}(V) ≤ 𝒯_{t}(Ṽ). (6)

Convexity For any function V, if L_{t} is jointly convex in (x,u), V is convex, and f_{t} is affine,
then

𝒯_{t}(V) is convex. (7)

Linearity For any piecewise linear function V, if L_{t} is also piecewise linear, and f_{t} affine,
then

𝒯_{t}(V) is piecewise linear. (8)

Question 5 Prove these three results.

2.1.3 Duality property

Let J be a function mapping 𝕏 × 𝕌 into ℝ, jointly convex in (x,u). Consider the function φ defined as

φ(x) := min_{u∈𝕌} J(x,u).

Then the interpretation of the multiplier as a marginal cost implies that a subgradient λ ∈ ∂φ(x_{0}) of φ at x_{0} is the dual multiplier of the constraint x_{0}− x = 0 in
the following problem

min_{x,u} J(x,u) s.t. x_{0} − x = 0.

In particular it means that, as φ is convex,

φ(x) ≥ φ(x_{0}) + ⟨λ, x − x_{0}⟩, ∀x ∈ 𝕏. (9)
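A small numeric check of this subgradient inequality (a Python sketch; the cost J and the grid used for the inner minimization are illustrative choices):

```python
# Numerical check of the subgradient inequality (9) with the
# illustrative cost J(x, u) = (x - u)**2 + u**2: then
# phi(x) = min_u J(x, u) = x**2 / 2, and lambda = x0 is a subgradient at x0.

def phi(x, us):
    """Inner minimization phi(x) = min_u J(x, u) on a grid of u."""
    return min((x - u) ** 2 + u ** 2 for u in us)

us = [i / 1000 - 2.0 for i in range(4001)]     # grid on U = [-2, 2]
x0, lam = 1.0, 1.0                             # subgradient of phi at x0

ok = all(phi(x, us) >= phi(x0, us) + lam * (x - x0) - 1e-6
         for x in [-1.5, 0.0, 0.5, 2.0])
print(ok)   # -> True: phi stays above its tangent at x0
```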

2.1.4 SDDP algorithm (deterministic case)

As in Kelley's algorithm, the SDDP algorithm recursively constructs a lower approximation
V̂_{t}^{(k)} of each Bellman function V_{t} as the supremum of a number of affine functions. Each of these
affine functions is called a cut.

Stage k of the SDDP algorithm goes as follows.

Suppose that we have a collection of T lower approximations of the Bellman functions, denoted
V̂_{t}^{(k)}, each of which is a maximum of affine functions. In order to improve our approximation, we follow an
optimal trajectory (x_{t}^{(k)})_{t} of the approximated problem and add, for each Bellman function, a cut computed along this optimal
trajectory.

Thus, we begin with a loop forward in time, by setting t = 0 and x_{0}^{(k)} = x_{0}, and
solve

min_{x,u} L_{t}(x,u) + V̂_{t+1}^{(k)}(f_{t}(x,u)) s.t. x = x_{t}^{(k)}.

We call β_{t}^{(k+1)} the value of the problem, λ_{t}^{(k+1)} a multiplier of the constraint x = x_{t}^{(k)}, and u_{t}^{(k)}
an optimal solution. This can also be written as

β_{t}^{(k+1)} = 𝒯_{t}(V̂_{t+1}^{(k)})(x_{t}^{(k)}).

Question 6 Show that, by (9) and by monotonicity of 𝒯_{t}, we obtain

V_{t}(x) ≥ 𝒯_{t}(V̂_{t+1}^{(k)})(x) ≥ β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩, ∀x ∈ 𝕏. (10)

Thus we have, by (9) and by monotonicity of 𝒯_{t},

V_{t}(x) ≥ β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩, ∀x ∈ 𝕏. (11)

This ensures that the function x ↦ β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩ is indeed a cut, i.e. an affine function
below V_{t}. Consequently we update our approximation of V_{t} by defining

V̂_{t}^{(k+1)}(x) := max { V̂_{t}^{(k)}(x), β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩ }.

Furthermore we can see that V̂_{t}^{(k+1)} is convex and lower than V_{t}. We go to the next time step by
setting

x_{t+1}^{(k)} = f_{t}(x_{t}^{(k)}, u_{t}^{(k)}).

Upon reaching time t = T we have completed iteration k of the algorithm.
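The whole deterministic iteration can be sketched as follows (a Python toy example, not the practical's Scilab code: the problem data are illustrative, and the cut slope is obtained by a finite difference on the one-step value instead of an LP multiplier, which is a crude but convenient stand-in for λ_{t}^{(k+1)}):

```python
# A deterministic SDDP sketch on a 1-D toy problem (illustrative data).
# The cut slope is a finite difference on the one-step value, standing
# in for the LP multiplier lambda of the constraint x = x_t^(k).

T, x0 = 3, 0.5
controls = [i / 50 - 1.0 for i in range(101)]   # U = [-1, 1], discretized
clip = lambda x: min(max(x, 0.0), 1.0)          # keep the state in X = [0, 1]
f_t = lambda x, u: clip(x + 0.5 * u)            # affine dynamics (clipped)
L_t = lambda x, u: abs(u) + x                   # convex stage cost
K = lambda x: 2.0 * (1.0 - x)                   # affine final cost

cuts = [[(0.0, 0.0)] for _ in range(T + 1)]     # V_t^(0) = 0 for all t
cuts[T] = [(-2.0, 2.0)]                         # K itself, as a single cut
V = lambda t, x: max(b + lam * x for lam, b in cuts[t])

def one_step(t, x):
    """Value and minimizer of min_u L_t(x, u) + V_{t+1}(f_t(x, u))."""
    u = min(controls, key=lambda u: L_t(x, u) + V(t + 1, f_t(x, u)))
    return L_t(x, u) + V(t + 1, f_t(x, u)), u

for k in range(30):                             # SDDP iterations
    x = x0
    for t in range(T):                          # forward loop, adding cuts
        beta, u = one_step(t, x)                # value of the one-step problem
        eps = 1e-3                              # finite-difference step
        lam = (one_step(t, clip(x + eps))[0] - beta) / eps
        cuts[t].append((lam, beta - lam * x))   # cut: beta + lam * (y - x)
        x = f_t(x, u)                           # move along the trajectory

print(round(V(0, x0), 6))   # lower bound; here it reaches the value 2.5
```

On this toy problem the optimal policy is to do nothing (u ≡ 0), with value 2.5, and the lower bound V̂_{0}(x_{0}) reaches it after a few iterations.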

2.1.5 Initialization and stopping rule

In order to initialize the algorithm, it seems that we need a lower bound on every value function.
According to our assumptions this bound always exists, but it is not necessary in order to implement
the algorithm. Indeed we can choose V̂_{t}^{(0)} = 0 in order to compute the trajectories, and simply set
V̂_{t}^{(1)} equal to the first cut, which means that we "forget" V̂_{t}^{(0)} in the maximum that determines
V̂_{t}^{(1)}.

Note that at any step k of the algorithm we have a (non-optimal) solution (u_{t}^{(k)})_{t} of Problem
(4), and we can estimate the quality of this solution. Indeed V̂_{0}^{(k)}(x_{0}) is a lower bound on the
optimal value of Problem (4). Moreover the simulated cost of this solution, given
by

Σ_{t=0}^{T−1} L_{t}(x_{t}^{(k)},u_{t}^{(k)}) + K(x_{T}^{(k)}),

is an upper bound on the value of Problem (4). A reasonable stopping rule for the algorithm is to
check that the (relative) difference between the upper and the lower bound is smaller than some constant,
as they are both asymptotically exact bounds, i.e. both converge toward the value of Problem
(4).

2.2 The Stochastic case

Now we introduce some random variables in our problem. This complicates the algorithm in
several ways:

we need some probabilistic assumptions;

for each step k we need to do a forward phase that yields a trajectory (x_{t}^{(k)})_{t}, and a
backward phase that gives a new cut;

we cannot compute an exact upper bound for the problem’s value.

2.2.1 Problem statement

As in the deterministic case, we consider a controlled stochastic dynamic system in discrete time
that follows the equation

x_{t+1} = f_{t}(x_{t}, u_{t}, W_{t}),

where x_{t}∈ 𝕏 is the state at time t of the system, u_{t} the control applied at time t, and W_{t} an
exogenous discrete noise; f_{t} is the dynamics of the system.

We assume that the stochastic process (W_{t})_{t=1,…,T−1} is independent in time. Moreover we
assume that at any time t we know the whole past of the noise process up to, and including, time t.
The independence assumption ensures that we can restrict ourselves to controls u_{t} measurable with
respect to (x_{t},W_{t}). This is the so-called hazard-decision information structure, where we know at
time t the realization of the noise before choosing our control.

At each time t there is a cost L_{t}(x_{t},u_{t},W_{t}), jointly convex in (x_{t},u_{t}), that depends on the
current state and the chosen control; and there is a final cost K(x_{T}) that depends on the final
state of the system. A policy is a sequence of measurable functions π = (π_{1},…,π_{T−1}) giving a control u for any
state x and any realization w of the noise.

We consider the following problem

min_{π} 𝔼[ Σ_{t=0}^{T−1} L_{t}(x_{t},u_{t},W_{t}) + K(x_{T}) ] s.t. x_{t+1} = f_{t}(x_{t},u_{t},W_{t}), u_{t} = π_{t}(x_{t},W_{t}). (12)

2.2.2 Bellman operator

As in the deterministic case, we introduce the stochastic Bellman operator 𝒯_{t} of the problem.
For any time t, and any function A mapping the set of states into ℝ, we first define, on the set of states and noises 𝕏 × 𝕎:

(𝒯̂_{t}A)(x,w) := min_{u∈𝕌} L_{t}(x,u,w) + A(f_{t}(x,u,w)).

The stochastic Bellman operator 𝒯_{t} is then given by

(𝒯_{t}A)(x) := 𝔼[ (𝒯̂_{t}A)(x,W_{t}) ].

Question 7 Write the Dynamic Programming equation used to solve this problem in two ways: directly, and with the help of the Bellman operator.

Question 8 Show that the Bellman operator has the same properties as in the deterministic case.

2.2.3 Duality theory

Suppose that at iteration k and time step t + 1 we have a lower approximation V̂_{t+1}^{(k+1)} of V_{t+1}.
We consider, for any w, the problem

min_{x,u} L_{t}(x,u,w) + V̂_{t+1}^{(k+1)}(f_{t}(x,u,w)) s.t. x = x_{t}^{(k)}, (13)

and call β_{t}^{(k+1)}(w) the optimal value, and λ_{t}^{(k+1)}(w) an optimal multiplier of the constraint. As in
the deterministic case we have

min_{u} L_{t}(x,u,w) + V̂_{t+1}^{(k+1)}(f_{t}(x,u,w)) ≥ β_{t}^{(k+1)}(w) + ⟨λ_{t}^{(k+1)}(w), x − x_{t}^{(k)}⟩, ∀x ∈ 𝕏. (14)

Question 9 Show that we have

𝔼[ min_{u} L_{t}(x,u,W_{t}) + V̂_{t+1}^{(k+1)}(f_{t}(x,u,W_{t})) ] ≥ 𝔼[β_{t}^{(k+1)}(W_{t})] + ⟨𝔼[λ_{t}^{(k+1)}(W_{t})], x − x_{t}^{(k)}⟩, ∀x ∈ 𝕏. (15)

By defining

β_{t}^{(k+1)} := 𝔼[β_{t}^{(k+1)}(W_{t})] and λ_{t}^{(k+1)} := 𝔼[λ_{t}^{(k+1)}(W_{t})],

show that we have

V_{t}(x) ≥ β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩, ∀x ∈ 𝕏. (16)

2.2.4 SDDP algorithm (stochastic case)

At the beginning of step k we suppose that we have, for each time step t, an approximation V̂_{t}^{(k)} of
V_{t} satisfying:

V̂_{t}^{(k)} ≤ V_{t};

V̂_{T}^{(k)} = K;

V̂_{t}^{(k)} is convex.

Forward path
We randomly select a scenario (w_{0},…,w_{T−1}) ∈ 𝕎^{T}. By a forward loop in time, we recursively
define a trajectory (x_{t}^{(k)})_{t=0,…,T} by x_{t+1}^{(k)} = f_{t}(x_{t}^{(k)},u_{t}^{(k)},w_{t}), where u_{t}^{(k)} is an optimal solution
of

min_{u} L_{t}(x_{t}^{(k)},u,w_{t}) + V̂_{t+1}^{(k)}(f_{t}(x_{t}^{(k)},u,w_{t})). (17)

This trajectory is the trajectory given by the optimal policy of Problem (12) where V_{t} is replaced by
its current approximation V̂_{t}^{(k)}.

Backward path
We now improve the approximation of V_{t} by adding a cut at x_{t}^{(k)}. We proceed backward
in time: we set t = T − 1 and solve Problem (13) for every w in the support of W_{t}. We obtain
β_{t}^{(k+1)}(w) and λ_{t}^{(k+1)}(w). As W_{t} is discrete we can compute exactly the expectations λ_{t}^{(k+1)} and
β_{t}^{(k+1)}, and define the new lower approximation

V̂_{t}^{(k+1)}(x) := max { V̂_{t}^{(k)}(x), β_{t}^{(k+1)} + ⟨λ_{t}^{(k+1)}, x − x_{t}^{(k)}⟩ }.

Question 10 Check that V̂_{t}^{(k+1)} is a lower convex approximation of V_{t}.

Go one step back in time: t ← t − 1. Upon reaching t = 0 we have completed step k of the
algorithm.
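The averaging step of the backward path can be sketched as follows (Python, with illustrative per-noise cut data; since W_{t} is discrete, the expectations are exact finite sums):

```python
# Backward-path averaging: with W_t discrete, the expected cut
# coefficients are exact finite sums. The support, probabilities and
# per-noise cut data below are illustrative.

support = [0.0, 1.0, 2.0]                      # support of W_t
probs = [0.25, 0.5, 0.25]                      # P(W_t = w)
lam_w = {0.0: -1.0, 1.0: -0.5, 2.0: 0.0}       # lambda_t^(k+1)(w)
beta_w = {0.0: 3.0, 1.0: 2.0, 2.0: 1.5}        # beta_t^(k+1)(w)

lam = sum(p * lam_w[w] for w, p in zip(support, probs))    # E[lambda(W)]
beta = sum(p * beta_w[w] for w, p in zip(support, probs))  # E[beta(W)]
print(lam, beta)   # -> -0.5 2.125
```

The resulting pair (λ, β) defines the new cut x ↦ β + ⟨λ, x − x_{t}^{(k)}⟩ added to the approximation of V_{t}.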

Question 11 For one iteration of the SDDP algorithm, how many one-step one-noise problems (Problem (17)) do we need to solve?

2.2.5 Initialization and stopping rule

We can initialize as suggested in the deterministic case. However, in order to accelerate
convergence, it can be useful to bypass a few forward phases by arbitrarily choosing some
trajectories (x_{t}^{(k)})_{t}.

As in the deterministic case, we have an exact lower bound on the value of Problem (12), given by V̂_{0}^{(k)}(x_{0}).
However, determining an upper bound is more difficult. Indeed, the upper bound is given as the
expectation of the cost induced by the optimal strategy of the approximated problem. This
expectation is taken over the whole process (W_{0},…,W_{T−1}) and cannot be computed exactly.
Consequently it is estimated by Monte-Carlo methods, and we only have statistical control over
the error of our solution.

A heuristic stopping rule consists in stopping the algorithm when the lower bound lies within
the confidence interval of the upper bound, for a given number of Monte-Carlo
simulations.
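This heuristic can be sketched as follows (Python, with simulated costs drawn from an illustrative distribution and a hypothetical lower bound):

```python
# Estimating the stochastic upper bound by Monte-Carlo: simulate the
# cost of the approximate policy over N scenarios, build a 95%
# confidence interval, and stop when the exact lower bound falls
# inside it. The simulated costs and the lower bound are illustrative.
import math
import random

random.seed(0)
costs = [2.5 + random.gauss(0.0, 0.3) for _ in range(1000)]

n = len(costs)
mean = sum(costs) / n                                  # empirical upper bound
std = math.sqrt(sum((c - mean) ** 2 for c in costs) / (n - 1))
half = 1.96 * std / math.sqrt(n)                       # 95% CI half-width

lower_bound = 2.49                                     # hypothetical SDDP bound
stop = mean - half <= lower_bound <= mean + half       # heuristic stopping rule
print(mean - half, mean + half, stop)
```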

2.2.6 A few other possibilities

In the algorithm presented here, also called DOASA, we select one scenario (one realization of
(W_{1},…,W_{T−1})) to do a forward and a backward path. It is also possible to select a number N of
scenarios for the forward path (computations can be parallelized). Then, during the backward
path, we add N cuts to V̂_{t} before computing the cuts on V̂_{t−1}. This is the original implementation
of the algorithm.

Alternatively, the CUPPS algorithm suggests using V̂_{t+1}^{(k)} instead of V̂_{t+1}^{(k+1)} in the
computation of the cuts (14). In practice it means that we only do a forward path at step k:

select randomly a scenario (w_{t})_{t=0,…,T−1};

at time t we have a state x_{t}^{(k)}; we solve Problem (13) for every possible realization of
W_{t} in order to compute the new cut for V_{t};

we choose the optimal control corresponding to the realization W_{t} = w_{t} in order to
compute the state x_{t+1}^{(k)} at which the cut for V_{t+1} will be computed, and go to the
next step.

Moreover it is possible to select more than one scenario, and everything can be parallelized in this
implementation.

Finally, it is always possible, and most of the time fruitful, to compute some cuts before starting
the algorithm. One way of doing this is to bypass the forward phase by choosing the trajectory
(x_{t}^{(k)})_{t=0,…,T} instead of computing it as an optimal trajectory.

3 Problem data

We consider a dam manager intending to minimize the intertemporal cost obtained by turbining
the water in a dam, when the water inflow (rain, outflows from upper dams) is supposed to be
stochastic.

3.1 Dam dynamics

The stock of the dam evolves according to

S_{t+1} = S_{t} − q_{t} − δ_{t} + a_{t},

with

time t ∈𝕋 := {t_{0},…,T} is discrete (days), and t denotes the beginning of the period
[t,t + 1[,

S_{t} volume (stock) of water at the beginning of period [t,t + 1[, belonging to the set
𝕊 := {0,…,S^{♯}} of water volumes (hm³), where S^{♯} is the maximum dam volume,

a_{t} inflow water volume (rain, etc.) during [t,t + 1[, belonging to
𝕎 := {0,…,a^{♯}}, and supposed to be known at the beginning of the period
(hazard-decision),

q_{t} turbined outflow volume during [t,t+1[, decided at the beginning of period [t,t+1[,
belonging to the set 𝕌 := {0,…,q^{♯}}, where q^{♯} is the maximum which can be turbined by
time unit (and produces electricity),

δ_{t} spilled outflow volume during [t,t + 1[, decided at the beginning of period [t,t + 1[,
belonging to the set 𝔻 := ℝ_{+}, and such that

0 ≤ S_{t} − q_{t} − δ_{t} + a_{t} ≤ S^{♯}. (18)

The dam manager is supposed to make a decision, here turbining q_{t} and spilling δ_{t}, at the
beginning of the period [t,t + 1[, after knowing the water inflow a_{t}.

A water inflow scenario is a sequence

a(⋅) := (a_{t_{0}},…,a_{T−1}). (19)

Compared to other methods, we have added the spilled outflow volume δ in order to get
linear dynamics, and we have placed ourselves in the hazard-decision case. These two points are mandatory to
apply the SDDP algorithm.
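One step of this dynamics can be sketched as follows (Python; the bounds and the naive policy are illustrative, not the numerical data of §3.4):

```python
# One step of the dam dynamics in hazard-decision: knowing the inflow
# a_t, choose (q_t, delta_t), then update the stock. The bounds and the
# naive "turbine everything" policy are illustrative.

S_MAX = 80.0          # maximum dam volume S# (illustrative, hm^3)
Q_MAX = 40.0 / 7.0    # maximum daily turbined volume q# (illustrative)

def dam_step(s, q, d, a):
    """Return S_{t+1} = S_t - q_t - d_t + a_t, checking feasibility."""
    assert 0.0 <= q <= min(Q_MAX, s + a), "turbined volume out of bounds"
    assert d >= 0.0, "spilled volume must be nonnegative"
    s_next = s - q - d + a
    assert 0.0 <= s_next <= S_MAX, "stock constraint violated"
    return s_next

# naive policy: turbine as much as possible, spill only the overflow
s, a = 78.0, 10.0
q = min(Q_MAX, s + a)
d = max(0.0, s - q + a - S_MAX)
print(dam_step(s, q, d, a))   # -> 80.0: the dam is full, the rest is spilled
```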

3.2 Criterion: intertemporal cost

The manager's original problem is one of cost minimization, where turbining one unit of water has
unitary cost −p_{t}. Over the period from t_{0} to T, the costs sum up to

Σ_{t=t_{0}}^{T−1} −p_{t} q_{t} + K(S_{T}), (20)

where

a scenario of costs

(p_{t_{0}},…,p_{T−1}) (21)

is given (a datum of the problem) and supposed to be known in advance (deterministic
setting),
setting),

the final term gives value to the water volume in the dam at the horizon T.

3.3 Probabilistic model for water inﬂows

We suppose that the sequence of uncertainties (a_{t_{0}},…,a_{T−1}) is a sequence of random variables with a known
probability distribution ℙ on the set {0,…,a^{♯}}, with ℙ{a_{t} = a} given.

Notice that the random variables (a_{t_{0}},…,a_{T−1}) are independent, but that they are not
necessarily identically distributed. This allows us to account for seasonal effects (more rain in
autumn and winter).

3.4 Numerical data

We now perform numerical simulations, and try different policies.

Time.
We consider a daily management over one year

𝕋 := {t_{0},…,T}, with t_{0} = 1 and T = 365. (22)

Bounds.
Concerning the dam and water inflows, we consider the following bounds:

(23)

These figures reflect the assumptions that, during one week, one can release at most 40% of
the dam volume, and that during one week of full water inflows, an empty dam can be
half-filled.

Costs scenario.
The scenario of costs is known in advance. We produce it by one
sample from the expression

(24)

where the noise term is drawn from a sequence of i.i.d. uniform random variables.

Water inﬂow scenarios.
We produce scenarios a(⋅) := a_{t0},…,a_{T−1} of water inﬂows by a sequence of independent
random draws from the set {0, 1,,a^{♯}}.

3.5 Water turbined policy

An admissible policy π : 𝕋 × 𝕏 × 𝕎 → 𝕌 × 𝔻 assigns an amount of water turbined and spilled
to any state S of dam stock volume, any decision period t ∈ 𝕋 and any
water inflow a ∈ 𝕎, while respecting the constraints above.

In the following, we will simply write these constraints in the matrix form A ⋅ (q_{t},δ_{t}) + S_{t}E ≤ b(t).

Once the policy is given, we obtain a volume trajectory S(⋅), a turbined
trajectory q(⋅) and a spilled trajectory δ(⋅) produced by
the "closed-loop" dynamics

(q_{t},δ_{t}) = π(t,S_{t},a_{t}), S_{t+1} = S_{t} − q_{t} − δ_{t} + a_{t}, (28)

as functions of the scenario a(⋅). Thus, in the end, we obtain the cost

Σ_{t=t_{0}}^{T−1} −p_{t} q_{t} + K(S_{T}). (29)

3.6 Minimization under constraints

The aim of this part is to compute the minimum of a given convex function under linear
constraints and to find a cutting plane. We will use this to find an optimal strategy.

The problem is

min_{q,δ} −p_{t}q + V̂_{t+1}^{k}(S_{t} − q − δ + a) s.t. A ⋅ (q,δ) + S_{t}E ≤ b(t). (30)

In order to use the linear solver, we have to transform this formula into one of the
form

min_{y} c^{⊤}y s.t. A_{ineq} y ≤ b_{ineq}. (31)

We remind the reader that V̂_{t+1}^{k} is a piecewise linear function, written as a maximum of affine
functions. By introducing an additional variable bounded below by each of these affine functions
(the epigraph reformulation of Question 1), we obtain the wanted form for the minimization
problem (see (31)).

The last step is to find the parameters of the cutting plane of the Bellman function
V̂_{t}^{k} at a given point z. To do so, we turn the state z in the previous problem into
a variable x and add the constraint x = z. The Lagrange multiplier of this equality
constraint is the slope of the new cutting plane, and the minimal cost gives its ordinate at the
origin.

The following macro returns the parameters of the plane (λ,β) and the point u_{opt} at which the
minimum is reached, given the matrix form of the cost (−p,V), the constraints
(A,b(t),E), the value of the uncertainty a, the dynamics given by (Q,R) and the state
z.
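The computation performed by such a macro can be sketched as follows (Python, with illustrative data; here the one-step problem is solved on a control grid and the slope is read as a finite difference in the state, which mimics the LP multiplier of the constraint x = z):

```python
# Sketch of the cut computation at a point z: solve the one-step dam
# problem on a control grid and read the cut slope as a finite
# difference in the state (mimicking the LP multiplier of x = z).
# Price, bounds and the cuts of V_{t+1}^k below are illustrative.

P, Q_MAX, S_MAX = 3.0, 2.0, 10.0
cuts_next = [(-1.0, 12.0), (0.0, 5.0)]   # V_{t+1}^k(s) = max_j (l_j*s + b_j)

def value(z, a):
    """min over q of  -P*q + V_{t+1}^k(z - q + a),  q feasible."""
    best = float("inf")
    for i in range(201):
        q = min(Q_MAX, z + a) * i / 200          # feasible turbined grid
        s_next = min(z - q + a, S_MAX)           # spill any overflow
        theta = max(l * s_next + b for l, b in cuts_next)
        best = min(best, -P * q + theta)
    return best

z, a, eps = 6.0, 1.0, 1e-4
beta = value(z, a)                               # value at z: cut intercept data
lam = (value(z + eps, a) - beta) / eps           # slope of the cut
print(round(beta, 3), round(lam, 3))             # -> 1.0 -1.0
```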

Question 12 Explain the objective of the function CutPerAlea. Where, in the SDDP algorithm, is this function required? How many times is the function called during a step k of the SDDP algorithm?

4 Dam optimal management

We now solve the problem presented earlier in this practical work, by Dynamic Programming and
by Stochastic Dual Dynamic Programming.

Question 13 First of all, open the file damdata.sce and understand the different macros given in this file, in particular the macro trajectories.

Now define a policy (a function taking a time t, a current stock s and an inflow a, and giving an admissible control), and test it. Plot (using plot2d2) the trajectories of the stock under this policy.

4.1 DP resolution

We solve exactly the problem using Dynamic Programming.

Question 14 In the file sdp.sce a Dynamic Programming approach is presented. Understand how the function works, and plot the optimal stock trajectory using the functions from damdata.sce.

From the functions of sdp.sce, what is the exact value of the optimization problem? Simulate the optimal strategy and give an estimation of this value and asymptotic error bounds.

What do you observe for the final stock? Explain why.

4.2 SDDP resolution

We now apply an SDDP approach to the same problem.

In the files that were downloaded at the beginning, the function SDDP creates an approximation of
the Bellman functions given the constraints and random trajectories; then the function
sddp_policy defines the optimal policy given the former approximation.

Question 15 Using the functions of SDDP, plot the trajectory of the stocks corresponding to the optimal policy. Evaluate the optimal payoff, and the error bounds.

Evaluate the lower bound given by SDDP. Deduce the estimated gap (in %).

Plot the improvement of the gap along the iterations of SDDP.

4.3 Comparison of SDDP and DP

Question 16 Compare the numerical results given by the DP approach and the SDDP approach.

Sum up the theoretical and practical differences between DP and SDDP: what are the pros and cons of each method?