M. De Lara,
ENSTA ParisTech and École des Ponts ParisTech
The four-day session Interface 2019 "Stochastic Optimization for Large-Scale Systems" will present tools and methods to formulate and solve stochastic optimization problems arising in large-scale systems. For this purpose, courses, computing sessions and interactive sessions will alternate.
The school addresses a mixed audience from academia and industry. These four days of immersion at CIRM (International Center of Mathematical Meetings, Marseille) will foster discussions and exchanges, both during and outside the training sessions. The institutional organizers are ENSTA ParisTech and École des Ponts ParisTech.
Keywords: stochastic programming, scenario trees, progressive hedging, stochastic optimal control, stochastic dynamic programming, stochastic dual dynamic programming, spatial decomposition.
The organizing committee is made up of the following French researchers:
The course mainly deals with optimization in a dynamical setting, subject to uncertain perturbations. The iconic example is the management of a stock facing an uncertain demand over a given time span. Restocking decisions are taken at given discrete instants, knowing the past demand realizations.
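Schematically, and with notation chosen here purely for illustration, such a stock-management problem can be written as a multistage stochastic program:

```latex
\min_{u_0,\dots,u_{T-1}} \;
  \mathbb{E}\Bigl[\,\sum_{t=0}^{T-1} c_t(s_t,u_t,w_{t+1}) + K(s_T)\Bigr]
\quad \text{s.t.} \quad
  s_{t+1} = f_t(s_t,u_t,w_{t+1}), \qquad
  u_t \ \text{measurable w.r.t.}\ (w_1,\dots,w_t),
```

where $s_t$ is the stock level, $u_t$ the refilling decision, $w_{t+1}$ the uncertain demand, $c_t$ the stage cost and $K$ the final cost; the measurability constraint expresses that each decision is taken knowing only the past uncertainties.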
Such optimization problems occur in most industries and services dealing with stochastic dynamical systems: dam management, battery aging and smart-grid control in the energy field, or managing the dynamic equilibrium between supply and demand when production and transport are coupled via a network (as in the petroleum industry).
Computers and software:
During the first part of the course (sessions 1 to 3), we will present the two standard mathematical approaches to stochastic multistage optimization problems, that is, stochastic programming and stochastic optimal control. The sessions include several hands-on exercises in Julia.
Multistage stochastic optimization problems are often too large to be solved in a straightforward manner. During the second part of the course (sessions 5 to 8), we will present advanced methods that solve such problems by decomposition, that is, by iteratively solving (in parallel or sequentially) subproblems of reasonable size rather than the whole problem. We will recall the notions from convex optimization and duality theory needed to devise such methods.
The course starts with basics in stochastic optimization.
When the problem spans several time steps, the stochastic programming approach builds a scenario tree of uncertainties and attaches one control per node, leading to an equivalent deterministic formulation. We present alternative equivalent deterministic formulations.
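As a toy illustration of such a deterministic equivalent (a sketch in Python with invented data; the school's own computing sessions use Julia), one can attach one control per node of a small two-stage tree and optimize over all node controls at once. Non-anticipativity holds automatically because controls are indexed by tree nodes, not by scenarios:

```python
import itertools

# Two-stage stock problem on a scenario tree (toy data, chosen for illustration).
# Demands: w1 in {1, 3}, then w2 in {1, 3}; the four scenarios have probability 1/4.
demands = [1, 3]
prob = 0.25
controls = range(0, 5)          # admissible refills at each decision node
c_order, c_short = 1.0, 4.0     # unit ordering cost, unit shortage penalty
s0 = 2                          # initial stock

def cost(s, u, w):
    """Stage cost: pay for the refill, penalize unmet demand."""
    return c_order * u + c_short * max(w - (s + u), 0)

def next_stock(s, u, w):
    return max(s + u - w, 0)

# Deterministic equivalent: one control per tree node -> (u_root, (u_low, u_high)),
# where u_low / u_high are the stage-1 controls after observing w1 = 1 or w1 = 3.
best = min(
    (
        sum(
            prob * (cost(s0, u0, w1)
                    + cost(next_stock(s0, u0, w1), u1[i], w2))
            for i, w1 in enumerate(demands)
            for w2 in demands
        ),
        (u0, u1),
    )
    for u0 in controls
    for u1 in itertools.product(controls, repeat=2)
)
print(best)
```

Exhaustive enumeration stands in here for the linear-programming solver one would use in practice; the point is only the indexing of controls by nodes.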
We discuss the importance of information structures in multistage optimization, as well as how to use such structures to obtain computable bounds.
We end the session with a hands-on project tackling a multistage version of the problem studied in Session 1.
The stochastic optimal control approach looks at the problem from a Markovian point of view, where uncertainties are stagewise independent. We present a generic way of solving this problem by Dynamic Programming.
We end the session by applying Dynamic Programming to the problem studied in Session 2.
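The backward Bellman recursion underlying Dynamic Programming can be sketched as follows on a toy stock problem (Python with invented data, for illustration only; the hands-on sessions use Julia):

```python
# Backward dynamic programming for a toy stock problem with
# stagewise-independent demand (all data invented for the example).
T = 3                                  # horizon
states = range(0, 6)                   # discrete stock levels
controls = range(0, 4)                 # admissible refills
demands = [(1, 0.5), (3, 0.5)]         # demand (value, probability)
c_order, c_short = 1.0, 4.0            # unit ordering cost, shortage penalty

def stage_cost(s, u, w):
    return c_order * u + c_short * max(w - (s + u), 0)

def dynamics(s, u, w):
    return min(max(s + u - w, 0), max(states))   # clip to the state grid

# Bellman value functions V[t][s], computed backward from V[T] = 0.
V = [{s: 0.0 for s in states} for _ in range(T + 1)]
policy = [{} for _ in range(T)]
for t in reversed(range(T)):
    for s in states:
        best_u, best_q = None, float("inf")
        for u in controls:
            q = sum(p * (stage_cost(s, u, w) + V[t + 1][dynamics(s, u, w)])
                    for w, p in demands)
            if q < best_q:
                best_u, best_q = u, q
        V[t][s], policy[t][s] = best_q, best_u

print(V[0][2], policy[0][2])   # value and optimal refill at initial stock 2
```

The recursion also returns an optimal feedback policy, i.e. a refill decision for every reachable stock level at every stage.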
During this morning session, industrial practitioners will present their current problems. It will be an occasion to address some of the specific difficulties encountered when dealing with real-life problems.
Based on the industrial participants' proposals, we will organize small groups to tackle specific issues during the numerous "course-free" periods made possible by the immersive nature of the Interface program.
When a multistage stochastic optimization problem is formulated over a large number of scenarios, decomposition techniques can be used to iteratively solve a collection of subproblems, each formulated along a single scenario.
We present two approaches: (augmented) Lagrangian decomposition along scenarios, leading to the Progressive Hedging algorithm, and Benders decomposition, leading to the L-Shaped method.
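A minimal sketch of the Progressive Hedging iteration, on a toy two-stage problem with quadratic scenario costs chosen so that each subproblem has a closed-form solution (Python, invented data; a real implementation would call a solver per scenario):

```python
# Progressive Hedging on a toy two-stage quadratic problem (illustration only).
# Scenario subproblem s has cost f_s(x) = 0.5 * a[s] * (x - b[s])**2; we seek
# the single first-stage decision x minimizing the expected cost E[f_s(x)].
p = [0.5, 0.5]          # scenario probabilities
a = [1.0, 3.0]          # quadratic curvatures
b = [4.0, 0.0]          # scenario-wise minimizers
rho = 1.0               # penalty parameter

x = list(b)             # scenario copies, initialized at their own optima
w = [0.0, 0.0]          # multipliers of the non-anticipativity constraint
for _ in range(200):
    xbar = sum(pi * xi for pi, xi in zip(p, x))            # non-anticipative target
    w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]   # multiplier update
    # Scenario subproblems: closed-form argmin of
    # f_s(x) + w_s * x + (rho/2) * (x - xbar)**2.
    x = [(ai * bi - wi + rho * xbar) / (ai + rho)
         for ai, bi, wi in zip(a, b, w)]

# The scenario copies coincide at the minimizer of the expected cost:
x_star = (sum(pi * ai * bi for pi, ai, bi in zip(p, a, b))
          / sum(pi * ai for pi, ai in zip(p, a)))
print(round(xbar, 4), round(x_star, 4))
```

The augmented-Lagrangian penalty drives the scenario copies toward their probability-weighted average, which is exactly the non-anticipativity requirement of the two-stage problem.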
In the framework of dynamic programming, the SDDP (Stochastic Dual Dynamic Programming) method pushes back the limits imposed by the curse of dimensionality. Taking advantage of linearity and convexity, SDDP iteratively builds lower approximations of the Bellman value functions.
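The basic building block of these lower approximations is the accumulation of supporting hyperplanes (cuts) of a convex value function. A toy stand-in for this mechanism, not the full SDDP algorithm (Python, with a hand-picked convex function playing the role of a Bellman function):

```python
# Lower polyhedral approximation of a convex value function by cuts,
# the basic building block of SDDP (toy stand-in, not the full algorithm).
def V(x):          # a convex "Bellman" function, chosen here for illustration
    return x * x

def V_grad(x):     # its derivative, playing the role of a dual solution
    return 2 * x

cuts = []          # list of (intercept, slope): V(x) >= intercept + slope * x

def V_lower(x):
    """Current cut approximation: max over cuts (-inf with no cuts)."""
    return max((c + s * x for c, s in cuts), default=float("-inf"))

# Add cuts at trial points (in SDDP these come from forward simulation passes).
for x_trial in [-2.0, 0.5, 1.5]:
    slope = V_grad(x_trial)
    intercept = V(x_trial) - slope * x_trial
    cuts.append((intercept, slope))

# The approximation is exact at the trial points and a lower bound everywhere.
assert all(abs(V_lower(x) - V(x)) < 1e-12 for x in [-2.0, 0.5, 1.5])
assert all(V_lower(x) <= V(x) + 1e-12 for x in [-3 + 0.1 * k for k in range(61)])
print(len(cuts))
```

As more trial points are visited, the pointwise maximum of the cuts tightens, which is what makes the lower bound produced by SDDP improve over the iterations.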
The afternoon is devoted to a computer session aimed at discovering the SDDP algorithm in Julia (the StochDynamicProgramming package).
During this last session, we will explore different decomposition schemes to split a stochastic optimal control problem involving a large number of units into several small-scale subproblems. These methods make it possible to solve the subproblems by Dynamic Programming or SDDP.