
Information constraints discretization in a two-step model

J.-Ph Chancelier and SOWG
(last modification date: October 10, 2017)


Introduction

This practical work follows the example worked out in M. De Lara's first lecture. The computations will be performed using nsp (a Matlab-like simulation language). To launch nsp, just use the Nsp entry in the menu Applications/Science. You will need a specific nsp toolbox called qhull to perform Voronoi tessellations. To load it, just type within nsp:
exec('COURSES/qhullnsp/loader.sce');

1 The Problem

Consider two independent random variables $W_0$ and $W_1$, each with a uniform probability distribution over $[-1,1]$ (zero mean, variance $1/3$). The unique decision variable $U$ may only use the observation of $W_0$ (which we view as the initial state $X_0$). The final state $X_1$ is equal to $W_0 + U + W_1$. The goal is to minimize $\mathbb{E}(\varepsilon U^2 + X_1^2)$, where $\varepsilon$ is a given "small" positive number ("cheap control"). The statement is thus

\[
\min_{U \preceq W_0} \mathbb{E}\left( \varepsilon U^2 + (W_0 + U + W_1)^2 \right).
\]
(1)

2 Exact Solution

We have that

\[
\mathbb{E}\left( \varepsilon U^2 + (W_0 + U + W_1)^2 \right)
= \mathbb{E}\left( W_0^2 + W_1^2 + (1+\varepsilon)U^2 + 2UW_0 + 2UW_1 + 2W_0W_1 \right).
\]
The last two terms in the right-hand side yield zero in expectation since $W_0$ and $W_1$ are centered independent random variables and since $U$ is measurable with respect to $W_0$. The first two terms yield twice the variance $1/3$ of the noises. Therefore, we remain with the problem of minimizing
\[
\frac{2}{3} + \mathbb{E}\left( (1+\varepsilon)U^2 + 2UW_0 \right)
\]
(2)

by choosing $U$ as a measurable function of $W_0$. Minimizing pointwise in $u$ for each value $w_0$ of $W_0$, one can see that the solution is given by

\[
U = -\frac{W_0}{1+\varepsilon},
\]

and the corresponding optimal cost is readily calculated to be

\[
\frac{1}{3}\,\frac{1+2\varepsilon}{1+\varepsilon} \approx \frac{1}{3}.
\]
(3)
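
For the record, here is the short computation behind (3): substituting the optimal feedback into (2) and using $\mathbb{E}(W_0^2) = 1/3$,

\[
\frac{2}{3} + \mathbb{E}\left( (1+\varepsilon)\frac{W_0^2}{(1+\varepsilon)^2} - \frac{2W_0^2}{1+\varepsilon} \right)
= \frac{2}{3} - \frac{1}{3}\,\frac{1}{1+\varepsilon}
= \frac{1}{3}\,\frac{1+2\varepsilon}{1+\varepsilon}.
\]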

Question 1 Use the following code to recover the previous results by both numerical integration (nsp function intg) and Monte Carlo simulations. Choose and code another feedback and compare the associated cost with the optimal cost.

  epsilon=1.E-2;
  // we define here the optimal feedback
  function u=feed(w0) u=-w0/(1+epsilon);endfunction;
  
  // the integrand of the expected cost (2) (the constant 2/3 accounts for E(W0^2)+E(W1^2))
  function y=fcost(w0,feedback)
    u=feedback(w0);
    y=2/3+(1+epsilon)*u .^2+2*u .*w0;
  endfunction;
  
  // numerical integration (the uniform density on [-1,1] is 1/2, hence the division by 2)
  [cost,ea_estim]=intg(-1,1,fcost,args = feed,vecteval = %t);
  cost=cost/2;
  
  // Monte Carlo simulations
  W=grand(1,2500,'unf',-1,1);
  Y=fcost(W,feed);
  costmc=mean(Y);
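
For the last part of the question, here is one possible comparison (a sketch, not the official solution; the suboptimal feedback u = -w0, which ignores the control penalization, is just an arbitrary choice):

  // an alternative (suboptimal) feedback: ignore the cheap-control penalization
  function u=feed2(w0) u=-w0;endfunction;
  
  // its exact cost by numerical integration
  [cost2,ea_estim2]=intg(-1,1,fcost,args = feed2,vecteval = %t);
  cost2=cost2/2;
  // cost2 equals (1+epsilon)/3, strictly above the optimal cost (1+2*epsilon)/(3*(1+epsilon))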

3 Naive Monte Carlo Discretization


We now proceed to some discretization of this problem. To that purpose, we first consider $N$ noise trajectories $(w_0^i, w_1^i)$, $i = 1,\ldots,N$, which are $N$ samples of the two-dimensional vector $(W_0, W_1)$. Those samples will serve to approximate the cost expectation by a usual Monte Carlo averaging.

However, in this process, we must also consider $N$ corresponding realizations $\{u^i\}_{i=1,\ldots,N}$ of the random decision variable $U$. But we must keep in mind that this random variable should be measurable with respect to the first component $W_0$ of the previous vector.

To that purpose, we impose the constraint

\[
\forall i,j, \quad w_0^i = w_0^j \;\Rightarrow\; u^i = u^j,
\]
(4)

which prevents $U$ from taking different values whenever $W_0$ assumes the same value in any two sample trajectories.

When Constraint (4) is never active, we have to compute a different control value for each sample, and the cost is:

\[
\varepsilon (u^i)^2 + (w_0^i + u^i + w_1^i)^2
= (\varepsilon + 1)(u^i)^2 + 2(w_0^i + w_1^i)u^i + (w_0^i + w_1^i)^2.
\]
(5)

This expression must be minimized in $u^i$ for every $i = 1,\ldots,N$, under the constraint (4). This yields the optimal value

\[
u^i = -\frac{w_0^i + w_1^i}{1+\varepsilon}
\]
(6)

and the corresponding contribution to the cost $\varepsilon (w_0^i + w_1^i)^2 / (1+\varepsilon)$. This is of order $\varepsilon$, and so is the average over $N$ samples

\[
\frac{1}{N(1+\varepsilon)} \sum_{i=1}^{N} \varepsilon\, (w_0^i + w_1^i)^2
\]
(7)

even when N  goes to infinity. This is far from the actual optimal cost given by (3).

Question 2 Use the function grand to obtain a set of N=2500 samples of (W0,W1). Assuming that the N values that you obtain for W0 are pairwise distinct, compute the optimal cost of the discrete problem.
Optional question: set N=1000000 and check whether all the values of W0 are pairwise distinct (use function unique; type help unique to get the online manual). If not, propose a modification of the code that handles this case in order to compute the discrete optimal cost.

  // to be filled
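
A possible sketch (assuming pairwise distinct values of W0, so that Constraint (4) is never active; this is not the official solution):

  epsilon=1.E-2;
  N=2500;
  W=grand(2,N,'unf',-1,1);
  // with distinct W0 values, each sample gets its own control (6),
  // and the discrete optimal cost is the Monte Carlo average (7)
  S=W(1,:)+W(2,:);
  costdisc=epsilon*mean(S .^2)/(1+epsilon);
  // check for ties among the W0 values (unique is assumed to behave as in Scilab)
  if size(unique(W(1,:)),'*')<N then printf('some W0 values coincide\n');end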

4 What Is the Real Value of this “Solution”?

However, any admissible solution (any $U$ such that $U$ is measurable w.r.t. $W_0$) cannot achieve a cost which is better than the optimal cost (3). The value (7) is just a "fake" cost estimation. The resolution of the discretized problem derived from the previous Monte Carlo procedure yielded an optimal value $u^i$ (see (6)) associated with each sample noise trajectory represented by a point $(w_0^i, w_1^i)$ in the square $[-1,1]^2$. Hence, before trying to evaluate the cost associated with this "solution", we must first derive from it an admissible solution for the original problem, that is, a random variable $U$ over $\Omega = [-1,1]^2$, but with constant value along every vertical line of this square (since the abscissa represents the first component $W_0$ of the two-dimensional noise $(W_0, W_1)$).

A natural choice is as follows:

  • we first renumber the $N$ sample points so that the first component $w_0^i$ is increasing with $i$;
  • then, we divide the square into $N$ vertical strips by drawing vertical lines at the midpoints of the segments $[w_0^i, w_0^{i+1}]$; that is, the $i$-th strip is $[a_{i-1}, a_i] \times [-1,1]$ with $a_i = (w_0^i + w_0^{i+1})/2$ for $i = 1,\ldots,N-1$, $a_0 = -1$, and $a_N = 1$;
  • then, we define the solution $U$ as the function of $(w_0, w_1)$ which is piecewise constant over the square divided into those $N$ strips, using of course the optimal value $u^i$ given by (6) in strip $i$; that is, we consider
\[
U(w) = \sum_{i=1}^{N} u^i \, \mathbf{1}_{[a_{i-1},a_i] \times [-1,1]}(w),
\]
    (8)

    where $w$ ranges in the square $[-1,1]^2$ and $\mathbf{1}_A(\cdot)$ is the function which takes the value 1 in $A$ and 0 elsewhere.

Since this is an admissible solution for the original (continuous) problem, the corresponding cost value $\mathbb{E}(\varepsilon U^2 + X_1^2)$ can be evaluated. Here, the expectation is over the argument $w$ considered as a random variable over the square with uniform distribution (the calculation of this expectation is done analytically).

According to (2), this expected cost is easily evaluated as

\[
\frac{2}{3} + \sum_{i=1}^{N} \left( (1+\varepsilon)(u^i)^2 \int_{a_{i-1}}^{a_i} \frac{1}{2}\,\mathrm{d}w_0 + 2u^i \int_{a_{i-1}}^{a_i} \frac{w_0}{2}\,\mathrm{d}w_0 \right)
= \frac{2}{3} + \sum_{i=1}^{N} \left( (1+\varepsilon)(u^i)^2\, \frac{a_i - a_{i-1}}{2} + u^i\, \frac{(a_i)^2 - (a_{i-1})^2}{2} \right).
\]
(9)

Question 3 Compute the cost of the real problem obtained with the feedback (8) for a given sample of (W0,W1). You will have to do the following steps:

  • sort the values of W0 (use function gsort);
  • compute the midpoints of the intervals, and do not forget to add the endpoints −1 and 1;
  • use the permutation obtained when sorting the W0 values to associate the correct value of the control with each interval.

Although this is an "expected" cost, it is still a random variable, since the $u^i$'s and the $a_i$'s are functions of the $w_0^i$'s, which result from random drawings (the $u^i$'s also depend upon the $w_1^i$'s). This cost being itself a random variable, compute by Monte Carlo its mean and standard deviation for different values of N.

  epsilon=1.E-2;
  N=2500;
  W=grand(2,N,'unf',-1,1);
  U=-(W(1,:)+W(2,:))/(1+epsilon);
  // compute the mid-points vector a
  // compute the cost depending on a
  // cost = ... to be filled (a possible sketch follows)
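
One way to carry out those steps (a sketch under the assumption that gsort, as in Scilab, returns the sorted values together with the permutation indices):

  // sort the W0 values in increasing order, keeping the permutation
  [w0s,p]=gsort(W(1,:),'g','i');
  // reorder the controls consistently with the sorted W0 values
  Us=U(p);
  // midpoints of consecutive sorted values, plus the endpoints -1 and 1
  a=[-1,(w0s(1:N-1)+w0s(2:N))/2,1];
  // real cost (9) of the piecewise constant feedback (8)
  cost=2/3+sum((1+epsilon)*(Us .^2).*(a(2:N+1)-a(1:N))/2+Us .*(a(2:N+1).^2-a(1:N).^2)/2);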

5 Scenario Tree-based Discretization

The question is thus: how can we formulate, in the discretized problem, another constraint that translates the informational constraint more effectively than (4)? An obvious answer is that, in our collection of sample trajectories used in the discrete optimization problem, there should really be distinct samples with the same value of the component $w_0$.

Admittedly, if the scenarios are produced randomly (according to the joint uniform probability law of $(W_0, W_1)$ over the square $[-1,1] \times [-1,1]$), or if they have been recorded from real life observations, there is probability zero that a tree shape will pop up spontaneously, for any arbitrarily large but finite $N$.

In the case of our example, since $W_0$ and $W_1$ are known to be independent (the white noise case), any element in a set of $N_0$ samples of $W_0$ can be combined with the same set, or with $N_0$ distinct sets, of $N_1$ samples of $W_1$ to produce such a tree. Even if $W_0$ and $W_1$ were not independent, one could first generate $N_0$ samples of $W_0$ using the marginal probability law of this variable, and then, using each sample $w_0^j$ and the conditional probability law of $W_1$ knowing that $W_0$ assumes the value $w_0^j$, one could generate $N_1$ associated samples $w_1^k$ of $W_1$ ("sons" of that $w_0^j$).

To fix notations, we consider scenarios $\{(w_0^j, w_1^{jk})\}_{j=1,\ldots,N_0}^{k=1,\ldots,N_1}$ and we introduce the following additional symbols:

\[
\bar{w}_1^j = \frac{1}{N_1} \sum_{k=1}^{N_1} w_1^{jk}, \qquad
(\bar{\sigma}_1^j)^2 = \frac{1}{N_1} \sum_{k=1}^{N_1} (w_1^{jk})^2.
\]
(10)

Notice that $\bar{w}_1^j$ can be interpreted as an estimate of the conditional expectation of $W_1$ knowing that $W_0 = w_0^j$. Likewise, $(\bar{\sigma}_1^j)^2$ can be interpreted as an estimate of the conditional second order moment.

The cost of the discretized problem is

\[
\frac{1}{N_0} \sum_{j=1}^{N_0} \left( \varepsilon (u^j)^2 + \frac{1}{N_1} \sum_{k=1}^{N_1} (u^j + w_0^j + w_1^{jk})^2 \right).
\]

The minimizer is

\[
u^j = -\frac{w_0^j + \bar{w}_1^j}{1+\varepsilon}, \quad j = 1,\ldots,N_0,
\]
(11)

to be compared with (6). This yields the optimal cost

\[
\frac{1}{N_0(1+\varepsilon)} \sum_{j=1}^{N_0} \left( \varepsilon (w_0^j)^2 + 2\varepsilon\, w_0^j \bar{w}_1^j - (\bar{w}_1^j)^2 + (1+\varepsilon)(\bar{\sigma}_1^j)^2 \right),
\]
(12)

to be compared with (7) and (3). If we assume that the estimates (10) converge towards their right values (respectively, 0 and 1/3) as $N_1$ goes to infinity, then (12) gets close to

\[
\frac{1}{N_0(1+\varepsilon)} \sum_{j=1}^{N_0} \left( \varepsilon (w_0^j)^2 + \frac{1+\varepsilon}{3} \right).
\]

Now, the expression $(1/N_0) \sum_{j=1}^{N_0} (w_0^j)^2$ can also be viewed as an estimate of the second order moment of $W_0$ and, if we assume that it converges to the true value 1/3 when $N_0$ goes to infinity, then we recover, in the limit, the true optimal cost (3). Therefore, unlike with the previous naive Monte Carlo method (see (7)), here the optimal cost obtained in the discrete problem appears to converge to the right value.
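
Indeed, in that limit the previous expression becomes (a quick check of the claim)

\[
\frac{1}{1+\varepsilon} \left( \frac{\varepsilon}{3} + \frac{1+\varepsilon}{3} \right)
= \frac{1}{3}\,\frac{1+2\varepsilon}{1+\varepsilon},
\]

which is exactly the optimal cost (3).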

As we did earlier (see (9)), it is also interesting to evaluate the real cost associated with an admissible solution derived from the collection of "optimal" values (11) by plugging those values into the formula (8) (with $N$ replaced by $N_0$). Again, we have appealed to a computer program using 100 experiments, each consisting of:

  • drawing $N_0$ values $w_0^j$ at random;
  • associated with each of those values, drawing a set of $N_1$ values $w_1^{jk}$ at random;
  • computing the $\bar{w}_1^j$'s (see (10)) and the $u^j$'s (see (11)), and forming the admissible solution (8) ($N$ replaced by $N_0$) with those values, after reordering the indices $j$ so that $w_0^j$ is increasing with $j$;
  • evaluating the true cost $\mathbb{E}(\varepsilon U^2 + X_1^2)$ by analytic integration w.r.t. the couple $w = (w_0, w_1)$ with uniform probability distribution over the square $[-1,1]^2$.

Question 4 Compute the cost of the real problem obtained with the feedback (8) for a given tree sample of (W0,W1). This cost being itself a random variable, compute by Monte Carlo its mean and standard deviation for different values of N0 and N1.

  epsilon=1.E-2;
  N0=50;
  W0=grand(1,N0,'unf',-1,1);
  N1=50;
  // each column of W1 is associated to a value of W0
  W1=grand(N1,N0,'unf',-1,1);
  // compute the mean of each column (estimate of the conditional expectation, see (10))
  W1m=mean(W1,1);
  // compute the second order moment of each column (see (10))
  sig=mean(W1 .*W1,1);
  // the optimal discrete control
  // ... (a possible sketch follows)
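
A possible continuation (a sketch, not the official solution), implementing (11) and the discrete optimal cost (12):

  // the optimal discrete control (11), one value per W0 sample
  U=-(W0+W1m)/(1+epsilon);
  // the discrete optimal cost (12)
  cost12=sum(epsilon*W0 .^2+2*epsilon*W0 .*W1m-W1m .^2+(1+epsilon)*sig)/(N0*(1+epsilon));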

6 Independent Discretizations of Noise and Information


We now consider an alternative approach consisting of independent discretizations of noise and information. The noise is discretized using a Voronoi tessellation (Nc cells) and the control is discretized using vertical strips (Ns strips). As shown in the slides, we need to solve Ns optimization problems (one problem for each strip), formulated as follows:

  • Denoting $j(u, w_0, w_1) = \varepsilon u^2 + (w_0 + u + w_1)^2$,
  • the discretized optimization problem on strip $k$, giving the optimal $u^k$, is
\[
\min_{u^k} \sum_{i=1}^{N_c} p_{ik}\, j(u^k, w_0^i, w_1^i),
\]
  • in which $p_{ik}$ is the probability weight of the intersection of cell $i$ with strip $k$.

Question 5

  • Draw a sample of size 2500 of (w0,w1) and plot the Voronoi diagram using the voronoi function.
  • Discretize uniformly the interval [−1, 1] to define the vertical strips (use 100 strips and use function linspace).
  • Find a closed form formula for the optimal uk.
  • Use the function SquareBV to obtain the pik and compute the optimal value of uk.
  • Draw the obtained feedback, that is, u as a piecewise constant function of w0, and graphically compare it to the optimal feedback defined in §2.
  • Compute the real cost associated with the obtained feedback.

Note that you can use the macro SquareBV in order to compute the surfaces of the intersections of the cells of a Voronoi tessellation with a given rectangle.

  // voronoi(x,y); // draw a tesselation diagram
  // compute the surfaces for the strip [xmin,xmax]x[-1,1].
  // [Surfaces]=SquareBV(x,y,rect=[xmin,xmax,-1,1]);
  // ....
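
A possible starting point (a sketch only; it assumes that SquareBV returns one surface per Voronoi cell, in the same order as the sample points). Setting the derivative of the strip-$k$ criterion to zero gives the closed form $u^k = -\sum_i p_{ik}(w_0^i + w_1^i) \big/ \big((1+\varepsilon)\sum_i p_{ik}\big)$, which the following code implements:

  epsilon=1.E-2;
  Nc=2500;Ns=100;
  x=grand(1,Nc,'unf',-1,1);y=grand(1,Nc,'unf',-1,1);
  voronoi(x,y);                // draw the tessellation diagram
  edges=linspace(-1,1,Ns+1);   // strip boundaries
  U=zeros(1,Ns);
  for k=1:Ns
    // surfaces of the intersections of the cells with strip k;
    // they are proportional to the weights p_ik (normalization cancels in the ratio)
    p=SquareBV(x,y,rect=[edges(k),edges(k+1),-1,1]);
    // closed form minimizer of sum_i p_ik*(eps*u^2+(w0_i+u+w1_i)^2)
    U(k)=-sum(p(:) .*(x(:)+y(:)))/((1+epsilon)*sum(p));
  end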

Solutions
