Optimal control of a domestic microgrid

SOWG
(last modification date: October 10, 2017)


Contents

1 Problem statement, formalisation and data
 1.1 Problem statement
 1.2 System dynamics
  1.2.1 Battery dynamics
  1.2.2 Hot water tank dynamics
 1.3 Criterion: operational costs
 1.4 Optimization criterion
2 Simulation and evaluation of given policies
 2.1 Evaluation of a policy
 2.2 Simulation of a given policy
3 Solving by two different methods
 3.1 Model Predictive Control
 3.2 Stochastic Dynamic Programming
 3.3 Comparison between different policies
A Discretization and interpolation schemes
 A.1 Discretization
 A.2 Interpolation


1 Problem statement, formalisation and data


1.1 Problem statement

We consider here a house equipped with a thermal hot water tank, a combined heat and power generator (CHP) and a battery. This house has two systems to satisfy its energy needs: a thermal system for heating, and an electrical system to satisfy its electrical demand. These two systems are coupled via the CHP, which makes the optimal control difficult to find.

The CHP is a generator which produces both electricity and heat while burning gas. The power of the CHP cannot be modulated: it always produces the same amount of heat and electricity.

A battery can store a potential surplus of electricity (especially when demand is low), and the house is connected to the grid to import electricity if necessary. The heat produced by the CHP is stored in a hot water tank, from which hot water is withdrawn to satisfy the thermal demand.

We aim to decrease the operational costs of the system, taking into account that the electrical and thermal demands of the house are not known in advance (see the scenarios in Figure 3).


Figure 1: Different systems constituting the domestic microgrid.


Figure 2: Electrical price


Figure 3: Different scenarios for demand


1.2 System dynamics

We model our system with discrete-time linear equations, stating the evolution of the system between two timesteps $t$ and $t+1$. Controls are computed at the beginning of the time interval $[t, t+1[$, and are applied all along this interval. This gives a discrete-time optimal control problem.

We control the system over one day, with a 15-minute timestep, hence $T = 24 \times 4 = 96$ timesteps.


1.2.1 Battery dynamics

$B_t$ is the energy stored inside the battery at time $t$. The battery dynamics is the following:

$$B_{t+1} = B_t + \Delta t \, F_{B,t}. \tag{1}$$

Battery constraints The battery level must remain above a given level, otherwise its life expectancy decreases much more quickly. Usually, it is specified that this level remains between 30% and 90% of the maximum level of charge. The charge and discharge are also bounded by a maximum ramp rate $\Delta B$. These two constraints write:

$$B^\flat \leq B_t \leq B^\sharp, \tag{2}$$

and

$$-\Delta B \leq B_{t+1} - B_t \leq \Delta B. \tag{3}$$


1.2.2 Hot water tank dynamics

We have two inputs for the hot water tank: the thermal production of the CHP, $(1-\alpha)P_{chp,t}$, and the production of the auxiliary gas burner, $F_{A,t}$; and one output: the thermal demand of the house, $D_t^{th}$.

We denote by $H_t$ the tank level, that is, the energy stored inside the tank at time $t$ (in kWh). This level is bounded:

$$H^\flat \leq H_t \leq H^\sharp. \tag{4}$$

The hot water tank is modelled with:

$$H_{t+1} = H_t + \Delta t \left[ (1-\alpha) P_{chp,t} + F_{A,t} - D_t^{th} + F_{th,t+1} \right]. \tag{5}$$

All these equations are designed so that constraints (4) are satisfied for all demands $D_t^{th}$.

The recourse variable $F_{th,t+1}$ is penalized afterwards in the cost function.
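To make these dynamics concrete, here is a minimal sketch of a possible dynamic function, as called later by the simulation loop of Section 2.2. All constants are placeholder assumptions (they do not come from the original statement), the battery efficiencies used by the MPC model below (ALPHA_B, BETA_B) are ignored, and the recourse $F_{th,t+1}$ is taken to be exactly the correction that keeps the tank level within its bounds:

  # Illustrative constants (placeholders, not the course's exact values):
  const DT = 0.25                  # timestep [h]
  const ALPHA = 0.45               # electrical fraction of the CHP output
  const P_CHP_TOT = 10.            # CHP power when switched on [kW]
  const H_MIN, H_MAX = 19., 30.    # tank bounds [kWh], as in policy_mpc
  
  # x = [B, H], u = [Y, Fb, Fa], w = [De, Dt]
  function dynamic(t, x, u, w)
      B, H = x
      Y, Fb, Fa = u
      # Battery dynamics, equation (1):
      Bnext = B + DT*Fb
      # Tank dynamics, equation (5), first without recourse (Fth = 0):
      Hnext = H + DT*((1 - ALPHA)*Y*P_CHP_TOT + Fa - w[2])
      # The recourse Fth brings the level back inside [H_MIN, H_MAX]:
      Hnext = clamp(Hnext, H_MIN, H_MAX)
      return [Bnext; Hnext]
  end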


1.3 Criterion: operational costs

Electrical exchange We denote by $F_{N,t}$ the electricity imported from the grid at time $t$.

The balance equation between production and demand in the upper electrical part of Figure 1 writes:

$$\alpha P_{chp,t} + F_{N,t} = D_t^{el} + F_{B,t}. \tag{6}$$

The electricity imported from the network is paid at a tariff $\pi_t^{el}$, which depends upon time. The price scenario $(\pi_{t_0}^{el}, \dots, \pi_{T_f}^{el})$ is supposed to be known in advance.

Operational costs The costs incurred during the interval $[t, t+1[$ are:

$$L_t(P_{chp,t}, F_{A,t}, F_{N,t}, F_{th,t+1}) = \underbrace{\pi^{gas} P_{chp,t}}_{\text{CHP}} + \underbrace{\pi^{gas} F_{A,t}}_{\text{aux. burner}} + \underbrace{\pi^{th} |F_{th,t+1}|}_{\text{discomfort}} + \underbrace{\pi_t^{el} F_{N,t}}_{\text{network}}. \tag{7}$$

In (7), the costs of the different devices are considered:

  - the gas burned by the CHP, at price $\pi^{gas}$;
  - the gas burned by the auxiliary burner, at price $\pi^{gas}$;
  - the discomfort $\pi^{th} |F_{th,t+1}|$ incurred when the thermal demand is not exactly satisfied;
  - the electricity imported from the network, at price $\pi_t^{el}$.
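Under the same placeholder assumptions as the sketch of dynamic above, the stage cost (7) could be implemented as follows. P_GAS_PRICE and P_TH_PRICE are assumed values (the real ones come with the course data), and priceElec is the electricity tariff array also used by policy_mpc below:

  # Placeholder prices (assumptions):
  const P_GAS_PRICE = 0.06   # gas price [euro/kWh]
  const P_TH_PRICE = 10.     # discomfort price [euro/kWh]
  
  function cost(t, x, u, w)
      Y, Fb, Fa = u
      # Network import, recovered from the electrical balance (6):
      Fn = w[1] + Fb - ALPHA*Y*P_CHP_TOT
      # Thermal recourse: the correction absorbed by the clamp in dynamic:
      Hfree = x[2] + DT*((1 - ALPHA)*Y*P_CHP_TOT + Fa - w[2])
      Fth = (clamp(Hfree, H_MIN, H_MAX) - Hfree)/DT
      # Stage cost, equation (7):
      return P_GAS_PRICE*Y*P_CHP_TOT + P_GAS_PRICE*Fa +
             P_TH_PRICE*abs(Fth) + priceElec[t]*Fn
  end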


1.4 Optimization criterion

We can now write the optimization criterion:

$$\min_{u} \; \mathbb{E}\left[ \sum_{t=0}^{T-1} L_t(P_{chp,t}, F_{A,t}, F_{N,t}, F_{th,t+1}) \right] \tag{8}$$

s.t.

$$B_{t+1} = B_t + \Delta t \, F_{B,t}$$
$$B^\flat \leq B_t \leq B^\sharp$$
$$-\Delta B \leq B_{t+1} - B_t \leq \Delta B$$
$$H_{t+1} = H_t + \Delta t \left[ (1-\alpha) P_{chp,t} + F_{A,t} - D_t^{th} + F_{th,t+1} \right]$$
$$H^\flat \leq H_t \leq H^\sharp$$

We recall the different notations we used previously in Table 1.


State variables
  $B_t$          Battery level                        kWh
  $H_t$          Tank level                           kWh
Control variables
  $P_{chp,t}$    CHP power                            kW
  $F_{B,t}$      Energy exchanged with the battery    kW
  $F_{A,t}$      Production of the auxiliary burner   kW
Noises
  $D_t^{el}$     Electrical demand                    kW
  $D_t^{th}$     Thermal demand                       kW
Parameters
  $\pi^{el}$     Electricity price                    euros
  $\pi^{gas}$    Gas price                            euros
  $\pi^{th}$     Price of discomfort                  euros

Table 1: Notations


2 Simulation and evaluation of given policies

The states, controls and perturbations are now denoted:

$$x_t = (B_t, H_t), \qquad u_t = (P_{chp,t}, F_{B,t}, F_{A,t}), \qquad w_t = (D_t^{el}, D_t^{th}). \tag{9}$$

We suppose that the states belong to a static set, $x_t \in \mathbb{X}$, and similarly the controls: $u_t \in \mathbb{U}$.

A demand scenario is a sequence of demands from time $t_0$ up to $T_f - 1$:

$$w(\cdot) = (w_0, \dots, w_{T_f - 1}), \tag{10}$$

where, for all $t$, $w_t = (D_t^{el}, D_t^{th})$.

$w(\cdot)$ is supposed to be a realization of a stochastic process $(W_t)_{t=0,\dots,T_f}$.


2.1 Evaluation of a policy

An admissible policy $\gamma : \mathbb{T} \times \mathbb{X} \to \mathbb{U}$ assigns a control $u = \gamma(t, x) \in \mathbb{U}$ to any time $t \in \mathbb{T}$ and any state $x \in \mathbb{X}$.

Given an admissible policy $\gamma$ and a demand scenario $w(\cdot)$, we can build the trajectories $x(\cdot)$ of the battery and the hot water tank: the dynamics is initialized at time $t_0$ by

$$x_0 = x_{ini},$$

and the trajectory is computed step by step with the following equation:

$$x_{t+1} = f_t(x_t, u_t, w_t), \qquad u_t = \gamma(t, x_t).$$

We obtain the payoff associated with the demand scenario $w(\cdot)$:

$$J^\gamma\big(w(\cdot)\big) = \sum_{t=0}^{T-1} L_t(x_t, u_t, w_t). \tag{11}$$

The expected payoff associated with the policy γ is:

$$\mathbb{E}\left[ J^\gamma\big(w(\cdot)\big) \right], \tag{12}$$

where the expectation $\mathbb{E}$ is taken with respect to the product probability. We evaluate (12) by the Monte-Carlo method, using $N_s$ scenarios $(w^1(\cdot), \dots, w^{N_s}(\cdot))$:

$$\frac{1}{N_s} \sum_{s=1}^{N_s} J^\gamma\big(w^s(\cdot)\big). \tag{13}$$

In what follows, we will evaluate each algorithm with this method.

The following code loads the assessment scenarios that can be used for simulation:

  # Read assessment scenarios (one column per scenario):
  De_assess = readcsv("demandelec_assess.csv")    # electrical demand
  Dt_assess = readcsv("demandtherm_assess.csv")   # thermal demand
  # Gather the 30 scenarios in an array of shape (timesteps, scenarios, noises):
  scenarios = zeros(model.stageNumber, 30, 2)
  scenarios[:, :, 1] = De_assess
  scenarios[:, :, 2] = Dt_assess;

The data used for scenario generation can be obtained here, in zip format (code.zip) or tgz (code.tgz).

Question 1

  1. Copy this code and execute it.
  2. Plot 10 scenarios of electrical demand and 10 scenarios of thermal demand.


2.2 Simulation of a given policy

The function simulate is used to simulate a policy:

  function simulate(scenarios, policy)
      nassess = size(scenarios, 2)   # number of assessment scenarios
      ntime = size(scenarios, 1)     # number of timesteps
  
      x0 = [.6, 20.]                 # initial battery and tank levels
      stocks = zeros(nassess, ntime, 2)
      controls = zeros(nassess, ntime-1, 3)
      costs = zeros(nassess)
  
      # Set first value of stocks equal to x0:
      for i in 1:nassess
          stocks[i, 1, :] = x0
      end
  
      for t = 1:ntime-1
          for k = 1:nassess
              state_t = stocks[k, t, :]     # current state x_t
              aleas = scenarios[t, k, :]    # noise w_t of scenario k
  
              # Compute the control u_t with the given policy:
              uc = policy(t, state_t)
  
              # Apply the dynamics to get the next state x_{t+1}:
              xf = dynamic(t, state_t, uc, aleas)
  
              stocks[k, t+1, :] = xf
              controls[k, t, :] = uc
              costs[k] += cost(t, state_t, uc, aleas)
          end
      end
  
      return costs, stocks, controls
  end
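For instance, once a policy function is available (such as policy_mpc from Section 3.1), the Monte-Carlo estimate (13) of its expected cost reads:

  costs, stocks, controls = simulate(scenarios, policy_mpc)
  # (in recent Julia versions, mean requires `using Statistics`)
  println("Monte-Carlo estimate of the expected cost: ", mean(costs))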

Question 2

  1. What are the inputs of the function simulate? And its outputs?
  2. Relate this function to the theoretical model above.

We consider a first policy, given by an echelon-based heuristic; an illustrative sketch is shown below.
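As an illustration only, a threshold policy may look like the following sketch. The thresholds and the switching logic are assumptions made for the example, not the actual heuristic of the statement:

  # Illustrative threshold ("echelon") policy; all levels are assumed:
  function policy_heuristic(t, x)
      B, H = x
      Y  = H < 22. ? 1. : 0.    # switch the CHP on when the tank is low
      Fb = B < 2.  ? 1. : -1.   # charge the battery below 2 kWh, else discharge
      Fa = H < 20. ? 2. : 0.    # auxiliary burner as a backup for the tank
      return [Y; Fb; Fa]        # same control order as policy_mpc
  end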

Question 3

  1. Implement this heuristic. The function should take as input the time $t$ and the current state $x_t$ and return a control $u_t$.
  2. Apply this policy along the scenarios generated in Question 1.
  3. Show the costs’ distribution. Plot 10 trajectories for states and controls.


3 Solving by two different methods

We are going to compare two policies to solve problem (8): Model Predictive Control (Section 3.1) and Stochastic Dynamic Programming (Section 3.2).

We will assess each policy along the scenarios generated in Question 1. However, these policies themselves need scenarios as input to their algorithms. To be fair, we will use optimization scenarios distinct from the assessment scenarios considered in Question 1.

As we are considering a real-world problem, we do not have that many recorded scenarios. That is why we use only 30 scenarios for optimization and 30 scenarios for assessment.


3.1 Model Predictive Control

As time goes on, MPC computes, at the beginning of each period $[\tau, \tau+1[$, the control as a function of the current time $\tau$ and state $x_\tau$. Here are the details:

$$\min_{u_\cdot} \; \sum_{t=\tau}^{T-1} L_t(x_t, u_t, \bar{w}_t) \tag{14}$$

s.t.

$$u_\cdot = (u_\tau, \dots, u_{T-1})$$
$$x_{t+1} = f_t(x_t, u_t, \bar{w}_t)$$
$$x^\flat \leq x_t \leq x^\sharp$$
$$u_t \in \mathbb{U}(x_t)$$

where $\bar{w}_t$ denotes the demand forecast for time $t$.

Solving problem (14) at time $\tau$ in state $x_\tau$ yields a sequence of controls $(u_\tau, \dots, u_{T-1})$, of which only the first one, $u_\tau$, is applied. This gives the MPC policy:

$$\gamma^{MPC} : \mathbb{T} \times \mathbb{X} \to \mathbb{U}, \qquad (t, x_t) \mapsto u_t. \tag{15}$$

$\gamma^{MPC}$ is implemented in the function policy_mpc:

  function policy_mpc(t::Int64, x0::Array{Float64})
      tf = NTIME - t    # remaining horizon
  
      m = Model()
  
      # Define the variables and their bounds:
      @variable(m,  19           <= h[1:(tf+1)]  <= 30)            # tank level
      @variable(m,  0.5          <= b[1:(tf+1)]  <= 3)             # battery level
      @variable(m, -DELTA_B_MAX  <= Fb[1:(tf)]   <= DELTA_B_MAX)   # battery exchange
      @variable(m,  0.           <= Fa[1:(tf)]   <= 5.5)           # auxiliary burner
      @variable(m,  Fh[1:(tf)]   >= 0)                             # thermal recourse
      @variable(m,  0 <= Yt[1:(tf)] <= 1)                          # CHP activation
      @variable(m,  Z1[1:(tf)] >= 0)                               # electricity cost
  
      # Set optimization objective:
      @objective(m, Min, sum{P_BURNER*Fa[i] + P_CHP*Yt[i] +
                             Z1[i] + P_TH*Fh[i], i = 1:tf})
  
      for i in 1:tf
          # Battery dynamics:
          @constraint(m, b[i+1] - ALPHA_B * b[i] +
                         DT*BETA_B*Fb[i] == 0)
          # Tank dynamics, driven by the thermal demand forecast:
          @constraint(m, h[i+1] - ALPHA_H * h[i] -
                         DT*BETA_H*(Yt[i]*P_CHP_THERM + Fa[i] -
                         previsions[(i+t -1), 2] + Fh[i]) == 0)
  
          # Epigraph constraints on the electricity cost Z1:
          @constraint(m, Z1[i] >=  (previsions[(i+t -1), 1] -
                                    Fb[i] - Yt[i]*P_CHP_ELEC)*priceElec[t+i-1])
          @constraint(m, Z1[i] >= -(previsions[(i+t -1), 1] -
                                    Fb[i] - Yt[i]*P_CHP_ELEC)*P_INJECTION)
      end
  
      # Add initial constraints:
      @constraint(m, b[1] == x0[1])
      @constraint(m, h[1] == x0[2])
  
      status = solve(m)
      # Return only the first control (Yt, Fb, Fa):
      return [getvalue(Yt)[1]; getvalue(Fb)[1]; getvalue(Fa)[1]]
  end
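Note the role of the variable Z1: the two constraints above bound it from below by the net electricity exchange valued at priceElec, and by its opposite valued at P_INJECTION. Since Z1 is minimized in the objective, it equals the maximum of these two terms at the optimum. This is a standard epigraph reformulation, which models the piecewise-linear electricity cost while keeping the problem a linear program.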

Question 4

  1. Explain what the above code is doing.
  2. Apply MPC policy along the scenarios generated in Question 1.
  3. Show the costs’ distribution. Plot 10 trajectories for states and controls.
  4. Does the trajectory of the battery’s state of charge seem reasonable?


3.2 Stochastic Dynamic Programming

Dynamic programming works in two stages: an offline stage where value functions are computed, and an online stage where these functions are used to compute a policy.

Offline: computation of value functions The value functions $V_t$ are computed backward in time with the Bellman recursion, starting from the final time $T_f$:

$$V_t(x) = \min_{u \in \mathbb{U}(x)} \mathbb{E}\left[ L_t(x, u, W_t) + V_{t+1}\big(f_t(x, u, W_t)\big) \right].$$

Online: construction of the policy The DP policy computes, at the beginning of each period $[\tau, \tau+1[$, the control as a function of the current time $\tau$ and state $x_\tau$:

$$\gamma^{DP}(\tau, x_\tau) \in \arg\min_{u \in \mathbb{U}(x_\tau)} \mathbb{E}\left[ L_\tau(x_\tau, u, W_\tau) + V_{\tau+1}\big(f_\tau(x_\tau, u, W_\tau)\big) \right].$$

$x_{t+1} = f_t(x_t, u_t, w_t)$ is a functional relation in the continuous state space $\mathbb{X}$. As we use a computer, we must discretize the state and control spaces to implement dynamic programming. We restrict ourselves to a subset $\mathbb{X}^d \subset \mathbb{X}$ for states and $\mathbb{U}^d \subset \mathbb{U}$ for controls.

Implementation The Dynamic Programming solver is already implemented. It is called with:

     Vs = solve_DP(sdpmodel, paramSDP, 1)

which returns the Bellman functions in a multidimensional array Vs.

Question 5 Generate marginal laws with the help of the given optimization scenarios and the function generate_laws.

Question 6

  1. Compute Bellman functions with the function solve_DP.
  2. Apply Dynamic Programming policy along scenarios generated in Question 1.
  3. Show the costs’ distribution. Plot 10 trajectories for states and controls.

Question 7 Plot different Bellman functions with the help of the function plot_bellman_functions. Comment on the evolution of the shape of these functions along time.

Question 8

  1. Find the discretization used for states and controls. What is the cardinality of the set $\mathbb{X}^d$ (approximation of the space $\mathbb{X}$)? And that of the set $\mathbb{U}^d$?
  2. What happens if we refine the discretization of the tank and of the battery by a factor of 10?


3.3 Comparison between different policies

We want to compare three different features for each algorithm: the offline computation time, the online computation time, and the average cost (see Table 2).


Policy      Offline computation time    Online computation time    Average cost
Heuristic
MPC
DP

Table 2: Benchmark

Question 9 Fill in Table 2 with the results previously obtained.


A Discretization and interpolation schemes

We have two stocks: a battery and a thermal tank. Even if the dynamic programming principle still holds, the numerical implementation is not as straightforward as in the one-dimensional case.


A.1 Discretization

We need to discretize the state and control spaces to apply dynamic programming. We restrict ourselves to a subset $\mathbb{X}^d \subset \mathbb{X}$ for states and $\mathbb{U}^d \subset \mathbb{U}$ for controls. The policy found by dynamic programming is then suboptimal.

To iterate upon the different states in $\mathbb{X}^d$, we define a bijective mapping between $\mathbb{X}^d$ and a subset of $\mathbb{N}$. This mapping is written:

$$\phi : \mathbb{X}^d \to \mathbb{N}. \tag{16}$$
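As an illustration, one classical choice for a two-dimensional grid is row-major indexing, identifying a state with its pair of grid indices. In the sketch below, nb and nh denote the number of discretization points for the battery and the tank; the names and signatures are assumptions:

  # Row-major bijection between grid indices (ib, ih) and a flat index k,
  # with ib in 1:nb and ih in 1:nh:
  phi(ib, ih, nh) = (ib - 1)*nh + ih
  # Inverse mapping, recovering (ib, ih) from k:
  phi_inv(k, nh) = (div(k - 1, nh) + 1, rem(k - 1, nh) + 1)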

Question 10

  1. Find a mapping that satisfies the conditions previously stated.
  2. Implement it.


A.2 Interpolation

With this discretization, we know the values of the Bellman function $V_t$ only on the grid $\mathbb{X}^d$, and not on the whole state space $\mathbb{X}$.

We recall that the next state is given by the state equation $x_{t+1} = f_t(x_t, u_t, w_t)$. However, even if $x_t \in \mathbb{X}^d$ and $u_t \in \mathbb{U}^d$, we cannot ensure that $x_{t+1} \in \mathbb{X}^d$ when computing the future cost $V_{t+1}(x_{t+1})$. That is why we need an interpolation scheme to interpolate $V_{t+1}(x_{t+1})$ from the values of $V_{t+1}$ on $\mathbb{X}^d$.
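As a one-dimensional illustration of the idea (the two-dimensional, bilinear case is the object of Question 11), here is a sketch of linear interpolation of $V$ on a grid; the names and signature are assumptions:

  # Piecewise-linear interpolation of V at x, given the values V[i] of the
  # Bellman function at the (sorted) grid points xs[i]:
  function interp1(xs::Vector{Float64}, V::Vector{Float64}, x::Float64)
      i = clamp(searchsortedlast(xs, x), 1, length(xs) - 1)   # left neighbour
      lambda = (x - xs[i])/(xs[i+1] - xs[i])                  # barycentric weight
      return (1 - lambda)*V[i] + lambda*V[i+1]
  end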


Figure 4: $x_{t+1}$ could be outside the grid defined by $\mathbb{X}^d$

Question 11 With the notation introduced in Figure 4, write the linear interpolation of $x_{t+1}$ with respect to the grid points $X_i^d$.