## 3  Modeling optimal control problems

In order to understand the basic modeling features in PROPT it is best to start with a simple example. Open the file called brysonDenham.m located in /tomlab/propt/examples.

There are also brysonDenhamShort.m, brysonDenhamDetailed.m, and brysonDenhamTwoPhase.m, which solve the same problem utilizing different features of PROPT.

To solve the problem, simply enter the following in the Matlab command prompt:

`>> brysonDenham`

### 3.1  A simple example

The following example can be found in vanDerPol.m in /tomlab/propt/examples.

The objective is to solve the following problem:

minimize:

 J = x3(tf)

subject to:

dx1/dt = (1 − x2^2)*x1 − x2 + u

dx2/dt = x1

dx3/dt = x1^2 + x2^2 + u^2

−0.3 <= u <= 1.0
x(t0) = [0  1  0]′
tf = 5

To solve the problem with PROPT the following compact code can be used:

```
toms t
p = tomPhase('p', t, 0, 5, 60);
setPhase(p);

tomStates x1 x2 x3
tomControls u

% Initial guess
x0 = {icollocate({x1 == 0; x2 == 1; x3 == 0})
      collocate(u == -0.01)};

% Box constraints
cbox = {-10  <= icollocate(x1) <= 10
        -10  <= icollocate(x2) <= 10
        -10  <= icollocate(x3) <= 10
        -0.3 <= collocate(u)   <= 1};

% Boundary constraints
cbnd = initial({x1 == 0; x2 == 1; x3 == 0});

% ODEs and path constraints
ceq = collocate({dot(x1) == (1-x2.^2).*x1-x2+u
                 dot(x2) == x1
                 dot(x3) == x1.^2+x2.^2+u.^2});

% Objective
objective = final(x3);

% Solve the problem
options = struct;
options.name = 'Van Der Pol';
solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);
```

### 3.2  Code generation

It is possible to run the code generation step explicitly, in order to keep the autogenerated code:

```
>> Prob = sym2prob('lpcon',objective, {cbox, cbnd, ceq}, x0, options);
>> Prob.FUNCS

ans =

f: 'lp_f'
g: 'lp_g'
H: 'lp_H'
c: 'c_AFBHVT'
dc: 'dc_AFBHVT'
d2c: 'd2c_AFBHVT'
r: []
J: []
d2r: []
fc: []
gdc: []
fg: []
cdc: []
rJ: []

>> edit c_AFBHVT
```

The code that was generated can be found in the $TEMP directory. The objective in this case is linear and can be found in the Prob structure (Prob.QP.c).

Here is what the autogenerated constraint code may look like:

```
function out = c_AFBHVT(tempX,Prob)
% c_AFBHVT - Autogenerated file.
tempD=Prob.tomSym.c;
x3_p = reshape(tempX(183:243),61,1);
u_p = reshape(tempX(1:60),60,1);
x2_p = reshape(tempX(122:182),61,1);
x1_p = reshape(tempX(61:121),61,1);
tempC7   = tempD{2}*x2_p;
tempC8   = tempC7.^2;
tempC10  = tempD{2}*x1_p;
out      = [(tempD{3}-tempC8).*tempC10-tempC7+u_p-0.2*(tempD{1}*x1_p);...
tempC10.^2+tempC8+u_p.^2-0.2*(tempD{1}*x3_p)];
```

And the objective function to optimize. In this case it is a simple linear objective already available in TOMLAB, hence not autogenerated but defined by the Prob field used:

```
function f = lp_f(x, Prob)
f = Prob.QP.c'*x(:);
```

### 3.3  Modeling

The PROPT system uses equations and expressions (collected in cells) to model optimal control problems.

Equations must be written using one of the equality/inequality markers (==, <=, >=).

It is possible to include more than one equation on the same line, separated by semicolons.

For example:

```
toms a b c
cnbd = {a == b; b == c};
```

does the same job as:

```
toms a b c
cnbd = {a == b
        b == c};
```

The same is true for inequalities.

When working with optimal control problems, one typically works with expressions that are valid at all collocation points. The functions collocate and icollocate can be used to extend an arbitrary expression to the necessary collocation points.

Consider for example the starting points:

```
tomStates x1
tomControls u1
x0 = {icollocate(x1==1); collocate(u1==1)};
```

Note: icollocate, since it works with a state variable, also includes the starting point.

Scale the problem:

Proper scaling may speed up convergence, and conversely, improperly scaled problems may converge slowly or not at all. Both unknowns and equations can be scaled.
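As a sketch of this idea (the names hs and v are hypothetical, not from the example above), a state of order 1e5 can be replaced by a scaled counterpart so that all unknowns are of order one:

```
% Sketch: suppose an altitude h is of order 1e5 m.
% Optimize the scaled state hs = h/1e5 instead of h itself.
tomStates hs
cbox = {0 <= icollocate(hs) <= 2};    % instead of 0 <= h <= 2e5
ceq  = collocate(dot(hs) == v/1e5);   % the ODE dh/dt = v, scaled the same way
```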

Don’t use inf in equations:

It is strongly discouraged to use -inf/inf in equations. This is because the equation may be evaluated by subtracting its left-hand side from the right-hand side and comparing the result to zero, which could have unexpected consequences.

Equations like x <= Inf are better left out completely (although in many cases tomSym will know to ignore them anyway).
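For instance (a sketch, with a hypothetical state x1), rather than writing an infinite bound, either drop the constraint or use a large finite number:

```
% Discouraged:
% cbox = {icollocate(x1) <= Inf};
% Better: omit the constraint entirely, or use a large finite bound:
cbox = {icollocate(x1) <= 1e6};
```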

Avoid using non-smooth functions:

Because the optimization routines rely on derivatives, non-smooth functions should be avoided. A common example of a non-smooth function is the Heaviside function H, defined as H(x) = 1 for x > 0 and H(x) = 0 for x ≤ 0, which in Matlab code can be written as (x>0). A smooth approximation, which has the same behavior for |x| >> a, is:

Hs = (1/2)*(1 + tanh(x/a))
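The approximation can be compared to the exact step in plain Matlab (a standalone sketch, independent of PROPT):

```
x  = linspace(-1, 1, 201);
a  = 0.05;                   % smoothing width; smaller a gives a sharper step
H  = double(x > 0);          % non-smooth Heaviside
Hs = 0.5*(1 + tanh(x/a));    % smooth approximation, differentiable everywhere
plot(x, H, x, Hs);
```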

When the problems are non-smooth in the independent variable (time, t), the problem should normally be separated into phases.

Use descriptive names for equations, states, controls, and variables:

The names used for states, controls, and variables make the code easier to read. Consider using names such as "speed", "position", "concentration", etc. instead of "x1", "x2", "x3".

The names used for equations do not matter in most cases. However, they will be useful when accessing Lagrange multipliers.

Re-solve on successively finer grids:

If a very fine grid is needed in order to obtain a good solution, it is usually a good idea to solve the problem on a less dense grid first, and then re-solve using the obtained solution as a starting point. The following code will do the trick:

```
for n=[10 40]
p = tomPhase('p', t, 0, 6, n);
setPhase(p);
tomStates x1 x2

% Initial guess
if n == 10
    x0 = {p1 == 0; p2 == 0};
else
    x0 = {p1 == p1opt; p2 == p2opt
          icollocate({x1 == x1opt; x2 == x2opt})};
end
...
...
...
solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);

% Optimal x, p for starting point
x1opt = subs(x1, solution);
x2opt = subs(x2, solution);
p1opt = subs(p1, solution);
p2opt = subs(p2, solution);
end
```

See for example denbighSystem.m and drugScheduling.m.

#### 3.3.1  Modeling notes

The examples included with the software in many cases just scratch the surface of the capabilities of PROPT. The software is designed for maximum user flexibility and with many "hidden" features. The following notes are worth keeping in mind when modeling optimal control problems.

Left-hand side treatment:

The left-hand side of an equation does not have to be a single time-derivative. Things like collocate(m*dot(v) == F) work just fine, even if m is a state variable (a constraint that prevents m from becoming too small may improve convergence).

Second derivatives:

Second derivatives are allowed, as in collocate(dot(dot(x)) == a), although convergence is almost always better when extra states are introduced to avoid this.
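For example, dot(dot(x)) == a can be rewritten with an auxiliary velocity state (a sketch using hypothetical names x, v, and a):

```
tomStates x v
% Instead of collocate(dot(dot(x)) == a):
ceq = collocate({dot(x) == v      % v is the first derivative of x
                 dot(v) == a});   % so dot(v) carries the second derivative
```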

Equations:

Equations do not have to include any time-derivative. For example collocate(0.5*m*v^2 + m*g*h) == initial(0.5*m*v^2 + m*g*h) will work just fine in the PROPT framework.

Fixed end times:

Problems with fixed end times enable the use of any expression involving the collocation points in time, since the collocation points are pre-determined.

The following illustrates how t can (should) be used.

```
toms t
% Will cause an error if myfunc is undefined for tomSym
myfunc(t);

% This works since collocate(t) is a double vector and not a
% tomSym object.
myfunc(collocate(t));
```

It is also worth noting that feval bypasses tomSym, so it is possible to call any function, at the expense of slower derivatives. In the case of a fixed end time, there are no derivatives:

```
toms t
% Works since tomSym is bypassed.
collocate(feval('myfunc',t))
```

Segmented constraints:

With PROPT it is possible to define constraints that are valid for only a part of a phase. In addition, it is possible to define constraints at points other than the collocation points, which can help avoid constraint violations in between the collocation points.

mcollocate automates this process for the entire phase and imposes twice the number of constraints, i.e. the solution will also be constrained in between each pair of collocation points.
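Assuming a path constraint on a state x1 (a sketch; x1 is a hypothetical name), the usage mirrors icollocate:

```
% Enforce x1 >= 0 at the collocation points and at the points in between them:
con = {mcollocate(x1 >= 0)};
```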

The following example shows how to set segmented constraints (valid for a part of the phase) for time-free and time-fixed problems.

```
% For a variable end time (the number of constraints needs to be fixed).
% Relax the constraint to -1e6 below the cutoff time, and to 0 above it:
con = {collocate((8*(t-0.5).^2-0.5-x2) >= -1e6*(t<0.4))};

% When the end time is fixed there are many possible solutions.
% One can use custom points or a set of collocation points if needed.
tsteps = collocate(t);
n = length(tsteps);
y = atPoints(p,tsteps(round(n/3):round(n/2)),(8*(t-0.5).^2-0.5-x2));
con = {y >= 0};

% Alternatively one could write for custom points.
con = atPoints(p,linspace(0.4, 0.5, 10),(8*(t-0.5).^2-0.5-x2) >= 0);
```

Constant unknowns:

It is possible to mix constant unknowns (created with "toms") into the equations, and these will become part of the solution: icollocate(0.5*m*v^2 + m*g*h == Etot).

Higher-index DAEs:

Higher-index DAEs usually converge nicely with PROPT. One can use dot() on entire expressions to get symbolic time derivatives to use when creating lower index DAEs.
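As a sketch (the algebraic constraint here is a made-up example, with hypothetical states x1 and x2), differentiating an algebraic equation with dot() yields the index-reduced equation directly:

```
% Suppose the DAE contains the algebraic constraint x1^2 + x2^2 == 1.
g    = x1.^2 + x2.^2 - 1;
% dot(g) expands symbolically to 2*x1*dot(x1) + 2*x2*dot(x2):
ceq2 = collocate(dot(g) == 0);
```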

Lagrangian equations:

Deducing Lagrangian equations manually is usually not necessary, but if needed, tomSym can compute the derivatives. TomSym allows symbolic differentiation with respect to constant scalars and matrices using the "derivative" function, but this does not work for tomStates or tomControls. The current approach is therefore to create the expressions using normal tomSym variables (created with the "toms" command), apply the derivative function, and then use the "subs" function to replace those constants with states and controls before calling collocate.
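A sketch of that workflow for a pendulum (the Lagrangian and all names here are illustrative assumptions, and the exact subs call syntax may differ in your tomSym version):

```
toms q qd                            % plain tomSym stand-ins for the state
m = 1; g = 9.81; l = 0.5;            % assumed constants
L = 0.5*m*l^2*qd^2 + m*g*l*cos(q);   % pendulum Lagrangian (illustrative)
dLdqd = derivative(L, qd);           % dL/d(qdot)
dLdq  = derivative(L, q);            % dL/dq

tomStates theta
% Replace the stand-ins with the state and its derivative, then form
% the Euler-Lagrange equation d/dt(dL/d(qdot)) - dL/dq == 0:
dLdqd = subs(subs(dLdqd, q == theta), qd == dot(theta));
dLdq  = subs(subs(dLdq,  q == theta), qd == dot(theta));
ceq   = collocate(dot(dLdqd) - dLdq == 0);
```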

### 3.4  Independent variables, scalars and constants

Independent variables can be defined as follows (yes, it is that simple!):

```
toms t  % Time
toms tf % Final time
% Phase name, time, start, end, number of nodes
p = tomPhase('p', t, 0, tf, 25);
cbox = {20 <= tf <= 30}; % Bounds on end time
x0 = {tf == 22}; % Starting point for end time (must be set before used
% in other expressions)
```

It is also possible to use scalar variables within the PROPT framework:

```
toms p1
cbox = {1 <= p1 <= 2};
x0 = {p1 == 1.3};
```

where cbox and x0 define the lower and upper bounds and a suitable starting guess.

A constant variable ki0 can be defined with the following statement (i.e. no difference from normal Matlab code):

`ki0 = [1e3; 1e7; 10; 1e-3];`

### 3.5  State and control variables

The difference between state and control variables is that states are continuous between phases, while control variables can be discontinuous.

Examples:

Unconstrained state variable x1 with a starting guess from 0 to 1:

```
tomStates x1
x0 = icollocate(x1 == t/tf);
```

State variable x1 with bounds of 0.5 and 10. The starting guess should be set as well to avoid any potential singularities:

```
tomStates x1
cbox = {0.5 <= icollocate(x1) <= 10};
```

Control variable T with a "table" defining the starting point:

```
tomControls T
x0 = collocate(T==273*(t<100)+415*(t>=100));
```

### 3.6  Boundary, path, event and integral constraints

Boundary constraints are defined as expressions, since this reduces the problem size. For example, if state variable x1 has to start and end at 1, and x2 has to travel from 0 to 2, define:

```
cbnd1 = initial(x1 == 1);
cbnd2 = initial(x2 == 0);
cbnd3 = final(x1 == 1);
cbnd4 = final(x2 == 2);
```

A variety of path, event and integral constraints are shown below:

```
x3min     = icollocate(x3 >= 0.5);    % Path constraint
integral1 = {integrate(x3) - 1 == 0}; % Integral constraint
final3    = final(x3) >= 0.5;         % Final event constraint
init1     = initial(x1) <= 2.0;       % Initial event constraint
```

NOTE: When defining integral constraints that span several phases, use either of the following notations:

`integrate(p1,x3p1) + integrate(p2,x3p1) + integrate(p3,x3p1) <= 2.1e3`

or with expressions:

```
p1qx3 == integrate(p1,x3p1);
p2qx3 == integrate(p2,x3p1);
p3qx3 == integrate(p3,x3p1);

qx2sum = {p1qx3 + p2qx3 + p3qx3 <= 2.1e3};
```