
2.7 Variational Approach to Optimal Control Systems

... is clear why λ(t) is called the costate vector. Finally, using the relation (2.7.28), the boundary condition (2.7.26) at the optimal condition reduces to

[H* + ∂S/∂t]_{tf} δtf + [(∂S/∂x)* - λ*(t)]'_{tf} δxf = 0.   (2.7.32)

This is the general boundary condition for the free-end point system in terms of the Hamiltonian.

2.7.2 Different Types of Systems


We now obtain different cases depending on the statement of the problem regarding the final time tf and the final state x(tf) (see Figure 2.9).

• Type (a): Fixed-Final Time and Fixed-Final State System: Here, since tf and x(tf) are fixed or specified (Figure 2.9(a)), both δtf and δxf are zero in the general boundary condition (2.7.32), and there is no extra boundary condition to be used other than those given in the problem formulation.

• Type (b): Free-Final Time and Fixed-Final State System: Since tf is free or not specified in advance, δtf is arbitrary, and since x(tf) is fixed or specified, δxf is zero as shown in Figure 2.9(b). Then, the coefficient of the arbitrary δtf in the general boundary condition (2.7.32) is zero, resulting in

[H* + ∂S/∂t]_{tf} = 0.   (2.7.33)

• Type (c): Fixed-Final Time and Free-Final State System: Here tf is specified and x(tf) is free (see Figure 2.9(c)). Then δtf is zero and δxf is arbitrary, which in turn means that the coefficient of δxf in the general boundary condition (2.7.32) is zero. That is

[(∂S/∂x)* - λ*(t)]_{tf} = 0  →  λ*(tf) = (∂S/∂x)*_{tf}.   (2.7.34)

Figure 2.9 Different Types of Systems: (a) Fixed-Final Time and Fixed-Final State System, (b) Free-Final Time and Fixed-Final State System, (c) Fixed-Final Time and Free-Final State System, (d) Free-Final Time and Free-Final State System

• Type (d): Free-Final Time and Dependent Free-Final State System: If tf and x(tf) are related such that x(tf) lies on a moving curve θ(t) as shown in Figure 2.8, then

δxf = θ̇(tf) δtf.   (2.7.35)

Using (2.7.35), the boundary condition (2.7.32) for the optimal condition becomes

[H* + ∂S/∂t + {(∂S/∂x)* - λ*(t)}'θ̇(t)]_{tf} δtf = 0.   (2.7.36)

Since tf is free, δtf is arbitrary and hence the coefficient of δtf in (2.7.36) is zero. That is

[H* + ∂S/∂t + {(∂S/∂x)* - λ*(t)}'θ̇(t)]_{tf} = 0.   (2.7.37)

• Type (e): Free-Final Time and Independent Free-Final State: If tf and x(tf) are not related, then δtf and δxf are unrelated, and the boundary condition (2.7.32) at the optimal condition becomes

[H* + ∂S/∂t]_{tf} = 0,   (2.7.38)

[(∂S/∂x)* - λ*(t)]_{tf} = 0.   (2.7.39)

2.7.3 Sufficient Condition


In order to determine the nature of optimization, i.e., whether it is
minimum or maximum, we need to consider the second variation and
examine its sign. In other words, we have to find a sufficient condition
for extremum. Using (2.7.14), (2.7.28) and (2.7.37), we have the second variation in (2.7.16) and, using the relation (2.7.28), we get

δ^2 J = (1/2) ∫_{t0}^{tf} [δx'(t)  δu'(t)] [ (∂^2H/∂x^2)*  (∂^2H/∂x∂u)* ;  (∂^2H/∂u∂x)*  (∂^2H/∂u^2)* ] [δx(t); δu(t)] dt.   (2.7.40)

For the minimum, the second variation δ^2 J must be positive. This means that the matrix Π in (2.7.40),

Π = [ (∂^2H/∂x^2)*  (∂^2H/∂x∂u)* ;  (∂^2H/∂u∂x)*  (∂^2H/∂u^2)* ],   (2.7.41)

must be positive definite. But the important condition is that the second partial derivative of H* w.r.t. u(t) must be positive. That is

(∂^2H/∂u^2)* > 0,   (2.7.42)

and for the maximum, the sign of (2.7.42) is reversed.
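As a quick illustration (an addition, not from the original text), the sign test (2.7.42) can be checked symbolically for a concrete Hamiltonian. The sketch below assumes the double-integrator Hamiltonian used in the examples that follow, H = (1/2)u^2 + λ1x2 + λ2u, and the Symbolic Toolbox functions syms and diff:

*********************************************************
%% Hedged sketch (not from the text): check the sufficiency
%% condition (2.7.42) for H = 0.5*u^2 + lambda1*x2 + lambda2*u.
syms u x2 lambda1 lambda2
H = 0.5*u^2 + lambda1*x2 + lambda2*u;
d2Hdu2 = diff(H, u, 2)
%% d2Hdu2 = 1 > 0, so the extremal control from dH/du = 0 is a minimum.
*********************************************************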

2.7.4 Summary of Pontryagin Procedure


Consider a free-final time and free-final state problem with general cost
function (Bolza problem), where we want to minimize the performance
index

J = S(x(tf), tf) + ∫_{t0}^{tf} V(x(t), u(t), t) dt   (2.7.43)

for the plant described by

ẋ(t) = f(x(t), u(t), t)   (2.7.44)

with the boundary conditions as

x(t = t0) = x0;  t = tf is free and x(tf) is free.   (2.7.45)



Table 2.1 Procedure Summary of Pontryagin Principle for Bolza Problem

A. Statement of the Problem
Given the plant as
    ẋ(t) = f(x(t), u(t), t),
the performance index as
    J = S(x(tf), tf) + ∫_{t0}^{tf} V(x(t), u(t), t)dt,
and the boundary conditions as
    x(t0) = x0, and tf and x(tf) = xf are free,
find the optimal control.

B. Solution of the Problem
Step 1: Form the Pontryagin H function
    H(x(t), u(t), λ(t), t) = V(x(t), u(t), t) + λ'(t)f(x(t), u(t), t).
Step 2: Minimize H w.r.t. u(t):
    (∂H/∂u)* = 0, and obtain u*(t) = h(x*(t), λ*(t), t).
Step 3: Using the result of Step 2 in Step 1, find the optimal H*:
    H*(x*(t), h(x*(t), λ*(t), t), λ*(t), t) = H*(x*(t), λ*(t), t).
Step 4: Solve the set of 2n differential equations
    ẋ*(t) = +(∂H/∂λ)*  and  λ̇*(t) = -(∂H/∂x)*
with initial conditions x0 and the final condition
    [H* + ∂S/∂t]_{tf} δtf + [(∂S/∂x)* - λ*(t)]'_{tf} δxf = 0.
Step 5: Substitute the solutions of x*(t), λ*(t) from Step 4 into the expression for the optimal control u*(t) of Step 2.

C. Types of Systems
(a) Fixed-final time and fixed-final state system, Fig. 2.9(a)
(b) Free-final time and fixed-final state system, Fig. 2.9(b)
(c) Fixed-final time and free-final state system, Fig. 2.9(c)
(d) Free-final time and dependent free-final state system, Fig. 2.9(d)
(e) Free-final time and independent free-final state system

Type  Substitutions            Boundary Conditions
(a)   δtf = 0, δxf = 0         x(t0) = x0, x(tf) = xf
(b)   δtf ≠ 0, δxf = 0         x(t0) = x0, x(tf) = xf, [H* + ∂S/∂t]_{tf} = 0
(c)   δtf = 0, δxf ≠ 0         x(t0) = x0, λ*(tf) = (∂S/∂x)*_{tf}
(d)   δxf = θ̇(tf) δtf          x(t0) = x0, x(tf) = θ(tf),
                               [H* + ∂S/∂t + {(∂S/∂x)* - λ*(t)}'θ̇(t)]_{tf} = 0
(e)   δtf ≠ 0, δxf ≠ 0         x(t0) = x0, [H* + ∂S/∂t]_{tf} = 0,
                               λ*(tf) = (∂S/∂x)*_{tf}

Here, x(t) and u(t) are the n- and r-dimensional state and control vectors, respectively. Let us note that u(t) is unconstrained. The entire procedure (called the Pontryagin Principle) is now summarized in Table 2.1.

Note: From Table 2.1 we note that the only difference in the procedure between the free-final point system without the final cost function (Lagrange problem) and the free-final point system with the final cost function (Bolza problem) is in the application of the general boundary condition.
To illustrate the Pontryagin method described previously, consider the following simple examples describing a second order system. Specifically, we selected a double integrator plant whose analytical solutions for the optimal condition can be obtained and then verified using MATLAB®.
First we consider the fixed-final time and fixed-final state problem
(Figure 2.9(a), Table 2.1, Type (a)).

Example 2.12
Given a second order (double integrator) system as

ẋ1(t) = x2(t)
ẋ2(t) = u(t)   (2.7.46)
and the performance index as

J = (1/2) ∫_0^2 u^2(t) dt   (2.7.47)

find the optimal control and optimal state, given the boundary
(initial and final) conditions as

x(0) = [1 2]';  x(2) = [1 0]'.   (2.7.48)


Assume that the control and state are unconstrained.

Solution: We follow the step-by-step procedure given in Table 2.1.


First, by comparing the present plant (2.7.46) and the PI (2.7.47)
with the general formulation of the plant (2.7.1) and the PI (2.7.2),
we identify
V(x(t), u(t), t) = V(u(t)) = (1/2)u^2(t),
f(x(t), u(t), t) = [f1, f2]',   (2.7.49)

where f1 = x2(t), f2 = u(t).



• Step 1: Form the Hamiltonian function as


H = H(x1(t), x2(t), u(t), λ1(t), λ2(t))
  = V(u(t)) + λ'(t)f(x(t), u(t))
  = (1/2)u^2(t) + λ1(t)x2(t) + λ2(t)u(t).   (2.7.50)

• Step 2: Find u*(t) from


∂H/∂u = 0  →  u*(t) + λ2(t) = 0  →  u*(t) = -λ2(t).   (2.7.51)

• Step 3: Using the results of Step 2 in Step 1, find the optimal H* as

H*(x1*(t), x2*(t), λ1*(t), λ2*(t)) = (1/2)λ2*^2(t) + λ1*(t)x2*(t) - λ2*^2(t)
                                   = λ1*(t)x2*(t) - (1/2)λ2*^2(t).   (2.7.52)

• Step 4: Obtain the state and costate equations from

ẋ1*(t) = +(∂H/∂λ1)* = x2*(t),
ẋ2*(t) = +(∂H/∂λ2)* = -λ2*(t),
λ̇1*(t) = -(∂H/∂x1)* = 0,
λ̇2*(t) = -(∂H/∂x2)* = -λ1*(t).   (2.7.53)

Solving the previous equations, we have the optimal state and costate as

x1*(t) = (C3/6)t^3 - (C4/2)t^2 + C2t + C1,
x2*(t) = (C3/2)t^2 - C4t + C2,
λ1*(t) = C3,
λ2*(t) = -C3t + C4.   (2.7.54)

• Step 5: Obtain the optimal control from


u*(t) = -λ2*(t) = C3t - C4,   (2.7.55)

Figure 2.10 Optimal Controller for Example 2.12

where C1, C2, C3, and C4 are constants evaluated using the given boundary conditions (2.7.48). These are found to be

C1 = 1,  C2 = 2,  C3 = 3,  and  C4 = 4.   (2.7.56)
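As a brief check of these values (a reconstruction, not in the original text): the initial conditions x1(0) = 1 and x2(0) = 2 applied to (2.7.54) give C1 = 1 and C2 = 2 directly. The final conditions x1(2) = 1 and x2(2) = 0 then give (4/3)C3 - 2C4 + 5 = 1 and 2C3 - 2C4 + 2 = 0; subtracting the first equation from the second yields (2/3)C3 = 2, so C3 = 3 and hence C4 = 4.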
Finally, we have the optimal states, costates and control as

x1*(t) = 0.5t^3 - 2t^2 + 2t + 1,
x2*(t) = 1.5t^2 - 4t + 2,
λ1*(t) = 3,
λ2*(t) = -3t + 4,
u*(t) = 3t - 4.   (2.7.57)
The system with the optimal controller is shown in Figure 2.10.
The solution for the set of differential equations (2.7.53) with the boundary conditions (2.7.48) for Example 2.12 using the Symbolic Toolbox of MATLAB®, Version 6, is shown below.

**************************************************************
%% Solution Using Symbolic Toolbox (STB) in
%% MATLAB Version 6.0
%%
S=dsolve('Dx1=x2,Dx2=-lambda2,Dlambda1=0,Dlambda2=-lambda1',...
'x1(0)=1,x2(0)=2,x1(2)=1,x2(2)=0')
S.x1
S.x2
S.lambda1
S.lambda2

S =
lambda1: [1x1 sym]
lambda2: [1x1 sym]
x1: [1x1 sym]
x2: [1x1 sym]

S.x1

ans =
1+2*t-2*t^2+1/2*t^3

S.x2

ans =
2-4*t+3/2*t^2

S.lambda1

ans =
3

S.lambda2

ans =
-3*t+4
%% Plot command is used for which we need to
%% convert the symbolic values to numerical values.
j=1;
for tp=0:.02:2
t=sym(tp);
x1p(j)=double(subs(S.x1));
%% subs substitutes the current value of t into S.x1
x2p(j)=double(subs(S.x2));
%% double converts symbolic to numeric
up(j)=-double(subs(S.lambda2));
%% optimal control u = -lambda_2
t1(j)=tp;
j=j+1;
end
plot(t1,x1p,'k',t1,x2p,'k',t1,up,'k:')
xlabel('t')
gtext('x_1(t)')
gtext('x_2(t)')
gtext('u(t)')
*********************************************************
It is easy to see that the previous solutions for x1*(t), x2*(t), λ1*(t), λ2*(t), and u*(t) = -λ2*(t) obtained by using MATLAB® are the same as those given by the analytical solutions (2.7.57). The optimal control and state are plotted (using MATLAB®) in Figure 2.11.
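As a side note not found in the original text, the two-point boundary value problem formed by (2.7.53) and the boundary conditions (2.7.48) could also be solved numerically rather than symbolically. The following is a minimal sketch under the assumption that MATLAB's bvp4c, bvpinit, and deval routines are available; the variables are stacked as y = [x1; x2; lambda1; lambda2]:

*********************************************************
%% Hedged sketch (not from the text): numerical solution of the
%% TPBVP of Example 2.12 using bvp4c.
odefun = @(t,y) [y(2); -y(4); 0; -y(3)]; %% state-costate equations (2.7.53)
bcfun = @(ya,yb) [ya(1)-1; ya(2)-2; yb(1)-1; yb(2)]; %% x(0)=[1 2]', x(2)=[1 0]'
solinit = bvpinit(linspace(0,2,20), [1; 0; 0; 0]); %% crude constant initial guess
sol = bvp4c(odefun, bcfun, solinit);
tn = linspace(0,2,101);
yn = deval(sol, tn);
un = -yn(4,:); %% optimal control u* = -lambda_2
plot(tn, yn(1,:), 'k', tn, yn(2,:), 'k', tn, un, 'k:')
*********************************************************

The numerical curves should match the closed-form expressions in (2.7.57).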

Figure 2.11 Optimal Control and States for Example 2.12

Next, we consider the fixed-final time and free-final state case (Figure 2.9(c), Table 2.1, Type (c)) of the same system.

Example 2.13
Consider the same Example 2.12 with changed boundary conditions
as
x(0) = [1 2]';  x1(2) = 0;  x2(2) is free.   (2.7.58)
Find the optimal control and optimal states.

Solution: Following the procedure illustrated in Table 2.1 (Type (c)), we get the same optimal states, costates, and control as given in (2.7.54) and (2.7.55), which are repeated here for convenience.

x1*(t) = (C3/6)t^3 - (C4/2)t^2 + C2t + C1,
x2*(t) = (C3/2)t^2 - C4t + C2,
λ1*(t) = C3,
λ2*(t) = -C3t + C4,
u*(t) = -λ2*(t) = C3t - C4.   (2.7.59)
The only difference is in solving for the constants C1 to C4. First of all, note that the performance index (2.7.47) does not contain the terminal cost function S. From the given boundary conditions (2.7.58), we have tf specified to be 2 and hence δtf is zero in the general boundary condition (2.7.32).
Also, since x2(2) is free, δx2f is arbitrary and hence the corresponding final condition on the costate becomes

λ2*(tf) = (∂S/∂x2)*_{tf} = 0,  i.e.,  λ2*(2) = 0   (2.7.60)

(since S = 0). Thus we have the four boundary conditions as

x1(0) = 1;  x2(0) = 2;  x1(2) = 0;  λ2(2) = 0.   (2.7.61)

With these boundary conditions substituted in (2.7.59), the constants are found to be

C1 = 1;  C2 = 2;  C3 = 15/8;  C4 = 15/4.   (2.7.62)
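For a quick verification (a reconstruction, not in the original text): C1 = 1 and C2 = 2 again follow from the initial conditions in (2.7.59); the costate condition λ2*(2) = -2C3 + C4 = 0 gives C4 = 2C3, and x1*(2) = (4/3)C3 - 2C4 + 5 = 0 then gives -(8/3)C3 = -5, i.e., C3 = 15/8 and C4 = 15/4.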


Finally, the optimal states, costates and control are given from (2.7.59) and (2.7.62) as

x1*(t) = (5/16)t^3 - (15/8)t^2 + 2t + 1,
x2*(t) = (15/16)t^2 - (15/4)t + 2,
λ1*(t) = 15/8,
λ2*(t) = -(15/8)t + 15/4,
u*(t) = (15/8)t - 15/4.   (2.7.63)

The solution for the set of differential equations (2.7.53) with the boundary conditions (2.7.58) for Example 2.13 using the Symbolic Toolbox of MATLAB®, Version 6, is shown below.

***************************************************************
%% Solution Using Symbolic Toolbox (STB) in
%% MATLAB Version 6.0
%%
S=dsolve('Dx1=x2,Dx2=-lambda2,Dlambda1=0,Dlambda2=-lambda1',...
'x1(0)=1,x2(0)=2,x1(2)=0,lambda2(2)=0')

S =
lambda1: [1x1 sym]
lambda2: [1x1 sym]
x1: [1x1 sym]
x2: [1x1 sym]

S.x1

ans =
1+2*t-15/8*t^2+5/16*t^3

S.x2

ans =
2-15/4*t+15/16*t^2

S.lambda1

ans =
15/8

S.lambda2

ans =
-15/8*t+15/4

%% Plot command is used for which we need to
%% convert the symbolic values to numerical values.
j=1;
for tp=0:.02:2
t=sym(tp);
x1p(j)=double(subs(S.x1));
%% subs substitutes the current value of t into S.x1
x2p(j)=double(subs(S.x2));
%% double converts symbolic to numeric
up(j)=-double(subs(S.lambda2));
%% optimal control u = -lambda_2
t1(j)=tp;
j=j+1;
end
plot(t1,x1p,'k',t1,x2p,'k',t1,up,'k:')
xlabel('t')
gtext('x_1(t)')
gtext('x_2(t)')
gtext('u(t)')
*******************************************************
It is easy to see that the previous solutions for x1*(t), x2*(t), λ1*(t), λ2*(t), and u*(t) = -λ2*(t) obtained by using MATLAB® are the same as those given by (2.7.63) obtained analytically. The optimal control and states for Example 2.13 are plotted in Figure 2.12.

Figure 2.12 Optimal Control and States for Example 2.13

Next, we consider the free-final time and independent free-final state case (Figure 2.9(e), Table 2.1, Type (e)) of the same system.

Example 2.14
Consider the same Example 2.12 with changed boundary conditions
as

x(0) = [1 2]';  x1(tf) = 3;  x2(tf) is free;  tf is free.   (2.7.64)
Find the optimal control and optimal state.

Solution: Following the procedure illustrated in Table 2.1 (Type (e)), we get the same optimal control, states and costates as given in (2.7.54) and (2.7.55), which are repeated here for convenience.

x1*(t) = (C3/6)t^3 - (C4/2)t^2 + C2t + C1,
x2*(t) = (C3/2)t^2 - C4t + C2,
λ1*(t) = C3,
λ2*(t) = -C3t + C4,
u*(t) = -λ2*(t) = C3t - C4.   (2.7.65)

The only difference is in solving for the constants C1 to C4 and the unknown tf. First of all, note that the performance index (2.7.47) does not contain the terminal cost function S, that is, S = 0. From the given boundary conditions (2.7.64), we have tf unspecified and hence δtf is free in the general boundary condition (2.7.32), leading to the specific final condition

[H* + ∂S/∂t]_{tf} = H*(tf) = 0.   (2.7.66)

Also, since x2(tf) is free, δx2f is arbitrary and hence the general boundary condition (2.7.32) becomes

λ2*(tf) = (∂S/∂x2)*_{tf} = 0,   (2.7.67)

where H* is given by (2.7.52). Combining (2.7.64), (2.7.66) and (2.7.67), we have the following 5 boundary conditions for the 5 unknowns (4 constants of integration C1 to C4 and 1 unknown tf) as

x1(0) = 1;  x2(0) = 2;  x1(tf) = 3;
λ2(tf) = 0;  λ1(tf)x2(tf) - 0.5λ2^2(tf) = 0.   (2.7.68)

Using these boundary conditions along with (2.7.65), the constants are found to be

C1 = 1;  C2 = 2;  C3 = 4/9;  C4 = 4/3;  tf = 3.   (2.7.69)
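A brief reconstruction (not in the original text) of how these values arise from (2.7.65) and (2.7.68): λ2*(tf) = 0 gives C4 = C3·tf, and since λ1* = C3 ≠ 0, the condition λ1(tf)x2(tf) - 0.5λ2^2(tf) = 0 forces x2*(tf) = 0, i.e., -(C3/2)tf^2 + 2 = 0, so C3·tf^2 = 4. Substituting into x1*(tf) = -(C3/3)tf^3 + 2tf + 1 = 3 gives -(4/3)tf + 2tf = 2, so tf = 3 and then C3 = 4/9, C4 = 4/3. The second root tf = 1 mentioned in the MATLAB comments below appears to correspond to C3 = λ1* = 0 and is discarded.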

Finally, the optimal states, costates, and control are given from (2.7.65) and (2.7.69) as

x1*(t) = (2/27)t^3 - (2/3)t^2 + 2t + 1,
x2*(t) = (2/9)t^2 - (4/3)t + 2,
λ1*(t) = 4/9,
λ2*(t) = -(4/9)t + 4/3,
u*(t) = (4/9)t - 4/3.   (2.7.70)
The solution for the set of differential equations (2.7.53) with the boundary conditions (2.7.68) for Example 2.14 using the Symbolic Toolbox of MATLAB®, Version 6, is shown below.
********************************************************
%% Solution Using Symbolic Toolbox (STB) in
%% MATLAB Version 6
%%
clear all
S=dsolve('Dx1=x2,Dx2=-lam2,Dlam1=0,Dlam2=-lam1',...
'x1(0)=1,x2(0)=2,x1(tf)=3,lam2(tf)=0')
t='tf';
eq1=subs(S.x1)-'x1tf';
eq2=subs(S.x2)-'x2tf';
eq3=S.lam1-'lam1tf';
eq4=subs(S.lam2)-'lam2tf';
eq5='lam1tf*x2tf-0.5*lam2tf^2';
S2=solve(eq1,eq2,eq3,eq4,eq5,'tf,x1tf,x2tf,lam1tf,lam2tf','lam1tf<>0')
%% lam1tf<>0 means lam1tf is not equal to 0;
%% This is a condition derived from eq5.
%% Otherwise, without this condition in the above
%% SOLVE routine, we get two values for tf (1 and 3 in this case)
%%
tf=S2.tf
x1tf=S2.x1tf;
x2tf=S2.x2tf;
clear t
x1=subs(S.x1)
x2=subs(S.x2)
lam1=subs(S.lam1)
lam2=subs(S.lam2)
%% Convert the symbolic values to
%% numerical values as shown below.
j=1;
tf=double(subs(S2.tf))
%% converts tf from symbolic to numerical
for tp=0:0.05:tf
t=sym(tp);
%% converts tp from numerical to symbolic
x1p(j)=double(subs(S.x1));
%% subs substitutes the current value of t into S.x1
x2p(j)=double(subs(S.x2));
%% double converts symbolic to numeric
up(j)=-double(subs(S.lam2));
%% optimal control u = -lambda_2
t1(j)=tp;
j=j+1;
end
plot(t1,x1p,'k',t1,x2p,'k',t1,up,'k:')
xlabel('t')
gtext('x_1(t)')
gtext('x_2(t)')
gtext('u(t)')
*******************************************************
The optimal control and states for Example 2.14 are plotted in
Figure 2.13.

Finally, we consider the fixed-final time and free-final state system with a terminal cost (Figure 2.9(c), Table 2.1, Type (c)).

Example 2.15
We consider the same Example 2.12 with changed performance
index

J = (1/2)[x1(2) - 4]^2 + (1/2)[x2(2) - 2]^2 + (1/2) ∫_0^2 u^2(t) dt   (2.7.71)
and boundary conditions as

x(0) = [1 2]';  x(2) is free.   (2.7.72)

Following the procedure illustrated in Table 2.1 (Type (c)), we get the same optimal control, states, and costates as given in (2.7.54) and (2.7.55), which are reproduced here for ready reference.

Figure 2.13 Optimal Control and States for Example 2.14

Thus we have

x1*(t) = (C3/6)t^3 - (C4/2)t^2 + C2t + C1,
x2*(t) = (C3/2)t^2 - C4t + C2,
λ1*(t) = C3,
λ2*(t) = -C3t + C4,
u*(t) = -λ2*(t) = C3t - C4.   (2.7.73)

The only difference is in solving for the constants C1 to C4 using the given and obtained boundary conditions. Since tf is specified as 2, δtf is zero, and since x(2) is unspecified, δxf is free in the boundary condition (2.7.32), which now reduces to

λ*(tf) = (∂S/∂x)*_{tf},   (2.7.74)

where

S = (1/2)[x1(2) - 4]^2 + (1/2)[x2(2) - 2]^2.   (2.7.75)

Thus, (2.7.74) becomes

λ1*(tf) = (∂S/∂x1)*_{tf}  →  λ1*(2) = x1(2) - 4,
λ2*(tf) = (∂S/∂x2)*_{tf}  →  λ2*(2) = x2(2) - 2.   (2.7.76)

Now, we have two initial conditions from (2.7.72) and two final conditions from (2.7.76) to solve for the four constants as

C1 = 1;  C2 = 2;  C3 = 3/7;  C4 = 4/7.   (2.7.77)
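As a quick reconstruction (not in the original text): C1 = 1 and C2 = 2 follow from x(0) = [1 2]'. The final conditions couple the state and costate: λ1*(2) = C3 = x1*(2) - 4 = (4/3)C3 - 2C4 + 1 and λ2*(2) = -2C3 + C4 = x2*(2) - 2 = 2C3 - 2C4. The second relation gives C4 = (4/3)C3; substituting into the first gives (7/3)C3 = 1, so C3 = 3/7 and C4 = 4/7.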

Finally, we have the optimal states, costates and control given as

x1*(t) = (1/14)t^3 - (2/7)t^2 + 2t + 1,
x2*(t) = (3/14)t^2 - (4/7)t + 2,
λ1*(t) = 3/7,
λ2*(t) = -(3/7)t + 4/7,
u*(t) = (3/7)t - 4/7.   (2.7.78)
The previous results can also be obtained using the Symbolic Math Toolbox of MATLAB®, Version 6, as shown below.

***************************************************************
%% Solution Using Symbolic Math Toolbox (STB) in
%% MATLAB Version 6
%%
S=dsolve('Dx1=x2,Dx2=-lambda2,Dlambda1=0,Dlambda2=-lambda1',...
'x1(0)=1,x2(0)=2,lambda1(2)=x12-4,lambda2(2)=x22-2')
t='2';
S2=solve(subs(S.x1)-'x12',subs(S.x2)-'x22','x12,x22');
%% solves for x1(t=2) and x2(t=2)
x12=S2.x12;
x22=S2.x22;
clear t

S =
lambda1: [1x1 sym]
lambda2: [1x1 sym]
x1: [1x1 sym]
x2: [1x1 sym]

x1=subs(S.x1)

x1 =
1+2*t-2/7*t^2+1/14*t^3

x2=subs(S.x2)

x2 =
2-4/7*t+3/14*t^2

lambda1=subs(S.lambda1)

lambda1 =
3/7

lambda2=subs(S.lambda2)

lambda2 =
4/7-3/7*t

%% Plot command is used for which we need to
%% convert the symbolic values to numerical values.
j=1;
for tp=0:.02:2
t=sym(tp);
x1p(j)=double(subs(S.x1));
%% subs substitutes the current value of t into S.x1
x2p(j)=double(subs(S.x2));
%% double converts symbolic to numeric
up(j)=-double(subs(S.lambda2));
%% optimal control u = -lambda_2
t1(j)=tp;
j=j+1;
end
plot(t1,x1p,'k',t1,x2p,'k',t1,up,'k:')
xlabel('t')
gtext('x_1(t)')
gtext('x_2(t)')
gtext('u(t)')
***************************************************************
It is easy to see that the previous solutions for x1*(t), x2*(t), λ1*(t), λ2*(t), and u*(t) = -λ2*(t) obtained by using MATLAB® are the same as those given by (2.7.78) obtained analytically.
The optimal control and states for Example 2.15 are plotted in
Figure 2.14.

Figure 2.14 Optimal Control and States for Example 2.15

2.8 Summary of Variational Approach


In this section, we summarize the development of the topics covered so
far in obtaining optimal conditions using the calculus of variations. The
development is carried out in different stages as follows. Also shown is
the systematic link between various stages of development.
