ECEN 605

LINEAR SYSTEMS

Lecture 12
Structure of LTI Systems IV
– Minimal Realizations
Controllability, Observability, and Minimality

Let G(s) be a proper rational matrix and let {A, B, C, D} be a realization. If A is n × n, we say the order of the realization is n.

An important question is: is it possible to realize G(s) with a dynamic system of lower order? If not, n is the minimal order. Otherwise, how do we find the minimal order?

This problem was completely solved by Kalman in a classical paper¹. The solution involves the concepts of controllability and observability, which are also important in other areas.
Controllability, Observability, and Minimality
(cont.)

Theorem (Minimal Realization)
A realization {A, B, C, D} of a proper rational matrix G(s) is minimal iff (A, B) is controllable and (C, A) is observable.

This result is due to Kalman. It implies that if (A, B) is not controllable, the order can be reduced; likewise, if (C, A) is not observable, the order can also be reduced.

¹ R.E. Kalman, “Irreducible Realizations and the Degree of a Rational Matrix,” SIAM J. Appl. Math., Vol. 13, pp. 520–544, June 1965.
Coordinate Transformation and Order
Reduction

If we set

x(t) = Tz(t),  T ∈ R^{n×n},

where T is invertible, then we have

ż(t) = T^{-1}ATz(t) + T^{-1}Bu(t)
y(t) = CTz(t) + Du(t)

as the new state equations in z.
Coordinate Transformation and Order
Reduction (cont.)

It can be easily verified that the “new” transfer function is

CT(sI − T^{-1}AT)^{-1}T^{-1}B + D = C(sI − A)^{-1}B + D
= old transfer function,

and the new state space realization is related to the old one by the relationship

{A, B, C, D} → {T^{-1}AT, T^{-1}B, CT, D} = {A_new, B_new, C_new, D_new}.
This is called a similarity transformation.
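The invariance of the transfer function under a similarity transformation can be spot-checked numerically. The sketch below (assuming numpy, with matrices generated at random) evaluates both realizations at one test frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
T = rng.standard_normal((n, n))      # invertible with probability 1

# similarity transformation {A,B,C,D} -> {T^{-1}AT, T^{-1}B, CT, D}
An, Bn, Cn = np.linalg.solve(T, A @ T), np.linalg.solve(T, B), C @ T

def tf(A, B, C, D, s):
    # evaluate C (sI - A)^{-1} B + D at the complex frequency s
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

s = 1.0 + 2.0j
assert np.allclose(tf(A, B, C, D, s), tf(An, Bn, Cn, D, s))
```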

Coordinate Transformation and Order
Reduction (cont.)
The next two observations are crucial. If

T^{-1}AT = [A1 A3; 0 A2],  T^{-1}B = [B1; 0],  CT = [C1 C2],  D = D,

we can see that

C(sI − A)^{-1}B + D = C1(sI − A1)^{-1}B1 + D.

Similarly, if

T^{-1}AT = [A1 A3; 0 A2],  T^{-1}B = [B1; B2],  CT = [0 C2],  D = D,

then

C(sI − A)^{-1}B + D = C2(sI − A2)^{-1}B2 + D.

In the first case, the order is reduced from n to n1 (the size of A1). In the second case, the order is reduced from n to n2 (the size of A2).
Controllability Reduction

Let us regard

A : X → X

as a linear operator, and define the controllability matrix

R := [B  AB  A^2B  ···  A^{n-1}B],

and let R denote the column span of R.
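Numerically, R and its rank can be formed directly from the definition. A small numpy sketch (the matrices A and B below are an example made up here):

```python
import numpy as np

def ctrb(A, B):
    # stack [B, AB, A^2 B, ..., A^{n-1} B] column-wise
    n = A.shape[0]
    cols, AkB = [], B
    for _ in range(n):
        cols.append(AkB)
        AkB = A @ AkB
    return np.hstack(cols)

# third state is decoupled and unforced, so rank[R] = n1 = 2 < n = 3
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
B = np.array([[0.0], [1.0], [0.0]])
R = ctrb(A, B)
n1 = np.linalg.matrix_rank(R)
```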

Controllability Reduction (cont.)

In other words, if rank[R] = n1, then R is the n1-dimensional subspace spanned by the columns of R. Let {v1, v2, ..., v_{n1}} be a set of basis vectors for R and let {w_{n1+1}, w_{n1+2}, ..., w_n} be n − n1 vectors such that

T := [v1  v2  ···  v_{n1} | w_{n1+1}  w_{n1+2}  ···  w_n]

is an n × n invertible matrix.
Controllability Reduction (cont.)

Lemma
If

T^{-1}AT = [A1 A3; 0 A2],  T^{-1}B = [B1; 0],   (1)

where A1 is n1 × n1 and B1 is n1 × r, then (A1, B1) is controllable. The pair of matrices in (1) is called the Kalman controllable canonical form.
Controllability Reduction (cont.)

The proof of the lemma depends on the following facts.

Definition
Let A ∈ R^{n×n} and let V ⊂ R^n be a subspace. We say that V is A-invariant if AV ⊂ V, i.e., v ∈ V implies Av ∈ V.

Lemma
R is an A-invariant subspace and

B (the column span of B) ⊂ R.

In fact, R is the smallest A-invariant subspace containing B.
Controllability Reduction (cont.)

Proof
Suppose r ∈ R. Then

r = By_0 + ABy_1 + ··· + A^{n-1}By_{n-1}

for some vectors y_0, y_1, ..., y_{n-1}, and hence

Ar = ABy_0 + A^2By_1 + ··· + A^nBy_{n-1}.

By the Cayley-Hamilton Theorem,

A^n = α_{n-1}A^{n-1} + α_{n-2}A^{n-2} + ··· + α_1A + α_0I.
Controllability Reduction (cont.)

Substituting this in the expression for Ar, we have

Ar = Bz_0 + ABz_1 + ··· + A^{n-1}Bz_{n-1}

for some vectors z_0, z_1, ..., z_{n-1}. Therefore,

Ar ∈ R.

Obviously, B ⊂ R.
Controllability Reduction (cont.)

To prove that R is the smallest such subspace, suppose S is an A-invariant subspace containing B with S ⊂ R, so that

B ⊂ S ⊂ R.

Applying A and using the A-invariance of S, we have

AB ⊂ AS ⊂ S ⊂ R
A^2B ⊂ AS ⊂ S ⊂ R
...
A^{n-1}B ⊂ AS ⊂ S ⊂ R.

Therefore,

R = B + AB + ··· + A^{n-1}B ⊂ S ⊂ R,

so that S = R.
Controllability Reduction (cont.)

Proof (of the first lemma)
Eq. (1) is equivalent to the following:

A[v1 ··· v_{n1} | w_{n1+1} ··· w_n] = [v1 ··· v_{n1} | w_{n1+1} ··· w_n] [A1 A3; A4 A2],

B = [b1 b2 ··· br] = [v1 ··· v_{n1} | w_{n1+1} ··· w_n] [B1; B2],

and we want to prove that A4 = 0 and B2 = 0.
Controllability Reduction (cont.)

This follows from the facts

AR ⊂ R :  Av_i = α_{1i}v1 + α_{2i}v2 + ··· + α_{n1,i}v_{n1},  i = 1, 2, ..., n1,
B ⊂ R  :  b_j = β_{1j}v1 + β_{2j}v2 + ··· + β_{n1,j}v_{n1},  j = 1, 2, ..., r,

established in the second lemma.
Controllability Reduction (cont.)
Therefore, if a realization {A, B, C, D} is given with rank[R] = n1 < n, we can
1. apply a coordinate transformation so that

A_n = T^{-1}AT = [A1 A3; 0 A2],  B_n = T^{-1}B = [B1; 0],
C_n = CT = [C1 C2],  D_n = D,

2. use the fact that

C(sI − A)^{-1}B + D = C_n(sI − A_n)^{-1}B_n + D_n
= [C1 C2] [(sI − A1)^{-1}  (sI − A1)^{-1}A3(sI − A2)^{-1}; 0  (sI − A2)^{-1}] [B1; 0] + D
= C1(sI − A1)^{-1}B1 + D   (see the following remark)

to get the lower order realization of order n1, which is moreover controllable.
Controllability Reduction (cont.)
Remark²

1. When A^{-1} and B^{-1} exist,

[A 0; C B]^{-1} = [A^{-1} 0; −B^{-1}CA^{-1}  B^{-1}]

and

[A D; 0 B]^{-1} = [A^{-1} −A^{-1}DB^{-1}; 0  B^{-1}].

2. If A^{-1} exists,

[A D; C B]^{-1} = [A^{-1} + E∆^{-1}F  −E∆^{-1}; −∆^{-1}F  ∆^{-1}]

where

∆ = B − CA^{-1}D,  E = A^{-1}D,  F = CA^{-1}.

² T. Kailath, Linear Systems, Prentice-Hall, 1980, p. 656.
Observability Reduction

Define the observability matrix

O := [C; CA; CA^2; ···; CA^{n-1}]

and let θ be the null space (or kernel) of O:

θ := {x : Ox = 0}.

Obviously, θ is the subspace orthogonal to all the rows of O. If rank[O] = n2, then θ has dimension n − n2.
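The dual computation for O, as a numpy sketch (with a small made-up example in which one state is invisible from the output):

```python
import numpy as np

def obsv(A, C):
    # stack [C; CA; CA^2; ...; CA^{n-1}] row-wise
    n = A.shape[0]
    rows, CAk = [], C
    for _ in range(n):
        rows.append(CAk)
        CAk = CAk @ A
    return np.vstack(rows)

A = np.diag([1.0, 2.0])
C = np.array([[1.0, 0.0]])     # second state never reaches the output
O = obsv(A, C)
n2 = np.linalg.matrix_rank(O)  # rank[O] = n2 = 1, so dim(theta) = n - n2 = 1
```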

Observability Reduction (cont.)

Lemma
θ is A-invariant and is contained in Kernel(C). In fact, θ is the largest such subspace.
Observability Reduction (cont.)
Proof
If v ∈ θ, then CA^i v = 0 for i = 0, 1, ..., n − 1, and hence CA^j(Av) = 0 for j = 0, 1, ..., n − 2. To complete the proof of A-invariance we need to show that CA^{n-1}Av = 0; this follows from the Cayley-Hamilton Theorem. If v ∈ θ, then certainly Cv = 0, so that

θ ⊂ Kernel(C).

To prove that θ is the largest such subspace, suppose θ1 is an A-invariant subspace with

θ ⊂ θ1 ⊂ Kernel(C).

Since θ1 is A-invariant and contained in Kernel(C), CA^i θ1 = 0 for all i, so Oθ1 = 0 and θ1 ⊂ θ. Hence

θ ⊂ θ1 ⊂ θ,

so that θ1 = θ.
Observability Reduction (cont.)

Now suppose that {v1, ..., v_{n−n2}} is a basis for θ and choose {w_{n−n2+1}, ..., w_n} so that

T := [v1  ···  v_{n−n2}  w_{n−n2+1}  ···  w_n]

is an invertible n × n matrix.
Observability Reduction (cont.)

Then we have the following:

Lemma

T^{-1}AT = [A1 A3; 0 A2],  CT = [0 C2],   (2)

where A2 ∈ R^{n2×n2}, C2 ∈ R^{m×n2}, and (C2, A2) is observable. This pair is called the Kalman observable canonical form.
Observability Reduction (cont.)

Proof
Again Eq. (2) is equivalent to the matrix equations

A[v1 ··· v_{n−n2}  w_{n−n2+1} ··· w_n] = [v1 ··· v_{n−n2}  w_{n−n2+1} ··· w_n] [A1 A3; A4 A2],
C[v1 ··· v_{n−n2}  w_{n−n2+1} ··· w_n] = [C1 C2],

and we need to show that (i) A4 = 0 and (ii) C1 = 0. This follows from
1. the A-invariance of θ: Av_i = Σ_{j=1}^{n−n2} α_{ji}v_j, and
2. θ ⊂ Kernel(C), which means Cv_i = 0, i = 1, 2, ..., n − n2.
Observability Reduction (cont.)

Therefore, if a realization {A, B, C, D} with rank[O] = n2 < n is given, we can
1. apply a coordinate transformation T so that

A_n = T^{-1}AT = [A1 A3; 0 A2],  B_n = T^{-1}B = [B1; B2],
C_n = CT = [0 C2],  D_n = D,

2. use the fact that

C(sI − A)^{-1}B + D = C_n(sI − A_n)^{-1}B_n + D = C2(sI − A2)^{-1}B2 + D

to get a realization of order n2, which is observable.
Joint Reduction

Suppose that we have a realization (A, B, C, D) with rank[R] = n1. By applying the controllability reduction we get a realization (A1, B1, C1, D) of order n1 with (A1, B1) controllable. If (C1, A1) is observable, we are through, as we have a controllable and observable realization. Otherwise, carry out an observability reduction so that

T^{-1}A1T = [A11 A13; 0 A12],  T^{-1}B1 = [B11; B12],
C1T = [0 C12],  D = D,

and we have a realization (A12, B12, C12, D) which is observable.

The question that arises is: is (A12, B12) controllable?
Joint Reduction (cont.)

The answer is: If (A1 , B1 ) is controllable, so is (A12 , B12 ).


Remark
This shows that a two step procedure is enough to produce a
controllable and observable realization (minimal realization).
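The two-step procedure can be sketched numerically. This is an illustration under our own conventions (numpy assumed; the names `ctrb_reduce`, `obsv_reduce`, and `minimal_realization` are introduced here; the basis T is taken orthonormal from an SVD of the controllability matrix, so T^{-1} = T^T, and the observability step is done by duality):

```python
import numpy as np

def ctrb_reduce(A, B, C):
    # keep the controllable part: T = [orthonormal basis of range(R) | complement]
    n = A.shape[0]
    cols, AkB = [], B
    for _ in range(n):
        cols.append(AkB)
        AkB = A @ AkB
    U, s, _ = np.linalg.svd(np.hstack(cols))
    n1 = int(np.sum(s > 1e-10 * max(s[0], 1.0)))
    An, Bn, Cn = U.T @ A @ U, U.T @ B, C @ U
    return An[:n1, :n1], Bn[:n1], Cn[:, :n1]

def obsv_reduce(A, B, C):
    # (C, A) observable <=> (A^T, C^T) controllable, so reduce the dual pair
    At, Bt, Ct = ctrb_reduce(A.T, C.T, B.T)
    return At.T, Ct.T, Bt.T

def minimal_realization(A, B, C):
    # step 1: controllable part; step 2: its observable part
    return obsv_reduce(*ctrb_reduce(A, B, C))

# mode -2 is unobservable and mode -3 is uncontrollable; only 1/(s+1) survives
A = np.diag([-1.0, -2.0, -3.0])
B = np.array([[1.0], [1.0], [0.0]])
C = np.array([[1.0, 0.0, 1.0]])
Am, Bm, Cm = minimal_realization(A, B, C)
```

For this example the minimal order is 1 and the reduced realization has transfer function 1/(s + 1).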

Gilbert Realization

Gilbert’s realization is a particular minimal realization which can be obtained directly from a p × m transfer function matrix G(s). However, this realization is possible only when every pole of G(s) is simple (non-repeated).
1. Expand each entry of G(s) into partial fractions.
2. Form

G(s) = [R1]/(s − α1) + [R2]/(s − α2) + [R3]/(s − α3) + ··· .

3. The total size of the realization is

n* = Σ_i Rank[Ri].

4. Find Bi and Ci so that

CiBi = Ri, where Ci ∈ R^{p×ri}, Bi ∈ R^{ri×m}, ri = Rank[Ri].
Gilbert Realization (cont.)

5. Form (A, B, C) where

A = diag(α1I1, α2I2, ···),  B = [B1; B2; ···],  C = [C1 C2 ···].

Note that Ii is the identity matrix with dimension equal to Rank[Ri].
Gilbert Realization (cont.)

Example
Find a minimal realization of the following transfer function.

G(s) = [1/((s−1)(s−2))  1/((s−2)(s−3));
        1/((s−2)(s−3))  1/((s−1)(s−2))]
Gilbert Realization (cont.)

Since all entries of G(s) have simple poles, we can use the Gilbert realization.

G(s) = [−1/(s−1) + 1/(s−2)   −1/(s−2) + 1/(s−3);
        −1/(s−2) + 1/(s−3)   −1/(s−1) + 1/(s−2)]

= (1/(s−1))[−1 0; 0 −1] + (1/(s−2))[1 −1; −1 1] + (1/(s−3))[0 1; 1 0].

Factoring each residue as CiBi:

C1 = [−1 0; 0 −1], B1 = [1 0; 0 1];  C2 = [1; −1], B2 = [1 −1];  C3 = [0 1; 1 0], B3 = [1 0; 0 1].
Gilbert Realization (cont.)

Therefore,

A = diag(1, 1, 2, 3, 3),

B = [1 0; 0 1; 1 −1; 1 0; 0 1],

C = [−1 0 1 0 1; 0 −1 −1 1 0].

Here the diagonal blocks of A are 1·I2, 2·I1, and 3·I2, corresponding to the poles 1, 2, and 3.
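The realization can be verified numerically against G(s). A numpy spot-check at one complex frequency (the matrices are those assembled above; since the poles are at 1, 2, and 3, A = diag(1, 1, 2, 3, 3)):

```python
import numpy as np

A = np.diag([1.0, 1.0, 2.0, 3.0, 3.0])
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
C = np.array([[-1.0, 0.0, 1.0, 0.0, 1.0], [0.0, -1.0, -1.0, 1.0, 0.0]])

def G(s):
    # the example transfer function matrix, entry by entry
    return np.array([[1/((s-1)*(s-2)), 1/((s-2)*(s-3))],
                     [1/((s-2)*(s-3)), 1/((s-1)*(s-2))]])

s = 0.5 + 1.0j
assert np.allclose(C @ np.linalg.solve(s * np.eye(5) - A, B), G(s))
```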
Balanced Realizations

Recall that a transfer function can be realized by an infinite number of state space realizations. Depending on the purpose, a designer chooses a different state space realization to implement.

One type of realization we often see is the companion form realization, which is known to be highly numerically sensitive. Here we discuss another type of realization, called a balanced realization. To proceed, we limit the scope to stable and minimal realizations.
Balanced Realizations (cont.)

Consider a stable and minimal realization of the form

ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t).

Then

AWc + WcA^T = −BB^T
A^TWo + WoA = −C^TC,

where the controllability Gramian Wc and the observability Gramian Wo are positive definite.
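For small systems the two Lyapunov equations can be solved by Kronecker vectorization, using only numpy (a sketch; `lyap` is our own helper name, and the example system below is made up here):

```python
import numpy as np

def lyap(A, Q):
    # solve A X + X A^T = -Q by Kronecker vectorization (row-major vec)
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(M, -Q.ravel()).reshape(n, n)

A = np.array([[-1.0, 0.0], [1.0, -2.0]])   # stable
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
Wc = lyap(A, B @ B.T)       # A Wc + Wc A^T = -B B^T
Wo = lyap(A.T, C.T @ C)     # A^T Wo + Wo A = -C^T C
```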
Balanced Realizations (cont.)

Theorem
Suppose that two different state space realizations (A, B, C) and (Â, B̂, Ĉ) are minimal and equivalent. Let WcWo and ŴcŴo be the products of their controllability and observability Gramians, respectively. Then WcWo and ŴcŴo are similar, and their common eigenvalues are real and positive.
Balanced Realizations (cont.)

Proof
Write

Â = T^{-1}AT,  B̂ = T^{-1}B,  Ĉ = CT.

Then

ÂŴc + ŴcÂ^T = −B̂B̂^T

yields

T^{-1}ATŴc + ŴcT^TA^TT^{-T} = −T^{-1}BB^TT^{-T}.

Multiplying on the left by T and on the right by T^T,

A(TŴcT^T) + (TŴcT^T)A^T = −BB^T,

so that Wc = TŴcT^T, by uniqueness of the solution of the Lyapunov equation for stable A.
Balanced Realizations (cont.)

Thus, we have
Wc = T Ŵc T T
and similarly,
Wo = T −T Ŵo T −1 .
Now,

Wc Wo = T Ŵc T T T −T Ŵo T −1 = T Ŵc Ŵo T −1

which implies that Wc Wo and Ŵc Ŵo are similar.

Balanced Realizations (cont.)

To prove that the eigenvalues are real and positive, we need the following lemma.

Lemma
For every real symmetric matrix A, there exists an orthogonal matrix Q such that

A = Q^TDQ,

where D is a diagonal matrix whose entries are the eigenvalues of A, which are real.
Balanced Realizations (cont.)

Note that Wc is symmetric positive definite. Since its eigenvalues are real and positive, we can write

Wc = Q^TD^{1/2}D^{1/2}Q =: R^TR,

where Q is orthogonal, i.e., Q^{-1} = Q^T, and R = D^{1/2}Q. Consider

det(sI − WcWo) = det(sI − R^T(RWo)) = det(sI − (RWo)R^T),

where the last step uses the fact that R^T(RWo) and (RWo)R^T have the same eigenvalues. Hence WcWo and RWoR^T have the same eigenvalues. Since RWoR^T is symmetric and positive definite, the eigenvalues of WcWo are real and positive.
Balanced Realizations (cont.)

Theorem (A Balanced Realization)
For any stable minimal realization (A, B, C), there exists a similarity transformation such that the controllability Gramian Ŵc and observability Gramian Ŵo of the resulting equivalent state space realization satisfy

Ŵc = Ŵo = Σ.

Such an equivalent realization is called a balanced realization.
Balanced Realizations (cont.)

Proof
Recall the expression RWoR^T, where

Wc = R^TR,  R = D^{1/2}Q.

Since RWoR^T is symmetric and positive definite, we can write

RWoR^T = UΣ²U^T

where U is orthogonal and Σ is diagonal with positive entries.
Balanced Realizations (cont.)

Take

T = R^TUΣ^{-1/2},  so that  T^{-1} = Σ^{1/2}U^TR^{-T}.

Using Wo = T^{-T}ŴoT^{-1}, i.e., Ŵo = T^TWoT, derived above,

Ŵo = Σ^{-1/2}U^T(RWoR^T)UΣ^{-1/2} = Σ^{-1/2}Σ²Σ^{-1/2} = Σ.

Similarly, using Wc = TŴcT^T, i.e., Ŵc = T^{-1}WcT^{-T},

Ŵc = Σ^{1/2}U^TR^{-T}(R^TR)R^{-1}UΣ^{1/2} = Σ^{1/2}U^TUΣ^{1/2} = Σ.
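The construction in the proof translates directly into a short numerical procedure. A numpy sketch under the same assumptions (stable and minimal; `lyap` and `balance` are our own helper names):

```python
import numpy as np

def lyap(A, Q):
    # solve A X + X A^T = -Q by Kronecker vectorization (small n only)
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(M, -Q.ravel()).reshape(n, n)

def balance(A, B, C):
    # T = R^T U Sigma^{-1/2} as in the proof; returns the balanced realization
    Wc = lyap(A, B @ B.T)
    Wo = lyap(A.T, C.T @ C)
    R = np.linalg.cholesky(Wc).T              # Wc = R^T R
    lam, U = np.linalg.eigh(R @ Wo @ R.T)     # R Wo R^T = U Sigma^2 U^T
    sig = np.sqrt(lam)                        # diagonal of Sigma
    T = (R.T @ U) / np.sqrt(sig)              # scale columns by Sigma^{-1/2}
    Ti = np.linalg.inv(T)
    return Ti @ A @ T, Ti @ B, C @ T, sig
```

Both Gramians of the returned realization equal diag(sig), whose entries are the Hankel singular values.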
Degree of Transfer Function Matrices

Definition
For a proper rational matrix G(s), the characteristic polynomial of G(s) is defined as the least common denominator of all minors of G(s). The degree of the characteristic polynomial is called the McMillan degree.
Degree of Transfer Function Matrices (cont.)

Example
Consider

G1(s) = [1/(s+1)  1/(s+1); 1/(s+1)  1/(s+1)].

The minors of order 1 are all 1/(s+1) and the minor of order 2 is 0. The characteristic polynomial is δ1(s) = s + 1 and the McMillan degree is 1.
Degree of Transfer Function Matrices (cont.)

Consider

G2(s) = [2/(s+1)  1/(s+1); 1/(s+1)  1/(s+1)].

The minors of order 1 are 2/(s+1) and 1/(s+1), and the minor of order 2 is 1/(s+1)². So the characteristic polynomial is δ2(s) = (s+1)² and the McMillan degree is 2.
