IGNOU - Lecture Notes - Linear Algebra
1.1 Introduction
    Objectives
1.2 Quadratic Forms
1.3 Positive Definite and Nonnegative Definite Matrices
1.4 Idempotent Matrices
1.5 Cochran's Theorem
1.6 Singular Value Decomposition
1.7 Summary
1.8 Reference
1.9 Solutions / Answers
1.1 INTRODUCTION
In this unit, we recapitulate certain concepts and results which will be useful in the
study of multivariate statistical analysis. We start with the study of real symmetric
matrices and the associated quadratic forms. We define a classification for the
quadratic forms and develop a method for determining the class to which a given
quadratic form belongs. Positive definite and nonnegative definite matrices (which
we shall meet in Unit 2 as dispersion matrices) are very important for the study of
multivariate distributions and, in particular, the multivariate normal distribution. In
Section 1.3 we obtain some characterizations of positive definite and nonnegative
definite matrices and study some of their useful properties. We give a method of
computing a square root of a matrix, which plays an important role in transforming
correlated random variables to uncorrelated random variables, as we shall see in
Unit 2. Idempotent matrices and Cochran's theorem play a key role in the distribution
of quadratic forms in independent standard normal variables, particularly in
connection with conditions under which quadratic forms are distributed as independent
chi-squares. In Sections 1.4 and 1.5 we study the properties of idempotent matrices
and prove the algebraic version of Cochran's theorem respectively. The singular value
decomposition plays a very important role in developing the theory and studying the
properties of canonical correlations between two random vectors. In Section 1.6 we
study the singular value decomposition.
We use the following notation. Matrices are denoted by boldface capital letters like
A, B, C. Vectors are denoted by boldface italic lower case letters like x, y, z. Scalars
are denoted by lower case letters like a, b, c. The transpose, rank and trace of a
matrix A are denoted by A^t, rank A and tr(A) respectively.
R^n denotes the n-dimensional Euclidean space.
Objectives
After completing this unit, you should be able to:
- determine the definiteness of a given quadratic form;
- apply the spectral decomposition in the study of principal components;
- compute a triangular square root of a positive definite matrix;
- apply the properties of positive and nonnegative definite matrices to the problems
  that will be studied in Unit 2 on the multivariate normal distribution;
- apply Cochran's theorem to the distribution of quadratic forms in normal variables;
- apply the singular value decomposition in the development of canonical correlations.
Example 1. State, with reasons, whether each of the following matrices is a real
symmetric matrix:
(i) a 2 x 2 matrix some of whose elements involve the complex number i;
(ii) a matrix with two rows and three columns;
(iii) a square matrix whose (1, 2)th element is 3 and whose (2, 1)th element is 2;
(iv) a 2 x 2 real matrix whose (1, 2)th and (2, 1)th elements are both 4 and whose
(2, 2)th element is 5.
Solution:
(i) is not a real symmetric matrix because not all of its elements are real.
(ii) is not a real symmetric matrix because it is not a square matrix.
(iii) is not a real symmetric matrix because its (1, 2)th element 3 ≠ 2 = its (2, 1)th element.
(iv) is a real symmetric matrix because (a) it is a square matrix (of order 2 x 2), (b) all
its elements are real and (c) its (1, 2)th element 4 = 4 = its (2, 1)th element.
In Section 14.3 of MTE-02 we noticed that there is a unique real symmetric matrix A
associated with a given real quadratic form Q(x) in the sense that Q(x) = x^tAx. This
matrix is called the matrix of the quadratic form Q(x).
Example 2. Find the matrix of the quadratic form in three variables x1, x2 and x3:
Q(x) = 2x1x2 + 5x1x3 + 3x2x3.
Solution: Since there are three variables x1, x2 and x3, x = (x1, x2, x3)^t. Let A be the
symmetric matrix such that Q(x) = x^tAx. Then A is of order 3 x 3. Further a21 = a12 =
(1/2)(coefficient of x1x2) = 1. In general, whenever i ≠ j, aij = aji = (1/2)(coefficient of
xixj). Also aii = the coefficient of xi^2, i = 1, 2, 3. Thus
A = [0    1    2.5]
    [1    0    1.5]
    [2.5  1.5  0  ]
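The coefficient rule above lends itself to a quick numerical check. The following is a
minimal sketch (assuming NumPy is available): it builds the matrix of the quadratic form
of Example 2, as read here, and verifies that x^tAx reproduces Q(x) at an arbitrary point.

```python
import numpy as np

# Matrix of Q(x) = 2*x1*x2 + 5*x1*x3 + 3*x2*x3:
# a_ii = coefficient of xi^2 (here 0), a_ij = half the coefficient of xi*xj.
A = np.array([[0.0, 1.0, 2.5],
              [1.0, 0.0, 1.5],
              [2.5, 1.5, 0.0]])

def Q(x):
    # the quadratic form evaluated directly from its coefficients
    return 2*x[0]*x[1] + 5*x[0]*x[2] + 3*x[1]*x[2]

x = np.array([1.0, -2.0, 3.0])
print(x @ A @ x, Q(x))   # the two values agree
```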
Example 3. Determine the definiteness of the quadratic forms (i) x1^2 − x2^2,
(ii) x1^2 + x2^2, (iii) x1^2 + x2^2 + 2x4^2 (in the four variables x1, x2, x3 and x4),
(iv) −x1^2 − x2^2 and (v) −x1^2 − x2^2 − 2x4^2.
Solution:
(i) x1^2 − x2^2 = 1 if x1 = 1 and x2 = 0. Again, x1^2 − x2^2 = −1 if x1 = 0 and
x2 = 1. Thus it takes both positive and negative values, and so this quadratic form is
indefinite. Its matrix is
[1  0]
[0 −1]
(ii) x1^2 + x2^2 > 0 whenever at least one of x1 and x2 is not zero. Hence this quadratic
form is positive definite. Its matrix is
[1 0]
[0 1]
the identity matrix.
(iii) Q(x) = x1^2 + x2^2 + 2x4^2 ≥ 0 for all values of x1, x2, x3 and x4. However, for
x3 = 1 and x1 = x2 = x4 = 0, the value of x1^2 + x2^2 + 2x4^2 = 0. Thus there is a
nonnull vector, namely (0, 0, 1, 0)^t, at which the quadratic form vanishes, and so it is
positive semidefinite. The matrix of the quadratic form is
[1 0 0 0]
[0 1 0 0]
[0 0 0 0]
[0 0 0 2]
We leave it to you to show that the quadratic forms in (iv) and (v) are negative
definite and negative semidefinite respectively. (You can use the quadratic forms in
(ii) and (iii) to arrive at the above conclusions and in writing down the matrices of the
quadratic forms in (iv) and (v).)
In the above example, we considered quadratic forms whose matrices are diagonal
matrices. Here it is easy to identify the definiteness of the quadratic form. In fact, if
Q(x) = Σ (i = 1 to n) λi xi^2,
then Q(x) is p.d., p.s.d., n.d., n.s.d. or indefinite according as λi > 0 for all i; λi ≥ 0
for all i and λj = 0 for some j; λi < 0 for all i; λi ≤ 0 for all i and λj = 0 for some j;
or λi > 0 for some i and λj < 0 for some j, respectively.
What if we have a quadratic form Q(x) = 2x1^2 + 3x1x2 + x2^2, or
Q(x) = 2x1^2 + x2^2 + x3^2 + 3x1x2 + 2x1x3 + 4x2x3? (Notice that the matrices of
these quadratic forms are
[2    1.5]
[1.5  1  ]
and
[2    1.5  1]
[1.5  1    2]
[1    2    1]
respectively, which are not diagonal matrices.)
In general, consider a quadratic form Q(x) = x^tAx where A is not a diagonal matrix.
How do we determine the definiteness of the quadratic form in such a case? The
following results will be useful towards that end.

It is easy to verify that
[1 0]
[0 T]
is an orthogonal matrix if T is an orthogonal matrix.

Our object is to determine the definiteness of a quadratic form Q(x), the matrix of
which is not necessarily diagonal. We shall now show that we can make an
orthogonal transformation of the variables (i.e. we can make a transformation y = Px
where P is an orthogonal matrix) such that under this transformation, the quadratic
form is transformed into a quadratic form Σ λi yi^2. Since we know how to
determine the definiteness of Σ λi yi^2, and since the definiteness of Σ λi yi^2 is the
same as that of Q(x), we thereby obtain the definiteness of Q(x).
If A is a real matrix, then it is not necessary that its eigen values are real. For
example, if
A = [0 −1]
    [1  0]
then the eigen values are i and −i. However, if A is real and symmetric then all its
eigen values are real, as shown below.
Theorem 2. Let A be a real symmetric matrix. Then all the eigen values of A are real
and all the eigen vectors of A can be chosen to be real.
Proof: Let λ + iμ be an eigen value of A and let the corresponding eigen vector be
x + iy, where λ and μ are real numbers and x and y are real vectors. Clearly at least
one of x and y is nonnull, as x + iy, being an eigen vector, is nonnull. Now,
A(x + iy) = (λ + iμ)(x + iy).
Equating the real parts on both sides and the imaginary parts on both sides of the
above equality, we get
Ax = λx − μy   (2.1)
Ay = λy + μx.   (2.2)
Premultiplying (2.1) by y^t and (2.2) by x^t, we get
y^tAx = λy^tx − μy^ty   (2.3)
x^tAy = λx^ty + μx^tx.   (2.4)
Since A is symmetric, y^tAx = x^tAy; also y^tx = x^ty. So, subtracting (2.4) from (2.3),
we get 0 = −μ(x^tx + y^ty). Since at least one of x and y is nonnull, x^tx + y^ty > 0,
and hence μ = 0. Thus every eigen value of A is real. Further, (2.1) and (2.2) now read
Ax = λx and Ay = λy, and at least one of x and y is a nonnull real vector; this gives a
real eigen vector corresponding to λ.
Theorem 3. Let A be a real symmetric matrix of order n x n. Then there exists an
orthogonal matrix P such that A = PΛP^t, where Λ is a diagonal matrix.
Proof: We prove the result by induction on the order of A. The result is trivial for
matrices of order 1 x 1. Assume that it holds for all real symmetric matrices of order
m x m, and let A be a real symmetric matrix of order (m+1) x (m+1). Let λ1 be an
eigen value of A (real, by Theorem 2) and let x1 be a corresponding real eigen vector
of unit norm. Extend x1 to an orthonormal basis x1, ..., xm+1 of R^(m+1) and write
R = (x1 : ... : xm+1), an orthogonal matrix. Then
AR = A(x1 : ... : xm+1) = (x1 : ... : xm+1) [λ1  b12^t]
                                            [0   B22  ]
where 0, b12^t and B22 are of order m x 1, 1 x m and m x m respectively. [This is so
because Ax2, ..., Axm+1, being vectors in R^(m+1), and x1, ..., xm+1 forming a basis
of R^(m+1), each Axi is a linear combination of x1, ..., xm+1; and Ax1 = λ1x1.]
So
R^tAR = [λ1  b12^t]
        [0   B22  ]
Since R^tAR is real and symmetric, it follows that b12 = 0 and B22 is an m x m real
symmetric matrix. Thus
R^tAR = [λ1  0  ]
        [0   B22]
By the induction hypothesis there exists an orthogonal matrix S1 of order m x m such
that B22 = S1Λ1S1^t, where Λ1 is diagonal. Writing
S = R [1  0 ]
      [0  S1]
we notice that S is an orthogonal matrix and
S^tAS = [λ1  0 ] = Λ, say,
        [0   Λ1]
or A = SΛS^t.
Writing P for the orthogonal matrix obtained above and Λ = diag(λ1, ..., λn), we
have A = PΛP^t, or
AP = PΛ,
or
Api = λipi, i = 1, ..., n,
where p1, ..., pn are the columns of P.
Since pi is a vector in an orthonormal basis, pi is (a nonnull vector) of unit norm.
Hence λi is an eigen value of A and pi is an eigen vector of A corresponding to the
eigen value λi. Thus the diagonal elements of Λ are the eigen values of A and the
columns of P are orthonormal eigen vectors of A. Further,
A = PΛP^t = (p1 : ... : pn) diag(λ1, ..., λn) (p1 : ... : pn)^t
  = λ1p1p1^t + ... + λnpnpn^t.
Write Ei = pipi^t, i = 1, ..., n. Then
EiEj = Ei if i = j and 0 otherwise,
and rank Ei = rank pipi^t = rank pi = 1. Thus A = λ1E1 + ... + λnEn, where
E1, ..., En are symmetric matrices of rank 1 such that EiEj = 0 whenever i ≠ j. The
set {λ1, ..., λn} is called the spectrum of A. Since the decomposition mentioned
above involves the spectrum and the eigen vectors, it is called a spectral
decomposition of A.
For example, let
A = [4 1]
    [1 2]
Its characteristic equation is
(4 − λ)(2 − λ) − 1 = 0
or λ^2 − 6λ + 7 = 0.
So λ = (6 ± √(36 − 28))/2 = 3 ± √2.
Let u be an eigen vector corresponding to the eigen value 3 + √2. Then
[A − (3 + √2)I]u = 0
or
[1 − √2   1      ] [u1]   [0]
[1        −1 − √2] [u2] = [0]
so that u2 = (√2 − 1)u1. Thus the normalized eigen vector corresponding to the
eigen value 3 + √2 is
(1/√(4 − 2√2)) (1, √2 − 1)^t.
Similarly, the normalized eigen vector corresponding to the eigen value 3 − √2 is
(1/√(4 + 2√2)) (1, −(1 + √2))^t.
Thus, with
P = [1/√(4 − 2√2)           1/√(4 + 2√2)        ]
    [(√2 − 1)/√(4 − 2√2)    −(1 + √2)/√(4 + 2√2)]
and Λ = diag(3 + √2, 3 − √2), we have the spectral decomposition A = PΛP^t.
Using theorem 3, we can determine the definiteness of a quadratic form. Consider the
quadratic form Q(x) = x^tAx. Let A = PΛP^t be a spectral decomposition of A. Then
Q(x) = x^tPΛP^tx = y^tΛy where y = P^tx. Since P is nonsingular (in fact, orthogonal),
the definiteness of Q(x) is the same as the definiteness of y^tΛy. The definiteness of
y^tΛy is determined by the diagonal elements λ1, ..., λn of Λ, which are the eigen
values of A. Thus the definiteness of Q(x) is determined by the signs of the eigen
values of A.
Because of the one-one correspondence between real symmetric matrices and
quadratic forms, we call a real symmetric matrix A positive definite, positive
semidefinite, negative definite, negative semidefinite or indefinite according as the
quadratic form x^tAx is positive definite, positive semidefinite, negative definite,
negative semidefinite or indefinite respectively.
Example 4. Determine the definiteness of the quadratic forms (i) 2x1^2 + x1x2 + x2^2
and (ii) x1^2 + x2^2 + x3^2 + 3x1x2 + 3x1x3 + 3x2x3.
Solution: (i) The matrix of the quadratic form is
A = [2    0.5]
    [0.5  1  ]
Its characteristic equation is (2 − λ)(1 − λ) − 0.25 = 0, or λ^2 − 3λ + 1.75 = 0, so
λ = (3 ± √2)/2. Both the eigen values are positive, so the quadratic form is positive
definite.
(ii) The matrix of the quadratic form is
A = [1    1.5  1.5]
    [1.5  1    1.5]
    [1.5  1.5  1  ]
For x = (1, −1, 0)^t, the value of the quadratic form is 1 + 1 − 3 = −1 < 0, so A has a
negative eigen value. On the other hand, the sum of the eigen values, which is the
same as the trace of A, is 3. Hence there must be at least one positive eigen value of
A. So the quadratic form is indefinite.
E2. Show that if a real symmetric matrix A is pd (psd), then all its diagonal elements
are positive (nonnegative).
E3. Determine the definiteness of the following quadratic forms:
(i) x1^2 + 5x1x2 + 7x2^2, (ii) ..., (iii) ...
E4. Let
A = [2 1]
    [1 2]
Compute A^100.
Theorem 4. A real symmetric matrix A is positive definite if and only if there exists a
nonsingular matrix B such that A = BB^t.
Proof: If part. Let A = BB^t with B nonsingular, and let x ≠ 0. Since B is nonsingular,
so is B^t. Let, if possible, y = B^tx = 0. Then x = (B^t)^(-1)y = 0. Since x ≠ 0, there is
a contradiction. So y ≠ 0. Hence x^tAx = y^ty > 0. The choice of x being arbitrary, it
follows that A is positive definite.
Only if part. Let A be positive definite. Then all its eigen values are strictly positive.
Let A = PΛP^t be a spectral decomposition of A. Let √λ1, ..., √λn be the positive
square roots of λ1, ..., λn respectively, and write Λ^(1/2) = diag(√λ1, ..., √λn).
Then C = PΛ^(1/2)P^t is symmetric and nonsingular, and
CC^t = PΛ^(1/2)P^tPΛ^(1/2)P^t = PΛP^t = A.
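The construction C = PΛ^(1/2)P^t in the only if part can be sketched directly
(assuming NumPy; `sym_sqrt` is our name for the helper, not one from the notes):

```python
import numpy as np

def sym_sqrt(A):
    """Symmetric square root C = P diag(sqrt(lam)) P^t of a pd matrix A."""
    lam, P = np.linalg.eigh(A)
    if not np.all(lam > 0):
        raise ValueError("A must be positive definite")
    return P @ np.diag(np.sqrt(lam)) @ P.T

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
C = sym_sqrt(A)
print(C)
```

Since C is symmetric, CC^t = C^2 = A, which is exactly what the theorem requires of a
square root.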
A matrix B such that A = BB^t is called a square root of A. Given A, B is not unique,
since A = (BP)(BP)^t where P is any orthogonal matrix. In theorem 4 we gave a method
of computing a square root if we know the spectral decomposition of A. However,
obtaining a spectral decomposition is not easy in general. We give below a method of
obtaining a square root of a positive definite matrix. Let us start with an example.
Example 6. Obtain a lower triangular square root of
A = [4 1 2]
    [1 3 1]
    [2 1 5]
Solution. We shall obtain a lower triangular matrix B such that A = BB^t. Write
B = [b11  0    0  ]
    [b21  b22  0  ]
    [b31  b32  b33]
We shall solve for bij, j = 1, ..., i, i = 1, 2, 3 such that A = BB^t. Write
[4 1 2]   [b11  0    0  ] [b11  b21  b31]
[1 3 1] = [b21  b22  0  ] [0    b22  b32]
[2 1 5]   [b31  b32  b33] [0    0    b33]
Comparing the elements on both sides:
a11 = 4 = b11^2, so b11 = 2;
a12 = 1 = b11b21, so b21 = 1/2;
a13 = 2 = b11b31, so b31 = 1;
a22 = 3 = b21^2 + b22^2, or b22^2 = 3 − 1/4 = 11/4, so b22 = √11/2;
a23 = 1 = b21b31 + b22b32, so b32 = (1 − 1/2)/(√11/2) = 1/√11;
a33 = 5 = b31^2 + b32^2 + b33^2, or b33^2 = 5 − 1 − 1/11 = 43/11, so b33 = √(43/11).
Thus
B = [2    0      0       ]
    [1/2  √11/2  0       ]
    [1    1/√11  √(43/11)]
is a square root of A.
For the given matrix A in example 6, we could obtain a lower triangular matrix B
such that A = BB^t. Can we always do this? Let us examine how we went about
solving for the elements of B. First we solved for the first column of B, then for the
second column, and so on. Also observe that each time we just had to solve one
equation in one unknown to obtain an element of B. Could there have been some
hitch? What if the computed value for b22^2, or later for b33^2, turns out to be
negative? If it happens to be so, we would not be able to solve for B. It can be shown
(the proof is beyond the scope of the present notes) that if A is positive definite then
the above situation never arises. (For a proof, see Rao and Bhimasankaram (2000),
pages 358-359.) There is also another method of obtaining a triangular square root
using elementary row operations. (See Rao and Bhimasankaram (2000), pages 361-363.)
A square root of a positive definite matrix is useful in transforming correlated random
variables to uncorrelated random variables, as we shall see in Unit 2.
E5. Compute a lower triangular square root in each of the following cases.
(i) A = [4 1]
        [1 2]
(ii) A = [9 3 3]
         [3 5 1]
         [3 1 6]
Example 7. Let
A = [A11    A12]
    [A12^t  A22]
be an n x n positive definite matrix, where A11 and A22 are square matrices of order
r x r and (n−r) x (n−r) respectively for some r (1 ≤ r ≤ n−1). Show that A11 is
positive definite.
Solution: Let x be a nonnull vector of order r x 1. Then
x^tA11x = (x^t : 0^t) [A11    A12] [x]
                      [A12^t  A22] [0] > 0,
since (x^t : 0^t)^t is a nonnull vector of order n x 1 and A is pd. Hence A11 is pd.
Let
A = [a11  a12  ...  a1n]
    [a12  a22  ...  a2n]
    [ .    .   ...   . ]
    [a1n  a2n  ...  ann]
be a pd matrix. Write
Ai = [a11  ...  a1i]
     [ .   ...   . ]
     [a1i  ...  aii], i = 1, ..., n.
The matrices Ai, i = 1, ..., n are called the leading principal submatrices of A.
Combining examples 7 and 8, we have |Ai| > 0 for i = 1, ..., n if A is p.d. Is the
converse true? This leads us to the following theorem.
Theorem 5. Let A be a real symmetric matrix of order n x n. Let Ai, i = 1, ..., n be
as defined above. Then A is positive definite if and only if |Ai| > 0 for i = 1, ..., n.
Proof: The only if part follows from examples 7 and 8. For the proof of the if part,
see Rao and Bhimasankaram (2000), page 341.
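Theorem 5 gives a finite test for positive definiteness. A minimal sketch of the
leading-principal-minor check (the helper names are ours):

```python
import numpy as np

def leading_minors(A):
    """Determinants |A_1|, ..., |A_n| of the leading principal submatrices."""
    return [np.linalg.det(A[:i, :i]) for i in range(1, A.shape[0] + 1)]

def is_pd(A):
    # Theorem 5: a real symmetric A is pd iff every leading minor is positive
    return all(d > 0 for d in leading_minors(A))

A = np.array([[4.0, 1.0], [1.0, 2.0]])   # minors 4 and 7
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # minors 1 and -3
print(is_pd(A), is_pd(B))
```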
Example 9. Let A be symmetric positive definite. Show that RAR^t is pd where R is
any nonsingular matrix.
Solution. Let A = BB^t where B is nonsingular. Then RAR^t = RBB^tR^t = CC^t where
C = RB. Further, C is nonsingular since R and B are nonsingular. Hence RAR^t is pd.
Example 10. Show that a symmetric matrix A is positive definite if RAR^t is pd for
some nonsingular matrix R.
Solution. Let x ≠ 0 and consider
x^tAx = x^tR^(-1)(RAR^t)(R^t)^(-1)x = y^t(RAR^t)y,
where y = (R^t)^(-1)x ≠ 0 since x ≠ 0. Hence x^tAx = y^t(RAR^t)y > 0, since y ≠ 0
and RAR^t is pd. Hence A is pd.
E6. Let
A = [C 0]
    [0 D]
be pd, where C and D are square. Show that C and D are pd.
E7. Let
A = [A11    A12]
    [A12^t  A22]
where A11 and A22 are square. Show that if A is pd, then A22 is pd.
E8. Let A be as in E7. Show that if A is nnd, then A11 and A22 are nnd.
Theorem 6. Let
A = [A11    A12]
    [A12^t  A22]
be a partition of A where A11 and A22 are square. Then A is pd if and only if A11 and
A22 − A12^tA11^(-1)A12 are pd.
Proof: For the if part, A11 is pd by hypothesis. For the only if part, A is pd and hence
A11 is pd by theorem 5. In either case A11^(-1) exists, and it is easy to see that
A = [A11    A12]  =  R [A11  0                       ] R^t,
    [A12^t  A22]       [0    A22 − A12^tA11^(-1)A12  ]
where
R = [I              0]
    [A12^tA11^(-1)  I]
is nonsingular, with
R^t = [I  A11^(-1)A12]
      [0  I          ]
The result now follows from examples 9 and 10, together with the observation that a
block diagonal matrix with pd diagonal blocks is pd.
We promised in the beginning that we shall give an easy way to construct pd matrices
and orthogonal matrices. We shall do so now.
Theorem 7. Let A be a symmetric matrix of order n x n with positive diagonal
elements such that aii > Σ (j ≠ i) |aij|, i = 1, ..., n. Then A is pd.
Theorem 8. Let A be a psd matrix and let aii = 0 for some i. Then aij = 0 for all j.
Proof: Let j ≠ i and let ei and ej denote the ith and jth columns of the identity matrix.
For a real number t, take x = tei + ej. Then
x^tAx = t^2 ei^tAei + ej^tAej + 2t ei^tAej = t^2 aii + ajj + 2t aij = ajj + 2t aij,
since aii = 0. If aij ≠ 0, choose t of sign opposite to that of aij with
|t| > (ajj + 1)/(2|aij|); then x^tAx < −1 < 0, which is a contradiction since A is psd.
Hence aij = 0. The choice of j being arbitrary, the result follows.
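Theorem 7 really does make it easy to manufacture pd matrices: start from any
symmetric matrix and inflate the diagonal until each row is dominant. A sketch (the
random matrix and seed are our illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                     # symmetric, but not necessarily pd

# Force a_ii > sum over j != i of |a_ij|, with a positive diagonal (theorem 7)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)

lam = np.linalg.eigvalsh(A)
print(lam)    # all eigen values are positive, so A is pd
```

The print comment is safe here: by Gershgorin's circle theorem every eigen value lies
within the row radius of a diagonal element, which is exactly what diagonal dominance
controls.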
E9. Let A be an nnd matrix and let x be a vector such that x^tAx = 0. Show that
Ax = 0.
E10. Let 1 denote the n x 1 vector with each element equal to 1, where n is the order
of the matrix below. Show that (1 − ρ)I + ρ11^t is pd if and only if
−1/(n − 1) < ρ < 1.
E11.
E12. Let A and B be nnd matrices of the same order. Show that (i) A + B is nnd;
(ii) the column space of A is a subspace of the column space of A + B.
1.4 IDEMPOTENT MATRICES
A square matrix A is said to be idempotent if A^2 = A.
Theorem 9. Let A be an idempotent matrix of order n x n with rank r. Then there
exists a nonsingular matrix P such that
A = P [Ir  0] P^(-1)
      [0   0]
Proof: Since AA = A, we have
Aai = ai for every column ai of A.   (1.4.1)
Again, A(I−A) = 0. Hence the column space of (I−A) is a subspace of the null space N(A) of
A. We know that the dimension of N(A) = n − rank A = n − r. So, the rank of I−A is at most
n − r. On the other hand, since I = A+(I−A), n = rank I ≤ rank A + rank (I−A).
Hence rank (I−A) ≥ n − rank A. Thus rank (I−A) = n − rank A. This, coupled with the
fact that the column space of (I−A) is contained in N(A), shows that the column space of (I−A) is the
same as the null space of A. Let ai1, ..., air be linearly independent columns of A,
and let el1 − al1, ..., el(n−r) − al(n−r) be linearly independent
columns of I−A (the columns of I−A being of the form ek − ak, where ek is the kth
column of I). Then
A(elk − alk) = 0, k = 1, ..., n−r.   (1.4.2)
Write P = (ai1 : ... : air : el1 − al1 : ... : el(n−r) − al(n−r)). It can be verified that
the n columns of P are linearly independent, so P is nonsingular. Now, using (1.4.1)
and (1.4.2),
AP = A(ai1 : ... : air : el1 − al1 : ... : el(n−r) − al(n−r))
   = (ai1 : ... : air : 0 : ... : 0)
   = P [Ir  0]
       [0   0]
Thus we have A = P [Ir 0; 0 0] P^(-1).
Let A be an idempotent matrix of order n x n with rank r. From theorem 9, the
following statements are clear.
(a) A is similar to a diagonal matrix.
(b) A has at most two distinct eigen values, namely 1 and 0; 1 occurs with algebraic
multiplicity r and 0 with algebraic multiplicity n − r.
Finding the rank of a matrix is in general not very easy. However, it is quite easy for
idempotent matrices. We start with a definition.
Definition. The trace of a square matrix A of order n x n is defined as
tr(A) = Σ (i = 1 to n) aii,
the sum of its diagonal elements.
Example 12. Let A and B be square matrices of the same order. Show that (i)
tr(cA) = c.tr(A) when c is a real number; (ii) tr(A+B) = tr(A) + tr(B).
Solution: Left as an exercise.
Example 13. Let A and B be matrices of order m x n and n x m respectively. Show
that tr(AB) = tr(BA).
Solution: The (i, i)th element of AB is Σ (j = 1 to n) aij bji. Hence
tr(AB) = Σ (i = 1 to m) Σ (j = 1 to n) aij bji = Σ (j = 1 to n) Σ (i = 1 to m) bji aij
       = Σ (j = 1 to n) (the (j, j)th element of BA) = tr(BA).
We are now ready to prove
Theorem 10. Let A be an idempotent matrix of order n x n. Then rank A = tr(A).
Proof: The proof is trivial if rank A is 0 or n. Let rank A = r where 1 ≤ r ≤ n−1. Then
by theorem 9, there exists a nonsingular matrix P such that A = P [Ir 0; 0 0] P^(-1).
Now, using example 13,
tr(A) = tr(P [Ir 0; 0 0] P^(-1)) = tr([Ir 0; 0 0] P^(-1)P) = tr [Ir 0; 0 0] = r = rank A.
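A familiar statistical instance of theorem 10 is the "hat" matrix of least squares,
H = X(X^tX)^(-1)X^t, which is idempotent; its rank can then simply be read off as its
trace. A sketch (the design matrix X is our own illustration):

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])

H = X @ np.linalg.inv(X.T @ X) @ X.T    # orthogonal projector onto C(X)
assert np.allclose(H @ H, H)            # idempotent

# theorem 10: rank H = tr(H); here both equal the number of columns of X
print(np.linalg.matrix_rank(H), np.trace(H))
```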
Theorem 11. Let A be a symmetric idempotent matrix. Then A is nonnegative
definite.
Proof: Left as an exercise.
Example 14. Let A and B be idempotent matrices of the same order. Then show that
A+B is idempotent if and only if AB = BA = 0.
Solution: The if part is trivial. For the only if part, let A, B and A+B be idempotent.
Then A+B = (A+B)(A+B) = A^2 + B^2 + AB + BA = A + B + AB + BA. So, AB + BA =
0. Premultiplying by A, we get AB + ABA = 0. Now postmultiplying the previous
equality by A, we get ABA + ABA = 0, or ABA = 0. Hence AB = 0 and, as a
consequence, BA = 0.
E13.
E14. Let A and B be idempotent matrices. Show that
[A 0]
[0 B]
is also idempotent.
E15. Show that if A and B are idempotent and the column space of A is contained
in the column space of B, then BA = A.
Cochran's theorem plays a key role in the distribution of quadratic forms in normal
variables. We now prove the algebraic version of this result. In the next unit we shall
prove the statistical version.
Theorem 13. Let A1, A2, ..., Ak be real symmetric matrices of order n x n such that
A1 + A2 + ... + Ak = I. The following are equivalent:
(a) Ai is idempotent, i = 1, ..., k;
(b) Σ (i = 1 to k) rank Ai = n;
(c) AiAj = 0 whenever i ≠ j.
Proof: (a) ⇒ (b): If each Ai is idempotent, then rank Ai = tr(Ai) by theorem 10, so
Σ rank Ai = Σ tr(Ai) = tr(A1 + ... + Ak) = tr(I) = n.
(b) ⇒ (c): Let rank Ai = ri, so that Σ (i = 1 to k) ri = n. Since Ai is a real symmetric
matrix, there exists a matrix Pi of order n x ri such that Ai = PiΔiPi^t, Pi^tPi = I(ri)
and Δi is a real nonsingular diagonal matrix, i = 1, ..., k. So
I = A1 + ... + Ak = Σ (i = 1 to k) PiΔiPi^t = PΔP^t,
where P = (P1 : ... : Pk) and Δ = diag(Δ1, Δ2, ..., Δk). The number of columns in P
is Σri = n, so P is a square matrix, and since PΔP^t = I, P is nonsingular. Hence
Δ^(-1) = P^tP, so that P^tP is a diagonal matrix; in particular, Pi^tPj = 0 whenever
i ≠ j. Therefore
AiAj = PiΔi(Pi^tPj)ΔjPj^t = 0 whenever i ≠ j.
(c) ⇒ (a): Suppose AiAj = 0 whenever i ≠ j. Then
Ai = AiI = Ai(A1 + ... + Ak) = Ai^2,
so Ai is idempotent, i = 1, ..., k.
We now prove an algebraic version of another useful result in connection with the
distribution of quadratic forms in normal variables.
Theorem 14. Let A and B be symmetric idempotent matrices and let B−A be
nonnegative definite. Then B−A is also a symmetric idempotent matrix.
Proof: Since A is symmetric idempotent, it is nnd. Since B−A is nnd and
B = A + (B−A), the column space of A is contained in the column space of B (see
E12). So BA = A (by E15) and, taking transposes, AB = A. Then (B−A)A = 0 =
A(B−A). Since B(I−B) = (I−B)B = 0, we also get A(I−B) = (I−B)A = 0 and
(I−B)(B−A) = 0 = (B−A)(I−B). Now A + (B−A) + (I−B) = I. By (c) ⇒ (a) of
theorem 13, it follows that B−A is idempotent.
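The simplest illustration of theorem 13 is the decomposition of the identity that
underlies the analysis of variance: the projector onto the vector of ones and its
complement. A sketch:

```python
import numpy as np

n = 5
A1 = np.ones((n, n)) / n        # projector onto the span of the ones vector
A2 = np.eye(n) - A1             # its complement; A1 + A2 = I

# the three equivalent conditions of theorem 13 all hold here:
idem = np.allclose(A1 @ A1, A1) and np.allclose(A2 @ A2, A2)   # (a)
ranks = np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2)  # (b): equals n
orth = np.allclose(A1 @ A2, 0)                                 # (c)
print(idem, ranks, orth)
```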
1.6 SINGULAR VALUE DECOMPOSITION
Theorem 15. Let A be an m x n real matrix of rank r. Then A can be written as
A = U [Δ  0] V^t,
      [0  0]
where U and V are orthogonal matrices of orders m x m and n x n respectively and Δ
is an r x r positive definite diagonal matrix.
Proof: Notice that AA^t and A^tA are nonnegative definite matrices (Why? See theorem
4). Let u1, u2, ..., um be orthonormal eigen vectors of AA^t corresponding to the eigen
values λ1 ≥ λ2 ≥ ... ≥ λr > λ(r+1) = ... = λm = 0. So AA^tui = λiui, i = 1, ..., m. Write
vi = (1/√λi) A^tui, i = 1, ..., r.
Then, for i, j = 1, ..., r,
vi^tvj = (1/√(λiλj)) ui^tAA^tuj = (λj/√(λiλj)) ui^tuj = 1 if i = j and 0 if i ≠ j.
Also, for i > r, ui^tAA^tui = λi = 0, so A^tui = 0. Now
A = (u1u1^t + ... + umum^t)A = Σ (i = 1 to r) ui(A^tui)^t = Σ (i = 1 to r) √λi uivi^t.
Denote δi = √λi, i = 1, ..., r, and Δ = diag(δ1, ..., δr). Extending v1, ..., vr to an
orthonormal basis v1, ..., vn of R^n and writing U = (u1 : ... : um) and
V = (v1 : ... : vn), it follows that
A = Σ (i = 1 to r) δi uivi^t = U [Δ  0] V^t.
                                 [0  0]
We shall now interpret the columns of U and V and the diagonal elements of Δ in the
above form. Let A = U [Δ 0; 0 0] V^t, where U and V are orthogonal and Δ is a positive
definite diagonal matrix. Then the rank of A is the same as the rank of Δ, which in turn
is the number of rows in Δ. Now
AA^t = U [Δ  0] V^tV [Δ  0]^t U^t = U [Δ^2  0] U^t.
         [0  0]      [0  0]           [0    0]
Thus U [Δ^2 0; 0 0] U^t is a spectral decomposition of AA^t. Hence the diagonal
elements of Δ^2 are the nonzero eigen values of AA^t and the columns of U are
orthonormal eigen vectors of AA^t. To be more specific,
AA^tui = δi^2 ui for i = 1, ..., r and AA^tui = 0 for i = r+1, ..., m.
Again, A^tA = V [Δ^2 0; 0 0] V^t, which is a spectral decomposition of A^tA. Hence
A^tAvi = δi^2 vi for i = 1, ..., r and A^tAvi = 0 for i = r+1, ..., n.
Thus the diagonal elements of Δ and the columns of U and V relate to the eigen
values and eigen vectors of AA^t and A^tA. The diagonal elements of Δ are called the
singular values, and the columns of U and V are called the singular vectors, of A.
The decomposition A = U [Δ 0; 0 0] V^t is called a singular value decomposition of A.
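`numpy.linalg.svd` returns exactly the quantities described above. A sketch confirming
the relations between the singular values/vectors and AA^t (the random test matrix and
seed are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A)          # A = U [diag(s) 0] V^t, s descending
assert np.allclose(U @ U.T, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(4))

# squared singular values = nonzero eigen values of A A^t (and of A^t A)
lam = np.linalg.eigvalsh(A @ A.T)[::-1]       # descending
print(np.sqrt(np.abs(lam)))                    # equals s

# columns of U are eigen vectors of A A^t
for i in range(3):
    assert np.allclose(A @ A.T @ U[:, i], s[i]**2 * U[:, i])
```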
Example. Let A have the singular value decomposition A = U [Δ 0; 0 0] V^t, where
U = (1/3) [1   2   2]
          [2   1  −2]
          [2  −2   1]
V = (1/2) [1   1   1   1]
          [1  −1   1  −1]
          [1   1  −1  −1]
          [1  −1  −1   1]
and
[Δ  0]   [2 0 0 0]
[0  0] = [0 1 0 0]
         [0 0 0 0]
What are the eigen values of AA^t and A^tA? Identify the corresponding eigen
vectors. What is the rank of A?
Solution: Here Δ = diag(2, 1), so the singular values of A are 2 and 1. Hence the
eigen values of AA^t are 2^2 = 4, 1^2 = 1 and 0, and the corresponding eigen vectors
are the first, second and third columns respectively of U, namely
(1/3)(1, 2, 2)^t, (1/3)(2, 1, −2)^t and (1/3)(2, −2, 1)^t.
The rank of A equals the number of nonzero singular values, namely 2.
We leave it as an exercise to identify the eigen values and eigen vectors of A^tA.
1.7 SUMMARY
In this unit we have covered the following points:
1. Real symmetric matrices and the quadratic forms associated with them.
2. The classification of quadratic forms (and of real symmetric matrices) as positive
definite, positive semidefinite, negative definite, negative semidefinite or indefinite.
3. The spectral decomposition A = PΛP^t of a real symmetric matrix, and the
determination of definiteness through the signs of the eigen values.
4. Characterizations of positive definite matrices: A = BB^t with B nonsingular, and
positivity of the leading principal minors.
5. Square roots of a positive definite matrix, including the computation of a lower
triangular square root.
6. Properties of positive definite and nonnegative definite partitioned matrices.
7. Idempotent matrices; for an idempotent matrix, rank A = tr(A).
8. The algebraic version of Cochran's theorem.
9. The singular value decomposition and its relation to the eigen values and eigen
vectors of AA^t and A^tA.
1.8 REFERENCE
Ramachandra Rao, A. and Bhimasankaram, P. (2000). Linear Algebra, 2nd Edition,
Hindustan Book Agency, New Delhi.
1.9 SOLUTIONS / ANSWERS
E1.
(i) Coefficients of x1^2, x2^2 and x1x2 are respectively 1, −1 and 0. So the matrix
of the quadratic form is
[1   0]
[0  −1]
(ii) Coefficients of x1^2, x2^2 and x1x2 are respectively 2, 5 and 3. So the matrix of
the quadratic form is
[2    1.5]
[1.5  5  ]
(iii) Coefficients of x1^2, x2^2, x3^2, x1x2, x1x3 and x2x3 are respectively 0, 0, 0, 3,
−4 and 5. So the matrix of the quadratic form 3x1x2 + 5x2x3 − 4x1x3 is
[0    1.5  −2 ]
[1.5  0    2.5]
[−2   2.5  0  ]
(iv) Coefficients of x1^2, x2^2, x3^2, x4^2, x1x2, x1x3, x1x4, x2x3, x2x4 and x3x4 are
respectively 1, 1, 0, 1, 0, 0, 0, 0, 0 and 0. So the matrix of the quadratic form
x1^2 + x2^2 + x4^2 is
[1 0 0 0]
[0 1 0 0]
[0 0 0 0]
[0 0 0 1]
E2.
Suppose aii < 0. Let ei denote the ith column of the identity matrix. Then
ei^tAei = aii < 0. Hence A cannot be pd or psd.
E3.
(i) The matrix of the quadratic form is
[1    2.5]
[2.5  7  ]
Its eigen values are (8 + √(64 − 3))/2 and (8 − √(64 − 3))/2, i.e. (8 ± √61)/2,
which are both positive. So the quadratic form is positive definite.
(ii) The characteristic roots of the matrix of the quadratic form are 4 and
(5 ± √(25 − 12))/2 = (5 ± √13)/2. Thus all the three roots are positive, and the
quadratic form is positive definite.
E4. The characteristic equation of A is (2 − λ)^2 − 1 = 0, so the eigen values are 3
and 1. For the eigen value 3, (A − 3I)x = 0 gives
−x1 + x2 = 0
x1 − x2 = 0.
Thus x1 = x2. So the normalized eigen vector corresponding to the eigen value
3 is
(1/√2)(1, 1)^t.
Similarly, the normalized eigen vector corresponding to the eigen value 1 is
(1/√2)(1, −1)^t. Thus a spectral decomposition of A is
A = 3.(1/2)[1 1]  +  1.(1/2)[ 1  −1]
           [1 1]            [−1   1]
Since E1 = (1/2)[1 1; 1 1] and E2 = (1/2)[1 −1; −1 1] satisfy Ei^2 = Ei and
E1E2 = 0, we get
A^100 = 3^100.(1/2)[1 1]  +  (1/2)[ 1  −1]
                   [1 1]          [−1   1]
E5.
(i) Let
[4 1]   [b11  0  ] [b11  b21]
[1 2] = [b21  b22] [0    b22]
So b11^2 = 4, or b11 = 2; b11b21 = 1, or b21 = 1/2; b21^2 + b22^2 = 2, or
b22^2 = 2 − 1/4 = 7/4, so b22 = √7/2. Thus
B = [2    0   ]
    [1/2  √7/2]
(ii) Let
[9 3 3]   [b11  0    0  ] [b11  b21  b31]
[3 5 1] = [b21  b22  0  ] [0    b22  b32]
[3 1 6]   [b31  b32  b33] [0    0    b33]
So b11^2 = 9, or b11 = 3; b11b21 = 3, or b21 = 1; b11b31 = 3, or b31 = 1;
b21^2 + b22^2 = 5, or b22^2 = 5 − 1 = 4, so b22 = 2; b21b31 + b22b32 = 1, or
b32 = (1 − 1)/2 = 0; b31^2 + b32^2 + b33^2 = 6, or b33^2 = 6 − 1 − 0 = 5, so
b33 = √5. Thus
B = [3  0  0 ]
    [1  2  0 ]
    [1  0  √5]
E6. Let
[C 0]
[0 D]
be pd. For x ≠ 0, consider
(x^t : 0^t) [C 0] [x]
            [0 D] [0] = x^tCx > 0.
Hence C is pd. Similarly, (0^t : y^t) [C 0; 0 D] (0^t : y^t)^t = y^tDy > 0 whenever
y ≠ 0, so D is pd.
E7. We know that
0 < (0^t : y^t) [A11    A12] [0]
                [A12^t  A22] [y] = y^tA22y
whenever y ≠ 0 (the zero vector being chosen so that the partitioned product is
conformable). Hence A22 is pd.
E9. Let A be an nnd matrix. Then there exists B such that A = BB^t. Hence 0 =
x^tAx = x^tBB^tx = (B^tx)^t(B^tx) implies that B^tx = 0. So Ax = BB^tx = 0.
E10. As will be shown in Section 1.6, the nonzero eigen values of AA^t and A^tA are
the same. Hence the nonzero eigen values of 11^t are the same as the eigen values
of the 1 x 1 matrix 1^t1 = n. So the eigen values of 11^t are n and 0 (the latter
repeated n − 1 times).
Let δ be an eigen value of 11^t and let the corresponding eigen vector be x.
Then ((1 − ρ)I + ρ11^t)x = (1 − ρ)x + ρδx = (1 − ρ + ρδ)x.
Thus the eigen values of (1 − ρ)I + ρ11^t are (1 − ρ) + ρn, i.e. 1 + (n − 1)ρ
(once), and 1 − ρ ((n − 1) times). Hence (1 − ρ)I + ρ11^t is pd if and only if
1 + (n − 1)ρ > 0 and 1 − ρ > 0, i.e.
−1/(n − 1) < ρ < 1.
E11. For instance,
[3    0.5]
[0.5  4  ]
is a pd matrix with the required property.
E12.
(i) x^t(A+B)x = x^tAx + x^tBx ≥ 0, since x^tAx and x^tBx are nonnegative. Hence
A+B is nnd.
(ii) Write A = CC^t and B = DD^t. Then A + B = (C : D)(C : D)^t, so the column
space of A + B is the same as the column space of (C : D), which contains the column
space of C, i.e. the column space of A.
E14. Since
[A 0]^2   [A^2  0  ]   [A 0]
[0 B]   = [0    B^2] = [0 B]
A and B being idempotent, the matrix [A 0; 0 B] is idempotent.
E15. Since the column space of A is contained in the column space of B, there is a
matrix C such that A = BC. Hence BA = B^2C = BC = A, B being idempotent.