
Linear Algebra

Min Yan

November 11, 2021


Contents

1 Vector Space 7
1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.1 Axioms of Vector Space . . . . . . . . . . . . . . . . . . . . . 7
1.1.2 Proof by Axiom . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Linear Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Linear Combination Expression . . . . . . . . . . . . . . . . . 12
1.2.2 Row Operation . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2.3 Row Echelon Form . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.4 Reduced Row Echelon Form . . . . . . . . . . . . . . . . . . . 19
1.3 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.3.1 Basis and Coordinate . . . . . . . . . . . . . . . . . . . . . . . 21
1.3.2 Spanning Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.3 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . 27
1.3.4 Minimal Spanning Set . . . . . . . . . . . . . . . . . . . . . . 31
1.3.5 Maximal Independent Set . . . . . . . . . . . . . . . . . . . . 33
1.3.6 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.3.7 Calculation of Coordinate . . . . . . . . . . . . . . . . . . . . 38

2 Linear Transformation 41
2.1 Linear Transformation and Matrix . . . . . . . . . . . . . . . . . . . 41
2.1.1 Linear Transformation of Linear Combination . . . . . . . . . 43
2.1.2 Linear Transformation between Euclidean Spaces . . . . . . . 45
2.1.3 Operation of Linear Transformation . . . . . . . . . . . . . . . 49
2.1.4 Matrix Operation . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.1.5 Elementary Matrix and LU-Decomposition . . . . . . . . . . . 56
2.2 Onto, One-to-one, and Inverse . . . . . . . . . . . . . . . . . . . . . . 59
2.2.1 Onto and One-to-one for Linear Transformation . . . . . . . . 60
2.2.2 Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2.3 Invertible Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.3 Matrix of General Linear Transformation . . . . . . . . . . . . . . . . 72
2.3.1 Matrix with Respect to Bases . . . . . . . . . . . . . . . . . . 72
2.3.2 Change of Basis . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.3.3 Similar Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 79


2.4 Dual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.4.1 Dual Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.4.2 Dual Linear Transformation . . . . . . . . . . . . . . . . . . . 84
2.4.3 Double Dual . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.4.4 Dual Pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

3 Subspace 91
3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.1 Subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.2 Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.1.3 Calculation of Extension to Basis . . . . . . . . . . . . . . . . 96
3.2 Range and Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.2.1 Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.2.2 Rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.2.3 Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.2.4 General Solution of Linear Equation . . . . . . . . . . . . . . 106
3.3 Sum of Subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.3.1 Sum and Direct Sum . . . . . . . . . . . . . . . . . . . . . . . 108
3.3.2 Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.3.3 Blocks of Linear Transformation . . . . . . . . . . . . . . . . . 115
3.4 Quotient Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4.1 Construction of the Quotient . . . . . . . . . . . . . . . . . . 118
3.4.2 Universal Property . . . . . . . . . . . . . . . . . . . . . . . . 121
3.4.3 Direct Summand . . . . . . . . . . . . . . . . . . . . . . . . . 124

4 Inner Product 127


4.1 Inner Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.1.2 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.1.3 Positive Definite Matrix . . . . . . . . . . . . . . . . . . . . . 133
4.2 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.2.1 Orthogonal Sum . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.2.2 Orthogonal Complement . . . . . . . . . . . . . . . . . . . . . 140
4.2.3 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . 142
4.2.4 Gram-Schmidt Process . . . . . . . . . . . . . . . . . . . . . . 145
4.2.5 Property of Orthogonal Projection . . . . . . . . . . . . . . . 147
4.3 Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.1 Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.2 Adjoint Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.3.3 Isometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.3.4 QR-Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 160

5 Determinant 163
5.1 Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.1.1 Multilinear and Alternating Function . . . . . . . . . . . . . . 164
5.1.2 Column Operation . . . . . . . . . . . . . . . . . . . . . . . . 165
5.1.3 Row Operation . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.1.4 Cofactor Expansion . . . . . . . . . . . . . . . . . . . . . . . . 169
5.1.5 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.2 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2.1 Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2.2 Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.2.3 Determinant of Linear Operator . . . . . . . . . . . . . . . . . 179
5.2.4 Geometric Axiom for Determinant . . . . . . . . . . . . . . . 180

6 General Linear Algebra 183


6.1 Complex Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.1.1 Complex Number . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.1.2 Complex Vector Space . . . . . . . . . . . . . . . . . . . . . . 185
6.1.3 Complex Linear Transformation . . . . . . . . . . . . . . . . . 188
6.1.4 Complexification and Conjugation . . . . . . . . . . . . . . . . 190
6.1.5 Conjugate Pair of Subspaces . . . . . . . . . . . . . . . . . . . 193
6.1.6 Complex Inner Product . . . . . . . . . . . . . . . . . . . . . 195
6.2 Field and Polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.2.1 Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.2.2 Vector Space over Field . . . . . . . . . . . . . . . . . . . . . 202
6.2.3 Polynomial over Field . . . . . . . . . . . . . . . . . . . . . . 204
6.2.4 Unique Factorisation . . . . . . . . . . . . . . . . . . . . . . . 208
6.2.5 Field Extension . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6.2.6 Trisection of Angle . . . . . . . . . . . . . . . . . . . . . . . . 212

7 Spectral Theory 217


7.1 Eigenspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
7.1.1 Invariant Subspace . . . . . . . . . . . . . . . . . . . . . . . . 218
7.1.2 Eigenspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
7.1.3 Characteristic Polynomial . . . . . . . . . . . . . . . . . . . . 224
7.1.4 Diagonalisation . . . . . . . . . . . . . . . . . . . . . . . . . . 227
7.1.5 Complex Eigenvalue of Real Operator . . . . . . . . . . . . . . 233
7.2 Orthogonal Diagonalisation . . . . . . . . . . . . . . . . . . . . . . . 237
7.2.1 Normal Operator . . . . . . . . . . . . . . . . . . . . . . . . . 237
7.2.2 Commutative ∗-Algebra . . . . . . . . . . . . . . . . . . . . . 238
7.2.3 Hermitian Operator . . . . . . . . . . . . . . . . . . . . . . . . 240
7.2.4 Unitary Operator . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.3 Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.3.1 Generalised Eigenspace . . . . . . . . . . . . . . . . . . . . . . 246

7.3.2 Nilpotent Operator . . . . . . . . . . . . . . . . . . . . . . . . 250


7.3.3 Jordan Canonical Form . . . . . . . . . . . . . . . . . . . . . . 255
7.3.4 Rational Canonical Form . . . . . . . . . . . . . . . . . . . . . 257

8 Tensor 263
8.1 Bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8.1.1 Bilinear Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8.1.2 Bilinear Function . . . . . . . . . . . . . . . . . . . . . . . . . 265
8.1.3 Quadratic Form . . . . . . . . . . . . . . . . . . . . . . . . . . 266
8.2 Hermitian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
8.2.1 Sesquilinear Function . . . . . . . . . . . . . . . . . . . . . . . 273
8.2.2 Hermitian Form . . . . . . . . . . . . . . . . . . . . . . . . . . 275
8.2.3 Completing the Square . . . . . . . . . . . . . . . . . . . . . . 278
8.2.4 Signature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
8.2.5 Positive Definite . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.3 Multilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8.4 Invariant of Linear Operator . . . . . . . . . . . . . . . . . . . . . . . 283
8.4.1 Symmetric Function . . . . . . . . . . . . . . . . . . . . . . . 284
Chapter 1

Vector Space

Linear structure is one of the most basic structures in mathematics. The key object
for linear structure is vector space, characterised by the operations of addition and
scalar multiplication. The key relation between vector space is linear transformation,
characterised by preserving the two operations. The key example of vector space is
the Euclidean space, which is the model for all finite dimensional vector spaces.
The theory of linear algebra can be developed over any field, which is a “number
system” that allows the usual four arithmetic operations. In fact, a more general
theory (of modules) can be developed over any ring, which is a system that allows
addition, subtraction and multiplication (but not necessarily division). Since the
linear algebra of real vector spaces already reflects most of the true spirit of linear
algebra, we will concentrate on real vector spaces until Chapter 6.

1.1 Definition
1.1.1 Axioms of Vector Space
Definition 1.1.1. A (real) vector space is a set V , together with the operations of
addition and scalar multiplication

~u + ~v : V × V → V, a~u : R × V → V,

such that the following are satisfied.

1. Commutativity: ~u + ~v = ~v + ~u.

2. Additive associativity: (~u + ~v ) + ~w = ~u + (~v + ~w).

3. Zero: There is an element ~0 ∈ V satisfying ~u + ~0 = ~u = ~0 + ~u.

4. Negative: For any ~u, there is ~v (to be denoted −~u), such that ~u +~v = ~0 = ~v +~u.

5. One: 1~u = ~u.


6. Multiplicative associativity: (ab)~u = a(b~u).

7. Scalar distributivity: (a + b)~u = a~u + b~u.

8. Vector distributivity: a(~u + ~v ) = a~u + a~v .

Due to the additive associativity, we may write ~u + ~v + ~w and even longer
expressions without ambiguity.

Example 1.1.1. The zero vector space {~0} consists of a single element ~0. This leaves
no choice for the two operations: ~0 + ~0 = ~0, a~0 = ~0. It can be easily verified that
all eight axioms are satisfied.

Example 1.1.2. The Euclidean space Rn is the set of n-tuples

~x = (x1 , x2 , . . . , xn ), xi ∈ R.

The i-th number xi is the i-th coordinate of the vector. The Euclidean space is a
vector space with coordinate wise addition and scalar multiplication

(x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn ) = (x1 + y1 , x2 + y2 , . . . , xn + yn ),


a(x1 , x2 , . . . , xn ) = (ax1 , ax2 , . . . , axn ).

Geometrically, we may express a vector in the Euclidean space as a dot or an


arrow from the origin ~0 = (0, 0, . . . , 0) to the dot. Figure 1.1.1 shows that the
addition is described by parallelogram, and the scalar multiplication is described by
stretching and shrinking.

Figure 1.1.1: Euclidean space R2.

For the purpose of calculation (especially when mixed with matrices), it is more
convenient to write a vector as a vertical n × 1 matrix, or the transpose (indicated

by superscript T ) of a horizontal 1 × n matrix
$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1\ x_2\ \cdots\ x_n)^T.$$
Then the addition and scalar multiplication are
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} x_1+y_1 \\ x_2+y_2 \\ \vdots \\ x_n+y_n \end{pmatrix}, \qquad a\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} ax_1 \\ ax_2 \\ \vdots \\ ax_n \end{pmatrix}.$$

Example 1.1.3. Consider all polynomials of degree ≤ n


Pn = {a0 + a1 t + a2 t2 + · · · + an tn }.
We know how to add two polynomials together
(a0 + a1 t + a2 t2 + · · · + an tn ) + (b0 + b1 t + b2 t2 + · · · + bn tn )
=(a0 + b0 ) + (a1 + b1 )t + (a2 + b2 )t2 + · · · + (an + bn )tn ,
and how to multiply a polynomial by a number
c(a0 + a1 t + a2 t2 + · · · + an tn ) = ca0 + ca1 t + ca2 t2 + · · · + can tn .
We can then verify that all eight axioms are satisfied. Therefore Pn is a vector space.
The coefficients of a polynomial provide a one-to-one correspondence
a0 + a1 t + a2 t2 + · · · + an tn ∈ Pn ←→ (a0 , a1 , a2 , . . . , an ) ∈ Rn+1 .
Since the one-to-one correspondence preserves the addition and scalar multiplication,
it identifies the polynomial vector space Pn with the Euclidean vector space Rn+1 .
Such identification is an isomorphism.
The rigorous treatment of isomorphism will appear in Section 2.2.2.

Example 1.1.4. An m × n matrix A is mn numbers arranged in m rows and n


columns. The number aij in the i-th row and j-th column of A is the (i, j)-entry of A.
We also denote the matrix by A = (aij ).
All m × n matrices form a vector space Mm×n with the obvious addition and
scalar multiplication. For example, in M3×2 we have
     
$$\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \end{pmatrix} + \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \\ y_{31} & y_{32} \end{pmatrix} = \begin{pmatrix} x_{11}+y_{11} & x_{12}+y_{12} \\ x_{21}+y_{21} & x_{22}+y_{22} \\ x_{31}+y_{31} & x_{32}+y_{32} \end{pmatrix},$$
$$a\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \end{pmatrix} = \begin{pmatrix} ax_{11} & ax_{12} \\ ax_{21} & ax_{22} \\ ax_{31} & ax_{32} \end{pmatrix}.$$

We also have an isomorphism that identifies matrices with Euclidean vectors


 
$$\begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \\ x_3 & y_3 \end{pmatrix} \in M_{3\times 2} \longleftrightarrow (x_1, x_2, x_3, y_1, y_2, y_3) \in \mathbb{R}^6.$$
Moreover, we have the general transpose isomorphism that identifies m × n matrices
with n × m matrices (see Example 2.2.12 for the general formula)
$$A = \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \\ x_3 & y_3 \end{pmatrix} \in M_{3\times 2} \longleftrightarrow A^T = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix} \in M_{2\times 3}.$$
A special case is the isomorphism in Example 1.1.2
$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in M_{n\times 1} \longleftrightarrow \vec{x}^T = (x_1\ x_2\ \cdots\ x_n) \in M_{1\times n}.$$
The addition, scalar multiplication, and transpose of matrices are defined in the
most “obvious” way. However, even simple definitions need to be justified. We can
directly verify the expected properties by using the given formulae. In Sections 2.1.4
and 4.3.1, we give conceptual justifications for addition, scalar multiplication, and
transpose.

Example 1.1.5. All infinite sequences $(x_n)_{n=1}^{\infty}$ of real numbers form a vector space,
with the addition and scalar multiplication given by

(xn ) + (yn ) = (xn + yn ), a(xn ) = (axn ).

Example 1.1.6. All smooth functions form a vector space C ∞ , with the usual ad-
dition and scalar multiplication of functions. The vector space is not isomorphic to
the usual Euclidean space because it is “infinite dimensional”.

Exercise 1.1. Prove that (a + b)(~x + ~y ) = a~x + b~y + b~x + a~y in any vector space.

Exercise 1.2. Introduce the following addition and scalar multiplication in R2


(x1 , x2 ) + (y1 , y2 ) = (x1 + y2 , x2 + y1 ), a(x1 , x2 ) = (ax1 , ax2 ).
Check which axioms of vector spaces are satisfied, and which are not satisfied.

Exercise 1.3. Introduce the following addition and scalar multiplication in R2


(x1 , x2 ) + (y1 , y2 ) = (x1 + y1 , 0), a(x1 , x2 ) = (ax1 , 0).
Check which axioms of vector spaces are satisfied, and which are not satisfied.

Exercise 1.4. Introduce the following addition and scalar multiplication in R2

(x1 , x2 ) + (y1 , y2 ) = (x1 + ky1 , x2 + ly2 ), a(x1 , x2 ) = (ax1 , ax2 ).

Show that this makes R2 into a vector space if and only if k = l = 1.

Exercise 1.5. Show that all convergent sequences form a vector space.

Exercise 1.6. Show that all even smooth functions form a vector space.

Exercise 1.7. Explain that the transpose of matrix satisfies

(A + B)T = AT + B T , (cA)T = cAT , (AT )T = A.

Section 2.4.2 gives conceptual explanation of the equalities.

1.1.2 Proof by Axiom


We establish some basic properties of vector spaces. You can directly verify these
properties in the Euclidean space. For general vector spaces, however, we should
derive these properties from the axioms.

Proposition 1.1.2. The zero vector is unique.

Proof. Suppose ~01 and ~02 are two zero vectors. By applying the first equality in
Axiom 3 to ~u = ~01 and ~0 = ~02 , we get ~01 + ~02 = ~01 . By applying the second equality
in Axiom 3 to ~0 = ~01 and ~u = ~02 , we get ~02 = ~01 + ~02 . Combining the two equalities,
we get ~02 = ~01 + ~02 = ~01 .

Proposition 1.1.3. If ~u + ~v = ~u, then ~v = ~0.

By Axiom 1, we also have that ~v + ~u = ~u implies ~v = ~0. Both properties are the
cancelation law.
Proof. Suppose ~u + ~v = ~u. By Axiom 4, there is ~w, such that ~w + ~u = ~0. We use ~w
instead of ~v in the axiom, because ~v is already used in the proposition. Then

~v = ~0 + ~v           (Axiom 3)
   = (~w + ~u) + ~v    (choice of ~w)
   = ~w + (~u + ~v )   (Axiom 2)
   = ~w + ~u           (assumption)
   = ~0.               (choice of ~w)

Proposition 1.1.4. a~u = ~0 if and only if a = 0 or ~u = ~0.

Proof. By Axioms 3, 7, 8, we have

0~u + 0~u = (0 + 0)~u = 0~u, a~0 + a~0 = a(~0 + ~0) = a~0.

Then by Proposition 1.1.3, we get 0~u = ~0 and a~0 = ~0. This proves the if part of the
proposition.
The only if part means a~u = ~0 implies a = 0 or ~u = ~0. This is the same as
a~u = ~0 and a ≠ 0 implying ~u = ~0. So we assume a~u = ~0 and a ≠ 0, and then apply
Axioms 5, 6 and a~0 = ~0 (just proved) to get

~u = 1~u = (a^{-1}a)~u = a^{-1}(a~u) = a^{-1}~0 = ~0.

Exercise 1.8. Directly verify Propositions 1.1.2, 1.1.3, 1.1.4 in Rn .

Exercise 1.9. Prove that the vector ~v in Axiom 4 is unique, and is (−1)~u. This justifies
the notation −~u. Moreover, prove −(−~u) = ~u.

Exercise 1.10. Prove that a~v = b~v if and only if a = b or ~v = ~0.

Exercise 1.11. Prove the more general version of the cancelation law: ~u + ~v1 = ~u + ~v2
implies ~v1 = ~v2 .

Exercise 1.12. We use Exercise 1.9 to define ~u −~v = ~u +(−~v ). Prove the following properties

−(~u − ~v ) = −~u + ~v , −(~u + ~v ) = −~u − ~v .

1.2 Linear Combination


1.2.1 Linear Combination Expression
Combining addition and scalar multiplication gives linear combination

a1~v1 + a2~v2 + · · · + an~vn .

If we start with a nonzero seed vector ~u, then all its linear combinations a~u form
a straight line passing through the origin ~0. If we start with two non-parallel vectors
~u and ~v , then all their linear combinations a~u + b~v form a plane passing through the
origin ~0. See Figure 1.2.1.

Exercise 1.13. What are all the linear combinations of two parallel vectors ~u and ~v ?

Figure 1.2.1: Linear combination.

By the axioms of vector spaces, we can easily verify the following

c1 (a1~v1 + · · · + an~vn ) + c2 (b1~v1 + · · · + bn~vn )


= (c1 a1~v1 + · · · + c1 an~vn ) + (c2 b1~v1 + · · · + c2 bn~vn )
= (c1 a1 + c2 b1 )~v1 + · · · + (c1 an + c2 bn )~vn .

This shows that a linear combination of two linear combinations is still a linear
combination. The fact extends easily to more linear combinations.

Proposition 1.2.1. A linear combination of linear combinations is still a linear


combination.

Example 1.2.1. In R3 , we try to express ~v = (10, 11, 12) as a linear combination of


the following vectors

~v1 = (1, 2, 3), ~v2 = (4, 5, 6), ~v3 = (7, 8, 9).

This means finding suitable coefficients x1 , x2 , x3 , such that


 
10
~v = 11 = x1~v1 + x2~v2 + x3~v3

12
       
1 4 7 x1 + 4x2 + 7x3
= x1 2 + x2 5 + x3 8 = 2x1 + 5x2 + 8x3  .
3 6 9 3x1 + 6x2 + 9x3
In other words, we try to solve the system of linear equations

x1 + 4x2 + 7x3 = 10,


2x1 + 5x2 + 8x3 = 11,
3x1 + 6x2 + 9x3 = 12.

By the way, we see the advantage of expressing Euclidean vectors in the vertical way
in calculations.
To solve the system, we may eliminate x1 in the second and third equations, by
using E2 − 2E1 (multiply the first equation by −2 and add to the second equation)
and E3 − 3E1 (multiply the first equation by −3 and add to the third equation).
The result of the two operations is
x1 + 4x2 + 7x3 = 10,
− 3x2 − 6x3 = −9,
− 6x2 − 12x3 = −18.
Then we use E3 − 2E2 to get
x1 + 4x2 + 7x3 = 10,
− 3x2 − 6x3 = −9,
0 = 0.
The last equation is trivial, and we only need to solve the first two equations. We
may do −(1/3)E2 (multiply the second equation by −1/3) to get
x1 + 4x2 + 7x3 = 10,
x2 + 2x3 = 3,
0 = 0.
From the second equation, we get x2 = 3 − 2x3 . Substituting into the first equation,
we get x1 = 10 − 4(3 − 2x3 ) − 7x3 = −2 + x3 . The solution of the system is
x1 = −2 + x3 , x2 = 3 − 2x3 , x3 arbitrary.
We conclude that ~v is a linear combination of ~v1 , ~v2 , ~v3 , and there are many linear
combination expressions, i.e., the expression is not unique.
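The calculation above can be checked by machine. The following is a small sketch (an addition, not from the text; it assumes the Python library sympy is available) that solves the same system and reproduces the one-parameter family of solutions.

```python
# Verify Example 1.2.1: solve x1*v1 + x2*v2 + x3*v3 = v for the coefficients.
from sympy import Matrix, symbols, linsolve

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([[1, 4, 7],
            [2, 5, 8],
            [3, 6, 9]])      # columns are v1, v2, v3
v = Matrix([10, 11, 12])

# The solution set is {(x3 - 2, 3 - 2*x3, x3)}, matching
# x1 = -2 + x3, x2 = 3 - 2*x3, x3 arbitrary.
print(linsolve((A, v), x1, x2, x3))
```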

Example 1.2.2. In P2 , we look for a, such that p(t) = 10 + 11t + at2 is a linear
combination of the following polynomials,
p1 (t) = 1 + 2t + 3t2 , p2 (t) = 4 + 5t + 6t2 , p3 (t) = 7 + 8t + 9t2 ,
This means finding suitable coefficients x1 , x2 , x3 , such that
10 + 11t + at2 = x1 (1 + 2t + 3t2 ) + x2 (4 + 5t + 6t2 ) + x3 (7 + 8t + 9t2 )
= (x1 + 4x2 + 7x3 ) + (2x1 + 5x2 + 8x3 )t + (3x1 + 6x2 + 9x3 )t2 .
Comparing the coefficients of 1, t, t2 , we get a system of linear equations
x1 + 4x2 + 7x3 = 10,
2x1 + 5x2 + 8x3 = 11,
3x1 + 6x2 + 9x3 = a.

We use the same simplification process in Example 1.2.1 to simplify the system.
First we get
x1 + 4x2 + 7x3 = 10,
− 3x2 − 6x3 = −9,
− 6x2 − 12x3 = a − 30.
Then we get
x1 + 4x2 + 7x3 = 10,
− 3x2 − 6x3 = −9,
0 = a − 12.
If a ≠ 12, then the last equation is a contradiction, and the system has no solution.
If a = 12, then we are back to Example 1.2.1, and the system has (non-unique)
solution.
We conclude p(t) is a linear combination of p1 (t), p2 (t), p3 (t) if and only if a = 12.

Exercise 1.14. Find the condition on a, such that the last vector can be expressed as a
linear combination of the previous ones.

1. (1, 2, 3), (4, 5, 6), (7, a, 9), (10, 11, 12).

2. (1, 2, 3), (7, a, 9), (10, 11, 12).

3. 1 + 2t + 3t2 , 7 + at + 9t2 , 10 + 11t + 12t2 .

4. t2 + 2t + 3, 7t2 + at + 9, 10t2 + 11t + 12.


       
5. $\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$, $\begin{pmatrix} 4 & 5 \\ 5 & 6 \end{pmatrix}$, $\begin{pmatrix} 7 & a \\ a & 9 \end{pmatrix}$, $\begin{pmatrix} 10 & 11 \\ 11 & 12 \end{pmatrix}$.

6. $\begin{pmatrix} 1 & 2 \\ 3 & 3 \end{pmatrix}$, $\begin{pmatrix} 7 & a \\ 9 & 9 \end{pmatrix}$, $\begin{pmatrix} 10 & 11 \\ 12 & 12 \end{pmatrix}$.

1.2.2 Row Operation


Examples 1.2.1 and 1.2.2 show that the problem of expressing a vector as a linear
combination is equivalent to solving a system of linear equations. The shape of
the vector (Euclidean, or polynomial, or some other form) is not important for the
calculation. What is important is the coefficients in the vectors.
In general, to express a vector ~b ∈ Rm as a linear combination of ~v1 , ~v2 , . . . , ~vn ∈
Rm , we use ~vi to form the columns of a matrix
   
$$A = (a_{ij}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = (\vec{v}_1\ \vec{v}_2\ \cdots\ \vec{v}_n), \qquad \vec{v}_i = \begin{pmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{mi} \end{pmatrix}. \tag{1.2.1}$$

We denote the linear combination by A~x


     
$$A\vec{x} = x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_n\vec{v}_n = x_1\begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} + x_2\begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix} + \cdots + x_n\begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{pmatrix}. \tag{1.2.2}$$
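As a quick illustration of formula (1.2.2) (a sketch added here, not from the text; it assumes numpy), the matrix-vector product computed by a library agrees with the linear combination of the columns of A with the entries of ~x as coefficients.

```python
# A x is the linear combination of the columns of A with coefficients x1, ..., xn.
import numpy as np

A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])
x = np.array([2.0, -1.0, 3.0])

combination = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(A @ x, combination))   # True
```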
Then the linear combination expression
A~x = x1~v1 + x2~v2 + · · · + xn~vn = ~b
means a system of linear equations
a11 x1 + a12 x2 + · · · + a1n xn = b1 ,
a21 x1 + a22 x2 + · · · + a2n xn = b2 ,
..
.
am1 x1 + am2 x2 + · · · + amn xn = bm .
We call A the coefficient matrix of the system, and call ~b = (b1 , b2 , . . . , bm ) the right
side. The augmented matrix of the system is
$$(A\ \vec{b}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}.$$
We have the correspondences
equations ⇐⇒ rows of (A ~b),
variables ⇐⇒ columns of A.
A system of linear equations can be solved by the process of Gaussian elimina-
tion, as illustrated in Examples 1.2.1 and 1.2.2. The idea is to eliminate variables,
and thereby simplify equations. This is equivalent to the similar simplifications of
the augmented matrix (A ~b). For example, the Gaussian elimination process in
Example 1.2.1 corresponds to the following operations on rows
$$\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{pmatrix} \xrightarrow[R_3-3R_1]{R_2-2R_1} \begin{pmatrix} 1 & 4 & 7 & 10 \\ 0 & -3 & -6 & -9 \\ 0 & -6 & -12 & -18 \end{pmatrix} \xrightarrow[-\frac{1}{3}R_2]{R_3-2R_2} \begin{pmatrix} 1 & 4 & 7 & 10 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \tag{1.2.3}$$
In general, we may use three types of row operations, which do not change solutions
of a system of linear equations.

• Ri ↔ Rj : exchange the i-th and j-th rows.

• cRi : multiply the i-th row by a number c ≠ 0.

• Ri + cRj : add the c multiple of the j-th row to the i-th row.
In Example 1.2.1, we use the third operation to create zero coefficients (and therefore
simpler matrix). We use the second operation to simplify the coefficients (say, −3
is changed to 1). We may use the first operation to rearrange the equations from
the most complicated (i.e., longest) to the simplest (i.e., shortest). We did not use
this operation in the example because the arrangement is already from the most
complicated to the simplest.
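The three row operations are easy to mechanise. The sketch below (an addition, not from the text; numpy assumed) implements them for a numerical matrix and replays the operations of (1.2.3) on the augmented matrix of Example 1.2.1.

```python
import numpy as np

def swap(M, i, j):              # R_i <-> R_j
    M[[i, j]] = M[[j, i]]

def scale(M, i, c):             # c R_i, with c != 0
    M[i] = c * M[i]

def add_multiple(M, i, j, c):   # R_i + c R_j
    M[i] = M[i] + c * M[j]

M = np.array([[1., 4., 7., 10.],
              [2., 5., 8., 11.],
              [3., 6., 9., 12.]])
add_multiple(M, 1, 0, -2)       # R2 - 2 R1
add_multiple(M, 2, 0, -3)       # R3 - 3 R1
add_multiple(M, 2, 1, -2)       # R3 - 2 R2
scale(M, 1, -1/3)               # -1/3 R2
print(M)                        # the row echelon form reached in (1.2.3)
```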

Example 1.2.3. We rearrange the order of vectors as ~v2 , ~v3 , ~v1 in Example 1.2.1.
The corresponding row operations tell us how to express ~v as a linear combination
of ~v2 , ~v3 , ~v1 . We remark that the first row operation is also used.
     
$$\begin{pmatrix} 4 & 7 & 1 & 10 \\ 5 & 8 & 2 & 11 \\ 6 & 9 & 3 & 12 \end{pmatrix} \xrightarrow[R_3-R_1]{R_2-R_1} \begin{pmatrix} 4 & 7 & 1 & 10 \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 \end{pmatrix} \xrightarrow[R_3-2R_2]{R_1-4R_2} \begin{pmatrix} 0 & 3 & -3 & 6 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \xrightarrow[R_1\leftrightarrow R_2]{\frac{1}{3}R_1} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0 \end{pmatrix} \xrightarrow{R_1-R_2} \begin{pmatrix} 1 & 0 & 2 & -1 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
The system is simplified to x2 + 2x1 = −1 and x3 − x1 = 2. We get the general
solution
x2 = −1 − 2x1 , x3 = 2 + x1 , x1 arbitrary.

Exercise 1.15. Explain that row operations can always be reversed:


• The reverse of Ri ↔ Rj is Ri ↔ Rj .
• The reverse of cRi is c^{-1}Ri .
• The reverse of Ri + cRj is Ri − cRj .

Exercise 1.16. Explain that row operations do not change the solutions of the corresponding
system.

1.2.3 Row Echelon Form


We use three row operations to simplify a matrix. The simplest shape we can achieve
is called the row echelon form. For the matrices in Examples 1.2.1 and 1.2.3, the
row echelon form is
 
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad \bullet \ne 0,\ \ * \text{ can be any number}. \tag{1.2.4}$$

The entries indicated by • are called the pivots. The rows and columns containing
the pivots are pivot rows and pivot columns. In the row echelon form (1.2.4), the
pivot rows are the first and second, and the pivot columns are the first and second.
The following are all the 2 × 3 row echelon forms
       
$$\begin{pmatrix} \bullet & * & * \\ 0 & \bullet & * \end{pmatrix},\ \begin{pmatrix} \bullet & * & * \\ 0 & 0 & \bullet \end{pmatrix},\ \begin{pmatrix} \bullet & * & * \\ 0 & 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & \bullet & * \\ 0 & 0 & \bullet \end{pmatrix},\ \begin{pmatrix} 0 & \bullet & * \\ 0 & 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 & \bullet \\ 0 & 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
In general, a row echelon form has the shape of upside down staircase (indicating
simpler and simpler linear equations), and the shape is characterised by the locations
of the pivots. The pivots are the leading nonzero entries in the rows. They appear
in the first several rows, and in later and later positions. The subsequent non-pivot
rows are completely zero. We note that each row has at most one pivot and each
column has at most one pivot. Therefore

number of pivot rows = number of pivots = number of pivot columns.

For an m × n matrix, the number above is no more than the number of rows and
columns
number of pivots ≤ min{m, n}. (1.2.5)

Exercise 1.17. How can row operations improve the following shapes to become upside
down staircase?
       
1. $\begin{pmatrix} 0 & \bullet & * & * \\ 0 & 0 & 0 & 0 \\ \bullet & * & * & * \end{pmatrix}$.  2. $\begin{pmatrix} \bullet & * & * & * \\ \bullet & * & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix}$.  3. $\begin{pmatrix} 0 & \bullet & * & * \\ 0 & \bullet & * & * \\ \bullet & * & * & * \end{pmatrix}$.  4. $\begin{pmatrix} 0 & \bullet & * & * \\ 0 & 0 & \bullet & * \\ \bullet & * & * & * \end{pmatrix}$.

Then explain why the shape (1.2.4) cannot be further improved?

Exercise 1.18. Display all the 2 × 2 row echelon forms. How about 3 × 2 row echelon forms?

Exercise 1.19. How many m × n row echelon forms are there?

If a ≠ 12 in Example 1.2.2, then the augmented matrix of the system has the
following row echelon form
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & \bullet \end{pmatrix}.$$
The row (0 0 0 •) represents the equation 0 = •, a contradiction. Therefore the
system has no solution. We remark that the row (0 0 0 •) means the last column is
pivot.

If a = 12, then we do not have the contradiction, and the system has solution.
Section 1.2.4 shows the existence of solution even more explicitly.
The discussion leads to the first part of the following.

Theorem 1.2.2. A system of linear equations A~x = ~b has solution if and only if ~b
is not a pivot column of the augmented matrix (A ~b). The solution is unique if and
only if all columns of A are pivot.
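Theorem 1.2.2 can be tested mechanically from the pivot columns. The following sketch (an addition, not from the text; it assumes sympy) checks both conditions for the system of Example 1.2.1.

```python
from sympy import Matrix

A = Matrix([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
b = Matrix([10, 11, 12])

_, pivots_A  = A.rref()                # pivot columns of A
_, pivots_Ab = A.row_join(b).rref()    # pivot columns of (A b)

n = A.cols
has_solution = n not in pivots_Ab      # b is not a pivot column of (A b)
unique       = len(pivots_A) == n      # all columns of A are pivot
print(has_solution, unique)            # True False
```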

1.2.4 Reduced Row Echelon Form


Although a row echelon form has the simplest shape, we may still simplify individual
entries in a row echelon form. First we may divide rows (using the second row
operation) by the nonzero numbers at the pivots, so that the pivots are occupied by
1. Then we use these 1 to eliminate (using the third row operation) all terms above
the pivots. The result is the simplest matrix one can get by row operations, called
the reduced row echelon form.
For the row echelon form (1.2.4), this means
     
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & * & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
Here is one more example
$$\begin{pmatrix} \bullet & * & * & * & * & * \\ 0 & 0 & \bullet & * & * & * \\ 0 & 0 & 0 & \bullet & * & * \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & * & * & * & * & * \\ 0 & 0 & 1 & * & * & * \\ 0 & 0 & 0 & 1 & * & * \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & * & 0 & 0 & * & * \\ 0 & 0 & 1 & 0 & * & * \\ 0 & 0 & 0 & 1 & * & * \end{pmatrix}.$$

The corresponding systems of linear equations are also the simplest

x1 + c13 x3 = d1 ,          x1 + c12 x2 + c15 x5 = d1 ,
x2 + c23 x3 = d2 ,          x3 + c25 x5 = d2 ,
      0 = 0;                x4 + c35 x5 = d3 .

Then we can literally read off the solutions of the two systems. The general solution
of the first system is

x1 = d1 − c13 x3 , x2 = d2 − c23 x3 , x3 arbitrary.

For the obvious reason, we call x3 a free variable, and x1 , x2 non-free variables. The
general solution of the second system is

x1 = d1 − c12 x2 − c15 x5 , x3 = d2 − c25 x5 , x4 = d3 − c35 x5 , x2 , x5 arbitrary.

Here x2 , x5 are free, and x1 , x3 , x4 are not free.
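In practice the reduced row echelon form, and hence the free variables, can be read off with a computer algebra system. A minimal sketch (an addition, not from the text; sympy assumed):

```python
from sympy import Matrix

M = Matrix([[1, 4, 7, 10],
            [2, 5, 8, 11],
            [3, 6, 9, 12]])            # augmented matrix of Example 1.2.1

R, pivot_columns = M.rref()
print(R)              # Matrix([[1, 0, -1, -2], [0, 1, 2, 3], [0, 0, 0, 0]])
print(pivot_columns)  # (0, 1): x1, x2 are non-free, x3 is free
```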



Exercise 1.20. Display all the 2 × 2, 2 × 3, 3 × 2 and 3 × 4 reduced row echelon forms.

Exercise 1.21. Given the reduced row echelon form of the augmented matrix of the system
of linear equations, find the general solution.
   
1. $\begin{pmatrix} 1 & a_1 & 0 & b_1 \\ 0 & 0 & 1 & b_2 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.

2. $\begin{pmatrix} 1 & a_1 & 0 & b_1 & 0 \\ 0 & 0 & 1 & b_2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

3. $\begin{pmatrix} 1 & 0 & 0 & b_1 \\ 0 & 1 & 0 & b_2 \\ 0 & 0 & 1 & b_3 \end{pmatrix}$.

4. $\begin{pmatrix} 1 & a_1 & a_2 & b_1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.

5. $\begin{pmatrix} 1 & a_1 & a_2 & b_1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

6. $\begin{pmatrix} 1 & a_1 & 0 & a_2 & b_1 \\ 0 & 0 & 1 & a_3 & b_2 \end{pmatrix}$.

7. $\begin{pmatrix} 1 & 0 & a_1 & a_2 & b_1 \\ 0 & 1 & a_3 & a_4 & b_2 \end{pmatrix}$.

8. $\begin{pmatrix} 1 & 0 & a_1 & b_1 \\ 0 & 1 & a_2 & b_2 \end{pmatrix}$.

9. $\begin{pmatrix} 0 & 1 & 0 & a_1 & b_1 \\ 0 & 0 & 1 & a_2 & b_2 \end{pmatrix}$.

10. $\begin{pmatrix} 1 & 0 & a_1 & b_1 \\ 0 & 1 & a_2 & b_2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.

11. $\begin{pmatrix} 1 & 0 & a_1 & 0 & a_2 & b_1 \\ 0 & 1 & a_3 & 0 & a_4 & b_2 \\ 0 & 0 & 0 & 1 & a_5 & b_3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

12. $\begin{pmatrix} 1 & 0 & a_1 & 0 & a_2 & b_1 & 0 \\ 0 & 1 & a_3 & 0 & a_4 & b_2 & 0 \\ 0 & 0 & 0 & 1 & a_5 & b_3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$.

Exercise 1.22. Given the general solution of the system of linear equations, find the reduced
row echelon form of the augmented matrix.
1. x1 = −x3 , x2 = 1 + x3 ; x3 arbitrary.

2. x1 = −x3 , x2 = 1 + x3 ; x3 , x4 arbitrary.

3. x2 = −x4 , x3 = 1 + x4 ; x1 , x4 arbitrary.

4. x2 = −x4 , x3 = x4 − x5 ; x1 , x4 , x5 arbitrary.

5. x1 = 1 − x2 + 2x5 , x3 = 1 + 2x5 , x4 = −3 + x5 ; x2 , x5 arbitrary.

6. x1 = 1 + 2x2 + 3x4 , x3 = 4 + 5x4 + 6x5 ; x2 , x4 , x5 arbitrary.

7. x1 = 2x2 + 3x4 − x6 , x3 = 5x4 + 6x5 − 4x6 ; x2 , x4 , x5 , x6 arbitrary.

We see that, if the system has solution (i.e., ~b is not pivot column of (A ~b)),
then the reduced row echelon form is equivalent to the general solution. Since the
solution is independent of the choice of the row operations, we know the reduced
row echelon form of a matrix is unique.

We also see the following correspondence

variables in A~x = ~b    free        non-free
columns in A             non-pivot   pivot

In particular, the uniqueness of solution means no freedom for the variables. By


the correspondence above, this means all columns of A are pivot. This explains the
second part of Theorem 1.2.2.

1.3 Basis
1.3.1 Basis and Coordinate
In Example 1.2.2, we see the linear combination problem for polynomials is equiv-
alent to the linear combination problem for Euclidean vectors. In Example 1.1.3, the
equivalence is given by expressing polynomials as linear combinations of 1, t, t2 , . . . , tn
with unique coefficients. In general, we ask for the following.

Definition 1.3.1. An ordered set α = {~v1 , ~v2 , . . . , ~vn } of vectors in a vector space V
is a basis of V , if for any ~x ∈ V , there exist unique x1 , x2 , . . . , xn , such that

~x = x1~v1 + x2~v2 + · · · + xn~vn .

The coefficients x1 , x2 , . . . , xn are the coordinates of ~x with respect to the (ordered)
basis. The unique expression means that the α-coordinate map

~x ∈ V 7→ [~x]α = (x1 , x2 , . . . , xn ) ∈ Rn

is well defined. The reverse map is given by the linear combination

(x1 , x2 , . . . , xn ) ∈ Rn 7→ x1~v1 + x2~v2 + · · · + xn~vn ∈ V.

The two way maps identify (called isomorphism) the general vector space V with
the Euclidean space Rn . Moreover, the following result shows that the coordinate
preserves linear combinations. Therefore we may use the coordinate to translate
linear algebra problems (such as linear combination expression) in a general vector
space to corresponding problems in an Euclidean space.

Proposition 1.3.2. [~x + ~y ]α = [~x]α + [~y ]α , [a~x]α = a[~x]α .

Proof. Let [~x]α = (x1 , x2 , . . . , xn ) and [~y ]α = (y1 , y2 , . . . , yn ). Then by the definition
of coordinates, we have

~x = x1~v1 + x2~v2 + · · · + xn~vn , ~y = y1~v1 + y2~v2 + · · · + yn~vn .



Adding the two together, we have


~x + ~y = (x1 + y1 )~v1 + (x2 + y2 )~v2 + · · · + (xn + yn )~vn .
By the definition of coordinates, this means
[~x + ~y ]α = (x1 + y1 , x2 + y2 , . . . , xn + yn )
= (x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn ) = [~x]α + [~y ]α .
The proof of [a~x]α = a[~x]α is similar.

Example 1.3.1. The standard basis vector ~ei in Rn has the i-th coordinate 1 and
all other coordinates 0. For example, the standard basis vectors of R3 are
~e1 = (1, 0, 0), ~e2 = (0, 1, 0), ~e3 = (0, 0, 1).
By the equality
(x1 , x2 , . . . , xn ) = x1~e1 + x2~e2 + · · · + xn~en ,
any vector is a linear combination of the standard basis vectors. Moreover, the
equality shows that, if two expressions on the right are equal
x1~e1 + x2~e2 + · · · + xn~en = y1~e1 + y2~e2 + · · · + yn~en ,
then the two vectors are also equal
(x1 , x2 , . . . , xn ) = (y1 , y2 , . . . , yn ).
Of course this means exactly x1 = y1 , x2 = y2 , . . . , xn = yn , i.e., the uniqueness of
the coefficients. Therefore the standard basis vectors form the standard basis ε =
{~e1 , ~e2 , . . . , ~en } of Rn . The equality can be interpreted as [~x]ε = ~x.
If we change the order in the standard basis, then we should also change the
order of coordinates
[(x1 , x2 , x3 )]{~e1 ,~e2 ,~e3 } = (x1 , x2 , x3 ),
[(x1 , x2 , x3 )]{~e2 ,~e1 ,~e3 } = (x2 , x1 , x3 ),
[(x1 , x2 , x3 )]{~e3 ,~e2 ,~e1 } = (x3 , x2 , x1 ).

Example 1.3.2. Any polynomial of degree ≤ n is of the form


p(t) = a0 + a1 t + a2 t2 + · · · + an tn .
The formula can be interpreted as that p(t) is a linear combination of monomials
1, t, t2 , . . . , tn . For the uniqueness of the linear combination, we consider the equality
a0 + a1 t + a2 t2 + · · · + an tn = b0 + b1 t + b2 t2 + · · · + bn tn .

The equality usually means equal functions. In other words, if we substitute any
real number in place of t, the two sides have the same value. Taking t = 0, we get
a0 = b0 . Dividing the remaining equality by t ≠ 0, we get

a1 + a2 t + · · · + an t^{n−1} = b1 + b2 t + · · · + bn t^{n−1} for t ≠ 0.

Taking the limit as t → 0 on both sides, we get a1 = b1 . Inductively, we find that
ai = bi for all i. This shows the uniqueness of the linear combination. Therefore
1, t, t^2 , . . . , t^n form a basis of Pn . We have

[a0 + a1 t + a2 t2 + · · · + an tn ]{1,t,t2 ,...,tn } = (a0 , a1 , a2 , . . . , an ).

Example 1.3.3. Consider the monomials 1, t − 1, (t − 1)2 at t0 = 1. Any quadratic


polynomial is a linear combination of 1, t − 1, (t − 1)2

a0 + a1 t + a2 t2 = a0 + a1 [1 + (t − 1)] + a2 [1 + (t − 1)]2
= (a0 + a1 + a2 ) + (a1 + 2a2 )(t − 1) + a2 (t − 1)2 .

Moreover, if two linear combinations are equal

a0 + a1 (t − 1) + a2 (t − 1)2 = b0 + b1 (t − 1) + b2 (t − 1)2 ,

then substituting t by t + 1 gives the equality

a0 + a1 t + a2 t2 = b0 + b1 t + b2 t2 .

This means a0 = b0 , a1 = b1 , a2 = b2 , or the uniqueness of the linear combination


expression. Therefore 1, t − 1, (t − 1)2 form a basis of P2 . In general, 1, t − t0 , (t −
t0 )2 , . . . , (t − t0 )n form a basis of Pn .
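The coordinates with respect to the basis 1, t − 1, (t − 1)^2 can also be computed by the same substitution. The sketch below (an addition, not from the text; sympy assumed) rewrites a sample quadratic in powers of t − 1.

```python
from sympy import symbols, Poly

t = symbols('t')
p = 2 + 3*t + 5*t**2                 # a sample polynomial in P2

# Coordinates with respect to 1, t-1, (t-1)^2 are the coefficients of p(t+1).
coords = Poly(p.subs(t, t + 1), t).all_coeffs()[::-1]
print(coords)                        # [10, 13, 5] = [a0+a1+a2, a1+2*a2, a2]
```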

Exercise 1.23. Show that the following 3 × 2 matrices form a basis of the vector space
M3×2 .
           
$\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}$.

In general, how many matrices are in a basis of the vector space Mm×n of m × n matrices?

Exercise 1.24. For an ordered basis α = {~v1 , ~v2 , . . . , ~vn } of V , explain that [~vi ]α = ~ei .

Exercise 1.25. A permutation of {1, 2, . . . , n} is a one-to-one correspondence π : {1, 2, . . . , n} →


{1, 2, . . . , n}. Let α = {~v1 , ~v2 , . . . , ~vn } be a basis of V .

1. Show that π(α) = {~vπ(1) , ~vπ(2) , . . . , ~vπ(n) } is still a basis.

2. What is the relation between [~x]α and [~x]π(α) ?



1.3.2 Spanning Set


The definition of basis consists of two parts, the existence of linear combination
expression for all vectors, and the uniqueness of the expression. We study the two
properties separately. The existence is the following property.

Definition 1.3.3. A set of vectors α = {~v1 , ~v2 , . . . , ~vn } in a vector space V spans V
if any vector in V can be expressed as a linear combination of ~v1 , ~v2 , . . . , ~vn .

For V = Rm , we form the m × n matrix A = (~v1 ~v2 · · · ~vn ). Then the vectors
spanning V means the system of linear equations

A~x = x1~v1 + x2~v2 + · · · + xn~vn = ~b

has solution for all right side ~b.

Example 1.3.4. For ~u = (a, b) and ~v = (c, d) to span R2 , we need the system of two
linear equations to have solution for all p, q

ax + cy = p,
bx + dy = q.

We multiply the first equation by b and the second equation by a, and then subtract
the two. We get
(bc − ad)y = bp − aq.
If ad ≠ bc, then we can solve for y. We may further substitute y into the original
equations to get x. Therefore the system has solution for all p, q.
We conclude that (a, b) and (c, d) span R2 in case ad ≠ bc. For example, by
1 · 4 ≠ 3 · 2, we know (1, 2) and (3, 4) span R2 .

Exercise 1.26. Show that a linear combination of (1, 2) and (2, 4) is always of the form
(a, 2a). Then explain that the two vectors do not span R2 .

Exercise 1.27. Explain that, if ad = bc, then (a, b) and (c, d) do not span R2 .

Example 1.3.5. The following vectors span R3

~v1 = (1, 2, 3), ~v2 = (4, 5, 6), ~v3 = (7, 8, 9),

if and only if (~v1 ~v2 ~v3 )~x = ~b has solution for all ~b. We apply the same row operations
in (1.2.3) to the augmented matrix

$$(\vec{v}_1\ \vec{v}_2\ \vec{v}_3\ \vec{b}) = \begin{pmatrix} 1 & 4 & 7 & b_1 \\ 2 & 5 & 8 & b_2 \\ 3 & 6 & 9 & b_3 \end{pmatrix} \to \begin{pmatrix} 1 & 4 & 7 & b_1' \\ 0 & 1 & 2 & b_2' \\ 0 & 0 & 0 & b_3' \end{pmatrix}.$$

Although we may calculate the explicit formulae for b′1 , b′2 , b′3 , which are linear combinations
of b1 , b2 , b3 , we do not need these. All we need to know is that, since b1 , b2 , b3 are
arbitrary, and the row operations can be reversed (see Exercise 1.15), the right side
b′1 , b′2 , b′3 in the row echelon form are also arbitrary. In particular, it is possible to
have b′3 ≠ 0, and the system has no solution. Therefore the three vectors do not
span R3 .
If we change the third vector to ~v3 = (7, 8, a), then the same row operations give

$$(\vec{v}_1\ \vec{v}_2\ \vec{v}_3\ \vec{b}) = \begin{pmatrix} 1 & 4 & 7 & b_1 \\ 2 & 5 & 8 & b_2 \\ 3 & 6 & a & b_3 \end{pmatrix} \to \begin{pmatrix} 1 & 4 & 7 & b_1' \\ 0 & 1 & 2 & b_2' \\ 0 & 0 & a-9 & b_3' \end{pmatrix}.$$

If a ≠ 9, then the last column is not pivot. By Theorem 1.2.2, the system always
has solution. Therefore (1, 2, 3), (4, 5, 6), (7, 8, a) span R3 if and only if a ≠ 9.

Example 1.3.5 can be summarized as the following criterion for a set of vectors
to span the Euclidean space.

Proposition 1.3.4. Let α = {~v1 , ~v2 , . . . , ~vn } ⊂ Rm and A = (~v1 ~v2 · · · ~vn ). The
following are equivalent.

1. α spans Rm .

2. A~x = ~b has solution for all ~b ∈ Rm .

3. All rows of A are pivot. In other words, the row echelon form of A has no zero
row (0 0 · · · 0).

Moreover, we have m ≤ n in the above cases.

For the last property n ≥ m, we note that all rows pivot implies the number of
pivots is m. Then by (1.2.5), we get m ≤ min{m, n} ≤ n.
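Condition 3 of Proposition 1.3.4 is easy to check by machine: all rows are pivot exactly when the rank of A equals m. A small sketch (an addition, not from the text; sympy assumed):

```python
from sympy import Matrix

vectors = [(1, 2, 3), (4, 5, 6), (7, 8, 10)]
A = Matrix(vectors).T        # the given vectors become the columns of A

m = A.rows
print(A.rank() == m)         # True: the three vectors span R^3
```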

Example 1.3.6. To find out whether (1, 2, 3), (4, 5, 6), (7, 8, a), (10, 11, b) span R3 ,
we apply the row operations in (1.2.3)
   
$$\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & a & b \end{pmatrix} \to \begin{pmatrix} 1 & 4 & 7 & 10 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & a-9 & b-12 \end{pmatrix}.$$

The row echelon form depends on the values of a and b. If a ≠ 9, then the result is
already a row echelon form
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & \bullet & * \end{pmatrix}.$$
If a = 9, then the result is
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & b-12 \end{pmatrix}.$$
Then we have two possible row echelon forms
$$\begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & \bullet \end{pmatrix} \text{ for } b \ne 12; \qquad \begin{pmatrix} \bullet & * & * & * \\ 0 & \bullet & * & * \\ 0 & 0 & 0 & 0 \end{pmatrix} \text{ for } b = 12.$$

By Proposition 1.3.4, the vectors (1, 2, 3), (4, 5, 6), (7, 8, a), (10, 11, b) span R3 if and
only if a ≠ 9, or a = 9 and b ≠ 12.
If we restrict the row operations to the first three columns, then we find that all
rows are pivot if and only if a ≠ 9. Therefore the first three vectors span R3 if and
only if a ≠ 9.
We may also restrict the row operations to the first, second and fourth columns,
and find that all rows are pivot if and only if b ≠ 12. This is the condition for
(1, 2, 3), (4, 5, 6), (10, 11, b) to span R3 .

Exercise 1.28. Find row echelon form and determine whether the column vectors span the
Euclidean space.
       
1 2 3 1 1 2 3 1 2 3 1 0 0 1
1. 2 3 1 2 .
  3. 2 3
 4. 2 3 4 0 1 1 0
3 4 5.
5.   7.  .
3 1 2 3 3 4 5 1 0 1 0
4 5 a 0 1 0 1
     
1 2 3   0 2 −1 4 1 0 0 1
2 3 1 1 2 3 4 −1 3 0 1 0
  1 1 0
2. 
3
. 6. 
 2 −4 −1 8.
. .
1 2 4. 2 3 4 5. 2 1 0 1 a
1 2 3 3 4 5 a 1 1 −2 7 0 1 a b

Exercise 1.29. If ~v1 , ~v2 , . . . , ~vn span V , prove that ~v1 , ~v2 , . . . , ~vn , ~w span V .
The property means that any set bigger than a spanning set is also a spanning set.

Exercise 1.30. Suppose ~w is a linear combination of ~v1 , ~v2 , . . . , ~vn . Prove that ~v1 , ~v2 , . . . , ~vn , ~w
span V if and only if ~v1 , ~v2 , . . . , ~vn span V .
The property means that, if one vector is a linear combination of the others, then we
may remove the vector without changing the spanning property.

Exercise 1.31. Suppose ~v1 , ~v2 , . . . , ~vn ∈ V . If each of ~w1 , ~w2 , . . . , ~wm is a linear combination
of ~v1 , ~v2 , . . . , ~vn , and ~w1 , ~w2 , . . . , ~wm span V , prove that ~v1 , ~v2 , . . . , ~vn also span V .

Exercise 1.32. Prove that the following are equivalent for a set of vectors in V .
1. ~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn span V .

2. ~v1 , . . . , ~vj , . . . , ~vi , . . . , ~vn span V .

3. ~v1 , . . . , c~vi , . . . , ~vn , c ≠ 0, span V .

4. ~v1 , . . . , ~vi + c~vj , . . . , ~vj , . . . , ~vn span V .



Example 1.3.7. By the last part of Proposition 1.3.4, three vectors (1, 0, 2, π),
(log 2, e, 100, −0.5), (√3, e^{−1}, sin 1, 2.3) cannot span R4 .

Exercise 1.33. If m > n in Proposition 1.3.4, what can you conclude?

Exercise 1.34. Explain that the vectors do not span the Euclidean space. Then interpret
the result in terms of systems of linear equations.

1. (10, −2, 3, 7, 2), (0, 8, −2, 5, −4), (8, −9, 3, 6, 5).

2. (10, −2, 3, 7, 2), (0, 8, −2, 5, −4), (8, −9, 3, 6, 5), (7, −9, 3, −5, 6).

3. (0, −2, 3, 7, 2), (0, 8, −2, 5, −4), (0, −9, 3, 6, 5), (0, −5, 4, 2, −7), (0, 4, −1, 3, −6).

4. (6, −2, 3, 7, 2), (−4, 8, −2, 5, −4), (6, −9, 3, 6, 5), (8, −5, 4, 2, −7), (−2, 4, −1, 3, −6).

1.3.3 Linear Independence


Definition 1.3.5. A set of vectors ~v1 , ~v2 , . . . , ~vn are linearly independent if the coef-
ficients in linear combination are unique

x1~v1 +x2~v2 +· · ·+xn~vn = y1~v1 +y2~v2 +· · ·+yn~vn =⇒ x1 = y1 , x2 = y2 , . . . , xn = yn .

The vectors are linearly dependent if they are not linearly independent.

For V = Rm , we form the matrix A = (~v1 ~v2 · · · ~vn ). Then the linear indepen-
dence of the column vectors means the solution of the system of linear equations

A~x = x1~v1 + x2~v2 + · · · + xn~vn = ~b

is unique. By Theorem 1.2.2, we have the following criterion for linear independence.

Proposition 1.3.6. Let α = {~v1 , ~v2 , . . . , ~vn } ⊂ Rm and A = (~v1 ~v2 · · · ~vn ). The
following are equivalent.

1. α is linearly independent.

2. The solution of A~x = ~b is unique.

3. All columns of A are pivot.

Moreover, we have m ≥ n in the above cases.



For the last property m ≥ n, we note that all columns pivot implies the number
of pivots is n. Then by (1.2.5), we get n ≤ min{m, n} ≤ m.

Example 1.3.8. We try to find the condition on a, such that

~v1 = (1, 2, 3, 4), ~v2 = (5, 6, 7, 8), ~v3 = (9, 10, 11, a)

are linearly independent. We carry out the row operations


   
$$(\vec{v}_1\ \vec{v}_2\ \vec{v}_3) = \begin{pmatrix} 1 & 5 & 9 \\ 2 & 6 & 10 \\ 3 & 7 & 11 \\ 4 & 8 & a \end{pmatrix} \xrightarrow[\substack{R_3-R_2 \\ R_2-R_1}]{R_4-R_3} \begin{pmatrix} 1 & 5 & 9 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & a-11 \end{pmatrix} \xrightarrow[\substack{R_3-R_2 \\ R_4-R_2}]{R_1-R_2} \begin{pmatrix} 0 & 4 & 8 \\ 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & a-12 \end{pmatrix} \xrightarrow[R_3\leftrightarrow R_4]{R_1\leftrightarrow R_2} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 4 & 8 \\ 0 & 0 & a-12 \\ 0 & 0 & 0 \end{pmatrix}.$$
We find all three columns are pivot if and only if a ≠ 12. This is the condition for
the three vectors to be linearly independent.
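The criterion of Proposition 1.3.6 can likewise be checked by rank: all columns are pivot exactly when the rank equals the number of columns. A sketch for Example 1.3.8 (an addition, not from the text; sympy assumed, and note that sympy reports the generic rank for a symbolic a):

```python
from sympy import Matrix, symbols

a = symbols('a')
A = Matrix([[1, 5, 9],
            [2, 6, 10],
            [3, 7, 11],
            [4, 8, a]])

print(A.rank())              # 3: independent for generic a
print(A.subs(a, 12).rank())  # 2: for a = 12 the columns are dependent
```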

Exercise 1.35. Determine the linear independence of the column vectors in Exercises 1.28.

Exercise 1.36. Prove that the following are equivalent.


1. ~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn are linearly independent.

2. ~v1 , . . . , ~vj , . . . , ~vi , . . . , ~vn are linearly independent.

3. ~v1 , . . . , c~vi , . . . , ~vn , c ≠ 0, are linearly independent.

4. ~v1 , . . . , ~vi + c~vj , . . . , ~vj , . . . , ~vn are linearly independent.



Example 1.3.9. By the last part of Proposition 1.3.6, four vectors (1, log 2, 3),
(0, e, e^{−1}), (√2, 100, sin 1), (π, −0.5, 2.3) in R3 are linearly dependent.

Exercise 1.37. If m < n in Proposition 1.3.6, what can you conclude?

Exercise 1.38. Explain that the vectors are linearly dependent. Then interpret the result
in terms of systems of linear equations.
1. (1, 2, 3), (2, 3, 1), (3, 1, 2), (1, 3, 2), (3, 2, 1), (2, 1, 3).

2. (1, 3, 2, −4), (10, −2, 3, 7), (0, 8, −2, 5), (8, −9, 3, 6), (7, −9, 3, −5).

3. (1, 3, 2, −4), (10, −2, 3, 7), (0, 8, −2, 5), (π, 3π, 2π, −4π).

4. (1, 3, 2, −4, 0), (10, −2, 3, 7, 0), (0, 8, −2, 5, 0), (8, −9, 3, 6, 0), (7, −9, 3, −5, 0).

The criterion for linear independence in Proposition 1.3.6 does not depend on
the right side. This means we only need to verify the uniqueness for the case ~b = ~0.
We call the corresponding system A~x = ~0 homogeneous. The homogeneous system
always has the zero solution ~x = ~0. Therefore we only need to ask the uniqueness
of the solution of A~x = ~0.
The relation between the uniqueness for A~x = ~b and the uniqueness for A~x = ~0
holds in general vector space.

Proposition 1.3.7. A set of vectors ~v1 , ~v2 , . . . , ~vn are linearly independent if and
only if
x1~v1 + x2~v2 + · · · + xn~vn = ~0 =⇒ x1 = x2 = · · · = xn = 0.

Proof. The property in the proposition is the special case of y1 = · · · = yn = 0 in


the definition of linear independence. Conversely, if the special case holds, then

x1~v1 + x2~v2 + · · · + xn~vn = y1~v1 + y2~v2 + · · · + yn~vn


=⇒ (x1 − y1 )~v1 + (x2 − y2 )~v2 + · · · + (xn − yn )~vn = ~0
=⇒ x1 − y1 = x2 − y2 = · · · = xn − yn = 0.

Example 1.3.10. By Proposition 1.3.7, a single vector ~v is linearly independent if


and only if a~v = ~0 implies a = 0. By Proposition 1.1.4, the property means exactly
~v ≠ ~0.

Example 1.3.11. To show that cos t, sin t, et are linearly independent, we only need
to verify that the equality x1 cos t + x2 sin t + x3 et = 0 implies x1 = x2 = x3 = 0.
If the equality holds, then by evaluating at t = 0, π/2, π, we get

x1 + x3 = 0,   x2 + x3 e^{π/2} = 0,   −x1 + x3 e^π = 0.

Adding the first and third equations together, we get x3 (1 + eπ ) = 0. This implies
x3 = 0. Substituting x3 = 0 to the first and second equations, we get x1 = x2 = 0.
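The evaluation trick in this example also works numerically: if the vectors of sampled values are linearly independent in R^3, then the functions are linearly independent. A sketch (an addition, not from the text; numpy assumed):

```python
import numpy as np

ts = np.array([0.0, np.pi / 2, np.pi])     # the sample points used above
samples = np.column_stack([np.cos(ts), np.sin(ts), np.exp(ts)])

print(np.linalg.matrix_rank(samples))      # 3, so cos t, sin t, e^t are independent
```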

Example 1.3.12. To show t(t − 1), t(t − 2), (t − 1)(t − 2) are linearly independent,
we consider

a1 t(t − 1) + a2 t(t − 2) + a3 (t − 1)(t − 2) = 0.

Since the equality holds for all t, we may take t = 0 and get a3 (−1)(−2) = 0.
Therefore a3 = 0. Similarly, by taking t = 1 and t = 2, we get a2 = 0 and a1 = 0.
In general, suppose t0 , t1 , . . . , tn are distinct, and¹
$$p_i(t) = \prod_{j \ne i} (t - t_j) = (t - t_0)(t - t_1) \cdots \widehat{(t - t_i)} \cdots (t - t_n)$$
is the product of all t − t∗ except t − ti . Then p0 (t), p1 (t), . . . , pn (t) are linearly
independent.

¹The notation ˆ? is the mathematical convention that the term ? is missing.

Exercise 1.39. For a ≠ b, show that e^{at} and e^{bt} are linearly independent. What about e^{at},
e^{bt} and e^{ct}?

Exercise 1.40. Prove that cos t, sin t, et do not span the vector space of all smooth functions,
by showing that the constant function 1 is not a linear combination of the three functions.
Hint: Take several values of t in x1 cos t + x2 sin t + x3 et = 1 and then derive contra-
diction.

Exercise 1.41. Determine whether the given functions are linearly independent, and whether
f (x), g(t) can be expressed as linear combinations of given functions.
1. cos2 t, sin2 t. f (t) = 1, g(t) = t.

2. cos2 t, sin2 t, 1. f (t) = cos 2t, g(t) = t.

3. 1, t, et , tet . f (t) = (1 + t)et , g(t) = f 0 (t).

4. cos2 t, cos 2t. f (t) = a, g(t) = a + sin2 t.

The following result says that linear dependence means some vector is a “waste”.
The proof makes use of division by nonzero number.

Proposition 1.3.8. A set of vectors are linearly dependent if and only if one vector
is a linear combination of the other vectors.

Proof. If ~v1 , ~v2 , . . . , ~vn are linearly dependent, then by Proposition 1.3.7, we have
x1~v1 + x2~v2 + · · · + xn~vn = ~0, with some xi ≠ 0. Then we get
$$\vec{v}_i = -\frac{x_1}{x_i}\vec{v}_1 - \cdots - \frac{x_{i-1}}{x_i}\vec{v}_{i-1} - \frac{x_{i+1}}{x_i}\vec{v}_{i+1} - \cdots - \frac{x_n}{x_i}\vec{v}_n.$$
This shows that the i-th vector is a linear combination of the other vectors.
Conversely, if

~vi = x1~v1 + · · · + xi−1~vi−1 + xi+1~vi+1 + · · · + xn~vn ,

then the left side is a linear combination with coefficients (0, . . . , 0, 1(i) , 0, . . . , 0),
and the right side has coefficients (x1 , . . . , xi−1 , 0(i) , xi+1 , . . . , xn ). Since coefficients
are different, by definition, the vectors are linearly dependent.

Example 1.3.13. By Proposition 1.3.8, two vectors ~u and ~v are linearly dependent if and
only if either ~u is a linear combination of ~v , or ~v is a linear combination of ~u. In other
words, the two vectors are parallel.
Two vectors are linearly independent if and only if they are not parallel.

Exercise 1.42. Prove that ~v1 , ~v2 , . . . , ~vn , ~w are linearly independent if and only if ~v1 , ~v2 , . . . , ~vn
are linearly independent, and ~w is not a linear combination of ~v1 , ~v2 , . . . , ~vn .
The property implies that any subset of a linearly independent set is also linearly
independent. What does this tell you about linear dependence?

Exercise 1.43. Prove that ~v1 , ~v2 , . . . , ~vn are linearly dependent if and only if some ~vi is a
linear combination of the previous vectors ~v1 , ~v2 , . . . , ~vi−1 .

1.3.4 Minimal Spanning Set


Basis has two aspects, span and linear independence. If we already know that some
vectors span a vector space, then we can achieve linear independence (and therefore
basis) by deleting “unnecessary” vectors.

Definition 1.3.9. A vector space is finite dimensional if it is spanned by finitely


many vectors.

Theorem 1.3.10. In a finite dimensional vector space, a set of vectors is a basis if


and only if it is a minimal spanning set. Moreover, any finite spanning set contains
a minimal spanning set and therefore a basis.

By a minimal spanning set α, we mean α spans V , and any subset strictly smaller
than α does not span V .

Proof. Suppose α = {~v1 , ~v2 , . . . , ~vn } spans V . The set α is either linearly indepen-
dent, or linearly dependent.
If α is linearly independent, then it is a basis by definition. Moreover, by Proposi-
tion 1.3.8, we know ~vi is not a linear combination of ~v1 , · · · , ~vi−1 , ~vi+1 . . . , ~vn . There-
fore after deleting ~vi , the remaining vectors ~v1 , · · · , ~vi−1 , ~vi+1 . . . , ~vn do not span V .
This proves that α is a minimal spanning set.
If α is linearly dependent, then by Proposition 1.3.8, we may assume ~vi is a linear
combination of
α0 = α − {~vi } = {~v1 , · · · , ~vi−1 , ~vi+1 . . . , ~vn }.
By Proposition 1.2.1 (also see Exercise 1.30), linear combinations of α are also linear
combinations of α0 . Therefore we get a strictly smaller spanning set α0 . Then we may
ask whether α0 is linearly dependent. If the answer is yes, then α0 contains strictly
smaller spanning set. The process continues and, since α is finite, will stop after
finitely many steps. By the time we stop, we get a linearly independent spanning
set. By definition, this is a basis.
We proved that “independence =⇒ minimal” and “dependence =⇒ not
minimal”. This implies that “independence ⇐⇒ minimal”. Since a spanning set
is independent if and only if it is a basis, we get the first part of the theorem. Then
the second part is contained in the proof above in case α is linearly dependent.

The intuition behind Theorem 1.3.10 is the following. Imagine that α is all the
people in a company, and V is all the things the company wants to do. Then α
spanning V means that the company can do all the things it wants to do. However,
the company may not be efficient in the sense that if somebody’s duty can be fulfilled
by the others (the person is a linear combination of the others), then the company
can fire the person and still do all the things. By firing unnecessary persons one
after another, eventually everybody is indispensable (linearly independent). The
result is that the company can do everything, and is also the most efficient.

Example 1.3.14. By taking a = b = 10 in Example 1.3.6, we get the row operations


   
$$\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 10 & 10 \end{pmatrix} \to \begin{pmatrix} 1 & 4 & 7 & 10 \\ 0 & -3 & -6 & -9 \\ 0 & 0 & 1 & -2 \end{pmatrix}.$$

Since all rows are pivot, the four vectors (1, 2, 3), (4, 5, 6), (7, 8, 10), (10, 11, 10) span
R3 . By restricting the row operations to the first three columns, the row echelon
form we get still has all rows being pivot. Therefore the spanning set can be reduced
to a strictly smaller spanning set (1, 2, 3), (4, 5, 6), (7, 8, 10). Alternatively, we may
view the matrix as the augmented matrix of a system of linear equations. Then the
row operation implies that the fourth vector (10, 11, 10) is a linear combination of
the first three vectors. This means that the fourth vector is a waste, and we can
delete the fourth vector to get a strictly smaller spanning set.
Is the smaller spanning set (1, 2, 3), (4, 5, 6), (7, 8, 10) minimal? If we further
delete the third vector, then we are talking about the same row operation applied
to the first two columns. The row echelon form we get has only two pivot rows, and
third row is not pivot. Therefore (1, 2, 3), (4, 5, 6) do not span R3 . One may also
delete the second or the first vector and do the similar investigation. In fact, by
Proposition 1.3.4, two vectors can never span R3 .

Exercise 1.44. Suppose we have row operations


 
• ∗ ∗ ∗ ∗ ∗
0 0 • ∗ ∗ ∗
(~v1 , ~v2 , . . . , ~v6 ) →  .
0 0 0 • ∗ ∗
0 0 0 0 0 •

Explain that the six vectors span R4 and ~v1 , ~v3 , ~v4 , ~v6 form a minimal spanning set (and
therefore a basis).

Exercise 1.45. Show that the vectors span P3 and then find a minimal spanning set.

1. 1 + t, 1 + t2 , 1 + t3 , t + t2 , t + t3 , t2 + t3 .

2. t + 2t2 + 3t3 , −t − 2t2 − 3t3 , 1 + 2t2 + 3t3 , 1 − t, 1 + t + 3t3 , 1 + t + 2t2 .


1.3. BASIS 33

1.3.5 Maximal Independent Set


If we already know that some vectors are linearly independent, then we can achieve
the span property (and therefore basis) by adding “independent” vectors. Therefore
we have two ways of constructing basis
delete vectors add vectors
span vector space −−−−−−−→ basis ←−−−−−− linearly independent

Using the analogy of company, linear independence means there is no waste.


What we need to achieve is to do all the things the company wants to do. If there is
a job that the existing employees cannot do, then we need to hire somebody who can
do the job. The new hire is linearly independent of the existing employees because
the person can do something the others cannot do. We keep adding new necessary
people until the company can do all the things, and therefore achieve the span.

Theorem 1.3.11. In a finite dimensional vector space, a set of vectors is a basis if


and only if it is a maximal linearly independent set. Moreover, any linearly inde-
pendent set can be extended to a maximal linearly independent set and therefore a
basis.

By a maximal linearly independent set α, if α is linearly independent, and any


set strictly bigger than α is linearly dependent.
Proof. Suppose α = {~v1 , ~v2 , . . . , ~vn } is a linearly independent set in V . The set α
either spans V , or does not span V .
If α spans V , then it is a basis by definition. Moreover, any vector ~v ∈ V is a
linear combination of α. By Proposition 1.3.8, adding ~v to α makes the set linearly
dependent. Therefore α is a maximal linearly independent set.
If α does not span V , then there is a vector ~v ∈ V which is not a linear com-
bination of α. If α0 = α ∪ {~v } = {~v1 , ~v2 , . . . , ~vn , ~v } is linearly dependent, then by
Proposition 1.3.7, we have

a1~v1 + a2~v2 + · · · + an~vn + a~v = ~0,

and some coefficients are nonzero. If a = 0, then some of a1 , a2 , . . . , an are nonzero.


By Proposition 1.3.7, this contradicts the linear independence of α. If a 6= 0, then
we get
a1 a2 an
~v = − ~v1 − ~v2 + · · · − ~vn ,
a a a
contradicting the assumption that ~v is not a linear combination of α. This proves
that α0 is a linearly independent set strictly bigger than α.
Now we may ask whether α0 spans V . If the answer is no, then we can enlarge α0
further by adding another vector that is not a linear combination of α0 . The result
is again a bigger linearly independent set. The process continues, and must stop
after finitely many steps due to the finite dimension assumption (full justification by
34 CHAPTER 1. VECTOR SPACE

Proposition 1.3.13). By the time we stop, we get a linearly independent spanning


set, which is a basis by definition.
We proved that “span V =⇒ maximal” and “not span V =⇒ not maximal”.
This implies that “span V ⇐⇒ maximal”. Since a linearly independent set spans
the vector space if and only if it is a basis, we get the first part of the theorem. Then
the second part is contained in the proof above in case α does not span V .

Example 1.3.15. We take the transpose of the matrix in Example 1.3.14, and carry
out row operations (this is the column operations on the earlier matrix)
       
1 2 3 R4 −R3 1 2 3 1 2 3 1 2 3
3 −R2 R3 −R2 R2 −3R1
4 5 6 R R2 −R1  3 3 3 R4 −R2  3 3 3 0 −3 −6 .
R4 +3R3  
 − − − −
→   −
− − −
→   − − −−→
 7 8 10 3 3 4 0 0 1  0 0 1
11 12 10 3 3 0 0 0 −3 0 0 0
By Proposition 1.3.6, this shows that (1, 4, 7, 11), (2, 5, 8, 12), (3, 6, 10, 10) are linearly
independent. However, since the last row is not pivot (or the last part of Proposition
1.3.4), the three vectors do not span R4 .
To enlarge the linearly independent set of three vectors, we try to add a vector
so that the same row operations produces (0, 0, 0, 1). The vector can be obtained
by reversing the operations on (0, 0, 0, 1)
       
0 R2 +R1 0 0 0
R4 −3R3
0 R 3 +R2
  R4 +R2
R4 +R3 0 R3 +R2 0 R2 +3R1 0
  
 ←
0 −−−− 0 ←−−−− 0 ←−−−− 0 .

1 1 1 1
Then we have row operations
   
1 2 3 0 1 2 3 0
 −→ 0 −3 −6 0 .
4 5 6 0  

7 8 10 0 0 0 1 0
11 12 10 1 0 0 0 1
This shows that (1, 4, 7, 11), (2, 5, 8, 12), (3, 6, 10, 10), (0, 0, 0, 1) form a basis.
If we try a more interesting vector (4, 3, 2, 1)
       
4 R2 +R1 4 4 4
R4 −3R3
19 R 3 +R2
15
R4 +R2
R +R  3 2   2 1 3
R +R 15 R +3R
4

3 
    
  ←−
36 −−− 17 ←−−−−  2  ←−−−− 2 ,
46 10 −5 1
then we find that (1, 4, 7, 11), (2, 5, 8, 12), (3, 6, 10, 10), (4, 19, 36, 46) form a basis.

Exercise 1.46. For the column vectors in Exercise 1.28, find a linearly independent subset,
and then extend to a basis. Note that the linearly independent subset should be as big as
possible, to avoid the lazy choice such as picking the first column only.
1.3. BASIS 35

Exercise 1.47. Explain that t2 (t − 1), t(t2 − 1), t2 − 4 are linearly independent. Then extend
to a basis of P3 .

1.3.6 Dimension
Let V be a finite dimensional vector space. By Theorem 1.3.10, V has a basis
α = {~v1 , ~v2 , . . . , ~vn }. Then the coordinate with respect to α translates the linear
algebra of V to the Euclidean space [·]α : V ↔ Rn .
Let β = {w ~ 1, w ~ k } be another basis of V . Then the α-coordinate translates
~ 2, . . . , w
β into a basis [β]α = {[w ~ 1 ]α , [w ~ k ]α } of Rn . Therefore [β]α (a set of k
~ 2 ]α , . . . , [w
vectors) spans Rn and is also linearly independent. By the last parts of Propositions
1.3.4 and 1.3.6, we get k = n. This shows that the following concept is well defined.

Definition 1.3.12. The dimension of a (finite dimensional) vector space is the num-
ber of vectors in a basis.

We denote the dimension by dim V . By Examples 1.3.1, 1.3.2, and Exercise 1.23,
we have dim Rn = n, dim Pn = n + 1, dim Mm×n = mn.
If dim V = m, then V can be identified with the Euclidean space Rm , and the
linear algebra in V is the same as the linear algebra in Rm . For example, we may
change Rm in Propositions 1.3.4 and 1.3.6 to any vector space V of dimension m,
and get the following.

Proposition 1.3.13. Suppose V is a finite dimensional vector space.

1. If n vectors span V , then dim V ≤ n.

2. If n vectors in V are linearly independent, then dim V ≥ n.

Continuation of the proof of Theorem 1.3.11. The proof of the theorem creates big-
ger and bigger linearly independent sets of vectors. By Proposition 1.3.13, however,
the set is no longer linearly independent when the number of vectors is > dim V .
This means that, if the set α we start with has n vectors, then the construction in
the proof stops after at most dim V − n steps.
We note that the argument uses Theorem 1.3.10, for the existence of basis and
then the concept of dimension. What we want to prove is Theorem 1.3.11, which is
not used in the argument.

Exercise 1.48. Explain that the vectors do not span the vector space.
√ √
1. 3 + 2t − πt2 − 3t3 , e + 100t + 2 3t2 , 4πt − 15.2t2 + t3 .
     
3 8 2 8 4 7
2. , , .
4 9 6 5 5 0
36 CHAPTER 1. VECTOR SPACE
 √  √     
π 3 2 π
√ 3 100 sin 2
√ π
3. , , , .
1 2π −10 2 2 −77 6 2π 2 sin 2

Exercise 1.49. Explain that the vectors are linearly dependent.


√ √ √
1. 3 + 2t − πt2 , e + 100t + 2 3t2 , 4πt − 15.2t2 , π + e2 t2 .
     
3 8 2 8 1 0
2. , , .
4 9 6 5 −2 4
 √  √     
π 3 2 π
√ , 3 100 sin
√ 2 π
3. , , .
1 2π −10 2 2 −77 6 2π 2 sin 2

Theorem 1.3.14. Suppose α is a collection of vectors in a finite dimensional vector


space V . Then any two of the following imply the third.
1. The number of vectors in α is dim V .

2. α spans V .

3. α is linearly independent.

To prove the theorem, we may translate into Euclidean space. Then by Propo-
sitions 1.3.4 and 1.3.6, we only need to prove the following properties about system
of linear equations.

Theorem 1.3.15. Suppose A is an m × n matrix. Then any two of the following


imply the third.
1. m = n.

2. A~x = ~b has solution for all ~b.

3. The solution of A~x = ~b is unique.

Proof. If the second and third statement hold, then by Propositions 1.3.4 and 1.3.6,
we have m ≤ n and m ≥ n. Therefore the first statement holds.
Now we assume the first statement, and prove that the second and third are
equivalent. The first statement means A is an n × n matrix. By Proposition 1.3.4,
the second statement means all rows are pivot, i.e., the number of pivots is n. By
Proposition 1.3.6, the third statement means all columns are pivot, i.e., the number
of pivots is n. Therefore the two statements are the same.

Example 1.3.16. By Example 1.3.8, the three quadratic polynomials t(t − 1), t(t −
2), (t − 1)(t − 2) are linearly independent. By Theorem 1.3.14 and dim P2 = 3, we
know the three vectors form a basis of P2 .
For the general discussion, see Example 2.2.13.
1.3. BASIS 37

Exercise 1.50. Use Theorem 1.3.10 to give another proof of the first part of Proposition
1.3.13. Use Theorem 1.3.11 to give another proof of the second part of Proposition 1.3.13.

Exercise 1.51. Suppose the number of vectors in α is dim V . Explain the following are
equivalent.

1. α spans V .

2. α is linearly independent.

3. α is a basis.

Exercise 1.52. Show that the vectors form a basis.

1. (1, 1, 0), (1, 0, 1), (0, 1, 1) in R3 .

2. (1, 1, −1), (1, −1, 1), (−1, 1, 1) in R3 .

3. (1, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1) in R4 .

Exercise 1.53. Show that the vectors form a basis.

1. 1 + t, 1 + t2 , t + t2 in P2 .
       
1 1 1 1 1 0 0 1
2. , , , in M2×2 .
1 0 0 1 1 1 1 1

Exercise 1.54. For which a do the vectors form a basis?

1. (1, 1, 0), (1, 0, 1), (0, 1, a) in R3 .

2. (1, −1, 0), (1, 0, −1), (0, 1, a) in R3 .

3. (1, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, a) in R4 .

Exercise 1.55. For which a do the vectors form a basis?

1. 1 + t, 1 + t2 , t + at2 in P2 .
       
1 1 1 1 1 0 0 1
2. , , , in M2×2 .
1 0 0 1 1 1 1 a

Exercise 1.56. Show that (a, b), (c, d) form a basis of R2 if and only if ad 6= bc. What is
the condition for a to be a basis of R1 ?

Exercise 1.57. If the columns of a matrix form a basis of the Euclidean space, what is the
reduced row echelon form of the matrix?

Exercise 1.58. Show that α is a basis if and only if β is a basis.


38 CHAPTER 1. VECTOR SPACE

1. α = {~v1 , ~v2 }, β = {~v1 + ~v2 , ~v1 − ~v2 }.

2. α = {~v1 , ~v2 , ~v3 }, β = {~v1 + ~v2 , ~v1 + ~v3 , ~v2 + ~v3 }.

3. α = {~v1 , ~v2 , ~v3 }, β = {~v1 , ~v1 + ~v2 , ~v1 + ~v2 + ~v3 }.

4. α = {~v1 , ~v2 , . . . , ~vn }, β = {~v1 , ~v1 + ~v2 , . . . , ~v1 + ~v2 + · · · + ~vn }.

Exercise 1.59. Use Exercises 1.32 and 1.37 to prove that the following are equivalent.

1. {~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn } is a basis.

2. {~v1 , . . . , ~vj , . . . , ~vi , . . . , ~vn } is a basis.

3. {~v1 , . . . , c~vi , . . . , ~vn }, c 6= 0, is a basis.

4. {~v1 , . . . , ~vi + c~vj , . . . , ~vj , . . . , ~vn } is a basis.

1.3.7 Calculation of Coordinate


In Examples 1.3.1 and 1.3.1, we see the coordinates with respect to simple bases are
quite easy to find. For the more complicated bases, it takes some serious calculation
to find the coordinates.

Example 1.3.17. We have a basis of R3 (confirmed by later row operations)

α = {~v1 , ~v2 , ~v3 } = {(1, −1, 0), (1, 0, −1), (1, 1, 1)}.

The coordinate of a general vector ~x = (x1 , x2 , x3 ) ∈ R3 with respect to α is the


unique solution of a system of linear equation, whose augmented matrix is
 
1 1 1 x1
(~v1 ~v2 ~v3 ~b) = −1 0 1 x2  .
0 −1 1 x3

Of course we can find the coordinate [~x]α by carrying out the row operations on this
augmented matrix.
Alternatively, we may first try to find the α-coordinates of the standard basis
vectors ~e1 , ~e2 , ~e3 . Then the α-coordinate of a general vector is

[(x1 , x2 , x3 )]α = [x1~e1 + x2~e2 + x3~e3 ]α = x1 [~e1 ]α + x2 [~e2 ]α + x3 [~e3 ]α .

The coordinate [~ei ]α is calculated by row operations on the augmented matrix


(~v1 ~v2 ~v3 ~ei ). Note that the three augmented matrices have the same coefficient
1.3. BASIS 39

matrix part. Therefore we may combine the three calculations together by carry
out the following row operations
  R3 +R1 +R2  
1 1 1 1 0 0 R1 ↔R2 −1 0 1 0 1 0
R ↔R3
(~v1 ~v2 ~v3 ~e1 ~e2 ~e3 ) = −1 0 1 0 1 0 −−−2−−− →  0 −1 1 0 0 1
0 −1 1 0 0 1 0 0 3 1 1 1
−R1 
1 2 1
−R2
  
1
R3
1 0 −1 0 −1 0 R1 +R3 1 0 0 3
− 3 3
R +R3
−3−→ 0 1 −1 0 0 −1 −−2−−→ 0 1 0 1 1 − 2  .
3 3 3
0 0 1 13 31 1
3
0 0 1 1
3
1
3
1
3

If we restrict the row operations to the first four columns (~v1 ~v2 ~v3 ~e1 ), then the
reduced row echelon form is
1 0 0 31
   
1 0 0
1
(I [~e1 ]α ) = 0 1 0 3 , where I = (I [~e1 ]α ) = 0
  1 0 = (~e1 ~e2 ~e3 ).
0 0 1 13 0 0 1
The fourth column of the reduced echelon form is the solution [~e1 ]α = ( 31 , 31 , 13 ).
Similarly, we get [~e2 ]α = (− 23 , 13 , 13 ) (fifth column) and [~e3 ]α = ( 31 , − 23 , 13 ) (sixth
column).
By Proposition 1.3.2, the α-coordinate of a general vector in R3 is
[~b]α = x1 [~e1 ]α + x2 [~e2 ]α + x3 [~e3 ]α
1  2  1   
−3 1 −2 1
3 3 1
= x1  13  + x2  31  + x3 − 23  = 1 1 −2 ~x.
1 1 1 3
3 3 3
1 1 1

In general, if α = {~v1 , ~v2 , . . . , ~vn } is a basis of Rn , then all rows and all columns
of A = (~v1 ~v2 . . . ~vn ) are pivot. This means that the reduced row echelon form of
A is the identity matrix
 
1 0 ··· 0
0 1 · · · 0
I =  .. .. ..  = (~e1 ~e2 · · · ~en ).
 
. . .
0 0 ··· 1
In other words, we have row operations changing A to I. Applying the same row
operations to the n × 2n matrix (A I), we get
(A I) → (I B).
Then columns of B are the coordinates [~ei ]α of ~ei with respect to α, and the general
α-coordinate is given by
[~x]α = B~x.

Exercise 1.60. Find the coordinates of a general vector in Euclidean space with respect to
basis.
40 CHAPTER 1. VECTOR SPACE

1. (0, 1), (1, 0). 5. (1, 1, 0), (1, 0, 1), (0, 1, 1).

2. (1, 2), (3, 4). 6. (1, 2, 3), (0, 1, 2), (0, 0, 1).

3. (a, 0), (0, b), a, b 6= 0. 7. (0, 1, 2), (0, 0, 1), (1, 2, 3).

4. (cos θ, sin θ), (− sin θ, cos θ). 8. (0, −1, 2, 1), (2, 3, 2, 1), (−1, 0, 3, 2), (4, 1, 2, 3).

Exercise 1.61. Find the coordinates of a general vector with respect to the basis in Exercise
1.52.

Exercise 1.62. Determine whether vectors form a basis of Rn . Moreover, find the coordi-
nates with respect to the basis.

1. ~e1 − ~e2 , ~e2 − ~e3 , . . . , ~en−1 − ~en , ~en − ~e1 .

2. ~e1 − ~e2 , ~e2 − ~e3 , . . . , ~en−1 − ~en , ~e1 + ~e2 + · · · + ~en .

3. ~e1 + ~e2 , ~e2 + ~e3 , . . . , ~en−1 + ~en , ~en + ~e1 .

4. ~e1 , ~e1 + 2~e2 , ~e1 + 2~e2 + 3~e3 , . . . , ~e1 + 2~e2 + · · · + n~en .

Exercise 1.63. Determine whether polynomials form a basis of Pn . Moreover, find the
coordinates with respect to the basis.

1. 1 − t, t − t2 , . . . , tn−1 − tn , tn − 1.

2. 1 + t, t + t2 , . . . , tn−1 + tn , tn + 1.

3. 1, 1 + t, 1 + t2 , . . . , 1 + tn .

4. 1, t − 1, (t − 1)2 , . . . , (t − 1)n .

Exercise 1.64. Suppose ad 6= bc. Find the coordinate of a vector in R2 with respect to the
basis (a, b), (c, d) (see Exercise 1.56).

Exercise 1.65. Suppose there is an identification of a vector space V with Euclidean space
Rn . In other words, there is a one-to-one correspondence F : V → Rn preserving vector
space operations
F (~u + ~v ) = F (~u) + F (~v ), F (a~u) = aF (~u).
Let ~vi = F −1 (~ei ) ∈ V .

1. Prove that α = {~v1 , ~v2 , . . . , ~vn } is a basis of V .

2. Prove that F (~x) = [~x]α .


Chapter 2

Linear Transformation

Linear transformation is the relation between vector spaces. We discuss general


concepts about maps such as onto, one-to-one, and invertibility, and then specialise
these concepts to linear transformations. We also introduce matrices as the for-
mula for linear transformations. Then operations of linear transformations become
operations of matrices.

2.1 Linear Transformation and Matrix


Definition 2.1.1. A map L : V → W between vector spaces is a linear transforma-
tion if it preserves two operations in vector spaces

L(~u + ~v ) = L(~u) + L(~v ), L(c~u) = cL(~u).

If V = W , then we also call L a linear operator. If W = R, then we also call L a


linear functional.

Geometrically, the preservation of two operations means the preservation of par-


allelogram and scaling.
The collection of all linear transformations from V to W is denoted Hom(V, W ).
Here “Hom” refers to homomorphism, which means maps preserving algebraic struc-
tures.

Example 2.1.1. The identity map I(~v ) = ~v : V → V is a linear operator. We also


denote by IV to emphasise the vector space V .
The zero map O(~v ) = ~0 : V → W is a linear transformation.

Example 2.1.2. Proposition 1.3.2 means that the α-coordinate map is a linear trans-
formation.

41
42 CHAPTER 2. LINEAR TRANSFORMATION

Example 2.1.3. The rotation Rθ of the plane by angle θ and the reflection (flipping)
Fθ with respect to the direction of angle ρ are linear, because they clearly preserve
parallelogram and scaling.


ρ

Figure 2.1.1: Rotation by angle θ and flipping with respect to angle ρ.

Example 2.1.4. The projection of R3 to a plane R2 passing through the origin is a


linear transformation. More generally, any (orthogonal) projection of R3 to a plane
inside R3 and passing through the origin is a linear operator.

~x R3

R2

P (~x)

Figure 2.1.2: Projection from R3 to R2 .

Example 2.1.5. The evaluation of functions at several places is a linear transfor-


mation
L(f ) = (f (0), f (1), f (2)) : C ∞ → R3 .
In the reverse direction, the linear combination of several functions is a linear trans-
formation
L(x1 , x2 , x3 ) = x1 cos t + x2 sin t + x3 et : R3 → C ∞ .
The idea is extended in Exercise 2.19.
2.1. LINEAR TRANSFORMATION AND MATRIX 43

Example 2.1.6. In C ∞ , taking the derivative is a linear operator

f 7→ f 0 : C ∞ → C ∞ .

The integration is a linear operator


Z t
f (t) 7→ f (τ )dτ : C ∞ → C ∞ .
0

Multiplying a fixed function a(t) is also a linear operator

f (t) 7→ a(t)f (t) : C ∞ → C ∞ .

Exercise 2.1. Is the map a linear transformation?

1. (x1 , x2 , x3 ) 7→ (x1 , x2 + x3 ) : R3 → R2 . 3. (x1 , x2 , x3 ) 7→ (x3 , x1 , x2 ) : R3 → R3 .

2. (x1 , x2 , x3 ) 7→ (x1 , x2 x3 ) : R3 → R2 . 4. (x1 , x2 , x3 ) 7→ x1 +2x2 +3x3 : R3 → R.

Exercise 2.2. Is the map a linear transformation?

1. f 7→ f 2 : C ∞ → C ∞ . 6. f 7→ f 0 + 2f : C ∞ → C ∞ .

2. f (t) 7→ f (t2 ) : C ∞ → C ∞ . 7. f 7→ (f (0) + f (1), f (2)) : C ∞ → R2 .

3. f 7→ f 00 : C ∞ → C ∞ . 8. f 7→ f (0)f (1) : C ∞ → R.
R1
4. f (t) 7→ f (t − 2) : C ∞ → C ∞ . 9. f 7→ 0 f (t)dt : C ∞ → R.
Rt
5. f (t) 7→ f (2t) : C ∞ → C ∞ . 10. f 7→ 0 τ f (τ )dτ : C ∞ → C ∞ .

2.1.1 Linear Transformation of Linear Combination


A linear transformation L : V → W preserves linear combination

L(x1~v1 + x2~v2 + · · · + xn~vn ) = x1 L(~v1 ) + x2 L(~v2 ) + · · · + xn L(~vn ) (2.1.1)


= x1 w ~ 2 + · · · + xn w
~ 1 + x2 w ~ n, w~ i = L(~vi ).

If ~v1 , ~v2 , . . . , ~vn span V , then any ~x ∈ V is a linear combination of ~v1 , ~v2 , . . . , ~vn , and
the formula implies that a linear transformation is determined by its values on a
spanning set.

Proposition 2.1.2. If ~v1 , ~v2 , . . . , ~vn span V , then two linear transformations L, K
on V are equal if and only if L(~vi ) = K(~vi ) for each i.

Conversely, given assigned values w~ i = L(~vi ) on a spanning set, the following


says when the formula (2.1.1) gives a well defined linear transformation.
44 CHAPTER 2. LINEAR TRANSFORMATION

Proposition 2.1.3. If ~v1 , ~v2 , . . . , ~vn span a vector space V , and w


~ 1, w
~ 2, . . . , w
~ n are
vectors in W , then (2.1.1) gives a well defined linear transformation L : V → W if
and only if

x1~v1 + x2~v2 + · · · + xn~vn = ~0 =⇒ x1 w ~ n = ~0.


~ 2 + · · · + xn w
~ 1 + x2 w

Proof. The formula gives a well defined map if and only if

x1~v1 + x2~v2 + · · · + xn~vn = y1~v1 + y2~v2 + · · · + yn~vn

implies
x1 w ~ 2 + · · · + xn w
~ 1 + x2 w ~ n = y1 w ~ 2 + · · · + yn w
~ 1 + y2 w ~ n.
Let zi = xi − yi . Then the condition becomes

z1~v1 + z2~v2 + · · · + zn~vn = ~0

implying
z1 w
~ 1 + z2 w ~ n = ~0.
~ 2 + · · · + zn w
This is the condition in the proposition.
After showing L is well defined, we still need to verify that L is a linear trans-
formation. For

~x = x1~v1 + x2~v2 + · · · + xn~vn , ~y = y1~v1 + y2~v2 + · · · + yn~vn ,

by (2.1.1), we have

L(~x + ~y ) = L((x1 + y1 )~v1 + (x2 + y2 )~v2 + · · · + (xn + yn )~vn )


= (x1 + y1 )w~ 1 + (x2 + y2 )w ~ 2 + · · · + (xn + yn )w ~n
= (x1 w
~ 1 + x2 w~ 2 + · · · + xn w
~ n ) + (y1 w ~ 2 + · · · + yn w
~ 1 + y2 w ~ n)
= L(~x) + L(~y ).

We can similarly show L(c~x) = cL(~x).


If ~v1 , ~v2 , . . . , ~vn is a basis of V , then the condition of Proposition 2.1.3 is satisfied
(the first =⇒ is due to linear independence)

x1~v1 + x2~v2 + · · · + xn~vn = ~0 =⇒ x1 = x2 = · · · = xn = 0


=⇒ x1 w
~ 1 + x2 w ~ n = ~0.
~ 2 + · · · + xn w

This leads to the following result.

Proposition 2.1.4. If ~v1 , ~v2 , . . . , ~vn is a basis of V , then (2.1.1) gives one-to-one
correspondence between linear transformations L : V → W and the collections of n
vectors w
~ 1 = L(~v1 ), w
~ 2 = L(~v2 ), . . . , w
~ n = L(~vn ) in W .
2.1. LINEAR TRANSFORMATION AND MATRIX 45

Example 2.1.7. The rotation Rθ in Example 2.1.3 takes ~e1 = (1, 0) to the vec-
tor (cos θ, sin
 θ) of radius
 1 and angle θ. It also takes ~e2 = (0, 1) πto the vector
π π
(cos θ + 2 , sin θ + 2 ) = (− sin θ, cos θ) of radius 1 and angle θ + 2 . We get
Rθ (~e1 ) = (cos θ, sin θ), Rθ (~e2 ) = (− sin θ, cos θ).

Example 2.1.8. The derivative linear transformation P3 → P2 is determined by


the derivatives of the monomials 10 = 0, t0 = 1, (t2 )0 = 2t, (t3 )0 = 3t2 . It is also
determined by the derivatives of another basis (see Example 1.3.3) of P3 : 10 = 0,
(t − 1)0 = 1, ((t − 1)2 )0 = 2(t − 1), ((t − 1)3 )0 = 3(t − 1)2 .

Exercise 2.3. Suppose ~v1 , ~v2 , . . . , ~vn are vectors in V , and L is a linear transformation on
V . Prove the following.
1. If ~v1 , ~v2 , . . . , ~vn are linearly dependent, then L(~v1 ), L(~v2 ), . . . , L(~vn ) are linearly de-
pendent.
2. If L(~v1 ), L(~v2 ), . . . , L(~vn ) are linearly independent, then ~v1 , ~v2 , . . . , ~vn are linearly
independent.

2.1.2 Linear Transformation between Euclidean Spaces


By Proposition 2.1.4, linear transformations L : Rn → W are in one-to-one corre-
spondence with the collections of n vectors in W
w
~ 1 = L(~e1 ), w
~ 2 = L(~e2 ), . . . , w
~ n = L(~en ).
In case W = Rm is also a Euclidean space, a linear transformation L : Rn → Rm
corresponds to the matrix of linear transformation
A = (w ~2 · · · w
~1 w ~ n ) = (L(~e1 ) L(~e2 ) · · · L(~en )).
The formula for L is the left (1.2.2) of a system of linear equations
L(~x) = x1 w ~ 2 + · · · + xn w
~ 1 + x2 w ~ n = A~x. (2.1.2)

Example 2.1.9. The matrix of the identity operator I : Rn → Rn is the identity


matrix, still denoted by I (or In to indicate the size)
 
1 0 ··· 0
0 1 · · · 0
(I(~e1 ) I(~e2 ) · · · I(~en )) = (~e1 ~e2 · · · ~en ) =  .. .. ..  = In .
 
. . .
0 0 ··· 1
The zero transformation O = Rn → Rm has the zero matrix
 
0 0 ··· 0
0 0 · · · 0
O = Om×n =  .. ..  .
 
..
. . .
0 0 ··· 0
46 CHAPTER 2. LINEAR TRANSFORMATION

Example 2.1.10. By Example 2.1.7, we have the matrix of the rotation Rθ in Ex-
ample 2.1.3  
cos θ − sin θ
(Rθ (~e1 ) Rθ (~e2 )) = .
sin θ cos θ
The reflection Fρ of R2 with respect to the direction of angle ρ takes ~e1 to the
vector of radius 1 and angle 2ρ, and also takes ~e2 to the vector of radius 1 and angle
2ρ − π2 . Therefore the matrix of Fρ is

cos 2ρ cos 2ρ − π2 
   
cos 2ρ sin 2ρ
= .
sin 2ρ sin 2ρ − π2 sin 2ρ − cos 2ρ

Example 2.1.11. The projection in Example 2.1.4 takes the standard basis ~e1 =
(1, 0, 0), ~e2 = (0, 1, 0), ~e3 = (0, 0, 1) of R3 to (1, 0), (0, 1), (0, 0) in R2 . The matrix
of the projection is  
1 0 0
.
0 1 0

Example 2.1.12. The linear transformation corresponding to the matrix


 
1 4 7 10
A = 2 5 8 11
3 6 9 12

is  
x1  
 x2  x1 + 4x2 + 7x3 + 10x4
4 3
x3  = 2x1 + 5x2 + 8x3 + 11x4 : R → R .
L   
3x1 + 6x2 + 9x3 + 12x4
x4
We note that    
1 + 4 · 0 + 7 · 0 + 10 · 0 1
L(~e1 ) = 2 · 1 + 5 · 0 + 8 · 0 + 11 · 0 = 2
  
3 · 1 + 6 · 0 + 9 · 0 + 12 · 0 3
is the first column of A. Similarly, L(~e2 ), L(~e3 ), L(~e4 ) are the second, third and
fourth columns of A.

Example 2.1.13. The orthogonal projection P of R3 to the plane x + y + z = 0 is


a linear transformation. The columns of the matrix of P are the projections of the
standard basis vectors to the plane. These projections are not easy to see directly.
On the other hand, we can easily find the projections of some other vectors.
First, the vectors ~v1 = (1, −1, 0) and ~v2 = (1, 0, −1) lie in the plane because they
satisfy x + y + z = 0. Since the projection clearly preserves the vectors on the plane,
we get P (~v1 ) = ~v1 and P (~v2 ) = ~v2 .
2.1. LINEAR TRANSFORMATION AND MATRIX 47

z ~x

~v3
P (~x)
y
~v1
x ~v2

Figure 2.1.3: Projection to the plane x + y + z = 0.

Second, the vector ~v3 = (1, 1, 1) is the coefficients of x+y+z = 0, and is therefore
orthogonal to the plane. Since the projection kills vectors orthogonal to the plane,
we get P (~v3 ) = ~0.
In Example 1.3.17, we found [~e1 ]α = ( 13 , 13 , 31 ). This implies

P (~e1 ) = 13 P (~v1 ) + 13 P (~v2 ) + 13 P (~v3 ) = 13 ~v1 + 31 ~v2 + 13~0 = ( 32 , − 13 , − 31 ).

By the similar idea, we get

P (~e2 ) = − 23 ~v1 + 13 ~v2 + 31~0 = (− 13 , 23 , − 13 ),


P (~e3 ) = 1 ~v1 − 2 ~v2 + 1~0 = (− 1 , − 1 , 2 ).
3 3 3 3 3 3

We conclude that the matrix of P is


2
− 13 − 31
 
3
(P (~e1 ) P (~e2 ) P (~e3 )) = − 13 2
3
− 13  .
− 13 − 3 32
1

Example 2.1.14. For the basis ~v1 , ~v2 , ~v3 in Example 2.1.13, suppose we know a
linear transformation L : R3 → R4 satisfies

L(~v1 ) = w
~ 1 = (1, 2, 3, 4), L(~v2 ) = w
~ 2 = (5, 6, 7, 8), L(~v3 ) = w
~ 3 = (9, 10, 11, 12).

To find the matrix of L, we note that a linear transformation preserves column


operations. For example, we have L(~v1 + c~v2 ) = L(~v1 ) + cL(~v2 ) = w ~ 1 + cw
~ 2 . Now
suppose some column operations change (~v1 ~v2 ~v3 ) to (~e1 ~e2 ~e3 ). We apply the same
column operations to (w
~1 w
~2 w~ 3 ) and obtain (~u1 ~u2 ~u3 ). Then we get L(~ei ) = ~ui ,
48 CHAPTER 2. LINEAR TRANSFORMATION

and the matrix of L. Therefore we carry out column operations


   
1 1 1 1 1 3
−1 0 1  −1 0 0 
   
   0 −1 1  C3 +C1 
 0 −1 0 
~v1 ~v2 ~v3 
− C3 +C2 
 
= 1 5 9 − − −
→ 1 5 15 
w~1 w~2 w~3 
2
  
 6 10 
2
 6 18 
3 7 11 3 7 21
4 8 12 4 8 24
     
1 1 1 1 0 0 1 0 0
0 −1 0  0 −1 0  0 1 0
1
    
C
3 3

C2 ↔C3 0 0 −1  C2 −C1 0 0 −1 −C2 0 0 1
C ↔C2   C3 −C1   −C3  
−−1−−→ 5 1
 5  −−−−→ 5 −4 0  −
   −→ 5
 4 0
.
6 2 6 6 −4 0  6 4 0
     
7 3 7 7 −4 0  7 4 0
8 4 8 8 −4 0 8 4 0

We conclude that the matrix of L is


 
5 4 0
6 4 0
 .
7 4 0
8 4 0

In general, if ~v1 , ~v2 , . . . , ~vn is a basis of Rn , and a linear transformation L : Rn →


m
R satisfies L(~vi ) = w ~ i , then the matrix A of L is given by column operations
   
~v1 ~v2 · · · ~vn col op I
−−−→ .
w ~1 w ~2 · · · w ~n A

Exercise 2.4. Find the matrix of the linear operator of R2 that sends (1, 2) to (1, 0) and
sends (3, 4) to (0, 1).

Exercise 2.5. Find the matrix of the linear operator of R3 that reflects with respect to the
plane x + y + z = 0.

Exercise 2.6. Use the method in Example 2.1.14 to calculate Example 2.1.13 in another
way.

Exercise 2.7. Find the matrix of the linear transformation L : R3 → R4 determined by


L(1, −1, 0) = (1, 2, 3, 4), L(1, 0, −1) = (2, 3, 4, 1), L(1, 1, 1) = (3, 4, 1, 2).

Exercise 2.8. Suppose a linear transformation L : R3 → P3 satisfies L(1, −1, 0) = 1 + 2t +


3t2 + 4t3 , L(1, 0, −1) = 2 + 3t + 4t2 + t3 , L(1, 1, 1) = 3 + 4t + t2 + 2t3 . Find L(x, y, z).
2.1. LINEAR TRANSFORMATION AND MATRIX 49

2.1.3 Operation of Linear Transformation


The addition of two linear transformations L, K : V → W is

(L + K)(~v ) = L(~v ) + K(~v ) : V → W.

The following shows that L + K preserves the addition

(L + K)(~u + ~v ) = L(~u + ~v ) + K(~u + ~v ) (definition of L + K)


= (L(~u) + L(~v )) + (K(~u) + K(~v )) (L and K preserve addition)
= L(~u) + K(~u) + L(~v ) + K(~v ) (Axioms 1 and 2)
= (L + K)(~u) + (L + K)(~v ). (definition of L + K)

We can similarly verify (L + K)(c~u) = c(L + K)(~u).


The scalar multiplication of a linear transformation is

(cL)(~v ) = cL(~v ) : V → W.

We can similarly show that cL is a linear transformation.

Proposition 2.1.5. Hom(V, W ) is a vector space.

Proof. The following proves L + K = K + L

(L + K)(~u) = L(~u) + K(~u) = K(~u) + L(~u) = (K + L)(~u).

The first and third equalities are due to the definition of addition in Hom(V, W ).
The second equality is due to Axiom 1 of vector space.
We can similarly prove the associativity (L + K) + M = L + (K + M ). The
zero vector in Hom(V, W ) is the zero transformation O(~v ) = ~0 in Example 2.1.1.
The negative of L ∈ Hom(V, W ) is K(~v ) = −L(~v ). The other axioms can also be
verified, and are left as exercises.
We can compose linear transformations K : U → V and L : V → W with match-
ing domain and range

(L ◦ K)(~v ) = L(K(~v )) : U → W.

The composition preserves the addition

(L ◦ K)(~u + ~v ) = L(K(~u + ~v )) = L(K(~u) + K(~v ))


= L(K(~u)) + L(K(~v )) = (L ◦ K)(~u) + (L ◦ K)(~v ).

The first and fourth equalities are due to the definition of composition. The second
and third equalities are due to the linearity of L and K. We can similarly verify
that the composition preserves the scalar multiplication. Therefore the composition
is a linear transformation.
50 CHAPTER 2. LINEAR TRANSFORMATION

Example 2.1.15. The composition of two rotations is still a rotation: Rθ1 ◦ Rθ2 =
Rθ1 +θ2 .

Example 2.1.16. Consider the differential equation

(1 + t2 )f 00 + (1 + t)f 0 − f = t + 2t3 .

The left is the addition of three transformations f 7→ (1 + t2 )f 00 , f 7→ (1 + t)f 0 ,


f 7→ −f .
Let D(f ) = f 0 be the derivative linear transformation in Example 2.1.6. Let
Ma (f ) = af be the linear transformation of multiplying a function a(t). Then
(1 + t2 )f 00 is the composition D2 = M1+t2 ◦ D ◦ D, (1 + t)f 0 is the composition
M1+t ◦ D, and f 7→ −f is the linear transformation M−1 . Therefore the left side of
the differential equation is the linear transformation

L = M1+t2 ◦ D ◦ D + M1+t ◦ D + M−1 .

The differential equation can be expressed as L(f (t)) = b(t) with b(t) = t+2t3 ∈ C ∞ .
In general, a linear differential equation of order n is

dn f dn−1 f dn−2 f df
a0 (t) n
+ a 1 (t) n−1
+ a 2 (t) n−2
+ · · · + an−1 (t) + an (t)f = b(t).
dt dt dt dt
If the coefficient functions a0 (t), a1 (t), . . . , an (t) are smooth, then the left side is a
linear transformation C ∞ → C ∞ .
Rt
Exercise 2.9. Interpret the Newton-Leibniz formula f (t) = f (0) + 0 f 0 (τ )dτ as an equality
of linear transformations.

Exercise 2.10. The trace of a square matrix A = (aij ) is the sum of its diagonal entries

trA = a11 + a22 + · · · + ann .

Explain that the trace is a linear functional on the vector space of Mn×n of n × n matrices,
and trAT = trA.

Exercise 2.11. Fix a vector ~v ∈ V . Prove that the evaluation map L 7→ L(~v ) : Hom(V, W ) →
W is a linear transformation.

Exercise 2.12. Let L : V → W be a linear transformation.

1. Prove that L ◦ (K1 + K2 ) = L ◦ K1 + L ◦ K2 and L ◦ (aK) = a(L ◦ K).

2. Explain that the first part means that the map L∗ = L◦· : Hom(U, V ) → Hom(U, W )
is a linear transformation.

3. Prove that I∗ = I, (L + K)∗ = L∗ + K∗ , (aL)∗ = aL∗ and (L ◦ K)∗ = L∗ ◦ K∗ .


2.1. LINEAR TRANSFORMATION AND MATRIX 51

4. Prove that L 7→ L∗ : Hom(V, W ) → Hom(Hom(U, V ), Hom(U, W )) is a linear trans-


formation.

Exercise 2.13. Let L : V → W be a linear transformation.

1. Prove that (K1 + K2 ) ◦ L = K1 ◦ L + K2 ◦ L and (aK) ◦ L = a(K ◦ L).

2. Prove that L∗ = · ◦ L : Hom(W, U ) → Hom(V, U ) is a linear transformation.

3. Prove that I ∗ = I, (L + K)∗ = L∗ + K ∗ , (aL)∗ = aL∗ and (L ◦ K)∗ = K ∗ ◦ L∗ .

4. Prove that L 7→ L∗ : Hom(V, W ) → Hom(Hom(W, U ), Hom(V, U )) is a linear trans-


formation.

Exercise 2.14. Denote by Map(X, Y ) all the maps from a set X to another set Y . For a
map f : X → Y and any set Z, define

f∗ = f ◦ : Map(Z, X) → Map(Z, Y ), f ∗ = ◦f : Map(Y, Z) → Map(X, Z).

Prove that (f ◦ g)∗ = f∗ ◦ g∗ and (f ◦ g)∗ = g ∗ ◦ f ∗ . This shows that (L ◦ K)∗ = L∗ ◦ K∗ in


Exercise 2.12 and (L ◦ K)∗ = K ∗ ◦ L∗ in Exercise 2.13 do not require linear transformation.

2.1.4 Matrix Operation


In Section 2.1.2, we have the equivalence

L ∈ Hom(Rn , Rm ) ←→ A ∈ Mm×n , L(~x) = A~x, A = (L(~e1 ) L(~e2 ) · · · L(~en )).

Any operation we can do on one side should be reflected on the other side. For
example, we know Hom(Rn , Rm ) is a vector space. We define the addition and
scalar multiplication of matrices in Mm×n in such a way to make the invertible map
Hom(Rn , Rm ) ←→ Mm×n an isomorphism of vector spaces.
Let L, K : Rn → Rm be linear transformations, with respective matrices
   
a11 a12 . . . a1n b11 b12 . . . b1n
 a21 a22 . . . a2n   b21 b22 . . . b2n 
A =  .. ..  , B =  .. ..  .
   
.. ..
 . . .   . . . 
am1 am2 . . . amn bm1 bm2 . . . bmn

Then the i-th column of the matrix of L + K : Rn → Rm is


     
a1i b1i a1i + b1i
 a2i   b2i   a2i + b2i 
(L + K)(~ei ) = L(~ei ) + K(~ei ) =  ..  +  ..  =  .
     
..
 .   .   . 
ami bmi ami + bmi
52 CHAPTER 2. LINEAR TRANSFORMATION

The addition of two matrices (of the same size) is the matrix of L + K
 
a11 + b11 a12 + b12 . . . a1n + b1n
 a21 + b21 a22 + b22 . . . a2n + b2n 
A+B = .
 
.. .. ..
 . . . 
am1 + bm1 am2 + bm2 . . . amn + bmn
Similarly,the scalar multiplication cA of a matrix is the matrix of the linear trans-
formation cL  
ca11 ca12 . . . ca1n
 ca21 ca22 . . . ca2n 
cA =  .. ..  .
 
..
 . . . 
cam1 cam2 . . . camn
We emphasise that the formulae for A + B and cA are not because they are the
obvious thing to do, but because they reflect the concepts of L + K and cL for linear
transformations.
Let L : Rn → Rm and K : Rk → Rn be linear transformations, with respective
matrices
   
a11 a12 . . . a1n b11 b12 . . . b1k
 a21 a22 . . . a2n   b21 b22 . . . b2k 
A =  .. ..  , B =  .. ..  .
   
.. ..
 . . .   . . . 
am1 am2 . . . amn bn1 bn2 . . . bnk
To get the matrix of the composition L ◦ K : Rk → Rm , we note that
 
b1i
 b2i 
B = (~v1 ~v2 · · · ~vk ), K(~ei ) = ~vi =  ..  .
 
 . 
bni
Then the i-th column of the matrix of L ◦ K is (see (1.2.1) and (1.2.2))
(L ◦ K)(~ei ) = L(K(~ei )) = L(~vi ) = A~vi
    
a11 a12 . . . a1n b1i a11 b1i + a12 b2i + · · · + a1n bni
 a21 a22 . . . a2n   b2i   a21 b1i + a22 b2i + · · · + a2n bni 
=  .. ..   ..  =  .
    
.. ..
 . . .   .   . 
am1 am2 . . . amn bni am1 b1i + am2 b2i + · · · + amn bni
We define the multiplication of two matrices (of matching size) to be the matrix of
L◦K  
c11 c12 . . . c1k
 c21 c22 . . . c2k 
AB =  .. ..  , cij = ai1 b1j + · · · + ain bnj .
 
..
 . . . 
cm1 cm2 . . . cmk
2.1. LINEAR TRANSFORMATION AND MATRIX 53

The ij-entry of AB is obtained by multiplying the i-th row of A and the j-th column
of B. For example, we have
   
a11 a12   a11 b11 + a12 b21 a11 b12 + a12 b22
a21 a22  b11 b12 = a21 b11 + a22 b21 a21 b12 + a22 b22  .
b21 b22
a31 a32 a31 b11 + a32 b21 a31 b12 + a32 b22

Example 2.1.17. The zero map O in Example 2.1.1 corresponds to the zero matrix
O in Example 2.1.9. Since O + L = L = L + O, we get O + A = A = A + O.
The identity map I in Example 2.1.1 corresponds to the identity matrix I in
Example 2.1.9. Since I ◦ L = L = L ◦ I, we get IA = A = AI.

Example 2.1.18. The composition of maps satisfies (L ◦ K) ◦ M = L ◦ (K ◦ M ).


The equality is also satisfied by linear transformations. Correspondingly, we get the
associativity (AB)C = A(BC) of the matrix multiplication.
It is very complicated to verify (AB)C = A(BC) by multiplying rows and
columns. The conceptual explanation makes such computation unnecessary.

Example 2.1.19. In Example 2.1.10, we obtained the matrix of rotation Rθ . Then


the equality Rθ1 ◦ Rθ2 = Rθ1 +θ2 in Example 2.1.15 corresponds to the multiplication
of matrices
    
cos θ1 − sin θ1 cos θ2 − sin θ2 cos(θ1 + θ2 ) − sin(θ1 + θ2 )
= .
sin θ1 cos θ1 sin θ2 cos θ2 sin(θ1 + θ2 ) cos(θ1 + θ2 )

On the other hand, we calculate the left side by multiplying the rows of the first
matrix with the columns of the second matrix, and we get
 
cos θ1 cos θ2 − sin θ1 sin θ2 − cos θ1 sin θ2 − sin θ1 cos θ2
.
cos θ1 sin θ2 + sin θ1 cos θ2 cos θ1 cos θ2 − sin θ1 sin θ2

By comparing the two sides, we get the familiar trigonometric identities

cos(θ1 + θ2 ) = cos θ1 cos θ2 − sin θ1 sin θ2 ,


sin(θ1 + θ2 ) = cos θ1 sin θ2 + sin θ1 cos θ2 .

Example 2.1.20. Let   



1 2 1 0
A= ,
I= .
3 4 0 1
 
x z
We try to find matrices X satisfying AX = I. Let X = . Then the equality
y w
becomes    
x + 2y z + 2w 1 0
AX = =I= .
3x + 4y 3z + 4w 0 1
54 CHAPTER 2. LINEAR TRANSFORMATION

This means two systems of linear equations

x + 2y = 1, z + 2w = 0,
3x + 4y = 0; 3z + 4w = 1.

We can solve two systems simultaneously by the row operations


     
1 2 1 0 1 2 1 0 1 0 −2 1
(A I) = → → .
3 4 0 1 0 −2 −3 1 0 1 23 − 21

From the first three columns, we get x = −2 and y = 32 . from the first, second and
fourth columns, we get z = 1 and w = − 12 . Therefore
 
−2 1
X= 3 .
2
− 12

In general, to solve AX = B, we may carry out the row operation on the matrix
(A B).

Exercise 2.15. Composing the reflection Fρ in Examples 2.1.3 and 2.1.10 with itself is the
identity. Explain that this means the trigonometric identity cos2 θ + sin2 θ = 1.

Exercise 2.16. Geometrically, one can see the following compositions of rotations and
refelctions.

1. Rθ ◦ Fρ is a reflection. What is the angle of reflection?

2. Fρ ◦ Rθ is a reflection. What is the angle of reflection?

3. Fρ1 ◦ Fρ2 is a rotation. What is the angle of rotation?

Interpret the geometrical observations as trigonometric identities.

Exercise 2.17. Use some examples (say rotations and reflections) to show that for two n×n
matrices, AB may not be equal to BA.

Exercise 2.18. Use the formula for matrix addition to show the commutativity A + B =
B +A and the associativity (A+B)+C = A+(B +C). Then give a conceptual explanation
to the properties without using calculation.

Exercise 2.19. Explain that the addition and scalar multiplication of matrices make the set
Mm×n of m×n matrices into a vector space. Moreover, the matrix of linear transformation
gives an isomorphism (i.e., invertible linear transformation) Hom(Rn , Rm ) → Mm×n .

Exercise 2.20. Explain that Exercise 2.12 means that the matrix multiplication satisfies
A(B + C) = AB + AC, A(aB) = a(AB), and the left multiplication X 7→ AX is a linear
transformation.
2.1. LINEAR TRANSFORMATION AND MATRIX 55

Exercise 2.21. Explain that Exercise 2.13 means that the matrix multiplication satisfies
(A + B)C = AC + BC, (aA)B = a(AB), and the right multiplication X 7→ XA is a linear
transformation.

Exercise 2.22. Let A be an m × n matrix and let B be a k × m matrix. For the trace
defined in Example 2.10, explain that trAXB is a linear functional for X ∈ Mn×k .

Exercise 2.23. Let A be an m × n matrix and let B be an n × m matrix. Prove that the
trace defined in Example 2.10 satisfies trAB = trBA.

Exercise 2.24. Add or multiply matrices, whenever you can.


     
1 2 0 1 0 0 1
1. . 3. .
3 4 1 0 1 5. 1
 0.
0 1
 
    1 1 1
4 −2 1 0 1 6. 1
 1 1 .
2. . 4. .
−3 1 0 1 0 1 1 1

Exercise 2.25. Find the n-th power matrix An (i.e., multiply the matrix to itself n times).
     
cos θ − sin θ a1 0 0 0 a b 0 0
1. .
sin θ cos θ 0 a2 0 0  0 a b 0
3. 
0
. 5.  .
0 a3 0  0 0 a b
0 0 0 a4 0 0 0 a
 
0 a 0 0
0 0 a 0
  4.  .
cos θ sin θ 0 0 0 a
2. .
sin θ − cos θ 0 0 0 0

Exercise 2.26. Solve the matrix equations.


       
1 2 3 7 8 1 2 3 4
1. X= .
4 5 6 9 10 3. 5 6  X =  7 8.
    9 10 11 b
1 4 7 10
2. 2 5 X = 8 11.
3 6 a b

Exercise 2.27. For the transpose of 2×2 matrices, verify that (AB)T = B T AT (the concep-
tual reason will be given in Section 2.4.2).) Then use this to solve the matrix equations.
         
1 2 1 0 1 2 4 −3 1 0
1. X = . 2. X = .
3 4 0 1 3 4 −2 1 0 1
56 CHAPTER 2. LINEAR TRANSFORMATION
 
−1 0
Exercise 2.28. Let A = . Find all the matrices X satisfying AX = XA. Gener-
0 1
alise your result to diagonal matrix
 
a1 0 · · · 0
 0 a2 · · · 0 
A=. ..  .
 
. ..
. . .
0 0 ··· an

2.1.5 Elementary Matrix and LU-Decomposition


The row operations can be interpreted as multiplying on the left by some special
matrices, called elementary matrices. In fact, the special matrix is obtained by ap-
plying the same row operation to the identity matrix I. For example, by exchanging
the 2nd and 4th rows, by multiplying c to the 2nd row, and by adding c multiples
of the 3rd row to the 1st row, we get respectively
     
1 0 0 0 1 0 0 0 1 0 c 0
0 0 0 1 0 c 0 0 0 1 0 0
T24 = 
0 0 1 0 , D2 (c) = 0 0 1 0 , E13 (c) = 0 0 1 0 .
    

0 1 0 0 0 0 0 1 0 0 0 1
Then
1. Tij A exchanges i-th and j-th rows of A.
2. Di (c)A multiplies c to the i-th row of A.
3. Eij (c)A adds the c multiple of the j-th row to the i-th row.
Note that the elementary matrices can also be obtained by applying similar
column operations (already appeared in Exercises 1.32, 1.37, 1.59) to I. Then we
have
1. ATij exchanges i-th and j-th columns of A.
2. ADi (a) multiplies a to the i-th column of A.
3. AEij (a) adds the a multiple of the i-th column to the j-th column.
We know that any matrix can become (reduced) row echelon form after some row
operations. This means that multiplying the left of the matrix by some elementary
matrices gives a (reduced) row echelon form.

Example 2.1.21. We have


 
1 0 0 0
0 0 0 1
T24 D2 (c) = 
0
 = D4 (c)T24 .
0 1 0
0 2 0 0
2.1. LINEAR TRANSFORMATION AND MATRIX 57

The equality can be obtained in four ways:

1. T24 ×: Exchange the 2nd and 4th rows of D2 (c).

2. ×D2 (c): Multiplying c to the 2nd column of T24 .

3. ×T24 : Exchange the 2nd and 4th columns of D4 (c).

4. D4 (c)×: Multiplying c to the 4th row of T24 .

In general, we get Tij Di (c) = Dj (c)Tij . We also have Tij Dj (c) = Di (c)Tij . Moreover,
we have Tij Dk (c) = Dk (c)Tij for distinct i, j, k.

Example 2.1.22. The row operations in Example 1.2.1 can be interpreted as


     
1 0 0 1 0 0 1 0 0 1 0 0 1 4 7 10
0 − 1 0 0 1 0  0 1 0 −2 1 0 2 5 8 11
3
0 0 1 0 −2 1 −3 0 1 0 0 1 3 6 9 12
 
1 4 7 10
= 0 1 2 3  = U.

0 0 0 0

Here U is an upper triangle matrix. On the other hand, we used row operations cRi
and Ri + cRj with j < i. This means that all the elementary matrices are lower
triangular. The we move the elementary matrices to the right, and use Di (c)−1 =
Di (c−1 ) and Eij (c)−1 = Eij (−c) to get
 
1 4 7 10
2 5 8 11
3 6 9 12
     
1 0 0 1 0 0 1 0 0 1 0 0 1 4 7 10
= 2 1 0 0 1 0 0 1 0 0 −3 0 0 1 2 3 
0 0 1 3 0 1 0 2 1 0 0 1 0 0 0 0
  
1 0 0 1 4 7 10
= 2 −3 0
   0 1 2 3  = LU.
3 −6 1 0 0 0 0

We may also move −3 to U (i.e., remove D2 (− 31 )) and get


    
1 4 7 10 1 0 0 1 4 7 10
2 5 8 11 = 2 1
  0  0 −3 −6 −9 .
3 6 9 12 3 2 1 0 0 0 0

The example shows that, if we can use only Ri + cRj with j < i to get a row
echelon form U of A, then we can write A = LU , where L is the combination of
58 CHAPTER 2. LINEAR TRANSFORMATION

the inverse of these Ri + cRj , and is a lower triangular square matrix with nonzero
diagonals. This is the LU -decomposition of the matrix A.
Not every matrix has LU -decomposition. For example, if a11 = 0 in A, then we
need to first exchange rows to make the term nonzero
     
0 1 2 3 1 2 3 4 R3 −2R1 1 2 3 4
R ↔R2
A = 1 2 3 4 −−1−−→ 0 1 2 3 −R−3− +R2
−→ 0 1 2 3 = U.
2 3 4 5 2 3 4 5 0 0 0 0
This gives
     
0 1 0 0 1 2 3 1 0 0 1 2 3 4
P A = 1 0 0 1 2 3 4 = 0 1 0 0 1 2 3 = LU.
0 0 1 2 3 4 5 2 −1 1 0 0 0 0
Here the left multiplication by P permutes rows. In general, every matrix has LU -
decomposition after suitable permutation of rows.
The LU -decomposition is useful for solving A~x = ~b. We may first solve L~y = ~b
to get ~y and then solve U~x = ~y . Since L is lower triangular, it is easy to get the
unique solution ~y by forward substitution. Since U is upper triangular, we can use
backward substituting to solve U~x = ~y .

Exercise 2.29. Write down the 5 × 5 matrices: T24 , T42 , D4 (c), E35 (c), E53 (c).

Exercise 2.30. What do you get by multiplying T13 E13 (−2)D2 (3) to the left of
 
1 4 7
2 5 8 .
3 6 9
What about multiplying on the right?

Exercise 2.31. Explain the following equalities


Tij2 = I,
Di (a)Di (b) = Di (ab),
Eij (a)Eij (b) = Eij (a + b),
Eij (a) = Eik (a)Ekj (1)Eik (a)−1 Ekj (1)−1 .
Can you come up with some other equalities?

Exercise 2.32. Find the LU -decompositions of the matrices in Exercise 1.28 that do not
involve parameters.

Exercise 2.33. The LU -decompositions is derived from using row operations of third type
(and maybe also second type) to get a row echelon form. What do you get by using the
similar column operations.
2.2. ONTO, ONE-TO-ONE, AND INVERSE 59

2.2 Onto, One-to-one, and Inverse


Definition 2.2.1. Let f : X → Y be a map.

1. f is onto (or surjective) if for any y ∈ Y , there is x ∈ X, such that f (x) = y.

2. f is one-to-one (or injective) if f (x) = f (x0 ) implies x = x0 .

3. f is a one-to-one correspondence (or bijective) if it is onto and one-to-one.

The onto property can be regarded as that the equation f (x) = y has solution
for all the right side y.
The one-to-one property also means that x 6= x0 implies f (x) 6= f (x0 ). The
property can be regarded as that uniqueness of the solution of the equation f (x) = y.

Proposition 2.2.2. Let f : X → Y be a map.

1. f is onto if and only if f has right inverse: There is a map g : Y → X, such


that f ◦ g = idY .

2. f is one-to-one if and only if f has left inverse: There is a map g : Y → X,


such that g ◦ f = idX .

3. f is a one-to-one correspondence if and only if it has inverse: There is a map


g : Y → X, such that f ◦ g = idY and g ◦ f = idX .

In the third statement, the map f is invertible, with inverse map g denoted
g = f −1 .
Proof. If f ◦ g = idY , then for any y ∈ Y , we have f (g(y)) = y. In other words,
x = g(y) satisfies f (x) = y. Conversely, suppose f : X → Y is onto. We construct
a map g : Y → X as follows. For any y ∈ Y , by f onto, we can find some x ∈ X
satisfying f (x) = y. We choose one such x (strictly speaking, this uses the Axiom
of Choice) and define g(y) = x. Then the map g satisfies (f ◦ g)(y) = f (x) = y.
If g ◦ f = idX , then we have g(f (x)) = x. Therefore

f (x) = f (x0 ) =⇒ g(f (x)) = g(f (x0 )) =⇒ x = x0 .

Conversely, suppose f : X → Y is one-to-one. We fix an element x0 ∈ X and


construct a map g : Y → X as follows. For any y ∈ Y , if y = f (x) for some x ∈ X
(i.e., y lies in the image of f ), then we define g(y) = x. If we cannot find such x,
then we define g(y) = x0 . Note that in the first case, if y = f (x0 ) for another x0 ∈ X,
then by f one-to-one, we have x0 = x. This shows that g is well defined. For the
case y = f (x), our construction of g implies (g ◦ f )(x) = g(f (x)) = x.
From the first and second parts, we know a map f is onto and one-to-one if and
only if there are maps g and h, such that f ◦ g = id and h ◦ f = id. Compared with
60 CHAPTER 2. LINEAR TRANSFORMATION

the definition of invertibility, for the third statement, we only need to show g = h.
This follows from g = id ◦ g = h ◦ f ◦ g = h ◦ id = h.

Example 2.2.1. Consider the map (f =) Instructor: Courses → Professors.


The map is onto means every professor teaches some course. The map g in
Proposition 2.2.2 can take a professor (say me) to any one course (say linear algebra)
the professor teaches.
The map is one-to-one means any professor either teaches one course, or does not
teach any course. This also means that no professor teaches two or more courses.
If a professor (say me) teaches one course, then the map g in Proposition 2.2.2
takes the professor to the unique course (say linear algebra) the professor teaches.
If a professor does not teach any course, then g may take the professor to any one
existing course.

Example 2.2.2. The identity map I(x) = x : X → X is always onto and one-to-one,
with I −1 = I.
The zero map O(~v ) = ~0 : V → W in Example 2.1.1 is onto if and only if W is
the zero vector space in Example 1.1.1. The zero map is one-to-one if and only if V
is the zero vector space.
The coordinate map in Section 1.3.7 is onto and one-to-one, with the linear
combination map as the inverse.
The rotation and flipping in Example 2.1.3 are invertible, with Rθ−1 = R−θ and
Fθ−1 = Fθ .

Exercise 2.34. Prove that the composition of onto maps is onto.

Exercise 2.35. Prove that the composition of one-to-one maps is one-to-one.

Exercise 2.36. Prove that the composition of invertible maps is invertible. Moreover, we
have (f ◦ g)−1 = g −1 ◦ f −1 .

Exercise 2.37. Prove that if f ◦ g is onto, then f is onto. Prove that if g ◦ f is one-to-one,
then f is one-to-one.

2.2.1 Onto and One-to-one for Linear Transformation


For a linear transformation L : V → W , we may regard the onto and one-to-one
properties as the existence and uniqueness of solutions of the equation L(~x) = ~b. For
the case V and W are Euclidean spaces, this becomes the existence and uniqueness
of solutions of the system of linear equations A~x = ~b. This can be determined by
the row echelon form of the matrix A.
2.2. ONTO, ONE-TO-ONE, AND INVERSE 61

Example 2.2.3. For the linear transformation in Example 2.1.12, we have the row
echelon form (1.2.3). Since there are non-pivot rows and non-pivot columns, the
linear transformation is not onto and not one-to-one.
More generally, by the row echelon form in Example 1.3.6, the linear transfor-
mation  
x1  
 x2  x1 + 4x2 + 7x3 + 10x4
4 3
Lx3  = 2x1 + 5x2 + 8x3 + 11x4 : R → R
  
3x1 + 6x2 + 9x3 + 12x4
x4
is onto if and only if a 6= 9, or a = 0 and b 6= 12. Moreover, the linear transformation
is never one-to-one.

Example 2.2.4. Consider the linear transformation


   
x1 x1 + 4x2 + 7x3
L x2  = 2x1 + 5x2 + 8x3  : R3 → R3
x3 3x1 + 6x2 + ax3
By the row echelon form in Example 1.3.6, the linear transformation is onto if and
only if a 6= 9, and it is one-to-one if and only if a 6= 9. Therefore it is invertible if
and only if a 6= 9.

Exercise 2.38. Determine whether the linear transformation is onto or one-to-one.


1. L(x1 , x2 , x3 , x4 ) = (x1 +2x2 +3x3 +4x4 , 5x1 +6x2 +7x3 +8x4 , 9x1 +10x2 +11x3 +12x4 ).
2. L(x1 , x2 , x3 , x4 ) = (x1 +2x2 +3x3 +4x4 , 5x1 +6x2 +7x3 +8x4 , 9x1 +10x2 +ax3 +bx4 ).
3. L(x1 , x2 , x3 , x4 ) = (x1 + 2x2 + 3x3 + 4x4 , 3x1 + 4x2 + 5x3 + 6x4 , 5x1 + 6x2 + 7x3 +
8x4 , 7x1 + 8x2 + 9x3 + 10x4 )
4. L(x1 , x2 , x3 , x4 ) = (x1 + 2x2 + 3x3 + 4x4 , 3x1 + 4x2 + 5x3 + 6x4 , 5x1 + 6x2 + ax3 +
8x4 , 7x1 + 8x2 + 9x3 + bx4 ).
5. L(x1 , x2 , x3 ) = (x1 + 5x2 + 9x3 , 2x1 + 6x2 + 10x3 , 3x1 + 7x2 + 11x3 , 4x1 + 8x2 + 12x3 ).
6. L(x1 , x2 , x3 ) = (x1 + 5x2 + 9x3 , 2x1 + 6x2 + 10x3 , 3x1 + 7x2 + 11x3 , 4x1 + 8x2 + ax3 ).

Example 2.2.5. We claim that the evaluation L(f (t)) = (f (0), f (1), f (2)) : C ∞ →
R3 in Example 2.1.5 is onto. The idea is to find functions f1 (t), f2 (t), f3 (t), such
that L(f1 (t)) = ~e1 , L(f2 (t)) = ~e2 , L(f3 (t)) = ~e3 . Then any vector in R3 is
(x1 , x2 , x3 ) = L(x1 f1 (t) + x3 f2 (t) + x3 f3 (t)).
It is not difficult to find a smooth function f (t) satisfying f (0) = 1 and f (t) = 0 for
|t| ≥ 1. Then we may take f1 (t) = f (t), f2 (t) = f (t − 1), f3 (t) = f (t − 2).
The evaluation is not one-to-one because L(f (t − 3)) = (0, 0, 0) = L(0), and
f (t − 3) is not a zero function.
62 CHAPTER 2. LINEAR TRANSFORMATION

Example 2.2.6. The derivation operator in Example 2.1.6 is onto R t due to the Newton-
Leibniz formula. For any g ∈ C , we have g = f for f (t) = 0 g(τ )dτ ∈ C ∞ . It is
∞ 0

not one-to-one because all the constant functions are mapped to the zero function.
TheR integration operator in Example 2.1.6 is not onto because any function
t
g(t) = 0 f (τ )dτ must satisfy g(0) = 0. The operator is one-to-one because taking
Rt Rt
derivative of 0 f1 (τ )dτ = 0 f2 (τ )dτ implies f1 (t) = f2 (t).
In fact, the Newton-Leibniz formula says derivation ◦ integration is the identity
map. Then by Proposition 2.2.2, the derivation is onto and the integration is one-
to-one.

need exercises

Proposition 2.2.3. Suppose L : V → W is a linear transformation, and V, W are


finite dimensional. The following are equivalent.

1. L is onto.

2. L takes a spanning set of V to a spanning set of W .

3. There is a linear transformation K : W → V satisfying L ◦ K = IW .

From the proof below, we note that the equivalence of the first two statements
only needs V to be finite dimensional, and the equivalence of the first and third
statements only needs W to be finite dimensional.
Proof. Suppose ~v1 , ~v2 , . . . , ~vn span V . Then

~x ∈ V =⇒ ~x = x1~v1 + x2~v2 + · · · + xn~vn


=⇒ L(~x) = x1 L(~v1 ) + x2 L(~v2 ) + · · · + xn L(~vn ).

The equality shows the first two statements are equivalent

L is onto ⇐⇒ L(~x) can be all the vectors in W


⇐⇒ L(~v1 ), L(~v2 ), . . . , L(~vn ) span W.

By Proposition 2.2.2, we know the third statement implies the first. Conversely,
assume L is onto. For a basis w ~ 1, w ~ m of W , we can find ~vi ∈ V satisfying
~ 2, . . . , w
~ i . By Proposition 2.1.4, there is a linear transformation K : W → V
L(~vi ) = w
satisfying K(w ~ i ) = ~vi . By (L ◦ K)(w ~ i ) = L(K(w ~ i )) = L(~vi ) = w
~ i and Proposition
2.1.2, we get L ◦ K = IW .

Proposition 2.2.4. Suppose L : V → W is a linear transformation, and W is finite


dimensional. The following are equivalent.

1. L is one-to-one.
2.2. ONTO, ONE-TO-ONE, AND INVERSE 63

2. L takes a linearly independent set in V to a linearly independent set in W .

3. L(~v ) = ~0 implies ~v = ~0.

4. There is a linear transformation K : W → V satisfying K ◦ L = IV .

From the proof below, we note that the equivalence of the first three statements
does not need W to be finite dimensional.

Proof. Suppose L is one-to-one and vectors ~v1 , ~v2 , . . . , ~vn in V are linearly indepen-
dent. The following shows that L(~v1 ), L(~v2 ), . . . , L(~vn ) are linearly independent

x1 L(~v1 ) + x2 L(~v2 ) + · · · + xn L(~vn ) = y1 L(~v1 ) + y2 L(~v2 ) + · · · + yn L(~vn )


=⇒ L(x1~v1 + x2~v2 + · · · + xn~vn ) = L(y1~v1 + y2~v2 + · · · + yn~vn ) (L is linear)
=⇒ x1~v1 + x2~v2 + · · · + xn~vn = y1~v1 + y2~v2 + · · · + yn~vn (L is one-to-one)
=⇒ x1 = y1 , x2 = y2 , . . . , xn = yn . (~v1 , ~v2 , . . . , ~vn are linearly independent)

Next we assume the second statement. If ~v 6= ~0, then by Example 1.3.10, the
single vector ~v is linearly independent. By the assumption, the single vector L(~v )
is also linearly independent. Again by Example 1.3.10, this means L(~v ) 6= ~0. This
proves ~v 6= ~0 =⇒ L(~v ) 6= ~0, which is the same as L(~v ) = ~0 =⇒ ~v = ~0.
The following proves that the third statement implies the first

L(~x) = L(~y ) =⇒ L(~x − ~y ) = L(~x) − L(~y ) = ~0 =⇒ ~x − ~y = ~0 =⇒ ~x = ~y .

This completes the proof that the first three statements are equivalent.
By Proposition 2.2.2, we know the fourth statement implies the first. It remains
to prove that the first three statements imply the fourth. This makes use of the
assumption that W is finite dimensional.
Suppose ~v1 , ~v2 , . . . , ~vn in V are linearly independent. By the second statement,
the vectors w ~ 1 = L(~v1 ), w ~ 2 = L(~v2 ), . . . , w
~ n = L(~vn ) in W are also linearly indepen-
dent. By Proposition 1.3.13, we get n ≤ dim W . Since dim W is finite, this implies
that V is also finite dimensional. Therefore V has a basis, which we still denote
by {~v1 , ~v2 , . . . , ~vn }. By Theorem 1.3.11, the corresponding linearly independent set
{w~ 1, w ~ n } can be extended to a basis {w
~ 2, . . . , w ~ 1, w
~ 2, . . . , w
~ n, w ~ m } of W . By
~ n+1 , . . . , w
Proposition 2.1.4, there is a linear transformation K : W → V satisfying K(w ~ i ) = ~vi
for i ≤ n and K(w ~
~ i ) = 0 for n < i ≤ m. By (K ◦ L)(~vi ) = K(L(~vi )) = K(w ~ i ) = ~vi
for i ≤ n and Proposition 2.1.2, we get K ◦ L = IV .

The following is comparable to Proposition 1.3.13. It reflects the intuition that, if


every professor teaches some course (see Example 2.2.1), then the number of courses
is more than the number of professors. On the other hand, if each professor teaches
at most one course, then the number of courses is less than the number of professors.

Proposition 2.2.5. Suppose L : V → W is a linear transformation between finite


dimensional vector spaces.

1. If L is onto, then dim V ≥ dim W .

2. If L is one-to-one, then dim V ≤ dim W .

Proof. Suppose α = {~v1 , ~v2 , . . . , ~vn } is a basis of V . We denote the image of the
basis by L(α) = {L(~v1 ), L(~v2 ), . . . , L(~vn )}.
If L is onto, then by Proposition 2.2.3, L(α) spans W . By the first part of Propo-
sition 1.3.13, we get dim V = n ≥ dim W . If L is one-to-one, then by Proposition
2.2.4, L(α) is linearly independent. By the second part of Proposition 1.3.13, we get
dim V = n ≤ dim W .

The following is the linear transformation version of Theorem 1.3.14.

Theorem 2.2.6. Suppose L : V → W is a linear transformation between finite di-


mensional vector spaces. Then any two of the following imply the third.

• L is onto.

• L is one-to-one.

• dim V = dim W .

Let ~v1, ~v2, . . . , ~vn be a basis of V. The equivalence follows from Theorem 1.3.14,
together with the equivalence of the first two statements in Propositions 2.2.3 and
2.2.4.

Example 2.2.7. For the evaluation L(f (t)) = (f (0), f (1), f (2)) : C ∞ → R3 in
Example 2.1.5, we find the functions f1 (t), f2 (t), f3 (t) in Example 2.2.5 satisfy-
ing L(f1 (t)) = ~e1 , L(f2 (t)) = ~e2 , L(f3 (t)) = ~e3 . This means that K(x1 , x2 , x3 ) =
x1 f1 (t) + x2 f2 (t) + x3 f3 (t) satisfies L ◦ K = I. This is actually the reason for L to
be onto in Example 2.2.5.

Example 2.2.8. The differential equation f 00 +(1+t2 )f 0 +tf = b(t) in Example 2.1.16
can be interpreted as L(f (t)) = b(t) for a linear transformation L : C ∞ → C ∞ . If we
regard L as a linear transformation L : Pn → Pn+1 (restricting L to polynomials),
then by Proposition 2.2.5, the restriction linear transformation is not onto. For
example, we can find a polynomial b(t) of degree 5, such that f 00 +(1+t2 )f 0 +tf = b(t)
cannot be solved for a polynomial f (t) of degree 4.

Exercise 2.39. Show that the linear combination map L(x1 , x2 , x3 ) = x1 cos t + x2 sin t +
x3 et : R3 → C ∞ in Example 2.1.5 is not onto and is one-to-one.

Exercise 2.40. Show that the multiplication map f (t) 7→ a(t)f (t) : C ∞ → C ∞ in Example
2.1.6 is onto if and only if a(t) 6= 0 everywhere. Show that the map is one-to-one if a(t) = 0
at only finitely many places.

Exercise 2.41. Strictly speaking, the second statement of Proposition 2.2.3 can be about
one spanning set or all spanning sets of V . Show that the two versions are equivalent.
What about the second statement of Proposition 2.2.4?

Exercise 2.42. Suppose α is a basis of V . Prove that a linear transformation L : V → W


is onto if and only if L(α) spans W , and L is one-to-one if and only if L(α) is linearly
independent.

Exercise 2.43. Prove that a linear transformation is onto if it takes a (not necessarily
spanning) set to a spanning set.

Exercise 2.44. Suppose L : V → W is an onto linear transformation. If V is finite dimen-


sional, prove that W is finite dimensional.

Exercise 2.45. Let A be an m×n matrix. Explain that a system of linear equations A~x = ~b
has solution for all ~b ∈ Rm if and only if there is an n × m matrix B, such that AB = Im .
Moreover, the solution is unique if and only if there is B, such that BA = In .

Exercise 2.46. Suppose L ◦ K and K are linear transformations. Prove that if K is onto,
then L is also a linear transformation.

Exercise 2.47. Suppose L ◦ K and L are linear transformations. Prove that if L is one-to-
one, then K is also a linear transformation.

Exercise 2.48. Recall the induced maps f∗ and f ∗ in Exercise 2.14. Prove that if f is onto,
then f∗ is onto and f ∗ is one-to-one. Prove that if f is one-to-one, then f∗ is one-to-one
and f ∗ is onto.

Exercise 2.49. Suppose L is an onto linear transformation. Prove that two linear transfor-
mations K and K 0 are equal if and only if K ◦ L = K 0 ◦ L. What does this tell you about
the linear transformation L∗ : Hom(V, W ) → Hom(U, W ) in Exercise 2.13?

Exercise 2.50. Suppose L is a one-to-one linear transformation. Prove that two linear
transformations K and K 0 are equal if and only if L ◦ K = L ◦ K 0 . What does this tell
you about the linear transformation L∗ : Hom(U, V ) → Hom(U, W ) in Exercise 2.12?

2.2.2 Isomorphism
Definition 2.2.7. An invertible linear transformation is an isomorphism. If there
is an isomorphism between two vector spaces V and W , then we say V and W are
isomorphic, and denote V ≅ W.

The isomorphism can be used to translate the linear algebra in one vector space
to the linear algebra in another vector space.

Theorem 2.2.8. If a linear transformation L : V → W is an isomorphism, then


the inverse map L−1 : W → V is also a linear transformation. Moreover, suppose
~v1 , ~v2 , . . . , ~vn are vectors in V .

1. ~v1 , ~v2 , . . . , ~vn span V if and only if L(~v1 ), L(~v2 ), . . . , L(~vn ) span W .

2. ~v1 , ~v2 , . . . , ~vn are linearly independent if and only if L(~v1 ), L(~v2 ), . . . , L(~vn ) are
linearly independent.

3. ~v1 , ~v2 , . . . , ~vn form a basis of V if and only if L(~v1 ), L(~v2 ), . . . , L(~vn ) form a
basis of W .

The linearity of L−1 follows from Exercise 2.46 or 2.47. The rest of the proposi-
tion follows from the second statements in Propositions 2.2.3 and 2.2.4.

Example 2.2.9. Given a basis α of V , we explained in Section 1.3.7 that the α-


coordinate map [·]α : V → Rn has an inverse. Therefore the α-coordinate map is an
isomorphism.

Example 2.2.10. A linear transformation L : R → V gives a vector L(1) ∈ V . This


is a linear map (see Exercise 2.11)

L ∈ Hom(R, V ) 7→ L(1) ∈ V.

Conversely, for any ~v ∈ V , we may construct a linear transformation L(x) =


x~v : R → V . The construction gives a map

~v ∈ V 7→ (L(x) = x~v ) ∈ Hom(R, V ).

We can verify that the two maps are inverse to each other. Therefore we get an
isomorphism Hom(R, V ) ∼= V.

Example 2.2.11. The matrix of linear transformation between Euclidean spaces


gives an invertible map

L ∈ Hom(Rn , Rm ) ←→ A ∈ Mm×n , A = (L(~e1 ) L(~e2 ) · · · L(~en )), L(~x) = A~x.

The vector space structure on Hom(Rn , Rm ) is given by Proposition 2.1.5. Then the
addition and scalar multiplication in Mm×n are defined for the purpose of making
the map into an isomorphism.

Example 2.2.12. The transpose of matrices is an isomorphism


   
A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} \in M_{m\times n}
\;\mapsto\;
A^T = \begin{pmatrix} a_{11} & a_{21} & \dots & a_{m1} \\ a_{12} & a_{22} & \dots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \dots & a_{mn} \end{pmatrix} \in M_{n\times m}.

In fact, we have (AT )T = A, which means that the inverse of the transpose map is
the transpose map.

Example 2.2.13 (Lagrange interpolation). Let t0 , t1 , . . . , tn be n+1 distinct numbers.


Consider the general evaluation linear transformation (see Example 2.1.5)
L(f (t)) = (f (t0 ), f (t1 ), . . . , f (tn )) : Pn → Rn+1 .
We claim that L is onto by finding polynomials pi (t) satisfying L(pi (t)) = ~ei (see
Example 2.2.5 and Exercise 2.43). Here we denote vectors in Rn+1 by (x0 , x1 , . . . , xn ),
and denote the standard basis vectors by ~e0 , ~e1 , . . . , ~en .
Let
g(t) = (t − t1 )(t − t2 ) · · · (t − tn ).
Since t0 , t1 , . . . , tn are distinct, we have
g(t0 ) = (t0 − t1 )(t0 − t2 ) · · · (t0 − tn ) 6= 0, g(t1 ) = g(t2 ) = · · · = g(tn ) = 0.
This implies that, if we take

p_0(t) = \frac{g(t)}{g(t_0)} = \frac{(t - t_1)(t - t_2)\cdots(t - t_n)}{(t_0 - t_1)(t_0 - t_2)\cdots(t_0 - t_n)},

then we have L(p_0(t)) = ~e_0. Similarly, we have Lagrange polynomials

p_i(t) = \frac{(t - t_0)\cdots(t - t_{i-1})(t - t_{i+1})\cdots(t - t_n)}{(t_i - t_0)\cdots(t_i - t_{i-1})(t_i - t_{i+1})\cdots(t_i - t_n)} = \prod_{0\le j\le n,\, j\ne i} \frac{t - t_j}{t_i - t_j},

satisfying L(pi (t)) = ~ei .


Since L is onto and dim Pn = dim Rn+1 , by Theorem 2.2.6, we conclude that L
is an isomorphism. The one-to-one property of L means that a polynomial f (t) of
degree n is determined by its values x0 = f (t0 ), x1 = f (t1 ), . . . , xn = f (tn ) at n + 1
distinct places. The formula for f (t) in terms of these values is given by the inverse
L^{-1} : R^{n+1} → P_n, called the Lagrange interpolation

f(t) = L^{-1}(x_0, x_1, \dots, x_n) = \sum_{i=0}^{n} x_i L^{-1}(\vec e_i) = \sum_{i=0}^{n} x_i p_i(t)
     = \sum_{i=0}^{n} x_i \prod_{0\le j\le n,\, j\ne i} \frac{t - t_j}{t_i - t_j}
     = \sum_{i=0}^{n} f(t_i) \prod_{0\le j\le n,\, j\ne i} \frac{t - t_j}{t_i - t_j}.

For example, a quadratic polynomial f(t) satisfying f(−1) = 1, f(0) = 2, f(1) = 3
is uniquely given by

f(t) = 1\,\frac{t(t-1)}{(-1)\cdot(-2)} + 2\,\frac{(t+1)(t-1)}{1\cdot(-1)} + 3\,\frac{(t+1)t}{2\cdot 1} = 2 + t.
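As a quick numerical sanity check, the following is a minimal Python sketch (not part of the text) that evaluates the Lagrange interpolation formula for the data f(−1) = 1, f(0) = 2, f(1) = 3 above; the helper name lagrange_eval is purely illustrative.

    import numpy as np

    nodes = np.array([-1.0, 0.0, 1.0])
    values = np.array([1.0, 2.0, 3.0])

    def lagrange_eval(t, nodes, values):
        # f(t) = sum_i f(t_i) * prod_{j != i} (t - t_j) / (t_i - t_j)
        total = 0.0
        for i in range(len(nodes)):
            others = np.delete(nodes, i)
            total += values[i] * np.prod((t - others) / (nodes[i] - others))
        return total

    for t in [-1.0, 0.0, 1.0, 2.0]:
        print(t, lagrange_eval(t, nodes, values))   # agrees with 2 + t at every t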

Exercise 2.51. Suppose dim V = dim W and L : V → W is a linear transformation. Prove


that the following are equivalent.

1. L is invertible.

2. L has left inverse: There is K : W → V , such that K ◦ L = I.

3. L has right inverse: There is K : W → V , such that L ◦ K = I.

Moreover, show that the two K in the second and third parts must be the same.

Exercise 2.52. Explain dim Hom(V, W ) = dim V dim W .

Exercise 2.53. Explain that the linear transformation

f ∈ Pn 7→ (f (t0 ), f 0 (t0 ), . . . , f (n) (t0 )) ∈ Rn+1

is an isomorphism. What is the inverse isomorphism?

Exercise 2.54. Explain that the linear transformation (the right side has obvious vector
space structure)
f ∈ C ∞ 7→ (f 0 , f (t0 )) ∈ C ∞ × R
is an isomorphism. What is the inverse isomorphism?

2.2.3 Invertible Matrix


A matrix A is invertible if the corresponding linear transformation L(~x) = A~x is
invertible (i.e., an isomorphism), and the inverse matrix A−1 is the matrix of the
inverse linear transformation L−1 . By Theorem 2.2.6 (and Theorems 1.3.14 and
1.3.15), an invertible matrix must be a square matrix.
A linear transformation K is the inverse of L if L ◦ K = I and K ◦ L = I.
Correspondingly, a matrix B is the inverse of A if AB = BA = I. Exercise 2.51
shows that, in case of equal dimension, L ◦ K = I is equivalent to K ◦ L = I.
Correspondingly, for square matrices, AB = I is equivalent to BA = I.

Example 2.2.14. Since the inverse of the identity linear transformation is the iden-
tity, the inverse of the identity matrix is the identity matrix: In−1 = In .

Example 2.2.15. The rotation Rθ of the plane by angle θ in Example 2.1.3 is in-
vertible, with the inverse Rθ−1 = R−θ being the rotation by angle −θ. Therefore the
matrix of R−θ is the inverse of the matrix of Rθ

\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.

One can directly verify that the multiplication of the two matrices is the identity.
The flipping Fρ in Example 2.1.3 is also invertible, with the inverse Fρ−1 = Fρ
being the flipping itself. Therefore the matrix of Fρ is the inverse of itself (θ = 2ρ)

\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.

Exercise 2.55. Construct a 2 × 3 matrix A and a 3 × 2 matrix B satisfying AB = I2 .


Explain that BA can never be I3 .

Exercise 2.56. What are the inverses of elementary matrices?

Exercise 2.57. Suppose A and B are invertible matrices. Prove that (AB)−1 = B −1 A−1 .

Exercise 2.58. Prove that the trace defined in Example 2.10 satisfies trAXA−1 = trX.

The following summarises many equivalent criteria for invertible matrix (and
there will be more).

Proposition 2.2.9. The following are equivalent for an n × n matrix A.


1. A is invertible.

2. A~x = ~b has solution for all ~b ∈ Rn .

3. The solution of A~x = ~b is unique.

4. The homogeneous system A~x = ~0 has only trivial solution ~x = ~0.

5. A~x = ~b has unique solution for all ~b ∈ Rn .

6. The columns of A span Rn .

7. The columns of A are linearly independent.

8. The columns of A form a basis of Rn .

9. All rows of A are pivot.

10. All columns of A are pivot.



11. The reduced row echelon form of A is the identity matrix I.

12. A is the product of elementary matrices.

Next we try to calculate the inverse matrix.


Let A be the matrix of an invertible linear transformation L : Rn → Rn. Let
A^{-1} = (~w1 ~w2 · · · ~wn) be the matrix of the inverse linear transformation L^{-1}. Then
~wi = L^{-1}(~ei) = A^{-1}~ei. This implies A~wi = L(~wi) = ~ei. Therefore the i-th column
of A^{-1} is the solution of the system of linear equations A~x = ~ei. The solution can be
calculated by the reduced row echelon form of the augmented matrix

(A ~ei) → (I ~wi).

Here the row operations can reduce A to I by Proposition 2.2.9. Then the solution
of A~x = ~ei is exactly the last column of the reduced row echelon form (I ~wi).
Since the systems of linear equations A~x = ~e1, A~x = ~e2, . . . , A~x = ~en have
the same coefficient matrix A, we may solve these equations simultaneously by
combining the row operations

(A I) = (A ~e1 ~e2 · · · ~en) → (I ~w1 ~w2 · · · ~wn) = (I A^{-1}).

We have already used the idea in Example 1.3.17.
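Solving A~x = ~e1, . . . , A~x = ~en simultaneously is exactly the matrix equation AX = I. The following NumPy sketch (an illustration, not part of the text) carries this out for the matrix of Example 2.2.17 below.

    import numpy as np

    A = np.array([[1.0, 4.0, 7.0],
                  [2.0, 5.0, 8.0],
                  [3.0, 6.0, 10.0]])

    # Solve A x = e_1, e_2, e_3 simultaneously, i.e. A X = I; the solution X is the inverse of A.
    X = np.linalg.solve(A, np.eye(3))
    print(np.allclose(A @ X, np.eye(3)))   # True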

Example 2.2.16. The row operations in Example 2.1.20 give

\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}^{-1} = \begin{pmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{pmatrix}.

In general, we can directly verify that

\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \quad ad \ne bc.

Example 2.2.17. The basis in Example 1.3.14 shows that the matrix

 
1 4 7
A = 2 5 8 
3 6 10

is invertible. Then we carry out the row operations

(A\ I) = \begin{pmatrix} 1 & 4 & 7 & 1 & 0 & 0 \\ 2 & 5 & 8 & 0 & 1 & 0 \\ 3 & 6 & 10 & 0 & 0 & 1 \end{pmatrix}
\to \begin{pmatrix} 1 & 4 & 7 & 1 & 0 & 0 \\ 0 & -3 & -6 & -2 & 1 & 0 \\ 0 & -6 & -11 & -3 & 0 & 1 \end{pmatrix}
\to \begin{pmatrix} 1 & 4 & 7 & 1 & 0 & 0 \\ 0 & 1 & 2 & \frac{2}{3} & -\frac{1}{3} & 0 \\ 0 & 0 & 1 & 1 & -2 & 1 \end{pmatrix}
\to \begin{pmatrix} 1 & 0 & -1 & -\frac{5}{3} & \frac{4}{3} & 0 \\ 0 & 1 & 2 & \frac{2}{3} & -\frac{1}{3} & 0 \\ 0 & 0 & 1 & 1 & -2 & 1 \end{pmatrix}
\to \begin{pmatrix} 1 & 0 & 0 & -\frac{2}{3} & -\frac{2}{3} & 1 \\ 0 & 1 & 0 & -\frac{4}{3} & \frac{11}{3} & -2 \\ 0 & 0 & 1 & 1 & -2 & 1 \end{pmatrix}.

Therefore

\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{pmatrix}^{-1} = \begin{pmatrix} -\frac{2}{3} & -\frac{2}{3} & 1 \\ -\frac{4}{3} & \frac{11}{3} & -2 \\ 1 & -2 & 1 \end{pmatrix}.

Example 2.2.18. The row operation in Example 1.3.17 shows that

\begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}^{-1} = \frac{1}{3} \begin{pmatrix} 1 & -2 & 1 \\ 1 & 1 & -2 \\ 1 & 1 & 1 \end{pmatrix}.

In terms of linear transformation, the result means that the linear transformation

L\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 + x_3 \\ -x_1 + x_3 \\ -x_2 + x_3 \end{pmatrix} : R^3 \to R^3

is invertible, and the inverse is

L^{-1}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} x_1 - 2x_2 + x_3 \\ x_1 + x_2 - 2x_3 \\ x_1 + x_2 + x_3 \end{pmatrix} : R^3 \to R^3.

Exercise 2.59. What is the inverse of 1 × 1 matrix (a)?

Exercise 2.60. Verify the formula for the inverse of 2 × 2 matrix in Example 2.2.16 by
multiplying the two matrices together. Moreover, show that the 2 × 2 matrix is not
invertible when ad = bc.

Exercise 2.61. Find inverse matrix.


     
0 1 0 1 1 1 0 b c 1
1. 0 0 1. 1 1 0 1 7. 1 0 0.
4.  .
1 0 0 1 0 1 1 a 1 0
0 1 1 1  

0 0 1 0

  c a b 1
1 1 0 0 b 1 a 0
0 0 0 8.  .
2. 
0
. 5. a
 1 0. a 0 1 0
0 0 1
b c 1 1 0 0 0
0 1 0 0
 
  1 a b c
1 1 0 0 1 a b
6.  .
3. 1 0 1. 0 0 1 a
0 1 1 0 0 0 1

Exercise 2.62. Find the inverse matrix.


   
1. \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.

2. \begin{pmatrix} 1 & a_1 & a_2 & \cdots & a_{n-1} \\ 0 & 1 & a_1 & \cdots & a_{n-2} \\ 0 & 0 & 1 & \cdots & a_{n-3} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.

3. \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & a_1 \\ 0 & 0 & \cdots & a_1 & a_2 \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & a_1 & \cdots & a_{n-2} & a_{n-1} \end{pmatrix}.

2.3 Matrix of General Linear Transformation


By using bases to identify general vector spaces with Euclidean spaces, we may
introduce the matrix of a general linear transformation with respect to bases.

2.3.1 Matrix with Respect to Bases


Let α = {~v1, ~v2, . . . , ~vn} and β = {~w1, ~w2, . . . , ~wm} be (ordered) bases of (finite
dimensional) vector spaces V and W . Then a linear transformation L : V → W can
be translated into a linear transformation Lβα between Euclidean spaces.
               L
       V ----------> W
       |             |
  [·]_α| ≅         ≅ |[·]_β          L_βα([~v]_α) = [L(~v)]_β  for ~v ∈ V.
       v             v
      R^n ---------> R^m
              L_βα
Please pay attention to the notation, that Lβα is the corresponding linear transfor-
mation from α to β.

The matrix [L]βα of L with respect to bases α and β is the matrix of the linear
transformation Lβα , introduced in Section 2.1.2. To calculate this matrix, we apply
the translation above to a vector ~vi ∈ α and use [~v ]α = ~ei

                L
      ~v_i ----------> L(~v_i)
       |                 |
  [·]_α|                 |[·]_β       L_βα(~e_i) = L_βα([~v_i]_α) = [L(~v_i)]_β.
       v                 v
      ~e_i ----------> [L(~v_i)]_β
               L_βα

Denote L(α) = {L(~v1 ), L(~v2 ), . . . , L(~vn )}. Then we have

[L]βα = ([L(~v1 )]β [L(~v2 )]β · · · [L(~vn )]β ) = [L(α)]β .

Specifically, suppose α = {~v1, ~v2} is a basis of V, and β = {~w1, ~w2, ~w3} is a basis
of W. A linear transformation L : V → W is determined by

L(~v1) = a11 ~w1 + a21 ~w2 + a31 ~w3,
L(~v2) = a12 ~w1 + a22 ~w2 + a32 ~w3.

Then

[L(~v1)]_β = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix}, \quad [L(~v2)]_β = \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix}, \quad [L]_{βα} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}.

Note that the matrix [L]_βα is obtained by combining all the coefficients in L(~v1),
L(~v2) and then taking the transpose.

Example 2.3.1. The orthogonal projection P of R3 to the plane x + y + z = 0 in


Example 2.1.13 preserves vectors on the plane and maps vectors orthogonal to the
plane to ~0. Specifically, with respect to the basis

α = {~v1 , ~v2 , ~v3 }, ~v1 = (1, −1, 0), ~v2 = (1, 0, −1), ~v3 = (1, 1, 1),

we have
P (~v1 ) = ~v1 , P (~v2 ) = ~v2 , P (~v3 ) = ~0.
This means that  
1 0 0
[P ]αα = 0 1 0 .
0 0 0
This is much simpler than the matrix with respect to the standard basis ε that we
obtained in Example 2.1.13.

Example 2.3.2. With respect to the standard monomial bases α = {1, t, t2 , t3 }


and β = {1, t, t2 } of P3 and P2 , the matrix of the derivative linear transformation
D : P3 → P2 is
 
[D]_{βα} = [(1)', (t)', (t^2)', (t^3)']_{\{1,t,t^2\}} = [0, 1, 2t, 3t^2]_{\{1,t,t^2\}} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

For example, the derivative (1 + 2t + 3t^2 + 4t^3)' = 2 + 6t + 12t^2 fits into

\begin{pmatrix} 2 \\ 6 \\ 12 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}.

If we modify the basis β to γ = {1, t − 1, (t − 1)^2} in Example 1.3.3, then

[D]_{γα} = [0, 1, 2t, 3t^2]_{\{1,t-1,(t-1)^2\}}
         = [0, 1, 2(1 + (t − 1)), 3(1 + (t − 1))^2]_{\{1,t-1,(t-1)^2\}}
         = [0, 1, 2 + 2(t − 1), 3 + 6(t − 1) + 3(t − 1)^2]_{\{1,t-1,(t-1)^2\}}
         = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 0 & 2 & 6 \\ 0 & 0 & 0 & 3 \end{pmatrix}.
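The columns of [D]_βα are just the β-coordinates of D applied to the vectors of α. Here is a small Python sketch (illustration only; the coefficient-vector encoding of polynomials is an assumption of the sketch) that assembles the matrix for the monomial bases.

    import numpy as np

    # a0 + a1 t + a2 t^2 + a3 t^3 is stored as the coefficient vector (a0, a1, a2, a3)
    alpha = np.eye(4)        # coordinates of 1, t, t^2, t^3

    def derivative(coeffs):
        # the derivative of sum a_k t^k is sum k a_k t^(k-1)
        return np.array([k * coeffs[k] for k in range(1, len(coeffs))])

    D = np.column_stack([derivative(v) for v in alpha])
    print(D)                 # rows (0 1 0 0), (0 0 2 0), (0 0 0 3)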

Example 2.3.3. The linear transformation in Example 2.1.16

L(f ) = (1 + t2 )f 00 + (1 + t)f 0 − f : P3 → P3

satisfies

L(1) = −1, L(t) = 1, L(t2 ) = 2 + 2t + 3t2 , L(t3 ) = 6t + 3t2 + 8t3 .

Therefore

[L]_{\{1,t,t^2,t^3\}\{1,t,t^2,t^3\}} = \begin{pmatrix} -1 & 1 & 2 & 0 \\ 0 & 0 & 2 & 6 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 0 & 8 \end{pmatrix}.

To solve the equation L(f) = t + 2t^3 in Example 2.1.16, we have row operations

\begin{pmatrix} -1 & 1 & 2 & 0 & 0 \\ 0 & 0 & 2 & 6 & 1 \\ 0 & 0 & 3 & 3 & 0 \\ 0 & 0 & 0 & 8 & 2 \end{pmatrix} \to \begin{pmatrix} -1 & 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 2 & 6 & 1 \\ 0 & 0 & 0 & 8 & 2 \end{pmatrix} \to \begin{pmatrix} -1 & 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.

This shows that L is not one-to-one and not onto. Moreover, the solution of the
differential equation is given by

a_3 = \frac{1}{4}, \quad a_2 = -a_3 = -\frac{1}{4}, \quad a_0 = a_1 + 2a_2 = a_1 - \frac{1}{2}.

In other words, the solution is (c = a_1 is arbitrary)

f = -\frac{1}{2} + a_1 + a_1 t - \frac{1}{4}t^2 + \frac{1}{4}t^3 = c(1 + t) - \frac{1}{4}(2 + t^2 - t^3).

Example 2.3.4. With respect to the standard basis of M2×2


       
σ = {S_1, S_2, S_3, S_4} = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\},

the transpose linear transformation ·^T : M_{2×2} → M_{2×2} has matrix

[·^T]_{σσ} = [σ^T]_σ = [S_1, S_3, S_2, S_4]_{\{S_1,S_2,S_3,S_4\}} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Moreover, the linear transformation A· : M_{2×2} → M_{2×2} of left multiplying by A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} has matrix

[A·]_{σσ} = [Aσ]_σ = [AS_1, AS_2, AS_3, AS_4]_{\{S_1,S_2,S_3,S_4\}}
          = [S_1 + 3S_3, S_2 + 3S_4, 2S_1 + 4S_3, 2S_2 + 4S_4]_{\{S_1,S_2,S_3,S_4\}} = \begin{pmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 2 \\ 3 & 0 & 4 & 0 \\ 0 & 3 & 0 & 4 \end{pmatrix}.
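The same recipe can be carried out by computer: compute A S_i, read off its σ-coordinates (the entries read row by row), and stack them as columns. A NumPy sketch (illustration only):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    sigma = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
             np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

    # the sigma-coordinates of a 2x2 matrix are its entries read row by row
    M = np.column_stack([(A @ S).reshape(-1) for S in sigma])
    print(M)   # [[1 0 2 0], [0 1 0 2], [3 0 4 0], [0 3 0 4]]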

Exercise 2.63. In Example 2.3.2, what is the matrix of the derivative linear transformation
if α is changed to {1, t + 1, (t + 1)2 }?
Exercise 2.64. Find the matrix of \int_0^t : P_2 → P_3, with respect to the usual bases in P_2
and P_3. What about \int_1^t : P_2 → P_3?

Exercise 2.65. In Example 2.3.4, find the matrix of the right multiplication by A.

Proposition 2.3.1. The matrix of linear transformation has the following properties

[I]αα = I, [L + K]βα = [L]βα + [K]βα , [cL]βα = c[L]βα ,

[L ◦ K]γα = [L]γβ [K]βα .



Proof. The equality [I]αα = I is equivalent to that Iαα is the identity linear trans-
formation. This follows from

Iαα ([~v ]α ) = [I(~v )]α = [~v ]α .

Alternatively, we have
[I]αα = [α]α = I.
The equality [L + K]βα = [L]βα + [K]βα is equivalent to the equality (L + K)βα =
Lβα + Kβα for linear transformations, which we can verify by using Proposition 1.3.2

(L + K)βα ([~v ]α ) = [(L + K)(~v )]β = [L(~v ) + K(~v )]β


= [L(~v )]β + [K(~v )]β = Lβα ([~v ]α ) + Kβα ([~v ]α ).

Alternatively, we have (the second equality is true for individual vectors in α)

[L + K]βα = [(L + K)(α)]β = [L(α) + K(α)]β


= [L(α)]β + [K(α)]β = [L]βα + [K]βα .

The verification of [cL]βα = c[L]βα is similar, and is omitted.


The equality [L ◦ K]γα = [L]γβ [K]βα is equivalent to (L ◦ K)γα = Lγβ ◦ Kβα . This
follows from

(L ◦ K)γα ([~v ]α ) = [(L ◦ K)(~v )]γ = [L(K(~v ))]γ


= Lγβ ([K(~v )]β ) = Lγβ (Kβα ([~v ]α )) = (Lγβ ◦ Kβα )([~v ]α ).

Alternatively, we have (the third equality is true for individual vectors in K(α))

[L ◦ K]γα = [(L ◦ K)(α)]γ = [L(K(α))]γ = [L]γβ [K(α)]β = [L]γβ [K]βα .

Example 2.3.5 (Vandermonde Matrix). Applying the evaluation linear transforma-


tion in Example 2.2.13 to the monomials, we get the matrix of the linear transfor-
mation  
1 t0 t20 . . . tn0
1 t1 t2 . . . tn1 
 1 
2
[L]{1,t,...,tn } = 1 t2 t2
 . . . tn2 
.
 .. .. .. .. 
. . . .
1 tn t2n . . . tnn
This is called the Vandermonde matrix. Example 2.2.13 tells us that the matrix
is invertible if and only if all t_i are distinct. Moreover, the Lagrange interpolation
is the formula for L^{-1}, and shows that the i-th column of the inverse of the
Vandermonde matrix consists of the coefficients of the polynomial

L^{-1}(~e_i) = p_i(t) = \frac{(t - t_0)\cdots(t - t_{i-1})(t - t_{i+1})\cdots(t - t_n)}{(t_i - t_0)\cdots(t_i - t_{i-1})(t_i - t_{i+1})\cdots(t_i - t_n)}.

For example, for n = 2, we have

[L^{-1}(~e_0)]_{\{1,t,t^2\}} = \left[\frac{(t - t_1)(t - t_2)}{(t_0 - t_1)(t_0 - t_2)}\right]_{\{1,t,t^2\}} = \frac{(t_1 t_2,\ -t_1 - t_2,\ 1)}{(t_0 - t_1)(t_0 - t_2)},

and

\begin{pmatrix} 1 & t_0 & t_0^2 \\ 1 & t_1 & t_1^2 \\ 1 & t_2 & t_2^2 \end{pmatrix}^{-1} = \begin{pmatrix} \frac{t_1 t_2}{(t_0 - t_1)(t_0 - t_2)} & \frac{t_0 t_2}{(t_1 - t_0)(t_1 - t_2)} & \frac{t_0 t_1}{(t_2 - t_0)(t_2 - t_1)} \\ \frac{-t_1 - t_2}{(t_0 - t_1)(t_0 - t_2)} & \frac{-t_0 - t_2}{(t_1 - t_0)(t_1 - t_2)} & \frac{-t_0 - t_1}{(t_2 - t_0)(t_2 - t_1)} \\ \frac{1}{(t_0 - t_1)(t_0 - t_2)} & \frac{1}{(t_1 - t_0)(t_1 - t_2)} & \frac{1}{(t_2 - t_0)(t_2 - t_1)} \end{pmatrix}.
0 1 0 2

Exercise 2.66. In Example 2.3.2, directly verify [D]γα = [I]γβ [D]βα .

Exercise 2.67. Prove that (cL)βα = cLβα and (L ◦ K)γα = Lγβ Kβα . This implies the
equalities [cL]βα = c[L]βα and [L ◦ K]γα = [L]γβ [K]βα in Proposition 2.3.1.

Exercise 2.68. Prove that L is invertible if and only if [L]βα is invertible. Moreover, we
have [L−1 ]αβ = [L]−1
βα .

Exercise 2.69. Find the matrix of the linear transformation in Exercise 2.53 with respect
to the standard basis in Pn and Rn+1 . Also find the inverse matrix.

Exercise 2.70. The left multiplication in Example 2.3.4 is an isomorphism. Find the matrix
of the inverse.

2.3.2 Change of Basis


The matrix [L]βα depends on the choice of (ordered) bases α and β. If α0 and β 0
are also bases, then by Proposition 2.3.1, the matrix of L with respect to the new
choice is
[L]β 0 α0 = [I ◦ L ◦ I]β 0 α0 = [I]β 0 β [L]βα [I]αα0 .
This shows that the matrix of linear transformation is modified by multiplying
matrices [I]αα0 , [I]β 0 β of the identity operator with respect to various bases. Such
matrices are the matrices for the change of basis, and is simply the coordinates of
vectors in one basis with respect to the other basis

[I]αα0 = [I(α0 )]α = [α0 ]α .

Proposition 2.3.1 implies the following properties.

Proposition 2.3.2. The matrix for the change of basis has the following properties

[I]_{αα} = I, \quad [I]_{βα} = [I]_{αβ}^{-1}, \quad [I]_{γα} = [I]_{γβ}[I]_{βα}.

Example 2.3.6. Let ε be the standard basis of Rn, and let α = {~v1, ~v2, . . . , ~vn} be
another basis. Then the matrix for changing from α to ε is

[I]_{εα} = [α]_ε = (~v1 ~v2 · · · ~vn) = (α).

In general, the matrix for changing from α to β is

[I]_{βα} = [I]_{βε}[I]_{εα} = [I]_{εβ}^{-1}[I]_{εα} = [β]_ε^{-1}[α]_ε = (β)^{-1}(α).

For example, the matrix for changing from the basis in Example 2.2.17

α = {(1, 2, 3), (4, 5, 6), (7, 8, 10)}

to the basis in Examples 1.3.17, 2.2.18 and 2.3.1

β = {(1, −1, 0), (1, 0, −1), (1, 1, 1)}

is

\begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 1 & -2 & 1 \\ 1 & 1 & -2 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 & 0 & 1 \\ -3 & -3 & -5 \\ 6 & 15 & 25 \end{pmatrix}.
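A NumPy sketch (illustration only) of the formula [I]_βα = (β)^{-1}(α) for the two bases above:

    import numpy as np

    # columns of these matrices are the basis vectors
    alpha = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 10]], dtype=float)
    beta = np.array([[1, 1, 1], [-1, 0, 1], [0, -1, 1]], dtype=float)

    change = np.linalg.solve(beta, alpha)   # same as inv(beta) @ alpha
    print(np.round(3 * change))             # [[0 0 1], [-3 -3 -5], [6 15 25]]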

Example 2.3.7. Consider the basis αθ = {(cos θ, sin θ), (− sin θ, cos θ)} of unit length
vectors on the plane at angles θ and θ + π/2. The matrix for the change of basis from
αθ1 to αθ2 is obtained from the αθ2-coordinates of vectors in αθ1. Since αθ1 is obtained
from αθ2 by rotating θ = θ1 − θ2, the coordinates are the same as the ε-coordinates
of vectors in αθ. This means

[I]_{α_{θ_2}α_{θ_1}} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos(\theta_1 - \theta_2) & -\sin(\theta_1 - \theta_2) \\ \sin(\theta_1 - \theta_2) & \cos(\theta_1 - \theta_2) \end{pmatrix}.

This is consistent with the formula in Example 2.3.6

[I]_{α_{θ_2}α_{θ_1}} = (α_{θ_2})^{-1}(α_{θ_1}) = \begin{pmatrix} \cos\theta_2 & -\sin\theta_2 \\ \sin\theta_2 & \cos\theta_2 \end{pmatrix}^{-1}\begin{pmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{pmatrix}.

Example 2.3.8. The matrix for the change from the basis α = {1, t, t2 , t3 } of P3 to
another basis β = {1, t − 1, (t − 1)2 , (t − 1)3 } is
[I]βα = [1, t, t2 , t3 ]{1,t−1,(t−1)2 ,(t−1)3 }
= [1, 1 + (t − 1), 1 + 2(t − 1) + (t − 1)2 ,
1 + 3(t − 1) + 3(t − 1)2 + (t − 1)3 ]{1,t−1,(t−1)2 ,(t−1)3 }
 
1 1 1 1
0 1 2 3
=0 0 1 3 .

0 0 0 1

The inverse of this matrix is

[I]αβ = [1, t − 1, (t − 1)2 , (t − 1)3 ]{1,t,t2 ,t3 }


= [1, −1 + t, 1 − 2t + t2 , −1 + 3t − 3t2 + t3 ]{1,t,t2 ,t3 }
 
1 −1 1 −1
0 1 −2 3 
= .
0 0 1 −3
0 0 0 1
We can also use the method outlined before Example 2.2.16 to calculate the inverse.
But the method above is simpler.
The equality

(t + 1)3 = 1 + 3t + 3t2 + t3 = ((t − 1) + 2)3 = 8 + 12(t − 1) + 6(t − 1)2 + (t − 1)3

gives coordinates

[(t + 1)3 ]α = (1, 3, 3, 1), [(t + 1)3 ]β = (8, 12, 6, 1).

The two coordinates are related by the matrices for the change of basis
         
\begin{pmatrix} 8 \\ 12 \\ 6 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \\ 3 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ 3 \\ 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 8 \\ 12 \\ 6 \\ 1 \end{pmatrix}.

Exercise 2.71. Use matrices for the change of basis in Example 2.3.8 to find the matrix
[L]{1,t,t2 ,t3 }{1,t−1,(t−1)2 ,(t−1)3 } of the linear transformation L in Example 2.3.3.

Exercise 2.72. If the basis in the source vector space V is changed by one of three operations
in Exercise 1.59, how is the matrix of linear transformation changed? What about the
similar change in the target vector space W ?

2.3.3 Similar Matrix


For a linear operator L : V → V , we usually choose the same basis α for the domain
V and the range V . The matrix of the linear operator with respect to the basis α is
[L]αα . The matrices with respect to different bases are related by

[L]_{ββ} = [I]_{βα}[L]_{αα}[I]_{αβ} = [I]_{αβ}^{-1}[L]_{αα}[I]_{αβ} = [I]_{βα}[L]_{αα}[I]_{βα}^{-1}.

We say the two matrices A = [L]αα and B = [L]ββ are similar in the sense that they
are related by
B = P −1 AP = QAQ−1 ,
where P (matrix for changing from β to α) is an invertible matrix with P −1 = Q
(matrix for changing from α to β).

Example 2.3.9. In Example 2.3.1, we showed that the matrix of the orthogonal
projection P in Example 2.1.13 with respect to α = {(1, −1, 0), (1, 0, −1), (1, 1, 1)}
is very simple

[P]_{αα} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

On the other hand, by Examples 2.3.6 and 2.2.18, we have

[I]_{εα} = [α]_ε = \begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}, \qquad [I]_{εα}^{-1} = \frac{1}{3}\begin{pmatrix} 1 & -2 & 1 \\ 1 & 1 & -2 \\ 1 & 1 & 1 \end{pmatrix}.

Then we get the usual matrix of P (with respect to the standard basis ε)

[P]_{εε} = [I]_{εα}[P]_{αα}[I]_{εα}^{-1}
         = \begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}^{-1} = \frac{1}{3}\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}.
The matrix is the same as the one obtained in Example 2.1.13 by another method.

Example 2.3.10. Consider the linear operator L(f (t)) = tf 0 (t) + f (t) : P3 → P3 .
Applying the operator to the basis α = {1, t, t2 , t3 }, we get

L(1) = 1, L(t) = 2t, L(t2 ) = 3t2 , L(t3 ) = 4t3 .

Therefore

[L]_{αα} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

Consider another basis β = {1, t − 1, (t − 1)^2, (t − 1)^3} of P_3. By Example 2.3.8,
we have

[I]_{βα} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad [I]_{αβ} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Therefore

[L]_{ββ} = [I]_{βα}[L]_{αα}[I]_{αβ}
         = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 2 & 2 & 0 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

We can verify the result by directly applying L(f ) = (tf (t))0 to vectors in β

L(1) = 1,
L(t − 1) = [(t − 1) + (t − 1)2 ]0 = 1 + 2(t − 1),
L((t − 1)2 ) = [(t − 1)2 + (t − 1)3 ]0 = 2(t − 1) + 3(t − 1)2 ,
L((t − 1)3 ) = [(t − 1)3 + (t − 1)4 ]0 = 3(t − 1)2 + 4(t − 1)3 .
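A NumPy verification (illustration only) of the similarity formula [L]_ββ = [I]_βα [L]_αα [I]_αβ for this operator:

    import numpy as np

    L_aa = np.diag([1.0, 2.0, 3.0, 4.0])
    I_ba = np.array([[1, 1, 1, 1], [0, 1, 2, 3],
                     [0, 0, 1, 3], [0, 0, 0, 1]], dtype=float)

    L_bb = I_ba @ L_aa @ np.linalg.inv(I_ba)
    print(np.round(L_bb))   # [[1 1 0 0], [0 2 2 0], [0 0 3 3], [0 0 0 4]]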

Exercise 2.73. Explain that if A is similar to B, then B is similar to A.

Exercise 2.74. Explain that if A is similar to B, and B is similar to C, then A is similar


to C.

Exercise 2.75. Find the matrix of the linear operator of R2 that sends ~v1 = (1, 2) and
~v2 = (3, 4) to 2~v1 and 3~v2 . What about sending ~v1 , ~v2 to ~v2 , ~v1 ?

Exercise 2.76. Find the matrix of the reflection of R3 with respect to the plane x+y+z = 0.

Exercise 2.77. Find the matrix of the linear operator of R3 that circularly sends the basis
vectors in Example 2.1.13 to each other

(1, −1, 0) 7→ (1, 0, −1) 7→ (1, 1, 1) 7→ (1, −1, 0).

2.4 Dual
2.4.1 Dual Space
A function on a vector space V is a map l : V → R. If the map is a linear transfor-
mation
l(~u + ~v ) = l(~u) + l(~v ), l(c~u) = c l(~u),
then we call the map a linear functional. All the linear functionals on a vector space
V form a vector space, called the dual space

V ∗ = Hom(V, R).

An ordered basis α = {~v1 , ~v2 , . . . , ~vn } of V gives an isomorphism

l ∈ V ∗ ←→ l(α) = (l(~v1 ), l(~v2 ), . . . , l(~vn )) = (a1 , a2 , . . . , an ) ∈ Rn . (2.4.1)

The 1 × n matrix on the right is the matrix [l(α)]1 = [l]1α of l with respect to the
basis α of V and the basis 1 of R. The formula for l (i.e., the inverse of (2.4.1)) is
then given by (2.1.1)

l(~x) = a1 x1 + a2 x2 + · · · + an xn , [~x]α = (x1 , x2 , . . . , xn ). (2.4.2)



The isomorphism tells us


dim V ∗ = dim V. (2.4.3)
Under the isomorphism (2.4.1), the standard basis ε of Rn corresponds to a
basis α∗ = {~v1∗, ~v2∗, . . . , ~vn∗} of V ∗, called the dual basis. The correspondence means
(~vi∗(~v1), ~vi∗(~v2), . . . , ~vi∗(~vn)) = ~ei, or

~v_i^*(~v_j) = \delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \ne j. \end{cases}

In other words, the linear functional ~vi∗ is the i-th α-coordinate

~vi∗ (x1~v1 + x2~v2 + · · · + xn~vn ) = xi .

For ~x = x1~v1 + x2~v2 + · · · + xn~vn , we can also write this as

[~x]α = (~v1∗ (~x), ~v2∗ (~x), . . . , ~vn∗ (~x)) = α∗ (~x),

or
~x = ~v1∗ (~x)~v1 + ~v2∗ (~x)~v2 + · · · + ~vn∗ (~x)~vn .

Proposition 2.4.1. Suppose V is a finite dimensional vector space, and ~x, ~y ∈ V .


Then ~x = ~y if and only if l(~x) = l(~y ) for all l ∈ V ∗ .

For the sufficiency, we take l to be ~vi∗ and get [~x]α = [~y ]α . Since the coordinate
map is an isomorphism, this implies ~x = ~y .
We may also interpret (2.4.2) as

[l]α∗ = (l(~v1 ), l(~v2 ), . . . , l(~vn )) = l(α),

or
l = l(~v1 )~v1∗ + l(~v2 )~v2∗ + · · · + l(~vn )~vn∗ .

Example 2.4.1. The dual basis of the standard basis of Euclidean space is given by

~e∗i (x1 , x2 , . . . , xn ) = xi .

The isomorphism (2.4.1) is given by

l(x1 , x2 , . . . , xn ) = a1 x1 + a2 x2 + · · · + an xn ∈ (Rn )∗ ←→ (a1 , a2 , . . . , an ) ∈ Rn .

Example 2.4.2. We want to calculate the dual basis of the basis α = {~v1 , ~v2 , ~v3 } =
{(1, −1, 0), (1, 0, −1), (1, 1, 1)} in Example 1.3.17. The dual basis vector

~v1∗ (x1 , x2 , x3 ) = a1 x1 + a2 x2 + a3 x3

is characterised by
~v1∗ (~v1 ) = a1 − a2 = 1,
~v1∗ (~v2 ) = a1 − a3 = 0,
~v1∗ (~v3 ) = a1 + a2 + a3 = 0.
This is a system of linear equations with vectors in α as rows, and ~e1 as the right
side. We get the similar systems for the other dual basis vectors ~v2∗ , ~v3∗ , with ~e2 , ~e3
as the right sides. Similar to Example 1.3.17, we may solve the three systems at the
same time by carrying out the row operations
\left(\begin{matrix} \vec v_1^T \\ \vec v_2^T \\ \vec v_3^T \end{matrix}\ \middle|\ \vec e_1\ \vec e_2\ \vec e_3\right) = \left(\begin{array}{ccc|ccc} 1 & -1 & 0 & 1 & 0 & 0 \\ 1 & 0 & -1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{array}\right) \to \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ 0 & 1 & 0 & -\frac{2}{3} & \frac{1}{3} & \frac{1}{3} \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{2}{3} & \frac{1}{3} \end{array}\right).

This gives the dual basis

~v1∗(x1, x2, x3) = \frac{1}{3}(x1 − 2x2 + x3),
~v2∗(x1, x2, x3) = \frac{1}{3}(x1 + x2 − 2x3),
~v3∗(x1, x2, x3) = \frac{1}{3}(x1 + x2 + x3).
We note that the right half of the matrix obtained by the row operations is the
transpose of the right half of the corresponding matrix in Example 1.3.17. This will
be explained by the equality (AT )−1 = (A−1 )T .
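In matrix terms, the coefficient vectors of the dual basis are the rows of the inverse of the matrix (α) whose columns are ~v1, ~v2, ~v3. A NumPy sketch (illustration only):

    import numpy as np

    alpha = np.array([[1, 1, 1], [-1, 0, 1], [0, -1, 1]], dtype=float)  # columns v1, v2, v3
    dual = np.linalg.inv(alpha)        # row i holds the coefficients of v_i^*
    print(np.round(3 * dual))          # [[1 -2 1], [1 1 -2], [1 1 1]]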

Example 2.4.3. For any number a, the evaluation Ea (p(t)) = p(a) at a is a linear
functional on Pn (and on all the other function spaces). We argue that the three
evaluations E0 , E1 , E2 form a basis of the dual space P2∗ .
The key idea already appeared in Example 1.3.12. We argued that p1 (t) =
t(t − 1), p2 (t) = t(t − 2), p3 = (t − 1)(t − 2) form a basis of P2 because their values
at 0, 1, 2 almost form the standard basis of R3
E0 (p1 , p2 , p3 ) = (0, 0, 2), E1 (p1 , p2 , p3 ) = (0, −1, 0), E2 (p1 , p2 , p3 ) = (2, 0, 0).
This can be interpreted as the linear transformation E = (E0 , E1 , E2 ) : P2 → R3
taking α = { 21 p3 , −p2 , 21 p1 } to the standard basis of R3 . Since the standard basis
is linearly independent, the set α is also linearly independent (Exercise 2.3). Since
dim P2 = 3 is the number of vectors in α, by Theorem 1.3.14, α is a basis of P2 .
Moreover, the linear transformation E shows that {E0 , E1 , E2 } is the dual basis of
α.

Exercise 2.78. If we permute the standard basis of the Euclidean space, how is the dual
basis changed?

Exercise 2.79. How is the dual basis changed if we change a basis {~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn }
to the following bases? (see Exercise 1.59)

1. {~v1 , . . . , ~vj , . . . , ~vi , . . . , ~vn }.

2. {~v1 , . . . , c~vi , . . . , ~vn }, c 6= 0.

3. {~v1 , . . . , ~vi + c~vj , . . . , ~vj , . . . , ~vn }.

Exercise 2.80. Find the dual basis of a basis {(a, b), (c, d)} of R2 .

Exercise 2.81. Find the dual basis of the basis {(1, 2, 3), (4, 5, 6), (7, 8, 10)} of R3 .

Exercise 2.82. Find the dual basis of the basis {1 − t, 1 − t2 , 1 + t + t2 } of P2 .

Exercise 2.83. Find the dual basis of the basis {1, t, t^2} of P_2. Moreover, express the dual
basis in the form l(p(t)) = \int_0^1 p(t)\lambda(t)\,dt for suitable λ(t) ∈ P_2.

Exercise 2.84. Find the basis of P2 , such that the dual basis is the evaluations at three
distinct places t1 , t2 , t3 . Moreover, extend your result to Pn .

Exercise 2.85. Find the basis of P2 , such that the dual basis is the three derivatives at 0:
p(t) 7→ p(0), p(t) 7→ p0 (0), p(t) 7→ p00 (0). Extend to the derivatives up to n-order for Pn ,
and at a place a other than 0.

2.4.2 Dual Linear Transformation


A linear transformation L : V → W induces the dual linear transformation

L∗ (l) = l ◦ L : W ∗ → V ∗ , (L∗ (l))(~v ) = l(L(~v )).

This is the special case U = R of Exercise 2.13 (and therefore justifies that L∗ is
a linear transformation). The following shows that the dual linear transformation
corresponds to the transpose of matrix.

Proposition 2.4.2. Suppose α, β are bases of V, W , and α∗ , β ∗ are the corresponding


dual bases of V ∗ , W ∗ . Then
[L∗ ]α∗ β ∗ = [L]Tβα .

Proof. For 1 ≤ i ≤ m = dim W and 1 ≤ j ≤ n = dim V , denote (notice ji for L∗ )

α = {~v1, ~v2, . . . , ~vn}, β = {~w1, ~w2, . . . , ~wm}, [L]_{βα} = (a_{ij}), [L∗]_{α∗β∗} = (a∗_{ji}).

Then (recall the 3 × 2 matrix of L before Example 2.3.1)

L(~vj) = a_{1j} ~w1 + a_{2j} ~w2 + · · · + a_{mj} ~wm,
L∗(~wi∗) = a∗_{1i} ~v1∗ + a∗_{2i} ~v2∗ + · · · + a∗_{ni} ~vn∗.

Applying the linear functional ~wi∗ to the first equality, and applying the second
equality to ~vj, we get

~wi∗(L(~vj)) = a_{ij}, \qquad (L∗(~wi∗))(~vj) = a∗_{ji}.

By the definition of L∗, we have (L∗(~wi∗))(~vj) = ~wi∗(L(~vj)). Therefore a_{ij} = a∗_{ji}.
This means [L]^T_{βα} = [L∗]_{α∗β∗}.
By Exercise 2.13, the dual linear transformation has the following properties

I ∗ = I, (L + K)∗ = L∗ + K ∗ , (cL)∗ = cL∗ , (L ◦ K)∗ = K ∗ ◦ L∗ .

Then by Proposition 2.4.2, these translate into properties of the transpose of matrix

I T = I, (A + B)T = AT + B T , (cA)T = cAT , (AB)T = B T AT .

Exercise 2.86. Directly verify that L∗ (l) : W ∗ → V ∗ is a linear transformation: L∗ (l + k) =


L∗ (l) + L∗ (k), L∗ (cl) = cL∗ (l).

Exercise 2.87. Directly verify that the dual linear transformation has the claimed proper-
ties: I ∗ (l) = l, (L + K)∗ (l) = L∗ (l) + K ∗ (l), (cL)∗ (l) = cL∗ (l), (L ◦ K)∗ (l) = K ∗ (L∗ (l)).

By the last statements of Propositions 2.2.3 and 2.2.4, we know that L ◦ K = I


implies L is onto and K is one-to-one. Applying the dual switches the order into
K ∗ ◦ L∗ = I. This suggests that K ∗ is onto and L∗ is one-to-one, and almost gives
the proof of the following result.

Proposition 2.4.3. Suppose L : V → W is a linear transformation between finite


dimensional vector spaces.

1. If L is onto, then L∗ is one-to-one.

2. If L is one-to-one, then L∗ is onto.

Proof. Suppose L is onto, and l, k ∈ V ∗ satisfy L∗ (l) = L∗ (k). Then l(L(~v )) =


L∗ (l)(~v ) = L∗ (k)(~v ) = k(L(~v )) for all ~v ∈ V . Since L is onto, every vector of W is
of the form L(~v) for some ~v ∈ V. Therefore the equality means l(~w) = k(~w) for all
~w ∈ W. This is the definition of l = k, and proves the first statement.
The following proves the second statement

L is one-to-one =⇒ K ◦ L = I (Proposition 2.2.4)


=⇒ L∗ ◦ K ∗ = I ((L ◦ K)∗ = K ∗ ◦ L∗ )
=⇒ L∗ is onto. (Proposition 2.2.3)

In fact, we may also use the similar idea to prove the first statement.

Exercise 2.88. Let A be an m × n matrix. What is the relation between the existence and
the uniqueness of solutions of the following two systems of linear equations?
1. A~x = ~b: m equations in n variables.
2. AT ~y = ~c: n equations in m variables.

Exercise 2.89. By Proposition 2.4.3, what can you say about the pivots of the row echelon
forms of a matrix A and its transpose AT ? Note that row operation on AT can also be
regarded as column operation on A.

Exercise 2.90. Do we really need to assume finite dimension in Proposition 2.4.3?

2.4.3 Double Dual


Any vector ~v ∈ V induces a function on the dual space V ∗
~v ∗∗ (l) = l(~v ).
As the special case W = R of Exercise 2.11, the function ~v ∗∗ is a linear functional
on V ∗ . Therefore ~v ∗∗ is a vector in the double dual V ∗∗ = (V ∗ )∗ of the dual space
V ∗ . This gives the natural double dual map
~v ∈ V 7→ ~v ∗∗ ∈ V ∗∗ .
The following shows that the double dual map is a linear transformation
(a~v + b~w)∗∗(l) = l(a~v + b~w) = a l(~v) + b l(~w) = a ~v∗∗(l) + b ~w∗∗(l) = (a~v∗∗ + b~w∗∗)(l).

Proposition 2.4.1 can be interpreted as ~v∗∗(l) = ~w∗∗(l) for all l implying ~v = ~w.
Therefore the double dual map V → V∗∗ is one-to-one. By using (2.4.3) twice, we
get dim V∗∗ = dim V. Then by Theorem 2.2.6, we conclude that the double dual
map is an isomorphism.
map is an isomorphism.

Proposition 2.4.4. The double dual of a finite dimensional vector space is naturally
isomorphic to itself.

A linear transformation L : V → W induces a linear transformation L∗ : W ∗ →


V ∗ , which further induces the double dual linear transformation
L∗∗ : V ∗∗ → W ∗∗ .
Let us compare with the natural isomorphism in Proposition 2.4.4
              L
      V ----------> W
      | ≅         ≅ |
      v             v
      V∗∗ --------> W∗∗
             L∗∗

The following shows that L∗∗ can be identified with L under the natural isomorphism
(~v ∈ V and l ∈ W ∗ )

L∗∗ (~v ∗∗ )(l) = ~v ∗∗ (L∗ (l)) = (L∗ (l))(~v ) = l(L(~v )) = (L(~v ))∗∗ (l).

Therefore like Proposition 2.4.4, the double dual L∗∗ of a linear transformation L is
naturally identified with the linear transformation L itself.
Let A = [L]βα be the matrix of a linear transformation. Then by Proposition
2.4.2, we have AT = [L∗ ]α∗ β ∗ , and (AT )T = [L∗∗ ]β ∗∗ α∗∗ (Exercise 2.91 implicitly used).
The natural identification of L∗∗ and L is then an elaborate way of explaining
(AT )T = A.
Combining the natural identification of L∗∗ and L with Proposition 2.4.3 gives
the following more complete result.

Theorem 2.4.5. Suppose L : V → W is a linear transformation between finite di-


mensional vector spaces.
1. L is onto if and only if L∗ is one-to-one.

2. L is one-to-one if and only if L∗ is onto.

Exercise 2.91. For a basis α = {~v1 , ~v2 , . . . , ~vn } of V , the notation α∗∗ = {~v1∗∗ , ~v2∗∗ , . . . , ~vn∗∗ }
has two possible meanings.
1. (α∗ )∗ : First get dual basis α∗ of V ∗ . Then get the dual of the dual basis (α∗ )∗ of
(V ∗ )∗ .

2. α∗∗ : The image under the natural transformation V → V ∗∗ .


Explain that the two meanings are the same.

2.4.4 Dual Pairing


A function b : V × W → R is bilinear if it is linear in V and also linear in W

b(x1~v1 + x2~v2, ~w) = x1 b(~v1, ~w) + x2 b(~v2, ~w),
b(~v, y1~w1 + y2~w2) = y1 b(~v, ~w1) + y2 b(~v, ~w2).

Let α = {~v1, ~v2, . . . , ~vm} and β = {~w1, ~w2, . . . , ~wn} be bases of V and W. Then
the bilinear property implies

b(x1~v1 + x2~v2 + · · · + xm~vm, y1~w1 + y2~w2 + · · · + yn~wn) = \sum_{1\le i\le m,\ 1\le j\le n} x_i y_j\, b(~v_i, ~w_j).

Denote the matrix of bilinear function with respect to the bases

[b]_{αβ} = (b_{ij}), \qquad b_{ij} = b(~v_i, ~w_j).

Then we have

b(~x, ~y) = \sum_{i,j} b_{ij} x_i y_j = [~x]_α^T [b]_{αβ} [~y]_β.
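A small NumPy illustration (the matrix [b] below is an assumed example, not taken from the text) of evaluating a bilinear function through its matrix:

    import numpy as np

    B = np.array([[1.0, 0.0, 2.0],
                  [0.0, -1.0, 3.0]])    # [b]_{alpha beta} with dim V = 2, dim W = 3

    def b(x, y):
        return x @ B @ y                # [x]^T [b] [y]

    print(b(np.array([1.0, 2.0]), np.array([0.0, 1.0, 1.0])))   # 6.0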

We can define the linear combination of bilinear functions in the obvious way

(c1 b1 + c2 b2 )(~x, ~y ) = c1 b1 (~x, ~y ) + c2 b2 (~x, ~y ).

This makes all the bilinear functions on V × W into a vector space. It is also easy
to see that
[c1 b1 + c2 b2 ]αβ = c1 [b1 ]αβ + c2 [b2 ]αβ .
Therefore the vector space of bilinear functions is isomorphic to the vector space of
m × n matrices, m = dim V , n = dim W .
For a bilinear function b(~x, ~y ) on V × W , the linearity in V gives a map

~y ∈ W 7→ b(·, ~y ) ∈ V ∗ .

Then the linearity in W implies that the map is a linear transformation. Conversely,
a linear transformation L : W → V ∗ , gives a bilinear function

b(~x, ~y ) = L(~y )(~x).

Here L(~y ) is a linear functional on V and can be applied to ~x. This gives an
isomorphism between the vector space of all bilinear functions on V × W and the
vector space Hom(W, V ∗ ).
Due to the symmetry in V and W , we also have the isomorphism between the
vector space of all bilinear functions on V × W and the vector space Hom(V, W ∗ ).
One direction is given by

~x ∈ V 7→ K(~x) = b(~x, ·) ∈ W ∗ .

The converse is given by

K ∈ Hom(V, W ∗ ) 7→ b(~x, ~y ) = K(~x)(~y ).

Example 2.4.4. The evaluation pairing

e(~x, l) = l(~x) : V × V ∗ → R

is a bilinear function. The corresponding linear transformation in Hom(V ∗ , V ∗ ) is


the identity. The corresponding linear transformation in Hom(V, (V ∗ )∗ ) is the double
dual map ~v 7→ ~v ∗∗ .
Let α and β be bases of V and V ∗ . Then [e]αβ = I if and only if β is the dual
basis of α.

Exercise 2.92. Explain that two bilinear functions on V × W are equal if and only if they
are equal on two spanning sets of V and W .

Exercise 2.93. How is the matrix of a bilinear function changed when the bases are changed?

Exercise 2.94. Prove that L ∈ Hom(W, V ∗ ) and K ∈ Hom(V, W ∗ ) give the same bilinear
function on V × W if and only if K = L∗ , subject to the isomorphism in Proposition 2.4.4.

Exercise 2.95. A bilinear function b on V × W corresponds to L ∈ Hom(W, V ∗ ) and K ∈


Hom(V, W ∗ ). Let α, β be bases of V, W , and α∗ , β ∗ be the dual bases. How are the
matrices [b]αβ , [L]α∗ β , [K]β ∗ α related?

Exercise 2.96. For a bilinear function b(~v, ~w) on V × W, b^t(~w, ~v) = b(~v, ~w) is a bilinear
function on W × V. Let α, β be bases of V, W. How are the matrices [b]_{αβ} and [b^t]_{βα}
related? Moreover, the bilinear functions b and b^t correspond to four linear transformations.
How are these linear transformations related?

Exercise 2.97. Let b(~x, ~y ) be a bilinear function on V × W . Let L : U → V be a linear


transformation.

1. Explain that b(L(~z), ~y ) is a bilinear function on U × W .

2. Given bases for U, V, W , how are the matrices of b(~x, ~y ) and b(L(~z), ~y ) related?

3. The two bilinear functions correspond to four linear transformations. How are these
linear transformations related?

Finally, please study the same problem for b(~x, L(~z)).

Definition 2.4.6. A bilinear function b : V ×W → R is a dual pairing if both induced


linear transformations V → W ∗ and W → V ∗ are isomorphisms.

By Exercise 2.94, the two linear transformations can be regarded as dual to
each other. Therefore we only need one of them to be an isomorphism. Moreover, by
Theorem 2.4.5, the dual pairing condition is equivalent to both V → W ∗ and W → V ∗
being onto, and is also equivalent to both being one-to-one.
The evaluation pairing in Example 2.4.4 is a basic example of dual pairing. Using
the same idea, a basis α = {~v1, ~v2, . . . , ~vn} of V and a basis β = {~w1, ~w2, . . . , ~wn} of
W are dual bases with respect to the dual pairing b if

b(~v_i, ~w_j) = \delta_{ij}, \quad \text{or} \quad [b]_{αβ} = I.

This is equivalent to that the corresponding V → W ∗ maps the basis α of V to the


dual basis β ∗ of W ∗ , and is also equivalent to that the corresponding W → V ∗ maps
the basis β of W to the dual basis α∗ of V ∗ . The dual bases with respect to the
bilinear function also give

~x = b(~x, ~w1)~v1 + b(~x, ~w2)~v2 + · · · + b(~x, ~wn)~vn   for any ~x ∈ V,
~y = b(~v1, ~y)~w1 + b(~v2, ~y)~w2 + · · · + b(~vn, ~y)~wn   for any ~y ∈ W.

Exercise 2.98. Prove that a bilinear function is a dual pairing if and only if its matrix with
respect to some bases is invertible.
Chapter 3

Subspace

As human civilisation became more sophisticated, they found it necessary to extend


their number systems. The ancient Greeks found that the rational numbers Q were
not sufficient for describing lengths in geometry, and the problem was solved later
by the Arabs who extended the rational numbers to the real numbers R. Then the
Italians found it useful to take the square root of −1 in their search for the roots
of cubic equations, and the idea led to the extension of real numbers to complex
numbers C.
In extending the number system, we still wish to preserve the key features of the
old system. This means that the inclusions Q ⊂ R ⊂ C are compatible with the four
arithmetic operations. In other words, 2 + 3 = 5 is an equality of rational numbers,
and is also an equality of real (or complex) numbers. In this sense, we may call Q
a sub-number system of R and C.

3.1 Definition
3.1.1 Subspace
Definition 3.1.1. A subset H of a vector space V is a subspace if it satisfies

~u, ~v ∈ H, a, b ∈ R =⇒ a~u + b~v ∈ H.

Using the addition and scalar multiplication of V , the subset H is also a vector
space. One should imagine that a subspace is a flat and infinite (with the only
exception of the trivial subspace {~0}) subset passing through the origin.
The smallest subspace is the trivial subspace. The biggest subspace is the whole
space V itself. The polynomials of degree ≤ 3 form a subspace of the polynomials of
degree ≤ 5. All polynomials form a subspace of all functions. Although R3 can be
identified with a subspace of R5 (in many different ways), R3 is not a subspace of R5.


Proposition 3.1.2. If H is a subspace of a finite dimensional vector space V , then


dim H ≤ dim V . Moreover, H = V if and only if dim H = dim V .

Proof. Let α = {~v1, ~v2, . . . , ~vn} be a basis of H. Then α is a linearly independent
set in V. By Proposition 1.3.13, we have dim H = n ≤ dim V. Moreover, if
n = dim H = dim V, then by Theorem 1.3.14, the linear independence of α implies
that α also spans V. Since α spans H, we get H = V.

Exercise 3.1. Determine whether the subset is a subspace of R2 .

1. {(x, 0) : x ∈ R}. 3. {(x, y) : 2x − 3y = 0}. 5. {(x, y) : xy = 0}.

2. {(x, y) : x + y = 0}. 4. {(x, y) : 2x − 3y = 1}. 6. {(x, y) : x, y ∈ Q}.

Exercise 3.2. Determine whether the subset is a subspace of R3 .

1. {(x, 0, z) : x, z ∈ R}.

2. {(x, y, z) : x + y + z = 0}.

3. {(x, y, z) : x + y + z = 1}.

4. {(x, y, z) : x + y + z = 0, x + 2y + 3z = 0}.

Exercise 3.3. Determine whether the subset is a subspace of Pn .

1. even polynomials. 3. polynomials satisfying f (0) = 1.

2. polynomials satisfying f (1) = 0. 4. polynomials satisfying f 0 (0) = f (1).

Exercise 3.4. Determine whether the subset is a subspace of C ∞ .

1. odd functions.

2. functions satisfying f 00 + f = 0.

3. functions satisfying f 00 + f = 1.

4. functions satisfying f 0 (0) = f (1).

5. functions satisfying limt→∞ f (t) = 0.

6. functions satisfying limt→∞ f (t) = 1.

7. functions such that limt→∞ f (t) diverges.


R1
8. functions satisfying 0 f (t)dt = 0.

Exercise 3.5. Determine whether the subset is a subspace of the space of all sequences (xn ).
1. x_n converges.

2. x_n diverges.

3. The series \sum x_n converges.

4. The series \sum x_n absolutely converges.

Exercise 3.6. If H is a subspace of V and V is a subspace of H, what can you conclude?

Exercise 3.7. Prove that H is a subspace if and only if ~0 ∈ H and a~u + ~v ∈ H for any
a ∈ R and ~u, ~v ∈ H.

Exercise 3.8. Suppose H is a subspace of V , and ~v ∈ V . Prove that ~v +H = {~v +~h : ~h ∈ H}


is still a subspace if and only if ~v ∈ H.

Exercise 3.9. Suppose H is a subspace of V . Prove that the inclusion i(~h) = ~h : H → V is


a one-to-one linear transformation.
For any linear transformation L : V → W , the restriction L|H = L ◦ i : H → W is still
a linear transformation.

Exercise 3.10. Suppose H and H 0 are subspaces of V .

1. Prove that the sum H + H 0 = {~h + ~h0 : ~h ∈ H, ~h0 ∈ H 0 } is a subspace.

2. Prove that the intersection H ∩ H 0 is a subspace.

When is the union H ∪ H 0 a subspace?

3.1.2 Span
The span of a set of vectors is the collection of all linear combinations

Spanα = Span{~v1 , ~v2 , . . . , ~vn }


= {x1~v1 + x2~v2 + · · · + xn~vn : xi ∈ R}
= R~v1 + R~v2 + · · · + R~vn .

By Proposition 1.2.1, a linear combination of linear combinations is still a linear


combination. This means Spanα is a subspace.
The span of a single nonzero vector is the straight line in the direction of the
vector. If the vector is zero, then the span is reduced to the origin.
The span of two non-parallel vectors is the 2-dimensional plane containing the
origin and the two vectors (or containing the parallelogram formed by the two vec-
tors, see Figure 1.2.1). If two vectors are parallel, then the span is reduced to a line
in the direction of the two vectors.

Exercise 3.11. Prove that Spanα is the smallest subspace containing α.

Exercise 3.12. Prove that α ⊂ β implies Spanα ⊂ Spanβ.



Exercise 3.13. Prove that ~v is a linear combination of ~v1 , ~v2 , . . . , ~vn if and only if

R~v1 + R~v2 + · · · + R~vn + R~v = R~v1 + R~v2 + · · · + R~vn .

Exercise 3.14. Prove that if one vector is a linear combination of the other vectors, then
deleting the vector does not change the span.

Exercise 3.15. Prove that Spanα ⊂ Spanβ if and only if vectors in α are linear combinations
of vectors in β. In particular, Spanα = Spanβ if and only if vectors in α are linear
combinations of vectors in β, and vectors in β are linear combinations of vectors in α.

By the definition, the subspace Spanα is already spanned by α. By Theorem


1.3.10, we get a basis of Spanα by finding a maximal linearly independent set in α.

Example 3.1.1. To find a basis for the span of

~v1 = (1, 2, 3), ~v2 = (4, 5, 6), ~v3 = (7, 8, 9), ~v4 = (10, 11, 12),

we consider the row operations in Example 1.2.1


   
1 4 7 10 1 4 7 10
(~v1 ~v2 ~v3 ~v4 ) = 2 5 8 11 → 0 1 2 3  .
3 6 9 12 0 0 0 0

If we restrict the row operations to the first two columns, then we find that ~v1 , ~v2 are
linearly independent. If we restrict the row operations to the first three columns,
then we see that adding ~v3 gives linearly dependent set ~v1 , ~v2 , ~v3 , because the third
column is not pivot. By the same reason, ~v1 , ~v2 , ~v4 are also linearly dependent.
Therefore ~v1 , ~v2 form a maximal linearly independent set among ~v1 , ~v2 , ~v3 , ~v4 . By
Theorem 1.3.10, ~v1 , ~v2 form a basis of R~v1 + R~v2 + R~v3 + R~v4 .

In general, given ~v1 , ~v2 , . . . , ~vn ∈ Rm , we carry out row operations on the matrix
(~v1 ~v2 · · · ~vn ). Then the pivot columns in (~v1 ~v2 · · · ~vn ) form a basis of R~v1 + R~v2 +
· · · + R~vn . In particular, the dimension of the span is the number of pivots after row
operations on (~v1 ~v2 · · · ~vn ).
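A SymPy sketch (illustration only) that finds the pivot columns for the four vectors of Example 3.1.1:

    from sympy import Matrix

    M = Matrix([[1, 4, 7, 10],
                [2, 5, 8, 11],
                [3, 6, 9, 12]])    # columns are v1, v2, v3, v4
    rref, pivots = M.rref()
    print(pivots)                  # (0, 1): v1, v2 form a basis of the span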

Exercise 3.16. Show that {~v1 , ~v3 } and {~v1 , ~v4 } are also bases in Example 3.1.1.

Exercise 3.17. Find a basis of the span.

1. (1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6).

2. (1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12).

3. (1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1).
           
4. \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}.

5. 1 − t, t − t3 , 1 − t3 , t3 − t5 , t − t5 .

By Exercise 1.32, the span has the following property.

Proposition 3.1.3. The following operations do not change the span.

1. {~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn } → {~v1 , . . . , ~vj , . . . , ~vi , . . . , ~vn }.

2. {~v1 , . . . , ~vi , . . . , ~vn } → {~v1 , . . . , c~vi , . . . , ~vn }, c 6= 0.

3. {~v1 , . . . , ~vi , . . . , ~vj , . . . , ~vn } → {~v1 , . . . , ~vi + c~vj , . . . , ~vj , . . . , ~vn }.

For vectors in a Euclidean space, the operations in Proposition 3.1.3 are column
operations on the matrix (~v1 ~v2 · · · ~vn ). The proposition basically says that column
operations do not change the span. We can take advantage of the proposition to
find another way of calculating a basis of Spanα.

Example 3.1.2. For the four vectors in Example 3.1.1, we carry out column opera-
tions on the matrix (~v1 ~v2 ~v3 ~v4 )

\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{pmatrix} \xrightarrow{\substack{C_4 - C_3 \\ C_3 - C_2 \\ C_2 - C_1}} \begin{pmatrix} 1 & 3 & 3 & 3 \\ 2 & 3 & 3 & 3 \\ 3 & 3 & 3 & 3 \end{pmatrix} \xrightarrow{\substack{C_4 - C_3 \\ C_3 - C_2 \\ \frac{1}{3}C_2}} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 3 & 1 & 0 & 0 \end{pmatrix} \xrightarrow{\substack{C_1 - C_2 \\ C_1 \leftrightarrow C_2}} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \end{pmatrix}.

The result is a column echelon form. By Proposition 3.1.3, we get

R~v1 + R~v2 + R~v3 + R~v4 = R(1, 1, 1) + R(0, 1, 2) + R(0, 0, 0) + R(0, 0, 0)


= R(1, 1, 1) + R(0, 1, 2).

The two pivot columns (1, 1, 1), (0, 1, 2) of the column echelon form are always lin-
early independent, and therefore form a basis of R~v1 + R~v2 + R~v3 + R~v4 .

Example 3.1.3. By taking the transpose of the row operation in Example 3.1.1, we
get the column operation
     
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ 10 & 11 & 12 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 4 & -3 & -6 \\ 7 & -6 & -12 \\ 10 & -9 & -18 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 7 & 2 & 0 \\ 10 & 3 & 0 \end{pmatrix}.

We find that (1, 4, 7, 10), (0, 1, 2, 3) form a basis of R(1, 4, 7, 10) + R(2, 5, 8, 11) +
R(3, 6, 9, 12).

In general, given ~v1, ~v2, . . . , ~vn ∈ Rm, we use column operations to find a column
echelon form of (~v1 ~v2 · · · ~vn). Then the pivot columns in the column echelon form
(not the columns in the original matrix (~v1 ~v2 · · · ~vn)) form a basis of R~v1 + R~v2 +
· · · + R~vn. In particular, the dimension of the span is the number of pivots after
column operations on (~v1 ~v2 · · · ~vn). This is also the same as the number of pivots
after row operations on the transpose (~v1 ~v2 · · · ~vn)^T.
Comparing the dimensions of the span obtained in these two ways, we conclude
that row operations on A and on A^T produce the same number of pivots.
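A quick NumPy check (not from the text) of this conclusion for the matrix of Example 3.1.1:

    import numpy as np

    A = np.array([[1, 4, 7, 10],
                  [2, 5, 8, 11],
                  [3, 6, 9, 12]], dtype=float)
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 2 2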

Exercise 3.18. Explain how Proposition 3.1.3 follows from Exercise 1.32.

Exercise 3.19. List all the 2 × 3 column echelon forms.

Exercise 3.20. Explain that the nonzero columns in a column echelon form are linearly
independent.

Exercise 3.21. Explain that, if the columns of an n × n matrix is a basis of Rn , then the
rows of the matrix is also a basis of Rn .

Exercise 3.22. Use column operations to find a basis of the span in Exercise 3.17.

3.1.3 Calculation of Extension to Basis


The column echelon form can also be used to extend linearly independent vectors
to a basis. By the proof of Theorem 1.3.11, this can be achieved by finding vectors
not in the span.

Example 3.1.4. In Example 1.3.15, we use row operations to find that the vectors
~v1 = (1, 4, 7, 11), ~v2 = (2, 5, 8, 12), ~v3 = (3, 6, 10, 10) in R4 are linearly independent.
We may also use column operations to get
     
(\vec v_1\ \vec v_2\ \vec v_3) = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \\ 11 & 12 & 10 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 4 & -3 & -6 \\ 7 & -6 & -11 \\ 11 & -10 & -23 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 4 & -3 & 0 \\ 7 & -6 & 1 \\ 11 & -10 & -3 \end{pmatrix}.

This shows that (1, 4, 7, 11), (0, −3, −6, −10), (0, 0, 1, −3) form a basis of R~v1 +R~v2 +
R~v3 . In particular, the span has dimension 3. By Theorem 1.3.14, we find that
~v1 , ~v2 , ~v3 are also linearly independent.
It is obvious that (1, 4, 7, 11), (0, −3, −6, −10), (0, 0, 1, −3), ~e4 = (0, 0, 0, 1) form
a basis of R4 . In fact, the same column operations (applied to the first three columns

only) gives
     
    (~v1 ~v2 ~v3 ~e4 ) =
        [  1  2  3 0 ]     [  1   0   0 0 ]     [  1   0  0 0 ]
        [  4  5  6 0 ]  →  [  4  −3  −6 0 ]  →  [  4  −3  0 0 ] .
        [  7  8 10 0 ]     [  7  −6 −11 0 ]     [  7  −6  1 0 ]
        [ 11 12 10 1 ]     [ 11 −10 −23 1 ]     [ 11 −10 −3 1 ]
Then ~v1 , ~v2 , ~v3 , ~e4 and (1, 4, 7, 11), (0, −3, −6, −10), (0, 0, 1, −3), (0, 0, 0, 1) span the
same vector space, which is R4 . By Theorem 1.3.14, adding ~e4 gives a basis ~v1 , ~v2 , ~v3 , ~e4
of R4 .
In Example 1.3.15, we extended ~v1 , ~v2 , ~v3 to a basis by a different method. The
reader should compare the two methods.

Example 3.1.4 suggests the following practical way of extending a linearly inde-
pendent set in Rn to a basis of Rn . Suppose column operations on three linearly
independent vectors ~v1 , ~v2 , ~v3 ∈ R5 give
 
    (~v1 ~v2 ~v3 ) −−col op−−→ [ • 0 0 ]
                               [ ∗ 0 0 ]
                               [ ∗ • 0 ]
                               [ ∗ ∗ • ]
                               [ ∗ ∗ ∗ ].
We may add ~u1 = (0, •, ∗, ∗, ∗) and ~u2 = (0, 0, 0, 0, •) to create pivots in the second
and the fifth rows
   
    (~v1 ~v2 ~v3 ~u1 ~u2 )  −−col op on first 3 col−−→
        [ • 0 0 0 0 ]                        [ • 0 0 0 0 ]
        [ ∗ 0 0 • 0 ]                        [ ∗ • 0 0 0 ]
        [ ∗ • 0 ∗ 0 ]   −−exchange col−−→    [ ∗ ∗ • 0 0 ]
        [ ∗ ∗ • ∗ 0 ]                        [ ∗ ∗ ∗ • 0 ]
        [ ∗ ∗ ∗ ∗ • ]                        [ ∗ ∗ ∗ ∗ • ].
Then ~v1 , ~v2 , ~v3 , ~u1 , ~u2 form a basis of R5 .
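
The whole procedure can also be automated; the following sketch with Python's sympy
(an illustrative assumption, not part of the text) repeats Example 3.1.4: the pivot rows of
the column echelon form of (~v1 ~v2 ~v3 ) are read off from rref of the transpose, and a
standard basis vector is appended for every missing pivot row.

    import sympy as sp

    # The three independent vectors of Example 3.1.4, as columns.
    V = sp.Matrix([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 10],
                   [11, 12, 10]])

    # Column operations on V are row operations on V.T, so the pivot rows
    # of the column echelon form are the pivot columns of rref(V.T).
    _, pivots = V.T.rref()
    missing = [i for i in range(V.rows) if i not in pivots]   # here [3]

    # Append e_i for every row without a pivot; the columns then form a basis of R^4.
    extension = V.row_join(sp.Matrix.hstack(*[sp.eye(V.rows)[:, i] for i in missing]))
    print(pivots, missing)                 # (0, 1, 2) [3]
    print(extension.rank() == V.rows)      # True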

Exercise 3.23. Extend the basis you find in Exercise 3.22 to a basis of the whole vector
space.

3.2 Range and Kernel


A linear transformation L : V → W induces two subspaces. The range (or image) is
RanL = L(V ) = {L(~v ) : all ~v ∈ V } ⊂ W.
The following verifies this is a subspace
    ~w, ~w' ∈ L(V ) =⇒ ~w = L(~v ), ~w' = L(~v ') for some ~v , ~v ' ∈ V
                   =⇒ a~w + b~w' = aL(~v ) + bL(~v ') = L(a~v + b~v ') ∈ L(V ).

The kernel is the preimage of the zero vector

KerL = L−1 (~0) = {~v : ~v ∈ V and L(~v ) = ~0} ⊂ V.

The following verifies this is a subspace

~v , ~v 0 ∈ KerL =⇒ L(~v ) = ~0, L(~v 0 ) = ~0


=⇒ L(a~v + b~v 0 ) = aL(~v ) + bL(~v 0 ) = a~0 + b~0 = ~0.

Exercise 3.24. Prove that Ran(L ◦ K) ⊂ RanL. Moreover, if K is onto, then Ran(L ◦ K) =
RanL.

Exercise 3.25. Prove that Ker(L ◦ K) ⊃ KerK. Moreover, if L is one-to-one, then Ker(L ◦
K) = KerK.

Exercise 3.26. Suppose L : V → W is a linear transformation, and H ⊂ V is a subspace.


Prove that L(H) = {L(~v ) : all ~v ∈ H} is a subspace.

Exercise 3.27. Suppose L : V → W is a linear transformation, and H ⊂ W is a subspace.


Prove that L−1 (H) = {~v ∈ V : L(~v ) ∈ H} is a subspace.

Exercise 3.28. Prove that L(H) ∩ KerK = L(H ∩ Ker(K ◦ L)).

3.2.1 Range
The range is actually defined for any map f : X → Y

Ranf = f (X) = {f (x) : all x ∈ X} ⊂ Y.

For the map Instructor: Courses → Professors, the range is all the professors who
teach some courses.
The map is onto if and only if f (X) = Y . This suggests that we may consider
the same map with smaller target

f˜: X → f (X), f˜(x) = f (x).

For the Instructor map, this means Ĩnstructor: Courses → Teaching Professors. The
advantage of the modification is the following.

Proposition 3.2.1. For any map f : X → Y , the corresponding map f˜: X → f (X)
has the following properties.

1. f˜ is onto.

2. f˜ is one-to-one if and only if f is one-to-one.



Exercise 3.29. Prove Proposition 3.2.1.

Exercise 3.30. Prove that Ran(f ◦ g) ⊂ Ranf . Moreover, if g is onto, then Ran(f ◦ g) =
Ranf .

Specialising to a linear transformation L : V → W , if α = {~v1 , ~v2 , . . . , ~vn } spans


V , then any vector of V is a linear combination of α, and L is given by (2.1.1) for
all vectors in V . This implies that the range subspace is a span

RanL = L(V ) = SpanL(α).

If L : Rn → Rm is actually a linear transformation of Euclidean spaces, with m × n


matrix A, then the interpretation above shows that the range of L is the span of
the column vectors L(~ei ) of A, called the column space

RanL = ColA ⊂ Rm .

Of course we can also consider the span of the rows of A and get the row space.
The row and column spaces are clearly related by the transpose of the matrix

RowA = ColAT ⊂ Rn .

By Proposition 2.4.2 and the natural identification between (Rn )∗ and Rn in Example
2.4.1, the row space corresponds to the range space of the dual linear transformation
L∗ .

Example 3.2.1. The derivative of a polynomial of degree n is a polynomial of degree
n − 1. Therefore we have a linear transformation D(f ) = f 0 : Pn → Pm for m ≥ n − 1.
By integrating polynomials (i.e., taking anti-derivatives), we get RanD = Pn−1 ⊂ Pm . The linear
transformation is onto if and only if m = n − 1.

Example 3.2.2. Consider the linear transformation L(f ) = f 00 + (1 + t2 )f 0 + tf : P3 → P4


in Example 2.3.3. The row operations in the earlier example show that the following form
a basis of RanL

L(1) = t, L(t) = 1 + 2t2 , L(t2 ) = 2 + 2t + 3t3 , L(t3 ) = 6t + 3t2 + 4t4 .

Alternatively, we may carry out the column operations


       
    [ 0 1 2 0 ]     [ 0 1  0 0 ]     [ 0 1  0  0  ]     [ 1 0  0  0 ]
    [ 1 0 2 6 ]     [ 1 0  0 0 ]     [ 1 0  0  0  ]     [ 0 1  0  0 ]
    [ 0 2 0 3 ]  →  [ 0 2 −4 3 ]  →  [ 0 2 −4  0  ]  →  [ 2 0 −4  0 ] .
    [ 0 0 3 0 ]     [ 0 0  3 0 ]     [ 0 0  3 9/4 ]     [ 0 0  3  9 ]
    [ 0 0 0 4 ]     [ 0 0  0 4 ]     [ 0 0  0  4  ]     [ 0 0  0 16 ]

This shows that 1 + 2t2 , t, −4t2 + 3t3 , 9t3 + 16t4 form a basis of RanL. It is also easy to
see that adding t4 gives a basis of P4 .

Example 3.2.3. Consider the linear transformation L(A) = A + AT : Mn×n → Mn×n . By


(A+AT )T = AT +(AT )T = AT +A = A+AT , any X = A+AT ∈ RanL satisfies X T = X.
Such matrices are called symmetric because they are of the form (for n = 3, for example)
   
a11 a12 a13 a d e
X = a12 a22 a23  = d b f  .
a13 a23 a33 e f c

Conversely, if X = X T , then for A = (1/2)X, we have

    L(A) = A + AT = (1/2)X + (1/2)X T = (1/2)X + (1/2)X = X.
This shows that any symmetric matrix lies in RanL. Therefore the range of L consists of
exactly all the symmetric matrices. A basis of 3 × 3 symmetric matrices is given by
       
    [ a d e ]     [ 1 0 0 ]     [ 0 0 0 ]     [ 0 0 0 ]     [ 0 1 0 ]     [ 0 0 1 ]     [ 0 0 0 ]
    [ d b f ] = a [ 0 0 0 ] + b [ 0 1 0 ] + c [ 0 0 0 ] + d [ 1 0 0 ] + e [ 0 0 0 ] + f [ 0 0 1 ] .
    [ e f c ]     [ 0 0 0 ]     [ 0 0 0 ]     [ 0 0 1 ]     [ 0 0 0 ]     [ 1 0 0 ]     [ 0 1 0 ]

Exercise 3.31. Explain that A~x = ~b has solution if and only if ~b ∈ ColA.

Exercise 3.32. Suppose L : V → W is a linear transformation.

1. Prove the modified linear transformation (see Proposition 3.2.1) L̃ : V → L(V ) is


an onto linear transformation.

2. Let i : L(V ) → W be the inclusion linear transformation in Exercise 3.9. Show that
L = i ◦ L̃.

Exercise 3.33. Suppose L : V → W is a linear transformation. Prove that L is one-to-one


if and only if L̃ : V → L(V ) is an isomorphism.

Exercise 3.34. Show that the range of the linear transformation L(A) = A − AT : Mn×n →
Mn×n consists of matrices X satisfying X T = −X. These are called skew-symmetric
matrices.

Exercise 3.35. Find the dimensions of the subspaces of symmetric and skew-symmetric
matrices.

3.2.2 Rank
The span of a set of vectors, the range of a linear transformation, and the column
space of a matrix are different presentations of the same concept. Their size, which

is their dimension, is the rank

rankα = dim Spanα, rankL = dim RanL, rankA = dim ColA.

By the calculation of basis of span subspace in Section 3.1.1, the rank of a matrix
A is the number of pivots in the row echelon form, and is also the number of pivots in
the column echelon form. Since column operation on A is the same as row operation
on AT , we have
rankAT = rankA.
By Proposition 2.4.2, this means

rankL∗ = rankL.

Since the number of pivots of an m × n matrix is always no more than m and n,


we have
rankAm×n ≤ min{m, n}.
If the equality holds, then the matrix has full rank. This means either rankAm×n = m
(all rows are pivot), or rankAm×n = n (all columns are pivot). By Propositions 1.3.4,
1.3.6, 2.2.9, we have the following.

Proposition 3.2.2. Let A be an m×n matrix. Then rankA ≤ min{m, n}. Moreover,

1. A~x = ~b has solution for all ~b if and only if rankA = m.

2. The solution of A~x = ~b is unique if and only if rankA = n.

3. A is invertible if and only if rankA = m = n.

The following is the same result for set of vectors. The first statement actu-
ally follows from Proposition 3.1.2, and the second statement may be obtained by
applying Theorem 1.3.14 to V = Spanα.

Proposition 3.2.3. Let α be a set of n vectors in V . Then rankα ≤ min{n, dim V }.
Moreover,

1. α spans V if and only if rankα = dim V .

2. α is linearly independent if and only if rankα = n.

3. α is a basis of V if and only if rankα = dim V = n.

The following is the same result for linear transformation. The first statement
actually follows from Proposition 3.1.2, and the second statement may be obtained
by applying Theorem 2.2.6 to L̃ : V → RanL in Exercises 3.32 and 3.33.

Proposition 3.2.4. Let L : V → W be a linear transformation. Then rankL ≤


min{dim V, dim W }. Moreover,

1. L is onto if and only if rankL = dim W .

2. L is one-to-one if and only if rankL = dim V .

3. L is invertible if and only if rankL = dim V = dim W .

Exercise 3.36. What is the rank of the vector set in Exercise 3.17?

Exercise 3.37. If a set of vectors is enlarged, how is the rank changed?

Exercise 3.38. Suppose the columns of an m × n matrix A are linearly independent. Prove
that there are n rows of A that are also linearly independent. Similarly, if the rows of A
are linearly independent, then there are n columns of A that are also linearly independent.

Exercise 3.39. Directly prove Proposition 3.2.3.

Exercise 3.40. Consider a composition U −K→ V −L→ W . Let L|K(U ) : K(U ) → W be the
restriction of L to the subspace K(U ).

1. Prove Ran(L ◦ K) = RanL|K(U ) ⊂ RanL.

2. Prove rank(L ◦ K) ≤ min{rankL, rankK}.

3. Prove rank(L ◦ K) = rankL when K is onto.

4. Prove rank(L ◦ K) = rankK when L is one-to-one.

5. Translate the second part into a fact about matrices.

3.2.3 Kernel
By the third part of Proposition 2.2.4, we know that a linear transformation L is
one-to-one if and only if KerL = {~0}. In contrast, the linear transformation is onto
if and only if RanL is the whole target space.
For a linear transformation L(~x) = A~x : Rn → Rm between Euclidean spaces,
the kernel is all the solutions of the homogeneous system of linear equations, called
the null space

NulA = KerL = {~v : all ~v ∈ Rn satisfying A~v = ~0}.

The uniqueness of solution means NulA = {~0}. In contrast, the existence of solution
(for all right side) means ColA = Rm .

Example 3.2.4. To find the null space of


 
1 4 7 10
A = 2 5 8 11 ,
3 6 9 12

we use the row operation in Example 1.2.1 and continue to get the reduced row
echelon form

        [ 1 4 7 10 ]     [ 1 0 −1 −2 ]
    A → [ 0 1 2  3 ]  →  [ 0 1  2  3 ] .
        [ 0 0 0  0 ]     [ 0 0  0  0 ]
As pointed out in Section 1.2.4, this is the same as the general solution of the
homogeneous system A~x = ~0. We express the solution in vector form
       
x1 x3 + 2x4 1 2
x2  −2x3 − 3x4  −2 −3
~x = 
 x3  = 
   = x3   + x4   = x3~v1 + x4~v2 .
x3  1 0
x4 x4 0 1

Since the free variables x3 , x4 can be arbitrary, this shows that NulA = R~v1 + R~v2 .
The following further shows that ~v1 , ~v2 are linearly independent

x3~v1 + x4~v2 = ~0 ⇐⇒ the solution ~x = ~0


=⇒ coordinates of the solution x3 = x4 = 0.

In general, the solution of a homogeneous system A~x = ~0 is ~x = c1~v1 + c2~v2 +


· · · + ck~vk , where c1 , c2 , . . . , ck are all the free variables. This implies NulA = R~v1 +
R~v2 + · · · + R~vk . Moreover, the argument in the example shows that the vectors are
always linearly independent. Therefore ~v1 , ~v2 , . . . , ~vk form a basis of NulA.
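
As a check (not part of the text), the same basis of the null space can be produced with
Python's sympy, whose nullspace method builds one vector per free variable exactly as
above; the dimension count derived in the next paragraph can be verified at the same time.

    import sympy as sp

    # The matrix of Example 3.2.4.
    A = sp.Matrix([[1, 4, 7, 10],
                   [2, 5, 8, 11],
                   [3, 6, 9, 12]])

    # One basis vector per free variable of the reduced row echelon form.
    basis = A.nullspace()
    print(basis)    # [Matrix([1, -2, 1, 0]), Matrix([2, -3, 0, 1])]

    # dim Nul A + rank A = number of columns of A.
    print(len(basis) + A.rank() == A.cols)    # True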
The nullity dim NulA of a matrix is the dimension of the null space. The calcu-
lation of the null space shows that the nullity is the number of non-pivot columns
(corresponding to free variables) of A. Since rankA = dim ColA is the number of
pivot columns (corresponding to non-free variables) of A, we conclude that

dim NulA + rankA = number of columns of A.

Translated into linear transformations, this means the following.

Theorem 3.2.5. If L : V → W is a linear transformation, then dim KerL+rankL =


dim V .

Example 3.2.5. For the linear transformation L(f ) = f 00 : P5 → P3 , we have

KerL = {f ∈ P5 : f 00 = 0} = {a + bt : a, b ∈ R}.

The monomials 1, t form a basis of the kernel, and dim KerL = 2. Since L is onto,
we have L(P5 ) = P3 and rankL = dim P3 = 4. Then
dim KerL + rankL = 2 + 4 = 6 = dim P5 .
This confirms Theorem 3.2.5.

Example 3.2.6. Consider the linear transformation


L(f ) = (1 + t2 )f 00 + (1 + t)f 0 − f : P3 → P3
in Examples 2.1.16 and 2.3.3. The row operations in Example 2.3.3 show that
rankL = 3. Therefore dim KerL = dim P3 − rankL = 1. Since we already know
L(1 + t) = 0, we conclude that KerL = R(1 + t).

Example 3.2.7. By Exercise 2.20, the left multiplication by an m × n matrix A is


a linear transformation
LA (X) = AX : Mn×k → Mm×k .
Let X = (~x1 ~x2 · · · ~xk ). Then AX = (A~x1 A~x2 · · · A~xk ). Therefore Y ∈ RanLA if
and only if all columns of Y lie in ColA, and X ∈ KerLA if and only if all columns
of X lie in NulA.
Let α = {~v1 , ~v2 , . . . , ~vr } be a basis of ColA (r = rankA). Then for the special
case k = 2, the following is a basis of RanLA
(~v1 ~0), (~0 ~v1 ), (~v2 ~0), (~0 ~v2 ), . . . , (~vr ~0), (~0 ~vr ).
Therefore dim RanLA = 2r = 2 dim ColA. Similarly, we have dim KerLA = 2(n − r) =
2 dim NulA.
In general, we have
dim RanLA = k dim ColA = k rankA,
dim KerLA = k dim NulA = k(n − rankA).

Example 3.2.8. In Example 3.2.3, we saw that the range of the linear transformation L(A) =
A + AT : Mn×n → Mn×n is exactly all symmetric matrices. The kernel of the linear
transformation consists of those A satisfying A + AT = O, or AT = −A. These are the
skew-symmetric matrices. See Exercises 3.34 and 3.35. We have

    rankL = dim{symmetric matrices} = 1 + 2 + · · · + n = (1/2)n(n + 1),

and

    dim{skew-symmetric matrices} = dim Mn×n − rankL = n² − (1/2)n(n + 1) = (1/2)n(n − 1).
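
The dimension count, and the splitting of a square matrix into symmetric and
skew-symmetric parts (Exercise 3.57 below), can be spot-checked numerically; the
following numpy sketch with a random 3 × 3 matrix is only an illustration.

    import numpy as np

    n = 3
    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, size=(n, n)).astype(float)

    # L(A) = A + A^T always lands in the symmetric matrices.
    S = A + A.T
    print(np.allclose(S, S.T))                        # True

    # Every matrix splits as symmetric + skew-symmetric, matching
    # dim M_{nxn} = n(n+1)/2 + n(n-1)/2.
    sym, skew = (A + A.T) / 2, (A - A.T) / 2
    print(np.allclose(A, sym + skew))                 # True
    print(n*(n + 1)//2 + n*(n - 1)//2 == n*n)         # True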

Exercise 3.41. Use Theorem 3.2.5 to show that L : V → W is one-to-one if and only if
rankL = dim V (the second statement of Proposition 3.2.4).

Exercise 3.42. An m × n matrix A induces four subspaces ColA, RowA, NulA, NulAT .

1. Which are subspaces of Rm ? Which are subspaces of Rn ?

2. Which basis can you calculate by using row operations on A?

3. Which basis can you calculate by using column operations on A?

4. What are the dimensions in terms of the rank of A?

Exercise 3.43. Find a basis of the kernel of the linear transformation given by the matrix
(some appeared in Exercise 1.28).
     
1 2 3 4 1 2 3 1 2 3 1
3 4 5 6  2 3 4 5. 2 3 1 2.
5 6 7 8 .
1.   3. 3
.
4 1 3 1 2 3
7 8 9 10 4 1 2
   
1 3 5 7   1 2 3
2 4 6 8  1 2 3 4 2 3 1
2. 
3 5 7 9 
. 6.  .
4. 2 3 4 1. 3 1 2
4 6 8 10 3 4 1 2 1 2 3

Exercise 3.44. Find the rank and nullity.

1. L(x1 , x2 , x3 , x4 ) = (x1 + x2 , x2 + x3 , x3 + x4 , x4 + x1 ).

2. L(x1 , x2 , x3 , x4 ) = (x1 + x2 + x3 , x2 + x3 + x4 , x3 + x4 + x1 , x4 + x1 + x2 ).

3. L(x1 , x2 , x3 , x4 ) = (x1 − x2 , x1 − x3 , x1 − x4 , x2 − x3 , x2 − x4 , x3 − x4 ).

4. L(x1 , x2 , . . . , xn ) = (xi − xj )1≤i<j≤n .

5. L(x1 , x2 , . . . , xn ) = (xi + xj )1≤i<j≤n .

Exercise 3.45. Find the rank and nullity.

1. L(f ) = f 00 + (1 + t2 )f 0 + tf : P3 → P4 .

2. L(f ) = f 00 + (1 + t2 )f 0 + tf : P3 → P5 .

3. L(f ) = f 00 + (1 + t2 )f 0 + tf : Pn → Pn+1 .

Exercise 3.46. Find the dimensions of the range and the kernel of right multiplication by
an m × n matrix A
RA (X) = XA : Mk×m → Mk×n .

3.2.4 General Solution of Linear Equation


Let L : V → W be a linear transformation, and ~b ∈ W . Then L(~x) = ~b has solution
(i.e., there is ~x0 ∈ V satisfying the equation) if and only if ~b ∈ RanL. Next, we try
to find the collection of all solutions (i.e., the preimage of ~b)
L−1 (~b) = {~x ∈ V : L(~x) = ~b}.
By
L(~x) = ~b ⇐⇒ L(~x − ~x0 ) = ~b − ~b = ~0 ⇐⇒ ~x − ~x0 ∈ KerL,
we conclude that
L−1 (~b) = ~x0 + KerL = {~x0 + ~v : L(~v ) = ~0}.
In terms of system of linear equations, this means that the solution of A~x = ~b (in
case ~b ∈ ColA) is of the form ~x0 + ~v , where ~x0 is one special solution, and ~v ∈ NulA
is any solution of the homogeneous system A~x = ~0.
Geometrically, the kernel is a subspace. The collection of all solutions is obtained
by shifting the subspace by one special solution ~x0 .
We note the range (and ~x0 ) manifests the existence, while the kernel manifests
the variations in the solution. In particular, the uniqueness of solution means no
variation, or the triviality of the kernel.

Example 3.2.9. The system of linear equations


x1 + 4x2 + 7x3 + 10x4 = 1,
2x1 + 5x2 + 8x3 + 11x4 = 1,
3x1 + 6x2 + 9x3 + 12x4 = 1,
has an obvious solution ~x0 = (1/3)(−1, 1, 0, 0). In Example 3.2.4, we found that ~v1 =
(1, −2, 1, 0) and ~v2 = (2, −3, 0, 1) form a basis of the kernel. Therefore the general
solution is (c1 , c2 are arbitrary)

    ~x = ~x0 + c1~v1 + c2~v2 = (1/3)(−1, 1, 0, 0) + c1 (1, −2, 1, 0) + c2 (2, −3, 0, 1)
       = (−1/3 + c1 + 2c2 , 1/3 − 2c1 − 3c2 , c1 , c2 ).
Geometrically, the solution set is the plane R(1, −2, 1, 0) + R(2, −3, 0, 1) shifted by
~x0 = (1/3)(−1, 1, 0, 0).
We may also use another obvious solution (1/3)(0, −1, 1, 0) and get an alternative
formula for the general solution

    ~x = (1/3)(0, −1, 1, 0) + c1 (1, −2, 1, 0) + c2 (2, −3, 0, 1)
       = (c1 + 2c2 , −1/3 − 2c1 − 3c2 , 1/3 + c1 , c2 ).
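
The structure “one special solution plus the kernel” can be verified symbolically; the
sketch below with Python's sympy (an illustration only) checks that ~x0 + c1~v1 + c2~v2
solves the system for every choice of c1 , c2 .

    import sympy as sp

    A = sp.Matrix([[1, 4, 7, 10],
                   [2, 5, 8, 11],
                   [3, 6, 9, 12]])
    b = sp.Matrix([1, 1, 1])

    # Special solution and the kernel basis found in Example 3.2.4.
    x0 = sp.Matrix([-1, 1, 0, 0]) / 3
    v1 = sp.Matrix([1, -2, 1, 0])
    v2 = sp.Matrix([2, -3, 0, 1])

    c1, c2 = sp.symbols('c1 c2')
    x = x0 + c1*v1 + c2*v2
    print(sp.simplify(A*x - b))    # the zero vector, for all c1, c2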

Example 3.2.10. For the linear transformation in Examples 2.1.16 and 2.3.3
L(f ) = (1 + t2 )f 00 + (1 + t)f 0 − f : P3 → P3 ,
we know from Example 3.2.6 that the kernel is R(1 + t). Moreover, by L(1), L(t2 ), L(t3 )
in Example 2.3.3, we also know L(2 − t2 + t3 ) = 4t + 8t3 . Therefore (1/4)(2 − t2 + t3 )
is a special solution of the differential equation in Example 2.1.16. Then we get the
general solution in Example 2.3.3

    f = (1/4)(2 − t2 + t3 ) + c(1 + t).

Example 3.2.11. The general solution of the linear differential equation f 0 = sin t
is
f = − cos t + C.
Here f0 = − cos t is one special solution, and the arbitrary constants C form the
kernel of the derivative linear transform
Ker(f 7→ f 0 ) = {C : C ∈ R} = R1.
Similarly, f 00 = sin t has a special solution f0 = − sin t. Moreover, we have (see
Example 3.2.5)
Ker(f 7→ f 00 ) = {C + Dt : C, D ∈ R} = R1 + Rt.
Therefore the general solution of f 00 = sin t is f = − sin t + C + Dt.

Example 3.2.12. The left side of a linear differential equation of order n (see Ex-
ample 2.1.15)
    L(f ) = dn f /dtn + a1 (t) dn−1 f /dtn−1 + a2 (t) dn−2 f /dtn−2 + · · · + an−1 (t) df /dt + an (t)f = b(t)
is a linear transformation C ∞ → C ∞ . A fundamental theorem in the theory of
differential equations says that dim KerL = n. Therefore to solve the differential
equation, we need to find one special function f0 satisfying L(f0 ) = b(t) and n
linearly independent functions f1 , f2 , . . . , fn satisfying L(fi ) = 0. Then the general
solution is
f = f 0 + c1 f 1 + c2 f 2 + · · · + cn f n .
Take the second order differential equation f 00 + f = et as an example. We try the
special solution f0 = aet and find (aet )00 + aet = 2aet = et , implying a = 1/2. Therefore
f0 = (1/2)et is a solution. Moreover, we know that both f1 = cos t and f2 = sin t satisfy
the homogeneous equation f 00 + f = 0. By Example 1.3.11, f1 and f2 are linearly
independent. This leads to the general solution of the differential equation

    f = (1/2)et + c1 cos t + c2 sin t,   c1 , c2 ∈ R.
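
The same answer can be obtained from a computer algebra system; the following sketch
(not part of the text) uses Python's sympy, whose dsolve returns the two-parameter
family found above.

    import sympy as sp

    t = sp.symbols('t')
    f = sp.Function('f')

    # Solve f'' + f = e^t: a particular solution plus the two-dimensional
    # kernel spanned by cos t and sin t.
    sol = sp.dsolve(sp.Eq(f(t).diff(t, 2) + f(t), sp.exp(t)), f(t))
    print(sol)    # f(t) = C1*sin(t) + C2*cos(t) + exp(t)/2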

Exercise 3.47. Find a basis of the kernel of L(f ) = f 00 + 3f 0 − 4f by trying functions of the
form f (t) = e^{at} . Then find the general solution.

1. f 00 + 3f 0 − 4f = 1 + t. 3. f 00 + 3f 0 − 4f = cos t + 2 sin t.

2. f 00 + 3f 0 − 4f = et . 4. f 00 + 3f 0 − 4f = 1 + t + et .

3.3 Sum of Subspace


The sum of subspaces generalises the span. The direct sum of subspaces generalises
the linear independence. Subspace, sum, and direct sum are the deeper linear algebra
concepts that replace vector, span, and linear independence.

3.3.1 Sum and Direct Sum


Definition 3.3.1. The sum of subspaces H1 , H2 , . . . , Hk ⊂ V is
H1 + H2 + · · · + Hk = {~h1 + ~h2 + · · · + ~hk : ~hi ∈ Hi }.
The sum is direct if
    ~h1 + ~h2 + · · · + ~hk = ~h1' + ~h2' + · · · + ~hk' ,   ~hi , ~hi' ∈ Hi =⇒ ~hi = ~hi' .

We indicate the direct sum by writing H1 ⊕ H2 ⊕ · · · ⊕ Hk .

If Hi = R~vi , then the sum is the span of α = {~v1 , ~v2 , . . . , ~vk }. If every ~vi ≠ ~0, then the
sum being direct means that α is linearly independent.

Example 3.3.1. Let Pneven be all the even polynomials in Pn and Pnodd be all the odd
polynomials in Pn . Then Pn = Pneven ⊕ Pnodd .

Example 3.3.2. For k = 1, we have the sum H1 of a single subspace H1 . The
single sum is always direct.
For H1 + H2 to be direct, we require

    ~h1 + ~h2 = ~h1' + ~h2' =⇒ ~h1 = ~h1' , ~h2 = ~h2' .

Let ~v1 = ~h1 − ~h1' ∈ H1 and ~v2 = ~h2 − ~h2' ∈ H2 . Then the condition becomes
~v1 + ~v2 = ~0 =⇒ ~v1 = ~v2 = ~0.
The equality on the left means ~v1 = −~v2 , which is a vector in H1 ∩ H2 . Therefore
the condition above means exactly H1 ∩ H2 = {~0}. This is the criterion for the sum
H1 + H2 to be direct.

Example 3.3.3 (Abstract Direct Sum). Let V and W be vector spaces. Construct a
vector space V ⊕ W to be the set V × W = {(~v , ~w) : ~v ∈ V, ~w ∈ W }, together with
addition and scalar multiplication

    (~v1 , ~w1 ) + (~v2 , ~w2 ) = (~v1 + ~v2 , ~w1 + ~w2 ),   a(~v , ~w) = (a~v , a~w).

It is easy to verify that V ⊕ W is a vector space. Moreover, V and W are isomorphic


to the following subspaces of V ⊕ W

    V ≅ V ⊕ ~0 = {(~v , ~0W ) : ~v ∈ V },   W ≅ ~0 ⊕ W = {(~0V , ~w) : ~w ∈ W }.

Since any vector in V ⊕ W can be uniquely expressed as (~v , ~w) = (~v , ~0W ) + (~0V , ~w),
we find that V ⊕ W is the direct sum of two subspaces

    V ⊕ W = {(~v , ~0W ) : ~v ∈ V } ⊕ {(~0V , ~w) : ~w ∈ W }.

This is the reason why V ⊕ W is called the abstract direct sum.


The construction allows us to write Rm ⊕ Rn = Rm+n . Strictly speaking, Rm
and Rn are not subspaces of Rm+n . The equality means
1. Rm is isomorphic to the subspace of vectors in Rm+n with the last n coordinates
vanishing.

2. Rn is isomorphic to the subspace of vectors in Rm+n with the first m coordi-


nates vanishing.

3. Rm+n is the direct sum of these two subspaces.

Exercise 3.48. Prove that

H1 + H2 = H2 + H1 , (H1 + H2 ) + H3 = H1 + H2 + H3 = H1 + (H2 + H3 ).

Exercise 3.49. Prove that an intersection H1 ∩ H2 ∩ · · · ∩ Hk of subspaces is a subspace.

Exercise 3.50. Prove that H1 + H2 + · · · + Hk is the smallest subspace containing all Hi .

Exercise 3.51. Prove that Spanα + Spanβ = Span(α ∪ β).

Exercise 3.52. Prove that Span(α ∩ β) ⊂ (Spanα) ∩ (Spanβ). Show that the two sides may
or may not equal.

Exercise 3.53. We may regard a subspace H as a sum of single subspace. Explain that the
single sum is always direct.

Exercise 3.54. If a sum is direct, prove that the sum of a selection of subspaces is also
direct.

Exercise 3.55. Prove that a sum H1 +H2 +· · ·+Hk is direct if and only if the sum expression
for ~0 is unique
~h1 + ~h2 + · · · + ~hk = ~0, ~hi ∈ Hi =⇒ ~h1 = ~h2 = · · · = ~hk = ~0.

This generalises Proposition 1.3.7.



Exercise 3.56. Prove that a sum H1 + H2 + · · · + Hk is direct if and only if

~h1 + ~h2 + · · · + ~hk−1 ∈ Hk , ~hi ∈ Hi =⇒ ~h1 = ~h2 = · · · = ~hk−1 = ~0.

Explain that this generalises Proposition 1.3.8.

Exercise 3.57. Show that Mn×n is the direct sum of the subspace of symmetric matrices
(see Example 3.2.3) and the subspace of skew-symmetric matrices (see Exercise 3.34). In
other words, any square matrix is the sum of a unique symmetric matrix and a unique
skew-symmetric matrix.

Exercise 3.58. Let H, H 0 be subspaces of V . We have the sum H + H 0 ⊂ V and we


also have the abstract direct sum H ⊕ H 0 from Example 3.3.3. Prove that L(~h, ~h0 ) =
~h + ~h0 : H ⊕ H 0 → H + H 0 is an onto linear transformation, and KerL is isomorphic to
H ∩ H 0.

Exercise 3.59. Use Exercise 3.58 to prove dim(H + H 0 ) = dim H + dim H 0 − dim(H ∩ H 0 ).

In general, a sum of sums of subspaces is a sum. For example, we have

(H1 + H2 ) + H3 + (H4 + H5 ) = H1 + H2 + H3 + H4 + H5 .

We will show that the sum on the right is direct if and only if H1 + H2 , H3 , H4 + H5 ,
(H1 + H2 ) + H3 + (H4 + H5 ) are direct sums.
To state the general result, we consider n sums

    Hi = +j Hij = +j=1,...,ki Hij = Hi1 + Hi2 + · · · + Hiki ,   i = 1, 2, . . . , n.

Then we consider the sum

    H = +i (+j Hij ) = +i=1,...,n Hi = H1 + H2 + · · · + Hn ,

and consider the further splitting of the sum

H = +ij Hij = H11 + H12 + · · · + H1k1 + H21 + H22 + · · · + H2k2


+ · · · · · · + Hn1 + Hn2 + · · · + Hnkn .

Proposition 3.3.2. The sum +ij Hij is direct if and only if the sum +i (+j Hij ) is
direct and the sum +j Hij is direct for each i.

Proof. Suppose H = +ij Hij is a direct sum. To prove that H = +i Hi = +i (+j Hij )
is direct, we consider a vector ~h = Σi ~hi = ~h1 + ~h2 + · · · + ~hn , ~hi ∈ Hi , in the sum.
By Hi = +j Hij , we have ~hi = Σj ~hij = ~hi1 + ~hi2 + · · · + ~hiki , ~hij ∈ Hij . Then
~h = Σij ~hij . Since H = +ij Hij is direct, we find that ~hij are uniquely determined
by ~h. This implies that ~hi are also uniquely determined by ~h. This proves that
H = +i Hi is direct.
Next we further prove that Hi = +j Hij is also direct. We consider a vector
~h = Σj ~hij = ~hi1 + ~hi2 + · · · + ~hiki , ~hij ∈ Hij , in the sum. By taking ~hi'j = ~0 for
all i' ≠ i, we form the double sum ~h = Σij ~hij . Since H = +ij Hij is direct, all ~hpj ,
p = i or p = i' , are uniquely determined by ~h. In particular, ~hi1 , ~hi2 , . . . , ~hiki are
uniquely determined by ~h. This proves that Hi = +j Hij is direct.
Conversely, suppose the sum +i (+j Hij ) is direct and the sum +j Hij is direct
for each i. To prove that H = +ij Hij is direct, we consider a vector ~h = Σij ~hij ,
~hij ∈ Hij , in the sum. We have ~h = Σi ~hi for ~hi = Σj ~hij ∈ +j Hij . Since
H = +i (+j Hij ) is direct, we find that ~hi are uniquely determined by ~h. Since
Hi = +j Hij is direct, we also find that ~hij are uniquely determined by ~hi . Therefore
all ~hij are uniquely determined by ~h. This proves that +ij Hij is direct.

For the special case Hij = R~vij is spanned by a single non-zero vector, the
equality +i (+j Hij ) = +ij Hij means that, if αi = {~vi1 , ~vi2 , . . . , ~viki } spans Hi for
each i, then the union α = α1 ∪ α2 ∪ · · · ∪ αk = {~vij : all i, j} spans the sum
H = H1 + H2 + · · · + Hk . If αi is a basis of Hi , then by Propositions 1.3.13, 3.3.2
and Theorem 1.3.14, we get the following.

Proposition 3.3.3. If H1 , H2 , . . . , Hk are finite dimensional subspaces, then

dim(H1 + H2 + · · · + Hk ) ≤ dim H1 + dim H2 + · · · + dim Hk .

Moreover, the sum is direct if and only if the equality holds.

Exercise 3.60. Suppose αi are linearly independent. Prove that the sum Spanα1 +Spanα2 +
· · · + Spanαk is direct if and only if α1 ∪ α2 ∪ · · · ∪ αk is linearly independent.

Exercise 3.61. Suppose αi is a basis of Hi . Prove that α1 ∪ α2 ∪ · · · ∪ αk is a basis of


H1 + H2 + · · · + Hk if and only if the sum H1 + H2 + · · · + Hk is direct.

3.3.2 Projection
A direct sum V = H ⊕ H 0 induces a map by picking the first term in the unique
expression
P (~v ) = ~h, if ~v = ~h + ~h0 , ~h ∈ H, ~h0 ∈ H 0 .
The direct sum implies that P is a well defined linear transformation satisfying
P 2 = P . See Exercise 3.62.

Definition 3.3.4. A linear operator P : V → V is a projection if P 2 = P .



Conversely, given any projection P , we have ~v = P (~v ) + (I − P )(~v ). By P (I −


P )(~v ) = (P − P 2 )(~v ) = ~0, we have P (~v ) ∈ RanP and (I − P )(~v ) ∈ KerP . Therefore
V = RanP + KerP . On the other hand, if ~v = ~h + ~h0 with ~h ∈ RanP and ~h0 ∈ KerP ,
then ~h = P (~w) for some ~w ∈ V , and

    P (~v ) = P (~h) + P (~h')
           = P (~h)        (~h' ∈ KerP )
           = P 2 (~w)      (~h = P (~w))
           = P (~w)        (P 2 = P )
           = ~h.

This shows that ~h is unique. Therefore the decomposition ~v = ~h + ~h0 is also unique,
and we have a direct sum
V = RanP ⊕ KerP.
We conclude that there is a one-to-one correspondence between projections of V
and decompositions of V into direct sums of two subspaces.

Example 3.3.4. With respect to the direct sum Pn = Pneven ⊕ Pnodd in Example 3.3.1,
the projection to even polynomials is given by f (t) 7→ (1/2)(f (t) + f (−t)).

Example 3.3.5. In C ∞ , we consider subspaces

H = R1 = {constant functions}, H 0 = {f : f (0) = 0}.

Since any function f (t) = f (0) + (f (t) − f (0)) with f (0) ∈ H and f (t) − f (0) ∈ H 0 ,
we have C ∞ = H + H 0 . Since H ∩ H 0 consists of zero function only, we have direct
sum C ∞ = H ⊕H 0 . Moreover, the projection to H is f (t) 7→ f (0) and the projection
to H 0 is f (t) 7→ f (t) − f (0).

Exercise 3.62. Given a direct sum V = H ⊕ H 0 , verify that P (~v ) = ~h is well defined, is a
linear operator, and satisfies P 2 = P .

Exercise 3.63. Directly verify that the matrix A of the orthogonal projection in Example
2.1.13 satisfies A2 = A.

Exercise 3.64. For the orthogonal projection P in Example 2.1.13, explain that I − P is
also a projection. What is the subspace corresponding to I − P ?

Exercise 3.65. If P is a projection, prove that Q = I − P is also a projection satisfying

P + Q = I, P Q = QP = O.

Moreover, P and Q induce the same direct sum decomposition

V = H ⊕ H 0, H = RanP = KerQ, H 0 = RanQ = KerP,

with the only exception that the order of H and H 0 are switched.

Exercise 3.66. Find the formula for the projections given by the direct sum in Exercise
3.57
Mn×n = {symmetric matrix} ⊕ {skew-symmetric matrix}.

Exercise 3.67. Consider subspaces in C ∞

H = R1 ⊕ Rt = {polynomials of degree ≤ 1}, H 0 = {f : f (0) = f (1) = 0}.

Show that we have a direct sum C ∞ = H ⊕ H 0 , and find the corresponding projections.
Moreover, generalise to higher order polynomials and evaluation at more points (and not
necessarily including 0 or 1).

Suppose we have a direct sum

V = H1 ⊕ H2 ⊕ · · · ⊕ Hk .

Then the direct sum V = Hi ⊕ (⊕j≠i Hj ) corresponds to a projection Pi : V → Hi ⊂


V . It is easy to see that the unique decomposition

~v = ~h1 + ~h2 + · · · + ~hk , ~hi ∈ Hi ,

gives (and is given by) ~hi = Pi (~v ). The interpretation immediately implies

    P1 + P2 + · · · + Pk = I,   Pi Pj = O for i ≠ j.

Conversely, given linear operators Pi satisfying the above, we get Pi = Pi I = Pi P1 +


Pi P2 + · · · + Pi Pk = Pi2 . Therefore Pi is a projection. Moreover, if

    ~v = ~h1 + ~h2 + · · · + ~hk ,   ~hi = Pi (~wi ) ∈ Hi = RanPi ,

then

    Pi (~v ) = Pi P1 (~w1 ) + Pi P2 (~w2 ) + · · · + Pi Pk (~wk ) = Pi2 (~wi ) = Pi (~wi ) = ~hi .

This implies the uniqueness of ~hi , and we get a direct sum.

Proposition 3.3.5. There is a one-to-one correspondence between direct sum de-


compositions of a vector space V and collections of projections Pi satisfying

    P1 + P2 + · · · + Pk = I,   Pi Pj = O for i ≠ j.

Example 3.3.6. The basis in Examples 1.3.17 and 2.1.13


~v1 = (1, −1, 0), ~v2 = (1, 0, −1), ~v3 = (1, 1, 1),
gives a direct sum R3 = R~v1 ⊕ R~v2 ⊕ R~v3 . Then we have three projections P1 , P2 , P3
corresponding to three 1-dimensional subspaces
~b = x1~v1 + x2~v2 + x3~v3 =⇒ P1 (~b) = x1~v1 , P2 (~b) = x2~v2 , P3 (~b) = x3~v3 .

The calculation of the projections becomes the calculation of the formulae of x1 , x2 , x3


in terms of ~b. In other words, we need to solve the equation A~x = ~b for the matrix
A in Example 2.2.18. The solution is
    
    [ x1 ]              [ 1 −2  1 ] [ b1 ]
    [ x2 ] = A−1~b = 1/3 [ 1  1 −2 ] [ b2 ] .
    [ x3 ]              [ 1  1  1 ] [ b3 ]

Therefore

    P1 (~b) = (1/3)(b1 − 2b2 + b3 )(1, −1, 0),
    P2 (~b) = (1/3)(b1 + b2 − 2b3 )(1, 0, −1),
    P3 (~b) = (1/3)(b1 + b2 + b3 )(1, 1, 1).

The matrices of the projections are

    [P1 ] = 1/3 [  1 −2  1 ]    [P2 ] = 1/3 [  1  1 −2 ]    [P3 ] = 1/3 [ 1 1 1 ]
                [ −1  2 −1 ],               [  0  0  0 ],               [ 1 1 1 ] .
                [  0  0  0 ]                [ −1 −1  2 ]                [ 1 1 1 ]

We also note that the projection to the subspace R~v1 ⊕ R~v2 in the direct sum
R3 = R~v1 ⊕ R~v2 ⊕ R~v3 is P1 + P2 (see Exercise 3.68)

    ~b = (x1~v1 + x2~v2 ) + x3~v3 7→ x1~v1 + x2~v2 = P1 (~b) + P2 (~b).

The matrix of the projection is [P1 ] + [P2 ], which can also be calculated as follows

    [P1 ] + [P2 ] = I − [P3 ] = 1/3 [  2 −1 −1 ]
                                    [ −1  2 −1 ] .
                                    [ −1 −1  2 ]
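
The identities of Proposition 3.3.5, together with Pi2 = Pi , can be checked directly
for these matrices; the numpy sketch below is only an illustration.

    import numpy as np

    P1 = np.array([[1, -2, 1], [-1, 2, -1], [0, 0, 0]]) / 3
    P2 = np.array([[1, 1, -2], [0, 0, 0], [-1, -1, 2]]) / 3
    P3 = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) / 3

    # Each Pi is a projection, distinct projections multiply to zero,
    # and the three projections add up to the identity.
    for P in (P1, P2, P3):
        assert np.allclose(P @ P, P)
    for P, Q in ((P1, P2), (P2, P3), (P1, P3)):
        assert np.allclose(P @ Q, 0) and np.allclose(Q @ P, 0)
    assert np.allclose(P1 + P2 + P3, np.eye(3))
    print("all projection identities hold")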

Exercise 3.68. Suppose the direct sum V = H1 ⊕ H2 ⊕ H3 corresponds to the projections


P1 , P2 , P3 . Prove that the projection to H1 ⊕ H2 in the direct sum V = (H1 ⊕ H2 ) ⊕ H3
is P1 + P2 .

Exercise 3.69. For the direct sum given by the basis in Example 2.2.17
R3 = R(1, 2, 3) ⊕ R(4, 5, 6) ⊕ R(7, 8, 10),
find the projections to the three lines. Then find the projection to the plane R(1, 2, 3) ⊕
R(4, 5, 6).

Exercise 3.70. For the direct sum given by modifying the basis in Example 2.1.13
R3 = R(1, −1, 0) ⊕ R(1, 0, −1) ⊕ R(1, 0, 0),
find the projection to the plane R(1, −1, 0) ⊕ R(1, 0, −1).

Exercise 3.71. In Example 3.1.4, a set of linearly independent vectors ~v1 , ~v2 , ~v3 is extended
to a basis by adding ~e4 = (0, 0, 0, 1). Find the projections related to the direct sum
R4 = Span{~v1 , ~v2 , ~v3 } ⊕ R~e4 .

Exercise 3.72. We know the rank of


 
1 4 7 10
A = 2 5 8 11
3 6 9 12
is 2, and A~e1 , A~e2 are linearly independent. Show that we have a direct sum R4 =
NulA ⊕ Span{~e1 , ~e2 }. Moreover, find the projections corresponding to the direct sum.

Exercise 3.73. The basis t(t − 1), t(t − 2), (t − 1)(t − 2) in Example 1.3.12 gives a direct
sum P2 = Span{t(t − 1), t(t − 2)} ⊕ R(t − 1)(t − 2). Find the corresponding projections.

3.3.3 Blocks of Linear Transformation


Suppose L : V → W is a linear transformation. Suppose α = {~v1 , ~v2 , ~v3 } and β =
{~w1 , ~w2 } are bases of V and W . Then the matrix of

    L : V = R~v1 ⊕ R~v2 ⊕ R~v3 → W = R~w1 ⊕ R~w2

means

    [L]βα = [ a11 a12 a13 ]       L(~v1 ) = a11 ~w1 + a21 ~w2 ,
            [ a21 a22 a23 ],      L(~v2 ) = a12 ~w1 + a22 ~w2 ,
                                  L(~v3 ) = a13 ~w1 + a23 ~w2 .
Let P1 : W → W be the projection to R~w1 . Then P1 L|R~v2 (x~v2 ) = a12 x~w1 . This
means that a12 is the 1 × 1 matrix of the linear transformation L12 = P1 L|R~v2 : R~v2 →
R~w1 . Similarly, aij is the 1 × 1 matrix of the linear transformation Lij : R~vj → R~wi
obtained by restricting L to the direct sum components, and we may write

    L = [ L11 L12 L13 ]
        [ L21 L22 L23 ] .
In general, a linear transformation L : V1 ⊕ V2 ⊕ · · · ⊕ Vn → W1 ⊕ W2 ⊕ · · · ⊕ Wm
has the block matrix
 
    L = [ L11 L12 . . . L1n ]
        [ L21 L22 . . . L2n ]
        [  ·   ·   ·     ·  ]
        [ Lm1 Lm2 . . . Lmn ] ,     Lij = Pi L|Vj : Vj → Wi ⊂ W.

Similar to the vertical expression of vectors in Euclidean spaces, we should write


    
    [ ~w1 ]   [ L11 L12 . . . L1n ] [ ~v1 ]
    [ ~w2 ] = [ L21 L22 . . . L2n ] [ ~v2 ]
    [  ·  ]   [  ·   ·   ·     ·  ] [  ·  ]
    [ ~wm ]   [ Lm1 Lm2 . . . Lmn ] [ ~vn ] ,

which means

    ~wi = Li1 (~v1 ) + Li2 (~v2 ) + · · · + Lin (~vn ),

and

    L(~v1 + ~v2 + · · · + ~vn ) = ~w1 + ~w2 + · · · + ~wm .

Example 3.3.7. The linear transformation L : R4 → R3 given by the matrix


 
1 4 7 10
[L] = 2 5 8 11
3 6 9 12

can be decomposed as

    L = [ L11 L12 ] : R1 ⊕ R3 → R2 ⊕ R1 ,
        [ L21 L22 ]

with matrices

    [L11 ] = [ 1 ],   [L12 ] = [ 4 7 10 ],   [L21 ] = (3),   [L22 ] = (6 9 12).
             [ 2 ]             [ 5 8 11 ]

Example 3.3.8. Suppose a projection P : V → V corresponds to a direct sum V =


H ⊕ H 0 . Then

    P = [ I O ]
        [ O O ]
with respect to the direct sum.

Example 3.3.9. The direct sum L = L1 ⊕ L2 ⊕ · · · ⊕ Ln of linear transformations


Li : Vi → Wi is the diagonal block matrix
 
    L = [ L1 O . . . O  ]
        [ O L2 . . . O  ]
        [ ·  ·   ·    · ]
        [ O  O . . . Ln ] : V1 ⊕ V2 ⊕ · · · ⊕ Vn → W1 ⊕ W2 ⊕ · · · ⊕ Wn ,

given by

L(~v1 + ~v2 + · · · + ~vn ) = L1 (~v1 ) + L2 (~v2 ) + · · · + Ln (~vn ), ~vi ∈ Vi , Li (~vi ) ∈ Wi .



For example, the identity on V1 ⊕ · · · ⊕ Vn is the direct sum of identities


 
    I = [ IV1  O  . . .  O  ]
        [  O  IV2 . . .  O  ]
        [  ·   ·          ·  ]
        [  O   O  . . . IVn ] .

Exercise 3.74. What is the block matrix for switching the factors in a direct sum V ⊕ W →
W ⊕V?

The operations of block matrices are similar to those of the usual matrices, as long as the
direct sums match. For example, for linear transformations L, K : V1 ⊕ V2 ⊕ V3 → W1 ⊕ W2 ,
we have

    [ L11 L12 L13 ]   [ K11 K12 K13 ]   [ L11 + K11  L12 + K12  L13 + K13 ]
    [ L21 L22 L23 ] + [ K21 K22 K23 ] = [ L21 + K21  L22 + K22  L23 + K23 ] ,

      [ L11 L12 L13 ]   [ aL11 aL12 aL13 ]
    a [ L21 L22 L23 ] = [ aL21 aL22 aL23 ] .

For the composition of linear transformations U1 ⊕ U2 −K→ V1 ⊕ V2 −L→ W1 ⊕ W2 ⊕ W3 ,
we have

    [ L11 L12 ]                  [ L11 K11 + L12 K21   L11 K12 + L12 K22 ]
    [ L21 L22 ] [ K11 K12 ]  =   [ L21 K11 + L22 K21   L21 K12 + L22 K22 ] .
    [ L31 L32 ] [ K21 K22 ]      [ L31 K11 + L32 K21   L31 K12 + L32 K22 ]
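
The blockwise multiplication rule can be tested numerically; the numpy sketch below
(an illustration with randomly chosen 2 × 2 blocks) compares the block formula with the
ordinary product of the assembled matrices.

    import numpy as np

    rng = np.random.default_rng(1)
    # K has 2 x 2 blocks K[i][j], L has 3 x 2 blocks L[i][j].
    K = [[rng.standard_normal((2, 2)) for _ in range(2)] for _ in range(2)]
    L = [[rng.standard_normal((2, 2)) for _ in range(2)] for _ in range(3)]

    # Blockwise product, following the composition formula above.
    blockwise = np.block([[sum(L[i][k] @ K[k][j] for k in range(2))
                           for j in range(2)] for i in range(3)])

    # Same result as multiplying the assembled matrices.
    print(np.allclose(blockwise, np.block(L) @ np.block(K)))    # True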

Example 3.3.10. We have


    
    [ I L ] [ I K ]   [ I  L + K ]
    [ O I ] [ O I ] = [ O    I   ] .

In particular, this implies

    [ I L ]−1   [ I −L ]
    [ O I ]   = [ O  I ] .
     
Exercise 3.75. For invertible L and K, find the inverses of

    [ L M ]    [ L O ]    [ O L ]
    [ O K ],   [ M K ],   [ K M ] .

Exercise 3.76. Find the n-th power of

    J = [ λI  L   O  . . .  O  ]
        [  O  λI  L  . . .  O  ]
        [  O  O   λI . . .  O  ]
        [  ·  ·   ·         ·  ]
        [  O  O   O  . . .  L  ]
        [  O  O   O  . . .  λI ] .

Exercise 3.77. Use block matrix to explain that

Hom(V1 ⊕ V2 ⊕ · · · ⊕ Vn , W ) = Hom(V1 , W ) ⊕ Hom(V2 , W ) ⊕ · · · ⊕ Hom(Vn , W ),


Hom(V, W1 ⊕ W2 ⊕ · · · ⊕ Wn ) = Hom(V, W1 ) ⊕ Hom(V, W2 ) ⊕ · · · ⊕ Hom(V, Wn ).

3.4 Quotient Space


For a subspace H of V , the quotient space V /H measures the “difference” between
H and V . This is achieved by ignoring the differences in H. When the difference
between H and V is realised as a subspace of V , the subspace is the direct summand
of H in V .

3.4.1 Construction of the Quotient


Given a subspace H ⊂ V , we regard two vectors in V to be equivalent if they differ
by a vector in H
    ~v ∼ ~w ⇐⇒ ~v − ~w ∈ H.
The equivalence relation has the following three properties.

1. Reflexivity: ~v ∼ ~v .

2. Symmetry: ~v ∼ ~w =⇒ ~w ∼ ~v .

3. Transitivity: ~u ∼ ~v and ~v ∼ ~w =⇒ ~u ∼ ~w.

The reflexivity follows from ~v − ~v = ~0 ∈ H. The symmetry follows from ~w − ~v =
−(~v − ~w) ∈ H. The transitivity follows from ~u − ~w = (~u − ~v ) + (~v − ~w) ∈ H.
The equivalence class of a vector ~v is all the vectors equivalent to ~v

v̄ = {~u : ~u − ~v ∈ H} = {~v + ~h : ~h ∈ H} = ~v + H.

For example, Section 3.2.4 shows that all the solutions of a linear equation L(~x) = ~b
form an equivalence class with respect to H = KerL.

Definition 3.4.1. Let H be a subspace of V . The quotient space is the collection of


all equivalence classes
V̄ = V /H = {~v + H : ~v ∈ V },
together with the addition and scalar multiplication

(~u + H) + (~v + H) = (~u + ~v ) + H, a(~u + H) = a~u + H.

Moreover, we have the quotient map

π(~v ) = v̄ = ~v + H : V → V̄ .

The operations in the quotient space can also be written as ū + v̄ = (u + v)¯ and
aū = (au)¯, where the bar denotes the equivalence class.
The following shows that the addition is well defined

~u ∼ ~u 0 , ~v ∼ ~v 0 ⇐⇒ ~u − ~u 0 , ~v − ~v 0 ∈ H
=⇒ (~u + ~v ) − (~u 0 + ~v 0 ) = (~u − ~u 0 ) + (~v − ~v 0 ) ∈ H
⇐⇒ ~u + ~v ∼ ~u 0 + ~v 0 .

We can similarly show that the scalar multiplication is also well defined.
We still need to verify the axioms for vector spaces. The commutativity and
associativity of the addition in V̄ follow from the commutativity and associativity
of the addition in V . The zero vector 0̄ = ~0 + H = H. The negative vector
−(~v + H) = −~v + H. The axioms for the scalar multiplications can be similarly
verified.

Proposition 3.4.2. The quotient map π : V → V̄ is an onto linear transformation


with kernel H.

Proof. The onto property of π is tautology. The linearity of π follows from the
definition of the vector space operations in V̄ . In fact, we can say that the operations
in V̄ are defined for the purpose of making π a linear transformation. Moreover, the
kernel of π consists of ~v satisfying ~v ∼ ~0, which means ~v = ~v − ~0 ∈ H.
Proposition 3.4.2 and Theorem 3.2.5 imply

dim V /H = rankπ = dim V − dim Kerπ = dim V − dim H.

Example 3.4.1. Let V = R2 and H = R~e1 = R × 0 = {(x, 0) : x ∈ R}. Then

    (a, b) + H = {(a + x, b) : x ∈ R} = {(x, b) : x ∈ R}

are all the horizontal lines. See Figure 3.4.1. These horizontal lines are in one-to-one
correspondence with the y-coordinate

(a, b) + H ∈ R2 /H ←→ b ∈ R.

This identifies the quotient space R2 /H with R. The identification is a linear trans-
formation because it simply picks the second coordinate. Therefore we have an
isomorphism R2 /H ∼ = R of vector spaces.

Exercise 3.78. For subsets X, Y of a vector space V , define

X + Y = {~u + ~v : ~u ∈ X, ~v ∈ Y }, aX = {a~u : ~u ∈ X}.

Verify the following properties similar to some axioms of vector space.



Figure 3.4.1: Quotient space R2 /R × 0.

1. X + Y = Y + X.

2. (X + Y ) + Z = X + (Y + Z).

3. {~0} + X = X = X + {~0}.

4. 1X = X.

5. (ab)X = a(bX).

6. a(X + Y ) = aX + aY .

Exercise 3.79. Prove that a subset H of a vector space is a subspace if and only if H + H = H
and aH = H for a ≠ 0.

Exercise 3.80. A subset A of a vector space is an affine subspace if aA + (1 − a)A = A for


any a ∈ R.

1. Prove that sum of two affine subspaces is an affine subspace.

2. Prove that a finite subset is an affine subspace if and only if it is a single vector.

3. Prove that an affine subspace A is a vector subspace if and only if ~0 ∈ A.

4. Prove that A is an affine subspace if and only if A = ~v + H for a vector ~v and a


subspace H.

Exercise 3.81. An equivalence relation on a set X is a collection of ordered pairs x ∼ y


(regarded as elements in X ×X) satisfying the reflexivity, symmetry, and transitivity. The
equivalence class of x ∈ X is

x̄ = {y ∈ X : y ∼ x} ⊂ X.

Prove the following.

1. For any x, y ∈ X, either x̄ = ȳ or x̄ ∩ ȳ = ∅.

2. X = ∪x∈X x̄.

If we choose one element from each equivalence class, and let I be the set of all such
elements, then the two properties imply X = ⊔x∈I x̄ is a decomposition of X into a
disjoint union of non-empty subsets.

Exercise 3.82. Suppose X = ⊔i∈I Xi is a partition (i.e., disjoint union of non-empty sub-
sets). Define x ∼ y ⇐⇒ x and y are in the same subset Xi . Prove that x ∼ y is an
equivalence relation, and the equivalence classes are exactly Xi .

Exercise 3.83. Let f : X → Y be a map. Define x ∼ x0 ⇐⇒ f (x) = f (x0 ). Prove that


x ∼ x0 is an equivalence relation, and the equivalence classes are exactly the preimages
f −1 (y) = {x ∈ X : f (x) = y} for y ∈ f (X) (otherwise the preimage is empty).

3.4.2 Universal Property


The quotient map π : V → V̄ is a linear transformation constructed for the purpose
of ignoring (or vanishing on) H. The map is universal because it can be used to
construct all linear transformations on V that vanish on H.

Theorem 3.4.3. Suppose H is a subspace of V , and π : V → V̄ = V /H is the


quotient map. Then a linear transformation L : V → W satisfies L(~h) = ~0 for all
~h ∈ H (i.e., H ⊂ KerL) if and only if it is the composition L = L̄ ◦ π for a linear
transformation L̄ : V̄ → W .

The linear transformation L̄ can be described by the following commutative di-


agram.
    V ----L----> W
      \         ↗
   π   \       /  L̄
        V /H

Proof. If L = L̄ ◦ π, then H ⊂ KerL by


~h ∈ H =⇒ π(~h) = ~0 =⇒ L(~h) = L̄(π(~h)) = L̄(~0) = ~0.

Conversely, if H ⊂ KerL, then the following shows that L̄(v̄) = L(~v ) is well defined
ū = v̄ ⇐⇒ ~u − ~v ∈ H =⇒ L(~u) − L(~v ) = L(~u − ~v ) = ~0 ⇐⇒ L(~u) = L(~v ).
The following verifies that L̄ is a linear transformation
L̄(aū + bv̄) = L̄(au + bv) = L(a~u + b~v ) = aL(~u) + bL(~v ) = aL̄(ū) + bL̄(v̄).
The following verifies L = L̄ ◦ π
L(~v ) = L̄(v̄) = L̄(π(~v )) = (L̄ ◦ π)(~v ).

Any linear transformation L : V → W vanishes on the kernel KerL. We may
take H = KerL in Theorem 3.4.3 and get

    V ----L----> W
      \              ↗
   π, onto \        /  L̄, one-to-one
            V /KerL

Here the one-to-one property of L̄ follows from

L̄(v̄) = ~0 ⇐⇒ ~v ∈ KerL ⇐⇒ v̄ = ~v + KerL = KerL = 0̄.

If we further know that L is onto, then we get the following property.

Proposition 3.4.4. If a linear transformation L : V → W is onto, then L̄ : V /KerL ≅ W
is an isomorphism.

Example 3.4.2. Picking the second coordinate, (x, y) ∈ R2 → y ∈ R, is an
onto linear transformation with kernel H = R~e1 = R × 0. By Proposition 3.4.4, we
get R2 /H ≅ R. This is the isomorphism in Example 3.4.1.
In general, if H = Rk × ~0 is the subspace of Rn in which the last n − k coordinates
vanish, then Rn /H ≅ Rn−k by picking the last n − k coordinates.

Example 3.4.3. The linear functional l(x, y, z) = x + y + z : R3 → R is onto, and


its kernel H = {(x, y, z) : x + y + z = 0} is the plane in Example 2.1.13. Therefore
¯l : R3 /H → R is an isomorphism. The equivalence classes are the planes

(a, 0, 0) + H = {(a + x, y, z) : x + y + z = 0} = {(x, y, z) : x + y + z = a} = l−1 (a)

parallel to H.

Example 3.4.4. The orthogonal projection P of R3 in Example 2.1.13 is onto the


range H = {(x, y, z) : x + y + z = 0}. The geometrical meaning of P shows that
KerP = R(1, 1, 1) is the line in the direction (1, 1, 1) passing through the origin.
Then by Proposition 3.4.4, we have R3 /R(1, 1, 1) ≅ H. Note that the vectors in the
quotient space R3 /R(1, 1, 1) are defined as all the lines in direction (1, 1, 1) (but not
necessarily passing through the origin). The isomorphism identifies the collection of
such lines with the plane H.

Example 3.4.5. The derivative map D(f ) = f 0 : C ∞ → C ∞ is onto, and the kernel
is all the constant functions KerD = {C : C ∈ R} = R. This induces an isomorphism
C ∞ /R ∼ = C ∞.

The second order derivative map D2 (f ) = f 00 : C ∞ → C ∞ vanishes on constant


functions. By Theorem 3.4.3, we have D2 = D̄2 ◦ D. Of course we know D̄2 = D
and D2 = D ◦ D.

Exercise 3.84. Prove that the map L̄ in Theorem 3.4.3 is one-to-one if and only if H =
KerL.

Exercise 3.85. Use Exercise 3.58, Proposition 3.4.4 and dim V /H = dim V −dim H to prove
Proposition ??.

Exercise 3.86. Show that the linear transformation by the matrix is onto. Then explain
the implication in terms of quotient space.
   
1.  [ 1 −1  0 ]
    [ 1  0 −1 ]

2.  [ 1 4 7 10 ]
    [ 2 5 8 11 ]

3.  [ a1 −1  0 · · ·  0 ]
    [ a2  0 −1 · · ·  0 ]
    [  ·   ·   ·       · ]
    [ an  0  0 · · · −1 ]

Exercise 3.87. Explain that the quotient space Mn×n /{symmetric matrices} is isomorphic to
the vector space of all skew-symmetric matrices.

Exercise 3.88. Show that


H = {f ∈ C ∞ : f (0) = 0}
is a subspace of C ∞ , and C ∞ /H ∼
= R.

Exercise 3.89. Let k ≤ n and t1 , t2 , . . . , tk be distinct. Let

    H = (t − t1 )(t − t2 ) · · · (t − tk )Pn−k = {(t − t1 )(t − t2 ) · · · (t − tk )f (t) : f ∈ Pn−k }.

Show that H is a subspace of Pn and the evaluations at t1 , t2 , . . . , tk give an isomorphism
between Pn /H and Rk .

Exercise 3.90. Show that

H = {f ∈ C ∞ : f (0) = f 0 (0) = 0}

is a subspace of C ∞ , and C ∞ /H ∼
= R2 .

Exercise 3.91. For fixed t0 , the map

    f ∈ C ∞ 7→ (f (t0 ), f 0 (t0 ), . . . , f (n) (t0 )) ∈ Rn+1

can be regarded as the n-th order Taylor expansion at t0 . Prove that the Taylor expansion
is an onto linear transformation. Find the kernel of the linear transformation and interpret
your result in terms of quotient space.

Exercise 3.92. Suppose ∼ is an equivalence relation on a set X. Define the quotient set
X̄ = X/ ∼ to be the collection of equivalence classes.

1. Prove that the quotient map π(x) = x̄ : X → X̄ is onto.

2. Prove that a map f : X → Y satisfies x ∼ x0 implying f (x) = f (x0 ) if and only if it


is the composition of a map f¯: X̄ → Y with the quotient map.

    X ----f----> Y
      \         ↗
   π   \       /  f¯
        X/ ∼

3.4.3 Direct Summand


Definition 3.4.5. A direct summand of a subspace H in the whole space V is a
subspace H 0 satisfying V = H ⊕ H 0 .

A direct summand fills the gap between H and V , similar to the way 3 fills the gap
between 2 and 5 in 5 = 2 + 3. The following shows that a direct summand also
“internalises” the quotient space.

Proposition 3.4.6. A subspace H 0 is a direct summand of H in V if and only if the


composition H 0 ⊂ V → V /H is an isomorphism.

Proof. The proposition is the consequence of the following two claims and the k = 2
case of Proposition 3.3.3 (see the remark after the earlier proposition)

1. V = H + H 0 if and only if the composition H 0 ⊂ V → V /H is onto.

2. The kernel of the composition H 0 ⊂ V → V /H is H ∩ H 0 .

For the first claim, we note that H 0 ⊂ V → V /H is onto means that for any
~v ∈ V , there is ~h0 ∈ H 0 , such that ~v + H = ~h0 + H, or ~v − ~h0 ∈ H. Therefore onto
means any ~v ∈ V can be expressed as ~h + ~h0 for some ~h ∈ H and ~h0 ∈ H 0 . This is
exactly V = H + H 0 .
For the second claim, we note that the kernel of the composition is

{~h0 ∈ H 0 : π(~h0 ) = ~0} = {~h0 ∈ H 0 : ~h0 ∈ Kerπ = H} = H ∩ H 0 .

Example 3.4.6. A direct summand of H = R~e1 = R × 0 in R2 is a 1-dimensional


subspace H 0 = R~v , such that R2 = R~e1 ⊕ R~v . In other words, ~e1 and ~v form a basis
of R2 . The condition means exactly that the second coordinate of ~v is nonzero.

Therefore, by multiplying ~v by a non-zero scalar (which does not change H 0 ), we
may assume H 0 = R(a, 1). Since different values of a give different lines R(a, 1), this gives a
one-to-one correspondence
may assume H 0 = R(a, 1). Since different a gives different line R(a, 1), this gives a
one-to-one correspondence
{direct summand R(a, 1) of R × 0 in R2 } ←→ a ∈ R.

Figure 3.4.2: Direct summands of R × 0 in R2 .

Example 3.4.7. By Example 3.4.5, the derivative induces an isomorphism C ∞ /R ∼ =


C ∞ , where R is the subspace of all constant functions. By Proposition 3.4.6, a direct
summand of constant functions R in C ∞ is then a subspace H ⊂ C ∞ , such that
D|H : f ∈ H → f 0 ∈ C ∞ is an isomorphism. For any fixed t0 , we may choose
H(t0 ) = {f ∈ C ∞ : f (t0 ) = 0}.
For any g ∈ C ∞ , we have f (t) = ∫_{t0}^{t} g(τ )dτ satisfying f 0 = g and f (t0 ) = 0. This
shows that D|H is onto. Since f 0 = 0 and f (t0 ) = 0 implies f = 0, we also know
that the kernel of D|H is trivial. Therefore D|H is an isomorphism, and H(t0 ) is a
direct summand.

Exercise 3.93. What is the dimension of a direct summand?

Exercise 3.94. Describe all the direct summands of Rk × ~0 in Rn .

Exercise 3.95. Is it true that any direct summand of R in C ∞ is H(t0 ) in Example 3.4.7
for some t0 ?

Exercise 3.96. Suppose α is a basis of H and α ∪ β is a basis of V . Prove that β spans a


direct summand of H in V . Moreover, all the direct summands are obtained in this way.
A direct summand is comparable to an extension of a linearly independent set to a
basis of the whole space.

Exercise 3.97. Prove that direct summands of a subspace H in V are in one-to-one corre-
spondence with projections P of V satisfying P (V ) = H.

Exercise 3.98. A splitting of a linear transformation L : V → W is a linear transformation


K : W → V satisfying L ◦ K = I. Let H = KerL.

1. Prove that L has a splitting if and only if L is onto. By Proposition 3.4.4, L induces
an isomorphism L̄ : V /H ∼= W.

2. Prove that K is a splitting of L if and only if K(W ) is a direct summand of H in


V.

3. Prove that splittings of L are in one-to-one correspondence with direct summands


of H in V .

Exercise 3.99. Suppose K is a splitting of L. Prove that K ◦L is a projection. Then discuss


the relation between two interpretations of direct summands in Exercises 3.97 and 3.98.

Exercise 3.100. Suppose H 0 and H 00 are two direct summands of H in V . Prove that there
is a self isomorphism L : V → V , such that L(H) = H and L(H 0 ) = H 00 . Moreover, prove
that it is possible to further require that L satisfies the following, and such L is unique.

1. L fixes H: L(~h) = ~h for all ~h ∈ H.

2. L is natural: ~h0 + H = L(~h0 ) + H for all ~h0 ∈ H 0 .

Exercise 3.101. Suppose V = H ⊕ H 0 . Prove that

A ∈ Hom(H 0 , H) 7→ H 00 = {(A(~h0 ), ~h0 ) : ~h0 ∈ H 0 }

is a one-to-one correspondence to all direct summands H 00 of H in V . This extends


Example 3.4.6.

Suppose H 0 and H 00 are direct summands of H in V . Then by Proposition 3.4.6,


both natural linear transformations H 0 ⊂ V → V /H and H 00 ⊂ V → V /H are
isomorphisms. Combining the two isomorphisms, we find that H 0 and H 00 are nat-
urally isomorphic. Since the direct summand is unique up to natural isomorphism,
we denote the direct summand by V ⊖ H.

Exercise 3.102. Prove that H + H 0 = (H ⊖ (H ∩ H 0 )) ⊕ (H ∩ H 0 ) ⊕ (H 0 ⊖ (H ∩ H 0 )). Then


prove that
dim(H + H 0 ) + dim(H ∩ H 0 ) = dim H + dim H 0 .
Chapter 4

Inner Product

The inner product introduces geometry (such as length, angle, area, volume, etc.)
into a vector space. Orthogonality can be introduced in an inner product space,
as the most linearly independent (or direct sum) scenario. Moreover, we have the
related concepts of orthogonal projection and orthogonal complement. The inner
product also induces natural isomorphism between a vector space and its dual space.

4.1 Inner Product


4.1.1 Definition
Definition 4.1.1. An inner product on a real vector space V is a function

h~u, ~v i : V × V → R,

such that the following are satisfied.

1. Bilinearity: ha~u + b~u0 , ~v i = ah~u, ~v i + bh~u0 , ~v i, h~u, a~v + b~v 0 i = ah~u, ~v i + bh~u, ~v 0 i.

2. Symmetry: h~v , ~ui = h~u, ~v i.

3. Positivity: h~u, ~ui ≥ 0 and h~u, ~ui = 0 if and only if ~u = ~0.

An inner product space is a vector space equipped with an inner product.

Example 4.1.1. The dot product on the Euclidean space is

(x1 , x2 , . . . , xn ) · (y1 , y2 , . . . , yn ) = x1 y1 + x2 y2 + · · · + xn yn .

If we use the convention of expressing Euclidean vectors as vertical n × 1 matrices,


then we have
~x · ~y = ~xT ~y .


This is especially convenient when the dot product is combined with matrices. For
example, for matrices A = (~v1 ~v2 · · · ~vm ) and B = (~w1 ~w2 · · · ~wn ), where all
column vectors are in the same Euclidean space Rk , we have (AT is m × k, and B
is k × n)

    AT B = [ ~v1T ]                         [ ~v1 · ~w1  ~v1 · ~w2  . . .  ~v1 · ~wn ]
           [ ~v2T ] (~w1 ~w2 · · · ~wn ) =  [ ~v2 · ~w1  ~v2 · ~w2  . . .  ~v2 · ~wn ]
           [  ·   ]                         [     ·          ·                ·     ]
           [ ~vmT ]                         [ ~vm · ~w1  ~vm · ~w2  . . .  ~vm · ~wn ] .

In particular, we have

    AT ~x = [ ~v1 · ~x ]
            [ ~v2 · ~x ]
            [     ·    ]
            [ ~vm · ~x ] .
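
The entrywise description of AT B can be confirmed directly; the numpy sketch below
(an illustration only) checks it for randomly chosen columns.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 2))    # columns v1, v2 in R^4
    B = rng.standard_normal((4, 3))    # columns w1, w2, w3 in R^4

    # Entry (i, j) of A^T B is the dot product of column i of A
    # with column j of B.
    dots = np.array([[A[:, i] @ B[:, j] for j in range(3)] for i in range(2)])
    print(np.allclose(A.T @ B, dots))    # True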

Example 4.1.2. The dot product is not the only inner product on the Euclidean
space. For example, if all ai > 0, then the following is also an inner product

h~x, ~y i = a1 x1 y1 + a2 x2 y2 + · · · + an xn yn .

For general discussion of inner products on Euclidean spaces, see Section 4.1.3.

Example 4.1.3. On the vector space Pn of polynomials of degree ≤ n, we may


introduce the inner product
    hf, gi = ∫₀¹ f (t)g(t)dt.

This is also an inner product on the vector space C[0, 1] of all continuous functions
on [0, 1], or the vector space of continuous periodic functions on R of period 1.
More generally, if K(t) > 0, then hf, gi = ∫₀¹ f (t)g(t)K(t)dt is an inner product.

Example 4.1.4. On the vector space Mm×n of m × n matrices, we use the trace
introduced in Exercise 2.10 to define

    hA, Bi = trAT B = Σi,j aij bij ,   A = (aij ), B = (bij ).

By Exercises 2.10 and 2.22, the symmetry and bilinearity conditions are satisfied. By
hA, Ai = Σi,j aij² ≥ 0, the positivity condition is satisfied. Therefore trAT B is an
inner product on Mm×n .
In fact, if we use the usual isomorphism between Mm×n and Rmn , the inner
product is translated into the dot product on the Euclidean space.
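
The identification with the dot product on Rmn can be seen numerically as well; the
following numpy sketch (an illustration only) compares trAT B with the entrywise sum
and with the dot product of the flattened matrices.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((2, 3))

    # <A, B> = tr(A^T B) is the sum of entrywise products, i.e. the dot
    # product of the matrices flattened into vectors of R^{mn}.
    print(np.allclose(np.trace(A.T @ B), np.sum(A * B)))          # True
    print(np.allclose(np.trace(A.T @ B), A.ravel() @ B.ravel()))  # True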

Exercise 4.1. Suppose h , i1 and h , i2 are two inner products on V . Prove that for any
a, b > 0, ah , i1 + bh , i2 is also an inner product.

Exercise 4.2. Prove that ~u satisfies h~u, ~v i = 0 for all ~v if and only if ~u = ~0.

Exercise 4.3. Prove that ~v1 = ~v2 if and only if h~u, ~v1 i = h~u, ~v2 i for all ~u. In other words,
two vectors are equal if and only if their inner products with all vectors are equal.

Exercise 4.4. Let W be an inner product space. Prove that two linear transformations
L, K : V → W are equal if and only if h~w, L(~v )i = h~w, K(~v )i for all ~v ∈ V and ~w ∈ W .

Exercise 4.5. Prove that two matrices A and B are equal if and only if ~x · A~y = ~x · B~y (i.e.,
~xT A~y = ~xT B~y ) for all ~x and ~y .

Exercise 4.6. Use the formula for the product of matrices in Example 4.1.1 to show that
(AB)T = B T AT .

Exercise 4.7. Show that hf, gi = f (0)g(0) + f (1)g(1) + · · · + f (n)g(n) is an inner product
on Pn .

4.1.2 Geometry
The usual Euclidean length is given by the Pythagorean theorem

    k~xk = √(x1² + · · · + xn²) = √(~x · ~x).

In general, the length (or norm) with respect to an inner product is

    k~v k = √h~v , ~v i.

We may take the square root because of the positivity property.


Inspired by the geometry in R2 , we define the angle θ between two nonzero
vectors ~u, ~v by

    cos θ = h~u, ~v i / (k~uk k~v k).

Two vectors are orthogonal if the angle between them is π/2. By cos(π/2) = 0, we
define ~u and ~v to be orthogonal, and denote ~u ⊥ ~v , if h~u, ~v i = 0.
For the definition of angle to make sense, however, we need the following result.

Proposition 4.1.2 (Cauchy-Schwarz Inequality). |h~u, ~v i| ≤ k~uk k~v k.

Proof. For any real number t, we have

0 ≤ h~u + t~v , ~u + t~v i = h~u, ~ui + 2th~u, ~v i + t2 h~v , ~v i.



For the quadratic function of t to be always non-negative, the coefficients must


satisfy
(h~u, ~v i)2 ≤ h~u, ~uih~v , ~v i.
This is the same as |h~u, ~v i| ≤ k~uk k~v k.

Knowing the angle, we may compute the area of the parallelogram spanned by the two vectors
$$\operatorname{Area}(\vec u,\vec v) = \|\vec u\|\,\|\vec v\|\sin\theta = \|\vec u\|\,\|\vec v\|\sqrt{1 - \left(\frac{\langle\vec u,\vec v\rangle}{\|\vec u\|\,\|\vec v\|}\right)^2} = \sqrt{\|\vec u\|^2\|\vec v\|^2 - (\langle\vec u,\vec v\rangle)^2}.$$
Again, we can take the square root due to the Cauchy-Schwarz inequality.

Proposition 4.1.3. The vector length has the following properties.

1. Positivity: k~uk ≥ 0, and k~uk = 0 if and only if ~u = ~0.

2. Scaling: ka~uk = |a| k~uk.

3. Triangle inequality: k~u + ~v k ≤ k~uk + k~v k.

The first two properties are easy to verify, and the triangle inequality is a con-
sequence of the Cauchy-Schwarz inequality

k~u + ~v k2 = h~u + ~v , ~u + ~v i
= h~u, ~ui + h~u, ~v i + h~v , ~ui + h~v , ~v i
≤ k~uk2 + k~uk k~v k + k~v k k~uk + k~v k2
= (k~uk + k~v k)2 .

By the scaling property in Proposition 4.1.3, if ~v 6= ~0, then by dividing the


length, we get a unit length vector (i.e., length 1)

~v
~u = .
k~v k

Note that ~u indicates the direction of the vector ~v by “forgetting” its length. In
fact, all the directions in the inner product space form the unit sphere

S1 = {~u ∈ V : k~uk = 1} = {~u ∈ V : ~u · ~u = 1}.

Any nonzero vector has unique polar decomposition

~v = r~u, where r = k~v k > 0 and k~uk = 1.



Example 4.1.5. With respect to the dot product, the lengths of (1, 1, 1) and (1, 2, 3) are
$$\|(1,1,1)\| = \sqrt{1^2+1^2+1^2} = \sqrt{3}, \qquad \|(1,2,3)\| = \sqrt{1^2+2^2+3^2} = \sqrt{14}.$$
Their polar decompositions are
$$(1,1,1) = \sqrt{3}\left(\tfrac{1}{\sqrt 3},\tfrac{1}{\sqrt 3},\tfrac{1}{\sqrt 3}\right), \qquad (1,2,3) = \sqrt{14}\left(\tfrac{1}{\sqrt{14}},\tfrac{2}{\sqrt{14}},\tfrac{3}{\sqrt{14}}\right).$$
The angle between the two vectors is given by
$$\cos\theta = \frac{1\cdot 1 + 1\cdot 2 + 1\cdot 3}{\|(1,1,1)\|\,\|(1,2,3)\|} = \frac{6}{\sqrt{42}}.$$
Therefore the angle is $\arccos\frac{6}{\sqrt{42}} = 0.1234\pi = 22.2077^\circ$.

Example 4.1.6. Consider the triangle with vertices ~a = (1, −1, 0), ~b = (2, 0, 1), ~c =
(2, 1, 3). The area is half of the parallelogram spanned by ~u = ~b − ~a = (1, 1, 1) and
~v = ~c − ~a = (1, 2, 3)

$$\frac{1}{2}\sqrt{\|(1,1,1)\|^2\,\|(1,2,3)\|^2 - \big((1,1,1)\cdot(1,2,3)\big)^2} = \frac{1}{2}\sqrt{3\cdot 14 - 6^2} = \sqrt{\frac{3}{2}}.$$
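The computations in Examples 4.1.5 and 4.1.6 can be checked with a few lines of NumPy; this sketch is only an illustration of the formulas above:

```python
import numpy as np

u = np.array([1., 1., 1.])
v = np.array([1., 2., 3.])

length_u = np.linalg.norm(u)                    # sqrt(3)
length_v = np.linalg.norm(v)                    # sqrt(14)
cos_theta = (u @ v) / (length_u * length_v)     # 6 / sqrt(42)
theta = np.arccos(cos_theta)                    # about 0.1234 * pi

# Area of the parallelogram spanned by u and v; the triangle is half of it.
area_parallelogram = np.sqrt(length_u**2 * length_v**2 - (u @ v)**2)
print(theta / np.pi, area_parallelogram / 2)    # ~0.1234, sqrt(3/2)/... = sqrt(6)/2
```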

Example 4.1.7. By the inner product in Example 4.1.3, the lengths of 1 and t are
$$\|1\| = \sqrt{\int_0^1 dt} = 1, \qquad \|t\| = \sqrt{\int_0^1 t^2\,dt} = \frac{1}{\sqrt 3}.$$
Therefore 1 has unit length, and t has polar decomposition $t = \frac{1}{\sqrt 3}(\sqrt 3\,t)$. The angle between 1 and t is given by
$$\cos\theta = \frac{\int_0^1 t\,dt}{\|1\|\,\|t\|} = \frac{\sqrt 3}{2}.$$
Therefore the angle is $\frac{1}{6}\pi$. Moreover, the area of the parallelogram spanned by 1 and t is
$$\sqrt{\int_0^1 dt\int_0^1 t^2\,dt - \left(\int_0^1 t\,dt\right)^2} = \frac{1}{2\sqrt 3}.$$
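The inner product of Example 4.1.3 can also be evaluated mechanically with numpy's polynomial arithmetic. The following sketch is an illustration, not part of the text; it recomputes the lengths and the angle of Example 4.1.7:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def inner(f, g, a=0.0, b=1.0):
    """<f, g> = integral_a^b f(t) g(t) dt for numpy Polynomials."""
    F = (f * g).integ()          # antiderivative of f*g
    return F(b) - F(a)

one, t = P([1.0]), P([0.0, 1.0])
print(inner(one, one))           # 1, so ||1|| = 1
print(inner(t, t))               # 1/3, so ||t|| = 1/sqrt(3)
cos_theta = inner(one, t) / np.sqrt(inner(one, one) * inner(t, t))
print(np.arccos(cos_theta) / np.pi)   # 1/6, i.e. the angle is pi/6
```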

Exercise 4.8. Show that the area of the triangle with vertices (0, 0), (a, b), (c, d) is $\frac{1}{2}|ad-bc|$. More generally, the area of the triangle with vertices $\vec 0, \vec x, \vec y \in \mathbb{R}^n$ is $\frac{1}{2}\sqrt{\sum_{1\le i<j\le n}(x_iy_j - x_jy_i)^2}$.

Exercise 4.9. In Example 4.1.6, we calculated the area of the triangle by subtracting ~a. By
the obvious symmetry, we can also calculate the area by subtracting ~b or ~c. Please verify
that the alternative calculations give the same results. Can you provide an argument for
the general case.

Exercise 4.10. Prove that the distance d(~u, ~v ) = k~u − ~v k in an inner product space has the
following properties.
1. Positivity: d(~u, ~v ) ≥ 0, and d(~u, ~v ) = 0 if and only if ~u = ~v .
2. Symmetry: d(~u, ~v ) = d(~v , ~u).
~ ≤ d(~u, ~v ) + d(~v , w).
3. Triangle inequality: d(~u, w) ~

Exercise 4.11. Show that the area of the parallelogram spanned by two vectors is zero if
and only if the two vectors are parallel.

Exercise 4.12. Prove the polarisation identity


1 1
h~u, ~v i = (k~u + ~v k2 − k~u − ~v k2 ) = (k~u + ~v k2 − k~uk2 − k~v k2 ).
4 2

Exercise 4.13. Prove that two inner products h·, ·i1 and h·, ·i2 are equal if and only if they
induce the same length: h~x, ~xi1 = h~x, ~xi2 .

Exercise 4.14. Prove the parallelogram identity


k~u + ~v k2 + k~u − ~v k2 = 2(k~uk2 + k~v k2 ).

Exercise 4.15. Find the length of vectors and the angle between vectors.

1. (1, 0), (1, 1). 3. (1, 2, 3), (2, 3, 4). 5. (0, 1, 2, 3), (4, 5, 6, 7).

2. (1, 0, 1), (1, 1, 0). 4. (1, 0, 1, 0), (0, 1, 0, 1). 6. (1, 1, 1, 1), (1, −1, 1, −1).

Exercise 4.16. Find the area of the triangle with the given vertices.

1. (1, 0), (0, 1), (1, 1). 4. (1, 1, 0), (1, 0, 1), (0, 1, 1).

2. (1, 0, 0), (0, 1, 0), (0, 0, 1). 5. (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1).

3. (1, 2, 3), (2, 3, 4), (3, 4, 5). 6. (0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11).

Exercise 4.17. Find the length of vectors and the angle between vectors.

1. (1, 0, 1, 0, . . . ), (0, 1, 0, 1, . . . ). 3. (1, 1, 1, 1, . . . ), (1, −1, 1, −1, . . . ).

2. (1, 2, 3, . . . , n), (n, n − 1, n − 2, . . . , 1). 4. (1, 1, 1, 1, . . . ), (1, 2, 3, 4, . . . ).

Exercise 4.18. Redo Exercises 4.15, 4.16, 4.17 with respect to the inner product
h~x, ~y i = x1 y1 + 2x2 y2 + · · · + nxn yn .

Exercise 4.19. Find the area of the triangle with given vertices, with respect to the inner
product in Example 4.1.3.

1. 1, t, t2 . 3. 1, at , bt .

2. 0, sin t, cos t. 4. 1 − t, t − t2 , t2 − 1.

Exercise 4.20. Redo Exercise 4.19 with respect to the inner product
Z 1
hf, gi = tf (t)g(t)dt.
0

Exercise 4.21. Redo Exercise 4.19 with respect to the inner product
Z 1
hf, gi = f (t)g(t)dt.
−1

4.1.3 Positive Definite Matrix


An inner product $\langle\cdot,\cdot\rangle$ on a finite dimensional vector space V is a bilinear function. Let α = {~v1, ~v2, . . . , ~vn} be a basis of V. Then as explained in Section 2.4.4, the bilinear function is given by a matrix
$$\langle\vec x,\vec y\rangle = [\vec x]_\alpha^T A\,[\vec y]_\alpha, \qquad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad a_{ij} = \langle\vec v_i,\vec v_j\rangle.$$
For the special case where $V = \mathbb{R}^n$ and α is the standard basis, the general inner product may be compared with the dot product
$$\langle\vec x,\vec y\rangle = \sum_{1\le i,j\le n} a_{ij}x_iy_j = \vec x^TA\vec y = \vec x\cdot A\vec y, \qquad a_{ij} = \langle\vec e_i,\vec e_j\rangle.$$

The symmetry property means that the matrix is symmetric


AT = A, aij = aji .
The positivity property is more subtle.

Definition 4.1.4. A matrix A is positive definite if it is a symmetric matrix, and


~xT A~x > 0 for any ~x 6= ~0.

Given a basis of V , the inner products on V are in one-to-one correspondence


with positive definite matrices.

Example 4.1.8. A diagonal matrix
$$A = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}$$
is positive definite if and only if all $a_i > 0$. See Example 4.1.2.

 
Example 4.1.9. For $A = \begin{pmatrix} 1 & 2 \\ 2 & a \end{pmatrix}$, we have
$$\vec x^TA\vec x = x_1^2 + 4x_1x_2 + ax_2^2 = (x_1+2x_2)^2 + (a-4)x_2^2.$$
Therefore A is positive definite if and only if a > 4.

 
a b
Exercise 4.22. Prove that is positive definite if and only if a > 0 and ac > b2 .
b c

 
A O
Exercise 4.23. For symmetric matrices A and B, prove that is positive definite
O B
if and only if A and B are positive definite.

Exercise 4.24. Suppose A and B are positive definite, and a, b > 0. Prove that aA + bB is
positive definite.

Exercise 4.25. Prove that positive definite matrices are invertible.

Exercise 4.26. If A is positive definite and P is invertible, prove that P T AP is positive


definite.

Exercise 4.27. If A is invertible, prove that AT A is positive definite. In fact, we only need
A to be one-to-one (i.e., solution of A~x = ~b is unique).

Exercise 4.28. Suppose A is symmetric and $q(\vec x) = \vec x\cdot A\vec x$. Prove the polarisation identity
$$\vec x\cdot A\vec y = \frac{1}{4}\big(q(\vec x+\vec y) - q(\vec x-\vec y)\big) = \frac{1}{2}\big(q(\vec x+\vec y) - q(\vec x) - q(\vec y)\big).$$

Exercise 4.29. Prove that two symmetric matrices A and B are equal if and only if ~x · A~x =
~x · B~x for all ~x.

In general, we may determine the positive definite property of a symmetric matrix


A = (aij ) by the process of completing the square similar to Example 4.1.9. Suppose

$a_{11} \ne 0$, then we gather all the terms involving $x_1$ and get
$$\begin{aligned}
\vec x^TA\vec x &= a_{11}x_1^2 + 2a_{12}x_1x_2 + \cdots + 2a_{1n}x_1x_n + \sum_{2\le i,j\le n} a_{ij}x_ix_j \\
&= a_{11}\left[x_1^2 + 2x_1\cdot\frac{1}{a_{11}}(a_{12}x_2+\cdots+a_{1n}x_n) + \frac{1}{a_{11}^2}(a_{12}x_2+\cdots+a_{1n}x_n)^2\right] \\
&\quad - \frac{1}{a_{11}}(a_{12}x_2+\cdots+a_{1n}x_n)^2 + \sum_{2\le i,j\le n} a_{ij}x_ix_j \\
&= a_{11}\left(x_1 + \frac{a_{12}}{a_{11}}x_2 + \cdots + \frac{a_{1n}}{a_{11}}x_n\right)^2 + \sum_{2\le i,j\le n} a'_{ij}x_ix_j.
\end{aligned}$$

Here the matrix
$$A' = (a'_{ij}), \qquad a'_{ij} = a_{ij} - \frac{a_{1i}a_{1j}}{a_{11}},$$
is obtained by the "symmetric row and column operations"
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\xrightarrow{\ R_i - \frac{a_{i1}}{a_{11}}R_1\ }
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & a'_{n2} & \cdots & a'_{nn} \end{pmatrix}
= \begin{pmatrix} a_{11} & * \\ O & A' \end{pmatrix}
\xrightarrow{\ C_i - \frac{a_{1i}}{a_{11}}C_1\ }
\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & a'_{n2} & \cdots & a'_{nn} \end{pmatrix}
= \begin{pmatrix} a_{11} & O \\ O & A' \end{pmatrix}.$$

In fact, the row operations are already sufficient for getting A'. We find that
$$\vec x^TA\vec x = b_1y_1^2 + \vec x'^TA'\vec x',$$
where
$$y_1 = x_1 + \frac{a_{12}}{a_{11}}x_2 + \cdots + \frac{a_{1n}}{a_{11}}x_n, \qquad b_1 = a_{11}, \qquad \vec x' = (x_2, \ldots, x_n).$$
We note that $\vec x'^TA'\vec x'$ does not involve $x_1$. Therefore the process of completing the square can continue until we get
$$\vec x^TA\vec x = b_1y_1^2 + b_2y_2^2 + \cdots + b_ny_n^2, \qquad y_i = x_i + c_{i(i+1)}x_{i+1} + \cdots + c_{in}x_n.$$
Then $\vec x^TA\vec x$ is positive definite if and only if all $b_i > 0$.

Example 4.1.10. By the row operations


     
1 2 3 1 2 3 1 2 3
A = 2 2 4 → 0 −2 −2 → 0 −2 −2 ,
3 4 5 0 −2 −4 0 0 −2

we get $\vec x^TA\vec x = y_1^2 - 2y_2^2 - 2y_3^2$ after completing the square. The matrix A is therefore
not positive definite.
By the row operations
     
1 2 3 1 2 3 1 2 3
B= 2 5 4 → 0 1
   −2 → 0 1 −2 ,
3 4 15 0 −2 6 0 0 2
the matrix B is positive definite. The corresponding inner product is
h~x, ~y i = x1 y1 + 5x2 y2 + 15x3 y3 + 2(x1 y2 + x2 y1 ) + 3(x1 y3 + x3 y1 ) + 4(x2 y3 + x3 y2 ).
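The completing-the-square process described above is essentially symmetric elimination: record the pivot of the current block and pass to the reduced block A'. The following Python sketch (an illustration, not part of the text) automates it and reproduces the pivots 1, −2, −2 and 1, 1, 2 of Example 4.1.10; note that a zero pivot already means a zero leading minor, so the matrix cannot be positive definite.

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Decide positive definiteness of a symmetric matrix by the
    completing-the-square (symmetric elimination) process."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    pivots = []
    for _ in range(n):
        pivot = A[0, 0]
        if abs(pivot) < tol:      # zero leading minor: not positive definite
            return False
        pivots.append(pivot)
        # Reduced block A' has entries a_ij - a_1i * a_1j / a_11.
        A = A[1:, 1:] - np.outer(A[0, 1:], A[0, 1:]) / pivot
    return all(b > 0 for b in pivots)

# The two matrices from Example 4.1.10.
print(is_positive_definite([[1, 2, 3], [2, 2, 4], [3, 4, 5]]))    # False
print(is_positive_definite([[1, 2, 3], [2, 5, 4], [3, 4, 15]]))   # True
```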

Example 4.1.11. For
$$A = \begin{pmatrix} 1 & 3 & 1 \\ 3 & 13 & 9 \\ 1 & 9 & 14 \end{pmatrix},$$
we gather together all the terms involving x and complete the square
~xT A~x = x2 + 13y 2 + 14z 2 + 6xy + 2xz + 18yz
= [x2 + 2x(3y + z) + (3y + z)2 ] + 13y 2 + 14z 2 + 18yz − (3y + z)2
= (x + 3y + z)2 + 4y 2 + 13z 2 + 12yz.
The remaining terms involve only y and z. Gathering all the terms involving y and
completing the square, we get 4y 2 + 13z 2 + 12yz = (2y + 3z)2 + 4z 2 and
~xT A~x = (x + 3y + z)2 + (2y + 3z)2 + (2z)2 = u2 + v 2 + w2 ,
for
$$\vec u = \begin{pmatrix} u \\ v \\ w \end{pmatrix} = \begin{pmatrix} x+3y+z \\ 2y+3z \\ 2z \end{pmatrix} = \begin{pmatrix} 1 & 3 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix},$$
or
$$\vec u = P\vec x, \qquad P = \begin{pmatrix} 1 & 3 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 2 \end{pmatrix}.$$
In particular, the matrix is positive definite.
The process of completing the square corresponds to the row operations
     
1 3 1 1 3 1 1 3 1
A = 3 13 9  → 0 4 6  → 0 4 6 .
1 9 14 0 6 13 0 0 4
The result of the completing the square can also be interpreted as
~xT A~x = ~uT ~u = (P ~x)T (P ~x) = ~xT P T P ~x.
By Exercise 4.29, this means A = P T P .

If P is an invertible matrix, and ~y = P ~x, then completing the square means

~xT A~x = b1 y12 + b2 y22 + · · · + bn yn2 = ~y T D~y = ~xT P T DP ~x,

where D is diagonal  
b1 0 · · · 0
 0 b2 · · · 0
D =  .. .. ..  .
 
. . .
0 0 ··· bn
By Exercise 4.29, the equality ~xT A~x = ~xT P T DP ~x means A = P T DP .

Exercise 4.30. Prove that all the diagonal terms in a positive definite matrix must be
positive.

Exercise 4.31. For any n × n matrix A and 1 ≤ i1 < i2 < · · · < ik ≤ n, let A(i1 , i2 , . . . , ik )
be the k × k submatrix obtained by selecting the ip -rows and iq -columns, 1 ≤ p, q ≤ k. If
A is positive definite, prove that A(i1 , i2 , . . . , ik ) is also positive definite. This generalizes
Exercise 4.30.

Exercise 4.32. Determine whether the matrix is positive definite.

1. $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}$.  2. $\begin{pmatrix} 1 & -1 & 1 \\ -1 & 2 & -2 \\ 1 & -2 & 3 \end{pmatrix}$.  3. $\begin{pmatrix} 1 & -1 & -1 \\ -1 & 2 & -2 \\ -1 & -2 & 3 \end{pmatrix}$.

Exercise 4.33. Find the condition for $\begin{pmatrix} a & 1 \\ 1 & a \end{pmatrix}$ and $\begin{pmatrix} a & 1 & 0 \\ 1 & a & 1 \\ 0 & 1 & a \end{pmatrix}$ to be positive definite, and generalise to the n × n matrix.

4.2 Orthogonality
In Section 3.3, we learned that the essence of linear algebra is not individual vectors,
but subspaces. The essence of span is sum of subspace, and the essence of linear
independence is that the sum is direct. Similarly, the essence of orthogonal vectors
is orthogonal subspaces.

4.2.1 Orthogonal Sum


Two vectors are orthogonal if their inner product is 0. Therefore we may define the
orthogonality of two subspaces.

Definition 4.2.1. Two spaces H and H 0 are orthogonal and denoted H ⊥ H 0 , if


h~u, ~v i = 0 for all ~u ∈ H and ~v ∈ H 0 .

Clearly, ~u ⊥ ~v if and only if R~u ⊥ R~v . We note that ~u and ~v are linearly
dependent if the angle between them is 0 (or π). Although the two vectors become
linearly independent when the angle is slightly away from 0, we still feel they are
almost dependent. In fact, we feel they are more and more independent when the
angle gets bigger and bigger. We feel the greatest independence when the angle is $\frac{1}{2}\pi$. This motivates the following result.

Theorem 4.2.2. If subspaces H1 , H2 , . . . , Hn are pairwise orthogonal, then H1 +


H2 + · · · + Hn is a direct sum.

Proof. Suppose ~hi ∈ Hi satisfies ~h1 + ~h2 + · · · + ~hn = ~0. Then by the pairwise
orthogonality, we have

0 = h~hi , ~h1 + ~h2 + · · · + ~hn i = h~hi , ~h1 i + h~hi , ~h2 i + · · · + h~hi , ~hn i = h~hi , ~hi i.

By the positivity of the inner product, this implies ~hi = ~0.

To emphasise the orthogonality between subspaces, we express the orthogonal


(direct) sum as H1 ⊥ H2 ⊥ · · · ⊥ Hk . Similarly, if

V = H1 ⊥ H2 ⊥ · · · ⊥ Hk ,
W = H10 ⊥ H20 ⊥ · · · ⊥ Hk0 ,

and L : V → W satisfies L(Hi ) ⊂ Hi0 , then we have linear transformations Li : Hi →


Hi0 and denote the direct sum of linear transformations (see Example 3.3.9) as an
orthogonal sum

L = L1 ⊥ L2 ⊥ · · · ⊥ Ln : H1 ⊥ H2 ⊥ · · · ⊥ Hk → H10 ⊥ H20 ⊥ · · · ⊥ Hk0 .

Example 4.2.1. We have the direct sum Pn = Pneven ⊕ Pnodd in Example 3.3.1. Moreover,
the two subspaces are orthogonal with respect to the inner product in Exercise 4.21.
Therefore we have Pn = Pneven ⊥ Pnodd .

Exercise 4.34. What is the subspace orthogonal to itself?

Exercise 4.35. Prove that H1 +H2 +· · ·+Hm ⊥ H10 +H20 +· · ·+Hn0 if and only if Hi ⊥ Hj0 for
all i and j. What does this tell you about R~v1 +R~v2 +· · ·+R~vm ⊥ Rw ~ 2 +· · ·+Rw
~ 1 +Rw ~ n?

Exercise 4.36. Show that H1 + H2 + H3 + H4 + H5 is an orthogonal sum if and only if


H1 + H2 + H3 , H4 + H5 , (H1 + H2 + H3 ) + (H4 + H5 ) are orthogonal sums. In general,
extend Proposition 3.3.2.

In case Hi = R~vi , the pairwise orthogonal property in Theorem 4.2.2 means


~v1 , ~v2 , . . . , ~vk are pairwise orthogonal, i.e., ~vi · ~vj = 0 whenever i 6= j. We call

α = {~v1 , ~v2 , . . . , ~vk } an orthogonal set. Then Theorem 4.2.2 says that an orthogonal
set of nonzero vectors is linearly independent.
If α is an orthogonal set of k nonzero vectors, and k = dim V , then α is a basis
of V , called an orthogonal basis.
If all vectors in an orthogonal set have unit length, i.e., we have
(
0, i 6= j,
~vi · ~vj = δij =
1, i = j,
then we have an orthonormal set. If the number of vectors in an orthonormal set is
dim V , then we have an orthonormal basis.
An orthogonal set of nonzero vectors can be changed to an orthonormal set by
dividing the vector lengths.

Example 4.2.2. The vectors ~v1 = (2, 2, −1) and ~v2 = (2, −1, 2) are orthogonal (with
respect to the dot product). To get an orthogonal basis of R3 , we need to add one
vector ~v3 = (x, y, z) satisfying
~v3 · ~v1 = 2x + 2y − z = 0, ~v3 · ~v2 = 2x − y + 2z = 0.
The solution is y = z = −2x. Taking x = −1, or ~v3 = (−1, 2, 2), we get an
orthogonal basis {(2, 2, −1), (2, −1, 2), (−1, 2, 2)}. By dividing the length k~v1 k =
k~v2 k = k~v3 k = 3, we get an orthonormal basis { 13 (2, 2, −1), 13 (2, −1, 2), 13 (−1, 2, 2)}.

Example 4.2.3. By the inner product in Example 4.1.3, we have
$$\langle t, t-a\rangle = \int_0^1 t(t-a)\,dt = \frac{1}{3} - \frac{a}{2}.$$
Therefore t is orthogonal to t − a if and only if $a = \frac{2}{3}$. By
$$\|t\| = \sqrt{\int_0^1 t^2\,dt} = \frac{1}{\sqrt 3}, \qquad \left\|t - \frac{2}{3}\right\| = \sqrt{\int_0^1 \left(t - \frac{2}{3}\right)^2 dt} = \frac{1}{3},$$
we get an orthonormal basis $\{\sqrt 3\,t,\ 3t-2\}$ of P1.

Exercise 4.37. For an orthogonal set {~v1 , ~v2 , . . . , ~vn }, prove the Pythagorean identity
k~v1 + ~v2 + · · · + ~vn k2 = k~v1 k2 + k~v2 k2 + · · · + k~vn k2 .

Exercise 4.38. Show that an orthonormal basis in R2 is either {(cos θ, sin θ), (− sin θ, cos θ)}
or {(cos θ, sin θ), (sin θ, − cos θ)}.

Exercise 4.39. Find an orthonormal basis of Rn with the inner product in Exercise 4.18.

Exercise 4.40. For P2 with the inner product in Example 4.1.3, find an orthogonal basis of
P2 of the form a0 , b0 + b1 t, c0 + c1 t + c2 t2 . Then convert to an orthonormal basis.

4.2.2 Orthogonal Complement


By Exercise 4.35, if H is orthogonal to H10 and H20 , then H is orthogonal to H10 + H20 .
This suggests there is a maximal subspace orthogonal to H. This maximal subspace
is given below.

Definition 4.2.3. The orthogonal complement of a subspace H of an inner product


space V is
H ⊥ = {~v : h~v , ~hi = 0 for all ~h ∈ H}.

The following shows that H ⊥ is indeed a subspace

~ ∈ H ⊥ ⇐⇒ h~v , ~hi = hw,


~v , w ~ ~hi = 0 for all ~h ∈ H
=⇒ ha~v + bw,~ ~hi = ah~v , ~hi + bhw,
~ ~hi = 0 for all ~h ∈ H
~ ∈ H ⊥.
⇐⇒ a~v + bw

Proposition 4.2.4. The orthogonal complement has the following properties.

1. H ⊂ H 0 implies H ⊥ ⊃ H 0⊥ .

2. (H1 + H2 + · · · + Hn )⊥ = H1⊥ ∩ H2⊥ ∩ · · · ∩ Hn⊥ .

3. H ⊂ (H ⊥ )⊥ .

4. If V = H ⊥ H 0 , then H 0 = H ⊥ and H = (H 0 )⊥ .

Proof. We only prove the fourth statement. The other properties are left as exercise.
Assume V = H ⊥ H′. By the definition of orthogonal subspaces, we have H′ ⊂ H⊥. Conversely, we express any ~x ∈ H⊥ as ~x = ~h + ~h′ with ~h ∈ H and ~h′ ∈ H′. Then we have
$$0 = \langle\vec x,\vec h\rangle = \langle\vec h,\vec h\rangle + \langle\vec h',\vec h\rangle = \langle\vec h,\vec h\rangle.$$
Here the first equality is due to ~x ∈ H⊥, ~h ∈ H, and the third equality is due to ~h′ ∈ H′, ~h ∈ H, and H ⊥ H′. The overall equality implies ~h = ~0. Therefore ~x = ~h′ ∈ H′. This proves H⊥ ⊂ H′, so that H′ = H⊥. Exchanging the roles of H and H′ gives H = (H′)⊥.

Exercise 4.41. Prove the first three statements in Proposition 4.2.4.

The following can be obtained from Exercise 4.35 and gives a practical way of
computing the orthogonal complement.

Proposition 4.2.5. The orthogonal complement of R~v1 + R~v2 + · · · + R~vk consists of all the vectors orthogonal to ~v1 , ~v2 , . . . , ~vk .

Consider an m × n matrix A = (~v1 ~v2 · · · ~vn ), ~vi ∈ Rm . By Proposition 4.2.5,


the orthogonal complement of the column space ColA consists of vectors ~x ∈ Rm
satisfying ~v1 · ~x = ~v2 · ~x = · · · = ~vn · ~x = 0. By the formula in Example 4.1.1, this
means AT ~x = ~0, or the null space of AT . Therefore we get
(ColA)⊥ = NulAT .
Taking transpose, we also get
(RowA)⊥ = NulA.

Example 4.2.4. The orthogonal complement of the line H = R(1, 1, 1) consists of


vectors (x, y, z) ∈ R3 satisfying (1, 1, 1) · (x, y, z) = x + y + z = 0. This is the plane
in Example 2.1.13. In general, the solutions of a1 x1 + a2 x2 + · · · + am xm = 0 is the
hyperplane orthogonal to the line R(a1 , a2 , . . . , am ).
For another example, Example 3.2.4 shows that the orthogonal complement of
R(1, 4, 7, 10) + R(2, 5, 8, 11) + R(3, 6, 9, 12) is R(1, −2, 0, 0) + R(2, −3, 0, 1).

Example 4.2.5. We try to calculate the orthogonal complement of P1 (span of 1


and t) in P3 with respect to the inner product in Example 4.1.3. A polynomial
f = a0 + a1 t + a2 t2 + a3 t3 is in the orthogonal complement if and only if
Z 1
1 1 1
h1, f i = (a0 + a1 t + a2 t2 + a3 t3 )dt = a0 + a1 + a2 + a3 = 0,
2 3 4
Z0 1
1 1 1 1
ht, f i = t(a0 + a1 t + a2 t2 + a3 t3 )dt = a0 + a1 + a2 + a3 = 0.
0 2 3 4 5
We find two linearly independent solutions 2 − 9t + 10t3 and 1 − 18t2 + 20t3 , which
form a basis of the orthogonal complement.

Exercise 4.42. Find the orthogonal complement of the span of (1, 4, 7, 10), (2, 5, 8, 11),
(3, 6, 9, 12) with respect to the inner product h(x1 , x2 , x3 , x4 ), (y1 , y2 , y3 , y4 )i = x1 y1 +
2x2 y2 + 3x3 y3 + 4x4 y4 .

Exercise 4.43. Find the orthogonal complement of P1 in P3 respect to the inner products
in Exercises 4.7, 4.20, 4.21.

Exercise 4.44. Find all vectors orthogonal to the given vectors.

1. (1, 4, 7), (2, 5, 8), (3, 6, 9). 3. (1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2).

2. (1, 2, 3), (4, 5, 6), (7, 8, 9). 4. (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1).

Exercise 4.45. Redo Exercise 4.44 with respect to the inner product in Exercise 4.18.

Exercise 4.46. Find all polynomials of degree 2 orthogonal to the given functions, with
respect to the inner product in Example 4.1.3.

1. 1, t. 2. 1, t, 1 + t. 3. sin t, cos t. 4. 1, t, t2 .

Exercise 4.47. Redo Exercise 4.46 with respect to the inner product in Exercises 4.20, 4.21.

4.2.3 Orthogonal Projection


In Section 3.3.2, a direct sum V = H ⊕ H 0 is associated with a projection

P (~h + ~h0 ) = ~h, ~h ∈ H, ~h0 ∈ H 0 .

The projection depends on the choice of the direct summand H 0 of H in V .


If V = H ⊥ H 0 , then the projection is an orthogonal projection. We will prove
that, if H is finite dimensional, then V = H ⊥ H ⊥ , and H 0 must be H ⊥ . Therefore
the orthogonal projection depends only on H, and we denote it by projH ~x. Since
we have not yet proved V = H ⊥ H ⊥ , we define the orthogonal projection without
using the result.

Definition 4.2.6. The orthogonal projection of ~x ∈ V onto a subspace H ⊂ V is the


vector ~h ∈ H satisfying ~x − ~h ⊥ H.

Figure 4.2.1: Orthogonal projection.

For the uniqueness of ~h (i.e., projH ~x is not ambiguous), we note that ~x − ~h ⊥ H


and ~x − ~h0 ⊥ H imply ~h0 − ~h = (~x − ~h) − (~x − ~h0 ) ⊥ H. Then by ~h0 − ~h ∈ H, we get
~h0 − ~h = ~0.

Proposition 4.2.7. Orthogonal projection onto H exists if and only if V = H ⊥ H ⊥ .


Moreover, we have (H ⊥ )⊥ = H.

The orthogonal sum V = H ⊥ H ⊥ implies that the orthogonal projection is the


projection associated to a direct sum, and is therefore a linear transformation.

Proof. Suppose the orthogonal projection onto H exists. Let the orthogonal pro-
jection of ~x ∈ V be ~h ∈ H. Then ~h0 = ~x − ~h ∈ H ⊥ , and we have ~x = ~h + ~h0 with

~h ∈ H and ~h0 ∈ H ⊥ . This proves V = H + H ⊥ . Since the sum H + H ⊥ is always


an orthogonal sum, we get V = H ⊥ H ⊥ .
Suppose V = H ⊥ H ⊥ . Then the projection P : V → H associated to the direct
sum satisfies ~x − P (~x) ∈ H ⊥ . By definition, P (~x) is the orthogonal projection onto
H. On the other hand, we may also apply the fourth statement in Proposition 4.2.4
to H 0 = H ⊥ to get (H ⊥ )⊥ = H.
It remains to prove that (H ⊥ )⊥ = H implies V = H ⊥ H ⊥ .
The following gives the formula of the orthogonal projection in case H has a
finite orthogonal basis.

Proposition 4.2.8. If α = {~v1 , ~v2 , . . . , ~vk } is an orthogonal basis of a subspace H ⊂


V , then the orthogonal projection of ~x on H is
~x · ~v1 ~x · ~v2 ~x · ~vk
projH ~x = ~v1 + ~v2 + · · · + ~vk .
~v1 · ~v1 ~v2 · ~v2 ~vk · ~vk
If the basis is orthonormal, then
projH ~x = (~x · ~v1 )~v1 + (~x · ~v2 )~v2 + · · · + (~x · ~vk )~vk .

In Section 4.2.4, we will prove that any finite dimensional subspace has an orthog-
onal basis. Then the formula in the proposition shows the existence of orthogonal
projection.
We also note that, if ~x ∈ H = Spanα, then ~x = projH ~x, and we get
~x · ~v1 ~x · ~v2 ~x · ~vk
~x = ~v1 + ~v2 + · · · + ~vk .
~v1 · ~v1 ~v2 · ~v2 ~vk · ~vk
Proof. Let ~h be the formula in the proposition. We need to verify ~x − ~h ⊥ H. By
Proposition 4.2.5 (and Exercise 4.35), we only need to show (~x − ~h) · ~vi = 0. The
following proves ~h · ~vi = ~x · ~vi .
 
~x · ~v1 ~x · ~v2 ~x · ~vk
~v1 + ~v2 + · · · + ~vk · ~vi
~v1 · ~v1 ~v2 · ~v2 ~vk · ~vk
~x · ~v1 ~x · ~v2 ~x · ~vk
= ~v1 · ~vi + ~v2 · ~vi + · · · + ~vk · ~vi
~v1 · ~v1 ~v2 · ~v2 ~vk · ~vk
~x · ~vi
= ~vi · ~vi = ~x · ~vi .
~vi · ~vi

Example 4.2.6. In Example 2.1.13, we found the formula for the orthogonal pro-
jection onto the subspace H ⊂ R3 given by x + y + z = 0. Now we find the formula
again by using Proposition 4.2.8. To get an orthogonal basis of H, we start with
~v1 = (1, −1, 0) ∈ H. Since dim H = 2, we only need to find ~v2 = (x, y, z) ∈ H
satisfying ~v1 · ~v2 = 0. This means
x + y + z = 0, x − y = 0.

The solution is x = y, z = −2y, with y arbitrary. By choosing y = 1, we get an


orthogonal basis ~v1 = (1, −1, 0), ~v2 = (1, 1, −2) of H. Then by Proposition 4.2.8,
we get
     
$$\operatorname{proj}_{\{x+y+z=0\}}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \frac{x_1-x_2}{1+1+0}\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} + \frac{x_1+x_2-2x_3}{1+1+4}\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2x_1-x_2-x_3 \\ -x_1+2x_2-x_3 \\ -x_1-x_2+2x_3 \end{pmatrix}.$$
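This projection matrix is easy to verify numerically. The sketch below (an illustration only) applies the formula of Proposition 4.2.8 with the orthogonal basis found above to the standard basis vectors and recovers the matrix:

```python
import numpy as np

def proj_onto(basis, x):
    """Orthogonal projection of x onto the span of an orthogonal basis (list of vectors)."""
    return sum((x @ v) / (v @ v) * v for v in basis)

v1 = np.array([1., -1., 0.])
v2 = np.array([1., 1., -2.])      # orthogonal basis of the plane x + y + z = 0

# Apply the projection to the standard basis to recover the matrix.
P = np.column_stack([proj_onto([v1, v2], e) for e in np.eye(3)])
print(np.round(3 * P))            # 3P = [[2,-1,-1],[-1,2,-1],[-1,-1,2]]
```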

Example 4.2.7 (Fourier series). The inner product in Example 4.3.3 is defined for all continuous (in fact square integrable on [0, 2π] is enough) periodic functions on R of period 2π. For integers m ≠ n, we have
$$\begin{aligned}
\langle\cos mt, \cos nt\rangle &= \frac{1}{2\pi}\int_0^{2\pi} \cos mt\,\cos nt\,dt \\
&= \frac{1}{4\pi}\int_0^{2\pi} \big(\cos(m+n)t + \cos(m-n)t\big)\,dt \\
&= \frac{1}{4\pi}\left[\frac{\sin(m+n)t}{m+n} + \frac{\sin(m-n)t}{m-n}\right]_0^{2\pi} = 0.
\end{aligned}$$

We may similarly find hsin mt, sin nti = 0 for m 6= n and hcos mt, sin nti = 0.
Therefore the vectors (1 = cos 0t)

1, cos t, sin t, cos 2t, sin 2t, . . . , cos nt, sin nt, . . .

form an orthogonal set.


We can also easily find the lengths k cos ntk = k sin ntk = √12 for n 6= 0 and
k1k = 1. If we pretend Proposition 4.2.8 works for infinite sum, then for a periodic
function f (t) of period 2π, we may expect

f (t) = a0 + a1 cos t + b1 sin t + a2 cos 2t + b2 sin 2t + · · · + an cos nt + bn sin nt + · · · ,

with
Z 2π
1
a0 = hf (t), 1i = f (t)dt,
2π 0
1 2π
Z
an = 2hf (t), cos nti = f (t) cos ntdt,
π 0
1 2π
Z
bn = 2hf (t), sin nti = f (t) sin ntdt.
π 0
This is the Fourier series.
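The coefficient formulas are easy to test numerically. The following Python sketch is only an illustration (the sample function, a square wave, is made up for the test) and approximates the integrals by the rectangle rule on [0, 2π]:

```python
import numpy as np

def fourier_coeffs(f, N, samples=4096):
    """Approximate a_0, a_n, b_n (n = 1..N) of a 2*pi-periodic function."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    ft = f(t)
    a0 = ft.mean()                                        # (1/2pi) * integral of f
    a = [2 * np.mean(ft * np.cos(n * t)) for n in range(1, N + 1)]
    b = [2 * np.mean(ft * np.sin(n * t)) for n in range(1, N + 1)]
    return a0, a, b

# f(t) = sign(sin t): the classical square wave, with b_n = 4/(pi*n) for odd n.
a0, a, b = fourier_coeffs(lambda t: np.sign(np.sin(t)), 5)
print(np.round(b, 3))   # approximately [4/pi, 0, 4/(3*pi), 0, 4/(5*pi)]
```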

4.2.4 Gram-Schmidt Process


The following gives an inductive process for turning a basis of a subspace to an
orthogonal basis of the subspace. In particular, it implies the existence of orthogonal
basis for finite dimensional subspace. By dividing the vector length, we may even
get orthonormal basis.

Proposition 4.2.9. Suppose α = {~v1 , ~v2 , . . . , ~vn } is a linearly independent set in


an inner product space. Then there is an orthogonal set β = {w ~ 1, w ~ n } of
~ 2, . . . , w
nonzero vectors satisfying

Rw ~ 2 + · · · + Rw
~ 1 + Rw ~ i = R~v1 + R~v2 + · · · + R~vi , i = 1, 2, . . . , n.

In particular, any finite dimensional inner product space has an orthonormal basis.

Proof. We start with w ~ 1 = ~v1 and get Rw ~ 1 = R~v1 . Then we inductively assume
w
~ 1, w
~ 2, . . . , w
~ i are constructed, and Rw~ 1 + Rw ~ 2 + · · · + Rw
~ i = R~v1 + R~v2 + · · · + R~vi
is satisfied. We denote the subspace by Hi and take

~ i+1 = ~vi+1 − projHi ~vi+1 .


w

Since α is linearly independent, we have ~vi+1 6∈ Hi . Therefore w ~ i+1 6= ~0. Moreover,


by the definition of orthogonal projection, we get w ~ i+1 ⊥ Hi . This means w ~ i+1 ⊥
w ~ i+1 ⊥ w
~ 1, w ~ i+1 ⊥ w
~ 2, . . . , w ~ i . Therefore we get an orthogonal basis at the end.

Example 4.2.8. In Example 2.1.13, the subspace H of R3 given by the equation


x + y + z = 0 has basis ~v1 = (1, −1, 0), ~v2 = (1, 0, −1). Then we derive an orthogonal
basis of H

~ 1 = ~v1 = (1, −1, 0),


w
h~v2 , w
~ 1i 1+0+0 1
~ 20 = ~v2 −
w ~ 1 = (1, 0, −1) −
w (1, −1, 0) = (1, 1, −2),
hw ~ 1i
~ 1, w 1+1+0 2
0
w
~ 2 = 2w ~ 2 = (1, 1, −2).

Here we simplify the choice of vectors by suitable scalar multiplication, which does
not change orthogonal basis. The orthogonal basis we get is the same as the one
used in Example 4.2.6.

Example 4.2.9. In Example 1.3.14, the vectors ~v1 = (1, 2, 3), ~v2 = (4, 5, 6), ~v3 =
(7, 8, 10) form a basis of R3 . We apply the Gram-Schmidt process to get an orthog-

onal basis of R3 .

$$\begin{aligned}
\vec w_1 &= \vec v_1 = (1,2,3),\\
\vec w_2' &= \vec v_2 - \frac{\langle\vec v_2,\vec w_1\rangle}{\langle\vec w_1,\vec w_1\rangle}\vec w_1 = (4,5,6) - \frac{4+10+18}{1+4+9}(1,2,3) = \frac{3}{7}(4,1,-2),\\
\vec w_2 &= (4,1,-2),\\
\vec w_3' &= \vec v_3 - \frac{\langle\vec v_3,\vec w_1\rangle}{\langle\vec w_1,\vec w_1\rangle}\vec w_1 - \frac{\langle\vec v_3,\vec w_2\rangle}{\langle\vec w_2,\vec w_2\rangle}\vec w_2\\
&= (7,8,10) - \frac{7+16+30}{1+4+9}(1,2,3) - \frac{28+8-20}{16+1+4}(4,1,-2) = \frac{1}{6}(1,-2,1),\\
\vec w_3 &= (1,-2,1).
\end{aligned}$$
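The computation in this example can be scripted directly from the inductive step of the proof. The Python sketch below (an illustration only, assuming the input vectors are linearly independent) reproduces the orthogonal basis just obtained, up to scalar multiples:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthogonal list
    with the same partial spans, following the Gram-Schmidt process."""
    ws = []
    for v in vectors:
        w = v.astype(float)
        for u in ws:                        # subtract the projection onto span(ws)
            w = w - (v @ u) / (u @ u) * u
        ws.append(w)
    return ws

basis = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 10])]
for w in gram_schmidt(basis):
    print(w)     # (1,2,3), then positive multiples of (4,1,-2) and (1,-2,1)
```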

Example 4.2.10. The natural basis {1, t, t2 } of P2 is not orthogonal with respect to
the inner product in Example 4.1.3. We improve the basis to become orthogonal

f1 = 1,
R1
t · 1dt 1
f2 = t − R0 1 1=t− ,
12 dt 2
0
R1 2 R1 2 1
 
· −

2 t 1dt t t dt 1 1
0
f3 = t − R 1 0
1− R1 2
t− = t2 − t + .
1 2 2 6
2

0
1 dt 0
t − 2 dt

By rescaling, we find that 1, 2t − 1, 6t2 − 6t + 1 is an orthogonal basis of P2 .


We may use the orthogonal basis 1, 2t − 1 of P1 to calculate the orthogonal
projection of P3 to P1
R1 R1
t2 dt (1 − 2t)t2 dt 1 1 1
projP1 t2 = R 01 1 + R01 (1 − 2t) = 1 − (1 − 2t) = − + t,
1 2 dt (1 − 2t) 2 dt 3 2 6
R01 3 R 01
3 0
t dt 0
(1 − 2t)t3 dt 1 9 1 9
projP1 t = R 1 1+ R1 (1 − 2t) = 1 − (1 − 2t) = − + t.
12 dt (1 − 2t)2 dt 4 20 5 10
0 0

Combined with projP1 1 = 1 and projP1 t = t, we get


   
2 3 1 1 9
projP1 (a0 + a1 t + a2 t + a3 t ) = a0 + a1 t + a2 − + t + a3 − + t
6 5 10
   
1 1 9
= a0 − a2 − a3 + a1 + a2 + a3 t.
6 5 10

Example 4.2.11. With respect to the inner product in Exercise 4.21, even and odd
polynomials are always orthogonal. Therefore to find an orthogonal basis of P3 with
respect to this inner product, we may apply the Gram-Schmidt process to 1, t2 and

t, t3 separately, and then simply combine the results together. Specifically, we have
R1 2 R1 2
t · 1dt t · 1dt 1
t2 − −1
R1 1 = t2 − 0R 1 1 = t2 − ,
12 dt 12 dt 3
−1 0
R1 3 R1 3
t · tdt t · tdt 3
t3 − −1
R1 t = t3 − 0R 1 t = t3 − t.
t2 dt t2 dt 5
−1 0

Therefore $1,\ t,\ t^2 - \frac{1}{3},\ t^3 - \frac{3}{5}t$ form an orthogonal basis of P3. By
$$\int_{-1}^1 1^2\,dt = 2, \qquad \int_{-1}^1 t^2\,dt = \frac{2}{3}, \qquad \int_{-1}^1\left(t^2-\frac{1}{3}\right)^2 dt = \frac{8}{3^2\cdot 5}, \qquad \int_{-1}^1\left(t^3-\frac{3}{5}t\right)^2 dt = \frac{8}{5^2\cdot 7},$$
we divide by the lengths and get an orthonormal basis
$$\frac{1}{\sqrt 2}, \quad \frac{\sqrt 3}{\sqrt 2}\,t, \quad \frac{\sqrt 5}{2\sqrt 2}(3t^2-1), \quad \frac{\sqrt 7}{2\sqrt 2}(5t^3-3t).$$

Exercise 4.48. Find an orthogonal basis of the subspace in Example 4.2.8 by starting with
~v2 and then use ~v1 .

Exercise 4.49. Find an orthogonal basis of the subspace in Example 4.2.8 with respect
to the inner product h(x1 , x2 , x3 ), (y1 , y2 , y3 )i = x1 y1 + 2x2 y2 + 3x3 y3 . Then extend the
orthogonal basis to an orthogonal basis of R3 .

Exercise 4.50. Apply the Gram-Schmidt process to 1, t, t2 with respect to the inner prod-
ucts in Exercises 4.7 and 4.20.

Exercise 4.51. Use the orthogonal basis in Example 4.2.3 to calculate the orthogonal pro-
jection in Example 4.2.10.

Exercise 4.52. Find the orthogonal projection of general polynomial onto P1 with respect
to the inner product in Example 4.1.3. What about the inner products in Exercises 4.7,
4.20, 4.21?

4.2.5 Property of Orthogonal Projection


Proposition 4.2.9 implies that any finite dimensional subspace H of an inner product
space V has orthogonal basis. Then Proposition 4.2.8 implies that the orthogonal
projection onto H exists. Further, Proposition 4.2.7 shows that (H ⊥ )⊥ = H.
In Section 4.2.2, for a matrix A, we have

(ColA)⊥ = NulAT , (RowA)⊥ = NulA.



Applying (H ⊥ )⊥ = H, we get
(NulA)⊥ = RowA, (NulAT )⊥ = ColA.
We remark that the equality (NulAT )⊥ = ColA means that A~x = ~b has solution if
and only if ~b is orthogonal to all the solutions of the equation AT ~x = ~0. Similarly,
the equality (RowA)⊥ = NulA means that A~x = ~0 if and only if ~x is orthogonal to all
~b such that AT ~x = ~b has solution. These are called the complementarity principles.

Example 4.2.12. By Example 4.2.4 and (H ⊥ )⊥ = H, the orthogonal complement


of R(1, −2, 0, 0) + R(2, −3, 0, 1) is R(1, 4, 7, 10) + R(2, 5, 8, 11) + R(3, 6, 9, 12).
Similarly, in Example 4.2.5, the orthogonal complement of R(2 − 9t + 10t3 ) +
R(1 − 18t2 + 20t3 ) in P3 is P1 .

By Section 3.3.2, the orthogonal sum V = H ⊥ H ⊥ implies (see Exercise 3.65)


~x = projH ~x + projH ⊥ ~x.
In fact, if subspaces H1 , H2 are orthogonal, then by Proposition 4.2.4, we have
V = H1 ⊥ H2 ⊥ (H1 ⊥ H2 )⊥ = H1 ⊥ H2 ⊥ (H1⊥ ∩ H2⊥ ).
By Exercise 3.68, we get
projH1 ⊥H2 = projH1 + projH2 .

Example 4.2.13. We wish to calculate the matrix of the orthogonal projection to


the subspace of Rn of dimension n − 1
H = {(x1 , x2 , . . . , xn ) : a1 x1 + a2 x2 + · · · + an xn = 0}.
It would be complicated to find an orthogonal basis of H, like what we did in
Example 4.2.6. Instead, we take advantage of the fact that
H = (R~a)⊥ , ~a = (a1 , a2 , . . . , an ).
Moreover, we may assume k~ak = 1 by dividing the length, then
proj{a1 x1 +a2 x2 +···+an xn =0}~x = ~x − projR~a~x = ~x − (a1 x1 + a2 x2 + · · · + an xn )~a
 
x1 − a1 (a1 x1 + a2 x2 + · · · + an xn )
 x2 − a2 (a1 x1 + a2 x2 + · · · + an xn ) 
=
 
.. 
 . 
xn − an (a1 x1 + a2 x2 + · · · + an xn )
 
1 − a21 −a1 a2 · · · −a1 an
 −a2 a1 1 − a2 · · · −a2 an 
2
=  .. ..  ~x.
 
..
 . . . 
−an a1 −an a2 · · · 1 − a2n
For ~a = √1 (1, 1, 1), we get the same matrix as the earlier examples.
3

Example 4.2.14. Let
$$A = \begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{pmatrix}.$$
In Example 3.2.4, we get a basis (1, −2, 1, 0), (2, −3, 0, 1) for the null space NulA. The two vectors are not orthogonal. By
$$(2,-3,0,1) - \frac{(2,-3,0,1)\cdot(1,-2,1,0)}{(1,-2,1,0)\cdot(1,-2,1,0)}(1,-2,1,0) = \frac{1}{3}(2,-1,-4,3),$$
we get an orthogonal basis (1, −2, 1, 0), (2, −1, −4, 3) of NulA. Then
$$\operatorname{proj}_{\mathrm{Nul}A}\vec x = \frac{x_1-2x_2+x_3}{1+4+1}\begin{pmatrix} 1 \\ -2 \\ 1 \\ 0 \end{pmatrix} + \frac{2x_1-x_2-4x_3+3x_4}{4+1+16+9}\begin{pmatrix} 2 \\ -1 \\ -4 \\ 3 \end{pmatrix} = \frac{1}{10}\begin{pmatrix} 3 & -4 & -1 & 2 \\ -4 & 7 & -2 & -1 \\ -1 & -2 & 7 & -4 \\ 2 & -1 & -4 & 3 \end{pmatrix}\vec x.$$
By RowA = (NulA)⊥, we also get
$$\operatorname{proj}_{\mathrm{Row}A}\vec x = \vec x - \operatorname{proj}_{\mathrm{Nul}A}\vec x = \frac{1}{10}\begin{pmatrix} 7 & 4 & 1 & -2 \\ 4 & 3 & 2 & 1 \\ 1 & 2 & 3 & 4 \\ -2 & 1 & 4 & 7 \end{pmatrix}\vec x.$$

Example 4.2.15. In Example 4.2.1, we have the orthogonal sum decomposition $P_n = P_n^{\text{even}} \oplus P_n^{\text{odd}}$ with respect to the inner product in Exercise 4.21. By Example 4.2.11, we have further orthogonal sum decompositions
$$P_3^{\text{even}} = \mathbb{R}1 \oplus \mathbb{R}\big(t^2 - \tfrac{1}{3}\big), \qquad P_3^{\text{odd}} = \mathbb{R}t \oplus \mathbb{R}\big(t^3 - \tfrac{3}{5}t\big).$$
This gives two orthogonal projections
$$P^{\text{even}}_{\mathbb{R}1}(a_0 + a_2t^2) = P^{\text{even}}_{\mathbb{R}1}\big((a_0 + \tfrac{1}{3}a_2) + a_2(t^2 - \tfrac{1}{3})\big) = a_0 + \tfrac{1}{3}a_2 \colon P_3^{\text{even}} \to \mathbb{R}1,$$
$$P^{\text{odd}}_{\mathbb{R}t}(a_1t + a_3t^3) = P^{\text{odd}}_{\mathbb{R}t}\big((a_1 + \tfrac{3}{5}a_3)t + a_3(t^3 - \tfrac{3}{5}t)\big) = (a_1 + \tfrac{3}{5}a_3)t \colon P_3^{\text{odd}} \to \mathbb{R}t.$$
Then the orthogonal projection to $P_1 = \mathbb{R}1 \oplus \mathbb{R}t$ is
$$\operatorname{proj}_{\mathbb{R}1\oplus\mathbb{R}t}(a_0 + a_1t + a_2t^2 + a_3t^3) = P^{\text{even}}_{\mathbb{R}1}(a_0 + a_2t^2) + P^{\text{odd}}_{\mathbb{R}t}(a_1t + a_3t^3) = a_0 + \tfrac{1}{3}a_2 + \big(a_1 + \tfrac{3}{5}a_3\big)t.$$
The idea of the example is extended in Exercise 4.53.

Exercise 4.53. Suppose V = H1 ⊥ H2 ⊥ · · · ⊥ Hk is an orthogonal sum. Suppose Hi0 ⊂ Hi


is a subspace, and Pi : Hi → Hi0 are the orthogonal projections inside subspaces. Prove
that
projH10 ⊥H20 ⊥···⊥H 0 = P1 ⊕ P2 ⊕ · · · ⊕ Pk .
k

Exercise 4.54. Suppose H1 , H2 , . . . , Hk are pairwise orthogonal subspaces. Prove that

projH1 ⊥H2 ⊥···⊥Hk = projH1 + projH2 + · · · + projHk .

Then use the orthogonal projection to a line


h~v , ~ui
projR~u = ~u
h~u, ~ui
to derive the formula in Proposition 4.2.8.

Exercise 4.55. Prove that I = projH +projH 0 if and only if H 0 is the orthogonal complement
of H.

Exercise 4.56. Find the orthogonal projection to the subspace x + y + z = 0 in R3 with respect
to the inner product h(x1 , x2 , x3 ), (y1 , y2 , y3 )i = x1 y1 + 2x2 y2 + 3x3 y3 .

Exercise 4.57. Directly find the orthogonal projection to the row space in Example 4.2.14.

4.3 Adjoint
The inner product gives a dual pairing of an inner product space with itself, making
the space self-dual. Then the dual linear transformation can be interpreted as linear
transformations of the original vector spaces. This is the adjoint. We may use the
adjoint to describe isometric linear transformations, which can also be described by
orthonormal basis.

4.3.1 Adjoint
The inner product fits into Definition 2.4.6 of the dual pairing. The symmetric prop-
erty of inner product implies that the two linear transformations in the definition
are the same.

Proposition 4.3.1. Suppose V is a finite dimensional inner product space. Then


the inner product induces an isomorphism

~v 7→ h·, ~v i : V ∼
= V ∗.

Proof. By dim V ∗ = dim V , to show that ~v 7→ h·, ~v i is an isomorphism, it is sufficient


to argue the one-to-one property, or the triviality of the kernel. A vector ~v is in this

kernel means that h~x, ~v i = 0 for all ~x. By taking ~x = ~v and applying the positivity
property of the inner product, we get ~v = ~0.
A linear transformation L : V → W between vector spaces has the dual trans-
formation L∗ : W ∗ → V ∗ . If V and W are finite dimensional inner product spaces,
then we may use Proposition 4.3.1 to identify L∗ with a linear transformation from
W to V , which we still denote by L∗ .
L∗ L∗
W ∗ −−−→ V ∗ ~ W −−−→ h·, L∗ (w)i
h·, wi ~ V
x x x x
∼
h , iW = ∼  
=h , iV  
L∗ L∗
W −−−→ V w
~ −−−→ L∗ (w)
~
~ ∈ W , then the definition means L∗ (h·, wi
If we start with w ~ W ) = h·, L∗ (w)i
~ V.
Applying the equality to · = ~v ∈ V , we get the following definition.

Definition 4.3.2. Suppose L : V → W is a linear transformation between two inner


product spaces. Then the adjoint of L is the linear transformation L∗ : W → V
satisfying
~ = h~v , L∗ (w)i
hL(~v ), wi ~ for all ~v ∈ V, w
~ ∈ W.

Since the proof of Proposition 4.3.1 makes use of finite dimension, we know the
adjoint exists for finite dimensional inner product spaces.

Example 4.3.1. In Example 4.1.1, the dot product on the Euclidean space is $\vec x\cdot\vec y = \vec x^T\vec y$. Then for $L(\vec v) = A\vec v$, we have
$$L(\vec v)\cdot\vec w = (A\vec v)^T\vec w = \vec v^TA^T\vec w = \vec v\cdot A^T\vec w.$$
Therefore $L^*(\vec w) = A^T\vec w$.
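The defining identity of the adjoint is easy to test numerically for matrices. The sketch below is an illustration only; the random matrix and vectors are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))      # L(v) = A v maps R^2 to R^3
v = rng.standard_normal(2)
w = rng.standard_normal(3)

# <L(v), w> = <v, L*(w)> with L*(w) = A^T w, for the dot product.
assert np.isclose((A @ v) @ w, v @ (A.T @ w))
print("adjoint identity verified")
```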

Example 4.3.2. Consider the vector space of polynomials with inner product in
Example 4.1.3. The adjoint D∗ : Pn−1 → Pn of the derivative linear transformation
D(f ) = f 0 : Pn → Pn−1 is characterised by
Z 1 Z 1 Z 1
p ∗ q p q p
t D (t )dt = D(t )t dt = ptp−1 tq dt = , 0 ≤ p ≤ n, 0 ≤ q ≤ n − 1.
0 0 0 p+q
For fixed q, let D∗ (tq ) = x0 + x1 t + · · · + xn tn . Then we get a system of linear
equations
1 1 1 p
x0 + x1 + · · · + xn = , 0 ≤ p ≤ n.
p+1 p+2 p+n+1 p+q
The solution is quite non-trivial. For n = 2, we have
2
D∗ (tq ) = (11 − 2q + 6(17 + q)t − 90t2 ), q = 0, 1.
(1 + q)(2 + q)

Example 4.3.3. Let V be the vector space of all smooth periodic functions on R of
period 2π, with inner product
Z 2π
1
hf, gi = f (t)g(t)dt.
2π 0

The derivative operator D(f ) = f 0 : V → V takes periodic functions to periodic


functions. By the integration by parts and period 2π, we have
Z 2π Z 2π
1 0 1
hD(f ), gi = f (t)g(t)dt = f (2π)g(2π) − f (0)g(0) − f (t)g 0 (t)dt
2π 0 2π 0
Z 2π
1
=− f (t)g 0 (t)dt = −hf, D(g)i.
2π 0

This implies D∗ = −D.


The same argument can be applied to the vector space of all smooth functions
f on R satisfying limt→∞ f (n) (t) = 0 for all n ≥ 0, with inner product
Z ∞
hf, gi = f (t)g(t)dt.
−∞

We still get D∗ = −D.

Exercise 4.58. Prove that a linear operator L : V → V satisfies

hL(~u), ~v i + hL(~v ), ~ui = hL(~u + ~v ), ~u + ~v i − hL(~u), ~ui − hL(~v ), ~v i.

Then prove that hL(~v ), ~v i = 0 for all ~v if and only if L + L∗ = 0.

Exercise 4.59. Calculate the adjoint of the derivative linear transformation D(f ) = f 0 : Pn →
Pn−1 with respect to the inner products in Exercises 4.7, 4.20, 4.21.

Exercise 4.60. Prove that (L1 ⊥ L2 ⊥ · · · ⊥ Lk )∗ = L∗1 ⊥ L∗2 ⊥ · · · ⊥ L∗k . What if the
subspaces are not orthogonal?

Since the adjoint is only the “translation” of the dual via the inner product,
properties of the dual can be translated into properties of the adjoint

I ∗ = I, (L + K)∗ = L∗ + K ∗ , (aL)∗ = aL∗ , (L ◦ K)∗ = K ∗ ◦ L∗ , (L∗ )∗ = L.

All properties can be directly verified by definition. See Exercise 4.61.


Example 4.3.1 and (ColA)⊥ = NulAT suggest

(RanL)⊥ = KerL∗ .

The following is a direct argument for the equality

~x ∈ (RanL)⊥ ⇐⇒ hw,
~ ~xi = 0 for all w ~ ∈ RanL ⊂ W
⇐⇒ hL(~v ), ~xi = 0 for all ~v ∈ V
⇐⇒ h~v , L∗ (~x)i = 0 for all ~v ∈ V
⇐⇒ L∗ (~x) = ~0 for all ~v ∈ V
⇐⇒ ~x ∈ KerL∗ .

Substituting L∗ in place of L, we get (RanL∗ )⊥ = KerL. Then we may use (H ⊥ )⊥ =


H to get

(RanL)⊥ = NulL∗ , (RanL∗ )⊥ = NulL, (KerL)⊥ = RanL∗ , (KerL∗ )⊥ = RanL.

Exercise 4.61. Use the definition of adjoint to directly prove its properties.

Exercise 4.62. Prove that KerL = KerL∗ L and rankL = rankL∗ L.

Exercise 4.63. Prove that the following are equivalent.


1. L is one-to-one.

2. L∗ is onto.

3. L∗ L is invertible.

Exercise 4.64. Prove that an operator P on an inner product space is the orthogonal
projection to a subspace if and only if P 2 = P = P ∗ .

Exercise 4.65. Prove that there is a one-to-one correspondence between the decomposition
of an inner product space into an orthogonal sum of subspaces and collection of linear
operators Pi satisfying

P1 + P2 + · · · + Pk = I, Pi Pj = O for i 6= j, Pi∗ = Pi .

This is the orthogonal sum analogue of Proposition 3.3.5.

4.3.2 Adjoint Basis


Let α = {~v1 , ~v2 , . . . , ~vn } be a basis of V . Then we have the dual basis α∗ of the dual
space V ∗ . Using the isomorphism in Proposition 4.3.1, we may identify α∗ with a
basis β = {w~ 1, w ~ 2, . . . , w~ n } of V . This means

h·, βi = {h·, w
~ 1 i, h·, w ~ n i} = α∗ = {~v1∗ , ~v2∗ , . . . , ~vn∗ }.
~ 2 i, . . . , h·, w

By the definition of dual basis in V ∗ and applying the above to · = ~vi , this means

~ j i = ~vj∗ (~vi ) = δij .


h~vi , w

Therefore the dual basis is the specialisation of the dual basis in Section 2.4.4 when
the inner product is used as the dual pairing.
We call β the adjoint basis of α with respect to the inner product, and even
denote β = α∗ , w
~ j = ~vj∗ . However, we need to be clear that the same notation is
used for two meanings:

1. The dual basis of α is a basis of V ∗ . The concept is independent of the inner


product.

2. The adjoint basis of α (with respect to an inner product) is a basis of V . The


concept depends on the inner product.

The two meanings are related by the isomorphism in Proposition 4.3.1.


In the second sense, a natural question is whether the adjoint β is the same as
the original α. In other words, whether α is a self-adjoint basis with respect to the
inner product. Using the characterisation h~vi , w
~ j i = δij , we get the following.

Proposition 4.3.3. A basis α of an inner product space is self-adjoint, in the sense


that α∗ = h·, αi, if and only if α is an orthonormal basis.

Let α and β be bases of V and W . Then Proposition 2.4.2 gives the equality

[L ]α∗ β ∗ = [L]Tβα for the dual linear transformation and dual basis. Since the dual
bases of V ∗ , W ∗ are translated into the adjoint basis of V, W under the isomorphism
in Proposition 4.3.1, and both are still denoted α∗ , β ∗ , the equality is also true for
the adjoint linear transformation L∗ and adjoint bases α∗ , β ∗ . In particular, by
Proposition 4.3.3, we have the following.

Proposition 4.3.4. Suppose L : V → W is a linear transformation of inner product


spaces, and L∗ : W → V is the adjoint. If α and β are orthonormal bases, then
[L∗ ]αβ = [L]Tβα .

Example 4.3.4. For the basis of R3 in Examples 1.3.17 and 2.1.13

~v1 = (1, −1, 0), ~v2 = (1, 0, −1), ~v3 = (1, 1, 1),

we would like to find the adjoint basis w ~ 1, w ~ 3 of R3 with respect to the dot
~ 2, w
product. Let X = (~v1 ~v2 ~v3 ) and Y = (w
~1 w ~2 w ~ 3 ), then the condition ~vi · w
~ j = δij
T
means X Y = I. By Example 2.2.18, we have
 
1 1 1
1
Y = (X −1 )T = −2 1 1 .
3
1 −2 1

This gives
$$\vec w_1 = \tfrac{1}{3}(1,-2,1), \qquad \vec w_2 = \tfrac{1}{3}(1,1,-2), \qquad \vec w_3 = \tfrac{1}{3}(1,1,1).$$
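The computation $Y = (X^{-1})^T$ is immediate to reproduce numerically. A small sketch, illustrating this example only:

```python
import numpy as np

X = np.array([[1., 1., 1.],
              [-1., 0., 1.],
              [0., -1., 1.]])        # columns v_1, v_2, v_3
Y = np.linalg.inv(X).T               # columns are the adjoint basis w_1, w_2, w_3

print(np.round(3 * Y))               # columns 3*w_j: (1,-2,1), (1,1,-2), (1,1,1)
assert np.allclose(X.T @ Y, np.eye(3))   # <v_i, w_j> = delta_ij
```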

Example 4.3.5. For the basis α = {1, t, t2 } of P2 , we would like to find the adjoint
basis α∗ = {p0 (t), p1 (t), p2 (t)} with respect to the inner product in Example 4.1.3.
Note that for p(t) = a0 + a1 t + a2 t2 , we have
 R1  
1 1
  1 1
0
p(t)dt a 0 + 2
a 1 + 3
a 2 1 2 3
R1
hα, p(t)i =  0 tp(t)dt  = 2 a0 + 3 a1 + 4 a2 = 12 13 14  [p(t)]{1,t,t2 } .
 1 1 1  
1
a + 14 a1 + 15 a2 1 1 1
R1 2
0
t p(t)dt 3 0 3 4 5

Then the adjoint basis means hα, p0 (t)i = ~e1 , hα, p1 (t)i = ~e2 , hα, p2 (t)i = ~e3 . This
is the same as  1 1
1 2 3
 1 1 1  ([p0 (t)] [p1 (t)] [p2 (t)]) = I.
2 3 4
1 1 1
3 4 5
Therefore
1 1 −1
   
1 2 3
9 −36 30
([p0 (t)] [p1 (t)] [p2 (t)]) =  12 1
3
1
4
= −36 192 −180 ,
1 1 1
3 4 5
30 −180 180
and we get the adjoint basis

p0 (t) = 9 − 36t + 30t2 , p1 (t) = −36 + 192t − 180t2 , p2 (t) = 30 − 180t + 180t2 .

Exercise 4.66. Find the adjoint basis of (a, b), (c, d) with respect to the dot product on R2 .

Exercise 4.67. Find the adjoint basis of (1, 2, 3), (4, 5, 6), (7, 8, 10) with respect to the dot
product on R3 .

Exercise 4.68. If the inner product on R3 is given by h~x, ~y i = x1 y1 + 2x2 y2 + 3x3 y3 , what
would be the adjoint basis in Example 4.3.4?

Exercise 4.69. If the inner product on P2 is given by Exercise 4.21, what would be the
adjoint basis in Example 4.3.5?

Exercise 4.70. Suppose α = {~v1 , ~v2 , . . . , ~vn } is a basis of Rn and A is positive definite.
What is the adjoint basis of α with respect to the inner product h~x, ~y i = ~xT A~y ?

Exercise 4.71. Prove that if β is the adjoint basis of α with respect to an inner product,
then α is the adjoint basis of β with respect to the same inner product.

Exercise 4.72. The adjoint basis can be introduced in another way. For a basis α of V , we
have a basis h·, αi of V ∗ . Then the basis h·, αi corresponds to a basis β of V under the
isomorphism in Proposition 4.3.1. Prove that β is also the adjoint basis of α.

Exercise 4.73. Prove that rankL = rankL∗ .



4.3.3 Isometry
A linear transformation preserves addition and scalar multiplication. Naturally, we
wish a linear transformation between inner product spaces to additionally preserve
the inner product.

Definition 4.3.5. A linear transformation L : V → W between inner product spaces


is an isometry if it preserves the inner product
hL(~u), L(~v )iW = h~u, ~v iV for all ~u, ~v ∈ V.
If the isometry is also invertible, then it is an isometric isomorphism. In case V = W ,
the isometric isomorphism is also called an orthogonal operator.

An isometry preserves all the concepts defined by the inner product, such as the
length, the angle, the orthogonality, the area, and the distance
kL(~u) − L(~v )k = kL(~u − ~v )k = k~u − ~v k.
Conversely, a linear transformation preserving the length (or distance) must be an
isometry. See Exercise 4.76.
A linear transformation L between finite dimensional inner product spaces is an
isometry if
h~u, ~v i = hL(~u), L(~v )i = h~u, L∗ L(~v )i.
By Exercise 4.4, this means that L∗ L = I is the identity.
An isometry is always one-to-one by the following argument
~u 6= ~v =⇒ k~u − ~v k =
6 0 =⇒ kL(~u) − L(~v )k =
6 0 =⇒ L(~u) 6= L(~v ).
By Theorem 2.2.6, for finite dimensional spaces, an isometry is an isomorphism if
and only if dim V = dim W . In this case, by L∗ L = I, we get L−1 = L∗ .
The following is the isometric version of Proposition 2.1.3.

Proposition 4.3.6. Suppose ~v1 , ~v2 , . . . , ~vn span a vector space V , and w
~ 1, w
~ 2, . . . , w
~n
are vectors in W . Then L(~vi ) = w ~ i (see (2.1.1)) gives a well defined isometry
L : V → W if and only if
h~vi , ~vj i = hw ~ji
~ i, w for all i, j.

Proof. By the definition, the equality h~vi , ~vj i = hw ~ j i is satisfied if L is an isom-


~ i, w
etry. Conversely, suppose the equality is satisfied, then using the bilinearity of the
inner product, we have
hx1~v1 + x2~v2 + · · · + xn~vn , y1~v1 + y2~v2 + · · · + yn~vn i
X X
= xi yj h~vi , ~vj i = xi yj hw ~ji
~ i, w
ij ij

=hx1 w ~ 2 + · · · + xn w
~ 1 + x2 w ~ n , y1 w ~ 2 + · · · + yn w
~ 1 + y2 w ~ n i.

By taking xi = yi , this implies

kx1~v1 + x2~v2 + · · · + xn~vn k = kx1 w ~ 2 + · · · + xn w


~ 1 + x2 w ~ n k.

By ~x = ~0 ⇐⇒ k~xk = 0, the condition in Proposition 2.1.3 for L being well defined


is satisfied. Moreover, the equality above means h~x, ~y i = hL(~x), L(~y )i, i.e., L is an
isometry.

Applying Proposition 4.3.6 to the α-coordinate of a finite dimensional vector


space, we find that [·]α : V → Rn is an isometric isomorphism between V and the
Euclidean space (with dot product) if and only if α is an orthonormal basis. Since
Proposition 4.2.9 says that any finite dimensional inner product space has an or-
thonormal basis, we get the following.

Theorem 4.3.7. Any finite dimensional inner product space is isometrically iso-
morphic to a Euclidean space with the dot product.

Example 4.3.6. The inner product in Exercise 4.18 is


√ √ √ √
h~x, ~y i = x1 y1 + 2x2 y2 + · · · + nxn yn = x1 y1 + ( 2x2 )( 2y2 ) + · · · + ( nxn )( nyn ).
√ √
This shows that L(x1 , x2 , . . . , xn ) = (x1 , 2x2 , . . . , nxn ) is an isometric isomor-
phism from this inner product to the dot product.

Example 4.3.7. In Example 4.2.3, we get an orthonormal basis $\{\sqrt 3\,t,\ 3t-2\}$ of P1 with respect to the inner product in Example 4.1.3. Then
$$L(x_1, x_2) = x_1(\sqrt 3\,t) + x_2(3t-2) = -2x_2 + (\sqrt 3\,x_1 + 3x_2)t \colon \mathbb{R}^2 \to P_1$$

is an isometric isomorphism.

Example 4.3.8. In Exercise 4.76, we will show that a linear transformation is an


isometry if and only if it preserves length. For example, the length in P1 with respect
to the inner product in Example 4.1.3 is given by
$$\|a_0 + a_1t\|^2 = \int_0^1 (a_0+a_1t)^2\,dt = a_0^2 + a_0a_1 + \frac{1}{3}a_1^2 = \left(a_0 + \frac{1}{2}a_1\right)^2 + \left(\frac{1}{2\sqrt 3}a_1\right)^2.$$
Since the right side is the square of the Euclidean length of $\left(a_0 + \frac{1}{2}a_1,\ \frac{1}{2\sqrt 3}a_1\right) \in \mathbb{R}^2$, we find that
$$L(a_0 + a_1t) = \left(a_0 + \tfrac{1}{2}a_1,\ \tfrac{1}{2\sqrt 3}a_1\right) \colon P_1 \to \mathbb{R}^2$$
is an isometric isomorphism between P1 with the inner product in Example 4.1.3
and R2 with the dot product.

Example 4.3.9. The map t → 2t − 1 : [0, 1] → [−1, 1] is a linear change of variable


between two intervals. We have
Z 1 Z 1 Z 1
f (t)g(t)dt = f (2t − 1)g(2t − 1)d(2t − 1) = 2 f (2t − 1)g(2t − 1)dt.
−1 0 0

This implies that $L(f(t)) = \sqrt 2\,f(2t-1) \colon P_n \to P_n$ is an isometric isomorphism
between the inner product in Exercise 4.21 and the inner product in Example 4.1.3.
We may apply the isometric isomorphism to the orthonormal basis in Example 4.2.11
to get an orthonormal basis of P3 with respect to the inner product in Example 4.1.3
  √
L √12 = 2 √12 = 1,
√  √ √ √
L √32 t = 2 √32 (2t − 1) = 3(2t − 1),
 √  √ √
L 2√ 2
5
(3t − 1) = 2 2√52 (3(2t − 1)2 − 1) = √85 (6t2 − 6t + 1),
2

 √  √ √
L 2√72 (5t3 − 3t) = 2 2√72 (5(2t − 1)3 − 3(2t − 1)) = √87 (2t − 1)(10t2 − 10t + 1).

Exercise 4.74. Prove that a composition of isometries is an isometry.

Exercise 4.75. Prove that the inverse of an isometric isomorphism is also an isometric
isomorphism.

Exercise 4.76. Use the polarisation identity in Exercise 4.12 to prove that a linear trans-
formation L is an isometry if and only if it preserves the length: kL(~v )k = k~v k.

Exercise 4.77. Use Exercise 4.38 to prove that an orthogonal operator on R2 is either a
rotation or a reflection. What about an orthogonal operator on R1 ?

Exercise 4.78. In Example 4.1.2, we know that h(x1 , x2 ), (y1 , y2 )i = x1 y1 + 2x1 y2 + 2x2 y1 +
ax2 y2 is an inner product on R2 for a > 4. Find an isometric isomorphism between R2
with this inner product and R2 .

Exercise 4.79. Find an isometric isomorphism between R3 and P2 with the inner product
in Example 4.1.3, by two ways.
1. Similar to Example 4.3.8, calculate the length of a0 + a1 t + a2 t2 , and then complete
the square.

2. Similar to Example 4.3.9, find an orthogonal set of the form 1, t, a + t2 with respect
to the inner product in Exercise 4.21, divide the length, and then translate to the
inner product in Example 4.1.3.

Exercise 4.80. Prove that a linear transformation L : V → W is an isometry if and only if


L takes an orthonormal bases of V to an orthonormal set of W .

Exercise 4.81. For an orthonormal basis {~v1 , ~v2 , . . . , ~vn }, prove Parseval's identity

h~x, ~y i = h~x, ~v1 ih~v1 , ~y i + h~x, ~v2 ih~v2 , ~y i + · · · + h~x, ~vn ih~vn , ~y i.

Exercise 4.82. Suppose L : V → W is an isometric isomorphism, and β is the adjoint basis


of a basis α of V . Explain that L(β) is the adjoint basis of L(α). Then use the isometric
isomorphism in Example 4.3.9 to relate Example 4.3.5 and Exercise 4.69.

Exercise 4.83. A linear transformation is conformal if it preserves the angle. Prove that a
linear transformation is conformal if and only if it is a scalar multiple of an isometry.

Let α and β be orthonormal bases of V and W . Then the isometric isomorphisms


[·]α : V ∼
= Rn and [·]β : W ∼
= Rm translates an isometry L : V → W to an isometry
Lβα : Rn → Rm . The columns of the matrix Q = [L]βα = [Lβα ] is the isometric image
of the standard basis of Rn , and is therefore an orthonormal set in Rm . The property
means QT Q = I, which is consistent with L∗ L = I interpretation of isometry. In
case m = n, L becomes an isomorphism, and Q is described by the following concept.

Definition 4.3.8. An orthogonal matrix is a square matrix Q satisfying QT Q = I.

Orthogonal matrices are exactly the matrices of isometric isomorphisms with


respect to orthonormal bases. We have Q−1 = QT , corresponding to L−1 = L∗ .
Note that QQT = I means that the rows of Q also form an orthonormal basis.

Example 4.3.10. The orthogonal basis of R3 in Example 4.2.2 gives an orthogonal


matrix  
2 2 −1
1
Q =  2 −1 2  , QT Q = I.
3
−1 2 2

The inverse of the matrix is simply the transpose


 
2 2 −1
1
Q−1 = QT =  2 −1 2  .
3
−1 2 2

Exercise 4.84. Find all the 2 × 2 orthogonal matrices.

Exercise 4.85. Find an orthogonal matrix such that the first two columns are parallel to
(1, −1, 0), (1, a, 1). Then find the inverse of the orthogonal matrix.

Exercise 4.86. Find an orthogonal matrix such that the first three columns are parallel to
(1, 0, 1, 0), (0, 1, 0, 1), (1, −1, 1, −1). Then find the inverse of the orthogonal matrix.

Exercise 4.87. Prove that the transpose, inverse and multiplication of orthogonal matrices
are orthogonal matrices.

Exercise 4.88. Suppose α is an orthonormal basis. Prove that another basis β is orthonor-
mal if and only if [I]βα is an orthogonal matrix.

4.3.4 QR-Decomposition
In Proposition 4.2.9, we apply the Gram-Schmidt process to a basis α to get an
orthogonal basis β. The span property means that the two bases are related in
“triangular” way, with aii 6= 0

~v1 = a11 w
~ 1,
~v2 = a12 w
~ 1 + a22 w
~ 2,
..
.
~vn = a1n w ~ 2 + · · · + ann w
~ 1 + a2n w ~ n.

The vectors w
~ i can also be expressed in ~vi in similar triangular way. The relation
can be rephrased in the matrix form
 
a11 a12 · · · a1n
 0 a22 · · · a2n 
A = QR, A = (~v1 ~v2 · · · ~vn ), Q = (w ~2 · · · w
~1 w ~ n ), R =  .. ..  .
 
..
 . . . 
0 0 ··· ann

If we make β to become orthonormal (by dividing the lengths), then the columns
of Q are orthonormal. The expression A = QR, with QT Q = I and R upper
triangular, is called the QR-decomposition of A. Any m × n matrix A of rank n has
QR-decomposition.
For the calculation, we first use the Gram-Schmidt process to get Q. Then
$Q^TA = Q^TQR = IR = R.$

Example 4.3.11. After dividing the vector lengths, the QR-decomposition for the Gram-Schmidt process in Example 4.2.8 has
$$A = \begin{pmatrix} 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad Q = \begin{pmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 6} \\ -\frac{1}{\sqrt 2} & \frac{1}{\sqrt 6} \\ 0 & -\frac{2}{\sqrt 6} \end{pmatrix}.$$
Then
$$R = Q^TA = \begin{pmatrix} \frac{1}{\sqrt 2} & -\frac{1}{\sqrt 2} & 0 \\ \frac{1}{\sqrt 6} & \frac{1}{\sqrt 6} & -\frac{2}{\sqrt 6} \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} \sqrt 2 & \frac{1}{\sqrt 2} \\ 0 & \frac{\sqrt 3}{\sqrt 2} \end{pmatrix},$$
and we get the QR-decomposition
$$\begin{pmatrix} 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 6} \\ -\frac{1}{\sqrt 2} & \frac{1}{\sqrt 6} \\ 0 & -\frac{2}{\sqrt 6} \end{pmatrix}\begin{pmatrix} \sqrt 2 & \frac{1}{\sqrt 2} \\ 0 & \frac{\sqrt 3}{\sqrt 2} \end{pmatrix}.$$

For the Gram-Schmidt process in Example 4.2.9, we have
$$A = \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{pmatrix}, \qquad Q = \begin{pmatrix} \frac{1}{\sqrt{14}} & \frac{4}{\sqrt{21}} & \frac{1}{\sqrt 6} \\ \frac{2}{\sqrt{14}} & \frac{1}{\sqrt{21}} & -\frac{2}{\sqrt 6} \\ \frac{3}{\sqrt{14}} & -\frac{2}{\sqrt{21}} & \frac{1}{\sqrt 6} \end{pmatrix}.$$
Therefore
$$R = Q^TA = \begin{pmatrix} \frac{1}{\sqrt{14}} & \frac{2}{\sqrt{14}} & \frac{3}{\sqrt{14}} \\ \frac{4}{\sqrt{21}} & \frac{1}{\sqrt{21}} & -\frac{2}{\sqrt{21}} \\ \frac{1}{\sqrt 6} & -\frac{2}{\sqrt 6} & \frac{1}{\sqrt 6} \end{pmatrix}\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{pmatrix} = \begin{pmatrix} \sqrt{14} & \frac{32}{\sqrt{14}} & \frac{53}{\sqrt{14}} \\ 0 & \frac{9}{\sqrt{21}} & \frac{16}{\sqrt{21}} \\ 0 & 0 & \frac{1}{\sqrt 6} \end{pmatrix}.$$
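The decomposition can be checked against numpy.linalg.qr. The sketch below (an illustration, not part of the text) builds Q by Gram-Schmidt on the columns and recovers R as $Q^TA$; note that library routines may return Q with some columns negated, with the corresponding rows of R negated:

```python
import numpy as np

A = np.array([[1., 4., 7.],
              [2., 5., 8.],
              [3., 6., 10.]])

# Gram-Schmidt on the columns, then normalize, as in the text.
Q = np.zeros_like(A)
for j in range(A.shape[1]):
    w = A[:, j].copy()
    for i in range(j):
        w -= (A[:, j] @ Q[:, i]) * Q[:, i]
    Q[:, j] = w / np.linalg.norm(w)
R = Q.T @ A          # upper triangular up to rounding, since Q^T Q = I

assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))
print(np.round(R, 4))
```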

Exercise 4.89. In Example 2.1.13, the 


basis in Example
 4.2.8 is extended to a basis of
1 1 1
3
R , and we get the invertible matrix −1
 0 1 in Example 1.3.17. Find the QR-
0 −1 1
decomposition of this matrix.
   
1 4 1 4 7
Exercise 4.90. Use Example 4.2.9 to find the QR-decomposition for 2 5 and 2 5 8.
3 6 3 6 9
Then make a general observation.

The QR-decomposition is related to the least square solution. Here is the prob-
lem: For a system of linear equations A~x = ~b to have solution, we need ~b ∈ ColA. If
~b 6∈ ColA, then we may settle with the best approximate solution, in the sense that
the Euclidean distance kA~x − ~bk is the smallest possible.
Let ~h = projColA~b. Then A~x, ~h ∈ ColA, so that A~x − ~h ∈ ColA and ~b − ~h ∈
(ColA)⊥ are orthogonal. Then we have

kA~x − ~bk2 = kA~x − ~hk2 + k~b − ~hk2 ≥ k~b − ~hk2 .

This shows that the best approximate solution is the solution of A~x = ~h. This is
characterised by A~x − ~b ⊥ ColA. In other words, for any ~y , we have

0 = (A~x − ~b) · A~y = AT (A~x − ~b) · ~y .

Since this should hold for all ~y , we get AT A~x = AT~b.




Figure 4.3.1: Least square method.

We conclude that the least square solution of A~x = ~b is the same as the solution
of AT A~x = AT~b. In general, the solution is not unique. In fact, by Exercise 4.62, we
have NulAT A = NulA. In case the solution of A~x = ~0 is unique, i.e., the columns
of A are linearly independent, then the square matrix AT A is invertible, and we get
the unique least square solution

~x = (AT A)−1 AT~b.

Note that the linear independence of the columns of A implies A = QR. Then
AT A = RT QT QR = RT R, and the solution becomes

~x = (RT R)−1 RT QT~b = R−1 QT~b.

Since R is upper triangular, it is very easy to calculate R−1 . Therefore R−1 QT~b is
easier than (AT A)−1 AT~b.
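Purely as an illustration (not part of the text), the following NumPy sketch compares the QR formula for the least square solution with the normal equation A^T A x = A^T b; the function name least_square_qr and the sample data are our own choices.

import numpy as np

def least_square_qr(A, b):
    # Least square solution of A x = b via A = QR:
    # solve the triangular system R x = Q^T b, i.e. x = R^{-1} Q^T b.
    Q, R = np.linalg.qr(A)              # NumPy's built-in QR-decomposition
    return np.linalg.solve(R, Q.T @ b)

A = np.array([[1., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 2., 0.])              # b is not in Col A
x = least_square_qr(A, b)
# same answer as the normal equation A^T A x = A^T b
print(np.allclose(x, np.linalg.solve(A.T @ A, A.T @ b)))   # True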
Chapter 5

Determinant

Determinant first appeared as the numerical criterion that “determines” whether a


system of linear equations has a unique solution. More properties of the determi-
nant were discovered later, especially its relation to geometry. We will define the
determinant by axioms, derive the calculation technique from the axioms, and then
discuss the geometric meaning.

5.1 Algebra
Definition 5.1.1. The determinant of n × n matrices A is the function det A satis-
fying the following properties.
1. Multilinear: The function is linear in each column vector
det(· · · a~u + b~v · · · ) = a det(· · · ~u · · · ) + b det(· · · ~v · · · ).

2. Alternating: Exchanging two columns introduces a negative sign


det(· · · ~v · · · ~u · · · ) = − det(· · · ~u · · · ~v · · · ).

3. Normal: The determinant of the identity matrix is 1


det I = det(~e1 ~e2 · · · ~en ) = 1.

For a multilinear function D, taking ~u = ~v in the alternating property gives


D(· · · ~u · · · ~u · · · ) = 0.
Conversely, if D satisfies the equality above, then
0 = D(· · · ~u + ~v · · · ~u + ~v · · · )
= D(· · · ~u · · · ~u · · · ) + D(· · · ~u · · · ~v · · · )
+ D(· · · ~v · · · ~u · · · ) + D(· · · ~v · · · ~v · · · )
= D(· · · ~u · · · ~v · · · ) + D(· · · ~v · · · ~u · · · ).
We recover the alternating property.


5.1.1 Multilinear and Alternating Function


A multilinear and alternating function D of 2 × 2 matrices is given by

D\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
= D(a_{11}\vec{e}_1 + a_{21}\vec{e}_2\ \ a_{12}\vec{e}_1 + a_{22}\vec{e}_2)
= D(\vec{e}_1\ \vec{e}_1)a_{11}a_{12} + D(\vec{e}_1\ \vec{e}_2)a_{11}a_{22} + D(\vec{e}_2\ \vec{e}_1)a_{21}a_{12} + D(\vec{e}_2\ \vec{e}_2)a_{21}a_{22}
= 0\,a_{11}a_{12} + D(\vec{e}_1\ \vec{e}_2)a_{11}a_{22} - D(\vec{e}_1\ \vec{e}_2)a_{21}a_{12} + 0\,a_{21}a_{22}
= D(\vec{e}_1\ \vec{e}_2)(a_{11}a_{22} - a_{21}a_{12}).
A multilinear and alternating function D of 3 × 3 matrices is given by
D(a11~e1 + a21~e2 + a31~e3 a12~e1 + a22~e2 + a32~e3 a13~e1 + a23~e2 + a33~e3 )
= D(~e1 ~e2 ~e3 )a11 a22 a33 + D(~e2 ~e3 ~e1 )a21 a32 a13 + D(~e3 ~e1 ~e2 )a31 a12 a23
+ D(~e1 ~e3 ~e2 )a11 a32 a23 + D(~e3 ~e2 ~e1 )a31 a22 a13 + D(~e2 ~e1 ~e3 )a21 a12 a33
= D(~e1 ~e2 ~e3 )(a11 a22 a33 + a21 a32 a13 + a31 a12 a23 − a11 a32 a23 − a31 a22 a13 − a21 a12 a33 ).
Here in the first equality, we use the alternating property that D(~ei ~ej ~ek ) = 0
whenever two of i, j, k are equal. In the second equality, we exchange (i.e., switching
two indices) distinct i, j, k (which must be a rearrangement of 1, 2, 3) to get the
usual order 1, 2, 3. For example, we have
D(~e3 ~e1 ~e2 ) = −D(~e1 ~e3 ~e2 ) = D(~e1 ~e2 ~e3 ).
In general, a multilinear and alternating function D of n × n matrices A = (a_{ij})
is

D(A) = D(\vec{e}_1\ \vec{e}_2\ \cdots\ \vec{e}_n) \sum \pm a_{i_1 1} a_{i_2 2} \cdots a_{i_n n},
where the sum runs over all the rearrangements (or permutations) (i1 , i2 , . . . , in ) of
(1, 2, . . . , n). Moreover, the ± sign, usually denoted by sign(i1 , i2 , . . . , in ), is deter-
mined in the following way.
• If it takes even number of exchanges to transform (i1 , i2 , . . . , in ) to (1, 2, . . . , n),
then sign(i1 , i2 , . . . , in ) = 1.
• If it takes odd number of exchanges to transform (i1 , i2 , . . . , in ) to (1, 2, . . . , n),
then sign(i1 , i2 , . . . , in ) = −1.
The argument above can be carried out on any vector space, by replacing the
standard basis of Rn by an (ordered) basis of V .
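As a sketch of how the explicit sum can be evaluated for small matrices (our own illustration, assuming NumPy; the names sign and det_by_permutations are hypothetical), the code below sums over all permutations, with the sign determined by counting inversions, whose parity equals the parity of the number of exchanges.

import itertools
import numpy as np

def sign(perm):
    # sign of a permutation: parity of the number of inversions
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def det_by_permutations(A):
    # the sum over all rearrangements (i1, ..., in) of (1, ..., n)
    n = len(A)
    return sum(sign(p) * np.prod([A[p[j]][j] for j in range(n)])
               for p in itertools.permutations(range(n)))

A = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 0]])
print(det_by_permutations(A), round(np.linalg.det(A)))   # 27 27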

Theorem 5.1.2. Multilinear and alternating functions of n vectors in a vector space


of dimension n are unique up to multiplying constants. Specifically, in terms of the
coordinates with respect to a basis α, such a function D is given by

D(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n) = c \sum_{(i_1, i_2, \ldots, i_n)} \mathrm{sign}(i_1, i_2, \ldots, i_n)\, a_{i_1 1} a_{i_2 2} \cdots a_{i_n n},

where
[~vj ]α = (a1j , a2j , . . . , anj ).

In case the constant c = D(α) = 1, the formula in the theorem is the determinant.

Exercise 5.1. What is the determinant of a 1 × 1 matrix?

Exercise 5.2. How many terms are in the explicit formula for the determinant of n × n
matrices?

Exercise 5.3. Show that any alternating and bilinear function on 2 × 3 matrices is zero.
Can you generalise this observation?

Exercise 5.4. Find explicit formula for an alternating and bilinear function on 3×2 matrices.

Theorem 5.1.2 is a very useful tool for deriving properties of the determinant.
The following is a typical example.

Proposition 5.1.3. det AB = det A det B.

Proof. For fixed A, we consider the function D(B) = det AB. Since AB is obtained
by multiplying A to the columns of B, the function D(B) is multilinear and alternat-
ing in the columns of B. By Theorem 5.1.2, we get D(B) = c det B. To determine
the constant c, we let B be the identity matrix and get det A = D(I) = c det I = c.
Therefore det AB = D(B) = c det B = det A det B.

Exercise 5.5. Prove that \det A^{-1} = \frac{1}{\det A}. More generally, we have \det A^n = (\det A)^n for any integer n.

Exercise 5.6. Use the explicit formula for the determinant to verify det AB = det A det B
for 2 × 2 matrices.

5.1.2 Column Operation


The explicit formula in Theorem 5.1.2 is too complicated to be a practical way of
calculating the determinant. Since matrices can be simplified by row and column
operations, it is useful to know how the determinant is changed by the operations.
The alternating property means that the column operation Ci ↔ Cj introduces
a negative sign

det(· · · ~v · · · ~u · · · ) = − det(· · · ~u · · · ~v · · · ).

The multilinear property implies that the column operation c Ci multiplies the de-
terminant by the scalar
det(· · · c~u · · · ) = c det(· · · ~u · · · ).
Combining the multilinear and alternating properties, the column operation Ci +c Cj
preserves the determinant
det(· · · ~u + c~v · · · ~v · · · ) = det(· · · ~u · · · ~v · · · ) + c det(· · · ~v · · · ~v · · · )
= det(· · · ~u · · · ~v · · · ) + c · 0
= det(· · · ~u · · · ~v · · · ).

Example 5.1.1. By column operations, we have

\det\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & a \end{pmatrix}
= \det\begin{pmatrix} 1 & 0 & 0 \\ 2 & -3 & -6 \\ 3 & -6 & a - 21 \end{pmatrix}   (C_2 - 4C_1,\ C_3 - 7C_1)
= -3\det\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 2 & a - 9 \end{pmatrix}   (C_3 - 2C_2,\ -\tfrac{1}{3}C_2)
= -3(a - 9)\det\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 2 & 1 \end{pmatrix}   (\tfrac{1}{a-9}C_3)
= -3(a - 9)\det\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}   (C_2 - 2C_3,\ C_1 - 2C_2,\ C_1 - 3C_3)
= -3(a - 9).

The column operations can simplify a matrix to a column echelon form, which
is a lower triangular matrix. By column operations, the determinant of a lower
triangular matrix is the product of diagonal entries
   
\det\begin{pmatrix} a_1 & 0 & \cdots & 0 \\ * & a_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & a_n \end{pmatrix}
= a_1 a_2 \cdots a_n \det\begin{pmatrix} 1 & 0 & \cdots & 0 \\ * & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & 1 \end{pmatrix}
= a_1 a_2 \cdots a_n \det\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}
= a_1 a_2 \cdots a_n.
The argument above assumes all ai ≠ 0. If some ai = 0, then the last column in
the column echelon form is the zero vector ~0. By the linearity of the determinant in
the last column, the determinant is 0. This shows that the equality always holds.
We also note that the argument for lower triangular matrices also applies to upper
triangular matrices
   
\det\begin{pmatrix} a_1 & * & \cdots & * \\ 0 & a_2 & \cdots & * \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}
= a_1 a_2 \cdots a_n
= \det\begin{pmatrix} a_1 & 0 & \cdots & 0 \\ * & a_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & a_n \end{pmatrix}.
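The following Python sketch (our own, assuming NumPy; the function name is hypothetical) carries out this strategy numerically: it reduces the matrix to triangular form by column operations, recording the sign changes and extracted factors described above.

import numpy as np

def det_by_column_reduction(A):
    # Reduce A to lower triangular form by column operations,
    # keeping track of how each operation changes the determinant.
    A = np.array(A, dtype=float)
    n = len(A)
    det = 1.0
    for i in range(n):
        pivot = next((j for j in range(i, n) if A[i, j] != 0), None)
        if pivot is None:
            return 0.0                            # no pivot: determinant is 0
        if pivot != i:
            A[:, [i, pivot]] = A[:, [pivot, i]]   # C_i <-> C_j introduces a sign
            det = -det
        det *= A[i, i]
        A[:, i] /= A[i, i]                        # c C_i multiplies the determinant by c
        for j in range(i + 1, n):
            A[:, j] -= A[i, j] * A[:, i]          # C_j + c C_i preserves the determinant
    return det

print(det_by_column_reduction([[1, 4, 7], [2, 5, 8], [3, 6, 10]]))   # -3.0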

Example 5.1.2. We subtract the first column from the later columns and get

\det\begin{pmatrix}
a_1 & a_1 & a_1 & \cdots & a_1 & a_1 \\
a_2 & b_2 & a_2 & \cdots & a_2 & a_2 \\
a_3 & * & b_3 & \cdots & a_3 & a_3 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n-1} & * & * & \cdots & b_{n-1} & a_{n-1} \\
a_n & * & * & \cdots & * & b_n
\end{pmatrix}
= \det\begin{pmatrix}
a_1 & 0 & 0 & \cdots & 0 & 0 \\
a_2 & b_2 - a_2 & 0 & \cdots & 0 & 0 \\
a_3 & * & b_3 - a_3 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n-1} & * & * & \cdots & b_{n-1} - a_{n-1} & 0 \\
a_n & * & * & \cdots & * & b_n - a_n
\end{pmatrix}
= a_1 (b_2 - a_2)(b_3 - a_3) \cdots (b_n - a_n).

A square matrix is invertible if and only if all the diagonal entries in the column
echelon form are pivots, i.e., all ai ≠ 0. This proves the following.

Theorem 5.1.4. A square matrix is invertible if and only if its determinant is


nonzero.

In terms of a system of linear equations with equal number of equations and


variables, this means that whether A~x = ~b has unique solution is “determined” by
det A ≠ 0.

5.1.3 Row Operation


To find out the effect of row operations on the determinant, we use the idea for the
proof of Proposition 5.1.3. The key is that a row operation preserves the multilinear
and alternating property. In other words, suppose A 7→ Ã is a row operation, then
det à is still a multilinear and alternating function of A.
By Theorem 5.1.2, therefore, we have det à = a det A for a constant a. The
constant a can be calculated from the special case that A = I is the identity ma-
trix. In Section 2.1.5, we see that Ĩ = Tij , Di (c), Eij (c) respectively for the three
row operations. Then we may use column operations to get a = det Ĩ = −1, c, 1
respectively for the three operations. We conclude that the effect of row operations
on the determinant is the same as the effect of column operations. By the same idea
of proving rankAT = rankA, we get the following result.

Proposition 5.1.5. det AT = det A.



A consequence of the proposition is that the determinant is also multilinear and


alternating in the row vectors.

Proof. A column operation A 7→ B is equivalent to a row operation AT 7→ B T .


The discussion above shows that det B = a det A if and only if det B T = a det AT .
Therefore det A = det AT if and only if det B = det B T .
If A is invertible, then we may apply a sequence of column operations to reduce
A to I. The transpose of these column operations is a sequence of row operations
that reduces AT to I T = I. Since det I = det I T , by what we proved above, we get
det A = det AT .
If A is not invertible, then AT is also not invertible. By Theorem 5.1.4, we get
det AT = 0 = det A.

Example 5.1.3. We calculate the determinant in Example 5.1.1 by mixing row and
column operations

\det\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & a \end{pmatrix}
= \det\begin{pmatrix} 1 & 4 & 7 \\ 1 & 1 & 1 \\ 1 & 1 & a - 8 \end{pmatrix}   (R_2 - R_1,\ R_3 - R_2)
= \det\begin{pmatrix} 1 & 3 & 3 \\ 1 & 0 & 0 \\ 1 & 0 & a - 9 \end{pmatrix}   (C_2 - C_1,\ C_3 - C_2)
= -\det\begin{pmatrix} 3 & 3 & 1 \\ 0 & a - 9 & 1 \\ 0 & 0 & 1 \end{pmatrix}   (C_1 \leftrightarrow C_2,\ C_2 \leftrightarrow C_3,\ R_2 \leftrightarrow R_3)
= -3 \cdot (a - 9) \cdot 1 = -3(a - 9).

Note that the negative sign after the third equality is due to an odd number of exchanges.

Exercise 5.7. Prove that any orthogonal matrix has determinant ±1.

Exercise 5.8. Prove that the determinant is the unique function that is multilinear and
alternating on the row vectors, and satisfies det I = 1.

Exercise 5.9. Suppose A and B are square matrices. Suppose O is the zero matrix. Use
the multilinear and alternating property on columns of A and rows of B to prove

\det\begin{pmatrix} A & * \\ O & B \end{pmatrix} = \det A \det B = \det\begin{pmatrix} A & O \\ * & B \end{pmatrix}.

Exercise 5.10. Suppose A is an m×n matrix. Use Exercise 3.38 to prove that, if rankA = n,
then there is an n × n submatrix B inside A, such that det B ≠ 0. Similarly, if rankA = m,
then there is an m × m submatrix B inside A, such that det B ≠ 0.

Exercise 5.11. Suppose A is an m × n matrix. Prove that if columns of A are linearly


dependent, then for any n × n submatrix B, we have det B = 0. Similarly, if rows of A
are linearly dependent, then any m × m submatrix inside A has vanishing determinant.

Exercise 5.12. Let r = rankA.


1. Prove that there is an r × r submatrix B of A, such that det B ≠ 0.

2. Prove that if s > r, then any s × s submatrix of A has vanishing determinant.


We conclude that the rank r is the biggest number such that there is an r × r submatrix
with non-vanishing determinant.

5.1.4 Cofactor Expansion


The first column of an n × n matrix A = (a_{ij}) = (\vec{v}_1\ \vec{v}_2\ \cdots\ \vec{v}_n) is

\vec{v}_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{n1} \end{pmatrix} = a_{11}\vec{e}_1 + a_{21}\vec{e}_2 + \cdots + a_{n1}\vec{e}_n.

By the linearity of det A in the first column, we get

det A = a11 D1 (A) + a21 D2 (A) + · · · + an1 Dn (A),

where

D_i(A) = \det(\vec{e}_i\ \vec{v}_2\ \cdots\ \vec{v}_n)
= \det\begin{pmatrix}
0 & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & & \vdots \\
0 & a_{(i-1)2} & \cdots & a_{(i-1)n} \\
1 & a_{i2} & \cdots & a_{in} \\
0 & a_{(i+1)2} & \cdots & a_{(i+1)n} \\
\vdots & \vdots & & \vdots \\
0 & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
= \det\begin{pmatrix}
0 & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & & \vdots \\
0 & a_{(i-1)2} & \cdots & a_{(i-1)n} \\
1 & 0 & \cdots & 0 \\
0 & a_{(i+1)2} & \cdots & a_{(i+1)n} \\
\vdots & \vdots & & \vdots \\
0 & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
= (-1)^{i+1}\det\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & & \vdots \\
0 & a_{(i-1)2} & \cdots & a_{(i-1)n} \\
0 & a_{(i+1)2} & \cdots & a_{(i+1)n} \\
\vdots & \vdots & & \vdots \\
0 & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
= (-1)^{i-1}\det\begin{pmatrix} 1 & O \\ O & A_{i1} \end{pmatrix}
= (-1)^{i-1}\det A_{i1}.

Here the third equality is by the third type column operations, the fourth equality
is by the first type row operations. The (n − 1) × (n − 1) matrix Ai1 is obtained by

deleting the i-th row and the 1st column of A. The last equality is due to the fact
that \det\begin{pmatrix} 1 & O \\ O & A_{i1} \end{pmatrix} is multilinear and alternating in columns of A_{i1}. We conclude
the cofactor expansion formula

det A = a11 det A11 − a21 det A21 + · · · + (−1)n−1 an1 det An1 .

For a 3 × 3 matrix, this means

\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= a_{11}\det\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix}
- a_{21}\det\begin{pmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{pmatrix}
+ a_{31}\det\begin{pmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{pmatrix}.

We may carry out the same argument with respect to the i-th column instead
of the first one. Let Aij be the matrix obtained by deleting the i-th row and j-th
column from A. Then we get the cofactor expansions along the i-th column

det A = (−1)1−i a1i det A1i + (−1)2−i a2i det A2i + · · · + (−1)n−i ani det Ani .

By det AT = det A, we also have the cofactor expansion along the i-th row

det A = (−1)i−1 ai1 det Ai1 + (−1)i−2 ai2 det Ai2 + · · · + (−1)i−n ain det Ain .

The combination of the row operation, column operation, and cofactor expansion
gives an effective way of calculating the determinant.
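As a small illustration beyond the text (assuming NumPy; det_by_cofactor is our own name), the following sketch applies the cofactor expansion along the first column recursively.

import numpy as np

def det_by_cofactor(A):
    # Cofactor expansion along the first column, applied recursively.
    A = np.asarray(A)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    minor = lambda i: np.delete(np.delete(A, i, axis=0), 0, axis=1)  # delete i-th row, 1st column
    return sum((-1) ** i * A[i, 0] * det_by_cofactor(minor(i)) for i in range(n))

print(det_by_cofactor(np.array([[1, 4, 7], [2, 5, 8], [3, 6, 10]])))   # -3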

Example 5.1.4. Cofactor expansion is the most convenient along rows or columns
with only one nonzero entry.

\det\begin{pmatrix} t-1 & 2 & 4 \\ 2 & t-4 & 2 \\ 4 & 2 & t-1 \end{pmatrix}
= \det\begin{pmatrix} t-5 & 2 & 4 \\ 0 & t-4 & 2 \\ -t+5 & 2 & t-1 \end{pmatrix}   (C_1 - C_3)
= \det\begin{pmatrix} t-5 & 2 & 4 \\ 0 & t-4 & 2 \\ 0 & 4 & t+3 \end{pmatrix}   (R_3 + R_1)
= (t-5)\det\begin{pmatrix} t-4 & 2 \\ 4 & t+3 \end{pmatrix}   (cofactor expansion along C_1)
= (t-5)(t^2 - t - 20) = (t-5)^2(t+4).

Example 5.1.5. We calculate the determinant of the 4 × 4 Vandermonde matrix in



Example 2.3.5

\det\begin{pmatrix}
1 & t_0 & t_0^2 & t_0^3 \\
1 & t_1 & t_1^2 & t_1^3 \\
1 & t_2 & t_2^2 & t_2^3 \\
1 & t_3 & t_3^2 & t_3^3
\end{pmatrix}
= \det\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & t_1 - t_0 & t_1(t_1 - t_0) & t_1^2(t_1 - t_0) \\
1 & t_2 - t_0 & t_2(t_2 - t_0) & t_2^2(t_2 - t_0) \\
1 & t_3 - t_0 & t_3(t_3 - t_0) & t_3^2(t_3 - t_0)
\end{pmatrix}   (C_4 - t_0C_3,\ C_3 - t_0C_2,\ C_2 - t_0C_1)
= \det\begin{pmatrix}
t_1 - t_0 & t_1(t_1 - t_0) & t_1^2(t_1 - t_0) \\
t_2 - t_0 & t_2(t_2 - t_0) & t_2^2(t_2 - t_0) \\
t_3 - t_0 & t_3(t_3 - t_0) & t_3^2(t_3 - t_0)
\end{pmatrix}
= (t_1 - t_0)(t_2 - t_0)(t_3 - t_0)\det\begin{pmatrix}
1 & t_1 & t_1^2 \\
1 & t_2 & t_2^2 \\
1 & t_3 & t_3^2
\end{pmatrix}   (taking the factor t_i - t_0 out of row i).

The second equality uses the cofactor expansion along the first row. We find that
the calculation is reduced to the determinant of a 3 × 3 Vandermonde matrix. In
general, by induction, we have
 
\det\begin{pmatrix}
1 & t_0 & t_0^2 & \cdots & t_0^n \\
1 & t_1 & t_1^2 & \cdots & t_1^n \\
1 & t_2 & t_2^2 & \cdots & t_2^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & t_n & t_n^2 & \cdots & t_n^n
\end{pmatrix}
= \prod_{i<j} (t_j - t_i).
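As a numerical sanity check (our own sketch, assuming NumPy), the code below compares the determinant of a Vandermonde matrix with the product formula; np.vander with increasing=True builds the rows (1, t_i, t_i^2, ..., t_i^n).

import numpy as np
from itertools import combinations

t = np.array([2.0, 3.0, 5.0, 7.0])
V = np.vander(t, increasing=True)              # rows (1, t_i, t_i^2, t_i^3)
product = np.prod([t[j] - t[i] for i, j in combinations(range(len(t)), 2)])
print(np.isclose(np.linalg.det(V), product))   # True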

Exercise 5.13. Calculate the determinant.

1. \begin{pmatrix} 2 & 1 & -3 \\ -1 & 2 & 1 \\ 3 & -2 & 1 \end{pmatrix}.

2. \begin{pmatrix} 0 & 1 & 2 & 3 \\ 1 & 2 & 3 & 0 \\ 2 & 3 & 0 & 1 \\ 3 & 0 & 1 & 2 \end{pmatrix}.

3. \begin{pmatrix} 2 & 0 & 0 & 1 \\ 0 & 1 & 3 & 0 \\ -2 & 0 & -1 & 2 \\ 0 & 3 & 1 & 2 \end{pmatrix}.

Exercise 5.14. Calculate the determinant.

1. \begin{pmatrix} t-3 & -1 & 3 \\ 1 & t-5 & 3 \\ 6 & -6 & t+2 \end{pmatrix}.

2. \begin{pmatrix} t & 2 & 3 \\ 1 & t-1 & 1 \\ -2 & -2 & t-5 \end{pmatrix}.

3. \begin{pmatrix} t-2 & 1 & 1 & -1 \\ 1 & t-2 & -1 & 1 \\ 1 & -1 & t-2 & 1 \\ -1 & 1 & 1 & t-2 \end{pmatrix}.

Exercise 5.15. Calculate the determinant.

1. \begin{pmatrix} a & b & c \\ b & c & a \\ c & a & b \end{pmatrix}.

2. \begin{pmatrix} 1 & t_0 & t_0^2 & t_0^4 \\ 1 & t_1 & t_1^2 & t_1^4 \\ 1 & t_2 & t_2^2 & t_2^4 \\ 1 & t_3 & t_3^2 & t_3^4 \end{pmatrix}.

3. \begin{pmatrix} 1 & t_0^2 & t_0^3 & t_0^4 \\ 1 & t_1^2 & t_1^3 & t_1^4 \\ 1 & t_2^2 & t_2^3 & t_2^4 \\ 1 & t_3^2 & t_3^3 & t_3^4 \end{pmatrix}.

Exercise 5.16. Calculate the determinant.

1. \det\begin{pmatrix}
t & 0 & \cdots & 0 & 0 & a_0 \\
-1 & t & \cdots & 0 & 0 & a_1 \\
0 & -1 & \cdots & 0 & 0 & a_2 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & -1 & t & a_{n-2} \\
0 & 0 & \cdots & 0 & -1 & t + a_{n-1}
\end{pmatrix}.

2. \det\begin{pmatrix}
b_1 & a_1 & a_1 & \cdots & a_1 & a_1 \\
a_2 & b_2 & a_2 & \cdots & a_2 & a_2 \\
a_3 & a_3 & b_3 & \cdots & a_3 & a_3 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_n & a_n & a_n & \cdots & a_n & b_n
\end{pmatrix}.

Exercise 5.17. The Vandermonde matrix comes from the evaluation of polynomials f(t) of
degree n at n + 1 distinct points. If two points t_0, t_1 are merged to the same point t_0, then
the evaluations f(t_0) and f(t_1) should be replaced by f(t_0) and f'(t_0). The idea leads to
the “derivative” Vandermonde matrix such as

\begin{pmatrix}
1 & t_0 & t_0^2 & t_0^3 & \cdots & t_0^n \\
0 & 1 & 2t_0 & 3t_0^2 & \cdots & nt_0^{n-1} \\
1 & t_2 & t_2^2 & t_2^3 & \cdots & t_2^n \\
\vdots & \vdots & \vdots & \vdots & & \vdots \\
1 & t_n & t_n^2 & t_n^3 & \cdots & t_n^n
\end{pmatrix}.

Find the determinant of the matrix. What about the general case of evaluations at
t_1, t_2, \ldots, t_k, with multiplicities m_1, m_2, \ldots, m_k (i.e., taking values f(t_i), f'(t_i), \ldots, f^{(m_i-1)}(t_i))
satisfying m_1 + m_2 + \cdots + m_k = n?

Exercise 5.18. Use cofactor expansion to explain the determinant of upper and lower tri-
angular matrices.

5.1.5 Cramer’s Rule


The cofactor expansion suggests the adjugate matrix of a square matrix A

\mathrm{adj}(A) = \begin{pmatrix}
\det A_{11} & -\det A_{21} & \cdots & (-1)^{n-1}\det A_{n1} \\
-\det A_{12} & \det A_{22} & \cdots & (-1)^{n-2}\det A_{n2} \\
\vdots & \vdots & & \vdots \\
(-1)^{1-n}\det A_{1n} & (-1)^{2-n}\det A_{2n} & \cdots & (-1)^{n-n}\det A_{nn}
\end{pmatrix}.

The ij-entry of the matrix multiplication A adj(A) is


cij = (−1)j−1 ai1 det Aj1 + (−1)j−2 ai2 det Aj2 + · · · + (−1)j−n ain det Ajn .
The cofactor expansion of det A along the j-th row shows that the diagonal entries
cjj = det A. In case i 6= j, we compare with the cofactor expansion of det A along the

j-th row, and find that cij is the determinant of the matrix B obtained by replacing
the j-th row (aj1 , aj2 , . . . , ajn ) of A by the i-th row (ai1 , ai2 , . . . , ain ). Since both the
i-th and the j-th rows of B are the same as the i-th row of A, by the alternating
property of the determinant in the row vectors, we conclude that the off-diagonal
entry cij = 0. For the 3 × 3 case, the argument is (â1i = a1i , the hat ˆ is added to
indicate along which row the cofactor expansion is made)

c_{12} = -\hat{a}_{11}\det A_{21} + \hat{a}_{12}\det A_{22} - \hat{a}_{13}\det A_{23}
= -\hat{a}_{11}\det\begin{pmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{pmatrix}
+ \hat{a}_{12}\det\begin{pmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{pmatrix}
- \hat{a}_{13}\det\begin{pmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{pmatrix}
= \det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ \hat{a}_{11} & \hat{a}_{12} & \hat{a}_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = 0.

The calculation of A adj(A) gives

A adj(A) = (det A)I.

In case A is invertible, this gives an explicit formula for the inverse

A^{-1} = \frac{1}{\det A}\mathrm{adj}(A).
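The following Python sketch (our own illustration, assuming NumPy; the name adjugate is hypothetical) builds adj(A) from cofactors and checks the identities A adj(A) = (det A)I and A^{-1} = adj(A)/det A.

import numpy as np

def adjugate(A):
    # adj(A): the (i,j) entry is the cofactor (-1)^{i+j} det A_{ji}
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)  # delete j-th row, i-th column
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[1., 4., 7.], [2., 5., 8.], [3., 6., 10.]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))     # True
print(np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A)))  # True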
A consequence of the formula is the following explicit formula for the solution of
A~x = ~b.

Proposition 5.1.6 (Cramer's Rule). If A = (~v1 ~v2 · · · ~vn) is an invertible matrix,
then the solution of A~x = ~b is given by

x_i = \frac{\det(\vec{v}_1 \cdots \vec{b} \cdots \vec{v}_n)}{\det A},

where ~b replaces the i-th column of A.

Proof. In case A is invertible, the solution of A~x = ~b is

\vec{x} = A^{-1}\vec{b} = \frac{1}{\det A}\mathrm{adj}(A)\vec{b}.

Then x_i is the i-th coordinate

x_i = \frac{1}{\det A}\left[(-1)^{1-i}(\det A_{1i})b_1 + (-1)^{2-i}(\det A_{2i})b_2 + \cdots + (-1)^{n-i}(\det A_{ni})b_n\right],

and (-1)^{1-i}(\det A_{1i})b_1 + (-1)^{2-i}(\det A_{2i})b_2 + \cdots + (-1)^{n-i}(\det A_{ni})b_n is the cofactor
expansion of the determinant of the matrix (\vec{v}_1 \cdots \vec{b} \cdots \vec{v}_n) obtained by replacing
the i-th column of A by \vec{b}.

For the system of three equations in three variables, Cramer's rule is

x_1 = \frac{\det\begin{pmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{pmatrix}}{\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}}, \quad
x_2 = \frac{\det\begin{pmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{pmatrix}}{\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}}, \quad
x_3 = \frac{\det\begin{pmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{pmatrix}}{\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}}.

Cramer’s rule is not a practical way of calculating the solution for two reasons. The
first is that it only applies to the case A is invertible (and in particular the number
of equations is the same as the number of variables). The second is that the row
operation is a much more efficient method for finding solutions.
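Purely as an illustration of the formula (our own sketch, assuming NumPy; the function name cramer is hypothetical):

import numpy as np

def cramer(A, b):
    # Cramer's rule: replace the i-th column of A by b and divide by det A.
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1., 4., 7.], [2., 5., 8.], [3., 6., 10.]])
b = np.array([1., 2., 4.])
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))   # True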

5.2 Geometry
The determinant of a real square matrix is a real number. The number is determined
by its absolute value (magnitude) and its sign. The absolute value is the volume
of the parallelotope spanned by the column vectors of the matrix. The sign is
determined by the orientation of the Euclidean space.

5.2.1 Volume
The parallelogram spanned by (a, b) and (c, d) may be divided into five pieces
A, B, B 0 , C, C 0 . The center piece A is a rectangle. The triangle B has the same
area (due to same base and height) as the dotted triangle below A. The triangle B 0
is identical to B. Therefore the areas of B and B 0 together is the area of the dotted
rectangle below A. By the same reason, the areas of C and C 0 together is the area
of the dotted rectangle on the left of A. The area of the parallelogram is then the
sum of the areas of the rectangle A, the dotted rectangle below A, and the dotted
rectangle on the left of A. This sum is clearly

ad - bc = \det\begin{pmatrix} a & c \\ b & d \end{pmatrix}.

Strictly speaking, the picture used for the argument assumes that (a, b) and (c, d)
are in the first quadrant, and the first vector is “below” the second vector. One may
try other possible positions of the two vectors and find that the area (which is always
≥ 0) is always the determinant up to a sign. Alternatively, we may also use the

Figure 5.2.1: Area of parallelogram.

formula for the area in Section 4.1.1

\mathrm{Area}((a,b),(c,d)) = \sqrt{\|(a,b)\|^2\|(c,d)\|^2 - ((a,b)\cdot(c,d))^2}
= \sqrt{(a^2+b^2)(c^2+d^2) - (ac+bd)^2}
= \sqrt{(a^2c^2 + a^2d^2 + b^2c^2 + b^2d^2) - (a^2c^2 + 2abcd + b^2d^2)}
= \sqrt{a^2d^2 + b^2c^2 - 2abcd} = |ad - bc|.

Proposition 5.2.1. The absolute value of the determinant of A = (~v1 ~v2 · · · ~vn ) is
the volume of the parallelotope spanned by column vectors

P (A) = {x1~v1 + x2~v2 + · · · + xn~vn : 0 ≤ xi ≤ 1}.

Proof. We show that a column operation has the same effect on the volume vol(A)
of P (A) and on | det A|. We illustrate the idea by only looking at the operations on
the first two columns.
The operation C1 ↔ C2 is

A = (~v1 ~v2 · · · ) 7→ Ã = (~v2 ~v1 · · · ).

The operation does not change the parallelotope. Therefore P (Ã) = P (A), and we
get vol(Ã) = vol(A). This is the same as | det Ã| = | − det A| = | det A|.
The operation C1 → aC1 is

A = (~v1 ~v2 · · · ) 7→ Ã = (a~v1 ~v2 · · · ).

The operation stretches the parallelotope by a factor of |a| in the ~v1 direction, and
keeps all the other directions fixed. Therefore the volume of P (Ã) is |a| times the
volume of P (A), and we get vol(Ã) = |a|vol(A). This is the same as | det Ã| =
|a det A| = |a|| det A|.
The operation C1 → C1 + aC2 is

A = (~v1 ~v2 · · · ) 7→ Ã = (~v1 + a~v2 ~v2 · · · ).



The operation moves ~v1 vertex of the parallelotope along the direction of ~v2 , and
keeps all the other directions fixed. Therefore P (Ã) and P (A) have the same base
along the subspace Span{~v2 , . . . , ~vn } (the (n − 1)-dimensional parallelotope spanned
by ~v2 , . . . , ~vn ) and the same height (from the tips of ~v1 + a~v2 or ~v1 to the base
Span{~v2 , . . . , ~vn }). This implies that the operation preserves the volume, and we
get vol(Ã) = vol(A). This is the same as | det Ã| = | det A|.
If A is invertible, then the column operation reduces A to the identity matrix
I. The volume of P (I) is 1 and we also have det I = 1. If A is not invertible, then
P (A) is degenerate and has volume 0. We also have det A = 0 by Theorem 5.1.4.
This completes the proof.

Example 5.2.1. The volume of the tetrahedron is \frac{1}{6} of the corresponding parallelotope.
For example, the volume of the tetrahedron with vertices ~v0 = (1, 1, 1), ~v1 = (1, 2, 3), ~v2 = (2, 3, 1), ~v3 = (3, 1, 2) is

\frac{1}{6}\left|\det(\vec{v}_1 - \vec{v}_0\ \ \vec{v}_2 - \vec{v}_0\ \ \vec{v}_3 - \vec{v}_0)\right|
= \frac{1}{6}\left|\det\begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 0 \\ 2 & 0 & 1 \end{pmatrix}\right|
= \frac{1}{6}\left|\det\begin{pmatrix} 0 & 1 & 2 \\ 1 & 2 & 0 \\ 0 & -4 & 1 \end{pmatrix}\right|
= \frac{1}{6}\left|-\det\begin{pmatrix} 1 & 2 \\ -4 & 1 \end{pmatrix}\right|
= \frac{1}{6}|-9| = \frac{3}{2}.

In general, the volume of the simplex with vertices ~v0, ~v1, ~v2, . . . , ~vn is

\frac{1}{n!}\left|\det(\vec{v}_1 - \vec{v}_0\ \ \vec{v}_2 - \vec{v}_0\ \ \cdots\ \ \vec{v}_n - \vec{v}_0)\right|.
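The following sketch (our own, assuming NumPy) evaluates this simplex volume formula and reproduces the value 3/2 for the tetrahedron above.

import numpy as np
from math import factorial

def simplex_volume(vertices):
    # volume = |det(v1 - v0, ..., vn - v0)| / n!
    v = np.asarray(vertices, dtype=float)
    edges = (v[1:] - v[0]).T              # edge vectors as columns
    n = edges.shape[1]
    return abs(np.linalg.det(edges)) / factorial(n)

tetra = [(1, 1, 1), (1, 2, 3), (2, 3, 1), (3, 1, 2)]
print(simplex_volume(tetra))              # 1.5, as in Example 5.2.1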

5.2.2 Orientation
The line R has two orientations: positive and negative. The positive orientation
is represented by a positive number (such as 1), and the negative orientation is
represented by a negative number (such as −1).
The plane R2 has two orientations: counterclockwise, and clockwise. The coun-
terclockwise orientation is represented by an ordered pair of directions, such that
going from the first to the second direction is counterclockwise. The clockwise orien-
tation is also represented by an ordered pair of directions, such that going from the
first to the second direction is clockwise. For example, {~e1 , ~e2 } is counterclockwise,
and {~e2 , ~e1 } is clockwise.
The space R3 has two orientations: right hand, and left hand. The right hand
orientation is represented by an ordered triple of directions, such that going from
the first to the second and then the third follows the right hand rule. The left
hand orientation is also represented by an ordered triple of directions, such that
going from the first to the second and then the third follows the left hand rule. For
example, {~e1 , ~e2 , ~e3 } is right hand, and {~e2 , ~e1 , ~e3 } is left hand.

How can we introduce orientation in a general vector space? For example, the
line H = R(1, −2, 3) is a 1-dimensional subspace of R3 . The line has two direc-
tions represented by (1, −2, 3) and −(1, −2, 3) = (−1, 2, −3). However, there is no
preference as to which direction is positive and which is negative. Moreover, both
(1, −2, 3) and 2(1, −2, 3) = (2, −4, 6) represent the same directions. In fact, all
vectors representing the same direction as (1, −2, 3) form the set

o(1,−2,3) = {c(1, −2, 3) : c > 0}.

We note that any vector c(1, −2, 3) ∈ o(1,−2,3) can be continuously deformed to
(1, −2, 3) in H without passing through the zero vector

~v (t) = ((1 − t)c + t)(1, −2, 3) ∈ H, ~v (t) ≠ ~0 for all 0 ≤ t ≤ 1,

~v (0) = c(1, −2, 3), ~v (1) = (1, −2, 3).


Similarly, the set

o(−1,2,−3) = {c(−1, 2, −3) : c > 0} = {c(1, −2, 3) : c < 0}

consists of all the vectors in H that can be continuously deformed to (−1, 2, −3) in
H without passing through the zero vector.
Suppose dim V = 1. If we fix a nonzero vector ~v ∈ V − ~0, then the two directions
of V are represented by the disjoint sets

o~v = {c~v : c > 0}, o−~v = {−c~v : c > 0} = {c~v : c < 0}.

Any two vectors in o~v can be continuously deformed to each other without passing
through the zero vector, and the same can be said about o−~v . Moreover, we have
o~v ⊔ o−~v = V − ~0. An orientation of V is a choice of one of the two sets.
Suppose dim V = 2. An orientation of V is represented by a choice of ordered
basis α = {~v1 , ~v2 }. Another choice β = {~w1 , ~w2 } represents the same orientation if
α can be continuously deformed to β through bases (i.e., without passing through
linearly dependent pair of vectors). For the special case of V = R2 , it is intuitively
clear that, if going from ~v1 to ~v2 is counterclockwise, then going from ~w1 to ~w2 must
also be counterclockwise. The same intuition applies to the clockwise direction.
Let α = {~v1 , ~v2 , . . . , ~vn } and β = {~w1 , ~w2 , . . . , ~wn } be ordered bases of a vector
space V . We say α and β are compatibly oriented if there is

α(t) = {~v1 (t), ~v2 (t), . . . , ~vn (t)}, t ∈ [0, 1],

such that

1. α(t) is continuous, i.e., each ~vi (t) is a continuous function of t.

2. α(t) is a basis for each t.



3. α(0) = α and α(1) = β.

In other words, α and β are connected by a continuous family of ordered bases.


We can imagine α(t) to be a “movie” that starts from α and ends at β. If α(t)
connects α to β, then the reverse movie α(1 − t) connects β to α. If α(t) connects
α to β, and β(t) connects β to γ, then we may stitch the two movies together and
get a movie (
α(2t), for t ∈ [0, 12 ],
β(2t − 1), for t ∈ [ 12 , 1],
that connects α to γ. This shows that the compatible orientation is an equivalence
relation.
In R1 , a basis is a nonzero number. Any positive number can be deformed
(through nonzero numbers) to 1, and any negative number can be deformed to −1.
In R2 , a basis is a pair of non-parallel vectors. Any such pair can be deformed
(through non-parallel pairs) to {~e1 , ~e2 } or {~e1 , −~e2 }, and {~e1 , ~e2 } cannot be deformed
to {~e1 , −~e2 } without passing through parallel pair.

Proposition 5.2.2. Two ordered bases α, β are compatibly oriented if and only if
det[I]βα > 0.

Proof. Suppose α, β are compatibly oriented. Then there is a continuous family


α(t) of ordered bases connecting the two. The function f (t) = det[I]α(t)α is a
continuous function satisfying f (0) = det[I]αα = 1 and f (t) 6= 0 for all t ∈ [0, 1].
By the intermediate value theorem, f never changes sign. Therefore det[I]βα =
det[I]α(1)α = f (1) > 0. This proves the necessity.
For the sufficiency, suppose det[I]βα > 0. We study the effect of the operations
on sets of vectors in Proposition 3.1.3 on the orientation. We describe the operations
on ~u and ~v , which are regarded as ~vi and ~vj . The other vectors in the set are fixed
and are omitted in the formula.

1. The sets {~u, ~v } and {~v , −~u} are connected by the “90◦ rotation” (the quotation
mark indicates that we only pretend {~u, ~v } to be orthonormal)

α(t) = {cos(tπ/2) ~u + sin(tπ/2) ~v , − sin(tπ/2) ~u + cos(tπ/2) ~v }.

Therefore the first operation plus a sign change preserves the orientation.
Combining two such rotations also shows that {~u, ~v } and {−~u, −~v } are con-
nected by “180◦ rotation”, and are therefore compatibly oriented.

2. For any c > 0, ~v and c~v are connected by α(t) = ((1 − t)c + t)~v . Therefore the
second operation with positive scalar preserves the orientation.

3. The sets {~u, ~v } and {~u +c~v , ~v } are connected by the sliding α(t) = {~u +tc~v , ~v }.
Therefore the third operation preserves the orientation.

By these operations, any ordered basis can be modified to become certain “reduced
column echelon set”. If V = Rn , then the “reduced column echelon set” is either
{~e1 , ~e2 , . . . , ~en−1 , ~en } or {~e1 , ~e2 , . . . , ~en−1 , −~en }. In general, we use the β-coordinate
isomorphism V ∼ = Rn to translate into a Euclidean space. Then we can use a
sequence of operations above to modify α to either β = {~w1 , ~w2 , . . . , ~wn−1 , ~wn } or
β′ = {~w1 , ~w2 , . . . , ~wn−1 , −~wn }. Correspondingly, we have a continuous family α(t)
of ordered bases connecting α to β or β 0 .
By the just proved necessity, we have det[I]α(1)α > 0. On the other hand, the
assumption det[I]βα > 0 implies det[I]β 0 α = det[I]β 0 β det[I]βα = − det[I]βα < 0.
Therefore α(1) 6= β 0 , and we must have α(1) = β. This proves that α and β are
compatibly oriented.

Proposition 5.2.2 shows that the orientation compatibility gives exactly two
equivalence classes. We denote the equivalence class represented by α by

oα = {β : α, β compatibly oriented} = {β : det[I]βα > 0}.

We also denote the other equivalence class by

−oα = {β : α, β incompatibly oriented} = {β : det[I]βα < 0}.

In general, the set of all ordered bases is the disjoint union o∪o0 of two equivalence
classes. The choice of o or o0 specifies an orientation of the vector space. In other
words, an oriented vector space is a vector space equipped with a preferred choice
of the equivalence class. An ordered basis α of an oriented vector space is positively
oriented if α belongs to the preferred equivalence class, and is otherwise negatively
oriented.
The standard (positive) orientation of Rn is the equivalence class represented by
the standard basis {~e1 , ~e2 , . . . , ~en−1 , ~en }. The standard negative orientation is then
represented by {~e1 , ~e2 , . . . , ~en−1 , −~en }.

Proposition 5.2.3. Let A be an n × n invertible matrix. Then det A > 0 if and only
if the columns of A form a positively oriented basis of Rn , and det A < 0 if and only
if the columns of A form a negatively oriented basis of Rn .

5.2.3 Determinant of Linear Operator


A linear transformation L : V → W has the matrix [L]βα with respect to bases α, β
of V, W . For the matrix to be square, we need dim V = dim W . Then we can
certainly define det[L]βα to be the “determinant with respect to the bases”. The
problem is that det[L]βα depends on the choice of α, β.
If L : V → V is a linear operator, then we usually choose α = β and consider
det[L]αα . If we choose another basis α0 of V , then [L]α0 α0 = P −1 [L]αα P , where

P = [I]αα0 is the matrix between the two bases. Further by Proposition 5.1.3, we
have
det(P −1 [L]αα P ) = (det P )−1 (det[L]αα )(det P ) = det[L]αα .
Therefore the determinant of a linear operator is well defined.

Exercise 5.19. Prove that \det(L \circ K) = \det L \det K, \det L^{-1} = \frac{1}{\det L}, \det L^* = \det L.

Exercise 5.20. Prove that \det(aL) = a^{\dim V}\det L.

In another case, suppose L : V → W is a linear transformation between oriented


inner product spaces. Then we may consider [L]βα for positively oriented orthonor-
mal bases α and β. If α0 and β 0 are other orthonormal bases, then [I]αα0 and [I]β 0 β
are orthogonal matrices, and we have det[I]αα0 = ±1 and det[I]β 0 β = ±1. If we
further know that α0 and β 0 are positively oriented, then these determinants are
positive, and we get

det[L]β 0 α0 = det[I]β 0 β det[L]βα det[I]αα0 = 1 det[L]βα 1 = det[L]βα .

This shows that the determinant of a linear transformation between oriented inner
product spaces is well defined.
The two cases of well defined determinant of linear transformations suggest a
common and deeper concept of determinant for linear transformations between vec-
tor spaces of the same dimension. The concept will be clarified in the theory of
exterior algebra.

5.2.4 Geometric Axiom for Determinant


 
An n × n invertible matrix A gives an (n + k) × (n + k) invertible matrix \begin{pmatrix} A & O \\ O & I \end{pmatrix},
called the stabilisation of A. The parallelotope P\begin{pmatrix} A & O \\ O & I \end{pmatrix} is the “orthogonal product”
of P(A) in R^n and P(I) in R^k. Moreover, P(I) is the unit cube and should
have volume 1. Therefore we expect the stabilisation property for the determinant

\det\begin{pmatrix} A & O \\ O & I \end{pmatrix} = \det A.

The other property we expect from the geometry is the multiplication property for
invertible n × n matrices A and B

det AB = det A det B.

Theorem 5.2.4. The determinant of invertible matrices is the unique function sat-
isfying the stabilisation property, the multiplication property, and D(a) = a.

Proof. Let D be a function with the properties in the theorem. By taking A =


B = I in the multiplication property, we get D(I) = 1. By taking B = A−1 in the
multiplication property, we get D(A−1 ) = D(A)−1 .
A row operation of third type is obtained by multiplying an elementary matrix
Eij (c) to the left. The following shows that E12 (c) = E13 (c)E32 (1)E13 (c)−1 E32 (1)−1 .
     −1  −1
1 c 0 1 0 c 1 0 0 1 0 c 1 0 0
0 1 0 = 0 1 0 0 1 0 0 1 0 0 1 0 .
0 0 1 0 0 1 0 1 1 0 0 1 0 1 1
By thinking of the matrix multiplication as happening only to the i-th, j-th and k-th
columns and rows, the equality shows that Eij (c) = Eik (c)Ekj (1)Eik (c)−1 Ekj (1)−1 .
Then by the multiplication property, we have (at least for n × n matrices, n ≥ 3)
D(Eij (c)) = D(Eik (c)) D(Ekj (1)) D(Eik (c))−1 D(Ekj (1))−1 = 1.
By the multiplication property again, we have D(Eij (c)A) = D(A). In other words,
row operations of third type do not change D.
Next we argue that third type row operations can change any invertible matrix
to a diagonal one. The key is the following “skew row exchange” operation similar
to the first type row operation
\begin{pmatrix} \vec{a} \\ \vec{b} \end{pmatrix}
\xrightarrow{R_1 - R_2}
\begin{pmatrix} \vec{a} - \vec{b} \\ \vec{b} \end{pmatrix}
\xrightarrow{R_2 + R_1}
\begin{pmatrix} \vec{a} - \vec{b} \\ \vec{a} \end{pmatrix}
\xrightarrow{R_1 - R_2}
\begin{pmatrix} -\vec{b} \\ \vec{a} \end{pmatrix}.

Since the operation is a combination of three third type row operations, it does not
change D.
An invertible n × n matrix A can be reduced to I by row operations. In more
detail, we may first use first type row operation to move a nonzero entry in the first
column of A into the 11-entry position, so that the 11-entry is nonzero. Of course
this can also be achieved by “skew row exchange”, with the cost of adding “−” sign.
In other words, we may use third type row operations to make the 11-entry nonzero.
Then we may use more third type row operations to make the other entries in the
first column into 0. Therefore we may use only third type row operations to get
 
A \to \begin{pmatrix} a_1 & * \\ O & A_1 \end{pmatrix},
where A1 is an invertible (n − 1) × (n − 1) matrix. Then we repeat the process for
A1 . This means using third type row operations to make the 22-entry nonzero, and
all the other entries in the second column into 0. Inductively, we use third type row
operations to get a diagonal matrix
 
A \to \hat{A} = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}, \quad a_i \ne 0.

Since only third type row operations are used, we have D(A) = D(Â) and det A =
det Â.
For a diagonal matrix, we have a sequence of third type row operations (the last
→ is a combination of three third type row operations)
         
b 0 b 1 b 1 0 1 ab 0
→ → → → .
0 a 0 a −ab 0 −ab 0 0 1

By repeatedly using this, we may use third type row operations to get

\hat{A} \to \begin{pmatrix} a & O \\ O & I \end{pmatrix}, \quad a = a_1 a_2 \cdots a_n.

Then we get D(A) = D(\hat{A}) = D\begin{pmatrix} a & O \\ O & I \end{pmatrix} and \det A = \det\hat{A} = \det\begin{pmatrix} a & O \\ O & I \end{pmatrix}. By the
stabilisation property, we get D(A) = D(a) = a. We also know det A = det(a) = a.
This completes the proof that D = det.
Behind the theorem is the algebraic K-theory of real numbers. The theory
deals with the problem of invertible matrices that are equivalent up to stabilisation
and third type operations. The proof of the theorem shows that, for real number
matrices, every invertible matrix is equivalent to the diagonal matrix with diagonal
entries a, 1, 1, . . . , 1.
Chapter 6

General Linear Algebra

A vector space is characterised by addition V × V → V and scalar multiplication


R×V → V . The addition is an internal operation, and the scalar multiplication uses
R which is external to V . Therefore the concept of vector space can be understood
in two steps. First we single out the internal structure of addition.

Definition 6.0.1. An abelian group is a set V , together with the operation of addi-
tion
u + v : V × V → V,

such that the following are satisfied.

1. Commutativity: u + v = v + u.

2. Associativity: (u + v) + w = u + (v + w).

3. Zero: There is an element 0 ∈ V satisfying u + 0 = u = 0 + u.

4. Negative: For any u, there is v, such that u + v = 0 = v + u.

For the external structure, we may use scalars other than the real numbers R,
and most of the linear algebra theory remain true. For example, if we use rational
numbers Q in place of R in Definition 1.1.1 of vector spaces, then all the chapters
so far remain valid, with taking square roots (in inner product space) as the only
exception. A much more useful scalar is the complex numbers C, for which all
chapters remain valid, except a more suitable version of the complex inner product
needs to be developed.
Further extension of the scalar could abandon the requirement that nonzero
scalars can be divided. This leads to the useful concept of modules over rings.


6.1 Complex Linear Algebra


6.1.1 Complex Number

A complex number is of the form a + ib, with a, b ∈ R and i = √−1 satisfying
i^2 = −1. A complex number has the real and imaginary parts

Re(a + ib) = a, Im(a + ib) = b.

The addition and multiplication of complex numbers are

(a + ib) + (c + id) = (a + c) + i(b + d),


(a + ib)(c + id) = (ac − bd) + i(ad + bc).

It can be easily verified that the operations satisfy the usual properties (such as
commutativity, associativity, distributivity) of arithmetic operations. In particular,
the subtraction is

(a + ib) − (c + id) = (a − c) + i(b − d),

and the division is


\frac{a + ib}{c + id} = \frac{(a + ib)(c - id)}{(c + id)(c - id)} = \frac{(ac + bd) + i(-ad + bc)}{c^2 + d^2}.

All complex numbers C can be identified with the Euclidean space R2 , with the
real part as the first coordinate and the imaginary part as the second coordinate.
The corresponding real vector has length r and angle θ (i.e., polar coordinate), and
we have
a + ib = r cos θ + ir sin θ = reiθ .
The first equality is trigonometry, and the second equality uses the expansion (the
theoretical explanation is the complex analytic continuation of the exponential func-
tion of real numbers)
e^{i\theta} = 1 + \frac{1}{1!}i\theta + \frac{1}{2!}(i\theta)^2 + \cdots + \frac{1}{n!}(i\theta)^n + \cdots
= \left(1 - \frac{1}{2!}\theta^2 + \frac{1}{4!}\theta^4 + \cdots\right) + i\left(\theta - \frac{1}{3!}\theta^3 + \frac{1}{5!}\theta^5 + \cdots\right)
= \cos\theta + i\sin\theta.

The complex exponential has the usual properties of the real exponential (because
of complex analytic continuation), and we can easily get the multiplication and
division of complex numbers

(re^{i\theta})(r'e^{i\theta'}) = rr'e^{i(\theta + \theta')}, \qquad \frac{re^{i\theta}}{r'e^{i\theta'}} = \frac{r}{r'}e^{i(\theta - \theta')}.

The polar viewpoint easily shows that multiplying reiθ means scaling by r and
rotation by θ.
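As an illustration beyond the text, Python's built-in complex numbers and the cmath module exhibit the polar form and the multiplication rule (the particular values are our own choices).

import cmath

z = complex(1, 1)                       # 1 + i
r, theta = abs(z), cmath.phase(z)       # polar coordinates: r = sqrt(2), theta = pi/4
print(cmath.isclose(z, r * cmath.exp(1j * theta)))          # z = r e^{i theta}

w = cmath.rect(2, cmath.pi / 3)         # 2 e^{i pi/3}
# multiplication multiplies lengths and adds angles
print(cmath.isclose(abs(z * w), abs(z) * abs(w)))
print(cmath.isclose(cmath.phase(z * w), cmath.phase(z) + cmath.phase(w)))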
The complex conjugation \overline{a + ib} = a - ib preserves the four arithmetic operations

\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2, \quad \overline{z_1 - z_2} = \bar{z}_1 - \bar{z}_2, \quad \overline{z_1 z_2} = \bar{z}_1\bar{z}_2, \quad \overline{\left(\frac{z_1}{z_2}\right)} = \frac{\bar{z}_1}{\bar{z}_2}.

Therefore the conjugation is an automorphism (self-isomorphism) of C. Geometrically, the conjugation means reflection with respect to the x-axis. This gives the
conjugation in polar coordinates

\overline{re^{i\theta}} = re^{-i\theta}.

The length can also be expressed in terms of the conjugation



z\bar{z} = a^2 + b^2 = r^2, \quad |z| = r = \sqrt{z\bar{z}}.

This suggests that the positivity of the inner product can be extended to complex
vector spaces, as long as we modify the inner product by using the complex conju-
gation.
A major difference between R and C is that the polynomial t2 + 1 has no root
in R but has a pair of roots ±i in C. In fact, complex numbers has the following so
called algebraically closed property.

Theorem 6.1.1 (Fundamental Theorem of Algebra). Any non-constant complex poly-


nomial has roots.

The real number R is not algebraically closed.

6.1.2 Complex Vector Space


By replacing R with C in the definition of vector spaces, we get the definition of
complex vector spaces.

Definition 6.1.2. A (complex) vector space is a set V , together with the operations
of addition and scalar multiplication

~u + ~v : V × V → V, a~u : C × V → V,

such that the following are satisfied.


1. Commutativity: ~u + ~v = ~v + ~u.

2. Associativity for addition: (~u + ~v ) + w


~ = ~u + (~v + w).
~

3. Zero: There is an element ~0 ∈ V satisfying ~u + ~0 = ~u = ~0 + ~u.



4. Negative: For any ~u, there is ~v (to be denoted −~u), such that ~u +~v = ~0 = ~v +~u.

5. One: 1~u = ~u.

6. Associativity for scalar multiplication: (ab)~u = a(b~u).

7. Distributivity in the scalar: (a + b)~u = a~u + b~u.

8. Distributivity in the vector: a(~u + ~v ) = a~u + a~v .

The complex Euclidean space Cn has the usual addition and scalar multiplication

(x1 , x2 , . . . , xn ) + (y1 , y2 , . . . , yn ) = (x1 + y1 , x2 + y2 , . . . , xn + yn ),


a(x1 , x2 , . . . , xn ) = (ax1 , ax2 , . . . , axn ).

All the material in the chapters on vector space, linear transformation, subspace
remain valid for complex vector spaces. The key is that complex numbers C has four
arithmetic operations like real numbers R, and all the properties of the arithmetic
operations remain valid. The most important is the division that is used in the
cancelation property and the proof of results such as Proposition 1.3.8.
The conjugate vector space V̄ of a complex vector space V is the same set V with
the same addition, but with the scalar multiplication modified by the conjugation

a~v¯ = ā~v .

Here ~v¯ is the vector ~v ∈ V regarded as a vector in V̄ . Then a~v¯ is the scalar
multiplication in V̄ . On the right side is the vector ā~v ∈ V regarded as a vector in
V̄ . The definition means that multiplying a in V̄ is the same as multiplying ā in V .
For example, the scalar multiplication in the conjugate Euclidean space C̄n is

a(x1 , x2 , . . . , xn ) = (āx1 , āx2 , . . . , āxn ).

Let α = {~v1 , ~v2 , . . . , ~vn } be a basis of V . Let ᾱ = {~v¯1 , ~v¯2 , . . . , ~v¯n } be the same
set considered as being inside V̄ . Then ᾱ is a basis of V̄ (see Exercise 6.2). The
“identity map” V → V̄ is

~v = x1~v1 + x2~v2 + · · · + xn~vn ∈ V 7→ x̄1~v1 + x̄2~v2 + · · · + x̄n~vn = ~v¯ ∈ V̄ .

Note that the scalar multiplication on the right is in V̄ . This means

[~v¯]ᾱ = [~v ]α .

Exercise 6.1. Prove that a complex subspace of V is also a complex subspace of V̄ . More-
over, sum and direct sum of subspaces in V and V̄ are the same.

Exercise 6.2. Suppose α is a set of vectors in V , and ᾱ is the corresponding set in V̄ .



1. Prove that α and ᾱ span the same subspace.

2. Prove that α is linearly independent in V if and only if ᾱ is linearly independent in


V̄ .

If we restrict the scalars to real numbers, then a complex vector space becomes a
real vector space. For example, the complex vector space C becomes the real vector
space R2 , with z = x + iy ∈ C identified with (x, y) ∈ R2 . In general, Cn becomes
R2n , and dimR V = 2 dimC V for any complex vector space V .
Conversely, how can a real vector space V be obtained by restricting the scalars
of a complex vector space to real numbers? Clearly, we need to add the scalar
multiplication by i. This is a special real linear operator J : V → V satisfying
J 2 = −I. Given such an operator, we define the complex multiplication by

(a + ib)~v = a~v + bJ(~v ).

Then we can verify all the axioms of the complex vector space. For example,

(a + ib)((c + id)~v ) = (a + ib)(c~v + dJ(~v ))


= a(c~v + dJ(~v )) + bJ(c~v + dJ(~v ))
= ac~v + adJ(~v ) + bcJ(~v ) + bdJ 2 (~v )
= (ac − bd)~v + (ad + bc)J(~v ),
((a + ib)(c + id))~v = ((ac − bd) + i(ad + bc))~v
= (ac − bd)~v + (ad + bc)J(~v ).

Therefore the operator J is a complex structure on the real vector space.

Proposition 6.1.3. A complex vector space is a real vector space equipped with a
linear operator J satisfying J 2 = −I.
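A minimal sketch of the proposition on R^2 (our own illustration, assuming NumPy): the rotation matrix J below satisfies J^2 = -I, and the scalar multiplication defined through J reproduces complex multiplication under the identification of a + ib with (a, b).

import numpy as np

# The complex structure on R^2 corresponding to multiplication by i:
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(J @ J, -np.eye(2)))               # J^2 = -I

def complex_mult(a, b, v):
    # the scalar multiplication (a + ib) v defined through J
    return a * v + b * (J @ v)

v = np.array([3.0, 4.0])                            # represents 3 + 4i
w = complex_mult(1.0, 2.0, v)                       # (1 + 2i)(3 + 4i) = -5 + 10i
print(w)                                            # [-5. 10.]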

Exercise 6.3. Prove that R has no complex structure. In other words, there is no real linear
operator J : R → R satisfying J 2 = −I.

Exercise 6.4. Suppose J is a complex structure on a real vector space V , and ~v ∈ V is


nonzero. Prove that ~v , J(~v ) are linearly independent.

Exercise 6.5. Suppose J is a complex structure on a real vector space V . Suppose ~v1 ,
J(~v1 ), ~v2 , J(~v2 ), . . . , ~vk , J(~vk ) are linearly independent. Prove that if ~v is not in the span
of these vectors, then ~v1 , J(~v1 ), ~v2 , J(~v2 ), . . . , ~vk , J(~vk ), ~v , J(~v ) is still linearly independent.

Exercise 6.6. Suppose J is a complex structure on a real vector space V . Prove that
there is a set α = {~v1 , ~v2 , . . . , ~vn }, and we get J(α) = {J(~v1 ), J(~v2 ), . . . , J(~vn )}, such that
α∪J(α) is a real basis of V . Moreover, prove that α is a complex basis of V (with complex
structure given by J) if and only if α ∪ J(α) is a real basis of V .

Exercise 6.7. If a real vector space has an operator J satisfying J 2 = −I, prove that the
real dimension of the space is even.

6.1.3 Complex Linear Transformation


A map L : V → W of complex vector spaces is (complex) linear if

L(~u + ~v ) = L(~u) + L(~v ), L(a~v ) = aL(~v ).

Here a can be any complex number. It is conjugate linear if

L(~u + ~v ) = L(~u) + L(~v ), L(a~v ) = āL(~v ).

Using the conjugate vector space, the following are equivalent

1. L : V → W is conjugate linear.

2. L : V̄ → W is linear.

3. L : V → W̄ is linear.

We have the vector space Hom(V, W ) of linear transformations, and also the vector
space Hom(V, W ) of conjugate linear transformations. They are related by

Hom(V, W ) = Hom(V̄ , W ) = Hom(V, W̄ ).

A conjugate linear transformation L : Cn → Cm is given by a matrix

L(~x) = L(x1~e1 + x2~e2 + · · · + xn~en )


= x̄1 L(~e1 ) + x̄2 L(~e2 ) + · · · + x̄n L(~en ) = A~x¯.

The matrix of L is
A = (L(~e1 ) L(~e2 ) . . . L(~en )) = [L] .
If we regard L as a linear transformation L : Cn → C̄m , then the formula becomes
L(~x) = A~x¯ = Ā~x, or Ā = [L]¯ . If we regard L as L : C̄n → Cm , then the formula
becomes L(~x¯) = A~x¯, or A = [L]¯ .
In general, the matrix of a conjugate linear transformation L : V → W is

[L]βα = [L(α)]β , [L(~v )]β = [L]βα [~v ]α .

Then the matrices of the two associated linear transformations are

[L : V → W̄ ]β̄α = [L]βα , [L : V̄ → W ]β ᾱ = [L]βα .

We see that conjugation on the target adds conjugation to the matrix, and the
conjugation on the source preserves the matrix.

Due to two types of linear transformations, there are two dual spaces

V ∗ = Hom(V, C), V̄ ∗ = Hom(V, C) = Hom(V, C̄) = Hom(V̄ , C).

We note that Hom(V̄ , C) means the dual space (V̄ )∗ of the conjugate space V̄ .
Moreover, we have a conjugate linear isomorphism (i.e., invertible conjugate linear
transformation)
l(~x) ∈ V ∗ 7→ ¯l(~x) = l(~x) ∈ V̄ ∗ ,
which can also be regarded as a usual linear isomorphism between the conjugate V ∗
of the dual space V ∗ and the dual V̄ ∗ of the conjugate space V̄ . In this sense, there
is no ambiguity about the notation V̄ ∗ .
A basis α = {~v1 , ~v2 , . . . , ~vn } of V has the corresponding conjugate basis ᾱ of
V̄ and the dual basis α∗ of V ∗ . Both further have the same corresponding basis
ᾱ∗ = {~v¯1∗ , ~v¯2∗ , . . . , ~v¯n∗ } of V̄ ∗ , given by

~v¯i∗ (x1~v1 + x2~v2 + · · · + xn~vn ) = x̄i = ~vi∗ (~x).

We can use this to justify the matrices of linear transformations

[L]β̄α = [L(α)]β̄ = β̄ ∗ (L(α)) = β ∗ (L(α)) = [L(α)]β = [L]βα ,


[L]β ᾱ = [L(ᾱ)]β = [L(α)]β = [L]βα .

In the second line, L in L(ᾱ) means L : V̄ → W , and L in L(α) means L : V → W .


Therefore they are the same in W .
A conjugate linear transformation L : V → W induces a conjugate linear trans-
formation L∗ : V ∗ → W ∗ . We have (aL)∗ = aL∗ , which means that the map

Hom(V, W ) → Hom(W ∗ , V ∗ )

is a linear transformation. Then the equivalent viewpoints

Hom(V, W ) → Hom(W̄ ∗ , V ∗ ) = Hom(W ∗ , V̄ ∗ )

are conjugate linear, which means (aL)∗ = āL∗ .

Exercise 6.8. What is V̄¯ ? What is V̄ ∗ ? What is Hom(V̄ , W̄ )? What is Hom(V̄ , W )?

Exercise 6.9. We have Hom(V, W ) = Hom(V̄ , W̄ )? What is the relation between [L]βα and
[L]β̄ ᾱ ?

Exercise 6.10. What is the composition of a (conjugate) linear transformation with another
(conjugate) linear transformation? Interpret this as induced (conjugate) linear transfor-
mations L∗ , L∗ and repeat Exercises 2.12, 2.13.

6.1.4 Complexification and Conjugation


For any real vector space W , we may construct its complexification V = W ⊕ iW .
Here iW is a copy of W in which a vector ~w ∈ W is denoted i~w. Then V becomes
a complex vector space by

(\vec{w}_1 + i\vec{w}_2) + (\vec{w}_1' + i\vec{w}_2') = (\vec{w}_1 + \vec{w}_1') + i(\vec{w}_2 + \vec{w}_2'),
(a + ib)(\vec{w}_1 + i\vec{w}_2) = (a\vec{w}_1 - b\vec{w}_2) + i(b\vec{w}_1 + a\vec{w}_2).

A typical example is Cn = Rn ⊕ iRn .


Conversely, when is a complex vector space V the complexification of a real
vector space W ? This is the same as finding a real subspace W ⊂ V (i.e., ~u, ~v ∈ W
and a, b ∈ R implying a~u + b~v ∈ W ), such that V = W ⊕ iW . By Exercise 6.6, for
any (finite dimensional) complex vector space V , such W is exactly the real span

SpanR α = {x1~v1 + x2~v2 + · · · + xn~vn : xi ∈ R}


= R~v1 + R~v2 + · · · + R~vn

of a complex basis α = {~v1 , ~v2 , . . . , ~vn } of V . For example, if we take the standard
basis  of V = Cn , then we get W = SpanR  = Rn . This makes Cn into the
complexification of Rn .
Due to the many possible choices of complex basis α, the subspace W is not
unique. For example, V = Cn is also the complexification of W = iRn (real span of
i = {i~e1 , i~e2 , . . . , i~en }). Next we show that such W are in one-to-one correspondence
with conjugation operators on V .
A conjugation of a complex vector space is a conjugate linear operator C : V → V
satisfying C 2 = I. The basic example of conjugation is

\overline{(z_1, z_2, \ldots, z_n)} = (\bar{z}_1, \bar{z}_2, \ldots, \bar{z}_n) : C^n \to C^n.

If V = W ⊕ iW , then

C(\vec{w}_1 + i\vec{w}_2) = \vec{w}_1 - i\vec{w}_2

is a conjugation on V . Conversely, if C is a conjugation on V , then we introduce

W = \mathrm{Re}\,V = \{\vec{w} : C(\vec{w}) = \vec{w}\}.

For \vec{u} = i\vec{w} \in iW, we have C(\vec{u}) = C(i\vec{w}) = \bar{i}\,C(\vec{w}) = -i\vec{w} = -\vec{u}, and we get

iW = \mathrm{Im}\,V = \{\vec{u} : C(\vec{u}) = -\vec{u}\}.

Moreover, any ~v ∈ V has the unique decomposition

\vec{v} = \vec{w}_1 + i\vec{w}_2, \quad \vec{w}_1 = \tfrac{1}{2}(\vec{v} + C(\vec{v})) \in W, \quad i\vec{w}_2 = \tfrac{1}{2}(\vec{v} - C(\vec{v})) \in iW.

This gives the direct sum V = W ⊕ iW = ReV ⊕ ImV .



For a given complexification V = W ⊕ iW , or equivalently a conjugation C, we


will not use C and instead denote the conjugation by

w
~ 1 + iw ~ 1 − iw
~2 = w ~ 2, w ~ 2 ∈ W.
~ 1, w

A complex subspace H ⊂ V has the corresponding conjugate subspace

H̄ = {~v¯ : ~v ∈ H} = {w
~ 1 − iw
~2 : w ~ 2 ∈ W, w
~ 1, w ~ 2 ∈ H}.
~ 1 + iw

Note that we also use H̄ to denote the conjugate space of H. For the given conju-
gation on V , the two notations are naturally isomorphic.

Example 6.1.1. Let C : C → C be a conjugation. Then we have a fixed complex


number c = C(1), and C(z) = C(z1) = z̄C(1) = cz̄. The formula satisfies the first
two properties of the conjugation. Moreover, we have C 2 (z) = C(cz̄) = ccz̄ = |c|2 z.
Therefore the third condition means |c| = 1. We find that conjugations on C are in
one-to-one correspondence with points c = eiθ on the unit circle.
For C(z) = eiθ z̄, we have (the real part with respect to c)

Reθ C = {z : eiθ z̄ = z} = {reρ : eiθ e−ρ = eρ , r ≥ 0}


θ
= {reρ : 2ρ = θ mod 2π, r ≥ 0} = Rei 2 .

This is the real line of angle 2θ . The imaginary part


θ π θ θ+π
Imθ C = iRei 2 = e 2 Rei 2 = Rei 2

θ+π
is the real line of angle 2
, and is orthogonal to Reθ C.

Exercise 6.11. Suppose V = W ⊕ iW is a complexification. What is the conjugation with


respect to the complexification V = iW ⊕ W ?

Exercise 6.12. Suppose V = W ⊕ iW is a complexification, and α ⊂ W is a set of real


vectors.

1. Prove that SpanC α is the complexification of SpanR α.

2. Prove that α is R-linearly independent if and only if it is C-linearly independent.

3. Prove that α is an R-basis of W if and only if it is a C-basis of V .

Exercise 6.13. For a complex subspace H of a complexification V = W ⊕ iW , prove that


H̄ = H if and only if H = U ⊕ iU for a real subspace U of W .

Exercise 6.14. Suppose V is a complex vector space with conjugation. Suppose α is a set
of vectors, and ᾱ is the set of conjugations of vectors in α.

1. Prove that Spanᾱ = Spanα.


2. Prove that α is linearly independent if and only if ᾱ is linearly independent.
3. Prove that α is a basis of V if and only if ᾱ is a basis of V .

Exercise 6.15. Prove that


RanĀ = RanA, NulĀ = NulA, rankĀ = rankA.

Consider a complex linear transformation between complexifications of real vec-


tor spaces
L : V ⊕ iV → W ⊕ iW.
We have R-linear transformations L1 = ReL, L2 = ImL : V → W given by
L(~v ) = L1 (~v ) + iL2 (~v ), ~v ∈ V.
Then the whole L is determined by L1 , L2
L(~v1 + i~v2 ) = L(~v1 ) + iL(~v2 )
= L1 (~v1 ) + iL2 (~v1 ) + i(L1 (~v2 ) + iL2 (~v2 ))
= L1 (~v1 ) − L2 (~v2 ) + i(L2 (~v1 ) + L1 (~v2 )).
This means that, in the block form, we have (using iV ≅ V and iW ≅ W)

L = \begin{pmatrix} L_1 & -L_2 \\ L_2 & L_1 \end{pmatrix} : V \oplus (i)V \to W \oplus (i)W.
Let α and β be R-bases of V and W . Then we have real matrices [L1 ]βα and
[L2 ]βα . Moreover, we know α and β are also C-bases of V ⊕ iV and W ⊕ iW , and
[L]βα = [L1 ]βα + i[L2 ]βα .
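As an illustration (our own sketch, assuming NumPy), the block form can be checked numerically: a complex matrix acts on the stacked real vector (Re v, Im v) through the real block matrix with blocks L1 = Re L and L2 = Im L.

import numpy as np

def realification(L):
    # the real 2m x 2n block matrix [[L1, -L2], [L2, L1]] of a complex matrix L
    L1, L2 = L.real, L.imag
    return np.block([[L1, -L2], [L2, L1]])

L = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
v = np.array([1 - 1j, 2 + 3j])

# complex multiplication L v agrees with the block matrix acting on (Re v, Im v)
real_v = np.concatenate([v.real, v.imag])
real_Lv = realification(L) @ real_v
Lv = L @ v
print(np.allclose(real_Lv, np.concatenate([Lv.real, Lv.imag])))   # True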
The conjugation on the target space also gives the conjugation of L

L̄(~v ) = L(~v¯) : V ⊕ iV → W ⊕ iW.


We have
L̄(~v ) = L1 (~v ) − iL2 (~v ), ~v ∈ V,
and
[L̄]βα = [L1 ]βα − i[L2 ]βα = [L]βα .

Exercise 6.16. Prove that a complex linear transformation L : V ⊕ iV → W ⊕ iW satisfies


L(W ) ⊂ W if and only if it preserves the conjugation: L(~v ) = L(~v ) (this is the same as
L = L̄).

Exercise 6.17. Suppose V = W ⊕ iW = W 0 ⊕ iW 0 are two complexifications. What can


you say about the real and imaginary parts of the identity I : W ⊕ iW → W 0 ⊕ iW 0 ?

6.1.5 Conjugate Pair of Subspaces


We study a complex subspace H of a complixification V = W ⊕ iW satisfying
V = H ⊕ H̄, where H̄ is the conjugation subspace.
The complex subspace H has a complex basis

α = {~u1 − iw
~ 1 , ~u2 − iw
~ 2 , . . . , ~um − iw
~ m }, ~ j ∈ W.
~uj , w

Then ᾱ = {~u1 + iw
~ 1 , ~u2 + iw ~ m } is a complex basis of H̄, and α ∪ ᾱ is
~ 2 , . . . , ~um + iw
a (complex) basis of V = H ⊕ H̄. We introduce the real and imaginary parts of ᾱ

β = {~u1 , ~u2 , . . . , ~um }, β † = {w


~ 1, w ~ m }.
~ 2, . . . , w

Since vectors in α ∪ ᾱ and vectors in β ∪ β † are (complex) linear combinations of


each other, and the two sets have the same number of vectors, we know β ∪ β † is
also a complex basis of V .
Since the vectors in β ∪ β † are in W , the set is actually a (real) bases of the real
subspace W . We introduce real subspaces

E = Spanβ, E † = Spanβ † , W = E ⊕ E †.

We also introduce a real isomorphism by taking ~uj to ~wj

† : E ≅ E † , ~u†j = ~wj .

Because ~uj − i~u†j = ~uj − i ~wj ∈ α ⊂ H, the isomorphism has the property that ~u − i~u† ∈ H for any ~u ∈ E.

Proposition 6.1.4. Suppose V = W ⊕ iW = H ⊕ H̄ for a real vector space W and


a complex subspace H. Then there are real subspaces E, E † and an isomorphism
~u ↔ ~u† between E and E † , such that

W = E ⊕ E †, H = {(~u1 + ~u†2 ) + i(~u2 − ~u†1 ) : ~u1 , ~u2 ∈ E}.

Conversely, any complex subspace H constructed in this way satisfies V = H ⊕ H̄.

We note that E and E † are not unique. For example, given a basis α of H, the
following is also a basis of H

α′ = {i(~u1 − i ~w1 ), ~u2 − i ~w2 , . . . , ~um − i ~wm }
   = { ~w1 + i~u1 , ~u2 − i ~w2 , . . . , ~um − i ~wm }.

Therefore we may also choose

E = R ~w1 + R~u2 + · · · + R~um , E † = R(−~u1 ) + R ~w2 + · · · + R ~wm ,

in our construction and take ~w1† = −~u1 , ~u†j = ~wj for j ≥ 2.

Proof. First we need to show that our construction gives the formula of H in the
proposition. For any ~u1 , ~u2 ∈ E, we have
~h(~u1 , ~u2 ) = ~u1 + ~u†2 + i(~u2 − ~u†1 ) = (~u1 − i~u†1 ) + i(~u2 − i~u†2 ) ∈ H.

Conversely, we want to show that any ~h ∈ H is of the form ~h(~u1 , ~u2 ). We have the
decomposition
~h = ~u + i ~w, ~u = ~u1 + ~u†2 ∈ W, ~w ∈ W, ~u1 , ~u2 ∈ E.

Then

~w − ~u2 + ~u†1 = −i(~h − ~h(~u1 , ~u2 )) ∈ H.

However, we also know ~w − ~u2 + ~u†1 ∈ W . By

~v ∈ H ∩ W =⇒ ~v = ~v̄ ∈ H̄ =⇒ ~v ∈ H ∩ H̄ = {~0},

we conclude ~w − ~u2 + ~u†1 = ~0, and

~h = ~u + i ~w = ~u1 + ~u†2 + i(~u2 − ~u†1 ) = ~h(~u1 , ~u2 ).

Now we show the converse, that H as constructed in the proposition is a complex


subspace satisfying V = H ⊕ H̄. First, since E and E † are real subspaces, H is closed
under addition and scalar multiplication by real numbers. To show H is a complex
subspace, therefore, it is sufficient to prove ~h ∈ H implies i~h ∈ H. This follows from

i((~u1 + ~u†2 ) + i(~u2 − ~u†1 )) = −~u2 + ~u†1 + i(~u1 + ~u†2 ) = ~h(−~u2 , ~u1 ).

Next we prove W ⊂ H + H̄, which implies iW ⊂ H + H̄ and V = W + iW ⊂


H + H̄. For ~u ∈ E, we have

~u = 1/2 (~u − i~u† ) + 1/2 \overline{(~u − i~u† )} = ~h( 1/2 ~u, 0) + \overline{~h( 1/2 ~u, 0)} ∈ H + H̄,

~u† = 1/2 (~u† + i~u) + 1/2 \overline{(~u† + i~u)} = ~h(0, 1/2 ~u) + \overline{~h(0, 1/2 ~u)} ∈ H + H̄.

This shows that E ⊂ H + H̄ and E † ⊂ H + H̄, and therefore W = E + E † ⊂ H + H̄.


Finally, we prove that H + H̄ is a direct sum by showing that H ∩ H̄ = {~0}. A vector in the intersection is of the form ~h(~u1 , ~u2 ) = \overline{~h( ~w1 , ~w2 )}. By V = E ⊕ E † ⊕ iE ⊕ iE † , the equality gives

~u1 = ~w1 , ~u†2 = ~w2† , ~u2 = − ~w2 , −~u†1 = ~w1† .

Since † is an isomorphism, ~u†2 = ~w2† and −~u†1 = ~w1† imply ~u2 = ~w2 and −~u1 = ~w1 . Combined with ~u1 = ~w1 and ~u2 = − ~w2 , all component vectors vanish.

Exercise 6.18. Prove that E in Proposition 6.1.4 uniquely determines E † by

E † = { ~w ∈ W : ~u − i ~w ∈ H for some ~u ∈ E}.

In the setup of Proposition 6.1.4, we consider a linear operator L : V → V


satisfying L(W ) ⊂ W and L(H) ⊂ H. The condition L(W ) ⊂ W means L is a real
operator with respect to the complex conjugation structure on V . The property is
equivalent to L commuting with the conjugation operation (see Exercise 6.16). The
condition L(H) ⊂ H means that H is an invariant subspace of L.
By L(H) ⊂ H and the description of H in Proposition 6.1.4, there are linear
transformations L1 , L2 : E → E defined by

L(~u) − iL(~u† ) = L(~u − i~u† ) = (L1 (~u) + L2 (~u)† ) + i(L2 (~u) − L1 (~u)† ), ~u ∈ E.

By V = W ⊕ iW and L(~u), L(~u† ) ∈ L(W ) ⊂ W , we get

L(~u) = L1 (~u) + L2 (~u)† , L(~u† ) = −L2 (~u) + L1 (~u)† ,

or

L(~u1 , ~u†2 ) = (L1 (~u1 ), L2 (~u1 )† ) + (−L2 (~u2 ), L1 (~u2 )† ) = [[L1 , −L2 ], [L2 , L1 ]] (~u1 , ~u†2 ).

In other words, with respect to the direct sum W = E ⊕ E † and using E ≅ E † , the restriction L|W : W → W has the block matrix form

L|W = [[L1 , −L2 ], [L2 , L1 ]].
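As a concrete illustration of this block pattern (not from the text, and assuming numpy is available), multiplication by a complex number a + bi, viewed as an R-linear map on R² with basis {1, i}, has exactly the matrix [[a, −b], [b, a]], i.e. L1 = a and L2 = b:

    import numpy as np

    # Multiplication by lam = a + bi on C, viewed as an R-linear map on R^2
    # with basis {1, i}; its matrix is [[a, -b], [b, a]].
    a, b = 2.0, 3.0
    lam = complex(a, b)
    M = np.array([[a, -b],
                  [b,  a]])

    z = complex(1.5, -0.5)                         # test vector in C
    real_image = M @ np.array([z.real, z.imag])    # block matrix acting on (Re z, Im z)
    complex_image = lam * z                        # the complex multiplication itself

    # Both computations give the same point (4.5, 3.5).
    assert np.allclose(real_image, [complex_image.real, complex_image.imag])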

6.1.6 Complex Inner Product


The complex dot product between ~x, ~y ∈ Cn cannot be x1 y1 + x2 y2 + · · · + xn yn because complex numbers do not satisfy x1² + x2² + · · · + xn² ≥ 0. The correct dot product that has the positivity property is

(x1 , x2 , . . . , xn ) · (y1 , y2 , . . . , yn ) = x1 ȳ1 + · · · + xn ȳn = ~x T ~ȳ .
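A quick numerical check of this positivity (an illustration assuming numpy; not part of the text):

    import numpy as np

    x = np.array([1 + 2j, 3 - 1j])
    y = np.array([2 - 1j, 1j])

    # Hermitian dot product x . y = sum x_i * conj(y_i), as in the formula above.
    dot = np.sum(x * np.conj(y))

    # Positivity: x . x = sum |x_i|^2 is real and nonnegative (here 5 + 10 = 15).
    norm_sq = np.sum(x * np.conj(x))
    print(dot, norm_sq.real)

    # The naive bilinear sum x1*y1 + x2*y2 applied to (x, x) is generally a
    # non-real complex number, so it cannot serve as a length.
    print(np.sum(x * x))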

More generally, a bilinear function b on V × V satisfies b(i~v , i~v ) = i²b(~v , ~v ) = −b(~v , ~v ). Therefore the bilinearity is contradictory to the positivity.

Definition 6.1.5. A (Hermitian) inner product on a complex vector space V is a


function
h~u, ~v i : V × V → C,
such that the following are satisfied.

1. Sesquilinearity: ⟨a~u + b~u′ , ~v ⟩ = a⟨~u, ~v ⟩ + b⟨~u′ , ~v ⟩, ⟨~u, a~v + b~v ′ ⟩ = ā⟨~u, ~v ⟩ + b̄⟨~u, ~v ′ ⟩.

2. Conjugate symmetry: ⟨~v , ~u⟩ = \overline{⟨~u, ~v ⟩}.

3. Positivity: h~u, ~ui ≥ 0 and h~u, ~ui = 0 if and only if ~u = ~0.



The sesquilinear (sesqui is Latin for “one and a half”) property is the linearity in the first vector and the conjugate linearity in the second vector. Using the conjugate vector space V̄ , this means that the function is bilinear on V × V̄ .
The length of a vector is still ‖~v ‖ = √⟨~v , ~v ⟩. Due to the complex value of the inner product, the angle between nonzero vectors is not defined, and the area is not defined. The Cauchy-Schwarz inequality (Proposition 4.1.2) still holds, so that the length still has the three properties in Proposition 4.1.3.

Exercise 6.19. Suppose a function b(~u, ~v ) is linear in ~u and is conjugate symmetric. Prove
that b(~u, ~v ) is conjugate linear in ~v .

Exercise 6.20. Prove the complex version of the Cauchy-Schwarz inequality.

Exercise 6.21. Prove the polarisation identity in the complex inner product space (compare Exercise 4.12)

⟨~u, ~v ⟩ = (1/4)(‖~u + ~v ‖² − ‖~u − ~v ‖² + i‖~u + i~v ‖² − i‖~u − i~v ‖²).

Exercise 6.22. Prove the parallelogram identity in the complex inner product space (com-
pare Exercise 4.14)
‖~u + ~v ‖² + ‖~u − ~v ‖² = 2(‖~u‖² + ‖~v ‖²).

Exercise 6.23. Prove that h~u, ~v iV̄ = h~v , ~uiV is a complex inner product on the conjugate
space V̄ . The “identity map” V → V̄ is a conjugate linear isomorphism that preserves the
length, but changes the inner product by conjugation.

The orthogonality ~u ⊥ ~v is still defined by h~u, ~v i = 0, and hence the concepts of


orthogonal set, orthonormal set, orthonormal basis, orthogonal complement, orthog-
onal projection, etc. Due to conjugate linearity in the second variable, one needs to
be careful with the order of vectors in formulae. For example, the formula for the
coefficient in Proposition 4.2.8 must have ~x as the first vector in inner product.
The Gram-Schmidt process still works, with the formula for the real Gram-
Schmidt process still valid. Consequently, any finite dimensional complex inner
product space is isometrically isomorphic to the complex Euclidean space with the
dot product.
The inner product induces a linear isomorphism
~v ∈ V 7→ h~v , ·i ∈ V̄ ∗
and a conjugate linear isomorphism
~v ∈ V 7→ h·, ~v i ∈ V ∗ .
A basis is self-dual if it is mapped to its dual basis. Under either isomorphism, a
basis is self-dual if and only if it is orthonormal.

A linear transformation L : V → W has the dual linear transformation L∗ : W ∗ → V ∗ . Using the conjugate linear isomorphisms induced by the inner product, the dual L∗ is translated into the adjoint L∗ : W → V , defined by

⟨L(~v ), ~w⟩ = ⟨~v , L∗ ( ~w)⟩.

In the commutative diagram relating the two, the top map is the dual L∗ : W ∗ → V ∗ , the bottom map is the adjoint L∗ : W → V , and the vertical maps are the conjugate linear isomorphisms ~w ↦ ⟨·, ~w⟩ : W ≅ W ∗ and ~v ↦ ⟨·, ~v ⟩ : V ≅ V ∗ .
Due to the combination of two conjugate linear isomorphisms, the adjoint L∗ is a
(non-conjugate, i.e., usual) complex linear transformation.
The adjoint satisfies
I ∗ = I, (L + K)∗ = L∗ + K ∗ , (aL)∗ = āL∗ , (L ◦ K)∗ = K ∗ ◦ L∗ , (L∗ )∗ = L.
In particular, the adjoint is a conjugate linear isomorphism
L ↦ L∗ : Hom(V, W ) ≅ Hom(W, V ).
The conjugate linearity can be verified directly

⟨~v , (aL)∗ ( ~w)⟩ = ⟨aL(~v ), ~w⟩ = a⟨L(~v ), ~w⟩ = a⟨~v , L∗ ( ~w)⟩ = ⟨~v , āL∗ ( ~w)⟩.
Let α = {~v1 , ~v2 , . . . , ~vn } and β = { ~w1 , ~w2 , . . . , ~wm } be bases of V and W . By Proposition 2.3.1, we have
[L∗ : W ∗ → V ∗ ]α∗ β ∗ = [L]Tβα .
If we further know that α and β are orthonormal, then α ⊂ V is taken to α∗ ⊂ V ∗ and β ⊂ W is taken to β ∗ ⊂ W ∗ . The calculation in Section 6.1.3 shows that the composition with the conjugate linear isomorphism W ≅ W ∗ does not change the matrix, but the composition with the conjugate linear isomorphism V ≅ V ∗ adds complex conjugation to the matrix. Therefore we get

[L∗ : W → V ]αβ = \overline{[L∗ : W ∗ → V ∗ ]α∗ β ∗ } = \overline{[L]Tβα } = [L]∗βα .
Here we denote the conjugate transpose of a matrix by A∗ = ĀT .
For the special case of complex Euclidean spaces with dot products and the
standard bases, the matrix for the adjoint can be verified directly
A~x · ~y = (A~x)T ~ȳ = ~x T AT ~ȳ = ~x T \overline{A∗ ~y } = ~x · A∗ ~y .
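A numerical sanity check of the identity above, with the adjoint computed as the conjugate transpose (a sketch assuming numpy; the helper herm is made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # L : C^2 -> C^3
    x = rng.normal(size=2) + 1j * rng.normal(size=2)
    y = rng.normal(size=3) + 1j * rng.normal(size=3)

    def herm(u, v):
        # Hermitian dot product u . v = sum u_i * conj(v_i), linear in u.
        return np.sum(u * np.conj(v))

    A_star = A.conj().T          # adjoint = conjugate transpose

    # <A x, y> = <x, A* y>
    print(np.allclose(herm(A @ x, y), herm(x, A_star @ y)))   # True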
A linear transformation L : V → W is an isometry (i.e., preserves the inner
product) if and only if L∗ L = I. In case dim V = dim W , this implies L is invertible,
L−1 = L∗ , and LL∗ = I.
In terms of matrix, a square complex matrix U is called a unitary matrix if it
satisfies U ∗ U = I. A real unitary matrix is an orthogonal matrix. A unitary matrix
is always invertible with U −1 = U ∗ . Unitary matrices are precisely the matrices of
isometric isomorphisms with respect to orthonormal bases.

Exercise 6.24. Prove

(RanL)⊥ = KerL∗ , (RanL∗ )⊥ = KerL, (KerL)⊥ = RanL∗ , (KerL∗ )⊥ = RanL.

Exercise 6.25. Prove that the formula in Exercise 4.58 extends to linear operator L on
complex inner product space

hL(~u), ~v i + hL(~v ), ~ui = hL(~u + ~v ), ~u + ~v i − hL(~u), ~ui − hL(~v ), ~v i.

Then prove that the following are equivalent


1. hL(~v ), ~v i is real for all ~v .

2. hL(~u), ~v i + hL(~v ), ~ui is real for all ~u, ~v .

3. L = L∗ .

Exercise 6.26. Prove that ⟨L(~v ), ~v ⟩ is imaginary for all ~v if and only if L∗ = −L.

Exercise 6.27. Define the adjoint L∗ : W → V of a conjugate linear transformation L : V →


~ = h~v , L∗ (w)i.
W by hL(~v ), wi ~ Prove that L∗ is conjugate linear, and find the matrix of L∗
with respect to orthonormal bases.

Exercise 6.28. Prove that the following are equivalent for a linear transformation L : V →
W.
1. L is an isometry.

2. L preserves length.

3. L takes an orthonormal basis to an orthonormal set.

4. L∗ L = I.

5. The matrix A of L with respect to orthonormal bases of V and W satisfies A∗ A = I.

Exercise 6.29. Prove that the columns of a complex matrix A form an orthogonal set with
respect to the dot product if and only if A∗ A is diagonal. Moreover, the columns form an
orthonormal set if and only if A∗ A = I.

Exercise 6.30. Prove that P is an orthogonal projection if and only if P 2 = P = P ∗ .

Suppose W is a real vector space with real inner product. Then we may extend
the inner product to the complexification V = W ⊕ iW in the unique way

⟨~u1 + i~u2 , ~w1 + i ~w2 ⟩ = ⟨~u1 , ~w1 ⟩ + ⟨~u2 , ~w2 ⟩ + i⟨~u2 , ~w1 ⟩ − i⟨~u1 , ~w2 ⟩.

In particular, the length satisfies the Pythagorean theorem

‖ ~w1 + i ~w2 ‖² = ‖ ~w1 ‖² + ‖ ~w2 ‖².

The other property is that ~v = ~u + i ~w and ~v̄ = ~u − i ~w are orthogonal if and only if

0 = ⟨~u + i ~w, ~u − i ~w⟩ = ‖~u‖² − ‖ ~w‖² + 2i⟨~u, ~w⟩.

This means that

‖~u‖ = ‖ ~w‖, ⟨~u, ~w⟩ = 0.

In other words, ~u, ~w is a scalar multiple of an orthonormal pair.
Conversely, we may restrict the inner product of a complexification V = W ⊕ iW
to the real subspace W . If the restriction always takes real values, then it is an inner product on W , and the inner product on V is the complexification of the inner product on W .
Problem: What is the condition for the restriction of the inner product to be real?

Exercise 6.31. Suppose H1 , H2 are subspaces of real inner product space V . Prove that
H1 ⊕iH1 , H2 ⊕iH2 are orthogonal subspaces in V ⊕iV if and only if H1 , H2 are orthogonal
subspaces of V .

6.2 Field and Polynomial


6.2.1 Field
The scalars are external to vector spaces. In the theory of vector spaces we devel-
oped so far, only the four arithmetic operations of the scalars are used until the
introduction of the inner product. If we regard any system with four arithmetic
operations as “generalised numbers”, then the similar linear algebra theory without
inner product can be developed over such generalised numbers.

Definition 6.2.1. A field is a set with arithmetic operations +, −, ×, ÷ satisfying


the usual properties.

In fact, only two operations +, × are required in a field, and −, ÷ are regarded
as the “opposite” or “inverse” of the two operations. Here are the axioms for + and
×.

1. Commutativity: a + b = b + a, ab = ba.

2. Associativity: (a + b) + c = a + (b + c), (ab)c = a(bc).

3. Distributivity: a(b + c) = ab + ac.

4. Unit: There are 0, 1, such that a + 0 = a = 0 + a, a1 = a = 1a.

5. Inverse: For any a, there is b, such that a + b = 0 = b + a. For any a 6= 0,


there is c, such that ac = 1 = ca.

The real numbers R, the complex numbers C and the rational numbers Q are
examples of fields.
Example 6.2.1. The field of √2-rational numbers is Q[√2] = {a + b√2 : a, b ∈ Q}. The four arithmetic operations are

(a + b√2) + (c + d√2) = (a + c) + (b + d)√2,
(a + b√2) − (c + d√2) = (a − c) + (b − d)√2,
(a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2,
(a + b√2)/(c + d√2) = (a + b√2)(c − d√2)/((c + d√2)(c − d√2)) = (ac − 2bd)/(c² − 2d²) + ((−ad + bc)/(c² − 2d²))√2.

The field is a subfield of R, just like a subspace.
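The arithmetic above is easy to implement; here is a minimal sketch assuming Python's fractions module (the class name QSqrt2 is made up for illustration):

    from fractions import Fraction

    class QSqrt2:
        """Element a + b*sqrt(2) of Q[sqrt(2)], with a, b rational."""

        def __init__(self, a, b=0):
            self.a, self.b = Fraction(a), Fraction(b)

        def __add__(self, o):
            return QSqrt2(self.a + o.a, self.b + o.b)

        def __sub__(self, o):
            return QSqrt2(self.a - o.a, self.b - o.b)

        def __mul__(self, o):
            # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
            return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                          self.a * o.b + self.b * o.a)

        def __truediv__(self, o):
            # multiply by the conjugate c - d sqrt2 and divide by c^2 - 2 d^2
            n = o.a * o.a - 2 * o.b * o.b
            return QSqrt2((self.a * o.a - 2 * self.b * o.b) / n,
                          (self.b * o.a - self.a * o.b) / n)

        def __repr__(self):
            return f"{self.a} + {self.b}*sqrt(2)"

    x = QSqrt2(1, 2)          # 1 + 2*sqrt(2)
    y = QSqrt2(3, -1)         # 3 - sqrt(2)
    print(x * y, x / y)       # both results stay inside Q[sqrt(2)]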



In general, for any integer p with no square factors, we have the field of √p-rational numbers Q[√p] = {a + b√p : a, b ∈ Q}, with

(a + b√p) + (c + d√p) = (a + c) + (b + d)√p,
(a + b√p)(c + d√p) = (ac + pbd) + (ad + bc)√p.
Example 6.2.2. We have the field of (√2, √3)-rational numbers

Q[√2, √3] = {a + b√2 + c√3 + d√6 : a, b, c, d ∈ Q}.

The field can be understood as similar to (Q[√2])[√3], where Q[√2] plays the role of Q in Q[√3]

(Q[√2])[√3] = {a + b√3 : a, b ∈ Q[√2]}
            = {(a1 + a2√2) + (b1 + b2√2)√3 : a1 , a2 , b1 , b2 ∈ Q}
            = {a + b√2 + c√3 + d√6 : a, b, c, d ∈ Q}.

In particular, we know how to divide numbers in this field. The key is the reciprocal

1/(a + b√2 + c√3 + d√6) = [(a + b√2) − (c + d√2)√3] / {[(a + b√2) + (c + d√2)√3][(a + b√2) − (c + d√2)√3]}
                        = [(a + b√2) − (c + d√2)√3] / [(a + b√2)² − 3(c + d√2)²].

The rest of the division calculation is similar to Q[√2].

Example 6.2.3. For any integer p with no cubic factors, we have the field of p^{1/3}-rational numbers

Q[p^{1/3}] = {a + b p^{1/3} + c p^{2/3} : a, b, c ∈ Q}.

The +, −, × operations in Q[p^{1/3}] are obvious. We only need to explain ÷. Specifically, we need to explain the existence of the reciprocal of x = a + b p^{1/3} + c p^{2/3} ≠ 0. The idea is that Q[p^{1/3}] is a Q-vector space spanned by 1, p^{1/3}, p^{2/3}. Therefore the four vectors 1, x, x², x³ are Q-linearly dependent (this is the linear algebra theory prior to inner product)

a0 + a1 x + a2 x² + a3 x³ = 0, ai ∈ Q.

If a0 ≠ 0, then the equality implies

1/x = −a1 /a0 − (a2 /a0 )x − (a3 /a0 )x²
    = −a1 /a0 − (a2 /a0 )(a + b p^{1/3} + c p^{2/3}) − (a3 /a0 )(a + b p^{1/3} + c p^{2/3})²
    = c0 + c1 p^{1/3} + c2 p^{2/3}, c0 , c1 , c2 ∈ Q.

If a0 = 0, then we get a1 + a2 x + a3 x² = 0 and ask whether a1 ≠ 0. The process goes on and eventually gives the formula for 1/x in all cases.

Example 6.2.4. The field of integers modulo a prime number 5 is the set of mod 5
congruence classes
Z5 = {0̄, 1̄, 2̄, 3̄, 4̄}.
For example,
2̄ = {2 + 5k : k ∈ Z} = {. . . , −8, −3, 2, 7, 12, . . . }
is the set of all integers n such that n − 2 is divisible by 5. In particular, n̄ = 0̄
means n is divisible by 5.
The addition and multiplication are the obvious operations, such as
3̄ + 4̄ = 3 + 4 = 7̄ = 2̄, 3̄ · 4̄ = 3 · 4 = 12 = 2̄.
The two operations satisfy the usual properties. The addition has the obvious
opposite operation of subtraction, such as 3̄ − 4̄ = 3 − 4 = −1 = 4̄.
The division is a bit more complicated. This means that, for any x̄ ∈ Z∗5 = Z5 − 0̄,
we need to find ȳ ∈ Z5 satisfying x̄ · ȳ = 1̄. Then we have ȳ = x̄−1 . To find ȳ, we
consider the following map
Z5 → Z5 , ȳ ↦ x̄ȳ = \overline{xy}.

Since x̄ ≠ 0̄ means that x is not divisible by 5, the following shows the map is one-to-one

x̄ȳ = x̄z̄ =⇒ \overline{x(y − z)} = \overline{xy} − \overline{xz} = x̄ȳ − x̄z̄ = 0̄
=⇒ x(y − z) is divisible by 5
=⇒ y − z is divisible by 5
=⇒ ȳ = z̄.

Here the reason for the third =⇒ is that x is not divisible by 5, and 5 is a prime
number. Since the source and target of the map ȳ ↦ \overline{xy} are finite sets of the same size, the one-to-one property implies the onto property. In particular, we have \overline{xy} = 1̄ for some ȳ.
In general, for any prime number p, we have the field

Fp = {0̄, 1̄, 2̄, . . . , p − 1}.

We use F instead of Z to emphasise field.
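The inverse-finding argument above is also an algorithm; a small sketch in Python (the helper inverse_mod is made up for illustration; pow with exponent −1 requires Python 3.8+):

    p = 5

    # For nonzero x in F_p, the map y -> x*y (mod p) is one-to-one, hence onto,
    # so some y satisfies x*y = 1 (mod p).  Brute-force search finds it.
    def inverse_mod(x, p):
        for y in range(1, p):
            if (x * y) % p == 1:
                return y
        raise ValueError("no inverse: x is divisible by p")

    print({x: inverse_mod(x, p) for x in range(1, p)})
    # {1: 1, 2: 3, 3: 2, 4: 4}, e.g. 2*3 = 6 = 1 (mod 5)

    # Python's built-in pow computes the same modular inverses.
    print(all(pow(x, -1, p) == inverse_mod(x, p) for x in range(1, p)))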

Exercise 6.32. A homomorphism of fields is a nonzero map between two fields preserving
the arithmetic operations. Prove that a homomorphism of fields is always one-to-one.
Exercise 6.33. Prove that the conjugation a + b√2 ↦ a − b√2 is a homomorphism of Q[√2] to itself. Moreover, this is the only non-trivial self-homomorphism of Q[√2].

Exercise 6.34. Find all the self-homomorphisms of Q[√2, √3].

Exercise 6.35. Show that it makes sense to introduce the field F5 [√2]. When would you have difficulty introducing Fp [√n]? Here p is a prime number and 1 ≤ n < p.

Exercise 6.36. Prove that (x + y)p = xp + y p in Fp . In particular, x 7→ xp is a self-


homomorphism of Fp . In fact, this is a self-homomorphism of any field of characteristic
p.

Given a field F, we consider the natural map Z → F that takes n to n̄ = 1 + 1 + · · · + 1 (n times), where 1 is the unit element of F. The map preserves addition and multiplication. If the map is injective, then we have a copy of Z inside F. By using the division in F, we get Q ⊂ F. In this case, we say F has characteristic 0.
If the map is not injective, we let p be the smallest natural number that is mapped to 0 (if −p is mapped to 0, then p is mapped to 0). If p = p1 p2 has two smaller integral factors, then p̄1 p̄2 = p̄ = 0 in the field F implies p̄1 = 0 or p̄2 = 0, contradicting the minimality of p. Therefore p is a prime number, and Fp ⊂ F. We say F has characteristic p.

6.2.2 Vector Space over Field


By changing R in Definition 1.1.1 to a field F, we get the definition of a vector space
over a field. Chapters 1, 2, 3 and Section 5.1 remain valid over any field. The part
of linear algebra related to geometric measurement may not extend to any field.
This includes inner product space, and the geometric aspect of the determinant.
Another minor exception is expressing any matrix as the sum of a symmetric and a
skew-symmetric matrix. Since dividing by 2 is used, the fact is no longer true over a
field of characteristic 2.

If one field is contained in another field F ⊂ E, then we say F is a subfield of E, and E is a field extension of F. For example, Q is a subfield of Q[√2], and Q[√2] is a subfield of Q[√2, √3]. Moreover, both Q[√2] and Q[√2, √3] are field extensions of Q.
Suppose F ⊂ E is a field extension. Then for a, b ∈ F and u, v ∈ E, we have au + bv ∈ E. This makes E into a vector space over F. For example, Q[√2] has basis {1, √2} over Q, and Q[√2, √3] has basis {1, √2, √3, √6} over Q. Therefore we have the Q-dimensions

dim_Q Q[√2] = 2, dim_Q Q[√2, √3] = 4.

Since any element in Q[√2, √3] has the expression

a + b√2 + c√3 + d√6 = (a + b√2) + (c + d√2)√3

for unique “coefficients” a + b√2, c + d√2 ∈ Q[√2], we find that {1, √3} is a basis of the vector space Q[√2, √3] over the field Q[√2]

dim_{Q[√2]} Q[√2, √3] = 2.

Usually we use the more convenient notation [E : F] for dim_F E. Then

[Q[√2] : Q] = 2, [Q[√2, √3] : Q] = 4, [Q[√2, √3] : Q[√2]] = 2.

Proposition 6.2.2. Suppose three fields satisfy F1 ⊂ F2 ⊂ F3 . Then

[F3 : F1 ] = [F3 : F2 ][F2 : F1 ].

Proof. Suppose v1 , v2 , . . . , vm ∈ F3 is a basis of F3 as F2 -vector space. Suppose


w1 , w2 , . . . , wn ∈ F2 is a basis of F2 as F1 -vector space.
Any element x ∈ F3 is a linear combination

x = y1 v1 + y2 v2 + · · · + ym vm , yi ∈ F2 .

The elements yi of F2 are also linear combinations

yi = zi1 w1 + zi2 w2 + · · · + zin wn , zij ∈ F1 .

Then we get

x = Σi yi vi = Σi (Σj zij wj ) vi = Σij zij wj vi .

This is a linear combination of wj vi ∈ F3 with zij ∈ F1 as coefficients. Therefore all


wj vi span F3 as an F1 -vector space.

Next we show that wj vi are F1 -linearly independent. Consider the equality of


linear combinations in F3 with F1 -coefficients
Σij zij wj vi = Σij z′ij wj vi , zij , z′ij ∈ F1 .

Let yi = Σj zij wj and y′i = Σj z′ij wj . Then yi , y′i ∈ F2 , and the above becomes an equality of linear combinations in F3 with F2 -coefficients

Σi yi vi = Σi y′i vi , yi , y′i ∈ F2 .

Since v1 , v2 , . . . , vm is a basis of F3 as F2 -vector space, the F2 -coefficients are equal.


The equality yi = yi0 is then an equality of linear combinations in F2 with F1 -
coefficients

Σj zij wj = Σj z′ij wj , zij , z′ij ∈ F1 .

Since w1 , w2 , . . . , wn is a basis of F2 as F1 -vector space, we get zij = zij0 . This proves


the linear independence of all wj vi .
Therefore the mn vectors w1 v1 , w1 v2 , . . . , wn vm form a basis of F3 as F1 -vector
space, and we get [F3 : F1 ] = mn = [F3 : F2 ][F2 : F1 ].

6.2.3 Polynomial over Field


We denote all the polynomials over a field F by

F[t] = {f (t) = a0 + a1 t + a2 t2 + · · · + an tn : ai ∈ F}.

The polynomial has degree n if an 6= 0, and is monic if an = 1.


We have +, −, × operations in F[t] but not ÷. In fact, F[t] is very similar to the
integers Z. Both belong to the concept of integral domain, which is a set R with
two operations +, × satisfying the following.

1. Commutativity: a + b = b + a, ab = ba.

2. Associativity: (a + b) + c = a + (b + c), (ab)c = a(bc).

3. Distributivity: a(b + c) = ab + ac.

4. Unit: There are 0, 1, such that a + 0 = a = 0 + a, a1 = a = 1a.

5. Negative: For any a, there is b, such that a + b = 0 = b + a.

6. No zero divisor: ab = 0 implies a = 0 or b = 0.



The first four axioms are the same as the field. The only modification is that the
existence of the multiplicative inverse (allowing ÷ operation) is replaced by the
no zero divisor condition. The condition is equivalent to the cancelation property:
ab = ac and a 6= 0 =⇒ b = c.
Due to the cancelation property, we may construct the field of rational numbers as the quotients of integers

Q = {a/b : a, b ∈ Z, b ≠ 0}.

Similarly, we have the field of rational functions

F(t) = {f (t)/g(t) : f (t), g(t) ∈ F[t], g(t) ≠ 0}.
In general, an integral domain can be regarded as a system with +, −, × that can
be embedded into a field.
Integers and polynomials are special kinds of integral domain, in that they have
certain division process. For an integer a and a nonzero integer b, there are unique
integers q and r, such that
a = qb + r, 0 ≤ r < |b|.
We say the division of a by the divisor b has the quotient q and the remainder r.
When the remainder r = 0, we say b divides a or is a factor of a, and we denote b|a.
Polynomials over a field have the similar division process. For example, let
a(t) = (t + 1)(t − 1)2 (t2 − t + 1) = t5 − 2t4 + t3 + t2 − 2t + 1,
b(t) = (t + 1)3 (t − 1) = t4 + 2t3 − 2t − 1.
The following calculation (long division of a(t) by b(t), step by step)

t⁵ − 2t⁴ + t³ + t² − 2t + 1 = t · (t⁴ + 2t³ − 2t − 1) + (−4t⁴ + t³ + 3t² − t + 1),
−4t⁴ + t³ + 3t² − t + 1 = (−4) · (t⁴ + 2t³ − 2t − 1) + (9t³ + 3t² − 9t − 3)
shows that
a(t) = t5 − 2t4 + t3 + t2 − 2t + 1
= (t − 4)(t4 + 2t3 − 2t − 1) + 9t3 + 3t2 − 9t − 3
= q(t)b(t) + r(t),
where
q(t) = t − 4, r(t) = 9t3 + 3t2 − 9t − 3.
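The same division can be reproduced with a computer algebra system; a short sketch assuming sympy:

    import sympy as sp

    t = sp.symbols('t')
    a = t**5 - 2*t**4 + t**3 + t**2 - 2*t + 1
    b = t**4 + 2*t**3 - 2*t - 1

    # Polynomial division a = q*b + r over the rationals.
    q, r = sp.div(a, b, t)
    print(q)   # t - 4
    print(r)   # 9*t**3 + 3*t**2 - 9*t - 3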
In general, we have the following.

Proposition 6.2.3. Suppose f (t), g(t) are polynomials over a field F. If g(t) 6= 0,
then there are unique polynomials q(t) and r(t), such that

f (t) = q(t)g(t) + r(t), r(t) = 0 or deg r(t) < deg g(t).

Again, the division of f (t) by the divisor g(t) has the quotient q(t) and the
remainder r(t). If r(t) = 0, then g(t) divides (is a factor of) f (t), and we denote
g(t)|f (t).

Definition 6.2.4. A Euclidean domain is an integral domain R, with a function


d : R − 0 → non-negative integers, such that the following division process happens:
For any a, b ∈ R, with b 6= 0, there are unique q, r ∈ R, such that

a = qb + r, r = 0 or d(r) < d(b).

We may take d(a) = |a| for R = Z and d(f (t)) = deg f (t) for R = F[t]. All
discussion of the rest of this section applies to the Euclidean domain.
An integer d ∈ Z is a common divisor of a and b if d|a and d|b. We say d is a
greatest common divisor, and denote d = gcd(a, b), if

c|a and c|b ⇐⇒ c|d.

As defined, the greatest common divisor is unique up to ± sign. We usually prefer


choosing the positive one. Moreover, it is easy to extend the concept of greatest
common divisor to more than two integers.
The greatest common divisor can be calculated by the Euclidean algorithm. The
algorithm is based on the fact that, if a = qb + r, then c|a and c|b is equivalent to
c|b and c|r. For example, by 96 = 2 × 42 + 12, we have c|96 and c|42 ⇐⇒ c|42 and
c|12. The process repeats until we reach complete division (i.e., zero remainder).

c|96 and c|42 ⇐⇒ c|42 and c|12 by 96 = 2 × 42 + 12


⇐⇒ c|12 and c|6 by 42 = 3 × 12 + 6
⇐⇒ c|6 and c|0 by 12 = 2 × 6 + 0
⇐⇒ c|6.

We conclude 6 = gcd(96, 42).
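The algorithm is a few lines of code; a sketch in Python (the function gcd here is an illustration, not part of the text):

    def gcd(a, b):
        # Euclidean algorithm: replace (a, b) by (b, r) where a = q*b + r.
        while b != 0:
            a, b = b, a % b
        return abs(a)

    print(gcd(96, 42))   # 6, through 96 = 2*42 + 12, 42 = 3*12 + 6, 12 = 2*6 + 0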


The Euclidean algorithm is based on the division process, and is therefore valid in
any Euclidean domain. In particular, this proves the existence of greatest common
divisor in any Euclidean domain. For polynomials over a field, the following divisions

t⁵ − 2t⁴ + t³ + t² − 2t + 1 = (t − 4)(t⁴ + 2t³ − 2t − 1) + 3(3t³ + t² − 3t − 1),
t⁴ + 2t³ − 2t − 1 = (1/9)(3t + 5)(3t³ + t² − 3t − 1) + (4/9)(t² − 1),
3t³ + t² − 3t − 1 = (3t + 1)(t² − 1)

imply

gcd(t5 − 2t4 + t3 + t2 − 2t + 1, t4 + 2t3 − 2t − 1)


= gcd(t4 + 2t3 − 2t − 1, 3t3 + t2 − 3t − 1)
= gcd(3t3 + t2 − 3t − 1, t2 − 1)
= gcd(t2 − 1, 0)
=t2 − 1.
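This calculation can be cross-checked with a computer algebra system; a sketch assuming sympy (gcdex also returns the Bezout coefficients that appear in Proposition 6.2.6 below):

    import sympy as sp

    t = sp.symbols('t')
    f = t**5 - 2*t**4 + t**3 + t**2 - 2*t + 1
    g = t**4 + 2*t**3 - 2*t - 1

    print(sp.gcd(f, g))                # t**2 - 1
    u, v, d = sp.gcdex(f, g, t)        # u*f + v*g = d = gcd(f, g)
    print(sp.expand(u*f + v*g - d))    # 0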

We note that, for polynomials, the greatest common divisor is unique up to mul-
tiplying a nonzero element in the field. Therefore we may ignore the field coefficients
such as 3 and 4/9 in the calculation above. In particular, we usually choose a monic
polynomial as the preferred greatest common divisor.
The algorithm can be applied to more than two polynomials. Among several
polynomials, we may always choose the polynomial of smallest degree to divide the
other polynomials. We gather this divisor polynomial and replace all the other poly-
nomials by the remainders. Then we repeat the process. This proves the existence
of the greatest common divisor among several polynomials.

Proposition 6.2.5. Suppose f1 (t), f2 (t), . . . , fk (t) ∈ F[t] are nonzero polynomials.
Then there is a unique monic polynomial d(t), such that g(t) divides every one of
f1 (t), f2 (t), . . . , fk (t) if and only if g(t) divides d(t).

The Euclidean algorithm can also be used to express the greatest common divisor
as a combination of the original numbers or polynomials. For example, we have

6 = 42 − 3 × 12 = 42 − 3 × (96 − 2 × 42)
= −3 × 96 + (1 + 3 × 2) × 42 = −3 × 96 + 7 × 42.

Similarly,

t² − 1 = (9/4)(t⁴ + 2t³ − 2t − 1) − (1/4)(3t + 5)(3t³ + t² − 3t − 1)
       = (9/4)(t⁴ + 2t³ − 2t − 1) − (1/12)(3t + 5)[(t⁵ − 2t⁴ + t³ + t² − 2t + 1) − (t − 4)(t⁴ + 2t³ − 2t − 1)]
       = (−(1/4)t − 5/12)(t⁵ − 2t⁴ + t³ + t² − 2t + 1) + ((1/4)t² − (7/12)t + 7/12)(t⁴ + 2t³ − 2t − 1).

This also extends to more than two polynomials.

Proposition 6.2.6. Suppose f1 (t), f2 (t), . . . , fk (t) ∈ F[t] are nonzero polynomials.
Then there are polynomials u1 (t), u2 (t), . . . , uk (t), such that

gcd(f1 (t), f2 (t), . . . , fk (t)) = f1 (t)u1 (t) + f2 (t)u2 (t) + · · · + fk (t)uk (t).

Several polynomials are coprime if their greatest common divisor is 1. In other


words, the only polynomials dividing every one of f1 (t), f2 (t), . . . , fk (t) are nonzero
elements of the field. In this case, we can find polynomials u1 (t), u2 (t), . . . , uk (t)
satisfying
f1 (t)u1 (t) + f2 (t)u2 (t) + · · · + fk (t)uk (t) = 1.

6.2.4 Unique Factorisation


We know any nonzero integer is a unique product of prime numbers. We analyse
the detailed reason behind this fact, and justify the similar property for polynomials
over a field.
An element a of an integral domain R is called invertible if there is b, such that
ab = 1 = ba. It is easy to show that b is unique. Then we denote b = a−1 , and we
denote the set of all invertibles by R∗ . We have Z∗ = {1, −1}. For any field F, we
have F∗ = F − 0, and F[t]∗ = F∗ .

Definition 6.2.7. Let p be a non-invertible element in an integral domain R.

1. p is irreducible if p = ab implies either a or b is invertible.

2. p is prime if p|ab implies p|a or p|b.

The concept of irreducible is used to construct factorisation. The concept of


prime is used to justify the uniqueness of factorisation. It happens that the two
concepts are equivalent in a Euclidean domain, such as Z and F[t].
First, we construct factorisation in Z. We start with a nonzero integer a and ask
whether a = bc for some b, c 6= ±1. If not, then a is irreducible. If yes, then |b| < |a|
and we further ask the same question for b in place of a. Since |b| < |a| in each step,
the process eventually stops, and we get an irreducible factor p of a. Now we repeat the process with a/p ∈ Z in place of a, and find another irreducible factor q of a/p, and so on. Since |a/p| < |a|, the process eventually stops, and we express a as a product of irreducibles. The expression is a = ±p1^{m1} p2^{m2} · · · pk^{mk} , where pi are distinct positive
irreducible numbers, and mi are natural numbers.
Second, we argue about the uniqueness of factorisation in Z. Therefore we assume p1^{m1} p2^{m2} · · · pk^{mk} = q1^{n1} q2^{n2} · · · ql^{nl} . Suppose we also know that the irreducibles pi , qj are also primes. Then p1 | q1^{n1} q2^{n2} · · · ql^{nl} implies p1 divides some qj . In other words, we have qj = cp1 . Since qj is irreducible, and p1 is not invertible, we find c = ±1 to be invertible. Since we choose p1 , qj > 0, we get c = 1. Without loss of generality, therefore, we may assume p1 = q1 . Then we get p1^{m1 −1} p2^{m2} · · · pk^{mk} = q1^{n1 −1} q2^{n2} · · · ql^{nl} , and we may use induction to prove the uniqueness of factorisation.
Finally, we compare irreducible integers and prime integers. In fact, the second
step above assumed that any irreducible p is also a prime. To see this, we assume
p|ab and p ∤ a, and wish to prove p|b. If b divides p and a, then by p irreducible, we have b = ±1 or ±p. Since b = ±p does not divide a, we have b = ±1. Therefore the greatest


common divisor of p and a is 1. By the Euclidean algorithm, we have pu + av = 1
for some integers u, v. Then p divides b = pbu + abv.
We note that the proof of irreducible implying prime uses the Euclidean algo-
rithm, and is therefore valid in any Euclidean domain. Conversely, assume a prime
p = ab. Then p|a or p|b. If p|a, then a = pc for some integer c. Then p = ab = pcb,
and we get cb = 1. This implies that b is invertible, and proves that any prime
is irreducible. We note that the key to the proof is the cancelation of p from the
equality p = pcb. The cancelation holds in any integral domain.
With the degree of polynomial in place of the absolute value, the argument above
for the integers Z is also valid for the polynomials F[t] over a field.

Proposition 6.2.8. For any nonzero polynomial f (t) ∈ F[t], there are c ∈ F∗ ,
unique monic irreducible polynomials p1 (t), p2 (t), . . . , pk (t), and natural numbers
m1 , m2 , . . . , mk , such that

f (t) = cp1 (t)m1 p2 (t)m2 . . . pk (t)mk .

The unique factorisation can be used to obtain the greatest common divisor. For
example, we have

gcd(96, 42) = gcd(2⁵ · 3, 2 · 3 · 7) = 2^{min{5,1}} · 3^{min{1,1}} · 7^{min{0,1}} = 2 · 3 = 6,

and

gcd(x5 − 2x4 + x3 + x2 − 2x + 1, x4 + 2x3 − 2x − 1)


= gcd((x + 1)(x − 1)2 (x2 − x + 1), (x + 1)3 (x − 1))
=(x + 1)(x − 1) = x2 − 1.

Exercise 6.37. The Fundamental Theorem of Algebra (Theorem 6.1.1) says that any non-
constant complex polynomial has a root. Use this to show that complex irreducible polyno-
mials are linear functions.

Exercise 6.38. Show that if a complex number r is a root of a real polynomial, then the
complex conjugate r̄ is also a root of the polynomial. Then use this to explain that the real
irreducible polynomials are the following

• Linear: a + bt, with a, b ∈ R and b 6= 0.

• Quadratic: a + bt + ct2 , with a, b, c ∈ R and the discriminant b2 − 4ac < 0.

6.2.5 Field Extension


The complex field is obtained from the attempt to solve the equation at² + bt + c = 0 for a, b, c ∈ R. This can be achieved by adding the solution of t² + 1 = 0 to R. The field Q[√2] is obtained from the attempt to solve the polynomial equation t² − 2 = 0, when the only known numbers are the rational numbers Q.
In general, let F ⊂ C be a number field (a subfield of complex numbers). Then F
has characteristic 0 and contains at least all the rationals Q. For a complex number
r ∈ C, we wish to extend F to a bigger field containing r. The bigger field should
at least contain all the polynomials of r with coefficients in F

F[r] = {f (r) : f (t) ∈ F[t]} = {a0 + a1 r + a2 r² + · · · + an r^n : ai ∈ F}.

For any polynomials f (t), g(t) ∈ F[t], we have f (r)+g(r), f (r)g(r) ∈ F[r]. Therefore
F[r] is an integral domain. It remains to introduce division.
We need to consider two possibilities:
1. r is algebraic over F: f (r) = 0 for some nonzero f (t) ∈ F[t].
2. r is transcendental over F: If f (t) ∈ F[t] is nonzero, then f (r) 6= 0.
A number is transcendental if it is not the root of a nonzero polynomial. This
means that 1, r, r2 , . . . , rn are linearly independent over F for any n, so that F [r]
is an infinite dimensional vector space over F. An example is e = 2.71828 · · · over
F = Q. In this case, the map

f (t) 7→ f (r) : F [t] → F [r] ⊂ C

is injective. As a consequence, we conclude that the map on rational functions


f (t)/g(t) ↦ f (r)/g(r) : F(t) → F(r) ⊂ C
is also injective. In particular, the smallest field extension of F containing r is F (r),
which is strictly bigger than F [r].
A number is algebraic if 1, r, r2 , . . . , rn are linearly dependent for sufficiently
large n. Suppose n is the smallest number, such that 1, r, r2 , . . . , rn are linearly
dependent over F. Then we have a monic polynomial m(t) ∈ F[t] of degree n, such
that m(r) = 0. Moreover, since n is the smallest number, we know 1, r, r2 , . . . , rn−1
are linearly independent over F. This means that

s(t) ∈ F[t], s(r) = 0, deg s(t) < n =⇒ s(t) = 0.

Now suppose f (t) ∈ F[t] satisfies f (r) = 0. We have f (t) = q(t)m(t) + s(t) with
deg s < deg f = n. Then 0 = f (r) = q(r)m(r) + s(r) = s(r). By the property about
s(t) above, we have s(t) = 0. Therefore we get

f (r) = 0 =⇒ m(t)|f (t).

Of course, we also have

m(t)|f (t) =⇒ f (t) = g(t)m(t) =⇒ f (r) = g(r)m(r) = g(r)0 = 0.



In other words, a polynomial vanishes at r if and only if it is divisible by m(t).


Therefore we call m(t) the minimal polynomial of the algebraic number r.
The number √−1 is algebraic over R, with the minimal polynomial t² + 1. If an integer a ≠ 0, 1, −1 has no square factor, then √a is algebraic over Q, with the minimal polynomial t² − a. Moreover, 2^{1/3} is algebraic over Q, with the minimal polynomial t³ − 2.
Examples 6.2.1, 6.2.2, 6.2.3 suggest that for algebraic r, F[r] admits division,
and is therefore the smallest field extension containing r.

Theorem 6.2.9. Suppose r is algebraic over a field F. Suppose m(t) ∈ F[t] is the
minimal polynomial of r, and deg m(t) = n. Then F[r] is a field, and [F[r] : F] = n.

Proof. Let

m(t) = a0 + a1 t + a2 t2 + · · · + an−1 tn−1 + tn , ai ∈ F.

Then m(r) = 0 implies that rn is a linear combination of 1, r, r2 , . . . , rn−1

rn = −a0 − a1 r − a2 r2 − · · · − an−1 rn−1 .

This implies that rn+1 is also a linear combination of 1, r, r2 , . . . , rn−1

rn+1 = −a0 r − a1 r2 − a2 r3 − · · · − an−2 rn−1 − an−1 rn


= −a0 r − a1 r2 − a2 r3 − · · · − an−2 rn−1
+ an−1 (a0 + a1 r + a2 r2 + · · · + an−1 rn−1 )
= b0 + b1 r + b2 r2 + · · · + bn−1 rn−1 , bi ∈ F.

Inductively, we know rk is a linear combination of 1, r, r2 , . . . , rn−1 . Therefore

F[r] = {c0 + c1 r + c2 r2 + · · · + cn−1 rn−1 : ci ∈ F}.

Moreover, since 1, r, r2 , . . . , rn−1 are linearly independent, we get dimF F[r] = n.


It remains to show F[r] is a field. It is sufficient to argue that any

0 6= x = c0 + c1 r + c2 r2 + · · · + cn−1 rn−1 ∈ F[r]

has inverse in F[r]. We may use the argument in Example 6.2.3. Specifically, since
dimF F[r] = n, the n + 1 vectors 1, x, x2 , . . . , xn in F[r] must be linearly dependent.
Therefore there is a polynomial f (t) ∈ F[t] of degree ≤ n, such that f (x) = 0.
Without loss of generality, we may further assume that f (x) is a monic polynomial,
and has the smallest degree among all polynomials satisfying f (x) = 0 (i.e., f (t) is
the minimal polynomial of x). Let

f (t) = b0 + b1 t + b2 t2 + · · · + bk−1 tk−1 + tk , k ≤ n.



If b0 = 0, then f (x) = 0 implies b1 + b2 x + · · · + bk−1 xk−2 + xk−1 = 0, contradicting


the smallest degree assumption. Therefore b0 ≠ 0, and f (x) = 0 implies that

1/x = −b1 /b0 − (b2 /b0 )x − · · · − (bk−1 /b0 )x^{k−2} − (1/b0 )x^{k−1} .

Substituting x = c0 + c1 r + c2 r² + · · · + cn−1 r^{n−1} into the above and expanding, we find that 1/x ∈ F[r].

Exercise 6.39. Suppose E is a finite dimensional field extension of F. Prove that a number
is algebraic over E if and only if it is algebraic over F. How are the degrees of minimal
polynomials over respective fields related?

Exercise 6.40. Suppose a and b are algebraic over F. Prove that a + b, a − b, a × b, a ÷ b are
also algebraic over F.
Exercise 6.41. Explain [Q[2^{1/3}] : Q] = 3 and [Q[5^{1/4}] : Q] = 4. Then explain [Q[2^{1/3}, 5^{1/4}] : Q] = 12.

Exercise 6.42. Suppose E is a field extension of F, such that [E : F] is a prime number. If


r ∈ E − F, prove that E = F[r].

6.2.6 Trisection of Angle


A classical Greek problem is to use ruler and compass to divide any angle into three
equal parts. For example, when the angle is π/3 (60 degrees), this means drawing an angle of π/9 (20 degrees).
Here is the precise meaning of construction by ruler and compass:

1. We start with two points on the plane. The two points are considered as
constructed.

2. If two points are constructed, then the straight line passing through the two
points is constructed.

3. If two points are constructed, then the circle centered at one point and passing
through the other point is constructed.

4. The intersection of two constructed lines or circles is constructed.

Denote all the constructed points, lines and circles by C. We present some basic
constructions.

• Given a line l and a point p in C, the line passing through p and perpendicular
to l is in C.

• Given a line l and a point p in C, the line passing through p and parallel to l
is in C.

• Given a line l and three points p, x, y in C, such that p ∈ l. Then there are two
points on l, such that the distance to p is the same as the distance between x
and y. The two points are constructed.

Figure 6.2.1 gives the constructions. The first construction depends on whether p is
on l or not. The numbers indicate the order of constructions.


Figure 6.2.1: Basic constructions.

Now we parameterise the construction by complex numbers. We set the two


initial points to be 0 and 1. The construction of perpendicular line shows that a
complex number can be constructed if and only if the real and imaginary parts can
be constructed. Therefore we only need to consider the set CR of all the points on
the real line that can be constructed. The numbers in CR are constructible numbers.
The trisection problem of the angle π/3 is the same as whether cos(π/9) is in CR .

Proposition 6.2.10. The constructible numbers CR is a field. Moreover, if a ∈ CR is positive, then √a ∈ CR .

The proposition is proved by Figure 6.2.2.


Figure 6.2.2: Constructible numbers is a field, and is closed under square root.

The constructible numbers can be considered as a field extension of Q. From


this viewpoint, we start with Q. Then each ruler and compass construction adds
new constructible numbers one by one. Each new constructible number gives a field

extension. Then finitely many constructions give a finite sequence of real number
field extensions Q ⊂ F1 ⊂ F2 ⊂ . . . ⊂ Fk .
Suppose we get a field F after certain number of steps. The next constructible
number is obtained from the intersection of two lines or circles based on the numbers
in F. In other words, a line is ax + by = c with a, b, c ∈ F, and a circle is (x − a)2 +
(y − b)2 = c2 with a, b, c ∈ F. The following are the possible intersections.
1. The intersection of two lines a1 x + b1 y = c1 and a2 x + b2 y = c2 is obtained by
solving the system of two equations. The solution is obtained by arithmetic
operations of the coefficients, and therefore still lies in F.

2. The intersection of a1 x + b1 y = c1 and (x − a2 )² + (y − b2 )² = c2² is reduced to quadratic equations of a single variable with coefficients in F. Therefore the solution is in F or F[√a] for some positive a ∈ F.

3. The intersection of (x − a1 )² + (y − b1 )² = c1² and (x − a2 )² + (y − b2 )² = c2² also gives a number in F or F[√a].
This proves the following.

Proposition 6.2.11. If a is a constructible number, then there is a sequence of field



extensions Q = F0 ⊂ F1 ⊂ F2 ⊂ . . . ⊂ Fk , such that Fi+1 = Fi [√ai ] for some ai ∈ Fi ,
and a ∈ Fk .

Propositions 6.2.10 and 6.2.11 are complementary, and actually give the nec-
essary and sufficient condition for a real number to be constructible. Now we are
ready to tackle the trisection problem.

Theorem 6.2.12. cos(π/9) is not constructible by ruler and compass.

Proof. By cos(π/3) = 1/2 and cos 3θ = 4cos³θ − 3cosθ, we find that r = cos(π/9) is a root of f (t) = 8t³ − 6t − 1. If f (t) is not irreducible, then it has a linear factor at + b. We may arrange to have a, b coprime integers. The linear factor means that f (−b/a) = 0, which is the same as a²(6b − a) = 8b³. Since a, b are coprime, we have a|8. This implies b|(6b − a), or b|a. By the coprime assumption again, we have b = ±1, and a²(±6 − a) = ±8. Then it is easy to see that no integer a satisfies a²(±6 − a) = ±8.
Therefore f (t) is irreducible. By Theorem 6.2.9, we get [Q[r] : Q] = 3. On the
other hand, if r is constructible, then r ∈ Fk for a sequence of field extensions in
Proposition 6.2.11. This implies Q[r] ⊂ Fk .
By applying Proposition 6.2.2 to Q ⊂ Q[r] ⊂ Fk , we find that

[Fk : Q] = [Fk : Q[r]] [Q[r] : Q] = 3[Fk : Q[r]].

Therefore the Q-dimension [Fk : Q] of Fk is divisible by 3. On the other hand,



applying Theorem 6.2.9 to Fi+1 = Fi [√ai ], we get [Fi+1 : Fi ] = 2. Then we apply

Proposition 6.2.2 to Q = F0 ⊂ F1 ⊂ F2 ⊂ . . . ⊂ Fk to get

[Fk : Q] = [Fk : Fk−1 ] · · · [F2 : F1 ] [F1 : F0 ] = 2^k .

Since 3 does not divide 2^k , we get a contradiction.


Chapter 7

Spectral Theory

The famous Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, . . . are defined through the recursive relation
F0 = 0, F1 = 1, Fn = Fn−1 + Fn−2 .
Given a specific number, say 100, we can certainly calculate F100 by repeatedly
applying the recursive relation 100 times. However, it is not obvious what the
general formula for Fn should be.
The difficulty in finding the general formula is due to the lack of understanding of the structure of the recursion process. The Fibonacci numbers form a linear system because they are governed by a linear equation Fn = Fn−1 + Fn−2 . Many differential equations such as Newton's second law F = m~x ′′ are also linear. Understanding the structure of linear operators inherent in linear systems helps us solve problems about the system.

7.1 Eigenspace
We illustrate how understanding the geometric structure of a linear operator helps in solving problems.

Example 7.1.1. Suppose a pair of numbers xn , yn is defined through the recursive


relation
x0 = 1, y0 = 0, xn = xn−1 − yn−1 , yn = xn−1 + yn−1 .
To find the general formula for xn and yn , we rewrite the recursive relation as a
linear transformation
     
~xn = (xn , yn ) = [[1, −1], [1, 1]] ~xn−1 = A~xn−1 , ~x0 = (1, 0) = ~e1 .
By

A = √2 [[cos(π/4), − sin(π/4)], [sin(π/4), cos(π/4)]],

the linear operator is the rotation by π/4 and scalar multiplication by √2. Therefore ~xn is obtained by rotating ~e1 by nπ/4 and multiplying by (√2)^n . We conclude that

xn = 2^{n/2} cos(nπ/4), yn = 2^{n/2} sin(nπ/4).

For example, we have (x8k , y8k ) = (2^{4k} , 0) and (x8k+3 , y8k+3 ) = (−2^{4k+1} , 2^{4k+1} ).
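A direct numerical check of the closed formula against the recursion (an illustration in Python, not part of the text):

    import math

    # Check x_n = 2**(n/2)*cos(n*pi/4), y_n = 2**(n/2)*sin(n*pi/4) against
    # the recursion x_n = x_{n-1} - y_{n-1}, y_n = x_{n-1} + y_{n-1}.
    x, y = 1.0, 0.0
    for n in range(1, 13):
        x, y = x - y, x + y
        fx = 2 ** (n / 2) * math.cos(n * math.pi / 4)
        fy = 2 ** (n / 2) * math.sin(n * math.pi / 4)
        assert abs(x - fx) < 1e-6 and abs(y - fy) < 1e-6

    print(x, y)   # (x_12, y_12) = (-64, 0), as the formula predicts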

Example 7.1.2. Suppose a linear system is obtained by repeatedly applying the


matrix

A = [[13, −4], [−4, 7]].

To find A^n , we note that

A(1, 2) = 5(1, 2), A(−2, 1) = 15(−2, 1).

This implies

A^n (1, 2) = 5^n (1, 2), A^n (−2, 1) = 15^n (−2, 1).
Then we may apply A^n to the other vectors by expressing them as linear combinations of ~v1 = (1, 2), ~v2 = (−2, 1). For example, by (0, 1) = (2/5)~v1 + (1/5)~v2 , we get

A^n (0, 1) = (2/5)A^n ~v1 + (1/5)A^n ~v2 = (2/5)5^n ~v1 + (1/5)15^n ~v2 = 5^{n−1} (2 − 2 · 3^n , 4 + 3^n ).
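The formula can be verified numerically; a sketch assuming numpy:

    import numpy as np

    A = np.array([[13, -4],
                  [-4,  7]])
    n = 6

    # Compare A^n applied to (0, 1) with 5**(n-1) * (2 - 2*3**n, 4 + 3**n).
    power = np.linalg.matrix_power(A, n) @ np.array([0, 1])
    formula = 5 ** (n - 1) * np.array([2 - 2 * 3 ** n, 4 + 3 ** n])
    print(power, formula, np.array_equal(power, formula))   # equal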

Exercise 7.1. In Example 7.1.1, what do you get if you start with x0 = 0 and y0 = 1?

Exercise 7.2. In Example 7.1.2, find the matrix An .

7.1.1 Invariant Subspace


We have a better understanding of a linear operator L on V if it decomposes with
respect to a direct sum V = H1 ⊕ H2 ⊕ · · · ⊕ Hk

~v = ~h1 + ~h2 + · · · + ~hk =⇒ L(~v ) = L1 (~h1 ) + L2 (~h2 ) + · · · + Lk (~hk ).

In block notation, this means that L = L1 ⊕ L2 ⊕ · · · ⊕ Lk is block diagonal, with diagonal blocks L1 , L2 , . . . , Lk and zero blocks O elsewhere.

The linear operator in Example 7.1.2 is decomposed into multiplying 5 and 15 with
respect to the direct sum R2 = R(1, 2) ⊕ R(−2, 1).

In the decomposition above, we have


~h ∈ Hi =⇒ L(~h) ∈ Hi .

This leads to the following concept.

Definition 7.1.1. For a linear operator L : V → V , a subspace H ⊂ V is invariant


if ~h ∈ H =⇒ L(~h) ∈ H.

The zero space {~0} and the whole space V are trivial examples of invariant
subspaces. We wish to express V as a direct sum of invariant subspaces.
It is easy to see that a 1-dimensional subspace R~x is invariant if and only if
L(~x) = λ~x for some scalar λ. In this case, we say λ is an eigenvalue and ~x is an
eigenvector.

Example 7.1.3. Both R~v1 and R~v2 in Example 7.1.2 are invariant subspaces of the
matrix A. In fact, ~v1 is an eigenvector of A of eigenvalue 5, and ~v2 is an eigenvector
of A of eigenvalue 15.
Are there any other eigenvectors? If a~v1 + b~v2 is an eigenvector of eigenvalue λ,
then
A(a~v1 + b~v2 ) = 5a~v1 + 15b~v2 = λ(a~v1 + b~v2 ) = λa~v1 + λb~v2 .
Since ~v1 , ~v2 are linearly independent, we get λa = 5a and λb = 15b. This implies
either a = 0 or b = 0. Therefore there are no other eigenvectors except non-zero
multiples of ~v1 , ~v2 . In other words, R~v1 and R~v2 are the only 1-dimensional invariant
subspaces.

Example 7.1.4. Consider the derivative operator D : C ∞ → C ∞ . If f ∈ Pn ⊂ C ∞ ,


then D(f ) ∈ Pn−1 . Therefore the polynomial subspaces Pn are D-invariant.

Example 7.1.5. Let L be a linear operator on V . Then for any ~v ∈ V , the subspace
H = R~v + RL(~v ) + RL2 (~v ) + · · ·
is L-invariant, called the L-cyclic subspace generated by ~v . This is clearly the small-
est invariant subspace of L containing ~v .
If ~v , L(~v ), L²(~v ), . . . , L^n (~v ) are linearly dependent for some n (this happens when V is finite dimensional, as we can take any n > dim V ), then we let k be the smallest number, such that ~v , L(~v ), L²(~v ), . . . , L^k (~v ) are linearly dependent. This implies L^k (~v ) is a linear combination of α = {~v , L(~v ), L²(~v ), . . . , L^{k−1} (~v )}. In other words, we have

L^k (~v ) + ak−1 L^{k−1} (~v ) + ak−2 L^{k−2} (~v ) + · · · + a1 L(~v ) + a0 ~v = ~0.

This further implies that L^n (~v ) is a linear combination of α for any n ≥ k. The minimality of k implies that α is linearly independent. Therefore α is a basis of the cyclic subspace H.
 
Exercise 7.3. Show that [[1, 1], [0, 1]] has only one 1-dimensional invariant subspace.

Exercise 7.4. For the derivative operator D : C ∞ → C ∞ , find the smallest invariant sub-
space containing tn , and the smallest invariant subspace containing sin t.

Exercise 7.5. Show that if a rotation of R2 has a 1-dimensional invariant subspace, then
the rotation is either I or −I.

Exercise 7.6. Prove that RanL is L-invariant. What about KerL?

Exercise 7.7. Suppose LK = KL. Prove that RanK and KerK are L-invariant.

Exercise 7.8. Suppose H is an invariant subspace of L and K. Prove that H is an invariant


subspace of L + K, aL, L ◦ K. In particular, H is an invariant subspace of any polynomial
f (L) = an Ln + an−1 Ln−1 + · · · + a1 L + a0 I of L.

Exercise 7.9. Suppose ~v1 , ~v2 , . . . , ~vn span H. Prove that H is L-invariant if and only if
L(~vi ) ∈ H.

Exercise 7.10. Prove that sum and intersection of L-invariant subspaces are still L-invariant.

Exercise 7.11. Suppose L is a linear operator on a complex vector space with conjugation.
If H is an invariant subspace of L, prove that H̄ is an invariant subspace of L̄.

Exercise 7.12. What is the relation between the invariant subspaces of K −1 LK and L?

Exercise 7.13. Suppose V = H ⊕ H ′ . Prove that H is an invariant subspace of a linear


operator L on V if and only if the block form of L with respect to the direct sum is
 
L = [[L1 , ∗], [O, L2 ]].

What about H ′ being an invariant subspace of L?

Exercise 7.14. Prove that the L-cyclic subspace generated by ~v is the smallest invariant
subspace containing ~v .

Exercise 7.15. For the cyclic subspace H in Example 7.1.5, we have the largest number k, such that α = {~v , L(~v ), L²(~v ), . . . , L^{k−1} (~v )} is linearly independent. Let

f (t) = tk + ak−1 tk−1 + ak−2 tk−2 + · · · + a1 t + a0 , f (L)(~v ) = ~0.

We also note that L|H is a linear operator on the subspace H.

1. Prove that f (L)(~h) = ~0 for all ~h ∈ H. This means f (L|H ) = O.



2. If f (t) = g(t)h(t), prove that Kerg(L|H ) ⊃ Ranh(L|H ) 6= {~0}.

Exercise 7.16. In Example 7.1.5, show that the matrix of the restriction of L to the cyclic
subspace H with respect to the basis α is
 
[L|H ]αα = [[0, 0, · · · , 0, 0, −a0 ], [1, 0, · · · , 0, 0, −a1 ], [0, 1, · · · , 0, 0, −a2 ], . . . , [0, 0, · · · , 1, 0, −ak−2 ], [0, 0, · · · , 0, 1, −ak−1 ]],

the matrix with 1 in each subdiagonal entry, −a0 , −a1 , . . . , −ak−1 in the last column, and 0 elsewhere.

7.1.2 Eigenspace
The simplest direct sum decomposition L = L1 ⊕ L2 ⊕ · · · ⊕ Ln is that L multiplies
a scalar λi on each Hi (this may or may not exist)

~v = ~h1 + ~h2 + · · · + ~hk , ~hi ∈ Hi =⇒ L(~v ) = λ1~h1 + λ2~h2 + · · · + λk~hk .

In other words, we have

L = λ1 I ⊕ λ2 I ⊕ · · · ⊕ λk I,

where I is the identity operator on subspaces. If λi are distinct, then the equality
means exactly Hi = Ker(L − λi I), so that

V = Ker(L − λ1 I) ⊕ Ker(L − λ2 I) ⊕ · · · ⊕ Ker(L − λk I).

We note that, for finite dimensional V , the subspace Hi 6= {~0} if and only if L − λi I
is not invertible. This means L(~v ) = λi~v for some nonzero ~v .

Definition 7.1.2. A number λ is an eigenvalue of a linear operator L : V → V if


L − λI is not invertible. The associated kernel subspace is the eigenspace

Ker(L − λI) = {~v : L(~v ) = λ~v }.

For finite dimensional V , the non-invertiblity of L − λI is equivalent to that


the eigenspace Ker(L − λI) 6= {~0}. Any nonzero vector in the eigenspace is an
eigenvector.

Example 7.1.6. For the orthogonal projection P of R3 to the plane x + y + z = 0 in


Example 2.1.13, the plane is the eigenspace of eigenvalue 1, and the line R(1, 1, 1) orthogonal to the plane is the eigenspace of eigenvalue 0. The fact is used in the
example (and Example 2.3.9) to get the matrix of P .

Example 7.1.7. By Exercise 3.65, a projection P on V induces

V = RanP ⊕ Ran(I − P ) = Ker(I − P ) ⊕ KerP = Ker(P − 1 · I) ⊕ Ker(P − 0 · I).

Therefore P has eigenvalues 1 and 0, with respective eigenspaces RanP and Ran(I −
P ).

Example 7.1.8. The transpose operation of square matrices has eigenvalues 1 and −1,
with symmetric matrices and skew-symmetric matrices as respective eigenspaces.

Example 7.1.9. Consider the derivative linear transformation D(f ) = f 0 on the


space of all real valued smooth functions f (t) on R. The eigenspace of eigenvalue λ
consists of all functions f satisfying f 0 = λf . This means exactly f (t) = ceλt is a
constant multiple of the exponential function eλt . Therefore any real number λ is
an eigenvalue, and the eigenspace Ker(D − λI) = Reλt .

Example 7.1.10. We may also consider the derivative linear transformation on the
space V of all complex valued smooth functions f (t) on R of period 2π. In this
case, the eigenspace KerC (D − λI) still consists of ceλt , but c and λ can be complex
numbers. For the function to have period 2π, we further need eλ2π = eλ0 = 1. This
means λ = in ∈ iZ. Therefore the (relabeled) eigenspaces are KerC (D−inI) = Ceint .
The “eigenspace decomposition” V = ⊕n Ceint essentially means the Fourier series.
More details will be given in Example 7.2.2.

Exercise 7.17. Prove that 0 is the eigenvalue of a linear operator if and only if the linear
operator is not invertible.

Exercise 7.18. Suppose L = λ1 I ⊕ λ2 I ⊕ · · · ⊕ λk I on V = H1 ⊕ H2 ⊕ · · · ⊕ Hk , and λi are


distinct. Prove that λ1 , λ2 , . . . , λk are all the eigenvalues of L, and any invariant subspace
of L is of the form W1 ⊕ W2 ⊕ · · · ⊕ Wk , with Wi ⊂ Hi .

Exercise 7.19. Suppose L is a linear operator on a complex vector space with conjugation.
Prove that L(~v ) = λ~v implies L̄(~v¯) = λ̄~v¯. In particular, if H = Ker(L−λI) is an eigenspace
of L, then the conjugate subspace H̄ = Ker(L̄ − λ̄I) is an eigenspace of L̄.

Exercise 7.20. What is the relation between the eigenvalues and eigenspaces of K −1 LK
and L?

For any polynomial f (t) = an tn + an−1 tn−1 + · · · + a1 t + a0 ∈ F[t], we define

f (L) = an Ln + an−1 Ln−1 + · · · + a1 L + a0 I.

The set of all polynomials of L

F[L] = {f (L) : f is a polynomial}



is a commutative algebra in the sense that

K1 , K2 ∈ F[L] =⇒ a1 K1 + a2 K2 ∈ F[L], K1 K2 = K2 K1 ∈ F[L].

If L(~h) = λ~h, then we easily get L^i (~h) = λ^i ~h, and

f (L)(~h) = an L^n (~h) + an−1 L^{n−1} (~h) + · · · + a1 L(~h) + a0 I(~h)
         = an λ^n ~h + an−1 λ^{n−1} ~h + · · · + a1 λ~h + a0 ~h = f (λ)~h.

Therefore we have the following result, which implies that a simplest direct sum
decomposition for L is also simplest for all polynomials of L.

Proposition 7.1.3. If L is multiplying λ on a subspace H and f (t) is a polynomial,


then f (L) is multiplying f (λ) on H.

The maximal subspace on which L is multiplying λ is the eigenspace Ker(L−λI).


The following shows that, in order to get the simplest direct sum decomposition, it
is sufficient to show that V is a sum of eigenspaces.

Proposition 7.1.4. The sum of eigenspaces with distinct eigenvalues is a direct sum.

Proof. Suppose L is multiplying λi on Hi , and λi are distinct. Suppose


~h1 + ~h2 + · · · + ~hk = ~0, ~hi ∈ Hi .

Inspired by Example 2.2.13, we take

f (t) = (t − λ2 )(t − λ3 ) · · · (t − λk ).

By Proposition 7.1.3, we have f (L)(~hi ) = f (λi )~hi for ~hi ∈ Hi . Applying f (L) to the
equality ~0 = ~h1 + ~h2 + · · · + ~hk and using f (λ2 ) = · · · = f (λk ) = 0, we get
~0 = f (L)(~h1 + ~h2 + · · · + ~hk ) = f (λ1 )~h1 + f (λ2 )~h2 + · · · + f (λk )~hk = f (λ1 )~h1 .

By f (λ1 ) 6= 0, we get ~h1 = ~0. By taking f to be the other similar polynomials, we


get all ~hi = ~0.
The following is a very useful property about eigenspaces when there are two
commuting operators.

Proposition 7.1.5. Suppose L and K are linear operators on V . If LK = KL, then


any eigenspace of L is an invariant subspace of K.

Proof. If ~v ∈ Ker(L − λI), then L(~v ) = λ~v . This implies L(K(~v )) = K(L(~v )) =
K(λ~v ) = λK(~v ). Therefore K(~v ) ∈ Ker(L − λI).

Exercise 7.21. Suppose L = λ1 I ⊕ λ2 I ⊕ · · · ⊕ λk I on V = H1 ⊕ H2 ⊕ · · · ⊕ Hk . Prove that

det L = λ1^{dim H1} λ2^{dim H2} · · · λk^{dim Hk} ,

and
(L − λ1 I)(L − λ2 I) . . . (L − λk I) = O.

Exercise 7.22. Suppose L = L1 ⊕ L2 ⊕ · · · ⊕ Lk and K = K1 ⊕ K2 ⊕ · · · ⊕ Kk with respect


to the same direct sum decomposition V = H1 ⊕ H2 ⊕ · · · ⊕ Hk .

1. Prove that LK = KL if and only if Li Ki = Ki Li for all i.

2. Prove that if Li = λi I for all i, then LK = KL.

The second statement is the converse of Proposition 7.1.5.

Exercise 7.23. Suppose a linear operator satisfies L2 + 3L + 2 = O. What can you say
about the eigenvalues of L?

Exercise 7.24. A linear operator L is nilpotent if it satisfies Ln = O for some n. Show


that the derivative operator from Pn to itself is nilpotent. What are the eigenvalues of
nilpotent operators?

Exercise 7.25. Prove that f (K −1 LK) = K −1 f (L)K for any polynomial f .

7.1.3 Characteristic Polynomial


By Proposition 5.1.4, λ is an eigenvalue of a linear operator L on a finite dimensional
vector space if and only if det(L − λI) = 0. This is the same as det(λI − L) = 0.

Definition 7.1.6. The characteristic polynomial of a linear operator L is det(tI −L).

Let α be a basis of the vector space, and the vector space has dimension n. Let
A = [L]αα . Then

det(tI − A) = tn − σ1 tn−1 + σ2 tn−2 + · · · + (−1)n−1 σn−1 t + (−1)n σn .

The polynomial is independent of the choice of basis because the characteristic


polynomials of P −1 AP and A are the same.
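
For readers who wish to experiment, the following short sketch (in Python, using the library sympy, which is not part of this text) computes det(tI − A) for a sample 3 × 3 matrix and checks that a similar matrix P −1 AP has the same characteristic polynomial. The matrix P is an arbitrary invertible matrix chosen only for illustration.

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1, 1], [-1, 0, 1], [0, -1, 1]])
P = sp.Matrix([[1, 2, 0], [0, 1, 1], [1, 0, 1]])     # any invertible matrix

chi_A = A.charpoly(t).as_expr()                      # det(tI - A)
chi_B = (P.inv() * A * P).charpoly(t).as_expr()      # det(tI - P^{-1} A P)

print(chi_A)                     # t**3 - 2*t**2 + 3*t - 3
print(sp.expand(chi_A - chi_B))  # 0: the two characteristic polynomials agree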

Exercise 7.26. Prove that the eigenvalues of an upper or lower triangular matrix are the
diagonal entries.

Exercise 7.27. Suppose L1 and L2 are linear operators. Prove that the characteristic polynomial of

    ( L1  ∗  )
    ( O   L2 )

is det(tI − L1 ) det(tI − L2 ). Prove that the same is true for

    ( L1  O  )
    ( ∗   L2 ).

Exercise 7.28. What is the characteristic polynomial of the derivative operator on Pn ?

Exercise 7.29. What is the characteristic polynomial of the transpose operation on n × n


matrices?
 
Exercise 7.30. For the 2 × 2 matrix

    A = ( a  b )
        ( c  d ),

show that

    det(tI − A) = t2 − (tr A)t + det A,    tr A = a + d,    det A = ad − bc.

Exercise 7.31. Prove that σn = det A, and

    σ1 = tr ( a11  a12  · · ·  a1n )
            ( a21  a22  · · ·  a2n )
            ( ..   ..          ..  )
            ( an1  an2  · · ·  ann )
       = a11 + a22 + · · · + ann .

Exercise 7.32. Let I = (~e1 ~e2 · · · ~en ) and A = (~v1 ~v2 · · · ~vn ). Then the term (−1)n−1 σn−1 t
in det(tI − A) is

    (−1)n−1 σn−1 t = Σ_{1≤i≤n} det(−~v1 · · · t~ei · · · −~vn )
                   = Σ_{1≤i≤n} (−1)n−1 t det(~v1 · · · ~ei · · · ~vn ),

where the i-th column of A is replaced by t~ei . Use an argument similar to the cofactor
expansion to show that

    σn−1 = det A11 + det A22 + · · · + det Ann .

Exercise 7.33. For an n × n matrix A and 1 ≤ i1 < i2 < · · · < ik ≤ n, let A(i1 , i2 , . . . , ik )
be the k × k submatrix of A of the i1 , i2 , . . . , ik rows and i1 , i2 , . . . , ik columns. Prove that
X
σk = det A(i1 , i2 , . . . , ik ).
1≤i1 <i2 <···<ik ≤n

Exercise 7.34. Suppose L = λ1 I ⊕ λ2 I ⊕ · · · ⊕ λk I on V = H1 ⊕ H2 ⊕ · · · ⊕ Hk . Prove that


det(tI − L) = (t − λ1 )dim H1 (t − λ2 )dim H2 · · · (t − λk )dim Hk .
Then verify directly that the polynomial f (t) = det(tI − L) satisfies f (L) = O.

Exercise 7.35. Show that the characteristic polynomial of the matrix in Example 7.16 (for
the restriction of a linear operator to a cyclic subspace) is tn + an−1 tn−1 + · · · + a1 t + a0 .

Proposition 7.1.7. If H is an invariant subspace of a linear operator L on V ,


then the characteristic polynomial det(tI − L|H ) of the restriction L|H divides the
characteristic polynomial det(tI − L) of L.

Proof. Let H′ be a direct summand of H: V = H ⊕ H′. Since H is L-invariant, we
have (see Exercise 7.13)

    L = ( L|H  ∗  )
        ( O    L′ ).

Here the linear operator L′ is the H′-component of the restriction of L to H′. Then
by Exercise 7.27, we have

    det(tI − L) = det ( tI − L|H  ∗        )
                      ( O         tI − L′  ) = det(tI − L|H ) det(tI − L′).

Theorem 7.1.8 (Cayley-Hamilton Theorem). Let f (t) = det(tI − L) be the charac-


teristic polynomial of a linear operator L. Then f (L) = O.

By Exercise 7.30, for a 2 × 2 matrix, the theorem says

    ( a  b )2          ( a  b )             ( 1  0 )   ( 0  0 )
    ( c  d )  − (a + d)( c  d ) + (ad − bc) ( 0  1 ) = ( 0  0 ).
One could imagine that a direct computational proof would be very complicated.
Proof. For any vector ~v , construct the cyclic subspace in Example 7.1.5

H = R~v + RL(~v ) + RL2 (~v ) + · · · .

The subspace is L-invariant and has a basis α = {~v , L(~v ), L2 (~v ), . . . , Lk−1 (~v )}. More-
over, the fact that Lk (~v ) is a linear combination of α

Lk (~v ) = −ak−1 Lk−1 (~v ) − ak−2 Lk−2 (~v ) − · · · − a1 L(~v ) − a0~v

means that g(t) = tk + ak−1 tk−1 + ak−2 tk−2 + · · · + a1 t + a0 satisfies g(L)(~v ) = ~0.
By Exercises 7.16 and 7.35, the characteristic polynomial det(tI −L|H ) is exactly
the polynomial g(t) above. By Proposition 7.1.7, we know f (t) = h(t)g(t) for a
polynomial h(t). Then

f (L)(~v ) = h(L)(g(L)(~v )) = h(L)(~0) = ~0.

Since this is proved for all ~v , we get f (L) = O.

Example 7.1.11. The characteristic polynomial of the matrix in Example 2.2.18 is

    det ( t − 1  −1   −1    )
        ( 1      t    −1    )
        ( 0      1    t − 1 ) = t(t − 1)2 − 1 + (t − 1) + (t − 1) = t3 − 2t2 + 3t − 3.

By Cayley-Hamilton Theorem, this implies

    A3 − 2A2 + 3A − 3I = O.

This is the same as A(A2 − 2A + 3I) = 3I and gives

    A−1 = (1/3)(A2 − 2A + 3I)

        = (1/3) [ ( 0   0   3 )       ( 1   1  1 )       ( 1  0  0 ) ]
                [ ( −1  −2  0 )  − 2  ( −1  0  1 )  + 3  ( 0  1  0 ) ]
                [ ( 1   −1  0 )       ( 0  −1  1 )       ( 0  0  1 ) ]

        = (1/3) ( 1  −2   1 )
                ( 1   1  −2 )
                ( 1   1   1 ).

The result is the same as Example 2.2.18, but the calculation is more complicated.
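
As a quick computational check (a sketch in Python with sympy, not part of the text), one can verify the Cayley-Hamilton relation and the resulting formula for the inverse directly:

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1, 1], [-1, 0, 1], [0, -1, 1]])
I3 = sp.eye(3)

print(A.charpoly(t).as_expr())             # t**3 - 2*t**2 + 3*t - 3
print(A**3 - 2*A**2 + 3*A - 3*I3)          # zero matrix (Cayley-Hamilton)
print((A**2 - 2*A + 3*I3)/3 - A.inv())     # zero matrix: A^{-1} = (A^2 - 2A + 3I)/3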

7.1.4 Diagonalisation
We saw that the simplest case for a linear operator L on V is that V is the direct
sum of eigenspaces of L. If we take a basis of each eigenspace, then the union of
such bases is a basis of V consisting of eigenvectors of L. Therefore the simplest
case is that L has a basis of eigenvectors.
Let α = {~v1 , ~v2 , . . . , ~vn } be a basis of eigenvectors, with corresponding eigenvalues
d1 , d2 , . . . , dn . The numbers dj are the eigenvalues λi repeated dim Ker(L − λi I)
times. The equalities L(~vj ) = dj ~vj mean that the matrix of L with respect to the
basis α is diagonal

    [L]αα = ( d1              O  )
            (     d2             )
            (         ⋱          )
            ( O              dn  ) = D.
For this reason, we describe the simplest case for a linear operator as follows.

Definition 7.1.9. A linear operator is diagonalisable if it has a basis of eigenvectors.

Suppose L is a linear operator on Fn , with corresponding matrix A. If L has a
basis α of eigenvectors, then for the standard basis ε, we have

    A = [L]εε = [I]εα [L]αα [I]αε = P DP −1 ,    P = [α] = (~v1 ~v2 · · · ~vn ).

We call the formula A = P DP −1 a diagonalisation of A. A diagonalisation is the


same as a basis of eigenvectors (which form the columns of P ).
The following is a special diagonalisable case.

Proposition 7.1.10. If a linear operator on an n-dimensional vector space has n


distinct eigenvalues, then the linear operator is diagonalisable.

Proof. The condition means det(tI −L) = (t−λ1 )(t−λ2 ) · · · (t−λn ), with λi distinct.
In this case, we pick one eigenvector ~vi for each eigenvalue λi . By Proposition
7.1.4, the eigenvectors ~v1 , ~v2 , . . . , ~vn are linearly independent. Since the space has
dimension n, the eigenvectors form a basis. This proves the proposition.
To find diagonalisation, we may first solve det(tI − L) = 0 to find eigenvalues
λ. Then we solve (L − λI)~x = ~0 to find (a basis of) the eigenspace Ker(L − λI).
The number dim Ker(L − λI) is the geometric multiplicity of λ, and the operator is
diagonalisable if and only if the sum of geometric multiplicities is the dimension of
the whole space.

Example 7.1.12. For the matrix in Example 7.1.2, we have

    det(tI − A) = det ( t − 13   4     )
                      ( 4        t − 7 )
                = (t − 13)(t − 7) − 16 = t2 − 20t + 75 = (t − 5)(t − 15).

This gives two possible eigenvalues 5 and 15.
    The eigenspace Ker(A − 5I) is the solutions of

    (A − 5I)~x = ( 13 − 5   −4    ) ~x = ( 8    −4 ) ~x = ~0.
                 ( −4       7 − 5 )      ( −4    2 )

We get Ker(A − 5I) = R(1, 2), and the geometric multiplicity is 1.
    The eigenspace Ker(A − 15I) is the solutions of

    (A − 15I)~x = ( 13 − 15   −4     ) ~x = ( −2   −4 ) ~x = ~0.
                  ( −4        7 − 15 )      ( −4   −8 )

We get Ker(A − 15I) = R(2, −1), and the geometric multiplicity is 1.
    The sum of geometric multiplicities is 2 = dim R2 . Therefore the matrix is
diagonalisable, with basis of eigenvectors {(1, 2), (2, −1)}. The corresponding diagonalisation is

    ( 13  −4 ) = ( 1   2 ) ( 5   0  ) ( 1   2 )−1
    ( −4   7 )   ( 2  −1 ) ( 0   15 ) ( 2  −1 )   .

If A = P DP −1 , then An = P Dn P −1 . It is easy to see that Dn simply means
taking the n-th power of the diagonal entries. Therefore

    An = ( 1   2 ) ( 5n   0   ) ( 1   2 )−1 = (1/5) ( 1   2 ) ( 5n   0   ) ( 1   2 )
         ( 2  −1 ) ( 0    15n ) ( 2  −1 )           ( 2  −1 ) ( 0    15n ) ( 2  −1 )

       = 5n−1 ( 1 + 4 · 3n   2 − 2 · 3n )
              ( 2 − 2 · 3n   4 + 3n     ).

By using Taylor expansion, we also have f (A) = P f (D)P −1 , where the function f
is applied to each diagonal entry of D. For example, we have

    eA = ( 1   2 ) ( e5   0   ) ( 1   2 )−1
         ( 2  −1 ) ( 0    e15 ) ( 2  −1 )   .
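
The same diagonalisation can be checked numerically. The sketch below (Python with numpy, assumed available) recovers the eigenvalues and a matrix P of eigenvectors; numpy normalises the eigenvectors to unit length and may list the eigenvalues in either order, so P differs from the P in the text by column scaling.

import numpy as np

A = np.array([[13.0, -4.0], [-4.0, 7.0]])
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

print(eigenvalues)                               # approximately 5 and 15 (order may vary)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^{-1}

# powers of A via the diagonalisation
print(np.allclose(np.linalg.matrix_power(A, 3),
                  P @ np.diag(eigenvalues**3) @ np.linalg.inv(P)))  # True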

Example 7.1.13. For the matrix in Example 7.1.1, we have

    det(tI − A) = det ( t − 1   1     )
                      ( −1      t − 1 ) = (t − 1)2 + 1 = (t − 1 − i)(t − 1 + i).

This gives two possible eigenvalues λ = 1 + i, λ̄ = 1 − i. We need to view the real
matrix A as an operator on the complex vector space C2 because they are complex
eigenvalues.
    The eigenspace Ker(A − λI) is the solutions of

    (A − (1 + i)I)~x = ( 1 − (1 + i)   −1          ) ~x = ( −i   −1 ) ~x = ~0.
                       ( 1             1 − (1 + i) )      ( 1    −i )

We have the complex eigenspace KerC (A − λI) = C(1, −i), and the geometric multiplicity is 1.
    Since A is a real matrix, by Exercise 7.19, we know that KerC (A − λ̄I) is the
conjugate of C(1, −i), which is C(1, i). It is also a complex eigenspace of A, with the
same geometric multiplicity 1.
    The sum of geometric multiplicities is 2 = dimC C2 . Therefore the matrix is
diagonalisable, with basis of eigenvectors {(1, −i), (1, i)}. The corresponding diagonalisation is

    ( 1  −1 ) = ( 1    1 ) ( 1 + i   0     ) ( 1    1 )−1
    ( 1   1 )   ( −i   i ) ( 0       1 − i ) ( −i   i )   .

Example 7.1.14. By Example 5.1.4, we have

    A = ( 1   −2  −4 )
        ( −2   4  −2 )
        ( −4  −2   1 ),    det(tI − A) = (t − 5)2 (t + 4).

We get eigenvalues 5 and −4, and

    A − 5I = ( −4  −2  −4 )        A + 4I = ( 5   −2  −4 )
             ( −2  −1  −2 ),                ( −2   8  −2 )
             ( −4  −2  −4 )                 ( −4  −2   5 ).

The eigenspaces are Ker(A − 5I) = R(−1, 2, 0) ⊕ R(−1, 0, 1) and Ker(A + 4I) =
R(2, 1, 2). We get a basis of eigenvectors {(−1, 2, 0), (−1, 0, 1), (2, 1, 2)}, with the
corresponding diagonalisation

    ( 1   −2  −4 )   ( −1  −1  2 ) ( 5  0   0 ) ( −1  −1  2 )−1
    ( −2   4  −2 ) = (  2   0  1 ) ( 0  5   0 ) (  2   0  1 )
    ( −4  −2   1 )   (  0   1  2 ) ( 0  0  −4 ) (  0   1  2 )   .

Example 7.1.15. For the matrix


 
    A = ( 3   1  −3 )
        ( −1  5  −3 )
        ( −6  6  −2 ),

we have

    det(tI − A) = det ( t − 3  −1     3     )       ( t − 4  −1     3     )
                      ( 1      t − 5  3     ) = det ( t − 4  t − 5  3     )
                      ( 6      −6     t + 2 )       ( 0      −6     t + 2 )

                = det ( t − 4  −1     3     )
                      ( 0      t − 4  0     )
                      ( 0      −6     t + 2 ) = (t − 4)2 (t + 2).

We get eigenvalues 4 and −2, and

    A − 4I = ( −1  1  −3 )        A + 2I = ( 5   1  −3 )
             ( −1  1  −3 ),                ( −1  7  −3 )
             ( −6  6  −6 )                 ( −6  6   0 ).

The eigenspaces are Ker(A − 4I) = R(1, 1, 0) and Ker(A + 2I) = R(1, 1, 2), with
geometric multiplicities 1 and 1. Since 1 + 1 < dim R3 , the matrix is not diagonal-
isable.

Example 7.1.16. By Proposition 7.1.10, the matrix


 
    ( 1   0      0 )
    ( √2  2      0 )
    ( π   sin 1  e )

has three distinct eigenvalues 1, 2, e, and is therefore diagonalisable. The actual


diagonalisation is quite complicated to calculate.
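
Although the symbolic diagonalisation is messy, the eigenvalues and eigenvectors are easy to obtain numerically. A sketch (Python with numpy, assumed available):

import numpy as np

A = np.array([[1.0,        0.0,       0.0],
              [np.sqrt(2), 2.0,       0.0],
              [np.pi,      np.sin(1), np.e]])

eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)   # approximately 1, 2, 2.71828... (the diagonal entries, up to ordering)
print(np.allclose(A, P @ np.diag(eigenvalues) @ np.linalg.inv(P)))   # True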

Exercise 7.36. Find eigenspaces and determine whether there is a basis of eigenvectors.
Express the diagonalisable matrix as P DP −1 .

    1.  ( 0  1 )
        ( 1  0 ).

    2.  ( 0   1 )
        ( −1  0 ).

    3.  ( 1  2  3 )
        ( 0  4  5 )
        ( 0  0  6 ).

    4.  ( 1  0  0 )
        ( 2  4  0 )
        ( 3  5  6 ).

    5.  ( 0  0  1 )
        ( 0  1  0 )
        ( 1  0  0 ).

    6.  ( 3   1   0  )
        ( −4  −1  0  )
        ( 4   −8  −2 ).

    7.  ( 3   1  −3 )
        ( −1  5  −3 )
        ( −6  6  −2 ).

    8.  ( 1  1   1   1  )
        ( 1  −1  −1  1  )
        ( 1  −1  1   −1 )
        ( 1  1   −1  −1 ).

Exercise 7.37. Suppose a matrix has the following eigenvalues and eigenvectors. Find the
matrix.

1. (1, 1, 0), λ1 = 1; (1, 0, −1), λ2 = 2; (1, 1, 2), λ3 = 3.



2. (1, 1, 0), λ1 = 2; (1, 0, −1), λ2 = 2; (1, 1, 2), λ3 = −1.

3. (1, 1, 0), λ1 = 1; (1, 0, −1), λ2 = 1; (1, 1, 2), λ3 = 1.

4. (1, 1, 1), λ1 = 1; (1, 0, 1), λ2 = 2; (0, −1, 2), λ3 = 3.

Exercise 7.38. Let A = ~v~v T , where ~v is a nonzero vector regarded as a vertical matrix.

1. Find the eigenspaces of A.

2. Find the eigenspaces of aI + bA.

3. Find the eigenspaces of the matrix


 
    ( b  a  · · ·  a )
    ( a  b  · · ·  a )
    ( ..  ..   ⋱   .. )
    ( a  a  · · ·  b ).

Exercise 7.39. Describe all 2 × 2 matrices satisfying A2 = I.

Exercise 7.40. What are diagonalisable nilpotent operators? Then use Exercise 7.24 to
explain that the derivative operator on Pn is not diagonalisable.

Exercise 7.41. Is the transpose of n × n matrix diagonalisable?

Exercise 7.42. Suppose H is an invariant subspace of a linear operator L. Prove that if L


is diagonalisable, then the restriction operator L|H is also diagonalisable.

Exercise 7.43. Prove that L1 ⊕ L2 is diagonalisable if and only if L1 and L2 are diagonal-
isable.

Exercise 7.44. Prove that two diagonalisable matrices are similar if and only if they have
the same characteristic polynomial.

Exercise 7.45. Suppose two real matrices A, B are complex diagonalisable, and have the
same characteristic polynomial. Is it true that B = P AP −1 for a real matrix P ?

Exercise 7.46. Suppose det(tI − L) = (t − λ1 )n1 (t − λ2 )n2 · · · (t − λk )nk , with λi distinct.


The number ni is the algebraic multiplicity of λi . Denote the geometric multiplicity by
gi = dim Ker(L − λi I).

1. By taking H = Ker(L − λi I) in Proposition 7.1.7, prove that gi ≤ ni .

2. Explain that gi ≥ 1.

3. Prove that L is diagonalisable if and only if gi = ni .



Example 7.1.17. To find the general formula for the Fibonacci numbers, we introduce

    ~xn = ( Fn   )      ~x0 = ( 0 )      ~xn+1 = ( Fn+1      ) = A~xn ,    A = ( 0  1 )
          ( Fn+1 ),           ( 1 ),             ( Fn+1 + Fn )                 ( 1  1 ).

Then the n-th Fibonacci number Fn is the first coordinate of ~xn = An~x0 .
    The characteristic polynomial det(tI − A) = t2 − t − 1 has two roots

    λ1 = (1 + √5)/2,    λ2 = (1 − √5)/2.

By

    A − λ1 I = ( −λ1   1      ) = ( −(1 + √5)/2   1           )
               ( 1     1 − λ1 )   ( 1             (1 − √5)/2  ),

we get the eigenspace Ker(A − λ1 I) = R(1, (1 + √5)/2). If we substitute √5 by −√5, then
we get the other eigenspace Ker(A − λ2 I) = R(1, (1 − √5)/2). To find ~xn , we decompose
~x0 according to the basis of eigenvectors

    ~x0 = ( 0 ) = (1/√5) ( 1           ) − (1/√5) ( 1           )
          ( 1 )          ( (1 + √5)/2  )          ( (1 − √5)/2  ).

The two coefficients can be obtained by solving a system of linear equations. Then

    ~xn = (1/√5) An ( 1           ) − (1/√5) An ( 1           )
                    ( (1 + √5)/2  )             ( (1 − √5)/2  )

        = (1/√5) λ1^n ( 1           ) − (1/√5) λ2^n ( 1           )
                      ( (1 + √5)/2  )               ( (1 − √5)/2  ).

Picking the first coordinate, we get

    Fn = (1/√5)(λ1^n − λ2^n) = (1/(2^n √5)) [(1 + √5)^n − (1 − √5)^n ].
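
A small sketch (Python with numpy, assumed available) compares the matrix-power description of the Fibonacci numbers with the closed formula derived above:

import numpy as np

A = np.array([[0, 1], [1, 1]])
x0 = np.array([0, 1])                     # (F_0, F_1)

def fib_matrix(n):
    # first coordinate of A^n x_0
    return (np.linalg.matrix_power(A, n) @ x0)[0]

def fib_formula(n):
    l1 = (1 + np.sqrt(5)) / 2             # the two eigenvalues of A
    l2 = (1 - np.sqrt(5)) / 2
    return (l1**n - l2**n) / np.sqrt(5)

for n in range(10):
    print(n, fib_matrix(n), round(fib_formula(n)))   # both give 0, 1, 1, 2, 3, 5, 8, ...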

Exercise 7.47. Find the general formula for the Fibonacci numbers that start with F0 = 1,
F1 = 0.

Exercise 7.48. Given the recursive relations and initial values. Find the general formula.

1. xn = xn−1 + 2yn−1 , yn = 2xn−1 + 3yn−1 , x0 = 1, y0 = 0.

2. xn = xn−1 + 2yn−1 , yn = 2xn−1 + 3yn−1 , x0 = 0, y0 = 1.

3. xn = xn−1 + 2yn−1 , yn = 2xn−1 + 3yn−1 , x0 = a, y0 = b.

4. xn = xn−1 + 3yn−1 − 3zn−1 , yn = −3xn−1 + 7yn−1 − 3zn−1 , zn = −6xn−1 + 6yn−1 − 2zn−1 , x0 = a, y0 = b, z0 = c.

Exercise 7.49. Consider recursive relation xn = an−1 xn−1 + an−2 xn−2 + · · · + a1 x1 + a0 x0 .


Prove that if the polynomial tn − an−1 tn−1 − an−2 tn−2 − · · · − a1 t − a0 has distinct roots
λ1 , λ2 , . . . , λn , then
xn = c1 λn1 + c2 λn2 + · · · + cn λnn .

Here the coefficients c1 , c2 , . . . , cn may be calculated from the initial values x0 , x1 , . . . , xn−1 .
Then apply the result to the Fibonacci numbers.

Exercise 7.50. Consider the recursive relation xn = an−1 xn−1 + an−2 xn−2 + · · · + a1 x1 + a0 x0 .
Prove that if the polynomial tn − an−1 tn−1 − an−2 tn−2 − · · · − a1 t − a0 = (t − λ1 )(t − λ2 ) · · · (t − λn−2 )(t − λn−1 )2 , and the λj are distinct (i.e., λn−1 is the only double root), then

    xn = c1 λ1^n + c2 λ2^n + · · · + cn−2 λn−2^n + (cn−1 + cn n) λn−1^n .

Can you imagine the general formula in case tn − an−1 tn−1 − an−2 tn−2 − · · · − a1 t − a0 =
(t − λ1 )n1 (t − λ2 )n2 · · · (t − λk )nk ?

7.1.5 Complex Eigenvalue of Real Operator


In Example 7.1.13, the real matrix is diagonalised by using complex matrices. We
wish to know what this means in terms of only real matrices.
An eigenvalue λ of a real matrix A can be real (λ ∈ R) or not real (λ ∈ C − R).
In case λ is not real, the conjugate λ̄ is another distinct eigenvalue.
Suppose λ ∈ R. Then we have the real eigenspace KerR (A − λI). It is easy
to see that the complex eigenspace

KerC (A − λI) = {~v1 + i~v2 : ~v1 , ~v2 ∈ Rn , A(~v1 + i~v2 ) = λ(~v1 + i~v2 )}
= {~v1 + i~v2 : ~v1 , ~v2 ∈ Rn , A~v1 = λ~v1 , A~v2 = λ~v2 }

is the complexification KerR (A − λI) ⊕ iKerR (A − λI) of the real eigenspace. If we


choose a real basis α = {~v1 , ~v2 , . . . , ~vm } of KerR (A − λI), then

A~vj = λ~vj .

Suppose λ ∈ C − R. By Exercise 7.19, we have a pair of complex eigenspaces


H = KerC (A − λI) and H̄ = KerC (A − λ̄I) in Cn . By Proposition 7.1.4, the sum
H + H̄ is direct. Let α = {~u1 − iw~1 , ~u2 − iw~2 , . . . , ~um − iw~m }, ~uj , w~j ∈ Rn , be a basis
of H. Then ᾱ = {~u1 + iw~1 , ~u2 + iw~2 , . . . , ~um + iw~m } is a basis of H̄. By the direct sum
H ⊕ H̄, the union α ∪ ᾱ is a basis of H ⊕ H̄. The set β = {~u1 , w~1 , ~u2 , w~2 , . . . , ~um , w~m }
of real vectors and the set α ∪ ᾱ can be expressed as each other's complex linear
combinations. Since the two sets have the same number of vectors, the real set β is
also a complex basis of H ⊕ H̄. In particular, β is a linearly independent set (over
real or over complex, see Exercise 6.12). Therefore β is the basis of a real subspace
SpanR β ⊂ Rn .
    The vectors in β appear in pairs ~uj , w~j . If λ = µ + iν, µ, ν ∈ R, then A(~uj − iw~j ) =
λ(~uj − iw~j ) means

    A~uj = µ~uj + ν w~j ,
    Aw~j = −ν~uj + µw~j .

This means that the restriction of A to the 2-dimensional subspace R~uj ⊕ Rw~j has
the matrix

    [A|R~uj ⊕Rw~j ] = ( µ  −ν ) = r ( cos θ  − sin θ )
                     ( ν   µ )     ( sin θ    cos θ ),    λ = re^{iθ} .

In other words, the restriction of the linear transformation is the rotation by θ and
scalar multiplication by r. We remark that the rotation does not mean there is an
inner product. In fact, the concepts of eigenvalue, eigenspace and eigenvector do
not require inner product. The rotation only means that, if we “pretend” {~uj , w ~j}
to be an orthonormal basis (which may not be the case with respect to the usual
dot product, see Example 7.1.18), then the linear transformation is the rotation by
θ and scalar multiplication by r.
The pairs in β give a pair of real subspaces that form SpanR β = E ⊕ E †

    E = R~u1 + R~u2 + · · · + R~um ,
    E † = Rw~1 + Rw~2 + · · · + Rw~m .

Moreover, we have an isomorphism † : E ≅ E † given by ~uj† = w~j . Then we have

    A|E⊕E † = ( µI  −νI )
              ( νI   µI ).

The discussion extends to complex eigenvalue of real linear operator on complex


vector space with conjugation. See Section 6.1.5.
In Example 7.1.13, we have

    λ = 1 + i = √2 e^{iπ/4},    ~u − iw~ = (1, −i),

and

    E = R(1, 0),    E † = R(0, 1),    (1, 0)† = (0, 1).

Proposition 7.1.11. Suppose a real square matrix A is complex diagonalisable, with


complex basis of eigenvectors

. . . , ~vj , . . . , ~uk − i~u†k , ~uk + i~u†k , . . . ,

and corresponding eigenvalues

. . . , dj , . . . , ak + ibk , ak − ibk , . . . .

Then

    A = P DP −1 ,    P = (· · · ~vj · · · ~uk ~uk† · · · ),

where D is the block diagonal matrix

    D = ( ⋱                        )
        (    dj              O     )
        (        ⋱                 )
        (            ak  −bk       )
        ( O          bk   ak       )
        (                       ⋱  ),

with diagonal blocks dj for the real eigenvalues and 2 × 2 blocks for the conjugate pairs.

In more conceptual language, which is applicable to a complex diagonalisable


real linear operator L on a real vector space V , the result is
     
    L = λ1 I ⊕ λ2 I ⊕ · · · ⊕ λp I ⊕ ( µ1 I  −ν1 I ) ⊕ ( µ2 I  −ν2 I ) ⊕ · · · ⊕ ( µq I  −νq I )
                                     ( ν1 I   µ1 I )   ( ν2 I   µ2 I )           ( νq I   µq I ),
with respect to direct sum decomposition
V = H1 ⊕ H2 ⊕ · · · ⊕ Hp ⊕ (E1 ⊕ E1 ) ⊕ (E2 ⊕ E2 ) ⊕ · · · ⊕ (Eq ⊕ Eq ).
Here E ⊕E actually means the direct sum E ⊕E † of two different subspaces, together
with an isomorphism † that identifies the two subspaces.

Example 7.1.18. Consider

    A = ( 1  −2   4 )
        ( 2   2  −1 )
        ( 0   3   0 ),    det(tI − A) = (t − 3)(t2 + 9).

We find Ker(A − 3I) = R(1, 1, 1). For the conjugate pair of eigenvalues 3i, −3i, we
have

    A − 3iI = ( 1 − 3i   −2       4   )
              ( 2        2 − 3i   −1  )
              ( 0        3        −3i ).

For complex vector ~x ∈ Ker(A − 3iI), the last equation tells us x2 = ix3 . Substituting
into the second equation, we get

    0 = 2x1 + (2 − 3i)x2 − x3 = 2x1 + (2 − 3i)ix3 − x3 = 2x1 + (2 + 2i)x3 .

Therefore x1 = −(1 + i)x3 , and we get Ker(A − 3iI) = C(−1 − i, i, 1). This gives
~u = (−1, 0, 1), ~u† = (1, −1, 0), and

    A = ( 1  −1 − i  −1 + i ) ( 3  0    0  ) ( 1  −1 − i  −1 + i )−1
        ( 1   i      −i     ) ( 0  3i   0  ) ( 1   i      −i     )
        ( 1   1       1     ) ( 0  0   −3i ) ( 1   1       1     )

      = ( 1  −1   1 ) ( 3  0   0 ) ( 1  −1   1 )−1
        ( 1   0  −1 ) ( 0  0  −3 ) ( 1   0  −1 )
        ( 1   1   0 ) ( 0  3   0 ) ( 1   1   0 )   .

Geometrically, the operator fixes (1, 1, 1), “rotates” {(−1, 0, 1), (1, −1, 0)} by 90◦ ,
and then multiplies the whole space by 3.
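
The real form above can be checked numerically. A sketch (Python with numpy, assumed available), using the columns (1, 1, 1), ~u, ~u† for P and the block matrix D obtained in the example:

import numpy as np

A = np.array([[1.0, -2.0, 4.0], [2.0, 2.0, -1.0], [0.0, 3.0, 0.0]])
P = np.array([[1.0, -1.0,  1.0],
              [1.0,  0.0, -1.0],
              [1.0,  1.0,  0.0]])        # columns: (1,1,1), u = (-1,0,1), u† = (1,-1,0)
D = np.array([[3.0, 0.0,  0.0],
              [0.0, 0.0, -3.0],
              [0.0, 3.0,  0.0]])         # eigenvalue 3, and the rotation-by-90°-times-3 block

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True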

Example 7.1.19. Consider the linear operator L on R3 that flips ~u = (1, 1, 1) to
its negative and rotates the plane H = (R~u)⊥ = {x + y + z = 0} by 90◦ . Here the
rotation of H is compatible with the normal direction ~u of H by the right hand rule.
    In Example 4.2.8, we obtained an orthogonal basis ~v = (1, −1, 0), w~ = (1, 1, −2)
of H. By

    det(~v w~ ~u) = det ( 1   1   1 )
                       ( −1  1   1 )
                       ( 0   −2  1 ) = 6 > 0,

the rotation from ~v to w~ is compatible with ~u, and we have

    L( ~v/‖~v‖ ) = w~/‖w~‖,    L( w~/‖w~‖ ) = −~v/‖~v‖,    L(~u) = −~u.

This means

    L(√3 ~v) = w~,    L(w~) = −√3 ~v,    L(~u) = −~u.

Therefore the matrix of L is (taking P = (√3 ~v  w~  ~u))

    ( √3   1  1 ) ( 0  −1   0 ) ( √3   1  1 )−1
    ( −√3  1  1 ) ( 1   0   0 ) ( −√3  1  1 )
    ( 0   −2  1 ) ( 0   0  −1 ) ( 0   −2  1 )

    = (1/3) ( −1       −√3 − 1   √3 − 1  )
            ( √3 − 1   −1        −√3 − 1 )
            ( −√3 − 1  √3 − 1    −1      ).

From the meaning of the linear operator, we also know that the 4-th power of the
matrix is the identity.
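
These claims can be confirmed numerically. A sketch (Python with numpy, assumed available) builds the matrix from the basis (√3 ~v, w~, ~u) and the block [[0, −1, 0], [1, 0, 0], [0, 0, −1]]:

import numpy as np

s = np.sqrt(3)
P = np.array([[s, 1.0, 1.0], [-s, 1.0, 1.0], [0.0, -2.0, 1.0]])    # columns sqrt(3)v, w, u
M = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
L = P @ M @ np.linalg.inv(P)

print(np.allclose(L @ L.T, np.eye(3)))                              # True: L is orthogonal
print(np.allclose(L @ np.array([1.0, 1.0, 1.0]), [-1, -1, -1]))     # True: L(u) = -u
print(np.allclose(np.linalg.matrix_power(L, 4), np.eye(3)))         # True: L^4 = I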

Exercise 7.51. Let ~v = (1, 1, 1, 1), w~ = (1, 1, −1, −1). Let H = R~v ⊕ Rw~. Find the matrix.

1. Rotate H by 45◦ in the direction of ~v to w~. Flip H ⊥ .

2. Rotate H by 45◦ in the direction of w~ to ~v . Flip H ⊥ .

3. Rotate H by 45◦ in the direction of ~v to w~. Identity on H ⊥ .

4. Rotate H by 45◦ in the direction of ~v to w~. Rotate H ⊥ by 90◦ . The orientation of
R4 given by the two rotations is the positive orientation.

5. Orthogonal projection to H, then rotate H by 45◦ in the direction of ~v to w~.

7.2 Orthogonal Diagonalisation


7.2.1 Normal Operator
Let V be a complex vector space with (Hermitian) inner product h , i. The simplest
case for a linear operator L on V is an orthogonal sum

L = λ1 I ⊥ λ2 I ⊥ · · · ⊥ λk I,

with respect to
V = H1 ⊥ H2 ⊥ · · · ⊥ Hk .
This means that L has an orthonormal basis of eigenvectors, i.e., L is orthogonally
diagonalisable.
By Exercise 4.60 and (λI)∗ = λ̄I, the simplest case implies

L∗ = λ̄1 I ⊥ λ̄2 I ⊥ · · · ⊥ λ̄k I.

Then we get L∗ L = LL∗ .

Definition 7.2.1. A linear operator L on an inner product space is normal if L∗ L =


LL∗ .

The discussion before the definition proves the necessity of the following result.
The sufficiency will follow from the much more general Theorem 7.2.4.

Theorem 7.2.2. A linear operator on a complex inner product space is orthogonally


diagonalisable if and only if it is normal.

The adjoint of a complex matrix A is A∗ = ĀT (with respect to the standard


complex dot product). Therefore a normal matrix means A∗ A = AA∗ . The theorem
says that A is normal if and only if A = U DU −1 = U DU ∗ for a diagonal matrix D
and a unitary matrix U . This is the unitary diagonalisation of A.

Exercise 7.52. Prove that L1 ⊥ L2 is orthogonally diagonalisable if and only if L1 and L2


are orthogonally diagonalisable.

Exercise 7.53. Prove the following are equivalent.

1. L is normal.

2. L∗ is normal.

3. kL(~v )k = kL∗ (~v )k for all ~v .

4. In the decomposition L = L1 + L2 , L∗1 = L1 , L∗2 = −L2 , we have L1 L2 = L2 L1 .



Exercise 7.54. Suppose L is a normal operator on V .


1. Use Exercise 4.62 to prove that KerL = KerL∗ .

2. Prove that Ker(L − λI) = Ker(L∗ − λ̄I).

3. Prove that eigenspaces of L with distinct eigenvalues are orthogonal.

Exercise 7.55. Suppose L is a normal operator on V .


1. Use Exercises 6.24 and 7.54 to prove that RanL = RanL∗ .

2. Prove that V = KerL ⊥ RanL.

3. Prove that RanLk = RanL and KerLk = KerL.

4. Prove that RanLj L∗k = RanL and KerL∗k Lj = KerL.

Now we apply Theorem 7.2.2 to a real linear operator L of real inner product
space V . The normal property means L∗ L = LL∗ , or the corresponding matrix
A with respect to an orthonormal basis satisfies AT A = AAT . By applying the
proposition to the natural extension of L to the complexification V ⊕ iV , we get the
diagonalisation as described in Proposition 7.1.11 and the subsequent remark. The
new concern here is the orthogonality. The vectors ~vj are real eigenvectors with real
eigenvalues, and can be chosen to be orthonormal. The vector pairs ~uk −i~u†k , ~uk +i~u†k
are eigenvectors corresponding to a conjugate pair of non-real eigenvalues. Since the
two conjugate eigenvalues are distinct, by the third part of Exercise 7.54, the two
vectors are orthogonal with respect to the complex inner product. As argued earlier
in Section 6.1.6, this means that we may also choose the vectors . . . , ~uk , ~u†k , . . . to
be orthonormal. Therefore we get
   
    L = λ1 I ⊥ · · · ⊥ λp I ⊥ ( µ1 I  −ν1 I ) ⊥ · · · ⊥ ( µq I  −νq I )
                              ( ν1 I   µ1 I )           ( νq I   µq I ),

with respect to

V = H1 ⊥ · · · ⊥ Hp ⊥ (E1 ⊥ E1 ) ⊥ · · · ⊥ (Eq ⊥ Eq ).

Here E ⊥ E means the direct sum E ⊥ E † of two orthogonal subspaces, together


with an isometric isomorphism † that identifies the two subspaces.

7.2.2 Commutative ∗-Algebra


In Propositions 7.1.3 and 7.1.5, we already saw the simultaneous diagonalisation of
several linear operators. If a linear operator L on an inner product space is normal,
then we may consider the collection of polynomials of L and L∗
    C[L, L∗ ] = { Σ aij Li L∗j : i, j non-negative integers }.

By L∗ L = LL∗ , we have (Li L∗j )(Lk L∗l ) = Li+k L∗(j+l) . We also have (Li L∗j )∗ =
Lj L∗i . Therefore C[L, L∗ ] is an example of the following concept.

Definition 7.2.3. A collection A of linear operators on an inner product space is a


commutative ∗-algebra if the following are satisfied.

1. Algebra: L, K ∈ A implies aL + bK, LK ∈ A.

2. Closed under adjoint: L ∈ A implies L∗ ∈ A.

3. Commutative: L, K ∈ A implies LK = KL.

The definition allows us to consider more general situation. For example, if L, K


are operators, such that L, L∗ , K, K ∗ are mutually commutative. Then the collection
C[L, L∗ , K, K ∗ ] of polynomials of L, L∗ , K, K ∗ is also a commutative ∗-algebra. We
may then get an orthonormal basis of eigenvectors for both L and K.
An eigenvector of A is the eigenvector of every linear operator in A.

Theorem 7.2.4. A commutative ∗-algebra of linear operators on a finite dimensional


inner product space is orthogonally diagonalisable.

The proof of the theorem is based on Proposition 7.1.5 and the following result.
Both do not require normal operator.

Lemma 7.2.5. If H is L-invariant, then H ⊥ is L∗ -invariant.

Proof. Let ~v ∈ H ⊥ . For any ~h ∈ H, we have L(~h) ∈ H, and therefore

h~h, L∗ (~v )i = hL(~h), ~v i = 0.

Since this holds for all ~h ∈ H, we get L∗ (~v ) ∈ H ⊥ .

Proof of Theorem 7.2.4. If every operator in A is a constant multiple on the whole


V , then any basis is a basis of eigenvectors for A. So we assume an operator L ∈ A
is not a constant multiple. By the fundamental theorem of algebra (Theorem 6.1.1),
the characteristic polynomial of L has a root and gives an eigenvalue λ. Since L is
not a constant multiple, the corresponding eigenspace H = Ker(L − λI) is neither
zero nor V .
Since any K ∈ A satisfies KL = LK, by Proposition 7.1.5, H is K-invariant.
Then by Lemma 7.2.5, H ⊥ is K ∗ -invariant. Since we also have K ∗ ∈ A, the conclu-
sion remains true if K is replaced by K ∗ . By (K ∗ )∗ = K, we find that H ⊥ is also
K-invariant.
Since both H and H ⊥ are K-invariant for every K ∈ A, we may restrict A to
the two subspaces and get two sets of operators AH and AH ⊥ of the respective inner

product spaces H and H ⊥ . Moreover, both AH and AH ⊥ are still commutative


∗-algebras.
Since H is neither zero nor V , both H and H ⊥ have strictly smaller dimension
than V . By induction on the dimension of underlying space, both AH and AH ⊥ are
orthogonally diagonalisable. This implies that A is orthogonally diagonalisable (see
Exercise 7.52).
It remains to justify the beginning of the induction, which is when dim V = 1.
Since every linear operator is a constant multiple in this case, the conclusion follows
trivially.
Theorem 7.2.4 can be greatly extended. The infinite dimensional version of the
theorem is the theory of commutative C ∗ -algebra.

Exercise 7.56. Suppose H is an invariant subspace of a normal operator L. Use Exercise


7.18 and orthogonal diagonalisation to prove that both H and H ⊥ are invariant subspaces
of L and L∗ .

Exercise 7.57. Suppose H is an invariant subspace of a normal operator L. Suppose P is


the orthogonal projection to H.
1. Prove that H is an invariant subspace of L if and only if (I − P )LP = O.
2. For X = P L(I − P ), prove that trXX ∗ = 0.
3. Prove that H ⊥ is an invariant subspace of a normal operator L.
This gives an alternative proof of Exercise 7.56 without using the diagonalisation.

7.2.3 Hermitian Operator


An operator L is Hermitian if L = L∗ . This means

    hL(~v ), w~i = h~v , L(w~)i    for all ~v , w~.
Therefore we also say that L is self-adjoint. Hermitian operators are normal.

Proposition 7.2.6. A linear operator on a complex inner product space is Hermitian


if and only if it is orthogonally diagonalisable and all the eigenvalues are real.

A matrix A is Hermitian if A∗ = A, and we get A = U DU −1 = U DU ∗ for a real


diagonal matrix D and a unitary matrix U .
Proof. By Theorem 7.2.2, we only need to show that a normal operator is Hermitian
if and only if all the eigenvalues are real. Let L(~v ) = λ~v , ~v 6= ~0. Then
hL(~v ), ~v i = hλ~v , ~v i = λh~v , ~v i,
h~v , L(~v )i = h~v , λ~v i = λ̄h~v , ~v i.
By L = L∗ , the left hand sides are equal. Then by h~v , ~v i ≠ 0, we get λ = λ̄.

Example 7.2.1. The matrix

    A = ( 2      1 + i )
        ( 1 − i  3     )

is Hermitian, with characteristic polynomial

    det ( t − 2    −1 − i )
        ( −1 + i   t − 3  ) = (t − 2)(t − 3) − (1^2 + 1^2) = t2 − 5t + 4 = (t − 1)(t − 4).

By

    A − I = ( 1      1 + i )        A − 4I = ( −2     1 + i )
            ( 1 − i  2     ),                ( 1 − i  −1    ),

we get Ker(A − I) = C(1 + i, −1) and Ker(A − 4I) = C(1, 1 − i). By ‖(1 + i, −1)‖ =
√3 = ‖(1, 1 − i)‖, we get

    C2 = C(1 + i, −1) ⊥ C(1, 1 − i) = C (1/√3)(1 + i, −1) ⊥ C (1/√3)(1, 1 − i),    A = 1 ⊥ 4,

and the unitary diagonalisation

    ( 2      1 + i ) = ( (1 + i)/√3   1/√3       ) ( 1  0 ) ( (1 + i)/√3   1/√3       )−1
    ( 1 − i  3     )   ( −1/√3        (1 − i)/√3 ) ( 0  4 ) ( −1/√3        (1 − i)/√3 )   .
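
Numerically, Hermitian matrices are handled by numpy.linalg.eigh, which returns real eigenvalues and an orthonormal (unitary) basis of eigenvectors directly. A sketch (Python with numpy, assumed available):

import numpy as np

A = np.array([[2.0, 1.0 + 1.0j], [1.0 - 1.0j, 3.0]])
eigenvalues, U = np.linalg.eigh(A)            # eigenvalues in ascending order

print(eigenvalues)                                            # [1. 4.]
print(np.allclose(U.conj().T @ U, np.eye(2)))                 # True: U is unitary
print(np.allclose(A, U @ np.diag(eigenvalues) @ U.conj().T))  # True: A = U D U*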

Example 7.2.2 (Fourier Series). We extend Example 4.3.3 to the vector space V of
complex valued smooth periodic functions f (t) of period 2π, and we also extend the
inner product

    hf, gi = (1/2π) ∫_0^{2π} f (t)ḡ(t) dt.

By the calculation in Example 4.3.3, the derivative operator D(f ) = f ′ on V satisfies
D∗ = −D. This implies that the operator L = iD is Hermitian.
    As pointed out earlier, Theorems 7.2.2, 7.2.4 and Proposition 7.2.6 can be extended
to infinite dimensional spaces (Hilbert spaces, to be more precise). This suggests that
the operator L = iD should have an orthogonal basis of eigenvectors with real
eigenvalues. Indeed we have found in Example 7.1.10 that the eigenvalues of L are
precisely the integers n, and the eigenspace Ker(nI − L) = Ceint . Moreover (by
Exercise 7.54, for example), the eigenspaces are always orthogonal. The following is
a direct verification of the orthonormal property

    heimt , eint i = (1/2π) ∫_0^{2π} eimt e−int dt = { (1/(2πi(m − n))) ei(m−n)t |_0^{2π} = 0,   if m ≠ n,
                                                     { (1/2π) · 2π = 1,                          if m = n.

The diagonalisation means that any periodic function of period 2π should be expressed as

    f (t) = Σ_{n∈Z} cn eint = c0 + Σ_{n=1}^{+∞} (cn eint + c−n e−int ) = a0 + Σ_{n=1}^{+∞} (an cos nt + bn sin nt).

Here we use

    eint = cos nt + i sin nt,    an = cn + c−n ,    bn = i(cn − c−n ).

We conclude that the Fourier series grows naturally out of the diagonalisation of the
derivative operator. If we apply the same kind of thinking to the derivative operator
on the second vector space in Example 4.3.3 and use the eigenvectors in Example
7.1.9, then we get the Fourier transformation.

Exercise 7.58. Prove that a Hermitian operator L satisfies

hL(~u), ~v i = (1/4)(hL(~u + ~v ), ~u + ~v i − hL(~u − ~v ), ~u − ~v i


− ihL(~u + i~v ), ~u + i~v i − ihL(~u − i~v ), ~u − i~v i).

Exercise 7.59. Prove that L is Hermitian if and only if hL(~v ), ~v i = h~v , L(~v )i for all ~v .

Exercise 7.60. Prove that L is Hermitian if and only if hL(~v ), ~v i is real for all ~v .

Exercise 7.61. Prove that the determinant of a Hermitian operator is real.

Next we apply Proposition 7.2.6 to a real linear operator L of a real inner product
space. The self-adjoint property means L∗ = L, or the corresponding matrix with
respect to an orthonormal basis is symmetric. Since all the eigenvalues are real, the
complex eigenspaces are complexifications of real eigenspaces. The orthogonality
between complex eigenspaces is then the same as the orthogonality between real
eigenspaces. Therefore we conclude

L = λ1 I ⊥ λ2 I ⊥ · · · ⊥ λp I, λi ∈ R.

Proposition 7.2.7. A real matrix is symmetric if and only if it is (real) orthogonally


diagonalisable. In other words, we have A = U DU −1 = U DU T for a real diagonal
matrix D and an orthogonal matrix U .

Example 7.2.3. Even without calculation, we know the symmetric matrix in Exam-
ples 7.1.2 and 7.1.12 has orthogonal diagonalisation. From the earlier calculation,
we have orthogonal decomposition

    R2 = R (1/√5)(1, 2) ⊥ R (1/√5)(2, −1),    A = 5 ⊥ 15,

and the orthogonal diagonalisation

    ( 13  −4 ) = ( 1/√5   2/√5  ) ( 5   0  ) ( 1/√5   2/√5  )−1
    ( −4   7 )   ( 2/√5  −1/√5  ) ( 0   15 ) ( 2/√5  −1/√5  )   .

Example 7.2.4. The symmetric matrix in Example 7.1.14 has an orthogonal diagonalisation. The basis of eigenvectors in the earlier example is not orthogonal. We
may apply the Gram-Schmidt process to get an orthogonal basis of Ker(A − 5I)

    ~v1 = (−1, 2, 0),
    ~v2 = (−1, 0, 1) − ((1 + 0 + 0)/(1 + 4 + 0)) (−1, 2, 0) = (1/5)(−4, −2, 5).

Together with the basis ~v3 = (2, 1, 2) of Ker(A + 4I), we get an orthogonal basis of
eigenvectors. By further dividing the length, we get the orthogonal diagonalisation

    A = ( −1/√5   −4/(3√5)   2/3 ) ( 5  0   0 ) ( −1/√5   −4/(3√5)   2/3 )−1
        (  2/√5   −2/(3√5)   1/3 ) ( 0  5   0 ) (  2/√5   −2/(3√5)   1/3 )
        (  0       √5/3      2/3 ) ( 0  0  −4 ) (  0       √5/3      2/3 )   .
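
Numerically, the Gram-Schmidt step is done automatically: for a real symmetric matrix, numpy.linalg.eigh already returns an orthogonal matrix of eigenvectors. A sketch (Python with numpy, assumed available):

import numpy as np

A = np.array([[1.0, -2.0, -4.0], [-2.0, 4.0, -2.0], [-4.0, -2.0, 1.0]])
eigenvalues, U = np.linalg.eigh(A)

print(eigenvalues)                                      # [-4.  5.  5.]
print(np.allclose(U.T @ U, np.eye(3)))                  # True: U is orthogonal
print(np.allclose(A, U @ np.diag(eigenvalues) @ U.T))   # True: A = U D U^T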

Example 7.2.5. The matrix


 √ 
√1 2 π
 2 2 sin 1
π sin 1 e

is numerically too complicated to calculate the eigenvalues and eigenspaces. Yet we


still know that the matrix is orthogonally diagonalisable.

Exercise 7.62. Find orthogonal diagonalisation.

    1.  ( 0  1 )
        ( 1  0 ).

    2.  ( a  b )
        ( b  a ).

    3.  ( −1  2  0 )
        ( 2   0  2 )
        ( 0   2  1 ).

    4.  ( 0  2   4 )
        ( 2  −3  2 )
        ( 4  2   0 ).

Exercise 7.63. Suppose a 3 × 3 real symmetric matrix has eigenvalues 1, 2, 3. Suppose


(1, 1, 0) is an eigenvector with eigenvalue 1. Suppose (1, −1, 0) is an eigenvector with
eigenvalue 2. Find the matrix.

Exercise 7.64. Suppose A and B are real symmetric matrices satisfying det(tI − A) =
det(tI − B). Prove that A and B are similar.

Exercise 7.65. What can you say about a real symmetric matrix A satisfying A3 = O.
What about satisfying A2 = I?

Exercise 7.66 (Legendre Polynomial). The Legendre polynomials Pn are obtained by applying
the Gram-Schmidt process to the polynomials 1, t, t2 , . . . with respect to the inner product
hf, gi = ∫_{−1}^{1} f (t)g(t) dt. The following steps show that

    Pn = (1/(2^n n!)) (d^n/dt^n) [(t2 − 1)^n ].

1. Prove [(1 − t2 )Pn′ ]′ + n(n + 1)Pn = 0.

2. Prove Σ_{n=0}^{∞} Pn x^n = 1/√(1 − 2tx + x2 ).

3. Prove Bonnet's recursion formula (n + 1)Pn+1 = (2n + 1)tPn − nPn−1 .

4. Prove that ∫_{−1}^{1} Pm Pn dt = 2δm,n /(2n + 1). [eigenvectors of hermitian operator used]
..........................

An operator L is skew-Hermitian if L = −L∗ . This means

    hL(~v ), w~i = −h~v , L(w~)i    for all ~v , w~.
Skew-Hermitian operators are normal.
Note that L is skew-Hermitian if and only if iL is Hermitian. By Proposition
7.2.6, therefore, L is skew-Hermitian if and only if it is orthogonally diagonalisable
and all the eigenvalues are imaginary.
Now we apply to a real linear operator L of a real inner product space. The skew-
self-adjoint property means L∗ = −L, or the corresponding matrix with respect to
an orthonormal basis is skew-symmetric. The eigenvalues of L are either 0 or a pair
±iλ with λ > 0. Therefore we get

    L = O ⊥ λ1 ( O  −I ) ⊥ λ2 ( O  −I ) ⊥ · · · ⊥ λq ( O  −I )
               ( I   O )      ( I   O )              ( I   O ),

with respect to

    V = H ⊥ (E1 ⊥ E1 ) ⊥ (E2 ⊥ E2 ) ⊥ · · · ⊥ (Eq ⊥ Eq ).

Exercise 7.67. For any linear operator L, prove that L = L1 + L2 for unique Hermitian
L1 and skew Hermitian L2 . Moreover, L is normal if and only if L1 and L2 commute. In
fact, the algebras C[L, L∗ ] and C[L1 , L2 ] are equal.

Exercise 7.68. What can you say about the determinant of a skew-Hermitian operator?
What about a skew-symmetric real operator?

Exercise 7.69. Suppose A and B are real skew-symmetric matrices satisfying det(tI − A) =
det(tI − B). Prove that A and B are similar.

Exercise 7.70. What can you say about a real skew-symmetric matrix A satisfying A3 = O.
What about satisfying A2 = −I?

7.2.4 Unitary Operator


An operator U is unitary if it is an isometric isomorphism. The isometry means
U ∗ U = I. Then the isomorphism means U −1 = U ∗ . Therefore we have U ∗ U = I =
U U ∗ , and unitary operators are normal.

Proposition 7.2.8. A linear operator on a complex inner product space is unitary if


and only if it is orthogonally diagonalisable and all the eigenvalues satisfy |λ| = 1.

The equality |λ| = 1 follows directly from kU (~v )k = k~v k. Conversely, it is easy
to see that, if |λi | = 1, then U = λ1 I ⊥ λ2 I ⊥ · · · ⊥ λp I preserves length and is
therefore unitary.
Now we may apply Proposition 7.2.8 to an orthogonal operator U of a real inner
product space V . The real eigevalues are 1, −1, and complex eigenvalues appear as
conjugate pairs e±iθ = cos θ ± i sin θ. Therefore we get
 
    U = I ⊥ −I ⊥ Rθ1 I ⊥ Rθ2 I ⊥ · · · ⊥ Rθq I,    Rθ = ( cos θ  − sin θ )
                                                        ( sin θ    cos θ ),

with respect to

V = H+ ⊥ H− ⊥ (E1 ⊥ E1 ) ⊥ (E2 ⊥ E2 ) ⊥ · · · ⊥ (Eq ⊥ Eq ).

The decomposition implies the following understanding of orthogonal operators.

Proposition 7.2.9. Any orthogonal operator on a real inner product space is the
orthogonal sum of identity, reflection, and rotations (on planes).

A real matrix A is orthogonal if and only if AT A = I = AAT . The proposition


implies that this is equivalent to A = U DU −1 = U DU T for an orthogonal matrix U
and D the block diagonal matrix

    D = ( 1                              )
        (    ⋱                           )
        (       −1                       )
        (           ⋱                    )
        (              cos θ  − sin θ    )
        (              sin θ    cos θ    )
        (                            ⋱   ),

with some diagonal entries 1, some diagonal entries −1, and some 2 × 2 rotation blocks.
Note that an orthogonal operator has determinant det U = (−1)dim H− . Therefore
U preserves orientation (i.e., det U = 1) if and only if dim H− is even. This means
that the −1 entries in the diagonal can be grouped into pairs and form rotations by
180◦

    ( −1   0 ) = ( cos π  − sin π )
    (  0  −1 )   ( sin π    cos π ).
Therefore an orientation preserving orthogonal operator is an orthogonal sum of
identities and rotations. For example, an orientation preserving orthogonal operator
on R3 is always a rotation around a fixed axis.

Example 7.2.6. Let L : R3 → R3 be an orthogonal operator. If all eigenvalues of L


are real, then L is one of the following.

1. L = ±I, the identity or the antipodal.

2. L fixes a line and is antipodal on the plane orthogonal to the line.

3. L flips a line and is identity on the plane orthogonal to the line.

If L has non-real eigenvalue, then L is one of the following.

1. L fixes a line and is rotation on the plane orthogonal to the line.

2. L flips a line and is rotation on the plane orthogonal to the line.

Exercise 7.71. Describe all the orthogonal operators of R2 .

Exercise 7.72. Describe all the orientation preserving orthogonal operators of R4 .

Exercise 7.73. Suppose an orthogonal operator exchanges (1, 1, 1, 1) and (1, 1, −1, −1), and
fixes (1, −1, 1, −1). What can the orthogonal operator be?

7.3 Canonical Form


Example 7.1.15 shows that there are non-diagonalisable linear operators. We still
wish to understand the structure of general linear operators by decomposing into
the direct sum of blocks of some standard shape. If the decomposition is unique
in some way, then it is canonical. The canonical form can be used to answer the
question whether two general square matrices are similar.
The most famous canonical form is the Jordan canonical form, for linear oper-
ators whose characteristic polynomial completely factorises. This always happens
over complex numbers. Over general fields, we always have the rational canonical
form.

7.3.1 Generalised Eigenspace


We study the structure of a linear operator L by thinking of the algebra F[L] of
polynomials of L over the base field F. We will take advantage of the division of
polynomials and the consequences of division algorithm in Section 6.2.3.
Let d(t) be the greatest common divisor of polynomials f1 (t), f2 (t), . . . , fk (t).
Then we have polynomials u1 (t), u2 (t), . . . , uk (t), such that

d(t) = f1 (t)u1 (t) + f2 (t)u2 (t) + · · · + fk (t)uk (t).



Applying the equality to a linear operator L, we get

d(L) = f1 (L)u1 (L) + f2 (L)u2 (L) + · · · + fk (L)uk (L),

Therefore

f1 (L)(~v ) = f2 (L)(~v ) = · · · = fk (L)(~v ) = ~0 =⇒ d(L)(~v ) = ~0,

and
Ran d(L) ⊂ Ranf1 (L) + Ranf2 (L) + · · · + Ranfk (L).
In fact, since fi (t) = d(t)qi (t) for some polynomial qi (t), we also have

Ranfi (L) = Ran d(L)qi (L) ⊂ Ran d(L).

Therefore we conclude

Ran d(L) = Ranf1 (L) + Ranf2 (L) + · · · + Ranfk (L).

For the special case that the polynomials are coprime, we have d(t) = 1, d(L) = I,
Ran d(L) = V , and therefore the following.

Lemma 7.3.1. Suppose L is a linear operator on V , and f1 (t), f2 (t), . . . , fk (t) are
coprime. Then

V = Ranf1 (L) + Ranf2 (L) + · · · + Ranfk (L).

Moreover, f1 (L)(~v ) = f2 (L)(~v ) = · · · = fk (L)(~v ) = ~0 implies ~v = ~0.

Recall that any monic polynomial is a unique product of monic irreducible poly-
nomials

f (t) = p1 (t)n1 p2 (t)n2 · · · pk (t)nk , p1 (t), p2 (t), . . . , pk (t) distinct.

For example, by the Fundamental Theorem of Algebra (Theorem 6.1.1), the irre-
ducible polynomials over C are t − λ. Therefore we have

f (t) = (t − λ1 )n1 (t − λ2 )n2 · · · (t − λk )nk , λ1 , λ2 , . . . , λk ∈ C distinct.

The irreducible polynomials over R are t − λ and t2 + at + b, with a2 < 4b. Therefore
we have

f (t) = (t−λ1 )n1 (t−λ2 )n2 · · · (t−λk )nk (t2 +a1 t+b1 )m1 (t2 +a2 t+b2 )m2 · · · (t2 +al t+bl )ml .

The quadratic factor t2 + at + b has a conjugate pair of complex roots λ = µ + iν,


λ̄ = µ − iν, µ, ν ∈ R. We will use specialised irreducible polynomials over C and R
only in the last moment. Therefore most part of our theory applies to any field.

Suppose L is a linear operator on a finite dimensional F-vector space V . We


consider the annihilator

AnnL = {g(t) ∈ F[t] : g(L) = O},

which is all the polynomials vanishing on L. The Cayley-Hamilton Theorem (Theo-


rem 7.1.8) says such polynomials exist. Moreover, if g1 (L) = O and g2 (L) = O, and
d(t) is the greatest common divisor of g1 (t) and g2 (t), then the discussion before
Lemma 7.3.1 says d(L) = O. Therefore there is a unique monic polynomial m(t)
satisfying
AnnL = m(t)F[t] = {m(t)q(t) : q(t) ∈ F[t]}.

Definition 7.3.2. The minimal polynomial m(t) of a linear operator L is the monic
polynomial with the property that g(L) = O if and only if m(t) divides g(t).

Suppose the characteristic polynomial of L is factorised into distinct irreducible


polynomials
f (t) = det(tI − L) = p1 (t)n1 p2 (t)n2 · · · pk (t)nk .
Then the minimal polynomial divides f (t), and we have (mi ≥ 1 is proved in Exercise
7.75)

    m(t) = p1 (t)m1 p2 (t)m2 · · · pk (t)mk ,    1 ≤ mi ≤ ni .

Example 7.3.1. The characteristic polynomial in the matrix in Example 7.1.15 is


(t − 4)2 (t + 2). The minimal polynomial is either (t − 4)(t + 2) or (t − 4)2 (t + 2). By

    (A − 4I)(A + 2I) = ( −1  1  −3 ) ( 5   1  −3 )   ( 12  ∗  ∗ )
                       ( −1  1  −3 ) ( −1  7  −3 ) = ( ∗   ∗  ∗ ) ≠ O,
                       ( −6  6  −6 ) ( −6  6   0 )   ( ∗   ∗  ∗ )

the polynomial (t−4)(t+2) is not minimal. The minimal polynomial is (t−4)2 (t+2).
In fact, Example 7.1.15 shows that A is not diagonalisable. Then we may also
use the subsequent Exercise 7.74 to see that the minimal polynomial cannot be
(t − 4)(t + 2).
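
This check is easy to reproduce symbolically. A sketch (Python with sympy, assumed available) shows that (A − 4I)(A + 2I) is nonzero while the characteristic polynomial already annihilates A, so the minimal polynomial is (t − 4)2 (t + 2):

import sympy as sp

A = sp.Matrix([[3, 1, -3], [-1, 5, -3], [-6, 6, -2]])
I3 = sp.eye(3)

print((A - 4*I3) * (A + 2*I3))        # nonzero matrix
print((A - 4*I3)**2 * (A + 2*I3))     # zero matrix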

Proposition 7.3.3. Suppose L is a linear operator, and f (t) = p1 (t)n1 p2 (t)n2 · · · pk (t)nk
for distinct irreducible polynomials p1 (t), p2 (t), . . . , pk (t). If f (L) = O, then

V = Ker p1 (L)n1 ⊕ Ker p2 (L)n2 ⊕ · · · ⊕ Ker pk (L)nk .

For the case F = C is the complex numbers, we have pi (L) = L − λi I. The


kernel Ker(L − λi I)ni contains the eigenspace, and is called a generalised eigenspace.
Therefore the proposition generalises Proposition 7.1.4.

Proof. Let

    fi (t) = f (t)/pi (t)ni = p1 (t)n1 · · · pi−1 (t)ni−1 pi+1 (t)ni+1 · · · pk (t)nk = Π_{l≠i} pl (t)nl .

Then f1 (t), f2 (t), . . . , fk (t) are coprime. By Lemma 7.3.1, we have

V = Ranf1 (L) + Ranf2 (L) + · · · + Ranfk (L).

By pi (L)ni fi (L)(~v ) = f (L) = O, we have Ranfi (L) ⊂ Ker pi (L)ni . Therefore

V = Ker p1 (L)n1 + Ker p2 (L)n2 + · · · + Ker pk (L)nk .

To show the sum is direct, we consider

~v1 + ~v2 + · · · + ~vk = ~0, ~vi ∈ Ker pi (L)ni .

We will prove that fj (L)(~vi ) = ~0 for all i, j. Then by Lemma 7.3.1 and the fact that
f1 (t), f2 (t), . . . , fk (t) are coprime, we get all ~vi = ~0.
For i ≠ j, let

    fij (t) = f (t)/(pi (t)ni pj (t)nj ) = fi (t)/pj (t)nj = Π_{l≠i,j} pl (t)nl .

Then
fj (L)(~vi ) = fij (L)pi (L)ni (~vi ) = ~0.
This proves fj (L)(~vi ) = ~0 for all i 6= j. Applying fi (L) to ~0 = ~v1 + ~v2 + · · · + ~vk , we
get
~0 = fi (L)(~v1 ) + fi (L)(~v2 ) + · · · + fi (L)(~vk ) = fi (L)(~vi ).

This proves that we also have fj (L)(~vi ) = ~0 for i = j. Therefore we indeed have
fj (L)(~vi ) = ~0 for all i, j.

Example 7.3.2. For the matrix in Example 7.1.15, we have Ker(A + 2I) = R(1, 1, 2)
and
 2  
1 −1 3 1 −1 1
(A − 4I)2 = 1 −1 3 = 18 1 −1 1 ,
6 −6 6 2 −2 1
Ker(A − 4I)2 = R(1, 1, 0) ⊕ R(1, 0, −1).

Then we have the direct sum decomposition

R3 = Ker(A − 4I)2 ⊕ Ker(A + 2I) = (R(1, 1, 0) ⊕ R(1, 0, −1)) ⊕ R(1, 1, 2).



Exercise 7.74. Prove that a linear operator is diagonalisable if and only if the minimal
polynomial completely factorises and has no repeated root: m(t) = (t − λ1 )(t − λ2 ) · · · (t −
λk ), with λ1 , λ2 , . . . , λk distinct.

Exercise 7.75. Suppose det(tI − L) = p1 (t)n1 p2 (t)n2 · · · pk (t)nk , where p1 (t), p2 (t), . . . ,
pk (t) are distinct irreducible polynomials. Suppose m(t) = p1 (t)m1 p2 (t)m2 · · · pk (t)mk is
the minimal polynomial of L.
1. Prove that Ker pi (L)ni ≠ {~0}. Hint: First use Exercise 7.15 to prove the case of
cyclic subspace. Then induct.
2. Prove that mi > 0.
3. Prove that eigenvalues are exactly roots of the minimal polynomial. In other words,
the minimal polynomial and the characteristic polynomial have the same roots.

Exercise 7.76. Suppose p1 (t)m1 p2 (t)m2 · · · pk (t)mk is the minimal polynomial of L. Prove
that Ker pi (L)mi +1 = Ker pi (L)mi ⊋ Ker pi (L)mi −1 .

Exercise 7.77. Prove that the minimal polynomial of the linear operator on the cyclic
subspace in Example 7.1.5 is tk + ak−1 tk−1 + ak−2 tk−2 + · · · + a1 t + a0 .

Exercise 7.78. Find minimal polynomial. Determine whether the matrix is diagonalisable.

    1.  ( 1  2 )
        ( 3  4 ).

    2.  ( 3  −1  0 )
        ( 0   2  0 )
        ( 1  −1  2 ).

    3.  ( 3  −1  0 )
        ( 0   2  0 )
        ( 0  −1  2 ).

    4.  ( 0  0  0 )
        ( 1  0  0 )
        ( 2  3  0 ).

Exercise 7.79. Describe all 3 × 3 matrices satisfying A2 = I. What about A3 = A?

Exercise 7.80. Find the minimal polynomial of the derivative operator on Pn .

Exercise 7.81. Find the minimal polynomial of the transpose of n × n matrix.

7.3.2 Nilpotent Operator


By Proposition 7.1.5, the subspace H = Ker pi (L)ni in Proposition 7.3.3 is L-
invariant. Therefore it remains to study the restriction L|H of L to H. Since
pi (L|H )ni = O, the operator T = pi (L|H ) has the following property.

Definition 7.3.4. A linear operator T is nilpotent if T n = O for some n.

We may regard applying a linear operator as “hitting” vectors. A nilpotent


operator means that every vector is killed by hitting sufficiently many times.

Exercise 7.82. Prove that the only eigenvalue of a nilpotent operator is 0.



Exercise 7.83. Consider the matrix that shifts the coordinates by one position

    A = ( 0              O )        ( x1 )     ( 0    )
        ( 1  0             )        ( x2 )     ( x1   )
        (    1  ⋱          )   :    ( x3 )  →  ( x2   )
        (       ⋱  0       )        ( ⋮  )     ( ⋮    )
        ( O        1  0    )        ( xn )     ( xn−1 ).

Show that An = O and An−1 ≠ O.

Exercise 7.84. Show that any matrix of the following form is nilpotent

    ( 0  0  · · ·  0  0 )        ( 0  ∗  · · ·  ∗  ∗ )
    ( ∗  0  · · ·  0  0 )        ( 0  0  · · ·  ∗  ∗ )
    ( ..  ..       ..  .. )       ( ..  ..       ..  .. )
    ( ∗  ∗  · · ·  0  0 )        ( 0  0  · · ·  0  ∗ )
    ( ∗  ∗  · · ·  ∗  0 ),       ( 0  0  · · ·  0  0 ).

Let T be a nilpotent operator on V . Let m be the smallest number such that


T m = O. This means that all vectors are killed after hitting m times. We may then
ask more refined question on how vectors are killed:

1. What is the exact number i of hits needed to kill a vector ~v ? This means
T i (~v ) = ~0 and T i−1 (~v ) ≠ ~0.

2. A vector may be the result of prior hits. This means that the vector is T j (~v ).
If ~v needs exactly i hits to get killed, then T j (~v ) needs exactly i − j hits to
get killed.

The second question leads to the search for “fresh” vectors that have no history of
prior hits, i.e., not of the form T (~v ). This means that we find a direct summand F
(for fresh) of T (V )
V = F ⊕ T (V ).
We expect vectors in T (V ) to be obtained by applying T repeatedly to the fresh
vectors in F .
Now we ask the first question on F , which is the exact number of hits needed to
kill vectors. We start with vectors that must be hit maximal number of times. In
other words, they cannot be killed by m − 1 hits. This means we take Fm to be a
direct summand of the subspace KerT m−1 of vectors that are killed by fewer than
m hits
V = Fm ⊕ KerT m−1 .
Next we try to find fresh vectors that are killed by exactly m − 1 hits. This means
we exclude the subspace KerT m−2 of vectors that are killed by fewer than m − 1

hits. We also should exclude T (Fm ), which are killed by exactly m − 1 hits but are
not fresh. Therefore we take Fm−1 to be a direct summand
KerT m−1 = Fm−1 ⊕ (T (Fm ) + KerT m−2 ).
Continuing with the similar idea, we inductively take Fi to be a direct summand
KerT i = Fi ⊕ (T (Fi+1 ) + T 2 (Fi+2 ) + · · · + T m−i (Fm ) + KerT i−1 ). (7.3.1)
We claim that the sum (7.3.1) is actually direct
KerT i = Fi ⊕ T (Fi+1 ) ⊕ T 2 (Fi+2 ) ⊕ · · · ⊕ T m−i (Fm ) ⊕ KerT i−1 . (7.3.2)
The first such statement (for i = m) is V = Fm ⊕ KerT m−1 , which is direct by our
construction. We inductively assume the direct sum (7.3.2) for KerT i+1 , and try to
prove the similar sum (7.3.1) for KerT i is also direct. Here we note that the induction
starts from i = m and goes down to i = 0. By Proposition 3.3.2, for the sum (7.3.1)
to be direct, we only need to prove T (Fi+1 ) + T 2 (Fi+2 ) + · · · + T m−i (Fm ) + KerT i−1
is direct. By Exercise 3.56, this means that, if
T (~vi+1 ) + T 2 (~vi+2 ) + · · · + T m−i (~vm ) ∈ KerT i−1 , ~vj ∈ Fj for i < j ≤ m,
then all the terms in the sum T j−i (~vj ) = ~0. Note that the above is the same as
~vi+1 + T (~vi+2 ) + · · · + T m−i−1 (~vm ) ∈ KerT i .
By the inductively assumed direct sum (7.3.2) for KerT i+1 , and Exercise 3.56, we
get
~vi+1 = T (~vi+2 ) = · · · = T m−i−1 (~vm ) = ~0.
This implies
T (~vi+1 ) = T 2 (~vi+2 ) = · · · = T m−i (~vm ) = ~0.
Combining all direct sums (7.3.2), and using Proposition 3.3.2, we get
V = Fm ⊕ KerT m−1
= Fm ⊕ Fm−1 ⊕ T (Fm ) ⊕ KerT m−2
= Fm ⊕ Fm−1 ⊕ T (Fm ) ⊕ Fm−2 ⊕ T (Fm−1 ) ⊕ T 2 (Fm ) ⊕ KerT m−3
= ···
= ⊕0≤j<i≤m T j (Fi ).
We summarise the direct sum decomposition into the following.
Km Fm
Km−1 Fm−1 T (Fm )
Km−2 Fm−2 T (Fm−1 ) T 2 (Fm )
.. .. .. ..
. . . .
K2 F2 T (F3 ) T 2 (F4 ) · · · T m−2 (Fm )
K1 F1 T (F2 ) T 2 (F3 ) · · · T m−2 (Fm−1 ) T m−1 (Fm )
V F T (F ) T 2 (F ) · · · T m−2 (F ) T m−1 (F )

We claim that all the diagonal subspaces are isomorphic. This means applying
T repeatedly gives isomorphisms

    Fi ≅ T (Fi ) ≅ T 2 (Fi ) ≅ · · · ≅ T i−1 (Fi ).

Since the map T j |Fi : Fi → T j (Fi ) is automatically onto, we only need to show
Ker(T j |Fi ) = Fi ∩ KerT j = {~0} for j < i. This is a consequence of the direct sum
Fi ⊕ KerT i−1 as part of (7.3.2), and KerT j ⊂ KerT i−1 for j < i.
The isomorphisms between diagonal subspaces imply that the direct sum of
diagonal subspaces is a direct sum of copies of the leading fresh subspace

    Gi = Fi ⊕ T (Fi ) ⊕ T 2 (Fi ) ⊕ · · · ⊕ T i−1 (Fi ) ≅ Fi ⊕ Fi ⊕ Fi ⊕ · · · ⊕ Fi .    (7.3.3)

The subspace Gi is T -invariant, and the restriction T |Gi is simply the “right shift”

T (~v1 , ~v2 , ~v3 , . . . , ~vi ) = (~0, ~v1 , ~v2 , . . . , ~vi−1 ), ~vj ∈ Fi .

In block form, we have V = G1 ⊕ G2 ⊕ · · · ⊕ Gm , and


 
    T = T |G1 ⊕ T |G2 ⊕ · · · ⊕ T |Gm ,    T |Gi = ( O              O )
                                                  ( I  O             )
                                                  (    I  ⋱          )
                                                  (       ⋱  O       )
                                                  ( O        I  O    ).

The block matrix is i × i, and each entry in the block matrix is dim Fi × dim Fi .
However, it is possible to have some Gi = {~0}. In this case, the block T |Gi is not
needed (or does not appear) in the decomposition for T .
Instead of taking the diagonal sum above for the decomposition V = ⊕0≤j<i≤m T j (Fi ),
we may also take the column sum or row sum. The first column adds up to become
the subspace
F = Fm ⊕ Fm−1 ⊕ · · · ⊕ F1
of all fresh vectors. The sum of subspaces in the i-th column is then the subspace

T i−1 (F ) = T i−1 (Fm ) ⊕ T i−1 (Fm−1 ) ⊕ · · · ⊕ T i−1 (Fi )

obtained by hitting fresh vectors i − 1 times, and we get

V = F ⊕ T (F ) ⊕ T 2 (F ) ⊕ · · · ⊕ T m−1 (F ).

In other words, V consists of fresh vectors and the various hits of the fresh vectors.
If we add the subspaces in the k-th row, we get the subspace of vectors killed by
exactly k hits

Kk = Fk ⊕ T (Fk+1 ) ⊕ T 2 (Fk+2 ) ⊕ · · · ⊕ T m−k (Fm ).



The whole space is then decomposed by counting the exact hits

V = Km ⊕ Km−1 ⊕ Km−2 ⊕ · · · ⊕ K1 .

In particular, we know KerT k consists of vectors killed by exactly i hits, for all i ≤ k

KerT k = Kk ⊕ Kk−1 ⊕ · · · ⊕ K1 .

Example 7.3.3. Continuing the discussion of the matrix in Example 7.1.15, we have
the nilpotent operator T = A − 4I on V = Ker(A − 4I)2 = R(1, 1, 0) ⊕ R(1, 0, −1).
In Example 7.3.2, we get Ker(A − 4I) = R(1, 1, 0) and the filtration

    V = Ker(A − 4I)2 = R(1, 1, 0) ⊕ R(1, 0, −1) ⊃ Ker(A − 4I) = R(1, 1, 0) ⊃ {~0}.

We may choose the fresh F2 = R(1, 0, −1) between the two kernels. Then T (F2 ) is
the span of

    T (1, 0, −1) = (A − 4I)(1, 0, −1) = (2, 2, 0).

This suggests us to revise the basis of V to

    Ker(A − 4I)2 = R(1, 0, −1) ⊕ R((A − 4I)(1, 0, −1)) = R(1, 0, −1) ⊕ R(2, 2, 0).

The matrices of T |Ker(A−4I)2 and A|Ker(A−4I)2 with respect to the basis are

    [T |Ker(A−4I)2 ] = ( 0  0 )        [A|Ker(A−4I)2 ] = [T ] + 4I = ( 4  0 )
                       ( 1  0 ),                                    ( 1  4 ).

We also have Ker(A + 2I) = R(1, 1, 2). With respect to the basis

    α = {(1, 0, −1), (2, 2, 0), (1, 1, 2)},

we have

    [A]αα = ( [A|Ker(A−4I)2 ]   O               ) = ( 4  0  0  )
            ( O                 [A|Ker(A+2I) ]  )   ( 1  4  0  )
                                                    ( 0  0  −2 ),

and

    A = ( 1   2  1 ) ( 4  0   0 ) ( 1   2  1 )−1
        ( 0   2  1 ) ( 1  4   0 ) ( 0   2  1 )
        ( −1  0  2 ) ( 0  0  −2 ) ( −1  0  2 )   .

We remark that there is no need to introduce F1 because T (F2 ) already fills up
Ker(A − 4I). If Ker(A − 4I) ≠ T (F2 ), then we need to find the direct summand F1
of T (F2 ) in Ker(A − 4I).
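
The canonical form of this matrix can also be computed symbolically. A sketch (Python with sympy, assumed available); note that sympy places the 1's of each Jordan block on the superdiagonal, while the text's convention puts them on the subdiagonal, so the two forms differ by a harmless similarity:

import sympy as sp

A = sp.Matrix([[3, 1, -3], [-1, 5, -3], [-6, 6, -2]])
P, J = A.jordan_form()

print(J)                                  # Jordan blocks for -2 and 4 (a 2x2 block for 4)
print(sp.simplify(A - P * J * P.inv()))   # zero matrix: A = P J P^{-1}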

Example 7.3.4. The derivative operator D(f ) = f ′ : Pn → Pn satisfies Dn+1 = O.
In fact, we know KerDi = Pi−1 is the subspace of polynomials of degree < i.
    Note that Fn+1 is the direct summand of KerDn = Pn−1 in KerDn+1 = Pn . This
means Fn+1 = Rf (t), for a polynomial f (t) of degree n. For example, we may choose
f = tn /n! (another good choice is f = (t − a)n /n!). Then we have D(f ) = tn−1 /(n − 1)!, and

    Pn−1 = RD(f ) ⊕ Pn−2 ,    or    KerDn = D(Fn+1 ) ⊕ KerDn−1 .

Therefore Fn = {~0}. Next, we have D2 (f ) = tn−2 /(n − 2)!, and (note D(Fn ) = {~0})

    Pn−2 = RD2 (f ) ⊕ Pn−3 ,    or    KerDn−1 = D(Fn ) ⊕ D2 (Fn+1 ) ⊕ KerDn−2 .

Therefore Fn−1 = {~0}. In general, we have

    Fn+1 = R tn /n!,    Dn−i (Fn+1 ) = R ti /i!,    Fn = Fn−1 = · · · = F1 = {~0},

and

    Pn = Fn+1 ⊕ D(Fn+1 ) ⊕ D2 (Fn+1 ) ⊕ · · · ⊕ Dn (Fn+1 ).

The equality means that a polynomial of degree ≤ n equals its n-th order Taylor
expansion at 0. If we use f = (t − a)n /n!, then the equality means that the polynomial
equals its n-th order Taylor expansion at a.
    If we use the basis f, D(f ), D2 (f ), . . . , Dn (f ), then the matrix of D with respect
to the basis is

    [D] = ( 0  0  · · ·  0  0 )
          ( 1  0  · · ·  0  0 )
          ( 0  1  · · ·  0  0 )
          ( ..  ..       ..  .. )
          ( 0  0  · · ·  0  0 )
          ( 0  0  · · ·  1  0 ).

7.3.3 Jordan Canonical Form


In this section, we assume the characteristic polynomial f (t) = det(tI − L) com-
pletely factorises. This means pl (t) = t − λl in the earlier discussions, and the direct
sum in Proposition 7.3.3 becomes

V = V1 ⊕ V2 ⊕ · · · ⊕ Vk , Vl = Ker(L − λl I)nl .

Correspondingly, we have

L = (T1 + λ1 I) ⊕ (T2 + λ2 I) ⊕ · · · ⊕ (Tk + λk I),

where Tl = L|Vl − λl I satisfies Tlnl = O and is therefore a nilpotent operator on Vl .


Suppose ml is the smallest number such that Tlml = O. Then we have further
direct sum decomposition

Vl = ⊕0≤j<i≤ml Tlj (Fil ) ∼


= ⊕1≤i≤ml Fil⊕i .

For each i, Fil⊕i is the direct sum of i copies of Fil , and restriction of Tl is the “right
shift”. This means
 
    L|Vl = Tl + λl I = ⊕1≤i≤ml ( λl I                  O    )
                               ( I     λl I                 )
                               (       I     ⋱              )
                               (             ⋱     λl I     )
                               ( O                 I    λl I )i×i .

The identity I in the block is on the subspace Fil .


To explicitly write down the matrix of the linear operator, we may take a basis
αil of the subspace Fil ⊂ Vl ⊂ V . Then the disjoint union

α = ∪1≤l≤k ∪0≤j<i≤ml Tlj (αil )

is a basis of V . The fresh part of the basis is α0 below

α0 = ∪1≤l≤k ∪1≤i≤ml αil , α = α0 ∪ T (α0 ) ∪ T 2 (α0 ) ∪ · · · .

Each fresh basis vector ~v ∈ α0 belongs to some αil . The fresh basis vector generates
an L-invariant subspace (actually Tl -cyclic subspace, see Example 7.1.5)

R~v ⊕ RTl (~v ) ⊕ · · · ⊕ RTli−1 (~v ).

The matrix of the restriction of L with respect to the basis is called a Jordan block
 
$$J = \begin{pmatrix}
\lambda_l &           &        &           & O\\
1         & \lambda_l &        &           &  \\
          & 1         & \ddots &           &  \\
          &           & \ddots & \lambda_l &  \\
O         &           &        & 1         & \lambda_l
\end{pmatrix}_{i\times i}.$$

The matrix of the whole L is a direct sum of the Jordan blocks, one for each fresh
basis vector.

Theorem 7.3.5 (Jordan Canonical Form). Suppose the characteristic polynomial of


a linear operator completely factorises. Then there is a basis, such that the matrix
of the linear operator is a direct sum of Jordan blocks.
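
For concrete matrices, a computer algebra system can produce the Jordan canonical form directly. Below is a minimal sketch (assuming SymPy, whose Matrix.jordan_form in the versions we have seen returns a pair (P, J) with A = PJP^{-1}; note that SymPy places the 1's above the diagonal, while the convention above places them below):

```python
import sympy as sp

# Build a matrix with a known Jordan structure, then recover it:
# one block of size 2 for eigenvalue 4 and one block of size 1 for eigenvalue -2.
J0 = sp.Matrix([[4, 1, 0],
                [0, 4, 0],
                [0, 0, -2]])
P0 = sp.Matrix([[1, 2, 1],
                [1, 0, 1],
                [0, -1, 2]])            # any invertible matrix will do
A = P0 * J0 * P0.inv()

P, J = A.jordan_form()                  # assumed API: A == P * J * P.inv()
print(J)                                # the same block structure as J0
print(sp.simplify(P * J * P.inv() - A)) # the zero matrix
```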

Exercise 7.85. Find the Jordan canonical form of the matrix in Example 7.78.

Exercise 7.86. In terms of Jordan canonical form, what is the condition for diagonalisabil-
ity?

Exercise 7.87. Prove that for complex matrices, A and AT are similar.

Exercise 7.88. Compute the powers of a Jordan block. Then compute the exponential of
a Jordan block.

Exercise 7.89. Prove that the geometric multiplicity dim Ker(L − λl I) is the number of
Jordan blocks with eigenvalue λl .

Exercise 7.90. Prove that the minimal polynomial m(t) = (t−λ1 )m1 (t−λ2 )m2 · · · (t−λk )mk ,
where m_i is the smallest number such that Ker(L − λ_i I)^{m_i} = Ker(L − λ_i I)^{m_i+1}.

Exercise 7.91. By applying the complex Jordan canonical form, show that any real linear
operator is a direct sum of the following two kinds of real Jordan canonical forms:
$$\begin{pmatrix}
d &        &        & O\\
1 & d      &        &  \\
  & \ddots & \ddots &  \\
O &        & 1      & d
\end{pmatrix},\qquad
\begin{pmatrix}
a & -b &        &        &        &   &   & O\\
b & a  &        &        &        &   &   &  \\
1 & 0  & a      & -b     &        &   &   &  \\
0 & 1  & b      & a      &        &   &   &  \\
  &    & \ddots & \ddots & \ddots &   &   &  \\
  &    &        &        & 1      & 0 & a & -b\\
O &    &        &        & 0      & 1 & b & a
\end{pmatrix}.$$

7.3.4 Rational Canonical Form


In general, the characteristic polynomial f (t) = det(tI − L) may not completely
factorise. We need to study the restriction of L to the invariant subspace Ker pl (L)nl .
We assume L is a linear operator on V , and

p(t) = t^d + a_{d−1}t^{d−1} + · · · + a_2t^2 + a_1t + a_0        (7.3.4)

is a monic irreducible polynomial of degree d, and T = p(L) is nilpotent. Let m


be the smallest number satisfying T m = p(L)m = O. In Section 7.3.2, we construct
fresh subspaces Fm , Fm−1 , . . . , F1 inductively by (7.3.2), which we rewrite as

KerT i = Fi ⊕ Wi ⊕ KerT i−1 , (7.3.5)

with
Wi = T (Fi+1 ) ⊕ T 2 (Fi+2 ) ⊕ · · · ⊕ T m−i (Fm ). (7.3.6)
Moreover, for each i, we have direct sum and isomorphism (7.3.3).
For any polynomial f (t), we divide by p(t), and then divide the quotient by p(t),
and then repeat the process. We get

f (t) = r0 (t) + r1 (t)p(t) + r2 (t)p(t)2 + · · · , deg rj (t) < deg p(t) = d.



By T = p(L) and T i = O on Fi , for any ~v ∈ Fi , we have

f (L)(~v ) = r0 (L)(~v ) + T (r1 (L)(~v )) + T 2 (r2 (L)(~v )) + · · · + T i−1 (ri−1 (L)(~v )).

For this to be compatible with the direct sum (7.3.3), we wish to have all rj (L)(~v ) ∈
Fi . Then by the isomorphisms in (7.3.3), we may regard

f (L)(~v ) = (r0 (L)(~v ), r1 (L)(~v ), r2 (L)(~v ), . . . , ri−1 (L)(~v )) ∈ Fi⊕i .

By deg r_j(t) < d, we need to consider expressions of the form

r(L)(~v) = c_0~v + c_1L(~v) + c_2L^2(~v) + · · · + c_{d−1}L^{d−1}(~v),  deg r(t) < d.

Then our wish can be interpreted as finding a suitable “home” Ei for ~v .

Lemma 7.3.6. There are subspaces Ei that give direct sums

Fi = {r(L)(~v ) : ~v ∈ Ei , deg r(t) < d}


= Ei ⊕ L(Ei ) ⊕ L2 (Ei ) ⊕ · · · ⊕ Ld−1 (Ei ),

and they satisfy (7.3.5).

Recall Fi is inductively constructed as direct summand of Wi ⊕KerT i−1 in (7.3.5).


The direct summand is obtained by picking vectors ~v1 , ~v2 , · · · between Wi ⊕ KerT i−1
and KerT i one by one, to make sure they are always linearly independent. Then
these vectors span Fi . In fact, we should regard the picking to be 1-dimensional
subspaces R~v1 , R~v2 , · · · that give a direct sum Fi = R~v1 ⊕ R~v2 ⊕ · · · .
The refinement we wish to make in the lemma is the following. When we pick ~v1 ,
we obtain not just one vector, but d vectors ~v1 , L(~v1 ), L2 (~v1 ), . . . , Ld−1 (~v1 ). Therefore
instead of picking R~v1 , we actually pick a subspace C(~v1 ), where

C(~v ) = R~v + RL(~v ) + RL2 (~v ) + · · · + RLd−1 (~v ) = {r(L)(~v ) : deg r(t) < d}. (7.3.7)

Then the next pick ~v2 is between C(~v1 ) ⊕ Wi ⊕ KerT i−1 and KerT i , such that the
subspace C(~v2 ) we obtain is independent of C(~v1 ). In other words, we need keep
the sum C(~v1 ) + C(~v2 ) to be direct. The process continues, and Fi is a direct sum
of C(~vi ) instead of R~vi in the original construction.
Finally, we note that Ei = R~v1 ⊕ R~v2 ⊕ · · · in the lemma. Then the direct sum
in the lemma means the sum (7.3.7) that defines C(~v ) is direct.

Proof. We construct Ei inductively. First we assume Em , Em−1 , . . . , Ei+1 have been


constructed. Then we have the subspaces Fm , Fm−1 , . . . , Fi+1 as given by the lemma,
and the subspace Wi as given by (7.3.6). Next we pick ~v1 ∈ KerT i − (Wi ⊕ KerT i−1 ),
and ask whether C(~v1 ) + Wi ⊕ KerT i−1 is the whole KerT i . If the answer is no, then

we pick ~v2 ∈ KerT i − (C(~v1 ) + Wi ⊕ KerT i−1 ), and ask whether C(~v1 ) + C(~v2 ) +
Wi ⊕ KerT i−1 is the whole KerT i . The process continues until we get

KerT i = C(~v1 ) ⊕ C(~v2 ) ⊕ · · · ⊕ C(~vli ) ⊕ Wi ⊕ KerT i−1 . (7.3.8)

We need to show the sum above is direct, and the sum in (7.3.7) is also direct.
We show that the sum (7.3.8) is direct by induction. Suppose we already have the direct
sum
H = C(~v1 ) ⊕ C(~v2 ) ⊕ · · · ⊕ C(~vl−1 ) ⊕ Wi ⊕ KerT i−1 . (7.3.9)
Then for the next pick w ~ = ~vl ∈ KerT i − H, we need to argue that C(w) ~ + H is
direct.
The key here is that H is L-invariant. By T L = p(L)L = Lp(L) = LT , we know
T i−1 (~x) = ~0 =⇒ T i−1 (L(~x)) = L(T i−1 (~x)) = L(~0) = ~0. This shows KerT i−1 is
L-invariant. For Wi , by (7.3.6), we consider applying L to T k−i (Fk ), i < k ≤ m. By
the inductive assumption, we have

Fk = Ek + L(Ek ) + L2 (Ek ) + · · · + Ld−1 (Ek ),


L(Fk ) = L(Ek ) + L2 (Ek ) + · · · + Ld−1 (Ek ) + Ld (Ek ) ⊂ Fk + Ld (Ek ),
L(T k−i (Fk )) = T k−i (L(Fk )) ⊂ T k−i (Fk ) + T k−i (Ld (Ek )).

By T k−i (Fk ) ⊂ Wi ⊂ H, the problem L(T k−i (Fk )) ⊂ H is reduced to T k−i (Ld (Ek )) ⊂
H. Since p(t) is a monic polynomial of degree d, we have p(t) = r(t) + td with
deg r(t) < d. Then Ld = −r(L) + p(L) = −r(L) + T , and for any ~x ∈ Ek , we have

Ld (~x) = −r(L)(~x) + T (~x) ∈ Fk + T (Ek ) ⊂ Fk + T (KerT k ),


T k−i (Ld (~x)) ∈ T k−i (Fk ) + T k−i+1 (KerT k ) ⊂ Wi + KerT i−1 ⊂ H.

For C(~vk ), 1 ≤ k < l, we have the similar argument

L(C(~vk )) = RL(~vk ) + RL2 (~vk ) + · · · + RLd−1 (~vk ) + RLd (~vk ) ⊂ C(~vk ) + RLd (~vk ),
Ld (~vk ) = −r(L)(~vk ) + T (~vk ) ∈ C(~vk ) + T (KerT i ) ⊂ C(~vk ) + KerT i−1 ⊂ H.

Next we argue that the sum C(w~) ⊕ H is direct. By Exercise 3.56, this means

r(L)(w~) ∈ H, deg r(t) < d  =⇒  r(L)(w~) = ~0.

If r(t) 6= 0, then by deg r(t) < deg p(t) and p(t) irreducible, we know r(t) and p(t)m
are coprime. Therefore we have s(t)r(t) + q(t)p(t)m = 1 for some polynomials s(t)
and q(t). Then by p(L)m = T m = O, we get

w~ = s(L)r(L)(w~) + q(L)p(L)^m(w~) = s(L)r(L)(w~).

Since r(L)(w~) ∈ H, and H is L-invariant, we get s(L)r(L)(w~) ∈ H. This contradicts
w~ ∈ KerT^i − H. Therefore r(t) = 0, and r(L)(w~) = ~0. This proves the direct sum
C(w~) ⊕ H.

Next, we argue the sum in (7.3.7) is also direct. This means

r(L)(~v ) = c0~v + c1 L(~v ) + c2 L2 (~v ) + · · · + cd−1 Ld−1 (~v ) = ~0


=⇒ c0~v = c1 L(~v ) = c2 L2 (~v ) = · · · = cd−1 Ld−1 (~v ) = ~0.

Again, for r(t) ≠ 0, we use s(t)r(t) + q(t)p(t)^m = 1 as in the argument for C(w~) ⊕ H to get

~v = s(L)r(L)(~v) + q(L)T^m(~v) = s(L)(~0) + q(L)(~0) = ~0.

This gives the implication we want.


Having argued that all sums are direct, we let

Ei = R~v1 ⊕ R~v2 ⊕ · · · ⊕ R~vli ,


Fi = Ei ⊕ L(Ei ) ⊕ L2 (Ei ) ⊕ · · · ⊕ Ld−1 (Ei ) = C(~v1 ) ⊕ C(~v2 ) ⊕ · · · ⊕ C(~vli ).

Then (7.3.8) becomes (7.3.5).

By Lemma 7.3.6, the whole space is a direct sum

V = ⊕_{0≤j<i≤m} T^j(F_i) = ⊕_{0≤j<i≤m, 0≤k<d} T^jL^k(E_i) ≅ ⊕_{1≤i≤m} E_i^{⊕di}.

The last isomorphism is given by T^jL^k(E_i) = p(L)^jL^k(E_i) ≅ E_i for all 0 ≤ j < i and
0 ≤ k < d. In other words, any R~v ⊂ E_i gives di copies of R~v in the decomposition
above

~v , L(~v ), . . . , Ld−1 (~v ),


T (~v ), T L(~v ), . . . , T Ld−1 (~v ),
......,
T i−1 (~v ), T i−1 L(~v ), . . . , T i−1 Ld−1 (~v ).

The operator L shifts each row to the right. Since L^d = p(L) − a_0I − a_1L − · · · − a_{d−1}L^{d−1}
for the irreducible polynomial p in (7.3.4), the application of L to the last vector of each row is the
following:

L(T^jL^{d−1}(~v)) = T^jL^d(~v) = T^j(p(L)(~v) − a_0~v − a_1L(~v) − · · · − a_{d−1}L^{d−1}(~v))
                 = T^{j+1}(~v) − a_0T^j(~v) − a_1T^jL(~v) − · · · − a_{d−1}T^jL^{d−1}(~v).

Therefore for each i, we get the block form of L on ⊕_{0≤j<i, 0≤k<d} T^jL^k(E_i) ≅ E_i^{⊕di}:
$$L|_{E_i^{\oplus di}} =
\begin{pmatrix}
O &        &        & -a_0 I     &   &        &        &            &        &   &        &        &            \\
I & O      &        & -a_1 I     &   &        &        &            &        &   &        &        &            \\
  & \ddots & \ddots & \vdots     &   &        &        &            &        &   &        &        &            \\
  &        & I      & -a_{d-1} I &   &        &        &            &        &   &        &        &            \\
  &        &        & I          & O &        &        & -a_0 I     &        &   &        &        &            \\
  &        &        &            & I & O      &        & -a_1 I     &        &   &        &        &            \\
  &        &        &            &   & \ddots & \ddots & \vdots     &        &   &        &        &            \\
  &        &        &            &   &        & I      & -a_{d-1} I &        &   &        &        &            \\
  &        &        &            &   &        &        & I          & \ddots &   &        &        &            \\
  &        &        &            &   &        &        &            & \ddots & O &        &        & -a_0 I     \\
  &        &        &            &   &        &        &            &        & I & O      &        & -a_1 I     \\
  &        &        &            &   &        &        &            &        &   & \ddots & \ddots & \vdots     \\
  &        &        &            &   &        &        &            &        &   &        & I      & -a_{d-1} I
\end{pmatrix}.$$

The identity blocks are over E_i, and the whole L has the decomposition

L = L|_{E_m^{⊕dm}} ⊕ L|_{E_{m−1}^{⊕d(m−1)}} ⊕ · · · ⊕ L|_{E_1^{⊕d}}.
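
For a concrete irreducible p(t) the block matrix above is easy to assemble. Below is a sketch for the simplest case dim E_i = 1, so each identity block is the scalar 1 (the function name and the coefficient format are ours, not from the text):

```python
import numpy as np

def rational_block(a, i):
    # Matrix of L on the d*i copies of E when dim E = 1: companion blocks of
    # p(t) = t^d + a[d-1] t^(d-1) + ... + a[0] on the diagonal, plus a single 1
    # linking the last column of each block to the first row of the next block
    # (the T^{j+1} term in the formula above).
    d = len(a)
    M = np.zeros((d * i, d * i))
    for j in range(i):
        for k in range(1, d):
            M[j*d + k, j*d + k - 1] = 1.0       # subdiagonal 1's of the companion block
        for k in range(d):
            M[j*d + k, j*d + d - 1] = -a[k]     # last column: -a_0, ..., -a_{d-1}
        if j + 1 < i:
            M[(j + 1)*d, j*d + d - 1] = 1.0     # corner 1 into the next block
    return M

# p(t) = t^2 + 1 (irreducible over R) and i = 2 gives a 4x4 block
print(rational_block([1.0, 0.0], 2))
```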

Finally, the discussion so far is only about one irreducible factor in the minimal
polynomial of a general linear operator. In general, the minimal polynomial of a lin-
ear operator L is p1 (t)m1 p2 (t)m2 · · · pk (t)mk for distinct monic irreducible polynomials
p1 (t), p2 (t), . . . , pk (t). Then we have a direct sum

V = Ker p1 (L)m1 ⊕ Ker p2 (L)m2 ⊕ · · · ⊕ Ker pk (L)mk ,

and the corresponding decomposition

L = L|Ker p1 (L)m1 ⊕ L|Ker p2 (L)m2 ⊕ · · · ⊕ L|Ker pk (L)mk .

Then Ti = pi (L) satisfies Timi = O on Ker pi (L)mi , and we have

L|_{Ker p_i(L)^{m_i}} = L|_{E_{im_i}^{⊕d_im_i}} ⊕ L|_{E_{i(m_i−1)}^{⊕d_i(m_i−1)}} ⊕ · · · ⊕ L|_{E_{i1}^{⊕d_i}},  d_i = deg p_i(t).

Each L|_{E_{ij}^{⊕d_ij}} is given by the block matrix above.
Chapter 8

Tensor

8.1 Bilinear
8.1.1 Bilinear Map
Let U, V, W be vector spaces. A map B : V × W → U is bilinear if it is linear in V
and linear in W:
$$B(x_1\vec v_1 + x_2\vec v_2, \vec w) = x_1B(\vec v_1, \vec w) + x_2B(\vec v_2, \vec w),$$
$$B(\vec v, y_1\vec w_1 + y_2\vec w_2) = y_1B(\vec v, \vec w_1) + y_2B(\vec v, \vec w_2).$$

In case U = F, the bilinear map is a bilinear function b(~x, ~y ).


The bilinear property extends to
$$B(x_1\vec v_1 + x_2\vec v_2 + \cdots + x_m\vec v_m,\ y_1\vec w_1 + y_2\vec w_2 + \cdots + y_n\vec w_n)
= \sum_{1\le i\le m,\ 1\le j\le n} x_iy_jB(\vec v_i, \vec w_j).$$

If α = {~v1, ~v2, . . . , ~vm} and β = {w~1, w~2, . . . , w~n} are bases of V and W, then the
formula above shows that bilinear maps B are in one-to-one correspondence with
the collections of values B(~vi, w~j) on the bases. These values form an m × n "matrix"
$$[B]_{\alpha\beta} = (B_{ij}),\qquad B_{ij} = B(\vec v_i, \vec w_j).$$

In case U = F, we denote b_{ij} = b(~vi, w~j), and get
$$b(\vec x, \vec y) = \sum_{i,j} b_{ij}x_iy_j = [\vec x]_\alpha^T[b]_{\alpha\beta}[\vec y]_\beta.$$
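
In coordinates, evaluating a bilinear function is just a matrix sandwich. A small sketch (assuming NumPy; the numbers are made up for illustration):

```python
import numpy as np

# [b]_alpha,beta with entries b(v_i, w_j) for bases alpha of V (dim 2) and beta of W (dim 3)
B = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

x = np.array([2.0, -1.0])     # [x]_alpha
y = np.array([1.0, 0.0, 4.0]) # [y]_beta
print(x @ B @ y)              # b(x, y) = [x]_alpha^T [b]_alpha,beta [y]_beta
```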

We may define the linear combination of bilinear maps V × W → U in the obvious way.


This makes all the bilinear maps into a vector space Bilinear(V × W, U ). On the
other hand, we may regard a bilinear map as a linear transformation

~v 7→ B(~v , ·) : V → Hom(W, U ).


We may also regard a bilinear map as another linear transformation

~ 7→ B(·, w)
w ~ : W → Hom(V, U ).

The three viewpoints are equivalent. This means we have an isomorphism of vector
spaces

Hom(V, Hom(W, U )) ∼
= Bilinear(V × W, U ) ∼
= Hom(W, Hom(V, U )).

Example 8.1.1. In a real inner product space, the inner product is a bilinear func-
tion h·, ·i : V × V → R. The complex inner product is not bilinear because it is
conjugate linear in the second vector. The matrix of the inner product with respect
to an orthonormal basis α is [h·, ·i]αα = I.

Example 8.1.2. Recall the dual space V ∗ = Hom(V, F) of linear functionals on an


F-vector space V . The evaluation pairing

b(~x, l) = l(~x) : V × V ∗ → F

is a bilinear function. The corresponding map for the second vector V ∗ → Hom(V, F)
is the identity. The corresponding map for the first vector V → Hom(V ∗ , F) = V ∗∗
is ~v 7→ ~v ∗∗ , ~v ∗∗ (l) = l(~v ).

Example 8.1.3. The cross product in R3

(x1 , x2 , x3 ) × (y1 , y2 , y3 ) = (x2 y3 − x3 y2 , x3 y1 − x1 y3 , x1 y2 − x2 y1 )

is a bilinear map, with ~ei × ~ej = ±~ek when i, j, k are distinct, and ~ei × ~ei = ~0. The
sign ± is given by the orientation of the basis {~ei , ~ej , ~ek }. The cross product also
has the alternating property ~x × ~y = −~y × ~x.

Example 8.1.4. The composition of linear transformations (S, T) 7→ S ◦ T and the product of matrices (A, B) 7→ AB are bilinear maps.

Exercise 8.1. For matching linear transformations L, show that the compositions B(L(~v), w~),
B(~v, L(w~)), L(B(~v, w~)) are still bilinear maps.

Exercise 8.2. Show that a bilinear map B : V × W → U1 ⊕ U2 is given by two bilinear maps
B1 : V × W → U1 and B2 : V × W → U2 . What if V or W is a direct sum?

Exercise 8.3. Show that a map B : V × W → Fn is bilinear if and only if each coordinate
bi : V × W → F of B is a bilinear function.

Exercise 8.4. For a linear functional l(x1 , x2 , x3 ) = a1 x1 + a2 x2 + a3 x3 , what is the bilinear


function l(~x × ~y )?

8.1.2 Bilinear Function


A bilinear function b : V × W → F is determined by its values b_{ij} = b(~vi, w~j) on
bases α = {~v1, . . . , ~vm} and β = {w~1, . . . , w~n} of V and W:
$$b(\vec x, \vec y) = \sum_{i,j} b_{ij}x_iy_j = [\vec x]_\alpha^T B[\vec y]_\beta,\qquad
[\vec x]_\alpha = (x_1,\dots,x_m),\ [\vec y]_\beta = (y_1,\dots,y_n).$$
Here the matrix of the bilinear function is
$$B = [b]_{\alpha\beta} = \big(b(\vec v_i, \vec w_j)\big)_{i,j=1}^{m,n}.$$

By b(~x, ~y ) = [~x]Tα [b]αβ [~y ]β and [~x]α0 = [I]α0 α [~x]α , we get the change of matrix caused
by the change of bases
[b]α0 β 0 = [I]Tαα0 [b]αβ [I]ββ 0 .
A bilinear function induces a linear transformation
L(~v ) = b(~v , ·) : V → W ∗ .
Conversely, a linear transformation L : V → W ∗ gives a bilinear function b(~v , w)
~ =
L(~v )(w).
~
If W is a real inner product space, then we have the induced isomorphism W ∗ ∼ =
W by Proposition 4.3.1. Combined with the linear transformation above, we get a
linear transformation, still denoted L
L: V → W∗ ∼
= W, ~v 7→ b(~v , ·) = hL(~v ), ·i.
This means
~ = hL(~v ), wi
b(~v , w) ~ for all ~v ∈ V, w
~ ∈ W.
Therefore real bilinear functions b : V × W → R are in one-to-one correspondence
with linear transformations L : V → W .
Similarly, the bilinear function also corresponds to a linear transformation
L∗ (w) ~ : W → V ∗.
~ = b(·, w)
L ∗
The reason for the notation L∗ is that the linear transformation is W ∼
= W ∗∗ −→ V ∗ ,
or the dual linear transformation up to the natural double dual isomorphism of W .
If we additionally know that V is a real inner product space, then we may combine
W → V ∗ with the isomorphism V ∗ ∼ = V by Proposition 4.3.1, to get the following
linear transformation, still denoted L∗
L∗ : W → V ∗ ∼
= V, w ~ = h·, L∗ (w)i.
~ 7→ b(·, w) ~
This means
~ = h~v , L∗ (w)i
b(~v , w) ~ for all ~v ∈ V, w
~ ∈ W.
If both V and W are real inner product spaces, then we have
b(~v , w) ~ = h~v , L∗ (w)i.
~ = hL(~v ), wi ~

Exercise 8.5. What are the matrices of bilinear functions b(L(~v ), w),
~ b(~v , L(w)),
~ l(B(~v , w))?
~

Example 8.1.5. An inner product h·, ·i : V × V → R is a dual pairing. The induced


isomorphism
V ∼= V ∗ : ~a 7→ h~a, ·i
makes V into a self dual vector space. In a self dual vector space, it makes sense to
ask for self dual basis. For inner product, a basis α is self dual if

h~vi , ~vj i = δij .

This means exactly that the basis is orthonormal.

Example 8.1.6. The evaluation pairing in Example 8.1.2 is a dual pairing. For a
basis α of V , the dual basis with respect to the evaluation pairing is the dual basis
α∗ in Section 2.4.1.

Example 8.1.7. The function space is infinite dimensional. Its dual space needs to
consider the topology, which means that the dual space consists of only continuous
linear functionals. For the vector space of power p integrable functions
For the vector space of p-th power integrable functions
$$L^p[a,b] = \Big\{f(t) : \int_a^b |f(t)|^p\,dt < \infty\Big\},\qquad p \ge 1,$$
the continuous linear functionals on L^p[a, b] are of the form
$$l(f) = \int_a^b f(t)g(t)\,dt,$$
where the function g(t) satisfies
$$\int_a^b |g(t)|^q\,dt < \infty,\qquad \frac1p + \frac1q = 1.$$

This shows that the dual space Lp [a, b]∗ of all continuous linear functionals on Lp [a, b]
is Lq [a, b]. In particular, the Hilbert space L2 [a, b] of square integrable functions is
self dual.

8.1.3 Quadratic Form


A quadratic form on a vector space V is q(~x) = b(~x, ~x), where b is some bilinear
function on V × V. Since replacing b with the symmetric bilinear function ½(b(~x, ~y) +
b(~y, ~x)) gives the same q, we will always take b to be symmetric

b(~x, ~y ) = b(~y , ~x).



Then (the symmetric) b can be recovered from q by polarisation
$$b(\vec x, \vec y) = \tfrac14\big(q(\vec x+\vec y) - q(\vec x-\vec y)\big)
= \tfrac12\big(q(\vec x+\vec y) - q(\vec x) - q(\vec y)\big).$$
The discussion assumes that 2 is invertible in the base field F. If this is not the
The discussion assumes that 2 is invertible in the base field F. If this is not the
case, then there is a subtle difference between quadratic forms and symmetric forms.
The subsequent discussion always assumes that 2 is invertible.
The matrix of q with respect to a basis α = {~v1 , . . . , ~vn } is the matrix of b with
respect to the basis, and is symmetric

[q]α = [b]αα = B = (b(~vi , ~vj )).

Then the quadratic form can be expressed in terms of the α-coordinate [~x]α =
(x1 , . . . , xn )
$$q(\vec x) = [\vec x]_\alpha^T[q]_\alpha[\vec x]_\alpha
= \sum_{1\le i\le n} b_{ii}x_i^2 + 2\sum_{1\le i<j\le n} b_{ij}x_ix_j.$$

For another basis β, we have

[q]β = [I]Tαβ [q]α [I]αβ = P T BP, P = [I]αβ .

In particular, the rank of a quadratic form is well defined

rank q = rank[q]α .

Exercise 8.6. Prove that a function is a quadratic form if and only if it is homogeneous of
second order
q(c~x) = c2 q(~x),

and satisfies the parallelogram identity

q(~x + ~y ) + q(~x − ~y ) = 2q(~x) + 2q(~y ).

Similar to the diagonalisation of linear operators, we may ask about the canonical
forms of quadratic forms. The goal is to eliminate the cross terms bij xi xj , i 6= j,
by choosing a different basis. Then the quadratic form consists of only the square
terms
q(~x) = b1 x21 + · · · + bn x2n .
We may get the canonical form by the method of completing the square. The
method is a version of Gaussian elimination, and can be applied to any base field F
in which 2 is invertible.
In terms of matrix, this means that we want to express a symmetric matrix B
as P T DP for diagonal D and invertible P .
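
The elimination can be organised as a symmetric version of Gaussian elimination on B. The following is a sketch (assuming NumPy), valid whenever every pivot encountered is nonzero; it returns D and the upper triangular change of variables P with B = PᵀDP:

```python
import numpy as np

def complete_squares(B):
    # Symmetric elimination: B = P.T @ D @ P with P upper triangular,
    # assuming each pivot work[k, k] is nonzero when it is needed.
    work = np.array(B, dtype=float)
    n = work.shape[0]
    D = np.zeros((n, n))
    P = np.eye(n)
    for k in range(n):
        pivot = work[k, k]
        D[k, k] = pivot
        if pivot == 0.0:
            continue                                  # the basic method stalls here
        P[k, k+1:] = work[k, k+1:] / pivot            # y_k = x_k + c_{k,k+1} x_{k+1} + ...
        work[k+1:, k+1:] -= np.outer(work[k+1:, k], work[k, k+1:]) / pivot
    return D, P

B = np.array([[1.0, 3.0, 1.0],
              [3.0, 13.0, 9.0],
              [1.0, 9.0, 14.0]])                      # the matrix of Example 8.1.8 below
D, P = complete_squares(B)
print(np.diag(D))                                     # [1. 4. 4.]
print(np.allclose(P.T @ D @ P, B))                    # True
```

The diagonal (1, 4, 4) reproduces the squares (x + 3y + z)², (2y + 3z)², (2z)² of Example 8.1.8, since 4(y + 1.5z)² = (2y + 3z)² and 4z² = (2z)².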

Example 8.1.8. For q(x, y, z) = x2 + 13y 2 + 14z 2 + 6xy + 2xz + 18yz, we gather
together all the terms involving x and complete the square
q = x2 + 6xy + 2xz + 13y 2 + 14z 2 + 18yz
= [x2 + 2x(3y + z) + (3y + z)2 ] + 13y 2 + 14z 2 + 18yz − (3y + z)2
= (x + 3y + z)2 + 4y 2 + 13z 2 + 12yz.
The remaining terms involve only y and z. Gathering all the terms involving y and
completing the square, we get 4y 2 + 13z 2 + 12yz = (2y + 3z)2 + 4z 2 and
q = (x + 3y + z)2 + (2y + 3z)2 + (2z)2 = u2 + v 2 + w2 .
In terms of matrix, the process gives
$$\begin{pmatrix} 1 & 3 & 1\\ 3 & 13 & 9\\ 1 & 9 & 14\end{pmatrix}
= \begin{pmatrix} 1 & 3 & 1\\ 0 & 2 & 3\\ 0 & 0 & 2\end{pmatrix}^T
\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}
\begin{pmatrix} 1 & 3 & 1\\ 0 & 2 & 3\\ 0 & 0 & 2\end{pmatrix}.$$
Geometrically, the original variables x, y, z are the coordinates with respect to the
standard basis ε. The new variables u, v, w are the coordinates with respect to a
new basis α. The two coordinates are related by
$$\begin{pmatrix} u\\ v\\ w\end{pmatrix}
= \begin{pmatrix} x+3y+z\\ 2y+3z\\ 2z\end{pmatrix}
= \begin{pmatrix} 1 & 3 & 1\\ 0 & 2 & 3\\ 0 & 0 & 2\end{pmatrix}\begin{pmatrix} x\\ y\\ z\end{pmatrix},
\qquad [I]_{\alpha\epsilon} = \begin{pmatrix} 1 & 3 & 1\\ 0 & 2 & 3\\ 0 & 0 & 2\end{pmatrix}.$$
Then the basis α is given by the columns of the matrix
$$[\alpha] = [I]_{\epsilon\alpha} = [I]_{\alpha\epsilon}^{-1}
= \begin{pmatrix} 1 & -\tfrac32 & \tfrac74\\ 0 & \tfrac12 & -\tfrac34\\ 0 & 0 & \tfrac12\end{pmatrix}.$$

Example 8.1.9. The cross terms in the quadratic form q(x, y, z) = x2 + 4y 2 + z 2 −


4xy − 8xz − 4yz can be eliminated as follows
$$\begin{aligned}
q &= [x^2 - 2x(2y+4z) + (2y+4z)^2] + 4y^2 + z^2 - 4yz - (2y+4z)^2\\
&= (x-2y-4z)^2 - 15z^2 - 20yz\\
&= (x-2y-4z)^2 - 15\big[z^2 + \tfrac43yz + (\tfrac23y)^2\big] + 15(\tfrac23y)^2\\
&= (x-2y-4z)^2 + \tfrac{20}{3}y^2 - 15(z + \tfrac23y)^2.
\end{aligned}$$
Since we do not have a y² term after the first step, we use the z² term to complete the
square in the second step.
We also note that division by 3 is used. If we cannot divide by 3, then this
means 3 = 0 in the field F. This implies −2 = 1, −4 = −1, −15 = 0, −20 = 1 in F,
and we get
$$q = (x-2y-4z)^2 - 15z^2 - 20yz = (x+y-z)^2 + yz.$$

We may further eliminate the cross terms by introducing y = u + v, z = u − v, so
that
$$q = (x+y-z)^2 + u^2 - v^2 = (x+y-z)^2 + \tfrac14(y+z)^2 - \tfrac14(y-z)^2.$$

Example 8.1.10. The quadratic form q = xy + yz has no square term. We may
eliminate the cross terms by introducing x = u + v, y = u − v, so that q = u^2 − v^2 +
uz − vz. Then we complete the squares and get
$$q = \big(u + \tfrac12z\big)^2 - \big(v + \tfrac12z\big)^2
= \tfrac14(x+y+z)^2 - \tfrac14(x-y+z)^2.$$

Example 8.1.11. The cross terms in the quadratic form

q = 4x21 + 19x22 − 4x24


− 4x1 x2 + 4x1 x3 − 8x1 x4 + 10x2 x3 + 16x2 x4 + 12x3 x4

can be eliminated as follows

$$\begin{aligned}
q &= 4[x_1^2 - x_1x_2 + x_1x_3 - 2x_1x_4] + 19x_2^2 - 4x_4^2 + 10x_2x_3 + 16x_2x_4 + 12x_3x_4\\
&= 4\Big[x_1^2 + 2x_1\big(-\tfrac12x_2 + \tfrac12x_3 - x_4\big) + \big(-\tfrac12x_2 + \tfrac12x_3 - x_4\big)^2\Big]\\
&\qquad + 19x_2^2 - 4x_4^2 + 10x_2x_3 + 16x_2x_4 + 12x_3x_4 - 4\big(-\tfrac12x_2 + \tfrac12x_3 - x_4\big)^2\\
&= 4\big(x_1 - \tfrac12x_2 + \tfrac12x_3 - x_4\big)^2 + 18\big[x_2^2 + \tfrac23x_2x_3 + \tfrac23x_2x_4\big] - x_3^2 - 8x_4^2 + 16x_3x_4\\
&= (2x_1 - x_2 + x_3 - 2x_4)^2 + 18\Big[x_2^2 + 2x_2\big(\tfrac13x_3 + \tfrac13x_4\big) + \big(\tfrac13x_3 + \tfrac13x_4\big)^2\Big]\\
&\qquad - x_3^2 - 8x_4^2 + 16x_3x_4 - 18\big(\tfrac13x_3 + \tfrac13x_4\big)^2\\
&= (2x_1 - x_2 + x_3 - 2x_4)^2 + 18\big(x_2 + \tfrac13x_3 + \tfrac13x_4\big)^2 - 3(x_3^2 - 4x_3x_4) - 10x_4^2\\
&= (2x_1 - x_2 + x_3 - 2x_4)^2 + 2(3x_2 + x_3 + x_4)^2 - 3[x_3^2 + 2x_3(-2x_4) + (-2x_4)^2] - 10x_4^2 + 3(-2x_4)^2\\
&= (2x_1 - x_2 + x_3 - 2x_4)^2 + 2(3x_2 + x_3 + x_4)^2 - 3(x_3 - 2x_4)^2 + 2x_4^2\\
&= y_1^2 + 2y_2^2 - 3y_3^2 + 2y_4^2.
\end{aligned}$$

The new variables y1, y2, y3, y4 are the coordinates with respect to the basis formed by the
columns of
$$\begin{pmatrix} 2 & -1 & 1 & -2\\ 0 & 3 & 1 & 1\\ 0 & 0 & 1 & -2\\ 0 & 0 & 0 & 1\end{pmatrix}^{-1}
= \begin{pmatrix} \tfrac12 & \tfrac16 & -\tfrac23 & -\tfrac12\\ 0 & \tfrac13 & -\tfrac13 & -1\\ 0 & 0 & 1 & 2\\ 0 & 0 & 0 & 1\end{pmatrix}.$$

Example 8.1.12. For the quadratic form q(x, y, z) = x2 +iy 2 +3z 2 +2(1+i)xy +4yz

over complex numbers C, the following eliminates the cross terms

$$\begin{aligned}
q &= [x^2 + 2(1+i)xy + ((1+i)y)^2] + iy^2 + 3z^2 + 4yz - (1+i)^2y^2\\
&= (x + (1+i)y)^2 - iy^2 + 3z^2 + 4yz\\
&= (x + (1+i)y)^2 - i\big[y^2 + 4iyz + (2iz)^2\big] + 3z^2 + i(2i)^2z^2\\
&= (x + (1+i)y)^2 - i(y + 2iz)^2 + (3-4i)z^2.
\end{aligned}$$
We may further use (picking one of the two possible complex square roots)
$$\sqrt{-i} = \sqrt{e^{-i\frac{\pi}{2}}} = e^{-i\frac{\pi}{4}} = \tfrac{1-i}{\sqrt2},\qquad
\sqrt{3-4i} = \sqrt{(2-i)^2} = 2-i,$$
to get
$$q = (x + (1+i)y)^2 + \Big(\tfrac{1-i}{\sqrt2}y + \sqrt2(1+i)z\Big)^2 + \big((2-i)z\big)^2.$$

In terms of matrix, the process gives
$$\begin{pmatrix} 1 & 1+i & 0\\ 1+i & i & 2\\ 0 & 2 & 3\end{pmatrix}
=\begin{pmatrix} 1 & 1+i & 0\\ 0 & \frac{1-i}{\sqrt2} & \sqrt2(1+i)\\ 0 & 0 & 2-i\end{pmatrix}^{T}
\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}
\begin{pmatrix} 1 & 1+i & 0\\ 0 & \frac{1-i}{\sqrt2} & \sqrt2(1+i)\\ 0 & 0 & 2-i\end{pmatrix}.$$

The new variables x + (1+i)y, (1−i)/√2·y + √2(1+i)z, (2−i)z are the coordinates with
respect to some new basis. By the row operations
$$\left(\begin{array}{ccc|ccc}
1 & 1+i & 0 & 1 & 0 & 0\\
0 & \frac{1-i}{\sqrt2} & \sqrt2(1+i) & 0 & 1 & 0\\
0 & 0 & 2-i & 0 & 0 & 1
\end{array}\right)
\xrightarrow{\ \frac{1+i}{\sqrt2}R_2,\ \frac{2+i}{5}R_3\ }
\left(\begin{array}{ccc|ccc}
1 & 1+i & 0 & 1 & 0 & 0\\
0 & 1 & 2i & 0 & \frac{1+i}{\sqrt2} & 0\\
0 & 0 & 1 & 0 & 0 & \frac{2+i}{5}
\end{array}\right)$$
$$\xrightarrow{\ R_2 - 2iR_3\ }
\left(\begin{array}{ccc|ccc}
1 & 1+i & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & \frac{1+i}{\sqrt2} & \frac{2-4i}{5}\\
0 & 0 & 1 & 0 & 0 & \frac{2+i}{5}
\end{array}\right)
\xrightarrow{\ R_1 - (1+i)R_2\ }
\left(\begin{array}{ccc|ccc}
1 & 0 & 0 & 1 & -\sqrt2\,i & \frac{-6+2i}{5}\\
0 & 1 & 0 & 0 & \frac{1+i}{\sqrt2} & \frac{2-4i}{5}\\
0 & 0 & 1 & 0 & 0 & \frac{2+i}{5}
\end{array}\right),$$
the new basis is the last three columns of the last matrix above.

Exercise 8.7. Eliminate the cross terms.

1. x2 + 4xy − 5y 2 .

2. 2x2 + 4xy.

3. 4x21 + 4x1 x2 + 5x22 .

4. x2 + 2y 2 + z 2 + 2xy − 2xz.

5. −2u2 − v 2 − 6w2 − 4uw + 2vw.

6. x21 + x23 + 2x1 x2 + 2x1 x3 + 2x1 x4 + 2x3 x4 .



Exercise 8.8. Eliminate the cross terms in the quadratic form x2 + 2y 2 + z 2 + 2xy − 2xz
by first completing a square for terms involving z, then completing for terms involving y.

Next we study the process of completing the square in general. Let q(~x) = ~x^T B~x
for ~x ∈ R^n and a symmetric n × n matrix B. The leading principal minors of B are
the determinants of the square submatrices formed by the entries in the first k rows
and first k columns of B:
$$d_1 = b_{11},\quad d_2 = \det\begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\end{pmatrix},\quad
d_3 = \det\begin{pmatrix} b_{11} & b_{12} & b_{13}\\ b_{21} & b_{22} & b_{23}\\ b_{31} & b_{32} & b_{33}\end{pmatrix},
\quad \dots,\quad d_n = \det B.$$
If d_1 ≠ 0 (this means b_{11} is invertible in F), then eliminating all the cross terms
involving x_1 gives
$$\begin{aligned}
q(\vec x) &= b_{11}\Big[x_1^2 + 2x_1\tfrac{1}{b_{11}}(b_{12}x_2 + \cdots + b_{1n}x_n) + \tfrac{1}{b_{11}^2}(b_{12}x_2 + \cdots + b_{1n}x_n)^2\Big]\\
&\qquad + b_{22}x_2^2 + \cdots + b_{nn}x_n^2 + 2b_{23}x_2x_3 + 2b_{24}x_2x_4 + \cdots + 2b_{(n-1)n}x_{n-1}x_n\\
&\qquad - \tfrac{1}{b_{11}}(b_{12}x_2 + \cdots + b_{1n}x_n)^2\\
&= d_1\Big(x_1 + \tfrac{b_{12}}{d_1}x_2 + \cdots + \tfrac{b_{1n}}{d_1}x_n\Big)^2 + q_2(\vec x_2).
\end{aligned}$$

Here q2 is a quadratic form not involving x1 . In other words, it is a quadratic form


of the truncated vector ~x2 = (x2 , . . . , xn ). The symmetric coefficient matrix B2
for q2 is obtained as follows. For all 2 ≤ i ≤ n, we apply the column operation
Ri − bb11
1i
R1 to B to eliminate all the entries in the first column below the first entry
 
d1 ∗
b11 . The result is a matrix ~ . In fact, we may further apply the similar row
0 B2
 
b1i d1 0
operations Ci − b11 C1 and get a symmetric matrix ~ . Since the operations
0 B2
do not change the determinant of the matrix (and all the leading principal minors),
(2) (2)
the principal minors d1 , . . . , dn−1 of B2 are related to the principal minors of B by
(2)
di+1 = d1 di .
The discussion sets up an inductive argument. To facilitate the argument, we
denote B_1 = B and d_i^{(1)} = d_i, and get d_i^{(2)} = d_{i+1}^{(1)}/d_1^{(1)}. If d_1, . . . , d_k are all nonzero, then we
may complete the squares in k steps and obtain
$$\begin{aligned}
q(\vec x) &= d_1^{(1)}(x_1 + c_{12}x_2 + \cdots + c_{1n}x_n)^2 + d_1^{(2)}(x_2 + c_{23}x_3 + \cdots + c_{2n}x_n)^2\\
&\qquad + \cdots + d_1^{(k)}(x_k + c_{k(k+1)}x_{k+1} + \cdots + c_{kn}x_n)^2 + q_{k+1}(\vec x_{k+1}).
\end{aligned}$$
The calculation of the coefficients is inspired by d_i^{(2)} = d_{i+1}^{(1)}/d_1^{(1)}:
$$d_1^{(i)} = \frac{d_2^{(i-1)}}{d_1^{(i-1)}} = \frac{d_3^{(i-2)}}{d_2^{(i-2)}} = \cdots = \frac{d_i^{(1)}}{d_{i-1}^{(1)}} = \frac{d_i}{d_{i-1}}.$$

Moreover, the coefficient of x_{k+1}² in q_{k+1} is d_1^{(k+1)} = d_{k+1}/d_k.

Proposition 8.1.1 (Lagrange–Beltrami Identity). Suppose q(~x) = ~x^T B~x is a quadratic
form of rank r, over a field in which 2 ≠ 0. If all the leading principal minors
d_1, . . . , d_r of the symmetric coefficient matrix B are nonzero, then there is an upper
triangular change of variables
$$y_i = x_i + c_{i(i+1)}x_{i+1} + \cdots + c_{in}x_n,\qquad i = 1, \dots, r,$$
such that
$$q = d_1y_1^2 + \frac{d_2}{d_1}y_2^2 + \cdots + \frac{d_r}{d_{r-1}}y_r^2.$$

Examples 8.1.9 and 8.1.10 show that the nonzero condition on the leading principal
minors may not always be satisfied. Still, the examples show that it is always
possible to eliminate the cross terms after a suitable change of variable.

Proposition 8.1.2. Any quadratic form of rank r over a field in which 2 ≠ 0
can be expressed as
$$q = b_1x_1^2 + \cdots + b_rx_r^2$$
after a suitable change of variable.

In terms of matrix, this means that any symmetric matrix B can be written as
B = P^TDP, where P is invertible and D is a diagonal matrix with exactly r nonzero
entries. For F = C, we may further get the unique canonical form by replacing x_i
with √(b_i) x_i:
$$q = x_1^2 + \cdots + x_r^2,\qquad r = \operatorname{rank} q.$$

Two quadratic forms q and q 0 are equivalent if q 0 (~x) = q(L(~x)) for some invertible
linear transformation. In terms of symmetric matrices, this means that S and P T SP
are equivalent. The unique canonical form above implies the following.

Theorem 8.1.3. Two complex quadratic forms are equivalent if and only if they
have the same rank.
For F = R, we may replace x_i with √(b_i) x_i in case b_i > 0 and with √(−b_i) x_i in case
b_i < 0. The canonical form we get is (after rearranging the order of the x_i if needed)
$$q = x_1^2 + \cdots + x_s^2 - x_{s+1}^2 - \cdots - x_r^2.$$

Theorem 8.2.4 discusses this canonical form.



8.2 Hermitian
8.2.1 Sesquilinear Function
The complex inner product is not bilinear. It is a sesquilinear function (defined for
complex vector spaces V and W ) in the following sense

s(x1~v1 + x2~v2 , w)~ = x1 s(~v1 , w) ~ + x2 s(~v2 , w), ~


s(~v , y1 w
~ 1 + y2 w
~ 2 ) = ȳ1 s(~v , w
~ 1 ) + ȳ2 s(~v , w
~ 2 ).

The sesquilinear function can be regarded as a bilinear function s : V × W̄ → C,


where W̄ is the conjugate vector space of W .
A sesquilinear function is determined by its values s_{ij} = s(~vi, w~j) on bases α =
{~v1, . . . , ~vm} and β = {w~1, . . . , w~n} of V and W:
$$s(\vec x, \vec y) = \sum_{i,j} s_{ij}x_i\bar y_j = [\vec x]_\alpha^T S\,\overline{[\vec y]_\beta},\qquad
[\vec x]_\alpha = (x_1,\dots,x_m),\ [\vec y]_\beta = (y_1,\dots,y_n).$$
Here the matrix of the sesquilinear function is
$$S = [s]_{\alpha\beta} = \big(s(\vec v_i, \vec w_j)\big)_{i,j=1}^{m,n}.$$

By s(~x, ~y ) = [~x]Tα [s]αβ [~y ]β and [~x]α0 = [I]α0 α [~x]α , we get the change of matrix caused
by the change of bases
[s]α0 β 0 = [I]Tαα0 [s]αβ [I]ββ 0 .
A sesquilinear function s : V × W → C induces a linear transformation

L(~v ) = s(~v , ·) : V → W̄ ∗ = Hom(W, C) = Hom(W̄ , C).

Here W̄ ∗ is the conjugate dual space consisting of conjugate linear functionals. We


have a one-to-one correspondence between sesquilinear functions s and linear trans-
formations L.
If we also know that W is a complex (Hermitian) inner product space, then
we have a linear isomorphism w ~ ·i : W ∼
~ 7→ hw, = W̄ ∗ , similar to Proposition 4.3.1.
Combined with the linear transformation above, we get a linear transformation, still
denoted L
L : V → W̄ ∗ ∼
= W, ~v 7→ s(~v , ·) = hL(~v ), ·i.
This means
~ = hL(~v ), wi
s(~v , w) ~ for all ~v ∈ V, w
~ ∈ W.
Therefore sesquilinear functions s : V × W → C are in one-to-one correspondence
with linear transformations L : V → W .
The sesquilinear function also induces a conjugate linear transformation

$$L^*(\vec w) = s(\cdot, \vec w) : W \to V^* = \operatorname{Hom}(V, \mathbb{C}).$$

The correspondence s 7→ L∗ is again one-to-one if V and W have finite dimensions.


If we also know V is a complex inner product space, then we have a conjugate linear
isomorphism ~v 7→ h·, ~v i : V ∼
= V ∗ , similar to Proposition 4.3.1. Combined with the
conjugate linear transformation above, we get a linear transformation, still denoted
L∗
$$L^* : W \to V^* \cong V,\qquad \vec w \mapsto s(\cdot, \vec w) = \langle\cdot, L^*(\vec w)\rangle.$$
This means
~ = h~v , L∗ (w)i
s(~v , w) ~ for all ~v ∈ V, w
~ ∈ W.
Therefore sesquilinear functions s : V × W → C are in one-to-one correspondence
with linear transformations L∗ : W → V .
If both V and W are complex inner product spaces, then we have

s(~v , w) ~ = h~v , L∗ (w)i.


~ = hL(~v ), wi ~

Exercise 8.9. Suppose s(~x, ~y ) is sesquilinear. Prove that s(~y , ~x) is also sesquilinear. What
are the linear transformations induced by s(~y , ~x)? How are the matrices of the two
sesquilinear functions related?

A sesquilinear function is a conjugate dual pairing if both L : V → W̄∗ and
L∗ : W → V∗ are isomorphisms. In fact, one isomorphism implies the other. This is
also the same as both being one-to-one, or both being onto.


The basic examples of conjugate dual pairing are the complex inner product and
the evaluation pairing
s(~x, l) = l(~x) : V × V̄ ∗ → C.
A basis α = {~v1 , . . . , ~vn } of V and a basis β = {w ~ n } of W are dual bases
~ 1, . . . , w
with respect to the dual pairing if

s(~vi , w
~ j ) = δij , or [s]αβ = I.

We may express any ~x ∈ V in terms of α

~x = s(~x, w ~ 2 )~v2 + · · · + s(~x, w


~ 1 )~v1 + s(~x, w ~ n )~vn ,

and express any ~y ∈ W in terms of β

~y = s(~v1 , ~y )w ~ 2 + · · · + s(~vn , ~y )w
~ 1 + s(~v2 , ~y )w ~ n,

For the inner product, a basis is self dual if and only if it is an orthonormal
basis. For the evaluation pairing, the dual basis α∗ = {~v1∗ , . . . , ~vn∗ } ⊂ V̄ ∗ of α =
{~v1 , . . . , ~vn } ⊂ V is given by

~vi∗ (~x) = x̄i , [~x]α = (x1 , . . . , xn ).

Exercise 8.10. Suppose a sesquilinear form s : V × W → C is a dual pairing. Suppose α


and β are bases of V and W . Prove that the following are equivalent.

1. α and β are dual bases with respect to s.

2. L takes α to β ∗ (conjugate dual basis of W̄ ∗ ).

3. L∗ takes β to α∗ (dual basis of V ∗ ).

Exercise 8.11. Prove that if α and β are dual bases with respect to dual pairing s(~x, ~y ),
then β and α are dual bases with respect to s(~y , ~x).

Exercise 8.12. Suppose α and β are dual bases with respect to a sesquilinear dual pairing
s : V × W → C. What is the relation between matrices [s]αβ , [L]β ∗ α , [L∗ ]α∗ β ?

8.2.2 Hermitian Form


A sesquilinear function s : V × V → C is Hermitian if it satisfies

s(~x, ~y ) = s(~y , ~x).

A typical example is the complex inner product. If we take ~x = ~y , then we get a


Hermitian form
q(~x) = s(~x, ~x).
Conversely, the Hermitian sesquilinear function s can be recovered from q by polar-
isation (see Exercise 6.21)

$$s(\vec x, \vec y) = \tfrac14\big(q(\vec x+\vec y) - q(\vec x-\vec y) + iq(\vec x+i\vec y) - iq(\vec x-i\vec y)\big).$$

Proposition 8.2.1. A sesquilinear function s(~x, ~y ) is Hermitian if and only if s(~x, ~x)
is always a real number.

Exercise 6.25 is a similar result.


Proof. If s is Hermitian, then s(~x, ~y ) = s(~y , ~x) implies s(~x, ~x) = s(~x, ~x), which means
s(~x, ~x) is a real number.
Conversely, suppose s(~x, ~x) is always a real number. We want to show s(~x, ~y ) =
s(~y , ~x), which is the same as s(~x, ~y ) + s(~y , ~x) is real and s(~x, ~y ) − s(~y , ~x) is imaginary.
This follows from

s(~x, ~y ) + s(~y , ~x) = s(~x + ~y , ~x + ~y ) − s(~x, ~x) − s(~y , ~y )


is(~x, ~y ) − is(~y , ~x) = s(i~x + ~y , i~x + ~y ) − s(i~x, i~x) − s(~y , ~y ).

In terms of the matrix S = [q]α = [s]αα with respect to a basis α of V , the


Hermitian property means that S ∗ = S, or S is a Hermitian matrix. For another
basis β, we have

[q]β = [I]Tαβ [q]α [I]αβ = P ∗ SP, P = [I]αβ .



Suppose V is an inner product space. Then the Hermitian form is given by a


linear operator L by q(~x) = hL(~x), ~xi. By Proposition 8.2.1, we know q(~x) is a real
number, and therefore q(~x) = hL(~x), ~xi = h~x, L(~x)i. By polarisation, we get
hL(~x), ~y i = h~x, L(~y )i for all ~x, ~y .
This means that Hermitian forms are in one-to-one correspondence with self-adjoint
operators.
By Propositions 7.2.6 and 7.2.7, L has an orthonormal basis α = {~v1 , . . . , ~vn }
of eigenvectors, with eigenvalues d1 , . . . , dn . Then we have hL(~vi ), ~vj i = hdi~vi , ~vj i =
d_iδ_{ij}. This means
$$[q]_\alpha = \begin{pmatrix} d_1 & & \\ & \ddots & \\ & & d_n\end{pmatrix}$$
is diagonal, and the expression of q in α-coordinate has no cross terms
q(~x) = d1 x1 x̄1 + · · · + dn xn x̄n = d1 |x1 |2 + · · · + dn |xn |2 .
Let λmax = max di and λmin = min di be the maximal and minimal eigenvalues
of L. Then the formula above implies (k~xk2 = |x1 |2 + · · · + |xn |2 by orthonormal
basis)
λmin k~xk2 ≤ q(~x) = hL(~x), ~xi ≤ λmax k~xk2 .
Moreover, the right equality is reached if and only if xi = 0 whenever di 6= λmax .
Since ⊕di =λ R~vi = Ker(L−λI), the right equality is reached exactly on the eigenspace
Ker(L − λmax I). Similarly, the left equality is reached exactly on the eigenspace
Ker(L − λmin I).

Proposition 8.2.2. For self-adjoint operator L on an inner product space, the max-
imal and minimal eigenvalues of L are maxk~xk=1 hL(~x), ~xi and mink~xk=1 hL(~x), ~xi.

Proof. We provide an alternative proof by using the Lagrange multiplier. We try


to find the maximum of the function q(~x) = hL(~x), ~xi = h~x, L(~x)i subject to the
constraint g(~x) = h~x, ~xi = k~xk2 = 1. By
q(~x0 + ∆~x) = hL(~x0 + ∆~x), ~x0 + ∆~xi
= hL(~x0 ), ~x0 i + hL(~x0 ), ∆~xi + hL(∆~x), ~x0 i + hL(∆~x), ∆~xi
= q(~x0 ) + hL(~x0 ), ∆~xi + h∆~x, L(~x0 )i + o(k∆~xk),
we get (the “multi-derivative” is a linear functional)
q 0 (~x0 ) = hL(~x0 ), ·i + h·, L(~x0 )i.
By the similar reason, we have
g 0 (~x0 ) = h~x0 , ·i + h·, ~x0 i.

If the maximum of q subject to the constraint g = 1 happens at ~x0 , then we have

q 0 (~x0 ) = hL(~x0 ), ·i + h·, L(~x0 )i = hλ~x0 , ·i + h·, λ~x0 i = λg 0 (~x0 )

for a real number λ. Let ~v = L(~x0 )−λ~x0 . Then the equality means h~v , ·i+h·, ~v i = 0.
Taking the variable · to be ~v , we get 2k~v k2 = 0, or ~v = ~0. This proves that
L(~x0 ) = λ~x0 . Moreover, the maximum is

max q(~x) = q(~x0 ) = hL(~x0 ), ~x0 i = hλ~x0 , ~x0 i = λg(~x0 ) = λ.


k~
xk=1

For any other eigenvalue µ, we have L(~x) = µ~x for a unit length vector ~x and get

µ = µh~x, ~xi = hµ~x, ~xi = hL(~x), ~xi = q(~x) ≤ q(~x0 ) = λ.

This proves that λ is the maximum eigenvalue.

Example 8.2.1. In Example 7.2.1, we have the orthogonal diagonalisation of a Her-


mitian matrix
$$\begin{pmatrix} 2 & 1+i\\ 1-i & 3\end{pmatrix}
=\begin{pmatrix} \frac{1+i}{\sqrt3} & \frac{1}{\sqrt3}\\ -\frac{1}{\sqrt3} & \frac{1-i}{\sqrt3}\end{pmatrix}
\begin{pmatrix} 1 & 0\\ 0 & 4\end{pmatrix}
\begin{pmatrix} \frac{1+i}{\sqrt3} & \frac{1}{\sqrt3}\\ -\frac{1}{\sqrt3} & \frac{1-i}{\sqrt3}\end{pmatrix}^{-1}
=\begin{pmatrix} \frac{1-i}{\sqrt3} & -\frac{1}{\sqrt3}\\ \frac{1}{\sqrt3} & \frac{1+i}{\sqrt3}\end{pmatrix}^{*}
\begin{pmatrix} 1 & 0\\ 0 & 4\end{pmatrix}
\begin{pmatrix} \frac{1-i}{\sqrt3} & -\frac{1}{\sqrt3}\\ \frac{1}{\sqrt3} & \frac{1+i}{\sqrt3}\end{pmatrix}.$$
In terms of Hermitian form, this means
$$2x\bar x + (1+i)x\bar y + (1-i)y\bar x + 3y\bar y
= \Big|\tfrac{1}{\sqrt3}\big((1+i)x - y\big)\Big|^2 + 4\Big|\tfrac{1}{\sqrt3}\big(x + (1-i)y\big)\Big|^2.$$

Example 8.2.2. In Example 7.2.4, we found an orthogonal diagonalisation of the


symmetric matrix in Example 7.1.14
$$\begin{pmatrix} 1 & -2 & -4\\ -2 & 4 & -2\\ -4 & -2 & 1\end{pmatrix}
= \begin{pmatrix} -\frac{1}{\sqrt5} & \frac{2}{\sqrt5} & 0\\ -\frac{4}{3\sqrt5} & -\frac{2}{3\sqrt5} & \frac{\sqrt5}{3}\\ \frac{2}{3} & \frac{1}{3} & \frac{2}{3}\end{pmatrix}^{T}
\begin{pmatrix} 5 & 0 & 0\\ 0 & 5 & 0\\ 0 & 0 & -4\end{pmatrix}
\begin{pmatrix} -\frac{1}{\sqrt5} & \frac{2}{\sqrt5} & 0\\ -\frac{4}{3\sqrt5} & -\frac{2}{3\sqrt5} & \frac{\sqrt5}{3}\\ \frac{2}{3} & \frac{1}{3} & \frac{2}{3}\end{pmatrix}.$$
Here we use U^{-1} = U^T for an orthogonal matrix U. In terms of Hermitian form, this
means
$$x\bar x + 4y\bar y + z\bar z - 2x\bar y - 2y\bar x - 4x\bar z - 4z\bar x - 2y\bar z - 2z\bar y
= 5\Big|{-\tfrac{1}{\sqrt5}}x + \tfrac{2}{\sqrt5}y\Big|^2
+ 5\Big|{-\tfrac{4}{3\sqrt5}}x - \tfrac{2}{3\sqrt5}y + \tfrac{\sqrt5}{3}z\Big|^2
- 4\Big|\tfrac{2}{3}x + \tfrac{1}{3}y + \tfrac{2}{3}z\Big|^2.$$
In terms of quadratic form, this means
$$x^2 + 4y^2 + z^2 - 4xy - 8xz - 4yz
= 5\Big({-\tfrac{1}{\sqrt5}}x + \tfrac{2}{\sqrt5}y\Big)^2
+ 5\Big({-\tfrac{4}{3\sqrt5}}x - \tfrac{2}{3\sqrt5}y + \tfrac{\sqrt5}{3}z\Big)^2
- 4\Big(\tfrac{2}{3}x + \tfrac{1}{3}y + \tfrac{2}{3}z\Big)^2.$$

We remark that the quadratic form is also diagonalised by completing the square in
Example 8.1.9.

Exercise 8.13. Prove that a linear operator on a complex inner product space is self-adjoint
if and only if hL(~x), ~xi is always a real number. Compare with Proposition 7.2.6.

8.2.3 Completing the Square


In Examples 8.2.1 and 8.2.2, we use orthogonal diagonalisation of Hermitian opera-
tors to eliminate the cross terms in Hermitian forms. We may also use the method
of completing the square similar to the quadratic forms.

Example 8.2.3. For the Hermitian form in Example 8.2.1, we have


$$\begin{aligned}
&2x\bar x + (1+i)x\bar y + (1-i)y\bar x + 3y\bar y\\
&= 2\Big[x\bar x + x\,\overline{\tfrac{1-i}{2}y} + \tfrac{1-i}{2}y\,\bar x + \tfrac{1-i}{2}y\,\overline{\tfrac{1-i}{2}y}\Big] + 3y\bar y - \tfrac{|1-i|^2}{2}y\bar y\\
&= 2\Big|x + \tfrac{1-i}{2}y\Big|^2 + 2|y|^2.
\end{aligned}$$

Example 8.2.4. We complete the squares:
$$\begin{aligned}
&x\bar x - y\bar y + 2z\bar z + (1+i)x\bar y + (1-i)y\bar x + 3iy\bar z - 3iz\bar y\\
&= \big[x\bar x + (1+i)x\bar y + (1-i)y\bar x + (1-i)y\,\overline{(1-i)y}\big] - y\bar y + 2z\bar z + 3iy\bar z - 3iz\bar y - |1-i|^2 y\bar y\\
&= |x + (1-i)y|^2 - 3y\bar y + 2z\bar z + 3iy\bar z - 3iz\bar y\\
&= |x + (1-i)y|^2 - 3\big[y\bar y - iy\bar z + iz\bar y + iz\,\overline{iz}\big] + 2z\bar z + 3|i|^2 z\bar z\\
&= |x + (1-i)y|^2 - 3|y + iz|^2 + 5|z|^2.
\end{aligned}$$
In terms of Hermitian matrix, this means
$$\begin{pmatrix} 1 & 1+i & 0\\ 1-i & -1 & 3i\\ 0 & -3i & 2\end{pmatrix}
=\begin{pmatrix} 1 & 1-i & 0\\ 0 & 1 & i\\ 0 & 0 & 1\end{pmatrix}^{*}
\begin{pmatrix} 1 & 0 & 0\\ 0 & -3 & 0\\ 0 & 0 & 5\end{pmatrix}
\begin{pmatrix} 1 & 1-i & 0\\ 0 & 1 & i\\ 0 & 0 & 1\end{pmatrix}.$$
The new variables x + (1−i)y, y + iz, z are the coordinates with respect to the basis
of the columns of the inverse matrix
$$\begin{pmatrix} 1 & 1-i & 0\\ 0 & 1 & i\\ 0 & 0 & 1\end{pmatrix}^{-1}
=\begin{pmatrix} 1 & -1+i & 1+i\\ 0 & 1 & -i\\ 0 & 0 & 1\end{pmatrix}.$$
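
The completed squares can be spot-checked numerically at random complex points. A minimal sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

q = (x*np.conj(x) - y*np.conj(y) + 2*z*np.conj(z)
     + (1+1j)*x*np.conj(y) + (1-1j)*y*np.conj(x)
     + 3j*y*np.conj(z) - 3j*z*np.conj(y))

completed = abs(x + (1-1j)*y)**2 - 3*abs(y + 1j*z)**2 + 5*abs(z)**2
print(abs(q.imag) < 1e-12, np.isclose(q.real, completed))   # True True
```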

The Lagrange-Beltrami Identity (Proposition 8.1.1) remains valid for Hermitian


forms. We note that, by Exercise 7.61, all the principal minors of a Hermitian matrix
are real.

Proposition 8.2.3 (Lagrange–Beltrami Identity). Suppose q(~x) = ~x^T S ~x̄ is a Hermitian
form of rank r. If all the leading principal minors d_1, . . . , d_r of the Hermitian
coefficient matrix S are nonzero, then there is an upper triangular change of variables
$$y_i = x_i + c_{i(i+1)}x_{i+1} + \cdots + c_{in}x_n,\qquad i = 1,\dots,r,$$
such that
$$q = d_1|y_1|^2 + \frac{d_2}{d_1}|y_2|^2 + \cdots + \frac{d_r}{d_{r-1}}|y_r|^2.$$

8.2.4 Signature
For Hermitian forms (including real quadratic forms), we may use either orthogonal
diagonalisation or completing the square to reduce the form to

q(~x) = d1 |x1 |2 + · · · + dr |xr |2 .


p
Here di are real numbers, and r is the rank of q. By replacing xi with |di |xi and
rearranging the orders of the variables, we may further get the canonical form of
the Hermitian form

q(~x) = |x1 |2 + · · · + |xs |2 − |xs+1 |2 − · · · − |xr |2 .

Here s is the number of di > 0, and r−s is the number of di < 0. From the viewpoint
of orthogonal diagonalisation, we have q(~x) = hL(~x), ~xi for a self-adjoint operator
L. Then s is the number of positive eigenvalues of L, and r − s is the number of
negative eigenvalues of L.
We know the rank r is unique. The following says that s is also unique. Therefore
the canonical form of the Hermitian form is unique.

Theorem 8.2.4 (Sylvester’s Law). After eliminating the cross terms in a quadratic
form, the number of positive coefficients and the number of negative coefficients are
independent of the elimination process.

Proof. Suppose

q(~x) = |x1 |2 + · · · + |xs |2 − |xs+1 |2 − · · · − |xr |2


= |y1 |2 + · · · + |yt |2 − |yt+1 |2 − · · · − |yr |2

in terms of the coordinates with respect to bases α = {~v1 , . . . , ~vn } and basis β =
{w ~ n }.
~ 1, . . . , w

We claim that ~v1 , . . . , ~vs , w


~ t+1 , . . . , w
~ n are linearly independent. Suppose

x1~v1 + · · · + xs~vs + yt+1 w ~ n = ~0.


~ t+1 + · · · + yn w

Then
x1~v1 + · · · + xs~vs = −yt+1 w
~ t+1 − · · · − yn w
~ n.
Applying q to both sides, we get

|x1 |2 + · · · + |xs |2 = −|yt+1 |2 − · · · − |yr |2 ≤ 0.

This implies x1 = · · · = xs = 0 and yt+1 w ~ n = ~0. Since β is a basis,


~ t+1 + · · · + yn w
we get yt+1 = · · · = yn = 0. This completes the proof of the claim.
The claim implies s+(n−t) ≤ n, or s ≤ t. By symmetry, we also have t ≤ s.
The number s − (r − s), i.e., the number of positive coefficients minus the number of
negative coefficients, is called the signature of the quadratic form. The Hermitian
(quadratic) forms in Examples 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.2.1 have signatures
3, 1, 0, 2, 2.
If all the leading principal minors are nonzero, then the Lagrange-Beltrami Iden-
tity gives a way of calculating the signature by comparing the signs of leading
principal minors.
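
In floating point, the rank and signature can also be read off from the eigenvalues of the Hermitian matrix (Sylvester's Law guarantees this agrees with any elimination). A sketch (assuming NumPy, using the matrix of Example 8.2.4):

```python
import numpy as np

S = np.array([[1, 1+1j, 0],
              [1-1j, -1, 3j],
              [0, -3j, 2]])

eig = np.linalg.eigvalsh(S)        # real eigenvalues of a Hermitian matrix
pos = int(np.sum(eig > 1e-12))
neg = int(np.sum(eig < -1e-12))
print(pos + neg, pos - neg)        # rank 3, signature 1 (two positive squares, one negative)
```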
An immediate consequence of Sylvester's Law is the following analogue of Theorem 8.1.3.

Theorem 8.2.5. Two Hermitian quadratic forms are equivalent if and only if they
have the same rank and signature.

8.2.5 Positive Definite


Definition 8.2.6. Let q(~x) be a Hermitian form.

1. q is positive definite if q(~x) > 0 for any ~x 6= 0.

2. q is negative definite if q(~x) < 0 for any ~x 6= 0.

3. q is positive semi-definite if q(~x) ≥ 0 for any ~x 6= 0.

4. q is negative semi-definite if q(~x) ≤ 0 for any ~x 6= 0.

5. q is indefinite if the values of q can be positive and can also be negative.

The type of a Hermitian form can be easily determined by its canonical form. Let
s, r, n be the signature, rank, and dimension. Then we have the following correspondence:

positive definite: s = r = n;  negative definite: −s = r = n;  positive semi-definite: s = r;  negative semi-definite: −s = r;  indefinite: s ≠ r and −s ≠ r.

The Lagrange-Beltrami Identity (Proposition 8.2.3) gives another criterion.

Proposition 8.2.7 (Sylvester’s Criterion). Suppose q(~x) = ~xT S ~x¯ is a Hermitian form
of rank r and dimension n. Suppose all the leading principal minors d1 , . . . , dr of S
are nonzero.

1. q is positive semi-definite if and only if d1 , d2 , . . . , dr are all positive. If we


further have r = n, then q is positive definite.

2. q is negative semi-definite if and only if −d_1, d_2, −d_3, . . . , (−1)^r d_r are all positive.
If we further have r = n, then q is negative definite.

3. Otherwise q is indefinite.

If some d1 , . . . , dr are zero, then q cannot be positive or negative definite. The


criterion for the other possibilities is a little more complicated1 .
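
Both tests are easy to run side by side for a concrete matrix. A small sketch (assuming NumPy, using the matrix of Example 8.2.1):

```python
import numpy as np

S = np.array([[2, 1+1j],
              [1-1j, 3]])

minors = [np.linalg.det(S[:k, :k]).real for k in range(1, S.shape[0] + 1)]
print(minors)                                     # [2.0, 4.0]: all positive
print(bool(np.all(np.linalg.eigvalsh(S) > 0)))    # True: positive definite
```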

Exercise 8.14. Suppose a Hermitian form has no square term |x1|² (i.e., the coefficient
s11 = 0). Prove that the form is indefinite.

Exercise 8.15. Prove that if a quadratic form q(~x) is positive definite, then q(~x) ≥ ck~xk2
for any ~x and a constant c > 0. What is the maximum of such c?

Exercise 8.16. Suppose q and q 0 are positive definite, and a, b > 0. Prove that aq + bq 0 is
positive definite.

The types of quadratic forms can be applied to self-adjoint operators L, by


considering the Hermitian form hL(~x), ~xi. For example, L is positive definite if
hL(~x), ~xi > 0 for all ~x 6= ~0.
Using the orthogonal diagonalisation to eliminate cross terms in hL(~x), ~xi, the
type of L can be determined by its eigenvalues λi .

positive definite: all λi > 0;  negative definite: all λi < 0;  positive semi-definite: all λi ≥ 0;  negative semi-definite: all λi ≤ 0;  indefinite: some λi > 0 and some λj < 0.

Exercise 8.17. Prove that positive definite and negative definite operators are invertible.

Exercise 8.18. Suppose L and K are positive definite, and a, b > 0. Prove that aL + bK is
positive definite.

Exercise 8.19. Suppose L is positive definite. Prove that Ln is positive definite.


¹ Sylvester's Minorant Criterion, Lagrange-Beltrami Identity, and Nonnegative Definiteness, by
Sudhir R. Ghorpade and Balmohan V. Limaye, The Mathematics Student, Special Centenary
Volume (2007), 123-130.

Exercise 8.20. For any self-adjoint operator L, prove that L2 − L + I is positive definite.
What is the condition on a, b such that L2 + aL + bI is always positive definite?

Exercise 8.21. Prove that for any linear operator L, L∗ L is self-adjoint and positive semi-
definite. Moreover, if L is one-to-one, then L∗ L is positive definite.

A positive semi-definite operator has a decomposition

L = λ_1 I ⊥ · · · ⊥ λ_k I,  λ_i ≥ 0.

Then we may construct the operator √λ_1 I ⊥ · · · ⊥ √λ_k I, which satisfies the following
definition.

Definition 8.2.8. Suppose L is a positive semi-definite operator. The square root
operator √L is the positive semi-definite operator K satisfying K² = L and KL = LK.

By applying Theorem 7.2.4 to the commutative ∗-algebra (the ∗-operation is


trivial) generated by L and K, we find that L and K have simultaneous orthogonal
decompositions

L = λ_1 I ⊥ · · · ⊥ λ_k I,  K = μ_1 I ⊥ · · · ⊥ μ_k I.

Then K² = L means μ_i² = λ_i. Therefore √λ_1 I ⊥ · · · ⊥ √λ_k I is the unique operator
satisfying the definition.
Suppose L : V → W is a one-to-one linear transformation between inner product
spaces. Then L*L is a positive definite operator, and A = √(L*L) is also a positive
definite operator. Since positive definite operators are invertible, we may introduce
a linear transformation U = LA^{-1} : V → W. Then

U ∗ U = A−1 L∗ LA−1 = A−1 A2 A−1 = I.

Therefore U is an isometry. The decomposition L = U A is comparable to the polar


decomposition of complex numbers

z = e^{iθ}r,  r = |z| = √(z̄z),

and is therefore called the polar decomposition of L.
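
Numerically, both the square root and the polar decomposition come out of one spectral decomposition. A sketch for a real matrix (assuming NumPy; the random matrix is almost surely one-to-one):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((3, 3))

w, Q = np.linalg.eigh(L.T @ L)             # spectral decomposition of the positive definite L^T L
A = Q @ np.diag(np.sqrt(w)) @ Q.T          # A = sqrt(L^T L)

U = L @ np.linalg.inv(A)                   # the isometry factor
print(np.allclose(U.T @ U, np.eye(3)))     # True
print(np.allclose(U @ A, L))               # True: L = U A
```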

8.3 Multilinear
tensor of two
tensor of many
exterior algebra

8.4 Invariant of Linear Operator


The quantities we use to describe the structure of linear operators should depend
only on the linear operator itself. Suppose we define such a quantity f (A) by the
matrix A of the linear operator with respect to a basis. Since a change of basis
changes A to P −1 AP , we need to require f (P AP −1 ) = f (A). In other words, f is a
similarity invariant. Examples of invariants are the rank, the determinant and the
trace (see Exercise 8.22)
 
$$\operatorname{tr}\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix} = a_{11} + a_{22} + \cdots + a_{nn}.$$

Exercise 8.22. Prove that trAB = trBA. Then use this to show that trP AP −1 = trA.

The characteristic polynomial det(tI − L) is an invariant of L. It is a monic


polynomial of degree n = dim V and can be completely decomposed using complex
roots

$$\begin{aligned}
\det(tI - L) &= t^n - \sigma_1 t^{n-1} + \sigma_2 t^{n-2} - \cdots + (-1)^{n-1}\sigma_{n-1}t + (-1)^n\sigma_n\\
&= (t-\lambda_1)^{n_1}(t-\lambda_2)^{n_2}\cdots(t-\lambda_k)^{n_k},\qquad n_1 + n_2 + \cdots + n_k = n\\
&= (t-d_1)(t-d_2)\cdots(t-d_n).
\end{aligned}$$

Here λ1 , λ2 , . . . , λk are all the distinct eigenvalues, and nj is the algebraic multiplicity
of λj . Moreover, d1 , d2 , . . . , dn are all the eigenvalues repeated in their multiplici-
ties (i.e., λ1 repeated n1 times, λ2 repeated n2 times, etc.). The (unordered) set
{d1 , d2 , . . . , dn } of all roots of the characteristic polynomial is the spectrum of L.
Knowing the polynomial is the same as knowing the coefficients σ1, σ2, . . . , σn. The
coefficients σ1, σ2, . . . , σn determine the spectrum {d1, d2, . . . , dn} by finding the roots. Conversely,
the spectrum determines the polynomial by Vieta's formula
$$\sigma_k = \sum_{1\le i_1<i_2<\cdots<i_k\le n} d_{i_1}d_{i_2}\cdots d_{i_k}.$$

Two special cases are the trace

σ1 = d1 + d2 + · · · + dn = trL,

and the determinant


σn = d1 d2 · · · dn = det L.
The general formula of σj in terms of L is given in Exercise 7.33.
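
Numerically, the coefficients σ_j can be read off from the characteristic polynomial. A sketch (assuming NumPy, whose np.poly of a square matrix returns the coefficients of det(tI − A); the sample matrix is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

c = np.poly(A)                       # [1, -sigma_1, sigma_2, -sigma_3] for a 3x3 matrix
print(np.trace(A), -c[1])            # sigma_1 = trace: 6.0 6.0
print(np.linalg.det(A), -c[3])       # sigma_3 = det:   7.0 7.0 (approximately)
```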

8.4.1 Symmetric Function


Suppose a function f (A) of square matrices A is an invariant of linear operators. If
A is diagonalisable, then
 
$$A = PDP^{-1},\qquad D=\begin{pmatrix} d_1 & & O\\ & \ddots & \\ O & & d_n\end{pmatrix},\qquad
f(A) = f(PDP^{-1}) = f(D).$$

Therefore f (A) is actually a function of the spectrum {d1 , d2 , . . . , dn }. Note that


the order does not affect the value because exchanging the order of the d_j is the same
as exchanging columns of P. This shows that the invariant is really a function
f (d1 , d2 , . . . , dn ) satisfying

f (di1 , di2 , . . . , din ) = f (d1 , d2 , . . . , dn )

for any permutation (i1 , i2 , . . . , in ) of (1, 2, . . . , n). These are symmetric functions.
The functions σ1 , σ2 , . . . , σn given by Vieta’s formula are symmetric. Since the
spectrum (i.e., unordered set of possibly repeated numbers) {d1 , d2 , . . . , dn } is the
same as the “polynomial” σ1 , σ2 , . . . , σn , symmetric functions are the same as func-
tions of σ1 , σ2 , . . . , σn

f (d1 , d2 , . . . , dn ) = g(σ1 , σ2 , . . . , σn ).

For this reason, we call σ1 , σ2 , . . . , σn elementary symmetric functions.


For example, d31 + d32 + d33 is clearly symmetric, and we have

d31 + d32 + d33 = (d1 + d2 + d3 )3 − 3(d1 + d2 + d3 )(d1 d2 + d1 d3 + d2 d3 ) + 3d1 d2 d3


= σ13 − 3σ1 σ2 + 3σ3 .

We note that σ1 , σ2 , . . . , σn are defined for n variables, and

σk (d1 , d2 , . . . , dn ) = σk (d1 , d2 , . . . , dn , 0, . . . , 0), k ≤ n.

Theorem 8.4.1. Any symmetric polynomial f (d1 , d2 , . . . , dn ) is a unique polynomial


g(σ1 , σ2 , . . . , σn ) of the elementary symmetric polynomials.

Proof. We prove by induction on n. If n = 1, then f (d1 ) is a polynomial of the


only symmetric polynomial σ1 = d1 . Suppose the theorem is proved for n − 1.
Then f˜(d1 , d2 , . . . , dn−1 ) = f (d1 , d2 , . . . , dn−1 , 0) is a symmetric polynomial of n − 1
variables. By induction, we have f˜(d1 , d2 . . . , dn−1 ) = g̃(σ̃1 , σ̃2 , . . . , σ̃n−1 ) for a poly-
nomial g̃ and elementary symmetric polynomials σ̃1 , σ̃2 , . . . , σ̃n−1 of d1 , d2 , . . . , dn−1 .
Now consider

h(d1 , d2 . . . , dn ) = f (d1 , d2 . . . , dn ) − g̃(σ1 , σ2 , . . . , σn−1 ),



where σ1 , σ2 , . . . , σn−1 are the elementary symmetric polynomials of d1 , d2 , . . . , dn .


The polynomial h(d1 , d2 , . . . , dn ) is still symmetric. By σk (d1 , d2 , . . . , dn−1 , 0) =
σ̃k (d1 , d2 , . . . , dn−1 ), we have

h(d1 , d2 , . . . , dn−1 , 0) = f (d1 , d2 , . . . , dn−1 , 0) − g̃(σ̃1 , σ̃2 , . . . , σ̃n−1 ) = 0.

This means that all the monomial terms of h(d1 , d2 , . . . , dn ) have a dn factor. By
symmetry, all the monomial terms of h(d1 , d2 , . . . , dn ) also have a dj factor for every
j. Therefore all the monomial terms of h(d1 , d2 , . . . , dn ) have σn = d1 d2 · · · dn factor.
This implies that
h(d1 , d2 , . . . , dn ) = σn k(d1 , d2 , . . . , dn ),
for a symmetric polynomial k(d1 , d2 , . . . , dn ). Since k(d1 , d2 , . . . , dn ) has strictly
lower total degree than f , a double induction on the total degree of h can be used.
This means that we may assume k(d1 , d2 , . . . , dn ) = k̃(σ1 , σ2 , . . . , σn ) for a polyno-
mial k̃. Then we get

f (d1 , d2 , . . . , dn ) = g̃(σ1 , σ2 , . . . , σn−1 ) + σn k̃(σ1 , σ2 , . . . , σn ).

For the uniqueness, we need to prove that, if g(σ1 , σ2 , . . . , σn ) = 0 as a poly-


nomial of d1 , d2 , . . . , dn , then g = 0 as a polynomial of σ1 , σ2 , . . . , σn . Again we
induct on n. By taking dn = 0, we have σ̃n = σn (d1 , d2 . . . , dn−1 , 0) = 0 and
g(σ̃1 , σ̃2 , . . . , σ̃n−1 , 0) = 0 as a polynomial of d1 , d2 , . . . , dn−1 . Then by induction,
we get g(σ̃1 , σ̃2 , . . . , σ̃n−1 , 0) = 0 as a polynomial of σ̃1 , σ̃2 , . . . , σ̃n−1 . This implies
that g(σ1 , σ2 , . . . , σn−1 , 0) = 0 as a polynomial of σ1 , σ2 , . . . , σn−1 and further implies
that g(σ1 , σ2 , . . . , σn−1 , σn ) = σn h(σ1 , σ2 , . . . , σn−1 , σn ) for some polynomial h. Since
σn ≠ 0 as a polynomial of d1, d2, . . . , dn, the assumption g(σ1, σ2, . . . , σn) = 0 as
a polynomial of d1, d2, . . . , dn implies that h(σ1, σ2, . . . , σn) = 0 as a polynomial of
d1, d2, . . . , dn. Since h has strictly lower degree than g, a further double induction on
the degree of g implies that h = 0 as a polynomial of σ1 , σ2 , . . . , σn .

Exercise 8.23 (Newton’s Identity). Consider the symmetric polynomial

sk = dk1 + dk2 + · · · + dkn .

Explain that for i = 1, 2, . . . , n, we have

dni − σ1 dn−1
i + σ2 dn−2
i − · · · + (−1)n−1 σn−1 di + (−1)n σn = 0.

Then use the equalities to derive

sn − σ1 sn−1 + σ2 sn−2 − · · · + (−1)n−1 σn−1 s1 + (−1)n nσn = 0.

This gives a recursive relation for expressing sn as a polynomial of σ1 , σ2 , . . . , σn .
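
The recursion is easy to verify numerically for a sample spectrum. A sketch (assuming NumPy; the spectrum is made up):

```python
import numpy as np
from itertools import combinations

d = np.array([2.0, -1.0, 3.0, 0.5])     # a sample spectrum d_1, ..., d_n
n = len(d)

sigma = [sum(np.prod(c) for c in combinations(d, k)) for k in range(1, n + 1)]
s = [np.sum(d**k) for k in range(1, n + 1)]

# s_n - sigma_1 s_{n-1} + sigma_2 s_{n-2} - ... + (-1)^{n-1} sigma_{n-1} s_1 + (-1)^n n sigma_n
lhs = s[n-1] + sum((-1)**k * sigma[k-1] * s[n-1-k] for k in range(1, n)) + (-1)**n * n * sigma[n-1]
print(abs(lhs) < 1e-9)                   # True
```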



The discussion so far assumes the linear operator is diagonalisable. To extend


the result to general not necessarily diagonalisable linear operators, we establish the
fact that any linear operator is a limit of diagonalisable linear operators.
First we note that if det(λI − L) has no repeated roots, and n = dim V , then we
have n distinct eigenvalues and corresponding eigenvectors. By Proposition 7.1.4,
these eigenvectors are linearly independent and therefore must be a basis of V . We
conclude the following.

Proposition 8.4.2. Any complex linear operator is the limit of a sequence of diago-
nalisable linear operators.

Proof. By Proposition 7.1.10, the proposition is the consequence of the claim that
any complex linear operator is approximated by a linear operator such that the
characteristic polynomial has no repeated root. We will prove the claim by inducting
on the dimension of the vector space. The claim is clearly true for linear operators
on 1-dimensional vector space.
By the fundamental theorem of algebra (Theorem 6.1.1), any linear operator L
has an eigenvalue λ. Let H = Ker(λI − L) be the corresponding eigenspace. Then
we have V = H ⊕ H 0 for some subspace H 0 . In the blocked form, we have
 
$$L = \begin{pmatrix} \lambda I & *\\ O & K\end{pmatrix},\qquad I : H \to H,\quad K : H' \to H'.$$

By induction on dimension, K is approximated by an operator K 0 , such that det(tI −


K 0 ) has no repeated root. Moreover, we may approximate λI by the diagonal matrix
 
λ1
T =
 .. ,

r = dim H,
.
λr

such that λi are very close to λ, are distinct, and are not roots of det(tI − K 0 ). Then
 
0 T ∗
L =
O K0

approximates L, and by Exercise 7.27, has det(tI − L0 ) = (t − λ1 ) · · · (t − λr ) det(tI −


K 0 ). By our setup, the characteristic polynomial of L0 has no repeated root.

Since polynomials are continuous, by using Proposition 8.4.2 and taking the
limit, we may extend Theorem 8.4.1 to all linear operators.

Theorem 8.4.3. Polynomial invariants of linear operators are exactly polynomial


functions of the coefficients of the characteristic polynomial.

A key ingredient in the proof of the theorem is the continuity. The theorem
cannot be applied to invariants such as the rank because it is not continuous
$$\operatorname{rank}\begin{pmatrix} a & 0\\ 0 & 0\end{pmatrix}
= \begin{cases} 1 & \text{if } a \neq 0,\\ 0 & \text{if } a = 0.\end{cases}$$

Exercise 8.24. Identify the traces of the powers L^k of a linear operator with the symmetric
functions s_k in Exercise 8.23. Then use Theorem 8.4.3 and Newton's identity to show
that polynomial invariants of linear operators are exactly polynomials of trL^k, k = 1, 2, . . . .
