Lecture Notes
Supported by a grant from MHRD
Contents

I Linear Algebra

1 Matrices
  1.1 Definition of a Matrix
    1.1.1 Special Matrices
  1.2 Operations on Matrices
    1.2.1 Multiplication of Matrices
  1.3 Some More Special Matrices
    1.3.1 Submatrix of a Matrix
    1.3.2 Block Matrices
  1.4 Matrices over Complex Numbers
  3.1.3 Subspaces
  3.1.4 Linear Combinations
  3.2 Linear Independence
  3.3 Bases
    3.3.1 Important Results
  3.4 Ordered Bases
4 Linear Transformations
  4.1 Definitions and Basic Properties
  4.2 Matrix of a linear transformation
  4.3 Rank-Nullity Theorem
  4.4 Similarity of Matrices
14 Appendix
  14.1 System of Linear Equations
  14.2 Determinant
  14.3 Properties of Determinant
  14.4 Dimension of M + N
  14.5 Proof of Rank-Nullity Theorem
  14.6 Condition for Exactness
Part I
Linear Algebra
Chapter 1
Matrices
where aij is the entry at the intersection of the ith row and jth column.
In a more concise manner, we also denote the matrix A by [aij], suppressing its order.
Remark 1.1.2 Some books also use
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
to represent a matrix.
" #
1 3 7
Let A = . Then a11 = 1, a12 = 3, a13 = 7, a21 = 4, a22 = 5, and a23 = 6.
4 5 6
A matrix having only one column is called a column vector; and a matrix with only one row is
called a row vector.
Whenever a vector is used, it should be understood from the context whether it is
a row vector or a column vector.
Definition 1.1.3 (Equality of two Matrices) Two matrices A = [aij ] and B = [bij ] having the same order
m × n are equal if aij = bij for each i = 1, 2, . . . , m and j = 1, 2, . . . , n.
In other words, two matrices are said to be equal if they have the same order and their corresponding
entries are equal.
Example" 1.1.4 The# linear system of equations 2x + 3y = 5 and 3x + 2y = 5 can be identified with the
2 3 : 5
matrix .
3 2 : 5
2. A matrix having the number of rows equal to the number of columns is called a square matrix. Thus, its order is m × m (for some m) and is represented by m only.
3. In a square matrix, A = [aij ], of order n, the entries a11 , a22 , . . . , ann are called the diagonal entries
and form the principal diagonal of A.
4. A square matrix A = [aij] is said to be a diagonal matrix if aij = 0 for i ≠ j. In other words, the non-zero entries appear only on the principal diagonal. For example, the zero matrix 0n and \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix} are a few diagonal matrices.
A diagonal matrix D of order n with the diagonal entries d1 , d2 , . . . , dn is denoted by D = diag(d1 , . . . , dn ).
If di = d for all i = 1, 2, . . . , n then the diagonal matrix D is called a scalar matrix.
5. A square matrix A = [aij] with aij = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} is called the identity matrix, denoted by In.
For example, I2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, and I3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
The subscript n is suppressed in case the order is clear from the context or if no confusion arises.
6. A square matrix A = [aij ] is said to be an upper triangular matrix if aij = 0 for i > j.
A square matrix A = [aij ] is said to be a lower triangular matrix if aij = 0 for i < j.
A square matrix A is said to be triangular if it is an upper or a lower triangular matrix.
For example, \begin{bmatrix} 2 & 1 & 4 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix} is an upper triangular matrix. An upper triangular matrix will be represented by \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}.
That is, by the transpose of an m × n matrix A, we mean a matrix of order n × m having the rows of A as its columns and the columns of A as its rows.
For example, if A = \begin{bmatrix} 1 & 4 & 5 \\ 0 & 1 & 2 \end{bmatrix} then A^t = \begin{bmatrix} 1 & 0 \\ 4 & 1 \\ 5 & 2 \end{bmatrix}.
Thus, the transpose of a row vector is a column vector and vice-versa.
Proof. Let A = [aij ], At = [bij ] and (At )t = [cij ]. Then, the definition of transpose gives
Definition 1.2.3 (Addition of Matrices) Let A = [aij ] and B = [bij ] be two m × n matrices. Then the
sum A + B is defined to be the matrix C = [cij ] with cij = aij + bij .
Note that we define the sum of two matrices only when the orders of the two matrices are the same.
Definition 1.2.4 (Multiplying a Scalar to a Matrix) Let A = [aij ] be an m × n matrix. Then for any element k ∈ R, we define kA = [k aij].
" # " #
1 4 5 5 20 25
For example, if A = and k = 5, then 5A = .
0 1 2 0 5 10
1. A + B = B + A (commutativity).
2. (A + B) + C = A + (B + C) (associativity).
3. k(ℓA) = (kℓ)A.
4. (k + ℓ)A = kA + ℓA.
Proof. Part 1.
Let A = [aij ] and B = [bij ]. Then
1. Then there exists a matrix B with A + B = 0. This matrix B is called the additive inverse of A, and is denoted by −A = (−1)A.
2. Also, for the matrix 0_{m×n}, A + 0 = 0 + A = A. Hence, the matrix 0_{m×n} is called the additive identity.
Note that in this example, while AB is defined, the product BA is not defined. However, for square
matrices A and B of the same order, both the product AB and BA are defined.
Definition 1.2.9 Two square matrices A and B are said to commute if AB = BA.
Remark 1.2.10 1. Note that if A is a square matrix of order n then AIn = In A. Also, for any d ∈ R, the matrix dIn commutes with every square matrix of order n. The matrices dIn for any d ∈ R are called scalar matrices.
2. In general, the matrix product is not commutative. For example, consider the following two matrices A = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} and B = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}. Then check that the matrix product
AB = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} = BA.
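As a quick numerical check (a NumPy sketch, not part of the notes), one can verify that the two products really differ:

    import numpy as np

    A = np.array([[1, 1],
                  [0, 0]])
    B = np.array([[1, 0],
                  [1, 0]])

    print(A @ B)                          # [[2 0]  [0 0]]
    print(B @ A)                          # [[1 1]  [1 1]]
    print(np.array_equal(A @ B, B @ A))   # False: AB != BA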
Theorem 1.2.11 Suppose that the matrices A, B and C are so chosen that the matrix multiplications are
defined.
A similar statement holds for the columns of A when A is multiplied on the right by D.
Proof. Part 1. Let A = [aij]_{m×n}, B = [bij]_{n×p} and C = [cij]_{p×q}. Then
(BC)_{kj} = \sum_{\ell=1}^{p} b_{k\ell} c_{\ell j} \quad \text{and} \quad (AB)_{i\ell} = \sum_{k=1}^{n} a_{ik} b_{k\ell}.
Therefore,
\bigl(A(BC)\bigr)_{ij} = \sum_{k=1}^{n} a_{ik} (BC)_{kj} = \sum_{k=1}^{n} a_{ik} \Bigl( \sum_{\ell=1}^{p} b_{k\ell} c_{\ell j} \Bigr)
= \sum_{k=1}^{n} \sum_{\ell=1}^{p} a_{ik} b_{k\ell} c_{\ell j} = \sum_{\ell=1}^{p} \sum_{k=1}^{n} a_{ik} b_{k\ell} c_{\ell j}
= \sum_{\ell=1}^{p} \Bigl( \sum_{k=1}^{n} a_{ik} b_{k\ell} \Bigr) c_{\ell j} = \sum_{\ell=1}^{p} (AB)_{i\ell} c_{\ell j}
= \bigl((AB)C\bigr)_{ij}.
Exercise 1.2.12 1. Let A and B be two matrices. If the matrix addition A + B is defined, then prove
that (A + B)t = At + B t . Also, if the matrix product AB is defined then prove that (AB)t = B t At .
2. Let A = [a1, a2, . . . , an] and B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}. Compute the matrix products AB and BA.
3. Let n be a positive integer. Compute A^n for the following matrices:
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}.
(a) Suppose that the matrix product AB is defined. Then the product BA need not be defined.
(b) Suppose that the matrix products AB and BA are defined. Then the matrices AB and BA can
have different orders.
(c) Suppose that the matrices A and B are square matrices of order n. Then AB and BA may or
may not be equal.
2. Let A = \begin{bmatrix} 1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \end{bmatrix}. Then A is an orthogonal matrix.
3. Let A = [aij] be an n × n matrix with aij = 1 if i = j + 1 and aij = 0 otherwise. Then A^n = 0 and A^ℓ ≠ 0 for 1 ≤ ℓ ≤ n − 1. The matrices A for which a positive integer k exists such that A^k = 0 are called nilpotent matrices. The least positive integer k for which A^k = 0 is called the order of nilpotency.
" #
1 0
4. Let A = . Then A2 = A. The matrices that satisfy the condition that A2 = A are called
0 0
idempotent matrices.
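The nilpotency and idempotency conditions above are easy to verify numerically. The NumPy sketch below (assumed available, not part of the notes) uses the sub-diagonal matrix of item 3 with n = 4 and the idempotent matrix of item 4:

    import numpy as np

    n = 4
    # a_ij = 1 if i = j + 1 and 0 otherwise: ones on the first sub-diagonal
    A = np.diag(np.ones(n - 1), k=-1)

    print(np.linalg.matrix_power(A, n - 1))   # non-zero matrix
    print(np.linalg.matrix_power(A, n))       # the zero matrix, so A is nilpotent of order n

    P = np.array([[1, 0],
                  [0, 0]])
    print(np.array_equal(P @ P, P))           # True: P is idempotent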
2. Show that the product of two lower triangular matrices is a lower triangular matrix. A similar statement
holds for upper triangular matrices.
3. Let A and B be symmetric matrices. Show that AB is symmetric if and only if AB = BA.
7. Let A be a nilpotent matrix. Show that there exists a matrix B such that B(I + A) = I = (I + A)B.
Miscellaneous Exercises
Exercise 1.3.5 1. Complete the proofs of Theorems 1.2.5 and 1.2.11.
" # " # " # " #
x1 y1 1 0 cos sin
2. Let x = , y= , A= and B = . Geometrically interpret y = Ax
x2 y2 0 1 sin cos
and y = Bx.
Then for two square matrices, A and B of the same order, show the following:
5. Show that there do not exist matrices A and B such that AB − BA = cIn for any c ≠ 0.
7. Let A be an n × n matrix such that AB = BA for all n × n matrices B. Show that A = αIn for some α ∈ R.
8. Let A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \\ 3 & 1 \end{bmatrix}. Show that there exist infinitely many matrices B such that BA = I2. Also, show that there does not exist any matrix C such that AC = I3.
AB = PH + QK.
Proof. First note that the matrices PH and QK are each of order n × p. The matrix products PH and QK are valid as the orders of the matrices P, H, Q and K are, respectively, n × r, r × p, n × (m − r) and (m − r) × p. Let P = [Pij], Q = [Qij], H = [Hij], and K = [Kij]. Then, for 1 ≤ i ≤ n and 1 ≤ j ≤ p, we have
(AB)_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj} = \sum_{k=1}^{r} a_{ik} b_{kj} + \sum_{k=r+1}^{m} a_{ik} b_{kj}
= \sum_{k=1}^{r} P_{ik} H_{kj} + \sum_{k=r+1}^{m} Q_{ik} K_{kj}
= (PH)_{ij} + (QK)_{ij} = (PH + QK)_{ij}.
Theorem 1.3.6 is very useful due to the following reasons:
2. It may be possible to block the matrix in such a way that a few blocks are either identity matrices
or zero matrices. In this case, it may be easy to handle the matrix product using the block form.
3. Or, when we want to prove results using induction, we may assume the result for r × r submatrices and then consider (r + 1) × (r + 1) submatrices, etc.
" # a b
1 2 0
For example, if A = and B = c d , Then
2 5 0
e f
" #" # " # " #
1 2 a b 0 a + 2c b + 2d
AB = + [e f ] = .
2 5 c d 0 2a + 5c 2b + 5d
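The identity AB = PH + QK can also be checked directly. Here is a small NumPy sketch (not part of the notes), using the matrices of the example with concrete numbers substituted for the symbolic entries a, . . . , f:

    import numpy as np

    A = np.array([[1, 2, 0],
                  [2, 5, 0]])
    a, b, c, d, e, f = 1, 2, 3, 4, 5, 6      # concrete values for the symbolic entries
    B = np.array([[a, b],
                  [c, d],
                  [e, f]])

    # Split A after its first two columns and B after its first two rows (r = 2)
    P, Q = A[:, :2], A[:, 2:]
    H, K = B[:2, :], B[2:, :]

    print(np.array_equal(A @ B, P @ H + Q @ K))   # True: AB = PH + QK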
If A = \begin{bmatrix} 0 & 1 & 2 \\ 3 & 1 & 4 \\ 2 & 5 & 3 \end{bmatrix}, then A can be decomposed into blocks in several ways, for instance by partitioning after the first row, or after the first column, or after the second row and second column, and so on.
Exercise 1.3.7 1. Compute the matrix product AB using block matrix multiplication for the matrices
A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} and B = \begin{bmatrix} 1 & 2 & 2 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}.
" #
P Q
2. Let A = . If P, Q, R and S are symmetric, what can you say about A? Are P, Q, R and S
R S
symmetric, when A is symmetric?
3. Let A = [aij] and B = [bij] be two matrices. Suppose a1, a2, . . . , an are the rows of A and b1, b2, . . . , bp are the columns of B. If the product AB is defined, then show that
AB = [Ab_1, Ab_2, . . . , Ab_p] = \begin{bmatrix} a_1 B \\ a_2 B \\ \vdots \\ a_n B \end{bmatrix}.
[That is, left multiplication by A is the same as multiplying each column of B by A. Similarly, right multiplication by B is the same as multiplying each row of A by B.]
Exercise 1.4.3 1. Give examples of Hermitian, skew-Hermitian and unitary matrices that have entries
with non-zero imaginary parts.
2.1 Introduction
Let us look at some examples of linear systems.
a1 x + b1 y = c1 and a2 x + b2 y = c2 ,
the set of solutions is given by the points of intersection of the two lines. There are three cases to
be considered. Each case is illustrated by an example.
Remark 2.2.2 Observe that the ith row of the augmented matrix [A b] represents the ith equation and the jth column of the coefficient matrix A corresponds to the coefficients of the jth variable xj. That is, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, the entry aij of the coefficient matrix A corresponds to the ith equation and jth variable xj.
For a system of linear equations Ax = b, the system Ax = 0 is called the associated homogeneous
system.
Definition 2.2.3 (Solution of a Linear System) A solution of the linear system Ax = b is a column vector
y with entries y1 , y2 , . . . , yn such that the linear system (2.2.1) is satisfied by substituting yi in place of xi .
2. Eliminating x from the 2nd and 3rd equations, we get the linear system
x + y + z = 3
6y + 2z = 8   (obtained by subtracting the first equation from the second equation)
6y − 5z = 1   (obtained by subtracting 4 times the first equation from the third equation)   (2.2.3)
This system and the system (2.2.2) have the same set of solutions. (why?)
3. Eliminating y from the last two equations of system (2.2.3), we get the system
x + y + z = 3
6y + 2z = 8
7z = 7   (obtained by subtracting the third equation from the second equation)   (2.2.4)
which has the same set of solutions as the system (2.2.3). (why?)
x + y + z = 3
3y + z = 4   (divide the second equation by 2)
z = 1   (divide the third equation by 7)   (2.2.5)
3. replace an equation by itself plus a constant multiple of another equation, say replace the k th equation
by k th equation plus c times the j th equation.
(compare the system (2.2.3) with (2.2.2) or the system (2.2.4) with (2.2.3).)
Observations:
1. In the above example, observe that the elementary operations helped us in getting a linear system
(2.2.5), which was easily solvable.
2. Note that at Step 1, if we interchange the first and the second equation, we get back to the linear
system from which we had started. This means the operation at Step 1, has an inverse operation.
In other words, inverse operation sends us back to the step where we had precisely started.
It will be a useful exercise for the reader to identify the inverse operations at each step in
Example 2.2.4.
So, in Example 2.2.4, the application of a finite number of elementary operations helped us to obtain
a simpler system whose solution can be obtained directly. That is, after applying a finite number of
elementary operations, a simpler linear system is obtained which can be easily solved. Note that the
three elementary operations defined above have corresponding inverse operations of the same three kinds: an interchange undoes an interchange, multiplying an equation by 1/c undoes multiplying it by c, and subtracting c times the jth equation undoes adding c times the jth equation.
Definition 2.3.2 (Equivalent Linear Systems) Two linear systems are said to be equivalent if one can be
obtained from the other by a finite number of elementary operations.
The linear systems at each step in Example 2.2.4 are equivalent to each other and also to the original
linear system.
Lemma 2.3.3 Let Cx = d be the linear system obtained from the linear system Ax = b by a single
elementary operation. Then the linear systems Ax = b and Cx = d have the same set of solutions.
Proof. We prove the result for the elementary operation the k th equation is replaced by k th equation
plus c times the j th equation. The reader is advised to prove the result for other elementary operations.
In this case, the systems Ax = b and Cx = d vary only in the kth equation. Let (α1, α2, . . . , αn) be a solution of the linear system Ax = b. Then substituting αi in place of xi in the kth and jth equations, we get
Therefore,
(a_{k1} + c a_{j1})α_1 + (a_{k2} + c a_{j2})α_2 + · · · + (a_{kn} + c a_{jn})α_n = b_k + c b_j.   (2.3.1)
But then the k th equation of the linear system Cx = d is
(a_{k1} + c a_{j1})x_1 + (a_{k2} + c a_{j2})x_2 + · · · + (a_{kn} + c a_{jn})x_n = b_k + c b_j.   (2.3.2)
Therefore, using Equation (2.3.1), (α1, α2, . . . , αn) is also a solution of the kth Equation (2.3.2).
Use a similar argument to show that if (α1, α2, . . . , αn) is a solution of the linear system Cx = d then it is also a solution of the linear system Ax = b.
Hence, we have the proof in this case.
Lemma 2.3.3 is now used as an induction step to prove the main result of this section (Theorem
2.3.4).
Theorem 2.3.4 Two equivalent systems have the same set of solutions.
Let us formalise the above section which led to Theorem 2.3.4. For solving a linear system of equa-
tions, we applied elementary operations to equations. It is observed that in performing the elementary
operations, the calculations were made on the coefficients (numbers). The variables x1 , x2 , . . . , xn
and the sign of equality (that is, = ) are not disturbed. Therefore, in place of looking at the system
of equations as a whole, we just need to work with the coefficients. These coefficients, when arranged in a rectangular array, give us the augmented matrix [A b].
Definition 2.3.5 (Elementary Row Operations) The elementary row operations are defined as:
1. interchange of two rows, say interchange the ith and j th rows, denoted Rij ;
2. multiply a non-zero constant throughout a row, say multiply the k th row by c ≠ 0, denoted Rk (c);
3. replace a row by itself plus a constant multiple of another row, say replace the k th row by k th row
plus c times the j th row, denoted Rkj (c).
Exercise 2.3.6 Find the inverse row operations corresponding to the elementary row operations that have
been defined just above.
Definition 2.3.7 (Row Equivalent Matrices) Two matrices are said to be row-equivalent if one can be
obtained from the other by a finite number of elementary row operations.
Example 2.3.8 The three matrices given below are row equivalent.
\begin{bmatrix} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix} \xrightarrow{R_{12}} \begin{bmatrix} 2 & 0 & 3 & 5 \\ 0 & 1 & 1 & 2 \\ 1 & 1 & 1 & 3 \end{bmatrix} \xrightarrow{R_1(1/2)} \begin{bmatrix} 1 & 0 & 3/2 & 5/2 \\ 0 & 1 & 1 & 2 \\ 1 & 1 & 1 & 3 \end{bmatrix}.
Whereas the matrix \begin{bmatrix} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix} is not row equivalent to the matrix \begin{bmatrix} 1 & 0 & 1 & 2 \\ 0 & 2 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix}.
y+z = 2
2x + 3z = 5
x+y+z = 3
Solution: In this case, the augmented matrix is \begin{bmatrix} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix}. The method proceeds along the following steps.
3. Add −1 times the 1st equation to the 3rd equation (or R31(−1)).
x + (3/2)z = 5/2
y + z = 2
y − (1/2)z = 1/2
with augmented matrix \begin{bmatrix} 1 & 0 & 3/2 & 5/2 \\ 0 & 1 & 1 & 2 \\ 0 & 1 & -1/2 & 1/2 \end{bmatrix}.
4. Add −1 times the 2nd equation to the 3rd equation (or R32(−1)).
x + (3/2)z = 5/2
y + z = 2
−(3/2)z = −3/2
with augmented matrix \begin{bmatrix} 1 & 0 & 3/2 & 5/2 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & -3/2 & -3/2 \end{bmatrix}.
The last equation gives z = 1, the second equation now gives y = 1. Finally the first equation gives
x = 1. Hence the set of solutions is (x, y, z)t = (1, 1, 1)t , a unique solution.
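For comparison, the same system can be solved numerically. The NumPy sketch below (not part of the notes) confirms the unique solution (1, 1, 1):

    import numpy as np

    # y + z = 2,  2x + 3z = 5,  x + y + z = 3
    A = np.array([[0.0, 1.0, 1.0],
                  [2.0, 0.0, 3.0],
                  [1.0, 1.0, 1.0]])
    b = np.array([2.0, 5.0, 3.0])

    print(np.linalg.solve(A, b))   # [1. 1. 1.]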
x+y+z = 3
x + 2y + 2z = 5
3x + 4y + 4z = 11
Solution: In this case, the augmented matrix is \begin{bmatrix} 1 & 1 & 1 & 3 \\ 1 & 2 & 2 & 5 \\ 3 & 4 & 4 & 11 \end{bmatrix} and the method proceeds as follows:
1. Add −1 times the first equation to the second equation.
x + y + z = 3
y + z = 2
3x + 4y + 4z = 11
with augmented matrix \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 3 & 4 & 4 & 11 \end{bmatrix}.
Thus, the set of solutions is (x, y, z)^t = (1, 2 − z, z)^t = (1, 2, 0)^t + z(0, −1, 1)^t, with z arbitrary. In other words, the system has an infinite number of solutions.
x+y+z = 3
x + 2y + 2z = 5
3x + 4y + 4z = 12
Solution: In this case, the augmented matrix is \begin{bmatrix} 1 & 1 & 1 & 3 \\ 1 & 2 & 2 & 5 \\ 3 & 4 & 4 & 12 \end{bmatrix} and the method proceeds as follows:
1. Add −1 times the first equation to the second equation.
x + y + z = 3
y + z = 2
3x + 4y + 4z = 12
with augmented matrix \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 3 & 4 & 4 & 12 \end{bmatrix}.
This can never hold for any value of x, y, z. Hence, the system has no solution.
Remark 2.3.13 Note that to solve a linear system, Ax = b, one needs to apply only the elementary
row operations to the augmented matrix [A b].
2. the column containing this 1 has all its other entries zero.
A matrix in the row reduced form is also called a row reduced matrix.
Example 2.4.2 1. One of the most important examples of a row reduced matrix is the n × n identity matrix, In. Recall that the (i, j)th entry of the identity matrix is
I_{ij} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases}
Definition 2.4.3 (Leading Term, Leading Column) For a row-reduced matrix, the first non-zero entry of
any row is called a leading term. The columns containing the leading terms are called the leading
columns.
Definition 2.4.4 (Basic, Free Variables) Consider the linear system Ax = b in n variables and m equa-
tions. Let [C d] be the row-reduced matrix obtained by applying the Gauss elimination method to the
augmented matrix [A b]. Then the variables corresponding to the leading columns in the first n columns of
[C d] are called the basic variables. The variables which are not basic are called free variables.
The free variables are called so as they can be assigned arbitrary values and the value of the basic
variables can then be written in terms of the free variables.
Observation: In Example 2.3.11, the solution set was given by
(x, y, z)^t = (1, 2 − z, z)^t = (1, 2, 0)^t + z(0, −1, 1)^t, with z arbitrary.
That is, we had two basic variables, x and y, and z as a free variable.
Remark 2.4.5 It is very important to observe that if there are r non-zero rows in the row-reduced form
of the matrix then there will be r leading terms. That is, there will be r leading columns. Therefore,
if there are r leading terms and n variables, then there will be r basic variables and n − r free variables.
I. Add −1 times the third equation to the second equation (or R23(−1)).
x + (3/2)z = 5/2
y = 1
z = 1
with augmented matrix \begin{bmatrix} 1 & 0 & 3/2 & 5/2 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}.
II. Add −3/2 times the third equation to the first equation (or R13(−3/2)).
x = 1
y = 1
z = 1
with augmented matrix \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}.
III. From the above matrix, we directly have the set of solution as (x, y, z)t = (1, 1, 1)t .
Definition 2.4.6 (Row Reduced Echelon Form of a Matrix) A matrix C is said to be in the row reduced
echelon form if
2. The rows consisting of all zeros come below all non-zero rows; and
3. the leading terms appear from left to right in successive rows. That is, for 1 ≤ ℓ ≤ k, let i_ℓ be the leading column of the ℓth row. Then i_1 < i_2 < · · · < i_k.
Example 2.4.7 Suppose A = \begin{bmatrix} 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix} and B = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} are in row reduced form. Then the corresponding matrices in the row reduced echelon form are, respectively, \begin{bmatrix} 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} and \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
Definition 2.4.8 (Row Reduced Echelon Matrix) A matrix which is in the row reduced echelon form is
also called a row reduced echelon matrix.
Definition 2.4.9 (Back Substitution/Gauss-Jordan Method) The procedure to get to Step II of Example
2.3.10 from Step 5 of Example 2.3.10 is called the back substitution.
The elimination process applied to obtain the row reduced echelon form of the augmented matrix is called
the Gauss-Jordan elimination.
That is, the Gauss-Jordan elimination method consists of both the forward elimination and the backward
substitution.
Method to get the row-reduced echelon form of a given matrix A
Let A be an m × n matrix. Then the following method is used to obtain the row-reduced echelon form of the matrix A.
Step 2: If all entries in the first column after the first step are zero, consider the right m × (n − 1) submatrix of the matrix obtained in step 1 and proceed as in step 1.
Else, forget the first row and first column. Start with the lower (m − 1) × (n − 1) submatrix of the matrix obtained in the first step and proceed as in step 1.
Step 3: Keep repeating this process till we reach a stage where all the entries below a particular row,
say r, are zero. Suppose at this stage we have obtained a matrix C. Then C has the following
form:
1. the first non-zero entry in each row of C is 1. These 1s are the leading terms of C
and the columns containing these leading terms are the leading columns.
2. the entries of C below the leading term are all zero.
Step 4: Now use the leading term in the rth row to make all entries in the rth leading column equal
to zero.
Step 5: Next, use the leading term in the (r 1)th row to make all entries in the (r 1)th leading
column equal to zero and continue till we come to the first leading term or column.
The final matrix is the row-reduced echelon form of the matrix A.
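In practice, the row-reduced echelon form can be computed with a computer algebra system. Below is a small Python sketch using SymPy (an assumption; SymPy is not referenced in these notes), applied to the augmented matrix of Example 2.3.10:

    from sympy import Matrix

    A = Matrix([[0, 1, 1, 2],
                [2, 0, 3, 5],
                [1, 1, 1, 3]])

    # rref() returns the row reduced echelon form together with the pivot (leading) columns
    R, pivot_columns = A.rref()
    print(R)               # Matrix([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])
    print(pivot_columns)   # (0, 1, 2)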
Remark 2.4.10 Note that the row reduction involves only row operations and proceeds from left to
right. Hence, if A is a matrix consisting of first s columns of a matrix C, then the row reduced form
of A will be the first s columns of the row reduced form of C.
The proof of the following theorem is beyond the scope of this book and is omitted.
(a) x + y + z + w = 0, x − y + z + w = 0 and x + y + 3z + 3w = 0.
(b) x + 2y + 3z = 1 and x + 3y + 2z = 1.
(c) x + y + z = 3, x + y − z = 1 and x + y + 7z = 6.
(d) x + y + z = 3, x + y − z = 1 and x + y + 4z = 6.
(e) x + y + z = 3, x + y − z = 1, x + y + 4z = 6 and x + y − 4z = 1.
1. E_{ij}, which is obtained by the application of the elementary row operation R_{ij} to the identity matrix, In. Thus, the (k, ℓ)th entry of E_{ij} is
(E_{ij})_{(k,ℓ)} = \begin{cases} 1 & \text{if } k = ℓ \text{ and } ℓ \neq i, j \\ 1 & \text{if } (k, ℓ) = (i, j) \text{ or } (k, ℓ) = (j, i) \\ 0 & \text{otherwise.} \end{cases}
2. E_k(c), which is obtained by the application of the elementary row operation R_k(c) to the identity matrix, In. The (i, j)th entry of E_k(c) is
(E_k(c))_{(i,j)} = \begin{cases} 1 & \text{if } i = j \text{ and } i \neq k \\ c & \text{if } i = j = k \\ 0 & \text{otherwise.} \end{cases}
3. E_{ij}(c), which is obtained by the application of the elementary row operation R_{ij}(c) to the identity matrix, In. The (k, ℓ)th entry of E_{ij}(c) is
(E_{ij}(c))_{(k,ℓ)} = \begin{cases} 1 & \text{if } k = ℓ \\ c & \text{if } (k, ℓ) = (i, j) \\ 0 & \text{otherwise.} \end{cases}
In particular,
E_{23} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \quad E_1(c) = \begin{bmatrix} c & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \text{and} \quad E_{23}(c) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & c \\ 0 & 0 & 1 \end{bmatrix}.
Example 2.4.15 1. Let A = \begin{bmatrix} 1 & 2 & 3 & 0 \\ 2 & 0 & 3 & 4 \\ 3 & 4 & 5 & 6 \end{bmatrix}. Then
\begin{bmatrix} 1 & 2 & 3 & 0 \\ 2 & 0 & 3 & 4 \\ 3 & 4 & 5 & 6 \end{bmatrix} \xrightarrow{R_{23}} \begin{bmatrix} 1 & 2 & 3 & 0 \\ 3 & 4 & 5 & 6 \\ 2 & 0 & 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} A = E_{23} A.
That is, interchanging two rows of the matrix A is the same as multiplying on the left by the corresponding elementary matrix. In other words, left multiplication of a matrix by an elementary matrix performs the corresponding elementary row operation.
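The same computation can be carried out in NumPy (a sketch, not part of the notes), confirming that left multiplication by E23 interchanges the second and third rows of A:

    import numpy as np

    A = np.array([[1, 2, 3, 0],
                  [2, 0, 3, 4],
                  [3, 4, 5, 6]])

    E23 = np.array([[1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]])   # identity matrix with rows 2 and 3 interchanged

    print(E23 @ A)
    # [[1 2 3 0]
    #  [3 4 5 6]
    #  [2 0 3 4]]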
2. Consider the augmented matrix [A b] = \begin{bmatrix} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix}. Then the result of the steps given below is the same as the matrix product E_{23}(-1) E_{12}(-1) E_3(1/3) E_{32}(2) E_{23} E_{21}(-2) E_{13} [A b]:
\begin{bmatrix} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{bmatrix} \xrightarrow{R_{13}} \begin{bmatrix} 1 & 1 & 1 & 3 \\ 2 & 0 & 3 & 5 \\ 0 & 1 & 1 & 2 \end{bmatrix} \xrightarrow{R_{21}(-2)} \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & -2 & 1 & -1 \\ 0 & 1 & 1 & 2 \end{bmatrix} \xrightarrow{R_{23}} \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & -2 & 1 & -1 \end{bmatrix}
\xrightarrow{R_{32}(2)} \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 3 & 3 \end{bmatrix} \xrightarrow{R_3(1/3)} \begin{bmatrix} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 1 & 1 \end{bmatrix} \xrightarrow{R_{12}(-1)} \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 1 & 1 \end{bmatrix} \xrightarrow{R_{23}(-1)} \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}.
Definition 2.4.16 The column transformations obtained by right multiplication of elementary matrices are
called elementary column operations.
Example 2.4.17 Let A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 0 & 3 \\ 3 & 4 & 5 \end{bmatrix} and consider the elementary column operation f which interchanges the second and the third columns of A. Then f(A) = \begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 0 \\ 3 & 5 & 4 \end{bmatrix} = A \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} = A E_{23}.
Exercise 2.4.18 1. Let e be an elementary row operation and let E = e(I) be the corresponding ele-
mentary matrix. That is, E is the matrix obtained from I by applying the elementary row operation e.
Show that e(A) = EA.
2. Show that the Gauss elimination method is same as multiplying by a series of elementary matrices on
the left to the augmented matrix.
Does the Gauss-Jordan method also correspond to multiplying by elementary matrices on the left?
Give reasons.
3. Let A and B be two m n matrices. Then prove that the two matrices A, B are row-equivalent if and
only if B = P A, where P is product of elementary matrices. When is this P unique?
3. no solution.
Definition 2.5.1 (Consistent, Inconsistent) A linear system is called consistent if it admits a solution
and is called inconsistent if it admits no solution.
The question arises, as to whether there are conditions under which the linear system Ax = b is
consistent. The answer to this question is in the affirmative. To proceed further, we need a few definitions
and remarks.
Recall that the row reduced echelon form of a matrix is unique and therefore, the number of non-zero
rows is a unique number. Also, note that the number of non-zero rows in either the row reduced form
or the row reduced echelon form of a matrix is the same.
Definition 2.5.2 (Row rank of a Matrix) The number of non-zero rows in the row reduced form of a
matrix is called the row-rank of the matrix.
By the very definition, it is clear that row-equivalent matrices have the same row-rank. For a matrix A,
we write row-rank (A) to denote the row-rank of A.
Example 2.5.3 1. Determine the row-rank of A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{bmatrix}.
Solution: To determine the row-rank of A, we proceed as follows.
(a) \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{bmatrix} \xrightarrow{R_{21}(-2),\, R_{31}(-1)} \begin{bmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & 1 \end{bmatrix}.
(b) \begin{bmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & 1 \end{bmatrix} \xrightarrow{R_2(-1),\, R_{32}(1)} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix}.
(c) \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix} \xrightarrow{R_3(1/2),\, R_{12}(-2)} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}.
(d) \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{R_{23}(-1),\, R_{13}(1)} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
The last matrix in Step 1d is the row reduced form of A which has 3 non-zero rows. Thus, row-rank(A) = 3.
This result can also be easily deduced from the last matrix in Step 1b.
2. Determine the row-rank of A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 0 \end{bmatrix}.
Solution: Here we have
(a) \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 0 \end{bmatrix} \xrightarrow{R_{21}(-2),\, R_{31}(-1)} \begin{bmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{bmatrix}.
(b) \begin{bmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{bmatrix} \xrightarrow{R_2(-1),\, R_{32}(1)} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}.
The last matrix has 2 non-zero rows, so row-rank(A) = 2.
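Both row-ranks can be confirmed numerically, for instance with NumPy (a sketch, not part of the original notes):

    import numpy as np

    A1 = np.array([[1, 2, 1],
                   [2, 3, 1],
                   [1, 1, 2]])
    A2 = np.array([[1, 2, 1],
                   [2, 3, 1],
                   [1, 1, 0]])

    print(np.linalg.matrix_rank(A1))   # 3
    print(np.linalg.matrix_rank(A2))   # 2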
Remark 2.5.4 Let Ax = b be a linear system with m equations and n unknowns. Then the row-reduced echelon form of A agrees with the first n columns of that of [A b], and hence row-rank(A) ≤ row-rank([A b]).
Remark 2.5.5 Consider a matrix A. After application of a finite number of elementary column oper-
ations (see Definition 2.4.16) to the matrix A, we can have a matrix, say B, which has the following
properties:
2. A column containing only 0s comes after all columns with at least one non-zero entry.
3. The first non-zero entry (the leading term) in each non-zero column moves down in successive
columns.
Definition 2.5.6 The number of non-zero rows in the row reduced form of a matrix A is called the rank of
A, denoted rank (A).
Theorem 2.5.7 Let A be a matrix of rank r. Then there exist elementary matrices E1, E2, . . . , Es and F1, F2, . . . , Fℓ such that
E_1 E_2 \cdots E_s \, A \, F_1 F_2 \cdots F_\ell = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}.
Proof. Let C be the row reduced echelon matrix obtained by applying elementary row operations to the given matrix A. As rank(A) = r, the matrix C will have the first r rows as the non-zero rows. So by Remark 2.4.5, C will have r leading columns, say i_1, i_2, . . . , i_r. Note that, for 1 ≤ s ≤ r, the i_s-th column will have 1 in the sth row and zero elsewhere.
We now apply column operations to the matrix C. Let D be the matrix obtained from C by successively interchanging the sth and i_s-th columns of C for 1 ≤ s ≤ r. Then the matrix D can be written in the form \begin{bmatrix} I_r & B \\ 0 & 0 \end{bmatrix}, where B is a matrix of appropriate size. As the (1, 1) block of D is an identity matrix, the (1, 2) block can be made the zero matrix by application of column operations to D. This gives the required result.
Exercise 2.5.8 1. Determine the ranks of the coefficient and the augmented matrices that appear in Part
1 and Part 2 of Exercise 2.4.12.
2.6.1 Example
Consider a linear system Ax = b which after the application of the Gauss-Jordan method reduces to a
matrix [C d] with
[C d] = \begin{bmatrix} 1 & 0 & 2 & 1 & 0 & 0 & 2 & 8 \\ 0 & 1 & 1 & 3 & 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
For this particular matrix [C d], we want to see the set of solutions. We start with some observations.
Observations:
1. The number of non-zero rows in C is 4. This number is also equal to the number of non-zero rows
in [C d].
2. The first non-zero entry in the non-zero rows appear in columns 1, 2, 5 and 6.
2. if ra = r = n, the solution set of the linear system has a unique n × 1 vector x0 satisfying Ax0 = b.
Remark 2.6.2 Let A be an m × n matrix and consider the linear system Ax = b. Then by Theorem 2.6.1, we see that the linear system Ax = b is consistent if and only if rank(A) = rank([A b]).
The following corollary of Theorem 2.6.1 is a very important result about the homogeneous linear
system Ax = 0.
Corollary 2.6.3 Let A be an m n matrix. Then the homogeneous system Ax = 0 has a non-trivial solution
if and only if rank(A) < n.
Proof. Suppose the system Ax = 0 has a non-trivial solution, x0. That is, Ax0 = 0 and x0 ≠ 0. Under
this assumption, we need to show that rank(A) < n. On the contrary, assume that rank(A) = n. So,
n = rank(A) = rank [A 0] = ra .
Also A0 = 0 implies that 0 is a solution of the linear system Ax = 0. Hence, by the uniqueness of the
solution under the condition r = ra = n (see Theorem 2.6.1), we get x0 = 0. A contradiction to the fact
that x0 was a given non-trivial solution.
Now, let us assume that rank(A) < n. Then
ra = rank[A 0] = rank(A) < n.
So, by Theorem 2.6.1, the solution set of the linear system Ax = 0 has an infinite number of vectors x satisfying Ax = 0. From this infinite set, we can choose any vector x0 that is different from 0. Thus, we have a solution x0 ≠ 0. That is, we have obtained a non-trivial solution x0.
We now state another important result whose proof is immediate from Theorem 2.6.1 and Corollary
2.6.3.
Proposition 2.6.4 Consider the linear system Ax = b. Then the two statements given below cannot hold
together.
2.6.3 Exercises
Exercise 2.6.6 1. For what values of c and k do the following systems have (i) no solution, (ii) a unique solution and (iii) an infinite number of solutions?
(a) x + y + z = 3, x + 2y + cz = 4, 2x + 3y + 2cz = k.
(b) x + y + z = 3, x + y + 2cz = 7, x + 2y + 3cz = k.
(c) x + y + 2z = 3, x + 2y + cz = 5, x + 2y + 4z = k.
(d) kx + y + z = 1, x + ky + z = 1, x + y + kz = 1.
(e) x + 2y − z = 1, 2x + 3y + kz = 3, x + ky + 3z = 2.
(f) x − 2y = 1, x − y + kz = 1, ky + 4z = 6.
x + 2y − 3z = a, 2x + 6y − 11z = b, x − 2y + 7z = c
is consistent.
3. Let A be an n n matrix. If the system A2 x = 0 has a non trivial solution then show that Ax = 0
also has a non trivial solution.
3. A matrix A is said to be invertible (or is said to have an inverse) if there exists a matrix B such
that AB = BA = In .
Lemma 2.7.2 Let A be an n n matrix. Suppose that there exist n n matrices B and C such that
AB = In and CA = In , then B = C.
Remark 2.7.3 1. From the above lemma, we observe that if a matrix A is invertible, then the inverse
is unique.
Theorem 2.7.4 Let A and B be two matrices with inverses A^{-1} and B^{-1}, respectively. Then
1. (A^{-1})^{-1} = A.
2. (AB)^{-1} = B^{-1} A^{-1}.
2. Show that every elementary matrix is invertible. Is the inverse of an elementary matrix, also an ele-
mentary matrix?
4. If P and Q are invertible matrices and P AQ is defined then show that rank (P AQ) = rank (A).
5. Find matrices P and Q which are products of elementary matrices such that B = P A Q, where A = \begin{bmatrix} 2 & 4 & 8 \\ 1 & 3 & 2 \end{bmatrix} and B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.
8. Let A be an m × n matrix of rank r. Then A can be written as A = BC, where both B and C have rank r and B is a matrix of size m × r and C is a matrix of size r × n.
9. Let A and B be two matrices such that AB is defined and rank(A) = rank(AB). Then show that A = ABX for some matrix X. Similarly, if BA is defined and rank(A) = rank(BA), then A = Y BA for some matrix Y. [Hint: Choose non-singular matrices P, Q and R such that P A Q = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} and P(AB)R = \begin{bmatrix} C & 0 \\ 0 & 0 \end{bmatrix}. Define X = R \begin{bmatrix} C^{-1} A_1 & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}.]
10. Let A = [aij] be an invertible matrix and let B = [p^{i-j} aij] for some nonzero real number p. Find the inverse of B.
11. If matrices B and C are invertible and the involved partitioned products are defined, then show that
\begin{bmatrix} A & B \\ C & 0 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & C^{-1} \\ B^{-1} & -B^{-1} A C^{-1} \end{bmatrix}.
B_{11} = A_{11}^{-1} + (A_{11}^{-1} A_{12}) P^{-1} (A_{21} A_{11}^{-1}), \quad B_{21} = -P^{-1} (A_{21} A_{11}^{-1}), \quad B_{12} = -(A_{11}^{-1} A_{12}) P^{-1},
and B_{22} = P^{-1}.
Theorem 2.7.7 For a square matrix A of order n, the following statements are equivalent.
1. A is invertible.
2. A is of full rank.
Proof. 1 ⟹ 2
Let, if possible, rank(A) = r < n. Then there exists an invertible matrix P (a product of elementary matrices) such that P A = \begin{bmatrix} B_1 & B_2 \\ 0 & 0 \end{bmatrix}, where B_1 is an r × r matrix. Since A is invertible, let A^{-1} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}, where C_1 is an r × n matrix. Then
P = P I_n = P(A A^{-1}) = (P A) A^{-1} = \begin{bmatrix} B_1 & B_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} B_1 C_1 + B_2 C_2 \\ 0 \end{bmatrix}.   (2.7.1)
Thus the matrix P has n − r rows as zero rows. Hence, P cannot be invertible. A contradiction to P being a product of invertible matrices. Thus, A is of full rank.
2 ⟹ 3
Suppose A is of full rank. This implies, the row reduced echelon form of A has all non-zero rows.
But A has as many columns as rows and therefore, the last row of the row reduced echelon form of A
will be (0, 0, . . . , 0, 1). Hence, the row reduced echelon form of A is the identity matrix.
3 ⟹ 4
Since A is row-equivalent to the identity matrix, there exist elementary matrices E1, E2, . . . , Ek such that A = E1 E2 · · · Ek In. That is, A is a product of elementary matrices.
4 ⟹ 1
Suppose A = E1 E2 · · · Ek, where the Ei's are elementary matrices. Since elementary matrices are invertible and a product of invertible matrices is also invertible, we get the required result.
The ideas of Theorem 2.7.7 will be used in the next subsection to find the inverse of an invertible
matrix. The idea used in the proof of the first part also gives the following important Theorem. We
repeat the proof for the sake of clarity.
Proof. Suppose that AB = In. We will prove that the matrix A is of full rank. That is, rank(A) = n.
Let, if possible, rank(A) = r < n. Then there exists an invertible matrix P (a product of elementary matrices) such that P A = \begin{bmatrix} C_1 & C_2 \\ 0 & 0 \end{bmatrix}. Let B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, where B_1 is an r × n matrix. Then
P = P I_n = P(AB) = (P A) B = \begin{bmatrix} C_1 & C_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} = \begin{bmatrix} C_1 B_1 + C_2 B_2 \\ 0 \end{bmatrix}.   (2.7.2)
Thus the matrix P has n − r rows as zero rows. So, P cannot be invertible. A contradiction to P being a product of invertible matrices. Thus, rank(A) = n. That is, A is of full rank. Hence, using Theorem 2.7.7, A is an invertible matrix. That is, BA = In as well.
Using the first part, it is clear that the matrix C in the second part, is invertible. Hence
AC = In = CA.
Remark 2.7.9 This theorem implies the following: if we want to show that a square matrix A of order
n is invertible, it is enough to show the existence of
Theorem 2.7.10 The following statements are equivalent for a square matrix A of order n.
1. A is invertible.
Proof. 1 ⟹ 2
Since A is invertible, by Theorem 2.7.7 A is of full rank. That is, for the linear system Ax = 0, the
number of unknowns is equal to the rank of the matrix A. Hence, by Theorem 2.6.1 the system Ax = 0
has a unique solution x = 0.
2 ⟹ 1
Let if possible A be non-invertible. Then by Theorem 2.7.7, the matrix A is not of full rank. Thus
by Corollary 2.6.3, the linear system Ax = 0 has infinite number of solutions. This contradicts the
assumption that Ax = 0 has only the trivial solution x = 0.
1 ⟹ 3
Since A is invertible, for every b, the system Ax = b has a unique solution x = A1 b.
3 ⟹ 1
For 1 ≤ i ≤ n, define e_i = (0, . . . , 0, 1, 0, . . . , 0)^t, with the 1 in the ith position, and consider the linear system Ax = e_i.
By assumption, this system has a solution x_i for each i, 1 ≤ i ≤ n. Define a matrix B = [x_1, x_2, . . . , x_n]. That is, the ith column of B is the solution of the system Ax = e_i. Then AB = [Ax_1, Ax_2, . . . , Ax_n] = [e_1, e_2, . . . , e_n] = In, and hence, by Theorem 2.7.8, A is invertible.
Exercise 2.7.11 1. Show that a triangular matrix A is invertible if and only if each diagonal entry of A
is non-zero.
Corollary 2.7.12 Let A be an invertible n n matrix. Suppose that a sequence of elementary row-operations
reduces A to the identity matrix. Then the same sequence of elementary row-operations when applied to the
identity matrix yields A1 .
Proof. Let A be a square matrix of order n. Also, let E1, E2, . . . , Ek be a sequence of elementary row operations such that E1 E2 · · · Ek A = In. Then E1 E2 · · · Ek In = A^{-1}. This implies A^{-1} = E1 E2 · · · Ek.
Summary: Let A be an n n matrix. Apply the Gauss-Jordan method to the matrix [A In ].
Suppose the row reduced echelon form of the matrix [A In] is [B C]. If B = In, then A^{-1} = C; otherwise A is not invertible.
Example 2.7.13 Find the inverse of the matrix \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix} using the Gauss-Jordan method.
Solution: Consider the matrix \begin{bmatrix} 2 & 1 & 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 1 & 1 & 2 & 0 & 0 & 1 \end{bmatrix}. A sequence of steps in the Gauss-Jordan method are:
1. R1(1/2): \begin{bmatrix} 1 & 1/2 & 1/2 & 1/2 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 1 & 1 & 2 & 0 & 0 & 1 \end{bmatrix}
2. R21(−1), R31(−1): \begin{bmatrix} 1 & 1/2 & 1/2 & 1/2 & 0 & 0 \\ 0 & 3/2 & 1/2 & -1/2 & 1 & 0 \\ 0 & 1/2 & 3/2 & -1/2 & 0 & 1 \end{bmatrix}
3. R2(2/3): \begin{bmatrix} 1 & 1/2 & 1/2 & 1/2 & 0 & 0 \\ 0 & 1 & 1/3 & -1/3 & 2/3 & 0 \\ 0 & 1/2 & 3/2 & -1/2 & 0 & 1 \end{bmatrix}
4. R32(−1/2): \begin{bmatrix} 1 & 1/2 & 1/2 & 1/2 & 0 & 0 \\ 0 & 1 & 1/3 & -1/3 & 2/3 & 0 \\ 0 & 0 & 4/3 & -1/3 & -1/3 & 1 \end{bmatrix}
5. R3(3/4): \begin{bmatrix} 1 & 1/2 & 1/2 & 1/2 & 0 & 0 \\ 0 & 1 & 1/3 & -1/3 & 2/3 & 0 \\ 0 & 0 & 1 & -1/4 & -1/4 & 3/4 \end{bmatrix}
6. R23(−1/3), R13(−1/2): \begin{bmatrix} 1 & 1/2 & 0 & 5/8 & 1/8 & -3/8 \\ 0 & 1 & 0 & -1/4 & 3/4 & -1/4 \\ 0 & 0 & 1 & -1/4 & -1/4 & 3/4 \end{bmatrix}
7. R12(−1/2): \begin{bmatrix} 1 & 0 & 0 & 3/4 & -1/4 & -1/4 \\ 0 & 1 & 0 & -1/4 & 3/4 & -1/4 \\ 0 & 0 & 1 & -1/4 & -1/4 & 3/4 \end{bmatrix}
8. Thus, the inverse of the given matrix is \begin{bmatrix} 3/4 & -1/4 & -1/4 \\ -1/4 & 3/4 & -1/4 \\ -1/4 & -1/4 & 3/4 \end{bmatrix}.
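The result can be verified numerically. The NumPy sketch below (not part of the notes) recomputes the inverse and checks that A A^{-1} = I3:

    import numpy as np

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0],
                  [1.0, 1.0, 2.0]])

    A_inv = np.linalg.inv(A)
    print(A_inv)                              # [[ 0.75 -0.25 -0.25] [-0.25  0.75 -0.25] [-0.25 -0.25  0.75]]
    print(np.allclose(A @ A_inv, np.eye(3)))  # True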
Exercise 2.7.14 Find the inverse of the following matrices using the Gauss-Jordan method.
(i) \begin{bmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \\ 2 & 4 & 7 \end{bmatrix}, (ii) \begin{bmatrix} 1 & 3 & 3 \\ 2 & 3 & 2 \\ 2 & 4 & 7 \end{bmatrix}, (iii) \begin{bmatrix} 2 & 1 & 3 \\ 1 & 3 & 2 \\ 2 & 4 & 1 \end{bmatrix}.
2.8 Determinant
Notation: For an n × n matrix A, by A(α|β) we mean the submatrix B of A which is obtained by deleting the αth row and βth column.
Example 2.8.1 Consider a matrix A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \\ 2 & 4 & 7 \end{bmatrix}. Then A(1|2) = \begin{bmatrix} 1 & 2 \\ 2 & 7 \end{bmatrix}, A(1|3) = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}, and A(1, 2|1, 3) = [4].
Definition 2.8.2 (Determinant of a Square Matrix) Let A be a square matrix of order n. With A, we associate inductively (on n) a number, called the determinant of A, written det(A) (or |A|), by
\det(A) = \begin{cases} a & \text{if } A = [a] \ (n = 1), \\ \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det\bigl(A(1|j)\bigr) & \text{otherwise.} \end{cases}
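The inductive definition translates directly into a short recursive routine. The Python sketch below (not from the notes, and far slower than standard methods for large n) expands along the first row exactly as in Definition 2.8.2:

    def det(A):
        """Determinant by expansion along the first row (Definition 2.8.2)."""
        n = len(A)
        if n == 1:                      # A = [a]
            return A[0][0]
        total = 0
        for j in range(n):
            # A(1|j): delete the first row and the (j+1)-th column
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    print(det([[1, 2], [2, 1]]))                     # -3
    print(det([[1, 2, 3], [1, 3, 2], [2, 4, 7]]))    # 1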
Definition 2.8.3 (Minor, Cofactor of a Matrix) The number det(A(i|j)) is called the (i, j)th minor of A. We write Aij = det(A(i|j)). The (i, j)th cofactor of A, denoted Cij, is the number (−1)^{i+j} Aij.
"#
a11 a12
Example 2.8.4 1. Let A = . Then, det(A) = |A| = a11 A11 a12 A12 = a11 a22 a12 a21 .
a21 a22
" #
1 2
For example, for A = det(A) = |A| = 1 2 2 = 3.
2 1
a11 a12 a13
2. Let A = a21 a22 a23 . Then,
a31 a32 a33
2. Show that the determinant of a triangular matrix is the product of its diagonal entries.
The proof of the next theorem is omitted. The interested reader is advised to go through Appendix
14.3.
4. if B is obtained from A by replacing the jth row by itself plus k times the ith row, where i ≠ j, then det(B) = det(A),
Remark 2.8.8 1. Many authors define the determinant using Permutations. It turns out that the
way we have defined determinant is usually called the expansion of the determinant along
the first row.
2. Part 1 of Lemma 2.8.7 implies that one can also calculate the determinant by expanding along any row. Hence, for an n × n matrix A and for every k, 1 ≤ k ≤ n, one also has
\det(A) = \sum_{j=1}^{n} (-1)^{k+j} a_{kj} \det\bigl(A(k|j)\bigr).
Remark 2.8.9 1. Let u^t = (u1, u2) and v^t = (v1, v2) be two vectors in R^2. Then consider the parallelogram PQRS formed by the vertices {P = (0, 0)^t, Q = u, S = v, R = u + v}.
Claim: Area(PQRS) = \Bigl| \det \begin{bmatrix} u_1 & v_1 \\ u_2 & v_2 \end{bmatrix} \Bigr| = |u_1 v_2 − u_2 v_1|.
Recall that the dot product u • v = u_1 v_1 + u_2 v_2, and \sqrt{u • u} = \sqrt{u_1^2 + u_2^2} is the length of the vector u. We denote the length by ℓ(u). With the above notation, if θ is the angle between the vectors u and v, then
\cos(\theta) = \frac{u • v}{ℓ(u)\, ℓ(v)}.
Which tells us,
Area(PQRS) = ℓ(u)\, ℓ(v) \sin(\theta) = ℓ(u)\, ℓ(v) \sqrt{1 - \Bigl(\frac{u • v}{ℓ(u)\, ℓ(v)}\Bigr)^2}
= \sqrt{ℓ(u)^2\, ℓ(v)^2 - (u • v)^2} = \sqrt{(u_1 v_2 - u_2 v_1)^2} = |u_1 v_2 - u_2 v_1|.
Hence, the claim holds. That is, in R^2, the determinant is ± times the area of the parallelogram.
2. Let u = (u1, u2, u3), v = (v1, v2, v3) and w = (w1, w2, w3) be three elements of R^3. Recall that the cross product of two vectors in R^3 is
u × v = (u_2 v_3 − u_3 v_2,\; u_3 v_1 − u_1 v_3,\; u_1 v_2 − u_2 v_1).
Let P be the parallelopiped formed with (0, 0, 0) as a vertex and the vectors u, v, w as adjacent edges. Then observe that u × v is a vector perpendicular to the plane that contains the parallelogram formed by the vectors u and v. So, to compute the volume of the parallelopiped P, we need to look at cos(θ), where θ is the angle between the vector w and the normal vector to the parallelogram formed by u and v. So,
volume(P) = |w • (u × v)|.
(a) If u1 = (1, 0, . . . , 0)^t, u2 = (0, 1, 0, . . . , 0)^t, . . . , and un = (0, . . . , 0, 1)^t, then det(A) = 1. Also, the volume of a unit n-dimensional cube is 1.
(b) If we replace the vector ui by αui, for some α ∈ R, then the determinant of the new matrix is α det(A). This is also true for the volume, as the original volume gets multiplied by |α|.
(c) If u1 = ui for some i, 2 ≤ i ≤ n, then the vectors u1, u2, . . . , un will give rise to an (n − 1)-dimensional parallelopiped. So, this parallelopiped lies in an (n − 1)-dimensional hyperplane. Thus, its n-dimensional volume will be zero. Also, |det(A)| = |0| = 0.
In general, for any n n matrix A, it can be proved that | det(A)| is indeed equal to the volume
of the n-dimensional parallelepiped. The actual proof is beyond the scope of this book.
Definition 2.8.10 (Adjoint of a Matrix) Let A be an n × n matrix. The matrix B = [bij] with bij = Cji, for 1 ≤ i, j ≤ n, is called the Adjoint of A, denoted Adj(A).
Example 2.8.11 Let A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 1 & 2 & 2 \end{bmatrix}. Then Adj(A) = \begin{bmatrix} 4 & 2 & -7 \\ -3 & -1 & 5 \\ 1 & 0 & -1 \end{bmatrix};
as C11 = (−1)^{1+1} A11 = 4, C12 = (−1)^{1+2} A12 = −3, C13 = (−1)^{1+3} A13 = 1, and so on.
2. for i ≠ ℓ, \sum_{j=1}^{n} a_{ij} C_{\ell j} = \sum_{j=1}^{n} (-1)^{\ell+j} a_{ij} A_{\ell j} = 0, and
det(A) ≠ 0 \implies A^{-1} = \frac{1}{\det(A)} \mathrm{Adj}(A).   (2.8.2)
By the construction of B, two rows (the ith and ℓth) are equal. By Part 5 of Lemma 2.8.7, det(B) = 0. By construction again, det(A(ℓ|j)) = det(B(ℓ|j)) for 1 ≤ j ≤ n. Thus, by Remark 2.8.8, we have
0 = \det(B) = \sum_{j=1}^{n} (-1)^{\ell+j} b_{\ell j} \det\bigl(B(\ell|j)\bigr) = \sum_{j=1}^{n} (-1)^{\ell+j} a_{ij} \det\bigl(B(\ell|j)\bigr)
= \sum_{j=1}^{n} (-1)^{\ell+j} a_{ij} \det\bigl(A(\ell|j)\bigr) = \sum_{j=1}^{n} a_{ij} C_{\ell j}.
Now,
\bigl(A \,\mathrm{Adj}(A)\bigr)_{ij} = \sum_{k=1}^{n} a_{ik} \bigl(\mathrm{Adj}(A)\bigr)_{kj} = \sum_{k=1}^{n} a_{ik} C_{jk} = \begin{cases} 0 & \text{if } i \neq j \\ \det(A) & \text{if } i = j. \end{cases}
Thus, A(Adj(A)) = det(A) In. Since det(A) ≠ 0, A \bigl( \frac{1}{\det(A)} \mathrm{Adj}(A) \bigr) = In. Therefore, A has a right inverse. Hence, by Theorem 2.7.8, A has an inverse and
A^{-1} = \frac{1}{\det(A)} \mathrm{Adj}(A).
Example 2.8.13 Let A = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 1 \\ 1 & 2 & 1 \end{bmatrix}. Then
Adj(A) = \begin{bmatrix} -1 & 1 & -1 \\ 1 & 1 & -1 \\ -1 & -3 & 1 \end{bmatrix}
and det(A) = −2. By Theorem 2.8.12.3, A^{-1} = \begin{bmatrix} 1/2 & -1/2 & 1/2 \\ -1/2 & -1/2 & 1/2 \\ 1/2 & 3/2 & -1/2 \end{bmatrix}.
The next corollary is an easy consequence of Theorem 2.8.12 (recall Theorem 2.7.8).
Theorem 2.8.15 Let A and B be square matrices of order n. Then det(AB) = det(A) det(B).
Corollary 2.8.16 Let A be a square matrix. Then A is non-singular if and only if A has an inverse.
Proof. Suppose A is non-singular. Then det(A) ≠ 0 and therefore, A^{-1} = \frac{1}{\det(A)} \mathrm{Adj}(A). Thus, A has an inverse.
Suppose A has an inverse. Then there exists a matrix B such that AB = I = BA. Taking determinant
of both sides, we get
det(A) det(B) = det(AB) = det(I) = 1.
The linear system Ax = b has a unique solution for every b if and only if A^{-1} exists.
Theorem 2.8.18 (Cramer's Rule) Let Ax = b be a linear system with n equations in n unknowns. If det(A) ≠ 0, then the unique solution to this system is
x_j = \frac{\det(A_j)}{\det(A)}, \quad \text{for } j = 1, 2, \ldots, n,
where A_j is the matrix obtained from A by replacing the jth column of A by the column vector b.
Proof. Since det(A) ≠ 0, A^{-1} = \frac{1}{\det(A)} \mathrm{Adj}(A). Thus, the linear system Ax = b has the solution x = \frac{1}{\det(A)} \mathrm{Adj}(A)\, b. Hence, x_j, the jth coordinate of x, is given by
x_j = \frac{b_1 C_{1j} + b_2 C_{2j} + \cdots + b_n C_{nj}}{\det(A)} = \frac{\det(A_j)}{\det(A)}.
2. If A and B are two n × n non-singular matrices, are the matrices A + B and A − B non-singular? Justify your answer.
9. Suppose A = [aij] and B = [bij] are two n × n matrices such that bij = p^{i-j} aij for 1 ≤ i, j ≤ n, for some non-zero real number p. Then compute det(B) in terms of det(A).
10. The position of an element aij of a determinant is called even or odd according as i + j is even or odd. Show that
(a) If all the entries in odd positions are multiplied with −1 then the value of the determinant doesn't change.
(b) If all entries in even positions are multiplied with −1 then the determinant
i. does not change if the matrix is of even order.
ii. is multiplied by −1 if the matrix is of odd order.
11. Let A be an n × n Hermitian matrix, that is, A^* = A. Show that det A is a real number. [A is a matrix with complex entries and A^* = \overline{A}^t.]
Consider the problem of finding the set of points of intersection of the two planes 2x + 3y + z + u = 0
and 3x + y + 2z + u = 0.
Let V be the set of points of intersection of the two planes. Then V has the following properties:
2. For the points (−1, 0, 1, 1) and (−5, 1, 7, 0), which belong to V, the point (−6, 1, 8, 1) = (−1, 0, 1, 1) + (−5, 1, 7, 0) ∈ V.
Similarly, for an m × n real matrix A, consider the set V of solutions of the homogeneous linear system Ax = 0. This set satisfies the following properties:
3. The vector 0 ∈ V as A0 = 0.
1. Vector Addition: To every pair u, v ∈ V there corresponds a unique element u ⊕ v in V such that
(d) for every u ∈ V there is a unique element −u ∈ V such that u ⊕ (−u) = 0 (called the additive inverse).
The operation ⊕ is called vector addition.
(a) α ⊙ (u ⊕ v) = (α ⊙ u) ⊕ (α ⊙ v).
(b) (α + β) ⊙ u = (α ⊙ u) ⊕ (β ⊙ u).
Remark 3.1.2 The elements of F are called scalars, and that of V are called vectors. If F = R, the
vector space is called a real vector space. If F = C, the vector space is called a complex vector
space.
We may sometimes write V for a vector space if F is understood from the context.
Some interesting consequences of Definition 3.1.1 is the following useful result. Intuitively, these
results seem to be obvious but for better understanding of the axioms it is desirable to go through the
proof.
1. u ⊕ v = u implies v = 0.
−u ⊕ (u ⊕ v) = −u ⊕ u ⟹ (−u ⊕ u) ⊕ v = 0 ⟹ 0 ⊕ v = 0 ⟹ v = 0.
Proof of Part 2.
As 0 = 0 ⊕ 0, using the distributive law, we have
α ⊙ 0 = α ⊙ (0 ⊕ 0) = (α ⊙ 0) ⊕ (α ⊙ 0).
Thus, for any α ∈ F, the first part implies α ⊙ 0 = 0. In the same way,
0 ⊙ u = (0 + 0) ⊙ u = (0 ⊙ u) ⊕ (0 ⊙ u).
3.1.2 Examples
Example 3.1.4 1. The set R of real numbers, with the usual addition and multiplication (i.e., +
and ) forms a vector space over R.
(called component wise or coordinate wise operations). Then V is a real vector space with addition and
scalar multiplication defined as above. This vector space is denoted by Rn , called the real vector
space of n-tuples.
4. Let V = R+ (the set of positive real numbers). This is not a vector space under the usual operations of addition and scalar multiplication (why?). We now define a new vector addition and scalar multiplication as
v1 ⊕ v2 = v1 · v2 and α ⊙ v = v^α
for all v1, v2, v ∈ R+ and α ∈ R. Then R+ is a real vector space with 1 as the additive identity.
Recall that √−1 is denoted by i.
(a) If the set F is the set C of complex numbers, then Cn is a complex vector space having n-tuple
of complex numbers as its vectors.
(b) If the set F is the set R of real numbers, then Cn is a real vector space having n-tuple of complex
numbers as its vectors.
Remark 3.1.5 In Example 7a, the scalars are Complex numbers and hence i(1, 0) = (i, 0).
Whereas, in Example 7b, the scalars are Real Numbers and hence we cannot write i(1, 0) =
(i, 0).
8. Fix a positive integer n and let Mn(R) denote the set of all n × n matrices with real entries. Then Mn(R) is a real vector space with vector addition and scalar multiplication defined by
9. Fix a positive integer n. Consider the set, Pn(R), of all polynomials of degree ≤ n with coefficients from R in the indeterminate x. Algebraically,
10. Consider the set P(R), of all polynomials with real coefficients. Let f(x), g(x) ∈ P(R). Observe that a polynomial of the form a0 + a1 x + · · · + am x^m can be written as a0 + a1 x + · · · + am x^m + 0 · x^{m+1} + · · · + 0 · x^p for any p > m. Hence, we can assume f(x) = a0 + a1 x + a2 x^2 + · · · + ap x^p and g(x) = b0 + b1 x + b2 x^2 + · · · + bp x^p for some ai, bi ∈ R, 0 ≤ i ≤ p, for some large positive integer p. We now define the vector addition and scalar multiplication as
11. Let C([−1, 1]) be the set of all real valued continuous functions on the interval [−1, 1]. For f, g ∈ C([−1, 1]) and α ∈ R, define
Then C([−1, 1]) forms a real vector space. The operations defined above are called point-wise addition and scalar multiplication.
12. Let V and W be real vector spaces with binary operations (+, •) and (⊕, ⊙), respectively. Consider the following operations on the set V × W: for (x1, y1), (x2, y2) ∈ V × W and α ∈ R, define
(x1, y1) + (x2, y2) = (x1 + x2, y1 ⊕ y2) and α (x1, y1) = (α • x1, α ⊙ y1).
On the right hand side, we write x1 + x2 to mean the addition in V, while y1 ⊕ y2 is the addition in W. Similarly, α • x1 and α ⊙ y1 come from scalar multiplication in V and W, respectively. With the above definitions, V × W also forms a real vector space.
The readers are advised to justify the statements made in the above examples.
3.1.3 Subspaces
Definition 3.1.6 (Vector Subspace) Let S be a non-empty subset of V. S(F) is said to be a subspace of V(F) if αu + βv ∈ S whenever α, β ∈ F and u, v ∈ S; where the vector addition and scalar multiplication are the same as that of V(F).
Remark 3.1.7 Any subspace is a vector space in its own right with respect to the vector addition and
scalar multiplication that is defined for V (F).
L(S) = \{ α_1 u_1 + α_2 u_2 + \cdots + α_n u_n : α_i ∈ F, 1 ≤ i ≤ n \}
Example 3.1.11 1. Note that (4, 5, 5) is a linear combination of (1, 0, 0), (1, 1, 0), and (1, 1, 1) as (4, 5, 5) = 5(1, 1, 1) − 1(1, 0, 0) + 0(1, 1, 0).
For each vector, the linear combination in terms of the vectors (1, 0, 0), (1, 1, 0), and (1, 1, 1) is unique.
2. Check that 3(1, 2, 3) + (−1)(−1, 1, 4) + 0(3, 3, 2) = (4, 5, 5). Also, in this case, the vector (4, 5, 5) does not have a unique expression as a linear combination of the vectors (1, 2, 3), (−1, 1, 4) and (3, 3, 2).
3. Verify that (4, 5, 5) is not a linear combination of the vectors (1, 2, 1) and (1, 1, 0).
Lemma 3.1.12 (Linear Span is a subspace) Let V (F) be a vector space and let S be a non-empty subset
of V. Then L(S) is a subspace of V (F).
Proof. By definition, S ⊆ L(S) and hence L(S) is a non-empty subset of V. Let u, v ∈ L(S). Then, for 1 ≤ i ≤ n there exist vectors wi ∈ S and scalars αi, βi ∈ F such that u = α1 w1 + α2 w2 + · · · + αn wn and v = β1 w1 + β2 w2 + · · · + βn wn. Hence,
Remark 3.1.13 Let V(F) be a vector space and W ⊆ V be a subspace. If S ⊆ W, then L(S) ⊆ W is a subspace of W as W is a vector space in its own right.
Theorem 3.1.14 Let S be a non-empty subset of a vector space V. Then L(S) is the smallest subspace of
V containing S.
Proof. For every u ∈ S, u = 1·u ∈ L(S) and therefore, S ⊆ L(S). To show L(S) is the smallest subspace of V containing S, consider any subspace W of V containing S. Then by Remark 3.1.13, L(S) ⊆ W and hence the result follows.
Definition 3.1.15 Let A be an m × n matrix with real entries. Then using the rows a_1^t, a_2^t, . . . , a_m^t ∈ R^n and columns b_1, b_2, . . . , b_n ∈ R^m, we define
1. RowSpace(A) = L(a1, a2, . . . , am),
2. ColumnSpace(A) = L(b1, b2, . . . , bn),
Note that the column space of a matrix A consists of all b such that Ax = b has a solution. Hence,
ColumnSpace(A) = Range(A).
Lemma 3.1.16 Let A be a real m n matrix. Suppose B = EA for some elementary matrix E. Then
Row Space(A) = Row Space(B).
Proof. We prove the result for the elementary matrix E_{ij}(c), where c ≠ 0 and i < j. Let a_1^t, a_2^t, . . . , a_m^t be the rows of the matrix A. Then B = E_{ij}(c) A gives us
1. N (A) is a subspace of Rn ;
2. the non-zero row vectors of a matrix in row-reduced form form a basis for the row-space. Hence dim(Row Space(A)) = row-rank of A.
Proof. Part 1) can be easily proved. Let A be an m × n matrix. For part 2), let D be the row-reduced form of A with non-zero rows d_1^t, d_2^t, . . . , d_r^t. Then D = E_k E_{k-1} · · · E_2 E_1 A for some elementary matrices E_1, E_2, . . . , E_k. Hence, a repeated application of Lemma 3.1.16 implies Row Space(A) = Row Space(D). That is, if the rows of the matrix A are a_1^t, a_2^t, . . . , a_m^t, then
L(a_1, a_2, . . . , a_m) = L(d_1, d_2, . . . , d_r).
Exercise 3.1.18 1. Show that any two row-equivalent matrices have the same row space. Give examples
to show that the column space of two row-equivalent matrices need not be same.
3. Let P and Q be two subspaces of a vector space V. Show that P ∩ Q is a subspace of V. Also show that P ∪ Q need not be a subspace of V. When is P ∪ Q a subspace of V?
5. Let S = {x1 , x2 , x3 , x4 } where x1 = (1, 0, 0, 0), x2 = (1, 1, 0, 0), x3 = (1, 2, 0, 0), x4 = (1, 1, 1, 0).
Determine all xi such that L(S) = L(S \ {xi }).
6. Let C([−1, 1]) be the set of all continuous functions on the interval [−1, 1] (cf. Example 3.1.4.11). Let
7. Let V = {(x, y) : x, y ∈ R} over R. Define (x, y) ⊕ (x1, y1) = (x + x1, 0) and α ⊙ (x, y) = (αx, 0). Show that V is not a vector space over R.
8. Recall that Mn(R) is the real vector space of all n × n real matrices. Prove that the following subsets are subspaces of Mn(R).
9. Let V = R. Define x ⊕ y = x − y and α ⊙ x = −αx. Which vector space axioms are not satisfied here?
In this section, we saw that a vector space has an infinite number of vectors. Hence, one can start with any finite collection of vectors and obtain their span. It means that any vector space contains an infinite number of other vector subspaces. Therefore, the following questions arise:
1. What are the conditions under which the linear span of two distinct sets is the same?
2. Is it possible to find/choose vectors so that the linear span of the chosen vectors is the whole vector
space itself?
3. Suppose we are able to choose certain vectors whose linear span is the whole space. Can we find
the minimum number of such vectors?
3.2 Linear Independence
Definition 3.2.1 (Linear Independence and Dependence) Let S = {u1, u2, . . . , um} be a non-empty subset of a
vector space V (F). If there exist scalars α1, α2, . . . , αm ∈ F, not all zero, such that
α1u1 + α2u2 + ⋯ + αmum = 0, (3.2.1)
then the set S is called a linearly dependent set. Otherwise, the set S is called linearly independent.
Example 3.2.2 1. Let S = {(1, 2, 1), (2, 1, 4), (3, 3, 5)}. Then check that 1 · (1, 2, 1) + 1 · (2, 1, 4) + (−1) · (3, 3, 5) =
(0, 0, 0). Since α1 = 1, α2 = 1 and α3 = −1 is a non-zero solution of (3.2.1), the set S is a linearly dependent
subset of R3.
2. Let S = {(1, 1, 1), (1, 1, 0), (1, 0, 1)}. Suppose there exist α, β, γ ∈ R such that α(1, 1, 1) + β(1, 1, 0) +
γ(1, 0, 1) = (0, 0, 0). Then check that in this case we necessarily have α = β = γ = 0, which shows
that the set S = {(1, 1, 1), (1, 1, 0), (1, 0, 1)} is a linearly independent subset of R3. (A quick numerical
check of both sets is sketched below.)
3. If S is a linearly dependent subset of V then every set containing S is also linearly dependent.
Proof. We give the proof of the first part; the reader is required to supply the proof of the other parts.
Let S = {u1 = 0, u2, . . . , un} be a set containing the zero vector. Then for any α ≠ 0, α u1 + 0 · u2 +
⋯ + 0 · un = 0. Hence, the system α1u1 + α2u2 + ⋯ + αnun = 0 has a non-zero solution α1 = α
and α2 = ⋯ = αn = 0. Therefore, the set S is linearly dependent.
Theorem 3.2.4 Let {v1 , v2 , . . . , vp } be a linearly independent subset of a vector space V. Suppose there
exists a vector vp+1 V, such that the set {v1 , v2 , . . . , vp , vp+1 } is linearly dependent, then vp+1 is a linear
combination of v1 , v2 , . . . , vp .
Proof. Since the set {v1, v2, . . . , vp, vp+1} is linearly dependent, there exist scalars α1, α2, . . . , αp+1,
not all zero, such that
α1v1 + α2v2 + ⋯ + αpvp + αp+1vp+1 = 0. (3.2.2)
Claim: αp+1 ≠ 0.
If possible, let αp+1 = 0. Then equation (3.2.2) gives α1v1 + α2v2 + ⋯ + αpvp = 0 with not all
αi, 1 ≤ i ≤ p, zero. Hence, by the definition of linear independence, the set {v1, v2, . . . , vp} is linearly
dependent, which contradicts our hypothesis. Thus, αp+1 ≠ 0 and we get
vp+1 = −(1/αp+1)(α1v1 + ⋯ + αpvp).
We now state two important corollaries of the above theorem. We do not give their proofs, as they are
easy consequences of the above theorem.
Corollary 3.2.5 Let {u1, u2, . . . , un} be a linearly dependent subset of a vector space V. Then there exists
a smallest k, 2 ≤ k ≤ n, such that L(u1, u2, . . . , uk) = L(u1, u2, . . . , uk−1).
The next corollary follows immediately from Theorem 3.2.4 and Corollary 3.2.5.
Corollary 3.2.6 Let {v1 , v2 , . . . , vp } be a linearly independent subset of a vector space V. Suppose there
exists a vector v V, such that v 6 L(v1 , v2 , . . . , vp ). Then the set {v1 , v2 , . . . , vp , v} is also linearly
independent subset of V.
Exercise 3.2.7 1. Consider the vector space R2. Let u1 = (1, 0). Find all choices for the vector u2 such
that the set {u1, u2} is a linearly independent subset of R2. Do there exist vectors u2 and
u3 such that the set {u1, u2, u3} is a linearly independent subset of R2?
2. If none of the elements appearing along the principal diagonal of a lower triangular matrix is zero, show
that the row vectors are linearly independent in Rn . The same is true for column vectors.
3. Let S = {(1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 1, 1)} ⊂ R4. Determine whether or not the vector (1, 1, 2, 1)
belongs to L(S).
4. Show that S = {(1, 2, 3), (2, 1, 1), (8, 6, 10)} is linearly dependent in R3 .
5. Show that S = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a linearly independent set in R3 . In general if {f1 , f2 , f3 }
is a linearly independent set then {f1 , f1 + f2 , f1 + f2 + f3 } is also a linearly independent set.
6. In R3 , give an example of 3 vectors u, v and w such that {u, v, w} is linearly dependent but any set
of 2 vectors from u, v, w is linearly independent.
10. Under what conditions on α are the vectors (1 + α, 1 − α) and (α − 1, 1 + α) in C2(R) linearly
independent?
11. Let u, v V and M be a subspace of V. Further, let K be the subspace spanned by M and u and H
be the subspace spanned by M and v. Show that if v ∈ K and v ∉ M, then u ∈ H.
3.3 Bases
Definition 3.3.1 (Basis of a Vector Space) 1. A non-empty subset B of a vector space V is called a
basis of V if B is a linearly independent set and L(B) = V, i.e., every vector in V can be expressed as a
linear combination of the elements of B.
Remark 3.3.2 Let {v1, v2, . . . , vp} be a basis of a vector space V (F). Then any v ∈ V is a unique
linear combination of the basis vectors v1, v2, . . . , vp.
Observe that if there exists a v ∈ V such that v = α1v1 + α2v2 + ⋯ + αpvp and v = β1v1 + β2v2 +
⋯ + βpvp then
0 = v − v = (α1 − β1)v1 + (α2 − β2)v2 + ⋯ + (αp − βp)vp.
But the set {v1, v2, . . . , vp} is linearly independent and therefore the scalars αi − βi for 1 ≤ i ≤ p
must all be equal to zero. Hence, for 1 ≤ i ≤ p, αi = βi and we have the uniqueness.
By convention, the linear span of an empty set is {0}. Hence, the empty set is a basis of the vector
space {0}.
Example 3.3.3 1. Check that if V = {(x, y, 0) : x, y R} R3 , then B = {(1, 0, 0), (0, 1, 0)} or
B = {(1, 0, 0), (1, 1, 0)} or B = {(2, 0, 0), (1, 3, 0)} or are bases of V.
3. Let V = {(x, y, z) : x + y − z = 0, x, y, z ∈ R} be a vector subspace of R3. Then S = {(1, 1, 2), (2, 1, 3), (1, 2, 3)}
⊂ V. It can be easily verified that the vector (3, 2, 5) ∈ V and (3, 2, 5) = (1, 1, 2) + (2, 1, 3) + 0 · (1, 2, 3) ∈ L(S).
4. Let V = {a + ib : a, b R} and F = C. That is, V is a complex vector space. Note that any element
a + ib V can be written as a + ib = (a + ib)1. Hence, a basis of V is {1}.
6. Recall the vector space P(R), the vector space of all polynomials with real coefficients. A basis of this
vector space is the set
{1, x, x2 , . . . , xn , . . .}.
This basis contains infinitely many vectors, as the degree of a polynomial can be any non-negative integer.
Definition 3.3.4 (Finite Dimensional Vector Space) A vector space V is said to be finite dimensional if
there exists a basis consisting of finite number of elements. Otherwise, the vector space V is called infinite
dimensional.
In Example 3.3.3, the vector space of all polynomials is an example of an infinite dimensional vector
space. All the other vector spaces are finite dimensional.
Remark 3.3.5 We can use the above results to obtain a basis of any finite dimensional vector space V
as follows:
Step 1: Choose a non-zero vector, say, v1 V. Then the set {v1 } is linearly independent.
Step 2: If V = L(v1 ), we have got a basis of V. Else there exists a vector, say, v2 V such that
v2 6 L(v1 ). Then by Corollary 3.2.6, the set {v1 , v2 } is linearly independent.
Step 3: If V = L(v1 , v2 ), then {v1 , v2 } is a basis of V. Else there exists a vector, say, v3 V such
that v3 6 L(v1 , v2 ). So, by Corollary 3.2.6, the set {v1 , v2 , v3 } is linearly independent.
Exercise 3.3.6 1. Let S = {v1 , v2 , . . . , vp } be a subset of a vector space V (F). Suppose L(S) = V but
S is not a linearly independent set. Then prove that each vector in V can be expressed in more than
one way as a linear combination of vectors from S.
2. Show that the set {(1, 0, 1), (1, i, 0), (1, 1, 1 i)} is a basis of C3 (C).
3. Let A be a matrix of rank r. Then show that the r non-zero rows in the row-reduced echelon form of
A are linearly independent and they form a basis of the row space of A.
Theorem 3.3.7 Let {v1, v2, . . . , vn} be a basis of a vector space V (F) and let {w1, w2, . . . , wm} ⊂ V with m > n.
Then the set {w1, w2, . . . , wm} is linearly dependent.
Proof. Since we want to find whether the set {w1, w2, . . . , wm} is linearly independent or not, we
consider the linear system
α1w1 + α2w2 + ⋯ + αmwm = 0 (3.3.1)
with α1, α2, . . . , αm as the m unknowns. If the solution set of this linear system of equations has more
than one solution, then this set will be linearly dependent.
As {v1, v2, . . . , vn} is a basis of V and wi ∈ V, for each i, 1 ≤ i ≤ m, there exist scalars aij, 1 ≤ i ≤
n, 1 ≤ j ≤ m, such that
Therefore, finding αi's satisfying equation (3.3.1) reduces to solving the homogeneous system Aα = 0,
where α^t = (α1, α2, . . . , αm) and
A = [ a11 a12 ⋯ a1m ; a21 a22 ⋯ a2m ; ⋮ ; an1 an2 ⋯ anm ].
Since n < m, i.e., the number of equations is strictly less than the number of unknowns, Corollary 2.6.3
implies that the solution set consists of infinitely many elements. Therefore, equation (3.3.1) has a solution
with not all αi, 1 ≤ i ≤ m, zero. Hence, the set {w1, w2, . . . , wm} is a linearly dependent set.
Remark 3.3.8 Let V be a vector subspace of Rn with spanning set S. We give a method of finding a
basis of V from S.
1. Construct a matrix A whose rows are the vectors in S.
2. Use only the elementary row operations Ri(c) and Rij(c) to get the row-reduced form B of A (in
fact we just need to make as many zero rows as possible).
3. The vectors of S corresponding to the non-zero rows of B form a basis of V = L(S).
Example 3.3.9 Let S = {(1, 1, 1, 1), (1, 1, 1, 1), (1, 1, 0, 1), (1, 1, 1, 1)} be a subset of R4 . Find a basis of
L(S).
Solution: Here A is the matrix whose rows are the vectors of S, in the given order. Row-reducing A by the
operations R12(−1), R13(−1), R14(−1) followed by R32(−2) turns the second row into the zero row.
Observe that the rows 1, 3 and 4 are non-zero. Hence, a basis of L(S) consists of the first, third and fourth
vectors of the set S. Thus, B = {(1, 1, 1, 1), (1, 1, 0, 1), (1, 1, 1, 1)} is a basis of L(S).
Observe that at the last step, in place of the elementary row operation R32(−2), we can apply R23(−1/2)
to make the third row the zero row. In this case, we get the first, second and fourth vectors of S as a basis
of L(S). (A numerical version of this procedure is sketched below.)
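The procedure of Remark 3.3.8 can be mimicked numerically: scan the vectors in order and keep a vector only if it increases the rank of the collection kept so far. A small numpy sketch; the vectors below are illustrative, not the set S of Example 3.3.9.

import numpy as np

def basis_from_span(vectors):
    # Keep a vector only if it is not in the span of the ones already kept.
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept.append(v)
    return kept

S = [np.array([1., 0., 1., 1.]),
     np.array([2., 0., 2., 2.]),     # a multiple of the first vector, gets discarded
     np.array([0., 1., 0., 1.]),
     np.array([1., 1., 1., 2.])]     # the sum of the first and third, gets discarded
print(basis_from_span(S))            # two vectors survive, so dim L(S) = 2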
Corollary 3.3.10 Let V be a finite dimensional vector space. Then any two bases of V have the same
number of vectors.
Proof. Let {u1, u2, . . . , un} and {v1, v2, . . . , vm} be two bases of V with m > n. Then by Theorem 3.3.7
(applied with the basis {u1, u2, . . . , un}), the set {v1, v2, . . . , vm} is linearly dependent. This
contradicts the assumption that {v1, v2, . . . , vm} is a basis of V. Hence m ≤ n, and by interchanging the
roles of the two bases, n ≤ m. Thus, we get m = n.
Definition 3.3.11 (Dimension of a Vector Space) The dimension of a finite dimensional vector space V
is the number of vectors in a basis of V, denoted dim(V ).
Note that the Corollary 3.2.6 can be used to generate a basis of any non-trivial finite dimensional
vector space.
So, {(1, 0), (0, 1)} is a basis of C2 (C) and thus dim(V ) = 2.
2. Consider the real vector space C2(R). In this case, any vector (x1 + iy1, x2 + iy2), with x1, y1, x2, y2 ∈ R,
equals x1(1, 0) + y1(i, 0) + x2(0, 1) + y2(0, i).
Hence, the set {(1, 0), (i, 0), (0, 1), (0, i)} is a basis and dim(V ) = 4.
Remark 3.3.13 It is important to note that the dimension of a vector space may change if the under-
lying field (the set of scalars) is changed.
Example 3.3.14 Let V be the set of all functions f : Rn → R with the property that f(x + y) = f(x) + f(y)
and f(αx) = αf(x). For f, g ∈ V and t ∈ R, define (f ⊕ g)(x) = f(x) + g(x) and (t ⊙ f)(x) = t f(x).
Then V is a real vector space. For 1 ≤ i ≤ n, consider the functions ei defined by ei((x1, x2, . . . , xn)) = xi.
Then it can be easily verified that the set {e1, e2, . . . , en} is a basis of V and hence dim(V ) = n.
The next theorem follows directly from Corollary 3.2.6 and Theorem 3.3.7. Hence, the proof is
omitted.
Theorem 3.3.15 Let S be a linearly independent subset of a finite dimensional vector space V. Then the
set S can be extended to form a basis of V.
Corollary 3.3.16 Let V be a vector space of dimension n. Then any set of n linearly independent vectors
forms a basis of V. Also, every set of m vectors, m > n, is linearly dependent.
v + x − 3y + z = 0, w − x − z = 0 and v = y
is given by
(v, w, x, y, z)^t = (y, 2y, x, y, 2y − x)^t = y (1, 2, 0, 1, 2)^t + x (0, 0, 1, 0, −1)^t.
Thus, a basis of V ∩ W is
{(1, 2, 0, 1, 2), (0, 0, 1, 0, −1)}.
To find a basis of W containing a basis of V ∩ W, we can proceed as follows:
1. Find a basis of W.
2. Take the basis of V ∩ W found above as the first two vectors and that of W as the next set of vectors.
Now use Remark 3.3.8 to get the required basis.
Recall that for two vector subspaces M and N of a vector space V (F), the vector subspace M + N
is defined by
M + N = {u + v : u ∈ M, v ∈ N}.
With this definition, we have the following very important theorem (for a proof, see Appendix 14.4.1).
Theorem 3.3.18 Let V (F) be a finite dimensional vector space and let M and N be two subspaces of V.
Then
dim(M) + dim(N) = dim(M + N) + dim(M ∩ N). (3.3.2)
Exercise 3.3.19 1. Find a basis of the vector space Pn (R). Also, find dim(Pn (R)). What can you say
about the dimension of P(R)?
2. Consider the real vector space C([0, 2π]) of all real valued continuous functions on [0, 2π]. For each positive
integer n, consider the vector en defined by en(x) = sin(nx). Prove that the collection of vectors {en : 1 ≤ n < ∞} is a
linearly independent set.
[Hint: On the contrary, assume that the set is linearly dependent. Then there is a finite set of vectors,
say {ek1, ek2, . . . , ekℓ}, that is linearly dependent. That is, there exist scalars αi ∈ R for 1 ≤ i ≤ ℓ, not all
zero, such that α1ek1 + α2ek2 + ⋯ + αℓekℓ = 0.]
3. Show that the set {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a basis of C3 (C). Is it a basis of C3 (R) also?
6. Let V be the set of all real symmetric n n matrices. Find its basis and dimension. What if V is the
complex vector space of all n n Hermitian matrices?
7. If M and N are 4-dimensional subspaces of a vector space V of dimension 7 then show that M and
N have at least one vector in common other than the zero vector.
8. Let P = L{(1, 0, 0), (1, 1, 0)} and Q = L{(1, 1, 1)} be vector subspaces of R3. Show that P + Q = R3
and P ∩ Q = {0}. If u ∈ R3, determine uP, uQ such that u = uP + uQ where uP ∈ P and uQ ∈ Q.
Is it necessary that uP and uQ are unique?
9. Let W1 be a k-dimensional subspace of an n-dimensional vector space V (F) where k 1. Prove that
there exists an (n − k)-dimensional subspace W2 of V such that W1 ∩ W2 = {0} and W1 + W2 = V.
10. Let P and Q be subspaces of Rn such that P + Q = Rn and P ∩ Q = {0}. Then show that each
u ∈ Rn can be uniquely expressed as u = uP + uQ where uP ∈ P and uQ ∈ Q.
11. Let P = L{(1, −1, 0), (1, 1, 0)} and Q = L{(1, 1, 1), (1, 2, 1)} be vector subspaces of R3. Show that
P + Q = R3 and P ∩ Q ≠ {0}. Show that there exists a vector u ∈ R3 such that u cannot be written
uniquely in the form u = uP + uQ where uP ∈ P and uQ ∈ Q.
13. Let V be the set of all 2 × 2 matrices with complex entries and a11 + a22 = 0. Show that V is a real
vector space. Find its basis. Also let W = {A ∈ V : a21 = a12}. Show that W is a vector subspace of V,
and find its dimension.
14. Let A = [ 1 2 1 3 2 ; 0 2 2 2 4 ; 2 2 4 0 8 ; 4 2 5 6 10 ] and B = [ 2 4 0 6 ; 1 0 2 5 ; 3 5 1 4 ; 1 1 1 2 ]
be two matrices. For A and B find the following:
15. Let M (n, R) denote the space of all n n real matrices. For the sets given below, check that they are
subspaces of M (n, R) and also find their dimension.
(a) sl(n, R) = {A M (n, R) : tr(A) = 0}, where recall that tr(A) stands for trace of A.
(b) S(n, R) = {A M (n, R) : A = At }.
(c) A(n, R) = {A M (n, R) : A + At = 0}.
Before going to the next section, we prove that for any matrix A of order m × n,
Row rank(A) = Column rank(A).
Proof. Let R1, R2, . . . , Rm denote the rows of A and C1, C2, . . . , Cn its columns. Suppose Row rank(A) = r.
Then there exist vectors u1, u2, . . . , ur ∈ Rn, with ui = (ui1, ui2, . . . , uin), such that
Ri ∈ L(u1, u2, . . . , ur) ⊂ Rn, for all i, 1 ≤ i ≤ m.
Therefore, there exist real numbers αij, 1 ≤ i ≤ m, 1 ≤ j ≤ r, such that
R1 = α11u1 + α12u2 + ⋯ + α1rur = ( Σ_{i=1}^r α1i ui1, Σ_{i=1}^r α1i ui2, . . . , Σ_{i=1}^r α1i uin ),
R2 = α21u1 + α22u2 + ⋯ + α2rur = ( Σ_{i=1}^r α2i ui1, Σ_{i=1}^r α2i ui2, . . . , Σ_{i=1}^r α2i uin ),
and so on, till Rm. So, comparing the first entries of the rows,
C1 = ( Σ_{i=1}^r α1i ui1, Σ_{i=1}^r α2i ui1, . . . , Σ_{i=1}^r αmi ui1 )^t
   = u11 (α11, α21, . . . , αm1)^t + u21 (α12, α22, . . . , αm2)^t + ⋯ + ur1 (α1r, α2r, . . . , αmr)^t.
Therefore, we observe that the columns C1, C2, . . . , Cn are linear combinations of the r vectors
(α11, α21, . . . , αm1)^t, (α12, α22, . . . , αm2)^t, . . . , (α1r, α2r, . . . , αmr)^t. Therefore,
Column rank(A) = dim L(C1, C2, . . . , Cn) ≤ r = Row rank(A).
A similar argument applied to A^t gives
Row rank(A) ≤ Column rank(A).
Thus, we have the required result.
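A quick numerical illustration of this equality, assuming numpy: the rank of a matrix and of its transpose always agree.

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)     # an arbitrary 4 x 6 integer matrix
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # the two ranks coincide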
3.4 Ordered Bases
Definition 3.4.1 (Ordered Basis) An ordered basis for a vector space V (F) of dimension n is a basis
{u1, u2, . . . , un} together with a one-to-one correspondence between the sets {u1, u2, . . . , un} and
{1, 2, 3, . . . , n}.
If the ordered basis has u1 as the first vector, u2 as the second vector and so on, then we denote this
ordered basis by
(u1 , u2 , . . . , un ).
Example 3.4.2 Consider P2 (R), the vector space of all polynomials of degree less than or equal to 2 with
coefficients from R. The set {1 − x, 1 + x, x2} is a basis of P2(R).
For any element a0 + a1x + a2x2 ∈ P2(R), we have
a0 + a1x + a2x2 = ((a0 − a1)/2)(1 − x) + ((a0 + a1)/2)(1 + x) + a2x2.
If (1 − x, 1 + x, x2) is an ordered basis, then (a0 − a1)/2 is the first component, (a0 + a1)/2 is the second component,
and a2 is the third component of the vector a0 + a1x + a2x2.
If we take (1 + x, 1 − x, x2) as an ordered basis, then (a0 + a1)/2 is the first component, (a0 − a1)/2 is the
second component, and a2 is the third component of the vector a0 + a1x + a2x2.
That is, as ordered bases, (u1, u2, . . . , un), (u2, u3, . . . , un, u1), and (un, u1, u2, . . . , un−1) are different
even though they have the same set of vectors as elements.
Definition 3.4.3 (Coordinates of a Vector) Let B = (v1 , v2 , . . . , vn ) be an ordered basis of a vector space
V (F) and let v V. If
v = 1 v1 + 2 v2 + + n vn
then the tuple (1 , 2 , . . . , n ) is called the coordinate of the vector v with respect to the ordered basis B.
Mathematically, we denote it by [v]B = (1 , . . . , n )t , a column vector.
Suppose B1 = (u1, u2, . . . , un) and B2 = (un, u1, u2, . . . , un−1) are two ordered bases of V. Then for
any x ∈ V there exist unique scalars α1, α2, . . . , αn such that
x = α1u1 + α2u2 + ⋯ + αnun = αnun + α1u1 + ⋯ + αn−1un−1.
Therefore,
[x]B1 = (α1, α2, . . . , αn)^t and [x]B2 = (αn, α1, α2, . . . , αn−1)^t.
Note that x is uniquely written as Σ_{i=1}^n αiui and hence the coordinates with respect to an ordered
basis are unique.
Suppose that the ordered basis B1 is changed to the ordered basis B3 = (u2 , u1 , u3 , . . . , un ). Then
[x]B3 = (α2, α1, α3, . . . , αn)^t. So, the coordinates of a vector depend on the ordered basis chosen.
In general, let V be an n-dimensional vector space with ordered bases B1 = (u1 , u2 , . . . , un ) and
B2 = (v1, v2, . . . , vn). Since B1 is a basis of V, there exist unique scalars aij, 1 ≤ i, j ≤ n, such that
vi = Σ_{l=1}^n ali ul, for 1 ≤ i ≤ n.
Note that the ith column of the matrix A is equal to [vi ]B1 , i.e., the ith column of A is the coordinate
of the ith vector vi of B2 with respect to the ordered basis B1 . Hence, we have proved the following
theorem.
Theorem 3.4.5 Let V be an n-dimensional vector space with ordered bases B1 = (u1 , u2 , . . . , un ) and
B2 = (v1 , v2 , . . . , vn ). Let
A = [[v1 ]B1 , [v2 ]B1 , . . . , [vn ]B1 ] .
1. Then [x]B1 = A [x]B2 for every x ∈ V, and the matrix A is invertible.
and
(x, y, z) = ((y − x)/2 + z) (1, 1, 1) + ((x − y)/2) (1, −1, 1) + (x − z) (1, 1, 0),
so that
[(x, y, z)]B2 = ((y − x)/2 + z, (x − y)/2, x − z)^t.
2. Let A = [aij] = [ 0 2 0 ; 0 −2 1 ; 1 1 0 ]. The columns of the matrix A are obtained by the following rule:
and
[(1, 1, 0)]B1 = 0 · (1, 0, 0) + 1 · (1, 1, 0) + 0 · (1, 1, 1) = (0, 1, 0)^t.
That is, the elements of B2 = ((1, 1, 1), (1, −1, 1), (1, 1, 0)) are expressed in terms of the ordered basis
B1.
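Coordinates with respect to an ordered basis can be computed by solving one linear system: if the basis vectors form the columns of a matrix P, then [v]_B solves P y = v. A small numpy sketch using the bases of this example, with B2 read as ((1, 1, 1), (1, −1, 1), (1, 1, 0)) as above and an arbitrary test vector of our choosing:

import numpy as np

B1 = np.column_stack([(1., 0., 0.), (1., 1., 0.), (1., 1., 1.)])   # basis vectors as columns
B2 = np.column_stack([(1., 1., 1.), (1., -1., 1.), (1., 1., 0.)])

v = np.array([2., -1., 3.])
coords_B1 = np.linalg.solve(B1, v)          # [v]_{B1}
coords_B2 = np.linalg.solve(B2, v)          # [v]_{B2}

# the change-of-basis matrix I[B2, B1] has columns [v_i]_{B1}
A = np.linalg.solve(B1, B2)
print(np.allclose(coords_B1, A @ coords_B2))    # True, as in Theorem 3.4.5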
In the next chapter, we try to understand Theorem 3.4.5 again using the ideas of linear transforma-
tions / functions.
Exercise 3.4.7 1. Determine the coordinates of the vectors (1, 2, 1) and (4, 2, 2) with respect to the
basis B = (2, 1, 0), (2, 1, 1), (2, 2, 1) of R3 .
Chapter 4
Linear Transformations
Definition 4.1.1 (Linear Transformation) Let V and W be vector spaces over F. A map T : V → W is
called a linear transformation if T (αu + βv) = αT (u) + βT (v) for all α, β ∈ F and u, v ∈ V.
Example 4.1.2 1. Define T : R → R2 by T (x) = (x, 3x) for all x ∈ R. Then T is a linear transformation
as
T (x + y) = (x + y, 3(x + y)) = (x, 3x) + (y, 3y) = T (x) + T (y) and T (λx) = (λx, 3λx) = λ T (x).
2. Verify that the maps given below from Rn to R are linear transformations. Let x = (x1, x2, . . . , xn).
(a) Define T (x) = Σ_{i=1}^n xi.
(b) For any i, 1 ≤ i ≤ n, define Ti(x) = xi.
(c) For a fixed vector a = (a1, a2, . . . , an) ∈ Rn, define T (x) = Σ_{i=1}^n ai xi. Note that examples (a)
and (b) can be obtained by assigning particular values for the vector a.
4. Let A be an m × n real matrix and define TA : Rn → Rm by TA(x) = Ax for every x ∈ Rn.
Then TA is a linear transformation. That is, every m × n real matrix defines a linear transformation
from Rn to Rm.
5. Recall that Pn(R) is the set of all polynomials of degree less than or equal to n with real coefficients.
Define T : Rn+1 → Pn(R) by T ((a1, a2, . . . , an+1)) = a1 + a2x + ⋯ + an+1x^n.
Proposition 4.1.3 Let T : V W be a linear transformation. Suppose that 0V is the zero vector in V and
0W is the zero vector of W. Then T (0V ) = 0W .
From now on, we write 0 for both the zero vector of the domain space and the zero vector of the
range space.
Definition 4.1.4 (Zero Transformation) Let V be a vector space and let T : V W be the map defined
by
T (v) = 0 for every v V.
Then T is a linear transformation. Such a linear transformation is called the zero transformation and is
denoted by 0.
Definition 4.1.5 (Identity Transformation) Let V be a vector space and let T : V V be the map
defined by
T (v) = v for every v V.
Then T is a linear transformation. Such a linear transformation is called the Identity transformation and is
denoted by I.
We now prove a result that relates a linear transformation T with its value on a basis of the domain
space.
Proof. Since B is a basis of V, for any x ∈ V there exist scalars α1, α2, . . . , αn such that x =
α1u1 + α2u2 + ⋯ + αnun. So, by the definition of a linear transformation,
T (x) = α1T (u1) + α2T (u2) + ⋯ + αnT (un).
Observe that, given x ∈ V, we know the scalars α1, α2, . . . , αn. Therefore, to know T (x), we just need
to know the vectors T (u1), T (u2), . . . , T (un) in W.
That is, for every x ∈ V, T (x) is determined by the coordinates (α1, α2, . . . , αn) of x with respect to
the ordered basis B and the vectors T (u1), T (u2), . . . , T (un) ∈ W.
Exercise 4.1.7 1. Which of the following are linear transformations T : V W ? Justify your answers.
(a) Let V = R2 and W = R3 with T (x, y) = (x + y + 1, 2x − y, x + 3y)
(b) Let V = W = R2 with T (x, y) = (x − y, x2 − y2)
(c) Let V = W = R2 with T (x, y) = (x − y, |x|)
(d) Let V = R2 and W = R4 with T (x, y) = (x + y, x − y, 2x + y, 3x − 4y)
(e) Let V = W = R4 with T (x, y, z, w) = (z, x, w, y)
2. Recall that M2 (R) is the space of all 2 2 matrices with real entries. Then, which of the following are
linear transformations T : M2 (R)M2 (R)?
3. Let T : R R be a map. Then T is a linear transformation if and only if there exists a unique c R
such that T (x) = cx for every x R.
4. Let A be an n × n real matrix and define T : Rn → Rn by T (x) = Ax. Then prove that T 2(x) := T (T (x)) = A2x. In general, for k ∈ N, prove that T k(x) = Akx.
5. Use the ideas of matrices to give examples of linear transformations T, S : R3 R3 that satisfy:
(a) T ≠ 0, T 2 ≠ 0, T 3 = 0.
(b) T ≠ 0, S ≠ 0, S ◦ T ≠ 0, T ◦ S = 0; where T ◦ S(x) = T (S(x)).
(c) S2 = T 2, S ≠ T.
(d) T 2 = I, T ≠ I.
7. Let T : Rn Rm be a linear transformation, and let x0 Rn with T (x0 ) = y. Consider the sets
That is, f fixes the line y = x and sends the point (x1 , y1 ) for x1 6= y1 to its mirror image along the
line y = x.
Is this function a linear transformation? Justify your answer.
Theorem 4.1.8 Let T : V → W be a linear transformation which is one-one and onto. Then
1. for each w ∈ W, the set T −1(w) = {v ∈ V : T (v) = w} consists of a single element, and
2. the map T −1 : W → V defined by T −1(w) = v whenever T (v) = w
is a linear transformation.
Proof. Since T is onto, for each w ∈ W there exists a vector v ∈ V such that T (v) = w. So, the set
T −1(w) is non-empty.
Suppose there exist vectors v1, v2 ∈ V such that T (v1) = T (v2). But by assumption, T is one-one
and therefore v1 = v2. This completes the proof of Part 1.
We now show that T −1 as defined above is a linear transformation. Let w1, w2 ∈ W. Then by Part 1,
there exist unique vectors v1, v2 ∈ V such that T −1(w1) = v1 and T −1(w2) = v2. Or equivalently,
T (v1) = w1 and T (v2) = w2. So, for any α1, α2 ∈ F, we have T (α1v1 + α2v2) = α1w1 + α2w2.
Thus for any α1, α2 ∈ F,
T −1(α1w1 + α2w2) = α1v1 + α2v2 = α1T −1(w1) + α2T −1(w2).
Definition 4.1.9 (Inverse Linear Transformation) Let T : V → W be a linear transformation. If the map
T is one-one and onto, then the map T −1 : W → V defined by T −1(w) = v whenever T (v) = w is called the
inverse of the linear transformation T.
2. Recall the vector space Pn (R) and the linear transformation T : Rn+1 Pn (R) defined by
4.2 Matrix of a Linear Transformation
Let T : V → W be a linear transformation and let B1 = (v1, v2, . . . , vn) be an ordered basis of
V. In the last section, we saw that a linear transformation is determined by its image on a basis of the
domain space. We therefore look at the images of the vectors vj ∈ B1 for 1 ≤ j ≤ n.
Now for each j, 1 ≤ j ≤ n, the vector T (vj) ∈ W. We now express these vectors in terms of
an ordered basis B2 = (w1, w2, . . . , wm) of W. So, for each j, 1 ≤ j ≤ n, there exist unique scalars
a1j, a2j, . . . , amj ∈ F such that
T (vj) = a1jw1 + a2jw2 + ⋯ + amjwm, that is, [T (vj)]B2 = (a1j, a2j, . . . , amj)^t.
Let A = [aij] be the m × n matrix whose jth column is [T (vj)]B2. The matrix A is called the matrix of the
linear transformation T with respect to the ordered bases B1 and B2, and is denoted by T [B1, B2].
We thus have the following theorem.
We thus have the following theorem.
Theorem 4.2.1 Let V and W be finite dimensional vector spaces with dimensions n and m, respectively.
Let T : V W be a linear transformation. If B1 is an ordered basis of V and B2 is an ordered basis of W,
then there exists an m n matrix A = T [B1 , B2 ] such that
[T (x)]B2 = A [x]B1 .
We now give a few examples to understand the above discussion and the theorem.
1. Define T : R2 → R2 by T ((x, y)) = (x + y, x − y).
We obtain T [B1, B2], the matrix of the linear transformation T with respect to the ordered bases
B1 = ((1, 0), (0, 1)) and B2 = ((1, 1), (1, −1)) of R2. Here
T ((1, 0)) = (1, 1) = 1 · (1, 1) + 0 · (1, −1), so [T ((1, 0))]B2 = (1, 0)^t,
and
T ((0, 1)) = (1, −1) = 0 · (1, 1) + 1 · (1, −1), so [T ((0, 1))]B2 = (0, 1)^t.
Thus T [B1, B2] = [ 1 0 ; 0 1 ], the 2 × 2 identity matrix. Observe that in this case,
[T ((x, y))]B2 = [(x + y, x − y)]B2 = (x, y)^t, since (x + y, x − y) = x(1, 1) + y(1, −1), and
T [B1, B2] [(x, y)]B1 = [ 1 0 ; 0 1 ] (x, y)^t = (x, y)^t = [T ((x, y))]B2.
2. Let B1 = ((1, 0, 0), (0, 1, 0), (0, 0, 1)) and B2 = ((1, 0, 0), (1, 1, 0), (1, 1, 1)) be two ordered bases of R3.
Define
T : R3 R3 by T (x) = x.
Then
Thus, we have
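The first example above translates directly into a computation: the j-th column of T [B1, B2] is obtained by solving for the coordinates of T(e_j) with respect to B2. A minimal numpy sketch, with T (x, y) = (x + y, x − y), B1 the standard basis, and B2 = ((1, 1), (1, −1)) as in that example:

import numpy as np

def T(v):
    x, y = v
    return np.array([x + y, x - y])

B1 = [np.array([1., 0.]), np.array([0., 1.])]        # standard ordered basis
B2_cols = np.column_stack([(1., 1.), (1., -1.)])      # vectors of B2 as columns

# columns of T[B1, B2] are [T(u)]_{B2} for u in B1
T_mat = np.column_stack([np.linalg.solve(B2_cols, T(u)) for u in B1])
print(T_mat)                                          # the 2 x 2 identity matrix, as computed above

v = np.array([3., -2.])
print(np.allclose(np.linalg.solve(B2_cols, T(v)), T_mat @ v))   # [T(v)]_{B2} = T[B1,B2] [v]_{B1}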
Exercise 4.2.4 Recall the space Pn(R) (the vector space of all polynomials of degree less than or equal to
n). We define a linear transformation D : Pn(R) → Pn(R) by
D(a0 + a1x + a2x2 + ⋯ + anxn) = a1 + 2a2x + ⋯ + nanxn−1.
That is, we multiply the matrix of the linear transformation with the coordinates [x]B1 , of the
vector x V to obtain the coordinates of the vector T (x) W.
TA (x) = Ax.
We sometimes write A for TA . Suppose that the standard bases for Rn and Rm are the ordered
bases B1 and B2 , respectively. Then observe that
T [B1 , B2 ] = A.
Definition 4.3.1 (Range and Null Space) Let T : V → W be a linear transformation. Define
1. R(T ) = {T (x) : x ∈ V }, and
2. N (T ) = {x ∈ V : T (x) = 0}.
Proposition 4.3.2 Let V and W be finite dimensional vector spaces and let T : V W be a linear trans-
formation. Suppose that (v1 , v2 , . . . , vn ) is an ordered basis of V. Then
2. (a) N (T ) is a subspace of V.
(b) dim(N (T )) ≤ dim(V ).
Proof. The results about R(T ) and N (T ) can be easily proved. We thus leave the proof for the
readers.
We now assume that T is one-one. We need to show that N (T ) = {0}.
Let u N (T ). Then by definition, T (u) = 0. Also for any linear transformation (see Proposition 4.1.3),
T (0) = 0. Thus T (u) = T (0). So, T is one-one implies u = 0. That is, N (T ) = {0}.
Let N (T ) = {0}. We need to show that T is one-one. So, let us assume that for some u, v
V, T (u) = T (v). Then, by linearity of T, T (u v) = 0. This implies, u v N (T ) = {0}. This in turn
implies u = v. Hence, T is one-one.
The other parts can be similarly proved.
Remark 4.3.3 1. The space R(T ) is called the range space of T and N (T ) is called the null
space of T.
3. ρ(T ) := dim(R(T )) is called the rank of the linear transformation T and ν(T ) := dim(N (T )) is called the
nullity of T.
Example 4.3.4 Determine the range and null space of the linear transformation T : R3 → R4 defined by
T (x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z).
Solution: By definition, R(T ) = L(T (1, 0, 0), T (0, 1, 0), T (0, 0, 1)). We therefore have
R(T ) = L( (1, 0, 1, 2), (−1, 1, 0, −5), (1, −1, 0, 5) )
      = L( (1, 0, 1, 2), (−1, 1, 0, −5) )
      = {α(1, 0, 1, 2) + β(−1, 1, 0, −5) : α, β ∈ R}
      = {(α − β, β, α, 2α − 5β) : α, β ∈ R}
      = {(x, y, z, w) ∈ R4 : x + y − z = 0, 5y − 2z + w = 0}.
Also, by definition,
N (T ) = {(x, y, z) ∈ R3 : T (x, y, z) = 0}
       = {(x, y, z) ∈ R3 : x − y + z = 0, y − z = 0, x = 0, 2x − 5y + 5z = 0}
       = {(x, y, z) ∈ R3 : y − z = 0, x = 0}
       = {(x, y, z) ∈ R3 : y = z, x = 0}
       = {(0, y, y) ∈ R3 : y arbitrary}
       = L((0, 1, 1)).
(A numerical check of the rank and nullity for this T is sketched below.)
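The check referred to above, assuming numpy: the matrix of T with respect to the standard bases has columns T(e1), T(e2), T(e3); its rank is dim R(T) and 3 minus its rank is dim N(T).

import numpy as np

A = np.array([[ 1., -1.,  1.],
              [ 0.,  1., -1.],
              [ 1.,  0.,  0.],
              [ 2., -5.,  5.]])          # rows read off from T(x, y, z) above

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)                      # 2 and 1, so rank + nullity = dim(R^3)
print(np.allclose(A @ np.array([0., 1., 1.]), 0))   # (0, 1, 1) indeed lies in the null space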
Exercise 4.3.5 1. Let T : V W be a linear transformation and let {T (v1 ), T (v2 ), . . . , T (vn )} be
linearly independent in R(T ). Prove that {v1 , v2 , . . . , vn } V is linearly independent.
2. Let T : R2 R3 be defined by
T (1, 0) = (1, 0, 0), T (0, 1) = (1, 0, 0).
Then the vectors (1, 0) and (0, 1) are linearly independent whereas T (1, 0) and T (0, 1) are linearly
dependent.
D : Pn(R) → Pn(R)
by
D(a0 + a1x + a2x2 + ⋯ + anxn) = a1 + 2a2x + ⋯ + nanxn−1.
Describe the null space and range space of D. Note that the range space is contained in the space
Pn−1(R).
5. Let T : R3 R3 be defined by
7. Determine a linear transformation T : R3 R3 whose range space is L{(1, 2, 0), (0, 1, 1), (1, 3, 1)}.
A → B1 → B2 → ⋯ → Bk−1 → Bk → B.
If the row space of B is in the row space of Bk, and the row space of Bl is in the row space of Bl−1 for
2 ≤ l ≤ k, then show that the row space of B is in the row space of A.
We now state and prove the rank-nullity Theorem. This result also follows from Proposition 4.3.2.
Theorem 4.3.6 (Rank Nullity Theorem) Let T : V → W be a linear transformation and let V be a finite
dimensional vector space. Then
dim(R(T )) + dim(N (T )) = dim(V ),
or equivalently, ρ(T ) + ν(T ) = dim(V ).
Proof. Let dim(V ) = n and dim(N (T )) = r. Let {u1, u2, . . . , ur} be a basis of N (T ) and extend it to a
basis {u1, u2, . . . , un} of V. Then R(T ) = L(T (ur+1), T (ur+2), . . . , T (un)), as T (ui) = 0 for 1 ≤ i ≤ r.
We now prove that the set {T (ur+1), T (ur+2), . . . , T (un)} is linearly independent. Suppose the set is
not linearly independent. Then there exist scalars αr+1, αr+2, . . . , αn, not all zero, such that
αr+1T (ur+1) + αr+2T (ur+2) + ⋯ + αnT (un) = 0.
That is,
T (αr+1ur+1 + αr+2ur+2 + ⋯ + αnun) = 0.
So, by definition of N (T ), αr+1ur+1 + ⋯ + αnun ∈ N (T ) = L(u1, . . . , ur), and hence there exist scalars
α1, . . . , αr such that αr+1ur+1 + ⋯ + αnun = α1u1 + ⋯ + αrur. That is,
α1u1 + ⋯ + αrur − αr+1ur+1 − ⋯ − αnun = 0.
But the set {u1, u2, . . . , un} is a basis of V and so linearly independent. Thus by definition of linear
independence
αi = 0 for all i, 1 ≤ i ≤ n,
contradicting the choice of the scalars. In other words, we have shown that {T (ur+1), T (ur+2), . . . , T (un)}
is a basis of R(T ). Hence,
dim(R(T )) + dim(N (T )) = (n − r) + r = n = dim(V ).
Using the Rank-nullity theorem, we give a short proof of the following result.
Corollary 4.3.7 Let T : V → V be a linear transformation on a finite dimensional vector space V. Then
the following conditions are equivalent: T is one-one; T is onto; T is invertible.
Proof. By Proposition 4.3.2, T is one-one if and only if N (T ) = {0}. By the rank-nullity Theorem
4.3.6, N (T ) = {0} is equivalent to the condition dim(R(T )) = dim(V ), which in turn is equivalent to T being onto.
By definition, T is invertible if T is one-one and onto. But we have shown that T is one-one if and
only if T is onto. Thus, we have the last equivalent condition.
Remark 4.3.8 Let V be a finite dimensional vector space and let T : V V be a linear transformation.
If either T is one-one or T is onto, then T is invertible.
The following are some of the consequences of the rank-nullity theorem. The proof is left as an
exercise for the reader.
1. Rank (A) = k.
(a) If V is finite dimensional then show that the null space and the range space of T are also finite
dimensional.
(b) If V and W are both finite dimensional then show that
i. if dim(V ) < dim(W ) then T is onto.
ii. if dim(V ) > dim(W ) then T is not one-one.
6. Let V be the complex vector space of all complex polynomials of degree at most n. Given k distinct
complex numbers z1 , z2 , . . . , zk , we define a linear transformation
T : V → Ck by T (P (z)) = (P (z1), P (z2), . . . , P (zk)).
Theorem 4.4.1 (Composition of Linear Transformations) Let V, W and Z be finite dimensional vector
spaces with ordered bases B1, B2, B3, respectively. Also, let T : V → W and S : W → Z be linear
transformations. Then the composition map S ◦ T : V → Z is a linear transformation and
(S ◦ T ) [B1, B3] = S[B2, B3] T [B1, B2].
Proof. Let B1 = (u1, . . . , un), B2 = (v1, . . . , vm) and B3 = (w1, . . . , wp). Now for 1 ≤ t ≤ n,
(S ◦ T )(ut) = S(T (ut)) = S( Σ_{j=1}^m (T [B1, B2])jt vj ) = Σ_{j=1}^m (T [B1, B2])jt S(vj)
            = Σ_{j=1}^m (T [B1, B2])jt Σ_{k=1}^p (S[B2, B3])kj wk
            = Σ_{k=1}^p ( Σ_{j=1}^m (S[B2, B3])kj (T [B1, B2])jt ) wk
            = Σ_{k=1}^p (S[B2, B3] T [B1, B2])kt wk.
So,
[(S ◦ T )(ut)]B3 = ( (S[B2, B3] T [B1, B2])1t, . . . , (S[B2, B3] T [B1, B2])pt )^t.
Hence,
(S ◦ T )[B1, B3] = [ [(S ◦ T )(u1)]B3, . . . , [(S ◦ T )(un)]B3 ] = S[B2, B3] T [B1, B2].
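A numerical check of this composition rule with randomly chosen bases and maps, assuming numpy. The helper matrix_of is our own name: it computes the matrix of a linear map with respect to given ordered bases by solving for coordinates, exactly as in Section 4.2.

import numpy as np

def matrix_of(f, basis_in, basis_out):
    Bout = np.column_stack(basis_out)
    return np.column_stack([np.linalg.solve(Bout, f(u)) for u in basis_in])

rng = np.random.default_rng(1)
M_T = rng.normal(size=(3, 3))          # T : R^3 -> R^3
M_S = rng.normal(size=(2, 3))          # S : R^3 -> R^2
T = lambda v: M_T @ v
S = lambda w: M_S @ w

B1 = list(rng.normal(size=(3, 3)))     # three random basis vectors of R^3
B2 = list(rng.normal(size=(3, 3)))
B3 = list(rng.normal(size=(2, 2)))     # two random basis vectors of R^2

lhs = matrix_of(lambda v: S(T(v)), B1, B3)
rhs = matrix_of(S, B2, B3) @ matrix_of(T, B1, B2)
print(np.allclose(lhs, rhs))           # True: (S o T)[B1, B3] = S[B2, B3] T[B1, B2]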
Proposition 4.4.2 Let V be a finite dimensional vector space with dim(V ) = n and let T, S : V → V be linear
transformations. Then
ν(T ) + ν(S) ≥ ν(T ◦ S) ≥ max{ν(T ), ν(S)}.
Proof. Observe that
ν(T ◦ S) ≥ ν(T ) ⟺ n − ν(T ◦ S) ≤ n − ν(T ) ⟺ ρ(T ◦ S) ≤ ρ(T ).
So, to complete the proof of the second inequality, we need to show that R(T ◦ S) ⊂ R(T ). This is true
as R(S) ⊂ V.
Or equivalently,
Σ_{i=1}^ℓ ci ui + Σ_{i=1}^k (−αi) vi = 0.
That is, the zero vector is a non-trivial linear combination of the basis vectors v1, v2, . . . , vk, u1, u2, . . . , uℓ
of N (T ◦ S). A contradiction.
Thus, the set {S(u1), S(u2), . . . , S(uℓ)} is a linearly independent subset of N (T ) and so ν(T ) ≥ ℓ.
Hence,
ν(T ◦ S) = k + ℓ ≤ ν(S) + ν(T ).
Recall from Theorem 4.1.8 that if T is an invertible linear transformation, then T −1 : V → V is a
linear transformation defined by T −1(u) = v whenever T (v) = u. We now state an important result
about the inverse of a linear transformation. The reader is required to supply the proof (use Theorem 4.4.1).
Theorem 4.4.3 (Inverse of a Linear Transformation) Let V be a finite dimensional vector space with
ordered bases B1 and B2 . Also let T : V V be an invertible linear transformation. Then the matrix of T
and T 1 are related by
T [B1, B2]^{−1} = T −1[B2, B1].
Exercise 4.4.4 For the linear transformations given below, find the matrix T [B, B].
1. Let B = (1, 1, 1), (1, 1, 1), (1, 1, 1) be an ordered basis of R3 . Define T : R3 R3 by T (1, 1, 1) =
(1, 1, 1), T (1, 1, 1) = (1, 1, 1), and T (1, 1, 1) = (1, 1, 1). Is T an invertible linear transforma-
tion? Give reasons.
2. Let B = (1, x, x2, x3) be an ordered basis of P3(R). Define T : P3(R) → P3(R) by
Theorem 4.4.5 (Change of Basis Theorem) Let V be a finite dimensional vector space with ordered bases
B1 = (u1, u2, . . . , un) and B2 = (v1, v2, . . . , vn). Suppose x ∈ V with [x]B1 = (α1, α2, . . . , αn)^t and
[x]B2 = (β1, β2, . . . , βn)^t. Then
[x]B1 = I[B2, B1] [x]B2.
Equivalently,
(α1, α2, . . . , αn)^t = [ a11 a12 ⋯ a1n ; a21 a22 ⋯ a2n ; ⋮ ; an1 an2 ⋯ ann ] (β1, β2, . . . , βn)^t.
Note: Observe that the identity linear transformation I : V → V defined by I(x) = x for every
x ∈ V is invertible and
I[B2, B1]^{−1} = I −1[B1, B2] = I[B1, B2].
Therefore, we also have
[x]B2 = I[B1, B2] [x]B1.
Let V be a finite dimensional vector space and let B1 and B2 be two ordered bases of V. Let T : V V
be a linear transformation. We are now in a position to relate the two matrices T [B1 , B1 ] and T [B2 , B2 ].
Theorem 4.4.6 Let V be a finite dimensional vector space and let B1 = (u1 , u2 , . . . , un ) and B2 =
(v1 , v2 , . . . , vn ) be two ordered bases of V. Let T : V V be a linear transformation with B = T [B1 , B1 ]
and C = T [B2 , B2 ] as matrix representations of T in bases B1 and B2 .
Also, let A = [aij] = I[B2, B1] be the matrix of the identity linear transformation with respect to the
bases B1 and B2. Then BA = AC, or equivalently B = ACA^{−1}.
Proof. We compute [T (vj)]B1, for 1 ≤ j ≤ n, in two ways, using Theorem 4.2.1.
On the one hand, since vj = Σ_{k=1}^n akjuk and B = T [B1, B1],
[T (vj)]B1 = ( Σ_{k=1}^n b1kakj , Σ_{k=1}^n b2kakj , . . . , Σ_{k=1}^n bnkakj )^t = B (a1j, a2j, . . . , anj)^t,
and on the other hand, since T (vj) = Σ_{k=1}^n ckjvk and A = I[B2, B1],
[T (vj)]B1 = ( Σ_{k=1}^n a1kckj , Σ_{k=1}^n a2kckj , . . . , Σ_{k=1}^n ankckj )^t = A (c1j, c2j, . . . , cnj)^t.
Comparing the two expressions for every j, 1 ≤ j ≤ n, gives BA = AC.
Let V be a vector space with dim(V ) = n, and let T : V → V be a linear transformation. Then for
each ordered basis B of V, we get an n × n matrix T [B, B]. Also, we know that for any vector space we
have infinitely many choices for an ordered basis. So, as we change the ordered basis, the matrix of
the linear transformation changes. Theorem 4.4.6 tells us that all these matrices are related.
Now, let A and B be two n × n matrices such that P −1AP = B for some invertible matrix P. Recall
the linear transformation TA : Rn → Rn defined by TA(x) = Ax for all x ∈ Rn. Then we have seen that
if the standard basis of Rn is the ordered basis B, then A = TA[B, B]. Since P is an invertible matrix,
its columns are linearly independent and hence we can take its columns as an ordered basis B1. Then
note that B = TA[B1, B1]. The above observations lead to the following remark and the definition.
Remark 4.4.7 The identity (4.4.3) shows how the matrix representation of a linear transformation T
changes if the ordered basis used to compute the matrix representation is changed. Hence, the matrix
I[B1 , B2 ] is called the B1 : B2 change of basis matrix.
Definition 4.4.8 (Similar Matrices) Two square matrices B and C of the same order are said to be similar
if there exists a non-singular matrix P such that B = P CP −1 or equivalently BP = P C.
{S −1AS : S is an n × n invertible matrix}
is the set of all matrices that are similar to the given matrix A. Therefore, similar matrices are just
different matrix representations of a single linear transformation.
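This can be checked numerically, assuming numpy: with A = T_A[B, B] in the standard basis and the columns of an invertible P taken as the ordered basis B1, the matrix of the same map in the new basis is P^{-1} A P. The matrices below are illustrative, not from the text.

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))            # T_A in the standard ordered basis
P = rng.normal(size=(3, 3))            # columns form the ordered basis B1 (invertible with prob. 1)

B = np.linalg.inv(P) @ A @ P           # T_A[B1, B1]

# coordinates of T_A(x) in B1 equal B times the coordinates of x in B1
x = rng.normal(size=3)
print(np.allclose(np.linalg.solve(P, A @ x), B @ np.linalg.solve(P, x)))   # True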
Then
[1 + x − x2]B1 = 0 · 1 + 2 · (1 + x) + (−1) · (1 + x + x2) = (0, 2, −1)^t,
[1 + 2x + x2]B1 = (−1) · 1 + 1 · (1 + x) + 1 · (1 + x + x2) = (−1, 1, 1)^t, and
[2 + x + x2]B1 = 1 · 1 + 0 · (1 + x) + 1 · (1 + x + x2) = (1, 0, 1)^t.
Therefore,
I[B2, B1] = [ [I(1 + x − x2)]B1, [I(1 + 2x + x2)]B1, [I(2 + x + x2)]B1 ]
          = [ [1 + x − x2]B1, [1 + 2x + x2]B1, [2 + x + x2]B1 ]
          = [ 0 −1 1 ; 2 1 0 ; −1 1 1 ].
Then
T [B1, B1] = [ 0 0 2 ; 1 1 4 ; 0 1 0 ]  and  T [B2, B2] = [ 4/5 1 8/5 ; 2/5 2 9/5 ; 8/5 0 1/5 ].
Find I[B1, B2] and verify
I[B1, B2] T [B1, B1] I[B2, B1] = T [B2, B2].
Check that
T [B1, B1] I[B2, B1] = I[B2, B1] T [B2, B2] = [ 2 2 2 ; 2 4 5 ; 2 1 0 ].
Exercise 4.4.11 1. Let V be an n-dimensional vector space and let T : V V be a linear transformation.
Suppose T has the property that T n1 6= 0 but T n = 0.
(a) Then prove that there exists a vector u ∈ V such that the set
{u, T (u), T 2(u), . . . , T n−1(u)}
is a basis of V.
(b) Let B = (u, T (u), T 2(u), . . . , T n−1(u)). Then prove that
T [B, B] = [ 0 0 ⋯ 0 0 ; 1 0 ⋯ 0 0 ; 0 1 ⋯ 0 0 ; ⋮ ; 0 0 ⋯ 1 0 ].
(c) Let A be an n × n matrix with the property that An−1 ≠ 0 but An = 0. Then prove that A is
similar to the matrix given above.
Chapter 5
Inner Product Spaces
We had learned that given vectors ~i and ~j (which are at an angle of 90°) in a plane, any vector in the
plane is a linear combination of the vectors ~i and ~j. In this chapter, we investigate a method by which
any basis of a finite dimensional vector space can be transformed into another basis in such a way that the vectors
in the new basis are at an angle of 90° to each other. To do this, we start by defining a notion of inner
product (dot product) in a vector space. This helps us in finding out whether two vectors are at 90°
or not.
Recall that the dot product in R2 and R3 satisfies
x · (y + z) = x · y + x · z,  x · y = y · x,  and  x · x ≥ 0,
and x · x = 0 if and only if x = 0. Thus, we are motivated to define an inner product on an arbitrary
vector space.
5.1 Definition and Basic Properties
Definition 5.1.1 (Inner Product) Let V (F) be a vector space over F. An inner product over V (F), denoted
by ⟨ , ⟩, is a map
⟨ , ⟩ : V × V → F
such that for u, v, w ∈ V and a, b ∈ F
1. ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩,
2. ⟨u, v⟩ equals the complex conjugate of ⟨v, u⟩ (so ⟨u, v⟩ = ⟨v, u⟩ when F = R), and
3. ⟨u, u⟩ ≥ 0 for all u ∈ V, with equality if and only if u = 0.
Definition 5.1.2 (Inner Product Space) Let V be a vector space with an inner product h , i. Then
(V, h , i) is called an inner product space, in short denoted by ips.
Example 5.1.3 The first two examples given below are called the standard inner product or the dot
product on Rn and Cn, respectively.
1. Let V = Rn be the real vector space of dimension n. Given two vectors u = (u1, u2, . . . , un) and
v = (v1, v2, . . . , vn) of V, we define
⟨u, v⟩ = u1v1 + u2v2 + ⋯ + unvn = uv^t.
2. Let V = Cn. For u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) in Cn,
⟨u, v⟩ = u1v̄1 + u2v̄2 + ⋯ + unv̄n = uv∗
is an inner product.
" #
4 1
3. Let V = R2 and let A = . Define hx, yi = xAyt . Check that h , i is an inner product.
1 2
Hint: Note that xAyt = 4x1 y1 x1 y2 x2 y1 + 2x2 y2 .
4. Let x = (x1, x2, x3), y = (y1, y2, y3) ∈ R3. Show that ⟨x, y⟩ = 10x1y1 + 3x1y2 + 3x2y1 + 2x2y2 +
x2y3 + x3y2 + x3y3 is an inner product in R3(R).
5. Consider the real vector space R2 . In this example, we define three products that satisfy two conditions
out of the three conditions for an inner product. Hence the three products are not inner products.
(a) Define hx, yi = h(x1 , x2 ), (y1 , y2 )i = x1 y1 . Then it is easy to verify that the third condition is
not valid whereas the first two conditions are valid.
(b) Define hx, yi = h(x1 , x2 ), (y1 , y2 )i = x21 + y12 + x22 + y22 . Then it is easy to verify that the first
condition is not valid whereas the second and third conditions are valid.
(c) Define hx, yi = h(x1 , x2 ), (y1 , y2 )i = x1 y13 + x2 y23 . Then it is easy to verify that the second
condition is not valid whereas the first and third conditions are valid.
Remark 5.1.4 Note that in parts 1 and 2 of Example 5.1.3, the inner products are uv^t and uv∗,
respectively. This occurs because the vectors u and v are row vectors. In general, u and v are taken as
column vectors and hence one uses the notation u^tv or u∗v.
Exercise 5.1.5 Verify that inner products defined in parts 3 and 4 of Example 5.1.3, are indeed inner products.
Definition 5.1.6 (Length/Norm of a Vector) For u ∈ V, we define the length (norm) of u, denoted ‖u‖,
by ‖u‖ = √⟨u, u⟩, the positive square root.
A very useful and a fundamental inequality concerning the inner product is due to Cauchy and
Schwartz. The next theorem gives the statement and a proof of this inequality.
Theorem 5.1.7 (Cauchy-Schwartz inequality) Let V (F) be an inner product space. Then for any u, v ∈ V,
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
The equality holds if and only if the vectors u and v are linearly dependent. Further, if u ≠ 0, then in that case
v = ⟨v, u/‖u‖⟩ (u/‖u‖).
Proof. If u = 0, then the inequality holds trivially. Let u ≠ 0. Note that ⟨λu + v, λu + v⟩ ≥ 0 for all λ ∈ F.
In particular, for λ = −⟨v, u⟩/‖u‖², we get
0 ≤ ⟨λu + v, λu + v⟩ = λλ̄‖u‖² + λ⟨u, v⟩ + λ̄⟨v, u⟩ + ‖v‖²
  = (|⟨v, u⟩|²/‖u‖⁴)‖u‖² − (⟨v, u⟩/‖u‖²)⟨u, v⟩ − (⟨u, v⟩/‖u‖²)⟨v, u⟩ + ‖v‖²
  = ‖v‖² − |⟨v, u⟩|²/‖u‖².
That is, |⟨v, u⟩|² ≤ ‖u‖² ‖v‖², which proves the inequality.
Definition 5.1.8 (Angle between two vectors) Let V be a real vector space. Then for every u, v ∈ V, by
the Cauchy-Schwartz inequality, we have
−1 ≤ ⟨u, v⟩ / (‖u‖ ‖v‖) ≤ 1.
We know that cos : [0, π] → [−1, 1] is a one-one and onto function. Therefore, for every real number
⟨u, v⟩ / (‖u‖ ‖v‖), there exists a unique θ, 0 ≤ θ ≤ π, such that
cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖).
1. The real number θ with 0 ≤ θ ≤ π satisfying cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖) is called the angle between the two
vectors u and v in V.
Exercise 5.1.9 1. Let {e1 , e2 , . . . , en } be the standard basis of Rn . Then prove that with respect to the
standard inner product on Rn , the vectors ei satisfy the following:
(a) Find the angle between the vectors e1 = (1, 0)t and e2 = (0, 1)t .
(b) Let u = (1, 0)t . Find v R2 such that hv, ui = 0.
(c) Find two vectors x, y R2 , such that kxk = kyk = 1 and hx, yi = 0.
4. Let V be a complex vector space with dim(V ) = n. Fix an ordered basis B = (u1, u2, . . . , un). Define
a map
⟨ , ⟩ : V × V → C by ⟨u, v⟩ = Σ_{i=1}^n ai b̄i
whenever [u]B = (a1, a2, . . . , an)^t and [v]B = (b1, b2, . . . , bn)^t. Show that the above defined map is
indeed an inner product.
is an inner product in R3 (R). With respect to this inner product, find the angle between the vectors
(1, 1, 1) and (2, 5, 2).
6. Consider the set Mn×n(R) of all real square matrices of order n. For A, B ∈ Mn×n(R) we define
⟨A, B⟩ = tr(AB^t). Then
⟨A + B, C⟩ = tr((A + B)C^t) = tr(AC^t) + tr(BC^t) = ⟨A, C⟩ + ⟨B, C⟩.
Also, ⟨A, A⟩ = tr(AA^t) = Σ_{i,j} a²ij, and therefore ⟨A, A⟩ > 0 for all non-zero matrices A. So, it is clear that
⟨A, B⟩ is an inner product on Mn×n(R).
7. Let V be the real vector space of all continuous functions with domain [−1, 1], that is, V = C[−1, 1].
Then show that V is an inner product space with inner product ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx.
For different values of m and n, find the angle between the functions cos(mx) and sin(nx).
10. Let x, y Rn . Observe that hx, yi = hy, xi. Hence or otherwise prove the following:
Remark 5.1.10 i. Suppose the norm of a vector is given. Then, the polarisation identity
⟨x, y⟩ = ¼ ( ‖x + y‖² − ‖x − y‖² )   (for a real inner product space)
can be used to define an inner product.
ii. Observe that if hx, yi = 0 then the parallelogram spanned by the vectors x and y is a
rectangle. The above equality tells us that the lengths of the two diagonals are equal.
12. Let V be an n-dimensional inner product space, with an inner product h , i. Let u V be a fixed
vector with kuk = 1. Then give reasons for the following statements.
Theorem 5.1.11 Let V be an inner product space. Let {u1, u2, . . . , un} be a set of non-zero, mutually
orthogonal vectors of V.
1. Then the set {u1, u2, . . . , un} is linearly independent.
2. ‖ Σ_{i=1}^n αiui ‖² = Σ_{i=1}^n |αi|² ‖ui‖², for all αi ∈ F, 1 ≤ i ≤ n.
3. Let dim(V ) = n and also let ‖ui‖ = 1 for i = 1, 2, . . . , n. Then for any v ∈ V,
v = Σ_{i=1}^n ⟨v, ui⟩ ui.
Proof. Consider the set of non-zero, mutually orthogonal vectors {u1, u2, . . . , un}. Suppose there exist
scalars c1, c2, . . . , cn, not all zero, such that
c1u1 + c2u2 + ⋯ + cnun = 0.
Then, for 1 ≤ i ≤ n,
0 = ⟨0, ui⟩ = ⟨c1u1 + c2u2 + ⋯ + cnun, ui⟩ = ci⟨ui, ui⟩,
as ⟨uj, ui⟩ = 0 for all j ≠ i. Since ui ≠ 0, ⟨ui, ui⟩ ≠ 0 and hence ci = 0 for every i. This gives a contradiction
to our assumption that some of the ci's are non-zero. This establishes the linear independence of a set of
non-zero, mutually orthogonal vectors.
For the second part, using ⟨ui, uj⟩ = 0 if i ≠ j and ⟨ui, ui⟩ = ‖ui‖² if i = j, for 1 ≤ i, j ≤ n, we have
‖ Σ_{i=1}^n αiui ‖² = ⟨ Σ_{i=1}^n αiui , Σ_{j=1}^n αjuj ⟩ = Σ_{i=1}^n Σ_{j=1}^n αi ᾱj ⟨ui, uj⟩
                   = Σ_{i=1}^n αi ᾱi ⟨ui, ui⟩ = Σ_{i=1}^n |αi|² ‖ui‖².
For the third part, observe from the first part the linear independence of the non-zero mutually
orthogonal vectors u1, u2, . . . , un. Since dim(V ) = n, they form a basis of V. Thus, for every vector
v ∈ V, there exist scalars αi, 1 ≤ i ≤ n, such that v = Σ_{i=1}^n αiui. Hence,
⟨v, uj⟩ = ⟨ Σ_{i=1}^n αiui , uj ⟩ = Σ_{i=1}^n αi⟨ui, uj⟩ = αj.
Definition 5.1.12 (Orthonormal Set) Let V be an inner product space. A set of non-zero, mutually or-
thogonal vectors {v1 , v2 , . . . , vn } in V is called an orthonormal set if kvi k = 1 for i = 1, 2, . . . , n.
If the set {v1 , v2 , . . . , vn } is also a basis of V, then the set of vectors {v1 , v2 , . . . , vn } is called an
orthonormal basis of V.
Example 5.1.13 1. Consider the vector space R2 with the standard inner product. Then the standard
ordered basis B = ((1, 0), (0, 1)) is an orthonormal set. Also, the basis B1 = ( (1/√2)(1, 1), (1/√2)(1, −1) )
is an orthonormal set.
2. Let Rn be endowed with the standard inner product. Then by Exercise 5.1.9.1, the standard ordered
basis (e1 , e2 , . . . , en ) is an orthonormal set.
In view of Theorem 5.1.11, we inquire into the question of extracting an orthonormal basis from
a given basis. In the next section, we describe a process (called the Gram-Schmidt Orthogonalisation
process) that generates an orthonormal set from a given set containing finitely many vectors.
Remark 5.1.14 The last part of the above theorem can be rephrased as suppose {v1 , v2 , . . . , vn } is
an orthonormal basis of an inner product space V. Then for each u V the numbers hu, vi i for 1 i n
are the coordinates of u with respect to the above basis.
That is, let B = (v1, v2, . . . , vn) be an ordered basis. Then for any u ∈ V,
[u]B = (⟨u, v1⟩, ⟨u, v2⟩, . . . , ⟨u, vn⟩)^t.
[Figure: a vector v and a vector u, with v decomposed into its projection (⟨v, u⟩/‖u‖²) u along u and a component orthogonal to u.]
5.2 Gram-Schmidt Orthogonalisation Process
Theorem 5.2.1 (Gram-Schmidt Orthogonalization Process) Let V be an inner product space. Suppose
{u1, u2, . . . , un} is a set of linearly independent vectors of V. Then there exists a set {v1, v2, . . . , vn} of
vectors of V satisfying the following:
1. ‖vi‖ = 1 for 1 ≤ i ≤ n,
2. ⟨vi, vj⟩ = 0 for 1 ≤ i ≠ j ≤ n, and
3. L(v1, v2, . . . , vi) = L(u1, u2, . . . , ui) for 1 ≤ i ≤ n.
Proof. Set w1 = u1 and, for 2 ≤ i ≤ n,
wi = ui − Σ_{j=1}^{i−1} ⟨ui, vj⟩ vj,
and define
vi = wi / ‖wi‖.
We prove the theorem by induction on n, the number of linearly independent vectors.
For n = 1, we have v1 = u1/‖u1‖. Since u1 ≠ 0, v1 ≠ 0 and
‖v1‖² = ⟨v1, v1⟩ = ⟨ u1/‖u1‖ , u1/‖u1‖ ⟩ = ⟨u1, u1⟩/‖u1‖² = 1.
Hence, the result holds for n = 1.
Let the result hold for all k ≤ n − 1. That is, suppose we are given any set of k, 1 ≤ k ≤ n − 1,
linearly independent vectors {u1, u2, . . . , uk} of V. Then by the inductive assumption, there exists a set
{v1, v2, . . . , vk} of vectors satisfying
1. ‖vi‖ = 1 for 1 ≤ i ≤ k,
2. ⟨vi, vj⟩ = 0 for 1 ≤ i ≠ j ≤ k, and
3. L(v1, . . . , vi) = L(u1, . . . , ui) for 1 ≤ i ≤ k.
Now, let us assume that we are given a set of n linearly independent vectors {u1, u2, . . . , un} of V.
Then by the inductive assumption, we already have vectors v1, v2, . . . , vn−1 satisfying
1. ‖vi‖ = 1 for 1 ≤ i ≤ n − 1,
2. ⟨vi, vj⟩ = 0 for 1 ≤ i ≠ j ≤ n − 1, and
3. L(v1, . . . , vi) = L(u1, . . . , ui) for 1 ≤ i ≤ n − 1.
So, by (5.2.2),
un = (α1 + ⟨un, v1⟩) v1 + (α2 + ⟨un, v2⟩) v2 + ⋯ + (αn−1 + ⟨un, vn−1⟩) vn−1.
That is, un ∈ L(v1, . . . , vn−1) = L(u1, . . . , un−1). This gives a contradiction to the given assumption that
the set of vectors {u1, u2, . . . , un} is linearly independent.
So, wn ≠ 0. Define vn = wn/‖wn‖. Then ‖vn‖ = 1. Also, it can easily be verified that ⟨vn, vi⟩ = 0 for
1 ≤ i ≤ n − 1. Hence, by the principle of mathematical induction, the proof of the theorem is complete.
Example 5.2.2 Let {(1, 1, 1, −1), (1, 0, 1, 0), (0, 1, 0, 1)} be a linearly independent set in R4(R). Find an
orthonormal set {v1, v2, v3} such that L( (1, 1, 1, −1), (1, 0, 1, 0), (0, 1, 0, 1) ) = L(v1, v2, v3).
Solution: Let u1 = (1, 0, 1, 0). Define v1 = (1, 0, 1, 0)/√2. Let u2 = (0, 1, 0, 1). Then
w2 = (0, 1, 0, 1) − ⟨(0, 1, 0, 1), v1⟩ v1 = (0, 1, 0, 1).
Hence, v2 = (0, 1, 0, 1)/√2. Let u3 = (1, 1, 1, −1). Then
w3 = u3 − ⟨u3, v1⟩ v1 − ⟨u3, v2⟩ v2 = (0, 1, 0, −1),
and v3 = (0, 1, 0, −1)/√2.
L(v1 , v2 , . . . , vi ) = L(u1 , u2 , . . . , ui ).
2. Suppose we are given a set of n vectors {u1, u2, . . . , un} of V that are linearly dependent. Then
by Corollary 3.2.5, there exists a smallest k, 2 ≤ k ≤ n, such that L(u1, . . . , uk) = L(u1, . . . , uk−1),
while the set {u1, u2, . . . , uk−1} is linearly independent (use Corollary 3.2.5). So, by Theorem 5.2.1,
there exists an orthonormal set {v1, v2, . . . , vk−1} with L(v1, . . . , vk−1) = L(u1, . . . , uk−1). Since
uk ∈ L(u1, . . . , uk−1), by definition of wk we get wk = 0.
Therefore, in this case, we can continue with the Gram-Schmidt process by replacing uk by uk+1 .
3. Let S be a countably infinite set of linearly independent vectors. Then one can apply the Gram-
Schmidt process to get a countably infinite orthonormal set.
4. Let {v1, v2, . . . , vk} be an orthonormal subset of Rn with respect to the standard inner product, and let
B = (e1, . . . , en) be the standard ordered basis. Write
[vi]B = (α1i, α2i, . . . , αni)^t.
Then A = [ [v1]B, [v2]B, . . . , [vk]B ] = [αij] is an n × k matrix.
Also, observe that the conditions ‖vi‖ = 1 and ⟨vi, vj⟩ = 0 for 1 ≤ i ≠ j ≤ n imply that
1 = ‖vi‖ = ‖vi‖² = ⟨vi, vi⟩ = Σ_{j=1}^n α²ji, and 0 = ⟨vi, vj⟩ = Σ_{s=1}^n αsiαsj.   (5.2.3)
Note that
A^tA = [ v1^t ; v2^t ; ⋮ ; vk^t ] [v1, v2, . . . , vk]
     = [ ‖v1‖² ⟨v1, v2⟩ ⋯ ⟨v1, vk⟩ ; ⟨v2, v1⟩ ‖v2‖² ⋯ ⟨v2, vk⟩ ; ⋮ ; ⟨vk, v1⟩ ⟨vk, v2⟩ ⋯ ‖vk‖² ]
     = [ 1 0 ⋯ 0 ; 0 1 ⋯ 0 ; ⋮ ; 0 0 ⋯ 1 ] = Ik.
The reader may have noticed that, for a square matrix A, the relation A^tA = I says that the inverse of A is its
transpose. Such matrices are called orthogonal matrices and they have a special role to play.
Exercise 5.2.5 1. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are
both orthogonal matrices.
where
ui = Σ_{j=1}^n ajiej, for 1 ≤ i ≤ n.
4. Let A be an n × n upper triangular matrix. If A is also an orthogonal matrix, then prove that A = In.
Theorem 5.2.6 (QR Decomposition) Let A be a square matrix of order n. Then there exist matrices Q
and R such that Q is orthogonal and R is upper triangular with A = QR.
In case, A is non-singular, the diagonal entries of R can be chosen to be positive. Also, in this case, the
decomposition is unique.
Proof. We prove the theorem when A is non-singular. The proof for the singular case is left as an
exercise.
Let the columns of A be x1, x2, . . . , xn. The Gram-Schmidt orthogonalisation process applied to the
vectors x1, x2, . . . , xn gives vectors u1, u2, . . . , un satisfying
L(u1, u2, . . . , ui) = L(x1, x2, . . . , xi), ‖ui‖ = 1, ⟨ui, uj⟩ = 0, for 1 ≤ i ≠ j ≤ n.   (5.2.4)
Now, consider the ordered basis B = (u1, u2, . . . , un). From (5.2.4), for 1 ≤ i ≤ n, we have L(u1, u2, . . . , ui) =
L(x1, x2, . . . , xi). So, we can find scalars αji, 1 ≤ j ≤ i, such that
xi = α1iu1 + α2iu2 + ⋯ + αiiui, that is, [xi]B = (α1i, . . . , αii, 0, . . . , 0)^t.   (5.2.5)
Letting Q = [u1, u2, . . . , un] and R = [αij] (with αij = 0 for i > j), we thus see that A = QR, where Q is an
orthogonal matrix (see Remark 5.2.3.4) and R is an upper triangular matrix.
The proof doesn't guarantee that αii is positive for 1 ≤ i ≤ n. But this can be achieved by replacing the
vector ui by −ui whenever αii is negative.
Uniqueness: suppose Q1R1 = Q2R2. Then Q2^{−1}Q1 = R2R1^{−1}. Observe the following properties of
upper triangular matrices:
1. the inverse of an upper triangular matrix is also an upper triangular matrix, and
2. the product of two upper triangular matrices is again upper triangular.
Thus the matrix R2R1^{−1} is an upper triangular matrix. Also, by Exercise 5.2.5.1, the matrix Q2^{−1}Q1 is
an orthogonal matrix. Hence, by Exercise 5.2.5.4, R2R1^{−1} = In. So, R2 = R1 and therefore Q2 = Q1.
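numpy's built-in QR routine can be used to check the statement on a concrete matrix. Note that np.linalg.qr does not itself enforce positive diagonal entries in R, so the sketch below flips signs column by column, exactly as in the proof (replacing ui by −ui where needed).

import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4))                 # non-singular with probability one
Q, R = np.linalg.qr(A)

signs = np.sign(np.diag(R))                 # make the diagonal of R positive
Q, R = Q * signs, signs[:, None] * R

print(np.allclose(A, Q @ R))                # A = QR
print(np.allclose(Q.T @ Q, np.eye(4)))      # Q is orthogonal
print(np.allclose(R, np.triu(R)))           # R is upper triangular (with positive diagonal)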
Suppose we have a matrix A = [x1, x2, . . . , xk] of dimension n × k with rank(A) = r. Then by Remark
5.2.3.2, the application of the Gram-Schmidt orthogonalisation process yields a set {u1, u2, . . . , ur} of
orthonormal vectors of Rn with L(u1, . . . , ur) = L(x1, . . . , xk).
Hence, proceeding on the lines of the above theorem, we have the following result.
and R the associated 4 × 4 upper triangular matrix.
Let u1 = (1, 1, 1, 1). Define v1 = u1/2. Let u2 = (1, 0, 1, 0). Then
w2 = (1, 0, 1, 0) − ⟨u2, v1⟩ v1 = (1, 0, 1, 0) − v1 = ½(1, −1, 1, −1).
Hence, v2 = ½(1, −1, 1, −1). Let u3 = (1, 2, 1, 2). Then
w3 = u3 − ⟨u3, v1⟩ v1 − ⟨u3, v2⟩ v2 = u3 − 3v1 + v2 = 0.
Exercise 5.2.9 1. Determine an orthonormal basis of R4 containing the vectors (1, 2, 1, 3) and (2, 1, 3, 1).
2. Prove that the polynomials 1, x, (3/2)x² − 1/2, (5/2)x³ − (3/2)x form an orthogonal set of functions in the
inner product space C[−1, 1] with the inner product ⟨f, g⟩ = ∫_{−1}^{1} f(t)g(t) dt. Find the corresponding
functions f(x) with ‖f(x)‖ = 1.
3. Consider the vector space C[−π, π] with the standard inner product defined in the above exercise. Find
an orthonormal basis for the subspace spanned by x, sin x and sin(x + 1).
6. Let S = {(1, 1, 1, 1), (1, 2, 0, 1), (2, 2, 4, 0)}. Find an orthonormal basis of L(S) in R4 .
7. Let Rn be endowed with the standard inner product. Suppose we have a vector xt = (x1 , x2 , . . . , xn )
Rn , with kxk = 1. Then prove the following:
(a) the set {x} can always be extended to form an orthonormal basis of Rn .
(b) Let this basis be {x, x2, . . . , xn}. Suppose B = (e1, e2, . . . , en) is the standard basis of Rn. Let
A = [ [x]B, [x2]B, . . . , [xn]B ]. Then prove that A is an orthogonal matrix.
8. Let v, w ∈ Rn, n ≥ 1, with ‖v‖ = ‖w‖ = 1. Prove that there exists an orthogonal matrix A such that
Av = w. Prove also that A can be chosen such that det(A) = 1.
5.3 Orthogonal Projections and Applications
Recall that, given a subspace W of a finite dimensional vector space V, there exists a subspace W0 of V such that
W + W0 = V and W ∩ W0 = {0}.
The subspace W0 is called the complementary subspace of W in V. We now define an important class of
linear transformations on an inner product space, called orthogonal projections.
Definition 5.3.1 (Projection Operator) Let V be an n-dimensional vector space and let W be a k-
dimensional subspace of V. Let W0 be a complement of W in V. Then we define a map PW : V → V
by
PW (v) = w, whenever v = w + w0, w ∈ W, w0 ∈ W0.
The map PW is called the projection of V onto W along W0.
Remark 5.3.2 The map P is well defined due to the following reasons:
The next proposition states that the map defined above is a linear transformation from V to V. We
omit the proof, as it follows directly from the above remarks.
1. Let W0 = L( (1, 2, 2) ). Then W ∩ W0 = {0} and W + W0 = R3. Also, for any vector (x, y, z) ∈ R3,
note that (x, y, z) = w + w0 with w ∈ W and w0 ∈ W0. So, by definition,
PW ((x, y, z)) = (z − y, 2z − 2x − y, 3z − 2x − 2y) = [ 0 −1 1 ; −2 −1 2 ; −2 −2 3 ] (x, y, z)^t.
2. Let W0 = L( (1, 1, 1) ). Then W ∩ W0 = {0} and W + W0 = R3. Also, for any vector (x, y, z) ∈ R3,
note that (x, y, z) = w + w0 with w ∈ W and w0 ∈ W0. So, by definition,
PW ( (x, y, z) ) = (z − y, z − x, 2z − x − y) = [ 0 −1 1 ; −1 0 1 ; −1 −1 2 ] (x, y, z)^t.
2. Observe that for a fixed subspace W, there are infinitely many choices for the complementary
subspace W0 .
3. It will be shown later that if V is an inner product space with inner product ⟨ , ⟩, then the subspace
W0 is unique if we put the additional condition that W0 = {v ∈ V : ⟨v, w⟩ = 0 for all w ∈ W }.
Exercise 5.3.7 1. Let A be an n × n real matrix with A² = A. Consider the linear transformation
TA : Rn → Rn defined by TA(v) = Av for all v ∈ Rn. Prove that
2. Find all 2 × 2 real matrices A such that A² = A. Hence or otherwise, determine all projection operators
of R2.
The next result uses the Gram-Schmidt orthogonalisation process to get the complementary subspace
in such a way that the vectors in different subspaces are orthogonal.
Definition 5.3.8 (Orthogonal Subspace of a Set) Let V be an inner product space. Let S be a non-empty
subset of V. We define
S⊥ = {v ∈ V : ⟨v, s⟩ = 0 for all s ∈ S}.
1. Let S = {0}. Then S⊥ = R.
2. Let S = R. Then S⊥ = {0}.
Theorem 5.3.10 Let S be a subset of a finite dimensional inner product space V, with inner product ⟨ , ⟩.
Then
1. S⊥ is a subspace of V.
2. Let S be equal to a subspace W. Then the subspaces W and W⊥ are complementary. Moreover, if
w ∈ W and u ∈ W⊥, then ⟨u, w⟩ = 0 and V = W + W⊥.
Proof. We leave the proof of the first part to the reader. The proof of the second part is as follows:
let dim(V ) = n and dim(W ) = k. Let {w1, w2, . . . , wk} be a basis of W. By the Gram-Schmidt orthogo-
nalisation process, we get an orthonormal basis, say {v1, v2, . . . , vk}, of W. Then, for any v ∈ V,
v − Σ_{i=1}^k ⟨v, vi⟩ vi ∈ W⊥.
Definition 5.3.11 (Orthogonal Complement) Let W be a subspace of a vector space V. The subspace
W⊥ is called the orthogonal complement of W in V.
Exercise 5.3.12 1. Let W = {(x, y, z) ∈ R3 : x + y + z = 0}. Find W⊥ with respect to the standard
inner product.
3. Let V be the vector space of all n × n real matrices. Then Exercise 5.1.9.6 shows that V is a real
inner product space with the inner product given by ⟨A, B⟩ = tr(AB^t). If W is the subspace given by
W = {A ∈ V : A^t = A}, determine W⊥.
Definition 5.3.13 (Orthogonal Projection) Let W be a subspace of a finite dimensional inner product
space V, with inner product ⟨ , ⟩. Let W⊥ be the orthogonal complement of W in V. Define PW : V → V
by
PW (v) = w, where v = w + u, with w ∈ W and u ∈ W⊥.
Definition 5.3.14 (Self-Adjoint Transformation/Operator) Let V be an inner product space with inner
product h , i. A linear transformation T : V V is called a self-adjoint operator if hT (v), ui = hv, T (u)i
for every u, v V.
Example 5.3.15 1. Let A be an n n real symmetric matrix. That is, At = A. Then show that the linear
transformation TA : Rn Rn defined by TA (x) = Ax for every xt Rn is a self-adjoint operator.
Solution: By definition, for every xt , yt Rn ,
Remark 5.3.16 1. By Proposition 5.3.3, the map PW defined above is a linear transformation.
2. PW² = PW, (I − PW)PW = 0 = PW(I − PW).
4. Let v ∈ V and w ∈ W. Then PW(w) = w for all w ∈ W. Therefore, using Remarks 5.3.16.2 and 5.3.16.3, we get
⟨v − PW(v), w⟩ = ⟨(I − PW)(v), PW(w)⟩ = ⟨PW(I − PW)(v), w⟩
= ⟨0(v), w⟩ = ⟨0, w⟩ = 0
for every w W.
Therefore,
‖v − w‖ ≥ ‖v − PW(v)‖,
and the equality holds if and only if w = PW(v). Since PW(v) ∈ W, we see that PW(v) is the vector in W nearest to v. This can also be stated as: the vector PW(v) solves the following minimisation problem:
inf_{w ∈ W} ‖v − w‖ = ‖v − PW(v)‖.
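The minimisation property of PW(v) is easy to check numerically. The following sketch (Python with NumPy; the matrix B spanning W and the vector v are illustrative choices, not taken from these notes) builds an orthonormal basis of a subspace W of R³, forms the projection matrix and verifies that PW(v) is the point of W closest to v.

import numpy as np

# W = column space of B, a plane in R^3 (illustrative choice).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Q, _ = np.linalg.qr(B)          # orthonormal basis of W (Gram-Schmidt, in matrix form)
P = Q @ Q.T                     # matrix of the orthogonal projection onto W

v = np.array([1.0, 2.0, 7.0])
Pv = P @ v

print(np.allclose(P @ P, P))    # P^2 = P, as expected of a projection
# P_W(v) minimises the distance from v to W: compare with random points of W.
for _ in range(5):
    w = B @ np.random.randn(2)  # a point of W
    assert np.linalg.norm(v - Pv) <= np.linalg.norm(v - w) + 1e-12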
ordered basis B2 = (e1, e2, ..., en) of Rⁿ. Therefore, if vi = Σ_{j=1}^{n} a_{ji} ej for 1 ≤ i ≤ k, then
A = [a11 a12 ... a1k; a21 a22 ... a2k; ...; an1 an2 ... ank],
[v]_{B2} = ( Σ_{i=1}^{n} a_{1i}⟨v, vi⟩, Σ_{i=1}^{n} a_{2i}⟨v, vi⟩, ..., Σ_{i=1}^{n} a_{ni}⟨v, vi⟩ )ᵗ,
and
[PW(v)]_{B2} = ( Σ_{i=1}^{k} a_{1i}⟨v, vi⟩, Σ_{i=1}^{k} a_{2i}⟨v, vi⟩, ..., Σ_{i=1}^{k} a_{ni}⟨v, vi⟩ )ᵗ.
Then, as observed in Remark 5.2.3.4, AᵗA = Ik. That is, for 1 ≤ i, j ≤ k,
Σ_{s=1}^{n} a_{si} a_{sj} = 1 if i = j, and 0 if i ≠ j.   (5.3.1)
Exercise 5.3.19 1. Show that for any non-zero vector vt Rn , the rank of the matrix vvt is 1.
4. Let W1 and W2 be two distinct subspaces of a finite dimensional vector space V. Let PW1 and PW2
be the corresponding orthogonal projection operators of V along W1 and W2 , respectively. Then by
constructing an example in R2 , show that the map PW1 PW2 is a projection but not an orthogonal
projection.
5. Let W be an (n 1)-dimensional vector subspace of Rn and let W be its orthogonal complement. Let
B = (v1 , v2 , . . . , vn1 , vn ) be an orthogonal ordered basis of Rn with (v1 , v2 , . . . , vn1 ) an ordered
basis of W. Define a map
T : Rn Rn by T (v) = w0 w
Example 6.1.1 Let A be a real symmetric matrix. Consider the following problem:
∂L/∂x1 = 2a11x1 + 2a12x2 + ⋯ + 2a1nxn − 2λx1,
∂L/∂x2 = 2a21x1 + 2a22x2 + ⋯ + 2a2nxn − 2λx2,
and so on, till
∂L/∂xn = 2an1x1 + 2an2x2 + ⋯ + 2annxn − 2λxn.
Therefore, to get the points of extrema, we solve for
(0, 0, ..., 0)ᵗ = (∂L/∂x1, ∂L/∂x2, ..., ∂L/∂xn)ᵗ = ∂L/∂x = 2(Ax − λx).
We therefore need to find λ ∈ R and 0 ≠ x ∈ Rⁿ such that Ax = λx for the extremal problem.
dy(t)/dt = Ay, t ≥ 0;   (6.1.1)
is a solution of (6.1.1) and look into what λ and c have to satisfy, i.e., we are investigating for a necessary condition on λ and c so that (6.1.2) is a solution of (6.1.1). Note here that (6.1.1) has the zero solution, namely y(t) ≡ 0, and so we are looking for a non-zero vector c. Differentiating (6.1.2) with respect to t and substituting in (6.1.1) leads to
λe^{λt}c = Ae^{λt}c, that is, Ac = λc.   (6.1.3)
So, (6.1.2) is a solution of the given system of differential equations if and only if λ and c satisfy (6.1.3). That is, given an n × n matrix A, we are thus led to find a pair (λ, c) such that c ≠ 0 and (6.1.3) is satisfied.
Ax = λx?   (6.1.4)
Here, Fⁿ stands for either the vector space Rⁿ over R or Cⁿ over C. Equation (6.1.4) is equivalent to the equation
(A − λI)x = 0.
So, to solve (6.1.4), we are forced to choose those values of λ ∈ F for which det(A − λI) = 0. Observe that det(A − λI) is a polynomial in λ of degree n. We are therefore led to the following definition.
Definition 6.1.3 (Characteristic Polynomial) Let A be a matrix of order n. The polynomial det(A − λI) is called the characteristic polynomial of A and is denoted by p(λ). The equation p(λ) = 0 is called the characteristic equation of A. If λ ∈ F is a solution of the characteristic equation p(λ) = 0, then λ is called a characteristic value of A.
Some books use the term eigenvalue in place of characteristic value.
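For a concrete matrix, the characteristic polynomial and its roots can be computed directly. A small sketch (Python with NumPy; the matrix is an illustrative choice, not one from the notes):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)             # coefficients of det(lambda*I - A): here lambda^2 - 4*lambda + 3
print(coeffs)                   # [ 1. -4.  3.]
print(np.roots(coeffs))         # characteristic values: 3 and 1
print(np.linalg.eigvals(A))     # the same values, computed directly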
Theorem 6.1.4 Let A = [aij], aij ∈ F, for 1 ≤ i, j ≤ n. Suppose λ = λ0 ∈ F is a root of the characteristic equation. Then there exists a non-zero v ∈ Fⁿ such that Av = λ0v.
Proof. Since λ0 is a root of the characteristic equation, det(A − λ0I) = 0. This shows that the matrix A − λ0I is singular and therefore by Theorem 2.6.1 the linear system
(A − λ0In)x = 0
has a non-zero solution.
Remark 6.1.5 Observe that the linear system Ax = λx has the solution x = 0 for every λ ∈ F. So, we consider only those x ∈ Fⁿ that are non-zero and are solutions of the linear system Ax = λx.
Definition 6.1.6 (Eigenvalue and Eigenvector) If the linear system Ax = λx has a non-zero solution x ∈ Fⁿ for some λ ∈ F, then
1. λ ∈ F is called an eigenvalue of A,
Remark 6.1.7 To understand the difference between a characteristic value and an eigenvalue, we give the following example.
Consider the matrix A = [0 1; −1 0]. Then the characteristic polynomial of A is
p(λ) = λ² + 1.
1. If F = C, that is, if A is considered a complex matrix, then the roots of p(λ) = 0 in C are ±i. So, A has (i, (1, i)ᵗ) and (−i, (i, 1)ᵗ) as eigenpairs.
2. If F = R, that is, if A is considered a real matrix, then p(λ) = 0 has no solution in R. Therefore, if F = R, then A has no eigenvalue but it has ±i as characteristic values.
Remark 6.1.8 Note that if (λ, x) is an eigenpair for an n × n matrix A, then for any non-zero c ∈ F, (λ, cx) is also an eigenpair for A. Similarly, if x1, x2, ..., xr are eigenvectors of A corresponding to the eigenvalue λ, then for any non-zero (c1, c2, ..., cr) ∈ Fʳ, it is easily seen that if Σ_{i=1}^{r} ci xi ≠ 0, then Σ_{i=1}^{r} ci xi is also an eigenvector of A corresponding to the eigenvalue λ. Hence, when we talk of eigenvectors corresponding to an eigenvalue λ, we mean linearly independent eigenvectors.
Suppose λ0 ∈ F is a root of the characteristic equation det(A − λ0I) = 0. Then A − λ0I is singular and rank(A − λ0I) < n. Suppose rank(A − λ0I) = r < n. Then by Corollary 4.3.9, the linear system (A − λ0I)x = 0 has n − r linearly independent solutions. That is, A has n − r linearly independent eigenvectors corresponding to the eigenvalue λ0 whenever rank(A − λ0I) = r < n.
Example 6.1.9 1. Let A = diag(d1, d2, ..., dn) with di ∈ R for 1 ≤ i ≤ n. Then p(λ) = Π_{i=1}^{n} (λ − di) is the characteristic polynomial. So, the eigenpairs are (d1, e1), (d2, e2), ..., (dn, en), where ei is the i-th standard basis vector.
2. Find eigenpairs over C, for each of the following matrices:
[1 0; 0 0],  [1 1+i; 1−i 1],  [i 1+i; −1+i i],  [cos θ −sin θ; sin θ cos θ],  and  [cos θ sin θ; sin θ −cos θ].
(a) Then prove that A and B have the same set of eigenvalues.
(b) Let (, x) be an eigenpair for A and (, y) be an eigenpair for B. What is the relationship between
the vectors x and y?
[Hint: Recall that if the matrices A and B are similar, then there exists a non-singular matrix
P such that B = P AP 1 .]
4. Let A = (aij) be an n × n matrix. Suppose that for all i, 1 ≤ i ≤ n, Σ_{j=1}^{n} aij = a. Then prove that a is an eigenvalue of A. What is the corresponding eigenvector?
5. Prove that the matrices A and At have the same set of eigenvalues. Construct a 2 2 matrix A such
that the eigenvectors of A and At are different.
6. Let A be a matrix such that A2 = A (A is called an idempotent matrix). Then prove that its eigenvalues
are either 0 or 1 or both.
7. Let A be a matrix such that Ak = 0 (A is called a nilpotent matrix) for some positive integer k 1.
Then prove that its eigenvalues are all 0.
Theorem 6.1.11 Let A = [aij] be an n × n matrix with eigenvalues λ1, λ2, ..., λn, not necessarily distinct. Then det(A) = Π_{i=1}^{n} λi and tr(A) = Σ_{i=1}^{n} aii = Σ_{i=1}^{n} λi.
Also,
det(A − λIn) = | a11−λ  a12  ...  a1n ; a21  a22−λ  ...  a2n ; ... ; an1  an2  ...  ann−λ |   (6.1.6)
= a0 − a1λ + a2λ² − ⋯ + (−1)ⁿ⁻¹ a_{n−1}λⁿ⁻¹ + (−1)ⁿ λⁿ   (6.1.7)
for some a0, a1, ..., a_{n−1} ∈ F. Note that a_{n−1}, the coefficient of (−1)ⁿ⁻¹λⁿ⁻¹, comes from the product (a11 − λ)(a22 − λ)⋯(ann − λ).
Exercise 6.1.12 1. Let A be a skew symmetric matrix of order 2n + 1. Then prove that 0 is an eigenvalue
of A.
2. Let A be a 3 3 orthogonal matrix (AAt = I).If det(A) = 1, then prove that there exists a non-zero
vector v R3 such that Av = v.
Let A be an n × n matrix. Then in the proof of the above theorem, we observed that the characteristic equation det(A − λI) = 0 is a polynomial equation of degree n in λ. Also, for some numbers a0, a1, ..., a_{n−1} ∈ F, it has the form
λⁿ + a_{n−1}λⁿ⁻¹ + a_{n−2}λⁿ⁻² + ⋯ + a1λ + a0 = 0.
Note that, in the expression det(A − λI) = 0, λ is an element of F. Thus, we can only substitute λ by elements of F.
It turns out that the expression
Aⁿ + a_{n−1}Aⁿ⁻¹ + a_{n−2}Aⁿ⁻² + ⋯ + a1A + a0I = 0
holds true as a matrix identity. This is a celebrated theorem called the Cayley Hamilton Theorem. We state this theorem without proof and give some implications.
Theorem 6.1.13 (Cayley Hamilton Theorem) Let A be a square matrix of order n. Then A satisfies its characteristic equation. That is,
Aⁿ + a_{n−1}Aⁿ⁻¹ + a_{n−2}Aⁿ⁻² + ⋯ + a1A + a0I = 0
holds true as a matrix identity.
2. Suppose we are given a square matrix A of order n and we are interested in calculating Aˡ where ℓ is large compared to n. Then we can use the division algorithm to find numbers α0, α1, ..., α_{n−1} and a polynomial f(λ) such that
λˡ = f(λ)(λⁿ + a_{n−1}λⁿ⁻¹ + a_{n−2}λⁿ⁻² + ⋯ + a1λ + a0) + α0 + α1λ + ⋯ + α_{n−1}λⁿ⁻¹.
Hence, by the Cayley Hamilton Theorem,
Aˡ = α0I + α1A + ⋯ + α_{n−1}Aⁿ⁻¹.
Exercise 6.1.15 Find the inverse of the following matrices by using the Cayley Hamilton Theorem:
i) [2 3 4; 5 6 7; 1 1 2]    ii) [1 1 1; 1 1 1; 0 1 1]    iii) [1 2 1; 2 1 1; 0 1 2].
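One way to carry out this exercise numerically is to read the inverse off the characteristic polynomial, exactly as the Cayley Hamilton Theorem suggests. A sketch (Python with NumPy) for the first matrix of the exercise, taken as printed above:

import numpy as np

A = np.array([[2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0],
              [1.0, 1.0, 2.0]])

c = np.poly(A)                  # monic characteristic polynomial: lambda^3 + c[1]*lambda^2 + c[2]*lambda + c[3]
# Cayley-Hamilton: A^3 + c[1]*A^2 + c[2]*A + c[3]*I = 0, hence
# A^{-1} = -(A^2 + c[1]*A + c[2]*I) / c[3], provided c[3] != 0.
I = np.eye(3)
A_inv = -(A @ A + c[1] * A + c[2] * I) / c[3]

print(np.allclose(A @ A_inv, I))   # True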
Proof. The proof is by induction on the number m of eigenvalues. The result is obviously true if
m = 1 as the corresponding eigenvector is non-zero and we know that any set containing exactly one
non-zero vector is linearly independent.
Let the result be true for m, 1 m < k. We prove the result for m + 1. We consider the equation
ci (i 1 ) = 0 for 2 i m + 1.
But the eigenvalues are distinct implies i 1 6= 0 for 2 i m + 1. We therefore get ci = 0 for
2 i m + 1. Also, x1 6= 0 and therefore (6.1.9) gives c1 = 0.
Thus, we have the required result.
We are thus lead to the following important corollary.
Corollary 6.1.17 The eigenvectors corresponding to distinct eigenvalues of an n n matrix A are linearly
independent.
2. Let A and B be 2 2 matrices for which det(A) = det(B) and tr(A) = tr(B).
3. Let (1 , u) be an eigenpair for a matrix A and let (2 , u) be an eigenpair for another matrix B.
6.2 Diagonalization
Let A be a square matrix of order n and let TA : Fn Fn be the corresponding linear transformation.
In this section, we ask the question does there exist a basis B of Fn such that TA [B, B], the matrix of
the linear transformation TA , is in the simplest possible form.
We know that, the simplest form for a matrix is the identity matrix and the diagonal matrix. In
this section, we show that for a certain class of matrices A, we can find a basis B such that TA [B, B] is
a diagonal matrix, consisting of the eigenvalues of A. This is equivalent to saying that A is similar to a
diagonal matrix. To show the above, we need the following definition.
Definition 6.2.1 (Matrix Diagonalization) A matrix A is said to be diagonalizable if there exists a non-
singular matrix P such that P 1 AP is a diagonal matrix.
1. Let V = R². Then A has no real eigenvalue (see Example 6.1.8) and hence A doesn't have eigenvectors that are vectors in R². Hence, there does not exist any non-singular 2 × 2 real matrix P such that P⁻¹AP is a diagonal matrix.
2. In case V = C²(C), the two complex eigenvalues of A are −i, i and the corresponding eigenvectors are (i, 1)ᵗ and (−i, 1)ᵗ, respectively. Also, (i, 1)ᵗ and (−i, 1)ᵗ can be taken as a basis of C²(C). Define a 2 × 2 complex matrix by
U = (1/√2) [i −i; 1 1].
Then
U*AU = [−i 0; 0 i].
Theorem 6.2.4 Let A be an n × n matrix. Then A is diagonalizable if and only if A has n linearly independent eigenvectors.
Proof. Let A be diagonalizable. Then there exist matrices P and D such that
P⁻¹AP = D = diag(d1, d2, ..., dn).
Or equivalently, AP = PD. Writing P = [u1, u2, ..., un], this gives
Aui = di ui for 1 ≤ i ≤ n.
Since the ui's are the columns of a non-singular matrix P, they are non-zero and so for 1 ≤ i ≤ n, we get the eigenpairs (di, ui) of A. Since the ui's are columns of the non-singular matrix P, using Corollary 4.3.9, we get that u1, u2, ..., un are linearly independent.
Thus we have shown that if A is diagonalizable then A has n linearly independent eigenvectors.
Conversely, suppose A has n linearly independent eigenvectors ui, 1 ≤ i ≤ n, with eigenvalues λi. Then Aui = λi ui. Let P = [u1, u2, ..., un]. Since u1, u2, ..., un are linearly independent, by Corollary 4.3.9, P is non-singular. Also,
AP = [λ1u1, λ2u2, ..., λnun] = P diag(λ1, λ2, ..., λn), so that P⁻¹AP is a diagonal matrix.
Corollary 6.2.5 Let A be an n × n matrix. Suppose that the eigenvalues of A are distinct. Then A is
diagonalizable.
Proof. As A is an nn matrix, it has n eigenvalues. Since all the eigenvalues of A are distinct, by Corol-
lary 6.1.17, the n eigenvectors are linearly independent. Hence, by Theorem 6.2.4, A is diagonalizable.
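Theorem 6.2.4 and Corollary 6.2.5 are easy to verify numerically for a specific matrix. A sketch (Python with NumPy; the matrix is an illustrative choice with distinct eigenvalues, not one from the notes):

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
D = np.diag(eigvals)

print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True: P^{-1} A P is diagonal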
Corollary 6.2.6 Let A be an n × n matrix with λ1, λ2, ..., λk as its distinct eigenvalues and p(λ) as its characteristic polynomial. Suppose that for each i, 1 ≤ i ≤ k, (λ − λi)^{mi} divides p(λ) but (λ − λi)^{mi+1} does not divide p(λ) for some positive integers mi. Then
A is diagonalizable if and only if dim(ker(A − λiI)) = mi for each i, 1 ≤ i ≤ k.
Exercise 6.2.8 1. By finding the eigenvalues of the following matrices, justify whether or not A = PDP⁻¹ for some real non-singular matrix P and a real diagonal matrix D:
i) [cos θ −sin θ; sin θ cos θ]    ii) [cos θ sin θ; sin θ −cos θ]
for any θ with 0 ≤ θ ≤ 2π.
2. Let A be an n × n matrix and B an m × m matrix. Suppose C = [A 0; 0 B]. Then show that C is diagonalizable if and only if both A and B are diagonalizable.
N (T ) = {(x1 , x2 , x3 , x4 , x5 ) R5 | x1 + x4 + x5 = 0, x2 + x3 = 0}.
Then
4. Let A be a non-zero square matrix such that A2 = 0. Show that A cannot be diagonalized. [Hint:
Use Remark 6.2.2.]
Definition 6.3.1 (Special Matrices) 1. The matrix A* = (ā_{ji}) is called the conjugate transpose of the matrix A.
Note that A* = (Ā)ᵗ = conjugate of Aᵗ.
Note that a symmetric matrix is always Hermitian, a skew-symmetric matrix is always skew-Hermitian and an orthogonal matrix is always unitary. Each of these matrices is normal. If A is a unitary matrix then A* = A⁻¹.
Example 6.3.2 1. Let B = [i 1; −1 i]. Then B is skew-Hermitian.
2. Let A = (1/√2)[1 i; i 1] and B = [1 1; −1 1]. Then A is a unitary matrix and B is a normal matrix. Note that √2·A is also a normal matrix.
Definition 6.3.3 (Unitary Equivalence) Let A and B be two n × n matrices. They are called unitarily equivalent if there exists a unitary matrix U such that A = U*BU.
2. Every matrix can be uniquely expressed as A = S + iT where both S and T are Hermitian matrices.
4. Does there exist a unitary matrix U such that U*AU = B, where
A = [1 1 4; 0 2 2; 0 0 3]  and  B = [2 −1 3√2; 0 1 √2; 0 0 3]?
Proposition 6.3.5 Let A be an n × n Hermitian matrix. Then all the eigenvalues of A are real.
Proof. Let (λ, x) be an eigenpair, so that Ax = λx with x ≠ 0. Taking the conjugate transpose and using A* = A, we get
x*A = x*A* = (Ax)* = (λx)* = λ̄ x*.
Hence
λ x*x = x*(λx) = x*(Ax) = (x*A)x = (λ̄ x*)x = λ̄ x*x.
But x is an eigenvector and hence x ≠ 0, and so the real number ‖x‖² = x*x is non-zero as well. Thus λ = λ̄. That is, λ is a real number.
Theorem 6.3.6 Let A be an n × n Hermitian matrix. Then A is unitarily diagonalizable. That is, there exists a unitary matrix U such that U*AU = D, where D is a diagonal matrix with the eigenvalues of A as the diagonal entries.
In other words, the eigenvectors of A form an orthonormal basis of Cn .
Proof. We will prove the result by induction on the size of the matrix. The result is clearly true if
n = 1. Let the result be true for n = k 1. we will prove the result in case n = k. So, let A be a k k
matrix and let (1 , x) be an eigenpair of A with kxk = 1. We now extend the linearly independent set
{x} to form an orthonormal basis {x, u2 , u3 , . . . , uk } (using Gram-Schmidt Orthogonalisation) of Ck .
As {x, u2 , u3 , . . . , uk } is an orthonormal set,
ui x = 0 for all i = 2, 3, . . . , k.
Recall that the entries λi, for 2 ≤ i ≤ k, are the eigenvalues of the matrix B. We also know that two similar matrices have the same set of eigenvalues. Hence, the eigenvalues of A are λ1, λ2, ..., λk. Define
U = U1 [1 0; 0 U2].
Then U is a unitary matrix and
U⁻¹AU = (U1[1 0; 0 U2])⁻¹ A (U1[1 0; 0 U2])
= [1 0; 0 U2⁻¹] (U1⁻¹ A U1) [1 0; 0 U2]
= [1 0; 0 U2⁻¹] [λ1 0; 0 B] [1 0; 0 U2]
= [λ1 0; 0 U2⁻¹ B U2]
= [λ1 0; 0 D2].
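For a given Hermitian matrix, the unitary matrix U of Theorem 6.3.6 can be computed with a library routine. A sketch (Python with NumPy; the matrix is an illustrative choice, not one from the notes):

import numpy as np

A = np.array([[2.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])   # Hermitian: A* = A

eigvals, U = np.linalg.eigh(A)      # eigh is meant for Hermitian matrices; U is unitary
D = np.diag(eigvals)

print(np.allclose(U.conj().T @ A @ U, D))   # U* A U = D
print(eigvals)                              # the eigenvalues are real, as Proposition 6.3.5 asserts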
Proof. As A is symmetric, A is also an Hermitian matrix. Hence, by Proposition 6.3.5, the eigenvalues of A are all real. Let (λ, x) be an eigenpair of A. Suppose x ∈ Cⁿ. Then there exist y, z ∈ Rⁿ such that x = y + iz. So,
Ax = λx  gives  A(y + iz) = λ(y + iz).
Comparing the real and imaginary parts, we get Ay = λy and Az = λz. Thus, we can choose the eigenvectors to have real entries.
To prove the orthonormality of the eigenvectors, we proceed along the lines of the proof of Theorem 6.3.6. Hence, the readers are advised to complete the proof.
Exercise 6.3.8 1. Let A be a skew-Hermitian matrix. Then all the eigenvalues of A are either zero or
purely imaginary. Also, the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal.
[Hint: Carefully study the proof of Theorem 6.3.6.]
3. Let A be a normal matrix. Then, show that if (λ, x) is an eigenpair for A then (λ̄, x) is an eigenpair for A*.
4. Show that the matrices A = [4 4; 0 4] and B = [10 9; −4 −2] are similar. Is it possible to find a unitary matrix U such that A = U*BU?
(a) if det(A) = 1, then A is a rotation about a fixed axis, in the sense that A has an eigenpair (1, x) such that the restriction of A to the plane x⊥ is a two dimensional rotation of x⊥.
(b) if det(A) = −1, then the action of A corresponds to a reflection through a plane P, followed by a rotation about the line through the origin that is perpendicular to P.
Remark 6.3.9 In the previous exercise, we saw that the matrices A = [4 4; 0 4] and B = [10 9; −4 −2] are similar but not unitarily equivalent, whereas unitary equivalence implies similarity as U* = U⁻¹. But in numerical calculations, unitary transformations are preferred to similarity transformations. The main reasons are:
1. Exercise 6.3.8.2 implies that an orthonormal change of basis leaves unchanged the sum of squares
of the absolute values of the entries which need not be true under a non-orthonormal change of
basis.
3. Also in doing conjugate transpose, the loss of accuracy due to round-off errors doesnt occur.
We next prove Schur's Lemma and use it to show that normal matrices are unitarily diagonalizable.
Lemma 6.3.10 (Schur's Lemma) Every n × n complex matrix is unitarily similar to an upper triangular matrix.
Proof. We will prove the result by induction on the size of the matrix. The result is clearly true
if n = 1. Let the result be true for n = k 1. we will prove the result in case n = k. So, let A be a
k k matrix and let (1 , x) be an eigenpair for A with kxk = 1. Now the linearly independent set {x} is
extended, using the Gram-Schmidt Orthogonalisation, to get an orthonormal basis {x, u2 , u3 , . . . , uk }.
Then U1 = [x u2 uk ] (with x, u2 , . . . , uk as the columns of the matrix U1 ) is a unitary matrix and
Exercise 6.3.11 1. Let A be an n × n real invertible matrix. Prove that there exists an orthogonal matrix P and a diagonal matrix D with positive diagonal entries such that AAᵗ = PDP⁻¹.
2. Show that the matrices A = [1 1 1; 0 2 1; 0 0 3] and B = [2 −1 √2; 0 1 0; 0 0 3] are unitarily equivalent via the unitary matrix
U = (1/√2) [1 1 0; 1 −1 0; 0 0 √2].
Hence, conclude that the upper triangular matrix obtained in Schur's Lemma need not be unique.
Remark 6.3.12 (The Spectral Theorem for Normal Matrices) Let A be an n n normal
matrix. Then the above exercise shows that there exists an orthonormal basis {x1 , x2 , . . . , xn } of
Cn (C) such that Axi = i xi for 1 i n.
We end this chapter with an application of the theory of diagonalization to the study of conic sections
in analytic geometry and the study of maxima and minima in analysis.
Observe that if A = I (the identity matrix) then the bilinear form reduces to the standard real inner
product. Also, if we want it to be symmetric in x and y then it is necessary and sufficient that aij = aji
for all i, j = 1, 2, . . . , n. Why? Hence, any symmetric bilinear form is naturally associated with a real
symmetric matrix.
Definition 6.4.2 (Sesquilinear Form) Let A be an n × n matrix with complex entries. A sesquilinear form in x = (x1, x2, ..., xn)ᵗ, y = (y1, y2, ..., yn)ᵗ is given by
H(x, y) = Σ_{i,j=1}^{n} aij xi ȳj.
Note that if A = I (the identity matrix) then the sesquilinear form reduces to the standard complex
inner product. Also, it can be easily seen that this form is linear in the first component and conjugate
linear in the second component. Also, if we want H(x, y) = H(y, x) then the matrix A need to be an
Hermitian matrix. Note that if aij R and x, y Rn , then the sesquilinear form reduces to a bilinear
form.
The expression Q(x, x) is called the quadratic form and H(x, x) the Hermitian form. We generally
write Q(x) and H(x) in place of Q(x, x) and H(x, x), respectively. It can be easily shown that for any
choice of x, the Hermitian form H(x) is a real number.
Therefore, in matrix notation, for a Hermitian matrix A, the Hermitian form can be rewritten as H(x) = x*Ax.
Example 6.4.3 Let A = [1 2−i; 2+i 2]. Then check that A is an Hermitian matrix and for x = (x1, x2)ᵗ, the Hermitian form
H(x) = x*Ax = (x̄1, x̄2) [1 2−i; 2+i 2] (x1, x2)ᵗ
= x̄1x1 + 2x̄2x2 + (2 − i)x̄1x2 + (2 + i)x̄2x1
= |x1|² + 2|x2|² + 2Re[(2 − i)x̄1x2],
where Re denotes the real part of a complex number. This shows that for every choice of x the Hermitian form is always real. Why?
The main idea is to express H(x) as sum of squares and hence determine the possible values that
it can take. Note that if we replace x by cx, where c is any complex number, then H(x) simply gets
multiplied by |c|2 and hence one needs to study only those x for which kxk = 1, i.e., x is a normalised
vector.
From Exercise 6.3.11.3 one knows that if A* = A (A is Hermitian) then there exists a unitary matrix U such that U*AU = D (D = diag(λ1, λ2, ..., λn), with the λi's the eigenvalues of the matrix A, which we know are real). So, taking z = U*x (i.e., choosing the zi's as linear combinations of the xj's with coefficients coming from the entries of the matrix U*), one gets
H(x) = x*Ax = z*(U*AU)z = z*Dz = Σ_{i=1}^{n} λi|zi|² = Σ_{i=1}^{n} λi |Σ_{j=1}^{n} ū_{ji}xj|².   (6.4.1)
Thus, one knows the possible values that H(x) can take depending on the eigenvalues of the matrix A in case A is a Hermitian matrix. Also, for 1 ≤ i ≤ n, the linear forms Σ_{j=1}^{n} ū_{ji}xj represent the principal axes of the conic that they represent in the n-dimensional space.
Equation (6.4.1) gives one method of writing H(x) as a sum of n absolute squares of linearly inde-
pendent linear forms. One can easily show that there are more than one way of writing H(x) as sum of
squares. The question arises, what can we say about the coefficients when H(x) has been written as
sum of absolute squares.
This question is answered by Sylvesters law of inertia which we state as the next lemma.
Lemma 6.4.4 Every Hermitian form H(x) = x*Ax (with A an Hermitian matrix) in n variables can be written as
H(x) = |y1|² + |y2|² + ⋯ + |yp|² − |y_{p+1}|² − ⋯ − |yr|²
where y1, y2, ..., yr are linearly independent linear forms in x1, x2, ..., xn, and the integers p and r, 0 ≤ p ≤ r ≤ n, depend only on A.
Proof. From Equation (6.4.1) it is easily seen that H(x) has the required form. Need to show that p
and r are uniquely given by A.
Hence, let us assume on the contrary that there exist positive integers p, q, r, s with p > q such that
Note: The integer r is the rank of the matrix A and the number r 2p is sometimes called the
inertial degree of A.
We complete this chapter by understanding the graph of
3x² + 4xy + 3y² = 5.
Then
3x² + 4xy + 3y² = [x, y] [3 2; 2 3] [x; y]
= [x, y] (1/√2)[1 1; 1 −1] [5 0; 0 1] (1/√2)[1 1; 1 −1] [x; y]
= [u, v] [5 0; 0 1] [u; v]
= 5u² + v²,
where u = (x + y)/√2 and v = (x − y)/√2. Hence the given equation reduces to
5u² + v² = 5, or equivalently u² + v²/5 = 1.
Therefore, the given graph represents an ellipse with the principal axes u = 0 and v = 0. That is, the principal axes are
y + x = 0 and x − y = 0.
The eccentricity of the ellipse is e = 2/√5, the foci are at the points S1 = (√2, −√2) and S2 = (−√2, √2), and the equations of the directrices are x − y = ±5/√2.
(Figure: the ellipse 3x² + 4xy + 3y² = 5 with foci S1 and S2.)
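The reduction of the quadratic form above can be reproduced with an eigenvalue routine. A sketch (Python with NumPy) for the form 3x² + 4xy + 3y² used in the example; note that eigh returns the eigenvalues in increasing order, so the coefficient 5 is attached to the second rotated coordinate here:

import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])          # matrix of the form 3x^2 + 4xy + 3y^2

eigvals, U = np.linalg.eigh(A)      # eigenvalues [1, 5]; orthonormal eigenvectors in the columns of U
print(eigvals)

x = np.array([0.3, -1.2])           # an arbitrary test point
u = U.T @ x                         # rotated coordinates (u, v)
print(np.isclose(x @ A @ x, eigvals[0]*u[0]**2 + eigvals[1]*u[1]**2))   # True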
Definition 6.4.6 (Associated Quadratic Form) Let ax² + 2hxy + by² + 2gx + 2fy + c = 0 be the equation of a general conic. The quadratic expression
ax² + 2hxy + by² = [x, y] [a h; h b] [x; y]
is called the quadratic form associated with the given conic.
We now consider the general conic. We obtain conditions on the eigenvalues of the associated
quadratic form to characterize the different conic sections in R2 (endowed with the standard inner
product).
1. an ellipse if ab − h² > 0,
2. a parabola if ab − h² = 0, and
3. a hyperbola if ab − h² < 0.
Proof. Let A = [a h; h b]. Then the associated quadratic form is
ax² + 2hxy + by² = [x y] A [x; y].
As A is a symmetric matrix, by Corollary 6.3.7, the eigenvalues λ1, λ2 of A are both real, the corresponding eigenvectors u1, u2 are orthonormal and A is unitarily diagonalizable with
A = [u1 u2] [λ1 0; 0 λ2] [u1 u2]ᵗ.   (6.4.2)
Let [u; v] = [u1 u2]ᵗ [x; y]. Then
ax² + 2hxy + by² = λ1u² + λ2v²
and the equation of the conic section in the (u, v)-plane reduces to
λ1u² + λ2v² + 2g1u + 2f1v + c = 0.
1. λ1 = 0 = λ2.
Substituting λ1 = λ2 = 0 in (6.4.2) gives A = 0. Thus, the given conic reduces to a straight line 2g1u + 2f1v + c = 0 in the (u, v)-plane.
2. λ1 = 0, λ2 ≠ 0.
In this case, the equation of the conic reduces to
λ2(v + d1)² = d2u + d3 for some d1, d2, d3 ∈ R.
(a) If d2 = d3 = 0, then in the (u, v)-plane, we get the pair of coincident lines v = −d1.
(b) If d2 = 0, d3 ≠ 0.
i. If λ2·d3 > 0, then we get a pair of parallel lines v = −d1 ± √(d3/λ2).
ii. If λ2·d3 < 0, the solution set corresponding to the given conic is an empty set.
(c) If d2 ≠ 0, then the given equation is of the form Y² = 4aX for some translates X = x + α and Y = y + β and thus represents a parabola.
Also, observe that λ1 = 0 implies that det(A) = 0. That is, ab − h² = det(A) = 0.
λ1(u + d1)² − λ2(v + d2)² = d3 for some d1, d2, d3 ∈ R.
(a) Suppose d3 = 0. Then
λ1(u + d1)² − λ2(v + d2)² = 0.
The terms on the left can be written as a product of two linear factors as λ1, λ2 > 0. Thus, in this case, the given equation represents a pair of intersecting straight lines in the (u, v)-plane.
(b) Suppose d3 ≠ 0. As d3 ≠ 0, we can assume d3 > 0. So, the equation of the conic reduces to
λ1(u + d1)²/d3 − λ2(v + d2)²/d3 = 1.
This equation represents a hyperbola in the (u, v)-plane, with principal axes
u + d1 = 0 and v + d2 = 0.
As λ1λ2 < 0, we have
ab − h² = det(A) = λ1λ2 < 0.
4. λ1, λ2 > 0.
In this case, the equation of the conic can be rewritten as
λ1(u + d1)² + λ2(v + d2)² = d3, for some d1, d2, d3 ∈ R.
(a) Suppose d3 = 0. Then the only real solution is the intersection of the lines u + d1 = 0 and v + d2 = 0 in the (u, v)-plane, i.e., the single point (−d1, −d2).
(b) Suppose d3 < 0. Then there is no solution for the given equation. Hence, we do not get any real ellipse in the (u, v)-plane.
(c) Suppose d3 > 0. In this case, the equation of the conic reduces to
λ1(u + d1)²/d3 + λ2(v + d2)²/d3 = 1.
This equation represents an ellipse in the (u, v)-plane, with principal axes
u + d1 = 0 and v + d2 = 0.
ab − h² = det(A) = λ1λ2 > 0.
implies that the principal axes of the conic are functions of the eigenvectors u1 and u2 .
1. x2 + 2xy + y 2 6x 10y = 3.
As a last application, we consider the following problem that helps us in understanding the quadrics.
Let
ax2 + by 2 + cz 2 + 2dxy + 2exz + 2f yz + 2lx + 2my + 2nz + q = 0 (6.4.3)
be a general quadric. Then we need to follow the steps given below to write the above quadric in the
standard form and thereby get the picture of the quadric. The steps are:
xᵗAx + bᵗx + q = 0,
where
A = [a d e; d b f; e f c],  b = [2l; 2m; 2n],  and  x = [x; y; z].
2. As the matrix A is symmetric matrix, find an orthogonal matrix P such that P t AP is a diagonal
matrix.
3. Replace the vector x by y = P t x. Then writing yt = (y1 , y2 , y3 ), the equation (6.4.3) reduces to
4. Complete the squares, if necessary, to write the equation (6.4.4) in terms of the variables z1 , z2 , z3
so that this equation is in the standard form.
5. Use the condition y = P t x to determine the centre and the planes of symmetry of the quadric in
terms of the original system.
Example 6.4.10 Determine the quadric 2x² + 2y² + 2z² + 2xy + 2xz + 2yz + 4x + 2y + 4z + 2 = 0.
Solution: In this case,
A = [2 1 1; 1 2 1; 1 1 2],  b = [4; 2; 4]  and  q = 2.
Check that for the orthonormal matrix
P = [1/√3 1/√2 1/√6; 1/√3 −1/√2 1/√6; 1/√3 0 −2/√6],   PᵗAP = [4 0 0; 0 1 0; 0 0 1].
So, the equation of the quadric reduces to
4y1² + y2² + y3² + (10/√3)y1 + √2·y2 − (2/√6)y3 + 2 = 0.
Or equivalently,
4(y1 + 5/(4√3))² + (y2 + 1/√2)² + (y3 − 1/√6)² = 9/12.
So, the equation of the quadric in standard form is
4z1² + z2² + z3² = 9/12,
where the point (x, y, z)ᵗ = P(−5/(4√3), −1/√2, 1/√6)ᵗ = (−3/4, 1/4, −3/4)ᵗ is the centre. The calculation of the planes of symmetry is left as an exercise to the reader.
Part II
Differential Equations
y′ = dy/dx,  y″ = d²y/dx²,  ...,  y⁽ᵏ⁾ = dᵏy/dxᵏ for k ≥ 3.
The independent variable will be defined for an interval I; where I is either R or an interval a < x <
b R. With these notations, we ask the question: what is a differential equation?
A differential equation is a relationship between the independent variable and the unknown dependent
functions along with its derivatives.
is called an Ordinary Differential Equation; where f is a known function from I Rn+1 to R. Also,
the unknown function y is to be determined.
Remark 7.1.2 Usually, Equation (7.1.1) is written as f x, y, y , . . . , y (n) = 0, and the interval I is not
mentioned in most of the examples.
1. y′ = 6 sin x + 9;
2. y″ + 2y² = 0;
3. y′ = x + cos y;
4. (y′)² + y = 0;
5. y′ + y = 0;
6. y″ + y = 0;
7. y⁽³⁾ = 0;
8. y″ + m sin y = 0.
Definition 7.1.3 (Order of a Differential Equation) The order of a differential equation is the order of
the highest derivative occurring in the equation.
In Example 7.1, the order of Equations 1, 3, 4, 5 are one, that of Equations 2, 6 and 8 are two and
the Equation 7 has order three.
Remark 7.1.6 Sometimes a solution y is also called an integral. A solution of the form y = g(x) is
called an explicit solution. If y is given by an implicit relation h(x, y) = 0 and satisfies the differential
equation, then y is called an implicit solution.
Remark 7.1.7 Since the solution is obtained by integration, we may expect a constant of integration (for each integration) to appear in a solution of a differential equation. If the order of the ODE is n, we expect n (n ≥ 1) arbitrary constants.
To start with, let us try to understand the structure of a first order differential equation of the form
f(x, y, y′) = 0   (7.1.2)
and move to higher orders later. With this in mind let us look at:
Definition 7.1.8 (General Solution) A function y(x, c) is called a general solution of Equation (7.1.2) on
an interval I R, if y(x, c) is a solution of Equation (7.1.2) for each x I, for a fixed c R but c is
arbitrary.
Remark 7.1.9 The family of functions {y(., c) : c R} is called a one parameter family of functions
and c is called a parameter. In other words, a general solution of Equation (7.1.2) is nothing but a one
parameter family of solutions of the Equation (7.1.2).
Example 7.1.10 1. Show that for each k R, y = kex is a solution of y = y. This is a general solution
as it is a one parameter family of solutions. Here the parameter is k.
Solution: This can be easily verified.
2. Determine a differential equation for which a family of circles with center at (1, 0) and arbitrary radius, a, is an implicit solution.
Solution: This family is represented by the implicit relation
(x − 1)² + y² = a²,   (7.1.3)
3. Consider the one parameter family of circles with center at (c, 0) and unit radius. The family is represented by the implicit relation
(x − c)² + y² = 1,   (7.1.5)
where c is a real constant. Show that y satisfies (yy′)² + y² = 1.
Solution: We note that differentiation of the given equation leads to
(x − c) + yy′ = 0.
Substituting x − c = −yy′ in (7.1.5), we get
(yy′)² + y² = 1.
In Example 7.1.10.2, we see that y is not defined explicitly as a function of x but implicitly defined by Equation (7.1.3). On the other hand, y = 1/(1 − x) is an explicit solution in Example 7.1.5.2. Solving a differential equation means to find a solution.
Let us now look at some geometrical interpretations of the differential Equation (7.1.2). The Equation (7.1.2) is a relation between x, y and the slope of the function y at the point x. For instance, let us find the equation of the curve passing through (0, 1/2) and whose slope at each point (x, y) is −x/(4y). If y is the required curve, then y satisfies
dy/dx = −x/(4y),  y(0) = 1/2.
It is easy to verify that y satisfies the equation x2 + 4y 2 = 1.
(a) y 2 + sin(y ) = 1.
(b) y + (y )2 = 2x.
(c) (y )3 + y 2y 4 = 1.
3. Find the equation of the curve C which passes through (1, 0) and whose slope at each point (x, y) is
x
.
y
y′ = f(x, y)
where f is an arbitrary continuous function. But there are special cases of the function f for which the above equation can be solved. One such set of equations is
y′ = g(y)h(x).   (7.2.1)
Equation (7.2.1) can be rewritten as
(1/g(y)) (dy/dx) = h(x).
Integrating with respect to x, we get
G(y) + c = H(x),
where G and H are anti-derivatives of 1/g and h, respectively.
2. Solve y′ = y².
Solution: It is easy to deduce that y = −1/(x + c), where c is a constant, is the required solution. Observe that the solution is defined only if x + c ≠ 0 for any x. For example, if we let y(0) = a, then y = a/(1 − ax) exists as long as 1 − ax ≠ 0.
y′ = g1(x, y)/g2(x, y), or equivalently y′ = g(y/x),
where g1 and g2 are homogeneous functions of the same degree in x and y, and g is a continuous function. In this case, we use the substitution y = xu(x) to get y′ = xu′ + u. Thus, the above equation after substitution becomes
xu′ + u(x) = g(u),
2. Find the equation of the curve passing through (0, 1) and whose slope at each point (x, y) is −x/(2y).
Solution: If y is such a curve, then we have
dy/dx = −x/(2y) and y(0) = 1.
Notice that it is a separable equation and it is easy to verify that y satisfies x² + 2y² = 2.
Definition 7.3.1 (Exact Equation) Equation (7.3.1) is called Exact if there exists a real valued twice con-
tinuously differentiable function f : R2 R (or the domain is an open subset of R2 ) such that
∂f/∂x = M and ∂f/∂y = N.   (7.3.3)
∂f/∂x + (∂f/∂y)(dy/dx) = df(x, y)/dx = 0.
This implies that f (x, y) = c (where c is a constant) is an implicit solution of Equation (7.3.1). In other
words, the left side of Equation (7.3.1) is an exact differential.
Example 7.3.3 The equation y + x(dy/dx) = 0 is an exact equation. Observe that in this example, f(x, y) = xy.
Theorem 7.3.4 Let M and N be twice continuously differentiable function in a region D. The Equation
(7.3.1) is exact if and only if
∂M/∂y = ∂N/∂x.   (7.3.4)
Note: If the Equation (7.3.1) or Equation (7.3.2) is exact, then there is a function f (x, y) satisfying
f (x, y) = c for some constant c, such that
Let us consider some examples, where Theorem 7.3.4 can be used to easily find the general solution.
Therefore, the given equation is exact. Hence, there exists a function G(x, y) such that
∂G/∂x = 2xeʸ  and  ∂G/∂y = x²eʸ + cos y.
The first partial differentiation when integrated with respect to x (assuming y to be a constant) gives,
G(x, y) = x2 ey + h(y).
But then
∂G/∂y = ∂(x²eʸ + h(y))/∂y = N
implies dh/dy = cos y, or h(y) = sin y + c where c is an arbitrary constant. Thus, the general solution of
the given equation is
the given equation is
x2 ey + sin y = c.
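The exactness test (7.3.4) and the two integrations carried out above can be checked with a computer algebra system. A sketch (Python with SymPy) for the equation 2x·eʸ + (x²eʸ + cos y) dy/dx = 0 of this example:

import sympy as sp

x, y = sp.symbols('x y')

M = 2 * x * sp.exp(y)
N = x**2 * sp.exp(y) + sp.cos(y)

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))    # 0, so the equation is exact

G = sp.integrate(M, x)                               # x^2 * exp(y)
h = sp.integrate(sp.simplify(N - sp.diff(G, y)), y)  # sin(y)
print(G + h)                                         # x^2*exp(y) + sin(y); G + h = c is the general solution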
Find the values of ℓ and m such that the equation
ℓy² + mxy (dy/dx) = 0
is exact. Also, find its general solution.
Solution: In this example, we have
M = ℓy², N = mxy, ∂M/∂y = 2ℓy and ∂N/∂x = my.
Hence, for the given equation to be exact, m = 2ℓ. With this condition on ℓ and m, the equation reduces to
ℓy² + 2ℓxy (dy/dx) = 0.
This equation is not meaningful if ℓ = 0. Thus, the above equation reduces to
d/dx (xy²) = 0
whose solution is
xy² = c
Solution: Here
M = 3x²eʸ − x² and N = x³eʸ + y².
Hence, ∂M/∂y = ∂N/∂x = 3x²eʸ. Thus the given equation is exact. Therefore,
G(x, y) = ∫(3x²eʸ − x²)dx = x³eʸ − x³/3 + h(y)
(keeping y as constant). To determine h(y), we partially differentiate G(x, y) with respect to y and compare with N to get h(y) = y³/3. Hence
G(x, y) = x³eʸ − x³/3 + y³/3 = c
is the required implicit solution.
an exact equation. Such a factor (in this case, ex ) is called an integrating factor for the given
equation. Formally
Definition 7.3.6 (Integrating Factor) A function Q(x, y) is called an integrating factor for the Equation
(7.3.1), if the equation
Q(x, y)M (x, y)dx + Q(x, y)N (x, y)dy = 0
is exact.
Solution: It can be easily verified that the given equation is not exact.
Method 1: Here the terms M = 4y² + 3xy and N = −(3xy + 2x²) are homogeneous functions of degree 2. It may be checked that an integrating factor for the given differential equation is
1/(Mx + Ny) = 1/(xy(x + y)).
Hence, we need to solve the partial differential equations
∂G(x, y)/∂x = y(4y + 3x)/(xy(x + y)) = 4/x − 1/(x + y),   (7.3.5)
∂G(x, y)/∂y = −x(3y + 2x)/(xy(x + y)) = −2/y − 1/(x + y).   (7.3.6)
Method 2: Here the terms M = 4y² + 3xy and N = −(3xy + 2x²) are polynomials in x and y. Therefore, we suppose that xᵅyᵝ is an integrating factor for some α, β ∈ R. We try to find these α and β.
Multiplying the terms M(x, y) and N(x, y) with xᵅyᵝ, we get
xᵅyᵝ M(x, y) = xᵅyᵝ(4y² + 3xy), and xᵅyᵝ N(x, y) = −xᵅyᵝ(3xy + 2x²).
For the new equation to be exact, we need ∂(xᵅyᵝM)/∂y = ∂(xᵅyᵝN)/∂x. That is, the terms
4(β + 2)xᵅyᵝ⁺¹ + 3(β + 1)xᵅ⁺¹yᵝ
and
−3(1 + α)xᵅyᵝ⁺¹ − 2(2 + α)xᵅ⁺¹yᵝ
must be equal. Solving for α and β, we get α = −5 and β = 1. That is, the expression y/x⁵ is also an integrating factor for the given differential equation. This integrating factor leads to
G(x, y) = −y³/x⁴ − y²/x³ + h(y)
and
G(x, y) = −y³/x⁴ − y²/x³ + g(x).
Thus, we need h(y) = g(x) = c, for some constant c ∈ R. Hence, the required solution by this method is
y²(y + x) = cx⁴.
Remark 7.3.8 1. If Equation (7.3.1) has a general solution, then it can be shown that Equation
(7.3.1) admits an integrating factor.
2. If Equation (7.3.1) has an integrating factor, then it has many (in fact infinitely many) integrating
factors.
3. Given Equation (7.3.1), whether or not it has an integrating factor, is a tough question to settle.
4. In some cases, we use the following rules to find the integrating factors.
(a) If the equation Mdx + Ndy = 0 is homogeneous and Mx + Ny ≠ 0, then
1/(Mx + Ny)
is an Integrating Factor.
(b) If the functions M(x, y) and N(x, y) are polynomial functions in x, y, then xᵅyᵝ works as an integrating factor for some appropriate values of α and β.
(c) The equation M(x, y)dx + N(x, y)dy = 0 has e^{∫f(x)dx} as an integrating factor, if
f(x) = (1/N)(∂M/∂y − ∂N/∂x)
is a function of x alone.
(d) The equation M(x, y)dx + N(x, y)dy = 0 has e^{∫g(y)dy} as an integrating factor, if
g(y) = −(1/M)(∂M/∂y − ∂N/∂x)
is a function of y alone.
(e) For the equation
yM1(xy)dx + xN1(xy)dy = 0
with Mx − Ny ≠ 0, the function 1/(Mx − Ny) is an integrating factor.
Exercise 7.3.9 1. Show that the following equations are exact and hence solve them.
dr
(a) (r + sin + cos )+ r(cos sin ) = 0.
d
y x dy
(b) (ex ln y + ) + ( + ln x + cos y) = 0.
x y dx
dy
(x2 + xy 2 ) + {ax2 y 2 + g(x, y)} =0
dx
is exact.
3. What are the conditions on f (x), g(y), (x), and (y) so that the equation
dy
((x) + (y)) + (f (x) + g(y)) =0
dx
is exact.
4. Verify that the following equations are not exact. Further find suitable integrating factors to solve
them.
dy
(a) y + (x + x3 y 2 ) = 0.
dx
dy
(b) y 2 + (3xy + y 2 1) = 0.
dx
dy
(c) y + (x + x3 y 2 ) = 0.
dx
dy
(d) y 2 + (3xy + y 2 1) = 0.
dx
dy
(a) (x2 y + 2xy 2 ) + 2(x3 + 3x2 y) = 0 with y(1) = 0.
dx
dy
(b) y(xy + 2x2 y 2 ) + x(xy x2 y 2 ) = 0 with y(1) = 1.
dx
is called a linear equation, where y′ stands for dy/dx. Equation (7.4.1) is called linear non-homogeneous if q(x) ≠ 0 and is called linear homogeneous if q(x) ≡ 0 on I.
A first order equation is called a non-linear equation (in the dependent variable y) if it is neither a linear homogeneous nor a non-homogeneous linear equation.
In other words,
y = ce^{−P(x)} + e^{−P(x)} ∫ e^{P(x)} q(x)dx.   (7.4.2)
Remark 7.4.3 If we let P(x) = ∫_a^x p(s)ds in the above discussion, Equation (7.4.2) also represents
y = y(a)e^{−P(x)} + e^{−P(x)} ∫_a^x e^{P(s)} q(s)ds.   (7.4.3)
Proposition 7.4.4 y = ce^{−P(x)} (where c is any constant) is the general solution of the linear homogeneous equation
y′ + p(x)y = 0.   (7.4.4)
In particular, when p(x) = k is a constant, the general solution is y = ce^{−kx}, with c an arbitrary constant.
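Formula (7.4.2) is straightforward to apply, and a computer algebra system gives the same answer. A sketch (Python with SymPy) for the illustrative linear equation y′ + 2y = x, which is not taken from the notes:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x) + 2 * y(x), x), y(x))
print(sol)                                    # y(x) = C1*exp(-2*x) + x/2 - 1/4

# The same particular solution from (7.4.2) with p(x) = 2, q(x) = x, P(x) = 2x:
P = 2 * x
particular = sp.exp(-P) * sp.integrate(sp.exp(P) * x, x)
print(sp.simplify(particular))                # x/2 - 1/4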
A class of nonlinear Equations (7.4.1) (named after Bernoulli (1654 1705)) can be reduced to linear
equation. These equations are of the type
or equivalently
u + (1 a)p(x)u = (1 a)q(x), (7.4.6)
u + mu = n
(a) y′ + y = 4.
(b) y′ − 3y = 10.
(c) y′ − 2xy = 0.
(d) y′ − xy = 4x.
(e) y′ + y = eˣ.
(f) (sinh x)y′ + y cosh x = eˣ.
(g) (x² + 1)y′ + 2xy = x².
(a) y′ − 4y = 5, y(0) = 0.
(b) y′ + (1 + x²)y = 3, y(0) = 0.
(c) y′ + y = cos x, y(π) = 0.
(d) y′ − y² = 1, y(0) = 0.
(e) (1 + x)y′ + y = 2x², y(1) = 1.
4. Let y1 be a solution of y + a(x)y = b1 (x) and y2 be a solution of y + a(x)y = b2 (x). Then show that
y1 + y2 is a solution of
y + a(x)y = b1 (x) + b2 (x).
(a) y′ + 2y = y².
(b) (xy + x³eʸ)y′ = y².
(c) y′ sin(y) + x cos(y) = x.
(d) y′ − y = xy².
dy/dx = p = ∂f(x, p)/∂x + (∂f(x, p)/∂p)(dp/dx), or equivalently p = g(x, p, dp/dx).   (7.5.2)
Equation (7.5.2) can be viewed as a differential equation in p and x. We now assume that Equation (7.5.2) can be solved for p and its solution is
h(x, p, c) = 0.   (7.5.3)
If we are able to eliminate p between Equations (7.5.1) and (7.5.3), then we have an implicit solution of the Equation (7.5.1).
Solve y = 2px − xp².
Solution: Differentiating with respect to x and replacing dy/dx by p, we get
p = 2p − p² + 2x(dp/dx) − 2xp(dp/dx), or (p + 2x dp/dx)(1 − p) = 0.
So, either
p + 2x(dp/dx) = 0 or p = 1.
That is, either p²x = c or p = 1. Eliminating p from the given equation leads to an explicit solution
y = 2x√(c/x) − c = 2√(cx) − c,  or  y = x.
The first solution is a one-parameter family of solutions, giving us a general solution. The latter one is a solution but not a general solution since it is not a one parameter family of solutions.
dy/dp = (dy/dx)(dx/dp) = p(3p² − 1).
Therefore,
y = (3/4)p⁴ − (1/2)p² + c
(regarding p as a parameter). The desired solution in this case is in the parametric form, given by
x = t³ − t − 1 and y = (3/4)t⁴ − (1/2)t² + c,
where c is an arbitrary constant.
Remark 7.5.1 The readers are again informed that the methods discussed in 1), 2), 3) are more
or less ad hoc methods. It may not work in all cases.
(a) 8y = x2 + p2 .
(b) y + xp = x4 p2 .
(c) y 2 log y p2 = 2xyp.
(d) 2y + p2 + 2p = 2x(p + 1).
(e) 2y = 2x2 + 4px + p2 .
y = f (x, y) (7.6.1)
1. Does Equation (7.6.1) admit solutions at all (i.e., the existence problem)?
2. Is there a method to find solutions of Equation (7.6.1) in case the answer to the above question is
in the affirmative?
The answers to the above two questions are not simple. But there are partial answers if some
additional restrictions on the function f are imposed. The details are discussed in this section.
For a, b R with a > 0, b > 0, we define
S = {(x, y) R2 : |x x0 | a, |y y0 | b} I R.
Definition 7.6.1 (Initial Value Problems) Let f : S → R be a continuous function on S. The problem of finding a solution y of
y′ = f(x, y), y(x0) = y0   (7.6.2)
in a neighbourhood I of x0 (or an open interval I containing x0) is called an Initial Value Problem, henceforth denoted by IVP.
The condition y(x0) = y0 in Equation (7.6.2) is called the initial condition stated at x = x0, and y0 is called the initial value.
Further, we assume that a and b are finite. Let
M = max{|f(x, y)| : (x, y) ∈ S}.
Such an M exists since S is a closed and bounded set and f is a continuous function, and let h = min(a, b/M). The ensuing proposition is simple and hence the proof is omitted.
In the absence of any knowledge of a solution of IVP (7.6.2), we now try to find an approximate
solution. Any solution of the IVP (7.6.2) must satisfy the initial condition y(x0 ) = y0 . Hence, as a crude
approximation to the solution of IVP (7.6.2), we define the constant function y0(x) ≡ y0.
Now the Equation (7.6.3) appearing in Proposition 7.6.2 helps us to refine or improve the approximate solution y0 with a hope of getting a better approximate solution. We define
y1 = y0 + ∫_{x0}^{x} f(s, y0) ds
As yet we have not checked a few things, like whether the point (s, yn (s)) S or not. We formalise
the theory in the latter part of this section. To get ourselves motivated, let us apply the above method
to the following IVP.
Solution: From Proposition 7.6.2, a function y is a solution of the above IVP if and only if
y = 1 − ∫_0^x y(s) ds.
So,
y2 = 1 − ∫_0^x (1 − s) ds = 1 − x + x²/2!.
By induction, one can easily verify that
yn = 1 − x + x²/2! − x³/3! + ⋯ + (−1)ⁿ xⁿ/n!.
Note: The solution of the given IVP is y = e⁻ˣ.
This example justifies the use of the word approximate solution for the yn's.
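The iteration used above is easy to mechanise. A sketch (Python with SymPy) that generates the first few approximations for y′ = −y, y(0) = 1 and reproduces the partial sums written down above:

import sympy as sp

x, s = sp.symbols('x s')

yn = sp.Integer(1)                       # y_0(x) = 1
for _ in range(4):
    # y_{n+1}(x) = 1 + integral from 0 to x of (-y_n(s)) ds
    yn = 1 + sp.integrate(-yn.subs(x, s), (s, 0, x))
    print(sp.expand(yn))
# 1 - x, 1 - x + x**2/2, 1 - x + x**2/2 - x**3/6, ...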
We now formalise the above procedure.
Definition 7.6.4 (Picard's Successive Approximations) Consider the IVP (7.6.2). For x ∈ I with |x − x0| ≤ a, define inductively
y0(x) = y0,  y_{n+1}(x) = y0 + ∫_{x0}^{x} f(s, yn(s)) ds,  n = 0, 1, 2, ...   (7.6.4)
Whether Equation (7.6.4) is well defined or not is settled in the following proposition.
Proposition 7.6.5 The Picard approximations yn, for the IVP (7.6.2) defined by Equation (7.6.4), are well defined on the interval |x − x0| ≤ h = min{a, b/M}, i.e., for x ∈ [x0 − h, x0 + h].
Proof. We have to verify that for each n = 0, 1, 2, ..., (s, yn(s)) belongs to the domain of definition of f for |s − x0| ≤ h. This is needed because otherwise f(s, yn(s)), appearing as the integrand in Equation (7.6.4), may not be defined. For n = 0, it is obvious that (s, y0) ∈ S as |s − x0| ≤ a and |y0 − y0| = 0 ≤ b. For n = 1, we notice that, if |x − x0| ≤ h then
|y1 − y0| ≤ M|x − x0| ≤ Mh ≤ b.
So,
(x, y1) ∈ S if |x − x0| ≤ h.
Proceeding by induction, whenever |x − x0| ≤ h we get
|yn − y0| ≤ M|x − x0| ≤ Mh ≤ b.
This shows that (x, yn) ∈ S whenever |x − x0| ≤ h. Hence (x, yk) ∈ S holds for every k, and therefore the proof of the proposition is complete.
Let us again come back to Example 7.6.3 in the light of Proposition 7.6.2.
Solution: Note that x0 = 0, y0 = 1, f(x, y) = −y, and a = b = 1. The set S on which we are studying the differential equation is
S = {(x, y) : |x| ≤ 1, |y − 1| ≤ 1}.
Therefore, the approximate solutions yn are defined only on the interval [−1/2, 1/2], if we use Proposition 7.6.2.
Observe that the exact solution y = e⁻ˣ and the approximate solutions yn of Example 7.6.3 exist on [−1, 1]. But the approximate solutions as seen above are defined on the interval [−1/2, 1/2].
That is, for any IVP, the approximate solutions yn may exist on a larger interval than the one obtained by the application of Proposition 7.6.2.
We now consider another example.
We now consider another example.
Example 7.6.7 Find the Picards successive approximations for the IVP
where
f (y) = y for y 0.
A similar argument implies that yn(x) ≡ 0 for all n = 2, 3, ... and lim_{n→∞} yn(x) ≡ 0. Also, it can be easily verified that y(x) ≡ 0 is a solution of the IVP (7.6.6).
Also y(x) = x²/4, 0 ≤ x ≤ 1, is a solution of Equation (7.6.6) and the yn's do not converge to x²/4. Note here that the IVP (7.6.6) has at least two solutions.
The following result is about the existence of a unique solution to a class of IVPs. We state the
theorem without proof.
Remark 7.6.9 The theorem asserts the existence of a unique solution on a subinterval |x x0 | h of
the given interval |x x0 | a. In a way it is in a neighbourhood of x0 and so this result is also called
the local existence of a unique solution. A natural question is whether the solution exists on the whole
of the interval |x x0 | a. The answer to this question is beyond the scope of this book.
Whenever we talk of the Picards theorem, we mean it in this local sense.
Exercise 7.6.10 1. Compute the sequence {yn} of the successive approximations to the IVP
y′ = y(y − 1), y(x0) = 0, x0 ≥ 0.
2. Show that the solution of the IVP
y′ = y(y − 1), y(x0) = 1, x0 ≥ 0
is y ≡ 1, x ≥ x0.
3. The IVP
y′ = √y, y(0) = 0, x ≥ 0
has solutions y1 ≡ 0 as well as y2 = x²/4, x ≥ 0. Why does the existence of the two solutions not contradict Picard's theorem?
(a) Compute the interval of existence of the solution of the IVP by using Theorem 7.6.8.
(b) Show that y = ex is the solution of the IVP which exists on whole of R.
This again shows that the solution to an IVP may exist on a larger interval than what is being implied
by Theorem 7.6.8.
Example 7.6.11 Compute the orthogonal trajectories of the family F of curves given by
F : y² = cx³,   (7.6.7)
where c is an arbitrary constant. Differentiating (7.6.7) with respect to x gives 2yy′ = 3cx², so that
y′ = 3cx²/(2y) = (3/(2x))·(cx³/y) = 3y/(2x).   (7.6.9)
At the point (x, y), if any curve intersects orthogonally, then (if its slope is y′) we must have
y′ = −2x/(3y).
Solving this differential equation, we get
y² = −(2x²)/3 + c.
Or equivalently, y² + (2x²)/3 = c is a family of curves which intersects the given family F orthogonally.
Below, we summarize how to determine the orthogonal trajectories.
Step 1: Given the family F(x, y, c) = 0, determine the differential equation
y′ = f(x, y),   (7.6.10)
for which the given family F is a general solution. Equation (7.6.10) is obtained by eliminating the constant c appearing in F(x, y, c) = 0, using the equation obtained by differentiating this equation with respect to x.
Step 2: The differential equation for the orthogonal trajectories is then given by
y′ = −1/f(x, y).   (7.6.11)
Final Step: The general solution of Equation (7.6.11) gives the orthogonal trajectories of the given family.
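The three steps can be delegated to a computer algebra system once the differential equation of the family is known. A sketch (Python with SymPy) for Example 7.6.11, where Step 1 gave y′ = 3y/(2x):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Step 2: the orthogonal trajectories satisfy y' = -1 / (3y/(2x)) = -2x/(3y).
ode = sp.Eq(y(x).diff(x), -2 * x / (3 * y(x)))
print(sp.dsolve(ode))
# y(x) = +/- sqrt(C1 - 2*x**2/3), i.e. y^2 + 2x^2/3 = C1, as found above.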
In the following, let us go through the steps.
Example 7.6.12 Find the orthogonal trajectories of the family of straight lines
y = mx + 1,   (7.6.12)
where m is an arbitrary constant. Proceeding as above, the orthogonal trajectories satisfy
x² + y² − 2y = c,   (7.6.14)
where c is an arbitrary constant. In other words, the orthogonal trajectories of the family of straight lines (7.6.12) is the family of circles given by Equation (7.6.14).
Exercise 7.6.13 1. Find the orthogonal trajectories of the following family of curves (the constant c
appearing below is an arbitrary constant).
(a) y = x + c.
(b) x2 + y 2 = c.
(c) y 2 = x + c.
(d) y = cx2 .
(e) x2 y 2 = c.
2. Show that the one parameter family of curves y 2 = 4k(k + x), k R are self orthogonal.
3. Find the orthogonal trajectories of the family of circles passing through the points (1, 2) and (1, 2).
In this section, we study a simple method to find numerical solutions of Equation (7.7.1). The study of differential equations has two important aspects (among other features), namely, the qualitative theory and the methods of finding numerical approximations to solutions; the latter is called Numerical Methods for solving Equation (7.7.1). What is presented here is at a very rudimentary level; nevertheless, it gives a flavour of the numerical methods.
To proceed further, we assume that f is a "good function" (thereby meaning sufficiently differentiable). In such cases, we have
y(x + h) = y + hy′ + (h²/2!)y″ + ⋯
x0 < x1 < x2 < ⋯ < xn = x,
which suggests a crude approximation y(x + h) ≈ y + hf(x, y) (if h is small enough); the symbol ≈ means "approximately equal to". With this in mind, let us think of finding y(x), where y is the solution of Equation (7.7.1) and x > x0. Let h = (x − x0)/n and define
xi = x0 + ih, i = 0, 1, 2, . . . , n.
That is, we have divided the interval [x0, x] into n equal subintervals with end points x0, x1, . . . , x = xn.
Our aim is to calculate y(x). At the first step, we have y(x0 + h) ≈ y0 + hf(x0, y0). Define y1 = y0 + hf(x0, y0). The error at the first step is
|y(x0 + h) − y1| = E1.
This method of calculation of y1, y2, . . . , yn is called Euler's method. The approximate solution of Equation (7.7.1) is obtained by linear elements joining (x0, y0), (x1, y1), . . . , (xn, yn).
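Euler's method is only a few lines of code. A sketch (Python with NumPy); the test problem y′ = y, y(0) = 1 is an illustrative choice whose exact solution eˣ makes the error visible:

import numpy as np

def euler(f, x0, y0, x_end, n):
    """Euler's method: y_{i+1} = y_i + h*f(x_i, y_i) on n equal steps."""
    h = (x_end - x0) / n
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return np.array(xs), np.array(ys)

xs, ys = euler(lambda x, y: y, 0.0, 1.0, 1.0, 100)
print(ys[-1], np.exp(1.0))     # about 2.7048 versus 2.7183; the error shrinks as n grows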
Figure 7.2: Approximate solution — the polygonal curve obtained by joining (x0, y0), (x1, y1), . . . , (xn, yn) by line segments.
Chapter 8
Second Order and Higher Order Equations
8.1 Introduction
Second order and higher order equations occur frequently in science and engineering (like the pendulum problem etc.) and hence have their own importance. They also have their own flavour. We devote this chapter to an elementary introduction.
4. ax²y″ + bxy′ + cy = 0, c ≠ 0, is a homogeneous second order linear equation. This equation is called the Euler Equation of order 2. Here a, b, and c are real constants.
Definition 8.1.3 A function y defined on I is called a solution of Equation (8.1.1) if y is twice differentiable
and satisfies Equation (8.1.1).
Then for any two real number c1 , c2 , the function c1 y1 + c2 y2 is also a solution of Equation (8.1.2).
It is to be noted here that Theorem 8.1.5 is not an existence theorem. That is, it does not assert the
existence of a solution of Equation (8.1.2).
Definition 8.1.6 (Solution Space) The set of solutions of a differential equation is called the solution space.
For example, all the solutions of the Equation (8.1.2) form a solution space. Note that y(x) 0 is
also a solution of Equation (8.1.2). Therefore, the solution set of a Equation (8.1.2) is non-empty. A
moments reflection on Theorem 8.1.5 tells us that the solution space of Equation (8.1.2) forms a real
vector space.
Remark 8.1.7 The above statements also hold for any homogeneous linear differential equation. That
is, the solution space of a homogeneous linear differential equation is a real vector space.
The natural question is to inquire about its dimension. This question will be answered in a sequence
of results stated below.
We first recall the definition of Linear Dependence and Independence.
Definition 8.1.8 (Linear Dependence and Linear Independence) Let I be an interval in R and let f, g : I → R be continuous functions. The functions f and g are said to be linearly dependent if there are real numbers a and b (not both zero) such that
a f(x) + b g(x) = 0 for all x ∈ I.
The functions f(·), g(·) are said to be linearly independent if f(·), g(·) are not linearly dependent.
To proceed further and to simplify matters, we assume that p(x) ≡ 1 in Equation (8.1.2) and that the functions q(x) and r(x) are continuous on I.
In other words, we consider a homogeneous linear equation
y″ + q(x)y′ + r(x)y = 0, x ∈ I.   (8.1.3)
Theorem 8.1.9 (Picards Theorem on Existence and Uniqueness) Consider the Equation (8.1.3) along
with the conditions
y(x0 ) = A, y (x0 ) = B, for some x0 I (8.1.4)
where A and B are prescribed real constants. Then Equation (8.1.3), with initial conditions given by Equation
(8.1.4) has a unique solution on I.
Theorem 8.1.10 Let q and r be real valued continuous functions on I. Then Equation (8.1.3) has exactly
two linearly independent solutions. Moreover, if y1 and y2 are two linearly independent solutions of Equation
(8.1.3), then the solution space is a linear combination of y1 and y2 .
Proof. Let y1 and y2 be two unique solutions of Equation (8.1.3) with initial conditions
The unique solutions y1 and y2 exist by virtue of Theorem 8.1.9. We now claim that y1 and y2 are
linearly independent. Consider the system of linear equations
where and are unknowns. If we can show that the only solution for the system (8.1.6) is = = 0,
then the two solutions y1 and y2 will be linearly independent.
Use initial condition on y1 and y2 to show that the only solution is indeed = = 0. Hence the
result follows.
We now show that any solution of Equation (8.1.3) is a linear combination of y1 and y2 . Let be
any solution of Equation (8.1.3) and let d1 = (x0 ) and d2 = (x0 ). Consider the function defined by
By Definition 8.1.3, is a solution of Equation (8.1.3). Also note that (x0 ) = d1 and (x0 ) = d2 . So,
and are two solution of Equation (8.1.3) with the same initial conditions. Hence by Picards Theorem
on Existence and Uniqueness (see Theorem 8.1.9), (x) (x) or
Remark 8.1.11 1. Observe that the solution space of Equation (8.1.3) forms a real vector space of
dimension 2.
Exercise 8.1.13 1. State whether the following equations are second-order linear or second-order non-linear equations.
(a) y″ + y sin x = 5.
(b) y″ + (y′)² + y sin x = 0.
(c) y″ + yy′ = 2.
(d) (x² + 1)y″ + (x² + 1)²y′ − 5y = sin x.
2. By showing that eˣ and e⁻ˣ are solutions of
y″ − y = 0,
conclude that sinh x and cosh x are also solutions of y″ − y = 0. Do sinh x and cosh x form a fundamental set of solutions?
3. Given that {sin x, cos x} forms a basis for the solution space of y + y = 0, find another basis.
Definition 8.2.1 (General Solution) Let y1 and y2 be a fundamental system of solutions for
y = c1 y 1 + c2 y 2 , x I
where c1 and c2 are arbitrary real constants. Note that y is also a solution of Equation (8.2.1).
In other words, the general solution of Equation (8.2.1) is a 2-parameter family of solutions, the
parameters being c1 and c2 .
8.2.1 Wronskian
In this subsection, we discuss the linear independence or dependence of two solutions of Equation (8.2.1).
Definition 8.2.2 (Wronskian) Let y1 and y2 be two real valued continuously differentiable functions on an
interval I ⊆ R. For x ∈ I, define
    W(y1, y2) := det [ y1  y1' ; y2  y2' ] = y1 y2' − y1' y2.
2. Let y1 = x²|x| and y2 = x³ for x ∈ (−1, 1). Let us now compute y1' and y2'. From analysis, we know
that y1 is differentiable at x = 0 and
    y1'(x) = 3x² for x ≥ 0,  y1'(x) = −3x² for x < 0.
Therefore, for x ≥ 0,
    W(y1, y2) = det [ x³  3x² ; x³  3x² ] = 0,
and for x < 0,
    W(y1, y2) = det [ −x³  −3x² ; x³  3x² ] = 0.
That is, for all x ∈ (−1, 1), W(y1, y2) = 0.
It is also easy to note that y1 and y2 are linearly independent on (−1, 1). In fact, they are linearly independent
on any interval (a, b) containing 0.
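The example above is easy to check numerically. The following short Python sketch (not part of the
original notes; it uses the numpy library) evaluates the Wronskian on a grid and also shows why the
pair is nevertheless linearly independent: the ratio y1/y2 takes two different constant values on the
two sides of 0, so no single pair of constants works on the whole interval.

    import numpy as np

    x = np.linspace(-0.99, 0.99, 9)
    y1, dy1 = x**2 * np.abs(x), 3 * x**2 * np.sign(x)   # derivative of x^2*|x|
    y2, dy2 = x**3, 3 * x**2

    W = y1 * dy2 - dy1 * y2
    print(np.allclose(W, 0))                   # True: W(y1, y2) vanishes identically
    print(y1[x > 0] / y2[x > 0])               # ratio +1 for x > 0
    print(y1[x < 0] / y2[x < 0])               # ratio -1 for x < 0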
Given two solutions y1 and y2 of Equation (8.2.1), we have a characterisation for y1 and y2 to be
linearly independent.
Theorem 8.2.4 Let I ⊆ R be an interval. Let y1 and y2 be two solutions of Equation (8.2.1). Fix a point
x0 ∈ I. Then for any x ∈ I,
    W(y1, y2)(x) = W(y1, y2)(x0) · exp( −∫_{x0}^{x} q(s) ds ).    (8.2.3)
Consequently,
    W(y1, y2)(x0) ≠ 0  if and only if  W(y1, y2)(x) ≠ 0 for all x ∈ I.
Proof. By definition,
    W(y1, y2) = y1 y2' − y1' y2.
So,
    (d/dx) W(y1, y2) = y1 y2'' − y1'' y2    (8.2.4)
                     = y1 ( −q(x) y2' − r(x) y2 ) − ( −q(x) y1' − r(x) y1 ) y2    (8.2.5)
                     = −q(x) ( y1 y2' − y1' y2 )    (8.2.6)
                     = −q(x) W(y1, y2).    (8.2.7)
So, we have
    W(y1, y2)(x) = W(y1, y2)(x0) exp( −∫_{x0}^{x} q(s) ds ).
Remark 8.2.5 1. If the Wronskian W(y1, y2) of two solutions y1, y2 of (8.2.1) vanishes at a point
x0 ∈ I, then W(y1, y2) is identically zero on I.
2. If any two solutions y1, y2 of Equation (8.2.1) are linearly dependent (on I), then W(y1, y2) ≡ 0
on I.
Theorem 8.2.6 Let y1 and y2 be any two solutions of Equation (8.2.1). Let x0 ∈ I be arbitrary. Then y1
and y2 are linearly independent on I if and only if W(y1, y2)(x0) ≠ 0.
Proof. First, let y1 and y2 be linearly independent on I. Suppose, on the contrary, that W(y1, y2)(x0) = 0.
Then the homogeneous linear system
    d1 y1(x0) + d2 y2(x0) = 0,  d1 y1'(x0) + d2 y2'(x0) = 0    (8.2.8)
admits a non-zero solution d1, d2 (as its determinant 0 = W(y1, y2)(x0) = y1(x0) y2'(x0) − y1'(x0) y2(x0)).
Let y = d1 y1 + d2 y2. Note that Equation (8.2.8) now implies y(x0) = 0 and y'(x0) = 0.
Therefore, by Picard's Theorem on existence and uniqueness of solutions (see Theorem 8.1.9), the solu-
tion y ≡ 0 on I. That is, d1 y1 + d2 y2 ≡ 0 for all x ∈ I with |d1| + |d2| ≠ 0. That is, y1, y2 are linearly
dependent on I, a contradiction. Therefore, W(y1, y2)(x0) ≠ 0. This proves the first part.
Suppose now that W(y1, y2)(x0) ≠ 0 for some x0 ∈ I. Therefore, by Theorem 8.2.4, W(y1, y2)(x) ≠ 0 for all
x ∈ I. Suppose that c1 y1(x) + c2 y2(x) = 0 for all x ∈ I. Then also c1 y1'(x) + c2 y2'(x) = 0 for all x ∈ I.
Since x0 ∈ I, in particular, we consider the linear system of equations
    c1 y1(x0) + c2 y2(x0) = 0,  c1 y1'(x0) + c2 y2'(x0) = 0.    (8.2.9)
But then, by using Theorem 2.6.1 and the condition W(y1, y2)(x0) ≠ 0, the only solution of the linear
system (8.2.9) is c1 = c2 = 0. So, by Definition 8.1.8, y1, y2 are linearly independent.
Corollary 8.2.8 Let y1 , y2 be two linearly independent solutions of Equation (8.2.1). Let y be any solution
of Equation (8.2.1). Then there exist unique real numbers d1 , d2 such that
y = d1 y1 + d2 y2 on I.
Proof. Let x0 ∈ I and let y(x0) = a, y'(x0) = b. Here a and b are known, since the solution y is given.
Also, for any x0 ∈ I, by Theorem 8.2.6, W(y1, y2)(x0) ≠ 0 as y1, y2 are linearly independent solutions of
Equation (8.2.1). Therefore, by Theorem 2.6.1, the system of linear equations
    d1 y1(x0) + d2 y2(x0) = a,  d1 y1'(x0) + d2 y2'(x0) = b
has a unique solution d1, d2.
Exercise 8.2.9 1. Let y1 and y2 be any two linearly independent solutions of y'' + a(x) y = 0. Find
W(y1, y2).
    y'' + a(x) y' + b(x) y = 0,  x ∈ [0, 2π],
admitting y1 = sin x and y2 = x as its solutions, where a(x) and b(x) are any continuous functions
on [0, 2π]. [Hint: Use Exercise 8.2.9.2.]
    v' y1 + v ( 2 y1' + p y1 ) = 0,
which is the same as
    (d/dx)( v y1² ) = −p ( v y1² ).
This is a linear equation of order one (hence the name, reduction of order) in v, whose solution is
    v y1² = exp( −∫_{x0}^{x} p(s) ds ),  x0 ∈ I.
It is left as an exercise to show that y1, y2 are linearly independent. That is, {y1, y2} forms a funda-
mental system for Equation (8.2.1).
We illustrate the method by an example.
Example 8.2.10 Given that y1 = 1/x, x ≥ 1, is a solution of
    x² y'' + 4x y' + 2y = 0,    (8.2.11)
determine another solution y2 of (8.2.11) such that the solutions y1, y2, for x ≥ 1, are linearly independent.
Solution: With the notations used above, note that x0 = 1, p(x) = 4/x, and y2(x) = u(x) y1(x), where u
is given by
    u = ∫_{1}^{x} (1/y1(s)²) exp( −∫_{1}^{s} p(t) dt ) ds
      = ∫_{1}^{x} (1/y1(s)²) exp( −ln(s⁴) ) ds
      = ∫_{1}^{x} s²/s⁴ ds = 1 − 1/x.
So,
    y2(x) = (1/x)(1 − 1/x) = 1/x − 1/x².
Since the term 1/x already appears in y1, we can take y2 = 1/x². So, 1/x and 1/x² are the required two linearly
independent solutions of (8.2.11).
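As a quick symbolic check of Example 8.2.10, the following sketch (my own addition, using the sympy
library) verifies that both 1/x and 1/x² satisfy (8.2.11) and that their Wronskian never vanishes for
x ≥ 1.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    ode = lambda y: x**2 * sp.diff(y, x, 2) + 4*x*sp.diff(y, x) + 2*y

    y1, y2 = 1/x, 1/x**2
    print(sp.simplify(ode(y1)), sp.simplify(ode(y2)))    # 0 0: both are solutions
    print(sp.simplify(sp.wronskian([y1, y2], x)))         # -1/x**4, never zero for x >= 1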
Exercise 8.2.11 In the following, use the given solution y1 , to find another solution y2 so that the two
solutions y1 and y2 are linearly independent.
1. y'' = 0, y1 = 1, x ≥ 0.
2. y'' + 2y' + y = 0, y1 = e^{−x}, x ≥ 0.
3. x² y'' − x y' + y = 0, y1 = x, x ≥ 1.
4. x y'' + y' = 0, y1 = 1, x ≥ 1.
5. y'' + x y' − y = 0, y1 = x, x ≥ 1.
    y'' + a y' + b y = 0,    (8.3.1)
where a and b are real constants. Define
    L(y) = y'' + a y' + b y
and
    p(λ) = λ² + a λ + b.
It is easy to note that
    L(e^{λx}) = p(λ) e^{λx}.
Now, it is clear that e^{λx} is a solution of Equation (8.3.1) if and only if
    p(λ) = 0.    (8.3.2)
Equation (8.3.2) is called the characteristic equation of Equation (8.3.1). Equation (8.3.2) is a
quadratic equation and admits 2 roots (repeated roots being counted twice).
Hence, e^{λ1 x} and x e^{λ1 x} are two linearly independent solutions of Equation (8.3.1). In this case, we have
a fundamental system of solutions of Equation (8.3.1).
Case 3: Let λ = α + iβ be a complex root of Equation (8.3.2).
So, α − iβ is also a root of Equation (8.3.2). Before we proceed, we note:
Lemma 8.3.2 Let y = u + iv be a solution of Equation (8.3.1), where u and v are real valued functions.
Then u and v are solutions of Equation (8.3.1). In other words, the real part and the imaginary part of a
complex valued solution (of a real variable ODE Equation (8.3.1)) are themselves solution of Equation (8.3.1).
Proof. exercise.
    e^{αx} ( cos(βx) + i sin(βx) )
is a complex valued solution of Equation (8.3.1). By Lemma 8.3.2, y1 = e^{αx} cos(βx) and y2 = e^{αx} sin(βx) are
solutions of Equation (8.3.1). It is easy to note that y1 and y2 are linearly independent. That is,
{e^{αx} cos(βx), e^{αx} sin(βx)} forms a fundamental system of solutions of Equation (8.3.1).
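The three cases above are easy to mechanise. The following Python sketch (my own illustration; the
helper name and its output format are not from the notes) reads off a fundamental system from the
roots of the characteristic equation of y'' + a y' + b y = 0.

    import numpy as np

    def fundamental_system(a, b):
        # Roots of the characteristic equation l^2 + a*l + b = 0.
        l1, l2 = np.roots([1, a, b])
        if abs(l1.imag) > 1e-12:                 # complex pair alpha +/- i*beta (Case 3)
            al, be = l1.real, abs(l1.imag)
            return [f"exp({al}x)*cos({be}x)", f"exp({al}x)*sin({be}x)"]
        l1, l2 = l1.real, l2.real
        if abs(l1 - l2) < 1e-12:                 # repeated real root (Case 2)
            return [f"exp({l1}x)", f"x*exp({l1}x)"]
        return [f"exp({l1}x)", f"exp({l2}x)"]    # distinct real roots (Case 1)

    print(fundamental_system(-4, 3))   # y'' - 4y' + 3y = 0  ->  e^{3x}, e^{x}
    print(fundamental_system(2, 2))    # y'' + 2y' + 2y = 0  ->  e^{-x}cos x, e^{-x}sin x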
(a) y'' − 4y' + 3y = 0.
(b) 2y'' + 5y = 0.
(c) y'' − 9y = 0.
(d) y'' + k² y = 0, where k is a real constant.
(a) y'' − 5y' = 0.
(b) y'' + 6y' + 5y = 0.
(c) y'' + 5y = 0.
    L(y) = f    (8.4.3)
and
    L(y) = 0.    (8.4.4)
2. Let z be any solution of (8.4.1) on I and let z1 be any solution of (8.4.2). Then y = z + z1 is a solution
of (8.4.1) on I.
Proof. Observe that L is a linear transformation on the set of twice differentiable functions on I. We
therefore have
    L(y1) = f and L(y2) = f.
The linearity of L implies that L(y1 − y2) = 0 or, equivalently, y = y1 − y2 is a solution of (8.4.2).
For the proof of the second part, note that
    L(z) = f and L(z1) = 0
implies that
    L(z + z1) = L(z) + L(z1) = f.
Thus, y = z + z1 is a solution of (8.4.1).
The above result leads us to the following definition.
Definition 8.4.2 (General Solution) A general solution of (8.4.1) on I is a solution of (8.4.1) of the form
    y = yh + yp,  x ∈ I,
where yh is a general solution of the corresponding homogeneous equation (8.4.2) and yp is a particular
solution of (8.4.1).
We now prove that the solution of (8.4.1) with initial conditions is unique.
Theorem 8.4.3 (Uniqueness) Suppose that x0 ∈ I. Let y1 and y2 be two solutions of the IVP
Remark 8.4.4 The above results tell us that to solve (i.e., to find the general solution of) (8.4.1) or the
IVP (8.4.5), we need to find the general solution of the homogeneous equation (8.4.2) and a particular
solution yp of (8.4.1). To repeat, the two steps needed to solve (8.4.1) are:
Step 1. Compute the general solution yh of the homogeneous equation (8.4.2), and
Step 2. Compute a particular solution yp of (8.4.1). Then yh + yp is the general solution of (8.4.1).
Step 1. has been dealt with in the previous sections. The remainder of the section is devoted to Step 2., i.e.,
we elaborate some methods for computing a particular solution yp of (8.4.1).
3. Let f1(x) and f2(x) be two continuous functions. Let the yi's be particular solutions of
    y'' + q(x) y' + r(x) y = fi(x),  i = 1, 2,
where q(x) and r(x) are continuous functions. Show that y1 + y2 is a particular solution of
y'' + q(x) y' + r(x) y = f1(x) + f2(x).
on I, where q(x) and r(x) are arbitrary continuous functions defined on I. Then we know that
    y = c1 y1 + c2 y2
is a solution of (8.5.1) for any constants c1 and c2. We now vary c1 and c2 to functions of x, so that
    y = u(x) y1 + v(x) y2,  x ∈ I,    (8.5.2)
is a solution of the non-homogeneous equation
    y'' + q(x) y' + r(x) y = f(x),  x ∈ I,    (8.5.3)
where f is a piecewise continuous function defined on I. The details are given in the following theorem.
Theorem 8.5.1 (Method of Variation of Parameters) Let q(x) and r(x) be continuous functions defined
on I and let f be a piecewise continuous function on I. Let y1 and y2 be two linearly independent solutions
of (8.5.1) on I. Then a particular solution yp of (8.5.3) is given by
    yp = −y1 ∫ ( y2 f(x) / W ) dx + y2 ∫ ( y1 f(x) / W ) dx,    (8.5.4)
where W = W(y1, y2) is the Wronskian of y1 and y2. (Note that the integrals in (8.5.4) are the indefinite
integrals of the respective arguments.)
Proof. Let u(x) and v(x) be continuously differentiable functions (to be determined) such that
    yp = u y1 + v y2,  x ∈ I,    (8.5.5)
is a particular solution of (8.5.3), subject to the condition
    u' y1 + v' y2 = 0,  x ∈ I.    (8.5.7)
Since yp is a particular solution of (8.5.3), substitution of (8.5.5) and (8.5.8) in (8.5.3) gives
    u ( y1'' + q(x) y1' + r(x) y1 ) + v ( y2'' + q(x) y2' + r(x) y2 ) + u' y1' + v' y2' = f(x).
As y1 and y2 are solutions of the homogeneous equation (8.5.1), we obtain the condition
    u' y1' + v' y2' = f(x).    (8.5.9)
We now determine u and v from (8.5.7) and (8.5.9). By using Cramer's rule for a linear system of
equations, we get
    u' = −y2 f(x) / W  and  v' = y1 f(x) / W    (8.5.10)
(note that y1 and y2 are linearly independent solutions of (8.5.1) and hence the Wronskian, W ≠ 0 for
any x ∈ I). Integration of (8.5.10) gives us
    u = −∫ ( y2 f(x) / W ) dx  and  v = ∫ ( y1 f(x) / W ) dx    (8.5.11)
( without loss of generality, we set the values of integration constants to zero). Equations (8.5.11) and
(8.5.5) yield the desired results. Thus the proof is complete.
Before, we move onto some examples, the following comments are useful.
Remark 8.5.2 1. The integrals in (8.5.11) exist, because y2 and W (≠ 0) are continuous functions
and f is a piecewise continuous function. Sometimes, it is useful to write (8.5.11) in the form
    u = −∫_{x0}^{x} ( y2(s) f(s) / W(s) ) ds  and  v = ∫_{x0}^{x} ( y1(s) f(s) / W(s) ) ds,
where x ∈ I and x0 is a fixed point in I. In such a case, the particular solution yp as given by
(8.5.4) assumes the form
    yp = −y1 ∫_{x0}^{x} ( y2(s) f(s) / W(s) ) ds + y2 ∫_{x0}^{x} ( y1(s) f(s) / W(s) ) ds    (8.5.12)
for a fixed point x0 ∈ I and for any x ∈ I.
2. Again, we stress here that q and r are assumed to be continuous; they need not be constants.
Also, f is a piecewise continuous function on I.
3. A word of caution: while using (8.5.4), one has to keep in mind that the coefficient of y'' in (8.5.3)
is 1.
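Formula (8.5.4) is also easy to evaluate symbolically. The sketch below (my own addition, using the
sympy library; the function name is illustrative) implements (8.5.4) and applies it to the equation
treated in the example that follows, y'' + y = 1/(2 + sin x), with y1 = sin x and y2 = cos x.

    import sympy as sp

    x = sp.symbols('x')

    def particular_solution(y1, y2, f):
        # yp = -y1*Int(y2*f/W) + y2*Int(y1*f/W), W the Wronskian of y1, y2.
        W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
        u = -sp.integrate(y2 * f / W, x)
        v = sp.integrate(y1 * f / W, x)
        return sp.simplify(u * y1 + v * y2)

    f = 1 / (2 + sp.sin(x))
    yp = particular_solution(sp.sin(x), sp.cos(x), f)
    residual = sp.diff(yp, x, 2) + yp - f
    print(residual.subs(x, 0.7).evalf())   # numerically ~0: yp solves y'' + y = f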
Consider, for example, the equation y'' + y = 1/(2 + sin x), x ∈ [0, ∞). The general solution of the
corresponding homogeneous equation y'' + y = 0 is
    yh = c1 cos x + c2 sin x.
Here, the solutions y1 = sin x and y2 = cos x are linearly independent over I = [0, ∞) and W =
W(sin x, cos x) = −1. Therefore, a particular solution, yp, by Theorem 8.5.1, is
    yp = −y1 ∫ ( y2 / (2 + sin x) ) / W dx + y2 ∫ ( y1 / (2 + sin x) ) / W dx
       = sin x ∫ ( cos x / (2 + sin x) ) dx − cos x ∫ ( sin x / (2 + sin x) ) dx
       = sin x · ln(2 + sin x) − cos x ( x − 2 ∫ ( 1 / (2 + sin x) ) dx ).    (8.5.13)
So, the required general solution is
    y = c1 cos x + c2 sin x + yp,
with yp as in Equation (8.5.13).
is a linear differential operator of order n with constant coefficients, a1, a2, . . . , an being real constants
(called the coefficients of the linear equation), and the function f(x) is a piecewise continuous function
defined on the interval I. We will be using the notation y^(n) for the nth derivative of y. If f(x) ≡ 0, then
(8.6.1) reduces to
    Ln(y) = 0 on I,    (8.6.2)
which is called a homogeneous linear equation; otherwise (8.6.1) is called a non-homogeneous linear equation.
The function f is also known as the non-homogeneous term or a forcing term.
Definition 8.6.1 A function y defined on I is called a solution of (8.6.1) if y is n times differentiable and
y along with its derivatives satisfy (8.6.1).
Remark 8.6.2 1. If u and v are any two solutions of (8.6.1), then y = u − v is also a solution of
(8.6.2). Hence, if v is a solution of (8.6.2) and yp is a solution of (8.6.1), then u = v + yp is a
solution of (8.6.1).
2. Let y1 and y2 be two solutions of (8.6.2). Then for any constants (need not be real) c1, c2,
    y = c1 y1 + c2 y2
is also a solution of (8.6.2). This is the super-position principle.
3. Note that y ≡ 0 is a solution of (8.6.2). This, along with the super-position principle, ensures that
the set of solutions of (8.6.2) forms a vector space over R. This vector space is called the solution
space or space of solutions of (8.6.2).
As in Section 8.3, we first take up the study of (8.6.2). It is easy to note (as in Section 8.3) that for
a constant λ,
    Ln(e^{λx}) = p(λ) e^{λx},
where
    p(λ) = λ^n + a1 λ^{n−1} + · · · + an.    (8.6.3)
Definition 8.6.3 (Characteristic Equation) The equation p(λ) = 0, where p(λ) is defined in (8.6.3), is
called the characteristic equation of (8.6.2).
Note that p(λ) is a polynomial of degree n with real coefficients. Thus, it has n zeros (counting with
multiplicities). Also, in case of complex roots, they will occur in conjugate pairs. In view of this, we
have the following theorem. The proof of the theorem is omitted.
Theorem 8.6.4 e^{λx} is a solution of (8.6.2) on any interval I ⊆ R if and only if λ is a root of (8.6.3).
1. If λ1, λ2, . . . , λn are the distinct roots of p(λ) = 0, then
    e^{λ1 x}, e^{λ2 x}, . . . , e^{λn x}
are n linearly independent solutions of (8.6.2).
2. If λ1 is a repeated root of p(λ) = 0 of multiplicity k, i.e., λ1 is a zero of (8.6.3) repeated k times, then
    e^{λ1 x}, x e^{λ1 x}, . . . , x^{k−1} e^{λ1 x}
are linearly independent solutions of (8.6.2) corresponding to the root λ1.
3. If λ1 = α + iβ is a complex root of p(λ) = 0, then so is its conjugate α − iβ, and the corresponding
solutions y1 = e^{(α+iβ)x} and y2 = e^{(α−iβ)x} are complex valued functions of x. However, using the
super-position principle, we note that
    (y1 + y2)/2 = e^{αx} cos(βx)  and  (y1 − y2)/(2i) = e^{αx} sin(βx)
are also solutions of (8.6.2). Thus, in the case of λ1 = α + iβ being a complex root of p(λ) = 0, we
have the linearly independent solutions e^{αx} cos(βx) and e^{αx} sin(βx).
    y''' − 6y'' + 11y' − 6y = 0.
Its characteristic equation is
    p(λ) = λ³ − 6λ² + 11λ − 6 = 0.
By inspection, the roots of p(λ) = 0 are λ = 1, 2, 3. So, the linearly independent solutions are e^x, e^{2x}, e^{3x}
and the solution space is
    { c1 e^x + c2 e^{2x} + c3 e^{3x} : c1, c2, c3 ∈ R }.
    y''' − 2y'' + y' = 0.
Its characteristic equation is
    p(λ) = λ³ − 2λ² + λ = 0.
By inspection, the roots of p(λ) = 0 are λ = 0, 1, 1. So, the linearly independent solutions are 1, e^x, x e^x
and the solution space is
    { c1 + c2 e^x + c3 x e^x : c1, c2, c3 ∈ R }.
    y^(4) + 2y'' + y = 0.
Its characteristic equation is
    p(λ) = λ⁴ + 2λ² + 1 = 0.
By inspection, the roots of p(λ) = 0 are λ = i, i, −i, −i. So, the linearly independent solutions are
sin x, x sin x, cos x, x cos x and the solution space is
    { (c1 + c2 x) cos x + (c3 + c4 x) sin x : c1, c2, c3, c4 ∈ R }.
From the above discussion, it is clear that the linear homogeneous equation (8.6.2) admits n lin-
early independent solutions, since the algebraic equation p(λ) = 0 has exactly n roots (counting with
multiplicity).
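The rules of Theorem 8.6.4 can be applied mechanically once the roots of p(λ) = 0 are known. The
following Python sketch (my own illustration, using the sympy library; the function name is an
assumption) lists a fundamental set of solutions for a constant-coefficient homogeneous equation.

    import sympy as sp

    x = sp.symbols('x')
    lam = sp.Symbol('lamda')

    def fundamental_solutions(p):
        sols = []
        for root, mult in sp.roots(sp.Poly(p, lam)).items():
            a, b = sp.re(root), sp.im(root)
            for k in range(mult):
                if b == 0:
                    sols.append(x**k * sp.exp(a * x))
                elif b > 0:          # one cos/sin pair per conjugate pair of roots
                    sols.append(x**k * sp.exp(a * x) * sp.cos(b * x))
                    sols.append(x**k * sp.exp(a * x) * sp.sin(b * x))
        return sols

    print(fundamental_solutions(lam**3 - 6*lam**2 + 11*lam - 6))  # e^x, e^{2x}, e^{3x}
    print(fundamental_solutions(lam**4 + 2*lam**2 + 1))           # cos x, sin x, x cos x, x sin x (some order)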
Definition 8.6.6 (General Solution) Let y1, y2, . . . , yn be any set of n linearly independent solutions of
(8.6.2). Then
    y = c1 y1 + c2 y2 + · · · + cn yn
is called a general solution of (8.6.2), where c1, c2, . . . , cn are arbitrary real constants.
Example 8.6.7 Find the general solution of y''' + y'' + y' + y = 0.
Solution: Note that the roots of the characteristic equation λ³ + λ² + λ + 1 = 0 are −1, i, −i. So,
the general solution is
    y = c1 e^{−x} + c2 sin x + c3 cos x.
Exercise 8.6.8 1. Find the general solution of the following differential equations:
(a) y + y = 0.
(b) y + 5y 6y = 0.
(c) y^(4) + 2y'' + y = 0.
2. Find a linear differential equation with constant coefficients and of order 3 which admits the following
solutions:
An equation of the form
    x^n (d^n y/dx^n) + a_{n−1} x^{n−1} (d^{n−1} y/dx^{n−1}) + · · · + a0 y = 0,  x ∈ I,    (8.6.4)
is called the homogeneous Euler-Cauchy Equation (or just Euler's Equation) of degree n. (8.6.4) is also
called the standard form of the Euler equation. We define
    L(y) = x^n (d^n y/dx^n) + a_{n−1} x^{n−1} (d^{n−1} y/dx^{n−1}) + · · · + a0 y.
Then substituting y = x^λ in (8.6.4) leads to the equation
    λ(λ−1)···(λ−n+1) + a_{n−1} λ(λ−1)···(λ−n+2) + · · · + a0 = 0.    (8.6.5)
Essentially, for finding the solutions of (8.6.4), we need to find the roots of (8.6.5), which is a polynomial
in λ. With the above understanding, solve the following homogeneous Euler equations:
5. Consider the Euler equation (8.6.4) with x > 0 and x ∈ I. Let x = e^t, or equivalently t = ln x. Let
D = d/dt and d = d/dx. Then
(a) show that x d(y(x)) = D y(t), or equivalently x (dy/dx) = dy/dt;
(b) using mathematical induction, show that x^n (d^n y/dx^n) = D(D−1)···(D−n+1) y(t);
(c) with the new (independent) variable t, the Euler equation (8.6.4) reduces to an equation with
constant coefficients. So, the questions in the above part can be solved by the method just
explained.
We turn our attention toward the non-homogeneous equation (8.6.1). If yp is any solution of (8.6.1)
and if yh is the general solution of the corresponding homogeneous equation (8.6.2), then
y = yh + yp
is a solution of (8.6.1). The solution y involves n arbitrary constants. Such a solution is called the
general solution of (8.6.1).
Solving an equation of the form (8.6.1) usually means to find a general solution of (8.6.1). The
solution yp is called a particular solution which may not involve any arbitrary constants. Solving
(8.6.1) essentially involves two steps (as we had seen in detail in Section 8.3).
Step 1: Calculation of the homogeneous solution yh, and
Step 2: Calculation of the particular solution yp.
In the ensuing discussion, we describe the method of undetermined coefficients to determine yp. Note
that a particular solution is not unique. In fact, if yp is a solution of (8.6.1) and u is any solution of
(8.6.2), then yp + u is also a solution of (8.6.1). The undetermined coefficients method is applicable only
when f(x) is one (or a linear combination) of the following special forms:
1. f(x) = k e^{αx};
2. f(x) = e^{αx} ( k1 cos(βx) + k2 sin(βx) );
3. f(x) = x^m.
Case I. f(x) = k e^{αx}; k ≠ 0, α a real constant.
We first assume that α is not a root of the characteristic equation, i.e., p(α) ≠ 0, and take yp of the form
    yp = A e^{αx},
where A is a constant to be determined. Then
    Ln(yp) = A p(α) e^{αx}.
Since p(α) ≠ 0, we can choose A = k/p(α) to obtain
    Ln(yp) = k e^{αx}.
Thus, yp = ( k/p(α) ) e^{αx} is a particular solution of Ln(y) = k e^{αx}.
Modification Rule: If α is a root of the characteristic equation, i.e., p(α) = 0, with multiplicity r
(i.e., p(α) = p'(α) = · · · = p^{(r−1)}(α) = 0 and p^{(r)}(α) ≠ 0), then we take yp of the form
    yp = A x^r e^{αx}.
Example 8.7.1 1. Find a particular solution of
    y'' − 4y = 2e^x.
Solution: Here f(x) = 2e^x with k = 2 and α = 1. Also, the characteristic polynomial is p(λ) = λ² − 4.
Note that α = 1 is not a root of p(λ) = 0. Thus, we assume yp = A e^x. This on substitution gives
    A e^x − 4A e^x = 2e^x, i.e., −3A = 2.
Solving for A, we get A = −2/3, and thus a particular solution is yp = −(2/3) e^x.
Solution: The characteristic polynomial is p(λ) = λ³ − 2 and α = 2. Thus, using yp = A e^{2x}, we get
    A = 1/p(2) = 1/6, and hence a particular solution is yp = e^{2x}/6.
4. Solve y''' − 3y'' + 3y' − y = 2e^{2x}.
Exercise 8.7.2 Find a particular solution for the following differential equations:
1. y'' − 3y' + 2y = e^x.
2. y'' − 9y = e^{3x}.
3. y''' − 3y'' + 6y' − 4y = e^{2x}.
Case II. f(x) = e^{αx} ( k1 cos(βx) + k2 sin(βx) ); k1, k2, α, β ∈ R.
We first assume that α + iβ is not a root of the characteristic equation, i.e., p(α + iβ) ≠ 0. Here, we
assume that yp is of the form
    yp = e^{αx} ( A cos(βx) + B sin(βx) ),
and then, comparing the coefficients of e^{αx} cos(βx) and e^{αx} sin(βx) (why!) in Ln(y) = f(x), obtain the values
of A and B.
Modification Rule: If α + iβ is a root of the characteristic equation, i.e., p(α + iβ) = 0, with multiplicity
r, then we assume a particular solution of the form
    yp = x^r e^{αx} ( A cos(βx) + B sin(βx) ),
and then, comparing the coefficients in Ln(y) = f(x), obtain the values of A and B.
Example 8.7.3 1. Find a particular solution of
    y'' + 2y' + 2y = 4e^x sin x.
Solution: Here f(x) = 4e^x sin x, so α = 1 and β = 1. Since p(λ) = λ² + 2λ + 2 and p(1 + i) = 4 + 4i ≠ 0,
no modification is needed and we assume yp = e^x ( A cos x + B sin x ).
Comparing the coefficients of e^x cos x and e^x sin x on both sides, we get B − A = 1 and A + B = 0.
On solving for A and B, we get A = −B = −1/2. So, a particular solution is yp = (e^x/2)( sin x − cos x ).
2. Find a particular solution of
    y'' + y = sin x.
Exercise 8.7.4 Find a particular solution for the following differential equations:
1. y''' − y'' + y' − y = e^x cos x.
2. y'' + 2y' + y = sin x.
3. y'' − 2y' + 2y = e^x cos x.
Case III. f(x) = x^m.
If λ = 0 is not a root of the characteristic equation, i.e., p(0) ≠ 0, we assume that yp is of the form
    yp = Am x^m + A_{m−1} x^{m−1} + · · · + A0,
and then compare the coefficients of x^k in Ln(yp) = f(x) to obtain the values of Ai for 0 ≤ i ≤ m.
Modification Rule: If λ = 0 is a root of the characteristic equation, i.e., p(0) = 0, with multiplicity r,
then we assume a particular solution of the form
    yp = x^r ( Am x^m + A_{m−1} x^{m−1} + · · · + A0 )
and then compare the coefficients of x^k in Ln(yp) = f(x) to obtain the values of Ai for 0 ≤ i ≤ m.
Find a particular solution of
    y''' − y'' + y' − y = x².
Finally, note that if yp1 is a particular solution of Ln(y) = f1(x) and yp2 is a particular solution of
Ln(y) = f2(x), then a particular solution of
    Ln(y) = k1 f1(x) + k2 f2(x)
is given by
    yp = k1 yp1 + k2 yp2.
In view of this, one can use the method of undetermined coefficients for the cases where f(x) is a linear
combination of the functions described above.
1. y'' + y = 2 sin x.
2. y'' + y = sin(2x).
For the first problem, a particular solution (see Example 8.7.3.2) is
    yp1 = 2 · ( −(1/2) x cos x ) = −x cos x.
For the second problem, one can check that
    yp2 = −(1/3) sin(2x)
is a particular solution.
Thus, a particular solution of the given problem is
    yp1 + yp2 = −x cos x − (1/3) sin(2x).
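The particular solution assembled above can be verified by direct substitution. The following sketch
(my own addition, using the sympy library) checks that yp = −x cos x − sin(2x)/3 indeed satisfies
y'' + y = 2 sin x + sin(2x).

    import sympy as sp

    x = sp.symbols('x')
    yp = -x*sp.cos(x) - sp.sin(2*x)/3
    print(sp.simplify(sp.diff(yp, x, 2) + yp - (2*sp.sin(x) + sp.sin(2*x))))   # prints 0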
Exercise 8.7.7 Find a particular solution for the following differential equations:
2. y'' + 2y' + y = x + e^{−x}.
3. y'' + 3y' − 4y = 4e^x + e^{−4x}.
4. y'' + 9y = cos x + x² + x³.
5. y'' − 3y' + 4y = x² + e^{2x} sin x.
6. y^(4) + 4y''' + 6y'' + 4y' + 5y = 2 sin x + x².
Chapter 9
9.1 Introduction
In the previous chapter, we had a discussion on the methods of solving
    y'' + a y' + b y = f(x),
where a, b were real numbers and f was a real valued continuous function. We also looked at Euler
Equations which can be reduced to the above form. The natural question is:
what if a and b are functions of x?
In this chapter, we have a partial answer to the above question. In general, there is no method of
finding a solution of an equation of the form
    y'' + q(x) y' + r(x) y = 0,  x ∈ I,
where q(x) and r(x) are real valued continuous functions defined on an interval I ⊆ R. In such a
situation, we look for a class of functions q(x) and r(x) for which we may be able to solve. One such
class of functions is called the set of analytic functions.
Definition 9.1.1 (Power Series) Let x0 ∈ R and a0, a1, . . . , an, . . . ∈ R be fixed. An expression of the type
    Σ_{n=0}^{∞} an (x − x0)^n    (9.1.1)
is called a power series in x around x0. The point x0 is called the center, and the an's are called the coefficients.
In short, a0, a1, . . . , an, . . . are called the coefficients of the power series and x0 is called the center.
Note here that an ∈ R is the coefficient of (x − x0)^n and that the power series converges for x = x0. So,
the set
    S = { x ∈ R : Σ_{n=0}^{∞} an (x − x0)^n converges }
is non-empty. It turns out that the set S is an interval in R. We are thus led to the following definition.
2. Any polynomial
a0 + a1 x + a2 x2 + + an xn
is a power series with x0 = 0 as the center, and the coefficients am = 0 for m ≥ n + 1.
Definition 9.1.3 (Radius of Convergence) A real number R ≥ 0 is called the radius of convergence of the
power series (9.1.1) if the expression (9.1.1) converges for all x satisfying |x − x0| < R and diverges for
all x satisfying |x − x0| > R.
From what has been said earlier, it is clear that the set of points x where the power series (9.1.1) is
convergent contains the interval (x0 − R, x0 + R), whenever R is the radius of convergence. If R = 0, the
power series is convergent only at x = x0.
Let R > 0 be the radius of convergence of the power series (9.1.1). Let I = (x0 − R, x0 + R). In
the interval I, the power series (9.1.1) converges. Hence, it defines a real valued function and we denote
it by f(x), i.e.,
    f(x) = Σ_{n=0}^{∞} an (x − x0)^n,  x ∈ I.
Such a function is well defined as long as x ∈ I. f is called the function defined by the power series
(9.1.1) on I. Sometimes, we also use the terminology that (9.1.1) induces a function f on I.
It is a natural question to ask how to find the radius of convergence of a power series (9.1.1). We
state one such result below but we do not intend to give a proof.
Theorem 9.1.4 1. Let Σ an (x − x0)^n be a power series with center x0. Then there exists a real number
R ≥ 0 such that
    Σ an (x − x0)^n converges for all x ∈ (x0 − R, x0 + R)
and diverges for all x with |x − x0| > R. In this case, the power series Σ an (x − x0)^n converges
absolutely and uniformly on |x − x0| ≤ r for every r < R.
2. If lim_{n→∞} |an|^{1/n} exists and equals ℓ, then R = 1/ℓ (with R = ∞ when ℓ = 0).
Remark 9.1.5 If the reader is familiar with the concept of lim sup of a sequence, then we have a
modification of the above theorem.
In case |an|^{1/n} does not tend to a limit as n → ∞, the above theorem holds if we replace
lim_{n→∞} |an|^{1/n} by lim sup_{n→∞} |an|^{1/n}.
P
Example 9.1.6 1. Consider the power series (x + 1)n . Here x0 = 1 is the center and an = 1 for all
p n=0
n 0. So, n |an | = n 1 = 1. Hence, by Theorem 9.1.4, the radius of convergenceR = 1.
X (1)n (x + 1)2n+1
2. Consider the power series . In this case, the center is
(2n + 1)!
n0
(1)n
x0 = 1, an = 0 for n even and a2n+1 = .
(2n + 1)!
So,
p
2n+1
p
2n
lim |a2n+1 | = 0 and lim |a2n | = 0.
n n
p
Thus, lim n |an | exists and equals 0. Therefore, the power series converges for all x R. Note that
n
the series converges to sin(x + 1).
P
3. Consider the power series x2n . In this case, we have
n=1
So,
p p
lim 2n+1
|a2n+1 | = 0 and lim |a2n | = 1.
2n
n n
p
Thus, lim n |an | does not exist.
n
P
P
We let u = x2 . Then the power series x2n reduces to un . But then from Example 9.1.6.1, we
n=1 n=1
P
learned that un converges for all u with |u| < 1. Therefore, the original power series converges
n=1
whenever |x2 | < 1 or equivalently whenever |x| < 1. So, the radius of convergence is R = 1. Note that
1 X
= x2n for |x| < 1.
1 x2 n=1
X p
4. Consider the power series nn xn . In this case, n
|an | = n nn = n. doesnt have any finite limit as
n0
n . Hence, the power series converges only for x = 0.
n1
X xn 1 1
5. The power series has coefficients an = and it is easily seen that lim = 0 and the
n! n! n n!
n0
power series converges for all x R. Recall that it represents ex .
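The root test used in the last two examples is easy to explore numerically. The following sketch (my
own addition, in Python) computes |an|^{1/n} for a few values of n: for Σ x^n/n! the roots decrease
towards 0 (so R = ∞), while for Σ n^n x^n one gets |an|^{1/n} = n, which is unbounded (so R = 0).

    import numpy as np
    from math import factorial

    ns = np.array([5, 10, 20, 40])
    print([round((1.0 / factorial(k)) ** (1.0 / k), 4) for k in ns])   # tends to 0
    print([round((float(k) ** k) ** (1.0 / k), 4) for k in ns])        # equals n itself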
Definition 9.1.7 Let f : I → R be a function and x0 ∈ I. f is called analytic around x0 if there exists a
δ > 0 such that
    f(x) = Σ_{n≥0} an (x − x0)^n  for every x with |x − x0| < δ.
with radius of convergence R1 > 0 and R2 > 0, respectively. Let F(x) and G(x) be the functions defined
by the two power series, defined for all x ∈ I, where I = (x0 − R, x0 + R) with R = min{R1, R2}. Note
that both the power series converge for all x ∈ I.
With F (x), G(x) and I as defined above, we have the following properties of the power series.
If Σ an (x − x0)^n = Σ bn (x − x0)^n for all x ∈ I, then
    an = bn for all n = 0, 1, 2, . . . .
In particular, if Σ_{n=0}^{∞} an (x − x0)^n = 0 for all x ∈ I, then
    an = 0 for all n = 0, 1, 2, . . . .
The power series for F may be differentiated term by term inside its region of convergence, since
    lim_{n→∞} |n an|^{1/n} = lim_{n→∞} n^{1/n} · lim_{n→∞} |an|^{1/n} = 1/R1.
Let 0 < r < R1. Then for all x ∈ (x0 − r, x0 + r), we have
    (d/dx) F(x) = F'(x) = Σ_{n=1}^{∞} n an (x − x0)^{n−1}.
In other words, inside the region of convergence, the power series can be differentiated term by
term.
In the following, we shall consider power series with x0 = 0 as the center. Note that by a transfor-
mation of X = x x0 , the center of the power series can be shifted to the origin.
Exercise 9.1.1 1. Which of the following represents a power series (with center x0 indicated in the brack-
ets) in x?
2. Let
    f(x) = x − x³/3! + x⁵/5! − · · · + (−1)^n x^{2n+1}/(2n+1)! + · · ·
and
    g(x) = 1 − x²/2! + x⁴/4! − · · · + (−1)^n x^{2n}/(2n)! + · · · .
Find the radius of convergence of f(x) and g(x). Also, for each x in the domain of convergence, show
that
    f'(x) = g(x)  and  g'(x) = −f(x).
[Hint: Use Properties 1, 2, 3 and 4 mentioned above. Also, note that we usually call f(x) by sin x
and g(x) by cos x.]
Let a and b be analytic around the point x0 = 0. In such a case, we may hope to have a solution y in
terms of a power series, say
    y = Σ_{k=0}^{∞} ck x^k.    (9.2.2)
In the absence of any information, let us assume that (9.2.1) has a solution y represented by (9.2.2). We
substitute (9.2.2) in Equation (9.2.1) and try to find the values of the ck's. Let us take up an example for
illustration.
Consider the differential equation
    y'' + y = 0.    (9.2.3)
Suppose y = Σ_{n=0}^{∞} cn x^n is a solution of (9.2.3). Then
    y' = Σ_{n=0}^{∞} n cn x^{n−1}  and  y'' = Σ_{n=0}^{∞} n(n−1) cn x^{n−2}.
Substituting the expressions for y, y' and y'' in Equation (9.2.3), we get
    Σ_{n=0}^{∞} n(n−1) cn x^{n−2} + Σ_{n=0}^{∞} cn x^n = 0,
or, equivalently,
    0 = Σ_{n=0}^{∞} (n+2)(n+1) c_{n+2} x^n + Σ_{n=0}^{∞} cn x^n = Σ_{n=0}^{∞} { (n+1)(n+2) c_{n+2} + cn } x^n.
Hence, (n+1)(n+2) c_{n+2} + cn = 0 for every n ≥ 0, and therefore
    c2 = −c0/2!,          c3 = −c1/3!,
    c4 = (−1)² c0/4!,     c5 = (−1)² c1/5!,
    . . .
    c_{2n} = (−1)^n c0/(2n)!,   c_{2n+1} = (−1)^n c1/(2n+1)!.
Thus,
    y = c0 ( 1 − x²/2! + x⁴/4! − · · · ) + c1 ( x − x³/3! + x⁵/5! − · · · ),
or y = c0 cos(x) + c1 sin(x), where c0 and c1 can be chosen arbitrarily. For c0 = 1 and c1 = 0, we get
y = cos(x). That is, cos(x) is a solution of Equation (9.2.3). Similarly, y = sin(x) is also a solution of
Equation (9.2.3).
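The recurrence (n+1)(n+2) c_{n+2} + cn = 0 derived above is easy to iterate on a computer. The
following short sketch (my own addition, in Python) generates the first few coefficients for c0 = 1,
c1 = 0 and compares them with the Taylor coefficients of cos x.

    from math import factorial

    def series_coeffs(c0, c1, n_terms):
        # Coefficients of a power series solution of y'' + y = 0.
        c = [c0, c1]
        for n in range(n_terms - 2):
            c.append(-c[n] / ((n + 1) * (n + 2)))
        return c

    print(series_coeffs(1, 0, 8))
    print([(-1) ** (k // 2) / factorial(k) if k % 2 == 0 else 0.0 for k in range(8)])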
Exercise 9.2.2 Assuming that the solutions y of the following differential equations admit power series
representation, find y in terms of a power series.
1. y = y, (center at x0 = 0).
2. y = 1 + y 2 , (center at x0 = 0).
admits a solution y which has a power series representation around x I. In other words, we are
interested in looking into an existence of a power series solution of (9.3.1) under certain conditions on
a(x), b(x) and f (x). The following is one such result. We omit its proof.
9.4. LEGENDRE EQUATIONS AND LEGENDRE POLYNOMIALS 181
Theorem 9.3.1 Let a(x), b(x) and f (x) admit a power series representation around a point x = x0 I,
with non-zero radius of convergence r1 , r2 and r3 , respectively. Let R = min{r1 , r2 , r3 }. Then the Equation
(9.3.1) has a solution y which has a power series representation around x0 with radius of convergence R.
Remark 9.3.2 We remind the readers that Theorem 9.3.1 is true for Equation (9.3.1) whenever the
coefficient of y'' is 1.
Secondly, a point x0 is called an ordinary point for (9.3.1) if a(x), b(x) and f (x) admit power
series expansion (with non-zero radius of convergence) around x = x0 . x0 is called a singular point
for (9.3.1) if x0 is not an ordinary point for (9.3.1).
The following are some examples for illustration of the utility of Theorem 9.3.1.
Exercise 9.3.3 1. Examine whether the given point x0 is an ordinary point or a singular point for the
following differential equations.
2. Show that the following equations admit power series solutions around a given x0 . Also, find the power
series solutions if it exists.
(a) y + y = 0, x0 = 0.
(b) xy + y = 0, x0 = 0.
(c) y + 9y = 0, x0 = 0.
Equation (9.4.1) was studied by Legendre and hence the name Legendre Equation.
Equation (9.4.1) may be rewritten as
    y'' − ( 2x/(1 − x²) ) y' + ( p(p+1)/(1 − x²) ) y = 0.
The functions 2x/(1 − x²) and p(p+1)/(1 − x²) are analytic around x0 = 0 (since they have power series expressions
with center at x0 = 0 and with R = 1 as the radius of convergence). By Theorem 9.3.1, a solution y of
(9.4.1) admits a power series expansion (with center at x0 = 0) with radius of convergence R = 1. Let us
assume that y = Σ_{k=0}^{∞} ak x^k is a solution of (9.4.1). We have to find the values of the ak's. Substituting the
expressions
    y' = Σ_{k=0}^{∞} k ak x^{k−1}  and  y'' = Σ_{k=0}^{∞} k(k−1) ak x^{k−2}
in Equation (9.4.1), and equating the coefficient of each power of x to zero, we obtain, for k = 0, 1, 2, . . . ,
    a_{k+2} = − ( (p − k)(p + k + 1) / ((k + 1)(k + 2)) ) ak.
It now follows that
    a2 = −( p(p+1)/2! ) a0,                       a3 = −( (p−1)(p+2)/3! ) a1,
    a4 = −( (p−2)(p+3)/(3·4) ) a2                 a5 = (−1)² ( (p−1)(p−3)(p+2)(p+4)/5! ) a1,
       = (−1)² ( p(p−2)(p+1)(p+3)/4! ) a0,
etc. In general,
    a_{2m} = (−1)^m ( p(p−2)···(p−2m+2)(p+1)(p+3)···(p+2m−1) / (2m)! ) a0
and
    a_{2m+1} = (−1)^m ( (p−1)(p−3)···(p−2m+1)(p+2)(p+4)···(p+2m) / (2m+1)! ) a1.
It turns out that both a0 and a1 are arbitrary. So, by choosing a0 = 1, a1 = 0 and a0 = 0, a1 = 1 in the
above expressions, we have the following two solutions of the Legendre Equation (9.4.1), namely,
    y1 = 1 − ( p(p+1)/2! ) x² + · · · + (−1)^m ( (p−2m+2)···(p+2m−1) / (2m)! ) x^{2m} + · · ·    (9.4.2)
and
    y2 = x − ( (p−1)(p+2)/3! ) x³ + · · · + (−1)^m ( (p−2m+1)···(p+2m) / (2m+1)! ) x^{2m+1} + · · · .    (9.4.3)
Remark 9.4.2 y1 and y2 are two linearly independent solutions of the Legendre Equation (9.4.1). It
now follows that the general solution of (9.4.1) is
y = c1 y 1 + c2 y 2 (9.4.4)
Case 1: Let n be a positive even integer. Then y1 in Equation (9.4.2) is a polynomial of degree n. In fact,
y1 is an even polynomial in the sense that the terms of y1 are even powers of x and hence y1(−x) = y1(x).
Case 2: Now, let n be a positive odd integer. Then y2(x) in Equation (9.4.3) is a polynomial of degree
n. In this case, y2 is an odd polynomial in the sense that the terms of y2 are odd powers of x and hence
y2(−x) = −y2(x).
In either case, we have a polynomial solution for Equation (9.4.1).
Definition 9.4.3 A polynomial solution Pn (x) of (9.4.1) is called a Legendre Polynomial whenever
Pn (1) = 1.
Fix a positive integer n and consider Pn(x) = a0 + a1 x + · · · + an x^n. Then it can be checked that
Pn(1) = 1 if we choose
    an = (2n)! / ( 2^n (n!)² ) = 1·3·5···(2n−1) / n!.
Using the recurrence relation, we have
    a_{n−2m} = (−1)^m (2n − 2m)! / ( 2^n m! (n − m)! (n − 2m)! ).
Hence,
    Pn(x) = Σ_{m=0}^{M} (−1)^m ( (2n − 2m)! / (2^n m! (n − m)! (n − 2m)!) ) x^{n−2m},    (9.4.6)
where M = n/2 when n is even and M = (n − 1)/2 when n is odd.
Proposition 9.4.4 Let p = n be a non-negative even integer. Then any polynomial solution y of (9.4.1)
which has only even powers of x is a multiple of Pn (x).
Similarly, if p = n is a non-negative odd integer, then any polynomial solution y of (9.4.1) which has only
odd powers of x is a multiple of Pn (x).
Proof. Suppose that n is a non-negative even integer. Let y be a polynomial solution of (9.4.1). By
(9.4.4),
    y = c1 y1 + c2 y2,
where y1 is a polynomial of degree n (with even powers of x) and y2 is a power series solution with odd
powers only. Since y is a polynomial, we have c2 = 0, and hence y = c1 y1 with c1 ≠ 0.
Similarly, Pn(x) is a non-zero multiple of y1, which implies that y is a multiple of Pn(x). A similar proof holds
when n is an odd positive integer.
We have an alternate way of evaluating Pn (x). They are used later for the orthogonality properties
of the Legendre polynomials, Pn (x)s.
Theorem 9.4.5 (Rodrigues Formula) The Legendre polynomials Pn(x), for n = 1, 2, . . . , are given by
    Pn(x) = ( 1/(2^n n!) ) (d^n/dx^n) (x² − 1)^n.    (9.4.7)
Proof. Let V(x) = (x² − 1)^n. Then (d/dx) V(x) = 2nx (x² − 1)^{n−1}, or
    (x² − 1) (d/dx) V(x) = 2nx (x² − 1)^n = 2nx V(x).
Now differentiating (n + 1) times (by the use of the Leibniz rule for differentiation), we get
    (1 − x²) U''(x) − 2x U'(x) + n(n+1) U(x) = 0,  where U(x) = (d^n/dx^n) V(x).
This tells us that U(x) is a solution of the Legendre Equation (9.4.1). So, by Proposition 9.4.4, we have
    Pn(x) = α U(x) = α (d^n/dx^n) (x² − 1)^n  for some α ∈ R.
Also, let us note that
    (d^n/dx^n) (x² − 1)^n = (d^n/dx^n) { (x − 1)^n (x + 1)^n }
                         = n! (x + 1)^n + terms containing a factor of (x − 1).
Therefore,
    (d^n/dx^n) (x² − 1)^n |_{x=1} = 2^n n!,  or equivalently,
    ( 1/(2^n n!) ) (d^n/dx^n) (x² − 1)^n |_{x=1} = 1,
and thus
    Pn(x) = ( 1/(2^n n!) ) (d^n/dx^n) (x² − 1)^n.
One may observe that the Rodrigues formula is very useful in the computation of Pn (x) for small values
of n.
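For instance, the first few Legendre polynomials can be generated directly from (9.4.7). The sketch
below (my own addition, using the sympy library; the function name is illustrative) does exactly that
and compares the result with sympy's built-in legendre function.

    import sympy as sp

    x = sp.symbols('x')

    def legendre_rodrigues(n):
        # P_n(x) from the Rodrigues formula (9.4.7).
        return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

    for n in range(4):
        Pn = legendre_rodrigues(n)
        print(n, Pn, sp.expand(Pn - sp.legendre(n, x)) == 0)   # 1, x, (3x^2-1)/2, ...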
Theorem 9.4.7 Let Pn(x) denote, as usual, the Legendre polynomial of degree n. Then
    ∫_{−1}^{1} Pn(x) Pm(x) dx = 0  if m ≠ n.    (9.4.8)
Proof. Recall that Pn and Pm satisfy
    ( (1 − x²) Pn'(x) )' + n(n+1) Pn(x) = 0    (9.4.9)
and
    ( (1 − x²) Pm'(x) )' + m(m+1) Pm(x) = 0.    (9.4.10)
Multiplying Equation (9.4.9) by Pm(x) and Equation (9.4.10) by Pn(x) and subtracting, we get
    ( n(n+1) − m(m+1) ) Pn(x) Pm(x) = ( (1 − x²) Pm'(x) )' Pn(x) − ( (1 − x²) Pn'(x) )' Pm(x).
Therefore, integrating both sides over [−1, 1] and integrating by parts,
    ( n(n+1) − m(m+1) ) ∫_{−1}^{1} Pn(x) Pm(x) dx
      = ∫_{−1}^{1} [ ( (1 − x²) Pm'(x) )' Pn(x) − ( (1 − x²) Pn'(x) )' Pm(x) ] dx
      = −∫_{−1}^{1} (1 − x²) Pm'(x) Pn'(x) dx + [ (1 − x²) Pm'(x) Pn(x) ]_{x=−1}^{x=1}
        + ∫_{−1}^{1} (1 − x²) Pn'(x) Pm'(x) dx − [ (1 − x²) Pn'(x) Pm(x) ]_{x=−1}^{x=1}
      = 0,
since the boundary terms vanish ((1 − x²) is zero at x = ±1). As n(n+1) ≠ m(m+1) for m ≠ n, the integral
∫_{−1}^{1} Pn(x) Pm(x) dx must be 0.
Let us call
    I = ∫_{−1}^{1} ( d^n V(x)/dx^n ) ( d^n V(x)/dx^n ) dx,  where V(x) = (x² − 1)^n.
Note that for 0 ≤ m < n,
    (d^m/dx^m) V(−1) = (d^m/dx^m) V(1) = 0.    (9.4.12)
Therefore, integrating I by parts and using (9.4.12) at each step, we get
    I = ∫_{−1}^{1} ( d^{2n} V(x)/dx^{2n} ) (−1)^n V(x) dx = (2n)! ∫_{−1}^{1} (1 − x²)^n dx = (2n)! · 2 ∫_{0}^{1} (1 − x²)^n dx.
Now substitute x = cos θ and use the value of the integral ∫_{0}^{π/2} sin^{2n+1} θ dθ to get the required result.
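Both the orthogonality relation (9.4.8) and the normalisation integral can be checked symbolically for
small n and m. The sketch below (my own addition, using the sympy library) does so; the diagonal
entries come out as 2/(2n+1).

    import sympy as sp

    x = sp.symbols('x')
    for n in range(4):
        for m in range(n + 1):
            val = sp.integrate(sp.legendre(n, x) * sp.legendre(m, x), (x, -1, 1))
            print(n, m, val)    # 0 when m != n, and 2/(2n+1) when m == n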
We now state an important expansion theorem. The proof is beyond the scope of this book.
Theorem 9.4.9 Let f(x) be a real valued continuous function defined on [−1, 1]. Then
    f(x) = Σ_{n=0}^{∞} an Pn(x),  x ∈ [−1, 1],
where
    an = ( (2n + 1)/2 ) ∫_{−1}^{1} f(x) Pn(x) dx.
Legendre polynomials can also be generated by a suitable function. To do that, we state the following
result without proof.
The function
    h(t) = 1/√(1 − 2xt + t²)
admits a power series expansion in t (for small t), and the coefficient of t^n in this expansion is Pn(x), i.e.,
    1/√(1 − 2xt + t²) = Σ_{n=0}^{∞} Pn(x) t^n.    (9.4.13)
The function h(t) is called the generating function for the Legendre polynomials.
Exercise 9.4.11 1. By using the Rodrigues formula, find P0 (x), P1 (x) and P2 (x).
Using the generating function (9.4.13), we can establish the following relations:
The relations (9.4.14), (9.4.15) and (9.4.16) are called recurrence relations for the Legendre polyno-
mials, Pn (x). The relation (9.4.14) is also known as Bonnets recurrence relation. We will now give the
proof of (9.4.14) using (9.4.13). The readers are required to prove the other two recurrence relations.
Differentiating the generating function (9.4.13) with respect to t (keeping the variable x fixed), we
get
    −(1/2) (1 − 2xt + t²)^{−3/2} (−2x + 2t) = Σ_{n=0}^{∞} n Pn(x) t^{n−1}.
Or, equivalently,
    (x − t)(1 − 2xt + t²)^{−1/2} = (1 − 2xt + t²) Σ_{n=0}^{∞} n Pn(x) t^{n−1}.
We now substitute Σ_{n=0}^{∞} Pn(x) t^n in the left hand side for (1 − 2xt + t²)^{−1/2}, to get
    (x − t) Σ_{n=0}^{∞} Pn(x) t^n = (1 − 2xt + t²) Σ_{n=0}^{∞} n Pn(x) t^{n−1}.
The two sides are power series in t, and therefore, comparing the coefficients of t^n, we get
    x Pn(x) − P_{n−1}(x) = (n + 1) P_{n+1}(x) − 2n x Pn(x) + (n − 1) P_{n−1}(x),
which, on simplification, is the recurrence relation (9.4.14).
Exercise 9.4.12 1. Find a polynomial solution y(x) of (1 − x²) y'' − 2x y' + 20y = 0 such that y(1) = 10.
2. Prove the following:
(a) ∫_{−1}^{1} Pm(x) dx = 0 for all positive integers m ≥ 1.
(b) ∫_{−1}^{1} x^{2n+1} P_{2m}(x) dx = 0 whenever m and n are positive integers with m ≠ n.
(c) ∫_{−1}^{1} x^m Pn(x) dx = 0 whenever m and n are positive integers with m < n.
3. Show that Pn'(1) = n(n+1)/2 and Pn'(−1) = (−1)^{n−1} n(n+1)/2.
4. Establish the following recurrence relations:
(a) (n + 1) Pn(x) = P'_{n+1}(x) − x P'_n(x).
(b) (1 − x²) P'_n(x) = n ( P_{n−1}(x) − x Pn(x) ).
Part III
Laplace Transform
Chapter 10
Laplace Transform
10.1 Introduction
In many problems, a function f (t), t [a, b] is transformed to another function F (s) through a relation
of the type:
Z b
F (s) = K(t, s)f (t)dt
a
where K(t, s) is a known function. Here, F (s) is called integral transform of f (t). Thus, an integral
transform sends a given function f (t) into another function F (s). This transformation of f (t) into F (s)
provides a method to tackle a problem more readily. In some cases, it affords solutions to otherwise
difficult problems. In view of this, the integral transforms find numerous applications in engineering
problems. Laplace transform is a particular case of integral transform (where f (t) is defined on [0, )
and K(s, t) = est ). As we will see in the following, application of Laplace transform reduces a linear
differential equation with constant coefficients to an algebraic equation, which can be solved by algebraic
methods. Thus, it provides a powerful tool to solve differential equations.
It is important to note here that there is some sort of analogy with what we had learnt during the
study of logarithms in school. That is, to multiply two numbers, we first calculate their logarithms, add
them and then use the table of antilogarithm to get back the original product. In a similar way, we first
transform the problem that was posed as a function of f (t) to a problem in F (s), make some calculations
and then use the table of inverse Laplace transform to get the solution of the actual problem.
In this chapter, we shall see some properties of the Laplace transform and its applications in solving
differential equations.
2. A function f(t) is said to be a piece-wise continuous function for t ≥ 0, if f(t) is a piece-wise continuous
function on every closed interval [a, b] ⊂ [0, ∞). For example, see Figure 10.1.
Definition 10.2.2 (Laplace Transform) Let f : [0, ∞) → R and s ∈ R. Then F(s), for s ∈ R, is called
the Laplace transform of f(t), and is defined by
    F(s) = L(f(t)) = ∫_{0}^{∞} f(t) e^{−st} dt,
whenever the improper integral on the right exists. The integral exists, for example, whenever f(t) is
piecewise continuous and grows at most exponentially, i.e.,
    |f(t)| ≤ M e^{αt} for all t > 0 and for some real numbers α and M with M > 0;
in that case F(s) is defined for s > α and
    lim_{s→∞} F(s) = 0.
Definition 10.2.4 (Inverse Laplace Transform) Let L(f (t)) = F (s). That is, F (s) is the Laplace trans-
form of the function f (t). Then f (t) is called the inverse Laplace transform of F (s). In that case, we write
f (t) = L1 (F (s)).
10.2.1 Examples
Example 10.2.5 1. Find F (s) = L(f (t)), where f (t) = 1, t 0.
Z b
st est 1 esb
Solution: F (s) = e dt = lim = lim .
0 b s 0 s b s
Note that if s > 0, then
esb
lim = 0.
b s
Thus,
1
F (s) = , for s > 0.
s
In the remaining part of this chapter, whenever the improper integral is calculated, we will not explicitly
write the limiting process. However, the students are advised to provide the details.
    f(t)        L(f(t))                 f(t)        L(f(t))
    1           1/s,        s > 0       t           1/s²,        s > 0
    t^n         n!/s^{n+1}, s > 0       e^{at}      1/(s−a),     s > a
    sin(at)     a/(s²+a²),  s > 0       cos(at)     s/(s²+a²),   s > 0
    sinh(at)    a/(s²−a²),  s > a       cosh(at)    s/(s²−a²),   s > a
Table 10.1: Laplace transform of some Elementary Functions
The above lemma is immediate from the definition of Laplace transform and the linearity of the
definite integral.
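The entries of Table 10.1 can be reproduced mechanically. The following sketch (my own addition,
using the sympy library) computes a few of them directly from the defining integral.

    import sympy as sp

    t, s, a = sp.symbols('t s a', positive=True)
    for f in (sp.S(1), t, t**3, sp.exp(a*t), sp.sin(a*t), sp.cosh(a*t)):
        F = sp.laplace_transform(f, t, s, noconds=True)
        print(f, "->", sp.simplify(F))      # 1/s, 1/s**2, 6/s**4, 1/(s-a), ...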
2. Similarly,
    L(sinh(at)) = (1/2) ( 1/(s−a) − 1/(s+a) ) = a/(s² − a²),  s > |a|.
3. Find the inverse Laplace transform of 1/( s(s + 1) ).
Solution:
    L⁻¹( 1/( s(s+1) ) ) = L⁻¹( 1/s − 1/(s+1) )
                        = L⁻¹( 1/s ) − L⁻¹( 1/(s+1) ) = 1 − e^{−t}.
Thus, the inverse Laplace transform of 1/( s(s+1) ) is f(t) = 1 − e^{−t}.
Theorem 10.3.3 (Scaling by a) Let f(t) be a piecewise continuous function with Laplace transform F(s).
Then for a > 0, L(f(at)) = (1/a) F(s/a).
Proof. By definition and the substitution z = at, we get
    L(f(at)) = ∫_{0}^{∞} e^{−st} f(at) dt = (1/a) ∫_{0}^{∞} e^{−sz/a} f(z) dz
             = (1/a) ∫_{0}^{∞} e^{−(s/a)z} f(z) dz = (1/a) F(s/a).
2. Find the Laplace transform of the function f(·) given by the graphs in Figure 10.2.
3. If L(f(t)) = 1/(s² + 1) + 1/(2s + 1), find f(t).
The next theorem relates the Laplace transform of the function f'(t) with that of f(t).
Theorem 10.3.5 (Laplace Transform of Differentiable Functions) Let f(t), for t > 0, be a differentiable
function with the derivative, f'(t), being continuous. Suppose that there exist constants M and T such that
|f(t)| ≤ M e^{αt} for all t ≥ T. If L(f(t)) = F(s), then
    L(f'(t)) = s F(s) − f(0)  for  s > α.    (10.3.1)
Proof. Note that the condition |f(t)| ≤ M e^{αt} for all t ≥ T implies that
    lim_{b→∞} f(b) e^{−sb} = 0  for  s > α.
So, by definition,
    L(f'(t)) = ∫_{0}^{∞} e^{−st} f'(t) dt = lim_{b→∞} ∫_{0}^{b} e^{−st} f'(t) dt
             = lim_{b→∞} [ f(t) e^{−st} ]_{0}^{b} − lim_{b→∞} ∫_{0}^{b} f(t)(−s) e^{−st} dt
             = −f(0) + s F(s).
We can extend the above result to the nth derivative of a function f(t), if f'(t), . . . , f^{(n−1)}(t), f^{(n)}(t)
exist and f^{(n)}(t) is continuous for t ≥ 0. In this case, a repeated use of Theorem 10.3.5 gives the
following corollary.
Corollary 10.3.6 Let f(t) be a function with L(f(t)) = F(s). If f'(t), . . . , f^{(n−1)}(t), f^{(n)}(t) exist and
f^{(n)}(t) is continuous for t ≥ 0, then
    L( f^{(n)}(t) ) = s^n F(s) − s^{n−1} f(0) − s^{n−2} f'(0) − · · · − f^{(n−1)}(0).    (10.3.2)
Corollary 10.3.7 Let f(t) be a piecewise continuous function for t ≥ 0. Also, let f(0) = 0. Then
    L(f'(t)) = s F(s),  or equivalently,  L⁻¹( s F(s) ) = f'(t).
Example 10.3.8 1. Find the inverse Laplace transform of s/(s² + 1).
Solution: We know that L⁻¹( 1/(s² + 1) ) = sin t. Since sin(0) = 0, Corollary 10.3.7 gives
L⁻¹( s/(s² + 1) ) = (sin t)' = cos t.
2. Find the Laplace transform of f(t) = cos²(t).
Solution: Note that f(0) = 1 and f'(t) = −2 cos t sin t = −sin(2t). Also,
    L( −sin(2t) ) = −2/(s² + 4).
Now, using Theorem 10.3.5, we get
    L(f(t)) = (1/s) ( −2/(s² + 4) + 1 ) = (s² + 2)/( s(s² + 4) ).
Lemma 10.3.9 (Laplace Transform of t f(t)) Let f(t) be a piecewise continuous function with L(f(t)) =
F(s). If the function F(s) is differentiable, then
    L( t f(t) ) = −(d/ds) F(s).
Equivalently, L⁻¹( −(d/ds) F(s) ) = t f(t).
Proof. By definition, F(s) = ∫_{0}^{∞} e^{−st} f(t) dt. The result is obtained by differentiating both sides with
respect to s.
Suppose we know the Laplace transform of f(t) and we wish to find the Laplace transform of the
function g(t) = f(t)/t. Suppose that G(s) = L(g(t)) exists. Then writing f(t) = t g(t) gives
    F(s) = L(f(t)) = L( t g(t) ) = −(d/ds) G(s).
Thus, G(s) = ∫_{s}^{a} F(p) dp for some real number a. As lim_{s→∞} G(s) = 0, we get G(s) = ∫_{s}^{∞} F(p) dp.
Hence, we have the following corollary.
Corollary 10.3.10 Let L(f(t)) = F(s) and g(t) = f(t)/t. Then
    L(g(t)) = G(s) = ∫_{s}^{∞} F(p) dp.
Proof. By definition,
    L( ∫_{0}^{t} f(τ) dτ ) = ∫_{0}^{∞} e^{−st} ( ∫_{0}^{t} f(τ) dτ ) dt.
We don't go into the details of the proof of the change in the order of integration. We assume that the
order of the integrations can be changed, and therefore
    ∫_{0}^{∞} e^{−st} ( ∫_{0}^{t} f(τ) dτ ) dt = ∫_{0}^{∞} ( ∫_{τ}^{∞} e^{−st} dt ) f(τ) dτ.
Thus,
    L( ∫_{0}^{t} f(τ) dτ ) = ∫_{0}^{∞} e^{−st} ( ∫_{0}^{t} f(τ) dτ ) dt
        = ∫_{0}^{∞} ( ∫_{τ}^{∞} e^{−st} dt ) f(τ) dτ = ∫_{0}^{∞} ( ∫_{τ}^{∞} e^{−s(t−τ)−sτ} dt ) f(τ) dτ
        = ∫_{0}^{∞} e^{−sτ} f(τ) dτ · ∫_{0}^{∞} e^{−sz} dz = F(s) · (1/s).
Example 10.3.13 1. Find L( ∫_{0}^{t} sin(az) dz ).
Solution: We know L(sin(at)) = a/(s² + a²). Hence
    L( ∫_{0}^{t} sin(az) dz ) = (1/s) · a/(s² + a²) = a/( s(s² + a²) ).
2. Find L( ∫_{0}^{t} τ² dτ ).
Solution: By Lemma 10.3.12,
    L( ∫_{0}^{t} τ² dτ ) = L(t²)/s = (1/s)(2!/s³) = 2/s⁴.
3. Find the function f(t) such that F(s) = 4/( s(s − 1) ).
Solution: We know L(e^t) = 1/(s − 1). So,
    f(t) = L⁻¹( 4 · (1/s) · 1/(s − 1) ) = 4 ∫_{0}^{t} e^{τ} dτ = 4(e^t − 1).
Lemma 10.3.14 (s-Shifting) Let L(f(t)) = F(s). Then L(e^{at} f(t)) = F(s − a) for s > a.
Proof.
    L(e^{at} f(t)) = ∫_{0}^{∞} e^{at} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−(s−a)t} dt = F(s − a),  s > a.
1. Denominator of F has Distinct Real Roots:
If F(s) = (s + 1)(s + 3) / ( s(s + 2)(s + 8) ), find f(t).
Solution: F(s) = 3/(16s) + 1/( 12(s + 2) ) + 35/( 48(s + 8) ). Thus,
    f(t) = 3/16 + (1/12) e^{−2t} + (35/48) e^{−8t}.
2. Denominator of F has Distinct Complex Roots:
If F(s) = (4s + 3)/(s² + 2s + 5), find f(t).
Solution: F(s) = 4 · (s + 1)/( (s + 1)² + 2² ) − (1/2) · 2/( (s + 1)² + 2² ). Thus,
    f(t) = 4 e^{−t} cos(2t) − (1/2) e^{−t} sin(2t).
3. Denominator of F has Repeated Real Roots:
If F(s) = (3s + 4)/( (s + 1)(s² + 4s + 4) ), find f(t).
Solution: Here,
    F(s) = (3s + 4)/( (s + 1)(s² + 4s + 4) ) = (3s + 4)/( (s + 1)(s + 2)² ) = a/(s + 1) + b/(s + 2) + c/(s + 2)².
Solving for a, b and c, we get
    F(s) = 1/(s + 1) − 1/(s + 2) + 2/(s + 2)² = 1/(s + 1) − 1/(s + 2) − 2 (d/ds)( 1/(s + 2) ).
Thus,
    f(t) = e^{−t} − e^{−2t} + 2t e^{−2t}.
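The partial-fraction and inverse-transform steps above can also be carried out symbolically. The
sketch below (my own addition, using the sympy library) reproduces the third example; sympy may
attach a Heaviside(t) factor to the inverse transform.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    F = (3*s + 4) / ((s + 1) * (s**2 + 4*s + 4))
    print(sp.apart(F, s))                           # 1/(s+1) - 1/(s+2) + 2/(s+2)**2
    print(sp.inverse_laplace_transform(F, s, t))    # exp(-t) - exp(-2t) + 2t*exp(-2t)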
Example 10.3.18 For the unit step function Ua(t),
    L( Ua(t) ) = ∫_{a}^{∞} e^{−st} dt = e^{−sa}/s,  s > 0.
[Figure: graphs of the functions f(t) and g(t).]
Example 10.3.20 Find L⁻¹( e^{−5s}/(s² − 4s − 5) ).
Solution: Let G(s) = e^{−5s}/(s² − 4s − 5) = e^{−5s} F(s), with F(s) = 1/(s² − 4s − 5). Since s² − 4s − 5 = (s − 2)² − 3²,
    L⁻¹( F(s) ) = L⁻¹( (1/3) · 3/( (s − 2)² − 3² ) ) = (1/3) sinh(3t) e^{2t}.
Hence, by Lemma 10.3.19,
    L⁻¹( G(s) ) = (1/3) U5(t) sinh( 3(t − 5) ) e^{2(t−5)}.
Example 10.3.21 Find L(f(t)), where
    f(t) = 0 for t < 2π, and f(t) = t cos t for t > 2π.
Solution: Note that
    f(t) = 0 for t < 2π, and f(t) = (t − 2π) cos(t − 2π) + 2π cos(t − 2π) for t > 2π.
Thus,
    L(f(t)) = e^{−2πs} ( (s² − 1)/(s² + 1)² + 2π s/(s² + 1) ).
Note: To be filled by a graph
Theorem 10.4.1 (First Limit Theorem) Suppose L(f(t)) exists. Then
    lim_{s→∞} s F(s) = f(0),
as lim_{s→∞} e^{−st} = 0 for t > 0.
Example 10.4.2 1. For t ≥ 0, let Y(s) = L(y(t)) = a(1 + s²)^{−1/2}. Determine a such that y(0) = 1.
Solution: Theorem 10.4.1 implies
    1 = lim_{s→∞} s Y(s) = lim_{s→∞} a s/(1 + s²)^{1/2} = lim_{s→∞} a/( 1/s² + 1 )^{1/2} = a. Thus, a = 1.
2. If F(s) = (s + 1)(s + 3)/( s(s + 2)(s + 8) ), find f(0+).
Solution: Theorem 10.4.1 implies
    f(0+) = lim_{s→∞} s F(s) = lim_{s→∞} s · (s + 1)(s + 3)/( s(s + 2)(s + 8) ) = 1.
On similar lines, one has the following theorem. But this theorem is valid only when f(t) is bounded
as t approaches infinity.
Theorem 10.4.3 (Second Limit Theorem) Suppose L(f(t)) exists. Then
    lim_{t→∞} f(t) = lim_{s→0} s F(s).
Proof.
    lim_{s→0} s F(s) = f(0) + lim_{s→0} ∫_{0}^{∞} e^{−st} f'(t) dt
                     = f(0) + lim_{s→0} lim_{t→∞} ∫_{0}^{t} e^{−sτ} f'(τ) dτ
                     = f(0) + lim_{t→∞} ∫_{0}^{t} lim_{s→0} e^{−sτ} f'(τ) dτ = lim_{t→∞} f(t).
Example 10.4.4 If F(s) = 2(s + 3)/( s(s + 2)(s + 8) ), find lim_{t→∞} f(t).
Solution: From Theorem 10.4.3, we have
    lim_{t→∞} f(t) = lim_{s→0} s F(s) = lim_{s→0} s · 2(s + 3)/( s(s + 2)(s + 8) ) = 6/16 = 3/8.
We now generalise the lemma on the Laplace transform of an integral as the convolution theorem.
Definition 10.4.5 (Convolution of Functions) Let f(t) and g(t) be two smooth functions. The convolu-
tion, f ⋆ g, is a function defined by
    (f ⋆ g)(t) = ∫_{0}^{t} f(τ) g(t − τ) dτ.
Check that
1. (f ⋆ g)(t) = (g ⋆ f)(t).
2. If f(t) = cos(t), then (f ⋆ f)(t) = ( t cos(t) + sin(t) )/2.
Theorem 10.4.6 (Convolution Theorem) If F(s) = L(f(t)) and G(s) = L(g(t)), then
    L( ∫_{0}^{t} f(τ) g(t − τ) dτ ) = F(s) · G(s).
Remark 10.4.7 Let g(t) = 1 for all t ≥ 0. Then we know that L(g(t)) = G(s) = 1/s. Thus, the
Convolution Theorem 10.4.6 reduces to the Integral Lemma 10.3.12.
Hence,
    F(s) = G(s)/( a s² + b s + c ) + ( (a s + b) f0 + a f1 )/( a s² + b s + c ),    (10.5.1)
where the first term corresponds to the non-homogeneous part and the second to the initial conditions.
Now, if we know that G(s) is a rational function of s, then we can compute f(t) from F(s) by using the
method of partial fractions (see Subsection 10.3.1).
Example 10.5.2 Consider the IVP y'' − 4y' − 5y = f(t), y(0) = 1, y'(0) = 4, where f(t) = t + U5(t), so that
    L(f(t)) = 1/s² + e^{−5s}/s.
Taking the Laplace transform of the above equation, we get
    s² Y(s) − s y(0) − y'(0) − 4 ( s Y(s) − y(0) ) − 5 Y(s) = L(f(t)) = 1/s² + e^{−5s}/s.
This gives
    Y(s) = s/( (s + 1)(s − 5) ) + e^{−5s}/( s(s + 1)(s − 5) ) + 1/( s²(s + 1)(s − 5) )
         = (1/6)( 5/(s − 5) + 1/(s + 1) ) + (e^{−5s}/30)( −6/s + 5/(s + 1) + 1/(s − 5) )
           + (1/150)( −30/s² + 24/s − 25/(s + 1) + 1/(s − 5) ).
Hence,
    y(t) = (5/6) e^{5t} + (1/6) e^{−t} + U5(t) ( −1/5 + (1/6) e^{−(t−5)} + (1/30) e^{5(t−5)} )
           + (1/150)( −30t + 24 − 25 e^{−t} + e^{5t} ).
Remark 10.5.3 Even though f(t) is a discontinuous function at t = 5, the solution y(t) and y'(t)
are continuous functions of t, as y'' exists. In general, the following is always true:
Let y(t) be a solution of a y'' + b y' + c y = f(t). Then both y(t) and y'(t) are continuous functions of time.
Example 10.5.4 1. Consider the IVP t y''(t) + y'(t) + t y(t) = 0, with y(0) = 1 and y'(0) = 0. Find
L(y(t)).
Solution: Applying the Laplace transform, we have
    −(d/ds)( s² Y(s) − s y(0) − y'(0) ) + ( s Y(s) − y(0) ) − (d/ds) Y(s) = 0.
Using the initial conditions, the above equation reduces to
    (d/ds)( (s² + 1) Y(s) − s ) − s Y(s) + 1 = 0.
This equation, after simplification, can be rewritten as
    Y'(s)/Y(s) = −s/(s² + 1).
Therefore, Y(s) = a(1 + s²)^{−1/2}. From Example 10.4.2.1, we see that a = 1 and hence
    Y(s) = (1 + s²)^{−1/2}.
2. Show that y(t) = ∫_{0}^{t} f(τ) g(t − τ) dτ is a solution of
    y'' + a y' + b y = f(t),  y(0) = y'(0) = 0,
where L(g(t)) = 1/(s² + a s + b).
Solution: Here, Y(s) = F(s)/(s² + a s + b) = F(s) · 1/(s² + a s + b). Hence,
    y(t) = (f ⋆ g)(t) = ∫_{0}^{t} f(τ) g(t − τ) dτ.
3. Show that y(t) = (1/a) ∫_{0}^{t} f(τ) sin( a(t − τ) ) dτ is a solution of
    y'' + a² y = f(t),  y(0) = y'(0) = 0.
Solution: Here, Y(s) = F(s)/(s² + a²) = F(s) · (1/a) · a/(s² + a²). Hence,
    y(t) = (1/a) f(t) ⋆ sin(at) = (1/a) ∫_{0}^{t} f(τ) sin( a(t − τ) ) dτ.
4. Solve the integro-differential equation
    y'(t) = ∫_{0}^{t} y(τ) dτ + t − 4 sin t,  y(0) = 1.
Solution: Taking the Laplace transform of both sides and using Theorem 10.3.5, we get
    s Y(s) − 1 = Y(s)/s + 1/s² − 4/(s² + 1).
Solving for Y(s), we get
    Y(s) = (s² − 1)/( s(s² + 1) ) = 1/s − 2/( s(s² + 1) ).
So,
    y(t) = 1 − 2 ∫_{0}^{t} sin(τ) dτ = 1 + 2(cos t − 1) = 2 cos t − 1.
Solution: Note that δh(t) = (1/h)( U0(t) − Uh(t) ). By the linearity of the Laplace transform, we get
    Dh(s) = (1/h) · (1 − e^{−hs})/s.
Remark 10.6.2 1. Observe that in Example 10.6.1, if we allow h to approach 0, we obtain a new
function, say δ(t). That is, let
    δ(t) = lim_{h→0} δh(t).
This new function is zero everywhere except at the origin. At the origin, this function tends to infinity.
In other words, the graph of the function appears as a line of infinite height at the origin. This
new function, δ(t), is called the unit-impulse function (or Dirac's delta function).
5. Also, observe that L(δh(t)) = (1 − e^{−hs})/(hs). Now, if we take the limit of both sides as h approaches
zero (apply L'Hospital's rule), we get
    L(δ(t)) = lim_{h→0} (1 − e^{−hs})/(hs) = lim_{h→0} (s e^{−hs})/s = 1.
Part IV
Numerical Applications
Chapter 11
11.1 Introduction
In many practical situations, for a function y = f (x), which either may not be explicitly specified or
may be difficult to handle, we often have a tabulated data (xi , yi ), where yi = f (xi ), and xi < xi+1
for i = 0, 1, 2, . . . , N. In such cases, it may be required to represent or replace the given function by a
simpler function, which coincides with the values of f at the N + 1 tabular points xi . This process is
known as Interpolation. Interpolation is also used to estimate the value of the function at the non
tabular points. Here, we shall consider only those functions which are sufficiently smooth, i.e., they are
differentiable sufficient number of times. Many of the interpolation methods, where the tabular points
are equally spaced, use difference operators. Hence, in the following we introduce various difference
operators and study their properties before looking at the interpolation methods.
We shall assume here that the tabular points x0 , x1 , x2 , . . . , xN are equally spaced, i.e., xk
xk1 = h for each k = 1, 2, . . . , N. The real number h is called the step length. This gives us
xk = x0 + kh. Further, yk = f (xk ) gives the value of the function y = f (x) at the k th tabular point.
The points y1 , y2 , . . . , yN are known as nodes or nodal values.
The expression Δf(x) = f(x + h) − f(x) gives the first forward difference of f(x), and the operator Δ is
called the first forward difference operator. Given the step size h, this formula uses the values
at x and x + h, the point at the next step. As it is moving in the forward direction, it is called the
forward difference operator.
[Figure: the tabular points x0, x1, . . . , x_{k−1}, x_k, x_{k+1}, . . . , x_n, with the backward and forward directions indicated.]
Definition 11.2.2 (Second Forward Difference Operator) The second forward difference operator, Δ², is
defined as
    Δ² f(x) = Δ( Δ f(x) ) = Δ f(x + h) − Δ f(x).
We note that
    Δ² f(x) = Δ f(x + h) − Δ f(x)
            = ( f(x + 2h) − f(x + h) ) − ( f(x + h) − f(x) )
            = f(x + 2h) − 2 f(x + h) + f(x),
and
    Δ² yk = Δ y_{k+1} − Δ yk = y_{k+2} − 2 y_{k+1} + yk.
Definition 11.2.3 (r-th Forward Difference Operator) The r-th forward difference operator, Δ^r, is defined
as
    Δ^r f(x) = Δ^{r−1} f(x + h) − Δ^{r−1} f(x),  r = 1, 2, . . . ,
with Δ⁰ f(x) = f(x).
Exercise 11.2.4 Show that Δ³ yk = Δ²( Δ yk ) = Δ( Δ² yk ). In general, show that for any positive integers r
and m with r > m,
    Δ^r yk = Δ^{r−m}( Δ^m yk ) = Δ^m( Δ^{r−m} yk ).
    i :   0     1     2     3     4     5
    xi:   0     0.1   0.2   0.3   0.4   0.5
    yi:   0.05  0.11  0.26  0.35  0.49  0.67
Find Δ y3 and Δ³ y2.
Solution: Here,
    Δ y3 = y4 − y3 = 0.49 − 0.35 = 0.14, and
    Δ³ y2 = Δ( Δ² y2 ) = Δ( y4 − 2y3 + y2 )
          = (y5 − y4) − 2(y4 − y3) + (y3 − y2)
          = y5 − 3y4 + 3y3 − y2
          = 0.67 − 3 × 0.49 + 3 × 0.35 − 0.26 = −0.01.
Thus, the r-th forward difference at yk uses the values at yk, y_{k+1}, . . . , y_{k+r}.
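Forward differences are conveniently computed with repeated differencing. The following sketch (my
own addition, using numpy) reproduces the two values of the example just computed.

    import numpy as np

    y = np.array([0.05, 0.11, 0.26, 0.35, 0.49, 0.67])
    d1 = np.diff(y)          # first forward differences
    d3 = np.diff(y, n=3)     # third forward differences
    print(d1[3])             # Delta y_3  = 0.14
    print(round(d3[2], 2))   # Delta^3 y_2 = -0.01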
Example 11.2.7 If f(x) = x² + ax + b, where a and b are real constants, calculate Δ^r f(x).
Now,
Remark 11.2.9 1. For a set of tabular values, the horizontal forward difference table is written as:
    x0       y0       Δy0 = y1 − y0        Δ²y0 = Δy1 − Δy0     · · ·   Δⁿy0 = Δ^{n−1}y1 − Δ^{n−1}y0
    x1       y1       Δy1 = y2 − y1        Δ²y1 = Δy2 − Δy1
    x2       y2       Δy2 = y3 − y2        Δ²y2 = Δy3 − Δy2
    ...
    x_{n−1}  y_{n−1}  Δy_{n−1} = yn − y_{n−1}
    xn       yn
2. In many books, a diagonal form of the difference table is also used. This is written as:
    x0       y0
                      Δy0
    x1       y1                 Δ²y0
                      Δy1                 Δ³y0
    x2       y2                 Δ²y1
    ...               ...
    x_{n−2}  y_{n−2}            Δ²y_{n−3}
                      Δy_{n−2}            Δ³y_{n−3}
    x_{n−1}  y_{n−1}            Δ²y_{n−2}
                      Δy_{n−1}
    xn       yn
Given the step size h, note that this formula uses the values at x and x − h, the point at the previous
step. As it moves in the backward direction, it is called the backward difference operator.
Definition 11.2.11 (r-th Backward Difference Operator) The r-th backward difference operator, ∇^r, is
defined as
    ∇^r f(x) = ∇^{r−1} f(x) − ∇^{r−1} f(x − h),  r = 1, 2, . . . ,
with ∇⁰ f(x) = f(x).
Example 11.2.12 Using the tabulated values in Example 11.2.5, find ∇y4 and ∇³y3.
Example 11.2.13 If f (x) = x2 + ax + b, where a and b are real constants, calculate r f (x).
Now,
Remark 11.2.14 For a set of tabular values, the backward difference table in the horizontal form is written
as:
    x0       y0
    x1       y1       ∇y1 = y1 − y0
    x2       y2       ∇y2 = y2 − y1        ∇²y2 = ∇y2 − ∇y1
    ...
    x_{n−2}  y_{n−2}
    x_{n−1}  y_{n−1}  ∇y_{n−1} = y_{n−1} − y_{n−2}
    xn       yn       ∇yn = yn − y_{n−1}   ∇²yn = ∇yn − ∇y_{n−1}   · · ·   ∇ⁿyn = ∇^{n−1}yn − ∇^{n−1}y_{n−1}
Example 11.2.15 For the following set of tabular values (xi, yi), write the forward and backward difference
tables.
    xi :  9    10   11   12   13   14
    yi :  5.0  5.4  6.0  6.8  7.5  8.1
Forward difference table:
    x    y    Δy    Δ²y    Δ³y    Δ⁴y    Δ⁵y
    9    5.0  0.4   0.2    0.0    −0.3   0.6
    10   5.4  0.6   0.2    −0.3   0.3
    11   6.0  0.8   −0.1   0.0
    12   6.8  0.7   −0.1
    13   7.5  0.6
    14   8.1
Backward difference table:
    x    y    ∇y    ∇²y    ∇³y    ∇⁴y    ∇⁵y
    9    5.0
    10   5.4  0.4
    11   6.0  0.6   0.2
    12   6.8  0.8   0.2    0.0
    13   7.5  0.7   −0.1   −0.3   −0.3
    14   8.1  0.6   −0.1   0.0    0.3    0.6
Remark 11.2.18 In view of the remarks (11.2.8) and (11.2.17), it is obvious that if y = f(x) is a
polynomial function of degree n, then Δⁿ f(x) is constant and Δ^{n+r} f(x) = 0 for r > 0.
Thus, 2 uses the table of (xk , yk ). It is easy to see that only the even central differences use the tabular
point values (xk , yk ).
    E f(x) = f(x + h).
Thus,
    E yi = y_{i+1},  E² yi = y_{i+2},  and  E^k yi = y_{i+k}.
    μ² yi = (1/2)( μ y_{i+1/2} + μ y_{i−1/2} ) = (1/4)( y_{i+1} + 2 yi + y_{i−1} ).
Thus,
    E ≡ 1 + Δ  or  Δ ≡ E − 1.
Thus, E ≡ 1 + Δ gives us
    ∇ f(x) = f(x) − f(x − h) = f(x) − E^{−1} f(x) = ( 1 − (1 + Δ)^{−1} ) f(x).
So we write,
    ∇ ≡ 1 − (1 + Δ)^{−1}  or  (1 − ∇)^{−1} ≡ 1 + Δ ≡ E,
and similarly,
    Δ ≡ (1 − ∇)^{−1} − 1.
3. Let us denote E^{1/2} f(x) = f(x + h/2). Then, we see that
    δ f(x) = f(x + h/2) − f(x − h/2) = E^{1/2} f(x) − E^{−1/2} f(x).
Thus,
    δ ≡ E^{1/2} − E^{−1/2}.
Recall that μ² yi = (1/4)( y_{i+1} + 2 yi + y_{i−1} ) and δ² yi = y_{i+1} − 2 yi + y_{i−1}. So, we have,
    4 μ² ≡ δ² + 4,  i.e.,  μ ≡ √( 1 + δ²/4 ).
That is, the action of √( 1 + δ²/4 ) is the same as that of μ.
4. We further note that
    Δ f(x) = f(x + h) − f(x) = (1/2)( f(x + h) − 2 f(x) + f(x − h) ) + (1/2)( f(x + h) − f(x − h) )
           = (1/2) δ² f(x) + (1/2)( f(x + h) − f(x − h) ),
and
    μ δ f(x) = μ ( f(x + h/2) − f(x − h/2) ) = (1/2){ f(x + h) − f(x) } + (1/2){ f(x) − f(x − h) }
             = (1/2)( f(x + h) − f(x − h) ).
Thus,
    Δ f(x) = ( (1/2) δ² + μ δ ) f(x),
i.e.,
    Δ ≡ (1/2) δ² + μ δ ≡ (1/2) δ² + δ √( 1 + δ²/4 ).
In view of the above discussion, we have the following table showing the relations between the various
difference operators (each entry expresses the row operator in terms of the column operator):
          E                       Δ                         ∇                         δ
    E     E                       Δ + 1                     (1 − ∇)^{−1}              (1/2)δ² + δ√(1 + δ²/4) + 1
    Δ     E − 1                   Δ                         (1 − ∇)^{−1} − 1          (1/2)δ² + δ√(1 + δ²/4)
    ∇     1 − E^{−1}              1 − (1 + Δ)^{−1}          ∇                         −(1/2)δ² + δ√(1 + δ²/4)
    δ     E^{1/2} − E^{−1/2}      Δ(1 + Δ)^{−1/2}           ∇(1 − ∇)^{−1/2}           δ
2. Obtain the relations between the averaging operator μ and the other difference operators.
i 0 1 2 3 4
xi 93.0 96.5 100.0 103.5 107.0
yi 11.3 12.5 14.0 15.2 16.0
    f(x) ≈ PN(x),
for some constants a0 , a1 , ...aN , to be determined using the fact that PN (xi ) = yi for i = 0, 1, . . . , N.
So, for i = 0, substitute x = x0 in (11.4.1) to get PN (x0 ) = y0 . This gives us a0 = y0 . Next,
y1 y0 y0
So, a1 = h = . For i = 2, y2 = a0 + (x2 x0 )a1 + (x2 x1 )(x2 x0 )a2 , or equivalently
h
y0
2h2 a2 = y2 y0 2h( ) = y2 2y1 + y0 = 2 y0 .
h
2 y0
Thus, a2 = . Now, using mathematical induction, we get
2h2
k y0
ak = for k = 0, 1, 2, . . . , N.
k! hk
Thus,
y0 2 y0 k y0
PN (x) = y0 + (x x0 ) + 2
(x x0 )(x x1 ) + + (x x0 ) (x xk1 )
h 2! h k! hk
N y0
+ (x x0 )...(x xN 1 ).
N ! hN
As this uses the forward differences, it is called Newtons Forward difference formula for inter-
polation, or simply, forward interpolation formula.
Let x = x0 + hu, so that u = (x − x0 )/h. With this transformation the above forward interpolation
formula is simplified to the following form:
PN (u) = y0 + (hu) Δy0 /h + {(hu)(h(u − 1))} Δ²y0 /(2! h²) + ⋯ + {(hu)(h(u − 1))⋯(h(u − k + 1))} Δ^k y0 /(k! h^k)
         + ⋯ + {(hu)(h(u − 1))⋯(h(u − N + 1))} Δ^N y0 /(N! h^N)
       = y0 + Δy0 u + Δ²y0 (u(u − 1))/2! + ⋯ + Δ^k y0 (u(u − 1)⋯(u − k + 1))/k!
         + ⋯ + Δ^N y0 (u(u − 1)⋯(u − N + 1))/N! .    (11.4.2)
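As a hedged illustration (not part of the notes; the function name newton_forward is our own), the following Python sketch evaluates the forward interpolation formula (11.4.2) directly from tabulated values. It uses all available forward differences, whereas the worked examples below truncate the series once the differences become negligible.

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Evaluate Newton's forward interpolation polynomial at x for equispaced xs."""
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    # k-th forward differences at the first tabular point
    diffs, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    value, u_product = 0.0, 1.0
    for k, dk in enumerate(diffs):
        value += dk * u_product / factorial(k)
        u_product *= (u - k)        # builds u(u-1)...(u-k)
    return value

# Data of Example 11.2.15, interpolated at x = 10.3:
print(newton_forward([9, 10, 11, 12, 13, 14], [5.0, 5.4, 6.0, 6.8, 7.5, 8.1], 10.3))
```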
If N = 1, we have the linear interpolation P1 (u) = y0 + u Δy0 .
In a similar manner, one can interpolate in terms of the last tabular point xN and the backward differences:
writing PN (x) = b0 + b1 (x − xN ) + b2 (x − xN )(x − xN−1 ) + ⋯ + bN (x − xN )⋯(x − x1 ) and determining
the coefficients from PN (xi ) = yi , we get
b0 = yN ,
b1 = (yN − yN−1 )/h = ∇yN /h ,
b2 = (yN − 2yN−1 + yN−2 )/(2h²) = ∇²yN /(2h²) ,
...
bk = ∇^k yN /(k! h^k) .
Thus, using backward differences and the transformation x = xN + hu, we obtain Newton's
backward interpolation formula as follows:
PN (u) = yN + u ∇yN + (u(u + 1)/2!) ∇²yN + ⋯ + (u(u + 1)⋯(u + N − 1)/N!) ∇^N yN .    (11.4.5)
Exercise 11.4.2 Derive the Newtons backward interpolation formula (11.4.5) for N = 3.
Remark 11.4.3 If the interpolating point lies closer to the beginning of the interval then one uses the
Newtons forward formula and if it lies towards the end of the interval then Newtons backward formula
is used.
Remark 11.4.4 For a given set of n tabular points, in general, all the n points need not be used for
interpolating polynomial. In fact N is so chosen that N th forward/backward difference almost remains
constant. Thus N is less than or equal to n.
Example 11.4.5 1. Obtain the Newtons forward interpolating polynomial, P5 (x) for the following tab-
ular data and interpolate the value of the function at x = 0.0045.
x 0 0.001 0.002 0.003 0.004 0.005
y 1.121 1.123 1.1255 1.127 1.128 1.1285
Solution: For this data, we have the forward difference table
xi      yi       Δyi      Δ²yi      Δ³yi      Δ⁴yi      Δ⁵yi
0       1.121    0.002    0.0005    −0.0015   0.002     −0.0025
0.001   1.123    0.0025   −0.0010   0.0005    −0.0005
0.002   1.1255   0.0015   −0.0005   0.0
0.003   1.127    0.001    −0.0005
0.004   1.128    0.0005
0.005   1.1285
Thus, for x = x0 + hu, where x0 = 0, h = 0.001 and u = (x − x0 )/h, we get
P5 (x) = 1.121 + u (0.002) + (u(u − 1)/2!) (0.0005) + (u(u − 1)(u − 2)/3!) (−0.0015)
         + (u(u − 1)(u − 2)(u − 3)/4!) (0.002) + (u(u − 1)(u − 2)(u − 3)(u − 4)/5!) (−0.0025).
Thus,
2. Using the following table for tan x, approximate its value at 0.71. Also, find an error estimate (Note
tan(0.71) = 0.85953).
Solution: As the point x = 0.71 lies towards the initial tabular values, we shall use Newtons Forward
formula. The forward difference table is:
xi      yi        Δyi       Δ²yi      Δ³yi      Δ⁴yi
0.70 0.84229 0.03478 0.00124 0.0001 0.00001
0.72 0.87707 0.03602 0.00134 0.00011
0.74 0.91309 0.03736 0.00145
0.76 0.95045 0.03881
0.78 0.98926
In the above table, we note that Δ³y is almost constant, so we shall attempt 3rd degree polynomial
interpolation.
Note that x0 = 0.70, h = 0.02 gives u = (0.71 − 0.70)/0.02 = 0.5. Thus, using the forward interpolating
polynomial of degree 3, we get
P3 (u) = 0.84229 + 0.03478 u + (0.00124/2!) u(u − 1) + (0.0001/3!) u(u − 1)(u − 2).
Thus,
tan(0.71) ≈ 0.84229 + 0.03478 × 0.5 + (0.00124/2!) × 0.5 × (−0.5)
            + (0.0001/3!) × 0.5 × (−0.5) × (−1.5)
          = 0.859535.
Note that exact value of tan(0.71) (upto 5 decimal place) is 0.85953. and the approximate value,
obtained using the Newtons interpolating polynomial is very close to this value. This is also reflected
by the error estimate given above.
3. Apply 3rd degree interpolation polynomial for the set of values given in Example 11.2.15, to estimate
the value of f (10.3) by taking
f(x0 + hu) ≈ y0 + Δy0 u + (Δ²y0 /2!) u(u − 1) + (Δ³y0 /3!) u(u − 1)(u − 2).
Therefore,
(a) for x0 = 9.0, h = 1.0 and x = 10.3, we have u = (10.3 − 9.0)/1 = 1.3. This gives,
f(10.3) ≈ 5 + 0.4 × 1.3 + (0.2/2!) × (1.3) × (0.3) + (0.0/3!) × (1.3) × (0.3) × (−0.7)
        = 5.559.
(b) for x0 = 10.0, h = 1.0 and x = 10.3, we have u = (10.3 − 10.0)/1 = 0.3. This gives,
f(10.3) ≈ 5.4 + 0.6 × 0.3 + (0.2/2!) × (0.3) × (−0.7) + (−0.3/3!) × (0.3) × (−0.7) × (−1.7)
        = 5.54115.
Note: as x = 10.3 is closer to x = 10.0, we may expect estimate calculated using x0 = 10.0 to
be a better approximation.
(c) for x = 13.5, we use the backward interpolating polynomial, which gives
f(xN + hu) ≈ yN + ∇yN u + (∇²yN /2!) u(u + 1) + (∇³yN /3!) u(u + 1)(u + 2).
Therefore, taking xN = 14, h = 1.0 and x = 13.5, we have u = (13.5 − 14)/1 = −0.5. This gives,
f(13.5) ≈ 8.1 + 0.6 × (−0.5) + (−0.1/2!) × (−0.5) × (0.5) + (0.0/3!) × (−0.5) × (0.5) × (1.5)
        = 7.8125.
2. The speed of a train, running between two station is measured at different distances from the starting
station. If x is the distance in km. from the starting station, then v(x), the speed (in km/hr) of the
train at the distance x is given by the following table:
x 0 50 100 150 200 250
v(x) 0 60 80 110 90 0
Find the approximate speed of the train at the mid point between the two stations.
3. The following table gives the values of the function S(x) = ∫₀ˣ sin(πt²/2) dt at different tabular points x:
x 0 0.04 0.08 0.12 0.16 0.20
S(x) 0 0.00003 0.00026 0.00090 0.00214 0.00419
Obtain a fifth degree interpolating polynomial for S(x). Compute S(0.02) and also find an error estimate
for it.
4. Following data gives the temperatures (in o C) between 8.00 am to 8.00 pm. on May 10, 2005 in
Kanpur:
Time 8 am 12 noon 4 pm 8pm
Temperature 30 37 43 38
Obtain Newtons backward interpolating polynomial of degree 3 to compute the temperature in Kanpur
on that day at 5.00 pm.
Chapter 12
Lagrange's Interpolation Formula
12.1 Introduction
In the previous chapter, we derived the interpolation formula when the values of the function are given
at equidistant tabular points x0 , x1 , . . . , xN . However, it is not always possible to obtain values of the
function, y = f (x) at equidistant interval points, xi . In view of this, it is desirable to derive an in-
terpolating formula, which is applicable even for unequally distant points. Lagranges Formula is one
such interpolating formula. Unlike the previous interpolating formulas, it does not use the notion of
differences, however we shall introduce the concept of divided differences before coming to it.
Let us assume that the function y = f (x) is linear. Then [xi , xj ] is constant for any two tabular
points xi and xj , i.e., it is independent of xi and xj . Hence,
[xi , xj ] = (f(xi ) − f(xj ))/(xi − xj ) = [xj , xi ].
Thus, for a linear function f(x), if we take the points x, x0 and x1 then [x0 , x] = [x0 , x1 ], i.e.,
(f(x) − f(x0 ))/(x − x0 ) = [x0 , x1 ].
Thus, f(x) = f(x0 ) + (x − x0 )[x0 , x1 ].
So, if f (x) is approximated with a linear polynomial, then the value of the function at any point x
can be calculated by using f (x) P1 (x) = f (x0 ) + (x x0 )[x0 , x1 ], where [x0 , x1 ] is the first divided
difference of f relative to x0 and x1 .
Similarly, for a polynomial function of degree 2, the second divided difference
[xi , xj , xk ] = ([xj , xk ] − [xi , xj ])/(xk − xi )   is constant.
In view of the above, for a polynomial function of degree 2, we have [x, x0 , x1 ] = [x0 , x1 , x2 ]. Thus,
([x, x0 ] − [x0 , x1 ])/(x − x1 ) = [x0 , x1 , x2 ].
This gives,
[x, x0 ] = [x0 , x1 ] + (x − x1 )[x0 , x1 , x2 ].
So, whenever f (x) is approximated with a second degree polynomial, the value of f (x) at any point
x can be computed using the above polynomial, which uses the values at three points x0 , x1 and x2 .
Example 12.2.3 Using the following tabular values for a function y = f (x), obtain its second degree poly-
nomial approximation.
i 0 1 2
xi 0.1 0.16 0.2
f (xi ) 1.12 1.24 1.40
Also, find the approximate value of the function at x = 0.13.
Solution: We shall first calculate the desired divided differences:
[x0 , x1 ] = (1.24 − 1.12)/(0.16 − 0.1) = 2,   [x1 , x2 ] = (1.40 − 1.24)/(0.2 − 0.16) = 4,   [x0 , x1 , x2 ] = (4 − 2)/(0.2 − 0.1) = 20.
Therefore,
f(0.13) ≈ 1.12 + 2(0.13 − 0.1) + 20(0.13 − 0.1)(0.13 − 0.16) = 1.162.
Exercise 12.2.4 1. Using the following table, which gives values of log(x) corresponding to certain values
of x, find approximate value of log(323.5) with the help of a second degree polynomial.
2. Show that
[x0 , x1 , x2 ] = f(x0 )/((x0 − x1 )(x0 − x2 )) + f(x1 )/((x1 − x0 )(x1 − x2 )) + f(x2 )/((x2 − x0 )(x2 − x1 )).
4. Show that for a linear function, the second divided difference with respect to any three points, xi , xj
and xk , is always zero.
Definition 12.2.5 (k th Divided Difference) The k th divided difference of f (x) relative to the tab-
ular points x0 , x1 , . . . , xk , is defined recursively as
[x0 , x1 , . . . , xk ] = ([x1 , x2 , . . . , xk ] − [x0 , x1 , . . . , xk−1 ])/(xk − x0 ).
It can be shown by mathematical induction that for equidistant points,
[x0 , x1 , . . . , xk ] = Δ^k y0 /(k! h^k) = ∇^k yk /(k! h^k)    (12.2.1)
where, y0 = f (x0 ), and h = x1 x0 = x2 x1 = = xk xk1 .
In general,
[xi , xi+1 , . . . , xi+n ] = Δ^n yi /(n! h^n),
where yi = f (xi ) and h is the length of the interval for i = 0, 1, 2, . . . .
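The recursion in Definition 12.2.5 translates directly into code. The sketch below (illustrative only; the name divided_differences is ours) computes the divided-difference table column by column and returns the leading entries [x0], [x0, x1], . . . .

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ...] by the recursion of Definition 12.2.5."""
    n = len(xs)
    table = [list(ys)]                       # zeroth divided differences f[x_i]
    for k in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i]) for i in range(n - k)])
    return [col[0] for col in table]

# Data of Example 12.2.3:
print(divided_differences([0.1, 0.16, 0.2], [1.12, 1.24, 1.40]))   # approximately [1.12, 2.0, 20.0]
```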
Remark 12.2.6 In view of the remark (11.2.18) and (12.2.1), it is easily seen that for a polynomial
function of degree n, the nth divided difference is constant and the (n + 1)th divided difference is zero.
Exercise 12.2.8 Show that f (x) can be written in the following form:
Remark 12.2.9 In general it can be shown that f (x) = Pn (x) + Rn+1 (x), where,
Theorem 12.3.1 The kth divided difference can be expressed as
[x0 , x1 , . . . , xk ] = f(x0 )/((x0 − x1 )(x0 − x2 )⋯(x0 − xk )) + f(x1 )/((x1 − x0 )(x1 − x2 )⋯(x1 − xk ))
                         + ⋯ + f(xk )/((xk − x0 )(xk − x1 )⋯(xk − xk−1 ))
                       = f(x0 )/∏_{j=1}^{k}(x0 − xj ) + ⋯ + f(xl )/∏_{j=0, j≠l}^{k}(xl − xj ) + ⋯ + f(xk )/∏_{j=0, j≠k}^{k}(xk − xj ).
Proof. We will prove the result by induction on k. The result is trivially true for k = 0. For k = 1,
which on rearranging the terms gives the desired result. Therefore, by mathematical induction, the
proof of the theorem is complete.
Remark 12.3.2 In view of the theorem 12.3.1 the k th divided difference of a function f (x), remains
unchanged regardless of how its arguments are interchanged, i.e., it is independent of the order of its
arguments.
Now, if a function is approximated by a polynomial of degree n, then , its (n + 1)th divided difference
relative to x, x0 , x1 , . . . , xn will be zero,(Remark 12.2.6) i.e.,
[x, x0 , x1 , . . . , xn ] = 0
or,
f(x)/((x − x0 )(x − x1 )⋯(x − xn )) = − [ f(x0 )/((x0 − x)(x0 − x1 )⋯(x0 − xn )) + f(x1 )/((x1 − x)(x1 − x0 )(x1 − x2 )⋯(x1 − xn ))
                                          + ⋯ + f(xn )/((xn − x)(xn − x0 )⋯(xn − xn−1 )) ],
which gives,
f(x) = ((x − x1 )(x − x2 )⋯(x − xn ))/((x0 − x1 )⋯(x0 − xn )) f(x0 ) + ((x − x0 )(x − x2 )⋯(x − xn ))/((x1 − x0 )(x1 − x2 )⋯(x1 − xn )) f(x1 )
       + ⋯ + ((x − x0 )(x − x1 )⋯(x − xn−1 ))/((xn − x0 )(xn − x1 )⋯(xn − xn−1 )) f(xn )
     = Σ_{i=0}^{n} ( ∏_{j=0, j≠i}^{n} (x − xj )/(xi − xj ) ) f(xi )
     = Σ_{i=0}^{n} ( ∏_{j=0}^{n}(x − xj ) / ( (x − xi ) ∏_{j=0, j≠i}^{n}(xi − xj ) ) ) f(xi )
     = ∏_{j=0}^{n}(x − xj ) · Σ_{i=0}^{n} f(xi ) / ( (x − xi ) ∏_{j=0, j≠i}^{n}(xi − xj ) ).
Note that the expression on the right is a polynomial of degree n and takes the value f(xi ) at x = xi
for i = 0, 1, . . . , n.
This polynomial approximation is called Lagrange's Interpolation Formula.
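A direct implementation of this formula is short. The following Python sketch (added for illustration, not part of the notes) evaluates the Lagrange interpolating polynomial at a point and reproduces, up to rounding, the value f(10) ≈ 13.197845 computed in Example 12.3.6 below.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Data of Example 12.3.6: estimate f(10).
xs = [9.3, 9.6, 10.2, 10.4, 10.8]
ys = [11.40, 12.80, 14.70, 17.00, 19.80]
print(lagrange(xs, ys, 10.0))
```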
Remark 12.3.3 In view of the Remark (12.2.9), we can observe that Pn (x) is another form of Lagranges
Interpolation polynomial formula as obtained above. Also the remainder term Rn+1 gives an estimate
of error between the true value and the interpolated value of the function.
Remark 12.3.4 We have seen earlier that the divided differences are independent of the order of its
arguments. As the Lagranges formula has been derived using the divided differences, it is not necessary
here to have the tabular points in the increasing order. Thus one can use Lagranges formula even
when the points x0 , x1 , , xk , , xn are in any order, which was not possible in the case of Newtons
Difference formulae.
Remark 12.3.5 One can also use the Lagranges Interpolating Formula to compute the value of x for
a given value of y = f (x). This is done by interchanging the roles of x and y, i.e. while using the table
of values, we take tabular points as yk and nodal points are taken as xk .
Example 12.3.6 Using the following data, find by Lagranges formula, the value of f (x) at x = 10 :
i 0 1 2 3 4
xi 9.3 9.6 10.2 10.4 10.8
yi = f (xi ) 11.40 12.80 14.70 17.00 19.80
Also find the value of x where f (x) = 16.00.
Solution: To compute f (10), we first calculate the following products:
∏_{j=0}^{4}(x − xj ) = ∏_{j=0}^{4}(10 − xj ) = (10 − 9.3)(10 − 9.6)(10 − 10.2)(10 − 10.4)(10 − 10.8) = −0.01792,
∏_{j=1}^{4}(x0 − xj ) = 0.4455,   ∏_{j=0, j≠1}^{4}(x1 − xj ) = −0.1728,   ∏_{j=0, j≠2}^{4}(x2 − xj ) = 0.0648,
∏_{j=0, j≠3}^{4}(x3 − xj ) = −0.0704,   and   ∏_{j=0, j≠4}^{4}(x4 − xj ) = 0.4320.
Thus,
f(10) ≈ −0.01792 × [ 11.40/(0.7 × 0.4455) + 12.80/(0.4 × (−0.1728)) + 14.70/((−0.2) × 0.0648)
                     + 17.00/((−0.4) × (−0.0704)) + 19.80/((−0.8) × 0.4320) ]
      = 13.197845.
Now to find the value of x such that f (x) = 16, we interchange the roles of x and y and calculate the
following products:
∏_{j=0}^{4}(y − yj ) = ∏_{j=0}^{4}(16 − yj ) = (16 − 11.4)(16 − 12.8)(16 − 14.7)(16 − 17.0)(16 − 19.8) = 72.7168,
∏_{j=1}^{4}(y0 − yj ) = 217.3248,   ∏_{j=0, j≠1}^{4}(y1 − yj ) = −78.204,   ∏_{j=0, j≠2}^{4}(y2 − yj ) = 73.5471,
∏_{j=0, j≠3}^{4}(y3 − yj ) = −151.4688,   and   ∏_{j=0, j≠4}^{4}(y4 − yj ) = 839.664.
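The inverse problem can then be finished numerically. The sketch below (our own illustration, not part of the notes) interchanges the roles of x and y, exactly as Remark 12.3.5 suggests, and evaluates the Lagrange polynomial through the points (yi, xi) at y = 16.

```python
from math import prod

def lagrange(xs, ys, x):
    return sum(ys[i] * prod((x - xs[j]) / (xs[i] - xs[j]) for j in range(len(xs)) if j != i)
               for i in range(len(xs)))

xs = [9.3, 9.6, 10.2, 10.4, 10.8]
ys = [11.40, 12.80, 14.70, 17.00, 19.80]
print(lagrange(ys, xs, 16.00))   # the x at which f(x) is approximately 16
```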
Exercise 12.3.7 The following table gives the data for steam pressure P vs temperature T :
Exercise 12.3.8 Compute from following table the value of y for x = 6.20 :
re-designated tabular points in their given order are equidistant. Now recall from remark (12.3.3) that
Lagranges interpolating polynomial can also be written as :
Now note that the points x2 , x1 , x0 , x1 , x2 and x3 are equidistant and the divided difference are
independent of the order of their arguments. Thus, we have
[x0 , x1 ] = Δy0 /h,    [x0 , x1 , x−1 ] = [x−1 , x0 , x1 ] = Δ²y−1 /(2h²),
[x0 , x1 , x−1 , x2 ] = [x−1 , x0 , x1 , x2 ] = Δ³y−1 /(3!h³),
[x0 , x1 , x−1 , x2 , x−2 ] = [x−2 , x−1 , x0 , x1 , x2 ] = Δ⁴y−2 /(4!h⁴),
[x0 , x1 , x−1 , x2 , x−2 , x3 ] = [x−2 , x−1 , x0 , x1 , x2 , x3 ] = Δ⁵y−2 /(5!h⁵),
where yi = f(xi ) for i = −2, −1, 0, 1, 2. Now using the above relations and the transformation x =
x0 + hu, we get
f(x0 + hu) ≈ y0 + (hu) Δy0 /h + (hu)(hu − h) Δ²y−1 /(2h²) + (hu)(hu − h)(hu + h) Δ³y−1 /(3!h³)
             + (hu)(hu − h)(hu + h)(hu − 2h) Δ⁴y−2 /(4!h⁴)
             + (hu)(hu − h)(hu + h)(hu − 2h)(hu + 2h) Δ⁵y−2 /(5!h⁵).
Thus,
f(x0 + hu) ≈ y0 + u Δy0 + u(u − 1) Δ²y−1 /2! + u(u² − 1) Δ³y−1 /3!
             + u(u² − 1)(u − 2) Δ⁴y−2 /4! + u(u² − 1)(u² − 2²) Δ⁵y−2 /5! .    (12.4.1)
Similarly, using the tabular points x0 , x1 = x0 − h, x2 = x0 + h, x3 = x0 − 2h, x4 = x0 + 2h, x5 = x0 − 3h, and
re-designating them as x−3 , x−2 , x−1 , x0 , x1 and x2 , we get another form of the interpolating polynomial
as follows:
f(x0 + hu) ≈ y0 + u Δy−1 + u(u + 1) Δ²y−1 /2! + u(u² − 1) Δ³y−2 /3!
             + u(u² − 1)(u + 2) Δ⁴y−2 /4! + u(u² − 1)(u² − 2²) Δ⁵y−3 /5! .    (12.4.2)
Now taking the average of the two interpolating polynomials (12.4.1) and (12.4.2) (called Gauss's first
and second interpolating formulas, respectively), we obtain Stirling's formula of interpolation:
f(x0 + hu) ≈ y0 + u (Δy−1 + Δy0 )/2 + u² Δ²y−1 /2! + (u(u² − 1)/3!) (Δ³y−2 + Δ³y−1 )/2
             + u²(u² − 1) Δ⁴y−2 /4! + (u(u² − 1)(u² − 2²)/5!) (Δ⁵y−3 + Δ⁵y−2 )/2 + ⋯ .    (12.4.3)
These are very helpful when, the point of interpolation lies near the middle of the interpolating interval.
In this case one usually writes the diagonal form of the difference table.
Example 12.4.1 Using the following data, find by Stirling's formula the value of f(x) = cot(πx) at x =
0.225 :
x 0.20 0.21 0.22 0.23 0.24
f (x) 1.37638 1.28919 1.20879 1.13427 1.06489
Here the point x = 0.225 lies near the central tabular point x = 0.22. Thus, we define x−2 = 0.20, x−1 =
0.21, x0 = 0.22, x1 = 0.23, x2 = 0.24, to get the difference table in diagonal form as:
x−2 = 0.20   y−2 = 1.37638
                              Δy−2 = −0.08719
x−1 = 0.21   y−1 = 1.28919                      Δ²y−2 = 0.00679
                              Δy−1 = −0.08040                     Δ³y−2 = −0.00091
x0 = 0.22    y0 = 1.20879                       Δ²y−1 = 0.00588                    Δ⁴y−2 = 0.00017
                              Δy0 = −0.07452                      Δ³y−1 = −0.00074
x1 = 0.23    y1 = 1.13427                       Δ²y0 = 0.00514
                              Δy1 = −0.06938
x2 = 0.24    y2 = 1.06489
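The example stops at the table; the evaluation itself is mechanical. The following sketch (our own illustration, not part of the notes) plugs u = (0.225 − 0.22)/0.01 = 0.5 and the differences from the diagonal table into Stirling's formula (12.4.3).

```python
u = (0.225 - 0.22) / 0.01          # = 0.5
y0 = 1.20879
dy_m1, dy0 = -0.08040, -0.07452            # Delta y_{-1}, Delta y_0
d2y_m1 = 0.00588                           # Delta^2 y_{-1}
d3y_m2, d3y_m1 = -0.00091, -0.00074        # Delta^3 y_{-2}, Delta^3 y_{-1}
d4y_m2 = 0.00017                           # Delta^4 y_{-2}

approx = (y0
          + u * (dy_m1 + dy0) / 2
          + u**2 * d2y_m1 / 2
          + u * (u**2 - 1) / 6 * (d3y_m2 + d3y_m1) / 2
          + u**2 * (u**2 - 1) / 24 * d4y_m2)
print(approx)        # estimate of cot(pi * 0.225)
```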
Exercise 12.4.2 Compute from the following table the value of y for x = 0.05 :
Chapter 13
Numerical Differentiation and Integration
13.1 Introduction
Numerical differentiation/integration is the process of computing the value of the derivative or the integral of a
function whose analytical expression is not available, but which is specified through a set of values at certain
tabular points x0 , x1 , . . . , xn . In such cases, we first determine an interpolating polynomial approximating the
function (either on the whole interval or in sub-intervals) and then differentiate/integrate this polynomial
to approximately compute the required value at the given point.
f(x) = f(x0 + hu) ≈ y0 + Δy0 u + (Δ²y0 /2!) (u(u − 1)) + ⋯ + (Δ^k y0 /k!) {u(u − 1)⋯(u − k + 1)}
        + ⋯ + (Δ^n y0 /n!) {u(u − 1)⋯(u − n + 1)}.    (13.2.1)
Differentiating (13.2.1), we get the approximate value of the first derivative at x as
df/dx = (1/h) df/du ≈ (1/h) [ Δy0 + ((2u − 1)/2!) Δ²y0 + ((3u² − 6u + 2)/3!) Δ³y0 + ⋯
          + (Δ^n y0 /n!) ( n u^{n−1} − (n(n − 1)²/2) u^{n−2} + ⋯ + (−1)^{n−1}(n − 1)! ) ],    (13.2.2)
where u = (x − x0 )/h.
Remark 13.2.1 Numerical differentiation using Stirlings formula is found to be more accurate than
that with the Newtons difference formulae. Also it is more convenient to use.
Now higher derivatives can be found by successively differentiating the interpolating polynomials. Thus
e.g. using (13.2.2), we get the second derivative at x = x0 as
d²f/dx² |x=x0 = (1/h²) [ Δ²y0 − Δ³y0 + (11/12) Δ⁴y0 − ⋯ ].
Example 13.2.2 Compute from following table the value of the derivative of y = f (x) at x = 1.7489 :
x   1.73           1.74           1.75           1.76           1.77
y   0.1772844100   0.1755204006   0.1737739435   0.1720448638   0.1703329888
Solution: We note here that x0 = 1.75, h = 0.01, so u = (1.7489 − 1.75)/0.01 = −0.11, and
Δy0 = −0.0017290797,   Δ²y0 = 0.0000172047,   Δ³y0 = −0.0000001712,
Δy−1 = −0.0017464571,  Δ²y−1 = 0.0000173774,  Δ³y−1 = −0.0000001727,
Δ³y−2 = −0.0000001749,  Δ⁴y−2 = 0.0000000022.
Thus, f ′(1.7489) is obtained as:
(i) Using Newton's forward difference formula,
f ′(1.7489) ≈ (1/0.01) [ −0.0017290797 + ((2(−0.11) − 1)/2) × 0.0000172047
              + ((3(−0.11)² − 6(−0.11) + 2)/3!) × (−0.0000001712) ]
            = −0.173965150143.
(ii) Using Stirling's formula, we get:
f ′(1.7489) ≈ (1/0.01) [ ((−0.0017464571) + (−0.0017290797))/2 + (−0.11) × 0.0000173774
              + ((3(−0.11)² − 1)/3!) × ((−0.0000001749) + (−0.0000001727))/2
              + (2(−0.11)(2(−0.11)² − 1)/4!) × 0.0000000022 ]
            = −0.17396520185.
It may be pointed out here that the above table is for f(x) = e^{−x}, whose derivative has the value
−0.1739652000 at x = 1.7489.
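For readers who want to reproduce the example, the following sketch (not from the notes) rebuilds the table from f(x) = e^{−x}, forms the forward differences and applies the truncated formula (13.2.2); since only five points are available, the last computed third difference is used in place of Δ³y0.

```python
import math

xs = [1.73, 1.74, 1.75, 1.76, 1.77]
ys = [math.exp(-x) for x in xs]
h = 0.01
u = (1.7489 - 1.75) / h            # = -0.11

dy = [ys[i + 1] - ys[i] for i in range(4)]       # first differences
d2y = [dy[i + 1] - dy[i] for i in range(3)]      # second differences
d3y = [d2y[i + 1] - d2y[i] for i in range(2)]    # third differences

# forward differences taken at x0 = 1.75 (index 2); the last available third
# difference is used as an approximation, as noted above
deriv = (dy[2] + (2*u - 1)/2 * d2y[2] + (3*u**2 - 6*u + 2)/6 * d3y[-1]) / h
print(deriv, -math.exp(-1.7489))   # estimate vs. exact derivative
```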
Example 13.2.3 Using only the first term in the formula (13.2.6) show that
f ′(x0 ) ≈ (y1 − y−1 )/(2h).
Hence compute from the following table the value of the derivative of y = e^x at x = 1.15 :
f ′(1.15) ≈ (3.4903 − 2.8577)/(2 × 0.1) = 3.1630.
Note the error between the computed value and the true value is 3.1630 − 3.1582 = 0.0048.
Exercise 13.2.4 Retaining only the first two terms in the formula (13.2.3), show that
f ′(x0 ) ≈ (−3y0 + 4y1 − y2 )/(2h).
Hence compute the derivative of y = ex at x = 1.15 from the following table:
Also compare your result with the computed value in the example (13.2.3).
Exercise 13.2.5 Retaining only the first two terms in the formula (13.2.6), show that
f ′(x0 ) ≈ (y−2 − 8y−1 + 8y1 − y2 )/(12h).
Hence compute from following table the value of the derivative of y = ex at x = 1.15 :
Exercise 13.2.6 Following table gives the values of y = f (x) at the tabular points x = 0 + 0.05 k,
k = 0, 1, 2, 3, 4, 5.
Compute (i)the derivatives y and y at x = 0.0 by using the formula (13.2.2). (ii)the second derivative y
at x = 0.1 by using the formula (13.2.6).
Similarly, if we have tabular points which are not equidistant, one can use Lagranges interpolating
polynomial, which is differentiated to get an estimate of first derivative. We shall see the result for
four tabular points and then give the general formula. Let x0 , x1 , x2 , x3 be the tabular points, then the
corresponding Lagranges formula gives us:
f(x) ≈ ((x − x1 )(x − x2 )(x − x3 ))/((x0 − x1 )(x0 − x2 )(x0 − x3 )) f(x0 ) + ((x − x0 )(x − x2 )(x − x3 ))/((x1 − x0 )(x1 − x2 )(x1 − x3 )) f(x1 )
       + ((x − x0 )(x − x1 )(x − x3 ))/((x2 − x0 )(x2 − x1 )(x2 − x3 )) f(x2 ) + ((x − x0 )(x − x1 )(x − x2 ))/((x3 − x0 )(x3 − x1 )(x3 − x2 )) f(x3 ).
Differentiation of the above interpolating polynomial gives:
df(x)/dx ≈ [ (x − x2 )(x − x3 ) + (x − x1 )(x − x3 ) + (x − x1 )(x − x2 ) ] / ( (x0 − x1 )(x0 − x2 )(x0 − x3 ) ) · f(x0 )
           + [ (x − x2 )(x − x3 ) + (x − x0 )(x − x3 ) + (x − x0 )(x − x2 ) ] / ( (x1 − x0 )(x1 − x2 )(x1 − x3 ) ) · f(x1 )
           + [ (x − x1 )(x − x3 ) + (x − x0 )(x − x3 ) + (x − x0 )(x − x1 ) ] / ( (x2 − x0 )(x2 − x1 )(x2 − x3 ) ) · f(x2 )
           + [ (x − x1 )(x − x2 ) + (x − x0 )(x − x2 ) + (x − x0 )(x − x1 ) ] / ( (x3 − x0 )(x3 − x1 )(x3 − x2 ) ) · f(x3 )
         = ∏_{r=0}^{3}(x − xr ) · Σ_{i=0}^{3} [ f(xi ) / ( (x − xi ) ∏_{j=0, j≠i}^{3}(xi − xj ) ) ] Σ_{k=0, k≠i}^{3} 1/(x − xk ) .    (13.2.7)
df/dx |x=x0 ≈ [ 1/(x0 − x1 ) + 1/(x0 − x2 ) + 1/(x0 − x3 ) ] f(x0 ) + ((x0 − x2 )(x0 − x3 ))/((x1 − x0 )(x1 − x2 )(x1 − x3 )) f(x1 )
              + ((x0 − x1 )(x0 − x3 ))/((x2 − x0 )(x2 − x1 )(x2 − x3 )) f(x2 ) + ((x0 − x1 )(x0 − x2 ))/((x3 − x0 )(x3 − x1 )(x3 − x2 )) f(x3 ).
Now, generalizing Equation (13.2.7) for n + 1 tabular points x0 , x1 , , xn we get:
df/dx = ∏_{r=0}^{n}(x − xr ) · Σ_{i=0}^{n} [ f(xi ) / ( (x − xi ) ∏_{j=0, j≠i}^{n}(xi − xj ) ) ] Σ_{k=0, k≠i}^{n} 1/(x − xk ) .
Example 13.2.7 Compute from following table the value of the derivative of y = f (x) at x = 0.6 :
x 0.4 0.6 0.7
y 3.3836494 4.2442376 4.7275054
Solution: The given tabular points are not equidistant, so we use Lagranges interpolating polynomial with
three points: x0 = 0.4, x1 = 0.6, x2 = 0.7 . Now differentiating this polynomial the derivative of the function
at x = x1 is obtained in the following form:
df/dx |x=x1 ≈ ((x1 − x2 )/((x0 − x1 )(x0 − x2 ))) f(x0 ) + ( 1/(x1 − x2 ) + 1/(x1 − x0 ) ) f(x1 ) + ((x1 − x0 )/((x2 − x0 )(x2 − x1 ))) f(x2 ).
For the sake of comparison, it may be pointed out here that the above table is for the function f (x) = 2ex +x,
and the value of its derivative at x = 0.6 is 4.6442376.
Exercise 13.2.8 For the function, whose tabular values are given in the above example(13.2.8), compute the
value of its derivative at x = 0.5.
Remark 13.2.9 It may be remarked here that the numerical differentiation for higher derivatives does
not give very accurate results and so is not much preferred.
13.3 Numerical Integration
Replacing the integrand f(x) on [a, b] by Newton's forward interpolating polynomial and integrating term by term, we get
∫_a^b f(x) dx = ∫_a^b [ y0 + (x − x0 ) Δy0 /h + (x − x0 )(x − x1 ) Δ²y0 /(2!h²) + (x − x0 )(x − x1 )(x − x2 ) Δ³y0 /(3!h³)
                        + (x − x0 )(x − x1 )(x − x2 )(x − x3 ) Δ⁴y0 /(4!h⁴) + ⋯ ] dx.
With the substitution x = x0 + hu (so that b = x0 + nh), this becomes
∫_a^b f(x) dx = h ∫_0^n [ y0 + u Δy0 + u(u − 1) Δ²y0 /2! + u(u − 1)(u − 2) Δ³y0 /3!
                          + u(u − 1)(u − 2)(u − 3) Δ⁴y0 /4! + ⋯ ] du,
i.e.,
∫_a^b f(x) dx = h [ n y0 + (n²/2) Δy0 + (Δ²y0 /2!)(n³/3 − n²/2) + (Δ³y0 /3!)(n⁴/4 − n³ + n²)
                    + (Δ⁴y0 /4!)(n⁵/5 − 3n⁴/2 + 11n³/3 − 3n²) + ⋯ ].    (13.3.1)
For n = 1, this gives
∫_a^b f(x) dx = h [ y0 + Δy0 /2 ] = (h/2)[y0 + y1 ].    (13.3.2)
Similarly, for n = 2, we get
∫_a^b f(x) dx = h [ 2y0 + 2Δy0 + (8/3 − 4/2) Δ²y0 /2 ]
              = 2h [ y0 + (y1 − y0 ) + (1/3) (y2 − 2y1 + y0 )/2 ] = (h/3)[y0 + 4y1 + y2 ].    (13.3.3)
In the above we have replaced the integrand by an interpolating polynomial over the whole interval
[a, b] and then integrated it term by term. However, this process is not very useful. More useful
Numerical integral formulae are obtained by dividing the interval [a, b] in n sub-intervals [xk , xk+1 ],
where, xk = x0 + kh for k = 0, 1, , n with x0 = a, xn = x0 + nh = b.
Now using the formula ( 13.3.2) for n = 1 on the interval [xk , xk+1 ], we get,
∫_{xk}^{xk+1} f(x) dx = (h/2)[yk + yk+1 ].
Thus, we have,
∫_a^b f(x) dx = (h/2)[y0 + y1 ] + (h/2)[y1 + y2 ] + ⋯ + (h/2)[yk + yk+1 ] + ⋯ + (h/2)[yn−2 + yn−1 ] + (h/2)[yn−1 + yn ],
i.e.,
∫_a^b f(x) dx = (h/2)[y0 + 2y1 + 2y2 + ⋯ + 2yk + ⋯ + 2yn−1 + yn ]
              = h [ (y0 + yn )/2 + Σ_{i=1}^{n−1} yi ].    (13.3.4)
This is called Trapezoidal Rule. It is a simple quadrature formula, but is not very accurate.
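A direct implementation of (13.3.4) takes only a couple of lines; the sketch below (illustrative, not part of the notes) reproduces the value obtained in Example 13.3.2 below.

```python
def trapezoidal(ys, h):
    """Composite Trapezoidal rule (13.3.4) for equally spaced values ys."""
    return h * ((ys[0] + ys[-1]) / 2 + sum(ys[1:-1]))

# Table of Example 13.3.2: y = exp(x**2) at x = 0.0, 0.1, ..., 1.0
ys = [1.00000, 1.01005, 1.04081, 1.09417, 1.17351, 1.28402,
      1.43332, 1.63231, 1.89648, 2.24790, 2.71828]
print(trapezoidal(ys, 0.1))   # 1.467171, as in Example 13.3.2
```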
Remark 13.3.1 An estimate for the error E1 in numerical integration using the Trapezoidal rule is
given by
E1 = −((b − a)/12) Δ²ȳ,
where Δ²ȳ is the average value of the second forward differences.
Recall that in the case of a linear function, the second forward differences are zero; hence, the Trapezoidal
rule gives the exact value of the integral if the integrand is a linear function.
Example 13.3.2 Using the Trapezoidal rule, compute the integral ∫₀¹ e^{x²} dx, where the table for the values of
y = e^{x²} is given below:
x   0.0       0.1       0.2       0.3       0.4       0.5       0.6       0.7       0.8       0.9       1.0
y   1.00000   1.01005   1.04081   1.09417   1.17351   1.28402   1.43332   1.63231   1.89648   2.24790   2.71828
Solution: Here, h = 0.1, n = 10, (y0 + y10 )/2 = 1.85914 and y1 + y2 + ⋯ + y9 = 12.81257.
Thus,
∫₀¹ e^{x²} dx ≈ 0.1 × [1.85914 + 12.81257] = 1.467171.
∫_a^b f(x) dx = ∫_{x0}^{x2} f(x) dx + ∫_{x2}^{x4} f(x) dx + ⋯ + ∫_{x2k}^{x2k+2} f(x) dx + ⋯ + ∫_{xn−2}^{xn} f(x) dx
              = (h/3)[(y0 + 4y1 + y2 ) + (y2 + 4y3 + y4 ) + ⋯ + (yn−2 + 4yn−1 + yn )]
              = (h/3)[y0 + 4y1 + 2y2 + 4y3 + 2y4 + ⋯ + 2yn−2 + 4yn−1 + yn ],
which gives the second quadrature formula, known as Simpson's rule:
∫_a^b f(x) dx = (h/3)[ (y0 + yn ) + 4(y1 + y3 + ⋯ + y2k+1 + ⋯ + yn−1 ) + 2(y2 + y4 + ⋯ + y2k + ⋯ + yn−2 ) ]
              = (h/3)[ (y0 + yn ) + 4 Σ_{i=1, i odd}^{n−1} yi + 2 Σ_{i=2, i even}^{n−2} yi ].    (13.3.5)
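The corresponding sketch for Simpson's rule (13.3.5) is given below (our own illustration, not part of the notes); it requires an odd number of equally spaced points, i.e. an even number of sub-intervals.

```python
def simpson(ys, h):
    """Composite Simpson rule (13.3.5) for equally spaced values ys (odd number of points)."""
    if len(ys) % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of equally spaced points")
    odd_sum = sum(ys[1:-1:2])    # y1 + y3 + ... + y_{n-1}
    even_sum = sum(ys[2:-1:2])   # y2 + y4 + ... + y_{n-2}
    return h / 3 * (ys[0] + ys[-1] + 4 * odd_sum + 2 * even_sum)

# Data of Example 13.3.4 (y = exp(x**2) on [0, 1] with h = 0.1):
ys = [1.00000, 1.01005, 1.04081, 1.09417, 1.17351, 1.28402,
      1.43332, 1.63231, 1.89648, 2.24790, 2.71828]
print(simpson(ys, 0.1))   # about 1.46268, cf. Example 13.3.4
```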
Remark 13.3.3 An estimate for the error E2 in numerical integration using the Simpsons rule is given
by
E2 = −((b − a)/180) Δ⁴ȳ,    (13.3.6)
where Δ⁴ȳ is the average value of the fourth forward differences.
Example 13.3.4 Using the table for the values of y = e^{x²} as given in Example 13.3.2, compute the integral
∫₀¹ e^{x²} dx by Simpson's rule. Also estimate the error in its calculation and compare it with the error using the
Trapezoidal rule.
Solution: Here, h = 0.1, n = 10, thus we have an odd number of nodal points. Further,
y0 + y10 = 1.0 + 2.71828 = 3.71828,   Σ_{i=1, i odd}^{9} yi = y1 + y3 + y5 + y7 + y9 = 7.26845,
and
Σ_{i=2, i even}^{8} yi = y2 + y4 + y6 + y8 = 5.54412.
Thus,
∫₀¹ e^{x²} dx ≈ (0.1/3) × [3.71828 + 4 × 7.26845 + 2 × 5.54412] = 1.46267733.
To find the error estimates, we consider the forward difference table, which is given below:
xi    yi        Δyi       Δ²yi      Δ³yi      Δ⁴yi
0.0 1.00000 0.01005 0.02071 0.00189 0.00149
0.1 1.01005 0.03076 0.02260 0.00338 0.00171
0.2 1.04081 0.05336 0.02598 0.00519 0.00243
0.3 1.09417 0.07934 0.03117 0.00762 0.00320
0.4 1.17351 0.11051 0.03879 0.01090 0.00459
0.5 1.28402 0.14930 0.04969 0.01549 0.00658
0.6 1.43332 0.19899 0.06518 0.02207 0.00964
0.7 1.63231 0.26417 0.08725 0.03171
0.8 1.89648 0.35142 0.11896
0.9 2.24790 0.47038
1.0 2.71828
Thus, the error due to the Trapezoidal rule is
E1 = −((1 − 0)/12) Δ²ȳ
   = −(1/12) × (0.02071 + 0.02260 + 0.02598 + 0.03117 + 0.03879 + 0.04969 + 0.06518 + 0.08725 + 0.11896)/9
   = −0.004260463,
while the error due to Simpson's rule is
E2 = −((1 − 0)/180) Δ⁴ȳ = −(1/180) × (0.00149 + 0.00171 + 0.00243 + 0.00320 + 0.00459 + 0.00658 + 0.00964)/7 ≈ −0.0000235.
It shows that the error in numerical integration is much less by using Simpsons rule.
Example 13.3.5 Compute the integral ∫_{0.05}^{1} f(x) dx, where the table for the values of y = f(x) is given below:
x   0.05     0.1      0.15     0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9      1.0
y   0.0785   0.1564   0.2334   0.3090   0.4540   0.5878   0.7071   0.8090   0.8910   0.9511   0.9877   1.0000
Solution: Note that here the points are not given to be equidistant, so as such we cannot use any of
the above two formulae. However, we notice that the tabular points 0.05, 0.10, 0.15 and 0.20 are equidistant,
and so are the tabular points 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0. Now we can divide the interval into two
subintervals, [0.05, 0.2] and [0.2, 1.0]; thus,
∫_{0.05}^{1} f(x) dx = ∫_{0.05}^{0.2} f(x) dx + ∫_{0.2}^{1} f(x) dx.
The integrals can then be evaluated in each subinterval. We observe that the second set has an odd number of
points. Thus, the first integral is evaluated by using the Trapezoidal rule and the second one by Simpson's rule
(of course, one could have used the Trapezoidal rule in both the subintervals).
For the first integral h = 0.05 and for the second one h = 0.1. Thus,
∫_{0.05}^{0.2} f(x) dx = 0.05 × [ (0.0785 + 0.3090)/2 + 0.1564 + 0.2334 ] = 0.0291775,
and
∫_{0.2}^{1.0} f(x) dx = (0.1/3) × [ (0.3090 + 1.0000) + 4 × (0.4540 + 0.7071 + 0.8910 + 0.9877) + 2 × (0.5878 + 0.8090 + 0.9511) ]
                      = 0.6054667,
which gives,
∫_{0.05}^{1} f(x) dx = 0.0291775 + 0.6054667 = 0.6346442.
It may be mentioned here that in the above integral, f(x) = sin(πx/2) and that the exact value of the integral
is 0.6346526. It will be interesting for the reader to compute the two integrals using the Trapezoidal rule and
compare the values.
Exercise 13.3.6 1. Using the Trapezoidal rule, compute the integral ∫_a^b f(x) dx, where the table for the values
of y = f(x) is given below. Also find an error estimate for the computed value.
(a)
x   a = 1     2         3         4         5         6         7         8         9         b = 10
y   0.09531   0.18232   0.26236   0.33647   0.40546   0.47000   0.53063   0.58779   0.64185   0.69314
Chapter 14
Appendix
14.1 System of Linear Equations
1. if ra = r < n, the set of solutions of the linear system is an infinite set and has the form
2. if ra = r = n, the solution set of the linear system has a unique n 1 vector x0 satisfying Ax0 = 0.
Proof. Suppose [C d] is the row reduced echelon form of the augmented matrix [A b]. Then
by Theorem 2.3.4, the solution set of the linear system [C d] is same as the solution set of the linear
system [A b]. So, the proof consists of understanding the solution set of the linear system Cx = d.
1. Let r = ra < n.
Then [C d] has its first r rows as the non-zero rows. So, by Remark 2.4.5, the matrix C = [cij ]
has r leading columns. Let the leading columns be 1 i1 < i2 < < ir n. Then we observe
the following:
(a) the entries clil for 1 l r are leading terms. That is, for 1 l r, all entries in the ith
l
column of C is zero, except the entry clil . The entry clil = 1;
(b) corresponding is each leading column, we have r basic variables, xi1 , xi2 , . . . , xir ;
(c) the remaining n r columns correspond to the n r free variables (see Remark 2.4.5),
xj1 , xj2 , . . . , xjnr . So, the free variables correspond to the columns 1 j1 < j2 < <
jnr n.
For 1 l r, consider the lth row of [C d]. The entry clil = 1 and is the leading term. Also, the
first r rows of the augmented matrix [C d] give rise to the linear equations
nr
X
xil + cljk xjk = dl , for 1 l r.
k=1
Let yᵗ = (xi1 , . . . , xir , xj1 , . . . , xjn−r ). Then the set of solutions consists of
y = ( d1 − Σ_{k=1}^{n−r} c1jk xjk ,  . . . ,  dr − Σ_{k=1}^{n−r} crjk xjk ,  xj1 , . . . , xjn−r )ᵗ.    (14.1.1)
As xjs for 1 ≤ s ≤ n − r are free variables, let us assign arbitrary constants ks ∈ R to xjs . That is,
for 1 ≤ s ≤ n − r, xjs = ks . Then the set of solutions is given by
y = ( d1 − Σ_{s=1}^{n−r} c1js ks ,  . . . ,  dr − Σ_{s=1}^{n−r} crjs ks ,  k1 , . . . , kn−r )ᵗ
  = ( d1 , . . . , dr , 0, 0, . . . , 0 )ᵗ + k1 ( −c1j1 , . . . , −crj1 , 1, 0, . . . , 0 )ᵗ
    + k2 ( −c1j2 , . . . , −crj2 , 0, 1, . . . , 0 )ᵗ + ⋯ + kn−r ( −c1jn−r , . . . , −crjn−r , 0, 0, . . . , 1 )ᵗ.
Let us write v0 t = (d1 , d2 , . . . , dr , 0, . . . , 0)t . Also, for 1 i n r, let vi be the vector associated
with ki in the above representation of the solution y. Observe the following:
Cv0 = Cy = d. (14.1.2)
d = Cy = C(v0 + v1 ). (14.1.3)
d = Cy = C(v0 + vt ). (14.1.4)
Note that a rearrangement of the entries of y will give us the solution vector xt = (x1 , x2 , . . . , xn )t .
Suppose that for 0 i n r, the vectors ui s are obtained by applying the same rearrangement
to the entries of vi s which when applied to y gave x. Therefore, we have Cu0 = d and for
1 i n r, Cui = 0. Now, using equivalence of the linear system Ax = b and Cx = d gives
Thus, we have obtained the desired result for the case r = r1 < n.
2. r = ra = n, m n.
Here the first n rows of the row reduced echelon matrix [C d] are the non-zero rows. Also, the
number of columns in C equals n = rank (A) = rank (C). So, by Remark 2.4.5, all the columns
of C are leading columns and all the variables x1 , x2 , . . . , xn are basic variables. Thus, the row
reduced echelon form [C d] of [A b] is given by
" #
In d
[C d] = .
0 0
Therefore, the solution set of the linear system Cx = d is obtained using the equation In x = d.
This gives us, a solution as x0 = d. Also, by Theorem 2.4.11, the row reduced form of a given
matrix is unique, the solution obtained above is the only solution. That is, the solution set consists
of a single vector d.
3. r < ra .
As C has n columns, the row reduced echelon matrix [C d] has n + 1 columns. The condition,
r < ra implies that ra = r + 1. We now observe the following:
(b) Whereas the condition ra = r + 1 implies that the (r + 1)th row of the matrix [C d] is
non-zero.
Thus, the (r + 1)th row of [C d] is of the form (0, . . . , 0, 1). Or in other words, dr+1 = 1.
0 x1 + 0 x2 + + 0 xn = 1.
This linear equation has no solution. Hence, in this case, the linear system Cx = d has no solution.
Therefore, by Theorem 2.3.4, the linear system Ax = b has no solution.
Corollary 14.1.2 Consider the linear system Ax = b. Then the two statements given below cannot hold
together.
14.2 Determinant
In this section, S denotes the set {1, 2, . . . , n}.
2. The set of all functions σ : S → S that are both one to one and onto will be denoted by Sn . That is,
Sn is the set of all permutations of the set {1, 2, . . . , n}.
Example 14.2.2 1. In general, we represent a permutation σ by the two-row array
σ = ( 1 2 ⋯ n ; σ(1) σ(2) ⋯ σ(n) ),
where the second row lists the images of 1, 2, . . . , n. This representation of a permutation is called a
two row notation for σ.
2. For each positive integer n, Sn has a special permutation called the identity permutation, denoted Idn ,
such that Idn (i) = i for 1 ≤ i ≤ n. That is, Idn = ( 1 2 ⋯ n ; 1 2 ⋯ n ).
3. Let n = 3. Then
S3 = { τ1 = ( 1 2 3 ; 1 2 3 ),  τ2 = ( 1 2 3 ; 1 3 2 ),  τ3 = ( 1 2 3 ; 2 1 3 ),
       τ4 = ( 1 2 3 ; 2 3 1 ),  τ5 = ( 1 2 3 ; 3 1 2 ),  τ6 = ( 1 2 3 ; 3 2 1 ) }.    (14.2.5)
2. Suppose that σ, τ ∈ Sn . Then both σ and τ are one to one and onto. So, their composition map
σ ◦ τ , defined by (σ ◦ τ )(i) = σ( τ (i) ), is also both one to one and onto. Hence, σ ◦ τ is also a
permutation. That is, σ ◦ τ ∈ Sn .
3. Suppose σ ∈ Sn . Then σ is both one to one and onto. Hence, the function σ⁻¹ : S → S defined
by σ⁻¹(m) = ℓ if and only if σ(ℓ) = m for 1 ≤ m ≤ n, is well defined and indeed σ⁻¹ is also both
one to one and onto. Hence, for every element σ ∈ Sn , σ⁻¹ ∈ Sn and is the inverse of σ.
Proposition 14.2.4 Consider the set of all permutations Sn . Then the following holds:
2. Sn = { 1 : Sn }.
Proof. For the first part, we need to show that given any element Sn , there exists elements
, Sn such that = = . It can easily be verified that = 1 and = 1 .
For the second part, note that for any Sn , ( 1 )1 = . Hence the result holds.
Definition 14.2.5 Let σ ∈ Sn . Then the number of inversions of σ, denoted n(σ), equals
n(σ) = | { (i, j) : i < j and σ(i) > σ(j) } | ,
the number of pairs whose order is reversed by σ.
Definition 14.2.6 A permutation Sn is called a transposition if there exists two positive integers m, r
{1, 2, . . . , n} such that (m) = r, (r) = m and (i) = i for 1 i 6= m, r n.
For the sake of convenience, a transposition for which (m) = r, (r) = m and (i) = i for
1 i 6= m, r n will be denoted simply by = (m r) or (r m). Also, note that for any transposition
Sn , 1 = . That is, = Idn .
!
1 2 3 4
Example 14.2.7 1. The permutation = is a transposition as (1) = 3, (3) =
3 2 1 4
1, (2) = 2 and (4) = 4. Here note that = (1 3) = (3 1). Also, check that
n( ) = 3 + 1 + 1 + 1 + 0 + 3 + 2 + 1 = 12.
3. Let , m and r be distinct element from {1, 2, . . . , n}. Suppose = (m r) and = (m ). Then
( )() = () = (m) = r, ( )(m) = (m) = () =
( )(r) = (r) = (r) = m, and ( )(i) = (i) = (i) = i if i 6= , m, r.
!
1 2 m r n
Therefore, = (m r) (m ) = = (r l) (r m).
1 2 r m n
!
1 2 m r n
Similarly check that = .
1 2 m r n
With the above definitions, we state and prove two important results.
Proof. We will prove the result by induction on n(), the number of inversions of . If n() = 0, then
= Idn = (1 2) (1 2). So, let the result be true for all Sn with n() k.
For the next step of the induction, suppose that Sn with n( ) = k + 1. Choose the smallest
positive number, say , such that
As is a permutation, there exists a positive number, say m, such that () = m. Also, note that m > .
Define a transposition by = ( m). Then note that
( )(i) = i, for i = 1, 2, . . . , .
= 1 2 t .
Before coming to our next important result, we state and prove the following lemma.
Idn = 1 2 t ,
then t is even.
where and s are distinct elements of {1, 2, . . . , n} and are different from m, r. In the first case, we
can remove 1 2 and obtain Idn = 3 4 t . In this expression for identity, the number of
transpositions is t 2 = k 1 < k. So, by mathematical induction, t 2 is even and hence t is also even.
In the other three cases, we replace the original expression for 1 2 by their counterparts on the
right to obtain another expression for identity in terms of t = k + 1 transpositions. But note that in the
new expression for identity, the positive integer m doesnt appear in the first transposition, but appears
in the second transposition. We can continue the above process with the second and third transpositions.
At this step, either the number of transpositions will reduce by 2 (giving us the result by mathematical
induction) or the positive number m will get shifted to the third transposition. The continuation of this
process will at some stage lead to an expression for identity in which the number of transpositions is
t 2 = k 1 (which will give us the desired result by mathematical induction), or else we will have
an expression in which the positive number m will get shifted to the right most transposition. In the
later case, the positive integer m appears exactly once in the expression for identity and hence this
expression does not fix m whereas for the identity permutation Idn (m) = m. So the later case leads us
to a contradiction.
Hence, the process will surely lead to an expression in which the number of transpositions at some
stage is t 2 = k 1. Therefore, by mathematical induction, the proof of the lemma is complete.
Theorem 14.2.10 Let Sn . Suppose there exist transpositions 1 , 2 , . . . , k and 1 , 2 , . . . , such that
= 1 2 k = 1 2
Idn = 1 2 k 1 1 .
Hence by Lemma 14.2.9, k + is even. Hence, either k and are both even or both odd. Thus the result
follows.
Remark 14.2.12 Observe that if and are both even or both odd permutations, then the permu-
tations and are both even. Whereas if one of them is odd and the other even then the
permutations and are both odd. We use this to define a function on Sn , called the sign of a
permutation, as follows:
Example 14.2.14 1. The identity permutation, Idn is an even permutation whereas every transposition
is an odd permutation. Thus, sgn(Idn ) = 1 and for any transposition Sn , sgn() = 1.
2. Using Remark 14.2.12, sgn( ) = sgn() sgn( ) for any two permutations , Sn .
Definition 14.2.15 Let A = [aij ] be an n × n matrix with entries from F. The determinant of A, denoted
det(A), is defined as
det(A) = Σ_{σ∈Sn} sgn(σ) a1σ(1) a2σ(2) ⋯ anσ(n) = Σ_{σ∈Sn} sgn(σ) ∏_{i=1}^{n} aiσ(i) .
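Although this expression is mainly of theoretical use, it can be evaluated literally for small n. The sketch below (an illustration added here, not part of the notes) computes sgn(σ) from the number of inversions of Definition 14.2.5 and sums over all permutations.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation: +1 for an even number of inversions, -1 otherwise."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def det(A):
    """Determinant via the permutation expansion of Definition 14.2.15."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

print(det([[1, 3, 7], [4, 5, 6], [2, 0, 1]]))
```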
Remark 14.2.16 1. Observe that det(A) is a scalar quantity. The expression for det(A) seems
complicated at the first glance. But this expression is very helpful in proving the results related
with properties of determinant.
For example, if A = [aij ] is a 3 × 3 matrix, then
det(A) = Σ_{σ∈S3} sgn(σ) ∏_{i=1}^{3} aiσ(i)
       = sgn(τ1 ) ∏_{i=1}^{3} aiτ1(i) + sgn(τ2 ) ∏_{i=1}^{3} aiτ2(i) + sgn(τ3 ) ∏_{i=1}^{3} aiτ3(i)
         + sgn(τ4 ) ∏_{i=1}^{3} aiτ4(i) + sgn(τ5 ) ∏_{i=1}^{3} aiτ5(i) + sgn(τ6 ) ∏_{i=1}^{3} aiτ6(i)
       = a11 a22 a33 − a11 a23 a32 − a12 a21 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 .
Observe that this expression for det(A) for a 3 × 3 matrix A is the same as that given in (2.8.1).
5. Let B = [bij ] and C = [cij ] be two matrices which differ from the matrix A = [aij ] only in the mth
row for some m. If cmj = amj + bmj for 1 j n then det(C) = det(A) + det(B).
6. if B is obtained from A by replacing the th row by itself plus k times the mth row, for 6= m then
det(B) = det(A).
7. if A is a triangular matrix then det(A) = a11 a22 ann , the product of the diagonal elements.
11. det(A) = det(At ), where recall that At is the transpose of the matrix A.
Proof. Proof of Part 1. Suppose B = [bij ] is obtained from A = [aij ] by the interchange of the th
and mth row. Then bj = amj , bmj = aj for 1 j n and bij = aij for 1 i 6= , m n, 1 j n.
X n
Y X n
Y
det(B) = sgn() bi(i) = sgn( ) bi( )(i)
Sn i=1 Sn i=1
X
= sgn( ) sgn() b1( )(1) b2( )(2) b( )() bm( )(m) bn( )(n)
Sn
X
= sgn( ) sgn() b1(1) b2(2) b(m) bm() bn(n)
Sn
!
X
= sgn() a1(1) a2(2) am(m) a() an(n) as sgn( ) = 1
Sn
= det(A).
Proof of Part 2. Suppose that B = [bij ] is obtained by multiplying the mth row of A by c 6= 0. Then
bmj = c amj and bij = aij for 1 i 6= m n, 1 j n. Then
X
det(B) = sgn()b1(1) b2(2) bm(m) bn(n)
Sn
X
= sgn()a1(1) a2(2) cam(m) an(n)
Sn
X
= c sgn()a1(1) a2(2) am(m) an(n)
Sn
= c det(A).
P
Proof of Part 3. Note that det(A) = sgn()a1(1) a2(2) . . . an(n) . So, each term in the expression
Sn
for determinant, contains one entry from each row. Hence, from the condition that A has a row consisting
of all zeros, the value of each term is 0. Thus, det(A) = 0.
Proof of Part 4. Suppose that the th and mth row of A are equal. Let B be the matrix obtained
from A by interchanging the th and mth rows. Then by the first part, det(B) = det(A). But the
assumption implies that B = A. Hence, det(B) = det(A). So, we have det(B) = det(A) = det(A).
Hence, det(A) = 0.
Proof of Part 5. By definition and the given assumption, we have
X
det(C) = sgn()c1(1) c2(2) cm(m) cn(n)
Sn
X
= sgn()c1(1) c2(2) (bm(m) + am(m) ) cn(n)
Sn
X
= sgn()b1(1) b2(2) bm(m) bn(n)
Sn
X
+ sgn()a1(1) a2(2) am(m) an(n)
Sn
= det(B) + det(A).
Proof of Part 6. Suppose that B = [bij ] is obtained from A by replacing the th row by itself plus k
times the mth row, for 6= m. Then bj = aj + k amj and bij = aij for 1 i 6= m n, 1 j n.
Then
X
det(B) = sgn()b1(1) b2(2) b() bm(m) bn(n)
Sn
X
= sgn()a1(1) a2(2) (a() + kam(m) ) am(m) an(n)
Sn
X
= sgn()a1(1) a2(2) a() am(m) an(n)
Sn
X
+k sgn()a1(1) a2(2) am(m) am(m) an(n)
Sn
X
= sgn()a1(1) a2(2) a() am(m) an(n) use Part 4
Sn
= det(A).
Proof of Part 7. First let us assume that A is an upper triangular matrix. Observe that if Sn
is different from the identity permutation then n() 1. So, for every 6= Idn Sn , there exists a
positive integer m, 1 m n 1 (depending on ) such that m > (m). As A is an upper triangular
matrix, am(m) = 0 for each (6= Idn ) Sn . Hence the result follows.
A similar reasoning holds true, in case A is a lower triangular matrix.
Proof of Part 8. Let In be the identity matrix of order n. Then using Part 7, det(In ) = 1. Also,
recalling the notations for the elementary matrices given in Remark 2.4.14, we have det(Eij ) = 1,
(using Part 1) det(Ei (c)) = c (using Part 2) and det(Eij (k) = 1 (using Part 6). Again using Parts 1, 2
and 6, we get det(EA) = det(E) det(A).
Proof of Part 9. Suppose A is invertible. Then by Theorem 2.7.7, A is a product of elementary
matrices. That is, there exist elementary matrices E1 , E2 , . . . , Ek such that A = E1 E2 Ek . Now a
repeated application of Part 8 implies that det(A) = det(E1 ) det(E2 ) det(Ek ). But det(Ei ) 6= 0 for
1 i k. Hence, det(A) 6= 0.
Now assume that det(A) 6= 0. We show that A is invertible. On the contrary, assume that A is
not invertible. Then by Theorem 2.7.7, the matrix A is not of full rank. That is there exists a positive
integer r < n such " that
# rank(A) = r. So, there exist elementary matrices E1 , E2 , . . . , Ek such that
B
E1 E2 Ek A = . Therefore, by Part 3 and a repeated application of Part 8,
0
" #!
B
det(E1 ) det(E2 ) det(Ek ) det(A) = det(E1 E2 Ek A) = det = 0.
0
But det(Ei ) 6= 0 for 1 i k. Hence, det(A) = 0. This contradicts our assumption that det(A) 6= 0.
Hence our assumption is false and therefore A is invertible.
Proof of Part 10. Suppose A is not invertible. Then by Part 9, det(A) = 0. Also, the product matrix
AB is also not invertible. So, again by Part 9, det(AB) = 0. Thus, det(AB) = det(A) det(B).
Now suppose that A is invertible. Then by Theorem 2.7.7, A is a product of elementary matrices.
That is, there exist elementary matrices E1 , E2 , . . . , Ek such that A = E1 E2 Ek . Now a repeated
application of Part 8 implies that
Proof of Part 11. Let B = [bij ] = At . Then bij = aji for 1 i, j n. By Proposition 14.2.4, we know
that Sn = { 1 : Sn }. Also sgn() = sgn( 1 ). Hence,
X
det(B) = sgn()b1(1) b2(2) bn(n)
Sn
X
= sgn( 1 )b1 (1) 1 b1 (2) 2 b1 (n) n
Sn
X
= sgn( 1 )a11 (1) b21 (2) bn1 (n)
Sn
= det(A).
Remark 14.3.2 1. The result that det(A) = det(At ) implies that in the statements made in Theo-
rem 14.3.1, where ever the word row appears it can be replaced by column.
2. Let A = [aij ] be a matrix satisfying a11 = 1 and a1j = 0 for 2 j n. Let B be the submatrix
of A obtained by removing the first row and the first column. Then it can be easily shown that
det(A) = det(B). The reason being is as follows:
for every Sn with (1) = 1 is equivalent to saying that is a permutation of the elements
{2, 3, . . . , n}. That is, Sn1 . Hence,
X X
det(A) = sgn()a1(1) a2(2) an(n) = sgn()a2(2) an(n)
Sn Sn ,(1)=1
X
= sgn()b1(1) bn(n) = det(B).
Sn1
We are now ready to relate this definition of determinant with the one given in Definition 2.8.2.
n
P
Theorem 14.3.3 Let A be an n n matrix. Then det(A) = (1)1+j a1j det A(1|j) , where recall that
j=1
A(1|j) is the submatrix of A obtained by removing the 1st row and the j th column.
We now compute det(Bj ) for 1 j n. Note that the matrix Bj can be transformed into Cj by j 1
interchanges of columns done in the following manner:
first interchange the 1st and 2nd column, then interchange the 2nd and 3rd column and so on (the last
process consists of interchanging the (j 1)th column with the j th column. Then by Remark 14.3.2
and Parts 1 and 2 of Theorem 14.3.1, we have det(Bj ) = a1j (1)j1 det(Cj ). Therefore by (14.3.6),
n
X n
X
det(A) = (1)j1 a1j det A(1|j) = (1)j+1 a1j det A(1|j) .
j=1 j=1
14.4 Dimension of M + N
Theorem 14.4.1 Let V (F) be a finite dimensional vector space and let M and N be two subspaces of V.
Then
dim(M ) + dim(N ) = dim(M + N ) + dim(M ∩ N ).    (14.4.7)
2. L(B2 ) = M + N.
The second part can be easily verified. To prove the first part, we consider the linear system of equations
1 u1 + + k uk + 1 w1 + + s ws + 1 v1 + + r vr = 0. (14.4.8)
1 u1 + + k uk + 1 w1 + + s ws = (1 v1 + + r vr ).
(1 1 )u1 + + (k k )uk + 1 w1 + + s ws = 0.
But then, the vectors u1 , u2 , . . . , uk , w1 , . . . , ws are linearly independent as they form a basis. Therefore,
by the definition of linear independence, we get
1 u1 + + k uk + 1 v1 + + r vr = 0.
Thus we see that the linear system of Equations (14.4.8) has no non-zero solution. And therefore,
the vectors are linearly independent.
Hence, the set B2 is a basis of M + N. We now count the vectors in the sets B1 , B2 , BM and BN to
get the required result.
3. If V is finite dimensional vector space then dim(R(T )) dim(V ). The equality holds if and only if
N (T ) = {0}.
Proof. Part 1) can be easily proved. For 2), let T be one-one. Suppose u N (T ). This means that
T (u) = 0 = T (0). But then T is one-one implies that u = 0. If N (T ) = {0} then T (u) = T (v)
T (u v) = 0 implies that u = v. Hence, T is one-one.
The other parts can be similarly proved. Part 3) follows from the previous two parts.
The proof of the next theorem is immediate from the fact that T (0) = 0 and the definition of linear
independence/dependence.
Theorem 14.5.3 (Rank Nullity Theorem) Let T : V W be a linear transformation and V be a finite
dimensional vector space. Then
dim(Range(T )) + dim(N (T )) = dim(V ),  or  ρ(T ) + ν(T ) = n.
which is equivalent to showing that Range (T ) is the span of {T (ur+1), T (ur+2 ), . . . , T (un )}.
We now prove that the set {T (ur+1), T (ur+2 ), . . . , T (un )} is a linearly independent set. Suppose the
set is linearly dependent. Then, there exists scalars, r+1 , r+2 , . . . , n , not all zero such that
Or T (r+1 ur+1 + r+2 ur+2 + + n un ) = 0 which in turn implies r+1 ur+1 + r+2 ur+2 + + n un
N (T ) = L(u1 , . . . , ur ). So, there exists scalars i , 1 i r such that
That is,
1 u1 + + + r ur r+1 ur+1 n un = 0.
Thus i = 0 for 1 i n as {u1 , u2 , . . . , un } is a basis of V. In other words, we have shown that the set
{T (ur+1), T (ur+2 ), . . . , T (un )} is a basis of Range (T ). Now, the required result follows.
we now state another important implication of the Rank-nullity theorem.
Corollary 14.5.4 Let T : V V be a linear transformation on a finite dimensional vector space V. Then
Proof. Let dim(V ) = n and let T be one-one. Then dim(N (T )) = 0. Hence, by the rank-nullity
Theorem 14.5.3 dim( Range (T )) = n = dim(V ). Also, Range(T ) is a subspace of V. Hence, Range(T ) =
V. That is, T is onto.
Suppose T is onto. Then Range(T ) = V. Hence, dim( Range (T )) = n. But then by the rank-nullity
Theorem 14.5.3, dim(N (T )) = 0. That is, T is one-one.
Now we can assume that T is one-one and onto. Hence, for every vector u in the range, there is a
unique vectors v in the domain such that T (v) = u. Therefore, for every u in the range, we define
T 1 (u) = v.
14.6 Condition for Exactness
Consider the differential equation
M (x, y) + N (x, y) dy/dx = 0.    (14.6.9)
Definition 14.6.1 (Exact Equation) The Equation (14.6.9) is called exact if there exists a real valued twice
continuously differentiable function f such that
∂f/∂x = M   and   ∂f/∂y = N.
Theorem 14.6.2 Let M and N be smooth in a region D. The equation (14.6.9) is exact if and only if
∂M/∂y = ∂N/∂x.    (14.6.10)
Proof. Let Equation (14.6.9) be exact. Then there is a smooth function f (defined on D) such that
M = ∂f/∂x and N = ∂f/∂y. So, ∂M/∂y = ∂²f/∂y∂x = ∂²f/∂x∂y = ∂N/∂x, and so Equation (14.6.10) holds.
Conversely, let Equation (14.6.10) hold. We now show that Equation (14.6.9) is exact. Define
G(x, y) on D by
G(x, y) = ∫ M (x, y) dx + g(y),
where g is any arbitrary smooth function. Then ∂G/∂x = M (x, y), which shows that
∂/∂x ( ∂G/∂y ) = ∂/∂y ( ∂G/∂x ) = ∂M/∂y = ∂N/∂x.
So, ∂/∂x ( N − ∂G/∂y ) = 0, or N − ∂G/∂y is independent of x. Let φ(y) = N − ∂G/∂y, or N = φ(y) + ∂G/∂y. Now
M (x, y) + N dy/dx = ∂G/∂x + ( ∂G/∂y + φ(y) ) dy/dx
                   = ( ∂G/∂x + ∂G/∂y · dy/dx ) + ( d/dy ∫ φ(y) dy ) dy/dx
                   = d/dx [ G(x, y(x)) ] + d/dx [ ∫ φ(y) dy ]     where y = y(x)
                   = d/dx f (x, y)     where f (x, y) = G(x, y) + ∫ φ(y) dy.
Index
Trace of a Matrix, 15
Vector
Coordinates, 66
Length, 88
Norm, 88
vector
angle, 89
Vector Space, 49
Cn : Complex n-tuple, 52
Rn : Real n-tuple, 51
Basis, 58
Complex, 50
Dimension, 61
Finite Dimensional, 59
Infinite Dimensional, 59
Real, 50
Subspace, 53
Vector Space:Dimension of M + N , 250
Vector Subspace, 53
Wronskian, 156
Zero Transformation, 70