01) Vector - Merged
Vector:
An n-component vector X is an ordered n-tuple of numbers, written as a row (x1, x2, ..., xn) or as a column (x1, x2, ..., xn)′. The numbers x1, x2, ..., xn are called the components of the vector. For example, X = (2, 0, 1, 3) is a 4-vector.
Vector Space:
The set of all n-vectors over a field F is called the n-vector space over F. It is usually denoted by Vn(F), or simply Vn if F is known.
Basis of a sub-space:
A set of vectors is said to be a basis of a sub-space if
1. The sub-space is spanned by the set and
2. The set is linearly independent.
For example:
The set of vectors e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1) constitutes a basis of Vn.
Dimension of a sub-space:
The number of vectors in any basis of a sub-space is called the dimension of the sub-space.
Example: Let x1 = (2, 3, 4, 7), x2 = (0, 0, 0, 1), x3 = (1, 0, 1, 0) be vectors and 1, 2, 3 be scalars. Then their linear combination is defined to be the vector
X = 1x1 + 2x2 + 3x3 = 1(2, 3, 4, 7) + 2(0, 0, 0, 1) + 3(1, 0, 1, 0) = (5, 3, 7, 9).
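The linear combination above can be checked with a minimal sketch in plain Python; the helper name `linear_combination` is ours, not from the notes:

```python
def linear_combination(scalars, vectors):
    # X = sum of c * x over paired scalars and vectors, componentwise
    n = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(scalars, vectors)) for i in range(n)]

x1, x2, x3 = [2, 3, 4, 7], [0, 0, 0, 1], [1, 0, 1, 0]
X = linear_combination([1, 2, 3], [x1, x2, x3])
print(X)  # [5, 3, 7, 9], as in the example
```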
Length of a vector:
If X is a vector of Vn(R), then the positive square root of the inner product of X with itself is defined as the length of the vector X and is denoted by ‖X‖.
Orthogonal vectors:
A vector X1 is said to be orthogonal to a vector X2 if X1 · X2 = X1′X2 = 0.
A set S of n-vectors X1, X2, ..., Xn is said to be an orthogonal set if any two distinct vectors in S are orthogonal.
Example: The vectors X1 = (1, 2, 1) and X2 = (2, 3, −8) are orthogonal, for their inner product X1 · X2 = 1·2 + 2·3 + 1·(−8) = 0.
Orthonormal vectors:
A set S of n-vectors X1, X2, ..., Xk is said to be an orthonormal set of vectors if
(i) each vector in S is a unit vector,
(ii) any two distinct vectors in S are orthogonal.
Example: The vectors X1 = (1/√3, 1/√3, 1/√3), X2 = (1/√6, −2/√6, 1/√6) and X3 = (1/√2, 0, −1/√2) form an orthonormal set of vectors, since ‖X1‖ = ‖X2‖ = ‖X3‖ = 1 and X1 · X2 = X2 · X3 = X3 · X1 = 0.
Mutually orthogonal:
Two vectors x1 and x2 of Vn(R) are called orthogonal if their inner product x1′x2 = x2′x1 is zero, and a set of vectors is said to be a mutually orthogonal set of vectors if every pair of them is orthogonal.
Example: The vectors x1 = (1/3)(1, 2, 2), x2 = (1/3)(2, 1, −2) and x3 = (1/3)(2, −2, 1) form an orthogonal set of vectors, since x1 · x2 = x2 · x3 = x3 · x1 = 0.
Orthogonal basis:
A basis of a subspace of Vn(R) formed by mutually orthogonal vectors is called an orthogonal basis of that subspace.
e.g. X1 = (1, 1, 1) and X2 = (1, 0, −1) form an orthogonal basis of a two-dimensional subspace of V3(R).
Orthonormal basis:
In an orthogonal basis, if the mutually orthogonal vectors are also unit vectors, then the basis is called an orthonormal basis or a normal orthogonal basis. The elementary vectors e1, e2, ..., en form an orthonormal basis of Vn(R).
The non-zero vectors X1, X2, ..., Xn are linearly dependent if and only if one of the vectors Xk is a linear combination of the preceding vectors X1, X2, ..., Xk−1.
Proof:
If Xk = α1X1 + α2X2 + ... + αk−1Xk−1,
then α1X1 + α2X2 + ... + αk−1Xk−1 − Xk = 0
i.e. α1X1 + α2X2 + ... + αk−1Xk−1 + (−1)Xk + 0·Xk+1 + ... + 0·Xn = 0.
Since the coefficient of Xk is −1 ≠ 0, not all the scalars are zero.
Hence the vectors X1, X2, ..., Xn are linearly dependent.
Conversely,
suppose that the vectors X1, X2, ..., Xn are linearly dependent.
Then α1X1 + α2X2 + ... + αnXn = 0, where the scalars αi are not all zero.
Let k be the largest index for which αk ≠ 0. Then
α1X1 + α2X2 + ... + αkXk + 0·Xk+1 + ... + 0·Xn = 0
⟹ α1X1 + α2X2 + ... + αkXk = 0.
Now if k = 1, this implies α1X1 = 0 with α1 ≠ 0, so that X1 = 0, which is a contradiction since X1 is a non-zero vector. Hence k > 1.
We may write Xk = −(α1/αk)X1 − (α2/αk)X2 − ... − (αk−1/αk)Xk−1,
so that Xk is a linear combination of X1, X2, ..., Xk−1.
Hence the proof.
The vectors X1, X2, ..., Xn are linearly dependent if and only if one of them can be expressed as a linear combination of the others.
Proof:
Suppose that the vectors X1, X2, ..., Xn are linearly dependent.
Then α1X1 + α2X2 + ... + αkXk + αk+1Xk+1 + ... + αnXn = 0, where the αi are not all zero.
Let the coefficient of Xk be non-zero, i.e. αk ≠ 0. Then
αkXk = −α1X1 − ... − αk−1Xk−1 − αk+1Xk+1 − ... − αnXn
⟹ Xk = −(α1/αk)X1 − ... − (αk−1/αk)Xk−1 − (αk+1/αk)Xk+1 − ... − (αn/αk)Xn.
So that Xk is a linear combination of the set of vectors X1, ..., Xk−1, Xk+1, ..., Xn.
Conversely,
suppose that Xk is a linear combination of the set of vectors X1, ..., Xk−1, Xk+1, ..., Xn:
Xk = β1X1 + ... + βk−1Xk−1 + βk+1Xk+1 + ... + βnXn
⟹ β1X1 + ... + βk−1Xk−1 + (−1)Xk + βk+1Xk+1 + ... + βnXn = 0.
It is clear that even if all the coefficients of X1, ..., Xk−1, Xk+1, ..., Xn are zero, the coefficient of Xk, which is −1, is not zero. So the vectors X1, X2, ..., Xn are linearly dependent.
Hence the proof.
223605 Linear Algebra
4
Professor Biplab Bhattacharjee
Dept Of Statistics
Govt. B M College, Barishal
If the set of vectors X1, X2, ..., Xm is linearly independent and the set of vectors X1, X2, ..., Xm, X is linearly dependent, then X is a linear combination of the vectors X1, X2, ..., Xm.
Proof:
Since the set X1, X2, ..., Xm, X is linearly dependent, there exist scalars α1, α2, ..., αm, β, not all zero, such that
α1X1 + α2X2 + ... + αmXm + βX = 0 .................... (i)
If β = 0, then one of the αi is not zero, and (i) becomes α1X1 + α2X2 + ... + αmXm = 0 with the αi not all zero. This contradicts the linear independence of the set X1, X2, ..., Xm, which implies that β ≠ 0.
Thus from (i) we get βX = −α1X1 − α2X2 − ... − αmXm
⟹ X = −(α1/β)X1 − (α2/β)X2 − ... − (αm/β)Xm.
So that X is a linear combination of the vectors X1, X2, ..., Xm.
Hence the proof.
Proof:
Let X1 , X2 ,., X k is a set of orthogonal vectors.
i.e X i . X j 0 ; i j
Consider a relation 1 X1 2 X2 k X k 0 i
Taking inner product with X1 of both sides of i we get,
1 X1 2 X2 k X k . X1 0. X1
1 X1 X1 2 X 2 X1 k X k X1 0
1 X 1 X1 0
1 0 [ X 1 X1 0 ]
Similarly taking inner products with X2 , X 3 , , X k of both side of i we can show that
2 0, 3 0, , k o
X1 , X2 , , X k is a linearly independent set.
Hence the proof.
A basis of a vector space can always be selected from a set of vectors which generates the space.
Proof:
Let X1, X2, ..., Xk be a set of vectors which generates Vn(R).
Orthogonalization:
Orthogonalization is a process by which we reduce a non-orthogonal set of vectors to an orthogonal set of vectors.
Gram-Schmidt orthogonalization:
Let X1, X2, ..., Xm be a set of non-orthogonal vectors. We shall reduce this set of vectors to an orthogonal set Y1, Y2, ..., Ym by the Gram-Schmidt process.
Let Y1 = X1 and Y2 = X2 − aY1.
Since Y1 and Y2 are orthogonal,
Y1 · Y2 = 0
⟹ Y1 · (X2 − aY1) = 0
⟹ Y1 · X2 − a(Y1 · Y1) = 0
⟹ a = (Y1 · X2)/(Y1 · Y1).
Therefore Y2 = X2 − [(Y1 · X2)/(Y1 · Y1)] Y1.
Similarly,
Y3 = X3 − [(Y2 · X3)/(Y2 · Y2)] Y2 − [(Y1 · X3)/(Y1 · Y1)] Y1,
Y4 = X4 − [(Y3 · X4)/(Y3 · Y3)] Y3 − [(Y2 · X4)/(Y2 · Y2)] Y2 − [(Y1 · X4)/(Y1 · Y1)] Y1.
Continuing the process until Ym is obtained,
Ym = Xm − [(Ym−1 · Xm)/(Ym−1 · Ym−1)] Ym−1 − ... − [(Y2 · Xm)/(Y2 · Y2)] Y2 − [(Y1 · Xm)/(Y1 · Y1)] Y1.
Hence Y1, Y2, ..., Ym form an orthogonal basis of the subspace spanned by X1, X2, ..., Xm.
The vectors Gi = Yi/‖Yi‖, i = 1, 2, ..., m, are normal and mutually orthogonal.
So the vectors G1, G2, ..., Gm form an orthonormal basis of that subspace.
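The recurrence above can be sketched in plain Python. The helper names `dot` and `gram_schmidt` are ours; the sample vectors are the ones used in the worked orthogonalization problem later in these notes:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Y_k = X_k - sum over previous j of (Y_j . X_k / Y_j . Y_j) Y_j
    ys = []
    for x in vectors:
        y = list(x)
        for prev in ys:
            c = dot(prev, x) / dot(prev, prev)
            y = [yi - c * pi for yi, pi in zip(y, prev)]
        ys.append(y)
    return ys

a1 = [8, 18, 17, 32]
a2 = [30, 22, 14, 27]
Y = gram_schmidt([a1, a2])
print([round(v, 2) for v in Y[1]])  # [21.83, 3.61, -3.37, -5.7]
```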
Problem:
Examine whether the vectors α1 = (8, 18, 17, 32), α2 = (30, 22, 14, 27), α3 = (10, 14, 48, 6) and α4 = (31, 49, 6, 8) are linearly independent or dependent.
Solution:
We have to examine whether the vectors α1, α2, α3 and α4 are independent or dependent. Let us consider the relation
k1α1 + k2α2 + k3α3 + k4α4 = 0
⟹ k1(8, 18, 17, 32) + k2(30, 22, 14, 27) + k3(10, 14, 48, 6) + k4(31, 49, 6, 8) = 0
⟹ 8k1 + 30k2 + 10k3 + 31k4 = 0
   18k1 + 22k2 + 14k3 + 49k4 = 0
   17k1 + 14k2 + 48k3 + 6k4 = 0
   32k1 + 27k2 + 6k3 + 8k4 = 0.
Reducing the coefficient matrix of this homogeneous system to triangular form by the elementary row operations R21, R31, R41, R32, R42 and R43, every pivot turns out to be non-zero, and back substitution gives k4 = 0, k3 = 0, k2 = 0, k1 = 0.
Since all the scalars are zero, the given vectors are linearly independent.
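The same conclusion can be checked by exact Gaussian elimination with rational arithmetic; the `rank` helper below is our own sketch, not part of the notes:

```python
from fractions import Fraction

def rank(rows):
    # rank by Gauss-Jordan elimination over exact Fractions
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return r

vectors = [[8, 18, 17, 32], [30, 22, 14, 27], [10, 14, 48, 6], [31, 49, 6, 8]]
print(rank(vectors))  # 4, so the only solution is k1 = k2 = k3 = k4 = 0
```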
Problem:
Let X1 = (−1, 0, 2, 1) and X2 = (2, 3, 1, 5). Find the lengths of X1, X2 and X1 − X2, find the angle between X1 and X2, and verify that ‖X1 + X2‖² + ‖X1 − X2‖² = 2‖X1‖² + 2‖X2‖².
Solution:
(i) ‖X1‖² = (−1)² + 0² + 2² + 1² = 6, so ‖X1‖ = √6.
(ii) ‖X2‖² = 2² + 3² + 1² + 5² = 39, so ‖X2‖ = √39.
(iii) X1 − X2 = (−3, −3, 1, −4), so ‖X1 − X2‖² = (−3)² + (−3)² + 1² + (−4)² = 35 and ‖X1 − X2‖ = √35.
(iv) X1 · X2 = (−1)(2) + (0)(3) + (2)(1) + (1)(5) = 5. The angle between X1 and X2 is given by
cos θ = (X1 · X2)/(‖X1‖ ‖X2‖) = 5/(√6 √39)
⟹ θ = cos⁻¹[5/(√6 √39)] ≈ 70°55′18″.
(v) Now, L.H.S. = ‖X1 + X2‖² + ‖X1 − X2‖².
X1 + X2 = (1, 3, 3, 6), so ‖X1 + X2‖² = 1² + 3² + 3² + 6² = 55, and ‖X1 − X2‖² = 35 as above.
L.H.S. = 55 + 35 = 90.
R.H.S. = 2‖X1‖² + 2‖X2‖² = 2(6) + 2(39) = 12 + 78 = 90.
So that ‖X1 + X2‖² + ‖X1 − X2‖² = 2‖X1‖² + 2‖X2‖²
(proved).
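A minimal numeric check of the computations above, using only the standard library (`dot` is our helper name):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X1 = [-1, 0, 2, 1]
X2 = [2, 3, 1, 5]
n1, n2 = dot(X1, X1), dot(X2, X2)                 # 6 and 39
theta = math.degrees(math.acos(dot(X1, X2) / math.sqrt(n1 * n2)))
plus = [a + b for a, b in zip(X1, X2)]
minus = [a - b for a, b in zip(X1, X2)]
lhs = dot(plus, plus) + dot(minus, minus)         # 55 + 35 = 90
rhs = 2 * n1 + 2 * n2                             # 12 + 78 = 90
print(n1, n2, round(theta, 2), lhs, rhs)          # theta is about 70.92 degrees (70 deg 55 min)
```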
Problem:
Four vectors are given as follows:
α1 = (8, 18, 17, 32)
α2 = (30, 22, 14, 27)
α3 = (10, 14, 48, 6)
α4 = (31, 49, 6, 8)
Construct a set of orthogonal vectors, find their lengths, and construct a set of orthonormal vectors.
Solution:
We are given the four vectors above. Set Y1 = α1 = (8, 18, 17, 32), with Y1 · Y1 = 1701.
By the Gram-Schmidt formulas,
Y2 = α2 − [(Y1 · α2)/(Y1 · Y1)] Y1 = (30, 22, 14, 27) − (1738/1701)(8, 18, 17, 32) = (21.83, 3.61, −3.37, −5.70),
and similarly Y3 = (1.28, −0.58, 34.98, −18.58).
Then
Y4 = α4 − [(Y3 · α4)/(Y3 · Y3)] Y3 − [(Y2 · α4)/(Y2 · Y2)] Y2 − [(Y1 · α4)/(Y1 · Y1)] Y1
= (31, 49, 6, 8) − (72.5/1570.79)(1.28, −0.58, 34.98, −18.58) − (787.8/531.96)(21.83, 3.61, −3.37, −5.70) − (1488/1701)(8, 18, 17, 32)
= (31, 49, 6, 8) − (0.06, −0.03, 1.61, −0.86) − (32.31, 5.34, −4.99, −8.44) − (7.00, 15.75, 14.87, 27.99)
= (−8.37, 27.94, −5.49, −10.69).
The vectors Y1, Y2, Y3, Y4 are mutually orthogonal; dividing each Yi by its length ‖Yi‖ gives the orthonormal set G1, G2, G3, G4.
Matrix:
A system of mn numbers arranged in the form of an ordered set of m rows, each row consisting of an ordered set of n numbers, enclosed in square brackets or parentheses or in double vertical lines, is called an m × n matrix. As for example, A = (aij); i = 1, 2, ..., m; j = 1, 2, ..., n.
Order of matrix:
The number of rows m and the number of columns n in a matrix together give the order m × n of the matrix.
Rectangular Matrix:
A matrix in which the number of rows is unequal to the number of columns is called a rectangular matrix. Thus A = (aij); i = 1, 2, ..., m; j = 1, 2, ..., n is called an m × n rectangular matrix if m ≠ n.
For example, A = [2 3 1; 2 1 0] of order 2 × 3, or B = [2 5; 1 3; 4 1] of order 3 × 2, are rectangular matrices.
Square Matrix:
A matrix in which the number of rows is equal to the number of columns is called a square matrix. Thus A = (aij); i, j = 1, 2, ..., n is called an n × n square matrix, or a square matrix of order n.
For example, A = [2 5 1; 3 4 5; 1 7 3] is a square matrix of order 3 × 3.
Row matrix:
If a matrix contains only one row then it is called a row matrix. Thus A = (a11, a12, ..., a1n) is called a row matrix or row vector.
Example: A = (5 8 9) is a row matrix.
Column matrix:
If a matrix contains only one column then it is called a column matrix. Thus A = (a11, a21, ..., am1)′ is called a column matrix or column vector.
Example: A = (4 5 6)′ is a column matrix.
Null Matrix:
A matrix whose elements are all equal to zero is called a null matrix.
As for example, A = [0 0 0; 0 0 0; 0 0 0] is a null matrix.
Diagonal Matrix:
In a square matrix the elements aii; i = 1, 2, ..., n are known as diagonal elements, and the line in which they lie is known as the principal diagonal. A square matrix in which all the elements except those in the principal diagonal are zero is called a diagonal matrix. Thus the matrix A = (aij); i, j = 1, 2, ..., n is a diagonal matrix if aij = 0 for i ≠ j. A diagonal matrix of order n with diagonal elements a11, a22, ..., ann is denoted by A = diag(a11, a22, ..., ann).
For example, A = diag(5, 3, 2) = [5 0 0; 0 3 0; 0 0 2] is a diagonal matrix.
Scalar Matrix:
A diagonal matrix whose diagonal elements are all equal is called a scalar matrix. Thus the square matrix A = (aij) is a scalar matrix if and only if, for some scalar k,
aij = k for i = j and aij = 0 for i ≠ j; i, j = 1, 2, ..., n.
Example: A = [5 0 0; 0 5 0; 0 0 5] is a scalar matrix.
Identity Matrix:
In a scalar matrix if we take k = 1, then A is called a unit matrix or an identity matrix of order n and is denoted by In.
Symmetric Matrix:
A square matrix A = (aij) is said to be a symmetric matrix if its (i, j)th element is the same as its (j, i)th element, i.e. aij = aji for all i, j. For a symmetric matrix A′ = A.
As for example, if A = [a h g; h b f; g f c] then A′ = [a h g; h b f; g f c] = A.
Triangular Matrix:
A square matrix whose elements below the principal diagonal are all zero is called an upper triangular matrix, and one whose elements above the principal diagonal are all zero is called a lower triangular matrix.
Example: A = [3 5 2; 0 9 3; 0 0 4] is an upper triangular matrix, and A = [2 0 0; 3 5 0; 1 2 7] is a lower triangular matrix.
Sub Matrix:
The matrix obtained by deleting some rows or columns or both of a matrix is said to be a sub-matrix of that matrix. As for example,
if A = [1 2 3 4; 5 6 7 8; 9 1 2 3] then a sub-matrix of A is A1 = [1 2 3; 5 6 7].
Comparable Matrices:
Two matrices A = (aij) and B = (bij) are said to be comparable if each has the same number of rows and columns as the other, i.e. if they have the same dimensions.
Equality of Matrices:
Two matrices A = (aij) and B = (bij) are said to be equal iff
(i) they are conformable, i.e. they are of the same dimensions, and
(ii) the elements in corresponding positions of the two matrices are the same, i.e. for each pair of subscripts i and j we have aij = bij.
Matrix Addition
Addition of Matrices:
Two matrices A = (aij) and B = (bij) are said to be conformable for addition if they are comparable, and then their sum A + B is defined as the matrix
C = A + B = (cij), where cij = aij + bij,
i.e. the sum of two matrices is obtained by adding their corresponding elements. Obviously A + B has the same dimensions as A or B.
Example:
If A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] and B = [b11 b12 b13; b21 b22 b23; b31 b32 b33], then
A + B = [a11+b11 a12+b12 a13+b13; a21+b21 a22+b22 a23+b23; a31+b31 a32+b32 a33+b33].
i) Matrix addition is commutative, i.e. for two matrices A = (aij) and B = (bij), A + B = B + A.
ii) Matrix addition is associative, i.e. for three matrices A = (aij), B = (bij) and C = (cij), (A + B) + C = A + (B + C).
iii) Matrix addition is distributive over scalar multiplication, i.e. for two matrices A = (aij) and B = (bij) and for a scalar k, k(A + B) = kA + kB.
iv) If A is an m × n matrix and 0 is a null matrix of the same dimensions, then A + 0 = 0 + A = A.
v) If A and X are conformable for addition, then the matrix equation A + X = 0 has a unique solution X = −A = (−aij).
Matrix addition is commutative, i.e. for two matrices A = (aij) and B = (bij), A + B = B + A.
Proof:
Let A = (aij) and B = (bij) be two comparable matrices. Then
A + B = (aij + bij) = (bij + aij) = B + A
⟹ A + B = B + A.
Matrix addition is associative, i.e. for three matrices A = (aij), B = (bij) and C = (cij), A + (B + C) = (A + B) + C.
Proof:
Let A = (aij), B = (bij) and C = (cij) be three comparable matrices of order m × n. Then the matrices B + C and A + B are also of the same dimensions m × n, so the sums A + (B + C) and (A + B) + C are defined and these matrices are comparable. Further,
(i, j)th element of A + (B + C) = aij + (bij + cij) = (aij + bij) + cij = (i, j)th element of (A + B) + C
⟹ A + (B + C) = (A + B) + C.
Matrix Multiplication
Matrix multiplication:
Two matrices A and B are said to be conformable for the product AB if the number of columns of A is equal to the number of rows of B.
Let A = (aij) be an m × n matrix and B = (bij) an n × p matrix, so that they are conformable for the product
C = AB = (cij),
an m × p matrix, where cij is the inner product of the i-th row of A with the j-th column of B, i.e. cij = Σ_{k=1}^{n} aik bkj.
Matrix multiplication is not, in general, commutative.
If the matrix product AB is defined, it is not necessary that the product BA is also defined. Let A be an m × n matrix and B an n × p matrix with p ≠ m; then the product AB is defined but the product BA is not defined, since the number of columns of B is not equal to the number of rows of A. Again, even if both the products AB and BA are defined, they need not be equal. For example, if
A = [a b; 0 0] and B = [b 0; −a 0],
then AB and BA are both defined, with
AB = [0 0; 0 0] and BA = [ab b²; −a² −ab],
so AB ≠ BA.
Matrix multiplication is associative: (AB)C = A(BC).
Proof:
Let A = (aij), B = (bij) and C = (cij) be m × n, n × p and p × q matrices respectively, so that AB = (uij) and BC = (vij) are m × p and n × q matrices respectively, where
uij = Σ_{k=1}^{n} aik bkj and vij = Σ_{k=1}^{p} bik ckj.
Let (AB)C = (wij) and A(BC) = (xij). Then (AB)C and A(BC) are each m × q matrices, where
wij = Σ_{r=1}^{p} uir crj = Σ_{r=1}^{p} (Σ_{k=1}^{n} aik bkr) crj = Σ_{k=1}^{n} aik (Σ_{r=1}^{p} bkr crj) = Σ_{k=1}^{n} aik vkj = xij.
Hence the proof.
Transpose of a Matrix
Transpose matrix:
The matrix obtained by interchanging the rows and columns of a given matrix is called its transpose. For any matrix A the transpose is denoted by A′ or Aᵀ.
If A = [5 7; 3 5; 7 8] then A′ = [5 3 7; 7 5 8].
If the matrices A and B are conformable for the product AB, then the matrices B′ and A′ are conformable for the product B′A′, and (AB)′ = B′A′.
Or,
the transpose of the product of two matrices is equal to the product of the transposes taken in reverse order.
Proof:
Let A be an m × n matrix and B an n × p matrix. Then AB is an m × p matrix, so that (AB)′ is a p × m matrix. Also B′ is p × n and A′ is n × m, so B′A′ is a p × m matrix. Thus the matrices (AB)′ and B′A′ are comparable.
Now, AB = (cij), where
cij = Σ_{k=1}^{n} aik bkj; i = 1, 2, ..., m; j = 1, 2, ..., p.
(i, j)th element of (AB)′ = cji
= Σ_{k=1}^{n} ajk bki
= Σ_{k=1}^{n} bki ajk
= Σ_{k=1}^{n} b′ik a′kj
= (i, j)th element of B′A′.
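The identity (AB)′ = B′A′ can be checked numerically with a short sketch; `matmul` and `transpose` are our own helper names:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]      # 2 x 3
B = [[7, 8], [9, 1], [2, 3]]    # 3 x 2
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```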
(A′)′ = A.
Proof:
By definition A′ = (a′ij), where a′ij = aji.
Now (A′)′ = (a′ji) = (aij) = A
⟹ (A′)′ = A.
(A + B)′ = A′ + B′.
Proof:
(A + B)′ = (aij + bij)′ = (aji + bji) = (aji) + (bji) = A′ + B′.
For any square matrix A, A + A′ is a symmetric matrix and A − A′ is a skew-symmetric matrix.
Proof:
Let A be a square matrix and A′ = (aji) its transpose. Then we have
(A + A′)′ = A′ + (A′)′ = A′ + A = A + A′,
so that A + A′ is a symmetric matrix.
(A − A′)′ = A′ − (A′)′ = A′ − A = −(A − A′),
so that A − A′ is a skew-symmetric matrix.
Hence the proof.
Every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.
Proof:
Let A be a square matrix and A′ its transpose. Then we have
A = ½(A + A′) + ½(A − A′) = B + C, where B = ½(A + A′) and C = ½(A − A′).
Now, B′ = [½(A + A′)]′ = ½(A′ + A) = B,
and C′ = [½(A − A′)]′ = ½(A′ − A) = −½(A − A′) = −C.
Thus B is a symmetric matrix and C is a skew-symmetric matrix. Moreover, if A = P + Q with P symmetric and Q skew-symmetric, then A′ = P − Q, so that P = ½(A + A′) = B and Q = ½(A − A′) = C. So that every square matrix can
be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.
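The decomposition above can be sketched directly; `sym_skew_parts` and `transpose` are our own helper names:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def sym_skew_parts(A):
    # B = (A + A')/2 is symmetric, C = (A - A')/2 is skew-symmetric, A = B + C
    At = transpose(A)
    n = len(A)
    B = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
    C = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]
    return B, C

A = [[1, 7, 3], [4, 5, 6], [2, 8, 9]]
B, C = sym_skew_parts(A)
print(B == transpose(B))                                  # True: B is symmetric
print(C == [[-x for x in row] for row in transpose(C)])   # True: C is skew-symmetric
```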
If A is a skew-symmetric matrix, then A² is a symmetric matrix.
Proof:
Let A be a skew-symmetric matrix. Then we have A′ = −A.
(A²)′ = (AA)′ = A′A′ = (−A)(−A) = A².
Therefore A² is a symmetric matrix.
Again, for any square matrix A,
(AA′)′ = (A′)′A′ = AA′ and (A′A)′ = A′(A′)′ = A′A,
so AA′ and A′A are both symmetric matrices.
If A and B are both skew-symmetric matrices of the same order such that AB = BA, then AB is symmetric.
Proof:
If A and B are both skew-symmetric matrices, then A′ = −A and B′ = −B.
Given that AB = BA,
(AB)′ = B′A′ = (−B)(−A) = BA = AB
⟹ (AB)′ = AB.
Thus AB is a symmetric matrix.
If A and B are two symmetric matrices of the same order, then a necessary and sufficient condition for the matrix AB to be symmetric is that AB = BA.
Proof:
Since A and B are symmetric matrices, A′ = A and B′ = B.
Necessary condition: If AB is a symmetric matrix, then
(AB)′ = AB
⟹ B′A′ = AB
⟹ BA = AB
⟹ AB = BA.
Sufficient condition: Given that AB = BA,
(AB)′ = B′A′ = BA = AB
⟹ AB is symmetric.
Orthogonal matrix:
A square matrix A is said to be orthogonal if AA′ = A′A = In.
If A and B are orthogonal matrices, each of order n, then the matrices AB and BA are also orthogonal.
Proof:
Since A and B are n-rowed orthogonal matrices, we have
AA′ = A′A = In and BB′ = B′B = In.
The matrix product AB is also a square matrix of order n, and we have
(AB)′(AB) = B′A′AB
= B′(A′A)B
= B′InB
= B′B
= In.
Thus AB is an orthogonal matrix of order n.
Similarly,
(BA)′(BA) = A′B′BA
= A′(B′B)A
= A′InA
= A′A
= In.
Hence BA is an orthogonal matrix of order n.
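A numeric sketch of this result, using plane rotation matrices (which are orthogonal); all helper names here are ours:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_orthogonal(A, tol=1e-9):
    # check A A' = I within a floating-point tolerance
    n = len(A)
    P = matmul(A, transpose(A))
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

def rotation(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

A, B = rotation(0.3), rotation(1.1)
print(is_orthogonal(A), is_orthogonal(B), is_orthogonal(matmul(A, B)))  # True True True
```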
If A is an orthogonal matrix, then A⁻¹ is also orthogonal.
Proof:
If A is orthogonal, we have
AA′ = A′A = In, where In is the identity matrix.
⟹ (AA′)⁻¹ = (A′A)⁻¹ = In⁻¹ = In
⟹ (A′)⁻¹A⁻¹ = A⁻¹(A′)⁻¹ = In
⟹ (A⁻¹)′A⁻¹ = A⁻¹(A⁻¹)′ = In [since (A′)⁻¹ = (A⁻¹)′].
Hence A⁻¹ is orthogonal. So that the inverse of an orthogonal matrix is also orthogonal.
The transpose of an orthogonal matrix is also orthogonal.
Proof:
If A is orthogonal, we have
AA′ = A′A = In
⟹ (A′)′A′ = A′(A′)′ = In.
Hence A′ is orthogonal. So that the transpose of an orthogonal matrix is also orthogonal.
For an orthogonal matrix, the transpose is equal to the inverse.
Proof:
If A is orthogonal, we have
AA′ = A′A = In
⟹ A⁻¹(AA′) = A⁻¹In
⟹ (A⁻¹A)A′ = A⁻¹
⟹ A′ = A⁻¹.
So that for an orthogonal matrix its transpose and its inverse are equal.
The determinant of an orthogonal matrix is ±1.
Proof:
If A is orthogonal, we have
AA′ = In, where In is the identity matrix.
⟹ |AA′| = |In| = 1
⟹ |A| |A′| = 1
⟹ |A|² = 1 [since |A′| = |A|]
⟹ |A| = ±1.
If |A| = 1 then A is called a proper orthogonal matrix, and if |A| = −1 then A is called an improper orthogonal matrix.
An orthogonal matrix is non-singular.
Proof:
If A is orthogonal, then as above
|A|² = 1
⟹ |A| = ±1 ≠ 0,
i.e. A is non-singular.
Determinant
Determinant:
The determinant of a square matrix A of order n is a real-valued function of the elements of the matrix, given by
|A| = |a11 a12 ... a1n; a21 a22 ... a2n; ...; an1 an2 ... ann| = Σ (±) a1i a2j ... anp,
where the summation is taken over the n! permutations (i, j, ..., p) of the column suffixes, with a '+' sign given to a term of even permutation and a '−' sign given to a term of odd permutation.
Properties of determinants:
1. If all the elements of a determinant are zero, then the value of the determinant is zero.
Example: |0 0 0; 0 0 0; 0 0 0| = 0.
2. The determinant of a transpose matrix is equal to the determinant of the original matrix, i.e. |A′| = |A|.
Example: |a1 b1 c1; a2 b2 c2; a3 b3 c3| = |a1 a2 a3; b1 b2 b3; c1 c2 c3|.
3. If one row or column of a determinant is zero, then the value of the determinant is zero.
Example: |a1 b1 c1; 0 0 0; a2 b2 c2| = 0.
4. If two rows or columns are interchanged, then the determinant changes its sign without changing its numerical value.
Example: |a1 b1 c1; a2 b2 c2; a3 b3 c3| = −|b1 a1 c1; b2 a2 c2; b3 a3 c3|.
5. If two rows or columns are equal, then the value of the determinant is zero.
Example: |a1 b1 a1; a2 b2 a2; a3 b3 a3| = 0.
6. If each element of any row or column of a determinant is multiplied by a constant, then the value of the determinant is multiplied by the same constant.
Example: |ma1 ma2 ma3; b1 b2 b3; c1 c2 c3| = m |a1 a2 a3; b1 b2 b3; c1 c2 c3|.
7. If each element of a determinant is multiplied by a constant C, then the value of the determinant is multiplied by Cⁿ, where n is the order of the determinant.
Example: |ma1 ma2 ma3; mb1 mb2 mb3; mc1 mc2 mc3| = m³ |a1 a2 a3; b1 b2 b3; c1 c2 c3|.
8. If each element of a row or column of a determinant can be expressed as a sum of two or more numbers, then the determinant can be expressed as a sum of two or more determinants.
Example: |a1+a1′ b1+b1′ c1+c1′; a2 b2 c2; a3 b3 c3| = |a1 b1 c1; a2 b2 c2; a3 b3 c3| + |a1′ b1′ c1′; a2 b2 c2; a3 b3 c3|.
9. If each element of any row or column of a determinant is multiplied by k and added to another row or column, then the value of the determinant is unchanged.
Example: |a1 b1 c1; a2 b2 c2; a3 b3 c3| = |a1+ka2 b1+kb2 c1+kc2; a2 b2 c2; a3 b3 c3|.
10. The determinant of a diagonal matrix is equal to the product of the diagonal elements, i.e. |A| = a11 a22 a33 ... ann.
11. The determinant of a triangular matrix is equal to the product of the diagonal elements, i.e. |A| = a11 a22 a33 ... ann.
Differences between a determinant and a matrix:
1. The numbers of rows and columns are equal in a determinant, but the numbers of rows and columns may or may not be equal in a matrix.
2. A determinant has a definite value, but a matrix has no value; it is merely an arrangement of elements in rows and columns.
3. If two rows or columns are identical, then the determinant vanishes, but identical rows or columns may occur in a matrix.
4. Rows and columns can be interchanged in a determinant without changing its value, but the rows and columns of a matrix cannot be interchanged.
5. If a determinant is multiplied by a constant, then all the elements of one row or column are multiplied by that constant; but if a matrix is multiplied by a constant, then all the elements of the matrix are multiplied by it.
6. The product of two determinants does not change the order of the determinants, but the product of two matrices may change the order of the matrix.
7. Multiplication of two determinants is commutative: |A| |B| = |B| |A|. But matrix multiplication is not commutative, since AB ≠ BA in general.
Problem: Show that the n × n determinant
A = |x a a ... a; a x a ... a; a a x ... a; ...; a a a ... x| = [x + (n−1)a] (x − a)ⁿ⁻¹.
Solution: Applying C1 → C1 + C2 + ... + Cn, every element of the first column becomes x + (n−1)a, so
A = [x + (n−1)a] |1 a a ... a; 1 x a ... a; 1 a x ... a; ...; 1 a a ... x|.
Applying R2 → R2 − R1, R3 → R3 − R1, ..., Rn → Rn − R1,
A = [x + (n−1)a] |1 a a ... a; 0 x−a 0 ... 0; 0 0 x−a ... 0; ...; 0 0 0 ... x−a|.
The last determinant is triangular, so
A = [x + (n−1)a] (x − a)ⁿ⁻¹.
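The closed form above can be checked numerically for a small case; the cofactor-expansion `det` helper is our own sketch, adequate for small n:

```python
from fractions import Fraction

def det(M):
    # cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

n, x, a = 4, Fraction(5), Fraction(2)
M = [[x if i == j else a for j in range(n)] for i in range(n)]
print(det(M), (x + (n - 1) * a) * (x - a) ** (n - 1))  # both 297
```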
Problem: Show that
A = |1+a1 1 1 ... 1; 1 1+a2 1 ... 1; ...; 1 1 1 ... 1+an| = a1 a2 ... an (1 + 1/a1 + 1/a2 + ... + 1/an).
Solution: Taking the factor aj out of the j-th column for j = 1, 2, ..., n,
A = a1 a2 ... an |1/a1 + 1, 1/a2, ..., 1/an; 1/a1, 1/a2 + 1, ..., 1/an; ...; 1/a1, 1/a2, ..., 1/an + 1|.
Applying C1 → C1 + C2 + ... + Cn, every element of the first column becomes 1 + 1/a1 + 1/a2 + ... + 1/an, so
A = a1 a2 ... an (1 + 1/a1 + 1/a2 + ... + 1/an) |1, 1/a2, ..., 1/an; 1, 1/a2 + 1, ..., 1/an; ...; 1, 1/a2, ..., 1/an + 1|.
Applying R2 → R2 − R1, R3 → R3 − R1, ..., Rn → Rn − R1 gives a triangular determinant with 1's on the diagonal, so
A = a1 a2 ... an (1 + 1/a1 + 1/a2 + ... + 1/an).
Problem: Show that
A = |1+a1 a2 a3 ... an; a1 1+a2 a3 ... an; a1 a2 1+a3 ... an; ...; a1 a2 a3 ... 1+an| = 1 + a1 + a2 + ... + an.
Solution: Applying C1 → C1 + C2 + ... + Cn, every element of the first column becomes 1 + a1 + a2 + ... + an, so
A = (1 + a1 + a2 + ... + an) |1 a2 a3 ... an; 1 1+a2 a3 ... an; 1 a2 1+a3 ... an; ...; 1 a2 a3 ... 1+an|.
Applying R2 → R2 − R1, R3 → R3 − R1, ..., Rn → Rn − R1,
A = (1 + a1 + a2 + ... + an) |1 a2 a3 ... an; 0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1|
= 1 + a1 + a2 + ... + an.
Similarly,
A3 = (d3 − d1)(d3 − d2)(d2 − d1),
A4 = (d4 − d1)(d4 − d2)(d4 − d3)(d3 − d1)(d3 − d2)(d2 − d1),
..........................................................
and in general
An = (dn − d1)(dn − d2) ... (dn − dn−1) (dn−1 − d1)(dn−1 − d2) ... (dn−1 − dn−2) ... (d2 − d1)
= Π_{1 ≤ j < i ≤ n} (di − dj).
Problem: Evaluate
A = |1+a 1 1 1; 1 1+b 1 1; 1 1 1+c 1; 1 1 1 1+d|.
Solution: Taking the factors a, b, c, d out of the four columns respectively,
A = abcd |1/a + 1, 1/b, 1/c, 1/d; 1/a, 1/b + 1, 1/c, 1/d; 1/a, 1/b, 1/c + 1, 1/d; 1/a, 1/b, 1/c, 1/d + 1|.
Applying C1 → C1 + C2 + C3 + C4, every element of the first column becomes 1 + 1/a + 1/b + 1/c + 1/d, so
A = abcd (1 + 1/a + 1/b + 1/c + 1/d) |1, 1/b, 1/c, 1/d; 1, 1/b + 1, 1/c, 1/d; 1, 1/b, 1/c + 1, 1/d; 1, 1/b, 1/c, 1/d + 1|.
Applying R2 → R2 − R1, R3 → R3 − R1, R4 → R4 − R1,
A = abcd (1 + 1/a + 1/b + 1/c + 1/d) |1, 1/b, 1/c, 1/d; 0 1 0 0; 0 0 1 0; 0 0 0 1|
= abcd (1 + 1/a + 1/b + 1/c + 1/d).
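The 4 × 4 result can be checked with exact rational arithmetic; the cofactor-expansion `det` helper and the sample values a = 2, b = 3, c = 4, d = 5 are our own choices:

```python
from fractions import Fraction

def det(M):
    # cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

diag = [Fraction(2), Fraction(3), Fraction(4), Fraction(5)]   # a, b, c, d
M = [[1 + (diag[i] if i == j else 0) for j in range(4)] for i in range(4)]
formula = diag[0] * diag[1] * diag[2] * diag[3] * (1 + sum(1 / x for x in diag))
print(det(M), formula)  # both 274
```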
Adjoint & Inverse of a Matrix
Minor of a Matrix:
The determinant of every square sub-matrix is called a minor of the matrix. If Mij is the (n−1) × (n−1) sub-matrix of the matrix A = (aij) obtained by removing the i-th row and j-th column, then the determinant |Mij| is defined as the minor of the element aij in the determinant |A| of order n.
Cofactor:
The signed minor Cij = (−1)^(i+j) |Mij| is called the cofactor of the element aij.
Adjoint of a Matrix:
The transpose of the cofactor matrix (Cij) is called the adjoint of the matrix A and is generally denoted by adj A.
For example, if the cofactor matrix of a matrix A is
C = [2 21 18; 6 7 6; 4 8 4], then adj A = C′ = [2 6 4; 21 7 8; 18 6 4].
A · adj A = adj A · A = |A| In.
Proof:
We know that adj A = (Ckj)′, where Ckj is the cofactor of akj in |A|. The sum of the products of the elements of a row with the cofactors of the same row is |A|, while the sum of the products of the elements of a row with the cofactors of another row is zero. Hence
A · adj A = [|A| 0 ... 0; 0 |A| ... 0; ...; 0 0 ... |A|] = |A| [1 0 ... 0; 0 1 ... 0; ...; 0 0 ... 1] = |A| In.
Similarly we can prove that adj A · A = |A| In.
(Proved)
|adj A| = |A|ⁿ⁻¹.
Proof:
We know that |AB| = |A| · |B|. From A · adj A = |A| In,
|A · adj A| = |A| · |adj A| = ||A| In| = |A|ⁿ
⟹ |adj A| = |A|ⁿ⁻¹ (for |A| ≠ 0).
(Proved)
adj (AB) = adj B · adj A.
Proof:
We know that A · adj A = |A| I, so that AB · adj (AB) = |AB| I. Also
AB · (adj B · adj A) = A (B · adj B) adj A = A |B| I adj A = |B| (A · adj A) = |B| |A| I = |AB| I.
Comparing the two relations (AB being non-singular), adj (AB) = adj B · adj A.
Inverse of a matrix:
If for a square matrix A there is a square matrix B such that AB = BA = I, then B is called the inverse matrix or reciprocal matrix of A and is denoted by A⁻¹, so that B = A⁻¹. Thus for an inverse matrix we have A · A⁻¹ = A⁻¹ · A = I, where I is the unit or identity matrix.
5. The inverse of a transposed matrix is equal to the transpose of the inverse matrix: (A′)⁻¹ = (A⁻¹)′.
6. The inverse of the product of matrices is equal to the product of the inverses of the matrices in reverse order: (AB)⁻¹ = B⁻¹A⁻¹.
The inverse of a matrix, if it exists, is unique.
Proof:
Let us suppose that there are two inverse matrices B and C for a square matrix A. Then we have
AB = BA = I .............. (i)
and AC = CA = I .............. (ii)
Then B = BI = B(AC) = (BA)C = IC = C, so the inverse is unique.
The determinant of an inverse matrix is the reciprocal of the determinant of the original matrix.
Proof:
For an inverse matrix we have A · A⁻¹ = I
⟹ |A · A⁻¹| = |I|
⟹ |A| · |A⁻¹| = 1
⟹ |A⁻¹| = 1/|A|.
Thus the determinant of an inverse matrix is the reciprocal of the determinant of the original matrix.
(Proved)
A matrix which possesses an inverse is non-singular.
Proof:
For an inverse matrix we have A · A⁻¹ = I
⟹ |A · A⁻¹| = |I|
⟹ |A| · |A⁻¹| = 1
⟹ |A| ≠ 0.
So that an invertible matrix is a non-singular matrix.
(Proved)
(A⁻¹)⁻¹ = A.
Proof:
We have A⁻¹ · A = A · A⁻¹ = I, which shows that A is the inverse of the matrix A⁻¹. Hence
(A⁻¹)⁻¹ = A.
(Proved)
The inverse of a transposed matrix is equal to the transpose of the inverse matrix: (A′)⁻¹ = (A⁻¹)′.
Proof:
For an inverse matrix we have A⁻¹ · A = I
⟹ (A⁻¹ · A)′ = I′
⟹ A′ · (A⁻¹)′ = I.
Similarly, A · A⁻¹ = I gives (A⁻¹)′ · A′ = I.
⟹ (A′)⁻¹ = (A⁻¹)′.
(Proved)
The inverse of the product of matrices is equal to the product of the inverses of the matrices in reverse order: (AB)⁻¹ = B⁻¹A⁻¹.
Proof:
For two non-singular matrices A and B of the same order we have
(AB) · (B⁻¹A⁻¹) = A (B · B⁻¹) A⁻¹ = A · I · A⁻¹ = A · A⁻¹ = I,
and (B⁻¹A⁻¹) · (AB) = B⁻¹ (A⁻¹ · A) B = B⁻¹ · I · B = B⁻¹B = I.
So that B⁻¹A⁻¹ is the inverse matrix of AB.
(Proved)
If A = (aij) is a square matrix of order n, then A⁻¹ = adj A / |A|.
Proof:
We know that A · adj A = adj A · A = |A| In
⟹ A · (adj A / |A|) = (adj A / |A|) · A = In.
Since A · A⁻¹ = A⁻¹ · A = In, therefore
A⁻¹ = adj A / |A|.
(Proved)
A necessary and sufficient condition that a square matrix A has an inverse is that |A| ≠ 0.
Proof:
Necessary condition:
Let us suppose that the square matrix A has an inverse matrix A⁻¹. Then we have
A · A⁻¹ = I
⟹ |A · A⁻¹| = |I|
⟹ |A| · |A⁻¹| = 1
⟹ |A| ≠ 0,
i.e. A is non-singular.
Sufficient condition:
Let us suppose that |A| ≠ 0. Then we can form the matrix B = adj A / |A|.
Now, AB = A · adj A / |A| = |A| I / |A| = I. Similarly, BA = (adj A / |A|) · A = |A| I / |A| = I.
So that B is an inverse matrix of A.
(Proved)
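The formula A⁻¹ = adj A / |A| can be sketched with exact arithmetic; `det`, `adjugate` and `inverse` are our own helper names, and the sample matrix is the square-matrix example from earlier in these notes:

```python
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adjugate(M):
    n = len(M)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                   for k, row in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*cof)]  # transpose of the cofactor matrix

def inverse(M):
    d = det(M)
    return [[x / d for x in row] for row in adjugate(M)]

A = [[Fraction(2), Fraction(5), Fraction(1)],
     [Fraction(3), Fraction(4), Fraction(5)],
     [Fraction(1), Fraction(7), Fraction(3)]]
Ainv = inverse(A)
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True: A * Ainv = I
```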
Co-factor method:
If A = (aij) is a square matrix of order n, adj A = (Aji) is the adjoint matrix of A, and |A| is the determinant of A, then in this method the inverse matrix is computed as
A⁻¹ = adj A / |A| = (1/|A|) [A11 A21 ... An1; A12 A22 ... An2; ...; A1n A2n ... Ann],
where Aij is the cofactor of aij in |A|.
Sweep-out method:
This method consists of placing a unit matrix of the same order on the right-hand side of the original matrix and then converting the original matrix into a unit matrix by elementary row or column operations. In this process the unit matrix on the right-hand side is converted into a matrix which is the inverse matrix of the original matrix. Thus we have
[A | I] → [I | A⁻¹].
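The sweep-out method can be sketched directly; `sweep_out_inverse` is our own helper name, and the code assumes the input matrix is non-singular:

```python
from fractions import Fraction

def sweep_out_inverse(A):
    # [A | I] -> [I | A^-1] by elementary row operations
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)  # assumes A non-singular
        M[c], M[piv] = M[piv], M[c]
        p = M[c][c]
        M[c] = [x / p for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

A = [[2, 1], [5, 3]]
Ainv = sweep_out_inverse(A)
print(Ainv)  # the inverse is [[3, -1], [-5, 2]]
```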
Partitioned method:
Let us suppose that a square matrix A0 is written in partitioned form as
A0 = [A B; C D],
where A, B, C and D are sub-matrices and A and D are non-singular. Let the inverse be written in partitioned form as
A0⁻¹ = [X Y; Z W].
Now A0 A0⁻¹ = I gives
[A B; C D] [X Y; Z W] = [I 0; 0 I],
i.e.
AX + BZ = I ............ (i)
AY + BW = 0 ............ (ii)
CX + DZ = 0 ............ (iii)
CY + DW = I ............ (iv)
From (iii) we get DZ = −CX ⟹ Z = −D⁻¹CX.
Putting this value of Z in (i) we get
AX − BD⁻¹CX = I ⟹ (A − BD⁻¹C)X = I ⟹ X = (A − BD⁻¹C)⁻¹.
From (ii) we get AY = −BW ⟹ Y = −A⁻¹BW.
Putting this value of Y in (iv) we get
−CA⁻¹BW + DW = I ⟹ (D − CA⁻¹B)W = I ⟹ W = (D − CA⁻¹B)⁻¹.
In particular, if B = 0 and C = 0, then
A0 = [A 0; 0 D] and A0⁻¹ = [A⁻¹ 0; 0 D⁻¹].
Rank of a Matrix
Rank of a matrix:
A non-zero matrix A is said to have rank r if at least one of its minors of order r is different from zero, while every minor of order r + 1, if any, is zero. Equivalently, the rank of a matrix is the maximum number of linearly independent rows or columns in the matrix.
Example: By the elementary row operations R32, R42 and R43, a given 4 × 4 matrix reduces to
[2 3 1 1; 0 5 3 7; 0 0 33/5 22/5; 0 0 0 0],
which has three non-zero rows. So the rank of the given matrix is 3.
Properties of Rank
1. The rank of a matrix is equal to the rank of its transpose.
Proof:
Let A = (aij) be any m × n matrix.
Then the transpose matrix A′ = (aji) is an n × m matrix.
Let the rank of A be r and let B be an r × r sub-matrix of A such that |B| ≠ 0.
Also we know that the value of a determinant remains unaltered if its rows and columns are interchanged,
i.e. |B′| = |B| ≠ 0, where B′ is evidently an r × r sub-matrix of A′.
∴ The rank of A′ is at least r: ρ(A′) ≥ r.
Again, let C′ be any (r + 1) × (r + 1) sub-matrix of A′. Then C is an (r + 1) × (r + 1) sub-matrix of A, so by the definition of rank we must have
|C| = 0, and hence |C′| = |C| = 0.
Therefore we conclude that there cannot be any (r + 1) × (r + 1) sub-matrix of A′ with non-zero determinant.
∴ The rank of A′ cannot be greater than r.
The rank of A′ is r, which is also the rank of A.
The row rank and the column rank of a matrix are equal.
Proof:
Let A be an arbitrary m × n matrix, A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn].
Let R1, R2, ..., Rm denote its rows:
R1 = (a11, a12, ..., a1n), R2 = (a21, a22, ..., a2n), ..., Rm = (am1, am2, ..., amn).
Suppose the row rank is r and the following r vectors form a basis for the row space:
V1 = (b11, b12, ..., b1n), V2 = (b21, b22, ..., b2n), ..., Vr = (br1, br2, ..., brn).
Then each of the row vectors is a linear combination of the vectors V1, V2, ..., Vr:
R1 = λ11V1 + λ12V2 + ... + λ1rVr,
R2 = λ21V1 + λ22V2 + ... + λ2rVr,
...
Rm = λm1V1 + λm2V2 + ... + λmrVr,
where the λij are scalars.
Setting the i-th components of each of the above vector equations equal to each other, we obtain the following system of equations:
a1i = λ11b1i + λ12b2i + ... + λ1rbri,
a2i = λ21b1i + λ22b2i + ... + λ2rbri,
...
ami = λm1b1i + λm2b2i + ... + λmrbri.
Thus for i = 1, 2, ..., n,
(a1i, a2i, ..., ami)′ = b1i(λ11, λ21, ..., λm1)′ + b2i(λ12, λ22, ..., λm2)′ + ... + bri(λ1r, λ2r, ..., λmr)′.
So each column of the matrix A is a linear combination of the r vectors (λ11, λ21, ..., λm1)′, ..., (λ1r, λ2r, ..., λmr)′.
Thus the column space of the matrix A has dimension at most r,
i.e. column rank ≤ r, i.e. column rank ≤ row rank.
Applying the same argument to A′ gives row rank ≤ column rank. Hence the row rank and the column rank are equal.
The rank of the product of two matrices cannot exceed the rank of either of the two matrices:
ρ(AB) ≤ ρ(A) and ρ(AB) ≤ ρ(B).
Proof:
Let β1, β2, ..., βn be the rows of the matrix B. Then the i-th row of AB is ai1β1 + ai2β2 + ... + ainβn.
This shows that the rows of the matrix AB are linear combinations of the rows β1, β2, ..., βn of the matrix B. So the number of linearly independent rows of AB cannot exceed the number of linearly independent rows of B.
Therefore ρ(AB) ≤ ρ(B).
Similarly, it can be shown that the columns of the matrix AB are linear combinations of the columns of the matrix A. So the number of linearly independent columns of AB cannot exceed the number of linearly independent columns of A.
Therefore ρ(AB) ≤ ρ(A).
Theorem (Sylvester's inequality): If A is an n×n matrix and B is an n×p matrix, then ρ(AB) ≥ ρ(A) + ρ(B) − n.
Proof:
Let ρ(A) = r. There exist two non-singular matrices P and Q such that

PAQ = [ Ir  0 ]      i.e.   A = P⁻¹ [ Ir  0 ] Q⁻¹
      [ 0   0 ]                     [ 0   0 ]

Let C = P⁻¹ [ 0  0      ] Q⁻¹. Then A + C = P⁻¹ In Q⁻¹,
            [ 0  In−r   ]

so ρ(A) = r, ρ(C) = n − r, and A + C is non-singular.
Since A + C is non-singular we have ρ((A + C)B) = ρ(B), i.e.
ρ(B) = ρ(AB + CB) ≤ ρ(AB) + ρ(CB) ............ (i)
Also we have ρ(CB) ≤ ρ(C) = n − r. Substituting in (i),
ρ(B) ≤ ρ(AB) + n − r, i.e. ρ(AB) ≥ ρ(A) + ρ(B) − n.
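The lower bound ρ(A) + ρ(B) − n can be checked numerically; a small sketch reusing the diagonal matrices above (an illustrative example, not from the notes), where the bound is attained with equality:

```python
import numpy as np

n = 3
A = np.diag([1., 1., 0.])   # rank 2
B = np.diag([0., 1., 1.])   # rank 2
lower = np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B) - n   # 2 + 2 - 3 = 1
r_AB = np.linalg.matrix_rank(A @ B)                               # diag(0,1,0) -> 1
```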
Trace of a matrix:
The sum of the diagonal elements of a square matrix is called the trace of that matrix.
Properties of trace:
2. tr(A + B) = tr(A) + tr(B)
Proof:
3. tr(AB) = tr(BA)
Proof:
223605 Linear Algebra
Professor Biplab Bhattacharjee
Dept Of Statistics
Govt. B M College, Barishal
Let A = (aij) and B = (bij) be n×n matrices. The i-th diagonal element of AB is Σk aik bki, and the i-th diagonal element of BA is Σk bik aki. Hence
tr(AB) = Σi Σk aik bki = Σk Σi bki aik = tr(BA).
4. tr(In) = n
Proof:
We have In = diag(1, 1, ....., 1), so tr(In) = 1 + 1 + ..... + 1 = n.
5. tr(A′) = tr(A)
Proof:
6. tr(ABC) = tr(BCA) = tr(CAB)
Proof:
We know that tr(AB) = tr(BA). Using this fact we have,
tr(ABC) = tr((AB)·C) = tr(C·(AB)) = tr(CAB)
Also we have, tr(ABC) = tr(A·(BC)) = tr((BC)·A) = tr(BCA)
∴ tr(ABC) = tr(BCA) = tr(CAB)
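The cyclic property is easy to verify numerically; a minimal sketch with random matrices (illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# The trace is invariant under cyclic permutations of the factors
# (but NOT under arbitrary permutations such as ACB).
t1 = np.trace(A @ B @ C)
t2 = np.trace(B @ C @ A)
t3 = np.trace(C @ A @ B)
```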
7. If C is an orthogonal matrix then tr(C′AC) = tr(A)
Proof:
If C is an orthogonal matrix then C′C = CC′ = I.
Now, tr(C′AC) = tr(ACC′) = tr(AI) = tr(A)
∴ tr(C′AC) = tr(A), where C is an orthogonal matrix.
8. If P is a non-singular matrix then tr(P⁻¹AP) = tr(A)
Proof:
If P is a non-singular matrix then PP⁻¹ = P⁻¹P = I.
Now, tr(P⁻¹AP) = tr(APP⁻¹) = tr(AI) = tr(A)
∴ tr(P⁻¹AP) = tr(A), where P is a non-singular matrix.
Idempotent Matrix:
Any square matrix A is called an idempotent matrix if A2 A .
Example : Let

A = [ 1/2  1/2 ]
    [ 1/2  1/2 ]

Now, A² = [ 1/2  1/2 ] [ 1/2  1/2 ] = [ 1/4+1/4  1/4+1/4 ] = [ 1/2  1/2 ] = A
          [ 1/2  1/2 ] [ 1/2  1/2 ]   [ 1/4+1/4  1/4+1/4 ]   [ 1/2  1/2 ]

So that A is an idempotent matrix.
Now we have, A² = A
⇒ |A·A| = |A|
⇒ |A||A| = |A|
⇒ |A|(|A| − 1) = 0
⇒ |A| = 0 or |A| = 1
Thus the determinant of an idempotent matrix is either zero or unity.
Theorem: The rank of an idempotent matrix is equal to its trace.
Proof:
Let us suppose that A is an idempotent matrix with rank r. Then there exists an orthogonal matrix P such that P′AP is a diagonal matrix with r diagonal elements equal to 1 and the remaining n − r diagonal elements equal to 0.
Now, tr(P′AP) = 1 + 1 + ....... + 1 + 0 + 0 + ...... + 0 = r
But tr(P′AP) = tr(APP′) = tr(AI) = tr(A)
∴ tr(A) = r = ρ(A)
Thus the rank of an idempotent matrix is equal to its trace.
Theorem: If A is idempotent and P is orthogonal, then P′AP is idempotent.
Proof:
Since A is an idempotent and P is an orthogonal matrix, we have A² = A and P′P = PP′ = I.
Now,
(P′AP)² = (P′AP)(P′AP)
        = P′A(PP′)AP
        = P′A·AP
        = P′A²P
        = P′AP
∴ (P′AP)² = P′AP, so P′AP is idempotent.
Example :
Let X be a 4×2 matrix of full column rank and let B = I₄ − X(X′X)⁻¹X′. (In the original notes B was also displayed numerically as a 4×4 matrix; those entries are not recoverable here.)
Again, B² = (I₄ − X(X′X)⁻¹X′)(I₄ − X(X′X)⁻¹X′)
          = I₄ − 2X(X′X)⁻¹X′ + X(X′X)⁻¹(X′X)(X′X)⁻¹X′
          = I₄ − 2X(X′X)⁻¹X′ + X(X′X)⁻¹X′
          = I₄ − X(X′X)⁻¹X′ = B
So that B is an idempotent matrix.
Now, ρ(B) = tr(B) = tr(I₄ − X(X′X)⁻¹X′) = tr(I₄) − tr(X(X′X)⁻¹X′) = 4 − tr((X′X)⁻¹X′X) = 4 − tr(I₂) = 4 − 2 = 2
∴ Rank of B is 2.
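This "rank = trace" computation can be reproduced numerically; a sketch with an illustrative 4×2 design matrix (the values are assumptions, since the original numerical matrix was lost):

```python
import numpy as np

# A 4x2 matrix of full column rank (illustrative values).
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.],
              [1., 3.]])

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection onto the column space of X
B = np.eye(4) - P                      # B = I - X(X'X)^{-1}X'

idempotent = np.allclose(B @ B, B)     # B^2 = B
rank_B = np.linalg.matrix_rank(B)
trace_B = round(np.trace(B))           # = 4 - 2 = 2
```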
System of Equations
Consider a system of m equations in n variables. In matrix form,

[ a11  a12 ...  a1n ] [ x1 ]   [ b1 ]
[ a21  a22 ...  a2n ] [ x2 ] = [ b2 ]     i.e.  AX = B ............... (i)
[ .................. ] [ .. ]   [ .. ]
[ am1  am2 ...  amn ] [ xn ]   [ bm ]

The system of m equations in n variables AX = B is called a system of homogeneous equations if B = 0.
Difference between homogeneous and non-homogeneous systems of equations:
Homogeneous system: the matrix equation AX = B with B = 0, a system of m equations in n unknowns x1, x2, ....., xn. The trivial solution X = 0 always exists. A non-trivial solution exists only if A is a singular matrix, and the appearance of one non-trivial solution automatically implies the existence of an infinite number of non-trivial solutions.
Non-homogeneous system: the matrix equation AX = B with B ≠ 0, a system of m equations in n unknowns x1, x2, ....., xn. No trivial solution exists. A unique solution exists only if A is a non-singular matrix, and an infinite number of solutions exists if the system has two distinct solutions.
Consistent and inconsistent:
1. For a non-homogeneous system of equations:
Let us consider a system of m equations in n variables as
a11 x1 + a12 x2 + ........ + a1n xn = b1
a21 x1 + a22 x2 + ........ + a2n xn = b2
......
am1 x1 + am2 x2 + ........ + amn xn = bm
i.e., in matrix form, AX = B ............... (i)
If the coefficient matrix A and the augmented matrix [A B] have the same rank, then the system of equations AX = B is said to be consistent.
But if the coefficient matrix A and the augmented matrix [A B] have different ranks, then the system of equations AX = B is said to be inconsistent.
A consistent non-homogeneous system of equations has just one or infinitely many solutions, whereas an inconsistent system has no solution at all.
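The rank test for consistency is straightforward to apply numerically. A minimal sketch (the helper name `consistent` and the 2×2 example are illustrative assumptions):

```python
import numpy as np

def consistent(A, b):
    """AX = b is consistent iff rank(A) == rank([A | b])."""
    aug = np.hstack([A, b.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

A = np.array([[1., 1.],
              [2., 2.]])                  # singular: rank 1
ok = consistent(A, np.array([3., 6.]))    # b in the column space -> consistent
bad = consistent(A, np.array([3., 5.]))   # b outside it -> inconsistent
```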
Conditions under which a system of non-homogeneous equations has a unique solution, no solution, or infinitely many solutions
Theorem: The system AX = B is consistent if and only if the coefficient matrix A and the augmented matrix [A B] have the same rank.
Proof:
Writing α1, α2, ..., αn for the columns of A, the system AX = B can be stated as
x1 α1 + x2 α2 + ...... + xn αn = B ................ (1)
Let us suppose that the coefficient matrix A has rank r and that its first r columns α1, α2, ..., αr are linearly independent. Then each of the remaining columns is a linear combination of α1, α2, ..., αr.
Sufficient condition:
Let us suppose that ρ([A B]) = ρ(A) = r. Then the number of linearly independent columns of the augmented matrix [A B] is r, and these independent columns are α1, α2, ..., αr. So B is a linear combination of α1, α2, ..., αr. Then we can find some constants k1, k2, ......, kr, not all zero, such that
k1 α1 + k2 α2 + ...... + kr αr = B
⇒ k1 α1 + k2 α2 + ...... + kr αr + 0·αr+1 + 0·αr+2 + ...... + 0·αn = B ................ (2)
Comparing (1) and (2) we get x1 = k1, x2 = k2, ......, xr = kr, xr+1 = 0, ......, xn = 0.
Since the given system of equations has a solution as indicated above, the given system of equations is consistent.
Necessary Condition:
Let us suppose that the given system of equations is consistent, having a solution s1, s2, ....., sn. Then we have
s1 α1 + s2 α2 + ...... + sn αn = B ......... (3)
We have already assumed that the coefficient matrix A has rank r and that its first r columns α1, α2, ..., αr are linearly independent. So the other columns αr+1, αr+2, ..., αn are linearly dependent on them and can be expressed as linear combinations of α1, α2, ..., αr. Then (3) can be restated in terms of α1, α2, ..., αr as
s1* α1 + s2* α2 + ...... + sr* αr = B, for suitable scalars s1*, ..., sr*.
This shows that B is also a linear combination of α1, α2, ..., αr. Then the number of linearly independent columns of the augmented matrix [A B] is r, i.e. ρ([A B]) = ρ(A) = r.
Thus the rank of the coefficient matrix is equal to the rank of the augmented matrix.
A system of non-homogeneous equations has either no solution, a unique solution, or infinitely many solutions.
Proof :
No Solution:
If ρ(A) ≠ ρ([A B]) then there is no solution for the given system of equations. Because the columns of A are included in the augmented matrix [A B], ρ(A) ≠ ρ([A B]) means that the constant column B must be independent of the columns of A. Then there is no set of values for x1, x2, ..., xn satisfying the equation x1 α1 + x2 α2 + .... + xn αn = B. So the given system of equations has no solution in this case.
Unique Solution:
If ρ(A) = ρ([A B]) = r = n, where n is the number of variables, then the given system of equations has only one solution, i.e. a unique solution. Because we can select n independent equations in n variables, and the solution of these independent equations, obtained by the inverse matrix or otherwise, will be the unique solution. The remaining m − n dependent equations will be satisfied by the above unique solution.
Infinitely Many Solutions:
If ρ(A) = ρ([A B]) = r < n, then the system is consistent but only r of the variables can be determined in terms of the remaining n − r variables. Arbitrary values may be assigned to these n − r free variables, so the given system of equations has an infinite number of solutions.
Theorem: If the coefficient matrix A of the homogeneous system AX = 0 in n variables has rank r < n, then the system has n − r linearly independent solutions.
Proof :
Let the rank of A be r, so that the coefficient matrix A has r linearly independent columns. Let the first r columns from the left of the matrix A be linearly independent.
If ci is the i-th column of A, of order m×1 (i = 1, 2, ...., n), then we can write A = (c1 c2 ... cn), where c1, c2, ..., cr are linearly independent, and the system of equations AX = 0 can be written as
(c1 c2 ... cn)(x1, x2, ..., xn)′ = 0
⇒ c1 x1 + c2 x2 + ...... + cn xn = 0
Since ρ(A) = r, each of the columns cr+1, cr+2, ..., cn can be expressed as a linear combination of c1, c2, ..., cr as given below:
cr+j = α1j c1 + α2j c2 + ...... + αrj cr , j = 1, 2, ......, n − r.
Corresponding to each such relation we obtain a solution: for j = 1, 2, ......, n − r let
Xj = (−α1j, −α2j, ......, −αrj, 0, ......, 0, 1, 0, ......, 0)′
with 1 in the (r + j)-th position and zeros in the remaining positions, so that
AXj = −α1j c1 − α2j c2 − ...... − αrj cr + cr+j = 0.
This provides the n − r solutions X1, X2, ......, Xn−r of the equation AX = 0.
Now we are to show that these n − r solutions form a linearly independent set of vectors.
Let γ1 X1 + γ2 X2 + ...... + γn−r Xn−r = 0
Comparing the (r + 1)-th, (r + 2)-th, ........, n-th components of the vectors on the left we get,
γ1·1 + γ2·0 + ...... + γn−r·0 = 0
γ1·0 + γ2·1 + ...... + γn−r·0 = 0
.... .... .... .... .... .... ....
γ1·0 + γ2·0 + ...... + γn−r·1 = 0
⇒ γ1 = γ2 = ...... = γn−r = 0
Since all the γ's are zero, the n − r solutions are linearly independent.
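The dimension n − r of the solution space can be checked numerically via the SVD, whose trailing right-singular vectors span the null space (the matrix below is an illustrative assumption):

```python
import numpy as np

# Coefficient matrix with rank r = 2 in n = 4 variables.
A = np.array([[1., 2., -1., 1.],
              [2., 4., -2., 2.],    # = 2 * row 0 (dependent)
              [0., 1.,  1., 0.]])

n = A.shape[1]
r = np.linalg.matrix_rank(A)            # r = 2

# The last n - r rows of Vt form an orthonormal basis of {X : AX = 0}.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[r:]                     # shape (n - r, n) = (2, 4)
```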
A necessary and sufficient condition for a system of m homogeneous equations in n variables to have a non-trivial (non-zero) solution is that the rank of the coefficient matrix is less than n.
Proof :
Let us consider the system of equation of m homogeneous equation in n variables given by AX 0
or, (α1 α2 ... αn)(x1, x2, ..., xn)′ = 0, or α1 x1 + α2 x2 + .... + αn xn = 0 ......... (1)
where α1, α2, ..., αn are the columns of A.
Sufficient Condition :
Let us suppose that ρ(A) < n, so the columns of the matrix A are linearly dependent. Then we can find a set of constants c1, c2, ......, cn, not all zero, such that
c1 α1 + c2 α2 + ...... + cn αn = 0 .................... (2)
Comparing (1) and (2) we get x1 = c1, x2 = c2, .........., xn = cn.
So the given condition is sufficient for getting non-zero solutions.
Necessary Condition:
Let us suppose that the given system of equations has a non-zero solution δ1, δ2, ........, δn.
Then we have
δ1 α1 + δ2 α2 + .......... + δn αn = 0.
This shows that α1, α2, ..., αn are linearly dependent. Therefore the rank of the coefficient matrix is less than n.
The system of homogeneous equations AX = 0 has only the trivial (zero) solution if the rank of the coefficient matrix A is equal to the number of its columns.
Proof :
Let us consider the system of m homogeneous equations in n variables given by AX = 0, where the coefficient matrix is

    [ a11  a12 ...  a1n ]
A = [ a21  a22 ...  a2n ]
    [ .................. ]
    [ am1  am2 ...  amn ]

Let the rank of A be n, the number of columns or unknowns, and (discarding any redundant equations) take A to be square. Since A has full rank, A is a non-singular matrix and A⁻¹ exists. Hence we get,
AX = 0
⇒ A⁻¹AX = A⁻¹·0 = 0
⇒ (A⁻¹A)X = 0
⇒ In X = 0 ⇒ X = 0
⇒ x1 = 0, x2 = 0, ........., xn = 0
So the system has only the trivial solution if ρ(A) = n. Since A⁻¹ is unique, the trivial solution is the only solution of AX = 0.
Problem:
Test the consistency of the following equations and solve them if possible.
x + 2y − z = 3
3x − y + 2z = 1
2x − 2y + 3z = 2
x − y + z = −1
Solution:
For the given system of equations the augmented matrix is

        [ 1   2  −1    3 ]     [ 1   2  −1    3 ]  R21(−3)
[A B] = [ 3  −1   2    1 ]  ~  [ 0  −7   5   −8 ]  R31(−2)
        [ 2  −2   3    2 ]     [ 0  −6   5   −4 ]  R41(−1)
        [ 1  −1   1   −1 ]     [ 0  −3   2   −4 ]

        [ 1   2   −1     3   ]  R32(−6/7)     [ 1   2   −1     3   ]
     ~  [ 0  −7    5    −8   ]  R42(−3/7)  ~  [ 0  −7    5    −8   ]  R43(1/5)
        [ 0   0   5/7   20/7 ]                [ 0   0   5/7   20/7 ]
        [ 0   0  −1/7  −4/7  ]                [ 0   0    0     0   ]

Here both the coefficient matrix and the augmented matrix have the same rank 3, which is equal to the number of variables. So the given system of equations is consistent and has a unique solution.
The reduced equations are
x + 2y − z = 3 ....................... (i)
−7y + 5z = −8 ....................... (ii)
(5/7)z = 20/7 ...................... (iii)
From (iii) we get z = 4.
Putting z = 4 in equation (ii) we get,
−7y + 20 = −8
⇒ 7y = 28
⇒ y = 4
Putting y = 4 and z = 4 in equation (i) we get x = 3 − 2(4) + 4 = −1.
So the required solution is x = −1, y = 4, z = 4.
Problem:
Test the consistency of the following equations and solve them if possible.
x1 + x2 + 4x3 = 2
−x1 + 2x2 + x3 = −1
−x1 − x2 + x3 = 0
2x1 − x2 − 2x3 = 1
Solution:
For the given system of equations the augmented matrix is

        [  1   1   4    2 ]     [ 1   1    4    2 ]  R21(1)
[A B] = [ −1   2   1   −1 ]  ~  [ 0   3    5    1 ]  R31(1)
        [ −1  −1   1    0 ]     [ 0   0    5    2 ]  R41(−2)
        [  2  −1  −2    1 ]     [ 0  −3  −10   −3 ]

        [ 1   1   4    2 ]             [ 1   1   4   2 ]
     ~  [ 0   3   5    1 ]  R42(1)  ~  [ 0   3   5   1 ]  R43(1)
        [ 0   0   5    2 ]             [ 0   0   5   2 ]
        [ 0   0  −5   −2 ]             [ 0   0   0   0 ]

Here both the coefficient matrix and the augmented matrix have the same rank 3, which is equal to the number of variables. So the given system of equations is consistent and has a unique solution.
The reduced equations are
x1 + x2 + 4x3 = 2 ....................... (i)
3x2 + 5x3 = 1 ....................... (ii)
5x3 = 2 ...................... (iii)
From equation (iii) we get x3 = 2/5.
Putting x3 = 2/5 in equation (ii) we get,
3x2 + 5(2/5) = 1
⇒ 3x2 = −1
⇒ x2 = −1/3
Putting x3 = 2/5 and x2 = −1/3 in equation (i) we get,
x1 − 1/3 + 4(2/5) = 2
⇒ x1 = 2 + 1/3 − 8/5
⇒ x1 = 11/15
So the required solutions are x1 = 11/15, x2 = −1/3 and x3 = 2/5.
Problem:
Test the consistency of the following equations and solve them if possible.
2x1 + 3x2 + 5x3 = 6
x1 + 2x2 + 3x3 = 2
x2 + x3 + 2 = 0
Solution:
For the given system of equations the augmented matrix is

        [ 2   3   5    6 ]     [ 2    3     5    6 ]
[A B] = [ 1   2   3    2 ]  ~  [ 0   1/2   1/2  −1 ]  R21(−1/2)
        [ 0   1   1   −2 ]     [ 0    1     1   −2 ]

        [ 2    3     5    6 ]
     ~  [ 0   1/2   1/2  −1 ]  R32(−2)
        [ 0    0     0    0 ]

Here both the coefficient matrix and the augmented matrix have the same rank 2, which is less than the number of variables. So the given system of equations is consistent with an infinite number of solutions.
The reduced equations are
2x1 + 3x2 + 5x3 = 6 ....................... (i)
(1/2)x2 + (1/2)x3 = −1 ....................... (ii)
Let x3 = a, where a is an arbitrary value. From equation (ii) we get,
(1/2)x2 + (1/2)a = −1
⇒ x2 = −a − 2
Putting x2 = −a − 2 and x3 = a in equation (i) we get 2x1 = 6 − 3(−a − 2) − 5a = 12 − 2a, so x1 = 6 − a.
So the general solution is x1 = 6 − a, x2 = −a − 2, x3 = a.
Problem:
Find the values of λ and μ for which the equations
x + y + z = 6
x + 2y + 3z = 10
x + 2y + λz = μ
have (i) no solution, (ii) a unique solution, (iii) infinitely many solutions.
Solution:
For the given system of equations the augmented matrix is

        [ 1   1   1    6 ]     [ 1   1    1      6  ]  R21(−1)
[A B] = [ 1   2   3   10 ]  ~  [ 0   1    2      4  ]  R31(−1)
        [ 1   2   λ    μ ]     [ 0   1   λ−1   μ−6  ]

        [ 1   1    1      6   ]
     ~  [ 0   1    2      4   ]  R32(−1)
        [ 0   0   λ−3   μ−10  ]
When λ = 3 and μ ≠ 10, ρ(A) = 2 and ρ([A B]) = 3. Since the ranks of the two matrices are unequal, the given system of equations has no solution in this case.
When λ ≠ 3 and μ has any value, ρ(A) = 3 and ρ([A B]) = 3. Since the ranks of the two matrices are equal and equal to the number of variables, the given system of equations has a unique solution in this case.
When λ = 3 and μ = 10, ρ(A) = 2 and ρ([A B]) = 2. Since the ranks of the two matrices are equal and less than the number of variables, the given system of equations has an infinite number of solutions in this case.
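The three cases can be confirmed numerically by computing both ranks for sample values of λ and μ (a small sketch; the helper name `ranks` is an assumption):

```python
import numpy as np

def ranks(lam, mu):
    """Return (rank of A, rank of [A | B]) for the system above."""
    A = np.array([[1., 1., 1.],
                  [1., 2., 3.],
                  [1., 2., lam]])
    aug = np.hstack([A, np.array([[6.], [10.], [mu]])])
    return np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
```

For example, `ranks(3, 11)` gives unequal ranks (no solution), `ranks(5, 7)` gives full equal ranks (unique solution), and `ranks(3, 10)` gives equal ranks below 3 (infinitely many solutions).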
Problem:
Test the consistency of the following equations and solve them if possible.
5x + 3y + 7z − 4 = 0
3x + 26y + 2z − 9 = 0
7x + 2y + 10z − 5 = 0
Solution:
For the given system of equations the augmented matrix is

        [ 5    3    7   4 ]     [ 5     3       7      4  ]  R21(−3/5)
[A B] = [ 3   26    2   9 ]  ~  [ 0   121/5  −11/5   33/5 ]  R31(−7/5)
        [ 7    2   10   5 ]     [ 0  −11/5    1/5   −3/5  ]

        [ 5     3       7      4  ]
     ~  [ 0   121/5  −11/5   33/5 ]  R32(1/11)
        [ 0     0       0      0  ]

Here both the coefficient matrix and the augmented matrix have the same rank 2, which is less than the number of variables. So the given system of equations is consistent with an infinite number of solutions.
The reduced equations are
5x + 3y + 7z = 4 ....................... (i)
(121/5)y − (11/5)z = 33/5, i.e. 11y − z = 3 ....................... (ii)
Let z = a, where a is an arbitrary value. From equation (ii) we get y = (a + 3)/11.
Putting z = a and y = (a + 3)/11 in equation (i) we get,
5x + 3(a + 3)/11 + 7a = 4
⇒ 55x + 3a + 9 + 77a = 44
⇒ 55x = 35 − 80a
⇒ x = (7 − 16a)/11
So the general solution is x = (7 − 16a)/11, y = (a + 3)/11 and z = a.
Problem:
Test the consistency of the following equations and solve them if possible.
2x + 6y = 0
6x + 20y + 6z − 3 = 0
6y + 18z − 1 = 0
Solution:
For the given system of equations the augmented matrix is

        [ 2    6    0   0 ]     [ 2   6    0   0 ]
[A B] = [ 6   20    6   3 ]  ~  [ 0   2    6   3 ]  R21(−3)
        [ 0    6   18   1 ]     [ 0   6   18   1 ]

        [ 2   6   0    0 ]
     ~  [ 0   2   6    3 ]  R32(−3)
        [ 0   0   0   −8 ]

Here the ranks of the coefficient matrix and the augmented matrix are not the same. Hence the given system of equations is inconsistent and there is no solution of the given system of equations.
Problem:
For what values of λ do the equations
x + y + z = 1
x + 2y + 4z = λ
x + 4y + 10z = λ²
have a solution? Solve them completely in each case.
Solution:
Here the rank of the coefficient matrix is 3, which is equal to the number of variables. Hence the given system of equations has only the trivial solution, i.e. x1 = x2 = x3 = 0.
Problem:
Find the solution of the following equations by matrix notation
x + 2y − z + w = 0
x − y + 2z − 3w = 0
4x − y + 5z − 8w = 0
Solution:
For the given system of equations the coefficient matrix is

    [ 1   2  −1    1 ]     [ 1   2  −1     1 ]  R21(−1)     [ 1   2  −1    1 ]
A = [ 1  −1   2   −3 ]  ~  [ 0  −3   3    −4 ]  R31(−4)  ~  [ 0  −3   3   −4 ]  R32(−3)
    [ 4  −1   5   −8 ]     [ 0  −9   9   −12 ]              [ 0   0   0    0 ]

Here the rank of the coefficient matrix is 2, which is less than the number of variables. Hence the given system of equations has non-trivial solutions. The reduced equations are,
x + 2y − z + w = 0 .................... (i)
−3y + 3z − 4w = 0 .................... (ii)
Here we have n − r = 4 − 2 = 2 free variables.
Let us consider z = 1 and w = 0; then from equation (ii) we get,
−3y + 3(1) − 4(0) = 0
⇒ y = 1
Putting w = 0, z = 1 and y = 1 in equation (i) we get
x + 2(1) − 1 + 0 = 0
⇒ x = −1
Hence one solution is (−1, 1, 1, 0); taking z = 0, w = 1 similarly gives a second, independent solution.
Statement and proof of important properties of characteristic root and characteristic vector :
Property 01: Latent roots of a real symmetric matrix are all real.
Proof:
Let us suppose that λ and λ̄ are two complex conjugate roots and X and X̄ are the corresponding latent vectors of a real symmetric matrix A.
Then AX = λX and AX̄ = λ̄X̄
⇒ X̄′AX = λX̄′X .............. (i) and X′AX̄ = λ̄X′X̄ ................... (ii)
Since A is a symmetric matrix, A′ = A. Taking the transpose in (ii) we get,
(X′AX̄)′ = λ̄(X′X̄)′
⇒ X̄′A′X = λ̄X̄′X
⇒ X̄′AX = λ̄X̄′X .................. (iii)
Comparing (i) and (iii) we get,
λX̄′X = λ̄X̄′X
⇒ (λ − λ̄)X̄′X = 0
Since X ≠ 0 and X̄ ≠ 0 we have X̄′X ≠ 0, so λ − λ̄ = 0.
If λ = a + ib then λ̄ = a − ib.
Then λ − λ̄ = 0
⇒ (a + ib) − (a − ib) = 0
⇒ 2ib = 0
⇒ b = 0
∴ λ = a and λ̄ = a, which is a real number.
Thus the latent roots of a real symmetric matrix are all real.
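This can be checked numerically: the general (non-symmetric) eigenvalue solver returns exactly real roots for a real symmetric input. A minimal sketch with an illustrative 2×2 symmetric matrix:

```python
import numpy as np

S = np.array([[0., 1.],
              [1., 0.]])            # real symmetric
roots = np.linalg.eigvals(S)        # general solver, could in principle be complex
```

The roots come out as −1 and 1, both real, as the property guarantees.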
Property 02: If λ is a characteristic root of A, then λᵏ is a characteristic root of Aᵏ, where k is a positive integer.
Proof:
We have,
AX = λX
⇒ A²X = A(λX) = λ(AX) = λ²X
⇒ A³X = A(λ²X) = λ²(AX) = λ³X
......
⇒ AᵏX = λᵏX
Thus λᵏ is a characteristic root of Aᵏ.
Property 03: If all the characteristic roots of a matrix are different, then the corresponding characteristic
vectors are linearly independent.
Proof:
Let us suppose that λ1, λ2, ......., λn are distinct characteristic roots and X1, X2, ......, Xn are the corresponding characteristic vectors of a matrix A.
Then AXi = λiXi.
Let us consider the equation
C1X1 + C2X2 + ........ + CnXn = 0
Pre-multiplying by A,
C1AX1 + C2AX2 + ........ + CnAXn = 0
⇒ C1λ1X1 + C2λ2X2 + ........ + CnλnXn = 0
Pre-multiplying by A again,
C1λ1AX1 + C2λ2AX2 + ........ + CnλnAXn = 0
⇒ C1λ1²X1 + C2λ2²X2 + ........ + Cnλn²Xn = 0
Continuing this process of multiplication, we have,
C1λ1ⁿ⁻¹X1 + C2λ2ⁿ⁻¹X2 + ........ + Cnλnⁿ⁻¹Xn = 0
After taking the transpose of each equation, the resulting n equations may be written as

    [ 1      1     ...   1     ] [ C1X1′ ]
B M = 0, where B = [ λ1     λ2    ...   λn    ] and M = [ C2X2′ ] ..................... (i)
    [ ........................ ] [ ..... ]
    [ λ1ⁿ⁻¹  λ2ⁿ⁻¹ ...   λnⁿ⁻¹ ] [ CnXn′ ]

The determinant of B is the Vandermonde determinant Π(λi − λj) over i > j, which is non-zero since λi ≠ λj for i ≠ j.
So the matrix B is a non-singular matrix and B⁻¹ exists.
Pre-multiplying the matrix equation (i) by B⁻¹ we get M = 0, i.e.
C1X1 = C2X2 = ...... = CnXn = 0
Since X1, X2, ......, Xn are non-zero vectors we have C1 = C2 = ...... = Cn = 0.
So the characteristic vectors X1, X2, ......, Xn are linearly independent.
Property 04: If a matrix A has all distinct characteristic roots, then there exists a non-singular matrix P such that P⁻¹AP is a diagonal matrix whose diagonal elements are the characteristic roots of A.
Proof:
If λ1, λ2, ......., λn are distinct latent roots of the matrix A, then the corresponding latent vectors X1, X2, ......, Xn are linearly independent.
Let us construct the matrix P = (X1, X2, ......, Xn); then |P| ≠ 0 and P⁻¹ exists.
AP = A(X1, X2, ......, Xn) = (λ1X1, λ2X2, ......, λnXn)
   = (X1, X2, ......, Xn) diag(λ1, λ2, ......, λn)
   = P diag(λ1, λ2, ......, λn)
∴ P⁻¹AP = P⁻¹P diag(λ1, λ2, ......, λn) = diag(λ1, λ2, ......, λn)
Hence the proof.
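The construction is exactly what `np.linalg.eig` returns: the columns of its second output are the latent vectors. A small sketch with an illustrative matrix having distinct roots 1 and 3:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])            # distinct latent roots 1 and 3
lam, P = np.linalg.eig(A)           # columns of P are the latent vectors
D = np.linalg.inv(P) @ A @ P        # should equal diag(lam)
```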
Property 05: For a symmetric matrix with distinct latent roots, the latent vectors are orthogonal vectors.
Proof:
Let λi and λj be two distinct latent roots and Xi and Xj be the corresponding latent vectors of a symmetric matrix A.
Then AXi = λiXi and AXj = λjXj
⇒ Xj′AXi = λiXj′Xi .............. (i) and Xi′AXj = λjXi′Xj ................... (ii)
Transposing (ii) we get,
(Xi′AXj)′ = λj(Xi′Xj)′
⇒ Xj′A′Xi = λjXj′Xi
⇒ Xj′AXi = λjXj′Xi .................... (iii)
Comparing (i) and (iii) we get,
λiXj′Xi = λjXj′Xi
⇒ (λi − λj)Xj′Xi = 0
Since λi ≠ λj, we have Xj′Xi = 0.
∴ Xi and Xj are orthogonal vectors.
Thus the characteristic vectors of a symmetric matrix with distinct latent roots are orthogonal
vectors.
Property 06: If a symmetric matrix A has all distinct characteristic roots, then there exists an orthogonal matrix P such that P′AP is a diagonal matrix whose diagonal elements are the characteristic roots of A.
Proof:
If λ1, λ2, ......., λn are distinct latent roots of the matrix A, then the corresponding latent vectors X1, X2, ......, Xn are orthogonal vectors.
Let us construct the matrix P = (X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖); then P′P = PP′ = I, i.e. P is an orthogonal matrix.
Now, AP = A(X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖) = (λ1X1/‖X1‖, λ2X2/‖X2‖, ......, λnXn/‖Xn‖)
   = (X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖) diag(λ1, λ2, ......, λn) = P diag(λ1, λ2, ......, λn)
∴ P′AP = P′P diag(λ1, λ2, ......, λn) = diag(λ1, λ2, ......, λn)
Hence the proof.
Hence the proof.
Property 07: For a matrix A, the sum of the latent roots is equal to its trace.
Proof:
Let us suppose that A is a symmetric matrix of order n. Then there exists an orthogonal matrix P such that P′P = PP′ = I and P′AP = diag(λ1, λ2, ......., λn), a diagonal matrix whose diagonal elements λ1, λ2, ......., λn are the latent roots of A.
Now, tr(P′AP) = λ1 + λ2 + ......... + λn
⇒ tr(APP′) = λ1 + λ2 + ......... + λn
⇒ tr(AI) = λ1 + λ2 + ......... + λn
⇒ tr(A) = λ1 + λ2 + ......... + λn
Thus, sum of characteristic root of a matrix is equal to its trace.
Property 09: If A is a non-singular matrix, then the characteristic roots of A⁻¹ are the reciprocals of the characteristic roots of A.
Proof:
Let λ be a characteristic root and X be the corresponding characteristic vector of a matrix A; λ ≠ 0 since A is non-singular.
Then we have,
AX = λX
⇒ A⁻¹AX = λA⁻¹X
⇒ X = λA⁻¹X
⇒ (1/λ)X = A⁻¹X
⇒ A⁻¹X = (1/λ)X
This shows that 1/λ is a characteristic root of the matrix A⁻¹, which is the reciprocal of λ.
Thus, the characteristic roots of A⁻¹ are the reciprocals of the characteristic roots of A.
Property 11: Any square matrix A and its transpose A′ have the same characteristic roots.
Proof:
We have,
|A′ − λI| = |(A − λI)′| = |A − λI|,
since a matrix and its transpose have the same determinant.
This shows that A and its transpose A′ have the same characteristic equation and hence have the same characteristic roots.
Property 12: Each characteristic root of an idempotent matrix is either zero or unity.
Proof:
Let A be an idempotent matrix, so that A² = A, and let λ be a characteristic root with characteristic vector X. Then
AX = λX
⇒ A²X = λAX
⇒ AX = λ(λX) = λ²X (since A² = A)
⇒ λX = λ²X
⇒ (λ² − λ)X = 0
⇒ λ(λ − 1)X = 0
Since any characteristic vector is not the null vector,
λ(λ − 1) = 0
⇒ λ = 0 or λ = 1
So that, the characteristic root of an idempotent matrix is either zero or unity.
Example :
Find the characteristic roots and corresponding characteristic vectors of the matrix

    [  6  −2   2 ]
    [ −2   3  −1 ]
    [  2  −1   3 ]

Solution :
Let A be the above matrix. The characteristic equation is |A − λI| = 0, i.e.

| 6−λ   −2     2  |
| −2    3−λ   −1  | = 0
|  2    −1    3−λ |

⇒ (6 − λ)[(3 − λ)² − 1] + 2[−2(3 − λ) + 2] + 2[2 − 2(3 − λ)] = 0
⇒ λ³ − 12λ² + 36λ − 32 = 0
⇒ (λ − 2)²(λ − 8) = 0
⇒ λ = 2, 2, 8
For λ = 8, (A − 8I)X = 0 gives the reduced equations
−2x1 − 2x2 + 2x3 = 0 ............ (i)
−3x2 − 3x3 = 0 .............. (ii)
Let x3 = 1; then from (ii), x2 = −1.
From (i) we get,
−2x1 − 2(−1) + 2(1) = 0
⇒ −2x1 + 4 = 0
⇒ x1 = 2
So that, the characteristic vector corresponding to the characteristic root λ = 8 is (2, −1, 1)′.
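The result is easy to verify numerically (the sign pattern of the matrix is as reconstructed here, since the extracted source dropped minus signs):

```python
import numpy as np

A = np.array([[ 6., -2.,  2.],
              [-2.,  3., -1.],
              [ 2., -1.,  3.]])

roots = np.linalg.eigvalsh(A)       # symmetric matrix -> eigvalsh, ascending order
v = np.array([2., -1., 1.])         # claimed vector for the root 8
Av = A @ v                          # should equal 8*v
```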
Quadratic Form:
A homogeneous polynomial of the second degree in any number of variables is called a quadratic form.
For example :
2x1² + 3x1x2 + 5x2²
3x1² + 2x2² + x3² + x1x2 + 2x2x3 + 3x3x1
are quadratic forms in 2 and 3 variables respectively. In matrix form, the above forms can be written as

2x1² + 3x1x2 + 5x2² = (x1  x2) [ 2    3/2 ] (x1) = X′AX, where A = [ 2    3/2 ], X = (x1, x2)′
                               [ 3/2   5  ] (x2)                   [ 3/2   5  ]

3x1² + 2x2² + x3² + x1x2 + 2x2x3 + 3x3x1 = (x1  x2  x3) [ 3    1/2   3/2 ] (x1)
                                                        [ 1/2   2     1  ] (x2) = X′AX
                                                        [ 3/2   1     1  ] (x3)

           [ 3    1/2   3/2 ]
where, A = [ 1/2   2     1  ] and X = (x1, x2, x3)′.
           [ 3/2   1     1  ]
Canonical Form:
If a real quadratic form can be expressed as the sum and difference of the squares of the new
variables by any real non-singular transformation, then this expression is called the canonical form
of the given form.
Let us consider the quadratic form X′AX and a real non-singular linear transformation X = PY, where P is a non-singular matrix. Then
X′AX = (PY)′A(PY)
     = Y′(P′AP)Y
     = Y′BY, where B = P′AP,
which is the canonical form of the quadratic form X′AX.
Rank of a quadratic form:
For a quadratic form X′AX, ρ(A) is called the rank of the quadratic form.
Index of a quadratic form:
The number of positive square terms in the canonical form of a quadratic form is called the index of
the quadratic form.
Signature of a quadratic form:
The difference between the number of positive square terms and the number of negative square terms in the canonical form of a quadratic form is called the signature of the quadratic form.
(v) Indefinite:
Let q = X′AX be a real quadratic form in n variables with rank r and index p. The quadratic form q = X′AX is called indefinite if 0 < p < r, i.e. the canonical form contains both positive and negative square terms:
y1² + ........ + yp² − yp+1² − ........ − yr² ; r ≤ n.
Necessary and sufficient condition for a real quadratic form to be positive definite
Statement :
A necessary and sufficient condition for a real quadratic form X′AX to be positive definite is that the leading principal minors of A are all positive.
Proof:
Necessary condition:
Let the quadratic form X′AX = Σi Σj aij xi xj (aij = aji) be positive definite. We have to show that all the leading principal minors |A11|, |A22|, ....., |Ann| of A are positive.
Since X′AX is positive definite, there exists a non-singular linear transformation X = PY such that
X′AX = Y′(P′AP)Y = y1² + y2² + ........ + yn²,
i.e. P′AP = In
⇒ |P′||A||P| = 1
⇒ |A| = 1/|P|²
Since P is a non-singular matrix, |P| ≠ 0, so |A| = 1/|P|² > 0. Thus |Ann| = |A| is positive.
Let us consider the quadratic form with the last variable xn in X′AX set to zero. Then the matrix A reduces to its leading (n−1)×(n−1) submatrix and the positive definiteness of the quadratic form remains unchanged. Therefore there will also exist a non-singular linear transformation X = PZ such that,
X′AX = Z′(P′AP)Z = z1² + z2² + ......... + zn−1²,
so that, as before, |A(n−1)(n−1)| = 1/|P|² > 0.
Proceeding in this way we can show that every leading principal minor is positive. Thus we have established the relationship
|A11| > 0, |A22| > 0, ........., |Ann| > 0.
Hence all the leading principal minors of the matrix A are positive.
Sufficient condition :
Let the leading principal minors of the matrix A in X′AX = Σi Σj aij xi xj (aij = aji) be positive. We have to show that the quadratic form X′AX is positive definite.
Since |A11| = a11 > 0, the elements except a11 both in the 1st row and in the 1st column can be reduced to zero by elementary transformation. Then the resulting matrix is of the form

[ a11   0  ...   0  ]
[  0   b22 ...  b2n ]
[ .................. ]
[  0   bn2 ...  bnn ]

We have |A22| = a11 b22 > 0; since a11 > 0, b22 > 0.
So keeping b22 fixed, the elements of the 2nd row and 2nd column are reduced to zero by elementary transformation. Then the resulting matrix is of the form

[ a11   0    0  ...   0  ]
[  0   b22   0  ...   0  ]
[  0    0   c33 ...  c3n ]
[ ....................... ]
[  0    0   cn3 ...  cnn ]

We have |A33| = a11 b22 c33 > 0; since a11, b22 > 0, c33 > 0.
Proceeding in this way it can be shown that A is equivalent to the diagonal matrix diag(a11, b22, c33, ......, lnn), so that

X′AX = (y1, y2, ...., yn) diag(a11, b22, c33, ......, lnn) (y1, y2, ...., yn)′
     = a11 y1² + b22 y2² + ........ + lnn yn²

Since a11 > 0, b22 > 0, ........, lnn > 0, the quadratic form X′AX is positive definite.
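The leading-principal-minor test is simple to program. A minimal sketch using determinants of the nested top-left blocks (the tridiagonal example matrix is an illustrative assumption):

```python
import numpy as np

def leading_minors(A):
    """Determinants of the k x k top-left blocks, k = 1..n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

minors = leading_minors(A)        # 2, 3, 4 -> all positive
eigs = np.linalg.eigvalsh(A)      # cross-check: all eigenvalues positive too
```

All minors positive agrees with all eigenvalues being positive, as the later characteristic-root criterion states.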
Necessary and sufficient condition for a real quadratic form to be negative definite
Statement :
A necessary and sufficient condition for a real quadratic form X′AX to be negative definite is that the leading principal minors are alternately negative and positive.
Proof:
Necessary condition:
Let the quadratic form X′AX = Σi Σj aij xi xj (aij = aji) be negative definite. We have to show that the leading principal minors of the matrix A are alternately negative and positive.
Since X′AX is negative definite, there exists a non-singular linear transformation X = PY such that,
X′AX = Y′(P′AP)Y = −y1² − y2² − ........ − yn²,
i.e. P′AP = −In
⇒ |P′||A||P| = (−1)ⁿ
⇒ |A| = (−1)ⁿ/|P|²
Since P is a non-singular matrix, |P|² > 0.
Thus |Ann| is positive if n is even,
and |Ann| is negative if n is odd.
Let us consider the quadratic form with the last variable xn in X′AX set to zero. Then the matrix A reduces to its leading (n−1)×(n−1) submatrix and the negative definiteness of the quadratic form remains unchanged. Therefore there will also exist a non-singular linear transformation X = PZ such that,
X′AX = Z′(P′AP)Z = −z1² − z2² − ......... − zn−1²
⇒ P′AP = −In−1
⇒ |A(n−1)(n−1)| = (−1)ⁿ⁻¹/|P|²
Since P is a non-singular matrix, |P|² > 0.
Thus |A(n−1)(n−1)| is positive if n is odd,
and |A(n−1)(n−1)| is negative if n is even.
Thus we have proved that if |Ann| is positive then |A(n−1)(n−1)| is negative, and if |Ann| is negative then |A(n−1)(n−1)| is positive.
Proceeding in this way we can show that successive leading principal minors alternate in sign. Thus we have established the relationship
|A11| < 0, |A22| > 0, |A33| < 0, ........., |Ann| < 0 or > 0 (according as n is odd or even).
Hence the leading principal minors of the matrix A are alternately negative and positive.
Sufficient condition :
Let the leading principal minors of the matrix A in X′AX = Σi Σj aij xi xj (aij = aji) be alternately negative and positive. We have to show that the quadratic form X′AX is negative definite.
Since |A11| = a11 < 0, the elements except a11 in the 1st row and 1st column can be reduced to zero by elementary transformation, and |A22| = a11 b22 > 0 gives b22 < 0.
Since b22 < 0, keeping b22 fixed the elements of the 2nd row and 2nd column are reduced to zero by elementary transformation. Then the resulting matrix is of the form

[ a11   0    0  ...   0  ]
[  0   b22   0  ...   0  ]
[  0    0   c33 ...  c3n ]
[ ....................... ]
[  0    0   cn3 ...  cnn ]

We have |A33| = a11 b22 c33 < 0; since a11 < 0 and b22 < 0, c33 < 0.
Proceeding in this way it can be shown that A is equivalent to the diagonal matrix diag(a11, b22, c33, ......, lnn), so that
X′AX = a11 y1² + b22 y2² + ........ + lnn yn²
Since a11 < 0, b22 < 0, ........, lnn < 0, the quadratic form X′AX is negative definite.
Theorem: A positive definite real quadratic form can be expressed as a sum of squares with positive coefficients.
Proof :
We know that for a positive definite real quadratic form X′AX all the leading principal minors of the matrix A are positive, i.e. if A = (aij) is the n×n matrix of the form, then |A11| > 0, |A22| > 0, ........., |Ann| > 0, and in particular a11 > 0.
Hence the elements except a11 both in the 1st row and in the 1st column are reduced to zero by elementary transformation. Then the resulting matrix is of the form

[ a11   0  ...   0  ]
[  0   b22 ...  b2n ]
[ .................. ]
[  0   bn2 ...  bnn ]

We have |A22| = a11 b22 > 0; since a11 > 0, b22 > 0.
So keeping b22 fixed, the elements of the 2nd row and 2nd column are reduced to zero by elementary transformation, giving |A33| = a11 b22 c33 > 0, so c33 > 0.
Proceeding in this way it can be shown that A is equivalent to the diagonal matrix diag(a11, b22, c33, ......, lnn), so that
X′AX = a11 y1² + b22 y2² + ........ + lnn yn²
Hence the quadratic form X′AX is expressed as a sum of squares, since a11 > 0, b22 > 0, ....., lnn > 0.
A real quadratic form is positive definite iff all the characteristic roots of A are positive
Proof :
Necessary condition :
Let X′AX = Σi Σj aij xi xj be a positive definite quadratic form with matrix A. Also let λ1, λ2, ....., λn be the characteristic roots of A and X1, X2, ........, Xn be the corresponding characteristic vectors.
Then AXi = λiXi, so Xi′AXi = λiXi′Xi. Since X′AX > 0 for every X ≠ 0 and Xi′Xi > 0, we get λi = Xi′AXi / Xi′Xi > 0 for each i.
Sufficient condition :
Let all the characteristic roots of A be positive, and let X1, X2, ........, Xn be the characteristic vectors corresponding to the characteristic roots λ1, λ2, ....., λn.
We have, AXi = λiXi , i = 1, 2, ........, n
⇒ Xi′AXi = λiXi′Xi
Since λi > 0, we have Xi′AXi > 0 (since Xi′Xi > 0).
Hence X′AX is positive definite.
Question :
Express Σᵢ (Xi − X̄)² (i = 1, ..., n) as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.
Answer :
Σᵢ (Xi − X̄)² = Σᵢ Xi² − (1/n)(Σᵢ Xi)²
= Σᵢ Xi² − (1/n) Σᵢ Xi² − (2/n) Σ_{i<j} Xi Xj
= (1 − 1/n) Σᵢ Xi² − (2/n) Σ_{i<j} Xi Xj

                      [ 1−1/n  −1/n  ...  −1/n  ] [ X1 ]
= (X1, X2, ...., Xn)  [ −1/n  1−1/n  ...  −1/n  ] [ X2 ]  = X′AX
                      [ ......................  ] [ .. ]
                      [ −1/n   −1/n  ... 1−1/n  ] [ Xn ]

Which is a quadratic form, where

    [ 1−1/n  −1/n  ...  −1/n  ]
A = [ −1/n  1−1/n  ...  −1/n  ]
    [ ......................  ]
    [ −1/n   −1/n  ... 1−1/n  ]

is the required matrix of the quadratic form.
Here A·A = A² = A, so that the matrix A is an idempotent matrix.
∴ ρ(A) = tr(A) = n(1 − 1/n) = n − 1
Comment : Since ρ(A) = n − 1 is less than the number of variables and r = p (all the non-zero terms in the canonical form are positive), the given quadratic form is positive semi-definite.
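The matrix A here is the centering matrix I − (1/n)J (J the all-ones matrix), and all the claims are easy to verify numerically (the data vector is an illustrative assumption):

```python
import numpy as np

n = 5
Amat = np.eye(n) - np.ones((n, n)) / n   # diagonal 1 - 1/n, off-diagonal -1/n

x = np.array([3., 1., 4., 1., 5.])
q = x @ Amat @ x                          # the quadratic form X'AX
direct = np.sum((x - x.mean()) ** 2)      # sum of squared deviations

idem = np.allclose(Amat @ Amat, Amat)     # A^2 = A
rank_A = np.linalg.matrix_rank(Amat)      # should be n - 1
```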
Question :

Express $\sum_{i \neq j}(X_i - X_j)^2$ as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.

Answer :

\[
\sum_{i \neq j}(X_i - X_j)^2
= (X_1-X_2)^2 + \cdots + (X_1-X_n)^2 + (X_2-X_1)^2 + \cdots + (X_2-X_n)^2 + \cdots + (X_n-X_1)^2 + \cdots + (X_n-X_{n-1})^2
\]
\[
= 2\big[(X_1-X_2)^2 + (X_1-X_3)^2 + \cdots + (X_1-X_n)^2 + (X_2-X_3)^2 + \cdots + (X_2-X_n)^2 + \cdots + (X_{n-1}-X_n)^2\big]
\]
\[
= 2(n-1)\sum_{i=1}^{n} X_i^2 - 4\sum_{i<j} X_i X_j
= (2n-2)\sum_{i=1}^{n} X_i^2 - 4\sum_{i<j} X_i X_j
\]

\[
= \begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix}
\begin{pmatrix}
2n-2 & -2 & \cdots & -2 \\
-2 & 2n-2 & \cdots & -2 \\
\vdots & & \ddots & \vdots \\
-2 & -2 & \cdots & 2n-2
\end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}
= X'AX ,
\]

which is a quadratic form, where $A = 2nB$ and

\[
B = \begin{pmatrix}
1-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
-\tfrac{1}{n} & 1-\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
\vdots & & \ddots & \vdots \\
-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & 1-\tfrac{1}{n}
\end{pmatrix},
\]

so $A$ is the required matrix of the quadratic form.

Here $B \cdot B = B^2 = B$, so that the matrix $B$ is an idempotent matrix, and hence $\rho(B) = \mathrm{tr}(B) = n(1-\frac{1}{n}) = n-1$. Since $A = 2nB$,

\[ \rho(A) = \rho(2nB) = \rho(B) = n - 1 . \]

Comment : Since $\rho(A) = n-1$ is less than the number of variables and the rank equals the index, the given quadratic form is positive semi-definite.
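The identity $A = 2nB$ can be verified against the pairwise sum directly; a sketch assuming NumPy (the data vector is illustrative):

```python
import numpy as np

n = 4
B = np.eye(n) - np.ones((n, n)) / n
A = 2 * n * B    # diagonal 2n-2, off-diagonal -2

x = np.array([1.0, 2.0, 5.0, 7.0])
# Sum over all ordered pairs i != j.
total = sum((x[i] - x[j])**2 for i in range(n) for j in range(n) if i != j)
assert np.isclose(x @ A @ x, total)
assert np.linalg.matrix_rank(A) == n - 1
```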
Question :

Express $\sum_{i<j}(X_i - X_j)^2$ as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.

Answer :

\[
\sum_{i<j}(X_i - X_j)^2
= (X_1-X_2)^2 + (X_1-X_3)^2 + \cdots + (X_1-X_n)^2 + (X_2-X_3)^2 + \cdots + (X_2-X_n)^2 + \cdots + (X_{n-1}-X_n)^2
\]
\[
= (n-1)\sum_{i=1}^{n} X_i^2 - 2\sum_{i<j} X_i X_j
\]

\[
= \begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix}
\begin{pmatrix}
n-1 & -1 & \cdots & -1 \\
-1 & n-1 & \cdots & -1 \\
\vdots & & \ddots & \vdots \\
-1 & -1 & \cdots & n-1
\end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}
= X'AX ,
\]

which is a quadratic form, where $A = nB$ and

\[
B = \begin{pmatrix}
1-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
-\tfrac{1}{n} & 1-\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
\vdots & & \ddots & \vdots \\
-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & 1-\tfrac{1}{n}
\end{pmatrix},
\]

so $A$ is the required matrix of the quadratic form.

Here $B \cdot B = B^2 = B$, so that the matrix $B$ is an idempotent matrix, and hence $\rho(B) = \mathrm{tr}(B) = n(1-\frac{1}{n}) = n-1$. Since $A = nB$,

\[ \rho(A) = \rho(nB) = \rho(B) = n - 1 . \]

Comment : Since $\rho(A) = n-1$ is less than the number of variables and the rank equals the index, the given quadratic form is positive semi-definite.
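As a cross-check that the matrix with diagonal $n-1$ and off-diagonal $-1$ really reproduces the unordered pairwise sum, a sketch assuming NumPy (illustrative data):

```python
import numpy as np

n = 4
B = np.eye(n) - np.ones((n, n)) / n
A = n * B    # diagonal n-1, off-diagonal -1

x = np.array([1.0, 2.0, 5.0, 7.0])
# Sum over unordered pairs i < j only.
pair_sum = sum((x[i] - x[j])**2 for i in range(n) for j in range(i + 1, n))
assert np.isclose(x @ A @ x, pair_sum)
assert np.linalg.matrix_rank(A) == n - 1
```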
Question :

Express $\sum_{i=1}^{k}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)^2$, where $\bar{X}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} X_{ij}$, as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.
223605 Linear Algebra, Professor Biplab Bhattacharjee, Dept. of Statistics, Govt. B M College, Barishal
Answer :

\[
\sum_{i=1}^{k}\sum_{j=1}^{n_i}(X_{ij}-\bar{X}_i)^2
= \sum_{j=1}^{n_1}(X_{1j}-\bar{X}_1)^2 + \sum_{j=1}^{n_2}(X_{2j}-\bar{X}_2)^2 + \cdots + \sum_{j=1}^{n_k}(X_{kj}-\bar{X}_k)^2 .
\]

Again, for the $r$-th group,

\[
\sum_{j=1}^{n_r}(X_{rj}-\bar{X}_r)^2
= \sum_{j=1}^{n_r} X_{rj}^2 - \frac{1}{n_r}\Big(\sum_{j=1}^{n_r} X_{rj}\Big)^2
= \Big(1-\frac{1}{n_r}\Big)\sum_{j=1}^{n_r} X_{rj}^2 - \frac{2}{n_r}\sum_{j<l} X_{rj}X_{rl}
= X_r'A_rX_r ,
\]

where $X_r = (X_{r1}, X_{r2}, \ldots, X_{rn_r})'$ and

\[
A_r = \begin{pmatrix}
1-\tfrac{1}{n_r} & -\tfrac{1}{n_r} & \cdots & -\tfrac{1}{n_r} \\
-\tfrac{1}{n_r} & 1-\tfrac{1}{n_r} & \cdots & -\tfrac{1}{n_r} \\
\vdots & & \ddots & \vdots \\
-\tfrac{1}{n_r} & -\tfrac{1}{n_r} & \cdots & 1-\tfrac{1}{n_r}
\end{pmatrix} .
\]

Hence

\[
\sum_{i=1}^{k}\sum_{j=1}^{n_i}(X_{ij}-\bar{X}_i)^2
= X_1'A_1X_1 + X_2'A_2X_2 + \cdots + X_k'A_kX_k = \sum_{i=1}^{k} X_i'A_iX_i ,
\]

which is a quadratic form, with block-diagonal matrix $\mathrm{diag}(A_1, A_2, \ldots, A_k)$.

Each $A_i$ is idempotent, so $\rho(A_i) = \mathrm{tr}(A_i) = n_i - 1$, and the rank of the whole form is $\sum_{i=1}^{k}(n_i - 1) = N - k$, where $N = \sum_{i=1}^{k} n_i$ is the number of variables.

Comment : Since the rank $N-k$ is less than the number of variables and the rank equals the index, the given quadratic form is positive semi-definite.
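The block-diagonal structure can be exercised numerically. The sketch below (assuming NumPy; the group sizes and data are hypothetical) assembles $\mathrm{diag}(A_1, \ldots, A_k)$ and compares the form with the within-group sum of squares.

```python
import numpy as np

sizes = [3, 4, 2]                      # hypothetical group sizes n_1,...,n_k
blocks = [np.eye(m) - np.ones((m, m)) / m for m in sizes]

N = sum(sizes)
A = np.zeros((N, N))
start = 0
for Bm, m in zip(blocks, sizes):
    A[start:start + m, start:start + m] = Bm   # place A_r on the diagonal
    start += m

rng = np.random.default_rng(1)
x = rng.normal(size=N)

# Direct within-group sum of squares.
ss = 0.0
start = 0
for m in sizes:
    g = x[start:start + m]
    ss += np.sum((g - g.mean())**2)
    start += m

assert np.isclose(x @ A @ x, ss)
assert np.linalg.matrix_rank(A) == sum(m - 1 for m in sizes)   # N - k
```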
Question :

Prove that $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$, where $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, is a quadratic form in the variables $X_1, X_2, \ldots, X_n$ and hence determine its rank and comment on the nature of the quadratic form.

Answer :

Given that,

\[
S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})^2
= \frac{1}{n-1}\Big[\sum_{i=1}^{n} X_i^2 - \frac{1}{n}\Big(\sum_{i=1}^{n} X_i\Big)^2\Big]
= \frac{1}{n-1}\Big[\Big(1-\frac{1}{n}\Big)\sum_{i=1}^{n} X_i^2 - \frac{2}{n}\sum_{i<j} X_i X_j\Big]
\]
\[
= \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \frac{2}{n(n-1)}\sum_{i<j} X_i X_j
= \begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix}
\begin{pmatrix}
\tfrac{1}{n} & -\tfrac{1}{n(n-1)} & \cdots & -\tfrac{1}{n(n-1)} \\
-\tfrac{1}{n(n-1)} & \tfrac{1}{n} & \cdots & -\tfrac{1}{n(n-1)} \\
\vdots & & \ddots & \vdots \\
-\tfrac{1}{n(n-1)} & -\tfrac{1}{n(n-1)} & \cdots & \tfrac{1}{n}
\end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}
= X'AX ,
\]

which is a quadratic form, where

\[
A = \frac{1}{n-1}B ,
\qquad
B = \begin{pmatrix}
1-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
-\tfrac{1}{n} & 1-\tfrac{1}{n} & \cdots & -\tfrac{1}{n} \\
\vdots & & \ddots & \vdots \\
-\tfrac{1}{n} & -\tfrac{1}{n} & \cdots & 1-\tfrac{1}{n}
\end{pmatrix},
\]

is the required matrix of the quadratic form.

Here $B \cdot B = B^2 = B$, so that the matrix $B$ is an idempotent matrix, and hence $\rho(B) = \mathrm{tr}(B) = n(1-\frac{1}{n}) = n-1$. Since $A = \frac{1}{n-1}B$,

\[ \rho(A) = \rho(B) = n - 1 . \]

Comment : Since $\rho(A) = n-1$ is less than the number of variables and the rank equals the index, the given quadratic form is positive semi-definite.
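The quadratic-form representation of $S^2$ can be compared with the usual sample variance; a sketch assuming NumPy (random illustrative data):

```python
import numpy as np

n = 6
B = np.eye(n) - np.ones((n, n)) / n
A = B / (n - 1)   # diagonal 1/n, off-diagonal -1/(n(n-1))

rng = np.random.default_rng(2)
x = rng.normal(size=n)

# x'Ax equals the sample variance S^2 with divisor n-1.
assert np.isclose(x @ A @ x, np.var(x, ddof=1))
assert np.linalg.matrix_rank(A) == n - 1
```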
Question :

Show that $ax_1^2 + 2bx_1x_2 + cx_2^2$ is positive definite if and only if $a > 0$ and $|A| = ac - b^2 > 0$, where $A$ is the matrix of the given quadratic form.

Answer :

Given the quadratic form $ax_1^2 + 2bx_1x_2 + cx_2^2$. In matrix notation it can be written as

\[
\begin{pmatrix} x_1 & x_2 \end{pmatrix}
\begin{pmatrix} a & b \\ b & c \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= X'AX ,
\qquad
X = (x_1, x_2)' , \quad A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}.
\]

Necessary condition : Suppose $X'AX$ is positive definite. Then there exists a non-singular linear transformation $X = PY$ such that

\[ X'AX = Y'P'APY = y_1^2 + y_2^2 , \quad\text{i.e.}\quad P'AP = I_2 . \]

Taking determinants, $|P'AP| = |P|^2|A| = 1$, so

\[ |A| = \frac{1}{|P|^2} > 0 , \]

since $P$ is non-singular ($|P| \neq 0$). Thus $|A| = ac - b^2 > 0$.

Now set $x_2 = 0$ in $X'AX$. The resulting form $ax_1^2$ in the single variable $x_1$ is still positive definite, and by the same determinant argument its $1 \times 1$ matrix $A_{11} = (a)$ satisfies $|A_{11}| = a > 0$.

Sufficient condition : Conversely, let $a > 0$ and $ac - b^2 > 0$. Completing the square,

\[ ax_1^2 + 2bx_1x_2 + cx_2^2 = a\Big(x_1 + \frac{b}{a}x_2\Big)^2 + \frac{ac-b^2}{a}\,x_2^2 , \]

which is positive for every $(x_1, x_2) \neq (0, 0)$, since both coefficients are positive. Hence the form is positive definite.

Hence the condition is proved.
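The two-variable criterion can be compared against the eigenvalue characterisation proved earlier; a sketch assuming NumPy (the helper name and the sample triples are hypothetical):

```python
import numpy as np

def is_positive_definite(a, b, c):
    """Criterion from the text: a > 0 and ac - b^2 > 0."""
    return a > 0 and a * c - b * b > 0

# Compare with the eigenvalue test on a few illustrative matrices.
for a, b, c in [(2.0, 1.0, 3.0), (1.0, 2.0, 1.0), (-1.0, 0.0, 2.0)]:
    A = np.array([[a, b], [b, c]])
    eig_pd = bool(np.all(np.linalg.eigvalsh(A) > 0))
    assert is_positive_definite(a, b, c) == eig_pd
```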
Problem 01 :

Reduce the quadratic form $5x_1^2 + 26x_2^2 + 10x_3^2 + 4x_2x_3 + 14x_3x_1 + 6x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
5x_1^2 + 26x_2^2 + 10x_3^2 + 4x_2x_3 + 14x_3x_1 + 6x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 5 & 3 & 7 \\ 3 & 26 & 2 \\ 7 & 2 & 10 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 5 & 3 & 7 \\ 3 & 26 & 2 \\ 7 & 2 & 10 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-\tfrac{3}{5}), C_{21}(-\tfrac{3}{5})$; $R_{31}(-\tfrac{7}{5}), C_{31}(-\tfrac{7}{5})$:

\[
\begin{pmatrix} 5 & 0 & 0 \\ 0 & \tfrac{121}{5} & -\tfrac{11}{5} \\ 0 & -\tfrac{11}{5} & \tfrac{1}{5} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{3}{5} & 1 & 0 \\ -\tfrac{7}{5} & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{3}{5} & -\tfrac{7}{5} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(\tfrac{1}{11}), C_{32}(\tfrac{1}{11})$:

\[
\begin{pmatrix} 5 & 0 & 0 \\ 0 & \tfrac{121}{5} & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{3}{5} & 1 & 0 \\ -\tfrac{16}{11} & \tfrac{1}{11} & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{3}{5} & -\tfrac{16}{11} \\ 0 & 1 & \tfrac{1}{11} \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{5}}), C_1(\tfrac{1}{\sqrt{5}})$; $R_2(\tfrac{\sqrt{5}}{11}), C_2(\tfrac{\sqrt{5}}{11})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{5}} & -\tfrac{3}{11\sqrt{5}} & -\tfrac{16}{11} \\ 0 & \tfrac{\sqrt{5}}{11} & \tfrac{1}{11} \\ 0 & 0 & 1 \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, 1, 0)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 + y_2^2$. Therefore the rank is 2, the index is 2 and the signature is 2.

Comment : Since rank < number of variables and rank = index, the quadratic form is positive semi-definite.

Now the linear transformation is $X = PY$, i.e.

\[
x_1 = \frac{1}{\sqrt{5}}y_1 - \frac{3}{11\sqrt{5}}y_2 - \frac{16}{11}y_3 , \qquad
x_2 = \frac{\sqrt{5}}{11}y_2 + \frac{1}{11}y_3 , \qquad
x_3 = y_3 .
\]

The canonical form is zero for the set of values $(y_1, y_2, y_3) = (0, 0, 1)$, corresponding to which we get $(x_1, x_2, x_3) = (-\tfrac{16}{11}, \tfrac{1}{11}, 1)$, which makes the form zero.
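The reduction of Problem 01 can be verified end to end; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[5.0, 3.0, 7.0],
              [3.0, 26.0, 2.0],
              [7.0, 2.0, 10.0]])

s5 = np.sqrt(5.0)
P = np.array([[1/s5, -3/(11*s5), -16/11],
              [0.0,   s5/11,       1/11],
              [0.0,   0.0,         1.0]])

# P'AP is the canonical matrix diag(1, 1, 0).
Bcan = P.T @ A @ P
assert np.allclose(Bcan, np.diag([1.0, 1.0, 0.0]))

# The non-zero solution found in the text annihilates the form.
x = np.array([-16/11, 1/11, 1.0])
assert np.isclose(x @ A @ x, 0.0)
```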
Problem 02 :

Reduce the quadratic form $2x_1^2 + 2x_2^2 + 3x_3^2 + 4x_2x_3 + 4x_3x_1 + 2x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
2x_1^2 + 2x_2^2 + 3x_3^2 + 4x_2x_3 + 4x_3x_1 + 2x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 2 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 2 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 2 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 2 & 3 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-\tfrac{1}{2}), C_{21}(-\tfrac{1}{2})$; $R_{31}(-1), C_{31}(-1)$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & \tfrac{3}{2} & 1 \\ 0 & 1 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{2} & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{2} & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(-\tfrac{2}{3}), C_{32}(-\tfrac{2}{3})$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & \tfrac{3}{2} & 0 \\ 0 & 0 & \tfrac{1}{3} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{2} & 1 & 0 \\ -\tfrac{2}{3} & -\tfrac{2}{3} & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{2} & -\tfrac{2}{3} \\ 0 & 1 & -\tfrac{2}{3} \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{2}}), C_1(\tfrac{1}{\sqrt{2}})$; $R_2(\sqrt{\tfrac{2}{3}}), C_2(\sqrt{\tfrac{2}{3}})$; $R_3(\sqrt{3}), C_3(\sqrt{3})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{6}} & -\tfrac{2}{\sqrt{3}} \\ 0 & \sqrt{\tfrac{2}{3}} & -\tfrac{2}{\sqrt{3}} \\ 0 & 0 & \sqrt{3} \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, 1, 1)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 + y_2^2 + y_3^2$. Therefore the rank is 3, the index is 3 and the signature is 3.

Comment : Since rank = number of variables and rank = index, the quadratic form is positive definite.

Now the linear transformation is $X = PY$, i.e.

\[
x_1 = \frac{1}{\sqrt{2}}y_1 - \frac{1}{\sqrt{6}}y_2 - \frac{2}{\sqrt{3}}y_3 , \qquad
x_2 = \sqrt{\frac{2}{3}}\,y_2 - \frac{2}{\sqrt{3}}y_3 , \qquad
x_3 = \sqrt{3}\,y_3 .
\]

The canonical form is zero only for $(y_1, y_2, y_3) = (0, 0, 0)$, corresponding to which $(x_1, x_2, x_3) = (0, 0, 0)$. Since the form is positive definite, it vanishes only at the origin, so no non-zero set of values of $x_1, x_2, x_3$ makes the form zero.
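Positive definiteness of Problem 02's matrix can be confirmed independently of the reduction, since a Cholesky factorisation exists exactly for positive definite matrices; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0, 2.0],
              [1.0, 2.0, 2.0],
              [2.0, 2.0, 3.0]])

# np.linalg.cholesky succeeds only for positive definite matrices.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
assert np.all(np.linalg.eigvalsh(A) > 0)
```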
Problem 03 :

Reduce the quadratic form $5x_1^2 + 5x_2^2 + 14x_3^2 + 16x_2x_3 + 8x_3x_1 + 2x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
5x_1^2 + 5x_2^2 + 14x_3^2 + 16x_2x_3 + 8x_3x_1 + 2x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 5 & 1 & 4 \\ 1 & 5 & 8 \\ 4 & 8 & 14 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 5 & 1 & 4 \\ 1 & 5 & 8 \\ 4 & 8 & 14 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-\tfrac{1}{5}), C_{21}(-\tfrac{1}{5})$; $R_{31}(-\tfrac{4}{5}), C_{31}(-\tfrac{4}{5})$:

\[
\begin{pmatrix} 5 & 0 & 0 \\ 0 & \tfrac{24}{5} & \tfrac{36}{5} \\ 0 & \tfrac{36}{5} & \tfrac{54}{5} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{5} & 1 & 0 \\ -\tfrac{4}{5} & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{5} & -\tfrac{4}{5} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(-\tfrac{3}{2}), C_{32}(-\tfrac{3}{2})$:

\[
\begin{pmatrix} 5 & 0 & 0 \\ 0 & \tfrac{24}{5} & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{5} & 1 & 0 \\ -\tfrac{1}{2} & -\tfrac{3}{2} & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{5} & -\tfrac{1}{2} \\ 0 & 1 & -\tfrac{3}{2} \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{5}}), C_1(\tfrac{1}{\sqrt{5}})$; $R_2(\sqrt{\tfrac{5}{24}}), C_2(\sqrt{\tfrac{5}{24}})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{5}} & -\tfrac{1}{\sqrt{120}} & -\tfrac{1}{2} \\ 0 & \sqrt{\tfrac{5}{24}} & -\tfrac{3}{2} \\ 0 & 0 & 1 \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, 1, 0)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 + y_2^2$. Therefore the rank is 2, the index is 2 and the signature is 2.

Comment : Since rank < number of variables and rank = index, the quadratic form is positive semi-definite.
Now the linear transformation is $X = PY$, i.e.

\[
x_1 = \frac{1}{\sqrt{5}}y_1 - \frac{1}{\sqrt{120}}y_2 - \frac{1}{2}y_3 , \qquad
x_2 = \sqrt{\frac{5}{24}}\,y_2 - \frac{3}{2}y_3 , \qquad
x_3 = y_3 .
\]

The canonical form is zero for the set of values $(y_1, y_2, y_3) = (0, 0, 1)$, corresponding to which we get $(x_1, x_2, x_3) = (-\tfrac{1}{2}, -\tfrac{3}{2}, 1)$, which makes the form zero.
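The non-zero solution of Problem 03 lies in the null space of its matrix, which is why the semi-definite form vanishes there; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[5.0, 1.0, 4.0],
              [1.0, 5.0, 8.0],
              [4.0, 8.0, 14.0]])

# The solution found above is a null vector of A.
x = np.array([-0.5, -1.5, 1.0])
assert np.allclose(A @ x, 0.0)
assert np.isclose(x @ A @ x, 0.0)
assert np.linalg.matrix_rank(A) == 2
```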
Problem 04 :

Reduce the quadratic form $2x_1^2 + 3x_3^2 + 4x_2x_3 - 2x_3x_1 + 6x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
2x_1^2 + 3x_3^2 + 4x_2x_3 - 2x_3x_1 + 6x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 2 & 3 & -1 \\ 3 & 0 & 2 \\ -1 & 2 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 2 & 3 & -1 \\ 3 & 0 & 2 \\ -1 & 2 & 3 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-\tfrac{3}{2}), C_{21}(-\tfrac{3}{2})$; $R_{31}(\tfrac{1}{2}), C_{31}(\tfrac{1}{2})$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & -\tfrac{9}{2} & \tfrac{7}{2} \\ 0 & \tfrac{7}{2} & \tfrac{5}{2} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{3}{2} & 1 & 0 \\ \tfrac{1}{2} & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{3}{2} & \tfrac{1}{2} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(\tfrac{7}{9}), C_{32}(\tfrac{7}{9})$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & -\tfrac{9}{2} & 0 \\ 0 & 0 & \tfrac{47}{9} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{3}{2} & 1 & 0 \\ -\tfrac{2}{3} & \tfrac{7}{9} & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{3}{2} & -\tfrac{2}{3} \\ 0 & 1 & \tfrac{7}{9} \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{2}}), C_1(\tfrac{1}{\sqrt{2}})$; $R_2(\tfrac{\sqrt{2}}{3}), C_2(\tfrac{\sqrt{2}}{3})$; $R_3(\tfrac{3}{\sqrt{47}}), C_3(\tfrac{3}{\sqrt{47}})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & -\tfrac{2}{\sqrt{47}} \\ 0 & \tfrac{\sqrt{2}}{3} & \tfrac{7}{3\sqrt{47}} \\ 0 & 0 & \tfrac{3}{\sqrt{47}} \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, -1, 1)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 - y_2^2 + y_3^2$. Therefore the rank is 3, the index is 2 and the signature is 1.

Comment : Since rank = number of variables and rank > index, the quadratic form is indefinite.

Now the linear transformation is $X = PY$, i.e.

\[
x_1 = \frac{1}{\sqrt{2}}y_1 - \frac{1}{\sqrt{2}}y_2 - \frac{2}{\sqrt{47}}y_3 , \qquad
x_2 = \frac{\sqrt{2}}{3}y_2 + \frac{7}{3\sqrt{47}}y_3 , \qquad
x_3 = \frac{3}{\sqrt{47}}y_3 .
\]

The canonical form is zero for the set of values $(y_1, y_2, y_3) = (1, 1, 0)$, corresponding to which we get $(x_1, x_2, x_3) = (0, \tfrac{\sqrt{2}}{3}, 0)$, which makes the form zero.
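The signs of the eigenvalues of Problem 04's matrix agree with the inertia found by the reduction (two positive, one negative), and the solution found above annihilates the form; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 3.0, -1.0],
              [3.0, 0.0, 2.0],
              [-1.0, 2.0, 3.0]])

# Rank 3, index 2: two positive and one negative eigenvalue.
ev = np.linalg.eigvalsh(A)
assert np.sum(ev > 0) == 2 and np.sum(ev < 0) == 1

x = np.array([0.0, np.sqrt(2.0) / 3.0, 0.0])   # non-zero vector found above
assert np.isclose(x @ A @ x, 0.0)
```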
Problem 05 :

Reduce the quadratic form $x_1^2 + 2x_2^2 + 3x_3^2 + 2x_2x_3 - 2x_3x_1 + 2x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
x_1^2 + 2x_2^2 + 3x_3^2 + 2x_2x_3 - 2x_3x_1 + 2x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 1 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 3 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-1), C_{21}(-1)$; $R_{31}(1), C_{31}(1)$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 2 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(-2), C_{32}(-2)$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 3 & -2 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -1 & 3 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_3(\tfrac{1}{\sqrt{2}}), C_3(\tfrac{1}{\sqrt{2}})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} 1 & -1 & \tfrac{3}{\sqrt{2}} \\ 0 & 1 & -\sqrt{2} \\ 0 & 0 & \tfrac{1}{\sqrt{2}} \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, 1, -1)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 + y_2^2 - y_3^2$. Therefore the rank is 3, the index is 2 and the signature is 1.

Comment : Since rank = number of variables and rank > index, the quadratic form is indefinite.

Now the linear transformation is $X = PY$, i.e.

\[
x_1 = y_1 - y_2 + \frac{3}{\sqrt{2}}y_3 , \qquad
x_2 = y_2 - \sqrt{2}\,y_3 , \qquad
x_3 = \frac{1}{\sqrt{2}}y_3 .
\]

The canonical form is zero for the set of values $(y_1, y_2, y_3) = (1, 0, 1)$, corresponding to which we get $(x_1, x_2, x_3) = \big(1 + \tfrac{3}{\sqrt{2}}, -\sqrt{2}, \tfrac{1}{\sqrt{2}}\big)$, which makes the form zero.
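Indefiniteness of Problem 05's matrix and the zero of the form can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0],
              [1.0, 2.0, 1.0],
              [-1.0, 1.0, 3.0]])

# Indefinite: eigenvalues of both signs.
ev = np.linalg.eigvalsh(A)
assert ev[0] < 0 < ev[-1]

# The non-zero x found above makes the form vanish.
s2 = np.sqrt(2.0)
x = np.array([1 + 3/s2, -s2, 1/s2])
assert np.isclose(x @ A @ x, 0.0)
```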
Problem 06 :

Reduce the quadratic form $2x_1^2 + 2x_2^2 + 2x_3^2 - 2x_2x_3 + 2x_3x_1 + 2x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of $x_1, x_2, x_3$ which makes the form zero.

Solution :

In matrix notation the quadratic form can be written as

\[
2x_1^2 + 2x_2^2 + 2x_3^2 - 2x_2x_3 + 2x_3x_1 + 2x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & -1 & 2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & -1 & 2 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-\tfrac{1}{2}), C_{21}(-\tfrac{1}{2})$; $R_{31}(-\tfrac{1}{2}), C_{31}(-\tfrac{1}{2})$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & \tfrac{3}{2} & -\tfrac{3}{2} \\ 0 & -\tfrac{3}{2} & \tfrac{3}{2} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{2} & 1 & 0 \\ -\tfrac{1}{2} & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(1), C_{32}(1)$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & \tfrac{3}{2} & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -\tfrac{1}{2} & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -\tfrac{1}{2} & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{2}}), C_1(\tfrac{1}{\sqrt{2}})$; $R_2(\sqrt{\tfrac{2}{3}}), C_2(\sqrt{\tfrac{2}{3}})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{6}} & -1 \\ 0 & \sqrt{\tfrac{2}{3}} & 1 \\ 0 & 0 & 1 \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, 1, 0)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 + y_2^2$. Therefore the rank is 2, the index is 2 and the signature is 2.

Comment : Since rank < number of variables and rank = index, the quadratic form is positive semi-definite.

Now the linear transformation is $X = PY$, i.e.

\[
x_1 = \frac{1}{\sqrt{2}}y_1 - \frac{1}{\sqrt{6}}y_2 - y_3 , \qquad
x_2 = \sqrt{\frac{2}{3}}\,y_2 + y_3 , \qquad
x_3 = y_3 .
\]

The canonical form is zero for the set of values $(y_1, y_2, y_3) = (0, 0, 1)$, corresponding to which we get $(x_1, x_2, x_3) = (-1, 1, 1)$, which makes the form zero.
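Positive semi-definiteness of rank 2 for Problem 06 can be confirmed from the eigenvalues, and the solution above is checked to annihilate the form; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, -1.0],
              [1.0, -1.0, 2.0]])

# Positive semi-definite of rank 2: eigenvalues >= 0, the smallest is zero.
ev = np.linalg.eigvalsh(A)
assert np.all(ev > -1e-12)
assert np.isclose(ev[0], 0.0)

x = np.array([-1.0, 1.0, 1.0])   # the vector found above
assert np.isclose(x @ A @ x, 0.0)
```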
Problem 07 :

Reduce the quadratic form $2x_1^2 + x_2^2 + 3x_3^2 + 8x_2x_3 + 4x_3x_1 + 12x_1x_2$ to canonical form and determine its rank, index and signature and comment on the definiteness.

Solution :

In matrix notation the quadratic form can be written as

\[
2x_1^2 + x_2^2 + 3x_3^2 + 8x_2x_3 + 4x_3x_1 + 12x_1x_2
= \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
\begin{pmatrix} 2 & 6 & 2 \\ 6 & 1 & 4 \\ 2 & 4 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= X'AX ,
\]

where $A = \begin{pmatrix} 2 & 6 & 2 \\ 6 & 1 & 4 \\ 2 & 4 & 3 \end{pmatrix}$ and $X = (x_1, x_2, x_3)'$.

To effect the congruent reduction of $A$ we write $A = IAI$, i.e. we are to find the matrix $B = P'AP$ which is congruent to $A$, where $P$ is a non-singular matrix.

Applying the elementary congruent transformations $R_{21}(-3), C_{21}(-3)$; $R_{31}(-1), C_{31}(-1)$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & -17 & -2 \\ 0 & -2 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -3 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_{32}(-\tfrac{2}{17}), C_{32}(-\tfrac{2}{17})$:

\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & -17 & 0 \\ 0 & 0 & \tfrac{21}{17} \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ -\tfrac{11}{17} & -\tfrac{2}{17} & 1 \end{pmatrix}
A
\begin{pmatrix} 1 & -3 & -\tfrac{11}{17} \\ 0 & 1 & -\tfrac{2}{17} \\ 0 & 0 & 1 \end{pmatrix}
\]

$R_1(\tfrac{1}{\sqrt{2}}), C_1(\tfrac{1}{\sqrt{2}})$; $R_2(\tfrac{1}{\sqrt{17}}), C_2(\tfrac{1}{\sqrt{17}})$; $R_3(\sqrt{\tfrac{17}{21}}), C_3(\sqrt{\tfrac{17}{21}})$:

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= P'AP ,
\qquad
P = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{3}{\sqrt{17}} & -\tfrac{11}{\sqrt{357}} \\ 0 & \tfrac{1}{\sqrt{17}} & -\tfrac{2}{\sqrt{357}} \\ 0 & 0 & \tfrac{17}{\sqrt{357}} \end{pmatrix}.
\]

Thus $X'AX$ reduces to $Y'BY$, where $B = \mathrm{diag}(1, -1, 1)$ and $Y = (y_1, y_2, y_3)'$.

So the canonical form is $y_1^2 - y_2^2 + y_3^2$. Therefore the rank is 3, the index is 2 and the signature is 1.

Comment : Since rank = number of variables and rank > index, the quadratic form is indefinite.
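The symmetric row-and-column elimination used throughout these problems can be written as a short routine; the sketch below (assuming NumPy, and assuming the pivots stay non-zero as they do here) applies it to Problem 07's matrix and recovers the diagonal entries of the reduction.

```python
import numpy as np

A = np.array([[2.0, 6.0, 2.0],
              [6.0, 1.0, 4.0],
              [2.0, 4.0, 3.0]])

D = A.copy()
P = np.eye(3)
for k in range(2):
    for i in range(k + 1, 3):
        m = -D[i, k] / D[k, k]
        D[i, :] += m * D[k, :]   # row operation R_ik(m)
        D[:, i] += m * D[:, k]   # matching column operation C_ik(m)
        P[:, i] += m * P[:, k]   # accumulate the congruence matrix

# D = P'AP is diagonal with the pivots of the reduction.
assert np.allclose(P.T @ A @ P, D)
assert np.allclose(np.diag(D), [2.0, -17.0, 21.0 / 17.0])

# One negative pivot: rank 3, index 2, so the form is indefinite.
assert np.sum(np.diag(D) > 0) == 2
```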