1. Vector

Professor Biplab Bhattacharjee
Dept. of Statistics, Govt. B M College, Barishal

Vector:
An $n$-component vector $X$ is an ordered $n$-tuple of numbers, written as a row $(x_1, x_2, \ldots, x_n)$ or as a column $(x_1, x_2, \ldots, x_n)'$. The numbers $x_1, x_2, \ldots, x_n$ are called the components of the vector. For example, $X = (2, 0, 1, 3)$ is a 4-vector.

Vector Space:
The set of all $n$-vectors over a field $F$ is called the $n$-vector space over $F$. It is usually denoted by $V_n(F)$, or simply $V_n$ if $F$ is known.

Sub-space of an n-Vector Space:
A set $S$ of vectors is a sub-space of the $n$-vector space $V_n$ if it is closed under the operations of addition and scalar multiplication. Equivalently, a non-empty subset $S$ of $V_n(F)$ is called a sub-space of $V_n(F)$ if
1. whenever $x_1$ and $x_2$ are members of $S$, then $x_1 + x_2$ is also a member of $S$, i.e. $x_1 \in S$ and $x_2 \in S \Rightarrow x_1 + x_2 \in S$;
2. whenever $X$ is a member of $S$ and $k$ is a member of $F$, then $kX$ is also a member of $S$, i.e. $X \in S \Rightarrow kX \in S$, $k$ being any scalar.

Sub-space spanned by a set of vectors:
A set of vectors $\{x_1, x_2, \ldots, x_k\}$ from $V_n(R)$ is said to span, or generate, $V_n(R)$ if every vector in $V_n(R)$ can be written as a linear combination of $x_1, x_2, \ldots, x_k$. The set of all linear combinations of $x_1, x_2, \ldots, x_k$ is called the sub-space spanned by the set of vectors.

Basis of a sub-space:
A set of vectors is said to be a basis of a sub-space if
1. the sub-space is spanned by the set, and
2. the set is linearly independent.
For example, the vectors $e_1 = (1, 0, \ldots, 0)$, $e_2 = (0, 1, \ldots, 0)$, \ldots, $e_n = (0, 0, \ldots, 1)$ constitute a basis of $V_n$.

Dimension of a sub-space:
The number of vectors in any basis of a sub-space is called the dimension of the sub-space.

Linear combination of a set of vectors:
A vector $X$ which can be expressed in the form $X = k_1 x_1 + k_2 x_2 + \cdots + k_r x_r$ is said to be a linear combination of the set of vectors $\{x_1, x_2, \ldots, x_r\}$; here $k_1, k_2, \ldots, k_r$ are any scalars.


Example: Let $x_1 = (2, 3, 4, 7)$, $x_2 = (0, 0, 0, 1)$, $x_3 = (1, 0, 1, 0)$ be vectors and $1, 2, 3$ be scalars. Then their linear combination is defined to be the vector
$X = 1x_1 + 2x_2 + 3x_3 = 1(2, 3, 4, 7) + 2(0, 0, 0, 1) + 3(1, 0, 1, 0) = (5, 3, 7, 9)$.

Linearly independent set of vectors:
A set of $r$ vectors $x_1, x_2, \ldots, x_r$ is said to be linearly independent if the only scalars $k_1, k_2, \ldots, k_r$ satisfying $k_1 x_1 + k_2 x_2 + \cdots + k_r x_r = 0$ are $k_1 = k_2 = \cdots = k_r = 0$; here $0$ denotes the $n$-vector whose components are all zero.
Example: The vectors $x_1 = (1, 0, 0)$ and $x_2 = (0, 1, 0)$ are linearly independent, since $k_1 x_1 + k_2 x_2 = 0$ implies $k_1 = k_2 = 0$.

Linearly dependent set of vectors:
A set of $r$ vectors $x_1, x_2, \ldots, x_r$ is said to be linearly dependent if there exist $r$ scalars $k_1, k_2, \ldots, k_r$, not all zero, such that $k_1 x_1 + k_2 x_2 + \cdots + k_r x_r = 0$, where $0$ denotes the $n$-vector whose components are all zero.
Example: The vectors $x_1 = (1, 2, 4)$ and $x_2 = (3, 6, 12)$ are linearly dependent, since $k_1 x_1 + k_2 x_2 = 0$ holds for $k_1 = -3$, $k_2 = 1$, which are not all zero.

Inner product of vectors:
Let $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$ be two $n$-vectors. Their inner product $X \cdot Y$ is obtained by adding the products of the corresponding elements of $X$ and $Y$, i.e.
$X \cdot Y = X'Y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$.

Length of a vector:
If $X$ is a vector of $V_n(R)$, then the positive square root of the inner product of $X$ with itself is defined as the length of the vector $X$ and is denoted by $\lVert X \rVert$.
Thus, if $X = (x_1, x_2, \ldots, x_n)$ then $\lVert X \rVert = \sqrt{X \cdot X} = \sqrt{X'X} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$.
Example: Let $X = (1, 2, 2)$; then $\lVert X \rVert = \sqrt{1^2 + 2^2 + 2^2} = \sqrt{1 + 4 + 4} = \sqrt{9} = 3$.
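The inner product and the length can be checked directly in NumPy (a small illustrative sketch using the example vector above):

```python
import numpy as np

X = np.array([1, 2, 2])
length = np.sqrt(X @ X)       # inner product of X with itself, then square root
print(length)                 # 3.0
print(np.linalg.norm(X))      # same result using the built-in Euclidean norm
```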

Unit vector or Normal vector:
A vector $X$ is said to be a unit vector or normal vector if $\lVert X \rVert = 1$. In other words, a vector with length 1 is called a unit vector or normal vector.
Example: $X = \left(\tfrac{1}{3}, \tfrac{2}{3}, \tfrac{2}{3}\right)$ is a unit vector, since $\lVert X \rVert = \sqrt{\left(\tfrac{1}{3}\right)^2 + \left(\tfrac{2}{3}\right)^2 + \left(\tfrac{2}{3}\right)^2} = \sqrt{\tfrac{9}{9}} = 1$.

Orthogonal vectors:
A vector $X_1$ is said to be orthogonal to a vector $X_2$ if $X_1 \cdot X_2 = X_1'X_2 = 0$.
A set $S$ of $n$-vectors $X_1, X_2, \ldots, X_n$ is said to be an orthogonal set if any two distinct vectors in $S$ are orthogonal.


Example: The vectors $X_1 = (1, 2, 1)$ and $X_2 = (2, 3, -8)$ are orthogonal, for their inner product is
$X_1'X_2 = (1)(2) + (2)(3) + (1)(-8) = 2 + 6 - 8 = 0$.

Orthonormal vectors:
A set of $n$-vectors $X_1, X_2, \ldots, X_k$ is said to be an orthonormal set of vectors if
(i) each vector in the set is a unit vector, and
(ii) any two distinct vectors in the set are orthogonal.
Example: The vectors $X_1 = \left(\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}\right)$, $X_2 = \left(\tfrac{1}{\sqrt{6}}, -\tfrac{2}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}\right)$, $X_3 = \left(-\tfrac{1}{\sqrt{2}}, 0, \tfrac{1}{\sqrt{2}}\right)$ form an orthonormal set of vectors, since $\lVert X_1 \rVert = \lVert X_2 \rVert = \lVert X_3 \rVert = 1$ and $X_1 \cdot X_2 = X_2 \cdot X_3 = X_3 \cdot X_1 = 0$.

Mutually orthogonal:
Two vectors $x_1$ and $x_2$ of $V_n(R)$ are called orthogonal if their inner product $x_1'x_2 = x_2'x_1$ is zero, and a set of vectors is said to be a mutually orthogonal set of vectors if every pair of them is mutually orthogonal.
Example: $x_1 = \left(-\tfrac{1}{\sqrt{6}}, \tfrac{2}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}\right)$, $x_2 = \left(-\tfrac{1}{3}, \tfrac{2}{3}, \tfrac{2}{3}\right)$ and $x_3 = \left(\tfrac{2}{3}, -\tfrac{1}{3}, \tfrac{2}{3}\right)$ form an orthogonal set of vectors, since $x_1'x_2 = x_2'x_3 = x_3'x_1 = 0$.

Orthogonal basis:
A basis of $V_n^m(R)$ formed by mutually orthogonal vectors is called an orthogonal basis of the space $V_n^m(R)$.
e.g. $X_1 = (1, -1, 1)$ and $X_2 = \tfrac{1}{3}(-1, 0, 1)$ form an orthogonal basis of $V_3^2(R)$.

Orthonormal basis:
In an orthogonal basis, if the mutually orthogonal vectors are also normal vectors, then the basis is called an orthonormal basis or a normal orthogonal basis. The elementary vectors form an orthonormal basis of $V_n(R)$.


The non-zero vectors $X_1, X_2, \ldots, X_n$ are linearly dependent if and only if one of the vectors, say $X_k$, is a linear combination of the preceding vectors $X_1, X_2, \ldots, X_{k-1}$.

Proof:
If $X_k = \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_{k-1} X_{k-1}$,
then $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_{k-1} X_{k-1} - X_k = 0$,
i.e. $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_{k-1} X_{k-1} - X_k + 0\cdot X_{k+1} + 0\cdot X_{k+2} + \cdots + 0\cdot X_n = 0$.
Since the coefficient of $X_k$ is $-1 \neq 0$, the coefficients are not all zero.
Hence the vectors $X_1, X_2, \ldots, X_n$ are linearly dependent.
Conversely,
suppose that the vectors $X_1, X_2, \ldots, X_n$ are linearly dependent.
Then $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_n X_n = 0$, where the scalars $\alpha_i$ are not all zero.
Let $k$ be the largest index for which $\alpha_k \neq 0$. Then
$\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k + 0\cdot X_{k+1} + 0\cdot X_{k+2} + \cdots + 0\cdot X_n = 0$
$\Rightarrow \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k = 0$.
Now if $k = 1$, this implies $\alpha_1 X_1 = 0$ with $\alpha_1 \neq 0$,
so that $X_1 = 0$, which gives a contradiction, since $X_1$ is a non-zero vector. Hence $k > 1$.
We may write $X_k = -\left(\dfrac{\alpha_1}{\alpha_k}\right) X_1 - \left(\dfrac{\alpha_2}{\alpha_k}\right) X_2 - \cdots - \left(\dfrac{\alpha_{k-1}}{\alpha_k}\right) X_{k-1}$,
so that $X_k$ is a linear combination of $X_1, X_2, \ldots, X_{k-1}$.
Hence the proof.

The vectors $X_1, X_2, \ldots, X_n$ are linearly dependent if and only if one of them can be expressed as a linear combination of the others.

Proof:
Suppose that the vectors $X_1, X_2, \ldots, X_n$ are linearly dependent.
Then $\alpha_1 X_1 + \cdots + \alpha_k X_k + \alpha_{k+1} X_{k+1} + \cdots + \alpha_n X_n = 0$, where the $\alpha_i$ are not all zero.
Let the coefficient of $X_k$ be non-zero, i.e. $\alpha_k \neq 0$.
Then $\alpha_k X_k = -\alpha_1 X_1 - \cdots - \alpha_{k-1} X_{k-1} - \alpha_{k+1} X_{k+1} - \cdots - \alpha_n X_n$
$\Rightarrow X_k = -\left(\dfrac{\alpha_1}{\alpha_k}\right) X_1 - \cdots - \left(\dfrac{\alpha_{k-1}}{\alpha_k}\right) X_{k-1} - \left(\dfrac{\alpha_{k+1}}{\alpha_k}\right) X_{k+1} - \cdots - \left(\dfrac{\alpha_n}{\alpha_k}\right) X_n$,
so that $X_k$ is a linear combination of the set of vectors $X_1, \ldots, X_{k-1}, X_{k+1}, \ldots, X_n$.
Conversely,
suppose that $X_k$ is a linear combination of the set of vectors $X_1, \ldots, X_{k-1}, X_{k+1}, \ldots, X_n$, say
$X_k = \alpha_1 X_1 + \cdots + \alpha_{k-1} X_{k-1} + \alpha_{k+1} X_{k+1} + \cdots + \alpha_n X_n$
$\Rightarrow \alpha_1 X_1 + \cdots + \alpha_{k-1} X_{k-1} + (-1)X_k + \alpha_{k+1} X_{k+1} + \cdots + \alpha_n X_n = 0$.
Even if all the coefficients of $X_1, \ldots, X_{k-1}, X_{k+1}, \ldots, X_n$ are zero, the coefficient of $X_k$, which is $-1$, is not zero. So the vectors $X_1, X_2, \ldots, X_n$ are linearly dependent.
Hence the proof.

If the set of vectors $\{X_1, X_2, \ldots, X_m\}$ is linearly independent and the set of vectors $\{X_1, X_2, \ldots, X_m, X\}$ is linearly dependent, then $X$ is a linear combination of the vectors $X_1, X_2, \ldots, X_m$.

Proof:
Since the set $\{X_1, X_2, \ldots, X_m, X\}$ is linearly dependent, there exist scalars $\alpha_1, \alpha_2, \ldots, \alpha_m, \alpha$, not all zero, such that
$\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_m X_m + \alpha X = 0$ ......... (i)
If $\alpha = 0$, then at least one of the $\alpha_i$ is not zero, and (i) reduces to $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_m X_m = 0$ with the $\alpha_i$ not all zero. This contradicts the linear independence of the set $\{X_1, X_2, \ldots, X_m\}$. Hence $\alpha \neq 0$.
Thus from (i) we get $\alpha X = -\alpha_1 X_1 - \alpha_2 X_2 - \cdots - \alpha_m X_m$
$\Rightarrow X = -\left(\dfrac{\alpha_1}{\alpha}\right) X_1 - \left(\dfrac{\alpha_2}{\alpha}\right) X_2 - \cdots - \left(\dfrac{\alpha_m}{\alpha}\right) X_m$,
so that $X$ is a linear combination of the vectors $X_1, X_2, \ldots, X_m$.
Hence the proof.

Every subset of a linearly independent set is also linearly independent.

Proof:
Let $\{X_1, X_2, \ldots, X_k\}$ be a subset of a linearly independent set $\{X_1, X_2, \ldots, X_k, X_{k+1}, \ldots, X_n\}$.
Suppose $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k = 0$ ......... (i)
Then also $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k + 0\cdot X_{k+1} + \cdots + 0\cdot X_n = 0$.
Since the full set $\{X_1, X_2, \ldots, X_n\}$ is linearly independent, all the coefficients must be zero, i.e. $\alpha_1 = 0, \alpha_2 = 0, \ldots, \alpha_k = 0$.
This implies that the given subset $\{X_1, X_2, \ldots, X_k\}$ is also linearly independent.
Hence the proof.

Every superset of a linearly dependent set is also linearly dependent.

Proof:
Suppose that $\{X_1, X_2, \ldots, X_k, X_{k+1}, \ldots, X_n\}$ is a superset of a linearly dependent set $\{X_1, X_2, \ldots, X_k\}$.
Then by the definition of a linearly dependent set,
$\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k = 0$ ......... (i)
where the coefficients $\alpha_i$ are not all zero.
We can rewrite (i) as $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k + 0\cdot X_{k+1} + 0\cdot X_{k+2} + \cdots + 0\cdot X_n = 0$,
and here also the coefficients are not all zero.
Hence the superset $\{X_1, X_2, \ldots, X_k, X_{k+1}, \ldots, X_n\}$ is also linearly dependent.
Hence the proof.


A set of non-zero orthogonal vectors is always a linearly independent set.

Proof:
Let $\{X_1, X_2, \ldots, X_k\}$ be a set of non-zero orthogonal vectors,
i.e. $X_i \cdot X_j = 0$ for all $i \neq j$.
Consider a relation $\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k = 0$ ......... (i)
Taking the inner product of both sides of (i) with $X_1$ we get
$(\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_k X_k) \cdot X_1 = 0 \cdot X_1$
$\Rightarrow \alpha_1 X_1'X_1 + \alpha_2 X_2'X_1 + \cdots + \alpha_k X_k'X_1 = 0$
$\Rightarrow \alpha_1 X_1'X_1 = 0$
$\Rightarrow \alpha_1 = 0 \quad [\because X_1'X_1 \neq 0]$
Similarly, taking inner products of both sides of (i) with $X_2, X_3, \ldots, X_k$ we can show that $\alpha_2 = 0, \alpha_3 = 0, \ldots, \alpha_k = 0$.
$\therefore \{X_1, X_2, \ldots, X_k\}$ is a linearly independent set.
Hence the proof.
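Numerically, pairwise orthogonality shows up as a diagonal Gram matrix, and full rank confirms linear independence. A sketch using the (unnormalized) vectors from the orthonormal example above:

```python
import numpy as np

# Non-zero, pairwise orthogonal vectors.
X1 = np.array([1.0, 1.0, 1.0])
X2 = np.array([1.0, -2.0, 1.0])
X3 = np.array([-1.0, 0.0, 1.0])

V = np.column_stack([X1, X2, X3])
print(V.T @ V)                   # Gram matrix: diagonal, since the vectors are orthogonal
print(np.linalg.matrix_rank(V))  # 3 -> the three vectors are linearly independent
```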

The set of unit vectors $e_1 = (1, 0, 0, \ldots, 0)$, $e_2 = (0, 1, 0, \ldots, 0)$, \ldots, $e_n = (0, 0, 0, \ldots, 1)$ is linearly independent.

Proof:
Given $e_1 = (1, 0, 0, \ldots, 0)$, $e_2 = (0, 1, 0, \ldots, 0)$, \ldots, $e_n = (0, 0, 0, \ldots, 1)$, consider the relation
$\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n = 0$
$\Rightarrow \alpha_1 (1, 0, 0, \ldots, 0) + \alpha_2 (0, 1, 0, \ldots, 0) + \cdots + \alpha_n (0, 0, 0, \ldots, 1) = (0, 0, 0, \ldots, 0)$
$\Rightarrow (\alpha_1, \alpha_2, \ldots, \alpha_n) = (0, 0, \ldots, 0)$
$\Rightarrow \alpha_1 = 0, \alpha_2 = 0, \ldots, \alpha_n = 0$
$\therefore \{e_1, e_2, \ldots, e_n\}$ is linearly independent.
Hence the proof.

A basis can always be selected from a set of vectors which generates a vector space.
Proof:
Let $\{X_1, X_2, \ldots, X_k\}$ be a set of vectors which generates $V_n(R)$.
If this set is linearly independent, then it is already a basis.
If not, then some member of the set is a linear combination of the others. Deleting this member, we obtain another set which also generates the vector space.
If this new set is linearly independent, then it is a basis.
If not, we repeat the same process until we arrive at a basis of the vector space.
Hence the proof.


Gram-Schmidt orthogonalization process of vectors

Orthogonalization:
Orthogonalization is a process under which we reduce a non-orthogonal set of vectors to an orthogonal set of vectors.

Gram-Schmidt orthogonalization:
Let $\{X_1, X_2, \ldots, X_m\}$ be a set of non-orthogonal vectors. We shall reduce this set of vectors to an orthogonal set of vectors $\{Y_1, Y_2, \ldots, Y_m\}$ by the Gram-Schmidt process.
Let $Y_1 = X_1$ and $Y_2 = X_2 - aY_1$.
Since $Y_1$ and $Y_2$ are to be orthogonal,
$Y_1 \cdot Y_2 = 0$
$\Rightarrow Y_1 \cdot (X_2 - aY_1) = 0$
$\Rightarrow Y_1 \cdot X_2 - a\,Y_1 \cdot Y_1 = 0$
$\Rightarrow a = \dfrac{Y_1 \cdot X_2}{Y_1 \cdot Y_1}$
$\therefore Y_2 = X_2 - \dfrac{Y_1 \cdot X_2}{Y_1 \cdot Y_1}\,Y_1$.

Next let $Y_3 = X_3 - aY_2 - bY_1$. Since $Y_1, Y_2, Y_3$ are to be mutually orthogonal,
$Y_1 \cdot Y_3 = 0 \Rightarrow Y_1 \cdot X_3 - a\,Y_1 \cdot Y_2 - b\,Y_1 \cdot Y_1 = 0 \Rightarrow b = \dfrac{Y_1 \cdot X_3}{Y_1 \cdot Y_1}$,
$Y_2 \cdot Y_3 = 0 \Rightarrow Y_2 \cdot X_3 - a\,Y_2 \cdot Y_2 - b\,Y_2 \cdot Y_1 = 0 \Rightarrow a = \dfrac{Y_2 \cdot X_3}{Y_2 \cdot Y_2}$.
$\therefore Y_3 = X_3 - \dfrac{Y_2 \cdot X_3}{Y_2 \cdot Y_2}\,Y_2 - \dfrac{Y_1 \cdot X_3}{Y_1 \cdot Y_1}\,Y_1$.

Similarly, $Y_4 = X_4 - \dfrac{Y_3 \cdot X_4}{Y_3 \cdot Y_3}\,Y_3 - \dfrac{Y_2 \cdot X_4}{Y_2 \cdot Y_2}\,Y_2 - \dfrac{Y_1 \cdot X_4}{Y_1 \cdot Y_1}\,Y_1$.

Continuing the process until $Y_m$ is obtained,
$Y_m = X_m - \dfrac{Y_{m-1} \cdot X_m}{Y_{m-1} \cdot Y_{m-1}}\,Y_{m-1} - \cdots - \dfrac{Y_2 \cdot X_m}{Y_2 \cdot Y_2}\,Y_2 - \dfrac{Y_1 \cdot X_m}{Y_1 \cdot Y_1}\,Y_1$.

Hence $Y_1, Y_2, \ldots, Y_m$ form an orthogonal basis of $V_n^m(R)$.
The vectors $G_i = \dfrac{Y_i}{\lVert Y_i \rVert}$, $i = 1, 2, \ldots, m$, are normal and mutually orthogonal,
so the vectors $G_1, G_2, \ldots, G_m$ form an orthonormal basis of $V_n^m(R)$.
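A direct translation of this process into NumPy might look as follows (a sketch, not part of the original notes; it assumes the input rows are linearly independent):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the rows of X by the Gram-Schmidt process described above.

    Returns (Y, G): Y has mutually orthogonal rows, G has orthonormal rows.
    Assumes the rows of X are linearly independent.
    """
    X = np.asarray(X, dtype=float)
    Y = []
    for x in X:
        y = x.copy()
        for y_prev in Y:
            # subtract the projection of x onto each previously built vector
            y -= (y_prev @ x) / (y_prev @ y_prev) * y_prev
        Y.append(y)
    Y = np.array(Y)
    G = Y / np.linalg.norm(Y, axis=1, keepdims=True)  # normalize to unit length
    return Y, G

if __name__ == "__main__":
    X = np.array([[8, 18, 17, 32],
                  [30, 22, 14, 27],
                  [10, 14, 48, 6],
                  [31, 49, 6, 8]])
    Y, G = gram_schmidt(X)
    print(np.round(G @ G.T, 6))  # approximately the identity matrix
```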


Problem:
Four vectors are as follows:
$\alpha_1 = (8, 18, 17, 32)$
$\alpha_2 = (30, 22, 14, 27)$
$\alpha_3 = (10, 14, 48, 6)$
$\alpha_4 = (31, 49, 6, 8)$
Examine whether the vectors are independent or not.

Solution:
We have to examine whether the vectors $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are independent or dependent.
Let us consider the relation
$k_1 \alpha_1 + k_2 \alpha_2 + k_3 \alpha_3 + k_4 \alpha_4 = 0$
$\Rightarrow k_1(8, 18, 17, 32) + k_2(30, 22, 14, 27) + k_3(10, 14, 48, 6) + k_4(31, 49, 6, 8) = 0$
Comparing components, this gives the homogeneous system
$8k_1 + 30k_2 + 10k_3 + 31k_4 = 0$
$18k_1 + 22k_2 + 14k_3 + 49k_4 = 0$
$17k_1 + 14k_2 + 48k_3 + 6k_4 = 0$
$32k_1 + 27k_2 + 6k_3 + 8k_4 = 0$


8 18 17 32 
 30 
 91 199  R21  
8 30 10 31  k1  0  0 93   k1  0   8 
18 22 14 49   k2  0  2 4   k  0 
  10 
     17 107  2  R31  
17 14 48 6   k3  0 0 34   k3  0   8 
      2 4   k  0 
32 27 6 8   k4   0
 83 479  4    31 
R41  
0 116   8 
 4 8 

8 18 17 32 
 91 199 
0 93   k1   0 
 2 4   k  0 
 17 
  1513      
2
3280 R32  
0 0   k3   0   91 
 91 91     
 3384
k
13393   4   
0  83 
R42  
0 0   182 
 91 182 

8 18 17 32 
 91 199 
0 93   k1  0 
 2 4   k  0 
  1513      
2
3280
0 0   k3   0 
 91 91       423 
k 0 R43  
 18602   4     410 
0 0 0 
 205 

8k1  30k2  10 k3  31k4  0


91 199
k2  k3  93k4  0
2 4
 3280 1513
k3  k4  0
91 91
18602
 k4  0
205

 k4 0

So that , k1  0, k2  0, k3  0

Since all the scallers are zero, so that the given vectors are
linearly independent.
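The same conclusion can be confirmed numerically: the matrix whose rows are $\alpha_1, \ldots, \alpha_4$ has rank 4, equivalently a non-zero determinant (a quick NumPy check):

```python
import numpy as np

alphas = np.array([[8, 18, 17, 32],
                   [30, 22, 14, 27],
                   [10, 14, 48, 6],
                   [31, 49, 6, 8]])

# rank 4 means the only solution of k1*a1 + ... + k4*a4 = 0 is k = 0
print(np.linalg.matrix_rank(alphas))   # 4 -> linearly independent
print(np.linalg.det(alphas) != 0)      # equivalently, a non-zero determinant
```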


Problem:
Given the vectors $X_1 = (-1, 0, 2, 1)$ and $X_2 = (2, 3, 1, 5)$, compute
(i) $\lVert X_1 \rVert$ and $\lVert X_2 \rVert$
(ii) $X_1 \cdot X_2$
(iii) $d(X_1, X_2)$
(iv) the angle between $X_1$ and $X_2$
(v) Show that $\lVert X_1 + X_2 \rVert^2 + \lVert X_1 - X_2 \rVert^2 = 2\lVert X_1 \rVert^2 + 2\lVert X_2 \rVert^2$.

Solution:
(i) The length of $X_1$ is $\lVert X_1 \rVert = \sqrt{X_1'X_1} = \sqrt{(-1)^2 + 0^2 + 2^2 + 1^2} = \sqrt{6}$.
The length of $X_2$ is $\lVert X_2 \rVert = \sqrt{X_2'X_2} = \sqrt{2^2 + 3^2 + 1^2 + 5^2} = \sqrt{39}$.

(ii) The inner product $X_1 \cdot X_2 = (-1)(2) + (0)(3) + (2)(1) + (1)(5) = 5$.

(iii) The distance $d(X_1, X_2) = \lVert X_1 - X_2 \rVert = \lVert (-1, 0, 2, 1) - (2, 3, 1, 5) \rVert = \lVert (-3, -3, 1, -4) \rVert$
$= \sqrt{(-3)^2 + (-3)^2 + 1^2 + (-4)^2} = \sqrt{35}$.


(iv) The angle $\theta$ between $X_1$ and $X_2$ satisfies
$\cos\theta = \dfrac{X_1 \cdot X_2}{\lVert X_1 \rVert\,\lVert X_2 \rVert} = \dfrac{5}{\sqrt{6}\sqrt{39}}$
$\Rightarrow \theta = \cos^{-1}\left(\dfrac{5}{\sqrt{6}\sqrt{39}}\right) \approx 70^{\circ}55'18''$.

(v) Now, L.H.S. $= \lVert X_1 + X_2 \rVert^2 + \lVert X_1 - X_2 \rVert^2$
$= \lVert (-1, 0, 2, 1) + (2, 3, 1, 5) \rVert^2 + \lVert (-1, 0, 2, 1) - (2, 3, 1, 5) \rVert^2$
$= \lVert (1, 3, 3, 6) \rVert^2 + \lVert (-3, -3, 1, -4) \rVert^2$
$= (1^2 + 3^2 + 3^2 + 6^2) + \{(-3)^2 + (-3)^2 + 1^2 + (-4)^2\}$
$= 55 + 35 = 90$.

R.H.S. $= 2\lVert X_1 \rVert^2 + 2\lVert X_2 \rVert^2 = 2(6) + 2(39) = 12 + 78 = 90$.

So that $\lVert X_1 + X_2 \rVert^2 + \lVert X_1 - X_2 \rVert^2 = 2\lVert X_1 \rVert^2 + 2\lVert X_2 \rVert^2$ (proved).
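All of these computations are easy to verify with NumPy (an illustrative check, not part of the original solution):

```python
import numpy as np

X1 = np.array([-1, 0, 2, 1])
X2 = np.array([2, 3, 1, 5])

print(X1 @ X1, X2 @ X2)                  # 6 39 -> lengths sqrt(6), sqrt(39)
print(X1 @ X2)                           # 5
print((X1 - X2) @ (X1 - X2))             # 35   -> distance sqrt(35)

cos_theta = (X1 @ X2) / np.sqrt((X1 @ X1) * (X2 @ X2))
print(np.degrees(np.arccos(cos_theta)))  # about 70.92 degrees, i.e. 70 deg 55'

lhs = (X1 + X2) @ (X1 + X2) + (X1 - X2) @ (X1 - X2)
rhs = 2 * (X1 @ X1) + 2 * (X2 @ X2)
print(lhs, rhs)                          # 90 90
```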


Problem:
Four vectors are as follows:
$\alpha_1 = (8, 18, 17, 32)$
$\alpha_2 = (30, 22, 14, 27)$
$\alpha_3 = (10, 14, 48, 6)$
$\alpha_4 = (31, 49, 6, 8)$
Construct a set of orthogonal vectors, find their lengths, and construct a set of orthonormal vectors.

Solution:
Let $\beta_1, \beta_2, \beta_3, \beta_4$ be the orthogonal set of vectors. Then by the Gram-Schmidt orthogonalization process we have
$\beta_1 = \alpha_1 = (8, 18, 17, 32)$

$\beta_2 = \alpha_2 - \dfrac{\alpha_2 \cdot \beta_1}{\beta_1 \cdot \beta_1}\,\beta_1 = (30, 22, 14, 27) - \dfrac{1738}{1701}(8, 18, 17, 32)$
$= (30, 22, 14, 27) - (8.17, 18.39, 17.37, 32.70)$
$= (21.83, 3.61, -3.37, -5.70)$


$\beta_3 = \alpha_3 - \dfrac{\alpha_3 \cdot \beta_2}{\beta_2 \cdot \beta_2}\,\beta_2 - \dfrac{\alpha_3 \cdot \beta_1}{\beta_1 \cdot \beta_1}\,\beta_1 = (1.28, -0.58, 34.98, -18.58)$

$\beta_4 = \alpha_4 - \dfrac{\alpha_4 \cdot \beta_3}{\beta_3 \cdot \beta_3}\,\beta_3 - \dfrac{\alpha_4 \cdot \beta_2}{\beta_2 \cdot \beta_2}\,\beta_2 - \dfrac{\alpha_4 \cdot \beta_1}{\beta_1 \cdot \beta_1}\,\beta_1$
$= (31, 49, 6, 8) - \dfrac{72.5}{1570.79}(1.28, -0.58, 34.98, -18.58) - \dfrac{787.8}{531.96}(21.83, 3.61, -3.37, -5.70) - \dfrac{1488}{1701}(8, 18, 17, 32)$
$= (31, 49, 6, 8) - (0.06, -0.03, 1.61, -0.86) - (32.31, 5.34, -4.99, -8.44) - (7.00, 15.75, 14.87, 27.99)$
$= (-8.37, 27.94, -5.49, -10.69)$


So the orthogonal set of vectors is
$\beta_1 = (8, 18, 17, 32)$
$\beta_2 = (21.83, 3.61, -3.37, -5.70)$
$\beta_3 = (1.28, -0.58, 34.98, -18.58)$
$\beta_4 = (-8.37, 27.94, -5.49, -10.69)$

Orthonormal vectors:
The lengths of the orthogonal vectors are
$R_1 = \sqrt{\beta_1 \cdot \beta_1} = \sqrt{8^2 + 18^2 + 17^2 + 32^2} = 41.24$
$R_2 = \sqrt{\beta_2 \cdot \beta_2} = \sqrt{21.83^2 + 3.61^2 + (-3.37)^2 + (-5.70)^2} = 23.10$
$R_3 = \sqrt{\beta_3 \cdot \beta_3} = \sqrt{1.28^2 + (-0.58)^2 + 34.98^2 + (-18.58)^2} = 39.63$
$R_4 = \sqrt{\beta_4 \cdot \beta_4} = \sqrt{(-8.37)^2 + 27.94^2 + (-5.49)^2 + (-10.69)^2} = 31.54$

Now,
$V_1 = \dfrac{\beta_1}{R_1} = \left(\dfrac{8}{41.24}, \dfrac{18}{41.24}, \dfrac{17}{41.24}, \dfrac{32}{41.24}\right) = (0.19, 0.44, 0.41, 0.78)$
$V_2 = \dfrac{\beta_2}{R_2} = \left(\dfrac{21.83}{23.10}, \dfrac{3.61}{23.10}, \dfrac{-3.37}{23.10}, \dfrac{-5.70}{23.10}\right) = (0.95, 0.16, -0.15, -0.25)$
$V_3 = \dfrac{\beta_3}{R_3} = \left(\dfrac{1.28}{39.63}, \dfrac{-0.58}{39.63}, \dfrac{34.98}{39.63}, \dfrac{-18.58}{39.63}\right) = (0.03, -0.01, 0.88, -0.47)$
$V_4 = \dfrac{\beta_4}{R_4} = \left(\dfrac{-8.37}{31.54}, \dfrac{27.94}{31.54}, \dfrac{-5.49}{31.54}, \dfrac{-10.69}{31.54}\right) = (-0.27, 0.89, -0.17, -0.34)$

So the orthonormal set of vectors is
$V_1 = (0.19, 0.44, 0.41, 0.78)$
$V_2 = (0.95, 0.16, -0.15, -0.25)$
$V_3 = (0.03, -0.01, 0.88, -0.47)$
$V_4 = (-0.27, 0.89, -0.17, -0.34)$

2. Algebra of Matrices

Matrix:
A system of $mn$ numbers arranged in the form of an ordered set of $m$ rows, each row consisting of an ordered set of $n$ numbers, enclosed in square brackets or parentheses or in double vertical lines, is called an $m \times n$ matrix.
For example,
$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$ or $A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$ or $A = \begin{Vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{Vmatrix}$

Sometimes a convenient short notation is used: $A = (a_{ij})$, where $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$.

Order of a matrix:
The number of rows $m$ and the number of columns $n$ of a matrix together give the order of the matrix, written $m \times n$.
Rectangular Matrix:
A matrix in which the number of rows is unequal to the number of columns is called a rectangular matrix. Thus $A = (a_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$, is called an $m \times n$ rectangular matrix if $m \neq n$.
For example, $A = \begin{pmatrix} 2 & 3 & 1 \\ 2 & 1 & 0 \end{pmatrix}_{2 \times 3}$ or $B = \begin{pmatrix} 2 & 5 \\ 1 & 3 \\ 4 & 1 \end{pmatrix}_{3 \times 2}$ are rectangular matrices.
Square Matrix:
A matrix in which the number of rows is equal to the number of columns is called a square matrix. Thus $A = (a_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$, is called an $n \times n$ square matrix, or a square matrix of order $n$, if $m = n$.
For example, $A = \begin{pmatrix} 2 & 5 & 1 \\ 3 & 4 & 5 \\ 1 & 7 & 3 \end{pmatrix}_{3 \times 3}$ is a square matrix.

Row matrix:
If a matrix contains only one row then it is called a row matrix. Thus $A = (a_{11}, a_{12}, \ldots, a_{1n})$ is called a row matrix or row vector.
Example: $A = (5\ \ 8\ \ 9)$ is a row matrix.


Column matrix:
If a matrix contains only one column then it is called a column matrix. Thus $A = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix}$ is called a column matrix or column vector.
Example: $A = \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix}$ is a column matrix.

Null Matrix:
Any matrix whose elements are all equal to zero is called a null matrix.
For example, $A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ is a null matrix.

Principal diagonal of a Matrix:
For any square matrix, the diagonal running from the upper left to the lower right of the matrix is known as the main diagonal or principal diagonal. In fact the elements $a_{11}, a_{22}, \ldots, a_{nn}$, whose row suffix and column suffix are equal, constitute the main diagonal. These elements are called the main diagonal elements, principal diagonal elements, or simply diagonal elements; the other elements of the matrix are called off-diagonal elements.
$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$: here $a_{11}, a_{22}, a_{33}$ are diagonal elements and $a_{12}, a_{13}, a_{23}, a_{21}, a_{31}, a_{32}$ are off-diagonal elements.

Diagonal Matrix:
In a square matrix the elements $a_{ii}$, $i = 1, 2, \ldots, n$, are known as diagonal elements and the line on which they lie is known as the principal diagonal. A square matrix in which all the elements except those on the principal diagonal are zero is called a diagonal matrix. Thus the matrix $A = (a_{ij})$, $i, j = 1, 2, \ldots, n$, is a diagonal matrix if $a_{ij} = 0$ for all $i \neq j$. A diagonal matrix of order $n$ with diagonal elements $a_{11}, a_{22}, \ldots, a_{nn}$ is denoted by $A = \mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})$.
For example, $A = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{pmatrix}$ is a diagonal matrix.


Scalar Matrix:
A diagonal matrix whose diagonal elements are all equal is called a scalar matrix. Thus the square matrix $A = (a_{ij})$ is a scalar matrix if and only if, for some scalar $k$,
$a_{ij} = \begin{cases} k, & i = j \\ 0, & i \neq j \end{cases} \qquad i, j = 1, 2, \ldots, n$
Example: $A = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}$ is a scalar matrix.

Identity Matrix:
In a scalar matrix, if we take $k = 1$, then $A$ is called a unit matrix or an identity matrix of order $n$ and is denoted by $I_n$.
Symmetric Matrix:
A square matrix $A = (a_{ij})$ is said to be a symmetric matrix if the $(i, j)$-th element is the same as the $(j, i)$-th element, i.e. $a_{ij} = a_{ji}$ for all $i, j$. For a symmetric matrix, $A = A'$.
For example, $A = \begin{pmatrix} a & h & g \\ h & b & f \\ g & f & c \end{pmatrix} \Rightarrow A' = \begin{pmatrix} a & h & g \\ h & b & f \\ g & f & c \end{pmatrix} = A$,
so that $A$ is a symmetric matrix.

Skew Symmetric Matrix:
A square matrix $A = (a_{ij})$ in which $a_{ij} = -a_{ji}$ (so that the diagonal elements are all zero) is called a skew-symmetric matrix; equivalently, a matrix for which $A' = -A$ is a skew-symmetric matrix.
For example, $A = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 3 \\ -2 & -3 & 0 \end{pmatrix} \Rightarrow A' = \begin{pmatrix} 0 & -1 & -2 \\ 1 & 0 & -3 \\ 2 & 3 & 0 \end{pmatrix} = -A$,
so that $A$ is a skew-symmetric matrix.


Triangular matrix:

m  n matrix A  aij   is said to be upper triangular, if aij  0 ;  i  j and it is said to be lower


triangular if aij  0 ;  i  j .
Thus if A is a square matrix then in upper triangular matrix all the elements below the principal diagonal
are zero while in lower triangular matrix all the elements above the principal diagonal are zero. Triangular
matrix need not to be square.


3 5 2 2 0 0
  is a upper triangular matrix.  
Example: A  0 9 3 A   3 5 0  is a lower triangular matrix.
0 0 4 1 2 7
   
1 5 1  2 0 0
0 2 4   1 2 0 
A  is a upper triangular matrix. A    is a lower triangular matrix.
0 0 6 7 2 3
   
0 0 0  6 5 1 

Sub Matrix:
The matrix obtained by deleting some rows or column or both of a matrix is said to be a sub matrix that
matrix. As for example,
1 2 3 4
1 2 3
If A =  5 6 7 6  then sub matrix of A is A1  
9 1 2 3 5 6 7 
 

Comparable Matrices:
Two matrices $A = (a_{ij})$ and $B = (b_{ij})$ are said to be comparable if each has the same number of rows and columns as the other, i.e. if they have the same dimensions.

Equality of Matrices:
Two matrices $A = (a_{ij})$ and $B = (b_{ij})$ are said to be equal if and only if
(i) they are comparable, i.e. they are of the same dimensions, and
(ii) the elements in corresponding positions of the two matrices are the same, i.e. for each pair of subscripts $i$ and $j$ we have $a_{ij} = b_{ij}$.

Matrix Addition

Addition of Matrices:
Two matrices $A = (a_{ij})$ and $B = (b_{ij})$ are said to be conformable for addition if they are comparable, and then their sum $A + B$ is defined as the matrix
$C = A + B = (c_{ij})$, where $c_{ij} = a_{ij} + b_{ij}$,
i.e. the sum of two matrices is obtained by adding their corresponding elements. Obviously $A + B$ has the same dimensions as $A$ or $B$.
Example:
If $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$ and $B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}$, then $A + B = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} \\ a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} \\ a_{31}+b_{31} & a_{32}+b_{32} & a_{33}+b_{33} \end{pmatrix}$.

Properties of matrix addition:



i) Matrix addition is commutative, i.e. for two matrices $A = (a_{ij})$ and $B = (b_{ij})$, $A + B = B + A$.
ii) Matrix addition is associative, i.e. for three matrices $A = (a_{ij})$, $B = (b_{ij})$ and $C = (c_{ij})$, $A + (B + C) = (A + B) + C$.
iii) Scalar multiplication is distributive over matrix addition, i.e. for two matrices $A = (a_{ij})$ and $B = (b_{ij})$ and a scalar $k$, $k(A + B) = kA + kB$.
iv) If $A$ is an $m \times n$ matrix and $0$ is a null matrix of the same dimensions, then $A + 0 = 0 + A = A$.
v) If $A$ and $X$ are conformable for addition, then the matrix equation $A + X = 0$ has a unique solution $X = -A = (-a_{ij})$.

Matrix addition is commutative, i.e. for two matrices $A = (a_{ij})$ and $B = (b_{ij})$, $A + B = B + A$.
Proof:
Let $A = (a_{ij})$ and $B = (b_{ij})$ be two comparable matrices. Then
$A + B = (a_{ij} + b_{ij}) = (b_{ij} + a_{ij}) = B + A$
$\therefore A + B = B + A$

Matrix addition is associative, i.e. for three matrices $A = (a_{ij})$, $B = (b_{ij})$ and $C = (c_{ij})$, $A + (B + C) = (A + B) + C$.
Proof:
Let $A = (a_{ij})$, $B = (b_{ij})$ and $C = (c_{ij})$ be comparable matrices of order $m \times n$. Then the matrices $A$, $B + C$, $A + B$ and $C$ all have the same dimensions $m \times n$, so the sums $A + (B + C)$ and $(A + B) + C$ are defined and comparable. Further,
the $(i, j)$-th element of $A + (B + C) = a_{ij} + (b_{ij} + c_{ij}) = (a_{ij} + b_{ij}) + c_{ij}$ = the $(i, j)$-th element of $(A + B) + C$.
$\therefore A + (B + C) = (A + B) + C$

Matrix Multiplication
Matrix multiplication:
Two matrices $A$ and $B$ are said to be conformable for the product $AB$ if the number of columns of $A$ is equal to the number of rows of $B$.
Let $A = (a_{ij})$ be an $m \times n$ matrix and $B = (b_{ij})$ an $n \times p$ matrix, so that they are conformable for the product. Then
$C = AB = (c_{ij})$
is an $m \times p$ matrix, where $c_{ij}$ is the inner product of the $i$-th row of $A$ with the $j$-th column of $B$.
For example, let


$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$ and $B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}$.

Then $AB = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} & a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} & a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33} \\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} & a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33} \end{pmatrix}$

In general, $c_{ij} = (a_{i1}, a_{i2}, \ldots, a_{in})\begin{pmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{nj} \end{pmatrix} = \displaystyle\sum_{k=1}^{n} a_{ik} b_{kj}$.
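The entry formula $c_{ij} = \sum_k a_{ik} b_{kj}$ translates directly into code (a sketch for illustration; NumPy's @ operator performs the same computation):

```python
import numpy as np

def matmul(A, B):
    """Multiply A (m x n) by B (n x p) using c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "A's columns must equal B's rows"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1, 2, 3], [4, 5, 6]])        # 2 x 3
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3 x 2
print(matmul(A, B))   # same result as the built-in product
print(A @ B)
```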

Properties of Matrix multiplication:

i) Matrix multiplication is, in general, not commutative, i.e. for two matrices $A$ and $B$, $AB \neq BA$ in general.
ii) Matrix multiplication is associative, i.e. for three matrices $A$, $B$ and $C$, $A(BC) = (AB)C$.
iii) Matrix multiplication is distributive over addition, i.e. when the matrices $A$, $B$, $C$ and $D$ are conformable,
$A(B + C) = AB + AC$ and $(B + C)D = BD + CD$.
iv) For two matrices $A$ and $B$, we may have $AB = 0$ without either of the matrices being zero.

For two matrices $A = (a_{ij})$ and $B = (b_{ij})$, $AB \neq BA$ in general.

Proof:
If the matrix product $AB$ is defined, it is not necessary that the product $BA$ is also defined. If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix with $p \neq m$, then the product $AB$ is defined but the product $BA$ is not, since the number of columns of $B$ is not equal to the number of rows of $A$. Again, even if both the products $AB$ and $BA$ are defined, they need not be equal. For example, if
$A = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} b & 0 \\ -a & 0 \end{pmatrix}$,
then $AB$ and $BA$ are both defined, with $AB = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ and $BA = \begin{pmatrix} ab & b^2 \\ -a^2 & -ab \end{pmatrix}$
$\therefore AB \neq BA$

For three matrices $A$, $B$ and $C$, $A(BC) = (AB)C$.

Proof:
Let $A = (a_{ij})$, $B = (b_{ij})$ and $C = (c_{ij})$ be $m \times n$, $n \times p$ and $p \times q$ matrices respectively, so that $AB = (u_{ij})$ and $BC = (v_{ij})$ are $m \times p$ and $n \times q$ matrices respectively, where

$u_{ij} = \displaystyle\sum_{k=1}^{n} a_{ik} b_{kj}$ and $v_{ij} = \displaystyle\sum_{r=1}^{p} b_{ir} c_{rj}$.

Let $(AB)C = (w_{ij})$ and $A(BC) = (x_{ij})$. Then $(AB)C$ and $A(BC)$ are each $m \times q$ matrices, where
$w_{ij} = \displaystyle\sum_{r=1}^{p} u_{ir} c_{rj}$ and $x_{ij} = \displaystyle\sum_{k=1}^{n} a_{ik} v_{kj}$.

We have to prove that $w_{ij} = x_{ij}$. Now,
$w_{ij} = \displaystyle\sum_{r=1}^{p} u_{ir} c_{rj} = \sum_{r=1}^{p}\left(\sum_{k=1}^{n} a_{ik} b_{kr}\right) c_{rj} = \sum_{k=1}^{n} a_{ik}\left(\sum_{r=1}^{p} b_{kr} c_{rj}\right) = \sum_{k=1}^{n} a_{ik} v_{kj} = x_{ij}$
Hence the proof.

Transpose of a Matrix

Transpose matrix:
A matrix obtained by interchanging the rows and columns of a given matrix is called the transpose of that matrix. For any matrix $A$, the transpose is denoted by $A'$ or $A^T$.
If $A = \begin{pmatrix} 5 & 7 \\ 3 & 5 \\ 7 & 8 \end{pmatrix}$ then $A' = \begin{pmatrix} 5 & 3 & 7 \\ 7 & 5 & 8 \end{pmatrix}$.

If the matrices $A$ and $B$ are conformable for the product $AB$, then the matrices $B'$ and $A'$ are conformable for the product $B'A'$ and $(AB)' = B'A'$.
Or:
The transpose of the product of two matrices is equal to the product of the transposes taken in reverse order.

Proof:
Let $A = (a_{ij})$ be an $m \times n$ matrix and $B = (b_{ij})$ an $n \times p$ matrix. Then
$A' = (a'_{ij})$, where $a'_{ij} = a_{ji}$, is an $n \times m$ matrix, and
$B' = (b'_{ij})$, where $b'_{ij} = b_{ji}$, is a $p \times n$ matrix.


Thus $AB$ is an $m \times p$ matrix, so that $(AB)'$ is a $p \times m$ matrix. Also $B'A'$ is a $p \times m$ matrix. Thus the matrices $(AB)'$ and $B'A'$ have the same dimensions.

Now, $AB = (c_{ij})$,
where $c_{ij} = \displaystyle\sum_{k=1}^{n} a_{ik} b_{kj}$, $\quad i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, p$.
$\therefore$ the $(i, j)$-th element of $(AB)' = c'_{ij} = c_{ji}$
$= \displaystyle\sum_{k=1}^{n} a_{jk} b_{ki}$
$= \displaystyle\sum_{k=1}^{n} b'_{ik} a'_{kj}$
$=$ the $(i, j)$-th element of $B'A'$.
Hence the proof.

If $A$ and $B$ are comparable matrices then
(i) $(A')' = A$, (ii) $(A + B)' = A' + B'$, (iii) $(kA)' = kA'$, $k$ being a scalar.

Proof:

(i) Let $A = (a_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$.
By definition $A' = (a'_{ij}) = (a_{ji})$.
Now $(A')' = (a'_{ji}) = (a_{ij}) = A$
$\therefore (A')' = A$

(ii) Let $A = (a_{ij})$ and $B = (b_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$.
Then $C = A + B$ is defined and $c_{ij} = a_{ij} + b_{ij}$.
Now by the definition of the transpose of $C$ we have
$(A + B)' = C' = (c'_{ij}) = (c_{ji}) = (a_{ji} + b_{ji}) = (a_{ji}) + (b_{ji}) = A' + B'$
$\therefore (A + B)' = A' + B'$

(iii) Let $A = (a_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$, so that $A' = (a'_{ij}) = (a_{ji})$.
Now $(kA)' = (ka_{ij})' = (ka_{ji}) = k(a_{ji}) = kA'$
$\therefore (kA)' = kA'$
If $A$ is a square matrix, then $A + A'$ is symmetric and $A - A'$ is a skew-symmetric matrix.

Proof:
Let $A$ be a square matrix and $A'$ its transpose. Then we have
$(A + A')' = A' + (A')' = A' + A = A + A'$,
so that $A + A'$ is a symmetric matrix.
$(A - A')' = A' - (A')' = A' - A = -(A - A')$,
so that $A - A'$ is a skew-symmetric matrix.
Hence the proof.

Every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.

Proof:
Let $A$ be a square matrix and $A'$ its transpose. Then we can write
$A = \tfrac{1}{2}(A + A') + \tfrac{1}{2}(A - A') = B + C$; where $B = \tfrac{1}{2}(A + A')$ and $C = \tfrac{1}{2}(A - A')$.
Now, $B' = \tfrac{1}{2}(A + A')' = \tfrac{1}{2}\{A' + (A')'\} = \tfrac{1}{2}(A' + A) = B$,
and $C' = \tfrac{1}{2}(A - A')' = \tfrac{1}{2}\{A' - (A')'\} = \tfrac{1}{2}(A' - A) = -\tfrac{1}{2}(A - A') = -C$.
Thus $B$ is a symmetric matrix and $C$ is a skew-symmetric matrix. So every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.
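The decomposition $A = \tfrac{1}{2}(A + A') + \tfrac{1}{2}(A - A')$ is immediate to verify numerically (an illustrative sketch with an arbitrary square matrix):

```python
import numpy as np

A = np.array([[1.0, 4.0, 0.0],
              [2.0, 3.0, 5.0],
              [6.0, 1.0, 2.0]])

B = (A + A.T) / 2     # symmetric part
C = (A - A.T) / 2     # skew-symmetric part

print(np.allclose(B, B.T))    # True
print(np.allclose(C, -C.T))   # True
print(np.allclose(A, B + C))  # True: A = B + C
```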


If $A$ is a skew-symmetric matrix then $AA' = A'A$ and $A^2$ is symmetric.

Proof:
Let $A$ be a skew-symmetric matrix. Then we have $A' = -A$
$\therefore AA' = A(-A) = -A^2$
and $A'A = (-A)A = -A^2$
So that $AA' = A'A$.
Again,
$(AA')' = (A')'A' = AA'$
and $(A'A)' = A'(A')' = A'A$
$\therefore AA'$ and $A'A$ are both symmetric matrices.
Therefore $-A^2$ is a symmetric matrix, and since $-1$ is a scalar, $A^2 = (-1)(-A^2)$ is also a symmetric matrix.

If $A$ and $B$ are both skew-symmetric matrices of the same order such that $AB = BA$, then $AB$ is symmetric.

Proof:
If $A$ and $B$ are both skew-symmetric matrices, then $A' = -A$ and $B' = -B$.
Given that $AB = BA$,
$(AB)' = B'A' = (-B)(-A) = BA = AB$
$\therefore (AB)' = AB$
Thus $AB$ is a symmetric matrix.

If $A$ and $B$ are two symmetric matrices of the same order, then a necessary and sufficient condition for the matrix $AB$ to be symmetric is that $AB = BA$.
Proof:
Since $A$ and $B$ are symmetric matrices, $A' = A$ and $B' = B$.
Necessary condition: if $AB$ is a symmetric matrix then
$(AB)' = AB$
$\Rightarrow B'A' = AB$
$\Rightarrow BA = AB$
$\therefore AB = BA$
Sufficient condition: given that $AB = BA$,
now $(AB)' = B'A' = BA = AB$
$\therefore AB$ is symmetric.

Hence the proof.


Orthogonal Matrix

Orthogonal matrix:
A square matrix $A$ of order $n$ is said to be orthogonal if and only if
$AA' = A'A = I_n$,
where $I_n$ is an identity matrix and $A'$ is the transpose of $A$.
Example:
$A = \begin{pmatrix} -\frac{1}{3} & \frac{2}{3} & \frac{2}{3} \\ \frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ \frac{2}{3} & \frac{2}{3} & -\frac{1}{3} \end{pmatrix} \Rightarrow A' = \begin{pmatrix} -\frac{1}{3} & \frac{2}{3} & \frac{2}{3} \\ \frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ \frac{2}{3} & \frac{2}{3} & -\frac{1}{3} \end{pmatrix} \Rightarrow AA' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$,
so that $A$ is an orthogonal matrix.
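This example can be checked directly in NumPy (a sketch using the matrix as reconstructed above):

```python
import numpy as np

A = np.array([[-1, 2, 2],
              [2, -1, 2],
              [2, 2, -1]]) / 3.0

print(np.round(A @ A.T, 10))           # identity matrix -> A is orthogonal
print(np.round(np.linalg.det(A), 10))  # 1.0 -> determinant is +/- 1
```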

If $A$ and $B$ are orthogonal matrices, each of order $n$, then the matrices $AB$ and $BA$ are also orthogonal.

Proof:
Since $A$ and $B$ are $n$-rowed orthogonal matrices, we have
$AA' = A'A = I_n$ and $BB' = B'B = I_n$.
The matrix product $AB$ is also a square matrix of order $n$ and we have
$(AB)'(AB) = (B'A')(AB)$
$= B'(A'A)B$
$= B'I_nB$
$= B'B$
$= I_n$
Thus $AB$ is an orthogonal matrix of order $n$.

Similarly,
$(BA)'(BA) = (A'B')(BA)$
$= A'(B'B)A$
$= A'I_nA$
$= A'A$
$= I_n$
Hence $BA$ is an orthogonal matrix of order $n$.
If $A$ is an orthogonal matrix, then $A^{-1}$ is also orthogonal.

Proof:
If $A$ is orthogonal, we have
$AA' = A'A = I_n$, where $I$ is the identity matrix.
$\Rightarrow (AA')^{-1} = (A'A)^{-1} = I^{-1} = I$
$\Rightarrow (A')^{-1}A^{-1} = A^{-1}(A')^{-1} = I$
$\Rightarrow (A^{-1})'A^{-1} = A^{-1}(A^{-1})' = I \quad [\because (A')^{-1} = (A^{-1})']$
Hence $A^{-1}$ is orthogonal, so the inverse of an orthogonal matrix is also orthogonal.

The transpose of an orthogonal matrix is also orthogonal.

Proof:
If $A$ is orthogonal, we have
$AA' = A'A = I_n$, where $I$ is the identity matrix.
$\Rightarrow (AA')' = (A'A)' = I' = I$
$\Rightarrow (A')'A' = A'(A')' = I$
Hence $A'$ is orthogonal, so the transpose of an orthogonal matrix is also orthogonal.

For an orthogonal matrix, the transpose and the inverse are equal.

Proof:
If $A$ is orthogonal, we have
$AA' = A'A = I_n$, where $I$ is the identity matrix.
$\Rightarrow A^{-1}AA' = A^{-1}I$
$\Rightarrow IA' = A^{-1}$
$\Rightarrow A' = A^{-1}$
So for an orthogonal matrix the transpose and the inverse are equal.

The determinant of an orthogonal matrix is $\pm 1$.

Proof:
If $A$ is orthogonal, we have
$AA' = I_n$, where $I$ is the identity matrix.
$\Rightarrow |AA'| = |I|$
$\Rightarrow |A|\,|A'| = 1$
$\Rightarrow |A|\,|A| = 1$
$\Rightarrow |A|^2 = 1$
$\Rightarrow |A| = \pm 1$
If $|A| = +1$ then $A$ is called a proper orthogonal matrix, and if $|A| = -1$ then $A$ is called an improper orthogonal matrix.


An orthogonal matrix is a non-singular matrix.

Proof:
If $A$ is orthogonal, we have
$AA' = I_n$, where $I$ is the identity matrix.
$\Rightarrow |AA'| = |I|$
$\Rightarrow |A|\,|A'| = 1$
$\Rightarrow |A|^2 = 1$
$\Rightarrow |A| = \pm 1 \neq 0$

So an orthogonal matrix is a non-singular matrix.

Uses of orthogonal matrices:
A matrix transformation is called an orthogonal transformation if the corresponding matrix is orthogonal. In statistics there are many places where an orthogonal transformation is required, and it is performed using an orthogonal matrix. Thus orthogonal matrices play an important role in statistics.

Determinant
Determinant:
The determinant of a square matrix $A$ of order $n$ is a real-valued function of the elements of the matrix, given by
$|A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = \sum \pm\, a_{1i}\,a_{2j}\cdots a_{np}$,
where the summation is taken over the $n!$ permutations $(i, j, \ldots, p)$ of the column suffixes, with a '+' sign given to a term of even permutation and a '$-$' sign given to a term of odd permutation.

Properties of determinants:
1. If all the elements of a determinant are zero, then the value of the determinant is zero.
Example: $|A| = \begin{vmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{vmatrix} = 0$
2. The determinant of a transposed matrix is equal to the determinant of the original matrix, i.e. $|A'| = |A|$.


Example: $\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$
3. If one row or column of a determinant is zero, then the value of the determinant is zero.
Example: $\begin{vmatrix} a_1 & b_1 & c_1 \\ 0 & 0 & 0 \\ a_2 & b_2 & c_2 \end{vmatrix} = 0$
4. If two rows or columns are interchanged, then the determinant changes its sign without changing its numerical value.
Example: $\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = -\begin{vmatrix} b_1 & a_1 & c_1 \\ b_2 & a_2 & c_2 \\ b_3 & a_3 & c_3 \end{vmatrix}$
5. If two rows or columns are equal, then the value of the determinant is zero.
Example: $\begin{vmatrix} a_1 & b_1 & a_1 \\ a_2 & b_2 & a_2 \\ a_3 & b_3 & a_3 \end{vmatrix} = 0$
6. If each of the elements of any one row or column of a determinant is multiplied by a constant, then the value of the determinant is multiplied by the same constant.
Example: $\begin{vmatrix} ma_1 & ma_2 & ma_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = m\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$
7. If each of the elements of a determinant is multiplied by a constant $C$, then the value of the determinant is multiplied by $C^n$, where $n$ is the order of the determinant.
Example: $\begin{vmatrix} ma_1 & ma_2 & ma_3 \\ mb_1 & mb_2 & mb_3 \\ mc_1 & mc_2 & mc_3 \end{vmatrix} = m^3\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$
8. If each of the elements of a row or column of a determinant can be expressed as a sum of two or more numbers, then the determinant can be expressed as a sum of two or more determinants.
Example: $\begin{vmatrix} a_1+\alpha & b_1+\beta & c_1+\gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} \alpha & \beta & \gamma \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
9. If each of the elements of any row or column of a determinant is multiplied by $k$ and added to another row or column, then the value of the determinant is unchanged.
Example: $\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1+ka_2 & b_1+kb_2 & c_1+kc_2 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
10. The determinant of a diagonal matrix is equal to the product of its diagonal elements, i.e. $|A| = a_{11}a_{22}a_{33}\cdots a_{nn}$.
11. The determinant of a triangular matrix is equal to the product of its diagonal elements, i.e.

$|A| = a_{11}a_{22}a_{33}\cdots a_{nn}$.
12. For square matrices $A$ and $B$ of the same order, $|AB| = |A|\,|B|$.
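Several of these properties are easy to spot-check numerically on random matrices (an illustrative sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 3)).astype(float)
B = rng.integers(-5, 6, size=(3, 3)).astype(float)

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                        # property 2
print(np.isclose(np.linalg.det(3 * A), 3**3 * np.linalg.det(A)))               # property 7
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # property 12
```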

Difference between determinant and matrix :

1. The number of rows and columns are equal in a determinant. But the number of rows and columns may or may not be equal in a matrix.
2. A determinant has a definite value. But a matrix has no value; it is merely an arrangement of elements in rows and columns.
3. If two rows or columns are identical, then the determinant vanishes. But identical rows or columns may occur in a matrix.
4. Rows and columns can be interchanged in a determinant without changing its value. But the rows and columns of a matrix cannot be interchanged.
5. If a determinant is multiplied by a constant, then all the elements of one row or one column are multiplied by that constant. But if a matrix is multiplied by a constant, then all the elements of the matrix are multiplied by that constant.
6. The product of two determinants does not change the order of the determinants. But the product of two matrices may change the order of the matrix.
7. Multiplication of two determinants is commutative, i.e. $|A|\,|B| = |B|\,|A|$. But matrix multiplication is not commutative, since $AB \neq BA$ in general.

Example: Evaluate the determinant
$|A| = \begin{vmatrix} x & a & a & \cdots & a \\ a & x & a & \cdots & a \\ a & a & x & \cdots & a \\ \vdots & \vdots & \vdots & & \vdots \\ a & a & a & \cdots & x \end{vmatrix}$
Solution:


$|A| = \begin{vmatrix} x & a & \cdots & a \\ a & x & \cdots & a \\ \vdots & & & \vdots \\ a & a & \cdots & x \end{vmatrix}
= \begin{vmatrix} x+(n-1)a & a & \cdots & a \\ x+(n-1)a & x & \cdots & a \\ \vdots & & & \vdots \\ x+(n-1)a & a & \cdots & x \end{vmatrix} \quad [C_1 \to C_1 + C_2 + \cdots + C_n]$

$= \{x+(n-1)a\}\begin{vmatrix} 1 & a & \cdots & a \\ 1 & x & \cdots & a \\ \vdots & & & \vdots \\ 1 & a & \cdots & x \end{vmatrix}
= \{x+(n-1)a\}\begin{vmatrix} 1 & a & \cdots & a \\ 0 & x-a & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & x-a \end{vmatrix} \quad [R_i \to R_i - R_1,\ i = 2, \ldots, n]$

$= \{x+(n-1)a\}(x-a)^{n-1}$
$\therefore |A| = \{x+(n-1)a\}(x-a)^{n-1}$

Example: Evaluate the determinant
$|A| = \begin{vmatrix} 1+a_1 & 1 & 1 & \cdots & 1 \\ 1 & 1+a_2 & 1 & \cdots & 1 \\ 1 & 1 & 1+a_3 & \cdots & 1 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 1 & 1 & \cdots & 1+a_n \end{vmatrix}$
Solution :


1 1  1  1 1 1  1
a1 a2 an a2 a3 an
1 1  1  1 1 1 1  1
a1 a2 an a2 a3 an
  a1a2 ....an  1  1  1  1 1 1 1  1 C1  C1  C2  ....  C n 
a1 a2 an a2 a3 an
   
1 1  1  1 1 1  1 1
a1 a2 an a2 a3 an

1 1 1  1
a2 a3 an
1 1 1 1  1
a2 a3 an
  a1a2 ....an   1  1  1    1  1 1 1 1  1
 a1 a2 an  a2 a3 an
   
1 1 1  1 1
a2 a3 an

1 1 1  1
a2 a3 an R2  R2  R1 
0 1 0  0 R   R  R 
 3 1
  a1a2 ....an   1  1  1    1  0 0 1 0 
3

 a1 a2 an   
 
    Rn  Rn  R1 
0 0 0  1

  a1a2 ....an   1  1  1    1   A   a1a2 ....an   1  1  1    1 


 a1 a2 an   a1 a2 an 

Example: Evaluate the determinant
$|A| = \begin{vmatrix} 1+a_1 & a_2 & a_3 & \cdots & a_n \\ a_1 & 1+a_2 & a_3 & \cdots & a_n \\ a_1 & a_2 & 1+a_3 & \cdots & a_n \\ \vdots & \vdots & \vdots & & \vdots \\ a_1 & a_2 & a_3 & \cdots & 1+a_n \end{vmatrix}$

Solution :


$|A| = \begin{vmatrix} 1+a_1 & a_2 & \cdots & a_n \\ a_1 & 1+a_2 & \cdots & a_n \\ \vdots & & & \vdots \\ a_1 & a_2 & \cdots & 1+a_n \end{vmatrix}
= \begin{vmatrix} 1+a_1+a_2+\cdots+a_n & a_2 & \cdots & a_n \\ 1+a_1+a_2+\cdots+a_n & 1+a_2 & \cdots & a_n \\ \vdots & & & \vdots \\ 1+a_1+a_2+\cdots+a_n & a_2 & \cdots & 1+a_n \end{vmatrix} \quad [C_1 \to C_1 + C_2 + \cdots + C_n]$

$= (1+a_1+a_2+\cdots+a_n)\begin{vmatrix} 1 & a_2 & a_3 & \cdots & a_n \\ 1 & 1+a_2 & a_3 & \cdots & a_n \\ \vdots & & & & \vdots \\ 1 & a_2 & a_3 & \cdots & 1+a_n \end{vmatrix}
= (1+a_1+a_2+\cdots+a_n)\begin{vmatrix} 1 & a_2 & a_3 & \cdots & a_n \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{vmatrix} \quad [R_i \to R_i - R_1,\ i = 2, \ldots, n]$

$= 1 + a_1 + a_2 + \cdots + a_n$

Example: Evaluate the determinant
$|A_n| = \begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ d_1 & d_2 & d_3 & \cdots & d_n \\ d_1^2 & d_2^2 & d_3^2 & \cdots & d_n^2 \\ \vdots & \vdots & \vdots & & \vdots \\ d_1^{n-1} & d_2^{n-1} & d_3^{n-1} & \cdots & d_n^{n-1} \end{vmatrix}$
Solution:
Let $|A_2| = \begin{vmatrix} 1 & 1 \\ d_1 & d_2 \end{vmatrix} = d_2 - d_1$, and
$|A_3| = \begin{vmatrix} 1 & 1 & 1 \\ d_1 & d_2 & d_3 \\ d_1^2 & d_2^2 & d_3^2 \end{vmatrix}
= \begin{vmatrix} 1 & 0 & 0 \\ d_1 & d_2-d_1 & d_3-d_1 \\ d_1^2 & d_2^2-d_1^2 & d_3^2-d_1^2 \end{vmatrix} \quad [C_2 \to C_2 - C_1,\ C_3 \to C_3 - C_1]$
$= (d_2-d_1)(d_3-d_1)\begin{vmatrix} 1 & 0 & 0 \\ d_1 & 1 & 1 \\ d_1^2 & d_2+d_1 & d_3+d_1 \end{vmatrix}
= (d_2-d_1)(d_3-d_1)\{(d_3+d_1)-(d_2+d_1)\}$
$= (d_3-d_1)(d_3-d_2)(d_2-d_1)$
Similarly,
$|A_4| = (d_4-d_1)(d_4-d_2)(d_4-d_3)(d_3-d_1)(d_3-d_2)(d_2-d_1)$
$\vdots$
$|A_n| = (d_n-d_1)(d_n-d_2)\cdots(d_n-d_{n-1})(d_{n-1}-d_1)(d_{n-1}-d_2)\cdots(d_{n-1}-d_{n-2})\cdots(d_2-d_1) = \displaystyle\prod_{i>j}(d_i-d_j)$

Example : Evaluate the determinant


$|A| = \begin{vmatrix} 1+a & 1 & 1 & 1 \\ 1 & 1+b & 1 & 1 \\ 1 & 1 & 1+c & 1 \\ 1 & 1 & 1 & 1+d \end{vmatrix}$
Solution:
Taking $a$, $b$, $c$, $d$ common from the four columns respectively,
$|A| = abcd\begin{vmatrix} \frac{1}{a}+1 & \frac{1}{b} & \frac{1}{c} & \frac{1}{d} \\ \frac{1}{a} & \frac{1}{b}+1 & \frac{1}{c} & \frac{1}{d} \\ \frac{1}{a} & \frac{1}{b} & \frac{1}{c}+1 & \frac{1}{d} \\ \frac{1}{a} & \frac{1}{b} & \frac{1}{c} & \frac{1}{d}+1 \end{vmatrix}$

$= abcd\left(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right)\begin{vmatrix} 1 & \frac{1}{b} & \frac{1}{c} & \frac{1}{d} \\ 1 & \frac{1}{b}+1 & \frac{1}{c} & \frac{1}{d} \\ 1 & \frac{1}{b} & \frac{1}{c}+1 & \frac{1}{d} \\ 1 & \frac{1}{b} & \frac{1}{c} & \frac{1}{d}+1 \end{vmatrix} \quad [C_1 \to C_1 + C_2 + C_3 + C_4]$

$= abcd\left(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right)\begin{vmatrix} 1 & \frac{1}{b} & \frac{1}{c} & \frac{1}{d} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix} \quad [R_i \to R_i - R_1,\ i = 2, 3, 4]$

$\therefore |A| = abcd\left(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right)$
Adjoint & Inverse of a Matrix

Minor of a Matrix:
The determinant of every square sub-matrix of a matrix is called a minor of the matrix. If $M_{ij}$ is the $(n-1) \times (n-1)$ sub-matrix of the matrix $A = (a_{ij})$ obtained by removing the $i$-th row and $j$-th column, then the determinant $|M_{ij}|$ is defined as the minor of the element $a_{ij}$ in the determinant $|a_{ij}|$ of order $n$.

Cofactor of an Element of a Matrix:
The cofactor $C_{ij}$ of the element $a_{ij}$ in the determinant $|a_{ij}|$ is given by $C_{ij} = (-1)^{i+j}|M_{ij}|$, where $|M_{ij}|$ is the minor of $a_{ij}$ in the determinant $|a_{ij}|$.

Adjoint of a Matrix:


Let $A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$; then the determinant of the matrix $A$ is $|A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$.
Let $C_{ij}$ ($i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, n$) be the cofactors of the elements of the determinant $|A|$, and form the matrix $(C_{ij})$. Then the transpose of the matrix $(C_{ij})$ is called the adjoint of the matrix $A$ and is generally denoted by $\mathrm{adj}\,A$:
$\mathrm{adj}\,A = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & & & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}' = \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix}$

Example:
Let $A = \begin{pmatrix} 1 & 0 & -1 \\ 3 & 4 & 5 \\ 0 & -6 & -7 \end{pmatrix}$.
Then the cofactors are
$C_{11} = \begin{vmatrix} 4 & 5 \\ -6 & -7 \end{vmatrix} = 2$, $C_{12} = -\begin{vmatrix} 3 & 5 \\ 0 & -7 \end{vmatrix} = 21$, $C_{13} = \begin{vmatrix} 3 & 4 \\ 0 & -6 \end{vmatrix} = -18$, $C_{21} = -\begin{vmatrix} 0 & -1 \\ -6 & -7 \end{vmatrix} = 6$,
$C_{22} = \begin{vmatrix} 1 & -1 \\ 0 & -7 \end{vmatrix} = -7$, $C_{23} = -\begin{vmatrix} 1 & 0 \\ 0 & -6 \end{vmatrix} = 6$, $C_{31} = \begin{vmatrix} 0 & -1 \\ 4 & 5 \end{vmatrix} = 4$, $C_{32} = -\begin{vmatrix} 1 & -1 \\ 3 & 5 \end{vmatrix} = -8$, $C_{33} = \begin{vmatrix} 1 & 0 \\ 3 & 4 \end{vmatrix} = 4$.

$\therefore$ Cofactor matrix $(C_{ij}) = \begin{pmatrix} 2 & 21 & -18 \\ 6 & -7 & 6 \\ 4 & -8 & 4 \end{pmatrix} \Rightarrow \mathrm{adj}\,A = \begin{pmatrix} 2 & 21 & -18 \\ 6 & -7 & 6 \\ 4 & -8 & 4 \end{pmatrix}' = \begin{pmatrix} 2 & 6 & 4 \\ 21 & -7 & -8 \\ -18 & 6 & 4 \end{pmatrix}$

If $A = (a_{ij})$ is a square matrix of order $n$, then $A(\mathrm{adj}\,A) = (\mathrm{adj}\,A)A = |A|\,I_n$.

Proof:
We know that $\mathrm{adj}\,A = (C_{kj})'$, where $C_{kj}$ is the cofactor of $a_{kj}$ in $|A|$.
Therefore $A(\mathrm{adj}\,A) = (a_{ij})(C_{kj})' = (a_{ij})(C_{jk}) = (b_{ik})$,
where $b_{ik} = \displaystyle\sum_{j=1}^{n} a_{ij} C_{kj} = \begin{cases} |A| & \text{if } i = k \\ 0 & \text{if } i \neq k \end{cases}$
So all the diagonal elements of $A(\mathrm{adj}\,A)$ are equal to $|A|$ and the off-diagonal elements are 0.
$\therefore A(\mathrm{adj}\,A) = \begin{pmatrix} |A| & 0 & \cdots & 0 \\ 0 & |A| & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{pmatrix} = |A|\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = |A|\,I_n$
Similarly we can prove that $(\mathrm{adj}\,A)A = |A|\,I_n$.
(Proved)

If $A = (a_{ij})$ is a square matrix of order $n$, then $|\mathrm{adj}\,A| = |A|^{n-1}$, if $|A| \neq 0$.

Proof:
We know that $|AB| = |A|\,|B|$.
$\therefore |A|\,|\mathrm{adj}\,A| = |A(\mathrm{adj}\,A)|$
$= \begin{vmatrix} |A| & 0 & \cdots & 0 \\ 0 & |A| & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{vmatrix}$
$\Rightarrow |A|\,|\mathrm{adj}\,A| = |A|^n$
$\Rightarrow |\mathrm{adj}\,A| = |A|^{n-1}$
(Proved)

If $A$ and $B$ are two $n \times n$ matrices, then $\mathrm{adj}(AB) = (\mathrm{adj}\,B)(\mathrm{adj}\,A)$.

Proof:
We know that $A(\mathrm{adj}\,A) = |A|\,I$.
So $(AB)\,\mathrm{adj}(AB) = |AB|\,I$.


Now,
$(AB)(\mathrm{adj}\,B)(\mathrm{adj}\,A) = A\{B(\mathrm{adj}\,B)\}(\mathrm{adj}\,A)$
$= A\,|B|\,I\,(\mathrm{adj}\,A)$
$= |B|\,A(\mathrm{adj}\,A)$
$= |B|\,|A|\,I$
$= |AB|\,I$
$\therefore (AB)\,\mathrm{adj}(AB) = (AB)(\mathrm{adj}\,B)(\mathrm{adj}\,A)$
$\Rightarrow \mathrm{adj}(AB) = (\mathrm{adj}\,B)(\mathrm{adj}\,A)$
(Proved)

Singular and non-singular matrix:
A square matrix whose determinant is zero is called a singular matrix, and a square matrix whose determinant is non-zero is called a non-singular matrix.

Inverse of a matrix:
If for any square matrix $A$ there is a square matrix $B$ such that $AB = BA = I$, then $B$ is called the inverse matrix (or reciprocal matrix) of $A$ and is denoted by $A^{-1}$, so that $B = A^{-1}$. Thus for an inverse matrix we have $AA^{-1} = I$ and $A^{-1}A = I$, where $I$ is the unit or identity matrix.

Properties of inverse matrices:
1. An inverse matrix, if it exists, is unique.
2. The determinant of an inverse matrix is the reciprocal of the determinant of the original matrix.
3. An inverse matrix is a non-singular matrix.
4. The inverse of an inverse matrix is equal to the original matrix, i.e. $(A^{-1})^{-1} = A$.
5. The inverse of a transposed matrix is equal to the transpose of the inverse matrix, i.e. $(A')^{-1} = (A^{-1})'$.
6. The inverse of a product of matrices is equal to the product of the inverses of the matrices in reverse order, i.e. $(AB)^{-1} = B^{-1}A^{-1}$.

An inverse matrix, if it exists, is unique.

Proof:
Let us suppose that there are two inverse matrices $B$ and $C$ for a square matrix $A$. Then we have
$AB = BA = I$ ......... (i)
and $AC = CA = I$ ......... (ii)


Pre-multiplying (i) by $C$ and post-multiplying (ii) by $B$, we get
$CAB = C(AB) = CI = C$
and $CAB = (CA)B = IB = B$.
Thus $B = C$,
so that an inverse matrix is unique.

The determinant of an inverse matrix is the reciprocal of the determinant of the original matrix.

Proof:
For an inverse matrix we have $AA^{-1} = I$
$\Rightarrow |AA^{-1}| = |I|$
$\Rightarrow |A|\,|A^{-1}| = 1$
$\Rightarrow |A^{-1}| = \dfrac{1}{|A|} = |A|^{-1}$
Thus the determinant of an inverse matrix is the reciprocal of the determinant of the original matrix.
(Proved)

An inverse matrix is a non-singular matrix.

Proof:
For an inverse matrix we have $AA^{-1} = I$
$\Rightarrow |AA^{-1}| = |I|$
$\Rightarrow |A|\,|A^{-1}| = 1$
$\Rightarrow |A^{-1}| = \dfrac{1}{|A|} \neq 0$
So an inverse matrix is a non-singular matrix.
(Proved)

The inverse of an inverse matrix is equal to the original matrix, i.e. $(A^{-1})^{-1} = A$.

Proof:


For an inverse matrix we have $I = A^{-1}A$
$\Rightarrow (A^{-1})^{-1}I = (A^{-1})^{-1}A^{-1}A$
$\Rightarrow (A^{-1})^{-1} = IA$
$\Rightarrow (A^{-1})^{-1} = A$
(Proved)

The inverse of a transposed matrix is equal to the transpose of the inverse matrix, i.e. $(A')^{-1} = (A^{-1})'$.

Proof:
For an inverse matrix we have $I = A^{-1}A$
$\Rightarrow I' = (A^{-1}A)'$
$\Rightarrow I = A'(A^{-1})'$
$\Rightarrow (A')^{-1}I = (A')^{-1}A'(A^{-1})'$
$\Rightarrow (A')^{-1} = (A^{-1})'$
(Proved)

The inverse of a product of matrices is equal to the product of the inverses of the matrices in reverse order, i.e. $(AB)^{-1} = B^{-1}A^{-1}$.

Proof:
For two non-singular matrices $A$ and $B$ of the same order, we have
$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1}$
$= AIA^{-1}$
$= AA^{-1} = I$

Also we have
$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B$
$= B^{-1}IB$
$= B^{-1}B = I$

So $B^{-1}A^{-1}$ is the inverse matrix of $AB$.
(Proved)


If $A = (a_{ij})$ is a square matrix of order $n$ with $|A| \neq 0$, then $A^{-1} = \dfrac{\mathrm{adj}\,A}{|A|}$.

Proof:
We know that $\mathrm{adj}\,A = (C_{kj})'$, where $C_{kj}$ is the cofactor of $a_{kj}$ in $|A|$.
Therefore $A(\mathrm{adj}\,A) = (a_{ij})(C_{kj})' = (a_{ij})(C_{jk}) = (b_{ik})$,
where $b_{ik} = \displaystyle\sum_{j=1}^{n} a_{ij} C_{kj} = \begin{cases} |A| & \text{if } i = k \\ 0 & \text{if } i \neq k \end{cases}$
So all the diagonal elements of $A(\mathrm{adj}\,A)$ are equal to $|A|$ and the off-diagonal elements are 0.
$\therefore A(\mathrm{adj}\,A) = \begin{pmatrix} |A| & 0 & \cdots & 0 \\ 0 & |A| & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{pmatrix} = |A|\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = |A|\,I_n$
Similarly, $(\mathrm{adj}\,A)A = |A|\,I_n$.
$\therefore A(\mathrm{adj}\,A) = (\mathrm{adj}\,A)A = |A|\,I_n$
$\Rightarrow A\,\dfrac{\mathrm{adj}\,A}{|A|} = \dfrac{\mathrm{adj}\,A}{|A|}\,A = I_n$
Since $AA^{-1} = A^{-1}A = I$,
therefore $A^{-1} = \dfrac{\mathrm{adj}\,A}{|A|}$.
(Proved)

A necessary and sufficient condition for a square matrix $A$ to have an inverse is that $|A| \neq 0$.

Proof:
Necessary condition:
Let us suppose that the square matrix $A$ has an inverse matrix $A^{-1}$. Then we have

$AA^{-1} = I$
$\Rightarrow |AA^{-1}| = |I|$
$\Rightarrow |A|\,|A^{-1}| = 1$
$\Rightarrow |A| \neq 0$,
i.e. $A$ is non-singular.
Sufficient condition:
Let us suppose that $|A| \neq 0$. Then we can form the matrix $B = \dfrac{\mathrm{adj}\,A}{|A|}$.
Now $AB = A\,\dfrac{\mathrm{adj}\,A}{|A|} = \dfrac{A(\mathrm{adj}\,A)}{|A|} = \dfrac{|A|\,I}{|A|} = I$, and similarly $BA = \dfrac{(\mathrm{adj}\,A)A}{|A|} = \dfrac{|A|\,I}{|A|} = I$.
So $B$ is an inverse matrix of $A$.
(Proved)

Processes of finding the inverse:
There are three important processes for finding the inverse of a matrix. These are
1. the co-factor method,
2. the sweep-out method,
3. computation of the inverse matrix from a partitioned matrix.

Co-factor method:
If $A = (a_{ij})$ is a square matrix of order $n$, $\mathrm{adj}\,A = (A_{ji})$ is the adjoint matrix of $A$ (where $A_{ij}$ is the cofactor of $a_{ij}$ in $|A|$), and $|A|$ is the determinant of $A$, then in this method the inverse matrix is computed as
$A^{-1} = \dfrac{\mathrm{adj}\,A}{|A|} = \dfrac{1}{|A|}\begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix} = \begin{pmatrix} \frac{A_{11}}{|A|} & \frac{A_{21}}{|A|} & \cdots & \frac{A_{n1}}{|A|} \\ \frac{A_{12}}{|A|} & \frac{A_{22}}{|A|} & \cdots & \frac{A_{n2}}{|A|} \\ \vdots & & & \vdots \\ \frac{A_{1n}}{|A|} & \frac{A_{2n}}{|A|} & \cdots & \frac{A_{nn}}{|A|} \end{pmatrix}$

Sweep-out method:
This method consists of placing a unit matrix of the same order on the right-hand side of the original matrix and then converting the original matrix, by elementary row operations, into a unit matrix. In this process, the unit matrix on the right-hand side is converted into the inverse matrix of the original matrix. Thus we have


$(A \mid I) \longrightarrow (I \mid A^{-1})$

Partitioned method:
Let us suppose that a square matrix $A_0$ is written in partitioned form as $A_0 = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$,
where $A$, $B$, $C$ and $D$ are sub-matrices and $A$ and $D$ are non-singular sub-matrices. Let us suppose that $A_0^{-1}$ is written in partitioned form as
$A_0^{-1} = \begin{pmatrix} X & Y \\ Z & W \end{pmatrix}$.
Now $A_0 A_0^{-1} = I \Rightarrow \begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} X & Y \\ Z & W \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}$,
i.e. $\begin{pmatrix} AX + BZ & AY + BW \\ CX + DZ & CY + DW \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}$.
From this we get
$AX + BZ = I$ ......... (i)
$AY + BW = 0$ ......... (ii)
$CX + DZ = 0$ ......... (iii)
$CY + DW = I$ ......... (iv)
From (iii) we get $DZ = -CX \Rightarrow Z = -D^{-1}CX$.
Putting this value of $Z$ in (i) we get $AX - BD^{-1}CX = I \Rightarrow (A - BD^{-1}C)X = I \Rightarrow X = (A - BD^{-1}C)^{-1}$.
From (ii) we get $Y = -A^{-1}BW$.
Putting this value of $Y$ in (iv) we get $-CA^{-1}BW + DW = I \Rightarrow (D - CA^{-1}B)W = I \Rightarrow W = (D - CA^{-1}B)^{-1}$.
Thus
$A_0^{-1} = \begin{pmatrix} (A - BD^{-1}C)^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -D^{-1}C(A - BD^{-1}C)^{-1} & (D - CA^{-1}B)^{-1} \end{pmatrix}$

In particular, if $B = 0$ and $C = 0$ then $A_0 = \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix} \Rightarrow A_0^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & D^{-1} \end{pmatrix}$.

Rank of a Matrix

Rank of a matrix:


A non-zero matrix $A$ is said to have rank $r$ if at least one of its minors of order $r$ is different from zero, while every minor of order $(r+1)$, if any, is zero. Equivalently, the rank of a matrix is the maximum number of linearly independent rows or columns in the matrix.

Process of determining rank:


Sweep-out method: The rank of a matrix is generally determined by the sweep-out method. The
sweep-out method consists of elementary row operations which convert the given matrix into a
triangular form. The number of non-zero rows of the resulting triangular matrix is then the rank of
the original matrix.
Example:
2 3 1 1 
1 1 2 4 
Find the rank of the matrix  
3 1 3 2 
 
6 3 0 7 
Solution:
2 3 1 1 
 2 3 1 1   5 3 7 
1 1 2 4   0   
    2 2 2  R   1  , R   3  , R 3
 21   31   41  
 3 1 3 2  7 9 1  2  2
  0   
6 3 0 7   2 2 2
0 6 3 4 

2 3 1 1 
  2 3 1 1 
0  5  3  7   
 2 2 2 0  5  3  7 
 7  12   2 2 2
 33 22  R32    , R42     R43  1 
0 0   5  5  33 22 
 5 5  0 0 
 5 5 
 33 22 
 0 0 0 0 0 0 
5 5 
So that the rank of the given matrix is 3.
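The same answer can be confirmed numerically; the short sketch below (Python/NumPy, an illustration rather than part of the original notes) applies numpy.linalg.matrix_rank to the matrix of the example.

    import numpy as np

    A = np.array([[2,  3, -1, -1],
                  [1, -1, -2, -4],
                  [3,  1,  3, -2],
                  [6,  3,  0, -7]], dtype=float)
    print(np.linalg.matrix_rank(A))   # 3, in agreement with the sweep-out above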

Properties of Rank

i) Only the null matrix has no rank.


ii) Every non-null matrix has some rank.
iii) For any matrix A, ρ(A) = ρ(A′).
iv) The row rank and column rank of a matrix are equal.
v) The rank of the product of two matrices cannot exceed the rank of either matrix,
   i.e. ρ(AB) ≤ ρ(A) and ρ(AB) ≤ ρ(B).
vi) The rank of the sum of two matrices cannot exceed the sum of the ranks of the two
    matrices, i.e. ρ(A + B) ≤ ρ(A) + ρ(B).
vii) The rank of a matrix is unaltered when it is multiplied by a non-singular matrix.

viii) For two square matrices of order n, ρ(AB) ≥ ρ(A) + ρ(B) − n.
ix) For any square matrix A, ρ(A′A) = ρ(AA′) = ρ(A) = ρ(A′).
x) For a diagonal matrix, the rank is equal to the number of non-zero diagonal elements.
xi) For a unit matrix of order n, ρ(I_n) = n.
A quick numerical check of several of these properties is sketched below.
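The following Python/NumPy sketch (illustrative only, random matrices, not a proof) spot-checks properties iv), v), vi), viii) and ix).

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-3, 4, size=(4, 4)).astype(float)
    B = rng.integers(-3, 4, size=(4, 4)).astype(float)
    r = np.linalg.matrix_rank
    n = 4

    print(r(A) == r(A.T))                    # iv)  row rank = column rank
    print(r(A @ B) <= min(r(A), r(B)))       # v)   rank of a product
    print(r(A + B) <= r(A) + r(B))           # vi)  rank of a sum
    print(r(A @ B) >= r(A) + r(B) - n)       # viii) Sylvester-type inequality
    print(r(A.T @ A) == r(A))                # ix)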

The rank of a matrix is equal to the rank of its transpose matrix.

Proof:
Let A = (a_ij) be any m × n matrix.
Then the transpose matrix A′ = (a_ji) is an n × m matrix.
Let the rank of A be r and let B be an r × r sub-matrix of A such that |B| ≠ 0.
Also we know that the value of a determinant remains unaltered if its rows and columns are
interchanged,
i.e. |B′| = |B| ≠ 0, where B′ is evidently an r × r sub-matrix of A′.
∴ The rank of A′ is ρ(A′) ≥ r.
Again, let C be any (r + 1) × (r + 1) sub-matrix of A. Then by the definition of rank we must have
|C| = 0.
Also C′ is an (r + 1) × (r + 1) sub-matrix of A′, so we have |C′| = |C| = 0.
Therefore we conclude that there cannot be any (r + 1) × (r + 1) sub-matrix of A′ with non-zero
determinant.
∴ The rank of A′ is ρ(A′) ≤ r, i.e. it cannot be greater than r.
∴ The rank of A′ is r, which is also the rank of A.
The row rank and the column rank of a matrix are equal

Proof:
Let A = (a_ij) be an arbitrary m × n matrix,

      | a11  a12  ...  a1n |
  A = | a21  a22  ...  a2n |
      |  :    :         :  |
      | am1  am2  ...  amn |

and let R1, R2, ......, Rm denote its rows:
  R1 = (a11, a12, ..., a1n)
  R2 = (a21, a22, ..., a2n)
  ......
  Rm = (am1, am2, ..., amn)
Suppose the row rank is r and the following r vectors form a basis for the row space


  V1 = (b11, b12, ..., b1n)
  V2 = (b21, b22, ..., b2n)
  ......
  Vr = (br1, br2, ..., brn)
Then each of the row vectors is a linear combination of the vectors V1, V2, ....., Vr,
i.e.  R1 = α11 V1 + α12 V2 + .... + α1r Vr
      R2 = α21 V1 + α22 V2 + .... + α2r Vr
      ......
      Rm = αm1 V1 + αm2 V2 + .... + αmr Vr
where the α_ij are scalars.
Setting the i-th component of each of the above vector equations equal on both sides, we obtain
the following system of equations:
  a1i = α11 b1i + α12 b2i + ..... + α1r bri
  a2i = α21 b1i + α22 b2i + ..... + α2r bri
  ......
  ami = αm1 b1i + αm2 b2i + ..... + αmr bri
Thus for i = 1, 2, ...., n

  | a1i |         | α11 |         | α12 |                   | α1r |
  | a2i |  =  b1i | α21 |  +  b2i | α22 |  + ....... +  bri | α2r |
  |  :  |         |  :  |         |  :  |                   |  :  |
  | ami |         | αm1 |         | αm2 |                   | αmr |

So that each column of the matrix A is a linear combination of the r vectors
(α11, α21, ..., αm1)′, (α12, α22, ..., αm2)′, ........, (α1r, α2r, ..., αmr)′.
Thus the column space of the matrix A has dimension at most r,
i.e. column rank ≤ r, i.e. column rank ≤ row rank.

Similarly, considering the transpose of A, we obtain row rank ≤ column rank.


Thus the row rank and the column rank of the matrix A are equal.

The rank of the product of two matrices cannot exceed the rank of either of the two matrices,
i.e. ρ(AB) ≤ ρ(A) and ρ(AB) ≤ ρ(B).

Proof:


Let A = (a_ij) be an m × n matrix and let B be an n × p matrix whose rows are β1, β2, ....., βn:

      | a11  a12  ...  a1n |          | β1 |
  A = | a21  a22  ...  a2n |     B =  | β2 |
      |  :    :         :  |          |  : |
      | am1  am2  ...  amn |          | βn |

Now
       | a11β1 + a12β2 + ..... + a1nβn |
  AB = | a21β1 + a22β2 + ..... + a2nβn |
       |               :               |
       | am1β1 + am2β2 + ..... + amnβn |

This shows that the rows of the matrix AB are linear combinations of the rows β1, β2, ....., βn of
the matrix B. So the number of linearly independent rows of AB cannot exceed the number of
linearly independent rows of B.

Therefore, ρ(AB) ≤ ρ(B).

Similarly, it can be shown that the columns of the matrix AB are linear combinations of the
columns of the matrix A. So the number of linearly independent columns of AB cannot exceed the
number of linearly independent columns of A.

Therefore, ρ(AB) ≤ ρ(A).

Thus we have ρ(AB) ≤ ρ(A) and ρ(AB) ≤ ρ(B).

If two matrices A and B are of the same order m × n, then ρ(A + B) ≤ ρ(A) + ρ(B)


Proof:
Let A and B be two matrices of the same order m × n.
Let R1(A), R2(A), ....., Rm(A); R1(B), R2(B), ....., Rm(B) and R1(A + B), R2(A + B), ....., Rm(A + B)
denote the rows of the matrices A, B and A + B respectively.
Also let R_A, R_B and R_{A+B} denote the row spaces of the matrices A, B and A + B respectively.


Then R_A is spanned by {R1(A), R2(A), ....., Rm(A)},
R_B is spanned by {R1(B), R2(B), ....., Rm(B)},
and R_{A+B} is spanned by {R1(A + B), R2(A + B), ....., Rm(A + B)}.

Now the sum space R_A + R_B is spanned by {R1(A), ....., Rm(A), R1(B), ....., Rm(B)}.

But we know that dim(R_A + R_B) = dim(R_A) + dim(R_B) − dim(R_A ∩ R_B)
⇒ dim(R_A + R_B) ≤ dim(R_A) + dim(R_B) .............................. (i)

Again, since each row Ri(A + B) = Ri(A) + Ri(B) lies in R_A + R_B, the row space R_{A+B} is a
subspace of R_A + R_B.

We have dim(R_{A+B}) ≤ dim(R_A + R_B) ........................... (ii)
From (i) and (ii) we have dim(R_{A+B}) ≤ dim(R_A) + dim(R_B)
∴ ρ(A + B) ≤ ρ(A) + ρ(B)  (Proved)

If A and B are two square matrices of order n, then ρ(AB) ≥ ρ(A) + ρ(B) − n

Proof:
Let ρ(A) = r. Then there exist two non-singular matrices P and Q such that

  PAQ = | I_r  0 |        ⇒   A = P⁻¹ | I_r  0 | Q⁻¹
        |  0   0 |                    |  0   0 |

Let C = P⁻¹ | 0     0     | Q⁻¹.  Then we get  A + C = P⁻¹ I_n Q⁻¹ = P⁻¹Q⁻¹
            | 0  I_(n−r)  |

∴ ρ(A + C) = r + (n − r) = n, with ρ(A) = r and ρ(C) = n − r.
Again, since (A + C) is non-singular we have ρ((A + C)B) = ρ(B)
⇒ ρ(B) = ρ(AB + CB) ≤ ρ(AB) + ρ(CB) ............ (i)
Also we have ρ(CB) ≤ ρ(C).

Thus from (i) we get ρ(B) ≤ ρ(AB) + ρ(C)

⇒ ρ(B) ≤ ρ(AB) + n − r
⇒ ρ(B) ≤ ρ(AB) + n − ρ(A)
⇒ ρ(AB) ≥ ρ(A) + ρ(B) − n  (Proved)

Trace of a matrix
Trace of a matrix:
The sum of the diagonal elements of a square matrix is called the trace of that matrix.


Let A = (a_ij) be a square matrix of order n,

      | a11  a12  ...  a1n |
  A = | a21  a22  ...  a2n |
      |  :    :         :  |
      | an1  an2  ...  ann |

Then the trace of the matrix A is given by  tr(A) = a11 + a22 + ...... + ann = Σ_{i=1}^{n} a_ii

Properties of trace:

1. If A is a square matrix of order n and λ is a scalar, then tr(λA) = λ·tr(A).

Proof:

Let A = (a_ij) be a square matrix of order n. Then λA = (λa_ij).
Now tr(A) = a11 + a22 + ..... + ann
and tr(λA) = λa11 + λa22 + ..... + λann = λ(a11 + a22 + ..... + ann) = λ·tr(A)
∴ tr(λA) = λ·tr(A)  (Proved)

2. tr(A + B) = tr(A) + tr(B)
Proof:

Let A = (a_ij) and B = (b_ij) be two square matrices of order n.
Now tr(A) = a11 + a22 + ..... + ann and tr(B) = b11 + b22 + ..... + bnn.
Also A + B = (a_ij + b_ij), so

∴ tr(A + B) = (a11 + b11) + (a22 + b22) + ..... + (ann + bnn)
            = (a11 + a22 + ..... + ann) + (b11 + b22 + ..... + bnn)
            = tr(A) + tr(B)

3. tr(AB) = tr(BA)
Proof:

Let A = (a_ij) and B = (b_ij) be two square matrices of order n.
The i-th diagonal element of AB is  a_i1 b_1i + a_i2 b_2i + ... + a_in b_ni,
and the j-th diagonal element of BA is  b_j1 a_1j + b_j2 a_2j + ... + b_jn a_nj.

∴ tr(AB) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij b_ji

and tr(BA) = Σ_{j=1}^{n} Σ_{i=1}^{n} b_ji a_ij = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij b_ji = tr(AB)

∴ tr(AB) = tr(BA)

4. tr(I_n) = n
Proof:
We have I_n = diag(1, 1, ....., 1), the unit matrix of order n.
∴ tr(I_n) = 1 + 1 + ..... + 1 = n


5. tr(A′) = tr(A)
Proof:
Let A = (a_ij); then A′ = (a_ji).
Now tr(A) = a11 + a22 + .... + ann
and tr(A′) = a11 + a22 + .... + ann, since the diagonal elements are unchanged by transposition.
∴ tr(A′) = tr(A)

6. tr(ABC) = tr(BCA) = tr(CAB)

Proof:
We know that tr(AB) = tr(BA). Using this fact we have
tr(ABC) = tr((AB)·C) = tr(C·(AB)) = tr(CAB)
Also we have tr(ABC) = tr(A·(BC)) = tr((BC)·A) = tr(BCA)
∴ tr(ABC) = tr(BCA) = tr(CAB)

7. If C is an orthogonal matrix then tr(C′AC) = tr(A)

Proof:
If C is an orthogonal matrix then C′C = CC′ = I.
Now tr(C′AC) = tr(ACC′) = tr(AI) = tr(A)
∴ tr(C′AC) = tr(A), where C is an orthogonal matrix.


8. If P is a non-singular matrix then tr(P⁻¹AP) = tr(A)
Proof:
If P is a non-singular matrix then PP⁻¹ = P⁻¹P = I.
Now tr(P⁻¹AP) = tr(APP⁻¹) = tr(AI) = tr(A)
∴ tr(P⁻¹AP) = tr(A), where P is a non-singular matrix.
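The trace properties proved above are easy to verify numerically; the following Python/NumPy sketch (random matrices, illustrative only and not part of the original notes) checks properties 1 to 8.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    C = rng.standard_normal((3, 3))
    P = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # almost surely non-singular
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # an orthogonal matrix

    print(np.isclose(np.trace(2.5 * A), 2.5 * np.trace(A)))             # 1
    print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))       # 2
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))                 # 3
    print(np.trace(np.eye(3)) == 3)                                     # 4
    print(np.isclose(np.trace(A.T), np.trace(A)))                       # 5
    print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))         # 6
    print(np.isclose(np.trace(Q.T @ A @ Q), np.trace(A)))               # 7
    print(np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A)))  # 8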


Idempotent Matrix

Idempotent Matrix:
Any square matrix A is called an idempotent matrix if A² = A.

Example: Let A = | 1/2  1/2 |
                 | 1/2  1/2 |

Now A² = A·A = | 1/2·1/2 + 1/2·1/2   1/2·1/2 + 1/2·1/2 |  =  | 1/2  1/2 |  =  A
               | 1/2·1/2 + 1/2·1/2   1/2·1/2 + 1/2·1/2 |     | 1/2  1/2 |

So that A is an idempotent matrix.

Properties of Idempotent Matrix:


1. Characteristic roots of an idempotent matrix are either 0 or 1.
Proof:
If A is an idempotent matrix then A² = A. If λ is a characteristic root and X is the
corresponding characteristic vector of the matrix A, then
  AX = λX
⇒ A·AX = A·λX
⇒ A²X = λAX
⇒ AX = λ·λX          [since A² = A and AX = λX]
⇒ λX = λ²X
⇒ λ²X − λX = 0
⇒ λ(λ − 1)X = 0
Since X ≠ 0, we must have λ(λ − 1) = 0
∴ λ = 0 or λ = 1
Thus the characteristic roots of an idempotent matrix are either 0 or 1.

2. Any non-singular idempotent matrix is a unit matrix.


Proof:
If A is a non-singular idempotent matrix then A⁻¹ exists and A² = A.

Now we have  A² = A
⇒ A·A = A
⇒ A⁻¹·A·A = A⁻¹·A
⇒ I·A = I
⇒ A = I

3. Rank of an idempotent matrix is equal to its trace.

Proof:
Let us suppose that A is an idempotent matrix with rank r. Then there exists an orthogonal
matrix P such that P′AP is a diagonal matrix with r diagonal elements equal to 1 and the
remaining (n − r) diagonal elements equal to 0.
Now tr(P′AP) = 1 + 1 + ....... + 1 + 0 + 0 + ...... + 0 = r
But tr(P′AP) = tr(APP′) = tr(AI) = tr(A)
So tr(A) = tr(P′AP) = r = ρ(A)
∴ ρ(A) = tr(A)
Thus the rank of an idempotent matrix is equal to its trace.

4. If A is an idempotent matrix and P is an orthogonal matrix, then P′AP is an idempotent matrix.

Proof:
Since A is idempotent and P is an orthogonal matrix, we have A² = A and P′P = PP′ = I.

Now (P′AP)² = (P′AP)(P′AP)
            = P′A(PP′)AP
            = P′A·I·AP
            = P′A²P
            = P′AP
∴ (P′AP)² = P′AP
So that P′AP is an idempotent matrix.

5. For any idempotent matrix A, (I − A) is also an idempotent matrix.

Proof:
If A is an idempotent matrix then A² = A.

Now (I − A)² = (I − A)(I − A)
             = I − IA − AI + A²
             = I − 2A + A²
             = I − 2A + A
             = I − A
So that (I − A) is an idempotent matrix.

Difference between idempotent matrix and identity matrix:


1. An identity matrix is a diagonal matrix whose diagonal elements are all equal to 1. But any square
matrix A is called an idempotent matrix if A² = A.
2. Rank of an identity matrix is equal to its order. But rank of an idempotent matrix is equal to its
trace.
3. Identity matrix is a non-singular matrix. But idempotent matrix may or may not be non-singular
matrix.
4. Identity matrix is idempotent matrix. But idempotent matrix may or may not be identity matrix.
Problem:
1 1 
 
 1 2
Compute B  4  X  X X  X  and show that B is idempotent. Find   B 
1
Given X 
1 1 
 
1 3
Solution:
1 1 
 
1 2 1 1 1 1  4 7 
X  , X     X X    X X  11
1 1  1 2 1 3  7 15 
 
1 3
1 1
 
1  15 7   15  117   1 2   15  117   1 1 1 1 
 X X    
1 1
 11
 7  Now, X X X X   
11
4  
11  7 4    11 114   1 1    117 11   1 2 1 3
 
1 3
1 1   115 2
11
5
11  111 
   4 
 1 2   118 1
11
8
11  116   112 3
11
2
11 11 
  5
 1 1    113 111  113 5 
11 
 11 112 5
11  111 
   1 4 
1 3   11 11  11 11 
1 9

 1 0 0 0   115 2
11
5
11  111   116  112  115 11 
1

   4   2 
 0 1 0 0   112 3 2
11    11 118  112  114 
 B   XX X X 
1
   11 11

 0 0 1 0   115 2
11
5
11  111    115  112 6
11
1 
11
   1 4   
 0 0 0 1    11 11  11 11   11  11 11
1 9 1 4 1 2
11 

223605 Linear Algebra


38
Professor Biplab Bhattacharjee
Dept Of Statistics
Govt. B M College, Barishal

Again B2    X  X X  X    X  X X  X   2   X  X X  X   X  X X  X    X  X X  X X  X X  X 
1 1 1 1 1 1
    
   X  X X  X   X  X X  X   X  X X  X     X  X X  X  
1 1 1 1
   
So that B is an idempotent matrix.

Now,   B   tr  B   tr   4  X  X X  X   tr   4   tr  X  X X  X   4  tr  X X  X X 
1 1 1
     
 4  tr  2   4  2  2
 Rank of B is 2.
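The computation above can be reproduced numerically; the sketch below (Python/NumPy, an illustration only) rebuilds B = I₄ − X(X′X)⁻¹X′ and confirms that B² = B, tr(B) = 2 and ρ(B) = 2.

    import numpy as np

    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 1.0],
                  [1.0, 3.0]])
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # X (X'X)^{-1} X'
    B = np.eye(4) - H

    print(np.allclose(B @ B, B))              # True: B is idempotent
    print(np.isclose(np.trace(B), 2.0))       # True: tr(B) = 4 - 2 = 2
    print(np.linalg.matrix_rank(B))           # 2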


System of Equations

Non-homogeneous system AX = B:
  If ρ(A) ≠ ρ[A B] : inconsistent, no solution.
  If ρ(A) = ρ[A B] : consistent;
      if ρ(A) = ρ[A B] = n : unique solution,
      if ρ(A) = ρ[A B] < n : infinitely many solutions.

Homogeneous system AX = 0:
  If ρ(A) = n : trivial solution only.
  If ρ(A) < n : non-trivial solutions.

Linear Homogeneous Equation :


Let us consider a system of m equations in n variables as
  a11 x1 + a12 x2 + ........ + a1n xn = b1
  a21 x1 + a22 x2 + ........ + a2n xn = b2
  ..........................................
  am1 x1 + am2 x2 + ........ + amn xn = bm

  | a11  a12  ...  a1n | | x1 |   | b1 |
⇒ | a21  a22  ...  a2n | | x2 | = | b2 |   ⇒  AX = B ............... (i)
  |  :    :         :  | |  : |   |  : |
  | am1  am2  ...  amn | | xn |   | bm |

The system of m equations in n variables AX = B is called a system of homogeneous equations if B = 0.

Linear Non-homogeneous Equation :


Let us consider a system of m equations in n variables as

  a11 x1 + a12 x2 + ........ + a1n xn = b1
  a21 x1 + a22 x2 + ........ + a2n xn = b2
  ..........................................
  am1 x1 + am2 x2 + ........ + amn xn = bm

  | a11  a12  ...  a1n | | x1 |   | b1 |
⇒ | a21  a22  ...  a2n | | x2 | = | b2 |   ⇒  AX = B ............... (i)
  |  :    :         :  | |  : |   |  : |
  | am1  am2  ...  amn | | xn |   | bm |

The system of m equations in n variables AX = B is called a system of non-homogeneous equations if B ≠ 0.
Difference between homogeneous and non-homogeneous system of equations:
Homogeneous system of equations:
  1. The matrix equation AX = B with B = 0, a system of m equations in n unknowns x1, x2, ....., xn,
     is called a system of linear homogeneous equations.
  2. The trivial solution always exists.
  3. A non-trivial solution exists only if A is a singular matrix, and the appearance of one non-trivial
     solution automatically implies the existence of an infinite number of non-trivial solutions.

Non-homogeneous system of equations:
  1. The matrix equation AX = B with B ≠ 0, a system of m equations in n unknowns x1, x2, ....., xn,
     is called a system of linear non-homogeneous equations.
  2. No trivial solution exists.
  3. A unique solution exists only if A is a non-singular matrix, and an infinite number of solutions
     exists if the system has two distinct solutions.
Consistent and inconsistent:
1. For non-homogeneous system of equation:
Let us consider a system of m equations in n variables as
  a11 x1 + a12 x2 + ........ + a1n xn = b1
  a21 x1 + a22 x2 + ........ + a2n xn = b2
  ..........................................
  am1 x1 + am2 x2 + ........ + amn xn = bm

i.e. AX = B ............... (i)
If the coefficient matrix A and the augmented matrix [A B] have the same rank, then the system of
equations AX = B is said to be consistent.
But if the coefficient matrix A and the augmented matrix [A B] have different ranks, then the system
of equations AX = B is said to be inconsistent.
A consistent non-homogeneous system of equations has just one or infinitely many solutions,
whereas an inconsistent system has no solution at all.

2. For homogeneous system of equation:

Let us consider a system of linear homogeneous equations given by AX = 0, where A is the m × n
coefficient matrix, X is the n × 1 column vector of n unknowns and 0 is the m × 1 column vector of
zeros. Let the rank of A be r. Then the system of equations is said to be
a) Consistent if r < n;
b) Inconsistent if r = n, i.e. the system has no solution other than the trivial solution.

Condition for which a system of non-homogeneous equation will have unique solution, no solution and
infinitely many solution

Let us consider a system of m linear non-homogeneous equations in n unknowns given by AX = B,
where A is the m × n coefficient matrix, X is the n × 1 column vector of n unknowns and B is the
m × 1 column vector of constants.
Thus the coefficient matrix is A = (a_ij); i = 1, 2, ...., m; j = 1, 2, ...., n,
and the augmented matrix is

          | a11  a12  ...  a1n  b1 |
  [A B] = | a21  a22  ...  a2n  b2 |
          |  :    :         :    : |
          | am1  am2  ...  amn  bm |

Let rank of A = r_A and rank of [A B] = r_AB. Then
a) the system has a unique solution if r_A = r_AB = n;
b) the system has no solution if r_A ≠ r_AB;
c) the system has infinitely many solutions if r_A = r_AB < n.
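These rank conditions translate directly into a small decision routine. The sketch below (Python/NumPy, illustrative only; the helper name classify_system is hypothetical) reports which of the three cases a given system falls in.

    import numpy as np

    def classify_system(A, b):
        """Classify AX = B by comparing rank(A), rank([A B]) and n."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float).reshape(-1, 1)
        n = A.shape[1]
        rA = np.linalg.matrix_rank(A)
        rAB = np.linalg.matrix_rank(np.hstack([A, b]))
        if rA != rAB:
            return "inconsistent: no solution"
        if rA == n:
            return "consistent: unique solution"
        return "consistent: infinitely many solutions"

    A = [[1, 1, 1], [1, 2, 3], [1, 2, 3]]
    print(classify_system(A, [6, 10, 10]))   # consistent: infinitely many solutions
    print(classify_system(A, [6, 10, 11]))   # inconsistent: no solution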

A necessary and sufficient condition for a system of non-homogeneous equations AX = B to be
consistent is that the rank of the coefficient matrix is equal to the rank of the augmented matrix.
Proof :
Let us consider the system of equations AX = B, where the coefficient matrix

      | a11  a12  ...  a1n |
  A = | a21  a22  ...  a2n |  =  (α1  α2  ...  αn),   X = (x1, x2, ..., xn)′,   B = (b1, b2, ..., bm)′,
      |  :    :         :  |
      | am1  am2  ...  amn |

αj denoting the j-th column of A. The augmented matrix is [A B]. Then the system of equations can
be written as

  (α1  α2  ...  αn) X = B,   or   α1 x1 + α2 x2 + .... + αn xn = B ......... (1)


Let us suppose that the coefficient matrix A has rank r and that its first r columns (α1, α2, ..., αr) are
linearly independent. Then each of the remaining columns is a linear combination of
(α1, α2, ..., αr).
Sufficient condition:
Let us suppose that ρ([A B]) = ρ(A) = r. Then the number of linearly independent columns
of the augmented matrix [A B] is r, and these independent columns are (α1, α2, ..., αr). So B is a
linear combination of (α1, α2, ..., αr). Then we can find some constants k1, k2, ......, kr, not
all zero, such that  k1 α1 + k2 α2 + ...... + kr αr = B
⇒ k1 α1 + k2 α2 + ...... + kr αr + 0·α(r+1) + 0·α(r+2) + ...... + 0·αn = B ................ (2)
Comparing (1) and (2) we get x1 = k1, x2 = k2, ......, xr = kr, x(r+1) = 0, ......, xn = 0.
Since the given system of equations has a solution as indicated above, the given system of
equations is consistent.
Necessary condition:
Let us suppose that the given system of equations is consistent, having some solution
s1, s2, ....., sn. Then we have  α1 s1 + α2 s2 + ...... + αn sn = B ......... (3)
We have already assumed that the coefficient matrix A has rank r and that its first r columns
(α1, α2, ..., αr) are linearly independent. So the other columns (α(r+1), α(r+2), ..., αn) are linearly
dependent and can be expressed as linear combinations of (α1, α2, ..., αr). Then (3) can be
restated in terms of (α1, α2, ..., αr) alone, say  α1 t1 + α2 t2 + ...... + αr tr = B.
This shows that B is also a linear combination of (α1, α2, ..., αr). Then the number of linearly
independent columns of the augmented matrix [A B] is r, i.e. ρ([A B]) = ρ(A) = r.
Thus the rank of the coefficient matrix is equal to the rank of the augmented matrix.

A system of non-homogeneous equations has no solution, unique solution or infinitely many solutions.
Proof :
No Solution:
If ρ(A) ≠ ρ([A B]) then there will be no solution for the given system of equations. Because the
columns of A are included in the augmented matrix [A B], and if ρ(A) ≠ ρ([A B]) then the constant
column B must be independent of the columns of A. Then there is no set of values
for x1, x2, ..., xn satisfying the equation α1 x1 + α2 x2 + .... + αn xn = B. So the given system of
equations has no solution in this case.

Unique Solution:
If ρ(A) = ρ([A B]) = r = n, where n is the number of variables, then the given system of equations has
only one solution, i.e. a unique solution. Because we can select n independent equations in n
variables, and the solution of these independent equations, obtained by the inverse matrix or
otherwise, is unique. The remaining (m − n) dependent equations are then satisfied by this unique
solution.


Infinitely Many Solutions:


If ρ(A) = ρ([A B]) = r < n then the given system of equations has an infinite number of solutions.
Because we can select r independent equations in n variables, where the (n − r) free variables can be
given arbitrary values. These r independent equations can then be solved for r variables in terms of
the constants and the arbitrary values of the (n − r) free variables. Since the (n − r) free variables can
be given arbitrary values in an infinite number of ways, there are an infinite number of solutions of
the given system of equations in this case.

The number of linearly independent solutions of m linear homogeneous equations in n variables


AX = 0 is n − r, where r = ρ(A).

Proof :
Let the rank of A be r, so that the coefficient matrix A has r linearly independent columns. Let the
first r columns from the left of the matrix A be linearly independent.
If c_i is the i-th column of A, of order m × 1 (i = 1, 2, ...., n), then we can write A = (c1 c2 ... cn),
where c1, c2, ..., cr are linearly independent, and the system of equations AX = 0 can be
written as

  (c1 c2 ... cn)(x1, x2, ..., xn)′ = 0
⇒ c1 x1 + c2 x2 + ...... + cn xn = 0

Since ρ(A) = r, each of the columns c(r+1), c(r+2), ..., cn can be expressed as a linear combination
of c1, c2, ..., cr as given below:

  c(r+1) = λ11 c1 + λ12 c2 + ...... + λ1r cr
  c(r+2) = λ21 c1 + λ22 c2 + ...... + λ2r cr
  .... .... .... .... .... .... .... ....
  cn = λ(n−r)1 c1 + λ(n−r)2 c2 + ...... + λ(n−r)r cr

⇒ λ11 c1 + λ12 c2 + ...... + λ1r cr − 1·c(r+1) + 0·c(r+2) + ...... + 0·cn = 0
  λ21 c1 + λ22 c2 + ...... + λ2r cr + 0·c(r+1) − 1·c(r+2) + ...... + 0·cn = 0
  .... .... .... .... .... .... .... .... .... .... .... .... .... .... ....
  λ(n−r)1 c1 + λ(n−r)2 c2 + ...... + λ(n−r)r cr + 0·c(r+1) + 0·c(r+2) + ...... − 1·cn = 0

∴ X1 = (λ11, λ12, ..., λ1r, −1, 0, ..., 0)′,
  X2 = (λ21, λ22, ..., λ2r, 0, −1, ..., 0)′,
  ............ ,
  X(n−r) = (λ(n−r)1, λ(n−r)2, ..., λ(n−r)r, 0, 0, ..., −1)′

These provide (n − r) solutions of the equation AX = 0.

Now we show that these (n − r) solutions form a linearly independent set of vectors.
Let β1 X1 + β2 X2 + ...... + β(n−r) X(n−r) = 0.
Comparing the (r + 1)-th, (r + 2)-th, ........, n-th components of the vectors on the left we get

  β1(−1) + β2(0) + ...... + β(n−r)(0) = 0
  β1(0) + β2(−1) + ...... + β(n−r)(0) = 0
  .... .... .... .... .... .... .... .... ....
  β1(0) + β2(0) + ...... + β(n−r)(−1) = 0

⇒ β1 = β2 = ...... = β(n−r) = 0

Since all the β's are zero, the (n − r) solutions are linearly independent.

A necessary and sufficient condition for a system of m homogeneous equations in n variables to have
a non-trivial (non-zero) solution is that the rank of the coefficient matrix is less than n.
Proof :
Let us consider the system of m homogeneous equations in n variables given by AX = 0,

or  (α1  α2  ...  αn)(x1, x2, ..., xn)′ = 0,   or   α1 x1 + α2 x2 + .... + αn xn = 0 ......... (1)

Sufficient condition:
Let us suppose that ρ(A) < n, so the columns of the matrix A are linearly dependent. Then we can
find a set of constants c1, c2, ......, cn, not all zero, such that
  c1 α1 + c2 α2 + ...... + cn αn = 0 .................... (2)
Comparing (1) and (2) we get x1 = c1, x2 = c2, .........., xn = cn.
So the given condition is sufficient for getting non-zero solutions.
Necessary condition:
Let us suppose that the given system of equations has some non-zero solution γ1, γ2, ........, γn.
Then we have
  α1 γ1 + α2 γ2 + .......... + αn γn = 0.
This shows that (α1, α2, ..., αn) are linearly dependent. Therefore the rank of the coefficient
matrix is less than n.

The system of homogeneous equations AX = 0 has only the trivial (zero) solution if the rank of the
coefficient matrix A is equal to the number of its columns.
Proof :
Let us consider the system of m homogeneous equations in n variables given by AX = 0, where
A = (a_ij) is the m × n coefficient matrix.
Let the rank of A be n, where n is the number of columns (unknowns).
Since A is a matrix of full rank, A is a non-singular matrix and A⁻¹ exists. Hence we get
  AX = 0
⇒ A⁻¹(AX) = A⁻¹·0 = 0
⇒ (A⁻¹A)X = 0
⇒ I_n X = 0  ⇒  X = 0
⇒ x1 = 0, x2 = 0, ........., xn = 0
So the system has only the trivial solution if ρ(A) = n. Since A⁻¹ is unique, the trivial solution is the
only solution of AX = 0.
Problem:
Test the consistency of the following equations and solve them if possible.


  x + 2y − z = 3
  3x − y + 2z = 1
  2x − 2y + 3z = 2
  x − y + z = −1
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 1   2  −1 |  3 |
          | 3  −1   2 |  1 |
          | 2  −2   3 |  2 |
          | 1  −1   1 | −1 |

Applying R21(−3), R31(−2), R41(−1) and then R32(−6/7), R42(−3/7), R43(1/5):

  | 1   2   −1  |  3  |        | 1   2   −1   |   3   |
  | 0  −7    5  | −8  |        | 0  −7    5   |  −8   |
  | 0  −6    5  | −4  |   →    | 0   0   5/7  | 20/7  |
  | 0  −3    2  | −4  |        | 0   0    0   |   0   |

Here both the coefficient matrix and the augmented matrix have rank 3, which is equal to the
number of variables. So the given system of equations is consistent and has a unique solution.
The reduced equations are
  x + 2y − z = 3 ....................... (i)
  −7y + 5z = −8 ....................... (ii)
  (5/7)z = 20/7 ...................... (iii)
From (iii) we get z = 4.
Putting z = 4 in (ii):  −7y + 20 = −8  ⇒  −7y = −28  ⇒  y = 4.
Putting z = 4 and y = 4 in (i):  x + 8 − 4 = 3  ⇒  x = −1.
So the required solution is x = −1, y = 4 and z = 4.
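A quick numerical cross-check of this problem (Python/NumPy, illustrative only): least squares applied to the four equations returns the same x = −1, y = 4, z = 4 with zero residual, confirming consistency.

    import numpy as np

    A = np.array([[1,  2, -1],
                  [3, -1,  2],
                  [2, -2,  3],
                  [1, -1,  1]], dtype=float)
    b = np.array([3, 1, 2, -1], dtype=float)

    sol, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(sol)                          # approximately [-1.  4.  4.]
    print(np.allclose(A @ sol, b))      # True, so the system is consistent
    print(rank)                         # 3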
Problem:
Test the consistency of the following equations and solve them if possible.
  x1 + x2 − 4x3 = −2
  x1 − 2x2 + x3 = −1
  x1 + x2 + x3 = 0
  2x1 − x2 + 2x3 = −1
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 1   1  −4 | −2 |
          | 1  −2   1 | −1 |
          | 1   1   1 |  0 |
          | 2  −1   2 | −1 |

Applying R21(−1), R31(−1), R41(−2) and then R42(−1), R43(−1):

  | 1   1  −4 | −2 |        | 1   1  −4 | −2 |
  | 0  −3   5 |  1 |        | 0  −3   5 |  1 |
  | 0   0   5 |  2 |   →    | 0   0   5 |  2 |
  | 0  −3  10 |  3 |        | 0   0   0 |  0 |

Here both the coefficient matrix and the augmented matrix have rank 3, which is equal to the
number of variables. So the given system of equations is consistent and has a unique solution.
The reduced equations are
  x1 + x2 − 4x3 = −2 ....................... (i)
  −3x2 + 5x3 = 1 ....................... (ii)
  5x3 = 2 ...................... (iii)
From equation (iii) we get x3 = 2/5.
Putting x3 = 2/5 in (ii):  −3x2 + 2 = 1  ⇒  −3x2 = −1  ⇒  x2 = 1/3.
Putting x3 = 2/5 and x2 = 1/3 in (i):  x1 + 1/3 − 8/5 = −2  ⇒  x1 = −2 − 1/3 + 8/5 = −11/15.
So the required solution is x1 = −11/15, x2 = 1/3 and x3 = 2/5.
Problem:
Test the consistency of the following equations and solve them if possible.
  2x1 + 3x2 + 5x3 = 6
  x1 + 2x2 + 3x3 = 2
  −x2 − x3 = 2
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 2   3   5 | 6 |
          | 1   2   3 | 2 |
          | 0  −1  −1 | 2 |

Applying R21(−1/2) and then R32(2):

  | 2   3    5  |  6 |        | 2   3    5  |  6 |
  | 0  1/2  1/2 | −1 |   →    | 0  1/2  1/2 | −1 |
  | 0  −1   −1  |  2 |        | 0   0    0  |  0 |

Here both the coefficient matrix and the augmented matrix have rank 2, which is less than the
number of variables. So the given system of equations is consistent with an infinite number of
solutions.
The reduced equations are
  2x1 + 3x2 + 5x3 = 6 ....................... (i)
  (1/2)x2 + (1/2)x3 = −1 ....................... (ii)
Let x3 = a, where a is an arbitrary value. From equation (ii) we get
  (1/2)x2 + (1/2)a = −1  ⇒  x2 = −a − 2  ⇒  x2 = −(a + 2)
Putting x3 = a and x2 = −(a + 2) in equation (i) we get
  2x1 − 3(a + 2) + 5a = 6  ⇒  2x1 − 3a − 6 + 5a = 6  ⇒  2x1 = 12 − 2a  ⇒  x1 = 6 − a
So the general solution is x1 = 6 − a, x2 = −(a + 2) and x3 = a.
Problem:
Investigate for what values of λ and μ the system of equations
  x + y + z = 6
  x + 2y + 3z = 10
  x + 2y + λz = μ
has (i) no solution (ii) a unique solution (iii) an infinite number of solutions.
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 1  1  1 |  6 |
          | 1  2  3 | 10 |
          | 1  2  λ |  μ |

Applying R21(−1), R31(−1) and then R32(−1):

  | 1  1    1   |   6   |        | 1  1    1   |    6    |
  | 0  1    2   |   4   |   →    | 0  1    2   |    4    |
  | 0  1   λ−1  |  μ−6  |        | 0  0   λ−3  |  μ−10   |

When λ = 3 and μ ≠ 10, then ρ(A) = 2 and ρ([A B]) = 3. Since the ranks of the two matrices are
unequal, the given system of equations has no solution in this case.

When λ ≠ 3 and μ has any value, then ρ(A) = 3 and ρ([A B]) = 3. Since the ranks of the two matrices
are equal, and equal to the number of variables, the given system of equations has a unique
solution in this case.

When λ = 3 and μ = 10, then ρ(A) = 2 and ρ([A B]) = 2. Since the ranks of the two matrices are equal
and less than the number of variables, the given system of equations has an infinite number of
solutions in this case.

Problem:
Test the consistency of the following equations and solve them if possible.
  5x + 3y + 7z − 4 = 0
  3x + 26y + 2z − 9 = 0
  7x + 2y + 10z − 5 = 0
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 5   3   7 | 4 |
          | 3  26   2 | 9 |
          | 7   2  10 | 5 |

Applying R21(−3/5), R31(−7/5) and then R32(1/11):

  | 5    3      7    |   4   |        | 5    3      7    |   4   |
  | 0  121/5  −11/5  |  33/5 |   →    | 0  121/5  −11/5  |  33/5 |
  | 0  −11/5   1/5   |  −3/5 |        | 0    0      0    |   0   |

Here both the coefficient matrix and the augmented matrix have rank 2, which is less than the
number of variables. So the given system of equations is consistent with an infinite number of
solutions.
The reduced equations are
  5x + 3y + 7z = 4 ....................... (i)
  (121/5)y − (11/5)z = 33/5 ....................... (ii)
Let z = a, where a is an arbitrary value. From equation (ii) we get
  121y = 11a + 33  ⇒  y = (a + 3)/11
Putting z = a and y = (a + 3)/11 in equation (i) we get
  5x + 3(a + 3)/11 + 7a = 4
⇒ 55x + 3a + 9 + 77a = 44
⇒ 55x = 35 − 80a
⇒ x = −(16a − 7)/11
So the general solution is x = −(16a − 7)/11, y = (a + 3)/11 and z = a.

Problem:
Test the consistency of the following equations and solve them if possible.
  2x + 6y = 0
  6x + 20y + 6z + 3 = 0
  6y + 18z + 1 = 0
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 2   6   0 |  0 |
          | 6  20   6 | −3 |
          | 0   6  18 | −1 |

Applying R21(−3) and then R32(−3):

  | 2   6   0 |  0 |        | 2   6   0 |  0 |
  | 0   2   6 | −3 |   →    | 0   2   6 | −3 |
  | 0   6  18 | −1 |        | 0   0   0 |  8 |

Here the ranks of the coefficient matrix and of the augmented matrix are not the same. Hence the
given system of equations is inconsistent and has no solution.

Problem:
For what values of λ do the equations
  x + y + z = 1
  x + 2y + 4z = λ
  x + 4y + 10z = λ²
have a solution? Solve them completely in each case.

Solution:
For the given system of equations the augmented matrix is

  [A B] = | 1  1   1 |  1  |
          | 1  2   4 |  λ  |
          | 1  4  10 |  λ² |

Applying R21(−1), R31(−1) and then R32(−3):

  | 1  1  1 |   1    |        | 1  1  1 |     1      |
  | 0  1  3 |  λ−1   |   →    | 0  1  3 |    λ−1     |
  | 0  3  9 |  λ²−1  |        | 0  0  0 |  λ²−3λ+2   |

The given system of equations is consistent if
  λ² − 3λ + 2 = 0  ⇒  (λ − 2)(λ − 1) = 0  ⇒  λ = 1 or λ = 2.
When λ = 1, the reduced equations are
  x + y + z = 1,  y + 3z = 0.  Putting z = 0 we get y = 0 and x = 1.
  Thus a set of solutions is (1, 0, 0).
When λ = 2, the reduced equations are
  x + y + z = 1,  y + 3z = 1.  Putting z = 0 we get y = 1 and x = 0.
  Thus a set of solutions is (0, 1, 0).
Problem:
Test the consistency of the following equations and solve them if possible.
  x + y + z = 1
  2x − 3y + 7z = 0
  3x − 2y + 8z = 4
Solution:
For the given system of equations the augmented matrix is

  [A B] = | 1   1  1 | 1 |
          | 2  −3  7 | 0 |
          | 3  −2  8 | 4 |

Applying R21(−2), R31(−3) and then R32(−1):

  | 1   1  1 |  1 |        | 1   1  1 |  1 |
  | 0  −5  5 | −2 |   →    | 0  −5  5 | −2 |
  | 0  −5  5 |  1 |        | 0   0  0 |  3 |

Here the ranks of the coefficient matrix and of the augmented matrix are not the same. Hence the
given system of equations is inconsistent and has no solution.
Problem:
Find the solution of the following equations by matrix notation
  2x1 − x2 + x3 = 0
  3x1 + 2x2 + x3 = 0
  x1 − 3x2 + 5x3 = 0
Solution:
For the given system of equations the coefficient matrix is

  A = | 2  −1  1 |
      | 3   2  1 |
      | 1  −3  5 |

Applying R21(−3/2), R31(−1/2) and then R32(5/7):

  | 2   −1     1  |        | 2   −1     1   |
  | 0   7/2  −1/2 |   →    | 0   7/2  −1/2  |
  | 0  −5/2   9/2 |        | 0    0   29/7  |

Here the rank of the coefficient matrix is 3, which is equal to the number of variables. Hence the
given system of equations has only the trivial solution, i.e. x1 = x2 = x3 = 0.
Problem:
Find the solution of the following equations by matrix notation
  x + 2y − z + w = 0
  x − y + 2z − 3w = 0
  4x − y + 5z − 8w = 0
Solution:
For the given system of equations the coefficient matrix is

  A = | 1   2  −1   1 |
      | 1  −1   2  −3 |
      | 4  −1   5  −8 |

Applying R21(−1), R31(−4) and then R32(−3):

  | 1   2  −1    1 |        | 1   2  −1   1 |
  | 0  −3   3   −4 |   →    | 0  −3   3  −4 |
  | 0  −9   9  −12 |        | 0   0   0   0 |

Here the rank of the coefficient matrix is 2, which is less than the number of variables. Hence the
given system of equations has non-trivial solutions. The reduced equations are
  x + 2y − z + w = 0 .................... (i)
  −3y + 3z − 4w = 0 .................... (ii)
Here we have (n − r) = (4 − 2) = 2 free variables.
Let us take z = 1 and w = 0. Then from equation (ii) we get
  −3y + 3(1) − 4(0) = 0  ⇒  y = 1
Putting w = 0, z = 1 and y = 1 in equation (i) we get
  x + 2(1) − 1 + 0 = 0  ⇒  x = −1
Hence one non-trivial solution is (−1, 1, 1, 0).


Characteristic Root and characteristic vector:


If for a square matrix A of order n there exist a scalar λ and a non-zero vector X ≠ 0 such that
AX = λX, then λ is called a characteristic root, latent root or eigenvalue of the matrix A, and the
corresponding non-zero vector X is called a characteristic vector, latent vector, eigenvector or
invariant vector of the matrix A.
Characteristic Equation:
We have
  AX = λX
⇒ AX − λX = 0
⇒ (A − λI)X = 0
This is a system of n homogeneous equations in n variables, which has non-zero solutions iff
|A − λI| = 0. The determinant |A − λI| is an n-th degree polynomial in λ and is known as the
characteristic polynomial of the matrix A. The equation |A − λI| = 0 is called the characteristic
equation of the matrix A; it has n roots λ1, λ2, ......., λn, called the characteristic roots, and the
corresponding non-zero vectors X1, X2, ......, Xn are called the characteristic vectors of the matrix A.
In fact the characteristic equation |A − λI| = 0 can be expressed as

  | a11−λ   a12    ...   a1n  |
  |  a21   a22−λ   ...   a2n  |  =  0
  |   :      :            :   |
  |  an1    an2    ...  ann−λ |
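Characteristic roots and vectors can be obtained numerically. The sketch below (Python/NumPy, an illustration only and not part of the original notes) uses numpy.linalg.eig and verifies the defining relation AX = λX for each returned pair.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    values, vectors = np.linalg.eig(A)       # roots lambda_i and columns X_i
    print(values)                            # eigenvalues 3 and 1 (order may vary)
    for lam, x in zip(values, vectors.T):
        print(np.allclose(A @ x, lam * x))   # True: AX = lambda X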

Statement and proof of important properties of characteristic root and characteristic vector :

Property 01: The latent roots of a real symmetric matrix are all real.
Proof:
Let us suppose that λ and λ̄ are two complex conjugate roots and X and X̄ are the corresponding
latent vectors of a real symmetric matrix A.
Then AX = λX and AX̄ = λ̄X̄
⇒ X̄′AX = λX̄′X .............. (i)   and   X′AX̄ = λ̄X′X̄ ................... (ii)
Since A is a symmetric matrix, A′ = A. Taking the transpose in (ii) we get
  (X′AX̄)′ = (λ̄X′X̄)′
⇒ X̄′A′X = λ̄X̄′X
⇒ X̄′AX = λ̄X̄′X .................. (iii)
Comparing (i) and (iii) we get
  λX̄′X = λ̄X̄′X
⇒ (λ − λ̄)X̄′X = 0
Since X ≠ 0 and X̄ ≠ 0, we have X̄′X ≠ 0, so that λ − λ̄ = 0.
If λ = a + ib and λ̄ = a − ib, then
  λ − λ̄ = (a + ib) − (a − ib) = 2ib = 0  ⇒  b = 0
∴ λ = a and λ̄ = a, which is a real number.
Thus the latent roots of a real symmetric matrix are all real.
Property 02: If λ is a characteristic root of A, then λ^k is a characteristic root of A^k, where k is a positive integer.
Proof:
We have
  AX = λX
⇒ A²X = A(λX) = λAX = λ²X
⇒ A³X = A(λ²X) = λ²AX = λ³X
  ......
⇒ A^k X = λ^k X
Thus λ^k is a characteristic root of A^k.

Property 03: If all the characteristic roots of a matrix are different, then the corresponding characteristic
vectors are linearly independent.
Proof:
Let us suppose that λ1, λ2, ......., λn are distinct characteristic roots and X1, X2, ......, Xn are the
corresponding characteristic vectors of a matrix A.
Then AXi = λiXi.
Let us consider the equation
  C1X1 + C2X2 + ........ + CnXn = 0
⇒ C1AX1 + C2AX2 + ........ + CnAXn = 0
⇒ C1λ1X1 + C2λ2X2 + ........ + CnλnXn = 0
⇒ C1λ1AX1 + C2λ2AX2 + ........ + CnλnAXn = 0
⇒ C1λ1²X1 + C2λ2²X2 + ........ + Cnλn²Xn = 0
Continuing this process of multiplication, we have
  C1λ1^(n−1)X1 + C2λ2^(n−1)X2 + ........ + Cnλn^(n−1)Xn = 0
After taking the transpose of each equation, the resulting n equations may be written as

  |    1          1      ...      1     | | C1X1′ |
  |    λ1         λ2     ...      λn    | | C2X2′ |  =  0 ..................... (i)
  |    :          :                :    | |   :   |
  | λ1^(n−1)   λ2^(n−1)  ...   λn^(n−1) | | CnXn′ |

The determinant of the first matrix on the L.H.S. is the Vandermonde determinant
∏_{i>j} (λi − λj) ≠ 0, since λi ≠ λj.
So that matrix, call it B, is a non-singular matrix and B⁻¹ exists.
Pre-multiplying the matrix equation (i) by B⁻¹ we get

  | C1X1′ |
  | C2X2′ |  =  0
  |   :   |
  | CnXn′ |

⇒ C1X1′ = C2X2′ = .... = CnXn′ = 0
⇒ C1X1 = C2X2 = .... = CnXn = 0
Since X1, X2, ......, Xn are non-zero vectors, we have C1 = C2 = .... = Cn = 0.
So the characteristic vectors X1, X2, ......, Xn are linearly independent.

Property 04: If a matrix A has all distinct characteristic roots, then there exists a non-singular matrix P
such that, P 1 AP is a diagonal matrix whose diagonal elements are the characteristic roots
of A .
Proof:
If 1 , 2 ,.......,  n are distinct latent roots of matrix A , then the corresponding latent vectors
X1 , X2 ,......, X n are linearly independent.
Let us construct a matrix P   X1 , X2 ,......, X n  such that P  0 and P 1 exists.
A.P  A  X1 , X2 ,......, X n    1 X1 , 2 X2 ,......,  n X n 
 1 0  0
 
0 2  0
  X1 , X2 ,......, X n     
 
0 0  n 
 1 0  0 
 
 0 2  0 
 P
    
 
 0 0  n 
 1 0  0   1 0  0
   
0 2  0  0 2  0
 P 1 AP  P 1P  
         
   
 0 0  n  0 0  n 
Hence the proof.
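Property 04 is easy to illustrate numerically. In the Python/NumPy sketch below (illustrative only), P is built from the characteristic vectors and P⁻¹AP comes out diagonal with the roots on the diagonal; the sample matrix is my own choice.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                 # distinct roots 5 and 2
    values, vectors = np.linalg.eig(A)
    P = vectors                                 # columns are the latent vectors
    D = np.linalg.inv(P) @ A @ P                # P^{-1} A P
    print(np.round(D, 10))                      # diagonal matrix of the roots
    print(np.allclose(D, np.diag(values)))      # True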


Property 05: For a symmetric matrix with distinct latent roots, the latent vectors are orthogonal vectors.
Proof:
Let λi and λj be two distinct latent roots and Xi and Xj be the corresponding latent vectors of a
symmetric matrix A.
Then AXi = λiXi and AXj = λjXj
⇒ Xj′AXi = λiXj′Xi .............. (i)   and   Xi′AXj = λjXi′Xj ................... (ii)
Transposing (ii) we get
  (Xi′AXj)′ = (λjXi′Xj)′
⇒ Xj′A′Xi = λjXj′Xi
⇒ Xj′AXi = λjXj′Xi .................... (iii)     [since A′ = A]
Comparing (i) and (iii) we get
  λiXj′Xi = λjXj′Xi
⇒ (λi − λj)Xj′Xi = 0
Since λi ≠ λj, it follows that Xj′Xi = 0
⇒ Xi and Xj are orthogonal vectors.
Thus the characteristic vectors of a symmetric matrix with distinct latent roots are orthogonal
vectors.
Property 06: If a symmetric matrix A has all distinct characteristic roots, then there exists an orthogonal
             matrix P such that P′AP is a diagonal matrix whose diagonal elements are the characteristic
             roots of A.
Proof:
If λ1, λ2, ......., λn are the distinct latent roots of the symmetric matrix A, then the corresponding
latent vectors X1, X2, ......, Xn are orthogonal vectors.
Let us construct the orthogonal matrix P = (X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖), so that PP′ = I.
Now
  A·P = A(X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖) = (λ1X1/‖X1‖, λ2X2/‖X2‖, ......, λnXn/‖Xn‖)
      = (X1/‖X1‖, X2/‖X2‖, ......, Xn/‖Xn‖)·diag(λ1, λ2, ......, λn)
      = P·diag(λ1, λ2, ......, λn)
⇒ P′AP = P′P·diag(λ1, λ2, ......, λn) = diag(λ1, λ2, ......, λn)
Hence the proof.


Property 07: For a matrix A, the sum of the latent roots is equal to its trace.
Proof:
Let us suppose that A is a symmetric matrix of order n. Then there exists an orthogonal matrix P
such that PP′ = I and P′AP = diag(λ1, λ2, ......., λn) is a diagonal matrix whose diagonal elements
λ1, λ2, ......., λn are the latent roots of A.
Now tr(P′AP) = λ1 + λ2 + ......... + λn
⇒ tr(APP′) = λ1 + λ2 + ......... + λn
⇒ tr(AI) = λ1 + λ2 + ......... + λn
⇒ tr(A) = λ1 + λ2 + ......... + λn
Thus the sum of the characteristic roots of a matrix is equal to its trace.

Property 08: Product of characteristic roots of a matrix is equal to its determinant.


Proof:
Let us suppose that A is a symmetric matrix of order n . Then there exists an orthogonal matrix P
such that PP  PP  
 1 0  0 
 
 0 2  0 
and 
P AP  is a diagonal matrix where diagonal elements 1 , 2 ,.......,  n are
    
 
 0 0  n 
the latent roots of A .
Now, PAP  1 2 ......... n
 P A P  1  2 ......... n
 PP A  1 2 ......... n
  A  1  2 ......... n
  A  1  2 ......... n
 A  1 2 ......... n
Thus, product of characteristic roots of a matrix is equal to its determinant.

Property 09: If A is a non-singular matrix, then the characteristic roots of A⁻¹ are the reciprocals of the
             characteristic roots of A.
Proof:
Let λ be a characteristic root and X the corresponding characteristic vector of a matrix A (λ ≠ 0,
since A is non-singular). Then we have
  AX = λX
⇒ A⁻¹AX = A⁻¹λX
⇒ IX = λ(A⁻¹X)
⇒ X = λ(A⁻¹X)
⇒ (1/λ)X = A⁻¹X
⇒ A⁻¹X = (1/λ)X
This shows that 1/λ is a characteristic root of the matrix A⁻¹, which is the reciprocal of λ.

Thus the characteristic roots of A⁻¹ are the reciprocals of the characteristic roots of A.

Property 10: A and P⁻¹AP have the same latent roots.

Proof:
We have
  |P⁻¹AP − λI| = |P⁻¹AP − λP⁻¹P|
               = |P⁻¹(A − λI)P|
               = |P⁻¹||A − λI||P|
               = |P⁻¹P||A − λI|
               = |I||A − λI|
               = |A − λI|
So A and P⁻¹AP have the same characteristic equation and hence the same latent roots.

Property 11: Any square matrix A and its transpose A′ have the same characteristic roots.
Proof:
We have |A′ − λI| = |(A − λI)′| = |A − λI|, since the value of a determinant remains unaltered when
its rows and columns are interchanged.

This shows that A and its transpose A′ have the same characteristic equation and hence the same
characteristic roots.

Property 12: Each characteristic root of an idempotent matrix is either zero or unity.
Proof:
Let A be an idempotent matrix. Then by definition A² = A. If λ is a characteristic root of A with
characteristic vector X, then
  AX = λX
⇒ A²X = AλX
⇒ AX = λAX          [since A² = A]
⇒ AX = λ²X
⇒ λX = λ²X
⇒ λX − λ²X = 0
⇒ λ(1 − λ)X = 0
Since a characteristic vector is not the null vector,
  λ(1 − λ) = 0
∴ λ = 0 or λ = 1
So that the characteristic root of an idempotent matrix is either zero or unity.
Example :
 6 2 2 
 
Find the characteristic roots and corresponding characteristic vectors of the matrix  2 3 1 
 2 1 3 
 
Solution :
 6 2 2 
 
Let the matrix A   2 3 1  . Therefore the characteristic equation is
 2 1 3 
 
A    0
 6 2 2  1 0 0
   
  2 3 1     0 1 0   0
 2 1 3  0 0 1
   
6   2 2
 2 3 1  0
2 1 3

 6    3    
 1  22  3     2  22  2  3     0
2

  6   9  6  2  1  26  2  2  22  6  2  0


  6   2  6  8  4  2  4   0
  6   2  4    8  8    2   0
  6      4    2   8    2   0
    2   6  24  2  4  8   0
    2   2  10  16   0
    2   2  8    16   0
    2    2    8   0
So the characteristic roots of A are   2 ,   2 and   8
223605 Linear Algebra
7
Professor Biplab Bhattacharjee
Dept Of Statistics
Govt. B M College, Barishal

Now the characteristic vector corresponding to   2 is the solution of X given by  A    X  0


 6 2 2   1 0 0    x1   0 
       
i.e.  2 3 1   2  0 1 0    x2    0 
 2 1 3   0 0 1    x3   0 
1
 4 2 2  x1   0   4 2 2  x1   0 
R21  
          2
  2 1 1  x2    0    0 0 0  x2    0 
 2 1 1  x   0   0 0 0  x   0   1
  3      3    R31   
 2
The rank of the coefficient matrix is 1. So we have  3  1  2 free variables.

The reduced equation is 4 x1  2x2  2 x3  0


Let x2  0 and x1  1
 4 x1  0  2.1  0
 x1   1
2
So that, the characteristic vector corresponding to the characteristic root   2 is is given by
1
2
0 1 
Now the characteristic vector corresponding to   8 is the solution of X given by  A    X  0
 6 2 2   1 0 0    x1   0   2 2 2  x1   0 
            
i.e.  2 3 1   8  0 1 0    x2    0    2 5 1  x2    0 
 2 1 3   0 0 1    x3   0   2 1 5  x   0 
  3   
 2 2 2  x1   0   2 2 2  x1   0 
     R21  1
  0 3 3  x2    0    0 3 3    
 x2    0  R32  1
 0 3 3  x   0  R31  1   0 0 0  x   0 
  3      3   
The rank of the coefficient matrix is 2. So we have  3  2   1 free variables.

2 x1  2 x2  2 x3  0............  i 
The reduced equation is
 3x2  3x3  0..............  ii 
Let x3  1
 x2  1
From  i  we get,
2 x1  2  1   2 1   0
 2 x1  2  2  0
 x1  2
So that, the characteristic vector corresponding to the characteristic root   8 is is given by
 2 1 1


Quadratic Form:
A homogeneous polynomial of the second degree in any number of variables is called a quadratic
form.
For example:
  2x1² + 3x1x2 + 5x2²
  3x1² + 2x2² + x3² + x1x2 + 2x2x3 + 3x3x1
are quadratic forms in 2 and 3 variables respectively. In matrix form, the above forms can be written
as

  2x1² + 3x1x2 + 5x2² = (x1  x2) |  2   3/2 | | x1 |  =  X′AX,  where A = |  2   3/2 | ,  X′ = (x1  x2)
                                 | 3/2   5  | | x2 |                      | 3/2   5  |

  3x1² + 2x2² + x3² + x1x2 + 2x2x3 + 3x3x1 = (x1  x2  x3) |  3   1/2  3/2 | | x1 |
                                                          | 1/2   2    1  | | x2 |  =  X′AX
                                                          | 3/2   1    1  | | x3 |

  where A = |  3   1/2  3/2 |
            | 1/2   2    1  | ,   X′ = (x1  x2  x3)
            | 3/2   1    1  |
Canonical Form:
If a real quadratic form can be expressed as the sum and difference of the squares of the new
variables by any real non-singular transformation, then this expression is called the canonical form
of the given form.
Let us consider the quadratic form X′AX and a real non-singular linear transformation X = PY,
where P is a non-singular matrix. Then
  X′AX = (PY)′A(PY)
       = Y′(P′AP)Y
       = Y′BY,   where B = P′AP,
which is the canonical form of the quadratic form X′AX.
Rank of a quadratic form:
For a quadratic form X′AX, ρ(A) is called the rank of the quadratic form.
Index of a quadratic form:
The number of positive square terms in the canonical form of a quadratic form is called the index of
the quadratic form.
Signature of a quadratic form:
The difference between the number of positive square terms and the number of negative square
terms in the canonical form of a quadratic form is called the signature of the quadratic form.


Classification of quadratic form:


Quadratic form can be classified as follows:
(i) Positive definite
(ii) Positive semi-definite
(iii) Negative definite
(iv) Negative semi-definite
(v) Indefinite

(i) Positive definite:
    If q = X′AX is a real quadratic form in n variables with rank r and index p, then the
    quadratic form q = X′AX is called positive definite if r = p = n. The canonical form of a
    positive definite quadratic form is y1² + y2² + ........ + yn².

(ii) Positive semi-definite:
    If q = X′AX is a real quadratic form in n variables with rank r and index p, then the
    quadratic form q = X′AX is called positive semi-definite if r < n and r = p. The canonical form
    of a positive semi-definite quadratic form is y1² + y2² + ........ + yr².

(iii) Negative definite:
    If q = X′AX is a real quadratic form in n variables with rank r and index p, then the
    quadratic form q = X′AX is called negative definite if r = n and p = 0. The canonical form of a
    negative definite quadratic form is −y1² − y2² − ........ − yn².

(iv) Negative semi-definite:
    If q = X′AX is a real quadratic form in n variables with rank r and index p, then the
    quadratic form q = X′AX is called negative semi-definite if r < n and p = 0. The canonical
    form of a negative semi-definite quadratic form is −y1² − y2² − ........ − yr².

(v) Indefinite:
    If q = X′AX is a real quadratic form in n variables with rank r and index p, then the
    quadratic form q = X′AX is called indefinite if 0 < p < r ≤ n. The canonical form of an
    indefinite quadratic form is y1² + ... + yp² − y(p+1)² − ........ − yr², with r ≤ n.

Necessary and sufficient condition for a real quadratic form to be positive definite

Statement :
A necessary and sufficient condition for a real quadratic form X′AX to be positive definite is that the
leading principal minors of A are all positive.
Proof:
Necessary condition:


Let the quadratic form X′AX = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j (a_ij = a_ji) be positive definite. We have to
show that all the leading principal minors of the matrix A are positive.

Since X′AX is positive definite, there exists a non-singular linear transformation X = PY such
that
  X′AX = Y′P′APY = y1² + y2² + ........ + yn² = Y′I_nY
⇒ P′AP = I_n
⇒ |P′||A||P| = |I_n| = 1
⇒ |A_nn| = 1/|P|²
Since P is a non-singular matrix, |P| ≠ 0. Thus |A_nn| (the leading principal minor of order n) is positive.

Let us consider the quadratic form with the last variable xn in X′AX put equal to zero. Then the matrix
A_nn reduces to an (n − 1) × (n − 1) matrix and the definiteness of the quadratic form remains unchanged.
Therefore there will also exist a non-singular linear transformation X = PZ such that
  X′AX = Z′P′APZ = z1² + z2² + ......... + z(n−1)² = Z′I_(n−1)Z
⇒ P′AP = I_(n−1)
⇒ |P′||A||P| = 1
⇒ |A_(n−1)(n−1)| = 1/|P|²
Since P is a non-singular matrix, |P| ≠ 0. Thus |A_(n−1)(n−1)| is positive.

Proceeding in this way we can show that |A_(n−2)(n−2)|, ......, are positive. Thus we have established
that |A_11| > 0, |A_22| > 0, ........., |A_nn| > 0.
Hence all the leading principal minors of the matrix A are positive.
Sufficient condition:
Let the leading principal minors of the matrix A in X′AX = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j (a_ij = a_ji) be all positive. We have to show that the quadratic form X′AX is positive definite.

Let us consider the following matrix
A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …………………………
      an1  an2  …  ann ].
Here |A_11| > 0, |A_22| > 0, ........., |A_nn| > 0; in particular a11 = |A_11| > 0.


Hence the elements except a11 both in the 1st row and in the 1st column are reduced to zero by elementary congruent transformations (these operations leave the leading principal minors unchanged). Then the resulting matrix is of the form
[ a11  0    …  0
  0    b22  …  b2n
  …………………………
  0    bn2  …  bnn ].
We have |A_22| = a11 b22 > 0; since a11 > 0, it follows that b22 > 0.

So, keeping b22 fixed, the elements of the 2nd row and 2nd column are reduced to zero by elementary congruent transformations. Then the resulting matrix is of the form
[ a11  0    0    …  0
  0    b22  0    …  0
  0    0    c33  …  c3n
  …………………………
  0    0    cn3  …  cnn ].
We have |A_33| = a11 b22 c33 > 0; since a11, b22 > 0, it follows that c33 > 0.

Proceeding in this way it can be shown that A is congruent to the diagonal matrix diag(a11, b22, c33, …, lnn). Therefore
X′AX = (y1 y2 … yn) diag(a11, b22, c33, …, lnn) (y1 y2 … yn)′ = a11 y1² + b22 y2² + ........ + lnn yn².
Since a11 > 0, b22 > 0, ........, lnn > 0, the quadratic form X′AX is positive definite.
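A minimal Python/NumPy check of this criterion (Sylvester's criterion); the example matrix is the one that appears in Problem 02 below, and the tolerance is an arbitrary choice:

import numpy as np

def leading_principal_minors(A):
    # |A_11|, |A_22|, ..., |A_nn|
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive_definite(A, tol=1e-12):
    # all leading principal minors strictly positive
    return all(m > tol for m in leading_principal_minors(A))

A = np.array([[2.0, 1.0, 2.0],
              [1.0, 2.0, 2.0],
              [2.0, 2.0, 3.0]])
print(leading_principal_minors(A))        # approximately [2, 3, 1]
print(is_positive_definite(A))            # True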

Necessary and sufficient condition for a real quadratic form to be negative definite

Statement :
A necessary and sufficient condition for a real quadratic form X′AX to be negative definite is that the leading principal minors of A are alternately negative and positive, i.e. (−1)^k |A_kk| > 0 for k = 1, 2, ..., n.

Proof:
Necessary condition:
Let the quadratic form X′AX = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j (a_ij = a_ji) be negative definite. We have to show that the leading principal minors of the matrix A are alternately negative and positive.


Since X AX is negative definite then there exists a non-singular linear transformation X  PY such
that,
X AX  Y P APY
 P AP   nn
 y12  y22  ........  yn2
P  A P   1 
n

 Y    nn Y
 1 
n
 P AP     nn 
  Ann  2
P
2
Since P is non-singular matrix, so that, P  0 . Since P is positive,
Thus, Ann is positive, if n is even
Ann isnegative, if n is odd.

Let us consider the quadratic form with the last variable xn in X′AX set equal to zero. Then the matrix A reduces to its leading (n−1)×(n−1) submatrix and the definiteness of the quadratic form remains unchanged. Therefore there also exists a non-singular linear transformation X = PZ such that
X′AX = Z′(P′AP)Z = −z1² − z2² − ........ − z(n−1)²,
so that P′AP = −I of order n−1, hence |P′| |A| |P| = (−1)^(n−1) and |A_(n−1)(n−1)| = (−1)^(n−1) / |P|².
Since P is a non-singular matrix, |P|² is positive.
Thus |A_(n−1)(n−1)| is positive if n is odd and negative if n is even.
Thus we have proved that if |A_nn| is positive then |A_(n−1)(n−1)| is negative, and that if |A_nn| is negative then |A_(n−1)(n−1)| is positive.

Proceeding in this way we can show that |A_(n−2)(n−2)| is positive if n is even and negative if n is odd, and so on down to |A_11|. Thus we have established the relationship
|A_11| < 0, |A_22| > 0, |A_33| < 0, ........., |A_nn| < 0 or > 0 (according as n is odd or even).
Hence the leading principal minors of the matrix A are alternately negative and positive.
Sufficient condition:
Let the leading principal minors of the matrix A in X′AX = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j (a_ij = a_ji) be alternately negative and positive. We have to show that the quadratic form X′AX is negative definite.
Let us consider the following matrix


A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …………………………
      an1  an2  …  ann ].
Here |A_11| < 0, |A_22| > 0, ........., |A_nn| < 0 or > 0; in particular a11 = |A_11| < 0.
Hence the elements except a11 both in the 1st row and in the 1st column are reduced to zero by elementary congruent transformations. Then the resulting matrix is of the form
[ a11  0    …  0
  0    b22  …  b2n
  …………………………
  0    bn2  …  bnn ].
We have |A_22| = a11 b22 > 0; since a11 < 0, it follows that b22 < 0.

Since b22 < 0, keeping b22 fixed, the elements of the 2nd row and 2nd column are reduced to zero by elementary congruent transformations. Then the resulting matrix is of the form
[ a11  0    0    …  0
  0    b22  0    …  0
  0    0    c33  …  c3n
  …………………………
  0    0    cn3  …  cnn ].
We have |A_33| = a11 b22 c33 < 0; since a11 < 0 and b22 < 0 (so that a11 b22 > 0), it follows that c33 < 0.

Proceeding in this way it can be shown that A is congruent to the diagonal matrix diag(a11, b22, c33, …, lnn). Therefore
X′AX = a11 y1² + b22 y2² + ........ + lnn yn².
Since a11 < 0, b22 < 0, ........, lnn < 0, the quadratic form X′AX is negative definite.
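A minimal Python/NumPy check of the alternating-sign criterion; the example matrix and tolerance are arbitrary choices:

import numpy as np

def is_negative_definite(A, tol=1e-12):
    # leading principal minors alternate: |A_11| < 0, |A_22| > 0, |A_33| < 0, ...
    n = A.shape[0]
    return all(((-1) ** k) * np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, -2.0]])
print(is_negative_definite(A))            # True
print(np.linalg.eigvalsh(A))              # all eigenvalues negative, as expected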

A positive definite quadratic form can be expressed as a sum of squares

Proof :
We know that for a positive definite real quadratic form X′AX all the leading principal minors of the matrix A are positive, i.e. if
A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …………………………
      an1  an2  …  ann ],
then |A_11| > 0, |A_22| > 0, ........., |A_nn| > 0; in particular a11 > 0.

Hence the elements except a11 both in the 1st row and in the 1st column are reduced to zero by elementary congruent transformations. Then the resulting matrix is of the form
[ a11  0    …  0
  0    b22  …  b2n
  …………………………
  0    bn2  …  bnn ],
and |A_22| = a11 b22 > 0, so b22 > 0 since a11 > 0.

Keeping b22 fixed, the elements of the 2nd row and 2nd column are reduced to zero by elementary congruent transformations, giving
[ a11  0    0    …  0
  0    b22  0    …  0
  0    0    c33  …  c3n
  …………………………
  0    0    cn3  …  cnn ],
and |A_33| = a11 b22 c33 > 0, so c33 > 0 since a11, b22 > 0.

Proceeding in this way it can be shown that A is congruent to the diagonal matrix diag(a11, b22, c33, …, lnn). Therefore
X′AX = a11 y1² + b22 y2² + ........ + lnn yn².
Hence the quadratic form X′AX is expressed as a sum of squares, since a11 > 0, b22 > 0, ....., lnn > 0.
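For a positive definite A the sum-of-squares representation can also be produced directly from a Cholesky factorisation A = LL′, since then X′AX = (L′X)′(L′X). A minimal Python/NumPy sketch, not part of the proof; the matrix is the one from Problem 02 below and the vector x is arbitrary:

import numpy as np

A = np.array([[2.0, 1.0, 2.0],
              [1.0, 2.0, 2.0],
              [2.0, 2.0, 3.0]])
L = np.linalg.cholesky(A)                 # lower-triangular factor, A = L L'

x = np.array([1.0, -1.0, 2.0])
y = L.T @ x
print(x @ A @ x, np.sum(y ** 2))          # equal: x'Ax is a sum of squares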

A real quadratic form is positive definite iff all the characteristic roots of A are positive
Proof :

Necessary condition:
Let X′AX = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j be a positive definite quadratic form and let A be the matrix of the quadratic form. Also let λ1, λ2, ..., λn be the characteristic roots of A and X1, X2, ........, Xn be the corresponding characteristic vectors.


Then we get AX_i = λ_i X_i (i = 1, 2, ........, n), so that X_i′AX_i = λ_i X_i′X_i.
Since X′AX is positive definite, X_i′AX_i > 0, hence λ_i X_i′X_i > 0.
Since X_i′X_i > 0, it follows that λ_i > 0,
i.e. all the characteristic roots are positive.

Sufficient condition:
Let all the characteristic roots λ1, λ2, ..., λn of A be positive, and let X1, X2, ........, Xn be the corresponding characteristic vectors.
We have AX_i = λ_i X_i (i = 1, 2, ........, n), so X_i′AX_i = λ_i X_i′X_i > 0, since λ_i > 0 and X_i′X_i > 0.
More generally, since A is real symmetric, any X ≠ 0 can be written as X = c1X1 + c2X2 + ........ + cnXn with the X_i chosen orthonormal, so that X′AX = Σ_i λ_i c_i² > 0.
Hence X′AX is positive definite.
Question :
Express Σ_{i=1}^{n} (X_i − X̄)² as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.


Answer:
Σ_{i=1}^{n} (X_i − X̄)² = Σ_i X_i² − (Σ_i X_i)² / n
                       = Σ_i X_i² − (1/n) ( Σ_i X_i² + 2 Σ_{i<j} X_i X_j )
                       = (1 − 1/n) Σ_i X_i² − (2/n) Σ_{i<j} X_i X_j
                       = (X1, X2, ...., Xn) A (X1, X2, ...., Xn)′ = X′AX,
where A is the n×n matrix with every diagonal entry equal to 1 − 1/n and every off-diagonal entry equal to −1/n. This is a quadratic form.


 1  1 1 
 1  n   n   n 
  
 1  1 
   1   1 
Where A   n  n n is the required matrix of the quadratic form.

    
 
 1 1  1 
   1   
 n n  n 
Here A . A  A  A so that, the matrix A is an idempotent matrix.
2

 1
   A   tr  A   n  1     n  1 
 n
Comment : Since   A  is less than the number of variables and r  p therefore the given
quadratic form is positive semi- definite.
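A minimal Python/NumPy check of this answer; n and the data are arbitrary choices, and the matrix built below has diagonal 1 − 1/n and off-diagonal −1/n as above:

import numpy as np

n = 6
A = np.eye(n) - np.ones((n, n)) / n       # diagonal 1 - 1/n, off-diagonal -1/n

print(np.allclose(A @ A, A))              # True: A is idempotent
print(np.linalg.matrix_rank(A))           # n - 1 = 5

x = np.random.default_rng(0).normal(size=n)
print(x @ A @ x, np.sum((x - x.mean()) ** 2))   # the two values agree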
Question :
Express Σ_{i≠j} (X_i − X_j)², the sum being taken over all ordered pairs i ≠ j, as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.


Answer:
Σ_{i≠j} (X_i − X_j)² = (X1 − X2)² + (X1 − X3)² + ........ + (X1 − Xn)²
                     + (X2 − X1)² + (X2 − X3)² + ........ + (X2 − Xn)²
                     + ………
                     + (Xn − X1)² + (Xn − X2)² + ........ + (Xn − X(n−1))²
                   = 2 [ (X1 − X2)² + (X1 − X3)² + ........ + (X1 − Xn)²
                       + (X2 − X3)² + ........ + (X2 − Xn)²
                       + ………
                       + (X(n−1) − Xn)² ]
                   = 2 [ (n − 1) Σ_i X_i² − 2 Σ_{i<j} X_i X_j ]
                   = (2n − 2) Σ_i X_i² − 4 Σ_{i<j} X_i X_j
                   = X′AX, which is a quadratic form,
where A is the n×n matrix with every diagonal entry equal to 2n − 2 and every off-diagonal entry equal to −2.


 1 1 1 
1  n   n   n 
  2n  2  2  2    
    1  
 2 2n  2   2   1
n   1   1 
n  2nB
Where A   2n  n 
    
      
 2 2   2n  2    
 1 1  1 
   1  
 n n  n 
is the required matrix of the quadratic form.
 1  1 1 
 1  n   n   n 
  
 1  1 
   1   1 
Where, B   n  n n

    
 
 1 1  1 
    1  
 n n  n  
Here B . B  B2  B so that, the matrix B is an idempotent matrix.
 1
   B   tr  B   n  1     n  1 
 n
Since A  2nB
   A     2nB    n  1

Comment : Since   A  is less than the number of variables and r  p therefore the given
quadratic form is positive semi- definite.
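A minimal Python/NumPy check of this answer; n and the data are arbitrary choices:

import numpy as np

n = 5
x = np.random.default_rng(1).normal(size=n)

# Direct sum over all ordered pairs i != j.
direct = sum((x[i] - x[j]) ** 2 for i in range(n) for j in range(n) if i != j)

B = np.eye(n) - np.ones((n, n)) / n       # the idempotent matrix B above
A = 2 * n * B                             # matrix of the quadratic form
print(direct, x @ A @ x)                  # equal
print(np.linalg.matrix_rank(A))           # n - 1 = 4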
Question :
Express Σ_{i<j} (X_i − X_j)², the sum being taken over all pairs with i < j, as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.


Answer:
Σ_{i<j} (X_i − X_j)² = (X1 − X2)² + (X1 − X3)² + (X1 − X4)² + ........ + (X1 − Xn)²
                     + (X2 − X3)² + (X2 − X4)² + ........ + (X2 − Xn)²
                     + (X3 − X4)² + ........ + (X3 − Xn)²
                     + ………
                     + (X(n−1) − Xn)²
                   = (n − 1) Σ_i X_i² − 2 Σ_{i<j} X_i X_j
                   = X′AX, which is a quadratic form,
where
A = [ n − 1   −1      …   −1
      −1      n − 1   …   −1
      …………………………
      −1      −1      …   n − 1 ] = nB,
with
B = [ 1 − 1/n   −1/n      …   −1/n
      −1/n      1 − 1/n   …   −1/n
      …………………………
      −1/n      −1/n      …   1 − 1/n ],
is the required matrix of the quadratic form.
Here B · B = B² = B, so the matrix B is idempotent, and ρ(B) = tr(B) = n (1 − 1/n) = n − 1.
Since A = nB, ρ(A) = ρ(nB) = n − 1.
Comment: Since ρ(A) is less than the number of variables and r = p, the given quadratic form is positive semi-definite.
Question :
Express Σ_{i=1}^{k} Σ_{j=1}^{n_i} (X_ij − X̄_i)², where X̄_i = (1/n_i) Σ_{j=1}^{n_i} X_ij, as a quadratic form and hence determine its rank and comment on the nature of the quadratic form.

Answer:
Σ_{i=1}^{k} Σ_{j=1}^{n_i} (X_ij − X̄_i)² = Σ_j (X_1j − X̄_1)² + Σ_j (X_2j − X̄_2)² + Σ_j (X_3j − X̄_3)² + ........ + Σ_j (X_kj − X̄_k)².
Again, for the r-th group,
Σ_{j=1}^{n_r} (X_rj − X̄_r)² = Σ_j X_rj² − (1/n_r) ( Σ_j X_rj )²
                            = (1 − 1/n_r) Σ_j X_rj² − (2/n_r) Σ_{j<l} X_rj X_rl
                            = X_r′ A_r X_r,
where X_r′ = (X_r1, X_r2, ...., X_r n_r) and A_r is the n_r × n_r matrix with every diagonal entry equal to 1 − 1/n_r and every off-diagonal entry equal to −1/n_r.
Therefore
Σ_{i=1}^{k} Σ_{j=1}^{n_i} (X_ij − X̄_i)² = X_1′A_1X_1 + X_2′A_2X_2 + X_3′A_3X_3 + ........ + X_k′A_kX_k = Σ_{i=1}^{k} X_i′A_iX_i,
which is a quadratic form (in the stacked variables it is X′AX with the block-diagonal matrix A = diag(A_1, A_2, ...., A_k)).
Here A_i · A_i = A_i² = A_i, so each matrix A_i is idempotent and ρ(A_i) = tr(A_i) = n_i (1 − 1/n_i) = n_i − 1.
Hence the rank of the quadratic form is Σ_{i=1}^{k} ρ(A_i) = Σ_{i=1}^{k} (n_i − 1) = n − k, where n = n_1 + n_2 + ........ + n_k.
Comment: Since the rank n − k is less than the number of variables n and r = p, the given quadratic form is positive semi-definite.
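A minimal Python sketch of this answer, assuming NumPy and SciPy are available; the group sizes and the data are arbitrary choices:

import numpy as np
from scipy.linalg import block_diag

sizes = [3, 4, 5]                                    # n_1, ..., n_k
blocks = [np.eye(m) - np.ones((m, m)) / m for m in sizes]
A = block_diag(*blocks)                              # A = diag(A_1, ..., A_k)

x = np.random.default_rng(2).normal(size=sum(sizes)) # all observations stacked

ss_within, start = 0.0, 0
for m in sizes:
    g = x[start:start + m]
    ss_within += np.sum((g - g.mean()) ** 2)         # within-group sum of squares
    start += m

print(x @ A @ x, ss_within)                          # equal
print(np.linalg.matrix_rank(A), sum(sizes) - len(sizes))   # rank = n - k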


Question :
Prove that S² = (1/(n − 1)) Σ_{i=1}^{n} (X_i − X̄)², where X̄ = (1/n) Σ_{i=1}^{n} X_i, is a quadratic form in the variables X1, X2, ...., Xn and hence determine its rank and comment on the nature of the quadratic form.

Answer:
Given that S² = (1/(n − 1)) Σ_i (X_i − X̄)²
             = (1/(n − 1)) [ Σ_i X_i² − (1/n) ( Σ_i X_i )² ]
             = (1/(n − 1)) [ (1 − 1/n) Σ_i X_i² − (2/n) Σ_{i<j} X_i X_j ]
             = (1/n) Σ_i X_i² − (2/(n(n − 1))) Σ_{i<j} X_i X_j
             = X′AX,
which is a quadratic form, where
A = [ 1/n            −1/(n(n−1))   …   −1/(n(n−1))
      −1/(n(n−1))    1/n           …   −1/(n(n−1))
      …………………………
      −1/(n(n−1))    −1/(n(n−1))   …   1/n ] = (1/(n − 1)) B,
with
B = [ 1 − 1/n   −1/n      …   −1/n
      −1/n      1 − 1/n   …   −1/n
      …………………………
      −1/n      −1/n      …   1 − 1/n ],
is the required matrix of the quadratic form.
Here B · B = B² = B, so the matrix B is idempotent and ρ(B) = tr(B) = n (1 − 1/n) = n − 1.
Hence ρ(A) = ρ((1/(n − 1)) B) = ρ(B) = n − 1.
Comment: Since ρ(A) is less than the number of variables and r = p, the given quadratic form is positive semi-definite.
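A minimal Python/NumPy check that X′AX with this A reproduces the sample variance S²; n and the data are arbitrary choices:

import numpy as np

n = 8
x = np.random.default_rng(3).normal(size=n)

B = np.eye(n) - np.ones((n, n)) / n
A = B / (n - 1)                           # matrix of the quadratic form S^2

print(x @ A @ x, np.var(x, ddof=1))       # both equal S^2
print(np.linalg.matrix_rank(A))           # n - 1 = 7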

Question :
Show that ax1² + 2bx1x2 + cx2² is positive definite iff a > 0 and |A| = ac − b² > 0, where A is the matrix of the given quadratic form.
Answer:
Given the quadratic form ax1² + 2bx1x2 + cx2². In matrix notation it can be written as
(x1 x2) [ a b ; b c ] (x1 x2)′ = X′AX, where X′ = (x1 x2) and A = [ a b ; b c ].
First suppose X′AX is positive definite. Then there exists a non-singular linear transformation X = PY such that
X′AX = Y′(P′AP)Y = y1² + y2²,
so that P′AP = I, |P′| |A| |P| = 1 and |A_22| = 1 / |P|².
Since P is a non-singular matrix, |P| ≠ 0; thus |A_22| > 0.
Now consider the quadratic form with the last variable x2 set equal to zero. Then A reduces to its leading 1×1 submatrix and the definiteness remains unchanged, so there also exists a non-singular linear transformation X = PZ such that
X′AX = Z′(P′AP)Z = z1², giving |A_11| = 1 / |P|² > 0.
From the given quadratic form, |A_11| = a, hence a > 0; and |A_22| = | a b ; b c | = ac − b², hence |A| = ac − b² > 0.
Conversely, if a > 0 and ac − b² > 0, then both leading principal minors of A are positive, and by the necessary and sufficient condition proved earlier the quadratic form X′AX is positive definite.
Hence the condition is proved.
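The same condition can also be seen by completing the square. A minimal symbolic sketch, assuming SymPy is available:

import sympy as sp

a, b, c, x1, x2 = sp.symbols('a b c x1 x2', real=True)
q = a*x1**2 + 2*b*x1*x2 + c*x2**2

# For a != 0:  q = a*(x1 + (b/a)*x2)**2 + ((a*c - b**2)/a)*x2**2,
# so q > 0 for every (x1, x2) != (0, 0) exactly when a > 0 and a*c - b**2 > 0.
completed = a*(x1 + (b/a)*x2)**2 + ((a*c - b**2)/a)*x2**2
print(sp.simplify(q - completed))         # 0: the identity holds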
Problem 01 :
Reduce the quadratic form 5x1² + 26x2² + 10x3² + 4x2x3 + 14x3x1 + 6x1x2 to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of x1, x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as


 5 3 7  x1 
  
5x  26 x  10 x  4 x2 x3  14 x3 x1  6 x1 x2   x1
2
1
2
2
2
3 x2 x3   3 26 2  x2   X AX
 7 2 10  x 
  3 
5 3 7 
 
where, A   3 26 2  , X    x1 x2 x3 
 7 2 10 
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
5 3 7  1 0 0 1 0 0
     
 3 26 2    0 1 0  A  0 1 0 
 7 2 10   0 0 1   0 0 1 
     
   
Applying the elementary congruent transformations R21(−3/5), C21(−3/5); R31(−7/5), C31(−7/5):
[ 5 0 0 ; 0 121/5 −11/5 ; 0 −11/5 1/5 ] = [ 1 0 0 ; −3/5 1 0 ; −7/5 0 1 ] A [ 1 −3/5 −7/5 ; 0 1 0 ; 0 0 1 ].
Applying R32(1/11), C32(1/11):
[ 5 0 0 ; 0 121/5 0 ; 0 0 0 ] = [ 1 0 0 ; −3/5 1 0 ; −16/11 1/11 1 ] A [ 1 −3/5 −16/11 ; 0 1 1/11 ; 0 0 1 ].
Applying R1(1/√5), C1(1/√5); R2(√5/11), C2(√5/11):
[ 1 0 0 ; 0 1 0 ; 0 0 0 ] = P′AP,  with  P = [ 1/√5  −3/(11√5)  −16/11 ; 0  √5/11  1/11 ; 0  0  1 ].
Thus X′AX reduces to Y′BY, where B = [ 1 0 0 ; 0 1 0 ; 0 0 0 ] and Y′ = (y1 y2 y3).
So the canonical form is y1² + y2². Therefore the rank is 2, the index is 2 and the signature is 2.
Comment : Since rank < number of variables and rank = index. Hence the quadratic form is
positive semi-definite.


 1 3  16 
 5 11 5 5
y
  1 
Now the linear transformation is X  PY   0 5 1  y 
 11 11   2 
 
 0 0 1   y3 
 
 
1 3 16
 x1  y1  y2  y3
5 11 5 11
5 1
x2  y2  y 3
11 11
x 3  y3
The canonical form is zero for the set of values  y1 y2 y3    0 0 1  corresponding to which
we get the value  x1 x2 
x3    16
11
1
11 
1 .Which makes the form zero.
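A minimal Python/NumPy check of this solution:

import numpy as np

A = np.array([[5.0, 3.0, 7.0],
              [3.0, 26.0, 2.0],
              [7.0, 2.0, 10.0]])

x = np.array([-16/11, 1/11, 1.0])         # the non-zero vector found above
print(x @ A @ x)                          # ~0: the form vanishes at x

print(np.linalg.matrix_rank(A))           # 2
print(np.linalg.eigvalsh(A))              # one zero and two positive eigenvalues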
Problem 02 :
Reduce the quadratic form 2x1² + 2x2² + 3x3² + 4x2x3 + 4x3x1 + 2x1x2 to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of x1, x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as
 2 1 2  x1 
  
2 x1  2 x2  3x3  4 x2 x3  4 x3 x1  2 x1 x2   x1 x2 x3   1 2 2  x2   X AX
2 2 2

 2 2 3  x 
  3 
 2 1 2 
 
where, A   1 2 2  , X    x1 x2 x3 
 2 2 3 
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
 2 1 2   1 0 0   1 0 0 
     
 1 2 2    0 1 0  A  0 1 0 
 2 2 3   0 0 1   0 0 1 
     
  
Applying elementary congruent transformation R21  1 ,C21  1 ; R31 1  , C31 1 
2 2 


2 0 0  1 0 0 1  1 1
     2 
 0 3 1     1 1 0  A 0 1 0
2 2
     
 0 1 1  1 0 1 0 0 1
 
   
R32 2 , C 32 2
3 3
2 0 1 2 
 0   1 0 0   1  2 3
 0 3 0  1 1 0  A 0 1 2 
 2   2   3
0 0 1   2 2 1   0 0 1 
 3  3 3   

 2  2    
R1  1  , C1  1  ; R2 2 , C 2 2 ; R3 3 , C3 3
3 3    
 1 0 0  1 1 2 
 1 0 0   2   2 6 3
     
 0 1 0    1 2 0  A 0 2 2 
6 3 3 3
0 0 1   
   2  
 2 3  0 0 3 
 3 3   
1 0 0
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
0 0 1
 
So the canonical form is y1² + y2² + y3². Therefore the rank is 3, the index is 3 and the signature is 3.
Comment : Since rank = number of variables and rank = index. Hence the quadratic form is
positive definite.

1 1 2 
 2 6 3  y 
  1
Now the linear transformation is X  PY   0 2 2  y 

3 3   2 
 0  y3 
0 3 
 
⟹ x1 = (1/√2) y1 − (1/√6) y2 − (2/√3) y3
   x2 = √(2/3) y2 − (2/√3) y3
   x3 = √3 y3.
Since the quadratic form is positive definite, it is zero only for the set of values (y1 y2 y3) = (0 0 0), corresponding to which (x1 x2 x3) = (0 0 0); no non-zero set of values of x1, x2, x3 makes the form zero.


Problem 03 :
Reduce the quadratic form 5x12  5 x22  14 x32  16 x2 x3  8 x3 x1  2 x1 x2 to canonical form and
determine its rank, index and signature and comment on the definiteness. Also find a non-zero set
of values of x1 , x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as
 5 1 4  x1 
  
5x1  5x2  14 x3  16 x2 x3  8 x3 x1  2 x1 x2   x1 x2 x3   1 5 8  x2   X AX
2 2 2

 4 8 14  x 
  3 
 5 1 4 
 
where, A   1 5 8  , X    x1 x2 x3 
 4 8 14 
 

To effect conjugate reduction of A we write A   A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
 5 1 4   1 0 0   1 0 0 
     
 1 5 8    0 1 0  A  0 1 0 
 4 8 14   0 0 1   0 0 1 
     
  
Applying elementary congruent transformation R21  1 , C21  1 ; R31 4 , C31 4
5 5 55    
 5 0 4   1 0 0   1  1 4 
  5 5
  0  24 36     1 1 0  A 0 1 0 
 5 5   5   
 0   0 0 1 
36  54  4 0 1   
 5 5  5 
 2  , C  32 
R32 3 32

1 1 
 5 0 4   1 0 0   1  5 2
 
  0  24 0    1 1 0  A 0 1 3 
5  5   2
  
0 0 0 1 3 1 0 0 1 
 2 2   
R1  1  , C1  1  ; R2
 5  5  5
24  ,C  2
5
24 
 1 0 0  1 1 1 
2
 1 0 0   5   5 120
   1 5
 
5 3 

  0 1 0     0  A 0
120 24 24 2
 0 0 0    
   1 0 0 1 
 3 1   
 2 2   

 1 0 0 
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
 0 0 0
 

So the canonical form is −y1² − y2². Therefore the rank is 2, the index is 0 and the signature is −2.

Comment: Since rank < number of variables and the index is 0, the quadratic form is negative semi-definite.

 1 1 1 
 5 120 2
y 
  1 
Now the linear transformation is X  PY   0 5 3  y
24 2  2
  
 0 0 1   y3 

 
1 1 1
 x1  y1  y2  y 3
5 120 2
5 3
x2  y2  y 3
24 2
x 3  y3
The canonical form is zero for the set of values  y1 y2 y3    0 0 1  corresponding to which
we get the value  x1 x2 
x3   1
2
3
2 
1 .Which makes the form zero.
Problem 04 :
Reduce the quadratic form 2 x12  3 x32  4 x2 x3  2 x3 x1  6 x1 x2 to canonical form and determine its
rank, index and signature and comment on the definiteness. Also find a non-zero set of values of
x1 , x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as
 2 3 1  x1 
  
2 x1  3x3  4 x2 x3  2 x3 x1  6 x1 x2   x1 x2 x3   3 0 2  x2   X AX
2 2

 1 2 3  x 
  3 
2 3 1 
 
where, A   3 0 2  , X    x1 x2 x3 
 1 2 3 
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
2 3 1  1 0 0 1 0 0
     
 3 0 2    0 1 0  A  0 1 0 
 1 2 3   0 0 1   0 0 1 
     
   
Applying elementary congruent transformation R21  3 , C21  3 ; R31  1 , C31  1
2 2 2   
2 


2 0 0   1 0 0 1  3 1 
    2 2
 0 9  7  
  3 
1 0 A 0  1 0 
 2 2  2   
0 7 5  1 0 0 1 
0 1   
 2 2   2 
 
R32  7 , C32  7
9  9 
3 2 
2
 0 0   1 0 0   1  2 3 

 0 9 
0  3  1 
0 A 0  1 7 
 2   2   9
0 0 47   2  7 1   0 0 1 
 9  3 9   
R1  1  , C1  1  ; R2  2  , C 2  2  ; R3  3  ,C  3
 


 2  2  3  3  47  3  47 
 1 0 0  1 1 2 
1 0 0  2   2 2 47 
     
  0 1 0     1 2 0  A 0 2 7 
2 3 3 3 47 
0 0 1   
   2 3   0 3 
 7   0 
 47 3 47 47   47 
1 0 0
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
0 0 1
 
So the canonical form is y1² − y2² + y3². Therefore the rank is 3, the index is 2 and the signature is 1.

Comment : Since rank = number of variables and rank > index. Hence the quadratic form is
indefinite.

1 1 2 
 2 2 47   y 
  1 
Now the linear transformation is X  PY   0 2  7  y
 3 3 47   2 
 0 0 3   y3 
 
 47 
1 1 2
 x1  y1  y2  y3
2 2 47
2 7
x2  y2  y3
3 3 47
3
x3  y3
47
The canonical form is zero for the set of values (y1 y2 y3) = (1 1 0), corresponding to which we get the value (x1 x2 x3) = (0, √2/3, 0), which makes the form zero.


Problem 05 :
Reduce the quadratic form x12  2 x22  3x32  2 x2 x3  2 x3 x1  2 x1 x2 to canonical form and determine its
rank, index and signature and comment on the definiteness. Also find a non-zero set of values of
x1 , x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as
 1 1 1  x1 
  
x1  2 x2  3x3  2 x2 x3  2 x3 x1  2 x1 x2   x1 x2 x3   1 2 1  x2   X AX
2 2 2

 1 1 3  x 
  3 
 1 1 1 
 
where, A   1 2 1  , X    x1 x2 x3 
 1 1 3 
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
 1 1 1   1 0 0   1 0 0 
     
 1 2 1    0 1 0  A 0 1 0 
 1 1 3   0 0 1   0 0 1 
     
Applying elementary congruent transformation R21  1  ,C21  1 ; R31 1  , C31 1
 1 0 0   1 0 0   1 1 0 
     
  0 1 2    1 1 0  A  0 1 0 
0 2 2  1 0 1 0 0 1
     
R32  2  , C32  2 
 1 0 0   1 0 0   1 1 3 
     
  0 1 0    1 1 0  A  0 1 2 
 0 0 2   3 2 1   0 0 1 
     
R3  1  , C3  1 
 2  2
   1 1 3 
1 0 0   1 0 0   2
   
  0 1 0    1 1 0  A 0 1  2
 
 0 0 1   3
    2 1   0 0 1 
 2 2  2 

1 0 0 
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
 0 0 1 
 
So the canonical form is y1² + y2² − y3². Therefore the rank is 3, the index is 2 and the signature is 1.


Comment : Since rank = number of variables and rank > index. Hence the quadratic form is
indefinite.

 1 1 3 
 2   y1 
  
Now the linear transformation is X  PY   0 1  2   y2 
0 0 1   y3 
 2 

3
 x1  y1  y2  y3
2
x2  y2  2 y3
1
x3  y3
2
The canonical form is zero for the set of values  y1 y2 y3   1 0 1 corresponding to which
 2 3 1 
we get the value  x1 x2 x3     2  . Which makes the form zero.
 2 2 
Problem 06:
Reduce the quadratic form 2(x1² + x2² + x3²) − 2x2x3 − 2x3x1 − 2x1x2 to canonical form and determine its rank, index and signature and comment on the definiteness. Also find a non-zero set of values of x1, x2 and x3 which makes the form zero.
Solution :
In matrix notation the quadratic form can be written as
2(x1² + x2² + x3²) − 2x2x3 − 2x3x1 − 2x1x2 = (x1 x2 x3) [ 2 −1 −1 ; −1 2 −1 ; −1 −1 2 ] (x1 x2 x3)′ = X′AX,
where A = [ 2  −1  −1
            −1  2  −1
            −1 −1   2 ],  X′ = (x1 x2 x3).
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
2 1 1  1 0 0  1 0 0
     
 1 2 1    0 1 0  A  0 1 0 
 1 1 2   0 0 1   0 0 1 
     

 1  1  1  1
Applying elementary congruent transformation R21    ,C21    ; R31    , C31   
 2  2  2  2


2 0 0   1 0 0 1  1 1 
    2 2
 0 3  3  
  1 1 0 A 0   1 0 
 2 2  2   
0 3 3  1 0 0 1 
0 1   
 2 2   2 
R32 1  , C 32 1 
2 0 0  1 0 0 1  1 1 
     2 
 0 3 0   1 1 0  A 0 1 1
2 2
     
0 0 0   1 1 1   0 0 1
 

 2  2    
R1  1  , C1  1  ; R2 2 , C2 2
3 3
 1 0 0  1 1 1 
1 0 0  2   2 6 
   1   
 0 1 0   2 0  A 0 2 1
6 3 3
0 0 0    
   1 1 1   0 0 1
   
   
1 0 0
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
0 0 0
 
So the canonical form is y1² + y2². Therefore the rank is 2, the index is 2 and the signature is 2.

Comment : Since rank < number of variables and rank = index. Hence the quadratic form is
positive semi-definite.

 1 1 1 
 2 6  y 
  1 
Now the linear transformation is X  PY   0 2 1   y2 
3
  
 0 0 1   y3 

 
1 1
 x1  y1  y2  y 3
2 6
x2  2 y y
3 2 3
x3  y 3
The canonical form is zero for the set of values  y1 y2 y3    0 0 1  corresponding to which
we get the value  x1 x2 x3    1 1 1 . Which makes the form zero.


Problem 07:
Reduce the quadratic form 2 x12  x22  3 x32  8 x2 x3  4 x3 x1  12 x1 x2 to canonical form and determine
its rank, index and signature and comment on the definiteness.
Solution :
In matrix notation the quadratic form can be written as
2 6 2  x1 
  
2 x1  x2  3x3  8 x2 x3  4 x3 x1  12 x1 x2   x1 x2 x3   6
2 2 2
1 4  x2   X AX
 2 4 3  x 
  3 
2 6 2 
 
where, A   6 1 4  , X    x1 x2 x3 
 2 4 3 
 
To effect conjugate reduction of A we write A  A  . i.e. we are to find the matrix B  P AP which
is congruent to A where P is a non-singular matrix.
 2 6 2   1 0 0   1 0 0 
     
 6 1 4    0 1 0  A  0 1 0 
 2 4 3   0 0 1   0 0 1 
     
Applying elementary congruent transformation R21  3  , C21  3 ; R31 1 , C31 1
2 0 0   1 0 0   1 3 1 
     
  0 17 2    3 1 0  A  0 1 0 
 0 2 5   1 0 1   0 0 1 
     
 2   2 
R32   , C 32  
 17   17 
 11 
   1 3
2 0 0   1 0 0   17 
   
  2 
  0 17 0    3 1 0  A 0 1
   17 
 0 0  81   11 2 1   0 0 1 
 17     
 17 17  
 
R1  1  , C1  1  ; R2  1
 2  2 
 ,C  1
 
17  2  17  
 ; R 17
 2 81  17 81 
, C2

  1 11 
 1 3
0 0   2 17 1377 
2 0 0   2  
   2 
  0 1 0     3 1 0  A  0 1
 0 0 1   17 17 17 1377 
    
 11 2 17   0 0 17 
 81   81 
 1377 1377  
1 0 0 
 
Thus X AX reduces to Y BY , where B   0 1 0  and Y    y1 y2 y3 
 0 0 1 
 


So the canonical form is y1² − y2² − y3². Therefore the rank is 3, the index is 1 and the signature is −1.
Comment : Since rank = number of variables and rank > index. Hence the quadratic form is
indefinite.
