Linear Vector Space
In ordinary three-dimensional space, we come across quantities called vectors. Addition of
two vectors gives a new vector, and multiplication of a vector by a scalar also gives a vector.
The concept of a linear vector space generalizes ordinary vectors in three-dimensional space
to n-dimensional space.
Linear vector space
A linear vector space V is a set (collection) of entities {x}, called vectors, on which two
operations, addition and multiplication by a scalar, have been defined. This means that when elements of
the vector space are added, or multiplied by scalars, the result is always an element of V. We say that
the elements of a vector space obey the closure property.
A. Addition of vectors in V: Let x and y be vectors in V; then x + y is also a vector in the
same space V. So the vector space is said to be closed under addition. This property is
known as the closure property under addition.
Addition of vectors obeys the following properties:
Addition of vectors in V is commutative. For two vectors x and y in V,
x + y = y + x
Addition of vectors in V is associative. For vectors x, y and z in V,
x + (y + z) = (x + y) + z
There exists a null vector, O, such that x + O = O + x = x
For every element x in V there exists an inverse, denoted by -x, such that
x + (-x) = O
B. Multiplication by a scalar: Let x be a vector in V and α a scalar; then αx is also a vector in
the same space V. So the vector space is said to be closed under multiplication by a scalar.
This property is known as the closure property under multiplication. Multiplication by a scalar
obeys the following rules:
For each vector x in V and each scalar α, αx is also a vector in V.
For scalars α, β,
(α + β)x = αx + βx
α(x + y) = αx + αy
α(βx) = (αβ)x
The set of allowed scalars is called the field, F, over which V is defined. The field may consist of real
or of complex numbers. If the field is real, the space is called a real vector space; if the field is
complex, the space is called a complex vector space.
A sequence of n ordered quantities is called an n-tuple.
Examples
1. An n-tuple of real numbers is an ordered set of n real numbers of the form (u1, u2, u3, ..., un). The set R^n of all real n-tuples forms a linear vector space over the real field.
2. An n-tuple of complex numbers is an ordered set of n complex numbers of the form (c1, c2, c3, ..., cn). The set C^n of all complex n-tuples forms a linear vector space over the complex field.
3. The set of all real solutions of the differential equation d²y/dx² + w²y = 0 forms a linear vector space over the real field.
4. The set of all real continuous functions on an interval [a, b] forms a linear vector space over the real field.
The elements of an n-tuple are called its components.
Linear combination
Consider a vector space V over a field k = {ci : i = 1, 2, ..., n}. Let x1, x2, x3, ..., xn be a finite set
of vectors in V. Then a sum of the form
c1x1 + c2x2 + c3x3 + .......... + cnxn
is called a linear combination. If a vector x in V can be expressed as
x = c1x1 + c2x2 + c3x3 + .......... + cnxn
then x is called a linear combination of the vectors x1, x2, x3, ..., xn.
For example, u = (1, 2, 3), v = (6, 4, 2) and w = (3, 2, 7) are vectors in R^3, and any sum of the form au + bv + cw is a linear combination of them.
The set S = {e1, e2, e3, ...., en} of unit vectors is the standard basis for R^n.
Dimension
A vector space is said to be an n-dimensional space if it has a finite basis consisting of n elements.
Then every vector in V can be expressed as a linear combination of the n basis vectors.
If the basis consists of infinitely many elements, then the vector space is called infinite dimensional.
An n-dimensional real space is called n-space and is denoted by R^n.
Example:
Show that v1 = (1, 2, 1), v2 = (2, 9, 0) and v3 = (3, 0, 4) form a basis for R^3.
First we show that the vectors are linearly independent. Setting
k1v1 + k2v2 + k3v3 = 0,
in terms of components this is the matrix equation

[ 1  2  3 ] [ k1 ]   [ 0 ]
[ 2  9  0 ] [ k2 ] = [ 0 ]
[ 1  0  4 ] [ k3 ]   [ 0 ]

Let A represent the matrix of coefficients (its columns are v1, v2, v3):

A = [ 1  2  3 ]
    [ 2  9  0 ]
    [ 1  0  4 ]

The determinant of A is
det A = 1(36 - 0) - 2(8 - 0) + 3(0 - 9) = -7 ≠ 0
Thus the only possible solution is k1 = k2 = k3 = 0. Hence, the given vectors are linearly independent.
Next we show that an arbitrary vector b = (b1, b2, b3) can be expressed as a linear combination of v1, v2, v3:
b = c1v1 + c2v2 + c3v3
In terms of components
(b1, b2, b3) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 0, 4)
Equating corresponding components, we get
b1 = c1 + 2c2 + 3c3
b2 = 2c1 + 9c2
b3 = c1 + 4c3
If b1, b2, b3 are known, the constants c1, c2, c3 can be determined, since the coefficient matrix is invertible. Thus, S = {v1, v2, v3} forms a basis for R^3.
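The determinant test and the coordinate solve can be checked numerically. The following sketch (an illustration added here, assuming NumPy is available; it is not part of the original notes) does both:

```python
import numpy as np

# Columns of A are the proposed basis vectors v1, v2, v3.
A = np.array([[1, 2, 3],
              [2, 9, 0],
              [1, 0, 4]], dtype=float)

print(np.linalg.det(A))        # -7.0, nonzero => linearly independent

# Express an arbitrary vector b in the basis: solve A c = b.
b = np.array([1.0, 2.0, 3.0])  # any b works since det A != 0
c = np.linalg.solve(A, b)
print(c)                       # coordinates (c1, c2, c3)
print(A @ c)                   # reproduces b
```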
Example:
Let

m1 = [ 1  0 ]   m2 = [ 0  1 ]   m3 = [ 0  0 ]   m4 = [ 0  0 ]
     [ 0  0 ]        [ 0  0 ]        [ 1  0 ]        [ 0  1 ]

Show that S = {m1, m2, m3, m4} is a basis for the vector space M22 of 2 x 2 matrices.
First we show that m1, m2, m3, m4 are linearly independent. Setting
k1m1 + k2m2 + k3m3 + k4m4 = 0,

k1 [ 1  0 ] + k2 [ 0  1 ] + k3 [ 0  0 ] + k4 [ 0  0 ] = [ 0  0 ]
   [ 0  0 ]      [ 0  0 ]      [ 1  0 ]      [ 0  1 ]   [ 0  0 ]

[ k1  k2 ] = [ 0  0 ]
[ k3  k4 ]   [ 0  0 ]

Hence,
k1 = k2 = k3 = k4 = 0
Next, any 2 x 2 matrix A in V can be expressed as a linear combination of S. Let

A = [ a  b ]
    [ c  d ]

Then,

[ a  b ] = a [ 1  0 ] + b [ 0  1 ] + c [ 0  0 ] + d [ 0  0 ]
[ c  d ]     [ 0  0 ]     [ 0  0 ]     [ 1  0 ]     [ 0  1 ]

Hence,
A = am1 + bm2 + cm3 + dm4
Example:
Explain whether the vectors (0, 1, 0), (0, 1, -1), (1, -2, 1) are linearly independent. Can (-2, 1, -3)
be expressed as a linear combination of these vectors? Express the result in terms of linear algebra.
Here, v1 = (0, 1, 0), v2 = (0, 1, -1), v3 = (1, -2, 1). Set
k1v1 + k2v2 + k3v3 = 0
k1(0, 1, 0) + k2(0, 1, -1) + k3(1, -2, 1) = 0
Component by component,
k1(0) + k2(0) + k3(1) = 0
k1(1) + k2(1) + k3(-2) = 0
k1(0) + k2(-1) + k3(1) = 0
In terms of a matrix equation, we have

[ 0   0   1 ] [ k1 ]   [ 0 ]
[ 1   1  -2 ] [ k2 ] = [ 0 ]
[ 0  -1   1 ] [ k3 ]   [ 0 ]

The determinant of the coefficient matrix is
det A = -1 ≠ 0
Since det A is not zero, its inverse exists. It means k1 = k2 = k3 = 0, and v1, v2 and v3 are
linearly independent.
In order to express (-2, 1, -3) as a linear combination of these vectors, we write
k1(0, 1, 0) + k2(0, 1, -1) + k3(1, -2, 1) = (-2, 1, -3)
(k3, k1 + k2 - 2k3, -k2 + k3) = (-2, 1, -3)
Equating the corresponding components, we get
k3 = -2        (a)
k1 + k2 - 2k3 = 1        (b)
-k2 + k3 = -3        (c)
From (a), k3 = -2.
From (c), -k2 - 2 = -3, so k2 = 1.
From (b), k1 + 1 + 4 = 1, so k1 = -4.
Then the required result is
-4v1 + v2 - 2v3 = (-2, 1, -3)
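A minimal NumPy sketch (my addition, not from the notes) that reproduces these coefficients:

```python
import numpy as np

# Columns are v1, v2, v3; solve A k = b for the coefficients.
A = np.array([[0, 0, 1],
              [1, 1, -2],
              [0, -1, 1]], dtype=float)
b = np.array([-2.0, 1.0, -3.0])

print(np.linalg.det(A))       # -1.0, nonzero => independent
k = np.linalg.solve(A, b)
print(k)                      # [-4.  1. -2.]
```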
Theorem
Any two bases for a finite dimensional vector space have the same number of vectors.
Proof
Let
S = {v1, v2, v3, ..., vn}
S' = {w1, w2, w3, ..., wm}
be two bases. If S is a basis and S' is a linearly independent set, then m ≤ n. Again, if S' is a
basis and S is a linearly independent set, then n ≤ m. For the same space both conditions must
hold, hence the only possibility is
n = m
Theorem
Let S = {v1, v2, v3, ......., vr} be a set of vectors in R^n. If r > n then S is linearly dependent.
Each of the vectors can be expressed in terms of its components as follows:
v1 = (v11, v12, v13, ......., v1n)
v2 = (v21, v22, v23, ......., v2n)
. . . . . . .
vr = (vr1, vr2, vr3, ......., vrn)
Setting k1v1 + .... + krvr = 0 and equating components gives n homogeneous equations in the
r unknowns k1, ..., kr. Since there are more unknowns than equations, the system has a
non-trivial solution; hence S is linearly dependent.
Theorem
If S = {v1, v2, v3, ......., vn} is a basis for a finite dimensional vector space V, then every set with
more than n vectors is linearly dependent.
Let S' = {w1, w2, w3, ......., wm} be any set of m vectors in V where m > n. We wish to show that S'
is linearly dependent. Since S is a basis, each wi can be expressed as a linear combination of the
vectors in S; say:
w1 = a11v1 + a21v2 + a31v3 + ...... + an1vn
w2 = a12v1 + a22v2 + a32v3 + ...... + an2vn
. . . . . . . . . .
wm = a1mv1 + a2mv2 + a3mv3 + .... + anmvn
To show that S' is linearly dependent, we must find scalars k1, k2, k3, ..., km, not all zero, such
that
k1w1 + k2w2 + k3w3 + ... + kmwm = 0
Substituting,
k1(a11v1 + ... + an1vn) + k2(a12v1 + ... + an2vn) + ... + km(a1mv1 + ... + anmvn) = 0
(k1a11 + k2a12 + ... + kma1m)v1 + (k1a21 + k2a22 + ... + kma2m)v2 + ... + (k1an1 + k2an2 + ... + kmanm)vn = 0
Equating each coefficient to zero,
k1a11 + k2a12 + ... + kma1m = 0
k1a21 + k2a22 + ... + kma2m = 0
. . . . . . . .
k1an1 + k2an2 + ... + kmanm = 0
Since these equations have more unknowns (m) than equations (n), the system has a non-trivial
solution; hence, S' is linearly dependent.
Theorem
If S = {v1, v2, v3, ......., vn} is a set of linearly independent vectors in an n-dimensional vector space V,
then S is a basis for V.
Let the set of linearly independent vectors be
v1 = (a11, a21, a31, ...., an1)
v2 = (a12, a22, a32, ...., an2)
. . . . . . .
vn = (a1n, a2n, a3n, ...., ann)
Consider scalars k1, k2, k3, ....., kn such that
k1v1 + k2v2 + k3v3 + ....... + knvn = 0
k1(a11, a21, ...., an1) + k2(a12, a22, ...., an2) + ....... + kn(a1n, a2n, ...., ann) = 0
(k1a11 + k2a12 + .... + kna1n, ...., k1an1 + k2an2 + .... + knann) = 0
Equating the corresponding components, we get
k1a11 + k2a12 + .... + kna1n = 0
k1a21 + k2a22 + .... + kna2n = 0
.
k1an1 + k2an2 + .... + knann = 0
In matrix form

[ a11  a12  ...  a1n ] [ k1 ]   [ 0 ]
[ a21  a22  ...  a2n ] [ k2 ] = [ 0 ]
[ ...  ...  ...  ... ] [ ...]   [...]
[ an1  an2  ...  ann ] [ kn ]   [ 0 ]

Since the vectors are linearly independent, the only solution is k1 = k2 = .... = kn = 0; thus the
determinant of the coefficient matrix is non-zero and A⁻¹ exists, that is,

        | a11  a12  ...  a1n |
det A = | a21  a22  ...  a2n | ≠ 0
        | ...  ...  ...  ... |
        | an1  an2  ...  ann |

Suppose v = (b1, b2, b3, ...., bn) is any vector in V. S = {v1, v2, v3, ......., vn} spans V if we can write
v = c1v1 + c2v2 + c3v3 + ....... + cnvn
where the c's are scalars. In terms of components
(b1, b2, ..., bn) = c1(a11, a21, ..., an1) + c2(a12, a22, ..., an2) + .... + cn(a1n, a2n, ..., ann)
Equating the components,
b1 = c1a11 + c2a12 + ... + cna1n
b2 = c1a21 + c2a22 + ... + cna2n
.
bn = c1an1 + c2an2 + .... + cnann
In matrix form

[ a11  a12  ...  a1n ] [ c1 ]   [ b1 ]
[ a21  a22  ...  a2n ] [ c2 ] = [ b2 ]
[ ...  ...  ...  ... ] [ ...]   [ ...]
[ an1  an2  ...  ann ] [ cn ]   [ bn ]

Since A⁻¹ exists, the coefficients C = A⁻¹B can always be found, so S spans V and S is a basis.
Theorem (Schwarz inequality)
For any two vectors x and y in an inner product space,
<x, x><y, y> ≥ |<x, y>|²
Proof
For an arbitrary scalar λ, the vector x + λy satisfies
0 ≤ <x + λy, x + λy> = <x, x> + λ<x, y> + λ*<x, y>* + |λ|²<y, y>
Choosing λ = -<y, x>/<y, y> (for y ≠ 0), the last three terms combine to give
<x, x> - |<x, y>|²/<y, y> ≥ 0
Multiplying by <y, y> > 0,
<x, x><y, y> - |<x, y>|² ≥ 0
From this result we finally obtain the Schwarz inequality,
<x, x><y, y> ≥ |<x, y>|²
Alternately (for a real space),
let c = <x, x>, b = <x, y> and a = <y, y>; then for every real λ
c + 2bλ + λ²a ≥ 0
This is a quadratic in λ. Since it is never negative, it cannot have two distinct real roots, and therefore its discriminant must satisfy
4b² - 4ac ≤ 0
4<x, y>² - 4<x, x><y, y> ≤ 0
<x, y>² ≤ <x, x><y, y>
This proves the statement.
Example:
Prove the Cauchy-Schwarz inequality
|A·B|² ≤ |A|²|B|²
Let A and B be two vectors in R^n and a a real scalar. Consider the vector A + aB. Then
|A + aB|² ≥ 0
On squaring,
|A|² + 2a(A·B) + a²|B|² ≥ 0
This quadratic in a has no two distinct real roots, so its discriminant satisfies
4(A·B)² - 4|A|²|B|² ≤ 0
|A·B|² ≤ |A|²|B|²
or, in inner product notation,
<A, B>² ≤ <A, A><B, B>
It also follows that
|A·B| ≤ |A||B|
Example:
Prove that
|A + B| ≤ |A| + |B|
|A + B|² = |A|² + |B|² + 2A·B
|A + B|² ≤ |A|² + |B|² + 2|A||B|
|A + B|² ≤ (|A| + |B|)²
|A + B| ≤ |A| + |B|
This result is known as the triangle inequality.
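Both inequalities are easy to check numerically. A short NumPy sketch (added for illustration; random test vectors are my assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(5)
B = rng.standard_normal(5)

lhs = np.dot(A, B) ** 2
rhs = np.dot(A, A) * np.dot(B, B)
print(lhs <= rhs)                        # Cauchy-Schwarz: True

norm = np.linalg.norm
print(norm(A + B) <= norm(A) + norm(B))  # triangle inequality: True
```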
Norm
The norm of a vector x in V is a positive real number, denoted by ||x||, associated with x. The norm has the following properties:
a. ||x|| ≥ 0
b. ||x|| = 0 if and only if x = 0
c. ||ax|| = |a| ||x||
d. ||x + y|| ≤ ||x|| + ||y||
One way of obtaining the norm of a vector is to take the positive square root of the scalar product of the
vector with itself. Hence,
||x|| = √<x, x>
To prove the fourth property,
||x + y||² = <x + y, x + y>
||x + y||² = <x, x> + <x, y> + <y, x> + <y, y>
||x + y||² = <x, x> + <x, y> + <x, y>* + <y, y>
||x + y||² = <x, x> + 2Re<x, y> + <y, y>
Since Re<x, y> ≤ |<x, y>|,
||x + y||² ≤ ||x||² + ||y||² + 2|<x, y>|
Using the Schwarz inequality
|<x, y>| ≤ ||x|| ||y||
||x + y||² ≤ ||x||² + ||y||² + 2||x|| ||y||
||x + y||² ≤ (||x|| + ||y||)²
Hence,
||x + y|| ≤ ||x|| + ||y||
If u = (u1, u2, u3, ........., un), then
||u|| = <u, u>^(1/2) = (u1² + u2² + u3² + .......... + un²)^(1/2)
Angle between vectors
Let u and v be two ordinary vectors. According to vector algebra, the dot product of these
vectors is
u · v = uv cos θ
Here θ is the angle between the given vectors. Hence,
cos θ = (u · v)/(uv)
Generalizing this result to a vector space V, the angle between two vectors u and v in V is given by
cos θ = <u, v>/(||u|| ||v||)
Example:
Find the cosine of the angle between the vectors u = (4, 3, 1, -2) and v = (-2, 1, -2, 3), where the
vector space is R^4 with the Euclidean inner product.
The norms of the vectors are
||u|| = √(4² + 3² + 1² + (-2)²) = √30
||v|| = √((-2)² + 1² + (-2)² + 3²) = √18
Similarly,
<u, v> = 4(-2) + 3(1) + 1(-2) + (-2)(3) = -13
So,
cos θ = -13/(√30 √18) = -13/(3√60)
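A one-line NumPy check of this value (an added sketch, not from the notes):

```python
import numpy as np

u = np.array([4.0, 3.0, 1.0, -2.0])
v = np.array([-2.0, 1.0, -2.0, 3.0])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_theta)                  # -0.5594...
print(-13 / (3 * np.sqrt(60)))    # same value
```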
Distance
The distance between two vectors u and v is denoted by d(u, v) and defined as
d(u, v) = ||u - v||
If u = (u1, u2, u3, ........., un) and v = (v1, v2, v3, ........., vn), then
d(u, v) = ||u - v|| = √<u - v, u - v>
d(u, v) = [(u1 - v1)² + (u2 - v2)² + (u3 - v3)² + ......... + (un - vn)²]^(1/2)
Properties of distance
1. d(u, v) ≥ 0
2. d(u, v) = 0 if and only if u = v
3. d(u, v) = d(v, u)
4. d(u, v) ≤ d(u, w) + d(w, v)
Normalized vector
A vector whose norm is unity is called a normalized vector. If a vector x is not normalized, we can
normalize it by dividing it by √<x, x>. This process is known as normalization.
Orthogonal vectors
In an inner product space two vectors, x and y, are said to be orthogonal if their scalar product is
zero, that is,
<x, y> = 0.
Example:
If u and v are orthogonal vectors in an inner product space, then
||u + v||² = ||u||² + ||v||²
Proof
||u + v||² = <u + v, u + v> = ||u||² + ||v||² + 2<u, v>
Since <u, v> = 0,
||u + v||² = ||u||² + ||v||²
Theorem:
If S = {v1, v2, v3, ......., vn} is an orthogonal set of non-zero vectors in an inner product space V, then S
is linearly independent.
Proof
To show that S is linearly independent, suppose
k1v1 + k2v2 + k3v3 + ....... + knvn = 0
We must show that k1 = k2 = ... = kn = 0.
Let us consider the inner product
<k1v1 + k2v2 + k3v3 + ....... + knvn, vi>, where i = 1, 2, 3, ...
Then
<k1v1 + k2v2 + ....... + knvn, vi> = <0, vi> = 0
k1<v1, vi> + k2<v2, vi> + ....... + kn<vn, vi> = 0
Since S is orthogonal, <vj, vi> = 0 for j ≠ i, so only one term survives:
ki<vi, vi> = 0
As vi is non-zero, <vi, vi> > 0, and therefore ki = 0 for every i. Hence S is linearly independent.
Scalar product in an orthonormal basis
Let {xi} be an orthonormal basis, and let
x = c1x1 + c2x2 + c3x3 + .......... + cnxn = Σi cixi
y = d1x1 + d2x2 + d3x3 + .......... + dnxn = Σi dixi
Taking the scalar product of x and y, we have
(x, y) = (Σi cixi, Σj djxj)
(x, y) = Σij ci* dj (xi, xj)
Since the basis is orthonormal, (xi, xj) = δij; hence,
(x, y) = Σij ci* dj δij = Σi ci* di
In a similar way,
(x, x) = Σi ci* ci
Theorem:
If S = {v1, v2, v3, ......., vn} is an orthonormal basis for an inner product space V and u is a vector in
V, then
u = <u, v1>v1 + <u, v2>v2 + <u, v3>v3 + ....... + <u, vn>vn
That is, the coordinate vector of u with respect to S is
(u)S = (<u, v1>, <u, v2>, <u, v3>, ........, <u, vn>)
Proof
Since S = {v1, v2, v3, ......., vn} is a basis, a vector u can be expressed uniquely in the form
u = k1v1 + k2v2 + k3v3 + ....... + knvn
Let us consider the inner product <u, vi>, where i = 1, 2, 3, ...
Then
<u, vi> = <k1v1 + k2v2 + k3v3 + ....... + knvn, vi>
<u, vi> = k1<v1, vi> + k2<v2, vi> + ....... + kn<vn, vi> = Σj kj<vj, vi>
As S = {v1, v2, v3, ......., vn} is an orthonormal basis, <vi, vi> = ||vi||² = 1 and <vi, vj> = 0 for i ≠ j, so we can
write
<u, vi> = Σj kj δij
In the summation only the term for which j = i survives, due to orthonormality; hence,
<u, vi> = ki
Using this result we have
u = <u, v1>v1 + <u, v2>v2 + <u, v3>v3 + ....... + <u, vn>vn
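The theorem says coordinates in an orthonormal basis are just inner products. A small NumPy sketch (my addition; the particular basis is an assumption for illustration):

```python
import numpy as np

# An orthonormal basis of R^3.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
v3 = np.array([0.0, -1.0, 1.0]) / np.sqrt(2)

u = np.array([2.0, 3.0, 5.0])
k = [np.dot(u, v) for v in (v1, v2, v3)]   # k_i = <u, v_i>
print(k)

# Reconstruct u from its coordinates:
print(k[0] * v1 + k[1] * v2 + k[2] * v3)   # [2. 3. 5.]
```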
Gram-Schmidt Orthogonalisation
Statement: Every non-zero finite dimensional inner product space has an orthogonal basis.
Let V be a non-zero finite dimensional inner product space and let s = {y1, y2, y3, ........., yn} be a
set of n linearly independent vectors which form a basis for V. We can construct n mutually
orthogonal vectors x1, x2, x3, ........., xn from this set by applying the Gram-Schmidt
orthogonalisation process. To proceed, suppose that
x1 = y1
x2 = y2 + c21x1
x3 = y3 + c32x2 + c31x1
x4 = y4 + c43x3 + c42x2 + c41x1
...
xi = yi + ci,i-1xi-1 + ci,i-2xi-2 + .... + ci2x2 + ci1x1
and so on.
To find the constants c, take the inner product of x1 and x2:
<x1, x2> = <x1, y2 + c21x1>
<x1, x2> = <x1, y2> + c21<x1, x1>
Since x1 and x2 are to be orthogonal, <x1, x2> = 0:
0 = <x1, y2> + c21<x1, x1>
Hence,
c21 = -<x1, y2>/<x1, x1>
Similarly, taking the inner product of x1 and x3, we get
<x1, x3> = <x1, y3 + c32x2 + c31x1>
<x1, x3> = <x1, y3> + c32<x1, x2> + c31<x1, x1>
0 = <x1, y3> + c32(0) + c31<x1, x1>
c31 = -<x1, y3>/<x1, x1>
The inner product of x2 and x3 gives
<x2, x3> = <x2, y3> + c32<x2, x2> + c31<x2, x1>
0 = <x2, y3> + c32<x2, x2> + c31(0)
c32 = -<x2, y3>/<x2, x2>
In general,
cij = -<xj, yi>/<xj, xj>
Knowing all the c's we can construct the orthogonal vectors:
x2 = y2 + c21x1 = y2 - (<x1, y2>/<x1, x1>)x1
x3 = y3 + c32x2 + c31x1 = y3 - (<x2, y3>/<x2, x2>)x2 - (<x1, y3>/<x1, x1>)x1
and in general
xi = yi - (<xi-1, yi>/<xi-1, xi-1>)xi-1 - ..... - (<x2, yi>/<x2, x2>)x2 - (<x1, yi>/<x1, x1>)x1
Orthonormal basis
Let V be a non-zero finite dimensional inner product space and let s = {u1, u2, u3, ........., un} be
any basis for V. We can construct n mutually orthonormal vectors v1, v2, v3, ........., vn from this set
by applying the Gram-Schmidt orthogonalisation process. To proceed, suppose that
v1 = u1/||u1||
This vector has norm one.
To construct a vector v2 of norm 1 and orthogonal to v1, find the projection of u2 along v1, that is
<u2, v1>, and form the vector u2 - <u2, v1>v1; then normalize it:
v2 = (u2 - <u2, v1>v1)/||u2 - <u2, v1>v1||
To check the orthogonality:
<v2, v1> = <u2 - <u2, v1>v1, v1>/||u2 - <u2, v1>v1||
<v2, v1> = (<u2, v1> - <u2, v1><v1, v1>)/||u2 - <u2, v1>v1||
Since <v1, v1> = ||v1||² = 1,
<v2, v1> = (<u2, v1> - <u2, v1>)/||u2 - <u2, v1>v1|| = 0
Similarly,
v3 = (u3 - <u3, v1>v1 - <u3, v2>v2)/||u3 - <u3, v1>v1 - <u3, v2>v2||
<v3, v1> = (<u3, v1> - <u3, v1><v1, v1> - <u3, v2><v2, v1>)/||u3 - <u3, v1>v1 - <u3, v2>v2||
<v3, v1> = (<u3, v1> - <u3, v1>(1) - 0)/||u3 - <u3, v1>v1 - <u3, v2>v2|| = 0
<v3, v2> = (<u3, v2> - <u3, v1><v1, v2> - <u3, v2><v2, v2>)/||u3 - <u3, v1>v1 - <u3, v2>v2||
<v3, v2> = (<u3, v2> - 0 - <u3, v2>(1))/||u3 - <u3, v1>v1 - <u3, v2>v2|| = 0
Proceeding in the same way we can construct a set of orthonormal basis vectors from the given set
of basis vectors.
Example:
Orthogonalise the following vectors by the Gram-Schmidt process: u1 = (1, 0, 0), u2 = (1, 1, 0) and
u3 = (1, 1, 1).
Ans: v1 = u1 = (1, 0, 0), v2 = (0, 1, 0) and v3 = (0, 0, 1)
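A compact sketch of the process in NumPy (an illustration I have added, not part of the notes), verified on the example above:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set built from linearly independent vectors."""
    basis = []
    for u in vectors:
        w = u.astype(float).copy()
        for v in basis:                  # subtract projections on earlier v's
            w -= np.dot(w, v) * v
        basis.append(w / np.linalg.norm(w))
    return basis

us = [np.array([1, 0, 0]), np.array([1, 1, 0]), np.array([1, 1, 1])]
for v in gram_schmidt(us):
    print(v)         # (1,0,0), (0,1,0), (0,0,1)
```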
Example
From the following vectors construct an orthonormal system: u1 = (1, 2, 3), u2 = (3, 1, 2)
and u3 = (1, 1, 2).
Function
A function is a rule which associates each element of a set A with a unique element of a set B;
f: A → B.
The set of real-valued functions forms a linear vector space:
Let f = f(x) ∈ V and g = g(x) ∈ V; then V is a vector space.
a. (f + g)(x) = f(x) + g(x) ∈ V, because the sum of real-valued functions is a real-valued function.
b. (f + (g + h))(x) = f(x) + (g(x) + h(x)) = (f(x) + g(x)) + h(x), so f + (g + h) = (f + g) + h ∈ V
c. (f + g)(x) = f(x) + g(x) = g(x) + f(x), so f + g = g + f ∈ V
d. The zero vector in the space is the zero constant function.
e. The negative vector in the space is the negative function: (f + (-f))(x) = f(x) - f(x) = 0, so f + (-f) = 0 ∈ V
f. (kf)(x) = kf(x) ∈ V; a scalar multiple of a real-valued function is a real-valued function.
g. k(f + g)(x) = k{f(x) + g(x)} = kf(x) + kg(x), so k(f + g) = kf + kg ∈ V
h. ((k + l)f)(x) = kf(x) + lf(x), so (k + l)f = kf + lf ∈ V
i. (k(lf))(x) = k(lf(x)) = (kl)f(x), so k(lf) = (kl)f ∈ V
j. (1·f)(x) = f(x), so 1·f = f ∈ V
Hence V is a vector space.
Types of functions:
One-to-one
A function f from a set A to a set B is said to be one-to-one or injective if distinct elements in A
have distinct images in B:
for any x, y ∈ A with x ≠ y, f(x) ≠ f(y).
Onto
A function f from a set A to a set B is said to be onto or surjective if every element of B is the
image of at least one element of A: f(A) = B.
One-to-one and onto
A function f from a set A to a set B which is both one-to-one and onto is called a bijective
function.
Constant function
A function f from a set A to a set B is said to be a constant function if every element of A has the
same image in B.
Inner product of polynomials
Let p = p(x) and q = q(x) be polynomials in Pn. In this vector space an inner product may be
defined as
<p, q> = ∫_a^b p(x)q(x)dx
Here a and b are any fixed real numbers such that a < b. We can see that this inner product of
polynomials satisfies all the axioms of an inner product:
a. <p, q> = ∫_a^b p(x)q(x)dx = ∫_a^b q(x)p(x)dx = <q, p>
b. <p + q, s> = ∫_a^b (p(x) + q(x))s(x)dx = ∫_a^b [p(x)s(x) + q(x)s(x)]dx = <p, s> + <q, s>
c. <kp, q> = ∫_a^b kp(x)q(x)dx = k∫_a^b p(x)q(x)dx = k<p, q>
d. For any polynomial p = p(x) in Pn, p(x)² ≥ 0 for all x; thus
<p, p> = ∫_a^b p(x)²dx ≥ 0
e. <p, p> = ∫_a^b p(x)²dx = 0 if and only if p(x) = 0.
Example:
Let the vector space P2 have the inner product <p, q> = ∫_{-1}^{1} p(x)q(x)dx. Apply the Gram-Schmidt
process to transform the standard basis S = {1, x, x²} into an orthonormal basis.
Here we have
u1 = 1, u2 = x, u3 = x²
According to the orthonormalisation process
v1 = u1/||u1||
By the definition of the inner product:
||u1||² = <u1, u1> = ∫_{-1}^{1} 1·1 dx = 2
||u1|| = √2
v1 = u1/||u1|| = 1/√2
Next,
v2 = (u2 - <u2, v1>v1)/||u2 - <u2, v1>v1||
<u2, v1> = ∫_{-1}^{1} x(1/√2)dx = 0, so
u2 - <u2, v1>v1 = x
||u2 - <u2, v1>v1||² = ∫_{-1}^{1} x²dx = 2/3
||u2 - <u2, v1>v1|| = √(2/3)
v2 = x/√(2/3) = √(3/2) x
Finally,
v3 = (u3 - <u3, v1>v1 - <u3, v2>v2)/||u3 - <u3, v1>v1 - <u3, v2>v2||
<u3, v1> = ∫_{-1}^{1} x²(1/√2)dx = √2/3
<u3, v2> = ∫_{-1}^{1} x²√(3/2)x dx = 0
u3 - <u3, v1>v1 - <u3, v2>v2 = x² - (√2/3)(1/√2) = x² - 1/3
||x² - 1/3||² = ∫_{-1}^{1} (x² - 1/3)²dx = 8/45
v3 = (x² - 1/3)/√(8/45) = √(5/8)(3x² - 1)
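These three polynomials can be verified to be orthonormal by direct integration. A SymPy sketch (my addition, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
v1 = 1 / sp.sqrt(2)
v2 = sp.sqrt(sp.Rational(3, 2)) * x
v3 = sp.sqrt(sp.Rational(5, 8)) * (3 * x**2 - 1)

def inner(p, q):
    # the inner product <p, q> = integral of p*q over [-1, 1]
    return sp.integrate(p * q, (x, -1, 1))

for p in (v1, v2, v3):
    print([sp.simplify(inner(p, q)) for q in (v1, v2, v3)])
# prints rows of the identity: [1, 0, 0], [0, 1, 0], [0, 0, 1]
```

Up to normalization these are the first three Legendre polynomials.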
Operators
An operator Â on a vector space V maps V into itself (V → V), such that with each vector x in V
there is associated another vector y in V:
y = Âx
In simple terms, when an operator acts on a vector one gets another vector, both being in the same space.
Linear operator
An operator Â is said to be linear if
Â(c1x + c2y) = c1Âx + c2Ây
The quantities c1 and c2 are arbitrary constants.
Sum of operators
Let Â and B̂ be two operators on V, and let P̂ be defined by
P̂x = (Â + B̂)x = Âx + B̂x
where x is a vector in V. Then
P̂ = Â + B̂
Product of operators
The product operator ÂB̂ denotes the operation of the two operators successively: (ÂB̂)x = Â(B̂x). It is to be noted that while
taking the product of two operators the order of the operators should be preserved, because in general ÂB̂ is
not the same as B̂Â.
Commutator
The difference of the operator products ÂB̂ and B̂Â is called the commutator and is denoted as
[Â, B̂] = ÂB̂ - B̂Â
For some operators ÂB̂ is the same as B̂Â. In that case the commutator of the operators is zero, and
then Â and B̂ are said to commute.
Inverse operator
Let y = Âx, and let B̂ be an operator such that starting from y we get back x by the operation of B̂:
x = B̂y
Then B̂ is called the inverse of Â, and denoted by Â⁻¹ = B̂. Hence,
B̂y = Â⁻¹y = Â⁻¹Âx = x
The inverse of an operator may not exist. An operator whose inverse exists is called an invertible
operator.
An operator has an inverse if both the following properties are satisfied:
(1) If x ≠ y, then Âx ≠ Ây
(2) For every x in V there exists at least one y such that x = Ây
If Â is an invertible linear operator then Â⁻¹ is a linear operator.
Since Â is a linear operator,
Â(c1x1 + c2x2) = c1Âx1 + c2Âx2
c1x1 + c2x2 = Â⁻¹(c1Âx1 + c2Âx2)
Let
Âx1 = y1, Âx2 = y2
so that
x1 = Â⁻¹y1, x2 = Â⁻¹y2
Then
c1Â⁻¹y1 + c2Â⁻¹y2 = Â⁻¹(c1y1 + c2y2)
It shows that Â⁻¹ is a linear operator.
Matrices
A matrix is an array of numbers of the form

A = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]

The number aij is called the element of the matrix belonging to the ith row and jth column. If there are m
rows and n columns, then the matrix is referred to as an m × n matrix. A matrix with a single column (n =
1) is called a column matrix, and one with a single row (m = 1) is called a row matrix. If m = n then the
matrix is known as a square matrix.
Equality of matrices
Two matrices A and B are said to be equal if
a. Both of them have an equal number of rows and columns,
b. Each element of one is equal to the corresponding element of the other, that is, aij = bij.
Addition of matrices
Two matrices can be added only if they have equal numbers of rows and columns; we cannot add
arbitrary matrices. If A and B are both m × n matrices, the sum S = A + B is also an m × n matrix. Let aij
and bij be the elements of the ith row and jth column of A and B respectively; then the sum is the matrix
with elements
Sij = aij + bij.
Product of matrices
We can multiply two matrices A and B, written AB, if the number of columns of A is equal to the number of rows of B.
Let A be an m × n matrix and B an n × k matrix. Then the product P = AB is
the m × k matrix with the elements
Pij = (AB)ij = Σl ail blj
Here ail are the elements of the ith row of A and blj are the elements of the jth column of B. It means the elements
of P are obtained by multiplying the ith row of A with the jth column of B and then adding. In general AB
is not equal to BA, so the order of the matrices should be preserved in multiplication.
Transpose of a matrix
The matrix obtained by interchanging the rows and columns of a matrix A is called the transpose of the
matrix A and is denoted by A^T. Thus, if aij is the element belonging to the ith row and jth column of A,
then it belongs to the jth row and ith column of A^T; the (i, j) element of A^T is the element aji of A.
The transpose of A is

A^T = [ a11  a21  a31 ]
      [ a12  a22  a32 ]
      [ a13  a23  a33 ]

Hence,
(A^T)ij = aji
Adjoint of a matrix
A matrix may have complex elements. Let A be a complex matrix. Taking the transpose of such a
complex matrix and then the complex conjugate, we get a matrix called the adjoint matrix of A, denoted
by A†. Thus if aij is the element belonging to the ith row and jth column of A, then its complex
conjugate belongs to the jth row and ith column of A†.
The adjoint of A is

A† = [ (a11)*  (a21)*  (a31)* ]
     [ (a12)*  (a22)*  (a32)* ]
     [ (a13)*  (a23)*  (a33)* ]

Hence,
(A†)ij = aji*
Symmetric matrix
A matrix is said to be symmetric if the matrix is equal to its transpose, that is,
A = A^T
Hence for a symmetric matrix,
aij = aji
Skew symmetric matrix
A matrix is said to be skew symmetric if the matrix is equal to the negative of its transpose, that
is,
A = -A^T
Hence for a skew symmetric matrix,
aij = -aji
Orthogonal matrices
A matrix A is called an orthogonal matrix if
AA^T = A^T A = I
Hence, the inverse of an orthogonal matrix is its transpose.
An example of an orthogonal matrix is the matrix which represents a rotational transformation in three
dimensions.
Let i, j, k be unit vectors along the axes OX, OY and OZ of a rectangular coordinate system. Then the
position vector is
r = xi + yj + zk
Suppose that the axes are rotated keeping the origin and the vector fixed. Let i', j', k' be the new unit
vectors along the new set of axes. Then, in terms of the new unit vectors,
r = x'i' + y'j' + z'k'
x'i' + y'j' + z'k' = xi + yj + zk
Taking the scalar product with i' on both sides we get
x'(i'·i') + y'(i'·j') + z'(i'·k') = x(i'·i) + y(i'·j) + z(i'·k)
x' = x(i'·i) + y(i'·j) + z(i'·k)
x' = a11x + a12y + a13z
where i'·i = a11, i'·j = a12, i'·k = a13.
Similarly, taking the scalar product with j' and k' we get
y' = x(j'·i) + y(j'·j) + z(j'·k) = a21x + a22y + a23z
z' = x(k'·i) + y(k'·j) + z(k'·k) = a31x + a32y + a33z
This rotational transformation can be represented in the form of a matrix equation

[ x' ]   [ a11  a12  a13 ] [ x ]
[ y' ] = [ a21  a22  a23 ] [ y ]
[ z' ]   [ a31  a32  a33 ] [ z ]

X' = AX
where

X' = [ x' ]     A = [ a11  a12  a13 ]     X = [ x ]
     [ y' ]         [ a21  a22  a23 ]         [ y ]
     [ z' ]         [ a31  a32  a33 ]         [ z ]

Taking the transpose on both sides,
X'^T = (AX)^T = X^T A^T
X'^T X' = X^T A^T AX
Since rotation preserves the length of the vector,
X'^T X' = x'² + y'² + z'² = x² + y² + z² = X^T X
Thus
A^T A = I
Eigen value equation
Consider the equation
AX = λX
where

A = [ a11  a12  ...  a1n ]     X = [ x1 ]
    [ a21  a22  ...  a2n ]         [ x2 ]
    [ ...  ...  ...  ... ]         [ ...]
    [ an1  an2  ...  ann ]         [ xn ]

Then the equation can be written as

[ a11  a12  ...  a1n ] [ x1 ]     [ 1  0  ...  0 ] [ x1 ]
[ a21  a22  ...  a2n ] [ x2 ] = λ [ 0  1  ...  0 ] [ x2 ]
[ ...  ...  ...  ... ] [ ...]     [ ...          ] [ ...]
[ an1  an2  ...  ann ] [ xn ]     [ 0  0  ...  1 ] [ xn ]

(A - λI)X = 0
This homogeneous system has a non-trivial solution X only if
det(A - λI) = 0
The last equation is known as the characteristic or eigen value equation. It is an nth degree algebraic
equation in the unknown λ and will have n solutions λ1, λ2, ..., λn.
Eigen values are also called latent roots, proper values and characteristic values.
Hence, the central problem of the eigen value equation is to find the eigen values and then the eigen
vectors corresponding to each eigen value.
Example:
Let

A = [ 3  2  0 ]
    [ 2  3  0 ]
    [ 0  0  5 ]

Find the eigen values and eigen vectors.
From the characteristic equation:

| 3-λ   2    0  |
|  2   3-λ   0  | = 0
|  0    0   5-λ |

(5 - λ){(3 - λ)² - 4} = 0
(5 - λ){(9 - 6λ + λ²) - 4} = 0
(5 - λ){λ² - 6λ + 5} = 0
(5 - λ)(λ - 5)(λ - 1) = 0
The eigen values are: 5, 5 and 1.
For λ = 5:
(A - λI)X = 0

[ 3-5   2    0  ] [ x1 ]   [ 0 ]
[  2   3-5   0  ] [ x2 ] = [ 0 ]
[  0    0   5-5 ] [ x3 ]   [ 0 ]

[ -2   2   0 ] [ x1 ]   [ 0 ]
[  2  -2   0 ] [ x2 ] = [ 0 ]
[  0   0   0 ] [ x3 ]   [ 0 ]

-2x1 + 2x2 = 0
0·x3 = 0
Hence,
x1 = x2 = p, x3 = q (arbitrary)
Hence the eigen vector is

[ x1 ]   [ p ]     [ 1 ]     [ 0 ]
[ x2 ] = [ p ] = p [ 1 ] + q [ 0 ]
[ x3 ]   [ q ]     [ 0 ]     [ 1 ]

For λ = 1:
(A - λI)X = 0

[ 3-1   2    0  ] [ x1 ]   [ 0 ]
[  2   3-1   0  ] [ x2 ] = [ 0 ]
[  0    0   5-1 ] [ x3 ]   [ 0 ]

[ 2  2  0 ] [ x1 ]   [ 0 ]
[ 2  2  0 ] [ x2 ] = [ 0 ]
[ 0  0  4 ] [ x3 ]   [ 0 ]

2x1 + 2x2 = 0
4x3 = 0
Hence,
x1 = -x2 = p, x3 = 0
Hence the eigen vector is

[ x1 ]   [  p ]     [  1 ]
[ x2 ] = [ -p ] = p [ -1 ]
[ x3 ]   [  0 ]     [  0 ]
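The same eigen values and eigen vectors can be obtained numerically. A NumPy sketch (an added illustration, not from the notes):

```python
import numpy as np

A = np.array([[3.0, 2.0, 0.0],
              [2.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

w, V = np.linalg.eig(A)
print(w)             # eigenvalues 5, 1, 5 (order may vary)
print(V)             # columns are the corresponding eigenvectors
for lam, x in zip(w, V.T):
    print(np.allclose(A @ x, lam * x))   # True for each pair
```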
Example:
Find the eigen values and eigen vectors of the matrix

A = [ 1  2  3 ]
    [ 4  5  6 ]
    [ 7  8  9 ]

Ans: λ = 0, 16.117, -1.117; the corresponding eigen vectors are proportional to
(1, -2, 1), (1, 2.26, 3.53) and (1, 0.11, -0.78) (the last two approximately).
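A quick numerical confirmation (my added sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
w, V = np.linalg.eig(A)
print(np.round(w, 3))            # [16.117 -1.117  0.   ]
print(V[:, np.isclose(w, 0)])    # eigenvector for lambda = 0, prop. to (1, -2, 1)
```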
Eigen values and eigen vectors of a matrix
1. Two distinct eigen values cannot correspond to the same eigen vector.
Let X be an eigen vector of the matrix A corresponding to the eigen values λ1 and λ2; then
AX = λ1X
AX = λ2X
On subtracting,
(λ1 - λ2)X = 0
Since X is non-zero, λ1 = λ2.
2. A given eigen value may correspond to different eigen vectors.
Let X be an eigen vector of the matrix A corresponding to the eigen value λ; then
AX = λX
Multiplying by k on both sides,
A(kX) = λ(kX)
Here kX is also an eigen vector corresponding to the eigen value λ; hence λ corresponds to both
the eigen vectors X and kX.
3. The eigen values of a Hermitian matrix are real.
Let Xi and Xj be eigen vectors of the Hermitian matrix H corresponding to the eigen values λi, λj;
then
HXi = λiXi        (a)
HXj = λjXj        (b)
Now taking the adjoint of the second equation, and using H† = H,
Xj†H† = Xj†H = λj*Xj†
On multiplying (a) by Xj† from the left,
Xj†HXi = λiXj†Xi
Similarly, on multiplying the adjoint equation by Xi from the right,
Xj†HXi = λj*Xj†Xi
Subtracting,
(λi - λj*)Xj†Xi = 0
When i = j, λi = λi*, that is, λi is real. When i ≠ j (distinct eigen values), Xj†Xi = 0, that is, Xj is
orthogonal to Xi.
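Both conclusions can be demonstrated numerically. A sketch (my addition; the random Hermitian matrix is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = M + M.conj().T                 # Hermitian by construction

w, V = np.linalg.eigh(H)           # eigh exploits Hermitian symmetry
print(w)                           # real eigenvalues
print(np.allclose(V.conj().T @ V, np.eye(3)))  # True: orthonormal eigenvectors
```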
4. The eigen values of a real symmetric matrix are real.
This follows from property 3, since a real symmetric matrix is Hermitian.
5. Any two eigen vectors corresponding to two distinct eigen values of a real symmetric
matrix are orthogonal.
Let Xi and Xj be eigen vectors of the real symmetric matrix A corresponding to the eigen
values λi, λj; then
AXi = λiXi        (a)
AXj = λjXj        (b)
Now taking the transpose of the second equation, and using A^T = A,
Xj^T A^T = Xj^T A = λjXj^T
On multiplying (a) by Xj^T from the left,
Xj^T AXi = λiXj^T Xi
Similarly, on multiplying the transposed equation by Xi from the right,
Xj^T AXi = λjXj^T Xi
Subtracting,
(λi - λj)Xj^T Xi = 0
Since the eigen values are distinct, λi - λj ≠ 0 for i ≠ j, so Xj^T Xi = 0, that is, Xj is orthogonal to Xi.
6. The eigen values of a skew Hermitian matrix are purely imaginary or zero.
Let A be skew Hermitian (A† = -A) and AX = λX. Taking the adjoint,
X†A† = λ*X†, that is, -X†A = λ*X†
Multiplying AX = λX by X† from the left,
X†AX = λX†X
Multiplying the adjoint equation by X from the right,
-X†AX = λ*X†X
Adding,
(λ + λ*)X†X = 0
λ + λ* = 0
Thus if λ is real it is zero; if complex it is purely imaginary.
7. The modulus of each eigen value of a unitary matrix is unity.
Let U be unitary and UX = λX. Taking the adjoint,
X†U† = λ*X†
Multiplying the two equations,
X†U†UX = λ*λX†X
X†X = |λ|²X†X
Hence,
|λ|² = 1
8. The modulus of each eigen value of an orthogonal matrix is unity.
Let O be an orthogonal matrix and X the eigen vector corresponding to the eigen value λ;
then
OX = λX
Now taking the transpose of this equation,
X^T O^T = λX^T
Multiplying this equation by the first equation, we get
X^T O^T OX = λ²X^T X
Noting that O^T O = I, we have
X^T X = λ²X^T X
Hence,
|λ|² = 1
9. Any two eigen vectors corresponding to two distinct eigen values of a unitary matrix are
orthogonal.
Let U be a unitary matrix and X1 and X2 eigen vectors corresponding to the distinct eigen values λ1
and λ2; then
UX1 = λ1X1
UX2 = λ2X2
Now taking the adjoint of the second equation,
X2†U† = λ2*X2†
Multiplying by the first equation, we get
X2†U†UX1 = λ2*λ1X2†X1
Noting that U†U = I, we have
(λ2*λ1 - 1)X2†X1 = 0
Using the fact that the modulus of each eigen value of a unitary matrix is unity, λ2*λ2 = 1; hence
(λ2*λ1 - λ2*λ2)X2†X1 = 0
λ2*(λ1 - λ2)X2†X1 = 0
Since the eigen values are distinct, λ1 - λ2 ≠ 0 (and λ2* ≠ 0), so we get
X2†X1 = 0
It shows that the eigen vectors are orthogonal.
10. The eigen vectors corresponding to distinct eigen values of a matrix are linearly
independent.
Let A be a matrix (say 3 × 3) and let X1, X2, X3 be eigen vectors corresponding to the distinct eigen
values λ1, λ2, λ3. Suppose
c1X1 + c2X2 + c3X3 = 0
Applying A, and using AXi = λiXi,
c1λ1X1 + c2λ2X2 + c3λ3X3 = 0
Applying A once more,
c1λ1²X1 + c2λ2²X2 + c3λ3²X3 = 0
These three equations can be written as PY = O, where

P = [ 1    1    1   ]     Y = [ c1X1 ]
    [ λ1   λ2   λ3  ]         [ c2X2 ]
    [ λ1²  λ2²  λ3² ]         [ c3X3 ]

Here
det P = (λ2 - λ1)(λ3 - λ1)(λ3 - λ2) ≠ 0
since the eigen values are distinct. So P⁻¹ exists, and
P⁻¹PY = O
Y = O
that is,
c1X1 = 0, c2X2 = 0, c3X3 = 0
Since the Xi are non-zero, this is true only when c1 = 0, c2 = 0, c3 = 0; thus X1, X2, X3 are linearly independent.
Alternately
Let x1, x2, x3, ........., xn be eigen vectors of the operator A corresponding to the distinct eigen
values λ1, λ2, λ3, ....., λn.
We shall assume that x1, x2, x3, ........., xn are linearly dependent and obtain a contradiction.
We can then conclude that x1, x2, x3, ........., xn are linearly independent. Since an eigen
vector is, by definition, non-zero, {x1} is linearly independent. Let r be the largest integer such
that {x1, x2, x3, ........., xr} is linearly independent. Since we assume that the full set is
linearly dependent, r satisfies 1 ≤ r < n. Moreover, by the definition of r, {x1, x2, x3, ........., xr+1} is
linearly dependent. Thus, there are scalars c1, c2, ..., cr+1, not all zero, such that
c1x1 + c2x2 + c3x3 + .......... + cr+1xr+1 = 0        (1)
Noting that
Ax1 = λ1x1, Ax2 = λ2x2, ....., Axr+1 = λr+1xr+1
and applying A to (1), we have
c1λ1x1 + c2λ2x2 + c3λ3x3 + .......... + cr+1λr+1xr+1 = 0        (2)
Multiplying (1) by λr+1 and subtracting from (2) gives
c1(λ1 - λr+1)x1 + c2(λ2 - λr+1)x2 + .... + cr(λr - λr+1)xr = 0
Since {x1, ..., xr} is linearly independent and the eigen values are distinct, ci(λi - λr+1) = 0 implies
ci = 0 for i = 1, ..., r. Then (1) reduces to cr+1xr+1 = 0, so cr+1 = 0 as well, contradicting the assumption
that not all the c's are zero. Hence the eigen vectors are linearly independent.
The eigen values of a diagonal matrix are its diagonal elements. Let

A = [ λ1  0   0  ]
    [ 0   λ2  0  ]
    [ 0   0   λ3 ]

Then |A - λI| = 0 gives

| λ1-λ   0     0   |
|  0    λ2-λ   0   | = 0
|  0     0    λ3-λ |

(λ1 - λ)(λ2 - λ)(λ3 - λ) = 0
Hence, λ1, λ2, λ3 are the eigen values.
Diagonalisation of a matrix
A square matrix A is diagonalisable if there is an invertible matrix P such that P⁻¹AP is a diagonal
matrix.
The matrix P is said to diagonalise A. If P is orthogonal then A is called orthogonally
diagonalisable.
Let A be an n × n matrix. Then A is diagonalisable if it has n linearly independent eigen vectors,
and vice versa.
Let X1, X2, ....., Xn be n eigen vectors corresponding to the n distinct eigen values λ1, λ2, ....., λn of a
matrix A. Let the vectors be represented by column matrices:

Xi = [ X1i ]
     [ X2i ]
     [ ... ]
     [ Xni ]

Thus,
AXi = λiXi, where i = 1, 2, 3, ..., n

AXi = λi [ X1i ]   [ λiX1i ]
         [ X2i ] = [ λiX2i ]
         [ ... ]   [  ...  ]
         [ Xni ]   [ λiXni ]

Let us consider the n × n matrix P whose column vectors are X1, X2, ....., Xn:

P = [X1 X2 ... Xn] = [ X11  X12  ...  X1n ]
                     [ X21  X22  ...  X2n ]
                     [ ...  ...  ...  ... ]
                     [ Xn1  Xn2  ...  Xnn ]

Then AP is a square matrix whose columns are AX1, AX2, ....., AXn. Thus,
AP = A[X1 X2 ... Xn] = [AX1 AX2 ... AXn] = [λ1X1 λ2X2 ... λnXn]
Let the diagonal matrix of the eigen values be denoted by D,

D = diag(λ1, λ2, ..., λn) = [ λ1  0   ...  0  ]
                            [ 0   λ2  ...  0  ]
                            [ ... ...  ... ...]
                            [ 0   0   ...  λn ]

Then
PD = [λ1X1 λ2X2 ... λnXn]
so
AP = PD
Since the Xi are linearly independent, P is non-singular, so P⁻¹ exists, and therefore
D = P⁻¹AP
Properties of the diagonalising matrix
1. The diagonalising matrix of a real symmetric matrix is orthogonal.
Let A be a real symmetric matrix (A^T = A) and P its diagonalising matrix:
D = P⁻¹AP
Taking the transpose,
D^T = (P⁻¹AP)^T = P^T A^T (P⁻¹)^T = P^T A (P^T)⁻¹
Since D is diagonal, D^T = D; hence,
P⁻¹AP = P^T A (P^T)⁻¹
Comparing the two expressions gives
P⁻¹ = P^T
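A NumPy sketch of diagonalisation (my addition), reusing the symmetric matrix from the earlier eigen value example:

```python
import numpy as np

A = np.array([[3.0, 2.0, 0.0],
              [2.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

w, P = np.linalg.eigh(A)          # columns of P are orthonormal eigenvectors
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))            # diag(1, 5, 5)
print(np.allclose(P.T, np.linalg.inv(P)))   # True: P is orthogonal
```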
2. The diagonalising matrix of a Hermitian matrix is unitary.
Let H be a Hermitian matrix (H† = H) and U its diagonalising matrix. Then
U⁻¹HU = D
Taking the adjoint, and noting that D is a real diagonal matrix so that D† = D,
D = D† = (U⁻¹HU)† = U†H†(U⁻¹)† = U†H(U†)⁻¹
Hence,
U⁻¹HU = U†H(U†)⁻¹
It proves that U⁻¹ = U†.
3. If Y is an eigen vector of B = R⁻¹AR corresponding to the eigen value λ, then RY is an
eigen vector of A corresponding to the same eigen value.
BY = λY
R⁻¹ARY = λY
ARY = λRY
A(RY) = λ(RY)
so RY is an eigen vector of A. These eigen vectors are called invariant eigen vectors.
4. A matrix and the transpose of the matrix have the same eigen values.
Let A^T be the transpose of A. Since the determinant of a matrix equals that of its transpose,
|A^T - λI| = |(A - λI)^T| = |A - λI| = 0
so A^T and A have the same characteristic equation, and hence the same eigen values.
5. The eigen values of the complex conjugate of a given matrix A are the complex conjugates of its
eigen values.
|A - λI| = 0
|(A - λI)*| = 0
|A* - λ*I| = 0
so if λ is an eigen value of A, λ* is an eigen value of A*.
6. If λ is an eigen value of A, then kλ is an eigen value of kA.
|A - λI| = 0
|kA - kλI| = k^n|A - λI| = 0
so kλ is an eigen value of kA.
Linear transformation
If f: V → W is a function from a real vector space V to a vector space W, then f is called a linear transformation if
i. f(u + v) = f(u) + f(v) for all u, v in V
ii. f(ku) = kf(u) for all u in V and all scalars k
For example, T: V → R defined on a space of functions by
T(f) = ∫_0^1 f(x)dx
is a linear transformation, since
∫_0^1 (f(x) + g(x))dx = ∫_0^1 f(x)dx + ∫_0^1 g(x)dx
∫_0^1 kf(x)dx = k∫_0^1 f(x)dx
It follows that integration is a linear transformation.
Example:
Let f: R² → R³ be the function defined by f(x, y) = (x, x + y, x - y).
1. If u = (x1, y1) and v = (x2, y2), then u + v = (x1 + x2, y1 + y2); show that f is a linear
transformation.
f(u + v) = [(x1 + x2), (x1 + x2) + (y1 + y2), (x1 + x2) - (y1 + y2)]
= [(x1 + x2), (x1 + y1) + (x2 + y2), (x1 - y1) + (x2 - y2)]
= (x1, x1 + y1, x1 - y1) + (x2, x2 + y2, x2 - y2)
= f(u) + f(v)
2. If k is a scalar, ku = (kx1, ky1), so that
f(ku) = (kx1, kx1 + ky1, kx1 - ky1) = k(x1, x1 + y1, x1 - y1) = kf(u)
Thus f is a linear transformation.
Example:
Let A be a fixed m × n matrix. If we use matrix notation for vectors in R^n and R^m, then we can
define a function T: R^n → R^m by T(x) = Ax. Using the properties of matrix multiplication,
A(u + v) = Au + Av and A(ku) = kA(u),
or equivalently T(u + v) = T(u) + T(v) and T(ku) = kT(u).
A linear transformation of this kind is called a matrix transformation.
Example:
Let D: V → W be the transformation that maps f into its derivative, that is, D(f) = f′.
From the properties of differentiation, we have
D(f + g) = D(f) + D(g)
D(kf) = kD(f)
Hence D is a linear transformation.
Example:
Let T: V → R be defined by
T(f) = ∫_0^1 f(x)dx
If f(x) = x², then
T(f) = ∫_0^1 x²dx = 1/3
Matrix and linear transformation
Every linear transformation on a finite dimensional vector space can be regarded as a matrix
transformation. More precisely, we shall show that if T: R^n → R^m is any linear transformation,
then we can find an m × n matrix A such that T is multiplication by A. To see this, let A be the m × n
matrix having T(e1), T(e2), T(e3), ...., T(en) as its column vectors.
For example, let

T [ x1 ] = [ x1 + 2x2 ]
  [ x2 ]   [ x1 - x2  ]

The standard basis is

e1 = [ 1 ]     e2 = [ 0 ]
     [ 0 ]          [ 1 ]

T(e1) = T [ 1 ] = [ 1 + 2(0) ] = [ 1 ]
          [ 0 ]   [ 1 - 0    ]   [ 1 ]

T(e2) = T [ 0 ] = [ 0 + 2(1) ] = [  2 ]
          [ 1 ]   [ 0 - 1    ]   [ -1 ]

Hence the matrix formed by these column vectors is

A = [ 1   2 ]
    [ 1  -1 ]
More generally,

T(e1) = [ a11 ]   ...   T(en) = [ a1n ]
        [ a21 ]                 [ a2n ]
        [ ... ]                 [ ... ]
        [ am1 ]                 [ amn ]

A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [ ...  ...  ...  ... ]
    [ am1  am2  ...  amn ]

and

x = [ x1 ] = x1e1 + x2e2 + ....... + xnen
    [ x2 ]
    [ ...]
    [ xn ]

Therefore, by linearity,
T(x) = x1T(e1) + x2T(e2) + ...... + xnT(en)
On the other hand,

Ax = [ a11x1 + a12x2 + ... + a1nxn ]
     [ a21x1 + a22x2 + ... + a2nxn ]
     [ ........................... ]
     [ am1x1 + am2x2 + ... + amnxn ]

Ax = x1 [ a11 ] + x2 [ a12 ] + ... + xn [ a1n ]
        [ a21 ]      [ a22 ]            [ a2n ]
        [ ... ]      [ ... ]            [ ... ]
        [ am1 ]      [ am2 ]            [ amn ]

Ax = x1T(e1) + x2T(e2) + ... + xnT(en)
Comparing,
T(x) = Ax
Hence, T is multiplication by A.
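Building A column by column from the images of the standard basis is easy to do in code. A sketch (my addition) using the example transformation above:

```python
import numpy as np

def T(x):
    """The example transformation T(x1, x2) = (x1 + 2*x2, x1 - x2)."""
    x1, x2 = x
    return np.array([x1 + 2 * x2, x1 - x2])

# Build A column by column from the images of the standard basis.
e = np.eye(2)
A = np.column_stack([T(e[:, j]) for j in range(2)])
print(A)                     # [[ 1.  2.] [ 1. -1.]]

x = np.array([3.0, -1.0])
print(T(x), A @ x)           # identical results
```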
Example:
Explain whether the following transformation is orthogonal and find its inverse:
y1 = x1/√2 + x3/√2
y2 = x2
y3 = -x1/√2 + x3/√2
In matrix form,

[ y1 ]   [  1/√2   0   1/√2 ] [ x1 ]
[ y2 ] = [   0     1    0   ] [ x2 ]
[ y3 ]   [ -1/√2   0   1/√2 ] [ x3 ]

Y = AX

A = [  1/√2   0   1/√2 ]     A^T = [ 1/√2   0   -1/√2 ]
    [   0     1    0   ]           [  0     1     0   ]
    [ -1/√2   0   1/√2 ]           [ 1/√2   0    1/√2 ]

AA^T = [  1/√2   0   1/√2 ] [ 1/√2   0   -1/√2 ]   [ 1  0  0 ]
       [   0     1    0   ] [  0     1     0   ] = [ 0  1  0 ]
       [ -1/√2   0   1/√2 ] [ 1/√2   0    1/√2 ]   [ 0  0  1 ]

Hence A is an orthogonal matrix, and its inverse is

A⁻¹ = A^T = [ 1/√2   0   -1/√2 ]
            [  0     1     0   ]
            [ 1/√2   0    1/√2 ]
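A two-line NumPy check of the orthogonality (an added sketch):

```python
import numpy as np

s = 1 / np.sqrt(2)
A = np.array([[ s, 0, s],
              [ 0, 1, 0],
              [-s, 0, s]])

print(np.allclose(A @ A.T, np.eye(3)))      # True: A is orthogonal
print(np.allclose(np.linalg.inv(A), A.T))   # True: the inverse is the transpose
```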
Matrix representation of a linear operator
Let Â be a linear operator on an n-dimensional linear vector space transforming the vector x to y:
y = Âx
If the set x1, x2, x3, ........., xn forms an orthonormal basis in the space, we can write
x = Σi cixi
and
y = Σi dixi
The complex numbers ci and di are given by
ci = (xi, x), di = (xi, y)
Then
Σi dixi = Â Σi cixi = Σi ciÂxi
Taking the scalar product with xj we get
Σi di(xj, xi) = Σi ci(xj, Âxi)
Σi diδji = Σi ci(xj, Âxi)
dj = Σi ci(xj, Âxi)
The scalar product (xj, Âxi) is a complex number. Introducing the matrices

D = [ d1 ]     C = [ c1 ]
    [ d2 ]         [ c2 ]
    [ ...]         [ ...]
    [ dn ]         [ cn ]

and

A = [ (x1, Âx1)  (x1, Âx2)  ...  (x1, Âxn) ]   [ A11  A12  ...  A1n ]
    [ (x2, Âx1)  (x2, Âx2)  ...  (x2, Âxn) ] = [ A21  A22  ...  A2n ]
    [ ...        ...        ...  ...       ]   [ ...  ...  ...  ... ]
    [ (xn, Âx1)  (xn, Âx2)  ...  (xn, Âxn) ]   [ An1  An2  ...  Ann ]

we obtain
D = AC
This equation is completely equivalent to the first equation: the first equation involves
linear operators and vectors, this equation involves matrices. The n × n matrix A is called the matrix
representation of the operator Â in the basis {xi}. Similarly, the column matrices C and D are the
representations of the vectors x and y in the same basis. It should be noted that the representative
matrices are basis dependent.
Transformation of operators
A linear operator is represented by a matrix in a given coordinate system. Let x and y be
vectors such that their components are related by
yj = Σi Ajixi
This is a special case of a matrix product in which the right-hand matrix has a single
column.
We now ask how the components of vectors and of a linear operator transform when we change the
coordinate system.
Let us consider two coordinate systems, primed and unprimed: let the basis vectors in the primed
system be e'j and in the unprimed system be ei. Let e'j be defined in terms of ei by
e'j = Σi rijei
The n² coefficients rij form the elements of a transformation matrix r, which effects the
transformation from one system to the other.
Consider an arbitrary vector x with components xi and x'j in the two systems; then
x = Σi xiei = Σj x'je'j
Substituting for e'j from the above, we get
Σi xiei = Σj x'j Σi rijei
Σi xiei = Σi (Σj rijx'j) ei
Hence,
xi = Σj rijx'j
This is equivalent to the matrix equation
x = rx'
The linear independence of the basis vectors e'j assures that the matrix r is non-singular, so it has
an inverse. Multiplying by the inverse r⁻¹, we get
x' = r⁻¹x
Quantities that transform like x are said to transform contragradiently. Quantities that transform
like e'j are said to transform cogradiently.
Transformation law for operators
The operator equation in the unprimed system is
y = Ax
In the primed system
y' = A'x'
Since
x = rx', y = ry',
we have
ry' = Ax = Arx'
So,
y' = r⁻¹Arx'
Comparing with y' = A'x',
A' = r⁻¹Ar
This is the required transformation.
The eigen values of a linear operator are independent of the particular coordinate system. The
matrix equation
AX = λX
gives
r⁻¹A(rr⁻¹)x = λr⁻¹x
A'x' = λx'
Thus if x is an eigen vector of A, its transform x' is an eigen vector of the transformed matrix
A', and in both cases the eigen values are the same.
It can also be shown that
Tr A' = Tr(S⁻¹AS) = Tr A
det A' = det(S⁻¹AS) = det S⁻¹ det A det S = det A
Similarity transformation
In the case of matrices, any transformation of the form B⁻¹AB, where B is non-singular, is called a
similarity transformation.
Eigen vector and eigen value of operators
Let Â be a linear operator on a vector space and x a vector in the space such that
Âx = λx
Expanding x in a basis {xi},
x = c1x1 + c2x2 + c3x3 + .......... + cnxn = Σi cixi
Using the fact that Â is a linear operator we have
Âx = Â Σi cixi = Σi ciÂxi
Hence,
Σi ciÂxi = λ Σi cixi
Taking the scalar product with xj, we get
Σi ci(xj, Âxi) = λ Σi ci(xj, xi)
Σi ci(xj, Âxi) = λ Σi ciδij
Σi Ajici = λcj
AC = λC
Here A is the matrix formed by the elements Aji = (xj, Âxi) and C is the column matrix formed by the cj.
Adjoint operator
Let Â be an operator on a linear space; then the adjoint operator Â† of Â is defined by
(y, Âx) = (x, Â†y)*
Hermitian operator
An operator Â is said to be a Hermitian operator if
Â = Â†
A Hermitian operator is represented by a Hermitian matrix. Let
Aij = (xi, Âxj) = (xj, Â†xi)*
Since Â† = Â, we have
Aij = (xj, Âxi)* = Aji*
Thus, A is a Hermitian matrix.
The eigen values of a Hermitian operator are real, and eigen vectors corresponding to distinct eigen values are orthogonal.
Degeneracy
If there is more than one eigen vector belonging to an eigen value, then the eigen vectors are said to be
degenerate. In the case of degeneracy, a linear combination of degenerate eigen vectors is also an
eigen vector corresponding to the same eigen value.
Thus if
Âx1 = λx1
Âx2 = λx2
then
Â(c1x1 + c2x2) = c1Âx1 + c2Âx2 = c1λx1 + c2λx2
Â(c1x1 + c2x2) = λ(c1x1 + c2x2)
This proves our proposition.
Simultaneous eigen vectors
Case I: non-degeneracy
Case II: degeneracy
Unitary Operator
An operator Û is said to be unitary if
Û†Û = ÛÛ† = 1
Properties of unitary operators
1. A unitary operator preserves the norm of a vector.
Let the unitary operator Û transform the vector x in a linear space to the vector y:
y = Ûx
Then
(y, y) = (y, Ûx)
(y, y) = (x, Û†y)*
(y, y) = (x, Û†Ûx)*
(y, y) = (x, x)*
Since (x, x) is real, we have
(y, y) = (x, x)
Hence the norm is preserved.
2. A unitary operator transforms two orthogonal vectors x1 and x2 into orthogonal vectors y1
and y2.
Let
y1 = Ûx1
y2 = Ûx2
Then
(y1, y2) = (y1, Ûx2) = (x2, Û†y1)*
(y1, y2) = (x2, Û†Ûx1)*
(y1, y2) = (x2, x1)*
(y1, y2) = (x1, x2)
Since x1 and x2 are orthogonal, (x1, x2) = 0, so y1 and y2 are also orthogonal.
3. Under a unitary transformation Û on a vector space, a linear operator Â gets transformed
into the operator
Â' = ÛÂÛ† = ÛÂÛ⁻¹
Let the vectors x and y in the linear space be related through the transformation
y = Âx
Taking the unitary transformation,
Ûy = ÛÂx = ÛÂÛ†Ûx
Let y' = Ûy and x' = Ûx; then
y' = ÛÂÛ†x'
It follows that under the unitary transformation y and x transform to y' and x', and the operator
transforms to
Â' = ÛÂÛ† = ÛÂÛ⁻¹
Square integrable functions
A function f(x) is said to be square integrable in the interval [a, b] if
∫_a^b |f(x)|²dx < ∞
Square integrable functions form a linear vector space.
Let f(x) and g(x) be two square integrable functions in the interval [a, b]. Then
|f(x) + g(x)|² ≤ |f(x)|² + |g(x)|² + 2|f(x)||g(x)| ≤ 2|f(x)|² + 2|g(x)|²
using 2|f||g| ≤ |f|² + |g|². Since
∫_a^b |f(x)|²dx < ∞ and ∫_a^b |g(x)|²dx < ∞,
we have
∫_a^b |f(x) + g(x)|²dx < ∞
If c is a complex number and f(x) is a square integrable function, then cf(x) is a square integrable
function. Since
|cf(x)|² = |c|²|f(x)|²
we have
∫_a^b |cf(x)|²dx = |c|²∫_a^b |f(x)|²dx < ∞
as ∫_a^b |f(x)|²dx < ∞.
Scalar product of square integrable functions
Let f(x) and g(x) be square integrable functions; their inner product is defined as
(f, g) = ∫_a^b f*(x)g(x)dx
This integral converges, since
|f*(x)g(x)| = |f(x)||g(x)| ≤ (1/2)[|f(x)|² + |g(x)|²]
so that
∫_a^b |f*(x)g(x)|dx ≤ (1/2)[∫_a^b |f(x)|²dx + ∫_a^b |g(x)|²dx] < ∞
Hence
|∫_a^b f*(x)g(x)dx| ≤ ∫_a^b |f*(x)g(x)|dx < ∞
and the scalar product is well defined.
36