Rank-Nullity Theorem
Definition (Range Space and Null Space): Let V be a finite dimensional vector space
over F and let W be any vector space over F. Then for a linear transformation
T : V −→ W, we define
1. C(T) = {T(x) : x ∈ V} as the range space of T and
2. N(T) = {x ∈ V : T(x) = 0} as the null space of T.
We now prove some results associated with the above definitions.
Proposition: Let V be a vector space over F with basis {v1, v2, ..., vn}. Also,
let W be a vector space over F. Then for any linear transformation T : V −→ W,
1. C(T) = {T(x) : x ∈ V} is a subspace of W and dim(C(T)) ≤ dim(W).
2. N(T) is a subspace of V and dim(N(T)) ≤ dim(V).
3. The following statements are equivalent.
(a) T is one-one.
(b) N (T ) = {0} .
(c) {T(vi) : 1 ≤ i ≤ n} is a basis of C(T).
4. dim(C(T)) = dim(V) if and only if N(T) = {0}.
Proof. Parts 1 and 2: The results about C(T) and N(T) can be easily proved. We
therefore leave the proof to the reader.
We now assume that T is one-one. We need to show that N (T ) = {0} .
Let u ∈ N (T ) . Then by definition, T (u) = 0 .
Also, T(0) = 0 for any linear transformation, so T(u) = T(0). Since T is one-one,
this implies u = 0. That is, N(T) = {0}.
Let N (T ) = {0} . We need to show that T is one-one. So, let us assume that for
some u, v ∈ V, T (u) = T (v) .
Then, by linearity of T, T (u − v) = 0 . This implies, u − v ∈ N (T ) = {0} . This in turn
implies u = v . Hence, T is one-one.
The other parts can be similarly proved.
Remark: 1. C(T) is called the range space and N(T) the null space of T.
2. dim(C(T)) is denoted by ρ(T) and is called the rank of T.
3. dim(N(T)) is denoted by ν(T) and is called the nullity of T.
Example: Determine the range and null space of the linear transformation
T : ℝ³ → ℝ⁴ with T(x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z).
Solution: By definition,
C(T) = L((1, 0, 1, 2), (−1, 1, 0, −5), (1, −1, 0, 5))
     = L((1, 0, 1, 2), (1, −1, 0, 5))
     = {α(1, 0, 1, 2) + β(1, −1, 0, 5) : α, β ∈ ℝ}
     = {(α + β, −β, α, 2α + 5β) : α, β ∈ ℝ}
     = {(x, y, z, w) ∈ ℝ⁴ : x + y − z = 0, 5y − 2z + w = 0}
and
N(T) = {(x, y, z) ∈ ℝ³ : T(x, y, z) = 0}
     = {(x, y, z) ∈ ℝ³ : (x − y + z, y − z, x, 2x − 5y + 5z) = 0}
     = {(x, y, z) ∈ ℝ³ : x − y + z = 0, y − z = 0, x = 0, 2x − 5y + 5z = 0}
     = {(x, y, z) ∈ ℝ³ : y − z = 0, x = 0}
     = {(0, y, y) ∈ ℝ³ : y ∈ ℝ} = L((0, 1, 1)).
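The computation above can be cross-checked numerically. The sketch below is a minimal check, assuming NumPy is available; it builds the matrix of T with respect to the standard bases of ℝ³ and ℝ⁴, whose column space is C(T) and whose null space is N(T).

```python
# Minimal numerical check of the example (assumes NumPy).
# A is the matrix of T with respect to the standard bases, so its column
# space is C(T) and its null space is N(T).
import numpy as np

A = np.array([[1, -1,  1],
              [0,  1, -1],
              [1,  0,  0],
              [2, -5,  5]], dtype=float)

rank = np.linalg.matrix_rank(A)                # dim C(T)
print("rank =", rank)                          # expected: 2

# Null space from the SVD: right singular vectors for the (numerically) zero singular values.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T                       # columns span N(T)
print("nullity =", null_basis.shape[1])        # expected: 1
print(null_basis.ravel() / null_basis[1, 0])   # expected: a multiple of (0, 1, 1)
```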
Exercise: Define a linear operator D : Pn(ℝ) → Pn(ℝ) by
D(a0 + a1 x + a2 x^2 + ... + an x^n) = a1 + 2 a2 x + ... + n an x^{n-1}.
Describe N(D) and C(D). Note that C(D) ⊂ P_{n-1}(ℝ).
Theorem (Rank-Nullity Theorem): Let V be a finite dimensional vector space and
let T : V −→ W be a linear transformation. Then ρ(T) + ν(T) = dim(V). That is,
dim(C(T)) + dim(N(T)) = dim(V).
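The theorem can be illustrated numerically for matrices of linear maps in the standard bases. The sketch below assumes NumPy and SciPy; the rank is computed directly and the nullity is read off from a computed basis of the null space, so the equality ρ(T) + ν(T) = dim(V) is an actual check rather than a definition.

```python
# Illustration of the Rank-Nullity Theorem (assumes NumPy and SciPy).
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
for m, n in [(4, 3), (3, 5), (6, 6)]:
    A = rng.integers(-3, 4, size=(m, n)).astype(float)   # matrix of some T : R^n -> R^m
    rho = np.linalg.matrix_rank(A)                        # dim C(T)
    nu = null_space(A).shape[1]                           # dim N(T)
    print(f"dim V = {n}: rho = {rho}, nu = {nu}, rho + nu = {rho + nu}")
    assert rho + nu == n                                  # rank-nullity
```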
Theorem: Let V and W be finite dimensional vector spaces over F and let T :
V −→ W be a linear transformation. Also assume that T is one-one and onto.
Then
1. for each w ∈ W, the set T⁻¹(w) is a set consisting of a single element.
2. the map T⁻¹ : W → V defined by T⁻¹(w) = v whenever T(v) = w is
a linear transformation.
Proof. Since T is onto, for each w ∈ W there exists v ∈ V such that T (v) = w.
So, the set T⁻¹(w) is non-empty.
Suppose there exist vectors v1, v2 ∈ V such that T(v1) = T(v2). Then the
assumption that T is one-one implies v1 = v2. This completes the proof of
Part 1.
We are now ready to prove that T −1 , as defined in Part 2, is a linear transformation.
Let w1, w2 ∈ W. Then by Part 1, there exist unique vectors v1, v2 ∈ V such that
T⁻¹(w1) = v1 and T⁻¹(w2) = v2.
Or equivalently, T(v1) = w1 and T(v2) = w2.
So, for any α1, α2 ∈ F, T(α1 v1 + α2 v2) = α1 w1 + α2 w2.
Hence, by definition, for any α1, α2 ∈ F,
T⁻¹(α1 w1 + α2 w2) = α1 v1 + α2 v2 = α1 T⁻¹(w1) + α2 T⁻¹(w2).
Thus the proof of Part 2 is over.
Definition (Inverse Linear Transformation): Let V and W be finite dimensional
vector spaces over F and let T : V −→ W be a linear transformation. If the map T
is one-one and onto, then the map T⁻¹ : W → V defined by
T⁻¹(w) = v whenever T(v) = w
is called the inverse of the linear transformation T.
Example: Let T : ℝ² → ℝ² be defined by T(x, y) = (x + y, x − y). Then
T⁻¹ : ℝ² → ℝ² is given by T⁻¹(x, y) = ((x + y)/2, (x − y)/2).
One can see that (T ∘ T⁻¹)(x, y) = I(x, y), where I is the identity operator.
Hence, T ∘ T⁻¹ = I. Verify that T⁻¹ ∘ T = I as well.
Thus, the map T⁻¹ is indeed the inverse of T.
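In the standard basis this example amounts to two matrix products. The sketch below, assuming NumPy, uses the standard matrices of T and of the claimed inverse.

```python
# Check that the claimed inverse really inverts T (assumes NumPy).
import numpy as np

A    = np.array([[1.0,  1.0], [1.0, -1.0]])    # standard matrix of T(x, y) = (x + y, x - y)
Ainv = np.array([[0.5,  0.5], [0.5, -0.5]])    # standard matrix of ((x + y)/2, (x - y)/2)

print(np.allclose(A @ Ainv, np.eye(2)))        # T o T^{-1} = I  -> True
print(np.allclose(Ainv @ A, np.eye(2)))        # T^{-1} o T = I  -> True
print(np.allclose(np.linalg.inv(A), Ainv))     # matches the matrix inverse -> True
```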
Corollary: Let V be a finite dimensional vector space and let T : V −→ V be a
linear operator. Then the following statements are equivalent.
1. T is one-one.
2. T is onto.
3. T is invertible.
Remark: Let V be a finite dimensional vector space and let T : V −→ V be a linear
operator. If either T is one-one or T is onto then T is invertible.
Exercise-1: Let V be a finite dimensional vector space and let T : V −→ W be
a linear transformation. Then prove that
(a) N (T ) and C (T ) are also finite dimensional.
(b) i. if dim(V ) < dim(W ) then T cannot be onto.
ii. if dim(V ) > dim(W ) then T cannot be one-one.
Exercise-2: Let V be a vector space of dimension n and let B = (v1, v2, ..., vn) be
an ordered basis of V. For i = 1, ..., n, let wi ∈ V with [wi]B = (a1i, a2i, ..., ani)^t.
Also, let A = [aij]. Then prove that w1, w2, ..., wn is a basis of V if and only if A
is invertible.
Similarity of Matrices
Let V be a finite dimensional vector space with ordered basis B. Then we saw that any
linear operator T : V −→ V corresponds to a square matrix of order dim(V ) and this
matrix was denoted by T [B, B].
In this section, we will try to understand the relationship between T [ B1 , B1 ] and
T [ B2 , B2 ] where B1 and B2 are distinct ordered bases of V .
Theorem (Composition of Linear Transformations): Let V, W and Z be finite
dimensional vector spaces with ordered bases B1 , B2 and B3 respectively.
Also, let T : V −→ W and S : W −→ Z be linear transformations. Then the
composition map S ◦ T : V −→ Z is a linear transformation and
(S ∘ T)[B1, B3] = S[B2, B3] · T[B1, B2].
Proof. Let B1 = (u1, ..., un), B2 = (v1, ..., vm) and B3 = (w1, ..., wp)
be ordered bases of V, W and Z, respectively. Then using the relation
T(uj) = Σ_{i=1}^{m} (T[B1, B2])_{ij} vi, for 1 ≤ j ≤ n,
and the analogous relation S(vj) = Σ_{k=1}^{p} (S[B2, B3])_{kj} wk, we have
(S ∘ T)(ut) = S(T(ut)) = S( Σ_{j=1}^{m} (T[B1, B2])_{jt} vj ) = Σ_{j=1}^{m} (T[B1, B2])_{jt} S(vj)
            = Σ_{j=1}^{m} (T[B1, B2])_{jt} Σ_{k=1}^{p} (S[B2, B3])_{kj} wk
            = Σ_{k=1}^{p} ( Σ_{j=1}^{m} (S[B2, B3])_{kj} (T[B1, B2])_{jt} ) wk
            = Σ_{k=1}^{p} ( S[B2, B3] T[B1, B2] )_{kt} wk.
Thus, using matrix multiplication, the t-th column of (S ∘ T)[B1, B3] is given by
[(S ∘ T)(ut)]B3 = ( (S[B2, B3] T[B1, B2])_{1t}, ..., (S[B2, B3] T[B1, B2])_{pt} )^t
                = S[B2, B3] · ( (T[B1, B2])_{1t}, ..., (T[B1, B2])_{mt} )^t.
Hence,
(S ∘ T)[B1, B3] = ( [(S ∘ T)(u1)]B3, [(S ∘ T)(u2)]B3, ..., [(S ∘ T)(un)]B3 ) = S[B2, B3] · T[B1, B2]
and the proof of the theorem is over.
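For the special case where B1, B2 and B3 are the standard ordered bases of ℝⁿ, ℝᵐ and ℝᵖ, the matrices in the theorem are just the usual standard matrices, and the formula says that the matrix of S ∘ T is the product of the two matrices. The sketch below (assuming NumPy, with randomly chosen matrices) checks this column by column, exactly as in the proof.

```python
# Composition formula in standard bases (assumes NumPy).
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 3, 4, 2
MT = rng.integers(-2, 3, size=(m, n)).astype(float)   # T[B1, B2]
MS = rng.integers(-2, 3, size=(p, m)).astype(float)   # S[B2, B3]

# Build (S o T)[B1, B3] column by column: the t-th column is (S o T)(e_t).
M_ST = np.column_stack([MS @ (MT @ e) for e in np.eye(n)])

print(np.allclose(M_ST, MS @ MT))                      # True
```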
Proposition: Let V be a finite dimensional vector space and let T, S : V −→ V be
two linear operators. Then ν (T ) + ν (S) ≥ ν (T ◦ S) ≥ max{ν (T ), ν (S)}.
Proof. We first prove the second inequality.
Suppose v ∈ N (S). Then (T ◦ S)(v) = T(S(v)) = T(0) = 0, which gives N (S) ⊂ N (T ◦ S).
Therefore, ν (S) ≤ ν (T ◦ S).
We now use the Rank-Nullity Theorem to see that the inequality ν (T ) ≤ ν (T ◦ S)
is equivalent to ρ(T ◦ S) ≤ ρ(T ), which holds as soon as C (T ◦ S) ⊂ C(T ).
But this holds true as C (S) ⊂ V and hence C (T ◦ S) = T (C (S)) ⊂ T (V ) = C (T ).
Thus, the proof of the second inequality is over.
For the proof of the first inequality, assume that k = ν(S) and {v1, v2, ..., vk}
is a basis of N(S). Then v1, v2, ..., vk ∈ N(T ∘ S) as T(0) = 0.
So, let us extend it to get a basis {v1, v2, ..., vk, u1, u2, ..., ul} of N(T ∘ S).
Claim: {S(u1), S(u2), ..., S(ul)} is a linearly independent subset of N(T).
It is easily seen that {S(u1), S(u2), ..., S(ul)} is a subset of N(T). So, let us solve
the linear system c1 S(u1) + c2 S(u2) + ... + cl S(ul) = 0 in the unknowns c1, c2, ..., cl.
This system is equivalent to S(c1 u1 + c2 u2 + ... + cl ul) = 0. That is, c1 u1 + c2 u2 + ... + cl ul ∈ N(S).
Hence, c1 u1 + c2 u2 + ... + cl ul is a unique linear combination of the vectors v1, v2, ..., vk.
Thus, c1 u1 + c2 u2 + ... + cl ul = α1 v1 + α2 v2 + ... + αk vk        (1)
for some scalars α1, α2, ..., αk.
But by assumption, v1 , v2 ,..., vk , u1 , u2 ,..., ul is a basis of N (T ◦ S) and hence
linearly independent.
Therefore, rewriting Equation (1) as c1 u1 + ... + cl ul − α1 v1 − ... − αk vk = 0, linear
independence forces ci = 0 for 1 ≤ i ≤ l and αj = 0 for 1 ≤ j ≤ k.
Thus, {S(u1), S(u2), ..., S(ul)} is a linearly independent subset of N(T) and so
ν(T) ≥ l.
Hence ν(T ∘ S) = k + l ≤ ν(S) + ν(T).
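The two inequalities can also be observed on a small concrete example. The sketch below (assuming NumPy and SciPy, with two diagonal projection matrices chosen purely for illustration) computes the three nullities directly.

```python
# Spot check of nu(T) + nu(S) >= nu(T o S) >= max{nu(T), nu(S)} (assumes NumPy and SciPy).
import numpy as np
from scipy.linalg import null_space

def nu(A):
    """Nullity of the operator with matrix A, i.e. dim N(A)."""
    return null_space(A).shape[1]

T = np.diag([1.0, 1.0, 1.0, 0.0])    # nu(T) = 1
S = np.diag([1.0, 1.0, 0.0, 0.0])    # nu(S) = 2

lhs, mid, rhs = nu(T) + nu(S), nu(T @ S), max(nu(T), nu(S))
print(lhs, mid, rhs)                 # 3 2 2
print(lhs >= mid >= rhs)             # True
```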
Theorem (Inverse of a Linear Transformation): Let V be a finite dimensional vector
space with ordered bases B1 and B2 . Also let T : V −→ V be an invertible linear
operator. Then the matrices of T and T⁻¹ are related by T[B1, B2]⁻¹ = T⁻¹[B2, B1].
Exercise: Find the matrix of the linear transformations given below.
1. Define T : ℝ³ → ℝ³ by T(1, 1, 1) = (1, −1, 1), T(1, −1, 1) = (1, 1, −1) and
T(1, 1, −1) = (1, 1, 1). Find T[B, B], where B = ((1, 1, 1), (1, −1, 1), (1, 1, −1)).
Is T an invertible linear operator?
2. Let B = (1, x, x^2, x^3) be an ordered basis of P3(ℝ). Define T : P3(ℝ) → P3(ℝ)
by
T(1) = 1, T(x) = 1 + x, T(x^2) = (1 + x)^2, T(x^3) = (1 + x)^3.
Prove that T is an invertible linear operator. Also, find T[B, B] and T⁻¹[B, B].
Definition (Isomorphism): Let V and W be two vector spaces over F . Then V
is said to be isomorphic to W if there exists a linear transformation T : V −→ W
that is one-one, onto and invertible. We also denote it by V ≅ W.
Theorem: Let V be a vector space over ℝ. If dim(V ) = n then 𝑉 ≅ ℝ𝑛 .
Proof. Let B be the standard ordered basis of ℝⁿ and let B1 = (v1, v2, ..., vn) be
an ordered basis of V. Define a map
T : V → ℝⁿ by T(vj) = ej for 1 ≤ j ≤ n.
Then it can be easily verified that T is a linear transformation that is one-one, onto
and invertible (the image of a basis vector is a basis vector).
Hence, the result follows.
A similar idea leads to the following result and hence we omit the proof.
Theorem: Let V be a vector space over ℂ. If dim(V ) = n then 𝑉 ≅ ℂ𝑛 .
Example: Let V = {(x, y, z, w) ∈ ℝ⁴ : x − y + z − w = 0}. Suppose that B is the
standard ordered basis of ℝ³ and B1 = ((1, 1, 0, 0), (−1, 0, 1, 0), (1, 0, 0, 1)) is an ordered
basis of V. Then the map T : V → ℝ³ defined by T(v) = T(y − z + w, y, z, w) = (y, z, w) is a linear
transformation and T[B1, B] = I3. Thus, T is one-one, onto and invertible, and hence V ≅ ℝ³.
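A direct check, assuming NumPy: applying T to the three vectors of B1 should produce the standard basis vectors of ℝ³, which is exactly the statement T[B1, B] = I3.

```python
# Check that T[B1, B] = I_3 for the example above (assumes NumPy).
import numpy as np

def T(v):
    x, y, z, w = v
    assert abs(x - y + z - w) < 1e-12          # v must lie in V
    return np.array([y, z, w])

B1 = [np.array([ 1.0, 1.0, 0.0, 0.0]),
      np.array([-1.0, 0.0, 1.0, 0.0]),
      np.array([ 1.0, 0.0, 0.0, 1.0])]

M = np.column_stack([T(v) for v in B1])        # this is T[B1, B]
print(np.allclose(M, np.eye(3)))               # True
```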
Change of Basis
Let V be a vector space with ordered bases B1 = (u1, u2, ..., un) and
B2 = (v1, v2, ..., vn). Also, recall that the identity linear operator I : V −→ V
is defined by I(x) = x for every x ∈ V. If
I[B2, B1] = ( [I(v1)]B1, [I(v2)]B1, ..., [I(vn)]B1 ) = [aij],
then by definition of I[B2, B1], we see that vi = I(vi) = Σ_{j=1}^{n} aji uj for all
i, 1 ≤ i ≤ n. Thus, we have proved the following result, which also appeared in
another form in a previously stated theorem.
Theorem (Change of Basis Theorem): Let V be a finite dimensional vector space
with ordered bases B1 = (u1, u2, ..., un) and B2 = (v1, v2, ..., vn), and let
I[B2, B1] = [aij]. Suppose x ∈ V with [x]B1 = (α1, α2, ..., αn)^t and
[x]B2 = (β1, β2, ..., βn)^t. Then
[x]B1 = I[B2, B1] [x]B2.
Or equivalently, αi = Σ_{j=1}^{n} aij βj for each i, 1 ≤ i ≤ n.
Remark: Observe that the identity linear operator I : V −→ V is invertible and
hence by the previous theorem I [ B2 , B1 ]−1 = I −1[ B1 , B2 ] = I [ B1 , B2 ] . Therefore,
we also have [ x]B2 = I [ B1 , B2 ] [ x]B1 .
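The coordinate formula is easy to test numerically. In the sketch below (assuming NumPy; the two bases of ℝ³ are chosen only for illustration), the columns of P1 and P2 hold the B1 and B2 vectors in standard coordinates, so [x]B1 = P1⁻¹x, [x]B2 = P2⁻¹x and I[B2, B1] = P1⁻¹P2.

```python
# Numerical check of [x]_{B1} = I[B2, B1] [x]_{B2} (assumes NumPy).
import numpy as np

P1 = np.array([[1.0, 1.0, 1.0],     # columns: the B1 vectors in standard coordinates
               [0.0, 1.0, 1.0],
               [0.0, 0.0, 1.0]])
P2 = np.array([[1.0, 0.0, 1.0],     # columns: the B2 vectors in standard coordinates
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])

I_B2_B1 = np.linalg.solve(P1, P2)   # I[B2, B1] = P1^{-1} P2

x = np.array([3.0, -1.0, 2.0])
x_B1 = np.linalg.solve(P1, x)       # [x]_{B1}
x_B2 = np.linalg.solve(P2, x)       # [x]_{B2}
print(np.allclose(x_B1, I_B2_B1 @ x_B2))    # True
```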
Let V be a finite dimensional vector space with ordered bases B1 and B2 .
Then for any linear operator T : V −→ V the next result relates T [ B1 , B1 ]
and T [ B2 , B2 ] .
Theorem: Let B1 = (u1, u2, ..., un) and B2 = (v1, v2, ..., vn) be two ordered
bases of a vector space V. Also, let A = [aij] = I[B2, B1] be the matrix
of the identity linear operator. Then for any linear operator T : V −→ V,
T[B2, B2] = A⁻¹ · T[B1, B1] · A = I[B1, B2] · T[B1, B1] · I[B2, B1].        (i)
Proof. The proof uses the composition theorem proved earlier, by writing T[B1, B2]
both as (I ∘ T)[B1, B2] and as (T ∘ I)[B1, B2], where I is the identity operator on
V. By that theorem, we have
T[B1, B2] = (I ∘ T)[B1, B2] = I[B1, B2] · T[B1, B1]
and
T[B1, B2] = (T ∘ I)[B1, B2] = T[B2, B2] · I[B1, B2].
Thus, using I[B2, B1] = I[B1, B2]⁻¹, we get I[B1, B2] · T[B1, B1] · I[B2, B1] = T[B2, B2],
and the result follows.
Note:
➢ Let T : V −→ V be a linear operator on V . If dim(V ) = n then each ordered
basis B of V gives rise to an n × n matrix T [B, B].
➢ Also, we know that for any vector space there are infinitely many choices of
an ordered basis.
➢ So, as we change an ordered basis, the matrix of the linear transformation changes.
➢ The above theorem tells us that all these matrices are related by an invertible matrix.
Thus we are led to the following remark and definition.
Remark: The Equation (i) shows that T [ B2 , B2 ] = I [ B1 , B2 ].T [ B1 , B1 ]. I [ B2 , B1 ] .
Hence, the matrix I [ B1 , B2 ] is called the B1 : B2 change of basis matrix.
Definition (Similar Matrices): Two square matrices B and C of the same order
are said to be similar if there exists a non-singular matrix P such that P⁻¹BP = C,
or equivalently, BP = PC.
Example-1: Let B1 = (1 + x, 1 + 2x + x^2, 2 + x) and B2 = (1, 1 + x, 1 + x + x^2)
be ordered bases of P2(ℝ). Then I(a + bx + cx^2) = a + bx + cx^2. Thus
I[B2, B1] = ( [1]B1, [1 + x]B1, [1 + x + x^2]B1 ) =
    [ −1  1  −2 ]
    [  0  0   1 ]
    [  1  0   1 ]
and
I[B1, B2] = ( [1 + x]B2, [1 + 2x + x^2]B2, [2 + x]B2 ) =
    [ 0  −1  1 ]
    [ 1   1  1 ]
    [ 0   1  0 ].
Also, verify that I [ B1 , B2 ]−1 = I [ B2 , B1 ] .
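The verification requested above can also be done numerically (assuming NumPy): the two matrices just computed should be inverses of each other.

```python
# Check that I[B1, B2] and I[B2, B1] from Example-1 are inverses (assumes NumPy).
import numpy as np

I_B2_B1 = np.array([[-1.0, 1.0, -2.0],
                    [ 0.0, 0.0,  1.0],
                    [ 1.0, 0.0,  1.0]])
I_B1_B2 = np.array([[ 0.0, -1.0, 1.0],
                    [ 1.0,  1.0, 1.0],
                    [ 0.0,  1.0, 0.0]])

print(np.allclose(I_B1_B2 @ I_B2_B1, np.eye(3)))      # True
print(np.allclose(np.linalg.inv(I_B1_B2), I_B2_B1))   # True
```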
Example-2: Let B1 = ((1, 0, 0), (1, 1, 0), (1, 1, 1)) and B2 = ((1, 1, −1), (1, 2, 1), (2, 1, 1))
be two ordered bases of ℝ³. Define T : ℝ³ → ℝ³ by
T(x, y, z) = (x + y, x + y + 2z, y − z).
Then
T[B1, B1] =
    [ 0  0  −2 ]
    [ 1  1   4 ]
    [ 0  1   0 ]
and
T[B2, B2] =
    [ −4/5  1   8/5 ]
    [ −2/5  2   9/5 ]
    [  8/5  0  −1/5 ].
Also, check that
I[B2, B1] =
    [  0  −1  1 ]
    [  2   1  0 ]
    [ −1   1  1 ]
and that
T[B1, B1] · I[B2, B1] = I[B2, B1] · T[B2, B2] =
    [  2  −2  −2 ]
    [ −2   4   5 ]
    [  2   1   0 ].
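Finally, the similarity relation of Equation (i) can be confirmed numerically for this example (assuming NumPy): with A = I[B2, B1], one should have A⁻¹ · T[B1, B1] · A = T[B2, B2].

```python
# Check that T[B1, B1] and T[B2, B2] from Example-2 are similar via A = I[B2, B1] (assumes NumPy).
import numpy as np

T_B1 = np.array([[0.0, 0.0, -2.0],
                 [1.0, 1.0,  4.0],
                 [0.0, 1.0,  0.0]])
T_B2 = np.array([[-4/5, 1.0,  8/5],
                 [-2/5, 2.0,  9/5],
                 [ 8/5, 0.0, -1/5]])
A = np.array([[ 0.0, -1.0, 1.0],
              [ 2.0,  1.0, 0.0],
              [-1.0,  1.0, 1.0]])

print(np.allclose(np.linalg.inv(A) @ T_B1 @ A, T_B2))   # T[B2, B2] = A^{-1} T[B1, B1] A -> True
print(np.allclose(T_B1 @ A, A @ T_B2))                  # matches the product displayed above -> True
```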