Chapter 4

Linear maps
T : S ∋ X ↦ T(X) ∈ S′
will be used for such a map. If X ∈ S, then T(X) ∈ S ′ is called the image of X by T.
The set S is often called the domain of T and is also denoted by Dom(T), while
T(S) := {T(X) | X ∈ S}
Definition 4.2.1. Let V, W be two vector spaces over the same field F. A map T : V → W is a linear map if the following two conditions are satisfied:

(i) T(X + Y) = T(X) + T(Y) for any X, Y ∈ V,

(ii) T(λX) = λT(X) for any X ∈ V and any λ ∈ F.
Note that the examples (ii) and (iv) of Examples 4.1.2 were already linear maps.
Let us still mention the map Id : V → V (also denoted by 1) defined by Id(X) = X for
any X ∈ V , which is clearly linear, and the map O : V → W defined by O(X) = 0 for
any X ∈ V , which is also linear.
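The two defining conditions of a linear map, T(X + Y) = T(X) + T(Y) and T(λX) = λT(X), can be explored numerically. The following sketch (with an arbitrary sample matrix A and test vectors, none taken from the text) checks both conditions for the map X ↦ AX:

```python
# Numerical sanity check of the two linearity conditions for T(X) = AX.
# The matrix A and the test vectors are arbitrary samples, not from the text.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))     # a sample linear map R^3 -> R^2

def T(X):
    return A @ X

X, Y = rng.standard_normal(3), rng.standard_normal(3)
lam = 2.5

additive = np.allclose(T(X + Y), T(X) + T(Y))      # T(X + Y) = T(X) + T(Y)
homogeneous = np.allclose(T(lam * X), lam * T(X))  # T(lam X) = lam T(X)
```

Such a check only tests the conditions on sample vectors; it does not replace the proofs given in this chapter.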
Let us now observe that linear maps are rather simple maps.
Lemma 4.2.2. Let V, W be vector spaces over the same field F, and let T : V → W be
a linear map. Then,
(i) T(0) = 0,
(ii) T(−X) = −T(X) for any X ∈ V .
Proof. (i) It is sufficient to observe that T(0) = T(0 + 0) = T(0) + T(0), and to add −T(0) on both sides. (ii) Similarly, 0 = T(0) = T(X + (−X)) = T(X) + T(−X), which implies T(−X) = −T(X).
Given linear maps T, T1, T2 from V to W and λ ∈ F, one defines the sum and the multiplication by a scalar by

(T1 + T2)(X) := T1(X) + T2(X)    (4.2.1)

and

(λT)(X) := λT(X)    (4.2.2)

for any X ∈ V. It is then easily observed that T1 + T2 is still a linear map, and that λT is also a linear map. We can then even say more:
Proposition 4.2.3. Let V, W be vector spaces over the same field F. Then
L(V, W) := {T : V → W | T is linear}
is a vector space over F, once endowed with the addition defined by (4.2.1) and the
multiplication by a scalar defined in (4.2.2).
Before giving the proof, let us observe that if V = Rn and W = Rm, then L(Rn, Rm) corresponds to the set of all LA with A ∈ Mmn(R). Note that this statement also holds for an arbitrary field F, i.e.
L(Fn, Fm) = {LA | A ∈ Mmn(F)}.
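The identification of L(Fn, Fm) with Mmn(F) also respects the operations (4.2.1) and (4.2.2): adding the maps LA and LB pointwise agrees with LA+B, and scaling agrees with LλA. A small numerical sketch, with arbitrary sample matrices not taken from the text:

```python
# The correspondence L_A <-> A respects the operations (4.2.1) and (4.2.2):
# (L_A + L_B)(X) = L_{A+B}(X) and (lam L_A)(X) = L_{lam A}(X).
# Sample matrices and vector, not taken from the text.
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])
B = np.array([[0., 1., 1.],
              [2., 0., 1.]])
lam = 4.0
X = np.array([1., -2., 3.])

sum_matches = np.allclose(A @ X + B @ X, (A + B) @ X)
scalar_matches = np.allclose(lam * (A @ X), (lam * A) @ X)
```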
Proof. The proof consists in checking all conditions of Definition 3.1.3. For that purpose, let T, T1, T2, T3 be linear maps from V to W, and let λ, µ ∈ F. Let also X be an arbitrary element of V.
(i) One has
[(T1 + T2) + T3](X) = (T1 + T2)(X) + T3(X) = T1(X) + T2(X) + T3(X)
= T1(X) + (T2 + T3)(X) = [T1 + (T2 + T3)](X).
and
Ker(T) := {X ∈ V | T(X) = 0}.
Proof. The first part of the statement is proved in Exercise 4.4. For the second part of the statement, consider Y1, Y2 ∈ Ran(T), i.e. there exist X1, X2 ∈ V such that Y1 = T(X1) and Y2 = T(X2). Then one has

Y1 + Y2 = T(X1) + T(X2) = T(X1 + X2)

with X1 + X2 ∈ V, and thus Y1 + Y2 ∈ Ran(T). Similarly, for λ ∈ F and Y = T(X) ∈ Ran(T) one has

λY = λT(X) = T(λX)

with λX ∈ V. Again, it follows that λY ∈ Ran(T), from which one concludes that Ran(T) is a subspace of W.
(i) For a fixed N ∈ Rn, consider the map TN : Rn → R defined by TN(X) = N · X for any X ∈ Rn. This map is linear since

TN(X + Y) = N · (X + Y) = N · X + N · Y = TN(X) + TN(Y),

and similarly TN(λX) = N · (λX) = λ(N · X) = λTN(X). Then one observes that

Ker(TN) = {X ∈ Rn | N · X = 0} = HN,0.
(ii) Let A ∈ Mmn (R) and let us set LA : Rn → Rm defined by LA (X) = AX for any
X ∈ Rn. As already mentioned, this map is linear, and one has Ker(LA) = {X ∈ Rn | AX = 0}, i.e. Ker(LA) is the set of solutions of the linear system AX = 0.
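Ker(LA) can also be computed numerically: the right singular vectors of A associated with (numerically) vanishing singular values span the set of solutions of AX = 0. A sketch with an arbitrary rank-2 matrix, not taken from the text:

```python
# Ker(L_A) via the SVD: right singular vectors whose singular value is
# (numerically) zero span the set of solutions of AX = 0.
# The matrix A is an arbitrary rank-2 example, not from the text.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],     # twice the first row, so rank(A) = 2
              [1., 0., 1.]])

U, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[s < 1e-10]          # rows form a basis of Ker(L_A)
dim_ker = kernel_basis.shape[0]
residual = np.linalg.norm(A @ kernel_basis.T)   # should vanish
```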
Remark 4.3.3. The kernel of a linear map is never empty, indeed it always contains
the element 0.
Lemma 4.3.4. Let T : V → W be a linear map between vector spaces over the same field F, and assume that Ker(T) = {0}. If {X1, . . . , Xr} are linearly independent elements of V, then {T(X1), . . . , T(Xr)} are linearly independent elements of W.
Let us now come to an important result of this section. For this, we just recall that
for a vector space, its dimension corresponds to the number of elements of any of its
bases. It also corresponds to the maximal number of linearly independent elements of
this vector space.
Theorem 4.3.5. Let T : V → W be a linear map between two vector spaces over the
same field F, and assume that V is of finite dimension. Then
dim(Ker(T)) + dim(Ran(T)) = dim(V).
Proof. Let {Y1 , . . . , Yn } be a basis for Ran(T), and let X1 , . . . , Xn ∈ V such that
T(Xj ) = Yj for any j ∈ {1, . . . , n}. Let also {K1 , . . . , Km } be a basis for Ker(T). Note
that if one shows that {X1 , . . . , Xn , K1 , . . . , Km } is a basis for V , then the statement is
proved (with dim(V ) = m + n).
So, let X be an arbitrary element of V . Then there exist λ1 , . . . , λn ∈ F such that
T(X) = λ1 Y1 + · · · + λn Yn, since {Y1, . . . , Yn} is a basis for Ran(T). It follows that
0 = T(X) − λ1 Y1 − · · · − λn Yn
= T(X) − λ1 T(X1) − · · · − λn T(Xn)
= T(X − λ1 X1 − · · · − λn Xn),

which means that X − λ1 X1 − · · · − λn Xn ∈ Ker(T). Since {K1, . . . , Km} is a basis for Ker(T), there exist λ′1, . . . , λ′m ∈ F such that

X − λ1 X1 − · · · − λn Xn = λ′1 K1 + · · · + λ′m Km,

and therefore

X = λ1 X1 + · · · + λn Xn + λ′1 K1 + · · · + λ′m Km.

In other words,

Vect(X1, . . . , Xn, K1, . . . , Km) = V.
Let us now show that these vectors are linearly independent. For this, assume that
λ1 X1 + · · · + λn Xn + λ′1 K1 + · · · + λ′m Km = 0 (4.3.1)
for some λ1 , . . . , λn , λ′1 , . . . , λ′m . Then one infers from (4.3.1) that
0 = T(0)
= T(λ1 X1 + · · · + λn Xn + λ′1 K1 + · · · + λ′m Km)
= T(λ1 X1 + · · · + λn Xn) + 0
= λ1 T(X1) + · · · + λn T(Xn)
= λ1 Y1 + · · · + λn Yn.
Since Y1 , . . . , Yn are linearly independent, one already concludes that λj = 0 for any
j ∈ {1, . . . , n}. It then follows from (4.3.1) that λ′1 K1 + · · · + λ′m Km = 0, which implies
that λ′i = 0 for any i ∈ {1, . . . , m} since the vectors Ki are linearly independent.
In summary, one has shown that V is generated by the family of linearly independent
elements X1 , . . . , Xn , K1 , . . . , Km of V . Thus, these elements define a basis, as expected.
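Theorem 4.3.5 can be tested numerically for maps of the form LA : Rn → Rm, where dim Ran(LA) is the rank of A. A sketch with random sample matrices of various shapes:

```python
# Numerical check of the rank-nullity identity for L_A : R^n -> R^m:
# dim Ker(L_A) + dim Ran(L_A) = n.  Random sample matrices, not from the text.
import numpy as np

rng = np.random.default_rng(1)
checks = []
for m, n in [(3, 5), (4, 4), (6, 2)]:
    A = rng.standard_normal((m, n))
    rank = np.linalg.matrix_rank(A)                 # dim Ran(L_A)
    _, _, Vt = np.linalg.svd(A, full_matrices=True)
    # count the right singular vectors annihilated by A: dim Ker(L_A)
    dim_ker = sum(np.linalg.norm(A @ v) < 1e-10 for v in Vt)
    checks.append(dim_ker + rank == n)

rank_nullity_holds = all(checks)
```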
Lemma 4.4.1. The range of LA corresponds to the subspace generated by the columns
of A.
Proof. It is sufficient to write AX in terms of the columns of A, and to take into account the computation performed before the statement.
Let us introduce a new notation: a basis for a vector space V over F will be denoted by
V := {V1 , . . . , Vn } with {V1 , . . . , Vn } a family of linearly independent elements of V
which generate V . In addition, let us denote by X an arbitrary element of V (which
was simply denoted by X up to now). Then, since V is a basis for V there exists
X := t (x1 , . . . , xn ) ∈ Fn such that
X = x1 V1 + x2 V2 + · · · + xn Vn .
The vector X ∈ Fn is called the coordinate vector of X with respect to the basis V of V ,
and we shall use the notation
(X )V = X
meaning precisely that the coordinates of X with respect to the basis V are X.
Remark 4.5.1. Clearly, if V = Rn and if Vj = Ej, one just says that X are the coordinates of X, and one usually identifies X and X. This is what we have done until now, since we have only considered the usual basis {Ej}nj=1 of Rn. However, if one needs
to consider different bases on Rn , the above notations are necessary. Note for example
that X exists without any choice of a particular basis, while X depends on such a choice.
Now, if Y is another element of V with (Y)V = Y = t (y1 , . . . , yn ), let us observe
that
(X + Y)V = X + Y and (λX )V = λX (4.5.1)
for any λ ∈ F. Indeed, this follows from the equalities
X + Y = x1 V1 + · · · + xn Vn + y1 V1 + · · · + yn Vn
= (x1 + y1 )V1 + · · · + (xn + yn )Vn
and
λX = λ(x1 V1 + · · · + xn Vn ) = (λx1 )V1 + · · · + (λxn )Vn .
Thus, choosing a basis V for V allows one to identify any point of V with an element
of Fn via its coordinate vector. By taking (4.5.1) into account, one also observes that
V allows one to define a linear map (·)V : V → Fn .
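Concretely, computing the coordinate vector (X)V amounts to solving the linear system whose matrix has the basis vectors as columns. A numerical sketch with an arbitrary basis of R3, not taken from the text:

```python
# Computing the coordinate vector (X)_V: solve x1 V1 + x2 V2 + x3 V3 = X,
# a linear system whose matrix has the basis vectors as columns.
# The basis below is an arbitrary example, not taken from the text.
import numpy as np

V1 = np.array([1., 1., 0.])
V2 = np.array([0., 1., 1.])
V3 = np.array([1., 0., 1.])
Vmat = np.column_stack([V1, V2, V3])    # invertible since V1, V2, V3 is a basis

X = np.array([2., 3., 1.])
coords = np.linalg.solve(Vmat, X)       # the coordinate vector (X)_V
reconstructed = coords[0] * V1 + coords[1] * V2 + coords[2] * V3
```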
We also consider a vector space W over F endowed with a basis W := {W1 , . . . , Wm }.
In this case, for any Z ∈ W we set (Z)W = Z = t (z1 , . . . , zm ) ∈ Fm for the coordinate
vector of Z with respect to the basis W of W . Thus, if T : V → W is a linear map,
there exists T := (tij) ∈ Mmn(F), called the matrix associated with T with respect to the bases V of V and W of W, defined by
T(Vj) = ∑_{i=1}^{m} tij Wi ≡ ∑_{i=1}^{m} (tT)ji Wi    (4.5.2)
for any j ∈ {1, . . . , n}. On the other hand, we shall show just below that the following
equality also holds:

(T(X))W = T (X)V.    (4.5.3)
where T j corresponds to the j th column of the matrix T . In other words one has
T = (T(E1) T(E2) · · · T(En)).
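This column description makes it easy to assemble the matrix of a concrete map from its values on the canonical basis. A sketch with the rotation of R2 by 90 degrees, an illustrative example not taken from the text:

```python
# Assembling the matrix of a linear map column by column: the j-th column
# is T(E_j).  The map below (rotation of R^2 by 90 degrees) is only an
# illustrative example, not one from the text.
import numpy as np

def T(X):
    x, y = X
    return np.array([-y, x])    # counterclockwise rotation by 90 degrees

E1, E2 = np.eye(2)              # canonical basis of R^2
Tmat = np.column_stack([T(E1), T(E2)])

X = np.array([3., 1.])
agrees = np.allclose(Tmat @ X, T(X))    # the matrix reproduces the map
```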
then the matrix associated with T with respect to the basis V is given by
T = (  2  1  5
      −1  1  4
       0 −4  2 ).
Let us still consider the notion of a change of basis. Indeed, given the matrix associ-
ated to a linear map in a prescribed basis, it is natural to wonder about the matrix asso-
ciated to the same linear map but with respect to another basis. So, let V = {V1 , . . . , Vn }
and V′ = {V1′, . . . , Vn′} be two bases of the same vector space V. Let B = (bij) ∈ Mn(F)
be the matrix defined by
Vj′ = ∑_{i=1}^{n} bij Vi ≡ ∑_{i=1}^{n} (tB)ji Vi.
It is easily observed that the matrix B is invertible. Then, for any X ∈ V with X = (X )V
and X ′ = (X )V ′ , one has
∑_{i=1}^{n} xi Vi = X = ∑_{j=1}^{n} x′j Vj′ = ∑_{j=1}^{n} x′j ∑_{i=1}^{n} bij Vi = ∑_{i=1}^{n} (∑_{j=1}^{n} bij x′j) Vi,

and since the Vi are linearly independent, one infers that

X = B X′.    (4.5.4)
Let us now consider a linear map T : V → V , and let T be the matrix associated
with T with respect to the basis V, and let T ′ be the matrix associated to T with
respect to the basis V′. The original question then asks for the link between T and T′. In order to answer this question, observe that for any X ∈ V one gets by
equations (4.5.4) and (4.5.5) that
T BX′ = T X = (T(X))V = B (T(X))V′ = B T′ X′.
Since this equality holds for any X′ ∈ Fn, one infers that T B = B T′, or equivalently

T′ = B−1 T B.    (4.5.6)
One deduces in particular that the matrices T and T′ are similar, see Definition 2.1.16.
Note that a similar (but slightly more complicated) computation can be realized
for a linear map between two vector spaces V and W over the same field F endowed
with two different bases V, V ′ and W, W ′ .
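The change-of-basis formula (4.5.6) can be verified numerically: with the columns of B given by the new basis vectors, T′ = B−1 T B represents the same map in the new coordinates. A sketch with arbitrary sample matrices, not taken from the text:

```python
# Numerical check of the change-of-basis formula (4.5.6): T' = B^{-1} T B.
# The columns of B express the new basis vectors in the old basis.
# All matrices are arbitrary samples, not taken from the text.
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])
B = np.array([[1., 1.],
              [0., 1.]])       # new basis: V1' = (1, 0), V2' = (1, 1)
Tprime = np.linalg.inv(B) @ T @ B

Xprime = np.array([1., 2.])    # coordinates in the new basis
X = B @ Xprime                 # the corresponding old coordinates, X = B X'
lhs = np.linalg.inv(B) @ (T @ X)   # (T(X))_{V'} computed through the old basis
rhs = Tprime @ Xprime              # (T(X))_{V'} computed directly
consistent = np.allclose(lhs, rhs)

# Similar matrices share their trace (cf. Definition 2.1.16).
same_trace = np.isclose(np.trace(Tprime), np.trace(T))
```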
Let us now observe an important property of the composition of maps, namely its associativity. Indeed, if U, V, W, S are sets and F : U → V, G : V → W and H : W → S are maps, one has
(H ◦ G) ◦ F = H ◦ (G ◦ F).
Indeed, for any X ∈ U one has
[(H ◦ G) ◦ F](X) = (H ◦ G)(F(X)) = H(G(F(X)))

and

[H ◦ (G ◦ F)](X) = H((G ◦ F)(X)) = H(G(F(X))),
and the equality of the two right hand sides implies the statement.
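The associativity can be illustrated with maps of the form LA, for which composition corresponds to the matrix product. A numerical sketch with random sample matrices:

```python
# Associativity of composition, illustrated with three maps of the form L_A
# (for such maps, composition corresponds to the matrix product).
# Random sample matrices, not taken from the text.
import numpy as np

rng = np.random.default_rng(2)
F_mat = rng.standard_normal((3, 3))
G_mat = rng.standard_normal((3, 3))
H_mat = rng.standard_normal((3, 3))
X = rng.standard_normal(3)

left = H_mat @ (G_mat @ (F_mat @ X))     # ((H o G) o F)(X), step by step
right = (H_mat @ G_mat @ F_mat) @ X      # the composed map applied at once
assoc_holds = np.allclose(left, right)
```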
Lemma 4.6.2. Let U, V, W be vector spaces over a field F, and let G : U → V ,
G′ : U → V , H : V → W and H′ : V → W be linear maps. Then
(i) H ◦ G : U → W is a linear map,
(ii) (H + H′ ) ◦ G = H ◦ G + H′ ◦ G,
(iii) H ◦ (G + G′ ) = H ◦ G + H ◦ G′ ,
Due to the following lemma, there is no ambiguity in speaking about the inverse (and not only about an inverse) of an invertible map.
Lemma 4.7.3. Let F : V → W be an invertible map between two sets V and W. Then its inverse is unique.
Proof. Let G and G′ be two inverses for F, i.e. G, G′ : W → V with G ◦ F = G′ ◦ F = 1V and F ◦ G = F ◦ G′ = 1W. Then

G = 1V ◦ G = (G′ ◦ F) ◦ G = G′ ◦ (F ◦ G) = G′ ◦ 1W = G′.
Let us now come to two more refined notions related to maps, linear or not.
Proof. (i) Assume first that F is bijective. In particular, since F is surjective, for any
Y ∈ W , there exists X ∈ V such that F(X) = Y . Note that X is unique because F is
also injective. Thus, if one sets F−1(Y) := X, then one has

(F−1 ◦ F)(X) = F−1(F(X)) = F−1(Y) = X

and

(F ◦ F−1)(Y) = F(F−1(Y)) = F(X) = Y,

which implies that F−1 ◦ F = 1V and F ◦ F−1 = 1W. One has thus defined an inverse for F.
(ii) Let us now assume that F is invertible, with inverse denoted by F−1 . Let first
X1 , X2 ∈ V with F(X1 ) = F(X2 ). One then deduces that
X1 = 1V(X1) = (F−1 ◦ F)(X1) = F−1(F(X1)) = F−1(F(X2)) = (F−1 ◦ F)(X2) = X2,

which implies that F is injective. In addition, any Y ∈ W satisfies Y = F(X) for X := F−1(Y), and thus F is surjective. Since F is both injective and surjective, F is bijective.
For linear maps the general theory simplifies a lot, as we shall see now.
Theorem 4.7.6. Let V, W be two vector spaces over the same field F, and let T : V →
W be an invertible linear map. Then its inverse T−1 : W → V is also a linear map.
Proof. Let Y1, Y2 ∈ W and set X1 := T−1(Y1) and X2 := T−1(Y2). Since T ◦ T−1 = 1W
one has for j ∈ {1, 2}
Yj = (T ◦ T−1)(Yj) = T(T−1(Yj)) = T(Xj).
It is then sufficient to observe that (4.7.1) and (4.7.2) correspond to the linearity con-
ditions for T−1 .
In the next statement we give an equivalent property for the injectivity of a linear
map.
Lemma 4.7.7. A linear map T : V → W between two vector spaces over the same field
is injective if and only if Ker(T) = {0}.
Proof. (i) The first part of the proof is a contraposition argument: instead of proving
A ⇒ B we show equivalently that B̄ ⇒ Ā. Thus, let us assume first that Ker(T) ̸= {0},
then there exists X0 ≠ 0 such that T(X0) = 0. In addition, for any X ∈ V one has

T(X + X0) = T(X) + T(X0) = T(X)

with X + X0 ≠ X, and thus T is not injective.

(ii) Conversely, assume that Ker(T) = {0} and let X1, X2 ∈ V with T(X1) = T(X2). Then T(X1 − X2) = 0, i.e. X1 − X2 ∈ Ker(T) = {0}, which means that X1 = X2, and thus T is injective.

Theorem 4.7.8. Let T : V → W be a linear map between two vector spaces over the same field F with dim(V) = dim(W) < ∞. Then the following conditions are equivalent:

(i) Ker(T) = {0},

(ii) T is invertible,
(iii) T is surjective.
Proof. The implications (ii) ⇒ (i) and (ii) ⇒ (iii) are direct consequences of Theorem 4.7.5 and Lemma 4.7.7.
Assume now (i), and recall from Lemma 4.7.7 that this condition corresponds to T
injective. Then from Theorem 4.3.5 and more precisely from the equality
dim(Ker(T)) + dim(Ran(T)) = dim(V), with dim(Ker(T)) = 0,
one deduces that dim(Ran(T)) = dim(V) = dim(W), where the assumption about the dimension has been taken into account. It is then enough to observe that dim(Ran(T)) = dim(W) means that T is surjective. Since T is also injective, it follows that T is bijective. Since bijectivity corresponds to invertibility by Theorem 4.7.5, one infers that (ii) holds.
Assume now that (iii) holds. By taking again Theorem 4.3.5 into account, one deduces from the equality

dim(Ran(T)) = dim(W) = dim(V)

that dim(Ker(T)) = 0, meaning that T is injective. Again, it implies that T is bijective,
and thus invertible, and thus that (ii) holds.
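For a map LA with A a square matrix, the three equivalent conditions can all be read off the rank of A. A numerical sketch, with arbitrary sample matrices not taken from the text:

```python
# For a square matrix A, the three conditions are all equivalent to A having
# full rank, so they can be tested together.  Sample matrices, not from the text.
import numpy as np

def conditions_agree(A):
    n = A.shape[0]
    rank = np.linalg.matrix_rank(A)
    trivial_kernel = rank == n               # dim Ker(L_A) = n - rank = 0
    surjective = rank == n                   # dim Ran(L_A) = rank = n
    invertible = abs(np.linalg.det(A)) > 1e-12
    return trivial_kernel == surjective == invertible

agree_invertible = conditions_agree(np.array([[1., 2.], [3., 4.]]))  # det = -2
agree_singular = conditions_agree(np.array([[1., 2.], [2., 4.]]))    # rank 1
```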
4.8 Exercises
Exercise 4.1. Let F : R2 → R2 be the map defined by F(t(x, y)) = t(2x, 3y) for any t(x, y) ∈ R2. Describe the image by F of the points lying on the unit circle centered at 0, i.e. {t(x, y) ∈ R2 | x2 + y2 = 1}.
Exercise 4.2. Let F : R2 → R2 be the map defined by F(t(x, y)) = t(xy, y) for any t(x, y) ∈ R2.
Exercise 4.3. Let V be a vector space of dimension n, and let {X1 , . . . , Xn } be a basis
for V . Let F be a linear map from V into itself. Show that F is uniquely defined if one
knows F(Xj ) for j ∈ {1, . . . , n}. Is it also true if F is an arbitrary map from V into
itself ?
Exercise 4.4. Let V, W be vector spaces over the same field, and let T : V → W be a
linear map. Show that the following set is a subspace of V :
{X ∈ V | T(X) = 0}.
e) F : R2 → R2 defined by F ( xy ) = ( xy ),
f ) F : R2 → R defined by F ( xy ) = xy.
Exercise 4.7. Determine the kernel and the range of the maps defined in the previous
exercise.
Exercise 4.8. Consider the subset of Rn consisting of all vectors t (x1 , . . . , xn ) such that
x1 + x2 + · · · + xn = 0. Is it a subspace of Rn ? If so, what is its dimension ?
Exercise 4.9. Let P : Mn (R) → Mn (R) be the map defined for any A ∈ Mn (R) by
P(A) = 1/2 (A + tA).
1. Show that P is a linear map.
2. Show that the kernel of P is the vector space of all skew-symmetric matrices.

3. Show that the range of P is the vector space of all symmetric matrices.
4. What is the dimension of the vector space of all symmetric matrices, and the
dimension of the vector space of all skew-symmetric matrices ?
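The following numerical exploration of the map P(A) = 1/2 (A + tA) may help build intuition for this exercise; it only tests the claims on a random sample matrix and is of course not a substitute for the proofs:

```python
# Numerical exploration of P(A) = (A + tA)/2: outputs are symmetric,
# symmetric matrices are fixed, skew-symmetric matrices are mapped to 0.
# The sample matrix is random; this only tests, not proves, the claims.
import numpy as np

def P(A):
    return (A + A.T) / 2

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

output_symmetric = np.allclose(P(A), P(A).T)   # range inside symmetric matrices
S = P(A)
fixed_on_symmetric = np.allclose(P(S), S)      # symmetric matrices are attained
K = (A - A.T) / 2                              # a skew-symmetric matrix
kills_skew = np.allclose(P(K), 0)              # skew part lies in the kernel
```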
Exercise 4.10. Let C ∞ (R) be the vector space of all real functions on R which admit
derivatives of all orders. Let D : C ∞ (R) → C ∞ (R) be the map which associates to any
f ∈ C ∞ (R) its derivative, i.e. Df = f ′ .
1. Is D a linear map ?
3. What is the kernel of Dn , for any n ∈ N, and what is the dimension of this vector
space ?
Exercise 4.12. What is the dimension of the space of solutions of the following systems
of linear equations ? In each case, find a basis for the space of solutions.
a) 2x + y − z = 0
   2x + y + z = 0

b) x − y + z = 0
   2x − y + z = 0

c) 4x + 7y − πz = 0

and

d) x + y + z = 0
   x − y = 0
   y + z = 0
Exercise 4.13. Let A be the matrix given by

A = ( 0 1  3 −2
      2 1 −4  3
      2 3  2 −1 )

and consider the linear map LA : R4 → R3 defined by LA X = AX for all X ∈ R4.
2. Deduce the dimension of the kernel of LA , and exhibit a basis for the kernel of LA .
3. Find the set of all solutions of the equation AX = t(0, 2, 2).
Exercise 4.14. Let F : R3 → R2 be the map indicated below. What is the matrix
associated with F in the canonical bases of R3 and R2 ?
a) F(E1) = t(1, −3), F(E2) = t(−4, 2), F(E3) = t(3, 1)
and
b) F(t(x1, x2, x3)) = t(3x1 − 2x2 + x3, 4x1 − x2 + 5x3).
Exercise 4.15. Let L : R3 → R3 be a linear map whose associated matrix has the form

( 1 0 0
  0 2 0
  0 0 3 )

with respect to the canonical basis of R3. What is the matrix associated with L in the basis generated by the three vectors V1 = t(1/√2, 1/√2, 0), V2 = t(−1/√2, 1/√2, 0), V3 = t(0, 0, −1)?
Exercise 4.16. For any A, B ∈ Mn (R), one says that A and B commute if AB = BA.
a) Show that the set of all matrices which commute with A is a subspace of Mn (R),
Exercise 4.18. Let V be a real vector space, and let P : V → V be a linear map
satisfying P2 = P. Such a linear map is called a projection.
Exercise 4.20. Let F, G be invertible linear maps from a vector space into itself. Show
that (G ◦ F)−1 = F−1 ◦ G−1 .
Exercise 4.21. Show that the matrix B : Rn → Rn defining a change of basis in Rn is
always invertible.
Exercise 4.22. Let V be the set of all infinite sequences of real numbers (x1 , x2 , x3 , . . . ).
We endow V with the pointwise addition and multiplication, i.e.

(x1, x2, x3, . . . ) + (y1, y2, y3, . . . ) = (x1 + y1, x2 + y2, x3 + y3, . . . )

and λ(x1, x2, x3, . . . ) = (λx1, λx2, λx3, . . . ), which make V an infinite dimensional vector space.
Define the map F : V → V , called shift operator, by
F(x1 , x2 , x3 , . . . ) = (0, x1 , x2 , x3 , . . . ).
(iii) Is F surjective ?
(vi) What is different from the finite dimensional case, i.e. when V is of finite dimen-
sion ?
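For intuition on the last question, one can compare with the truncated shift on Rn, which necessarily drops the last entry xn. Unlike the shift on infinite sequences (injective but not surjective), the truncated map below is neither. A sketch with n = 4:

```python
# Finite dimensional contrast: the truncated shift on R^n, which drops x_n,
# is neither injective nor surjective (rank n - 1), whereas the shift on
# infinite sequences is injective but not surjective.  Sketch with n = 4.
import numpy as np

n = 4
Shift = np.zeros((n, n))
for i in range(1, n):
    Shift[i, i - 1] = 1.0       # (x1, ..., xn) -> (0, x1, ..., x_{n-1})

rank = np.linalg.matrix_rank(Shift)   # = n - 1
injective = rank == n                 # False: e_n is sent to 0
surjective = rank == n                # False: e_1 is not attained
```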
Exercise 4.23. Consider the matrices
A = ( 2 0        B = ( 1 0        C = ( −1 0        D = (  0 1
      0 2 ),           0 0 ),            0 1 ),           −1 0 ),

E = ( 1 0.2      F = ( 1 −1
      0 1 ),           1  1 ),
and show their effect on the letter L defined by the three points t(0, 2), t(0, 0), t(1, 0) of R2.
Exercise 4.24. Let N = t(n1, n2) be a vector in R2 with ∥N∥ = 1, and let ℓ be the line in R2 passing through 0 ∈ R2 and parallel to N. Then any vector X ∈ R2 can be written
uniquely as X = X∥ + X⊥ , where X∥ is a vector parallel to ℓ and X⊥ is a vector
perpendicular to ℓ. Show that there exists a projection P ∈ M2 (R) such that X∥ = P X,
and express P in terms of n1 and n2 .
Exercise 4.25. 1) Do the same exercise in R3 with N given by t(n1, n2, n3).
2) Show that there also exists a projection Q such that X⊥ = Q X. If H0,N is the
plane passing through 0 ∈ R3 and perpendicular to N , show that X⊥ ∈ H0,N .
Exercise 4.26. In the framework of the previous exercise, a reflection of X about H0,N
is defined by the vector Xref := X⊥ − X∥ . Show that ∥Xref ∥ = ∥X∥, and provide the
expression for the linear map transforming X into Xref .
Exercise 4.27. Block matrices are matrices which are partitioned into rectangular sub-
matrices called blocks. For example, let A ∈ Mn+m (R) be the block matrix
A = ( A11 A12
      A21 A22 )
with A11 ∈ Mn(R), A22 ∈ Mm(R), A12 ∈ Mn×m(R), and A21 ∈ Mm×n(R). Such matrices can be multiplied as if the blocks were scalars (with the usual multiplication
of matrices), as long as the products are well defined. For example, check this statement by computing the product AB in two different ways with the following matrices: A = ( A11 A12 ) with

A11 = ( 1  0
        0 −1 )  and  A12 = ( 1
                             1 ),

and B = ( B11 B12
          B21 B22 ) with

B11 = ( 1 2
        4 5 ),  B12 = ( 3
                        6 ),  B21 = ( 7 8 ),  and  B22 = ( 9 ).