Note 3: Linear Algebra

Chapter: Vector spaces

Definition
Let V be an arbitrary nonempty set of objects for which two operations
are defined: addition and multiplication by numbers called scalars.
By addition we mean a rule for associating with each pair of objects u and
v in V an object u + v, called the sum of u and v; by scalar multiplication
we mean a rule for associating with each scalar k and each object u in V
an object ku, called the scalar multiple of u by k. If the following axioms
are satisfied by all objects u, v, w in V and all scalars k and m, then we
call V a vector space and we call the objects in V vectors.
A1. If u and v are objects in V , then u + v is in V .
A2. u + v = v + u.
A3. u + (v + w) = (u + v) + w.
A4. There exists an object in V , called the zero vector, that is denoted by
0 and has the property that 0 + u = u + 0 = u for all u in V .
A5. For each u in V , there is an object −u in V , called a negative of u,
such that u + (−u) = (−u) + u = 0.
A6. If k is any scalar and u is any object in V , then ku is in V .
A7. k(u + v) = ku + kv.
A8. (k + m)u = ku + mu.
A9. k(mu) = (km)u.
A10. 1u = u.

Remark
1. A real vector space means a vector space over R and a complex vector
space means a vector space over C.
2. Every vector space is an abelian group under addition (by A1–A5).

In what follows, we denote a vector space over a field F by (V, +, ·) or (V, +, ·F ), and a vector space means a vector space over a field F .

Example
1. F n is a vector space over F under the usual addition and scalar mul-
tiplication of n-tuples:
for any u = (u1, u2, . . . , un), v = (v1, v2, . . . , vn) ∈ F n and k ∈ F ,
u + v := (u1 + v1, u2 + v2, . . . , un + vn) and ku := (ku1, ku2, . . . , kun).
2. The set Matm×n(F ) of all m × n matrices with entries in a field F is
a vector space over F , under the operation of matrix addition and scalar
multiplication.
3. F (−∞, ∞) = {f | f : R → R} is a vector space over R under the
following addition and scalar multiplication:
(f + g)(x) := f (x) + g(x) and (kf )(x) := kf (x).
4. Let V consist of a single object, which we denote by 0. Define 0 + 0 = 0
and k0 = 0 for any scalar k. Then V is a vector space, called the zero
vector space.
Theorem
Let (V, +, ·) be a vector space over a field F , and let u ∈ V and k ∈ F .
Then the following hold:
1. 0u = 0.
2. k0 = 0.
3. (−1)u = −u.
4. If ku = 0, then k = 0 or u = 0.

Proof. 1. Note that 0u + 0u = (0 + 0)u = 0u by A8. By A5, 0u has a


negative, −0u. Adding −0u to both sides above yields (0u+0u)+(−0u) =
0u + (−0u).
Then 0u + (0u + (−0u)) = 0u + (−0u) by A3 and so 0u + 0 = 0 by A5.
Hence 0u = 0 by A4.
2, 3, 4: Exercise. □

Definition
A subset W of a vector space V is called a subspace of V if W is itself a
vector space under the addition and scalar multiplication defined on V .

Theorem
Let V be a vector space and let ∅ ≠ W ⊆ V .
Then W is a subspace of V iff the following conditions are satisfied:
(1) u, v ∈ W ⇒ u + v ∈ W .
(2) k ∈ F , u ∈ W ⇒ ku ∈ W .
Proof. (⇒) Clear. ⌈∵ (1) = A1 and (2) = A6. ⌋.
(⇐) Note that (1) = A1 and (2) = A6. Since V is a vector space, A2, A3, A7, A8, A9, A10 hold.
Take any u ∈ W . Then by the assumption and the theorem above, we see that 0 = 0u ∈ W and −u = (−1)u ∈ W . Thus A4 and A5 hold. □

Remark
Let V be a vector space and W ⊆ V . Then W is a subspace of V iff the
following conditions are satisfied:
(1) 0 ∈ W .
(2) u, v ∈ W ⇒ u + v ∈ W .
(3) k ∈ F , u ∈ W ⇒ ku ∈ W .

Example
1. P = {p(x) | p(x) is a polynomial with coefficients in R} is a subspace of F (−∞, ∞).
2. The solution set W of a homogeneous linear system Ax = 0 of m
equations in n unknowns is a subspace of Rn.
⌈∵ Clearly W ̸= ∅, since x = 0 is a solution of Ax = 0. Take any
x1, x2 ∈ W and any k ∈ R. Then Ax1 = 0 and Ax2 = 0. Note that
A(x1 + x2) = Ax1 + Ax2 = 0 and A(kx1) = kAx1 = k0 = 0. Thus
x1 + x2 ∈ W and kx1 ∈ W . Hence W is a subspace of Rn. ⌋
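This closure argument is easy to check computationally. Below is a minimal sketch with SymPy; the matrix A is an arbitrary illustrative choice, not one from the text.

from sympy import Matrix, Rational

# An arbitrary 2x4 coefficient matrix (illustrative choice, not from the text).
A = Matrix([[1, 2, -1, 0],
            [0, 1,  3, 1]])

x1, x2 = A.nullspace()  # a basis of the solution space W of Ax = 0

# W is closed under addition and scalar multiplication:
print(A * (x1 + x2))              # the zero vector, so x1 + x2 is in W
print(A * (Rational(3, 2) * x1))  # the zero vector, so k*x1 is in W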

Definition
Let V be a vector space and let S be a nonempty subset of V .
A linear combination of vectors in S is an expression of the form k1v1 +
k2v2 + · · · + kr vr , where vi ∈ S and ki ∈ F for all 1 ≤ i ≤ r.

Theorem
Let S be a nonempty subset of a vector space V . Then the set W of all
linear combinations of vectors from S is a subspace of V .
Proof. Clearly W ≠ ∅. Suppose that S is a finite set {v1, v2, . . . , vr }.
Then (k1v1 + · · · + kr vr ) + (l1v1 + · · · + lr vr ) = (k1 + l1)v1 + · · · + (kr + lr )vr and c(k1v1 + · · · + kr vr ) = (ck1)v1 + · · · + (ckr )vr , where ki, li ∈ F for all 1 ≤ i ≤ r and c ∈ F . Thus W is a subspace of V . Suppose that S is an infinite set. Let v and w be vectors which are expressed as linear combinations of vectors in S. Then there exist finite subsets S1 and S2 of S such that v is expressed as a linear combination from S1 and w is expressed as a linear combination from S2. Thus v and w are both expressed as linear combinations from the finite set S1 ∪ S2. Hence we see that W is a subspace of V by the previous argument. □
Definition
Let S be a nonempty subset of a vector space V .
1. The subspace spanned (or generated) by S is the set of all linear combinations of vectors from S. We denote this by ⟨S⟩ = span(S) = {k1v1 + k2v2 + · · · + knvn | ki ∈ F, vi ∈ S}. When S = {v1, v2, . . . , vr } is a finite set, we use the notation ⟨v1, v2, . . . , vr ⟩ or span{v1, v2, . . . , vr }.
2. If W = span(S), then S is said to span (or generate) W , or the vectors in S are said to span (or generate) W .

Remark
1. It is clear that any superset of a spanning set is also a spanning set.
Note also that all vector spaces have spanning sets, since the entire space
is a spanning set.
2. Let S be a nonempty subset of a vector space V . Then the following hold:
(1) ⟨S⟩ = ∩{W | W is a subspace of V with S ⊆ W }.
(2) ⟨S⟩ is the smallest subspace of V that contains S, in the sense that every other subspace of V that contains S must contain ⟨S⟩.

Example
1. Let e1 = (1, 0) and e2 = (0, 1). Then span{e1, e2} = R2.
2. Let Pn = {a0 + a1x + · · · + anxn | ai ∈ R, 0 ≤ i ≤ n}.
Then span{1, x, x2, . . . , xn} = Pn.
3. Determine whether v1 = (1, 1, 2), v2 = (1, 0, 1), and v3 = (2, 1, 3) span
R3 .
⌈∵ Take any w = (a, b, c) ∈ R3. Consider k1v1 + k2v2 + k3v3 = (a, b, c).
Then k1(1,1, 2) + k2(1,
 0,1) +k3(2,
 1, 3) = (a, b,
 c). 
1 1 2 k1 a 1 1 2
Note that 1 0 1 k2 =  b  . Since det(1 0 1) = 0, we see
2 1 3 k3 c 2 1 3
that v1, v2 and v3 do not span R3. ⌋
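The determinant test above is easy to reproduce numerically. A sketch with NumPy; the columns of the array below are v1, v2, v3.

import numpy as np

# Columns are v1 = (1, 1, 2), v2 = (1, 0, 1), v3 = (2, 1, 3).
A = np.array([[1., 1., 2.],
              [1., 0., 1.],
              [2., 1., 3.]])

print(np.linalg.det(A))  # 0.0 up to floating-point rounding, so no spanning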

Definition
Let V be a vector space.
1. Let S = {v1, v2, . . . , vr } be a nonempty finite set of vectors in V .
If k1v1 + k2v2 + · · · + kr vr = 0 ⇒ ki = 0 for all 1 ≤ i ≤ r, then S is
called a linearly independent set.
(or the vectors v1, v2, . . . , vr are said to be linearly independent).
2. A nonempty set S of vectors in V is linearly independent if for any
v1, . . . , vr in S, k1v1 + k2v2 + · · · + kr vr = 0 ⇒ ki = 0 for all 1 ≤ i ≤ r,
i.e., every finite subset {v1, v2, . . . , vr } of S is linearly independent.
3. If a nonempty set S of vectors in V is not linearly independent, it is
called linearly dependent.

Remark
It is clear that any nonempty subset of a linearly independent set is
linearly independent. Note also that a set of vectors that contains 0 is
linearly dependent.

Example
Consider R2. Let v1 = (1, 0), v2 = (0, 1), v3 = (2, 3). Then {v1, v2} is
linearly independent. But {v1, v2, v3} is linearly dependent, since 2v1 +
3v2 − 1v3 = 0.
Theorem
Let V be a vector space and let S be a set with two or more vectors in V .
Then S is linearly dependent iff at least one of vectors in S is expressible
as a linear combination of the other vectors in S.
Proof. (⇒) Suppose S is linearly dependent. Then ∃ v1, v2, . . . , vr ∈ S such that {v1, . . . , vr } is linearly dependent. Thus ∃ k1, k2, . . . , kr ∈ F , not all zero, such that k1v1 + k2v2 + · · · + kr vr = 0. We may assume that k1 ≠ 0. Then v1 = (−k2/k1)v2 + (−k3/k1)v3 + · · · + (−kr /k1)vr .
(⇐) By the assumption, we may assume that ∃ v1, v2, . . . , vr ∈ S such that v1 = c2v2 + c3v3 + · · · + cr vr , where ci ∈ F for all 2 ≤ i ≤ r. Then v1 − c2v2 − c3v3 − · · · − cr vr = 0. Thus the equation k1v1 + k2v2 + · · · + kr vr = 0 has a nontrivial solution. Hence {v1, . . . , vr } is linearly dependent and therefore S is linearly dependent. □
Example
Consider R3. Let v1 = (1, −2, 3), v2 = (5, 6, −1), v3 = (3, 2, 1). Then {v1, v2, v3} is linearly dependent, since −v1 − v2 + 2v3 = 0.
cf. Consider k1v1 + k2v2 + k3v3 = 0. Then
k1 + 5k2 + 3k3 = 0
−2k1 + 6k2 + 2k3 = 0
3k1 − k2 + k3 = 0
        [ 1  5 3]
Put A = [−2  6 2]. Since det(A) = 0, the system Ax = 0 has a nontrivial
        [ 3 −1 1]
solution, where x = (k1, k2, k3)T. In fact, k1 = −(1/2)t, k2 = −(1/2)t, k3 = t, where t ∈ R.
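The nontrivial solutions can also be computed directly. A sketch with SymPy:

from sympy import Matrix

# Columns are v1, v2, v3 from the example above.
A = Matrix([[ 1, 5, 3],
            [-2, 6, 2],
            [ 3, -1, 1]])

print(A.det())        # 0, so Ax = 0 has a nontrivial solution
print(A.nullspace())  # [Matrix([[-1/2], [-1/2], [1]])]: k1 = k2 = -t/2, k3 = t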

Theorem
Let S be a nonempty set of vectors in a vector space V . Then the following
hold:
(1) If S is linearly independent and if v is a vector in V that is outside of span(S), then S ∪ {v} is still linearly independent.
(2) If v is a vector in S that is expressible as a linear combination of other
vectors in S, then span(S) = span(S − {v}).
Proof. See the text. □

Definition
A vector space V is said to be finite dimensional if there is a finite set
of vectors in V that spans V and is said to be infinite dimensional if no
such set exists.

Definition
Let V be a vector space. Let B be a set of vectors in V . Then B is called
a basis of V if B spans V and B is linearly independent.

Remark
Note that the zero vector space is finite dimensional, since it is spanned
by the zero vector 0. Note also that {0} is not linearly independent and
so it has no basis. However we define the empty set as the basis of the
zero vector space.

Theorem
Let S be a finite set of vectors in a finite dimensional vector space V . Then
the following hold:
(1) If S spans V but is not a basis for V , then S can be reduced to a
basis for V by removing appropriate vectors from S.
(2) If S is a linearly independent set that is not a basis for V , then S can
be enlarged to a basis for V by inserting appropriate vectors into S.
Proof. See the text. □

Theorem
Every vector space has a basis.

Remark
It is known that the existence of a basis for an arbitrary vector space
is equivalent to Zorn’s lemma. For finite dimensional vector spaces, the theorem above can be proved without using Zorn’s lemma.

Example

Consider R3. Let v1 = (1, 2, 1), v2 = (2, 9, 0), v3 = (3, 3, 4). Then B =
{v1, v2, v3} is a basis for R3.
⌈∵ Claim 1: span(B) = R3.
Take any w = (x, y, z) ∈ R3. Consider k1v1 + k2v2 + k3v3 = (x, y, z).
Then k1(1, 2, 1) + k2(2, 9, 0) + k3(3, 3, 4) = (x, y, z), i.e.,
k1 + 2k2 + 3k3 = x
2k1 + 9k2 + 3k3 = y
k1 + 4k3 = z
that is,
[1 2 3] [k1]   [x]
[2 9 3] [k2] = [y]
[1 0 4] [k3]   [z]
        [1 2 3]
Put A = [2 9 3]. Since det(A) ≠ 0, the system is consistent for every
        [1 0 4]
(x, y, z), and so span(B) = R3.
Claim 2: B is linearly independent.
Consider k1v1 + k2v2 + k3v3 = 0. By the same argument as in Claim 1, we have A(k1, k2, k3)T = 0. Since det(A) ≠ 0, the system has only the trivial solution, and so B is linearly independent.
Hence we conclude that B is a basis for R3. ⌋

Theorem
Let V be a finite dimensional vector space and let B = {v1, v2, . . . , vn} ⊆
V . Then the following are equivalent:
(1) B is a basis for V .
(2) Every v ∈ V is uniquely expressed by v = a1v1 + a2v2 + · · · + anvn,
where ai ∈ F for all 1 ≤ i ≤ n.
Proof. (1) ⇒ (2) Take any v ∈ V . Clearly v is expressible as a linear
combination of the vectors in B. Suppose that v = a1v1 + a2v2 + · · · +
anvn = b1v1 + b2v2 + · · · + bnvn, where ai, bi ∈ F for all 1 ≤ i ≤ n. Then
(a1 − b1)v1 + (a2 − b2)v2 + · · · + (an − bn)vn = 0. Since B is linearly
independent, ai − bi = 0, i.e., ai = bi for all 1 ≤ i ≤ n.
(2) ⇒ (1) By the assumption, it is clear that B spans V . Suppose that
k1v1 + k2v2 + · · · + knvn = 0. Note that k1v1 + k2v2 + · · · + knvn = 0 =
0v1 + 0v2 + · · · + 0vn. By the uniqueness, ki = 0 for all 1 ≤ i ≤ n. □

Remark
Let B be a nonempty set of vectors in a vector space V . Then B is a basis
for V iff every nonzero vector v ∈ V is uniquely expressed as a linear
combination of vectors in B.

By an ordered basis for a finite dimensional vector space, we mean a basis in which we keep track of the order in which the basis vectors are listed. For example, if {v1, v2, . . . , vn} is a basis, then {v2, v1, . . . , vn} is also a basis, but it is a different ordered basis. From now on, we assume that every basis is an ordered basis.

Definition
Let B = {v1, v2, . . . , vn} be an (ordered) basis for a finite dimensional vector space V . By the theorem above, it follows that for any v ∈ V , there exist unique c1, c2, . . . , cn such that v = c1v1 + c2v2 + · · · + cnvn. The vector (c1, c2, . . . , cn) is called the coordinate vector of v relative to B. We denote it by (v)B = (c1, c2, . . . , cn). We also use the notation
       [c1]
[v]B = [c2]
       [ ⋮]
       [cn]
which is called the coordinate matrix or the matrix form of the coordinate vector.

Remark
Note that coordinate vectors depend not only on the basis but also on
the order in which the basis vectors are written. A change in the order of
the basis vectors results in a corresponding change of order for the entries
in the coordinate vectors.
Example
1. Consider R2. Let v = (2, 3) and let B = {e1 = (1, 0), e2 = (0, 1)}.
Then (v)B = (2, 3). Let B ′ = {(2, 0), (0, 3)}. Then (v)B′ = (1, 1).
2. Consider R3. Let v1 = (1, 2, 1), v2 = (2, 9, 0), and v3 = (3, 3, 4). It can be seen that B = {v1, v2, v3} is a basis for R3. Let v = (5, −1, 9). Then (v)B = (1, −1, 2).
⌈∵ v = c1v1 + c2v2 + c3v3 ⇒ c1 = 1, c2 = −1, c3 = 2. ⌋
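Computing a coordinate vector amounts to solving the linear system Bc = v whose coefficient columns are the basis vectors. A sketch with NumPy:

import numpy as np

# Columns are the basis vectors v1, v2, v3 of example 2.
B = np.array([[1., 2., 3.],
              [2., 9., 3.],
              [1., 0., 4.]])
v = np.array([5., -1., 9.])

print(np.linalg.solve(B, v))  # [ 1. -1.  2.], i.e. (v)B = (1, -1, 2)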

Theorem
Let V be a finite dimensional vector space and let {v1, v2, . . . , vn} be any
basis for V . Then the following hold:
(1) If a set in V has more than n vectors, then it is linearly dependent.
(2) If a set in V has fewer than n vectors, then it does not span V .

Proof. (1) Let S = {w1, w2, . . . , wm} be any set of m vectors in V , where m > n. Since {v1, v2, . . . , vn} is a basis, each wi can be expressed as a linear combination of v1, v2, . . . , vn, say
w1 = a11v1 + a21v2 + · · · + an1vn
w2 = a12v1 + a22v2 + · · · + an2vn
⋮
wm = a1mv1 + a2mv2 + · · · + anmvn

Consider k1w1 + k2w2 + · · · + kmwm = 0. Then (k1a11 + k2a12 + · · · + kma1m)v1 + (k1a21 + k2a22 + · · · + kma2m)v2 + · · · + (k1an1 + k2an2 + · · · + kmanm)vn = 0. Since {v1, v2, . . . , vn} is linearly independent, we have
a11k1 + a12k2 + · · · + a1mkm = 0
a21k1 + a22k2 + · · · + a2mkm = 0
⋮
an1k1 + an2k2 + · · · + anmkm = 0

Note that a homogeneous system of linear equations with more unknowns than equations has infinitely many solutions. Since m > n, we see that there are scalars k1, k2, . . . , km, not all zero, satisfying the system above. Hence S is linearly dependent.
(2) Exercise. □

Corollary
All bases for a finite dimensional vector space have the same number of
vectors.
Proof. Let V be a finite dimensional vector space. We may assume that
V ̸= {0}. Let B = {v1, v2, . . . , vn} and B ′ = {w1, w2, . . . , wm} be two
bases for V . Since B is a basis and B ′ is linearly independent, we have
m ≤ n by the contrapositive of (1) of the theorem above. By the same argument, we see that n ≤ m. Hence m = n. □

Definition
The dimension of a finite dimensional vector space over F , denoted by
dimF (V ) (or dim(V )), is defined to be the number of vectors in a basis for
V . In addition, the zero vector space is defined to have dimension zero.
Example
1. dimR(Rn) = n, dimR(C) = 2, dimC(C) = 1,
dimQ(Q(√2)) = 2, where Q(√2) = {a + b√2 | a, b ∈ Q}.
2. Let Pn = {a0 + a1x + · · · + anxn | ai ∈ R, 0 ≤ i ≤ n}. Then
dimR(Pn) = n + 1.
⌈∵ Clearly {1, x, . . . , xn} spans Pn. Let p0 = 1, p1 = x, . . . , pn = xn.
Assume that a0p0 + a1p1 + · · · + anpn = 0, i.e., a0 + a1x + · · · + anxn = 0
for all x ∈ R. Recall from algebra that a nonzero polynomial of degree n
has at most n distinct roots. Thus we see that a0 = a1 = · · · = an = 0.
Hence {1, x, . . . , xn} is linearly independent and so it is a basis for Pn. ⌋

3. Consider the homogeneous system
2x1 + 2x2 − x3 + x5 = 0
−x1 − x2 + 2x3 − 3x4 + x5 = 0
x1 + x2 − 2x3 − x5 = 0
x3 + x4 + x5 = 0
By elementary row operations, we see that
[ 2  2 −1  0  1 | 0]    [1 1 0 0  1 | 0]
[−1 −1  2 −3  1 | 0] ⇝ [0 0 1 0  1 | 0]
[ 1  1 −2  0 −1 | 0]    [0 0 0 1  0 | 0]
[ 0  0  1  1  1 | 0]    [0 0 0 0  0 | 0]
Thus the solution space of the above linear system is given by
x1 = −s − t, x2 = s, x3 = −t, x4 = 0, x5 = t, where s, t ∈ R, or
[x1]   [−s − t]    [−1]    [−1]
[x2]   [  s   ]    [ 1]    [ 0]
[x3] = [ −t   ] = s[ 0] + t[−1]
[x4]   [  0   ]    [ 0]    [ 0]
[x5]   [  t   ]    [ 0]    [ 1]
Hence B = {(−1, 1, 0, 0, 0), (−1, 0, −1, 0, 1)} is a basis for the solution space of the above linear system, and so the dimension of the solution space is 2.
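The row reduction, the basis, and the dimension in example 3 can be reproduced in a few lines. A sketch with SymPy:

from sympy import Matrix

A = Matrix([[ 2,  2, -1,  0,  1],
            [-1, -1,  2, -3,  1],
            [ 1,  1, -2,  0, -1],
            [ 0,  0,  1,  1,  1]])

basis = A.nullspace()  # a basis of the solution space of Ax = 0
for v in basis:
    print(v.T)         # (-1, 1, 0, 0, 0) and (-1, 0, -1, 0, 1), as above
print(len(basis))      # 2, the dimension of the solution space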

Theorem
If W is a subspace of a finite dimensional vector space V , then the fol-
lowing hold:
(1) W is finite dimensional.
(2) dim(W ) ≤ dim(V ).
(3) W = V iff dim(W ) = dim(V ).
Proof. See the text. □

Theorem
Let V be an n-dimensional vector space and let B be a set in V with
exactly n vectors. Then B is a basis for V iff B spans V or B is linearly
independent.
Proof. See the text. □

Theorem
Let W1 and W2 be subspaces of a vector space V . Then the following hold:
(1) W1 ∩ W2 := {w | w ∈ W1, w ∈ W2} is a subspace of V .
(2) W1 + W2 := {w1 + w2 | w1 ∈ W1, w2 ∈ W2} is a subspace of V .

Proof. (1) Since 0 ∈ W1 and 0 ∈ W2, 0 ∈ W1 ∩ W2 and so W1 ∩ W2 ≠ ∅. Take any u, v ∈ W1 ∩ W2 and k ∈ F . Then u + v ∈ W1 ∩ W2 and ku ∈ W1 ∩ W2. Hence W1 ∩ W2 is a subspace of V .

(2) Since 0 ∈ W1 and 0 ∈ W2, 0 = 0 + 0 ∈ W1 + W2 and so W1 + W2 ≠ ∅. Take any u, v ∈ W1 + W2 and k ∈ F . Then u = w1 + w2 and v = w1′ + w2′ ,
where w1, w1′ ∈ W1 and w2, w2′ ∈ W2. Note that u + v = (w1 + w1′ ) +
(w2 + w2′ ) ∈ W1 + W2 and ku = k(w1 + w2) = kw1 + kw2 ∈ W1 + W2.
Hence W1 + W2 is a subspace of V . □

Example
Consider R2. Let W1 = {(2a, a) | a ∈ R} = {a(2, 1) | a ∈ R} = ⟨(2, 1)⟩ and W2 = {(b, 3b) | b ∈ R} = {b(1, 3) | b ∈ R} = ⟨(1, 3)⟩.
Clearly W1 ∩ W2 = {(0, 0)}. Note that W1 + W2 = R2.
⌈∵ Clearly W1 + W2 ⊆ R2. Take any (x, y) ∈ R2.
Consider (x, y) = (2a, a) + (b, 3b). Then 2a + b = x and a + 3b = y. Note that
[2 1] [a]   [x]
[1 3] [b] = [y]
Since the determinant of the coefficient matrix is 5 ≠ 0, the system is consistent for every (x, y). In fact a = (3x − y)/5 and b = (−x + 2y)/5. ⌋

Remark
Let W1 = ⟨v1, v2, . . . , vn⟩ and W2 = ⟨w1, w2, . . . , wm⟩ be subspaces of V .
Then W1 + W2 = ⟨v1, v2, . . . , vn, w1, w2, . . . , wm⟩.

Example
Consider R3. Let v1 = (1, 1, 0), v2 = (0, 1, 1), v3 = (1, 0, 1). Then
⟨v1⟩ + ⟨v2⟩ = ⟨v1, v2⟩
= {av1 + bv2 | a, b ∈ R}
= {(a, a + b, b) | a, b ∈ R}
= {(x, y, z) | x − y + z = 0, x, y, z ∈ R}
and ⟨v1⟩ + ⟨v2⟩ + ⟨v3⟩ = R3.
⌈∵ Note that
⟨v1⟩ + ⟨v2⟩ + ⟨v3⟩ = {av1 + bv2 + cv3 | a, b, c ∈ R}
= {(a + c, a + b, b + c) | a, b, c ∈ R}
Clearly ⟨v1⟩ + ⟨v2⟩ + ⟨v3⟩ ⊆ R3. Take any (x, y, z) ∈ R3.
Consider (x, y, z) = (a + c, a + b, b + c). Then a + c = x, a + b = y and b + c = z. Note that
[1 0 1] [a]   [x]
[1 1 0] [b] = [y]
[0 1 1] [c]   [z]
Since the determinant of the coefficient matrix is 2 ≠ 0, the system is consistent for every (x, y, z). In fact a = (x + y − z)/2, b = (−x + y + z)/2 and c = (x − y + z)/2. ⌋
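A quick numerical check of the determinant argument above, as a sketch with NumPy:

import numpy as np

A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

print(np.linalg.det(A))  # 2.0 (nonzero), so every (x, y, z) is reachable

# Coefficients (a, b, c) for a sample right-hand side (x, y, z) = (4, 2, 6):
print(np.linalg.solve(A, np.array([4., 2., 6.])))  # [0. 2. 4.]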

Lemma
Let W1 and W2 be finite dimensional subspaces of a vector space V . If W1 ∩ W2 = {0}, then dim(W1 + W2) = dim(W1) + dim(W2).

Proof. Clearly W1 ∩ W2 and W1 + W2 are finite dimensional. We may assume that W1 ⊋ {0} and W2 ⊋ {0}.
⌈∵ If W1 = {0}, then dimW1 = 0 and W1 + W2 = W2 and so dim(W1 +
W2) = dim(W1) + dim(W2). ⌋
Put dimW1 = r ≥ 1 and dimW2 = s ≥ 1. Let B1 = {w1, w2, . . . , wr }
and B2 = {u1, u2, . . . , us} be bases for W1 and W2, respectively. It is clear that W1 + W2 = ⟨w1, w2, . . . , wr , u1, u2, . . . , us⟩. Suppose a1w1 + a2w2 + · · · + ar wr + b1u1 + b2u2 + · · · + bsus = 0. Then a1w1 + a2w2 + · · · + ar wr = −b1u1 − b2u2 − · · · − bsus. Note that a1w1 + a2w2 + · · · + ar wr = −b1u1 − b2u2 − · · · − bsus ∈ W1 ∩ W2 = {0}. Thus a1 = a2 = · · · = ar = 0 = b1 = b2 = · · · = bs. Hence we see that {w1, w2, . . . , wr , u1, u2, . . . , us} is a basis for W1 + W2. Therefore dim(W1 + W2) = dim(W1) + dim(W2). □

Theorem
Let W1 and W2 be finite dimensional subspaces of a vector space V . Then
dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).
Proof. Clearly W1 + W2 and W1 ∩ W2 are finite dimensional. By the previous lemma, we may assume that W1 ∩ W2 ⊋ {0}. If W1 ∩ W2 = W1 (i.e., W1 ⊆ W2), then W1 + W2 = W2, and so dim(W1 + W2) = dim(W2) and dim(W1 ∩ W2) = dim(W1). Thus dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2). If W1 ∩ W2 = W2, then the desired equality holds by the same argument. Suppose now that {0} ⊊ W1 ∩ W2 ⊊ W1 and {0} ⊊ W1 ∩ W2 ⊊ W2. Put dim(W1 ∩ W2) = m ≥ 1 and let B = {v1, v2, . . . , vm} be a basis for W1 ∩ W2. By the theorem above, it follows that there are B1 = {w1, w2, . . . , wr } and B2 = {u1, u2, . . . , us} such that B ∪ B1 and B ∪ B2 are bases for W1 and W2, respectively.
Claim: B := B∪B1∪B2 = {v1, v2, . . . , vm, w1, w2, . . . , wr , u1, u2, . . . , us}
is a basis for W1 + W2.
Clearly W1 + W2 = ⟨v1, . . . , vm, w1, . . . , wr ⟩ + ⟨v1, . . . , vm, u1, . . . , us⟩
= ⟨v1, . . . , vm, w1, . . . , wr , u1, . . . , us⟩.
Suppose a1v1 + · · · + amvm + b1w1 + · · · + br wr + c1u1 + · · · + csus = 0. Then a1v1 + · · · + amvm + b1w1 + · · · + br wr = (−c1)u1 + · · · + (−cs)us ∈ W1 ∩ W2. Thus there are d1, . . . , dm ∈ F such that (−c1)u1 + · · · + (−cs)us = d1v1 + · · · + dmvm and so d1v1 + · · · + dmvm + c1u1 + · · · + csus = 0. Since {v1, . . . , vm, u1, . . . , us} is linearly independent, it follows that d1 = · · · = dm = c1 = · · · = cs = 0. Thus a1v1 + · · · + amvm + b1w1 + · · · + br wr = 0. Since {v1, . . . , vm, w1, . . . , wr } is
linearly independent, a1 = · · · = am = b1 = · · · = br = 0. Hence
{v1, . . . , vm, w1, . . . , wr , u1, . . . , us} is linearly independent. Therefore we
see that {v1, . . . , vm, w1, . . . , wr , u1, . . . , us} is a basis for W1 + W2. In
all, we conclude that
dim(W1 + W2) = m + r + s = (m + r) + (m + s) − m
= dim(W1) + dim(W2) − dim(W1 ∩ W2). □

Example
Consider W1 = {(a, b, 0) | a, b ∈ R} and W2 = {(0, b, c) | b, c ∈ R}. Note that W1 + W2 = R3 and W1 ∩ W2 = {(0, b, 0) | b ∈ R}. Then
dim(W1 + W2) = 3 = 2 + 2 − 1 = dim(W1) + dim(W2) − dim(W1 ∩ W2).
Definition
Let V be the vector space of real valued functions on R. Suppose that f1, . . . , fn ∈ V are n − 1 times differentiable on R. The determinant
        | f1(x)        f2(x)        . . .  fn(x)        |
        | f1′(x)       f2′(x)       . . .  fn′(x)       |
W (x) = |   ⋮            ⋮                   ⋮          |
        | f1^(n−1)(x)  f2^(n−1)(x)  . . .  fn^(n−1)(x)  |
is called the Wronskian of f1, f2, . . . , fn.

Theorem
Let f1, . . . , fn : R → R be functions which are n − 1 times differentiable.
If there exists x0 ∈ R such that W (x0) ̸= 0, then {f1, . . . , fn} is linearly
independent.
Proof. Claim : {f1, . . . , fn} is linearly dependent =⇒ W (x) = 0 for all
x ∈ R.

Suppose f1, . . . , fn are linearly dependent. Then there are k1, k2, . . . , kn ∈
R, not all zero, such that k1f1(x) + k2f2(x) + · · · + knfn(x) = 0 for all
x ∈ R. Note that for all x ∈ R,
k1f1(x) + k2f2(x) + · · · + knfn(x) = 0
k1f1′(x) + k2f2′(x) + · · · + knfn′(x) = 0
⋮
k1f1^(n−1)(x) + k2f2^(n−1)(x) + · · · + knfn^(n−1)(x) = 0
Thus, for each x ∈ R, the linear system
[f1(x)        f2(x)        . . .  fn(x)      ] [k1]   [0]
[f1′(x)       f2′(x)       . . .  fn′(x)     ] [k2]   [0]
[  ⋮            ⋮                    ⋮       ] [ ⋮] = [⋮]
[f1^(n−1)(x)  f2^(n−1)(x)  . . .  fn^(n−1)(x)] [kn]   [0]
has a nontrivial solution. Hence W (x) = 0 for all x ∈ R. □

Example
{x, sin x} is linearly independent.
⌈∵
        | x  sin x |
W (x) = | 1  cos x | = x cos x − sin x.
Since W (π/2) = (π/2) cos(π/2) − sin(π/2) = −1 ≠ 0, it follows from the theorem above that {x, sin x} is linearly independent. ⌋
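The same computation can be done symbolically. A sketch with SymPy that builds the Wronskian matrix directly from the derivatives:

from sympy import Matrix, symbols, sin, pi, simplify

x = symbols('x')
f = [x, sin(x)]

# Row i holds the i-th derivatives of the functions.
W = Matrix([[g.diff(x, i) for g in f] for i in range(len(f))])

w = simplify(W.det())
print(w)                # x*cos(x) - sin(x)
print(w.subs(x, pi/2))  # -1, nonzero, so {x, sin x} is linearly independent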

Definition
A 1 × n matrix is called a row vector or row matrix, and an m × 1 matrix
is called a column vector or column matrix.

Definition
For an m × n matrix
    [a11 a12 . . . a1n]
A = [a21 a22 . . . a2n]
    [ ⋮   ⋮         ⋮]
    [am1 am2 . . . amn]
the vectors
r1 = [a11 a12 . . . a1n]
r2 = [a21 a22 . . . a2n]
⋮
rm = [am1 am2 . . . amn]
in Rn formed from the rows of A are called the row vectors of A, and the vectors
     [a11]        [a12]              [a1n]
c1 = [a21],  c2 = [a22],  . . ., cn = [a2n]
     [ ⋮ ]        [ ⋮ ]              [ ⋮ ]
     [am1]        [am2]              [amn]
in Rm formed from the columns of A are called the column vectors of A.

Definition
Let A be an m × n matrix.
1. The subspace of Rn spanned by the row vectors of A is denoted by
row(A) and is called the row space of A.
2. The subspace of Rm spanned by the column vectors of A is denoted
by col(A) and is called the column space of A.
3. The solution space of the homogeneous system of equations Ax = 0, which is a subspace of Rn, is denoted by null(A) and is called the null space of A.

Example
A = [1 0 3]
    [0 1 2]
The row vectors r1 and r2 form a basis for the row space of A, i.e., the row space of A = ⟨r1, r2⟩ ⊆ R3.
The column vectors c1 and c2 form a basis for the column space of A, i.e., the column space of A = ⟨c1, c2, c3⟩ = ⟨c1, c2⟩ = R2.
Note
Let
    [a11 a12 . . . a1n]         [x1]
A = [a21 a22 . . . a2n]  and x = [x2]
    [ ⋮   ⋮         ⋮]          [ ⋮]
    [am1 am2 . . . amn]         [xn]
Note that
     [a11x1 + a12x2 + · · · + a1nxn]
Ax = [a21x1 + a22x2 + · · · + a2nxn]
     [               ⋮             ]
     [am1x1 + am2x2 + · · · + amnxn]
        [a11]      [a12]             [a1n]
   = x1 [a21] + x2 [a22] + · · · + xn [a2n]
        [ ⋮ ]      [ ⋮ ]             [ ⋮ ]
        [am1]      [am2]             [amn]
= x1c1 + x2c2 + · · · + xncn
where c1, c2, . . . , cn are the column vectors of A. Thus a linear system
Ax = b can be written as x1c1 + x2c2 + · · · + xncn = b. Hence Ax = b
is consistent if and only if b is expressible as a linear combination of the
column vectors of A. Therefore we have the following theorem.
Theorem
Ax = b is consistent if and only if b is in the column space of A.

Example
    
           [−1 3  2] [x1]   [ 1]
Ax = b ⇐⇒ [ 1 2 −3] [x2] = [−9]
           [ 2 1 −2] [x3]   [−3]
Show that b is in the column space of A and express b as a linear combination of the column vectors of A.
(Sol.)
By calculation, x1 = 2, x2 = −1, x3 = 3. Since the system is consistent, b is in the column space of A. Moreover, we see from the note above that
  [−1]     [3]     [ 2]   [ 1]
2 [ 1] − 1 [2] + 3 [−3] = [−9]
  [ 2]     [1]     [−2]   [−3]
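The consistency check is a single linear solve. A sketch with NumPy:

import numpy as np

A = np.array([[-1., 3.,  2.],
              [ 1., 2., -3.],
              [ 2., 1., -2.]])
b = np.array([1., -9., -3.])

x = np.linalg.solve(A, b)
print(x)                      # [ 2. -1.  3.]
print(np.allclose(A @ x, b))  # True: b = 2c1 - c2 + 3c3 lies in col(A)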

Theorem
If x0 denotes any fixed solution of a consistent linear system Ax = b and
if {v1, v2, . . . , vk } is a basis for the null space of A, then every solution of
Ax = b can be expressed in the form x = x0 + c1v1 + c2v2 + · · · + ck vk .
Conversely, for all choices of scalars c1, c2, . . . , ck , the vector x = x0 +
c1v1 + · · · + ck vk is a solution of Ax = b.

Proof. Let x be an arbitrary solution of Ax = b. Then Ax = b and Ax0 = b, and so Ax − Ax0 = A(x − x0) = 0. Thus x − x0 is a solution of Ax = 0 and so there are scalars c1, c2, . . . , ck such that
x − x0 = c1v1 + c2v2 + · · · + ck vk , i.e.,
x = x0 + c1v1 + c2v2 + · · · + ck vk .
Conversely, for any scalars c1, c2, . . . , ck , x = x0 + c1v1 + c2v2 + · · · + ck vk is a solution of Ax = b.

⌈∵ Ax = A(x0 + c1v1 + c2v2 + · · · + ck vk )
= Ax0 + c1(Av1) + c2(Av2) + · · · + ck (Avk )
= b + 0 + 0 + · · · + 0
= b ⌋

Theorem
(1) Elementary row operations do not change the null space of a matrix.
(2) Elementary row operations do not change the row space of a matrix.
Proof. (1) Clear.
⌈∵ Elementary row operations do not change the solution set of Ax = 0.⌋
(2) Let B be a matrix obtained from A by performing an elementary row
operation. Let r1, r2, . . . , rm be the row vectors of A and r′1, r′2, . . . , r′m be
the row vectors of B. If the row operation is a row interchange, then A and B have the same row vectors and so they have the same row space. If the row operation is multiplication of a row by a nonzero scalar or the addition of a multiple of one row to another row, then r′1, r′2, . . . , r′m are linear combinations of r1, r2, . . . , rm (e.g. 2r1 + r3 = r′3). Thus r′1, r′2, . . . , r′m lie
in the row space of A. Since a vector space is closed under addition and
scalar multiplication, all linear combinations of r′1, r′2, . . . , r′m also lie in
the row space of A. Hence each vector in the row space of B is in the row
space of A. Since B is obtained from A by performing a row operation,
A can be obtained from B by performing the inverse operation. Thus the
argument above shows that the row space of A is contained in the row
space of B. Hence we conclude that the row space of A = the row space
of B. □
Remark
Elementary row operations can change the column space.
e.g.
A = [1 3]    B = [1 3]
    [2 6]        [0 0]
Here B is obtained from A by adding −2 times the first row to the second, but col(A) = ⟨(1, 2)⟩ ≠ ⟨(1, 0)⟩ = col(B).

Lemma
Suppose that {v1, . . . , vm} is linearly independent and {v1, . . . , vm, w} is linearly dependent. Then w is a linear combination of the vi's, i.e., w ∈ span{v1, . . . , vm}.

Proof. Since {v1, . . . , vm, w} is linearly dependent, there are k1, . . . , km, l ∈ F , not all 0, such that k1v1 + · · · + kmvm + lw = 0. If l = 0, then one of the ki's is not zero and k1v1 + · · · + kmvm = 0. This is a contradiction because {v1, . . . , vm} is independent. Thus l ≠ 0 and so w = (−k1/l)v1 + · · · + (−km/l)vm. Hence w is a linear combination of the vi's. □

Remark
Let e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1), v = (1, 1, 0). Then
(1) Note that {e1, e2, e3} is linearly independent and {e1, e2, e3, v} is
linearly dependent. By the lemma above, v is a linear combination of ei’s.
(2) Note that {e1, e2, e3, v} is linearly dependent. But e3 is not a linear
combination of e1, e2, and v (cf. {e1, e2, v} is linearly dependent).

Note
Let A and B be matrices which are row equivalent. By Theorem, Ax = 0 and Bx = 0 have the same solution set. Recall that the two systems above can be expressed as follows:
(1) x1c1 + x2c2 + · · · + xncn = 0
(2) x1c′1 + x2c′2 + · · · + xnc′n = 0
Thus (1) has a nontrivial (trivial) solution for x1, x2, . . . , xn if and only if (2) has a nontrivial (trivial, respectively) solution for x1, x2, . . . , xn. Hence we see that the column vectors of A are linearly dependent (linearly independent) if and only if the column vectors of B are linearly dependent (linearly independent, respectively). This conclusion also applies to any subset of the column vectors. Therefore we have the following theorem.

Theorem
Let A and B be row equivalent matrices. Then
(1) A given set of column vectors of A is linearly independent (linearly
dependent) if and only if the corresponding column vectors of B are
linearly independent (linearly dependent, respectively).
(2) A given set of column vectors of A forms a basis for the column
space of A if and only if the corresponding column vectors of B
form a basis for the column space of B.
Proof. (1) This follows from the note above.
(2) Let A and B be row equivalent m × n matrices. We may assume that {c1, c2, . . . , ck } is a basis for the column space of A, where 1 ≤ k ≤ n. Then ck+1, ck+2, . . . , cn are linear combinations of c1, c2, . . . , ck . Since {c1, c2, . . . , ck } is linearly independent, it follows from (1) that {c′1, c′2, . . . , c′k } is linearly independent. Note that {c′1, c′2, . . . , c′k , c′k+1} is linearly dependent. By the lemma, it follows that c′k+1 ∈ span{c′1, c′2, . . . , c′k }. By the same argument, we see that c′j ∈ span{c′1, c′2, . . . , c′k } for each k + 1 ≤ j ≤ n. Thus {c′1, c′2, . . . , c′k } spans the column space of B. Hence {c′1, c′2, . . . , c′k } is a basis for the column space of B. By the same argument, the converse also holds. Therefore we conclude that the desired result holds. □

Remark
1. Even though an elementary row operation can change the column space, it does not change the dimension of the column space (∵ by (2) of the theorem above).
2. In the proof of (2) of the theorem above, there exist scalars l1, l2, . . . , lk ,
not all zero, such that c′k+1 = l1c′1 + l2c′2 + · · · + lk c′k . By Note, we see
that ck+1 = l1c1 + l2c2 + · · · + lk ck . The same result holds for each
k + 1 ≤ j ≤ n. Thus the statement (2) of the theorem above can be
proved from Note without using Lemma.

Theorem
If a matrix R is in row echelon form, then the row vectors with the leading 1's (the nonzero row vectors) form a basis for the row space of R, and the column vectors containing the leading 1's form a basis for the column space of R.

Example
    [1 −2 5 0 3] ← r1
    [0  1 3 0 0] ← r2
R = [0  0 0 1 0] ← r3
    [0  0 0 0 0] ← r4
     ↑  ↑  ↑ ↑ ↑
     c1 c2 c3 c4 c5
By the theorem, {r1, r2, r3} is a basis for the row space of R and {c1, c2, c4} is a basis for the column space of R. (cf. 11c1 + 3c2 = c3, 3c1 = c5)

Example
Find bases for the row space and column space of
    [ 1 −3  4 −2  5  4]
A = [ 2 −6  9 −1  8  2]
    [ 2 −6  9 −1  9  7]
    [−1  3 −4  2 −5 −4]
(Sol.) By elementary row operations,
        [1 −3 4 −2  5  4]
A ⇝ R = [0  0 1  3 −2 −6]
        [0  0 0  0  1  5]
        [0  0 0  0  0  0]
Thus {r1, r2, r3} is a basis for the row space of A.
It follows from the theorem that if a set of column vectors of R forms a basis for the column space of R, then the corresponding column vectors of A form a basis for the column space of A. By the theorem, we see that
      [1]         [4]         [ 5]
c′1 = [0],  c′3 = [1],  c′5 = [−2]
      [0]         [0]         [ 1]
      [0]         [0]         [ 0]
form a basis for the column space of R. Hence the corresponding column vectors of A, which are
     [ 1]        [ 4]        [ 5]
c1 = [ 2],  c3 = [ 9],  c5 = [ 8]
     [ 2]        [ 9]        [ 9]
     [−1]        [−4]        [−5]
form a basis for the column space of A.
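The pivot bookkeeping can be automated. A sketch with SymPy; rref() returns the reduced row echelon form, which has the same pivot columns as the row echelon form R above:

from sympy import Matrix

A = Matrix([[ 1, -3,  4, -2,  5,  4],
            [ 2, -6,  9, -1,  8,  2],
            [ 2, -6,  9, -1,  9,  7],
            [-1,  3, -4,  2, -5, -4]])

R, pivots = A.rref()
print(pivots)          # (0, 2, 4): the 1st, 3rd and 5th columns are pivot columns
for j in pivots:
    print(A.col(j).T)  # c1, c3, c5 of A form a basis for the column space of A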

Example
Find a basis for the space spanned by the vectors v1 = (1, −2, 0, 0, 3), v2 = (2, −5, −3, −2, 6), v3 = (0, 5, 15, 10, 0), v4 = (2, 6, 18, 8, 6).
(Sol.) Put
    [1 −2  0  0 3]
A = [2 −5 −3 −2 6]
    [0  5 15 10 0]
    [2  6 18  8 6]
By elementary row operations, we see that
    [1 −2 0 0 3]
A ⇝ [0  1 3 2 0]
    [0  0 1 1 0]
    [0  0 0 0 0]
Thus {(1, −2, 0, 0, 3), (0, 1, 3, 2, 0), (0, 0, 1, 1, 0)} is a basis for the space spanned by {v1, v2, v3, v4}.

Example
(cf. See the previous example.)
Find a basis for the row space of
    [1 −2  0  0 3]
A = [2 −5 −3 −2 6]
    [0  5 15 10 0]
    [2  6 18  8 6]
consisting entirely of row vectors from A.
(Sol.) Consider
     [ 1  2  0  2]
     [−2 −5  5  6]
AT = [ 0 −3 15 18]
     [ 0 −2 10  8]
     [ 3  6  0  6]
By elementary row operations, we have
     [1 2  0   2]
     [0 1 −5 −10]
AT ⇝ [0 0  0   1]
     [0 0  0   0]
     [0 0  0   0]
By the theorem, it follows that
     [ 1]       [ 2]       [ 2]
c1 = [−2], c2 = [−5], c4 = [ 6]
     [ 0]       [−3]       [18]
     [ 0]       [−2]       [ 8]
     [ 3]       [ 6]       [ 6]
form a basis for the column space of AT . Thus we conclude that r1 = [1 −2 0 0 3], r2 = [2 −5 −3 −2 6], r4 = [2 6 18 8 6] form a basis for the row space of A.

Example
Let v1 = (1, −2, 0, 3), v2 = (2, −5, −3, 6), v3 = (0, 1, 3, 0), v4 = (2, −1, 4, −7), v5 = (5, −8, 1, 2).
(1) Find a subset of the vectors that forms a basis for the space spanned by {v1, v2, v3, v4, v5}.
(2) Express each vector not in the basis as a linear combination of the basis vectors.
(Sol.) (1) Consider the matrix whose columns are v1, . . . , v5:
[ 1  2 0  2  5]
[−2 −5 1 −1 −8]
[ 0 −3 3  4  1]
[ 3  6 0 −7  2]
  ↑  ↑ ↑  ↑  ↑
 v1 v2 v3 v4 v5
By elementary row operations, we have the reduced row echelon form
[1 0  2 0 1]
[0 1 −1 0 1]
[0 0  0 1 1]
[0 0  0 0 0]
 ↑ ↑  ↑ ↑ ↑
 w1 w2 w3 w4 w5
By the theorem, we see that {v1, v2, v4} forms a basis for the space spanned by {v1, v2, v3, v4, v5}.
(2) By inspection, we can easily see that w3 = 2w1 − w2 and w5 = w1 + w2 + w4. By the proof of the theorem, it follows that v3 = 2v1 − v2 and v5 = v1 + v2 + v4.
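The pivot columns and the dependence relations in (2) can both be read off from the reduced row echelon form. A sketch with SymPy:

from sympy import Matrix

# Columns are v1, ..., v5 from the example above.
A = Matrix([[ 1,  2, 0,  2,  5],
            [-2, -5, 1, -1, -8],
            [ 0, -3, 3,  4,  1],
            [ 3,  6, 0, -7,  2]])

R, pivots = A.rref()
print(pivots)      # (0, 1, 3): {v1, v2, v4} is a basis of the span
print(R.col(2).T)  # [2, -1, 0, 0]: v3 = 2v1 - v2
print(R.col(4).T)  # [1, 1, 1, 0]: v5 = v1 + v2 + v4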

Theorem
The row space and the column space of a matrix A have the same dimen-
sion.

Proof. Let A = [aij ]m×n.
Method 1: Suppose that the row space of A has dimension k and that B = {b1, b2, . . . , bk } is a basis for the row space, where bi = (bi1, bi2, . . . , bin) for each 1 ≤ i ≤ k. Note that
r1 = c11b1 + c12b2 + · · · + c1k bk
⋮
rm = cm1b1 + cm2b2 + · · · + cmk bk
where ri is a row vector of A for each 1 ≤ i ≤ m and the cij are scalars. Then
(a11, a12, . . . , a1n) = c11(b11, b12, . . . , b1n) + c12(b21, b22, . . . , b2n) + · · · + c1k (bk1, bk2, . . . , bkn)
(a21, a22, . . . , a2n) = c21(b11, b12, . . . , b1n) + c22(b21, b22, . . . , b2n) + · · · + c2k (bk1, bk2, . . . , bkn)
⋮
Thus, for any 1 ≤ j ≤ n,
a1j = c11b1j + c12b2j + · · · + c1k bkj
a2j = c21b1j + c22b2j + · · · + c2k bkj
⋮
amj = cm1b1j + cm2b2j + · · · + cmk bkj
and so
[a1j]       [c11]              [c1k]
[a2j] = b1j [c21] + · · · + bkj [c2k]
[ ⋮ ]       [ ⋮ ]              [ ⋮ ]
[amj]       [cm1]              [cmk]
Hence each column vector of A lies in the space spanned by the k vectors
[c11]          [c1k]
[c21], . . . , [c2k]
[ ⋮ ]          [ ⋮ ]
[cm1]          [cmk]
and so dim(column space of A) ≤ k. Therefore dim(column space of A) ≤ dim(row space of A). Since the matrix was arbitrary, applying this to AT gives dim(row space of A) = dim(column space of AT ) ≤ dim(row space of AT ) = dim(column space of A).
In all, we conclude that dim(row space of A) = dim(column space of A).
Method 2:
Let R be a row echelon form of A. By the theorem, we see that
dim(row space of A) = dim(row space of R),
dim(column space of A) = dim(column space of R).
By the theorem, we know that the dimension of the row space of R is the number of nonzero rows (= the number of rows in which leading 1's occur) and the dimension of the column space of R is the number of columns containing leading 1's. Hence we see that dim(row space of R) = dim(column space of R). Therefore we conclude that
dim(row space of A) = dim(column space of A). □

Definition
The common dimension of the row space and the column space of a
matrix A is called the rank of A and is denoted by rank(A) or r(A). The
dimension of the null space of A is called the nullity of A and is denoted
by nullity(A) or n(A).

Example
 
Let
    [−1  2 0  4  5 −3]
A = [ 3 −7 2  0  1  4]
    [ 2 −5 2  4  6  1]
    [ 4 −9 2 −4 −4  7]

The reduced row echelon form of A is
    [1 0 −4 −28 −37 13]
R = [0 1 −2 −12 −16  5]
    [0 0  0   0   0  0]
    [0 0  0   0   0  0]
Thus rank(A) = 2. To calculate nullity(A), consider Ax = 0 ⇐⇒ Rx = 0.
Then
x1 − 4x3 − 28x4 − 37x5 + 13x6 = 0
x2 − 2x3 − 12x4 − 16x5 + 5x6 = 0
and so
x1 = 4r + 28s + 37t − 13u
x2 = 2r + 12s + 16t − 5u
x3 = r
x4 = s
x5 = t
x6 = u.
Hence
[x1]    [4]    [28]    [37]    [−13]
[x2]    [2]    [12]    [16]    [ −5]
[x3] = r[1] + s[ 0] + t[ 0] + u[  0]
[x4]    [0]    [ 1]    [ 0]    [  0]
[x5]    [0]    [ 0]    [ 1]    [  0]
[x6]    [0]    [ 0]    [ 0]    [  1]
Therefore nullity(A) = 4.
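A direct check of the rank and nullity computed above, as a sketch with SymPy:

from sympy import Matrix

A = Matrix([[-1,  2, 0,  4,  5, -3],
            [ 3, -7, 2,  0,  1,  4],
            [ 2, -5, 2,  4,  6,  1],
            [ 4, -9, 2, -4, -4,  7]])

print(A.rank())            # 2
print(len(A.nullspace()))  # 4 = nullity(A); note 2 + 4 = 6, the number of columns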

Remark
For any matrix A, rank(A) = rank(AT ).
⌈∵ rank(A) = dim(row space of A) = dim(column space of AT ) =
rank(AT ) ⌋

Theorem (Dimension theorem for matrices)
Let A be a matrix with n columns. Then rank(A) + nullity(A) = n.
Proof. Note that Ax = 0 has n unknown variables.
⌈∵ A has n columns. ⌋
Then the number of leading variables + the number of free variables = n, and so rank(A) + nullity(A) = n.
⌈∵ The number of leading variables
= the number of leading 1's in a row echelon form of A
= rank(A).
The number of free variables
= the number of parameters in the general solution of Ax = 0
= the dimension of the solution space of Ax = 0
= nullity(A). ⌋ □
Remark
Let A be an n × n matrix.
rank(A) = the number of leading variables in the general solution of
Ax = 0.
nullity(A) = the number of parameters in the general solution of Ax = 0.
Theorem
Let A ∈ Matn×n(R). Then the following are equivalent:
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
...
(j) The column vectors of A are linearly independent.
(k) The row vectors of A are linearly independent.
(l) The column vectors of A span Rn.
(m) The row vectors of A span Rn.
(n) The column vectors of A form a basis for Rn.
(o) The row vectors of A form a basis for Rn.
(p) A has rank n.
(q) A has nullity 0.

Proof. (b) =⇒ (j)
Let c1, c2, . . . , cn be the column vectors of A. Consider x1c1 + x2c2 + · · · + xncn = 0 (⇔ Ax = 0). Since Ax = 0 has only the trivial solution, x1 = x2 = · · · = xn = 0. Thus the column vectors of A are linearly independent.
(j) ⇐⇒ (l) ⇐⇒ (n): By Theorem
(k) ⇐⇒ (m) ⇐⇒ (o): By Theorem
(n) =⇒ (p)
Since the column vectors of A form a basis for Rn, the column space of
A is n-dimensional and so rank(A) = n.
(o) =⇒ (p)
Since the row vectors of A form a basis for Rn, the row space of A is
n-dimensional and so rank(A) = n.
(p) =⇒ (q)
It follows immediately from Dimension Theorem.
(q) =⇒ (b)
Since nullity(A) = 0, the solution space of Ax = 0 is zero dimensional.
Hence Ax = 0 has only the trivial solution.
(j) =⇒ (p) =⇒ (k): Clear
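For a concrete n × n matrix, several of these equivalences can be checked side by side. A small sketch with SymPy; the 2 × 2 matrix is an arbitrary invertible choice, not one from the text:

from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])  # arbitrary invertible example
n = A.shape[0]

print(A.det() != 0)           # True: (a) A is invertible
print(A.rank() == n)          # True: (p) A has rank n
print(A.nullspace() == [])    # True: (q) nullity 0, so (b) holds
print(A.inv() * A == eye(n))  # True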
