
Linear Algebra – Enhanced Revision Notes

Elementary Matrices, LU Decomposition, Vector Spaces, and More

Comprehensive Revision Guide

September 11, 2025

Contents
1 Elementary Matrices

2 LU Decomposition

3 Vector Spaces

4 Column Space

5 Null Space

6 Basis and Span

7 Dimension

8 Rank and Nullity

9 Rank-Nullity Theorem

10 Subspaces

11 Pivots, Free Variables, and Gaussian Elimination

12 Consistent and Inconsistent Systems

13 Affine Spaces

14 General and Particular Solutions

15 Advanced Topics and Applications

16 Quick Reference and Formulas

1 Elementary Matrices
Definition. An elementary matrix is obtained by applying a single elementary row operation
to the identity matrix. Multiplying an elementary matrix E on the left of a matrix A performs
the corresponding row operation on A.

Three Types of Elementary Matrices


1. Type I (Row Swap): Eij swaps rows i and j

2. Type II (Row Scaling): Ei (k) multiplies row i by nonzero scalar k

3. Type III (Row Addition): Eij (k) adds k times row j to row i

Key Properties
• Every elementary matrix is invertible

• (Eij)⁻¹ = Eij (row swaps are self-inverse)

• (Ei(k))⁻¹ = Ei(1/k) for k ≠ 0

• (Eij(k))⁻¹ = Eij(−k)

• det(Eij) = −1, det(Ei(k)) = k, det(Eij(k)) = 1

Example: Row swap R1 ↔ R3 in 4 × 4

E13 = [0 0 1 0; 0 1 0 0; 1 0 0 0; 0 0 0 1]

For any 4 × 4 matrix A, E13 A swaps the first and third rows of A.

Example: Row scaling R2 → −3R2

E2(−3) = [1 0 0; 0 −3 0; 0 0 1]

This multiplies the second row by −3.

Example: Row addition R3 → R3 + 5R1

E31(5) = [1 0 0; 0 1 0; 5 0 1]

This adds 5 times the first row to the third row.

Example: Complex operation using multiple elementary matrices

To perform R1 ↔ R2, then R2 → 2R2, then R3 → R3 − 4R1 on a 3 × 3 matrix A:

E = E31(−4) · E2(2) · E12 = [1 0 0; 0 1 0; −4 0 1] · [1 0 0; 0 2 0; 0 0 1] · [0 1 0; 1 0 0; 0 0 1] = [0 1 0; 2 0 0; 0 −4 1]

(The rightmost factor acts first. Note that R3 − 4R1 subtracts 4 times the current first row, which after the swap is the original second row.)

Note: Right multiplication

Right multiplication AE performs column operations instead of row operations.
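The three constructions above are easy to check numerically. A minimal NumPy sketch (not part of the original notes; the helper names are illustrative, and rows are 0-indexed):

```python
import numpy as np

# Build elementary matrices by applying one row operation to the identity.
def row_swap(n, i, j):          # Type I: swap rows i and j
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def row_scale(n, i, k):         # Type II: multiply row i by k != 0
    E = np.eye(n)
    E[i, i] = k
    return E

def row_add(n, i, j, k):        # Type III: add k times row j to row i
    E = np.eye(n)
    E[i, j] = k
    return E

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
# R3 -> R3 + 5 R1 (1-based), i.e. rows 2 and 0 here:
print(row_add(3, 2, 0, 5.0) @ A)   # third row becomes [12. 18. 24.]
# the inverse rules hold: E_i(k)^-1 = E_i(1/k)
print(row_scale(3, 1, -3.0) @ row_scale(3, 1, -1/3.0))  # identity
```

Left-multiplying by each helper performs exactly the row operation it was built from, which is the defining property stated above.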

2 LU Decomposition
Definition. For a square matrix A, an LU decomposition is a factorization A = LU where L
is lower-triangular (often with unit diagonal) and U is upper-triangular.

Existence Conditions
• LU exists if all leading principal minors are nonzero

• If pivoting is needed, we get P A = LU where P is a permutation matrix

• For any invertible matrix, P LU decomposition always exists

Example: Basic LU decomposition

Compute LU for A = [2 3; 4 7].
Step 1: Use pivot a11 = 2. Multiplier: ℓ21 = 4/2 = 2.
Step 2: R2 → R2 − 2R1 gives U = [2 3; 0 1].
Result: L = [1 0; 2 1], U = [2 3; 0 1].
Verification: LU = [1 0; 2 1][2 3; 0 1] = [2 3; 4 7] = A. ✓

Example: LU with partial pivoting needed

For A = [0 1; 2 3], a row swap is needed since a11 = 0.
Step 1: P = [0 1; 1 0] (swap rows). Step 2: PA = [2 3; 0 1].
Result: PA = LU where L = [1 0; 0 1], U = [2 3; 0 1].

Example: Larger matrix LU decomposition

For A = [1 2 3; 2 5 8; 1 3 4]:

Solution
Step 1: Eliminate below a11 = 1

ℓ21 = 2/1 = 2, R2 → R2 − 2R1
ℓ31 = 1/1 = 1, R3 → R3 − 1R1

After elimination: [1 2 3; 0 1 2; 0 1 1]
Step 2: Eliminate below a22 = 1

ℓ32 = 1/1 = 1, R3 → R3 − 1R2

Final result: L = [1 0 0; 2 1 0; 1 1 1], U = [1 2 3; 0 1 2; 0 0 −1]

Example: LU applications: Solving systems

Given A = LU, to solve Ax = b:

1. Solve Ly = b (forward substitution)

2. Solve Ux = y (backward substitution)

For A = [2 3; 4 7], b = (8, 18)ᵀ:
Ly = b: [1 0; 2 1][y1; y2] = [8; 18] ⇒ y1 = 8, 2y1 + y2 = 18 ⇒ y2 = 2
Ux = y: [2 3; 0 1][x1; x2] = [8; 2] ⇒ x2 = 2, 2x1 + 3x2 = 8 ⇒ x1 = 1
So x = (1, 2)ᵀ.
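The two-step solve can be sketched in NumPy (a minimal Doolittle elimination without pivoting, an assumption not spelled out in the notes; `np.linalg.solve` stands in for the triangular substitutions):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting (assumes all leading pivots are nonzero)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier l_ik
            U[i, :] -= L[i, k] * U[k, :]   # R_i -> R_i - l_ik * R_k
    return L, U

A = np.array([[2., 3.], [4., 7.]])
b = np.array([8., 18.])
L, U = lu_nopivot(A)
y = np.linalg.solve(L, b)   # forward substitution: Ly = b
x = np.linalg.solve(U, y)   # back substitution:    Ux = y
print(L)   # [[1. 0.] [2. 1.]]
print(U)   # [[2. 3.] [0. 1.]]
print(x)   # [1. 2.]
```

For matrices needing pivoting (like the a11 = 0 example above), a production routine such as SciPy's `scipy.linalg.lu` computes PA = LU instead.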

3 Vector Spaces
Definition. A set V with operations addition (+) and scalar multiplication (·) is a vector space
over field F if it satisfies these axioms:

Vector Space Axioms


1. Closure: u + v ∈ V and c · v ∈ V for all u, v ∈ V , c ∈ F

2. Associativity: (u + v) + w = u + (v + w)

3. Commutativity: u + v = v + u

4. Zero element: ∃0 ∈ V such that v + 0 = v for all v ∈ V

5. Additive inverse: For each v ∈ V , ∃(−v) ∈ V such that v + (−v) = 0

6. Scalar multiplication axioms: 1 · v = v, c(dv) = (cd)v

7. Distributivity: c(u + v) = cu + cv, (c + d)v = cv + dv

Example: Standard vector spaces

1. Rn with component-wise addition and scalar multiplication

2. Pn (polynomials of degree ≤ n) with usual operations

3. Mm×n (matrices) with matrix addition and scalar multiplication

4. C[a, b] (continuous functions on [a, b]) with pointwise operations

Example: Verifying vector space: Polynomial space P2

Let P2 = {a0 + a1 x + a2 x2 : a0 , a1 , a2 ∈ R}
Closure under addition: (a0 + a1 x + a2 x2 ) + (b0 + b1 x + b2 x2 ) = (a0 + b0 ) + (a1 +
b1 )x + (a2 + b2 )x2 ∈ P2
Closure under scalar multiplication: c(a0 + a1 x + a2 x2 ) = ca0 + ca1 x + ca2 x2 ∈ P2
Zero element: 0(x) = 0 + 0x + 0x2

Example: Non-vector space

Let V = {(x, y) ∈ R² : xy ≥ 0} (the first and third quadrants, including the axes).

This is not a vector space because it is not closed under addition: (1, 0) ∈ V and (0, −1) ∈ V (each has product 0 ≥ 0), but their sum (1, 0) + (0, −1) = (1, −1) ∉ V since 1 · (−1) = −1 < 0.

Example: Function spaces

C¹[0, 1] = {f : [0, 1] → R : f is differentiable and f′ is continuous}

This is a vector space under pointwise addition and scalar multiplication:
• (f + g)′(x) = f′(x) + g′(x) (sum rule)
• (cf)′(x) = cf′(x) (constant rule)
• Zero function: 0(x) = 0 for all x

4 Column Space
Definition. The column space Col(A) is the span of the columns of matrix A. Equivalently,
Col(A) = {b : Ax = b is consistent}.

Key Properties
• Col(A) is a subspace of Rm when A is m × n

• dim(Col(A)) = rank(A)

• Pivot columns of A form a basis for Col(A)

Example: Finding column space

Let A = [1 2 3; 2 4 6; 1 2 3].
Notice that c2 = 2c1 and c3 = 3c1, so every column is a multiple of c1. Therefore Col(A) = span{(1, 2, 1)ᵀ}, a 1-dimensional subspace (a line through the origin) of R³.

Example: Column space via row reduction

For A = [1 2 0 3; 2 4 1 6; 1 2 1 3]:
Row reduce to find the pivot columns:
[1 2 0 3; 2 4 1 6; 1 2 1 3] → [1 2 0 3; 0 0 1 0; 0 0 0 0]
The pivot columns are 1 and 3, so Col(A) = span{(1, 2, 1)ᵀ, (0, 1, 1)ᵀ}.
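The pivot-column computation can be done in exact arithmetic with a small hand-rolled RREF routine (a sketch; `rref` is a hypothetical helper, not a library function):

```python
from fractions import Fraction

def rref(rows):
    """Return (RREF, pivot column indices) using exact Fraction arithmetic."""
    M = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # move a pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]            # normalize pivot to 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:   # clear the rest of the column
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, 0, 3],
     [2, 4, 1, 6],
     [1, 2, 1, 3]]
_, piv = rref(A)
print([c + 1 for c in piv])  # [1, 3]: columns 1 and 3 of A form a basis for Col(A)
```

Exact fractions avoid the floating-point roundoff that can blur which entries are "really" zero during elimination.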
 

Example: Geometric interpretation

For A = [1 0; 0 1; 1 1]:
Col(A) = span{(1, 0, 1)ᵀ, (0, 1, 1)ᵀ}
This is a plane in R³ through the origin, with normal vector (1, 0, 1) × (0, 1, 1) = (−1, −1, 1).

Warning: Common mistake

Don’t confuse the columns of the original matrix with the columns of the row-reduced
form! The pivot columns of the original matrix form the basis for Col(A).

5 Null Space
Definition. The null space Nul(A) = {x ∈ Rn : Ax = 0} is the set of all solutions to the
homogeneous system Ax = 0.

Key Properties
• Nul(A) is always a subspace of Rn when A is m × n

• dim(Nul(A)) = nullity(A) = n − rank(A)

• Basis vectors correspond to free variables in the solution

Example: Basic null space calculation

For A = [1 2 3; 0 0 0], solve Ax = 0:
From x1 + 2x2 + 3x3 = 0 we get x1 = −2x2 − 3x3. Let x2 = s and x3 = t be free variables.
General solution: x = (−2s − 3t, s, t)ᵀ = s(−2, 1, 0)ᵀ + t(−3, 0, 1)ᵀ
Therefore Nul(A) = span{(−2, 1, 0)ᵀ, (−3, 0, 1)ᵀ}.

Example: Null space of square matrix

For A = [1 2; 3 6]:
Row reduce: [1 2; 3 6] → [1 2; 0 0]
From x1 + 2x2 = 0: x1 = −2x2. Let x2 = t: Nul(A) = span{(−2, 1)ᵀ}.
Check: rank(A) = 1, nullity(A) = 2 − 1 = 1. ✓

Example: Relationship with linear independence

If columns of A are linearly independent, then Nul(A) = {0}.


Proof: If Ax = 0 and columns are linearly independent, then the only solution is x = 0.

Example: Computing fundamental matrix solutions

For A = [1 −3 0 −1; 0 0 1 2]:
The RREF gives x1 − 3x2 − x4 = 0 and x3 + 2x4 = 0. Free variables: x2, x4.
Setting x2 = 1, x4 = 0: x1 = 3, x3 = 0 ⇒ v1 = (3, 1, 0, 0)ᵀ
Setting x2 = 0, x4 = 1: x1 = 1, x3 = −2 ⇒ v2 = (1, 0, −2, 1)ᵀ
Nul(A) = span{v1, v2}, with dimension 2.
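A quick numerical check of these fundamental solutions (each should be sent to the zero vector), sketched in NumPy:

```python
import numpy as np

A = np.array([[1, -3, 0, -1],
              [0,  0, 1,  2]])
v1 = np.array([3, 1, 0, 0])     # from x2 = 1, x4 = 0
v2 = np.array([1, 0, -2, 1])    # from x2 = 0, x4 = 1
print(A @ v1, A @ v2)           # [0 0] [0 0]
```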

6 Basis and Span


Basis. A set {v1 , v2 , . . . , vk } is a basis for vector space V if:

1. The vectors are linearly independent

2. The vectors span V

Span. span{v1 , v2 , . . . , vk } = {c1 v1 + c2 v2 + · · · + ck vk : ci ∈ F}

Example: Standard bases

• R³: {(1, 0, 0)ᵀ, (0, 1, 0)ᵀ, (0, 0, 1)ᵀ}

• P2: {1, x, x²}

• M2×2: {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}

Example: Testing for linear independence

Are the vectors v1 = (1, 2, 1), v2 = (2, 1, 3), v3 = (1, −1, 2) linearly independent?
Set up c1v1 + c2v2 + c3v3 = 0:
[1 2 1; 2 1 −1; 1 3 2][c1; c2; c3] = [0; 0; 0]
Row reduce to see whether only the trivial solution exists:
[1 2 1; 2 1 −1; 1 3 2] → [1 2 1; 0 −3 −3; 0 1 1] → [1 2 1; 0 1 1; 0 0 0]
A zero row means nontrivial solutions exist, so the vectors are linearly dependent. Indeed, v3 = v2 − v1 (verify this!).
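The dependence can be confirmed numerically: the rank of the matrix with these vectors as columns is less than 3 (a NumPy sketch, not part of the original notes):

```python
import numpy as np

V = np.column_stack([(1, 2, 1), (2, 1, 3), (1, -1, 2)])  # v1, v2, v3 as columns
print(np.linalg.matrix_rank(V))   # 2 < 3, so the vectors are dependent
print(V[:, 1] - V[:, 0])          # [ 1 -1  2], confirming v3 = v2 - v1
```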

Example: Finding a basis for span

Find a basis for span{(1, 2, 0, 1), (0, 1, 1, 0), (1, 3, 1, 1), (2, 5, 1, 2)}.
Create a matrix with these vectors as columns and row reduce:
[1 0 1 2; 2 1 3 5; 0 1 1 1; 1 0 1 2] → [1 0 1 2; 0 1 1 1; 0 0 0 0; 0 0 0 0]
The pivot columns are 1 and 2, so a basis is {(1, 2, 0, 1), (0, 1, 1, 0)}.

Example: Change of basis

Express v = (5, 7) in the basis B = {(1, 2), (3, 1)} of R².

Solve c1(1, 2) + c2(3, 1) = (5, 7):
[1 3; 2 1][c1; c2] = [5; 7]
From the first equation, c1 = 5 − 3c2; substituting into 2c1 + c2 = 7 gives 10 − 5c2 = 7, so c2 = 3/5 and c1 = 16/5. Thus [v]B = (16/5, 3/5)ᵀ.
Check: (16/5)(1, 2) + (3/5)(3, 1) = (5, 7). ✓
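Coordinates in a basis are just the solution of a linear system, so they can be computed directly (a NumPy sketch):

```python
import numpy as np

B = np.column_stack([(1, 2), (3, 1)])   # basis vectors as columns
v = np.array([5., 7.])
c = np.linalg.solve(B, v)               # coordinates [v]_B
print(c)                                # [3.2 0.6], i.e. (16/5, 3/5)
```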

7 Dimension
Definition. The dimension of a vector space V is the number of vectors in any basis of V . We
write dim(V ).

Important Theorems
• All bases of a vector space have the same number of elements

• If dim(V ) = n, then any set of n linearly independent vectors is a basis

• If dim(V ) = n, then any set of n vectors that spans V is a basis

Example: Computing dimensions

• dim(Rn ) = n

• dim(Pn ) = n + 1 (basis: {1, x, x2 , . . . , xn })

• dim(Mm×n ) = mn

• dim({0}) = 0 (trivial vector space)

Example: Dimension of solution spaces

Consider the system:
x + 2y − z = 0
2x + 4y − 2z = 0
Row reduce: [1 2 −1; 2 4 −2] → [1 2 −1; 0 0 0]
One pivot column, so rank = 1 and dim(solution space) = 3 − 1 = 2. Basis: {(−2, 1, 0), (1, 0, 1)} (from free variables y and z).

Example: Dimension and subspaces

Let W = {(x, y, z) ∈ R3 : x + y + 2z = 0}.


This is a plane through the origin (subspace of R3 ). To find dimension, solve x+y+2z =
0 for x: x = −y − 2z. Let y = s, z = t be parameters.
General solution: (x, y, z) = (−s − 2t, s, t) = s(−1, 1, 0) + t(−2, 0, 1) Therefore
dim(W ) = 2 with basis {(−1, 1, 0), (−2, 0, 1)}.

Note: Dimension formula for subspaces

If W is a subspace of Rn defined by k linearly independent equations, then dim(W ) =


n − k.

8 Rank and Nullity


Definitions.

• rank(A) = dim(Col(A)) = number of pivot columns = number of linearly independent


rows

• nullity(A) = dim(Nul(A)) = number of free variables

Methods to Find Rank
1. Count pivot positions in row echelon form

2. Count linearly independent columns

3. Count linearly independent rows

4. Use determinants of submatrices (for square matrices)

Example: Rank calculation via row reduction

Find the rank of A = [1 2 3 4; 2 4 7 10; 1 2 4 6]:
Row reduce:
[1 2 3 4; 2 4 7 10; 1 2 4 6] → [1 2 3 4; 0 0 1 2; 0 0 1 2] → [1 2 3 4; 0 0 1 2; 0 0 0 0]
Two pivot positions, so rank(A) = 2 and nullity(A) = 4 − 2 = 2.

Example: Rank of matrix products

Properties of rank:

• rank(AB) ≤ min(rank(A), rank(B))

• rank(Aᵀ) = rank(A)

• rank(AᵀA) = rank(A) for any real matrix A

For A = [1 2; 0 0] and B = [1 0; 0 1]: rank(A) = 1, rank(B) = 2, and rank(AB) = rank([1 2; 0 0]) = 1. Indeed, 1 ≤ min(1, 2) = 1.

Example: Applications: Consistency of systems

The system Ax = b is consistent if and only if rank(A) = rank([A|b]).
For [1 2; 2 4][x; y] = [3; 6]:
rank(A) = 1 and rank([A|b]) = rank([1 2 3; 2 4 6]) = 1. Since the ranks are equal, the system is consistent.

Example: Rank and invertibility

For an n × n matrix A:

• A is invertible ⇔ rank(A) = n

• A is invertible ⇔ nullity(A) = 0

• A is invertible ⇔ det(A) ≠ 0

Test: A = [1 2 3; 0 1 2; 0 0 1] (upper triangular). rank(A) = 3 = n, so A is invertible, and det(A) = 1 · 1 · 1 = 1 ≠ 0.

9 Rank-Nullity Theorem
Fundamental Theorem. For any m × n matrix A:

rank(A) + nullity(A) = n

This connects the dimension of the column space with the dimension of the null space.

Example: Verifying rank-nullity theorem

For A = [1 2 1 0; 0 0 1 2; 0 0 0 0] (a 3 × 4 matrix), the row echelon form shows:
• Pivots in columns 1 and 3 ⇒ rank(A) = 2
• Free variables x2, x4 ⇒ nullity(A) = 2
• Check: 2 + 2 = 4 = n ✓
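The theorem is easy to check numerically for this matrix (a NumPy sketch):

```python
import numpy as np

A = np.array([[1, 2, 1, 0],
              [0, 0, 1, 2],
              [0, 0, 0, 0]])
rank = np.linalg.matrix_rank(A)
n = A.shape[1]
print(rank, n - rank)   # 2 2, and rank + nullity = 4 = n
```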

Example: Dimension counting in transformations

A linear transformation T : R⁵ → R³ has matrix representation A (3 × 5). If dim(Nul(T)) = 2, what is dim(Range(T))?
By rank-nullity, rank(A) + nullity(A) = 5, so rank(A) = 5 − 2 = 3. Since Range(T) = Col(A), dim(Range(T)) = 3.

Example: Implications for system solutions

System Ax = 0 where A is m × n:

• If n > m: Always has nontrivial solutions (more variables than equations)

• If n = m and rank(A) = n: Only trivial solution

• If rank(A) < n: Infinitely many solutions with dim(solution space) = n−rank(A)

10 Subspaces
Definition. A subset W ⊆ V is a subspace if:
1. 0 ∈ W (contains zero vector)

2. u, v ∈ W ⇒ u + v ∈ W (closed under addition)

3. v ∈ W, c ∈ F ⇒ cv ∈ W (closed under scalar multiplication)

Example: Standard subspaces of R3

• {(0, 0, 0)} (trivial subspace)

• Lines through origin: {t(a, b, c) : t ∈ R} where (a, b, c) ≠ (0, 0, 0)

• Planes through origin: {(x, y, z) : ax + by + cz = 0}

• All of R3

Example: Subspace verification

Is W = {(x, y, z) ∈ R³ : x + 2y − z = 0} a subspace?
Zero test: (0, 0, 0) satisfies 0 + 2(0) − 0 = 0. ✓
Addition: if (x1, y1, z1), (x2, y2, z2) ∈ W, then (x1 + x2) + 2(y1 + y2) − (z1 + z2) = (x1 + 2y1 − z1) + (x2 + 2y2 − z2) = 0 + 0 = 0. ✓
Scalar multiplication: if (x, y, z) ∈ W and c ∈ R, then cx + 2(cy) − cz = c(x + 2y − z) = c · 0 = 0. ✓
Therefore, W is a subspace.

Example: Non-subspace examples

1. W = {(x, y) ∈ R² : x + y = 1} (a line not through the origin). Not a subspace: (0, 0) ∉ W.

2. W = {(x, y) ∈ R² : xy = 0} (the coordinate axes). Not a subspace: (1, 0), (0, 1) ∈ W but (1, 0) + (0, 1) = (1, 1) ∉ W.

3. W = {(x, y, z) ∈ R³ : |x| + |y| + |z| ≤ 1} (the unit ball). Not a subspace: not closed under scalar multiplication (try c = 2).

Example: Intersection and sum of subspaces

Let U = span{(1, 0, 0), (0, 1, 0)} and V = span{(1, 1, 0), (0, 0, 1)}.
Intersection: U ∩ V = {(x, y, z) : z = 0 and (x, y, z) ∈ V}. Solving (x, y, 0) = a(1, 1, 0) + b(0, 0, 1) gives b = 0, so (x, y, 0) = a(1, 1, 0). Therefore U ∩ V = span{(1, 1, 0)} with dim(U ∩ V) = 1.
Sum: U + V = {u + v : u ∈ U, v ∈ V}. Any vector in R³ can be written as (a, b, 0) + (c, c, d) = (a + c, b + c, d); for a given (x, y, z) take a = x − y, b = 0, c = y, d = z. Therefore U + V = R³ with dim(U + V) = 3.
Verification of the dimension formula: dim(U) + dim(V) = 2 + 2 = 4 and dim(U ∩ V) + dim(U + V) = 1 + 3 = 4. ✓

11 Pivots, Free Variables, and Gaussian Elimination


Definitions:

• Pivot: First nonzero entry in each row of echelon form

• Basic variables: Variables corresponding to pivot columns

• Free variables: Variables not corresponding to pivot columns

Algorithm for Solving Systems


1. Form augmented matrix [A|b]

2. Row reduce to reduced row echelon form (RREF)

3. Identify pivot and free variables

4. Express basic variables in terms of free variables

5. Write general solution

Example: System with unique solution

x + 2y − z = 3
2x + y + z = 0
x − y + z = −3

Augmented matrix and reduction:

[1 2 −1 3; 2 1 1 0; 1 −1 1 −3] → [1 0 0 −1; 0 1 0 2; 0 0 1 0]

All variables are basic (3 pivots for 3 variables). Unique solution: (x, y, z) = (−1, 2, 0).

Example: System with infinitely many solutions

x + 2y + 3z = 6
2x + 4y + 7z = 13
3x + 6y + 10z = 19

Augmented matrix reduction:

[1 2 3 6; 2 4 7 13; 3 6 10 19] → [1 2 3 6; 0 0 1 1; 0 0 0 0]

Pivots in columns 1 and 3. Basic variables: x, z. Free variable: y = t. From the reduced form:
z = 1 (row 2), and x + 2t + 3(1) = 6 ⇒ x = 3 − 2t (row 1).
General solution: (x, y, z) = (3, 0, 1) + t(−2, 1, 0)
This represents a line in R³.
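Every point on this solution line can be checked against the original system (a NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 7.],
              [3., 6., 10.]])
b = np.array([6., 13., 19.])
for t in [0.0, 1.0, -2.5]:
    x = np.array([3 - 2 * t, t, 1.0])   # the line (3, 0, 1) + t(-2, 1, 0)
    assert np.allclose(A @ x, b)
print("all points on the line solve the system")
```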

Example: System with no solution (inconsistent)

x + y = 1
x + y = 2
Clearly inconsistent. The augmented matrix:
[1 1 1; 1 1 2] → [1 1 1; 0 0 1]
The last row represents 0 = 1, which is impossible, so no solution exists.

Example: Parametric solutions with multiple free variables

Solve [1 2 0 3 4; 0 0 1 2 1; 0 0 0 0 0] x = (5, −1, 0)ᵀ for x ∈ R⁵.
From the RREF:
• Row 1: x1 + 2x2 + 3x4 + 4x5 = 5
• Row 2: x3 + 2x4 + x5 = −1
Basic variables: x1, x3. Free variables: x2 = s, x4 = t, x5 = u.
Solving: x1 = 5 − 2s − 3t − 4u and x3 = −1 − 2t − u.
General solution:
x = (5, 0, −1, 0, 0) + s(−2, 1, 0, 0, 0) + t(−3, 0, −2, 1, 0) + u(−4, 0, −1, 0, 1)

12 Consistent and Inconsistent Systems


Theorem (Consistency). The system Ax = b is consistent if and only if rank(A) = rank([A|b]).

Cases for Solutions


Let A be m × n with rank(A) = r.

1. Inconsistent: rank([A|b]) > rank(A)

2. Unique solution: rank([A|b]) = rank(A) = n

3. Infinitely many solutions: rank([A|b]) = rank(A) < n

Example: Testing consistency

Test the consistency of [1 2; 2 4][x; y] = [3; 7]:
Row reduce the augmented matrix:
[1 2 3; 2 4 7] → [1 2 3; 0 0 1]
rank(A) = 1 (one pivot in the coefficient matrix), rank([A|b]) = 2 (two pivots in the augmented matrix).
Since 2 > 1, the system is inconsistent.
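The rank test can be sketched directly in NumPy:

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])
b = np.array([[3.], [7.]])
Ab = np.hstack([A, b])                  # augmented matrix [A|b]
print(np.linalg.matrix_rank(A),         # 1
      np.linalg.matrix_rank(Ab))        # 2 -> ranks differ, so inconsistent
```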

Example: Geometric interpretation

System [1 1; 2 2][x; y] = [3; 6]:
This represents:
• x + y = 3 (a line in R²)
• 2x + 2y = 6, which simplifies to x + y = 3 (the same line)
Since both equations represent the same line, there are infinitely many solutions: every point (x, y) on the line x + y = 3.

Example: Parametric description of solution sets

For the consistent system with infinitely many solutions: x + y = 3


Solution set: {(3 − t, t) : t ∈ R} = {(3, 0) + t(−1, 1) : t ∈ R}
This is an affine set: a translation of the line t(−1, 1) by the vector (3, 0).

Warning: Common error in rank computation

When checking consistency, always row reduce the augmented matrix [A|b], not just
the coefficient matrix A. The appearance of a pivot in the last column indicates incon-
sistency.

13 Affine Spaces
Definition. An affine space (or affine set) is a translation of a linear subspace:
v0 + W = {v0 + w : w ∈ W }
where v0 is a fixed vector and W is a subspace.

Properties of Affine Spaces


• Solution sets of consistent linear systems Ax = b (when b ̸= 0) are affine spaces
• Affine spaces are not subspaces (unless they contain the origin)
• The "direction" of an affine space is given by the associated subspace W
Example: Line in R2 not through origin

Consider x + y = 2. The solution set is:

{(x, y) : x + y = 2} = {(2, 0) + t(−1, 1) : t ∈ R}

This is the affine space (2, 0) + span{(−1, 1)}.
• Point: (2, 0) (a particular solution)
• Direction: span{(−1, 1)} (the null space of the coefficient matrix)

Example: Plane in R³ not through origin

System: 2x − y + z = 3
Particular solution: (0, 0, 3) (set x = y = 0 and solve for z).
Null space of [2 −1 1]: solve 2x − y + z = 0. Let y = s, z = t; then x = (s − t)/2.
Direction space: span{(1, 2, 0), (−1, 0, 2)} (scaling the vectors (1/2, 1, 0) and (−1/2, 0, 1) by 2).
Solution set: (0, 0, 3) + span{(1, 2, 0), (−1, 0, 2)}
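A short numerical check that the particular solution and both direction vectors behave as claimed:

```python
import numpy as np

a = np.array([2., -1., 1.])     # coefficient row of 2x - y + z
p = np.array([0., 0., 3.])      # particular solution: a . p should equal 3
d1 = np.array([1., 2., 0.])     # direction vectors: a . d should equal 0
d2 = np.array([-1., 0., 2.])
print(a @ p, a @ d1, a @ d2)    # 3.0 0.0 0.0
```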

Example: Affine combinations

Points in affine space v0 + W can be written as:

v0 + c1 w1 + c2 w2 + · · · + ck wk

where {w1 , w2 , . . . , wk } is a basis for W .


Alternative characterization: Affine combinations of points v1 , v2 , . . . , vk :

t1 v1 + t2 v2 + · · · + tk vk where t1 + t2 + · · · + tk = 1

Note: Connection to linear algebra

If xp is any particular solution to Ax = b and N = Nul(A), then the complete solution


set is the affine space xp + N .

14 General and Particular Solutions


For the non-homogeneous system Ax = b:
• Particular solution xp : Any single solution satisfying Axp = b

• Homogeneous solutions: All solutions to Ax = 0 (the null space Nul(A))

• General solution: x = xp + xh where xh ∈ Nul(A)


Theorem. If Ax = b is consistent, then the solution set is:

{xp + n : n ∈ Nul(A)}

for any particular solution xp .

Example: Complete solution structure

Solve [1 2 −1; 0 0 0][x; y; z] = [3; 0]
Step 1: Find a particular solution. Set the free variables y = 0, z = 0: then x = 3, so xp = (3, 0, 0).
Step 2: Find the null space. Solve x + 2y − z = 0: x = −2y + z. General null space vector: (−2s + t, s, t) = s(−2, 1, 0) + t(1, 0, 1), so Nul(A) = span{(−2, 1, 0), (1, 0, 1)}.
Step 3: General solution:

x = (3, 0, 0) + s(−2, 1, 0) + t(1, 0, 1) = (3 − 2s + t, s, t)
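The structure xp + Nul(A) can be verified by checking several members of the family (a NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [0., 0.,  0.]])
b = np.array([3., 0.])
xp = np.array([3., 0., 0.])     # particular solution
n1 = np.array([-2., 1., 0.])    # null-space basis vectors
n2 = np.array([1., 0., 1.])
for s, t in [(0, 0), (1, 0), (0, 1), (2, -3)]:
    assert np.allclose(A @ (xp + s * n1 + t * n2), b)
print("xp + Nul(A) solves Ax = b for every (s, t) tried")
```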

Example: Verification of solution structure

Check that if x1 and x2 are two solutions to Ax = b, then x1 − x2 ∈ Nul(A):


A(x1 − x2 ) = Ax1 − Ax2 = b − b = 0
This shows that any two particular solutions differ by an element of the null space.

Example: Finding particular solutions systematically

For a system in RREF, set all free variables to zero:
[1 2 0 3 5; 0 0 1 1 −2; 0 0 0 0 0] x = (7, 4, 0)ᵀ, x ∈ R⁵
Set free variables x2 = x4 = x5 = 0:
• From row 2: x3 = 4
• From row 1: x1 = 7
Particular solution: (7, 0, 4, 0, 0)
The general solution adds the null space:

x = (7, 0, 4, 0, 0) + s(−2, 1, 0, 0, 0) + t(−3, 0, −1, 1, 0) + u(−5, 0, 2, 0, 1)

15 Advanced Topics and Applications
Example: Matrix equations AX = B

Solve [1 2; 3 4] X = [5 6; 7 8]
This is equivalent to solving two systems: AX1 = (5, 7)ᵀ and AX2 = (6, 8)ᵀ.
Solution: X = A⁻¹B, where A⁻¹ = (1/(−2))[4 −2; −3 1] = [−2 1; 3/2 −1/2]
X = [−2 1; 3/2 −1/2][5 6; 7 8] = [−3 −4; 4 5]
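NumPy solves AX = B for all columns at once, without forming the inverse explicitly (a sketch):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
X = np.linalg.solve(A, B)   # solves both columns of AX = B simultaneously
print(X)                    # [[-3. -4.] [ 4.  5.]]
```

Using `solve` rather than `inv(A) @ B` is the standard numerical practice: it is faster and better conditioned.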

Example: Least squares and normal equations

For an overdetermined system Ax = b (more equations than unknowns), the least squares solution satisfies:

AᵀAx = Aᵀb

Example: fit the line y = mx + c through the points (0, 1), (1, 2), (2, 2):
A = [0 1; 1 1; 2 1], b = (1, 2, 2)ᵀ
Normal equations: AᵀA = [5 3; 3 3], Aᵀb = (6, 5)ᵀ
Solving [5 3; 3 3][m; c] = [6; 5]: subtracting the second equation from the first gives 2m = 1, so m = 1/2 and then c = 7/6. The fitted line is y = x/2 + 7/6.
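The normal equations for this fit, solved in NumPy:

```python
import numpy as np

A = np.array([[0., 1.], [1., 1.], [2., 1.]])  # rows (x_i, 1)
b = np.array([1., 2., 2.])                    # y-values
m, c = np.linalg.solve(A.T @ A, A.T @ b)      # solve A^T A x = A^T b
print(m, c)   # 0.5 and 7/6 (about 1.1667)
```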

16 Quick Reference and Formulas


Key Theorems
• Rank-Nullity: rank(A) + nullity(A) = n (for m × n matrix A)

• Fundamental Subspaces: Col(A) ⊥ Nul(Aᵀ), Nul(A) ⊥ Col(Aᵀ)

• Invertible Matrix Theorem: A is invertible ⇔ rank(A) = n ⇔ det(A) ≠ 0

• Dimension Formula: dim(U + V ) = dim(U ) + dim(V ) − dim(U ∩ V )

Common Dimensions
• dim(Rn ) = n

• dim(Pn ) = n + 1

• dim(Mm×n ) = mn
• dim(symmetric n × n matrices) = n(n + 1)/2

Problem-Solving Checklist
1. Always check dimensions and compatibility

2. Use row reduction for systematic solutions

3. Verify answers by substitution

4. Remember geometric interpretations

5. Apply rank-nullity theorem for dimension checks

Study Tips: Practice with varied examples, visualize in low dimensions, and always verify
theoretical results with concrete computations. Master the connections between algebraic and
geometric viewpoints!

