MA2001 Cheatsheet (Midterms)

This cheatsheet covers linear systems, Gaussian elimination, matrices, and matrix operations. Key points: 1. Gaussian elimination solves systems of linear equations by transforming the augmented matrix into reduced row-echelon form (RREF) using elementary row operations. 2. Matrices represent linear systems and are classified by properties such as size, structure, and rank; elementary matrices represent elementary row operations. 3. Matrix operations include addition, subtraction, scalar multiplication, and matrix multiplication, and the determinant of a square matrix measures its invertibility.


Linear System
A linear system is made up of linear equations of the form
a1x1 + a2x2 + ⋯ + anxn = b, where a1, …, an ∈ ℝ.

Possible Solution Sets
Consistent system: a unique solution, or infinitely many solutions.
Inconsistent system: no solution.

Augmented Matrices
Given the system
a11x1 + a12x2 + ⋯ + a1nxn = b1
a21x1 + a22x2 + ⋯ + a2nxn = b2
⋮
am1x1 + am2x2 + ⋯ + amnxn = bm
we can represent it as the augmented matrix
[a11 a12 ⋯ a1n | b1]
[a21 a22 ⋯ a2n | b2]
[ ⋮   ⋮  ⋱  ⋮  |  ⋮]
[am1 am2 ⋯ amn | bm]

Elementary Row Operations (ERO)
Action                                      Symbol
Multiply a row by a nonzero constant        cRi
Interchange two rows                        Ri ↔ Rj
Add a multiple of one row to another row    Ri + cRj

Row Equivalence
Two matrices are row equivalent if one can be obtained from the other by a series of EROs.
If the augmented matrices of two linear systems are row equivalent, then the two systems have the same set of solutions.

Row-Echelon Form (REF)
Leading entry = first non-zero entry of a row.
1. Rows that consist entirely of zeros are grouped together at the bottom of the matrix.
2. In any two successive rows that do not consist entirely of zeros, the leading entry of the lower row occurs further to the right than the leading entry of the higher row. The leading entries need not be in consecutive columns.
3. Columns that contain leading entries are pivot columns.

Reduced Row-Echelon Form (RREF)
1. The leading entry of every nonzero row is 1.
2. Each column that contains a leading entry has zeros everywhere else (and is a pivot column).

Gaussian Elimination
1. Locate the leftmost non-zero column. Bring a nonzero entry to the top of the column if needed.
2. For each row below the top row, add a suitable multiple of the top row to it so that the entry below the leading entry of the top row becomes zero.
3. Repeat until REF is achieved.

Gauss-Jordan Elimination
4. Multiply each row by a suitable constant so that all the leading entries become 1.
5. Beginning with the last nonzero row and working upward, add a suitable multiple of each row to the rows above to introduce zeros above the leading entries, until RREF is achieved.

Solving Linear Systems Using GJE
Given the system
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3
applying GJE to the augmented matrix has 3 possible outcomes (a code sketch of the procedure follows the cases below).

Case 1: Unique Solution
# of nonzero rows = # of variables
[1 0 0 | α]
[0 1 0 | β]   α, β, γ ∈ ℝ
[0 0 1 | γ]
The system is consistent and has the unique solution x1 = α, x2 = β, x3 = γ.

Case 2: Infinitely Many Solutions
# of nonzero rows ≠ # of variables
# of parameters = # of variables − # of nonzero rows
[1 0 0 | α]
[0 1 0 | β]   α, β ∈ ℝ
[0 0 0 | 0]
The system is consistent and has the general solution x1 = α, x2 = β, x3 = s, where s is an arbitrary parameter.

Case 3: No Solution
[1 0 0 | α]
[0 1 0 | β]   α, β, γ ∈ ℝ, γ ≠ 0
[0 0 0 | γ]
The system is inconsistent (0 = γ) and has no solution, since the last column is a pivot column.
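Below is a minimal Python/NumPy sketch of the Gauss-Jordan procedure above (steps 1-5) applied to an augmented matrix. The rref function and the example system are illustrative assumptions, not part of the original sheet.

```python
# Minimal sketch of Gauss-Jordan elimination on an augmented matrix [A | b].
import numpy as np

def rref(aug, tol=1e-12):
    """Reduce an augmented matrix to reduced row-echelon form."""
    M = aug.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Step 1: find a usable nonzero entry in this column
        pivot = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[pivot, col]) < tol:
            continue                                    # no pivot in this column
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]   # Ri <-> Rj
        M[pivot_row] /= M[pivot_row, col]               # cRi: leading entry becomes 1
        for r in range(rows):                           # steps 2 & 5: clear the column
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]        # Rr + c*R(pivot_row)
        pivot_row += 1
    return M

# Example system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
aug = np.array([[1.0, 1.0, 1.0, 6.0],
                [0.0, 2.0, 5.0, -4.0],
                [2.0, 5.0, -1.0, 27.0]])
print(rref(aug))   # [[1 0 0 5], [0 1 0 3], [0 0 1 -2]] -> unique solution (5, 3, -2)
```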
Homogeneous Linear System (HLS)
a11x1 + a12x2 + ⋯ + a1nxn = 0
a21x1 + a22x2 + ⋯ + a2nxn = 0
⋮
am1x1 + am2x2 + ⋯ + amnxn = 0
1. A HLS has either only the trivial solution, or infinitely many solutions in addition to the trivial solution.
2. A HLS with more unknowns than equations has infinitely many solutions.

Matrices
An m × n matrix can be written generally as
A = [a11 a12 ⋯ a1n]
    [a21 a22 ⋯ a2n]
    [ ⋮   ⋮  ⋱  ⋮ ]
    [am1 am2 ⋯ amn]
A = (aij)m×n, where aij is the (i, j)-th entry of A, m is the number of rows and n the number of columns.

Equality of Matrices
Two matrices are equal if they have the same size and their corresponding entries are equal.

Basic Matrix Operations
Let A = (aij)m×n and B = (bij)m×n.
Addition: A + B = (aij + bij)m×n
Subtraction: A − B = (aij − bij)m×n
Scalar multiplication: cA = (c·aij)m×n

Advanced Matrix Operations
Matrix multiplication: let A = (aij)m×p and B = (bij)p×n. Then the (i, j)-th entry of AB is
(AB)ij = Σ (k = 1 to p) aik·bkj
• In general AB ≠ BA.
• AB is the pre-multiplication of A to B; BA is the post-multiplication of A to B.
• AB = 0 does not imply A = 0 or B = 0 (see the NumPy check below).

Powers of Matrices
Let A be a square matrix and n a non-negative integer.
A^0 = I, A^n = AA⋯A (n times) for n ≥ 1, and A^(−n) = (A^(−1))^n.
A^r · A^s = A^(r+s)

Laws
1. A + B = B + A
2. A + (B + C) = (A + B) + C
3. a(A + B) = aA + aB
4. (a + b)A = aA + bA
5. a(bA) = (ab)A = b(aA)
6. A(BC) = (AB)C
7. A(B + B′) = AB + AB′
8. (C + C′)A = CA + C′A
9. a(AB) = (aA)B = A(aB)
10. A(m×n)·0(n×q) = 0(m×q) and 0(p×m)·A(m×n) = 0(p×n)
11. A(m×n)·In = Im·A(m×n) = A(m×n)

Transpose of a Matrix
The transpose of an m × n matrix A, denoted Aᵀ (or Aᵗ), is the matrix obtained by interchanging the rows and columns of A.

Laws of Transposition
(Aᵀ)ᵀ = A
(A + B)ᵀ = Aᵀ + Bᵀ
(AB)ᵀ = BᵀAᵀ
det(Aᵀ) = det(A)
(Aᵀ)⁻¹ = (A⁻¹)ᵀ
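A quick NumPy check of the multiplication and transposition rules above; the specific matrices are arbitrary examples chosen for illustration, not taken from the sheet.

```python
# Quick NumPy check of the rules above; the matrices are arbitrary examples.
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)                                  # [[2 1], [4 3]]
print(B @ A)                                  # [[3 4], [1 2]]  -> AB != BA in general
print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T

# AB = 0 does not imply A = 0 or B = 0:
C = np.array([[1, 0],
              [0, 0]])
D = np.array([[0, 0],
              [0, 1]])
print(C @ D)                                  # [[0 0], [0 0]] although C != 0 and D != 0
```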
Types of Matrices
1. Square Matrix
A matrix is a square matrix if it has the same number of rows and columns: an n × n matrix, or a matrix of order n. Its diagonal entries are a11, a22, …, ann.

2. Diagonal Matrix
A diagonal matrix is a square matrix with all non-diagonal entries equal to 0.
A = (aij)n×n is diagonal ↔ aij = 0 when i ≠ j
[a11  0  ⋯  0 ]
[ 0  a22 ⋯  0 ]
[ ⋮   ⋮  ⋱  ⋮ ]
[ 0   0  ⋯ ann]

3. Scalar Matrix
A scalar matrix is a diagonal matrix with all diagonal entries the same.
A = (aij)n×n is scalar ↔ aij = 0 if i ≠ j and aij = c if i = j, where c is a constant ∈ ℝ
[c 0 ⋯ 0]
[0 c ⋯ 0]
[⋮ ⋮ ⋱ ⋮]
[0 0 ⋯ c]

4. Identity Matrix
An identity matrix is a scalar matrix with all diagonal entries equal to 1.
A = In = (aij)n×n is an identity matrix ↔ aij = 0 if i ≠ j and aij = 1 if i = j
[1 0 ⋯ 0]
[0 1 ⋯ 0]
[⋮ ⋮ ⋱ ⋮]
[0 0 ⋯ 1]

5. Zero Matrix
A matrix is a zero matrix if all its entries are 0, written 0m×n.

6. Symmetric Matrix
A symmetric matrix is a square matrix whose entries are reflected along the diagonal.
A = (aij)n×n is symmetric ↔ aij = aji for all i, j

7. Upper/Lower Triangular Matrix
An upper (lower) triangular matrix is a square matrix with all entries below (above) the diagonal equal to 0.
A = (aij)n×n is upper triangular ↔ aij = 0 for all i > j
A = (aij)n×n is lower triangular ↔ aij = 0 for all j > i

Elementary Matrices (EM)
An elementary matrix is a square matrix obtained from an identity matrix by a single ERO.

Types of Elementary Matrices
Case 1: Multiply a row by a non-zero constant, cRi. E.g.
[1 0 0]
[0 c 0]   multiplying the second row by c
[0 0 1]
Case 2: Interchange two rows, Ri ↔ Rj. E.g.
[1 0 0]
[0 0 1]   interchanging the second and third rows
[0 1 0]
Case 3: Add a multiple of one row to another row, Rj + cRi. E.g.
[1 0 0]
[0 1 3]   adding 3 × the third row to the second row
[0 0 1]

Inverse of a Matrix
The inverse of an n × n matrix A is written A⁻¹.
A is invertible if there exists B such that AB = In and BA = In; then A⁻¹ = B and B⁻¹ = A.
A is singular if for all B, AB ≠ In and BA ≠ In.
A is invertible ↔ det(A) ≠ 0.

Laws of Inversion
AB = AB′ → B = B′ (when A is invertible)
(A⁻¹)⁻¹ = A
(aA)⁻¹ = (1/a)A⁻¹
(AB)⁻¹ = B⁻¹A⁻¹
(Aᵀ)⁻¹ = (A⁻¹)ᵀ

Ways to Find the Inverse of a Matrix
1. Gauss-Jordan elimination of [A | I] → [I | A⁻¹]
2. Using the adjoint and the determinant (see the next page)

Using Gauss-Jordan Elimination to Find the Inverse
If A is invertible, then [A | I] —GJE→ [I | A⁻¹] (see the code sketch below).

Invertible Matrices as a Product of EMs
Suppose rref(A) = I. Then there exist elementary matrices E1, E2, …, Ek such that Ek⋯E2E1A = I (pre-multiplication).
Hence A = E1⁻¹E2⁻¹⋯Ek⁻¹I = E1⁻¹E2⁻¹⋯Ek⁻¹.
Matrices that are a product of EMs are invertible.

Properties of Invertible Matrices
If A is invertible:
1. Ax = 0 has only the trivial solution.
2. The RREF of A is an identity matrix and the REF of A has no zero row.
3. A can be expressed as a product of elementary matrices.
4. The determinant of A is not 0.
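A minimal NumPy sketch of finding A⁻¹ by row-reducing [A | I] to [I | A⁻¹], as described above. The gj_inverse helper and the 3 × 3 matrix are assumptions for illustration only.

```python
# Minimal sketch of finding A^{-1} by reducing [A | I] to [I | A^{-1}].
import numpy as np

def gj_inverse(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # form [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # pick a usable pivot
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("A is singular (det A = 0), no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]              # Ri <-> Rj
        M[col] /= M[col, col]                          # cRi: leading entry becomes 1
        for r in range(n):                             # Rr + c*Rcol: clear the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                    # right half is A^{-1}

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
print(gj_inverse(A))
print(np.linalg.inv(A))    # should agree, up to floating-point error
```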
Determinant
Let A = (aij) be an n × n matrix, and let Mij be the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th row and the j-th column.
det(A) = |A| = a11 if n = 1
det(A) = |A| = a11A11 + a12A12 + ⋯ + a1nA1n if n > 1
where Aij = (−1)^(i+j)·det(Mij) is called the (i, j)-cofactor of A.

Standard Determinants
Case 1: 2 × 2 matrix A = [a b; c d]: |A| = ad − bc
Case 2: 3 × 3 matrix A = [a b c; d e f; g h i]: |A| = a(ei − fh) − b(di − fg) + c(dh − eg)
Case 3: Identity matrix: if A is an identity matrix, then its determinant is 1.
Case 4: Triangular matrix: if A is a triangular matrix, then |A| is the product of the diagonal entries of A.
Case 5: At least 2 identical rows or columns: the determinant of a square matrix with two identical rows/columns is zero.

Cofactor Expansion
Pick any row or column with the easiest numbers and expand along it (see the code sketch below). E.g. for
A = [a11 a12 a13]
    [a21 a22 a23]
    [a31 a32 a33]
expanding along the first column, with
M11 = [a22 a23; a32 a33], M21 = [a12 a13; a32 a33], M31 = [a12 a13; a22 a23],
|A| = a11·(−1)^(1+1)·det(M11) + a21·(−1)^(2+1)·det(M21) + a31·(−1)^(3+1)·det(M31)

How Does an ERO Change the Determinant?
A —ERO→ B          Change in determinant
cRi                det(B) = c·det(A)
Ri ↔ Rj            det(B) = −det(A)
Ri + cRj           det(B) = det(A)

Properties of Determinants
det(AB) = det(A)·det(B)
det(aA) = aⁿ·det(A), where n is the order of A
det(A⁻¹) = 1/det(A)
A⁻¹ = (1/det(A))·adj(A)
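A minimal Python sketch of the recursive cofactor (Laplace) expansion defined above, expanding along the first row; the function name and example matrix are assumptions, with a library call shown only for comparison.

```python
# Minimal sketch of cofactor expansion along the first row.
import numpy as np

def det_cofactor(A):
    n = len(A)
    if n == 1:
        return A[0][0]                       # base case: det([a11]) = a11
    total = 0.0
    for j in range(n):
        # M_1j: delete row 1 and column j+1
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        cofactor = (-1) ** j * det_cofactor(minor)   # (-1)^(1+(j+1)) = (-1)^j
        total += A[0][j] * cofactor
    return total

A = [[2.0, 1.0, 3.0],
     [0.0, 4.0, 5.0],
     [1.0, 0.0, 6.0]]
print(det_cofactor(A))               # 2*(24-0) - 1*(0-5) + 3*(0-4) = 41
print(np.linalg.det(np.array(A)))    # same value, up to floating-point error
```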
Adjoint/Adjugate
Let
A = [a11 a12 ⋯ a1n]
    [a21 a22 ⋯ a2n]
    [ ⋮   ⋮  ⋱  ⋮ ]
    [an1 an2 ⋯ ann]
Then
adj(A) = [A11 A12 ⋯ A1n]ᵀ   [A11 A21 ⋯ An1]
         [A21 A22 ⋯ A2n]  = [A12 A22 ⋯ An2]
         [ ⋮   ⋮  ⋱  ⋮ ]    [ ⋮   ⋮  ⋱  ⋮ ]
         [An1 An2 ⋯ Ann]    [A1n A2n ⋯ Ann]
where Aij = (−1)^(i+j)·det(Mij) is the (i, j)-cofactor of A.
***DON'T FORGET THE TRANSPOSE***

Calculating the Adjoint
Compute every cofactor Aij, arrange them into the matrix of cofactors, then transpose.

Using the Adjoint to Find the Inverse
If det(A) ≠ 0, then A⁻¹ = (1/det(A))·adj(A) (see the code sketch below).
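A minimal NumPy sketch of adj(A) and A⁻¹ = adj(A)/det(A) using the cofactor definition above; the adjugate helper and the 3 × 3 matrix are illustrative assumptions.

```python
# Minimal sketch of the adjugate and A^{-1} = adj(A) / det(A).
import numpy as np

def adjugate(A):
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # M_ij
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)     # (i,j)-cofactor
    return cof.T                          # don't forget the transpose!

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
A_inv = adjugate(A) / np.linalg.det(A)
print(A_inv)
print(np.linalg.inv(A))                   # should agree
```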
Solving Linear Systems Using Cramer's Rule
For a square system Ax = b with det(A) ≠ 0, each unknown is a ratio of determinants:
xi = det(Ai) / det(A), for i = 1, …, n,
where Ai is the matrix A with its i-th column replaced by b (see the code sketch below).
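A minimal NumPy sketch of Cramer's rule xi = det(Ai)/det(A); the cramer helper and the 2 × 2 system are arbitrary examples, not taken from the sheet.

```python
# Minimal sketch of Cramer's rule for a small square system.
import numpy as np

def cramer(A, b):
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                        # replace column i of A with b
        x[i] = np.linalg.det(Ai) / d
    return x

# Example: 2x + y = 5, x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer(A, b))                         # [1. 3.]
print(np.linalg.solve(A, b))                # same answer
```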
Vectors
A vector is a directed line segment. Vectors have 1) direction and 2) magnitude.
Two vectors are equal iff their directions and magnitudes are equal.

Vector Operations
Equality: u = v iff ui = vi for all i
Addition: u + v = (u1 + v1, u2 + v2, …, un + vn)
Scalar multiplication: cu = (cu1, cu2, …, cun)
Negative: −u = (−u1, −u2, …, −un)
Subtraction: u − v = (u1 − v1, u2 − v2, …, un − vn)

Laws
u + v = v + u
u + (v + w) = (u + v) + w
u + 0 = 0 + u = u
u + (−u) = 0
a(bu) = (ab)u
a(u + v) = au + av
(a + b)u = au + bu
1u = u

Using Vectors to Represent Solution Sets
Case 1: Line in R²
Implicit: {(x, y) | ax + by = c}
Explicit: {((c − bt)/a, t) | t in R} if a ≠ 0, or {(t, (c − at)/b) | t in R} if b ≠ 0

Case 2: Plane in R³
Implicit: {(x, y, z) | ax + by + cz = d}
Explicit: {((d − bs − ct)/a, s, t) | s, t in R} if a ≠ 0,
          {(s, (d − as − ct)/b, t) | s, t in R} if b ≠ 0,
          {(s, t, (d − as − bt)/c) | s, t in R} if c ≠ 0

Case 3: Line in R³
Explicit: {(a0, b0, c0) + t(a, b, c) | t in R}
(a0, b0, c0) is a point on the line and (a, b, c) is the direction of the line, where a0, b0, c0, a, b, c are real constants and a, b, c are not all zero.

Types of Sets of Vectors in a Space
Space: The set of all n-vectors of real numbers is called the Euclidean n-space, denoted Rⁿ. u is in Rⁿ iff u = (u1, u2, …, un) for some u1, u2, …, un in R.
Subset: A collection of vectors in Rⁿ.
Span: Let S = {u1, u2, …, uk} be a subset of Rⁿ. The subspace V = {c1u1 + c2u2 + ⋯ + ckuk | c1, c2, …, ck in R} of Rⁿ is called the space spanned by S (or by u1, u2, …, uk). 0 must be in span(S) for it to be a span. Equivalent statements: V = span(S); V = span{u1, u2, …, uk}; S spans V; u1, u2, …, uk span V.
Subspace: Let S = {u1, u2, …, uk} be a subset of Rⁿ. V is a subspace of Rⁿ if V = span(S); V is the subspace spanned by S, and S spans the subspace V.

Linear Combination
A vector v is a linear combination of the vectors u1, u2, …, uk if v = c1u1 + c2u2 + ⋯ + ckuk for some scalars c1, c2, …, ck.

Determine if a Vector is a Linear Combination
Write v = c1u1 + c2u2 + ⋯ + ckuk and compare entries:
c1(u11, u12, …, u1n) + c2(u21, u22, …, u2n) + ⋯ + ck(uk1, uk2, …, ukn) = (v1, v2, …, vn)
This is a linear system in the coefficients c1, c2, …, ck with augmented matrix
[u11 u21 ⋯ uk1 | v1]
[u12 u22 ⋯ uk2 | v2]
[ ⋮   ⋮  ⋱  ⋮  |  ⋮]
[u1n u2n ⋯ ukn | vn]
which can be solved with GJE (see the sketch below).
If the system is consistent, then v is a linear combination of u1, u2, …, uk, and the solution gives the values of the coefficients c1, c2, …, ck.
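A minimal NumPy sketch of the check above: v is a linear combination of u1, u2 exactly when the system is consistent, i.e. rank(U) = rank([U | v]). The vectors used are arbitrary examples.

```python
# Minimal sketch of testing whether v is a linear combination of u1, u2.
import numpy as np

u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 1.0])
v  = np.array([3.0, 4.0, 10.0])

U = np.column_stack([u1, u2])            # columns are the u_i
aug = np.column_stack([U, v])            # augmented matrix [U | v]

# The system U c = v is consistent iff rank(U) = rank([U | v]).
if np.linalg.matrix_rank(U) == np.linalg.matrix_rank(aug):
    c, *_ = np.linalg.lstsq(U, v, rcond=None)   # coefficients c1, c2
    print("v is a linear combination:", c)      # [3. 4.]
else:
    print("v is NOT a linear combination of u1, u2")
```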
Spans and Linear Combinations
The vectors S = {u1, u2, …, uk} span Rⁿ iff every vector in Rⁿ is a linear combination of u1, u2, …, uk.
Any linear combination of vectors in Rⁿ is again in Rⁿ.
If k < n, then S cannot span Rⁿ: the cardinality of S cannot be less than n (a rank-based check is sketched at the end of this page).

Spans and Subsets
Let S1 = {u1, u2, …, uk} and S2 = {v1, v2, …, vm} be subsets of vectors in Rⁿ.
span(S1) ⊆ span(S2) iff each ui is a linear combination of v1, v2, …, vm.
span(S2) ⊆ span(S1) iff each vi is a linear combination of u1, u2, …, uk.
* span(S1) = span(S2) iff span(S1) ⊆ span(S2) and span(S2) ⊆ span(S1).
Example: to show span{u1, u2, u3} ⊆ span{v1, v2}, write every ui as a linear combination of v1 and v2; if the resulting system is consistent, then span{u1, u2, u3} ⊆ span{v1, v2}.

Redundant Vectors
If span{u1, u2, …, uk+1} = span{u1, u2, …, uk}, that is, removing uk+1 has no effect on the span of the set, then uk+1 is a redundant/useless vector.

Types of Spans
Case 1: A single vector in R²
The span of a single vector u = (u1, u2) in R² is a line: span{u} = {cu | c in R} = {(x, y) | ax + by = 0}.
Case 2: A single vector in R³
The span of a single vector u = (u1, u2, u3) in R³ is a line: span{u} = {cu | c in R} = {(cu1, cu2, cu3) | c in R}.
Case 3: Two vectors in R²
If u and v are not parallel (linearly independent), then span{u, v} is the plane R² itself.
Case 4: Two vectors in R³
The span of two non-parallel vectors u = (u1, u2, u3) and v = (v1, v2, v3) is a plane: span{u, v} = {su + tv | s, t in R} = {(x, y, z) | ax + by + cz = 0}.

Subspaces
Let S = {u1, u2, …, uk} be a subset of Rⁿ. The subspace span(S) satisfies the following conditions:
1. 0 is in span(S).
2. For any v1, v2, …, vr in span(S) and c1, c2, …, cr in R, c1v1 + c2v2 + ⋯ + crvr is in span(S).
* {0} is the zero space and is a subspace too.

Determining if V is a Subspace
1) Can we write V as a span, i.e. as the set of linear combinations of some vectors?
2) Does V contain the origin?
Both answers must be YES; to show that V is not a subspace, at least one answer has to be NO. Typical non-examples fail condition 1 (not expressible as a span) or condition 2 (do not contain the origin).

Subspaces in Rⁿ
All the subspaces of R²: {0}, lines through the origin, and R² itself.
All the subspaces of R³: {0}, lines through the origin, planes containing the origin, and R³ itself.

Subspace as a Solution Set of a HLS
The solution set (solution space) of a homogeneous system of linear equations in n variables is a subspace of Rⁿ.
Case 1: Only the trivial solution — the solution space is the zero space {0}.
Case 2: Infinitely many solutions — given that x1, x2, …, xn are the variables of the system, the solution space is the set of all solution vectors (x1, x2, …, xn), written as a span of the vectors attached to the parameters in the general solution.

Lines and Planes in R³
To get a line in R³, we need one point and the span of one direction vector:
L = {x + w | w ∈ L0} = {x + w | w ∈ span{u}} = {x + tu | t ∈ ℝ}
To get a plane in R³, we need one point and the span of two direction vectors:
P = {x + w | w ∈ P0} = {x + w | w ∈ span{u, v}} = {x + su + tv | s, t ∈ ℝ}
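The k < n rule above has a standard computational counterpart that is not stated on the sheet: S = {u1, …, uk} spans Rⁿ exactly when the matrix with the ui as columns has rank n. A minimal NumPy sketch with arbitrary example vectors:

```python
# Minimal sketch: S spans R^n iff the matrix whose columns are the vectors of S
# has rank n (standard criterion, assumed here; not stated on the sheet).
import numpy as np

def spans_Rn(vectors, n):
    U = np.column_stack(vectors)         # n x k matrix, columns are the vectors
    return np.linalg.matrix_rank(U) == n

u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = np.array([1.0, 1.0, 2.0])           # u3 = u1 + u2, a redundant vector

print(spans_Rn([u1, u2], 3))             # False: k = 2 < n = 3
print(spans_Rn([u1, u2, u3], 3))         # False: u3 is redundant, rank is still 2
print(spans_Rn([u1, u2, np.array([0.0, 0.0, 1.0])], 3))   # True
```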
