Basic Linear Algebra Review
Vector Operations
• Vector addition is a component-wise operation. Two vectors v and w may be added together as
long as they contain the same number n of components. Their sum is another vector, whose ith
component is the sum of the ith components of v and w:
[1, 2] + [3, 4] = [1 + 3, 2 + 4]
• Vectors may be multiplied by a scalar. The product cv is a vector, whose ith component is the
product of c and the ith component of v:
5[1, 2] = [(5)(1), (5)(2)]
• A linear combination of vectors is a sum of scalar multiples of vectors (with the same number of
components):
5[1, 2] + 6[4, 0] − 3[−1, 6] + [4, −3]
• The dot product of two vectors is a scalar, and is only defined when the vectors have the same
number of components. It is the sum of the products of the ith components of the vectors:
[1, 2] · [3, 4] = (1)(3) + (2)(4)
• The length of a vector is a scalar. It is found by taking the square root of the sum of the squares
of the vector’s components:
||[1, 2, 3]|| = sqrt(1^2 + 2^2 + 3^2)
• A unit vector is a vector of length one. For example, [1, 0, 0] and [1/√2, 1/√2] are unit vectors.
• The angle θ between two vectors v and w is related to their lengths and dot products by the
formula
cos(θ) = (v · w) / (||v|| ||w||)
• Two vectors are perpendicular (also called orthogonal) if they have a dot product of 0.
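These vector operations are easy to check numerically. The sketch below uses NumPy (an assumption; any linear algebra library works the same way):

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])

total = v + w                 # component-wise sum: [1+3, 2+4]
scaled = 5 * v                # scalar multiple: [(5)(1), (5)(2)]
dot = np.dot(v, w)            # (1)(3) + (2)(4) = 11
length = np.linalg.norm(v)    # sqrt(1^2 + 2^2)

# Angle between v and w from cos(theta) = v.w / (||v|| ||w||)
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(w))
theta = np.arccos(cos_theta)

# Perpendicular (orthogonal) vectors have dot product 0
perpendicular = np.dot([1.0, 0.0], [0.0, 1.0]) == 0
```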
Matrix Operations
• Matrix addition is a component-wise operation. Two matrices A and B may be added together
as long as they have the same number of rows and columns. Their sum is another matrix, whose
(i, j)th entry is the sum of the (i, j)th entries of A and B:
(writing matrices row by row, with rows separated by semicolons)
[1 2; 3 4] + [5 6; 7 8] = [1+5 2+6; 3+7 4+8]
• Matrices may be multiplied by a scalar. The product cA is a matrix, whose (i, j)th entry is the
product of c and the (i, j)th entry of A:
5[1 2; 3 4] = [(5)(1) (5)(2); (5)(3) (5)(4)]
• Matrices may be multiplied as long as their dimensions are compatible. The product AB is defined
when A has n columns and B has n rows. The (i, j)th entry of AB is the dot product of the ith
row of A and the jth column of B:
[1 2 3; 4 5 6][7 8; 9 10; 11 12]
= [(1)(7)+(2)(9)+(3)(11) (1)(8)+(2)(10)+(3)(12); (4)(7)+(5)(9)+(6)(11) (4)(8)+(5)(10)+(6)(12)]
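These rules can be confirmed with a quick NumPy sketch (the `@` operator performs matrix multiplication):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B        # entry-wise sum
C = 5 * A        # every entry multiplied by 5

# (2 x 3)(3 x 2): inner dimensions match, so the product is 2 x 2
L = np.array([[1, 2, 3], [4, 5, 6]])
R = np.array([[7, 8], [9, 10], [11, 12]])
P = L @ R        # (i, j) entry = dot of row i of L with column j of R
```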
• The identity matrix I_n is an n × n matrix whose diagonal entries are 1, and whose remaining entries are 0. It is the multiplicative identity, because if A is any m × n matrix, A I_n = I_m A = A. When the size n is understood, we can simply write I for the identity matrix.
• A zero matrix 0 is a matrix with every entry 0. It can be any size, and not necessarily square. A
zero matrix is never invertible. It is the additive identity, since A + 0 = 0 + A = A for any matrix
A.
• An elementary matrix E is a square matrix which performs a row operation on a matrix A. There
are three types of elementary matrices, all of which are invertible:
1. E can interchange two rows. An elementary matrix of this type is obtained from the identity
by interchanging two of its rows. Such a matrix E is equal to its own inverse:
E = [1 0 0; 0 0 1; 0 1 0]    E^(-1) = [1 0 0; 0 0 1; 0 1 0]
2. E can multiply a row by a nonzero constant c. An elementary matrix of this type is obtained from the identity by multiplying one of its rows by c. Its inverse is obtained from the identity by multiplying one of its rows by 1/c:
E = [1 0 0; 0 c 0; 0 0 1]    E^(-1) = [1 0 0; 0 1/c 0; 0 0 1]
3. E can add a multiple of one row to another. An elementary matrix of this type is obtained from the identity by making any off-diagonal entry c ≠ 0. Its inverse is obtained from the identity by making that same off-diagonal entry −c:
E = [1 0 0; 0 1 c; 0 0 1]    E^(-1) = [1 0 0; 0 1 −c; 0 0 1]
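All three types can be built from the identity and checked to act by left multiplication (a NumPy sketch):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # sample 3x3 matrix
I = np.eye(3)

E_swap = I[[0, 2, 1]]                  # type 1: interchange rows 1 and 2 (0-indexed)
E_scale = np.diag([1.0, 5.0, 1.0])     # type 2: multiply row 1 by c = 5
E_add = np.eye(3)
E_add[0, 2] = 4.0                      # type 3: add 4 * row 2 to row 0

swapped = E_swap @ A                   # same as exchanging rows 1 and 2 of A
swap_inv = np.linalg.inv(E_swap)       # a type-1 matrix is its own inverse
scale_inv = np.linalg.inv(E_scale)     # diag(1, 1/5, 1)
add_inv = np.linalg.inv(E_add)         # same matrix with that entry equal to -4
```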
• A diagonal matrix is one whose main diagonal entries can take any value, but whose entries off
the main diagonal are 0. It is invertible if all diagonal entries are nonzero.
• An upper triangular matrix is one whose main diagonal entries, as well as any entries above the
diagonal, can take any value, but whose entries below the main diagonal are 0.
• A lower triangular matrix is one whose main diagonal entries, as well as any entries below the
diagonal, can take any value, but whose entries above the main diagonal are 0.
• A symmetric matrix is one that is equal to its own transpose, A = A^T. These can have any value on the main diagonal, but entries off the main diagonal must be paired: the (i, j)th entry must be equal to the (j, i)th entry.
• A permutation matrix is a square matrix such that every row and column contains exactly one
entry of 1 and zeros elsewhere. There are n! permutation matrices of size n × n.
• The inverse of a square matrix A is the unique matrix A^(-1) such that AA^(-1) = A^(-1)A = I.
• Not all matrices are invertible. If A has any of the following properties, it is NOT invertible:
1. A is not square
2. A has determinant 0
3. the rows or columns of A are not linearly independent
4. the equation Ax = 0 has more than one solution
5. A has at least one eigenvalue of 0
• Suppose A is an n × n matrix. To find the inverse of A (or determine that it doesn't have one), reduce the augmented matrix [A | I_n] to reduced row echelon form via row operations. If this augmented matrix can be put in the form [I_n | B], then B = A^(-1). Otherwise, A^(-1) does not exist.
For example:
[1 2 | 1 0; 3 4 | 0 1] → [1 2 | 1 0; 0 −2 | −3 1] → [1 0 | −2 1; 0 −2 | −3 1] → [1 0 | −2 1; 0 1 | 3/2 −1/2]
so [1 2; 3 4] has inverse [−2 1; 3/2 −1/2].
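The reduction of [A | I] translates into a short routine. This is an illustrative sketch with partial pivoting, not a library implementation:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^-1]; raises if A is not invertible."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry into the pivot spot
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                   # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # clear the rest of the column
    return M[:, n:]

A = np.array([[1.0, 2.0], [3.0, 4.0]])
Ainv = gauss_jordan_inverse(A)   # [-2 1; 3/2 -1/2], as in the example above
```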
• The inverse of a 2 × 2 matrix A = [a b; c d] can be found by the formula
A^(-1) = (1/(ad − bc)) [d −b; −c a]
• Diagonal or triangular matrices are invertible when all of their diagonal entries are nonzero.
• Permutation matrices are always invertible, and their inverse is equal to their transpose (they are
orthogonal).
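Both of these facts are easy to confirm numerically. In the NumPy sketch below, `inv2x2` is a hypothetical helper implementing the 2 × 2 formula:

```python
import numpy as np

def inv2x2(M):
    # Hypothetical helper: A^-1 = (1/(ad - bc)) [d -b; -c a]
    (a, b), (c, d) = M
    return np.array([[d, -b], [-c, a]]) / (a * d - b * c)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
Ainv = inv2x2(A)                 # agrees with np.linalg.inv(A)

# A permutation matrix is orthogonal: its inverse equals its transpose
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
check = P @ P.T                  # should be the identity
```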
Systems of Equations
• A system can also be represented by an augmented matrix. The matrix A above is augmented
with the vector b:
2x + 3y = 3
4x + y + 3z = 6        ⇒    [2 3 0 | 3; 4 1 3 | 6; 1 −2 −4 | −1]
x − 2y − 4z = −1
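Since the coefficient matrix of this particular system happens to be invertible, it can be solved directly (a NumPy sketch):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above
A = np.array([[2.0, 3.0, 0.0],
              [4.0, 1.0, 3.0],
              [1.0, -2.0, -4.0]])
b = np.array([3.0, 6.0, -1.0])

x = np.linalg.solve(A, b)   # unique solution because det(A) != 0
residual = A @ x - b        # numerically zero
```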
• After reducing the augmented matrix [A | b] to reduced row echelon form (RREF), one of three results is possible:
1. The system has no solution. This happens when the RREF contains a row of the form
[0 0 0 · · · 0 | c]
where c ≠ 0.
2. The system has a unique solution. This happens when the rank of the matrix A is equal to
the number of variables in the system. The following examples are reduced echelon forms of
augmented matrices with exactly one solution:
[1 0 0 | 2; 0 1 0 | 3; 0 0 1 | 5],    [1 0 0 | 2; 0 1 0 | 0; 0 0 1 | 5; 0 0 0 | 0]
The solutions to these systems are [2, 3, 5], and [2, 0, 5], respectively.
3. There are infinitely many solutions. This happens when the rank of the matrix A is less than
the number of variables. The following examples are reduced echelon forms of augmented
matrices with infinitely many solutions:
[1 0 1 | 4; 0 1 2 | 3],    [1 0 1 | 4; 0 1 2 | 3; 0 0 0 | 0]
Setting the free variable x3 = c, both systems have solutions
[x1, x2, x3] = [4 − c, 3 − 2c, c]
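The rank test for these cases can be checked numerically, and every choice of the free variable c can be verified to solve the second system (a NumPy sketch):

```python
import numpy as np

# RREF with rank 3 = number of variables: unique solution
A_unique = np.eye(3)
# RREF with rank 2 < 3 variables: infinitely many solutions
A_free = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 2.0]])
b_free = np.array([4.0, 3.0])

rank_unique = np.linalg.matrix_rank(A_unique)
rank_free = np.linalg.matrix_rank(A_free)

# The whole family [4 - c, 3 - 2c, c] solves A_free x = b_free
ok = all(np.allclose(A_free @ np.array([4 - c, 3 - 2 * c, c]), b_free)
         for c in [0.0, 1.0, -2.5])
```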
• A vector space V is a set of vectors, together with the operations of addition and scalar multiplication, such that the following properties hold: V is closed under addition and scalar multiplication; addition is commutative and associative; V contains a zero vector 0 and an additive inverse −v for each v; and scalar multiplication is associative and distributes over vector and scalar addition, with 1v = v.
• A set of vectors is called linearly independent if no vector in the set can be written as a linear
combination of other vectors in the set. A set of n vectors with m components cannot be linearly
independent if n > m. If n = m, we can test for independence by forming a matrix A from the
vectors, where the vectors are either the rows or columns of A. If A is invertible, then the set is
independent.
• A set of vectors is called a spanning set for a space V if every vector in V can be written as a
linear combination of the vectors in the set. The set is not required to be linearly independent,
but it is a subset of V .
• Every basis of a space V contains the same number of vectors. This number is called the dimension
of the space.
• A spanning set contains at least as many vectors as a basis. It can be larger, but a basis is the smallest spanning set possible.
Ex: {[1, 2], [1, 3]} is a basis for R^2. It contains two linearly independent vectors.
Ex: {[1, 2], [1, 3], [1, 4]} is a spanning set for R^2. It spans R^2, but is not linearly independent. (It's too big!)
Ex: {[1, 2]} is neither a spanning set nor a basis for R^2. It is linearly independent, but it does not span. (It's too small!)
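The invertibility/rank test described above can be applied to these three example sets (a NumPy sketch, with the vectors as columns):

```python
import numpy as np

basis = np.array([[1.0, 1.0],
                  [2.0, 3.0]])          # columns [1,2] and [1,3]
too_big = np.array([[1.0, 1.0, 1.0],
                    [2.0, 3.0, 4.0]])   # columns [1,2], [1,3], [1,4]
too_small = np.array([[1.0],
                      [2.0]])           # single column [1,2]

rank_basis = np.linalg.matrix_rank(basis)      # 2: independent and spans R^2
rank_big = np.linalg.matrix_rank(too_big)      # 2: spans, but 3 > 2 vectors can't be independent
rank_small = np.linalg.matrix_rank(too_small)  # 1: independent, but doesn't span R^2
```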
Some Important Vector Spaces
• The real numbers are a vector space, with dimension 1. Any set containing one nonzero real number is a basis.
• The set of vectors with n components, each a real number, is denoted R^n. This is a vector space with dimension n. Any set of n linearly independent vectors with n components is a basis of R^n. The standard basis includes n vectors, each with a single entry of 1 in a different position, and 0's elsewhere. For instance, below is a basis for R^3:
{[1, 0, 0], [0, 1, 0], [0, 0, 1]}
• The set of m × n matrices, denoted Mm,n , is a vector space with dimension mn. The standard
basis includes mn matrices of size m × n, each containing an entry of 1 in a different location and
0’s elsewhere. For instance, below is a basis for the 2 × 3 matrices:
[1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0], [0 0 0; 1 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1]
• The set of all polynomials of degree at most n is a vector space with dimension n + 1. The standard basis contains the monomials x^d for d = 0, . . . , n. For example, the space of polynomials with degree at most 5 has standard basis
{1, x, x^2, x^3, x^4, x^5}
• Every vector space has the subspace {0}, containing only the 0 vector. Its dimension is 0.
• The nullspace of a matrix A is the set of all solutions x to the equation Ax = 0. If A is invertible, the nullspace is {0}. Otherwise, it contains infinitely many vectors, including the zero vector.
• The column space of a matrix A is the set of all linear combinations of the columns of A. If A
has m rows, each of these vectors has m components.
• The row space of a matrix A is the set of all linear combinations of the rows of A. If A has n
columns, each of these vectors has n components.
• The set of invertible n × n matrices is a subset of Mn,n , but not a subspace, because it does not
contain the 0 matrix. It is also not closed under addition: I and −I are both in the set, but
I + (−I) = 0 is not.
• The set of vectors of the form [a, 1] is a subset of R^2, but not a subspace. It is not closed under scalar multiplication, since 5[3, 1] = [15, 3] is not in the set. It also does not contain 0, and is not closed under addition.
• A line in R2 through the origin does define a subspace. A line that does not pass through the
origin is not a subspace.
Determinants
• The determinant of any n×n matrix A can be found by cofactor (also known as Laplace) expansion.
Let M_i,j denote the submatrix obtained from A by removing the ith row and jth column. Let C_i,j denote the cofactor
C_i,j = (−1)^(i+j) det(M_i,j)
Expanding along any row k, we obtain the formula
det(A) = a_k,1 C_k,1 + a_k,2 C_k,2 + · · · + a_k,n C_k,n
where a_k,j is the entry of A in row k and column j. We can also expand along any column k by the formula
det(A) = a_1,k C_1,k + a_2,k C_2,k + · · · + a_n,k C_n,k
No matter which row or column we use for the expansion, we’ll get the same result.
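Cofactor expansion along the first row translates directly into a recursive function. This is an illustrative sketch; practical libraries use elimination instead, since expansion costs O(n!):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by Laplace (cofactor) expansion along row 0."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: remove row 0 and column j; the cofactor sign is (-1)^(0+j)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
d = det_cofactor(B)   # agrees with np.linalg.det(B)
```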
• The determinant of a diagonal or triangular matrix is the product of its diagonal entries.
• The determinant of a matrix is 0 if and only if the matrix is not invertible. If the matrix contains a row or column of zeros, or if its rows or columns are not linearly independent, its determinant is 0.
• In the two dimensional setting, a matrix A transforms an object in the plane. If the original
object had area V , then the transformed object has area V | det(A)|. When a matrix acts on the
unit square, we obtain a parallelogram with area | det(A)|.
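For instance, the matrix [3 1; 1 2] (a sample matrix chosen for illustration) sends the unit square to a parallelogram of area |det(A)|:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# The unit square's sides e1 and e2 map to the columns of A
side1 = A @ np.array([1.0, 0.0])   # [3, 1]
side2 = A @ np.array([0.0, 1.0])   # [1, 2]

area = abs(np.linalg.det(A))       # parallelogram area |(3)(2) - (1)(1)| = 5
```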
Eigenvalues, Eigenvectors, and Diagonalization
• If A is a square matrix and Ax = λx for some nonzero vector x and scalar λ, we say x is an eigenvector of A with associated eigenvalue λ.
• The eigenvalues of A are the roots of the characteristic polynomial det(A − λI). Every n × n matrix has n eigenvalues (counted with multiplicity), though they may not be distinct. For example, if A = [2 7; −1 −6], then
det(A − λI) = det([2−λ 7; −1 −6−λ]) = (2 − λ)(−6 − λ) − (7)(−1) = λ^2 + 4λ − 5 = (λ + 5)(λ − 1)
The eigenvalues are the roots of this polynomial, λ1 = −5 and λ2 = 1. To find an eigenvector
associated to λ1 = −5, we find a solution of the equation (A + 5I)x = 0:
[2+5 7 | 0; −1 −6+5 | 0] → [7 7 | 0; −1 −1 | 0] → [1 1 | 0; 0 0 | 0]
There are infinitely many solutions, one of which is x1 = [1, −1]. To find an eigenvector associated
to λ2 = 1, find any solution to the equation (A − I)x = 0:
[2−1 7 | 0; −1 −6−1 | 0] → [1 7 | 0; −1 −7 | 0] → [1 7 | 0; 0 0 | 0]
One solution is x2 = [−7, 1].
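The worked example can be confirmed with a numerical eigensolver (a NumPy sketch; the eigenvalue order is not guaranteed, and the returned eigenvectors are scaled to unit length):

```python
import numpy as np

A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])

evals, evecs = np.linalg.eig(A)   # eigenvalues -5 and 1, in some order

# Each column of evecs satisfies A v = lambda v
residuals = [np.linalg.norm(A @ v - lam * v) for lam, v in zip(evals, evecs.T)]

# The sum of the eigenvalues equals the trace 2 + (-6) = -4
trace_check = np.isclose(evals.sum(), np.trace(A))
```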
• The sum of the eigenvalues of A is equal to its trace, the sum of the diagonal entries.
• Two matrices A and B are called similar if there exists an invertible matrix M such that B =
M −1 AM .
• If a matrix A is diagonalizable, it is similar to the diagonal matrix whose diagonal entries are the eigenvalues of A.
• Similar matrices share the following characteristics:
1. determinant
2. trace
3. eigenvalues
4. number of independent eigenvectors
5. Jordan form
• Even if two matrices share the above characteristics, we cannot assume they are similar.
Linear Transformations
• Some examples of linear transformations include: rotations, scaling, stretching, shearing, reflecting across a line through the origin, and projection onto a line through the origin.
• Any linear transformation T : V → W where V has dimension n and W has dimension m can be represented by an m × n matrix. Finding this matrix first requires choosing a basis {v1, v2, . . . , vn} for V and a basis {w1, w2, . . . , wm} for W. For each basis vector vi in V, we write T(vi) as a linear combination of the basis vectors for W:
T(vi) = c1 w1 + c2 w2 + · · · + cm wm
The coefficients c1, . . . , cm form the ith column of the matrix.
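For example, the matrix of a rotation of the plane by an angle t can be assembled column by column from the images of the standard basis vectors (a NumPy sketch):

```python
import numpy as np

t = np.pi / 2   # a quarter turn

# Column i of the matrix is T(e_i), written in the standard basis of the target
T_e1 = np.array([np.cos(t), np.sin(t)])    # image of [1, 0]
T_e2 = np.array([-np.sin(t), np.cos(t)])   # image of [0, 1]
M = np.column_stack([T_e1, T_e2])

# Applying the matrix applies the transformation
rotated = M @ np.array([1.0, 0.0])         # a quarter turn of [1, 0] gives [0, 1]
```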
• Other examples of linear transformations include the derivative and the integral.
Review Problems
It is highly recommended that you work on as many of these problems as possible. This is not an exhaustive list of the types of questions you might see. You should also review previous in-class exams, review sheets, and homework problems.
1. Compute the dot product u · v and the length of each vector u and v when u = [4, 1, 3] and
v = [−2, 0, 1].
5. Decide which of the following sets are vector spaces. It is valid to show a set is a subspace, but
you must indicate which vector space it is a subset of. If the set is a vector space, find a basis for
it.
6. Find the characteristic polynomial of the matrix A = [2 −4; −1 −1]. Then, find its eigenvalues and associated eigenvectors.
7. Decide which of the following transformations are linear. For those that are, find the matrix of
the transformation (using the standard bases):