Math 150-01 Linear Algebra

Final Exam Review Sheet

Vector Operations

• Vector addition is a component-wise operation. Two vectors v and w may be added together as
long as they contain the same number n of components. Their sum is another vector, whose ith
component is the sum of the ith components of v and w:
     
    [1]   [3]   [1+3]
    [2] + [4] = [2+4]

• Vectors may be multiplied by a scalar. The product cv is a vector, whose ith component is the
product of c and the ith component of v:
   
      [1]   [(5)(1)]
    5 [2] = [(5)(2)]

• A linear combination of vectors is a sum of scalar multiples of vectors (with the same number of
components):

      [1]     [4]     [-1]   [ 4]
    5 [2] + 6 [0] - 3 [ 6] + [-3]

• The dot product of two vectors is a scalar, and is only defined when the vectors have the same
number of components. It is the sum of the products of the ith components of the vectors:
   
    [1]   [3]
    [2] · [4] = (1)(3) + (2)(4)

• The length of a vector is a scalar. It is found by taking the square root of the sum of the squares
of the vector's components:

    ||[1, 2, 3]|| = √(1² + 2² + 3²)
• A unit vector is a vector of length one. For example, [1, 0, 0] and [1/√2, 1/√2].

• The angle θ between two vectors v and w is related to their lengths and dot products by the
formula
    cos(θ) = (v · w) / (||v|| ||w||)

• Two vectors are perpendicular (also called orthogonal) if they have a dot product of 0.
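All of the vector operations above are easy to sanity-check numerically. A minimal NumPy sketch (not part of the original sheet), using the same vectors as the examples:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([3, 4])

# Component-wise addition and scalar multiplication
assert (v + w == np.array([1 + 3, 2 + 4])).all()
assert (5 * v == np.array([5 * 1, 5 * 2])).all()

# Dot product: sum of products of matching components
d = np.dot(v, w)                 # (1)(3) + (2)(4) = 11

# Length: square root of the sum of squared components
length = np.linalg.norm(np.array([1, 2, 3]))

# Angle between v and w: cos(theta) = v.w / (||v|| ||w||)
cos_theta = d / (np.linalg.norm(v) * np.linalg.norm(w))

# Perpendicular (orthogonal) vectors have dot product 0
assert np.dot([1, 0, 0], [0, 1, 0]) == 0
```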
Matrix Operations

• Matrix addition is a component-wise operation. Two matrices A and B may be added together
as long as they have the same number of rows and columns. Their sum is another matrix, whose
(i, j)th entry is the sum of the (i, j)th entries of A and B:
     
    [1 2]   [5 6]   [1+5 2+6]
    [3 4] + [7 8] = [3+7 4+8]

• Matrices may be multiplied by a scalar. The product cA is a matrix, whose (i, j)th entry is the
product of c and the (i, j)th entry of A:
   
      [1 2]   [(5)(1) (5)(2)]
    5 [3 4] = [(5)(3) (5)(4)]

• Matrices may be multiplied as long as their dimensions are compatible. The product AB is defined
when A has n columns and B has n rows. The (i, j)th entry of AB is the dot product of the ith
row of A and the jth column of B:
 
    [1 2 3]   [ 7  8]   [(1)(7) + (2)(9) + (3)(11)   (1)(8) + (2)(10) + (3)(12)]
    [4 5 6] × [ 9 10] = [(4)(7) + (5)(9) + (6)(11)   (4)(8) + (5)(10) + (6)(12)]
              [11 12]

• Matrix multiplication is not commutative. In general, AB ≠ BA.


• The transpose of a matrix A, denoted A^T, is the matrix whose (i, j)th entry is the (j, i)th entry
of A:

        [1 2 3]            [1 4]
    A = [4 5 6]  ⇒  A^T = [2 5]
                           [3 6]
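The matrix operations above can be sketched in NumPy as well (an illustrative check, using the matrices from the examples):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B                      # component-wise sum
C = 5 * A                      # scalar multiple

# (2x3)(3x2) product: entry (i, j) is the dot product of row i of
# the left factor with column j of the right factor
L = np.array([[1, 2, 3], [4, 5, 6]])
R = np.array([[7, 8], [9, 10], [11, 12]])
P = L @ R

# Multiplication is not commutative: here AB != BA
assert not (A @ B == B @ A).all()

# Transpose swaps rows and columns
assert L.T[0, 1] == L[1, 0] and L.T[1, 0] == L[0, 1]
```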

Reduced Echelon Form

• A matrix is in reduced row echelon form (RREF) when


1. All rows containing nonzero entries are above any rows containing only zeros
2. The first nonzero entry (from the left) in each row, called a pivot, is strictly to the right of
the first nonzero entry of any rows above it
3. All entries above and below a pivot are 0
4. The pivot entries are all 1
The following matrices are in reduced row echelon form:
       
    [1 0 0 2]   [1 0 0  2]   [1 0 3 -2]   [1 0 8 0]
    [0 1 0 3] , [0 1 0 -1] , [0 1 2  4] , [0 1 3 0]
    [0 0 1 5]   [0 0 0  0]   [0 0 0  0]   [0 0 0 1]

• A matrix can be brought to its RREF by elementary row operations:


1. interchanging two rows
2. adding a multiple of one row to another
3. multiplying a row by a scalar
• The rank of a matrix is the number of pivots that appear in its RREF. The matrices above have
rank 3, 2, 2, and 3, respectively.
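NumPy has no built-in RREF, but the rank (the number of pivots in the RREF) can be computed directly; a quick check on the first two example matrices above:

```python
import numpy as np

M1 = np.array([[1, 0, 0, 2],
               [0, 1, 0, 3],
               [0, 0, 1, 5]])
M2 = np.array([[1, 0, 0, 2],
               [0, 1, 0, -1],
               [0, 0, 0, 0]])

r1 = np.linalg.matrix_rank(M1)   # 3 pivots
r2 = np.linalg.matrix_rank(M2)   # 2 pivots (the zero row adds none)
```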
Some Special Matrices

• The identity matrix I_n is an n × n matrix whose diagonal entries are 1, and whose remaining
entries are 0. It is the multiplicative identity, because if A is any m × n matrix, A I_n = I_m A = A.
When the size n is understood, we can simply write I for the identity matrix.

• A zero matrix 0 is a matrix with every entry 0. It can be any size, and not necessarily square. A
zero matrix is never invertible. It is the additive identity, since A + 0 = 0 + A = A for any matrix
A.

• An elementary matrix E is a square matrix which performs a row operation on a matrix A. There
are three types of elementary matrices, all of which are invertible:

1. E can interchange two rows. An elementary matrix of this type is obtained from the identity
by interchanging two of its rows. Such a matrix E is equal to its own inverse:
   
        [1 0 0]            [1 0 0]
    E = [0 0 1]   E^(-1) = [0 0 1]
        [0 1 0]            [0 1 0]

2. E can multiply a row by a constant, c. An elementary matrix of this type is obtained from
the identity by multiplying one of its rows by c. Its inverse is obtained from the identity by
multiplying one of its rows by 1/c:

        [1 0 0]            [1  0  0]
    E = [0 c 0]   E^(-1) = [0 1/c 0]
        [0 0 1]            [0  0  1]

3. E can add a multiple of one row to another. An elementary matrix of this type is obtained
from the identity by making an off-diagonal entry c ≠ 0. Its inverse is obtained from the
identity by making that same off-diagonal entry −c:

        [1 0 0]            [1 0  0]
    E = [0 1 c]   E^(-1) = [0 0  1]... 

        [1 0 0]            [1 0  0]
    E = [0 1 c]   E^(-1) = [0 1 -c]
        [0 0 1]            [0 0  1]
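The three types of elementary matrices can be verified by letting each E act on a small matrix; a sketch with 2 × 2 examples (not from the original sheet):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])

# Type 1: swap rows. E is the identity with its rows interchanged,
# and is its own inverse.
E1 = np.array([[0., 1.], [1., 0.]])
assert (E1 @ A == np.array([[3., 4.], [1., 2.]])).all()
assert (E1 @ E1 == np.eye(2)).all()

# Type 2: multiply row 2 by c; the inverse uses 1/c.
c = 5.0
E2 = np.array([[1., 0.], [0., c]])
E2inv = np.array([[1., 0.], [0., 1 / c]])
assert np.allclose(E2 @ E2inv, np.eye(2))

# Type 3: add c times row 1 to row 2; the inverse uses -c.
E3 = np.array([[1., 0.], [c, 1.]])
E3inv = np.array([[1., 0.], [-c, 1.]])
assert np.allclose(E3 @ E3inv, np.eye(2))
```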

• A diagonal matrix is one whose main diagonal entries can take any value, but whose entries off
the main diagonal are 0. It is invertible if all diagonal entries are nonzero.

• An upper triangular matrix is one whose main diagonal entries, as well as any entries above the
diagonal, can take any value, but whose entries below the main diagonal are 0.

• A lower triangular matrix is one whose main diagonal entries, as well as any entries below the
diagonal, can take any value, but whose entries above the main diagonal are 0.

• A symmetric matrix is one that is equal to its own transpose, A = AT . These can have any value
on the main diagonal, but entries off the main diagonal must be paired: the (i, j)th entry must
be equal to the (j, i)th entry.

• A permutation matrix is a square matrix such that every row and column contains exactly one
entry of 1 and zeros elsewhere. There are n! permutation matrices of size n × n.

• An orthogonal matrix is one whose transpose is equal to its inverse: A^T = A^(-1).


Matrix Inverses

• The inverse of a square matrix A is the unique matrix A^(-1) such that A A^(-1) = A^(-1) A = I

• Not all matrices are invertible. If A has any of the following properties, it is NOT invertible:

1. A is not square
2. A has determinant 0
3. the rows or columns of A are not linearly independent
4. the equation Ax = 0 has more than one solution
5. A has at least one eigenvalue of 0

• If a square n × n matrix has rank n, it is invertible. Its RREF is the identity.

• If A and B are invertible n × n matrices, then their product is invertible, and


(AB)^(-1) = B^(-1) A^(-1).

• If A is invertible, then A^T is invertible, and (A^T)^(-1) = (A^(-1))^T.

• Suppose A is an n × n matrix. To find the inverse of A (or determine that it doesn't have one),
reduce the augmented matrix [A | I_n] to reduced row echelon form via row operations. If this
augmented matrix can be put in the form [I_n | B], then B = A^(-1). Otherwise, A^(-1) does not exist.
For example:

    [1 2 | 1 0]   [1  2 |  1 0]   [1  0 | -2 1]   [1 0 |  -2    1 ]
    [3 4 | 0 1] → [0 -2 | -3 1] → [0 -2 | -3 1] → [0 1 | 3/2 -1/2]

    so [1 2] has inverse [ -2    1 ].
       [3 4]             [3/2 -1/2]
 
• The inverse of a 2 × 2 matrix

        [a b]
    A = [c d]

  can be found by the formula

    A^(-1) = (1/(ad − bc)) [ d -b]
                           [-c  a]

• Diagonal or triangular matrices are invertible when all of their diagonal entries are nonzero.

• Permutation matrices are always invertible, and their inverse is equal to their transpose (they are
orthogonal).
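The worked example and the 2 × 2 formula above can both be verified numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
Ainv = np.linalg.inv(A)

# Matches the row-reduction result: [[-2, 1], [3/2, -1/2]]
assert np.allclose(Ainv, [[-2., 1.], [1.5, -0.5]])
assert np.allclose(A @ Ainv, np.eye(2))

# The 2x2 formula: 1/(ad - bc) * [[d, -b], [-c, a]]
a, b, c, d = A.flatten()
formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
assert np.allclose(formula, Ainv)

# (AB)^(-1) = B^(-1) A^(-1)
B = np.array([[2., 1.], [1., 1.]])
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```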
Systems of Equations

• A system of equations (with m equations in n unknowns) can be represented by the matrix


equation Ax = b:
    
    2x + 3y      =  3        [2  3  0] [x]   [ 3]
    4x + y + 3z  =  6   ⇒   [4  1  3] [y] = [ 6]
    x − 2y − 4z  = −1        [1 −2 −4] [z]   [−1]

• A system can also be represented by an augmented matrix. The matrix A above is augmented
with the vector b:
 
    2x + 3y      =  3        [2  3  0 |  3]
    4x + y + 3z  =  6   ⇒   [4  1  3 |  6]
    x − 2y − 4z  = −1        [1 −2 −4 | −1]

We denote this augmented matrix as [A | b].

• After reducing the augmented matrix [A | b] to reduced row echelon form (RREF), one of three
results is possible:

1. The system has no solution. This happens when the RREF contains a row of the form

    [0 0 0 · · · 0 | c]

where c ≠ 0.
2. The system has a unique solution. This happens when the rank of the matrix A is equal to
the number of variables in the system. The following examples are reduced echelon forms of
augmented matrices with exactly one solution:
 
  1 0 0 2
1 0 0 2 
0 1 0 3 , 0 1 0 0

0 0 1 5
0 0 1 5
0 0 0 0

The solutions to these systems are [2, 3, 5], and [2, 0, 5], respectively.
3. There are infinitely many solutions. This happens when the rank of the matrix A is less than
the number of variables. The following examples are reduced echelon forms of augmented
matrices with infinitely many solutions:
 
    [1 0 1 | 4]    [1 0 1 | 4]
    [0 1 2 | 3] ,  [0 1 2 | 3]
                   [0 0 0 | 0]

Both of these systems have the same set of solutions:

    [x1, x2, x3] = [4 − c, 3 − 2c, c]

where each value of c gives a solution to the system.
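For the unique-solution case, the system from the start of this section can be solved numerically; a sketch (rank(A) equals the number of variables here, so exactly one solution exists):

```python
import numpy as np

A = np.array([[2., 3., 0.],
              [4., 1., 3.],
              [1., -2., -4.]])
b = np.array([3., 6., -1.])

# rank(A) equals the number of variables, so the solution is unique
assert np.linalg.matrix_rank(A) == 3

x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```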


Vector Spaces, Bases, and Dimension

• A vector space V is a set of vectors, together with the operations of addition and scalar multipli-
cation, such that the following properties hold:

1. V contains the zero vector 0, a unique additive identity: 0 + v = v


2. Addition is closed: if v1 and v2 are in V , then v1 + v2 is in V
3. Addition is associative: v1 + (v2 + v3 ) = (v1 + v2 ) + v3
4. Addition is commutative: v1 + v2 = v2 + v1
5. Each v in V has an additive inverse −v, also in V : v + (−v) = −v + v = 0
6. Scalar multiplication is closed: if v is in V and c is any scalar, cv is in V
7. Scalar multiplication is distributive across vector addition: c(v1 + v2 ) = cv1 + cv2
8. Scalar multiplication is distributive across scalar addition: (c + d)v = cv + dv
9. Scalar multiplication is associative: c(dv) = (cd)v
10. Scalar multiplication has an identity: 1v = v

• If W is a subset of a vector space V (meaning any vector in W also appears in V ), then W is a


subspace if it satisfies the following:

1. Addition is closed: if w1 and w2 are in W , then w1 + w2 is in W


2. Scalar multiplication is closed: if w is in W and c is any scalar, cw is in W

• A set of vectors is called linearly independent if no vector in the set can be written as a linear
combination of other vectors in the set. A set of n vectors with m components cannot be linearly
independent if n > m. If n = m, we can test for independence by forming a matrix A from the
vectors, where the vectors are either the rows or columns of A. If A is invertible, then the set is
independent.

• A set of vectors is called a spanning set for a space V if every vector in V can be written as a
linear combination of the vectors in the set. The set is not required to be linearly independent,
but it is a subset of V .

• A set of vectors is called a basis of a space V if it is a linearly independent spanning set of V. A
basis is not unique; any vector space has more than one basis.

• Every basis of a space V contains the same number of vectors. This number is called the dimension
of the space.

• A spanning set contains at least as many vectors as a basis. It can be larger, but a basis is the
smallest spanning set possible.

  Ex: { [1, 2], [1, 3] } is a basis for R². It contains two linearly independent vectors.

  Ex: { [1, 2], [1, 3], [1, 4] } is a spanning set for R². It spans R², but is not linearly
  independent. (It's too big!)

  Ex: { [1, 2] } is neither a spanning set nor a basis for R². It is linearly independent, but it
  does not span. (It's too small!)
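The independence test described above (form a matrix from the vectors and check invertibility) matches these examples; a short NumPy sketch:

```python
import numpy as np

# Columns are the candidate vectors. For n vectors in R^n, the set
# is independent exactly when the matrix is invertible.
B = np.array([[1., 1.],
              [2., 3.]])
assert np.linalg.det(B) != 0          # {[1,2], [1,3]}: a basis of R^2

# Three vectors in R^2 (n > m) can never be independent
S = np.array([[1., 1., 1.],
              [2., 3., 4.]])
assert np.linalg.matrix_rank(S) == 2  # spans R^2, but not independent
```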
Some Important Vector Spaces

• The real numbers are a vector space, with dimension 1. Any set containing one nonzero real
number is a basis.

• The set of vectors with n components, each a real number, is denoted Rⁿ. This is a vector space
with dimension n. Any set of n linearly independent vectors with n components is a basis of Rⁿ.
The standard basis consists of n vectors, each with a single entry of 1 in a different position, and
0's elsewhere. For instance, below is a basis for R³:

    { [1]   [0]   [0] }
    { [0] , [1] , [0] }
    { [0]   [0]   [1] }
 

• The set of m × n matrices, denoted Mm,n , is a vector space with dimension mn. The standard
basis includes mn matrices of size m × n, each containing an entry of 1 in a different location and
0’s elsewhere. For instance, below is a basis for the 2 × 3 matrices:
            
    { [1 0 0]  [0 1 0]  [0 0 1]  [0 0 0]  [0 0 0]  [0 0 0] }
    { [0 0 0], [0 0 0], [0 0 0], [1 0 0], [0 1 0], [0 0 1] }

• The set of all polynomials of degree at most n is a vector space with dimension n + 1. The
standard basis contains the monomials x^d for d = 0, . . . , n. For example, the space of polynomials
with degree at most 5 has standard basis

{1, x, x², x³, x⁴, x⁵}

Some Important Subspaces

• Every vector space has the subspace {0}, containing only the 0 vector. Its dimension is 0.

• The nullspace of a matrix A is the set of all solutions x to the equation Ax = 0. If A is invertible,
the nullspace is {0}. Otherwise, it contains infinitely many vectors, including 0.

• The column space of a matrix A is the set of all linear combinations of the columns of A. If A
has m rows, each of these vectors has m components.

• The row space of a matrix A is the set of all linear combinations of the rows of A. If A has n
columns, each of these vectors has n components.
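These subspaces can be computed numerically. One standard way to get a nullspace basis is from the SVD: the right-singular vectors whose singular value is (numerically) zero span the nullspace. A sketch (the tolerance 1e-10 is an arbitrary choice, not from the original sheet):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 2.]])

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]           # rows spanning the nullspace of A

# Every basis vector really solves Ax = 0
for v in null_basis:
    assert np.allclose(A @ v, 0)
```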

Some Examples of Non-Subspaces

• The set of invertible n × n matrices is a subset of Mn,n , but not a subspace, because it does not
contain the 0 matrix. It is also not closed under addition: I and −I are both in the set, but
I + (−I) = 0 is not.

• The set of vectors of the form [a, 1] is a subset of R², but not a subspace. It is not closed under
scalar multiplication, since 5[3, 1] = [15, 3] is not in the set. It also does not contain 0, and is
not closed under addition.

• A line in R2 through the origin does define a subspace. A line that does not pass through the
origin is not a subspace.
Determinants

• The determinant of a matrix is only defined when the matrix is square.


 
• The determinant of a 2 × 2 matrix

        [a b]
    A = [c d]

  is det(A) = ad − bc.
 
• The determinant of a 3 × 3 matrix

        [a b c]
    A = [d e f]
        [g h i]

  can be found by the "big formula":

det(A) = aei + bfg + cdh − ceg − bdi − afh

• The determinant of any n × n matrix A can be found by cofactor (also known as Laplace) expansion.
Let M_ij denote the submatrix obtained from A by removing the ith row and jth column. Let C_ij
denote the number

    C_ij = (−1)^(i+j) det(M_ij)

Expanding along any row k, we obtain the formula

    det(A) = a_k1 C_k1 + a_k2 C_k2 + · · · + a_kn C_kn

where a_kj is the entry of A in row k and column j. We can also expand along any column k by
the formula

    det(A) = a_1k C_1k + a_2k C_2k + · · · + a_nk C_nk
No matter which row or column we use for the expansion, we’ll get the same result.

• The determinant of a diagonal or triangular matrix is the product of its diagonal entries.

• The determinant of a matrix is 0 if and only if the matrix is not invertible. If the matrix contains
a row or column of zeros, or if its rows or columns are not linearly independent, its determinant
is 0.

• Properties of the determinant:

1. det(AB) = det(A) det(B)


2. det(A^T) = det(A)
3. det(A^(-1)) = 1 / det(A)
• Suppose an object has volume V . If A acts on the object, we obtain a new object with volume
V | det(A)|.

• In the two dimensional setting, a matrix A transforms an object in the plane. If the original
object had area V , then the transformed object has area V | det(A)|. When a matrix acts on the
unit square, we obtain a parallelogram with area | det(A)|.
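The "big formula" and the determinant properties above can be checked numerically; an illustrative sketch (the particular matrices here are arbitrary choices):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
a, b, c, d, e, f, g, h, i = A.flatten()

# The 3x3 "big formula" agrees with np.linalg.det
big = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
assert np.isclose(big, np.linalg.det(A))

# det of a diagonal matrix is the product of its diagonal entries
D = np.diag([2., 3., 4.])
assert np.isclose(np.linalg.det(D), 24.0)

# det(AB) = det(A) det(B)  and  det(A^T) = det(A)
assert np.isclose(np.linalg.det(A @ D),
                  np.linalg.det(A) * np.linalg.det(D))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```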
Eigenvalues, Eigenvectors, and Diagonalization

• If A is a square matrix and Ax = λx for some vector x and scalar λ, we say x is an eigenvector
of A with associated eigenvalue λ.

• The characteristic polynomial of a matrix A is det(A − λI). If A is n × n, this is an nth degree


polynomial.

• The eigenvalues of A are the roots of the characteristic polynomial. Every n × n matrix has n
eigenvalues, though they may not be distinct.

• Once the eigenvalues λ1 , λ2 , . . . , λn of the n × n matrix A have been found, an eigenvector xi


associated to each can be found by solving the equation (A − λi I)xi = 0.
 
• Example: Let

        [ 2  7]
    A = [-1 -6]

  The characteristic polynomial of A is

    det(A − λI) = (2 − λ)(−6 − λ) − (7)(−1) = λ² + 4λ − 5 = (λ + 5)(λ − 1)

The eigenvalues are the roots of this polynomial, λ1 = −5 and λ2 = 1. To find an eigenvector
associated to λ1 = −5, we find a solution of the equation (A + 5I)x = 0:
     
    [2+5   7  | 0]    [ 7  7 | 0]    [1 1 | 0]
    [-1  -6+5 | 0] → [-1 -1 | 0] → [0 0 | 0]

There are infinitely many solutions, one of which is x1 = [1, −1]. To find an eigenvector associated
to λ2 = 1, find any solution to the equation (A − I)x = 0:
     
    [2-1   7  | 0]    [ 1  7 | 0]    [1 7 | 0]
    [-1  -6-1 | 0] → [-1 -7 | 0] → [0 0 | 0]

so one choice would be x2 = [7, −1].

• The product of the eigenvalues of A is equal to the determinant of A.

• The sum of the eigenvalues of A is equal to its trace, the sum of the diagonal entries.

• The eigenvalues of a diagonal or triangular matrix are the diagonal entries.

• An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.

• A matrix whose eigenvalues are all distinct is diagonalizable (though a diagonalizable matrix need
not have distinct eigenvalues).

• A symmetric matrix is called positive definite if all of its eigenvalues are positive.

• The eigenvalues of a symmetric matrix are always real.
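The worked example and the eigenvalue properties above can be confirmed with NumPy; a minimal sketch:

```python
import numpy as np

A = np.array([[2., 7.],
              [-1., -6.]])
eigvals, eigvecs = np.linalg.eig(A)

# The eigenvalues -5 and 1 from the worked example
assert np.allclose(sorted(eigvals), [-5., 1.])

# Each column of eigvecs satisfies A x = lambda x
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)

# Product of eigenvalues = determinant; sum = trace
assert np.isclose(np.prod(eigvals), np.linalg.det(A))
assert np.isclose(np.sum(eigvals), np.trace(A))
```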


Similar Matrices

• Two matrices A and B are called similar if there exists an invertible matrix M such that B =
M −1 AM .

• If a matrix A is diagonalizable, it is similar to the diagonal matrix whose diagonal entries are the
eigenvalues of A.

• If a matrix A is not diagonalizable, it is still similar to a matrix in Jordan form.

• If A and B are similar, they have the same:

1. determinant
2. trace
3. eigenvalues
4. number of independent eigenvectors
5. Jordan form

• Even if two matrices share the above characteristics, we cannot assume they are similar.
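The shared quantities are easy to confirm: conjugate a matrix by any invertible M and compare. A sketch, reusing the eigenvalue example matrix and an arbitrary invertible M:

```python
import numpy as np

A = np.array([[2., 7.],
              [-1., -6.]])
M = np.array([[1., 1.],
              [0., 1.]])       # any invertible matrix works
B = np.linalg.inv(M) @ A @ M   # B is similar to A

# Similar matrices share determinant, trace, and eigenvalues
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.isclose(np.trace(B), np.trace(A))
assert np.allclose(sorted(np.linalg.eigvals(B)),
                   sorted(np.linalg.eigvals(A)))
```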

Linear Transformations

• A transformation T : V → W is a map from one vector space V to another, W . T is called a


linear transformation if

1. T (v1 + v2 ) = T (v1 ) + T (v2 )


2. T (cv) = cT (v)

• Example: The transformation T : R2 → R2 defined by T (x, y) = (2x, x − y) is a linear transfor-


mation, since

    T((x, y) + (a, b)) = T(x + a, y + b) = (2(x + a), (x + a) − (y + b))
                       = (2x + 2a, (x − y) + (a − b)) = T(x, y) + T(a, b)

    T(c(x, y)) = T(cx, cy) = (2cx, cx − cy) = c(2x, x − y) = cT(x, y)

• Example: The transformation T : R2 → R2 defined by T (x, y) = (x2 , y) is not a linear transfor-


mation, since
T(c(x, y)) = T(cx, cy) = ((cx)², cy) = c(cx², y) ≠ cT(x, y)

• Some examples of linear transformations include: rotations, scaling, stretching, shearing, reflecting
across a line through the origin, and projection onto a line through the origin.

• Any linear transformation T : V → W where V has dimension n and W has dimension m can be
represented by an m × n matrix. Finding this matrix first requires choosing a basis {v_1, v_2, . . . , v_n}
for V and a basis {w_1, w_2, . . . , w_m} for W. For each basis vector v_i in V, we write T(v_i) as a
linear combination of the basis vectors for W:

    T(v_i) = c_1 w_1 + c_2 w_2 + · · · + c_m w_m

The ith column of the transformation matrix contains the entries c_1, c_2, . . . , c_m.

• If the matrix of a transformation has determinant of absolute value 1, the transformation is
area/volume preserving.

• Other examples of linear transformations include the derivative and the integral.
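For the example T(x, y) = (2x, x − y) above, the standard-basis matrix has columns T(e1) = (2, 1) and T(e2) = (0, −1); a quick numerical check:

```python
import numpy as np

# Columns are T(e1) = (2, 1) and T(e2) = (0, -1)
T = np.array([[2., 0.],
              [1., -1.]])

x, y = 3., 5.
assert np.allclose(T @ np.array([x, y]), [2 * x, x - y])

# Linearity in matrix form: T(cv) = cT(v)
v = np.array([1., 4.])
assert np.allclose(T @ (7 * v), 7 * (T @ v))

# |det T| = 2, so T doubles areas (it is not area preserving)
assert np.isclose(abs(np.linalg.det(T)), 2.0)
```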
Review Problems
It is highly recommended that you work on as many of these problems as possible. This is not an
exhaustive list of the types of questions you might see. You should also review previous in-class
exams, review sheets, and homework problems.

1. Compute the dot product u · v and the length of each vector u and v when u = [4, 1, 3] and
v = [−2, 0, 1].

2. Find the inverse of each matrix, or explain why it does not exist:

       [3 2]       [ 1 2 4]       [1 3 3]       [0 1 0]
   (a) [1 3]   (b) [-1 2 0]   (c) [1 4 3]   (d) [0 0 1]
                                  [1 3 4]       [1 0 0]

3. Find the determinant of the following matrices:

       [5 2]       [6 1  3]       [2 0  4]       [2 3 -2]
   (a) [3 1]   (b) [0 3  3]   (c) [1 3 -2]   (d) [1 0  1]
                   [0 0 -3]       [3 3  2]       [4 1  2]

4. Find the solution(s),


   if they
  exist, for each of the
 following
 equations Ax = b 
3 3 4 x1 1   x1     x1  
3 1 2 5 3 1 2 x2  = 0
(a) 3 5 9  x2  = 2 (b) x2  = (c)
6 2 4 2 6 2 4 0
5 9 17 x3 4 x3 x3

5. Decide which of the following sets are vector spaces. It is valid to show a set is a subspace, but
you must indicate which vector space it is a subset of. If the set is a vector space, find a basis for
it.

(a) The set of 3 × 3 diagonal matrices.


(b) The set of vectors of the form [a, b, 0].
(c) The set of vectors of the form [a, 6].
(d) The set of 3 × 3 symmetric matrices.


2 −4
6. Find the characteristic polynomial of the matrix A = . Then, find its eigenvalues and
−1 −1
associated eigenvectors.

7. Decide which of the following transformations are linear. For those that are, find the matrix of
the transformation (using the standard bases):

(a) T : R2 → R2 is defined by T (x, y) = (2x, y)


(b) T : R2 → R2 is defined by T (x, y) = (x + 1, y + 2)
(c) T : R2 → R2 rotates an object by an angle of π/3
(d) T : R3 → R2 is defined by T (x, y) = (x + 2y, x + 3y)
