CHE 358
Numerical Methods
for Engineers
Dr. Martinson Addo Nartey
Credit: Dr. K. Mensah-Darkwa
Lesson-04
Linear Systems
Quiz 1
Determine the root of the equation below using the False
Position Method:
f(x) = −12 − 21x + 18x² − 2.75x³
xl = −1
xu = 0
εs = 1%
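A minimal Python sketch of the false position iteration the quiz calls for, assuming the reconstructed problem statement f(x) = −12 − 21x + 18x² − 2.75x³ with bracket [−1, 0] and a 1% relative stopping criterion:

```python
def f(x):
    # Quiz function: f(x) = -12 - 21x + 18x^2 - 2.75x^3
    return -12 - 21*x + 18*x**2 - 2.75*x**3

def false_position(f, xl, xu, es=0.01, max_it=50):
    """Bracketing root finder; es is the relative stopping criterion."""
    xr = xu  # previous estimate, used for the error check
    for _ in range(max_it):
        xr_old = xr
        # False-position formula: x-intercept of the secant line
        # through (xl, f(xl)) and (xu, f(xu))
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:
            xu = xr  # root lies in the lower subinterval
        else:
            xl = xr  # root lies in the upper subinterval
    return xr

root = false_position(f, xl=-1.0, xu=0.0, es=0.01)
```

The iteration keeps the root bracketed at every step, unlike the open secant method.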
Overview - Matrices
• A matrix consists of a rectangular array of elements
represented by a single symbol ([A]).
• An individual entry of a matrix is called an element (e.g., a23, the element in row 2, column 3).
Overview - Matrices
• A horizontal set of elements is called a row and a vertical
set of elements is called a column.
• The first subscript of an element indicates the row while the
second indicates the column.
• The size of a matrix is given as m rows by n columns, or
simply m by n (m x n).
• 1 x n matrices are row vectors.
• m x 1 matrices are column vectors.
Overview – Matrices: Forms of Matrices
• Matrices where m=n are called square matrices.
Symmetric:        [A] = [5 1 2; 1 3 7; 2 7 8]   (aij = aji)
Diagonal:         [A] = [a11 0 0; 0 a22 0; 0 0 a33]
Identity:         [I] = [1 0 0; 0 1 0; 0 0 1]
Upper triangular: [A] = [a11 a12 a13; 0 a22 a23; 0 0 a33]
Lower triangular: [A] = [a11 0 0; a21 a22 0; a31 a32 a33]
Banded:           [A] = [a11 a12 0 0; a21 a22 a23 0; 0 a32 a33 a34; 0 0 a43 a44]
Overview - Matrices
• Two matrices are considered equal if and only if every
element in the first matrix is equal to every corresponding
element in the second. This means the two matrices must
be the same size.
• Matrix addition and subtraction are performed by adding or
subtracting the corresponding elements.
This requires that the two matrices be the same size.
• Scalar matrix multiplication is performed by multiplying
each element by the same scalar.
Matrix Multiplication
• The elements in the matrix [C] that results from multiplying
matrices [A] and [B] are calculated using:
cij = Σ aik·bkj,  summed over k = 1 to n
Matrix computation
• Given that A x B = C, determine the product of A and B
A = [1 2 3; 2 1 4; 1 4 3]   and   B = [2 1; 1 2; 2 1]

Solution

C = [10 8; 13 8; 12 12]
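The formula cij = Σ aik·bkj can be implemented directly with three nested loops; a minimal sketch using the [A] and [B] from this example:

```python
# Triple-loop implementation of c_ij = sum over k of a_ik * b_kj
A = [[1, 2, 3],
     [2, 1, 4],
     [1, 4, 3]]
B = [[2, 1],
     [1, 2],
     [2, 1]]

n_rows, n_inner, n_cols = len(A), len(B), len(B[0])
C = [[0] * n_cols for _ in range(n_rows)]
for i in range(n_rows):
    for j in range(n_cols):
        for k in range(n_inner):
            C[i][j] += A[i][k] * B[k][j]
# C is the 3 x 2 product matrix [10 8; 13 8; 12 12]
```

Note the inner dimensions must match: A is 3 × 3 and B is 3 × 2, so C is 3 × 2.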
Matrix Inverse and Transpose
• The inverse of a square, nonsingular matrix [A] is the matrix
[A]-1 which, when multiplied by [A], yields the identity
matrix:
[A][A]-1 = [A]-1[A] = [I]
• The transpose of a matrix involves transforming its rows
into columns and its columns into rows.
(aij)T=aji
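A quick numerical check of both definitions, sketched with NumPy on an arbitrary nonsingular matrix (the matrix itself is just an illustration, not from the slides):

```python
import numpy as np

# An arbitrary square, nonsingular matrix (det = 4, so the inverse exists)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 4.0],
              [1.0, 4.0, 3.0]])

A_inv = np.linalg.inv(A)   # inverse: A @ A_inv should give the identity
A_T = A.T                  # transpose: rows become columns, (a_ij)^T = a_ji

identity_check = A @ A_inv
```

For example, A_T[0][1] equals A[1][0], which is the element a21 = 2.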
Determinants
• The determinant D =|A| of a matrix is formed from the
coefficients of [A].
• Determinants for small matrices are:
1 × 1:  D = a11
2 × 2:  D = a11·a22 − a12·a21
3 × 3:  D = a11·(a22·a33 − a23·a32) − a12·(a21·a33 − a23·a31) + a13·(a21·a32 − a22·a31)
• Determinants for matrices larger than 3 x 3 can be very
complicated.
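The small-matrix determinant formulas translate directly to code; a minimal sketch:

```python
def det2(m):
    # 2 x 2 determinant: a11*a22 - a12*a21
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    # Cofactor expansion along the first row:
    # a11*M11 - a12*M12 + a13*M13, each Mij a 2 x 2 minor determinant
    return (m[0][0] * det2([[m[1][1], m[1][2]], [m[2][1], m[2][2]]])
          - m[0][1] * det2([[m[1][0], m[1][2]], [m[2][0], m[2][2]]])
          + m[0][2] * det2([[m[1][0], m[1][1]], [m[2][0], m[2][1]]]))
```

As a check, the identity matrix has determinant 1, and the 3 × 3 coefficient matrix from the Cramer's rule example later in this lesson gives D = −0.0022.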
Representing Linear Algebra
• Matrices provide a concise notation for
representing and solving simultaneous linear
equations:
a11·x1 + a12·x2 + a13·x3 = b1
a21·x1 + a22·x2 + a23·x3 = b2
a31·x1 + a32·x2 + a33·x3 = b3

In matrix form: [A]{x} = {b}, where
[A] = [a11 a12 a13; a21 a22 a23; a31 a32 a33], {x} = {x1; x2; x3}, {b} = {b1; b2; b3}
Graphical Methods
Graphical Method
• For small sets of simultaneous equations, graphing them
and determining the location of their intersection provides
a solution.
Graphing the equations can also show systems where:
a) No solution exists (singular)
b) Infinite solutions exist (singular)
c) The system is ill-conditioned
Numerical Methods
Cramer’s Rule
Naïve Gauss Elimination
LU Factorization
Cholesky Factorization
Cramer’s Rule
• Cramer’s Rule states that each unknown in a system of
linear algebraic equations may be expressed as a fraction of
two determinants with denominator D and with the
numerator obtained from D by replacing the column of
coefficients of the unknown in question by the constants b1,
b2, …, bn.
represented by xi = Di / D, where Di is the determinant D with column i replaced by the constants {b}.
Cramer’s Rule
System 1:
0.3x1 + 0.52x2 + x3 = −0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = −0.44
Solution: D = −0.0022, x1 = −14.9, x2 = −29.5, x3 = 19.8

System 2:
80x1 − 20x2 − 20x3 = 20
−20x1 + 40x2 − 20x3 = 20
−20x1 − 20x2 + 130x3 = 20
Solution: D = 300000, x1 = 0.6, x2 = 1, x3 = 0.4
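Cramer's rule can be sketched by forming each numerator determinant through column replacement; the code below reuses System 1 (the helper names `det` and `cramer` are illustrative, not from the slides):

```python
def det(m):
    # Recursive cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    D = det(A)  # denominator, common to all unknowns
    x = []
    for j in range(len(b)):
        # Numerator: determinant of A with column j replaced by {b}
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        x.append(det(Aj) / D)
    return x

A = [[0.3, 0.52, 1.0],
     [0.5, 1.0, 1.9],
     [0.1, 0.3, 0.5]]
b = [-0.01, 0.67, -0.44]
x = cramer(A, b)  # approximately [-14.9, -29.5, 19.8]
```

Each unknown costs one n × n determinant, which is why the rule becomes impractical for large systems.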
Quiz 2
Determine the values of x1 and x2 using Cramer's rule:

2x1 + x2 = −1              Answer: x1 = −2, x2 = 3
3x1 − 5x2 = −21

3x1 + 2x2 = 18             Answer: x1 = 4, x2 = 3
−x1 + 2x2 = 2
Naïve Gauss Elimination
• For larger systems, Cramer’s Rule can become
cumbersome.
• Instead, a sequential process of removing unknowns from
equations using forward elimination followed by back
substitution may be used - this is Gauss elimination.
• “Naïve” Gauss elimination simply means the process does
not check for potential problems resulting from division by
zero.
Naïve Gauss Elimination
Forward elimination
• Starting with the first row, add or subtract multiples of that row
to eliminate the first coefficient from the second row and
beyond.
• Continue this process with the second row to remove the second
coefficient from the third row and beyond.
• Stop when an upper triangular matrix remains.
Naïve Gauss Elimination
Back substitution
• Starting with the last row, solve for the unknown, then
substitute that value into the next highest row.
• Because of the upper-triangular nature of the matrix,
each row will contain only one more unknown.
Naïve Gauss Elimination
Example
• Use Gauss elimination to solve the system of equations
below
3𝑥1 − 0.1𝑥2 − 0.2𝑥3 = 7.85
0.1𝑥1 + 7𝑥2 − 0.3𝑥3 = −19.3
0.3𝑥1 − 0.2𝑥2 + 10𝑥3 = 71.4
Solution
𝑥3 = 7.00000
𝑥2 = −2.50000
𝑥1 = 3.00000
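The forward-elimination and back-substitution steps above can be sketched as follows, applied to the example system:

```python
def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination then back substitution.
    No pivoting, so a zero pivot element will cause a division by zero."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # Forward elimination: zero out entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution: each row now has only one new unknown
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = naive_gauss([[3.0, -0.1, -0.2],
                 [0.1, 7.0, -0.3],
                 [0.3, -0.2, 10.0]],
                [7.85, -19.3, 71.4])
# x is approximately (3, -2.5, 7)
```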
Gauss Elimination
Pivoting
• The previous approach is considered naïve because it does not take into
account situations where the pivot element is zero.
• In cases where the pivot element is closer to zero in comparison to
other elements, round-off errors may be introduced.
• Before normalizing each row, it is advantageous to determine the
coefficient with the largest absolute value in the column below the
pivot element.
• Rows can be switched so that the largest element becomes the pivot
element. This is termed Partial Pivoting.
Gauss Elimination
• Example:
By using partial pivoting technique, determine the values of
the unknowns in the system of equation below.
2𝑥1 − 7𝑥2 − 10𝑥3 = −17
5𝑥1 + 𝑥2 +3𝑥3 = 14
𝑥1 + 10𝑥2 + 9𝑥3 = 7
Solution
x1 = 1
x2 = −3
x3 = 4

Assignment 2
Solve for the unknowns without partial pivoting.
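A sketch of the same elimination with partial pivoting added: before eliminating in column k, the row with the largest-magnitude coefficient in that column is swapped up to become the pivot row.

```python
def gauss_pivot(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: find the row with the largest |coefficient|
        # in column k, at or below the diagonal, and swap it up
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        # Forward elimination below the (now largest) pivot
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_pivot([[2.0, -7.0, -10.0],
                 [5.0, 1.0, 3.0],
                 [1.0, 10.0, 9.0]],
                [-17.0, 14.0, 7.0])
# x is approximately (1, -3, 4)
```

Here the first pivot step swaps row 2 (coefficient 5) above row 1 (coefficient 2), reducing round-off error growth.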
LU Factorization
• Recall that the forward-elimination step of Gauss
elimination comprises the bulk of the computational effort.
• LU factorization methods separate the time-consuming
elimination of the matrix [A] from the manipulations of the
right-hand side {b}.
• Once [A] has been factored (or decomposed), multiple right-
hand-side vectors can be evaluated in an efficient manner.
LU Factorization
LU factorization involves two steps:
1. Factorization to decompose the [A] matrix into a
product of a lower triangular matrix [L] and an upper
triangular matrix [U]. [L] has 1 for each entry on the
diagonal.
2. Substitution to solve for {x}
Gauss elimination can be implemented using LU
factorization
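The two steps above can be sketched as a Doolittle-style factorization, in which the elimination factors from forward elimination are stored as the entries of [L] (tested here on the earlier Gauss elimination system; the function names are illustrative):

```python
def lu_decompose(A):
    """Doolittle LU factorization: [A] = [L][U], with 1s on the diagonal of [L]."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            L[i][k] = factor  # store the elimination factor in [L]
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: [L]{d} = {b}
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    # Back substitution: [U]{x} = {d}
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[3.0, -0.1, -0.2],
                     [0.1, 7.0, -0.3],
                     [0.3, -0.2, 10.0]])
x = lu_solve(L, U, [7.85, -19.3, 71.4])
```

Once [L] and [U] are stored, `lu_solve` can be called repeatedly with new {b} vectors without repeating the elimination.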
LU Factorization
With [A] = [L][U], the system [A]{x} = {b} is solved in two steps:
1. Forward substitution: solve [L]{d} = {b} for {d}
2. Back substitution: solve [U]{x} = {d} for {x}
LU Factorization
Example
Solve a system of equations using LU factorization: forward elimination
produces [L] and [U], forward substitution on [L]{d} = {b} gives {d}, and
back substitution on [U]{x} = {d} gives {x}.
Cholesky Factorization
• Symmetric systems occur commonly in both mathematical
and engineering/science problem contexts, and there are
special solution techniques available for such systems.
• The Cholesky factorization is one of the most popular of
these techniques, and is based on the fact that a symmetric
matrix can be decomposed as [A]= [U]T[U], where T stands
for transpose.
• The rest of the process is similar to LU decomposition and
Gauss elimination, except only one matrix, [U], is needed.
Cholesky Factorization
uii = √( aii − Σ (k = 1 to i−1) uki² )

uij = ( aij − Σ (k = 1 to i−1) uki·ukj ) / uii     for j = i+1, …, n
Cholesky Factorization
Example
Use Cholesky factorization to solve [A]{x} = {b}, where
[A] = [6 15 55; 15 55 225; 55 225 979] and {b} = {76; 295; 1259}

Solution
Cholesky Factorization
Determine the transpose [U]T, solve [U]T{d} = {b} by forward substitution,
then determine the unknowns from [U]{x} = {d} by back substitution.
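The full Cholesky procedure, factor then two triangular solves, can be sketched as below; since each right-hand-side entry of the example equals the corresponding row sum of [A], the exact solution is x1 = x2 = x3 = 1.

```python
import math

def cholesky(A):
    """Upper-triangular Cholesky factor [U] with [A] = [U]^T [U]."""
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Diagonal: u_ii = sqrt(a_ii - sum of u_ki^2 for k < i)
        U[i][i] = math.sqrt(A[i][i] - sum(U[k][i] ** 2 for k in range(i)))
        for j in range(i + 1, n):
            # Off-diagonal: u_ij = (a_ij - sum of u_ki * u_kj) / u_ii
            U[i][j] = (A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]
    return U

def cholesky_solve(U, b):
    n = len(b)
    # Forward substitution: [U]^T {d} = {b}
    d = [0.0] * n
    for i in range(n):
        d[i] = (b[i] - sum(U[k][i] * d[k] for k in range(i))) / U[i][i]
    # Back substitution: [U]{x} = {d}
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[6.0, 15.0, 55.0],
     [15.0, 55.0, 225.0],
     [55.0, 225.0, 979.0]]
x = cholesky_solve(cholesky(A), [76.0, 295.0, 1259.0])
# x is approximately (1, 1, 1)
```

Only [U] is stored; the transposed entries U[k][i] play the role of [U]T in the forward substitution, which is the storage saving Cholesky offers for symmetric systems.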