Applied Mathematics and
Computation
Unit 3
Simultaneous Linear Equations
A set of Linear Algebraic Equations
• The a's are constant coefficients
• The b's are constants
• The x's are unknowns
• n is the number of unknowns and the number of equations
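Written out, the system the bullets describe takes the standard form:

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$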
In Matrix Form
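In the compact notation $[A]\{x\} = \{b\}$:

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\begin{Bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{Bmatrix} =
\begin{Bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{Bmatrix}$$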
Linear Algebraic Equations
• Laws of conservation
– Mass, momentum and energy
– Multi-component system
– Discretized differential equation
• Equilibrium and compatibility equations
• Kirchhoff's laws
Solution methods
• Graphical
• Cramer's rule
• Elimination of unknowns
• Gauss elimination
• Gauss-Jordan
• LU decomposition
• Gauss-Seidel
Graphical method
• Limited to three or fewer equations (no graphical representation beyond three dimensions)
Graphical Method
(a) No solution
(b) Infinite solutions
(c) Ill-conditioned system (the intersection point is difficult to detect)
Cramer’s Rule
• Evaluating the determinants becomes prohibitively time-consuming as the number of equations increases
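In its standard form, Cramer's rule computes each unknown as a ratio of determinants, which is why the cost explodes with n:

$$x_i = \frac{\det[A_i]}{\det[A]}$$

where $[A_i]$ is $[A]$ with its i-th column replaced by $\{b\}$.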
Elimination of unknowns
• Produces the same result as Cramer's rule, and shares its limitation: impractical as the number of equations grows
Gauss Elimination
Multiply Eq. (1) by the factor a21/a11 and subtract the result from Eq. (2) to eliminate x1.
Gauss Elimination
(a) Forward elimination
(b) Backward substitution
Example
Multiply Eq (1) by (0.1/3) and subtract from Eq (2)
Multiply Eq (1) by (0.3/3) and subtract from Eq (3)
Multiply Eq (2) by (-0.19/7.00333) and subtract
from Eq (3)
• Backward substitution (solve from the last equation upward)
x3 = 7.0
x2 = -2.5
x1 = 3.0
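The coefficients of this example can be inferred from the factors quoted above (3x1 - 0.1x2 - 0.2x3 = 7.85, 0.1x1 + 7x2 - 0.3x3 = -19.3, 0.3x1 - 0.2x2 + 10x3 = 71.4); a minimal Python sketch of naive Gauss elimination on that assumed system:

```python
import numpy as np

def gauss_naive(A, b):
    """Naive Gauss elimination: forward elimination then back substitution.
    No pivoting, so it fails if a zero pivot is encountered."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]   # e.g. 0.1/3 and 0.3/3 on the first pass
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # back substitution: solve from the last equation upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# System inferred from the factors on this slide (assumed coefficients)
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_naive(A, b))   # -> approximately [3.0, -2.5, 7.0]
```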
Operation counting
• Forward elimination: about 2n³/3 flops (≈ n³/3 multiplications/divisions plus ≈ n³/3 additions/subtractions)
• Backward substitution: n² flops (n(n+1)/2 multiplications/divisions plus n(n-1)/2 additions/subtractions)
• The n³ term dominates, so the cost grows rapidly with the number of equations
Pitfall-I
• Pivot element is zero
• Example: factors = 4/0 and 2/0
• Division by zero; elimination cannot proceed without interchanging rows
Pitfall-II
• Pivot element is nearly zero
• A tolerance should be specified in the program to detect this condition
• Example-II: factor = 1/0.0003
• Division by a very small number magnifies round-off error
Back substitution, x1 = (2.0001 - 3x2)/0.0003, magnifies the rounding of x2 (the exact solution is x1 = 1/3, x2 = 2/3):
• with x2 = 0.7: x1 ≈ -333
• with x2 = 0.67: x1 ≈ -33
• with x2 = 0.667: x1 ≈ -3.33 (carrying limited significant figures throughout)
The tiny pivot turns a small error in x2 into a huge error in x1.
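A small sketch that mimics a k-significant-digit machine by rounding every intermediate result; the system (0.0003x1 + 3x2 = 2.0001, x1 + x2 = 1) is assumed from the quoted factor, and `sig`/`solve2` are illustrative names:

```python
from math import floor, log10

def sig(x, k):
    """Round x to k significant figures, mimicking a k-digit machine."""
    return 0.0 if x == 0 else round(x, k - 1 - floor(log10(abs(x))))

def solve2(eq1, eq2, k):
    """Eliminate x1 from a 2x2 system, rounding every operation to k digits.
    eq1 is used as the pivot row; each eq is (a_i1, a_i2, b_i)."""
    (a11, a12, b1), (a21, a22, b2) = [tuple(sig(v, k) for v in eq)
                                      for eq in (eq1, eq2)]
    f = sig(a21 / a11, k)
    x2 = sig(sig(b2 - sig(f * b1, k), k) / sig(a22 - sig(f * a12, k), k), k)
    x1 = sig(sig(b1 - sig(a12 * x2, k), k) / a11, k)
    return x1, x2

eq1 = (0.0003, 3.0, 2.0001)   # tiny pivot
eq2 = (1.0, 1.0, 1.0)
for k in (3, 4, 5):
    print(k, "digits  no pivot:", solve2(eq1, eq2, k),
          " pivoted:", solve2(eq2, eq1, k))
print("exact: x1 = 1/3, x2 = 2/3")
```

With few digits carried, the unpivoted x1 is badly wrong while the pivoted version stays close to 1/3.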
Pitfall-III
• Round-off error
– Important when dealing with 100 or more equations, because errors propagate through the many elimination steps
• Remedy
– Increase the number of significant figures carried (extended precision)
– This increases the computational effort
Remedy
• Pivoting
– Switching the largest available element into the pivot position
• Complete pivoting
– Both the columns and the rows are searched for the largest element, and both are switched
• Partial pivoting
– Only the column below the pivot element is searched for the largest element, and the corresponding row is switched
• Complete pivoting is rarely used because switching columns changes the order of the x's and adds complexity to the program
• In programs, rows are usually not physically switched
• Instead, the pivot row numbers are tracked in an index vector
• The index vector defines the order of forward elimination and backward substitution without moving any data
• Moreover, scaling is required to identify whether pivoting is necessary (see the sketch below)
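A sketch combining both ideas: an index vector `idx` records the pivot order instead of swapping rows, and scaled magnitudes are used only to select the pivot while the arithmetic keeps the original coefficients. Names are illustrative:

```python
import numpy as np

def gauss_scaled_pivot(A, b):
    """Gauss elimination with scaled partial pivoting.
    Rows are never moved; idx records the pivot order instead."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    idx = np.arange(n)                  # bookkeeping: idx[k] = row used as k-th pivot
    s = np.abs(A).max(axis=1)           # scale factor: largest |a| in each row
    for k in range(n - 1):
        # pick the candidate row whose SCALED pivot magnitude is largest
        rows = idx[k:]
        p = k + int(np.argmax(np.abs(A[rows, k]) / s[rows]))
        idx[k], idx[p] = idx[p], idx[k]
        piv = idx[k]
        for r in idx[k + 1:]:           # eliminate column k from the remaining rows
            f = A[r, k] / A[piv, k]
            A[r, k:] -= f * A[piv, k:]
            b[r] -= f * b[piv]
    # back substitution, following the recorded pivot order
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        r = idx[k]
        x[k] = (b[r] - A[r, k + 1:] @ x[k + 1:]) / A[r, k]
    return x

# assumed 2x2 example; the scaled check makes row 2 the first pivot
print(gauss_scaled_pivot([[2.0, 100000.0], [1.0, 1.0]], [100000.0, 2.0]))
# -> approximately [1.00002, 0.99998]
```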
Example
• The system (coefficients inferred from the elimination results quoted below); exact solution x1 = 1.00002, x2 = 0.99998:
2x1 + 100000x2 = 100000
x1 + x2 = 2
• Without scaling and without pivoting: forward elimination gives 49999 x2 = 49998, so x2 = 0.99998, which rounds to 1.0; back substitution then yields x1 = 0.0, a severe round-off error
• With scaling and pivoting: scaling makes the largest coefficient in each row equal to 1 and identifies the need to pivot; elimination on the pivoted rows gives 0.99998 x2 = 0.99996, so x2 ≈ 1.0 and x1 = 1.0, correct to the digits carried
• Without scaling but with pivoting (elimination on the original coefficients): 99998 x2 = 99996, so x2 ≈ 1.0 and x1 = 1.0, again correct
• Scaling should therefore be used only to identify the need for pivoting; pivoting and Gauss elimination should be done with the original coefficient values to avoid the round-off error introduced by scaling
Pitfall-IV
• Singular systems
– The slopes of the equations are the same
– Graphically, the lines are either parallel or coincident
– No solution or infinitely many solutions
– A computer algorithm should recognize this case
– The determinant is zero:

$$\det[A] = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{21}a_{12} = 0$$
Pitfalls
• Ill-conditioned equations
– The slopes of the equations are nearly the same
– The determinant (the denominator in Cramer's rule) is nearly zero
Determinant determination
• Example 1: well-conditioned problem
• Example 2: ill-conditioned problem
• Example 3: ill-conditioned problem
• The raw value of the determinant is not a reliable indicator of ill-conditioning, because it scales with the magnitude of the coefficients
Determinant determination with scaling
• Example 1: well-conditioned problem
• Example 2: ill-conditioned problem
• Example 3: ill-conditioned problem
• The value of the determinant after scaling is a reliable indicator of ill-conditioning. However, it is difficult to evaluate for more than 3 equations
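A quick numerical check of the last two slides, with assumed 2x2 examples (nearly parallel lines for the ill-conditioned case). Multiplying an ill-conditioned system by 1000 makes its raw determinant look large, while the determinant after row scaling stays near zero:

```python
import numpy as np

systems = {
    "well-conditioned":      np.array([[3.0, 2.0], [2.0, -1.0]]),
    "ill-conditioned":       np.array([[1.0, 2.0], [1.001, 2.0]]),  # nearly parallel
    "ill, multiplied x1000": np.array([[1000.0, 2000.0], [1001.0, 2000.0]]),
}
for name, A in systems.items():
    # scale so the largest element in each row becomes 1
    scaled = A / np.abs(A).max(axis=1, keepdims=True)
    print(f"{name:24s} det = {np.linalg.det(A):10.3f}   "
          f"scaled det = {np.linalg.det(scaled):9.5f}   "
          f"cond = {np.linalg.cond(A):10.1f}")
```

The condition number (printed last) flags both ill-conditioned variants regardless of scaling.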
Another Indicator
• Ill-conditioned equations: inverse matrix
– Scale the coefficient matrix [A] so that the largest element in each row is 1
– Invert the scaled matrix
– If there are elements of the inverse that are several orders of magnitude greater than one, the system is likely ill-conditioned
Other indicators
• If the product of the inverse with the original matrix is not close to the identity matrix, the system may be ill-conditioned
• If the inverse of the inverted matrix is not close to the original matrix, the system may be ill-conditioned
• Both checks are computationally inefficient
Another indicator
• Ill-conditioned equations:
– A small change in a coefficient results in a large change in the solution
• Example
• A change of less than 5% in a coefficient produces a 100% change in x1
Pitfalls
• Ill-conditioned equations: the matrix condition number is much greater than unity

$$\frac{\lVert \Delta x \rVert}{\lVert x \rVert} \le \operatorname{Cond}[A]\,\frac{\lVert \Delta A \rVert}{\lVert A \rVert}, \qquad \operatorname{Cond}[A] = \lVert A \rVert\,\lVert A^{-1} \rVert \ge 1$$

• A matrix that is not invertible has an infinite condition number
Matrix norm
• The norm of a matrix is a non-negative real number that measures how large its elements are. It gauges the "size" of a matrix not by its number of rows or columns but by the magnitude of its entries
• Frobenius (Euclidean) norm (root sum of squares)
• Infinity norm (maximum absolute row sum)
• 1-norm (maximum absolute column sum)
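The three norms named above, in standard form:

$$\lVert A \rVert_F = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^2} \qquad
\lVert A \rVert_\infty = \max_{1\le i\le n} \sum_{j=1}^{n} \lvert a_{ij} \rvert \qquad
\lVert A \rVert_1 = \max_{1\le j\le n} \sum_{i=1}^{n} \lvert a_{ij} \rvert$$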
Gauss-Jordan
• Divide each pivot row by its pivot element to make the pivot equal to 1
• Eliminate the unknown from all the other (n-1) equations, not only those below the pivot (full elimination rather than forward elimination only)
• Results in an identity matrix instead of an upper triangular matrix
• No need for backward substitution (see the sketch below)
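A minimal sketch of the Gauss-Jordan reduction described above (pivot normalized to 1, elimination above and below the pivot, no pivoting logic):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A|b] to [I|x];
    the right-hand side column becomes the solution."""
    Ab = np.hstack([np.array(A, float), np.array(b, float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        Ab[k] /= Ab[k, k]               # normalize: pivot element becomes 1
        for i in range(n):              # eliminate column k from ALL other rows
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]
    return Ab[:, -1]                    # no back substitution needed

print(gauss_jordan([[3.0, -0.1, -0.2],
                    [0.1,  7.0, -0.3],
                    [0.3, -0.2, 10.0]], [7.85, -19.3, 71.4]))
# -> approximately [3.0, -2.5, 7.0]
```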
• Multiplications/divisions: $n \cdot n + (n-1)n + (n-2)n + \cdots = \frac{n(n+1)}{2}\,n = \frac{n^3}{2} + \frac{n^2}{2}$
– i.e., Σ over pivots of (number of terms in each equation, n-i+1) × (number of equations, n)
• Additions/subtractions: $n(n-1) + (n-1)(n-1) + (n-2)(n-1) + \cdots = \frac{n(n+1)}{2}(n-1) = \frac{n^3}{2} - \frac{n}{2}$
– i.e., Σ over pivots of (number of terms in each equation, n-i+1) × (number of equations, n-1)
• Number of flops ≈ n³
• Computation is about 50% more than Gauss elimination (≈ 2n³/3)
Special matrices
• Banded Matrix
– Tridiagonal Matrix
(ODE discretization)
• Sparse Matrix
(PDE discretization)
Tri-diagonal Matrix
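Tridiagonal systems of this kind are normally solved with the Thomas algorithm, a specialized elimination that costs O(n) rather than O(n³); a sketch assuming the usual sub-/main-/super-diagonal storage:

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):               # forward sweep (elimination)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# e.g. the matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] with RHS [1,0,1]
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))  # -> [1.0, 1.0, 1.0]
```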
Iterative procedure
• The initial guess may be taken as zero or one for every unknown
• Stopping criterion: iterate until the relative change in each unknown falls below a specified tolerance, $\left|\frac{x_i^{new} - x_i^{old}}{x_i^{new}}\right| \times 100\% < \varepsilon_s$
Jacobi Method
• Every update uses only values from the previous iteration, so parallelization is possible
• Iteration history (starting from zero):
x1 = 0, 2.616667, 3.000762, 3.000806, 3.000022, 2.999999, 3
x2 = 0, -2.75714, -2.48852, -2.49974, -2.5, -2.5, -2.5
x3 = 0, 7.14, 7.006357, 7.000207, 6.999981, 6.999999, 7
Gauss-Seidel Method
• Each update immediately uses the newest available values, so parallelization is difficult, but convergence is faster
• Iteration history (starting from zero):
x1 = 0, 2.616667, 2.990557, 3.000032, 3
x2 = 0, -2.79452, -2.49962, -2.49999, -2.5
x3 = 0, 7.00561, 7.000291, 6.999999, 7
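A sketch of both iterations on the same assumed 3x3 system as before; Jacobi builds every new value from the old vector, while Gauss-Seidel overwrites in place:

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])      # assumed example system
b = np.array([7.85, -19.3, 71.4])

def jacobi(A, b, tol=1e-6, max_it=100):
    x = np.zeros(len(b))
    d = np.diag(A)
    for _ in range(max_it):
        x_new = (b - A @ x + d * x) / d      # every update uses the OLD x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, tol=1e-6, max_it=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_it):
        x_old = x.copy()
        for i in range(n):                   # newest values used immediately
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

print(jacobi(A, b))        # -> approximately [3.0, -2.5, 7.0]
print(gauss_seidel(A, b))  # same answer in fewer iterations
```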
Convergence Condition
• A sufficient condition is that the coefficient matrix be diagonally dominant: in every row, $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$
Reducing flops
• Pre-divide the coefficients of each row by its diagonal element once, before iterating, so the division is not repeated in every iteration
Relaxation
• After each update, blend the new value with the old one:

$$x_i^{new} = \lambda\, x_i^{new} + (1-\lambda)\, x_i^{old}$$

where the $x_i^{new}$ on the right-hand side is the value obtained from the Jacobi or Gauss-Seidel update, and the left-hand side is the value after applying the relaxation factor
• λ = 0 to 1: under-relaxation (damping, used to help nonconvergent or oscillating systems converge)
• λ = 1 to 2: over-relaxation (extrapolates when the iterates are already moving in the right direction)
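A Gauss-Seidel variant applying the relaxation factor exactly as in the formula above (`lam` is an illustrative parameter name):

```python
import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.0, tol=1e-6, max_it=200):
    """Gauss-Seidel with relaxation: 0 < lam < 1 damps (under-relaxation),
    1 < lam < 2 extrapolates (over-relaxation); lam = 1 is plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_it):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = lam * gs + (1 - lam) * x[i]   # blend new and old value
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])      # assumed example system
b = np.array([7.85, -19.3, 71.4])
print(gauss_seidel_relaxed(A, b, lam=1.2))   # -> approximately [3.0, -2.5, 7.0]
```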