CE2407B Lecture 4
Lecture Notes 4
Iterative Methods
(Gauss-Seidel, Jacobi and SOR)
Kevin Kuang Sze Chiang
Department of Civil and Environmental Engineering
Room E2-04-11, Tel: 6516 4683, Email: ceeksck@nus.edu.sg
Lecture 4: Iterative Techniques for Solving Linear Systems
General Overview
The Jacobi method is based on solving for every variable locally with
respect to the other variables; one iteration of the method corresponds to
solving for every variable once. The resulting method is easy to understand
and implement, but convergence is slow.
The Gauss-Seidel method is like the Jacobi method, except that it uses
updated values as soon as they are available. In general, if the Jacobi
method converges, the Gauss-Seidel method will converge faster than the
Jacobi method, though still relatively slowly. A sketch of both sweeps follows.
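To make the two update rules concrete, here is a minimal Python sketch (not from the notes; the function names are mine). Each function performs one sweep over all the variables and assumes nonzero diagonal entries.

```python
import numpy as np

def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every update uses only the previous vector x."""
    x_new = np.empty_like(x)
    for i in range(len(b)):
        s = A[i, :] @ x - A[i, i] * x[i]      # sum of a_ij * x_j for j != i
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: updated values are used as soon as available."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :] @ x - A[i, i] * x[i]      # x[0..i-1] already hold new values
        x[i] = (b[i] - s) / A[i, i]
    return x
```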
Gauss-Seidel Method
Sn :  a11 x1 + a12 x2 + ....... + a1n xn = b1   (E1)
      a21 x1 + a22 x2 + ....... + a2n xn = b2   (E2)
        .        .                  .       .
      an1 x1 + an2 x2 + ....... + ann xn = bn   (En)
Assume now we have a 3x3 set of equations and all diagonal elements are
non-zero.
a11 x1 + a12 x2 + a13 x3 = b1   →   x1 = (b1 − a12 x2 − a13 x3) / a11
a21 x1 + a22 x2 + a23 x3 = b2   →   x2 = (b2 − a21 x1 − a23 x3) / a22
a31 x1 + a32 x2 + a33 x3 = b3   →   x3 = (b3 − a31 x1 − a32 x2) / a33
Gauss-Seidel Method
Repeat the procedure with the new values of x1, x2 and x3 to obtain newer values
until the solution converges to the true values. Convergence can be checked using the
stopping criterion

εa = | (current approximation − previous approximation) / current approximation | × 100% < εs

where εa is the error normalized to the approximated value and εs is the pre-specified error.
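A hedged sketch of this stopping test (the helper names are mine): apply the criterion to every component, taking the worst case, and repeat a given sweep until εa drops below εs.

```python
def approx_rel_error_pct(x_new, x_old):
    """Largest |(current - previous) / current| * 100% over all components
    (assumes the current values are nonzero)."""
    return max(abs((xn - xo) / xn) for xn, xo in zip(x_new, x_old)) * 100.0

def iterate_until_converged(sweep, A, b, x0, eps_s=0.1, max_sweeps=100):
    """Repeat the given sweep (Jacobi or Gauss-Seidel) until eps_a < eps_s."""
    x = x0
    for k in range(1, max_sweeps + 1):
        x_new = sweep(A, b, x)
        if approx_rel_error_pct(x_new, x) < eps_s:
            return x_new, k
        x = x_new
    return x, max_sweeps
```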
Gauss-Seidel Method - Worked Example 1
Consider the system (the same one tabulated in the Excel sections below):

3 x1 − 0.1 x2 − 0.2 x3 = 7.85
0.1 x1 + 7 x2 − 0.3 x3 = −19.3
0.3 x1 − 0.2 x2 + 10 x3 = 71.4
εa,1 = | (2.990557 − 2.616667) / 2.990557 | × 100% = 12.5%
εa,2 = | (−2.499625 + 2.794524) / (−2.499625) | × 100% = 11.8%
εa,3 = | (7.000291 − 7.005610) / 7.000291 | × 100% = 0.076%
Gauss-Seidel Method Using Excel
Iteration matrix T and constant vector (converges on the 3rd iteration):

T      j=1      j=2      j=3      j=4      j=5      b
i=1    0.0000  -0.0333  -0.0667   0.0000   0.0000   2.616667
i=2    0.0143   0.0000  -0.0429   0.0000   0.0000  -2.75714
i=3    0.0300  -0.0200   0.0000   0.0000   0.0000   7.14
i=4    0.0000   0.0000   0.0000   0.0000   0.0000   0
i=5    0.0000   0.0000   0.0000   0.0000   0.0000   0

       xi(0)   xi(k), k=1   k=2     k=3     k=4     k=5
i=1    0        2.62        2.99    3.00    3.00    3.00
i=2    0       -2.79       -2.50   -2.50   -2.50   -2.50
i=3    0        7.01        7.00    7.00    7.00    7.00
i=4    0        0.00        0.00    0.00    0.00    0.00
i=5    0        0.00        0.00    0.00    0.00    0.00
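As a check, the spreadsheet run above can be reproduced with the sweep sketched in the overview (the coefficients are the A and b tabulated in the Jacobi section below):

```python
A = np.array([[ 3.0, -0.1, -0.2],
              [ 0.1,  7.0, -0.3],
              [ 0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

x = np.zeros(3)
for k in range(1, 6):
    x = gauss_seidel_sweep(A, b, x)
    print(k, np.round(x, 2))      # settles at [ 3.  -2.5  7. ] by k = 3
```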
Jacobi Method - same example as before
εa,1 = | (3.000762 − 2.616667) / 3.000762 | × 100% = 12.8%
εa,2 = | (−2.488524 + 2.75714) / (−2.488524) | × 100% = 10.8%
εa,3 = | (7.00636 − 7.14) / 7.00636 | × 100% = 1.91%
Jacobi Method Using Excel
A      j=1      j=2      j=3      j=4      j=5      b
i=1    3.0000  -0.1000  -0.2000   0.0000   0.0000   7.85
i=2    0.1000   7.0000  -0.3000   0.0000   0.0000  -19.3
i=3    0.3000  -0.2000  10.0000   0.0000   0.0000   71.4
i=4    0.0000   0.0000   0.0000   0.0000   0.0000   0
i=5    0.0000   0.0000   0.0000   0.0000   0.0000   0

T      j=1      j=2      j=3      j=4      j=5      b
i=1    0.0000  -0.0333  -0.0667   0.0000   0.0000   2.6166667
i=2    0.0143   0.0000  -0.0429   0.0000   0.0000  -2.757143
i=3    0.0300  -0.0200   0.0000   0.0000   0.0000   7.14
i=4    0.0000   0.0000   0.0000   0.0000   0.0000   0
i=5    0.0000   0.0000   0.0000   0.0000   0.0000   0

Converges on the 3rd iteration.

       xi(0)   x(k), k=1   k=2     k=3     k=4     k=5
i=1    0        2.62        3.00    3.00    3.00    3.00
i=2    0       -2.76       -2.49   -2.50   -2.50   -2.50
i=3    0        7.14        7.01    7.00    7.00    7.00
i=4    0        0.00        0.00    0.00    0.00    0.00
i=5    0        0.00        0.00    0.00    0.00    0.00
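The same system with Jacobi sweeps, reusing the sketch from the overview; the first sweep gives (2.62, −2.76, 7.14), matching the k = 1 column above:

```python
x = np.zeros(3)
for k in range(1, 6):
    x = jacobi_sweep(A, b, x)     # A, b as defined for Worked Example 1
    print(k, np.round(x, 2))
```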
S3 :  10 x1 + 2 x2 − 3 x3 = 27
      −3 x1 − 6 x2 + 2 x3 = −61.5
        x1 +   x2 + 5 x3 = −21.5

Try working this out on your own manually and using Excel.
Jacobi Method
       xi(0)   x(k), k=1   2       3      4      5      6      7      8      9
i=1    0        2.70      -0.640  -0.86  -0.65  -0.79  -0.81  -0.80  -0.80  -0.80
i=2    0       10.25       7.467   8.27   8.79   8.65   8.67   8.70   8.69   8.69
i=3    0       -4.30      -6.890  -5.67  -5.78  -5.93  -5.87  -5.87  -5.88  -5.88

Gauss-Seidel Method
       xi(0)   x(k), k=1   2       3      4      5      6
i=1    0        2.700     -1.066  -0.76  -0.81  -0.80  -0.80
i=2    0        8.900      8.576   8.69   8.69   8.69   8.69
i=3    0       -6.620     -5.802  -5.89  -5.88  -5.88  -5.88
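A quick hedged check of the two tables with the helpers sketched earlier; with a 0.5% tolerance (my choice), Jacobi takes a few more sweeps than Gauss-Seidel on S3:

```python
A3 = np.array([[10.0,  2.0, -3.0],
               [-3.0, -6.0,  2.0],
               [ 1.0,  1.0,  5.0]])
b3 = np.array([27.0, -61.5, -21.5])

for sweep in (jacobi_sweep, gauss_seidel_sweep):
    x, k = iterate_until_converged(sweep, A3, b3, np.zeros(3), eps_s=0.5)
    print(sweep.__name__, k, np.round(x, 2))   # both end near (-0.80, 8.69, -5.88)
```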
Gauss-Seidel Method - Worked Example 3 with Excel

Given the system, arranged so that the diagonal is dominant:

 6 x1 −   x2 −    x3 = 3
 6 x1 + 9 x2 +    x3 = 40
−3 x1 +   x2 + 12 x3 = 50

A matrix is diagonally dominant if, in every row, the diagonal entry outweighs the rest:

| aii | > Σ (j = 1..n, j ≠ i) | aij |

This applies to both GS and Jacobi: the two methods are reliable for systems with dominant diagonals.

If we instead iterate without rearranging the equations into this diagonally dominant order, starting from xi(0) = 1, the Gauss-Seidel iterates diverge rapidly:

       xi(0)   x(k), k=1    k=2            k=3              k=4               k=5 ...
i=1    1       -12.333       3,225.333     -738,239.33       169,063,645.00    (overflow)
i=2    1       -78.000       18,532.000    -4,243,339.00     971,762,340.00    (overflow)
i=3    1        817.000     -186,100.000    42,619,527.00    (overflow)        (overflow)
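A small helper (my own, not from the notes) to test the dominance criterion before iterating:

```python
def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i for every row i."""
    A = np.abs(np.asarray(A, dtype=float))   # reuses numpy imported earlier
    diag = np.diag(A)
    return bool(np.all(diag > A.sum(axis=1) - diag))

print(is_diagonally_dominant([[ 6, -1, -1],
                              [ 6,  9,  1],
                              [-3,  1, 12]]))    # True in this arrangement
```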
Try working this out on your own manually and using Excel, with both the Jacobi and the
Gauss-Seidel methods. Let's try solving without rearranging the equations. How about using
Gaussian elimination without rearranging the equations? (Unlike the iterative methods, GE
does not require diagonal dominance; the equation order only matters for pivoting.)
Gauss-Seidel and Jacobi Method - Example in notes
[Figure: plane truss with joints 1-4 and members F1-F5; applied loads of 20 kN and 10 kN at joint 1, 15 kN and 5 kN at joint 2, with a 30° member inclination. Inset: free-body diagram (FBD) of joint 1 showing F1, F2 and F4.]

At joint 1, taking the summation of vertical forces (↑+):
20 + F1 + F4 cos 60° = 0   →   F1 + 0.5 F4 = −20

Taking the summation of horizontal forces (→+):
10 + F2 + F4 cos 30° = 0   →   −F2 − (√3/2) F4 = 10
Gauss-Seidel and Jacobi Method - Worked Example

Writing the equilibrium equations joint by joint:

(J1)  ΣFy = 0:   F1 + (1/2) F4 = −20
(J1)  ΣFx = 0:   −F2 − (√3/2) F4 = 10
(J2)  ΣFy = 0:   F3 + (1/2) F5 = −15
(J3)  ΣFx = 0:   (√3/2) F4 = 5
(J2)  ΣFx = 0:   F2 + (√3/2) F5 = 5

Matrix form:

| 1    0    0    1/2     0    |  | F1 |     | −20 |
| 0   −1    0   −√3/2    0    |  | F2 |     |  10 |
| 0    0    1    0       1/2  |  | F3 |  =  | −15 |
| 0    0    0    √3/2    0    |  | F4 |     |   5 |
| 0    1    0    0       √3/2 |  | F5 |     |   5 |
Gauss-Seidel and Jacobi Method - Worked Example

Jacobi Method
       xi(0)   x(k), k=1   2       3       4       5       6
i=1    0      -20.00      -22.89  -22.89  -22.89  -22.89  -22.89
i=2    0      -10.00      -15.00  -15.00  -15.00  -15.00  -15.00
i=3    0      -15.00      -17.89  -23.66  -26.55  -26.55  -26.55
i=4    0        5.77        5.77    5.77    5.77    5.77    5.77
i=5    0        5.77       17.32   23.09   23.09   23.09   23.09
Say a 5 x 5 system with plenty of zeroes in the top triangle... Using GE for a large
sparse matrix will use a lot of computer storage space; the zeros get carried along as
part of the normal elimination procedure.
Gauss-Seidel Method
       xi(0)   x(k), k=1   2       3       4       5       6
i=1    0      -20.00      -22.89  -22.89  -22.89  -22.89  -22.89
i=2    0      -10.00      -15.00  -15.00  -15.00  -15.00  -15.00
i=3    0      -15.00      -23.66  -26.55  -26.55  -26.55  -26.55
i=4    0        5.77        5.77    5.77    5.77    5.77    5.77
i=5    0       17.32       23.09   23.09   23.09   23.09   23.09

||x(k) − x(k−1)||2:  32.53204  11.9024  2.88675  0.0000  0.0000  0.0000
(square root of the sum of squared differences between old and new values)

||x(k) − x(k−1)||2 / ||x(k)||2 (from k = 2):  0.27485  0.06423  0.0000  0.0000  0.0000
(the same change divided by the square root of the sum of squares of the new values)
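The two measures tabulated above, sketched in Python (the function names are mine):

```python
def abs_change(x_new, x_old):
    """||x(k) - x(k-1)||2: root of the summed squared differences."""
    return float(np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)))

def rel_change(x_new, x_old):
    """The same change normalized by ||x(k)||2, the norm of the new vector."""
    return abs_change(x_new, x_old) / float(np.linalg.norm(x_new))
```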
Successive Over Relaxation (SOR) Method
Beam me up... one last time!
After each new value is computed by Gauss-Seidel, it is blended with the previous value
through a relaxation factor λ:

xi(new) = λ xi(GS) + (1 − λ) xi(old)

If λ is 0.5, the relaxation is essentially the average of the old and new values
(equally weighted).
If λ is 1, the value of x is not modified by relaxation.
If λ is between 0 and 1, the result is a weighted average of the present and the previous
results. This type of modification is called under-relaxation. It is typically used
to hasten convergence by dampening out oscillations.
If λ is between 1 and 2, extra weight is placed on the present value. Implicitly, by doing this,
we are assuming that the new value is moving in the correct direction toward the
true solution, but at too slow a rate. The added weight λ is intended to
improve the estimate by pushing it closer to the solution. Hence, this type of
modification, called over-relaxation, is designed to accelerate the
convergence of an already convergent system. This is also called successive
over-relaxation, or SOR. (The later slides write the relaxation factor as ω.)
A sketch of one SOR sweep follows.
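A minimal SOR sweep under the blending rule above (a sketch, with my own names; setting lam = 1 recovers plain Gauss-Seidel):

```python
def sor_sweep(A, b, x, lam=1.1):
    """One SOR sweep: Gauss-Seidel update blended with the previous value."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :] @ x - A[i, i] * x[i]
        x_gs = (b[i] - s) / A[i, i]              # ordinary Gauss-Seidel value
        x[i] = lam * x_gs + (1.0 - lam) * x[i]   # relax toward (or past) it
    return x
```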
Successive Over Relaxation (SOR) Method - Example

SOR Method (a modified Gauss-Seidel method)
A      j=1      j=2      j=3      j=4      j=5      b
i=1    1.0000   0.0000   0.0000   0.5000   0.0000  -20
i=2    0.0000  -1.0000   0.0000  -0.8660   0.0000   10
i=3    0.0000   0.0000   1.0000   0.0000   0.5000  -15
i=4    0.0000   0.0000   0.0000   0.8660   0.0000    5
i=5    0.0000   1.0000   0.0000   0.0000   0.8660    5

ω = 1.1
       xi(0)   x(k), k=1   2       3       4       5       6
i=1    0      -22.00      -23.29  -22.81  -22.90  -22.89  -22.89
i=2    0      -11.00      -15.95  -14.85  -15.02  -15.00  -15.00
i=3    0      -16.50      -26.03  -27.42  -26.27  -26.61  -26.54
i=4    0        6.35        5.72    5.78    5.77    5.77    5.77
i=5    0       20.32       24.58   22.76   23.15   23.09   23.10
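Reproducing this run with the sor_sweep sketch (the matrix is the truss system tabulated above, with ω = 1.1):

```python
A_t = np.array([[1.0,  0.0, 0.0,  0.500, 0.000],
                [0.0, -1.0, 0.0, -0.866, 0.000],
                [0.0,  0.0, 1.0,  0.000, 0.500],
                [0.0,  0.0, 0.0,  0.866, 0.000],
                [0.0,  1.0, 0.0,  0.000, 0.866]])
b_t = np.array([-20.0, 10.0, -15.0, 5.0, 5.0])

x = np.zeros(5)
for k in range(1, 7):
    x = sor_sweep(A_t, b_t, x, lam=1.1)
print(np.round(x, 2))    # close to [-22.89 -15.   -26.55   5.77  23.09]
```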
Successive Over Relaxation (SOR) Method - Example

The choice of an optimum value for the relaxation factor ω is highly problem-specific and
is often determined empirically. Given an optimal value of ω, this method will converge
faster than Gauss-Seidel.
Given

|  4   1   1   0   1 |  | x1 |     | 6 |
| −1  −3   1   1   0 |  | x2 |     | 6 |
|  2   1   5  −1  −1 |  | x3 |  =  | 6 |
| −1  −1  −1   4   0 |  | x4 |     | 6 |
|  0   2  −1   1   4 |  | x5 |     | 6 |

iterate using SOR with ω = 1.06 and compare with the Gauss-Seidel method. A quick script
for the comparison follows.
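One possible script for the requested comparison, reusing the sweeps sketched earlier:

```python
A5 = np.array([[ 4.0,  1.0,  1.0,  0.0,  1.0],
               [-1.0, -3.0,  1.0,  1.0,  0.0],
               [ 2.0,  1.0,  5.0, -1.0, -1.0],
               [-1.0, -1.0, -1.0,  4.0,  0.0],
               [ 0.0,  2.0, -1.0,  1.0,  4.0]])
b5 = np.full(5, 6.0)

x_gs, x_sor = np.zeros(5), np.zeros(5)
for k in range(1, 7):
    x_gs  = gauss_seidel_sweep(A5, b5, x_gs)
    x_sor = sor_sweep(A5, b5, x_sor, lam=1.06)
    print(k, np.round(x_gs, 2), np.round(x_sor, 2))
```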
Gauss-Seidel vs SOR Method - Example

Gauss-Seidel Method
       xi(0)   x(k), k=1   2        3        4       5       6
i=1    0       1.50        1.19     0.85     0.78    0.78    0.79
i=2    0      -2.50       -1.52    -1.04    -0.99   -1.00   -1.00
i=3    0       1.10        1.86     1.89     1.87    1.87    1.87
i=4    0       1.53        1.88     1.93     1.92    1.91    1.91
i=5    0       2.64        2.26     2.01     1.98    1.99    1.99

||x(k) − x(k−1)||2:  4.3618  1.3835  0.64369  0.0912  0.0142  0.0055
||x(k) − x(k−1)||2 / ||x(k)||2 (from k = 2):  0.3477  0.17759  0.0256  0.0040  0.0015

SOR Method (ω = 1.06)
       xi(0)   x(k), k=1   2        3        4       5       6
i=1    0       1.56        1.15     0.81     0.78    0.79    0.79
i=2    0      -2.62       -1.43    -0.97    -0.98   -1.00   -1.00
i=3    0       1.14        1.93     1.89     1.86    1.87    1.87
i=4    0       1.58        1.93     1.93     1.91    1.91    1.91
i=5    0       2.81        2.19     1.96     1.98    1.99    1.99

||x(k) − x(k−1)||2:  4.583  1.6462  0.6195  0.0553  0.0223  0.0022
||x(k) − x(k−1)||2 / ||x(k)||2 (from k = 2):  0.4164  0.1734  0.0156  0.0063  0.0006

(Each norm row is the square root of the sum of squared differences between old and new
values; the relative row divides it by the norm of the new vector.)