
CE2407 Engineering and Uncertainty Analysis

Lecture Notes 4
Iterative Methods
(Gauss Seidel, Jacobi and SOR)
Kevin Kuang Sze Chiang
Department of Civil and Environmental Engineering
Room E2-04-11, Tel: 6516 4683, Email: ceeksck@nus.edu.sg

Lecture 4: Iterative Techniques for Solving Linear Systems

The methods presented in Lecture No. 3 used direct techniques to solve a system of n × n linear equations (i.e. solve for the unknowns) of the form Ax = b.

This lecture deals with some common iterative methods for solving a system of this type.

What is an iterative method? The term "iterative method" refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step.

In this lecture we will discuss 3 types of (stationary) iterative methods:

1. Gauss-Seidel Method
2. Jacobi Method
3. SOR (Successive Overrelaxation) Method
General Overview

The Gauss-Seidel method is like the Jacobi method, except that it uses updated values as soon as they are available. In general, if the Jacobi method converges, the Gauss-Seidel method will converge faster than the Jacobi method, though still relatively slowly.

The Jacobi method is based on solving for every variable locally with respect to the other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow.

Successive Overrelaxation (SOR) can be derived from the Gauss-Seidel method by introducing an extrapolation parameter ω. For the optimal choice of ω, SOR may converge faster than Gauss-Seidel by an order of magnitude.
Gauss-Seidel Method

Given a set of n × n equations:

$$S_n = \begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 & (E_1) \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 & (E_2) \\ \qquad\vdots & \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n & (E_n) \end{cases}$$

In matrix form, we can write this as:

$$[A]\{X\} = \{B\}$$

Previously, in Gaussian Elimination/LU, we made use of the matrix form quite a bit. Here, we will use the algebraic form, which makes today's algorithm easier to understand. The GS method can also be expressed in matrix form, but let's keep it simple so we can focus on the algorithm.
Gauss-Seidel Method - most popular

Assume now we have a 3 × 3 set of equations and all diagonal elements are non-zero:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3 \end{aligned}$$

Solving each equation for the unknown on its diagonal:

$$x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \qquad x_2 = \frac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}} \qquad x_3 = \frac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}$$
Gauss-Seidel Method

$$x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}\ (\text{Eq1}) \qquad x_2 = \frac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}}\ (\text{Eq2}) \qquad x_3 = \frac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}\ (\text{Eq3})$$

Steps:
Make a guess for the values of x1, x2 and x3. The simplest is to assume the initial values of x to be zero. These initial values are substituted into Eq1, which is used to calculate x1. Then substitute this new x1 into Eq2 (x3 is still zero at this stage, as initially assumed) to obtain x2. Finally, substitute the new x1 and the new x2 into Eq3 to obtain the new x3.

Repeat the procedure with the new values of x1, x2 and x3 to obtain newer values until the solution converges to the true values. Convergence can be checked using the stopping criterion

$$\varepsilon_a = \left| \frac{\text{current approximation} - \text{previous approximation}}{\text{current approximation}} \right| \times 100\% < \varepsilon_s$$

where ε_a is the error normalized to the approximated value and ε_s is the pre-specified error.
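These steps translate directly into a few lines of code. Below is a minimal Python sketch (the function name, the NumPy dependency, and the default eps_s tolerance in percent are illustrative choices, not from the notes):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, eps_s=0.01, max_iter=100):
    """Gauss-Seidel sweep: each x_i is updated in place, so later updates
    in the same sweep already use the newest available values."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # b_i minus the off-diagonal terms; x[:i] already holds new values
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = s / A[i, i]
        # stopping criterion from the notes: every eps_a < eps_s (both in %)
        denom = np.where(x != 0, x, 1.0)   # guard against dividing by zero
        eps_a = np.abs((x - x_old) / denom) * 100
        if np.all(eps_a < eps_s):
            return x, k
    return x, max_iter
```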
Gauss-Seidel Method - Worked Example 1

$$S_3 = \begin{cases} 3x_1 - 0.1x_2 - 0.2x_3 = 7.85 \\ 0.1x_1 + 7x_2 - 0.3x_3 = -19.3 \\ 0.3x_1 - 0.2x_2 + 10x_3 = 71.4 \end{cases}$$

1st stage of iteration:

$$x_1 = \frac{7.85 + 0.1x_2 + 0.2x_3}{3} = \frac{7.85 + 0 + 0}{3} = 2.616667$$

$$x_2 = \frac{-19.3 - 0.1x_1 + 0.3x_3}{7} = \frac{-19.3 - 0.1(2.616667) + 0}{7} = -2.794524$$

$$x_3 = \frac{71.4 - 0.3x_1 + 0.2x_2}{10} = \frac{71.4 - 0.3(2.616667) + 0.2(-2.794524)}{10} = 7.005610$$
Gauss-Seidel Method - Worked Example 1

2nd stage of iteration:

$$x_1 = \frac{7.85 + 0.1(-2.794524) + 0.2(7.005610)}{3} = 2.990557$$

$$x_2 = \frac{-19.3 - 0.1(2.990557) + 0.3(7.005610)}{7} = -2.499625$$

$$x_3 = \frac{71.4 - 0.3(2.990557) + 0.2(-2.499625)}{10} = 7.000291$$
Gauss-Seidel Method- Worked Example 1

2.990557 − 2.616667
ε a ,1 = 100% = 12.5%
2.990557
− 2.499625 + 2.794524
ε a,2 = 100% = 11.8%
− 2.499625
7.000291 − 7.005610
ε a ,3 = 100% = 0.076%
7.000291
#

NOW, you try the 3rd stage of iteration:


True values:
x1= 3, x2=-2.5, x3=7

Lecture 4 PG9
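As a quick check, the gauss_seidel sketch given earlier reproduces this example (the eps_s setting is illustrative):

```python
import numpy as np

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]

x, k = gauss_seidel(A, b, eps_s=0.01)
print(np.round(x, 6), k)   # approaches the true values x1 = 3, x2 = -2.5, x3 = 7
```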
Gauss Seidel Method Using Excel

Gauss Seidel Method

A matrix (with b):
 i\j      1        2        3        4        5        b
  1    3.0000  -0.1000  -0.2000   0.0000   0.0000    7.85
  2    0.1000   7.0000  -0.3000   0.0000   0.0000  -19.3
  3    0.3000  -0.2000  10.0000   0.0000   0.0000   71.4
  4    0.0000   0.0000   0.0000   0.0000   0.0000    0
  5    0.0000   0.0000   0.0000   0.0000   0.0000    0

T matrix (with b_i/a_ii):
 i\j      1        2        3        4        5        b
  1    0.0000  -0.0333  -0.0667   0.0000   0.0000   2.616667
  2    0.0143   0.0000  -0.0429   0.0000   0.0000  -2.75714
  3    0.0300  -0.0200   0.0000   0.0000   0.0000   7.14
  4    0.0000   0.0000   0.0000   0.0000   0.0000   0
  5    0.0000   0.0000   0.0000   0.0000   0.0000   0

Iterates (converge on the 3rd iteration):
  i   xi(0)   k=1     k=2     k=3     k=4     k=5
  1     0     2.62    2.99    3.00    3.00    3.00
  2     0    -2.79   -2.50   -2.50   -2.50   -2.50
  3     0     7.01    7.00    7.00    7.00    7.00
  4     0     0.00    0.00    0.00    0.00    0.00
  5     0     0.00    0.00    0.00    0.00    0.00

Tij = Aij/Aii when i ≠ j; Tij = 0 when i = j.
Jacobi Method

The Gauss-Seidel and Jacobi updates differ only in which values they substitute. Gauss-Seidel uses each new x as soon as it is computed; Jacobi computes every new x from the previous iteration only:

$$\text{Gauss-Seidel:}\quad x_i^{(k)} = \frac{b_i - \sum_{j<i} a_{ij}x_j^{(k)} - \sum_{j>i} a_{ij}x_j^{(k-1)}}{a_{ii}} \qquad \text{Jacobi:}\quad x_i^{(k)} = \frac{b_i - \sum_{j\neq i} a_{ij}x_j^{(k-1)}}{a_{ii}}$$
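A matching Python sketch of the Jacobi update (again minimal and illustrative; splitting A into its diagonal D and off-diagonal remainder R is one standard way to write a whole sweep at once):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-6, max_iter=100):
    """Jacobi sweep: every x_i is computed from the previous iterate only,
    so the whole sweep could in principle run in parallel."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b)) if x0 is None else np.array(x0, dtype=float)
    D = np.diag(A)                # diagonal entries a_ii
    R = A - np.diagflat(D)        # off-diagonal part of A
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D   # all variables updated from old values
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k
        x = x_new
    return x, max_iter
```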
Jacobi Method - same example as before

$$S_3 = \begin{cases} 3x_1 - 0.1x_2 - 0.2x_3 = 7.85 \\ 0.1x_1 + 7x_2 - 0.3x_3 = -19.3 \\ 0.3x_1 - 0.2x_2 + 10x_3 = 71.4 \end{cases}$$

1st stage of iteration (all three updates use the initial guesses):

$$x_1 = \frac{7.85 + 0.1x_2 + 0.2x_3}{3} = \frac{7.85 + 0 + 0}{3} = 2.616667$$

$$x_2 = \frac{-19.3 - 0.1x_1 + 0.3x_3}{7} = \frac{-19.3 - 0.1(0) + 0}{7} = -2.75714$$

$$x_3 = \frac{71.4 - 0.3x_1 + 0.2x_2}{10} = \frac{71.4 - 0.3(0) + 0.2(0)}{10} = 7.14$$
Jacobi Method - same example as before

$$x_1 = 2.616667 \qquad x_2 = -2.75714 \qquad x_3 = 7.14$$

2nd stage of iteration:

$$x_1 = \frac{7.85 + 0.1(-2.75714) + 0.2(7.14)}{3} = 3.000762$$

$$x_2 = \frac{-19.3 - 0.1(2.616667) + 0.3(7.14)}{7} = -2.488524$$

$$x_3 = \frac{71.4 - 0.3(2.616667) + 0.2(-2.75714)}{10} = 7.00636$$

NOW, you try the 3rd stage of iteration.
Jacobi Method - same example as before

$$\varepsilon_{a,1} = \left| \frac{3.000762 - 2.616667}{3.000762} \right| \times 100\% = 12.8\%$$

$$\varepsilon_{a,2} = \left| \frac{-2.488524 + 2.75714}{-2.488524} \right| \times 100\% = 10.8\%$$

$$\varepsilon_{a,3} = \left| \frac{7.00636 - 7.14}{7.00636} \right| \times 100\% = 1.91\%$$
Jacobi Method Using Excel

Jacobi Method

A matrix (with b):
 i\j      1        2        3        4        5        b
  1    3.0000  -0.1000  -0.2000   0.0000   0.0000    7.85
  2    0.1000   7.0000  -0.3000   0.0000   0.0000  -19.3
  3    0.3000  -0.2000  10.0000   0.0000   0.0000   71.4
  4    0.0000   0.0000   0.0000   0.0000   0.0000    0
  5    0.0000   0.0000   0.0000   0.0000   0.0000    0

T matrix (with b_i/a_ii):
 i\j      1        2        3        4        5        b
  1    0.0000  -0.0333  -0.0667   0.0000   0.0000   2.6166667
  2    0.0143   0.0000  -0.0429   0.0000   0.0000  -2.757143
  3    0.0300  -0.0200   0.0000   0.0000   0.0000   7.14
  4    0.0000   0.0000   0.0000   0.0000   0.0000   0
  5    0.0000   0.0000   0.0000   0.0000   0.0000   0

Iterates (converge on the 3rd iteration):
  i   xi(0)   k=1     k=2     k=3     k=4     k=5
  1     0     2.62    3.00    3.00    3.00    3.00
  2     0    -2.76   -2.49   -2.50   -2.50   -2.50
  3     0     7.14    7.01    7.00    7.00    7.00
  4     0     0.00    0.00    0.00    0.00    0.00
  5     0     0.00    0.00    0.00    0.00    0.00

Compare these values with those from standard Gauss-Seidel.
Gauss-Seidel Method - Worked Example 2 with Excel

$$S_3 = \begin{cases} 10x_1 + 2x_2 - 3x_3 = 27 \\ -3x_1 - 6x_2 + 2x_3 = -61.5 \\ x_1 + x_2 + 5x_3 = -21.5 \end{cases}$$

Try working this out on your own manually and using Excel.

Jacobi Method:
  i   xi(0)   k=1     k=2     k=3    k=4    k=5    k=6    k=7    k=8    k=9
  1     0     2.70   -0.640  -0.86  -0.65  -0.79  -0.81  -0.80  -0.80  -0.80
  2     0    10.25    7.467   8.27   8.79   8.65   8.67   8.70   8.69   8.69
  3     0    -4.30   -6.890  -5.67  -5.78  -5.93  -5.87  -5.87  -5.88  -5.88

Gauss Seidel Method - faster:
  i   xi(0)   k=1     k=2     k=3    k=4    k=5    k=6
  1     0     2.700  -1.066  -0.76  -0.81  -0.80  -0.80
  2     0     8.900   8.576   8.69   8.69   8.69   8.69
  3     0    -6.620  -5.802  -5.89  -5.88  -5.88  -5.88
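Running both sketches from earlier on this system shows the same pattern as the tables above (the tolerances are illustrative, so the exact sweep counts may differ slightly from the Excel runs):

```python
A = [[10.0,  2.0, -3.0],
     [-3.0, -6.0,  2.0],
     [ 1.0,  1.0,  5.0]]
b = [27.0, -61.5, -21.5]

_, k_gs  = gauss_seidel(A, b, eps_s=0.1)
_, k_jac = jacobi(A, b, tol=1e-3)
print(k_gs, k_jac)   # Gauss-Seidel typically needs fewer sweeps than Jacobi here
```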
Gauss-Seidel Method - Worked Example 3 with Excel

Not diagonally dominant:

$$S_3 = \begin{cases} -3x_1 + x_2 + 12x_3 = 50 \\ 6x_1 - x_2 - x_3 = 3 \\ 6x_1 + 9x_2 + x_3 = 40 \end{cases} \qquad \begin{bmatrix} -3 & 1 & 12 \\ 6 & -1 & -1 \\ 6 & 9 & 1 \end{bmatrix}$$

To guarantee convergence, the magnitude of the diagonal element must be greater than the sum of the magnitudes of the off-diagonal elements in each row:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j\neq i}}^{n} |a_{ij}|$$

This applies to both GS and Jacobi. Although the technique may sometimes work when this criterion is not met, convergence is guaranteed only if the condition is met.

Rearranged so the system is diagonally dominant:

$$S_3 = \begin{cases} 6x_1 - x_2 - x_3 = 3 \\ 6x_1 + 9x_2 + x_3 = 40 \\ -3x_1 + x_2 + 12x_3 = 50 \end{cases} \qquad \begin{bmatrix} 6 & -1 & -1 \\ 6 & 9 & 1 \\ -3 & 1 & 12 \end{bmatrix}$$

Gauss-Seidel on the original (non-dominant) ordering diverges:

  i   xi(0)   k=1        k=2           k=3             k=4              k=5
  1     1    -12.333     3,225.333    -738,239.33     169,063,645.00   (overflow)
  2     1    -78.000    18,532.000    -4,243,339.00   971,762,340.00   (overflow)
  3     1    817.000   -186,100.000   42,619,527.00   (overflow)       (overflow)

GS & Jacobi methods are reliable for systems with dominant diagonals.
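The row criterion is easy to test programmatically before iterating. A small sketch (the helper name is an illustrative choice):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_diagonally_dominant([[-3, 1, 12], [6, -1, -1], [6, 9, 1]]))   # False: diverges
print(is_diagonally_dominant([[6, -1, -1], [6, 9, 1], [-3, 1, 12]]))   # True: rows reordered
```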
Gauss-Seidel Method - Worked Example 3 with Excel

Try working this out on your own manually and using Excel.

Jacobi Method:
  i   xi(0)   k=1    k=2    k=3    k=4    k=5    k=6    k=7
  1     0     0.50   1.94   1.76   1.68   1.69   1.70   1.70
  2     0     4.44   3.65   2.72   2.79   2.84   2.83   2.83
  3     0     4.17   3.92   4.35   4.38   4.35   4.35   4.36

Gauss Seidel Method:
  i   xi(0)   k=1    k=2    k=3    k=4    k=5
  1     0     0.50   1.84   1.70   1.70   1.70
  2     0     4.11   2.78   2.83   2.83   2.83
  3     0     3.95   4.40   4.36   4.36   4.36

Let's try solving with Gauss-Seidel without rearranging the equations (see the divergent run on the previous page). How about using Gaussian Elimination without rearranging the equations?
Gauss-Seidel and Jacobi Method - Example in notes

Example 1: Consider the problem of determining the member forces F1-F5 in the following planar truss:

[Figure: a planar truss with joints 1, 2, 3 and 4, member forces F1-F5, a diagonal member F4 inclined at 30°, vertical loads of 20 kN and 15 kN, and horizontal loads of 10 kN and 5 kN. A free-body diagram of Joint 1 shows F1, F2 and F4 together with the 20 kN and 10 kN loads.]

Taking the summation of vertical forces at Joint 1 (upward positive):
20 + F1 + F4 cos 60° = 0  ⟹  F1 + 0.5 F4 = -20

Taking the summation of horizontal forces at Joint 1 (rightward positive):
10 + F2 + F4 cos 30° = 0  ⟹  -F2 - (√3/2) F4 = 10
Gauss-Seidel and Jacobi Method - Worked Example

$$S_5 = \begin{cases} (J1):\ \sum F_y = 0: & F_1 + \tfrac{1}{2}F_4 = -20 \\ (J1):\ \sum F_x = 0: & -F_2 - \tfrac{\sqrt{3}}{2}F_4 = 10 \\ (J2):\ \sum F_y = 0: & F_3 + \tfrac{1}{2}F_5 = -15 \\ (J3):\ \sum F_x = 0: & \tfrac{\sqrt{3}}{2}F_4 = 5 \\ (J2):\ \sum F_x = 0: & F_2 + \tfrac{\sqrt{3}}{2}F_5 = 5 \end{cases}$$

Matrix form:

$$\begin{bmatrix} 1 & 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & -1 & 0 & -\tfrac{\sqrt{3}}{2} & 0 \\ 0 & 0 & 1 & 0 & \tfrac{1}{2} \\ 0 & 0 & 0 & \tfrac{\sqrt{3}}{2} & 0 \\ 0 & 1 & 0 & 0 & \tfrac{\sqrt{3}}{2} \end{bmatrix} \begin{Bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \\ F_5 \end{Bmatrix} = \begin{Bmatrix} -20 \\ 10 \\ -15 \\ 5 \\ 5 \end{Bmatrix}$$
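Feeding this system to either of the earlier sketches reproduces the Excel runs that follow (cos 30° is written as sqrt(3)/2; the tolerance is an illustrative choice):

```python
import numpy as np

c30 = np.sqrt(3) / 2             # cos 30 degrees = 0.8660
A = [[1.0,  0.0, 0.0,  0.5,  0.0],
     [0.0, -1.0, 0.0, -c30,  0.0],
     [0.0,  0.0, 1.0,  0.0,  0.5],
     [0.0,  0.0, 0.0,  c30,  0.0],
     [0.0,  1.0, 0.0,  0.0,  c30]]
b = [-20.0, 10.0, -15.0, 5.0, 5.0]

F, k = gauss_seidel(A, b, eps_s=0.01)
print(np.round(F, 2))   # about [-22.89, -15.00, -26.55, 5.77, 23.09]
```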
Gauss-Seidel and Jacobi Method - Worked Example

Jacobi Iterative Method, [A]{X} = {B}:

A matrix (with b):
 i\j      1        2        3        4        5       b
  1    1.0000   0.0000   0.0000   0.5000   0.0000   -20
  2    0.0000  -1.0000   0.0000  -0.8660   0.0000    10
  3    0.0000   0.0000   1.0000   0.0000   0.5000   -15
  4    0.0000   0.0000   0.0000   0.8660   0.0000     5
  5    0.0000   1.0000   0.0000   0.0000   0.8660     5

  i   xi(0)   k=1      k=2      k=3      k=4      k=5      k=6
  1     0    -20.00   -22.89   -22.89   -22.89   -22.89   -22.89
  2     0    -10.00   -15.00   -15.00   -15.00   -15.00   -15.00
  3     0    -15.00   -17.89   -23.66   -26.55   -26.55   -26.55
  4     0      5.77     5.77     5.77     5.77     5.77     5.77
  5     0      5.77    17.32    23.09    23.09    23.09    23.09

||X(k) - X(k-1)||2:  28.1366  13.2288  8.16497  2.88675  0  0
(square root of the sum of squared differences between old and new values)

||X(k) - X(k-1)||2 / ||X(k)||2:  0.35329  0.18855  0.06423  0  0
(the same quantity normalized by the square root of the sum of squares of the new values)
Say a 5 × 5 system with plenty of zeroes in the top triangle…

[Figure: the sparsity pattern of the matrix and its final echelon form after elimination.]

Using GE for a large sparse matrix will use a lot of computer storage space; the zeroes get carried along and stored as part of the normal procedure.

For an iterative method, e.g. GS or even Jacobi, convergence is achieved very quickly.
Gauss-Seidel and Jacobi Method - Worked Example

Gauss-Seidel Iterative Method, [A]{X} = {B}:

A matrix (with b):
 i\j      1        2        3        4        5       b
  1    1.0000   0.0000   0.0000   0.5000   0.0000   -20
  2    0.0000  -1.0000   0.0000  -0.8660   0.0000    10
  3    0.0000   0.0000   1.0000   0.0000   0.5000   -15
  4    0.0000   0.0000   0.0000   0.8660   0.0000     5
  5    0.0000   1.0000   0.0000   0.0000   0.8660     5

  i   xi(0)   k=1      k=2      k=3      k=4      k=5      k=6
  1     0    -20.00   -22.89   -22.89   -22.89   -22.89   -22.89
  2     0    -10.00   -15.00   -15.00   -15.00   -15.00   -15.00
  3     0    -15.00   -23.66   -26.55   -26.55   -26.55   -26.55
  4     0      5.77     5.77     5.77     5.77     5.77     5.77
  5     0     17.32    23.09    23.09    23.09    23.09    23.09

||X(k) - X(k-1)||2:  32.53204  11.9024  2.88675  0.0000  0.0000  0.0000
(square root of the sum of squared differences between old and new values)

||X(k) - X(k-1)||2 / ||X(k)||2:  0.27485  0.06423  0.0000  0.0000  0.0000
(the same quantity normalized by the square root of the sum of squares of the new values)
Successive Over Relaxation (SOR) Method

Relaxation represents a slight modification of the Gauss-Seidel method and is designed to improve convergence. After each new value of x is calculated, it is modified by a weighted average of the results of the previous and the present iterations:

$$x_i^{new} = \lambda x_i^{new} + (1 - \lambda)\, x_i^{old}$$

If λ = 0.5, the relaxation is essentially the average of the old and new values (equally weighted).

If λ = 1, the value of x is not modified by relaxation.

If 0 < λ < 1, the result is a weighted average of the present and the previous results. This type of modification is called under-relaxation. It is typically used to hasten convergence by dampening out oscillations.

If 1 < λ < 2, extra weight is placed on the present value. Implicitly, we are assuming that the new value is moving in the correct direction toward the true solution, but at too slow a rate. The added weight λ is intended to improve the estimate by pushing it closer to the solution. This type of modification, called over-relaxation, is designed to accelerate the convergence of an already convergent system. It is also called successive over-relaxation, or SOR.
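A minimal Python sketch of this modification: a Gauss-Seidel sweep whose result is blended with the old value (lam = 1 recovers plain Gauss-Seidel; the function name and tolerance are illustrative):

```python
import numpy as np

def sor(A, b, lam=1.2, x0=None, tol=1e-6, max_iter=200):
    """Relaxation: compute the Gauss-Seidel value, then take the weighted
    average x_i = lam * x_gs + (1 - lam) * x_old."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # ordinary Gauss-Seidel value for x_i ...
            x_gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            # ... blended with the previous iterate
            x[i] = lam * x_gs + (1 - lam) * x_old[i]
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x):
            return x, k
    return x, max_iter
```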
Successive Over Relaxation (SOR) Method - Example

$$S_3 = \begin{cases} 3x_1 - 0.1x_2 - 0.2x_3 = 7.85 \\ 0.1x_1 + 7x_2 - 0.3x_3 = -19.3 \\ 0.3x_1 - 0.2x_2 + 10x_3 = 71.4 \end{cases} \qquad \text{Use } \lambda = 1.2$$

1st stage of iteration, applying $x_i^{new} = \lambda x_i^{new} + (1-\lambda)x_i^{old}$:

$$x_1 = 1.2\left(\frac{7.85 + 0 + 0}{3}\right) + (1 - 1.2)(0) = 3.14$$

$$x_2 = 1.2\left(\frac{-19.3 - 0.1(3.14) + 0}{7}\right) + (1 - 1.2)(0) = -3.3624$$

$$x_3 = 1.2\left(\frac{71.4 - 0.3(3.14) + 0.2(-3.3624)}{10}\right) + (1 - 1.2)(0) = 8.3743$$

Compare these values with those from standard Gauss-Seidel.
Successive Over Relaxation (SOR) Method - Example

2nd stage of iteration:

$$x_1 = 1.2\left(\frac{7.85 + 0.1(-3.3624) + 0.2(8.37426)}{3}\right) + (1 - 1.2)(3.14) = 3.0474$$

$$x_2 = 1.2\left(\frac{-19.3 - 0.1(3.0474) + 0.3(8.37426)}{7}\right) + (1 - 1.2)(-3.3624) = -2.2577$$

$$x_3 = 1.2\left(\frac{71.4 - 0.3(3.0474) + 0.2(-2.2577)}{10}\right) + (1 - 1.2)(8.37426) = 6.72925$$

NOW, you try the 3rd stage of iteration. The full sequence of iterates:

  x1:  0   3.14   3.05   2.98   3.01   3.00   3.00
  x2:  0  -3.36  -2.26  -2.56  -2.48  -2.50  -2.50
  x3:  0   8.37   6.73   7.05   6.99   7.00   7.00

Compare these values with those from standard Gauss-Seidel.
Successive Over Relaxation (SOR) Method

SOR Method (modified Gauss-Seidel), ω = 1.1:

A matrix (with b):
 i\j      1        2        3        4        5       b
  1    1.0000   0.0000   0.0000   0.5000   0.0000   -20
  2    0.0000  -1.0000   0.0000  -0.8660   0.0000    10
  3    0.0000   0.0000   1.0000   0.0000   0.5000   -15
  4    0.0000   0.0000   0.0000   0.8660   0.0000     5
  5    0.0000   1.0000   0.0000   0.0000   0.8660     5

  i   xi(0)   k=1      k=2      k=3      k=4      k=5      k=6
  1     0    -22.00   -23.29   -22.81   -22.90   -22.89   -22.89
  2     0    -11.00   -15.95   -14.85   -15.02   -15.00   -15.00
  3     0    -16.50   -26.03   -27.42   -26.27   -26.61   -26.54
  4     0      6.35     5.72     5.78     5.77     5.77     5.77
  5     0     20.32    24.58    22.76    23.15    23.09    23.10

||X(k) - X(k-1)||2:  36.478  11.639  2.5867  1.2237  0.3416  0.0723
(square root of the sum of squared differences between old and new values)

||X(k) - X(k-1)||2 / ||X(k)||2:  0.2533  0.0572  0.0273  0.0076  0.0016
(the same quantity normalized by the square root of the sum of squares of the new values)
Successive Over Relaxation (SOR) Method - example

The choice of an optimum value for λ is highly problem-specific and is often determined empirically. Given an optimal value of ω, this method will converge faster.

This example highlights the superior convergence rate of SOR compared to GS or Jacobi. Given

$$\begin{bmatrix} 4 & 1 & 1 & 0 & 1 \\ -1 & -3 & 1 & 1 & 0 \\ 2 & 1 & 5 & -1 & -1 \\ -1 & -1 & -1 & 4 & 0 \\ 0 & 2 & -1 & 1 & 4 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{Bmatrix} = \begin{Bmatrix} 6 \\ 6 \\ 6 \\ 6 \\ 6 \end{Bmatrix}$$

Iterate using SOR with ω = 1.06 and compare with the Gauss-Seidel method.
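One way to pick the relaxation factor empirically, as suggested above, is to scan a few values with the sor sketch and count sweeps (the candidate values and tolerance here are illustrative):

```python
A = [[ 4,  1,  1,  0,  1],
     [-1, -3,  1,  1,  0],
     [ 2,  1,  5, -1, -1],
     [-1, -1, -1,  4,  0],
     [ 0,  2, -1,  1,  4]]
b = [6, 6, 6, 6, 6]

for lam in (1.00, 1.02, 1.04, 1.06, 1.08):
    x, k = sor(A, b, lam=lam, tol=1e-8)
    print(f"lambda = {lam:.2f}: converged in {k} sweeps")
```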
Gauss Seidel vs SOR Method - example

Both runs below use the system on the previous page and the update $x_i^{new} = \lambda x_i^{new} + (1-\lambda)x_i^{old}$.

Gauss Seidel (SOR with ω = 1, i.e. plain GS):
  i   xi(0)   k=1     k=2     k=3     k=4     k=5     k=6
  1     0     1.50    1.19    0.85    0.78    0.78    0.79
  2     0    -2.50   -1.52   -1.04   -0.99   -1.00   -1.00
  3     0     1.10    1.86    1.89    1.87    1.87    1.87
  4     0     1.53    1.88    1.93    1.92    1.91    1.91
  5     0     2.64    2.26    2.01    1.98    1.99    1.99

||X(k) - X(k-1)||2:  4.3618  1.3835  0.64369  0.0912  0.0142  0.0055
||X(k) - X(k-1)||2 / ||X(k)||2:  0.3477  0.17759  0.0256  0.0040  0.0015

SOR with ω = 1.04:
  i   xi(0)   k=1     k=2     k=3     k=4     k=5     k=6
  1     0     1.56    1.15    0.81    0.78    0.79    0.79
  2     0    -2.62   -1.43   -0.97   -0.98   -1.00   -1.00
  3     0     1.14    1.93    1.89    1.86    1.87    1.87
  4     0     1.58    1.93    1.93    1.91    1.91    1.91
  5     0     2.81    2.19    1.96    1.98    1.99    1.99

||X(k) - X(k-1)||2:  4.583  1.6462  0.6195  0.0553  0.0223  0.0022
||X(k) - X(k-1)||2 / ||X(k)||2:  0.4164  0.1734  0.0156  0.0063  0.0006