INTRODUCTION TO NUMERICAL ANALYSIS
Cho, Hyoung Kyu
Department of Nuclear Engineering
Seoul National University
4. A SYSTEM OF LINEAR EQUATIONS
4.1 Background
4.2 Gauss Elimination Method
4.3 Gauss Elimination with Pivoting
4.4 Gauss‐Jordan Elimination Method
4.5 LU Decomposition Method
4.6 Inverse of a Matrix
4.7 Iterative Methods
4.8 Use of MATLAB Built‐In Functions for Solving a System of Linear Equations
4.9 Tridiagonal Systems of Equations
4.10 Error, Residual, Norms, and Condition Number
4.11 Ill‐conditioned Systems
4.1 Background
Systems of linear equations
Occur frequently not only in engineering and science but in many other disciplines as well.
Examples
Electrical engineering: currents in a circuit from Kirchhoff's law
Forces in the members of a truss from a force balance
4.1 Background
Overview of numerical methods for solving a system of linear algebraic equations
Direct methods vs. iterative methods
Direct methods
The solution is calculated by performing arithmetic operations with the equations.
Three forms of systems of equations that can be easily solved are:
– Upper triangular
– Lower triangular
– Diagonal
Upper triangular form
– Back substitution: the last equation yields the last unknown, and each preceding equation is then solved in turn.
– Ex) 4 equations, then in general (see the formulas below).
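In standard notation, back substitution for an upper triangular system $[a][x]=[b]$ can be written as:

$$x_n = \frac{b_n}{a_{nn}}, \qquad x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij}x_j}{a_{ii}}, \quad i = n-1, n-2, \ldots, 1$$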
Lower triangular form
– Forward substitution: the first equation yields the first unknown, and each subsequent equation is then solved in turn.
– Ex) 4 equations, then in general (see the formulas below).
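Similarly, forward substitution for a lower triangular system can be written as:

$$x_1 = \frac{b_1}{a_{11}}, \qquad x_i = \frac{b_i - \sum_{j=1}^{i-1} a_{ij}x_j}{a_{ii}}, \quad i = 2, 3, \ldots, n$$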
Diagonal form
– Each equation contains only one unknown: $x_i = b_i/a_{ii}$.
LU decomposition method
– Works with the lower and upper triangular forms.
Gauss‐Jordan method
– Works with the diagonal form.
Iterative methods
– Jacobi
– Gauss‐Seidel
4.2 Gauss Elimination Method
Gauss elimination method
A general form is manipulated to be in upper triangular form, and the solution is then found by back substitution.
Forward elimination
Step 1
– Eliminate $x_1$ in all other equations except the first one.
– First equation: pivot equation
– Coefficient $a_{11}$: pivot coefficient
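In standard notation, each elimination step replaces row $i$ by itself minus a multiple of the pivot row $j$:

$$m_{ij} = \frac{a_{ij}}{a_{jj}}, \qquad a_{ik} \leftarrow a_{ik} - m_{ij}\,a_{jk}, \qquad b_i \leftarrow b_i - m_{ij}\,b_j, \quad k = j, j+1, \ldots, n$$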
Step 2
– Eliminate $x_2$ in all other equations except the first and second ones.
– Second equation: pivot equation
– Coefficient $a'_{22}$: pivot coefficient
Step 3
– Eliminate $x_3$ in the 4th equation.
– Third equation: pivot equation
– Coefficient $a''_{33}$: pivot coefficient
The system is now upper triangular: back substitution!
4.2 Gauss Elimination Method
Gauss elimination method: worked example (carried out on the slides)
Answer: 0.5, 3, 4, 2
4.2 Gauss Elimination Method
Gauss elimination method in matrix form
The elimination is applied to the augmented matrix $[a \mid b]$; back substitution then recovers $[x]$.
4.2 Gauss Elimination Method
Gauss elimination method
function x = Gauss(a,b)
% The function solves a system of linear equations [a][x]=[b] using the
% Gauss elimination method.
% Input variables:
% a   The matrix of coefficients.
% b   A column vector of constants.
% Output variable:
% x   A column vector with the solution.
ab = [a,b];                  % Append the column vector [b] to the matrix [a].
[R, C] = size(ab);
for j = 1:R-1
    for i = j+1:R
        ab(i,j:C) = ab(i,j:C) - ab(i,j)/ab(j,j)*ab(j,j:C);   % Forward elimination
    end
end
x = zeros(R,1);
x(R) = ab(R,C)/ab(R,R);      % Back substitution
for i = R-1:-1:1
    x(i) = (ab(i,C) - ab(i,i+1:R)*x(i+1:R))/ab(i,i);
end
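For a quick check, the function can be applied to the diagonally dominant system that reappears in the iterative-methods example of Section 4.7:

a = [9 -2 3 2; 2 8 -2 3; -3 2 11 -4; -2 3 2 10];
b = [54.5; -14; 12.5; -21];
x = Gauss(a, b)   % returns [5; -2; 2.5; -1]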
4.2 Gauss Elimination Method
How the forward elimination line works
ab(i,j:C) = ab(i,j:C)-ab(i,j)/ab(j,j)*ab(j,j:C);
Row j is the pivot equation; the multiplier ab(i,j)/ab(j,j) scales it so that, after the subtraction, the element of row i below the pivot becomes 0. Only columns j through C need updating, since the earlier columns of row i are already 0.
4.2 Gauss Elimination Method
How the back substitution line works
x(i)=(ab(i,C)-ab(i,i+1:R)*x(i+1:R))/ab(i,i);
The vector product ab(i,i+1:R)*x(i+1:R) sums the already-known terms $a_{ij}x_j$, which are subtracted from the right‐hand‐side element ab(i,C) before dividing by the diagonal element.
4.2 Gauss Elimination Method
Potential difficulties when applying the Gauss elimination method
The pivot element is zero.
– Since the pivot row is divided by the pivot element, a problem arises if the pivot element is equal to zero.
– The problem can be easily remedied by exchanging the order of two equations: pivoting!
The pivot element is small relative to the other terms in the pivot row.
– In general, a more accurate solution is obtained when the equations are arranged (and rearranged every time a new pivot equation is used) such that the pivot equation has the largest possible pivot element.
Round‐off errors can also be significant when solving large systems of equations, even when all the coefficients in the pivot row are of the same order of magnitude.
– This is caused by the large number of operations (multiplications, divisions, additions, and subtractions) associated with large systems.
The remedy for the first two difficulties is pivoting, described in the next section.
4.3 Gauss Elimination with Pivoting
Example
The first pivot coefficient is 0, so elimination cannot start; pivoting (an exchange of rows) fixes this.
4.3 Gauss Elimination with Pivoting
Additional comments
The numerical calculations are less prone to error and will have fewer round‐off errors if the pivot element has a larger numerical absolute value compared to the other elements in the same row.
Consequently, among all the equations that can be exchanged to be the pivot equation, it is better to select the equation whose pivot element has the largest absolute numerical value.
Moreover, it is good to employ pivoting so that the pivot equation always has the pivot element with the largest absolute numerical value (even when pivoting is not strictly necessary).
Partial pivoting: only rows (equations) are exchanged.
Full pivoting: both rows and columns may be exchanged.
4.3 Gauss Elimination with Pivoting
Example 4‐3
function x = GaussPivot(a,b)
% The function solves a system of linear equations ax=b using the Gauss
% elimination method with pivoting.
% Input variables:
% a   The matrix of coefficients.
% b   A column vector of constants.
% Output variable:
% x   A column vector with the solution.
ab = [a,b];
[R, C] = size(ab);
for j = 1:R-1
    % Pivoting section starts
    if ab(j,j) == 0              % Check if the pivot element is zero.
        for k = j+1:R            % If pivoting is required, search the rows
            if ab(k,j) ~= 0      % below for a row with a nonzero pivot element.
                abTemp = ab(j,:);
                ab(j,:) = ab(k,:);   % Swap rows j and k.
                ab(k,:) = abTemp;
                break
            end
        end
    end
    % Pivoting section ends
    for i = j+1:R
        ab(i,j:C) = ab(i,j:C) - ab(i,j)/ab(j,j)*ab(j,j:C);   % Forward elimination
    end
end
x = zeros(R,1);
x(R) = ab(R,C)/ab(R,R);      % Back substitution
for i = R-1:-1:1
    x(i) = (ab(i,C) - ab(i,i+1:R)*x(i+1:R))/ab(i,i);
end
Result
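A usage sketch with a zero first pivot (hypothetical values; Example 4-3's actual numbers are on the slide):

% The (1,1) element is zero, so plain Gauss elimination would divide by
% zero; GaussPivot exchanges rows first.
a = [0 2 3; 4 1 2; 2 3 1];
b = [1; 2; 3];
x = GaussPivot(a, b)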
4.4 Gauss‐Jordan Elimination Method
Gauss‐Jordan elimination
In this procedure, a system of equations that is given in a general form is manipulated into an equivalent system of equations in diagonal form with normalized elements along the diagonal.
4.4 Gauss‐Jordan Elimination Method
Procedure
The pivot equation is normalized by dividing all the terms in the equation by the pivot coefficient. This makes the pivot coefficient equal to 1.
The pivot equation is used to eliminate the off‐diagonal terms in ALL the other equations: the elimination process is applied to the equations (rows) both above and below the pivot equation.
(In the Gauss elimination method, only elements that are below the pivot element are eliminated.)
4.4 Gauss‐Jordan Elimination Method
Procedure
The Gauss‐Jordan method can also be used for solving several systems of equations that have the same coefficients $[a]$ but different right‐hand‐side vectors $[b]$.
This is done by augmenting the matrix $[a]$ to include all of the vectors $[b]$ (a sketch follows below).
The method is used in this way for calculating the inverse of a matrix.
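A small illustration of the idea (hypothetical values), using MATLAB's built-in rref, which performs exactly this Gauss-Jordan reduction:

% Two right-hand sides solved in a single pass over the augmented matrix.
a  = [4 -2 1; -2 4 -2; 1 -2 4];
b1 = [11; -16; 17];
b2 = [1; 0; 0];
R = rref([a, b1, b2]);   % reduce the augmented matrix [a | b1 | b2]
x1 = R(:,4)              % solution of a*x1 = b1
x2 = R(:,5)              % solution of a*x2 = b2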
4.4 Gauss‐Jordan Elimination Method
Procedure (step by step)
The first pivot coefficient is normalized; the first elements in rows 2, 3, 4 are eliminated.
The second pivot coefficient is normalized; the second elements in rows 1, 3, 4 are eliminated.
The third pivot coefficient is normalized; the third elements in rows 1, 2, 4 are eliminated.
The fourth pivot coefficient is normalized; the fourth elements in rows 1, 2, 3 are eliminated.
It is possible that the equations are written in such an order that, during the elimination procedure, a pivot equation has a pivot element that is equal to zero.
Obviously, in this case it is impossible to normalize the pivot row (divide by the pivot element).
As with the Gauss elimination method, the problem can be corrected by using pivoting.
4.4 Gauss‐Jordan Elimination Method
Example code
function x = GaussJordan(a,b)
% The function solves a system of linear equations ax=b using the
% Gauss-Jordan elimination method with pivoting. In each step the rows are
% switched such that the pivot element has the largest absolute numerical value.
% Input variables:
% a   The matrix of coefficients.
% b   A column vector of constants.
% Output variable:
% x   A column vector with the solution.
ab = [a,b];
[R, C] = size(ab);
for j = 1:R
    % Pivoting section starts
    pvtemp = ab(j,j);
    kpvt = j;
    % Looking for the row with the largest pivot element.
    for k = j+1:R
        if ab(k,j) ~= 0 && abs(ab(k,j)) > abs(pvtemp)
            pvtemp = ab(k,j);
            kpvt = k;
        end
    end
    % If a row with a larger pivot element exists, switch the rows.
    if kpvt ~= j
        abTemp = ab(j,:);
        ab(j,:) = ab(kpvt,:);
        ab(kpvt,:) = abTemp;
    end
    % Pivoting section ends
    ab(j,:) = ab(j,:)/ab(j,j);       % Normalize the pivot row.
    for i = 1:R
        if i ~= j
            ab(i,j:C) = ab(i,j:C) - ab(i,j)*ab(j,j:C);   % Eliminate above and below.
        end
    end
end
x = ab(:,C);
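Usage mirrors the Gauss function; with the same illustrative system as before:

a = [9 -2 3 2; 2 8 -2 3; -3 2 11 -4; -2 3 2 10];
b = [54.5; -14; 12.5; -21];
x = GaussJordan(a, b)   % same solution as Gauss(a,b): [5; -2; 2.5; -1]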
4.5 LU Decomposition Method
Background
The Gauss elimination method
Forward elimination procedure: $[a][x]=[b] \rightarrow [a'][x]=[b']$
– $[a']$: upper triangular.
Back substitution
The elimination procedure requires many mathematical operations and significantly more computing time than the back substitution calculations.
During the elimination procedure, the matrix of coefficients $[a]$ and the vector $[b]$ are both changed.
This means that if there is a need to solve systems of equations that have the same left‐hand‐side terms (same coefficient matrix $[a]$) but different right‐hand‐side constants (different vectors $[b]$), the elimination procedure has to be carried out for each $[b]$ again.
4.5 LU Decomposition Method
Background
Inverse matrix? $[x] = [a]^{-1}[b]$
Calculating the inverse of a matrix, however, requires many mathematical operations and is computationally inefficient.
A more efficient method of solution for this case is the LU decomposition method!
LU decomposition
$[a] = [L][U]$; $[L]$: lower triangular matrix; $[U]$: upper triangular matrix
With this decomposition, the system of equations to be solved has the form: $[L][U][x] = [b]$
Defining $[U][x] = [y]$ gives two triangular systems: $[L][y] = [b]$, solved by forward substitution, and $[U][x] = [y]$, solved by back substitution.
The decomposition itself can be obtained either with the Gauss elimination procedure or with Crout's method.
For each new $[b]$: two substitutions, no new elimination!
4.5 LU Decomposition Method
LU decomposition using the Gauss elimination procedure
Procedure
$[U]$ is the upper triangular matrix produced by the forward elimination.
The elements of $[L]$ on the diagonal are all 1.
The elements of $[L]$ below the diagonal are the multipliers $m_{ij}$ used in the elimination steps.
Ex)
The three elimination steps, written as matrix products (reconstructed):

$$\begin{bmatrix} 1&0&0&0\\ -3&1&0&0\\ -1&0&1&0\\ -2&0&0&1 \end{bmatrix}
\begin{bmatrix} 2&3&1&5\\ 6&13&5&19\\ 2&19&10&23\\ 4&10&11&31 \end{bmatrix}
=
\begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&16&9&18\\ 0&4&9&21 \end{bmatrix}$$

$$\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&-4&1&0\\ 0&-1&0&1 \end{bmatrix}
\begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&16&9&18\\ 0&4&9&21 \end{bmatrix}
=
\begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&0&1&2\\ 0&0&7&17 \end{bmatrix}$$

$$\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&-7&1 \end{bmatrix}
\begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&0&1&2\\ 0&0&7&17 \end{bmatrix}
=
\begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&0&1&2\\ 0&0&0&3 \end{bmatrix} = [U]$$
The product of the inverses of the elimination matrices gives $[L]$, whose below-diagonal elements are exactly the multipliers (reconstructed):

$$[L] = \begin{bmatrix} 1&0&0&0\\ 3&1&0&0\\ 1&4&1&0\\ 2&1&7&1 \end{bmatrix}, \qquad
[U] = \begin{bmatrix} 2&3&1&5\\ 0&4&2&4\\ 0&0&1&2\\ 0&0&0&3 \end{bmatrix}, \qquad
[L][U] = \begin{bmatrix} 2&3&1&5\\ 6&13&5&19\\ 2&19&10&23\\ 4&10&11&31 \end{bmatrix} = [a]$$
4.5 LU Decomposition Method
LU decomposition using the Gauss elimination procedure
Algorithm (a sketch is given below)
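A minimal MATLAB sketch of this algorithm (an assumed implementation, not the textbook's listing): the multipliers are stored in [L] as the elimination proceeds.

function [L, U] = LUdecompGauss(A)
% Sketch: LU decomposition via the Gauss elimination procedure,
% storing each multiplier m_ij in L.
[R, ~] = size(A);
L = eye(R);
U = A;
for j = 1:R-1
    for i = j+1:R
        L(i,j) = U(i,j)/U(j,j);                  % multiplier m_ij
        U(i,j:R) = U(i,j:R) - L(i,j)*U(j,j:R);   % eliminate below the pivot
    end
end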
4.5 LU Decomposition Method
LU decomposition using Crout's method
The diagonal elements of the matrix $[U]$ are all 1s.
Illustration with a 4×4 matrix (the general formulas follow).
For an n×n matrix:
Step 1: Calculate the first column of $[L]$:
$$L_{i1} = A_{i1}, \quad i = 1, 2, \ldots, n$$
Step 2: Substitute 1s in the diagonal of $[U]$:
$$U_{ii} = 1, \quad i = 1, 2, \ldots, n$$
Step 3: Calculate the first row of $[U]$:
$$U_{1j} = \frac{A_{1j}}{L_{11}}, \quad j = 2, 3, \ldots, n$$
Step 4: Calculate the rest of the elements row after row. The elements of $[L]$ are calculated first because they are used for calculating the elements of $[U]$:
$$L_{ij} = A_{ij} - \sum_{k=1}^{j-1} L_{ik}U_{kj}, \quad i = 2, 3, \ldots, n; \;\; j = 2, \ldots, i$$
$$U_{ij} = \frac{A_{ij} - \sum_{k=1}^{i-1} L_{ik}U_{kj}}{L_{ii}}, \quad j = i+1, \ldots, n$$
Example
The decomposition is implemented in the function below.
function [L, U] = LUdecompCrout(A)
% The function decomposes the matrix A into a lower triangular matrix L
% and an upper triangular matrix U, using Crout's method, such that A=LU.
% Input variables:
% A   The matrix of coefficients.
% Output variables:
% L   Lower triangular matrix.
% U   Upper triangular matrix.
[R, C] = size(A);
for i = 1:R
    L(i,1) = A(i,1);      % Step 1: first column of L.
    U(i,i) = 1;           % Step 2: 1s on the diagonal of U.
end
for j = 2:R
    U(1,j) = A(1,j)/L(1,1);   % Step 3: first row of U.
end
for i = 2:R
    for j = 2:i
        L(i,j) = A(i,j) - L(i,1:j-1)*U(1:j-1,j);            % Step 4: row of L.
    end
    for j = i+1:R
        U(i,j) = (A(i,j) - L(i,1:i-1)*U(1:i-1,j))/L(i,i);   % Step 4: row of U.
    end
end
4.5 LU Decomposition Method
LU decomposition using Crout's method
Example
Files used: LUdecompCrout.m, ForwardSub.m, BackwardSub.m, Program4_6.m
Solve $[L][y]=[b]$ by forward substitution; then solve $[U][x]=[y]$ by back substitution.
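A sketch of how these files might fit together (the actual Program4_6.m may differ; the right-hand-side values and the argument order of ForwardSub/BackwardSub are assumed):

A = [2 3 1 5; 6 13 5 19; 2 19 10 23; 4 10 11 31];  % matrix from this section's example
b = [11; 43; 54; 56];                              % assumed right-hand side
[L, U] = LUdecompCrout(A);
y = ForwardSub(L, b);     % solve [L][y] = [b]
x = BackwardSub(U, y);    % solve [U][x] = [y]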
4.5 LU Decomposition Method
LU Decomposition with Pivoting
Pivoting might also be needed in LU decomposition.
If pivoting is used, then the matrices $[L]$ and $[U]$ that are obtained are not the decomposition of the original matrix $[a]$.
The product $[L][U]$ gives a matrix with rows that have the same elements as $[a]$, but, due to the pivoting, the rows are in a different order.
When pivoting is used in the decomposition procedure, the changes that are made have to be recorded and stored.
This is done by creating a permutation matrix $[P]$ such that $[P][a] = [L][U]$.
The order of the rows of $[b]$ then has to be changed accordingly!
4.5 LU Decomposition Method
LU Decomposition with Pivoting
Example of a permutation matrix (here exchanging rows 3 and 4):

$$[P] = \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix}$$

Sequential pivoting: $P = P_{n-1}P_{n-2}\cdots P_2 P_1$
Characteristic of a permutation matrix: $P^T P = I$, so $P^T = P^{-1}$
4.6 Inverse of a Matrix
Inverse of a square matrix
$[a][a]^{-1} = [I]$: the j-th column of $[a]^{-1}$ solves the separate system of equations $[a][x_j] = [e_j]$, where $[e_j]$ is the j-th column of the identity matrix.
These systems can be solved with:
LU decomposition
Gauss‐Jordan elimination
4.6 Inverse of a Matrix
Calculating the inverse with the LU decomposition method
A = [0.2 -5 3 0.4 0; -0.5 1 7 -2 0.3; 0.6 2 -4 3 0.1; 3 0.8 2 -0.4 3; 0.5 3 2 0.4 1]
InverseLU(A)
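The function InverseLU is called but not listed; a minimal sketch of such a function (an assumed implementation, built on LUdecompCrout, ForwardSub, and BackwardSub from this chapter) is:

function invA = InverseLU(A)
% Sketch: each column j of inv(A) solves A*x = e_j using the LU factors.
[R, ~] = size(A);
[L, U] = LUdecompCrout(A);
I = eye(R);
invA = zeros(R);
for j = 1:R
    y = ForwardSub(L, I(:,j));      % [L][y] = e_j
    invA(:,j) = BackwardSub(U, y);  % [U][x_j] = [y]
end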
Review
Direct methods
Gauss elimination
Gauss‐Jordan elimination
LU decomposition
– Using Gauss elimination
– Crout's method
Pivoting
Crout's method vs. Gauss elimination
Review
Pivoting
In LU decomposition with pivoting, each elimination step may be preceded by a row exchange: permutation matrices $P_1, P_2$ and elimination factors $L_1, L_2, L_3$ are applied in sequence to a 4×4 example until $[P][A] = [L][U]$ (the intermediate matrices are worked out on the slides).
Review
Floating point operation counts (FLOPs)
Gauss elimination: forward elimination
In step $i$, each of the $(n-i)$ rows below the pivot requires:
– Divisions: 1 (the multiplier)
– Multiplications: $(n - i + 1)$
– Additions/subtractions: $(n - i + 1)$
Review
Floating point operation counts (FLOPs)
Gauss elimination: back substitution
– Divisions: $n$
– Multiplications: $n(n-1)/2$
– Additions/subtractions: $n(n-1)/2$
With $n$ terms, the whole back substitution is on the order of $n^2$ operations.
Review
Floating point operation counts (FLOPs)
Gauss elimination: in total, about $\tfrac{2}{3}n^3$ operations for large $n$.
The amount of computation and the time required increase with $n$ in proportion to $n^3$!
Review
Floating point operation counts (FLOPs)
LU decomposition
– The decomposition itself costs about the same as forward elimination, $\sim\tfrac{2}{3}n^3$.
– Forward/backward substitution: $\sim 2n^2$ in total.
Repeated solution of $[a][x]=[b]$ with several $[b]$s therefore costs only $\sim 2n^2$ per additional $[b]$: significantly less than elimination, particularly for large $n$. For example, with $n = 1000$, $\tfrac{2}{3}n^3 \approx 6.7\times10^8$ while $2n^2 = 2\times10^6$.
4.7 Iterative Methods
Iterative approach
Same idea as in the fixed‐point iteration method: each equation is solved explicitly for one of the unknowns.
Explicit form for a system of four equations (the general formula is given below)
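In general notation, solving the i-th equation for $x_i$ gives the explicit form:

$$x_i = \frac{1}{a_{ii}}\left(b_i - \sum_{\substack{j=1 \\ j \ne i}}^{n} a_{ij}x_j\right), \quad i = 1, 2, \ldots, n \qquad (a_{ii} \ne 0)$$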
4.7 Iterative Methods
Iterative approach
Solution process
An initial value is assumed for each unknown (the first estimated solution).
In the first iteration, the first assumed solution is substituted on the right‐hand side of the equations, giving the second estimated solution.
In the second iteration, the second solution is substituted back, giving the third estimated solution.
The iterations continue until the solutions converge toward the actual solution.
4.7 Iterative Methods
Iterative approach
Condition for convergence
A sufficient condition for convergence (not necessary): the matrix is diagonally dominant!
– The absolute value of each diagonal element is greater than the sum of the absolute values of the off‐diagonal elements in the same row: $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$.
Two specific iterative methods (update formulas below)
Jacobi
– All unknowns are updated at once, at the end of each iteration.
Gauss‐Seidel
– Each unknown is updated as soon as its new estimate is calculated.
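In the usual notation, with the iteration counter k as a superscript, the two update formulas are:

$$\text{Jacobi:}\quad x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \ne i} a_{ij}x_j^{(k)}\right)$$

$$\text{Gauss–Seidel:}\quad x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}x_j^{(k+1)} - \sum_{j>i} a_{ij}x_j^{(k)}\right)$$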
4.7 Iterative Methods
Jacobi iterative method
Convergence check
The absolute value of the relative error of all the unknowns must fall below a prescribed tolerance (see below).
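One common form of this stopping criterion is:

$$\left|\frac{x_i^{(k+1)} - x_i^{(k)}}{x_i^{(k)}}\right| < \varepsilon \quad \text{for all } i = 1, 2, \ldots, n$$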
4.7 Iterative Methods
Gauss‐Seidel iterative method
Comment
The Gauss‐Seidel method converges faster than the Jacobi method and requires less computer memory.
4.7 Iterative Methods
Example (Gauss‐Seidel iterations: each new value is used immediately)
k = 1; x1 = 0; x2 = 0; x3 = 0; x4 = 0;
disp('  k    x1       x2       x3       x4')
fprintf(' %2.0f  %-8.5f %-8.5f %-8.5f %-8.5f \n', k, x1, x2, x3, x4)
for k = 2:8
    x1 = (54.5 - (-2*x2 + 3*x3 + 2*x4))/9;
    x2 = (-14 - (2*x1 - 2*x3 + 3*x4))/8;
    x3 = (12.5 - (-3*x1 + 2*x2 - 4*x4))/11;
    x4 = (-21 - (-2*x1 + 3*x2 + 2*x3))/10;
    fprintf(' %2.0f  %-8.5f %-8.5f %-8.5f %-8.5f \n', k, x1, x2, x3, x4)
end
4.8 Use of MATLAB Built‐in Functions
MATLAB operators
Left division \
– To solve a system of n equations written in matrix form $[a][x]=[b]$: x = a\b
Right division /
– To solve a system of n equations written in matrix form $[x][a]=[b]$: x = b/a
Matrix inversion
– inv(a) or a^-1, so that x = inv(a)*b
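A short sketch of the three options applied to one illustrative system:

a = [9 -2 3 2; 2 8 -2 3; -3 2 11 -4; -2 3 2 10];
b = [54.5; -14; 12.5; -21];
x1 = a\b;         % left division (preferred; no explicit inverse is formed)
x2 = (b'/a')';    % right division applied to the transposed system [x'][a'] = [b']
x3 = inv(a)*b;    % via the inverse (less efficient; shown for comparison)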
4.8 Use of MATLAB Built‐in Functions
MATLAB's built‐in function for LU decomposition: lu
Without an explicit permutation output, [L, U] = lu(a); with it, [L, U, P] = lu(a), where L*U = P*a (see the sketch below).
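A short sketch of both call forms (matrix values reused from the earlier LU example):

A = [2 3 1 5; 6 13 5 19; 2 19 10 23; 4 10 11 31];
[L1, U1] = lu(A);     % L1 is a row-permuted unit lower triangular matrix
[L2, U2, P] = lu(A);  % unit lower triangular L2, with L2*U2 = P*A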
4.8 Use of MATLAB Built‐in Functions
Examples with lu, and with additional MATLAB built‐in functions, are worked out on the slides.
4.9 Tri‐diagonal Systems of Equations
Tri‐diagonal systems of linear equations
All matrix coefficients are zero except for the diagonal, above‐diagonal, and below‐diagonal elements.
The system can be solved with the standard methods, but then a large number of zero elements are stored and a large number of needless operations are executed.
To save computer memory and computing time, special numerical methods have been developed.
Ex) Thomas algorithm, or TDMA (TriDiagonal Matrix Algorithm)
4.9 Tri‐diagonal Systems of Equations
Thomas algorithm for solving tri‐diagonal systems
Similar to the Gauss elimination method
Upper triangular form → back substitution
Much more efficient, because only the nonzero elements of the matrix of coefficients are stored and only the necessary operations are executed.
Procedure
Assign the nonzero elements of the tridiagonal matrix to three vectors:
– diagonal vector $[d]$, above‐diagonal vector $[a]$, below‐diagonal vector $[b]$, plus the right‐hand‐side vector $[c]$.
Only these vectors are stored!
4.9 Tri‐diagonal Systems of Equations
Thomas algorithm for solving tri‐diagonal systems
Procedure
The first row is normalized by dividing it by $d_1$; element $b_2$ is then eliminated.
The second row is normalized by dividing it by the (modified) $d_2$; element $b_3$ is then eliminated.
This process continues row after row until the matrix is transformed into an upper triangular one.
Back substitution then gives the solution.
4.9 Tri‐diagonal Systems of Equations
Thomas algorithm for solving tri‐diagonal systems
Mathematical form
Step 1
– Define the vectors $[b] = [0, b_2, \ldots, b_n]$ (below diagonal), $[d] = [d_1, d_2, \ldots, d_n]$ (diagonal), $[a] = [a_1, a_2, \ldots, a_{n-1}]$ (above diagonal), and $[c] = [c_1, c_2, \ldots, c_n]$ (right‐hand side).
Step 2
– Calculate: $a_1 = a_1/d_1$, $c_1 = c_1/d_1$.
Step 3
– For $i = 2, 3, \ldots, n$: $a_i = \dfrac{a_i}{d_i - b_i a_{i-1}}$, $c_i = \dfrac{c_i - b_i c_{i-1}}{d_i - b_i a_{i-1}}$.
Step 4
– $x_n = c_n$
– Back substitution: $x_i = c_i - a_i x_{i+1}$, $i = n-1, n-2, \ldots, 1$.
4.9 Tri‐diagonal Systems of Equations
Thomas algorithm for solving tri‐diagonal systems
Example
function x = Tridiagonal(A,B)
% The function solves a tridiagonal system of linear equations [a][x]=[b]
% using the Thomas algorithm.
% Input variables:
% A   The matrix of coefficients.
% B   A column vector of constants.
% Output variable:
% x   A column vector with the solution.
(The body of the function is not shown on the slide; a sketch is given below.)
The accompanying script sets up and solves Example 4-9:
clear all
% Example 4-9
k1 = 8000; k2 = 9000; k3 = 15000; k4 = 12000; k5 = 10000; k6 = 18000;
L = 1.5; L1 = 0.18; L2 = 0.22; L3 = 0.26; L4 = 0.19; L5 = 0.15; L6 = 0.30;
a = [k1 + k2, -k2, 0, 0, 0; -k2, k2+k3, -k3, 0, 0; 0, -k3, k3+k4, -k4, 0
     0, 0, -k4, k4+k5, -k5; 0, 0, 0, -k5, k5+k6]
b = [k1*L1 - k2*L2; k2*L2 - k3*L3; k3*L3 - k4*L4; k4*L4 - k5*L5; k5*L5 + k6*L - k6*L6]
Xs = Tridiagonal(a,b)
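A minimal sketch of the missing function body, following Steps 1-4 above (an assumed implementation; the textbook's Tridiagonal.m may differ):

% --- body sketch for Tridiagonal(A,B) ---
[n, m] = size(A);
d = diag(A);               % diagonal elements d_i
a = [diag(A,1); 0];        % above-diagonal elements a_i (a(n) unused)
bb = [0; diag(A,-1)];      % below-diagonal elements b_i (bb(1) unused)
c = B;                     % right-hand-side constants c_i
a(1) = a(1)/d(1);  c(1) = c(1)/d(1);        % Step 2
for i = 2:n                                 % Step 3
    denom = d(i) - bb(i)*a(i-1);
    a(i) = a(i)/denom;
    c(i) = (c(i) - bb(i)*c(i-1))/denom;
end
x = zeros(n,1);
x(n) = c(n);                                % Step 4
for i = n-1:-1:1
    x(i) = c(i) - a(i)*x(i+1);              % back substitution
end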
4.10 Error, Residual, Norms, and Condition Number
Error and residual
True error: $[e] = [x_{TS}] - [x_{NS}]$, the difference between the true solution and the numerical solution.
The true error cannot be calculated, because the true solution is not known.
Residual: $[r] = [a][x_{NS}] - [b]$
An alternative measure of the accuracy of a solution.
This does not really indicate how small the error is; it shows how well the right‐hand side of the equations is satisfied when $[x_{NS}]$ is substituted for $[x]$ in the original equations.
It is possible to have an approximate numerical solution that has a large true error but gives a small residual.
Norms (defined in the next subsection) are used to quantify these statements.
4.10 Error, Residual, Norms, and Condition Number
Error and residual
Example
A small residual does not necessarily guarantee a small error.
Whether or not a small residual implies a small error depends on the "magnitude" of the matrix $[a]$.
4.10 Error, Residual, Norms, and Condition Number
Norms and condition number
Norm
A real number assigned to a matrix or vector that satisfies the following four properties:
– $\|v\| \ge 0$
– $\|v\| = 0$ if and only if $v = 0$
– $\|\alpha v\| = |\alpha|\,\|v\|$ for any scalar $\alpha$
– Triangle inequality: $\|u + v\| \le \|u\| + \|v\|$
Vector norms
– Infinity norm: $\|x\|_\infty = \max_i |x_i|$
– 1‐norm: $\|x\|_1 = \sum_i |x_i|$
– Euclidean 2‐norm: $\|x\|_2 = \sqrt{\sum_i x_i^2}$
4.10 Error, Residual, Norms, and Condition Number
Norms and condition number
Matrix norms
– Infinity norm: $\|A\|_\infty = \max_i \sum_j |a_{ij}|$ (the summation is done for each row)
– 1‐norm: $\|A\|_1 = \max_j \sum_i |a_{ij}|$ (the summation is done for each column)
– 2‐norm (spectral norm): $\|A\|_2 = \sqrt{\lambda_{\max}}$, where $\lambda_{\max}$ is the largest eigenvalue of $A^T A$
– Euclidean (Frobenius) norm for an $n \times n$ matrix: $\|A\|_F = \sqrt{\sum_i \sum_j a_{ij}^2}$
4.10 Error, Residual, Norms, and Condition Number
Norms and condition number
Using norms to determine bounds on the error of numerical solutions
Residual written in terms of the error: $[r] = [a][x_{NS}] - [b] = [a]([x_{NS}] - [x_{TS}]) = -[a][e]$
Error: $[e] = -[a]^{-1}[r]$, so by the norm inequality $\|e\| \le \|a^{-1}\|\,\|r\|$
From the definition of the true solution, $[a][x_{TS}] = [b]$: $\|b\| \le \|a\|\,\|x_{TS}\|$, i.e. $\dfrac{1}{\|x_{TS}\|} \le \dfrac{\|a\|}{\|b\|}$
Combining the two results bounds the relative error by the relative residual:
$$\frac{\|e\|}{\|x_{TS}\|} \le \|a\|\,\|a^{-1}\|\,\frac{\|r\|}{\|b\|} = \mathrm{Cond}(a)\,\frac{\|r\|}{\|b\|}$$
where $\mathrm{Cond}(a) = \|a\|\,\|a^{-1}\|$ is the condition number.
4.10 Error, Residual, Norms, and Condition Number
Norms and condition number
Condition number
The condition number of the identity matrix is 1.
The condition number of any other matrix is 1 or greater.
If the condition number is approximately 1, then the true relative error is of the same order of
magnitude as the relative residual.
If the condition number is much larger than 1, then a small relative residual does not necessarily
imply a small true relative error.
For a given matrix, the value of the condition number depends on the matrix norm that is used.
The inverse of a matrix has to be known in order to calculate the condition number of the matrix.
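These quantities are available directly in MATLAB (matrix values illustrative):

A = [1 2; 2 3.999];          % assumed nearly singular example
nrm   = norm(A, inf);        % infinity norm of A
kappa = cond(A, inf);        % condition number = norm(A,inf)*norm(inv(A),inf)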
4.10 Error, Residual, Norms, and Condition Number
Norms and condition number
Example
4.11 Ill‐conditioned Systems
Meaning
A system in which small variations in the coefficients cause large changes in the solution.
Ill‐conditioned systems generally have a condition number that is significantly greater than 1.
In the slide's two‐equation example there is a large difference between the denominators of the two equations, and the determinant of $[a]$ is close to zero.
4.11 Ill‐conditioned Systems
Example
The condition number is computed using the infinity norm, the 1‐norm, and the 2‐norm.
With any norm used, the condition number is much larger than 1!
4.11 Ill‐conditioned Systems
Comment
Numerical solution of an ill‐conditioned system of equations carries a high probability of large error.
It is difficult to give a quantitative criterion for the condition number; in practice one only needs to check whether or not the condition number is much larger than 1.