Approximations, Errors and Their Analysis: AML702 Applied Computational Methods

Approximations are unavoidable in mathematical modeling of real world phenomena. This leads to errors in computations that must be estimated for reliable results. There are two main types of errors: truncation errors from approximating problems, and roundoff errors from representing numbers with finite precision in computers. The total numerical error is the sum of these truncation and roundoff errors.


AML702 Applied Computational Methods

Lecture 03
Approximations, Errors and Their Analysis
Approximations and Errors

• Approximation is unavoidable in mathematical modeling of real-world phenomena
• Approximation leads to errors
• Estimating the errors in computation is necessary for reliability of the computed results
Accuracy and Precision
Accuracy: It refers to how closely the measured or computed value matches the true value.
Precision: It refers to how closely the computed or measured values agree with each other in repeated computation or measurement.

Consider fig. 4.1 from Chapra, as shown here:
(a) inaccurate and imprecise, (b) accurate and imprecise, (c) inaccurate and precise, (d) accurate and precise.
Errors in Computations
Error: The deviation from an expected or true value. The true value may not always be known!

E.g., area of a rectangle: A = L×B = 12×10 = 120 cm²
But by measurement, Lm×Bm = 11.9×10.1, so the computed value is Ac = 120.19 cm².
The error E = |A − Ac| = 0.19 cm² is known as the absolute error. When it is compared with the actual or true value, it is known as the relative error:
Er = |A − Ac|/A = 0.19/120 ≈ 0.0016
Expressed as a percentage: Er = |A − Ac|/A × 100 = 0.19/120 × 100 ≈ 0.16%
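The absolute and relative errors above can be checked with a short script (a minimal sketch; the variable names are illustrative):

```python
# Rectangle-area example: true sides 12 x 10, measured sides 11.9 x 10.1.
A_true = 12 * 10              # true area, 120 cm^2
A_comp = 11.9 * 10.1          # computed area from measured sides, ~120.19 cm^2

abs_err = abs(A_true - A_comp)       # absolute error E = |A - Ac|
rel_err = abs_err / abs(A_true)      # relative error Er = E / A

print(f"E  = {abs_err:.2f} cm^2")    # 0.19 cm^2
print(f"Er = {rel_err * 100:.2f} %") # 0.16 %
```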
Floating Points and Machine Accuracy

Floating Point Representation

These numbers are represented by a sign bit s, an exact integer exponent ε, and an exact positive integer mantissa M (B – base, E – bias).

Machine Accuracy, εm

It refers to the smallest floating point number which, when added to the floating point number 1.0, produces a floating point result different from 1.0. A typical 32-bit computer with base 2 has εm ≈ 3×10⁻⁸.
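The definition of εm can be tested directly; this sketch halves a candidate until adding it to 1.0 no longer changes the result. (Python works in 64-bit double precision, so the value found is 2⁻⁵², not the 32-bit figure quoted above.)

```python
# Find the smallest eps such that 1.0 + eps != 1.0, by repeated halving.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2

print(eps)   # 2.220446049250313e-16 for IEEE double precision
```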
Machine Representation
Representing real numbers on digital computers

Computers have a finite amount of memory. Consider representing π = 3.141592653589793… or the repeating fraction 1/3 = 0.333333333333333…

Any such number x ∈ ℝ needs infinitely many digits for accurate representation:
x = ±(1.d1 d2 d3 d4 …) × 2^e
where the di are binary digits having values 0 or 1 and e is an integer exponent. The mantissa can be expressed as
1.d1 d2 d3 d4 … = 1 + d1/2 + d2/2² + d3/2³ + d4/2⁴ + …
E.g., the binary number x = −(1.11010…)₂ × 2¹ = −(1 + 1/2 + 1/4 + 0 + 1/16 + …) × 2 = −3.625 in decimal.
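The bit-by-bit expansion above can be verified numerically (a minimal sketch; the digit list holds the example's mantissa bits):

```python
# Evaluate x = -(1.11010)_2 * 2^1 from its binary digits.
digits = [1, 1, 0, 1, 0]          # d1..d5 after the binary point
mantissa = 1 + sum(d / 2 ** (i + 1) for i, d in enumerate(digits))
x = -mantissa * 2 ** 1            # = -(1 + 1/2 + 1/4 + 0 + 1/16) * 2

print(x)   # -3.625
```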
Floating Point Representation
Floating point representation requires finite digits. Corresponding to a real number x, its digital counterpart is fl(x) with some k bits:
fl(x) = ±(1.d1 d2 d3 … dk) × 2^e
Storing this in memory requires k bits, which is an approximation for storing x with infinitely many bits. It is interesting to know the accuracy of such a representation. Alternatively, we wish to determine the relative error:
|fl(x) − x| / |x|
The IEEE standard gives confidence by estimating this error to be η = (1/2) × 2^−k, where k is the number of digits (bits) used in the representation.
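The bound |fl(x) − x|/|x| ≤ η can be illustrated with exact rational arithmetic; this sketch (not from the slides) uses Python's fractions module to compare the real number 0.1 with the double that actually stores it:

```python
from fractions import Fraction

x_real = Fraction(1, 10)    # the real number 0.1, exactly
x_fl = Fraction(0.1)        # the exact value of the stored double fl(0.1)

rel_err = abs(x_fl - x_real) / x_real
print(float(rel_err))       # ~5.55e-17, within eta = 2**-53 ~ 1.11e-16
```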
Floating Point and Machine Precision

The IEEE standard gives confidence by estimating this error to be η = (1/2) × 2^−k, where k is the number of digits (bits) used in the representation.

This η is known as the machine precision or rounding unit.

The floating point word in common double precision systems like MATLAB has 64 bits by default. Of these, 52 bits are used for the mantissa (or fraction), while the rest store the sign and the exponent. Therefore the machine precision for MATLAB is
η = (1/2) × 2⁻⁵² = 2⁻⁵³ ≈ 1.1×10⁻¹⁶ (here k = 52).
Check eps = 2.2204 × 10⁻¹⁶ in MATLAB: eps is the gap between 1.0 and the next larger double, i.e. 2⁻⁵² = 2η.
Floating point ranges
The exponent range is −1022 to 1023 (11 exponent bits, stored with a bias).

• The largest possible number MATLAB can store is
  +1.111111…111 × 2¹⁰²³ = (2 − 2⁻⁵²) × 2¹⁰²³
• This yields approximately 2¹⁰²⁴ ≈ 1.7977 × 10³⁰⁸
• The smallest possible number MATLAB can store with full precision is
  +1.00000…00000 × 2⁻¹⁰²²
• This yields 2⁻¹⁰²² ≈ 2.2251 × 10⁻³⁰⁸
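These limits match what Python, which uses the same IEEE 64-bit doubles as MATLAB, reports in sys.float_info:

```python
import sys

# Largest finite double: (2 - 2**-52) * 2**1023
print(sys.float_info.max)    # 1.7976931348623157e+308
# Smallest normalized double: 2**-1022
print(sys.float_info.min)    # 2.2250738585072014e-308
```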
Real vs Floating Point Numbers
Comparing floating point numbers to real numbers:

Property     Real numbers   Floating point numbers
Range        Infinite       Finite
Precision    Infinite       Finite
Existence    Real           Subset of real numbers
Roundoff Errors
• Finite precision causes roundoff errors in numerical computation
• Roundoff errors accumulate slowly
• Subtracting nearly equal numbers leads to severe loss of precision. A similar loss of precision occurs when two numbers separated by large differences in magnitude are added.
• Roundoff errors are unavoidable, but good algorithms can minimize their effect
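The cancellation bullet can be demonstrated with the smaller root of a quadratic x² − 2bx + c = 0 when b ≫ c; the numbers here are illustrative, not from the slides:

```python
import math

b, c = 1e8, 1.0
# Naive formula subtracts two nearly equal numbers and loses precision:
naive = b - math.sqrt(b * b - c)
# Algebraically equivalent form avoids the subtraction:
stable = c / (b + math.sqrt(b * b - c))

print(naive)    # badly wrong: the true root is ~5.0e-9
print(stable)   # ~5.0e-9, accurate
```

Both expressions are mathematically identical; only the stable form keeps the significant digits, which is what "good algorithms can minimize their effect" means in practice.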
Truncation Errors

• Truncation errors result from using an approximation in place of an exact mathematical procedure, e.g. truncating an infinite Taylor series after a finite number of terms.
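A classic source of truncation error is cutting off an infinite Taylor series; this sketch (not from the slides) approximates e^x with n+1 terms and shows the error shrinking as n grows:

```python
import math

def taylor_exp(x, n):
    """Truncated Taylor series for e^x: sum of x**i / i! for i = 0..n."""
    return sum(x ** i / math.factorial(i) for i in range(n + 1))

x = 1.0
for n in (2, 4, 8):
    err = abs(math.exp(x) - taylor_exp(x, n))
    print(f"n = {n}   truncation error = {err:.2e}")
```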
Total Numerical Error
• The total numerical error is the sum of the truncation and roundoff errors.
• The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases; this leads to a point of diminishing returns for the step size.
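The trade-off can be seen in a forward-difference derivative, where truncation error grows like h while roundoff error grows like eps/h as h shrinks; the function and step sizes here are illustrative:

```python
import math

def fwd_diff(f, x, h):
    """Forward-difference approximation of f'(x) with step h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)   # d/dx sin(x) at x = 1
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    err = abs(fwd_diff(math.sin, 1.0, h) - exact)
    print(f"h = {h:.0e}   total error = {err:.2e}")
```

Shrinking h first reduces the error (truncation dominates) and then increases it again (roundoff dominates), which is the point of diminishing returns described above.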
