Programming & Numerical Analysis: Kai-Feng Chen

The document discusses numerical methods for calculating derivatives and integrals of functions. It notes that while analytical methods are preferable when possible, numerical methods provide a useful cross check of results. It then presents numerical differentiation, noting that a very small step size h leads to rounding errors that limit precision. The key is a central difference formula that cancels the error to O(h²), improving the precision to around 10⁻¹⁰ for double-precision numbers when h is on the order of the machine epsilon to the power 1/3. Overall, numerical differentiation provides a check on analytical derivatives, and cancellation techniques improve its precision.

Kai-Feng Chen

National Taiwan University

PROGRAMMING & NUMERICAL ANALYSIS

Lecture 07: Numerical Differentiation and Integration

1
ANALYTICAL VERSUS NUMERICAL

A GENERAL RULE:
■ If you know the exact form, it is always better to do the calculus analytically, unless it is really not doable.
■ Although we can do the calculation numerically without any problem, the precision is always a big issue.
■ In this lecture, we will discuss the derivatives & integration of a black-box function f(x).
2
ANALYTICAL VERSUS NUMERICAL

ON THE OTHER HAND:
■ Even if you can do your derivatives or integrations analytically, it is still very useful to do the same thing in a numerical way as a very good cross check (i.e. debugging).
■ Suppose you have >50 different functions to implement in your code and you are calculating their derivatives analytically. Even if you have already calculated everything by yourself, that does not guarantee you have no typo in your code!

Numerical calculus will give you a quick and easy check first!
3
NUMERICAL DERIVATIVES
■ Suppose you have a function f(x), and now you want to compute f'(x). It's pretty easy, right?

By definition, for h → 0:   f'(x) ≈ [f(x + h) − f(x)] / h

■ In principle we could insert a small h, maybe as small as possible within the precision of the numerical calculation. But THIS IS NOT TRUE for numerical derivatives.
■ So, let's try a simple function for which we can actually do the exact calculation easily:

f(x) = x² + exp(x) + log(x) + sin(x)
f'(x) = 2x + exp(x) + 1/x + cos(x)

4
LET'S GIVE IT A QUICK TRY!

import math

def f(x):
    return x**2 + math.exp(x) + math.log(x) + math.sin(x)

def fp(x):
    return 2.*x + math.exp(x) + 1./x + math.cos(x)

x, h = 0.5, 1E-2                 # starting from h = 1E-2
fp_exact = fp(x)

while h > 1E-15:
    fp_numeric = (f(x+h) - f(x))/h
    print('h = %e' % h)
    print('Exact = %.16f,' % fp_exact, end=' ')
    print('Numeric = %.16f,' % fp_numeric, end=' ')
    print('diff = %.16f' % abs(fp_numeric - fp_exact))
    h /= 10.                     # retry with smaller h!

l7-example-01.py

5
A QUICK TRY...?
■ Output:
Exact = 5.5263038325905010

h = 1e-02, Numeric = 5.5224259820642496, diff = 0.0038778505262513


h = 1e-03, Numeric = 5.5258912717413011, diff = 0.0004125608491998
h = 1e-04, Numeric = 5.5262623253238274, diff = 0.0000415072666735
h = 1e-05, Numeric = 5.5262996793148380, diff = 0.0000041532756629
h = 1e-06, Numeric = 5.5263034173247396, diff = 0.0000004152657613
h = 1e-07, Numeric = 5.5263037901376313, diff = 0.0000000424528697
h = 1e-08, Numeric = 5.5263038811759193, diff = 0.0000000485854184
h = 1e-09, Numeric = 5.5263038589714579, diff = 0.0000000263809570
h = 1e-10, Numeric = 5.5263038589714579, diff = 0.0000000263809570
h = 1e-11, Numeric = 5.5263127407556549, diff = 0.0000089081651540
h = 1e-12, Numeric = 5.5262461273741783, diff = 0.0000577052163226
h = 1e-13, Numeric = 5.5311311086825290, diff = 0.0048272760920280
h = 1e-14, Numeric = 5.5511151231257818, diff = 0.0248112905352809

6
OK, WHAT’S THE PROBLEM?

■ For a small h, let’s perform the Taylor expansion:

f(x + h) ≈ f(x) + h f'(x) + (h²/2) f''(x) + (h³/6) f'''(x) + ...

This is what we are calculating:

[f(x + h) − f(x)] / h ≈ f'(x) + (h/2) f''(x) + (h²/6) f'''(x) + ...

In principle, we have an approximation error of O(h) for such a calculation. But there is another round-off error, closely related to the machine precision ε_m:

f(x + h) ≈ f(x) + h f'(x) + (h²/2) f''(x) + (h³/6) f'''(x) + ... + ε_m
7
THE PROBLEM?

■ So, if we account for both in the numerical derivative:

f'_numerical(x) = [f(x + h) − f(x)] / h ≈ f'(x) + (h/2) f''(x) + (h²/6) f'''(x) + ... + O(ε_m/h)

The total error ~ O(h) + O(ε_m/h)

For a double precision number: ε_m ≈ O(10⁻¹⁵) – O(10⁻¹⁶)

The total error will saturate at: h ≈ O(√ε_m) ≈ O(10⁻⁸)

This simply limits the precision of numerical derivatives: it cannot be better than 10⁻⁸, unless...
8
THE TRICK IS ACTUALLY VERY SIMPLE...

f(x + h/2) ≈ f(x) + (h/2) f'(x) + (h²/8) f''(x) + (h³/48) f'''(x) + ...
f(x − h/2) ≈ f(x) − (h/2) f'(x) + (h²/8) f''(x) − (h³/48) f'''(x) + ...

f'_numerical(x) ≈ [f(x + h/2) − f(x − h/2)] / h ≈ f'(x) + (h²/24) f'''(x) + O(h⁴) + ... + O(ε_m/h)

The total error ~ O(h²) + O(ε_m/h) ≈ O(h²) + 10⁻¹⁶/h

The total error will saturate at O(10⁻¹⁰) if h ≈ O(ε_m^(1/3)) ≈ O(10⁻⁵)

This is the “central difference” method.

9
A QUICK TRY AGAIN!

import math

def f(x):
    return x**2 + math.exp(x) + math.log(x) + math.sin(x)

def fp(x):
    return 2.*x + math.exp(x) + 1./x + math.cos(x)

x, h = 0.5, 1E-2
fp_exact = fp(x)

while h > 1E-15:
    fp_numeric = (f(x+h/2.) - f(x-h/2.))/h   # updated here
    print('h = %e' % h)
    print('Exact = %.16f,' % fp_exact, end=' ')
    print('Numeric = %.16f,' % fp_numeric, end=' ')
    print('diff = %.16f' % abs(fp_numeric - fp_exact))
    h /= 10.

l7-example-01a.py

10
A QUICK TRY AGAIN! (II)
■ Output:
Exact = 5.5263038325905010

h = 1e-02, Numeric = 5.5263737163485871, diff = 0.0000698837580861


h = 1e-03, Numeric = 5.5263045313882486, diff = 0.0000006987977477
h = 1e-04, Numeric = 5.5263038395758635, diff = 0.0000000069853625
h = 1e-05, Numeric = 5.5263038326591731, diff = 0.0000000000686722
h = 1e-06, Numeric = 5.5263038325481508, diff = 0.0000000000423501
h = 1e-07, Numeric = 5.5263038323261062, diff = 0.0000000002643947
h = 1e-08, Numeric = 5.5263038367669983, diff = 0.0000000041764974
h = 1e-09, Numeric = 5.5263036369268530, diff = 0.0000001956636480
h = 1e-10, Numeric = 5.5263038589714579, diff = 0.0000000263809570
h = 1e-11, Numeric = 5.5263349452161474, diff = 0.0000311126256465
h = 1e-12, Numeric = 5.5266902165840284, diff = 0.0003863839935274
h = 1e-13, Numeric = 5.5266902165840284, diff = 0.0003863839935274
h = 1e-14, Numeric = 5.5511151231257818, diff = 0.0248112905352809

11
A FURTHER IMPROVEMENT

■ Let's repeat the trick of “cancellation”:

f(x + h/4) ≈ f(x) + (h/4) f'(x) + (h²/32) f''(x) + (h³/384) f'''(x) + ...
f(x − h/4) ≈ f(x) − (h/4) f'(x) + (h²/32) f''(x) − (h³/384) f'''(x) + ...

[f(x + h/4) − f(x − h/4)] / h ≈ (1/2) f'(x) + (h²/192) f'''(x) + O(h⁴) + ...

[f(x + h/2) − f(x − h/2)] / h ≈ f'(x) + (h²/24) f'''(x) + O(h⁴) + ...

Simply repeat the same trick to remove the h² term.
12
A FURTHER IMPROVEMENT (II)

■ Then:

8 × [f(x + h/4) − f(x − h/4)] / h − [f(x + h/2) − f(x − h/2)] / h ≈ 3 f'(x) + [O(h⁴) + ...] + O(ε_m/h)

f'_numerical(x) ≈ [8 f(x + h/4) − 8 f(x − h/4) − f(x + h/2) + f(x − h/2)] / (3h) + [O(h⁴) + ...] + O(ε_m/h)

(Take the f'(x) term and neglect the rest.)

The total error ~ O(h⁴) + O(ε_m/h) ≈ O(h⁴) + 10⁻¹⁶/h

The total error will saturate at O(10⁻¹³) if h ≈ O(ε_m^(1/5)) ≈ O(10⁻³)
13
JUST CHANGE A LINE...

import math

def f(x):
    return x**2 + math.exp(x) + math.log(x) + math.sin(x)

def fp(x):
    return 2.*x + math.exp(x) + 1./x + math.cos(x)

x, h = 0.5, 1E-2
fp_exact = fp(x)

while h > 1E-15:
    # updated here (note: a backslash "\" can wrap a python line)
    fp_numeric = \
        (8.*f(x+h/4.) + f(x-h/2.) - 8.*f(x-h/4.) - f(x+h/2.))/(h*3.)
    print('h = %e' % h)
    print('Exact = %.16f,' % fp_exact, end=' ')
    print('Numeric = %.16f,' % fp_numeric, end=' ')
    print('diff = %.16f' % abs(fp_numeric - fp_exact))
    h /= 10.

l7-example-01b.py

14
JUST CHANGE A LINE...(II)
■ Output:
Exact = 5.5263038325905010

h = 1e-02, Numeric = 5.5263038315869801, diff = 0.0000000010035208


h = 1e-03, Numeric = 5.5263038325903402, diff = 0.0000000000001608
h = 1e-04, Numeric = 5.5263038325925598, diff = 0.0000000000020588
h = 1e-05, Numeric = 5.5263038327701954, diff = 0.0000000001796945
h = 1e-06, Numeric = 5.5263038328442100, diff = 0.0000000002537091
h = 1e-07, Numeric = 5.5263038249246188, diff = 0.0000000076658822
h = 1e-08, Numeric = 5.5263037257446959, diff = 0.0000001068458051
h = 1e-09, Numeric = 5.5263040070011948, diff = 0.0000001744106939
h = 1e-10, Numeric = 5.5263127407556549, diff = 0.0000089081651540
h = 1e-11, Numeric = 5.5263497481898094, diff = 0.0000459155993084
h = 1e-12, Numeric = 5.5258020381643282, diff = 0.0005017944261727
h = 1e-13, Numeric = 5.5215091758024446, diff = 0.0047946567880564
h = 1e-14, Numeric = 5.5807210704491190, diff = 0.0544172378586181

15
INTERMISSION

■ You have learned that the central difference method cancels the terms up to f'', and the improved higher-order method cancels the terms up to f'''. You may try the code (l7-example-01a.py and l7-example-01b.py) and calculate the numerical derivative of a polynomial up to x² or x³. Can the calculation be 100% precise or not?
■ For example, you may try such a simple function:

f(x) = 5x³ + 4x² + 3x + 2
→ f'(x) = 15x² + 8x + 3
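As a hint for this exercise, here is a minimal sketch (not part of the original slides): the leading error of the central difference carries f'''(x), so for any polynomial of degree ≤ 2 the method is exact up to rounding.

```python
def f(x):
    # a quadratic test function: f'(x) = 8x + 3 and f'''(x) = 0
    return 4.0 * x**2 + 3.0 * x + 2.0

def central_diff(f, x, h):
    # central difference; leading error term ~ (h**2 / 24) * f'''(x)
    return (f(x + h / 2.0) - f(x - h / 2.0)) / h

x, h = 0.5, 1e-2
print(abs(central_diff(f, x, h) - (8.0 * x + 3.0)))  # only round-off remains
```
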

16
NUMERICAL
INTEGRATION
■ Starting from some super basic integration rules:
Rectangle rule

Trapezoidal rule

Simpson's rule

17
NUMERICAL INTEGRATION (II)

■ Let's practice a classical integration method, the trapezoidal rule, e.g. with

f(x) = x − x² + x³ − x⁴ + sin(13x)/13

∫ f(x) dx = x²/2 − x³/3 + x⁴/4 − x⁵/5 − cos(13x)/169

[Figure: the trapezoidal rule approximates the area over each interval [x_i, x_{i+1}] of width h by the trapezoid through the values f_i and f_{i+1}.]
18
TRAPEZOIDAL RULE: IMPLEMENTATION

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)
area, x, h = 0., 0., 1E-3          # start with h = 1e-3
f0 = f1 = f(x)
while x < 1.2 - h*0.5:
    f0, f1 = f1, f(x+h)
    x += h
    area += f0 + f1
area *= h/2.

print('Exact: %.16f, Numerical: %.16f, diff: %.16f' \
      % (fint_exact, area, abs(fint_exact-area)))

Output:
Exact: 0.1765358676046381, Numerical: 0.1765352854227494, diff: 0.0000005821818886

l7-example-02.py
19
HOW ABOUT
A SMALLER STEP SIZE?
■ As expected, the precision cannot be improved by simply using a
smaller h.
■ It's very time consuming: smaller h, more operations, more
computing time needed.

Exact = 0.1765358676046381

h = 1e-02, Numeric = 0.1764776451750985, diff = 0.0000582224295395


h = 1e-03, Numeric = 0.1765352854227494, diff = 0.0000005821818886
h = 1e-04, Numeric = 0.1765358617829089, diff = 0.0000000058217292
h = 1e-05, Numeric = 0.1765358675475263, diff = 0.0000000000571118
h = 1e-06, Numeric = 0.1765358676034689, diff = 0.0000000000011692
h = 1e-07, Numeric = 0.1765358677680409, diff = 0.0000000001634028
h = 1e-08, Numeric = 0.1765358661586719, diff = 0.0000000014459662

20
ERROR ANALYSIS: APPROXIMATION ERROR

■ Consider the Taylor expansion of f(x):

f(x + h) ≈ f(x) + h f'(x) + (h²/2) f''(x) + (h³/6) f'''(x) + ...

Exact integration:

∫₀ʰ f(x + η) dη ≈ h f(x) + (h²/2) f'(x) + (h³/6) f''(x) + (h⁴/24) f'''(x) + ...

Trapezoidal rule:

(h/2) [f(x) + f(x + h)] ≈ h f(x) + (h²/2) f'(x) + (h³/4) f''(x) + (h⁴/12) f'''(x) + ...

Error per interval: ≈ −(h³/12) f''(x) + ...

Approximation error: ε_approx ≈ O(h³) × (L/h) ≈ O(h²), where L is the total integration range.
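The O(h²) scaling can be checked directly (a sketch with an assumed test integrand, exp(x) on [0, 1], not one from the slides): halving h should divide the error by roughly four.

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n intervals of width h = (b - a) / n
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

exact = math.e - 1.0   # integral of exp(x) over [0, 1]
err1 = abs(trapezoid(math.exp, 0.0, 1.0, 10) - exact)
err2 = abs(trapezoid(math.exp, 0.0, 1.0, 20) - exact)
print(err1 / err2)     # ~4, as expected for an O(h^2) method
```
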
21
ERROR ANALYSIS: TOTAL ERROR

■ If we believe the theory:

ε_roundoff ≈ O(√N · ε_m),  where N ∝ L/h = total number of operation steps.

■ The total error:

ε_total ≈ O(√N · ε_m) + O(h²) ≈ O(ε_m/√h) + O(h²)

For a double-precision floating point number, ε_m ≈ O(10⁻¹⁵) – O(10⁻¹⁶).
The best precision will be of O(10⁻¹²) when h ≈ O(ε_m^(1/2.5)) ≈ O(10⁻⁶).

Well, this is just an order-of-magnitude guess; usually it is highly dependent on the algorithm and your exact coding.
(Also, a smaller h means much more computing time!)

22
AN EASY IMPROVEMENT

■ Another classical method: Simpson's rule.
■ Instead of linear interpolation, we could use a 2nd-order (parabola) interpolation through 3 points:

[Figure: a parabola through the points (x_i, f_i), (x_{i+1}, f_{i+1}), (x_{i+2}, f_{i+2}), with spacing h between neighboring points.]
23
THE FORMULAE

■ Treat the function as a parabola on the interval [−1, +1]:

f(x) ≈ ax² + bx + c

∫₋₁⁺¹ f(x) dx = [(a/3)x³ + (b/2)x² + cx]₋₁⁺¹ = 2a/3 + 2c

f(+1) ≈ a + b + c
f(0) ≈ c
f(−1) ≈ a − b + c

Solve for a, b, c:

∫₋₁⁺¹ f(x) dx = f(−1)/3 + 4f(0)/3 + f(+1)/3

Simpson’s rule:

∫₀²ʰ f(x + η) dη ≈ (h/3) f(x) + (4h/3) f(x + h) + (h/3) f(x + 2h)

Total integration:

∫ f(x) dx ≈ (h/3) f₁ + (4h/3) f₂ + (2h/3) f₃ + (4h/3) f₄ + (2h/3) f₅ + ... + (4h/3) f_{N−1} + (h/3) f_N
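As a quick sanity check of the weights (a sketch, not from the slides): a single Simpson panel is exact not only for parabolas but for any cubic, since the next error term also cancels.

```python
def simpson_panel(f, x, h):
    # one Simpson panel over [x, x + 2h] with weights h/3, 4h/3, h/3
    return h / 3.0 * (f(x) + 4.0 * f(x + h) + f(x + 2.0 * h))

# integral of x^3 over [0, 2] is 4; one panel reproduces it up to rounding
print(simpson_panel(lambda t: t**3, 0.0, 1.0))
```
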
24
SIMPSON’S RULE: IMPLEMENTATION

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)
area, x, h = 0., 0., 1E-3
f0 = f1 = f2 = f(x)
while x < 1.2 - h*0.5:
    f0, f1, f2 = f2, f(x+h), f(x+h*2.)
    x += h*2.
    area += f0 + f1*4. + f2
area *= h/3.

print('Exact: %.16f, Numerical: %.16f, diff: %.16f' \
      % (fint_exact, area, abs(fint_exact-area)))

Output:
Exact: 0.1765358676046381, Numerical: 0.1765358676063498, diff: 0.0000000000017117

l7-example-03.py
25
SIMPSON’S RULE: ERROR ANALYSIS

■ Could we cancel the O(h³) and O(h⁴) terms?

f(x + h) ≈ f(x) + h f'(x) + (h²/2) f''(x) + (h³/6) f'''(x) + (h⁴/24) f⁽⁴⁾(x) + ...
f(x + 2h) ≈ f(x) + 2h f'(x) + 2h² f''(x) + (4h³/3) f'''(x) + (2h⁴/3) f⁽⁴⁾(x) + ...

(h/3) f(x) + (4h/3) f(x + h) + (h/3) f(x + 2h)
    ≈ 2h f(x) + 2h² f'(x) + (4h³/3) f''(x) + (2h⁴/3) f'''(x) + (5h⁵/18) f⁽⁴⁾(x) + ...

∫₀²ʰ f(x + η) dη ≈ 2h f(x) + 2h² f'(x) + (4h³/3) f''(x) + (2h⁴/3) f'''(x) + (4h⁵/15) f⁽⁴⁾(x) + ...

Error per interval: ≈ −(h⁵/90) f⁽⁴⁾(x) + ...

ε_approx ≈ O(h⁵) × (L/h) ≈ O(h⁴)
26
SIMPSON’S RULE: ERROR ANALYSIS (II)

■ The total error is given by:

ε_total ≈ O(√N · ε_m) + O(h⁴) ≈ O(ε_m/√h) + O(h⁴)

The best precision could be of O(10⁻¹⁴) when h ≈ O(ε_m^(1/4.5)) ≈ O(10⁻⁴).

Is it true? Not too bad in principle...

Exact = 0.1765358676046381

h = 1e-02, Numeric = 0.1765358847654857, diff = 0.0000000171608476


h = 1e-03, Numeric = 0.1765358676063498, diff = 0.0000000000017117
h = 1e-04, Numeric = 0.1765358676047102, diff = 0.0000000000000721
h = 1e-05, Numeric = 0.1765358676043926, diff = 0.0000000000002455
h = 1e-06, Numeric = 0.1765358676131805, diff = 0.0000000000085424
h = 1e-07, Numeric = 0.1765358676224454, diff = 0.0000000000178073
h = 1e-08, Numeric = 0.1765358675909871, diff = 0.0000000000136510

27
COMMENTS

■ As you may have already observed during our tests above, in numerical calculations it is important to minimize the total error rather than the approximation error only:
▫ Reducing the spacing h to a very small number is not a good idea in principle; cancellation of higher-order terms is more effective.
▫ Some algorithms reduce the spacing according to the estimated approximation error. This is called “adaptive stepping”.
▫ The key point is to use as small a number of points as possible (higher speed, less round-off error).
▫ Numerical calculations cannot be 100% precise by all means.
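The adaptive-stepping idea can be sketched with the classic recursive adaptive Simpson scheme (a common textbook variant, not necessarily the exact algorithm the slides have in mind): compare one panel against two half-width panels, and subdivide only where the estimated error is too large.

```python
import math

def simpson(f, a, b):
    # a single Simpson panel over [a, b]
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_simpson(f, a, b, tol=1e-10):
    # the coarse/fine difference estimates ~15x the error of the fine
    # result, so we refine only the subintervals that need it
    m = 0.5 * (a + b)
    whole = simpson(f, a, b)
    fine = simpson(f, a, m) + simpson(f, m, b)
    if abs(fine - whole) < 15.0 * tol:
        return fine + (fine - whole) / 15.0
    return (adaptive_simpson(f, a, m, 0.5 * tol) +
            adaptive_simpson(f, m, b, 0.5 * tol))

print(adaptive_simpson(math.sin, 0.0, math.pi))  # close to the exact value 2
```

This way smooth regions get few, wide panels while difficult regions get many narrow ones, keeping the total number of points small.
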
28
INTERMISSION

■ These “fixed points” integration rules have several limitations: for instance, you cannot integrate over singularities. Try to integrate some functions with singularities and see what you get.
■ Another case that may give you limited precision is when the function is not continuous. Try to integrate a step function and see how precise you can get, e.g.:

f(x) = 1 if x ≥ 0
f(x) = 0 if x < 0
29
GETTING STARTED WITH NUMPY & SCIPY

FROM THE OFFICIAL WEBSITE:

■ NumPy's array type augments the Python language with an efficient data structure useful for numerical work, e.g., manipulating matrices. NumPy also provides basic numerical routines.
■ SciPy contains additional routines needed in scientific work: for example, routines for computing integrals numerically, solving differential equations, optimization, etc.

In short:
NumPy = extended array + some routines
SciPy = scientific tools based on NumPy
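To illustrate the "extended array" point (a sketch, assuming NumPy is available; not code from the slides): the trapezoidal loop of l7-example-02.py collapses into a few vectorized array operations.

```python
import numpy as np

def f(x):
    # the same test integrand as in the slides, evaluated on whole arrays
    return x - x**2 + x**3 - x**4 + np.sin(x * 13.0) / 13.0

x = np.linspace(0.0, 1.2, 1201)   # uniform grid with h = 1e-3
y = f(x)
h = x[1] - x[0]
area = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
print(area)   # close to the trapezoidal result in the slides, ~0.1765353
```
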

30
TYPICAL WORK FLOW

You can think of NumPy/SciPy as nothing more than a bigger math module. Don't think they are something very fancy!

Working on your own research topic (TH/EXP)
→ Need numerical analysis for resolving some numerical problems
→ Write your code with the standard math module
→ if not enough... add NumPy/SciPy/etc.
→ still not enough... other solutions: Google another package / write your own algorithm / use a different language / etc.
→ Problem solved!
31
NUMERICAL DERIVATIVES
IN SCIPY
■ Just google –– and you’ll find it’s just a simple function:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.derivative.html
32
LET’S GIVE IT A TRY

import math
import scipy.misc as misc                  # import the scipy.misc module

def f(x):
    return x**2 + math.exp(x) + math.log(x) + math.sin(x)

def fp(x):
    return 2.*x + math.exp(x) + 1./x + math.cos(x)

x, h = 0.5, 1E-2
fp_exact = fp(x)

while h > 1E-15:
    fp_numeric = misc.derivative(f, x, h)  # just call it
    print('h = %e' % h)
    print('Exact = %.16f,' % fp_exact, end=' ')
    print('Numeric = %.16f,' % fp_numeric, end=' ')
    print('diff = %.16f' % abs(fp_numeric - fp_exact))
    h /= 10.

l7-example-04.py
33
LET’S GIVE IT A TRY (II)
■ This gives us the best precision of O(10–10) when h~10–6.

Exact = 5.5263038325905010

h = 1e-02, Numeric = 5.5265834157978029, diff = 0.0002795832073019


h = 1e-03, Numeric = 5.5263066277866368, diff = 0.0000027951961359
h = 1e-04, Numeric = 5.5263038605413151, diff = 0.0000000279508141
h = 1e-05, Numeric = 5.5263038328479110, diff = 0.0000000002574101
h = 1e-06, Numeric = 5.5263038326591731, diff = 0.0000000000686722
h = 1e-07, Numeric = 5.5263038323261062, diff = 0.0000000002643947
h = 1e-08, Numeric = 5.5263038589714588, diff = 0.0000000263809579
h = 1e-09, Numeric = 5.5263038589714579, diff = 0.0000000263809570
h = 1e-10, Numeric = 5.5263038589714579, diff = 0.0000000263809570
h = 1e-11, Numeric = 5.5263127407556549, diff = 0.0000089081651540
h = 1e-12, Numeric = 5.5260240827692533, diff = 0.0002797498212477
h = 1e-13, Numeric = 5.5278004396086535, diff = 0.0014966070181526
h = 1e-14, Numeric = 5.5289106626332787, diff = 0.0026068300427777

34
GO TO HIGHER ORDER

■ This gives us the best precision of O(10⁻¹¹–10⁻¹²) when h ~ 10⁻⁴. Not a dramatic improvement...

x, h = 0.5, 1E-2
fp_exact = fp(x)

while h > 1E-15:
    # updated here: use a 5-point stencil
    fp_numeric = misc.derivative(f, x, h, order=5)
    print('h = %e' % h)
    ...

l7-example-04a.py (partial)

h = 1e-02, Numeric = 5.5263035753822134, diff = 0.0000002572082876


h = 1e-03, Numeric = 5.5263038325648601, diff = 0.0000000000256408
h = 1e-04, Numeric = 5.5263038325881197, diff = 0.0000000000023812
h = 1e-05, Numeric = 5.5263038325537019, diff = 0.0000000000367990
h = 1e-06, Numeric = 5.5263038325481508, diff = 0.0000000000423501
h = 1e-07, Numeric = 5.5263038328812177, diff = 0.0000000002907168

35
NUMERICAL INTEGRATION WITH SCIPY

■ You'll find there are many different integration tools in SciPy. The quad function is a general integration tool based on QUADPACK. Recommended!

http://docs.scipy.org/doc/scipy/reference/integrate.html#module-scipy.integrate
36
INTEGRATION WITH THE QUAD() FUNCTION

import math
import scipy.integrate as integrate

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)

quad, quaderr = integrate.quad(f, 0., 1.2)

print('Exact: %.16f' % fint_exact)
print('Numerical: %.16f+-%.16f, diff: %.16f' \
      % (quad, quaderr, abs(fint_exact - quad)))

Output:
Exact: 0.1765358676046381
Numerical: 0.1765358676046380+-0.0000000000000029, diff: 0.0000000000000001

l7-example-05.py
37
REMARK

■ It is very easy to use the NumPy/SciPy routines to do numerical derivatives and integration: just import the module, call the function, and get your results!
■ However, the limitation of these functions is no different from our homemade code: don't use a too-small stepping size!
■ You may find the integration is very precise. This is due to the algorithm in QUADPACK (based on Gaussian quadrature), which is more advanced. The key idea is to select good points based on the roots of specific polynomial sets (e.g. Legendre polynomials) that give the best cancellation of the higher-order terms.

➡ for your own further study.
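To give a flavor of the Gaussian-quadrature idea (a sketch using NumPy's leggauss nodes, not QUADPACK itself): an n-point Gauss-Legendre rule places the sample points at the roots of the n-th Legendre polynomial and is then exact for all polynomials up to degree 2n−1.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    # nodes/weights on [-1, 1] from the Legendre roots, mapped onto [a, b]
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(x))

def f(x):
    return x - x**2 + x**3 - x**4 + np.sin(x * 13.0) / 13.0

# a modest number of well-chosen points already reaches ~machine precision
print(abs(gauss_legendre(f, 0.0, 1.2, 20) - 0.1765358676046381))
```

Compare this with the thousands of evaluations the fixed-step trapezoidal and Simpson rules needed for far less precision.
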

38
HANDS-ON SESSION

■ Practice 1:
Integration rules of even higher order can be constructed easily. For example, compare Simpson's rule to the 3/8 rule:

Simpson [order 2]:
∫₀²ʰ f(x + η) dη ≈ (h/3) f(x) + (4h/3) f(x + h) + (h/3) f(x + 2h)

3/8 [order 3]:
∫₀³ʰ f(x + η) dη ≈ (3h/8) f(x) + (9h/8) f(x + h) + (9h/8) f(x + 2h) + (3h/8) f(x + 3h)

Try to modify l7-example-03.py to implement the 3/8 integration rule and see how precise you can get.
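Before modifying the full loop, one can sanity-check the 3/8 weights on a single panel (a sketch, not the full exercise): the rule is exact for cubics, so integrating x³ should leave only rounding error.

```python
def rule38_panel(f, x, h):
    # one 3/8 panel over [x, x + 3h] with weights 3h/8, 9h/8, 9h/8, 3h/8
    return h * (3.0 * f(x) + 9.0 * f(x + h) +
                9.0 * f(x + 2.0 * h) + 3.0 * f(x + 3.0 * h)) / 8.0

# integral of x^3 over [0, 3] is 81/4 = 20.25; one panel reproduces it
print(rule38_panel(lambda t: t**3, 0.0, 1.0))
```
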

39
HANDS-ON SESSION

■ Practice 2:
The integral of the cosine function is sine. Let's modify the l7-example-05.py code [integration with the quad() function] to calculate the integral of a simple cosine and see how precise the calculation can be, i.e.:

def f(x):
    return math.cos(x)

def fint(x):
    return math.sin(x)

by integrating f(x) over the intervals [0, π], [0, 100π], [0, 1000π], [0, 100.5π], [0, 1000.5π]. Is it always very precise?

40
