Part 2 Lecture Notes On Interpolation
Lecture Notes on
Interpolation
MATH 435
E-mail: dattab@math.niu.edu
URL: www.math.niu.edu/~dattab
PART II.
Interpolation
2 Interpolation
2.1 Problem Statement and Applications
Consider the following table:
x_0    f_0
x_1    f_1
x_2    f_2
 ⋮      ⋮
x_k    f_k
 ⋮      ⋮
x_n    f_n
In the above table, f_k, k = 0, ..., n, are assumed to be the values of a certain function f(x), evaluated at x_k, k = 0, ..., n, in an interval containing these points. Note that only the functional values are known, not the function f(x) itself. The problem is to find the value f_u of the function corresponding to a nontabulated intermediate point x = u.
Interpolation Problem
The interpolation problem is a classical one; it dates back to the time of Newton and Kepler, who needed to solve such problems in analyzing data on the positions of stars and planets. It is also of interest in numerous other practical applications.
2.2 Existence and Uniqueness
It is well known that a continuous function f(x) on [a, b] can be approximated as closely as desired by a polynomial. Specifically, for each ε > 0 there exists a polynomial P(x) such that |f(x) − P(x)| < ε for all x in [a, b]. This is a classical result, known as the Weierstrass Approximation Theorem.
Knowing that fk , k = 0, · · · , n are the values of a certain function at xk , the most obvious
thing then to do is to construct a polynomial Pn (x) of degree at most n that passes through
the (n + 1) points: (x0 , f0 ), (x1 , f1 ), · · · , (xn , fn ).
Indeed, if the nodes x0 , x1 , ..., xn are assumed to be distinct, then such a poly-
nomial always does exist and is unique, as can be seen from the following.
Writing P_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n and imposing the conditions P_n(x_i) = f_i, i = 0, 1, ..., n, gives the linear system
\[
\begin{pmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}.
\tag{2.2}
\]
Because x0 , x1 , · · · , xn are distinct, it can be shown [Exercise] that the matrix of the above
system is nonsingular. Thus, the linear system for the unknowns a0 , a1 , · · · , an has a unique
solution, in view of the following well-known result, available in any linear algebra text book.
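For concreteness, the system (2.2) can be assembled and solved numerically. The following is only a minimal Python sketch (the sample data, f(x) = 1/x tabulated at x = 2, 2.5, 4, is our own choice for illustration); it builds the coefficient matrix with `numpy.vander`:

```python
import numpy as np

# Sample data (an assumption for illustration): f(x) = 1/x at three distinct nodes.
x = np.array([2.0, 2.5, 4.0])
f = 1.0 / x

# Coefficient matrix of (2.2): V[i, j] = x_i ** j.
V = np.vander(x, increasing=True)
a = np.linalg.solve(V, f)          # coefficients a_0, ..., a_n of P_n

# P_n reproduces the tabulated values at the nodes ...
assert np.allclose(np.polyval(a[::-1], x), f)
# ... and supplies intermediate values, e.g. at x = 3:
print(round(np.polyval(a[::-1], 3.0), 4))    # 0.325
```

In practice `np.linalg.cond(V)` grows rapidly with the number of nodes, which is exactly the ill-conditioning of the Vandermonde matrix discussed in these notes.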
Theorem 2.1 (Existence and Uniqueness Theorem for Polynomial Interpolation) Given (n + 1) distinct nodes x_0, x_1, ..., x_n and corresponding values f_0, f_1, ..., f_n, there exists a unique polynomial P_n(x) of degree at most n such that
\[
P_n(x_i) = f_i, \quad i = 0, 1, \ldots, n.
\]
Definition: The polynomial P_n(x) in Theorem 2.1 is called the interpolating polynomial.
It is natural to obtain the polynomial by solving the linear system (2.2) above. Unfortunately, the matrix of this system, known as the Vandermonde matrix, is usually highly ill-conditioned, and the solution of such an ill-conditioned system, even by a stable method, may not be accurate. There are, however, several other ways to construct the interpolating polynomial that do not require the solution of a Vandermonde system. We describe one such construction in the following.
Suppose n = 1; that is, suppose we have only two points (x_0, f_0) and (x_1, f_1). Then it is easy to see that the linear polynomial
\[
P_1(x) = \frac{x - x_1}{x_0 - x_1}\, f_0 + \frac{x - x_0}{x_1 - x_0}\, f_1
\]
is an interpolating polynomial, because
\[
P_1(x_0) = f_0, \qquad P_1(x_1) = f_1.
\]
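The two-point formula translates directly into code. A small Python sketch (the function name and the sample data are ours):

```python
def p1(x, x0, f0, x1, f1):
    """Linear interpolant through (x0, f0) and (x1, f1)."""
    return f0 * (x - x1) / (x0 - x1) + f1 * (x - x0) / (x1 - x0)

# The interpolant reproduces the data at the two nodes:
assert abs(p1(2.0, 2.0, 0.5, 2.5, 0.4) - 0.5) < 1e-12
assert abs(p1(2.5, 2.0, 0.5, 2.5, 0.4) - 0.4) < 1e-12
```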
The concept can be generalized easily to polynomials of higher degrees. To generate polynomials of higher degrees, define the polynomials {L_k(x)}, k = 0, 1, ..., n, as follows:
\[
L_0(x) = \frac{(x - x_1)(x - x_2)\cdots(x - x_n)}{(x_0 - x_1)(x_0 - x_2)\cdots(x_0 - x_n)}
\]
\[
L_1(x) = \frac{(x - x_0)(x - x_2)\cdots(x - x_n)}{(x_1 - x_0)(x_1 - x_2)\cdots(x_1 - x_n)}
\]
\[
\vdots
\]
\[
L_n(x) = \frac{(x - x_0)(x - x_1)(x - x_2)\cdots(x - x_{n-1})}{(x_n - x_0)(x_n - x_1)(x_n - x_2)\cdots(x_n - x_{n-1})}
\]
Also, note that each L_k(x) satisfies L_k(x_k) = 1 and L_k(x_j) = 0 for j ≠ k. Now define
\[
P_n(x) = L_0(x)f_0 + L_1(x)f_1 + \cdots + L_n(x)f_n.
\]
Thus
\[
P_n(x_0) = L_0(x_0)f_0 + L_1(x_0)f_1 + \cdots + L_n(x_0)f_n = f_0,
\]
and similarly P_n(x_i) = f_i for every i, so P_n(x) is the interpolating polynomial; written in this form it is called the Lagrange interpolating polynomial.
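The construction above is easy to code for arbitrary distinct nodes. A Python sketch (names are ours; the sample data uses f(x) = x² + 5x + 7, which appears in the example below):

```python
def lagrange_eval(x, nodes, values):
    """Evaluate the Lagrange form of the interpolating polynomial at x.

    nodes  -- distinct interpolation points x_0, ..., x_n
    values -- the corresponding values f_0, ..., f_n
    """
    total = 0.0
    for k, xk in enumerate(nodes):
        # L_k(x) = prod over j != k of (x - x_j) / (x_k - x_j)
        Lk = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                Lk *= (x - xj) / (xk - xj)
        total += values[k] * Lk
    return total

# f(x) = x^2 + 5x + 7 tabulated at x = 0, 1, 2, 4:
nodes, values = [0.0, 1.0, 2.0, 4.0], [7.0, 13.0, 21.0, 43.0]
# P_n(x_i) = f_i at every node, as required:
assert all(abs(lagrange_eval(xi, nodes, values) - fi) < 1e-9
           for xi, fi in zip(nodes, values))
print(round(lagrange_eval(3.0, nodes, values), 4))   # 31.0, the exact f(3)
```

Since f here has degree 2 ≤ 3, the degree-3 interpolant reproduces f exactly, so the value at x = 3 is exactly 31.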
\[
L_0(x) = \frac{(x - 1)(x - 2)(x - 4)}{(-1)(-2)(-4)}
\]
\[
L_1(x) = \frac{(x - 0)(x - 2)(x - 4)}{1\cdot(-1)(-3)}
\]
\[
L_2(x) = \frac{(x - 0)(x - 1)(x - 4)}{2\cdot 1\cdot(-2)}
\]
\[
L_3(x) = \frac{(x - 0)(x - 1)(x - 2)}{4\cdot 3\cdot 2}
\]
Verify: Note that f(x) in this case is f(x) = x² + 5x + 7, and the exact value of f(x) at x = 3 is 31.
Example 2.2 Consider the following table:

i    x_i    f(x_i)
0    2      1/2
1    2.5    1/2.5
2    4      1/4

Interpolate f(x) at x = 3.
\[
L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{(x - 2.5)(x - 4)}{(-0.5)(-2)} = (x - 2.5)(x - 4)
\]
\[
L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} = \frac{(x - 2)(x - 4)}{(0.5)(-1.5)} = -\frac{1}{0.75}(x - 2)(x - 4)
\]
\[
L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} = \frac{(x - 2)(x - 2.5)}{(2)(1.5)} = \frac{1}{3}(x - 2)(x - 2.5)
\]
So,
\[
P_2(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x) = \frac{1}{2}L_0(x) + \frac{1}{2.5}L_1(x) + \frac{1}{4}L_2(x).
\]
Thus,
\[
P_2(3) = \frac{1}{2}L_0(3) + \frac{1}{2.5}L_1(3) + \frac{1}{4}L_2(3) = \frac{1}{2}(-0.5) + \frac{1}{2.5}\cdot\frac{1}{0.75} + \frac{1}{4}\cdot\frac{1}{6} = 0.3250.
\]
Verify: the exact value of f(x) at x = 3 is 1/3 ≈ 0.3333.
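This computation is easy to reproduce numerically; a short self-contained sketch of the degree-2 Lagrange interpolant of the table:

```python
nodes = [2.0, 2.5, 4.0]
values = [1 / 2.0, 1 / 2.5, 1 / 4.0]   # f(x) = 1/x at the tabulated nodes

def P2(x):
    """Degree-2 Lagrange interpolant of the table above."""
    s = 0.0
    for k in range(3):
        L = 1.0
        for j in range(3):
            if j != k:
                L *= (x - nodes[j]) / (nodes[k] - nodes[j])
        s += values[k] * L
    return s

print(round(P2(3.0), 4))           # 0.325, versus the exact 1/3 = 0.3333...
print(round(1 / 3 - P2(3.0), 4))   # interpolation error, about 0.0083
```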
Generalized Rolle's Theorem: Suppose f(x) is continuous on [a, b] and (n + 1) times differentiable on (a, b). If f(x) has (n + 2) distinct zeros in [a, b], then f^(n+1)(c) = 0 for some number c with a < c < b.

Theorem 2.2 (Interpolation Error) Let P_n(x) be the polynomial interpolating f(x) at the (n + 1) distinct nodes x_0, x_1, ..., x_n in [a, b], and suppose f is (n + 1) times continuously differentiable. Then for every x̄ in [a, b] there exists a number ξ = ξ(x̄) in (a, b) such that
\[
E_n(\bar x) = f(\bar x) - P_n(\bar x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n).
\]
Proof: If x̄ is one of the numbers x_0, x_1, ..., x_n, then the result holds trivially: the error in this case is zero, and any ξ will do.
Next, assume that x̄ is not one of the numbers x0 , x1 , · · · , xn .
Define a function g(t) of the variable t on [a, b]:
\[
g(t) = f(t) - P_n(t) - [f(\bar x) - P_n(\bar x)]\cdot\frac{(t - x_0)(t - x_1)\cdots(t - x_n)}{(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)}. \tag{2.6}
\]
Then, at each node x_k,
\[
g(x_k) = \underbrace{f(x_k) - P_n(x_k)}_{=0} - [f(\bar x) - P_n(\bar x)]\cdot 0 = 0
\]
(note that the numerator of the fraction appearing above contains the factor (x_k − x_k) = 0). Furthermore,
\[
g(\bar x) = f(\bar x) - P_n(\bar x) - [f(\bar x) - P_n(\bar x)]\cdot\frac{(\bar x - x_0)\cdots(\bar x - x_n)}{(\bar x - x_0)\cdots(\bar x - x_n)}
= f(\bar x) - P_n(\bar x) - f(\bar x) + P_n(\bar x) = 0. \tag{2.8}
\]
Thus, g(t) vanishes at (n + 2) distinct points: x_0, x_1, ..., x_n, and x̄. Furthermore, g(t) is (n + 1) times continuously differentiable, since f(x) is so.
Therefore, by generalized Rolle’s theorem there exists a number ξ(x̄) in (a, b) such that
g (n+1) (ξ) = 0.
Let's compute g^(n+1)(t) now. From (2.6) we have
\[
g^{(n+1)}(t) = f^{(n+1)}(t) - P_n^{(n+1)}(t) - [f(\bar x) - P_n(\bar x)]\,\frac{d^{n+1}}{dt^{n+1}}\left[\frac{(t - x_0)(t - x_1)\cdots(t - x_n)}{(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)}\right]. \tag{2.9}
\]
Then [Exercise]:
\[
\frac{d^{n+1}}{dt^{n+1}}\left[\frac{(t - x_0)(t - x_1)\cdots(t - x_n)}{(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)}\right]
= \frac{1}{(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)}\cdot\frac{d^{n+1}}{dt^{n+1}}\big[(t - x_0)(t - x_1)\cdots(t - x_n)\big]
= \frac{(n+1)!}{(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)}.
\]
Also, P_n^(n+1)(t) = 0, because P_n(x) is a polynomial of degree at most n. Thus, P_n^(n+1)(ξ) = 0.
So,
\[
g^{(n+1)}(\xi) = f^{(n+1)}(\xi) - \frac{f(\bar x) - P_n(\bar x)}{(\bar x - x_0)\cdots(\bar x - x_n)}\,(n+1)!. \tag{2.10}
\]
Since g^(n+1)(ξ) = 0, from (2.10) we have
\[
E_n(\bar x) = f(\bar x) - P_n(\bar x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(\bar x - x_0)\cdots(\bar x - x_n).
\]
Remark: To obtain the error of interpolation using the above theorem, we need to know the (n + 1)th derivative of f(x), or at least its maximum absolute value on the interval [a, b]. Since in practice this value is rarely known, the error formula is of limited practical use.
Example 2.3 Let's compute the maximum absolute error for Example 2.2. Here n = 2, and
\[
E_2(\bar x) = \frac{f^{(3)}(\xi)}{3!}\,(\bar x - x_0)(\bar x - x_1)(\bar x - x_2).
\]
To bound E_2(x̄), we need a bound on f^(3)(x). Let's compute this now:
\[
f(x) = \frac{1}{x}, \quad f'(x) = -\frac{1}{x^2}, \quad f''(x) = \frac{2}{x^3}, \quad f^{(3)}(x) = -\frac{6}{x^4}.
\]
So
\[
|f^{(3)}(\xi)| \le \frac{6}{2^4} = \frac{6}{16} \quad \text{for } 2 \le \xi \le 4.
\]
Since x̄ = 3, x_0 = 2, x_1 = 2.5, x_2 = 4, we have
\[
|E_2(\bar x)| \le \frac{1}{6}\cdot\frac{6}{16}\,|(3 - 2)(3 - 2.5)(3 - 4)| = 0.03125.
\]
Note that in four-digit arithmetic, the difference between the value obtained by interpolation
and the exact value is 0.3333 − 0.3250 = 0.0083.
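The bound of Example 2.3 can be checked against the actual error with a few lines (a sketch; the numbers come straight from the example):

```python
# Bound from the error theorem: |E_2| <= max|f'''| / 3! * |(x-x0)(x-x1)(x-x2)|.
max_f3 = 6 / 2**4                        # |f'''(x)| = 6/x^4 is largest at x = 2
product = abs((3 - 2) * (3 - 2.5) * (3 - 4))
bound = max_f3 * product / 6             # 3! = 6
actual = abs(1 / 3 - 0.325)              # exact value minus P_2(3)
print(bound)                             # 0.03125
print(actual)                            # about 0.00833
assert actual <= bound
```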
[Figure: equally spaced nodes a = x_0, x_1, x_2, ..., b = x_n, each pair a distance h apart.]

Suppose the nodes are equally spaced with spacing h, as in the figure. Then it can be shown [Exercise] that
\[
|(\bar x - x_0)(\bar x - x_1)\cdots(\bar x - x_n)| \le \frac{n!\,h^{n+1}}{4}.
\]
If we also assume that |f^(n+1)(x)| ≤ M, then we have
\[
|E_n(\bar x)| = |f(\bar x) - P_n(\bar x)| \le \frac{M}{(n+1)!}\cdot\frac{n!\,h^{n+1}}{4} = \frac{M h^{n+1}}{4(n+1)}. \tag{2.11}
\]
Example 2.4 Suppose a table of values for f (x) = cos x has to be prepared in [0, 2π] with
nodes of spacing h, using linear interpolation, with an error of interpolation of at most
5 × 10−7 . How small should h be?
Here n = 1.
f(x) = cos x,  f′(x) = −sin x,  f″(x) = −cos x,
\[
\max_{0 \le x \le 2\pi} |f''(x)| = 1.
\]
Thus M = 1.
So, by (2.11) above we have
\[
|E_1(\bar x)| = |f(\bar x) - P_1(\bar x)| \le \frac{h^2}{8}.
\]
Since the maximum error has to be at most 5 × 10⁻⁷, we must have
\[
\frac{h^2}{8} \le 5\times 10^{-7}, \quad\text{that is,}\quad h^2 \le 4\times 10^{-6}, \quad\text{or}\quad h \le 2\times 10^{-3}.
\]
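The spacing requirement of Example 2.4 can be verified empirically; the sketch below checks the worst midpoint error of linear interpolation of cos x near x = 0, where |cos″| is largest (the check at a single interval is our own simplification):

```python
import math

h = math.sqrt(8 * 5e-7)     # from h^2 / 8 <= 5e-7
print(round(h, 6))          # 0.002

# Linear interpolation of cos on [0, h]; the error peaks near the midpoint.
x0, x1 = 0.0, h
mid = (x0 + x1) / 2
p1_mid = (math.cos(x0) + math.cos(x1)) / 2   # linear interpolant at midpoint
err = abs(math.cos(mid) - p1_mid)
assert err <= 5e-7          # within the required tolerance
```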
Example 2.5 Suppose a table is to be prepared for the function f(x) = √x on [1, 2]. Determine the spacing h in the table such that interpolation with a polynomial of degree 2 will give an accuracy of ε = 5 × 10⁻⁸.
2.6 Divided Differences and the Newton-Interpolation Formula
A major difficulty with the Lagrange Interpolation is that one is not sure about the degree
of interpolating polynomial needed to achieve a certain accuracy. Thus, if the accuracy is
not good enough with polynomial of a certain degree, one needs to increase the degree of
the polynomial, and computations need to be started all over again.
Furthermore, computing various Lagrangian polynomials is an expensive procedure. It is,
indeed, desirable to have a formula which makes use of Pk−1 (x) in computing
Pk (x).
The following form of interpolation, known as Newton’s interpolation allows us to do so.
The idea is to obtain the interpolating polynomial P_n(x) in the form
\[
P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0)(x - x_1)\cdots(x - x_{n-1}).
\]
Define
\[
f[x_i] = f(x_i) \quad\text{and}\quad f[x_i, x_{i+1}] = \frac{f[x_{i+1}] - f[x_i]}{x_{i+1} - x_i}.
\]
Similarly, the higher-order divided differences are defined recursively by
\[
f[x_i, x_{i+1}, \ldots, x_{i+k-1}, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k-1}, x_{i+k}] - f[x_i, x_{i+1}, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}.
\]
With these notations, we then have
\[
a_0 = f_0 = f(x_0) = f[x_0],
\]
\[
a_1 = \frac{f_1 - f_0}{x_1 - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = \frac{f[x_1] - f[x_0]}{x_1 - x_0} = f[x_0, x_1].
\]
Continuing this, it can be shown [Exercise] that
\[
a_k = f[x_0, x_1, \ldots, x_k]. \tag{2.13}
\]
The number f[x_0, x_1, ..., x_k] is called the k-th divided difference. Substituting the expressions (2.13) for the a_k into the above form, the interpolating polynomial P_n(x) can now be written in terms of the divided differences:
\[
P_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + \cdots + f[x_0, x_1, \ldots, x_n](x - x_0)(x - x_1)\cdots(x - x_{n-1}). \tag{2.14}
\]
Notes:
(i) Each divided difference can be obtained from two previous ones of lower order. For example, f[x_0, x_1, x_2] can be computed from f[x_0, x_1] and f[x_1, x_2], and so on. Indeed, the divided differences can be arranged in the form of a table, filled in column by column:

x_0   f[x_0]
                f[x_0, x_1]
x_1   f[x_1]                  f[x_0, x_1, x_2]
                f[x_1, x_2]                       f[x_0, x_1, x_2, x_3]
x_2   f[x_2]                  f[x_1, x_2, x_3]
                f[x_2, x_3]
x_3   f[x_3]

(ii) Note that in computing P_n(x) we need only the diagonal entries of the above table; that is, we need only f[x_0], f[x_0, x_1], ..., f[x_0, x_1, ..., x_n].
(iii) Since the divided differences are generated recursively, the interpolating polynomials
of successively higher degrees can also be generated recursively. Thus the work done
previously can be used gainfully.
For example,
P1 (x) = f [x0 ] + f [x0 , x1 ](x − x0 )
P2 (x) = f [x0 ] + f [x0 , x1 ](x − x0 ) + f [x0 , x1 , x2 ](x − x0 )(x − x1 )
= P1 (x) + f [x0 , x1 , x2 ](x − x0 )(x − x1 ).
Thus, in computing P2 (x), P1 (x) has been gainfully used; in computing P3 (x), P2 (x) has
been gainfully used, etc.
Example 2.6 Interpolate at x = 2.5 using the following Table, with polynomials of degree 3
and 4.
i   x_i   f_i        1st diff.   2nd diff.     3rd diff.    4th diff.    5th diff.
0   1.0   0
                     0.3522
1   1.5   0.17609                −0.1023
                     0.2499                    0.0265534
2   2.0   0.30103                −0.0491933                 −0.006409
                     0.1761                    0.01053                   −0.001413
3   3.0   0.47712                −0.0281333                 −0.002169
                     0.1339                    0.005107
4   3.5   0.54407                −0.01792
                     0.11598
5   4.0   0.60206
(Note that in computing P4 (2.5), P3 (2.5) computed previously has been gainfully
used).
If we use the notation D_ij = f[x_{i−j}, ..., x_i], then the table can be generated by the following algorithm.
Algorithm 2.1 Algorithm for Generating Divided Differences

Inputs: The distinct nodes x_0, x_1, ..., x_n and the values f_0, f_1, ..., f_n.
Outputs: The divided differences D_00, D_11, ..., D_nn.

Step 1: For i = 0, 1, ..., n do
    D_i0 = f_i
End
Step 2: For i = 1, 2, ..., n do
    For j = 1, 2, ..., i do
        D_ij = (D_{i,j−1} − D_{i−1,j−1}) / (x_i − x_{i−j})
    End
End
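Algorithm 2.1 and the Newton form (2.14) together give a compact implementation. The Python sketch below (names are ours) reproduces the value P_2(3) = 0.3250 of Example 2.2:

```python
def divided_differences(x, f):
    """Divided-difference table: D[i][j] = f[x_{i-j}, ..., x_i] (Algorithm 2.1)."""
    n = len(x) - 1
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):                 # Step 1: zeroth differences
        D[i][0] = f[i]
    for i in range(1, n + 1):              # Step 2
        for j in range(1, i + 1):
            D[i][j] = (D[i][j - 1] - D[i - 1][j - 1]) / (x[i] - x[i - j])
    return D

def newton_eval(t, x, D):
    """Evaluate P_n(t) via (2.14), using the diagonal D[k][k] = f[x_0, ..., x_k]."""
    p, w = D[0][0], 1.0
    for k in range(1, len(x)):
        w *= t - x[k - 1]
        p += D[k][k] * w
    return p

x = [2.0, 2.5, 4.0]
f = [0.5, 0.4, 0.25]                       # f(x) = 1/x (Example 2.2)
D = divided_differences(x, f)
print(round(newton_eval(3.0, x, D), 4))    # 0.325
```

Adding one more data point only appends a row to the table and one term to the sum, which is exactly the incremental advantage over the Lagrange form noted above.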
A Relationship Between nth Divided Difference and the nth Derivative
The following theorem shows how the nth derivative of a function f(x) is related to the nth divided difference. The proof is omitted; it can be found in any advanced numerical analysis textbook (e.g., Atkinson (1978), p. 144).
Theorem 2.3 Suppose f is n times continuously differentiable and x0 , x1 , · · · , xn are (n+1)
distinct numbers in [a, b]. Then there exists a number ξ in (a, b) such that
\[
f[x_0, x_1, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!}.
\]
Suppose now that the nodes are equally spaced: x_i = x_0 + ih, i = 0, 1, ..., n, and write x = x_0 + sh. We can then write
\[
P_n(x) = P_n(x_0 + sh) = \sum_{k=0}^{n} \binom{s}{k}\, k!\,h^k\, f[x_0, \ldots, x_k]. \tag{2.16}
\]
To express the divided differences in terms of forward differences, compute, for example,
\[
f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}
= \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0} \tag{2.19}
\]
\[
= \frac{f(x_2) - 2f(x_1) + f(x_0)}{h \times 2h} = \frac{\Delta^2 f_0}{2h^2}.
\]
In general, we have

Theorem 2.4
\[
f[x_0, x_1, \ldots, x_k] = \frac{1}{k!\,h^k}\,\Delta^k f_0. \tag{2.20}
\]
The proof is by induction on k. For k = 0 the result is trivially true, and we have already proved it for k = 1 and k = 2. Assume now that the result is true for k = m. We then need to show that it is also true for k = m + 1.
Now,
\[
f[x_0, \ldots, x_{m+1}] = \frac{f[x_1, \ldots, x_{m+1}] - f[x_0, \ldots, x_m]}{x_{m+1} - x_0}
= \frac{\dfrac{\Delta^m f_1}{m!\,h^m} - \dfrac{\Delta^m f_0}{m!\,h^m}}{(m+1)h}
= \frac{\Delta^m(f_1 - f_0)}{m!\,(m+1)\,h^{m+1}}
= \frac{\Delta^{m+1} f_0}{(m+1)!\,h^{m+1}}.
\]
Here Δf_i = f_{i+1} − f_i and Δ^k f_i = Δ(Δ^{k−1} f_i); the numbers Δ^k f_i are called the kth-order forward differences of f at x_i.

We now show how the interpolating polynomial P_n(x) given by (2.16) can be written using forward differences. Substituting (2.20) into (2.16) gives Newton's forward-difference formula:
\[
P_n(x) = \sum_{k=0}^{n} \binom{s}{k}\,\Delta^k f_0, \quad\text{where } s = \frac{x - x_0}{h}.
\]
x      f(x)    ∆f      ∆²f      ∆³f      ∆⁴f
x_0    f_0
               ∆f_0
x_1    f_1             ∆²f_0
               ∆f_1             ∆³f_0
x_2    f_2             ∆²f_1             ∆⁴f_0
               ∆f_2             ∆³f_1
x_3    f_3             ∆²f_2
               ∆f_3
x_4    f_4
For example, consider the following difference table (the values f_i are those of f(x) = eˣ at x = 0, 1, 2, 3, 4):

x    f          ∆f         ∆²f        ∆³f        ∆⁴f
0    1
                1.7183
1    2.7183                2.9525
                4.6708                5.0731
2    7.3891                8.0256                8.7176
                12.6964               13.7907
3    20.0855               21.8163
                34.5127
4    54.5982
Let x = 1.5. Then s = (x − x_0)/h = (1.5 − 0)/1 = 1.5, and
\[
P_4(1.5) = \binom{1.5}{0}(1) + \binom{1.5}{1}(1.7183) + \binom{1.5}{2}(2.9525) + \binom{1.5}{3}(5.0731) + \binom{1.5}{4}(8.7176)
\]
\[
= 1 + 1.5(1.7183) + 0.3750(2.9525) + (-0.0625)(5.0731) + (0.0234)(8.7176) = 4.5716.
\]
The correct answer up to 4 decimal digits is 4.4817.
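The forward-difference computation can be automated; the sketch below (names are ours) rebuilds the difference table from the tabulated values of eˣ and evaluates P_4(1.5):

```python
def newton_forward(t, x0, h, f):
    """Newton forward-difference interpolation on equally spaced nodes."""
    n = len(f) - 1
    # Forward differences: diffs[k] = Δ^k f_0.
    d, diffs = list(f), [f[0]]
    for _ in range(n):
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
        diffs.append(d[0])
    s = (t - x0) / h
    p, binom = 0.0, 1.0                # binom = C(s, k), updated iteratively
    for k in range(n + 1):
        p += binom * diffs[k]
        binom *= (s - k) / (k + 1)
    return p

f = [1.0, 2.7183, 7.3891, 20.0855, 54.5982]          # e^x at x = 0, 1, 2, 3, 4
print(round(newton_forward(1.5, 0.0, 1.0, f), 4))    # 4.5719
```

The small discrepancy with the hand value 4.5716 above comes from the hand computation rounding the binomial coefficient C(1.5, 4) = 0.0234375 to 0.0234.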
The backward differences are defined by
\[
\nabla f_n = f_n - f_{n-1}, \quad n = 1, 2, 3, \ldots,
\]
and
\[
\nabla^k f_n = \nabla(\nabla^{k-1} f_n), \quad k = 2, 3, \ldots
\]

Newton's Backward-Difference Interpolation Formula
\[
P_n(x) = \sum_{k=0}^{n} (-1)^k \binom{-s}{k}\,\nabla^k f_n, \quad\text{where } s = \frac{x - x_n}{h}.
\]
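For completeness, the backward-difference formula can be implemented the same way as the forward one; the coefficient (−1)ᵏ C(−s, k) is updated iteratively via the ratio (s + k)/(k + 1) (a sketch, tested on the same eˣ table, where it must reproduce the same interpolating polynomial):

```python
def newton_backward(t, xn, h, f):
    """Newton backward-difference interpolation on equally spaced nodes."""
    n = len(f) - 1
    # Backward differences: diffs[k] = ∇^k f_n.
    d, diffs = list(f), [f[-1]]
    for _ in range(n):
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
        diffs.append(d[-1])
    s = (t - xn) / h
    p, coef = 0.0, 1.0                 # coef = (-1)^k * C(-s, k)
    for k in range(n + 1):
        p += coef * diffs[k]
        coef *= (s + k) / (k + 1)      # ratio of successive coefficients
    return p

f = [1.0, 2.7183, 7.3891, 20.0855, 54.5982]          # e^x at x = 0, 1, 2, 3, 4
print(round(newton_backward(1.5, 4.0, 1.0, f), 4))   # 4.5719, same as forward
```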
2.7 Spline-Interpolation
So far we have been considering interpolation by means of a single polynomial in the entire
range.
Let’s now consider interpolation using different polynomials (but of the same degree) at
different intervals. Let the function f (x) be defined at the nodes a = x0 , x1 , x2 , ..., xn = b.
The problem now is to construct piecewise polynomials Sj (x) on each interval [xj , xj+1 ],
j = 0, 1, 2, ..., n − 1, so that the resulting function S(x) is an interpolant for the function
f (x).
The simplest such polynomials are, of course, linear polynomials (straight lines). The interpolant in this case is called a linear spline. The two biggest disadvantages of a linear spline are that (i) its convergence is rather slow, and (ii) it is not suitable for applications demanding smooth approximations, since a linear spline has corners at the knots.
"Just imagine a cross section of an airplane wing in the form of a linear spline and you quickly decide to go by rail." (Stewart (1998), p. 93.)
Likewise, quadratic splines also have certain disadvantages.
The most common and widely used splines are cubic splines. Assume that the cubic
polynomial Sj (x), has the following form:
Sj (x) = aj + bj (x − xj ) + cj (x − xj )2 + dj (x − xj )3 , j = 0, 1, . . . , n − 1.
Since each S_j(x) contains four unknown coefficients, to construct the n cubic polynomials S_0(x), S_1(x), ..., S_{n−1}(x) we need 4n conditions. To have these 4n conditions, a cubic spline S(x) for the function f(x) can be conveniently defined as follows:

A function S(x), given by S_j(x) on the interval [x_j, x_{j+1}], j = 0, 1, ..., n − 1, is called a cubic spline interpolant if the following conditions hold:

(i) S_j(x_j) = f(x_j), j = 0, 1, ..., n − 1, and S_{n−1}(x_n) = f(x_n) (interpolation).
(ii) S_{j+1}(x_{j+1}) = S_j(x_{j+1}), j = 0, 1, 2, ..., n − 2 (continuity).
(iii) S′_{j+1}(x_{j+1}) = S′_j(x_{j+1}), j = 0, 1, 2, ..., n − 2.
(iv) S″_{j+1}(x_{j+1}) = S″_j(x_{j+1}), j = 0, 1, 2, ..., n − 2.
Then, since for each j there are four unknowns a_j, b_j, c_j, and d_j, and there are n such polynomials to be determined, altogether there are 4n unknowns. However, conditions (i)–(iv) above give only (4n − 2) equations: (i) gives n + 1, and each of (ii)–(iv) gives n − 1. So, to completely determine the cubic spline, we need two more equations. To obtain these two additional equations, we can invoke boundary conditions.
• We can simply take S″(x_0) = S″(x_n) = 0 (free or natural boundary).
• On the other hand, if the first derivative of f can be estimated at the endpoints, we can use the boundary conditions S′(x_0) = f′(x_0) and S′(x_n) = f′(x_n) (clamped boundary).
From the interpolation conditions we immediately have a_j = f_j, j = 0, 1, 2, ..., n.
Set h_j = x_{j+1} − x_j. With some mathematical manipulation it can then be shown (see, e.g., Burden and Faires, pp. 144–145) that once the a_j are known, the coefficients c_j, j = 0, 1, 2, ..., n, can be found by solving the following linear algebraic system of equations:
Ax = r,
where
\[
A = \begin{pmatrix}
2h_0 & h_0 & 0 & \cdots & & 0 \\
h_0 & 2(h_0 + h_1) & h_1 & & & \vdots \\
0 & h_1 & 2(h_1 + h_2) & h_2 & & \\
\vdots & & \ddots & \ddots & \ddots & 0 \\
 & & & h_{n-2} & 2(h_{n-2} + h_{n-1}) & h_{n-1} \\
0 & \cdots & & 0 & h_{n-1} & 2h_{n-1}
\end{pmatrix},
\]
\[
x = \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix}, \qquad
r = \begin{pmatrix}
\dfrac{3}{h_0}(a_1 - a_0) - 3f'(a) \\[4pt]
\dfrac{3}{h_1}(a_2 - a_1) - \dfrac{3}{h_0}(a_1 - a_0) \\[4pt]
\vdots \\[4pt]
\dfrac{3}{h_{n-1}}(a_n - a_{n-1}) - \dfrac{3}{h_{n-2}}(a_{n-1} - a_{n-2}) \\[4pt]
3f'(b) - \dfrac{3}{h_{n-1}}(a_n - a_{n-1})
\end{pmatrix}.
\]
Once c_0, c_1, ..., c_n are known by solving the above system of equations, the quantities b_j and d_j can be computed as follows:
\[
b_j = \frac{a_{j+1} - a_j}{h_j} - \frac{h_j}{3}(c_{j+1} + 2c_j), \qquad j = n-1, n-2, \ldots, 0,
\]
\[
d_j = \frac{c_{j+1} - c_j}{3h_j}, \qquad j = n-1, n-2, \ldots, 0.
\]
A row diagonally dominant matrix can be similarly defined. An important property of a strictly column diagonally dominant matrix is that it is nonsingular, and Gaussian elimination applied to it does not require pivoting. Note that the matrix A above is strictly diagonally dominant, so the system Ax = r has a unique solution. For details, see "Numerical Linear Algebra and Applications" by B. N. Datta, Brooks/Cole Publishing Co., California, 1995 (second edition to be published by SIAM in 2009).
Algorithm 2.2 Computing the Clamped Cubic Spline

Inputs: The nodes x_0, x_1, ..., x_n; the values f_0, f_1, ..., f_n; and the end derivatives f′(x_0) and f′(x_n).
Outputs: The coefficients a_0, ..., a_n; b_0, ..., b_{n−1}; c_0, c_1, ..., c_n; and d_0, ..., d_{n−1} of the n polynomials S_0(x), S_1(x), ..., S_{n−1}(x) of which the cubic spline interpolant S(x) is composed.

Step 1: For j = 0, 1, ..., n − 1 do
    h_j = x_{j+1} − x_j
End
Step 2: For i = 0, 1, ..., n do
    a_i = f(x_i)
End
Step 3: Compute the coefficients c_0, c_1, ..., c_n by solving the system Ax = r, where A, x, and r are as defined above.
Step 4: Compute the coefficients b_0, ..., b_{n−1} and d_0, ..., d_{n−1} from the formulas given above.
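Algorithm 2.2 can be implemented directly with the matrix A and vector r given above. The following Python sketch (function names and the test data are ours) computes the clamped spline coefficients and checks them on f(x) = x³, whose clamped cubic spline, by uniqueness, is f itself:

```python
import numpy as np

def clamped_cubic_spline(x, f, fpa, fpb):
    """Coefficients a, b, c, d of the clamped cubic spline (Algorithm 2.2)."""
    x = np.asarray(x, dtype=float)
    a = np.asarray(f, dtype=float)            # Step 2: a_j = f_j
    n = len(x) - 1
    h = np.diff(x)                            # Step 1: h_j = x_{j+1} - x_j
    # Step 3: assemble and solve the tridiagonal system A c = r.
    A = np.zeros((n + 1, n + 1))
    r = np.zeros(n + 1)
    A[0, 0], A[0, 1] = 2 * h[0], h[0]
    r[0] = 3 * (a[1] - a[0]) / h[0] - 3 * fpa
    for i in range(1, n):
        A[i, i - 1 : i + 2] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        r[i] = 3 * (a[i + 1] - a[i]) / h[i] - 3 * (a[i] - a[i - 1]) / h[i - 1]
    A[n, n - 1], A[n, n] = h[n - 1], 2 * h[n - 1]
    r[n] = 3 * fpb - 3 * (a[n] - a[n - 1]) / h[n - 1]
    c = np.linalg.solve(A, r)
    # Step 4: recover b_j and d_j.
    b = (a[1:] - a[:-1]) / h - h * (c[1:] + 2 * c[:-1]) / 3
    d = (c[1:] - c[:-1]) / (3 * h)
    return a, b, c, d

def spline_eval(t, x, a, b, c, d):
    """Evaluate S(t) on the interval [x_j, x_{j+1}] containing t."""
    j = min(max(np.searchsorted(x, t) - 1, 0), len(x) - 2)
    dt = t - x[j]
    return a[j] + b[j] * dt + c[j] * dt**2 + d[j] * dt**3

x = np.array([0.0, 1.0, 2.0, 3.0])
a, b, c, d = clamped_cubic_spline(x, x**3, fpa=0.0, fpb=27.0)
print(round(spline_eval(1.5, x, a, b, c, d), 4))     # 3.375 = 1.5**3
```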
Exercises
4. Suppose that a table is to be prepared for the function f(x) = ln x on [1, 3.5] with equally spaced nodes, such that interpolation with a third-degree polynomial will give an accuracy of ε = 5 × 10⁻⁸. Determine how small h has to be to guarantee this accuracy.
5. Prove by induction that
(a) \( f[x_0, x_1, \cdots, x_k] = \dfrac{\Delta^k f_0}{k!\,h^k} \).
6. (a) Find an approximate value of log10 (5) using Newton’s forward difference formula
with x0 = 1, x1 = 2.5, x2 = 4, and x3 = 5.5.
7. Using the data of problem 2, estimate f (1.025) with linear and quadratic splines and
compare your results with those obtained there.
8. Show that f [x0 , x1 , · · · , xn ] does not change if the nodes are rearranged.
(b) \(\displaystyle \max_{x_0 \le x \le x_2} |\psi_2(x)| = \frac{2\sqrt{3}}{9}\,h^3 \)
[Hint: To bound |ψ_2(x)|, shift this polynomial along the x-axis: consider ψ̂_2(x) = (x + h)x(x − h) and obtain the bound for ψ̂_2(x).]
(d) Using the results of (a), (b), and (c), obtain the maximum absolute error in each case for the functions:
f(x) = eˣ, 0 ≤ x ≤ 2
f(x) = sin x, 0 ≤ x ≤ π/2
(e) The interpretation of the result of part (c) is that if the interpolation nodes are chosen so that the interpolation point x lies close to the midpoint of [x_0, x_3], then the interpolation error is smaller than if x lies anywhere else between x_0 and x_3. Verify this statement with the data of Problem 2.