
An Introduction to the

Finite Element Method (FEM)


for Differential Equations in 1D

Mohammad Asadzadeh

June 24, 2015


Contents

1 Introduction
1.1 Ordinary differential equations (ODE)
1.2 Partial differential equations (PDE)
1.2.1 Exercises

2 Polynomial Approximation in 1d
2.1 Overture
2.1.1 Basis functions in a nonuniform partition
2.2 Variational formulation for (IVP)
2.3 Galerkin finite element method for (2.1.1)
2.4 A Galerkin method for (BVP)
2.4.1 The nonuniform version
2.5 Exercises

3 Interpolation, Numerical Integration in 1d
3.1 Preliminaries
3.2 Lagrange interpolation
3.3 Numerical integration, Quadrature rules
3.3.1 Composite rules for uniform partitions
3.3.2 Gauss quadrature rule

4 Two-point boundary value problems
4.1 A Dirichlet problem
4.2 The finite element method (FEM)
4.3 Error estimates in the energy norm
4.4 FEM for convection–diffusion–absorption BVPs
4.5 Exercises

5 Scalar Initial Value Problems
5.1 Solution formula and stability
5.2 Finite difference methods
5.3 Galerkin finite element methods for IVP
5.3.1 The continuous Galerkin method
5.3.2 The discontinuous Galerkin method
5.4 Exercises

6 Initial Boundary Value Problems in 1d
6.1 Heat equation in 1d
6.1.1 Stability estimates
6.1.2 FEM for the heat equation
6.1.3 Exercises
6.2 The wave equation in 1d
6.2.1 Wave equation as a system of PDEs
6.2.2 The finite element discretization procedure
6.2.3 Exercises

A Answers to Exercises

B Algorithms and MATLAB Codes

Table of Symbols and Indices



Preface and acknowledgments. This text is an elementary approach to the
finite element method used in the numerical solution of differential equations in
one space dimension. The purpose is to introduce students to piecewise poly-
nomial approximation of solutions using a minimum amount of theory. The
material presented in these notes should be accessible to students with knowl-
edge of single- and several-variable calculus and linear algebra. The theory
is combined with approximation techniques that are easily implemented by
the Matlab codes presented at the end.
Over several years, many colleagues have been involved in the design,
presentation and correction of these notes. I wish to thank Niklas Eriksson
and Bengt Svensson, who have read the entire material and made many valu-
able suggestions. Niklas has contributed to a better presentation of the text
as well as to simplifications and corrections of many key estimates that have
substantially improved the quality of these lecture notes. Bengt has made all
xfig figures. The final version was further polished by John Bondestam Malm-
berg and Tobias Gebäck, who, in particular, provided much useful input on the
Matlab codes.
Chapter 1

Introduction

In these lecture notes we present an introduction to approximate solutions for
differential equations. A differential equation is a relation between a function
and its derivatives. In case the derivatives that appear in a differential equa-
tion are only with respect to one variable, the differential equation is called
ordinary. Otherwise it is called a partial differential equation. For example,

  du/dt − u(t) = 0,     (1.0.1)

is an ordinary differential equation, whereas

  ∂u/∂t − ∂²u/∂x² = 0,     (1.0.2)

is a partial differential equation (PDE). In (1.0.2), ∂u/∂t and ∂²u/∂x² denote the
partial derivatives. Here t denotes the time variable and x is the space variable. We
shall only study one space dimensional equations that are either stationary
(time-independent) or time dependent. Our focus will be on the following
equations:

1.1 Ordinary differential equations (ODE)


• An example of population dynamics, as in (1.0.1):

  du/dt − λu(t) = f(t),     (1.1.1)

  where λ is a constant and f is a source function.

• A stationary (time-independent) heat equation:

  −d²u/dx² = f(x).     (1.1.2)

• A stationary convection-diffusion equation:

  −d²u/dx² + du/dx = f(x),     (1.1.3)

  where f(x) is a source function.

1.2 Partial differential equations (PDE)


• The heat equation:

  ∂u/∂t − ∂²u/∂x² = f(x).     (1.2.1)

• The wave equation:

  ∂²u/∂t² − ∂²u/∂x² = f(x).     (1.2.2)

• The time-dependent convection-diffusion or reaction-diffusion equation:

  ∂u/∂t − ∂²u/∂x² + ∂u/∂x = f(x).     (1.2.3)

Some notation. For convenience we shall use the following notation:

  u̇ = ∂u/∂t,   ü = ∂²u/∂t²,   u′ = ∂u/∂x,   u′′ = ∂²u/∂x².

Example 1.1 (Initial Conditions). Consider the simple equation u̇(t) = t.


Evidently, u(t) = t2 /2, is a solution. But, for any constant C, t2 /2 + C is
also a solution. In this way we have infinitely many solutions (one for each
constant C). To determine a unique solution we need to supply the equation
with one extra condition. Since the time variable t is always assumed to be
t ≥ 0, if we know the value of u(t), e.g., at the beginning, i.e., the initial
value e.g., u(0) = 3, then u(t) = t2 /2 + 3 is the unique solution to the initial
value problem: u̇(t) = t, u(0) = 3. A differential equation associated with
initial conditions is an initial value problem.

Example 1.2 (Boundary Conditions). Likewise u(x) = −x2 /2 is a solu-


tion to −u′′ (x) = 1. But also all u(x) = −x2 /2 + Ax + B are solutions, for
all arbitrary constants A and B. Therefore, to determine a unique solution
u(x) we need to determine some fixed values for A and B, hence we need
to supply two conditions. Here if e.g., x belongs to a bounded interval, say,
[0, 1], then given the boundary values u(0) = 1 and u(1) = 0, we get from
u(x) = −x²/2 + Ax + B that B = 1 and A = −1/2. Thus the solution to
the boundary value problem: −u′′(x) = 1, u(0) = 1, u(1) = 0
is: u(x) = −x²/2 − x/2 + 1. The general rule is that one should supply as
many conditions as the highest order of the derivative in each variable. So,
for example, for the heat equation u̇ − u′′ = 0, to get a unique solution we
need to supply one initial condition (there is one time derivative in the equa-
tion) and two boundary conditions (there are two derivatives in x), whereas
for the wave equation ü − u′′ = 0 we have to give two conditions in each
variable x and t. A differential equation with supplied boundary conditions
is a boundary value problem.
Objectives: For f being a simple elementary function (a polynomial, a
trigonometric, or exponential type function or a combination of them), the
equations (1.1.1)-(1.2.3), associated with suitable initial and boundary condi-
tions, often have closed form analytic solutions. But real problems: general
two and three dimensional problems, modeled by equations with variable
coefficients and in complex geometry, are seldom analytically solvable.
In this note our objective is to introduce numerical methods that ap-
proximate solutions for differential equations by polynomials. To check the
quality (reliability and efficiency) of these numerical methods, we choose to
apply them to the equations (1.1.1)-(1.2.3), where we already know their an-
alytic solutions. Below we shall give examples of analytic solutions to ODEs:
(1.1.1)-(1.1.3). For examples on analytic solutions for the PDEs: (1.2.1)-
(1.2.3), we refer to the separation of variables technique introduced in the
second part of our course.
Example 1.3. Determine the solution to the initial value problem

u̇(t) − λu(t) = 0, u(0) = u0 , (1.2.4)

assuming that u(t) > 0, for all t, λ = 1 and u0 = 2.

Solution. Since u(t) ≠ 0 for all t, we may divide the equation (1.2.4) by
u(t) and get u̇(t)/u(t) = λ. Relabeling t by s and integrating over (0, t) we get

  ∫_0^t u̇(s)/u(s) ds = λ ∫_0^t ds  =⇒  [ln u(s)]_0^t = λ[s]_0^t.     (1.2.5)

Hence we have

  ln u(t) − ln u(0) = λt,  or  ln(u(t)/u(0)) = λt.     (1.2.6)

Thus

  u(t)/u(0) = e^{λt},  i.e.  u(t) = u0 e^{λt}.     (1.2.7)

Consequently, with λ = 1 and u0 = 2 we have u(t) = 2e^t.
To derive solutions to our examples in a systematic way, we recall the pro-
cedure for determining a particular solution u_p to a second order differential
equation with constant coefficients of the form:

  u′′(x) + au′(x) + bu(x) = f(x).     (1.2.8)

1. If f(x) is a polynomial of degree n. Set

   i) u_p(x) = a0 + a1 x + · · · + an x^n, if b ≠ 0,
   ii) u_p(x) = x(a0 + a1 x + · · · + an x^n), if b = 0, a ≠ 0.

2. If f(x) = (polynomial) × e^{σx}. Set

   i) u_p(x) = z(x)e^{σx}.
      This gives a new differential equation for z, solved by 1).
   ii) u_p(x) = Ae^{σx}, if the polynomial is a constant.
      This works if σ² + aσ + b ≠ 0, i.e. σ is not a root of the
      characteristic equation.

3. If f(x) = p cos(ωx) + q sin(ωx). Set

   i) u_p(x) = C cos(ωx) + D sin(ωx), if −ω² + aiω + b ≠ 0,
      i.e., if iω is not a root of the characteristic equation.
   ii) u_p(x) = x(C cos(ωx) + D sin(ωx)), if −ω² + aiω + b = 0.

Example 1.4. Find all solutions to the differential equation


u′′ (x) − u(x) = cos(x). (1.2.9)
Solution. Due to the highest number of derivatives (here 2, which is also
called the order of this differential equation), we shall have solutions depend-
ing on two arbitrary constants. As we mentioned earlier, a unique solution
would require supplying 2 conditions, which we skip in this problem.
We note that the characteristic equation of this differential equation,
r² − 1 = 0, has the roots r = ±1. We split the solution procedure into 3 steps:

Step 1: According to the table above we choose a particular solution u_p(x)
of the form

  u_p(x) = A cos x + B sin x.     (1.2.10)

Differentiating twice and inserting into the equation (1.2.9) yields

  u_p′(x) = −A sin x + B cos x,
  u_p′′(x) = −A cos x − B sin x,
  u_p′′(x) − u_p(x) = −2A cos x − 2B sin x = cos x.

Identifying the coefficients yields A = −1/2, B = 0. Thus

  u_p(x) = −(1/2) cos x.     (1.2.11)

Step 2: The homogeneous solution is given by the standard ansatz

  u_h(x) = C1 e^{r1 x} + C2 e^{r2 x},     (1.2.12)

where C1 and C2 are arbitrary constants and r1 = 1 and r2 = −1 are the
roots of the characteristic equation. Hence

  u_h(x) = C1 e^x + C2 e^{−x}.     (1.2.13)

Step 3: Finally, the general solution is given by adding the particular and
homogeneous solutions:

  u(x) = −(1/2) cos x + C1 e^x + C2 e^{−x}.     (1.2.14)
In the above example we obtained general solutions depending on two con-
stants. Below we shall demonstrate an example where, supplying two bound-
ary conditions, we obtain a unique solution

Example 1.5. Determine the unique solution of the following boundary value
problem

u′′ + 2u′ + u = 1 + x + 2 sin x, u(0) = 1, u′ (0) = 0. (1.2.15)

Homogeneous solution:
The characteristic equation for the differential equation (1.2.15) is given by

r² + 2r + 1 = 0, which has the double root r1,2 = −1.     (1.2.16)

This gives the homogeneous solutions as

uh = (C1 + C2 x)e−x . (1.2.17)

Particular solution:
The particular solution can be written as the sum of two particular solutions of
the following equations:

  u1′′ + 2u1′ + u1 = 1 + x,     (1.2.18)

and
  u2′′ + 2u2′ + u2 = 2 sin x.     (1.2.19)
Since the differential equation is linear, a concept justified by the relation

  (au1 + bu2)′ = au1′ + bu2′,  ∀a, b ∈ R,

the sum u = u1 + u2 will be a particular solution of (1.2.15). Using the table of


particular solutions, we may insert u1 (x) = Ax + B, as particular solution,
in (1.2.18) and get
2A + Ax + B = 1 + x. (1.2.20)
Identifying the coefficients in (1.2.20) gives A = 1 and B = −1. Hence

u1 (x) = x − 1.

Once again using the table of particular solutions, we may insert u2 (x) =
A sin x + B cos x, as particular solution, in (1.2.19) and get

2A cos x − 2B sin x = 2 sin x. (1.2.21)

Identifying the coefficients in (1.2.21) gives A = 0 and B = −1. Hence

u2 (x) = − cos x.

Thus the general solution is given by

u = uh + u1 + u2 = (C1 + C2 x)e−x + (x − 1) − cos x. (1.2.22)

Now we use the boundary conditions and determine the coefficients C1 and
C2 . Observe that

u′ = C2 e−x − (C1 + C2 x)e−x + 1 + sin x,

and we have that

u(0) = 1 =⇒ C1 − 1 − 1 = 1 =⇒ C1 = 3.

Further

u′ (0) = 0 =⇒ C2 − C1 + 1 = 0 =⇒ C2 = C1 − 1 =⇒ C2 = 2.

Thus the final solution is

u(x) = x − 1 − cos x + e−x (3 + 2x).
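A quick way to check this result by computer is to insert the solution into the equation and the conditions; the lines below are only an illustrative sketch and assume that Matlab's Symbolic Math Toolbox is available.

% Sanity check of the solution of (1.2.15); assumes the Symbolic Math Toolbox.
syms x
u = x - 1 - cos(x) + exp(-x)*(3 + 2*x);
simplify(diff(u,x,2) + 2*diff(u,x) + u)   % should simplify to 1 + x + 2*sin(x)
[subs(u, x, 0), subs(diff(u,x), x, 0)]    % should return [1, 0], the given conditions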

Summary: These examples of ODEs can serve as a sort of warm up. As we
mentioned, the corresponding analytical solutions for our PDEs are the subject
of Fourier analysis, which we cover in the second part of this course. The
remaining chapters will be devoted to approximation methods for the solution
of our ODEs and PDEs. We shall approximate the solutions with, piecewise,
polynomials. Such approximations are known as Galerkin finite element
methods (FEM). In its final step, a finite element procedure yields a linear
system of equations (LSE) where the unknowns are the approximate values of
the solution at certain points. Then, an approximate solution is constructed
by adapting, piecewise, polynomials of certain degree to these point values.
The entries of the coefficient matrix and the right hand side of FEM's
final linear system of equations consist of integrals which are not always
easily computable. Therefore, numerical integration is introduced to ap-
proximate such integrals. Interpolation techniques are introduced both for
accurate polynomial approximations and to derive the error estimates necessary
in determining qualitative properties of the approximate solutions. That is,
to show how the approximate solution approaches the exact solution as the
number of unknowns increases.

1.2.1 Exercises
Problem 1.1. Find all solutions to the following homogeneous (their right
hand side is zero “0”) differential equations
a) u′′ − 3u′ + 2u = 0 b) u′′ + 4u = 0 c) u′′ − 6u′ + 9u = 0

Problem 1.2. Find all solutions to the following non-homogeneous (their


right hand sides are non-zero "≠ 0") differential equations
a) u′′ +2u′ +2u = (1+x)2 b) u′′ +u′ +2u = sin x c) u′′ +3u′ +2u = ex

Problem 1.3. Find a particular solution to each of the following equations


a) u′′ − 2u′ = x2 b) u′′ + u = sin x c) u′′ + 3u′ + 2u = ex + sin x.

Problem 1.4. Solve the boundary value problem for all x ∈ (0, 1),

−u′′ + u = f (x), u(0) = u(1) = 0,

a) for f (x) = 0, b) for f (x) = x, c) for f (x) = sin(πx),

Problem 1.5. Solve the following boundary value problems


a) −u′′ = x − 1, 0 < x < π, u′ (0) = u(π) = 0,
b) −u′′ = x, 0 < x < π, u′ (0) = u′ (1) = 0.
Chapter 2

Polynomial Approximation in
1d

Our objective is to present the finite element method (FEM) as an approximation


technique for solution of differential equations using piecewise polynomials. This
chapter is devoted to some necessary mathematical environments and tools, as
well as a motivation for the unifying idea of using finite elements: A numerical
strategy arising from the need of changing a continuous problem into a discrete
one. The continuous problem will have infinitely many unknowns (if one asks for
u(x) at every x), and it cannot be solved exactly on a computer. Therefore it
has to be approximated by a discrete problem with a finite number of unknowns.
The more unknowns we keep, the better the accuracy of the approximation will
be, but at a greater computational expense.

2.1 Overture
Below we shall introduce a few standard examples of classical differential
equations and some regularity requirements.
Ordinary differential equations (ODEs)
An initial value problem (IVP), for instance a model in population dynamics
where u(t) is the size of the population at time t, can be written as
u̇(t) = λu(t), 0 < t < T, u(0) = u0 , (2.1.1)
where u̇(t) = du/dt and λ is a positive constant. For u0 > 0 this problem has
the increasing analytic solution u(t) = u0 e^{λt}, which blows up as t → ∞.


• Numerical solutions of (IVP)

Example 2.1. Explicit (forward) Euler method (a finite difference method).


We discretize the IVP (2.1.1) with the forward Euler method based on a
partition of the interval [0, T],

  t0 = 0 < t1 < t2 < t3 < . . . < tN = T,

into N subintervals, and an approximation of the derivative by a difference
quotient on each subinterval [tk, tk+1]: u̇(t) ≈ (u(tk+1) − u(tk))/(tk+1 − tk).
Then an approximation of (2.1.1) is given by

  (u(tk+1) − u(tk))/(tk+1 − tk) = λ · u(tk),  k = 0, . . . , N − 1,  and  u(0) = u0,     (2.1.2)

and thus, letting ∆tk = tk+1 − tk ,

u(tk+1 ) = (1 + λ∆tk )u(tk ). (2.1.3)

Starting with k = 0 and the data u(0) = u0 , the solution u(tk ) would, itera-
tively, be computed at the subsequent points: t1 , t2 , . . . , tN = T .
For a uniform partition, where all subintervals have the same length ∆t,
(2.1.3) would be of the form

u(tk+1 ) = (1 + λ∆t)u(tk ), k = 0, 1, . . . , N − 1. (2.1.4)

Iterating we get

u(tk+1 ) = (1 + λ∆t)u(tk ) = (1 + λ∆t)2 u(tk−1 ) = . . . = (1 + λ∆t)k+1 u0 .
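The iteration (2.1.3)-(2.1.4) is straightforward to implement. The following Matlab lines are a minimal sketch; the values of lambda, T, N and u0 are illustrative assumptions, not taken from the text.

% Forward Euler for (2.1.1), u' = lambda*u, u(0) = u0, on a uniform partition.
lambda = 1; T = 1; u0 = 1; N = 10;      % illustrative data
dt = T/N;                                % uniform step length
t = 0:dt:T;                              % nodes t_0, ..., t_N
U = zeros(1, N+1);
U(1) = u0;                               % initial value
for k = 1:N
    U(k+1) = (1 + lambda*dt)*U(k);       % the update (2.1.3)/(2.1.4)
end
plot(t, U, 'o-', t, u0*exp(lambda*t), '--')   % compare with the exact solution
legend('forward Euler', 'exact')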

Other finite difference methods for (2.1.1) are introduced in Chapter 5. There
are corresponding finite difference methods for PDE’s. Our goal, however, is
to study the Galerkin finite element method. To this approach we need to
introduce some basic tools:

Finite dimensional linear space of polynomials on an interval


Below we give an example of a finite dimensional linear space of polynomials
defined on an interval. In our study we shall consider, mainly, polynomials of
degree 1. Higher degree polynomials are studied in some detail in Chapter
3: polynomial interpolation in 1D.
We define P^(q)(a, b) := {space of polynomials of degree ≤ q on a ≤ x ≤ b}.
A possible basis for P^(q)(a, b) would be {x^j}_{j=0}^q = {1, x, x², x³, . . . , x^q}; hence
the dimension of P^(q) is q + 1. These basis functions are, in general, non-orthogonal
and may be orthogonalized by the Gram-Schmidt procedure.
Example 2.2. For linear approximation we shall only need the basis func-
tions 1 and x. An alternative basis for the linear functions on the interval [a, b] is
given by two functions λa(x) and λb(x) with the additional property

  λa(a) = 1, λa(b) = 0,   and   λb(a) = 0, λb(b) = 1.

Being linear, λa(x) = Ax + B. To determine the coefficients A and B we have
that

  λa(a) = 1 =⇒ Aa + B = 1,
  λa(b) = 0 =⇒ Ab + B = 0.

Subtracting the two relations above we get A(b − a) = −1, i.e. A = −1/(b − a).
Then, from the second relation, B = −Ab, we get B = b/(b − a). Thus,

  λa(x) = (b − x)/(b − a).   Likewise   λb(x) = (x − a)/(b − a).

Figure 2.1: Linear basis functions λa(x) and λb(x).

Note that
λa (x) + λb (x) = 1, and aλa (x) + bλb (x) = x.

Thus, we get the original basis functions: 1 and x for the linear polynomial
functions, as a linear combination of the basis functions λa (x) and λb (x).
Hence, any linear function f (x) on an interval [a, b] can be written as:

f (x) = f (a)λa (x) + f (b)λb (x). (2.1.5)

This is easily seen by the fact that the right hand side in (2.1.5) yields:

f (a)λa (a) + f (b)λb (a) = f (a) × 1 + f (b) × 0 = f (a),

f (a)λa (b) + f (b)λb (b) = f (a) × 0 + f (b) × 1 = f (b).


That is, the two sides in (2.1.5) agree at two distinct points; therefore, being
linear, they represent the same function.

Example 2.3. Let [a, b] = [0, 1] then λ0 (x) = 1 − x and λ1 (x) = x. Consider
the linear function f (x) = 3x + 5/2. Then f (0) = 5/2, f (1) = 11/2 and
5 11
f (0)λ0 (x) + f (1)λ1 (x) = (1 − x) + x = 3x + 5/2 = f (x).
2 2
Definition 2.1. Let f(x) be a real valued function defined on R or on an
interval that contains [a, b]. A linear interpolant of f(x) at a and b is a
linear function π1 f(x) such that π1 f(a) = f(a) and π1 f(b) = f(b).

As in the verification of (2.1.5), we also have π1 f(x) = f(a)λa(x) + f(b)λb(x):

  π1 f(x) = f(a) (b − x)/(b − a) + f(b) (x − a)/(b − a).
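The interpolant can be evaluated directly from this formula. The Matlab lines below are a small illustrative sketch; the choice of f and of the interval [a, b] is an assumption made only for the example.

% Linear interpolant pi_1 f on a single interval [a,b] (illustrative f, a, b).
f = @(x) exp(x);
a = 0; b = 1;
lam_a = @(x) (b - x)/(b - a);              % basis function lambda_a
lam_b = @(x) (x - a)/(b - a);              % basis function lambda_b
pi1f  = @(x) f(a)*lam_a(x) + f(b)*lam_b(x);
x = linspace(a, b, 100);
plot(x, f(x), x, pi1f(x), '--')            % pi_1 f matches f at x = a and x = b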
Below, for simplicity, first we shall assume a uniform partition of the interval
[0, 1] into M + 1 subintervals of the same size h, i.e., we let xj = jh, and
consider subintervals Ij := [xj−1 , xj ] = [(j − 1)h, jh] for j = 1, . . . , M + 1.
Then setting a = xj−1 = (j − 1)h and b = xj = jh we may define

  λj−1(x) = −(x − jh)/h = (jh − x)/h   and   λj(x) = (x − (j − 1)h)/h.
We denote the space of all continuous piecewise linear polynomial func-
tions on Th , by Vh . Let

Vh0 := {v : v ∈ Vh , v(0) = v(1) = 0}.


Figure 2.2: The linear interpolant π1 f(x) on a single interval.

Figure 2.3: An example of a function in Vh0 with uniform partition.

Applying (2.1.5), on each subinterval Ij , j = 1, . . . , M + 1, (using λj (x), j =


1, . . . , M) we can easily construct the functions belonging to Vh0. To construct a
function v(x) ∈ Vh we shall also need additional basis functions λ0(x) and/or
λM+1(x) if v(0) ≠ 0 and/or v(1) ≠ 0, corresponding to non-vanishing data
in the boundary value problems.
The standard basis for piecewise linears in a uniform partition are given by
the so called hat-functions ϕj (x) with the property that ϕj (x) is a piecewise
linear function such that ϕj(xi) = δij, where

  δij = 1 if i = j,  and  δij = 0 if i ≠ j,

i.e.

  ϕj(x) = (x − (j − 1)h)/h,   (j − 1)h ≤ x ≤ jh,
  ϕj(x) = ((j + 1)h − x)/h,   jh ≤ x ≤ (j + 1)h,
  ϕj(x) = 0,                  x ∉ [(j − 1)h, (j + 1)h],

with obvious modifications for j = 0 and j = M + 1. The hat function


ϕj (x) is just a combination of two basis functions λj (x) of the two adjacent
intervals Ij and Ij+1 (each of these two adjacent intervals has its own λj (x),
check this), extended by zero for x ∉ (Ij ∪ Ij+1).
Figure 2.4: A general piecewise linear basis function ϕj(x).
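A hat function on a uniform mesh is easy to evaluate directly from its definition. The following Matlab lines are a minimal sketch; the values of M and j are illustrative assumptions.

% Hat function phi_j on a uniform partition of [0,1] (illustrative M and j).
M = 4; h = 1/(M+1); j = 2;
phi = @(x,j) max(0, 1 - abs(x - j*h)/h);   % equals 1 at x_j = j*h, 0 at the other nodes
x = linspace(0, 1, 500);
plot(x, phi(x,j))
hold on
plot((0:M+1)*h, phi((0:M+1)*h, j), 'o')    % nodal values: phi_j(x_i) = delta_ij
hold off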

2.1.1 Basis functions in a nonuniform partition


Below we generalize the above procedure to the case of nonuniform partition.
Let now I = [0, 1] and define a partition of I into a collection of nonuniform
subintervals. For example Th : 0 = x0 < x1 < . . . < xM < xM +1 = 1, with
hj = xj − xj−1 , and j = 1, . . . , M + 1, is a partition of [0, 1] into M + 1
subintervals. Here h := h(x), known as the mesh function, is a piecewise
constant function defined as h(x) = hj for x ∈ Ij = [xj−1 , xj ]. We shall see
that π1 f “gets closer to” f, as max h(x) → 0. Now we may apply the concept
of the linear interpolant to a set of nonuniform subintervals Ij := [xj−1, xj]
of a given interval I, simply by setting a = xj−1 and b = xj. Therefore, we
define

  λj−1(x) = (xj − x)/(xj − xj−1)   and   λj(x) = (x − xj−1)/(xj − xj−1).
The corresponding basis functions for the nonuniform case are given as

  ϕj(x) = (x − xj−1)/hj,     xj−1 ≤ x ≤ xj,
  ϕj(x) = (xj+1 − x)/hj+1,   xj ≤ x ≤ xj+1,
  ϕj(x) = 0,                 x ∉ [xj−1, xj+1].
Figure 2.5: An example of a function in Vh0.

Again with obvious modifications for j = 0 and j = M + 1.


Figure 2.6: A general piecewise linear basis function ϕj(x).

Vector spaces
To establish a framework we introduce some basic mathematical concepts:
Definition 2.2. A set V of functions or vectors is called a linear space, or
a vector space, if for all u, v ∈ V and all α ∈ R (real number), we have that
(i) u + v ∈ V, (closed under addition)
(ii) αu ∈ V, (closed under multiplication by scalars), (2.1.6)
(iii) ∃ (−u) ∈ V : u + (−u) = 0, (closed under inverse),
where (i) and (ii) obey the usual rules of addition and multiplication by
scalars. Observe that α = 0 in (ii) (or (iii) and (i), with v = (−u)), implies
that 0 (zero vector) is an element of every vector space.

Definition 2.3. A scalar product (inner product) is a real valued operator


on V × V, viz ⟨u, v⟩ : V × V → R, such that for all u, v, w ∈ V and all α ∈ R,

  (i) ⟨u, v⟩ = ⟨v, u⟩,   (symmetry)
  (ii) ⟨u + αv, w⟩ = ⟨u, w⟩ + α⟨v, w⟩,   (bi-linearity)
  (iii) ⟨v, v⟩ ≥ 0, ∀v ∈ V,   (positivity)                        (2.1.7)
  (iv) ⟨v, v⟩ = 0 ⇐⇒ v = 0.   (positive definiteness)
Definition 2.4. A vector space V is called an inner product space if V is
associated with a scalar product ⟨·, ·⟩, defined on V × V.
Example 2.4. A usual example of scalar product of two functions u and v
defined on an interval [a, b], known as the L2 scalar product, is defined by
  ⟨u, v⟩ := ∫_a^b u(x)v(x) dx.     (2.1.8)

Here are examples of some vector spaces that are also inner product
spaces associated with the scalar product defined by (2.1.8):
• C(a, b): the space of continuous functions on an interval (a, b),
• P^(q)(a, b): the space of all polynomials of degree ≤ q on [a, b], and
• Vh(a, b) and Vh0(a, b) defined above.
The reader may easily check that all the properties (i)−(iv) in the
definition of the scalar product are fulfilled for these spaces.
Definition 2.5. Two (real-valued) functions u(x) and v(x) are called orthog-
onal if ⟨u, v⟩ = 0. The orthogonality is also denoted by u ⊥ v.
Example 2.5. For the functions u(x) = 1 and v(x) = x, we have that

  ∫_{−1}^{1} u(x)v(x) dx = ∫_{−1}^{1} 1 · x dx = 0,   ∫_0^1 u(x)v(x) dx = ∫_0^1 1 · x dx = 1/2 ≠ 0.

Thus, 1 and x are orthogonal on the interval [−1, 1], but not on [0, 1].
Definition 2.6 (Norm). If u ∈ V then the norm of u, or the length of u,
associated with the scalar product (2.1.8) above is defined by:
  ‖u‖ = √⟨u, u⟩ = ⟨u, u⟩^{1/2} = ( ∫_a^b |u(x)|² dx )^{1/2}.     (2.1.9)

This norm is known as the L2 -norm of u(x). There are other norms that we
will introduce later on.

Now we recall one of the most useful inequalities that is frequently used in
estimating the integrals of product of two functions.

Lemma 2.1 (The Cauchy-Schwarz inequality). For all inner products with
their corresponding norms we have that

  |⟨u, v⟩| ≤ ‖u‖ ‖v‖.

In particular, for the L2-norm and scalar product,

  | ∫ uv dx | ≤ ( ∫ |u|² dx )^{1/2} ( ∫ |v|² dx )^{1/2}.

Proof. A simple proof is given by using

  ⟨u − av, u − av⟩ ≥ 0,  with  a = ⟨u, v⟩/‖v‖².

By the definition of the L2-norm and the symmetry property of the
scalar product we get

  0 ≤ ⟨u − av, u − av⟩ = ‖u‖² − 2a⟨u, v⟩ + a²‖v‖².

Setting a = ⟨u, v⟩/‖v‖² and rearranging the terms we get

  0 ≤ ‖u‖² − ⟨u, v⟩²/‖v‖²,  and consequently  ⟨u, v⟩²/‖v‖² ≤ ‖u‖²,

which yields the desired result.

Now we shall return to the approximate solution of (2.1.1) using polynomials.
To this end we introduce the concept of the weak formulation, viz:

2.2 Variational formulation for (IVP)


We multiply the initial value problem (2.1.1) with test functions v in a certain
vector space V and integrate over [0, T ], to get
  ∫_0^T u̇(t)v(t) dt = λ ∫_0^T u(t)v(t) dt,   ∀v ∈ V,     (2.2.1)

or equivalently
  ∫_0^T (u̇(t) − λu(t))v(t) dt = 0,   ∀v(t) ∈ V,     (2.2.2)

which, interpreted as inner product, means that

(u̇(t) − λ u(t)) ⊥ v(t), ∀v(t) ∈ V. (2.2.3)

We refer to (2.2.1) as the variational problem for (2.1.1). We shall seek a


solution for (2.2.1) in C(0, T), or in

  V := H¹(0, T) := { f : ∫_0^T ( f(t)² + ḟ(t)² ) dt < ∞ }.

Definition 2.7. If w is an approximation of u in the variational problem


(2.2.1), then R(w(t)) := ẇ(t) − λw(t) is called the residual error of w(t).
In general, for an approximate solution w we have ẇ(t) − λw(t) ≠ 0,
otherwise w and u would satisfy the same equation and by uniqueness we
would get the exact solution (w = u). Our requirement is instead that w
should satisfy (2.2.3), i.e. the equation (2.1.1) in average. In other words

R(w(t)) ⊥ v(t), ∀v(t) ∈ V. (2.2.4)

We look for an approximate solution U (t), called a trial function for (2.1.1),
in the space of polynomials of degree ≤ q:

V (q) := P (q) = {U : U (t) = ξ0 + ξ1 t + ξ2 t2 + . . . + ξq tq }. (2.2.5)

Hence, to determine U (t) we need to determine the coefficients ξ0 , ξ1 , . . . ξq .


We refer to V (q) as the trial space. Note that u(0) = u0 is given and therefore
we may take U (0) = ξ0 = u0 . It remains to find the real numbers ξ1 , . . . , ξq .
These are coefficients of the q linearly independent monomials t, t2 , . . . , tq .
To this end we define the test function space:

  V0^(q) := P0^(q) = {v ∈ P^(q) : v(0) = 0}.     (2.2.6)

Thus, v can be written as v(t) = c1 t + c2 t2 + . . . + cq tq . For an approximate


solution U , we require its residual R(U ) to satisfy the condition (2.2.4):
  R(U(t)) ⊥ v(t),   ∀v(t) ∈ P0^(q).

2.3 Galerkin finite element method for (2.1.1)


Given u(0) = u0, find the approximate solution U ∈ P^(q) of (2.1.1) satisfying

  ∫_0^T R(U(t))v(t) dt = ∫_0^T (U̇(t) − λU(t))v(t) dt = 0,   ∀v(t) ∈ P0^(q).     (2.3.1)

Formally, this can be obtained by requiring U to satisfy (2.2.2). Thus, since
U ∈ P^(q), we may write U(t) = u0 + Σ_{j=1}^q ξj t^j, so that U̇(t) = Σ_{j=1}^q jξj t^{j−1}.
Further, P0^(q) is spanned by vi(t) = t^i, i = 1, 2, . . . , q. Therefore, it suffices to
use these t^i:s as test functions. Inserting these representations for U, U̇ and
v = vi, i = 1, 2, . . . , q into (2.3.1) we get

  ∫_0^1 ( Σ_{j=1}^q jξj t^{j−1} − λu0 − λ Σ_{j=1}^q ξj t^j ) · t^i dt = 0,   i = 1, 2, . . . , q.     (2.3.2)

Moving the data to the right hand side, this relation can be rewritten as

  ∫_0^1 Σ_{j=1}^q ( jξj t^{i+j−1} − λ ξj t^{i+j} ) dt = λu0 ∫_0^1 t^i dt,   i = 1, 2, . . . , q.     (2.3.3)

Performing the integration (the ξj:s are constants independent of t) we get

  Σ_{j=1}^q ξj [ j t^{i+j}/(i + j) − λ t^{i+j+1}/(i + j + 1) ]_{t=0}^{t=1} = λ u0 [ t^{i+1}/(i + 1) ]_{t=0}^{t=1},     (2.3.4)

or equivalently

  Σ_{j=1}^q ( j/(i + j) − λ/(i + j + 1) ) ξj = λ u0/(i + 1),   i = 1, 2, . . . , q,     (2.3.5)

which is a linear system of q equations in the q unknowns
(ξ1, ξ2, . . . , ξq), in coordinate form. In matrix form (2.3.5) reads

  AΞ = b,  with  A = (aij),  Ξ = (ξj)_{j=1}^q,  and  b = (bi)_{i=1}^q.     (2.3.6)

But the matrix A, although invertible, is ill-conditioned, i.e. difficult to invert
numerically with any accuracy, mainly because {t^i}_{i=1}^q does not form an
orthogonal basis. For large i and j the last two rows (columns) of A, computed
from aij = j/(i + j) − λ/(i + j + 1), are very close to each other, resulting in a very
small value for the determinant of A.
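The system (2.3.5)-(2.3.6) can be assembled in a few Matlab lines. The sketch below uses the illustrative assumptions λ = 1, u0 = 1 and q = 10, and also displays the condition number of A, which illustrates the ill-conditioning discussed above.

% Assemble and solve the monomial Galerkin system (2.3.5)-(2.3.6).
lambda = 1; u0 = 1; q = 10;             % illustrative data
A = zeros(q); b = zeros(q,1);
for i = 1:q
    for j = 1:q
        A(i,j) = j/(i+j) - lambda/(i+j+1);   % entries a_ij of (2.3.5)
    end
    b(i) = lambda*u0/(i+1);                  % right hand side of (2.3.5)
end
xi = A\b;        % coefficients xi_1, ..., xi_q of U(t) = u0 + sum xi_j t^j
cond(A)          % the large condition number shows the ill-conditioning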
If we insist on using a polynomial basis up to a certain order, then instead of
monomials, the use of Legendre orthogonal polynomials would yield a diago-
nal (sparse) coefficient matrix and make the problem well conditioned. This
however, is a rather tedious task. A better approach would be through the
use of piecewise polynomial approximations (see Chapter 5) on a partition of
[0, T ] into subintervals, where we use low order polynomial approximations
on each subinterval.
The L2 -projection onto a space of polynomials
A polynomial πf interpolating a given function f (x) on an interval (a, b)
agrees with point values of f at a certain discrete set of points xi ∈ (a, b) :
πf (xi ) = f (xi ), i = 1, . . . , n, for some integer n. This concept can be gener-
alized to determine a polynomial P f so that certain averages agree. These
could include the usual average of f over [a, b] defined by,
Z b
1
f (x) dx,
b−a a

or a generalized average of f with respect to a weight function w defined by


Z b
hf, wi = f (x)w(x) dx.
a

Pf

x
x0 x1 x2 xM xM +1 = 1

Figure 2.7: An example of a function f and its L2 projection P f in [0, 1].



Definition 2.8. The orthogonal projection, or L2 -projection, of the function


f onto P q (a, b) is the polynomial P f ∈ P q (a, b) such that

(f, w) = (P f, w) ⇐⇒ (f − P f, w) = 0 for all w ∈ P q (a, b). (2.3.7)

Thus, (2.3.7) is equivalent to a (q + 1) × (q + 1) system of equations.
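A minimal Matlab sketch of this L2-projection, using the monomial basis of P^q(a, b) and numerical quadrature, is given below; the choices of f, a, b and q are illustrative assumptions.

% L2-projection (2.3.7) of f onto P^q(a,b) in the monomial basis.
f = @(x) sin(pi*x); a = 0; b = 1; q = 2;     % illustrative data
M = zeros(q+1); rhs = zeros(q+1,1);
for i = 0:q
    for j = 0:q
        M(i+1,j+1) = integral(@(x) x.^i .* x.^j, a, b);  % (x^i, x^j)
    end
    rhs(i+1) = integral(@(x) f(x).*x.^i, a, b);          % (f, x^i)
end
c = M\rhs;                            % Pf = c_0 + c_1 x + ... + c_q x^q
Pf = @(x) polyval(flipud(c).', x);    % evaluate the projection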

2.4 A Galerkin method for (BVP)


We consider the Galerkin method for the following stationary (u̇ = du/dt = 0)
heat equation in one dimension:

−u′′ (x) = f (x), 0 < x < 1; u(0) = u(1) = 0. (2.4.1)

Let Th : {jh}_{j=0}^{M+1}, (M + 1)h = 1, be a uniform partition of the interval [0, 1]
into the subintervals Ij = ((j − 1)h, jh), with the same length |Ij| = h,
j = 1, 2, . . . , M + 1. We define the finite dimensional space Vh0 by

Vh0 := {v ∈ C(0, 1) : v is a piecewise linear function on Th , v(0) = v(1) = 0},

with the basis functions {ϕj}_{j=1}^M defined below (these functions will be used to
determine the values of the approximate solution at the points xj, j = 1, . . . , M).
Due to the fact that u is known at the boundary points 0 and 1, it is not
necessary to supply test functions corresponding to the values at x0 = 0 and
xM +1 = 1. However, in the case of given non-homogeneous boundary data
u(0) = u0 6= 0 and/or u(1) = u1 6= 0, to represent the trial function, one uses
the basis functions to all internal nodes as well as those corresponding to the
non-homogeneous data (i.e. at x = 0 and/or x = 1).

Remark 2.1. If the Dirichlet boundary condition is given at only one of the
boundary points; say x0 = 0 and the other one satisfies, e.g. a Neumann
condition as

−u′′ (x) = f (x), 0 < x < 1; u(0) = b0 , u′ (1) = b1 , (2.4.2)

then the test function ϕ0 (at x0 = 0) will be unnecessary (no matter whether
b0 = 0 or b0 ≠ 0), whereas one needs to provide the half-hat function ϕM+1
at xM+1 = 1 (dashed in Figure 2.8 below). Note that ϕ0 participates (as data) in
representing the trial function U (see exercises at the end of this chapter).
Figure 2.8: Piecewise linear basis functions.

Now we define the function space

  V0 = H0¹(0, 1) := { w : ∫_0^1 (w(x)² + w′(x)²) dx < ∞, w(0) = w(1) = 0 }.

A variational formulation for problem (2.4.1), is based on multiplying (2.4.1)


by a test function v ∈ V0 and integrating over [0, 1):
  ∫_0^1 (−u′′(x) − f(x)) v(x) dx = 0,   ∀v(x) ∈ V0.     (2.4.3)

Integrating by parts we get


  −∫_0^1 u′′(x)v(x) dx = ∫_0^1 u′(x)v′(x) dx − [u′(x)v(x)]_0^1,     (2.4.4)

and since for v(x) ∈ V0 ; v(0) = v(1) = 0, we end up with


  −∫_0^1 u′′(x)v(x) dx = ∫_0^1 u′(x)v′(x) dx.     (2.4.5)

Thus the variational formulation for (2.4.1) is: Find u ∈ V0 such that
  ∫_0^1 u′(x)v′(x) dx = ∫_0^1 f(x)v(x) dx,   ∀v ∈ V0.     (2.4.6)

This is a justification for the finite element formulation:



The Galerkin finite element method (FEM) for the problem (2.4.1):
Find U (x) ∈ Vh0 such that
  ∫_0^1 U′(x)v′(x) dx = ∫_0^1 f(x)v(x) dx,   ∀v(x) ∈ Vh0.     (2.4.7)

Thus the Galerkin approximation U is very similar to P u: The L2 -projection


of u. We shall determine ξj = U (xj ) which are the approximate values of u(x)
at the node points xj = jh, 1 ≤ j ≤ M . To this end using basis functions
ϕj (x), we may write
  U(x) = Σ_{j=1}^M ξj ϕj(x),   which implies that   U′(x) = Σ_{j=1}^M ξj ϕj′(x).     (2.4.8)

Thus, (2.4.7) can be written as


  Σ_{j=1}^M ξj ∫_0^1 ϕj′(x)v′(x) dx = ∫_0^1 f(x)v(x) dx,   ∀v(x) ∈ Vh0.     (2.4.9)

Since every v(x) ∈ Vh0 is a linear combination of the basis functions ϕi (x),
it suffices to try with v(x) = ϕi (x), for i = 1, 2, . . . , M : That is, to find ξj
(constants), 1 ≤ j ≤ M such that
  Σ_{j=1}^M ( ∫_0^1 ϕi′(x)ϕj′(x) dx ) ξj = ∫_0^1 f(x)ϕi(x) dx,   i = 1, 2, . . . , M.     (2.4.10)

This M × M system of equations can be written in the matrix form as


Aξ = b. (2.4.11)
Here A is called the stiffness matrix and b the load vector:
  A = {aij}_{i,j=1}^M,   aij = ∫_0^1 ϕi′(x)ϕj′(x) dx,     (2.4.12)

  b = (b1, b2, . . . , bM)ᵀ,  with  bi = ∫_0^1 f(x)ϕi(x) dx,  and  ξ = (ξ1, ξ2, . . . , ξM)ᵀ.     (2.4.13)

To compute the entries aij of the matrix A, first we need to derive ϕi′(x), viz

  ϕi(x) = (x − (i − 1)h)/h,   (i − 1)h ≤ x ≤ ih,
  ϕi(x) = ((i + 1)h − x)/h,   ih ≤ x ≤ (i + 1)h,
  ϕi(x) = 0,                  else,

and hence

  ϕi′(x) = 1/h,    (i − 1)h < x < ih,
  ϕi′(x) = −1/h,   ih < x < (i + 1)h,
  ϕi′(x) = 0,      else.

Stiffness matrix A:
If |i − j| > 1, then ϕi and ϕj have disjoint supports, see Figure 2.9, and

  aij = ∫_0^1 ϕi′(x)ϕj′(x) dx = 0.

Figure 2.9: ϕj−1 and ϕj+1.

As for i = j, we have that

  aii = ∫_{xi−1}^{xi} (1/h)² dx + ∫_{xi}^{xi+1} (−1/h)² dx
      = (xi − xi−1)/h² + (xi+1 − xi)/h² = 1/h + 1/h = 2/h.

It remains to compute aij for the (applicable!) case j = i ± 1: a straight-
forward calculation (see the figure below) yields

  ai,i+1 = ∫_{xi}^{xi+1} (−1/h) · (1/h) dx = −(xi+1 − xi)/h² = −1/h.     (2.4.14)

Obviously ai+1,i = ai,i+1 = −1/h. To summarize, we have


Figure 2.10: ϕj and ϕj+1.

  aij = 0,                     if |i − j| > 1,
  aii = 2/h,                   i = 1, 2, . . . , M,     (2.4.15)
  ai−1,i = ai,i−1 = −1/h,      i = 2, 3, . . . , M.

By symmetry aij = aji, and we finally have the stiffness matrix for approxi-
mating the stationary heat conduction by piecewise linear polynomials on a
uniform mesh, as the M × M tridiagonal matrix

  Aunif = (1/h) ×
      [  2  −1   0   ·   ·   0 ]
      [ −1   2  −1   0   ·   · ]
      [  0  −1   2  −1   0   · ]     (2.4.16)
      [  ·   ·   ·   ·   ·   0 ]
      [  ·   ·   0  −1   2  −1 ]
      [  0   ·   ·   0  −1   2 ]

As for the components of the load vector b we have


  bi = ∫_0^1 f(x)ϕi(x) dx = ∫_{xi−1}^{xi} f(x) (x − xi−1)/h dx + ∫_{xi}^{xi+1} f(x) (xi+1 − x)/h dx.
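Collecting the pieces, the following Matlab lines are a minimal sketch of assembling and solving the uniform-mesh system Aξ = b; the source term f(x) = x and the mesh size are illustrative assumptions.

% FEM for -u'' = f on (0,1), u(0) = u(1) = 0, uniform mesh (illustrative f, M).
f = @(x) x;
M = 9; h = 1/(M+1);                    % M interior nodes x_j = j*h
x = (1:M)'*h;
A = (1/h)*(2*diag(ones(M,1)) - diag(ones(M-1,1),1) - diag(ones(M-1,1),-1));
b = zeros(M,1);
for i = 1:M                            % load vector from the formula for b_i above
    b(i) = integral(@(s) f(s).*(s - (x(i)-h))/h, x(i)-h, x(i)) + ...
           integral(@(s) f(s).*((x(i)+h) - s)/h, x(i), x(i)+h);
end
xi = A\b;                              % nodal values U(x_j) = xi_j
plot([0; x; 1], [0; xi; 0], 'o-')      % piecewise linear FEM solution, U(0)=U(1)=0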

2.4.1 The nonuniform version


Now let T̃h : 0 = x0 < x1 < . . . < xM < xM+1 = 1 be a partition of
the interval (0, 1) into nonuniform subintervals Ij = (xj−1, xj), with lengths
|Ij| = hj = xj − xj−1, j = 1, 2, . . . , M + 1. We define the finite dimensional
space Vh0 by

Vh0 := {v ∈ C(0, 1) : v is a piecewise linear function on T̃h, v(0) = v(1) = 0},

with the nonuniform basis functions {ϕj}_{j=1}^M. To compute the entries aij of
the coefficient matrix A, first we need to derive ϕ′i (x) for the nonuniform
basis functions, i.e.,

  ϕi(x) = (x − xi−1)/hi,     xi−1 ≤ x ≤ xi,
  ϕi(x) = (xi+1 − x)/hi+1,   xi ≤ x ≤ xi+1,
  ϕi(x) = 0,                 else,

which implies

  ϕi′(x) = 1/hi,      xi−1 < x < xi,
  ϕi′(x) = −1/hi+1,   xi < x < xi+1,
  ϕi′(x) = 0,         else.
Nonuniform stiffness matrix A:
If |i − j| > 1, then ϕi and ϕj have disjoint support, see Figure 2.9, and
  aij = ∫_0^1 ϕi′(x)ϕj′(x) dx = 0.

As for i = j, we have that

  aii = ∫_{xi−1}^{xi} (1/hi)² dx + ∫_{xi}^{xi+1} (−1/hi+1)² dx
      = (xi − xi−1)/hi² + (xi+1 − xi)/hi+1² = 1/hi + 1/hi+1.

For the (applicable!) case j = i ± 1:

  ai,i+1 = ∫_{xi}^{xi+1} (−1/hi+1) · (1/hi+1) dx = −(xi+1 − xi)/hi+1² = −1/hi+1.     (2.4.17)

Obviously ai+1,i = ai,i+1 = −1/hi+1. Thus in the nonuniform case we have that

  aij = 0,                       if |i − j| > 1,
  aii = 1/hi + 1/hi+1,           i = 1, 2, . . . , M,     (2.4.18)
  ai−1,i = ai,i−1 = −1/hi,       i = 2, 3, . . . , M.

By symmetry aij = aji, and we finally have the stiffness matrix on a nonuniform
mesh, for the stationary heat conduction, as:

  A = [ 1/h1 + 1/h2     −1/h2          0        ...          0        ]
      [    −1/h2     1/h2 + 1/h3     −1/h3       0           0        ]
      [       0          ...          ...       ...          0        ]     (2.4.19)
      [      ...          0           ...       ...       −1/hM       ]
      [       0          ...           0       −1/hM   1/hM + 1/hM+1  ]

With a uniform mesh, i.e. hi = h, we get that A = Aunif.
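A minimal Matlab sketch of assembling this nonuniform stiffness matrix from a given set of nodes is shown below; the node placement is an illustrative assumption.

% Nonuniform stiffness matrix (2.4.19) from the nodes 0 = x_0 < ... < x_{M+1} = 1.
xnodes = [0, 0.1, 0.25, 0.45, 0.7, 1];      % illustrative node placement
h = diff(xnodes);                            % h_1, ..., h_{M+1}
M = length(xnodes) - 2;                      % number of interior nodes
A = zeros(M);
for i = 1:M
    A(i,i) = 1/h(i) + 1/h(i+1);              % diagonal entries, see (2.4.18)
    if i < M
        A(i,i+1) = -1/h(i+1);                % off-diagonal entries
        A(i+1,i) = -1/h(i+1);
    end
end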

Remark 2.2. Unlike the matrix A for polynomial approximation of IVP in


(2.3.5), A has a more desirable structure, e.g. A is a sparse, tridiagonal and
symmetric matrix. This is due to the fact that the basis functions {ϕj}_{j=1}^M
are nearly orthogonal.

2.5 Exercises
Problem 2.1. Prove that V0^(q) := {v ∈ P^(q)(0, 1) : v(0) = 0} is a subspace
of P^(q)(0, 1).
Problem 2.2. Consider the ODE: u̇(t) = u(t), 0 < t < 1; u(0) = 1.
Compute its Galerkin approximation in P^(q)(0, 1), for q = 1, 2, 3, and 4.

Problem 2.3. Consider the ODE: u̇(t) = u(t), 0 < t < 1; u(0) = 1.
Compute the L2(0, 1) projection of the exact solution u into P^3(0, 1).
Problem 2.4. Compute the stiffness matrix and load vector in a finite ele-
ment approximation of the boundary value problem

−u′′ (x) = f (x), 0 < x < 1, u(0) = u(1) = 0,

with f (x) = x and h = 1/4.


Problem 2.5. We want to find a solution approximation U (x) to

−u′′ (x) = 1, 0 < x < 1, u(0) = u(1) = 0,

using the ansatz U (x) = A sin πx + B sin 2πx.


a. Calculate the exact solution u(x).
b. Write down the residual R(x) = −U ′′ (x) − 1
c. Use the orthogonality condition
Z 1
R(x) sin πnx dx = 0, n = 1, 2,
0

to determine the constants A and B.


d. Plot the error e(x) = u(x) − U (x).
Problem 2.6. Consider the boundary value problem

−u′′ (x) + u(x) = x, 0 < x < 1, u(0) = u(1) = 0.

a. Verify that the exact solution of the problem is given by


  u(x) = x − sinh x / sinh 1.

b. Let U (x) be a solution approximation defined by

U (x) = A sin πx + B sin 2πx + C sin 3πx,

where A, B, and C are unknown constants. Compute the residual function

R(x) = −U ′′ (x) + U (x) − x.



c. Use the orthogonality condition


Z 1
R(x) sin πnx dx = 0, n = 1, 2, 3,
0

to determine the constants A, B, and C.

Problem 2.7. Let U (x) = ξ0 φ0 (x) + ξ1 φ1 (x) be a solution approximation to

−u′′ (x) = x − 1, 0 < x < π, u′ (0) = u(π) = 0,

where ξi , i = 0, 1, are unknown coefficients and


  φ0(x) = cos(x/2),   φ1(x) = cos(3x/2).
a. Find the analytical solution u(x).

b. Define the approximate solution residual R(x).

c. Compute the constants ξi using the orthogonality condition


Z π
R(x) φi (x) dx = 0, i = 0, 1,
0

i.e., by approximating u(x) as a linear combination of φ0 (x) and φ1 (x)

Problem 2.8. Use the projection technique of the previous exercises to solve

−u′′ (x) = 0, 0 < x < π, u(0) = 0, u(π) = 2,


assuming that U(x) = A sin x + B sin 2x + C sin 3x + (2/π²)x².

Problem 2.9. Show that (f − Ph f, v) = 0, ∀v ∈ Vh, if and only if
(f − Ph f, ϕi) = 0, i = 0, . . . , N, where {ϕi}_{i=0}^N ⊂ Vh is the basis of hat-functions.
Chapter 3

Interpolation, Numerical
Integration in 1d

3.1 Preliminaries
Definition 3.1. A polynomial interpolant πq f of a function f , defined on
an interval I = [a, b], is a polynomial of degree ≤ q having the nodal values
at q + 1 distinct points xj ∈ [a, b], j = 0, 1, . . . , q, coinciding with those of f ,
i.e., πq f ∈ P q (a, b) and πq f (xj ) = f (xj ), j = 0, . . . , q.
Below we illustrate this definition through a simple and familiar example.

Example 3.1. Linear interpolation on an interval. We start with the


unit interval I := [0, 1] and a continuous function f : I → R. We let q = 1
and seek the linear interpolant of f on I, i.e. the linear function π1 f ∈ P 1 ,
such that π1 f (0) = f (0) and π1 f (1) = f (1). Thus we seek the constants C0
and C1 in the following representation of π1 f ∈ P 1 ,
π1 f (x) = C0 + C1 x, x ∈ I, (3.1.1)
where
π1 f (0) = f (0) =⇒ C0 = f (0), and
(3.1.2)
π1 f (1) = f (1) =⇒ C0 + C1 = f (1) =⇒ C1 = f (1) − f (0).
Inserting C0 and C1 into (3.1.1) it follows that
π1 f (x) = f (0)+(f (1)−f (0))x = f (0)(1−x)+f (1)x := f (0)λ0 (x)+f (1)λ1 (x).


In other words π1 f (x) is represented in two different bases:

π1 f (x) = C0 · 1 + C1 · x, with {1, x} as the set of basis functions and

π1 f (x) = f (0)(1−x)+f (1)x, with {1−x, x} as the set of basis functions.


The functions λ0 (x) = 1 − x and λ1 (x) = x are linearly independent, since if

0 = α0 (1 − x) + α1 x = α0 + (α1 − α0 )x, for all x ∈ I, (3.1.3)

then

  x = 0 =⇒ α0 = 0,   x = 1 =⇒ α1 = 0,   hence α0 = α1 = 0.     (3.1.4)

Figure 3.1: Linear interpolation and basis functions for q = 1.

Remark 3.1. Note that if we define a scalar product on P^k(a, b) by

  (p, q) = ∫_a^b p(x)q(x) dx,   ∀p, q ∈ P^k(a, b),     (3.1.5)

then we can easily verify that neither {1, x} nor {1 − x, x} is an orthogonal
basis for P^1(0, 1), since (1, x) := ∫_0^1 1 · x dx = [x²/2]_0^1 = 1/2 ≠ 0 and
(1 − x, x) := ∫_0^1 (1 − x)x dx = 1/6 ≠ 0.

With such background, it is natural to pose the following question:



Question 3.1. How well does πq f approximate f ? In other words how


large/small will the error be in approximating f (x) by πq f (x)?
To answer this question we need to estimate the difference between f (x) and
πq f (x). For instance for q = 1, geometrically, the deviation of f (x) from
π1 f (x) (from being linear) depends on the curvature of f (x), i.e. on how
curved f (x) is. In other words, on how large f ′′ (x) is, say, on an interval
(a, b). To quantify the relationship between the size of the error f − π1 f and
the size of f ′′ , we need to introduce some measuring instrument for vectors
and functions:
Definition 3.2. Let x = (x1 , . . . , xn )T and y = (y1 , . . . , yn )T ∈ Rn be two
column vectors (T stands for transpose). We define the scalar product of x
and y by
  ⟨x, y⟩ = xᵀy = x1 y1 + · · · + xn yn,

and the vector norm for x as the Euclidean length of x:

  ‖x‖ := √⟨x, x⟩ = √(x1² + · · · + xn²).

Lp (a, b)-norm: Assume that f is a real valued function defined on the in-
terval (a, b). Then we define the Lp -norm (1 ≤ p ≤ ∞) of f by
  Lp-norm:   ‖f‖_{Lp(a,b)} := ( ∫_a^b |f(x)|^p dx )^{1/p},   1 ≤ p < ∞,
  L∞-norm:   ‖f‖_{L∞(a,b)} := max_{x∈[a,b]} |f(x)|.

For 1 ≤ p ≤ ∞ we define the Lp(a, b)-space by

  Lp(a, b) := {f : ‖f‖_{Lp(a,b)} < ∞}.


Below we shall answer Question 3.1, first in the L∞ -norm, and then in the
Lp -norm (mainly for p = 1, 2.)
Theorem 3.1. (L∞ -error estimates for linear interpolation in an interval)
Assume that f ′′ ∈ L∞ (a, b). Then, for q = 1, i.e. only 2 interpolation
nodes (e.g. the end-points of the interval), there are interpolation constants
Ci, i = 1, 2, 3, independent of the function f and the size of the interval
[a, b], such that

(1) ‖π1 f − f‖_{L∞(a,b)} ≤ C1 (b − a)² ‖f′′‖_{L∞(a,b)},

(2) ‖π1 f − f‖_{L∞(a,b)} ≤ C2 (b − a) ‖f′‖_{L∞(a,b)},

(3) ‖(π1 f)′ − f′‖_{L∞(a,b)} ≤ C3 (b − a) ‖f′′‖_{L∞(a,b)}.

Proof. Note that every linear function, p(x) on [a, b] can be written as a
linear combination of the basis functions λa (x) and λb (x) where

  λa(x) = (b − x)/(b − a)   and   λb(x) = (x − a)/(b − a):     (3.1.6)
p(x) = p(a)λa (x) + p(b)λb (x). (3.1.7)
Recall that linear combinations of λa (x) and λb (x) give the basis functions
{1, x} for P 1 :

λa (x) + λb (x) = 1, aλa (x) + bλb (x) = x. (3.1.8)

Here, π1 f (x) being a linear function connecting the two points (a, f (a)) and
(b, f (b)), is represented by

π1 f (x) = f (a)λa (x) + f (b)λb (x). (3.1.9)

Figure 3.2: Linear Lagrange basis functions for q = 1.

By the Taylor expansion of f(a) and f(b) about x ∈ (a, b) we can write

  f(a) = f(x) + (a − x)f′(x) + (1/2)(a − x)² f′′(ηa),   ηa ∈ [a, x],
                                                                      (3.1.10)
  f(b) = f(x) + (b − x)f′(x) + (1/2)(b − x)² f′′(ηb),   ηb ∈ [x, b].

Inserting f(a) and f(b) from (3.1.10) into (3.1.9), it follows that

  π1 f(x) = [f(x) + (a − x)f′(x) + (1/2)(a − x)² f′′(ηa)] λa(x)
          + [f(x) + (b − x)f′(x) + (1/2)(b − x)² f′′(ηb)] λb(x).

Rearranging the terms, using (3.1.8) and the identity (which also follows
from (3.1.8)) (a − x)λa(x) + (b − x)λb(x) = 0, we get

  π1 f(x) = f(x)[λa(x) + λb(x)] + f′(x)[(a − x)λa(x) + (b − x)λb(x)]
          + (1/2)(a − x)² f′′(ηa)λa(x) + (1/2)(b − x)² f′′(ηb)λb(x)
          = f(x) + (1/2)(a − x)² f′′(ηa)λa(x) + (1/2)(b − x)² f′′(ηb)λb(x).

Consequently

  |π1 f(x) − f(x)| = | (1/2)(a − x)² f′′(ηa)λa(x) + (1/2)(b − x)² f′′(ηb)λb(x) |.     (3.1.11)

To proceed, we note that for a ≤ x ≤ b both (a − x)² ≤ (a − b)² and (b − x)² ≤
(a − b)²; furthermore λa(x) ≤ 1 and λb(x) ≤ 1 for all x ∈ (a, b). Moreover,
by the definition of the maximum norm, both |f′′(ηa)| ≤ ‖f′′‖_{L∞(a,b)} and
|f′′(ηb)| ≤ ‖f′′‖_{L∞(a,b)}. Thus we may estimate (3.1.11) as

  |π1 f(x) − f(x)| ≤ (1/2)(a − b)² · 1 · ‖f′′‖_{L∞(a,b)} + (1/2)(a − b)² · 1 · ‖f′′‖_{L∞(a,b)},     (3.1.12)

and hence

  |π1 f(x) − f(x)| ≤ (a − b)² ‖f′′‖_{L∞(a,b)},   corresponding to C1 = 1.     (3.1.13)

The other two estimates (2) and (3) are proved similarly.

Remark 3.2. One can show that the optimal value of C1 is 1/8 (cf. Problem
3.10), i.e. the constant C1 = 1 of the proof above is not the optimal one.

An analogue to Theorem 3.1 can be proved in the Lp -norm, p = 1, 2. This


general version (concisely stated below as Theorem 3.2) is the frequently used
Lp -interpolation error estimate.

Theorem 3.2. Let π1 v(x) be the linear interpolant of the function v(x) on
(a, b). Then, assuming that v is twice differentiable (v ∈ C 2 (a, b)), there are
interpolation constants ci , i = 1, 2, 3 such that for p = 1, 2, ∞,
  ‖π1 v − v‖_{Lp(a,b)} ≤ c1 (b − a)² ‖v′′‖_{Lp(a,b)},     (3.1.14)
  ‖(π1 v)′ − v′‖_{Lp(a,b)} ≤ c2 (b − a) ‖v′′‖_{Lp(a,b)},     (3.1.15)
  ‖π1 v − v‖_{Lp(a,b)} ≤ c3 (b − a) ‖v′‖_{Lp(a,b)}.     (3.1.16)
For p = ∞ this is just the previous Theorem 3.1.
Proof. For p = 1 and p = 2, the proof uses the integral form of the Taylor
expansion and is left as an exercise.
Below we review a simple piecewise linear interpolation procedure on a
partition of an interval:

Vector space of piecewise linear functions on an interval. Given


I = [a, b], let Th : a = x0 < x1 < x2 < . . . < xN −1 < xN = b be a
partition of I into subintervals Ij = [xj−1 , xj ] of length hj = |Ij | := xj − xj−1 ;
j = 1, 2, . . . , N . Let
Vh := {v|v is a continuous, piecewise linear function on Th }, (3.1.17)
then Vh is a vector space with the previously introduced hat functions:
{ϕj}_{j=0}^N as basis functions. Note that ϕ0(x) and ϕN(x) are left and right
half-hat functions, respectively. We now show that every function in Vh is a
linear combination of ϕj :s.
Lemma 3.1. We have that
  ∀v ∈ Vh:   v(x) = Σ_{j=0}^N v(xj) ϕj(x).     (3.1.18)

Proof. Both the left and right hand side are continuous piecewise linear func-
tions. Thus it suffices to show that they have the same nodal values: Let
x = xj , then since ϕi (xj ) = δij ,
RHS|xj =v(x0 )ϕ0 (xj ) + v(x1 )ϕ1 (xj ) + . . . + v(xj−1 )ϕj−1 (xj )
+ v(xj )ϕj (xj ) + v(xj+1 )ϕj+1 (xj ) + . . . + v(xN )ϕN (xj ) (3.1.19)
=v(xj ) = LHS|xj .

Definition 3.3. For a partition Th : a = x0 < x1 < x2 < . . . < xN = b of


the interval [a, b] we define the mesh function h(x) as the piecewise constant
function h(x) := hj = xj − xj−1 for x ∈ Ij = (xj−1 , xj ), j = 1, 2, . . . , N .

Definition 3.4. Assume that f is a continuous function in [a, b]. Then the
continuous piecewise linear interpolant of f is defined by
  πh f(x) = Σ_{j=0}^N f(xj) ϕj(x),   x ∈ [a, b].

Here the sub-index h refers to the mesh function h(x).

Hence
πh f (xj ) = f (xj ), j = 0, 1, . . . , N. (3.1.20)

Remark 3.3. Note that we denote the linear interpolant, defined for a single
interval [a, b], by π1 f which is a polynomial of degree 1, whereas the piecewise
linear interpolant πh f is defined for a partition Th of [a, b] and is a piecewise
linear function. For the piecewise polynomial interpolants of (higher) degree
q we shall use the notation for Cardinal functions of Lagrange interpolation
(see Section 3.2).

Note that for each interval Ij , j = 1, . . . , N , we have that

(i) πh f (x) is linear on Ij =⇒ πh f (x) = c0 + c1 x for x ∈ Ij .

(ii) πh f (xj−1 ) = f (xj−1 ) and πh f (xj ) = f (xj ).

Combining (i) and (ii) we get

  πh f(xj−1) = c0 + c1 xj−1 = f(xj−1),
  πh f(xj)   = c0 + c1 xj   = f(xj),

which gives

  c1 = (f(xj) − f(xj−1))/(xj − xj−1),
  c0 = (−xj−1 f(xj) + xj f(xj−1))/(xj − xj−1).

Thus, we may write

  c0   = f(xj−1) xj/(xj − xj−1) + f(xj) (−xj−1)/(xj − xj−1),
                                                                     (3.1.21)
  c1 x = f(xj−1) (−x)/(xj − xj−1) + f(xj) x/(xj − xj−1).

Figure 3.3: Piecewise linear interpolant πh f(x) of f(x).

For x ∈ [xj−1 , xj ], j = 1, 2, . . . , N , adding up the equations in (3.1.21) yields

  πh f(x) = c0 + c1 x = f(xj−1) (xj − x)/(xj − xj−1) + f(xj) (x − xj−1)/(xj − xj−1)
          = f(xj−1)λj−1(x) + f(xj)λj(x),

where λj−1 (x) and λj (x) are the restrictions of the piecewise linear basis
functions ϕj−1 (x) and ϕj (x) to Ij .

Figure 3.4: Linear Lagrange basis functions for q = 1 on the subinterval Ij.
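Since πh f is determined by the nodal values f(xj), it can be evaluated, e.g., with Matlab's built-in piecewise linear interpolation. The sketch below uses an illustrative f and partition; both are assumptions made only for the example.

% Piecewise linear interpolant pi_h f on a partition of [0,1] (illustrative data).
f = @(x) sin(2*pi*x);
xnodes = [0, 0.2, 0.35, 0.6, 0.8, 1];        % partition x_0 < x_1 < ... < x_N
fnodes = f(xnodes);                          % nodal values f(x_j)
x = linspace(0, 1, 500);
pihf = interp1(xnodes, fnodes, x, 'linear'); % pi_h f agrees with f at the nodes
plot(x, f(x), x, pihf, '--', xnodes, fnodes, 'o')
legend('f', '\pi_h f', 'nodes')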

In the next section we shall generalize the above procedure and introduce
Lagrange interpolation basis functions.
The main result of this section can be stated as follows:

Theorem 3.3. Let πh v(x) be the piecewise linear interpolant of the function
v(x) on the partition Th of [a, b]. Then assuming that v is sufficiently regular
(v ∈ C 2 (a, b)), there are interpolation constants ci , i = 1, 2, 3, such that for
p = 1, 2, ∞,

  ‖πh v − v‖_{Lp(a,b)} ≤ c1 ‖h² v′′‖_{Lp(a,b)},     (3.1.22)
  ‖(πh v)′ − v′‖_{Lp(a,b)} ≤ c2 ‖h v′′‖_{Lp(a,b)},     (3.1.23)
  ‖πh v − v‖_{Lp(a,b)} ≤ c3 ‖h v′‖_{Lp(a,b)}.     (3.1.24)

Proof. Recalling the definition of the partition Th , we may write

  ‖πh v − v‖_{Lp(a,b)}^p = Σ_{j=1}^N ‖πh v − v‖_{Lp(Ij)}^p ≤ c1^p Σ_{j=1}^N ‖hj² v′′‖_{Lp(Ij)}^p
                         ≤ c1^p ‖h² v′′‖_{Lp(a,b)}^p,     (3.1.25)

where in the first inequality we apply Theorem 3.2 to an arbitrary partition
interval Ij and then sum over j. The other two estimates are proved similarly.

3.2 Lagrange interpolation


Consider P q (a, b); the vector space of all polynomials of degree ≤ q on the
interval (a, b), with the basis functions 1, x, x2 , . . . , xq . We have seen, in
Chapter 2, that this is a non-orthogonal basis (with respect to scalar product
(3.1.5) with, e.g. a = 0 and b = 1) that leads to ill-conditioned coefficient
matrices. We will now introduce a new set of basis functions, which being
almost orthogonal have some useful properties.

Definition 3.5 (Cardinal functions). Lagrange basis is the set of polynomials


{λi}_{i=0}^q ⊂ P^q(a, b) associated with the (q + 1) distinct points a = x0 < x1 <
. . . < xq = b in [a, b] and determined by the requirement that, at the nodes,
λi(xj) = 1 for i = j, and 0 otherwise (λi(xj) = 0 for i ≠ j), i.e. for x ∈ [a, b],

  λi(x) = [(x − x0)(x − x1) · · · (x − xi−1) ↓ (x − xi+1) · · · (x − xq)]
        / [(xi − x0)(xi − x1) · · · (xi − xi−1) ↑ (xi − xi+1) · · · (xi − xq)].     (3.2.1)
By the arrows ↓, ↑ in (3.2.1) we want to emphasize that λi(x) = Π_{j≠i} (x − xj)/(xi − xj)
does not contain the singular factor (x − xi)/(xi − xi). Hence

  λi(xj) = [(xj − x0)(xj − x1) · · · (xj − xi−1)(xj − xi+1) · · · (xj − xq)]
         / [(xi − x0)(xi − x1) · · · (xi − xi−1)(xi − xi+1) · · · (xi − xq)] = δij,

and λi(x), i = 0, 1, . . . , q, is a polynomial of degree q on (a, b) with

  λi(xj) = δij = 1 if i = j,  and  0 if i ≠ j.     (3.2.2)

Example 3.2. Let q = 2; then we have a = x0 < x1 < x2 = b, where

  i = 1, j = 2:   λ1(x2) = (x2 − x0)(x2 − x2) / [(x1 − x0)(x1 − x2)] = 0,

  i = j = 1:      λ1(x1) = (x1 − x0)(x1 − x2) / [(x1 − x0)(x1 − x2)] = 1.
A polynomial P (x) ∈ P q (a, b) with the values pi = P (xi ) at the nodes xi ,
i = 0, 1, . . . , q, can be expressed in terms of the above Lagrange basis as
P (x) = p0 λ0 (x) + p1 λ1 (x) + . . . + pq λq (x). (3.2.3)
Using (3.2.2), P (xi ) = p0 λ0 (xi )+p1 λ1 (xi )+. . .+pi λi (xi )+. . .+pq λq (xi ) = pi .
Recalling definition 3.1, if we choose a ≤ ξ0 < ξ1 < . . . < ξq ≤ b, as
q + 1 distinct interpolation nodes on [a, b], then the interpolating polynomial
πq f ∈ P q (a, b) satisfies
πq f (ξi ) = f (ξi ), i = 0, 1, . . . , q (3.2.4)
and the Lagrange formula (3.2.3) for πq f (x) reads as
πq f (x) = f (ξ0 )λ0 (x) + f (ξ1 )λ1 (x) + . . . + f (ξq )λq (x), a ≤ x ≤ b.
Example 3.3. For q = 1, we have only the nodes a and b. Recall that
λa(x) = (b − x)/(b − a) and λb(x) = (x − a)/(b − a); thus, as in the introduction to this chapter,
b−a b−a
π1 f (x) = f (a)λa (x) + f (b)λb (x). (3.2.5)

Example 3.4. To interpolate f (x) = x^3 + 1 by a polynomial of degree 2, at the nodes x0 = 0, x1 = 1, x2 = 2 of the interval [0, 2], we have

π2 f (x) = f (0)λ0 (x) + f (1)λ1 (x) + f (2)λ2 (x),

where f (0) = 1, f (1) = 2, f (2) = 9, and we may compute the Lagrange basis functions as

λ0 (x) = (1/2)(x − 1)(x − 2),   λ1 (x) = −x(x − 2),   λ2 (x) = (1/2)x(x − 1).

This yields

π2 f (x) = 1 · (1/2)(x − 1)(x − 2) − 2 · x(x − 2) + 9 · (1/2)x(x − 1) = 3x^2 − 2x + 1.
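The computation in Example 3.4 can be checked with a few MATLAB lines; everything below (the nodes, the function and the basis) is taken directly from the example.

% Check of Example 3.4: pi_2 f(x) = f(0)*l0(x) + f(1)*l1(x) + f(2)*l2(x)
% should coincide with 3x^2 - 2x + 1.
f  = @(x) x.^3 + 1;
l0 = @(x) 0.5*(x-1).*(x-2);          % Lagrange basis at the nodes 0, 1, 2
l1 = @(x) -x.*(x-2);
l2 = @(x) 0.5*x.*(x-1);
p2 = @(x) f(0)*l0(x) + f(1)*l1(x) + f(2)*l2(x);
x  = linspace(0,2,5);
disp([p2(x); 3*x.^2 - 2*x + 1])      % the two rows should coincide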

3.3 Numerical integration, Quadrature rules


In the finite element approximation procedure of solving differential equa-
tions, with a given source term (data) f (x), we need to evaluate integrals
of the form ∫ f (x)ϕi (x) dx, with ϕi (x) being a finite element basis function.
Such integrals are not easily computable for higher order approximations
(e.g. with ϕi :s being Lagrange basis of high order) and more involved data.
Further, we encounter matrices with entries being the integrals of products
of these, higher order, basis functions and their derivatives. Except some
special cases (see calculations for A and Aunif in the previous chapter), such
integrations are usually performed approximately by using numerical meth-
ods. Below we briefly review some of these
R b numerical integration techniques.
We approximate the integral I = a f (x)dx using a partition of the in-
terval [a, b] into subintervals, where on each subinterval f is approximated
by polynomials of a certain degree d. We shall denote the approximate value
of the integral I by Id . To proceed we assume, without loss of generality,
that f (x) > 0 on [a, b] and that f is continuous on (a, b). Then the inte-
Rb
gral I = a f (x)dx is interpreted as the area of the domain under the curve
y = f (x); limited by the x-axis and the lines x = a and x = b. We shall
approximate this area using the values of f at certain points as follows.
We start by approximating the integral over a single interval [a, b]. These
rules are referred to as simple rules.
i) Simple midpoint rule uses the value of f at the midpoint x̄ := (a + b)/2 of [a, b], i.e. f ((a + b)/2). This means that f is approximated by the constant function (polynomial of degree 0) P0 (x) = f ((a + b)/2) and the area under the curve y = f (x) by

I = ∫_a^b f (x)dx ≈ (b − a) f ((a + b)/2).   (3.3.1)

To prepare for generalizations, if we let x0 = a and x1 = b and assume that the length of the interval is h, then

I ≈ I0 = h f (a + h/2) = h f (x̄).   (3.3.2)

Figure 3.5: Midpoint approximation I0 of the integral I = ∫_{x0}^{x1} f (x)dx.

ii) Simple trapezoidal rule uses the values of f at two endpoints a and b, i.e.
f (a) and f (b). Here f is approximated by the linear function (polynomial of degree 1) P1 (x) passing through the two points (a, f (a)) and (b, f (b)). Consequently, the area under the curve y = f (x) is approximated as

I = ∫_a^b f (x)dx ≈ (b − a) [f (a) + f (b)]/2.   (3.3.3)

This is the area of the trapezoid between the lines y = 0, x = a and x = b and under the graph of P1 (x), and therefore the rule is referred to as the simple
trapezoidal rule. Once again, for the purpose of generalization, we let x0 = a,
x1 = b and assume that the length of the interval is h, then (3.3.3) can be

written as
I ≈ I1 = h f (a) + h[f (a + h) − f (a)]/2 = h [f (a) + f (a + h)]/2 ≡ (h/2)[f (x0 ) + f (x1 )].   (3.3.4)
iii) Simple Simpson’s rule uses the values of f at the two endpoints a and b,

Figure 3.6: Trapezoidal approximation I1 of the integral I = ∫_{x0}^{x1} f (x)dx.

 
and the midpoint (a + b)/2 of the interval [a, b], i.e. f (a), f (b), and f ((a + b)/2). In this case the area under y = f (x) is approximated by the area under the graph of the second degree polynomial P2 (x) with P2 (a) = f (a), P2 ((a + b)/2) = f ((a + b)/2), and P2 (b) = f (b). To determine P2 (x) we may use Lagrange interpolation for q = 2: let x0 = a, x1 = (a + b)/2 and x2 = b, then

P2 (x) = f (x0 )λ0 (x) + f (x1 )λ1 (x) + f (x2 )λ2 (x),   (3.3.5)
,
and P2 (b) = f (b). To determine P2 (x) we may use Lagrange interpolation
for q = 2: let x0 = a, x1 = (a + b)/2 and x2 = b, then
P2 (x) = f (x0 )λ0 (x) + f (x1 )λ1 (x) + f (x2 )λ2 (x), (3.3.5)
where 

 λ (x) = (x−x1 )(x−x2 )
,

 0 (x0 −x1 )(x0 −x2 )
(x−x0 )(x−x2 )
λ1 (x) = , (3.3.6)

 (x1 −x0 )(x1 −x2 )

 λ (x) = (x−x0 )(x−x1 )
2 (x2 −x0 )(x2 −x1 )
.
Thus

I = ∫_a^b f (x)dx ≈ ∫_a^b P2 (x) dx = Σ_{i=0}^{2} f (xi ) ∫_a^b λi (x) dx.   (3.3.7)

Now we can easily compute the integrals

∫_a^b λ0 (x) dx = ∫_a^b λ2 (x) dx = (b − a)/6,   ∫_a^b λ1 (x) dx = 4(b − a)/6.   (3.3.8)
Hence

I = ∫_a^b f (x)dx ≈ I2 = [(b − a)/6] [f (x0 ) + 4f (x1 ) + f (x2 )].   (3.3.9)

Figure 3.7: Simpson's rule approximation I2 of the integral I = ∫_{x0}^{x1} f (x)dx.

Obviously these approximations are less accurate for large intervals, [a, b]
and/or oscillatory functions f . Following Riemann’s idea we can use these
rules, instead of on the whole interval [a, b], for the subintervals in an appro-
priate partition of [a, b]. Then we get the following generalized versions.

3.3.1 Composite rules for uniform partitions


We shall use the following general algorithm to approximate the integral I = ∫_a^b f (x)dx.

(1) Divide the interval [a, b], uniformly, into N subintervals

a = x0 < x1 < x2 < . . . < xN −1 < xN = b. (3.3.10)



(2) Write the integral as

∫_a^b f (x)dx = ∫_{x0}^{x1} f (x) dx + . . . + ∫_{x_{N−1}}^{x_N} f (x) dx = Σ_{k=1}^{N} ∫_{x_{k−1}}^{x_k} f (x) dx.   (3.3.11)

(3) For each subinterval Ik := [xk−1 , xk ], k = 1, 2, . . . , N , apply the same


integration rule (i) − (iii). Then we get the following generalizations.

(M) Composite midpoint rule: approximates f by constants (the values of


f at the midpoint of the subinterval) on each subinterval. Let

h = |Ik | = (b − a)/N,   and   x̄k = (xk−1 + xk )/2,   k = 1, 2, . . . , N.

Then, using the simple midpoint rule for the interval Ik := [xk−1 , xk ],

∫_{x_{k−1}}^{x_k} f (x) dx ≈ ∫_{x_{k−1}}^{x_k} f (x̄k ) dx = h f (x̄k ).   (3.3.12)

Summing over k, we get the Composite midpoint rule as:


∫_a^b f (x)dx ≈ Σ_{k=1}^{N} h f (x̄k ) = h[f (x̄1 ) + . . . + f (x̄N )] := MN .   (3.3.13)

(T) Composite trapezoidal rule: approximates f by simple trapezoidal rule


on each subinterval Ik ,
∫_{x_{k−1}}^{x_k} f (x) dx ≈ (h/2)[f (xk−1 ) + f (xk )].   (3.3.14)

Summing over k yields the composite trapezoidal rule


∫_a^b f (x)dx ≈ Σ_{k=1}^{N} (h/2)[f (xk−1 ) + f (xk )]
= (h/2)[f (x0 ) + 2f (x1 ) + . . . + 2f (xN−1 ) + f (xN )] := TN .   (3.3.15)

(S) Composite Simpson’s rule: approximates f by simple Simpson’s rule


on each subinterval Ik ,
∫_{x_{k−1}}^{x_k} f (x) dx ≈ (h/6) [ f (xk−1 ) + 4f ((xk−1 + xk )/2) + f (xk ) ].   (3.3.16)

To simplify, we introduce the following identification on each Ik :


z2k−2 = xk−1 ,   z2k−1 = (xk−1 + xk )/2 := x̄k ,   z2k = xk ,   hz = h/2.   (3.3.17)

Figure 3.8: Identification of subintervals for composite Simpson's rule.

Then, summing (3.3.16) over k and using the above identification, we obtain
the composite Simpson’s rule viz,
∫_a^b f (x)dx ≈ Σ_{k=1}^{N} (h/6) [ f (xk−1 ) + 4f ((xk−1 + xk )/2) + f (xk ) ]
= (hz /3) Σ_{k=1}^{N} [ f (z2k−2 ) + 4f (z2k−1 ) + f (z2k ) ]   (3.3.18)
= (hz /3) [ f (z0 ) + 4f (z1 ) + 2f (z2 ) + 4f (z3 ) + 2f (z4 ) + . . . + 2f (z2N−2 ) + 4f (z2N−1 ) + f (z2N ) ] := SN .

The figure below illustrates the starting procedure for the composite Simpson's rule. The numbers in the brackets indicate the actual coefficients on each subinterval. For instance, the end of the first interval, x1 = z2 , coincides with the start of the second interval, so the contributions add up to [1] + [1] = 2 as the coefficient of f (z2 ). This is the case for each interior node xk , i.e. for the z2k :s, k = 1, . . . , N − 1.
Figure 3.9: Coefficients for composite Simpson's rule.
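A minimal MATLAB sketch of the composite rules (M), (T) and (S) on a uniform partition is given below; the test integrand f (x) = e^{−x} on [0, 1] and the choice N = 10 are only illustrative assumptions.

% Composite midpoint, trapezoidal and Simpson rules on a uniform
% partition of [a,b] into N subintervals (cf. (3.3.13), (3.3.15), (3.3.18)).
f = @(x) exp(-x);  a = 0;  b = 1;  N = 10;        % illustrative data
h  = (b-a)/N;
x  = a + (0:N)*h;                                 % nodes x_0,...,x_N
xm = a + ((1:N)-0.5)*h;                           % midpoints of the subintervals
MN = h*sum(f(xm));                                % composite midpoint rule
TN = h/2*(f(x(1)) + 2*sum(f(x(2:N))) + f(x(N+1)));% composite trapezoidal rule
SN = h/6*sum(f(x(1:N)) + 4*f(xm) + f(x(2:N+1)));  % composite Simpson's rule
fprintf('M_N = %.8f  T_N = %.8f  S_N = %.8f  exact = %.8f\n', ...
        MN, TN, SN, 1 - exp(-1));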

Remark 3.4. One can verify that the errors of these integration rules depend on the regularity of the integrand and on the size of the interval (in the simple rules) or the mesh size (in the composite rules). These error estimates, for both simple and composite quadrature rules, can be found in any elementary textbook in numerical analysis and read as follows:
Error in simple midpoint rule

| ∫_{x_{k−1}}^{x_k} f (x) dx − h f (x̄k ) | = (h^3/24)|f ′′(η)|,   η ∈ (xk−1 , xk ).

Error in composite midpoint rule

| ∫_a^b f (x) dx − MN | = (h^2 (b − a)/24)|f ′′(ξ)|,   ξ ∈ (a, b).
Error in simple trapezoidal rule

| ∫_{x_{k−1}}^{x_k} f (x) dx − (h/2)[f (xk−1 ) + f (xk )] | = (h^3/12)|f ′′(η)|,   η ∈ (xk−1 , xk ).

Error in composite trapezoidal rule

| ∫_a^b f (x) dx − TN | = (h^2 (b − a)/12)|f ′′(ξ)|,   ξ ∈ (a, b).
Error in simple Simpson's rule

| ∫_a^b f (x) dx − [(b − a)/6][f (a) + 4f ((a + b)/2) + f (b)] | = (1/90)((b − a)/2)^5 |f^(4)(η)|,   η ∈ (a, b).
Error in composite Simpson's rule

| ∫_a^b f (x) dx − SN | ≤ (h^4 (b − a)/180) max_{ξ∈[a,b]} |f^(4)(ξ)|,   h = (b − a)/N.

Remark 3.5. The rules (M), (T) and (S) use values of the function at equally spaced points. These are not always the best approximation methods. Below we introduce a more general approach, where the evaluation points are chosen optimally.

3.3.2 Gauss quadrature rule


This is an approximate integration rule aimed to choose the points of eval-
uation of an integrand f in an optimal manner, not necessarily at equally
spaced points. Here, we illustrate this rule by an example:

Problem: Choose the nodes xi ∈ [a, b], and coefficients ci , 1 ≤ i ≤ n such


that, for an arbitrary integrable function f , the following error is minimal:
∫_a^b f (x)dx − Σ_{i=1}^{n} ci f (xi ).   (3.3.19)

Solution. The relation (3.3.19) contains 2n unknowns consisting of n nodes


xi and n coefficients ci . Therefore we need 2n equations. Thus if we replace
f by a polynomial, then an optimal choice of these 2n parameters yields a
quadrature rule (3.3.19) which is exact for polynomials, f , of degree ≤ 2n−1.
Example 3.5. Let n = 2 and [a, b] = [−1, 1]. Then the coefficients are c1 and
c2 and the nodes are x1 and x2 . Thus optimal choice of these 4 parameters
should yield that the approximation
∫_{−1}^{1} f (x)dx ≈ c1 f (x1 ) + c2 f (x2 ),   (3.3.20)

is indeed exact for f (x) replaced by any polynomial of degree ≤ 3. So, we


replace f by a polynomial of the form f (x) = Ax3 +Bx2 +Cx+D and require
equality in (3.3.20). Thus, to determine the coefficients c1 , c2 and the nodes
x1 , x2 , in an optimal way, it suffices to change the above approximation to
equality when f is replaced by the basis functions for polynomials of degree
≤ 3: i.e., 1, x, x2 and x3 . Consequently we get the equation system
∫_{−1}^{1} 1 dx = c1 + c2   ⟹   [x]_{−1}^{1} = 2 = c1 + c2 ,
∫_{−1}^{1} x dx = c1 x1 + c2 x2   ⟹   [x^2/2]_{−1}^{1} = 0 = c1 x1 + c2 x2 ,
∫_{−1}^{1} x^2 dx = c1 x1^2 + c2 x2^2   ⟹   [x^3/3]_{−1}^{1} = 2/3 = c1 x1^2 + c2 x2^2 ,   (3.3.21)
∫_{−1}^{1} x^3 dx = c1 x1^3 + c2 x2^3   ⟹   [x^4/4]_{−1}^{1} = 0 = c1 x1^3 + c2 x2^3 ,

which, although nonlinear, has the unique solution presented below:

c1 + c2 = 2,
c1 x1 + c2 x2 = 0,
c1 x1^2 + c2 x2^2 = 2/3,        ⟹   c1 = 1,   c2 = 1,   x1 = −√3/3,   x2 = √3/3.   (3.3.22)
c1 x1^3 + c2 x2^3 = 0,

Hence, the approximation

∫_{−1}^{1} f (x)dx ≈ c1 f (x1 ) + c2 f (x2 ) = f (−√3/3) + f (√3/3),   (3.3.23)
is exact for all polynomials of degree ≤ 3.
Example 3.6. Let f (x) = 3x^2 + 2x + 1. Then ∫_{−1}^{1} (3x^2 + 2x + 1)dx = [x^3 + x^2 + x]_{−1}^{1} = 4, and we can easily check that f (−√3/3) + f (√3/3) = 4.
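The two-point Gauss rule (3.3.23) can be checked directly in MATLAB; the second test polynomial x^3 below is an extra illustrative choice.

% Two-point Gauss rule on [-1,1]: exact for polynomials of degree <= 3.
g  = @(f) f(-sqrt(3)/3) + f(sqrt(3)/3);   % c1 = c2 = 1
f1 = @(x) 3*x.^2 + 2*x + 1;               % Example 3.6: exact value 4
f2 = @(x) x.^3;                           % exact value 0
disp([g(f1), g(f2)])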

Exercises
Problem 3.1. Use the expressions λa (x) = (b − x)/(b − a) and λb (x) = (x − a)/(b − a) to show that λa (x) + λb (x) = 1, and a λa (x) + b λb (x) = x.

Give a geometric interpretation by plotting, λa (x), λb (x), λa (x) + λb (x),


aλa (x), bλb (x) and aλa (x) + bλb (x).

Problem 3.2. Determine the linear interpolant π1 f ∈ P 1 (0, 1) and plot f


and π1 f in the same figure, when
(a) f (x) = x2 , (b) f (x) = sin(πx).

Problem 3.3. Determine the linear interpolation of the function

f (x) = (1/π^2)(x − π)^2 − cos^2 (x − π/2),   −π ≤ x ≤ π,

where the interval [−π, π] is divided into 4 equal subintervals.

Problem 3.4. Assume that w′ ∈ L1 (I). Let x, x̄ ∈ I = [a, b] and w(x̄) = 0. Show that

|w(x)| ≤ ∫_I |w′| dx.   (3.3.24)

Problem 3.5. Let v be the constant interpolant of ϕ on I = [a, b], taking the value of ϕ at a point x ∈ (a, b). Show that

h^{−1} ∫_I |ϕ − v| dx ≤ ∫_I |ϕ′| dx.   (3.3.25)

Problem 3.6. Show that P q (a, b) = {the set of polynomials of degree ≤ q} is a vector space, but that the set {p(x) : p(x) is a polynomial of degree exactly q} is not a vector space.

Problem 3.7. Compute formulas for the linear interpolant of a continuous


function f through the points a and (b+a)/2. Plot the corresponding Lagrange
basis functions.

Problem 3.8. Prove the following interpolation error estimate:

||π1 f − f ||_{L∞(a,b)} ≤ (1/8)(b − a)^2 ||f ′′||_{L∞(a,b)} .
Problem 3.9. Prove that any value of f on the sub-intervals, in a partition
of (a, b), can be used to define πh f satisfying the error bound

||f − πh f ||_{L∞(a,b)} ≤ max_{1≤i≤m+1} hi ||f ′||_{L∞(Ii )} = ||hf ′||_{L∞(a,b)} .

Prove that choosing the midpoint improves the bound by an extra factor 1/2.
 
Problem 3.10. Compute and graph π4 e^{−8x^2} on [−2, 2], which interpolates e^{−8x^2} at 5 equally spaced points in [−2, 2].
Problem 3.11. Write down a basis for the set of piecewise quadratic polynomials W_h^(2) on a partition a = x0 < x1 < x2 < . . . < xm+1 = b of (a, b) into subintervals Ii = (xi−1 , xi ), where

W_h^(q) = {v : v|_{Ii} ∈ P q (Ii ), i = 1, . . . , m + 1}.

Note that a function v ∈ W_h^(2) is not necessarily continuous.
Problem 3.12. Determine a set of basis functions for the space of continuous piecewise quadratic functions V_h^(2) on I = (a, b), where

V_h^(q) = {v ∈ W_h^(q) : v is continuous on I}.

Problem 3.13. Prove that

∫_{x0}^{x1} f ′((x1 + x0 )/2) (x − (x1 + x0 )/2) dx = 0.
Problem 3.14. Prove that

| ∫_{x0}^{x1} f (x) dx − f ((x1 + x0 )/2)(x1 − x0 ) |
≤ (1/2) max_{[x0 ,x1 ]} |f ′′| ∫_{x0}^{x1} (x − (x1 + x0 )/2)^2 dx ≤ (1/24)(x1 − x0 )^3 max_{[x0 ,x1 ]} |f ′′|.

Hint: Use Taylor expansion of f about x = (x1 + x0 )/2.
Chapter 4

Two-point boundary value problems

In this chapter we focus on finite element approximation procedure for two-point


boundary value problems (BVPs). For each problem we formulate a correspond-
ing variational formulation (VF) and a minimization problem (MP) and prove
that the solution to either of BVP, its VF and MP satisfies the other two as well,
i.e,
(BV P ) ” ⇐⇒ ” (V F ) ⇐⇒ (M P ).
The ⇐= in the equivalence ” ⇐⇒ ” is subject to a regularity requirement on
the solution up to the order of the underlying PDE.

4.1 A Dirichlet problem


Assume that a horizontal elastic bar which occupies the interval I := [0, 1],
is fixed at the end-points. Let u(x) denote the displacement of the bar at a
point x ∈ I, a(x) be the modulus of elasticity, and f (x) a given load function,
then one can show that u satisfies the following boundary value problem

(BVP)   −(a(x)u′(x))′ = f (x),   0 < x < 1;   u(0) = u(1) = 0.   (4.1.1)

Equation (4.1.1) is of Poisson’s type modelling also the stationary heat flux.
We shall assume that a(x) is a piecewise continuous function in (0, 1), bounded for 0 ≤ x ≤ 1 and with a(x) > 0 for 0 ≤ x ≤ 1.


Let v(x) and its derivative v ′ (x), x ∈ I, be square integrable functions, that
is: v, v ′ ∈ L2 (0, 1), and define the L2 -based Sobolev space by
H0^1 (0, 1) := { v(x) : ∫_0^1 (v(x)^2 + v′(x)^2 ) dx < ∞, v(0) = v(1) = 0 }.   (4.1.2)

The variational formulation (VF). We multiply the equation in (BVP)


by a so called test function v(x) ∈ H01 (0, 1) and integrate over (0, 1) to obtain
− ∫_0^1 (a(x)u′(x))′ v(x)dx = ∫_0^1 f (x)v(x)dx.   (4.1.3)

Using integration by parts we get


− [ a(x)u′(x)v(x) ]_0^1 + ∫_0^1 a(x)u′(x)v′(x)dx = ∫_0^1 f (x)v(x)dx.   (4.1.4)

Now since v(0) = v(1) = 0 we have thus obtained the variational formulation
for the problem (4.1.1) as follows: find u(x) ∈ H01 such that
(VF)   ∫_0^1 a(x)u′(x)v′(x)dx = ∫_0^1 f (x)v(x)dx,   ∀v(x) ∈ H0^1 .   (4.1.5)

In other words we have shown that if u satisfies (BVP), then u also satisfies
the (VF) above. We write this as (BVP) =⇒ (VF). Now the question
is whether the reverse implication is true, i.e. under which conditions can
we deduce the implication (VF) =⇒ (BVP)? It appears that this question
has an affirmative answer, provided that the solution u to (VF) is twice
differentiable. Then, modulo this regularity requirement, the two problems
are indeed equivalent. We prove this in the following theorem.
Theorem 4.1. The following two properties are equivalent
i) u satisfies (BVP)
ii) u is twice differentiable and satisfies (VF).
Proof. We have already shown that (BVP) =⇒ (VF).
It remains to prove that (VF) =⇒ (BVP). Integrating by parts on the
left hand side in (4.1.5), assuming that u is twice differentiable, f ∈ C(0, 1),
a ∈ C 1 (0, 1), and using v(0) = v(1) = 0 we return to the relation (4.1.3):
− ∫_0^1 (a(x)u′(x))′ v(x)dx = ∫_0^1 f (x) v(x)dx,   ∀v(x) ∈ H0^1 ,   (4.1.6)

which can be rewritten as


∫_0^1 { −(a(x)u′(x))′ − f (x) } v(x)dx = 0,   ∀v(x) ∈ H0^1 .   (4.1.7)

To show that u satisfies (BVP) is equivalent to claiming that (4.1.7) implies

−(a(x)u′(x))′ − f (x) ≡ 0,   ∀x ∈ (0, 1).   (4.1.8)
Suppose not. Then there exists at least one point ξ ∈ (0, 1), such that

−(a(ξ)u′(ξ))′ − f (ξ) ≠ 0,   (4.1.9)

where we may assume, without loss of generality, that

−(a(ξ)u′(ξ))′ − f (ξ) > 0   (or < 0).   (4.1.10)

Thus, by continuity, there exists δ > 0 such that

g(x) := −(a(x)u′(x))′ − f (x) > 0,   for all x ∈ Iδ := (ξ − δ, ξ + δ).   (4.1.11)
Now, take the test function v(x) in (4.1.7) as the hat-function v ∗ (x) > 0,
Figure 4.1: The hat function v∗(x) over the interval (ξ − δ, ξ + δ).

with v∗(ξ) = 1 and the support Iδ , see Fig 4.1. Then v∗(x) ∈ H0^1 and

∫_0^1 { −(a(x)u′(x))′ − f (x) } v∗(x)dx = ∫_{Iδ} g(x) v∗(x) dx > 0,

since both g(x) > 0 and v∗(x) > 0 on Iδ .

This contradicts (4.1.7). Thus our claim is true. Note further that in (VF)
u ∈ H01 implies that u(0) = u(1) = 0 and hence we have also the boundary
conditions and the proof is complete.

Corollary 4.1. (i) If f (x) is continuous and a(x) is continuously differ-


entiable: f ∈ C(0, 1) and a ∈ C 1 (0, 1), then (BV P ), (V F ) have the same
solution.
(ii) If a(x) is discontinuous and f ∈ L2 , then (BVP) is not always well-defined, but (VF) still has a meaning. Therefore (VF) covers a larger set of data than (BVP).
(iii) More importantly, (VF) requires only one derivative of u, i.e. u ∈ C^1 (0, 1), while (BVP) is formulated for u having two derivatives, i.e. u ∈ C^2 (0, 1).

The minimization problem (MP). For the problem (4.1.1), we may for-
mulate yet another equivalent problem, viz:
Find u ∈ H01 such that F (u) ≤ F (w), ∀w ∈ H01 , where F (w) is the total
potential energy of the displacement w(x), given by
(MP)   F (w) = (1/2) ∫_0^1 a(w′)^2 dx − ∫_0^1 f w dx,   (4.1.12)

where the first term is the internal (elastic) energy and the second term is the load potential.

This means that the solution u minimizes the energy functional F (w). Below
we show that the above minimization problem is equivalent to the variational
formulation (VF) and hence also to the boundary value problem (BVP).

Theorem 4.2. The following two properties are equivalent

a) u satisfies the variational formulation (VF)

b) u is the solution for the minimization problem (MP)

i.e.
∫_0^1 a u′ v′ dx = ∫_0^1 f v dx, ∀v ∈ H0^1   ⟺   F (u) ≤ F (w), ∀w ∈ H0^1 .   (4.1.13)

Proof. (=⇒): First we show that the variational formulation (VF) implies
the minimization problem (MP). To this end, for w ∈ H01 we let v = w − u,

then, since H0^1 is a vector space and u ∈ H0^1 , we have v ∈ H0^1 and

F (w) = F (u + v) = (1/2) ∫_0^1 a ((u + v)′)^2 dx − ∫_0^1 f (u + v) dx
= (i) + (ii) + (1/2) ∫_0^1 a(v′)^2 dx − (iii) − (iv),

where (i) = ∫_0^1 a u′v′ dx, (ii) = (1/2) ∫_0^1 a(u′)^2 dx, (iii) = ∫_0^1 f u dx and (iv) = ∫_0^1 f v dx.

Now using (VF) we have (i) − (iv) = 0. Further by the definition of the
functional F , (ii) − (iii) = F (u). Thus
F (w) = F (u) + (1/2) ∫_0^1 a(x)(v′(x))^2 dx,   (4.1.14)

and since a(x) > 0 we get F (w) ≥ F (u); thus we have proved the "⟹" part.
(⇐=): Next we show that the minimization problem (MP) implies the vari-
ational formulation (VF). To this end, assume that F (u) ≤ F (w) ∀w ∈ H01 ,
and for an arbitrary function v ∈ H0^1 , set gv (ε) = F (u + εv). Then, by (MP), gv (as a function of ε) has a minimum at ε = 0. In other words, (∂/∂ε) gv (ε)|_{ε=0} = 0.
We have that
gv (ε) = F (u + εv) = (1/2) ∫_0^1 a ((u + εv)′)^2 dx − ∫_0^1 f (u + εv) dx
= (1/2) ∫_0^1 { a(u′)^2 + aε^2 (v′)^2 + 2aεu′v′ } dx − ∫_0^1 f u dx − ε ∫_0^1 f v dx.
The derivative ∂gv /∂ε of gv is

∂gv /∂ε (ε) = (1/2) ∫_0^1 { 2aε(v′)^2 + 2au′v′ } dx − ∫_0^1 f v dx,   (4.1.15)

and setting ∂gv /∂ε |_{ε=0} = 0 yields

∫_0^1 a u′ v′ dx − ∫_0^1 f v dx = 0,   (4.1.16)

which is our desired variational formulation (VF). Hence, we conclude that


F (u) ≤ F (w), ∀w ∈ H01 =⇒ (VF), and the proof is complete.

We summarize the two theorems in short as

Corollary 4.2.
(BV P ) ” ⇐⇒ ” (V F ) ⇐⇒ (M P ).
Recall that ” ⇐⇒ ” is a conditional equivalence, requiring u to be twice
differentiable, for the reverse implication.

4.2 The finite element method (FEM)


We now formulate the finite element procedure for boundary value problems.
To do so we let Th = {0 = x0 < x1 < . . . < xM < xM +1 = 1} be a partition of
the interval I = [0, 1] into subintervals Ik = [xk−1 , xk ] and set hk = xk − xk−1 .
Define the piecewise constant function h(x) := xk − xk−1 = hk for x ∈ Ik .


Let C(I, P1 (Ik )) denote the set of all continuous piecewise linear functions on
Th (continuous in the whole interval I, linear on each subinterval Ik ), and
define
Vh0 = {v : v ∈ C(I, P1 (Ik )), v(0) = v(1) = 0}. (4.2.1)
Note that Vh0 is a finite dimensional (dimVh0 = M ) subspace of
H0^1 = { v(x) : ∫_0^1 (v(x)^2 + v′(x)^2 ) dx < ∞, and v(0) = v(1) = 0 }.   (4.2.2)

Continuous Galerkin of degree 1, cG(1). A finite element formulation


for our Dirichlet boundary value problem (BVP) is given by: find uh ∈ Vh0
such that the following discrete variational formulation holds true
(FEM)   ∫_0^1 a(x)u′h (x)v′(x)dx = ∫_0^1 f (x)v(x)dx,   ∀v ∈ Vh0 .   (4.2.3)

The finite element method (FEM) is a finite dimensional version of the vari-
ational formulation (VF), where the test functions are in a finite dimensional
subspace Vh0 , of H01 , spanned by the hat-functions, ϕj (x), j = 1, . . . , M .

Thus, if in (VF) we restrict v to Vh0 (rather than H0^1 ) and subtract (FEM) from it, we get the Galerkin orthogonality:

∫_0^1 a(x)(u′(x) − u′h (x))v′(x)dx = 0,   ∀v ∈ Vh0 .   (4.2.4)

Now the purpose is to estimate the error arising in approximating the solution of (BVP) by functions in Vh0 . To this end we need norms in which to measure the error. We recall the definition of the Lp-norms:

Lp-norm:   ||v||_{Lp} = ( ∫_0^1 |v(x)|^p dx )^{1/p},   1 ≤ p < ∞,
L∞-norm:   ||v||_{L∞} = max_{x∈[0,1]} |v(x)|,

and also introduce:

weighted L2-norm:   ||v||_a = ( ∫_0^1 a(x)|v(x)|^2 dx )^{1/2},   a(x) > 0,
energy norm:   ||v||_E = ( ∫_0^1 a(x)|v′(x)|^2 dx )^{1/2}.

Note that ||v||_E = ||v′||_a .

4.3 Error estimates in the energy norm


We shall study an a priori error estimate; where a certain norm of the error is
estimated by some norm of the exact solution u. Here, the error analysis gives
information about the size of the error, depending on the (unknown) exact
solution u, before any computational steps. An a posteriori error estimate;
where the error is estimated by some norm of the residual of the approximate
solution is also included.
Below, first we shall prove a qualitative result which states that the finite
element solution is the best approximate solution to the Dirichlet problem
in the energy norm.
Theorem 4.3. Let u(x) be the solution to the Dirichlet boundary value prob-
lem (4.1.1) and uh (x) its finite element approximation given by (4.2.3), then

ku − uh kE ≤ ku − vkE , ∀v ∈ Vh0 . (4.3.1)

This means that the finite element solution uh ∈ Vh0 is the best approximation
of the solution u, in the energy norm, by functions in Vh0 .

Proof. We take an arbitrary v ∈ Vh0 , then using the energy norm


||u − uh ||_E^2 = ∫_0^1 a(x)(u′(x) − u′h (x))^2 dx
= ∫_0^1 a(x)(u′(x) − u′h (x))(u′(x) − v′(x) + v′(x) − u′h (x)) dx
= ∫_0^1 a(x)(u′(x) − u′h (x))(u′(x) − v′(x)) dx + ∫_0^1 a(x)(u′(x) − u′h (x))(v′(x) − u′h (x)) dx.   (4.3.2)

Since v − uh ∈ Vh0 , by Galerkin orthogonality (4.2.4), the last integral is zero.


Thus,
||u − uh ||_E^2 = ∫_0^1 a(x)(u′(x) − u′h (x))(u′(x) − v′(x)) dx
= ∫_0^1 a(x)^{1/2}(u′(x) − u′h (x)) · a(x)^{1/2}(u′(x) − v′(x)) dx
≤ ( ∫_0^1 a(x)(u′(x) − u′h (x))^2 dx )^{1/2} ( ∫_0^1 a(x)(u′(x) − v′(x))^2 dx )^{1/2}
= ||u − uh ||_E · ||u − v||_E ,   (4.3.3)
where, in the last estimate, we used Cauchy-Schwarz inequality. Thus

||u − uh ||_E ≤ ||u − v||_E ,   ∀v ∈ Vh0 ,   (4.3.4)


and the proof is complete.
The next step is to show that there exists a function v(x) ∈ Vh0 such that
ku − vkE is not too large. The function that we have in mind is πh u(x): the
piecewise linear interpolant of u(x), introduced in Chapter 3.
Theorem 4.4. [An a priori error estimate] Let u and uh be the solutions
of the Dirichlet problem (BVP) and the finite element problem (FEM), re-
spectively. Then there exists an interpolation constant Ci , depending only on
a(x), such that
ku − uh kE ≤ Ci khu′′ ka . (4.3.5)

Proof. Since πh u(x) ∈ Vh0 , we may take v = πh u(x) in (4.3.1) and use, e.g. the second estimate of the interpolation Theorem 3.3 (slightly generalized to the weighted norm || · ||_a , see the remark below) to get

||u − uh ||_E ≤ ||u − πh u||_E = ||u′ − (πh u)′||_a ≤ Ci ||hu′′||_a = Ci ( ∫_0^1 a(x)h^2 (x)u′′(x)^2 dx )^{1/2},   (4.3.6)

which is the desired result and the proof is complete.

Remark 4.1. The interpolation theorem is not stated in the weighted norm.
The a(x) dependence of the interpolation constant Ci can be shown as follows
||u′ − (πh u)′||_a = ( ∫_0^1 a(x)(u′(x) − (πh u)′(x))^2 dx )^{1/2}
≤ ( max_{x∈[0,1]} a(x)^{1/2} ) ||u′ − (πh u)′||_{L2} ≤ ci ( max_{x∈[0,1]} a(x)^{1/2} ) ||hu′′||_{L2}
= ci ( max_{x∈[0,1]} a(x)^{1/2} ) ( ∫_0^1 h(x)^2 u′′(x)^2 dx )^{1/2}
≤ ci ( max_{x∈[0,1]} a(x)^{1/2} ) / ( min_{x∈[0,1]} a(x)^{1/2} ) · ( ∫_0^1 a(x)h(x)^2 u′′(x)^2 dx )^{1/2}.

Thus

Ci = ci ( max_{x∈[0,1]} a(x)^{1/2} ) / ( min_{x∈[0,1]} a(x)^{1/2} ),   (4.3.7)
where ci = c2 is the interpolation constant in the second estimate in Theorem
3.3.

Remark 4.2. If the objective is to divide [0, 1] into a finite number of subin-
tervals, then one can use the result of Theorem 4.4: to obtain an optimal
partition of [0, 1], where whenever a(x)u′′ (x)2 gets large we compensate by
making h(x) smaller. This, however, “requires that the exact solution u(x)
is known” 1 . Now we state the a posteriori error estimate, which instead of
the unknown solution u(x), uses the residual of the computed solution uh (x).

Theorem 4.5 (An a posteriori error estimate). There is an interpolation


constant ci depending only on a(x) such that the error in the finite element
¹ Note that when a is a given constant, then −u′′(x) = (1/a)f (x) is known.

approximation of the Dirichlet boundary value problem (4.1.1) satisfies

||u − uh ||_E ≤ ci ( ∫_0^1 (1/a(x)) h^2 (x)R^2 (uh (x)) dx )^{1/2},   (4.3.8)

where R(uh (x)) = f + (a(x)u′h (x))′ is the residual, and u(x) − uh (x) ∈ H0^1 .
Proof. By the definition of the energy norm we have

||e(x)||_E^2 = ∫_0^1 a(x)(e′(x))^2 dx = ∫_0^1 a(x)(u′(x) − u′h (x))e′(x) dx
= ∫_0^1 a(x)u′(x)e′(x) dx − ∫_0^1 a(x)u′h (x)e′(x) dx.   (4.3.9)

Since e ∈ H0^1 the variational formulation (VF) gives that

∫_0^1 a(x)u′(x)e′(x) dx = ∫_0^1 f (x)e(x) dx.   (4.3.10)

Hence, we can write

||e(x)||_E^2 = ∫_0^1 f (x)e(x) dx − ∫_0^1 a(x)u′h (x)e′(x) dx.   (4.3.11)

Adding and subtracting the interpolant πh e(x) and its derivative (πh e)′(x) to e and e′ in the integrands above yields

||e(x)||_E^2 = ∫_0^1 f (x)(e(x) − πh e(x)) dx + ∫_0^1 f (x)πh e(x) dx − ∫_0^1 a(x)u′h (x)(e′(x) − (πh e)′(x)) dx − ∫_0^1 a(x)u′h (x)(πh e)′(x) dx,

where we denote the second integral by (i) and the fourth by (ii).

Since uh (x) is the solution of the (FEM) given by (4.2.3) and πh e(x) ∈ Vh0
we have that −(ii) + (i) = 0. Hence
||e(x)||_E^2 = ∫_0^1 f (x)(e(x) − πh e(x)) dx − ∫_0^1 a(x)u′h (x)(e′(x) − (πh e)′(x)) dx
= ∫_0^1 f (x)(e(x) − πh e(x)) dx − Σ_{k=1}^{M+1} ∫_{x_{k−1}}^{x_k} a(x)u′h (x)(e′(x) − (πh e)′(x)) dx.

To continue we integrate by parts in the integrals in the summation above:

− ∫_{x_{k−1}}^{x_k} a(x)u′h (x)(e′(x) − (πh e)′(x)) dx
= − [ a(x)u′h (x)(e(x) − πh e(x)) ]_{x_{k−1}}^{x_k} + ∫_{x_{k−1}}^{x_k} (a(x)u′h (x))′ (e(x) − πh e(x)) dx.

Now, using e(xk ) = πh e(xk ), k = 0, 1 . . . , M + 1, where the xk :s are the


interpolation nodes, the boundary terms vanish and thus we end up with
− ∫_{x_{k−1}}^{x_k} a(x)u′h (x)(e′(x) − (πh e)′(x)) dx = ∫_{x_{k−1}}^{x_k} (a(x)u′h (x))′ (e(x) − πh e(x)) dx.

Thus, summing over k, we have


− ∫_0^1 a(x)u′h (x)(e′(x) − (πh e)′(x)) dx = ∫_0^1 (a(x)u′h (x))′ (e(x) − πh e(x)) dx,

where (a(x)u′h (x))′ should be interpreted locally on each subinterval [xk−1 , xk ].


(Since u′h (x) in general is discontinuous, u′′h (x) does not exist globally on
[0, 1].) Therefore
||e(x)||_E^2 = ∫_0^1 f (x)(e(x) − πh e(x)) dx + ∫_0^1 (a(x)u′h (x))′ (e(x) − πh e(x)) dx
= ∫_0^1 { f (x) + (a(x)u′h (x))′ } (e(x) − πh e(x)) dx.

Now let R(uh (x)) = f (x) + (a(x)u′h (x))′ , i.e. R(uh (x)) is the residual error,
which is a well-defined function except in the set {xk }, k = 1, . . . , M ; where
(a(xk )u′h (xk ))′ is not defined. Then, using Cauchy-Schwarz’ inequality we get
the following estimate:

||e(x)||_E^2 = ∫_0^1 R(uh (x))(e(x) − πh e(x)) dx
= ∫_0^1 (1/√a(x)) h(x)R(uh (x)) · √a(x) (e(x) − πh e(x))/h(x) dx
≤ ( ∫_0^1 (1/a(x)) h^2 (x)R^2 (uh (x)) dx )^{1/2} ( ∫_0^1 a(x) ((e(x) − πh e(x))/h(x))^2 dx )^{1/2}.

Further, by the definition of the weighted L2-norm we have

|| (e(x) − πh e(x))/h(x) ||_a = ( ∫_0^1 a(x) ((e(x) − πh e(x))/h(x))^2 dx )^{1/2}.   (4.3.12)

To estimate (4.3.12) we can use the third interpolation estimate (in Theorem 3.3) for e(x) in each subinterval and get

|| (e(x) − πh e(x))/h(x) ||_a ≤ Ci ||e′(x)||_a = Ci ||e(x)||_E ,   (4.3.13)

where Ci as before depends on a(x). Thus

||e(x)||_E^2 ≤ ( ∫_0^1 (1/a(x)) h^2 (x)R^2 (uh (x)) dx )^{1/2} · Ci ||e(x)||_E ,   (4.3.14)

and the proof is complete.

Adaptivity
Below we briefly outline the adaptivity procedure based on the a posteriori
error estimate which uses the approximate solution and which can be used
for mesh-refinements. Loosely speaking, the estimate (4.3.8) predicts local
mesh refinement, i.e. indicates the regions (subintervals) which should be
subdivided further. More specifically the idea is as follows: assume that one
seeks an error less than a given error tolerance TOL > 0:

kekE ≤ TOL, e(x) := u(x) − uh (x). (4.3.15)

Then, one may use the following steps as a mesh refinement strategy:

(i) Make an initial partition of the interval

(ii) Compute the corresponding FEM solution uh (x) and residual R(uh (x)).

(iii) If ||e||_E > TOL, refine the mesh in the places where (1/a(x)) R^2 (uh (x)) is large and perform the steps (ii) and (iii) again. A schematic sketch of such a loop is given below.
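The following MATLAB sketch is only schematic; the routines assemble_and_solve and element_residual are hypothetical placeholders (they stand for a cG(1) solver and for the evaluation of the local indicator h_k^2 R^2 (uh )/a on each element, and are not part of the codes in the appendix).

% Schematic adaptive loop based on the a posteriori estimate (4.3.8).
x   = linspace(0,1,11)';                  % initial partition of [0,1]
TOL = 1e-3;
for iter = 1:20
  uh  = assemble_and_solve(x);            % placeholder: cG(1) solution on mesh x
  eta = element_residual(x, uh);          % placeholder: eta_k ~ h_k^2 R^2(uh)/a on I_k
  if sqrt(sum(eta)) <= TOL, break, end    % stop when the estimated error is small
  ref = find(eta > 0.5*max(eta));         % mark elements with large indicator
  x   = sort([x; (x(ref) + x(ref+1))/2]); % bisect the marked elements
end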

4.4 FEM for convection–diffusion–absorption


BVPs
We now return to the Galerkin approximation of a solution to boundary
value problems and give a framework for the cG(1) (continuous Galerkin of
degree 1) finite element procedure leading to a linear system of equations of
the form Aξ = b. More specifically, we shall extend the approach in Chapter
2, for the stationary heat equation, to cases involving absorption and/or
convection terms. We also consider non-homogeneous Dirichlet boundary
conditions. We illustrate this procedure through the following two examples.
Example 4.1. Determine the coefficient matrix and load vector for the cG(1)
finite element approximation of the boundary value problem
−u′′(x) + 4u(x) = 0,   0 < x < 1;   u(0) = α ≠ 0,   u(1) = β ≠ 0,
on a uniform partition Th of the interval [0, 1] into n + 1 subintervals.
Solution: The objective is to construct an approximate solution uh in a fi-
nite dimensional space spanned by the piecewise linear basis functions (hat-
functions) ϕj (x), j = 0, 1, . . . , n + 1 on the partition Th . This results in a
discrete problem represented by a linear system of equations Aξ = b, for the
unknown ξ = {cj }nj=1 , (c0 = α and cn+1 = β are given in boundary data.)
The continuous solution is assumed to be in the Hilbert space

H^1 = { w : ∫_0^1 (w(x)^2 + w′(x)^2 ) dx < ∞ }.

Since u(0) = α and u(1) = β are given, we need to take the trial functions in
V := {w : w ∈ H 1 , w(0) = α, w(1) = β},
and the test functions in
V 0 := H01 = {w : w ∈ H 1 , w(0) = w(1) = 0}.
We multiply the PDE by a test function v ∈ V 0 and integrate over (0, 1).
Integrating by parts we get
−u′(1)v(1) + u′(0)v(0) + ∫_0^1 u′v′ dx + 4 ∫_0^1 uv dx = 0   ⟺

(VF): Find u ∈ V so that   ∫_0^1 u′v′ dx + 4 ∫_0^1 uv dx = 0,   ∀v ∈ V^0 .

The partition Th , of [0, 1] into n + 1 uniform subintervals I1 = [0, h], I2 =


[h, 2h], . . ., and In+1 = [nh, (n + 1)h], is also described by the nodes x0 =
0, x1 = h, . . . , xn = nh and xn+1 = (n + 1)h = 1. The corresponding discrete
function spaces are (varying with h and hence with n),

Vh := {wh : wh is piecewise linear, continuous on T h , wh (0) = α, wh (1) = β},

and

Vh0 := {vh : vh is piecewise linear and continuous on T h , vh (0) = vh (1) = 0}.

Note that here, the basis functions needed to represent functions in Vh are the
hat-functions ϕj , j = 0, . . . , n+1 (including the two half-hat-functions ϕ0 and
ϕn+1 ), whereas the basis functions describing Vh0 are ϕi :s for i = 1, . . . , n,
i.e. all full-hat-functions but not ϕ0 and ϕn+1 . This is due to the fact that
the values u(0) = α and u(1) = β are given and therefore we do not need to determine those two nodal values approximately.


Now the finite element formulation (the discrete variational formulation)


is: find uh ∈ Vh such that
(FEM)   ∫_0^1 u′h v′ dx + 4 ∫_0^1 uh v dx = 0,   ∀v ∈ Vh0 .
We have that uh (x) = c0 ϕ0 (x) + Σ_{j=1}^{n} cj ϕj (x) + cn+1 ϕn+1 (x), where c0 = α, cn+1 = β and

ϕ0 (x) = (1/h)(h − x) for 0 ≤ x ≤ h, and 0 elsewhere,

ϕj (x) = (1/h)(x − xj−1 ) for xj−1 ≤ x ≤ xj ,  (1/h)(xj+1 − x) for xj ≤ x ≤ xj+1 ,  and 0 for x ∉ [xj−1 , xj+1 ],

and

ϕn+1 (x) = (1/h)(x − xn ) for nh ≤ x ≤ (n + 1)h, and 0 elsewhere.
Inserting uh into (FEM), and choosing v = ϕi (x), i = 1, . . . , n, we get

Σ_{j=1}^{n} ( ∫_0^1 ϕ′j (x)ϕ′i (x) dx + 4 ∫_0^1 ϕj (x)ϕi (x) dx ) cj
= − ( ∫_0^1 ϕ′0 (x)ϕ′i (x) dx + 4 ∫_0^1 ϕ0 (x)ϕi (x) dx ) c0
− ( ∫_0^1 ϕ′n+1 (x)ϕ′i (x) dx + 4 ∫_0^1 ϕn+1 (x)ϕi (x) dx ) cn+1 .

In matrix form this corresponds to Aξ = b with A = S + 4M, where S = Aunif is the, previously computed, n × n stiffness matrix

S = (1/h) ×
[  2  −1   0   0  . . .   0 ]
[ −1   2  −1   0  . . .   0 ]
[  0  −1   2  −1  . . .   0 ]
[ . . . . . . . . . . . . . ]
[  0  . . . . . . −1   2  −1 ]
[  0  . . . . . . . . . −1   2 ],   (4.4.1)

and M is the mass matrix given by

M =
[ ∫_0^1 ϕ1 ϕ1   ∫_0^1 ϕ2 ϕ1   . . .   ∫_0^1 ϕn ϕ1 ]
[ ∫_0^1 ϕ1 ϕ2   ∫_0^1 ϕ2 ϕ2   . . .   ∫_0^1 ϕn ϕ2 ]
[    . . .          . . .      . . .      . . .    ]
[ ∫_0^1 ϕ1 ϕn   ∫_0^1 ϕ2 ϕn   . . .   ∫_0^1 ϕn ϕn ].   (4.4.2)

Note the index locations in the matrices S and M:

sij = ∫_0^1 ϕ′j (x)ϕ′i (x) dx,   mij = ∫_0^1 ϕj (x)ϕi (x) dx.

This, however, does not make any difference in the current example, since,
as seen, both S and M are symmetric. To compute the entries of M , we
follow the same procedure as in Chapter 2, and notice that, as S, also M is
symmetric and its elements mij are

mij = mji = ∫_0^1 ϕi ϕj dx = 0,   for all i, j with |i − j| > 1,
mjj = ∫_0^1 ϕj (x)^2 dx,   for i = j,   (4.4.3)
mj,j+1 = ∫_0^1 ϕj (x)ϕj+1 (x) dx,   for i = j + 1.

Figure 4.2: The basis functions ϕj and ϕj+1 .

The diagonal elements are

mjj = ∫_0^1 ϕj (x)^2 dx = (1/h^2) ( ∫_{x_{j−1}}^{x_j} (x − xj−1 )^2 dx + ∫_{x_j}^{x_{j+1}} (xj+1 − x)^2 dx )
= (1/h^2) [ (x − xj−1 )^3/3 ]_{x_{j−1}}^{x_j} − (1/h^2) [ (xj+1 − x)^3/3 ]_{x_j}^{x_{j+1}}
= (1/h^2)(h^3/3) + (1/h^2)(h^3/3) = (2/3) h,   j = 1, . . . , n,   (4.4.4)

and the two super- and sub-diagonals can be computed, using integration by parts, as

mj,j+1 = mj+1,j = ∫_0^1 ϕj ϕj+1 dx = (1/h^2) ∫_{x_j}^{x_{j+1}} (xj+1 − x)(x − xj ) dx
= (1/h^2) [ (xj+1 − x)(x − xj )^2/2 ]_{x_j}^{x_{j+1}} + (1/h^2) ∫_{x_j}^{x_{j+1}} (x − xj )^2/2 dx
= (1/h^2) [ (x − xj )^3/6 ]_{x_j}^{x_{j+1}} = (1/6) h,   j = 1, . . . , n − 1.
Thus the mass matrix in this case is

M = h ×
[ 2/3  1/6   0    0   . . .   0  ]
[ 1/6  2/3  1/6   0   . . .   0  ]
[  0   1/6  2/3  1/6  . . .   0  ]
[ . . . . . . . . . . . . . . .  ]
[  0   . . . . . . 1/6  2/3  1/6 ]
[  0   . . . . . . . . . 1/6  2/3 ]
= (h/6) ×
[ 4  1  0  0  . . .  0 ]
[ 1  4  1  0  . . .  0 ]
[ 0  1  4  1  . . .  0 ]
[ . . . . . . . . . .  ]
[ 0  . . . . .  1  4  1 ]
[ 0  . . . . . . . . 1  4 ].
Hence, for i, j = 1, . . . , n, the coefficient matrix A = S + 4M is given by

[A]ij = ∫_0^1 ϕ′i ϕ′j dx + 4 ∫_0^1 ϕi ϕj dx =
  2/h + 8h/3,   i = j,
  −1/h + 2h/3,  |i − j| = 1,
  0,            otherwise.
Finally, with c0 = α and cn+1 = β, we get the load vector

b1 = −(−1/h + 2h/3)c0 = α(1/h − 2h/3),
b2 = . . . = bn−1 = 0,
bn = −(−1/h + 2h/3)cn+1 = β(1/h − 2h/3).
Now, for each particular choice of h (i.e. n), α and β we may solve Aξ =
b to obtain the nodal values of the approximate solution uh at the inner
nodes xj , j = 1, . . . , n. That is: ξ = (c1 , . . . , cn )T := (uh (x1 ), . . . , uh (xn ))T .
Connecting the points (xj , uh (xj )), j = 0, . . . , n+1 by straight lines we obtain
the desired continuous piecewise linear approximation of the solution.
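A compact MATLAB sketch of the assembly and solution in this example is given below; the values of n, α and β are illustrative assumptions.

% Sketch of Example 4.1: assemble and solve A*xi = b for -u'' + 4u = 0,
% u(0) = alpha, u(1) = beta, on a uniform mesh with n interior nodes.
n = 9;  h = 1/(n+1);  alpha = 1;  beta = 2;          % illustrative data
S = (1/h)*(2*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1));
M = (h/6)*(4*eye(n) + diag(ones(n-1,1),1) + diag(ones(n-1,1),-1));
A = S + 4*M;
b = zeros(n,1);
b(1) = alpha*(1/h - 2*h/3);                          % contribution from phi_0
b(n) = beta *(1/h - 2*h/3);                          % contribution from phi_{n+1}
xi = A\b;                                            % nodal values uh(x_1),...,uh(x_n)
uh = [alpha; xi; beta];                              % attach the boundary values
plot(0:h:1, uh, '-o')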

Remark 4.3. An easier way to compute the above integrals mj,j+1 (as well
as mjj ) is through Simpson’s rule, which is exact for polynomials of degree
≤ 2. Since ϕj (x)ϕj+1 (x) = 0 at x = xj and x = xj+1 , we need to evaluate
only the midterm of the Simpson’s formula, i.e.
∫_0^1 ϕj ϕj+1 dx = (h/6) · 4 ϕj ((xj + xj+1 )/2) · ϕj+1 ((xj + xj+1 )/2) = (h/6) · 4 · (1/2) · (1/2) = h/6.

For a uniform partition one may also use ϕ0 = 1 − x/h and ϕ1 = x/h on (0, h):

∫_0^1 ϕ0 ϕ1 dx = ∫_0^h (1 − x/h)(x/h) dx = [ (1 − x/h) x^2/(2h) ]_0^h − ∫_0^h (−1/h) x^2/(2h) dx = h/6.
Example 4.2. Below we consider a convection-diffusion problem:

−εu′′(x) + pu′(x) = r,   0 < x < 1;   u(0) = 0,   u′(1) = β ≠ 0,

where ε and p are positive real numbers and r ∈ R. Here −εu′′ is the diffusion
term, pu′ corresponds to convection, and r is a given (here for simplicity a
constant) source (r > 0) or sink (r < 0). We would like to answer the same
question as in the previous example. This time with c0 = u(0) = 0. Then,
the test function at x = 0; ϕ0 will not be necessary. But since u(1) is not
given, we shall need the test function at x = 1: ϕn+1 . The function space for
the continuous solution: the trial function space, and the test function space
are both the same:

V := { w : ∫_0^1 (w(x)^2 + w′(x)^2 ) dx < ∞, and w(0) = 0 }.

We multiply the PDE by a test function v ∈ V and integrate over (0, 1).
Then, integration by parts yields
−εu′(1)v(1) + εu′(0)v(0) + ε ∫_0^1 u′v′ dx + p ∫_0^1 u′v dx = r ∫_0^1 v dx.

Hence, we end up with the variational formulation: find u ∈ V such that

(VF)   ε ∫_0^1 u′v′ dx + p ∫_0^1 u′v dx = r ∫_0^1 v dx + εβv(1),   ∀v ∈ V.

The corresponding discrete test and trial function space is

Vh0 := {wh : wh is piecewise linear and continuous on T h , and wh (0) = 0}.




Thus, the basis functions for Vh0 are the hat-functions ϕj , j = 1, . . . , n + 1


(including the half-hat-function ϕn+1 ), and hence dim(Vh0 ) = n + 1.
Now the finite element formulation reads as follows: find uh ∈ Vh0 such that
(FEM)   ε ∫_0^1 u′h v′ dx + p ∫_0^1 u′h v dx = r ∫_0^1 v dx + εβv(1),   ∀v ∈ Vh0 .

Inserting the ansatz uh (x) = Σ_{j=1}^{n+1} ξj ϕj (x) into (FEM), and choosing v = ϕi (x), i = 1, . . . , n + 1, we get

Σ_{j=1}^{n+1} ( ε ∫_0^1 ϕ′j (x)ϕ′i (x) dx + p ∫_0^1 ϕ′j (x)ϕi (x) dx ) ξj = r ∫_0^1 ϕi (x) dx + εβϕi (1).

In matrix form this corresponds to the linear system of equations Aξ = b with A = εS̃ + pC, where S̃ is computed as Aunif and is the (n + 1) × (n + 1) stiffness matrix with its last diagonal element s̃n+1,n+1 = ∫_0^1 ϕ′n+1 ϕ′n+1 dx = 1/h, and C is the convection matrix with the elements

cij = ∫_0^1 ϕ′j (x)ϕi (x) dx.

Hence we have, evidently,

S̃ = (1/h) ×
[  2  −1   0   0  . . .   0 ]
[ −1   2  −1   0  . . .   0 ]
[  0  −1   2  −1  . . .   0 ]
[ . . . . . . . . . . . . . ]
[  0  . . . . . . −1   2  −1 ]
[  0  . . . . . . . . . −1   1 ].

To compute the entries of C, we note that, like S, M and S̃, C is also a tridiagonal matrix, but C is anti-symmetric (except for its last diagonal entry). Its entries are

cij = 0,   for |i − j| > 1,
cii = ∫_0^1 ϕi (x)ϕ′i (x) dx = 0,   for i = 1, . . . , n,
cn+1,n+1 = ∫_0^1 ϕn+1 (x)ϕ′n+1 (x) dx = 1/2,   (4.4.5)
ci,i+1 = ∫_0^1 ϕi (x)ϕ′i+1 (x) dx = 1/2,   for i = 1, . . . , n,
ci+1,i = ∫_0^1 ϕi+1 (x)ϕ′i (x) dx = −1/2,   for i = 1, . . . , n.

Finally, we have the entries bi of the load vector b as

b1 = . . . = bn = rh, bn+1 = rh/2 + εβ.

Thus,

C = (1/2) ×
[  0   1   0   0  . . .   0 ]
[ −1   0   1   0  . . .   0 ]
[  0  −1   0   1  . . .   0 ]
[ . . . . . . . . . . . . . ]
[  0  . . . . . . −1   0   1 ]
[  0  . . . . . . . . . −1   1 ],

b = rh · (1, 1, 1, . . . , 1, 1/2)^T + εβ · (0, 0, 0, . . . , 0, 1)^T.
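Analogously, a MATLAB sketch of the assembly for this example reads as follows; the values of ε, p, r, β and n are illustrative assumptions.

% Sketch of Example 4.2: -eps*u'' + p*u' = r, u(0) = 0, u'(1) = beta,
% assembled as (eps*Stilde + p*C)*xi = b on a uniform mesh.
eps_ = 0.1;  p = 1;  r = 1;  beta = 1;  n = 9;  h = 1/(n+1);   % illustrative data
e  = ones(n+1,1);
St = (1/h)*(2*diag(e) - diag(e(1:n),1) - diag(e(1:n),-1));
St(n+1,n+1) = 1/h;                        % last diagonal entry of Stilde
C  = 0.5*(diag(e(1:n),1) - diag(e(1:n),-1));
C(n+1,n+1) = 0.5;                         % contribution of the half-hat phi_{n+1}
A  = eps_*St + p*C;
b  = r*h*ones(n+1,1);  b(n+1) = r*h/2 + eps_*beta;
xi = A\b;                                 % nodal values uh(x_1),...,uh(x_{n+1})
uh = [0; xi];                             % prepend the Dirichlet value u(0) = 0
plot(0:h:1, uh, '-o')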

Remark 4.4. In the convection-dominated case ε/p ≪ 1 this standard FEM will not work. Spurious oscillations will appear in the approximate solution. The standard FEM has to be modified in this case.

4.5 Exercises
Problem 4.1. Consider the two-point boundary value problem

−u′′ = f, 0 < x < 1; u(0) = u(1) = 0. (4.5.1)

Let V = {v : kvk+kv ′ k < ∞, v(0) = v(1) = 0}, k·k denotes the L2 -norm.
a. Use V to derive a variational formulation of (4.5.1).

b. Discuss why V is valid as a vector space of test functions.


c. Classify which of the following functions are admissible test functions
sin πx, x2 , x ln x, ex − 1, x(1 − x).
Problem 4.2. Assume that u(0) = u(1) = 0, and that u satisfies
∫_0^1 u′v′ dx = ∫_0^1 f v dx,

for all v ∈ V = {v : kvk + kv ′ k < ∞, v(0) = v(1) = 0}.


a. Show that u minimizes the functional
F (v) = (1/2) ∫_0^1 (v′)^2 dx − ∫_0^1 f v dx.   (4.5.2)

Hint: F (v) = F (u + w) = F (u) + . . . ≥ F (u).


b. Prove that the above minimization problem is equivalent to
−u′′ = f, 0 < x < 1; u(0) = u(1) = 0.
Problem 4.3. Consider the two-point boundary value problem
−u′′ = 1, 0 < x < 1; u(0) = u(1) = 0. (4.5.3)
Let Th : xj = j/4, j = 0, 1, . . . , 4, denote a partition of the interval 0 < x < 1
into four subintervals of equal length h = 1/4 and let Vh be the corresponding
space of continuous piecewise linear functions vanishing at x = 0 and x = 1.
a. Compute a finite element approximation U ∈ Vh to (4.5.3).
b. Prove that U ∈ Vh is unique.
Problem 4.4. Consider once again the two-point boundary value problem
−u′′ = f, 0 < x < 1; u(0) = u(1) = 0.

a. Prove that the finite element approximation U ∈ Vh to u satisfies


k(u − U )′ k ≤ k(u − v)′ k, for all v ∈ Vh .

b. Use this result and interpolation estimate to deduce that


k(u − U )′ k ≤ Ckhu′′ k, (4.5.4)
where C depends on the interpolation constant.

Problem 4.5. Consider the two-point boundary value problem

−(au′)′ = f,   0 < x < 1;   u(0) = 0,   a(1)u′(1) = g1 ,   (4.5.5)
where a > 0 is a positive function and g1 is a constant.
a. Derive the variational formulation of (4.5.5).
b. Discuss how the boundary conditions are implemented.
Problem 4.6. Consider the two-point boundary value problem
−u′′ = 0, x ∈ I := (0, 1); u(0) = 0, u′ (1) = 7. (4.5.6)
Divide I into two subintervals of length h = 1/2 and let Vh be the corresponding
space of continuous piecewise linear functions vanishing at x = 0.
a. Formulate a finite element method for (4.5.6).
b. Calculate by hand the finite element approximation U ∈ Vh to (4.5.6).
c. Study how the boundary condition at x = 1 is approximated.
Problem 4.7. Consider the two-point boundary value problem
−u′′ = 0, 0 < x < 1; u′ (0) = 5, u(1) = 0. (4.5.7)
Let Th : xj = jh, j = 0, 1, . . . , N, h = 1/N be a uniform partition of the
interval 0 < x < 1 into N subintervals and let Vh be the corresponding space
of continuous piecewise linear functions.
a. Use Vh , with N = 3, and formulate a finite element method for (4.5.7).
b. Compute the finite element approximation U ∈ Vh assuming N = 3.
Problem 4.8. Consider the problem of finding a solution approximation to
−u′′ = 1, 0 < x < 1; u′ (0) = u′ (1) = 0. (4.5.8)
Let Th be a partition of the interval 0 < x < 1 into two subintervals of equal
length h = 1/2 and let Vh be the corresponding space of continuous piecewise
linear functions.
a. Find the exact solution to (4.5.8) by integrating twice.
b. Compute a finite element approximation U ∈ Vh to u, if possible.

Problem 4.9. Consider the two-point boundary value problem

−((1 + x)u′ )′ = 0, 0 < x < 1; u(0) = 0, u′ (1) = 1. (4.5.9)

Divide the interval 0 < x < 1 into 3 subintervals of equal length h = 1/3 and
let Vh be the corresponding space of continuous piecewise linear functions
vanishing at x = 0.
a. Use Vh to formulate a finite element method for (4.5.9).
b. Verify that the stiffness matrix A and the load vector b are given by

A = (1/2) ×
[ 16  −9    0 ]
[ −9  20  −11 ]
[  0 −11   11 ],      b = (0, 0, 1)^T.

c. Show that A is symmetric tridiagonal, and positive definite.


d. Derive a simple way to compute the energy norm ||U ||_E^2 , defined by

||U ||_E^2 = ∫_0^1 (1 + x)U ′(x)^2 dx,

where U ∈ Vh is the finite element solution approximation.

Problem 4.10. Consider the two-point boundary value problem

−u′′ = 0, 0 < x < 1; u(0) = 0, u′ (1) = k(u(1) − 1). (4.5.10)

Let Th : 0 = x0 < x1 < x2 < x3 = 1, where x1 = 1/3 and x2 = 2/3, be a partition


of the interval 0 ≤ x ≤ 1 and let Vh be the corresponding space of continuous
piecewise linear functions, which vanish at x = 0.
a. Compute a solution approximation U ∈ Vh to (4.5.10) assuming k = 1.
b. Discuss how the parameter k influences the boundary condition at x = 1.
In particular when k → ∞ and k → 0.

Problem 4.11. Consider the finite element method applied to

−u′′ = 0, 0 < x < 1; u(0) = α, u′ (1) = β,



where α and β are given constants. Assume that the interval 0 ≤ x ≤ 1


is divided into three subintervals of equal length h = 1/3 and that {ϕj }_{j=0}^{3} is
a nodal basis of Vh , the corresponding space of continuous piecewise linear
functions.
a. Verify that the ansatz

U (x) = αϕ0 (x) + ξ1 ϕ1 (x) + ξ2 ϕ2 (x) + ξ3 ϕ3 (x),

yields the following system of equations

(1/h) ×
[ −1   2  −1   0 ]
[  0  −1   2  −1 ]   (α, ξ1 , ξ2 , ξ3 )^T = (0, 0, β)^T.   (4.5.11)
[  0   0  −1   1 ]

b. If α = 2 and β = 3 show that (4.5.11) can be reduced to

(1/h) ×
[  2  −1   0 ]
[ −1   2  −1 ]   (ξ1 , ξ2 , ξ3 )^T = (2h^{−1}, 0, 3)^T.
[  0  −1   1 ]

c. Solve the above system of equations to find U (x).

Problem 4.12. Compute a finite element solution approximation to

−u′′ + u = 1; 0 ≤ x ≤ 1, u(0) = u(1) = 0, (4.5.12)

using the continuous piecewise linear ansatz U = ξ1 ϕ1 (x) + ξ2 ϕ2 (x), where

ϕ1 (x) = 3x for 0 < x < 1/3,   2 − 3x for 1/3 < x < 2/3,   0 for 2/3 < x < 1,

ϕ2 (x) = 0 for 0 < x < 1/3,   3x − 1 for 1/3 < x < 2/3,   3 − 3x for 2/3 < x < 1.

Problem 4.13. Consider the following eigenvalue problem

−au′′ + bu = 0; 0 ≤ x ≤ 1, u(0) = u′ (1) = 0, (4.5.13)

where a, b > 0 are constants. Let Th : 0 = x0 < x1 < . . . < xN = 1,


be a non-uniform partition of the interval 0 ≤ x ≤ 1 into N intervals of
length hi = xi − xi−1 , i = 1, 2, . . . , N and let Vh be the corresponding space
of continuous piecewise linear functions. Compute the stiffness and mass
matrices.

Problem 4.14. Show that the FEM with mesh size h for the problem

−u′′ = 1,   0 < x < 1;   u(0) = 7,   u′(1) = 0,   (4.5.14)

with

U (x) = 7ϕ0 (x) + U1 ϕ1 (x) + . . . + Um ϕm (x),   (4.5.15)

leads to the linear system of equations Ã · Ũ = b̃, where

Ã = (1/h) ×
[ −1   2  −1   0  . . . ]
[  0  −1   2  −1  . . . ]
[ . . . . . . . . . . . ]
[  0  . . .   0  . . .  ]   (m × (m + 1)),

Ũ = (7, U1 , . . . , Um )^T   ((m + 1) × 1),   b̃ = (h, . . . , h, h/2)^T   (m × 1),

which is reduced to AU = b, with

A = (1/h) ×
[  2  −1   0  . . .   0 ]
[ −1   2  −1   0  . . . ]
[ . . . . . . . . . . . ]
[ . . .   0  −1   2  −1 ]
[  0    0   0  −1   2  ],

U = (U1 , U2 , . . . , Um )^T,   b = (h + 7/h, h, . . . , h, h/2)^T.

Problem 4.15. Prove an a priori error estimate for the cG(1) finite element
method for the problem

−u′′ + αu = f, in I = (0, 1), u(0) = u(1) = 0,

where the coefficient α = α(x) is a bounded positive function on I, (0 ≤


α(x) ≤ K, x ∈ I).

Problem 4.16. a) Formulate a cG(1) method for the problem

(a(x)u′(x))′ = 0,   0 < x < 1;   a(0)u′(0) = u0 ,   u(1) = 0,

and give an a priori error estimate.


b) Let u0 = 3 and compute the approximate solution in a) for a uniform partition of I = [0, 1] into 4 intervals and

a(x) = 1/4 for x < 1/2,   1/2 for x > 1/2.

c) Show that, with these special choices, the computed solution is equal to the
exact one, i.e. the error is equal to 0.

Problem 4.17. Prove an a priori error estimate for the finite element
method for the problem

−u′′ (x) + u′ (x) = f (x), 0 < x < 1, u(0) = u(1) = 0.

Problem 4.18. (a) Prove an a priori error estimate for the cG(1) approxi-
mation of the boundary value problem

−u′′ + cu′ + u = f in I = (0, 1), u(0) = u(1) = 0,

where c ≥ 0 is constant.
(b) For which value of c is the a priori error estimate optimal?

Problem 4.19. Let U be the piecewise linear finite element approximation


for

−u′′ (x) + 2xu′ (x) + 2u(x) = f (x), x ∈ (0, 1), u(0) = u(1) = 0,

in a partition Th of the interval [0, 1]. Set e = u − U and derive a priori error
estimates in the energy-norm:
||e||_E^2 = ||e′||^2 + ||e||^2 ,   where   ||w||^2 = ∫_0^1 w(x)^2 dx.
Chapter 5

Scalar Initial Value Problems

This chapter is devoted to numerical methods for time discretizations. Here, we


shall consider problems depending on the time variable, only. The approximation
techniques developed in this chapter, combined with those of the previous chap-
ter for boundary value problems, can be used for the numerical study of initial
boundary value problems; such as, e.g. the heat and wave equations.
As a model problem we shall consider the classical example of population
dynamics described by the following ordinary differential equation (ODE)


(DE)   u̇(t) + a(t)u(t) = f (t),   0 < t ≤ T,
(IV)   u(0) = u0 ,   (5.0.1)

where f (t) is the source term and u̇(t) = du/dt. The coefficient a(t) is a bounded
function. If a(t) ≥ 0 the problem (5.0.1) is called parabolic, while a(t) ≥ α > 0
yields a dissipative problem, in the sense that, with increasing t, perturbations
of solutions to (5.0.1), e.g. introduced by numerical discretization, will decay.
In general, in numerical approximations for (5.0.1), the error accumulates when
advancing in time, i.e. the error of previous time steps adds up to the error of
the present time step. The different types of error accumulation/perturbation
growth are referred to as stability properties of the initial value problem.


5.1 Solution formula and stability


Theorem 5.1. The solution of the problem (5.0.1) is given by
u(t) = u0 · e^{−A(t)} + ∫_0^t e^{−(A(t)−A(s))} f (s)ds,   (5.1.1)

where A(t) = ∫_0^t a(s)ds and e^{A(t)} is the integrating factor.
Proof. Multiplying the (DE) by the integrating factor e^{A(t)} we have

u̇(t)e^{A(t)} + Ȧ(t)e^{A(t)} u(t) = e^{A(t)} f (t),   (5.1.2)

where we used that a(t) = Ȧ(t). Equation (5.1.2) can be rewritten as

(d/dt) ( u(t)e^{A(t)} ) = e^{A(t)} f (t).
We denote the variable by s and integrate from 0 to t to get
∫_0^t (d/ds) ( u(s)e^{A(s)} ) ds = ∫_0^t e^{A(s)} f (s)ds,

i.e.

u(t)e^{A(t)} − u(0)e^{A(0)} = ∫_0^t e^{A(s)} f (s)ds.
Now since A(0) = 0 and u(0) = u0 we get the desired result

u(t) = u0 e^{−A(t)} + ∫_0^t e^{−(A(t)−A(s))} f (s)ds.   (5.1.3)

This representation of u is known as the Variation of constants formula.


Theorem 5.2 (Stability estimates). Using the solution formula, we can de-
rive the following stability estimates:
(i) If a(t) ≥ α > 0, then |u(t)| ≤ e^{−αt} |u0 | + (1/α)(1 − e^{−αt}) max_{0≤s≤t} |f (s)|,

(ii) If a(t) ≥ 0 (i.e. α = 0; the parabolic case), then

|u(t)| ≤ |u0 | + ∫_0^t |f (s)|ds   or   |u(t)| ≤ |u0 | + ||f ||_{L1(0,t)} .   (5.1.4)
Proof. (i) For a(t) ≥ α > 0 we have that A(t) = ∫_0^t a(s)ds is an increasing function of t, A(t) ≥ αt, and

A(t) − A(s) = ∫_0^t a(r) dr − ∫_0^s a(r) dr = ∫_s^t a(r) dr ≥ α(t − s).   (5.1.5)

Thus e^{−A(t)} ≤ e^{−αt} and e^{−(A(t)−A(s))} ≤ e^{−α(t−s)}. Hence, using (5.1.3) we get

|u(t)| ≤ |u0 |e^{−αt} + ∫_0^t e^{−α(t−s)} |f (s)|ds,   (5.1.6)

which yields

|u(t)| ≤ e^{−αt} |u0 | + max_{0≤s≤t} |f (s)| [ (1/α) e^{−α(t−s)} ]_{s=0}^{s=t},   i.e.
|u(t)| ≤ e^{−αt} |u0 | + (1/α)(1 − e^{−αt}) max_{0≤s≤t} |f (s)|.

(ii) Let α = 0 in (5.1.6) (which also holds in this case), then |u(t)| ≤ |u0 | + ∫_0^t |f (s)|ds, and the proof is complete.

Remark 5.1. (i) expresses that the effect of the initial data u0 decays expo-
nentially with time, and that the effect of the source term f on the right hand
side does not depend on the length of the time interval, only on the maximum
value of f , and on the value of α. In case (ii), the influence of u0 remains
bounded in time, and the integral of f indicates an accumulation in time.

5.2 Finite difference methods


Let us first continue as in Example 2.1 and give the other two, very common,
finite difference approaches, for numerical solution of (5.0.1): Let

Tk := {0 = t0 < t1 < . . . < tN−1 < tN = T },

be a partition of the time interval [0, T ] into the subintervals Ik := [tk−1 , tk ], k = 1, . . . , N, as in Example 2.1.


Example 5.1. Now we discretize the IVP (5.0.1), for a(t) a positive constant, with the backward Euler method on the partition Tk , approximating the derivative u̇(t) on each subinterval Ik = (tk−1 , tk ] by the backward-difference quotient u̇(t) ≈ (u(tk ) − u(tk−1 ))/(tk − tk−1 ). Then an approximation of (5.0.1), with f (t) ≡ 0, is given by

(u(tk ) − u(tk−1 ))/(tk − tk−1 ) = −a · u(tk ),   k = 1, . . . , N,   and u(0) = u0 .   (5.2.1)
tk − tk−1
(Note that in forward Euler, we would have −au(tk−1 ) on the right hand side
of (5.2.1)). Letting ∆tk = tk − tk−1 , (5.2.1) yields

(1 + a∆tk )u(tk ) = u(tk−1 ). (5.2.2)

Starting with k = 1 and the data u(0) = u0 , the solution u(tk ) would, itera-
tively, be computed at the subsequent points: t1 , t2 , . . . , tN = T .
For a uniform partition, where all subintervals have the same length ∆t, and
since for a > 0, 1 + a∆t > 0 (6= 0), (5.2.2) can be written as

u(tk ) = (1 + a∆t)−1 u(tk−1 ), k = 1, 2, . . . , N. (5.2.3)

Iterating we get the Backward or Implicit Euler method for (5.0.1):

u(tk ) = (1 + a∆t)−1 u(tk−1 ) = (1 + a∆t)−2 u(tk−2 ) = . . . = (1 + a∆t)−k u0 .

Remark 5.2. Note that, for the problem (5.0.1), with f (t) ≡ 0, in the
Example 2.1 we just replace λ with −a, and get the forward (explicit) Euler
method:
u(tk ) = (1 − a∆t)k u0 . (5.2.4)
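The two Euler recursions are straightforward to implement; in the following MATLAB sketch the values of a, u0 , T and N are illustrative assumptions, and the exact solution u(t) = u0 e^{−at} is plotted for comparison.

% Forward and backward Euler for u' + a*u = 0, u(0) = u0,
% with a constant a > 0 and a uniform time step dt.
a = 2;  u0 = 1;  T = 3;  N = 30;  dt = T/N;    % illustrative data
uF = u0;  uB = u0;
for k = 1:N
  uF(k+1) = (1 - a*dt)*uF(k);                  % explicit (forward) Euler
  uB(k+1) = uB(k)/(1 + a*dt);                  % implicit (backward) Euler
end
t = (0:N)*dt;
plot(t, uF, 'o-', t, uB, 's-', t, u0*exp(-a*t), 'k--')
legend('forward Euler','backward Euler','exact')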
Example 5.2. Now we introduce the Crank-Nicolson method for the finite
difference approximation of (5.0.1). Here, first we integrate the equation
(5.0.1) over Ik = [tk−1 , tk ] to get
u(tk ) − u(tk−1 ) + a ∫_{t_{k−1}}^{t_k} u(t) dt = ∫_{t_{k−1}}^{t_k} f (t) dt.   (5.2.5)

Then, approximating the integral term by the simple trapezoidal rule we get

u(tk ) − u(tk−1 ) + a (∆tk /2) (u(tk ) + u(tk−1 )) = ∫_{t_{k−1}}^{t_k} f (t) dt.   (5.2.6)

Rearranging the terms yields

(1 + a∆tk /2) u(tk ) = (1 − a∆tk /2) u(tk−1 ) + ∫_{t_{k−1}}^{t_k} f (t) dt,

or, equivalently,

u(tk ) = [(1 − a∆tk /2)/(1 + a∆tk /2)] u(tk−1 ) + [1/(1 + a∆tk /2)] ∫_{t_{k−1}}^{t_k} f (t) dt.

Let us assume a zero source term (f = 0) and a uniform partition, i.e. ∆tk = ∆t for k = 1, 2, . . . , N. Then we have the following Crank-Nicolson method:

u(tk ) = [(1 − a∆t/2)/(1 + a∆t/2)]^k u0 .   (5.2.7)
Example 5.3. Consider the initial value problem:
u̇(t) + au(t) = 0, t > 0, u(0) = 1.

a) Let a = 40, and the time step k = 0.1. Draw the graph of Un :=
U (nk), k = 1, 2, . . . , approximating u using (i) explicit (forward) Euler, (ii)
implicit (Backward) Euler, and (iii) Crank-Nicolson methods.
b) Consider the case a = i, (i2 = −1), having the complex solution u(t) = e−it
with |u(t)| = 1 for all t. Show that this property is preserved in Crank-
Nicolson approximation, (i.e. |Un | = 1 ), but NOT in any of the Euler
approximations.
Solution: a) With a = 40 and k = 0.1 the explicit Euler method gives

Un − Un−1 + 40 × (0.1)Un−1 = 0,  U0 = 1   ⟹   Un = −3Un−1 ,  n = 1, 2, . . . ,  U0 = 1.

Implicit Euler:

Un = [1/(1 + 40 × (0.1))] Un−1 = (1/5) Un−1 ,  n = 1, 2, 3, . . . ,  U0 = 1.

Crank-Nicolson:

Un = ((1 − ½ × 40 × 0.1)/(1 + ½ × 40 × 0.1)) Un−1 = −(1/3) Un−1 ,  n = 1, 2, 3, . . . ,  U0 = 1.

[Figure: graphs of the iterates Un for the three methods: the explicit Euler (E.E.) values oscillate with growing amplitude (factor −3), the implicit Euler (I.E.) values decay monotonically (factor 1/5), and the Crank-Nicolson (C.N.) values decay while oscillating (factor −1/3).]

b) With a = i we get:

Explicit Euler:  |Un | = |1 − (0.1)i| |Un−1 | = √(1 + 0.01) |Un−1 |   =⇒   |Un | ≥ |Un−1 |.

Implicit Euler:  |Un | = |1/(1 + (0.1)i)| |Un−1 | = (1/√(1 + 0.01)) |Un−1 | ≤ |Un−1 |.

Crank-Nicolson:  |Un | = |(1 − ½(0.1)i)/(1 + ½(0.1)i)| |Un−1 | = |Un−1 |.
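As a quick numerical check of part b) (a sketch, not part of the text), one may iterate the three schemes with a = i in Matlab and inspect the moduli of the iterates:

% Moduli of the iterates for a = i and k = 0.1.
a = 1i; k = 0.1; N = 50;
Ue = 1; Ui = 1; Uc = 1;                    % explicit, implicit, Crank-Nicolson
for n = 1:N
    Ue = (1 - a*k)*Ue;                     % explicit Euler
    Ui = Ui/(1 + a*k);                     % implicit Euler
    Uc = ((1 - a*k/2)/(1 + a*k/2))*Uc;     % Crank-Nicolson
end
fprintf('|U_N|: EE %.4f, IE %.4f, CN %.4f\n', abs(Ue), abs(Ui), abs(Uc))
% Expected: EE > 1, IE < 1, CN = 1 (up to rounding).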

5.3 Galerkin finite element methods for IVP


The polynomial approximation procedure introduced in Chapter 2, along
with (2.2.5)-(2.3.5), for the initial value problem (2.1.1), or (5.0.1), formulated
over the whole time interval (0, T ), is referred to as the global Galerkin method.
In this section, first we introduce two versions of the global Galerkin method
and then extend them to partitions of the interval (0, T ) using piecewise
polynomial test (multiplier) and trial (solution) functions. Here, we shall
focus on two simple, low degree polynomial, approximation cases.
• The continuous Galerkin method of degree 1; cG(1). In this case the trial
functions are piecewise linear and continuous while the test functions are
piecewise constant and discontinuous, i.e. unlike the cG(1) for BVP, here
the trial and test functions belong to different polynomial spaces.
• The discontinuous Galerkin method of degree 0; dG(0). Here both the trial
and test functions are chosen to be piecewise constant and discontinuous.

5.3.1 The continuous Galerkin method


Recall the global Galerkin method of degree q, (2.3.1), for the initial value
problem (5.0.1): find U ∈ P^q (0, T ), with U (0) = u0 , such that

∫_0^T (U̇ + aU ) v dt = ∫_0^T f v dt,   ∀v ∈ P^q (0, T ), with v(0) = 0.   (5.3.1)

We also formulate the following alternative: find U ∈ P^q (0, T ) with U (0) = u0 such that

∫_0^T (U̇ + aU ) v dt = ∫_0^T f v dt,   ∀v ∈ P^{q−1} (0, T ).   (5.3.2)

Note that in (5.3.1) we have that v ∈ span{t, t2 , . . . , tq }, whereas in (5.3.2)


the test functions v ∈ span{1, t, t2 , . . . , tq−1 }. Hence, the difference between
these two formulations lies in the choice of their test function spaces. We
shall focus on (5.3.2), due to the fact that, actually, this method yields a
more accurate approximation of degree q than the original method (5.3.1).
The following example illustrates this phenomenon.
Example 5.4. Consider the IVP

 u̇(t) + u(t) = 0, 0 ≤ t ≤ 1,
(5.3.3)
 u(0) = 1.

The exact solution is given by u(t) = e^{−t}. The continuous piecewise linear
approximation, with the ansatz U (t) = 1 + ξ1 t in

∫_0^1 (U̇ (t) + U (t)) v(t) dt = 0,   (5.3.4)

and v(t) = t (i.e. (5.3.1)), yields

∫_0^1 (ξ1 + 1 + ξ1 t) t dt = 0  =⇒  [(ξ1 + 1) t²/2 + ξ1 t³/3]_0^1 = 0  =⇒  ξ1 = −3/5.

Hence, in this case the approximate solution, which we denote by U1 , is given
by U1 (t) = 1 − (3/5)t. Whereas (5.3.2) for this problem means v(t) = 1 and gives

∫_0^1 (ξ1 + 1 + ξ1 t) dt = 0  =⇒  [(ξ1 + 1) t + ξ1 t²/2]_0^1 = 0  =⇒  ξ1 = −2/3.

In this case the approximate solution, which we denote by U2 , is given by
U2 (t) = 1 − (2/3)t. As we can see in the figure below, U2 is a better approximation
of e^{−t} than U1 .


Figure 5.1: Two continuous linear Galerkin approximations of e−t .
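The two coefficients can also be verified numerically; the following Matlab sketch (using integral and fzero, an implementation choice made only for this illustration) reproduces ξ1 = −3/5 and ξ1 = −2/3:

% Residual of the ansatz U(t) = 1 + xi*t is r(t) = U'(t) + U(t) = xi + 1 + xi*t.
% (5.3.1) with v(t) = t: the integral of r(t)*t over (0,1) must vanish.
xi1 = fzero(@(xi) integral(@(t) (xi + 1 + xi*t).*t, 0, 1), 0);   % -> -0.6000 = -3/5
% (5.3.2) with v(t) = 1: the integral of r(t) over (0,1) must vanish.
xi2 = fzero(@(xi) integral(@(t) (xi + 1 + xi*t), 0, 1), 0);      % -> -0.6667 = -2/3
fprintf('xi_1 = %.4f, xi_2 = %.4f\n', xi1, xi2)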

Before generalizing (5.3.2) to piecewise polynomial approximation, which
is the cG(q) method, we consider a canonical example of (5.3.2).

Example 5.5. Consider (5.3.2) with q = 1; then choosing v ≡ 1 yields

∫_0^T (U̇ + aU ) v dt = ∫_0^T (U̇ + aU ) dt = U (T ) − U (0) + ∫_0^T aU (t) dt.   (5.3.5)

Here U (t), as a linear function, is given by

U (t) = U (0) (T − t)/T + U (T ) t/T.   (5.3.6)

Inserting (5.3.6) into (5.3.5) we get

U (T ) − U (0) + a ∫_0^T ( U (0) (T − t)/T + U (T ) t/T ) dt = ∫_0^T f dt,   (5.3.7)

which is an equation for the unknown quantity U (T ). Thus, using (5.3.6) with
a given U (0), we get the linear approximation U (t) for all t ∈ [0, T ]. Below
we generalize this example to piecewise linear approximation and demonstrate
the iteration procedure for the cG(1) scheme.
The cG(1) Algorithm
For a partition Tk of the interval [0, T ] into subintervals Ik = (tk−1 , tk ], we
perform the following steps:

(1) Given U (0) = U0 = u0 and a source term f , apply (5.3.7) to the first
subinterval (0, t1 ] and compute U (t1 ). Then, using (5.3.6) one gets
U (t), ∀t ∈ [0, t1 ].

(2) Assume that we have computed Uk−1 := U (tk−1 ). Hence, Uk−1 and
f are now considered as data. We then consider the problem on the
subinterval Ik = (tk−1 , tk ], and compute the unknown Uk := U (tk )
from the local version of (5.3.7),

Uk − Uk−1 + a ∫_{t_{k−1}}^{t_k} ( Uk−1 (tk − t)/(tk − tk−1 ) + Uk (t − tk−1 )/(tk − tk−1 ) ) dt = ∫_{t_{k−1}}^{t_k} f dt.

Having both Uk−1 and Uk , by linearity, we get U (t) for t ∈ Ik . To get the
continuous piecewise linear approximation in the whole interval [0, tN ],
step (2) is performed in successive subintervals Ik , k = 2, . . . , N .
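For a constant a the local equation above reduces to (1 + a kn /2) Uk = (1 − a kn /2) Uk−1 + ∫_{Ik} f dt, and a minimal Matlab sketch of the resulting marching scheme might read as follows (the source integral is evaluated here with Matlab's integral, an implementation choice; f is assumed to be a vectorized function handle):

function U = cg1_ivp(a, f, u0, t)
% cG(1) for u' + a*u = f on the partition t(1) = 0 < t(2) < ... < t(end) = T,
% assuming a constant a. U(k) approximates u(t(k)).
U = zeros(size(t)); U(1) = u0;
for k = 2:length(t)
    kn = t(k) - t(k-1);                  % local step size
    Fk = integral(f, t(k-1), t(k));      % integral of f over I_k
    U(k) = ((1 - a*kn/2)*U(k-1) + Fk)/(1 + a*kn/2);
end
end

For f ≡ 0 this reproduces the Crank-Nicolson iterates of Remark 5.3 below.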

The cG(q) method


The global continuous Galerkin method of degree q, formulated on a partition
Tk : 0 = t0 < t1 < . . . < tN = T of the interval (0, T ), is referred to as
the cG(q) method and reads: find U (t) ∈ Vk^(q) , such that U (0) = u0 , and

∫_0^{tN} (U̇ + aU ) w dt = ∫_0^{tN} f w dt,   ∀w ∈ Wk^(q−1) ,   (5.3.8)

where

Vk^(q) = {v : v continuous, piecewise polynomial of degree ≤ q on Tk },
Wk^(q−1) = {w : w discontinuous, piecewise polynomial, deg w ≤ q − 1 on Tk }.

So, the difference between the global continuous Galerkin method and cG(q)
is that now we have piecewise polynomials on a partition of [0, T ] rather than
global polynomials in the whole interval [0, T ].

5.3.2 The discontinuous Galerkin method


We start by presenting the global discontinuous Galerkin method of degree q:
find U (t) ∈ P^q (0, T ) such that

∫_0^T (U̇ + aU ) v dt + (U (0) − u(0)) v(0) = ∫_0^T f v dt,   ∀v ∈ P^q (0, T ).   (5.3.9)

This approach gives up the requirement that U (t) satisfies the initial condi-
tion. Instead, the initial condition is imposed in a variational sense by the
term (U (0) − u(0))v(0). As in the cG(q) case, to derive the discontinuous
Galerkin method of degree q, the dG(q) scheme, the above strategy can be
formulated on the subintervals of a partition Tk . To this end, we recall the
notation for the right/left limits, vn^± = lim_{s→0+} v(tn ± s), and the corresponding
jump term [vn ] = vn^+ − vn^− at the time level t = tn . Then, the dG(q) method for
(5.0.1) reads as follows: for n = 1, . . . , N , find U (t) ∈ P^q (tn−1 , tn ) such that

∫_{t_{n−1}}^{t_n} (U̇ + aU ) v dt + U_{n−1}^+ v_{n−1}^+ = ∫_{t_{n−1}}^{t_n} f v dt + U_{n−1}^− v_{n−1}^+ ,   ∀v ∈ P^q (tn−1 , tn ).   (5.3.10)


Figure 5.2: The jump [vn ] and the right and left limits vn±

Example 5.6 (dG(0)). Let q = 0; then v is constant, generated by the single
basis function v ≡ 1. Further, we have U (t) = Un = U_{n−1}^+ = U_n^− on
In = (tn−1 , tn ], and U̇ ≡ 0. Thus, for q = 0, (5.3.10) yields the following
dG(0) formulation: for n = 1, . . . , N , find piecewise constants Un such that

∫_{t_{n−1}}^{t_n} a Un dt + Un = ∫_{t_{n−1}}^{t_n} f dt + Un−1 .   (5.3.11)
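For constant a, (5.3.11) gives (1 + a kn ) Un = Un−1 + ∫_{In} f dt, so a minimal Matlab sketch of the dG(0) marching scheme could be (again evaluating the source integral with integral, an implementation choice; f is assumed to be a vectorized function handle):

function U = dg0_ivp(a, f, u0, t)
% dG(0) for u' + a*u = f on the partition t(1) = 0 < ... < t(end) = T,
% assuming a constant a; U(n) is the constant value on I_n (and U(1) = u0).
U = zeros(size(t)); U(1) = u0;
for n = 2:length(t)
    kn = t(n) - t(n-1);
    Fn = integral(f, t(n-1), t(n));      % integral of f over I_n
    U(n) = (U(n-1) + Fn)/(1 + a*kn);     % (1 + a*kn) U_n = U_{n-1} + F_n
end
end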

Summing over n in (5.3.10), we get the following general dG(q) formulation:
find U (t) ∈ Wk^(q) , with U_0^− = u0 , such that

Σ_{n=1}^N ∫_{t_{n−1}}^{t_n} (U̇ + aU ) w dt + Σ_{n=1}^N [Un−1 ] w_{n−1}^+ = ∫_0^{tN} f w dt,   ∀w ∈ Wk^(q) .   (5.3.12)

Remark 5.3. One can show that cG(1) converges faster than dG(0), whereas
dG(0) has better stability properties than cG(1). More specifically, in the
parabolic case when a > 0 is constant and f ≡ 0 we can easily verify
(see Problem 5.7 at the end of this chapter) that the dG(0) solution Un
corresponds to the backward Euler scheme

Un = (1/(1 + ak))^n u0 ,

and the cG(1) solution Ũn is given by the Crank-Nicolson scheme:

Ũn = ((1 − ½ak)/(1 + ½ak))^n u0 ,

where k is the constant time step.
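A small numerical experiment (a sketch based on the two closed-form iterates above) illustrates the different convergence rates for u̇ + u = 0, u(0) = 1:

% Errors at T = 1 for u' + u = 0, u(0) = 1, i.e. u(T) = exp(-1).
a = 1; T = 1; uT = exp(-1);
for N = [10 20 40 80]
    k = T/N;
    err_dg0 = abs((1/(1 + a*k))^N - uT);               % dG(0): O(k)
    err_cg1 = abs(((1 - a*k/2)/(1 + a*k/2))^N - uT);   % cG(1): O(k^2)
    fprintf('N = %3d: dG(0) error %.2e, cG(1) error %.2e\n', N, err_dg0, err_cg1)
end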

5.4 Exercises
Problem 5.1. (a) Derive the stiffness matrix and load vector in piecewise
polynomial (of degree q) approximation for the ODE of population dynamics:

u̇(t) = λu(t), for 0 < t ≤ 1, u(0) = u0 .

(b) Let λ = 1 and u0 = 1 and determine the approximate solution U (t), for
q = 1 and q = 2.

Problem 5.2. Consider the initial value problem

u̇(t) + a(t)u(t) = f (t), 0 < t ≤ T, u(0) = u0 .

Show that if a(t) = 1 and f (t) = 2 sin(t), then

u(t) = sin(t) − cos(t) = √2 sin(t − π/4).

Problem 5.3. Compute the solution for

u̇(t) + a(t)u(t) = t2 , 0 < t ≤ T, u(0) = 1,

corresponding to
(a) a(t) = 4, (b) a(t) = −t.

Problem 5.4. Compute the cG(1) approximation for the differential equa-
tions in the above problem. In each case, determine the condition on the step
size that guarantees that U exists.

Problem 5.5. Without using the solution Theorem 5.1, prove that if a(t) ≥ 0
then, a continuously differentiable solution of (5.0.1) is unique.

Problem 5.6. Consider the initial value problem

u̇(t) + a(t)u(t) = f (t), 0 < t ≤ T, u(0) = u0 .

Show that for a(t) > 0, and for N = 1, 2, . . . , the piecewise linear approxi-
mate solution U for this problem satisfies the error estimate

|u(tN ) − UN | ≤ max_{[0,tN ]} |k (U̇ + aU − f )|,   where k = kn for tn−1 < t ≤ tn .

Problem 5.7. Consider the initial value problem

u̇(t) + au(t) = 0, t > 0, u(0) = u0 , (a = constant).

Assume a constant time step k and verify the iterative formulas for the dG(0)
and cG(1) approximations U and Ũ , respectively, i.e.

Un = (1/(1 + ak))^n u0 ,   Ũn = ((1 − ak/2)/(1 + ak/2))^n u0 .

Problem 5.8. Assume that

∫_{Ij} f (s) ds = 0,   for j = 1, 2, . . . ,

where Ij = (tj−1 , tj ), tj = jk, with k a positive constant. Prove that if
a(t) ≥ 0, then the solution of (5.0.1) satisfies

|u(t)| ≤ e^{−A(t)} |u0 | + max_{0≤s≤t} |k f (s)|.

Problem 5.9. Formulate a continuous Galerkin method using piecewise poly-


nomials based on the original global Galerkin method.

Problem 5.10. Formulate the dG(1) method for the differential equations
specified in Problem 5.3.

Problem 5.11. Write out the a priori error estimates for the equations
specified in Problem 5.3.

Problem 5.12. Use the a priori error bound to show that the residual of the
dG(0) approximation satisfies R(U ) = O(1).

Problem 5.13. Prove the following stability estimate for the dG(0) method
described by (5.3.12):

|UN |² + Σ_{n=0}^{N−1} |[Un ]|² ≤ |u0 |².
Chapter 6

Initial Boundary Value


Problems in 1d

A large class of phenomena in nature, science and technology, such as seasonal
periods, heat distribution and wave propagation, vary both in space and
time. To describe these phenomena in a physical domain requires the knowledge
of their initial status, as well as information on the boundary of the domain, or
asymptotic behavior in the case of unbounded domains. Problems that model
such properties are called initial boundary value problems. In this chapter we shall
study the two most important equations of this type: namely, the heat equation
and the wave equation in one space dimension. We also address (briefly) the
one-space dimensional time-dependent convection-diffusion problem.

6.1 Heat equation in 1d


In this section we focus on some basic L2-stability and finite element error estimates
for the time-dependent, one-space dimensional heat equation. Here,
to illustrate, we consider an example of an initial boundary value problem
(IBVP) for the one-dimensional heat flux, viz

u̇ − u′′ = f (x, t),   0 < x < 1, t > 0,
u(x, 0) = u0 (x),   0 < x < 1,            (6.1.1)
u(0, t) = u_x (1, t) = 0,   t > 0.

Figure 6.1: A decreasing temperature profile with data u(0, t) = u(1, t) = 0.

Example 6.1. Describe the physical meaning of the functions and parame-
ters in the problem (6.1.1), when f = 20 − u.

Answer: The problem is an example of heat conduction where


u(x, t), means the temperature at the point x and time t.
u(x, 0) = u0 (x), is the initial temperature at time t = 0.
u(0, t) = 0, means fixed temperature at the boundary point x = 0.
u′ (1, t) = 0, means isolated boundary at the boundary point x = 1
(where no heat flux occurs).
f = 20 − u, is the heat source, in this case a control system to force
u → 20.

Remark 6.1. Observe that it is possible to generalize (6.1.1) to a u dependent


source term f , e.g. as in the above example where f = 20 − u.

6.1.1 Stability estimates


We shall derive a general stability estimate for the mixed (Dirichlet at one
end point and Neumann in the other) initial boundary value problem above,

prove a one-dimensional version of the Poincare inequality and finally derive


some stability estimates in the homogeneous (f ≡ 0) case.

Theorem 6.1. The IBVP (6.1.1) satisfies the stability estimates

||u(·, t)|| ≤ ||u0 || + ∫_0^t ||f (·, s)|| ds,   (6.1.2)

||u′(·, t)||² ≤ ||u′0 ||² + ∫_0^t ||f (·, s)||² ds,   (6.1.3)

where u0 and u′0 are assumed to be L²(I) functions, with I = (0, 1). Note
further that, here, || · (·, t)|| is the time dependent L² norm:

||w(·, s)|| := ||w(·, s)||_{L²(0,1)} = ( ∫_0^1 |w(x, s)|² dx )^{1/2} .

Proof. Multiply the equation in (6.1.1) by u and integrate over (0, 1) to get

∫_0^1 u̇ u dx − ∫_0^1 u′′ u dx = ∫_0^1 f u dx.   (6.1.4)

Note that u̇ u = ½ d/dt (u²). Hence, integration by parts in the second integral
yields

½ d/dt ∫_0^1 u² dx + ∫_0^1 (u′)² dx − u′(1, t)u(1, t) + u′(0, t)u(0, t) = ∫_0^1 f u dx.

Then, using the boundary conditions and Cauchy-Schwarz' inequality yields

||u|| d/dt ||u|| + ||u′||² = ∫_0^1 f u dx ≤ ||f || ||u||.   (6.1.5)

Now since ||u′||² ≥ 0, consequently ||u|| d/dt ||u|| ≤ ||f || ||u||, and thus

d/dt ||u|| ≤ ||f ||.   (6.1.6)

Relabeling the variable from t to s, and integrating over time, we end up with

||u(·, t)|| − ||u(·, 0)|| ≤ ∫_0^t ||f (·, s)|| ds,   (6.1.7)

which yields the first assertion (6.1.2) of the theorem. To prove (6.1.3) we
multiply the differential equation by u̇, integrate over (0, 1), and use integration
by parts, so that we have on the left hand side

∫_0^1 (u̇)² dx − ∫_0^1 u′′ u̇ dx = ||u̇||² + ∫_0^1 u′ u̇′ dx − u′(1, t)u̇(1, t) + u′(0, t)u̇(0, t).

Then, since u(0, t) = 0 =⇒ u̇(0, t) = 0, we have

||u̇||² + ½ d/dt ||u′||² = ∫_0^1 f u̇ dx ≤ ||f || ||u̇|| ≤ ½ ( ||f ||² + ||u̇||² ),   (6.1.8)

where in the last step we used Cauchy-Schwarz' inequality. Hence,

½ ||u̇||² + ½ d/dt ||u′||² ≤ ½ ||f ||²,   (6.1.9)

and therefore, evidently,

d/dt ||u′||² ≤ ||f ||².   (6.1.10)

Finally, integrating over (0, t), we get the second assertion of the theorem:

||u′(·, t)||² − ||u′(·, 0)||² ≤ ∫_0^t ||f (·, s)||² ds,   (6.1.11)

and the proof is complete.


To proceed we give a proof of the Poincare inequality (in 1d) which is one
of the most useful inequalities in PDE and analysis.
Theorem 6.2 (The Poincare inequality in 1d). Assume that u and u′
are square integrable functions on an interval [0, L]. Then, there exists a
constant CL , independent of u but dependent on L, such that if u(0) = 0,

∫_0^L u(x)² dx ≤ CL ∫_0^L u′(x)² dx,   i.e.   ||u|| ≤ √CL ||u′||.   (6.1.12)

Proof. For x ∈ [0, L] we may write

u(x) = ∫_0^x u′(y) dy ≤ ∫_0^x |u′(y)| dy = ∫_0^x |u′(y)| · 1 dy
     ≤ ( ∫_0^x |u′(y)|² dy )^{1/2} ( ∫_0^x 1² dy )^{1/2}
     ≤ ( ∫_0^L |u′(y)|² dy )^{1/2} ( ∫_0^L 1² dy )^{1/2} = √L ( ∫_0^L |u′(y)|² dy )^{1/2} ,

where in the last step we used the Cauchy-Schwarz inequality. Thus, squaring
both sides and integrating over x, we get

∫_0^L u(x)² dx ≤ ∫_0^L L ( ∫_0^L |u′(y)|² dy ) dx = L² ∫_0^L |u′(y)|² dy,   (6.1.13)

and hence

||u|| ≤ L ||u′||.   (6.1.14)

Remark 6.2. The constant CL = L² indicates that the Poincare inequality
is valid for arbitrary bounded intervals, but not for unbounded intervals. If
u(0) ≠ 0 and, for simplicity, L = 1, then by a similar argument as above we
get the following version of the one-dimensional Poincare inequality:

||u||²_{L²(0,1)} ≤ 2 ( u(0)² + ||u′||²_{L²(0,1)} ).   (6.1.15)
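As an illustration (a sketch, not part of the text), one can check (6.1.12) numerically for a sample function with u(0) = 0 on [0, L], using the constant CL = L² from the proof:

% Numerical check of the Poincare inequality for u(x) = sin(pi*x/(2*L)), u(0) = 0.
L = 2;
u  = @(x) sin(pi*x/(2*L));
du = @(x) (pi/(2*L))*cos(pi*x/(2*L));
lhs = integral(@(x) u(x).^2, 0, L);           % ||u||^2
rhs = L^2*integral(@(x) du(x).^2, 0, L);      % C_L ||u'||^2 with C_L = L^2
fprintf('||u||^2 = %.4f <= L^2 ||u''||^2 = %.4f\n', lhs, rhs)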

Theorem 6.3 (Stability of the homogeneous heat equation). The initial
boundary value problem for the heat equation

u̇ − u′′ = 0,   0 < x < 1, t > 0,
u(0, t) = u_x (1, t) = 0,   t > 0,            (6.1.16)
u(x, 0) = u0 (x),   0 < x < 1,

satisfies the following stability estimates:

a) d/dt ||u||² + 2||u′||² = 0,      b) ||u(·, t)|| ≤ e^{−t} ||u0 ||.
dt
Proof. a) Multiply the equation by u and integrate over x ∈ (0, 1) to get

0 = ∫_0^1 (u̇ − u′′) u dx = ∫_0^1 u̇ u dx + ∫_0^1 (u′)² dx − u′(1, t)u(1, t) + u′(0, t)u(0, t),

where we used integration by parts. Using the boundary data we then have

½ d/dt ∫_0^1 u² dx + ∫_0^1 (u′)² dx = ½ d/dt ||u||² + ||u′||² = 0.

This gives the proof of a). As for the proof of b), using a) and the Poincare
inequality with L = 1, i.e. ||u|| ≤ ||u′||, we get

d/dt ||u||² + 2||u||² ≤ 0.   (6.1.17)

Multiplying both sides of (6.1.17) by the integrating factor e^{2t} yields

d/dt ( ||u||² e^{2t} ) = ( d/dt ||u||² + 2||u||² ) e^{2t} ≤ 0.   (6.1.18)

We replace t by s and integrate with respect to s over (0, t), to obtain

∫_0^t d/ds ( ||u||² e^{2s} ) ds = ||u(·, t)||² e^{2t} − ||u(·, 0)||² ≤ 0.   (6.1.19)

This yields

||u(·, t)||² ≤ e^{−2t} ||u0 ||²   =⇒   ||u(·, t)|| ≤ e^{−t} ||u0 ||,   (6.1.20)

and completes the proof.

6.1.2 FEM for the heat equation


We consider the one-dimensional heat equation with Dirichlet boundary data:

u̇ − u′′ = f,   0 < x < 1, t > 0,
u(0, t) = u(1, t) = 0,   t > 0,            (6.1.21)
u(x, 0) = u0 (x),   0 < x < 1.

The variational formulation for this problem reads as follows: for every time
interval In = (tn−1 , tn ], find u(x, t), x ∈ (0, 1), t ∈ In , such that

∫_{In} ∫_0^1 (u̇ v + u′ v′) dx dt = ∫_{In} ∫_0^1 f v dx dt,   ∀v : v(0, t) = v(1, t) = 0.   (VF)

A piecewise linear Galerkin finite element method, cG(1)−cG(1), is then
formulated as follows: for each time interval In := (tn−1 , tn ], with tn − tn−1 = kn , let

U (x, t) = Un−1 (x)Ψn−1 (t) + Un (x)Ψn (t), (6.1.22)



where

Ψn (t) = (t − tn−1 )/kn ,   Ψn−1 (t) = (tn − t)/kn ,   (6.1.23)
and

Uñ (x) = Uñ,1 ϕ1 (x) + Uñ,2 ϕ2 (x) + . . . + Uñ,m ϕm (x), ñ = n − 1, n (6.1.24)

with ϕj being the usual continuous, piecewise linear finite element basis
functions (hat-functions) corresponding to a partition of Ω = (0, 1), with
0 = x0 < · · · < xℓ < xℓ+1 < · · · < xm+1 = 1, and ϕj (xi ) := δij . Now the
Galerkin method (FEM) is to determine the unknown coefficients Un,ℓ in the
above representation for U (U is a continuous, piecewise linear function both
in space and time variables) that satisfies the following discrete variational
formulation: Find U (x, t) given by (6.1.22) such that
Z Z 1 Z Z 1

(U̇ ϕi + U ϕ′i ) dxdt = f ϕi dxdt, i = 1, 2, . . . , m. (6.1.25)
In 0 In 0

Note that, on In = (tn−1 , tn ] and with Un (x) := U (x, tn ) and Un−1 (x) :=
U (x, tn−1 ),

U̇ (x, t) = Un−1 (x)Ψ̇n−1 (t) + Un (x)Ψ̇n (t) = (Un − Un−1 )/kn .   (6.1.26)

Further, differentiating (6.1.22) with respect to x, we have

U′(x, t) = U′_{n−1}(x)Ψn−1 (t) + U′_n (x)Ψn (t).   (6.1.27)

Inserting (6.1.26) and (6.1.27) into (6.1.25), and using the identities ∫_{In} dt = kn
and ∫_{In} Ψn dt = ∫_{In} Ψn−1 dt = kn /2, we get
∫_0^1 Un ϕi dx − ∫_0^1 Un−1 ϕi dx + ( ∫_{In} Ψn−1 dt ) ∫_0^1 U′_{n−1} ϕ′i dx
   + ( ∫_{In} Ψn dt ) ∫_0^1 U′_n ϕ′i dx = ∫_{In} ∫_0^1 f ϕi dx dt.   (6.1.28)

Here the first two terms are the i-th rows of M·Un and M·Un−1 , the two time integrals
both equal kn /2, the remaining space integrals are the i-th rows of S·Un−1 and S·Un ,
and the right hand side is the i-th component Fn,i of the vector Fn .
[Figure: the space-time discretization: the temporal hat functions ψn (t) and ψn+1 (t)
on the levels tn−1 , tn , tn+1 , the spatial hat function ϕi (x) at the nodes xi−1 , xi , xi+1 ,
and the corresponding spatial functions Un (x) and Un+1 (x).]

This can be written in compact form as the Crank-Nicolson system

( M + (kn /2) S ) Un = ( M − (kn /2) S ) Un−1 + Fn ,   (CNS)

with the solution Un given by the data Un−1 and Fn , viz

Un = ( M + (kn /2) S )^{−1} ( M − (kn /2) S ) Un−1 + ( M + (kn /2) S )^{−1} Fn =: B^{−1} A Un−1 + B^{−1} Fn ,   (6.1.29)

where M and S (computed below) are known as the mass matrix and stiffness
matrix, respectively, and

Un = (Un,1 , Un,2 , . . . , Un,m )^T ,   Fn = (Fn,1 , Fn,2 , . . . , Fn,m )^T ,   Fn,i = ∫_{In} ∫_0^1 f ϕi dx dt.   (6.1.30)

Thus, given the source term f we can determine the vector Fn and then,
for each n = 1, . . . , N , given the vector Un−1 (the initial value is given by
U0,j := u0 (xj )), we may use (CNS) to compute Un,ℓ , ℓ = 1, 2, . . . , m (the m
nodal values of U at the nodes xj , at the time level tn ).
We now return to the computation of the matrix entries for M and S, for
a uniform partition (all subintervals are of the same length) of the interval
I = (0, 1). Note that differentiating (6.1.24) with respect to x, yields

Un′ (x) = Un,1 ϕ′1 (x) + Un,2 ϕ′2 (x) + . . . + Un,m ϕ′m (x). (6.1.31)

Hence, for i = 1, . . . , m, the rows in the system of equations are given by

∫_0^1 U′_n ϕ′_i = ( ∫_0^1 ϕ′_i ϕ′_1 ) Un,1 + ( ∫_0^1 ϕ′_i ϕ′_2 ) Un,2 + . . . + ( ∫_0^1 ϕ′_i ϕ′_m ) Un,m ,

which can be written in matrix form as

S Un = [ ∫ϕ′_1ϕ′_1  ∫ϕ′_1ϕ′_2  . . .  ∫ϕ′_1ϕ′_m ] [ Un,1 ]
       [ ∫ϕ′_2ϕ′_1  ∫ϕ′_2ϕ′_2  . . .  ∫ϕ′_2ϕ′_m ] [ Un,2 ]     (6.1.32)
       [   . . .      . . .    . . .    . . .   ] [  ...  ]
       [ ∫ϕ′_mϕ′_1  ∫ϕ′_mϕ′_2  . . .  ∫ϕ′_mϕ′_m ] [ Un,m ]

where all integrals are taken over (0, 1).

Thus, S is just the stiffness matrix A^{unif} computed in Chapter 2:

S = (1/h) [  2  −1   0   0  . . .   0 ]
          [ −1   2  −1   0  . . .   0 ]
          [        . . .              ]   (6.1.33)
          [  0  . . .   −1   2  −1    ]
          [  0  . . .  . . .  −1   2  ]

A non-uniform partition yields a matrix of the form A in Chapter 2.
Similarly, recalling the notation for the mass matrix term [M Un ]_i in (6.1.28), we have

[M Un ]_i = ∫_0^1 Un ϕi ,   i = 1, . . . , m.   (6.1.34)

Hence, to compute the mass matrix M one should drop all derivatives from
the general form of the matrix for S given by (6.1.32). In other words unlike
the form [S Un ]_i = ∫_0^1 U′_n ϕ′_i , the product M Un does not involve any derivatives,
neither in Un nor in ϕi . Consequently

M = [ ∫ϕ_1ϕ_1  ∫ϕ_1ϕ_2  . . .  ∫ϕ_1ϕ_m ]
    [ ∫ϕ_2ϕ_1  ∫ϕ_2ϕ_2  . . .  ∫ϕ_2ϕ_m ]     (6.1.35)
    [    . . .                          ]
    [ ∫ϕ_mϕ_1  ∫ϕ_mϕ_2  . . .  ∫ϕ_mϕ_m ]

where all integrals are taken over (0, 1).
For a uniform partition, we have computed this mass matrix in Chapter 4:

M = h [ 2/3  1/6   0    0   . . .   0  ]          [ 4  1  0  0  . . .  0 ]
      [ 1/6  2/3  1/6   0   . . .   0  ]          [ 1  4  1  0  . . .  0 ]
      [           . . .                ] = (h/6)  [         . . .        ]
      [  0   . . .     1/6  2/3  1/6   ]          [ 0  . . .  1  4  1    ]
      [  0   . . .      0   1/6  2/3   ]          [ 0  . . .  0  1  4    ].
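Putting the pieces together, a minimal Matlab sketch of the cG(1)−cG(1) time stepping (CNS) on a uniform mesh, with homogeneous Dirichlet data, might look as follows; the load vector Fn is approximated here by the trapezoidal rule in space and the midpoint rule in time, which are implementation choices rather than part of the method, and f and u0 are assumed to be vectorized function handles:

function U = heat_cg1cg1(f, u0, m, N, T)
% Solves u_t - u_xx = f on (0,1), u(0,t) = u(1,t) = 0, by the scheme (CNS).
% m interior nodes, uniform mesh size h = 1/(m+1); N uniform time steps on (0,T).
h = 1/(m+1); k = T/N; x = (1:m)'*h;
e = ones(m,1);
S = spdiags([-e 2*e -e], -1:1, m, m)/h;      % stiffness matrix, cf. (6.1.33)
M = spdiags([ e 4*e  e], -1:1, m, m)*h/6;    % mass matrix, cf. (6.1.35)
B = M + (k/2)*S; A = M - (k/2)*S;
U = u0(x);                                   % nodal values of the initial data
for n = 1:N
    tmid = (n - 1/2)*k;                      % midpoint rule in time
    Fn = k*h*f(x, tmid);                     % trapezoidal rule in space
    U = B\(A*U + Fn);                        % one step of (6.1.29)
end
end

For instance, U = heat_cg1cg1(@(x,t) zeros(size(x)), @(x) sin(pi*x), 49, 100, 0.1) approximates the decaying mode sin(πx)e^{−π²t}.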

6.1.3 Exercises
Problem 6.1. Derive a system of equations, as (6.1.29), for cG(1) − dG(0):
with the discontinuous Galerkin approximation dG(0) in time with piecewise
constants.
Problem 6.2. Let || · || denote the L²(0, 1)-norm. Consider the problem

−u′′ = f,   0 < x < 1,
u′(0) = v0 ,   u(1) = 0.

a) Show that |u(0)| ≤ ku′ k and kuk ≤ ku′ k.


b) Use a) to show that ku′ k ≤ kf k + |v0 |.
Problem 6.3. Assume that u = u(x) satisfies
∫_0^1 u′ v′ dx = ∫_0^1 f v dx,   for all v(x) such that v(0) = 0.   (6.1.36)

Show that −u′′ = f for 0 < x < 1 and u′ (1) = 0.


Hint: See previous chapters.

Problem 6.4 (Generalized Poincare). Show that for a continuously differentiable
function v defined on (0, 1) we have

||v||² ≤ v(0)² + v(1)² + ||v′||².

Hint: Use partial integration for ∫_0^{1/2} v(x)² dx and ∫_{1/2}^1 v(x)² dx and note that
(x − 1/2) has the derivative 1.
Problem 6.5. Let || · || denote the L²(0, 1)-norm. Consider the following
heat equation:

u̇ − u′′ = 0,   0 < x < 1, t > 0,
u(0, t) = u_x (1, t) = 0,   t > 0,
u(x, 0) = u0 (x),   0 < x < 1.

a) Show that the norms ||u(·, t)|| and ||u′(·, t)|| are non-increasing in time.
Here ||u|| = ( ∫_0^1 u(x)² dx )^{1/2} .
b) Show that ||u′ (·, t)|| → 0, as t → ∞.
c) Give a physical interpretation for a) and b).
Problem 6.6. Consider the inhomogeneous problem

u̇ − εu′′ = f,   0 < x < 1, t > 0,
u(0, t) = u_x (1, t) = 0,   t > 0,
u(x, 0) = u0 (x),   0 < x < 1,

where f = f (x, t).
a) Show the stability estimate

||u(·, t)|| ≤ ∫_0^t ||f (·, s)|| ds.

b) Show that for the corresponding stationary (u̇ ≡ 0) problem we have

||u′|| ≤ (1/ε) ||f ||.
Problem 6.7. Give an a priori error estimate for the following problem:
(au′′ )′′ = f, 0 < x < 1, u(0) = u′ (0) = u(1) = u′ (1) = 0,
where a(x) > 0 on the interval I = (0, 1).

6.2 The wave equation in 1d


The theoretical study of the wave equation has some basic differences compared
to that of the heat equation. Some important aspects in this regard
are given in the extended version of these notes. In our study here, the finite
element procedure for the wave equation is, in the main, the same as that for
the heat equation outlined in the previous section. We start with an example
of the homogeneous wave equation, as an initial-boundary value problem:

ü − u′′ = 0,   0 < x < 1, t > 0,           (DE)
u(0, t) = 0, u(1, t) = 0,   t > 0,         (BC)            (6.2.1)
u(x, 0) = u0 (x), u̇(x, 0) = v0 (x),   0 < x < 1.   (IC)

Theorem 6.4 (conservation of energy). For the equation (6.2.1) we have

½ ||u̇||² + ½ ||u′||² = ½ ||v0 ||² + ½ ||u′0 ||² = constant,   (6.2.2)

where

||w||² = ||w(·, t)||² = ∫_0^1 |w(x, t)|² dx.   (6.2.3)

Proof. We multiply the equation by u̇ and integrate over I = (0, 1) to get

∫_0^1 ü u̇ dx − ∫_0^1 u′′ u̇ dx = 0.   (6.2.4)

Using integration by parts and the boundary data we obtain

∫_0^1 ½ ∂/∂t (u̇²) dx + ∫_0^1 u′ (u̇)′ dx − [u′(x, t)u̇(x, t)]_0^1
   = ∫_0^1 ½ ∂/∂t (u̇²) dx + ∫_0^1 ½ ∂/∂t ((u′)²) dx            (6.2.5)
   = ½ d/dt ( ||u̇||² + ||u′||² ) = 0.

Thus, we have that the quantity

½ ||u̇||² + ½ ||u′||² = constant, independent of t.   (6.2.6)

Therefore the total energy is conserved. We recall that ½||u̇||² is the kinetic
energy, and ½||u′||² is the potential (elastic) energy.

Problem 6.8. Show that k(u̇)′ k2 + ku′′ k2 = constant, independent of t.


Hint: Differentiate the equation with respect to x and multiply by u̇, . . . .
Alternatively: Multiply (DE): ü − u′′ = 0, by −(u̇)′′ and integrate over I.

Problem 6.9. Derive a total conservation of energy relation using the Robin
type boundary condition: u′ + u = 0.

6.2.1 Wave equation as a system of PDEs


We rewrite the wave equation as a system of differential equations. To this
end, we consider solving

ü − u′′ = 0,   0 < x < 1, t > 0,
u(0, t) = 0, u′(1, t) = g(t),   t > 0,            (6.2.7)
u(x, 0) = u0 (x), u̇(x, 0) = v0 (x),   0 < x < 1,

where we let u̇ = v, and reformulate the problem as

u̇ − v = 0,   (Convection)
v̇ − u′′ = 0,   (Diffusion).            (6.2.8)

We may now set w = (u, v)^t and rewrite the system (6.2.8) as ẇ + Aw = 0:

ẇ + Aw = [ u̇ ]  +  [    0      −1 ] [ u ]  =  [ 0 ]
          [ v̇ ]     [ −∂²/∂x²    0 ] [ v ]     [ 0 ] .   (6.2.9)

In other words, the matrix differential operator is given by

A = [    0      −1 ]
    [ −∂²/∂x²    0 ] .

6.2.2 The finite element discretization procedure


We follow the same procedure as in the case of the heat equation, and let
Sn = Ω × In , n = 1, 2, . . . , N , with In = (tn−1 , tn ]. Then, for each n we
define, on Sn , the piecewise linear approximations

U (x, t) = Un−1 (x)Ψn−1 (t) + Un (x)Ψn (t),
V (x, t) = Vn−1 (x)Ψn−1 (t) + Vn (x)Ψn (t),   0 < x < 1, t ∈ In ,   (6.2.10)

where, e.g.,

Uñ (x) = Uñ,1 ϕ1 (x) + . . . + Uñ,m ϕm (x),   ñ = n − 1, n,
Vñ (x) = Vñ,1 ϕ1 (x) + . . . + Vñ,m ϕm (x),   ñ = n − 1, n.   (6.2.11)

[Figure: the temporal hat function ψn (t) on the nodes tn−1 , tn , tn+1 and the spatial
hat function ϕj (x) on the nodes xj−1 , xj , xj+1 .]

For u̇ − v = 0 and t ∈ In we write the general variational formulation

∫_{In} ∫_0^1 u̇ ϕ dx dt − ∫_{In} ∫_0^1 v ϕ dx dt = 0,   for all ϕ(x, t).   (6.2.12)

Likewise, v̇ − u′′ = 0 yields a variational formulation, viz

∫_{In} ∫_0^1 v̇ ϕ dx dt − ∫_{In} ∫_0^1 u′′ ϕ dx dt = 0.   (6.2.13)

Integrating by parts in x in the second term, and using the boundary condition
u′(1, t) = g(t), we get

∫_0^1 u′′ ϕ dx = [u′ ϕ]_0^1 − ∫_0^1 u′ ϕ′ dx = g(t)ϕ(1, t) − u′(0, t)ϕ(0, t) − ∫_0^1 u′ ϕ′ dx.

Inserting the right hand side in (6.2.13), we get for all ϕ with ϕ(0, t) = 0:

∫_{In} ∫_0^1 v̇ ϕ dx dt + ∫_{In} ∫_0^1 u′ ϕ′ dx dt = ∫_{In} g(t)ϕ(1, t) dt.   (6.2.14)

The corresponding cG(1)cG(1) finite element method reads as follows: for
each n, n = 1, 2, . . . , N , find continuous piecewise linear functions U (x, t)
and V (x, t), on a partition 0 = x0 < x1 < · · · < xm = 1 of Ω = (0, 1), such
that

∫_{In} ∫_0^1 ((Un (x) − Un−1 (x))/kn ) ϕj (x) dx dt
   − ∫_{In} ∫_0^1 ( Vn−1 (x)Ψn−1 (t) + Vn (x)Ψn (t) ) ϕj (x) dx dt = 0,   for j = 1, 2, . . . , m,   (6.2.15)

and

∫_{In} ∫_0^1 ((Vn (x) − Vn−1 (x))/kn ) ϕj (x) dx dt
   + ∫_{In} ∫_0^1 ( U′_{n−1}(x)Ψn−1 (t) + U′_n (x)Ψn (t) ) ϕ′_j (x) dx dt = ∫_{In} g(t)ϕj (1) dt,   for j = 1, 2, . . . , m,   (6.2.16)

where U̇ , U′, V̇ , and V′ are computed using (6.2.10) with

ψn−1 (t) = (tn − t)/kn ,   ψn (t) = (t − tn−1 )/kn ,   kn = tn − tn−1 .

Thus, the equations (6.2.15) and (6.2.16) are reduced to the iterative forms

∫_0^1 Un (x)ϕj (x) dx − (kn /2) ∫_0^1 Vn (x)ϕj (x) dx
   = ∫_0^1 Un−1 (x)ϕj (x) dx + (kn /2) ∫_0^1 Vn−1 (x)ϕj (x) dx,   j = 1, 2, . . . , m,

i.e. row-wise M Un − (kn /2) M Vn = M Un−1 + (kn /2) M Vn−1 , and

∫_0^1 Vn (x)ϕj (x) dx + (kn /2) ∫_0^1 U′_n (x)ϕ′_j (x) dx
   = ∫_0^1 Vn−1 (x)ϕj (x) dx − (kn /2) ∫_0^1 U′_{n−1}(x)ϕ′_j (x) dx + (gn )_j ,   j = 1, 2, . . . , m,

i.e. row-wise M Vn + (kn /2) S Un = M Vn−1 − (kn /2) S Un−1 + gn ,

respectively, where we used (6.2.11) and, as computed earlier,

S = (1/h) [  2  −1  . . .   0 ]              [ 4  1  . . .  0 ]
          [ −1   2  −1  . . . ]              [ 1  4  1  . . . ]
          [       . . .       ] ,  M = (h/6) [      . . .     ] ,
          [  0  −1   2  −1    ]              [ . . .  1  4  1 ]
          [  0   0  −1   1    ]              [ 0  . . .  1  2 ]

where gn = (0, . . . , 0, gn,m )^T , with gn,m = ∫_{In} g(t) dt.

In compact form the vectors Un and Vn are determined by solving the linear
system of equations

M Un − (kn /2) M Vn = M Un−1 + (kn /2) M Vn−1 ,
(kn /2) S Un + M Vn = −(kn /2) S Un−1 + M Vn−1 + gn ,            (6.2.17)

which is a system of 2m equations with 2m unknowns:

[    M      −(kn /2) M ] [ Un ]   [    M       (kn /2) M ] [ Un−1 ]   [ 0  ]
[ (kn /2) S      M     ] [ Vn ] = [ −(kn /2) S      M    ] [ Vn−1 ] + [ gn ] ,

i.e. A W = b, with W = A^{−1} b, Un = W (1 : m) and Vn = W (m + 1 : 2m).
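A minimal Matlab sketch of one such time step could read as follows (a sketch: the matrices are assembled as above on a uniform mesh, with the last row and column modified for the Neumann node at x = 1, and gn is evaluated with integral; g is assumed to be a vectorized function handle):

function [Un, Vn] = wave_cg1cg1_step(Unm1, Vnm1, g, tnm1, tn, h)
% One cG(1)cG(1) step (6.2.17) for the wave equation written as a system.
% Unm1, Vnm1: nodal values at x_1, ..., x_m = 1 at time t_{n-1}.
m = length(Unm1); kn = tn - tnm1;
e = ones(m,1);
S = spdiags([-e 2*e -e], -1:1, m, m)/h;   S(m,m) = 1/h;     % Neumann end point
M = spdiags([ e 4*e  e], -1:1, m, m)*h/6; M(m,m) = 2*h/6;   % half element at x = 1
gn = zeros(m,1); gn(m) = integral(g, tnm1, tn);
A = [M, -(kn/2)*M;  (kn/2)*S, M];
b = [M,  (kn/2)*M; -(kn/2)*S, M]*[Unm1; Vnm1] + [zeros(m,1); gn];
W = A\b;
Un = W(1:m); Vn = W(m+1:2*m);
end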



6.2.3 Exercises
Problem 6.10. Derive the corresponding linear system of equations in the
case of time discretization with dG(0).
Problem 6.11 (discrete conservation of energy). Show that cG(1)-cG(1) for
the wave equation in system form with g(t) = 0 conserves energy, i.e.

||U′_n ||² + ||Vn ||² = ||U′_{n−1}||² + ||Vn−1 ||².   (6.2.18)

Hint: Multiply the first equation by (Un−1 + Un )^t S M^{−1} and the second equation
by (Vn−1 + Vn )^t and add up. Then use, e.g., the fact that Un^t S Un = ||U′_n ||²,
where the vector Un = (Un,1 , Un,2 , . . . , Un,m )^T collects the nodal values of the
function Un (x) = Un,1 ϕ1 (x) + . . . + Un,m ϕm (x).

Problem 6.12. Consider the wave equation

ü − u′′ = 0,   x ∈ R, t > 0,
u(x, 0) = u0 (x),   x ∈ R,            (6.2.19)
u̇(x, 0) = v0 (x),   x ∈ R.

Plot the graph of u(x, 2) in the following cases:

a) v0 = 0 and u0 (x) = 1 for x < 0, u0 (x) = 0 for x > 0.

b) u0 = 0 and v0 (x) = −1 for −1 < x < 0, v0 (x) = 1 for 0 < x < 1, v0 (x) = 0 for |x| > 1.

Problem 6.13. Compute the solution of the wave equation

ü − 4u′′ = 0,   x > 0, t > 0,
u(0, t) = 0,   t > 0,            (6.2.20)
u(x, 0) = u0 (x), u̇(x, 0) = 0,   x > 0.

Plot the solutions for the three cases t = 0.5, t = 1, t = 2, with

u0 (x) = 1 for x ∈ [2, 3], and u0 (x) = 0 otherwise.   (6.2.21)
Problem 6.14. Apply cG(1) time discretization directly to the wave equation
by letting

U (x, t) = Un−1 (x)Ψn−1 (t) + Un (x)Ψn (t),   t ∈ In .   (6.2.22)

Note that U̇ is piecewise constant in time, and comment on

∫_{In} ∫_0^1 Ü ϕj dx dt + ∫_{In} ∫_0^1 U′ ϕ′_j dx dt = ∫_{In} g(t)ϕj (1) dt,   j = 1, 2, . . . , m,

where the first term is questionable (U̇ is discontinuous, so Ü is not defined as a
function), the second term equals (kn /2) S(Un−1 + Un ), and the right hand side gives gn .

Problem 6.15. Construct a FEM for the problem

ü + u̇ − u′′ = f,   0 < x < 1, t > 0,
u(0, t) = 0, u′(1, t) = 0,   t > 0,            (6.2.23)
u(x, 0) = 0, u̇(x, 0) = 0,   0 < x < 1.
Problem 6.16. Determine the solution of the wave equation

ü − c²u′′ = f,   x > 0, t > 0,
u(x, 0) = u0 (x), u_t (x, 0) = v0 (x),   x > 0,
u_x (1, t) = 0, u(0, t) = 0,   t > 0,

in the following cases:
a) f = 0.
b) f = 1, u0 = 0, v0 = 0.

Problem 6.17. Prove that the solution u of the convection-diffusion problem

−u_xx + u_x + u = f,   in I = (0, 1),   u(0) = u(1) = 0,

satisfies the estimate

( ∫_I u² φ dx )^{1/2} ≤ ( ∫_I f² φ dx )^{1/2} ,

where φ(x) is a positive weight function defined on (0, 1) satisfying φ′(x) ≤ 0
and −φ′(x) ≤ φ(x) for 0 ≤ x ≤ 1.
Problem 6.18. Let φ be a solution of the problem
−εφ′′ − 3φ′ + 2φ = e, φ′ (0) = φ(1) = 0.
Let k · k denote the L2 -norm on I. Show that there is a constant C such that
|φ(0)| ≤ Ckek, kεφ′′ k ≤ Ckek.
Problem 6.19. Use relevant interpolation theory estimates and prove an a
priori error estimate for the cG(1) finite element method for the problem
−u′′ + u′ = f, in I = (0, 1), u(0) = u(1) = 0.
Problem 6.20. Prove an a priori error estimate for the cG(1) finite element
method for the problem
−u′′ + u′ + u = f, in I = (0, 1), u(0) = u(1) = 0.
Problem 6.21. Consider the problem
−εu′′ + xu′ + u = f, in I = (0, 1), u(0) = u′ (1) = 0,
where ε is a positive constant, and f ∈ L2 (I). Prove that
||εu′′ || ≤ ||f ||.
Problem 6.22. We modify the problem 6.21 above according to
−εu′′ + c(x)u′ + u = f (x) 0 < x < 1, u(0) = u′ (1) = 0,
where ε is a positive constant, the function c satisfies c(x) ≥ 0, c′ (x) ≤ 0,
and f ∈ L2 (I). Prove that there are positive constants C1 , C2 and C3 such
that

ε||u′ || ≤ C1 ||f ||, ||cu′ || ≤ C2 ||f ||, and ε||u′′ || ≤ C3 ||f ||,
where || · || is the L2 (I)-norm.

Problem 6.23. Consider the convection-diffusion-absorption problem

−εu′′ + u′ + u = f,   in I = (0, 1),   u(0) = 0,   √ε u′(1) + u(1) = 0,

where ε is a positive constant and f ∈ L²(I). Prove the following stability
estimates for the solution u:

||√ε u′|| + ||u|| + |u(1)| ≤ C||f ||,
||u′|| + ||εu′′|| ≤ C||f ||,

where || · || denotes the L²(I)-norm, I = (0, 1), and C is an appropriate
constant.
Appendix A

Answers to Exercises

Chapter 1

1.1 a) u(x) = C1 e^x + C2 e^{2x}   b) u(x) = C1 cos 2x + C2 sin 2x   c) u(x) = (C1 + C2 x)e^{3x}

1.2 a) u(x) = x²/2 + e^{−x}(A cos x + B sin x)
    b) u(x) = (1/2)(sin x − cos x) + e^{−x/2}(cos(√7 x/2) + sin(√7 x/2))
    c) u(x) = C1 e^{−x} + C2 e^{−2x} + (1/6)e^x.

1.3 a) u(x) = −(1/6)x³ − (1/4)x² − (1/4)x   b) u(x) = −(1/2)x cos x
    c) u(x) = (1/6)e^x + (1/10)(sin x − 3 cos x).
1.5 b) No solution.

Chapter 2.
2.2 q = 1: U(t) = 1 + 3t.   q = 2: U(t) = 1 + (8/11)t + (10/11)t².
    q = 3: U(t) = 1 + (30/29)t + (45/116)t² + (35/116)t³.
    q = 4: U(t) ≈ 1 + 0.9971t + 0.5161t² + 0.1311t³ + 0.0737t⁴.

2.3 P u(t) ≈ 0.9991 + 1.083t + 0.4212t² + 0.2786t³.


2.4 A = [ 8 −4 0; −4 8 −4; 0 −4 8 ],   b = (b_i)_{i=1}^3,   b_i = i/16.

2.5 a. u(x) = (1/2)x(1 − x)
    b. R(x) = π²A sin πx + 4π²B sin 2πx − 1
    c. A = 4/π³ and B = 0.

2.6 a.
    b. R(x) = (π² + 1)A sin πx + (4π² + 1)B sin 2πx + (9π² + 1)C sin 3πx − x
    c. A = 2/(π(π² + 1)),  B = −1/(π(4π² + 1))  and  C = 2/(3π(9π² + 1)).
2.7 a. u(x) = (1/6)(π³ − x³) + (1/2)(x² − π²)
    b. R(x) = −U′′(x) − x + 1 = (1/4)ξ0 cos(x/2) + (9/4)ξ1 cos(3x/2)
    c. ξ0 = 8(2π − 6)/π and ξ1 = (8/9)(2/9 − (2/3)π)/π.

2.8 U(x) = (16 sin x + (16/27) sin 3x)/π³ + 2x²/π².

Chapter 3.
3.2 (a) x, (b) 0.

3.3 Π1 f(x) =
    4 − 11(x + π)/(2π)    for −π ≤ x ≤ −π/2,
    5/4 − (x + π/2)/(2π)  for −π/2 ≤ x ≤ 0,
    1 − 7x/(2π)           for 0 ≤ x ≤ π/2,
    3(x − π)/(2π)         for π/2 ≤ x ≤ π.

3.6 Check the conditions required for a vector space.

3.7 Π1 f(x) = f(a) (2x − a − b)/(a − b) + f((a + b)/2) · 2(x − a)/(b − a).

3.8 Hint: Use the procedure in the proof of Theorem 3.1, with somewhat
careful estimates at the end.

3.10 π_4 e^{−8x²} ≈ 0.25x⁴ − 1.25x² + 1.

3.11 For example we may choose the following basis:

ϕi,j(x) = λi,j(x) for x ∈ [xi−1 , xi ], and ϕi,j(x) = 0 otherwise,   i = 1, . . . , m + 1, j = 0, 1, 2,

where

λi,0(x) = (x − ξi)(x − xi) / ((xi−1 − ξi)(xi−1 − xi)),   λi,1(x) = (x − xi−1)(x − xi) / ((ξi − xi−1)(ξi − xi)),
λi,2(x) = (x − xi−1)(x − ξi) / ((xi − xi−1)(xi − ξi)),   ξi ∈ (xi−1 , xi).

3.12 This is a special case of problem 2.13.

3.13 This is “trivial”.


3.14 Hint: Use Taylor expansion of f about x = (x1 + x2)/2.

Chapter 4.
4.1 c) sin πx, x ln x and x(1 − x) are test functions of this problem. x2 and
ex − 1 are not test functions.

4.3 a) U is the solution of

AU = f ⇐⇒ (1/h) [ 2 −1 0; −1 2 −1; 0 −1 2 ] [ ξ1; ξ2; ξ3 ] = h [ 1; 1; 1 ],

with h = 1/4.
b) A is invertible, therefore U is unique.

4.6 a) ξ is the solution of

2 [ 2 −1; −1 1 ] [ ξ1; ξ2 ] = [ 0; 7 ].

b) (ξ1, ξ2) = 7(1/2, 1) and U(x) = 7x (same as the exact solution).

4.7 a) In the case N = 3, ξ is the solution of

Aξ = f ⇐⇒ (1/h) [ 2 −1 0; −1 2 −1; 0 −1 2 ] [ ξ0; ξ1; ξ2 ] = [ −5; 0; 0 ],

with h = 1/3. That is, (ξ0, ξ1, ξ2) = −(1/3)(15, 10, 5).

b) U(x) = 5x − 5 (same as the exact solution).

4.8 a) No solution!
b) Trying to get a finite element approximation ends up with the matrix
equation

Aξ = f ⇐⇒ [ 2 −2 0; −2 4 −2; 0 −2 2 ] [ ξ0; ξ1; ξ2 ] = (1/4) [ 1; 2; 1 ],

where the coefficient matrix is singular (det A = 0). There is no finite
element solution.

4.9 d) ||U||²_E = ξ^T Aξ (check the spectral theorem, linear algebra!)

4.10 For an M + 1 partition (here M = 2) we get aii = 2/h, ai,i+1 = −1/h


except aM +1,M +1 = 1/h − 1, bi = 0, i = 1, . . . , M and bM +1 = −1:
a) U = (0, 1/2, 1, 3/2).
b) e.g, U3 = U (1) → 1, as k → ∞.

4.11 c) Set α = 2 and β = 3 in the general FEM solution ξ = (α/3)(−1, 1, 1)^T + β(0, 0, 2)^T:

(ξ1, ξ2, ξ3)^T = (2/3)(−1, 1, 1)^T + 3(0, 0, 2)^T.

4.12 3 [ 2 −1; −1 2 ] [ ξ1; ξ2 ] + (1/18) [ 4 1; 1 4 ] [ ξ1; ξ2 ] = (1/3) [ 1; 1 ]

⇐⇒ (MATLAB) ξ1 = ξ2 = 0.102.

4.13 Just follow the procedure in the theory.

4.15 a priori: ||e||E ≤ ||u − πh u||E .

4.16 a) ||e′||_a ≤ Ci ||h(aU′)′||_{1/a}.
b) The matrix equation

[ 1 −1 0 0; −1 2 −1 0; 0 −1 3 −2; 0 0 2 4 ] [ ξ0; ξ1; ξ2; ξ3 ] = [ −3; 0; 0; 0 ],

which yields the approximate solution U = −3(1/2, 1, 2, 3)^T.
c) Since a is constant and U is linear on each subinterval, we have that

(aU′)′ = a′U′ + aU′′ = 0.

By the a posteriori error estimate we have that ||e′||_a = 0, i.e. e′ = 0.
Combining this with the fact that e(x) is continuous and e(1) = 0, we get
that e ≡ 0, which means that the finite element solution, in this case, coincides
with the exact solution.
 
4.17 a priori: ||e||_{H¹} ≤ Ci ( ||hu′′|| + ||h²u′′|| ).

4.18 a) a priori: ||e||_E ≤ ||u − v||_E (1 + c), and a posteriori: ||e||_E ≤ Ci ||hR(U)||_{L²(I)}.
b) Since c ≥ 0, the a priori error estimate in a) yields optimality for
c ≡ 0, i.e. in the case of no convection (does this tell you anything?).

4.19 a priori: ||e||_{H¹} ≤ Ci ( ||hu′′|| + 4||h²u′′|| ).

Chapter 5.
5.1 a) a_ij = j/(j + i) − 1/(j + i + 1),   b_i = 1/(i + 1),   i, j = 1, 2, . . . ,
    b) q = 1: U(t) = 1 + 3t.   q = 2: U(t) = 1 + (8/11)t + (10/11)t².

5.3 a) u(t) = e^{−4t} + (1/32)(8t² − 4t + 1).
    b) u(t) = e^{t²/2} − t + √(π/2) e^{t²/2} erf(t/√2),   where erf(x) = (2/√π) ∫_0^x e^{−y²} dy.

5.4 a) Ui(xi) = ( [(x³_i − x³_{i−1})/3] − Ui(xi−1)·(2(xi − xi−1) − 1) ) / (1 + 2(xi − xi−1)).

Chapter 6.
6.8 ||e|| ≤ ||h²u_xx||.

6.14 u(x, t) = (1/2)(u0(x + 2t) + u0(x − 2t)) for x ≥ 2t, and u(x, t) = (1/2)(u0(2t + x) + u0(2t − x)) for x < 2t.

6.16 a) u(x, t) = (1/2)[u0(x + ct) + u0(ct − x)] + (1/2c)( ∫_0^{x+ct} v0 + ∫_0^{ct−x} v0 ).
     b) u(x, t) = (1/2c) ∫_0^t 2c(t − s) ds = t²/2.

6.19 a priori: ||e||_{H¹} ≤ Ci ( ||hu′′|| + ||h²u′′|| ).

6.20 a priori: ||e||_E ≤ Ci ( ||hu′′|| + ||h²u′′|| ).
Appendix B

Algorithms and MATLAB


Codes

To streamline the computational aspects, we have gathered suggestions for


some algorithms and Matlab codes that can be used in implementations.
These are simple specific Matlab codes on the concepts such as

• The L2 -projection.

• Numerical integration rules: Midpoint, Trapezoidal, Simpson.

• Finite difference Methods: Forward Euler, Backward Euler, Crank-Nicolson.

• Matrices/vectors: Stiffness- Mass-, and Convection Matrices. Load vector.

The Matlab codes are not optimized for speed, but rather intended to be easy
to read.


An algorithm for L2 -projection:

1. Choose a partition Th of the interval I into N sub-intervals, N +1 nodes,


and define the corresponding space of piece-wise linear functions Vh .

2. Compute the (N + 1) × (N + 1) mass matrix M and the (N + 1) × 1
   load vector b, viz

   m_ij = ∫_I ϕj ϕi dx,   b_i = ∫_I f ϕi dx.

3. Solve the linear system of equations

M ξ = b.

4. Set

   Ph f = Σ_{j=0}^N ξj ϕj .

Below are two versions of Matlab codes for computing the mass matrix M:

function M = MassMatrix(p, phi0, phiN)

%--------------------------------------------------------------------
% Syntax: M = MassMatrix(p, phi0, phiN)
% Purpose: To compute mass matrix M of partition p of an interval
% Data: p - vector containing nodes in the partition
% phi0 - if 1: include basis function at the left endpoint
% if 0: do not include a basis function
% phiN - if 1: include basis function at the right endpoint
% if 0: do not include a basis function
%--------------------------------------------------------------------

N = length(p); % number of rows and columns in M


M = zeros(N, N); % initiate the matrix M

% Assemble the full matrix (including basis functions at endpoints)



for i = 1:length(p)-1
h = p(i + 1) - p(i); % length of the current interval
M(i, i) = M(i, i) + h/3;
M(i, i + 1) = M(i, i + 1) + h/6;
M(i + 1, i) = M(i + 1, i) + h/6;
M(i + 1, i + 1) = M(i + 1, i + 1) + h/3;
end

% Remove unnecessary elements for basis functions not included


if ~phi0
M = M(2:end, 2:end);
end
if ~phiN
M = M(1:end-1, 1:end-1);
end

A Matlab code to compute the mass matrix M for a non-uniform mesh:


Since now the mesh is not uniform (the sub-intervals have different lengths), we
compute the mass matrix assembling the local mass matrix computation for each
sub-interval. To this end we can easily compute the mass matrix for the standard
interval I1 = [0, h] with the basis functions ϕ0 = (h − x)/h and ϕ1 = x/h: Then,


Figure B.1: Standard basis functions ϕ0 = (h − x)/h and ϕ1 = x/h.

the standard mass matrix is given by

M^{I1} = [ ∫_{I1} ϕ0 ϕ0   ∫_{I1} ϕ0 ϕ1 ]
         [ ∫_{I1} ϕ1 ϕ0   ∫_{I1} ϕ1 ϕ1 ] .

Inserting ϕ0 = (h − x)/h and ϕ1 = x/h we compute M^{I1} as

M^{I1} = [ ∫_0^h (h − x)²/h² dx   ∫_0^h (h − x)x/h² dx ]          [ 2  1 ]
         [ ∫_0^h x(h − x)/h² dx   ∫_0^h x²/h² dx       ]  = (h/6) [ 1  2 ] .   (B.0.1)

Thus, for an arbitrary sub-interval Ik := [xk−1 , xk ] with the mesh size hk , and
basis functions ϕk and ϕk−1 (see Fig. 3.4), the local mass matrix is given by

M^{Ik} = [ ∫_{Ik} ϕk−1 ϕk−1   ∫_{Ik} ϕk−1 ϕk ]           [ 2  1 ]
         [ ∫_{Ik} ϕk ϕk−1     ∫_{Ik} ϕk ϕk   ] = (hk /6) [ 1  2 ] ,   (B.0.2)

where hk is the length of the interval Ik . Note that, when assembling, the diagonal
elements of the global mass matrix corresponding to interior nodes are multiplied by 2
(see Example 4.1), since each interior node receives contributions from the intervals to
its left and to its right. The assembly follows the same element-wise pattern as in the
MassMatrix routine above.
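For completeness, a sketch of such an element-wise assembly based on the local matrices (B.0.2) might read (it includes the basis functions at both endpoints and is equivalent to the MassMatrix routine above):

function M = MassMatrixLocal(p)
% Assembles the global mass matrix from the local matrices (B.0.2)
% for the (possibly non-uniform) partition p.
N = length(p);
M = zeros(N, N);
for k = 1:N-1
    hk = p(k+1) - p(k);                       % length of I_k
    Mk = hk/6*[2 1; 1 2];                     % local mass matrix (B.0.2)
    M(k:k+1, k:k+1) = M(k:k+1, k:k+1) + Mk;   % add the local contribution
end
end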
A Matlab routine to compute the load vector b:
To solve the L2-projection problem, it remains to compute/assemble the
load vector b. To this end we note that b depends on the data function f,
and is therefore computed by one of the numerical integration rules (midpoint,
trapezoidal, Simpson or a general quadrature). Below we shall introduce Matlab
routines for these numerical integration methods.

function b = LoadVector(f, p, phi0, phiN)

%--------------------------------------------------------------------
% Syntax: b = LoadVector(f, p, phi0, phiN)
% Purpose: To compute load vector b of load f over partition p
% of an interval
% Data: f - right hand side funcion of one variable
% p - vector containing nodes in the partition
% phi0 - if 1: include basis function at the left endpoint
% if 0: do not include a basis function
% phiN - if 1: include basis function at the right endpoint
% if 0: do not include a basis function
%--------------------------------------------------------------------

N = length(p); % number of rows in b


b = zeros(N, 1); % initiate the matrix S

% Assemble the load vector (including basis functions at both endpoints)


for i = 1:length(p)-1
h = p(i + 1) - p(i); % length of the current interval
b(i) = b(i) + .5*h*f(p(i));
b(i + 1) = b(i + 1) + .5*h*f(p(i + 1));
end

% Remove unnecessary elements for basis functions not included


if ~phi0
b = b(2:end);
end
if ~phiN
b = b(1:end-1);
end

The data function f can be either inserted as f=@(x) followed by some ex-
pression in the variable x, or more systematically through a separate routine, here
called “Myfunction” as in the following example

Example B.1 (Calling a data function f(x) = x² for the load vector).

function y = Myfunction(x)

y = x.^2;

Then, we assemble the corresponding load vector, viz:

b = LoadVector(@Myfunction, p, 1, 1)

Alternatively we may write

f = @(x) x.^2;
b = LoadVector(f, p, 1, 1)

Now we are prepared to write a Matlab routine for computing the L2-projection.

Matlab routine to compute the L2 -projection:

function pf = L2Projection(p, f)

M = MassMatrix(p, 1, 1); % assemble mass matrix


b = LoadVector(f, p, 1, 1); % assemle load vector
pf = M\b; % solve linear system
plot(p, pf) % plot the L2-projection

The above routine for assembling the load vector uses the Composite trapezoidal
rule of numerical integration. Below we gather examples of the numerical integra-
tion routines:

A Matlab routine for the composite midpoint rule

function M = midpoint(f,a,b,N)

h=(b-a)/N
x=a+h/2:h:b-h/2;
M=0;
for i=1:N
M = M + f(x(i));
end
M=h*M;

A Matlab routine for the composite trapezoidal rule

function T=trapezoid(f,a,b,N)

h=(b-a)/N;
x=a:h:b;

T = f(a);
for k=2:N
T = T + 2*f(x(k));
end
T = T + f(b);
T = T * h/2;

A Matlab routine for the composite Simpson’s rule

function S = simpson(a,b,N,f)

h=(b-a)/(2*N);
x = a:h:b;
p = 0;
q = 0;

for i = 2:2:2*N % Define the terms to be multiplied by 4


p = p + f(x(i));
end

for i = 3:2:2*N-1 % Define the terms to be multiplied by 2


q = q + f(x(i));
end

S = (h/3)*(f(a) + 2*q + 4*p + f(b)); % Calculate final output

The precomputations for the standard local stiffness and convection matrices:

S^{I1} = [ ∫_{I1} ϕ′0 ϕ′0   ∫_{I1} ϕ′0 ϕ′1 ]   [ ∫_{I1} (−1/h)(−1/h)   ∫_{I1} (−1/h)(1/h) ]          [  1  −1 ]
         [ ∫_{I1} ϕ′1 ϕ′0   ∫_{I1} ϕ′1 ϕ′1 ] = [ ∫_{I1} (1/h)(−1/h)    ∫_{I1} (1/h)(1/h)  ] = (1/h)  [ −1   1 ] .

As in the assembling of the mass matrix, also here each interior node receives
contributions to the global stiffness matrix from both intervals to which it belongs.
Consequently, after assembling, the interior diagonal elements of the stiffness matrix
are 2/h (rather than the 1/h of the single-interval computation above). For the
convection matrix C, however, because of the skew-symmetry the contributions from
the two adjacent interior intervals cancel out:

C^{I1} = [ ∫_{I1} ϕ0 ϕ′0   ∫_{I1} ϕ0 ϕ′1 ]   [ ∫_{I1} ((h−x)/h)(−1/h)   ∫_{I1} ((h−x)/h)(1/h) ]          [ −1  1 ]
         [ ∫_{I1} ϕ1 ϕ′0   ∫_{I1} ϕ1 ϕ′1 ] = [ ∫_{I1} (x/h)(−1/h)       ∫_{I1} (x/h)(1/h)     ] = (1/2)  [ −1  1 ] .

A Matlab routine assembling the stiffness matrix:

function S = StiffnessMatrix(p, phi0, phiN)

%---------------------------------------------------------------------
% Syntax: S = StiffnessMatrix(p, phi0, phiN)
% Purpose: To compute the stiffness matrix S of a partition p of an
% interval
% Data: p - vector containing nodes in the partition
% phi0 - if 1: include basis function at the left endpoint
% if 0: do not include a basis function
% phiN - if 1: include basis function at the right endpoint
% if 0: do not include a basis function
%---------------------------------------------------------------------

N = length(p); % number of rows and columns in S


S = zeros(N, N); % initiate the matrix S

% Assemble the full matrix (including basis functions at endpoints)


for i = 1:length(p)-1
h = p(i + 1) - p(i); % length of the current interval
S(i, i) = S(i, i) + 1/h;
S(i, i + 1) = S(i, i + 1) - 1/h;
S(i + 1, i) = S(i + 1, i) - 1/h;
S(i + 1, i + 1) = S(i + 1, i + 1) + 1/h;
end

% Remove unnecessary elements for basis functions not included


if ~phi0
S = S(2:end, 2:end);
end
if ~phiN
S = S(1:end-1, 1:end-1);
end

A Matlab routine to assemble the convection matrix:

function C = ConvectionMatrix(p, phi0, phiN)

%--------------------------------------------------------------------------
% Syntax: C = ConvectionMatrix(p, phi0, phiN)
% Purpose: To compute the convection matrix C of a partition p of an
% interval
% Data: p - vector containing nodes in the partition
% phi0 - if 1: include a basis function at the left endpoint
% if 0: do not include a basis function
% phiN - if 1: include a basis function at the right endpoint
% if 0: do not include a basis function
%--------------------------------------------------------------------------

N = length(p); % number of rows and columns in C


C = zeros(N, N); % initiate the matrix C

% Assemble the full matrix (including basis functions at both endpoints)


for i = 1:length(p)-1
C(i, i) = C(i, i) - 1/2;
C(i, i + 1) = C(i, i + 1) + 1/2;
C(i + 1, i) = C(i + 1, i) - 1/2;
C(i + 1, i + 1) = C(i + 1, i + 1) + 1/2;
end

% Remove unnecessary elements for basis functions not included

if ~phi0
C = C(2:end, 2:end);
end
if ~phiN
C = C(1:end-1, 1:end-1);
end

Finally, below we gather the Matlab routines for finite difference approxima-
tions (also cG(1) and dG(0) ) for the time discretizations.

Matlab routine for Forward-, Backward-Euler and Crank-Nicolson:

function [] = three_methods(u0, T, dt, a, f, exactexists, u)

% Solves the equation du/dt + a(t)*u = f(t)


% u0: initial value; T: final time; dt: time step size
% exactexists = 1 <=> exact solution is known
% exactexists = 0 <=> exact solution is unknown

timevector = [0]; % we build up a vector of


% the discrete time levels

U_explicit_E = [u0]; % vector which will contain the


% solution obtained using "Forward Euler"

U_implicit_E = [u0]; % vector which will contain the


% solution with "Backward Euler"

U_CN = [u0]; % vector which will contain the


% solution using "Crank-Nicolson"

n = 1; % current time interval

t_l = 0; % left end point of the current


% time interval, i.e. t_{n-1}

while t_l < T

t_r = n*dt; % right end point of the current


% time interval, i.e. t_{n}

% Forward Euler:
U_v = U_explicit_E(n); % U_v = U_{n-1}
U_h = (1-dt*a(t_l))*U_v+dt*f(t_l); % U_h = U_{n};
U_explicit_E(n+1) = U_h;

% Backward Euler:
U_v = U_implicit_E(n); % U_v = U_{n-1}

U_h = (U_v + dt*f(t_r))/(1 + dt*a(t_r)); % U_h = U_{n}


U_implicit_E(n+1) = U_h;

% Crank-Nicolson:
U_v = U_CN(n); % U_v = U_{n-1}
U_h = ((1 - dt/2*a(t_l))*U_v + dt/2*(f(t_l)+f(t_r))) ...
/ (1 + dt/2*a(t_r)); % U_h = U_{n}
U_CN(n+1) = U_h;

timevector(n+1) = t_r;
t_l = t_r; % right end-point in the current time interval
% becomes the left end-point in the next time interval.

n = n + 1;

end

% plot (real part (in case the solutions become complex))

figure(1)

plot(timevector, real(U_explicit_E), ’:’)


hold on
plot(timevector, real(U_implicit_E), ’--’)
plot(timevector, real(U_CN), ’-.’)

if (exactexists)
% if known, plot also the exact solution
u_exact = u(timevector);
plot(timevector, real(u_exact), ’g’)
end

xlabel(’t’)
legend(’Explicit Euler’, ’Implicit Euler’, ’Crank-Nicolson’, 0)
hold off

if (exactexists)

% if the exact solution is known, then plot the error:


figure(2)

plot(timevector, real(u_exact - U_explicit_E), ’:’)


hold on
plot(timevector, real(u_exact - U_implicit_E), ’--’)
plot(timevector, real(u_exact - U_CN), ’-.’)
legend(’Explicit Euler’, ’Implicit Euler’, ’Crank-Nicolson’, 0)
title(’Error’)
xlabel(’t’)
hold off

end

return

Example B.2. Solving u′ (t) + u(t) = 0 with three_methods

a= @(t) 1;
f= @(t) 0;
u= @(t) exp(-t)
u_0=1;
T= 1;
dt=0.01;
three_methods (u_0, T, dt, a, f, 1, u)

Table of Symbols

Symbol / reads / Definition or Example

∀   for all, for every   ∀x, cos²x + sin²x = 1
∃   there exists   see below
:   such that   ∃x : x > 3
∨   or   x ∨ y (x or y)
∧ or &   and   x ∧ y (x and y), also x & y
∈   belongs to   √2 ∈ R (√2 is a real number)
∉   does not belong to   √2 ∉ Q (√2 is not a rational number)
⊥   orthogonal to   u ⊥ v (u and v are orthogonal)
:=   is defined as   I := ∫_a^b f(x) dx (I is defined as the integral on the right hand side)
=:   defines   ∫_a^b f(x) dx =: I (the integral on the left hand side defines I)
≈   approximates   A ≈ B (A approximates B, or A is approximately equal to B)
=⇒   implies   A =⇒ B (A implies B)
⇐⇒   is equivalent to   A ⇐⇒ B (A is equivalent to B)
ODE   Ordinary Differential Equation
PDE   Partial Differential Equation
IVP   Initial Value Problem
BVP   Boundary Value Problem
VF   Variational Formulation
MP   Minimization Problem
P^q(I)   p ∈ P^q(I)   p(x) is a polynomial of degree ≤ q for x ∈ I
H¹(I)   v ∈ H¹(I)   if ∫_a^b ( v(x)² + v′(x)² ) dx < ∞, I = [a, b]
Vh(I)   v ∈ Vh(I)   the space of piecewise linear functions on a partition of I
Vh⁰(I)   v ∈ Vh⁰(I)   v ∈ Vh(I) and v is 0 at both or one of the boundary points

Symbol / reads / Example or Definition

||f||_p, ||f||_{Lp(I)}   Lp-norm of f on I   ||f||_p := ( ∫_I |f(x)|^p dx )^{1/p} for 1 ≤ p < ∞; ||f||_∞ := max_{x∈I} |f(x)|
Lp(I)   Lp-space   f ∈ Lp(I) iff ||f||_p < ∞
||v||_a   weighted L2-norm   ||v||_a := ( ∫_I a(x)|v(x)|² dx )^{1/2}, a(x) > 0
||v||_E   the energy norm   ||v||_E := ( ∫_I a(x)|v′(x)|² dx )^{1/2}, ||v||_E = ||v′||_a
∏   product   ∏_{i=1}^N i = 1 · 2 · 3 · . . . · N =: N!
∑   sum   ∑_{i=1}^N i = 1 + 2 + 3 + . . . + N = N(N + 1)/2
(u, v) or ⟨u, v⟩   scalar/inner product   (u, v) := u1 v1 + u2 v2 + . . . + uN vN for u, v ∈ R^N; (u, v) := ∫_I u(x)v(x) dx for u, v ∈ L²(I)
Ph f   L2-projection   (f, w) = (Ph f, w), ∀w ∈ P^q(a, b)
Th(I)   a partition of I   Th[a, b]: a = x0 < x1 < . . . < xN = b
πh f   interpolant of f   πh f(xi) = f(xi) in a partition Th of I = [a, b]
FDM   Finite Difference Method
FE   Forward Euler   forward Euler FDM
BE   Backward Euler   backward Euler FDM ⇐⇒ dG(0)
CN   Crank-Nicolson   Crank-Nicolson FDM ⇐⇒ cG(1)
FEM   Finite Element Method / Galerkin Method
cG(1)   continuous Galerkin   continuous, piecewise linear Galerkin approximation
dG(0)   discontinuous Galerkin   discontinuous, piecewise constant Galerkin approximation
cG(1)cG(1)   continuous Galerkin in space and time   continuous, piecewise linear Galerkin in space and time
Ci   Interpolation Constant
Cs   Stability constant
TOL   Error TOLerance
Index

A
Adaptivity 64, 114

B
boundary condition 3, 5, 7, 55, 74, 75, 97, 107, 108
    Dirichlet 21, 53, 58, 59, 60, 61, 96, 100
    Neumann 21, 96
boundary value problem 3, 6, 8, 13, 28, 29, 53, 56, 58, 59, 61, 64, 65, 72-77, 78, 81, 95, 96, 99, 106
    two point bvp 53, 72-75

C
Cauchy-Schwarz 17, 61, 63, 97-99
Conservation of energy 106, 107, 111
Convection 2, 64, 71, 72, 95, 120, 121, 127-129
Convection-diffusion 2, 64, 70, 107, 113, 114
Convection matrix 70, 127, 129
Crank-Nicolson 84-86, 91, 102, 121, 130-132

D
differential equation V, 1-8, 41, 92, 93, 98, 107
    ordinary differential equation 1, 9, 81
    partial differential equation 1-3, 5, 7
Diffusion 2, 64, 70, 95, 107, 113

E
Error estimates 7, 33, 59, 95
    a priori error estimates 79, 93
    interpolation error 35, 51

F
Finite dimensional spaces 21, 26, 65
Finite Element Method V, 7, 9, 10, 19, 23, 58, 74, 75, 78, 86, 100, 109, 113
    continuous Galerkin 58, 64, 87, 90, 93, 104
    discontinuous Galerkin 87, 90, 104

G
Galerkin method for BVP 21
Gauss quadrature 48

H
hat function 13, 14, 30, 36, 55, 59, 66, 70, 101

I
Interpolation
    Lagrange interpolation 37-39, 43
    linear interpolation 31, 33, 36, 50
    polynomial interpolation 11
Initial Boundary Value Problem (IBVP) 3, 81, 95, 96, 99, 106
Initial value problem (IVP) 2, 3, 9, 17, 81, 85-87, 92, 93

L
Lagrange basis 34, 38-41, 50
linear space 10, 15
L2-projection 20, 21, 23, 121, 122, 124-126

M
Mass matrix 67, 69, 102-104, 122-124, 126, 127
Minimization problem 53, 56, 57, 73
Mixed bvp 96

N
Neumann problem/data 21, 96
Numerical integration 7, 31, 41, 121, 124, 126
    composite midpoint 45
    composite trapezoidal 45
    composite Simpson's 46
    simple midpoint 41, 45, 47
    simple trapezoidal 42, 45, 47, 85
    simple Simpson's 43, 46, 48
Norm 16
    L2-norm 16, 17, 63, 72, 97, 104, 105, 113, 114
    Lp-norm 33, 35
    vector norm 33
    maximum norm 33, 35
    energy norm 59, 60, 62, 75, 79

O
Ordinary Differential Equations (ODE) 1, 7, 9, 28, 81, 92
Orthogonality 16, 29, 30, 59, 60

P
Partial Differential Equations (PDE) 1-3, 7, 10, 53, 65, 70, 98, 107
    heat equation 2, 21, 64, 95, 99, 100, 105, 106, 108
    wave equation 2, 3, 81, 95, 106, 107, 111, 112
partition 10, 12-14, 20, 21, 26, 36, 37, 39, 41, 44, 51, 58, 61, 64, 65, 70, 73-75, 77-79, 83-85
Poincare inequality 97-100, 105

R
Residual 18, 29, 30, 59, 61-64, 93

S
Scalar initial value problem 81
Scalar product 16, 17, 32, 33, 39
stability 81, 82, 91, 93, 95-97, 99, 105, 115
Stiffness matrix 23-26, 28, 67, 71, 75, 92, 103, 127, 128

T
test function 17-19, 21, 22, 54, 55, 58, 65, 70, 73, 87, 117
trial function 18, 21, 65, 70, 87

V/W
Variational formulation 17, 22, 53
