
LECTURE NOTES ON

MATHEMATICAL METHODS II

Joseph M. Powers

Department of Aerospace and Mechanical Engineering


University of Notre Dame
Notre Dame, Indiana 46556-5637
USA

updated
06 February 2024, 3:22pm

© 06 February 2024. J. M. Powers.


Contents

Preface 7

1 Physical problem formulation 9


1.1 Simple wave propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 One-dimensional unsteady energy diffusion . . . . . . . . . . . . . . . . . . . 16
1.3 Two-dimensional steady energy diffusion . . . . . . . . . . . . . . . . . . . . 20
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2 Classification of partial differential equations 25


2.1 General method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2 Application to standard problems . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.1 Wave equation: hyperbolic . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.2 Heat equation: parabolic . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.3 Laplace’s equation: elliptic . . . . . . . . . . . . . . . . . . . . . . . . 33
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3 Separation of variables 35
3.1 Well-posedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Cartesian geometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Non-Cartesian geometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.3.1 Cylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.3.2 Spherical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4 Usage in a stability problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.4.1 Spatially homogeneous solutions . . . . . . . . . . . . . . . . . . . . . 77
3.4.2 Steady solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.4.3 Unsteady solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.5 Nonlinear separation of variables . . . . . . . . . . . . . . . . . . . . . . . . 89
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4 One-dimensional waves 97
4.1 One-dimensional conservation laws . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.1 Multiple conserved variables . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.2 Single conserved variable . . . . . . . . . . . . . . . . . . . . . . . . . 100


4.2 Inviscid Burgers’ equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104


4.3 Viscous Burgers’ equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.1 Comparison to inviscid solution . . . . . . . . . . . . . . . . . . . . . 109
4.3.2 Steadily propagating waves . . . . . . . . . . . . . . . . . . . . . . . 112
4.3.3 Cole-Hopf transformation . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4 Traffic flow model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.5 Linear dispersive waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.6 Stokes’ second problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

5 Two-dimensional waves 133


5.1 Helmholtz equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2 Square domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.3 Circular domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

6 Self-similar solutions 143


6.1 Stokes’ first problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.2 Taylor-Sedov solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.2.1 Governing equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.2.2 Similarity transformation . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.2.3 Transformed equations . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.2.4 Dimensionless equations . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.2.5 Reduction to nonautonomous form . . . . . . . . . . . . . . . . . . . 160
6.2.6 Numerical solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.2.7 Contrast with acoustic limit . . . . . . . . . . . . . . . . . . . . . . . 167
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

7 Monoscale and multiscale features 171


7.1 Monoscale problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
7.1.1 Spatially homogeneous solution . . . . . . . . . . . . . . . . . . . . . 172
7.1.2 Steady solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
7.1.3 Spatio-temporal solution . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.2 Multiscale problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
7.2.1 Spatially homogeneous solution . . . . . . . . . . . . . . . . . . . . . 178
7.2.2 Steady solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
7.2.3 Spatio-temporal solution . . . . . . . . . . . . . . . . . . . . . . . . . 184

8 Complex variable methods 187


8.1 Laplace’s equation in engineering . . . . . . . . . . . . . . . . . . . . . . . . 187
8.2 Velocity potential and stream function . . . . . . . . . . . . . . . . . . . . . 188
8.3 Mathematics of complex variables . . . . . . . . . . . . . . . . . . . . . . . . 191
8.3.1 Euler’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192


8.3.2 Polar and Cartesian representations . . . . . . . . . . . . . . . . . . . 192


8.3.3 Cauchy-Riemann equations . . . . . . . . . . . . . . . . . . . . . . . 196
8.4 Elementary complex potentials . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.4.1 Uniform field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.4.2 Sources and sinks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.4.3 Point vortices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
8.4.4 Superposition of sources . . . . . . . . . . . . . . . . . . . . . . . . . 202
8.4.5 Flow in corners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
8.4.6 Doublets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
8.4.7 Quadrupoles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
8.4.8 Rankine half body . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
8.4.9 Flow over a cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
8.5 Contour integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
8.5.1 Simple pole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
8.5.2 Constant potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
8.5.3 Linear potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
8.5.4 Quadrupole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
8.6 Laurent series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
8.7 Jordan’s lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
8.8 Conformal mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
8.8.1 Analog to steady two-dimensional heat transfer . . . . . . . . . . . . 220
8.8.2 Mapping of one geometry to another . . . . . . . . . . . . . . . . . . 221
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228

9 Integral transformation methods 231


9.1 Fourier transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
9.2 Laplace transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

10 Linear integral equations 259


10.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
10.2 Homogeneous Fredholm equations . . . . . . . . . . . . . . . . . . . . . . . . 260
10.2.1 First kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
10.2.2 Second kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
10.3 Inhomogeneous Fredholm equations . . . . . . . . . . . . . . . . . . . . . . . 266
10.3.1 First kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
10.3.2 Second kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
10.4 Fredholm alternative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
10.5 Fourier series projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276

Bibliography 277



Preface

These are lecture notes for AME 60612 Mathematical Methods II, the second of a pair of
courses on applied mathematics taught in the Department of Aerospace and Mechanical
Engineering of the University of Notre Dame. Most of the students in this course are be-
ginning graduate students in engineering coming from a variety of backgrounds. The course
objective is to survey topics in applied mathematics, with the focus being on partial dif-
ferential equations. Specific topics include physical motivations, classification, separation of
variables, one-dimensional waves, similarity, complex variables, integral transform methods,
and integral equations.
These notes emphasize method and technique over rigor and completeness; the student
should call on textbooks and other reference materials. It should also be remembered that
practice is essential to learning; the student would do well to apply the techniques presented
by working as many problems as possible. The notes, along with much information on
the course, can be found at https://www3.nd.edu/∼powers/ame.60612. At this stage,
members of the class have permission to download the notes. I ask that you not distribute
them.
These notes may have typographical errors. Do not hesitate to identify those to me. I
would be happy to hear further suggestions as well.

Joseph M. Powers
powers@nd.edu
https://www3.nd.edu/∼powers

Notre Dame, Indiana; USA


Tuesday 6th February, 2024

Copyright © 2024 by Joseph M. Powers.


All rights reserved.



Chapter 1

Physical problem formulation

See Mei, Chapter 1.

Here we consider mathematical formulation of physical problems.

1.1 Simple wave propagation


Consider the scenario of Fig. 1.1. Here a material whose density of mass ρ varies with
position x and time t, i.e. ρ = ρ(x, t), flows with constant velocity a in a tube of constant
cross-sectional area A. One can consider the SI units of mass to be kg, those of ρ to be
kg/m3 , x to be m, t to be s, and a to be m/s. At the entrance, we are at position x1 . At the
exit, we are at position x1 + ∆x. Standard geometry tells us the volume bounded within the
tube is V = A∆x. Also indicated is the distance a material particle will have propagated in
a small increment of time ∆t, that distance being a∆t.
Certainly it is possible to define an average density within the volume, denoted with an
over-bar:
\[
\bar{\rho}(t) = \frac{1}{\Delta x} \int_{x_1}^{x_1+\Delta x} \rho(x, t)\, dx. \tag{1.1}
\]
Let us invoke a common physical principle known as mass conservation and see how
this principle can be cast as a partial differential equation. One way to consider the mass
conservation principle is to insist for a fixed volume, such as ours, that the change in mass
within the volume can only be ascribed to mass entering and exiting the surface bounding
the volume. We might say

\[
\underbrace{\text{total mass @ } (t+\Delta t) - \text{total mass @ } t}_{\text{unsteady}}
= \underbrace{\text{mass flux in} - \text{mass flux out}}_{\text{advection and } \underbrace{\text{diffusion}}_{=0}}. \tag{1.2}
\]

The term on the left side of Eq. (1.2) is known as the “unsteady” term as it accounts for
the change in mass. The terms on the right side are those physical processes which can


A
a Δt

a a

Figure 1.1: Schematic of mass advection.

induce change, namely mass entering and exiting the volume. In general, one can expect the
physical processes of advection and diffusion to allow mass changes. Here for simplicity, we
will ignore diffusion.
With mass m within the volume given in terms of density as
\[
m = \int_{x_1}^{x_1+\Delta x} \rho(x, t)\, A\, dx = \bar{\rho} \underbrace{A \Delta x}_{V} = \bar{\rho} V, \tag{1.3}
\]

our mass conservation statement has the mathematical expression

\begin{align}
m|_{t+\Delta t} - m|_t &= m_{\text{in}} - m_{\text{out}}, \tag{1.4}\\
&= -\left(m_{\text{out}} - m_{\text{in}}\right). \tag{1.5}
\end{align}

By inspection of Fig. 1.1, we see that

\begin{align}
m_{\text{in}} &= \rho|_{x_1} A a \Delta t, \tag{1.6}\\
m_{\text{out}} &= \rho|_{x_1+\Delta x} A a \Delta t. \tag{1.7}
\end{align}

Therefore Eq. (1.5) can be recast as



\begin{align}
m|_{t+\Delta t} - m|_t &= -\left(\rho|_{x_1+\Delta x} A a \Delta t - \rho|_{x_1} A a \Delta t\right), \tag{1.8}\\
\frac{m|_{t+\Delta t} - m|_t}{\Delta t} &= -\left(\rho|_{x_1+\Delta x} A a - \rho|_{x_1} A a\right). \tag{1.9}
\end{align}
As ∆t → 0, Eq. (1.9) reduces to
\[
\frac{dm}{dt} = -\left(\rho|_{x_1+\Delta x} A a - \rho|_{x_1} A a\right). \tag{1.10}
\]
It is the form of Eq. (1.10) which is often considered to be the fundamental form expressing
mass conservation. We have not insisted here on any continuity properties for ρ. Here


however, for simplicity, we shall assume continuity of ρ, and return to operate on Eq. (1.9)
as follows:
\begin{align}
A \Delta x \, \frac{\bar{\rho}|_{t+\Delta t} - \bar{\rho}|_t}{\Delta t} &= -\left(\rho|_{x_1+\Delta x} A a - \rho|_{x_1} A a\right), \tag{1.11}\\
\frac{\bar{\rho}|_{t+\Delta t} - \bar{\rho}|_t}{\Delta t} &= -a \, \frac{\rho|_{x_1+\Delta x} - \rho|_{x_1}}{\Delta x}. \tag{1.12}
\end{align}
Now as we let ∆x → 0, ρ̄ → ρ by the mean value theorem, assuming continuity of ρ. In
important cases to be studied in Sec. 4.1 in which the volume contains internal discontinuities,
we will not be able to make such an assumption. Then employing the definition of the partial
derivative, we arrive at
\[
\frac{\partial \rho}{\partial t} = -a \frac{\partial \rho}{\partial x}. \tag{1.13}
\]
Rearranging, we get the classical form of what is known as a linear advection equation, which
is a type of partial differential equation.
\[
\frac{\partial \rho}{\partial t} + a \frac{\partial \rho}{\partial x} = 0. \tag{1.14}
\]
We can formally integrate Eq. (1.14) to recover our original integral form.
\begin{align}
\int_{x_1}^{x_1+\Delta x} \left(\frac{\partial \rho}{\partial t} + a \frac{\partial \rho}{\partial x}\right) dx &= \underbrace{\int_{x_1}^{x_1+\Delta x} 0 \, dx}_{0}, \tag{1.15}\\
\int_{x_1}^{x_1+\Delta x} \frac{\partial \rho}{\partial t}\, dx + a \int_{x_1}^{x_1+\Delta x} \frac{\partial \rho}{\partial x}\, dx &= 0. \tag{1.16}
\end{align}

We use Leibniz’s rule and the fundamental theorem of calculus to then get
\begin{align}
\frac{d}{dt} \int_{x_1}^{x_1+\Delta x} \rho \, dx + a\left(\rho|_{x_1+\Delta x} - \rho|_{x_1}\right) &= 0, \tag{1.18}\\
A \Delta x \, \frac{d\bar{\rho}}{dt} + a A\left(\rho|_{x_1+\Delta x} - \rho|_{x_1}\right) &= 0, \tag{1.19}\\
\frac{dm}{dt} &= \rho|_{x_1} a A - \rho|_{x_1+\Delta x} a A. \tag{1.20}
\end{align}
We might then say that the time rate of change of mass enclosed is equal to the difference
of the mass flux in and the mass flux out.
It is obvious why Eq. (1.14) is an advection equation. Let us examine why it is linear. If
we take the differential operator L to be
\[
L = \frac{\partial}{\partial t} + a \frac{\partial}{\partial x}, \tag{1.21}
\]


Eq. (1.14) is stated as

Lρ = 0. (1.22)

The operator L is linear because it can be shown to satisfy the properties of a linear operator:

L(ρ + φ) = Lρ + Lφ, (1.23)


L(αρ) = αLρ, (1.24)

where ρ = ρ(x, t), φ = φ(x, t), and α is a constant.


Let us imagine that we are given an initial distribution of ρ:

ρ(x, 0) = f (x). (1.25)

Then it is easy to show that a solution which satisfies the linear advection equation, Eq. (1.14)
and the initial condition is

ρ(x, t) = f (x − at). (1.26)
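As a quick numerical check of this claim, one can verify that ρ(x, t) = f(x − at) satisfies Eq. (1.14) by finite differences. The following Python sketch is not part of the original notes; it assumes an illustrative Gaussian profile for f, though any differentiable profile works.

```python
import numpy as np

# Illustrative (hypothetical) smooth initial profile; any differentiable f works.
def f(x):
    return np.exp(-x**2)

a = 2.0          # advection velocity (illustrative value)
x, t = 1.3, 0.7  # arbitrary test point
h = 1e-5         # finite-difference step size

def rho(x, t):
    # Candidate solution of the linear advection equation, Eq. (1.26).
    return f(x - a * t)

# Central-difference approximations of the partial derivatives.
drho_dt = (rho(x, t + h) - rho(x, t - h)) / (2 * h)
drho_dx = (rho(x + h, t) - rho(x - h, t)) / (2 * h)

# Residual of Eq. (1.14); it should vanish to truncation error.
residual = drho_dt + a * drho_dx
print(abs(residual) < 1e-6)  # True
```

Repeating the check at other (x, t) points, or with other profiles f, gives the same result.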

Let us consider how this can be understood through the use of the general language of
coordinate transformations. We may imagine that our original coordinate system (x, t) maps
to a more convenient coordinate system which we will call (ξ, τ ):

x = x(ξ, τ ), (1.27)
t = t(ξ, τ ). (1.28)

We will find the following to be useful. We get expressions for the differentials to be
\begin{align}
dx &= \frac{\partial x}{\partial \xi}\, d\xi + \frac{\partial x}{\partial \tau}\, d\tau, \tag{1.29}\\
dt &= \frac{\partial t}{\partial \xi}\, d\xi + \frac{\partial t}{\partial \tau}\, d\tau. \tag{1.30}
\end{align}
In matrix form this is
\[
\begin{pmatrix} dx \\ dt \end{pmatrix}
= \underbrace{\begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial x}{\partial \tau} \\[4pt] \frac{\partial t}{\partial \xi} & \frac{\partial t}{\partial \tau} \end{pmatrix}}_{=\mathbf{J}}
\begin{pmatrix} d\xi \\ d\tau \end{pmatrix}. \tag{1.31}
\]

Here the Jacobian of the transformation is defined as


\[
\mathbf{J} = \begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial x}{\partial \tau} \\[4pt] \frac{\partial t}{\partial \xi} & \frac{\partial t}{\partial \tau} \end{pmatrix}. \tag{1.32}
\]

Our goal is to select a coordinate transformation which renders the solution of Eq. (1.14)
to be obvious. How to make such a choice is in general difficult. Leaving aside the important


question of how to make such a choice, we select our new set of coordinates to be given by
the linear transformation
x(ξ, τ ) = ξ + aτ, (1.33)
t(ξ, τ ) = τ. (1.34)
In matrix form, we have
    
\[
\begin{pmatrix} x \\ t \end{pmatrix}
= \underbrace{\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}}_{\mathbf{J}}
\begin{pmatrix} \xi \\ \tau \end{pmatrix}. \tag{1.35}
\]

Here the Jacobian J of the transformation is


\[
\mathbf{J} = \begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial x}{\partial \tau} \\[4pt] \frac{\partial t}{\partial \xi} & \frac{\partial t}{\partial \tau} \end{pmatrix} = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}. \tag{1.36}
\]
We have
J = |J| = det J = 1. (1.37)
The transformation is nonsingular. Expanding our notion of area to think of area in (x, t)
space, the transformation is area- and orientation-preserving. We have the unique inverse
transformation
    
\[
\begin{pmatrix} \xi \\ \tau \end{pmatrix}
= \underbrace{\begin{pmatrix} 1 & -a \\ 0 & 1 \end{pmatrix}}_{\mathbf{J}^{-1}}
\begin{pmatrix} x \\ t \end{pmatrix}, \tag{1.38}
\]

or simply,
ξ = x − at, (1.39)
τ = t. (1.40)
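The matrix algebra above is easy to confirm numerically. A minimal sketch (with an illustrative value a = 2) checking that J has unit determinant, per Eq. (1.37), and that its inverse matches Eq. (1.38):

```python
import numpy as np

a = 2.0  # advection speed; illustrative value

# Jacobian of the map (xi, tau) -> (x, t) = (xi + a*tau, tau), Eq. (1.36).
J = np.array([[1.0, a],
              [0.0, 1.0]])

# det J = 1: the transformation is nonsingular, area- and orientation-preserving.
print(np.isclose(np.linalg.det(J), 1.0))  # True

# The inverse matches Eq. (1.38), i.e. xi = x - a*t, tau = t.
J_inv = np.linalg.inv(J)
print(np.allclose(J_inv, [[1.0, -a], [0.0, 1.0]]))  # True
```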
To transform Eq. (1.14) into the new coordinate system, we need rules for how the partial
derivatives transform. The chain rule tells us
\begin{align}
\frac{\partial \rho}{\partial \xi} &= \frac{\partial x}{\partial \xi} \frac{\partial \rho}{\partial x} + \frac{\partial t}{\partial \xi} \frac{\partial \rho}{\partial t}, \tag{1.41}\\
\frac{\partial \rho}{\partial \tau} &= \frac{\partial x}{\partial \tau} \frac{\partial \rho}{\partial x} + \frac{\partial t}{\partial \tau} \frac{\partial \rho}{\partial t}. \tag{1.42}
\end{align}
In matrix form, this is
\begin{align}
\begin{pmatrix} \frac{\partial \rho}{\partial \xi} \\[4pt] \frac{\partial \rho}{\partial \tau} \end{pmatrix}
&= \underbrace{\begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial t}{\partial \xi} \\[4pt] \frac{\partial x}{\partial \tau} & \frac{\partial t}{\partial \tau} \end{pmatrix}}_{\mathbf{J}^T}
\begin{pmatrix} \frac{\partial \rho}{\partial x} \\[4pt] \frac{\partial \rho}{\partial t} \end{pmatrix}, \tag{1.43}\\
&= \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix}
\begin{pmatrix} \frac{\partial \rho}{\partial x} \\[4pt] \frac{\partial \rho}{\partial t} \end{pmatrix}. \tag{1.44}
\end{align}


Inverting, we find
\[
\begin{pmatrix} \frac{\partial \rho}{\partial x} \\[4pt] \frac{\partial \rho}{\partial t} \end{pmatrix}
= \underbrace{\begin{pmatrix} 1 & 0 \\ -a & 1 \end{pmatrix}}_{\left(\mathbf{J}^T\right)^{-1}}
\begin{pmatrix} \frac{\partial \rho}{\partial \xi} \\[4pt] \frac{\partial \rho}{\partial \tau} \end{pmatrix}. \tag{1.45}
\]

This is in short
\begin{align}
\frac{\partial \rho}{\partial x} &= \frac{\partial \rho}{\partial \xi}, \tag{1.46}\\
\frac{\partial \rho}{\partial t} &= -a \frac{\partial \rho}{\partial \xi} + \frac{\partial \rho}{\partial \tau}. \tag{1.47}
\end{align}

Now we apply these transformation rules to our physical equation, Eq. (1.14), to recast it as

\begin{align}
\underbrace{-a \frac{\partial \rho}{\partial \xi} + \frac{\partial \rho}{\partial \tau}}_{\partial \rho/\partial t} + a \underbrace{\frac{\partial \rho}{\partial \xi}}_{\partial \rho/\partial x} &= 0, \tag{1.48}\\
\frac{\partial \rho}{\partial \tau} &= 0. \tag{1.49}
\end{align}
Integrating, we get

ρ = ρ(ξ), (1.50)
= ρ(x − at). (1.51)

To satisfy the initial condition, Eq. (1.25), we must then insist that

ρ(x, t) = ρ(x − at) = f (x − at). (1.52)

Physically, this indicates that the initial signal f (x) maintains its structure but is advected
in the direction of increasing x with velocity a. Note, remarkably, that f may contain
discontinuous jumps. If we focus on a point with ξ = ξ0 , a constant, we can see how this
describes signal propagation. At ξ = ξ0 , we have ρ = ρ0 . And we can say

ξ0 = x − at. (1.53)

Taking the time derivative, we get


\begin{align}
\frac{d\xi_0}{dt} &= \frac{dx}{dt} - a, \tag{1.54}\\
0 &= \frac{dx}{dt} - a, \tag{1.55}\\
\frac{dx}{dt} &= a. \tag{1.56}
\end{align}


That is, a point where ξ, and thus ρ, remains constant propagates at constant velocity a.
One can use the rules for differentiation to check if the differential equation is satisfied.
With ξ = x − at, we have

\begin{align}
\rho(x, t) &= f(\xi), \tag{1.57}\\
\frac{\partial \rho}{\partial t} &= \frac{\partial \xi}{\partial t} \frac{df}{d\xi}, \tag{1.58}\\
&= -a \frac{df}{d\xi}, \tag{1.59}\\
\frac{\partial \rho}{\partial x} &= \frac{\partial \xi}{\partial x} \frac{df}{d\xi}, \tag{1.60}\\
&= \frac{df}{d\xi}. \tag{1.61}
\end{align}

Thus,
\[
\frac{\partial \rho}{\partial t} + a \frac{\partial \rho}{\partial x} = -a \frac{df}{d\xi} + a \frac{df}{d\xi} = 0. \tag{1.62}
\]
Let us use a common but less rigorous method to solve Eq. (1.14). This method will
clearly expose some important notions for more complicated systems and also identify the
nature of the signal propagation. For this discussion, we will imagine that ρ(x, t) is continuous
and everywhere differentiable, though it is possible to relax these assumptions. If so, the
rules of calculus of many variables tell us the total differential dρ is given by
\[
d\rho = \frac{\partial \rho}{\partial t}\, dt + \frac{\partial \rho}{\partial x}\, dx. \tag{1.63}
\]
Let us scale both sides by dt to get
\[
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial x} \frac{dx}{dt}. \tag{1.64}
\]
Consider now curves within (x, t) space on which
\[
\frac{dx}{dt} = a, \qquad x(0) = x_0. \tag{1.65}
\]
Such curves form a family of parallel lines given by

x = at + x0 , (1.66)

where x0 can take on many different values. On these curves, which are known as the
characteristics of the system, Eq. (1.64), a purely mathematical construct, reduces to
\[
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + a \frac{\partial \rho}{\partial x}; \qquad x = at + x_0. \tag{1.67}
\]


Figure 1.2: Sketch of propagation of ρ via linear advection with velocity a.

Employing the mathematical construct of Eq. (1.67) within our physical principle of Eq. (1.14),
we obtain
\begin{align}
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + a \frac{\partial \rho}{\partial x} &= 0, \qquad x = at + x_0, \tag{1.68}\\
\rho &= \rho_0, \qquad x = at + x_0. \tag{1.69}
\end{align}

That is to say, on a given characteristic curve, ρ maintains the value it had at t = 0, namely ρ0.
The value of ρ0 can vary from characteristic to characteristic! This is sketched in Fig. 1.2.
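The constancy of ρ along a characteristic can be confirmed directly. A small sketch, assuming an illustrative initial distribution f and one characteristic label x0 (both hypothetical choices, not fixed by the text):

```python
import numpy as np

def f(x):
    # Hypothetical initial distribution rho(x, 0); the argument holds for any f.
    return np.sin(x) + 2.0

a = 2.0                        # advection speed (illustrative)
x0 = 1.5                       # foot of one characteristic at t = 0
t = np.linspace(0.0, 5.0, 50)
x = a * t + x0                 # points along the characteristic x = a t + x0

# Solution of Eq. (1.26) sampled along the characteristic.
rho = f(x - a * t)

# rho equals its initial value f(x0) at every time on this curve.
print(np.allclose(rho, f(x0)))  # True
```

Repeating with a different x0 gives a different constant value, illustrating that ρ0 varies from characteristic to characteristic.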

1.2 One-dimensional unsteady energy diffusion


Let us perform a similar physical derivation of the so-called heat equation, a partial differential equation which is a manifestation of the first law of thermodynamics combined with an experimentally known relationship for the heat flux. The equation will describe the
process of energy diffusion. As depicted in Fig. 1.3 consider a volume V of dimension A by
∆x. We describe the heat flux in the x direction as qx . At the left boundary x1 , we have
diffusive heat flux in, which we denote as qx |x1 . At the right boundary at x1 + ∆x, we have diffusive heat flux out, qx |x1 +∆x . Recall the units of heat flux are J/m2 /s. We assume the
walls are thermally insulated; thus, there is no heat flux through the walls. The total energy


Figure 1.3: Sketch of one-dimensional energy diffusion.

within the volume is E with units of J. We also have the specific energy e = E/m with units
of J/kg, where m is the mass enclosed within V . We suppress any addition of mass into V ,
so that m can be considered a constant.
We allow for the specific energy e to vary with space and time: e = e(x, t). Certainly it
is possible to define an average specific energy within the volume, denoted with an over-bar:
\[
\bar{e}(t) = \frac{1}{\Delta x} \int_{x_1}^{x_1+\Delta x} e(x, t)\, dx. \tag{1.70}
\]

Our physical principle is the first law of thermodynamics, that being

\[
\text{change in total energy} = \text{heat in} - \underbrace{\text{work out}}_{=0}. \tag{1.71}
\]

There is no work for our system. But there is heat flux through the system boundaries. In
a combination of symbols and words, we can say

\[
\underbrace{\text{total energy @ } (t+\Delta t) - \text{total energy @ } t}_{\text{unsteady}}
= \underbrace{\text{energy flux in} - \text{energy flux out}}_{\underbrace{\text{advection}}_{=0} \text{ and diffusion}}. \tag{1.72}
\]

Mathematically, we can say

\begin{align}
E|_{t+\Delta t} - E|_t &= -\left(E_{\text{flux out}} - E_{\text{flux in}}\right), \tag{1.73}\\
\underbrace{\rho A \Delta x}_{\text{kg}} \underbrace{\left(\bar{e}|_{t+\Delta t} - \bar{e}|_t\right)}_{\text{J/kg}} &= -\underbrace{\left(q_x|_{x_1+\Delta x} - q_x|_{x_1}\right)}_{\text{J/m}^2\text{/s}} \underbrace{A \Delta t}_{\text{m}^2\,\text{s}}, \tag{1.74}\\
\rho\, \frac{\bar{e}|_{t+\Delta t} - \bar{e}|_t}{\Delta t} &= -\frac{q_x|_{x_1+\Delta x} - q_x|_{x_1}}{\Delta x}. \tag{1.75}
\end{align}


Now, let ∆x → 0 and ∆t → 0 so as to induce mean values to be local values, and finite
differences to be derivatives, yielding a differential representation of the first law of thermo-
dynamics of
\[
\rho \frac{\partial e}{\partial t} = -\frac{\partial q_x}{\partial x}. \tag{1.76}
\]
Now, let us invoke some relationships known from experiment. First, the specific internal
energy of many materials is well modeled by a so-called calorically perfect state equation:

e = cT + e0 . (1.77)

Here c is the constant specific heat with units J/kg/K, T is the temperature with units of
K, and e0 is a constant with units of J/kg whose value is unimportant, as for nonreactive
materials, it is only energy differences which have physical importance. The caloric state
equation simply states the specific internal energy of a material is proportional to its tem-
perature. Next, experiment reveals that Fourier’s law is a good model for the heat flux in
many materials:
\[
q_x = -k \frac{\partial T}{\partial x}. \tag{1.78}
\]
Here k is the so-called thermal conductivity of a material. It has units J/s/m/K. It is
sometimes dependent on T , but we will take it as a constant here. For agreement with
experiment, we must have k ≥ 0. The equation reflects the fact that heat flow in the
positive x direction is often detected to be proportional to a field in which temperature is
decreasing as x increases. In short, thermal energy flows from regions of high temperature
to low temperature.
Equation (1.78) along with the caloric state equation, Eq. (1.77) when substituted into
Eq. (1.76) yields
 
\begin{align}
\rho \frac{\partial}{\partial t}\left(cT + e_0\right) &= -\frac{\partial}{\partial x}\left(-k \frac{\partial T}{\partial x}\right), \tag{1.79}\\
\rho c \frac{\partial T}{\partial t} &= k \frac{\partial^2 T}{\partial x^2}, \tag{1.80}\\
\frac{\partial T}{\partial t} &= \underbrace{\frac{k}{\rho c}}_{\alpha} \frac{\partial^2 T}{\partial x^2}. \tag{1.81}
\end{align}

Here we have defined the thermal diffusivity α = k/ρ/c. Thermal diffusivity has units of
m2 /s. In final form we have
\[
\frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2}. \tag{1.82}
\]
Equation (1.82) is known as the heat equation.
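Equation (1.82) also lends itself to direct numerical integration. The following sketch, which is not part of the original notes, advances the heat equation with a simple explicit (FTCS) finite-difference scheme and compares the result against the exact single-sine-mode solution derived below as Eq. (1.89); all parameter values are illustrative.

```python
import numpy as np

# Explicit (FTCS) integration of dT/dt = alpha d2T/dx2 on x in [0, lam]
# with T = T0 held at both ends; illustrative parameter values.
alpha, lam, T0, C = 0.1, 1.0, 300.0, 20.0
nx = 101
x = np.linspace(0.0, lam, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha      # satisfies the stability bound alpha dt/dx^2 <= 1/2

T = T0 + C * np.sin(2 * np.pi * x / lam)   # sinusoidal initial condition
t = 0.0
while t < 0.1:
    # Update interior points; boundary values stay fixed at T0.
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

# Exact single-mode solution, Eq. (1.89), evaluated at the same time.
T_exact = (T0 + C * np.exp(-4 * np.pi**2 * alpha * t / lam**2)
           * np.sin(2 * np.pi * x / lam))
print(np.max(np.abs(T - T_exact)) < 0.1)  # True: discrete and exact decay agree
```

The in-place slice update evaluates its right-hand side fully before assignment, so the stencil uses only old-time values, as FTCS requires.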


Let us consider a particularly simple solution to the heat equation, Eq. (1.82). The
solution will rely on a judicious guess, which will later be systematized, and will help develop
physical intuition. Let us assume a solution of the form
 
\[
T(x, t) = T_0 + A(t) \sin\left(\frac{2\pi x}{\lambda}\right). \tag{1.83}
\]
Here we have presumed a sinusoidal form to capture the x variation of the solution with a
single sine function of constant wavelength λ. We also have the constant T0 . We allow for a
time-dependent amplitude A(t). Let us seek to solve for A(t) by substituting our assumed
solution form into the heat equation, Eq. (1.82). Doing so yields
      
\begin{align}
\frac{\partial}{\partial t}\left(T_0 + A(t) \sin\left(\frac{2\pi x}{\lambda}\right)\right) &= \alpha \frac{\partial}{\partial x}\left(\frac{\partial}{\partial x}\left(T_0 + A(t) \sin\left(\frac{2\pi x}{\lambda}\right)\right)\right), \tag{1.84}\\
\sin\left(\frac{2\pi x}{\lambda}\right) \frac{dA}{dt} &= \alpha A(t) \frac{2\pi}{\lambda} \frac{\partial}{\partial x}\left(\cos\left(\frac{2\pi x}{\lambda}\right)\right), \tag{1.85}\\
\sin\left(\frac{2\pi x}{\lambda}\right) \frac{dA}{dt} &= -\alpha A(t) \frac{4\pi^2}{\lambda^2} \sin\left(\frac{2\pi x}{\lambda}\right), \tag{1.86}\\
\frac{dA}{dt} &= -\frac{4\pi^2 \alpha}{\lambda^2} A(t). \tag{1.87}
\end{align}
Remarkably, the sine function cancels on both sides of the equation leaving us with a first
order linear ordinary differential equation for the time-dependent amplitude A(t). Solving
yields
 
\[
A(t) = C \exp\left(-\frac{4\pi^2 \alpha}{\lambda^2} t\right). \tag{1.88}
\]
Thus recombining to form T (x, t), we get
   
\[
T(x, t) = T_0 + C \exp\left(-\frac{4\pi^2 \alpha}{\lambda^2} t\right) \sin\left(\frac{2\pi x}{\lambda}\right). \tag{1.89}
\]
The solution describes a temperature field with an isothermal value of T = T0 for x = 0 and
x = λ. The initial value of the temperature field is
 
\[
T(x, 0) = T_0 + C \sin\left(\frac{2\pi x}{\lambda}\right). \tag{1.90}
\]
As t → ∞, we find that T (x, t) → T0 , a constant. The time constant τ of amplitude decay
is by inspection
\[
\tau = \frac{\lambda^2}{4\pi^2 \alpha}. \tag{1.91}
\]
With λ2 having units of m2 and thermal diffusivity having units of m2 /s, it is clear that the
time constant has units of s. Importantly, we learn that rapid decay is induced by



Figure 1.4: Plot of T (x, t) for one-dimensional unsteady energy diffusion problem.

• small wavelength λ, and

• high diffusivity α.

We plot results for T0 = 300 K, C = 20 K, α = 0.1 m2 /s, λ = 1 m in Fig. 1.4. For this case,
the time constant of relaxation is
\[
\tau = \frac{(1 \ \text{m})^2}{4\pi^2 \left(0.1 \ \tfrac{\text{m}^2}{\text{s}}\right)} = 0.253 \ \text{s}. \tag{1.92}
\]

The figure clearly displays the initial sinusoidal temperature distribution along with its decay
as t ∼ τ .
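As a small arithmetic check of the decay rate, the following sketch computes the time constant of Eq. (1.91) for the parameter values above and confirms that, per Eq. (1.88), the amplitude falls by a factor of e over one time constant:

```python
import numpy as np

# Time constant tau = lam^2 / (4 pi^2 alpha), Eq. (1.91), with the
# values used in the text: alpha = 0.1 m^2/s, lam = 1 m.
alpha, lam = 0.1, 1.0
tau = lam**2 / (4 * np.pi**2 * alpha)
print(round(tau, 3))  # 0.253, matching Eq. (1.92)

# From Eq. (1.88), the amplitude ratio after one time constant is 1/e.
ratio = np.exp(-4 * np.pi**2 * alpha * tau / lam**2)
print(np.isclose(ratio, 1 / np.e))  # True
```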

1.3 Two-dimensional steady energy diffusion


We can perform a similar analysis for two-dimensional steady energy diffusion, such as
depicted in Fig. 1.5. A key difference is the presence of y variation. We shall assume a
differential element in the y direction with dimension ∆y; while it will be less important
because we neglect variation in the z direction, we also take the differential element in the
z direction to have value ∆z. We begin with Eq. (1.73) and analyze.

\begin{align}
E|_{t+\Delta t} - E|_t &= -\left(E_{\text{flux out}} - E_{\text{flux in}}\right), \tag{1.93}\\
\rho \Delta x \Delta y \Delta z \left(\bar{e}|_{t+\Delta t} - \bar{e}|_t\right) &= -\left(q_x|_{x_1+\Delta x} - q_x|_{x_1}\right) \Delta y \Delta z \Delta t \nonumber\\
&\quad - \left(q_y|_{y_1+\Delta y} - q_y|_{y_1}\right) \Delta x \Delta z \Delta t, \tag{1.94}\\
\rho\, \frac{\bar{e}|_{t+\Delta t} - \bar{e}|_t}{\Delta t} &= -\left(\frac{q_x|_{x_1+\Delta x} - q_x|_{x_1}}{\Delta x} + \frac{q_y|_{y_1+\Delta y} - q_y|_{y_1}}{\Delta y}\right). \tag{1.95}
\end{align}


Figure 1.5: Sketch of two-dimensional energy diffusion.

Now, let ∆x → 0, ∆y → 0, and ∆t → 0, yielding


\[
\rho \frac{\partial e}{\partial t} = -\frac{\partial q_x}{\partial x} - \frac{\partial q_y}{\partial y}. \tag{1.96}
\]
Defining the heat flux vector q as
 
\[
\mathbf{q} = \begin{pmatrix} q_x \\ q_y \end{pmatrix}, \tag{1.97}
\]
and the differential operator ∇ for Cartesian coordinates as
\[
\nabla = \begin{pmatrix} \frac{\partial}{\partial x} \\[4pt] \frac{\partial}{\partial y} \end{pmatrix}, \tag{1.98}
\]

the two-dimensional energy diffusion equation, Eq. (1.96), can be rewritten¹ as


\[
\rho \frac{\partial e}{\partial t} = -\nabla^T \cdot \mathbf{q}. \tag{1.99}
\]
In two dimensions, Fourier’s law, Eq. (1.78), extends to the vector form

q = −k∇T. (1.100)

As an aside, we note that because the heat flux vector is expressed as the gradient of a scalar,
T , we can say
• the scalar field T (x, y) can be considered to be a potential field with energy diffusing
in the direction of decreasing potential T , and
¹ We use the unusual notation ∇^T here, which formally only applies to Cartesian geometries.


• the vector field q(x, y) is curl-free, ∇ × q = 0. That is because any vector that is the
gradient of a potential is guaranteed curl-free: ∇ × ∇T ≡ 0.
In terms of scalar components we can say qx = −k ∂T /∂x and qy = −k ∂T /∂y. Substituting
the caloric state equation, Eq. (1.77), and the multi-dimensional Fourier's law, Eq. (1.100),
into our energy diffusion equation, Eq. (1.96), we get
\[
\rho c \frac{\partial T}{\partial t} = -\nabla^T \cdot \underbrace{(-k \nabla T)}_{\mathbf{q}}. \tag{1.101}
\]

Again, while k may be a function of T for some materials, we will take it to be a constant
yielding
\[
\frac{\partial T}{\partial t} = \alpha \nabla^2 T, \tag{1.102}
\]
where we have once again employed the definition of thermal diffusivity, α = k/ρ/c.² We have also defined the Laplacian operator as ∇² = ∇^T · ∇. We expand this important operator
for a two-dimensional Cartesian system as
\[
\nabla^2 = \nabla^T \cdot \nabla = \begin{pmatrix} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x} \\[4pt] \frac{\partial}{\partial y} \end{pmatrix} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}. \tag{1.103}
\]
For the important case of a steady state temperature distribution, we have T with no vari-
ation with t. In this case Eq. (1.102) reduces to the so-called Laplace’s3 equation:

∇2 T = 0. (1.104)

Remarkably, the diffusivity does not affect the temperature distribution in the steady state
limit. In two-dimensions, this can be written as
\[
\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} = 0. \tag{1.105}
\]
In a similar fashion as for the previous section, let us consider a particularly simple
solution to Laplace’s equation, Eq. (1.105):
 
\[
T(x, y) = T_0 + g(y) \sin\left(\frac{2\pi x}{\lambda}\right). \tag{1.106}
\]
Once again, T0 is a constant with units of K, and λ is a constant with units of m which
can be interpreted as the wavelength of the disturbance in the x direction. We seek the
² When extended to multi-dimensional materials with linear anisotropy, Fourier's law takes on a vector form q = −K · ∇T, where K is a positive definite symmetric tensor, embodying the material's anisotropy. In such cases, the heat equation becomes ρc ∂T/∂t = ∇^T · (K · ∇T).
³ Pierre-Simon Laplace, 1749-1827, French mathematician and physicist.


function g(y) which allows Laplace’s equation to be satisfied. Let us substitute Eq. (1.106)
into Eq. (1.105):
     
\begin{align}
\frac{\partial^2}{\partial x^2}\left(T_0 + g(y) \sin\left(\frac{2\pi x}{\lambda}\right)\right) + \frac{\partial^2}{\partial y^2}\left(T_0 + g(y) \sin\left(\frac{2\pi x}{\lambda}\right)\right) &= 0, \tag{1.107}\\
-g(y) \frac{4\pi^2}{\lambda^2} \sin\left(\frac{2\pi x}{\lambda}\right) + \frac{d^2 g}{dy^2} \sin\left(\frac{2\pi x}{\lambda}\right) &= 0, \tag{1.108}\\
\frac{d^2 g}{dy^2} - \frac{4\pi^2}{\lambda^2} g &= 0. \tag{1.109}
\end{align}
This is a second order linear differential equation. We recall such equations may be solved by assuming solutions of the form g(y) = Ce^{ry}. Substituting the assumed form into the differential equation gives Cr²e^{ry} − (4π²/λ²)Ce^{ry} = 0. We cancel terms to get the characteristic polynomial: r² − 4π²/λ² = 0. We solve to get r = ±2π/λ. Thus, there are two functions that satisfy. Because the original equation is linear, linear combinations also satisfy; thus g(y) = K₁e^{2πy/λ} + K₂e^{−2πy/λ}, where K₁ and K₂ are constants. The exponentials may be cast in terms of hyperbolic functions as we recall sinh y = (e^y − e^{−y})/2 and cosh y = (e^y + e^{−y})/2.
This yields the general solution

g(y) = C1 sinh(2πy/λ) + C2 cosh(2πy/λ),    (1.110)

where C1 and C2 are arbitrary constants. Let us select C2 = 0 so as to yield a solution for
the temperature field of

T(x, y) = T0 + C1 sinh(2πy/λ) sin(2πx/λ).    (1.111)
We note that T = T0 wherever x = 0, x = λ, or y = 0. We plot results for T0 = 300 K,
C1 = 0.1 K, λ = 1 m in Fig. 1.6.
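As a quick check, the claim that Eq. (1.111) satisfies Laplace's equation can be verified numerically. The following Python sketch, which is our own illustration and not part of the original derivation, approximates the Laplacian by central differences at an arbitrary interior point; the function names are our own choices.

```python
import math

# Spot-check that T(x, y) = T0 + C1 sinh(2*pi*y/lam) sin(2*pi*x/lam), Eq. (1.111),
# satisfies Laplace's equation; parameter values follow those quoted for Fig. 1.6.
T0, C1, lam = 300.0, 0.1, 1.0

def T(x, y):
    return T0 + C1 * math.sinh(2 * math.pi * y / lam) * math.sin(2 * math.pi * x / lam)

def laplacian(f, x, y, h=1e-4):
    # Second-order central differences for f_xx + f_yy.
    return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)
            + f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2

# The residual vanishes up to truncation and roundoff error.
residual = laplacian(T, 0.3, 0.4)
```

The same check can be repeated at any interior point; the residual stays at the level of the finite-difference error.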
Lastly, we consider global energy for our system. We integrate Eq. (1.99) over a fixed
volume V bounded by surface S with unit outer normal n. First apply the volume integration
operator to both sides of Eq. (1.99):

∫_V ρ ∂e/∂t dV = −∫_V ∇ᵀ · q dV.    (1.112)

Apply Leibniz's rule to the left side and Gauss' theorem to the right side to obtain

d/dt ∫_V ρe dV = −∫_S qᵀ · n dS.    (1.113)

The time rate of change of energy within V can be attributed solely to the net flux of energy
crossing the boundary S. In the steady state limit, we have

∫_S qᵀ · n dS = 0.    (1.114)

That is to say, in order for there to be no change in the energy within the volume, the net
energy entering must be zero.
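For the steady solution of Eq. (1.111), the vanishing of the net boundary flux in Eq. (1.114) can be checked directly. The sketch below is our own construction: it evaluates qᵀ · n around the rectangle [0, λ] × [0, b] by midpoint quadrature, with k, C1, and b being arbitrary illustrative values.

```python
import math

# Heat flux components q = -k grad T for T of Eq. (1.111); k, C1, lam, b are
# illustrative choices, not values fixed by the text.
k, C1, lam, b = 1.0, 0.1, 1.0, 0.5
c = 2.0 * math.pi / lam

def qx(x, y):
    return -k * C1 * c * math.sinh(c * y) * math.cos(c * x)

def qy(x, y):
    return -k * C1 * c * math.cosh(c * y) * math.sin(c * x)

def midpoint(f, lo, hi, n=2000):
    # Composite midpoint rule for the integral of f over [lo, hi].
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# Outward unit normals: top (0,1), bottom (0,-1), right (1,0), left (-1,0).
net_flux = (midpoint(lambda x: qy(x, b), 0.0, lam)
            - midpoint(lambda x: qy(x, 0.0), 0.0, lam)
            + midpoint(lambda y: qx(lam, y), 0.0, b)
            - midpoint(lambda y: qx(0.0, y), 0.0, b))
# net_flux vanishes, consistent with Eq. (1.114).
```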


Figure 1.6: Plot of T(x, y) for steady two-dimensional energy diffusion problem; the axes
are x (m), y (m), and T (K).

Problems

1. Consider the solution of the linear advection equation

   ∂ρ/∂t + a ∂ρ/∂x = 0.

   For a = 2, x ∈ [0, 5], t ∈ [0, 5], plot contours and three-dimensional surfaces of ρ(x, t)
   for the following initial conditions:

   (a) ρ(x, 0) = sin(πx),
   (b) ρ(x, 0) = H(x) − H(x − 1), where H(x) is the Heaviside⁴ unit step function.

⁴ Oliver Heaviside, 1850-1925, English electrical engineer.



Chapter 2

Classification of partial differential equations

see Mei, Chapter 2.

Here we consider how to classify partial differential equations.

2.1 General method

Many important partial differential equations can be cast in the so-called quasi-linear form
of a system of first order partial differential equations

Aij ∂uj/∂t + Bij ∂uj/∂x = ci,   i = 1, . . . , N;  j = 1, . . . , N.    (2.1)

In Gibbs notation, we could say

A · ∂u/∂t + B · ∂u/∂x = c.    (2.2)

Here we have N dependent variables uj with j = 1, . . . , N. The independent variables are x
and t. The terms Aij and Bij may be functions of x, t, and any of the uj s. Both Aij and
Bij are elements of N × N nonconstant matrices. The term ci can be a function of x, t, and
any of the uj s; it is an element of an N × 1 column matrix.
As described by Whitham,¹ there is a general technique to analyze such equations. First
pre-multiply both sides of the equation by a yet-to-be-determined row of variables ℓi :

ℓi Aij ∂uj/∂t + ℓi Bij ∂uj/∂x = ℓi ci,    (2.3)

ℓᵀ · A · ∂u/∂t + ℓᵀ · B · ∂u/∂x = ℓᵀ · c.    (2.4)

¹ Gerald Beresford Whitham, 1927-2014, applied mathematician and developer of theory for nonlinear
wave propagation.


The method hinges upon choosing ℓi to render the left side of Eq. (2.4) to be of a form
similar to ∂/∂t + λ(∂/∂x), where λ is a scalar that may be a variable. This is similar to the
form analyzed in Eq. (1.14), ∂ρ/∂t + a ∂ρ/∂x = 0, except here we are allowing λ to be a
variable.
Let us define the variable mj such that

ℓi Aij ∂uj/∂t + ℓi Bij ∂uj/∂x = mj (∂uj/∂t + λ ∂uj/∂x),    (2.5)
                              = mj duj/dt   on   dx/dt = λ.    (2.6)

Reorganizing Eq. (2.5), we get

(ℓi Aij − mj) ∂uj/∂t + (ℓi Bij − mj λ) ∂uj/∂x = 0.    (2.7)

Because we expect ∂uj/∂t to be linearly independent of ∂uj/∂x, we insist

ℓi Aij = mj,    (2.8)
ℓi Bij = λ mj.    (2.9)

Scaling Eq. (2.8) by λ gives

λ ℓi Aij = λ mj.    (2.10)

Subtracting Eq. (2.9) from Eq. (2.10) to eliminate λ mj gives

ℓi (λ Aij − Bij) = 0,    (2.11)
ℓᵀ · (λA − B) = 0ᵀ.    (2.12)

This is a generalized left eigenvalue problem, where λ is known as a generalized eigenvalue
in the second sense and ℓ is a generalized left eigenvector. One has nontrivial ℓ when

det (λA − B) = 0.    (2.13)

If A is invertible, we can post-multiply Eq. (2.12) by A⁻¹ to recover an ordinary left
eigenvalue problem:

ℓᵀ · (λI − B · A⁻¹) = 0ᵀ.    (2.14)
We adopt the following classification nomenclature, following Zauderer, p. 135:
• hyperbolic: All generalized eigenvalues λ are real and there exist N linearly independent
left generalized eigenvectors ℓ.
• parabolic: All generalized eigenvalues λ that exist are real, but there exist fewer than
N linearly independent generalized left eigenvectors ℓ.
• elliptic: All the generalized eigenvalues λ are complex.
• mixed: Some generalized eigenvalues λ may be real, others complex, and there may or
may not be N linearly independent generalized left eigenvectors ℓ.


2.2 Application to standard problems

2.2.1 Wave equation: hyperbolic

By inspection, the linear advection equation, Eq. (1.14), is already in the appropriate form.
Let us examine a common extension of the linear advection equation, the so-called wave
equation:

∂²y/∂t² = a² ∂²y/∂x².    (2.15)
We need to convert this second order equation into a system of first order equations. To
enable this, let us define two new variables, v and w:

v ≡ ∂y/∂t,    (2.16)
w ≡ ∂y/∂x.    (2.17)

Substituting Eqs. (2.16) and (2.17) into the wave equation, Eq. (2.15), we get our first first
order partial differential equation:

∂v/∂t = a² ∂w/∂x.    (2.18)

We can next differentiate Eq. (2.16) with respect to x and Eq. (2.17) with respect to t to get

∂v/∂x = ∂²y/∂x∂t,    (2.19)
∂w/∂t = ∂²y/∂t∂x.    (2.20)

Now as long as y is sufficiently continuous and differentiable, the order of differentiation
does not matter, so we can take ∂²y/∂x∂t = ∂²y/∂t∂x. This enables us to equate Eqs. (2.19)
and (2.20), yielding our second first order partial differential equation:

∂v/∂x = ∂w/∂t.    (2.21)

We recast our two first order equations, Eqs. (2.18) and (2.21), as

∂v/∂t − a² ∂w/∂x = 0,    (2.22)
∂w/∂t − ∂v/∂x = 0.    (2.23)


We next cast our two first order equations, Eqs. (2.22) and (2.23), in the general form of
Eq. (2.1) to get (writing matrices row by row, with rows separated by semicolons)

[1 0; 0 1] [∂v/∂t; ∂w/∂t] + [0 −a²; −1 0] [∂v/∂x; ∂w/∂x] = [0; 0],    (2.24)

where the first matrix is A, the second is B, and the right side is c. Here our vector of
dependent variables is

u = [v; w].    (2.25)

The associated eigenvalue problem is

det (λA − B) = det [λ a²; 1 λ] = 0.    (2.26)

Solving gives

λ² − a² = 0,    (2.27)
λ = ±a.    (2.28)

We have two real and distinct eigenvalues. Let us find the eigenvectors:

ℓᵀ · (λA − B) = 0ᵀ,    (2.29)
[ℓ1 ℓ2] [λ a²; 1 λ] = [0 0],    (2.30)
[ℓ1 ℓ2] [±a a²; 1 ±a] = [0 0].    (2.31)

This yields two linearly dependent equations:

±a ℓ1 + ℓ2 = 0,    (2.32)
a² ℓ1 ± a ℓ2 = 0.    (2.33)

If we multiply the first by ±a, we get the second. It is obvious the solution is not unique.
If we take ℓ1 = s, where s is any constant, then ℓ2 = ∓as. Let us take s = 1, and thus take
the eigenvectors to be

ℓ = [1; ∓a].    (2.34)

Importantly, not only do we have two distinct and real eigenvalues, but we also have two
linearly independent eigenvectors. Thus our wave equation is hyperbolic.


We lastly use the eigenvalues and eigenvectors to recast our original system. Multiplying
both sides of Eq. (2.24) by ℓᵀ = [1 ∓a], we get

[1 ∓a] [1 0; 0 1] [∂v/∂t; ∂w/∂t] + [1 ∓a] [0 −a²; −1 0] [∂v/∂x; ∂w/∂x] = [1 ∓a] [0; 0],    (2.35)
[1 ∓a] [∂v/∂t; ∂w/∂t] + [±a −a²] [∂v/∂x; ∂w/∂x] = 0,    (2.36)
∂v/∂t ∓ a ∂w/∂t ± a ∂v/∂x − a² ∂w/∂x = 0,    (2.37)
(∂v/∂t ± a ∂v/∂x) ∓ a (∂w/∂t ± a ∂w/∂x) = 0.    (2.38)
This reduces to two sets of differential equations valid on two different sets of characteristic
lines:

dv/dt − a dw/dt = 0   on   x = at + x0,    (2.39)
dv/dt + a dw/dt = 0   on   x = −at + x0.    (2.40)

These combine to form

d/dt (v − aw) = 0   on   x = at + x0,    (2.41)
d/dt (v + aw) = 0   on   x = −at + x0.    (2.42)

Integrating, we find

v − aw = C1   on   x = at + x0,    (2.43)
v + aw = C2   on   x = −at + x0.    (2.44)

That is to say, the combinations v ∓ aw are preserved on lines for which x = ±at + x0. In
this solution signals are propagated in two distinct directions, and those signals are
preserved as they propagate. The constants C1 and C2 are known as Riemann² invariants for
the system. The Riemann invariants are only invariant on a given characteristic and may vary
from one characteristic to another.
The just-completed analysis is common, and is often described as converting the partial
differential equation to ordinary differential equations valid along so-called characteristic
lines in x − t space. This is somewhat unsatisfying, as the variation of C1 and C2 from
characteristic to characteristic reflects the fact that we really are considering partial
differential equations. Motivated by the existence of characteristic lines on which linear
combinations of v and w must retain a constant value, let

² Bernhard Riemann, 1826-1866, German mathematician.


us seek a coordinate transformation to clarify this. More importantly, the formal coordinate
transform will show us how we really are considering a partial differential equation in a more
easily analyzed space.
We take

[ξ; η] = [1 −a; 1 a] [x; t].    (2.45)

Thus,

ξ(x, t) = x − at,    (2.46)
η(x, t) = x + at.    (2.47)

Inverting, we get

[x; t] = [1/2 1/2; −1/(2a) 1/(2a)] [ξ; η].    (2.48)

Here the Jacobian J of the transformation is

J = [1/2 1/2; −1/(2a) 1/(2a)].    (2.49)

Here we find

J = |J| = det J = 1/(2a);    (2.50)

we see the transformation is only area-preserving when a = ±1/2, and is
orientation-preserving when a > 0.
We need rules for how the partial derivatives transform. The chain rule tells us

[∂v/∂ξ; ∂v/∂η] = [∂x/∂ξ ∂t/∂ξ; ∂x/∂η ∂t/∂η] [∂v/∂x; ∂v/∂t],    (2.51)
               = [1/2 −1/(2a); 1/2 1/(2a)] [∂v/∂x; ∂v/∂t],    (2.52)

where the matrix is Jᵀ. Inverting, we find

[∂v/∂x; ∂v/∂t] = [1 1; −a a] [∂v/∂ξ; ∂v/∂η],    (2.53)

where the matrix is (Jᵀ)⁻¹.


This is in short

∂/∂x = ∂/∂ξ + ∂/∂η,    (2.54)
∂/∂t = −a ∂/∂ξ + a ∂/∂η.    (2.55)

Employing these transformed operators on our original wave equation, Eq. (2.15), we get

(−a ∂/∂ξ + a ∂/∂η)(−a ∂/∂ξ + a ∂/∂η) y = a² (∂/∂ξ + ∂/∂η)(∂/∂ξ + ∂/∂η) y,    (2.56)
a² (∂²y/∂ξ² − 2 ∂²y/∂ξ∂η + ∂²y/∂η²) = a² (∂²y/∂ξ² + 2 ∂²y/∂ξ∂η + ∂²y/∂η²),    (2.57)
−2a² ∂²y/∂ξ∂η = 2a² ∂²y/∂ξ∂η,    (2.58)
∂²y/∂ξ∂η = 0.    (2.59)
We integrate this equation first with respect to ξ to get

∂y/∂η = h(η).    (2.60)

Note that when we integrate homogeneous partial differential equations, we must include
an arbitrary function rather than the arbitrary constant we get for ordinary differential
equations. We next integrate with respect to η to get

y = ∫_0^η h(η̂) dη̂ + g(ξ).    (2.61)

The integral of h(η) simply yields another function of η, which we call f(η). Thus the general
solution to ∂²y/∂ξ∂η = 0 is

y(ξ, η) = f(η) + g(ξ).    (2.62)

We might say that we have separated the solution into two functions of two independent
variables. Here the separated functions were combined as a sum. In other problems, the
separated functions will combine as a product. In terms of our original coordinates, we can
say

y(x, t) = f(x + at) + g(x − at).    (2.63)

We note that f and g are completely arbitrary functions. This is known as the d'Alembert³
solution. Compared to the related solution of the linear advection equation, we see that

³ Jean le Rond d'Alembert, 1717-1783, French mathematician.


two independent modes are admitted for signal propagation. One travels in the direction of
increasing x, the other in the direction of decreasing x. Both have speed a. The functional
forms of f and g admit discontinuous solutions, and the forms are preserved as t advances.
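The d'Alembert form can be spot-checked numerically for any smooth choices of f and g. The Python sketch below is our own illustration, with arbitrary f, g, and a; it confirms by finite differences that y = f(x + at) + g(x − at) satisfies ∂²y/∂t² = a² ∂²y/∂x² at a sample point.

```python
import math

a = 2.0                                # wave speed (arbitrary choice)
f = lambda s: math.exp(-s * s)         # arbitrary smooth left-running component
g = lambda s: math.sin(3.0 * s)        # arbitrary smooth right-running component

def y(x, t):
    # d'Alembert solution, Eq. (2.63)
    return f(x + a * t) + g(x - a * t)

def second_derivative(func, s, h=1e-4):
    # Central-difference second derivative.
    return (func(s + h) - 2.0 * func(s) + func(s - h)) / h**2

x0, t0 = 0.3, 0.7
ytt = second_derivative(lambda t: y(x0, t), t0)
yxx = second_derivative(lambda x: y(x, t0), x0)
residual = ytt - a * a * yxx   # vanishes up to discretization error
```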

2.2.2 Heat equation: parabolic

Let us analyze the heat equation, Eq. (1.82), as a system of first order partial differential
equations. Our earlier analysis has already assisted in this. Equation (1.82) can be
considered a combination of the energy conservation principle, a caloric state equation, and
Fourier's law. Taking the first of our equations as the combination of energy conservation,
Eq. (1.76), and the caloric state equation, Eq. (1.77), and the second as Fourier's law,
Eq. (1.78), we write

ρc ∂T/∂t = −∂qx/∂x,    (2.64)
qx = −k ∂T/∂x.    (2.65)

This can be recast as

[ρc 0; 0 0] [∂T/∂t; ∂qx/∂t] + [0 1; −k 0] [∂T/∂x; ∂qx/∂x] = [0; qx],    (2.66)

where again the first matrix is A and the second is B. Here our vector u is

u = [T; qx].    (2.67)

The associated eigenvalue problem is

det (λA − B) = det [λρc −1; k 0] = 0.    (2.68)

Solving gives

λρc(0) + k = 0,    (2.69)
λ → ∞.    (2.70)

One cannot find any associated eigenvectors ℓ. Because there is an insufficient number of
eigenvectors on which to project our system, the heat equation is parabolic.
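The degeneracy is easy to see by evaluating det(λA − B) directly: for the heat system it is independent of λ. A brief numerical check follows; this is our own sketch, and the values of ρc and k are arbitrary positive choices.

```python
# det(lambda*A - B) for the heat system of Eq. (2.66) equals k for every lambda,
# so no finite generalized eigenvalue exists; rho_c and k are arbitrary here.
rho_c, k = 2.0, 3.0
A = [[rho_c, 0.0], [0.0, 0.0]]
B = [[0.0, 1.0], [-k, 0.0]]

def pencil_det(lam):
    M = [[lam * A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

values = [pencil_det(lam) for lam in (-10.0, 0.0, 1.0, 50.0)]  # all equal k
```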


2.2.3 Laplace's equation: elliptic

Let us next analyze in a similar fashion Laplace's equation, Eq. (1.105), ∂²T/∂x² + ∂²T/∂y²
= 0. Here the independent variables are x and y, rather than x and t. Now our Laplace's
equation arose from the two-dimensional time-independent form of Eq. (1.99), which is

ρ ∂e/∂t = −∇ᵀ · q,  with  ∂e/∂t = 0,    (2.71)
∇ᵀ · q = 0,    (2.72)
∂qx/∂x + ∂qy/∂y = 0.    (2.73)

This is our first first order partial differential equation. To aid this analysis, let us recall
from Eq. (1.100) that

qx = −k ∂T/∂x,    (2.74)
qy = −k ∂T/∂y.    (2.75)

We then see that

∂qx/∂y = −k ∂²T/∂y∂x,    (2.76)
∂qy/∂x = −k ∂²T/∂x∂y.    (2.77)

Equating the mixed second partial derivatives, we get our second first order partial
differential equation:

∂qx/∂y = ∂qy/∂x.    (2.78)
Equations (2.73) and (2.78) form the system

∂qx/∂x + ∂qy/∂y = 0,    (2.79)
∂qy/∂x − ∂qx/∂y = 0.    (2.80)

As an aside, we note that in two-dimensional incompressible, irrotational fluid mechanics,
q plays the role of the velocity vector, Eq. (2.79) represents an incompressibility condition,
∇ᵀ · q = 0, and Eq. (2.80) represents an irrotationality condition, ∇ × q = 0. Equations (2.79)
and (2.80) can be recast as

[1 0; 0 1] [∂qx/∂x; ∂qy/∂x] + [0 1; −1 0] [∂qx/∂y; ∂qy/∂y] = [0; 0],    (2.81)

where again the first matrix is A and the second is B.


Here our vector u is

u = [qx; qy].    (2.82)

The associated eigenvalue problem is

det (λA − B) = det [λ −1; 1 λ] = 0.    (2.83)

Solving gives

λ² + 1 = 0,    (2.84)
λ = ±i.    (2.85)

The eigenvalues are distinct but not real. The presence of complex eigenvalues indicates the
equation cannot be written in characteristic form, and that finite-speed signaling phenomena
are not present in the solution. Because its eigenvalues are imaginary, Laplace's equation is
elliptic.
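Numerically, the absence of real characteristics shows up as det(λA − B) = λ² + 1 having no real root. A short check, our own sketch:

```python
def pencil_det(lam):
    # det(lam*A - B) for the Laplace system of Eq. (2.81): A = I, B = [[0,1],[-1,0]].
    A = [[1.0, 0.0], [0.0, 1.0]]
    B = [[0.0, 1.0], [-1.0, 0.0]]
    M = [[lam * A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# On a sample of the real line, lam^2 + 1 never drops below 1 ...
min_on_reals = min(pencil_det(n / 100.0) for n in range(-1000, 1001))
# ... while lam = +/- i are exact roots.
root_residuals = [abs(pencil_det(1j)), abs(pencil_det(-1j))]
```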

Problems


Chapter 3

Separation of variables

see Mei, Chapters 4, 5.

Here we consider the technique of separation of variables. This method is appropriate for a
wide variety of linear partial differential equations.

3.1 Well-posedness

An important philosophical notion permeates the literature of partial differential equations,
that being that a partial differential equation should be accompanied by a set of initial
and/or boundary conditions that render its solutions consistent with those observed in nature
for systems it is intended to model. This idea was developed most notably by Hadamard,¹ who
established three criteria for a problem to be well-posed. The solution must

• exist,
• be uniquely determined, and
• depend continuously on the initial and/or boundary data.
Using these criteria, one can, for example, show that Laplace’s equation, Eq. (1.105), is
well-posed if values of T are specified on the full boundary of a given domain. His famous
counterexample shows how Laplace’s equation is ill-posed if values of T and its derivatives are
simultaneously imposed on a portion of the boundary. Let us consider this counterexample,
which will also serve as a vehicle to introduce the main topic of this chapter: separation of
variables as a means to solve partial differential equations.

Example 3.1
Analyze

∂²T/∂x² + ∂²T/∂y² = 0,   y > 0,    (3.1)

¹ Jacques Hadamard, 1865-1963, French mathematician.


with boundary conditions

T(x, 0) = 0,   ∂T/∂y (x, 0) = sin(nx)/n.    (3.2)

A sketch of this scenario is shown in Fig. 3.1.

Figure 3.1: Configuration for the counterexample of Hadamard to assess the well-posedness
of Laplace's equation: ∇²T = 0 in the region y > 0, with T(x, 0) = 0 on the boundary.

Thus we have specified both T and its derivative on the boundary y = 0. Note that as n → ∞,
∂T/∂y → 0. Let us assume the solution T(x, y) can be separated into the following form:

T(x, y) = A(x)B(y).    (3.3)

In contrast to the d'Alembert solution, whose separated functions combine as a sum, here we
have the separated functions combine as a product. We shall examine if this assumption leads
to a viable solution. With our assumption, we get the following expressions for various
derivatives:

∂T/∂x = B(y) dA/dx,   ∂²T/∂x² = B(y) d²A/dx²,    (3.4)
∂T/∂y = A(x) dB/dy,   ∂²T/∂y² = A(x) d²B/dy².    (3.5)

We substitute these into Eq. (3.1) to get

B(y) d²A/dx² + A(x) d²B/dy² = 0,    (3.6)
(1/A(x)) d²A/dx² = −(1/B(y)) d²B/dy².    (3.7)

The left side is a function only of x, while the right side is a function only of y. This can
only happen if both sides are equal to the same constant. Let us choose the constant to be −λ²:

(1/A(x)) d²A/dx² = −(1/B(y)) d²B/dy² = −λ².    (3.8)


This choice is non-intuitive. It is guided by the boundary conditions for this particular
problem. Had we made more general choices, we would be led precisely to the same destination
as this useful choice. This induces two linear second order ordinary differential equations:

d²A/dx² + λ²A = 0,    (3.9)
d²B/dy² − λ²B = 0.    (3.10)

We focus first on the second, Eq. (3.10). By inspection, it has a solution composed of a
linear combination of two linearly independent complementary functions, and is

B(y) = C1 cosh(λy) + C2 sinh(λy).    (3.11)

Here C1 and C2 are arbitrary constants. Because T(x, 0) = 0, we must insist that B(0) = 0,
giving

B(0) = 0 = C1 cosh 0 + C2 sinh 0 = C1.    (3.12)

We thus learn that C1 = 0, giving

B(y) = C2 sinh(λy).    (3.13)

We return to the first, Eq. (3.9), which has solution

A(x) = C3 sin(λx) + C4 cos(λx).    (3.14)

Combining, we get

T(x, y) = A(x)B(y) = sinh(λy) (Ĉ3 sin(λx) + Ĉ4 cos(λx)).    (3.15)

Here we have defined Ĉ3 = C2 C3 and Ĉ4 = C2 C4. Now let us satisfy the second condition at
y = 0, that on ∂T/∂y:

∂T/∂y = λ cosh(λy) (Ĉ3 sin(λx) + Ĉ4 cos(λx)),    (3.16)
∂T/∂y |_(y=0) = λ cosh 0 (Ĉ3 sin(λx) + Ĉ4 cos(λx)) = sin(nx)/n,    (3.17)
λ (Ĉ3 sin(λx) + Ĉ4 cos(λx)) = sin(nx)/n.    (3.18)

This is achieved if we take λ = n, Ĉ3 = 1/n², and Ĉ4 = 0, giving

T(x, y) = (1/n²) sinh(ny) sin(nx).    (3.19)
Now we must admit that Eq. (3.19) satisfies the original partial differential equation and both conditions
at y = 0. Because the original equation is linear, we are inclined to believe that this is the unique solution
which does so.
Let us examine the properties of our solution. We can consider T at a small, fixed, positive
value of y: y = ŷ > 0, and study this in the limit as n → ∞. Now, we recognize that it is the
inhomogeneous boundary condition, sin(nx)/n, that entirely drives the solution for T(x, y) to
be nontrivial. As n → ∞, the sole driving impetus becomes a low amplitude, high frequency
driver. At y = ŷ, we have

T(x, ŷ) = (1/n²) sinh(nŷ) sin(nx),    (3.20)
        = (1/n²) ((e^(nŷ) − e^(−nŷ))/2) sin(nx).    (3.21)

For large n and ŷ > 0, the first term dominates, yielding

T(x, ŷ) ≈ (e^(nŷ)/n²) sin(nx).    (3.22)

Because as n → ∞ the term e^(nŷ) approaches infinity faster than n², we have for the
amplitude of T

lim_(n→∞) e^(nŷ)/n² → ∞.    (3.23)

Paradoxically then, as the amplitude of the driver at the boundary is reduced to zero by
increasing n, the amplitude of the response nearby the boundary is simultaneously driven to
infinity, despite the fact that T at the boundary is in fact zero. Clearly as n → ∞, the
solution loses its continuity property at the boundary. This famous counterexample problem
is thus not well-posed, as such behavior is not observed in nature.
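The loss of continuous dependence can be made concrete with a few numbers. The sketch below is our own; the sample values of n and ŷ are arbitrary choices. It tabulates the boundary-data amplitude 1/n against the response amplitude sinh(nŷ)/n² from Eq. (3.20): the former decays while the latter eventually explodes.

```python
import math

yhat = 0.1   # fixed distance from the boundary (arbitrary small positive value)
rows = []
for n in (10, 100, 200, 400):
    boundary_amp = 1.0 / n                       # amplitude of sin(nx)/n at y = 0
    response_amp = math.sinh(n * yhat) / n**2    # amplitude of T(x, yhat), Eq. (3.20)
    rows.append((n, boundary_amp, response_amp))
# boundary_amp decreases monotonically while response_amp grows without bound
```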

3.2 Cartesian geometries


Let us consider a series of example problems posed on Cartesian geometries.

Example 3.2
Solve the wave equation, Eq. (2.15),

∂²y/∂t² = a² ∂²y/∂x²,    (3.24)

subject to boundary and initial conditions

y(0, t) = y(L, t) = 0,   y(x, 0) = f(x),   ∂y/∂t (x, 0) = 0.    (3.25)

Generate solutions for four sets of initial conditions:

f(x) = y0 sin(πx/L),   mono-modal,    (3.26)
     = y0 (sin(πx/L) + (1/10) sin(10πx/L)),   bi-modal,    (3.27)
     = y0 (x/L)(1 − x/L),   poly-modal,    (3.28)
     = y0 (H(x/L − 2/5) − H(x/L − 3/5)),   poly-modal.    (3.29)

We assume solutions of the type

y(x, t) = A(x)B(t).    (3.30)


With this assumption, Eq. (3.24) becomes

A(x) d²B/dt² = a² B(t) d²A/dx²,    (3.31)
(1/(a²B(t))) d²B/dt² = (1/A(x)) d²A/dx² = −λ².    (3.32)

Once again, for an arbitrary function of t to be equal to an arbitrary function of x, both
functions must be the same constant, which we have selected to be −λ². This induces two
second order linear ordinary differential equations:

d²A/dx² + λ²A = 0,    (3.33)
d²B/dt² + a²λ²B = 0.    (3.34)
Consider the equation for A first. In order to satisfy the boundary conditions y(0, t) =
y(L, t) = 0, we must have A(0) = A(L) = 0. As is done here, if y is specified on a boundary,
it is known as a Dirichlet² boundary condition. Had the derivative ∂y/∂x been specified, the
boundary condition would have been called a Neumann³ boundary condition. When a linear
combination of y and ∂y/∂x is specified on a boundary, it is known as a Robin⁴ boundary
condition. Along with these boundary conditions, Eq. (3.33) can be recast as an eigenvalue
problem:

−d²A/dx² = λ²A,   A(0) = A(L) = 0.    (3.35)

With L = −d²/dx², a self-adjoint positive definite linear operator, this takes the form

LA = λ²A.    (3.36)

We recall that self-adjoint operators have orthogonal eigenfunctions and real eigenvalues.
Because it can be shown that our L is positive definite, the eigenvalues are also positive,
which is why we describe the eigenvalue as λ².
Solving Eq. (3.33), we see that

A(x) = C1 sin(λx) + C2 cos(λx).    (3.37)

For A(0) = 0, we get

A(0) = 0 = C1 sin 0 + C2 cos 0 = C2.    (3.38)

Thus,

A(x) = C1 sin(λx).    (3.39)

Now at x = L, we have

A(L) = 0 = C1 sin(λL).    (3.40)

To guarantee this condition is satisfied, we must require that

λL = nπ,   n = 1, 2, . . . ,    (3.41)
λ = nπ/L,   n = 1, 2, . . . .    (3.42)

² Peter Gustav Lejeune Dirichlet, 1805-1859, German mathematician.
³ Carl Gottfried Neumann, 1832-1925, German mathematician.
⁴ Victor Gustave Robin, 1855-1897, French mathematician.


With this, we have

A(x) = C1 sin(nπx/L),   n = 1, 2, . . . .    (3.43)

We note that
• the eigenvalues λ² are real and positive,
• the eigenfunctions, C1 sin(nπx/L), have an arbitrary amplitude.

It will also soon be useful to employ the orthogonality property,
∫_0^L sin(mπx/L) sin(nπx/L) dx = 0 if m ≠ n when m and n are integers, and that the integral
is nonzero if n = m.
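The orthogonality property is easily confirmed by quadrature. In the sketch below, our own check, the off-diagonal inner product vanishes and the diagonal one returns L/2.

```python
import math

L = 1.0  # arbitrary domain length

def inner(m, n, N=4000):
    # Midpoint-rule approximation of the integral of sin(m pi x/L) sin(n pi x/L)
    # over [0, L]; exact value is 0 for m != n and L/2 for m == n.
    h = L / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        total += math.sin(m * math.pi * x / L) * math.sin(n * math.pi * x / L)
    return h * total

off_diagonal = inner(2, 5)   # approximately 0
diagonal = inner(3, 3)       # approximately L/2
```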
We now turn to solution of Eq. (3.34), which is now restated as

d²B/dt² + (nπa/L)² B = 0.    (3.44)

This has solution

B(t) = C3 sin(nπat/L) + C4 cos(nπat/L).    (3.45)
Now for ∂y/∂t to be everywhere 0 at t = 0, we must insist that dB/dt(0) = 0. Enforcing this
gives

dB/dt = (C3 nπa/L) cos(nπat/L) − (C4 nπa/L) sin(nπat/L),    (3.46)
dB/dt (t = 0) = (C3 nπa/L) cos 0 − (C4 nπa/L) sin 0 = 0,    (3.47)
             = C3 nπa/L = 0.    (3.48)

We thus insist that C3 = 0. Taking Ĉ4 = C1 C4, our solution combines to form

y(x, t) = Ĉ4 cos(nπat/L) sin(nπx/L).    (3.49)

We next recognize that this solution is valid for arbitrary positive integer n; moreover,
because the original equation is linear, the principle of superposition applies and arbitrary
linear combinations also are valid solutions. We can express this by generalizing to

y(x, t) = Σ_{n=1}^∞ Cn cos(nπat/L) sin(nπx/L).    (3.50)

We can use standard trigonometric reductions to recast Eq. (3.50) as

y(x, t) = Σ_{n=1}^∞ (Cn/2) [sin((nπ/L)(x + at)) + sin((nπ/L)(x − at))].    (3.51)

Importantly, we note that

• The solution can be thought of as an infinite sum of left- and right-propagating waves.
• All modes travel at the same velocity magnitude a; formally, such waves are non-dispersive.
• The amplitude of each mode does not decay with time; formally, such waves are non-diffusive.


We can fix the various values of Cn by applying the initial condition for y(x, 0) = f(x):

y(x, 0) = f(x) = Σ_{n=1}^∞ Cn sin(nπx/L).    (3.52)

This amounts to finding the Fourier sine series expansion of f(x). We get this by taking
advantage of the orthogonality properties of sin(nπx/L) on the domain x ∈ [0, L] by the
following series of operations:

f(x) = Σ_{n=1}^∞ Cn sin(nπx/L),    (3.53)
sin(mπx/L) f(x) = Σ_{n=1}^∞ Cn sin(mπx/L) sin(nπx/L),    (3.54)
∫_0^L sin(mπx/L) f(x) dx = ∫_0^L Σ_{n=1}^∞ Cn sin(mπx/L) sin(nπx/L) dx,    (3.55)
                         = Σ_{n=1}^∞ Cn ∫_0^L sin(mπx/L) sin(nπx/L) dx.    (3.56)

Because of orthogonality, the integral has a value of 0 for n ≠ m and L/2 for n = m.
Employing the Kronecker delta notation,

δnm = 0 if n ≠ m;   δnm = 1 if n = m,    (3.57)

we get

∫_0^L sin(mπx/L) f(x) dx = (L/2) Σ_{n=1}^∞ Cn δnm,    (3.58)
                         = (L/2) Cm,    (3.59)
Cn = (2/L) ∫_0^L sin(nπx/L) f(x) dx.    (3.60)

This combined with Eq. (3.50) forms the solution for arbitrary f(x).
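Equation (3.60) is straightforward to evaluate numerically for any f(x). As a sketch, which is our own illustration, the coefficients for the quadratic initial condition of Eq. (3.28) computed by quadrature match the closed form 8y0/(n³π³) for odd n, zero for even n, quoted in Eq. (3.68).

```python
import math

y0, L = 1.0, 1.0

def f(x):
    # Poly-modal initial condition, Eq. (3.28)
    return y0 * (x / L) * (1.0 - x / L)

def C(n, N=4000):
    # Eq. (3.60): C_n = (2/L) * integral over [0, L] of sin(n pi x/L) f(x) dx,
    # approximated with the composite midpoint rule.
    h = L / N
    total = sum(math.sin(n * math.pi * (i + 0.5) * h / L) * f((i + 0.5) * h)
                for i in range(N))
    return (2.0 / L) * h * total

def C_exact(n):
    # Closed form, Eq. (3.68)
    return 8.0 * y0 / (n**3 * math.pi**3) if n % 2 == 1 else 0.0

errors = [abs(C(n) - C_exact(n)) for n in range(1, 8)]
```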
If we have the mono-modal f(x) = y0 sin(πx/L), the full solution is particularly simple. In
this case the initial condition has exactly the functional form of the eigenfunction, and
there is thus only a one-term Fourier series. The solution is, by inspection,

y(x, t) = y0 cos(πat/L) sin(πx/L).    (3.61)

The solution is a single fundamental mode, given by half of a sine wave pinned at x = 0 and
x = L. At any given point x, the position y oscillates. For example at x = L/2, we have

y(L/2, t) = y0 cos(πat/L).    (3.62)
We call this a standing wave. Because there is only one Fourier mode, it is also known as mono-modal.
Trigonometric expansion shows that Eq. (3.61) can be expanded as

y(x, t) = (y0/2) [sin((π/L)(x − at)) + sin((π/L)(x + at))].    (3.63)


This form illustrates that the standing wave can be considered as a sum of two propagating signals,
one moving to the left with speed a, the other moving to the right at speed a. This is consistent with
the d’Alembert solution, Eq. (2.63). A plot of the single mode standing wave is shown in Fig. 3.2a for
parameter values shown in the caption.
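The equality of the standing-wave and traveling-wave forms, Eqs. (3.61) and (3.63), is an exact trigonometric identity, which a brute-force numerical scan confirms; the sketch below is our own check.

```python
import math

y0, a, L = 1.0, 1.0, 1.0   # parameter values from the caption of Fig. 3.2

def standing(x, t):
    # Eq. (3.61)
    return y0 * math.cos(math.pi * a * t / L) * math.sin(math.pi * x / L)

def traveling(x, t):
    # Eq. (3.63)
    return 0.5 * y0 * (math.sin(math.pi * (x - a * t) / L)
                       + math.sin(math.pi * (x + a * t) / L))

# Scan a grid of (x, t) points; the two forms agree to machine precision.
max_diff = max(abs(standing(0.05 * i, 0.07 * j) - traveling(0.05 * i, 0.07 * j))
               for i in range(21) for j in range(21))
```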

Figure 3.2: Response y(x, t) for a solution to the wave equation with a) a single Fourier
mode (mono-modal), b) two Fourier modes (bi-modal), c) multiple Fourier modes (poly-modal),
f(x) = y0 (x/L)(1 − x/L), and d) a poly-modal "top-hat" initial condition, f(x) =
y0 (H(x/L − 2/5) − H(x/L − 3/5)), all with y0 = 1, a = 1, L = 1.

The solution is almost as simple for the bi-modal second initial condition. We must have

y(x, 0) = y0 sin(πx/L) + (y0/10) sin(10πx/L).    (3.64)

By inspection again, the solution is

y(x, t) = y0 cos(πat/L) sin(πx/L) + (y0/10) cos(10πat/L) sin(10πx/L).    (3.65)

A plot of the bi-modal standing wave is shown in Fig. 3.2b for parameter values shown in the
caption.
Next let us consider the poly-modal third initial distribution:

y(x, 0) = f(x) = y0 (x/L)(1 − x/L).    (3.66)

For this f(x), evaluation of Cn via Eq. (3.60) gives the set of Cn s as

Cn = (8y0/π³) (1, 0, 1/27, 0, 1/125, 0, . . .),   n = 1, . . . , ∞.    (3.67)


Obviously every even term in the series is zero. This is a result of f(x) having symmetry
about x = L/2. It is possible to get a simple expression for Cn:

Cn = 8y0/(n³π³) for n odd;   Cn = 0 for n even.    (3.68)

It is then easy to show that the solution can be expressed as the infinite series

y(x, t) = (8y0/π³) Σ_{m=1}^∞ (1/(2m − 1)³) cos((2m − 1)πat/L) sin((2m − 1)πx/L).    (3.69)

A plot of the poly-modal standing wave is shown in Fig. 3.2c for parameter values shown in
the caption. The plot looks similar to that for the mono-modal initial condition; this is
because the quadratic polynomial initial condition is well modeled by a single Fourier mode.
Recognize, however, that an infinite number of smaller amplitude modes are present across an
infinite spectrum of frequencies. Note also that the frequencies of the modes are discretely
separated. This is known as a discrete spectrum of frequencies. This feature is a consequence
of the fact that for each sine wave to fit within the finite domain and still match the
boundary conditions, only discretely separated frequencies are admitted. If we were to remove
the boundary conditions at x = 0 and x = L, we would find instead a continuous spectrum.
We lastly consider the poly-modal initial condition which is a so-called "top-hat" function:

f(x) = y0 (H(x/L − 2/5) − H(x/L − 3/5)).    (3.70)

Evaluation of Cn via Eq. (3.60) gives the set of Cn s as

Cn = y0 ((√5 − 1)/π, 0, −(1 + √5)/(3π), 0, 4/(5π), 0, −(1 + √5)/(7π), 0, (√5 − 1)/(9π), 0, . . .).    (3.71)
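These values follow from carrying out the integral in Eq. (3.60) for the top-hat, which, by our own evaluation, reduces to Cn = (2y0/(nπ))(cos(2nπ/5) − cos(3nπ/5)). A numerical comparison against the first five listed entries:

```python
import math

y0 = 1.0

def C(n):
    # Eq. (3.60) for the top-hat f(x) of Eq. (3.70) integrates to
    # (2 y0/(n pi)) (cos(2 n pi/5) - cos(3 n pi/5)).
    return (2.0 * y0 / (n * math.pi)) * (math.cos(2.0 * n * math.pi / 5.0)
                                         - math.cos(3.0 * n * math.pi / 5.0))

s5 = math.sqrt(5.0)
listed = [(s5 - 1) / math.pi, 0.0, -(1 + s5) / (3 * math.pi), 0.0, 4 / (5 * math.pi)]
errors = [abs(C(n) - v) for n, v in zip(range(1, 6), listed)]
```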

A plot of the solution is shown in Fig. 3.2d for parameter values shown in the caption. Here
fifty nonzero terms have been retained in the series. We note several important features of
Fig. 3.2d:
• The initial "top hat" signal immediately breaks into two distinct waveforms. One propagates
  to the right and the other to the left. This is consistent with the d'Alembert nature of
  the solution to the wave equation.
• When either waveform strikes the boundary at either x = 0 or x = L, there is a reflection,
  with the sign of y changing.
• After a second reflection, both waves recombine to recover the initial waveform at a
  particular time.
• The pattern repeats, and there is no loss of information in the signal.
• Due to the finite number of terms in the series, there is a choppiness in the solution.
A so-called x − t diagram can be useful in understanding wave phenomena. In such a diagram,
either contours or shading is used to show how the dependent variable varies in the x − t
plane. Fig. 3.3 gives such a diagram for solution to the wave equation with the "top-hat"
function as an initial condition. Here dark and light regions correspond to small and large
y, respectively. Clearly signals are propagating along lines inclined at ±π/4 in this plane,
which corresponds to a wave speed of a = 1. We also clearly see the reflection process at
x = 0 and x = 1.
Lastly, we examine the variation of the amplitudes |Cn| with n for each of the four cases. A
plot is shown in Fig. 3.4. Figures such as this are related to the so-called power spectral
density of a signal; in other contexts it is known as the energy spectral density. One can
easily see how energy is partitioned


Figure 3.3: x − t diagram for solution to the wave equation with a "top-hat" initial
condition, f(x) = y0 (H(x/L − 2/5) − H(x/L − 3/5)), all with y0 = 1, a = 1, L = 1.

into various modes of oscillation. One can associate a frequency of oscillation ν with n via
insisting that 2πνt = nπat/L; thus,

n = 2νL/a,   ν = na/(2L).    (3.72)

As an aside, let us recall some common notation for oscillatory systems. The wavelength
is often named λ, not to be confused with an eigenvalue. It has units of length and can be
defined as the distance for which a sine wave executes a full oscillation. We might say then
that if our function f (x) is

f (x) = sin(2πx/λ), (3.73)
then the wavelength is indeed λ. When x = λ/2, we get the first half of the sine wave for
which f > 0, and we get the complete sine wave when x = λ. Sometimes a sine wave is
expressed as

f (x) = sin kx. (3.74)

3.2. CARTESIAN GEOMETRIES 45


Figure 3.4: Variation of Fourier mode amplitude |Cn | with n for solutions to the wave
equation with a) a mono-modal signal (a single Fourier mode), b) a bi-modal signal (two
Fourier modes), c) a poly-modal signal (multiple Fourier modes), f (x) = y0 (x/L)(1 − x/L),
and d) a poly-modal “top-hat” initial condition, f (x) = y0 (H(x/L − 2/5) − H(x/L − 3/5)),
all with y0 = 1, a = 1, L = 1.

Here k is known as the wavenumber, which has units of the reciprocal of length. We see that

k = 2π/λ. (3.75)
Similarly the period T , not to be confused with temperature, is defined as the time for which
a sine wave undergoes a single complete cycle. We might imagine then
f (t) = sin(2πt/T ). (3.76)

When t = T , the sine wave has undergone a complete cycle. Sometimes the sine wave is
expressed as

f (t) = sin ωt. (3.77)

Here ω is the angular frequency and has units of the reciprocal of time. We see that

ω = 2π/T. (3.78)

© 06 February 2024. J. M. Powers.


46 CHAPTER 3. SEPARATION OF VARIABLES

We also often express the sine wave as


f (t) = sin 2πνt. (3.79)
Here ν is the frequency with units of the reciprocal of time. We see that
ν = 1/T. (3.80)
For waves with the form suggested by Eq. (3.51), we can consider

f (x, t) = (1/2) (sin(nπ(x + at)/L) + sin(nπ(x − at)/L)) , (3.81)
         = cos(nπat/L) sin(nπx/L). (3.82)
Comparing to Eqs. (3.73,3.76), we see that

2πx/λ = nπx/L, (3.83)
λ = 2L/n, (3.84)

and

2πt/T = nπat/L, (3.85)
T = 2L/(na). (3.86)
Then we also see the wavenumber is

k = 2π/λ = 2nπ/(2L) = nπ/L. (3.87)

We also see the frequency is

ν = 1/T = na/(2L). (3.88)

And the angular frequency is

ω = 2π/T = 2πna/(2L) = nπa/L. (3.89)
We could then cast our f (x, t), Eq. (3.82), as

f (x, t) = cos ωt sin kx = cos(2πt/T ) sin(2πx/λ) = cos 2πνt sin(2πx/λ), (3.90)
         = (1/2) (sin(nπ(x + at)/L) + sin(nπ(x − at)/L)) , (3.91)
         = (1/2) (sin(k(x + at)) + sin(k(x − at))) , (3.92)
         = (1/2) (sin(kx + ωt) + sin(kx − ωt)) . (3.93)


Example 3.3
Solve the heat equation, Eq. (1.82),

∂T /∂t = α ∂²T /∂x², (3.94)
subject to boundary and initial conditions

T (0, t) = T (L, t) = 0, T (x, 0) = f (x). (3.95)

Generate solutions for four sets of initial conditions:

f (x) = T0 sin(πx/L), (3.96)
      = T0 (sin(πx/L) + (1/10) sin(10πx/L)), (3.97)
      = T0 (x/L)(1 − x/L), (3.98)
      = T0 (H(x/L − 2/5) − H(x/L − 3/5)). (3.99)

Once again, we assume solutions of the form

T (x, t) = A(x)B(t). (3.100)

With this assumption, Eq. (3.94) becomes

A(x) dB/dt = αB(t) d²A/dx², (3.101)
(1/(αB(t))) dB/dt = (1/A(x)) d²A/dx² = −λ². (3.102)

This induces

d²A/dx² + λ²A = 0, (3.103)
dB/dt + αλ²B = 0. (3.104)
Consider the equation for A first. In order to satisfy the boundary conditions T (0, t) = T (L, t) = 0,
we must have A(0) = A(L) = 0. Along with these boundary conditions, Eq. (3.103) can be recast as
an eigenvalue problem:

−d²A/dx² = λ²A, A(0) = A(L) = 0. (3.105)
With L = −d²/dx², a self-adjoint positive definite linear operator, this takes the form

LA = λ²A. (3.106)

Solving Eq. (3.103), we see that

A(x) = C1 sin λx + C2 cos λx. (3.107)

© 06 February 2024. J. M. Powers.


48 CHAPTER 3. SEPARATION OF VARIABLES

For A(0) = 0, we get

A(0) = 0 = C1 sin 0 + C2 cos 0 = C2 . (3.108)

Thus,

A(x) = C1 sin λx. (3.109)

Now at x = L, we have

A(L) = 0 = C1 sin λL. (3.110)

To guarantee this condition is satisfied, we must require that

λL = nπ, n = 1, 2, . . . , (3.111)
λ = nπ/L, n = 1, 2, . . . . (3.112)
With this, we have

A(x) = C1 sin(nπx/L), n = 1, 2, . . . (3.113)
We note that
• the eigenvalues λ² are real and positive,
• the eigenfunctions Cn sin(nπx/L) have an arbitrary amplitude.
It will once again soon be useful to employ the orthogonality property, ∫₀^L (sin mπx/L)(sin nπx/L) dx =
0 if m ≠ n when m and n are integers; the integral is nonzero if n = m.
We now cast Eq. (3.104) as

dB/dt + (n²π²α/L²)B = 0. (3.114)
This has solution

B(t) = C3 exp(−n²π²αt/L²). (3.115)

We note that for α > 0 and L > 0,

lim_{t→∞} B(t) = 0. (3.116)

That is to say, the amplitude of any given mode of T (x, t) decays to zero. By inspection, the time scale
of decay of one of these modes is

τ = L²/(n²π²α). (3.117)
Thus fast decay is induced by
• small domain length L,
• high frequency of a given Fourier mode, where n is proportional to the frequency,
• large diffusivity, α.


Taking Ĉ3 = C1 C3 , our solution combines to form

T (x, t) = Ĉ3 exp(−n²π²αt/L²) sin(nπx/L). (3.118)
By the principle of superposition, we can admit arbitrary linear combinations, thus giving the general
solution

T (x, t) = Σ_{n=1}^∞ Cn exp(−n²π²αt/L²) sin(nπx/L). (3.119)

Here we note
• The amplitude of each Fourier mode decays with time; this is characteristic of diffusive phenomena.
• There are no wave propagation phenomena.
Once again the initial conditions fix the values of Cn after application of the initial condition T (x, 0) =
f (x):

T (x, 0) = f (x) = Σ_{n=1}^∞ Cn sin(nπx/L). (3.120)

Once again, this amounts to finding the Fourier sine series expansion of f (x). The first expansions are
the same as those from the previous example problem, so we will not repeat the analysis. We report
plots of T (x, t) for the four initial conditions given by f (x). Plots of the solutions are shown in Fig. 3.5
for parameter values shown in the caption. In Fig. 3.5b, we see that the high frequency mode present
in the initial condition decays much more rapidly than the low frequency mode.
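As a numerical check of Eq. (3.119), the sketch below (names are ours; the coefficients are obtained by midpoint-rule quadrature rather than analytically) builds the series for the third initial condition, f (x) = T0 (x/L)(1 − x/L), and confirms that the amplitudes decay in time:

```python
import math

alpha, L, T0, N = 1.0, 1.0, 1.0, 40

def f(x):
    return T0 * (x / L) * (1.0 - x / L)

def sine_coeff(n, M=2000):
    # C_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, via the midpoint rule
    h = L / M
    s = sum(f((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / L)
            for i in range(M))
    return 2.0 * s * h / L

C = [sine_coeff(n) for n in range(1, N + 1)]

def T(x, t):
    # Eq. (3.119): each Fourier mode decays like exp(-n^2 pi^2 alpha t / L^2)
    return sum(C[n - 1] * math.exp(-n**2 * math.pi**2 * alpha * t / L**2)
               * math.sin(n * math.pi * x / L) for n in range(1, N + 1))

# odd modes carry amplitude 8 T0/(n pi)^3; even modes vanish by symmetry
assert abs(C[0] - 8.0 / math.pi**3) < 1e-6
assert abs(C[1]) < 1e-6
# diffusive decay: no wave propagation, only monotone relaxation to zero
assert T(0.5, 0.0) > T(0.5, 0.05) > T(0.5, 0.2) > 0.0
```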

Example 3.4
Solve Laplace’s equation, Eq. (1.105),

∂²T /∂x² + ∂²T /∂y² = 0, (3.121)
subject to boundary conditions

T (0, y) = T (L, y) = T (x, 0) = 0, T (x, L) = f (x). (3.122)

Generate solutions for four sets of boundary conditions:

f (x) = T0 sin(πx/L), (3.123)
      = T0 (sin(πx/L) + (1/10) sin(10πx/L)), (3.124)
      = T0 (x/L)(1 − x/L), (3.125)
      = T0 (H(x/L − 2/5) − H(x/L − 3/5)). (3.126)


Figure 3.5: Response T (x, t) for a solution to the heat equation with a) a single Fourier
mode, b) two Fourier modes, c) multiple Fourier modes, f (x) = T0 (x/L)(1 − x/L), d) a
“top-hat” initial condition, f (x) = T0 (H(x/L − 2/5) − H(x/L − 3/5)).

We assume solutions of the form


T (x, y) = A(x)B(y). (3.127)
With this assumption, Eq. (3.121) becomes

B(y) d²A/dx² + A(x) d²B/dy² = 0, (3.128)
−(1/B(y)) d²B/dy² = (1/A(x)) d²A/dx² = −λ². (3.129)
This gives

d²A/dx² + λ²A = 0, (3.130)
d²B/dy² − λ²B = 0. (3.131)
Solving the first equation, we find
A(x) = C1 sin λx + C2 cos λx. (3.132)
To satisfy the boundary conditions at x = 0 and x = L, we will need A(0) = A(L) = 0. Thus, we have

A(0) = 0 = C1 sin 0 + C2 cos 0, (3.133)
     = C2 . (3.134)


Thus

A(x) = C1 sin λx. (3.135)

At x = L, we then have

A(L) = 0 = C1 sin λL. (3.136)

We can thus take

λL = nπ, n = 1, 2, . . . , (3.137)

and
A(x) = C1 sin(nπx/L). (3.138)
Then Eq. (3.131) becomes

d²B/dy² − (n²π²/L²)B = 0. (3.139)
This has solution

B(y) = C3 sinh(nπy/L) + C4 cosh(nπy/L). (3.140)
At y = 0 we must have B(0) = 0, so that

B(0) = 0 = C3 sinh 0 + C4 cosh 0. (3.141)

We learn then that C4 = 0, so that

B(y) = C3 sinh(nπy/L), (3.142)

and with Ĉ3 = C3 C1 ,

T (x, y) = Ĉ3 sinh(nπy/L) sin(nπx/L). (3.143)
The principle of superposition holds here, so we can say

T (x, y) = Σ_{n=1}^∞ Cn sinh(nπy/L) sin(nπx/L). (3.144)

Our boundary condition at y = L then gives

f (x) = Σ_{n=1}^∞ (Cn sinh nπ) sin(nπx/L), where we define C̃n = Cn sinh nπ. (3.145)

Now if C̃n are the Fourier sine series coefficients of f (x), we have

Cn = C̃n / sinh nπ. (3.146)
Plots of the solutions are shown in Fig. 3.6 for parameter values shown in the caption.
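A short numerical check for the single-mode boundary data f (x) = T0 sin(πx/L) with T0 = 1: only the n = 1 term survives, so T (x, y) = sinh(πy/L) sin(πx/L)/ sinh π. The sketch below (names are ours) confirms the boundary conditions and that a discrete Laplacian nearly vanishes in the interior:

```python
import math

L = 1.0

def T(x, y):
    # single surviving mode; C_1 = 1/sinh(pi) follows from Eq. (3.146)
    return (math.sinh(math.pi * y / L) * math.sin(math.pi * x / L)
            / math.sinh(math.pi))

# boundary conditions: zero on three sides, f(x) on y = L
assert abs(T(0.0, 0.5)) < 1e-12 and abs(T(L, 0.5)) < 1e-12
assert abs(T(0.3, 0.0)) < 1e-12
assert abs(T(0.3, L) - math.sin(0.3 * math.pi)) < 1e-12

# interior check: second-order central-difference Laplacian should be small
h, x0, y0 = 1e-3, 0.4, 0.7
lap = (T(x0 + h, y0) + T(x0 - h, y0) + T(x0, y0 + h) + T(x0, y0 - h)
       - 4.0 * T(x0, y0)) / h**2
assert abs(lap) < 1e-4
```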


Figure 3.6: Response T (x, y) for a solution to Laplace’s equation with a) a single Fourier
mode, b) two Fourier modes, c) multiple Fourier modes, f (x) = T0 (x/L)(1 − x/L), and d) a
“top-hat” boundary condition, f (x) = T0 (H(x/L − 2/5) − H(x/L − 3/5)), all with T0 = 1,
L = 1.

Example 3.5
Consider the heat equation with initial and boundary conditions of

∂T /∂t = α ∂²T /∂x², T (x, 0) = T0 , T (0, t) = T1 , T (L, t) = T1 . (3.147)
Find T (x, t).

Physically, one might imagine this as a rod of length L, initially at uniform temperature T0 , whose
ends are suddenly heated to T1 and held there. Let us scale the problem. We can take
T∗ = (T − T0 )/(T1 − T0 ), x∗ = x/L, t∗ = t/tc . (3.148)
Our choice of T∗ maps T ∈ [T0 , T1 ] to T∗ ∈ [0, 1]. Our choice of x∗ maps x ∈ [0, L] to x∗ ∈ [0, 1]. We
need to choose tc , and will do so following a simple analysis. With our choices, our system becomes


(1/tc ) ∂/∂t∗ ((T1 − T0 )T∗ + T0 ) = (α/L²) ∂²/∂x∗² ((T1 − T0 )T∗ + T0 ) , T∗ (x∗ , 0) = 0, T∗ (0, t∗ ) = 1, T∗ (1, t∗ ) = 1,
(3.149)
∂T∗ /∂t∗ = (αtc /L²) ∂²T∗ /∂x∗² , T∗ (x∗ , 0) = 0, T∗ (0, t∗ ) = 1, T∗ (1, t∗ ) = 1. (3.150)

Let us select tc to remove the effect of the parameter, giving

tc = L²/α. (3.151)
Thus, our system is scaled to be parameter-free:

∂T∗ /∂t∗ = ∂²T∗ /∂x∗² , T∗ (x∗ , 0) = 0, T∗ (0, t∗ ) = 1, T∗ (1, t∗ ) = 1. (3.152)

We anticipate a Sturm-Liouville problem of a second order nature in x∗ . But we will likely need
homogeneous boundary conditions in order to pose an eigenvalue problem. Let us redefine T∗ to achieve
this. In the limit of a steady state, our heat equation will have a solution T∗s (x∗ ) which satisfies the
time-independent version of Eq. (3.152):

0 = d²T∗s /dx∗² , T∗s (0) = T∗s (1) = 1. (3.153)

This has solution T∗s (x∗ ) = C1 + C2 x∗ . To satisfy the boundary conditions, we must have C1 = 1 and
C2 = 0, so the steady state solution is

T∗s (x∗ ) = 1. (3.154)

Let us now define a deviation from the steady state solution, T̃ :

T̃ (x∗ , t∗ ) = T∗ (x∗ , t∗ ) − T∗s (x∗ ) = T∗ (x∗ , t∗ ) − 1. (3.155)

We then recast our system in terms of T̃ :

∂(T̃ + 1)/∂t∗ = ∂²(T̃ + 1)/∂x∗² , T̃ (x∗ , 0) + 1 = 0, T̃ (0, t∗ ) + 1 = 1, T̃ (1, t∗ ) + 1 = 1, (3.156)
∂ T̃ /∂t∗ = ∂² T̃ /∂x∗² , T̃ (x∗ , 0) = −1, T̃ (0, t∗ ) = 0, T̃ (1, t∗ ) = 0. (3.157)

Our change of variables has moved the inhomogeneity from the boundary condition to the initial
condition.
We can now separate variables and proceed much as before. First take

T̃ (x∗ , t∗ ) = A(x∗ )B(t∗ ). (3.158)

This gives

A dB/dt∗ = B d²A/dx∗² , (3.159)
(1/B) dB/dt∗ = (1/A) d²A/dx∗² = −λ². (3.160)


This gives two ordinary differential equations:

d²A/dx∗² + λ²A = 0, (3.161)
dB/dt∗ + λ²B = 0. (3.162)
Solving the first gives
A(x∗ ) = C1 sin λx∗ + C2 cos λx∗ . (3.163)
Here is where the homogeneous boundary conditions are important. We need A(0) = 0, so
A(0) = 0 = C1 (0) + C2 . (3.164)
Thus C2 = 0 and
A(x∗ ) = C1 sin λx∗ . (3.165)
We also need A(1) = 0, so
A(1) = 0 = C1 sin λ. (3.166)
For this, we insist that
λ = nπ, n = 1, 2, . . . (3.167)
Thus
A(x∗ ) = C1 sin nπx∗ . (3.168)
Then for B, we get

dB/dt∗ + n²π²B = 0, (3.169)
B(t∗ ) = C3 exp(−n²π²t∗ ). (3.170)
Taking then our solution to be a linear combination of the various modes, we can assert

T̃ (x∗ , t∗ ) = Σ_{n=1}^∞ Cn e^{−n²π²t∗} sin nπx∗ . (3.171)

Enforcing the initial condition, we get

T̃ (x∗ , 0) = −1 = Σ_{n=1}^∞ Cn sin nπx∗ . (3.172)

We need the Fourier sine coefficients for −1. We operate as usual to get

− sin mπx∗ = Σ_{n=1}^∞ Cn sin mπx∗ sin nπx∗ , (3.173)
− ∫₀¹ sin mπx∗ dx∗ = Σ_{n=1}^∞ Cn ∫₀¹ sin mπx∗ sin nπx∗ dx∗ , (3.174)
{ −2/(mπ), m odd; 0, m even } = Σ_{n=1}^∞ Cn δmn /2, (3.175)
                              = Cm /2, (3.176)
Cn = { −4/(nπ), n odd; 0, n even }. (3.177)

Here we have used the orthogonality result ∫₀¹ sin mπx∗ sin nπx∗ dx∗ = δmn /2.


Thus

T̃ (x∗ , t∗ ) = −(4/π) Σ_{n=1}^∞ (1/(2n − 1)) e^{−(2n−1)²π²t∗} sin((2n − 1)πx∗ ). (3.178)

In terms of T∗ , we can then say

T∗ (x∗ , t∗ ) = 1 − (4/π) Σ_{n=1}^∞ (1/(2n − 1)) e^{−(2n−1)²π²t∗} sin((2n − 1)πx∗ ). (3.179)

We note also that
• High frequency modes decay rapidly.
• Low frequency modes decay slowly.
• All modes decay to zero, leaving the long time solution T∗ = 1.
The slowest decaying mode has n = 1; for large t∗ , the approximate solution is

T∗ ≈ 1 − (4/π) e^{−π²t∗} sin πx∗ . (3.180)
π
The time constant of decay of the slowest mode is, by inspection,

τ = 1/π². (3.181)
The dimensional decay time τd is thus

τd = τ tc = L²/(π²α). (3.182)
Thus rapid decay is induced by short length scales L and high diffusivity α. A plot of the solutions is
shown in Fig. 3.7. Here we have incorporated the original dimensional variables into the scaled axes of
Fig. 3.7.
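The series of Eq. (3.179) is easy to evaluate; a Python sketch (names are ours) confirming the initial condition, the boundary values, and the long-time steady state:

```python
import math

def T_star(x, t, N=400):
    # Eq. (3.179): only odd modes survive, with coefficients -4/(n pi)
    s = 1.0
    for m in range(1, N + 1):
        n = 2 * m - 1
        s -= (4.0 / (n * math.pi)) * math.exp(-n**2 * math.pi**2 * t) \
             * math.sin(n * math.pi * x)
    return s

# initial condition T* = 0, recovered slowly since the sine series of a
# constant converges only like 1/n
assert abs(T_star(0.5, 0.0)) < 1e-2
# boundary values are exactly 1 because sin(n pi x) vanishes at x = 0, 1
assert abs(T_star(1.0, 0.1) - 1.0) < 1e-9
# long-time steady state T* = 1; the slowest mode decays on the scale 1/pi^2
assert abs(T_star(0.5, 1.0) - 1.0) < 1e-4
```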

Example 3.6
Consider the heat equation with a general initial condition and general homogeneous boundary
conditions (i.e. Robin boundary conditions):

∂T /∂t = ∂²T /∂x², (3.183)
T (x, 0) = f (x), (3.184)
α1 T (0, t) + α2 ∂T /∂x (0, t) = 0, (3.185)
β1 T (1, t) + β2 ∂T /∂x (1, t) = 0. (3.186)
Find a general expression for T (x, t).


Figure 3.7: Response T (x, t) for a solution to the heat equation with suddenly imposed
inhomogeneous boundary condition; the axes carry the scaled variables x/L, (T − T0 )/(T1 − T0 ),
and tα/L².

Let us take

T (x, t) = A(x)B(t). (3.187)

Thus

A(x) dB/dt = B(t) d²A/dx², (3.188)
(1/B(t)) dB/dt = (1/A(x)) d²A/dx² = −λ². (3.189)

This yields

d²A/dx² + λ²A = 0, (3.190)
dB/dt + λ²B = 0. (3.191)
The first has general solution

A(x) = C1 cos λx + C2 sin λx. (3.192)

We also see

dA/dx = −C1 λ sin λx + C2 λ cos λx. (3.193)
Enforcing the boundary conditions at x = 0 and x = 1 gives

α1 C1 + α2 λC2 = 0, (3.194)
C1 (β1 cos λ − β2 λ sin λ) + C2 (β1 sin λ + β2 λ cos λ) = 0. (3.195)


Figure 3.8: Curves of tan λ and −λ, whose intersections give roots of tan λ = −λ, the
eigenvalues of a problem with Robin boundary conditions.

In matrix form, this becomes

[ α1 , α2 λ ; β1 cos λ − β2 λ sin λ , β1 sin λ + β2 λ cos λ ] [ C1 ; C2 ] = [ 0 ; 0 ]. (3.196)

For a nontrivial solution, the determinant of the coefficient matrix must be zero, yielding

α1 (β1 sin λ + β2 λ cos λ) − α2 λ(β1 cos λ − β2 λ sin λ) = 0. (3.197)

Assuming α1 ≠ 0 and β1 ≠ 0, we can scale to get

( sin λ + (β2 /β1 )λ cos λ ) − (α2 /α1 )λ ( cos λ − (β2 /β1 )λ sin λ ) = 0. (3.198)

In general, for a given α2 /α1 and β2 /β1 , this is a transcendental equation which must be solved
numerically for λ. In special cases, there is an exact solution. For the Dirichlet conditions found when
α2 = β2 = 0, we get sin λ = 0, yielding λ = nπ, and A(x) = C2 sin(nπx). For the Neumann conditions
when α1 = β1 = 0, we get λ2 sin λ = 0. This gives λ = nπ and A(x) = C1 cos nπx. For the Robin
conditions, we must find a numerical solution, and we still expect an infinite number of eigenvalues λ.
Consider the case when α1 = β1 = β2 = 1 and α2 = 0. Thus our Robin conditions are

T (0, t) = 0, (3.199)
T (1, t) + ∂T /∂x (1, t) = 0. (3.200)
Our expression for the eigenvalues, Eq. (3.198), reduces to

sin λ + λ cos λ = 0, (3.201)
tan λ = −λ. (3.202)

To understand how the roots are distributed, we plot tan λ and −λ in Fig. 3.8. Numerical solution reveals
the eigenvalues are given by

λ = {0, ±2.02876, ±4.91318, ±7.97867, . . .} (3.203)


For large |λ|, the eigenvalues approach the points where tan λ is singular, which is where cos λ = 0:

λ ≈ (n + 1/2)π. (3.204)
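The transcendental equation can be solved with simple bisection on Eq. (3.201); the sketch below (names are ours) recovers the eigenvalues quoted above:

```python
import math

def g(lam):
    # Eq. (3.201): sin(lam) + lam*cos(lam) = 0 is equivalent to tan(lam) = -lam
    return math.sin(lam) + lam * math.cos(lam)

def bisect(lo, hi, tol=1e-10):
    # bisection on a bracketed sign change of g
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# one positive root lies in each interval ((n-1/2) pi, (n+1/2) pi)
roots = [bisect(0.5 * math.pi + 1e-6, 1.5 * math.pi),
         bisect(1.5 * math.pi + 1e-6, 2.5 * math.pi),
         bisect(2.5 * math.pi + 1e-6, 3.5 * math.pi)]

for r, expected in zip(roots, (2.02876, 4.91318, 7.97867)):
    assert abs(r - expected) < 1e-4
# roots approach (n + 1/2) pi from above as n grows, Eq. (3.204)
assert roots[2] - 2.5 * math.pi < 0.13
```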

We solve for B to get

B(t) = C exp(−λ²t). (3.205)

Then forming linear combinations of the solutions, we have as a general solution

T (x, t) = Σ_{n=0}^∞ (Cn e^{−λn²t} cos λn x + Bn e^{−λn²t} sin λn x). (3.206)

At t = 0, we have

f (x) = Σ_{n=0}^∞ (Cn cos λn x + Bn sin λn x). (3.207)

So if we can find the general Fourier coefficients of f (x), we then have the solution. Calculation of the
Fourier coefficients is aided greatly by the orthogonality of the eigenfunctions; details can be found in
Powers and Sen5 .

3.3 Non-Cartesian geometries


Let us consider some common partial differential equations in non-Cartesian coordinate
systems.

3.3.1 Cylindrical
One can transform from the Cartesian system with (x, y, z) as coordinates to the cylindrical
system with (r, θ, ẑ) as coordinates via

x = r cos θ, (3.208)
y = r sin θ, (3.209)
z = ẑ. (3.210)

A sketch of the geometry is shown in Fig. 3.9. We will consider the domain r ∈ [0, ∞),
θ ∈ [0, 2π], ẑ ∈ (−∞, ∞). Then, with the exception of the origin (x, y, z) = (0, 0, 0), every
(x, y, z) will map to a unique (r, θ, ẑ).
5 J. M. Powers and M. Sen, Mathematical Methods in Engineering, Cambridge University Press, New
York, 2015. See Section 6.5.


Figure 3.9: Cylindrical coordinate geometry.

The Jacobian of the transformation is

∂(x, y, z)
J = , (3.211)
∂(r, θ, ẑ)
 ∂x ∂x ∂x

∂r ∂θ ∂ ẑ
=  ∂y
∂r
∂y
∂θ
∂y
∂ ẑ
, (3.212)
∂z ∂z ∂z
 ∂r ∂θ ∂ ẑ 
cos θ −r sin θ 0
=  sin θ r cos θ 0. (3.213)
0 0 1

We have J = |J| = r, so the transformation is singular and thus nonunique when r = 0. It is


orientation-preserving for r > 0, and it is volume preserving only for r = 1; thus, in general
it does not preserve volume.
The metric tensor G is

G = JT · J, (3.214)
  = [ cos θ  sin θ  0 ; −r sin θ  r cos θ  0 ; 0  0  1 ] [ cos θ  −r sin θ  0 ; sin θ  r cos θ  0 ; 0  0  1 ], (3.215)


 
  = [ 1  0  0 ; 0  r²  0 ; 0  0  1 ]. (3.216)
Because G is diagonal, the new coordinate axes are also orthogonal.
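These results are easy to verify numerically; a sketch (all names are ours) that builds J at a sample point and confirms |J| = r and G = JT · J = diag(1, r², 1):

```python
import math

def jacobian(r, theta):
    # J = d(x,y,z)/d(r,theta,z) for x = r cos(theta), y = r sin(theta), z = z
    return [[math.cos(theta), -r * math.sin(theta), 0.0],
            [math.sin(theta),  r * math.cos(theta), 0.0],
            [0.0,              0.0,                 1.0]]

def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def metric(r, theta):
    # G = J^T . J
    J = jacobian(r, theta)
    return [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

r, theta = 2.5, 0.7
assert abs(det3(jacobian(r, theta)) - r) < 1e-12           # |J| = r
G = metric(r, theta)
assert abs(G[0][0] - 1.0) < 1e-12 and abs(G[1][1] - r * r) < 1e-12
assert abs(G[0][1]) < 1e-12 and abs(G[2][2] - 1.0) < 1e-12  # diagonal
```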
Now it can be shown that the gradient operator in the Cartesian system is related to
that of the cylindrical system via

∇ = ( ∂/∂x ; ∂/∂y ; ∂/∂z ) = (JT )−1 ( ∂/∂r ; ∂/∂θ ; ∂/∂ẑ ), (3.217)
  = [ cos θ  −(sin θ)/r  0 ; sin θ  (cos θ)/r  0 ; 0  0  1 ] ( ∂/∂r ; ∂/∂θ ; ∂/∂ẑ ), (3.218)
  = ( cos θ ∂/∂r − (sin θ/r) ∂/∂θ ; sin θ ∂/∂r + (cos θ/r) ∂/∂θ ; ∂/∂ẑ ). (3.219)

Consider then the Laplacian operator, ∇² = ∇T · ∇, which is

∇² = ∇T · ∇, (3.220)
   = ( cos θ ∂/∂r − (sin θ/r) ∂/∂θ ) ( cos θ ∂/∂r − (sin θ/r) ∂/∂θ )
     + ( sin θ ∂/∂r + (cos θ/r) ∂/∂θ ) ( sin θ ∂/∂r + (cos θ/r) ∂/∂θ ) + ∂²/∂ẑ². (3.221)

Detailed expansion followed by extensive use of trigonometric identities reveals that this
reduces to

∇T · ∇ = ∇² = (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ² + ∂²/∂ẑ². (3.222)

Example 3.7
Consider the heat equation ∂T /∂t = ∇²T , which governs the distribution of T within a cylinder of
unit radius. Assume there is no variation of T with respect to θ or ẑ. Thus, we have T = T (r, t). Take
T (r, 0) = f (r), T (1, t) = 0, and T (0, t) < ∞. Generate T (r, t) if f (r) = r²(1 − r).

Drawing upon Eq. (3.222) in the limits of this problem, our heat equation reduces to

∂T /∂t = (1/r) ∂/∂r (r ∂T /∂r), (3.223)
       = ∂²T /∂r² + (1/r) ∂T /∂r. (3.224)
Let us separate variables and see if we can find a solution. Take

T (r, t) = A(r)B(t). (3.225)


Then we get

A(r) dB/dt = B(t) d²A/dr² + (B(t)/r) dA/dr, (3.226)
(1/B(t)) dB/dt = (1/A(r)) d²A/dr² + (1/(rA(r))) dA/dr = −λ². (3.227)

This yields two ordinary differential equations:

dB/dt + λ²B = 0, (3.228)
d²A/dr² + (1/r) dA/dr + λ²A = 0. (3.229)
Note that Eq. (3.229) can be rewritten in Sturm-Liouville form as −(1/r) d/dr (r dA/dr) = λ²A, and
the self-adjoint positive definite Sturm-Liouville operator is Ls = −(1/r) d/dr (r d/dr). The solution to
Eq. (3.229) is of the form

A(r) = C1 J0 (λr) + C2 Y0 (λr). (3.230)

Here J0 is a Bessel6 function of order zero, and Y0 is a Neumann function of order zero. The Neumann
function is singular at r = 0; thus, we insist that C2 = 0 to keep T bounded. Thus

A(r) = C1 J0 (λr). (3.231)

We need A(1) to be zero to satisfy the Dirichlet condition T (1, t) = 0. This gives

A(1) = 0 = C1 J0 (λ). (3.232)

For a nontrivial solution, we must select λ such that J0 (λ) = 0. These zeros must be found numerically.
We get an idea of their distribution by plotting J0 (λ) in Fig. 3.10. The first four are given by

Figure 3.10: Plot of J0 (λ).


6 Friedrich Bessel, 1784-1846, German astronomer and mathematician.


λ = {2.40483, 5.52008, 8.65373, 11.7915, . . .} . (3.233)

Each of these eigenvalues is associated with an eigenfunction. The first four are

J0 (2.40483r), J0 (5.52008r), J0 (8.65373r), J0 (11.7915r). (3.234)

We map these back to a Cartesian coordinate system and plot the first four eigenfunctions in Fig. 3.11.


Figure 3.11: Plot of the first four eigenfunctions, J0 (λn r), n = 1, 2, 3, 4, projected onto a
Cartesian space.

Knowing λ, we can now integrate Eq. (3.228) to get

B(t) = C3 exp(−λ²t). (3.235)

Combining with Eq. (3.231) and forming arbitrary linear combinations, we can say

T (r, t) = Σ_{n=1}^∞ Cn e^{−λn²t} J0 (λn r). (3.236)

Here λn is the nth term of Eq. (3.233). We use the initial condition to find the Cn values. Doing so we
get

T (r, 0) = f (r) = Σ_{n=1}^∞ Cn J0 (λn r). (3.237)

We thus need to expand f (r) in a Fourier-Bessel series. We do so via the following steps.

rJ0 (λm r)f (r) = Σ_{n=1}^∞ Cn rJ0 (λm r)J0 (λn r), (3.238)
∫₀¹ rJ0 (λm r)f (r) dr = Σ_{n=1}^∞ Cn ∫₀¹ rJ0 (λm r)J0 (λn r) dr. (3.239)

Now the orthogonality of the Bessel functions is such that one can show

∫₀¹ rJ0 (λn r)J0 (λm r) dr = (1/2)(J1 (λn ))² δmn . (3.240)


Figure 3.12: Solution T (r, t) to the heat equation within a cylindrical geometry with
T (r, 0) = r²(1 − r), along with T (x, y, t = 0).

Therefore, we have

∫₀¹ rJ0 (λm r)f (r) dr = Σ_{n=1}^∞ Cn (1/2)(J1 (λn ))² δmn , (3.241)
                      = (Cm /2)(J1 (λm ))², (3.242)
Cn = (2/(J1 (λn ))²) ∫₀¹ rJ0 (λn r)f (r) dr. (3.243)

Thus,

T (r, t) = Σ_{n=1}^∞ [ (2/(J1 (λn ))²) ∫₀¹ r̂J0 (λn r̂)f (r̂) dr̂ ] e^{−λn²t} J0 (λn r). (3.244)

If f (r) = r²(1 − r), we calculate that

Cn = {0.164131, −0.19501, 0.0504623, −0.0273625, 0.0138598, . . .} . (3.245)

For this f (r), a plot of T (r, t) along with T (x, y, t = 0) is shown in Fig. 3.12. We see the initial
distribution loses some of its structure; then the entire solution relaxes to zero as t advances.
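The eigenvalues and Fourier-Bessel coefficients above can be reproduced with elementary tools. The sketch below (names are ours) builds J0 and J1 from their power series, locates the first zeros of J0 by bisection, and evaluates Eq. (3.243) by midpoint quadrature:

```python
import math

def j0(x):
    # power series for the Bessel function J0; adequate for |x| up to ~15
    term, s = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x * x / 4.0) / (m * m)
        s += term
    return s

def j1(x):
    # power series for the Bessel function J1
    term, s = x / 2.0, x / 2.0
    for m in range(1, 40):
        term *= -(x * x / 4.0) / (m * (m + 1))
        s += term
    return s

def bisect(g, lo, hi, tol=1e-12):
    # bisection for a bracketed root of g
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# first three zeros of J0, bracketed by eye from its graph (Fig. 3.10)
lam = [bisect(j0, 2.0, 3.0), bisect(j0, 5.0, 6.0), bisect(j0, 8.0, 9.0)]

def f(r):
    return r * r * (1.0 - r)

def fourier_bessel_coeff(i, M=4000):
    # Eq. (3.243): C_n = 2/(J1(lam_n))^2 * int_0^1 r J0(lam_n r) f(r) dr
    # (i is a 0-based index into lam; midpoint-rule quadrature)
    h = 1.0 / M
    integral = sum((k + 0.5) * h * j0(lam[i] * (k + 0.5) * h) * f((k + 0.5) * h)
                   for k in range(M)) * h
    return 2.0 * integral / j1(lam[i]) ** 2

assert abs(lam[0] - 2.40483) < 1e-4 and abs(lam[1] - 5.52008) < 1e-4
assert abs(fourier_bessel_coeff(0) - 0.164131) < 1e-4
assert abs(fourier_bessel_coeff(1) - (-0.19501)) < 1e-4
```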

Example 3.8
Consider the wave equation ∂²φ/∂t² = a²∇²φ, which governs the distribution of φ within a two-
dimensional circular domain with radius of unity. Assume there is no variation of φ with respect to θ
or ẑ. Thus we have φ = φ(r, t). Take ∂φ/∂t(r, 0) = 0, φ(r, 0) = f (r), φ(1, t) = 0, and φ(0, t) < ∞.
Generate φ(r, t) if f (r) = 1 − H(r − 1/4) and a = 1.

Again drawing upon Eq. (3.222) in the limits of our problem, our wave equation reduces to

∂²φ/∂t² = (a²/r) ∂/∂r (r ∂φ/∂r), (3.246)
(1/a²) ∂²φ/∂t² = ∂²φ/∂r² + (1/r) ∂φ/∂r. (3.247)


Let us separate variables:

φ(r, t) = A(r)B(t). (3.248)

Then we get

(A(r)/a²) d²B/dt² = B(t) d²A/dr² + (B(t)/r) dA/dr, (3.249)
(1/(a²B(t))) d²B/dt² = (1/A(r)) d²A/dr² + (1/(rA(r))) dA/dr = −λ². (3.250)
This yields

d²B/dt² + a²λ²B = 0, (3.251)
d²A/dr² + (1/r) dA/dr + λ²A = 0. (3.252)
As before, the solution to Eq. (3.252) is

A(r) = C1 J0 (λr) + C2 Y0 (λr), (3.253)

and we choose C2 = 0 to retain a bounded solution at r = 0, so

A(r) = C1 J0 (λr). (3.254)

And as before in order that φ(1, t) = 0, we must select λ so that J0 (λ) = 0 giving, as from Eq. (3.233),

λ = {2.40483, 5.52008, 8.65373, 11.7915, . . .} . (3.255)

Knowing λ, we can integrate Eq. (3.251) to get

B(t) = C3 sin aλt + C4 cos aλt. (3.256)

Now to satisfy the initial condition that ∂φ/∂t(r, 0) = 0, we must set dB/dt(0) = 0:

dB/dt = aλC3 cos aλt − aλC4 sin aλt, (3.257)
dB/dt|t=0 = aλC3 = 0. (3.258)

Thus C3 = 0 and we get

B(t) = C4 cos aλt. (3.259)

So our general solution is a linear combination of the various modes, yielding

φ(r, t) = Σ_{n=1}^∞ Cn cos(aλn t)J0 (λn r). (3.260)

The frequencies of oscillation of the various modes are not integer multiples of one another, because
of the nature of the cylindrical geometry. At the initial state, we require φ(r, 0) = f (r), yielding, as
before,

Cn = (2/(J1 (λn ))²) ∫₀¹ rJ0 (λn r)f (r) dr. (3.261)


Figure 3.13: Solution φ(r, t) to wave equation within a cylindrical geometry with φ(r, 0) =
1 − H(r − 1/4) with a = 1 along with φ(x, y, t = 1/2).

For f (r) = 1 − H(r − 1/4), we get

Cn = {0.221578, 0.421112, 0.439822, 0.281122, . . .} . (3.262)

Thus

φ(r, t) = Σ_{n=1}^∞ [ (2/(J1 (λn ))²) ∫₀¹ r̂J0 (λn r̂)f (r̂) dr̂ ] cos(aλn t)J0 (λn r). (3.263)

For this f (r), a plot of φ(r, t) along with φ(x, y, t = 1/2) is shown in Fig. 3.13. A plot of φ in r − t space
is shown in Fig. 3.14. From this figure, we see that all disturbances propagate with speed of unity.
One feature of particular interest is the early time behavior of the wave form. The initial jump
breaks into two jumps. One moves in the direction of increasing r; the other moves towards r = 0.
The state between the two jumps varies with r. When the jump moving towards the center reaches the
center, there is a reflection.

Example 3.9
Find the two-dimensional field T which satisfies ∇²T = 0, with boundary conditions of T = T1 on
the upper half of a circle of radius a and T = T2 on the lower half of the circle.

While we could do this problem in Cartesian coordinates, the specification of the boundary con-
ditions on the circle renders the cylindrical coordinate system to be of greater utility. In general, we
can expect T = T (r, θ, ẑ). But due to the nature of the problem statement, we expect no variation
in ẑ, and one can consider a variation of T (r, θ), which implies a polar coordinate system. For this
two-dimensional polar geometry, ∇²T = 0 is written as

(1/r) ∂/∂r (r ∂T /∂r) + (1/r²) ∂²T /∂θ² = 0. (3.264)


Figure 3.14: Solution φ(r, t) to wave equation within a cylindrical geometry with φ(r, 0) =
1 − H(r − 1/4) with a = 1.

We take as boundary conditions

T (a, θ) = { T1 , θ ∈ [0, π]; T2 , θ ∈ [π, 2π] }. (3.265)

Often we can simplify analysis by scaling the equations in a convenient fashion. Scaling choices are not
unique. We adopt the following guidelines to aid our choices:
• Try to render quantities to lie between zero and unity.

• Try to induce and take advantage of natural symmetry.

• Try to remove inhomogeneities, so as to have as many things be zero as possible.


Here we have some useful choices. Let us take

r∗ ≡ r/a, (3.266)
T∗ ≡ 1 + 2(T − T1 )/(T1 − T2 ). (3.267)

With this choice the domain r ∈ [0, a] is mapped to r∗ ∈ [0, 1]. And when T = T1 , T∗ = 1; when
T = T2 , T∗ = −1. This choice does introduce both ±1 into T∗ , but it introduces an anti-symmetry
about θ = 0 and θ = π. By the chain rule, we see that

∂/∂r = (dr∗ /dr) ∂/∂r∗ = (1/a) ∂/∂r∗ . (3.268)


So Eq. (3.264) becomes

(1/(a²r∗ )) ∂/∂r∗ (r∗ ∂/∂r∗ ((T1 − T2 )(T∗ − 1)/2 + T1 )) + (1/(a²r∗² )) ∂²/∂θ² ((T1 − T2 )(T∗ − 1)/2 + T1 ) = 0. (3.269)

This and the boundary conditions reduce to

(1/r∗ ) ∂/∂r∗ (r∗ ∂T∗ /∂r∗ ) + (1/r∗² ) ∂²T∗ /∂θ² = 0, T∗ (1, θ) = f (θ) = −1 + 2H(π − θ) = { 1, θ ∈ [0, π]; −1, θ ∈ [π, 2π] }. (3.270)

Let us now separate variables and assume

T∗ (r∗ , θ) = A(r∗ )B(θ). (3.271)

Our Laplace’s equation then becomes

(B(θ)/r∗ ) d/dr∗ (r∗ dA/dr∗ ) + (A(r∗ )/r∗² ) d²B/dθ² = 0, (3.272)
(r∗ /A(r∗ )) d/dr∗ (r∗ dA/dr∗ ) + (1/B(θ)) d²B/dθ² = 0, (3.273)
−(r∗ /A(r∗ )) d/dr∗ (r∗ dA/dr∗ ) = (1/B(θ)) d²B/dθ² = −λ². (3.274)
This gives us two ordinary differential equations:

r∗ d/dr∗ (r∗ dA/dr∗ ) − λ²A = 0, (3.275)
d²B/dθ² + λ²B = 0. (3.276)
The second of these has solution

B(θ) = C1 sin λθ + C2 cos λθ. (3.277)

Now we expect both T and its spatial derivative to be periodic in θ. So we expect B(0) = B(2π) and
dB/dθ(0) = dB/dθ(2π). The condition B(0) = B(2π) gives

C2 = C1 sin(2πλ) + C2 cos(2πλ). (3.278)

The condition dB/dθ(0) = dB/dθ(2π) gives

λC1 = λC1 cos(2πλ) − λC2 sin(2πλ). (3.279)

We write this as a linear system of equations as

[ sin 2πλ , cos 2πλ − 1 ; cos 2πλ − 1 , − sin 2πλ ] [ C1 ; C2 ] = [ 0 ; 0 ]. (3.280)

For nontrivial C1 and C2 , we must require the determinant of the coefficient matrix be zero, giving

− sin² 2πλ − (cos 2πλ − 1)² = 0, (3.281)
− sin² 2πλ − cos² 2πλ + 2 cos 2πλ − 1 = 0, (3.282)
2 cos 2πλ = 2, (3.283)
cos 2πλ = 1. (3.284)


This can only be achieved if we select

λ = n, n = 0, 1, 2, . . . . (3.285)

Thus,

B(θ) = C1 sin nθ + C2 cos nθ. (3.286)

Then Eq. (3.275) reduces to

r∗² d²A/dr∗² + r∗ dA/dr∗ − n²A = 0. (3.287)

This is a second order ordinary differential equation with variable coefficients. It is known as Euler’s
equation. If n = 0, it reduces to

r∗ d²A/dr∗² + dA/dr∗ = 0. (3.288)
With A′ = dA/dr∗ , this becomes

dA′/dr∗ = −A′/r∗ , (3.289)
dA′/A′ = −dr∗ /r∗ , (3.290)
ln A′ = C − ln r∗ , (3.291)
A′ = Ĉ/r∗ . (3.292)

Here Ĉ = e^C . Continue then to find

dA/dr∗ = Ĉ/r∗ , (3.293)
A(r∗ ) = C̃ + Ĉ ln r∗ , n = 0. (3.294)
For n ≠ 0, we can find solutions by assuming solutions of the form A(r∗ ) = r∗^b . Substituting, we find

r∗² b(b − 1)r∗^{b−2} + r∗ br∗^{b−1} − n²r∗^b = 0, (3.295)
b(b − 1) + b − n² = 0, (3.296)
b² − n² = 0, (3.297)
b = ±n, n = 1, 2, . . . (3.298)

Thus

A(r∗ ) = C3 r∗^n + C4 r∗^{−n} . (3.299)
Combining, we find

T∗ (r∗ , θ) = { (C̃ + Ĉ ln r∗ )(C2 ), n = 0; (C3 r∗^n + C4 r∗^{−n} )(C1 sin nθ + C2 cos nθ), n = 1, 2, . . . } (3.300)

Now, we seek a bounded T∗ at r∗ = 0. To achieve this, we will insist that both Ĉ = C4 = 0, so that

T∗ (r∗ , θ) = { Ĉ0 , n = 0; r∗^n (Ĉ1 sin nθ + Ĉ2 cos nθ), n = 1, 2, . . . } (3.301)


Here we have taken Ĉ0 = C̃C2 , Ĉ1 = C3 C1 , and Ĉ2 = C3 C2 . We can in fact form linear combinations
of the various modes; doing this, and segregating the n = 0 term, defining the terms C0 , Cn and Bn
for convenience, and rearranging, we can say

T∗ (r∗ , θ) = C0 + Σ_{n=1}^∞ Cn r∗^n cos nθ + Σ_{n=1}^∞ Bn r∗^n sin nθ. (3.302)

Now when r∗ = 1, we have

T∗ (1, θ) = f (θ) = −1 + 2H(π − θ) = C0 + Σ_{n=1}^∞ Cn cos nθ + Σ_{n=1}^∞ Bn sin nθ. (3.303)

The following orthogonality properties are easily verified for nonnegative integers n and m:

∫₀^{2π} cos nθ cos mθ dθ = { 2π, n = m = 0; π, n = m ≠ 0; 0, n ≠ m }, (3.304)
∫₀^{2π} sin nθ sin mθ dθ = { 0, n = m = 0; π, n = m ≠ 0; 0, n ≠ m }. (3.305)
Let us see how to find Bn . Let us operate on Eq. (3.303) first by multiplying by sin mθ:

f (θ) sin mθ = C0 sin mθ + Σ_{n=1}^∞ Cn cos nθ sin mθ + Σ_{n=1}^∞ Bn sin nθ sin mθ, (3.306)
∫₀^{2π} f (θ) sin mθ dθ = C0 ∫₀^{2π} sin mθ dθ + Σ_{n=1}^∞ Cn ∫₀^{2π} cos nθ sin mθ dθ
                        + Σ_{n=1}^∞ Bn ∫₀^{2π} sin nθ sin mθ dθ. (3.307)

The first two sets of integrals on the right side vanish, and the third is πδmn , so

∫₀^{2π} f (θ) sin mθ dθ = Σ_{n=1}^∞ Bn πδmn , (3.308)
                        = πBm , (3.309)
Bn = (1/π) ∫₀^{2π} f (θ) sin nθ dθ. (3.310)

Then, one could use the same procedure to find C0 and Cn . The general trigonometric Fourier
coefficients for f (θ) are easily shown to be

C0 = (1/(2π)) ∫₀^{2π} f (θ) dθ, (3.311)
Cn = (1/π) ∫₀^{2π} f (θ) cos nθ dθ, n = 1, 2, . . . (3.312)
Bn = (1/π) ∫₀^{2π} f (θ) sin nθ dθ, n = 1, 2, . . . (3.313)
We find then for f (θ) = −1 + 2H(π − θ) from Eq. (3.270) that
C0 = 0, (3.314)


Cn = {0, 0, 0, 0, . . .} , (3.315)
 
4 4 4
Bn = , 0, , 0, ,... . (3.316)
π 3π 5π
Only odd values of n contribute to Bn. Recognizing this, we can write the solution compactly as

T∗(r∗, θ) = (4/π) Σ_{n=1}^{∞} (r∗^{2n−1}/(2n − 1)) sin((2n − 1)θ) = (4 r∗ sin θ)/π + (4 r∗^3 sin 3θ)/(3π) + (4 r∗^5 sin 5θ)/(5π) + . . . .        (3.317)
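As a quick numerical check, one can approximate the Fourier coefficients of f(θ) = −1 + 2H(π − θ) by quadrature and recover the values listed above. The sketch below is an added illustration, not part of the original notes; it uses simple midpoint quadrature, and the variable names are choices of convenience.

```python
import numpy as np

# midpoint quadrature nodes on [0, 2 pi]
N = 100000
dtheta = 2.0*np.pi/N
theta = (np.arange(N) + 0.5)*dtheta

# f(theta) = -1 + 2 H(pi - theta): +1 on (0, pi), -1 on (pi, 2 pi)
f = np.where(theta < np.pi, 1.0, -1.0)

# trigonometric Fourier coefficients, Eqs. (3.311)-(3.313)
C0 = np.sum(f)*dtheta/(2.0*np.pi)
Cn = np.array([np.sum(f*np.cos(n*theta))*dtheta/np.pi for n in range(1, 6)])
Bn = np.array([np.sum(f*np.sin(n*theta))*dtheta/np.pi for n in range(1, 6)])
```

The computed Bn match 4/π, 0, 4/(3π), 0, 4/(5π), while C0 and all Cn vanish, consistent with Eqs. (3.314-3.316).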

With x∗ = x/a and y∗ = y/a, surface and contour plots of T∗ composed from 25 nonzero terms is
shown in Fig. 3.15. We note there is no particular difficulty in T∗ at the origin r∗ = 0. However T∗ is


Figure 3.15: Plots of T∗ which satisfy ∇2 T∗ = 0 with T∗ = −1 on the lower circular boundary
and T∗ = 1 on the upper circular boundary: a) surface plot, b) contour plot.

nonunique at (r∗ , θ) = (1, 0) and (1, π), the locations of the jumps in T∗ .

3.3.2 Spherical
One can transform from the Cartesian system with (x, y, z) as coordinates to the spherical
system with (r, φ, θ) as coordinates via

x = r cos θ sin φ, (3.318)


y = r sin θ sin φ, (3.319)
z = r cos φ. (3.320)


Figure 3.16: Spherical coordinate geometry.

A sketch of the geometry is shown in Fig. 3.16. We will consider the domain r ∈ [0, ∞),
φ ∈ [0, π], θ ∈ [0, 2π]. Then, with the exception of points on the z-axis, (x, y, z) = (0, 0, z),
every (x, y, z) will map to a unique (r, φ, θ).
The Jacobian of the transformation is
J = ∂(x, y, z)/∂(r, φ, θ),        (3.321)

    [ ∂x/∂r   ∂x/∂φ   ∂x/∂θ ]
  = [ ∂y/∂r   ∂y/∂φ   ∂y/∂θ ] ,        (3.322)
    [ ∂z/∂r   ∂z/∂φ   ∂z/∂θ ]

    [ cos θ sin φ    r cos θ cos φ    −r sin θ sin φ ]
  = [ sin θ sin φ    r cos φ sin θ     r cos θ sin φ ] .        (3.323)
    [ cos φ         −r sin φ           0             ]

We have J = |J| = r^2 sin φ, so the transformation is singular and thus nonunique when
either r = 0, φ = 0, or φ = π. It is orientation-preserving for r > 0, φ ∈ (0, π), and it is
volume-preserving only for r^2 sin φ = 1; thus, in general it does not preserve volume.
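A numerical sanity check of these facts is straightforward; the sketch below is an added illustration (sample point values are arbitrary) that builds J at one (r, φ, θ) and confirms det J = r² sin φ and the diagonal metric tensor of Eq. (3.326).

```python
import numpy as np

def jacobian(r, phi, theta):
    """Jacobian matrix of (x, y, z) with respect to (r, phi, theta), Eq. (3.323)."""
    return np.array([
        [np.cos(theta)*np.sin(phi),  r*np.cos(theta)*np.cos(phi), -r*np.sin(theta)*np.sin(phi)],
        [np.sin(theta)*np.sin(phi),  r*np.sin(theta)*np.cos(phi),  r*np.cos(theta)*np.sin(phi)],
        [np.cos(phi),               -r*np.sin(phi),                0.0],
    ])

# arbitrary sample point away from the singular loci r = 0, phi = 0, phi = pi
r, phi, theta = 2.0, 0.7, 1.1
J = jacobian(r, phi, theta)
detJ = np.linalg.det(J)          # should equal r^2 sin(phi)
G = J.T @ J                      # metric tensor: diag(1, r^2, r^2 sin^2 phi)
```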
The metric tensor G is

G = JT · J, (3.324)


  
    [ cos θ sin φ      sin θ sin φ      cos φ    ] [ cos θ sin φ    r cos θ cos φ    −r sin θ sin φ ]
  = [ r cos θ cos φ    r cos φ sin θ   −r sin φ  ] [ sin θ sin φ    r cos φ sin θ     r cos θ sin φ ] ,        (3.325)
    [ −r sin θ sin φ   r cos θ sin φ    0        ] [ cos φ         −r sin φ           0             ]

    [ 1    0      0            ]
  = [ 0    r^2    0            ] .        (3.326)
    [ 0    0      r^2 sin^2 φ  ]

Because G is diagonal, the new coordinate axes are also orthogonal.


The gradient operator in the Cartesian system is related to that of the spherical system
via
 ∂   ∂ 
∂x ∂r
∂  ∂ 
∇ =  ∂y = (JT )−1  ∂φ , (3.327)
∂ ∂
∂z ∂θ
  ∂ 
cos θ sin φ cos θrcos φ − csc φrsin θ ∂r
∂ 
=  sin θ sin φ cos φrsin θ cos θrcsc φ   ∂φ , (3.328)
cos φ − sinr φ 0 ∂
∂θ
 

cos θ sin φ ∂r + cos θrcos φ ∂φ

− csc φrsin θ ∂θ

 ∂ 

=  sin θ sin φ ∂r + cos φrsin θ ∂φ

+ cos θrcsc φ ∂θ . (3.329)
∂ sin φ ∂
cos φ ∂r − r ∂φ

We then have the Laplacian operator, ∇² = ∇^T · ∇, which is, following extensive reduction,

∇² = ∇^T · ∇ = (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin² φ)) ∂²/∂θ² + (1/(r² sin φ)) ∂/∂φ (sin φ ∂/∂φ).        (3.330)

Example 3.10
Find the distribution of T within a sphere of radius a which satisfies ∇2 T = 0 with boundary
conditions of T = T1 on the upper half of the sphere and T = T2 on the lower half.

In general, we could expect T = T (r, φ, θ). However, we notice symmetry in the boundary conditions
such that we are motivated to seek solutions T = T (r, φ); that is, we will neglect any variation in θ.
Our governing equation and boundary conditions then reduce to
      
∂/∂r (r² ∂T/∂r) + (1/sin φ) ∂/∂φ (sin φ ∂T/∂φ) = 0,    T(a, φ) = { T1,  φ ∈ [0, π/2],
                                                                  { T2,  φ ∈ (π/2, π].        (3.331)

Similar to the related example in polar coordinates we select scaled variables r∗ = r/a, and T∗ =
1 + 2(T − T1 )/(T1 − T2 ). The problem is then expressed as
∂/∂r∗ (r∗² ∂T∗/∂r∗) + (1/sin φ) ∂/∂φ (sin φ ∂T∗/∂φ) = 0,    T∗(1, φ) = −1 + 2H(π/2 − φ) = { 1,   φ ∈ [0, π/2],
                                                                                           { −1,  φ ∈ (π/2, π].        (3.332)


Let us separate variables and see if this leads to a solution. We can try
T∗ (r∗ , φ) = A(r∗ )B(φ). (3.333)
Substituting this assumption into Eq. (3.332) yields
   
B(φ) d/dr∗ (r∗² dA/dr∗) + (A(r∗)/sin φ) d/dφ (sin φ dB/dφ) = 0,        (3.334)

(1/A(r∗)) d/dr∗ (r∗² dA/dr∗) = −(1/(B(φ) sin φ)) d/dφ (sin φ dB/dφ) = λ.        (3.335)

This yields two ordinary differential equations:

d/dr∗ (r∗² dA/dr∗) − λA = 0,        (3.336)

d/dφ (sin φ dB/dφ) + λ sin φ B = 0.        (3.337)
dφ dφ
Let us operate further on Eq. (3.337), performing some non-obvious transformations to render it into
a standard form. First let us change the independent variable from φ to s via the transformation
s = cos φ. (3.338)
With this transformation, we find from the chain rule that
d/dφ = (ds/dφ) d/ds = − sin φ d/ds.        (3.339)
Then Eq. (3.337) is rewritten as
 
− sin φ d/ds (− sin² φ dB/ds) + λ sin φ B = 0.        (3.340)
Now we recognize that sin2 φ = 1 − cos2 φ = 1 − s2 , and we scale by sin φ, taking care that φ ∈ (0, π),
so as to get
 
d/ds ((1 − s²) dB/ds) + λB = 0,        (3.341)

L B ≡ − d/ds ((1 − s²) dB/ds) = λB.        (3.342)

Here L is the well known positive definite Sturm-Liouville operator whose eigenvalues are λ = n(n + 1),
with n = 0, 1, 2, . . ., and whose eigenfunctions are the Legendre polynomials, Pn(s):

Pn(s) = (1/(2^n n!)) d^n/ds^n (s² − 1)^n.        (3.343)
The first few eigenfunctions and eigenvalues are
P0(s) = 1,                             λ = 0,        (3.344)
P1(s) = s,                             λ = 2,        (3.345)
P2(s) = (1/2)(−1 + 3s²),               λ = 6,        (3.346)
P3(s) = (1/2) s (−3 + 5s²),            λ = 12,       (3.347)
P4(s) = (1/8)(3 − 30s² + 35s⁴),        λ = 20,       (3.348)
...
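The Legendre polynomials can also be generated by the standard three-term recurrence (n + 1) P_{n+1}(s) = (2n + 1) s Pn(s) − n P_{n−1}(s), which avoids the repeated differentiation of the Rodrigues formula. The sketch below is an added illustration; it evaluates the first few Pn this way and also checks the orthogonality property invoked later in Eq. (3.364).

```python
import numpy as np

def legendre(n, s):
    """Evaluate P_n(s) via the recurrence (n+1) P_{n+1} = (2n+1) s P_n - n P_{n-1}."""
    s = np.asarray(s, dtype=float)
    p_prev, p = np.ones_like(s), s.copy()
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1)*s*p - k*p_prev)/(k + 1)
    return p

# midpoint quadrature on s in [-1, 1]: int P_n P_m ds = 2 delta_nm/(2n+1)
N = 20000
s = -1.0 + (np.arange(N) + 0.5)*(2.0/N)
ds = 2.0/N
inner_22 = np.sum(legendre(2, s)*legendre(2, s))*ds   # expect 2/5
inner_23 = np.sum(legendre(2, s)*legendre(3, s))*ds   # expect 0
```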


It can be shown by direct substitution of the eigenvalues and their corresponding eigenfunctions that
Eq. (3.342) is satisfied. Just as the Sturm-Liouville operator −d2 /ds2 has two families of eigenfunctions
(sin and cos), so does the Legendre Sturm-Liouville operator. The other set of complementary functions
are known as Qn (s) and have logarithmic singularities at s = ±1. Because we desire a bounded solution,
we will select the constant modulating Qn (s) to be zero, and thus consider it no further.
We thus take
Bn (φ) = Pn (cos φ). (3.349)
Thus,
B0(φ) = 1,                          λ = 0,        (3.350)
B1(φ) = cos φ,                      λ = 2,        (3.351)
B2(φ) = (1/2)(−1 + 3 cos² φ),       λ = 6,        (3.352)
...
Now return to consider Eq. (3.336). With λ = n(n + 1), n = 0, 1, 2, . . . it is
 
d/dr∗ (r∗² dA/dr∗) − n(n + 1)A = 0.        (3.353)
This is an Euler equation. We can assume solutions of the type A(r∗ ) = Cr∗ b . With this assumption,
Eq. (3.353) becomes
 
d/dr∗ (r∗² d/dr∗ (C r∗^b)) − n(n + 1) C r∗^b = 0,        (3.354)
d/dr∗ (b r∗^{b+1}) − n(n + 1) r∗^b = 0,                  (3.355)
b(b + 1) r∗^b − n(n + 1) r∗^b = 0,                       (3.356)
b(b + 1) − n(n + 1) = 0.                                 (3.357)
Solving this quadratic equation for b, we find two solutions: b = n and b = −(n + 1), thus
A(r∗ ) = C1 r∗ n + C2 r∗ −(n+1) , n = 0, 1, 2, . . . (3.358)
To suppress unbounded T∗ when r∗ = 0, we set C2 = 0 so that
A(r∗ ) = C1 r∗ n , n = 0, 1, 2, . . . (3.359)
We can then combine our solutions for B(φ) and A(r∗ ) in terms of arbitrary linear combinations to get

T∗(r∗, φ) = Σ_{n=0}^{∞} Cn r∗^n Pn(cos φ).        (3.360)

To determine the constants Cn , we can apply the boundary condition at r∗ = 1 from Eq. (3.332):

T∗(1, φ) = f(φ) = Σ_{n=0}^{∞} Cn Pn(cos φ).        (3.361)

This amounts to expressing f (φ) in terms of a Fourier-Legendre series, where the basis functions are the
Legendre polynomials. To aid in this, let us again employ the transformation of Eq. (3.338), s = cos φ.
Let us also define g such that
g(cos φ) = f (φ). (3.362)


For example, if f (φ) = φ, then g(cos φ) = arccos(cos φ); so g is the inverse cosine function. Then in
terms of s, we seek the Fourier-Legendre expansion for

g(s) = Σ_{n=0}^{∞} Cn Pn(s).        (3.363)

Now the Legendre polynomials are orthogonal on the domain s ∈ [−1, 1], with it being easy to show
that

∫_{−1}^{1} Pn(s) Pm(s) ds = (2/(2n + 1)) δmn.        (3.364)

Note that when s = 1, φ = 0 and when s = −1, φ = π, so the domain s ∈ [−1, 1] sweeps through the
entire sphere. Let us use the orthogonality property while operating on Eq. (3.363):

g(s) Pm(s) = Σ_{n=0}^{∞} Cn Pn(s) Pm(s),        (3.365)

∫_{−1}^{1} g(s) Pm(s) ds = Σ_{n=0}^{∞} Cn ∫_{−1}^{1} Pn(s) Pm(s) ds,        (3.366)
                         = Σ_{n=0}^{∞} Cn 2δmn/(2n + 1),        (3.367)
                         = 2Cm/(2m + 1),        (3.368)

Cn = ((2n + 1)/2) ∫_{−1}^{1} g(s) Pn(s) ds.        (3.369)

Now our two-parted domain transforms as follows. With s = cos φ, we see that φ ∈ [0, π/2] maps to
s ∈ [1, 0]; moreover, φ ∈ (π/2, π] maps to s ∈ (0, −1]. So we can say that g(s) is expressed as

g(s) = −1 + 2H(s) = { 1,   s ∈ [1, 0],
                    { −1,  s ∈ (0, −1].        (3.370)

With this g(s), evaluation of Eq. (3.369) gives


 
Cn = {0, 3/2, 0, −7/8, 0, 11/16, 0, −75/128, . . .} ,    n = 0, 1, 2, . . .        (3.371)
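These coefficients can be reproduced directly from Eq. (3.369). The sketch below is an added illustration (a simple recurrence for Pn and midpoint quadrature) that recovers C1 = 3/2, C3 = −7/8, and C5 = 11/16, with the even coefficients vanishing by the odd symmetry of g(s).

```python
import numpy as np

def legendre(n, s):
    """P_n(s) by the three-term recurrence (n+1) P_{n+1} = (2n+1) s P_n - n P_{n-1}."""
    s = np.asarray(s, dtype=float)
    p_prev, p = np.ones_like(s), s.copy()
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1)*s*p - k*p_prev)/(k + 1)
    return p

# g(s) = -1 + 2 H(s): +1 for s > 0, -1 for s < 0
N = 200000
s = -1.0 + (np.arange(N) + 0.5)*(2.0/N)
ds = 2.0/N
g = np.where(s > 0.0, 1.0, -1.0)

# Fourier-Legendre coefficients, Eq. (3.369)
C = [(2*n + 1)/2.0*np.sum(g*legendre(n, s))*ds for n in range(6)]
```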

Very detailed analysis reveals there is a general form for the Cn for arbitrary n, allowing one to write
the solution compactly as

T∗(r∗, φ) = Σ_{m=0}^{∞} ((−1)^m (4m + 3)(2m)! / (2^{2m+1} (m + 1) (m!)²)) r∗^{2m+1} P_{2m+1}(cos φ).        (3.372)

The first two nonzero terms of the series are


T∗(r∗, φ) = (3/2) r∗ P1(cos φ) − (7/8) r∗³ P3(cos φ) + . . . ,        (3.373)

          = (3/2) r∗ cos φ − (7/8) r∗³ (cos φ (−3 + 5 cos² φ)/2) + . . . .        (3.374)


Figure 3.17: Plots from the plane y = 0 of T∗ which satisfy ∇2 T∗ = 0 with T∗ = 1 on an
upper hemispherical boundary and T∗ = −1 on a lower hemispherical boundary: a) surface
plot, b) contour plot.

With x∗ = x/a, z∗ = z/a, and considering only the plane on which y = 0, surface and contour plots of
T∗ composed from 10 nonzero terms of Eq. (3.372) are shown in Fig. 3.17. Again, there is no particular
difficulty in T∗ at the origin r∗ = 0. However T∗ is nonunique at (r∗ , φ) = (1, π/2), the location of the
jump in T∗ . The plots of Fig. 3.17 are very similar to those of the cylindrical analog of Fig. 3.15; the
small differences can be attributed to spherical versus cylindrical geometry.

3.4 Usage in a stability problem


Separation of variables is often an important component of problems of larger scope. Often
when one wants to determine the stability of some solution of a nonlinear equation, one
employs local linearization techniques so as to generate a linear problem which may be solved
via separation of variables. As an example, let us consider an idealized problem motivated
by combustion. The general problem is nonlinear, with a linear heat equation subjected
to a nonlinear combustion source term. We shall determine a steady state solution from
solving numerically a nonlinear problem, then determine its linear stability via separation
of variables. We close by briefly examining its full nonlinear transient solution. Further
physical and mathematical details are given by Powers.7
7
Powers, J. M., 2014, Lecture Notes on Fundamentals of Combustion, University of Notre Dame; also see
Powers, J. M., 2016, Combustion Thermodynamics and Dynamics, Cambridge University Press, New York.


The domain is modeled to be a slab of infinite extent in the y and z directions and has
length two in the x direction, with x ∈ [−1, 1]. The temperature at x = ±1 is held fixed at
T = 0. The slab is initially unreacted. Exothermic conversion of material from reactants to
products will generate an elevated temperature within the slab T > 0, for x ∈ [−1, 1]. If the
thermal energy generated diffuses rapidly enough, the temperature within the slab will be
T ∼ 0, and the reaction rate will be low. If the energy generated by local reaction diffuses
slowly, it will accumulate in the interior, accelerate the local reaction rate, and induce rapid
energy release and high temperature.
We take our model equations to be
 
∂T/∂t = (1/D) ∂²T/∂x² + (1 − T) exp(−Θ/(1 + QT)),        (3.375)

T(−1, t) = 0,    T(1, t) = 0,    T(x, 0) = 0.        (3.376)
Here, the equations are dimensionless. Three dimensionless parameters appear: 1) Θ, the
so-called dimensionless activation energy, motivated by so-called Arrhenius kinetics. In the
Arrhenius kinetics model, reactions are suppressed at low temperature. At high temperature
they are fully activated. The value of Θ plays a large role in determining at what temperature
the transition from slow to fast occurs; 2) Q, the dimensionless heat release parameter, which
quantifies the exothermic nature of the reaction; and 3) D, the Damköhler8 number, which
gives the ratio of the time scale of energy diffusion to the time scale of chemical reaction.
Note that the initial and boundary conditions are homogeneous. The only inhomogeneity
lives in the exothermic reaction source term, which is nonlinear due to the exp(−Θ/(1 + QT)) term.
As an aside, let us consider the evolution of total energy within the domain. To do so we
integrate Eq. (3.375) through the entire domain.
∫_{−1}^{1} ∂T/∂t dx = ∫_{−1}^{1} (1/D) ∂²T/∂x² dx + ∫_{−1}^{1} (1 − T) exp(−Θ/(1 + QT)) dx,        (3.377)

d/dt ∫_{−1}^{1} T dx = (1/D) ∂T/∂x|_{x=1} − (1/D) ∂T/∂x|_{x=−1} + ∫_{−1}^{1} (1 − T) exp(−Θ/(1 + QT)) dx.        (3.378)
 (thermal energy change)    (boundary heat flux)                    (internal conversion)
The total thermal energy in our domain changes due to two reasons: 1) diffusive energy flux
at the isothermal boundaries, 2) internal conversion of chemical energy to thermal energy.

3.4.1 Spatially homogeneous solutions


For D → ∞ diffusion becomes unimportant in Eq. (3.375), and we recover a balance between
unsteady effects and reaction:
 
dT/dt = (1 − T) exp(−Θ/(1 + QT)),    T(0) = 0.        (3.379)
8
Gerhard Damköhler, 1908-1944, German chemist.


Solutions T (t) are independent of x and are considered spatially homogeneous. An asymptotic
theory valid in the limit of large Θ predicts significant acceleration of reaction when

t → e^Θ/(QΘ).        (3.380)
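A crude time integration of the spatially homogeneous model makes the induction-time estimate t → e^Θ/(QΘ) plausible. The sketch below is an added illustration using simple forward Euler (step size and end time are choices of convenience); it shows the solution is still in its slow induction phase well before the estimated blow-up time.

```python
import math

Theta, Q = 15.0, 1.0
t_blowup_estimate = math.exp(Theta)/(Q*Theta)   # about 2.179e5

def rate(T):
    """Right side of the spatially homogeneous model dT/dt = (1-T) exp(-Theta/(1+Q T))."""
    return (1.0 - T)*math.exp(-Theta/(1.0 + Q*T))

# forward Euler on dT/dt = rate(T), T(0) = 0; dynamics are slow during induction
dt, t_end = 50.0, 1.0e5
T, n_steps = 0.0, int(t_end/dt)
for _ in range(n_steps):
    T += dt*rate(T)
```

At t = 10^5, roughly half the estimated induction time, the temperature remains small; the rapid rise to T = 1 occurs only near t ≈ 2.2 × 10^5.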

For Θ = 15, Q = 1, we plot a numerical solution of T (t) in Fig. 3.18. For these parameters,


Figure 3.18: Plot of T (t) for Θ = 15, Q = 1, D → ∞.

Eq. (3.380) estimates a blow-up phenomenon at t = 217934. The results of Fig. 3.18 indicate
our estimate is good. Physically, the exothermic heat release from initially slow reaction
accumulates inducing a slow temperature increase. At a critical temperature, the extreme
sensitivity of reaction rates induces a rapid rise of temperature to its final value of T = 1,
where the material is completely reacted.

3.4.2 Steady solutions


Let us examine solutions to Eq. (3.375) in the steady state limit for which ∂/∂t = 0 :
 
0 = (1/D) d²T/dx² + (1 − T) exp(−Θ/(1 + QT)),        (3.381)
0 = T(−1) = T(1).        (3.382)

Rearrange to get

d²T/dx² = −D (1 − T) exp(−Θ/(1 + QT)),        (3.383)
        = −D QΘ exp(−Θ) (1/(QΘ exp(−Θ))) (1 − T) exp(−Θ/(1 + QT)).        (3.384)
Now, defining for convenience
δ = DQΘ exp(−Θ), (3.385)



Figure 3.19: Plots of high, low, and intermediate temperature distributions T (x) for δ = 0.4,
Q = 1, Θ = 15.

we get
 
d²T/dx² = −δ (exp(Θ)/(QΘ)) (1 − T) exp(−Θ/(1 + QT)),        (3.386)

T(−1) = T(1) = 0.        (3.387)

Equations (3.386-3.387) can be solved by a numerical trial and error method for which
we take T(−1) = 0 and guess dT/dx|x=−1 . We keep guessing until we have satisfied the
boundary condition T(1) = 0.
When we do this with δ = 0.4, Θ = 15, Q = 1 (so D = δe^Θ/(QΘ) = 87173.8), we find
three steady solutions. For each we find a maximum temperature, Tm, at x = 0. One is
at low temperature with Tm = 0.016. We find a second intermediate temperature solution
with Tm = 0.417. And we find a high temperature solution with Tm = 0.987. Plots of T(x)
for high, low, and intermediate temperature solutions are given in Fig 3.19.
We can use a one-term collocation approximation to estimate the relationship between δ
and T m . Let us estimate that

Ta (x) = c1 (1 − x2 ). (3.388)

This estimate certainly satisfies the boundary conditions. Substituting our choice into
Eq. (3.386), we get a residual of
 
r(x) = −2c1 + (δ/(QΘ)) exp(Θ − Θ/(1 + c1 Q(1 − x²))) (1 − c1(1 − x²)).        (3.389)
We choose a one term collocation method with ψ1(x) = δD(x), the Dirac delta distribution centered at x = 0. Then, setting ∫_{−1}^{1} ψ1(x) r(x) dx =



Figure 3.20: Plots of Tam versus δ, with Q = 1, Θ = 15 from a one-term collocation approx-
imate solution.

0 gives

r(0) = −2c1 + (δ/(QΘ)) exp(Θ − Θ/(1 + c1 Q)) (1 − c1) = 0.        (3.390)

We solve for δ and get


 
δ = (2c1 QΘ/((1 − c1) e^Θ)) exp(Θ/(1 + c1 Q)).        (3.391)

The maximum temperature of the approximation is given by Tam = c1 and occurs at x = 0.


A plot of Tam versus δ is given in Fig 3.20. For δ < δc1 ∼ 0.2, one low temperature solution
exists. For δc1 < δ < δc2 ∼ 0.84, three solutions exist. For δ > δc2 , one high temperature
solution exists.
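The fold points can be located numerically by scanning Eq. (3.391) over c1 ∈ (0, 1). The sketch below is an added illustration; the critical values are read off a coarse scan rather than solved for exactly, and the scan finds a local maximum near δ ≈ 0.86 and a local minimum near δ ≈ 0.14, in rough agreement with the values δc2 ∼ 0.84 and δc1 ∼ 0.2 read from the figure.

```python
import math

Q, Theta = 1.0, 15.0

def delta_of_c1(c1):
    """Collocation relation, Eq. (3.391), between delta and the amplitude c1 = Tam."""
    return (2.0*c1*Q*Theta/((1.0 - c1)*math.exp(Theta)))*math.exp(Theta/(1.0 + c1*Q))

# scan c1 in (0, 1); the curve rises, folds back, then rises again
c1s = [i/10000.0 for i in range(1, 9999)]
deltas = [delta_of_c1(c) for c in c1s]

# interior local extrema of delta(c1) locate the two fold points
d_max = max(deltas[i] for i in range(1, 3000))      # fold near c1 ~ 0.085
d_min = min(deltas[i] for i in range(3000, 9000))   # fold near c1 ~ 0.73
```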

3.4.3 Unsteady solutions


Let us now study the effects of time-dependency on our problem. Let us begin with
Eq. (3.375).

3.4.3.1 Linear stability


We will first consider small deviations from the steady solutions found earlier and see if those
deviations grow or decay with time. This will allow us to make a definitive statement about
the linear stability of those steady solutions.


3.4.3.1.1 Formulation First, recall that we have independently determined three exact
numerical steady solutions to the time-independent version of Eq. (3.375). Let us call any
of these Te (x). Note that by construction Te (x) satisfies the boundary conditions on T .
Let us subject a steady solution to a small perturbation and consider that to be our
initial condition for an unsteady calculation. Take then

T (x, 0) = Te (x) + ǫA(x), A(−1) = A(1) = 0, (3.392)


A(x) = O(1), 0 < ǫ ≪ 1. (3.393)

Here, A(x) is some function which satisfies the same boundary conditions as T (x, t).
Now, let us assume that

T (x, t) = Te (x) + ǫT ′ (x, t). (3.394)

with

T ′ (x, 0) = A(x). (3.395)

Here, T ′ is an O(1) quantity. We then substitute our Eq. (3.394) into Eq. (3.375) to get
∂/∂t (Te(x) + ǫT′(x, t)) = (1/D) ∂²/∂x² (Te(x) + ǫT′(x, t))
                           + (1 − Te(x) − ǫT′(x, t)) exp(−Θ/(1 + QTe(x) + QǫT′(x, t))).        (3.396)

From here on we will understand that Te is Te (x) and T ′ is T ′ (x, t). Now, consider the
exponential term:
exp(−Θ/(1 + QTe + QǫT′)) = exp((−Θ/(1 + QTe)) (1/(1 + QǫT′/(1 + QTe)))),        (3.397)
                         ∼ exp((−Θ/(1 + QTe)) (1 − QǫT′/(1 + QTe))),            (3.398)
                         ∼ exp(−Θ/(1 + QTe)) exp(ǫΘQT′/(1 + QTe)²),             (3.399)
                         ∼ exp(−Θ/(1 + QTe)) (1 + ǫΘQT′/(1 + QTe)²).            (3.400)
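The two-term expansion above is easy to check numerically. The sketch below is an added illustration (the sample values of Te and T′ are arbitrary); halving ǫ should reduce the mismatch between the exact exponential and its linearization by roughly a factor of four, confirming the error is O(ǫ²).

```python
import math

Theta, Q = 15.0, 1.0
Te, Tp = 0.4, 0.7   # illustrative sample values of Te and T'

def exact(eps):
    """Exact exponential term exp(-Theta/(1 + Q Te + Q eps T'))."""
    return math.exp(-Theta/(1.0 + Q*Te + Q*eps*Tp))

def linearized(eps):
    """Two-term expansion: exp(-Theta/(1+Q Te)) (1 + eps Theta Q T'/(1+Q Te)^2)."""
    return math.exp(-Theta/(1.0 + Q*Te))*(1.0 + eps*Theta*Q*Tp/(1.0 + Q*Te)**2)

# mismatch at eps and eps/2; ratio near 4 indicates an O(eps^2) error
e1 = abs(exact(1.0e-3) - linearized(1.0e-3))
e2 = abs(exact(5.0e-4) - linearized(5.0e-4))
```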
So, our Eq. (3.396) can be rewritten as

∂/∂t (Te + ǫT′) = (1/D) ∂²/∂x² (Te + ǫT′)
                  + (1 − Te − ǫT′) exp(−Θ/(1 + QTe)) (1 + ǫΘQT′/(1 + QTe)²),

                = (1/D) ∂²/∂x² (Te + ǫT′)
                  + exp(−Θ/(1 + QTe)) (1 − Te − ǫT′)(1 + ǫΘQT′/(1 + QTe)²),

                = (1/D) ∂²/∂x² (Te + ǫT′)
                  + exp(−Θ/(1 + QTe)) ((1 − Te) + ǫT′(−1 + (1 − Te)ΘQ/(1 + QTe)²) + O(ǫ²)),

                = (ǫ/D) ∂²T′/∂x² + [(1/D) ∂²Te/∂x² + exp(−Θ/(1 + QTe))(1 − Te)]   (= 0)
                  + exp(−Θ/(1 + QTe)) ǫT′(−1 + (1 − Te)ΘQ/(1 + QTe)²) + O(ǫ²).
                                                                                 (3.401)

Now, we recognize the bracketed term as zero because Te (x) is constructed to satisfy the
steady state equation. We also recognize that ∂Te (x)/∂t = 0. So, our equation reduces to,
neglecting O(ǫ2 ) terms, and canceling ǫ
  
∂T′/∂t = (1/D) ∂²T′/∂x² + exp(−Θ/(1 + QTe)) (−1 + (1 − Te)ΘQ/(1 + QTe)²) T′.        (3.402)

Equation (3.402) is a linear partial differential equation for T′(x, t). It is of the form

∂T′/∂t = (1/D) ∂²T′/∂x² + B(x) T′,        (3.403)

with

B(x) ≡ exp(−Θ/(1 + QTe(x))) (−1 + (1 − Te(x))ΘQ/(1 + QTe(x))²).        (3.404)

3.4.3.1.2 Separation of variables Let us use the standard technique of separation of
variables to solve Eq. (3.403). We first assume that

T ′ (x, t) = H(x)K(t). (3.405)

So, Eq. (3.403) becomes

H(x) dK(t)/dt = (1/D) K(t) d²H(x)/dx² + B(x) H(x) K(t),        (3.406)

(1/K(t)) dK(t)/dt = (1/D)(1/H(x)) d²H(x)/dx² + B(x) = −λ.        (3.407)


Because the left side is a function of t and the right side is a function of x, the only way the
two can be equal is if they are both the same constant. We will call that constant −λ.
Now, Eq. (3.407) really contains two equations, the first of which is
dK(t)/dt + λK(t) = 0.        (3.408)
This has solution

K(t) = C exp(−λt), (3.409)

where C is some arbitrary constant. Clearly if λ > 0, this solution is stable, with time
constant of relaxation τ = 1/λ.
The second differential equation contained within Eq. (3.407) is
(1/D) d²H(x)/dx² + B(x)H(x) = −λH(x),        (3.410)

L H ≡ −((1/D) d²/dx² + B(x)) H(x) = λH(x).        (3.411)

This is of the classical eigenvalue form for a linear operator L; that is L(H(x)) = λH(x).
We also must have

H(−1) = H(1) = 0, (3.412)

to satisfy the spatially homogeneous boundary conditions on T ′ (x, t).


This eigenvalue problem is difficult to solve because of the complicated nature of B(x).
Let us see how the solution would proceed in the limiting case of B as a constant. We will
generalize later.
If B is a constant, we have
d²H/dx² + D(B + λ)H = 0,    H(−1) = H(1) = 0.        (3.413)
The following mapping simplifies the problem somewhat:
y = (x + 1)/2.        (3.414)
This takes our domain of x ∈ [−1, 1] to y ∈ [0, 1]. By the chain rule
This takes our domain of x ∈ [−1, 1] to y ∈ [0, 1]. By the chain rule

dH/dx = (dH/dy)(dy/dx) = (1/2) dH/dy.

So

d²H/dx² = (1/4) d²H/dy².


So, our eigenvalue problem transforms to

d²H/dy² + 4D(B + λ)H = 0,    H(0) = H(1) = 0.        (3.415)

This has solution

H(y) = C1 cos(√(4D(B + λ)) y) + C2 sin(√(4D(B + λ)) y).        (3.416)

At y = 0 we have then

H(0) = 0 = C1 (1) + C2 (0), (3.417)

so C1 = 0. Thus,
H(y) = C2 sin(√(4D(B + λ)) y).        (3.418)

At y = 1, we have the other boundary condition:


H(1) = 0 = C2 sin(√(4D(B + λ))).        (3.419)

Because C2 ≠ 0 to avoid a trivial solution, we must require that

sin(√(4D(B + λ))) = 0.        (3.420)

For this to occur, the argument of the sin function must be an integer multiple of π:
√(4D(B + λ)) = nπ,    n = 1, 2, 3, . . .        (3.421)

Thus,

λ = n²π²/(4D) − B.        (3.422)
We need λ > 0 for stability. For large n and D > 0, we have stability. Depending on the
value of B, low n, which corresponds to low frequency modes, could be unstable.

3.4.3.1.3 Numerical eigenvalue solution Let us return to the full problem where
B = B(x). Let us solve the eigenvalue problem via the method of finite differences. Let us
take our domain x ∈ [−1, 1] and discretize into N points with
∆x = 2/(N − 1),    xi = (i − 1)∆x − 1.        (3.423)
Note that when i = 1, xi = −1, and when i = N, xi = 1. Let us define B(xi ) = Bi and
H(xi ) = Hi .


We can rewrite Eq. (3.410) as

d²H(x)/dx² + D(B(x) + λ)H(x) = 0,    H(−1) = H(1) = 0.        (3.424)
Now, let us apply an appropriate equation at each node. At i = 1, we must satisfy the
boundary condition so

H1 = 0. (3.425)

At i = 2, we discretize Eq. (3.424) with a second order central difference to obtain


(H1 − 2H2 + H3)/∆x² + D(B2 + λ)H2 = 0.        (3.426)
We get a similar equation at a general interior node i:
We get a similar equation at a general interior node i:

(Hi−1 − 2Hi + Hi+1)/∆x² + D(Bi + λ)Hi = 0.        (3.427)
At the i = N − 1 node, we have
(HN−2 − 2HN−1 + HN)/∆x² + D(BN−1 + λ)HN−1 = 0.        (3.428)
At the i = N node, we have the boundary condition

HN = 0. (3.429)

These represent a linear tridiagonal system of equations of the form


 2 
( D∆x2 − B2 ) − D∆x 1
0 0 ... 0    
2 H2 H2
 − 1 2 2 1
( D∆x2 − B3 ) − D∆x2 0 
. . . 0   H3 
 D∆x  H3 
 .. ..  .   . 
 0 − D∆x 1 . . ... ...  .   . 
 2
  ..  = λ  ..  .(3.430)
 .. .. .. .. ..    .. 
 . . . . . . . .   ..   . 
 .. .. . .. ..  .
 .   . 
 . . ... . . . . . .
.. ..
0 0 ... ... . . | H{zN −1
}
HN −1
| {z }
| {z } h h
L

This is of the classical linear algebraic eigenvalue form L · h = λh. All one need do is
discretize and find the eigenvalues of the matrix L. These will be good approximations to the
eigenvalues of the differential operator L. The eigenvectors of L will be good approximations
of the eigenfunctions of L. To get a better approximation, one need only reduce ∆x.
Note because the matrix L is symmetric, the eigenvalues are guaranteed real, and the
eigenvectors are guaranteed orthogonal. This is actually a consequence of the original prob-
lem being in Sturm-Liouville form, which is guaranteed to be self-adjoint with real eigenvalues
and orthogonal eigenfunctions.
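For the inert case B(x) = 0, the discrete eigenvalues can be compared directly with Eq. (3.422). The sketch below is an added illustration using dense linear algebra rather than a specialized tridiagonal solver; it recovers the smallest eigenvalue λ1 ≈ π²/(4D) to within a fraction of a percent on 101 grid points.

```python
import numpy as np

D = 87173.8
N = 101                        # grid points on x in [-1, 1]
dx = 2.0/(N - 1)
n = N - 2                      # interior unknowns H_2, ..., H_{N-1}
Bi = np.zeros(n)               # inert case: B(x) = 0

# assemble the symmetric tridiagonal operator of Eq. (3.430): L h = lambda h
L = np.zeros((n, n))
np.fill_diagonal(L, 2.0/(D*dx**2) - Bi)
for i in range(n - 1):
    L[i, i + 1] = -1.0/(D*dx**2)
    L[i + 1, i] = -1.0/(D*dx**2)

eigvals = np.sort(np.linalg.eigvalsh(L))
lam1 = eigvals[0]              # compare with pi^2/(4 D) = 2.83044e-5
```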



Figure 3.21: Plots of first, second, and third harmonic modes of eigenfunctions versus x,
with δ = 0.4, Q = 1, Θ = 15, low temperature steady solution Te (x).

Low temperature transients For our case of δ = 0.4, Q = 1, Θ = 15 (so D =
87173.8), we can calculate the stability of the low temperature solution. Choosing N = 101
points to discretize the domain, we find a set of eigenvalues. They are all positive, so the
solution is stable. The first few are

λ = 0.0000232705, 0.000108289, 0.000249682, 0.000447414, . . . . (3.431)

The first few eigenvalues can be approximated by inert theory with B(x) = 0, see Eq. (3.422):

λ ∼ n²π²/(4D) = 0.0000283044, 0.000113218, 0.00025474, 0.00045287, . . . .        (3.432)
The first eigenvalue is associated with the longest time scale τ = 1/0.0000232705 =
42972.9 and a low frequency mode, whose shape is given by the associated eigenvector,
plotted in Fig. 3.21. This represents the fundamental mode, also known as the first harmonic
mode. Shown also in Fig. 3.21 are the second and third harmonic modes.

Intermediate temperature transients For the intermediate temperature solution
with Tm = 0.417, we find the first few eigenvalues to be

λ = −0.0000383311, 0.0000668221, 0.000209943, . . . (3.433)

Except for the first, all the eigenvalues are positive. The first eigenvalue of λ = −0.0000383311
is associated with an unstable fundamental mode. This mode is also known as the first har-
monic mode. We plot the first three harmonic modes in Fig. 3.22.

High temperature transients For the high temperature solution with T m = 0.987,
we find the first few eigenvalues to be

λ = 0.000146419, 0.00014954, 0.000517724, . . . (3.434)

All the eigenvalues are positive, so all modes are stable. We plot the first three modes in
Fig. 3.23.



Figure 3.22: Plot of first, second, and third harmonic modes of eigenfunctions versus x, with
δ = 0.4, Q = 1, Θ = 15, intermediate temperature steady solution Te (x).

Figure 3.23: Plot of first, second, and third harmonic modes of eigenfunctions versus x, with
δ = 0.4, Q = 1, Θ = 15, high temperature steady solution Te (x).

3.4.3.2 Full transient solution


We can get a full transient solution to Eq. (3.375) with numerical methods. We omit details
of such numerical methods, which can be found in standard texts.

3.4.3.2.1 Low temperature solution For our case of δ = 0.4, Q = 1, Θ = 15 (so
D = 87173.8), we show a plot of the full transient solution in Fig. 3.24. Also seen in
Fig. 3.24 is that the centerline temperature T (0, t) relaxes to the long time value predicted
by the low temperature steady solution:

lim_{t→∞} T(0, t) = 0.016.        (3.435)

3.4.3.2.2 High temperature solution We next select a value of δ = 1.2 > δc2. This
should induce transition to a high temperature solution. We maintain Θ = 15, Q = 1. We
get D = δe^Θ/(QΘ) = 261521. The full transient solution is shown in Fig. 3.25. Also shown
in Fig. 3.25 is the centerline temperature T (0, t). We see it relaxes to the long time value
predicted by the high temperature steady solution:

lim_{t→∞} T(0, t) = 0.9999185.        (3.436)



Figure 3.24: Plot of T (x, t) and plot of T (0, t) along with the long time exact low temperature
centerline solution, Te (0), with δ = 0.4, Q = 1, Θ = 15.


Figure 3.25: Plot of T (x, t) and plot of T (0, t) along with the long time exact high temperature
centerline solution, Te (0), with δ = 1.2, Q = 1, Θ = 15.

It is clearly seen that there is a rapid acceleration of the reaction for t ∼ 106 . This compares
with the prediction of the induction time from the infinite Damköhler number, D → ∞,
thermal explosion theory of explosion to occur when

t → e^Θ/(QΘ) = e^15/((1)(15)) = 2.17934 × 10^5.        (3.437)

The estimate under-predicts the value by a factor of five. This is likely due to 1) cooling
of the domain due to the low temperature boundaries at x = ±1, and 2) effects of finite
activation energy.


3.5 Nonlinear separation of variables


We adopt here some presentation and an example first given by Powers and Sen.9 We close
this chapter by extending the notion of separation of variables to nonlinear systems. This
will allow us to illustrate some important principles:

• Solution of a partial differential equation can always be cast in terms of solving an
  infinite set of ordinary differential equations.

• Approximate solution of a partial differential equation can be cast in terms of solving
  a finite set of ordinary differential equations.

• A linear partial differential equation induces an uncoupled system of linear ordinary
  differential equations.

• A nonlinear partial differential equation induces a coupled set of nonlinear ordinary
  differential equations.

There are many viable methods to represent a partial differential equation as a system of ordinary
differential equations.
independent variables are discretized; important examples are the finite difference and finite
element methods, which will not be considered here. Another key method involves projecting
the dependent variable onto a set of basis functions and truncating this infinite series. We
will illustrate such a process here with an example involving a projection incorporating the
method of weighted residuals.

Example 3.11
Convert the nonlinear partial differential equation, initial and boundary conditions
 
∂T/∂t = ∂/∂x ((1 + ǫT) ∂T/∂x),    T(x, 0) = x − x²,    T(0, t) = T(1, t) = 0,        (3.438)

to a system of ordinary differential equations using a Galerkin projection method and find a two-term
approximation.

Equation (3.438) is an extension of the heat equation, Eq. (1.82), when one modifies Fourier’s law,
Eq. (1.78), to allow for a variation of thermal conductivity k with temperature T . Omitting details, it
can be shown to describe the time-evolution of a spatial temperature field in a one-dimensional geometry
with material properties which have weak temperature-dependency when 0 < ǫ ≪ 1. The boundary
conditions are homogeneous, and the initial condition is symmetric about x = 1/2. We can think of
T as temperature, x as distance, and t as time, all of which have been suitably scaled. For ǫ = 0,
the material properties are constant, and the equation is linear; otherwise, the material properties are

9
J. M. Powers and M. Sen, Mathematical Methods in Engineering, Cambridge University Press, New
York, 2015. See Section 9.10.

© 06 February 2024. J. M. Powers.


90 CHAPTER 3. SEPARATION OF VARIABLES

temperature-dependent, and the equation is nonlinear due to the product $T\,\partial T/\partial x$. We can use the product rule to rewrite Eq. (3.438) as

$$\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2} + \epsilon T \frac{\partial^2 T}{\partial x^2} + \epsilon \left( \frac{\partial T}{\partial x} \right)^2. \tag{3.439}$$

Now let us assume that $T(x,t)$ can be approximated in an $N$-term series by

$$T(x,t) = \sum_{n=1}^{N} \alpha_n(t)\, \varphi_n(x). \tag{3.440}$$

This amounts to a separation of variables, which we note does not require that our system be linear. We presume the exact solution is approached as $N \to \infty$. We can consider $\alpha_n(t)$ to be a set of $N$ time-dependent amplitudes that modulate each spatial basis function, $\varphi_n(x)$. For convenience, we will insist that the spatial basis functions satisfy the spatial boundary conditions $\varphi_n(0) = \varphi_n(1) = 0$ as well as an orthonormality condition for $x \in [0,1]$:

$$\langle \varphi_n, \varphi_m \rangle = \delta_{nm}. \tag{3.441}$$

At the initial state, we have

$$T(x,0) = x - x^2 = \sum_{n=1}^{N} \alpha_n(0)\, \varphi_n(x). \tag{3.442}$$

The terms $\alpha_n(0)$ are simply the constants in the Fourier series expansion of $x - x^2$:

$$\alpha_n(0) = \langle \varphi_n, x - x^2 \rangle. \tag{3.443}$$

The partial differential equation expands as

$$\underbrace{\sum_{n=1}^{N} \frac{d\alpha_n}{dt} \varphi_n(x)}_{\partial T/\partial t} = \underbrace{\sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2}}_{\partial^2 T/\partial x^2} + \epsilon \underbrace{\left( \sum_{n=1}^{N} \alpha_n(t)\varphi_n(x) \right)}_{T} \underbrace{\left( \sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2} \right)}_{\partial^2 T/\partial x^2} + \epsilon \underbrace{\left( \sum_{n=1}^{N} \alpha_n(t) \frac{d\varphi_n}{dx} \right)^2}_{(\partial T/\partial x)^2}. \tag{3.444}$$

We change one of the dummy indices in each of the nonlinear terms from $n$ to $m$ and rearrange to find

$$\sum_{n=1}^{N} \frac{d\alpha_n}{dt} \varphi_n(x) = \sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2} + \epsilon \sum_{n=1}^{N} \sum_{m=1}^{N} \alpha_n(t)\alpha_m(t) \left( \varphi_n(x) \frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right). \tag{3.445}$$

Next, for the Galerkin procedure, one selects the weighting functions $\psi_l(x)$ to be the basis functions $\varphi_l(x)$ and takes the inner product of the equation with the weighting functions, yielding

$$\left\langle \varphi_l(x), \sum_{n=1}^{N} \frac{d\alpha_n}{dt} \varphi_n(x) \right\rangle = \left\langle \varphi_l(x), \sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2} + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t)\left( \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right) \right\rangle, \tag{3.446}$$

$$\sum_{n=1}^{N} \frac{d\alpha_n}{dt} \underbrace{\langle \varphi_l(x), \varphi_n(x) \rangle}_{\delta_{ln}} = \left\langle \varphi_l(x), \sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2} + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t)\left( \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right) \right\rangle. \tag{3.447}$$



3.5. NONLINEAR SEPARATION OF VARIABLES 91

Because of the orthonormality of the basis functions, the left side has obvious simplifications, yielding

$$\frac{d\alpha_l}{dt} = \left\langle \varphi_l(x), \sum_{n=1}^{N} \alpha_n(t) \frac{d^2\varphi_n}{dx^2} + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t)\left( \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right) \right\rangle. \tag{3.448}$$

The right side can also be simplified via a complicated set of integrations by parts and application of boundary conditions. If we further select $\varphi_n(x)$ to be an eigenfunction of $d^2/dx^2$, the first term on the right side will simplify considerably, though this choice is not required. Let us here take that choice, thus requiring

$$\frac{d^2}{dx^2}\varphi_n(x) = \lambda_n \varphi_n(x). \tag{3.449}$$
Then we expand Eq. (3.448) as follows:

$$\frac{d\alpha_l}{dt} = \sum_{n=1}^{N} \alpha_n(t) \left\langle \varphi_l(x), \frac{d^2\varphi_n}{dx^2} \right\rangle + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t) \left\langle \varphi_l(x), \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right\rangle, \tag{3.450}$$

$$= \sum_{n=1}^{N} \alpha_n(t)\, \underbrace{\langle \varphi_l(x), \lambda_n \varphi_n \rangle}_{\lambda_n \delta_{ln}} + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t) \left\langle \varphi_l(x), \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right\rangle,$$

$$= \sum_{n=1}^{N} \lambda_n \alpha_n(t)\delta_{ln} + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t) \left\langle \varphi_l(x), \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right\rangle, \tag{3.451}$$

$$= \lambda_l \alpha_l(t) + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} \alpha_n(t)\alpha_m(t)\, \underbrace{\left\langle \varphi_l(x), \varphi_n(x)\frac{d^2\varphi_m}{dx^2} + \frac{d\varphi_n}{dx}\frac{d\varphi_m}{dx} \right\rangle}_{C_{lnm}}, \tag{3.452}$$

$$= \lambda_l \alpha_l(t) + \epsilon \sum_{n=1}^{N}\sum_{m=1}^{N} C_{lnm}\, \alpha_n(t)\alpha_m(t). \tag{3.453}$$

Here $C_{lnm}$ is a set of constants obtained after forming the various integrals of the basis functions and their derivatives. Note that in the limit in which nonlinear effects are negligible, $\epsilon \to 0$, we get a set of $N$ uncoupled linear ordinary differential equations for the time-dependent amplitudes. Thus in the linear limit, the time-evolution of each mode is independent of the other modes. In contrast, for $\epsilon \neq 0$, the system of $N$ ordinary differential equations for amplitude time-evolution is fully coupled and nonlinear.
In any case, this all serves to remove the explicit dependency on $x$, thus yielding a system of $N$ ordinary differential equations of the classical form of a nonlinear dynamical system:

$$\frac{d\boldsymbol{\alpha}}{dt} = \mathbf{f}(\boldsymbol{\alpha}), \tag{3.454}$$

where $\boldsymbol{\alpha}$ is a vector of length $N$, and $\mathbf{f}$ is in general a nonlinear function of $\boldsymbol{\alpha}$. We summarize some important ideas for equations of this type. For further background, one can consult Powers and Sen.10
• The system, Eq. (3.454), is in equilibrium when

$$\mathbf{f}(\boldsymbol{\alpha}) = \mathbf{0}. \tag{3.455}$$
10 J. M. Powers and M. Sen, Mathematical Methods in Engineering, Cambridge University Press, New York, 2015. See Sections 9.3-9.6.




This constitutes a system of nonlinear algebraic equations. Because the system is nonlinear, existence and uniqueness of equilibria is not guaranteed. Thus, we could expect to find no roots, one root, or multiple roots, depending on $\mathbf{f}(\boldsymbol{\alpha})$. Equilibrium points are also known as critical points or fixed points. We distinguish an equilibrium point from a general point by an overline, taking equilibrium points to be $\boldsymbol{\alpha} = \overline{\boldsymbol{\alpha}}$, and requiring that

$$\mathbf{f}(\overline{\boldsymbol{\alpha}}) = \mathbf{0}. \tag{3.456}$$

• Stability of each equilibrium can be ascertained by a local linear analysis in the neighborhood of each equilibrium. Local Taylor series analysis of Eq. (3.454) in such a neighborhood allows it to be rewritten as

$$\frac{d\boldsymbol{\alpha}}{dt} = \underbrace{\mathbf{f}(\overline{\boldsymbol{\alpha}})}_{\mathbf{0}} + \left.\frac{\partial \mathbf{f}}{\partial \boldsymbol{\alpha}}\right|_{\boldsymbol{\alpha}=\overline{\boldsymbol{\alpha}}} \cdot (\boldsymbol{\alpha} - \overline{\boldsymbol{\alpha}}) + \ldots. \tag{3.457}$$

Take the constant Jacobian of $\mathbf{f}$ evaluated at $\overline{\boldsymbol{\alpha}}$ as $\mathbf{J}$:

$$\mathbf{J} = \left.\frac{\partial \mathbf{f}}{\partial \boldsymbol{\alpha}}\right|_{\boldsymbol{\alpha}=\overline{\boldsymbol{\alpha}}}. \tag{3.458}$$

Use this and the fact that $\overline{\boldsymbol{\alpha}}$ is a constant to rewrite Eq. (3.457) as

$$\frac{d}{dt}\left( \boldsymbol{\alpha} - \overline{\boldsymbol{\alpha}} \right) = \mathbf{J} \cdot \left( \boldsymbol{\alpha} - \overline{\boldsymbol{\alpha}} \right). \tag{3.459}$$
• With $\mathbf{c}$ as an arbitrary constant vector, this linear system has an exact solution in terms of the matrix exponential:

$$\boldsymbol{\alpha} - \overline{\boldsymbol{\alpha}} = e^{\mathbf{J}t} \cdot \mathbf{c}. \tag{3.460}$$

With $\mathbf{S}$ as the matrix whose columns are populated by the eigenvectors of $\mathbf{J}$ and $\boldsymbol{\Lambda}$ as the diagonal matrix whose diagonal is populated by the eigenvalues of $\mathbf{J}$, taking care to ensure the order is such that the correct eigenvalues correspond to the correct eigenvectors, the solution can be recast as

$$\boldsymbol{\alpha} - \overline{\boldsymbol{\alpha}} = \mathbf{S} \cdot e^{\boldsymbol{\Lambda}t} \cdot \mathbf{S}^{-1} \cdot \mathbf{c}. \tag{3.461}$$

The eigenvalues of $\mathbf{J}$ determine the stability of each equilibrium. For stability, the real parts of the eigenvalues cannot be positive. A source has all real parts positive. A sink has all real parts negative. A center has all eigenvalues purely imaginary. A saddle has some real parts positive and some negative.
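For a concrete illustration (a sketch of ours, assuming NumPy is available; the sample matrix is hypothetical, merely standing in for $\mathbf{J}$ at some equilibrium), the eigendecomposition form of the solution can be cross-checked against a truncated power series for the matrix exponential:

```python
import numpy as np

# A small hypothetical Jacobian with real, distinct eigenvalues
J = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
c = np.array([1.0, -1.0])   # arbitrary initial offset alpha(0) - alpha_bar
t = 0.7

# Eigendecomposition form, Eq. (3.461): S exp(Lambda t) S^{-1} c
lam, S = np.linalg.eig(J)
sol_eig = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S) @ c

# Cross-check exp(J t) c with the power series sum_k (J t)^k / k!
expJt = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 40):
    expJt = expJt + term        # accumulate (J t)^(k-1) / (k-1)!
    term = term @ (J * t) / k   # next term of the series
sol_series = expJt @ c
```

The two computations agree to machine precision, confirming that the eigendecomposition is just a convenient way to evaluate the matrix exponential.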
Returning to our problem, we select our orthonormal basis functions as the eigenfunctions of $d^2/dx^2$ that also satisfy the appropriate boundary conditions,

$$\varphi_n(x) = \sqrt{2}\,\sin((2n-1)\pi x), \quad n = 1, \ldots, N. \tag{3.462}$$
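As a quick numerical check (a sketch of ours, assuming NumPy; the function and variable names are not from the text), midpoint-rule quadrature confirms both the orthonormality condition, Eq. (3.441), and the initial amplitudes $\alpha_1(0) = 4\sqrt{2}/\pi^3$ and $\alpha_2(0) = 4\sqrt{2}/(27\pi^3)$:

```python
import numpy as np

# Basis functions of Eq. (3.462); the sqrt(2) factor gives <phi_n, phi_n> = 1
def phi(n, x):
    return np.sqrt(2.0) * np.sin((2 * n - 1) * np.pi * x)

# Midpoint-rule quadrature nodes on [0, 1]
npts = 100000
x = (np.arange(npts) + 0.5) / npts

def inner(f, g):
    return np.sum(f * g) / npts

# Gram matrix: should be the 2x2 identity by orthonormality, Eq. (3.441)
gram = np.array([[inner(phi(n, x), phi(m, x)) for m in (1, 2)] for n in (1, 2)])

# Initial amplitudes, Eq. (3.443), versus the closed-form values
alpha0 = np.array([inner(phi(n, x), x - x**2) for n in (1, 2)])
exact = np.array([4.0 * np.sqrt(2.0) / np.pi**3,
                  4.0 * np.sqrt(2.0) / (27.0 * np.pi**3)])
```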
Because of the symmetry of our system about $x = 1/2$, it can be shown that only odd multiples of $\pi x$ are present in the trigonometric sine approximation. Had we chosen an initial condition without such symmetry, we would have required both even and odd multiples. We then apply the necessary Fourier expansion to find $\alpha_n(0)$, perform a detailed analysis of all of the necessary inner products, select $N = 2$, and arrive at the following nonlinear system of ordinary differential equations for the evolution of the time-dependent amplitudes:
$$\frac{d\alpha_1}{dt} = -\pi^2 \alpha_1 + \sqrt{2}\,\pi\epsilon \left( -\frac{4}{3}\alpha_1^2 + \frac{8}{15}\alpha_2\alpha_1 - \frac{36}{35}\alpha_2^2 \right), \quad \alpha_1(0) = \frac{4\sqrt{2}}{\pi^3}, \tag{3.463}$$

$$\frac{d\alpha_2}{dt} = -9\pi^2 \alpha_2 + \sqrt{2}\,\pi\epsilon \left( \frac{12}{5}\alpha_1^2 - \frac{648}{35}\alpha_2\alpha_1 - 4\alpha_2^2 \right), \quad \alpha_2(0) = \frac{4\sqrt{2}}{27\pi^3}. \tag{3.464}$$
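These two amplitude equations can be marched in time with any standard integrator for ordinary differential equations. The following is a minimal sketch of ours (not the scheme actually used for the figures), using classical fourth-order Runge-Kutta with $\epsilon = 1/5$:

```python
import numpy as np

eps = 0.2  # epsilon = 1/5, the value used in the text

def f(a):
    # Right sides of the two-term amplitude equations, Eqs. (3.463-3.464)
    a1, a2 = a
    c = np.sqrt(2.0) * np.pi * eps
    return np.array([
        -np.pi**2 * a1 + c * (-4.0/3.0 * a1**2 + 8.0/15.0 * a2 * a1 - 36.0/35.0 * a2**2),
        -9.0 * np.pi**2 * a2 + c * (12.0/5.0 * a1**2 - 648.0/35.0 * a2 * a1 - 4.0 * a2**2),
    ])

def rk4_march(a, dt, nsteps):
    # Classical fourth-order Runge-Kutta marching of d(alpha)/dt = f(alpha)
    for _ in range(nsteps):
        k1 = f(a)
        k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2)
        k4 = f(a + dt * k3)
        a = a + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return a

a0 = np.array([4.0 * np.sqrt(2.0) / np.pi**3,
               4.0 * np.sqrt(2.0) / (27.0 * np.pi**3)])
a_final = rk4_march(a0, dt=1.0e-3, nsteps=500)   # march to t = 0.5
```

With these initial conditions both amplitudes decay rapidly toward the origin, consistent with the equilibrium analysis that follows.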


The system is in equilibrium at all points $(\alpha_1, \alpha_2)$ where

$$f_1(\alpha_1,\alpha_2) = -\pi^2 \alpha_1 + \sqrt{2}\,\pi\epsilon \left( -\frac{4}{3}\alpha_1^2 + \frac{8}{15}\alpha_2\alpha_1 - \frac{36}{35}\alpha_2^2 \right) = 0, \tag{3.465}$$

$$f_2(\alpha_1,\alpha_2) = -9\pi^2 \alpha_2 + \sqrt{2}\,\pi\epsilon \left( \frac{12}{5}\alpha_1^2 - \frac{648}{35}\alpha_2\alpha_1 - 4\alpha_2^2 \right) = 0. \tag{3.466}$$

The general form of the Jacobian matrix $\mathbf{J}$ is

$$\mathbf{J} = \begin{pmatrix} \frac{\partial f_1}{\partial \alpha_1} & \frac{\partial f_1}{\partial \alpha_2} \\[4pt] \frac{\partial f_2}{\partial \alpha_1} & \frac{\partial f_2}{\partial \alpha_2} \end{pmatrix} = \begin{pmatrix} -\pi^2 - \sqrt{2}\,\pi\epsilon\left(\frac{8}{3}\alpha_1 - \frac{8}{15}\alpha_2\right) & \sqrt{2}\,\pi\epsilon\left(\frac{8}{15}\alpha_1 - \frac{72}{35}\alpha_2\right) \\[4pt] \sqrt{2}\,\pi\epsilon\left(\frac{24}{5}\alpha_1 - \frac{648}{35}\alpha_2\right) & -9\pi^2 - \sqrt{2}\,\pi\epsilon\left(\frac{648}{35}\alpha_1 + 8\alpha_2\right) \end{pmatrix}. \tag{3.467}$$

For $\epsilon = 1/5$, we find four equilibria, $(\overline{\alpha}_1, \overline{\alpha}_2)$, and the eigenvalues, $(\lambda_1, \lambda_2)$, associated with the local values of the Jacobian matrix $\mathbf{J}$, all given as follows:

$$(\overline{\alpha}_1, \overline{\alpha}_2) = (0, 0), \quad (\lambda_1, \lambda_2) = (-\pi^2, -9\pi^2), \quad \text{sink}, \tag{3.468}$$
$$(\overline{\alpha}_1, \overline{\alpha}_2) = (-4.53, -6.04), \quad (\lambda_1, \lambda_2) = (-17.4, 44.2), \quad \text{saddle}, \tag{3.469}$$
$$(\overline{\alpha}_1, \overline{\alpha}_2) = (-8.78, -2.54), \quad (\lambda_1, \lambda_2) = (9.70, 73.7), \quad \text{source}, \tag{3.470}$$
$$(\overline{\alpha}_1, \overline{\alpha}_2) = (-5.15, 3.46), \quad (\lambda_1, \lambda_2) = (-43.3, 18.6), \quad \text{saddle}. \tag{3.471}$$
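These classifications follow from the signs of the real parts of the eigenvalues of Eq. (3.467) evaluated at each equilibrium. A short check can be sketched as follows (our code, assuming NumPy; the equilibrium coordinates are the rounded values quoted above):

```python
import numpy as np

eps = 0.2  # epsilon = 1/5
c = np.sqrt(2.0) * np.pi * eps

def jac(a1, a2):
    # Jacobian of Eq. (3.467)
    return np.array([
        [-np.pi**2 - c * (8.0/3.0 * a1 - 8.0/15.0 * a2),
         c * (8.0/15.0 * a1 - 72.0/35.0 * a2)],
        [c * (24.0/5.0 * a1 - 648.0/35.0 * a2),
         -9.0 * np.pi**2 - c * (648.0/35.0 * a1 + 8.0 * a2)],
    ])

def classify(a1, a2):
    # Classify an equilibrium by the real parts of the local eigenvalues
    re = np.linalg.eigvals(jac(a1, a2)).real
    if np.all(re < 0.0):
        return "sink"
    if np.all(re > 0.0):
        return "source"
    return "saddle"

kinds = [classify(0.0, 0.0), classify(-4.53, -6.04),
         classify(-8.78, -2.54), classify(-5.15, 3.46)]
lam0 = np.sort(np.linalg.eigvals(jac(0.0, 0.0)).real)
```

At the origin the Jacobian is diagonal, so the eigenvalues are exactly $-\pi^2$ and $-9\pi^2$.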

When $\epsilon = 0$, and because we selected our basis functions to be the eigenfunctions of $d^2/dx^2$, we see the system is linear and uncoupled, with exact solution

$$\alpha_1(t) = \frac{4\sqrt{2}}{\pi^3}\, e^{-\pi^2 t}, \tag{3.472}$$
$$\alpha_2(t) = \frac{4\sqrt{2}}{27\pi^3}\, e^{-9\pi^2 t}. \tag{3.473}$$

Thus, for $\epsilon = 0$ the two-term approximation is

$$T(x,t) \approx \frac{8}{\pi^3}\, e^{-\pi^2 t} \sin(\pi x) + \frac{8}{27\pi^3}\, e^{-9\pi^2 t} \sin(3\pi x). \tag{3.474}$$
For $\epsilon \neq 0$, numerical solution is required. We do so for $\epsilon = 1/5$ and plot the phase plane dynamics in Fig. 3.26 for arbitrary initial conditions. Many initial conditions lead one to the finite sink at $(0,0)$. It is likely that the dynamics are also influenced by equilibria at infinity, not shown here. One can show that the solutions in the neighborhood of the sink are the most relevant to the underlying physical problem.
We plot results of $\alpha_1(t)$, $\alpha_2(t)$ for our initial conditions in Fig. 3.27. We see the first mode has significantly more amplitude than the second mode. Both modes decay rapidly to the sink at $(0,0)$. The $N = 2$ solution with full time and space dependency is

$$T(x,t) \approx \sqrt{2}\left( \alpha_1(t) \sin(\pi x) + \alpha_2(t) \sin(3\pi x) \right), \tag{3.475}$$

and is plotted in Fig. 3.28.

Problems

Figure 3.26: Phase plane dynamics of $N = 2$ amplitudes of spatial modes of solution to a weakly nonlinear heat equation.


Figure 3.27: Evolution of $N = 2$ amplitudes of spatial modes of solution to a weakly nonlinear heat equation.


Figure 3.28: T (x, t) from N = 2 term Galerkin projection for a weakly nonlinear heat
equation.



Chapter 4

One-dimensional waves

see Mei, Chapters 1, 3.

Here we consider further aspects of one-dimensional wave propagation. We build on notions


explored in Sec. 1.1. We will not focus on those one-dimensional waves which propagate in
two modes, left and right, such as studied in Secs. 2.2.1, 3.2.

4.1 One-dimensional conservation laws


As described by LeVeque,1 the proper way to arrive at differential equations arising from physical conservation principles is to use a more primitive form of the conservation laws, expressed in terms of integrals of conservative form quantities balanced by fluxes and source terms of those quantities. From such primitive forms, we shall often deduce continuum differential equations; in certain cases, we will admit discontinuous solutions.

4.1.1 Multiple conserved variables


Consider the scenario of Fig. 4.1. In both Fig. 4.1a,b, we have a volume bounded in the $x$ direction by $x_1$ and $x_2$. If $\mathbf{q}$ is a set of variables representing some quantity which is conserved, $\mathbf{f}(\mathbf{q})$ is the flux of $\mathbf{q}$ (e.g. for mass conservation, density $\rho$ is a conserved variable and $\rho u$ is the mass flux), and $\mathbf{s}(\mathbf{q})$ is an internal source term, then the primitive form of the conservation law can be written as

$$\frac{d}{dt}\int_{x_1}^{x_2} \mathbf{q}(x,t)\, dx = \mathbf{f}(\mathbf{q}(x_1,t)) - \mathbf{f}(\mathbf{q}(x_2,t)) + \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.1}$$

Here, we have considered flow into and out of a one-dimensional box for x ∈ [x1 , x2 ]. In
Fig. 4.1a, the state variables q are allowed to have discontinuous jumps, while in Fig. 4.1b,
the state variables q are continuous. For problems with embedded discontinuous jumps, the
1 R. J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, Basel, 1992.



Figure 4.1: Schematic of general flux f into and out of finite volume in which general variable
q evolves.

mean value theorem does not work because a local value of $\mathbf{q}$ depends on how one lets $x_1$ approach $x_2$. Because of this, we cannot use the limiting process employed in Sec. 1.1 to arrive at a partial differential equation. One cannot replace the mean value of $\mathbf{q}$ by its local value, and one cannot cast the conservation law in terms of a partial differential equation when there are embedded discontinuous jumps. If we assume there is a discontinuity in the region $x \in [x_1, x_2]$ propagating at speed $U$, we can find the Cauchy2 principal value of the integral by splitting it into the form

$$\frac{d}{dt}\int_{x_1}^{x_1 + Ut^-} \mathbf{q}(x,t)\, dx + \frac{d}{dt}\int_{x_1 + Ut^+}^{x_2} \mathbf{q}(x,t)\, dx = \mathbf{f}(\mathbf{q}(x_1,t)) - \mathbf{f}(\mathbf{q}(x_2,t)) + \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.2}$$

Here, $x_1 + Ut^-$ lies just before the discontinuity and $x_1 + Ut^+$ lies just past the discontinuity. Using Leibniz's rule, we get

$$\mathbf{q}(x_1 + Ut^-, t)\,U - 0 + \int_{x_1}^{x_1+Ut^-} \frac{\partial \mathbf{q}}{\partial t}\, dx + 0 - \mathbf{q}(x_1 + Ut^+, t)\,U + \int_{x_1+Ut^+}^{x_2} \frac{\partial \mathbf{q}}{\partial t}\, dx = \mathbf{f}(\mathbf{q}(x_1,t)) - \mathbf{f}(\mathbf{q}(x_2,t)) + \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.3}$$

Now, if we assume that $x_2 - x_1 \to 0$ and that on either side of the discontinuity the volume of integration is sufficiently small so that the time and space variation of $\mathbf{q}$ is negligibly small, we get

$$\mathbf{q}(x_1)U - \mathbf{q}(x_2)U = \mathbf{f}(\mathbf{q}(x_1)) - \mathbf{f}(\mathbf{q}(x_2)), \tag{4.4}$$
$$U\left( \mathbf{q}(x_1) - \mathbf{q}(x_2) \right) = \mathbf{f}(\mathbf{q}(x_1)) - \mathbf{f}(\mathbf{q}(x_2)). \tag{4.5}$$
2 Augustin-Louis Cauchy, 1789-1857, French mechanician.


Note that the contribution of the source term $\mathbf{s}$ is negligible as $x_2 - x_1 \to 0$. Defining next the notation for a jump as

$$\llbracket \mathbf{q}(x) \rrbracket \equiv \mathbf{q}(x_2) - \mathbf{q}(x_1), \tag{4.6}$$

the jump conditions are rewritten as

$$U \llbracket \mathbf{q}(x) \rrbracket = \llbracket \mathbf{f}(\mathbf{q}(x)) \rrbracket. \tag{4.7}$$

If $U = 0$, as is the case when we transform to the frame where the wave is at rest, we simply recover

$$\mathbf{0} = \mathbf{f}(\mathbf{q}(x_1)) - \mathbf{f}(\mathbf{q}(x_2)), \tag{4.8}$$
$$\mathbf{f}(\mathbf{q}(x_1)) = \mathbf{f}(\mathbf{q}(x_2)), \tag{4.9}$$
$$\llbracket \mathbf{f}(\mathbf{q}(x)) \rrbracket = \mathbf{0}. \tag{4.10}$$

That is, the fluxes on either side of the discontinuity are equal. We also get a more general result for $U \neq 0$, which is the well-known Rankine-Hugoniot jump condition

$$U = \frac{\mathbf{f}(\mathbf{q}(x_2)) - \mathbf{f}(\mathbf{q}(x_1))}{\mathbf{q}(x_2) - \mathbf{q}(x_1)} = \frac{\llbracket \mathbf{f}(\mathbf{q}(x)) \rrbracket}{\llbracket \mathbf{q}(x) \rrbracket}. \tag{4.11}$$
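For a scalar conserved variable, Eq. (4.11) is simple to encode. A minimal sketch (the function and variable names are ours, not the text's):

```python
def jump_speed(f, q1, q2):
    """Speed U = [[f(q)]]/[[q]] of a discontinuity linking states q1 and q2,
    per Eq. (4.11); the two states must differ."""
    return (f(q2) - f(q1)) / (q2 - q1)

# Linear advective flux from Sec. 1.1, f = a*q: every jump moves at speed a
a = 2.0
U_advect = jump_speed(lambda q: a * q, 1.0, 3.0)

# Burgers' flux, f = q**2/2: a jump moves at the average of the two states
U_burgers = jump_speed(lambda q: 0.5 * q**2, 0.0, 2.0)
```

Here `U_advect` is $a = 2$ regardless of the states chosen, while `U_burgers` is $(0+2)/2 = 1$.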
In contrast, if there is no discontinuity, Eq. (4.1) reduces to a partial differential equation describing a continuum. We achieve this by rewriting Eq. (4.1) as

$$\frac{d}{dt}\left( \int_{x_1}^{x_2} \mathbf{q}(x,t)\, dx \right) + \left( \mathbf{f}(\mathbf{q}(x_2,t)) - \mathbf{f}(\mathbf{q}(x_1,t)) \right) = \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.12}$$

Now, if we assume continuity of all fluxes and variables, we can use Taylor series expansion and Leibniz's rule to say

$$\int_{x_1}^{x_2} \frac{\partial \mathbf{q}}{\partial t}\, dx + \left( \mathbf{f}(\mathbf{q}(x_1,t)) + \frac{\partial \mathbf{f}}{\partial x}(x_2 - x_1) + \ldots \right) - \mathbf{f}(\mathbf{q}(x_1,t)) = \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.13}$$
We let $x_2 \to x_1$ and get

$$\int_{x_1}^{x_2} \frac{\partial \mathbf{q}}{\partial t}\, dx + \frac{\partial \mathbf{f}}{\partial x}(x_2 - x_1) = \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx, \tag{4.14}$$
$$\int_{x_1}^{x_2} \frac{\partial \mathbf{q}}{\partial t}\, dx + \int_{x_1}^{x_2} \frac{\partial \mathbf{f}}{\partial x}\, dx = \int_{x_1}^{x_2} \mathbf{s}(\mathbf{q}(x,t))\, dx. \tag{4.15}$$

Combining all terms under a single integral, we get

$$\int_{x_1}^{x_2} \left( \frac{\partial \mathbf{q}}{\partial t} + \frac{\partial \mathbf{f}}{\partial x} - \mathbf{s} \right) dx = \mathbf{0}. \tag{4.17}$$


Now, this integral must be zero for arbitrary $x_1$ and $x_2$, so the integrand itself must be zero, and we get our partial differential equation:

$$\frac{\partial \mathbf{q}}{\partial t} + \frac{\partial \mathbf{f}}{\partial x} - \mathbf{s} = \mathbf{0}, \tag{4.18}$$
$$\frac{\partial}{\partial t}\mathbf{q}(x,t) + \frac{\partial}{\partial x}\mathbf{f}(\mathbf{q}(x,t)) = \mathbf{s}(\mathbf{q}(x,t)), \tag{4.19}$$

which applies away from jumps.

4.1.2 Single conserved variable

Let us consider a simple and important form in which there is a single conserved variable and no source term. For such a case, we study Eq. (4.19) with $\mathbf{q} = (u)$, $\mathbf{f}(\mathbf{q}) = (f(u))$, $\mathbf{s} = \mathbf{0}$. Then, we have the conservative form

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x} f(u) = 0. \tag{4.20}$$

Assuming no discontinuities, Eq. (4.20) may be rewritten using the chain rule in characteristic form as

$$\frac{\partial u}{\partial t} + \frac{df}{du}\frac{\partial u}{\partial x} = 0. \tag{4.21}$$

Here the local speed of propagation of waves is $df/du$.
The function $f(u)$ may be convex or non-convex. A function is convex if its epigraph, the set of points on or above the graph of the function, forms a convex set. It is easy to show that a function is convex iff its second derivative is non-negative over its whole domain. Plots of examples of convex ($f(u) = 1/2 + u^2$) and non-convex ($f(u) = 3/2 - u^2$) functions are shown in Fig. 4.2. Note that the example convex function has $d^2f/du^2 = 2 > 0$ and the example non-convex function has $d^2f/du^2 = -2 < 0$.
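The convexity test can be automated with a finite-difference estimate of the second derivative (a rough sketch of ours; a symbolic check would be sharper, and the sampled interval is our own choice):

```python
def second_derivative(f, u, h=1.0e-4):
    # Central-difference estimate of f''(u)
    return (f(u + h) - 2.0 * f(u) + f(u - h)) / h**2

convex_flux = lambda u: 0.5 + u**2      # d2f/du2 = +2 everywhere: convex
nonconvex_flux = lambda u: 1.5 - u**2   # d2f/du2 = -2 everywhere: non-convex

grid = [-2.0 + 0.1 * k for k in range(41)]   # sample points in [-2, 2]
convex_passes = all(second_derivative(convex_flux, u) >= 0.0 for u in grid)
nonconvex_passes = all(second_derivative(nonconvex_flux, u) >= 0.0 for u in grid)
```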

Example 4.1
Find the jump equations for the simple wave propagation of Sec. 1.1.

We start with Eq. (1.10), replacing $x_1 + \Delta x$ by $x_2$ and otherwise using the notation of Sec. 1.1:

$$\frac{dm}{dt} = -\left( \rho|_{x_2} A a - \rho|_{x_1} A a \right), \tag{4.22}$$
$$A\frac{d}{dt}\int_{x_1}^{x_2} \rho\, dx = -\left( \rho|_{x_2} A a - \rho|_{x_1} A a \right), \tag{4.23}$$
$$\frac{d}{dt}\int_{x_1}^{x_2} \rho\, dx = -\left( \rho|_{x_2} a - \rho|_{x_1} a \right). \tag{4.24}$$

Here our vector $\mathbf{q}$ has one entry, $\mathbf{q} = (\rho)$, and our flux vector $\mathbf{f}$ also has one entry, $\mathbf{f} = (\rho a)$. There is no source of mass, so the vector $\mathbf{s} = \mathbf{0} = (0)$.


Figure 4.2: Convex function, $f = 1/2 + u^2$, and non-convex function, $f = 3/2 - u^2$.

Equation (4.11) tells us that discontinuous jumps propagate at speed

$$U = \frac{\llbracket f(q(x)) \rrbracket}{\llbracket q(x) \rrbracket} = \frac{\llbracket \rho a \rrbracket}{\llbracket \rho \rrbracket} = \frac{\rho_2 a - \rho_1 a}{\rho_2 - \rho_1} = a. \tag{4.25}$$

For steady jumps, for which $U = 0$, we simply have no jump in $\rho$: $\rho_2 = \rho_1$. For situations where there are no jumps, we recover from Eq. (4.19) the continuous partial differential equation

$$\frac{\partial \rho}{\partial t} + a\frac{\partial \rho}{\partial x} = 0. \tag{4.26}$$

We note as an aside that here $u = \rho$ and $f(u) = f(\rho) = a\rho$. Thus $d^2f/d\rho^2 = 0$. Because it is non-negative, the flux function is convex.

Example 4.2
If the conserved variable is $u(x,t)$ and the flux of $u$ is given by $f(u) = u^2/2$, find appropriate jump equations and the appropriate partial differential equation for continuous values of $u$. Evaluate the possible jumps admitted in a steady wave for which $u$ in the far field as $x \to -\infty$ takes on the value $u_1$.

Equation (4.11) tells us that discontinuous jumps propagate at speed

$$U = \frac{\llbracket f(q(x)) \rrbracket}{\llbracket q(x) \rrbracket}, \tag{4.27}$$
$$= \frac{\llbracket u^2/2 \rrbracket}{\llbracket u \rrbracket}, \tag{4.28}$$
$$= \frac{\frac{u_2^2}{2} - \frac{u_1^2}{2}}{u_2 - u_1}, \tag{4.29}$$
$$= \frac{1}{2}\frac{(u_2 - u_1)(u_2 + u_1)}{u_2 - u_1}, \tag{4.30}$$
$$= \frac{u_2 + u_1}{2}. \tag{4.31}$$


The jump propagates at the average value of $u$ over the jump.

Then Eq. (4.19) gives us, if $u$ is continuous,

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\left( \frac{u^2}{2} \right) = 0. \tag{4.32}$$

This can be expanded by the rules of calculus to get the so-called inviscid Bateman3-Burgers4 equation5, usually known as the inviscid Burgers' equation:

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = 0. \tag{4.33}$$

If the wave is steady, $\partial/\partial t = 0$, and Eq. (4.32) reduces to the ordinary differential equation

$$\frac{d}{dx}\left( \frac{u^2}{2} \right) = 0, \quad u(x \to -\infty) = u_1. \tag{4.34}$$

Integrating, we obtain

$$\frac{u^2}{2} = C. \tag{4.35}$$

To satisfy the far-field boundary condition, we need $C = u_1^2/2$, giving

$$\frac{u^2}{2} = \frac{u_1^2}{2}, \tag{4.36}$$
$$u = \pm u_1. \tag{4.37}$$

Note that:
• Only one of the solutions matches the boundary condition in the far field, but
• There is nothing preventing the existence of a stationary discontinuity with $U = 0$ sitting at any finite $x$ where the solution jumps from $u = u_1$ to $u = -u_1$. Such a solution will satisfy the governing differential equations and boundary condition. Additionally, it will satisfy the jump equations at the discontinuity.
• The flux function here, $f(u) = u^2/2$, is convex because $d^2f/du^2 = 1 > 0$.

For $u_1 = 1$, we give a plot of $u(x)$ with a discontinuity located at $x = 1$ in Fig. 4.3. Obviously $U = 0$ because via Eq. (4.31), $U = (u_1 + u_2)/2 = (u_1 - u_1)/2 = 0$.
If we had $u_1 = 0$ and $u_2 = 2$, a solution would exist with a discontinuity linking the two states. However, the discontinuity would be propagating at $U = (0 + 2)/2 = 1$. If we had transformed to the frame where the wave is stationary, $\hat{u} = u - U = u - 1$, we would have $\hat{u}_1 = -1$ and $\hat{u}_2 = 1$. For general $u_1$ and $u_2$, we could transform via $\hat{u} = u - U = u - (u_1 + u_2)/2$. Then $\hat{u}_1 = (u_1 - u_2)/2$ and $\hat{u}_2 = (u_2 - u_1)/2 = -\hat{u}_1$.

3 Harry Bateman, 1882-1946, English mathematician.
4 Johannes Martinus Burgers, 1895-1981, Dutch physicist.
5 The viscous version of the model equation, $\partial u/\partial t + u\,\partial u/\partial x = \nu\,\partial^2 u/\partial x^2$, is widely known as Burgers' equation and is often cited as originating from J. M. Burgers, 1948, "A mathematical model illustrating the theory of turbulence," Advances in Applied Mathematics, 1: 171-199. However, the viscous version was given earlier by H. Bateman, 1915, "Some recent researches in the motion of fluids," Monthly Weather Review, 43(4): 163-170.



Figure 4.3: Solution to $\partial u/\partial t + \partial/\partial x(u^2/2) = 0$ with $u(x \to -\infty) = 1$. A stationary discontinuity (thus $U = 0$) is arbitrarily located at $x = 1$.

Remarkably, if we incorrectly take as our starting point a continuous partial differential


equation such as Eq. (4.33), it is possible to be led to an incorrect jump equation as illustrated
by the following example.

Example 4.3
Show by multiplying Eq. (4.33) by u that one can be led to infer a jump condition which is
inconsistent with Eq. (4.31).

We perform the multiplication to get

$$u\frac{\partial u}{\partial t} + u^2\frac{\partial u}{\partial x} = 0. \tag{4.38}$$

The ordinary rules of calculus suggest then that we can say

$$\frac{\partial}{\partial t}\left( \frac{u^2}{2} \right) + \frac{\partial}{\partial x}\left( \frac{u^3}{3} \right) = 0. \tag{4.39}$$

So our jump condition might be expected to be

$$U = \frac{\llbracket u^3/3 \rrbracket}{\llbracket u^2/2 \rrbracket}, \tag{4.40}$$
$$= \frac{u_2^3/3 - u_1^3/3}{u_2^2/2 - u_1^2/2}, \tag{4.41}$$
$$= \frac{2}{3}\frac{(u_2 - u_1)(u_1^2 + u_1 u_2 + u_2^2)}{(u_2 - u_1)(u_2 + u_1)}, \tag{4.42}$$
$$= \frac{2}{3}\frac{u_1^2 + u_1 u_2 + u_2^2}{u_2 + u_1}, \tag{4.43}$$
$$= \frac{u_2 + u_1}{2}\left( 1 + \frac{1}{3}\left( \frac{u_2 - u_1}{u_2 + u_1} \right)^2 \right). \tag{4.44}$$


Had we done the same analysis with the continuous equivalent $\partial u/\partial t + \partial/\partial x(u^2/2) = 0$, we would have arrived at a different result: $U = (u_2 + u_1)/2$, as seen in Eq. (4.31). Clearly we cannot perform ad hoc operations on continuous equations and expect to infer a consistent expression for the propagation speed of a discontinuity. It is thus essential to infer propagation speeds of discontinuities from the more fundamental integral form of the conservation equations, such as that of Eq. (4.1).
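The discrepancy is easy to exhibit numerically (a sketch of ours): for $u_1 = 0$, $u_2 = 2$, the correct speed from Eq. (4.31) is $1$, while the manipulated form of Eq. (4.40) predicts $4/3$:

```python
def U_conservative(u1, u2):
    # From the conservative form u_t + (u^2/2)_x = 0, Eq. (4.31)
    return (u2 + u1) / 2.0

def U_multiplied(u1, u2):
    # From the manipulated form (u^2/2)_t + (u^3/3)_x = 0, Eq. (4.40)
    return (u2**3 / 3.0 - u1**3 / 3.0) / (u2**2 / 2.0 - u1**2 / 2.0)

u1, u2 = 0.0, 2.0
U1 = U_conservative(u1, u2)   # 1.0
U2 = U_multiplied(u1, u2)     # 4/3: a different, inconsistent speed
```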

4.2 Inviscid Burgers' equation

Let us analyze the inviscid Burgers' equation, Eq. (4.33), in the context of coordinate transformations that have the general form

$$x = x(\xi, \tau), \tag{4.45}$$
$$t = t(\xi, \tau). \tag{4.46}$$

We assume the transformation to be unique and invertible. The Jacobian matrix of the transformation is

$$\mathbf{J} = \begin{pmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial x}{\partial \tau} \\[4pt] \frac{\partial t}{\partial \xi} & \frac{\partial t}{\partial \tau} \end{pmatrix}. \tag{4.47}$$

And we have

$$J = \det \mathbf{J} = \frac{\partial x}{\partial \xi}\frac{\partial t}{\partial \tau} - \frac{\partial x}{\partial \tau}\frac{\partial t}{\partial \xi}. \tag{4.48}$$
Now

$$\begin{pmatrix} \frac{\partial}{\partial x} \\[4pt] \frac{\partial}{\partial t} \end{pmatrix} = (\mathbf{J}^T)^{-1} \begin{pmatrix} \frac{\partial}{\partial \xi} \\[4pt] \frac{\partial}{\partial \tau} \end{pmatrix} = \frac{1}{J}\begin{pmatrix} \frac{\partial t}{\partial \tau} & -\frac{\partial t}{\partial \xi} \\[4pt] -\frac{\partial x}{\partial \tau} & \frac{\partial x}{\partial \xi} \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial \xi} \\[4pt] \frac{\partial}{\partial \tau} \end{pmatrix} = \frac{1}{J}\begin{pmatrix} \frac{\partial t}{\partial \tau}\frac{\partial}{\partial \xi} - \frac{\partial t}{\partial \xi}\frac{\partial}{\partial \tau} \\[4pt] -\frac{\partial x}{\partial \tau}\frac{\partial}{\partial \xi} + \frac{\partial x}{\partial \xi}\frac{\partial}{\partial \tau} \end{pmatrix}. \tag{4.49}$$

With these transformation rules, Eq. (4.33) is rewritten as

$$\underbrace{\frac{1}{J}\left( -\frac{\partial x}{\partial \tau}\frac{\partial u}{\partial \xi} + \frac{\partial x}{\partial \xi}\frac{\partial u}{\partial \tau} \right)}_{\partial u/\partial t} + u\,\underbrace{\frac{1}{J}\left( \frac{\partial t}{\partial \tau}\frac{\partial u}{\partial \xi} - \frac{\partial t}{\partial \xi}\frac{\partial u}{\partial \tau} \right)}_{\partial u/\partial x} = 0. \tag{4.50}$$

Now by assumption, $J \neq 0$, so we can multiply by $J$ to get

$$-\frac{\partial x}{\partial \tau}\frac{\partial u}{\partial \xi} + \frac{\partial x}{\partial \xi}\frac{\partial u}{\partial \tau} + u\frac{\partial t}{\partial \tau}\frac{\partial u}{\partial \xi} - u\frac{\partial t}{\partial \xi}\frac{\partial u}{\partial \tau} = 0. \tag{4.51}$$

Let us now restrict our transformation to satisfy the following requirements:

$$\frac{\partial x}{\partial \tau} = u\frac{\partial t}{\partial \tau}, \tag{4.52}$$
$$t(\xi, \tau) = \tau. \tag{4.53}$$


The first says that if we insist that $\xi$ be held fixed, the ratio of the change in $x$ to the change in $t$ will be $u$; this is equivalent to the more standard statement that on a characteristic line we have $dx/dt = u$. The second is a convenience, simply equating $\tau$ to $t$. Applying the second restriction to the first, we can also say

$$\frac{\partial x}{\partial \tau} = u. \tag{4.54}$$

With these restrictions, our inviscid Burgers' equation becomes

$$-\underbrace{\frac{\partial x}{\partial \tau}}_{u}\frac{\partial u}{\partial \xi} + \frac{\partial x}{\partial \xi}\frac{\partial u}{\partial \tau} + u\,\underbrace{\frac{\partial t}{\partial \tau}}_{1}\frac{\partial u}{\partial \xi} - u\,\underbrace{\frac{\partial t}{\partial \xi}}_{0}\frac{\partial u}{\partial \tau} = 0, \tag{4.55}$$
$$-u\frac{\partial u}{\partial \xi} + \frac{\partial x}{\partial \xi}\frac{\partial u}{\partial \tau} + u\frac{\partial u}{\partial \xi} = 0, \tag{4.56}$$
$$\frac{\partial x}{\partial \xi}\frac{\partial u}{\partial \tau} = 0. \tag{4.57}$$

Let us further require that $\partial x/\partial \xi \neq 0$. Then we have

$$\frac{\partial u}{\partial \tau} = 0, \tag{4.58}$$
$$u = f(\xi). \tag{4.59}$$

Here $f$ is an arbitrary function. Substitute this into Eq. (4.52) to get

$$\frac{\partial x}{\partial \tau} = f(\xi)\frac{\partial t}{\partial \tau}. \tag{4.60}$$

We can integrate Eq. (4.60) to get

$$x = f(\xi)t + g(\xi). \tag{4.61}$$

Here $g(\xi)$ is an arbitrary function. Note the coordinate transformation can be chosen for our convenience. To this end, remove $t$ in favor of $\tau$ and set $g(\xi) = \xi$ so that $x$ maps to $\xi$ when $t = \tau = 0$, giving

$$x(\xi, \tau) = f(\xi)\tau + \xi. \tag{4.62}$$

We can then state the solution to the inviscid Burgers' equation, Eq. (4.33), parametrically as

$$u(\xi, \tau) = f(\xi), \tag{4.63}$$
$$x(\xi, \tau) = f(\xi)\tau + \xi, \tag{4.64}$$
$$t(\xi, \tau) = \tau. \tag{4.65}$$
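The parametric solution is trivial to evaluate for any initial profile $f$. A minimal sketch of ours (the particular profile below, that of Example 4.4, is used purely for illustration):

```python
import math

def burgers_parametric(f, xi, tau):
    """Parametric solution of the inviscid Burgers' equation,
    Eqs. (4.63-4.65): u = f(xi), x = f(xi)*tau + xi, t = tau."""
    return f(xi) * tau + xi, f(xi)

# An illustrative initial profile
f = lambda xi: 1.0 + math.sin(math.pi * xi)

x0, u0 = burgers_parametric(f, 0.5, 0.0)   # at t = 0, x = xi and u = f(xi)
x1, u1 = burgers_parametric(f, 0.5, 0.25)  # the crest u = 2 has moved to x = 1
```

Each value of $\xi$ labels one characteristic, along which $u$ is frozen while $x$ translates at speed $f(\xi)$.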


For this transformation, we have from Eq. (4.47) that

$$\mathbf{J} = \begin{pmatrix} 1 + \frac{df}{d\xi}\tau & f(\xi) \\ 0 & 1 \end{pmatrix}. \tag{4.66}$$

Thus

$$J = \det \mathbf{J} = 1 + \frac{df}{d\xi}\tau. \tag{4.67}$$

We have a singularity in the coordinate transformation whenever $J = 0$, implying a difficulty when

$$\tau = -\frac{1}{df/d\xi}. \tag{4.68}$$

Example 4.4
Solve the inviscid Burgers' equation, $\partial u/\partial t + u\,\partial u/\partial x = 0$, Eq. (4.33), if

$$u(x, 0) = 1 + \sin \pi x, \quad x \in [0, 1]. \tag{4.69}$$

Let us not be concerned with that portion of $u$ which at $t = 0$ has $x < 0$ or $x > 1$. The analysis is easily modified to address this.

We know the solution is given in general by Eqs. (4.63-4.65). At $t = 0$, we have $\tau = 0$, and thus $x = \xi$. And we have

$$f(\xi) = 1 + \sin \pi \xi. \tag{4.70}$$

Thus we can say by inspection that the solution is

$$u(\xi, \tau) = 1 + \sin \pi \xi, \tag{4.71}$$
$$x(\xi, \tau) = (1 + \sin \pi \xi)\,\tau + \xi, \tag{4.72}$$
$$t(\xi, \tau) = \tau. \tag{4.73}$$

Results are plotted in Fig. 4.4. One notes the following:

• The signal propagates to the right; this is a consequence of $u > 0$ in the domain we consider.
• Portions of the signal with higher $u$ propagate faster.
• The signal distorts as $t$ increases.
• The wave appears to "break" at $t = t_s$, where $1/4 \lesssim t_s \lesssim 1/2$. For $t > t_s$, it is possible to find multiple values of $u$ at a given $x$ and $t$. If $u$ were a physical variable, we would not expect to see such multivaluedness in nature.
• Because of the convexity of the flux function, the right side of the wave form steepens, and the left side of the wave form becomes more shallow.


Figure 4.4: Solution to $\partial u/\partial t + u\,\partial u/\partial x = 0$ with $u(x, 0) = 1 + \sin \pi x$, at $t = 0, 1/8, 1/4, 1/2, 1$.


Figure 4.5: Sketch of response of $u$ which satisfies the inviscid Burgers' equation $\partial u/\partial t + u\,\partial u/\partial x = 0$ with $u(x, 0) = 1 + \sin \pi x$.



Figure 4.6: Curves where $J = 0$ and of constant $x$ and $t$ in the $(\xi, \tau)$ plane for our coordinate transformation.

The sketch of Fig. 4.5 shows how one can envision the portion of the initial sine wave with $x > 1/2$ steepening, while that portion with $x < 1/2$ flattens. We place arrows whose magnitude is proportional to the local value of $u$ on the plot itself.
For our value of $f(\xi)$, we have from Eq. (4.67) that

$$J = 1 + \pi \tau \cos \pi \xi. \tag{4.74}$$

Clearly, there exist values of $(\xi, \tau)$ for which $J = 0$. At such points, we can expect difficulties in our solution. In Fig. 4.6, we plot a portion of the locus of points for which $J = 0$ in the $(\xi, \tau)$ plane. We also see portions of this plane where the transformation is orientation-preserving, for which $J > 0$, and orientation-reversing, for which $J < 0$. Also shown in Fig. 4.6 are contours of constant $x$ and $t$. Clearly, when $J = 0$, the contours of constant $x$ are parallel to those of constant $t$, and there are not enough linearly independent vectors to form a basis.
From Eq. (4.68), we can expect a singular coordinate transformation when

$$\tau = -\frac{1}{df/d\xi} = -\frac{1}{\pi \cos \pi \xi}. \tag{4.75}$$

We then substitute this into Eqs. (4.72, 4.73) to get a parametric curve for when the transformation is singular, $x_s(\xi)$, $t_s(\xi)$:

$$x_s(\xi) = -\frac{1 + \sin \pi \xi}{\pi \cos \pi \xi} + \xi, \tag{4.76}$$
$$t_s(\xi) = -\frac{1}{\pi \cos \pi \xi}. \tag{4.77}$$
A portion of the curve along which the transformation is singular is shown in Fig. 4.7. Figure 4.7a plots


Figure 4.7: Plots indicating where the coordinate transformation of Eqs. (4.72, 4.73) is singular: a) $x_s(\xi)$ from Eq. (4.76), b) $t_s(\xi)$ from Eq. (4.77), c) representation of the curve of singularity in $(x, t)$ space.

$x_s(\xi)$ from Eq. (4.76). Figure 4.7b plots $t_s(\xi)$ from Eq. (4.77). We see a parametric plot of the same quantities in Fig. 4.7c. At early time the system is free of singularities. It is easily shown that both $x_s(\xi)$ and $t_s(\xi)$ have a local minimum at $\xi = 1$, at which point we have

$$x_s(1) = 1 + \frac{1}{\pi}, \tag{4.78}$$
$$t_s(1) = \frac{1}{\pi}. \tag{4.79}$$

Examining Fig. 4.4, this appears to be the point at which the solution becomes multivalued. Examining Fig. 4.6, this is the point on the curve $J = 0$ that is a local minimum. So while $x_s$ and $t_s$ are well-behaved as functions of $\xi$ for the domain considered, when the curves are projected into the $(x, t)$ plane, there is a cusp at $(x, t) = (x_s(1), t_s(1)) = (1 + 1/\pi, 1/\pi)$.
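A direct numerical scan of Eq. (4.77) over the steepening characteristics (a sketch of ours; the grid is arbitrary) confirms that the earliest singularity occurs at $\xi = 1$ with $t_s = 1/\pi$:

```python
import math

def t_singular(xi):
    # Eq. (4.77): breaking time of the characteristic labeled by xi
    return -1.0 / (math.pi * math.cos(math.pi * xi))

# Scan the characteristics with cos(pi*xi) < 0; these break in finite time
xis = [0.5 + 0.001 * k for k in range(1, 1000)]   # xi in (0.5, 1.5)
xi_star = min(xis, key=t_singular)
ts_min = t_singular(xi_star)
```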

4.3 Viscous Burgers' equation

Our predictions of $u(x,t)$ change dramatically when diffusion is introduced. Consider the viscous Burgers' equation:

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \nu\frac{\partial^2 u}{\partial x^2}. \tag{4.80}$$

4.3.1 Comparison to inviscid solution

When we simulate the same problem whose diffusion-free solution is plotted in Fig. 4.4, for which $u(x,0) = 1 + \sin \pi x$, we obtain the results plotted in Fig. 4.8 for four different values of $\nu$: $1/1000$, $1/100$, $1/10$, and $1$. While we will soon outline a method to obtain an exact solution to the viscous Burgers' equation, in practice it is complicated. It is often easier to


Figure 4.8: Solution to the viscous Burgers' equation $\partial u/\partial t + u\,\partial u/\partial x = \nu\,\partial^2 u/\partial x^2$ with $u(x, 0) = 1 + \sin \pi x$ and various values of $\nu$: a) $1/1000$, b) $1/100$, c) $1/10$, d) $1$.


Figure 4.9: $x$-$t$ diagram for the solution to the viscous Burgers' equation $\partial u/\partial t + u\,\partial u/\partial x = \nu\,\partial^2 u/\partial x^2$ with $u(x, 0) = 1 + \sin \pi x$, $\nu = 1/100$.

obtain results by numerical discretization, and that is what we did here. The scheme used
was sufficiently resolved to capture the thin zones present when ν was small. For the case
where ν = 1/100, we plot the x − t diagram, where the shading is proportional to the local
value of u, in Fig. 4.9.
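A minimal discretization can be sketched as follows (our own illustration, not the resolved scheme used for the figures; we assume NumPy, a periodic domain $x \in [0, 2)$, and an explicit conservative update, none of which are specified in the text):

```python
import numpy as np

# Explicit finite-difference marching of the viscous Burgers' equation,
# u_t + (u^2/2)_x = nu u_xx, on a periodic grid
nu = 0.1
nx = 200
L = 2.0
dx = L / nx
dt = 2.0e-4        # satisfies the explicit diffusion limit dt < dx^2/(2 nu)
x = np.arange(nx) * dx
u = 1.0 + np.sin(np.pi * x)   # initial condition, periodic on [0, 2)

mass0 = u.sum() * dx
for _ in range(int(round(0.5 / dt))):     # march to t = 0.5
    flux = 0.5 * u**2                     # convex flux of Burgers' equation
    dfdx = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    d2udx2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-dfdx + nu * d2udx2)
mass = u.sum() * dx
```

Because the flux difference telescopes on a periodic grid, the total "mass" $\int u\, dx$ is conserved to roundoff, and the solution remains single-valued and bounded, in contrast to the inviscid case.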
We note:
• We restricted our study to positive values of ν, which can be shown to be necessary
for a stable solution as t → ∞.
• If ν = 0, our viscous Burgers’ equation reduces to the inviscid Burgers’ equation.
• As ν → 0, solutions to the viscous Burgers’ equation seem to relax to a solution with an
infinitely thin discontinuity; they do not relax to those solutions displayed in Fig. 4.4.
• For all values of ν, the solution u(x, t) at a given time has a single value of u for a
single value of x, in contrast to multi-valued solutions exhibited by the diffusion-free
analog.
• As ν → 0, the peaks retain a larger magnitude. Thus one can conclude that enhancing
ν smears peaks.
• At early time the solutions to the viscous Burgers’ equation resemble those of the
inviscid Burgers’ equation.
Let us try to understand this behavior. Fundamentally, it will be seen that in many cases, nonlinearity, manifested in $u\,\partial u/\partial x$, can serve to steepen a waveform. If that steepening is unchecked by diffusion, either a formal discontinuity or multivalued solutions are admitted. Now diffusion acts most strongly when gradients are steep, that is, when $\partial u/\partial x$ has large magnitude. As a wave steepens due to nonlinear effects, diffusion, which may have been initially unimportant, can reassert its importance and serve to suppress the growth due to the nonlinearity.


4.3.2 Steadily propagating waves


Let us examine solutions to Eq. (4.80) that can link a constant state where u(−∞, t) = u1
to a second constant state where u(∞, t) = u2 . We shall see that this can be achieved by
what is known as a steadily propagating wave solution. For such solutions a waveform is
maintained as the wave translates with a given velocity.
We can employ both a coordinate transformation and a change of variables. Let us take
as new coordinates

ξ = x − at, (4.81)
τ = t. (4.82)

Here a is a constant which we will see fit to specify later. From this, we get in matrix form

[ξ; τ] = [1, −a; 0, 1] [x; t],   (4.83)

where the 2 × 2 matrix is J⁻¹. (Here [a, b; c, d] denotes the 2 × 2 matrix with rows (a, b) and (c, d), and [p; q] a column vector.) The inverse transformation is

[x; t] = [1, a; 0, 1] [ξ; τ],   (4.84)

where the 2 × 2 matrix is J. Here the Jacobian matrix is related, but not identical, to that defined in previous analysis, see Sec. 1.1; moreover J = det J = 1. Similar to our analysis of Sec. 1.1, we get

[∂u/∂x; ∂u/∂t] = (Jᵀ)⁻¹ [∂u/∂ξ; ∂u/∂τ] = [1, 0; −a, 1] [∂u/∂ξ; ∂u/∂τ].   (4.85)

Thus, we see

∂/∂x = ∂/∂ξ,   (4.86)
∂/∂t = −a ∂/∂ξ + ∂/∂τ.   (4.87)

We apply this coordinate transformation to Eq. (4.80) to get

−a ∂u/∂ξ + ∂u/∂τ + u ∂u/∂ξ = ν ∂²u/∂ξ²,   (4.88)

in which the first two terms are ∂u/∂t, the third is u ∂u/∂x, and the right side is ν ∂²u/∂x², all expressed in the new coordinates. Combining terms,

∂u/∂τ + (u − a) ∂u/∂ξ = ν ∂²u/∂ξ².   (4.89)

4.3. VISCOUS BURGERS’ EQUATION 113

This form suggests we will realize further simplification by defining

w = u − a. (4.90)

In physics, this is known as a Galilean transformation, with w being the relative velocity.
Doing so, we get

∂(w + a)/∂τ + w ∂(w + a)/∂ξ = ν ∂²(w + a)/∂ξ²,   (4.91)
∂w/∂τ + w ∂w/∂ξ = ν ∂²w/∂ξ².   (4.92)
Remarkably, Eq. (4.92) has precisely the same form as Eq. (4.80), with w standing in for u.
Leaving aside for now any concerns about initial and boundary conditions, we say that our
Galilean transformation, which transforms both the dependent and independent variables,
has mapped Eq. (4.80) into itself. This is analogous to certain geometrical transformations.
For example, if we rotate a square through an angle of π/4, it appears askew relative to its
original orientation. But if we rotate a square through an angle of ±nπ/2, where n is an integer,
one cannot detect a difference between the transformed square and the original square. The
form of the square is thus invariant to rotation through angles of ±nπ/2. It has a particular
type of symmetry. The circle is invariant under rotation through any angle; it has a different
type of symmetry. Idealized snowflakes may be thought of as invariant under rotations of
±nπ/3. Our Burgers' equation too has a symmetry, in that its form is invariant under a
Galilean transformation. Mathematical models that transform under a mapping into themselves
are also known as self-similar and are one of the key features of what is known as group
theory. A further discussion of similarity will be given in Ch. 6.
Our boundary conditions transform to

w(−∞, τ ) = u1 − a ≡ w1 , (4.93)
w(∞, τ ) = u2 − a ≡ w2 . (4.94)

We shall see there is an additional requirement on a, to be determined, needed for symmetry.


We trivially note that if we seek solutions that are independent of ξ, Eq. (4.92) reduces
to dw/dτ = 0, which gives us w = C. The boundary conditions are only satisfied in the
special case when w1 = w2 , giving w = w1 . This is not particularly useful. We find nontrivial
results when we seek solutions that are independent of τ ; that is we seek w = w(ξ). Then
Eq. (4.92) reduces to
dw d2 w
w = ν 2, (4.95)
dξ dξ
 2
d w d2 w
= ν 2, (4.96)
dξ 2 dξ
2
w dw
+C = ν . (4.97)
2 dξ


Now as ξ → −∞, we expect w → w1 and dw/dξ → 0. Thus,

w1²/2 + C = 0,   (4.98)
C = −w1²/2.   (4.99)

Thus,

ν dw/dξ = (w² − w1²)/2,   (4.100)
dw/(w1² − w²) = −dξ/(2ν),   (4.101)
(1/w1) tanh⁻¹(w/w1) = −ξ/(2ν) + C,   (4.102)
w(ξ) = w1 tanh(−(w1/(2ν)) ξ + C w1).   (4.103)

Examination of this solution reveals that limξ→−∞ w(ξ) = w1 and limξ→∞ w(ξ) = −w1 and
w(ξ) = 0 when ξ = 2νC. Let us make the convenient assumption that C = 0 to place the
somewhat arbitrary zero-crossing at ξ = 0. Other choices would simply translate the zero-
crossing and not otherwise affect the solution. Now we see we have satisfied the boundary
condition at ξ → −∞ but not at ξ → +∞. We can satisfy both boundary conditions at ±∞
by making the correct choice of the as-yet-unspecified wave speed a. We thus would like
to choose a such that

−w1 = w2 . (4.104)

Using our definitions, Eqs. (4.93,4.94), we get

−(u1 − a) = u2 − a,   (4.105)
2a = u1 + u2,   (4.106)
a = (u1 + u2)/2.   (4.107)

Then

w1 = u1 − a = (u1 − u2)/2,   (4.108)
w2 = u2 − a = −(u1 − u2)/2 = −w1.   (4.109)
Our solution is then

w(ξ) = ((u1 − u2)/2) tanh(−((u1 − u2)/(4ν)) ξ).   (4.110)


In terms of our untransformed variables, we have

u(x, t) = (u1 + u2)/2 + ((u1 − u2)/2) tanh(−((u1 − u2)/(4ν)) (x − ((u1 + u2)/2) t)).   (4.111)

It is easy to verify by direct calculation that Eq. (4.111) satisfies the viscous Burgers' equation,
Eq. (4.80). Additionally, it satisfies both boundary conditions at x = ±∞. By inspection
of the solution, the thickness ℓ of the zone where u adjusts from u1 to u2 is given by

ℓ = 4ν/(u1 − u2).   (4.112)

In the limit as ν → 0, we see ℓ → 0, and u(x, t) suffers a jump from u = u1 to u = u2
at x = (u1 + u2)t/2. This is fully consistent with our inviscid jump analysis given in
Eq. (4.31). Because our independent analysis of the viscous Burgers’ equation revealed that
the propagation speed is (u1 + u2 )/2, we conclude that the appropriate form of the inviscid
Burgers’ equation is that of Eq. (4.32), and not one of the many others, such as Eq. (4.39).
Results are plotted in Fig. 4.10 for three different values of ν = 1/1000, 1/100, and 1/10 for
u1 = 3/2, u2 = 1/2 and t = 2. Clearly, all solutions relax at ±∞ to the correct values of
u1 and u2 . The only effect of ν is the thickness of the zone where u relaxes from u1 to u2 .
Also the propagation speed a = (u1 + u2 )/2 = 1. Because the wave was centered at x = 0
at t = 0, we see at t = 2 its “center” has propagated to x = 2.
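The claim that Eq. (4.111) satisfies Eq. (4.80) is easily checked with symbolic computation. A minimal sketch, assuming the availability of the sympy library (not part of the original notes):

```python
import sympy as sp

x, t, nu, u1, u2 = sp.symbols('x t nu u1 u2', positive=True)
a = (u1 + u2)/2                                   # wave speed, Eq. (4.107)

# Steadily propagating wave solution, Eq. (4.111)
u = a + (u1 - u2)/2*sp.tanh(-(u1 - u2)/(4*nu)*(x - a*t))

# Residual of the viscous Burgers' equation, Eq. (4.80)
residual = sp.diff(u, t) + u*sp.diff(u, x) - nu*sp.diff(u, x, 2)
assert sp.simplify(residual) == 0                 # satisfied identically
```

The residual vanishes identically, confirming Eq. (4.111) is an exact solution for any u1, u2, and ν > 0.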

4.3.3 Cole-Hopf transformation


For more general conditions than those of a steadily propagating wave, the viscous Burgers’
equation’s analysis is simplified by a so-called Cole6 -Hopf7 transformation. Let us redefine
u in terms of a new variable φ(x, t) via

u = −2ν (1/φ) ∂φ/∂x.   (4.113)

Then Eq. (4.80) becomes

∂/∂t(−2ν (1/φ) ∂φ/∂x) + (−2ν (1/φ) ∂φ/∂x) ∂/∂x(−2ν (1/φ) ∂φ/∂x) = ν ∂²/∂x²(−2ν (1/φ) ∂φ/∂x),   (4.114)
∂/∂t((1/φ) ∂φ/∂x) − 2ν ((1/φ) ∂φ/∂x) ∂/∂x((1/φ) ∂φ/∂x) = ν ∂²/∂x²((1/φ) ∂φ/∂x).   (4.115)
6 Julian David Cole, 1925-1999, American mathematician.
7 Eberhard Hopf, 1902-1983, Austrian-American mathematician and astronomer.


Figure 4.10: Propagating steady wave solution at t = 2 to the viscous Burgers' equation
∂u/∂t + u ∂u/∂x = ν ∂²u/∂x² with u(−∞, t) = u1 = 3/2, u(∞, t) = u2 = 1/2 and various
values of ν: a) 1/1000, b) 1/100, c) 1/10, d) 1.


Detailed calculation verifies that this reduces to

∂/∂x((1/φ) ∂φ/∂t) = ν ∂/∂x((1/φ) ∂²φ/∂x²).   (4.117)

Regrouping, we can say

∂/∂x((1/φ)(∂φ/∂t − ν ∂²φ/∂x²)) = 0,   (4.118)
(1/φ)(∂φ/∂t − ν ∂²φ/∂x²) = f(t),   (4.119)
∂φ/∂t = ν ∂²φ/∂x² + φ f(t).   (4.120)

It suffices to take f(t) = 0, leaving us to solve a heat equation:

∂φ/∂t = ν ∂²φ/∂x².   (4.121)
If u(x, 0) = g(x), then it can be shown that
u(x, t) = −2ν ∂/∂x ln[ (1/√(4πνt)) ∫_{−∞}^{∞} exp(−(x − r)²/(4νt) − (1/(2ν)) ∫_0^r g(s) ds) dr ].   (4.122)

Example 4.5
Find u(x, t) for solutions to the viscous Burgers’ equation, Eq. (4.80) if
u(x, 0) = U x/L.   (4.123)

Direct substitution into Eq. (4.122) gives


u(x, t) = −2ν ∂/∂x ln[ (1/√(4πνt)) ∫_{−∞}^{∞} exp(−(x − r)²/(4νt) − (1/(2ν)) ∫_0^r (Us/L) ds) dr ],   (4.124)
= −2ν ∂/∂x ln[ (1/√(4πνt)) ∫_{−∞}^{∞} exp(−(x − r)²/(4νt) − (1/(2ν)) (Ur²/(2L))) dr ].   (4.125)

Symbolic computational software reveals the answer to be simply

u(x, t) = U (x/L)/(1 + Ut/L).   (4.126)

It is easily verified that both the initial condition as well as Eq. (4.80) are satisfied. Because the solution
is linear in x, it does not depend on the coefficient ν. Thus, it is also a solution to the inviscid Burgers’
equation. For U = 1, L = 1, we plot the solution in Fig. 4.11. Note for large t, more specifically for
U t/L ≫ 1, our solution reduces to
lim_{t→∞} u(x, t) = x/t.   (4.127)
It is easily seen that u = x/t satisfies the viscous Burgers’ equation by direct substitution.
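Equation (4.122) can also be evaluated numerically. The sketch below (the helper names, parameter values, and the simple Riemann-sum and central-difference approximations are illustrative choices, not from the notes) recovers Eq. (4.126) for the linear initial condition with U = L = 1:

```python
import numpy as np

NU, U, L_REF = 0.1, 1.0, 1.0          # illustrative parameters

def g_int(r):
    # Integral of g(s) = U*s/L from 0 to r, as appears in Eq. (4.124)
    return U*r**2/(2.0*L_REF)

def log_I(xv, t, R=30.0, n=20001):
    # ln of the inner integral of Eq. (4.122), via a simple Riemann sum; the
    # 1/sqrt(4*pi*nu*t) prefactor is independent of x and is omitted, because
    # it drops out of the x-derivative of the logarithm.
    r = np.linspace(xv - R, xv + R, n)
    arg = -(xv - r)**2/(4.0*NU*t) - g_int(r)/(2.0*NU)
    m = arg.max()                     # guard against overflow/underflow
    return m + np.log(np.exp(arg - m).sum()*(r[1] - r[0]))

def u_cole_hopf(x, t, h=1e-4):
    # u = -2*nu * d/dx ln(...), with a central difference for d/dx
    return -2.0*NU*(log_I(x + h, t) - log_I(x - h, t))/(2.0*h)

print(u_cole_hopf(1.0, 1.0))          # ~0.5 = U*(x/L)/(1 + U*t/L), Eq. (4.126)
```

Note the answer is independent of NU, consistent with the observation that the solution of this example also solves the inviscid equation.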


Figure 4.11: Solution to the viscous Burgers’ equation ∂u/∂t + u∂u/∂x = ν∂ 2 u/∂x2 with
u(x, 0) = Ux/L with U = 1, L = 1.

As an aside, we note that the previous example showed u = x/t satisfied Burgers’ equa-
tion. In fact, direct substitution verifies that
u(x, t) = (x + C)/t,  C ∈ ℝ¹,   (4.128)

where C is any constant, satisfies the Burgers' equation. This and a large family of exact
solutions are described by Öziş and Aslan.8 Exact solutions are obtained by first identifying
appropriate transformations, then recasting the partial differential equation typically as a
nonlinear ordinary differential equation that can be solved. For the limit when ν = 1, one
can verify that the solutions given next each satisfy Burgers' equation:

u(x, t) = (1/√t) (−2 e^{−(x+t)²/(4t)})/(C + √π erf((x + t)/(2√t))) − 1,  C ∈ ℝ¹,   (4.129)

u(x, t) = (1/√t) ((x + t)/√t − 2 e^{(x+t)²/(4t)}/(C + √π erfi((x + t)/(2√t)))) − 1,  C ∈ ℝ¹,   (4.130)

u(x, t) = 1 + t − √(2(x − t − t²/2 + C)) J_{−2/3}((√2/3)(x − t − t²/2 + C)^{3/2}) / J_{1/3}((√2/3)(x − t − t²/2 + C)^{3/2}),  C ∈ ℝ¹.   (4.131)
8 T. Öziş and İ. Aslan, 2017, "Similarity solutions to Burgers' equation in terms of special functions of mathematical physics," Acta Physica Polonica B, 48(7): 1349-1369.


Figure 4.12: Sketch of vehicle traffic density response ρ(x, t) to stop and go signals.

4.4 Traffic flow model


One of the more straightforward and intuitive applications of the notions of this chapter
comes in the study of ordinary traffic flow; see Fig. 4.12. Most students are familiar with
suddenly and surprisingly coming to a halt in what was freely flowing traffic as a consequence
of a red light or other constriction far upstream. One can imagine this as a discontinuity
in vehicle density, and it propagates backwards from the site of the traffic blockage. Such
a discontinuity is sometimes called a shock wave. In this scenario it propagates to the left.
Most students are also familiar with the gradual decrease in vehicle density that accompanies
a traffic light turning green. This decrease in density is known as a rarefaction wave. It is
depicted as propagating to the left as well, though it could be moving to the left or the right.
Now let us develop a simple model for traffic flow. Let us take ρ as the vehicle density.
For very light traffic density, we might imagine that a doubling of the vehicle density would
double the vehicle flux f (ρ). Certainly when the road becomes too crowded, drivers slow
down, so that one might imagine there to be a vehicle density where the flux attains its
maximum value fm . As we increase the density, the traffic begins to jam, and the flux goes
down. At a maximum density, ρm , we can expect our road to resemble a parking lot: there
will be no flux of vehicles, f (ρm ) = 0. Let us take a simple quadratic model for the flux of
vehicles:

f(ρ) = ρ u0 (1 − ρ/ρm),  ρ ∈ [0, ρm].   (4.132)

Note that our flux function is slightly different than that of Mei’s found on his p. 45; ours
retains more analogs with the notation and precepts of fluid mechanics, and is easier to
justify on dimensional grounds. We restrict density appropriately. Here u0 is a constant,
which we interpret as a characteristic velocity with u0 > 0. A plot of f /(ρm u0 ) as a function
of ρ/ρm is given in Fig. 4.13.
Clearly when ρ = ρm , the flux is zero: f = 0. Also when ρ is small, the flux linearly


Figure 4.13: Plot of traffic flux f as a function of scaled traffic density ρ/ρm for simple
quadratic flux model.

increases with increasing ρ. We have

df/dρ = u0 (1 − 2ρ/ρm),   (4.133)
d²f/dρ² = −2u0/ρm.   (4.134)

As the second derivative is strictly negative, any critical point must be a maximum. Impor-
tantly, the flux function f for this problem is non-convex. And when df /dρ = 0, we must
have
ρ = ρm/2.   (4.135)

Thus, the maximum flux is

f(ρm/2) = fm = ρm u0/4.   (4.136)
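These facts follow from elementary calculus and can be confirmed symbolically; a brief sketch assuming sympy (a tool choice of this sketch, not of the notes):

```python
import sympy as sp

rho, rho_m, u0 = sp.symbols('rho rho_m u0', positive=True)
f = rho*u0*(1 - rho/rho_m)                 # flux model, Eq. (4.132)

crit = sp.solve(sp.diff(f, rho), rho)      # where df/drho = 0
assert crit == [rho_m/2]                   # Eq. (4.135)
assert sp.simplify(f.subs(rho, rho_m/2) - rho_m*u0/4) == 0   # Eq. (4.136)
assert sp.diff(f, rho, 2) == -2*u0/rho_m   # Eq. (4.134), strictly negative
```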
For conservation of vehicle density, we specialize Eq. (4.19) with q = ρ, f = f , and s = 0,
thus giving

∂ρ/∂t + ∂f(ρ)/∂x = 0,   (4.137)
∂ρ/∂t + ∂/∂x(ρ u0 (1 − ρ/ρm)) = 0,   (4.138)

for the inviscid limit. Expanding the derivative, we find

∂ρ/∂t + u0 (1 − 2ρ/ρm) ∂ρ/∂x = 0.   (4.139)


Note that the characteristic curves are given by curves whose slope is

dx/dt = u0 (1 − 2ρ/ρm).   (4.140)

The slope of the curve may be positive or negative and gives the velocity of small disturbances;
thus small disturbances may propagate to either the left or the right. The speed is dependent
on the local value of ρ. Specializing Eq. (4.11) to find the speed of propagation of
discontinuous jumps U, we get

U = ⟦f(ρ)⟧/⟦ρ⟧ = (ρ2 u0 (1 − ρ2/ρm) − ρ1 u0 (1 − ρ1/ρm))/(ρ2 − ρ1),   (4.141)
= u0 (1 − (ρ1 + ρ2)/ρm).   (4.142)

We can postulate a viscous version of Eq. (4.138). Expanding the derivative, we find

∂ρ/∂t + u0 (1 − 2ρ/ρm) ∂ρ/∂x = ν ∂²ρ/∂x².   (4.143)

If we now define the transformed dependent variable as

w ≡ u0 (1 − 2ρ/ρm),   (4.144)

we find Eq. (4.143) transforms to the Burgers' equation,

∂w/∂t + w ∂w/∂x = ν ∂²w/∂x².   (4.145)

Note that from Eq. (4.140), w gives the speed of propagation of small disturbances. The
inverse transformation gives

ρ = (ρm/2)(1 − w/u0).   (4.146)
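That the substitution of Eqs. (4.144, 4.146) maps Eq. (4.143) into Eq. (4.145) can be verified symbolically; a sketch assuming sympy:

```python
import sympy as sp

x, t, nu, rho_m, u0 = sp.symbols('x t nu rho_m u0', positive=True)
w = sp.Function('w')(x, t)
rho = rho_m/2*(1 - w/u0)                       # inverse map, Eq. (4.146)

# Viscous traffic equation, Eq. (4.143), moved to one side
traffic = (sp.diff(rho, t)
           + u0*(1 - 2*rho/rho_m)*sp.diff(rho, x)
           - nu*sp.diff(rho, x, 2))

# Burgers' residual in w, Eq. (4.145)
burgers = sp.diff(w, t) + w*sp.diff(w, x) - nu*sp.diff(w, x, 2)

# The two residuals differ only by the constant factor -rho_m/(2*u0)
assert sp.simplify(traffic + rho_m/(2*u0)*burgers) == 0
```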

Example 4.6
Consider a traffic flow solution where ρm = 1, u0 = 1, and ν = 1/1000. At t = 0, the traffic density
is low, except for a small region where it jumps to a significant fraction of the maximum density, then
returns to the same low value. Specifically, we take
ρ(x, 0) = 1/10 + (1/2)(H(x − 1) − H(x − 2)).   (4.147)
We can think of this as a small traffic snarl. Find the behavior of the traffic density for t ∈ [0, 4].


Figure 4.14: Plot of vehicle density ρ as a function of x at various t (t = 0, 1, 2, 4) in a traffic snarl.

With ρm = 1, u0 = 1, we have

f(ρ) = ρ(1 − ρ),   (4.148)
fm = 1/4,   (4.149)
w = 1 − 2ρ,   (4.150)
ρ = (1/2)(1 − w).   (4.151)
We solve the Burgers' equation numerically and perform the appropriate transformations to generate
ρ(x, t). A plot of the solution is given in Fig. 4.14. We clearly see a shock and rarefaction, both
propagating to the right in the direction of increasing x. As vehicles approach from the right in
the region where density is low, they suddenly encounter a steep jump. Vehicles on the downstream
side of the snarl gradually decrease their density until they recover the freestream value of 1/10. In
contrast to problems with a convex flux function, for this non-convex flux function, the head of the
right-propagating wave is a rarefaction and the tail is a shock.
Specializing Eq. (4.142) for the parameters of this problem, we can expect jumps to propagate at
speed

U = (1)(1 − (1/10 + 6/10)/1) = 3/10.   (4.152)

We see by examining Fig. 4.14 that the shock discontinuity at x = 1 at t = 0 has moved to x = 1.3 at
t = 1, consistent with our theory of discontinuity propagation velocity.
Small disturbances propagate at w = 1 − 2ρ. A small disturbance in the low density region of the
flow where ρ = 1/10 propagates at speed w = 1 − 2/10 = 4/5. A small disturbance in the high density
region of the flow where ρ = 3/5 propagates at speed w = 1 − 2(3/5) = −1/5. There is a continuum
of speeds for small disturbances that originate near the jump in ρ where ρ ∈ [1/10, 3/5]. These speeds
range from w = 4/5 to w = −1/5.
The x − t diagram of Fig. 4.15 summarizes many important concepts. Clearly at early time there


Figure 4.15: x − t diagram of vehicle density ρ in a traffic snarl.

is a nearly discontinuous shock propagating to the right at velocity U = 3/10. Simultaneously there
is a continuous rarefaction, centered at (x, t) = (2, 0). The tail of this rarefaction propagates in the
negative x direction, with speed −1/5. Around t = 1.6, the infinitesimal tail of the rarefaction intersects
with the shock, and modulates it. This modulation continues as more infinitesimal rarefaction waves
intersect with the shock. Though it is difficult to discern from the plot, we have also sketched reflected
waves after the rarefaction strikes the shock. The head of the rarefaction propagates to the right at
speed w = 4/5.

4.5 Linear dispersive waves


Certainly the inviscid and viscous Burgers’ equations we have studied have displayed the
feature that their wave form distorts, sometimes dramatically, as time advances. Some of
that is an inviscid effect, such as shown in Fig. 4.4; some is a viscous effect, such as shown
in Fig. 4.8. The nonlinearity of the Burgers’ equation makes closed form analysis difficult.
Let us study these distortions in the context of a simple linear model system,
∂u/∂t + a ∂u/∂x = ν ∂²u/∂x² + β ∂³u/∂x³,   (4.153)

with the terms identified, left to right, as evolution, advection, diffusion, and dispersion.

The new term here is β ∂ 3 u/∂x3 . We will not provide a physical derivation, but note
• the term is known as a dispersive term as it will be seen to induce wave forms to
disperse and thus lose their integrity,


• it can arise in a variety of physical scenarios such as in shallow water wave theory,

• it is a useful construct in evaluating higher order errors in various numerical approximations to partial differential equations.
To aid in our analysis, let us assume that we can separate variables so that

u(x, t) = A(t)eikx . (4.154)

We have separated u(x, t) into a time-dependent amplitude A(t) and a single Fourier spatial
mode with assumed wavenumber k. Here we use the term eikx as a convenience for analysis.
It does introduce the imaginary number i; if real valued solutions are desired, they can
always be achieved by suitably defining complex constants within A(t). Also note that we
are really assuming a spatially periodic solution in x with Euler’s formula, see Sec. 8.3.1,
giving

eikx = cos kx + i sin kx. (4.155)

Recall from Eq. (3.75) the wavenumber is k = 2π/λ, where λ is the wavelength. Let us further
imagine that we are in a doubly infinite domain, thus x ∈ (−∞, ∞). The consequence of
this is that there is a continuous spectrum of k admitted as solutions in contrast to equations
on a finite domain, where discrete spectra, such as displayed in Fig. 3.4, are the only types
admitted.
Necessary derivatives of Eq. (4.154) are
∂u/∂t = (dA/dt) e^{ikx},   (4.156)
∂u/∂x = ikA e^{ikx},   (4.157)
∂²u/∂x² = −k²A e^{ikx},   (4.158)
∂³u/∂x³ = −ik³A e^{ikx}.   (4.159)

With these, we see that Eq. (4.153) reduces to

(dA/dt) e^{ikx} + aikA e^{ikx} = −νk²A e^{ikx} − βik³A e^{ikx},   (4.160)
dA/dt = −(aik + νk² + βik³) A,   (4.161)
A(t) = A0 e^{−(νk² + i(ak + βk³))t},   (4.162)
= A0 e^{−νk²t} e^{−i(ak + βk³)t}.   (4.163)

Here A0 is the constant initial value of the amplitude of the Fourier mode. The term e^{−νk²t}
tells us that A(t) has a decaying amplitude for ν > 0 and for all k. Moreover, the time scale


of amplitude decay is τ = 1/(νk²): rapid decay is induced by high wavenumber k and large
diffusion coefficient ν. The term e^{−i(ak+βk³)t} is purely oscillatory and does not decay with
time. We recombine to form u(x, t) as

u(x, t) = A0 e^{−νk²t} e^{−i(ak + βk³)t} e^{ikx},   (4.164)
= A0 e^{−νk²t} e^{ik(x − (a + βk²)t)}.   (4.165)

Now considering the oscillatory part of u(x, t), if x − at − βk 2 t is fixed, a point on the
propagating wave is fixed. Let us call that the phase, φ:

φ = x − (a + βk 2 )t. (4.166)

The phase has a velocity. If we hold φ fixed and differentiate with respect to time we get

dφ/dt = dx/dt − (a + βk²) = 0,   (4.167)
dx/dt = c = a + βk².   (4.168)
We note, importantly,

• For β ≠ 0, the phase speed of the Fourier mode depends on the wavenumber k. Fourier
modes with different k travel at different speeds. This induces dispersion of an initial
waveform.

• For β > 0, high frequency modes, that is those with large k, move rapidly, and are
attenuated rapidly.

• For β > 0, low frequency modes move slowly and are attenuated slowly.

• If β = 0, all modes travel at the same speed a. Such waves are non-dispersive.

• The phase speed is independent of the diffusion coefficient ν.

So positive ν induces amplitude decay but not dispersion; nonzero β induces dispersion but
no amplitude decay.
We can rewrite the oscillatory portion as

e^{ik(x − (a + βk²)t)} = exp(i(kx − (ka + βk³)t)),   (4.169)
= exp(i(kx − ωt)),   (4.170)

if we take

ω = ka + βk 3 . (4.171)


In general, the relation


ω = ω(k), (4.172)
is known as the dispersion relation. We refer to Whitham for details, where it is seen to be
common to define the phase speed c(k) as
c(k) = ω(k)/k.   (4.173)

This is consistent with our Eq. (4.168), for which we have c = a + βk².
Leaving out details, which are provided by Whitham, it is also common to define what is
known as the group velocity C(k) as

C(k) = dω/dk.   (4.174)
While individual Fourier modes propagate with individual phase speeds, it can be shown
that the integrated energy of a signal in fact propagates with the group velocity. For our
system, differentiating Eq. (4.171) shows the group velocity to be
C(k) = a + 3βk 2 . (4.175)
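The phase and group velocities for the dispersion relation of Eq. (4.171) can be recovered in a few lines of symbolic computation (sympy assumed):

```python
import sympy as sp

k, a, beta = sp.symbols('k a beta', real=True)
omega = k*a + beta*k**3                    # dispersion relation, Eq. (4.171)

c = omega/k                                # phase speed, Eq. (4.173)
C = sp.diff(omega, k)                      # group velocity, Eq. (4.174)
assert sp.simplify(c - (a + beta*k**2)) == 0    # Eq. (4.168)
assert sp.expand(C - (a + 3*beta*k**2)) == 0    # Eq. (4.175)
```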

Example 4.7
Consider a solution to
∂u/∂t + a ∂u/∂x = ν ∂²u/∂x² + β ∂³u/∂x³.   (4.176)
Consider an initial condition of a “top hat”:
u(x, 0) = H(x − 1) − H(x − 2), (4.177)
and four different parameter sets: i) a = 1, ν = 0, β = 0, ii) a = 1, ν = 1/100, β = 0, iii) a = 1,
ν = 1/100, β = 1/1000, and iv) a = 1, ν = 1/1000, β = 1/1000.

An individual Fourier mode, in terms of ordinary trigonometric functions, could be specialized from
Eq. (4.165) to take the form

u(x, t) = A0 e^{−νk²t} sin(k(x − (a + βk²)t)).   (4.178)
Omitting details of how to sum the various Fourier modes so as to match the initial conditions, we
simply plot results in Fig. 4.16. For a = 1, ν = 0, β = 0, the initial top hat advects to the right and
the waveform is otherwise unchanged. For a = 1, ν = 1/100, β = 0, the waveform advects to the right
at the same rate, but the diffusion decays the amplitude of the high frequency modes near the sharp
interface, smoothing the solution. The wave is advecting and diffusing, but not dispersing. For a = 1,
ν = 1/100, β = 1/1000, we see some additional high frequency modes moving in front of the initial
wave form. This is consistent with the notion that the phase speed for large k is large. This wave is
advecting, diffusing, and dispersing. The dispersive effect is more apparent when diffusion is lowered
so that a = 1, ν = 1/1000, β = 1/1000.
We show complementary x − t diagrams in Fig. 4.17.
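The omitted Fourier summation can be carried out discretely with the FFT on a periodic domain, advancing each mode exactly by its factor from Eqs. (4.161-4.163). A sketch with assumed (illustrative) domain length and resolution:

```python
import numpy as np

Lx, N = 10.0, 1024
x = np.arange(N)*Lx/N
k = 2.0*np.pi*np.fft.fftfreq(N, d=Lx/N)

u0 = ((x > 1.0) & (x < 2.0)).astype(float)   # top hat, Eq. (4.177)

def evolve(u_init, t, a=1.0, nu=1.0/100.0, beta=0.0):
    # Each mode A_k evolves as A_k(0)*exp(-(i*a*k + nu*k**2 + i*beta*k**3)*t),
    # per Eq. (4.161)
    factor = np.exp(-(1j*a*k + nu*k**2 + 1j*beta*k**3)*t)
    return np.real(np.fft.ifft(np.fft.fft(u_init)*factor))

u = evolve(u0, 1.0)    # advected to roughly [2, 3] and smoothed by diffusion
```

Setting beta nonzero in `evolve` reproduces the dispersive ripples running ahead of the wave seen in Fig. 4.16.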


Figure 4.16: Solutions to Eq. (4.153) under conditions indicated: a) a = 1, ν = 0, β = 0;
b) a = 1, ν = 1/100, β = 0; c) a = 1, ν = 1/100, β = 1/1000; d) a = 1, ν = 1/1000, β = 1/1000.

Figure 4.17: x − t diagrams for solutions to Eq. (4.153) under conditions complementary to
those of Fig. 4.16.


4.6 Stokes’ second problem


Let us consider what amounts to Stokes’ second problem. It is a problem in one spatial
dimension, so it is certainly “one-dimensional.” As the governing equation is parabolic
and not hyperbolic, it is not a traditional “wave.” But in that it has a sinusoidally forced
boundary condition, it does have wave-like features in that information from the boundary is
propagated into the domain. The propagation mechanism is diffusion rather than advection.
Stokes9 addressed it in his original work which developed the Navier-Stokes equations in
the mid-nineteenth century.10 He addressed it in the context of momentum diffusion; here,
we shall study its analog in the context of energy diffusion. We shall consider Stokes’ first
problem later in Sec. 6.1.
Consider then the one-dimensional unsteady heat equation, Eq. (1.82) along with initial
and boundary conditions as shown,
∂T/∂t = α ∂²T/∂x²,   (4.179)
T(x, 0) = 0,  T(0, t) = T0 sin Ωt,  T(∞, t) < ∞.   (4.180)

We can imagine this problem physically as one in which a semi-infinite slab is subjected to
an oscillatory temperature field at its boundary at x = 0. Such might be the case for the
surface of the earth during a night-day cycle.
We may recall Euler’s formula, Eq. (8.39), derived in Sec. 8.3.1:

eiΩt = cos Ωt + i sin Ωt. (4.181)

We also recall the real part is defined as



ℜ(e^{iΩt}) = cos Ωt,   (4.182)

and the imaginary part is defined as



ℑ(e^{iΩt}) = sin Ωt.   (4.183)

Let us define a related auxiliary problem, with T defined as a complex variable whose imag-
inary part is T : ℑ(T) = T . We then take our extended problem to be

∂T/∂t = α ∂²T/∂x²,   (4.184)
T(x, 0) = 0,  T(0, t) = T0 e^{iΩt},  |T(∞, t)| < ∞.   (4.185)

Next let us seek a solution which is valid at long time. That is to say, we will not require
our solution to satisfy any initial condition but will require it to satisfy the partial differential
9 George Gabriel Stokes, 1819-1903, Anglo-Irish mathematician and physicist.
10 Stokes, G. G., 1851, "On the effect of the internal friction of fluids on the motion of pendulums," Transactions of the Cambridge Philosophical Society, 9(2): 8-106.


equation and boundary conditions at x = 0 and x → ∞. We will gain many useful insights
even though we will not capture the initial condition, which, with extra effort, we could.
Let us separate variables in the following fashion. Assume that

T(x, t) = f (x)eiΩt , (4.186)

where f (x) is a function to be determined. Ultimately we will only be concerned with the
imaginary portion of this solution, which is the portion we will need to match the boundary
condition. With this assumption, we find formulæ for the various partial derivatives to be

∂T/∂t = iΩ f(x) e^{iΩt},   (4.187)
∂T/∂x = (df/dx) e^{iΩt},   (4.188)
∂²T/∂x² = (d²f/dx²) e^{iΩt}.   (4.189)

Then Eq. (4.184) becomes

iΩf e^{iΩt} = α (d²f/dx²) e^{iΩt},   (4.190)
iΩf = α d²f/dx².   (4.191)

Now assume that f (x) = Aeax , giving

iΩA e^{ax} = A a² α e^{ax},   (4.192)
iΩ/α = a².   (4.193)

Now in a polar representation, we note that

i = eiπ/2 . (4.194)

More generally, we could say

i = ei(π/2+2nπ) , n = 0, 1, 2, ... (4.195)

Thus, Eq. (4.193) can be re-expressed as

(Ω/α) e^{i(π/2 + 2nπ)} = a²,   (4.196)
√(Ω/α) e^{i(π/4 + nπ)} = a.   (4.197)


Using Euler's formula, Eq. (8.39), we could then say

a = √(Ω/α) (cos(π/4 + nπ) + i sin(π/4 + nπ)),   (4.198)
= ±√(Ω/α) (1/√2 + i/√2),   (4.199)
= ±√(Ω/(2α)) (1 + i).   (4.200)

When n is even, we have the “plus” root; when odd, we have the “minus” root. For each
root, we can have a solution; thus, we form linear combinations to get
f(x) = A1 exp(√(Ω/(2α)) (1 + i) x) + A2 exp(−√(Ω/(2α)) (1 + i) x).   (4.201)

Now because we take Ω > 0, α > 0 and x > 0, we will need A1 = 0 in order to keep |T|
bounded as x → ∞. So we have

f(x) = A2 exp(−√(Ω/(2α)) (1 + i) x).   (4.202)

Then recombining, we find that

T(x, t) = A2 exp(−√(Ω/(2α)) (1 + i) x) exp(iΩt).   (4.203)

Now at x = 0, we must have

T0 exp(iΩt) = A2 exp(iΩt).   (4.204)

We thus need to take A2 = T0, giving

T(x, t) = T0 exp(−√(Ω/(2α)) (1 + i) x) exp(iΩt).   (4.205)

We then find T by considering only the imaginary portion of T, giving

T(x, t) = ℑ( T0 exp(−√(Ω/(2α)) (1 + i) x) exp(iΩt) ),   (4.206)
= ℑ( T0 exp(−√(Ω/(2α)) (1 + i) x + iΩt) ),   (4.207)
= ℑ( T0 exp(−√(Ω/(2α)) x + i(Ωt − √(Ω/(2α)) x)) ),   (4.208)
= T0 exp(−√(Ω/(2α)) x) sin(Ωt − √(Ω/(2α)) x).   (4.209)


Figure 4.18: Solution to Stokes' second problem with α = 1, T0 = 1, Ω = 1.

By inspection, the boundary condition is satisfied. Direct substitution reveals the solution
also satisfies the heat equation. And clearly as x → ∞, T → 0.
Now the amplitude of this wave-like solution has decayed to roughly T0/100 at a point
where

√(Ω/(2α)) x = 4.5,   (4.210)
x = 4.5 √(2α/Ω).   (4.211)

Thus the penetration depth of the wave into the domain is enhanced by high α and low Ω.
Below this depth, the material is insensitive to the disturbance at the boundary. With
regard to the oscillatory portion of the solution, we see the angular frequency is Ω and the
wavenumber is k = √(Ω/(2α)).
The phase of the wave is given by

φ = Ωt − √(Ω/(2α)) x.   (4.212)

Let us get the phase speed. If the phase itself is constant, we differentiate to get

dφ/dt = 0 = Ω − √(Ω/(2α)) dx/dt,   (4.213)
dx/dt = √(2αΩ).   (4.214)
For α = 1, T0 = 1, Ω = 1, we plot T(x, t) in Fig. 4.18. Clearly, for x ≳ 4.5 √((2)(1)/1) ≈ 6.4,
the sinusoidal temperature variation at x = 0 has little effect.
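The assertions that Eq. (4.209) satisfies the heat equation and the x = 0 boundary condition are easily confirmed symbolically (sympy assumed):

```python
import sympy as sp

x, t, alpha, Omega, T0 = sp.symbols('x t alpha Omega T0', positive=True)
k = sp.sqrt(Omega/(2*alpha))                  # wavenumber of Eq. (4.209)
T = T0*sp.exp(-k*x)*sp.sin(Omega*t - k*x)     # Eq. (4.209)

# Heat equation, Eq. (4.179), is satisfied identically ...
assert sp.simplify(sp.diff(T, t) - alpha*sp.diff(T, x, 2)) == 0
# ... as is the boundary condition at x = 0
assert T.subs(x, 0) == T0*sp.sin(Omega*t)
```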

Problems



Chapter 5

Two-dimensional waves

see Mei, Chapters 8, 10.

Here we consider aspects of two-dimensional wave propagation.

5.1 Helmholtz equation


Consider the multidimensional extension of the wave equation, Eq. (2.15):

∂²φ/∂t² = a² ∇²φ.   (5.1)

Here we have exchanged φ for y as we may wish to use y for a coordinate. With x = (x, y, z)T
representing three-dimensional spatial coordinates, we can separate variables as follows

φ(x, t) = u(t)v(x). (5.2)

With this assumption, Eq. (5.1) becomes

v d²u/dt² = a² u ∇²v,   (5.3)
(1/(a²u)) d²u/dt² = (1/v) ∇²v = −1/λ².   (5.4)

Our choice of the constant to be −1/λ2 is non-traditional, but will have an improved physical
interpretation. This induces the equations

∇²v + (1/λ²) v = 0,   (5.5)
d²u/dt² + (a/λ)² u = 0.   (5.6)


Equation (5.5) is known as a Helmholtz1 equation. It is a linear elliptic partial differential


equation. Because of its linearity, its solution can be decomposed into various eigenmodes.
The appropriate eigenfunctions will depend on the particular geometry.
Equation (5.6) has solution
u(t) = C1 sin(at/λ) + C2 cos(at/λ).   (5.7)
We note that if a has the units of velocity, and t time, then at has units of length, and so
must λ.

5.2 Square domain


Let us consider Eq. (5.5) in a two-dimensional Cartesian geometry with x = (x, y)T , on the
square x ∈ [0, L], y ∈ [0, L]. Let us insist that φ(x, 0, t) = 0, φ(x, L, t) = 0, φ(L, y, t) =
0. We will allow an inhomogeneous Dirichlet boundary condition at φ(0, y, t) = f (y, t).
Equation (5.5) becomes
∂²v/∂x² + ∂²v/∂y² + (1/λ²) v = 0.   (5.8)
Let us separate variables again by assuming

v(x, y) = w(x)z(y). (5.9)

Substituting, we get
z d²w/dx² + w d²z/dy² + (1/λ²) wz = 0,   (5.10)
(1/w) d²w/dx² + (1/z) d²z/dy² + 1/λ² = 0,   (5.11)
(1/z) d²z/dy² = −(1/w) d²w/dx² − 1/λ² = −1/µ².   (5.12)
This induces

d²z/dy² + (1/µ²) z = 0,   (5.13)
d²w/dx² + (1/λ² − 1/µ²) w = 0.   (5.14)
We find

z(y) = C1 sin(y/µ) + C2 cos(y/µ).   (5.15)
1 Hermann Ludwig Ferdinand von Helmholtz, 1821-1894, German physician and physicist.


To satisfy the homogeneous Dirichlet boundary conditions, we must have C2 = 0 and 1/µ =
nπ/L. Thus
z(y) = C1 sin(nπy/L),  n = 1, 2, . . . .   (5.16)

And

d²w/dx² + (1/λ² − n²π²/L²) w = 0.   (5.17)
For λ < L/(nπ), the solution is oscillatory; for λ > L/(nπ), the solution has an exponential
character. In physics, the eigenfunctions for the oscillatory case are characterized as "bound
states." In terms of the operator −d²/dx² − 1/λ² + n²π²/L², we see that it is positive definite
for λ > L/(nπ). Thus all its eigenvalues are positive. However, for λ < L/(nπ), some of the
eigenvalues may be negative, inducing the bound states. The solution is

w(x) = C3 cosh(√(n²π² − L²/λ²) x/L) + C4 sinh(√(n²π² − L²/λ²) x/L),   λ > L/(nπ),
w(x) = C3 cos(√(L²/λ² − n²π²) x/L) + C4 sin(√(L²/λ² − n²π²) x/L),     λ < L/(nπ).          (5.18)

One simple solution is

φ(x, y, t) = C cos(at/λ) sin(nπy/L) cosh(√(n²π² − L²/λ²) x/L)
             × [1 − tanh(√(n²π² − L²/λ²) x/L) / tanh(√(n²π² − L²/λ²))],   λ > L/(nπ),

φ(x, y, t) = C cos(at/λ) sin(nπy/L) cos(√(L²/λ² − n²π²) x/L)
             × [1 − tan(√(L²/λ² − n²π²) x/L) / tan(√(L²/λ² − n²π²))],     λ < L/(nπ).          (5.19)
It satisfies the partial differential equation, the boundary conditions at y = L, and x = L.
And it admits the inhomogeneous boundary condition at x = 0 of

φ(0, y, t) = C cos(at/λ) sin(nπy/L),          (5.20)

           = (C/2) [sin(nπy/L − at/λ) + sin(nπy/L + at/λ)],          (5.21)

           = (C/2) [sin((nπ/L)(y − (at/λ)(L/(nπ)))) + sin((nπ/L)(y + (at/λ)(L/(nπ))))].          (5.22)


[Figure omitted: four panels at λ = 10/π, 1/π, 1/(10π), 1/(20π).]

Figure 5.1: Plots of solution to the two-dimensional wave equation within a square domain
for various values of λ with n = 1.

This boundary condition holds for all λ and n. We note the phase speed of the time-
dependent boundary condition here is aL/(λnπ). More general boundary conditions could
be addressed with Fourier series expansions and use of the principle of superposition. A
well-posed problem requires two initial conditions, and those can be deduced easily from our
solution by examining φ(x, y, 0) and ∂φ/∂t(x, y, 0). This analysis is not shown here.
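The solution of Eq. (5.19) can also be checked numerically. The following minimal sketch (assuming C = 1, a = 1, L = 1, n = 1, and λ = 10/π, so λ > L/(nπ) and the hyperbolic branch applies) confirms the three homogeneous Dirichlet conditions, the inhomogeneous condition of Eq. (5.20), and, by central finite differences, the wave equation itself:

```python
import math

C, a, L, n = 1.0, 1.0, 1.0, 1
lam = 10.0 / math.pi          # lambda > L/(n pi): hyperbolic branch of Eq. (5.19)
k = math.sqrt(n**2 * math.pi**2 - L**2 / lam**2)

def phi(x, y, t):
    """Single-mode solution, Eq. (5.19), hyperbolic branch."""
    w = math.cosh(k * x / L) * (1.0 - math.tanh(k * x / L) / math.tanh(k))
    return C * math.cos(a * t / lam) * math.sin(n * math.pi * y / L) * w

# Homogeneous Dirichlet conditions at y = 0, y = L, and x = L
assert abs(phi(0.3, 0.0, 0.2)) < 1e-12
assert abs(phi(0.3, L, 0.2)) < 1e-12
assert abs(phi(L, 0.7, 0.2)) < 1e-12

# Inhomogeneous condition at x = 0, Eq. (5.20)
y, t = 0.4, 0.3
assert abs(phi(0.0, y, t)
           - C * math.cos(a * t / lam) * math.sin(n * math.pi * y / L)) < 1e-12

# Finite-difference residual of phi_tt = a^2 (phi_xx + phi_yy) at an interior point
h = 1e-3
x, y, t = 0.5, 0.4, 0.3
d2 = lambda f: (f(h) - 2.0 * f(0.0) + f(-h)) / h**2
phi_tt = d2(lambda e: phi(x, y, t + e))
phi_xx = d2(lambda e: phi(x + e, y, t))
phi_yy = d2(lambda e: phi(x, y + e, t))
residual = phi_tt - a**2 * (phi_xx + phi_yy)
assert abs(residual) < 1e-3
```

The parameter values and the finite-difference step h are illustrative choices, not prescribed by the text.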
We next consider some relevant plots for particular parameter values. In all plots we will
take C = 1, a = 1, L = 1. We will study the effect of variable λ and variable n, which
enter in the specification of the inhomogeneous boundary condition. We first fix n = 1 and
present results at t = 0 for λ = 10/π, 1/π, 1/(10π), and 1/(20π) in Fig. 5.1. We note
that all figures display a matching of the various boundary conditions. We must envision
the left boundary at x = 0 oscillating and propagating disturbances into the domain. With
animation available with modern software, this can be visualized. We note for large λ that
there is a simple decay of the boundary disturbance to zero at the other three boundaries.


[Figure omitted: four panels at λ = 10/π, 1/(4π), 1/(10π), 1/(20π).]

Figure 5.2: Plots of solution to the two-dimensional wave equation within a square domain
for various values of λ with n = 4.

As λ decreases, the essential behavior does not change until λ crosses below the threshold
of L/(nπ) = 1/π. Below this threshold, we find resonant structures oscillating within the
domain. As λ decreases further, the wavelength of those structures decreases, and the
amplitude of the oscillations increases.
Let us next examine initial disturbances with higher wave number. We take n = 4 and
present results at t = 0 for λ = 10/π, 1/(4π), 1/(10π), and 1/(20π) in Fig. 5.2. We see
interesting phenomena here. First we note that large λ gives rise to a suppression of the
signal penetration into the domain. For λ → ∞, we see that the solution goes as cosh(nπx/L)
and thus the penetration depth goes like L/(nπ). So high wave number disturbances are
not felt in the interior. For a critical value of λ = L/(nπ) = 1/(4π) for us, the signal is felt
through the entire domain, but it decays moderately to zero at the x = 1 boundary. For
smaller λ, resonance patterns emerge and become the dominant structures.


5.3 Circular domain


Let us now consider Eq. (5.5) in a two-dimensional polar geometry with x = (r, θ)T , within
the domain bounded by r ∈ [0, R], θ ∈ [0, 2π]. Drawing upon Eq. (3.222), we specialize our
earlier results for ∇2 in cylindrical coordinates to the plane polar case and find Eq. (5.5)
expands as
 
(1/r) ∂/∂r (r ∂v/∂r) + (1/r²) ∂²v/∂θ² + (1/λ²) v = 0.          (5.23)
Let us separate variables once more:
v(r, θ) = w(r)z(θ). (5.24)
Substituting, we get

(1/r) ∂/∂r (r ∂(wz)/∂r) + (1/r²) ∂²(wz)/∂θ² + (1/λ²) wz = 0,          (5.25)

(z/r) d/dr (r dw/dr) + (w/r²) d²z/dθ² + (1/λ²) wz = 0,          (5.26)

(r/w) d/dr (r dw/dr) + (1/z) d²z/dθ² + r²/λ² = 0,          (5.27)

(r/w) d/dr (r dw/dr) + r²/λ² = −(1/z) d²z/dθ² = α².          (5.28)
This induces two ordinary differential equations:

d²z/dθ² + α² z = 0,          (5.29)

r d/dr (r dw/dr) + (r²/λ² − α²) w = 0.          (5.30)
Solution to Eq. (5.29) is seen to be
z(θ) = C1 sin αθ + C2 cos αθ. (5.31)
Now we would like both φ and its derivatives to be periodic in θ. As done earlier in Sec. 3.3.1,
we can achieve this by requiring z(0) = z(2π) and dz/dθ(0) = dz/dθ(2π). The two conditions
are
C2 = C1 sin 2πα + C2 cos 2πα, (5.32)
αC1 = αC1 cos 2πα − αC2 sin 2πα. (5.33)
We write this as a linear system,

[ sin 2πα        cos 2πα − 1 ] [C1]   [0]
[ cos 2πα − 1    − sin 2πα   ] [C2] = [0].          (5.34)


For a nontrivial solution, we insist the determinant of the coefficient matrix be zero, giving

− sin² 2πα − (cos 2πα − 1)² = 0,          (5.35)

− sin² 2πα − (cos² 2πα − 2 cos 2πα + 1) = 0,          (5.36)

− (sin² 2πα + cos² 2πα) + 2 cos 2πα − 1 = 0,          (5.37)

2 cos 2πα = 2,          (5.38)

cos 2πα = 1.          (5.39)

For this, we require that

α = n, n = 0, 1, 2, . . . . (5.40)

So, Eq. (5.31) reduces to

z(θ) = C1 sin nθ + C2 cos nθ, n = 0, 1, 2, . . . (5.41)

With this, Eq. (5.30) becomes


   
r d/dr (r dw/dr) + (r²/λ² − n²) w = 0,          (5.42)

that has solution

w(r) = C3 Jn(r/λ) + C4 Yn(r/λ).          (5.43)
As limr→0 Yn (r/λ) → −∞, we take C4 = 0 to keep φ bounded.
We can compose a single mode of a solution as

φ(r, θ, t) = u(t)w(r)z(θ), (5.44)


   
= C cos(at/λ) Jn(r/λ) cos(nθ).          (5.45)
Of course, we could expand to include the sin component in both t and θ, and we could
sum modes so as to match some specified initial condition. Realizing that is possible, let us
simply study this simple solution, Eq. (5.45). Similar to the solution in the square domain of
Sec. 5.2, let us restrict attention to C = 1, R = 1, and a = 1. So we have a special solution
of
   
φ(r, θ, t) = cos(t/λ) Jn(r/λ) cos(nθ).          (5.46)

At t = 0, this solution takes the form

φ(r, θ, 0) = Jn(r/λ) cos(nθ).          (5.47)


[Figure omitted: four panels at λ = 10/π, 1/π, 1/(10π), 1/(20π).]

Figure 5.3: Plots at t = 0 of solution to the two-dimensional wave equation within a circular
domain for various values of λ with n = 0.


We will study the effect of varying n and λ. We first fix n = 0 and present results at t = 0
for λ = 10/π, 1/π, 1/(10π) and 1/(20π) in Fig. 5.3. For large λ, φ has little variation with
space, and simply oscillates between ±1 at a frequency dictated by λ. As λ decreases, more
and more eigenmodes are bound within the domain. This is consistent with the results of
Sec. 5.2. For solutions with n = 0, there is no variation with θ. In this section, we may
imagine that φ at r = 1 is controlled; thus, the entire boundary of the circular domain may
be nontrivial. This contrasts with Sec. 5.2, where three of the boundaries were homogeneous and
one was controlled.
While there appears to be a singularity at r = 0 for smaller values of λ, one can show
that in fact the solution is finite for finite λ. In fact for n = 0, a Taylor series of φ taken in
the limit of small r and small t gives
 
φ ∼ C (1 − r²/(4λ²) − a²t²/(2λ²) + . . .).          (5.48)

Certainly φ ∼ C as r → 0 and t → 0. But we might expect some interesting behavior for
small λ.
We next fix n = 4 and present results at t = 0 for λ = 10/π, 1/π, 1/(10π) and 1/(20π) in
Fig. 5.4. For large λ, φ again has little variation with space. As λ decreases, more and
more eigenmodes are bound within the domain. For solutions with n = 4, there is variation
with θ.
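The small-r, small-t expansion, Eq. (5.48), can be checked directly against the exact mode, Eq. (5.46). A minimal numerical sketch (assuming C = a = 1 and λ = 1, with n = 0; the integer-order Bessel function is computed from its standard integral representation, so only the Python standard library is needed):

```python
import math

def bessel_j(n, x, terms=400):
    """J_n(x) for integer n via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) d tau,
    evaluated with a composite midpoint rule."""
    h = math.pi / terms
    return sum(math.cos(n * (i + 0.5) * h - x * math.sin((i + 0.5) * h))
               for i in range(terms)) * h / math.pi

lam = 1.0   # assumed value; C = a = 1 as in the text

def phi(r, t, n=0):
    """Single mode, Eq. (5.46), evaluated along theta = 0."""
    return math.cos(t / lam) * bessel_j(n, r / lam)

# J_0(0) = 1, so phi(0, 0) = C = 1: no singularity at the origin
assert abs(phi(0.0, 0.0) - 1.0) < 1e-8

# Compare with the two-term expansion, Eq. (5.48), for small r and t
r, t = 0.05, 0.05
approx = 1.0 - r**2 / (4.0 * lam**2) - t**2 / (2.0 * lam**2)
assert abs(phi(r, t) - approx) < 1e-4
```

The agreement degrades as r and t grow, as expected of a truncated Taylor series.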

Problems


[Figure omitted: four panels at λ = 10/π, 1/π, 1/(10π), 1/(20π).]

Figure 5.4: Plots of solution at t = 0 to the two-dimensional wave equation within a circular
domain for various values of λ with n = 4.



Chapter 6

Self-similar solutions

See Cantwell.

Here we consider self-similar solutions. We will consider problems which can be addressed by
what is known as a similarity transformation. The problems themselves will be fundamental
ones which have variation in either time and one spatial coordinate, or with two spatial coor-
dinates. Because two coordinates are involved, we must resort to solving partial differential
equations. The similarity transformation actually reveals a hidden symmetry of the partial
differential equations by defining a new independent variable, which is a grouping of the
original independent variables, under which the partial differential equations transform into
ordinary differential equations. We then solve the resulting ordinary differential equations
by standard techniques.

6.1 Stokes’ first problem


The first problem we will consider which uses a similarity transformation is known as Stokes’
first problem. As with Stokes’ second problem, Sec. 4.6, Stokes addressed it in his original
work which developed the Navier-Stokes equations in the mid-nineteenth century.1 The
problem is described as follows, and is sketched in Figure 6.1. Consider a flat plate of
infinite extent lying at rest for t < 0 on the y = 0 plane in x − y − z space. In the volume
described by y > 0 exists a fluid of semi-infinite extent which is at rest at time t < 0. At
t = 0, the flat plate is suddenly accelerated to a constant velocity of U, entirely in the x
direction. Because the no-slip condition is satisfied for the viscous flow, this induces the fluid
at the plate surface to acquire an instantaneous velocity of u(x = 0, t ≥ 0) = U. Because
of diffusion of linear x momentum via tangential viscous shear forces, the fluid in the region
above the plate begins to acquire a positive velocity in the x direction as well.

¹Stokes, G. G., 1851, “On the effect of the internal friction of fluids on the motion of pendulums,” Transactions of the Cambridge Philosophical Society, 9(2): 8-106.


[Figure omitted: sketch of the plate and the fluid above it.]

Figure 6.1: Schematic for Stokes’ first problem of a suddenly accelerated plate diffusing
linear momentum into a fluid at rest.

Using standard assumptions, the linear momentum principle reduces to

ρ ∂u/∂t = µ ∂²u/∂y².          (6.1)

Here the left side is (mass)(acceleration) per unit volume, and the right side is the net shear
force per unit volume.

Employing the momentum diffusivity definition ν = µ/ρ, we get the following partial differential
equation, initial and boundary conditions:

∂u/∂t = ν ∂²u/∂y²,          (6.2)

u(y, 0) = 0,   u(0, t) = U,   u(∞, t) = 0.          (6.3)

We note that Eq. (6.2) which describes the diffusion of linear momentum is mathematically
identical to the heat equation, Eq. (1.82) which describes the diffusion of energy.2
Now let us scale the equations. Choose

u∗ = u/U,   t∗ = t/tc ,   y∗ = y/yc .          (6.4)

We have yet to choose characteristic length, yc , and time, tc , scales. The equations become

(U/tc ) ∂u∗ /∂t∗ = (νU/yc²) ∂²u∗ /∂y∗²,          (6.5)

∂u∗ /∂t∗ = (νtc /yc²) ∂²u∗ /∂y∗².          (6.6)
²The analog to temperature T is velocity u. The analog to Fourier’s law, Eq. (1.78), qx = −k ∂T/∂x, is that of a Newtonian fluid, which in one dimension reduces to τ = µ ∂u/∂x, where τ is the viscous shear stress. The analog to the energy equation, Eq. (1.76), ρ ∂e/∂t = −∂qx/∂x, is Newton’s second law, which reduces to ρ ∂u/∂t = ∂τ/∂x. The analog to thermal diffusivity α = k/(ρc) is momentum diffusivity ν = µ/ρ.


We choose

yc ≡ ν/U = µ/(ρU ).          (6.7)

Noting the SI units, we see µ/(ρU ) has units of length:
(N s/m²)(m³/kg)(s/m) = (kg/(m s))(m³/kg)(s/m) = m. With this choice, we get

νtc /yc² = νtc U²/ν² = tc U²/ν.          (6.8)

This suggests we choose

tc = ν/U².          (6.9)
With all of these choices, the complete system can be written as

∂u∗ /∂t∗ = ∂²u∗ /∂y∗²,          (6.10)

u∗ (y∗ , 0) = 0,   u∗ (0, t∗ ) = 1,   u∗ (∞, t∗ ) = 0.          (6.11)

Now, for self-similarity, we seek a transformation that reduces this partial differential equation,
as well as its initial and boundary conditions, to an ordinary differential equation with
suitable boundary conditions. If this transformation does not exist, no similarity solution
exists. In this case, though not in all cases, the transformation does exist.
Let us first consider a general transformation from a y∗ , t∗ coordinate system to a new
η∗ , t̂∗ coordinate system. We assume then a general transformation

η∗ = η∗ (y∗ , t∗ ), (6.12)
t̂∗ = t̂∗ (y∗ , t∗ ). (6.13)

We assume then that a general variable ψ∗ which is a function of y∗ and t∗ also has the same
value at the transformed point η∗ , t̂∗ :

ψ∗ (y∗ , t∗ ) = ψ∗ (η∗ , t̂∗ ). (6.14)

The chain rule then gives expressions for derivatives:

∂ψ∗ /∂t∗ |y∗ = (∂ψ∗ /∂η∗ |t̂∗ )(∂η∗ /∂t∗ |y∗ ) + (∂ψ∗ /∂ t̂∗ |η∗ )(∂ t̂∗ /∂t∗ |y∗ ),          (6.15)

∂ψ∗ /∂y∗ |t∗ = (∂ψ∗ /∂η∗ |t̂∗ )(∂η∗ /∂y∗ |t∗ ) + (∂ψ∗ /∂ t̂∗ |η∗ )(∂ t̂∗ /∂y∗ |t∗ ).          (6.16)

Now we will restrict ourselves to the transformation

t̂∗ = t∗ , (6.17)


so we have ∂ t̂∗ /∂t∗ |y∗ = 1 and ∂ t̂∗ /∂y∗ |t∗ = 0, and our rules for differentiation reduce to

∂ψ∗ /∂t∗ |y∗ = (∂ψ∗ /∂η∗ |t̂∗ )(∂η∗ /∂t∗ |y∗ ) + ∂ψ∗ /∂ t̂∗ |η∗ ,          (6.18)

∂ψ∗ /∂y∗ |t∗ = (∂ψ∗ /∂η∗ |t̂∗ )(∂η∗ /∂y∗ |t∗ ).          (6.19)

The next assumption is key for a similarity solution to exist. We restrict ourselves to
transformations for which ψ∗ = ψ∗ (η∗ ). That is, we allow no dependence of ψ∗ on t̂∗ . Hence
we must require that ∂ψ∗ /∂ t̂∗ |η∗ = 0. Moreover, partial derivatives of ψ∗ become total
derivatives, giving us a final form of transformations for the derivatives:

∂ψ∗ /∂t∗ |y∗ = (dψ∗ /dη∗ )(∂η∗ /∂t∗ |y∗ ),          (6.20)

∂ψ∗ /∂y∗ |t∗ = (dψ∗ /dη∗ )(∂η∗ /∂y∗ |t∗ ).          (6.21)

In terms of operators we can say

∂/∂t∗ |y∗ = (∂η∗ /∂t∗ |y∗ ) d/dη∗ ,          (6.22)

∂/∂y∗ |t∗ = (∂η∗ /∂y∗ |t∗ ) d/dη∗ .          (6.23)

Now returning to Stokes’ first problem, let us assume that a similarity solution exists of
the form u∗ (y∗ , t∗ ) = u∗(η∗ ). It is not always possible to find a similarity variable η∗ . One of
the more robust ways to find a similarity variable, if it exists, comes from group theory,3 and
³Group theory has a long history in mathematics and physics. Its complicated origins generally include
attribution to Évariste Galois, 1811-1832, a somewhat romantic figure, as well as Niels Henrik Abel, 1802-1829,
the Norwegian mathematician. Critical developments were formalized by Marius Sophus Lie, 1842-1899,
another Norwegian mathematician, in what today is known as Lie group theory. A modern variant,
known as “renormalization group” (RNG) theory is an area for active research. The 1982 Nobel prize in
physics went to Kenneth Geddes Wilson, 1936-2013, of Cornell University and The Ohio State University, for use
of RNG in studying phase transitions, first done in the 1970s. The award citation refers to the possibilities
of using RNG in studying the great unsolved problem of turbulence, a modern area of research in which
Steven Alan Orszag, 1943-2011, made many contributions.
Quoting from the useful Eric Weisstein’s World of Mathematics, available online at
http://mathworld.wolfram.com/Group.html, “A group G is a finite or infinite set of elements together
with a binary operation which together satisfy the four fundamental properties of closure, associativity, the
identity property, and the inverse property. The operation with respect to which a group is defined is often
called the ‘group operation,’ and a set is said to be a group ‘under’ this operation. Elements A, B, C, . . .
with a binary operation between A and B denoted AB form a group if
1. Closure: If A and B are two elements in G, then the product AB is also in G.


is explained in detail by Cantwell. Group theory, which is too detailed to explicate in full
here, relies on a generalized symmetry of equations to find simpler forms. In the same sense
that a snowflake, subjected to rotations of π/3, 2π/3, π, 4π/3, 5π/3, or 2π, is transformed
into a form which is indistinguishable from its original form, we seek transformations of
the variables in our partial differential equation which map the equation into a form which
is indistinguishable from the original. When systems are subject to such transformations,
known as group operators, they are said to exhibit symmetry.
Let us subject our governing partial differential equation along with initial and boundary
conditions to a particularly simple type of transformation, a simple stretching of space, time,
and velocity:

t̃ = e^a t∗ ,   ỹ = e^b y∗ ,   ũ = e^c u∗ .          (6.24)

Here the “∼” variables are stretched variables, and a, b, and c are constant parameters. The
exponential will be seen to be a convenience, which is not absolutely necessary. Note that
for a ∈ (−∞, ∞), b ∈ (−∞, ∞), c ∈ (−∞, ∞), we have e^a ∈ (0, ∞), e^b ∈ (0, ∞), e^c ∈ (0, ∞).
So the stretching does not change the direction of the variable; that is, it is not a reflecting
transformation. We note that with this stretching, the domain of the problem remains
unchanged; that is, t∗ ∈ [0, ∞) maps into t̃ ∈ [0, ∞); y∗ ∈ [0, ∞) maps into ỹ ∈ [0, ∞).
The range is also unchanged if we allow u∗ ∈ [0, ∞), which maps into ũ ∈ [0, ∞). Direct
substitution of the transformation shows that in the stretched space, the system becomes

e^(a−c) ∂ũ/∂ t̃ = e^(2b−c) ∂²ũ/∂ ỹ²,          (6.25)

e^(−c) ũ(ỹ, 0) = 0,   e^(−c) ũ(0, t̃) = 1,   e^(−c) ũ(∞, t̃) = 0.          (6.26)

In order that the stretching transformation map the system into a form indistinguishable
from the original, that is for the transformation to exhibit symmetry, we must take

c = 0, a = 2b. (6.27)

So our symmetry transformation is

t̃ = e^(2b) t∗ ,   ỹ = e^b y∗ ,   ũ = u∗ ,          (6.28)
2. Associativity: The defined multiplication is associative, i.e. for all A, B, C ∈ G, (AB)C = A(BC).
3. Identity: There is an identity element I (a.k.a. 1, E, or e) such that IA = AI = A for every element
A ∈ G.
4. Inverse: There must be an inverse or reciprocal of each element. Therefore, the set must contain an
element B = A−1 such that AA−1 = A−1 A = I for each element of G.
. . ., A map between two groups which preserves the identity and the group operation is called a homomor-
phism. If a homomorphism has an inverse which is also a homomorphism, then it is called an isomorphism
and the two groups are called isomorphic. Two groups which are isomorphic to each other are considered to
be ‘the same’ when viewed as abstract groups.” For example, the group of 90 degree rotations of a square
are isomorphic.


giving in transformed space

∂ũ/∂ t̃ = ∂²ũ/∂ ỹ²,          (6.29)

ũ(ỹ, 0) = 0,   ũ(0, t̃) = 1,   ũ(∞, t̃) = 0.          (6.30)

Now both the original and transformed systems are the same, and the remaining stretching
parameter b does not enter directly into either formulation, so we cannot expect it in the
solution of either form. That is we expect a solution to be independent of the stretching
parameter b. This can be achieved if we take both u∗ and ũ to be functions of special
combinations of the independent variables, combinations that are formed such that b does
not appear. Eliminating b via

e^b = ỹ/y∗ ,          (6.31)

we get

t̃/t∗ = (ỹ/y∗ )²,          (6.32)

or after rearrangement,

y∗ /√t∗ = ỹ/√t̃ .          (6.33)

We thus expect u∗ = u∗ (y∗ /√t∗ ), or equivalently, ũ = ũ(ỹ/√t̃ ). This form also allows
u∗ = u∗ (βy∗ /√t∗ ), where β is any constant. Let us then define our similarity variable η∗ as

η∗ = y∗ /(2√t∗ ).          (6.34)
Here the factor of 1/2 is simply a convenience adopted so that the solution takes on a
traditional form. We would find that any constant in the similarity transformation would
induce a self-similar result.
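The claim that η∗ survives the stretching can be verified directly: under t̃ = e^(2b) t∗, ỹ = e^b y∗ of Eq. (6.28), the grouping ỹ/(2√t̃ ) equals y∗/(2√t∗) for every b. A quick numerical sketch (the sample point and the set of b values are arbitrary illustrative choices):

```python
import math

def eta(y, t):
    """Similarity variable, Eq. (6.34)."""
    return y / (2.0 * math.sqrt(t))

# Under the symmetry transformation, Eq. (6.28), eta is invariant for any b
y, t = 0.7, 0.3
for b in (-2.0, -0.5, 0.0, 1.0, 3.0):
    y_tilde = math.exp(b) * y          # y~ = e^b  y*
    t_tilde = math.exp(2.0 * b) * t    # t~ = e^2b t*
    assert abs(eta(y_tilde, t_tilde) - eta(y, t)) < 1e-12
```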
Let us rewrite the differential equation, boundary, and initial conditions (∂u∗ /∂t∗ =
∂²u∗ /∂y∗², u∗ (y∗ , 0) = 0, u∗ (0, t∗ ) = 1, u∗ (∞, t∗ ) = 0) in terms of the similarity variable η∗ .

We first must use the chain rule to get expressions for the derivatives. Applying the general
results just developed, we get

∂u∗ /∂t∗ = (∂η∗ /∂t∗ )(du∗ /dη∗ ) = −(1/2)(y∗ /2) t∗^(−3/2) du∗ /dη∗ = −(η∗ /(2t∗ )) du∗ /dη∗ ,          (6.35)

∂u∗ /∂y∗ = (∂η∗ /∂y∗ )(du∗ /dη∗ ) = (1/(2√t∗ )) du∗ /dη∗ ,          (6.36)

∂²u∗ /∂y∗² = ∂/∂y∗ (∂u∗ /∂y∗ ) = ∂/∂y∗ ((1/(2√t∗ )) du∗ /dη∗ ),          (6.37)

           = (1/(2√t∗ )) ∂/∂y∗ (du∗ /dη∗ ) = (1/(2√t∗ ))(1/(2√t∗ )) d²u∗ /dη∗² = (1/(4t∗ )) d²u∗ /dη∗².          (6.38)


Thus, applying these rules to our governing equation, Eq. (6.10), we recover

−(η∗ /(2t∗ )) du∗ /dη∗ = (1/(4t∗ )) d²u∗ /dη∗²,          (6.39)

d²u∗ /dη∗² + 2η∗ du∗ /dη∗ = 0.          (6.40)

Note our governing equation has a singularity at t∗ = 0. As the factor 1/t∗ appears on both
sides of the equation, we cancel it from both sides, but we shall see that this point is associated
with special behavior of the similarity solution. The important result is that the reduced
equation has dependency on η∗ only. If this did not occur, we could not have a similarity solution.
Now consider the initial and boundary conditions. They transform as follows:

y∗ = 0, =⇒ η∗ = 0, (6.41)
y∗ → ∞, =⇒ η∗ → ∞, (6.42)
t∗ → 0, =⇒ η∗ → ∞. (6.43)

Note that the three important points for t∗ and y∗ collapse into two corresponding points in
η∗ . This is also necessary for the similarity solution to exist. Consequently, our conditions
in η∗ space reduce to

u∗ (0) = 1, surface condition, (6.44)


u∗ (∞) = 0, initial and far-field. (6.45)
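Before solving Eqs. (6.40, 6.44, 6.45) analytically, one can integrate them numerically. A minimal sketch (classical fourth-order Runge-Kutta; the initial slope −2/√π is taken from the closed form obtained below, so this is a consistency check rather than an independent solve):

```python
import math

def solve_similarity_ode(eta_max=5.0, steps=5000):
    """Integrate u'' + 2 eta u' = 0 as the system u' = v, v' = -2 eta v,
    with u(0) = 1 and slope v(0) = -2/sqrt(pi), by classical RK4.
    Returns u at eta = eta_max."""
    h = eta_max / steps
    u, v, eta = 1.0, -2.0 / math.sqrt(math.pi), 0.0
    f = lambda eta, u, v: (v, -2.0 * eta * v)
    for _ in range(steps):
        k1 = f(eta, u, v)
        k2 = f(eta + h/2, u + h/2*k1[0], v + h/2*k1[1])
        k3 = f(eta + h/2, u + h/2*k2[0], v + h/2*k2[1])
        k4 = f(eta + h, u + h*k3[0], v + h*k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        eta += h
    return u

# The far-field condition u(infinity) = 0 is well approximated at eta = 5,
# and the result agrees with erfc(5), the closed form derived below
u_far = solve_similarity_ode()
assert abs(u_far) < 1e-6
assert abs(u_far - math.erfc(5.0)) < 1e-8
```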

We solve the second order differential equation by the method of reduction of order, noticing
that it is really two first order equations in disguise:

d/dη∗ (du∗ /dη∗ ) + 2η∗ (du∗ /dη∗ ) = 0.          (6.46)

Multiply by the integrating factor e^(η∗²) to get

e^(η∗²) d/dη∗ (du∗ /dη∗ ) + 2η∗ e^(η∗²) (du∗ /dη∗ ) = 0,          (6.47)

d/dη∗ (e^(η∗²) du∗ /dη∗ ) = 0,          (6.48)

e^(η∗²) du∗ /dη∗ = A,          (6.49)

du∗ /dη∗ = A e^(−η∗²),          (6.50)

u∗ = B + A ∫₀^η∗ e^(−s²) ds.          (6.51)


Now applying the condition u∗ = 1 at η∗ = 0 gives

1 = B + A ∫₀^0 e^(−s²) ds = B,          (6.52)

B = 1.          (6.53)

So we have

u∗ = 1 + A ∫₀^η∗ e^(−s²) ds.          (6.54)

Now applying the condition u∗ = 0 as η∗ → ∞, we get

0 = 1 + A ∫₀^∞ e^(−s²) ds,          (6.55)

0 = 1 + A √π/2,          (6.56)

A = −2/√π.          (6.57)

Though not immediately obvious, it can be shown by a simple variable transformation to
a polar coordinate system that the above integral from 0 to ∞ has a finite value of √π/2.
It is not surprising that this integral has finite value over the semi-infinite domain as the
integrand is bounded between zero and one, and decays rapidly to zero as s → ∞.
Let us divert to evaluate this integral. To do so, consider the related integral I₂ defined
over the first quadrant in s-t space, where

I₂ ≡ ∫₀^∞ ∫₀^∞ e^(−s²−t²) ds dt,          (6.58)

   = ∫₀^∞ e^(−t²) ( ∫₀^∞ e^(−s²) ds ) dt,          (6.59)

   = ( ∫₀^∞ e^(−s²) ds )( ∫₀^∞ e^(−t²) dt ),          (6.60)

   = ( ∫₀^∞ e^(−s²) ds )²,          (6.61)

√I₂ = ∫₀^∞ e^(−s²) ds.          (6.62)

Now transform to polar coordinates with s = r cos θ, t = r sin θ. With this, we can easily
show ds dt = r dr dθ and s² + t² = r². Substituting this into Eq. (6.58) and changing the


limits of integration appropriately, we get

I₂ = ∫₀^(π/2) ∫₀^∞ e^(−r²) r dr dθ,          (6.63)

   = ∫₀^(π/2) [−(1/2) e^(−r²)]₀^∞ dθ,          (6.64)

   = ∫₀^(π/2) (1/2) dθ,          (6.65)

   = π/4.          (6.66)

Comparing with Eq. (6.62), we deduce

√I₂ = ∫₀^∞ e^(−s²) ds = √π/2.          (6.67)
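The value √π/2 can also be checked by direct quadrature; the tail beyond s = 10 contributes less than e^(−100), so truncating the semi-infinite interval there is harmless. A minimal sketch (the upper limit and step count are illustrative choices):

```python
import math

def gauss_integral(upper=10.0, steps=20000):
    """Composite midpoint approximation of integral_0^upper exp(-s^2) ds."""
    h = upper / steps
    return sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps)) * h

# Compare with the exact value sqrt(pi)/2, Eq. (6.67)
assert abs(gauss_integral() - math.sqrt(math.pi) / 2.0) < 1e-6
```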
With this verified, we can return to our original analysis and say that the velocity profile
can be written as

u∗ (η∗ ) = 1 − (2/√π) ∫₀^η∗ e^(−s²) ds,          (6.68)

u∗ (y∗ , t∗ ) = 1 − (2/√π) ∫₀^(y∗/(2√t∗)) e^(−s²) ds,          (6.69)

u∗ (y∗ , t∗ ) = erfc(y∗ /(2√t∗ )).          (6.70)
In the last form above, we have introduced the so-called error function complement, “erfc.”
Plots for the velocity profile in terms of both η∗ and y∗ , t∗ are given in Figure 6.2. We see
that in similarity space, the solution is a single curve in which u∗ has a value of unity at
η∗ = 0 and has nearly relaxed to zero when η∗ = 1. In dimensionless physical space, we see
that at early time, there is a thin momentum layer near the surface. At later time more
momentum is present in the fluid. We can say in fact that momentum is diffusing into the
fluid.
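The closed form, Eq. (6.70), is easy to verify numerically, since the Python standard library provides erfc. A minimal sketch checking the conditions of Eq. (6.11) and, by central differences, the governing equation (6.10):

```python
import math

def u_star(y, t):
    """Stokes' first problem similarity solution, Eq. (6.70)."""
    return math.erfc(y / (2.0 * math.sqrt(t)))

# Boundary/initial behavior, Eq. (6.11)
assert u_star(0.0, 1.0) == 1.0            # erfc(0) = 1: plate condition
assert u_star(50.0, 1.0) < 1e-12          # far field at rest
assert u_star(1.0, 1e-8) < 1e-12          # early time: fluid still at rest

# Central-difference residual of u_t = u_yy at an interior point
h = 1e-4
y, t = 1.0, 1.0
u_t = (u_star(y, t + h) - u_star(y, t - h)) / (2.0 * h)
u_yy = (u_star(y + h, t) - 2.0 * u_star(y, t) + u_star(y - h, t)) / h**2
assert abs(u_t - u_yy) < 1e-5
```

The interior point (y, t) = (1, 1) and step h are illustrative; any interior point gives a comparably small residual.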
We define the momentum diffusion length as the length for which significant momentum
has diffused into the fluid. This is well estimated by taking η∗ = 1. In terms of physical
variables, we have

y∗ /(2√t∗ ) = 1,          (6.71)

y∗ = 2√t∗ ,          (6.72)

yU/ν = 2√(tU²/ν),          (6.73)

y = (2ν/U)√(U²t/ν),          (6.74)

y = 2√(νt).          (6.75)


[Figure omitted: u∗ versus η∗ , and u∗ versus y∗ at t∗ = 1, 2, 3.]

Figure 6.2: Sketch of velocity field solution for Stokes’ first problem in both similarity
coordinate η∗ and primitive coordinates y∗ , t∗ .

We can in fact define this as a boundary layer thickness. That is to say the momentum
boundary layer thickness in Stokes’ first problem grows at a rate proportional to the square
root of momentum diffusivity and time. This class of result is a hallmark of all diffusion
processes, be it mass, momentum, or energy.
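As a concrete illustration (assuming a kinematic viscosity ν ≈ 1.5 × 10⁻⁵ m²/s, a representative room-temperature value for air; the numbers are for orientation only), Eq. (6.75) gives the layer thickness directly, and the square-root growth is plain:

```python
import math

def diffusion_length(nu, t):
    """Momentum diffusion length y = 2 sqrt(nu t), Eq. (6.75)."""
    return 2.0 * math.sqrt(nu * t)

nu_air = 1.5e-5  # m^2/s, assumed representative value for air

# After 1 s the layer is only a few millimeters thick ...
assert abs(diffusion_length(nu_air, 1.0) - 7.75e-3) < 1e-4
# ... and growth is diffusive: 100x the time gives only 10x the thickness
assert abs(diffusion_length(nu_air, 100.0) / diffusion_length(nu_air, 1.0) - 10.0) < 1e-12
```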

6.2 Taylor-Sedov solution


Here, we will study the Taylor⁴-Sedov⁵ blast wave solution. We will follow most closely two
papers of Taylor⁶,⁷ from 1950. Taylor notes that the first of these was actually written in
1941, but was classified. Sedov’s complementary study⁸ is also of interest. One may also
consult other articles by Taylor for background.⁹,¹⁰ We shall follow Taylor’s analysis and
obtain what is known as self-similar solutions. Though there are more general approaches
⁴Geoffrey Ingram Taylor, 1886-1975, English physicist.
⁵Leonid Ivanovitch Sedov, 1907-1999, Soviet physicist.
⁶Taylor, G. I., 1950, “The Formation of a Blast Wave by a Very Intense Explosion. I. Theoretical Discussion,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 201(1065): 159-174.
⁷Taylor, G. I., 1950, “The Formation of a Blast Wave by a Very Intense Explosion. II. The Atomic Explosion of 1945,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 201(1065): 175-186.
⁸Sedov, L. I., 1946, “Rasprostraneniya Sil’nykh Vzryvnykh Voln,” Prikladnaya Matematika i Mekhanika, 10: 241-250.
⁹Taylor, G. I., 1950, “The Dynamics of the Combustion Products Behind Plane and Spherical Detonation Fronts in Explosives,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 200(1061): 235-247.
¹⁰Taylor, G. I., 1946, “The Air Wave Surrounding an Expanding Sphere,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 186(1006): 273-292.


which may in fact expose more details of how self-similar solutions are obtained, we will
confine ourselves to Taylor’s approach and use his notation.
The self-similar solution will be enabled by studying the equations for the motion of
a diffusion-free ideal compressible fluid in what is known as the strong shock limit for a
spherical shock wave. Now, a shock wave will raise both the internal and kinetic energy
of the ambient fluid into which it is propagating. We would like to consider a scenario in
which the total energy, kinetic and internal, enclosed by the strong spherical shock wave is
a constant. The ambient fluid, a calorically perfect ideal gas with gas constant R and ratio
of specific heats γ, is initially at rest, and a point source of energy, E, exists at r = 0. For
t > 0, this point source of energy is distributed to the mechanical and thermal energy of the
surrounding fluid.
Let us follow now Taylor’s analysis from his 1950 Part I “Theoretical Discussion” paper.
We shall

• write the governing inert one-dimensional unsteady equations in spherical coordinates,

• reduce the partial differential equations in r and t to ordinary differential equations in


an appropriate similarity variable,

• solve the ordinary differential equations numerically, and

• show our transformation guarantees constant total energy in the region r ∈ [0, R(t)],
where R(t) is the locus of the moving shock wave.

We shall also refer to specific equations in Taylor’s first 1950 paper.

6.2.1 Governing equations


The non-conservative formulation of the governing equations is as follows:

∂ρ/∂t + u ∂ρ/∂r + ρ ∂u/∂r = −2ρu/r,              mass conservation          (6.76)

∂u/∂t + u ∂u/∂r + (1/ρ) ∂P/∂r = 0,               momentum conservation      (6.77)

∂e/∂t + u ∂e/∂r − (P/ρ²)(∂ρ/∂t + u ∂ρ/∂r) = 0,   energy conservation        (6.78)

e = (1/(γ − 1)) (P/ρ),                           caloric state equation     (6.79)

P = ρRT.                                         thermal state equation     (6.80)

The conservative version, not shown here, can also be written in the form of Eq. (4.1). The
conservative form induces a set of shock jump equations in the form of Eq. (4.10). Taking
the subscript s to denote the shocked state and the subscript o to denote the unshocked
state, the shock velocity to be dR/dt, and the shock Mach number Ms = (dR/dt)/√(γPo /ρo ),


their solution gives the jump over a shock discontinuity, the so-called Rankine¹¹-Hugoniot¹²
jump conditions:

ρs /ρo = ((γ + 1)/(γ − 1)) (1 + 2/((γ − 1)Ms²))^(−1),          (6.81)

Ps /Po = (2γ/(γ + 1)) Ms² − (γ − 1)/(γ + 1),          (6.82)

dR/dt = ((γ + 1)/4) us + √( γPo /ρo + ((γ + 1)/4)² us² ).          (6.83)
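A short sketch evaluating Eqs. (6.81, 6.82) (assuming γ = 1.4, representative of a diatomic gas) checks the two limits worth remembering: at Ms = 1 both ratios reduce to unity, and as Ms → ∞ the density ratio saturates at (γ + 1)/(γ − 1) = 6 while the pressure ratio grows without bound:

```python
gamma = 1.4  # assumed ratio of specific heats (air)

def density_ratio(Ms, g=gamma):
    """rho_s / rho_o, Eq. (6.81)."""
    return (g + 1.0) / (g - 1.0) / (1.0 + 2.0 / ((g - 1.0) * Ms**2))

def pressure_ratio(Ms, g=gamma):
    """P_s / P_o, Eq. (6.82)."""
    return 2.0 * g / (g + 1.0) * Ms**2 - (g - 1.0) / (g + 1.0)

# Ms = 1: no jump
assert abs(density_ratio(1.0) - 1.0) < 1e-12
assert abs(pressure_ratio(1.0) - 1.0) < 1e-12

# Strong shock limit: rho_s/rho_o -> (gamma+1)/(gamma-1) = 6 for gamma = 1.4
assert abs(density_ratio(1e6) - 6.0) < 1e-4

# A moderate shock, Ms = 2
assert abs(pressure_ratio(2.0) - 4.5) < 1e-12
```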

Let us look at the energy equation, Eq. (6.78), in a little more detail. We define the
material derivative as d/dt = ∂/∂t + u ∂/∂r, so Eq. (6.78) can be rewritten as

de/dt − (P/ρ²) dρ/dt = 0.          (6.84)
As an aside, we recall that the specific volume v is defined as v = 1/ρ. Thus, we have
dv/dt = −(1/ρ²) dρ/dt, and the energy equation can be rewritten as de/dt + P dv/dt = 0,
or de/dt = −P dv/dt. In differential form, this is de = −P dv. This says the change in energy
is solely due to reversible work done by a pressure force. We might recall the Gibbs equation
from thermodynamics, de = T ds − P dv, where s is the entropy. For our system, we have
ds = 0; thus, the flow is isentropic, at least behind the shock. It is isentropic because away
from the shock, we have neglected all entropy-producing mechanisms like diffusion.
Let us now substitute the caloric energy equation, Eq. (6.79), into the energy equation,
Eq. (6.84):

(1/(γ − 1)) d/dt (P/ρ) − (P/ρ²) dρ/dt = 0,          (6.85)

−(1/(γ − 1)) (P/ρ²) dρ/dt + (1/(γ − 1)) (1/ρ) dP/dt − (P/ρ²) dρ/dt = 0,          (6.86)

−(P/ρ²) dρ/dt + (1/ρ) dP/dt − (γ − 1) (P/ρ²) dρ/dt = 0,          (6.87)

(1/ρ) dP/dt − γ (P/ρ²) dρ/dt = 0,          (6.88)

dP/dt − γ (P/ρ) dρ/dt = 0,          (6.89)

(1/ρ^γ) dP/dt − γ (P/ρ^(γ+1)) dρ/dt = 0,          (6.90)

d/dt (P/ρ^γ) = 0,          (6.91)
¹¹William John Macquorn Rankine, 1820-1872, Scottish engineer.
¹²Pierre Henri Hugoniot, 1851-1887, French mechanician.


   
∂/∂t (P/ρ^γ) + u ∂/∂r (P/ρ^γ) = 0.          (6.92)

This says that following a fluid particle, P/ρ^γ is a constant. In terms of specific volume, this
says P v^γ = C, which is a well-known isentropic relation for a calorically perfect ideal gas.

6.2.2 Similarity transformation


We shall next make some non-intuitive and non-obvious choices for a transformed coordinate
system and transformed dependent variables. These choices can be systematically studied
with the techniques of group theory, not discussed here.

6.2.2.1 Independent variables


Let us transform the independent variables (r, t) → (η, τ ) with
r
η = , (6.93)
R(t)
τ = t. (6.94)

We will seek solutions such that the dependent variables are functions of η, the distance
relative to the time-dependent shock, only. We will have little need for the transformed time
τ because it is equivalent to the original time t.

6.2.2.2 Dependent variables


Let us also define new dependent variables as

\frac{P}{P_o} = y = R^{-3} f_1(\eta),  (6.95)
\frac{\rho}{\rho_o} = \psi(\eta),  (6.96)
u = R^{-3/2}\phi_1(\eta).  (6.97)

These amount to definitions of a scaled pressure f_1, a scaled density ψ, and a scaled velocity
φ_1, with the assumption that each is a function of η only. Here, P_o and ρ_o are constant
ambient values of pressure and density, respectively.
We also assume the shock velocity to be of the form

U(t) = \frac{dR}{dt} = A R^{-3/2}.  (6.98)
The constant A is to be determined.


6.2.2.3 Derivative transformations


By the chain rule we have

\frac{\partial}{\partial t} = \frac{\partial\eta}{\partial t}\frac{\partial}{\partial\eta} + \frac{\partial\tau}{\partial t}\frac{\partial}{\partial\tau}.  (6.99)

Now, by Eq. (6.93) we get

\frac{\partial\eta}{\partial t} = -\frac{r}{R^2}\frac{dR}{dt},  (6.100)
= -\frac{\eta}{R(t)}\frac{dR}{dt},  (6.101)
= -\frac{\eta}{R} A R^{-3/2},  (6.102)
= -\frac{A\eta}{R^{5/2}}.  (6.103)
From Eq. (6.94) we simply get

\frac{\partial\tau}{\partial t} = 1.  (6.104)
Thus, the chain rule, Eq. (6.99), can be written as

\frac{\partial}{\partial t} = -\frac{A\eta}{R^{5/2}}\frac{\partial}{\partial\eta} + \frac{\partial}{\partial\tau}.  (6.105)

As we are insisting that ∂/∂τ = 0, we get

\frac{\partial}{\partial t} = -\frac{A\eta}{R^{5/2}}\frac{d}{d\eta}.  (6.106)

In the same way, we get

\frac{\partial}{\partial r} = \frac{\partial\eta}{\partial r}\frac{\partial}{\partial\eta} + \underbrace{\frac{\partial\tau}{\partial r}}_{=0}\frac{\partial}{\partial\tau},  (6.107)
= \frac{1}{R}\frac{d}{d\eta}.  (6.108)

6.2.3 Transformed equations


Let us now apply our rules for derivative transformation, Eqs. (6.103,6.108), and our trans-
formed dependent variables, Eqs. (6.95-6.97), to the governing equations.


6.2.3.1 Mass
First, we shall consider the mass equation, Eq. (6.76). We get
-\frac{A\eta}{R^{5/2}}\frac{d}{d\eta}(\rho_o\psi) + R^{-3/2}\phi_1\,\frac{1}{R}\frac{d}{d\eta}(\rho_o\psi) + \rho_o\psi\,\frac{1}{R}\frac{d}{d\eta}\left(R^{-3/2}\phi_1\right) = -\frac{2}{r}\,\rho_o\psi\,R^{-3/2}\phi_1.  (6.109)
Realizing that R(t) = R(τ ) is not a function of η, canceling the common factor of ρo , and
eliminating r with Eq. (6.93), we can write
-\frac{A\eta}{R^{5/2}}\frac{d\psi}{d\eta} + \frac{\phi_1}{R^{5/2}}\frac{d\psi}{d\eta} + \frac{\psi}{R^{5/2}}\frac{d\phi_1}{d\eta} = -\frac{2}{\eta}\frac{\psi\phi_1}{R^{5/2}},  (6.110)

-A\eta\frac{d\psi}{d\eta} + \phi_1\frac{d\psi}{d\eta} + \psi\frac{d\phi_1}{d\eta} = -\frac{2}{\eta}\psi\phi_1,  (6.111)

(-A\eta + \phi_1)\frac{d\psi}{d\eta} + \psi\left(\frac{d\phi_1}{d\eta} + \frac{2}{\eta}\phi_1\right) = 0, \quad \text{mass}.  (6.112)
Equation (6.112) is number 9 in Taylor’s paper, which we will call here Eq. T(9).

6.2.3.2 Linear momentum


Now, consider the linear momentum equation, Eq. (6.77), and apply the same transforma-
tions:
\frac{\partial}{\partial t}\left(R^{-3/2}\phi_1\right) + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3/2}\phi_1\right) + \frac{1}{\rho_o\psi}\frac{\partial}{\partial r}\left(P_o R^{-3} f_1\right) = 0,  (6.113)

R^{-3/2}\frac{\partial\phi_1}{\partial t} - \frac{3}{2}R^{-5/2}\frac{dR}{dt}\phi_1 + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3/2}\phi_1\right) + \frac{1}{\rho_o\psi}\frac{\partial}{\partial r}\left(P_o R^{-3} f_1\right) = 0,  (6.114)

R^{-3/2}\left(-\frac{A\eta}{R^{5/2}}\right)\frac{d\phi_1}{d\eta} - \frac{3}{2}R^{-5/2}\left(AR^{-3/2}\right)\phi_1 + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3/2}\phi_1\right) + \frac{1}{\rho_o\psi}\frac{\partial}{\partial r}\left(P_o R^{-3} f_1\right) = 0,  (6.115)

-\frac{A\eta}{R^4}\frac{d\phi_1}{d\eta} - \frac{3}{2}\frac{A}{R^4}\phi_1 + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3/2}\phi_1\right) + \frac{1}{\rho_o\psi}\frac{\partial}{\partial r}\left(P_o R^{-3} f_1\right) = 0,  (6.116)

-\frac{A\eta}{R^4}\frac{d\phi_1}{d\eta} - \frac{3}{2}\frac{A}{R^4}\phi_1 + R^{-3/2}\phi_1\,\frac{1}{R}\frac{d}{d\eta}\left(R^{-3/2}\phi_1\right) + \frac{1}{\rho_o\psi}\frac{1}{R}\frac{d}{d\eta}\left(P_o R^{-3} f_1\right) = 0,  (6.117)

-\frac{A\eta}{R^4}\frac{d\phi_1}{d\eta} - \frac{3}{2}\frac{A}{R^4}\phi_1 + \frac{\phi_1}{R^4}\frac{d\phi_1}{d\eta} + \frac{P_o}{\rho_o\psi R^4}\frac{df_1}{d\eta} = 0,  (6.118)

-A\eta\frac{d\phi_1}{d\eta} - \frac{3}{2}A\phi_1 + \phi_1\frac{d\phi_1}{d\eta} + \frac{P_o}{\rho_o\psi}\frac{df_1}{d\eta} = 0.  (6.119)


Our final form is


 
-A\left(\frac{3}{2}\phi_1 + \eta\frac{d\phi_1}{d\eta}\right) + \phi_1\frac{d\phi_1}{d\eta} + \frac{P_o}{\rho_o\psi}\frac{df_1}{d\eta} = 0, \quad \text{linear momentum}.  (6.120)
Equation (6.120) is T(7).

6.2.3.3 Energy
Let us now consider the energy equation. It is best to begin with a form in which the
equation of state has already been imposed. So, we will start by expanding Eq. (6.89) in
terms of partial derivatives:
 
\frac{\partial P}{\partial t} + u\frac{\partial P}{\partial r} - \gamma\frac{P}{\rho}\left(\frac{\partial\rho}{\partial t} + u\frac{\partial\rho}{\partial r}\right) = 0,  (6.121)

\frac{\partial}{\partial t}\left(P_o R^{-3} f_1\right) + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(P_o R^{-3} f_1\right) - \gamma\frac{P_o R^{-3} f_1}{\rho_o\psi}\left(\frac{\partial}{\partial t}(\rho_o\psi) + R^{-3/2}\phi_1\frac{\partial}{\partial r}(\rho_o\psi)\right) = 0,  (6.122)

\frac{\partial}{\partial t}\left(R^{-3} f_1\right) + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3} f_1\right) - \gamma\frac{R^{-3} f_1}{\psi}\left(\frac{\partial\psi}{\partial t} + R^{-3/2}\phi_1\frac{\partial\psi}{\partial r}\right) = 0,  (6.123)

R^{-3}\frac{\partial f_1}{\partial t} - 3R^{-4}\frac{dR}{dt} f_1 + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3} f_1\right) - \gamma\frac{R^{-3} f_1}{\psi}\left(\frac{\partial\psi}{\partial t} + R^{-3/2}\phi_1\frac{\partial\psi}{\partial r}\right) = 0,  (6.124)

R^{-3}\left(-\frac{A\eta}{R^{5/2}}\right)\frac{df_1}{d\eta} - 3R^{-4}\left(AR^{-3/2}\right) f_1 + R^{-3/2}\phi_1\frac{\partial}{\partial r}\left(R^{-3} f_1\right) - \gamma\frac{R^{-3} f_1}{\psi}\left(\frac{\partial\psi}{\partial t} + R^{-3/2}\phi_1\frac{\partial\psi}{\partial r}\right) = 0.  (6.125)

Carrying on, we have

-\frac{A\eta}{R^{11/2}}\frac{df_1}{d\eta} - 3\frac{A}{R^{11/2}} f_1 + R^{-3/2}\phi_1\, R^{-3}\frac{1}{R}\frac{df_1}{d\eta} - \gamma\frac{R^{-3} f_1}{\psi}\left(-\frac{A\eta}{R^{5/2}}\frac{d\psi}{d\eta} + R^{-3/2}\phi_1\frac{1}{R}\frac{d\psi}{d\eta}\right) = 0,  (6.126)

-\frac{A\eta}{R^{11/2}}\frac{df_1}{d\eta} - 3\frac{A}{R^{11/2}} f_1 + \frac{\phi_1}{R^{11/2}}\frac{df_1}{d\eta} - \gamma\frac{f_1}{\psi R^{11/2}}\left(-A\eta + \phi_1\right)\frac{d\psi}{d\eta} = 0,  (6.127)

-A\eta\frac{df_1}{d\eta} - 3A f_1 + \phi_1\frac{df_1}{d\eta} - \gamma\frac{f_1}{\psi}\left(-A\eta + \phi_1\right)\frac{d\psi}{d\eta} = 0.  (6.128)


Our final form is


 
A\left(3 f_1 + \eta\frac{df_1}{d\eta}\right) + \gamma\frac{f_1}{\psi}\left(-A\eta + \phi_1\right)\frac{d\psi}{d\eta} - \phi_1\frac{df_1}{d\eta} = 0, \quad \text{energy}.  (6.129)

Equation (6.129) is T(11), correcting for a typographical error in which γ appeared as r.

6.2.4 Dimensionless equations


Let us now write our conservation principles in dimensionless form. We take the constant
ambient sound speed co to be defined for our gas as

c_o^2 \equiv \gamma\frac{P_o}{\rho_o}.  (6.130)

Note, we have used our notation for sound speed here; Taylor uses a instead.
Let us also define
f \equiv \left(\frac{c_o}{A}\right)^2 f_1,  (6.131)
\phi \equiv \frac{\phi_1}{A}.  (6.132)
A

6.2.4.1 Mass
With these definitions, the mass equation, Eq. (6.112), becomes
 
dψ dψ dφ 2
−Aη + Aφ +ψ A + Aφ = 0, (6.133)
dη dη dη η
 
dψ dψ dφ 2
−η +φ +ψ + φ = 0, (6.134)
dη dη dη η
 
dψ dφ 2
(φ − η) = −ψ + φ , (6.135)
dη dη η
dφ 2φ
1 dψ dη
+ η
= , mass. (6.136)
ψ dη η−φ

Equation (6.136) is T(9a).

6.2.4.2 Linear momentum


With the same definitions, the momentum equation, Eq. (6.120) becomes
 
-A\left(\frac{3}{2}A\phi + A\eta\frac{d\phi}{d\eta}\right) + A^2\phi\frac{d\phi}{d\eta} + \frac{P_o}{\rho_o\psi}\frac{A^2}{c_o^2}\frac{df}{d\eta} = 0,  (6.137)

-\left(\frac{3}{2}\phi + \eta\frac{d\phi}{d\eta}\right) + \phi\frac{d\phi}{d\eta} + \frac{1}{\gamma}\frac{1}{\psi}\frac{df}{d\eta} = 0,  (6.138)

(\phi - \eta)\frac{d\phi}{d\eta} - \frac{3}{2}\phi + \frac{1}{\gamma\psi}\frac{df}{d\eta} = 0,  (6.139)

(\eta - \phi)\frac{d\phi}{d\eta} = \frac{1}{\gamma\psi}\frac{df}{d\eta} - \frac{3}{2}\phi, \quad \text{momentum}.  (6.140)
Equation (6.140) is T(7a).

6.2.4.3 Energy
The energy equation, Eq. (6.129) becomes
A\left(3\frac{A^2}{c_o^2}f + \eta\frac{A^2}{c_o^2}\frac{df}{d\eta}\right) + \gamma\frac{f}{\psi}\frac{A^2}{c_o^2}\left(-A\eta + A\phi\right)\frac{d\psi}{d\eta} - \frac{A^2}{c_o^2}A\phi\frac{df}{d\eta} = 0,  (6.141)

3f + \eta\frac{df}{d\eta} + \gamma\frac{f}{\psi}\left(-\eta + \phi\right)\frac{d\psi}{d\eta} - \phi\frac{df}{d\eta} = 0,  (6.142)

3f + \eta\frac{df}{d\eta} + \gamma f\left(-\eta + \phi\right)\frac{1}{\psi}\frac{d\psi}{d\eta} - \phi\frac{df}{d\eta} = 0, \quad \text{energy}.  (6.143)
Equation (6.143) is T(11a).

6.2.5 Reduction to nonautonomous form


Let us eliminate dψ/dη and dφ/dη from Eq. (6.143) with use of Eqs. (6.136,6.140).

3f + \eta\frac{df}{d\eta} + \gamma f(-\eta+\phi)\,\frac{\frac{d\phi}{d\eta} + \frac{2\phi}{\eta}}{\eta-\phi} - \phi\frac{df}{d\eta} = 0,  (6.144)

3f + \eta\frac{df}{d\eta} + \gamma f(-\eta+\phi)\,\frac{\frac{\frac{1}{\gamma\psi}\frac{df}{d\eta} - \frac{3}{2}\phi}{\eta-\phi} + \frac{2\phi}{\eta}}{\eta-\phi} - \phi\frac{df}{d\eta} = 0,  (6.145)

3f + (\eta-\phi)\frac{df}{d\eta} - \gamma f\left(\frac{\frac{1}{\gamma\psi}\frac{df}{d\eta} - \frac{3}{2}\phi}{\eta-\phi} + \frac{2\phi}{\eta}\right) = 0,  (6.146)

3f(\eta-\phi) + (\eta-\phi)^2\frac{df}{d\eta} - \gamma f\left(\frac{1}{\gamma\psi}\frac{df}{d\eta} - \frac{3}{2}\phi + \frac{2\phi}{\eta}(\eta-\phi)\right) = 0,  (6.147)

\left((\eta-\phi)^2 - \frac{f}{\psi}\right)\frac{df}{d\eta} - f\left(-3(\eta-\phi) - \frac{3}{2}\gamma\phi + \frac{2\gamma\phi}{\eta}(\eta-\phi)\right) = 0,  (6.148)

\left((\eta-\phi)^2 - \frac{f}{\psi}\right)\frac{df}{d\eta} + f\left(3\eta - 3\phi + \frac{3}{2}\gamma\phi - 2\gamma\phi + \frac{2\gamma\phi^2}{\eta}\right) = 0,  (6.149)

\left((\eta-\phi)^2 - \frac{f}{\psi}\right)\frac{df}{d\eta} + f\left(3\eta - \phi\left(3 + \frac{1}{2}\gamma\right) + \frac{2\gamma\phi^2}{\eta}\right) = 0.  (6.150)


Rearranging, we get

\left((\eta-\phi)^2 - \frac{f}{\psi}\right)\frac{df}{d\eta} = f\left(-3\eta + \phi\left(3 + \frac{1}{2}\gamma\right) - \frac{2\gamma\phi^2}{\eta}\right).  (6.151)

Equation (6.151) is T(14).
We can thus write an explicit nonautonomous ordinary differential equation for the evolution of f in terms of the state variables f, ψ, and φ, as well as the independent variable η:

\frac{df}{d\eta} = \frac{f\left(-3\eta + \phi\left(3 + \frac{1}{2}\gamma\right) - \frac{2\gamma\phi^2}{\eta}\right)}{(\eta-\phi)^2 - \frac{f}{\psi}}.  (6.152)

Eq. (6.152) can be directly substituted into the momentum equation, Eq. (6.140), to get

\frac{d\phi}{d\eta} = \frac{\frac{1}{\gamma\psi}\frac{df}{d\eta} - \frac{3}{2}\phi}{\eta - \phi}.  (6.153)

Then, Eq. (6.153) can be substituted into Eq. (6.136) to get

\frac{d\psi}{d\eta} = \psi\,\frac{\frac{d\phi}{d\eta} + \frac{2\phi}{\eta}}{\eta - \phi}.  (6.154)

Equations (6.152-6.154) form a nonautonomous system of first order differential equations of the form

\frac{df}{d\eta} = g_1(f, \phi, \psi, \eta),  (6.155)
\frac{d\phi}{d\eta} = g_2(f, \phi, \psi, \eta),  (6.156)
\frac{d\psi}{d\eta} = g_3(f, \phi, \psi, \eta).  (6.157)

They can be integrated with standard numerical software. One must of course provide
conditions of all state variables at a particular point. We apply conditions not at η = 0,
but at η = 1, the locus of the shock front. Following Taylor, the conditions are taken from
the Rankine-Hugoniot equations, Eqs. (6.81-6.83), applied in the limit of a strong shock
(Ms → ∞). We omit the details of this analysis. We take the subscript s to denote the
shock state at η = 1. For the density, one finds
ρs γ+1
= , (6.158)
ρo γ−1
ρo ψs γ+1
= , (6.159)
ρo γ−1
γ+1
ψs = ψ(η = 1) = . (6.160)
γ−1


For the pressure, leaving out details, one finds that


\frac{\left(\frac{dR}{dt}\right)^2}{c_o^2} = \frac{\gamma+1}{2\gamma}\frac{P_s}{P_o},  (6.161)

\frac{A^2 R^{-3}}{c_o^2} = \frac{\gamma+1}{2\gamma} R^{-3} f_{1s},  (6.162)

\frac{A^2 R^{-3}}{c_o^2} = \frac{\gamma+1}{2\gamma} R^{-3} f_s\frac{A^2}{c_o^2},  (6.163)

1 = \frac{\gamma+1}{2\gamma} f_s,  (6.164)

f_s = f(\eta=1) = \frac{2\gamma}{\gamma+1}.  (6.165)
For the velocity, leaving out details, one finds
\frac{u_s}{\frac{dR}{dt}} = \frac{2}{\gamma+1},  (6.166)

\frac{R^{-3/2}\phi_{1s}}{A R^{-3/2}} = \frac{2}{\gamma+1},  (6.167)

\frac{R^{-3/2} A\phi_s}{A R^{-3/2}} = \frac{2}{\gamma+1},  (6.168)

\phi_s = \phi(\eta=1) = \frac{2}{\gamma+1}.  (6.169)
Equations (6.160, 6.165, 6.169) form the appropriate set of initial conditions for the integra-
tion of Eqs. (6.152-6.154).
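The integration just described can be sketched with standard numerical software; the fragment below is a minimal sketch using scipy, not part of Taylor's original formulation. The function and variable names, and the inner cutoff η = 0.05, are our own choices.

```python
# Sketch: integrate Eqs. (6.152-6.154) inward from the shock at eta = 1,
# starting from the strong-shock conditions, Eqs. (6.160, 6.165, 6.169).
from scipy.integrate import solve_ivp

gamma = 7.0 / 5.0

def rhs(eta, state):
    f, phi, psi = state
    # Eq. (6.152): explicit nonautonomous ODE for the scaled pressure f
    dfdeta = f * (-3.0 * eta + phi * (3.0 + 0.5 * gamma)
                  - 2.0 * gamma * phi**2 / eta) / ((eta - phi)**2 - f / psi)
    # Eq. (6.153): momentum, for the scaled velocity phi
    dphideta = (dfdeta / (gamma * psi) - 1.5 * phi) / (eta - phi)
    # Eq. (6.154): mass, for the scaled density psi
    dpsideta = psi * (dphideta + 2.0 * phi / eta) / (eta - phi)
    return [dfdeta, dphideta, dpsideta]

# Strong-shock conditions at eta = 1
f_s = 2.0 * gamma / (gamma + 1.0)        # Eq. (6.165)
phi_s = 2.0 / (gamma + 1.0)              # Eq. (6.169)
psi_s = (gamma + 1.0) / (gamma - 1.0)    # Eq. (6.160)

sol = solve_ivp(rhs, (1.0, 0.05), [f_s, phi_s, psi_s],
                rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # scaled pressure, velocity, density near the origin
```

Plotting the three components of `sol.y` against `sol.t` reproduces the trends of Figs. 6.3-6.5: f and φ decrease away from the shock, and ψ collapses rapidly toward the origin.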

6.2.6 Numerical solution


Solutions for f (η), φ(η) and ψ(η) are shown for γ = 7/5 in Figs. 6.3-6.5, respectively. So,
we now have a similarity solution for the scaled variables. We need to relate this to physical
dimensional quantities. Let us assign some initial conditions for t = 0, r > 0; that is, away
from the point source. Take
u(r, 0) = 0, ρ(r, 0) = ρo , P (r, 0) = Po . (6.170)
We also have from Eq. (6.80) that
T(r, 0) = \frac{P_o}{\rho_o R} = T_o.  (6.171)
Using Eq. (6.79), we further have
e(r, 0) = \frac{1}{\gamma-1}\frac{P_o}{\rho_o} = e_o.  (6.172)


Figure 6.3: Scaled pressure f versus similarity variable η for γ = 7/5 in Taylor-Sedov blast wave.

Figure 6.4: Scaled velocity φ versus similarity variable η for γ = 7/5 in Taylor-Sedov blast wave.

Figure 6.5: Scaled density ψ versus similarity variable η for γ = 7/5 in Taylor-Sedov blast wave.


6.2.6.1 Calculation of total energy


Now, as the point source expands, it will generate a strong shock wave. Material which
has not been shocked is oblivious to the presence of the shock. Material which the shock
wave has reached has been influenced by it. It stands to reason from energy conservation
principles that we want the total energy, internal plus kinetic, to be constant in the shocked
domain, r ∈ (0, R(t)], where R(t) is the shock front location.
Let us recall some spherical geometry so this energy conservation principle can be properly
formulated. Consider a thin differential spherical shell of thickness dr located somewhere in
the shocked region: r ∈ (0, R(t)]. The volume of the thin shell is

dV = \underbrace{4\pi r^2}_{\text{surface area}}\,\underbrace{dr}_{\text{thickness}}.  (6.173)

The differential mass dm of this shell is

dm = \rho\, dV,  (6.174)
= 4\pi r^2 \rho\, dr.  (6.175)

Now, recall the mass-specific internal energy is e and the mass-specific kinetic energy is u2 /2.
So, the total differential energy, internal plus kinetic, in the differential shell is
 
dE = \left(e + \frac{1}{2}u^2\right) dm,  (6.176)

= 4\pi\rho\left(e + \frac{1}{2}u^2\right) r^2\, dr.  (6.177)

Now, the total energy E within the shock is the integral through the entire sphere,

E = \int_0^{R(t)} dE = \int_0^{R(t)} 4\pi\rho\left(e + \frac{1}{2}u^2\right) r^2\, dr,  (6.178)

= \int_0^{R(t)} 4\pi\rho\left(\frac{1}{\gamma-1}\frac{P}{\rho} + \frac{1}{2}u^2\right) r^2\, dr,  (6.179)

= \underbrace{\frac{4\pi}{\gamma-1}\int_0^{R(t)} P r^2\, dr}_{\text{thermal energy}} + \underbrace{2\pi\int_0^{R(t)} \rho u^2 r^2\, dr}_{\text{kinetic energy}}.  (6.180)
We introduce variables from our similarity transformations next:
E = \frac{4\pi}{\gamma-1}\int_0^1 P_o R^{-3} f_1\, R^2\eta^2\, R\, d\eta + 2\pi\int_0^1 \rho_o\psi\, R^{-3}\phi_1^2\, R^2\eta^2\, R\, d\eta,  (6.181)

= \frac{4\pi}{\gamma-1} P_o \int_0^1 f_1\eta^2\, d\eta + 2\pi\rho_o\int_0^1 \psi\phi_1^2\eta^2\, d\eta,  (6.182)

= \frac{4\pi}{\gamma-1}\int_0^1 P_o\frac{A^2}{c_o^2} f\eta^2\, d\eta + 2\pi\int_0^1 \rho_o\psi A^2\phi^2\eta^2\, d\eta,  (6.183)

= 4\pi A^2\left(\frac{P_o}{c_o^2(\gamma-1)}\int_0^1 f\eta^2\, d\eta + \frac{\rho_o}{2}\int_0^1 \psi\phi^2\eta^2\, d\eta\right),  (6.184)

= 4\pi A^2\rho_o\underbrace{\left(\frac{1}{\gamma(\gamma-1)}\int_0^1 f\eta^2\, d\eta + \frac{1}{2}\int_0^1 \psi\phi^2\eta^2\, d\eta\right)}_{\text{dependent on }\gamma\text{ only}}.  (6.185)

The term inside the parentheses depends on γ only. So, if we consider air with γ = 7/5,
we can use our knowledge of f(η), ψ(η), and φ(η), which depend only on γ, to calculate
once and for all the value of the integrals. For γ = 7/5, we obtain via numerical quadrature
 
E = 4\pi A^2\rho_o\left(\frac{1}{(7/5)(2/5)}(0.185194) + \frac{1}{2}(0.185168)\right),  (6.186)

= 5.3192\,\rho_o A^2.  (6.187)

Now, from Eqs. (6.95, 6.130, 6.131, 6.187) with γ = 7/5, we get

P = P_o R^{-3} f\frac{A^2}{c_o^2},  (6.188)

= P_o R^{-3} f\frac{\rho_o}{\gamma P_o} A^2,  (6.189)

= \frac{1}{\gamma} R^{-3} f\rho_o A^2,  (6.190)

= \frac{1}{\frac{7}{5}} R^{-3} f\frac{E}{5.3192},  (6.191)

= 0.1343\, R^{-3} E f,  (6.192)

P(r,t) = 0.1343\,\frac{E}{R^3(t)}\, f\!\left(\frac{r}{R(t)}\right).  (6.193)

The peak pressure occurs at η = 1, where r = R, and where

f(\eta = 1) = \frac{2\gamma}{\gamma+1} = \frac{2(1.4)}{1.4+1} = 1.167.  (6.194)

So, at η = 1, where r = R, we have

P = (0.1343)(1.167) R^{-3} E = 0.1567\,\frac{E}{R^3}.  (6.195)

The peak pressure decays at a rate proportional to 1/R3 in the strong shock limit.


Now, from Eqs. (6.97, 6.132, 6.187) we get for u:

u = R^{-3/2} A\phi,  (6.196)

= R^{-3/2}\sqrt{\frac{E}{5.319\rho_o}}\,\phi,  (6.197)

u(r,t) = \sqrt{\frac{E}{5.319\rho_o}}\,\frac{1}{R^{3/2}(t)}\,\phi\!\left(\frac{r}{R(t)}\right).  (6.198)

Let us now explicitly solve for the shock position R(t) and the shock velocity dR/dt. We
have from Eqs. (6.98, 6.187) that

\frac{dR}{dt} = A R^{-3/2},  (6.199)

= \sqrt{\frac{E}{5.319\rho_o}}\,\frac{1}{R^{3/2}(t)},  (6.200)

R^{3/2}\, dR = \sqrt{\frac{E}{5.319\rho_o}}\, dt,  (6.201)

\frac{2}{5} R^{5/2} = \sqrt{\frac{E}{5.319\rho_o}}\, t + C.  (6.202)

Now, because R(0) = 0, we get C = 0, so

\frac{2}{5} R^{5/2} = \sqrt{\frac{E}{5.319\rho_o}}\, t,  (6.203)

t = \frac{2}{5} R^{5/2}\sqrt{5.319\rho_o}\, E^{-1/2},  (6.204)

= 0.9225\, R^{5/2}\rho_o^{1/2} E^{-1/2}.  (6.205)

Equation (6.205) is T(38). Solving for R, we get

R^{5/2} = \frac{1}{0.9225}\, t\,\rho_o^{-1/2} E^{1/2},  (6.206)

R(t) = 1.03279\,\rho_o^{-1/5} E^{1/5} t^{2/5}.  (6.207)

Thus, we have a prediction for the shock location as a function of time t, as well as point
source energy E. If we know the position as a function of time, we can easily get the shock
velocity by direct differentiation:

\frac{dR}{dt} = 0.4131\,\rho_o^{-1/5} E^{1/5} t^{-3/5}.  (6.208)
dt


If we can make a measurement of the blast wave location R at a given known time t, and
we know the ambient density ρo , we can estimate the point source energy E. Let us invert
Eq. (6.207) to solve for E and get
E = \frac{\rho_o R^5}{(1.03279)^5 t^2},  (6.209)

= 0.85102\,\frac{\rho_o R^5}{t^2}.  (6.210)

6.2.6.2 Comparison with experimental data


Now, Taylor’s Part II paper from 1950 gives data for the 19 July 1945 atomic explosion at
the Trinity site in New Mexico. We choose one point from the photographic record which
finds the shock from the blast to be located at R = 185 m when t = 62 ms. Let us assume
the ambient air has a density of ρo = 1.161 kg/m3 . Then, we can estimate the energy of the
device by Eq. (6.210) as

E = 0.85102\,\frac{\left(1.161\ \frac{\text{kg}}{\text{m}^3}\right)(185\ \text{m})^5}{(0.062\ \text{s})^2},  (6.211)

= 55.7\times 10^{12}\ \text{J}.  (6.212)

Now, 1 ton of the high explosive TNT [13] is known to contain 4.25 × 10^9 J of chemical energy.
So, the estimated energy of the Trinity site device in terms of a TNT equivalent is

\text{TNT}_{\text{equivalent}} = \frac{55.7\times 10^{12}\ \text{J}}{4.25\times 10^9\ \frac{\text{J}}{\text{ton}}} = 13.1\times 10^3\ \text{ton}.  (6.213)
In common parlance, the Trinity site device was a 13 kiloton bomb by this simple estimate.
Taylor provides some nuanced corrections to this estimate. Modern estimates are now around
20 kiloton.
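The arithmetic of this yield estimate is simple enough to check directly; a minimal sketch, using only the values quoted above:

```python
# Sketch: reproduce the Trinity yield estimate, Eqs. (6.211-6.213), from a
# single (R, t) data point taken from the photographic record.
rho_o = 1.161        # ambient density, kg/m^3
R = 185.0            # shock location, m
t = 0.062            # time, s

E = 0.85102 * rho_o * R**5 / t**2    # Eq. (6.210), J
tnt_tons = E / 4.25e9                # 1 ton TNT = 4.25e9 J

print(E / 1.0e12)    # ~55.7, i.e. 55.7e12 J
print(tnt_tons)      # ~13100 tons, the 13 kiloton estimate
```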

6.2.7 Contrast with acoustic limit


We saw in Eq. (6.195) that in the expansion associated with a strong shock, the pressure
decays as 1/R3 . Let us see how that compares with the decay of pressure in the limit of a
weak shock.
Let us first rewrite the governing equations. Here, we 1) rewrite Eq. (6.76) in a conserva-
tive form, using the chain rule to absorb the source term inside the derivative, 2) repeat the
linear momentum equation, Eq. (6.77), and 3) re-cast the energy equation for a calorically
perfect ideal gas, Eq. (6.89) in terms of the full partial derivatives:
\frac{\partial\rho}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\rho u\right) = 0,  (6.214)
[13] More specifically, 2,4,6-trinitrotoluene, C6H2(NO2)3CH3, first prepared in 1863.


\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial r} + \frac{1}{\rho}\frac{\partial P}{\partial r} = 0,  (6.215)

\frac{\partial P}{\partial t} + u\frac{\partial P}{\partial r} - \gamma\frac{P}{\rho}\left(\frac{\partial\rho}{\partial t} + u\frac{\partial\rho}{\partial r}\right) = 0.  (6.216)
Now, let us consider the acoustic limit, which corresponds to perturbations of a fluid at
rest. Taking 0 < ǫ ≪ 1, we recast the dependent variables ρ, P , and u as

\rho = \rho_o + \epsilon\rho_1 + \ldots,  (6.217)
P = P_o + \epsilon P_1 + \ldots,  (6.218)
u = \underbrace{u_o}_{=0} + \epsilon u_1 + \ldots.  (6.219)

Here, ρo and Po are taken to be constants. The ambient velocity uo = 0. Strictly speaking,
we should nondimensionalize the equations before we introduce an asymptotic expansion.
However, so doing would not change the essence of the argument to be made.
We next introduce our expansions into the governing equations:
\frac{\partial}{\partial t}(\rho_o + \epsilon\rho_1) + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2(\rho_o + \epsilon\rho_1)(\epsilon u_1)\right) = 0,  (6.220)

\frac{\partial}{\partial t}(\epsilon u_1) + (\epsilon u_1)\frac{\partial}{\partial r}(\epsilon u_1) + \frac{1}{\rho_o + \epsilon\rho_1}\frac{\partial}{\partial r}(P_o + \epsilon P_1) = 0,  (6.221)

\frac{\partial}{\partial t}(P_o + \epsilon P_1) + (\epsilon u_1)\frac{\partial}{\partial r}(P_o + \epsilon P_1) - \gamma\frac{P_o + \epsilon P_1}{\rho_o + \epsilon\rho_1}\left(\frac{\partial}{\partial t}(\rho_o + \epsilon\rho_1) + (\epsilon u_1)\frac{\partial}{\partial r}(\rho_o + \epsilon\rho_1)\right) = 0.  (6.222)
Now, derivatives of constants are all zero, and so at leading order the constant state satisfies
the governing equations. At O(ǫ), the equations reduce to
\frac{\partial\rho_1}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\rho_o u_1\right) = 0,  (6.223)

\frac{\partial u_1}{\partial t} + \frac{1}{\rho_o}\frac{\partial P_1}{\partial r} = 0,  (6.224)

\frac{\partial P_1}{\partial t} - \gamma\frac{P_o}{\rho_o}\frac{\partial\rho_1}{\partial t} = 0.  (6.225)
Now, adopt as before c2o = γPo /ρo , so the energy equation, Eq. (6.225), becomes
\frac{\partial P_1}{\partial t} = c_o^2\frac{\partial\rho_1}{\partial t}.  (6.226)
Now, substitute Eq. (6.226) into the mass equation, Eq. (6.223), to get
\frac{1}{c_o^2}\frac{\partial P_1}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\rho_o u_1\right) = 0.  (6.227)


We take the time derivative of Eq. (6.227) to get


 
1 ∂ 2 P1 ∂ 1 ∂ 2 
+ r ρo u1 = 0, (6.228)
c2o ∂t2 ∂t r 2 ∂r
 
1 ∂ 2 P1 1 ∂ 2 ∂u1
+ 2 r ρo = 0. (6.229)
c2o ∂t2 r ∂r ∂t
We next use the momentum equation, Eq. (6.224), to eliminate ∂u1 /∂t in Eq. (6.229):
  
\frac{1}{c_o^2}\frac{\partial^2 P_1}{\partial t^2} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\rho_o\left(-\frac{1}{\rho_o}\frac{\partial P_1}{\partial r}\right)\right) = 0,  (6.230)

\frac{1}{c_o^2}\frac{\partial^2 P_1}{\partial t^2} - \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial P_1}{\partial r}\right) = 0,  (6.231)

\frac{1}{c_o^2}\frac{\partial^2 P_1}{\partial t^2} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial P_1}{\partial r}\right).  (6.232)
This second-order linear partial differential equation has a well-known solution of the
d’Alembert form:
   
P_1 = \frac{1}{r}\, g\!\left(t - \frac{r}{c_o}\right) + \frac{1}{r}\, h\!\left(t + \frac{r}{c_o}\right).  (6.233)
Here, g and h are arbitrary functions which are chosen to match the initial conditions. Let
us check this solution for g; the procedure can easily be repeated for h.
If P1 = (1/r)g(t − r/co), then
 
\frac{\partial P_1}{\partial t} = \frac{1}{r}\, g'\!\left(t - \frac{r}{c_o}\right),  (6.234)

\frac{\partial^2 P_1}{\partial t^2} = \frac{1}{r}\, g''\!\left(t - \frac{r}{c_o}\right),  (6.235)

and

\frac{\partial P_1}{\partial r} = -\frac{1}{c_o r}\, g'\!\left(t - \frac{r}{c_o}\right) - \frac{1}{r^2}\, g\!\left(t - \frac{r}{c_o}\right).  (6.236)
With these results, let us substitute into Eq. (6.232) to see if it is satisfied:
       
\frac{1}{c_o^2}\frac{1}{r}\, g''\!\left(t - \frac{r}{c_o}\right) = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\left(-\frac{1}{c_o r}\, g'\!\left(t - \frac{r}{c_o}\right) - \frac{1}{r^2}\, g\!\left(t - \frac{r}{c_o}\right)\right)\right),  (6.237)

= -\frac{1}{r^2}\frac{\partial}{\partial r}\left(\frac{r}{c_o}\, g'\!\left(t - \frac{r}{c_o}\right) + g\!\left(t - \frac{r}{c_o}\right)\right),  (6.238)

= -\frac{1}{r^2}\left(-\frac{r}{c_o^2}\, g''\!\left(t - \frac{r}{c_o}\right) + \frac{1}{c_o}\, g'\!\left(t - \frac{r}{c_o}\right) - \frac{1}{c_o}\, g'\!\left(t - \frac{r}{c_o}\right)\right),  (6.239)

= \frac{1}{r^2}\frac{r}{c_o^2}\, g''\!\left(t - \frac{r}{c_o}\right),  (6.240)

= \frac{1}{c_o^2}\frac{1}{r}\, g''\!\left(t - \frac{r}{c_o}\right).  (6.241)


Indeed, our form of P1 (r, t) satisfies the governing partial differential equation. Moreover,
we can see by inspection of Eq. (6.233) that the pressure decays as 1/r in the limit of
acoustic disturbances. This is a much slower rate of decay than for the blast wave, which
goes as the inverse cube of radius.
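The verification of Eqs. (6.237-6.241) can also be delegated to a computer algebra system; a small sketch with sympy, in which g remains an arbitrary function:

```python
# Sketch: verify symbolically that P1 = g(t - r/c_o)/r satisfies the
# spherically symmetric wave equation, Eq. (6.232), for arbitrary g.
import sympy as sp

r, t, c_o = sp.symbols('r t c_o', positive=True)
g = sp.Function('g')

P1 = g(t - r / c_o) / r

lhs = sp.diff(P1, t, 2) / c_o**2                   # (1/c_o^2) d^2 P1 / dt^2
rhs = sp.diff(r**2 * sp.diff(P1, r), r) / r**2     # (1/r^2) d/dr (r^2 dP1/dr)

print(sp.simplify(lhs - rhs))   # 0, so Eq. (6.232) is satisfied
```

The same check with h(t + r/c_o) verifies the incoming-wave term.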

Problems



Chapter 7

Monoscale and multiscale features

Let us examine some simple model linear partial differential equations to explore the notion
of scales. We consider a model system motivated by combustion. The first example will
be "monoscale" in that a single state variable will be driven by a single source term. The
second example will be "multiscale" in that more than one state variable will be driven by
a more complicated linear source term, inducing evolution on more than one scale. Such
multiscale effects are endemic in nature and render the computational solution of the
associated mathematical models difficult. The discussion is drawn from Powers.[1]

7.1 Monoscale problem


Consider the following linear advection-reaction-diffusion problem motivated by combustion:

\frac{\partial}{\partial t} Y(x,t) + u\frac{\partial}{\partial x} Y(x,t) = D\frac{\partial^2}{\partial x^2} Y(x,t) - a(Y(x,t) - Y_{eq}),  (7.1)

Y(x,0) = Y_o, \quad Y(0,t) = Y_o, \quad \frac{\partial Y}{\partial x}(\infty,t) \to 0,  (7.2)

where the independent variables are time t > 0 and distance x ∈ (0, ∞). Here, Y(x,t) > 0 is
a scalar that can be loosely considered to be a mass fraction, u > 0 is a constant advective
wave speed, D > 0 is a constant diffusion coefficient, a > 0 is the chemical consumption rate
constant, Y_o > 0 is a constant, as is Y_eq > 0. We note that Y(x,t) = Y_eq is a solution iff
Y_o = Y_eq. For Y_o ≠ Y_eq, we may expect a boundary layer in which Y adjusts from its value
at x = 0 to Y_eq, the equilibrium value.

[1] J. M. Powers, Combustion Thermodynamics and Dynamics, Cambridge University Press, New York, 2016.


7.1.1 Spatially homogeneous solution


The spatially homogeneous version of Eqs. (7.1-7.2) is

\frac{dY(t)}{dt} = -a(Y(t) - Y_{eq}), \quad Y|_{t=0} = Y_o,  (7.3)

that has solution

Y(t) = Y_{eq} + (Y_o - Y_{eq})\, e^{-at}.  (7.4)
The time scale τ over which Y evolves is

τ = 1/a. (7.5)

This time scale serves as an upper bound for the required time step to capture the dynamics
in a numerical simulation. Because there is only one dependent variable in this problem,
the temporal spectrum contains only one time scale. Consequently, this formulation of the
system is not temporally stiff.

Example 7.1
For a spatially homogeneous solution, plot the solution Y (t) to Eq. (7.3) if a = 108 s−1 , Yo = 0.1,
and Yeq = 0.001.

For these parameters, the solution from Eq. (7.4) is


Y(t) = 0.001 + 0.099\, e^{-(10^8\ \text{s}^{-1}) t}.  (7.6)

The time scale of relaxation is given by Eq. (7.5) and is

\tau = 1/a = 1/(10^8\ \text{s}^{-1}) = 10^{-8}\ \text{s}.  (7.7)

A plot of Y(t) is given in Fig. 7.1. It is seen that for early time, t ≪ τ, Y is near Y_o. Significant
relaxation of Y occurs when t ≈ τ. For t ≫ τ, we see Y → Y_eq. The plot is presented
on a log-log scale that better highlights the dynamics. In particular, when examined over orders of
magnitude, the reaction event is seen in perspective as a sharp change from one state to another. Reaction
dynamics are typically characterized by a near constant, "frozen" state, seemingly in equilibrium.
This pseudo-equilibrium is punctuated by a reaction event, during which the system relaxes to a final
true equilibrium. The notion of "punctuated equilibrium" is also well known in modern evolutionary
biology,[2] usually for far longer time scale events, and has an analog with our chemical reaction dynamics.

[2] N. Eldredge and S. J. Gould, 1972, Punctuated equilibria: an alternative to phyletic gradualism, in Models in Paleobiology, T. J. M. Schopf, ed., Freeman-Cooper, San Francisco, pp. 82-115.



Figure 7.1: Mass fraction Y versus time t for spatially homogeneous problem with simple one-step linear kinetics; τ = 1/a = 10⁻⁸ s.
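The relaxation behavior shown in Fig. 7.1 can be tabulated directly; a minimal sketch evaluating Eq. (7.4) with the parameters of Example 7.1, at times below, at, and above τ (the helper name Y is our own):

```python
# Sketch: evaluate the spatially homogeneous solution, Eq. (7.4), with the
# parameters of Example 7.1.
import math

a, Y_o, Y_eq = 1.0e8, 0.1, 0.001     # s^-1, -, -
tau = 1.0 / a                        # Eq. (7.5): 1e-8 s

def Y(t):
    return Y_eq + (Y_o - Y_eq) * math.exp(-a * t)

for t in (0.01 * tau, tau, 100.0 * tau):
    print(t, Y(t))   # near Y_o, mid-relaxation, near Y_eq
```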

7.1.2 Steady solution


A simple means to determine the relevant length scales, and consequently, an upper bound for
the required spatial grid resolution, is to obtain the steady structure Y (x), that is governed
by the time-independent version of Eqs. (7.1):
u\frac{dY(x)}{dx} = D\frac{d^2 Y(x)}{dx^2} - a(Y(x) - Y_{eq}), \quad Y|_{x=0} = Y_o, \quad \left.\frac{dY}{dx}\right|_{x\to\infty} \to 0.  (7.8)

Assuming solutions of the form Y(x) = Y_{eq} + C e^{rx}, we rewrite Eq. (7.8) as

u C r e^{rx} = D C r^2 e^{rx} - a C e^{rx},  (7.9)

and through simplification are led to a characteristic polynomial of

u r = D r^2 - a,  (7.10)

that has roots

r = \frac{u}{2D}\left(1 \pm \sqrt{1 + \frac{4aD}{u^2}}\right).  (7.11)

Taking r_1 to denote the "plus" root, for which r_1 > 0, and r_2 to denote the "minus" root,
for which r_2 < 0, the two solutions can be linearly combined to take the form

Y(x) = Y_{eq} + C_1 e^{r_1 x} + C_2 e^{r_2 x},  (7.12)

where C_1 and C_2 are constants. Taking the spatial derivative of Eq. (7.12), we get

\frac{dY}{dx} = C_1 r_1 e^{r_1 x} + C_2 r_2 e^{r_2 x}.  (7.13)


In the limit of large positive x, the boundary condition at infinity in Eq. (7.8) requires the
derivative to vanish, giving

\lim_{x\to\infty}\frac{dY}{dx} = 0 = \lim_{x\to\infty}\left(C_1 r_1 e^{r_1 x} + C_2 r_2 e^{r_2 x}\right).  (7.14)

Because r_1 > 0, we must insist that C_1 = 0. Then, enforcing that Y(0) = Y_o, we find the
solution of Eq. (7.8) is

Y(x) = Y_{eq} + (Y_o - Y_{eq})\, e^{r_2 x},  (7.15)
where

r_2 = \frac{u}{2D}\left(1 - \sqrt{1 + \frac{4aD}{u^2}}\right).  (7.16)

Hence, there is one length scale in the system, ℓ ≡ 1/|r_2|; this formulation of the system is
not spatially stiff. By examining Eq. (7.16) in the limit aD/u^2 ≫ 1, one finds that

r_2 \approx -\frac{u}{2D}\sqrt{\frac{4aD}{u^2}} = -\sqrt{\frac{a}{D}}.  (7.17)

Thus solving for the length scale ℓ in this limit, we get


\ell = \frac{1}{|r_2|} \approx \sqrt{\frac{D}{a}} = \sqrt{D\tau},  (7.18)

where τ = 1/a is the time scale from spatially homogeneous reaction, Eq. (7.5). So, this
length scale ℓ reflects the inherent physics of coupled advection-reaction-diffusion. In the
limit of aD/u2 ≪ 1, one finds r2 → 0, ℓ → ∞, and Y (x) → Yo, a constant.

Example 7.2
For a steady solution, plot Y (x) if a = 108 s−1 , u = 102 cm/s, D = 101 cm2 /s, Yo = 10−1 , and
Yeq = 10−3 .

For this system, we have from Eq. (7.16) that


r_2 = \frac{u}{2D}\left(1 - \sqrt{1 + \frac{4aD}{u^2}}\right),  (7.19)

= \frac{10^2\ \frac{\text{cm}}{\text{s}}}{2\left(10^1\ \frac{\text{cm}^2}{\text{s}}\right)}\left(1 - \sqrt{1 + \frac{4\left(10^8\ \text{s}^{-1}\right)\left(10^1\ \frac{\text{cm}^2}{\text{s}}\right)}{\left(10^2\ \frac{\text{cm}}{\text{s}}\right)^2}}\right) = -3.2\times 10^3\ \text{cm}^{-1}.  (7.20)

Because aD/u^2 = 10^5 ≫ 1, r_2 is well estimated by Eq. (7.17):

r_2 \approx -\sqrt{\frac{a}{D}} = -\sqrt{\frac{10^8\ \text{s}^{-1}}{10^1\ \frac{\text{cm}^2}{\text{s}}}} = -3.2\times 10^3\ \text{cm}^{-1}.  (7.21)



Figure 7.2: Mass fraction versus distance for steady advection-reaction-diffusion problem with simple one-step linear kinetics; ℓ = √(Dτ) = √(D/a) = 3.2 × 10⁻⁴ cm.

Then from Eq. (7.15), the solution is


Y(x) = Y_{eq} + (Y_o - Y_{eq})\, e^{r_2 x} = 0.001 + 0.099\, e^{-(3.2\times 10^3\ \text{cm}^{-1}) x}.  (7.22)

The length scale of reaction is estimated by Eq. (7.18):

\ell = \frac{1}{|r_2|} \approx \sqrt{\frac{D}{a}} = \sqrt{D\tau} = \sqrt{\left(10^1\ \frac{\text{cm}^2}{\text{s}}\right)\left(10^{-8}\ \text{s}\right)} = 3.2\times 10^{-4}\ \text{cm}.  (7.23)
A plot of Y (x) is given in Fig. 7.2.
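The numbers of this example are easy to verify; a minimal sketch evaluating the exact and approximate roots (the variable names are our own):

```python
# Sketch: evaluate r2 and the reaction-diffusion length scale for the
# parameters of Example 7.2, checking Eqs. (7.20), (7.21), and (7.23).
import math

a, u, D = 1.0e8, 1.0e2, 1.0e1   # s^-1, cm/s, cm^2/s

r2_exact = (u / (2.0 * D)) * (1.0 - math.sqrt(1.0 + 4.0 * a * D / u**2))  # Eq. (7.16)
r2_approx = -math.sqrt(a / D)                                             # Eq. (7.17)
ell = 1.0 / abs(r2_exact)                                                 # Eq. (7.18)

print(r2_exact)    # ~ -3.2e3 cm^-1
print(r2_approx)   # ~ -3.2e3 cm^-1, the aD/u^2 >> 1 estimate
print(ell)         # ~ 3.2e-4 cm
```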

7.1.3 Spatio-temporal solution


Now, for Eq. (7.1), it is possible to find a simple analytic expression for the continuous
spectrum of time scales τ as a function of a particular linearly independent Fourier mode’s
wavenumber k̂. A Fourier mode with wavenumber k̂ has wavelength λ = 2π/k̂. Assume a
solution of the form
Y (x, t) = Yeq + B(t)eik̂x , (7.24)
where B(t) is the time-dependent amplitude of the chosen mode. Recall that eik̂x = cos k̂x +
i sin k̂x. We thus see spatial oscillations are built into our assumed functional form for Y .
The fact that our chosen form also contains an imaginary part is inconsequential. It does
simplify some of the notation, and one can always confine attention to the real part of the
solution.
For this problem that considers a single Fourier mode, it does not make sense to impose
the initial condition of Eq. (7.2). Substituting Eq. (7.24) into Eq. (7.1) gives
\frac{dB}{dt}\, e^{i\hat{k}x} + i\hat{k} u B e^{i\hat{k}x} = -D\hat{k}^2 B e^{i\hat{k}x} - a B e^{i\hat{k}x},  (7.25)

\frac{dB}{dt} + i\hat{k} u B = -D\hat{k}^2 B - a B.  (7.26)


This takes the form


\frac{dB(t)}{dt} = -\beta B(t), \quad B(0) = B_o,  (7.27)

where

\beta = a\left(1 + \frac{D\hat{k}^2}{a} + \frac{i\hat{k}u}{a}\right),  (7.28)

and we have imposed B_o as an initial value. This has solution

B(t) = B_o\, e^{-\beta t}.  (7.29)

The complete solution is easily shown to be


Y(x,t) = Y_{eq} + B_o\, e^{i\hat{k}(x-ut) - D\hat{k}^2 t - at}.  (7.30)

The continuous time scale spectrum for amplitude growth or decay is given by
\tau = \frac{1}{|\text{Re}(\beta)|} = \frac{1}{a\left(1 + \frac{D\hat{k}^2}{a}\right)}, \quad 0 < \hat{k} \in \mathbb{R}.  (7.31)

From Eq. (7.31), it is clear that for D k̂ 2 /a ≪ 1, i.e. for sufficiently small wavenumbers, the
time scales of amplitude growth or decay will be dominated by reaction:

\lim_{\hat{k}\to 0} \tau = 1/a.  (7.32)

However, for D k̂ 2 /a ≫ 1, i.e. for sufficiently large wavenumbers or small wavelengths, the
amplitude growth/decay time scales are dominated by diffusion:
\lim_{\hat{k}\to\infty} \tau = \frac{1}{D\hat{k}^2} = \frac{1}{D}\left(\frac{\lambda}{2\pi}\right)^2.  (7.33)
From Eq. (7.31), we see that a balance between reaction and diffusion exists for \hat{k} = \sqrt{a/D}.
In terms of wavelength, and recalling Eq. (7.18), we see the balance at

\frac{\lambda}{2\pi} = \frac{1}{\hat{k}} = \sqrt{\frac{D}{a}} = \sqrt{D\tau} = \ell,  (7.34)

where ℓ = 1/\hat{k} is proportional to the wavelength.


The oscillatory behavior is of lesser importance. The continuous time scale spectrum for
the oscillatory mode, τ_O, is given by

\tau_O = 1/|\text{Im}(\beta)| = 1/(\hat{k}u).  (7.35)

As k̂ → 0, τ_O → ∞. While τ_O → 0 as k̂ → ∞, it approaches zero at a rate ∼ 1/k̂, in contrast
to the more demanding time scale of diffusion that approaches zero at a faster rate ∼ 1/k̂².


Figure 7.3: Time scale spectrum versus length scale for the simple advection-reaction-diffusion model; the reaction limit τ = 1/a = 10⁻⁸ s and the critical length ℓ = √(Dτ) = 3.2 × 10⁻⁴ cm are marked.

Thus, it is clear that advection does not play a role in determining the limiting values of the
time scale spectrum; reaction and diffusion are the major players. Lastly, it is easy to show
in the absence of diffusion, that the length scale where reaction effects balance advection
effects is found at
ℓ = u/a = uτ, (7.36)
where τ = 1/a is the time scale from spatially homogeneous chemistry.

Example 7.3
Examine the behavior of the time scales as a function of the length scales for the linear advective-
reactive-diffusive system characterized by a = 108 1/s, D = 101 cm2 /s, u = 102 cm/s.

These values are loosely motivated by values for gas phase kinetics of physical systems. For these
values, we find the estimate from Eq. (7.18) for the length scale where reaction balances diffusion as
\ell = \sqrt{D\tau} = \sqrt{\frac{D}{a}} = \sqrt{\left(10^1\ \frac{\text{cm}^2}{\text{s}}\right)\left(10^{-8}\ \text{s}\right)} = 3.16228\times 10^{-4}\ \text{cm}.  (7.37)

A plot of τ versus ℓ = λ/(2π) from Eq. (7.31)


\tau = \frac{1}{a\left(1 + \frac{D\hat{k}^2}{a}\right)} = \frac{1}{a + \frac{D}{\ell^2}}  (7.38)

is given in Fig. 7.3. For long wavelengths, the time scales are determined by reaction; for fine
wavelengths, the time scale's falloff is dictated by diffusion, and our simple formula for the critical
ℓ = √(Dτ), illustrated as a dashed line, predicts the transition well. For small ℓ, it is seen that a one
decade decrease in ℓ induces a two decade decrease in τ, consistent with the prediction of Eq. (7.33):
lim_{k̂→∞}(ln τ) ∼ 2 ln(ℓ) − ln(D). Lastly, over the same range of ℓ, the oscillatory time scales induced
by advection are orders of magnitude less demanding, and are thus not included in the plot.
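The two limits of the spectrum can be checked numerically; a minimal sketch evaluating Eq. (7.38) well above, at, and well below the critical length (the helper names are our own):

```python
# Sketch: evaluate the amplitude time scale spectrum, Eq. (7.38), across the
# reaction- and diffusion-dominated limits seen in Fig. 7.3.
import math

a, D = 1.0e8, 1.0e1   # s^-1, cm^2/s

def tau_of_ell(ell):
    return 1.0 / (a + D / ell**2)   # Eq. (7.38), with ell = 1/k_hat

ell_crit = math.sqrt(D / a)         # balance length, Eq. (7.37)

for ell in (1.0e3 * ell_crit, ell_crit, 1.0e-3 * ell_crit):
    print(ell, tau_of_ell(ell))
```

At large ℓ the time scale saturates at 1/a; at ℓ_crit it is exactly 1/(2a); at small ℓ it falls off as ℓ²/D, the two-decade-per-decade slope noted above.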


The results of this simple analysis can be summarized as follows:


• Long wavelength spatial disturbances have time dynamics that are dominated by chem-
istry; each spatial point behaves as an isolated spatially homogeneous reactor.

• Short wavelength spatial disturbances have time dynamics that are dominated by diffu-
sion.

• Intermediate wavelength spatial disturbances have time dynamics determined by a fully
coupled combination of diffusion and chemistry. The critical intermediate length scale
where this balance exists is given by ℓ = √(Dτ).

• A so-called "Direct Numerical Simulation" (DNS) of a combustion process with
advection-reaction-diffusion requires

\Delta t < \tau, \quad \Delta x < \sqrt{D\tau}.  (7.39)

Less restrictive choices will not capture time dynamics and spatial structures inherent
in the continuum model. Advection usually plays a secondary role in determining time
dynamics.
This argument is by no means new, and is effectively the same as that given by Landau and
Lifshitz,[3] in their chapter on combustion.

7.2 Multiscale problem


Let us next consider a multiple reaction extension to Eqs. (7.1-7.2):

\frac{\partial}{\partial t} Y(x,t) + u\frac{\partial}{\partial x} Y(x,t) = D\frac{\partial^2}{\partial x^2} Y(x,t) - A\cdot(Y(x,t) - Y_{eq}),  (7.40)

Y(x,0) = Y_o, \quad Y(0,t) = Y_o, \quad \frac{\partial Y}{\partial x}(\infty,t) \to 0.  (7.41)
Here all variables are as before, except we take Y to be a vector of length N and A to be
a constant full rank matrix of dimension N × N, not necessarily symmetric, with real and
positive eigenvalues and N linearly independent eigenvectors.

7.2.1 Spatially homogeneous solution


The spatially homogeneous version of Eqs. (7.40-7.41) is
\frac{dY}{dt} = -A\cdot(Y - Y_{eq}), \quad Y(0) = Y_o.  (7.42)
[3] L. D. Landau and E. M. Lifshitz, 1959, Fluid Mechanics, Pergamon Press, London, p. 475.


Because of the way A has been defined, it can be decomposed as

A = S · σ · S−1 , (7.43)

where S is an N × N matrix whose columns are populated by the N linearly independent


eigenvectors of A, and σ is the diagonal matrix with the N positive eigenvalues, σ1 , . . . , σN ,
of A on its diagonal. Substitute Eq. (7.43) into Eq. (7.42), take advantage of the fact that
dYeq/dt = 0, and operate to find

d/dt (Y − Yeq) = −S · σ · S⁻¹ · (Y − Yeq)   (the grouping S · σ · S⁻¹ being A),   (7.44)

S⁻¹ · d/dt (Y − Yeq) = −S⁻¹ · S · σ · S⁻¹ · (Y − Yeq),   (7.45)

d/dt (S⁻¹ · (Y − Yeq)) = −σ · S⁻¹ · (Y − Yeq).   (7.46)
Take now

Z = S−1 · (Y − Yeq ), (7.47)

so that
dZ/dt = −σ · Z.   (7.48)
Our initial condition becomes

Z(0) = S−1 · (Yo − Yeq ) = Zo . (7.49)

The solution is

Z(t) = e^(−σ t) · Zo,   (7.50)

S⁻¹ · (Y(t) − Yeq) = e^(−σ t) · S⁻¹ · (Yo − Yeq),   (7.51)

Y(t) = Yeq + S · e^(−σ t) · S⁻¹ · (Yo − Yeq).   (7.52)

Expanded, one can say

[ Y1(t) ]   [ Y1eq ]                   [ e^(−σ1 t)        0      ] [ ·· s1⁻¹ ·· ] [ Y1o − Y1eq ]
[   ⋮   ] = [   ⋮  ] + [ s1  ···  sN ] [          ⋱              ] [     ⋮      ] [      ⋮     ] .  (7.53)
[ YN(t) ]   [ YNeq ]                   [     0        e^(−σN t)  ] [ ·· sN⁻¹ ·· ] [ YNo − YNeq ]


Here si , i = 1, . . . N, are eigenvectors of A. There are N time scales τi = 1/σi , i = 1, . . . , N,


on which the solution evolves. Each dependent variable Yi (t), i = 1, . . . , N, can evolve on
each of the time scales.

Example 7.4
For a case where N = 2, examine the solution to Eqs. (7.42) if

A = [ 1000000 s⁻¹   −99000000 s⁻¹ ],   Yo = [ 10⁻² ],   Yeq = [ 10⁻⁵ ].   (7.54)
    [ −990000 s⁻¹    99010000 s⁻¹ ]         [ 10⁻¹ ]         [ 10⁻⁶ ]

Thus, solve

dY1/dt = −(1000000 s⁻¹)(Y1 − 10⁻⁵) + (99000000 s⁻¹)(Y2 − 10⁻⁶),   Y1(0) = 10⁻²,   (7.55)

dY2/dt = (990000 s⁻¹)(Y1 − 10⁻⁵) − (99010000 s⁻¹)(Y2 − 10⁻⁶),   Y2(0) = 10⁻¹.   (7.56)

Straightforward calculation reveals the eigenvalues of A to be

σ1 = 10⁸ s⁻¹,   σ2 = 10⁴ s⁻¹.   (7.57)

Thus the time scales of reaction τi = 1/σi are

τ1 = 10⁻⁸ s,   τ2 = 10⁻⁴ s.   (7.58)

Clearly the ratio of time scales is large, with a stiffness ratio of 10⁴; thus, this is obviously a multiscale
problem. It is not easy to infer either the time scales or the stiffness ratio from simple examination of
the numerical values of A. Instead, one must perform the eigenvalue calculation.
It is easily shown that a diagonal decomposition of A is given by

A = [ −1      1    ] [ 10⁸    0  ] [ −1/101    100/101 ] .   (7.59)
    [  1    1/100  ] [  0    10⁴ ] [ 100/101   100/101 ]
       \___ S ___/    \___ σ ___/   \______ S⁻¹ ______/

Detailed calculation as given in the preceding section shows that the exact solution is given by

Y1(t) = −(9891/100000) e^(−(10⁸ s⁻¹)t) + (1089/10000) e^(−(10⁴ s⁻¹)t) + 10⁻⁵,   (7.60)

Y2(t) = (9891/100000) e^(−(10⁸ s⁻¹)t) + (1089/1000000) e^(−(10⁴ s⁻¹)t) + 10⁻⁶.   (7.61)
A plot of Y1(t) and Y2(t) is given in Fig. 7.4. Clearly, for t < τ1 = 10⁻⁸ s, both Y1(t) and Y2(t) are frozen
at the initial values. When t ≈ τ1 = 10⁻⁸ s, the first reaction mode begins to have an effect. Both Y1
and Y2 then maintain intermediate pseudo-equilibrium values for t ∈ [τ1, τ2]. When t ≈ τ2 = 10⁻⁴ s,
both Y1 and Y2 rapidly approach their true equilibrium values.
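The spectral decomposition underlying this example is easily reproduced numerically. The following sketch uses NumPy's eigenvalue routine on the A of this example (the off-diagonal entries used below are the ones consistent with the stated eigenvalues 10⁸ s⁻¹ and 10⁴ s⁻¹) and evaluates the solution formula of Eq. (7.52):

```python
import numpy as np

# Rate matrix of this example (entries in 1/s), equilibrium and initial states
A = np.array([[1.0e6, -9.9e7],
              [-9.9e5, 9.901e7]])
Yeq = np.array([1.0e-5, 1.0e-6])
Yo = np.array([1.0e-2, 1.0e-1])

sigma, S = np.linalg.eig(A)   # eigenvalues sigma_i; eigenvectors are columns of S

def Y(t):
    """Evaluate Eq. (7.52): Y(t) = Yeq + S exp(-sigma t) S^{-1} (Yo - Yeq)."""
    return Yeq + S @ (np.exp(-sigma * t) * np.linalg.solve(S, Yo - Yeq))
```

Evaluating Y(0) recovers Yo, and for t ≫ τ2 the solution collapses to Yeq, consistent with Fig. 7.4.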


[Figure: log–log plot of Y1 and Y2 versus t (s), with transitions marked at τ1 = 1/σ1 = 10⁻⁸ s and τ2 = 1/σ2 = 10⁻⁴ s.]

Figure 7.4: Mass fraction versus time for spatially homogeneous problem with simple two-
step linear kinetics.

7.2.2 Steady solution


The time-independent version of Eqs. (7.40-7.41) is
u dY/dx = D d²Y/dx² − A · (Y − Yeq),   Y(0) = Yo,   lim_{x→∞} dY/dx → 0.   (7.62)

Let us again employ Eq. (7.43) and the fact that dYeq /dx = 0 to recast Eq. (7.62) as
u d/dx (Y − Yeq) = D d²/dx² (Y − Yeq) − S · σ · S⁻¹ · (Y − Yeq).   (7.63)
Next operate on both sides of Eq. (7.63) with the constant matrix S−1 and then use Eq. (7.47)
to get
u S⁻¹ · d/dx (Y − Yeq) = D S⁻¹ · d²/dx² (Y − Yeq) − S⁻¹ · S · σ · S⁻¹ · (Y − Yeq),   (7.64)

u d/dx (S⁻¹ · (Y − Yeq)) = D d²/dx² (S⁻¹ · (Y − Yeq)) − σ · S⁻¹ · (Y − Yeq),   (7.65)

u dZ/dx = D d²Z/dx² − σ · Z.   (7.66)
Similar to Eq. (7.49), the boundary conditions become
Z(0) = Zo,   lim_{x→∞} dZ/dx → 0.   (7.67)
Importantly, these equations are now uncoupled. For example, the ith equation and boundary
conditions become
u dZi/dx = D d²Zi/dx² − σi Zi(x),   Zi|x=0 = Zio,   lim_{x→∞} dZi/dx → 0.   (7.68)


The solution can then be directly inferred from Eqs. (7.15, 7.16) to be

Zi(x) = Zi,o e^(r2,i x),   i = 1, . . . , N,   (7.69)

where

r2,i = (u/(2D)) (1 − √(1 + 4σi D/u²)),   i = 1, . . . , N.   (7.70)

Analogously, in the limit where σi D/u² ≫ 1, we can infer

ℓi = √(Dτi),   i = 1, . . . , N,   (7.71)

with the reaction time scale τi taken as

τi = 1/σi , i = 1, . . . , N. (7.72)

Then knowing Z(x), one can use Eq. (7.47) to form

Y(x) = Yeq + S · Z(x). (7.73)

Thus, any Yi (x) can be expected to relax over all N values of length scales ℓi .

Example 7.5
For a case where N = 2, D = 10¹ cm²/s, u = 10² cm/s, examine the solution to Eqs. (7.62) if

A = [ 1000000 s⁻¹   −99000000 s⁻¹ ],   Yo = [ 10⁻² ],   Yeq = [ 10⁻⁵ ].   (7.74)
    [ −990000 s⁻¹    99010000 s⁻¹ ]         [ 10⁻¹ ]         [ 10⁻⁶ ]

Thus, solve

(10² cm/s) dY1/dx = (10¹ cm²/s) d²Y1/dx² − (1000000 s⁻¹)(Y1 − 10⁻⁵) + (99000000 s⁻¹)(Y2 − 10⁻⁶),   (7.75)

(10² cm/s) dY2/dx = (10¹ cm²/s) d²Y2/dx² + (990000 s⁻¹)(Y1 − 10⁻⁵) − (99010000 s⁻¹)(Y2 − 10⁻⁶),   (7.76)

Y1(0) = 10⁻²,   Y2(0) = 10⁻¹,   lim_{x→∞} dY1/dx = 0,   lim_{x→∞} dY2/dx = 0.   (7.77)

Employing the transformation from Y to Z along with S⁻¹ as given in Eq. (7.59), our system can
be rewritten as

(10² cm/s) dZ1/dx = (10¹ cm²/s) d²Z1/dx² − (10⁸ s⁻¹) Z1,   (7.78)

(10² cm/s) dZ2/dx = (10¹ cm²/s) d²Z2/dx² − (10⁴ s⁻¹) Z2,   (7.79)


[Figure: log–log plot of Y1 and Y2 versus x (cm), with transitions marked at ℓ1 = √(Dτ1) = 3.2 × 10⁻⁴ cm and ℓ2 = √(Dτ2) = 3.2 × 10⁻² cm.]

Figure 7.5: Mass fraction versus distance for advection-reaction-diffusion problem with sim-
ple two-step linear kinetics.

Z1(0) = 9891/100000,   Z2(0) = 1089/10000,   lim_{x→∞} dZ1/dx = 0,   lim_{x→∞} dZ2/dx = 0.   (7.80)

These have solution

Z1(x) = (9891/100000) e^((5 − 5√400001) cm⁻¹ x) = 0.09891 e^(−(3.2×10³ cm⁻¹) x),   (7.81)

Z2(x) = (1089/10000) e^((5 − 5√41) cm⁻¹ x) = 0.1089 e^(−(2.7×10¹ cm⁻¹) x).   (7.82)

The relevant length scales are

ℓ1 = 1/(|5 − 5√400001| cm⁻¹) = 3.2 × 10⁻⁴ cm,   (7.83)

ℓ2 = 1/(|5 − 5√41| cm⁻¹) = 3.7 × 10⁻² cm.   (7.84)
Especially for ℓ1, these are both well estimated by the simple formulæ of Eq. (7.71):

ℓ1 ≈ √(Dτ1) = √((10¹ cm²/s)(10⁻⁸ s)) = 3.2 × 10⁻⁴ cm,   (7.85)

ℓ2 ≈ √(Dτ2) = √((10¹ cm²/s)(10⁻⁴ s)) = 3.2 × 10⁻² cm.   (7.86)
For the slower reaction 2, advection plays a larger role, so the diffusion-based estimate has
a small but noticeable error.
Forming Y via Y = Yeq + S · Z, we find the steady solution to be

Y1(x) = 10⁻⁵ − 0.09891 e^(−(3.2×10³ cm⁻¹) x) + 0.1089 e^(−(2.7×10¹ cm⁻¹) x),   (7.87)

Y2(x) = 10⁻⁶ + 0.09891 e^(−(3.2×10³ cm⁻¹) x) + 0.001089 e^(−(2.7×10¹ cm⁻¹) x).   (7.88)

Both variables evolve over two distinct length scales as they relax to their distinct equilibria. A plot
of Y1(x) and Y2(x) is given in Fig. 7.5. Similar to the time-dependent version of this system, a frozen
state near x = 0 first undergoes a reaction to a pseudo-equilibrium state near x = ℓ1. Near x = ℓ2, the
system relaxes to its true equilibrium.
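The decaying root of Eq. (7.70) and the length scales it induces can be checked directly; a sketch with the parameter values of this example:

```python
import math

D, u = 1.0e1, 1.0e2            # cm^2/s, cm/s
sigmas = [1.0e8, 1.0e4]        # reaction eigenvalues, 1/s

ells, ells_est = [], []
for sigma in sigmas:
    # Decaying root r_{2,i} of Eq. (7.70); its reciprocal magnitude is the length scale
    r2 = (u / (2.0 * D)) * (1.0 - math.sqrt(1.0 + 4.0 * sigma * D / u**2))
    ells.append(1.0 / abs(r2))
    # Diffusion-dominated estimate of Eq. (7.71)
    ells_est.append(math.sqrt(D / sigma))
```

This reproduces ℓ1 = 3.2 × 10⁻⁴ cm and ℓ2 = 3.7 × 10⁻² cm, and shows the √(Dτi) estimates lying close to both, with the larger discrepancy for the slow reaction.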


7.2.3 Spatio-temporal solution


Let us next study solutions with dependency on both time and distance. We extend the
analysis and nomenclature of Sec. 7.1.3 so as to take

Y(x, t) = Yeq + B(t) e^(ik̂x),   (7.89)

so that Eq. (7.40) becomes

dB/dt e^(ik̂x) + ik̂ u B e^(ik̂x) = −D k̂² B e^(ik̂x) − A · B e^(ik̂x),   (7.90)

dB/dt + ik̂ u B = −D k̂² B − A · B,   (7.91)

dB/dt = −((ik̂ u + D k̂²) I + A) · B.   (7.92)
Here I is the identity matrix. Now it is the real part of the eigenvalues of the matrix
−((ik̂ u + D k̂²) I + A) that dictates whether the amplitudes grow or decay. With the
operator “eig” operating on a matrix to yield its eigenvalues, it is a well-known result from
linear algebra that

eig(αI + A) = α + eig A.   (7.93)

Now, A is dictated by chemical kinetics alone, and is known to have N real and positive
eigenvalues, σi , i = 1, . . . , N. Our eigenvalues βi are thus seen to be

βi = ik̂ u + D k̂² + σi,   i = 1, . . . , N.   (7.94)

It is only the real part of βi that dictates growth or decay of a mode. Because

Re(βi) = D k̂² + σi > 0,   ∀ i = 1, . . . , N,   (7.95)

we see that all modes are decaying, and that diffusion induces them to decay more rapidly.
The time scales of decay τi are again given by the reciprocals of the eigenvalues, and are
seen to be
τi = 1/Re(βi) = 1/(σi (1 + D k̂²/σi)) = 1/(σi + D/ℓ²),   i = 1, . . . , N,   (7.96)

using ℓ = 1/k̂ from Eq. (7.34).


From Eq. (7.96), it is clear that for D k̂ 2 /σi ≪ 1, i.e. for sufficiently small wavenumbers
or long wavelengths, the time scales of amplitude growth or decay will be dominated by
reaction:
lim_{k̂→0} τi = 1/σi.   (7.97)


However, for D k̂²/σi ≫ 1, i.e. for sufficiently large wavenumbers or small wavelengths, the
amplitude growth/decay time scales are dominated by diffusion:

lim_{k̂→∞} τi = 1/(D k̂²) = (1/D) (λ/(2π))².   (7.98)

From Eq. (7.96), we see that a balance between reaction and diffusion exists for k̂ = k̂i =
√(σi/D). In terms of wavelength, and recalling Eq. (7.72), we see the balance at

λ/(2π) = 1/k̂i = √(D/σi) = √(Dτi) = ℓi.   (7.99)

Here ℓi is the ℓ for which the balance exists.

Example 7.6
For a case where N = 2, D = 10¹ cm²/s, u = 10² cm/s, examine time scales as a function of the
length scales when considering solutions to Eq. (7.40) if

A = [ 1000000 s⁻¹   −99000000 s⁻¹ ].   (7.100)
    [ −990000 s⁻¹    99010000 s⁻¹ ]

We have examined this matrix earlier and know from Eqs. (7.57, 7.58) that the eigenvalues and
spatially homogeneous reaction time scales are

σ1 = 10⁸ s⁻¹,   τ1 = 10⁻⁸ s,   σ2 = 10⁴ s⁻¹,   τ2 = 10⁻⁴ s.   (7.101)

From Eq. (7.96), we get expressions for the effects of diffusion on the two time scales:
τ1 = 1/(σ1 + D/ℓ²) = 1/((10⁸ s⁻¹) + (10¹ cm²/s)/ℓ²),   (7.102)

τ2 = 1/(σ2 + D/ℓ²) = 1/((10⁴ s⁻¹) + (10¹ cm²/s)/ℓ²).   (7.103)

The length scales where reaction and diffusion balance are given by Eq. (7.99):

ℓ1 = √(Dτ1) = √((10¹ cm²/s)(10⁻⁸ s)) = 3.2 × 10⁻⁴ cm,   (7.104)

ℓ2 = √(Dτ2) = √((10¹ cm²/s)(10⁻⁴ s)) = 3.2 × 10⁻² cm.   (7.105)

This behavior is displayed in Fig. 7.6. We see that for large ℓ, the time scales are dictated by those
given by a spatially homogeneous theory. As ℓ is reduced, diffusion first plays a role in modulating the
time scale of the slow reaction. As ℓ is further reduced, diffusion also modulates the time scale of the
fast reaction. It is the fast reaction that dictates the time scale that needs to be considered to capture
the advection-reaction-diffusion dynamics.
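The full time-scale spectrum of Eq. (7.96) is a one-line function of ℓ; the following sketch evaluates it in the three regimes discussed above:

```python
import math

D = 1.0e1                        # cm^2/s
sigma1, sigma2 = 1.0e8, 1.0e4    # reaction eigenvalues, 1/s

def tau(sigma, ell):
    """Eq. (7.96): tau_i = 1 / (sigma_i + D / ell^2)."""
    return 1.0 / (sigma + D / ell**2)

# Long wavelengths: reaction-dominated, tau -> 1/sigma
tau_long = tau(sigma2, 1.0e2)
# Short wavelengths: diffusion-dominated, tau -> ell^2 / D
tau_short = tau(sigma2, 1.0e-4)
# At ell_i = sqrt(D/sigma_i), reaction and diffusion balance: tau = 1/(2 sigma_i)
tau_balance = tau(sigma1, math.sqrt(D / sigma1))
```

The plateaus and roll-offs of Fig. 7.6 correspond to these three limits.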


[Figure: log–log plot of the time scales τ (s) versus ℓ (cm), with plateaus at τ1 = 10⁻⁸ s and τ2 = 10⁻⁴ s and transitions at ℓ1 = √(Dτ1) = 3.2 × 10⁻⁴ cm and ℓ2 = √(Dτ2) = 3.2 × 10⁻² cm.]

Figure 7.6: Time scale spectrum versus length scale for the simple advection-reaction-
diffusion model with two-step linear kinetics.

Chapter 8

Complex variable methods

see Mei, Chapters 9, 11.

Here we consider complex variable methods. We will give a brief physical motivation in the
context of Laplace’s equation in two dimensions, whose solution can be elegantly described
with the use of the methods of this chapter. Solutions we find can be applied in such highly
disparate fields as fluid mechanics, heat transfer, mass transfer, and electromagnetism. Much
of this theory was developed throughout the nineteenth century. The surprising dexterity
of Laplace’s equation in describing so much of nature did not escape broader notice. One
even finds in Tolstoy’s great novel1 the following musing from his character Levin, who is
depicted reading Tyndall2

He took up his book again. ‘Very good, electricity and heat are the same thing;
but is it possible to substitute the one quantity for the other in the equation for
the solution of any problem?’

8.1 Laplace’s equation in engineering


We have seen in Sec. (1.3) a derivation of Laplace’s equation to describe the diffusion of heat
in a material whose temperature varies in two spatial dimensions and is constant in time.
The analysis yields Eq. (1.105), repeated below

∂²T/∂x² + ∂²T/∂y² = 0,   (8.1)
where T is the temperature, with x and y as spatial variables. Also relevant in the derivation
was the heat flux vector, Eq. (1.97), q = (qx , qy )T , the steady state limit of the energy
1
Leo Tolstoy, 1877, Anna Karenina, Part 1, Chapter 27, in English translation by C. Gar-
nett with L. J. Kent and N. Berberova, Modern Library Classics, New York, 2000. Also see
e-book format from Project Gutenberg
2
John Tyndall, 1820-1893, Anglo-Irish physicist.


conservation principle in which Eq. (1.99) reduces to ∇T · q = 0, or ∂qx /∂x + ∂qy /∂y = 0,
and the two-dimensional limit of Fourier’s law, Eq. (1.100), q = −k∇T , or qx = −k∂T /∂x;
qy = −k∂T /∂y. We give analogs from many branches of engineering science in Table 8.1,
which include Fick’s,3 Newton’s,4 and Gauss’s5 laws.

Table 8.1: Notations from branches of engineering in which Laplace’s equation arises.

                 heat diffusion   mass diffusion   fluid mechanics    dynamics                 electrostatics

Laplace’s        ∇²T = 0          ∇²Y = 0          ∇²φ = 0            ∇²φ = 0                  ∇²Φ = 0
equation

relevant         q = −k∇T         j = −D∇Y         u = ∇φ             g = −∇φ                  E = −∇Φ
vector           Fourier’s law    Fick’s law       irrotationality    gravitational potential  electrical potential

divergence       ∇ᵀ · q = 0       ∇ᵀ · j = 0       ∇ᵀ · u = 0         ∇ᵀ · g = 0               ∇ᵀ · E = 0
condition        energy           mass             incompressibility  Newton’s law             Gauss’s law
                 conservation     conservation                        in a vacuum              in a vacuum

8.2 Velocity potential and stream function


We choose here to loosely focus on Laplace’s equation as it arises in two-dimensional, incom-
pressible, irrotational, inviscid fluid mechanics. One can easily use analogs from Table 8.1
to extend the same mathematical analysis to other fields. We first consider the so-called
velocity potential and stream function. We consider u to be a velocity vector, confined to
nonzero values in two dimensions:

u = (u, v, 0)ᵀ.   (8.2)
Recall if a vector u is confined to the x − y plane, and there is no variation of u with z
(∂/∂z = 0), then the curl of that vector, ω = ∇ × u, is confined to the z direction and takes
the form

ω = det [ i      j      k ]
        [ ∂/∂x   ∂/∂y   0 ] = (0, 0, ∂v/∂x − ∂u/∂y)ᵀ.   (8.3)
        [ u      v      0 ]

3
Adolf Eugen Fick, 1829-1901, German physician.
4
Isaac Newton, 1642-1726, English physicist and mathematician.
5
Carl Friedrich Gauss, 1777-1855, German mathematician.


Now if the field is two-dimensional and curl-free, we have ω = 0 and thus

∂v/∂x − ∂u/∂y = 0.   (8.4)
Moreover, because ∇ × u = 0, we can express u as the gradient of a potential φ, the velocity
potential:
u = ∇φ. (8.5)
Note that with this definition, the velocity vector points in the direction of maximum increase
of φ. Expanding, we can say
u = ∂φ/∂x,   (8.6)
v = ∂φ/∂y.   (8.7)
We see by substitution of Eqs. (8.6, 8.7) into Eq. (8.4) that the curl-free condition is true
identically:

∂v/∂x − ∂u/∂y = ∂/∂x (∂φ/∂y) − ∂/∂y (∂φ/∂x) = ∂²φ/∂x∂y − ∂²φ/∂y∂x = 0.   (8.8)
This holds as long as φ is continuous and sufficiently differentiable. In short, we may recall
that any vector field that is curl-free may be expressed as the gradient of a potential.
Now it can be shown that the physics of incompressible flows is such that

∇T · u = 0. (8.9)

Restricting to two dimensions, Eq. (8.9) reduces to

∂u/∂x + ∂v/∂y = 0.   (8.10)
Substituting from Eqs. (8.6, 8.7) for u and v in favor of φ, we see Eq. (8.10) reduces to

∂/∂x (∂φ/∂x) + ∂/∂y (∂φ/∂y) = 0,   (8.11)
∂²φ/∂x² + ∂²φ/∂y² = 0,   (8.12)
∇²φ = 0.   (8.13)

Now if Eq. (8.10) holds, we find it useful to define the stream function ψ as follows:

u = ∂ψ/∂y,   (8.14)
v = −∂ψ/∂x.   (8.15)


Direct substitution of Eqs. (8.14, 8.15) into Eq. (8.10) shows that this yields an identity:

∂u/∂x + ∂v/∂y = ∂/∂x (∂ψ/∂y) + ∂/∂y (−∂ψ/∂x) = ∂²ψ/∂x∂y − ∂²ψ/∂y∂x = 0.   (8.16)
Now, in an equation which will be critically important soon, we can set our definitions of u
and v in terms of φ and ψ equal to each other, as they must be. Thus combining Eqs. (8.6,
8.7, 8.14, 8.15), we see
∂φ/∂x = ∂ψ/∂y   (both equal to u),   (8.17)
∂φ/∂y = −∂ψ/∂x   (both equal to v).   (8.18)
If we differentiate Eq. (8.17) with respect to y and Eq. (8.18) with respect to x, we see
∂²φ/∂y∂x = ∂²ψ/∂y²,   (8.19)
∂²φ/∂x∂y = −∂²ψ/∂x².   (8.20)

Now subtract Eq. (8.20) from Eq. (8.19) to get

0 = ∂²ψ/∂y² + ∂²ψ/∂x²,   (8.21)
∇²ψ = 0.   (8.22)
Laplace’s equation holds not only for φ but also for ψ.
Let us now examine lines of constant φ (equipotential lines) and lines of constant ψ
(which we call streamlines). So take φ = C1 , ψ = C2 . For φ = φ(x, y), we can differentiate
to get
dφ = ∂φ/∂x dx + ∂φ/∂y dy = 0,   (8.23)
dφ = u dx + v dy = 0,   (8.24)
dy/dx |_{φ=C1} = −u/v.   (8.25)

Now for ψ = ψ(x, y) we similarly get

dψ = ∂ψ/∂x dx + ∂ψ/∂y dy = 0,   (8.26)
dψ = −v dx + u dy = 0,   (8.27)
dy/dx |_{ψ=C2} = v/u.   (8.28)


[Figure: two orthogonal families of curves, the streamlines ψ = C2 (slope dy/dx = v/u, tangent to the velocity vector u) and the equipotential lines φ = C1 (slope dy/dx = −u/v).]

Figure 8.1: Sketch of lines of constant ψ and φ.

We note

dy/dx |_{φ=C1} = −1 / (dy/dx |_{ψ=C2}).   (8.29)

Hence, lines of constant φ are orthogonal to lines of constant ψ. Furthermore, we see that

dx/u = dy/v   on lines for which ψ = C2.   (8.30)
As a result, we have

dy/dx |_{ψ=C2} = v/u,   (8.31)

which amounts to saying the vector u is tangent to the curve for which ψ = C2. These
notions are sketched in Fig. 8.1.
Now solutions to the two key equations of potential flow ∇2 φ = 0, ∇2 ψ = 0, are most effi-
ciently studied using methods involving complex variables. We will delay discussing solutions
until we have reviewed the necessary mathematics.

8.3 Mathematics of complex variables


Here we briefly introduce relevant elements of complex variable theory. Recall that the
imaginary number i is defined such that

i² = −1,   i = √(−1).   (8.32)


8.3.1 Euler’s formula


We can arrive at the useful Euler’s formula, by considering the following Taylor6 expansions
of common functions about t = 0:
e^t = 1 + t + (1/2!)t² + (1/3!)t³ + (1/4!)t⁴ + (1/5!)t⁵ + . . . ,   (8.33)
sin t = 0 + t + 0t² − (1/3!)t³ + 0t⁴ + (1/5!)t⁵ + . . . ,   (8.34)
cos t = 1 + 0t − (1/2!)t² + 0t³ + (1/4!)t⁴ + 0t⁵ + . . . .   (8.35)

With these expansions now consider the following combinations: (cos t + i sin t)|_{t=θ} and
e^t|_{t=iθ}:

cos θ + i sin θ = 1 + iθ − (1/2!)θ² − i(1/3!)θ³ + (1/4!)θ⁴ + i(1/5!)θ⁵ + . . . ,   (8.36)
e^{iθ} = 1 + iθ + (1/2!)(iθ)² + (1/3!)(iθ)³ + (1/4!)(iθ)⁴ + (1/5!)(iθ)⁵ + . . . ,   (8.37)
      = 1 + iθ − (1/2!)θ² − i(1/3!)θ³ + (1/4!)θ⁴ + i(1/5!)θ⁵ + . . . .   (8.38)
As the two series are identical, we have Euler’s formula

eiθ = cos θ + i sin θ. (8.39)
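Euler's formula can be verified numerically by summing a truncated form of the Taylor series of Eq. (8.33) at t = iθ and comparing against cos θ + i sin θ; a minimal sketch:

```python
import math

def exp_series(z, n_terms=30):
    """Truncated Taylor series of e^z about 0, as in Eq. (8.33)."""
    return sum(z**k / math.factorial(k) for k in range(n_terms))

theta = 0.7   # an arbitrary test angle
lhs = exp_series(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
# lhs and rhs agree to near machine precision
```

The same comparison holds for any θ, since the truncated series converges rapidly on the unit circle.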

8.3.2 Polar and Cartesian representations


We take x ∈ R1 , y ∈ R1 and define the complex number z to be

z = x + iy. (8.40)

We say that z ∈ C1 . We define the operator ℜ as selecting the real part of a complex number
and ℑ as selecting the imaginary part of a complex number. For Eq. (8.40), we see

ℜ(z) = x, (8.41)
ℑ(z) = y. (8.42)
Both operators ℜ and ℑ take C1 → R1. We can multiply and divide Eq. (8.40) by √(x² + y²)
to obtain

z = √(x² + y²) ( x/√(x² + y²) + i y/√(x² + y²) ).   (8.43)
6
Brook Taylor, 1685-1731, English mathematician and artist, Cambridge-educated, published on capillary
action, magnetism, and thermometers, adjudicated the dispute between Newton and Leibniz over priority
in developing calculus, contributed to the method of finite differences, invented integration by parts, name
ascribed to Taylor series of which variants were earlier discovered by Gregory, Newton, Leibniz, Johann
Bernoulli, and de Moivre.


[Figure: a point z in the complex plane, with Cartesian coordinates (x, y), angle θ, and polar distance r = √(x² + y²).]

Figure 8.2: Polar and Cartesian representation of a complex number z.

Noting the similarities between this and the transformation between Cartesian and polar
coordinates suggests we adopt
r = √(x² + y²),   cos θ = x/√(x² + y²),   sin θ = y/√(x² + y²).   (8.44)

Thus we have

z = r (cos θ + i sin θ),   (8.45)
z = r e^{iθ}.   (8.46)

We often say that a complex number can be characterized by its magnitude |z| and its
argument, θ; we say then

r = |z|, (8.47)
θ = arg z. (8.48)

Here, r ∈ R1 and θ ∈ R1 . Note that |eiθ | = 1. If x > 0, the function arg z is identical to
arctan(y/x) and is suggested by the polar and Cartesian representation of z as shown in
Fig. 8.2. However, we recognize that the ordinary arctan (also known as tan−1 ) function
maps onto the range [−π/2, π/2], while we would like arg to map onto [−π, π]. For example,
to capture the entire unit circle if r = 1, we need θ ∈ [−π, π]. This can be achieved if we
define arg, also known as Tan−1 as follows:
arg z = arg(x + iy) = Tan⁻¹(x, y) = 2 arctan( y / (x + √(x² + y²)) ).   (8.49)

If x > 0, this reduces to the more typical

arg z = arg(x + iy) = Tan⁻¹(x, y) = arctan(y/x) = tan⁻¹(y/x),   x > 0.   (8.50)
x x


Table 8.2: Comparison of the action of arg, Tan−1 , and arctan.

x y arg(x + iy) Tan−1 (x, y) arctan(y/x)


1 1 π/4 π/4 π/4
−1 1 3π/4 3π/4 −π/4
−1 −1 −3π/4 −3π/4 π/4
1 −1 −π/4 −π/4 −π/4

[Figure: surface plots of Tan⁻¹(x, y) and tan⁻¹(y/x) over the (x, y) plane.]

Figure 8.3: Comparison of Tan−1 (x, y) and tan−1 (y/x) .

The preferred and more general form is Eq. (8.49). We give simple function evaluations
involving arctan and Tan−1 for selected values of x and y in Table 8.2. Use of Tan−1
effectively captures the correct quadrant of the complex plane corresponding to different
positive and negative values of x and y. The function is sometimes known as Arctan or
atan2. A comparison of Tan−1 (x, y) and tan−1 (y/x) is given in Fig. 8.3.
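The half-angle form of Eq. (8.49) can be checked against the standard library's two-argument arctangent, math.atan2, which implements Tan⁻¹; a sketch using the rows of Table 8.2:

```python
import math

def arg2(x, y):
    """Eq. (8.49): arg(x + iy) = 2 arctan( y / (x + sqrt(x^2 + y^2)) ).
    Singular on the negative real axis, where x + sqrt(x^2 + y^2) = 0."""
    return 2.0 * math.atan(y / (x + math.sqrt(x**2 + y**2)))

samples = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # rows of Table 8.2
args = [arg2(x, y) for x, y in samples]
```

All four samples reproduce the arg column of Table 8.2 and agree with math.atan2(y, x), whereas the one-argument arctan(y/x) is wrong in the left half-plane.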
Now we can define the complex conjugate z̄ as

z̄ = x − iy,   (8.51)
  = √(x² + y²) ( x/√(x² + y²) − i y/√(x² + y²) ),   (8.52)
  = r (cos θ − i sin θ),   (8.53)
  = r (cos(−θ) + i sin(−θ)),   (8.54)
  = r e^{−iθ}.   (8.55)

Note now that

z z̄ = (x + iy)(x − iy) = x² + y² = |z|²,   (8.56)
    = (r e^{iθ})(r e^{−iθ}),   (8.57)
    = r²,   (8.58)
    = |z|².   (8.59)

We also have

sin θ = (e^{iθ} − e^{−iθ}) / (2i),   (8.60)
cos θ = (e^{iθ} + e^{−iθ}) / 2.   (8.61)

Example 8.7
Use the polar representation of z to find all roots to the algebraic equation

z 4 = 1. (8.62)

We know that z = reiθ . We also note that the constant 1 can be represented as

1 = e^{2nπi},   n = 0, 1, 2, . . . .   (8.63)

This will be useful in finding all roots to our equation. With this representation, Eq. (8.62) becomes

r⁴ e^{4iθ} = e^{2nπi},   n = 0, 1, 2, . . . .   (8.64)

We have a solution when

r = 1,   θ = nπ/2,   n = 0, 1, 2, . . . .   (8.65)
There are unique solutions for n = 0, 1, 2, 3. For larger n, the solutions repeat. So we have four solutions

z = e^{0i},   z = e^{iπ/2},   z = e^{iπ},   z = e^{3iπ/2}.   (8.66)

In Cartesian form, the four solutions are

z = ±1, z = ±i. (8.67)

Example 8.8
Find all roots to

z 3 = i. (8.68)

We proceed in a similar fashion as for the previous example. We know that

i = e^{i(π/2+2nπ)},   n = 0, 1, 2, . . . .   (8.69)


[Figure: two complex-plane panels showing the roots of z⁴ = 1 and of z³ = i, all lying on the unit circle r = 1.]

Figure 8.4: Sketch of solutions to z 4 = 1 and z 3 = i in the complex plane.

Substituting this into Eq. (8.68), we get

r³ e^{3iθ} = e^{i(π/2+2nπ)},   n = 0, 1, 2, . . . .   (8.70)

Solving, we get

r = 1,   θ = π/6 + 2nπ/3.   (8.71)
There are only three unique values of θ, those being θ = π/6, θ = 5π/6, θ = 3π/2. So the three roots
are

z = e^{iπ/6},   z = e^{5iπ/6},   z = e^{3iπ/2}.   (8.72)

In Cartesian form these roots are

z = (√3 + i)/2,   z = (−√3 + i)/2,   z = −i.   (8.73)
Sketches of the solutions to this and the previous example are shown in Fig. 8.4. For both examples,
the roots are uniformly distributed about the unit circle, with four roots for the quartic equation and
three for the cubic.
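Both root sets are quickly confirmed numerically; a sketch using NumPy's polynomial root finder:

```python
import numpy as np

# Roots of z^4 - 1 = 0 and of z^3 - i = 0, via the polynomial coefficients
quartic_roots = np.roots([1, 0, 0, 0, -1])
cubic_roots = np.roots([1, 0, 0, -1j])
# All roots lie on the unit circle, as in Fig. 8.4
```

Each computed root satisfies its defining equation to machine precision, and the root −i of the cubic appears explicitly.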

8.3.3 Cauchy-Riemann equations


Now it is possible to define complex functions of complex variables W (z). For example, take
a complex function to be defined as

W(z) = z² + z,   (8.74)
     = (x + iy)² + (x + iy),   (8.75)
     = x² + 2xyi − y² + x + iy,   (8.76)
     = x² + x − y² + i(2xy + y).   (8.77)


In general, we can say


W (z) = φ(x, y) + iψ(x, y). (8.78)
Here φ and ψ are real functions of real variables. We shall soon see we have chosen their
symbols to usefully match the physics-based symbols of the previous section.
Now W (z) is defined as analytic at zo if dW/dz exists at zo and is independent of the
direction in which it was calculated. That is, extending Newton’s definition of the derivative
to apply to complex numbers, we adopt
dW/dz |_{z=zo} = lim_{∆z→0} ( W(zo + ∆z) − W(zo) ) / ∆z.   (8.79)
The notation in the subscript near the vertical bar indicates that the derivative is evaluated
at the point z = zo . Now there are many paths that we can choose to evaluate the derivative.
Let us consider two distinct paths, y = C1 and x = C2 . We will get a result which can be
shown to be valid for arbitrary paths.
For y = C1 , we have ∆z = ∆x, so
dW/dz |_{z=zo} = lim_{∆x→0} ( W(xo + iyo + ∆x) − W(xo + iyo) ) / ∆x,   (8.80)
             = ∂W/∂x |_y.   (8.81)
Here, the subscript y next to the vertical bar indicates that y is considered to be held
constant.
For x = C2 , we have ∆z = i∆y, so
dW/dz |_{z=zo} = lim_{∆y→0} ( W(xo + iyo + i∆y) − W(xo + iyo) ) / (i∆y),   (8.82)
             = (1/i) ∂W/∂y |_x,   (8.83)
             = −i ∂W/∂y |_x.   (8.84)
Here, the subscript x next to the vertical bar indicates that x is considered to be held
constant. Now for an analytic function, we need the derivative to be the same for any path
of integration. So certainly we must require
∂W/∂x |_y = −i ∂W/∂y |_x.   (8.85)

Expanding using W = φ + iψ, and dispensing with the vertical bars, we need

∂φ/∂x + i ∂ψ/∂x = −i ( ∂φ/∂y + i ∂ψ/∂y ),   (8.86)
                = ∂ψ/∂y − i ∂φ/∂y.   (8.87)


Thus, for equality, and thus path independence of the derivative, we require
∂φ/∂x = ∂ψ/∂y,   (8.88)
∂φ/∂y = −∂ψ/∂x.   (8.89)
These are the well known Cauchy-Riemann7 equations for analytic functions of complex
variables. They are identical to our equations for incompressible irrotational fluid mechanics.
Moreover, they are identical to any of the other physical analogs from heat and mass transfer,
etc., presented in Table 8.1. Consequently, any analytic complex function is guaranteed to
be a physical solution. There are an infinite number of functions to choose from.
We define the complex potential as
W (z) = φ(x, y) + iψ(x, y), (8.90)
and taking a derivative of the analytic potential, we have, using Eqs. (8.6,8.15), that
dW/dz = ∂φ/∂x + i ∂ψ/∂x,   (8.91)
      = u − iv.   (8.92)

We can equivalently say using Eqs. (8.7, 8.14) that

dW/dz = −i ( ∂φ/∂y + i ∂ψ/∂y ),   (8.93)
      = ∂ψ/∂y − i ∂φ/∂y,   (8.94)
      = u − iv.   (8.95)
Now most common functions are easily shown to be analytic. For example, for the
function W (z) = z 2 + z, we are tempted to apply the ordinary rules of differentiation to get
dW/dz = 2z + 1. Let us check more carefully. We first expand to express W (z) as
W(z) = (x² + x − y²) + i(2xy + y).   (8.96)

We see then that we have

φ(x, y) = x² + x − y²,   ψ(x, y) = 2xy + y,   (8.97)
∂φ/∂x = 2x + 1,   ∂ψ/∂x = 2y,   (8.98)
∂φ/∂y = −2y,   ∂ψ/∂y = 2x + 1.   (8.99)
7
Augustin-Louis Cauchy, 1789-1857, French mathematician and military engineer, worked in complex
analysis, optics, and theory of elasticity.


[Figure: contour plot over the (x, y) plane of the equipotential lines φ = C1 and the streamlines ψ = C2, which intersect orthogonally.]

Figure 8.5: Plot of contours of φ(x, y) = x² + x − y² and ψ = 2xy + y.

Note that the Cauchy-Riemann equations are satisfied because ∂φ/∂x = ∂ψ/∂y and ∂φ/∂y =
−∂ψ/∂x. So the derivative is independent of direction, and we can say
dW/dz = ∂W/∂x |_y = (2x + 1) + i(2y) = 2(x + iy) + 1 = 2z + 1.   (8.100)

Thus our supposition that extending the ordinary rules of derivatives for real functions to
complex functions indeed works here. We plot contours of constant φ and ψ in Fig. 8.5. It
is seen in Fig. 8.5 that lines of constant φ are orthogonal to lines of constant ψ consistent
with the discussion in Sec. 8.2. Note also that because
u = ∂φ/∂x = 2x + 1,   v = ∂φ/∂y = −2y,   (8.101)
the velocity vector (u, v)T is zero when (x, y) = (−1/2, 0). This point is evident in Fig. 8.5.
Note also that
∂u/∂x + ∂v/∂y = 2 − 2 = 0,   (8.102)
so if the solution were for a fluid, it would satisfy a mass conservation equation for an
incompressible fluid. And the solution is also representative of an irrotational fluid as
∂v/∂x − ∂u/∂y = 0 − 0 = 0.   (8.103)
For an example of a nonanalytic function consider W(z) = z̄. Thus

W(z) = x − iy.   (8.104)


So φ = x and ψ = −y, ∂φ/∂x = 1, ∂φ/∂y = 0, and ∂ψ/∂x = 0, ∂ψ/∂y = −1. Because


∂φ/∂x ≠ ∂ψ/∂y, the Cauchy-Riemann equations are not satisfied, and the derivative depends
on direction.
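A finite-difference check of Eqs. (8.88, 8.89) makes the contrast between the two examples concrete; the following sketch evaluates a Cauchy-Riemann residual for the analytic W(z) = z² + z and the nonanalytic W(z) = z̄:

```python
def cr_residual(f, x, y, h=1e-6):
    """Central-difference residual |phi_x - psi_y| + |phi_y + psi_x| of
    Eqs. (8.88, 8.89), with phi = Re f and psi = Im f."""
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2.0 * h)
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2.0 * h)
    phi_x, psi_x = fx.real, fx.imag
    phi_y, psi_y = fy.real, fy.imag
    return abs(phi_x - psi_y) + abs(phi_y + psi_x)

r_analytic = cr_residual(lambda z: z * z + z, 1.0, 2.0)       # near zero
r_conjugate = cr_residual(lambda z: z.conjugate(), 1.0, 2.0)  # equals 2
```

The residual vanishes (to truncation error) for the analytic function and equals 2 for the conjugate, since there ∂φ/∂x = 1 while ∂ψ/∂y = −1.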

8.4 Elementary complex potentials


Let us examine some simple analytic functions and see examples of the physics to which
they correspond.

8.4.1 Uniform field


Take
W (z) = Az, with A ∈ C1 . (8.105)
Then
dW/dz = A = u − iv.   (8.106)
Because A is complex, we can say

A = U e^{−iα} = U cos α − iU sin α.   (8.107)

Thus we get
u = U cos α, v = U sin α. (8.108)
This represents a spatially uniform velocity field with streamlines inclined at angle α to the
x axis. The field is sketched in Fig. 8.6.

8.4.2 Sources and sinks


Take
W (z) = A ln z, A ∈ R1 . (8.109)
With z = reiθ , we have ln z = ln r + iθ. So

W (z) = A ln r + iAθ. (8.110)

Consequently, we have for the velocity potential and stream function

φ = A ln r, ψ = Aθ. (8.111)

Now u = ∇φ, so, after transforming to polar coordinates, omitting some details, we obtain

ur = ∂φ/∂r = A/r,   uθ = (1/r) ∂φ/∂θ = 0.   (8.112)


[Figure: parallel streamlines of a uniform flow in the (x, y) plane.]

Figure 8.6: Streamlines for uniform flow.

So the velocity is all radial, and becomes infinite at r = 0. We can show that the volume
flow rate is bounded, and is in fact a constant. The volume flow rate Q through a surface is
Q = ∫_A uᵀ · n dA = ∫₀^{2π} ur r dθ = ∫₀^{2π} (A/r) r dθ = 2πA.   (8.113)
The volume flow rate is a constant. If A > 0, we have a source. If A < 0, we have a sink.
The potential for a source/sink is often written as

W(z) = (Q/(2π)) ln z.   (8.114)

For a source located at a point zo which is not at the origin, we can say

W(z) = (Q/(2π)) ln(z − zo).   (8.115)
The flow is sketched in Fig. 8.7.

8.4.3 Point vortices


For an ideal point vortex, we have

W (z) = iB ln z, B ∈ R1 . (8.116)

So
W (z) = iB (ln r + iθ) = −Bθ + iB ln r. (8.117)


iy

Figure 8.7: Velocity vectors and equipotential lines for source flow.

Consequently,
φ = −Bθ, ψ = B ln r. (8.118)
We get the velocity field from
ur = ∂φ/∂r = 0,   uθ = (1/r) ∂φ/∂θ = −B/r.   (8.119)
So we see that the streamlines are circles about the origin, and there is no radial component
of velocity. Consider the so-called circulation of this flow
Γ = ∮_C uᵀ · dr = ∫₀^{2π} (−B/r) r dθ = −2πB.   (8.120)
So we often write the complex potential in terms of the ideal vortex strength Γ:

W(z) = −(iΓ/(2π)) ln z.   (8.121)

For an ideal vortex centered at z = zo, we say

W(z) = −(iΓ/(2π)) ln(z − zo).   (8.122)

The point vortex flow is sketched in Fig. 8.8.

8.4.4 Superposition of sources


Because the equation for velocity potential is linear, we can use the method of superposition
to create new solutions as summations of elementary solutions. Say we want to model the
effect of a wall on a source as sketched in Fig. 8.9. At the wall we want u(0, y) = 0. That is


Figure 8.8: Streamlines, equipotential lines, and velocity vectors for a point vortex.

Figure 8.9: Sketch for source-wall interaction.


 
ℜ(dW/dz) = ℜ(u − iv) = u = 0,   on z = iy.   (8.123)
Now let us place a source at z = a and superpose a source at z = −a, where a is a real
number. So we have for the complex potential
W(z) = (Q/(2π)) ln(z − a) + (Q/(2π)) ln(z + a),   (8.124)
       (original)            (image)
     = (Q/(2π)) (ln(z − a) + ln(z + a)),   (8.125)
     = (Q/(2π)) ln((z − a)(z + a)),   (8.126)
     = (Q/(2π)) ln(z² − a²),   (8.127)

dW/dz = (Q/(2π)) (2z/(z² − a²)).   (8.128)
Now on z = iy, which is the location of the wall, we have
   
dW/dz = (Q/(2π)) (2iy/(−y² − a²)) = −i (Q/π) y/(y² + a²).   (8.129)

The term is purely imaginary; hence, the real part is zero, and we have u = 0 on the wall,
as desired.
On the wall we have a nonzero y component of velocity:
v = (Q/π) y/(y² + a²).   (8.130)
We find the location on the wall of the maximum v velocity by setting the derivative with
respect to y to be zero,
∂v/∂y = (Q/π) ((y² + a²) − y(2y))/((y² + a²)²) = 0.   (8.131)
Solving, we find a critical point at y = ±a, which can be shown to be a maximum.
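These two wall results can be spot-checked numerically. The sketch below (Q and a are arbitrary test values) evaluates dW/dz of Eq. (8.128) on z = iy, verifying u = 0 there and locating the peak of v:

```python
import math

Q, a = 2.0, 1.0   # source strength and offset (arbitrary test values)

def dW(z):
    """dW/dz for W(z) = (Q/(2 pi)) ln(z^2 - a^2): the source plus its image."""
    return (Q / (2.0 * math.pi)) * 2.0 * z / (z * z - a * a)

# On the wall z = i y: the real part (u) vanishes, and
# v = -Im(dW/dz) = (Q/pi) y/(y^2 + a^2), which peaks at y = a.
ys = [0.1 * k for k in range(1, 100)]
us = [dW(1j * y).real for y in ys]
vs = [-dW(1j * y).imag for y in ys]

u_max = max(abs(u) for u in us)
y_at_vmax = ys[vs.index(max(vs))]

print(u_max, y_at_vmax)   # u is zero on the wall; v peaks at y = a
```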

8.4.5 Flow in corners


Flow in or around a corner can be modeled by the complex potential

W(z) = A z^n,   A ∈ R¹,   (8.132)
     = A (r e^{iθ})^n,   (8.133)
     = A r^n e^{inθ},   (8.134)
     = A r^n (cos(nθ) + i sin(nθ)).   (8.135)


So we have

φ = A r^n cos(nθ),   (8.136)
ψ = A r^n sin(nθ).   (8.137)

Now recall that lines on which ψ is constant are streamlines. Examining the stream function,
we obviously have streamlines when ψ = 0 which occurs whenever θ = 0 or θ = π/n.
For example if n = 2, we model a stream striking a flat wall. For this flow, we have

W (z) = Az 2 , (8.138)
= A(x + iy)2 , (8.139)
= A((x2 − y 2 ) + i(2xy)). (8.140)

Thus,

φ = A(x2 − y 2 ), (8.141)
ψ = A(2xy). (8.142)

So the streamlines are hyperbolas. For the velocity field, we take


dW/dz = 2Az,   (8.143)
      = 2A(x + iy),   (8.144)
      = u − iv.   (8.145)

Thus,

u = 2Ax, (8.146)
v = −2Ay. (8.147)

This flow actually represents flow in a corner formed by a right angle, flow striking a flat
plate, or the impingement of two streams. For n = 2, streamlines are sketched in Fig. 8.10.
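The velocity field and streamline geometry for n = 2 can be checked numerically. A short Python sketch (A and the sample points are arbitrary test values) verifies u = 2Ax, v = −2Ay from dW/dz = 2Az, and verifies that ψ = 2Axy is constant along a hyperbola y = c/x:

```python
A = 0.8  # potential amplitude (arbitrary test value)

def velocity(x, y):
    """u - i v = dW/dz = 2 A z for the corner-flow potential W(z) = A z^2."""
    w = 2.0 * A * complex(x, y)
    return w.real, -w.imag

u1, v1 = velocity(0.5, 0.25)   # expect u = 2 A x, v = -2 A y

# The stream function psi = 2 A x y is constant on the hyperbola y = c/x.
psi = lambda x, y: 2.0 * A * x * y
vals = [psi(x, 1.0 / x) for x in (0.5, 1.0, 2.0, 4.0)]

print(u1, v1, vals)
```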

8.4.6 Doublets
We can form what is known as a doublet flow by considering the superposition of a source
and sink and let the two approach each other. Consider a source and sink of equal and
opposite strength straddling the y axis, each separated from the origin by a distance ǫ as
sketched in Fig. 8.11. The complex velocity potential is
W(z) = (Q/(2π)) ln(z + ǫ) − (Q/(2π)) ln(z − ǫ),   (8.148)
     = (Q/(2π)) ln((z + ǫ)/(z − ǫ)).   (8.149)


Figure 8.10: Sketch for impingement flow, stagnation flow, and flow in a corner, n = 2.

Figure 8.11: Source-sink pair.

It can be shown by synthetic division that as ǫ → 0,

(z + ǫ)/(z − ǫ) = 1 + 2(ǫ/z) + 2(ǫ²/z²) + . . . .   (8.150)

So the potential approaches

W(z) ∼ (Q/(2π)) ln(1 + 2(ǫ/z) + 2(ǫ²/z²) + . . .).   (8.151)

Now because ln(1 + x) → x as x → 0, we get for small ǫ that

W(z) ∼ (Q/(2π)) (2ǫ/z) ∼ Qǫ/(πz).   (8.152)
Now if we require that

lim_{ǫ→0} Qǫ/π → µ,   (8.153)

we have

W(z) = µ/z,   (8.154)
     = (µ/(x + iy)) ((x − iy)/(x − iy)),   (8.155)
     = µ(x − iy)/(x² + y²).   (8.156)


Figure 8.12: Streamlines and equipotential lines for a doublet. Notice because the sink is
infinitesimally to the right of the source, there exists a directionality. This can be considered
a type of dipole moment; in this case, the direction of the dipole is −i.

So

φ(x, y) = µ x/(x² + y²),   (8.157)
ψ(x, y) = −µ y/(x² + y²).   (8.158)

In polar coordinates, we then say

φ = µ cos θ/r,   (8.159)
ψ = −µ sin θ/r.   (8.160)
Streamlines and equipotential lines for a doublet are plotted in Fig. 8.12.
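The limiting process behind Eq. (8.154) can be exercised numerically: hold µ = Qǫ/π fixed, shrink ǫ, and compare the source-sink pair of Eq. (8.148) against µ/z. A sketch (the test point z, µ, and the ǫ values are arbitrary choices):

```python
import cmath, math

mu = 1.0               # doublet strength held fixed (arbitrary test value)
z = complex(1.3, 0.7)  # sample point away from the origin

exact = mu / z         # doublet potential, Eq. (8.154)

# Source/sink pair of Eq. (8.148) with Q chosen so that Q*eps/pi = mu.
errs = []
for eps in (0.1, 0.01, 0.001):
    Q = mu * math.pi / eps
    W = (Q / (2.0 * math.pi)) * cmath.log((z + eps) / (z - eps))
    errs.append(abs(W - exact))

print(errs)   # error falls roughly like eps**2
```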

8.4.7 Quadrupoles
It is natural to examine a higher order potential, which will be called the quadrupole:

W(z) = k/z²,   (8.161)
     = k/((x + iy)(x + iy)),   (8.162)


Figure 8.13: Streamlines and equipotential lines for a quadrupole, k = 1.

     = k (x − iy)²/((x² + y²)²),   (8.163)
     = k (x² − y² − 2ixy)/((x² + y²)²).   (8.164)

This gives

φ(x, y) = k (x² − y²)/((x² + y²)²),   (8.165)
ψ(x, y) = −k 2xy/((x² + y²)²).   (8.166)
Streamlines and equipotential lines for a quadrupole are plotted in Fig. 8.13 for k = 1.

8.4.8 Rankine half body


Now consider the superposition of a uniform stream and a source, which we define to be a
Rankine half body:
W(z) = Uz + (Q/(2π)) ln z,   U, Q ∈ R¹,   (8.167)
     = U r e^{iθ} + (Q/(2π)) (ln r + iθ),   (8.168)
     = U r (cos θ + i sin θ) + (Q/(2π)) (ln r + iθ),   (8.169)
     = (U r cos θ + (Q/(2π)) ln r) + i (U r sin θ + (Q/(2π)) θ).   (8.170)


Figure 8.14: Streamlines for a Rankine half body.

So

φ = U r cos θ + (Q/(2π)) ln r,   (8.171)
ψ = U r sin θ + (Q/(2π)) θ.   (8.172)
Streamlines for a Rankine half body are plotted in Fig. 8.14. Now for the Rankine half body,
it is clear that there is a point where the velocity vector u = 0 somewhere on the x axis,
along θ = π. With the velocity given by

dW/dz = U + Q/(2πz) = u − iv,   (8.173)

we get

U + (Q/(2π)) (1/r) e^{−iθ} = u − iv,   (8.174)
U + (Q/(2π)) (1/r) (cos θ − i sin θ) = u − iv.   (8.175)

Thus,

u = U + (Q/(2πr)) cos θ,   (8.176)
v = (Q/(2πr)) sin θ.   (8.177)


When θ = π, we get u = 0 when

0 = U + (Q/(2πr)) (−1),   (8.178)
r = Q/(2πU).   (8.179)

8.4.9 Flow over a cylinder


We can model flow past a cylinder by superposing a uniform flow with a doublet. Defining
a² = µ/U, we write

W(z) = Uz + µ/z = U (z + a²/z),   (8.180)
     = U (r e^{iθ} + (a²/r) e^{−iθ}),   (8.181)
     = U (r (cos θ + i sin θ) + (a²/r) (cos θ − i sin θ)),   (8.182)
     = U ((r + a²/r) cos θ + i (r − a²/r) sin θ),   (8.183)
     = U r (cos θ (1 + a²/r²) + i sin θ (1 − a²/r²)).   (8.184)

So

φ = U r cos θ (1 + a²/r²),   (8.185)
ψ = U r sin θ (1 − a²/r²).   (8.186)
Now on r = a, we have ψ = 0. Because the stream function is constant here, the curve
r = a, a circle, must be a streamline. A sketch of the streamlines and equipotential lines is
plotted in Fig. 8.15.
For the velocities, we have

u_r = ∂φ/∂r = U cos θ (1 + a²/r²) + U r cos θ (−2a²/r³),   (8.187)
    = U cos θ (1 − a²/r²),   (8.188)
u_θ = (1/r) ∂φ/∂θ = −U sin θ (1 + a²/r²).   (8.189)
So on r = a, we have ur = 0, and uθ = −2U sin θ.
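These surface results can be verified numerically. A minimal Python sketch (U and a are arbitrary test values) samples points on r = a and checks that ψ = 0 there, that u_r = 0, and that u_θ = −2U sin θ:

```python
import cmath, math

U, a = 1.0, 1.0   # free-stream speed and cylinder radius (arbitrary test values)

W  = lambda z: U * (z + a * a / z)           # complex potential, Eq. (8.180)
dW = lambda z: U * (1.0 - a * a / (z * z))   # u - i v

psi_max, ur_max, ut_err = 0.0, 0.0, 0.0
for k in range(1, 200):
    th = k * math.pi / 100.0
    z = a * cmath.exp(1j * th)               # point on the surface r = a
    u, v = dW(z).real, -dW(z).imag
    ur = u * math.cos(th) + v * math.sin(th)
    ut = -u * math.sin(th) + v * math.cos(th)
    psi_max = max(psi_max, abs(W(z).imag))   # psi = 0: r = a is a streamline
    ur_max = max(ur_max, abs(ur))            # no flow through the surface
    ut_err = max(ut_err, abs(ut + 2.0 * U * math.sin(th)))

print(psi_max, ur_max, ut_err)   # all at machine-precision level
```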
There are more basic ways to describe the force on bodies using complex variables directly.
We shall give those methods, but first a discussion of the motivating complex variable theory
is necessary.


Figure 8.15: Streamlines and equipotential lines for flow over a cylinder without circulation.

8.5 Contour integrals


Consider the closed contour integral of a complex function in the complex plane. For such
integrals, we have a useful theory which we will not prove, but will demonstrate here. Consider
contour integrals enclosing the origin with a circle in the complex plane for four functions.
The contour in each is C : z = R̂ e^{iθ} with θ ∈ [0, 2π]. For such a contour dz = iR̂ e^{iθ} dθ.

8.5.1 Simple pole


We describe a simple pole with the complex potential

W(z) = a/z,   (8.190)

and the contour integral is

∮_C W(z) dz = ∮_C (a/z) dz = ∫_{θ=0}^{θ=2π} (a/(R̂ e^{iθ})) iR̂ e^{iθ} dθ,   (8.191)
            = ai ∫_0^{2π} dθ = 2πia.   (8.192)
0

8.5.2 Constant potential


We describe a constant with the complex potential

W (z) = b. (8.193)


The contour integral is

∮_C W(z) dz = ∮_C b dz,   (8.194)
            = ∫_{θ=0}^{θ=2π} b iR̂ e^{iθ} dθ,   (8.195)
            = bR̂ e^{iθ} |_0^{2π},   (8.196)
            = 0,   (8.197)

because e^{0i} = e^{2πi} = 1.

8.5.3 Linear potential


We describe a linear field with the complex potential

W(z) = cz.   (8.198)

The contour integral is

∮_C W(z) dz = ∮_C cz dz = ∫_{θ=0}^{θ=2π} cR̂ e^{iθ} iR̂ e^{iθ} dθ,   (8.199)
            = icR̂² ∫_0^{2π} e^{2iθ} dθ = (cR̂²/2) e^{2iθ} |_0^{2π} = 0,   (8.200)

because e^{0i} = e^{4πi} = 1.

8.5.4 Quadrupole
A quadrupole potential is described by

W(z) = k/z².   (8.201)

Taking the contour integral, we find

∮_C (k/z²) dz = k ∫_0^{2π} (iR̂ e^{iθ}/(R̂² e^{2iθ})) dθ,   (8.202)
              = (ki/R̂) ∫_0^{2π} e^{−iθ} dθ = (ki/R̂) (1/(−i)) e^{−iθ} |_0^{2π} = 0.   (8.203)

So the only nonzero contour integral is for functions of the form W (z) = a/z. If we
continued, we would find all powers of z have a zero contour integral about the origin for
arbitrary contours except this special one.
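This pattern is simple to confirm numerically. The sketch below (the radius R and sample count are arbitrary choices) approximates ∮_C z^n dz on a circle for several integer powers n:

```python
import cmath, math

def contour_integral(n, R=1.3, N=4000):
    """Rectangle-rule approximation of the closed contour integral of z**n
    around the circle z = R exp(i theta); dz = i z dtheta on the circle."""
    total = 0.0 + 0.0j
    dth = 2.0 * math.pi / N
    for k in range(N):
        z = R * cmath.exp(1j * k * dth)
        total += (z ** n) * (1j * z) * dth
    return total

results = {n: contour_integral(n) for n in (-3, -2, -1, 0, 1, 2)}
print(results)   # only n = -1 gives 2 pi i; all other powers integrate to ~0
```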


8.6 Laurent series


Now it can be shown that any function can be expanded, much as for a Taylor series, as a
Laurent series:8

W(z) = . . . + C_{−2}(z − zo)^{−2} + C_{−1}(z − zo)^{−1} + C_0(z − zo)^0 + C_1(z − zo)^1 + C_2(z − zo)^2 + . . . .   (8.204)

In compact summation notation, we can say

W(z) = ∑_{n=−∞}^{∞} C_n (z − zo)^n.   (8.205)

Taking the contour integral of both sides we get

∮_C W(z) dz = ∮_C ∑_{n=−∞}^{∞} C_n (z − zo)^n dz,   (8.206)
            = ∑_{n=−∞}^{∞} C_n ∮_C (z − zo)^n dz.   (8.207)

From our just completed analysis, this has value 2πi only when n = −1, so

∮_C W(z) dz = C_{−1} 2πi.   (8.208)

Here C_{−1} is known as the residue of the Laurent series. In general we have the Cauchy
integral theorem which holds that if W(z) is analytic within and on a closed curve C except
for a finite number of singular points, then

∮_C W(z) dz = 2πi ∑ residues.   (8.209)
Let us get a simple formula for C_n. We first exchange m for n in Eq. (8.205) and say

W(z) = ∑_{m=−∞}^{∞} C_m (z − zo)^m.   (8.210)

Then we operate as follows:

W(z)/(z − zo)^{n+1} = ∑_{m=−∞}^{∞} C_m (z − zo)^{m−n−1},   (8.211)

∮_C W(z)/(z − zo)^{n+1} dz = ∮_C ∑_{m=−∞}^{∞} C_m (z − zo)^{m−n−1} dz,   (8.212)
                           = ∑_{m=−∞}^{∞} C_m ∮_C (z − zo)^{m−n−1} dz.   (8.213)

8
Pierre Alphonse Laurent, 1813-1854, Parisian engineer who worked on port expansion in Le Havre,
submitted his work on Laurent series for a Grand Prize in 1842, with the recommendation of Cauchy, but
was rejected because of a late submission.


Here C is any closed contour which has zo in its interior. The contour integral on the right
side only has a non-zero value when n = m. Let us then insist that n = m, giving

∮_C W(z)/(z − zo)^{n+1} dz = C_n ∮_C (z − zo)^{−1} dz = C_n (2πi).   (8.214)

We know from earlier analysis that the contour integral enclosing a simple pole such as found
on the right side has a value of 2πi. Solving, we find then that

C_n = (1/(2πi)) ∮_C W(z)/(z − zo)^{n+1} dz.   (8.215)
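Equation (8.215) can be tested numerically on a function whose Laurent coefficients are known in closed form. The test function e^z/z, the contour radius, and the sample count below are arbitrary choices of mine; for W(z) = e^z/z about zo = 0, the coefficients are C_n = 1/(n + 1)! for n ≥ −1 and C_n = 0 otherwise:

```python
import cmath, math

def laurent_coeff(W, n, z0=0.0, R=1.0, N=4000):
    """C_n = (1/(2 pi i)) * closed contour integral of W(z)/(z - z0)**(n+1) dz,
    Eq. (8.215), approximated on a circle of radius R about z0."""
    total = 0.0 + 0.0j
    dth = 2.0 * math.pi / N
    for k in range(N):
        z = z0 + R * cmath.exp(1j * k * dth)
        total += W(z) / (z - z0) ** (n + 1) * 1j * (z - z0) * dth
    return total / (2j * math.pi)

W = lambda z: cmath.exp(z) / z   # known Laurent series: sum of z**(m-1)/m!
coeffs = [laurent_coeff(W, n) for n in (-2, -1, 0, 1, 2)]
print([abs(c) for c in coeffs])  # ~ [0, 1, 1, 1/2, 1/6]
```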

If the closed contour C encloses no poles, then

∮_C W(z) dz = 0.   (8.216)

We next consider some examples also described by Mei.

Example 8.9
Use complex variable methods to evaluate

I = ∫_0^∞ dx/(1 + x²),   x ∈ R¹.   (8.217)

We note x is real, and so the use of complex variable methods is not yet obvious. We first note the
integrand is an even function with symmetry about x = 0. This allows us to rewrite the formula with
symmetric limits as

I = (1/2) ∫_{−∞}^{∞} dx/(1 + x²),   x ∈ R¹.   (8.218)

Let us replace the real variable x with a complex variable z = x + iy. We recognize however that the
path on which I is calculated has z being purely real. So

I = (1/2) ∫_{−∞+0i}^{∞+0i} dz/(1 + z²),   z ∈ C¹.   (8.219)

On the entire path of integration ℑ(z) = y = 0.


Now consider the integrand

1/(1 + z²) = 1/((z + i)(z − i)).   (8.220)

By inspection, it has two poles, one at z = i and the other at z = −i. We find the Laurent series
expansions near both poles. Let us first consider the pole at z = i in some detail. There are many ways


Figure 8.16: Contour integral for ∮_C dz/(1 + z²).

to analyze this. Let us define ẑ as the deviation of z from the pole: ẑ = z − i. So we have
1/(1 + z²) = 1/((z + i)(z − i)) = 1/((ẑ + 2i)ẑ),   (8.221)
           = (1/(2i)) (1/ẑ) (1/(1 + ẑ/(2i))),   (8.222)
           ∼ (1/(2i)) (1/ẑ) (1 − ẑ/(2i) + (ẑ/(2i))² − (ẑ/(2i))³ + . . .),   (8.223)
           ∼ −(i/2) (1/ẑ) (1 + iẑ/2 − ẑ²/4 − iẑ³/8 + . . .),   (8.224)
           ∼ −(i/2)(1/ẑ) + 1/4 + iẑ/8 − ẑ²/16 + . . . .   (8.225)

Returning to z, we then can easily show that the expansion near z = i is

1/(1 + z²) ≈ −(i/2)(z − i)^{−1} + (1/4)(z − i)^0 + (i/8)(z − i)^1 − (1/16)(z − i)^2 − . . . ,   (8.226)

in which −i/2 is the residue. At z = −i, we have by the same procedure

1/(1 + z²) ≈ (i/2)(z + i)^{−1} + (1/4)(z + i)^0 − (i/8)(z + i)^1 − (1/16)(z + i)^2 − . . . ,   (8.227)

in which i/2 is the residue.

The coefficient on (z ± i)−1 is the residue near each pole, in this case ±i/2.9
We now choose a closed contour C in the complex plane depicted in Fig. 8.16. Here we have

C = CR + CI , (8.228)
9
We could also use partial fraction expansion to achieve the same end. It is easily verified that
1/(1 + z²) = (i/2)/(z + i) − (i/2)/(z − i). So the residue for the pole at z = i is −i/2. Similarly, the
residue for the pole at z = −i is i/2.


where CR is the semicircular portion of the contour, and CI is the portion of the contour on the real
axis. We take the semicircle to have radius R and will study R → ∞. When R → ∞, CI becomes the
path for the original integral with which we are concerned. This contour C encloses the pole at z = i,
but does not enclose the second pole at z = −i. So when we apply Eq. (8.209), we will only need to
consider the residue at z = i. Applying Eq. (8.209), we can say
∮_C dz/(z² + 1) = 2πi (−i/2) = π,   (8.229)

where −i/2 is the residue at z = i.
Now we are really interested in the portion of C on the real axis, namely CI . So motivated, we rewrite
Eq. (8.219) as

I = (1/2) ∫_{CI} dz/(1 + z²),   (8.230)
  = (1/2) (∮_C dz/(1 + z²) − ∫_{CR} dz/(1 + z²)),   (8.231)
  = (1/2) (π − ∫_{CR} dz/(1 + z²)).   (8.232)

Now consider the integral applied on CR . On CR , we have z = Re^{iθ} with R → ∞. So dz = Rie^{iθ} dθ.
This gives

∫_{CR} dz/(1 + z²) = ∫_{CR} (1/(1 + R²e^{2iθ})) Rie^{iθ} dθ,   (8.233)
                   = iR ∫_0^π e^{iθ}/(1 + R²e^{2iθ}) dθ.   (8.234)

Now when we let R → ∞, we can neglect the 1 in the denominator of the integrand so as to get

∫_{CR} dz/(z² + 1) = iR ∫_0^π R^{−2} e^{−iθ} dθ,   (8.235)
                   = (i/R) ∫_0^π e^{−iθ} dθ,   (8.236)
                   = (i/R) (1/(−i)) e^{−iθ} |_0^π,   (8.237)
                   = −(1/R)(−1 − 1),   (8.238)
                   = 2/R.   (8.239)

Clearly as R → ∞, ∫_{CR} → 0. Thus, Eq. (8.232) reduces to

I = π/2.   (8.240)
Note that in this case, we could have directly evaluated the integral. If we take the transformation
x = tan θ, we get dx = dθ/cos² θ. We find

∫_0^∞ dx/(1 + x²) = ∫_{θ=0}^{θ=π/2} (1/(1 + tan² θ)) dθ/cos² θ = ∫_0^{π/2} dθ/(cos² θ + sin² θ) = ∫_0^{π/2} dθ = π/2.   (8.241)
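The residue machinery of this example can also be spot-checked numerically: compute the residue at z = i by integrating on a small circle, recover the closed-contour value π of Eq. (8.229), and, since the arc contribution vanishes, halve it. A sketch (the small-circle radius and sample count are arbitrary choices):

```python
import cmath, math

f = lambda z: 1.0 / (1.0 + z * z)

def residue(f, z0, r=0.1, N=2000):
    """Residue of f at z0 via (1/(2 pi i)) * contour integral on a small circle."""
    total = 0.0 + 0.0j
    dth = 2.0 * math.pi / N
    for k in range(N):
        z = z0 + r * cmath.exp(1j * k * dth)
        total += f(z) * 1j * (z - z0) * dth
    return total / (2j * math.pi)

res_i = residue(f, 1j)                    # expect -i/2, the residue at z = i
I = (2j * math.pi * res_i).real / 2.0     # half the closed-contour value

print(res_i, I)   # I = pi/2
```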


Figure 8.17: Contour integral for ∮_C (√z/(1 + z²)) dz.

Example 8.10
Use complex variable methods to evaluate

I = ∫_0^∞ (√x/(1 + x²)) dx,   x ∈ R¹.   (8.242)

This is similar to the previous example, except for the √x in the numerator of the integrand.
However, we will take a different approach to the contour integration. We first extend to the complex
domain and say

I = ∫_{0+0i}^{∞+0i} (√z/(1 + z²)) dz,   z ∈ C¹,   (8.243)

recognizing that our integration path is confined to the real axis. Now consider this integral in the
context of the closed contour depicted in Fig. 8.17. The closed contour C depicted here is

C = C+ + CR + C− . (8.244)

For our I, we are interested in the integral along C+ . The combination of C+ and C− is known as a
branch cut. We note that the integrand can be rewritten as
√ √ √ √
z z i z i z
= = − , (8.245)
1 + z2 (z − i)(z + i) 2(z + i) 2(z − i)


so there are poles at z = ±i, as indicated in Fig. 8.17. We find the Laurent series expansions near both
poles. At z = i, we find, omitting details of the analysis, which is analogous to finding Taylor series
coefficients,

√z/(1 + z²) ≈ −(i√i/2) (z − i)^{−1} + . . . .   (8.246)

At z = −i, we have

√z/(1 + z²) ≈ (i√(−i)/2) (z + i)^{−1} + . . . .   (8.247)
Now we need the square root to be single-valued. We can achieve this by defining

√z = √r e^{iθ/2},   θ ∈ [0, 2π].   (8.248)

With this, we see that √i = (e^{iπ/2})^{1/2} = e^{iπ/4} = (1 + i)/√2. And we see also that
√(−i) = (e^{3πi/2})^{1/2} = e^{3iπ/4} = (−1 + i)/√2. The sum of the residues is then

∑ residues = −(i√i/2) + (i√(−i)/2),   (8.249)
           = √i/(2i) − √(−i)/(2i),   (8.250)
           = (1/(2i)) (√i − √(−i)),   (8.251)
           = (1/(2i)) ((1 + i)/√2 − (−1 + i)/√2),   (8.252)
           = (1/(2i)) (2/√2),   (8.253)
           = 1/(√2 i).   (8.254)

So by Eq. (8.209), the closed contour integral is

∮_C (√z/(1 + z²)) dz = 2πi (1/(√2 i)) = √2 π.   (8.255)
Now consider the portion of the integral in the far field where we are on CR where z = Re^{iθ} and
dz = Rie^{iθ} dθ. This portion of the integral becomes

∫_{CR} (√z/(1 + z²)) dz = ∫_0^{2π} (√R e^{iθ/2}/(1 + R²e^{2iθ})) Rie^{iθ} dθ.   (8.256)

For large R, we can neglect the 1 in the denominator and we get

∫_{CR} (√z/(1 + z²)) dz = (i/√R) ∫_0^{2π} e^{−iθ/2} dθ.   (8.257)

While we could evaluate this integral, we see by inspection that as R → ∞ that it will go to zero; thus,
there is no contribution at infinity. Examining the integral then we see

∮_C = ∫_{C+} + ∫_{CR} + ∫_{C−},   (8.258)

with ∮_C = √2 π and ∫_{CR} = 0.


So we have now

√2 π = ∫_{C+} (√z/(1 + z²)) dz + ∫_{C−} (√z/(1 + z²)) dz.   (8.259)

Now on C+ , we have

√z = √x (e^{i0})^{1/2} = √x.   (8.260)

But on C− , because of the way we defined our branch cut, we have

√z = √x (e^{2πi})^{1/2} = √x e^{πi} = −√x.   (8.261)

So Eq. (8.259) becomes

√2 π = ∫_0^∞ (√x/(1 + x²)) dx + ∫_∞^0 (−√x/(1 + x²)) dx,   (8.262)
     = ∫_0^∞ (√x/(1 + x²)) dx + ∫_0^∞ (√x/(1 + x²)) dx,   (8.263)
     = 2 ∫_0^∞ (√x/(1 + x²)) dx,   (8.264)

∫_0^∞ (√x/(1 + x²)) dx = π/√2.   (8.265)
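The final result can be confirmed by direct quadrature. The sketch below (the truncation length T, the panel count, the substitution x = t², and the analytic tail estimate are my choices, not from the text) approximates the integral with the trapezoidal rule:

```python
import math

# Check of Eq. (8.265): integral of sqrt(x)/(1+x^2) over [0, inf) = pi/sqrt(2).
# The substitution x = t^2 gives the smooth integrand 2 t^2/(1 + t^4).
g = lambda t: 2.0 * t * t / (1.0 + t ** 4)

T, N = 1000.0, 200000
h = T / N
I = 0.5 * (g(0.0) + g(T)) * h + sum(g(k * h) for k in range(1, N)) * h
I += 2.0 / T   # analytic tail: integral from T to inf of ~ 2/t^2

print(I, math.pi / math.sqrt(2.0))
```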

8.7 Jordan’s lemma


The previous examples have shown us that often the portion of the contour integral in the
far field is negligible. Here we state without proof the generalization of this that is Jordan’s10
lemma, quoting liberally from Mei, pp. 246-247:

Jordan’s lemma If f (z) is analytic in ℑ(z) ≥ 0 except at poles, and |f (z)| → 0


on the semicircular arc CR in the upper half plane as R → ∞, then for m > 0
Z
lim f (z)eimz dz = 0. (8.266)
R→∞ CR

If m < 0 and f (z) is analytic in ℑ(z) ≤ 0 except at poles, and |f (z)| → 0 as


|z| → ∞ in the lower half plane then Jordan’s lemma holds along a semicircle
CR in the lower half plane.
Similarly, if f (z) is analytic in ℜ(z) ≤ 0 except at poles, and |f (z)| → 0 on
the semicircle CR in the left-half plane as R → ∞, then for m > 0
Z
lim f (z)emz dz = 0. (8.267)
R→∞ CR

10
Camille Jordan, 1838-1922, French engineer.


And if f(z) is analytic in ℜ(z) ≥ 0 except at poles, and |f(z)| → 0 on the
semicircle CR in the right-half plane as R → ∞, then for m < 0

lim_{R→∞} ∫_{CR} f(z) e^{mz} dz = 0.   (8.268)

8.8 Conformal mapping


Conformal mapping is a technique by which we can render results obtained for simple
geometries applicable to more complicated geometries. It can apply to any physical scenario
described by Laplace's equation.

8.8.1 Analog to steady two-dimensional heat transfer


Let us largely use the notation we developed in Sec. 8.2 that was motivated by incompressible,
irrotational fluid mechanics, but think of it in terms of a steady two-dimensional heat transfer
problem. Here we shall think of temperature T as a potential φ, so T → φ. Thus, Eq. (1.104)
for the steady temperature field is here

∂²φ/∂x² + ∂²φ/∂y² = 0.   (8.269)

This equation employs Fourier’s law, Eq. (1.100), which is recast as

u = −k ∂φ/∂x,   (8.270)
v = −k ∂φ/∂y.   (8.271)

Here we take qx → u and qy → v. Or with u = (u, v)T , we could describe Fourier’s law
as u = −k∇φ. Loosely speaking u is the diffusion velocity of energy in the x direction
and v is the diffusion velocity of energy in the y direction. In the steady-state limit, our
two-dimensional energy conservation equation Eq. (1.99) is recast as

∇^T · u = 0,   (8.272)
∂u/∂x + ∂v/∂y = 0.   (8.273)

Differentiating Eq. (8.270) with respect to y and Eq. (8.271) with respect to x, we find

∂u/∂y = −k ∂²φ/∂y∂x,   (8.274)
∂v/∂x = −k ∂²φ/∂x∂y.   (8.275)


Assuming sufficient continuity and differentiability of φ, we subtract Eq. (8.274) from Eq. (8.275)
to get

∂v/∂x − ∂u/∂y = 0.   (8.276)

This is the analog of the irrotationality condition, Eq. (8.4).

8.8.2 Mapping of one geometry to another


We look at some problems discussed also by Churchill, Brown, and Verhey. Let us imagine
the notion of a complex function as a mapping from one plane to another. We shall see that
this leads us to some powerful and surprising results with relevance to Laplace’s equation.
In short, we will see that we can solve Laplace’s equation in a simple domain and use a
mapping to infer a solution in a more complicated domain.

8.8.2.1 Solution in a half-plane


Let us explore this with an example. Consider the complex function

w(z) = ln((z − 1)/(z + 1)).   (8.277)

This is similar to the doublet studied in Sec. 8.4.6 except rather than letting ǫ → 0, we take
ǫ = −1. Now z = x + iy, and we restrict such that x ∈ R¹, y ∈ R¹. Moreover w has a real
and imaginary part, yielding the form

ξ(x, y) + iη(x, y) = ℜ(w) + iℑ(w) = ln((z − 1)/(z + 1)).   (8.278)
In short, we have used a complex function to define a special type of coordinate
transformation in which
(x, y) → (ξ, η), (8.279)
via
ξ = ξ(x, y), (8.280)
η = η(x, y), (8.281)
where the transformation is restricted by the properties of the chosen function w(z) =
w(x + iy). We have also ξ ∈ R1 and η ∈ R1 . Now we note that

ln z = ln(re^{iθ}),   (8.282)
     = ln r + ln e^{iθ},   (8.283)
     = ln r + iθ,   (8.284)
     = ln |z| + i arg z.   (8.285)


We extend this result to express w as

w(z) = ln((z − 1)/(z + 1)) = ln|(z − 1)/(z + 1)| + i arg((z − 1)/(z + 1)),   (8.286)

in which the real part is ξ and the imaginary part's coefficient is η.

We see then that our coordinate η(x, y) is given by

η = arg((z − 1)/(z + 1)),   (8.287)
η(x, y) = arg((x − 1 + iy)/(x + 1 + iy)),   (8.288)
        = arg((x + 1 − iy)(x − 1 + iy)/((x + 1 − iy)(x + 1 + iy))),   (8.289)
        = arg((x² + y² − 1 + i2y)/((x + 1)² + y²)),   (8.290)
        = Tan⁻¹((x² + y² − 1)/((x + 1)² + y²), 2y/((x + 1)² + y²)),   (8.291)
        = Tan⁻¹(x² + y² − 1, 2y).   (8.292)
Our other coordinate ξ(x, y) is given by

ξ = ln|(z − 1)/(z + 1)|,   (8.293)
ξ(x, y) = ln|(x − 1 + iy)/(x + 1 + iy)|,   (8.294)
        = ln|(x + 1 − iy)(x − 1 + iy)/((x + 1 − iy)(x + 1 + iy))|,   (8.295)
        = ln|(x² + y² − 1 + i2y)/((x + 1)² + y²)|,   (8.296)
        = ln(√((x² + y² − 1)² + 4y²)/((x + 1)² + y²)).   (8.297)
A plot of contours of constant ξ and η in the x − y plane is given in Fig. 8.18a. Also
shown are four points on the x axis, A for which x → −∞, B at x = −1, C at x = 1, and
D at x → ∞; additionally, two points are shown in the region for y > 0. We have E at
(x, y) = (−1, 3) and F at (x, y) = (1, 3).
Let us examine a special contour, namely

lim_{y→0+} η(x, y).   (8.298)

When y = 0, there are problems at x = ±1. Setting aside a detailed formal analysis, which
is possible, we learn much by simply plotting η(x, y = 0.0001), given in Fig. 8.19. We see


Figure 8.18: a) Contours of ξ(x, y) and η(x, y) induced by w(z) = ln((z − 1)/(z + 1)), b)
The ξ − η plane for this mapping.

that for x ∈ [−1, 1], η maps to π with a small error due to the finite value of y. For
x < −1 or x > 1, η maps to 0. A plot of contours of x ∈ (−∞, ∞) for y ≈ 0 in the ξ − η
plane is given in Fig. 8.18b. Also shown are the mappings of the points A, B, C, D, E, and
F . Importantly E and F , which have y > 0, lie within η ∈ [0, π].
Up to now we have simply discussed coordinate transformations. But let us now imagine
that in the x − y plane we have the temperature, which is the potential φ, defined on the
boundary y = 0, and we are considering the domain y > 0. Let us further say that we have

φ(x, y = 0) = 0,   x < −1,
            = 1,   x ∈ [−1, 1],   (8.299)
            = 0,   x > 1.

This has been indicated on Fig. 8.18. We need to solve

∂²φ/∂x² + ∂²φ/∂y² = 0,   (8.300)


Figure 8.19: Plot of η(x, y = 0.0001) induced by w(z) = ln((z − 1)/(z + 1)).

with these boundary conditions. This is difficult because there is a singularity in the
boundary conditions at x = ±1. Remarkably, we can achieve a solution by employing our
mapping and solving

∂²φ/∂ξ² + ∂²φ/∂η² = 0,   (8.301)

with the transformed boundary conditions,

φ(ξ, 0) = 0,   φ(ξ, π) = 1.   (8.302)

We note from Fig. 8.18 that in the transformed space, the boundary conditions are
particularly simple. In fact, by inspection, we note that

φ(ξ, η) = (1/π) η,   (8.303)

satisfies Laplace's equation in the transformed space, Eq. (8.301), as well as the transformed
boundary conditions at η = 0 and η = π. We then transform back to x − y space and present
our solution for the temperature field as

φ(x, y) = (1/π) Tan⁻¹(x² + y² − 1, 2y).   (8.304)
In terms of the ordinary arctan function, we could say

φ(x, y) = (2/π) tan⁻¹(2y/(x² + y² − 1 + √((x² + y² − 1)² + 4y²))).   (8.305)

Direct calculation in x − y space for either form reveals that both Laplace's equation and
the boundary conditions at y = 0 are satisfied. The temperature field φ(x, y) is
plotted in Fig. 8.20.
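Such a direct calculation is easy to automate. A minimal Python sketch (the interior sample points and finite-difference step are arbitrary choices) writes the two-argument Tan⁻¹ of Eq. (8.304) as atan2, checks a five-point Laplacian at interior points, and checks the boundary values:

```python
import math

def phi(x, y):
    """Temperature field of Eq. (8.304): (1/pi) * atan2(2y, x^2 + y^2 - 1)."""
    return math.atan2(2.0 * y, x * x + y * y - 1.0) / math.pi

# Five-point finite-difference Laplacian should vanish in the upper half-plane.
h = 1e-4
def lap(x, y):
    return (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
            - 4.0 * phi(x, y)) / (h * h)

residuals = [abs(lap(x, y)) for (x, y) in ((0.3, 0.8), (-1.7, 0.4), (2.2, 1.5))]
bc_inner = phi(0.5, 1e-12)   # -> 1 on the boundary for |x| < 1
bc_outer = phi(3.0, 1e-12)   # -> 0 on the boundary for |x| > 1

print(max(residuals), bc_inner, bc_outer)
```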


Figure 8.20: The temperature field φ(x, y) satisfying ∇²φ = 0 with φ(x, 0) = 0 for x < −1
and x > 1 and with φ(x, 0) = 1 for x ∈ [−1, 1].

8.8.2.2 Solution in a semi-infinite strip


Consider now the complex function
w(z) = sin z. (8.306)
We need to understand trigonometric functions of complex variables. We can do so by simply
extending their real analogs. So, drawing upon Eq. (8.60), we say
w(z) = sin z = (e^{iz} − e^{−iz})/(2i),   (8.307)
     = (e^{i(x+iy)} − e^{−i(x+iy)})/(2i),   (8.308)
     = (e^{ix−y} − e^{−ix+y})/(2i),   (8.309)
     = (e^{−y}(cos x + i sin x) − e^{y}(cos x − i sin x))/(2i),   (8.310)
     = sin x ((e^{y} + e^{−y})/2) + i cos x ((e^{y} − e^{−y})/2),   (8.311)
     = sin x cosh y + i cos x sinh y.   (8.312)

Similar to before, the complex function has defined a coordinate transformation

x∗(x, y) = sin x cosh y,   (8.313)


Figure 8.21: Transformation induced by w(z) = sin z.

y∗ (x, y) = cos x sinh y. (8.314)

We sketch this transformation in Fig. 8.21. We note that the semi-infinite strip lying between
x = ±π/2 has been mapped to the x∗ axis. And if we take our boundary conditions to be as
sketched with φ(±π/2, y) = 0 and φ(x, 0) = 1 for x ∈ [−π/2, π/2], we see that in the (x∗ , y∗ )
system we have precisely the same problem we solved in the previous section. This suggests
that we can simply adopt the solution of the previous section, in a transformed coordinate
system. Take then Eq. (8.304) as the solution in the (x∗ , y∗ ) system:
φ(x∗, y∗) = (1/π) Tan⁻¹(x∗² + y∗² − 1, 2y∗).   (8.315)

Then we use Eqs. (8.313, 8.314) to get φ(x, y):

φ(x, y) = (1/π) Tan⁻¹((sin x cosh y)² + (cos x sinh y)² − 1, 2 cos x sinh y).   (8.316)

Detailed use of trigonometric identities reveals that Eq. (8.316) can be rewritten as

φ(x, y) = (2/π) arctan(cos x/sinh y).   (8.317)

Direct calculation reveals that ∇2 φ = 0, and that φ(±π/2, y) = 0 and φ(x, 0) = 1. The
temperature field φ(x, y) is plotted in Fig. 8.22.
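The checks just described can be carried out numerically. A sketch (sample points and finite-difference step are arbitrary choices; writing Eq. (8.317) with atan2 handles y = 0 cleanly, an implementation choice of mine):

```python
import math

def phi(x, y):
    """Eq. (8.317), written with atan2 so that sinh(y) = 0 is handled cleanly."""
    return (2.0 / math.pi) * math.atan2(math.cos(x), math.sinh(y))

h = 1e-4
def lap(x, y):
    return (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
            - 4.0 * phi(x, y)) / (h * h)

res = max(abs(lap(x, y)) for (x, y) in ((0.0, 0.5), (1.0, 1.5), (-1.2, 0.3)))
wall = max(abs(phi(math.pi / 2.0, y)) for y in (0.5, 1.0, 2.0))  # phi = 0 on walls
base = phi(0.7, 0.0)   # bottom of the strip, where phi should be 1

print(res, wall, base)
```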

8.8.2.3 Solution in a quarter-plane


Consider now the transformation

w(z) = sin−1 z. (8.318)

This can be rewritten as

z = sin w, (8.319)


Figure 8.22: For x ∈ [−π/2, π/2], y ∈ [0, ∞], the temperature field φ(x, y) satisfying
∇²φ = 0 with φ(±π/2, y) = 0, φ(x, 0) = 1.

which we take here to directly induce the transformation

x(ξ, η) = sin ξ cosh η,   (8.320)
y(ξ, η) = cos ξ sinh η.   (8.321)

Here, we consider the quarter-plane in the x−y system and its image under transformation
to the ξ−η system as sketched in Fig. 8.23. In the x−y quarter-plane, we consider φ(0, y) = 0.
For y = 0 and x ∈ [0, 1], we take the Neumann condition ∂φ/∂y = 0. For y = 0, x > 1,
we take the Dirichlet condition φ(x > 1, 0) = 1. Under the mapping, the quarter-plane
transforms to a semi-infinite strip confined to ξ ∈ [0, π/2] and η > 0.

Figure 8.23: Transformation induced by w(z) = sin⁻¹ z for the quarter-plane.


The solution

φ(ξ, η) = (2/π) ξ,   (8.322)
satisfies ∂²φ/∂ξ² + ∂²φ/∂η² = 0 and all boundary conditions. To return to the x − y plane
we must find the inverse transformation. Squaring both Eqs. (8.320) and (8.321) and scaling
gives

x²/sin² ξ = cosh² η,   (8.323)
y²/cos² ξ = sinh² η.   (8.324)

Now because cosh² u − sinh² u = 1, we have

x²/sin² ξ − y²/cos² ξ = 1.   (8.325)
We can rewrite this as

x²/sin² ξ − y²/(1 − sin² ξ) = 1,   (8.326)

and solve for first sin ξ and then ξ to get

ξ = arcsin((1/2)(√((x + 1)² + y²) − √((x − 1)² + y²))).   (8.327)

Then we see that

φ(x, y) = (2/π) arcsin((1/2)(√((x + 1)² + y²) − √((x − 1)² + y²))).   (8.328)
Direct substitution reveals that ∂²φ/∂x² + ∂²φ/∂y² = 0 and that the boundary conditions
are satisfied. The temperature field φ(x, y) is plotted in Fig. 8.24.
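The boundary conditions of this quarter-plane problem can be checked numerically. A sketch (sample points and difference step d are arbitrary choices; the evenness-in-y argument for the Neumann check is mine):

```python
import math

def phi(x, y):
    """Eq. (8.328) for the quarter-plane problem."""
    s = 0.5 * (math.hypot(x + 1.0, y) - math.hypot(x - 1.0, y))
    return (2.0 / math.pi) * math.asin(s)

side = max(abs(phi(0.0, y)) for y in (0.2, 1.0, 3.0))          # phi(0, y) = 0
bottom = max(abs(phi(x, 0.0) - 1.0) for x in (1.5, 2.0, 5.0))  # phi(x>1, 0) = 1

# Neumann condition on y = 0 for x in (0, 1): phi is even in y, so the
# one-sided difference quotient shrinks like the step size d.
d = 1e-6
neumann = max(abs(phi(x, d) - phi(x, 0.0)) / d for x in (0.3, 0.6, 0.9))

print(side, bottom, neumann)
```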

Problems


Figure 8.24: For x ∈ [0, ∞], y ∈ [0, ∞], the temperature field φ(x, y) satisfying ∇²φ = 0
with ∂φ/∂y = 0 for y = 0, x ∈ [0, 1], φ(x, 0) = 1 for x > 1, and φ(0, y) = 0.



Chapter 9

Integral transformation methods

see Mei, Chapters 7, 10.

Here we consider integral transformation methods.

9.1 Fourier transformations


We have familiarity with the Fourier series representation of functions, often formed using
a set of orthogonal basis functions with discrete values of wavenumber. The Fourier
transformation may be considered a limit in which the wavenumber varies continuously. To see
how to arrive at this limit, let us begin with a more general consideration of a Fourier series
representation of a function. Let us seek to expand f(x) in a Fourier series expansion in
terms of orthogonal basis functions u_n(x) as

f(x) = ∑_{n=−∞}^{∞} c_n u_n(x).   (9.1)

Taking the basis functions to be u_n(x) = e^{inπx/L}, we express our expansion as

f(x) = ∑_{n=−∞}^{∞} c_n e^{inπx/L},   n = 0, ±1, ±2, . . .   (9.2)

We recognize that via Euler’s formula, Eq. (8.39), einπx/L = cos(nπx/L) + i sin(nπx/L), that
this can be thought of as an expansion in trigonometric functions. Now for convenience, we
have chosen basis functions, einπx/L for n = 0, ±1, ±2, . . ., that are orthogonal. We could also
have made the less restrictive assumption that the un (x) were at most linearly independent,
at the expense of added complication. We also could have scaled our orthogonal basis
functions to render them orthonormal, though this useful practice is not commonly done.
We recall that for complex functions, taking the inner product on the domain x ∈ [−L, L]


requires a complex conjugation, so


⟨u_m(x), u_n(x)⟩ = ∫_{−L}^{L} ū_m(x) u_n(x) dx.   (9.3)

So for us

⟨e^{imπx/L}, e^{inπx/L}⟩ = ∫_{−L}^{L} e^{−imπx/L} e^{inπx/L} dx,   (9.4)
                         = ∫_{−L}^{L} e^{i(n−m)πx/L} dx,   (9.5)
                         = (L/(i(n − m)π)) e^{i(n−m)πx/L} |_{−L}^{L},   n ≠ m,   (9.6)
                         = (2L/((n − m)π)) (e^{i(n−m)π} − e^{−i(n−m)π})/(2i),   n ≠ m,   (9.7)
                         = (2L/((n − m)π)) sin((n − m)π),   n ≠ m,   (9.8)
                         = 0,   n ≠ m.   (9.9)
If n = m, Eq. (9.5) reduces to
Z L
imπx/L inπx/L
he ,e i = dx, n = m, (9.10)
−L

= , x|L−L n = m, (9.11)
= 2L, n = m. (9.12)
In summary,

imπx/L inπx/L 0, m 6= n,
he ,e i = 2Lδmn = (9.13)
2L, m = n.
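The orthogonality relation of Eq. (9.13) lends itself to a quick numerical check. The following sketch is an illustration added here, not from the notes; the value of L, the mode indices, and the node count N are arbitrary choices, and the inner product is approximated with a midpoint quadrature:

```python
import numpy as np

L = 2.0
N = 4000
dx = 2 * L / N
x = -L + (np.arange(N) + 0.5) * dx  # midpoint nodes on [-L, L]

def inner(m, n):
    # <u_m, u_n> = integral of conj(u_m) u_n dx, via a midpoint sum
    um = np.exp(1j * m * np.pi * x / L)
    un = np.exp(1j * n * np.pi * x / L)
    return np.sum(np.conj(um) * un) * dx

off_diag = inner(3, 5)   # m != n: should vanish
diag = inner(4, 4)       # m == n: should equal 2L = 4
print(abs(off_diag), diag.real)
```

Because the integrands are periodic on the interval, the midpoint rule here is exact up to roundoff.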
We next find the Fourier coefficients c_n. Operating on Eq. (9.2), we find

\langle e^{im\pi x/L}, f(x)\rangle = \langle e^{im\pi x/L}, \sum_{n=-\infty}^{\infty} c_n e^{in\pi x/L}\rangle,   (9.14)
\int_{-L}^{L} e^{-im\pi x/L} f(x)\, dx = \sum_{n=-\infty}^{\infty} c_n\underbrace{\int_{-L}^{L} e^{i(n-m)\pi x/L}\, dx}_{=2L\delta_{mn}},   (9.15)
= 2L\, c_m.   (9.16)

Exchanging m for n and x for ξ, we can say

c_n = \frac{1}{2L}\int_{-L}^{L} f(\xi)\, e^{-in\pi\xi/L}\, d\xi, \quad n = 0, \pm 1, \pm 2, \ldots,   (9.17)


gives the expression for the Fourier coefficients c_n. Using this in Eq. (9.2), we can say

f(x) = \sum_{n=-\infty}^{\infty}\underbrace{\left(\frac{1}{2L}\int_{-L}^{L} f(\xi)\, e^{-in\pi\xi/L}\, d\xi\right)}_{c_n} e^{in\pi x/L}, \quad n = 0, \pm 1, \pm 2, \ldots,   (9.18)
= \frac{1}{2L}\sum_{n=-\infty}^{\infty}\int_{-L}^{L} f(\xi)\, e^{in\pi(x-\xi)/L}\, d\xi, \quad n = 0, \pm 1, \pm 2, \ldots.   (9.19)

Now let us allow L → ∞. Following Mei’s analysis on his pp. 132-133, we define αn such
that

\alpha_n = \frac{n\pi}{L}.   (9.20)

So we might have α_5 = 5π/L and α_4 = 4π/L; thus, α_5 − α_4 = π/L. Generalizing, we can say

\Delta\alpha = \alpha_{n+1} - \alpha_n = \frac{(n+1)\pi}{L} - \frac{n\pi}{L} = \frac{\pi}{L}.   (9.21)

For convenience, we now define

\hat f(\alpha_n, x) = \frac{1}{2\pi}\int_{-L}^{L} f(\xi)\, e^{i\alpha_n(x-\xi)}\, d\xi.   (9.22)

Using the definition of Eq. (9.22) in Eq. (9.19), we get



f(x) = \frac{1}{2L}\sum_{n=-\infty}^{\infty} 2\pi\,\hat f(\alpha_n, x),   (9.23)
= \sum_{n=-\infty}^{\infty} \hat f(\alpha_n, x)\,\Delta\alpha.   (9.24)

This appears much as the rectangular rule for discrete approximation of integrals of continuous functions. Now as we let L → ∞, we see that ∆α → 0, and Eq. (9.24) passes to the limit of a Riemann integral:

f(x) = \int_{-\infty}^{\infty} \hat f(\alpha, x)\, d\alpha.   (9.25)

We also see that as L → ∞, Eq. (9.22) becomes

\hat f(\alpha, x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\, e^{i\alpha(x-\xi)}\, d\xi,   (9.26)

© 06 February 2024. J. M. Powers.


234 CHAPTER 9. INTEGRAL TRANSFORMATION METHODS

provided \int_{-\infty}^{\infty} |f(x)|\, dx < \infty. We combine Eqs. (9.25, 9.26) to form

f(x) = \int_{-\infty}^{\infty}\underbrace{\frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\, e^{i\alpha(x-\xi)}\, d\xi}_{\hat f(\alpha, x)}\, d\alpha,   (9.27)
= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)\, e^{i\alpha(x-\xi)}\, d\xi\, d\alpha.   (9.28)

Let us define the Fourier transformation F of the function f (ξ) as follows:


F(f(\xi)) = F(\alpha) = \int_{-\infty}^{\infty} f(\xi)\, e^{-i\alpha\xi}\, d\xi, \quad \text{Fourier transformation}.   (9.29)

The Fourier transformation is somewhat analogous to the discrete Eq. (9.17), though they
differ by a leading constant, 1/(2L), which has no clear analog in the limit L → ∞. So F is
somewhat analogous to the cn from a discrete Fourier series. Next operate on Eq. (9.28) to
get the inverse Fourier transformation:
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\underbrace{\int_{-\infty}^{\infty} f(\xi)\, e^{-i\alpha\xi}\, d\xi}_{F(\alpha)}\, e^{i\alpha x}\, d\alpha,   (9.30)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, e^{i\alpha x}\, d\alpha, \quad \text{inverse Fourier transformation}.   (9.31)

Note that Eq. (9.31) corrects an error in Mei’s Eq. (7.1.8) on his p. 133. We take f (x) to
describe our function in the spatial domain and its image F (α) to represent our function in
the so-called spectral domain. We also note that many texts exchange ξ for x and rewrite Eq. (9.29) as

F(f(x)) = F(\alpha) = \int_{-\infty}^{\infty} f(x)\, e^{-i\alpha x}\, dx.   (9.32)

We note the Fourier transformation is a linear operator, so


F (af (x) + bg(x)) = aF (f (x)) + bF (g(x)). (9.33)

Example 9.1
Find the Fourier transformation of f (x) = δ(x − x0 ), with x0 ∈ R1 .

Applying Eq. (9.32), we get for the Dirac delta function


F(\alpha) = \int_{-\infty}^{\infty} \delta(x - x_0)\, e^{-i\alpha x}\, dx,   (9.34)
= e^{-i\alpha x_0},   (9.35)
= \cos\alpha x_0 - i\sin\alpha x_0.   (9.36)
Note


Figure 9.1: The Dirac delta function δ(x) and its Fourier transformation.

• If f (x) is symmetric about x = 0, implying here that x0 = 0, F (α) is purely real, and specifically here
is

F (α) = 1. (9.37)

• Loss of symmetry of f (x) induces an imaginary component of F (α).


• The Dirac1 delta function is highly nonuniform, i.e. localized, in the x-domain; however, its image
in the spectral domain is uniform throughout. This reflects the notion that the Dirac delta function
contains information at all frequencies.
For x0 = 0, the function and its Fourier transformation are plotted in Fig. 9.1.

Example 9.2
Find the Fourier transformation of f (x) = δ(x + x0 ) + δ(x − x0 ), with x0 ∈ R1 .

Applying Eq. (9.32), we get


F(\alpha) = \int_{-\infty}^{\infty} (\delta(x + x_0) + \delta(x - x_0))\, e^{-i\alpha x}\, dx,   (9.38)
= e^{i\alpha x_0} + e^{-i\alpha x_0},   (9.39)
= 2\cos\alpha x_0.   (9.40)

Note
• Here f (x) is symmetric about x = 0, and F (α) is purely real,
• Again, the function is nonuniform in the spatial domain and more uniform in the spectral domain.
For x0 = 1, the function and its Fourier transformation are plotted in Fig. 9.2.

1
Paul Adrien Maurice Dirac, 1902-1982, English physicist.


Figure 9.2: The function f(x) = δ(x − 1) + δ(x + 1) and its Fourier transformation.

Figure 9.3: The function f(x) = δ(x + 1) − δ(x − 1) and the imaginary component of its Fourier transformation.

Example 9.3
Find the Fourier transformation of f (x) = δ(x + x0 ) − δ(x − x0 ), with x0 ∈ R1 .

Applying Eq. (9.32), we get


F(\alpha) = \int_{-\infty}^{\infty} (\delta(x + x_0) - \delta(x - x_0))\, e^{-i\alpha x}\, dx,   (9.41)
= e^{i\alpha x_0} - e^{-i\alpha x_0},   (9.42)
= 2i\sin\alpha x_0.   (9.43)

Note
• Here f (x) is anti-symmetric about x = 0, and F (α) is purely imaginary,
• Again the function is nonuniform in the spatial domain and more uniform in the spectral domain.
For x0 = 1, the function and the imaginary component of its Fourier transformation are plotted in
Fig. 9.3.


Figure 9.4: A top hat function and its Fourier transformation.

Example 9.4
Find the Fourier transformation of the top hat function
f(x) = \frac{1}{2a}\left(H(x + a) - H(x - a)\right).   (9.44)

Here f (x) is symmetric about x = 0, so we expect a real-valued Fourier transformation. Note that
the width of the top hat is 2a and the height is 1/(2a), so the area under the top hat is unity. Thus
as a → 0, our top hat approaches a Dirac delta function. Applying Eq. (9.32), we get for our top hat
function
F(\alpha) = \frac{1}{2a}\int_{-\infty}^{\infty} (H(x + a) - H(x - a))\, e^{-i\alpha x}\, dx,   (9.45)
= \frac{1}{2a}\int_{-a}^{a} e^{-i\alpha x}\, dx,   (9.46)
= \frac{1}{2a}\left(\frac{1}{-i\alpha}\, e^{-i\alpha x}\Big|_{-a}^{a}\right),   (9.47)
= -\frac{1}{2ia\alpha}\left(e^{-ia\alpha} - e^{ia\alpha}\right),   (9.48)
= \frac{1}{a\alpha}\,\frac{e^{ia\alpha} - e^{-ia\alpha}}{2i},   (9.49)
= \frac{\sin a\alpha}{a\alpha}.   (9.50)

The function and its Fourier transformation are plotted in Fig. 9.4 for a = 1. Note that in the
transformed space, F is symmetric and non-singular at α = 0. Taylor series of F (α) about α = 0
verifies this as F (α) ∼ 1 − a2 α2 /6 + a4 α4 /120 − . . . . If we were to broaden the top hat by increasing
a, we would narrow its Fourier transformation, F (α). Conversely, if we were to narrow the top hat by
decreasing a, we would broaden its Fourier transformation.
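The result of Eq. (9.50) can also be checked by direct numerical quadrature. The following sketch is illustrative (the chosen a, α, and node count are arbitrary); a midpoint rule approximates the transform integral:

```python
import numpy as np

a, alpha = 1.0, 2.5
N = 20000
dx = 2 * a / N
x = -a + (np.arange(N) + 0.5) * dx          # midpoint nodes on [-a, a]
# F(alpha) = (1/(2a)) * integral of e^{-i alpha x} over [-a, a]
F = np.sum(np.exp(-1j * alpha * x)) * dx / (2 * a)
exact = np.sin(a * alpha) / (a * alpha)
print(F.real, exact)  # agree; imaginary part ~0 since f(x) is even
```

The vanishing imaginary part confirms the symmetry argument made in the example.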


Figure 9.5: A Gaussian function and its Fourier transformation.

Example 9.5
Find the Fourier transformation of the Gaussian function f(x) = e^{-x^2/2}.

Applying Eq. (9.32), we get for our Gaussian function

F(\alpha) = \int_{-\infty}^{\infty} e^{-x^2/2}\, e^{-i\alpha x}\, dx,   (9.51)
= \sqrt{2\pi}\, e^{-\alpha^2/2}.   (9.52)
The function and its Fourier transformation are plotted in Fig. 9.5. Remarkably, it maps into a function
of the same form as its generator. So this Gaussian has identical spatial and spectral localization.
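A quadrature check of Eq. (9.52) is straightforward; the truncation length and node count below are illustrative choices, relying on the rapid decay of the Gaussian:

```python
import numpy as np

# Midpoint-rule check that e^{-x^2/2} transforms to sqrt(2 pi) e^{-alpha^2/2}
N, Lx = 4000, 20.0
dx = 2 * Lx / N
x = -Lx + (np.arange(N) + 0.5) * dx
alphas = np.array([0.0, 1.0, 2.0])
F = np.array([np.sum(np.exp(-x**2 / 2) * np.exp(-1j * a * x)) * dx
              for a in alphas])
exact = np.sqrt(2 * np.pi) * np.exp(-alphas**2 / 2)
err = np.max(np.abs(F - exact))
print(err)  # tiny: the rule is spectrally accurate for this decaying integrand
```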

Example 9.6
Find the Fourier transformation of a cosine whose amplitude is modulated by a Gaussian function
2
/2
f (x) = e−x cos ax. (9.53)

Applying Eq. (9.32), we get


Z ∞
2
/2
F (α) = e−x (cos ax)e−iαx dx, (9.54)
−∞
r
π 2
= (1 + e2aα )e−(a+α) /2 . (9.55)
2


Figure 9.6: A Gaussian modulated cosine function and its Fourier transformation.

For a = 10, the function and its Fourier transformation are plotted in Fig. 9.6. The function f(x) appears as a pulse which oscillates. The pulse width is dictated by the exponential function. If we were to weaken the amplitude modulation, say by taking f(x) = exp(−x²/20) cos(10x), we would find the pulse width increases in the x domain while the spike width decreases in the frequency domain. This suggests the function is not spatially localized but is spectrally localized. We see the single mode with wavenumber 10 is reflected in spectral space at α = ±10. We might be surprised to see the mirror image at −10. This feature is part of all Fourier analysis, and is known as an aliasing effect. Only one of the peaks gives us the clue to what the wavenumber of the generating function was.
Let us add another frequency mode and consider the Fourier transformation of
 
f(x) = e^{-x^2/2}\left(\cos 10x + \frac{1}{2}\cos 50x\right).   (9.56)

The Fourier transformation can be shown to be


\sqrt{\frac{\pi}{2}}\, e^{-\frac{\alpha^2}{2} - 1250}\left(2 e^{1200}\cosh 10\alpha + \cosh 50\alpha\right).   (9.57)

The function and its Fourier transformation are plotted in Fig. 9.7. We clearly see the two peaks at
α = 10 and α = 50, as well as their aliases.

Example 9.7
Find the Fourier transformation of a sine whose amplitude is modulated by a Gaussian function
f(x) = e^{-x^2/2}\sin ax.   (9.58)


Figure 9.7: A Gaussian modulated sum of cosines and its Fourier transformation.

Figure 9.8: A Gaussian modulated sine function and its Fourier transformation.

Applying Eq. (9.32), we get


F(\alpha) = \int_{-\infty}^{\infty} e^{-x^2/2}(\sin ax)\, e^{-i\alpha x}\, dx,   (9.59)
= -i\sqrt{\frac{\pi}{2}}\,(-1 + e^{2a\alpha})\, e^{-(a+\alpha)^2/2}.   (9.60)

For a = 10, the function and its Fourier transformation are plotted in Fig. 9.8. In contrast to the even
cosine function of the previous example, which induced a purely real F (α), the odd sine function
induces a purely imaginary F (α). Other features are similar to the previous example involving cosine.

Example 9.8
If u = u(x, t) and u → 0 as x → ±∞, find the Fourier transformation of ∂u/∂x.


Let us start by taking

F(u(x, t)) = U(\alpha, t) = \int_{-\infty}^{\infty} u(x, t)\, e^{-i\alpha x}\, dx.   (9.61)

We then have

F\!\left(\frac{\partial u}{\partial x}\right) = \int_{-\infty}^{\infty} \frac{\partial u}{\partial x}\, e^{-i\alpha x}\, dx,   (9.62)
= e^{-i\alpha x}\, u\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} (-i\alpha)\, u\, e^{-i\alpha x}\, dx.   (9.63)

Because we have insisted that u vanish as x → ±∞, this reduces to

F\!\left(\frac{\partial u}{\partial x}\right) = i\alpha\int_{-\infty}^{\infty} u\, e^{-i\alpha x}\, dx,   (9.64)
= i\alpha\, U(\alpha, t).   (9.65)

In general, we can say that

F\!\left(\frac{\partial^n u}{\partial x^n}\right) = (i\alpha)^n\, U(\alpha, t).   (9.66)
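Equation (9.66) can be checked numerically for n = 1 using the Gaussian of Example 9.5, whose derivative is known in closed form; the grid parameters below are illustrative:

```python
import numpy as np

N, Lx = 4000, 20.0
dx = 2 * Lx / N
x = -Lx + (np.arange(N) + 0.5) * dx
alpha = 1.3
u = np.exp(-x**2 / 2)
ux = -x * np.exp(-x**2 / 2)            # exact derivative of u
# Quadrature approximations to the transforms of u and du/dx
U = np.sum(u * np.exp(-1j * alpha * x)) * dx
Ux = np.sum(ux * np.exp(-1j * alpha * x)) * dx
err = abs(Ux - 1j * alpha * U)
print(err)  # ~0, consistent with F(du/dx) = i alpha U(alpha)
```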

Example 9.9
Solve the heat equation with general initial conditions using Fourier transformations:
\frac{\partial u}{\partial t} = \nu\frac{\partial^2 u}{\partial x^2}, \quad u(x \to \pm\infty, t) \to 0, \quad u(x, 0) = f(x).   (9.67)

Let us take the Fourier transformation of the heat equation:

F\!\left(\frac{\partial u}{\partial t}\right) = \nu\, F\!\left(\frac{\partial^2 u}{\partial x^2}\right),   (9.68)
\int_{-\infty}^{\infty} \frac{\partial u}{\partial t}\, e^{-i\alpha x}\, dx = -\nu\alpha^2\, U(\alpha, t),   (9.69)
\frac{\partial}{\partial t}\int_{-\infty}^{\infty} u\, e^{-i\alpha x}\, dx = -\nu\alpha^2\, U(\alpha, t),   (9.70)
\frac{\partial U}{\partial t} = -\nu\alpha^2\, U(\alpha, t),   (9.71)
U(\alpha, t) = C(\alpha)\exp(-\nu\alpha^2 t).   (9.72)

Now our initial condition gives us

u(x, 0) = f(x),   (9.73)
F(u(x, 0)) = F(f(x)),   (9.74)
U(\alpha, 0) = \int_{-\infty}^{\infty} f(x)\, e^{-i\alpha x}\, dx,   (9.75)
= F(\alpha).   (9.76)


Substituting this transformed initial condition into Eq. (9.72) gives

U(\alpha, 0) = F(\alpha) = C(\alpha)\exp(0),   (9.77)
F(\alpha) = C(\alpha).   (9.78)

Therefore, our solution in transformed space is

U(\alpha, t) = F(\alpha)\exp(-\nu\alpha^2 t).   (9.79)

To return to the (x, t) domain, we employ the inverse Fourier transformation of Eq. (9.31) to get

u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, e^{i\alpha x - \nu\alpha^2 t}\, d\alpha.   (9.80)

In terms of f, we can use Eq. (9.29) to say

u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\underbrace{\left(\int_{-\infty}^{\infty} f(\xi)\, e^{-i\alpha\xi}\, d\xi\right)}_{F(\alpha)} e^{i\alpha x - \nu\alpha^2 t}\, d\alpha,   (9.81)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\int_{-\infty}^{\infty} e^{i\alpha(x-\xi) - \nu\alpha^2 t}\, d\alpha\, d\xi.   (9.82)

Symbolic calculation reveals this reduces to

u(x, t) = \frac{1}{2\sqrt{\pi\nu t}}\int_{-\infty}^{\infty} f(\xi)\, e^{-\frac{(x-\xi)^2}{4\nu t}}\, d\xi.   (9.83)
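The solution formula of Eq. (9.83) is easy to evaluate numerically. As an illustrative check (the quadrature parameters and the top hat initial condition are choices made here, not from the notes), one can verify that ∫ u dx is conserved by the diffusion:

```python
import numpy as np

def heat_solution(x, t, f, nu=1.0, Lxi=20.0, N=4000):
    # Midpoint-rule evaluation of the convolution integral of Eq. (9.83)
    dxi = 2 * Lxi / N
    xi = -Lxi + (np.arange(N) + 0.5) * dxi
    kern = np.exp(-(x - xi)**2 / (4 * nu * t)) / (2 * np.sqrt(np.pi * nu * t))
    return np.sum(f(xi) * kern) * dxi

f = lambda xi: np.where(np.abs(xi) < 1.0, 0.5, 0.0)  # top hat, unit area
xs = np.linspace(-12.0, 12.0, 481)
us = np.array([heat_solution(x, 1.0, f) for x in xs])
mass = np.sum(us) * (xs[1] - xs[0])
print(mass)  # approximately 1: diffusion preserves the integral of u
```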

Example 9.10
Following the example of Mei, p. 137, solve the heat equation with a Dirac delta distribution as an
initial condition:
\frac{\partial u}{\partial t} = \nu\frac{\partial^2 u}{\partial x^2}, \quad u(x \to \pm\infty, t) \to 0, \quad u(x, 0) = \delta(x).   (9.84)

We know F(α) = 1 from Eq. (9.37), and the solution of Eq. (9.80) becomes

u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\nu\alpha^2 t}\, e^{i\alpha x}\, d\alpha,   (9.85)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\nu\alpha^2 t}(\cos\alpha x + i\sin\alpha x)\, d\alpha,   (9.86)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha + \underbrace{\frac{i}{2\pi}\int_{-\infty}^{\infty} e^{-\nu\alpha^2 t}\sin\alpha x\, d\alpha}_{=0},   (9.87)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha.   (9.88)


The integral involving sin is zero because the limits are symmetric and the function is odd in α. We next break the integral into two pieces:

u(x, t) = \frac{1}{2\pi}\left(\int_{-\infty}^{0} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha + \int_{0}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha\right).   (9.89)

Because of symmetry about α = 0, the \int_{-\infty}^{0} is equal to the \int_{0}^{\infty}. Thus, we can also say

u(x, t) = \frac{1}{2\pi}\left(\int_{0}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha + \int_{0}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha\right),   (9.90)
= \frac{1}{\pi}\int_{0}^{\infty} e^{-\nu\alpha^2 t}\cos\alpha x\, d\alpha.   (9.91)

Let us now change variables, exchanging α for β via

\alpha = \frac{\beta}{\sqrt{\nu t}}.   (9.92)

Thus for fixed t, we have

d\alpha = \frac{d\beta}{\sqrt{\nu t}}.   (9.93)

Substituting into Eq. (9.91) to eliminate α in favor of β, we find

u(x, t) = \frac{1}{\pi\sqrt{\nu t}}\int_{0}^{\infty} e^{-\beta^2}\cos\frac{\beta x}{\sqrt{\nu t}}\, d\beta.   (9.94)

Now define for convenience

\mu = \frac{x}{\sqrt{\nu t}},   (9.95)
I(\mu) = \int_{0}^{\infty} e^{-\beta^2}\cos\mu\beta\, d\beta.   (9.96)

With these definitions, Eq. (9.94) becomes

u(x, t) = \frac{1}{\pi\sqrt{\nu t}}\, I(\mu).   (9.97)

Let us consider I(µ). Differentiating Eq. (9.96), we get

\frac{dI}{d\mu} = -\int_{0}^{\infty} \beta\, e^{-\beta^2}\sin\mu\beta\, d\beta.   (9.98)

Let us integrate the right side by parts to obtain

\frac{dI}{d\mu} = \frac{1}{2}\underbrace{e^{-\beta^2}\sin\mu\beta\Big|_{0}^{\infty}}_{=0} - \frac{1}{2}\underbrace{\int_{0}^{\infty} e^{-\beta^2}\mu\cos\mu\beta\, d\beta}_{=\mu I(\mu)},   (9.99)
= -\frac{1}{2}\mu I.   (9.100)

This atypical equation is actually a first order ordinary differential equation for the variable I, which is itself an integral. We can get a condition at µ = 0 by considering the definition of Eq. (9.96) applied at µ = 0:

I(0) = \int_{0}^{\infty} e^{-\beta^2}\cos(0)\, d\beta = \int_{0}^{\infty} e^{-\beta^2}\, d\beta.   (9.101)


Figure 9.9: Solution to the heat equation for ν = 1 for an initial pulse distribution.

This integral is well-known to have a value which can be determined by a coordinate change, as described earlier on p. 150. Using this, we find

I(0) = \frac{\sqrt{\pi}}{2}.   (9.102)

Solution of Eqs. (9.100, 9.102) gives by separation of variables

\frac{dI}{I} = -\frac{\mu\, d\mu}{2},   (9.103)
\ln I = -\frac{\mu^2}{4} + C,   (9.104)
I = \hat C\, e^{-\mu^2/4},   (9.105)
= \frac{\sqrt{\pi}}{2}\, e^{-\mu^2/4},   (9.106)
= \frac{\sqrt{\pi}}{2}\, e^{-\frac{x^2}{4\nu t}}.   (9.107)

Substituting into Eq. (9.97) to eliminate I, we get

u(x, t) = \frac{1}{2\sqrt{\pi\nu t}}\exp\left(\frac{-x^2}{4\nu t}\right).   (9.108)

Note this is fully consistent with the more general result of Eq. (9.83) for f (ξ) = δ(ξ). We plot results
for u(x, t) in Fig. 9.9 for ν = 1.
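One can also verify by finite differences that Eq. (9.108) satisfies the heat equation pointwise; the sample point and step size below are illustrative:

```python
import numpy as np

nu = 1.0
u = lambda x, t: np.exp(-x**2 / (4 * nu * t)) / (2 * np.sqrt(np.pi * nu * t))

x0, t0, h = 0.7, 0.5, 1e-3
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)                  # central in t
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2    # central in x
resid = abs(u_t - nu * u_xx)
print(resid)  # ~0: the PDE residual vanishes to truncation error
```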

We close with a discussion of the notion of convolution. Let us say we have two functions
f and g and their respective Fourier transformations:
F(f) = F(\alpha) = \int_{-\infty}^{\infty} f(x)\, e^{-i\alpha x}\, dx,   (9.109)
F(g) = G(\alpha) = \int_{-\infty}^{\infty} g(x)\, e^{-i\alpha x}\, dx.   (9.110)


We also have the inverse Fourier transformations


f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, e^{i\alpha x}\, d\alpha,   (9.111)
g(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(\alpha)\, e^{i\alpha x}\, d\alpha.   (9.112)

Let us define the convolution of f and g by the operation

f * g \equiv \int_{-\infty}^{\infty} g(\xi)\, f(x - \xi)\, d\xi.   (9.113)

Now use Eq. (9.111) to eliminate f(x − ξ):

f * g = \int_{-\infty}^{\infty} g(\xi)\underbrace{\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, e^{i\alpha(x-\xi)}\, d\alpha\right)}_{f(x-\xi)} d\xi,   (9.114)
= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\xi)\, F(\alpha)\, e^{i\alpha(x-\xi)}\, d\alpha\, d\xi,   (9.115)
= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\xi)\, F(\alpha)\, e^{i\alpha(x-\xi)}\, d\xi\, d\alpha,   (9.116)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, e^{i\alpha x}\underbrace{\int_{-\infty}^{\infty} g(\xi)\, e^{-i\alpha\xi}\, d\xi}_{G(\alpha)}\, d\alpha,   (9.117)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\alpha)\, G(\alpha)\, e^{i\alpha x}\, d\alpha,   (9.118)
= F^{-1}(F(\alpha)\, G(\alpha)),   (9.119)
F(f * g) = F\left(F^{-1}(F(\alpha)\, G(\alpha))\right),   (9.120)
= F(\alpha)\, G(\alpha),   (9.121)
= F(f)\, F(g).   (9.122)
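The convolution theorem of Eq. (9.122) has an exact discrete analog: the DFT of a circular convolution equals the product of DFTs. A brief sketch (the signal length and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution via the product of transforms ...
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
# ... compared with the direct definition (f*g)[k] = sum_j g[j] f[(k-j) mod N]
direct = np.array([np.sum(g * f[(k - np.arange(N)) % N]) for k in range(N)])
err = np.max(np.abs(via_fft - direct))
print(err)  # ~0 (roundoff only)
```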

9.2 Laplace transformations


The Laplace transformation is a technique often applied to linear ordinary differential equations, allowing them to be transformed to algebraic equations, which are more easily solved. In a similar fashion as the Fourier transformation, the Laplace transformation can be extended to apply to partial differential equations so as to convert them to ordinary differential equations. As discussed by Mei, there are some problems for which Fourier transformation integrals are not convergent but which have no such problems under the Laplace transformation.


Let us see how the Laplace transformation can be considered as a special case of the
Fourier transformation. Consider the function

g(x) = H(x)e−cx f (x), (9.123)

where H(x) is the Heaviside unit step function and c ∈ R1 > 0. Let us take the Fourier
transformation of g(x):
F(g(x)) = G(\lambda) = \int_{-\infty}^{\infty} g(x)\, e^{-i\lambda x}\, dx,   (9.124)
= \int_{-\infty}^{\infty} H(x)\, e^{-cx} f(x)\, e^{-i\lambda x}\, dx,   (9.125)
= \int_{0}^{\infty} e^{-(c+i\lambda)x} f(x)\, dx,   (9.126)
= \int_{0}^{\infty} e^{-sx} f(x)\, dx,   (9.127)

where we have defined

s = c + iλ. (9.128)

The inverse Fourier transformation is


g(x) = H(x)\, e^{-cx} f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(\lambda)\, e^{i\lambda x}\, d\lambda.   (9.129)

Thus, we have

H(x)\, f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(\lambda)\, e^{(c+i\lambda)x}\, d\lambda.   (9.130)

From Eq. (9.128), we have

\lambda = -i(s - c),   (9.131)
d\lambda = -i\, ds.   (9.132)

Thus,

H(x)\, f(x) = \frac{-i}{2\pi}\int_{c-i\infty}^{c+i\infty} G(-i(s-c))\, e^{sx}\, ds,   (9.133)
= \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} G(-i(s-c))\, e^{sx}\, ds.   (9.134)

Let us define

F(s) = G(-i(s - c)),   (9.135)


and

F(x) = H(x)\, f(x).   (9.136)

Then we define the Laplace transformation L(F(x)) as

L(F(x)) = F(s) = \int_{0}^{\infty} e^{-sx} F(x)\, dx.   (9.137)

We define the inverse Laplace transformation, L^{-1}, as

L^{-1}(F(s)) = F(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} F(s)\, e^{sx}\, ds.   (9.138)

Because c is positive, the path of integration is on a vertical line to the right of the origin in the complex plane.

Now, most physical problems involving the Laplace transformation involve time t rather than distance x. So following convention, we simply trade x for t in the definition of the Laplace transformation and its inverse:

L(F(t)) = F(s) = \int_{0}^{\infty} e^{-st} F(t)\, dt,   (9.139)
L^{-1}(F(s)) = F(t) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} F(s)\, e^{st}\, ds.   (9.140)

Because the Laplace transformation is only defined for t ≥ 0, we can effectively ignore any
part of F (t) for t < 0.

Example 9.11
Find the Laplace transformation of F(t) = δ(t − t_0) with t_0 ∈ R^1, t_0 ≥ 0.

Applying Eq. (9.139), we get

L(\delta(t - t_0)) = F(s) = \int_{0}^{\infty} e^{-st}\,\delta(t - t_0)\, dt,   (9.141)
= e^{-st_0}.   (9.142)

For t0 = 0, the Dirac delta function and its Laplace transformation are plotted in Fig. 9.10 for s ∈ R1 .
Note here that F (t) = 0 already for t < 0.

Example 9.12
Find the Laplace transformation of F(t) = H(t − t_0) with t_0 ∈ R^1, t_0 ≥ 0.


Figure 9.10: The Dirac delta function δ(t) and its Laplace transformation.

Figure 9.11: The Heaviside function H(t) and its Laplace transformation.

Applying Eq. (9.139), we get

L(H(t - t_0)) = F(s) = \int_{0}^{\infty} e^{-st} H(t - t_0)\, dt,   (9.143)
= \int_{t_0}^{\infty} e^{-st}\, dt,   (9.144)
= -\frac{1}{s}\, e^{-st}\Big|_{t_0}^{\infty},   (9.145)
= \frac{e^{-st_0}}{s}.   (9.146)
We must make the additional restriction that s > 0 here. For t0 = 0, the Heaviside function and its
Laplace transformation are plotted in Fig. 9.11 for s ∈ R1 . Note here that F (t) = 0 already for t < 0.

Example 9.13
Find the Laplace transformation of F (t) = t.


Figure 9.12: The function F(t) = t and its Laplace transformation.

Applying Eq. (9.139), we get

L(t) = F(s) = \int_{0}^{\infty} e^{-st}\, t\, dt,   (9.147)
= -\frac{e^{-st}(1 + st)}{s^2}\Big|_{0}^{\infty},   (9.148)
= \frac{1}{s^2}.   (9.149)

We must make the additional restriction that s > 0 here. The function F(t) = t and its Laplace transformation are plotted in Fig. 9.12 for s ∈ R^1. We only plot F(t) for t > 0 because it is on that domain that F is defined. For the more general F(t) = t^n, it is easily shown that F(s) = \Gamma(n+1)/s^{n+1}.
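The result L(t^n) = Γ(n+1)/s^{n+1} = n!/s^{n+1} can be checked by truncated numerical quadrature. In the sketch below, `laplace_num` is a hypothetical helper written for this illustration (not a library routine), and the truncation T, node count N, and value of s are arbitrary choices:

```python
import numpy as np
from math import factorial

def laplace_num(F, s, T=200.0, N=400000):
    # Midpoint rule for the Laplace integral, truncated at t = T
    dt = T / N
    t = (np.arange(N) + 0.5) * dt
    return np.sum(np.exp(-s * t) * F(t)) * dt

s = 2.0
errs = [abs(laplace_num(lambda t: t**n, s) - factorial(n) / s**(n + 1))
        for n in (1, 2, 3)]
print(errs)  # all tiny
```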

Example 9.14
Find the Laplace transformation of F (t) = b sin at, for a, b ∈ R1 > 0.

Applying Eq. (9.139), we get

L(b\sin at) = F(s) = \int_{0}^{\infty} b\, e^{-st}\sin at\, dt,   (9.150)
= -\frac{b\, e^{-st}(s\sin at + a\cos at)}{a^2 + s^2}\Big|_{0}^{\infty},   (9.151)
= \frac{ab}{a^2 + s^2}.   (9.152)

For a = 2, b = 1, the function F(t) = sin 2t and its Laplace transformation are plotted in Fig. 9.13 for s ∈ R^1.
For a = 2, b = 1, the function F (t) = sin 2t and its Laplace transformation are plotted in Fig. 9.13 for
s ∈ R1 .


Figure 9.13: The function F(t) = sin 2t and its Laplace transformation.

Figure 9.14: The function F(t) = exp(−t²/2) and its Laplace transformation.

Example 9.15
Find the Laplace transformation of F(t) = exp(−t²/2).

Applying Eq. (9.139), we get

L(\exp(-t^2/2)) = F(s) = \int_{0}^{\infty} e^{-st}\, e^{-t^2/2}\, dt.   (9.153)

Omitting details, we find

F(s) = \sqrt{\frac{\pi}{2}}\, e^{s^2/2}\,\mathrm{erfc}\!\left(\frac{s}{\sqrt{2}}\right).   (9.154)

The function F(t) = exp(−t²/2) and its Laplace transformation are plotted in Fig. 9.14.

Example 9.16
If u = u(t), find the Laplace transformation of du/dt.


Let us take

L(u(t)) = U(s) = \int_{0}^{\infty} e^{-st} u(t)\, dt.   (9.155)

We then have

L\!\left(\frac{du}{dt}\right) = \int_{0}^{\infty} \frac{du}{dt}\, e^{-st}\, dt,   (9.156)
= e^{-st}\, u\Big|_{0}^{\infty} - \int_{0}^{\infty} (-s)\, u\, e^{-st}\, dt,   (9.157)
= sU(s) - u(0).   (9.158)

In general, one can show that

L\!\left(\frac{d^n u}{dt^n}\right) = s^n U(s) - s^{n-1} u(0) - \ldots - s^0\,\frac{d^{n-1}u}{dt^{n-1}}\Big|_{t=0}.   (9.159)

Example 9.17
Use Laplace transformations and their inverses to solve

\frac{d^2 u}{dt^2} + 9u = 0, \quad u(0) = 0, \quad \dot u(0) = 2.   (9.160)

Take the Laplace transformation of the system, Eq. (9.160), to get

L\!\left(\frac{d^2 u}{dt^2} + 9u\right) = L(0),   (9.161)
L\!\left(\frac{d^2 u}{dt^2}\right) + L(9u) = L(0),   (9.162)
s^2 U - \underbrace{s\, u(0)}_{=0} - \dot u(0) + 9U = 0,   (9.163)
s^2 U - 2 + 9U = 0,   (9.164)
U(s^2 + 3^2) = 2,   (9.165)
U(s) = \frac{2}{s^2 + 3^2}.   (9.166)

Comparing to Eq. (9.152), we induce that

u(t) = \frac{2}{3}\sin 3t.   (9.167)

It is easy to verify that the differential equation and conditions at t = 0 are satisfied.

Let us see if we can use the more formal machinery of the inverse Laplace transformation to deduce Eq. (9.167). Substituting Eq. (9.166) into Eq. (9.140), we get

L^{-1}(U(s)) = u(t) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{2e^{st}}{s^2 + 9}\, ds,   (9.168)
= \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{2e^{st}}{(s + 3i)(s - 3i)}\, ds.   (9.169)


Figure 9.15: Contour integration path for inverse Laplace transformation integral associated with d²u/dt² + 9u = 0.

The integrand has two poles on the imaginary axis at

s = 0 \pm 3i.   (9.171)

Consider now the contour integral depicted in Fig. 9.15. We have the closed contour C as the sum of two portions of the contour:

C = C_I + C_R.   (9.172)

We can use Eq. (8.209) to give us \oint_C. First we need the residues of the integrand. Finding a Laurent series in the neighborhood of both poles gives us

\frac{2e^{st}}{(s + 3i)(s - 3i)} = \frac{\mp(i/3)\, e^{\pm 3it}}{s \mp 3i} + \ldots, \quad s = \pm 3i.   (9.173)

The residues are thus \mp(i/3)\, e^{\pm 3it}. So we get

\sum \text{residues} = \frac{i}{3}\left(e^{-3it} - e^{3it}\right),   (9.174)
\oint_C = 2\pi i \sum \text{residues} = -\frac{2\pi}{3}\left(e^{-3it} - e^{3it}\right),   (9.175)
= \frac{4\pi i}{3}\,\frac{e^{3it} - e^{-3it}}{2i},   (9.176)
= \frac{4\pi i}{3}\sin 3t.   (9.177)

Now

u(t) = \frac{1}{2\pi i}\int_{C_I},   (9.178)
= \frac{1}{2\pi i}\left(\oint_C - \int_{C_R}\right),   (9.179)


= \frac{1}{2\pi i}\left(\frac{4\pi i}{3}\sin 3t - \int_{C_R}\right),   (9.180)
= \frac{2}{3}\sin 3t - \frac{1}{2\pi i}\int_{C_R}\frac{2e^{st}}{s^2 + 9}\, ds.   (9.181)

We now apply Jordan's lemma, Eq. (8.267), to \int_{C_R}. We note that C_R lies in the region where ℜ(s) ≤ 0. And for us f(s) = 2/(s² + 9). Clearly, on C_R with s = Re^{iθ}, we see as R → ∞ that |f(s)| → 0. So as long as t > 0, we have \int_{C_R} = 0. Thus, we get

u(t) = \frac{2}{3}\sin 3t.   (9.182)
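The solution of Eq. (9.182) can be checked directly against the original differential equation and initial conditions by finite differences; the sample point t0 and step h are illustrative:

```python
import numpy as np

u = lambda t: (2.0 / 3.0) * np.sin(3.0 * t)

t0, h = 1.2, 1e-3
# Residual of u'' + 9u = 0 via a central second difference
resid = abs((u(t0 + h) - 2 * u(t0) + u(t0 - h)) / h**2 + 9 * u(t0))
# Initial slope via a central first difference; should be 2
slope0 = (u(h) - u(-h)) / (2 * h)
print(resid, u(0.0), slope0)
```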

Let us now discuss convolution in the context of the Laplace transformation. As with
the convolution for the Fourier transformation, let us assume we have two functions F and
G, and their respective Laplace and inverse Laplace transformations:
L(F) = F(s) = \int_{0}^{\infty} e^{-st} F(t)\, dt,   (9.183)
L(G) = G(s) = \int_{0}^{\infty} e^{-st} G(t)\, dt.   (9.184)

Let us define the convolution as

F * G = \int_{0}^{t} G(\tau)\, F(t - \tau)\, d\tau.   (9.185)

Then we operate as follows:

L(F * G) = \int_{0}^{\infty} e^{-st}\left(\int_{0}^{t} G(\tau)\, F(t - \tau)\, d\tau\right) dt,   (9.186)
= \int_{0}^{\infty}\int_{0}^{t} e^{-st}\, G(\tau)\, F(t - \tau)\, d\tau\, dt.   (9.187)

The domain of integration is sketched in Fig. 9.16. The graph on the left is bounded by the curves τ = 0 and τ = t and lies between the curves t = 0 and t → ∞. When we switch the order of integration, we have to carefully change the limits. When we first integrate on t, we must enter the domain at t = τ and exit at t → ∞. Then we must bound this area by τ = 0 and τ = ∞. So the integral becomes

L(F * G) = \int_{0}^{\infty}\int_{\tau}^{\infty} e^{-st}\, G(\tau)\, F(t - \tau)\, dt\, d\tau.   (9.188)


Figure 9.16: Sketch of area of integration and limits, depending on order of integration.

Let now t̂ = t − τ. Then dt̂ = dt and

L(F * G) = \int_{0}^{\infty}\int_{0}^{\infty} e^{-s(\hat t + \tau)}\, G(\tau)\, F(\hat t)\, d\hat t\, d\tau,   (9.189)
= \int_{0}^{\infty}\int_{0}^{\infty} e^{-s(\hat t + \tau)}\, G(\tau)\, F(\hat t)\, d\tau\, d\hat t,   (9.190)
= \int_{0}^{\infty} e^{-s\hat t}\, F(\hat t)\int_{0}^{\infty} e^{-s\tau}\, G(\tau)\, d\tau\, d\hat t,   (9.191)
= \left(\int_{0}^{\infty} e^{-s\hat t}\, F(\hat t)\, d\hat t\right)\left(\int_{0}^{\infty} e^{-s\tau}\, G(\tau)\, d\tau\right),   (9.192)
= F(s)\, G(s).   (9.193)

So

L(F * G) = L(F)\, L(G).   (9.194)
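Equation (9.194) can be verified numerically for a concrete pair. For F(t) = t and G(t) = e^{−t}, the convolution integral evaluates to (F∗G)(t) = t − 1 + e^{−t}, and both sides reduce to 1/(s²(s+1)). The helper `laplace_num` and the parameters T, N, and s below are illustrative assumptions made for this sketch:

```python
import numpy as np

def laplace_num(F, s, T=80.0, N=200000):
    # Truncated midpoint rule for the Laplace integral
    dt = T / N
    t = (np.arange(N) + 0.5) * dt
    return np.sum(np.exp(-s * t) * F(t)) * dt

s = 1.5
lhs = laplace_num(lambda t: t - 1 + np.exp(-t), s)       # L(F*G)
rhs = laplace_num(lambda t: t, s) * laplace_num(lambda t: np.exp(-t), s)
print(lhs, rhs)  # both approximately 1/(s^2 (s+1)) = 0.1778
```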

Example 9.18
Following the analysis of Mei, pp. 272-275, apply the Laplace transformation method to solve
Stokes’ first problem, considered in Sec. 6.1:

\frac{\partial u}{\partial t} = \nu\frac{\partial^2 u}{\partial y^2}, \quad u(y, 0) = 0, \quad u(0, t) = U\, H(t), \quad u(\infty, t) = 0.   (9.195)

We consider the Laplace transformation to operate on t and thus it does not impact y. So L(u(y, t)) = \bar u(y, s). And here H(t) is the Heaviside function in t, which is a more complete way to formulate Stokes' first problem than done previously. Taking the Laplace transformation of the governing equation, we get

s\bar u(y, s) - \underbrace{u(y, 0)}_{=0} = \nu\frac{\partial^2 \bar u}{\partial y^2},   (9.196)


\frac{\partial^2 \bar u}{\partial y^2} - \frac{s}{\nu}\bar u = 0,   (9.197)
\bar u(y, s) = C_1(s)\exp\left(\sqrt{\frac{s}{\nu}}\, y\right) + C_2(s)\exp\left(-\sqrt{\frac{s}{\nu}}\, y\right).   (9.198)

We need a bounded solution as y → ∞, so we take C_1(s) = 0, giving

\bar u(y, s) = C_2(s)\exp\left(-\sqrt{\frac{s}{\nu}}\, y\right).   (9.199)

Now, we evaluate C_2(s) by employing the boundary condition. We thus need to take the Laplace transformation of the boundary condition at y = 0, which is

L(u(0, t)) = L(U\, H(t)),   (9.200)
= \frac{U}{s}.   (9.201)

Thus C_2(s) = U/s, and we get

\bar u(y, s) = \frac{U}{s}\exp\left(-\sqrt{\frac{s}{\nu}}\, y\right).   (9.202)
Now we need to take the inverse Laplace transformation to find u(y, t):

u(y, t) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \bar u(y, s)\, e^{st}\, ds,   (9.203)
= \frac{U}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{1}{s}\, e^{st - \sqrt{s/\nu}\, y}\, ds,   (9.204)
= \frac{U}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds.   (9.205)
Obviously there is a pole at s = 0. Let us employ a special contour which avoids this pole at the expense of introducing a branch cut as we take the contour integral sketched in Fig. 9.17. We have the closed contour C as

C = C_I + C_{R_1} + C_+ + C_\epsilon + C_- + C_{R_2}.   (9.206)

For C_{R_1} and C_{R_2}, we will let R → ∞, and for C_ε, we will let ε → 0. Our contour integral will take the form

\oint_C = \int_{C_I} + \int_{C_{R_1}} + \int_{C_+} + \int_{C_\epsilon} + \int_{C_-} + \int_{C_{R_2}} = 0.   (9.207)

There are no residues to consider by the nature of our contour, which has avoided the singularity at s = 0. And we are interested in \int_{C_I}, as needed by our inverse Laplace transformation. By Jordan's lemma, Sec. 8.7, both \int_{C_{R_1}} and \int_{C_{R_2}} vanish as R → ∞, for t > 0. On C_ε, we let

s = \epsilon e^{i\theta}, \quad ds = \epsilon i e^{i\theta}\, d\theta,   (9.208)

and consider

\int_{C_\epsilon} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds = \lim_{\epsilon \to 0}\int_{\pi}^{-\pi} \frac{1}{\epsilon e^{i\theta}}\,\underbrace{e^{O(\epsilon)}\, e^{O(\sqrt{\epsilon})}}_{\to 1}\,(\epsilon i e^{i\theta}\, d\theta),   (9.209)
= \int_{\pi}^{-\pi} i\, d\theta,   (9.210)
= -2\pi i.   (9.211)


Figure 9.17: Contour integration path for inverse Laplace transformation integral associated with ∂u/∂t = ν ∂²u/∂y².

Note that this corrects a small error in Mei's analysis on p. 274. Now along C_±, we introduce the positive real variable v so as to have

s = v e^{\pm i\pi} = -v, \quad v \in \mathbb{R}^1 > 0, \quad ds = -dv.   (9.212)

Also on C_± we have

\sqrt{s} = \sqrt{v}\, e^{\pm i\pi/2} = \pm i\sqrt{v}.   (9.213)

We get then on C_+

\int_{C_+} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds = \int_{\infty}^{0} \frac{1}{-v}\, e^{-vt}\, e^{-i\sqrt{v/\nu}\, y}\, (-dv),   (9.214)
= -\int_{0}^{\infty} \frac{1}{v}\, e^{-vt}\, e^{-i\sqrt{v/\nu}\, y}\, dv.   (9.215)

We get on C_−

\int_{C_-} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds = \int_{0}^{\infty} \frac{1}{-v}\, e^{-vt}\, e^{i\sqrt{v/\nu}\, y}\, (-dv),   (9.216)
= \int_{0}^{\infty} \frac{1}{v}\, e^{-vt}\, e^{i\sqrt{v/\nu}\, y}\, dv.   (9.217)

Adding, we get

\int_{C_-} + \int_{C_+} = \int_{0}^{\infty} \frac{1}{v}\, e^{-vt}\left(e^{i\sqrt{v/\nu}\, y} - e^{-i\sqrt{v/\nu}\, y}\right) dv,   (9.218)
= 2i\int_{0}^{\infty} \frac{1}{v}\, e^{-vt}\sin\left(\sqrt{v/\nu}\, y\right) dv,   (9.219)
= 2\pi i\,\mathrm{erf}\left(\frac{y}{2\sqrt{\nu t}}\right),   (9.220)


where the last integral was obtained with the aid of symbolic software. So Eq. (9.207) tells us

\int_{C_I} = -\int_{C_\epsilon} - \int_{C_-} - \int_{C_+},   (9.221)
\int_{C_I} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds = 2\pi i - 2\pi i\,\mathrm{erf}\left(\frac{y}{2\sqrt{\nu t}}\right),   (9.222)
\underbrace{\frac{U}{2\pi i}\int_{C_I} \frac{1}{s}\, e^{st}\, e^{-\sqrt{s/\nu}\, y}\, ds}_{u(y,t)} = U\left(1 - \mathrm{erf}\left(\frac{y}{2\sqrt{\nu t}}\right)\right),   (9.223)
u(y, t) = U\left(1 - \mathrm{erf}\left(\frac{y}{2\sqrt{\nu t}}\right)\right),   (9.224)
= U\,\mathrm{erfc}\left(\frac{y}{2\sqrt{\nu t}}\right).   (9.225)

This is fully equivalent to our earlier result found in Eq. (6.70) using slightly different notation.
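A finite-difference check that Eq. (9.225) satisfies the diffusion equation and the wall boundary condition is straightforward; the sample point, step size, and the choices U = ν = 1 are illustrative:

```python
import math

U, nu = 1.0, 1.0
u = lambda y, t: U * math.erfc(y / (2.0 * math.sqrt(nu * t)))

y0, t0, h = 0.8, 0.5, 1e-3
u_t = (u(y0, t0 + h) - u(y0, t0 - h)) / (2 * h)                  # central in t
u_yy = (u(y0 + h, t0) - 2 * u(y0, t0) + u(y0 - h, t0)) / h**2    # central in y
resid = abs(u_t - nu * u_yy)
print(resid, u(0.0, t0))  # residual ~0; wall value equals U = 1
```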

Problems



Chapter 10

Linear integral equations

see Powers and Sen, Chapter 8.

In this chapter, adopted largely from Powers and Sen1 we introduce an important, though
often less emphasized, topic: integral equations. Integral equations, and their cousins the
integro-differential equations, often arise naturally in engineering problems where nonlocal
effects are significant, i.e. when what is happening at a given point in space-time is affected by
the past or by points at a distance, or by both. They may arise in such areas as radiation heat
transfer and statistical physics. They also arise in problems involving the Green’s functions of
linear operators, which may originate from a wide variety of problems in engineering such as
heat transfer, elasticity, or electromagnetics. Our focus will be on linear integral equations,
though one could extend to the nonlinear theory if desired. More common studies of linear
equations of the sort Ly = f typically address cases where L is either a linear differential
operator or a matrix. Here we take it to be an integral. We will then be able to apply
standard notions from eigenvalue analysis to aid in the interpretation of the solutions to such
equations. When the integral operator is discretized, integral equations can be approximated
as linear algebra problems.

10.1 Definitions
We consider integral equations that take the form
h(x)\, y(x) = f(x) + \lambda\int_{a}^{b} K(x, s)\, y(s)\, ds.   (10.1)

Such an equation is linear in y(x), the unknown dependent variable for which we seek a
solution. Here K(x, s), the so-called kernel, is known, h(x) and f (x) are known functions,
1
J. M. Powers and M. Sen, 2015, Mathematical Methods in Engineering, Cambridge University Press,
New York.


and λ is a constant parameter. We could rewrite Eq. (10.1) as


\underbrace{\left( h(x)\,(\cdot)\big|_{s=x} - \lambda\int_{a}^{b} K(x, s)\,(\cdot)\, ds \right)}_{L} y(s) = f(x),   (10.2)

so that it takes the explicit form Ly = f . Here (·) is a placeholder for the operand. If
f (x) = 0, our integral equation is homogeneous. When a and b are fixed constants, Eq. (10.1)
is called a Fredholm equation. If the upper limit is instead the variable x, we have a Volterra2
equation:
h(x)\, y(x) = f(x) + \lambda\int_{a}^{x} K(x, s)\, y(s)\, ds.   (10.3)

A Fredholm equation whose kernel has the property K(x, s) = 0 for s > x is in fact a Volterra equation. If one or both of the limits is infinite, the equation is known as a singular integral equation, e.g.

h(x)\, y(x) = f(x) + \lambda\int_{a}^{\infty} K(x, s)\, y(s)\, ds.   (10.4)

If h(x) = 0, we have what is known as a Fredholm equation of the first kind:


0 = f(x) + \lambda\int_{a}^{b} K(x, s)\, y(s)\, ds.   (10.5)

Here, we can expect difficulties in solving for y(s) if for a given x, K(x, s) takes on a value
of zero or near zero for s ∈ [a, b]. That is because when K(x, s) = 0, it maps all y(s) into
zero, rendering the solution nonunique. The closer K(x, s) is to zero, the more challenging
it is to estimate y(s).
If h(x) = 1, we have a Fredholm equation of the second kind:
$$y(x) = f(x) + \lambda \int_a^b K(x,s)\, y(s)\, ds. \qquad (10.6)$$

Equations of this kind have a more straightforward solution than those of the first kind.

10.2 Homogeneous Fredholm equations


Let us here consider homogeneous Fredholm equations, i.e. those with f (x) = 0.
2
Vito Volterra, 1860-1940, Italian mathematician.

© 06 February 2024. J. M. Powers.



10.2.1 First kind


A homogeneous Fredholm equation of the first kind takes the form
$$0 = \int_a^b K(x,s)\, y(s)\, ds. \qquad (10.7)$$

Solutions to Eq. (10.7) are functions y(s) which lie in the null space of the linear integral
operator. Certainly, y(s) = 0 satisfies, but there may be other nontrivial solutions, depending
on the nature of the kernel K(x,s). For a given x, if there are points or regions
where K(x,s) = 0 in s ∈ [a,b], one would expect nontrivial and nonunique y(s) to exist
which would still satisfy Eq. (10.7). Also, if K(x,s) oscillates appropriately about zero for
s ∈ [a,b], one may find nontrivial and nonunique y(s).

Example 10.1
Find solutions y(x) to the homogeneous Fredholm equation of the first kind
$$0 = \int_0^1 x s\, y(s)\, ds. \qquad (10.8)$$

Assuming x ≠ 0, we can factor out x to say
$$0 = \int_0^1 s\, y(s)\, ds. \qquad (10.9)$$

Certainly, solutions for y are nonunique. For example, any y(s) for which the product s\,y(s) is odd about s = 1/2 will satisfy, e.g.,
$$y(x) = C\, \frac{\sin(2 n \pi x)}{x}, \qquad C \in \mathbb{R}^1, \quad n \in \mathbb{Z}. \qquad (10.10)$$

The piecewise function
$$y(x) = \begin{cases} C, & x = 0, \\ 0, & x \in (0,1], \end{cases} \qquad (10.11)$$

also satisfies, where $C \in \mathbb{R}^1$.
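The claim of Eq. (10.10) is easy to check by quadrature; a minimal Python sketch (not part of the original notes, with C = 1 assumed) verifies that the integrand s\,y(s) = sin(2nπs) of Eq. (10.9) integrates to zero for integer n:

```python
import numpy as np

# Check Eq. (10.10) with C = 1: the integrand s*y(s) = sin(2 pi n s) of
# Eq. (10.9) integrates to zero over [0, 1] for integer n.
s = np.linspace(0.0, 1.0, 100001)
vals = []
for n in (1, 2, 3):
    g = np.sin(2.0 * np.pi * n * s)                           # s * y(s)
    vals.append(((g[:-1] + g[1:]) / 2.0 * np.diff(s)).sum())  # trapezoid rule
print(vals)  # each entry is ~0
```

For rational, noninteger n the same integral is nonzero, which is why integer n is required in Eq. (10.10).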

10.2.2 Second kind


A homogeneous Fredholm equation of the second kind takes the form
$$y(x) = \lambda \int_a^b K(x,s)\, y(s)\, ds. \qquad (10.12)$$

Obviously, when y(s) = 0, Eq. (10.12) is satisfied. But we might expect that there exist
nontrivial eigenfunctions and corresponding eigenvalues which also satisfy Eq. (10.12). This
is because Eq. (10.12) takes the form of (1/λ)y = Ly, where L is the linear integral operator.
In the theory of integral equations, it is more traditional to have the eigenvalue λ play the
role of the reciprocal of the usual eigenvalue.


10.2.2.1 Separable kernel


In the special case in which the kernel is what is known as a separable kernel or degenerate
kernel, with the form
$$K(x,s) = \sum_{i=1}^N \phi_i(x)\, \psi_i(s), \qquad (10.13)$$

significant simplification arises. We then substitute into Eq. (10.12) to get
$$y(x) = \lambda \int_a^b \left( \sum_{i=1}^N \phi_i(x)\, \psi_i(s) \right) y(s)\, ds \qquad (10.14)$$
$$= \lambda \sum_{i=1}^N \phi_i(x) \underbrace{\int_a^b \psi_i(s)\, y(s)\, ds}_{c_i}. \qquad (10.15)$$

Then we define the constants $c_i$, $i = 1, \ldots, N$, as
$$c_i = \int_a^b \psi_i(s)\, y(s)\, ds, \qquad i = 1, \ldots, N, \qquad (10.16)$$

and find
$$y(x) = \lambda \sum_{i=1}^N c_i\, \phi_i(x). \qquad (10.17)$$

We get the constants $c_i$ by substituting Eq. (10.17) into Eq. (10.16):
$$c_i = \int_a^b \psi_i(s)\, \lambda \sum_{j=1}^N c_j\, \phi_j(s)\, ds \qquad (10.18)$$
$$= \lambda \sum_{j=1}^N c_j \underbrace{\int_a^b \psi_i(s)\, \phi_j(s)\, ds}_{B_{ij}}. \qquad (10.19)$$

Defining the constant matrix $B_{ij} = \int_a^b \psi_i(s)\, \phi_j(s)\, ds$, we then have
$$c_i = \lambda \sum_{j=1}^N B_{ij}\, c_j. \qquad (10.20)$$

In Gibbs notation, we would say
$$\mathbf{c} = \lambda \mathbf{B} \cdot \mathbf{c}, \qquad (10.21)$$
$$\mathbf{0} = (\lambda \mathbf{B} - \mathbf{I}) \cdot \mathbf{c}. \qquad (10.22)$$


This is an eigenvalue problem for c. Here the reciprocals of the traditional eigenvalues of B
give the values of λ, and the eigenvectors are the associated values of c.

Example 10.2
Find the eigenvalues and eigenfunctions for the homogeneous Fredholm equation of the second kind
with the degenerate kernel, K(x,s) = xs, on the domain x ∈ [0,1]:
$$y(x) = \lambda \int_0^1 x s\, y(s)\, ds. \qquad (10.23)$$

The equation simplifies to
$$y(x) = \lambda x \int_0^1 s\, y(s)\, ds. \qquad (10.24)$$

Take then
$$c = \int_0^1 s\, y(s)\, ds, \qquad (10.25)$$

so that
$$y(x) = \lambda x c. \qquad (10.26)$$

Thus,
$$c = \int_0^1 s\, \lambda s c\, ds, \qquad (10.27)$$
$$1 = \lambda \int_0^1 s^2\, ds \qquad (10.28)$$
$$= \lambda \left. \frac{s^3}{3} \right|_0^1 \qquad (10.29)$$
$$= \frac{\lambda}{3}, \qquad (10.30)$$
$$\lambda = 3. \qquad (10.31)$$

Thus, there is a single eigenfunction, y = x, associated with a single eigenvalue, λ = 3. Any constant
multiplied by the eigenfunction is also an eigenfunction.
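The result can also be checked numerically; a minimal Python sketch (not part of the original notes), using a rectangular-rule discretization of the kernel, recovers λ ≈ 3 as the reciprocal of the largest eigenvalue of K Δs:

```python
import numpy as np

# Rectangular-rule check of Example 10.2: the separable kernel K(x, s) = x s
# has a single nontrivial eigenvalue lambda = 1/sigma ~ 3.
N = 201
x = np.linspace(0.0, 1.0, N)[:-1]       # N-1 evenly spaced sample points
ds = 1.0 / (N - 1)
K = np.outer(x, x)                      # K(x_i, s_j) = x_i s_j
sigma = np.linalg.eigvalsh(K * ds)      # symmetric, rank 1
lam = 1.0 / sigma.max()
print(lam)                              # ~3 (exactly 3 as N grows)
```

The associated eigenvector is proportional to the sampled values of y = x, consistent with the analytical eigenfunction.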

10.2.2.2 Non-separable kernel


For many problems, the kernel is not separable, and we must resort to numerical methods.
Let us consider Eq. (10.12) with a = 0, b = 1:
$$y(x) = \lambda \int_0^1 K(x,s)\, y(s)\, ds. \qquad (10.32)$$


Now, while there are many sophisticated numerical methods to evaluate the integral in
Eq. (10.32), it is easiest to convey our ideas via the simplest method: the rectangular rule
with evenly spaced intervals. Let us distribute N points uniformly in x ∈ [0, 1] so that
xi = (i − 1)/(N − 1), i = 1, . . . , N. We form the same distribution for s ∈ [0, 1] with
sj = (j − 1)/(N − 1), j = 1, . . . , N. For a given x = xi , this distribution defines N − 1
rectangles of width ∆s = 1/(N − 1) and of height K(xi , sj ) ≡ Kij . We can think of Kij as
a matrix of dimension (N − 1) × (N − 1). We can estimate the integral by adding the areas
of all of the individual rectangles. By the nature of the rectangular rule, this method has
a small asymmetry which ignores the influence of the function values at i = j = N. In the
limit of large N, this is not a problem. Next, let $y(x_i) \equiv y_i$, $i = 1, \ldots, N-1$, and $y(s_j) \equiv y_j$,
$j = 1, \ldots, N-1$, and write Eq. (10.32) in a discrete approximation as
$$y_i = \lambda \sum_{j=1}^{N-1} K_{ij}\, y_j\, \Delta s. \qquad (10.33)$$

In vector form, we could say
$$\mathbf{y} = \lambda \mathbf{K} \cdot \mathbf{y}\, \Delta s, \qquad (10.34)$$
$$\mathbf{0} = \left( \mathbf{K} - \frac{1}{\lambda\, \Delta s}\, \mathbf{I} \right) \cdot \mathbf{y} \qquad (10.35)$$
$$= (\mathbf{K} - \sigma \mathbf{I}) \cdot \mathbf{y}. \qquad (10.36)$$

Obviously, this is an eigenvalue problem in linear algebra. The eigenvalues of K, $\sigma_i = 1/(\lambda_i \Delta s)$, $i = 1, \ldots, N-1$, approximate the eigenvalues of $L = \int_0^1 K(x,s)\,(\cdot)\, ds$, and the
eigenvectors are approximations to the eigenfunctions of L.

Example 10.3
Find numerical approximations of the first nine eigenvalues and eigenfunctions of the homogeneous
Fredholm equation of the second kind
$$y(x) = \lambda \int_0^1 \sin(10 x s)\, y(s)\, ds. \qquad (10.37)$$

Discretization leads to a matrix equation in the form of Eq. (10.34). For display purposes only, we
examine a coarse discretization of N = 6. In this case, our discrete equation is
$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}}_{\mathbf{y}} = \lambda \underbrace{\begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.389 & 0.717 & 0.932 & 1.000 \\ 0 & 0.717 & 1.000 & 0.675 & -0.058 \\ 0 & 0.932 & 0.675 & -0.443 & -0.996 \\ 0 & 1.000 & -0.058 & -0.996 & 0.117 \end{pmatrix}}_{\mathbf{K}} \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}}_{\mathbf{y}} \underbrace{\frac{1}{5}}_{\Delta s}. \qquad (10.38)$$

Obviously, K is not full rank because of the row and column of zeros; in fact, it has a rank of 4. The
zeros exist because K(x,s) = 0 for both x = 0 and s = 0. This, however, poses no issues for computing
the eigenvalues and eigenvectors. However, N = 6 is too small to resolve either the eigenvalues or
eigenfunctions of the underlying continuous operator. Choosing N = 201 points gives acceptable
resolution for the first nine eigenvalues, which are
$$\lambda_1 = 2.523, \quad \lambda_2 = -2.526, \quad \lambda_3 = 2.792, \qquad (10.39)$$
$$\lambda_4 = -7.749, \quad \lambda_5 = 72.867, \quad \lambda_6 = -1225.2, \qquad (10.40)$$
$$\lambda_7 = 3.014 \times 10^4, \quad \lambda_8 = -1.011 \times 10^6, \quad \lambda_9 = 4.417 \times 10^7. \qquad (10.41)$$

The corresponding eigenfunctions are plotted in Fig. 10.1.

Figure 10.1: First nine eigenfunctions for $y(x) = \lambda \int_0^1 \sin(10xs)\, y(s)\, ds$.
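The computation can be sketched in Python (not part of the original notes), assuming the rectangular-rule discretization of Sec. 10.2.2.2:

```python
import numpy as np

# Sketch of the discrete eigenproblem (10.36) for the kernel sin(10 x s);
# lambda_i = 1/(sigma_i * ds), ordered by magnitude as in Eqs. (10.39)-(10.41).
N = 201
pts = np.linspace(0.0, 1.0, N)[:-1]         # x_i = s_i = (i-1)/(N-1)
ds = 1.0 / (N - 1)
K = np.sin(10.0 * np.outer(pts, pts))       # symmetric kernel
sigma = np.linalg.eigvalsh(K)
sigma = sigma[np.abs(sigma) > 1e-12]        # drop the null mode from x = 0
lam = 1.0 / (sigma * ds)
lam = lam[np.argsort(np.abs(lam))]
print(lam[:4])                              # compare with Eqs. (10.39)-(10.40)
```

The zero row and column of K contribute one exactly zero eigenvalue σ, which is excluded before inverting; the remaining values reproduce the tabulated λ's to the displayed precision.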

Example 10.4
Find numerical approximations of the first six eigenvalues and eigenfunctions of the homogeneous
Fredholm equation of the second kind
$$y(x) = \lambda \int_0^1 g(x,s)\, y(s)\, ds, \qquad (10.42)$$

where
$$g(x,s) = \begin{cases} x(s-1), & x \le s, \\ s(x-1), & x \ge s. \end{cases} \qquad (10.43)$$

This kernel is the Green's function for the problem $d^2y/dx^2 = f(x)$ with y(0) = y(1) = 0. The Green's
function solution is $y(x) = \int_0^1 g(x,s)\, f(s)\, ds$. For our example problem, we have f(s) = λ y(s); thus, we
are also solving the eigenvalue problem $d^2y/dx^2 = \lambda y$.
Choosing N = 201 points gives acceptable resolution for the first six eigenvalues, which are
$$\lambda_1 = -9.869, \quad \lambda_2 = -39.48, \quad \lambda_3 = -88.81, \qquad (10.44)$$
$$\lambda_4 = -157.9, \quad \lambda_5 = -246.6, \quad \lambda_6 = -355.0. \qquad (10.45)$$


Figure 10.2: First six eigenfunctions for $y(x) = \lambda \int_0^1 g(x,s)\, y(s)\, ds$, where g(x,s) is the
Green's function for $d^2y/dx^2 = f(x)$, y(0) = y(1) = 0.

These compare well with the known eigenvalues of $\lambda = -n^2 \pi^2$, $n = 1, 2, \ldots$:
$$\lambda_1 = -9.870, \quad \lambda_2 = -39.48, \quad \lambda_3 = -88.83, \qquad (10.46)$$
$$\lambda_4 = -157.9, \quad \lambda_5 = -246.7, \quad \lambda_6 = -355.3. \qquad (10.47)$$

The corresponding eigenfunctions are plotted in Fig. 10.2. The eigenfunctions appear to approximate
well the known eigenfunctions $\sin(n\pi x)$, $n = 1, 2, \ldots$.
We can gain some understanding of the accuracy of our method by studying how its error converges
as the number of terms in the approximation increases. There are many choices as to how to evaluate
the error. Here let us choose one of the eigenvalues, say λ4; others could have been chosen. We know
the exact value is $\lambda_4 = -16\pi^2$. Let us take the relative error to be
$$e_4 = \frac{|\lambda_{4N} + 16\pi^2|}{16\pi^2}, \qquad (10.48)$$
where $\lambda_{4N}$ here is understood to be the numerical approximation to λ4, which is a function of N.
Fig. 10.3 shows the convergence, which is well approximated by the curve fit $e_4 \approx 15.99\, N^{-2.04}$.
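A short Python sketch of this example (not part of the original notes), again assuming the rectangular-rule discretization:

```python
import numpy as np

# Discretize the Green's-function kernel of Eq. (10.43) and compare the
# computed eigenvalues with the exact values lambda_n = -n^2 pi^2.
N = 201
pts = np.linspace(0.0, 1.0, N)[:-1]
ds = 1.0 / (N - 1)
X, S = np.meshgrid(pts, pts, indexing="ij")
G = np.where(X <= S, X * (S - 1.0), S * (X - 1.0))   # symmetric kernel
sigma = np.linalg.eigvalsh(G)
sigma = sigma[np.abs(sigma) > 1e-12]                 # drop the null mode at x = 0
lam = 1.0 / (sigma * ds)
lam = lam[np.argsort(np.abs(lam))]                   # least negative first
for n in (1, 2, 3):
    print(lam[n - 1], -n**2 * np.pi**2)              # close agreement
```

Since the kernel is symmetric and negative semidefinite, all nontrivial λ's are negative, matching the sign of the exact spectrum.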

10.3 Inhomogeneous Fredholm equations


Inhomogeneous integral equations can also be studied, and we do so here.


Figure 10.3: Convergence of the relative error in approximation of $\lambda_4 = -16\pi^2$ for $y(x) = \lambda \int_0^1 g(x,s)\, y(s)\, ds$, where g(x,s) is the Green's function for $d^2y/dx^2 = f(x)$, y(0) = y(1) = 0. The fit is $e_4 \approx 15.99\, N^{-2.04}$.

10.3.1 First kind

Example 10.5
Consider solutions y(x) to the inhomogeneous Fredholm equation of the first kind
$$0 = x + \int_0^1 \sin(10 x s)\, y(s)\, ds. \qquad (10.49)$$

Here we have f(x) = x, λ = 1, and K(x,s) = sin(10xs). For a given value of x, we have K(x,s) = 0
when s = 0, and so we expect a nonunique solution for y.
Let us once again solve this by discretization techniques identical to previous examples. In short,
$$0 = f(x) + \int_0^1 K(x,s)\, y(s)\, ds \qquad (10.50)$$
leads to the matrix equation
$$\mathbf{0} = \mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s, \qquad (10.51)$$
where f is a vector of length N−1 containing the values of $f(x_i)$, $i = 1, \ldots, N-1$, K is a matrix of
dimension (N−1) × (N−1) populated by values of $K(x_i, s_j)$, $i = 1, \ldots, N-1$, $j = 1, \ldots, N-1$, and
y is a vector of length N−1 containing the unknown values of $y(x_j)$, $j = 1, \ldots, N-1$.
When we evaluate the rank of K, we find for K(x,s) = sin(10xs) that the rank of the discrete K is
r = N − 2. This is because K(x,s) evaluates to zero at x = 0 and s = 0. Thus, the right null space
is of dimension unity. Now, we have no guarantee that f lies in the column space of K, so the best we
can imagine is that there exists a unique solution y that minimizes $||\mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s||_2$ and which itself has no
components in the null space of K, so that y itself is of minimum "length." So, we say our best y is
$$\mathbf{y} = -\frac{1}{\Delta s}\, \mathbf{K}^+ \cdot \mathbf{f}, \qquad (10.52)$$


where $\mathbf{K}^+$ is the Moore-Penrose pseudoinverse of K.
Letting N = 6 gives rise to the matrix equation
$$\begin{pmatrix} 0\\0\\0\\0\\0 \end{pmatrix} = \underbrace{\begin{pmatrix} 0\\0.2\\0.4\\0.6\\0.8 \end{pmatrix}}_{\mathbf{f}} + \underbrace{\begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.389 & 0.717 & 0.932 & 1.000 \\ 0 & 0.717 & 1.000 & 0.675 & -0.058 \\ 0 & 0.932 & 0.675 & -0.443 & -0.996 \\ 0 & 1.000 & -0.058 & -0.996 & 0.117 \end{pmatrix}}_{\mathbf{K}} \underbrace{\begin{pmatrix} y_1\\y_2\\y_3\\y_4\\y_5 \end{pmatrix}}_{\mathbf{y}} \underbrace{\frac{1}{5}}_{\Delta s}. \qquad (10.53)$$

Solving for the y of minimum length which minimizes $||\mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s||_2$, we find
$$\mathbf{y} = \begin{pmatrix} 0 \\ -4.29 \\ 1.32 \\ -0.361 \\ 0.057 \end{pmatrix}. \qquad (10.54)$$

We see by inspection that the vector $(1, 0, 0, 0, 0)^T$ lies in the right null space of K. So K operating on
any scalar multiple, α, of this null space vector maps into zero and does not contribute to the error.
So the following set of solution vectors y all have the same error in approximation:
$$\mathbf{y} = \begin{pmatrix} \alpha \\ -4.29 \\ 1.32 \\ -0.361 \\ 0.057 \end{pmatrix}, \qquad \alpha \in \mathbb{R}^1. \qquad (10.55)$$

We also find the error to be, for N = 6,
$$||\mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s||_2 = 0. \qquad (10.56)$$

Because the error is zero, we have selected a function f(x) = x whose discrete approximation lies in
the column space of K; for more general functions, this will not be the case. This is a consequence of
our selected function, f(x) = x, evaluating to zero at x = 0. For example, for f(x) = x + 2, we would
find $||\mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s||_2 = f(0) = 2$, with all of the error at x = 0 and none at other points in the domain.
This seems to be a rational way to approximate the best continuous y(x) to satisfy the continuous
integral equation. However, as N increases, we find the approximation y does not converge to a finite
well-behaved function, as displayed for N = 6, 51, 101 in Fig. 10.4. This lack of convergence is likely
related to the ill-conditioned nature of K. For N = 6, the condition number c, that is, the ratio of the
largest and smallest singular values, is c = 45; for N = 51, we find $c = 10^{137}$; for N = 101, $c = 10^{232}$.
This ill-conditioned behavior is typical for Fredholm equations of the first kind. While the function
itself does not converge with increasing N, the error $||\mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s||_2$ remains zero for all N for f(x) = x
(or any other f which has f(0) = 0).
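The pseudoinverse solution of Eq. (10.52) can be sketched in Python (not part of the original notes) for the N = 6 case:

```python
import numpy as np

# Minimum-norm least-squares solution of the discrete first-kind equation
# 0 = f + K.y ds via the Moore-Penrose pseudoinverse, Eq. (10.52), N = 6.
N = 6
pts = np.linspace(0.0, 1.0, N)[:-1]
ds = 1.0 / (N - 1)
K = np.sin(10.0 * np.outer(pts, pts))
f = pts.copy()                             # f(x) = x at the sample points
y = -np.linalg.pinv(K) @ f / ds            # Eq. (10.52)
print(y)                                   # compare with Eq. (10.54)
print(np.linalg.norm(f + K @ y * ds))      # ~0, since f(0) = 0
```

Because the minimum-norm solution has no component in the null space of K, its first entry is zero, consistent with α = 0 in Eq. (10.55).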

10.3.2 Second kind

Example 10.6
Identify the solution y(x) to the inhomogeneous Fredholm equation of the second kind
$$y(x) = x + \int_0^1 \sin(10 x s)\, y(s)\, ds. \qquad (10.57)$$


Figure 10.4: Approximations y ≈ y(x), for N = 6, 51, 101, which have minimum norm while best satisfying
the discrete Fredholm equation of the first kind $\mathbf{0} = \mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s$, modeling the continuous
$0 = x + \int_0^1 \sin(10xs)\, y(s)\, ds$.

Again, we have f(x) = x, λ = 1, and K(x,s) = sin(10xs). Let us once again solve this by
discretization techniques identical to previous examples. In short,
$$y(x) = f(x) + \int_0^1 K(x,s)\, y(s)\, ds \qquad (10.58)$$
leads to the matrix equation
$$\mathbf{y} = \mathbf{f} + \mathbf{K} \cdot \mathbf{y}\, \Delta s, \qquad (10.59)$$
where f is a vector of length N−1 containing the values of $f(x_i)$, $i = 1, \ldots, N-1$, K is a matrix of
dimension (N−1) × (N−1) populated by values of $K(x_i, s_j)$, $i = 1, \ldots, N-1$, $j = 1, \ldots, N-1$, and
y is a vector of length N−1 containing the unknown values of $y(x_j)$, $j = 1, \ldots, N-1$.
Solving for y, we find
$$\mathbf{y} = (\mathbf{I} - \mathbf{K}\, \Delta s)^{-1} \cdot \mathbf{f}. \qquad (10.60)$$
The matrix $\mathbf{I} - \mathbf{K}\, \Delta s$ is not singular, and thus we find a unique solution. The only error in this solution
is that associated with the discrete nature of the approximation. This discretization error approaches
zero as N becomes large. The converged solution is plotted in Fig. 10.5. In contrast to Fredholm
equations of the first kind, those of the second kind generally have an unambiguous solution.
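The solve of Eq. (10.60) amounts to a single linear system; a minimal Python sketch (not part of the original notes):

```python
import numpy as np

# Unique discrete solution of the second-kind equation, Eq. (10.60);
# a direct linear solve replaces the explicit matrix inverse.
N = 201
pts = np.linspace(0.0, 1.0, N)[:-1]
ds = 1.0 / (N - 1)
K = np.sin(10.0 * np.outer(pts, pts))
f = pts.copy()
y = np.linalg.solve(np.eye(N - 1) - K * ds, f)   # y = (I - K ds)^(-1) . f
```

Refining N changes y only through the discretization error, in contrast to the divergent first-kind behavior of Fig. 10.4.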

10.4 Fredholm alternative


The Fredholm alternative applies to integral equations, as well as many other types of equations.
Consider, respectively, the inhomogeneous and homogeneous Fredholm equations of
the second kind,
$$y(x) = f(x) + \lambda \int_a^b K(x,s)\, y(s)\, ds, \qquad (10.61)$$
$$y(x) = \lambda \int_a^b K(x,s)\, y(s)\, ds. \qquad (10.62)$$

For such systems, given K(x,s), f(x), and nonzero $\lambda \in \mathbb{C}^1$, either


Figure 10.5: The function y(x) which solves the Fredholm equation of the second kind
$y(x) = x + \int_0^1 \sin(10xs)\, y(s)\, ds$.

• Eq. (10.61) can be uniquely solved for all f (x), or


• Eq. (10.62) has a nontrivial nonunique solution.

10.5 Fourier series projection


We can use the eigenfunctions of the linear integral operator as a basis on which to project
a general function. This then yields a Fourier series approximation of the general function.
First let us take the inner product to be defined in a typical fashion for functions u, v ∈
L2 [0, 1]:
Z 1
hu, vi = u(s)v(s) ds. (10.63)
0

If u(s) and v(s) are sampled at N uniformly spaced points in the domain s ∈ [0, 1], with
s1 = 0, sN = 1, ∆s = 1/(N − 1), the inner product can be approximated by what amounts
to the rectangular method of numerical integration:
N
X −1
hu, vi ≈ un vn ∆s. (10.64)
n=1

Then if we consider un , vn , to be the components of vectors u and v, each of length N − 1,


we can cast the inner product as
hu, vi ≈ (uT · v)∆s, (10.65)
 √ T  √ 
≈ u ∆s · v ∆s . (10.66)


The functions u and v are orthogonal if ⟨u, v⟩ = 0 when u ≠ v. The norm of a function
is, as usual,
$$||u||_2 = \sqrt{\langle u, u \rangle} = \sqrt{\int_0^1 u^2(s)\, ds}. \qquad (10.67)$$

In the discrete approximation, we have
$$||u||_2 \approx \sqrt{(\mathbf{u}^T \cdot \mathbf{u})\, \Delta s} \qquad (10.68)$$
$$\approx \sqrt{\left( \mathbf{u}\sqrt{\Delta s} \right)^T \cdot \left( \mathbf{u}\sqrt{\Delta s} \right)}. \qquad (10.69)$$
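As a quick check of Eq. (10.64), consider u(s) = s and v(s) = s², for which the exact inner product is $\int_0^1 s^3\, ds = 1/4$; a minimal Python sketch (not part of the original notes):

```python
import numpy as np

# Rectangular-rule inner product of Eq. (10.65) for u(s) = s, v(s) = s^2;
# the exact value is 1/4.
N = 1001
s = np.linspace(0.0, 1.0, N)[:-1]
ds = 1.0 / (N - 1)
u, v = s, s**2
ip = (u @ v) * ds
print(ip)           # ~0.25
```

The O(Δs) error of the rectangular rule is visible in the last digits and shrinks as N grows.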

Now consider the integral equation defining our eigenfunctions y(x), Eq. (10.32):
$$y(x) = \lambda \int_0^1 K(x,s)\, y(s)\, ds. \qquad (10.70)$$

We restrict attention to problems where K(x, s) = K(s, x). With this, the integral operator
is self-adjoint, and the eigenfunctions are thus guaranteed to be orthogonal. Consequently,
we are dealing with a problem from Hilbert-Schmidt theory. Discretization, as before, leads
to Eq. (10.36):

0 = (K − σI) · y. (10.71)

Because K(x, s) = K(s, x), its discrete form gives Kij = Kji . Thus, K = KT , and the
discrete operator is self-adjoint. We find a set of N − 1 eigenvectors, each of length N − 1,
yi , i = 1, . . . , N − 1. The eigenvalues are σi = 1/(λi ∆s), i = 1, . . . , N − 1.
Now if $y_i(x)$ is the eigenfunction, we can define a corresponding orthonormal eigenfunction
$\varphi_i(x)$ by scaling $y_i(x)$ by its norm:
$$\varphi_i(x) = \frac{y_i(x)}{||y_i||_2} = \frac{y_i(x)}{\sqrt{\int_0^1 y_i^2(s)\, ds}}. \qquad (10.72)$$

The discrete analog, properly scaled to render $\boldsymbol{\phi}_i$ to be of unit magnitude, is
$$\boldsymbol{\phi}_i = \frac{\mathbf{y}_i \sqrt{\Delta s}}{\sqrt{(\mathbf{y}_i^T \cdot \mathbf{y}_i)\, \Delta s}} \qquad (10.73)$$
$$= \frac{\mathbf{y}_i}{||\mathbf{y}_i||_2}. \qquad (10.74)$$
Now for an M-term Fourier series, we approximate f(x) by
$$f(x) \approx \sum_{i=1}^M \alpha_i\, \varphi_i(x), \qquad \text{with discrete form} \qquad \mathbf{f}_p^T = \boldsymbol{\alpha}^T \cdot \boldsymbol{\Phi}. \qquad (10.75)$$


Here $\mathbf{f}_p$ is an (N−1) × 1 vector containing the projection of f, and Φ is a matrix of dimension
M × (N−1) with each row populated by a normalized eigenvector $\boldsymbol{\phi}_i$:
$$\boldsymbol{\Phi} = \begin{pmatrix} \ldots & \boldsymbol{\phi}_1 & \ldots \\ \ldots & \boldsymbol{\phi}_2 & \ldots \\ & \vdots & \\ \ldots & \boldsymbol{\phi}_M & \ldots \end{pmatrix}. \qquad (10.76)$$

So if M = 4, we would have the approximation
$$\mathbf{f}_p = \alpha_1 \boldsymbol{\phi}_1 + \alpha_2 \boldsymbol{\phi}_2 + \alpha_3 \boldsymbol{\phi}_3 + \alpha_4 \boldsymbol{\phi}_4. \qquad (10.77)$$
However, we need an expression for the Fourier coefficients α. In the continuous
limit, $\alpha_i = \int_0^1 f(s)\, \varphi_i(s)\, ds$. The discrete analog of this is
$$\boldsymbol{\alpha}^T = \mathbf{f}^T \cdot \boldsymbol{\Phi}^T, \qquad (10.78)$$
$$\boldsymbol{\alpha} = \boldsymbol{\Phi} \cdot \mathbf{f}. \qquad (10.79)$$

The vector f is of length N−1 and contains the values of f(x) evaluated at each $x_i$. When
M = N−1, the matrix Φ is square and, moreover, orthogonal. Thus, its norm is unity,
and its transpose is its inverse. When square, it can always be constructed such that its
determinant is unity, thus rendering it a rotation. In this case, f is rotated by Φ to
form α.
We could also represent Eq. (10.75) as
$$\mathbf{f}_p = \boldsymbol{\Phi}^T \cdot \boldsymbol{\alpha}. \qquad (10.80)$$

Using Eq. (10.79) to eliminate α in Eq. (10.80), we can say
$$\mathbf{f}_p = \underbrace{\boldsymbol{\Phi}^T \cdot \boldsymbol{\Phi}}_{\mathbf{P}} \cdot\, \mathbf{f}. \qquad (10.81)$$

The matrix $\boldsymbol{\Phi}^T \cdot \boldsymbol{\Phi}$ is a projection matrix P:
$$\mathbf{P} = \boldsymbol{\Phi}^T \cdot \boldsymbol{\Phi}. \qquad (10.82)$$

The matrix P has dimension (N−1) × (N−1) and is of rank M. It has M eigenvalues of
unity and N−1−M eigenvalues which are zero. If Φ is square, P becomes the identity
matrix I, rendering $\mathbf{f}_p = \mathbf{f}$, and no information is lost. That is, the approximation at each
of the N−1 points is exact. Still, if the underlying function f(x) has fine-scale structures,
one must take N−1 to be sufficiently large to capture those structures.
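These properties of P can be illustrated with a short Python sketch (not part of the original notes; N and M here are chosen arbitrarily for illustration):

```python
import numpy as np

# Build P = Phi^T . Phi from M orthonormal eigenvectors of the symmetric K;
# P is idempotent, with M unit eigenvalues and N-1-M zero eigenvalues.
N, M = 51, 10
pts = np.linspace(0.0, 1.0, N)[:-1]
K = np.sin(10.0 * np.outer(pts, pts))
_, vecs = np.linalg.eigh(K)                 # orthonormal eigenvector columns
Phi = vecs[:, -M:].T                        # any M rows suffice to illustrate P
P = Phi.T @ Phi                             # Eq. (10.82)
print(np.allclose(P @ P, P))                # True: P is a projection
print(round(np.linalg.eigvalsh(P).sum()))   # trace = M = 10
```

Applying P twice changes nothing, which is exactly the statement that a Fourier projection of a projection returns the projection itself.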

Example 10.7
Find a Fourier series approximation for the function $f(x) = 1 - x^2$, x ∈ [0,1], where the basis
functions are the orthonormalized eigenfunctions of the integral equation
$$y(x) = \lambda \int_0^1 \sin(10 x s)\, y(s)\, ds. \qquad (10.83)$$


We have found the unnormalized eigenfunction approximation y in an earlier example by solving
the discrete equation
$$\mathbf{0} = (\mathbf{K} - \sigma \mathbf{I}) \cdot \mathbf{y}. \qquad (10.84)$$
Here K is of dimension (N−1) × (N−1), is populated by $\sin(10 x_i s_j)$, $i, j = 1, \ldots, N-1$, and is
obviously symmetric.
Let us first select a coarse approximation with N = 6. Thus, Δs = 1/(N−1) = 1/5. This yields
the same K we saw earlier in Eq. (10.38):
$$\mathbf{K} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.389 & 0.717 & 0.932 & 1.000 \\ 0 & 0.717 & 1.000 & 0.675 & -0.058 \\ 0 & 0.932 & 0.675 & -0.443 & -0.996 \\ 0 & 1.000 & -0.058 & -0.996 & 0.117 \end{pmatrix}. \qquad (10.85)$$
We then find the eigenvectors of K and use them to construct the matrix Φ. For completeness, we
present Φ for the case where M = N−1 = 5:
$$\boldsymbol{\Phi}_{5\times 5} = \begin{pmatrix} \ldots & \boldsymbol{\phi}_1 & \ldots \\ \ldots & \boldsymbol{\phi}_2 & \ldots \\ \ldots & \boldsymbol{\phi}_3 & \ldots \\ \ldots & \boldsymbol{\phi}_4 & \ldots \\ \ldots & \boldsymbol{\phi}_5 & \ldots \end{pmatrix} = \begin{pmatrix} 0 & 0.60 & 0.70 & 0.39 & 0.092 \\ 0 & 0.49 & 0.023 & -0.67 & -0.55 \\ 0 & 0.39 & -0.23 & -0.38 & 0.80 \\ 0 & -0.50 & 0.68 & -0.50 & 0.20 \\ 1.0 & 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (10.86)$$
Now, let us consider an M = 3-term Fourier series approximation. Then we will restrict attention to
the first three eigenfunctions and consider Φ to be a matrix of dimension M × (N−1) = 3 × 5:
$$\boldsymbol{\Phi} = \begin{pmatrix} 0 & 0.60 & 0.70 & 0.39 & 0.092 \\ 0 & 0.49 & 0.023 & -0.67 & -0.55 \\ 0 & 0.39 & -0.23 & -0.38 & 0.80 \end{pmatrix}. \qquad (10.87)$$
Now we consider the value of f(x) at each of the N−1 sample points given in the vector x:
$$\mathbf{x} = \begin{pmatrix} 0 \\ 1/5 \\ 2/5 \\ 3/5 \\ 4/5 \end{pmatrix} = \begin{pmatrix} 0 \\ 0.2 \\ 0.4 \\ 0.6 \\ 0.8 \end{pmatrix}. \qquad (10.88)$$
At each point, f(x) gives us the vector f, of length N−1:
$$\mathbf{f} = \begin{pmatrix} 1 \\ 24/25 \\ 21/25 \\ 16/25 \\ 9/25 \end{pmatrix} = \begin{pmatrix} 1.00 \\ 0.96 \\ 0.84 \\ 0.64 \\ 0.36 \end{pmatrix}. \qquad (10.89)$$
We can find the projected value $\mathbf{f}_p$ by direct application of Eq. (10.81):
$$\mathbf{f}_p = \underbrace{\boldsymbol{\Phi}^T \cdot \boldsymbol{\Phi}}_{\mathbf{P}} \cdot\, \mathbf{f} = \begin{pmatrix} 0 \\ 0.88 \\ 0.95 \\ 0.56 \\ 0.39 \end{pmatrix}. \qquad (10.90)$$

Figure 10.6: M = 3-term Fourier approximation of $f(x) = 1 - x^2$, x ∈ [0,1], where the basis
functions are eigenfunctions of the N−1 = 5-term discretization of the integral operator
with the symmetric kernel K(x,s) = sin(10xs), along with the error distribution.
A plot of the M = 3-term approximation for N − 1 = 5 superposed onto the exact solution, and in
a separate plot, the error distribution, is shown in Fig. 10.6. We see the approximation is generally a
good one even with only three terms. At x = 0, the approximation is bad because all the selected basis
functions evaluate to zero there, while the function evaluates to unity.
The Fourier coefficients α are found from Eq. (10.79) and are given by
$$\boldsymbol{\alpha} = \boldsymbol{\Phi} \cdot \mathbf{f} = \begin{pmatrix} 0 & 0.60 & 0.70 & 0.39 & 0.092 \\ 0 & 0.49 & 0.023 & -0.67 & -0.55 \\ 0 & 0.39 & -0.23 & -0.38 & 0.80 \end{pmatrix} \begin{pmatrix} 1.00 \\ 0.96 \\ 0.84 \\ 0.64 \\ 0.36 \end{pmatrix} = \begin{pmatrix} 1.4 \\ -0.14 \\ 0.23 \end{pmatrix}. \qquad (10.91)$$
So the Fourier series is
$$\mathbf{f}_p = \alpha_1 \boldsymbol{\phi}_1 + \alpha_2 \boldsymbol{\phi}_2 + \alpha_3 \boldsymbol{\phi}_3 = 1.4 \begin{pmatrix} 0 \\ 0.60 \\ 0.70 \\ 0.39 \\ 0.092 \end{pmatrix} - 0.14 \begin{pmatrix} 0 \\ 0.49 \\ 0.023 \\ -0.67 \\ -0.55 \end{pmatrix} + 0.23 \begin{pmatrix} 0 \\ 0.39 \\ -0.23 \\ -0.38 \\ 0.80 \end{pmatrix} = \begin{pmatrix} 0 \\ 0.88 \\ 0.95 \\ 0.56 \\ 0.39 \end{pmatrix}. \qquad (10.92)$$
If we increase N − 1, while holding M fixed, our basis functions become smoother, but the error
remains roughly the same. If we increase M while holding N − 1 fixed, we can reduce the error; we
achieve no error when M = N − 1. Let us examine a case where N − 1 = 100, so the basis functions
are much smoother, and M = 20, so the error is reduced. A plot of the M = 20-term approximation
for N − 1 = 100 superposed onto the exact solution, and in a separate plot, the error distribution, is
shown in Fig. 10.7.
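The projection of Example 10.7 can be reproduced in a short Python sketch (not part of the original notes); here the rows of Φ are taken as the eigenvectors of K with the smallest-magnitude λ, i.e., the largest-magnitude σ:

```python
import numpy as np

# Reproduce the M = 3-term projection of Example 10.7 for N = 6.
N, M = 6, 3
pts = np.linspace(0.0, 1.0, N)[:-1]
K = np.sin(10.0 * np.outer(pts, pts))
sigma, vecs = np.linalg.eigh(K)
order = np.argsort(-np.abs(sigma))     # largest |sigma| = smallest |lambda|
Phi = vecs[:, order[:M]].T             # orthonormal rows phi_1, ..., phi_3
f = 1.0 - pts**2
alpha = Phi @ f                        # Eq. (10.79)
fp = Phi.T @ alpha                     # Eq. (10.80); compare with Eq. (10.90)
print(fp)
```

The signs of individual eigenvectors returned by the solver are arbitrary, so the coefficients α may differ in sign from Eq. (10.91), but the projection $\mathbf{f}_p$ is unchanged.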


Figure 10.7: M = 20-term Fourier approximation of $f(x) = 1 - x^2$, x ∈ [0,1], where the basis
functions are eigenfunctions of the N−1 = 100-term discretization of the integral operator
with the symmetric kernel K(x,s) = sin(10xs), along with the error distribution.


Problems
1. Solve the Volterra equation
$$a + \int_0^t e^{bs}\, u(s)\, ds = a\, e^{bt}.$$
Hint: Differentiate.
2. Find any and all eigenvalues λ and associated eigenfunctions y which satisfy
$$y(x) = \lambda \int_0^1 \frac{x}{s}\, y(s)\, ds.$$

3. Find a numerical approximation to the first six eigenvalues and eigenfunctions of
$$y(x) = \lambda \int_0^1 \cos(10 x s)\, y(s)\, ds.$$
Use sufficient resolution to resolve the eigenvalues to three digits of accuracy. Plot on a single graph
the first six eigenfunctions.
4. Find numerical approximations to y(x) via a process of discretization and, where appropriate, the Moore-Penrose pseudoinverse, for the equations
(a) $0 = x + \int_0^1 \cos(10 x s)\, y(s)\, ds$,
(b) $y(x) = x + \int_0^1 \cos(10 x s)\, y(s)\, ds$.
In each, demonstrate whether or not the solution converges as the discretization is made finer.
5. Find any and all solutions, y(x), which satisfy
(a) $y(x) = \int_0^1 y(s)\, ds$,
(b) $y(x) = x + \int_0^1 y(s)\, ds$,
(c) $y(x) = \int_0^1 x^2 s^2\, y(s)\, ds$,
(d) $y(x) = x^2 + \int_0^1 x^2 s^2\, y(s)\, ds$.
6. Using the eigenfunctions $y_i(x)$ of the equation
$$y(x) = \lambda \int_0^1 e^{xs}\, y(s)\, ds,$$
approximate the following functions f(x) for x ∈ [0,1] in ten-term expansions of the form
$$f(x) = \sum_{i=1}^{10} \alpha_i\, y_i(x):$$
(a) f(x) = x,
(b) f(x) = sin(πx).
The eigenfunctions will need to be estimated by numerical approximation.

© 06 February 2024. J. M. Powers.


Bibliography

R. Abraham, J. E. Marsden, and T. Ratiu, Manifolds, Tensor Analysis, and Applications,


Springer, New York, 1988.

M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, Dover Pub-


lications, New York, 1964.

A. A. Andronov, Qualitative Theory of Second Order Dynamical Systems, John Wiley &
Sons, New York, 1973.

P. J. Antsaklis, and A. N. Michel, Linear Systems, Birkhäuser, Boston, 1997.

P. J. Antsaklis, and A. N. Michel, A Linear Systems Primer, Birkhäuser, Boston, 2007.

T. M. Apostol, Calculus: One-Variable Calculus, with an Introduction to Linear Algebra,


Vol. 1, Second Edition, John Wiley & Sons, New York, 1991.

T. M. Apostol, Calculus: Multi-Variable Calculus and Linear Algebra with Applications to


Differential Equations and Probability, Vol. 2, Second Edition, John Wiley & Sons,
New York, 1991.

G. B. Arfken, H. J. Weber, and F. E. Harris, Mathematical Methods for Physicists, Seventh


Edition, Academic Press, Waltham, MA, 2012.

R. Aris, Vectors, Tensors, and the Basic Equations of Fluid Mechanics, Dover Publications,
New York, 1962.

V. I. Arnold, Ordinary Differential Equations, MIT Press, Cambridge, MA, 1973.

V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer,


New York, 1983.

D. Arrowsmith and C. M. Place, Dynamical Systems: Differential Equations, Maps, and


Chaotic Behaviour, Chapman Hall/CRC, Boca Raton, FL, 1992.

277
N. H. Asmar, Applied Complex Analysis with Partial Differential Equations, Prentice-Hall,
Upper Saddle River, NJ, 2002.

G. I. Barenblatt, Scaling, Self-Similarity, and Intermediate Asymptotics, Cambridge Uni-


versity Press, Cambridge, UK, 1996.

R. Bellman and K. L. Cooke, Differential-Difference Equations, Academic Press, New York,


1963.

C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engi-
neers, Springer-Verlag, New York, 1999.

M. L. Boas, Mathematical Methods in the Physical Sciences, Third Edition, John Wiley &
Sons, New York, 2005.

A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover
Publications, New York, 1968.

W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value


Problems, Tenth Edition, John Wiley & Sons, New York, 2012.

K. E. Brenan, S. L. Campbell, and L. R. Petzold, Numerical Solution of Initial-Value


Problems in Differential-Algebraic Equations, SIAM, Philadelphia, 1996.

M. Braun, Differential Equations and Their Applications, Springer-Verlag, New York, 1983.

I. N. Bronshtein and K. A. Semendyayev, Handbook of Mathematics, Springer, Berlin, 1998.

B. J. Cantwell, Introduction to Symmetry Analysis, Cambridge University Press, Cam-


bridge, UK, 2002.

C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid


Dynamics, Springer-Verlag, New York, 1988.

J. Carr, Applications of Centre Manifold Theory, Springer-Verlag, New York, 1981.

G. F. Carrier and C. E. Pearson, Ordinary Differential Equations, SIAM, Philadelphia,


1991.

R. V. Churchill, Fourier Series and Boundary Value Problems, McGraw-Hill, New York,
1941.

R. V. Churchill, J. W. Brown, and R. F. Verhey, Complex Variables and Applications, Third


Edition, McGraw-Hill, New York, 1976.

P. G. Ciarlet, Introduction to Numerical Linear Algebra and Optimisation, Cambridge Uni-


versity Press, Cambridge, UK, 1989.

278
T. B. Co, Methods of Applied Mathematics for Engineers and Scientists, Cambridge Uni-
versity Press, Cambridge, UK, 2013.

J. A. Cochran, H. C. Wiser and B. J. Rice, Advanced Engineering Mathematics, Second


Edition, Brooks/Cole, Monterey, CA, 1987.

E. A. Coddington, An Introduction to Ordinary Differential Equations, Dover Publications,


New York, 1989.

E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, Krieger,


Malabar, FL, 1987.

R. Courant and D. Hilbert, Methods of Mathematical Physics, Vols. 1 and 2, John Wiley
& Sons, New York, 1989.

R. Courant, Differential and Integral Calculus, Vols. 1 and 2, John Wiley & Sons, New
York, 1988.

I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.

L. Debnath and P. Mikusinski, Introduction to Hilbert Spaces with Applications, Third


Edition, Elsevier, Amsterdam, 2005.

J. W. Dettman, Mathematical Methods in Physics and Engineering, McGraw-Hill, New


York, 1962.

P. G. Drazin, Nonlinear Systems, Cambridge University Press, Cambridge, UK, 1992.

R. D. Driver, Ordinary and Delay Differential Equations, Springer-Verlag, New York, 1977.

J. Feder, Fractals, Plenum Press, New York, 1988.

B. A. Finlayson, The Method of Weighted Residuals and Variational Principles, Academic


Press, New York, 1972.

C. A. J. Fletcher, Computational Techniques for Fluid Dynamics, Second Edition, Springer,


Berlin, 1991.

B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press,


Cambridge, UK, 1998.

B. Friedman, Principles and Techniques of Applied Mathematics, Dover Publications, New


York, 1956.

I. M. Gelfand and S. V. Fomin, Calculus of Variations, Dover Publications, New York,


2000.

J. Gleick, Chaos, Viking, New York, 1987.

279
G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.

S. W. Goode, An Introduction to Differential Equations and Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1991.

B. Goodwine, Engineering Differential Equations: Theory and Applications, Springer, New York, 2011.

D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, 1977.

M. D. Greenberg, Foundations of Applied Mathematics, Prentice-Hall, Englewood Cliffs, NJ, 1978.

M. D. Greenberg, Advanced Engineering Mathematics, Second Edition, Pearson, Upper Saddle River, NJ, 1998.

D. H. Griffel, Applied Functional Analysis, Dover Publications, New York, 2002.

J. Guckenheimer and P. H. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 2002.

M. T. Heath, Scientific Computing, Second Edition, McGraw-Hill, Boston, 2002.

J. Hale and H. Koçak, Dynamics and Bifurcations, Springer-Verlag, New York, 1991.

F. B. Hildebrand, Advanced Calculus for Applications, Second Edition, Prentice-Hall, Englewood Cliffs, NJ, 1976.

M. W. Hirsch and S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, Boston, 1974.

M. W. Hirsch, S. Smale, and R. L. Devaney, Differential Equations, Dynamical Systems, and an Introduction to Chaos, Third Edition, Academic Press, Waltham, MA, 2013.

M. H. Holmes, Introduction to Perturbation Methods, Springer-Verlag, New York, 1995.

M. H. Holmes, Introduction to the Foundations of Applied Mathematics, Springer-Verlag, New York, 2009.

R. A. Howland, Intermediate Dynamics: a Linear Algebraic Approach, Springer, New York, 2006.

J. H. Hubbard and B. B. Hubbard, Vector Calculus, Linear Algebra, and Differential Forms: a Unified Approach, Fourth Edition, Matrix Editions, Ithaca, NY, 2009.

M. Humi and W. Miller, Second Course in Ordinary Differential Equations for Scientists and Engineers, Springer-Verlag, New York, 1988.

E. J. Hinch, Perturbation Methods, Cambridge University Press, Cambridge, UK, 1991.

A. Iserles, A First Course in the Numerical Analysis of Differential Equations, Second Edition, Cambridge University Press, Cambridge, UK, 2009.

E. T. Jaynes, Probability Theory: the Logic of Science, Cambridge University Press, Cambridge, UK, 2003.

H. Jeffreys and B. Jeffreys, Methods of Mathematical Physics, Third Edition, Cambridge University Press, Cambridge, UK, 1972.

D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations: an Introduction for Scientists and Engineers, Fourth Edition, Oxford University Press, Oxford, UK, 2007.

P. B. Kahn, Mathematical Methods for Engineers and Scientists, Dover Publications, New York, 2004.

W. Kaplan, Advanced Calculus, Fifth Edition, Addison-Wesley, Boston, 2003.

D. C. Kay, Tensor Calculus, Schaum’s Outline Series, McGraw-Hill, New York, 1988.

J. Kevorkian and J. D. Cole, Perturbation Methods in Applied Mathematics, Springer-Verlag, New York, 1981.

J. Kevorkian and J. D. Cole, Multiple Scale and Singular Perturbation Methods, Springer-Verlag, New York, 1996.

A. N. Kolmogorov and S. V. Fomin, Elements of the Theory of Functions and Functional Analysis, Dover Publications, New York, 1999.

L. D. Kovach, Advanced Engineering Mathematics, Addison-Wesley, Reading, MA, 1982.

E. Kreyszig, Advanced Engineering Mathematics, Tenth Edition, John Wiley & Sons, New York, 2011.

E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, New York, 1978.

C. Lanczos, The Variational Principles of Mechanics, Fourth Edition, Dover Publications, New York, 2000.

P. D. Lax, Functional Analysis, John Wiley & Sons, New York, 2002.

P. D. Lax, Linear Algebra and its Applications, Second Edition, John Wiley & Sons, Hoboken, NJ, 2007.

J. R. Lee, Advanced Calculus with Linear Analysis, Academic Press, New York, 1972.

R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press, Cambridge, UK, 2002.

R. J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, Philadelphia, 2007.

A. J. Lichtenberg and M. A. Lieberman, Regular and Chaotic Dynamics, Second Edition, Springer, Berlin, 1992.

C. C. Lin and L. A. Segel, Mathematics Applied to Deterministic Problems in the Natural Sciences, SIAM, Philadelphia, 1988.

J. D. Logan, Applied Mathematics, Fourth Edition, John Wiley & Sons, Hoboken, NJ, 2013.

R. J. Lopez, Advanced Engineering Mathematics, Addison Wesley Longman, Boston, 2001.

D. Lovelock and H. Rund, Tensors, Differential Forms, and Variational Principles, Dover Publications, New York, 1989.

J. E. Marsden and A. Tromba, Vector Calculus, Sixth Edition, W. H. Freeman, San Francisco, 2011.

J. Mathews and R. L. Walker, Mathematical Methods of Physics, Addison-Wesley, Redwood City, CA, 1970.

A. J. McConnell, Applications of Tensor Analysis, Dover Publications, New York, 1957.

C. C. Mei, Mathematical Analysis in Engineering, Cambridge University Press, Cambridge, UK, 1997.

A. N. Michel and C. J. Herget, Applied Algebra and Functional Analysis, Dover Publications, New York, 1981.

R. K. Miller and A. N. Michel, Ordinary Differential Equations, Dover Publications, New York, 2007.

P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Vols. 1 and 2, McGraw-Hill, New York, 1953.

J. A. Murdock, Perturbations, Theory and Methods, SIAM, Philadelphia, 1987.

G. M. Murphy, Ordinary Differential Equations and Their Solutions, Dover Publications, New York, 2011.

J. T. Oden and L. F. Demkowicz, Applied Functional Analysis, Second Edition, CRC, Boca Raton, FL, 2010.

P. V. O’Neil, Advanced Engineering Mathematics, Seventh Edition, Cengage, Stamford, CT, 2012.

L. Perko, Differential Equations and Dynamical Systems, Third Edition, Springer, Berlin, 2006.

J. M. Powers and M. Sen, Mathematical Methods in Engineering, Cambridge University Press, New York, 2015.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in Fortran 77, Cambridge University Press, Cambridge, UK, 1986.

A. Prosperetti, Advanced Mathematics for Applications, Cambridge University Press, Cambridge, UK, 2011.

J. N. Reddy, Applied Functional Analysis and Variational Methods in Engineering, McGraw-Hill, New York, 1986.

J. N. Reddy and M. L. Rasmussen, Advanced Engineering Analysis, John Wiley & Sons, New York, 1982.

R. M. Redheffer, Differential Equations: Theory and Applications, Jones and Bartlett, Boston, 1991.

R. D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems, Second Edition, Krieger, Malabar, FL, 1994.

F. Riesz and B. Sz.-Nagy, Functional Analysis, Dover Publications, New York, 1990.

K. F. Riley, M. P. Hobson, and S. J. Bence, Mathematical Methods for Physics and Engineering: a Comprehensive Guide, Third Edition, Cambridge University Press, Cambridge, UK, 2006.

P. D. Ritger and N. J. Rose, Differential Equations with Applications, Dover Publications, New York, 2010.

J. C. Robinson, Infinite-Dimensional Dynamical Systems, Cambridge University Press, Cambridge, UK, 2001.

M. Rosenlicht, Introduction to Analysis, Dover Publications, New York, 1968.

T. L. Saaty and J. Bram, Nonlinear Mathematics, Dover Publications, New York, 2010.

H. Sagan, Boundary and Eigenvalue Problems in Mathematical Physics, Dover Publications, New York, 1989.

D. A. Sanchez, R. C. Allen, and W. T. Kyner, Differential Equations, Addison-Wesley, Boston, 1988.

H. M. Schey, Div, Grad, Curl, and All That, Fourth Edition, W. W. Norton, London, 2005.

M. J. Schramm, Introduction to Real Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1996.

L. A. Segel, Mathematics Applied to Continuum Mechanics, Dover Publications, New York, 1987.

D. S. Sivia and J. Skilling, Data Analysis: a Bayesian Tutorial, Second Edition, Oxford University Press, Oxford, UK, 2006.

I. S. Sokolnikoff and R. M. Redheffer, Mathematics of Physics and Modern Engineering, Second Edition, McGraw-Hill, New York, 1966.

G. Stephenson and P. M. Radmore, Advanced Mathematical Methods for Engineering and Science Students, Cambridge University Press, Cambridge, UK, 1990.

G. Strang, Linear Algebra and its Applications, Fourth Edition, Cengage Learning, Stamford, CT, 2005.

G. Strang, Introduction to Applied Mathematics, Wellesley-Cambridge, Wellesley, MA, 1986.

G. Strang, Computational Science and Engineering, Wellesley-Cambridge, Wellesley, MA, 2007.

S. H. Strogatz, Nonlinear Dynamics and Chaos with Applications to Physics, Biology, Chemistry, and Engineering, Westview, Boulder, CO, 2001.

R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Second Edition, Springer, New York, 1997.

G. B. Thomas and R. L. Finney, Calculus and Analytic Geometry, Ninth Edition, Addison-Wesley, Boston, 1995.

L. N. Trefethen and M. Embree, Spectra and Pseudospectra: the Behavior of Nonnormal Matrices and Operators, Princeton University Press, Princeton, NJ, 2005.

L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1997.

M. Van Dyke, Perturbation Methods in Fluid Mechanics, Parabolic Press, Stanford, CA, 1975.

A. Varma and M. Morbidelli, Mathematical Methods in Chemical Engineering, Oxford University Press, Oxford, UK, 1997.

G. B. Whitham, Linear and Nonlinear Waves, Wiley, New York, 1999.

S. Wiggins, Global Bifurcations and Chaos: Analytical Methods, Springer-Verlag, New York, 1988.

S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Second Edition, Springer-Verlag, New York, 2003.

H. J. Wilcox and L. W. Lamm, An Introduction to Lebesgue Integration and Fourier Series, Dover Publications, New York, 2012.

C. R. Wylie and L. C. Barrett, Advanced Engineering Mathematics, Sixth Edition, McGraw-Hill, New York, 1995.

D. Xiu, Numerical Methods for Stochastic Computations, Princeton University Press, Princeton, NJ, 2010.

E. Zauderer, Partial Differential Equations of Applied Mathematics, Second Edition, Wiley, New York, 1989.

E. Zeidler, Applied Functional Analysis: Main Principles and Their Applications, Springer-Verlag, New York, 1995.

E. Zeidler, Applied Functional Analysis: Applications to Mathematical Physics, Springer-Verlag, New York, 1999.

D. G. Zill and M. R. Cullen, Advanced Engineering Mathematics, Fourth Edition, Jones and Bartlett, Boston, 2009.
