Integral Transforms

Richard Earl (edited, and Chapter 6 added, by Sam Howison)

Hilary Term 2019



SYLLABUS

Motivation for a “function” with the properties of the Dirac δ-function. Test functions. Continuous functions are determined by φ ↦ ∫fφ. Distributions and δ as a distribution. Differentiating distributions. (3 lectures)

Theory of Fourier and Laplace transforms, inversion, convolution. Inversion of some standard
Fourier and Laplace transforms via contour integration.

Use of Fourier and Laplace transforms in solving ordinary differential equations, with some
examples including δ.

Use of Fourier and Laplace transforms in solving partial differential equations; in particular,
use of the Fourier transform in solving Laplace’s equation and the heat equation. (5 lectures)

SUGGESTED READING

Sam Howison, Practical Applied Mathematics (CUP, 2005), Chapters 9 & 10 (for distributions).
R. I. Richards & H. K. Youn, The Theory of Distributions: A Nontechnical Introduction (CUP).
P. J. Collins, Differential and Integral Equations (OUP, 2006), Chapter 14.
W. E. Boyce & R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems (7th edition, Wiley, 2000), Chapter 6.
K. F. Riley & M. P. Hobson, Essential Mathematical Methods for the Physical Sciences (CUP, 2011), Chapter 5.
H. A. Priestley, Introduction to Complex Analysis (2nd edition, OUP, 2003), Chapters 21 and 22.

LECTURE LAYOUT

1. Motivation. Functions as distributions.
2-3. Test Functions. Distributions and differentiating distributions.
3-4. Laplace Transform. Properties.
4-5. Applications to ODEs.
6. Convolution and Inversion.
7. Fourier Transform and applications.
8. Applications to PDEs.

Remark 1 Before we even get started we need to recognize that we don’t really have a rigorous enough or general enough theory of integration on the real line. The Riemann integral (constructed in Prelims Analysis III) applies to bounded functions on a bounded interval. A more general theory – Lebesgue Integration – does exist and anyone interested can study this in further detail in the A4 Integration option.
Somewhat simplistically, there are two ways in which a function can fail to be integrable. A function can be so pathological that there is simply no hope of integrating it. However, the Lebesgue integral is so very general that almost any function we might conceive of is Lebesgue measurable, and so in practice this issue does not arise.
Rather, we more typically meet functions that fail to be integrable because the area that the integral would represent is simply infinite. Such examples are
$$\frac{1_{(0,1)}(x)}{x}, \qquad 1_{(0,\infty)}(x), \qquad e^x\,1_{(0,\infty)}(x).$$
There are some more subtle examples, such as
$$\frac{\sin x}{x}\,1_{(0,\infty)}(x),$$
where the absolute value of the function isn’t integrable even though the integral is conditionally convergent (if you integrate to infinity via intervals of length 2π, cancellation of alternate positive and negative contributions gives a finite result).
Especially when we come to the Laplace transform, the main properties of the Lebesgue integral that we shall be using are:

• If f is a (measurable) function, g an integrable function and |f| ≤ g, then f is integrable. One might think of this as a comparison test for integrals.

• (Riemann–Lebesgue Lemma.) If f is an integrable function then
$$\int_{-\infty}^{\infty} f(x)\cos ax\,dx \to 0 \quad\text{and}\quad \int_{-\infty}^{\infty} f(x)\sin ax\,dx \to 0 \quad\text{as } a\to\infty.$$

(This is another cancellation-of-oscillations result, now because the oscillations get more and more rapid as a → ∞. A consequence of this lemma is that the Fourier coefficients aₙ and bₙ tend to zero as n → ∞.)
Finally, when it comes to the discussion of distributions, the following definition will also
be important.
Definition 2 f : ℝ → ℝ is locally integrable if it is integrable on any bounded interval.

So, for example, x² is not integrable on ℝ but it is locally integrable.
Chapter 1

The Dirac δ-function and Distributions

1.1 Motivation
There are many instances in mathematics where one might want to model a problem which
– in the traditional sense – cannot be described by a function and will lead to singularities.
Typical examples are:

• A point mass.

• A point heat source.

• An instantaneous impulse.

Example 3 (A Point Heat Source) Consider the time-independent heat equation¹ in a bar −1 ≤ x ≤ 1:
$$-T''(x) = g(x), \quad -1 < x < 1, \qquad T(-1) = 0 = T(1).$$
The function g(x) describes the heat being introduced to or removed from the bar. What function δ would model a unit point source at x = 0? Necessarily we would need the following:
$$\delta(x) = 0 \ \text{ for } x \neq 0; \qquad \int_{-\infty}^{\infty}\delta(x)\,dx = 1.$$

¹ Why is there a minus sign on the left? The signs are chosen so that, for the time-dependent heat equation ∂u/∂t − ∂²u/∂x² = g, a positive source g (heat input) results in an increasing temperature.


Our immediate problem is that no function δ, in the classical sense of the word, has these properties.
On the other hand we might nonetheless try to solve the BVP above for such δ. As T″(x) = 0 for x ≠ 0, and with the given boundary conditions, we know
$$T(x) = \begin{cases} A(x-1) & x > 0; \\ B(1+x) & x < 0. \end{cases}$$
Physically we would also expect T to be continuous at x = 0, which means −A = B. Finally we would also need
$$1 = \int_{-1}^{1} -T''(x)\,dx = \left[-T'(x)\right]_{-1}^{1} = -A + B,$$
so that A = −1/2, B = 1/2. So our solution is
$$T(x) = \begin{cases} \tfrac{1}{2}(1-x) & x > 0; \\ \tfrac{1}{2}(1+x) & x < 0. \end{cases}$$
And in some sense the function we are interested in is δ(x) = −T″(x). This doesn’t help much, as in the classical sense T is not even differentiable at x = 0, let alone twice differentiable.
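The point-source solution above can be checked numerically. The following sketch (not part of the original notes; it assumes numpy and a rectangular approximation to the point source) solves −T″ = g by finite differences and compares with the piecewise-linear T(x) = (1 − |x|)/2 derived above.

```python
import numpy as np

# Solve -T'' = g on [-1, 1] with T(-1) = T(1) = 0 by finite differences,
# where g is a tall narrow rectangle of unit total mass approximating a
# point source at x = 0 (a stand-in for the delta function).
N = 801                                   # grid points (odd, so x = 0 is a node)
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]

g = np.where(np.abs(x) < 0.02, 1.0, 0.0)  # narrow rectangle around x = 0
g /= g.sum() * h                          # normalise: discrete mass exactly 1

# -T'' at node i is approximated by (-T[i-1] + 2 T[i] - T[i+1]) / h^2.
A = (np.diag(2.0 * np.ones(N - 2))
     + np.diag(-np.ones(N - 3), 1)
     + np.diag(-np.ones(N - 3), -1)) / h**2
T = np.zeros(N)
T[1:-1] = np.linalg.solve(A, g[1:-1])

T_exact = 0.5 * (1.0 - np.abs(x))         # the piecewise-linear solution above
err = np.max(np.abs(T - T_exact))
print(err)                                # small, shrinking as the source narrows
```

As the rectangle narrows, the computed temperature approaches the tent-shaped solution, with T(0) → 1/2.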
Example 4 (A Point Mass) In a similar fashion, by Poisson’s equation, a point mass M at the origin of the real line will generate a gravitational field f(x) that satisfies
$$f'(x) = -4\pi G\rho = 0 \quad\text{for } x \neq 0,$$
but by Gauss’s Flux Theorem we also have
$$\int_{-\infty}^{\infty} f'(x)\,dx = -4\pi GM.$$
It would seem that if we could find an appropriate function δ(x) for the first example then f′(x) = −4πGM δ(x) would work here.
Whilst we are here, we might also notice from Gauss’s Flux Theorem that we would expect
$$\int_{-\infty}^{a} f'(x)\,dx = \begin{cases} -4\pi GM & a > 0; \\ 0 & a < 0; \end{cases} \implies \int_{-\infty}^{a}\delta(x)\,dx = \begin{cases} 1 & a > 0; \\ 0 & a < 0. \end{cases}$$
Example 5 (Kick Start) Let T > 0 and consider the ODE
$$m\ddot x(t) + kx(t) = I\,\delta(t-T), \qquad x(0) = \dot x(0) = 0.$$
This is the equation governing the extension of a mass m on a spring with spring constant k. The system remains at rest until a time T when an instantaneous impulse I is applied. Determine the extension x(t).

Solution. The solution to the problem is of the form
$$x(t) = \begin{cases} A\cos\omega t + B\sin\omega t & t < T; \\ C\cos\omega t + D\sin\omega t & t > T; \end{cases}$$
where ω² = k/m. From the initial conditions we have that A = B = 0 and the system sits at rest for t < T. By continuity we also have that
$$C\cos\omega T + D\sin\omega T = 0.$$
However there is a discontinuity I in the momentum mẋ of the particle at t = T. So
$$m\omega(-C\sin\omega T + D\cos\omega T) = I.$$
Hence
$$\begin{pmatrix}\cos\omega T & \sin\omega T\\ -\sin\omega T & \cos\omega T\end{pmatrix}\begin{pmatrix}C\\ D\end{pmatrix} = \begin{pmatrix}0\\ \frac{I}{m\omega}\end{pmatrix} \implies \begin{pmatrix}C\\ D\end{pmatrix} = \begin{pmatrix}\cos\omega T & -\sin\omega T\\ \sin\omega T & \cos\omega T\end{pmatrix}\begin{pmatrix}0\\ \frac{I}{m\omega}\end{pmatrix}$$
and
$$x(t) = \frac{I}{m\omega}\left\{-\sin\omega T\cos\omega t + \cos\omega T\sin\omega t\right\} = \frac{I\sin\omega(t-T)}{m\omega} \quad\text{for } t > T.$$

1.2 Delta function

So how do we go about rigorously defining a “function” δ with the properties
$$\delta(x) = 0 \ \text{ for } x \neq 0; \qquad \int_{-\infty}^{\infty}\delta(x)\,dx = 1?$$
We know that there is no such function in the classical sense, so we need to think how else we might convey information about functions and, with luck, find a more general setting in which δ might exist.
Suppose now that φ : ℝ → ℝ is continuous. Then for any ε > 0 there exists Δ > 0 such that
$$-\Delta < x < \Delta \implies -\varepsilon < \varphi(x) - \varphi(0) < \varepsilon.$$
Note that
$$\int_{-\infty}^{\infty}\delta(x)\varphi(x)\,dx = \int_{-\infty}^{-\Delta}\delta(x)\varphi(x)\,dx + \int_{-\Delta}^{\Delta}\delta(x)\varphi(x)\,dx + \int_{\Delta}^{\infty}\delta(x)\varphi(x)\,dx = \int_{-\Delta}^{\Delta}\delta(x)\varphi(x)\,dx,$$

and so we would expect
$$\varphi(0) - \varepsilon = (\varphi(0) - \varepsilon)\int_{-\Delta}^{\Delta}\delta(x)\,dx \le \int_{-\Delta}^{\Delta}\delta(x)\varphi(x)\,dx \le (\varphi(0) + \varepsilon)\int_{-\Delta}^{\Delta}\delta(x)\,dx = \varphi(0) + \varepsilon.$$
As this is true for all ε then we have
$$\int_{-\infty}^{\infty}\delta(x)\varphi(x)\,dx = \varphi(0) \quad\text{when } \varphi \text{ is continuous.} \tag{1.1}$$

This is quite a big steer towards the idea of a generalized function or distribution. As we will see, a continuous function f(x) can be reconstructed from knowledge of integrals such as
$$\langle f, \varphi\rangle := \int_{-\infty}^{\infty} f(x)\varphi(x)\,dx,$$
where φ(x) is any continuous function. Piecewise continuous functions could also largely be reconstructed, but we would be uncertain quite what values were taken at the discontinuities.
We see from (1.1) that our desired function δ(x) could fit within this framework of generalized functions. Moreover, if we are careful with our choice of functions φ(x), we will see that it is possible to do calculus with these generalized functions.

1.3 Test Functions and Distributions

So it seems that to understand δ we need to set aside our narrow view of a function f being defined simply at points and instead try to understand the map
$$\varphi \mapsto \langle f, \varphi\rangle = \int_{-\infty}^{\infty} f(x)\varphi(x)\,dx \tag{1.2}$$
where φ(x) is a continuous function. Certainly continuous functions f can be uniquely represented this way, and we have seen that δ can be understood this way as “evaluation at 0”. Note though that this effectively treats f(x) as a functional on the space of continuous functions.
However, we might be more ambitious and hope to differentiate these generalized functions. Given a generalized function f, can we make sense of its derivative f′ as another generalized function? This would be
$$\varphi \mapsto \langle f', \varphi\rangle = \int_{-\infty}^{\infty} f'(x)\varphi(x)\,dx.$$

What sense might this integral on the RHS make? If we could apply integration by parts, and if φ were differentiable, then we’d have
$$\int_{-\infty}^{\infty} f'(x)\varphi(x)\,dx = \left[f(x)\varphi(x)\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f(x)\varphi'(x)\,dx.$$
The second integral makes sense, as this is just the generalized function f evaluated on φ′. And not unreasonably we might also require that
$$\lim_{x\to\infty}\varphi(x) = \lim_{x\to-\infty}\varphi(x) = 0.$$
This would mean that f′ was the generalized function
$$\varphi \mapsto -\int_{-\infty}^{\infty} f(x)\varphi'(x)\,dx. \tag{1.3}$$
So we are going to have to shift the goal posts a little now. Instead of considering continuous functions φ(x) generally, if we want generalized functions to be differentiable and for the derivative to be itself a generalized function, then the functions φ(x) need to be infinitely differentiable. On the basis of the calculation above it seems we would also like φ(∞) = φ(−∞) = 0.
Thus we make the following definition.
Thus we make the following definition.

Definition 6 A map φ : ℝ → ℝ is a test function if it is smooth (i.e. infinitely differentiable) and if there exists X such that φ(x) = 0 when |x| > X.

• It is an easy check to show that the test functions form a vector space D.

• We have restricted our attention from continuous functions to test functions so that we can differentiate generalized functions. We shall see though that test functions are still plentiful enough that a continuous function can be reconstructed from (1.2).

Example 7 The function
$$\varphi(x) = \begin{cases} \exp\left(\dfrac{1}{x^2-1}\right) & |x| < 1; \\ 0 & |x| \ge 1; \end{cases}$$
is a test function.

Solution. It is clearly infinitely differentiable for x ≠ ±1. We also have
$$\lim_{x\nearrow 1}\frac{\varphi(x)-\varphi(1)}{x-1} = \lim_{x\nearrow 1}\frac{1}{x-1}\exp\left(\frac{1}{x^2-1}\right) = \lim_{x\nearrow 1}\frac{1}{x-1}\exp\left(\frac{1/2}{x-1}\right)\exp\left(-\frac{1/2}{x+1}\right)$$
$$= \exp\left(-\frac{1}{4}\right)\lim_{x\nearrow 1}\frac{1}{x-1}\exp\left(\frac{1/2}{x-1}\right) = \exp\left(-\frac{1}{4}\right)\lim_{t\to-\infty} t\exp\left(\frac{t}{2}\right) = 0,$$
substituting t = 1/(x − 1). So φ′(1) = 0, and as φ is even then φ′(−1) = 0.
For x ≠ ±1 we have
$$\varphi'(x) = \frac{-2x}{(x^2-1)^2}\exp\left(\frac{1}{x^2-1}\right)$$
and for k ≥ 1 we might make the inductive hypothesis that, for x ≠ ±1,
$$\varphi^{(k)}(x) = \frac{p_k(x)}{(x^2-1)^{2k}}\exp\left(\frac{1}{x^2-1}\right),$$

where pₖ(x) is a polynomial of degree less than or equal to 3k. If true at k then
$$\varphi^{(k+1)}(x) = \left(\frac{p_k'(x)}{(x^2-1)^{2k}} - \frac{2x\,p_k(x)}{(x^2-1)^{2k+2}} - \frac{4kx\,p_k(x)}{(x^2-1)^{2k+1}}\right)\exp\left(\frac{1}{x^2-1}\right)$$
$$= \left(\frac{p_k'(x)(x^2-1)^2 - 2x\,p_k(x) - 4kx(x^2-1)\,p_k(x)}{(x^2-1)^{2k+2}}\right)\exp\left(\frac{1}{x^2-1}\right) = \frac{p_{k+1}(x)}{(x^2-1)^{2k+2}}\exp\left(\frac{1}{x^2-1}\right).$$
So our hypothesis holds for all k, and by an argument similar to the one showing φ′(1) = 0 we see that φ^(k)(1) = 0 for all k ≥ 1.
Hence φ(x) is a test function.
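The vanishing derivative at the endpoint can be seen numerically: the one-sided difference quotients of the bump function at x = 1 collapse to zero extremely fast. A small illustration (not part of the original notes; it assumes numpy):

```python
import numpy as np

# One-sided difference quotients of the bump function at x = 1:
# (phi(1 - h) - phi(1)) / ((1 - h) - 1) should tend to 0 as h -> 0,
# consistent with phi'(1) = 0 as computed above.
def phi(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

hs = np.array([1e-1, 1e-2, 1e-3])
quotients = (phi(1.0 - hs) - 0.0) / (-hs)   # phi(1) = 0
print(quotients)
```

The quotients decay faster than any power of h, reflecting the essential singularity of exp(1/(x² − 1)) at x = 1.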

A similar calculation would show more generally that
$$\varphi(x) = \begin{cases} \exp\left(\dfrac{C}{(x-a)(x-b)}\right) & a < x < b \\ 0 & \text{otherwise} \end{cases} \tag{1.4}$$
is a test function (for a constant C > 0, so that the exponent tends to −∞ at the endpoints). In the end, however, we don’t really care too much about the test functions, as long as we know they exist and there are ‘enough’ of them to let us work with distributions.

It remains an important point that continuous functions can be reconstructed from knowledge of the integrals in (1.2). More precisely we show the following.

Theorem 8 Let f : ℝ → ℝ be a continuous function such that
$$\int_{-\infty}^{\infty} f(x)\varphi(x)\,dx = 0$$
for all test functions φ. Then f = 0.

Proof. Suppose for a contradiction that f(x₀) ≠ 0 for some x₀. Without loss of generality say f(x₀) > 0. If we set ε = f(x₀)/2 > 0 then by continuity we can find Δ > 0 such that f(x) > ε for x₀ − Δ < x < x₀ + Δ. If we take the test function from (1.4) with a = x₀ − Δ and b = x₀ + Δ, then
$$\int_{-\infty}^{\infty} f(x)\varphi(x)\,dx = \int_{x_0-\Delta}^{x_0+\Delta} f(x)\varphi(x)\,dx \ge \varepsilon\int_{x_0-\Delta}^{x_0+\Delta}\varphi(x)\,dx > 0,$$
which is the required contradiction.



For each continuous function f we then have a functional F_f on the space D of test functions:
$$F_f : \varphi \mapsto F_f(\varphi) = \langle f, \varphi\rangle = \int_{-\infty}^{\infty} f(x)\varphi(x)\,dx.$$
What the above theorem shows is that the map f ↦ F_f is 1–1.
If we wish to view the δ-function in the same way then, following (1.1), we should think of δ as the following functional:
$$\delta : \varphi \mapsto \varphi(0).$$
Thus we are almost ready to define generalized functions/distributions.
Thus we are almost ready to define generalized functions/distributions.

Definition 9 A distribution or generalized function F is a linear functional from D to ℝ which is continuous in the following sense:

• F is continuous if whenever φ and φₙ (n ≥ 1) are test functions which are all zero outside some bounded interval I, and for each k the derivatives φₙ^(k) converge uniformly to φ^(k) as n → ∞, then F(φₙ) → F(φ).

We write D′ for the space of distributions. Also, we write ⟨F, φ⟩ for the real number F(φ). When we want to emphasise the variable of the distribution (which really means the variable of the test functions), we may write F(x) instead of just F.

Remark 10 Informally one might think of distributions as functions which, whilst not defined at points, do have a well-defined average on any neighbourhood of a point. This is consistent with the general point that the integral of a function is normally smoother than the function itself (whereas differentiation normally decreases smoothness, though not for test functions, of course!).² Thus, an average can make sense where a pointwise view does not.

Remark 11 The above requirement of continuity may seem somewhat technical but it is
precisely what we want if we desire the derivative of a distribution to be a distribution.

Remark 12 It is a relatively easy check to show that D′ is indeed a vector space. Note that D′ isn’t the algebraic dual of D (the space of all linear functionals) but rather a subspace of the algebraic dual.

Proposition 13 Given a locally integrable function f, the functional F_f is a distribution. Such distributions are called regular distributions.

² For example, think of integrating and/or differentiating functions like xⁿ, or max(x, 0).

Proof. Clearly F_f is linear. Suppose, for each k, that φₙ^(k) → φ^(k) uniformly and that the φₙ, φ are zero outside some bounded interval I. Then
$$F_f(\varphi_n) = \langle f, \varphi_n\rangle = \int_{-\infty}^{\infty} f(x)\varphi_n(x)\,dx = \int_I f(x)\varphi_n(x)\,dx \to \int_I f(x)\varphi(x)\,dx = \langle f, \varphi\rangle = F_f(\varphi),$$
by uniform convergence.

• Note, though, that different functions may represent the same distribution.

Example 14 The functions
$$H_1(x) = \begin{cases} 1 & x > 0; \\ 0 & x \le 0; \end{cases} \qquad H_2(x) = \begin{cases} 1 & x \ge 0; \\ 0 & x < 0; \end{cases}$$
both lead to the same distribution
$$H : \varphi \mapsto \langle H, \varphi\rangle = \int_0^{\infty}\varphi(x)\,dx.$$
This distribution is called the Heaviside function.

• But different continuous functions induce different distributions (as a consequence of Theorem 8).

Proposition 15 δ is a distribution.

• No locally integrable function f induces δ. This means δ is a singular distribution.

Proof. Clearly δ is linear. Suppose, for each k, that φₙ^(k) → φ^(k) uniformly. Then in particular (with k = 0) we have φₙ → φ pointwise. So φₙ(0) → φ(0).

Example 16 (Approximating δ(x)) Consider the sequence of functions
$$\delta_n(x) = \begin{cases} \frac{n}{2} & |x| < \frac{1}{n}; \\ 0 & \text{otherwise.} \end{cases}$$

We say that a sequence of distributions (Fₙ) converges to a distribution F if ⟨Fₙ, φ⟩ → ⟨F, φ⟩ for all test functions φ. If φ is a test function then in particular it is continuous at 0. By the MVT for integrals, we have for some ξₙ ∈ (−1/n, 1/n)
$$\langle\delta_n, \varphi\rangle = \int_{-\infty}^{\infty}\delta_n(x)\varphi(x)\,dx = \frac{n}{2}\int_{-1/n}^{1/n}\varphi(x)\,dx = \varphi(\xi_n) \to \varphi(0),$$
by continuity. Hence ⟨δₙ, φ⟩ → ⟨δ, φ⟩ as n → ∞.
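The convergence ⟨δₙ, φ⟩ → φ(0) can be seen numerically. The sketch below (not part of the original notes; it assumes numpy and uses the bump function of Example 7 as φ) computes (n/2)∫φ over (−1/n, 1/n) by the trapezoid rule for increasing n.

```python
import numpy as np

# <delta_n, phi> = (n/2) * integral of phi over (-1/n, 1/n), which should
# tend to phi(0) as n grows; here phi(x) = exp(1/(x^2 - 1)) on |x| < 1.
def phi(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

phi0 = np.exp(-1.0)                       # phi(0)
errors = []
for n in (10, 100, 1000):
    x = np.linspace(-1.0 / n, 1.0 / n, 10001)
    fx = phi(x)
    dx = x[1] - x[0]
    val = (n / 2.0) * dx * (fx.sum() - 0.5 * (fx[0] + fx[-1]))  # trapezoid rule
    errors.append(abs(val - phi0))
print(errors)   # decreasing towards 0
```

The error shrinks roughly like 1/n², since the averaging window has width 2/n and φ is smooth at 0.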

Proposition 17 Suppose that the integrable function f(x) is continuous at x = 0. Then, as n → ∞,
$$\langle\delta_n, f\rangle = \int_{-\infty}^{\infty}\delta_n(x)f(x)\,dx \to f(0),$$
and hence δₙ → δ.

Proof. Simply replace φ with f in Example 16.

Remark 18 This proposition shows us that, although it is defined by its action on a test
function, the delta function also works when integrated against a continuous function. One
can define δ and other distributions via limits of approximating sequences, but this approach
is fraught with technical problems (for example, would two sequences, both approximating δ,
give the same result for δ 0 (defined below)?).

Definition 19 (Translation of a distribution) Let F(x) be a distribution and a ∈ ℝ. The translation of F through a, written F(x − a), is defined by its action
$$\langle F(x-a), \varphi(x)\rangle = \langle F(x), \varphi(x+a)\rangle.$$
When F is the regular distribution corresponding to a locally integrable function f, this is just a change of variable in the integral:
$$\langle f(x-a), \varphi(x)\rangle = \int_{-\infty}^{\infty} f(x-a)\varphi(x)\,dx = \int_{-\infty}^{\infty} f(u)\varphi(u+a)\,du = \langle f(x), \varphi(x+a)\rangle.$$
When the distribution is the delta function, this is known as the sifting property:
$$\langle\delta(x-a), \varphi(x)\rangle = \varphi(a).$$
And by a very simple extension of Proposition 17, this applies to any locally integrable function which is continuous at a:
$$\langle\delta(x-a), f(x)\rangle = f(a).$$



Remark 20 It’s essentially this property that earned the distribution its δ notation. Compare this with the discrete version
$$\sum_j \delta_{ij}a_{jk} = a_{ik}$$
(here δᵢⱼ is the Kronecker delta). One might view δ(x − a) as a continuous version of δᵢⱼ.

Remark 21 The delta function is only ‘active’ where its argument vanishes (e.g., for δ(x − a) this is at x = a); if the support of a test function φ does not contain x = a, then ⟨δ(x − a), φ⟩ = 0. Thus, the delta function is ‘localised’ at x = a. In many uses of the delta function, for example in solving differential equations, we work on an interval rather than all of ℝ. Because our test functions were defined on all of ℝ, for the utmost rigour (mortis) we should define a new class of test functions adapted to our interval. However, the localised nature of the delta function makes this unnecessary, and it is perfectly safe to go ahead and just use δ. That is what it was dreamed up for!

1.4 Operations on Distributions

We have already noted that the distributions form a vector space D′. There are other important operations that can be performed on distributions.

Proposition 22 If f is a smooth function (i.e. it has derivatives of all orders) and F is a distribution then fF is a distribution.

Proof. For a regular distribution F and a test function φ we have
$$\langle F, \varphi\rangle = \int_{-\infty}^{\infty} F(x)\varphi(x)\,dx$$
and so we would have
$$\langle fF, \varphi\rangle = \int_{-\infty}^{\infty} f(x)F(x)\varphi(x)\,dx = \int_{-\infty}^{\infty} F(x)f(x)\varphi(x)\,dx = \langle F, f\varphi\rangle.$$
Hence we can more generally define fF for a general distribution F by
$$\langle fF, \varphi\rangle = \langle F, f\varphi\rangle.$$

(This amounts to saying that fφ is a test function.) Clearly fF is linear. Suppose, for each k, that φₙ^(k) → φ^(k) uniformly and that the φₙ, φ are zero outside some bounded interval I. Then it is relatively easy to show, for each k, that (fφₙ)^(k) → (fφ)^(k) uniformly, and hence
$$\langle fF, \varphi_n\rangle = \langle F, f\varphi_n\rangle \to \langle F, f\varphi\rangle = \langle fF, \varphi\rangle.$$
Hence fF is continuous and so a distribution.
Surprisingly, perhaps, it is also possible to differentiate distributions. We already considered the possibility at (1.3) and so we define:

Definition 23 Given a distribution F and test function φ we define
$$\langle F', \varphi\rangle = -\langle F, \varphi'\rangle.$$

• Note that this agrees with normal differentiation for regular distributions from differentiable functions: say that f is differentiable, so that
$$\langle f', \varphi\rangle = \int_{-\infty}^{\infty} f'(x)\varphi(x)\,dx = \left[f(x)\varphi(x)\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f(x)\varphi'(x)\,dx = -\langle f, \varphi'\rangle.$$

• Also if F is a distribution and f a smooth function then we have the product rule
$$(fF)' = f'F + fF',$$
as for any test function φ we have
$$\langle (fF)', \varphi\rangle = -\langle fF, \varphi'\rangle = -\langle F, f\varphi'\rangle = -\langle F, (f\varphi)'\rangle + \langle F, f'\varphi\rangle = \langle F', f\varphi\rangle + \langle f'F, \varphi\rangle = \langle fF', \varphi\rangle + \langle f'F, \varphi\rangle.$$

Proposition 24 If F is a distribution then so is F′.

Proof. Clearly F′ is linear. Further, if φₙ^(k) → φ^(k) uniformly for each k, and the φₙ, φ are zero outside the bounded interval I, then in particular (φₙ′)^(k) → (φ′)^(k) uniformly, and hence
$$\langle F', \varphi_n\rangle = -\langle F, \varphi_n'\rangle \to -\langle F, \varphi'\rangle = \langle F', \varphi\rangle,$$
by the continuity of F and the definition of F′. So F′ is continuous and is a distribution.

Remark 25 This proof shows that distributions inherit the infinite differentiability of test
functions. If we had defined test functions to have only a finite number of derivatives, then the
same would have applied to the corresponding distributions. Such a theory, although possible,
would be unrewardingly cumbersome.

We can now do a remarkable thing: we can differentiate a function with a singularity


such as a jump discontinuity at a point (the Heaviside function is a simple example) and
interpret the result without having to take limits from the left and right. Such functions fit
naturally into the framework of distributions. We can go further and differentiate singular
distributions such as δ.

Example 26
(a) H′ = δ,  (b) ⟨δ′, φ⟩ = −φ′(0).

Solution. (a) Recall that the Heaviside function satisfies
$$\langle H, \varphi\rangle = \int_{-\infty}^{\infty} H(x)\varphi(x)\,dx = \int_0^{\infty}\varphi(x)\,dx.$$
So
$$\langle H', \varphi\rangle = -\langle H, \varphi'\rangle = -\int_0^{\infty}\varphi'(x)\,dx = \varphi(0) = \langle\delta, \varphi\rangle.$$
(b) We also have
$$\langle\delta', \varphi\rangle = -\langle\delta, \varphi'\rangle = -\varphi'(0).$$

Example 27 Find the derivative and second derivative of f(x) = |x|.

Solution. As f is differentiable for x ≠ 0, we have f′(x) = 1 for x > 0 and f′(x) = −1 for x < 0. This is sufficient to define f′(x) as a distribution, and in fact we see
$$f'(x) = 2H(x) - 1.$$
Note that we have not determined the gradient of f at 0; this is unnecessary to define f′(x) as a distribution. By the previous example we then have
$$f''(x) = 2\delta(x).$$
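The identity ⟨|x|′, φ⟩ = −⟨|x|, φ′⟩ = ⟨2H − 1, φ⟩ can be checked numerically. A sketch (not part of the original notes; it assumes numpy and uses a shifted copy of the bump function of Example 7, so that neither side vanishes by symmetry):

```python
import numpy as np

# Check -<|x|, phi'> against <2H - 1, phi> for phi(x) = bump(x - 0.3),
# using a fine grid and the trapezoid/central-difference approximations.
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
u = x - 0.3                              # shift so the integrals are nonzero
inside = np.abs(u) < 1
phi = np.zeros_like(x)
phi[inside] = np.exp(1.0 / (u[inside]**2 - 1.0))
dphi = np.gradient(phi, dx)              # phi' by central differences

sign = np.where(x > 0, 1.0, -1.0)        # 2H(x) - 1
lhs = -np.sum(np.abs(x) * dphi) * dx     # -<|x|, phi'>
rhs = np.sum(sign * phi) * dx            # <2H - 1, phi>
print(lhs, rhs)
```

The two pairings agree to within the quadrature error, consistent with f′(x) = 2H(x) − 1 as a distribution.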

Example 28 The distributional derivative can be found by the ordinary calculus method:
$$\delta'(x) = \lim_{h\to0}\frac{\delta(x+h)-\delta(x)}{h}.$$
Solution. We have (expanding our notation to show the arguments of δ and φ)
$$\left\langle\frac{\delta(x+h)-\delta(x)}{h}, \varphi(x)\right\rangle = \frac{\varphi(-h)-\varphi(0)}{h} \to -\varphi'(0) = \langle\delta', \varphi\rangle \quad\text{as } h\to0.$$
The reader may like to show that H′ = δ in the same way.
Proposition 29 Every distribution F has an antiderivative G such that G′ = F.

Proof. Let φ₀ be a fixed test function with total integral 1. Given any test function φ we can write φ = Kφ₀ + φ₁, where K is the total integral of φ and φ₁ has total integral 0. The point of this is that
$$\psi(x) = \int_{-\infty}^{x}\varphi_1(t)\,dt$$
is a test function and ψ′(x) = φ₁(x). We then define G by
$$\langle G, \varphi\rangle = -\langle F, \psi\rangle.$$
Note that φ′ has total integral 0, and so (φ′)₁ = φ′ when φ′ is decomposed as above; this means that the ψ corresponding to φ′ is just φ. Hence we have
$$\langle G', \varphi\rangle = -\langle G, \varphi'\rangle = \langle F, \varphi\rangle$$
and G′ = F.
Example 30 The product of two distributions need not be a distribution. This follows from the fact that the product fg of two locally integrable functions f and g need not be locally integrable. For example, consider
$$f(x) = g(x) = \frac{1}{\sqrt{x}}\,1_{(0,1)}(x).$$
Remark 31 (Some Historical Background) Efforts to rigorously handle the mathematics behind point sources, point charges and point masses date back to Cauchy and Fourier. In the late nineteenth century Oliver Heaviside used Fourier series to model the unit impulse. The δ-function notation dates back to Paul Dirac’s influential 1930 book “The Principles of Quantum Mechanics”. The French mathematician Laurent Schwartz developed the theory of distributions in the late 1940s to rigorously handle such notions, for which he was awarded the Fields Medal in 1950.
Chapter 2

Laplace Transform. Applications to ODEs.

This chapter is about the Laplace transform, which is one of a number of important integral transforms in mathematics. An important aspect of the Laplace transform is that it can turn a differential equation into an algebraic one: ideally a differential equation in f(x) is transformed into an algebraic equation in its transform f̄(p), which we might solve with simple algebraic manipulation. Our remaining problem is then the inverse problem: to recognize this transform and so find the solution f(x) of the original differential equation that transforms to this f̄(p).

Definition 32 Let f(x) be a real- or complex-valued function defined when x ≥ 0. Then the Laplace transform f̄(p) of f(x) is defined to be
$$\bar f(p) = \int_0^{\infty} f(x)e^{-px}\,dx, \tag{2.1}$$
for those complex p where this integral exists. f̄ is also commonly denoted as Lf, and the Laplace transform itself as L.

Example 33 Let f(x) = eᵃˣ where a is a complex number. Then
$$\bar f(p) = \int_0^{\infty} e^{ax}e^{-px}\,dx = \int_0^{\infty} e^{-(p-a)x}\,dx = \left[\frac{e^{-(p-a)x}}{a-p}\right]_0^{\infty} = \frac{1}{p-a}$$
provided Re p > Re a. As |eᶻ| = e^{Re z}, note that f̄(p) is undefined when Re p ≤ Re a.

Example 34 Let fₙ(x) = xⁿ where n ≥ 0 is an integer. Then, provided Re p > 0,
$$\bar f_n(p) = \int_0^{\infty} x^n e^{-px}\,dx = \frac{-1}{p}\left[x^n e^{-px}\right]_0^{\infty} + \frac{n}{p}\int_0^{\infty} x^{n-1}e^{-px}\,dx = \frac{n}{p}\bar f_{n-1}(p).$$
Now
$$\bar f_0(p) = \int_0^{\infty} e^{-px}\,dx = \frac{1}{p},$$
so that
$$\bar f_n(p) = \frac{n}{p}\bar f_{n-1}(p) = \frac{n}{p}\times\frac{n-1}{p}\times\cdots\times\frac{1}{p}\times\bar f_0(p) = \frac{n!}{p^{n+1}}.$$
Again the integral in the definition of f̄ₙ(p) is undefined when Re p ≤ 0.

Remark 35 More generally, the Laplace transform of xᵃ, where a > −1 is a real number, is Γ(a + 1)/p^{a+1}, where $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt$ is the Gamma function. See Sheet 1, Exercise 6.

Example 36 Let f(x) = cos ax and g(x) = sin ax. Then
$$\bar f(p) = \frac{p}{p^2+a^2}; \qquad \bar g(p) = \frac{a}{p^2+a^2}. \tag{2.2}$$
Solution. By integration by parts, and provided Re p > |Im a|,
$$\bar f(p) = \int_0^{\infty} e^{-px}\cos ax\,dx = \frac{-1}{p}\left[e^{-px}\cos ax\right]_0^{\infty} - \frac{a}{p}\int_0^{\infty} e^{-px}\sin ax\,dx = \frac{1}{p} - \frac{a}{p}\bar g(p),$$
$$\bar g(p) = \int_0^{\infty} e^{-px}\sin ax\,dx = \frac{-1}{p}\left[e^{-px}\sin ax\right]_0^{\infty} + \frac{a}{p}\int_0^{\infty} e^{-px}\cos ax\,dx = \frac{a}{p}\bar f(p).$$
Solving the simultaneous equations
$$\bar f(p) + \frac{a}{p}\bar g(p) = \frac{1}{p}, \qquad \bar g(p) = \frac{a}{p}\bar f(p),$$
gives the expressions in (2.2).
An alternative and faster way to these expressions is the following. Let h(x) = e^{iax} where a is real. By Example 33 we have
$$\bar h(p) = \frac{1}{p-ia} = \frac{p+ia}{p^2+a^2}.$$
Taking real parts gives f̄(p) = p/(p² + a²) and taking imaginary parts gives ḡ(p) = a/(p² + a²). As these expressions hold for all real values of a, then by the Identity Theorem they hold for all valid complex numbers a.

Example 37 Let a > 0. The Laplace transform of δ(x − a) is e^{−ap}.¹

Proof. By the sifting property we have
$$\int_0^{\infty} e^{-px}\delta(x-a)\,dx = e^{-ap}.$$

Example 38 Let a > 0. The Laplace transform of
$$H(x-a) = \begin{cases} 0 & 0 < x \le a, \\ 1 & a < x \end{cases}$$
equals e^{−ap}/p.

Solution. We have
$$\int_0^{\infty} H(x-a)e^{-px}\,dx = \int_a^{\infty} e^{-px}\,dx = \left[\frac{e^{-px}}{-p}\right]_a^{\infty} = \frac{e^{-ap}}{p}.$$

In all the examples we have seen, we can note that if the Laplace transform f̄(p₀) exists (i.e. the relevant integral converges) for a particular p₀ ∈ ℂ, then f̄(p) exists whenever Re p > Re p₀, which we formally state below.

Proposition 39 Let f(x) be a complex-valued function defined when x ≥ 0, such that the integral (2.1) in the definition of f̄(p₀) exists for some complex number p₀. Then f̄(p) exists for all Re p > Re p₀.

Proof. If Re p > Re p₀ then
$$\left|f(x)e^{-px}\right| \le \left|f(x)e^{-p_0x}\right|$$
and hence by the comparison test for integrals (mentioned in the preamble to these notes) we see that f(x)e^{−px} is integrable on (0, ∞).

We also note the following:

¹ It is usual to extend this to a = 0 by defining δ̄(p) = 1, which (for example by taking the Laplace transform of H′(x)) amounts to choosing H(0) = 1; it says that the interval of interest includes the origin, and one can ‘kick-start’ a differential equation at x = 0. This is a rare example where the definition of a distribution at a point matters. Another example is the choice between P[X ≤ x] and P[X < x] as the definition of the cumulative distribution function (the former is more common, and corresponds to H(0) = 1).

Proposition 40 Let f(x) be a continuous complex-valued function on [0, ∞) such that f̄(p₀) exists. Then f̄(p) converges to 0 as Re p → ∞.

Proof. Note that for Re t > 0,
$$\left|\bar f(p_0+t)\right| = \left|\int_0^{\infty} f(x)e^{-(p_0+t)x}\,dx\right| \le \left|\int_0^{1} f(x)e^{-p_0x}e^{-tx}\,dx\right| + \left|\int_1^{\infty} f(x)e^{-p_0x}e^{-tx}\,dx\right|$$
$$\le M\int_0^{1} e^{-(\mathrm{Re}\,t)x}\,dx + e^{-\mathrm{Re}\,t}\left|\int_1^{\infty} f(x)e^{-p_0x}\,dx\right| \qquad \left[\,|f(x)e^{-p_0x}| \le M \text{ on } [0,1]\,\right]$$
$$= M\left(\frac{1-e^{-\mathrm{Re}\,t}}{\mathrm{Re}\,t}\right) + e^{-\mathrm{Re}\,t}\left|\int_1^{\infty} f(x)e^{-p_0x}\,dx\right| \to 0 \quad\text{as } \mathrm{Re}\,t \to \infty.$$

(The requirement that f be continuous is not necessary, but it is not a restrictive hypothesis and it simplifies the proof substantially.)

Note also the following property of the Laplace transform.

Proposition 41 Let a ∈ ℂ with Re a > 0 and assume f̄(p) converges on some half-plane Re p > c. Then the function g(x) = f(x)e^{−ax} has transform
$$\bar g(p) = \bar f(p+a).$$

Proof. Simply note that
$$\bar g(p) = \int_0^{\infty} f(x)e^{-ax}e^{-px}\,dx = \int_0^{\infty} f(x)e^{-(a+p)x}\,dx = \bar f(p+a).$$

For the Laplace transform to be of use in treating differential equations, it needs to handle derivatives well, and this is indeed the case.

Proposition 42 Provided the Laplace transforms of f′(x) and f(x) converge, then
$$\overline{f'}(p) = p\bar f(p) - f(0).$$



Proof. We have, by integration by parts,
$$\overline{f'}(p) = \int_0^{\infty} f'(x)e^{-px}\,dx = \left[f(x)e^{-px}\right]_0^{\infty} - \int_0^{\infty} f(x)\left(-pe^{-px}\right)dx = (0 - f(0)) + p\bar f(p).$$

Corollary 43 Provided that the Laplace transforms of f(x), f′(x), f″(x) converge, then
$$\overline{f''}(p) = p^2\bar f(p) - pf(0) - f'(0).$$

Proof. If we write g(x) = f′(x) then
$$\overline{f''}(p) = \overline{g'}(p) = p\bar g(p) - g(0) = p\,\overline{f'}(p) - f'(0) = p^2\bar f(p) - pf(0) - f'(0).$$

Example 44 The function $f(x)$ is the solution of the differential equation
\[ f''(x) - 3f'(x) + 2f(x) = 0, \qquad f(0) = f'(0) = 1. \]
Determine $\bar f(p)$.

Solution. Transforming the differential equation, and recalling that $f(0) = f'(0) = 1$, we see
\[ p^2\bar f(p) - p - 1 - 3\left(p\bar f(p) - 1\right) + 2\bar f(p) = 0. \]
Hence, rearranging this equation, we find
\[ \left(p^2 - 3p + 2\right)\bar f(p) = p - 2 \]
and hence
\[ \bar f(p) = \frac{p-2}{(p-1)(p-2)} = \frac{1}{p-1}. \]
We of course recognise $\bar f(p)$ as the Laplace transform of $e^x$. Can we reasonably write
that $f(x) = e^x$? For now we merely claim:

• The Laplace transform $\mathcal L$ is injective. So if $\bar f(p) = \bar g(p)$ on some half-plane $\operatorname{Re}p > c$, we can conclude that $f(x) = g(x)$.

We shall prove a version of this fact in due course, but for now this claim allows us to
invert a range of transforms by inspection.
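As a numerical sanity check (not part of the notes), the transform found in Example 44 can be compared against a direct quadrature of the Laplace integral for $e^x$. The helper `laplace` below is hand-rolled (composite Simpson's rule on a truncated interval, an assumption adequate for exponentially decaying integrands), not a library routine:

```python
import math

def laplace(f, p, upper=60.0, n=200000):
    # composite Simpson approximation to int_0^upper f(x) e^{-px} dx;
    # for f(x) = e^x and p > 1 the truncated tail is negligible
    h = upper / n
    total = f(0.0) + f(upper) * math.exp(-p * upper)
    for i in range(1, n):
        x = i * h
        total += (4 if i % 2 else 2) * f(x) * math.exp(-p * x)
    return total * h / 3

p = 3.0
print(abs(laplace(math.exp, p) - 1.0 / (p - 1.0)) < 1e-8)  # True
```

The quadrature agrees with $\bar f(p) = 1/(p-1)$ to well below the stated tolerance for any real $p > 1$.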

Example 45 Find the Laplace inverses of
\[ \bar f(p) = \frac{1}{p^2(p+1)}; \qquad \bar g(p) = \frac{1}{p^2+2p+4}. \]
Solution. Neither of the given functions is instantly recognizable from the examples we have seen so far, but some simple algebraic rearrangement quickly circumvents this. Firstly, using partial fractions we see that
\[ \bar f(p) = \frac{1}{p^2(p+1)} = \frac{1}{p^2} - \frac{1}{p} + \frac{1}{p+1}, \]
and inverting the Laplace transform we see that
\[ f(x) = x - 1 + e^{-x}. \]
If we complete the square in the denominator of $\bar g(p)$ we also see
\[ \bar g(p) = \frac{1}{p^2+2p+4} = \frac{1}{(p+1)^2+3}. \]
Now we know $\sin(\sqrt3\,x)$ transforms to $\sqrt3/(p^2+3)$ and that $e^{-x}h(x)$ transforms to $\bar h(p+1)$, and so
\[ g(x) = \frac{1}{\sqrt3}e^{-x}\sin(\sqrt3\,x). \]
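Both inversions in Example 45 can be verified by transforming the claimed inverses back numerically. The quadrature helper below is a hand-rolled sketch (composite Simpson's rule, truncation chosen by assumption), not part of the notes:

```python
import math

def laplace(f, p, upper=40.0, n=100000):
    # composite Simpson approximation to int_0^upper f(x) e^{-px} dx
    h = upper / n
    total = f(0.0) + f(upper) * math.exp(-p * upper)
    for i in range(1, n):
        x = i * h
        total += (4 if i % 2 else 2) * f(x) * math.exp(-p * x)
    return total * h / 3

p = 2.0
f = lambda x: x - 1 + math.exp(-x)   # claimed inverse of 1/(p^2 (p+1))
g = lambda x: math.exp(-x) * math.sin(math.sqrt(3) * x) / math.sqrt(3)
print(abs(laplace(f, p) - 1 / (p**2 * (p + 1))) < 1e-8)    # True
print(abs(laplace(g, p) - 1 / (p**2 + 2 * p + 4)) < 1e-8)  # True
```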

Example 46 Find the Laplace inverse of $p^{-2}e^{-p}$.

Solution. We know that $H(x-1)$ has transform $p^{-1}e^{-p}$ and we also know that $x$ transforms to $p^{-2}$, so perhaps we can combine these facts somehow. Note the function $f(x) = xH(x-1)$ transforms to
\[ \bar f(p) = \int_0^\infty xH(x-1)e^{-px}\,dx = \int_1^\infty xe^{-px}\,dx = \left[-p^{-2}(px+1)e^{-px}\right]_1^\infty = p^{-2}(p+1)e^{-p}, \]
which is close to what we want. In fact we can see that we are wrong precisely by the transform of $H(x-1)$. So we see that the inverse transform of $p^{-2}e^{-p}$ is
\[ xH(x-1) - H(x-1) = (x-1)H(x-1). \]
It might be easiest to think of $(x-1)H(x-1)$ in terms of its graph. It is just the graph of $x$ translated one to the right, taking the value $0$ on the interval $0 < x \le 1$. This is a particular instance of the following result.

Proposition 47 Assuming the Laplace transform $\bar f(p)$ of $f(x)$ exists on some half-plane $\operatorname{Re}p > c$, and $a > 0$, then
\[ g(x) = f(x-a)H(x-a) \]
has transform $\bar g(p) = \bar f(p)e^{-ap}$.
Proof. We have
\begin{align*}
\bar g(p) &= \int_0^\infty f(x-a)H(x-a)e^{-px}\,dx \\
&= \int_a^\infty f(x-a)e^{-px}\,dx \\
&= \int_0^\infty f(u)e^{-p(u+a)}\,du \qquad [u = x-a] \\
&= e^{-ap}\int_0^\infty f(u)e^{-pu}\,du = e^{-ap}\bar f(p).
\end{align*}
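Proposition 47 can be illustrated numerically for $f = \sin$ and $a = 1$, where the claim is that $\sin(x-1)H(x-1)$ has transform $e^{-p}/(p^2+1)$. The quadrature helper is hand-rolled (an assumption of this sketch, not the notes' method):

```python
import math

def laplace(f, p, upper=40.0, n=100000):
    # composite Simpson approximation to int_0^upper f(x) e^{-px} dx
    h = upper / n
    total = f(0.0) + f(upper) * math.exp(-p * upper)
    for i in range(1, n):
        x = i * h
        total += (4 if i % 2 else 2) * f(x) * math.exp(-p * x)
    return total * h / 3

a, p = 1.0, 1.5
shifted = lambda x: math.sin(x - a) if x >= a else 0.0   # f(x-a) H(x-a), f = sin
print(abs(laplace(shifted, p) - math.exp(-a * p) / (p**2 + 1)) < 1e-6)  # True
```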

Example 48 Solve the ODE
\[ f''(x) + 4f'(x) + 8f(x) = x, \qquad f(0) = 1, \quad f'(0) = 0. \]
Solution. Applying the transform we have
\[ \left(p^2\bar f(p) - p\right) + 4\left(p\bar f(p) - 1\right) + 8\bar f(p) = \frac{1}{p^2}, \]
and rearranging gives
\[ (p^2+4p+8)\bar f(p) - p - 4 = \frac{1}{p^2}. \]
So $\bar f(p)$ equals
\[ \frac{p^3+4p^2+1}{p^2(p^2+4p+8)} = \frac{1}{16}\left(\frac{2}{p^2} - \frac{1}{p} + \frac{17p+66}{p^2+4p+8}\right) = \frac{1}{16}\left(\frac{2}{p^2} - \frac{1}{p} + \frac{17(p+2)+32}{(p+2)^2+4}\right), \]
which by inspection inverts to
\[ f(x) = \frac{1}{16}\left(2x - 1 + 17e^{-2x}\cos 2x + 16e^{-2x}\sin 2x\right). \]
We made use here of
\[ \cos ax \overset{\mathcal L}{\mapsto} \frac{p}{p^2+a^2}, \qquad \sin ax \overset{\mathcal L}{\mapsto} \frac{a}{p^2+a^2}, \qquad f(x)e^{-ax} \overset{\mathcal L}{\mapsto} \bar f(p+a). \]
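The closed form found in Example 48 can be checked by substituting it back into the ODE with finite differences. This is a quick sketch (step size and tolerances are assumptions), not a proof:

```python
import math

def f(x):
    # claimed solution f(x) = (2x - 1 + 17 e^{-2x} cos 2x + 16 e^{-2x} sin 2x)/16
    e = math.exp(-2 * x)
    return (2 * x - 1 + 17 * e * math.cos(2 * x) + 16 * e * math.sin(2 * x)) / 16

h = 1e-5
for x in (0.3, 1.0, 2.5):
    f1 = (f(x + h) - f(x - h)) / (2 * h)           # ~ f'(x)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # ~ f''(x)
    assert abs(f2 + 4 * f1 + 8 * f(x) - x) < 1e-4  # ODE residual
print(abs(f(0.0) - 1.0) < 1e-12)  # initial condition f(0) = 1, prints True
```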

Example 49 Let $\tau_2 > \tau_1 > 0$. Solve the ODE
\[ x''(t) + x(t) = \delta(t-\tau_1) - \delta(t-\tau_2), \qquad x(0) = 0, \quad x'(0) = 0. \]
Solution. Applying the transform we find
\[ (p^2+1)\bar x = e^{-p\tau_1} - e^{-p\tau_2} \implies \bar x = \frac{e^{-p\tau_1} - e^{-p\tau_2}}{p^2+1}, \]
which inverts to
\[ x(t) = \sin(t-\tau_1)H(t-\tau_1) - \sin(t-\tau_2)H(t-\tau_2) = \begin{cases} 0 & t < \tau_1, \\ \sin(t-\tau_1) & \tau_1 \le t \le \tau_2, \\ \sin(t-\tau_1) - \sin(t-\tau_2) & \tau_2 < t. \end{cases} \]

Unfortunately transforming a differential equation can sometimes lead to another differential equation. This is because of the following property.

Proposition 50 Assuming that the Laplace transforms of $f(x)$ and $g(x) = xf(x)$ converge, then
\[ \bar g(p) = -\frac{d\bar f}{dp}. \]
Proof. Using differentiation under the integral sign,
\[ \frac{d\bar f}{dp} = \frac{d}{dp}\int_0^\infty f(x)e^{-px}\,dx = \int_0^\infty f(x)\frac{\partial}{\partial p}e^{-px}\,dx = -\int_0^\infty xf(x)e^{-px}\,dx = -\bar g(p). \]

Example 51 Find the inverse Laplace transform of $(p-a)^{-n}$.

Solution. From Example 34 we know that the Laplace transform of $x^n$ is $n!\,p^{-n-1}$. Hence by Proposition 41 we can see that $x^{n-1}e^{ax}/(n-1)!$ has transform $(p-a)^{-n}$.
Alternatively we could make use of Proposition 50 and Example 33 to note
\[ (p-a)^{-n} = \frac{1}{(n-1)!}\left(-\frac{d}{dp}\right)^{n-1}(p-a)^{-1} = \frac{1}{(n-1)!}\left(-\frac{d}{dp}\right)^{n-1}\mathcal L(e^{ax}) = \frac{1}{(n-1)!}\mathcal L\left(x^{n-1}e^{ax}\right) = \mathcal L\!\left(\frac{x^{n-1}e^{ax}}{(n-1)!}\right). \]

Example 52 Find the inverse Laplace transform of $(p^2+2p+2)^{-2}$.

Solution. We know that
\[ \sin x \overset{\mathcal L}{\mapsto} \frac{1}{p^2+1}; \qquad \cos x \overset{\mathcal L}{\mapsto} \frac{p}{p^2+1}. \]
So
\[ x\sin x \overset{\mathcal L}{\mapsto} -\frac{d}{dp}\left(\frac{1}{p^2+1}\right) = \frac{2p}{(p^2+1)^2}; \qquad x\cos x \overset{\mathcal L}{\mapsto} -\frac{d}{dp}\left(\frac{p}{p^2+1}\right) = \frac{p^2-1}{(p^2+1)^2}. \]
So we might write
\[ \frac{1}{(p^2+1)^2} = \frac{1}{2}\left(\frac{p^2+1}{(p^2+1)^2} - \frac{p^2-1}{(p^2+1)^2}\right) = \frac{1}{2}\left(\frac{1}{p^2+1} - \frac{p^2-1}{(p^2+1)^2}\right) \overset{\mathcal L^{-1}}{\mapsto} \frac{1}{2}\left(\sin x - x\cos x\right). \]
Thus
\[ \frac{1}{(p^2+2p+2)^2} = \frac{1}{\left((p+1)^2+1\right)^2} \overset{\mathcal L^{-1}}{\mapsto} \frac{e^{-x}}{2}\left(\sin x - x\cos x\right). \]
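The inverse found in Example 52 can be confirmed by transforming it forward numerically. The quadrature helper is a hand-rolled sketch (truncation and tolerance are assumptions):

```python
import math

def laplace(f, p, upper=40.0, n=100000):
    # composite Simpson approximation to int_0^upper f(x) e^{-px} dx
    h = upper / n
    total = f(0.0) + f(upper) * math.exp(-p * upper)
    for i in range(1, n):
        x = i * h
        total += (4 if i % 2 else 2) * f(x) * math.exp(-p * x)
    return total * h / 3

f = lambda x: math.exp(-x) * (math.sin(x) - x * math.cos(x)) / 2
p = 1.0
print(abs(laplace(f, p) - 1 / (p**2 + 2 * p + 2)**2) < 1e-8)  # True: both are 1/25
```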

Example 53 Bessel's function of order zero, $J_0(x)$, satisfies the initial-value problem
\[ x\frac{d^2J_0}{dx^2} + \frac{dJ_0}{dx} + xJ_0 = 0, \qquad J_0(0) = 1, \quad J_0'(0) = 0. \]
Show that $\bar J_0(p) = (1+p^2)^{-1/2}$.

Solution. By Propositions 42 and 50, when we apply the Laplace transform to both sides of the above IVP we get
\[ -\frac{d}{dp}\left(p^2\bar J_0 - p\right) + \left(p\bar J_0(p) - 1\right) - \frac{d\bar J_0}{dp} = 0. \]
Simplifying we see
\[ (p^2+1)\frac{d\bar J_0}{dp} + p\bar J_0 = 0. \]
This equation is separable and we may solve it to find
\[ \bar J_0(p) = A(1+p^2)^{-1/2} \]
where $A$ is some constant. We might try to determine $A$ by recalling that $\bar J_0(p)$ approaches $0$ as $\operatorname{Re}p$ becomes large; however, this is the case for all values of $A$. Instead we can note that
\[ \overline{J_0'}(p) = p\bar J_0(p) - J_0(0) = Ap(1+p^2)^{-1/2} - 1 = A(1+p^{-2})^{-1/2} - 1 \tag{2.3} \]
must also approach $0$ as $p$ becomes large. As (2.3) approaches $A-1$ for large real $p$, then $A = 1$ and $\bar J_0(p) = (1+p^2)^{-1/2}$.

We conclude with a table of transforms that we have so far determined.

| $f(x)$ | $\bar f(p)$ | $f(x)$ | $\bar f(p)$ |
|---|---|---|---|
| $x^n$ | $n!/p^{n+1}$ | $f'(x)$ | $p\bar f(p) - f(0)$ |
| $e^{ax}$ | $(p-a)^{-1}$ | $f''(x)$ | $p^2\bar f(p) - pf(0) - f'(0)$ |
| $\cos ax$ | $p/(p^2+a^2)$ | $xf(x)$ | $-d\bar f/dp$ |
| $\sin ax$ | $a/(p^2+a^2)$ | $f(x-a)H(x-a)$ | $e^{-ap}\bar f(p)$ |
| $\delta(x-a)$ | $e^{-ap}$ | $e^{-ax}f(x)$ | $\bar f(p+a)$ |
Chapter 3

Convolution and Inversion

Definition 54 Given two functions $f, g$ whose Laplace transforms $\bar f, \bar g$ exist for $\operatorname{Re}p > c$, we define the convolution $h = f * g$ by
\[ h(x) = (f*g)(x) = \int_0^x f(t)g(x-t)\,dt \qquad\text{for } x \ge 0. \]

Remark 55 Note that
\[ (f*g)(x) = \int_0^x f(t)g(x-t)\,dt = \int_x^0 f(x-u)g(u)\,(-du) = \int_0^x g(u)f(x-u)\,du = (g*f)(x), \qquad [u = x-t]. \]
Example 56 Let $f(x) = \sin x$ and $g(x) = \sin x$. Then we have
\begin{align*}
h(x) &= \int_0^x \sin t\,\sin(x-t)\,dt \\
&= \frac{1}{2}\int_0^x \left\{\cos(2t-x) - \cos x\right\}dt \\
&= \frac{1}{2}\left[\frac{1}{2}\sin(2t-x) - t\cos x\right]_0^x \\
&= \frac{1}{2}\left(\frac{1}{2}\sin x + \frac{1}{2}\sin x - x\cos x\right) \\
&= \frac{1}{2}\left(\sin x - x\cos x\right).
\end{align*}
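The convolution computed in Example 56 can be checked with a direct numerical convolution on $[0,x]$. The trapezoid-rule helper below is a hand-rolled sketch (the node count is an assumption):

```python
import math

def conv(f, g, x, n=20000):
    # trapezoid approximation to (f*g)(x) = int_0^x f(t) g(x-t) dt
    h = x / n
    s = 0.5 * (f(0.0) * g(x) + f(x) * g(0.0))
    for i in range(1, n):
        t = i * h
        s += f(t) * g(x - t)
    return s * h

x = 2.0
lhs = conv(math.sin, math.sin, x)
rhs = 0.5 * (math.sin(x) - x * math.cos(x))
print(abs(lhs - rhs) < 1e-6)  # True
```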
CHAPTER 3. CONVOLUTION AND INVERSION 28

We previously met this as the Laplace inverse of
\[ \frac{1}{(p^2+1)^2} = \bar f(p)\,\bar g(p). \]

Example 57 Let $f(x) = e^{ax}$ and $g(x) = e^{bx}$ where $a \ne b$. Then
\[ h(x) = \int_0^x e^{at}e^{b(x-t)}\,dt = e^{bx}\int_0^x e^{(a-b)t}\,dt = e^{bx}\left[\frac{e^{(a-b)t}}{a-b}\right]_0^x = \frac{e^{ax} - e^{bx}}{a-b}. \]
This transforms to
\[ \bar h(p) = \frac{1}{a-b}\left(\frac{1}{p-a} - \frac{1}{p-b}\right) = \frac{1}{a-b}\cdot\frac{a-b}{(p-a)(p-b)} = \frac{1}{(p-a)(p-b)} = \bar f(p)\,\bar g(p). \]

Consequently the following theorem should not come as a great surprise.

Theorem 58 Let $f$ and $g$ be two functions whose Laplace transforms $\bar f$ and $\bar g$ exist for $\operatorname{Re}p > c$. Then
\[ \bar h = \bar f\,\bar g, \]
where $h = f * g$.

Proof.
\begin{align*}
\bar f(p)\,\bar g(p) &= \left(\int_0^\infty f(t)e^{-pt}\,dt\right)\left(\int_0^\infty g(x)e^{-px}\,dx\right) \\
&= \int_0^\infty\!\int_0^\infty f(t)g(x)e^{-p(x+t)}\,dx\,dt \\
&= \int_0^\infty\!\int_t^\infty f(t)g(y-t)e^{-py}\,dy\,dt \qquad [x = y-t] \\
&= \iint_R f(t)g(y-t)e^{-py}\,dy\,dt,
\end{align*}
where $R$ is the region
\[ R = \{(y,t) : y \ge t \ge 0\}. \]
If we swap the order of integration we instead find
\begin{align*}
\bar f(p)\,\bar g(p) &= \int_0^\infty\!\int_0^y f(t)g(y-t)e^{-py}\,dt\,dy \\
&= \int_0^\infty\left(\int_0^y f(t)g(y-t)\,dt\right)e^{-py}\,dy \\
&= \int_0^\infty (f*g)(y)\,e^{-py}\,dy = \bar h(p).
\end{align*}

Example 59 Find the Laplace inverse of
\[ \bar f(p) = \frac{p}{(p^2+1)^2}. \]

Solution. Note that $\bar f(p)$ is the product of
\[ \overline{\cos}(p) = \frac{p}{p^2+1} \qquad\text{and}\qquad \overline{\sin}(p) = \frac{1}{p^2+1}. \]

Hence by the Convolution Theorem
\begin{align*}
f(x) &= \int_0^x \sin t\,\cos(x-t)\,dt \\
&= \cos x\int_0^x \sin t\cos t\,dt + \sin x\int_0^x \sin^2 t\,dt \\
&= \frac{1}{2}\left(\cos x\int_0^x \sin 2t\,dt + \sin x\int_0^x (1-\cos 2t)\,dt\right) \\
&= \frac{1}{2}\left(\cos x\left(\frac{1}{2} - \frac{1}{2}\cos 2x\right) + \sin x\left(x - \frac{1}{2}\sin 2x\right)\right) \\
&= \frac{1}{4}\left(\cos x - \cos x\cos 2x + 2x\sin x - \sin x\sin 2x\right) \\
&= \frac{1}{4}\left(\cos x - \cos x\left(1 - 2\sin^2 x\right) + 2x\sin x - 2\sin^2 x\cos x\right) = \frac{1}{2}x\sin x.
\end{align*}
Having determined this answer, we see we could also have realized this as
\[ \frac{p}{(p^2+1)^2} = -\frac{1}{2}\frac{d}{dp}\left(\frac{1}{p^2+1}\right) = -\frac{1}{2}\frac{d}{dp}\overline{\sin}(p) = \mathcal L\left(\frac{1}{2}x\sin x\right). \]

Example 60 Determine the solution of the IVP
\[ y''(x) + 3y'(x) + 2y(x) = f(x), \qquad y(0) = y'(0) = 1, \]
with your solution involving a convolution.

Solution. Applying the Laplace transform we find
\[ p^2\bar y - p - 1 + 3\left(p\bar y - 1\right) + 2\bar y = \bar f. \]
Hence
\[ (p+2)(p+1)\bar y = p + 4 + \bar f, \]
and
\[ \bar y = \frac{p+4}{(p+2)(p+1)} + \frac{\bar f}{(p+2)(p+1)} = \frac{3}{p+1} - \frac{2}{p+2} + \left(\frac{1}{p+1} - \frac{1}{p+2}\right)\bar f. \]
Hence
\[ y(x) = 3e^{-x} - 2e^{-2x} + \int_0^x \left(e^{-t} - e^{-2t}\right)f(x-t)\,dt. \]

We now have a varied toolkit for finding inverse Laplace transforms using some form of inspection. However we have no comprehensive method for achieving this, nor even certainty yet that the Laplace transform is injective. We first prove this last fact for a fairly wide range of functions.

Theorem 61 (Injectivity of the Laplace Transform) Let $f$ be a continuous function on $[0,\infty)$, bounded by some function $Me^{cx}$, and such that $\bar f(p) = 0$ for $\operatorname{Re}p > c$. Then $f = 0$.
Proof. (Non-examinable) We will need to make use of the Weierstrass Approximation Theorem during this proof, which says that given any continuous function on a closed bounded interval there is a sequence of polynomials that converges uniformly to it.
Fix $k > c$ and set $p = k + n + 1$ where $n \ge 0$. We then have that
\begin{align*}
0 &= \int_0^\infty f(x)e^{-px}\,dx \\
&= \int_0^\infty e^{-nx}e^{-kx}e^{-x}f(x)\,dx \\
&= \int_0^1 y^n y^k f(-\log y)\,dy \qquad [y = e^{-x}] \\
&= \int_0^1 y^n g(y)\,dy,
\end{align*}
where $g(y) = y^k f(-\log y)$. Note that $g$ is immediately continuous on $(0,1]$ and is also continuous at $0$ as
\[ |g(y)| = \left|y^k f(-\log y)\right| = e^{-kx}|f(x)| \le Me^{(c-k)x} \to 0 \quad\text{as } y \to 0 \text{ (or } x \to \infty). \]
By linearity it follows that
\[ \int_0^1 p(y)g(y)\,dy = 0 \]
for any polynomial $p$. If $p_n$ is a sequence of polynomials uniformly converging to $g$ then we have
\[ \int_0^1 g(y)^2\,dy = \lim_{n\to\infty}\int_0^1 p_n(y)g(y)\,dy = 0,
\]
and hence $g = 0$ and finally $f = 0$.

And at last we arrive at the Inversion Theorem.



Theorem 62 (Inversion Theorem for Laplace Transform) Let $f$ be a differentiable function on $(0,\infty)$ such that $\bar f(p)$ exists for $\operatorname{Re}p > c > 0$. Then for $x > 0$,
\[ f(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp \qquad (\sigma > c). \]

Remark 63 A complete proof of this result would be technical and beyond the scope of this
course. However we shall prove the above result for rational functions which in practice
addresses the inverse problem for many of the functions that we have already met. We shall
revisit this inversion theorem when we tackle the inverse problem for the Fourier transform.

Proof. (Non-examinable.) Suppose that $f(x)$ has a Laplace transform $\bar f(p)$ which is a rational function of the form
\[ \bar f(p) = \frac{g(p)}{(p-a)^n} \]
where $g$ is a polynomial of degree less than $n$. Consider the integral
\[ I(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp \]
where $x > 0$ and $\sigma > \operatorname{Re}a$. We will seek to evaluate this integral using the Residue Theorem applied to the contour shown in the figure:

This gives
\[ \frac{1}{2\pi i}\int_{C_R}\bar f(p)e^{px}\,dp + \frac{1}{2\pi i}\int_{\Gamma_R}\bar f(p)e^{px}\,dp = \operatorname{res}\left(\bar f(p)e^{px}; a\right). \]
We also know
\[ \operatorname{res}\left(\bar f(p)e^{px}; a\right) = \operatorname{res}\left(\frac{g(p)e^{px}}{(p-a)^n}; a\right) = \frac{1}{(n-1)!}\left.\frac{d^{n-1}}{dp^{n-1}}\right|_{p=a} g(p)e^{px}. \]
As the degree of $g$ is less than $n$, then $\bar f(p) = O(|p|^{-1})$ for suitably large $|p|$, and so for suitably large $R$ we have that
\begin{align*}
\left|\frac{1}{2\pi i}\int_{\Gamma_R}\bar f(p)e^{px}\,dp\right| &\le \frac{1}{2\pi}\int_{\pi/2}^{3\pi/2}\left|\bar f(\sigma+Re^{i\theta})e^{x(\sigma+Re^{i\theta})}\,iRe^{i\theta}\right|d\theta \qquad [p = \sigma+Re^{i\theta}] \\
&\le \frac{Re^{x\sigma}}{2\pi}\,O\!\left(\frac{1}{R}\right)\int_{\pi/2}^{3\pi/2} e^{xR\cos\theta}\,d\theta \\
&= O(1)\int_{\pi/2}^{\pi} e^{xR\cos\theta}\,d\theta \qquad\text{[by symmetry about } \theta = \pi] \\
&= O(1)\int_0^{\pi/2} e^{-xR\sin\theta}\,d\theta \qquad [\theta \mapsto \theta+\pi/2] \\
&\le O(1)\int_0^{\pi/2} e^{-2xR\theta/\pi}\,d\theta \qquad\text{[by Jordan's Lemma]} \\
&= O\!\left(R^{-1}\right) \to 0 \qquad\text{as } R \to \infty.
\end{align*}

Hence letting $R\to\infty$ we have
\[ I(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp = \frac{1}{(n-1)!}\left.\frac{d^{n-1}}{dp^{n-1}}\right|_{p=a} g(p)e^{px}. \]

This should be an expression for $f(x)$. Given the injectivity of the Laplace transform, it is sufficient to show that this does indeed transform into $\bar f(p)$. We shall demonstrate this using Leibniz's rule for differentiating products. We have
\begin{align*}
\bar I(p) &= \int_0^\infty\left(\frac{1}{(n-1)!}\left.\frac{d^{n-1}}{dp^{n-1}}\right|_{p=a} g(p)e^{px}\right)e^{-px}\,dx \\
&= \frac{1}{(n-1)!}\sum_{k=0}^{n-1}\binom{n-1}{k}g^{(k)}(a)\int_0^\infty x^{n-1-k}e^{ax}e^{-px}\,dx \\
&= \frac{1}{(n-1)!}\sum_{k=0}^{n-1}\binom{n-1}{k}g^{(k)}(a)\int_0^\infty x^{n-1-k}e^{-(p-a)x}\,dx \\
&= \frac{1}{(n-1)!}\sum_{k=0}^{n-1}\binom{n-1}{k}g^{(k)}(a)\,\frac{(n-1-k)!}{(p-a)^{n-k}} \\
&= \frac{1}{(p-a)^n}\sum_{k=0}^{n-1}\frac{g^{(k)}(a)}{k!}(p-a)^k \\
&= \frac{g(p)}{(p-a)^n} = \bar f(p),
\end{align*}
noting the last sum is just the Taylor expansion of the polynomial $g(p)$ centred at $a$, and so equals $g(p)$. By the injectivity of $\mathcal L$ we can conclude that $I(x) = f(x)$.

As a rational function (with a numerator of degree strictly less than its denominator) can be written as a linear combination of such $\bar f$, the desired result follows by linearity.

Example 64 Find the Laplace inverse of
\[ \bar f(p) = \frac{1}{(p^2+1)^2}. \]

Solution. Consider the contour described in the remark following the Inversion Theorem. For $x > 0$ we will calculate
\[ I = \frac{1}{2\pi i}\int_\gamma \bar f(p)e^{px}\,dp = \frac{1}{2\pi i}\int_\gamma \frac{e^{px}}{(p^2+1)^2}\,dp \]
around the contour $\gamma = C_R \cup \Gamma_R$. The integrand has double poles at $\pm i$ and so by Cauchy's Residue Theorem $I$ equals
\[ I = \operatorname{res}\left(\frac{e^{px}}{(p^2+1)^2}; i\right) + \operatorname{res}\left(\frac{e^{px}}{(p^2+1)^2}; -i\right). \]

Now
\begin{align*}
\operatorname{res}\left(\frac{e^{px}}{(p^2+1)^2}; i\right) &= \operatorname{res}\left(\frac{e^{px}}{(p+i)^2(p-i)^2}; i\right) \\
&= \frac{1}{1!}\left.\frac{d}{dp}\right|_{p=i}\frac{e^{px}}{(p+i)^2} \\
&= \left.\left(\frac{xe^{px}}{(p+i)^2} - \frac{2e^{px}}{(p+i)^3}\right)\right|_{p=i} \\
&= \frac{-xe^{ix}}{4} - \frac{ie^{ix}}{4}.
\end{align*}
Similarly
\[ \operatorname{res}\left(\frac{e^{px}}{(p^2+1)^2}; -i\right) = \frac{1}{1!}\left.\frac{d}{dp}\right|_{p=-i}\frac{e^{px}}{(p-i)^2} = \frac{-xe^{-ix}}{4} + \frac{ie^{-ix}}{4}. \]
Hence
\[ I = \frac{1}{4}\left(i\left(e^{-ix}-e^{ix}\right) - x\left(e^{ix}+e^{-ix}\right)\right) = \frac{1}{4}\left(i(-2i\sin x) - x(2\cos x)\right) = \frac{1}{2}\left(\sin x - x\cos x\right). \]
We can parametrize the $\Gamma_R$ arc in $\gamma$ by $p = \sigma + Re^{i\theta}$ where $\pi/2 \le \theta \le 3\pi/2$. Using the Estimation Theorem we have
\[ \left|\int_{\Gamma_R}\frac{e^{px}}{(p^2+1)^2}\,dp\right| \le \frac{\pi R}{O(R^4)}\sup_{\theta\in(\pi/2,3\pi/2)}\left|e^{x(\sigma+Re^{i\theta})}\right| \le \frac{\pi Re^{x\sigma}}{O(R^4)}\sup_{\theta\in(\pi/2,3\pi/2)} e^{xR\cos\theta} \le \frac{\pi Re^{x\sigma}}{O(R^4)} \qquad [\text{as } xR\cos\theta < 0], \]
which is $O(R^{-3}) \to 0$ as $R\to\infty$. Thus
\[ f(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp = \lim_{R\to\infty}\frac{1}{2\pi i}\int_\gamma \bar f(p)e^{px}\,dp = \frac{1}{2}\left(\sin x - x\cos x\right). \]

Example 65 Find the Laplace inverse of $\bar f(p) = p^{-1/3}$.

Solution. We define a branch of $p^{-1/3}$ in the cut plane $\mathbb C\setminus(-\infty,0]$ by
\[ p^{-1/3} = r^{-1/3}e^{-i\theta/3} \qquad\text{where } p = re^{i\theta},\ -\pi < \theta < \pi, \]
and adapt the contour $\gamma$ around the cut, dividing it into various line segments and arcs $AB$, $BC$, $CD$, $EF$, $FA$ as in the diagram below.

By Cauchy's Theorem we have
\[ I = \frac{1}{2\pi i}\int_\gamma \bar f(p)e^{px}\,dp = 0. \]
Along the top side of the cut $CD$ we have $\theta = \pi$ and $p = -r$, and
\[ p^{-1/3} = \left(re^{i\pi}\right)^{-1/3} = r^{-1/3}e^{-i\pi/3}, \]
and along the bottom side of the cut $EF$ we have $\theta = -\pi$ and $p = -r$, and
\[ p^{-1/3} = \left(re^{-i\pi}\right)^{-1/3} = r^{-1/3}e^{i\pi/3}. \]
Hence when $R\to\infty$ we have
\[ \frac{1}{2\pi i}\int_C^D \bar f(p)e^{px}\,dp \to \frac{e^{-i\pi/3}}{2\pi i}\int_0^\infty \frac{e^{-rx}}{r^{1/3}}\,dr = \frac{e^{-i\pi/3}}{2\pi i}\,x^{-2/3}\int_0^\infty \frac{e^{-u}}{u^{1/3}}\,du = \Gamma(2/3)\,\frac{e^{-i\pi/3}}{2\pi i}\,x^{-2/3}; \]
\[ \frac{1}{2\pi i}\int_E^F \bar f(p)e^{px}\,dp \to -\frac{e^{i\pi/3}}{2\pi i}\int_0^\infty \frac{e^{-rx}}{r^{1/3}}\,dr = -\frac{e^{i\pi/3}}{2\pi i}\,x^{-2/3}\int_0^\infty \frac{e^{-u}}{u^{1/3}}\,du = -\Gamma(2/3)\,\frac{e^{i\pi/3}}{2\pi i}\,x^{-2/3}. \]

Together these add to
\[ \frac{\Gamma(2/3)}{2\pi i}\,x^{-2/3}\left(e^{-i\pi/3} - e^{i\pi/3}\right) = -\frac{\Gamma(2/3)\sqrt3}{2\pi x^{2/3}}. \]
Now considering the $BC$ and $FA$ integrals, we have
\[ \left|\int_B^C \frac{e^{px}}{p^{1/3}}\,dp\right| \le \frac{O(R)}{O(R^{1/3})}\int_{\pi/2}^\pi e^{x(\sigma+R\cos\theta)}\,d\theta = O\!\left(R^{2/3}\right)\int_0^{\pi/2} e^{-xR\sin\theta}\,d\theta \le O\!\left(R^{2/3}\right)\int_0^{\pi/2} e^{-2xR\theta/\pi}\,d\theta = O\!\left(R^{-1/3}\right) \to 0 \]
as $R\to\infty$, by Jordan's Lemma. Similarly the contribution from $FA$ tends to zero in the limit. Hence letting $R\to\infty$ we have
\[ f(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp = \frac{\Gamma(2/3)\sqrt3}{2\pi x^{2/3}}. \]
We already know that
\[ x^a \overset{\mathcal L}{\longmapsto} \frac{\Gamma(a+1)}{p^{a+1}}, \qquad\text{so that}\qquad x^{-2/3} \overset{\mathcal L}{\longmapsto} \frac{\Gamma(1/3)}{p^{1/3}}, \]
so our answer may seem somewhat wrong; however, it is the case$^1$ that
\[ \Gamma\!\left(\frac13\right)\Gamma\!\left(\frac23\right) = \frac{2\pi}{\sqrt3}, \]
$^1$Since you ask: we prove that $\Gamma(z)\Gamma(1-z) = \pi/\sin(\pi z)$ for $z$ not an integer (the Gamma function has poles at the non-positive integers). First take $0 < \operatorname{Re}z < 1$. Then
\begin{align*}
\Gamma(z)\Gamma(1-z) &= \int_0^\infty\!\int_0^\infty e^{-(s+t)}s^{z-1}t^{-z}\,ds\,dt \\
&= \int_0^\infty\!\int_0^\infty e^{-u}v^{z-1}\,\frac{du\,dv}{1+v} \qquad (\text{by } u = s+t,\ v = s/t) \\
&= \int_0^\infty \frac{v^{z-1}}{1+v}\,dv = \frac{\pi}{\sin\pi z},
\end{align*}
where the last integral is a standard one round a keyhole contour, with the branch cut for $v^{z-1}$ taken along the positive real axis and the value of the integral coming from the residue of the pole at $v = -1$. The result holds for other values of $z$ by holomorphic continuation. Put $z = 1/3$ to get the result above.

and so
\[ f(x) = \frac{\Gamma(2/3)\sqrt3}{2\pi x^{2/3}} = \frac{2\pi}{\Gamma(1/3)\sqrt3}\cdot\frac{\sqrt3}{2\pi x^{2/3}} = \frac{1}{\Gamma(1/3)\,x^{2/3}}. \]
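Both the reflection-formula value and the final inverse can be checked numerically. The substitution $x = u^3$ used below (an assumption of this sketch) removes the integrable singularity of $x^{-2/3}$ at the origin before applying a simple trapezoid rule:

```python
import math

# reflection formula at z = 1/3: Gamma(1/3) Gamma(2/3) = 2 pi / sqrt(3)
print(abs(math.gamma(1/3) * math.gamma(2/3) - 2 * math.pi / math.sqrt(3)) < 1e-12)

# Laplace transform of f(x) = 1/(Gamma(1/3) x^{2/3}); with x = u^3,
#   int_0^inf x^{-2/3} e^{-px} dx = 3 int_0^inf e^{-p u^3} du
def transform(p, upper=10.0, n=100000):
    h = upper / n
    s = 0.5 * (1.0 + math.exp(-p * upper**3))
    for i in range(1, n):
        u = i * h
        s += math.exp(-p * u**3)
    return 3 * s * h / math.gamma(1/3)

p = 2.0
print(abs(transform(p) - p**(-1/3)) < 1e-6)  # True
```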

Theorem 66 (Term-by-term Laplace Inversion) Let $f$ be a differentiable function on $(0,\infty)$ such that $\bar f(p)$ exists and is expressible as
\[ \bar f(p) = \sum_{n=0}^\infty \frac{a_n}{p^{n+1}} \qquad\text{for } \operatorname{Re}p > c > 0. \tag{3.1} \]
Then
\[ f(x) = \sum_{n=0}^\infty \frac{a_n x^n}{n!}. \]

Proof. The series in (3.1) converges for $|p| > c$ and defines a holomorphic function in that domain. Further, as in the proof of Laurent's Theorem, the series converges uniformly on any $\gamma(0,r)$ where $r > c$, and we may also note that $\bar f(p) = O(1/|p|)$ as $p\to\infty$.

By the Inversion Theorem we have
\begin{align*}
f(x) &= \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\int_{\sigma-iR}^{\sigma+iR}\bar f(p)e^{px}\,dp \qquad\text{[by definition]} \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\int_{\sigma-iR}^{\sigma+iR}\left(\sum_{n=0}^\infty \frac{a_n}{p^{n+1}}\right)e^{px}\,dp \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\int_\gamma\left(\sum_{n=0}^\infty \frac{a_n}{p^{n+1}}\right)e^{px}\,dp \qquad[\text{using } \bar f(p) = O(1/|p|) \text{ and Jordan}] \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\int_{\gamma(0,R)}\left(\sum_{n=0}^\infty \frac{a_n}{p^{n+1}}\right)e^{px}\,dp \qquad\text{[by the Deformation Theorem]} \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\sum_{n=0}^\infty\int_{\gamma(0,R)}\frac{a_n}{p^{n+1}}e^{px}\,dp \qquad[\text{by uniform convergence on } \gamma(0,R)] \\
&= \frac{1}{2\pi i}\lim_{R\to\infty}\sum_{n=0}^\infty \frac{2\pi i\,a_n x^n}{n!} \qquad\text{[by Cauchy's Residue Theorem]} \\
&= \sum_{n=0}^\infty \frac{a_n x^n}{n!}.
\end{align*}

Example 67 Find the Laplace inverse of $(p^2+1)^{-1}$ using term-by-term inversion.
Solution. We have
\[ \left(1+p^2\right)^{-1} = \frac{1}{p^2}\left(1+\frac{1}{p^2}\right)^{-1} = \sum_{k=0}^\infty \frac{(-1)^k}{p^{2k+2}} \overset{\mathcal L^{-1}}{\longmapsto} \sum_{k=0}^\infty \frac{(-1)^k x^{2k+1}}{(2k+1)!} = \sin x. \]

Example 68 From Example 53 we have that
\[ \bar J_0(p) = \left(1+p^2\right)^{-1/2}. \]
Find the Taylor series for $J_0(x)$.
Solution. We have
\begin{align*}
\bar J_0(p) &= p^{-1}\left(1+p^{-2}\right)^{-1/2} \\
&= p^{-1}\sum_{k=0}^\infty \frac{\left(\frac{-1}{2}\right)\left(\frac{-3}{2}\right)\cdots\left(\frac{1-2k}{2}\right)}{k!}\left(\frac{1}{p^2}\right)^k \\
&= \sum_{k=0}^\infty \frac{(-1)^k\,1\times3\times\cdots\times(2k-1)}{2^k\,k!}\,\frac{1}{p^{2k+1}} \\
&= \sum_{k=0}^\infty \frac{(-1)^k(2k)!}{2^{2k}(k!)^2}\,\frac{1}{p^{2k+1}} \\
&= \sum_{k=0}^\infty \binom{2k}{k}\frac{(-1)^k}{2^{2k}}\,\frac{1}{p^{2k+1}}.
\end{align*}
Hence, inverting term-by-term, we have
\[ J_0(x) = \sum_{k=0}^\infty \frac{(-1)^k}{(k!)^2}\left(\frac{x}{2}\right)^{2k}. \]
Chapter 4

Fourier Transform and Applications

Definition 69 Let $f : \mathbb R \to \mathbb C$ be integrable. Then the Fourier transform $\hat f(s)$ of $f(x)$ is
\[ \hat f(s) = \int_{-\infty}^\infty f(x)e^{-isx}\,dx. \]

We will also denote this $(\mathcal Ff)(s)$ and write $\mathcal F$ for the Fourier transform.

Remark 70 Note that a much smaller range of functions has a convergent Fourier transform compared with the Laplace transform. The factor of $e^{-px}$ in the Laplace transform means that many common functions (though not something like $e^{x^2}$, which is too big for large $x$, nor $x^{-1}$, which is not integrable near $0$) have a convergent Laplace transform. The requirement that $f$ be integrable on the whole of $\mathbb R$ is therefore relatively restrictive.

Example 71 Let $f = \mathbf 1_{[-1,1]}$. Determine $\hat f$.

Solution. We have
\[ \hat f(s) = \int_{-1}^1 e^{-isx}\,dx = \left[\frac{e^{-isx}}{-is}\right]_{x=-1}^{x=1} = \frac{e^{is} - e^{-is}}{is} = \frac{2\sin s}{s}. \]

Example 72 Let $g(x) = e^{-a|x|}$ where $a > 0$. Determine $\hat g$.

CHAPTER 4. FOURIER TRANSFORM AND APPLICATIONS 41

Solution. We have
\begin{align*}
\hat g(s) &= \int_{-\infty}^\infty e^{-a|x|}e^{-isx}\,dx \\
&= \int_{-\infty}^0 e^{(a-is)x}\,dx + \int_0^\infty e^{-(a+is)x}\,dx \\
&= \frac{1}{a-is}\left[e^{(a-is)x}\right]_{-\infty}^0 - \frac{1}{a+is}\left[e^{-(a+is)x}\right]_0^\infty \\
&= \frac{1}{a-is} + \frac{1}{a+is} = \frac{2a}{a^2+s^2}.
\end{align*}
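Since the integrand is even in the appropriate sense, $\hat g(s) = 2\int_0^\infty e^{-ax}\cos(sx)\,dx$, which is easy to check numerically. The trapezoid-rule helper is a hand-rolled sketch (truncation is an assumption):

```python
import math

def fhat(a, s, upper=60.0, n=100000):
    # F[e^{-a|x|}](s) = 2 int_0^inf e^{-ax} cos(sx) dx, by evenness
    h = upper / n
    total = 0.5 * (1.0 + math.exp(-a * upper) * math.cos(s * upper))
    for i in range(1, n):
        x = i * h
        total += math.exp(-a * x) * math.cos(s * x)
    return 2 * total * h

a, s = 1.0, 2.0
print(abs(fhat(a, s) - 2 * a / (a**2 + s**2)) < 1e-6)  # True: both ~ 0.4
```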

Example 73 Using the sifting property, we see that the Fourier transform of $\delta(x-a)$ is $e^{-isa}$.

Example 74 Let $a > 0$. Show that the Fourier transform of $f(x) = e^{-a^2x^2}$ equals
\[ \hat f(s) = \frac{\sqrt\pi}{a}\exp\left(\frac{-s^2}{4a^2}\right). \]

Solution. We are interested in
\begin{align*}
\hat f(s) &= \int_{-\infty}^\infty \exp\left(-a^2x^2\right)\exp(-isx)\,dx \\
&= \int_{-\infty}^\infty \exp\left(-a^2x^2 - isx\right)dx \\
&= \int_{-\infty}^\infty \exp\left(-a^2\left(x+\frac{is}{2a^2}\right)^2 - \frac{s^2}{4a^2}\right)dx \\
&= \exp\left(\frac{-s^2}{4a^2}\right)\int_{-\infty}^\infty \exp\left(-a^2\left(x+\frac{is}{2a^2}\right)^2\right)dx.
\end{align*}
Let $s > 0$. We will consider the rectangular contour $\Gamma_R$ with vertices $\pm R$ and $\pm R + \frac{is}{2a^2}$, and an integrand of $\exp(-a^2z^2)$. By Cauchy's Theorem
\[ \int_{\Gamma_R}\exp\left(-a^2z^2\right)dz = 0. \]

Note that the contributions from the rectangle's right and left edges satisfy
\[ \left|\int_0^{s/2a^2}\exp\left(-a^2(\pm R+iy)^2\right)i\,dy\right| \le e^{-a^2R^2}\int_0^{s/2a^2}\exp\left(a^2y^2\right)dy \to 0 \qquad\text{as } R\to\infty. \]
Hence, letting $R\to\infty$ we have
\[ \int_{-\infty}^\infty \exp\left(-a^2x^2\right)dx - \int_{-\infty}^\infty \exp\left(-a^2\left(x+\frac{is}{2a^2}\right)^2\right)dx = 0. \]
We know (from knowledge of the normal distribution) that
\[ \int_{-\infty}^\infty \exp\left(-a^2x^2\right)dx = \frac{\sqrt\pi}{a}. \]
Hence
\[ \frac{\sqrt\pi}{a} = \int_{-\infty}^\infty \exp\left(-a^2\left(x+\frac{is}{2a^2}\right)^2\right)dx = \exp\left(\frac{s^2}{4a^2}\right)\hat f(s), \]
and
\[ \hat f(s) = \frac{\sqrt\pi}{a}\exp\left(\frac{-s^2}{4a^2}\right). \]

Proposition 75 (Riemann-Lebesgue Lemma) If $f$ is an integrable function then
\[ \int_{-\infty}^\infty f(x)\cos sx\,dx \to 0 \qquad\text{and}\qquad \int_{-\infty}^\infty f(x)\sin sx\,dx \to 0 \qquad\text{as } s\to\infty. \]
As a consequence, $\hat f(s) \to 0$ as $s\to\infty$.

Proof. We shall not prove this result here. It appears in this term's Integration option.

Theorem 76 (Fourier Transform for Derivatives) Let $f : \mathbb R\to\mathbb C$ be a differentiable function with an integrable derivative $f'$. Then
\[ (\mathcal Ff')(s) = is(\mathcal Ff)(s). \]
Proof. By integration by parts we have
\[ (\mathcal Ff')(s) = \int_{-\infty}^\infty f'(x)e^{-isx}\,dx = \left[f(x)e^{-isx}\right]_{-\infty}^\infty - \int_{-\infty}^\infty f(x)(-is)e^{-isx}\,dx = 0 + is(\mathcal Ff)(s), \]
as required.
In a similar fashion to the Laplace transform, the Fourier transform also has a convolution.
However, do note the difference in the limits.
Theorem 77 (Fourier Transform Convolution) Let $f$ and $g$ be integrable functions. Then the convolution $f*g$ is defined by
\[ h(x) = (f*g)(x) = \int_{-\infty}^\infty f(t)g(x-t)\,dt, \]
and is itself integrable and satisfies
\[ \hat h(s) = \hat f(s)\hat g(s). \]
Remark 78 As before with the Laplace convolution, we have
\[ (f*g)(x) = (g*f)(x), \]
with the proof following in a like manner.
Proof. We have
\begin{align*}
\hat f(s)\hat g(s) &= \left(\int_{-\infty}^\infty f(x)e^{-isx}\,dx\right)\left(\int_{-\infty}^\infty g(y)e^{-isy}\,dy\right) \\
&= \int_{-\infty}^\infty\int_{-\infty}^\infty f(x)g(y)e^{-is(x+y)}\,dx\,dy \\
&= \int_{-\infty}^\infty\int_{-\infty}^\infty f(u-y)g(y)e^{-isu}\,du\,dy \qquad [u = x+y] \\
&= \int_{-\infty}^\infty\left(\int_{-\infty}^\infty g(y)f(u-y)\,dy\right)e^{-isu}\,du \\
&= \int_{-\infty}^\infty (g*f)(u)\,e^{-isu}\,du = \int_{-\infty}^\infty (f*g)(u)\,e^{-isu}\,du = \hat h(s).
\end{align*}

Example 79 Determine the convolution of:
(i) $\delta_a(x) = \delta(x-a)$ and $f(x)$;
(ii) $e^{-|x|}$ with itself.

Solution. (i) By the sifting property
\[ (\delta_a * f)(x) = \int_{-\infty}^\infty \delta(t-a)f(x-t)\,dt = f(x-a). \]
(ii) Let $f(x) = e^{-|x|}$. Then
\[ (f*f)(x) = \int_{-\infty}^\infty e^{-|t|}e^{-|x-t|}\,dt = \int_{-\infty}^0 e^te^{-|x-t|}\,dt + \int_0^\infty e^{-t}e^{-|x-t|}\,dt. \]
If $x > 0$ we have
\begin{align*}
(f*f)(x) &= \int_{-\infty}^0 e^{2t-x}\,dt + \int_0^x e^{-x}\,dt + \int_x^\infty e^{x-2t}\,dt \\
&= e^{-x}\left[\frac{e^{2t}}{2}\right]_{-\infty}^0 + xe^{-x} + e^x\left[\frac{e^{-2t}}{-2}\right]_x^\infty \\
&= \frac{e^{-x}}{2} + xe^{-x} + \frac{e^{-x}}{2} = (x+1)e^{-x}.
\end{align*}
If $x < 0$ we have
\begin{align*}
(f*f)(x) &= \int_{-\infty}^x e^{2t-x}\,dt + \int_x^0 e^x\,dt + \int_0^\infty e^{x-2t}\,dt \\
&= e^{-x}\left[\frac{e^{2t}}{2}\right]_{-\infty}^x - xe^x + e^x\left[\frac{e^{-2t}}{-2}\right]_0^\infty \\
&= \frac{e^x}{2} - xe^x + \frac{e^x}{2} = (1-x)e^x.
\end{align*}
Hence, putting these functions together, we see
\[ (f*f)(x) = (1+|x|)e^{-|x|}. \]
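The case analysis above is easy to get wrong, so a direct numerical convolution on a truncated line is a useful cross-check. The helper below is a hand-rolled sketch (truncation half-width `L` and node count are assumptions):

```python
import math

def f(x):
    return math.exp(-abs(x))

def conv(x, L=40.0, n=200000):
    # trapezoid approximation to (f*f)(x) = int_{-L}^{L} f(t) f(x-t) dt
    h = 2 * L / n
    s = 0.5 * (f(-L) * f(x + L) + f(L) * f(x - L))
    for i in range(1, n):
        t = -L + i * h
        s += f(t) * f(x - t)
    return s * h

for x in (-1.5, 0.0, 2.0):
    assert abs(conv(x) - (1 + abs(x)) * math.exp(-abs(x))) < 1e-6
print("matches (1+|x|)e^{-|x|}")
```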

Theorem 80 (Inversion Theorem for Fourier Transform) Let $f$ be integrable and differentiable. Then
\[ f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat f(s)e^{isx}\,ds. \]
Proof. (Sketch proof) We might demonstrate the Fourier inversion in various ways, but at the heart of each such proof will essentially be the identity
\[ \frac{1}{2\pi}\int_{-\infty}^\infty e^{isx}\,dx = \delta(s). \tag{4.1} \]
If the theorem is true after all, we would expect this, as the Fourier transform of $\delta(x)$ is $1$. Assuming (4.1) for now, we can argue as follows:
\begin{align*}
\frac{1}{2\pi}\int_{-\infty}^\infty \hat f(s)e^{isx}\,ds &= \frac{1}{2\pi}\int_{-\infty}^\infty\left(\int_{-\infty}^\infty f(y)e^{-iys}\,dy\right)e^{isx}\,ds \\
&= \frac{1}{2\pi}\int_{-\infty}^\infty f(y)\left(\int_{-\infty}^\infty e^{is(x-y)}\,ds\right)dy \\
&= \frac{1}{2\pi}\int_{-\infty}^\infty f(y)\left(2\pi\delta(x-y)\right)dy \\
&= \int_{-\infty}^\infty f(y)\delta(x-y)\,dy = f(x) \qquad\text{[by the sifting property]}.
\end{align*}
To try to give some rigorous sense to (4.1), the delta function is usually approximated by a sequence of Gaussian pdfs (as these combine well with the exponential in the Fourier transform). Swapping the roles of $s$ and $x$ in Example 74 we see
\[ \int_{-\infty}^\infty \exp\left(-a^2s^2\right)e^{-ixs}\,ds = \frac{\sqrt\pi}{a}\exp\left(\frac{-x^2}{4a^2}\right). \]
Replacing $s$ with $-s$ and $a$ with $a/2$ we have
\[ \frac{1}{2\pi}\int_{-\infty}^\infty \exp\left(\frac{-a^2s^2}{4}\right)e^{ixs}\,ds = \frac{1}{a\sqrt\pi}\exp\left(\frac{-x^2}{a^2}\right). \]
On the RHS are Gaussians which tend to $\delta(x)$ as $a\to0$. Letting $a\to0$ we are left with the desired integral. (N.B. Nowhere do we claim that $\cos sx$ and $\sin sx$ are integrable as functions over $\mathbb R$!)
We can deduce the Laplace Inversion Theorem from the Fourier Inversion Theorem as
follows:

Corollary 81 (Inversion Theorem for Laplace Transform) Let $f$ be a differentiable function on $(0,\infty)$ such that $\bar f(p)$ exists for $\operatorname{Re}p > c > 0$. Then for $x > 0$,
\[ f(x) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{px}\,dp \qquad (\sigma > c). \]
Proof. Writing $p = \sigma + iy$ we have
\[ \bar f(\sigma+iy) = \int_0^\infty f(x)e^{-(\sigma+iy)x}\,dx = \int_0^\infty \left(e^{-\sigma x}f(x)\right)e^{-iyx}\,dx = \hat g(y), \]
where
\[ g(x) = e^{-\sigma x}f(x)\mathbf 1_{[0,\infty)}(x). \]
If we apply the Inverse Fourier Transform we find, for $x > 0$, that
\[ e^{-\sigma x}f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \bar f(\sigma+iy)e^{ixy}\,dy, \]
which rearranges to
\[ f(x) = \frac{1}{2\pi i}\int_{-\infty}^\infty \bar f(\sigma+iy)e^{x(\sigma+iy)}\,d(\sigma+iy) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\bar f(p)e^{xp}\,dp. \]
2πi −∞ 2πi σ−i∞

Remark 82 In applied mathematics, it is common to use the Fourier Transform pair in the form
\[ \hat f(k) = \int_{-\infty}^\infty f(x)e^{ikx}\,dx, \qquad f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat f(k)e^{-ikx}\,dk, \]
with the minus in the exponent of the inverse. Here $k$ is often interpreted as a wavenumber (wavelength $= 2\pi/k$). This is our version of the transform with $s$ swapped with $-s$.
Remark 83 Note that there is a factor of $2\pi$ in the inverse Fourier transform which is not present in the Fourier transform itself. This is a consequence of the Fourier transform, as defined here, not being an isometry with respect to the inner product
\[ \langle f, g\rangle = \int_{-\infty}^\infty f(x)\overline{g(x)}\,dx. \]
Consequently some texts define the Fourier transform and its inverse as
\[ \hat f(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)e^{-isx}\,dx, \qquad f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \hat f(s)e^{isx}\,ds. \]
This has a nicer symmetry, and both the transform and its inverse are now isometries with respect to the above inner product. The obvious downside to this, working with specific examples, is that the Fourier transforms of common functions now involve an unhelpful $\sqrt{2\pi}$ term.

A result demonstrating the near-isometric nature of the Fourier transform is Parseval's Theorem.

Theorem 84 (Parseval's Theorem for Fourier Transform) (Off-syllabus) Let $f$ and $g$ be integrable functions. Then
\[ \int_{-\infty}^\infty \hat f(s)\overline{\hat g(s)}\,ds = 2\pi\int_{-\infty}^\infty f(x)\overline{g(x)}\,dx. \]
Proof. We have
\begin{align*}
\int_{-\infty}^\infty \hat f(s)\overline{\hat g(s)}\,ds &= \int_{-\infty}^\infty\left(\int_{-\infty}^\infty f(x)e^{-isx}\,dx\right)\left(\int_{-\infty}^\infty \overline{g(y)}e^{isy}\,dy\right)ds \\
&= \int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty f(x)\overline{g(y)}e^{is(y-x)}\,dy\,dx\,ds \\
&= \int_{-\infty}^\infty\int_{-\infty}^\infty f(x)\overline{g(y)}\left(\int_{-\infty}^\infty e^{is(y-x)}\,ds\right)dy\,dx \\
&= 2\pi\int_{-\infty}^\infty\int_{-\infty}^\infty f(x)\overline{g(y)}\delta(y-x)\,dy\,dx \\
&= 2\pi\int_{-\infty}^\infty f(x)\overline{g(x)}\,dx \qquad\text{[by the sifting property]}.
\end{align*}

Remark 85 If we take f = g, Parseval’s theorem shows that ‘the energy in the function
and the energy in its Fourier transform are the same’. The theorem requires f and g to be
square-integrable (i.e. f 2 and g 2 must be integrable). For Lebesgue integration this says that
f ∈ L2 , and it is a nice property of the Fourier transform that if f ∈ L2 then fˆ ∈ L2 too.
The corresponding result for functions defined as the sum of a series of basis functions, for
example a Fourier series, is made more complicated by the question of completeness (can
every function in L2 be represented as such a sum?).

Example 86 Use the Inversion Theorem to determine
\[ \text{(i) } \int_{-\infty}^\infty \frac{\sin x}{x}\,dx, \qquad \text{(ii) } \int_{-\infty}^\infty \frac{\cos x}{x^2+a^2}\,dx, \qquad \text{(iii) } \int_{-\infty}^\infty \frac{\cos ax\,dx}{(x^2+1)^2}, \]
where $a > 0$.

Solution. (i) In Example 71 we showed that for $f = \mathbf 1_{[-1,1]}$ we have
\[ \hat f(s) = \frac{2\sin s}{s}. \]
Hence by the Inversion Theorem
\[ \frac{1}{2\pi}\int_{-\infty}^\infty \frac{2\sin s}{s}e^{isx}\,ds = f(x). \]
If we set $x = 0$ we get
\[ \frac{1}{2\pi}\int_{-\infty}^\infty \frac{2\sin s}{s}\,ds = f(0) = 1 \implies \int_{-\infty}^\infty \frac{\sin s}{s}\,ds = \pi. \]
(ii) Similarly from Example 72, for $g(x) = e^{-a|x|}$ we have
\[ \hat g(s) = \frac{2a}{a^2+s^2}. \]
Hence by the Inversion Theorem
\[ \frac{1}{2\pi}\int_{-\infty}^\infty \frac{2a}{a^2+s^2}\,e^{isx}\,ds = e^{-a|x|}. \]
If we set $x = 1$ and take real parts then we find
\[ \int_{-\infty}^\infty \frac{\cos s}{a^2+s^2}\,ds = \frac{\pi}{ae^a}. \]
(iii) We determined in Example 79 the convolution of $h(x) = e^{-|x|}$ with itself to get
\[ (h*h)(x) = (1+|x|)e^{-|x|}. \]
We know that the Fourier transform of $h(x)$ is $2/(1+s^2)$, and hence by the Inversion Theorem we have
\[ \frac{1}{2\pi}\int_{-\infty}^\infty \frac{4e^{isx}\,ds}{(s^2+1)^2} = (h*h)(x). \]
Taking real parts, setting $x = a$, and renaming our dummy variable, we have
\[ \int_{-\infty}^\infty \frac{\cos ax\,dx}{(x^2+1)^2} = \frac{2\pi}{4}(h*h)(a) = \frac{\pi}{2}(a+1)e^{-a}. \]

Remark 87 Note, for later, that replacing $s$ with $-s$ in
\[ \int_{-\infty}^\infty e^{-a|x|}e^{-isx}\,dx = \frac{2a}{a^2+s^2} \]
means
\[ \frac{1}{2\pi}\int_{-\infty}^\infty e^{-a|x|}e^{isx}\,dx = \frac{a}{\pi(a^2+s^2)}, \]
so that the Fourier inverse of $e^{-a|x|}$ is
\[ \frac{a}{\pi(a^2+s^2)}. \]

Example 88 (Convolution of Two Gaussians) Let
\[ f(x) = \frac{a}{\sqrt\pi}e^{-a^2x^2}, \qquad g(x) = \frac{b}{\sqrt\pi}e^{-b^2x^2}. \]
We have seen that
\[ \hat f(s) = \exp\left(\frac{-s^2}{4a^2}\right), \qquad \hat g(s) = \exp\left(\frac{-s^2}{4b^2}\right). \]
Hence
\[ \widehat{f*g}(s) = \exp\left(\frac{-s^2}{4a^2}\right)\exp\left(\frac{-s^2}{4b^2}\right) = \exp\left(\frac{-s^2}{4c^2}\right), \qquad\text{where } \frac{1}{c^2} = \frac{1}{a^2} + \frac{1}{b^2}. \]
Now $f$ and $g$ are the pdfs of $N(0, 1/2a^2)$ and $N(0, 1/2b^2)$, and $1/2c^2 = 1/2a^2 + 1/2b^2$. Hence the convolution of two centred Gaussian pdfs is the Gaussian pdf whose variance is the sum of their variances.
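The fact that variances add under convolution of centred Gaussian pdfs can be verified directly by numerical convolution. The truncation and tolerances below are assumptions of this sketch:

```python
import math

def gauss(a, x):
    # the Gaussian (a/sqrt(pi)) e^{-a^2 x^2}, a pdf with variance 1/(2a^2)
    return a / math.sqrt(math.pi) * math.exp(-(a * x)**2)

def conv(a, b, x, L=20.0, n=100000):
    # trapezoid approximation to the convolution on the truncated line [-L, L]
    h = 2 * L / n
    s = 0.5 * (gauss(a, -L) * gauss(b, x + L) + gauss(a, L) * gauss(b, x - L))
    for i in range(1, n):
        t = -L + i * h
        s += gauss(a, t) * gauss(b, x - t)
    return s * h

a, b = 1.0, 2.0
c = a * b / math.sqrt(a**2 + b**2)   # 1/c^2 = 1/a^2 + 1/b^2
for x in (0.0, 0.7, 1.5):
    assert abs(conv(a, b, x) - gauss(c, x)) < 1e-8
print("variances add under convolution")
```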
Chapter 5

Applications to PDEs

Theorem 89 (Poisson's Solution to the Dirichlet Problem in the Half-plane)
Suppose the function $u(x,y)$ satisfies Laplace's equation
\[ u_{xx}(x,y) + u_{yy}(x,y) = 0 \qquad (x\in\mathbb R,\ y > 0) \]
and satisfies the boundary conditions
\[ u(x,0) = f(x), \quad\text{where $f$ is integrable}, \]
\[ u(x,y) \text{ remains bounded as } x^2+y^2 \to \infty. \]
Then
\[ u(x,y) = \frac{y}{\pi}\int_{-\infty}^\infty \frac{f(s)\,ds}{y^2+(x-s)^2}. \]
Proof. Applying the Fourier transform in the $x$-variable to the PDE, we find that
\[ \hat u_{yy}(s,y) + (is)^2\hat u(s,y) = 0 \implies \hat u_{yy}(s,y) = s^2\hat u(s,y). \]
Solving this we find
\[ \hat u(s,y) = A(s)e^{ys} + B(s)e^{-ys}. \]
By an appropriate choice of functions $\alpha(s)$ and $\beta(s)$ we can rewrite this instead as
\[ \hat u(s,y) = \alpha(s)e^{y|s|} + \beta(s)e^{-y|s|}. \]
Now $\hat u(s,y)$ remains bounded as $y\to\infty$ for fixed $s$, and hence we have $\alpha(s) = 0$, and so
\[ \hat u(s,y) = \beta(s)e^{-y|s|}. \]
CHAPTER 5. APPLICATIONS TO PDES 51

Applying the Fourier transform to the boundary condition, we have that
\[ \hat u(s,0) = \hat f(s), \]
and hence $\beta(s) = \hat f(s)$ and
\[ \hat u(s,y) = \hat f(s)e^{-y|s|}. \]
We can write the Fourier inverse of this as a convolution provided we can invert $e^{-y|s|}$, but we found precisely this inverse in Remark 87 to be
\[ \frac{y}{\pi(y^2+x^2)}. \]
Hence the result follows by the convolution theorem.
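For boundary data $f = \mathbf 1_{[-1,1]}$ the Poisson integral can be evaluated in closed form as a difference of arctangents, which gives an independent check of the formula. The quadrature helper is a hand-rolled sketch (node count is an assumption):

```python
import math

def u(x, y, n=200000):
    # Poisson formula for f = 1_{[-1,1]}:
    # u(x,y) = (y/pi) int_{-1}^{1} ds / (y^2 + (x - s)^2), trapezoid rule
    h = 2.0 / n
    total = 0.5 * (1 / (y**2 + (x + 1)**2) + 1 / (y**2 + (x - 1)**2))
    for i in range(1, n):
        s = -1.0 + i * h
        total += 1 / (y**2 + (x - s)**2)
    return y / math.pi * total * h

x, y = 0.5, 0.8
# exact antiderivative: (1/pi) [arctan((s-x)/y)]_{-1}^{1}
exact = (math.atan((1 - x) / y) + math.atan((1 + x) / y)) / math.pi
print(abs(u(x, y) - exact) < 1e-8)  # True
```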

Example 90 Solve the PDE
\[ \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = e^{x-y} \]
subject to the boundary conditions
\[ u(x,0) = u(0,y) = 0, \qquad x, y > 0. \]
Solution. Applying the Laplace transform in the $x$-direction we find
\[ p\bar u(p,y) - u(0,y) + \bar u_y(p,y) = \frac{e^{-y}}{p-1}, \]
which rearranges to
\[ \bar u_y(p,y) + p\bar u(p,y) = \frac{e^{-y}}{p-1}. \]
This differential equation has general solution
\[ \bar u(p,y) = A(p)e^{-py} + \frac{e^{-y}}{(p-1)^2}. \]
Applying the Laplace transform to the other boundary condition, we see that $\bar u(p,0) = 0$ and hence
\[ A(p) = \frac{-1}{(p-1)^2}, \]
so that
\[ \bar u(p,y) = \frac{e^{-y} - e^{-py}}{(p-1)^2}. \]

Recall that the Laplace transform of xex is (p − 1)−2 and hence, inverting, we have
u(x, y) = xex e−y − (x − y)ex−y H(x − y)

xex−y x6y
= x−y x−y
xe − (x − y)e x>y
 x−y
xe x 6 y,
= x−y
ye x > y.
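Substituting the piecewise answer back into the PDE with central differences confirms it on both sides of the line $x = y$ (step size and tolerances are assumptions of this sketch):

```python
import math

def u(x, y):
    # claimed solution: u = x e^{x-y} for x <= y, u = y e^{x-y} for x > y
    return x * math.exp(x - y) if x <= y else y * math.exp(x - y)

h = 1e-6
for (x, y) in ((0.5, 2.0), (2.0, 0.5)):   # one point on each side of x = y
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    assert abs(ux + uy - math.exp(x - y)) < 1e-6   # PDE residual
print(u(0.0, 1.0) == 0.0 and u(1.0, 0.0) == 0.0)  # boundary conditions, True
```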

Example 91 Solve the following heat equation using the Laplace transform:
\[ \frac{\partial u}{\partial t} = \frac{\partial^2u}{\partial x^2}, \qquad 0 \le x \le \pi,\ t \ge 0, \]
subject to the boundary and initial conditions
\[ u(0,t) = 0 = u(\pi,t), \qquad u(x,0) = \sin x, \qquad 0 \le x \le \pi,\ t \ge 0. \]
Solution. Applying the Laplace transform to the PDE in the $t$ variable, we find
\[ p\bar u(x,p) - u(x,0) = \bar u_{xx}(x,p), \]
which rearranges to
\[ \bar u_{xx}(x,p) - p\bar u(x,p) = -\sin x. \]
This has general solution
\[ \bar u(x,p) = A(p)e^{\sqrt p\,x} + B(p)e^{-\sqrt p\,x} + \frac{\sin x}{p+1}. \]
Applying the Laplace transform to the boundary conditions we get
\[ \bar u(0,p) = 0 = \bar u(\pi,p), \]
and so
\[ A(p) + B(p) = 0; \qquad A(p)e^{\pi\sqrt p} + B(p)e^{-\pi\sqrt p} = 0. \]
Solving, we find $A(p) = 0 = B(p)$, and hence
\[ \bar u(x,p) = \frac{\sin x}{p+1}, \]
so that, inverting, we see
\[ u(x,t) = e^{-t}\sin x. \]

(This is, of course, the usual separation-of-variables solution.)
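As a final sanity check (not part of the notes), finite differences confirm that $e^{-t}\sin x$ satisfies the heat equation and the boundary conditions; step size and tolerances are assumptions of this sketch:

```python
import math

def u(x, t):
    return math.exp(-t) * math.sin(x)

h = 1e-5
for (x, t) in ((0.5, 0.2), (2.0, 1.0)):
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)                  # ~ u_t
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2      # ~ u_xx
    assert abs(ut - uxx) < 1e-4
print(abs(u(0.0, 1.0)) < 1e-14 and abs(u(math.pi, 1.0)) < 1e-14)  # True
```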


Chapter 6

The bigger picture, and a forward look

This course is a bit like Clapham Junction: you can get from it to (or pass through it en
route to) a vast array of destinations. In this (non-examinable!) conclusion, we take a look
at the broader context and see some possible destinations.

6.0.1 More on distributions


More dimensions. It is relatively straightforward to extend the definitions of test functions and distributions to more than one dimension (given a theory of integration). For example, this lets us properly describe a point charge/mass/heat-source in three dimensions using the three-dimensional delta function δ(x). Then, the steady temperature field generated by a point source of heat of strength Q at the origin, which we know is the radially symmetric solution T(x) = Q/(4πk|x|) for |x| > 0, satisfies

−k∇2 T = Qδ(x)

on all of R3 ; the line sources, vortices and dipoles of fluid in A10 (Fluids and Waves) satisfy
similar equations. The Green’s function (B5.2, Applied PDEs) G(x, ξ) for Laplace’s equation
in a domain D satisfies (as a function of x) the equation −∇2 G = δ(x − ξ) on all of D. This
idea is itself an extension of the one-dimensional Green’s functions, satisfying Ly = δ(x − ξ),
covered in A6 (DEs 2). All these ideas are unified at the modelling level by thinking of the
Green’s function as the response of the system to a point influence.
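To illustrate the point-source claim numerically (my addition, with an arbitrary sample point), one can check that T(x) = Q/(4πk|x|) is harmonic away from the origin: a seven-point finite-difference Laplacian evaluated anywhere with |x| > 0 is zero to truncation error. The delta function on the right-hand side lives entirely at the origin, which the stencil deliberately avoids.

```python
import math

def T(x, y, z, Q=1.0, k=1.0):
    """Temperature field of a point heat source of strength Q at the origin."""
    r = math.sqrt(x * x + y * y + z * z)
    return Q / (4 * math.pi * k * r)

# 7-point finite-difference Laplacian at a point away from the origin
p, h = (0.7, -0.3, 0.5), 1e-3
lap = (T(p[0] + h, p[1], p[2]) + T(p[0] - h, p[1], p[2])
     + T(p[0], p[1] + h, p[2]) + T(p[0], p[1] - h, p[2])
     + T(p[0], p[1], p[2] + h) + T(p[0], p[1], p[2] - h)
     - 6 * T(*p)) / h**2
print(lap)   # ≈ 0: T is harmonic away from the source
```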

Test functions and weak solutions. We saw earlier the key technical step of moving
from a pointwise definition of a function to an averaged definition via an integral. Because
the latter is more forgiving, it lets us define, for example, the derivative of a distribution
by the integration-by-parts formula ⟨F′, φ⟩ = −⟨F, φ′⟩: the point is that we transfer any


possible source of trouble (remember differentiation makes functions less well-behaved) from
F to the test function, where it can do no harm as φ is smooth. This idea underpins much
of the modern analysis of PDEs, via the notion of what are called weak solutions, and you
can explore it in B4.3 (Distribution Theory and Fourier Analysis), as well as from a more
modelling perspective in B5.2 (Applied PDEs) and B5.4 (Waves & Compressible Flow), where
it helps to analyse shock waves such as sonic booms or the Severn Bore.

Other aspects of distributions. Another extension of the basic theory is to pseudofunctions. This enables us to treat functions such as 1/x or log x on all of R, without worrying
about their singularities. The key definition is that of the pseudofunction 1/x by its action
on a test function φ:

    ⟨1/x, φ⟩ = lim_{ε↓0} ( ∫_{−∞}^{−ε} + ∫_{ε}^{∞} ) φ(x)/x dx.
The singularity in the integrand at x = 0 is eliminated by the symmetric way we let the
interval (−ε, ε) tend to zero. The result is called a Cauchy Principal Value integral and it
plays a big part in some applications of complex analysis (covered in C5.6 Applied Complex
Variables). Its similarity to the Cauchy kernel that you see in Cauchy’s Integral Formula
is no coincidence. One then defines the derivative of a pseudofunction by its action on a
test function in the same way as for a distribution, and you might like to use this idea
to show that the ordinary (integrable) function log |x| and the pseudofunction 1/x satisfy
d log |x|/dx = 1/x on all of R. (It is a short extension to define 1/x² as the derivative of
−1/x, and then you can prove the amusing (but correct) formula ⟨1/x², 1⟩ = 0 — to see why
it is amusing, write it as an ‘integral’.)
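As a concrete (if tame) illustration of the definition (my addition, taking φ(x) = x e^{−x²} as the test function), the pairing ⟨1/x, φ⟩ reduces to the Gaussian integral ∫ e^{−x²} dx = √π, since φ(x)/x = e^{−x²}; the symmetric truncation converges to this value as ε ↓ 0.

```python
import math

def pv_pairing(eps, upper=8.0, n=40000):
    """Symmetric truncation of <1/x, phi> for phi(x) = x e^{-x^2}:
    integrate phi(x)/x = e^{-x^2} over (-upper, -eps) and (eps, upper).
    The integrand is even, so we double the right-hand piece."""
    h = (upper - eps) / n
    g = lambda x: math.exp(-x * x)
    total = 0.5 * (g(eps) + g(upper))
    for k in range(1, n):
        total += g(eps + k * h)
    return 2.0 * total * h

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pv_pairing(eps))   # tends to sqrt(pi) ≈ 1.7725 as eps -> 0
```

For a test function with φ(0) ≠ 0 the two one-sided integrals each diverge like log(1/ε) and the symmetric limit is essential; here the cancellation has in effect already been done, because φ(x)/x happens to be smooth.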
If you are interested in probability, B8.1 (Probability, Measure & Martingales) starts with
an outline of Measure Theory, in which the delta measure (our delta function) allows us to
bring together discrete and continuous random variables in a single setting.

6.0.2 Fourier Transforms and distributions


Earlier in these notes we were happy to take the Fourier Transform of (for example) the delta
function, to get δ̂ = 1. Strictly speaking, some more test-function machinery is needed to
make this work. It turns out that we need to modify our definition of test functions slightly,
to allow them to be nonzero as |x| → ∞, provided they decay fast enough in the limit.
They then have the nice property that the Fourier Transform of a test function is also a test
function (not true for the previous kind). The result is the space of ‘tempered distributions’,
and the formal definition of the Fourier Transform of a tempered distribution F is via a test
function φ(x):
    ⟨F̂, φ⟩ = ⟨F, φ̂⟩

with inverse
    ⟨F̌, φ⟩ = ⟨F, φ̌⟩.
For all practical purposes, these distributions are the same as our earlier ones.

6.0.3 Where do transforms come from?


A first answer to this question is that they are ‘Fourier series on an infinite interval’. To see
this very informally, consider a function f (x) defined on the interval (−L, L). We may write
f(x) as the complex Fourier series, with coefficients f_n,

    f(x) = Σ_{n=−∞}^{∞} f_n e^{inπx/L},   where   f_n = (1/2L) ∫_{−L}^{L} f(x′)e^{−inπx′/L} dx′,

from which

    f(x) = Σ_{n=−∞}^{∞} f_n e^{inπx/L} = Σ_{n=−∞}^{∞} [ (1/2L) ∫_{−L}^{L} f(x′)e^{−inπx′/L} dx′ ] e^{inπx/L}.

Now compare this with the Fourier Transform pair when f(x) is defined on all of R:

    f(x) = (1/2π) ∫_{−∞}^{∞} f̂(s)e^{isx} ds = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(x′)e^{−isx′} dx′ ] e^{isx} ds.

If we let L → ∞, the two expressions agree if we interpret the sum as a Riemann integral
by setting nπ/L = s and the increment π/L = ds. Although not rigorous, this is certainly a
strong clue that the Fourier Transform is indeed an extension of Fourier series. Unfortunately,
no such easy idea is available for Laplace Transforms, and so we must look deeper. Those
taking A6 (DEs 2) will recognise some of what follows.
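The informal limit above can be watched numerically (my addition; the Gaussian test function and the cut-off L are arbitrary choices, and the convention f̂(s) = ∫ f(x)e^{−isx} dx from the displayed pair is assumed). For f(x) = e^{−x²}, whose transform is f̂(s) = √π e^{−s²/4}, the scaled Fourier coefficient 2L f_n should approximate f̂(nπ/L).

```python
import math

L, n = 10.0, 7
s = n * math.pi / L                 # the frequency n*pi/L that f_n samples

# 2L f_n = ∫_{-L}^{L} f(x) e^{-i n pi x/L} dx for f(x) = e^{-x^2};
# f is even, so only the real (cosine) part survives.
N = 20000
h = 2 * L / N
two_L_fn = 0.0
for k in range(N + 1):
    x = -L + k * h
    w = 0.5 if k in (0, N) else 1.0     # trapezium weights
    two_L_fn += w * math.exp(-x * x) * math.cos(s * x)
two_L_fn *= h

f_hat = math.sqrt(math.pi) * math.exp(-s * s / 4)   # exact transform of e^{-x^2}
print(two_L_fn, f_hat)   # the two agree closely
```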
Recall from linear algebra that a linear transformation T : R^n → R^n can be represented
by a matrix A with respect to any basis. When A is symmetric, we know that the eigenvalues
are all real and the eigenvectors are orthogonal. When A also has rank n (so no eigenvalue
vanishes), the (normalised) eigenvectors v_i form an orthonormal basis which is ‘natural’ for
this transformation. Any other vector w has an expansion w = Σ_i c_i v_i, where the coefficients
take the simple form c_i = ⟨v_i, w⟩ (here ⟨·, ·⟩ is the usual inner product). In particular, the
solution of Ax = b is x = Σ_i ⟨v_i, b⟩ v_i / λ_i.
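A minimal sketch of this recipe (my addition, using a 2×2 example chosen so that the spectral data are known exactly):

```python
import math

# Symmetric A = [[2, 1], [1, 2]]: eigenvalues 3 and 1,
# orthonormal eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2).
r = 1 / math.sqrt(2)
eigs = [(3.0, (r, r)), (1.0, (r, -r))]
b = (1.0, 0.0)

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]

# x = sum_i <v_i, b> v_i / lambda_i
x = [0.0, 0.0]
for lam, v in eigs:
    c = dot(v, b) / lam
    x[0] += c * v[0]
    x[1] += c * v[1]
print(x)

# check against A x = b directly
Ax = (2 * x[0] + 1 * x[1], 1 * x[0] + 2 * x[1])
print(Ax)   # ≈ (1, 0)
```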
Now recall Fourier series. Any continuous function f(x) which vanishes at x = ±L has
the Fourier series representation

    f(x) = Σ_n [ b_n sin(nπx/L) + a_n cos((2n + 1)πx/2L) ],

where each coefficient is given by integrating f (x) against the corresponding basis function.
These basis functions all satisfy the ‘eigenproblem’
    −d²y/dx² = λy,    −L < x < L,    y(±L) = 0,
where λ, the eigenvalue, is either (nπ/L)² or ((2n + 1)π/2L)². So this differential operator
(−d2 /dx2 plus boundary conditions) leads to a natural basis for the representation of the
solution to −d2 y/dx2 = g(x) with y(±L) = 0. More practically, it is why separation of
variables works for the one-dimensional heat and wave equations, as the ‘spatial’ differential
operator is precisely ‘−d2 /dx2 plus boundary conditions’. If we have a different operator, we
may get different basis functions; for example, separation of variables for a radially symmetric
solution of the two-dimensional heat equation leads to a series in terms of Bessel’s function
of order zero, J0 .
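For instance (my addition, and only a sketch), J0 can be built from its power series, J0(x) = Σ_m (−1)^m (x/2)^{2m}/(m!)², and checked against Bessel's equation of order zero, x y″ + y′ + x y = 0, by finite differences:

```python
import math

def J0(x, terms=30):
    """Bessel function of order zero via its power series."""
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(x / 2) ** 2 / ((m + 1) ** 2)   # ratio of successive terms
    return s

# check Bessel's equation  x y'' + y' + x y = 0  by finite differences
x0, h = 1.3, 1e-4
y   = J0(x0)
yp  = (J0(x0 + h) - J0(x0 - h)) / (2 * h)
ypp = (J0(x0 + h) - 2 * y + J0(x0 - h)) / h**2
res = x0 * ypp + yp + x0 * y
print(res)   # ≈ 0
```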
It is a small step to see that this idea can apply to transforms as well. The Fourier
Transform arises from the (self-adjoint) eigenproblem

    −d²y/dx² = λy,    −∞ < x < ∞,    y bounded as x → ±∞,
for which λ = s² is real and positive. Note that the discrete spectrum (countable number of
eigenvalues) we saw on a finite interval has become a continuous one as s can take any real
value. Likewise the Laplace transform arises from the (not self-adjoint) eigenproblem
    −dy/dx = λy,    0 < x < ∞,    y bounded as x → ∞,
for which λ = p. Other differential operators give other transforms; for example, the Bessel
operator leads to the Hankel Transform, and so on. For more on the theoretical underpinnings
of these calculations, B4.1 and B4.2 Functional Analysis are the courses to take.

6.0.4 Further uses of transforms


Many areas of applied mathematics use transforms of one kind or another: you can expect to
see them whenever the underlying model is a linear ordinary or partial differential equation.
Prominent examples include wave motion in fluid mechanics (sound waves in a compressible
fluid, water waves; A10), solid mechanics (the elastic equivalent of sound waves; C5.2) and
electromagnetism (B7.2). This last gives a good example of the power of the Laplace Trans-
form: whereas most of the examples we have seen in this course are straightforward and
could have been solved just as easily by other methods, the boot is on the other foot when
it comes to large systems of ODEs, such as arise in the theory of linear electrical circuits,

for example models of power grids subject to unexpected shocks (sorry . . . ). Here a huge
system of ODEs modelling the interaction of the inductances, capacitances and resistances
of the system is reduced by the Laplace Transform to a much more basic problem in linear
algebra.
On a much smaller scale, the Fourier Transform is important in quantum mechanics:
Heisenberg’s Uncertainty Principle follows from the Fourier Transform result

    E_x E_s ≥ 1/4,   where   E_x = ∫_{−∞}^{∞} x²(f(x))² dx / ∫_{−∞}^{∞} (f(x))² dx
                     and     E_s = ∫_{−∞}^{∞} s²|f̂(s)|² ds / ∫_{−∞}^{∞} |f̂(s)|² ds.
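A numerical illustration (my addition): for a Gaussian signal the inequality is sharp, the product E_x E_s coming out at exactly 1/4. Here f(x) = e^{−x²/2}, whose transform, in this course's convention, is f̂(s) = √(2π) e^{−s²/2}.

```python
import math

def moment_ratio(g, upper=10.0, n=40000):
    """E = ∫ x^2 g(x)^2 dx / ∫ g(x)^2 dx over (-upper, upper), trapezium rule."""
    h = 2 * upper / n
    num = den = 0.0
    for k in range(n + 1):
        x = -upper + k * h
        w = 0.5 if k in (0, n) else 1.0
        num += w * x * x * g(x) ** 2
        den += w * g(x) ** 2
    return num / den

f     = lambda x: math.exp(-x * x / 2)                            # Gaussian signal
f_hat = lambda s: math.sqrt(2 * math.pi) * math.exp(-s * s / 2)   # its transform

Ex, Es = moment_ratio(f), moment_ratio(f_hat)
print(Ex * Es)   # Gaussians achieve equality: Ex * Es = 1/4
```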

In probability, the characteristic function of a random variable is essentially the Fourier
Transform of its density. Signal processing is another fertile area of use for the Fourier
transform, with the recent detection of gravitational waves by the LIGO experiment being
just one example of its use. Here the independent variable x becomes time, t, and then we
think of the transform variable s as an angular frequency, often written as ω; sometimes a
factor 2π appears in the exponent, indicating that frequencies are measured in Hz.1 The
transform then takes a signal in the ‘time domain’ into the ‘frequency domain’ and the FT
of a signal represents the amplitudes in its decomposition into (a continuum of) frequencies;
the numerical computation of the transform is often done using the celebrated Fast Fourier
Transform (FFT). In this context, the result that δ̂ = 1 says that the delta function (in time)
is a bang which contains an equal amount of every frequency (a fact which finds application
in seismology). Put another way, a signal that is completely localised in time is uniformly
spread out in frequency, an extreme case of the uncertainty principle mentioned above. This
motivates the idea of wavelets, which are functions localised both in time and frequency, and
which are used (among other things) to generate hierarchical compression of digital images. A
final example is the famous Radon transform which is closely related to the Fourier Transform
and underpins the imaging of a patient having a CT scan.
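The statement that δ̂ = 1 has a discrete analogue that is easy to play with (my addition): the DFT of a unit impulse contains every frequency with equal amplitude. A naive O(N²) transform suffices here; the FFT computes exactly the same thing in O(N log N).

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (what the FFT computes quickly)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 16
impulse = [0.0] * N
impulse[0] = 1.0                    # discrete stand-in for the delta function
spectrum = dft(impulse)
print([abs(c) for c in spectrum])   # every frequency present with amplitude 1
```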
Finally, note that often the solution of a transform problem comes in the form of an
inversion integral which cannot be calculated explicitly, and then a systematic approximation
may be useful (C5.5 Perturbation Methods).

1. And, in engineering, i² = −1 becomes j² = −1 . . .
