Math Physics Lecture Notes
Alexei Rybkin
University of Alaska Fairbanks
Contents

Part 1. Complex Analysis

Part 2. Linear Spaces
Lecture 11. More about Selfadjoint and Unitary Operators. Change of Basis
  1. Examples of Selfadjoint and Unitary Operators
  2. Change of Basis

Part 3. Hilbert Spaces
  1. Signal Processing
  2. Solving Some Linear ODEs

Part 4.
Lecture 32. Nonhomogeneous Wave and Heat Equations on the Whole Line
  1. Nonhomogeneous Heat Equation
  2. Nonhomogeneous Wave Equation
  Appendix: The Method of Variation of Parameters
Lecture 37. The Principle of Conformal Mapping for the Laplace Equation. Laplace Equation in the Upper Half Plane
  Appendix. Change of Coordinate System and the Chain Rule
Lecture 38. The Spectrum of the Laplace Operator on Some Simple Domains
  1. The Spectrum of Δ on a Rectangle
  2. The Spectrum of Δ on a Disk
Lecture 40. Wave and Heat Equation in Dimension Higher Than One
  1. Wave Equation
  2. Heat Equation
  3. Further Discussions

Part 5. Green's Function

Index
Part 1
Complex Analysis
LECTURE 1
For x, y ∈ R and z_2 ≠ 0, division of complex numbers is defined by

    z_1/z_2 := (z_1 z̄_2)/(z_2 z̄_2) = (z_1 z̄_2)/|z_2|^2.

The set of all complex numbers is

    C = {z : z = x + iy, x, y ∈ R}.
It is convenient to place complex numbers on a plane (called the complex plane).
[Figure: the complex plane, showing z = x + iy = (x, y), its modulus |z|, and its conjugate z̄ = x − iy.]
The argument (phase, polar angle) of a complex number z is the angle θ between the x-axis and the vector (x, y). The standard notation for θ is

    θ = arg z.
3. Complex Functions
Example 1.2. Let D = C and let z = x + iy. The following two maps are examples of complex-valued functions:

    f(z) = x + iy = z,
    g(z) = x^2 + iy^2.

Although both look fairly simple, they are profoundly different, as we will see next time.
LECTURE 2

Recall the definition of the derivative:

(2.1)    f′(z_0) = lim_{Δz→0} (f(z_0 + Δz) − f(z_0))/Δz.

For f(z) = z we have

    (f(z_0 + Δz) − f(z_0))/Δz = (z_0 + Δz − z_0)/Δz = 1,

and hence

    lim_{Δz→0} (f(z_0 + Δz) − f(z_0))/Δz = 1.

For g(z) = x^2 + iy^2, approach z_0 first along the real direction, Δz = Δx:

    (g(z_0 + Δz) − g(z_0))/Δz = (2xΔx + Δx^2)/Δx = 2x + Δx,

    lim_{Δz→0} (g(z_0 + Δz) − g(z_0))/Δz = lim_{Δx→0} (2x + Δx) = 2x.

Now approach along the imaginary direction, Δz = iΔy:

    (g(z_0 + Δz) − g(z_0))/Δz = (2iyΔy + iΔy^2)/(iΔy) = 2y + Δy,

    lim_{Δz→0} (g(z_0 + Δz) − g(z_0))/Δz = lim_{Δy→0} (2y + Δy) = 2y.
The limits are different in general, so g is not analytic in C. Note that even though points on the line y = x give the same limit, they do not constitute a domain, i.e. an open set, so g is not analytic there either.
Exercise 2.3. Show by definition that f(z) = x^2 − y^2 + 2ixy is analytic on C but g(z) = x^2 + y^2 + 2ixy is not.
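Exercise 2.3 can be spot-checked numerically before proving it by hand. The minimal sketch below (the helper `partials` and the step `h` are our own, not from the notes) approximates the partial derivatives of u = Re f, v = Im f by central differences and tests the Cauchy-Riemann relations u_x = v_y, u_y = −v_x:

```python
# f(z) = x^2 - y^2 + 2ixy  (this is z^2, analytic);
# g(z) = x^2 + y^2 + 2ixy  (not analytic).
def partials(w, x, y, h=1e-6):
    """Central-difference partials (u_x, u_y, v_x, v_y) of w = u + iv."""
    ux = (w(x + h, y).real - w(x - h, y).real) / (2 * h)
    uy = (w(x, y + h).real - w(x, y - h).real) / (2 * h)
    vx = (w(x + h, y).imag - w(x - h, y).imag) / (2 * h)
    vy = (w(x, y + h).imag - w(x, y - h).imag) / (2 * h)
    return ux, uy, vx, vy

f = lambda x, y: complex(x * x - y * y, 2 * x * y)
g = lambda x, y: complex(x * x + y * y, 2 * x * y)

ux, uy, vx, vy = partials(f, 1.3, -0.7)
print(ux - vy, uy + vx)        # both ~ 0: f satisfies Cauchy-Riemann

ux2, uy2, vx2, vy2 = partials(g, 1.3, -0.7)
print(ux2 - vy2, uy2 + vx2)    # the second combination is far from 0
```

For g the first relation happens to hold (u_x = 2x = v_y), but u_y + v_x = 4y ≠ 0 off the line y = 0, so g fails the test.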
We are going to accept the following math language: we write f ∈ H(D) if f is analytic in D, i.e.

    f ∈ H(D)  ⟺  f is analytic in D.

As Exercise 2.3 shows, not every complex function f of two variables is analytic. Moreover, if one picks such a function arbitrarily, it will most likely not be analytic. There must be some relations between Re f and Im f.
2. Cauchy-Riemann Conditions

Theorem 2.4. Let f = u + iv be differentiable at z = z_0. Then the partial derivatives u_x, u_y, v_x, v_y exist at z_0 = x_0 + iy_0 and are related by

    u_x(x_0, y_0) = v_y(x_0, y_0),
    u_y(x_0, y_0) = −v_x(x_0, y_0).
Proof. Since f = u + iv is differentiable, the limit (2.1) exists and is independent of the way Δz → 0. Let us consider two paths: z = z_0 + Δx and z = z_0 + iΔy. In the first case we have

(2.2)    f′(z_0) = lim_{Δz→0} (f(z_0 + Δz) − f(z_0))/Δz
              = lim_{Δx→0} [u(x_0 + Δx, y_0) + iv(x_0 + Δx, y_0) − u(x_0, y_0) − iv(x_0, y_0)]/Δx
              = lim_{Δx→0} (u(x_0 + Δx, y_0) − u(x_0, y_0))/Δx + i lim_{Δx→0} (v(x_0 + Δx, y_0) − v(x_0, y_0))/Δx
              = u_x(x_0, y_0) + i v_x(x_0, y_0).

In particular, u_x and v_x exist at z_0 = x_0 + iy_0.
Comparing the two paths yields

(2.3)    u_x = v_y,  u_y = −v_x  at z_0 = x_0 + iy_0,

and if f is differentiable in a domain, then throughout that domain

(2.4)    u_x = v_y,  u_y = −v_x.

By Definition 2.2,

(2.5)    z^n ∈ H(C).
So we find that e^z is a true exponential, i.e. the only function (up to a multiplicative constant) whose derivative coincides with itself.
Exercise 2.10. Show that

2. ∫_γ λ f(z) dz = λ ∫_γ f(z) dz, where λ is a constant;
3. ∫_{γ_1 ∪ γ_2} f(z) dz = ∫_{γ_1} f(z) dz + ∫_{γ_2} f(z) dz.

Hint: you are free to use the corresponding properties of line integrals of real functions (Calc III).
Definition 2.13. Let γ be a path (curve) starting at a point z = z_0 and ending at z = z_1. We define (−γ) as the same curve but starting at z_1 and ending at z_0.

Exercise 2.14. Show that

    ∫_{(−γ)} f(z) dz = −∫_γ f(z) dz.
Recall |dz| = √(dx^2 + dy^2), i.e. the arc length dS from Calc III. The triangle inequality for complex numbers, |z_1 + z_2| ≤ |z_1| + |z_2|, is at the origin of the triangle inequality for integrals (Theorem 2.16, proven below):

    |∫_γ f(z) dz| ≤ ∫_γ |f(z)| |dz|.
Proof. It immediately follows from Definition 2.11 that ∫_γ f(z) dz can also be defined via Riemann partial sums. Namely, partition γ by points A = z_0, z_1, z_2, ..., z_{k−1}, z_k, z_{k+1}, ..., z_n = B and set Δz_k = z_k − z_{k−1}. Then

    ∫_γ f(z) dz = lim_{max_k |Δz_k| → 0} Σ_{k=1}^n f(z_k) Δz_k,

and hence

    |∫_γ f(z) dz| = lim_{max_k |Δz_k| → 0} |Σ_{k=1}^n f(z_k) Δz_k|
                  ≤ lim_{max_k |Δz_k| → 0} Σ_{k=1}^n |f(z_k)| |Δz_k|     (triangle inequality)
                  = ∫_γ |f(z)| |dz|.
Definition 2.17. Let C be a closed curve (contour); then we write

    ∮_C f(z) dz = ∫_C f(z) dz.

Theorem 2.20 (The Cauchy Theorem). If f ∈ H(D) and D is simply connected, then

    ∮_C f(z) dz = 0  for every contour C ⊂ D.

Exercise 2.21. Prove Theorem 2.20. Hint: use Definition 2.11, Green's formula, and the Cauchy-Riemann conditions.
Remark 2.22. The converse of the Cauchy theorem (known as Morera's theorem) is also valid. If time permits we will prove it later.
LECTURE 3

Recall the Cauchy theorem:

    ∮_C f(z) dz = 0  for every contour C ⊂ D.

The Cauchy theorem fails if Int C contains at least one point at which our function is not analytic. The following example shows it.

Example 3.1. Consider f(z) = 1/(z − z_0). This function is analytic for every z ∈ C \ {z_0}. Let C = {z : |z − z_0| = ε}. Consider the integral

(3.1)    ∮_C f(z) dz = ∮_{|z−z_0|=ε} dz/(z − z_0).
Let us compute this integral explicitly using a typical complex-variable technique. First of all, C = {z : |z − z_0| = ε} is a circle in C of radius ε, centered at z_0. This means that z − z_0 = εe^{iθ} where 0 ≤ θ < 2π, and by setting ζ = z − z_0 in (3.1) we have

    ∮_{|z−z_0|=ε} dz/(z − z_0) = ∮_{|ζ|=ε} dζ/ζ = ∫_0^{2π} d(εe^{iθ})/(εe^{iθ}) = ∫_0^{2π} iεe^{iθ} dθ/(εe^{iθ}) = i ∫_0^{2π} dθ = 2πi.

Not zero! So

(3.2)    ∮_{|z−z_0|=ε} dz/(z − z_0) = 2πi.
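The key identity (3.2) is easy to confirm numerically. The sketch below (the helper `circle_integral` and the discretization are ours) sums f(z)·Δz over a finely chopped circle; the result is 2πi regardless of z_0 and ε:

```python
import cmath

# Midpoint discretization of the contour |z - z0| = eps:
# sum f(z_mid) * dz over many small arcs.
def circle_integral(f, z0, eps, n=20000):
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z_mid = z0 + eps * cmath.exp(1j * (t0 + t1) / 2)
        dz = eps * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += f(z_mid) * dz
    return total

z0 = 1.0 + 2.0j
val = circle_integral(lambda z: 1 / (z - z0), z0, eps=0.5)
print(val)  # ~ 2*pi*i, i.e. about 6.2832j
```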
2. Cauchy Integral

Let us now consider the function f(z)/(z − z_0), where f(z) ∈ H(D) and z_0 ∈ D, and form

    (1/2πi) ∮_C f(z)/(z − z_0) dz,  where C is any contour with z_0 ∈ Int C.

This integral is called the Cauchy integral of f(z) along a contour C and is one of the most important objects in mathematics and theoretical physics.
Theorem 3.2 (Cauchy Formula). If f(z) ∈ H(D) and D is a simply connected domain, then

(3.3)    (1/2πi) ∮_C f(z)/(z − z_0) dz = f(z_0)

for any contour C ⊂ D and z_0 ∈ Int C.

This theorem is fundamental and its proof is very typical in complex analysis. Before we present it we need to discuss one lemma.
Lemma 3.3. If g(z) ∈ H(D), where D is the domain between two contours C, C′, then

    ∮_C g(z) dz = ∮_{C′} g(z) dz.

Proof. Connect C and C′ by two cuts γ_1, γ_2. By assumption g(z) ∈ H(D), and then by the Cauchy Theorem the integral over the closed curve made of C, the cuts, and −C′ vanishes:

(3.4)    ∮ g(z) dz = 0.

The integrals along the two cuts cancel each other, and hence

(3.5)    ∮_C g(z) dz − ∮_{C′} g(z) dz = 0.
Proof. Note that f(z)/(z − z_0) ∈ H(D \ {z_0}). Apply Lemma 3.3 to f(z)/(z − z_0) with C = C, C′ = {z : |z − z_0| = ε}. We have

    (1/2πi) ∮_C f(z)/(z − z_0) dz = (1/2πi) ∮_{|z−z_0|=ε} f(z)/(z − z_0) dz
        = (1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz + f(z_0) · (1/2πi) ∮_{|z−z_0|=ε} dz/(z − z_0),

where the last factor equals 1 by (3.2). So

(3.6)    (1/2πi) ∮_C f(z)/(z − z_0) dz = f(z_0) + (1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz.
Let us now evaluate the last integral in (3.6). By Theorem 2.16, we have

    |(1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz| ≤ (1/2π) ∮_{|z−z_0|=ε} |f(z) − f(z_0)|/|z − z_0| |dz|
        ≤ max_{|z−z_0|=ε} |f(z) − f(z_0)| · (1/2π) ∮_{|z−z_0|=ε} |dz|/|z − z_0|,

and the last factor equals 1, since (1/2π) ∮_{|z−z_0|=ε} |dz|/|z − z_0| = (1/2π)(1/ε) · 2πε = 1.
So we got

(3.7)    |(1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz| ≤ max_{|z−z_0|=ε} |f(z) − f(z_0)|.
But ε is arbitrary and we can make it as small as we want. Since f(z) is continuous, |f(z) − f(z_0)| is small if ε is small, and it follows now from (3.7) that

(3.8)    lim_{ε→0} (1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz = 0.

Hence

    (1/2πi) ∮_C f(z)/(z − z_0) dz = lim_{ε→0} f(z_0) · (1/2πi) ∮_{|z−z_0|=ε} dz/(z − z_0) + lim_{ε→0} (1/2πi) ∮_{|z−z_0|=ε} (f(z) − f(z_0))/(z − z_0) dz = f(z_0),

since the left-hand side is independent of ε, the first limit equals f(z_0), and the second limit is 0 by (3.8).
Theorem 3.4. Under the conditions of Theorem 3.2,

    f^{(n)}(z_0) = (n!/2πi) ∮_C f(z)/(z − z_0)^{n+1} dz.

Corollary 3.5 (The Liouville Theorem). If f(z) ∈ H(C) and bounded², then

    f(z) = const,  ∀z ∈ C.

²A function f is bounded on D if there exists M > 0 such that |f(z)| ≤ M for all z ∈ D.
Proof. By Theorem 3.4 with n = 1,

(3.9)    f′(z_0) = (1/2πi) ∮_{|z−z_0|=R} f(z)/(z − z_0)^2 dz,  z_0 ∈ C.

(3.9) implies

    |f′(z_0)| = |(1/2πi) ∮_{|z−z_0|=R} f(z)/(z − z_0)^2 dz| ≤ (1/2π) ∮_{|z−z_0|=R} |f(z)|/|z − z_0|^2 |dz|   (by Theorem 2.16)
        ≤ (M/2π) ∮_{|z−z_0|=R} |dz|/|z − z_0|^2 = (M/2π)(1/R^2) · 2πR = M/R.

R is an arbitrary number. Let R → ∞. We have

    lim_{R→∞} |f′(z_0)| = 0  ⟹  f′(z_0) = 0 for every z_0  ⟹  f(z_0) = const.

Done.
Theorem 3.6. If

    ∮_C f(z) dz = 0  for every contour C ⊂ D,

then

    F(z) ≡ ∫_{z_0}^{z} f(ζ) dζ ∈ H(D)

and

    F′(z) = f(z).
Proof. Consider

(3.10)    (F(z + Δz) − F(z))/Δz = (1/Δz) [∫_{z_0}^{z+Δz} f(ζ) dζ − ∫_{z_0}^{z} f(ζ) dζ] = (1/Δz) ∫_{z}^{z+Δz} f(ζ) dζ.
Note that all the integrals above are independent of the specific curves and are defined only by their endpoints. It follows from (3.10) that

    |(F(z + Δz) − F(z))/Δz − f(z)| = (1/|Δz|) |∫_z^{z+Δz} f(ζ) dζ − f(z) Δz|
        = (1/|Δz|) |∫_z^{z+Δz} (f(ζ) − f(z)) dζ|       (using ∫_z^{z+Δz} dζ = Δz)
        ≤ (1/|Δz|) ∫_z^{z+Δz} |f(ζ) − f(z)| |dζ|       (Theorem 2.16)
        ≤ (1/|Δz|) max_{ζ∈[z,z+Δz]} |f(ζ) − f(z)| · ∫_z^{z+Δz} |dζ|
(3.11)  = max_{ζ∈[z,z+Δz]} |f(ζ) − f(z)|,

since ∫_z^{z+Δz} |dζ| = |Δz| along the straight segment [z, z + Δz]. Since f is continuous,

    lim_{Δz→0} max_{ζ∈[z,z+Δz]} |f(ζ) − f(z)| = 0,

and hence

    lim_{Δz→0} (F(z + Δz) − F(z))/Δz = f(z).
(Morera's theorem.) If

    ∮_C f(z) dz = 0  for every contour C ⊂ D,

then f ∈ H(D).

Proof. By Theorem 3.6,

    F(z) = ∫_{z_0}^{z} f(ζ) dζ ∈ H(D),  z_0 ∈ D.
LECTURE 4
Complex Series
Series (real or complex) are a main ingredient of not only mathematics but also physics: they are one of the very few ways to get to a numerical answer. So, from now on we are going to deal with series on a regular basis.
1. Numerical Series

Definition 4.1. A sequence of complex numbers {z_k}_{k=1}^∞ = {z_1, z_2, ..., z_n, ...} is said to converge to some z if for any small ε > 0 there is an N such that

    |z − z_n| < ε  for all n ≥ N.

We write lim_{n→∞} z_n = z, or z_n → z, n → ∞. This means that starting from some n, all numbers {z_n, z_{n+1}, ...} get into a given ε-disk centered at z.
Definition 4.2. A formal sum Σ_{n=1}^∞ z_n = Σ_{n≥1} z_n of numbers {z_n}_{n=1}^∞ is called a series.

Definition 4.3. A series Σ_{n≥1} z_n is called convergent if there is an S such that

    S_n → S,  n → ∞,

where S_n = z_1 + z_2 + ... + z_n = Σ_{k=1}^n z_k is a partial sum of the series. In this case we write Σ_{n≥1} z_n = S.
Proof. 1) ⟹. Let S = Σ_{n≥1} z_n. For S − S_n we have

(4.1)    S − S_n = Σ_{n≥1} z_n − Σ_{k=1}^n z_k = Σ_{k=n+1}^∞ z_k,

and hence

    |S − S_n| = |Σ_{k=n+1}^∞ z_k| → 0.

A series Σ_{n≥1} z_n is called absolutely convergent if Σ_{n≥1} |z_n| is convergent.
Actually there is no difference between real and complex series.

2. Functional Series

In applications it is a typical situation that every term of a series is a function of some variable. This variable can easily be complex.

Definition 4.7. A formal sum Σ f_n(z) is called a functional series.

In the future we're going to treat functional series with pretty much general {f_n(z)}. But at this point we concentrate on power series.
3. Power Series

Definition 4.8. A power series is

(4.2)    Σ_{n=−∞}^{∞} a_n (z − z_0)^n = Σ_{n≤−1} a_n (z − z_0)^n + Σ_{n≥0} a_n (z − z_0)^n,

where {a_n}_{n=−∞}^{∞} = {..., a_{−n}, ..., a_{−1}, a_0, a_1, ..., a_n, ...} is a sequence of complex numbers and z_0 ∈ C.

Note that in Calc II we only consider power series with non-negative powers.
Definition 4.9. The domain of convergence D of a functional series is

    D = {z ∈ C : Σ f_n(z) is convergent}.

Theorem 4.10. The domain of convergence of a power series is always an annulus centered at z_0.

Proof. Try to prove it yourself.
Consider the power series

(4.3)    Σ_{n≥0} z^n.

Its domain of convergence is D = {z : |z| < 1}, which is a disk of radius 1. Let us compute (4.3):

    S_n(z) = 1 + z + z^2 + ... + z^n = (1 − z^{n+1})/(1 − z) → 1/(1 − z),

and hence

(4.4)    Σ_{n≥0} z^n = 1/(1 − z),  |z| < 1.
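The geometric series (4.4) works just as well for a complex argument, which is easy to watch numerically (the value of z below is our own pick with |z| < 1):

```python
# Partial sums S_n(z) = 1 + z + ... + z^n approach 1/(1 - z) for |z| < 1.
z = 0.4 + 0.3j          # |z| = 0.5 < 1
S, term = 0j, 1 + 0j
for n in range(200):
    S += term
    term *= z
print(S, 1 / (1 - z))   # the two agree to machine precision
```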
Theorem (Taylor). If f ∈ H(D) and z_0 ∈ D, then in a disk around z_0

(4.5)    f(z) = Σ_{n≥0} a_n (z − z_0)^n,

where

    a_n = (1/n!) (d^n f/dz^n)|_{z=z_0}.

Proof. By the Cauchy formula,

(4.6)    f(z) = (1/2πi) ∮_C f(ζ)/(ζ − z) dζ.

Consider 1/(ζ − z). We have

(4.7)    1/(ζ − z) = 1/((ζ − z_0) − (z − z_0)) = (1/(ζ − z_0)) · 1/(1 − (z − z_0)/(ζ − z_0)).
Since, by construction, |(z − z_0)/(ζ − z_0)| < 1, using formula (4.4) we get

    1/(1 − (z − z_0)/(ζ − z_0)) = Σ_{n≥0} ((z − z_0)/(ζ − z_0))^n,

and (4.7) can be continued:

    1/(ζ − z) = (1/(ζ − z_0)) Σ_{n≥0} ((z − z_0)/(ζ − z_0))^n.

Plug now this expression into (4.6) and we get

(4.8)    f(z) = (1/2πi) ∮_C (f(ζ)/(ζ − z_0)) Σ_{n≥0} (z − z_0)^n/(ζ − z_0)^n dζ
             = (1/2πi) ∮_C Σ_{n≥0} f(ζ)/(ζ − z_0)^{n+1} (z − z_0)^n dζ.
Now switch the order of integration and summation. It is a very subtle point and not that easy to prove. But in this case it's true! So (4.8) can be continued:

    f(z) = Σ_{n≥0} [(1/2πi) ∮_C f(ζ)/(ζ − z_0)^{n+1} dζ] (z − z_0)^n,

where the bracket equals (1/n!) f^{(n)}(z_0) by Theorem 3.4. So

    a_n = (1/n!) f^{(n)}(z_0).

Definition 4.13. Series (4.5) is called the Taylor series of f(z).   QED
[Figure: a path-connected domain D with contours C_0, C_1, C_2 — such a domain need not be simply connected.]

Theorem 4.16 (The Cauchy Formula). If f ∈ H(D), where D is a path-connected domain, then

    (1/2πi) ∮_C f(z)/(z − z_0) dz = f(z_0)

for any C ⊂ D and z_0 ∈ Int C.

Exercise 4.17. Prove Theorem 4.16. Hint: use the figure above, some arguments of Lemma 3.3 and Theorem 3.2.
Theorem (Laurent expansion). If f ∈ H(D), where D is an annulus R_1 < |z − z_0| < R_2, then

(4.9)    f(z) = Σ_{n=−∞}^{∞} a_n (z − z_0)^n,  z ∈ D,

where

    a_n = (1/2πi) ∮ f(ζ)/(ζ − z_0)^{n+1} dζ.

Recall the basic expansions e^z = Σ_{n≥0} z^n/n! and 1/(1 − z) = Σ_{n≥0} z^n.
Example. Let us expand f(z) = 1/(z(z − 2)) in powers of (z − 1) in |z − 1| < 1. By partial fractions,

    1/(z(z − 2)) = (1/2)[1/(z − 2) − 1/z] = −(1/2)[1/(1 − (z − 1)) + 1/(1 + (z − 1))],

and by (4.4),

(4.10)    1/(z(z − 2)) = −(1/2)[Σ_{n≥0} (z − 1)^n + Σ_{n≥0} (−1)^n (z − 1)^n] = −Σ_{n≥0} (z − 1)^{2n}.
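The expansion (4.10) can be spot-checked numerically at a point inside the disk of convergence (the point z below is our own pick with |z − 1| < 1):

```python
# Check 1/(z(z-2)) = -sum_{n>=0} (z-1)^{2n} for |z - 1| < 1.
z = 1.0 + 0.6j                       # |z - 1| = 0.6 < 1
w = z - 1
approx = -sum(w ** (2 * n) for n in range(100))
exact = 1 / (z * (z - 2))
print(abs(approx - exact))           # ~ 0
```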
LECTURE 5

Let z_0 be an isolated singularity of f and expand f in a Laurent series in 0 < |z − z_0| < R:

(5.1)    f(z) = Σ_{n≤−1} a_n (z − z_0)^n + Σ_{n≥0} a_n (z − z_0)^n = f_1(z) + f_2(z).

f_1(z) is called the essential or improper part, and f_2(z) is called the correct or proper part. We can classify the types of singularities.

Definition 5.2.
1) If f_1(z) = 0 then z_0 is called a removable singularity.
2) If f_1(z) = Σ_{n=1}^{N} a_{−n} (z − z_0)^{−n} (i.e. f_1(z) has only a finite number of terms) then z_0 is called a pole of order N.
3) Otherwise z_0 is called an essential singularity.
Example 5.4.
1) f(z) = (sin z)/z has a removable singularity at z_0 = 0. Indeed we have that

    sin z = Σ_{n≥0} ((−1)^n/(2n+1)!) z^{2n+1},  and hence  (sin z)/z = Σ_{n≥0} ((−1)^n/(2n+1)!) z^{2n},

with no negative powers of z.
2) f(z) = (sin z)/z^3 has a pole of order 2 at z_0 = 0. Indeed

    (sin z)/z^3 = Σ_{n≥0} ((−1)^n/(2n+1)!) z^{2n−2} = z^{−2} + Σ_{n≥0} ((−1)^{n+1}/(2n+3)!) z^{2n}.
3) f(z) = e^{1/z} has an essential singularity at z_0 = 0. Indeed

    e^{1/z} = Σ_{n≥0} (1/n!) z^{−n},

i.e. e^{1/z} has an infinite number of terms of negative powers in its Laurent series.
Exercise 5.5. Use Laurent series expansion to show that 0 is
1) a removable singularity for f(z) = (e^z − 1)/z;
2) a double pole for f(z) = (e^z − 1)/z^3;
3) an essential singularity for f(z) = e^{1/z} − 1.
Definition 5.6. A function f(z) is said to be analytic at infinity if the function

    g(z) = f(1/z)

is analytic at z = 0.

It is clear that all the definitions and theorems apply also for z = ∞. Restate all of them as an exercise for z = ∞.
2. Residues

Definition 5.7. Let z_0 be a singular point and C be any contour enclosing z_0. Let f ∈ H(Int C \ {z_0}); then

    (1/2πi) ∮_C f(z) dz ≡ Res{f(z), z_0}

is called the residue of f(z) at z = z_0. Other notations include Res_{z=z_0} f, Res{f, z_0}.
Theorem 5.9. Let z_0 be a pole of order N of f(z), i.e. in a punctured neighborhood of z_0

(5.4)    f(z) = g(z)/(z − z_0)^N,

where g(z) is analytic at z_0 and g(z_0) ≠ 0. By Lemma 3.3 the contour C may be deformed to a small circle C_ε(z_0):

(5.3)    (1/2πi) ∮_C f(z) dz = (1/2πi) ∮_{C_ε(z_0)} f(z) dz,

so that

    Res{f(z), z_0} = (1/2πi) ∮_{C_ε(z_0)} (z − z_0)^{−N} g(z) dz.

Recall the Cauchy formula (Theorem 3.4):

    g^{(n)}(z_0) = (n!/2πi) ∮ g(z)/(z − z_0)^{n+1} dz.

With n = N − 1,

    g^{(N−1)}(z_0) = ((N−1)!/2πi) ∮ g(z)/(z − z_0)^N dz.

Hence

    Res{f(z), z_0} = g^{(N−1)}(z_0)/(N−1)! = (1/(N−1)!) (d^{N−1}/dz^{N−1}) g(z)|_{z=z_0}
                   = (1/(N−1)!) (d^{N−1}/dz^{N−1}) [(z − z_0)^N f(z)]|_{z=z_0}    (by (5.4)).

QED
Remark 5.10. The formula above does not apply to essential singularities.
Corollary 5.11. If z_0 is a simple pole and f can be represented in some neighborhood of z_0 as

    f(z) = φ(z)/ψ(z)

with some analytic functions φ, ψ such that φ(z_0) ≠ 0, ψ(z_0) = 0, then

    Res{f, z_0} = φ(z_0)/ψ′(z_0).

The proof is left as an exercise.
Example 5.12. To evaluate

    ∮_{C_1(i)} dz/(z^2 + 1),

consider f(z) = 1/(z^2 + 1) and note that ±i are isolated singularities. Furthermore, only i is inside our contour and we can write

    f(z) = 1/((z − i)(z + i)) = g(z)/(z − i),

where g(z) = 1/(z + i), and thus g(i) = 1/(2i) is finite and nonzero. So if we expand g(z) in powers of z − i, the first term is nonnegative. So there is only one negative term for f around z = i, i.e. i is a pole of order 1. Hence

    ∮_{C_1(i)} dz/(z^2 + 1) = 2πi Res{1/(z^2 + 1), i} = 2πi (z − i) · 1/(z^2 + 1)|_{z=i} = 2πi · 1/(z + i)|_{z=i} = 2πi/(2i) = π.
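The same contour integral can be computed by brute force on a discretized circle (the discretization below is our own helper, not part of the notes); the sum matches 2πi · Res = π:

```python
import cmath

# Midpoint discretization of the contour |z - i| = 1:
# the result should be pi, as the residue computation above predicts.
n = 20000
total = 0j
for k in range(n):
    t0, t1 = 2 * cmath.pi * k / n, 2 * cmath.pi * (k + 1) / n
    z = 1j + cmath.exp(1j * (t0 + t1) / 2)
    dz = cmath.exp(1j * t1) - cmath.exp(1j * t0)
    total += dz / (z * z + 1)
print(total)   # ~ 3.14159... (+ ~0j)
```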
Similarly, since the Laurent coefficient a_{−1} of e^{1/z} at 0 equals 1,

    ∮_{C_1(0)} e^{1/z} dz = 2πi · 1 = 2πi.
Theorem 5.14 (The Residue Theorem). Let f(z) be analytic inside a contour C except for a finite number of poles {z_k}_{k=1}^n = {z_1, z_2, ..., z_n}. Then

(5.5)    ∮_C f(z) dz = 2πi Σ_{k=1}^n Res{f(z), z_k}.

Proof. It's enough to prove it for n = 2. By Lemma 3.3 we can deform C into

    C′ = {z : |z − z_1| = ε} ∪ {z : |z − z_2| = ε},  where ε < |z_1 − z_2|/2  (ε small),

so that

    ∮_C f(z) dz = ∮_{|z−z_1|=ε} f(z) dz + ∮_{|z−z_2|=ε} f(z) dz.   QED
Exercise 5.15. Evaluate the following integrals:
a) ∮_{|z−2i|=1} dz/(z^2 + 4);
b) ∮_{|z|=2} (cosh z)/(z(z^2 + 1)) dz;
c) ∮_{|z|=2} z e^{1/z} dz;
d) ∮_C (z − 1)/(z^2 + iz + 2) dz, where C = {(x, y) : x^4 + y^4 = 4}.
LECTURE 6

1. Integrals of the type ∫_0^{2π} R(cos θ, sin θ) dθ

On the unit circle, z = e^{iθ} = cos θ + i sin θ and z̄ = e^{−iθ} = cos θ − i sin θ, so

    cos θ = (1/2)(z + 1/z),   sin θ = (1/2i)(z − 1/z).

Furthermore, z̄ = 1/z since 1 = |z|^2 = z z̄, and dz = ie^{iθ} dθ = iz dθ, so dθ = dz/(iz). Making these substitutions we get

    I := ∫_0^{2π} R(cos θ, sin θ) dθ = ∮_{|z|=1} R̃(z) dz,

where R̃(z) is a new rational function of z:

    R̃(z) = P_n(z)/Q_m(z),  P_n, Q_m polynomials of order n, m respectively.

The function R̃(z) is analytic inside {z : |z| = 1} except for a finite number of poles {z_1, ..., z_N}, N ≤ m. By the Residue Theorem

    I = 2πi Σ_{k=1}^N Res{R̃(z), z_k}.

Example. Compute

    I = ∫_0^{2π} dθ/(1 + a cos θ),   |a| < 1.
Put z = e^{iθ}. We have

    I = ∮_{|z|=1} (1/(iz)) · dz/(1 + (a/2)(z + 1/z)) = (2/i) ∮_{|z|=1} dz/(az^2 + 2z + a).

The poles of R̃(z) = 1/(az^2 + 2z + a) are the solutions to az^2 + 2z + a = 0:

    z_{1,2} = −1/a ± √(1/a^2 − 1).

Note that we can rewrite z_1 in various forms:

    z_1 = −1/a + √(1/a^2 − 1) = (−1 + √(1 − a^2))/a = −a/(√(1 − a^2) + 1).

From this last one, recall |a| < 1, so √(1 − a^2) + 1 > 1 and |z_1| < 1. But z_1 z_2 = 1 ⟹ |z_2| = 1/|z_1| > 1 ⟹ only z_1 is inside of the contour |z| = 1. By the Residue Theorem

(6.1)    I = (2/i) · 2πi Res{1/(az^2 + 2z + a), z_1}.

By Theorem 5.9,

    Res{1/(az^2 + 2z + a), z_1} = lim_{z→z_1} (z − z_1)/(az^2 + 2z + a) = lim_{z→z_1} (z − z_1)/(a(z − z_1)(z − z_2))
        = 1/(a(z_1 − z_2)) = 1/(2a√(1/a^2 − 1)) = 1/(2√(1 − a^2)).

So

    I = (2/i) · 2πi · 1/(2√(1 − a^2)) = 2π/√(1 − a^2).
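A Riemann-sum evaluation of the same trigonometric integral confirms the closed form (the choice a = 0.5 and the discretization are ours):

```python
import math

# Midpoint-rule check of ∫_0^{2π} dθ/(1 + a cos θ) = 2π/√(1 - a²), |a| < 1.
a = 0.5
n = 200000
I = 0.0
for k in range(n):
    theta = (k + 0.5) * 2 * math.pi / n
    I += (2 * math.pi / n) / (1 + a * math.cos(theta))
print(I, 2 * math.pi / math.sqrt(1 - a * a))   # both ~ 7.2552
```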
2. Integrals of the type ∫_{−∞}^{∞} f(x) dx

Lemma 6.2. Let f(z) be analytic in C^+ = {z : Im z > 0} except for a finite number of poles and

    |f(z)| ≤ M/(1 + |z|^{1+ε})  for some ε > 0.

Then

    lim_{R→∞} ∫_{C_R^+} f(z) dz = 0,

where C_R^+ = {z : |z| = R, Im z ≥ 0} is the semicircular arc of radius R in the upper half plane.

Proof.

    |∫_{C_R^+} f(z) dz| ≤ ∫_{C_R^+} |f(z)| |dz| ≤ M ∫_{C_R^+} |dz|/(1 + |z|^{1+ε})
        ≤ (M/R^{1+ε}) ∫_0^π R dθ = πM/R^ε → 0,  R → ∞.   QED
Theorem 6.3. Let f(z) satisfy the conditions of Lemma 6.2 and let f(z) have no poles on R. If f(z)|_{z∈R} = f(x), then

(6.2)    ∫_{−∞}^{∞} f(x) dx = 2πi Σ_{k=1}^N Res{f(z), z_k},

where {z_k}_{k=1}^N are the poles of f in C^+.

Proof. By the Residue Theorem,

(6.3)    ∮_{C_R} f(z) dz = 2πi Σ_{k=1}^N Res{f(z), z_k},

where C_R = C_R^+ ∪ (−R, R) and R is large enough that z_1, z_2, ..., z_N are all inside of C_R. But

    ∮_{C_R} f(z) dz = ∫_{−R}^{R} f(x) dx + ∫_{C_R^+} f(z) dz,

so

    ∫_{−R}^{R} f(x) dx = 2πi Σ_{k=1}^N Res{f(z), z_k} − ∫_{C_R^+} f(z) dz,

and letting R → ∞, the last term vanishes by Lemma 6.2; the theorem is proven.
Example 6.4. Prove that

    ∫_{−∞}^{∞} dx/(x^4 + 1) = π√2/2.
Consider f(z) = 1/(z^4 + 1). Note that |f(z)| ≤ M/|z|^4, and the poles solve z^4 + 1 = 0, i.e.

    z_k = e^{iπ(2k−1)/4},  k = 1, ..., 4.

Only z_1, z_2 are in C^+, and since they're simple zeros of z^4 + 1, they're simple poles of f(z). Hence by Theorem 6.3 and Corollary 5.11,

    ∫_{−∞}^{∞} dx/(x^4 + 1) = 2πi (1/(4z_1^3) + 1/(4z_2^3)) = (πi/2)(1/z_1^3 + 1/z_2^3).

Since z_k^4 = −1 we have 1/z_k^3 = −z_k, so

    ∫_{−∞}^{∞} dx/(x^4 + 1) = −(πi/2)(z_1 + z_2) = −(πi/2) · i√2 = π√2/2,

because z_1 = e^{iπ/4} = (1 + i)/√2 and z_2 = e^{3iπ/4} = (−1 + i)/√2, so z_1 + z_2 = i√2.
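Example 6.4 can be sanity-checked by direct numerical integration; the tail of the integrand beyond |x| = L contributes roughly 2/(3L³), so a moderate truncation suffices (cutoff L and step count n are our own parameters):

```python
import math

# Truncated midpoint-rule check of ∫_{-∞}^{∞} dx/(x^4 + 1) = π√2/2.
L, n = 50.0, 400000
h = 2 * L / n
I = 0.0
for k in range(n):
    x = -L + (k + 0.5) * h
    I += h / (x ** 4 + 1)
print(I, math.pi * math.sqrt(2) / 2)   # both ~ 2.2214
```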
3. Integrals of the type ∫_{−∞}^{∞} e^{iωx} f(x) dx

You can assume ω > 0 and f : R → R. These integrals are important in harmonic analysis (signal processing), where ω is the frequency and x is a spatial or temporal variable.

Lemma 6.5 (Jordan's Lemma). Let f(z) be analytic in C^+ except for a finite number of poles and

    lim_{z→∞} f(z) = 0,  Im z ≥ 0.

Then

    lim_{R→∞} ∫_{C_R^+} e^{iωz} f(z) dz = 0,  if ω > 0.

We offer this lemma without proof. Note that C_R^+ is, as before, the arc of radius R in the upper half plane. Note also that z → ∞ is equivalent to |z| → ∞, and this can happen in many ways in the upper half plane.
Theorem 6.6. Let f(z) be subject to the conditions of the Jordan Lemma and have no poles on R. Then if ω > 0,

    ∫_{−∞}^{∞} e^{iωx} f(x) dx = 2πi Σ_{k=1}^N Res{e^{iωz} f(z), z_k}.

The proof can be done in the very same manner as Theorem 6.3. Do it!
Example 6.7. Compute

    ∫_{−∞}^{∞} (cos ωx)/(x^2 + a^2) dx,   ω > 0,  a > 0.

We have

    ∫_{−∞}^{∞} (cos ωx)/(x^2 + a^2) dx = Re ∫_{−∞}^{∞} e^{iωx}/(x^2 + a^2) dx =: Re(I).

But f(z) = 1/(z^2 + a^2) is analytic in C^+ \ {ia}, where z_1 = ia is a simple pole, and lim_{z→∞} f(z) = 0. So by Theorem 6.6 and Corollary 5.11,

    I = 2πi Res{e^{iωz} f(z), ia} = 2πi e^{−ωa} · 1/(2ia) = (π/a) e^{−ωa}.

So

    ∫_{−∞}^{∞} (cos ωx)/(x^2 + a^2) dx = Re(I) = (π/a) e^{−ωa}.
Observe the following:
• when a → ∞, the value decays, which makes sense;
• when ω → ∞, the integral converges to zero also, but this is not intuitive; it is caused by lots of cancellations of high frequencies;
• when ω → 0, we verify that ∫_{−∞}^{∞} dx/(x^2 + a^2) = (1/a) arctan(x/a)|_{−∞}^{∞} = π/a. ✓
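The closed form of Example 6.7 can be checked numerically; the oscillatory tail beyond the cutoff is O(1/L²), so a modest truncation works (ω = a = 1, cutoff L, and step count n are our own choices):

```python
import math

# Truncated midpoint-rule check of
# ∫_{-∞}^{∞} cos(ωx)/(x² + a²) dx = (π/a) e^{-ωa}  for ω = a = 1.
w, a = 1.0, 1.0
L, n = 400.0, 400000
h = 2 * L / n
I = 0.0
for k in range(n):
    x = -L + (k + 0.5) * h
    I += math.cos(w * x) / (x * x + a * a) * h
print(I, math.pi / a * math.exp(-w * a))   # both ~ 1.1557
```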
Exercises. Evaluate the following trigonometric integrals:
b) ∫_0^{2π} (cos 3θ)/(5 − 4 cos θ) dθ;
c) ∫_0^{2π} dθ/(1 + sin^2 θ);
d) ∫_0^{2π} dθ/(a + b sin θ)^2;
e) ∫_0^{2π} cos^n θ dθ;

and the following integrals over the line:
b) ∫_0^∞ x^2/(x^6 + 1) dx;
c) ∫_{−∞}^{∞} dx/(x^2 + 1)^3;
d) ∫_{−∞}^{∞} (cos kx)/((x − a)^2 + b^2) dx;
e) ∫_0^∞ (x sin x)/(x^2 + 1) dx;
f) ∫_{−∞}^{∞} (cos x)/((x^2 + a^2)(x^2 + b^2)) dx.
LECTURE 7

Example 7.1. I = ∫_0^∞ (sin ωx)/x dx = π/2, ω > 0. Let's prove it!

Note that I is improper on both sides, but it should be ok since lim_{x→0} (sin ωx)/x = ω; yet we have to find something to do about it. Since (sin ωx)/x is even,

(7.1)    I = (1/2) ∫_{−∞}^{∞} (sin ωx)/x dx = (1/2) Im ∫_{−∞}^{∞} e^{iωx}/x dx.

Theorem 6.6 does not apply since f(z) = 1/z has a pole on the real axis. But this is not crucial. First of all, we have to agree upon how we understand (7.1):

(7.2)    ∫_{−∞}^{∞} (e^{iωx}/x) dx ≡ lim_{ε→0, R→∞} (∫_{−R}^{−ε} + ∫_{ε}^{R}) (e^{iωx}/x) dx.
Consider the closed contour C consisting of [−R, −ε], the small semicircle C_ε^+ around 0, [ε, R], and the big arc C_R^+. Since e^{iωz}/z is analytic inside C, by the Cauchy theorem

    0 = ∮_C (e^{iωz}/z) dz = (∫_{−R}^{−ε} + ∫_{ε}^{R}) (e^{iωx}/x) dx + ∫_{C_R^+} (e^{iωz}/z) dz − ∫_{C_ε^+} (e^{iωz}/z) dz.

Note that the 0 on the LHS is independent of R, ε, so they can run freely away, R to infinity and ε to 0.
Hence

    lim_{ε→0, R→∞} (∫_{−R}^{−ε} + ∫_{ε}^{R}) (e^{iωx}/x) dx = lim_{ε→0} ∫_{C_ε^+} (e^{iωz}/z) dz − lim_{R→∞} ∫_{C_R^+} (e^{iωz}/z) dz,

where the last limit is 0 by Jordan's lemma. On C_ε^+ we have z = εe^{iθ}, dz = iεe^{iθ} dθ, so

    lim_{ε→0} ∫_{C_ε^+} (e^{iωz}/z) dz = lim_{ε→0} ∫_0^π (e^{iωεe^{iθ}}/(εe^{iθ})) iεe^{iθ} dθ
        = lim_{ε→0} ∫_0^π e^{ωε(i cos θ − sin θ)} i dθ = ∫_0^π lim_{ε→0} e^{ωε(i cos θ − sin θ)} i dθ = iπ,

since the integrand tends to 1. Here we switched lim and ∫. I swear that in this case it can be justified!

So I = (1/2) Im(iπ) = π/2. Done!
Note the following:
• at first, it might look disturbing that no ω appears in the answer; however, the result is correct, and the absence of ω is due to the fact that the oscillations always add up to the same number;
• if ω = 0 then we have 0 = 0;
• if ω < 0, rewrite sin ωx = −sin(−ω)x; then the integral is −π/2;
• so a more general result is

    (2/π) ∫_0^∞ (sin ωx)/x dx = sgn ω.
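The sign function behavior above can be watched numerically. The integral converges slowly, so we truncate far out where the tail is O(10⁻⁴) (the helper name, cutoff, and step count are our own choices):

```python
import math

# Riemann-sum check of ∫_0^∞ sin(ωx)/x dx = (π/2) sgn ω, truncated at x = 2000π.
def sine_integral(w, L=2000 * math.pi, n=600000):
    h = L / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.sin(w * x) / x * h
    return total

Ipos = sine_integral(2.0)
Ineg = sine_integral(-2.0)
print(Ipos, Ineg)   # ~ +π/2 and ~ -π/2
```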
2. Integrals of the type ∫_0^∞ x^{α−1} f(x) dx,  0 < α < 1

Before we present the main result of this section, we must introduce a new concept: branch cuts.

Whereas z^2 is analytic, we would like √z to be the inverse of z^2, but how can we define it? There is more to it than just (√z)^2 = z, because there are two choices. Indeed, for z = e^{iθ} we naturally think of √z = e^{iθ/2}. But since we also have z = e^{i(θ+2π)}, we could equally well take √z = e^{i(θ+2π)/2} = −e^{iθ/2}. So √z is a different kind of function, but we can figure out how to work with it by cutting the plane along [0, ∞); note that each point on [0, ∞) is then a singularity, but not isolated, so we can't do residue calculations on it.

A similar issue arises with the logarithmic function.

Note also that the cut along R^+ is not the only one possible; any ray is ok, since it precludes any contour/neighborhood from going around the origin.
Theorem 7.2. Let f(z) be analytic in C except for a finite number of poles off the positive part of the real axis. Assume |f(z)| ≤ M/|z|^{α+ε} for some ε > 0, 0 < α < 1, and |z| > r. Then

    ∫_0^∞ x^{α−1} f(x) dx = (2πi/(1 − e^{2πiα})) Σ_{k=1}^N Res{z^{α−1} f(z), z_k}.

Note that f(x) = (sin x)/x does not satisfy the conditions above: even though |f(x)| ≤ 1/|x| on the real line, this is no longer true for z in the complex plane, since sin z is unbounded.

Proof. Note first that z^{α−1} cannot be analytic on the whole C, but it is analytic on C \ R^+, R^+ = [0, ∞). Make sure that you understand it!
Consider the following contour C: it runs along the upper side ℓ^+ of the cut from ε to R, around the big circle C_R, back along the lower side ℓ^− of the cut, and around the small circle C_ε. Write Φ(z) ≡ z^{α−1} f(z). By the Residue Theorem,

    ∮_C Φ(z) dz = 2πi Σ_{k=1}^N Res{Φ(z), z_k}.

On the other hand,

    ∮_C Φ(z) dz = ∫_{ℓ^+} z^{α−1} f(z) dz + ∫_{C_R} z^{α−1} f(z) dz + ∫_{ℓ^−} z^{α−1} f(z) dz + ∫_{C_ε} z^{α−1} f(z) dz,

where the first term equals ∫_ε^R x^{α−1} f(x) dx, and we set I_1 := ∫_{C_R}, I_2 := ∫_{C_ε}.
But because f(x) is analytic on R^+, it returns the same value along ℓ^+ or ℓ^−; along ℓ^−, however, dz = dx for x from R to ε, and z^{α−1} = |z|^{α−1} e^{2π(α−1)i} = x^{α−1} e^{2π(α−1)i}. Hence the above becomes

(7.3)    ∮_C Φ(z) dz = ∫_ε^R x^{α−1} f(x) dx + I_1 − e^{(α−1)2πi} ∫_ε^R x^{α−1} f(x) dx + I_2
                     = (1 − e^{(α−1)2πi}) ∫_ε^R x^{α−1} f(x) dx + I_1 + I_2.
For I_1 we have

    |I_1| ≤ ∫_{C_R} |z|^{α−1} |f(z)| |dz| ≤ R^{α−1} max_{|z|=R} |f(z)| · 2πR ≤ (M/R^{α+ε}) R^α · 2π = 2πM/R^ε → 0,  R → ∞,

so lim_{R→∞} I_1 = 0. Similarly,

    |I_2| ≤ ε^{α−1} max_{|z|=ε} |f(z)| · 2πε = 2πε^α max_{|z|=ε} |f(z)| → 0,  ε → 0,

so lim_{ε→0} I_2 = 0. Letting ε → 0 and R → ∞ in (7.3) proves the formula for ∫_0^∞ x^{α−1} f(x) dx.   QED
Example 7.3. Show that

    ∫_0^∞ x^{α−1}/(x + 1) dx = π/sin πα,  0 < α < 1.

Note that f(x) = 1/(x + 1) admits an analytic continuation f(z) = 1/(z + 1) with one pole: −1. We also have ε = 1 − α > 0, since 0 < α < 1. So by Theorem 7.2,

    ∫_0^∞ x^{α−1}/(x + 1) dx = (2πi/(1 − e^{2πi(α−1)})) Res{z^{α−1}/(z + 1), −1} = 2πi e^{iπ(α−1)}/(1 − e^{2πi(α−1)})
        = 2πi/(e^{−iπ(α−1)} − e^{iπ(α−1)}) = −π/sin π(α − 1) = π/sin πα.

Exercise 7.4. Show that (−1 < α < 3)

    ∫_0^∞ x^α/(x^2 + 1)^2 dx = π(1 − α)/(4 cos(πα/2)).
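The formula derived in Example 7.3 can be spot-checked numerically. The substitution x = e^t (our own trick here) turns the integral into ∫ e^{αt}/(e^t + 1) dt with exponentially decaying tails, so a modest truncation is enough:

```python
import math

# Check ∫_0^∞ x^{α-1}/(x+1) dx = π/sin(πα) for α = 0.3 via x = e^t.
alpha = 0.3
L, n = 60.0, 200000
h = 2 * L / n
I = 0.0
for k in range(n):
    t = -L + (k + 0.5) * h
    I += math.exp(alpha * t) / (math.exp(t) + 1) * h
print(I, math.pi / math.sin(math.pi * alpha))   # both ~ 3.8833
```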
Part 2
Linear Spaces
LECTURE 8
Vector Spaces
1. Basic Definitions
The concept of a vector space is central in math physics, and it's going to be a part of our math language.
Definition 8.1. A vector space E is a set of elements (also called vectors) equipped with operations + and multiplication by a scalar subject to

1. X, Y ∈ E ⟹ X + Y ∈ E  (closure under addition)
2. X + Y = Y + X  (commutative law)
3. (X + Y) + Z = X + (Y + Z)  (associative law)
4. ∃0 : X + 0 = X, ∀X ∈ E  (existence of zero vector)
5. ∀X ∈ E ∃(−X) : X + (−X) = 0  (existence of additive inverse)
6. X ∈ E ⟹ cX ∈ E (c is a scalar)  (closure under scalar multiplication)
7. a(bX) = (ab)X; a, b are scalars  (associative law)
8. (a + b)X = aX + bX  (distributive law with respect to multiplication)
9. a(X + Y) = aX + aY  (distributive law with respect to addition)
10. 1X = X  (invariance with respect to multiplication by unity)
Note, first of all, that our usual 3-space (E^3) is a space whose elements are the usual three-component vectors, with + defined by

    X + Y = (x_1, x_2, x_3)^T + (y_1, y_2, y_3)^T = (x_1 + y_1, x_2 + y_2, x_3 + y_3)^T

and multiplication by scalars

    cX = c(x_1, x_2, x_3)^T = (cx_1, cx_2, cx_3)^T.

Verify properties #1 - 10!

Consider now less simple examples.
Example 8.2. Let y′′ + py′ + qy = 0 be a second order linear homogeneous differential equation. If y_1, y_2 are two solutions, then y_1 + y_2 is a solution too, and so are cy_1, cy_2 for an arbitrary constant c. The operations + and multiplication by a scalar are the usual + and ·. One can easily verify that the set of all solutions to this differential equation forms a linear space.
Example 8.3. Consider the set P_n of all polynomials of order not greater than n. It's clear that if P_1 and P_2 are polynomials then P_1 + P_2 is a polynomial too, i.e., P_1, P_2 ∈ P_n ⟹ P_1 + P_2 ∈ P_n, where + is the usual addition. Next, if a is a scalar, P ∈ P_n ⟹ aP ∈ P_n. So P_n forms a linear space.
Example 8.4. Let C[0, 1] be the set of all continuous functions on [0, 1]. As in
Example 8.3, C[0, 1] is a linear space. (Check it!)
These examples show that a linear space is not a weird object.
Definition 8.5. A linear space is called real if all scalars in Def 8.1 are real numbers.
Definition 8.6. A linear space is called complex if all scalars in Def 8.1 are complex numbers.
2. Bases
Some more definitions.
Definition 8.7. Vectors X_1, X_2, ..., X_n ∈ E are called linearly independent if the equation

    c_1 X_1 + c_2 X_2 + ... + c_n X_n = 0

has only the trivial solution c_1 = c_2 = ... = c_n = 0.

Example 8.12. Let E be the space of three-component columns. Consider the following system of vectors {e_1, e_2, e_3}:

    e_1 = (1, 0, 0)^T,  e_2 = (0, 1, 0)^T,  e_3 = (0, 0, 1)^T.

We claim it is a basis in E. Indeed,
    c_1 e_1 + c_2 e_2 + c_3 e_3 = 0 ⟹ (c_1, c_2, c_3)^T = 0 ⟹ c_1 = c_2 = c_3 = 0.

Hence {e_1, e_2, e_3} are linearly independent.

Now we need to make sure that {e_1, e_2, e_3, X} is linearly dependent for any X ≠ 0. Indeed, let X = (x_1, x_2, x_3)^T with some x_1, x_2, x_3 : x_1^2 + x_2^2 + x_3^2 ≠ 0. We then have

    c_1 e_1 + c_2 e_2 + c_3 e_3 + c_4 X = 0 ⟹ (c_1 + c_4 x_1, c_2 + c_4 x_2, c_3 + c_4 x_3)^T = 0,

which is equivalent to the system

    c_1 = −c_4 x_1,  c_2 = −c_4 x_2,  c_3 = −c_4 x_3.

This system has infinitely many solutions, since if we put c_4 = t ≠ 0 then at least one of c_1, c_2, c_3 is not 0 (x_1, x_2, x_3 are not all zeros). Hence {e_1, e_2, e_3, X} is linearly dependent and, by definition, {e_1, e_2, e_3} is a basis in E.
Remark 8.13. The space E in Example 8.12 is actually our 3-space. Commonly, E is denoted by R^3. So,

    R^3 = {(x_1, x_2, x_3)^T : x_1, x_2, x_3 ∈ R}.
Example 8.14. Let P_n be the set of all polynomials of order ≤ n. We show that dim P_n = n + 1. Consider {1, x, x^2, ..., x^n}. This system is linearly independent. Indeed,

    Σ_{k=0}^n c_k x^k = 0 ⟹ c_k = 0, k = 0, 1, ..., n

(it means that a polynomial is identically 0 if and only if all of its coefficients are 0).

We leave it as an exercise to prove that {1, x, x^2, ..., x^n} is a basis in P_n. Hence, by definition, dim P_n = n + 1.

Some more examples:
• The set of solutions to an order-2 linear homogeneous differential equation is a linear space of dimension 2.
LECTURE 9

Linear Operators

1. Linear Operator

A linear operator is a fundamental object in math physics.

Definition 9.1. Let E_1, E_2 be two linear spaces. A mapping A that maps E_1 into E_2 (A : E_1 → E_2) is called a linear operator if for all X, Y ∈ E_1 and scalars α, β,

    A(αX + βY) = αAX + βAY.

In most of our cases, E_1 = E_2 = E, and we then say that A acts in E. It means that A sends every X ∈ E into a vector Y ∈ E.
Example 9.2. Let E = R^2 and A be an operator acting by the following rule:

    AX = (x_1, 0)^T,  where X = (x_1, x_2)^T.

It is clear that for all X, Y:

    A(X + Y) = A(x_1 + y_1, x_2 + y_2)^T = (x_1 + y_1, 0)^T = (x_1, 0)^T + (y_1, 0)^T = AX + AY;
    A(λX) = A(λx_1, λx_2)^T = (λx_1, 0)^T = λ(x_1, 0)^T = λAX.

Hence A is a linear operator in R^2. As you can see, A performs an orthogonal projection of a vector in 2-space onto the x-axis.
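The linearity check of Example 9.2 can be sketched in a few lines of code (the helper names `A`, `lin` and the concrete vectors are ours):

```python
# A(x1, x2) = (x1, 0): verify A(aX + bY) = a*AX + b*AY on sample vectors.
def A(v):
    return (v[0], 0.0)

def lin(a, X, b, Y):
    """The linear combination a*X + b*Y, component-wise."""
    return (a * X[0] + b * Y[0], a * X[1] + b * Y[1])

X, Y = (1.0, 2.0), (-3.0, 0.5)
a, b = 2.0, -4.0
left = A(lin(a, X, b, Y))
right = lin(a, A(X), b, A(Y))
print(left, right)   # identical pairs
```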
Example 9.3. Let E = P_n. Define A by the formula

    A p(x) = (d/dx) p(x),  p(x) ∈ P_n.

A is clearly a linear operator. This operator is called the operator of differentiation and will be playing a crucial role in our course.
2. Matrices

Definition 9.4. The following table of numbers

    A = (a_{ik}),  i = 1, ..., m;  k = 1, ..., n,

with rows (a_{11} a_{12} ... a_{1n}) through (a_{m1} a_{m2} ... a_{mn}), is called an m × n matrix. If n = m, then the matrix is called square.
1) Addition:

    A + B = (a_{ik}) + (b_{ik}) = (a_{ik} + b_{ik}).

That is, matrices add up element-by-element:

    (A + B)_{ik} = (A)_{ik} + (B)_{ik}.
2) Multiplication by a scalar:

    λA = λ(a_{ik}) = (λa_{ik}),

i.e.,

    (λA)_{ik} = λ(A)_{ik}.
3) Multiplication: if A = (a_{ik}) is m × n and B = (b_{kj}) is n × ℓ, then AB is the m × ℓ matrix with entries

    (AB)_{ij} = Σ_{k=1}^n a_{ik} b_{kj}.

Note that we can multiply two matrices only in the following case: the number of columns of A equals the number of rows of B (an m × n matrix times an n × ℓ matrix gives an m × ℓ matrix). In general, m ≠ ℓ.
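The entry formula (AB)_{ij} = Σ_k a_{ik} b_{kj} translates directly into code (a bare-hands sketch with plain nested lists; the helper name is ours):

```python
# m x n times n x l product via (AB)_{ij} = sum_k a_{ik} b_{kj}.
def matmul(A, B):
    m, n, l = len(A), len(B), len(B[0])
    assert len(A[0]) == n          # columns of A must equal rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(l)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]                    # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]                     # 3 x 2
print(matmul(A, B))                # [[58, 64], [139, 154]]  (a 2 x 2 matrix)
```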
Some more terminology: the zero matrix 0 is the matrix with all entries 0; a diagonal matrix has entries a_{11}, a_{22}, ..., a_{nn} on the diagonal and zeros elsewhere; the identity matrix I is the square matrix with 1's on the diagonal and 0's elsewhere.
X
aik ei , k = 1, 2, . . . , n,
(9.1)
i=1
where {aik }ni=1 are coordinates of the vector Aek in the basis {ek }nk=1 .
Coefficients {aik }ni,k=1 form a matrix. The thing is that this matrix represents the
operator A in the basis {ek }nk=1 . This means that if we know all {aik }ni,k=1 then we can
compute AX for any X E.
Indeed, let X = Σ_{k=1}^n x_k e_k. Then

    AX = A Σ_{k=1}^n x_k e_k = Σ_{k=1}^n x_k A e_k  (by (9.1))  = Σ_{k=1}^n x_k Σ_{i=1}^n a_{ik} e_i = Σ_{i=1}^n (Σ_{k=1}^n a_{ik} x_k) e_i,

i.e.

(9.2)    (AX)_i = Σ_{k=1}^n a_{ik} x_k,  i = 1, 2, ..., n.

(9.2) reads

    (AX)_1 = Σ_{k=1}^n a_{1k} x_k,  (AX)_2 = Σ_{k=1}^n a_{2k} x_k,  ...,  (AX)_n = Σ_{k=1}^n a_{nk} x_k.

So if X = (x_1, x_2, ..., x_n)^T is given by its coordinates in {e_i}_{i=1}^n, then

(9.3)    AX = (a_{ik}) (x_1, x_2, ..., x_n)^T,

the usual matrix-times-column product.
You can think of an operator as a set of houses. You can list these houses using
their street addresses. It would play the role of the matrix representation. Clearly
the set of houses is independent of the way you divide them into blocks. But once a
division is fixed then you identify every house with its street address.
Some examples:

Example 9.7. Consider the operator A as in Example 9.2. Choose a basis e_1 = (1, 0)^T, e_2 = (0, 1)^T. Let us find the matrix of this operator in the basis {e_1, e_2}. We have

(9.4)    A e_1 = e_1,
(9.5)    A e_2 = 0,

so a_{11} = 1, a_{21} = 0, a_{12} = 0, a_{22} = 0, and finally

    A = [[1, 0], [0, 0]]  in the basis {(1, 0)^T, (0, 1)^T}.
For the operator of differentiation A = d/dx acting in P_n with the basis {1, x, x^2, ..., x^n} (i.e. e_k = x^{k−1}) we have

    A e_k = (d/dx) x^{k−1} = (k − 1) x^{k−2} = (k − 1) e_{k−1},  k = 1, 2, ..., n + 1,

so a_{ij} = 0 except a_{k−1,k} = k − 1. So,

    A = [[0, 1, 0, ..., 0],
         [0, 0, 2, ..., 0],
         ...
         [0, 0, 0, ..., n],
         [0, 0, 0, ..., 0]]   in {1, x, x^2, ..., x^n}.
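The matrix above can be built and applied to a coefficient vector in a few lines (the example polynomial and names are ours):

```python
# Matrix of A = d/dx in the basis {1, x, ..., x^n}, applied to the
# coefficient vector of p(x) = 3 + 5x + 2x^2.
n = 3                                       # work in P_3: basis 1, x, x^2, x^3
D = [[0] * (n + 1) for _ in range(n + 1)]
for k in range(1, n + 1):                   # A x^k = k x^{k-1}
    D[k - 1][k] = k

p = [3, 5, 2, 0]                            # 3 + 5x + 2x^2
dp = [sum(D[i][k] * p[k] for k in range(n + 1)) for i in range(n + 1)]
print(dp)                                   # [5, 4, 0, 0], i.e. p'(x) = 5 + 4x
```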
4. Matrix Ring

The following theorem is very important.
64
9. LINEAR OPERATORS
a11 . . . a1n
b11 . . . b1n
e = ... . . . ... , B
e = ... . . . ...
A
an1 . . . ann
bn1 . . . bnn
be their matrix representations in {ei }ni=1 . Then
e+B
e , A
f =A
e
^
1) A
+B =A
g=A
eB
e
2) AB
i.e. when we add two operators their matrices add and when we multiply two operators their matrices multiply.
1) is clear. Show it!
Proof.
2)
A(Bek ) = A
n
X
bjk ej
(by (9.1))
j=1
n
X
bjk Aej =
j=1
n
X
j=1
n
X
n
X
i=1
j=1
bjk
n
X
ei =
n
n
X
X
i=1
(AB)ik =
n
X
aij bjk
!
aij bjk
ei
j=1
i=1
!
bjk aij
aij ei
{z
=(AB)ik
g=A
eB.
e
AB
}
QED
j=1
5. Noncommutative Ring
Note that multiplication of matrices, and hence operators, is not commutative.
Example 9.10. Let $E = P_1 = \{a + bx \mid a, b \in \mathbb{R}\}$, $A = \dfrac{d}{dx}$, and $B(a + bx) = bx$. Then
$$AB(a + bx) = \frac{d}{dx}(bx) = b\,,\qquad BA(a + bx) = B(b) = 0\,,$$
so $AB \ne BA$.
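Example 9.10 can be verified with the $2\times 2$ matrices of $A$ and $B$ in the basis $\{1, x\}$ of $P_1$:

```python
import numpy as np

# d/dx and B(a + bx) = bx as matrices in the basis {1, x} of P_1.
A = np.array([[0.0, 1.0],   # d/dx: 1 -> 0, x -> 1
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],   # B: 1 -> 0, x -> x
              [0.0, 1.0]])

print(A @ B)                       # AB(a + bx) = b: [[0. 1.] [0. 0.]]
print(B @ A)                       # BA(a + bx) = 0: the zero matrix
print(np.allclose(A @ B, B @ A))   # False
```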
LECTURE 10
Let $x = (x_1, x_2, x_3)$, $y = (y_1, y_2, y_3)$ with $x_i, y_i \in \mathbb{C}$. Verify that
1) $\langle x, y\rangle = \displaystyle\sum_{i=1}^{3} x_i\overline{y_i}$ is an inner product on $\mathbb{C}^3$,
2) but $\langle x, y\rangle = \displaystyle\sum_{i=1}^{3} |x_i y_i|^2$ is not.
Note also the nondegeneracy property of an inner product:
$$\langle X, Y\rangle = 0\ \ \forall Y\in E \iff X = 0\,;\qquad \langle X, X\rangle = 0 \iff X = 0.$$

Definition 10.9. A basis $\{e_k\}_{k=1}^{n}$ is called orthogonal and normed (or orthonormal), or an ONB, if
$$\langle e_i, e_k\rangle = 0\,,\ i\ne k\,;\qquad \|e_i\| = 1\,,\ i = 1, 2, \dots, n\,;$$
equivalently,
$$\langle e_i, e_k\rangle = \delta_{ik}.$$
Let $\{e_k\}_{k=1}^{n}$ be an ONB and $X = \displaystyle\sum_{k=1}^{n} x_k e_k$, $Y = \displaystyle\sum_{k=1}^{n} y_k e_k$. Then
$$\langle X, Y\rangle = \sum_{k=1}^{n} x_k\overline{y_k}.$$

Proof.
$$\langle X, Y\rangle = \Big\langle\sum_{i=1}^{n} x_i e_i\,,\ \sum_{k=1}^{n} y_k e_k\Big\rangle
= \sum_{i=1}^{n} x_i\Big\langle e_i\,,\ \sum_{k=1}^{n} y_k e_k\Big\rangle
= \sum_{i=1}^{n} x_i\sum_{k=1}^{n}\overline{y_k}\,\langle e_i, e_k\rangle
= \sum_{i,k=1}^{n} x_i\overline{y_k}\,\langle e_i, e_k\rangle$$
$$= \sum_{k=1}^{n} x_k\overline{y_k}\underbrace{\langle e_k, e_k\rangle}_{=1} + \sum_{i\ne k} x_i\overline{y_k}\underbrace{\langle e_i, e_k\rangle}_{=0}
= \sum_{k=1}^{n} x_k\overline{y_k}. \qquad\text{QED}$$
In particular,
$$\|X\|^2 = \sum_{k=1}^{n} |x_k|^2.$$
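The Parseval-type identity $\langle X, Y\rangle = \sum_k x_k\overline{y_k}$ matches NumPy's conjugating inner product (a sketch with random coordinates in an ONB of $\mathbb{C}^4$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Coordinates of X and Y in an orthonormal basis of C^4.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

# np.vdot conjugates its FIRST argument, so <X, Y> = sum_k x_k conj(y_k)
# is np.vdot(y, x).
print(np.isclose(np.vdot(y, x), np.sum(x * np.conj(y))))         # True
print(np.isclose(np.vdot(x, x).real, np.sum(np.abs(x) ** 2)))    # True: ||X||^2
```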
$$\langle X, Y\rangle = \|X\|\,\|Y\|\cos(\widehat{X, Y})\,, \tag{10.1}$$
where $\|X\|$ is the length of $X$ and $(\widehat{X, Y})$ is the angle between $X$ and $Y$.

Proof. Note first that in $\mathbb{R}^3$ any inner product must be real-valued, so property 1) transforms to
$$\langle X, Y\rangle = \langle Y, X\rangle.$$
But this equation obviously holds for the inner product defined by (10.1). Property 3) is also clear, since $\cos(\widehat{X, X}) = 1$. Property 2) is the least trivial. Prove first that
$$\langle X + Y, Z\rangle = \langle X, Z\rangle + \langle Y, Z\rangle.$$
Indeed,
$$\langle X + Y, Z\rangle = \|X + Y\|\,\|Z\|\cos(\widehat{X+Y, Z}) = \|Z\|\,\|X + Y\|\cos(\widehat{X+Y, Z}).$$
But geometrically it is clear that the projection of a sum is the sum of the projections! So
$$\|X + Y\|\cos(\widehat{X+Y, Z}) = \|X\|\cos(\widehat{X, Z}) + \|Y\|\cos(\widehat{Y, Z})$$
and
$$\langle X + Y, Z\rangle = \|X\|\,\|Z\|\cos(\widehat{X, Z}) + \|Y\|\,\|Z\|\cos(\widehat{Y, Z}) = \langle X, Z\rangle + \langle Y, Z\rangle.$$
Similarly $\langle \alpha X, Z\rangle = \alpha\langle X, Z\rangle$: for $\alpha < 0$ the angle changes to $\pi - (\widehat{X, Z})$ and $\cos(\pi - \theta) = -\cos\theta$. QED
Theorem 10.13. Let $\{e_k\}_{k=1}^{n}$ be an ONB and $\{x_k\}_{k=1}^{n}$ be the coordinates of a vector $X \in E$ in $\{e_k\}_{k=1}^{n}$. Then
$$x_k = \langle X, e_k\rangle. \tag{10.2}$$

Proof.
$$\langle X, e_k\rangle = \Big\langle\sum_{i=1}^{n} x_i e_i\,,\ e_k\Big\rangle = \sum_{i=1}^{n} x_i\underbrace{\langle e_i, e_k\rangle}_{=\delta_{ik}} = x_k. \qquad\text{QED}$$

Every finite dimensional linear space has an inner product (e.g. $\mathbb{R}^n$, $\mathbb{C}^n$). An infinite dimensional Euclidean space goes by a different name: a Hilbert space. But not every infinite dimensional space has an inner product.
2. Adjoint and Selfadjoint Operators
Definition 10.15. Let E be a Euclidean space. An operator A is called the adjoint
operator to an operator A if X, Y E
hAX, Y i = hX, A Y i .
(10.3)
by def of A
= hAZ, Xi + hAZ, Y i
by def of A
hZ, A Xi + hZ, A Y i
X, Y, Z E ; , C
hZ, A (X + Y )i = hZ, A X + A Y i .
QED
So an adjoint operator A is linear. We will show later how any linear operator in
a Euclidean space has an adjoint.
Definition 10.17. An operator A in a Euclidean space E is called selfadjoint if
A = A.
Lemma 10.18. Let $A$ be a linear operator and let $\{e_k\}_{k=1}^{n}$ be an ONB. Then for the elements of the matrix of $A$ we have
$$a_{ik} = \langle Ae_k, e_i\rangle. \tag{10.4}$$

Proof. By definition (formula (9.1)),
$$Ae_k = \sum_{j=1}^{n} a_{jk} e_j \quad\Longrightarrow\quad (Ae_k)_i = a_{ik}\,,$$
and by Theorem 10.13 the $i$th coordinate of $Ae_k$ in an ONB is $\langle Ae_k, e_i\rangle$. QED

Theorem 10.19. In an ONB, $(A^*)_{ik} = \overline{(A)_{ki}}$, i.e. the matrix of $A^*$ is the conjugate transpose of the matrix of $A$.

Proof. By (10.4),
$$(A^*)_{ik} = \langle A^*e_k, e_i\rangle = \langle e_k, Ae_i\rangle = \overline{\langle Ae_i, e_k\rangle} = \overline{(A)_{ki}}. \qquad\text{QED}$$
Why does every operator in a finite dimensional space have an adjoint? Because Theorem 10.19 is constructive: it exhibits the matrix of $A^*$.

Lemma 10.20. $(A^*)^* = A$.

Proof. For all $X, Y \in E$,
$$\langle X, (A^*)^*Y\rangle = \langle A^*X, Y\rangle = \overline{\langle Y, A^*X\rangle} = \overline{\langle AY, X\rangle} = \langle X, AY\rangle\,,$$
hence $(A^*)^* = A$. QED

A unitary operator $U$ satisfies
$$\|UX\| = \|X\|. \tag{10.5}$$
Remark 10.27. Equation 2) in Theorem 10.26 means that a unitary operator preserves the norm of a vector. In physics the norm is often an energy (at least in the PDEs of math physics), so an energy-preserving operator (often depending on time) leads to a conservation law.
Theorem 10.28. Let $U$ be unitary, and let $\{u_{ik}\}$ be its matrix representation. Then any two columns and any two rows are orthogonal (and orthonormal).

Proof. By definition,
$$U^*U = UU^* = I.$$
In matrix form this equation reads, for any $1 \le i, k \le n$,
$$(UU^*)_{ik} = \sum_{j=1}^{n} (U)_{ij}(U^*)_{jk} = \delta_{ik}. \tag{10.6}$$
But by Theorem 10.19 $(U^*)_{jk} = \overline{(U)_{kj}} = \overline{u_{kj}}$, and (10.6) becomes
$$\sum_{j=1}^{n} u_{ij}\overline{u_{kj}} = \delta_{ik}. \tag{10.7}$$
But $(u_{i1}, u_{i2}, \dots, u_{in}) \equiv U_i$ is the $i$th row of $\{u_{ik}\}$ and $(u_{k1}, u_{k2}, \dots, u_{kn}) \equiv U_k$ is the $k$th row. (10.7) then reads
$$\langle U_i, U_k\rangle = \delta_{ik} \quad\Longrightarrow\quad U_i \perp U_k\,,\ i \ne k. \qquad\text{QED}$$
Note that you can use the rows (or columns) of a unitary matrix to make a new orthonormal basis.

Definition 10.29. Real matrices with orthogonal columns and rows are called orthogonal.

Exercise 10.30. Let $U$ be unitary. Show that its columns are orthonormal.

Exercise 10.31. Show that $(AB)^* = B^*A^*$.
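Orthonormality of the rows and columns, and the norm preservation (10.5), can be checked numerically; a QR factorization of a random complex matrix is a convenient way to manufacture a unitary $U$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)        # Q-factor of a complex matrix is unitary

# Rows and columns orthonormal: U U* = U* U = I.
print(np.allclose(U @ U.conj().T, np.eye(3)))   # True
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True

# Norm preservation (10.5): ||U X|| = ||X||.
X = rng.normal(size=3) + 1j * rng.normal(size=3)
print(np.isclose(np.linalg.norm(U @ X), np.linalg.norm(X)))  # True
```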
LECTURE 11
$A^* = A$. Note that both the boundary conditions and the factor $\frac{1}{i}$ were crucial in making $A$ selfadjoint. If we take just the operator of differentiation, $d/dx$ on $P_n^0[0,1]$, then this operator is antisymmetric, i.e. $\langle p', q\rangle = -\langle p, q'\rangle$.

Exercise 11.2. Show that $\langle p, q\rangle = \displaystyle\int_0^1 p(x)\overline{q(x)}\,dx$, defined on $P_n[0,1]$, is an inner product.

Example 11.3 (the operator of rotation). Let a particle rotate about the origin with an angular velocity $\omega$. Let $(x_0, y_0)$ be its initial position. Find a formula for $(x(t), y(t))$ at any instant of time $t$.
[Figure: the initial position $A = (x_0, y_0)$ and the position $B = (x, y)$ after rotation by the angle $\omega t$; $C$ is the projection of $B$ on the ray at angle $\omega t$, $T$ and $Q$ are the projections of $A$ and $C$ on the $x$-axis.]

From the figure,
$$x = OP = OQ - PQ = OC\cos\omega t - SC = \underbrace{OT}_{=x_0}\cos\omega t - \underbrace{BC}_{=AT=y_0}\sin\omega t\,,$$
giving
$$x = x_0\cos\omega t - y_0\sin\omega t\,,\qquad y = x_0\sin\omega t + y_0\cos\omega t.$$
We claim that
$$\begin{pmatrix} x\\ y\end{pmatrix} = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}\begin{pmatrix} x_0\\ y_0\end{pmatrix}$$
and that
$$U = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}$$
is an orthogonal matrix. Check it! Hence $U$ is a unitary operator. So the solution to our problem can be written as follows:
$$X(t) = U(t)X_0\,,$$
where $X(t) = \begin{pmatrix} x(t)\\ y(t)\end{pmatrix}$ is the position vector at time $t$, $X_0 = \begin{pmatrix} x_0\\ y_0\end{pmatrix}$ is the initial position, and
$$U(t) = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}.$$
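The rotation operator $U(t)$ can be sketched in a few lines and checked for orthogonality (here $\omega = 1$ by default):

```python
import numpy as np

def U(t, omega=1.0):
    """Rotation operator U(t) about the origin with angular velocity omega."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s],
                     [s,  c]])

X0 = np.array([1.0, 0.0])      # initial position (x0, y0)
Xt = U(np.pi / 2) @ X0         # quarter turn: (1, 0) -> (0, 1)
print(np.allclose(Xt, [0.0, 1.0]))                 # True
print(np.allclose(U(0.7).T @ U(0.7), np.eye(2)))   # True: U is orthogonal
```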
2. Change of Basis
Note that the $xyz$-frame is not always practical (e.g. for coordinates on Earth we switch to spherical coordinates); but switching coordinate systems is the same as switching bases.

Let us have two bases $\{e_i\}_{i=1}^{n}$, $\{g_i\}_{i=1}^{n}$, not necessarily orthogonal. We raise a question: does there exist a linear operator $G$ such that
$$g_i = Ge_i\,,\qquad i = 1, 2, \dots, n\,?$$
Writing each $g_k$ in the basis $\{e_i\}_{i=1}^{n}$,
$$g_k = \sum_{i=1}^{n} (g_k)_i e_i\,, \tag{11.2}$$
consider the matrix $\{g_{ik}\}_{i,k=1}^{n}$ with $g_{ik} \equiv (g_k)_i$. On the other hand, by definition we have
$$Ge_k = \sum_{i=1}^{n}\underbrace{(Ge_k)_i}_{=g_{ik}} e_i = \sum_{i=1}^{n} g_{ik} e_i\,,\qquad k = 1, 2, \dots, n\,, \tag{11.3}$$
so the $k$th column of the matrix of $G$ consists of the coordinates of $g_k$.

Definition 11.4. Operators $A$ and $B$ are called similar if there exists an invertible operator $G$ such that
$$G^{-1}AG = B.$$

Theorem 11.5. If $A$ and $B$ are similar, i.e.
$$G^{-1}AG = B\,, \tag{11.4}$$
and $\{e_k\}_{k=1}^{n}$ is a basis, then the matrix of $B$ in $\{e_k\}_{k=1}^{n}$ coincides with the matrix of $A$ in the basis $\{g_k\}_{k=1}^{n}$, $g_k = Ge_k$.

Proof. Left as an exercise.

Remark 11.6. Given $G$, $A$, the transformation
$$A \longmapsto G^{-1}AG$$
is called a similarity transformation of $A$. Theorem 11.5 actually says
$$\big(G^{-1}AG\big)_{ik}\ \text{in}\ \{e_i\} \;=\; (A)_{ik}\ \text{in}\ \{Ge_i\}.$$
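Since $B = G^{-1}AG$ represents the same operator in a different basis, the two matrices share their eigenvalues. A numerical sketch (the matrices are made up; $G$ is checked to be invertible by its determinant):

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])       # upper triangular: eigenvalues 3, 2, 1
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])       # det = 4, so G is invertible
B = np.linalg.inv(G) @ A @ G          # the similarity transformation

print(np.sort(np.linalg.eigvals(B).real))   # ~[1. 2. 3.], same spectrum as A
```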
LECTURE 12
C.
(12.1)
The values of for which (12.1) has a nontrivial solution are called eigenvalues of A.
The corresponding nontrivial solutions X are called eigenvectors of A.
Equation (12.1) always has a solution X = 0. But we are talking about nontrivial
solutions (i.e. X 6= 0). The curious fact is that (12.1) has nontrivial solutions only for
a finite number of .
Definition 12.2.
p() det(A I)
is called the characteristic polynomial of A and the equation
p() = 0
is called the characteristic equation of A.
Lemma 12.3. Let A be an operator in E, dim E = n. Then the characteristic
equation for A has n roots.
The proof of this fact lies beyond the scope of our consideration.
clear why the number of roots n. Indeed,
a11
a12
a1n
a21
a22
a2n
p() = det(A I) = det
..
..
..
...
.
.
.
an1
an2
ann
78
(A I)X = 0.
QED
Theorem 12.9. Let $A^* = A$ and let $X_i$, $X_k$ be normalized eigenvectors corresponding to distinct eigenvalues $\lambda_i \ne \lambda_k$. Then the eigenvalues are real and
$$\langle X_i, X_k\rangle = \delta_{ik}. \tag{12.2}$$

Proof. Let $i = k$. Since $\langle AX_k, X_k\rangle = \langle X_k, AX_k\rangle$, we have
$$\lambda_k\|X_k\|^2 = \overline{\lambda_k}\,\|X_k\|^2 \quad\Longrightarrow\quad \lambda_k = \overline{\lambda_k} \quad\Longrightarrow\quad \lambda_k \in \mathbb{R}.$$
For $i \ne k$, $\langle AX_i, X_k\rangle = \langle X_i, AX_k\rangle$ gives
$$(\lambda_i - \lambda_k)\langle X_i, X_k\rangle = 0 \quad\Longrightarrow\quad \langle X_i, X_k\rangle = 0. \qquad\text{QED}$$
Theorem 12.10. If $A$ is a selfadjoint operator and $\lambda_0 \in \sigma(A)$, then the set of all eigenvectors corresponding to $\lambda_0$ forms a subspace, called an eigenspace.

Proof. Let $X_0$, $Y_0$ be two solutions of (12.1):
$$AX_0 = \lambda_0 X_0\,,\qquad AY_0 = \lambda_0 Y_0\,;$$
then $\alpha X_0 + \beta Y_0$ is a solution to (12.1) too. Indeed,
$$A(\alpha X_0 + \beta Y_0) = \alpha AX_0 + \beta AY_0 = \alpha\lambda_0 X_0 + \beta\lambda_0 Y_0 = \lambda_0(\alpha X_0 + \beta Y_0). \qquad\text{QED}$$
Theorem 12.11. Let $A^* = A$ and let the spectrum $\sigma(A)$ be simple (all eigenvalues are simple). Then the set of its normalized eigenvectors forms an ONB.

Proof. Let $\dim E = n$. By Lemma 12.3 the number of eigenvalues of $A$ is $n$. By Theorem 12.9, all eigenvectors $X_1, X_2, \dots, X_n$ are orthogonal. Hence they are linearly independent, and therefore $\{X_1, X_2, \dots, X_n\}$ forms a basis in $E$. Put $e_k = \dfrac{X_k}{\|X_k\|}$. Now $\{e_k\}_{k=1}^{n}$ is an ONB. QED

The statement holds without "simple" for selfadjoint operators; i.e., each eigenspace has dimension equal to the multiplicity of its corresponding eigenvalue.

Remark 12.12. The fact that the eigenvectors of a selfadjoint operator with a simple spectrum form an ONB provides us with a very efficient way of constructing ONBs.
The following theorem is of the utmost importance.
Theorem 12.13 (Diagonalization Theorem). Let A = A and (A) = {k }nk=1 be
simple. Let {ek }nk=1 be the ONB compiled of the eigenvectors of A. Then the matrix of
A in {ek }nk=1 is diagonal.
Again, this is actually true for any selfadjoint operator.
Proof. By definition of $(A)_{ik}$,
$$Ae_k = \sum_{i=1}^{n} (A)_{ik} e_i \quad\text{and}\quad Ae_k = \lambda_k e_k
\quad\Longrightarrow\quad (A)_{ik} = 0\,,\ i \ne k\,;\quad (A)_{kk} = \lambda_k\,,$$
i.e.
$$A = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix}. \qquad\text{QED}$$
Now we are going to answer the following question: Given a selfadjoint matrix A,
find a transformation G that moves the old basis {ek }nk=1 into a basis consisting of the
eigenvectors {gk }nk=1 of A.
So let us find G : Gek = gk , k = 1, 2, . . . , n, where {gk }nk=1 is a basis of normalized
eigenvectors of A.
Since
$$Ge_k = \sum_{i=1}^{n} (G)_{ik} e_i \quad\Longrightarrow\quad (G)_{ik} = (g_k)_i\,,$$
where $(g_k)_i$ is the $i$th coordinate of $g_k$. Therefore,
$$G = \begin{pmatrix} (g_1)_1 & (g_2)_1 & \dots & (g_n)_1\\ (g_1)_2 & (g_2)_2 & \dots & (g_n)_2\\ \vdots & \vdots & \ddots & \vdots\\ (g_1)_n & (g_2)_n & \dots & (g_n)_n \end{pmatrix} = \begin{pmatrix} g_1 & g_2 & \dots & g_n \end{pmatrix}\,,$$
i.e. the columns of $G$ are the eigenvectors of $A$.
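The whole construction (Theorem 12.13 plus the change of basis $G$ built from eigenvectors) is exactly what a numerical eigendecomposition delivers. A sketch with a symmetric matrix whose spectrum is simple:

```python
import numpy as np

# A selfadjoint (symmetric) matrix with simple spectrum.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, G = np.linalg.eigh(A)    # columns of G are orthonormal eigenvectors

# G^{-1} A G is diagonal; for an orthogonal G, G^{-1} = G^T.
D = G.T @ A @ G
print(np.allclose(D, np.diag(lam)))   # True
print(lam)                            # [1. 3.]
```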
Part 3
Hilbert Spaces
LECTURE 13
$$X = \sum_{n=1}^{\infty} x_n e_n. \tag{13.1}$$
Wow! But how are we to understand this infinite series? Here the main difference between finite and infinite dimensional spaces starts.
In general, it is a very deep issue. We are not able to treat it here and we restrict
ourselves to very special yet important cases.
3. Normed Spaces
Definition 13.5. A space $E$ is called a normed space if there exists a real valued function, called a norm and denoted by $\|X\|$, of $X \in E$ subject to
1) $\|X\| \ge 0$; $\|X\| = 0 \iff X = 0$;
2) $\|\alpha X\| = |\alpha|\,\|X\|$, $\alpha \in \mathbb{C}$;
3) $\|X + Y\| \le \|X\| + \|Y\|$ (triangle inequality).
Example 13.6. Let $E = \mathbb{R}^3$, $\|X\| = \sqrt{x_1^2 + x_2^2 + x_3^2}$. We claim $\|X\|$ is a norm. Indeed,
1) $\|X\| \ge 0$, and if $\|X\| = 0$ then $\sqrt{x_1^2 + x_2^2 + x_3^2} = 0 \Rightarrow x_1 = x_2 = x_3 = 0 \Rightarrow X = 0$.
2) $\|\alpha X\| = \sqrt{(\alpha x_1)^2 + (\alpha x_2)^2 + (\alpha x_3)^2} = |\alpha|\sqrt{x_1^2 + x_2^2 + x_3^2} = |\alpha|\,\|X\|$.
3)
$$\|X + Y\|^2 = \langle X + Y, X + Y\rangle = \langle X, X\rangle + \langle X, Y\rangle + \langle Y, X\rangle + \langle Y, Y\rangle = \|X\|^2 + \|Y\|^2 + 2\langle X, Y\rangle$$
$$= \|X\|^2 + \|Y\|^2 + 2\|X\|\,\|Y\|\cos(\widehat{X, Y}) \le \|X\|^2 + \|Y\|^2 + 2\|X\|\,\|Y\| = (\|X\| + \|Y\|)^2.$$
So we get
$$\|X + Y\|^2 \le (\|X\| + \|Y\|)^2 \quad\Longrightarrow\quad \|X + Y\| \le \|X\| + \|Y\|.$$

Example 13.9. Let $C[0,1]$ be as previously defined. Set $\|f\| \equiv \max_{x\in[0,1]}|f(x)|$. Prove that $\|f\|$ is a norm. So $C[0,1]$ is a normed space.

Example 13.10. $P$ is not a normed space. (Try to understand why!)

Example 13.11. Let $C^1[0,1]$ be the space of all functions $f(x)$ continuous on $[0,1]$ whose derivatives are also continuous on $[0,1]$, i.e.
$$C^1[0,1] = \{f \in C[0,1] : f' \in C[0,1]\}.$$
Prove that $\|f\| = \max_{x\in[0,1]}|f(x)| + \max_{x\in[0,1]}|f'(x)|$ is a norm.
4. Hilbert Spaces
We introduce first the concept of a scalar (inner) product in the very same way as
Definition 10.1.
The point here is that not all infinite dimensional spaces have a scalar product.
Example 13.12. $C[0,1]$ has a scalar product. Indeed, let $f, g \in C[0,1]$; then
$$\langle f, g\rangle = \int_0^1 f(x)\overline{g(x)}\,dx.$$
LECTURE 14
Using the elementary inequality
$$2|f(x)|\,|g(x)| \le |f(x)|^2 + |g(x)|^2\,,$$
(14.2) becomes $< \infty$. $L^2$ is a typical example of a function space. Other function spaces are $C[0,1]$, $P$, etc.
Clearly,
$$\langle f, g\rangle = \int_a^b f(x)\overline{g(x)}\,dx$$
has all the properties of a scalar product, and hence $L^2(a,b)$ is a Hilbert space. QED

Remark 14.3. Every function $f(x)$ continuous on $(a,b)$ is in $L^2(a,b)$. So $C[a,b] \subset L^2(a,b)$. But $L^2(a,b)$ also contains discontinuous functions and even some unbounded ones. On the whole line,
$$L^2(-\infty, \infty) \equiv L^2(\mathbb{R}) = \Big\{f(x) : \int_{-\infty}^{\infty} |f(x)|^2\,dx < \infty\Big\}\,;$$
note that $\dfrac{1}{x} \notin L^2(\mathbb{R})$ and $e^{ix} \notin L^2(\mathbb{R})$.
Example 14.7. Let $f_n(x) \equiv \dfrac{1}{\sqrt{\pi}}\sin nx$ in $L^2(0, 2\pi)$. We prove that $\|f_n\| = 1$ and $f_n \perp f_m$, $n \ne m$. Indeed,
$$\|f_n\|^2 = \frac{1}{\pi}\int_0^{2\pi}\sin^2 nx\,dx = \frac{1}{\pi}\int_0^{2\pi}\frac{1 - \cos 2nx}{2}\,dx = \frac{1}{\pi}\cdot\frac{x}{2}\Big|_0^{2\pi} = 1 \quad\Longrightarrow\quad \|f_n\| = 1.$$
Now consider
$$\langle f_n, f_m\rangle = \frac{1}{\pi}\int_0^{2\pi}\sin nx\,\sin mx\,dx
= \frac{1}{2\pi}\int_0^{2\pi}\big(\underbrace{\cos(n-m)x}_{\int = 0\,,\ n\ne m} - \underbrace{\cos(n+m)x}_{\int = 0}\big)\,dx = 0\,,\quad n \ne m.$$
So
$$\langle f_n, f_m\rangle = 0\,,\qquad n \ne m.$$
As we will see, the functions $\Big\{\sqrt{\tfrac{1}{\pi}}\sin nx\Big\}_{n=1}^{\infty}$ play a very important role in Fourier analysis.

A sequence $\{X_n\}$ in a normed space is said to converge to $X$ if
$$\|X - X_n\| \to 0\,,\quad n \to \infty. \tag{14.3}$$
Remark 14.9. If $X, Y \in E$, then $\|X - Y\|$ plays the role of a distance between $X$ and $Y$ ($\|X - Y\|$ is literally the distance if $X, Y \in \mathbb{R}^3$ and $\|X\| = \sqrt{x_1^2 + x_2^2 + x_3^2}$). In view of this, (14.3) means that the distance between $X$ and $X_n$ gets smaller and smaller as $n \to \infty$.

Definition 14.10. A sequence $\{X_n\}_{n\ge 1}$ is said to be a Cauchy sequence if
$$\lim_{n,m\to\infty}\|X_n - X_m\| = 0.$$

Theorem 14.11 (completeness of $L^2$). If $\{f_n\}_{n=1}^{\infty} \subset L^2(a,b)$ and
$$\lim_{n,m\to\infty}\int_a^b |f_n(x) - f_m(x)|^2\,dx = 0\,,$$
then $\{f_n(x)\}$ converges in $L^2(a,b)$ to some function $f(x) \in L^2(a,b)$.
Example 14.12. [Figure: a sequence of continuous piecewise linear functions $f_n \in C[0,1]$ (with corner points at $\tfrac12$ and $\tfrac12 + \tfrac1n$) forming a Cauchy sequence in the $L^2$ norm whose limit, a step function, is in $L^2(0,1)$ but not in $C[0,1]$.]

But $L^2(0,1)$ is a Hilbert space big enough to contain all the limits of Cauchy sequences in $C[0,1]$.
3. Bases and Coordinates
Now we are ready to talk about bases and coordinates in infinite dimensional spaces.

Definition 14.13. Let $E$ be a Banach space. A system $\{e_n\}_{n=1}^{\infty}$ is called a basis if
$$\forall X \in E\ \ \exists\{x_k\}_{k=1}^{\infty} \subset \mathbb{C} :\qquad \Big\|X - \sum_{k=1}^{n} x_k e_k\Big\| \to 0\,,\quad n \to \infty.$$
The numbers $\{x_k\}_{k=1}^{\infty}$ are called the coordinates of $X$. A series $\sum_{k=1}^{\infty} X_k$ is called convergent to $X$ in $E$ if
$$\Big\|X - \sum_{k=1}^{n} X_k\Big\| \to 0\,,\quad n \to \infty.$$
In a Hilbert space with an ONB $\{e_k\}$,
$$X = \sum_{k=1}^{\infty}\langle X, e_k\rangle e_k\,,\qquad x_k = \langle X, e_k\rangle.$$
LECTURE 15
Fourier Series

Now we are going to harvest the previous abstract results. For example, in the Hilbert space $L^2(-\pi, \pi)$ there is an easy basis to construct.

Theorem 15.1. $\Big\{\dfrac{1}{\sqrt{2\pi}}e^{inx}\Big\}_{n\in\mathbb{Z}}$ is an ONB in $L^2(-\pi, \pi)$.

Proof. Let $e_n = \dfrac{1}{\sqrt{2\pi}}e^{inx}$. For $n \ne m$,
$$\langle e_n, e_m\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\overline{e^{imx}}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(n-m)x}\,dx = \frac{1}{2\pi}\,\frac{e^{i(n-m)x}}{i(n-m)}\Big|_{-\pi}^{\pi}$$
$$= \frac{e^{i(n-m)\pi} - e^{-i(n-m)\pi}}{2\pi i(n-m)} = \frac{\sin(n-m)\pi}{\pi(n-m)} = 0 \quad\Longrightarrow\quad \langle e_n, e_m\rangle = 0.$$
Also
$$\|e_n\|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\overline{e^{inx}}\,dx = 1. \qquad\text{QED}$$

Theorem 15.2. Let $f \in L^2(-\pi, \pi)$. Then
$$f(x) = \sum_{n\in\mathbb{Z}} c_n e^{inx}\,,\qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx\,, \tag{15.1}$$
and
$$\|f\|^2 = 2\pi\sum_{n\in\mathbb{Z}} |c_n|^2. \tag{15.2}$$

Proof. With $e_n = \dfrac{1}{\sqrt{2\pi}}e^{inx}$ we have
$$f = \sum_{n\in\mathbb{Z}}\langle f, e_n\rangle e_n = \sum_{n\in\mathbb{Z}}\Big(\int_{-\pi}^{\pi} f(x)\overline{e_n(x)}\,dx\Big) e_n
= \sum_{n\in\mathbb{Z}}\Big(\frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx\Big)\frac{1}{\sqrt{2\pi}}e^{inx}$$
$$= \sum_{n\in\mathbb{Z}}\underbrace{\Big(\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx\Big)}_{c_n} e^{inx} = \sum_{n\in\mathbb{Z}} c_n e^{inx}.$$
By Theorem 14.16,
$$\|f\|^2 = \sum_{n\in\mathbb{Z}}|\langle f, e_n\rangle|^2 = \sum_{n\in\mathbb{Z}}\big|\sqrt{2\pi}\,c_n\big|^2 = 2\pi\sum_{n\in\mathbb{Z}}|c_n|^2. \qquad\text{QED}$$
Example 15.3. Let
$$f(x) = \begin{cases} 1\,, & 0 < x < \pi\\ 0\,, & -\pi < x < 0.\end{cases}$$
Then
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = \frac{1}{2\pi}\int_0^{\pi} e^{-inx}\,dx = \frac{1}{2\pi}\,\frac{e^{-inx}}{-in}\Big|_0^{\pi} = \frac{1 - e^{-in\pi}}{2\pi in}
= \begin{cases} \dfrac{1}{i\pi n}\,, & n = \pm 1, \pm 3, \dots\\[4pt] 0\,, & n = \pm 2, \pm 4, \dots\\[4pt] \dfrac{1}{2}\,, & n = 0. \end{cases}$$
So
$$f(x) = \frac{1}{2} + \frac{1}{i\pi}\sum_{\substack{n\in\mathbb{Z}\\ n\ \text{odd}}}\frac{e^{inx}}{n}. \tag{15.3}$$
Now clearly
$$\|f\|^2 = \int_{-\pi}^{\pi}|f(x)|^2\,dx = \int_0^{\pi} dx = \pi\,,$$
and by (15.2)
$$\|f\|^2 = 2\pi\Big(\frac{1}{4} + \frac{1}{\pi^2}\sum_{n=-\infty}^{\infty}\frac{1}{(2n+1)^2}\Big) = 2\pi\Big(\frac{1}{4} + \frac{1}{\pi^2}\cdot\frac{\pi^2}{4}\Big) = \pi.$$
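The behavior of (15.3) away from and at the jump can be seen numerically (a sketch; the partial-sum helper `S` is ours):

```python
import numpy as np

# Partial sums of (15.3) for the step function f = 1 on (0, pi), 0 on (-pi, 0).
def S(x, N):
    s = np.full_like(x, 0.5, dtype=complex)
    for n in range(-N, N + 1):
        if n % 2:                              # only odd n contribute
            s += np.exp(1j * n * x) / (1j * np.pi * n)
    return s.real

x = np.array([1.0, -1.0, 0.0])
print(S(x, 501))   # approaches [1, 0, 0.5]; exactly 0.5 at the jump x = 0
```

At $x = 0$ the terms $n$ and $-n$ cancel exactly, so every partial sum equals $\tfrac12$, matching the mean-value rule at a jump.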
Note that convergence in norm,
$$\int_{-\pi}^{\pi}\Big|f(x) - \sum_{n=-N}^{N} c_n e^{inx}\Big|^2 dx \to 0\,,\quad N\to\infty\,,$$
does not mean that $\sum c_n e^{inx} \to f(x)$ for every $x \in (-\pi, \pi)$. The reason can be observed from Example 15.3. Series (15.3) converges to $f(x)$ for all $x \in (-\pi, \pi)$ but $x = 0$. At $x = 0$,
$$\frac{1}{2} + \frac{1}{i\pi}\sum_{\substack{|n|\le N\\ n\ \text{odd}}}\frac{1}{n} = \frac{1}{2}\,,$$
since the terms with $n$ and $-n$ cancel. So at $x = 0$, (15.3) converges to $\frac{1}{2}$.

It can be proven that if $f(x)$ is continuous then its Fourier series converges for all $x \in (-\pi, \pi)$. If $f(x)$ is piecewise continuous, then on each interval of continuity of $f$ the Fourier series converges pointwise (i.e. at every point of this interval), and if $x_0$ is a point of jump discontinuity then the Fourier series converges to
$$\frac{f(x_0 - 0) + f(x_0 + 0)}{2}\,,\qquad f(x_0 - 0) = \lim_{\substack{x\to x_0\\ x<x_0}} f(x).$$
Since in applications most of the functions are at least piecewise continuous, series (15.1) converges not only in norm but also pointwise.
Note that $e^{\pm inx} = \cos nx \pm i\sin nx$, and hence instead of $\Big\{\frac{1}{\sqrt{2\pi}}e^{inx}\Big\}_{n\in\mathbb{Z}}$ we can consider $\Big\{\frac{1}{\sqrt{2\pi}}, \frac{1}{\sqrt{\pi}}\sin nx, \frac{1}{\sqrt{\pi}}\cos nx\Big\}_{n\ge 1}$, which is already a real basis. An expansion in this basis reads:

Theorem 15.5. Let $f \in L^2(-\pi, \pi)$. Then
$$f(x) = \frac{a_0}{2} + \sum_{n\ge 1}\big(a_n\cos nx + b_n\sin nx\big)\,, \tag{15.4}$$
where
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx\,,\ n = 0, 1, \dots\,;\qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx\,,\ n = 1, 2, \dots$$

Exercise 15.6. Derive Theorem 15.5 from Theorem 15.2 and establish the corresponding Parseval identity.

Definition 15.7. The functions $e^{inx}$, $\sin nx$, $\cos nx$ are called simple harmonics.

So a Fourier series can be viewed as an expansion in simple harmonics, which play an enormous role in physics.

Remark 15.8. Theorems 15.2 & 15.5 give us expansions of functions defined on $(-\pi, \pi)$. However, if we continue $f(x)$ outside $(-\pi, \pi)$ as a periodic function, $f(x + 2\pi) = f(x)$, then the Fourier expansion formulas are valid for all $x$. To see this, one has only to observe that the $e^{inx}$ are $2\pi$-periodic functions too.

Theorem 15.9. Let $f \in L^2(-\ell, \ell)$. Then
$$f(x) = \sum_{n\in\mathbb{Z}} c_n e^{i\frac{\pi n}{\ell}x}\,,\qquad c_n = \frac{1}{2\ell}\int_{-\ell}^{\ell} f(x)e^{-i\frac{\pi n}{\ell}x}\,dx\,,\quad n\in\mathbb{Z}. \tag{15.5}$$

Exercise 15.10. Derive this theorem from Theorem 15.2 using a suitable change of variables.

Exercise 15.11. Expand $f(x) = x^2$ on $[-\pi, \pi]$ by (a) (15.1); (b) (15.5).

Exercise 15.12. Expand $f(x) = |x|$ on $[-1, 1]$ by (15.5).
LECTURE 16
A signal $f$ (input) is transformed by the system into $Lf$ (output), and
$$L(\alpha f + \beta g) = \alpha Lf + \beta Lg.$$
In other words, the system works as a linear operator. In physical terms it means that the system is subject to the principle of superposition.

There are two basic problems:
1) Given input $f$ and $L$, find the output $Lf$ (direct problem);
2) Given input $f$ and output $Lf$, find $L$ (inverse problem).

Put the simple harmonics $e_n = e^{int}$, $n \in \mathbb{Z}$, through the system and form a database: $\{Le_n\} \equiv \{g_n\}$. The inverse problem 2) is then solved, since $\{Le_n\}_{n\in\mathbb{Z}}$ determines our system. Now, given a signal $f(t)$, we want to know what the output will be without actually putting this signal through the system. What we have to do is represent $f(t)$ by the Fourier formula
$$f(t) = \sum_{n\in\mathbb{Z}} c_n e^{int} \tag{16.1}$$
[Figure: the simple harmonics $e_0$ (constant signal), $e_1$ (first harmonic), $e_2$ (second harmonic), $e_3$ (third harmonic).]

and compute all $\{c_n\}_{n\in\mathbb{Z}}$. Then the output $(Lf)(t)$ is
$$(Lf)(t) = \sum_{n\in\mathbb{Z}} c_n Le^{int} = \sum_{n\in\mathbb{Z}} c_n Le_n. \tag{16.2}$$
So once the system is determined, i.e. $\{Le_n\}$ are known, one can easily compute $Lf$ for any signal $f$ by formulas (16.1), (16.2). So the direct problem 1) is also solved.
Example 16.1. Consider the following system $L$:
$$Le_n = \begin{cases} e_n\,, & n_0 \le n \le n_1\\ 0\,, & \text{otherwise.}\end{cases}$$
So $L$ puts through all simple harmonics with frequencies in $[n_0, n_1]$ with no change and cuts off all other harmonics. It is a filter, and
$$Lf = \sum_{n=n_0}^{n_1} c_n e_n.$$

Example 16.2. The system
$$Le_n = \begin{cases} A\,e_n\,, & n_0 \le n \le n_1\\ 0\,, & \text{otherwise}\,,\end{cases}\qquad A > 1\,,$$
is a band amplifier: an $(n_0, n_1)$ frequency amplifier, and for $n_1 = n_0$ a resonance amplifier. [Figure: the frequency responses of the band and resonance amplifiers.] So what does a resonance amplifier do to a signal? It cuts out all the harmonics but one, $e_{n_0}$, and increases its amplitude by the factor $A$.
Exercise 16.3. Put a signal
$$f(t) = t\,,\qquad -\pi \le t \le \pi\,,$$
through a system $L$:
$$Le_n = \begin{cases} e_n\,, & n = 0, \pm 1, \pm 2\\ 0\,, & \text{otherwise.}\end{cases}$$
Graph the output $Lf$. Use the Fourier trigonometric series expansion for $f(x)$.
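Exercise 16.3 can be checked numerically: compute the Fourier coefficients of $f(t) = t$, keep only $|n| \le 2$, and compare with the closed form $2\sin t - \sin 2t$ (which follows from the known expansion $t = \sum 2(-1)^{n+1}\sin(nt)/n$):

```python
import numpy as np

# Band filter of Exercise 16.3: keep only the harmonics n = 0, +-1, +-2.
t = np.linspace(-np.pi, np.pi, 400001)
f = t
dt = t[1] - t[0]

def c(n):
    # c_n = (1/2pi) * integral f(t) e^{-int} dt, trapezoidal rule.
    v = f * np.exp(-1j * n * t)
    return (np.sum(v) - 0.5 * (v[0] + v[-1])) * dt / (2 * np.pi)

Lf = sum(c(n) * np.exp(1j * n * t) for n in range(-2, 3)).real

# Closed form of the filtered output: 2 sin t - sin 2t.
print(np.max(np.abs(Lf - (2 * np.sin(t) - np.sin(2 * t)))) < 1e-6)  # True
```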
Consider an electric circuit with inductance $L$, resistance $R$, and capacitance $C$, driven by a $T$-periodic electricity source $E(t)$,
$$E(t + T) = E(t).$$
[Figure: the circuit and a sample periodic source $E(t)$.] The charge $Q(t)$ satisfies the ODE
$$L\frac{d^2Q}{dt^2} + R\frac{dQ}{dt} + \frac{1}{C}Q = E(t)\,, \tag{16.3}$$
assuming that the circuit is steady-state (i.e. it has been plugged in long enough that all transition processes are over).
We need to find Q(t). I.e. we want to find the response of our linear system to the
periodic external source E(t).
Note that this problem is studied in standard ODE courses. As you may remember,
it can be approached in a few different ways but there is no exact solution unless E(t)
has a very specific form. We offer here yet another solution (a Fourier series solution).
Here is how it goes.
By Theorem 15.9 with $\ell = T/2$,
$$E(t) = \sum_{n\in\mathbb{Z}} c_n e^{in\omega t}\,,\qquad \omega = \frac{2\pi}{T}\,, \tag{16.4}$$
and we look for a solution to (16.3) in the form of the Fourier series
$$Q(t) = \sum_{n\in\mathbb{Z}} q_n e^{in\omega t}. \tag{16.5}$$
Substituting (16.5) into (16.3) (differentiation and summation can be interchanged),
$$\frac{dQ}{dt} = \sum_{n\in\mathbb{Z}} q_n (in\omega)e^{in\omega t}\,, \tag{16.6}$$
$$\frac{d^2Q}{dt^2} = \sum_{n\in\mathbb{Z}} q_n (-n^2\omega^2)e^{in\omega t}\,, \tag{16.7}$$
and equating the coefficients of $e^{in\omega t}$ on both sides of (16.3),
$$\Big({-}n^2\omega^2 L + in\omega R + \frac{1}{C}\Big)q_n = c_n\,,$$
i.e.
$$q_n = \frac{Cc_n}{(1 - n^2\omega^2 CL) + in\omega CR}. \tag{16.8}$$
But
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2} E(t)e^{-in\omega t}\,dt \tag{16.9}$$
are all known (though you do have to compute them), and the problem is solved in the form of the Fourier expansion (16.5) with $q_n$ computed by (16.8), (16.9).
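The recipe (16.4)–(16.9) can be sketched numerically. Assumptions here: a square-wave source and illustrative circuit values; the residual of (16.3) is checked away from the source's jumps:

```python
import numpy as np

# Steady-state Fourier solution (16.5) of L Q'' + R Q' + Q/C = E(t).
L_, R_, C_ = 1.0, 0.5, 2.0       # illustrative circuit constants
T = 2 * np.pi
w = 2 * np.pi / T                # omega = 1
N = 200                          # truncation of the Fourier series

t = np.linspace(-T / 2, T / 2, 20001)
E = np.sign(np.sin(t))           # a T-periodic square wave
dt = t[1] - t[0]

def c(n):                        # coefficients (16.9), trapezoidal rule
    v = E * np.exp(-1j * n * w * t)
    return (np.sum(v) - 0.5 * (v[0] + v[-1])) * dt / T

Q = np.zeros_like(t, dtype=complex)
for n in range(-N, N + 1):
    qn = C_ * c(n) / ((1 - n**2 * w**2 * C_ * L_) + 1j * n * w * C_ * R_)  # (16.8)
    Q += qn * np.exp(1j * n * w * t)
Q = Q.real

# Residual check: L Q'' + R Q' + Q/C should reproduce E away from its jumps.
Qp = np.gradient(Q, dt)
Qpp = np.gradient(Qp, dt)
resid = L_ * Qpp + R_ * Qp + Q / C_ - E
print(np.median(np.abs(resid)) < 0.1)   # True
```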
Exercise 16.4. Find the Fourier series solution to (16.3) (assuming a steady-state solution) in the form (16.5) for
(a) $E(t)$ a triangular $T$-periodic wave of amplitude $a$;
(b) $E(t)$ a square $T$-periodic wave of amplitude $a$.
[Figure: the graphs of the two source waveforms.]
Answer for the square wave:
$$q_n = \frac{2aC}{\pi n\,\big[n\omega CR + i(1 - n^2\omega^2 CL)\big]}\,,\qquad n\ \text{an odd integer.}$$
Note that the result has only the odd integers again, and the trigonometric form cancels out the imaginary parts.
LECTURE 17
Take $f(x) = \sqrt[4]{x} \in L^2(0,1)$. By a direct computation, $f'(x) = \frac{1}{4}x^{-3/4} \notin L^2(0,1)$, and hence $Af \notin L^2(0,1)$ and $f \notin \mathrm{Dom}\,A$. So $\mathrm{Dom}\,\frac{d}{dx} \subset H$ but $\mathrm{Dom}\,\frac{d}{dx} \ne H$!

Example 17.3. Let $H = L^2(0,1)$ again and define an operator $A$ as follows:
$$(Af)(x) = v(x)f(x)\,,$$
where $v(x)$ is a bounded function on $(0,1)$, i.e.
$$\max_{x\in(0,1)}|v(x)| = C < \infty. \tag{17.1}$$
Definition 17.6. A linear operator $A$ is called bounded if $\|A\| < \infty$ and unbounded if $\|A\| = \infty$.

Example 17.7. Let $A$ be as in Example 17.3. By Definition 17.5,
$$\|A\|^2 = \max_{\|f\|\le 1}\|Af\|^2 = \max_{\|f\|\le 1}\int_0^1 |v(x)f(x)|^2\,dx.$$
By (17.1),
$$\int_0^1 |v(x)f(x)|^2\,dx \le C^2\|f\|^2\,, \tag{17.2}$$
so $\|A\| \le C < \infty$ and $A$ is bounded.

For any bounded operator,
$$\|AX\| \le \|A\|\,\|X\|\,,\qquad \forall X \in H. \tag{17.3}$$

Proof. Let $X \in H$, $X \ne 0$, and consider $e = \dfrac{X}{\|X\|}$, so that $\|e\| = 1$. By the definition of the norm,
$$\|Ae\| \le \|A\| \tag{17.4}$$
$$\Longrightarrow\quad \Big\|A\frac{X}{\|X\|}\Big\| \le \|A\| \quad\Longrightarrow\quad \|AX\| \le \|A\|\,\|X\|. \qquad\text{QED}$$
Theorem 17.10. All linear operators in finite dimensional spaces are bounded.

Proof. Let $A$ be a linear operator in a finite dimensional space $E$. For $X \in E$,
$$\|AX\| = \Big\|A\sum_{i=1}^{n} x_i e_i\Big\| = \Big\|\sum_{i=1}^{n} x_i Ae_i\Big\|
\le \sum_{i=1}^{n}|x_i|\,\|Ae_i\| \le \sqrt{\sum_{i=1}^{n}|x_i|^2}\,\sqrt{\sum_{i=1}^{n}\|Ae_i\|^2} \tag{17.5}$$
by the Cauchy-Schwarz inequality. Next, with $Ae_i = \sum_{k=1}^{n} a_{ki}e_k$ and $\|e_k\| = 1$,
$$\|Ae_i\| \le \sum_{k=1}^{n}|a_{ki}|\,,$$
so
$$\|AX\| \le \underbrace{\sqrt{\sum_{i=1}^{n}|x_i|^2}}_{=\|X\|}\,\sqrt{\sum_{i=1}^{n}\Big(\sum_{k=1}^{n}|a_{ki}|\Big)^2}
\le \Big(\sum_{i,k=1}^{n}|a_{ki}|\Big)\|X\|\,,$$
and since $\sum_{i,k=1}^{n}|a_{ki}| < \infty$, $A$ is bounded. QED

Exercise 17.11.
1) Estimate the norms of the operators with matrices
$$A = \begin{pmatrix}1 & 1\\ 2 & 0\end{pmatrix}\,,\qquad B = \begin{pmatrix}1 & 0 & 2\\ 1 & 1 & 1\\ 0 & 1 & 2\end{pmatrix}\,,\qquad C = \begin{pmatrix}3/2 & 1/2\\ 1/2 & 3/2\end{pmatrix}$$
using the bound from the proof of Theorem 17.10.
2) Now find the norm of $A$ and $C$ again, but using the definition of the norm.
104
$$L^1(\mathbb{R}) \equiv L^1(-\infty, \infty) = \Big\{f : \int_{-\infty}^{\infty}|f(x)|\,dx < \infty\Big\}.$$
Clearly every function $f(x)$ continuous on $[a,b]$ is in $L^1(a,b)$. It's also obvious that not every continuous function is in $L^1(\mathbb{R})$. Take, e.g., $f(x) = 1$:
$$\int_{-\infty}^{\infty}|f(x)|\,dx = \int_{-\infty}^{\infty} dx = \infty\,,$$
and $1 \notin L^1(\mathbb{R})$. A typical example of an unbounded $L^1$-function is $f(x) = \dfrac{1}{\sqrt{x}} \in L^1(0,1)$. However $\dfrac{1}{x} \notin L^1(0,1)$, while $\dfrac{1}{x^2} \in L^1(1,\infty)$ and $\dfrac{1}{x} \notin L^1(0,\infty)$. But $\dfrac{1}{1+x^2} \in L^1(\mathbb{R})$.

Make sure that you understand these things! Unfortunately, $L^1$ is not a Hilbert space.
Definition 17.13. The Fourier transform of a function $f(x) \in L^1(\mathbb{R})$ is the function $\hat{f}(\lambda)$ defined by
$$Ff(\lambda) = \hat{f}(\lambda) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-i\lambda x}f(x)\,dx. \tag{17.6}$$
Note that $F : L^1(\mathbb{R}) \to L^\infty(\mathbb{R})$ is bounded, but $F : L^1(\mathbb{R}) \to L^1(\mathbb{R})$ is not. Indeed,
$$\|Ff\|_\infty = \max_{\lambda\in\mathbb{R}}\Big|\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-i\lambda x}f(x)\,dx\Big|\,,$$
and
$$\Big|\int_{-\infty}^{\infty} e^{-i\lambda x}f(x)\,dx\Big| \le \int_{-\infty}^{\infty}\underbrace{|e^{-i\lambda x}|}_{=1}\,|f(x)|\,dx = \|f\|_{L^1}\,,$$
so $\|Ff\|_\infty \le \frac{1}{\sqrt{2\pi}}\|f\|_{L^1}$ and $F : L^1(\mathbb{R}) \to L^\infty(\mathbb{R})$ is bounded.
But now consider the box function
$$f(x) = \begin{cases} 1\,, & -L \le x \le L\\ 0\,, & \text{otherwise}\end{cases}$$
for some fixed $L > 0$. Then
$$\|f\|_1 = \int_{-\infty}^{\infty}|f| = \int_{-L}^{L} 1 = 2L < \infty.$$
We next compute
$$Ff(\lambda) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)e^{-i\lambda x}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-L}^{L} e^{-i\lambda x}\,dx
= \frac{1}{\sqrt{2\pi}}\,\frac{e^{-i\lambda x}}{-i\lambda}\Big|_{-L}^{L} = \frac{e^{i\lambda L} - e^{-i\lambda L}}{\sqrt{2\pi}\,i\lambda} = \sqrt{\frac{2}{\pi}}\,\frac{\sin\lambda L}{\lambda}.$$
But
$$\|Ff\|_1 = \sqrt{\frac{2}{\pi}}\int_{\mathbb{R}}\Big|\frac{\sin\lambda L}{\lambda}\Big|\,d\lambda = \sqrt{\frac{2}{\pi}}\int_{\mathbb{R}}\frac{|\sin x|}{|x|}\,dx = 2\sqrt{\frac{2}{\pi}}\int_0^{\infty}\frac{|\sin x|}{x}\,dx.$$
Using a result from analysis (beyond our scope),
$$\int_0^{\infty}\frac{|\sin x|}{x}\,dx = \sum_{n=1}^{\infty}\int_{\pi(n-1)}^{\pi n}\frac{|\sin x|}{x}\,dx = \sum_{n=1}^{\infty}\int_0^{\pi}\frac{|\sin y|}{y + \pi(n-1)}\,dy
\ge \sum_{n=1}^{\infty}\frac{1}{\pi n}\int_0^{\pi}\sin y\,dy = \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n} = \infty.$$
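The closed form $Ff(\lambda) = \sqrt{2/\pi}\,\sin(\lambda L)/\lambda$ for the box function is easy to confirm numerically (a sketch; grid sizes chosen for accuracy):

```python
import numpy as np

# Numerical Fourier transform of the box function f = 1 on [-L, L].
L = 1.0
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]

def Ff(lam):
    v = np.exp(-1j * lam * x)       # f(x) = 1 on [-L, L]
    return (np.sum(v) - 0.5 * (v[0] + v[-1])) * dx / np.sqrt(2 * np.pi)

lam = 2.5
exact = np.sqrt(2 / np.pi) * np.sin(L * lam) / lam
print(abs(Ff(lam) - exact) < 1e-8)   # True
```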
Theorem 17.14. The Fourier transform defines a bounded operator on $L^2(\mathbb{R})$, i.e.
$$f \in L^2(\mathbb{R}) \quad\Longrightarrow\quad \hat{f} \in L^2(\mathbb{R}).$$

Proof. The proof of this theorem requires advanced analysis which lies beyond the scope of our course.

2. Selfadjoint and Unitary Operators

Selfadjoint and unitary operators are basically defined in the same way as in the finite dimensional case. There is one difference, related to the fact that $\mathrm{Dom}\,A \ne H$, but we ignore it. So see Definitions 10.15, 10.17, 10.25.

Note that a selfadjoint operator may be unbounded. Here is a very important example.

Example 17.15. Consider $A = \frac{1}{i}\frac{d}{dx}$ in $L^2(0, 2\pi)$ and assume
$$\mathrm{Dom}\,A = \big\{f \in L^2(0, 2\pi) : f' \in L^2(0, 2\pi)\ \text{and}\ f(0) = \theta f(2\pi)\,,\ |\theta| = 1\big\}.$$
For a unitary operator $U$,
$$\|U\| = \max_{\substack{X\in H\\ \|X\|\le 1}}\|UX\| = \max_{\|X\|\le 1}\|X\| = 1.$$
LECTURE 18
Example 18.1. Consider
$$A = \frac{1}{i}\frac{d}{dx}\quad\text{in}\ L^2(0, 2\pi)\,,\qquad \mathrm{Dom}\,A = \big\{f \in L^2(0, 2\pi) : f(0) = \theta f(2\pi)\big\}\,, \tag{18.1}$$
where $|\theta| = 1$. Let us do the spectral analysis of $A$. Observe that in Example 17.15 we showed that $A$ is selfadjoint, so by Theorem 17.18 we expect $\sigma(A) \subseteq \mathbb{R}$. We have to solve the equation
$$\frac{1}{i}u' = \lambda u\,,\qquad u(0) = \theta u(2\pi)\,,$$
i.e. $u' = i\lambda u \Rightarrow u = Ce^{i\lambda x} \in L^2(0, 2\pi)$, and
$$u(0) = C = \theta Ce^{2\pi i\lambda}.$$
Writing $\theta = e^{i\alpha}$ (recall $|\theta| = 1$), the condition becomes
$$e^{i(2\pi\lambda + \alpha)} = 1 \quad\Longrightarrow\quad 2\pi\lambda + \alpha = 2\pi n\,,\ n\in\mathbb{Z} \quad\Longrightarrow\quad \lambda_n = n - \frac{\alpha}{2\pi}.$$
So $\sigma_d(A) = \Big\{n - \dfrac{\alpha}{2\pi}\Big\}_{n\in\mathbb{Z}}$. The corresponding eigenfunctions are
$$u_n = C_n e^{i(n - \frac{\alpha}{2\pi})x}\,,\qquad x \in (0, 2\pi).$$
Note that if $\theta = 1$ ($\alpha = 0$) then $\sigma_d(A) = \mathbb{Z}$ and $u_n(x) = C_n e^{inx}$. If we choose $C_n = \frac{1}{\sqrt{2\pi}}$ then $\|u_n\| = 1$ and $\{u_n(x)\}$ forms an ONB.

So let us analyze what we got! The set of eigenfunctions of $A$ is the set of simple harmonics $\Big\{\frac{1}{\sqrt{2\pi}}e^{inx}\Big\}_{n\in\mathbb{Z}}$ we studied previously. It's possible to prove that $\sigma(A) = \sigma_d(A)$, and hence the spectrum of $A$ is purely discrete: $\sigma_d(A) = \mathbb{Z}$. So the spectrum of an unbounded operator need not be finite!
Example 18.2. Let $A = \frac{1}{i}\frac{d}{dx}$ in $L^2(-\infty, \infty)$. In quantum mechanics this operator is called the operator of momentum. Let us find $\sigma_d(A)$:
$$\frac{1}{i}u' = \lambda u \quad\Longrightarrow\quad u' = i\lambda u \quad\Longrightarrow\quad u(x) = Ce^{i\lambda x}. \tag{18.2}$$
But
$$\|u\|^2 = |C|^2\int_{-\infty}^{\infty}\big|e^{i\lambda x}\big|^2\,dx = |C|^2\int_{-\infty}^{\infty} dx = \infty \quad\Longrightarrow\quad u \notin L^2(\mathbb{R})\ \text{if}\ C \ne 0 \quad\Longrightarrow\quad \sigma_d(A) = \varnothing.$$
Thus this operator has no discrete spectrum. On the other hand, (18.2) can be solved for every $\lambda \in \mathbb{R}$, and the solutions $u_\lambda(x) = Ce^{i\lambda x}$ are bounded functions, but not in $L^2(\mathbb{R})$. Here we have a typical case of the so-called continuous spectrum. Moreover, we have $\sigma(A) = \mathbb{R}$ (see Lecture 29 for more on this topic). The functions $\{u_\lambda\}_{\lambda\in\mathbb{R}}$ are called eigenfunctions of the continuous spectrum. It can be proven that the resolvent $R_\lambda(A)$ of $A$ is not analytic for $\lambda \in \mathbb{R}$. Note that we cannot take $\lambda \in \mathbb{C}\setminus\mathbb{R}$, otherwise $u_\lambda$ would be unbounded. Now $\{u_\lambda\}_{\lambda\in\mathbb{R}}$ is a kind of basis, but not in the space: it is a basis for the Fourier transform.

The following theorem is of the utmost importance!
Theorem 18.3. Let $A$ be a selfadjoint operator in $H$ with a purely discrete spectrum $\sigma_d(A) = \{\lambda_n\}$. Then all $\lambda_n \in \mathbb{R}$ and the set of its eigenvectors $\{X_n\}$ forms an ONB in $H$.

Proof. The proof that $\lambda_n \in \mathbb{R}$ can be done in the same way as for finite dimensional operators. It is also clear why $\langle X_n, X_m\rangle = \delta_{nm}$. But why $\{X_n\}$ is a basis is a difficult question!

Exercise 18.4.
a) Find the eigenvalues and eigenfunctions of $A$ defined in $L^2(0, \pi)$ as follows:
$$Au = -u''\,,\qquad u'(0) - hu(0) = 0\,,\quad u'(\pi) - hu(\pi) = 0\,,$$
where $h$ is a real parameter.
b) Show that $A = -\dfrac{d^2}{dx^2}$ in $L^2(-\infty, \infty)$ has no discrete spectrum.
LECTURE 19
$$f_n \to f\ \text{pointwise on}\ \Delta\ \text{if}\quad \lim_{n\to\infty} f_n(x_0) = f(x_0)\quad \forall x_0 \in \Delta\,, \tag{19.2}$$
$$f_n \rightrightarrows f\ \text{uniformly on}\ \Delta\ \text{if}\quad \lim_{n\to\infty}\max_{x\in\Delta}|f_n(x) - f(x)| = 0. \tag{19.3}$$

Example 19.2.
1) $\Big\{\dfrac{1}{x^2 + n^2}\Big\}_{n=1}^{\infty} \rightrightarrows 0$ on $\mathbb{R}$. Indeed,
$$\max_{x\in\mathbb{R}}\Big|\frac{1}{x^2 + n^2} - 0\Big| = \frac{1}{n^2} \to 0\,,\quad n\to\infty.$$
2) Let $f_n(x) = \begin{cases} 0\,, & 1/n < x < 1\\ n\,, & 0 < x \le 1/n.\end{cases}$
It is clear that
$$\lim_{n\to\infty} f_n(x) = 0 \qquad \forall x \in (0,1)\,,$$
but $\max_x |f_n(x)| = n \not\to 0$, so the convergence is not uniform; note also that $\int_0^1 f_n(x)\,dx = 1$ for all $n \in \mathbb{N}$.

Uniform convergence is much better than the usual one! It lets us interchange $\lim$ and $\int$. Namely,
Theorem 19.3. Let $\Delta$ be any finite interval and $\{f_n(x)\}$ be a sequence of integrable functions. If $f_n(x) \rightrightarrows f(x)$ on $\Delta$ and $f(x)$ is integrable, then
$$\lim_{n\to\infty}\int_\Delta f_n(x)\,dx = \int_\Delta\lim_{n\to\infty} f_n(x)\,dx = \int_\Delta f(x)\,dx.$$

Proof. We must show $\displaystyle\lim_{n\to\infty}\Big(\int_\Delta f_n(x)\,dx - \int_\Delta f(x)\,dx\Big) = 0$. But
$$\Big|\int_\Delta f_n(x)\,dx - \int_\Delta f(x)\,dx\Big| = \Big|\int_\Delta\big(f_n(x) - f(x)\big)\,dx\Big| \le \int_\Delta|f_n(x) - f(x)|\,dx$$
$$\le \max_{x\in\Delta}|f_n(x) - f(x)|\int_\Delta dx = \|f_n - f\|_\infty\,|\Delta| \to 0\,,\quad n\to\infty. \qquad\text{QED}$$
Remark 19.4. Uniform convergence in Thm 19.3 is essential. Indeed, let $\{f_n\}$ be the sequence defined in Example 19.2 2). It is clear that
$$\lim_{n\to\infty}\int_0^1 f_n(x)\,dx = 1 \quad\text{but}\quad \int_0^1\lim_{n\to\infty} f_n(x)\,dx = \int_0^1 0\,dx = 0.$$

Definition 19.5. A sequence $\{\delta_n(x)\}$ such that
1) $\delta_n(x) \rightrightarrows 0$ on $\mathbb{R}\setminus(-\varepsilon, \varepsilon)$ for every $\varepsilon > 0$, and
2) $\displaystyle\int_{\mathbb{R}}\delta_n(x)\,dx = 1$,
is called a $\delta$-sequence.

Example 19.6. $\delta_n(x) = \dfrac{n}{\sqrt{\pi}}\,e^{-n^2x^2}$. $\{\delta_n\}$ is a $\delta$-sequence! Indeed,
$$\max_{x\in\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\frac{n}{\sqrt{\pi}}e^{-n^2x^2} = \frac{n}{\sqrt{\pi}}e^{-n^2\varepsilon^2} \to 0\,,\quad n\to\infty\,,$$
and
$$\int_{\mathbb{R}}\delta_n(x)\,dx = \frac{n}{\sqrt{\pi}}\int_{-\infty}^{\infty} e^{-n^2x^2}\,dx \overset{y=nx}{=} \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} e^{-y^2}\,dy = 1.$$
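The defining behavior of the Gaussian $\delta$-sequence, $\int\delta_n\varphi \to \varphi(0)$, shows up clearly in a numerical experiment (the test function is made up; $\varphi(0) = 1$):

```python
import numpy as np

# delta_n(x) = n/sqrt(pi) * exp(-n^2 x^2) paired with a test function.
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
phi = np.cos(x) * np.exp(-x**2)     # smooth, rapidly decaying; phi(0) = 1

def pair(n):
    dn = n / np.sqrt(np.pi) * np.exp(-(n * x) ** 2)
    return np.sum(dn * phi) * dx

print([round(pair(n), 3) for n in (1, 5, 50)])   # [0.624, 0.971, 1.0]
```

The exact values here are $e^{-1/8}/\sqrt{2}$, $\tfrac{5}{\sqrt{26}}e^{-1/104}$, and so on, approaching $\varphi(0) = 1$ as $n$ grows.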
Exercise 19.7. Show that $\{f_n(x)\}$ defined in Example 19.2 2) is a $\delta$-sequence.

Definition 19.8. The support of a function $f(x)$, denoted by $\operatorname{Supp} f$, is the set of all $x$ with $f(x) \ne 0$. That is,
$$\operatorname{Supp} f = \{x : f(x) \ne 0\}.$$
[Figure: a function $f(x)$ and its support.]

Definition 19.9. A function $f(x)$ is called finitely supported if there exists a finite interval $\Delta$ such that $\operatorname{Supp} f \subseteq \Delta$.
[Figure: a finitely supported function.]

Definition 19.10. A function $f(x)$ is called smooth on a set $\Delta$ if all derivatives of $f$ are continuous functions. $C^\infty(\Delta)$ stands for the set of all smooth functions on $\Delta$.

Example: $f(x) = e^{-x^2}$, $\sin x$, $x^2 - x + 1$ are smooth functions on $\mathbb{R}$.

We are going to use the following notation:
$$C_0^\infty(\mathbb{R}) = \{\text{all functions in } C^\infty(\mathbb{R}) \text{ having finite support}\}.$$
[Figure: a typical function in $C_0^\infty(\mathbb{R})$.]
Lemma 19.11. Let $\{\delta_n\}$ be a $\delta$-sequence. Then for every $\varepsilon > 0$,
$$\lim_{n\to\infty}\int_{-\varepsilon}^{\varepsilon}\delta_n(x)\,dx = 1.$$

Proof. We have
$$1 = \int_{\mathbb{R}}\delta_n(x)\,dx = \int_{-\varepsilon}^{\varepsilon}\delta_n(x)\,dx + \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\,dx.$$
But $\delta_n \rightrightarrows 0$ on $\mathbb{R}\setminus(-\varepsilon, \varepsilon)$, so by Thm 19.3
$$\lim_{n\to\infty}\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\,dx = \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} 0\,dx = 0
\quad\Longrightarrow\quad \lim_{n\to\infty}\int_{-\varepsilon}^{\varepsilon}\delta_n(x)\,dx = 1\,,\quad \forall\varepsilon > 0. \qquad\text{QED}$$
Theorem 19.12. Let $\{\delta_n\}$ be a $\delta$-sequence and $\varphi \in C_0^\infty(\mathbb{R})$. Then
$$\lim_{n\to\infty}\int_{\mathbb{R}}\delta_n(x)\varphi(x)\,dx = \varphi(0).$$

Proof. Since $\int_{\mathbb{R}}\delta_n = 1$,
$$\int_{\mathbb{R}}\delta_n(x)\varphi(x)\,dx = \varphi(0) + \int_{\mathbb{R}}\delta_n(x)\big(\varphi(x) - \varphi(0)\big)\,dx.$$
We need to show that the last integral goes to $0$ when $n \to \infty$. It will take a while! Let $\varepsilon$ be any positive number. By the triangle inequality,
$$\Big|\int_{\mathbb{R}}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx\Big| \le \Big|\int_{-\varepsilon}^{\varepsilon}\Big| + \Big|\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\Big|. \tag{19.4}$$
For the first term,
$$\Big|\int_{-\varepsilon}^{\varepsilon}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx\Big| \le \max_{x\in(-\varepsilon,\varepsilon)}|\varphi(x)-\varphi(0)|\,\underbrace{\int_{-\varepsilon}^{\varepsilon}\delta_n(x)\,dx}_{\to 1}\,; \tag{19.5}$$
that is, choosing $\varepsilon > 0$ small we can make $\max_{x\in(-\varepsilon,\varepsilon)}|\varphi(x)-\varphi(0)|$ as small as we want. Next,
$$\Big|\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx\Big| \le \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\,|\varphi(x)-\varphi(0)|\,dx \le \|\varphi(x)-\varphi(0)\|_\infty\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\,dx. \tag{19.6}$$
Hence
$$\Big|\int_{\mathbb{R}}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx\Big| \le \underbrace{\max_{x\in(-\varepsilon,\varepsilon)}|\varphi(x)-\varphi(0)|}_{\text{small for small }\varepsilon} + \|\varphi(x)-\varphi(0)\|_\infty\underbrace{\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}\delta_n(x)\,dx}_{\to 0\ \text{(by Lemma 19.11)}}.$$
So
$$\varlimsup_{n\to\infty}\Big|\int_{\mathbb{R}}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx\Big| < \varepsilon'\ \ \forall\varepsilon' > 0
\quad\Longrightarrow\quad \lim_{n\to\infty}\int_{\mathbb{R}}\delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx = 0. \qquad\text{QED}$$
3. Weak Convergence
Definition 19.13. A sequence $\{f_n(x)\}_{n=1}^{\infty}$ of functions integrable on $\Delta$ is said to converge weakly if
$$\lim_{n\to\infty}\int_\Delta f_n(x)\varphi(x)\,dx \quad\text{exists}\quad \forall\varphi \in C_0^\infty(\Delta).$$
In particular, by Theorem 19.12 and Definition 19.13, every $\delta$-sequence $\{\delta_n\}$ converges weakly. QED

So we have three types of convergence: usual (pointwise), uniform, and weak.
- pointwise: $f_n \to f$ on $\Delta$ iff $\lim_{n\to\infty} f_n(x_0) = f(x_0)$ for every $x_0 \in \Delta$;
- uniform: $f_n \rightrightarrows f$ on $\Delta$ iff $\max_{x\in\Delta}|f_n(x) - f(x)| \to 0$, $n\to\infty$;
- weak: $f_n$ converges weakly on $\Delta$ iff $\lim_{n\to\infty}\int_\Delta f_n(x)\varphi(x)\,dx$ exists for every $\varphi \in C_0^\infty(\Delta)$.
Theorem 19.15. If $\{f_n(x)\}$ converges to $f(x)$ on $\Delta$ uniformly, then $\{f_n(x)\}$ converges to $f(x)$ weakly.

Proof. We are supposed to prove that
$$\lim_{n\to\infty}\int_\Delta f_n(x)\varphi(x)\,dx = \int_\Delta f(x)\varphi(x)\,dx.$$
We have
$$\Big|\int_\Delta\big(f_n(x) - f(x)\big)\varphi(x)\,dx\Big| \le \int_\Delta|f_n(x) - f(x)|\,|\varphi(x)|\,dx
\le \max_{x\in\Delta}|f_n(x) - f(x)|\int_\Delta|\varphi(x)|\,dx = \|f_n - f\|_\infty\int_\Delta|\varphi(x)|\,dx \to 0\,,$$
so
$$\lim_{n\to\infty}\int_\Delta f_n(x)\varphi(x)\,dx = \int_\Delta f(x)\varphi(x)\,dx. \qquad\text{QED}$$
Remark 19.16. The converse of Theorem 19.15 is not true. That is, weak convergence does not imply uniform convergence. We provide a counterexample. Let $\Big\{\dfrac{e^{inx}}{\sqrt{2\pi}}\Big\}_{n\in\mathbb{Z}}$ on $(-\pi, \pi)$. We claim that $\Big\{\dfrac{e^{inx}}{\sqrt{2\pi}}\Big\}$ converges weakly to $0$. Indeed, let $\varphi$ be a test function. Then
$$\int_{-\pi}^{\pi}\frac{e^{inx}}{\sqrt{2\pi}}\,\varphi(x)\,dx = c_n\,,\quad n\in\mathbb{Z}\,,$$
and $c_n \to 0$ as $n\to\infty$ (the Fourier coefficients of $\varphi$ are square-summable), so by definition $\Big\{\dfrac{e^{inx}}{\sqrt{2\pi}}\Big\}$ converges weakly to $0$.

On the other hand,
$$\Big\|\frac{e^{inx}}{\sqrt{2\pi}} - 0\Big\|_\infty = \max_x\Big|\frac{e^{inx}}{\sqrt{2\pi}}\Big| = \frac{1}{\sqrt{2\pi}} \ne 0\quad \forall n\,,$$
and hence $\Big\{\dfrac{e^{inx}}{\sqrt{2\pi}}\Big\}$ does not converge to $0$ uniformly. Moreover, $\Big\{\dfrac{e^{inx}}{\sqrt{2\pi}}\Big\}$ does not converge even pointwise.

So:
$$\text{Uniform convergence} \implies \text{Weak convergence}\,;\qquad \text{Weak convergence} \not\Rightarrow \text{Uniform convergence.}$$
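The counterexample can be watched numerically: pairings of $e^{inx}/\sqrt{2\pi}$ with a fixed test-like function decay as $|n|$ grows, even though the functions themselves do not shrink (a sketch; the polynomial test function, which vanishes to high order at $\pm\pi$, is ours):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
phi = (1 - (x / np.pi) ** 2) ** 4   # vanishes to high order at the endpoints

def pairing(n):
    return abs(np.sum(np.exp(1j * n * x) / np.sqrt(2 * np.pi) * phi) * dx)

print(pairing(0))                          # a fixed nonzero number
print(pairing(50) < 1e-3 * pairing(0))     # True: the pairing decays with n
```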
Definition 19.17. The Dirac $\delta$-function is the weak limit of a $\delta$-sequence: by definition,
$$\int_{\mathbb{R}}\delta(x)\varphi(x)\,dx \equiv \lim_{n\to\infty}\int_{\mathbb{R}}\delta_n(x)\varphi(x)\,dx = \varphi(0).$$

Example 19.18. We explore some properties of the Dirac $\delta$-function. In the following, $\varphi$ is any test function.
1) $x\delta(x) = 0$, yet neither $x$ nor $\delta(x)$ is $0$ on an interval:
$$\int_{\mathbb{R}}\big(x\delta(x)\big)\varphi(x)\,dx = \int_{\mathbb{R}}\delta(x)\big(x\varphi(x)\big)\,dx = 0\cdot\varphi(0) = 0.$$
2) $\delta(x - a)$:
$$\int_{\mathbb{R}}\delta(x-a)\varphi(x)\,dx = \varphi(a).$$
3) $\delta(x-a) + \delta(x-b)$: [figure with point masses at $a$ and $b$.]
4) $c\,\delta(x)$:
$$\int_{\mathbb{R}}\big(c\,\delta(x)\big)\varphi(x)\,dx = \int_{\mathbb{R}}\delta(x)\big(c\,\varphi(x)\big)\,dx = c\,\varphi(0).$$

Exercise 19.19.

Surprisingly, $\delta$ is smooth, i.e. infinitely differentiable. You can't square it, but you can differentiate it infinitely many times!
4. Generalized Derivatives
Definition 19.20. A function (or distribution) $g(x)$ is called the generalized derivative of $f(x)$, written $g = f'$, if
$$\int_{\mathbb{R}} f'(x)\varphi(x)\,dx = -\int_{\mathbb{R}} f(x)\varphi'(x)\,dx \qquad \forall\varphi \in C_0^\infty(\mathbb{R}).$$

Example 19.21. Let $\theta(x) = \begin{cases}1\,, & x \ge 0\\ 0\,, & x < 0\end{cases}$ (Heaviside function). The generalized derivative of $\theta$:
$$-\int_{\mathbb{R}}\theta(x)\varphi'(x)\,dx = -\int_{-\infty}^{0}\underbrace{\theta(x)}_{=0}\varphi'(x)\,dx - \int_0^{\infty}\varphi'(x)\,dx
= -\varphi(x)\Big|_0^{\infty} = -\big(\underbrace{\varphi(\infty)}_{=0} - \varphi(0)\big) = \varphi(0).$$
So
$$\int_{\mathbb{R}}\theta'(x)\varphi(x)\,dx = \varphi(0) = \int_{\mathbb{R}}\delta(x)\varphi(x)\,dx
\quad\Longrightarrow\quad \theta'(x) = \delta(x). \qquad\text{QED}$$
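The defining identity $-\int\theta\,\varphi' = \varphi(0)$ is easy to check on a grid (the test function is made up; $\varphi(0) = 1$):

```python
import numpy as np

# Weak check of theta'(x) = delta(x): -integral theta(x) phi'(x) dx = phi(0).
x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
theta = (x >= 0).astype(float)
phi = (x**2 + 1) * np.exp(-x**2)   # smooth and rapidly decaying; phi(0) = 1

lhs = -np.sum(theta * np.gradient(phi, dx)) * dx
print(round(lhs, 4))               # 1.0 = phi(0)
```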
Proof.
$$\int_{\mathbb{R}}\frac{d^m}{dx^m}\delta_n(x)\,\varphi(x)\,dx
\overset{\text{by parts}}{=} \underbrace{\frac{d^{m-1}}{dx^{m-1}}\delta_n(x)\,\varphi(x)\Big|_{-\infty}^{\infty}}_{=0} - \int_{\mathbb{R}}\frac{d^{m-1}}{dx^{m-1}}\delta_n(x)\,\varphi'(x)\,dx
= \dots = (-1)^m\int_{\mathbb{R}}\delta_n(x)\varphi^{(m)}(x)\,dx\,,$$
so by Thm 19.12
$$\lim_{n\to\infty}\int_{\mathbb{R}}\frac{d^m}{dx^m}\delta_n(x)\,\varphi(x)\,dx = (-1)^m\varphi^{(m)}(0).$$

Definition 19.24. Let $\{\delta_n(x)\}$ be a smooth $\delta$-sequence. The weak limit of $\Big\{\dfrac{d^m}{dx^m}\delta_n(x)\Big\}$ is called the $m$th derivative of $\delta(x)$ and denoted $\delta^{(m)}(x)$.

Note that in particular $\displaystyle\int_{\mathbb{R}}\delta'(x)\,dx = 0$: the support is at one point, and the average is zero. Weird!

But consider for example the sequence $\delta_n(x) = \dfrac{n}{\sqrt{\pi}}e^{-n^2x^2}$: it is infinitely differentiable and converges weakly to $\delta(x)$. [Figure: graphs of $\delta_n'$ for increasing $n$.] One can observe that the derivative at $0$ is $0$, and the average of the derivative is indeed $0$.
LECTURE 20
Let
$$\delta_\varepsilon(x) = \begin{cases}\dfrac{1}{2\varepsilon}\,, & x \in [-\varepsilon, \varepsilon]\\[4pt] 0\,, & x \in (-\pi, \pi)\setminus[-\varepsilon, \varepsilon].\end{cases}$$
$\{\delta_\varepsilon\}$ is clearly a $\delta$-sequence.¹ Since $\delta_\varepsilon \in L^2(-\pi, \pi)$ we can expand it into a Fourier series:
$$\delta_\varepsilon(x) = \sum_{n\in\mathbb{Z}} c_n e^{inx}\,,\qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}\delta_\varepsilon(x)e^{-inx}\,dx.$$
We have
$$c_n = \frac{1}{2\pi}\int_{-\varepsilon}^{\varepsilon}\frac{1}{2\varepsilon}e^{-inx}\,dx = \frac{1}{4\pi\varepsilon}\,\frac{e^{-inx}}{-in}\Big|_{-\varepsilon}^{\varepsilon} = \frac{e^{in\varepsilon} - e^{-in\varepsilon}}{4\pi\varepsilon\, in} = \frac{1}{2\pi}\,\frac{\sin n\varepsilon}{n\varepsilon}.$$
So $c_n = \dfrac{1}{2\pi}\,\dfrac{\sin n\varepsilon}{n\varepsilon}$, and
$$\delta_\varepsilon(x) = \frac{1}{2\pi}\sum_{n\in\mathbb{Z}}\frac{\sin n\varepsilon}{n\varepsilon}\,e^{inx}.$$
Letting $\varepsilon \to 0$ (recall $\frac{\sin n\varepsilon}{n\varepsilon} \to 1$), we arrive at
$$\delta(x) = \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} e^{inx} = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n\ge 1}\cos nx. \tag{20.1}$$

¹Here $\{\delta_\varepsilon\}_{0<\varepsilon<\pi}$ and $\varepsilon \to 0$ plays the role of $n \to \infty$.
Unfortunately this series diverges for all $x$. But it can be understood in the weak sense. Indeed, for any test function $\varphi(x)$,
$$\int_{-\pi}^{\pi}\Big(\frac{1}{2\pi}\sum_{n=-N_1}^{N_2} e^{inx}\Big)\varphi(x)\,dx = \sum_{n=-N_1}^{N_2}\underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\varphi(x)\,dx}_{=c_n} = \sum_{n=-N_1}^{N_2} c_n. \tag{20.2}$$
We show now that the partial sums $\sum_{-N_1}^{N_2} c_n$ converge absolutely, i.e. $\sum_{n\in\mathbb{Z}}|c_n|$ converges. Integrating by parts twice (all integrated terms are $0$ since $\varphi$ is a test function and hence $\operatorname{Supp}\varphi \subset (-\pi, \pi)$):
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\varphi(x)\,dx = -\frac{1}{2\pi in}\int_{-\pi}^{\pi} e^{inx}\varphi'(x)\,dx = \frac{1}{2\pi(in)^2}\int_{-\pi}^{\pi} e^{inx}\varphi''(x)\,dx\,,\quad n \ne 0\,,$$
and $c_0 = \dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{\pi}\varphi(x)\,dx$. So
$$|c_n| \le \frac{1}{2\pi n^2}\int_{-\pi}^{\pi}|\varphi''(x)|\,dx \le \frac{C}{n^2}\,,\quad n \ne 0
\quad\Longrightarrow\quad \sum_{n\ne 0}|c_n| \le C\sum_{n\ne 0}\frac{1}{n^2} < \infty. \qquad\text{QED}$$
Hence the partial sums in (20.2) converge as $N_1, N_2 \to \infty$, and by the pointwise Fourier inversion for the smooth function $\varphi$,
$$\lim_{N_1,N_2\to\infty}\int_{-\pi}^{\pi}\Big(\frac{1}{2\pi}\sum_{n=-N_1}^{N_2} e^{inx}\Big)\varphi(x)\,dx = \varphi(0) = \int_{-\pi}^{\pi}\delta(x)\varphi(x)\,dx\,,$$
i.e. (20.1) holds weakly, and likewise for its real form
$$\delta(x) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n\ge 1}\cos nx. \tag{20.3}$$
Lemma 20.3. For any $a > 0$,
$$\lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{e^{xz}}{z}\,dz = \theta(x)\,,$$
the Heaviside function.

Proof. Let $x < 0$ first. Close the segment $[a-ih, a+ih]$ by a circular arc $\Gamma_R$ to the right, where $\operatorname{Re} z \ge a$. Since $\dfrac{e^{zx}}{z} \in H(\operatorname{Int}\Gamma_R)$ (analytic within the closed contour),
$$\int_{a-ih}^{a+ih}\frac{e^{zx}}{z}\,dz = \int_{\Gamma_R}\frac{e^{zx}}{z}\,dz.$$
Integrating by parts,
$$\int_{\Gamma_R}\frac{e^{zx}}{z}\,dz = \int_{\Gamma_R}\frac{1}{zx}\,de^{zx} = \underbrace{\frac{e^{zx}}{zx}\bigg|_{\partial\Gamma_R}}_{\to 0\,,\ h\to\infty} + \frac{1}{x}\int_{\Gamma_R}\frac{e^{zx}}{z^2}\,dz\,,$$
and since $x < 0$ and $\operatorname{Re} z \ge a$ on the arc give $|e^{zx}| = e^{x\operatorname{Re}z} \le e^{xa}$,
$$\Big|\frac{1}{x}\int_{\Gamma_R}\frac{e^{zx}}{z^2}\,dz\Big| \le \frac{1}{|x|}\int_{\Gamma_R}\frac{|e^{zx}|}{|z|^2}\,|dz| \le \frac{e^{xa}}{|x|\,R^2}\,\pi R = \frac{\pi e^{xa}}{|x|\,R} \longrightarrow 0\,,\quad R\to\infty. \tag{20.4}$$
So
$$\lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{e^{xz}}{z}\,dz = 0\,,\qquad x < 0.$$
Now let $x > 0$ and close the contour by an arc $\Gamma_R$ to the left, where $\operatorname{Re} z \le a$; the same estimates (with $|e^{zx}| \le e^{xa}$ again) show that the contribution of the arc vanishes as $R \to \infty$. The closed contour now encircles the pole $z = 0$, and by the residue theorem
$$\frac{1}{2\pi i}\oint\frac{e^{xz}}{z}\,dz = \operatorname{Res}\Big(\frac{e^{xz}}{z}, 0\Big) = \lim_{z\to 0} z\,\frac{e^{xz}}{z} = 1.$$
Since
$$1 = \frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{e^{xz}}{z}\,dz + \frac{1}{2\pi i}\int_{\Gamma_R}\frac{e^{xz}}{z}\,dz\,,$$
taking $h \to \infty$ we get
$$\lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{e^{xz}}{z}\,dz = 1\,,\qquad x > 0\,,$$
and the lemma is proven. QED
Proposition. In the weak sense,
\[
\delta(x) = \frac{d}{dx}\theta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ixt}\,dt.
\]
Proof. By Lemma 20.3,
\[
\theta(x) = \lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{e^{zx}}{z}\,dz.
\]
Differentiating \(\theta(x)\) (formally), we have
\[
\frac{d}{dx}\theta(x) = \lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih}\frac{d}{dx}\frac{e^{zx}}{z}\,dz
= \lim_{h\to\infty}\frac{1}{2\pi i}\int_{a-ih}^{a+ih} e^{zx}\,dz.
\]
Since \(a\) is arbitrary and \(e^{zx}\) is entire, we may take \(a=0\); substituting \(z = it\), \(dz = i\,dt\),
\[
\int_{-ih}^{ih} e^{zx}\,dz = i\int_{-h}^{h} e^{ixt}\,dt,
\]
so
\[
\delta(x) = \frac{d}{dx}\theta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ixt}\,dt. \qquad\text{QED}
\]
LECTURE 21

The equation
\[
-\frac{1}{w}\left(py'\right)' + qy = \lambda y, \tag{21.1}
\]
where \(w, p, q\) are known functions and \(\lambda\) is a parameter, is called the Sturm-Liouville equation.

Equation (21.1) always comes with some boundary conditions which make it a Sturm-Liouville problem. Equation (21.1) is considered on a finite interval \((a,b)\), a half-line \((-\infty,a)\) or \((a,\infty)\), or the whole line \((-\infty,\infty)\). We start with the finite interval case. In the literature this case is also called regular.

Definition 21.3. The Sturm-Liouville problem on \((a,b)\)
\[
-\frac{d}{dx}\left(p(x)\frac{d}{dx}y\right) + w(x)q(x)y = \lambda w(x)y, \qquad y(a) = 0 = y(b)
\]
is called a Dirichlet problem, and the condition \(y(a)=0=y(b)\) is called the Dirichlet conditions.

Definition 21.4. The problem
\[
-\frac{d}{dx}\left(p(x)\frac{d}{dx}y\right) + w(x)q(x)y = \lambda w(x)y, \qquad y'(a) = 0 = y'(b)
\]
is called a Neumann problem.
Definition 21.5. The problem
\[
-\frac{d}{dx}\left(p(x)\frac{d}{dx}y\right) + w(x)q(x)y = \lambda w(x)y, \qquad
\alpha y'(a) + \beta y(a) = 0 = \alpha y'(b) + \beta y(b), \quad \alpha,\beta\in\mathbb{R},
\]
is called a Robin (third-type) problem.

Theorem 21.7. In the space \(L^2((a,b),w)\) with the inner product
\[
\langle f,g\rangle = \int_a^b w(x)f(x)\overline{g(x)}\,dx,
\]
the operator
\[
A = -\frac{1}{w(x)}\frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + q(x)
\]
defined on
\[
\operatorname{Dom}A = \left\{f\in L^2((a,b),w) : \alpha f'(a)+\beta f(a) = 0 = \alpha f'(b)+\beta f(b),\ \alpha,\beta\in\mathbb{R}\right\}
\]
is self-adjoint.
Proof. For \(f,g\in\operatorname{Dom}A\),
\[
\langle Af,g\rangle = \int_a^b w(x)\left[-\frac{1}{w(x)}\left(p(x)f'(x)\right)' + q(x)f(x)\right]\overline{g(x)}\,dx
= -\int_a^b \left(pf'\right)'\overline{g}\,dx + \int_a^b wqf\overline{g}\,dx.
\]
Integrating by parts,
\[
= -pf'\overline{g}\Big|_a^b + \int_a^b pf'\overline{g}'\,dx + \int_a^b wqf\overline{g}\,dx,
\]
and integrating by parts once more,
\[
= -pf'\overline{g}\Big|_a^b + pf\overline{g}'\Big|_a^b - \int_a^b f\left(p\overline{g}'\right)'\,dx + \int_a^b wfq\overline{g}\,dx
= p\left(f\overline{g}' - f'\overline{g}\right)\Big|_a^b + \int_a^b wf\,\overline{\left[-\frac{1}{w}\left(pg'\right)' + qg\right]}\,dx,
\]
and the last integral is \(\langle f, Ag\rangle\). For the integrated terms,
\[
p\left(f\overline{g}' - f'\overline{g}\right)\Big|_a^b
= p(b)\left(f(b)\overline{g'(b)} - f'(b)\overline{g(b)}\right) - p(a)\left(f(a)\overline{g'(a)} - f'(a)\overline{g(a)}\right) = 0,
\]
since by the boundary conditions \(f'\) is proportional to \(f\), and \(g'\) to \(g\), at each endpoint with the same real ratio \(-\beta/\alpha\). So, we get \(\langle Af,g\rangle = \langle f,Ag\rangle\). QED
The operator
\[
H = -\frac{d^2}{dx^2} + q(x)
\]
is called the Schrodinger operator. Actually any Sturm-Liouville problem can be rewritten in terms of \(H\) when put in the canonical form. But \(H\) naturally lives on the whole line, so when the problem is posed on an interval the operator is usually called a Sturm-Liouville operator, and when it is posed on the whole line it is called a Schrodinger operator.
Theorem. The Sturm-Liouville problem
\[
-\frac{1}{w(x)}\frac{d}{dx}\left(p(x)\frac{d}{dx}y\right) + q(x)y = \lambda y \tag{21.2}
\]
can be transformed into the Schrodinger problem
\[
-\frac{d^2u}{dz^2} + \tilde q(z)u = \tilde\lambda u \tag{21.3}
\]
by a suitable substitution.

Proof. Rewrite (21.2) as
\[
-\frac{1}{w}\frac{d}{dx}\left(p\frac{dy}{dx}\right) + (q-\lambda)y = 0. \tag{21.4}
\]
Let
\[
z = \frac{1}{c}\int_a^x \sqrt{\frac{w(s)}{p(s)}}\,ds, \qquad c = \int_a^b \sqrt{\frac{w(x)}{p(x)}}\,dx,
\]
be our new variable. One has
\[
\frac{d}{dx} = \frac{1}{c}\left(\frac{w}{p}\right)^{1/2}\frac{d}{dz},
\]
and (21.4) reads
\[
-\frac{1}{c^2w}\left(\frac{w}{p}\right)^{1/2}\frac{d}{dz}\left(\underbrace{p\left(\frac{w}{p}\right)^{1/2}}_{=(wp)^{1/2}}\frac{dy}{dz}\right) + (q-\lambda)y
= -\frac{1}{c^2(wp)^{1/2}}\frac{d}{dz}\left((wp)^{1/2}\frac{dy}{dz}\right) + (q-\lambda)y = 0. \tag{21.5}
\]
Introduce \(\mu = (wp)^{1/4}\) and set \(u = \mu y\). But
\[
\mu^{-1}\left(\mu^2\left(\mu^{-1}u\right)'\right)'
= \mu^{-1}\left(\mu u' - \mu'u\right)'
= \mu^{-1}\left(\mu u'' - \mu''u\right)
= u'' - \frac{\mu''}{\mu}u,
\]
so multiplying (21.5) by \(-c^2\mu\) turns it into
\[
-u'' + \underbrace{\left(c^2q + \frac{\mu''}{\mu}\right)}_{=\tilde q}u = \underbrace{c^2\lambda}_{=\tilde\lambda}\,u,
\]
which is of the form (21.3). QED
Theorem 21.10. The spectrum of the operator \(A\) in Theorem 21.7 is discrete and simple. (No proof.)

Remark 21.11. Since \(A = A^*\) and \(\sigma(A) = \sigma_d(A)\), the set of its eigenfunctions \(\{y_n(x)\}\) forms an ONB in \(L^2((a,b),w)\).
LECTURE 22
Legendre Polynomials

Consider the operator \(A = -\frac{d}{dx}(1-x^2)\frac{d}{dx}\) on \(L^2(-1,1)\), i.e. the Sturm-Liouville operator with
\[
w(x) = 1, \qquad p(x) = 1-x^2, \qquad q(x) = 0,
\]
defined on
\[
\operatorname{Dom}A = \left\{y\in C[-1,1] : Ay\in L^2(-1,1)\right\}.
\]
Note that it is possible to extend this operator to a larger domain and still show selfadjointness, but the proof is more complicated.

Exercise 22.2. Prove that \(A\) defined above is selfadjoint for \(y\in C^1[-1,1]\).
Let us consider the eigenfunction problem for \(A\). The spectrum of \(A\) is expected to be discrete, and we will find that the solutions associated to each eigenvalue in the spectrum are the so-called Legendre polynomials. The equation \(Ay = \lambda y\), i.e.
\[
-\frac{d}{dx}\left((1-x^2)\frac{d}{dx}y\right) = \lambda y,
\]
is called the Legendre equation. One can check that it is equivalent to
\[
(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + \lambda y = 0, \tag{22.1}
\]
which is a second order linear homogeneous equation. Let us solve it using power series, i.e. by the Frobenius method (see the Appendix in this Lecture). Remark that \(x = \pm 1\) are regular singular points since, for example for \(x = 1\), (22.1) can be rewritten as
\[
y'' + \frac{p(x)}{x-1}\,y' + \frac{q(x)}{(x-1)^2}\,y = 0
\]
with
\[
p(x) = \frac{2x}{x+1}, \qquad q(x) = \lambda\,\frac{1-x}{1+x}.
\]
But here we're not going to use the expansion at the regular singular points but rather at \(x_0 = 0\), i.e. an ordinary point. So we are looking for a solution to (22.1) in the form\(^1\)

\(^1\)Recall that \(C[-1,1]\) is the set of continuous functions \(f\) on \([-1,1]\); so in particular \(\lim_{x\to\pm 1}f(x)\) are finite.
\[
y(x) = \sum_{n\ge 0}c_nx^n \qquad\text{with}\qquad y\in C[-1,1].
\]
We have
\[
(1-x^2)\sum_{n\ge 0}n(n-1)c_nx^{n-2} - 2x\sum_{n\ge 0}nc_nx^{n-1} + \lambda\sum_{n\ge 0}c_nx^n = 0,
\]
i.e.
\[
\sum_{n\ge 0}(n+1)(n+2)c_{n+2}x^n - \sum_{n\ge 0}n(n-1)c_nx^n - \sum_{n\ge 0}2nc_nx^n + \lambda\sum_{n\ge 0}c_nx^n = 0,
\]
which forces
\[
c_n = \frac{(n-2)(n-1)-\lambda}{n(n-1)}\,c_{n-2}, \qquad n = 2,3,\dots \tag{22.2}
\]
This gives us a recursion formula for \(\{c_n\}\). Separating odd and even powers, one has
\[
y(x) = A\sum_{n\ge 0}c_{2n}x^{2n} + B\sum_{n\ge 0}c_{2n+1}x^{2n+1}, \tag{22.3}
\]
where \(A, B\) are arbitrary constants, and for \(\{c_n\}\), by (22.2), we explicitly have
\[
c_0 = 1,\quad c_2 = -\frac{\lambda}{2},\quad
c_4 = \frac{2\cdot 3-\lambda}{4\cdot 3}\,c_2 = \frac{(2\cdot 3-\lambda)(-\lambda)}{2\cdot 3\cdot 4},\quad
c_6 = \frac{4\cdot 5-\lambda}{5\cdot 6}\,c_4 = \frac{(4\cdot 5-\lambda)(2\cdot 3-\lambda)(-\lambda)}{2\cdot 3\cdot 4\cdot 5\cdot 6},\ \dots
\]
\[
c_1 = 1,\quad c_3 = \frac{1\cdot 2-\lambda}{2\cdot 3},\ \dots
\]
From Frobenius theory, we also know that each power series converges for \(|x|<1\). Let us check what is going on at the endpoints.

Note that if \(\lambda\) is an integer of the form \(n(n+1)\), then eventually \(c_{n+2} = 0\); all the following coefficients will then be zero, and we get the Legendre polynomials. We will show that if not, the corresponding function blows up at one of the endpoints (and hence is not in the domain of \(A\)).

We rewrite the even coefficients as follows:
\[
c_{2n} = -\frac{\lambda}{2n}\prod_{m=1}^{n-1}\left(1 - \frac{\lambda}{2m(2m+1)}\right).
\]
Recall that an infinite product \(\prod_m(1+z_m)\) converges (to a nonzero limit) if the series \(\sum_m z_m\) converges absolutely.
Since
\[
\sum_{m=1}^{N}\frac{\lambda}{2m(2m+1)}
= \lambda\sum_{m=1}^{N}\left(\frac{1}{2m}-\frac{1}{2m+1}\right)
= \lambda\sum_{k=2}^{2N+1}\frac{(-1)^k}{k},
\]
the series converges absolutely, and hence the product converges to a finite nonzero limit (no factor vanishes when \(\lambda\neq n(n+1)\)). Consequently \(c_{2n}\sim\mathrm{const}/n\), and the even series in (22.3) diverges at \(x=\pm 1\), so the corresponding solution blows up at the endpoints. A similar reasoning can be used for the odd terms. So we find that if the sum is indeed infinite, i.e. if \(\lambda\neq n(n+1)\) for any nonnegative integer \(n\), then the solution \(y\notin\operatorname{Dom}A\).

Theorem 22.4. The spectrum \(\sigma(A)\) of
\[
A = -\frac{d}{dx}(1-x^2)\frac{d}{dx} \quad\text{on}\quad L^2(-1,1)
\]
is purely discrete and
\[
\sigma(A) = \{n(n+1)\}_{n\in\mathbb{N}_0} = \{0, 2, 6, 12, \dots\}, \qquad \mathbb{N}_0 = \{0,1,2,\dots\}.
\]
Corresponding eigenfunctions, denoted \(P_n\), are called Legendre polynomials. Explicitly,
\[
P_0(x) = 1, \qquad P_1(x) = x, \qquad P_2(x) = \frac{3x^2-1}{2}, \qquad P_3(x) = \frac{5x^3-3x}{2}, \ \dots
\]

[Figure: graphs of \(P_0,\dots,P_3\) on \([-1,1]\).]

Remark 22.5. Note that we don't have to impose any boundary conditions for the operator \(A\) in Theorem 22.4. The requirement that solutions be in \(C[-1,1]\) is a condition (but not a boundary condition in the usual sense).
There is a nice formula, called the Rodrigues formula, for computing Legendre polynomials:
\[
P_n(x) = \frac{1}{n!\,2^n}\frac{d^n}{dx^n}\left(x^2-1\right)^n, \qquad n\in\mathbb{N}_0.
\]
Exercise 22.6. Use the Rodrigues formula to compute
\[
\int_{-1}^{1}P_n^2(x)\,dx,
\]
and verify that
\[
\sum_{n\ge 0}P_n(x)t^n = \frac{1}{\sqrt{1-2xt+t^2}}
\]
(the expression on the left hand side is called the generating function of the Legendre polynomials).
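The Rodrigues formula is easy to check symbolically, together with the orthogonality relation \(\int_{-1}^{1}P_mP_n\,dx = \frac{2}{2n+1}\delta_{mn}\) that Exercise 22.6 leads to. A minimal sketch (assumes sympy is available):

```python
import sympy as sp

x = sp.symbols('x')

def legendre_P(n):
    # Rodrigues formula: P_n(x) = 1/(n! 2^n) d^n/dx^n (x^2 - 1)^n
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

print(legendre_P(2))   # 3*x**2/2 - 1/2
print(legendre_P(3))   # 5*x**3/2 - 3*x/2

# orthogonality and normalization on (-1, 1)
print(sp.integrate(legendre_P(2)*legendre_P(3), (x, -1, 1)))   # 0
print(sp.integrate(legendre_P(3)**2, (x, -1, 1)))              # 2/7, i.e. 2/(2n+1) with n = 3
```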
Appendix. Frobenius Theory

Consider the following ODE:
\[
y'' + P(x)y' + Q(x)y = 0. \tag{22.4}
\]
We can expand \(P(x)\) and \(Q(x)\) in power series and solve by the method of undetermined coefficients, but this becomes unbelievably unwieldy.
Example 22.8. Consider \(y''+y=0\). Write
\[
y = \sum_{n\ge 0}c_nx^n, \qquad
y'' = \sum_{n\ge 2}n(n-1)c_nx^{n-2} = \sum_{k\ge 0}(k+1)(k+2)c_{k+2}x^k.
\]
So we get
\[
\sum_{k\ge 0}(k+1)(k+2)c_{k+2}x^k + \sum_{k\ge 0}c_kx^k
= \sum_{k\ge 0}\underbrace{\left\{(k+1)(k+2)c_{k+2}+c_k\right\}}_{=0}x^k = 0,
\]
i.e. \(c_{k+2} = -\dfrac{c_k}{(k+1)(k+2)}\), \(k = 0,1,2,\dots\) Note that
\[
c_2 = -\frac{c_0}{2},\quad
c_4 = -\frac{c_2}{3\cdot 4} = \frac{(-1)^2c_0}{2\cdot 3\cdot 4},\quad
c_6 = -\frac{c_4}{5\cdot 6} = \frac{(-1)^3c_0}{2\cdot 3\cdot 4\cdot 5\cdot 6},\quad\dots\quad
c_{2n} = \frac{(-1)^n}{(2n)!}\,c_0,
\]
\[
c_3 = -\frac{c_1}{2\cdot 3},\quad
c_5 = -\frac{c_3}{4\cdot 5} = \frac{(-1)^2c_1}{2\cdot 3\cdot 4\cdot 5},\quad\dots\quad
c_{2n+1} = \frac{(-1)^n}{(2n+1)!}\,c_1.
\]
So
\[
y = c_0\sum_{n\ge 0}\frac{(-1)^n}{(2n)!}x^{2n} + c_1\sum_{n\ge 0}\frac{(-1)^n}{(2n+1)!}x^{2n+1}
= c_0\cos x + c_1\sin x,
\]
as expected!
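The recursion \(c_{k+2} = -c_k/((k+1)(k+2))\) can also be iterated numerically to confirm that it reproduces the cosine and sine coefficients (a small sketch, not from the notes):

```python
from math import factorial

def series_coeffs(c0, c1, N):
    # iterate c_{k+2} = -c_k / ((k+1)(k+2)), coming from y'' + y = 0
    c = [0.0]*N
    c[0], c[1] = c0, c1
    for k in range(N - 2):
        c[k + 2] = -c[k] / ((k + 1)*(k + 2))
    return c

c = series_coeffs(1.0, 0.0, 12)     # c0 = 1, c1 = 0 should give cos x
print(c[4], 1/factorial(4))         # both 1/24: c_{2n} = (-1)^n/(2n)!
print(c[6], -1/factorial(6))        # both -1/720
```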
absolutely convergent on DR (x0 ). Moreover, the general solution to (22.4) has the form
of (22.5).
Example 22.11. Consider the Stokes equation:
\[
y'' - xy = 0.
\]
Note that any point \(x_0\) is ordinary since \(P, Q\) are entire. Hence we can choose \(x_0 = 0\) for simplicity. We have
\[
y = \sum_{n\ge 0}c_nx^n, \qquad y'' = \sum_{k\ge 0}(k+1)(k+2)c_{k+2}x^k.
\]
Then
\[
\sum_{k\ge 0}(k+1)(k+2)c_{k+2}x^k - \sum_{n\ge 0}c_nx^{n+1}
= 2c_2 + \sum_{k\ge 1}\left\{(k+1)(k+2)c_{k+2} - c_{k-1}\right\}x^k = 0,
\]
so \(c_2 = 0\) and \(c_{k+2} = \dfrac{c_{k-1}}{(k+1)(k+2)}\), \(k\ge 1\). In particular \(c_{3n+2} = 0\) for all \(n\), and the solution splits as
\[
y = c_0\sum_{n\ge 0}\tilde c_{3n}x^{3n} + c_1\sum_{n\ge 0}\tilde c_{3n+1}x^{3n+1},
\]
two series also known as the Airy functions \(A(x), B(x)\). These are not elementary functions, but they are special functions. And so sometimes the Stokes equation is referred to as part of the Airy family.

Exercise 22.12. Use the power series method to solve
\[
y'' - xy = 1, \qquad y(0) = y'(0) = 0.
\]
(Note that this form is called the standard form; the canonical form would be \(y'' = q(x)y\).)

Note that there would be no loss of generality in taking \(x_0 = 0\), since we can change variables.

Definition 22.15. If \(x_0\) is a singular point but not a regular singular point, then \(x_0\) is an irregular singular point.

Example 22.16. Consider
\[
(x^2-4)^2y'' + (x-2)y' + y = 0,
\]
i.e.
\[
y'' + \frac{p(x)}{x-2}\,y' + \frac{q(x)}{(x-2)^2}\,y = 0,
\]
where \(p(x) = q(x) = \frac{1}{(x+2)^2}\). So \(x = 2\) is a regular singular point, but \(x = -2\) is an irregular singular point, and all other points are ordinary. There are treatments for some irregular singular points, but they cause severe problems and are not in most books.
Example 22.18 (Two series solutions). Consider the equation \(3xy''+y'-y=0\). The point \(x=0\) is a regular singular point, so we look for a solution \(y(x) = \sum_{n\ge 0}c_nx^{n+r}\). So
\[
0 = 3xy'' + y' - y
= \sum_{n\ge 0}3c_n(n+r)(n+r-1)x^{n+r-1} + \sum_{n\ge 0}c_n(n+r)x^{n+r-1} - \sum_{n\ge 0}c_nx^{n+r}
\]
\[
= c_0x^{r-1}\left(3r(r-1)+r\right) + \sum_{k\ge 1}\left\{c_k(k+r)(3k+3r-2) - c_{k-1}\right\}x^{k+r-1},
\]
which gives
\[
r(3r-2) = 0, \qquad c_k = \frac{c_{k-1}}{(k+r)(3k+3r-2)}, \quad k = 1,2,\dots
\]
The equation \(r(3r-2)=0\) is referred to as the indicial equation and leads to two possible solutions: \(r_1 = 2/3\) and \(r_2 = 0\).

For \(r_1 = 2/3\):
\[
c_k = \frac{c_{k-1}}{\left(k+\frac{2}{3}\right)\cdot 3k} = \frac{c_{k-1}}{k(3k+2)}, \qquad k = 1,2,\dots,
\]
so
\[
c_0\neq 0,\quad c_1 = \frac{c_0}{5},\quad c_2 = \frac{c_1}{2\cdot 8} = \frac{c_0}{2\cdot 5\cdot 8},\ \dots,\quad
c_k = \frac{c_0}{\prod_{n=1}^{k}n(3n+2)} = \frac{c_0}{k!\,5\cdot 8\cdots(3k+2)},
\]
and
\[
y_1(x) = c_0\,x^{2/3}\left(1 + \sum_{n\ge 1}\frac{x^n}{n!\,5\cdot 8\cdots(3n+2)}\right), \qquad x\in\mathbb{R}.
\]
For \(r_2 = 0\):
\[
c_k = \frac{c_{k-1}}{k(3k-2)}, \qquad k = 1,2,\dots,
\]
\[
y_2(x) = c_0\left(1 + \sum_{n\ge 1}\frac{x^n}{n!\,1\cdot 4\cdot 7\cdots(3n-2)}\right), \qquad x\in\mathbb{R}.
\]
The powers are different, so we have two linearly independent power series solutions!
Remark 22.19. This example says that there could be two series solutions in the Frobenius Theorem. Now the second example seems very similar.

Example 22.20 (One series solution). Consider the equation \(xy''+3y'-y=0\). By the Frobenius theorem,
\[
y(x) = x^r\sum_{n\ge 0}c_nx^n, \qquad
y'(x) = \sum_{n\ge 0}c_n(n+r)x^{n+r-1}, \qquad
y''(x) = \sum_{n\ge 0}c_n(n+r)(n+r-1)x^{n+r-2}.
\]
So
\[
0 = xy'' + 3y' - y
= \sum_{n\ge 0}\left\{(n+r)(n+r-1)+3(n+r)\right\}c_nx^{n+r-1} - \sum_{n\ge 0}c_nx^{n+r},
\]
and the indicial equation is \(r(r+2)=0\), with roots \(r_1 = 0\) and \(r_2 = -2\) differing by an integer. For \(r_1 = 0\) the recursion \(c_k = \frac{c_{k-1}}{k(k+2)}\), \(k\ge 1\), gives
\[
c_n = \frac{2c_0}{n!\,(n+2)!}, \qquad
y_1(x) = 2c_0\sum_{n\ge 0}\frac{x^n}{n!\,(n+2)!}, \qquad x\in\mathbb{R}.
\]
For \(r_2 = -2\) the recursion breaks down (the coefficient \(k(k-2)\) vanishes at \(k=2\)), and working through it one finds that the second Frobenius series is the same as \(y_1\). So there exists only one series solution.

Remark 22.21. Compare Example 22.18 and Example 22.20. There is not much difference between the two. However, the first one has two series solutions and the second one only one.

Exercise 22.22 (From a physics qualifying exam). Solve the differential equation
\[
x^2y'' + 2xy' + (x^2-2)y = 0
\]
using the Frobenius method. That is, assume a solution of the form
\[
y = \sum_{n\ge 0}c_nx^{n+k}.
\]
The general statement behind these examples: write (22.4) near a regular singular point \(x_0\) as
\[
y'' + \frac{p(x)}{x-x_0}\,y' + \frac{q(x)}{(x-x_0)^2}\,y = 0, \qquad
p(x) = \sum_{n\ge 0}p_n(x-x_0)^n, \quad q(x) = \sum_{n\ge 0}q_n(x-x_0)^n,
\]
and let \(r_1, r_2\) (\(\operatorname{Re}r_1\ge\operatorname{Re}r_2\)) be the roots of the indicial equation. Then:

Case 1 (\(r_1-r_2\) not an integer):
\[
y_1(x) = \sum_{n\ge 0}c_n(x-x_0)^{n+r_1}, \quad c_0\neq 0, \qquad
y_2(x) = \sum_{n\ge 0}b_n(x-x_0)^{n+r_2}, \quad b_0\neq 0.
\]
Case 2 (\(r_1-r_2\) a positive integer):
\[
y_2(x) = Cy_1(x)\ln(x-x_0) + \sum_{n\ge 0}b_n(x-x_0)^{n+r_2}, \quad b_0\neq 0,
\]
where \(C\) is a constant (possibly zero).

Case 3 (\(r_1 = r_2\)):
\[
y_1(x) = \sum_{n\ge 0}c_n(x-x_0)^{n+r_1}, \quad c_0\neq 0, \qquad
y_2(x) = y_1(x)\ln(x-x_0) + \sum_{n\ge 1}b_n(x-x_0)^{n+r_2}.
\]
LECTURE 23
Harmonic Oscillator

Let us consider the equation
\[
-\frac{d^2}{dx^2}u + x^2u = \lambda u, \qquad -\infty < x < \infty. \tag{23.1}
\]
This equation appears in Quantum Mechanics and is a specific case of the Schrodinger equation
\[
-u'' + q(x)u = \lambda u.
\]
We view (23.1) as \(Au = \lambda u\), where \(A = -\frac{d^2}{dx^2}+x^2\) in \(L^2(\mathbb{R})\). The term \(-\frac{d^2}{dx^2}\) represents the kinetic energy, and the term \(x^2\) the potential energy.

Clearly, \(A\) is a Sturm-Liouville operator, but in \(L^2\) on the whole line \((-\infty,\infty)\). Note at this point that you may wonder what basis would work for \(L^2(\mathbb{R})\): polynomials work locally but blow up at \(\pm\infty\), so they won't work for \(\mathbb{R}\); harmonics \((e^{inx})\) are not in \(L^2(\mathbb{R})\), so they won't work either; plus a basis would require some decay at infinity. So it is not obvious to find something that would work.

Theorem 23.1. \(A = A^*\).

Proof. Note first that if \(f(x)\in L^2(\mathbb{R})\) then \(\lim_{x\to\pm\infty}f(x) = 0\) (we take this for granted here). For \(f,g\in\operatorname{Dom}A\), we have
\[
\langle Af,g\rangle = \int\left(-f''(x)+x^2f(x)\right)\overline{g(x)}\,dx
= -\int f''\overline{g}\,dx + \int x^2f\overline{g}\,dx
\]
\[
\overset{\text{by parts}}{=}\ \underbrace{-f'\overline{g}\Big|_{-\infty}^{\infty}}_{=0} + \int f'\overline{g}'\,dx + \int x^2f\overline{g}\,dx
\ \overset{\text{by parts}}{=}\ \underbrace{f\overline{g}'\Big|_{-\infty}^{\infty}}_{=0} - \int f\overline{g}''\,dx + \int x^2f\overline{g}\,dx
\]
\[
= \int f(x)\,\overline{\left(-g''(x)+x^2g(x)\right)}\,dx = \langle f, Ag\rangle. \qquad\text{QED}
\]
Now we are going to find the spectrum of \(A\). Recall that \(H = -\frac{d^2}{dx^2}\) has only a continuous spectrum with no eigenvalues.
Let us substitute
\[
u(x) = e^{-x^2/2}y(x).
\]
Then
\[
u'(x) = -xe^{-x^2/2}y + e^{-x^2/2}y',
\]
\[
u''(x) = \left(x^2e^{-x^2/2} - e^{-x^2/2}\right)y - xe^{-x^2/2}y' - xe^{-x^2/2}y' + e^{-x^2/2}y''
= e^{-x^2/2}\left(y'' - 2xy' - (1-x^2)y\right),
\]
so
\[
-u'' + x^2u = e^{-x^2/2}\left(-y'' + 2xy' + y - x^2y + x^2y\right) = \lambda e^{-x^2/2}y,
\]
i.e. \(-y'' + 2xy' + y = \lambda y\), or
\[
y'' - 2xy' + (\lambda-1)y = 0, \qquad x\in\mathbb{R}. \tag{23.2}
\]
Look for a power series solution \(y = \sum_{n\ge 0}c_nx^n\), \(y'' = \sum_{n\ge 0}(n+1)(n+2)c_{n+2}x^n\). Substituting into (23.2),
\[
\sum_{n\ge 0}\left\{(n+1)(n+2)c_{n+2} - 2nc_n + (\lambda-1)c_n\right\}x^n = 0,
\]
so
\[
c_{n+2} = \frac{2n+1-\lambda}{(n+1)(n+2)}\,c_n, \qquad n\ge 0,
\]
and the general solution is \(y = c_0y_1 + c_1y_2\) with \(y_1\) even (\(c_0\neq 0\)) and \(y_2\) odd (\(c_1\neq 0\)).

Suppose the series does not terminate. For the even solution,
\[
\frac{c_{n+2}}{c_n} = \frac{2n+1-\lambda}{(n+1)(n+2)} \sim \frac{2}{n+2} \sim \frac{1}{1+n/2},
\]
which is the same asymptotic ratio as for the coefficients of
\[
e^{x^2} = \sum_{n\ge 0}\frac{x^{2n}}{n!},
\]
so \(y_1(x)\sim e^{x^2}\); similarly \(y_2(x)\sim xe^{x^2}\). Hence
\[
u(x) = e^{-x^2/2}y(x) \sim c_0e^{x^2/2} + c_1xe^{x^2/2} \notin L^2(\mathbb{R}).
\]
So \(u\in L^2(\mathbb{R})\) only if the series terminates, i.e. \(\lambda = 2m+1\) for some \(m\in\mathbb{N}_0\).

Theorem. Let \(A = -\frac{d^2}{dx^2}+x^2\) on \(L^2(\mathbb{R})\). Then
\[
\sigma(A) = \sigma_d(A) = \{2m+1\}_{m\in\mathbb{N}_0}.
\]
Associated eigenfunctions are \(e^{-x^2/2}y_m(x)\), where the polynomials \(\{y_m(x)\}\) are called Hermite polynomials and usually denoted by \(H_m(x)\). Explicitly,
\[
H_0(x) = 1,\quad H_1(x) = 2x,\quad H_2(x) = 4x^2-2,\quad H_3(x) = 8x^3-12x, \ \dots
\]
This also means that the Hermite polynomials \(\{H_m(x)\}\) are orthogonal in the weighted space \(L^2(\mathbb{R}, e^{-x^2})\). So a good basis in \(L^2(\mathbb{R})\) looks like a polynomial weighted by \(e^{-x^2/2}\) to ensure decay.

There are some nice formulas for \(\{H_m(x)\}\):
\[
H_m(x) = (-1)^me^{x^2}\frac{d^m}{dx^m}e^{-x^2}.
\]
Or, \(H_m\) can be computed from the Taylor expansion of the generating function \(G(x,t)\),
\[
G(x,t) = e^{-t^2+2tx} = \sum_{m\ge 0}H_m(x)\frac{t^m}{m!}. \tag{23.4}
\]
Let us compute the norms:
\[
\left\|e^{-x^2/2}H_m\right\|^2 = \int_{\mathbb{R}}e^{-x^2}H_m^2(x)\,dx
= (-1)^m\int_{\mathbb{R}}H_m(x)\frac{d^m}{dx^m}e^{-x^2}\,dx
\]
\[
\overset{\text{by parts}}{=}\
\underbrace{(-1)^mH_m(x)\frac{d^{m-1}}{dx^{m-1}}e^{-x^2}\Big|_{-\infty}^{\infty}}_{=0}
+ (-1)^{m-1}\int_{\mathbb{R}}H_m'(x)\frac{d^{m-1}}{dx^{m-1}}e^{-x^2}\,dx.
\]
By a direct computation,
\[
H_m'(x) = 2mH_{m-1}(x),
\]
and iterating the integration by parts \(m\) times we arrive at
\[
\left\|e^{-x^2/2}H_m\right\|^2 = 2^mm!\int_{\mathbb{R}}e^{-x^2}\,dx = 2^mm!\sqrt{\pi}.
\]
So, for (23.1) we get the orthonormal eigenfunctions
\[
u_m(x) = \frac{1}{\sqrt{2^mm!\sqrt{\pi}}}\,e^{-x^2/2}H_m(x), \qquad m = 0,1,2,\dots \tag{23.5}
\]
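Both the formula for \(H_m\) and the norm \(\|e^{-x^2/2}H_m\|^2 = 2^mm!\sqrt{\pi}\) can be verified symbolically (sympy assumed available):

```python
import sympy as sp

x = sp.symbols('x')

def hermite_H(m):
    # H_m(x) = (-1)^m e^{x^2} d^m/dx^m e^{-x^2}
    return sp.expand((-1)**m * sp.exp(x**2) * sp.diff(sp.exp(-x**2), x, m))

print(hermite_H(2))   # 4*x**2 - 2
print(hermite_H(3))   # 8*x**3 - 12*x

# weighted norm: int e^{-x^2} H_m(x)^2 dx = 2^m m! sqrt(pi); for m = 2 this is 8*sqrt(pi)
print(sp.integrate(sp.exp(-x**2)*hermite_H(2)**2, (x, -sp.oo, sp.oo)))
```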
LECTURE 24

Recall that every \(f\in L^2(-\pi,\pi)\) can be expanded into a Fourier series
\[
f(x) = \sum_{n\in\mathbb{Z}}\hat f(n)e^{inx}. \tag{24.1}
\]
It is common to adopt the notation \(c_n = \hat f(n)\) for the Fourier coefficients. By making a scale transformation as seen in Theorem 15.9 and Exercise 15.10, every function \(f(x)\in L^2(-T,T)\) can be expanded into a Fourier series:
\[
f(x) = \sum_{n\in\mathbb{Z}}\hat f(n)e^{\frac{i\pi nx}{T}}, \qquad
\hat f(n) = \frac{1}{2T}\int_{-T}^{T}f(x)e^{-\frac{i\pi nx}{T}}\,dx. \tag{24.2}
\]
The coefficients \(\hat f(n)\) represent how much of each harmonic is present in the signal — the weight of each harmonic. The smoother the function, the faster \(\hat f(n)\) decays.

In other words, a decent function \(f(x)\) defined on a finite interval \((-T,T)\) can be represented by (24.2). It is natural to ask: what if \(f(x)\) is defined on the whole line \((-\infty,\infty)\)?

Well, if \(f(x)\) is periodic with period \(2T\) as in the figure below, then (24.2) remains valid outside of \((-T,T)\).

[Figure: a \(2T\)-periodic function shown on \((-3T,3T)\).]

But what if \(f(x)\) is not periodic, like the example in the figure below?

[Figure: a non-periodic function shown on \((-3T,3T)\).]
Let us see what's going on with (24.2) as \(T\to\infty\). This limiting procedure is not trivial since none of the formulas (24.2) admit switching \(T\) for \(\infty\).

Introduce a new quantity
\[
\omega_n = \frac{\pi n}{T};
\]
then (24.2) reads
\[
f(x) = \sum_{n\in\mathbb{Z}}\hat f(n)e^{i\omega_nx}, \qquad
\hat f(n) = \frac{1}{2T}\int_{-T}^{T}f(x)e^{-i\omega_nx}\,dx. \tag{24.3}
\]
Note that
\[
\frac{1}{2T} = \frac{1}{2\pi}\cdot\frac{\pi}{T} = \frac{1}{2\pi}\underbrace{\left(\omega_n-\omega_{n-1}\right)}_{=:\Delta\omega_n},
\]
so
\[
f(x) = \frac{1}{\sqrt{2\pi}}\sum_{n\in\mathbb{Z}}\underbrace{\left(\frac{1}{\sqrt{2\pi}}\int_{-T}^{T}f(x)e^{-i\omega_nx}\,dx\right)}_{=:\hat f_n}e^{i\omega_nx}\,\Delta\omega_n,
\qquad
\hat f_n = \frac{1}{\sqrt{2\pi}}\int_{-T}^{T}f(x)e^{-i\omega_nx}\,dx.
\]
Let
\[
F(\omega_n) \equiv \lim_{T\to\infty}\hat f_n = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-i\omega_nx}\,dx.
\]
Note that the above is not rigorous, since \(\omega_n\to 0\) for each fixed \(n\) when \(T\to\infty\) (there is a hidden \(T\) in \(\omega_n\)); but for any large value of \(T\) there are infinitely many \(n\)-values still big enough to compensate for the large \(T\), so since we're looking at the overall limit, we press on. Then,
\[
f(x) = \frac{1}{\sqrt{2\pi}}\sum_{n\in\mathbb{Z}}\hat f_ne^{i\omega_nx}\,\Delta\omega_n
\ \overset{\text{looks like}}{\longrightarrow}\
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}F(\omega)e^{i\omega x}\,d\omega,
\]
as in a Riemann sum. So we get
\[
f(x) = \frac{1}{\sqrt{2\pi}}\int e^{i\omega x}F(\omega)\,d\omega \ \text{(inverse Fourier transform)}, \qquad
F(\omega) = \frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx \ \text{(Fourier transform)}, \tag{24.4}
\]
which are continuous analogs of (24.1) or (24.2). Another notation for \(F(\omega)\) is \(\hat f(\omega)\). This approach is by no means rigorous, but it prompts a very important concept: the concept of the Fourier transform.
2. Fourier Transform

Note that \(\hat f(\omega)\) represents how much of the function has frequency \(\omega\); with a continuum of frequencies we expect \(\hat f(\omega)\to 0\) as \(\omega\to\pm\infty\), i.e. the relative weight of harmonics should decrease as frequencies increase.

Definition 24.1. Let \(f(x)\) be a function from \(L^1(\mathbb{R})\). Then the function \(\hat f(\omega)\), \(\omega\in\mathbb{R}\), defined by the formula
\[
\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx, \qquad \omega\in\mathbb{R},
\]
is called the Fourier transform of \(f(x)\), also denoted \(\hat f = \mathcal{F}f\).

I claim that there is no physicist unaware of this concept!

Theorem 24.2. If \(f(x)\in L^1(\mathbb{R})\) then \(\hat f(\omega)\) exists.

Proof. If \(f(x)\in L^1(\mathbb{R})\) then \(e^{-i\omega x}f(x)\in L^1(\mathbb{R})\) since \(\left|e^{-i\omega x}f(x)\right| = |f(x)|\). Moreover,
\[
\left|\hat f(\omega)\right| = \left|\frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx\right|
\le \frac{1}{\sqrt{2\pi}}\int\left|e^{-i\omega x}f(x)\right|dx = \frac{1}{\sqrt{2\pi}}\int|f(x)|\,dx. \qquad\text{QED}
\]
We are not proving the above theorem for \(L^2(\mathbb{R})\) since it's a bit too complicated. But basically, oscillations at high frequencies compensate for slower rates of decay.
Remark 24.3. Equation (24.4) on the previous page suggests that the transform can be inverted; we return to this in Lecture 26.

Example 24.4. Let \(f(x) = e^{-a|x|}\), \(a>0\). Then
\[
\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\left(\int_{-\infty}^{0}e^{(a-i\omega)x}\,dx + \int_{0}^{\infty}e^{-(a+i\omega)x}\,dx\right)
= \frac{1}{\sqrt{2\pi}}\left(\frac{1}{a-i\omega} + \frac{1}{a+i\omega}\right)
= \sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\omega^2}. \tag{24.5}
\]
Thus,
\[
\widehat{e^{-a|x|}}(\omega) = \sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\omega^2},
\]
and by (24.4),
\[
e^{-a|x|} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{i\omega x}\sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\omega^2}\,d\omega,
\]
or,
\[
\int_{\mathbb{R}}\frac{\cos\omega x}{a^2+\omega^2}\,d\omega = \frac{\pi}{a}\,e^{-a|x|}. \tag{24.6}
\]
A nice, valuable formula for free. (Do you remember seeing this before?)

Exercise 24.5. Let
\[
f(x) = \begin{cases}1, & |x|\le a,\\ 0, & |x|>a.\end{cases}
\]
Compute \(\hat f(\omega)\) and then use (24.5) to derive a curious integral similar to (24.6).
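Formula (24.6) is a quick numerical check away; a minimal sketch using scipy's oscillatory quadrature (the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, xv = 1.3, 0.7
# int_R cos(w x)/(a^2 + w^2) dw = 2 * int_0^inf cos(w x)/(a^2 + w^2) dw  (even integrand)
half, _ = quad(lambda w: 1.0/(a**2 + w**2), 0, np.inf, weight='cos', wvar=xv)
lhs = 2*half
rhs = (np.pi/a)*np.exp(-a*abs(xv))
print(lhs, rhs)   # both ~ 0.973
```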
Note that in Example 24.4 both the function and its Fourier transform are real. This is not always the case.

Proposition 24.6.
(i) If \(f\) is real, then \(\hat f(-\omega) = \overline{\hat f(\omega)}\).
(ii) (symmetry) \(\hat f(-\omega) = \hat f(\omega)\) \(\iff\) \(f(-x) = f(x)\), i.e. \(\hat f\) is even if and only if \(f\) is even.

For the derivative, integrating by parts,
\[
\widehat{f'}(\omega) = \frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f'(x)\,dx
= \underbrace{\frac{1}{\sqrt{2\pi}}\,e^{-i\omega x}f(x)\Big|_{-\infty}^{\infty}}_{=0}
+ i\omega\cdot\frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx,
\]
i.e.
\[
\widehat{f'}(\omega) = i\omega\hat f(\omega).
\]
Exercise 24.12. Show
\[
\widehat{e^{-ax^2}}(\omega) = \frac{e^{-\frac{\omega^2}{4a}}}{\sqrt{2a}}.
\]
(Note that \(f\) is even.)
LECTURE 25

Suppose \(f\) has \(n\) integrable derivatives. Applying the rule \(\widehat{f'}(\omega) = i\omega\hat f(\omega)\) \(n\) times,
\[
\hat f(\omega) = \frac{1}{(i\omega)^n}\,\widehat{f^{(n)}}(\omega)
= \frac{1}{(i\omega)^n}\cdot\frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f^{(n)}(x)\,dx,
\]
so
\[
\left|\hat f(\omega)\right| \le \frac{C}{|\omega|^n}, \qquad
C = \frac{1}{\sqrt{2\pi}}\int\left|f^{(n)}(x)\right|dx. \qquad\text{QED}
\]
This implies that if \(f\) is smooth, i.e. infinitely differentiable, then \(\hat f\) decays faster than any power. Also, unless \(f\) is rough, the high frequencies have less of a role, so we can cut them off.

Definition 25.3. Given functions \(f,g\), the convolution \(f*g\) of these functions is defined as
\[
(f*g)(x) = \frac{1}{\sqrt{2\pi}}\int f(s)g(x-s)\,ds.
\]
Theorem (convolution). \(\widehat{f*g}(\omega) = \hat f(\omega)\hat g(\omega)\).

Proof.
\[
\widehat{f*g}(\omega) = \frac{1}{2\pi}\iint e^{-i\omega x}f(s)g(x-s)\,ds\,dx
\ \overset{\substack{x-s=t\\ x=t+s}}{=}\
\frac{1}{2\pi}\int\left(\int e^{-i\omega x}g(x-s)\,dx\right)f(s)\,ds
\]
\[
= \frac{1}{2\pi}\int\left(\int e^{-i\omega t-i\omega s}g(t)\,dt\right)f(s)\,ds
= \frac{1}{\sqrt{2\pi}}\int e^{-i\omega s}f(s)\underbrace{\left(\frac{1}{\sqrt{2\pi}}\int e^{-i\omega t}g(t)\,dt\right)}_{=\hat g(\omega)}\,ds
= \hat f(\omega)\,\hat g(\omega). \qquad\text{QED}
\]
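The convolution theorem has an exact discrete analogue (without the \(1/\sqrt{2\pi}\) factor): the DFT of a circular convolution is the product of DFTs. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)

# via the convolution theorem: take DFTs, multiply, invert
conv_fft = np.real(np.fft.ifft(np.fft.fft(f)*np.fft.fft(g)))

# direct circular convolution for comparison
direct = np.array([sum(f[m]*g[(n - m) % 64] for m in range(64)) for n in range(64)])

print(np.max(np.abs(conv_fft - direct)))   # tiny: the two agree to rounding error
```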
Indeed,
\[
\left\|\hat f\right\|^2 = \int_{(-1,1)}\left|\hat f(\omega)\right|^2d\omega + \int_{\mathbb{R}\setminus(-1,1)}\left|\hat f(\omega)\right|^2d\omega
\]
\[
\le \underbrace{\left(\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}|f(x)|\,dx\right)^2\int_{-1}^{1}d\omega}_{\text{by Theorem 24.2}}
+ \underbrace{\left(\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}|f'(x)|\,dx\right)^2\int_{\mathbb{R}\setminus(-1,1)}\frac{d\omega}{\omega^2}}_{\text{by Corollary 25.2}} < \infty,
\]
since \(\int_{-1}^{1}d\omega = \int_{\mathbb{R}\setminus(-1,1)}\frac{d\omega}{\omega^2} = 2\).
So \(\hat f,\hat g\in L^2(\mathbb{R})\) and the integral
\[
\left\langle\hat f,\hat g\right\rangle = \int\hat f(\omega)\overline{\hat g(\omega)}\,d\omega \tag{25.1}
\]
makes sense. Since all the integrals here are absolutely convergent, we can rearrange the order of integration, and (25.1) reads
\[
\left\langle\hat f,\hat g\right\rangle
= \iint\underbrace{\left(\frac{1}{2\pi}\int e^{-i\omega(x-s)}\,d\omega\right)}_{=\delta(s-x)}f(x)\overline{g(s)}\,dx\,ds
= \int f(x)\underbrace{\left(\int\delta(s-x)\overline{g(s)}\,ds\right)}_{=\overline{g(x)}}dx
= \int f(x)\overline{g(x)}\,dx = \langle f,g\rangle.
\]
So, we get that for all \(f,g\in C_0^\infty(\mathbb{R})\), \(\langle\hat f,\hat g\rangle = \langle f,g\rangle\), i.e.
\[
\langle\mathcal{F}f,\mathcal{F}g\rangle = \langle f,g\rangle. \tag{25.2}
\]
QED

Observe now that the right side of (25.2) exists not only for \(C_0^\infty\)-functions but for any \(f,g\in L^2(\mathbb{R})\). (Indeed, by the Cauchy inequality \(|\langle f,g\rangle|\le\|f\|\|g\|\).) This suggests that the left hand side of (25.2) exists too for all \(f,g\in L^2(\mathbb{R})\), which, in turn, means that the natural domain of the Fourier operator is not \(C_0^\infty(\mathbb{R})\) or \(L^1(\mathbb{R})\) but \(L^2(\mathbb{R})\). Rigorous proofs of this can be found in advanced textbooks.

Although the integral defining \(\hat f\) in general does not converge absolutely for \(f(x)\in L^2(\mathbb{R})\), the limit
\[
\hat f(\omega) = \lim_{N\to\infty}\frac{1}{\sqrt{2\pi}}\int_{-N}^{N}e^{-i\omega x}f(x)\,dx \tag{25.3}
\]
exists in \(L^2\), and the Parseval identity holds:
\[
\int\left|\hat f(\omega)\right|^2d\omega = \int|f(x)|^2\,dx. \tag{25.4}
\]
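Parseval's identity (25.4) can be confirmed on Example 24.4, where both sides are computable in closed form: with \(f = e^{-a|x|}\), \(\int|f|^2 = 1/a\), and \(\hat f\) is given by (24.5). A sketch:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
# left side of (25.4), with fhat(w) = sqrt(2/pi) * a/(a^2 + w^2) from (24.5)
lhs, _ = quad(lambda w: (2/np.pi)*a**2/(a**2 + w**2)**2, -np.inf, np.inf)
# right side: int e^{-2a|x|} dx = 2 * int_0^inf e^{-2ax} dx = 1/a
rhs = 2*quad(lambda x: np.exp(-2*a*x), 0, np.inf)[0]
print(lhs, rhs)   # both 0.5 for a = 2
```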
LECTURE 26

Theorem 26.1. For \(f\in L^2(\mathbb{R})\),
\[
f(x) = \frac{1}{\sqrt{2\pi}}\int e^{i\omega x}\hat f(\omega)\,d\omega, \qquad
\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx. \tag{26.1}
\]
Proof. By Theorem 25.9, the Fourier operator \(\mathcal{F}\) is invertible and hence for all \(f\in L^2(\mathbb{R})\),
\[
\mathcal{F}^{-1}\mathcal{F}f = f. \tag{26.2}
\]
For simplicity, let \(f\) be smooth. Then (26.2), by (25.4), reads: for all \(x\in\mathbb{R}\),
\[
\frac{1}{\sqrt{2\pi}}\int e^{i\omega x}\left(\frac{1}{\sqrt{2\pi}}\int e^{-i\omega s}f(s)\,ds\right)d\omega = f(x),
\]
which is exactly (26.1). QED

Note that if \(f\) is even or odd, we get formulas over \([0,\infty)\) only, with \(\cos\) or \(\sin\); these are still called Fourier (cosine/sine) transforms. What about points of irregularity?

Remark 26.2. In view of Remark 25.5, we claim that (26.1) holds under the only condition \(f\in L^2(\mathbb{R})\) if we understand the integrals in (26.1) as
\[
\int_{\mathbb{R}} = \lim_{R\to\infty}\int_{-R}^{R}. \tag{26.3}
\]
Note, however, that (26.1) then holds not for all \(x\in\mathbb{R}\) but, roughly speaking, for those \(x\) at which our function \(f(x)\) is defined/differentiable. Points of discontinuity of \(f(x)\) are troublesome, as the next example shows. Indeed, Carleson showed that even for continuous \(f\) the representation converges only almost everywhere, although it will be fine at any point where \(f\) is differentiable.
Example 26.3. Let
\[
f(x) = \begin{cases}1, & 0\le x<1,\\ 0, & \text{otherwise.}\end{cases}
\]
Then
\[
\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int e^{-i\omega x}f(x)\,dx
= \frac{1}{\sqrt{2\pi}}\int_0^1e^{-i\omega x}\,dx
= \frac{1}{\sqrt{2\pi}}\left.\frac{e^{-i\omega x}}{-i\omega}\right|_0^1
= \frac{1}{\sqrt{2\pi}}\,\frac{1-e^{-i\omega}}{i\omega}.
\]
By Theorem 26.1, for \(x\neq 0,1\),
\[
f(x) = \frac{1}{\sqrt{2\pi}}\int\hat f(\omega)e^{i\omega x}\,d\omega. \tag{26.4}
\]
Let's see what happens, say, at \(x=0\). The right side of (26.4) then becomes
\[
\frac{1}{2\pi}\int\frac{1-e^{-i\omega}}{i\omega}\,d\omega
\ \overset{\omega\to-\omega}{=}\ \frac{1}{2\pi i}\int\frac{e^{i\omega}-1}{\omega}\,d\omega.
\]
Note that we get the same expression when setting \(x=1\).
This integral is kind of tricky since it is absolutely divergent. We have to use Complex Analysis to evaluate it. Actually, contour integrals, the residue theorem, etc. are the usual tools for computing Fourier integrals.

Take the closed contour consisting of the real segments \((-R,-\varepsilon)\), \((\varepsilon,R)\), the small upper semicircle \(C_\varepsilon^+\) around 0, and the large upper semicircle \(C_R^+\). Since \(\omega = 0\) is a removable singularity of \(\frac{e^{i\omega}-1}{\omega}\),
\[
\operatorname{Res}\left(\frac{e^{i\omega}-1}{\omega}, 0\right) = 0,
\]
and by the Cauchy theorem
\[
I_1 + I_2 + I_3 + I_4 = 0, \tag{26.5}
\]
where \(I_1, I_2\) are the integrals of \(\frac{1}{2\pi i}\frac{e^{i\omega}-1}{\omega}\) over \(C_R^+\) and \(C_\varepsilon^+\), and \(I_3, I_4\) those over the real segments. For \(I_1\),
\[
I_1 = \frac{1}{2\pi i}\int_{C_R^+}\frac{e^{i\omega}-1}{\omega}\,d\omega
= \underbrace{\frac{1}{2\pi i}\int_{C_R^+}\frac{e^{i\omega}}{\omega}\,d\omega}_{\to 0,\ R\to\infty\ \text{(Jordan)}}
- \frac{1}{2\pi i}\int_{C_R^+}\frac{d\omega}{\omega}, \tag{26.6}
\]
and the last integral is independent of \(R\): over the arc traversed from \(R\) to \(-R\) it equals \(i\pi\), so \(\lim_{R\to\infty}I_1 = -\frac{1}{2}\). For \(I_2\),
\[
|I_2| = \left|\frac{1}{2\pi i}\int_{C_\varepsilon^+}\frac{e^{i\omega}-1}{\omega}\,d\omega\right|
\le \frac{1}{2\pi}\int_{C_\varepsilon^+}\frac{|e^{i\omega}-1|}{|\omega|}\,|d\omega|,
\]
where Big-O notation is defined by \(f(\omega) = O(\omega)\), \(\omega\to 0\), if \(\left|\frac{f(\omega)}{\omega}\right|\le C\), \(\omega\to 0\). So \(e^{i\omega}-1 = O(\omega)\), \(\omega\to 0\), and we get
\[
|I_2| \le \frac{1}{2\pi}\int_{C_\varepsilon^+}|O(1)|\,|d\omega| = |O(\varepsilon)| \to 0, \qquad \varepsilon\to 0,
\]
so \(\lim_{\varepsilon\to 0}I_2 = 0\). Next,
\[
I_3 + I_4 = \frac{1}{2\pi i}\int_{(-R,R)\setminus(-\varepsilon,\varepsilon)}\frac{e^{i\omega}-1}{\omega}\,d\omega,
\]
and it follows from (26.5) and (26.6) that \(I_3 + I_4 = -I_1 - I_2\); passing to the limits \(R\to\infty\), \(\varepsilon\to 0\), we get
\[
\frac{1}{2\pi i}\int\frac{e^{i\omega}-1}{\omega}\,d\omega
= \lim(I_3+I_4) = -\lim_{R\to\infty}I_1 - \lim_{\varepsilon\to 0}I_2 = \frac{1}{2}.
\]
So at the jump points \(x = 0, 1\) the inversion integral converges not to \(f(x)\) but to \(\frac{1}{2}\), the average of the one-sided limits.
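The value \(1/2\) at the jump can be seen numerically: the real part of the inversion integrand at \(x=0\) is \(\sin\omega/\omega\) (the imaginary part is odd and integrates to 0), and \(\int_{\mathbb{R}}\frac{\sin\omega}{\omega}\,d\omega = \pi\). A sketch:

```python
import numpy as np
from scipy.integrate import quad

# int_0^inf sin(w)/w dw, split into a regular piece and an oscillatory tail
near, _ = quad(lambda w: np.sinc(w/np.pi), 0, 1)            # np.sinc(t) = sin(pi t)/(pi t)
tail, _ = quad(lambda w: 1.0/w, 1, np.inf, weight='sin', wvar=1.0)
total = 2*(near + tail)           # = int_R sin(w)/w dw = pi
print(total/(2*np.pi))            # ~ 0.5, the midpoint value at the jump
```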
With such limiting procedures we can also assign a meaning to the Fourier transform of the constant function:
\[
\hat 1(\omega) = \sqrt{2\pi}\,\delta(\omega). \tag{26.7}
\]
Note also that \(\hat\delta(\omega) = \frac{1}{\sqrt{2\pi}}\), but this is a generalization of the Fourier transform since \(\delta\notin L^2(\mathbb{R})\) (recall that \(\delta^2\) is undefined). Here we can take as the definition
\[
\hat 1(\omega) \overset{\text{def}}{=} \lim_{a\to 0}\widehat{e^{-a|x|}}(\omega).
\]
Exercise 26.6. Show that
\[
\widehat{\tan^{-1}x}(\omega) = -i\sqrt{\frac{\pi}{2}}\,\frac{e^{-|\omega|}}{\omega} + \frac{\pi^{3/2}}{\sqrt{2}}\,\delta(\omega)
\]
for the branch of \(\tan^{-1}\) taking values in \((0,\pi)\). (Hint: use \(\widehat{\frac{1}{1+x^2}}(\omega) = \sqrt{\frac{\pi}{2}}\,e^{-|\omega|}\), which follows from (24.5).)
LECTURE 27

We have seen that
\[
\widehat{\frac{df}{dx}}(\omega) = i\omega(\mathcal{F}f)(\omega), \qquad\text{or}\qquad
\left(\mathcal{F}\,\frac{1}{i}\frac{d}{dx}f\right)(\omega) = \omega(\mathcal{F}f)(\omega). \tag{27.2}
\]
Since \(\mathcal{F}^{-1}\mathcal{F}f = f\), (27.2) can be rewritten as
\[
\frac{1}{i}\frac{d}{dx} = \mathcal{F}^{-1}\,\omega\,\mathcal{F}. \tag{27.3}
\]
This relation is very profound, and all applications of Fourier theory owe just to this formula. Let's try to understand what (27.3) means. Recollect our old business in Linear Algebra: according to Remark 11.6, the operators of differentiation \(\frac{1}{i}\frac{d}{dx}\) and of multiplication by \(\omega\) are similar. Multiplying (27.3) by \(\mathcal{F}\) on the left and \(\mathcal{F}^{-1}\) on the right yields
\[
\mathcal{F}\,\frac{1}{i}\frac{d}{dx}\,\mathcal{F}^{-1}
= \underbrace{\mathcal{F}\mathcal{F}^{-1}}_{=I}\,\omega\,\underbrace{\mathcal{F}\mathcal{F}^{-1}}_{=I} = \omega, \tag{27.4}
\]
which reads: the operator of differentiation \(\frac{1}{i}\frac{d}{dx}\) in the Fourier representation is equal to the operator of multiplication by \(\omega\). In Quantum Mechanics, it means that the coordinate and momentum representations are similar. So, once again, (27.3), (27.4) mean that in the Fourier representation the operator of differentiation becomes the operator of multiplication.
Definition 27.2. The object
\[
A = \sum_{k=0}^{n}a_k(x)\frac{d^k}{dx^k}
\]
is called an \(n\)-th order differential operator. If the coefficients \(a_k\) are constants, then applying (27.3) repeatedly gives
\[
\mathcal{F}A = p(\omega)\mathcal{F}, \qquad p(\omega) = \sum_{k=0}^{n}a_k(i\omega)^k. \tag{27.5}
\]
As an example, consider the damped harmonic oscillator\(^1\)
\[
\ddot u + 2\gamma\dot u + \omega_0^2u = f(t). \tag{27.6}
\]
Applying the Fourier transform (in \(t\)) to (27.6) gives
\[
\left(-\omega^2 + 2i\gamma\omega + \omega_0^2\right)\hat u(\omega) = \hat f(\omega), \tag{27.7}
\]
where as usual \(\hat u = \mathcal{F}u\), \(\hat f = \mathcal{F}f\); so
\[
\hat u(\omega) = \frac{\hat f(\omega)}{\omega_0^2 - \omega^2 + 2i\gamma\omega}.
\]
By Theorem 26.1,
\[
u(t) = \left(\mathcal{F}^{-1}\hat u\right)(t)
= \frac{1}{\sqrt{2\pi}}\int\frac{e^{i\omega t}\hat f(\omega)}{(\omega_0^2-\omega^2)+2i\gamma\omega}\,d\omega. \tag{27.8}
\]

\(^1\)Such equations occur in solving differential equations coming from Newton's Second Law of motion (in particular harmonic oscillators with damping), and the variable is usually temporal. So here we switched \(x\) to \(t\) and use dots for derivatives.
Remark 27.5. We obtained (27.8) under the assumption \(f(t)\in L^2(\mathbb{R})\). Actually, (27.8) remains true as long as the integrals in (27.8) are defined somehow (e.g. in the weak sense). For example, one can handle cases like \(f(t) = \delta(t)\), \(f(t) = \sin t\), etc. All these functions are not in \(L^2(\mathbb{R})\).

Remark 27.6. For the denominator in (27.8) one has
\[
\omega_0^2 - \omega^2 + 2i\gamma\omega = -(\omega-\omega_1)(\omega-\omega_2), \tag{27.9}
\]
where \(\omega_{1,2} = \pm\sqrt{\omega_0^2-\gamma^2} + i\gamma\). Note \(\operatorname{Im}\omega_{1,2} > 0\).

Exercise 27.7. Solve (27.6) with
\[
f(t) = \begin{cases}1, & 0\le t\le 1,\\ 0, & \text{otherwise.}\end{cases}
\]
(Hint: use Example 26.3 and equation (27.8).)

Remark 27.8. Let's show another way to handle (27.8). Putting equations (27.8), (27.9) together we have
\[
u(t) = -\frac{1}{2\pi}\iint\frac{e^{i\omega t}e^{-i\omega s}}{(\omega-\omega_1)(\omega-\omega_2)}f(s)\,ds\,d\omega
= \int\underbrace{\left(-\frac{1}{2\pi}\int\frac{e^{i\omega(t-s)}}{(\omega-\omega_1)(\omega-\omega_2)}\,d\omega\right)}_{=:I(t-s)}f(s)\,ds. \tag{27.10}
\]
The function \(\frac{1}{(\omega-\omega_1)(\omega-\omega_2)}\) is subject to Jordan's lemma. For \(t-s>0\) we close the contour in the upper half plane and pick up the residues at \(\omega_1, \omega_2\); for \(t-s<0\) we close it in the lower half plane, where \(\frac{1}{(\omega-\omega_1)(\omega-\omega_2)}\) has no poles, and hence by the Cauchy theorem
\[
\int\frac{e^{i\omega(t-s)}}{(\omega-\omega_1)(\omega-\omega_2)}\,d\omega = 0, \qquad t-s<0.
\]
Computing the residues for \(t-s>0\), one arrives at
\[
u(t) = \frac{1}{\sqrt{\omega_0^2-\gamma^2}}\int_{-\infty}^{t}\sin\left(\sqrt{\omega_0^2-\gamma^2}\,(t-s)\right)e^{-\gamma(t-s)}f(s)\,ds. \tag{27.11}
\]
Looks like a nice formula!? Not particularly, since we can no longer use Complex Analysis to evaluate this integral.

Remark 27.9. Formula (27.11) implies the so-called principle of causality, one of the basic principles of physics. It says that an effect cannot happen before its cause has occurred. Indeed, since the integration in (27.11) is done over \((-\infty,t)\), computing the solution \(u(t)\) of equation (27.6) requires the knowledge of the force \(f(s)\) on \((-\infty,t)\) and doesn't need any information on \(f(s)\) for \(s>t\). It's the principle of causality for physical processes described by ordinary differential equations.
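Formula (27.11) can be cross-checked against direct numerical integration of (27.6); a sketch with arbitrary parameter values (the choice \(\gamma<\omega_0\) and the Gaussian forcing are mine):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

g, w0 = 0.3, 2.0                      # damping gamma and natural frequency omega_0, gamma < omega_0
wd = np.sqrt(w0**2 - g**2)
f = lambda s: np.exp(-s**2)           # smooth forcing, negligible before s ~ -5

def u_causal(t):
    # the causal representation (27.11)
    kern = lambda s: np.sin(wd*(t - s))*np.exp(-g*(t - s))*f(s)/wd
    return quad(kern, -np.inf, t)[0]

# integrate u'' + 2*g*u' + w0^2*u = f forward from rest, well before the forcing acts
sol = solve_ivp(lambda t, y: [y[1], f(t) - 2*g*y[1] - w0**2*y[0]],
                (-10.0, 3.0), [0.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)

print(u_causal(3.0), sol.sol(3.0)[0])   # the two values agree
```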
3. Applications of the Fourier Transform to Higher Order Differential Equations

Example 27.10 (A beam on an elastic foundation). Consider an infinite beam with a force \(f(x)\), considered constant over time, such as gravity or a load. We measure \(y(x)\), the deflection or displacement.

[Figure: an infinite beam on an elastic foundation; the load \(f(x)\) presses down, producing the deflection \(y(x)\).]

The deflection satisfies
\[
EIy^{IV} + Cy = f(x), \qquad E, I, C \ \text{constants},
\]
or
\[
y^{IV} + \frac{C}{EI}\,y = \frac{1}{EI}\,f(x). \tag{27.12}
\]
Let us solve (27.12). Note that this equation can be approached by the method of variation of parameters, but it's more complicated than the Fourier method we'll apply here. Consider \(y\in L^2(\mathbb{R})\); then no boundary conditions are needed (even though we have a fourth order linear equation), and \(y', y'', y''', y^{IV}\in L^2(\mathbb{R})\) will be automatically satisfied.

Apply the Fourier transform to (27.12). By (27.5) we get
\[
(i\omega)^4\hat y + \frac{C}{EI}\,\hat y = \frac{1}{EI}\,\hat f,
\]
so, by Theorem 26.1,
\[
y(x) = \frac{1}{\sqrt{2\pi}}\,\frac{1}{EI}\int\frac{e^{i\omega x}\hat f(\omega)}{\omega^4+\kappa^4}\,d\omega,
\qquad \kappa^4 = \frac{C}{EI}. \tag{27.13}
\]
Now take a point load at the origin: \(f(x) = P\delta(x)\), so that \(\hat f(\omega) = \frac{P}{\sqrt{2\pi}}\). Rescaling \(\omega\to\kappa\omega\) in (27.13),
\[
y(x) = \frac{1}{2\pi}\,\frac{P}{EI\kappa^4}\,\kappa\int_{\mathbb{R}}\frac{e^{i\kappa\omega x}}{\omega^4+1}\,d\omega
= \frac{P\kappa}{2\pi C}\int_{\mathbb{R}}\frac{e^{i(\kappa x)\omega}}{\omega^4+1}\,d\omega.
\]
The function \(\frac{1}{\omega^4+1}\) is subject to Jordan's lemma (Lemma 6.5), and then by Theorem 6.6, for \(x>0\),
\[
\int_{\mathbb{R}}\frac{e^{i(\kappa x)\omega}}{\omega^4+1}\,d\omega
= 2\pi i\sum_{k=1}^{2}\operatorname{Res}\left(\frac{e^{i(\kappa x)\omega}}{\omega^4+1},\,\omega_k\right),
\]
where \(\omega_1, \omega_2\) are the zeros of \(\omega^4+1\) in \(\mathbb{C}^+\), i.e.
\[
\omega_1 = e^{i\pi/4} = \frac{1+i}{\sqrt 2}, \qquad \omega_2 = e^{i3\pi/4} = \frac{-1+i}{\sqrt 2}.
\]
Since \(\omega_1,\omega_2\) are simple poles of \(\frac{1}{\omega^4+1}\), by Corollary 5.11 (and using \(\omega_k^4 = -1\), so \(\frac{1}{4\omega_k^3} = -\frac{\omega_k}{4}\)) we get, writing \(\xi = \kappa x\),
\[
y(x) = \frac{P\kappa}{2\pi C}\cdot 2\pi i\left(\frac{e^{i\omega_1\xi}}{4\omega_1^3} + \frac{e^{i\omega_2\xi}}{4\omega_2^3}\right)
= -\frac{iP\kappa}{4C}\left(\omega_1e^{i\omega_1\xi} + \omega_2e^{i\omega_2\xi}\right)
= -\frac{P\kappa}{4C}\,\frac{d}{d\xi}\left(e^{i\omega_1\xi} + e^{i\omega_2\xi}\right).
\]
But
\[
e^{i\omega_1\xi} + e^{i\omega_2\xi}
= e^{-\frac{\xi}{\sqrt 2}}\left(e^{\frac{i\xi}{\sqrt 2}} + e^{-\frac{i\xi}{\sqrt 2}}\right)
= 2e^{-\frac{\xi}{\sqrt 2}}\cos\frac{\xi}{\sqrt 2},
\]
so
\[
y(x) = -\frac{P\kappa}{2C}\,\frac{d}{d\xi}\left(e^{-\frac{\xi}{\sqrt 2}}\cos\frac{\xi}{\sqrt 2}\right)
= \frac{P\kappa}{2\sqrt 2\,C}\,e^{-\frac{\kappa x}{\sqrt 2}}\left(\cos\frac{\kappa x}{\sqrt 2} + \sin\frac{\kappa x}{\sqrt 2}\right), \qquad x>0,
\]
and for \(x<0\) the analogous computation (closing the contour below) gives the mirror image. So, putting it all together using absolute values, we have
\[
y(x) = \frac{P\kappa}{2\sqrt 2\,C}\,e^{-\frac{\kappa|x|}{\sqrt 2}}\left(\cos\frac{\kappa x}{\sqrt 2} + \sin\frac{\kappa|x|}{\sqrt 2}\right),
\qquad \kappa = \sqrt[4]{\frac{C}{EI}},
\]
an even function. We can also write explicitly
\[
y(x) = \frac{P\,e^{-\sqrt[4]{\frac{C}{4EI}}\,|x|}}{2\sqrt[4]{4EIC^3}}\left(\cos\sqrt[4]{\frac{C}{4EI}}\,x + \sin\sqrt[4]{\frac{C}{4EI}}\,|x|\right).
\]
Part 4

LECTURE 28
Wave Equations

1. The Stretched String

Here we are going to discuss a simple problem that historically led to the wave equation. Consider an ideal stretched string as below (finite or infinite):

[Figure: an element of the string between \(x\) and \(x+dx\); the tension \(T\) acts at angles \(\alpha\) and \(\beta\) at the two ends, and an external load \(F\,dx\) acts on the element.]
For small oscillations the angles are small, so
\[
\cos\alpha \simeq 1, \qquad \cos\beta \simeq 1.
\]
Hence,
\[
\sin\alpha \simeq \tan\alpha = \left.\frac{\partial u}{\partial x}\right|_{x}, \qquad
\sin\beta \simeq \left.\frac{\partial u}{\partial x}\right|_{x+dx},
\]
and
\[
\sin\beta - \sin\alpha = \left.\frac{\partial u}{\partial x}\right|_{x+dx} - \left.\frac{\partial u}{\partial x}\right|_{x}.
\]
Writing Newton's second law for the element of the string\(^1\) and dividing by \(dx\), we arrive at
\[
\rho(x)\frac{\partial^2u}{\partial t^2} = T\frac{\partial^2u}{\partial x^2} + F(x,t). \tag{28.2}
\]
If there is no external force \(F(x,t)\) and if \(\rho(x) = \mathrm{const}\), then (28.2) transforms into the homogeneous wave equation:
\[
\frac{1}{c^2}\frac{\partial^2u}{\partial t^2} = \frac{\partial^2u}{\partial x^2}, \qquad c^2 = \frac{T}{\rho}. \tag{28.3}
\]
The general solution to this equation can be easily found. Indeed, if \(f(x)\) is an arbitrary twice differentiable function, then \(u(x,t) = f(x-ct)\) is a solution to (28.3), since if we set \(z = x-ct\) then
\[
\frac{\partial u}{\partial t} = \frac{\partial f}{\partial z}\frac{\partial z}{\partial t} = -cf'(z), \qquad
\frac{\partial^2u}{\partial t^2} = -c\,\frac{\partial f'(z)}{\partial t} = c^2f''(z), \qquad
\frac{\partial^2u}{\partial x^2} = f''(z). \tag{28.4}
\]

\(^1\)The sum of the forces equals mass times acceleration; here we project on the \(u\)-axis.
Example 28.2. The string is infinite, but the initial shape of the string and the distribution of initial velocities are given:
\[
u(x,0) = \varphi(x), \qquad \left.\frac{\partial u}{\partial t}\right|_{t=0} = \psi(x).
\]
Consider now the problem on \((0,\pi)\):
\[
\begin{aligned}
&u_{tt} - u_{xx} = 0, && \text{(28.5a)}\\
&u(0,t) = u(\pi,t) = 0, && \text{(BC)}\\
&u(x,0) = \varphi(x),\quad u_t(x,0) = \psi(x), && \text{(IC)}
\end{aligned}
\]
which is often referred to as an initial value Dirichlet problem for the free wave equation in dimension one, or the boundary initial value (BIV) problem for the homogeneous wave equation.

For solving our problem at hand there are two steps, the first one being called spectral analysis. It is reasonable to assume that the solution \(u(x,t)\) of (28.5a) belongs to \(L^2(0,\pi)\) (as a function of \(x\)), and (28.5a) can then be viewed as
\[
u_{tt} + Au = 0, \tag{28.5b}
\]
where \(A = -\frac{d^2}{dx^2}\) is the operator of kinetic energy (Schrodinger operator), with boundary conditions \(u(0) = u(\pi) = 0\). For now we will ignore \(t\). Let us perform the spectral analysis of \(A\).
170
+ bei
Further,
(
y(0) = a +b = 0
= n = n2 , n N
(
b = a
sin = 0
1=
|yn (x)| dx =
0
r
2
Cn =
.
Cn2
C2
sin nxdx = n
2
(1 cos 2nx)dx =
0
Cn2
2
en (x) =
2
sin nx ,
n N.
The idea at this point is to note that since \(A\) is selfadjoint and its spectrum purely discrete, \(\sigma(A) = \{n^2\}\), the resulting eigenfunctions \(\{e_n(x)\}\) form a basis, and so the second step is to consider the solutions in this basis. I.e., by Theorem 18.3, we can represent any solution of (28.5a) in the form
\[
u(x,t) = \sum_{n\ge 1}u_n(t)e_n(x), \qquad u_n(t) = \langle u, e_n\rangle. \tag{28.6}
\]
Substituting into (28.5b),
\[
\sum_{n\ge 1}\underbrace{\frac{\partial^2}{\partial t^2}u_n(t)}_{=\ddot u_n(t)}e_n(x) + \sum_{n\ge 1}u_n(t)\underbrace{Ae_n}_{=\lambda_ne_n}
= \sum_{n\ge 1}\left(\ddot u_n(t) + \lambda_nu_n(t)\right)e_n = 0, \tag{28.7}
\]
so we get
\[
\ddot u_n(t) + \lambda_nu_n(t) = 0, \qquad n\in\mathbb{N}. \tag{28.8}
\]
So, our original partial differential equation (28.5a) broke into the infinite chain of linear ordinary differential equations (28.8). This is the crux of this approach: reduce the partial differential equation (PDE) to infinitely many ordinary differential equations (ODEs) which are hopefully simple enough to solve. Here indeed, each of these equations can be trivially solved:
\[
u_n(t) = a_ne^{i\sqrt{\lambda_n}\,t} + b_ne^{-i\sqrt{\lambda_n}\,t}, \qquad n\in\mathbb{N}. \tag{28.9}
\]
Now we need to find \(\{a_n, b_n\}\). They should be found from the initial conditions:
\[
u(x,0) = \sum_{n\ge 1}u_n(0)e_n(x) = \varphi(x), \qquad
u_t(x,0) = \sum_{n\ge 1}\dot u_n(0)e_n(x) = \psi(x). \tag{28.10}
\]
Expanding
\[
\varphi(x) = \sum_{n\ge 1}\varphi_ne_n(x), \quad \varphi_n = \langle\varphi,e_n\rangle, \qquad
\psi(x) = \sum_{n\ge 1}\psi_ne_n(x), \quad \psi_n = \langle\psi,e_n\rangle, \tag{28.11}
\]
we get \(u_n(0) = \varphi_n\) and \(\dot u_n(0) = \psi_n\).
So we get
\[
a_n + b_n = \varphi_n, \qquad in(a_n-b_n) = \psi_n,
\]
or
\[
a_n = \frac{1}{2}\left(\varphi_n + \frac{\psi_n}{in}\right), \qquad
b_n = \varphi_n - a_n = \frac{1}{2}\left(\varphi_n - \frac{\psi_n}{in}\right).
\]
So
\[
u_n(t) = \frac{1}{2}\left\{\left(\varphi_n + \frac{\psi_n}{in}\right)e^{int} + \left(\varphi_n - \frac{\psi_n}{in}\right)e^{-int}\right\}
= \operatorname{Re}\left\{\left(\varphi_n + \frac{\psi_n}{in}\right)(\cos nt + i\sin nt)\right\}
= \varphi_n\cos nt + \frac{\psi_n}{n}\sin nt.
\]
Now we are able to present the solution to (28.5a):
\[
u(x,t) = \sum_{n\ge 1}u_n(t)e_n(x), \tag{28.12}
\]
where
\[
u_n(t) = \varphi_n\cos nt + \frac{\psi_n}{n}\sin nt, \qquad
\varphi_n = \int_0^\pi\varphi(x)e_n(x)\,dx, \quad
\psi_n = \int_0^\pi\psi(x)e_n(x)\,dx, \quad
e_n(x) = \sqrt{\frac{2}{\pi}}\sin nx, \quad n\in\mathbb{N}.
\]
Exercise 28.3. Adapt this method to the nonhomogeneous wave equation (i.e. derive formulas similar to (28.12)).
LECTURE 29

Recall that \(\lambda\) is an eigenvalue of \(A\) if
\[
\exists\,u\in H,\ u\neq 0:\quad Au = \lambda u. \tag{29.1}
\]
More generally, \(\lambda\in\sigma(A)\) (the spectrum of \(A\)) if there exists a Weyl sequence \(\{u_n\}\), \(u_n\in H\), \(\|u_n\| = 1\), such that
\[
\lim_{n\to\infty}\|(A-\lambda)u_n\| = 0. \tag{29.3}
\]
It is clear that \(\sigma_d(A)\subset\sigma(A)\): pick \(u_n = u\) for all \(n\) (or rather a normalized version), where \(u\) solves \(Au = \lambda u\).

Here is the standard classification of the spectrum.
Theorem 29.4.
Proof. Let (A), then by Definition 29.1 there exists a Weyl sequence {un }
of elements un H, kun k = 1 such that (29.3) holds. Each such sequence can be
represented as {u0n } {u00n } (as a union of two subsequences2) such that
(i) u0n u H , n
(ii) u00n does not converge to any element of H.
Consider case (i). Let us make up a new sequence {un }, un = u (yes, it consists of
the same element u), (29.3) then reads
k(A )uk = 0
(A )u = 0
Au = u
and so d (A).
Consider case (ii). By Definition 29.3, c (A).
So if (A) then d (A) or c (A) i.e. d (A) c (A).
QED
Recall that the spectrum could also be empty or the whole complex plane.
Remark 29.5. d (A) c (A) need not be empty! I.e. there may exist eigenvalues
embedded into the continuous spectrum.
For such , you can find a divergent sequence where you can pick a divergent
subsequence (and then c (A)) and a convergent subsequence (and so d (A)).
It seems like a weird case, but it happens all the time in physics with bound states.
Lemma 29.6. If (A) then there exists a Weyl sequence {un }:
lim hAun , un i = .
2{u0
n}
175
Proof.
|hAun , un i | = |h(A )un , un i|
Cauchy
k(A )un k 0 , n .
QED
There is a whole theory behind the concept of the continuous spectrum. We concentrate mainly on the spectral theory of some specific operators of mathematical physics.

2. Continuous spectrum of selfadjoint operators

Theorem 29.7. Let \(A = A^*\); then \(\sigma(A)\subset\mathbb{R}\).

Proof. By Lemma 29.6, for \(\lambda\in\sigma(A)\) there exists a Weyl sequence \(\{u_n\}\) such that
\[
\lambda = \lim_{n\to\infty}\langle Au_n,u_n\rangle,
\]
and since \(A = A^*\), each \(\langle Au_n,u_n\rangle = \overline{\langle Au_n,u_n\rangle}\) is real; hence so is \(\lambda\). QED

Exercise 29.8. State and prove Theorem 29.7 for unitary operators. (Hint: use Lemma 29.6.)

Let us consider a very important specific operator.

Theorem. Let \(A = \frac{1}{i}\frac{d}{dx}\), the operator of momentum, on \(L^2(\mathbb{R})\); then
\[
\sigma(A) = \sigma_c(A) = \mathbb{R}. \tag{29.4}
\]
Proof. Note first\(^3\) that \(A = A^*\). We've also proved in Example 18.2 that \(\sigma_d(A) = \varnothing\). For \(\lambda\in\mathbb{R}\), consider
\[
u_n(x) = \frac{1}{\sqrt n}\,e^{i\lambda x}e^{-|x|/n}.
\]
Then
\[
\|u_n\|^2 = \frac{1}{n}\int_{\mathbb{R}}e^{-2|x|/n}\,dx = \frac{2}{n}\int_0^\infty e^{-2x/n}\,dx = 1.
\]

\(^3\)We've proved this fact for the operator of momentum on two different spaces in Examples 11.1 and 17.15. The proof here would go similarly.
Also
1 d
1
sgn x
un un =
i
un un
i dx
i
n
(
1 ,
where sgn x =
1,
x<0
x>0
i
sgn x un
n
1 d
1
= ksgn x un k = 1 kun k = 1 0 , n .
u
u
n
n
i dx
n
n | {z } n
=1
d
d
By Definition 29.1, 1i dx
. Since is arbitrary, we have 1i dx
= R. QED
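The Weyl sequence used for the momentum operator can be examined numerically. The sketch below is a minimal illustration; the value of $\lambda$ and all discretization parameters are my own choices, not from the notes.

```python
import numpy as np

# Numerical illustration of the Weyl sequence for the momentum operator
# A = (1/i) d/dx at lambda = 2:  u_n(x) = n^{-1/2} e^{i lambda x} e^{-|x|/n}
# has ||u_n|| = 1 while ||(A - lambda) u_n|| = 1/n -> 0.

lam = 2.0

def norms(n, L=400.0, m=400001):
    x = np.linspace(-L, L, m)
    u = np.exp(1j * lam * x) * np.exp(-np.abs(x) / n) / np.sqrt(n)
    du = np.gradient(u, x)            # numerical derivative of u_n
    resid = du / 1j - lam * u         # (A - lambda) u_n
    dx = x[1] - x[0]
    return (np.sqrt(np.sum(np.abs(u)**2) * dx),
            np.sqrt(np.sum(np.abs(resid)**2) * dx))

for n in (5, 10, 20):
    nu, nr = norms(n)
    print(n, round(nu, 3), round(nr, 3))   # norm ~1, residual ~1/n
```

The residual norm shrinks like $1/n$ even though $u_n$ itself never converges in $L^2(\mathbb R)$, which is exactly the Weyl-sequence picture.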
Definition 29.12. Operators $A$ and $B$ on a Hilbert space $H$ are called unitary equivalent if there is a unitary operator $U$ such that
$$B=U^{-1}AU \tag{29.5}$$
or, equivalently,
$$A=UBU^{-1}. \tag{29.6}$$
Indeed, to pass from (29.5) to (29.6), multiply (29.5) by $U$ from the left and by $U^{-1}$ from the right.
Do you remember that we dealt with Definition 29.12 while considering finite dimensional spaces? It was then called similarity (see Definition 11.4), but it also goes by equivalence. Here we consider unitary equivalence, which is much more useful and powerful.
Theorem 29.13. If $A,B$ are unitary equivalent then
$$\sigma(A)=\sigma(B).$$
In other words, unitary equivalence preserves the spectrum.
Proof. By (29.6) we have
$$\|(A-\lambda)u\|=\left\|\left(UBU^{-1}-\lambda I\right)u\right\|=\left\|\left(UBU^{-1}-\lambda UU^{-1}\right)u\right\|=\left\|U(B-\lambda)U^{-1}u\right\|=\left\|(B-\lambda)U^{-1}u\right\|$$
since $U$ is unitary.
⁴The term "weak solution" does not mean that the Weyl sequence converges weakly to the weak solution. In the example above, one can easily show that $\{u_n\}$ converges weakly to $0$, not to $e^{i\lambda x}$.
Set $v=U^{-1}u$. So
$$\|(A-\lambda)u\|=\|(B-\lambda)v\|. \tag{29.7}$$
Note $v\in H$ and $\|v\|=\|u\|$ since $U$ is unitary. So, (29.7) means that if $\lambda\in\sigma(A)$, then by Definition 29.1 there exists a Weyl sequence $\{u_n\}$ for $A$ in the Hilbert space $H$ such that $\|u_n\|=1$ and
$$\lim_{n\to\infty}\|(A-\lambda)u_n\|=0.$$
Then, with $v_n=U^{-1}u_n$ (so $\|v_n\|=1$),
$$\lim_{n\to\infty}\|(B-\lambda)v_n\|=0,$$
i.e. $\lambda\in\sigma(B)$. Hence $\sigma(A)\subseteq\sigma(B)$, and by symmetry $\sigma(A)=\sigma(B)$. QED
Theorem 29.13 is very important for spectral analysis: if you need to find the spectrum $\sigma(B)$ of an operator $B$, but you know that $B$ is unitary equivalent to another operator $A$ for which $\sigma(A)$ is known, then we simply have
$$\sigma(B)=\sigma(A).$$
Definition 29.14. Let $H=L^2(\mathbb R)$. The operator $B$ of multiplication by the independent variable acts by the rule
$$Bu(x)=x\,u(x)\ ,\qquad u(x)\in L^2(\mathbb R)\ ,\ xu(x)\in L^2(\mathbb R).$$
Note that, with $F$ the Fourier transform, the operator of momentum $A=\frac1i\frac{d}{dx}$ satisfies
$$FAF^{-1}=B\qquad\text{or}\qquad A=F^{-1}BF,$$
i.e. $A$ and $B$ are unitary equivalent.
Consider the eigenvalue equation
$$Au=\lambda u, \tag{29.8}$$
whose solution need not be from $H$. Basically, to perform the spectral analysis of $A$ we have to find those $\lambda$'s for which (29.8) has a solution (from $H$ or not). Then $\{\lambda\}$ will be $\sigma(A)$ and the solutions will be eigenfunctions of $A$ (from the discrete or the continuous spectrum). In general, the theory behind this is very involved. But for differential operators everything is a whole lot simpler.
Exercise 29.17. Prove Theorem 29.9 without using Theorem 29.13. Instead, modify the proof of Theorem 29.15 using
$$u_n(x)=\sqrt n\,\chi_n(x-\lambda)\ ,\qquad \chi_n(x)=\begin{cases}1\,,&|x|\le\frac1{2n}\\ 0\,,&|x|>\frac1{2n}.\end{cases}$$
LECTURE 30

Theorem 30.1. Let $A=-\dfrac{d^2}{dx^2}$ on $L^2(\mathbb R)$. Then
$$\sigma(A)=\sigma_c(A)=[0,\infty). \tag{30.1}$$
Proof. Consider the eigenvalue equation
$$Au=\lambda u\ ,\qquad \lambda\in\mathbb R. \tag{30.2}$$
Since
$$\left(\frac1i\frac{d}{dx}\right)^2=-\frac{d^2}{dx^2}=A, \tag{30.3}$$
every $\lambda\ge0$ admits the bounded generalized eigenfunctions
$$e(x,\nu)=\frac1{\sqrt{2\pi}}\,e^{i\nu x}\qquad\text{with }\lambda=\nu^2,$$
while for $\lambda<0$ every nonzero solution of (30.2) grows exponentially, so no such $\lambda$ can be in the spectrum. QED
We also obtained

Theorem 30.2. Let $A=-\dfrac{d^2}{dx^2}$ on $L^2(\mathbb R)$. Then the eigenfunctions of the continuous spectrum are
$$\left\{\frac1{\sqrt{2\pi}}\,e^{i\nu x}\ ,\ \frac1{\sqrt{2\pi}}\,e^{-i\nu x}\right\}_{\nu^2\in[0,\infty)}. \tag{30.4}$$

Recall the Fourier inversion formula
$$u(x)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}e^{i\nu x}\,\hat u(\nu)\,d\nu\ ,\qquad \hat u(\nu)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}e^{-i\nu x}u(x)\,dx.$$
Due to (30.7), we rewrite it as
$$u(x)=\int_{\mathbb R}\hat u(\nu)\,e(x,\nu)\,d\nu\ ,\qquad \hat u(\nu)=\underbrace{\int_{\mathbb R}u(x)\overline{e(x,\nu)}\,dx}_{\text{looks like }\langle u,e\rangle?}$$
and indeed $\int_{\mathbb R}u(x)\overline{e(x,\nu)}\,dx=\langle u,e\rangle$. So we get
$$u(x)=\int_{\mathbb R}\hat u(\nu)\,e(x,\nu)\,d\nu\ ,\qquad \hat u=\langle u,e\rangle, \tag{30.8}$$
in complete analogy with the expansion over an orthonormal basis,
$$u(x)=\sum_n\hat u_n\,e_n(x)\ ,\qquad \hat u_n=\langle u,e_n\rangle.$$
Apply this expansion to the initial value problem for the wave equation, $u_{tt}=u_{xx}$, $u(x,0)=\varphi(x)$, $u_t(x,0)=\psi(x)$: look for the solution in the form
$$u(x,t)=\int_{\mathbb R}\hat u(\nu,t)\,e(x,\nu)\,d\nu.$$
Then
$$0=\frac{\partial^2}{\partial t^2}\int_{\mathbb R}\underbrace{\hat u(\nu,t)}_{\text{no }x}e(x,\nu)\,d\nu+B^2\int_{\mathbb R}\hat u(\nu,t)\underbrace{e(x,\nu)}_{\text{no }t}\,d\nu=\int_{\mathbb R}\hat u_{tt}(\nu,t)\,e(x,\nu)\,d\nu+\int_{\mathbb R}\hat u(\nu,t)\underbrace{B^2e(x,\nu)}_{\nu^2e(x,\nu)}\,d\nu,$$
whence
$$\hat u_{tt}(\nu,t)+\nu^2\hat u(\nu,t)=0\ \Longrightarrow\ \hat u(\nu,t)=a(\nu)\,e^{i\nu t}+b(\nu)\,e^{-i\nu t}.$$
The initial conditions give
$$a+b=\hat\varphi\ ,\qquad a-b=\frac{\hat\psi}{i\nu},$$
so
$$a=\frac12\left(\hat\varphi+\frac{\hat\psi}{i\nu}\right)\ ,\qquad b=\frac12\left(\hat\varphi-\frac{\hat\psi}{i\nu}\right),$$
and finally
$$u(x,t)=F^{-1}\left[a(\nu)e^{i\nu t}+b(\nu)e^{-i\nu t}\right],$$
or, more explicitly,
$$u(x,t)=\int_{\mathbb R}\hat u(\nu,t)\,e(x,\nu)\,d\nu\ ,\quad \hat u(\nu,t)=\frac12\left(\hat\varphi(\nu)+\frac{\hat\psi(\nu)}{i\nu}\right)e^{i\nu t}+\frac12\left(\hat\varphi(\nu)-\frac{\hat\psi(\nu)}{i\nu}\right)e^{-i\nu t}, \tag{30.9}$$
$$\hat\varphi(\nu)=\int_{\mathbb R}\varphi(x)\overline{e(x,\nu)}\,dx\ ,\quad \hat\psi(\nu)=\int_{\mathbb R}\psi(x)\overline{e(x,\nu)}\,dx\ ,\quad e(x,\nu)=\frac1{\sqrt{2\pi}}e^{i\nu x}.$$
In this form, the solution is difficult to visualize and analyze. Furthermore, historically, the wave equation was solved directly through a clever change of variables by d'Alembert. So we present here d'Alembert's solution, and in the Appendix to this lecture we show how the d'Alembert formula can be derived from (30.9).
Consider
$$\begin{cases}u_{tt}=u_{xx}\\ u(x,0)=\varphi(x)\ ,\ u_t(x,0)=\psi(x).\end{cases}$$
Introduce the following canonical variables:
$$\xi=x+t\ ,\qquad \eta=x-t.$$
Then
$$u_x=u_\xi+u_\eta\ ,\qquad u_{xx}=(u_\xi)_x+(u_\eta)_x=u_{\xi\xi}+2u_{\xi\eta}+u_{\eta\eta}\ ,\qquad u_{tt}=u_{\xi\xi}-2u_{\xi\eta}+u_{\eta\eta},$$
and the equation $u_{tt}=u_{xx}$ becomes
$$u_{\xi\xi}-2u_{\xi\eta}+u_{\eta\eta}=u_{\xi\xi}+2u_{\xi\eta}+u_{\eta\eta}\iff u_{\xi\eta}=0.$$
Hence $u_\xi=F'(\xi)$ for some function $F$, and
$$u(\xi,\eta)=F(\xi)+G(\eta).$$
From the initial conditions, $F(x)+G(x)=\varphi(x)$ and $F'(x)-G'(x)=\psi(x)$, so
$$F(x)-G(x)=\int_{x_0}^x\psi(s)\,ds+C$$
and hence
$$F(x)=\frac12\varphi(x)+\frac12\int_{x_0}^x\psi(s)\,ds+\frac C2\ ,\qquad G(x)=\frac12\varphi(x)-\frac12\int_{x_0}^x\psi(s)\,ds-\frac C2.$$
Therefore
$$u(x,t)=F(x+t)+G(x-t)=\frac12\varphi(x+t)+\frac12\int_{x_0}^{x+t}\psi(s)\,ds+\frac C2+\frac12\varphi(x-t)-\frac12\int_{x_0}^{x-t}\psi(s)\,ds-\frac C2,$$
i.e.
$$u(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds.$$
So we summarize our result as

Theorem 30.4. The solution to the initial value problem
$$\begin{cases}u_{tt}=u_{xx}\\ u(x,0)=\varphi(x)\ ,\ u_t(x,0)=\psi(x)\end{cases}$$
can be represented by the d'Alembert formula
$$u(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds. \tag{30.10}$$
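The d'Alembert formula is easy to sanity-check numerically. The sketch below uses the illustrative datum $\varphi(x)=e^{-x^2}$, $\psi\equiv0$ (my choice, not from the notes) and verifies the PDE with second differences.

```python
import numpy as np

# Quick numerical check of the d'Alembert formula (30.10) with
# phi(x) = exp(-x^2), psi = 0:  u(x,t) = (phi(x-t) + phi(x+t))/2
# must satisfy u_tt = u_xx and the initial condition u(x,0) = phi(x).

phi = lambda x: np.exp(-x**2)

def u(x, t):
    # d'Alembert solution with zero initial velocity
    return 0.5 * (phi(x - t) + phi(x + t))

x, t, h = 0.7, 0.3, 1e-4
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2  # second differences
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
print(abs(u_tt - u_xx))          # ~0 up to discretization noise
print(abs(u(x, 0.0) - phi(x)))   # initial condition holds exactly
```

The residual $u_{tt}-u_{xx}$ vanishes up to finite-difference noise, confirming that each traveling bump solves the equation.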
3. Propagation of Waves
Formula (30.10) describes the phenomenon of wave propagation.
1) Consider first the case with no initial velocity; imagine that the string is pinched and then released:
$$\varphi(x)\ \text{a bump supported on }(-a,a)\ ,\qquad \psi\equiv0.$$
Then
$$u(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2.$$
The term $\dfrac{\varphi(x-t)}2$ represents a bump, initially supported on $(-a,a)$, moving with speed $1$ in the positive direction of the $x$-axis.¹
Similarly, $\dfrac{\varphi(x+t)}2$ is a bump moving in the opposite direction with the same speed $1$. Since
$$u(x,t)=\frac12\varphi(x-t)+\frac12\varphi(x+t),$$
we get the picture in Figure 1.

Figure 1. Snapshots of $u(x,t)$ at $t=0,a,2a,3a,4a$: the initial bump splits into two half-height bumps moving apart with speed $1$.
¹Some students have trouble understanding why subtracting $t$ from $x$ moves the graph to the right. But watch: $\operatorname{Supp}\varphi=(-a,a)$ and
$$-a\le x-t\le a\iff-a+t\le x\le a+t.$$
2) Consider now the opposite case: zero initial displacement and an initial velocity supported on $(-a,a)$,
$$\varphi(x)=0\ ,\qquad \psi(x)\ \text{a box of height }h\text{ on }(-a,a),$$
so that
$$u(x,t)=\frac12\int_{x-t}^{x+t}\psi(s)\,ds.$$
Set $g(x)=\dfrac12\displaystyle\int_0^x\psi(s)\,ds$; then
$$u(x,t)=g(x+t)-g(x-t).$$
For $g(x-t)$ we have the picture in Figure 2.

Figure 2. The graph of $g$ (an odd ramp rising across $(-a,a)$ from the level $-h/2$ to the level $h/2$) together with $g(x-t)$, the same ramp shifted to the right by $t$.
For g(x + t), see Figure 3 on the next page.
The total picture, Figure 4, is the result of the addition of Figure 2 and
Figure 3.
Figure 3. The graph of $g$ together with $g(x+t)$, the ramp shifted to the left by $t$.

Figure 4. Snapshots of $u(x,t)=g(x+t)-g(x-t)$ at $t=0,a,2a,3a,4a$.
3) If $\varphi\not\equiv0$ and $\psi\not\equiv0$, then the picture is the superposition of Figure 1 and Figure 4, since the equation is linear.
Exercise 30.5. Solve the wave equation for $u(x,t)$ with
$$\varphi(x)=\cdots\ ,\qquad \psi(x)=\cdots$$

Exercise 30.6. Solve the wave equation with
$$u(x,0)=0\ ,\qquad u_t(x,0)=xe^{-x^2}.$$
Appendix. Derivation of the d'Alembert Formula from (30.9)

By (30.9),
$$u(x,t)=\int_{\mathbb R}\left\{\frac12\left(\hat\varphi(\nu)+\frac{\hat\psi(\nu)}{i\nu}\right)e^{i\nu t}+\frac12\left(\hat\varphi(\nu)-\frac{\hat\psi(\nu)}{i\nu}\right)e^{-i\nu t}\right\}e(x,\nu)\,d\nu$$
$$=\frac1{2\sqrt{2\pi}}\int_{\mathbb R}\left(\hat\varphi(\nu)+\frac{\hat\psi(\nu)}{i\nu}\right)e^{i\nu(x+t)}\,d\nu+\frac1{2\sqrt{2\pi}}\int_{\mathbb R}\left(\hat\varphi(\nu)-\frac{\hat\psi(\nu)}{i\nu}\right)e^{i\nu(x-t)}\,d\nu$$
$$=\frac1{4\pi}\int_{\mathbb R}\int_{\mathbb R}\left(\varphi(s)+\frac{\psi(s)}{i\nu}\right)e^{-i\nu s}\,ds\ e^{i\nu(x+t)}\,d\nu+\frac1{4\pi}\int_{\mathbb R}\int_{\mathbb R}\left(\varphi(s)-\frac{\psi(s)}{i\nu}\right)e^{-i\nu s}\,ds\ e^{i\nu(x-t)}\,d\nu. \tag{30.11}$$
Change now the order of integration in (30.11):
$$u(x,t)=\frac1{4\pi}\int_{\mathbb R}\left[\varphi(s)\int_{\mathbb R}e^{i\nu(x+t-s)}\,d\nu+\psi(s)\int_{\mathbb R}\frac{e^{i\nu(x+t-s)}}{i\nu}\,d\nu\right]ds$$
$$\qquad\qquad+\frac1{4\pi}\int_{\mathbb R}\left[\varphi(s)\int_{\mathbb R}e^{i\nu(x-t-s)}\,d\nu-\psi(s)\int_{\mathbb R}\frac{e^{i\nu(x-t-s)}}{i\nu}\,d\nu\right]ds. \tag{30.12}$$
In order to give this nasty expression a nice look, we basically have to evaluate
$$\int_{\mathbb R}e^{i\nu a}\,d\nu\ ,\qquad \int_{\mathbb R}\frac{e^{i\nu a}}{i\nu}\,d\nu$$
with $a=x+t-s$ or $a=x-t-s$. But this has already been done. By Theorem 20.4 we have
$$\int_{\mathbb R}e^{i\nu a}\,d\nu=2\pi\delta(a). \tag{30.13}$$
We have also computed (even twice!) the other integral, in Example 7.1:
$$\int_{\mathbb R}\frac{e^{i\nu a}}{i\nu}\,d\nu=\pi\qquad(a>0), \tag{30.14}$$
and hence
$$\int_{\mathbb R}\frac{e^{i\nu a}}{i\nu}\,d\nu=\begin{cases}\ \ \pi\,,&a>0\\ -\pi\,,&a<0\end{cases}=\pi\operatorname{sgn}a. \tag{30.15}$$
Inserting (30.13) and (30.15) into (30.12),
$$u(x,t)=\frac12\varphi(x+t)+\frac14\int_{\mathbb R}\psi(s)\operatorname{sgn}(x+t-s)\,ds+\frac12\varphi(x-t)-\frac14\int_{\mathbb R}\psi(s)\operatorname{sgn}(x-t-s)\,ds$$
$$=\frac{\varphi(x+t)+\varphi(x-t)}2+\underbrace{\frac14\int_{-\infty}^{x+t}\psi(s)\,ds-\frac14\int_{-\infty}^{x-t}\psi(s)\,ds}_{=\frac14\int_{x-t}^{x+t}\psi(s)\,ds}+\underbrace{\frac14\int_{x-t}^{\infty}\psi(s)\,ds-\frac14\int_{x+t}^{\infty}\psi(s)\,ds}_{=\frac14\int_{x-t}^{x+t}\psi(s)\,ds}$$
$$=\frac{\varphi(x+t)+\varphi(x-t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds,$$
which is the d'Alembert formula (30.10).
LECTURE 31

Consider the initial value problem for the heat equation on the whole line:
$$\begin{cases}u_t=u_{xx}\\ u(x,0)=\varphi(x).\end{cases} \tag{31.1}$$
Note that $\varphi$ represents the initial distribution of heat; it's physical, so it is reasonable to assume $\varphi\in L^2(\mathbb R)$, since the total energy is finite.

Our approach is going to be absolutely the same as that for the wave equation. Namely, we introduce the operator $A=-\dfrac{d^2}{dx^2}$ and (31.1) rewrites as
$$u_t+Au=0.$$
Next, $A=B^2$, $B=\dfrac1i\dfrac{d}{dx}$, and we have
$$u_t+B^2u=0.$$
With $e(x,\nu)=\dfrac1{\sqrt{2\pi}}e^{i\nu x}$, write
$$u(x,t)=\int_{\mathbb R}\hat u(\nu,t)\,e(x,\nu)\,d\nu\ ,\qquad \hat u(\nu,t)=\int_{\mathbb R}u(x,t)\,\overline{e(x,\nu)}\,dx. \tag{31.2}$$
Then
$$u_t(x,t)=\int_{\mathbb R}\hat u_t(\nu,t)\,e(x,\nu)\,d\nu\ ,\qquad B^2u(x,t)=\int_{\mathbb R}\hat u(\nu,t)\,B^2e(x,\nu)\,d\nu=\int_{\mathbb R}\nu^2\,\hat u(\nu,t)\,e(x,\nu)\,d\nu.$$
Since $u_t+B^2u=0$ we get
$$\int_{\mathbb R}\left(\hat u_t(\nu,t)+\nu^2\hat u(\nu,t)\right)e(x,\nu)\,d\nu=0\ ,\ x\in\mathbb R\quad\Longrightarrow\quad \hat u_t+\nu^2\hat u=0,$$
and hence
$$\hat u(\nu,t)=C(\nu)\,e^{-\nu^2t}.$$
At $t=0$,
$$u(x,0)=\int_{\mathbb R}\hat u(\nu,0)\,e(x,\nu)\,d\nu=\int_{\mathbb R}C(\nu)\,e(x,\nu)\,d\nu$$
must equal
$$\varphi(x)=\int_{\mathbb R}\hat\varphi(\nu)\,e(x,\nu)\,d\nu\ ,\qquad \hat\varphi(\nu)=\int_{\mathbb R}\varphi(x)\,\overline{e(x,\nu)}\,dx,$$
so $C(\nu)=\hat\varphi(\nu)$ and
$$u(x,t)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}\hat\varphi(\nu)\,e^{-\nu^2t+i\nu x}\,d\nu\quad\text{where}\quad \hat\varphi(\nu)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}\varphi(x)\,e^{-i\nu x}\,dx. \tag{31.3}$$
However, this answer cannot be considered final (we can't even see if it's real!). It follows from (31.3) that
$$u(x,t)=\frac1{2\pi}\int_{\mathbb R}\left(\int_{\mathbb R}\varphi(s)\,e^{-i\nu s}\,ds\right)e^{-\nu^2t+i\nu x}\,d\nu,$$
and changing the order of integration, we get
$$u(x,t)=\frac1{2\pi}\int_{\mathbb R}\underbrace{\int_{\mathbb R}e^{i\nu(x-s)-\nu^2t}\,d\nu}_{=:I(x,s,t)}\ \varphi(s)\,ds. \tag{31.4}$$
Complete the square:
$$-\nu^2t+i\nu(x-s)=-\left(\nu\sqrt t+\frac{x-s}{2i\sqrt t}\right)^2-\left(\frac{x-s}{2\sqrt t}\right)^2.$$
Then
$$I(x,s,t)=e^{-\frac{(x-s)^2}{4t}}\int_{\mathbb R}e^{-\left(\nu\sqrt t+\frac{x-s}{2i\sqrt t}\right)^2}\,d\nu. \tag{31.5}$$
Make the substitution $z=\nu\sqrt t+\dfrac{x-s}{2i\sqrt t}$; then $dz=\sqrt t\,d\nu$, i.e. $d\nu=\dfrac{dz}{\sqrt t}$, and
$$\int_{\mathbb R}e^{-\left(\nu\sqrt t+\frac{x-s}{2i\sqrt t}\right)^2}\,d\nu=\frac1{\sqrt t}\int_Ce^{-z^2}\,dz, \tag{31.6}$$
where $C=\left\{z\in\mathbb C:z=\tau-\dfrac{i(x-s)}{2\sqrt t}\ ,\ -\infty<\tau<\infty\right\}$.
Evaluate $\displaystyle\int_Ce^{-z^2}\,dz$ by a contour argument. Let $h=\dfrac{x-s}{2\sqrt t}$ and let $C_R$ be the boundary of the rectangle with vertices $\pm R$ and $\pm R-ih$. Since $e^{-z^2}$ is entire, by the Cauchy theorem
$$0=\oint_{C_R}e^{-z^2}\,dz=\int_{[-R-ih,\ R-ih]}+\int_{\Gamma_+}+\int_{[R,\ -R]}+\int_{\Gamma_-},$$
where $\Gamma_\pm$ are the vertical sides. On $\Gamma_+$, $z=R+iy$ with $|y|\le|h|$, and
$$\left|\int_{\Gamma_+}e^{-z^2}\,dz\right|=\left|\int e^{-(R+iy)^2}\,i\,dy\right|\le\int_0^{|h|}e^{-R^2+y^2}\,dy\le|h|\,e^{h^2}\,e^{-R^2}\xrightarrow[R\to\infty]{}0. \tag{31.7}$$
Similarly,
$$\lim_{R\to\infty}\int_{\Gamma_-}e^{-z^2}\,dz=0,$$
and letting $R\to\infty$ we conclude
$$\int_Ce^{-z^2}\,dz=\int_{\mathbb R}e^{-z^2}\,dz=\sqrt\pi\qquad\text{(Gauss integral)},$$
i.e.
$$\int_{\mathbb R}e^{-\left(\nu\sqrt t+\frac{x-s}{2i\sqrt t}\right)^2}\,d\nu=\sqrt{\frac\pi t}.$$
Inserting this into (31.5) we get
$$I(x,s,t)=\sqrt{\frac\pi t}\;e^{-\frac{(x-s)^2}{4t}}$$
and (31.4) becomes
$$u(x,t)=\frac1{2\sqrt{\pi t}}\int_{\mathbb R}e^{-\frac{(x-s)^2}{4t}}\,\varphi(s)\,ds.$$
So we proved

Theorem 31.1. The solution of (31.1) can be represented in the following form:
$$u(x,t)=\frac1{2\sqrt{\pi t}}\int_{\mathbb R}e^{-\frac{(x-s)^2}{4t}}\,\varphi(s)\,ds. \tag{31.8}$$
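The heat-kernel representation can be checked numerically. For the illustrative datum $\varphi(x)=e^{-x^2}$ (my choice, not from the notes) the Gaussian convolution has the closed form $u(x,t)=e^{-x^2/(1+4t)}/\sqrt{1+4t}$, which the quadrature below reproduces.

```python
import numpy as np

# Sanity check of (31.8) for phi(x) = exp(-x^2):  the convolution with
# the heat kernel can be done in closed form,
#     u(x,t) = exp(-x^2/(1+4t)) / sqrt(1+4t).

phi = lambda s: np.exp(-s**2)

def heat_solution(x, t, n=4001, L=20.0):
    # uniform-grid quadrature of (31.8); the integrand decays fast,
    # so a plain Riemann sum is very accurate here
    s = np.linspace(-L, L, n)
    integrand = np.exp(-(x - s)**2 / (4 * t)) * phi(s)
    return np.sum(integrand) * (s[1] - s[0]) / (2 * np.sqrt(np.pi * t))

x, t = 0.5, 0.8
exact = np.exp(-x**2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t)
print(abs(heat_solution(x, t) - exact))  # quadrature-level error
```

Agreement to quadrature accuracy confirms both the kernel normalization $1/(2\sqrt{\pi t})$ and the exponent $(x-s)^2/(4t)$.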
Remark 31.2. The right-hand side of (31.8) is clearly undefined at $t=0$. But, as you know,¹
$$\left\{\frac n{\sqrt\pi}\,e^{-n^2(x-s)^2}\right\}_{n\in\mathbb N}$$
forms a $\delta$-sequence, and hence, setting $n=\dfrac1{2\sqrt t}$, we find that
$$\operatorname*{w-lim}_{t\to0}\frac1{2\sqrt{\pi t}}\,e^{-\frac{(x-s)^2}{4t}}=\operatorname*{w-lim}_{n\to\infty}\frac n{\sqrt\pi}\,e^{-n^2(x-s)^2}=\delta(x-s),$$
so
$$u(x,0)=\int_{\mathbb R}\delta(x-s)\,\varphi(s)\,ds=\varphi(x).$$

¹Recall Example 19.6.
For a point-source initial distribution $\varphi(s)=\delta(s-x_0)$, formula (31.8) gives
$$u(x,t)=\frac1{2\sqrt{\pi t}}\int_{\mathbb R}e^{-\frac{(x-s)^2}{4t}}\,\delta(s-x_0)\,ds=\frac{e^{-\frac{(x-x_0)^2}{4t}}}{2\sqrt{\pi t}}.$$
The graphs of $u(x,t)$ for $t_1<t_2<t_3$ are Gaussian bumps centered at $x_0$ that spread and flatten as $t$ grows. Note that $\{u(x,t)\}_{t\to0}$ forms a $\delta$-sequence.
Remark 31.4. In the wave equation, the speed of propagation is constant and finite. But in the heat equation, in a split second, tails spread from $-\infty$ to $\infty$ (since the exponentials are positive for all $x$). This means that heat propagates instantaneously, i.e. the speed of propagation is infinite. This, of course, is not quite true in real situations; nothing should go faster than the speed of light. So recall that this is a model, and far enough from the source the change is infinitesimally small.

Remark 31.5. Note also that the solution to $u_t=u_{xx}$, $u(x,0)=\varphi(x)$ is the convolution of $\varphi$ and the solution to the heat equation with an impulse initial distribution. This is known as Duhamel's principle.
Exercise 31.6. Solve (31.1) and draw a picture of $u(x,t)$ for
$$\varphi(x)=\begin{cases}1\,,&|x|\le a\\ 0\,,&|x|>a.\end{cases}$$
(Hint: use $\Phi(x)=\dfrac2{\sqrt\pi}\displaystyle\int_0^xe^{-u^2}\,du$, the error function, and its properties.)

Exercise 31.7. Solve
$$\begin{cases}u_t=cu_{xx}-bu_x\ ,&x\in\mathbb R\ ,\ t>0\\ u(x,0)=e^{-x^2},\end{cases}$$
where $b,c$ are constants and $c>0$. Graph the solution for various values of $t>0$.
(Hint: use the substitution $\xi=x-bt$ to reduce this equation to the standard heat equation.)
LECTURE 32

1. Nonhomogeneous Heat Equation

Consider
$$\begin{cases}u_t=u_{xx}+f(x,t)\\ u(x,0)=\varphi(x)\end{cases} \tag{32.1}$$
on the whole line. Assume $f,\varphi\in L^2(\mathbb R)$ with respect to $x$, so that we can use the spectral (here, Fourier) method. The function $f$ is independent of $u$ and is referred to as the forcing term in general. Here it corresponds to heat generation (a heat source) or dispersion.

We are going to apply the same eigenfunction expansion as we did in Lectures 30 and 31. We write (32.1) as
$$u_t+B^2u=f\ ,\qquad B=\frac1i\frac{d}{dx}. \tag{32.2}$$
As previously, write the solution of (32.1) as
$$u(x,t)=\int_{\mathbb R}\hat u(\nu,t)\,e(x,\nu)\,d\nu\ ,\quad \hat u(\nu,t)=\int_{\mathbb R}u(x,t)\,\overline{e(x,\nu)}\,dx\ ,\quad e(x,\nu)=\frac1{\sqrt{2\pi}}\,e^{i\nu x}.$$
Since
$$u_t=\int_{\mathbb R}\hat u_t(\nu,t)\,e(x,\nu)\,d\nu\ ,\qquad B^2u=\int_{\mathbb R}\nu^2\,\hat u(\nu,t)\,e(x,\nu)\,d\nu,$$
$$f(x,t)=\int_{\mathbb R}\hat f(\nu,t)\,e(x,\nu)\,d\nu\ ,\qquad \hat f(\nu,t)=\int_{\mathbb R}f(x,t)\,\overline{e(x,\nu)}\,dx,$$
by inserting the above in (32.2), equation (32.1) becomes, in the frequency domain,
$$\hat u_t(\nu,t)+\nu^2\hat u(\nu,t)=\hat f(\nu,t). \tag{32.3}$$
Solving this first-order linear ODE in $t$ (integrating factor $e^{\nu^2t}$) and matching $\hat u(\nu,0)=\hat\varphi(\nu)$,
$$\hat u(\nu,t)=\underbrace{e^{-\nu^2t}\int_0^t\hat f(\nu,\tau)\,e^{\nu^2\tau}\,d\tau}_{\hat u_1(\nu,t)}+\underbrace{\hat\varphi(\nu)\,e^{-\nu^2t}}_{\hat u_0(\nu,t)}.$$
If we set
$$u_0(x,t)=\int_{\mathbb R}\hat u_0(\nu,t)\,e(x,\nu)\,d\nu$$
and compare this with (31.3), we note that $u_0$ is the solution of equation (32.1) with $f\equiv0$, i.e. of the homogeneous equation. By Theorem 31.1, then,
$$u_0(x,t)=\frac1{2\sqrt{\pi t}}\int_{\mathbb R}e^{-\frac{(x-s)^2}{4t}}\,\varphi(s)\,ds,$$
and we arrive at
$$u(x,t)=u_0(x,t)+\underbrace{\int_{\mathbb R}\hat u_1(\nu,t)\,e(x,\nu)\,d\nu}_{u_1(x,t)}.$$
Explicitly,
$$u_1(x,t)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}e^{-\nu^2t}\left(\int_0^t\hat f(\nu,\tau)\,e^{\nu^2\tau}\,d\tau\right)e^{i\nu x}\,d\nu. \tag{32.4}$$
Since $\hat f(\nu,\tau)=\dfrac1{\sqrt{2\pi}}\displaystyle\int_{\mathbb R}f(s,\tau)\,e^{-i\nu s}\,ds$, (32.4) can be continued as
$$u_1(x,t)=\frac1{2\pi}\int_{\mathbb R}e^{-\nu^2t}\int_0^t\left(\int_{\mathbb R}f(s,\tau)\,e^{-i\nu s}\,ds\right)e^{\nu^2\tau}\,d\tau\ e^{i\nu x}\,d\nu.$$
Now rearrange the terms and the order of integration:
$$u_1(x,t)=\frac1{2\pi}\int_{\mathbb R}\int_0^t\underbrace{\left(\int_{\mathbb R}e^{i\nu(x-s)-\nu^2(t-\tau)}\,d\nu\right)}_{=I(x,s,t-\tau)}f(s,\tau)\,d\tau\,ds. \tag{32.5}$$
This is the same integral as in Lecture 31:
$$\frac1{2\pi}\,I(x,s,t-\tau)=\frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi(t-\tau)}}.$$
Plugging it into (32.5), one has
$$u_1(x,t)=\int_{\mathbb R}\int_0^t\frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi(t-\tau)}}\,f(s,\tau)\,d\tau\,ds,$$
and altogether
$$u(x,t)=\frac1{2\sqrt{\pi t}}\int_{\mathbb R}e^{-\frac{(x-s)^2}{4t}}\,\varphi(s)\,ds+\int_{\mathbb R}\int_0^t\frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi(t-\tau)}}\,f(s,\tau)\,d\tau\,ds.$$
In this case, we have again a sort of superposition principle, where the first term corresponds to the homogeneous solution and the second term is a particular solution for the initial condition $\varphi\equiv0$. This formula is not particularly pleasant, but I'm not aware of any better derivation of it.
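The nonhomogeneous formula can be sanity-checked numerically. With $\varphi=0$ and the illustrative source $f(x,t)=e^{-x^2}$ (my choice), the inner spatial convolution has a closed form, reducing the double integral to a one-dimensional one; the sketch compares the two.

```python
import numpy as np

# With phi = 0 and f(x,t) = exp(-x^2), the spatial convolution at lag
# sig = t - tau equals exp(-x^2/(1+4*sig))/sqrt(1+4*sig), so
#     u1(x,t) = int_0^t exp(-x^2/(1+4*sig)) / sqrt(1+4*sig) d sig.
# We compare this reduced form against a direct double quadrature.

f = lambda s: np.exp(-s**2)

def u1_formula(x, t, n_tau=400, n_s=4001, L=15.0):
    # midpoint rule in tau sidesteps the integrable 1/sqrt(t-tau) endpoint
    s = np.linspace(-L, L, n_s)
    ds = s[1] - s[0]
    taus = (np.arange(n_tau) + 0.5) * t / n_tau
    total = 0.0
    for tau in taus:
        k = np.exp(-(x - s)**2 / (4 * (t - tau))) / (2 * np.sqrt(np.pi * (t - tau)))
        total += np.sum(k * f(s)) * ds * (t / n_tau)
    return total

def u1_reduced(x, t, n=400):
    sig = (np.arange(n) + 0.5) * t / n
    return np.sum(np.exp(-x**2 / (1 + 4 * sig)) / np.sqrt(1 + 4 * sig)) * t / n

x, t = 0.3, 0.5
print(abs(u1_formula(x, t) - u1_reduced(x, t)))  # small
```

The two quadratures use the same $\tau$-nodes, so any disagreement would expose an error in the kernel $e^{-(x-s)^2/4(t-\tau)}/(2\sqrt{\pi(t-\tau)})$.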
Exercise 32.2. Show that the solution to the heat equation for $\varphi(x)=0$ and $f(x,t)=\delta(x)$, $t\ge0$, is
$$u(x,t)=\sqrt{\frac t\pi}\,e^{-\frac{x^2}{4t}}-\frac x2\operatorname{erfc}\left(\frac x{2\sqrt t}\right),$$
where $\operatorname{erfc}(x)=\dfrac2{\sqrt\pi}\displaystyle\int_x^\infty e^{-u^2}\,du$, the complementary error function.
2. Nonhomogeneous Wave Equation

Consider the initial value problem:
$$\begin{cases}u_{tt}=u_{xx}+f(x,t)\\ u(x,0)=\varphi(x)\\ u_t(x,0)=\psi(x).\end{cases} \tag{32.6}$$
In the frequency domain this becomes
$$\hat u_{tt}(\nu,t)+\nu^2\hat u(\nu,t)=\hat f(\nu,t). \tag{32.7}$$
We solve (32.7) by variation of parameters: look for $\hat u=C_1(\nu,t)e^{i\nu t}+C_2(\nu,t)e^{-i\nu t}$ subject to
$$\begin{pmatrix}e^{i\nu t}&e^{-i\nu t}\\ i\nu e^{i\nu t}&-i\nu e^{-i\nu t}\end{pmatrix}\begin{pmatrix}C_1'\\ C_2'\end{pmatrix}=\begin{pmatrix}0\\ \hat f\end{pmatrix},$$
where $C_1'$, $C_2'$ are the (partial) derivatives of $C_1$, $C_2$ with respect to $t$. By Cramer's rule,
$$C_1'=\frac{\hat f\,e^{-i\nu t}}{2i\nu}\ ,\qquad C_2'=-\frac{\hat f\,e^{i\nu t}}{2i\nu}, \tag{32.8}$$
where we used the Wronskian of $e^{i\nu t}$, $e^{-i\nu t}$,
$$W=\det\begin{pmatrix}e^{i\nu t}&e^{-i\nu t}\\ i\nu e^{i\nu t}&-i\nu e^{-i\nu t}\end{pmatrix}=-2i\nu.$$
It now follows from (32.8) that
$$C_1(\nu,t)=\int\frac{\hat f(\nu,t)\,e^{-i\nu t}}{2i\nu}\,dt\ ,\qquad C_2(\nu,t)=-\int\frac{\hat f(\nu,t)\,e^{i\nu t}}{2i\nu}\,dt, \tag{32.9}$$
and hence, fixing the constants of integration,
$$\hat u(\nu,t)=e^{i\nu t}\int_0^t\frac{\hat f(\nu,\tau)\,e^{-i\nu\tau}}{2i\nu}\,d\tau-e^{-i\nu t}\int_0^t\frac{\hat f(\nu,\tau)\,e^{i\nu\tau}}{2i\nu}\,d\tau+C_1(\nu)\,e^{i\nu t}+C_2(\nu)\,e^{-i\nu t}. \tag{32.10}$$
At $t=0$,
$$u(x,0)=\int_{\mathbb R}\hat u(\nu,0)\,e(x,\nu)\,d\nu$$
must equal
$$\varphi(x)=\int_{\mathbb R}\hat\varphi(\nu)\,e(x,\nu)\,d\nu\ ,\qquad \hat\varphi(\nu)=\int_{\mathbb R}\varphi(x)\,\overline{e(x,\nu)}\,dx,$$
so
$$\hat u(\nu,0)=\hat\varphi(\nu). \tag{32.11}$$
Similarly,
$$u_t(x,0)=\psi(x)=\int_{\mathbb R}\hat\psi(\nu)\,e(x,\nu)\,d\nu\ ,\quad \hat\psi(\nu)=\int_{\mathbb R}\psi(x)\,\overline{e(x,\nu)}\,dx\ ,\quad\text{and}\quad \hat u_t(\nu,0)=\hat\psi(\nu). \tag{32.12}$$
From (32.10),
$$\hat u(\nu,0)=\underbrace{\int_0^0\frac{\hat f(\nu,\tau)\,e^{-i\nu\tau}}{2i\nu}\,d\tau}_{=0}-\underbrace{\int_0^0\frac{\hat f(\nu,\tau)\,e^{i\nu\tau}}{2i\nu}\,d\tau}_{=0}+C_1(\nu)+C_2(\nu),$$
i.e.
$$\hat u(\nu,0)=C_1(\nu)+C_2(\nu).$$
Next, differentiating (32.10) in $t$, the boundary terms cancel,
$$e^{i\nu t}\,\frac{\hat f(\nu,t)\,e^{-i\nu t}}{2i\nu}-e^{-i\nu t}\,\frac{\hat f(\nu,t)\,e^{i\nu t}}{2i\nu}=0,$$
and we get
$$\hat u_t(\nu,t)=\frac{e^{i\nu t}}2\int_0^t\hat f(\nu,\tau)\,e^{-i\nu\tau}\,d\tau+\frac{e^{-i\nu t}}2\int_0^t\hat f(\nu,\tau)\,e^{i\nu\tau}\,d\tau+i\nu C_1(\nu)\,e^{i\nu t}-i\nu C_2(\nu)\,e^{-i\nu t}.$$
Computing it at $t=0$,
$$\hat u_t(\nu,0)=i\nu C_1(\nu)-i\nu C_2(\nu)=i\nu\left(C_1(\nu)-C_2(\nu)\right).$$
By (32.11), (32.12) we have
$$C_1+C_2=\hat\varphi\ ,\qquad C_1-C_2=\frac{\hat\psi}{i\nu},$$
that is, $C_1(\nu)=\dfrac12\left(\hat\varphi+\dfrac{\hat\psi}{i\nu}\right)$, $C_2(\nu)=\dfrac12\left(\hat\varphi-\dfrac{\hat\psi}{i\nu}\right)$.
Finally,
$$\hat u(\nu,t)=e^{i\nu t}\left\{\int_0^t\frac{\hat f(\nu,\tau)\,e^{-i\nu\tau}}{2i\nu}\,d\tau+\frac12\left(\hat\varphi(\nu)+\frac{\hat\psi(\nu)}{i\nu}\right)\right\}+e^{-i\nu t}\left\{-\int_0^t\frac{\hat f(\nu,\tau)\,e^{i\nu\tau}}{2i\nu}\,d\tau+\frac12\left(\hat\varphi(\nu)-\frac{\hat\psi(\nu)}{i\nu}\right)\right\}$$
$$=\hat u_0(\nu,t)+\underbrace{\frac{e^{i\nu t}}{2i\nu}\int_0^t\hat f(\nu,\tau)\,e^{-i\nu\tau}\,d\tau-\frac{e^{-i\nu t}}{2i\nu}\int_0^t\hat f(\nu,\tau)\,e^{i\nu\tau}\,d\tau}_{\hat u_1(\nu,t)},$$
where
$$\hat u_0(\nu,t)=\frac12\left(\hat\varphi+\frac{\hat\psi}{i\nu}\right)e^{i\nu t}+\frac12\left(\hat\varphi-\frac{\hat\psi}{i\nu}\right)e^{-i\nu t}$$
and
$$u_0(x,t)=\int_{\mathbb R}\hat u_0(\nu,t)\,e(x,\nu)\,d\nu$$
is the solution of equation (32.6) with $f\equiv0$, i.e. of the homogeneous equation. By Theorem 30.4, then,
$$u_0(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds,$$
and we arrive at
$$u(x,t)=u_0(x,t)+\underbrace{\int_{\mathbb R}\hat u_1(\nu,t)\,e(x,\nu)\,d\nu}_{u_1(x,t)}.$$
Explicitly, substituting $\hat f(\nu,\tau)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}f(s,\tau)\,e^{-i\nu s}\,ds$,
$$u_1(x,t)=\frac1{2\pi}\int_{\mathbb R}\int_0^t\int_{\mathbb R}f(s,\tau)\,\frac{e^{i\nu(t-\tau+x-s)}-e^{i\nu(-(t-\tau)+x-s)}}{2i\nu}\,ds\,d\tau\,d\nu. \tag{32.13}$$
Changing the order of integration,
$$u_1(x,t)=\int_{\mathbb R}\int_0^t\left(\underbrace{\frac1{2\pi}\int_{\mathbb R}\frac{e^{i\nu(t-\tau+x-s)}}{2i\nu}\,d\nu}_{I(t-\tau+x-s)}-\underbrace{\frac1{2\pi}\int_{\mathbb R}\frac{e^{i\nu(-(t-\tau)+x-s)}}{2i\nu}\,d\nu}_{I(-(t-\tau)+x-s)}\right)f(s,\tau)\,d\tau\,ds,$$
where, by (30.15),
$$I(a)=\frac1{2\pi}\int_{\mathbb R}\frac{e^{i\nu a}}{2i\nu}\,d\nu=\frac14\operatorname{sgn}a. \tag{32.14}$$
Hence
$$u_1(x,t)=\frac14\int_{\mathbb R}\int_0^t\chi(s,\tau)\,f(s,\tau)\,d\tau\,ds\ ,\qquad \chi(s,\tau)=\operatorname{sgn}(x+t-s-\tau)-\operatorname{sgn}(x-t-s+\tau). \tag{32.15}$$
Let us now figure out the support of $\chi(s,\tau)$ in the strip $0\le\tau\le t$. The characteristics through $(x,t)$, i.e. the lines $s+\tau=x+t$ and $s-\tau=x-t$, divide it into three regions:¹ in region I (to the left of both lines) $\chi(s,\tau)=0$; in region II (to the right of both) $\chi(s,\tau)=0$; in region III, the triangle with apex $(x,t)$ and base $[x-t,x+t]$ on the $s$-axis, $\chi(s,\tau)=2$.
So $\chi(s,\tau)=2$ for $(s,\tau)\in\mathrm{III}$ and zero otherwise, and (32.15) becomes
$$u_1(x,t)=\frac12\iint_{\mathrm{III}}f(s,\tau)\,ds\,d\tau=\frac12\iint_{\triangle(x,t)}f(s,\tau)\,ds\,d\tau, \tag{32.16}$$
where $\triangle(x,t)=\mathrm{III}$.

¹To see this, check the signs of $x+t-s-\tau$ and $x-t-s+\tau$ in each of the three regions.
Combining, the solution of (32.6) can be represented as
$$u(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds+\frac12\iint_{\triangle(x,t)}f(s,\tau)\,ds\,d\tau, \tag{32.17}$$
where $\triangle(x,t)$ is the triangle bounded by the lines $s+\tau=x+t$, $s-\tau=x-t$ and the $s$-axis; explicitly,
$$u(x,t)=\frac{\varphi(x-t)+\varphi(x+t)}2+\frac12\int_{x-t}^{x+t}\psi(s)\,ds+\frac12\int_0^t\int_{\tau+(x-t)}^{-\tau+(x+t)}f(s,\tau)\,ds\,d\tau.$$
Definition 32.4. $\triangle(x,t)$ is called the characteristic triangle.
One can easily see that $\triangle(x,t)$ expands as $t$ increases for every fixed $x$.
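Formula (32.17) can be verified with a manufactured solution; the particular choice $u=t^2\sin x$ below is mine, not from the notes.

```python
import numpy as np

# u(x,t) = t^2 sin x solves u_tt = u_xx + f with f(x,t) = (2 + t^2) sin x
# and zero initial data, so half the integral of f over the characteristic
# triangle must reproduce t^2 sin x.

f = lambda s, tau: (2 + tau**2) * np.sin(s)

def u_triangle(x, t, n_tau=400, n_s=400):
    # midpoint rule in tau; for each tau the inner integral runs over the
    # triangle slice [x - (t - tau), x + (t - tau)]
    taus = (np.arange(n_tau) + 0.5) * t / n_tau
    total = 0.0
    for tau in taus:
        half = t - tau
        s = x - half + (np.arange(n_s) + 0.5) * (2 * half) / n_s
        total += np.sum(f(s, tau)) * (2 * half / n_s) * (t / n_tau)
    return 0.5 * total

x, t = 0.4, 1.2
print(abs(u_triangle(x, t) - t**2 * np.sin(x)))  # small
```

The inner limits trace exactly the two characteristics, so this also exercises the geometry of $\triangle(x,t)$.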
Example 32.5. Consider the case when $\varphi(x)\equiv0\equiv\psi(x)$ and
$$f(x,t)=\chi_\Pi(x,t)=\begin{cases}1\,,&(x,t)\in\Pi\\ 0\,,&(x,t)\notin\Pi,\end{cases}$$
where $\Pi=\{(x,t):-a\le x\le a\ ,\ 0\le t\le T\}$. That is, the perturbation $f(x,t)$ acts like a blast concentrated on $[-a,a]$ and lasting $T$ seconds.
Then the solution (32.16) takes the form
$$u(x,t)=\frac12\iint_{\triangle(x,t)}\chi_\Pi(s,\tau)\,ds\,d\tau=\frac12\iint_{\Pi\cap\triangle(x,t)}ds\,d\tau=\frac12\operatorname{Area}\big(\Pi\cap\triangle(x,t)\big).$$
Figure 1. The characteristic triangle of a point $(x_0,t)$ relative to the blast region $\Pi$.

Figure 2. The profile $u(x_0,t)$ for fixed $x_0>a$: it vanishes until $t_0=x_0-a$, then grows, and saturates at the value $aT$ once $\triangle(x_0,t)\supset\Pi$.
Appendix: The Method of Variation of Parameters

Consider first the first-order linear ODE $y'+p(x)y=f(x)$. Look for its solution in the form $y=u(x)v(x)$, where $u=e^{-\int p(x)\,dx}$. Then
$$\underbrace{(u'+pu)}_{=0}v+uv'=f\ \Longrightarrow\ uv'=f\ \Longrightarrow\ v'=fu^{-1}.$$
Hence
$$v=\int_0^x fu^{-1}(s)\,ds+C\ \Longrightarrow\ y=u(x)\left(\int_0^x fu^{-1}(s)\,ds+C\right),$$
i.e.
$$y=e^{-\int p}\left(\int_0^x fe^{\int p}(s)\,ds+C\right). \tag{32.18}$$
Consider now a second-order linear nonhomogeneous ODE $y''+py'+qy=f$, and let $y_1,y_2$ be linearly independent solutions of the homogeneous equation. Look for a particular solution in the form $y=C_1(x)y_1+C_2(x)y_2$ subject to
$$C_1'(x)y_1+C_2'(x)y_2=0, \tag{32.19}$$
$$C_1'(x)y_1'+C_2'(x)y_2'=f. \tag{32.20}$$
By Cramer's rule,
$$C_1'(x)=\frac{W_1}W\ ,\qquad C_2'(x)=\frac{W_2}W,$$
where
$$W=\det\begin{pmatrix}y_1&y_2\\ y_1'&y_2'\end{pmatrix}\ ,\quad W_1=\det\begin{pmatrix}0&y_2\\ f&y_2'\end{pmatrix}=-fy_2\ ,\quad W_2=\det\begin{pmatrix}y_1&0\\ y_1'&f\end{pmatrix}=fy_1.$$
Hence
$$C_1(x)=-\int\frac{fy_2}W\,dx\ ,\qquad C_2(x)=\int\frac{fy_1}W\,dx,$$
and the general solution to our ODE is
$$y=y_c+y_p\ ,\qquad y_c=C_1y_1+C_2y_2\ ,\qquad y_p=-y_1\int_{x_0}^x\frac{fy_2}W\,dx+y_2\int_{x_0}^x\frac{fy_1}W\,dx. \tag{32.21}$$
Exercise 32.8. Show that the solution to the initial value problem
$$Ly=g\ ,\qquad y(t_0)=y_0\ ,\quad y'(t_0)=y_0'$$
can be written as $y=u+v$, where $u,v$ solve
$$Lu=0\ ,\quad u(t_0)=y_0\ ,\quad u'(t_0)=y_0'\qquad\text{and}\qquad Lv=g\ ,\quad v(t_0)=0\ ,\quad v'(t_0)=0,$$
respectively.
(Hint: use (32.21) and choose $C_1,C_2$ to satisfy the ICs. Then $u=C_1y_1+C_2y_2$ and $v$ is the other part.)

Exercise 32.9. Show that a particular solution to $y''+y=g$ is
$$y(t)=\int_{t_0}^t\sin(t-s)\,g(s)\,ds,$$
and use it to solve the initial value problem with $y(0)=y_0$, $y'(0)=y_0'$.
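The sine-kernel formula of Exercise 32.9 is easy to test numerically; the forcing $g(s)=s$ below is my own illustrative choice.

```python
import numpy as np

# For y'' + y = g with zero data at t0 = 0 the particular solution is
# y(t) = int_0^t sin(t-s) g(s) ds.  With g(s) = s the exact answer is
# y(t) = t - sin t  (check: y'' + y = sin t + t - sin t = t).

g = lambda s: s

def y_particular(t, n=2000):
    s = (np.arange(n) + 0.5) * t / n   # midpoint nodes on [0, t]
    return np.sum(np.sin(t - s) * g(s)) * t / n

t = 1.7
print(abs(y_particular(t) - (t - np.sin(t))))  # small
```

This is exactly the variation-of-parameters particular solution (32.21) with $y_1=\cos t$, $y_2=\sin t$, $W=1$.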
LECTURE 33

Consider wave propagation in a nonhomogeneous medium, e.g.
$$u_{tt}-\frac{\partial}{\partial x}\big(\kappa(x)u_x\big)=0,$$
or its stationary counterpart
$$-u''+q(x)u=\lambda u\qquad\text{(Helmholtz equation).} \tag{33.2}$$
None of these forms is considered canonical. Since $q(x)$ is not constant, this is a different ballgame (although we can handle box potentials to get explicit solutions).
By denoting $-u_{xx}+q(x)u\equiv Hu$, (33.1) transforms into
$$u_{tt}+Hu=f\qquad\text{or}\qquad u_t+Hu=f. \tag{33.3}$$
Recall that
$$H=-\frac{d^2}{dx^2}+q(x) \tag{33.4}$$
is the (one-dimensional) Schrödinger operator.
Our goal is to adjust Fourier's method to this setting. In general, a lot now depends on the properties of the optical potential $q(x)$, and things get a lot more complex. But the general idea of eigenfunction expansions does work and, as previously, while studying the case $q\equiv0$, we start from the spectral analysis of the operator $H$. Well, it's easier said than done, since if $q\ne0$ the Schrödinger operator is a very difficult object and we are not in a position to present its theory at any level.
First of all, a lot depends on whether we deal with a finite or an infinite interval.

1. Finite interval $(a,b)$. Consider
$$\begin{cases}u_t+Hu=f\ \ (\text{or }u_{tt}+Hu=f)\\ \text{boundary conditions at }x=a,b\\ \text{initial conditions.}\end{cases} \tag{33.5}$$
Typically, the spectrum of $H$ is purely discrete but infinite. I.e. the equation
$$\begin{cases}Hu=\lambda u\\ \text{boundary conditions}\end{cases}$$
has $L^2(a,b)$-solutions $\{e_n(x)\}$, i.e. eigenfunctions, corresponding to a discrete set $\{\lambda_n\}$, the eigenvalues. Then by Theorem 18.3, $\{e_n(x)\}$ forms an orthonormal basis in $L^2(a,b)$, and we apply the procedure of Lecture 28. Namely, any solution to (33.5) can be represented as
$$u(x,t)=\sum_n\hat u_n(t)\,e_n(x)\ ,\qquad \hat u_n(t)=\langle u(x,t),e_n(x)\rangle\ ,\quad\text{where }\langle f,g\rangle=\int_a^bf(x)\overline{g(x)}\,dx,$$
which (in the wave case) leads to
$$\hat u_n''+\lambda_n\hat u_n=\hat f_n.$$
Then we solve this second-order linear nonhomogeneous equation by variation of parameters, finding the constants of integration from the initial conditions.
Note that $e_n(x)$ are no longer $\frac1{\sqrt{2\pi}}e^{inx}$, and $\hat u_n$, hence, are no longer Fourier coefficients. But $\{\hat u_n\}$ are commonly called generalized Fourier coefficients.
Example 33.1. Consider
$$u_{tt}=\frac{\partial}{\partial x}\big((1-x^2)u_x\big)\ ,\qquad x\in[-1,1].$$
2. Infinite interval. There are now a lot of cases for the spectrum: the spectrum can still be purely discrete, it can be discrete on top of continuous, you could have negative eigenvalues, and there could be infinitely many of them, an accumulation to zero, the continuous spectrum may extend past zero, etc.

It is also quite typical that $\sigma_c(H)$ is not simple but of multiplicity two, i.e. if $\lambda\in\sigma_c(H)$ then there are exactly two different (linearly independent) eigenfunctions of the continuous spectrum $\psi_+(x,\lambda)$, $\psi_-(x,\lambda)$.

Then the complete system of eigenfunctions of the discrete and continuous spectrum is
$$\{\psi_n(x)\ ,\ \psi_\pm(x,\lambda)\},$$
where $\psi_n$ are the eigenfunctions of the discrete spectrum, and we have
$$\langle\psi_n,\psi_m\rangle=\delta_{nm}\ ,\qquad \langle\psi_\pm(\cdot,\lambda),\psi_\pm(\cdot,\mu)\rangle=\delta(\lambda-\mu)\ ,\qquad \langle\psi_+(\cdot,\lambda),\psi_-(\cdot,\mu)\rangle=0.$$
Note that even with infinitely many $\psi_n$, they will not form a basis by themselves; the $\psi_\pm$ are needed to cover the rest of the space, as we will see in the next theorem.

Recall also that for $q\equiv0$, i.e. $H=-\dfrac{d^2}{dx^2}$, by Theorem 30.2 we have
$$\sigma_d(H)=\varnothing\ ,\quad \sigma_c(H)=[0,\infty)\ ,\quad\text{and}\quad \psi_\pm(x,\lambda)=\frac1{\sqrt{2\pi}}\,e^{\pm i\sqrt\lambda\,x}.$$
It is no longer so if $q\not\equiv0$. Furthermore, if $\sigma(H)=[0,\infty)$, this does not imply that $q(x)\equiv0$ (counterexamples abound). But, amazingly enough, Fourier's Integral Theorem remains valid in the following edition.
Theorem 33.2. Let $H=-\dfrac{d^2}{dx^2}+q(x)$ on $L^2(\mathbb R)$, and let $q(x)$ be such that $\sigma(H)=\sigma_d(H)\cup\sigma_c(H)$. Let $\{\psi_n(x)\}$ be the eigenfunctions of $\sigma_d(H)$ and let $\{\psi_\pm(x,\lambda)\}$ be the eigenfunctions of $\sigma_c(H)$. Then any $u\in L^2(\mathbb R)$ can be represented as
$$u(x)=\sum_n\hat u_n\,\psi_n(x)+\int_{\sigma_c(H)}\hat u_+(\lambda)\,\psi_+(x,\lambda)\,d\lambda+\int_{\sigma_c(H)}\hat u_-(\lambda)\,\psi_-(x,\lambda)\,d\lambda, \tag{33.7}$$
where $\hat u_n=\langle u,\psi_n\rangle$ and $\hat u_\pm(\lambda)=\langle u,\psi_\pm(\cdot,\lambda)\rangle$.

Note that the above becomes the Fourier transform for $q=0$, written with a square root:
$$\hat u_\pm(\lambda)=\frac1{\sqrt{2\pi}}\int_{\mathbb R}u(x)\,e^{\mp i\sqrt\lambda\,x}\,dx.$$

Applying the expansion (33.7) to the wave equation $u_{tt}+Hu=f$, with the coefficients now time-dependent, we get
$$u_{tt}=\sum_n\hat u_n''(t)\,\psi_n(x)+\int_{\sigma_c(H)}\frac{\partial^2}{\partial t^2}\hat u_+(\lambda,t)\,\psi_+(x,\lambda)\,d\lambda+\int_{\sigma_c(H)}\frac{\partial^2}{\partial t^2}\hat u_-(\lambda,t)\,\psi_-(x,\lambda)\,d\lambda,$$
$$Hu=\sum_n\lambda_n\,\hat u_n(t)\,\psi_n(x)+\int_{\sigma_c(H)}\lambda\,\hat u_+(\lambda,t)\,\psi_+(x,\lambda)\,d\lambda+\int_{\sigma_c(H)}\lambda\,\hat u_-(\lambda,t)\,\psi_-(x,\lambda)\,d\lambda,$$
and hence
$$\hat u_n''+\lambda_n\hat u_n=\hat f_n(t)\ ,\qquad \frac{\partial^2}{\partial t^2}\hat u_\pm+\lambda\,\hat u_\pm=\hat f_\pm(\lambda,t), \tag{33.8}$$
where $\hat f_n(t)=\langle f(\cdot,t),\psi_n\rangle$ and $\hat f_\pm(\lambda,t)=\langle f(\cdot,t),\psi_\pm(\cdot,\lambda)\rangle$.
LECTURE 34

Let $\Omega\subseteq\mathbb R^n$. The set of functions $f$ with
$$\|f\|^2=\int_\Omega|f(X)|^2\,dV=\int_\Omega|f(x_1,x_2,\dots,x_n)|^2\,dx_1\,dx_2\cdots dx_n<\infty$$
is called the set of square integrable functions on $\Omega$ and is commonly denoted by $L^2(\Omega)$.

Typical notation:
1) $[0,1]\times[0,1]$ is a rectangle in $\mathbb R^2$;
2) $[0,1]^3$ is the unit cube in $\mathbb R^3$;
3) $S=\{(x,y,z)\in\mathbb R^3:x+y+z=1\}$;
4) $\mathbb T^2$ is a unit torus in $\mathbb R^3$.
If $f,g\in L^2(\Omega)$, then
$$\int_\Omega|f+g|^2\,dV\le2\int_\Omega|f|^2\,dV+2\int_\Omega|g|^2\,dV<\infty,$$
so $L^2(\Omega)$ is a linear space; it is equipped with the inner product
$$\langle f,g\rangle=\int_\Omega f(X)\,\overline{g(X)}\,dV.\qquad\text{QED}$$

Example. Let $f(x,y,z)=x^{-\alpha}y^{-\beta}z^{-\gamma}$ on $[0,1]^3$. Then
$$\int_{[0,1]^3}|f(x,y,z)|^2\,dx\,dy\,dz=\int_{[0,1]^3}\frac{dx\,dy\,dz}{x^{2\alpha}y^{2\beta}z^{2\gamma}}=\int_0^1\frac{dx}{x^{2\alpha}}\int_0^1\frac{dy}{y^{2\beta}}\int_0^1\frac{dz}{z^{2\gamma}}$$
$$=\frac{x^{-2\alpha+1}}{-2\alpha+1}\bigg|_0^1\cdot\frac{y^{-2\beta+1}}{-2\beta+1}\bigg|_0^1\cdot\frac{z^{-2\gamma+1}}{-2\gamma+1}\bigg|_0^1=\frac1{(1-2\alpha)(1-2\beta)(1-2\gamma)}<\infty \tag{34.2}$$
provided $\alpha,\beta,\gamma<\frac12$.
Example. The function
$$f(x,y,z)=\frac{xyz}{\big(1+(x^2+y^2+z^2)\big)^3}\in L^2(\mathbb R^3),$$
but $xyz\,e^{-|x|-|y|}\notin L^2(\mathbb R^3)$ (there is no decay in $z$).
Example 34.6. $L^2(B^3)$, where $B^3$ is the unit ball.

Spherical coordinate system:
$$x=r\sin\theta\cos\phi\ ,\quad y=r\sin\theta\sin\phi\ ,\quad z=r\cos\theta\ ;\qquad 0\le r<\infty\ ,\ 0\le\phi\le2\pi\ ,\ 0\le\theta\le\pi.$$
(Or, by setting $\rho=r\sin\theta$, we can write $x=\rho\cos\phi$, $y=\rho\sin\phi$, $z=r\cos\theta$.)
The Jacobian is
$$|J(r,\theta,\phi)|=\left|\det\begin{pmatrix}x_r&y_r&z_r\\ x_\theta&y_\theta&z_\theta\\ x_\phi&y_\phi&z_\phi\end{pmatrix}\right|=r^2\sin\theta.$$
Hence if $f\in L^2(B^3)$,
$$\int_{B^3}|f(x,y,z)|^2\,dx\,dy\,dz=\int_0^1\left(\int_0^\pi\int_0^{2\pi}|f(r,\theta,\phi)|^2\sin\theta\,d\phi\,d\theta\right)r^2\,dr. \tag{34.3}$$
For instance, $f=\dfrac1{\sqrt{x^2+y^2+z^2}}=\dfrac1r\in L^2(B^3)$, since by (34.3), $\|f\|^2=\displaystyle\int_0^1\frac1{r^2}\,4\pi r^2\,dr=4\pi<\infty$.
By contrast, $f(x,y)=\dfrac1{\sqrt{x^2+y^2}}\notin L^2(B^2)$, since
$$\|f\|^2=\iint_{B^2}\frac{dx\,dy}{x^2+y^2}=\int_0^{2\pi}\!\!\int_0^1\frac1{r^2}\,r\,dr\,d\phi=\infty,$$
but we've just seen that the analogous $f(x,y,z)\in L^2(B^3)$.

Note that exact computations of norms are rare. Most of the time, you'll need to use estimates to prove membership in $L^2(\Omega)$.
(The rows of the Jacobian above can be rearranged in other ways too, but that will lead at most to a change of sign.)

Definition 34.7. The operator $\Delta$ defined in $L^2(\Omega)$ as follows,
$$\Delta u=\frac{\partial^2u}{\partial x_1^2}+\frac{\partial^2u}{\partial x_2^2}+\cdots+\frac{\partial^2u}{\partial x_n^2}\ ,\qquad u\in L^2(\Omega),$$
is called the Laplace operator (Laplacian). In $\mathbb R^1$ it is $\dfrac{d^2}{dx^2}$, and in $\mathbb R^3$ we have
$$\Delta u=\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}+\frac{\partial^2u}{\partial z^2}\ ,\quad\text{i.e. }\Delta=\partial_x^2+\partial_y^2+\partial_z^2.$$
Clearly, $\Delta$ is a linear operator.
Different boundary conditions give different operators; e.g.
2) $\operatorname{Dom}\Delta=\left\{u\in L^2:\Delta u\in L^2\ ,\ \dfrac{\partial u}{\partial n}\Big|_{\partial\Omega}=0\right\}$ (the Neumann Laplacian);
3) $\operatorname{Dom}\Delta=\left\{u\in L^2:\Delta u\in L^2\ ,\ \left(\dfrac{\partial u}{\partial n}+hu\right)\Big|_{\partial\Omega}=0\ \text{for some }h\in\mathbb R\right\}$ (the Robin Laplacian).

Proof (for the Dirichlet Laplacian). By Theorem 34.9 (Green's formula),
$$\langle\Delta u,v\rangle=\int_\Omega\Delta u\,\bar v\,dx\,dy\,dz=\int_\Omega u\,\Delta\bar v\,dx\,dy\,dz+\underbrace{\oint_{\partial\Omega}\left(\bar v\,\frac{\partial u}{\partial n}-u\,\frac{\partial\bar v}{\partial n}\right)dS}_{=0}=\langle u,\Delta v\rangle.\qquad\text{QED}$$

Exercise 34.12. Prove Theorem 34.11 for the Neumann and Robin Laplacians.

As you can guess, our goal will be the spectral analysis of $\Delta$. But we start from the Laplace equation
$$\Delta u=0.$$
LECTURE 35

Proof.
1) $\Longrightarrow$. Let $f(z)=u(x,y)+iv(x,y)\in H(\Omega)$. This means that $u,v$ are subject to the Cauchy-Riemann conditions
$$u_x=v_y\ ,\qquad u_y=-v_x\ ,\qquad (x,y)\in\Omega. \tag{35.2}$$
Differentiating, $u_{xx}=v_{yx}$ and $u_{yy}=-v_{xy}$, so $u_{xx}+u_{yy}=0$, i.e. $u$ is harmonic (and similarly so is $v$).
2) $\Longleftarrow$. Conversely, given a harmonic $u$, define
$$v(x,y)=\int_{(x_0,y_0)}^{(x,y)}(-u_y\,dx+u_x\,dy). \tag{35.3}$$
This line integral is path-independent: by Green's theorem,
$$\oint_C(-u_y\,dx+u_x\,dy)=\iint_{\operatorname{Int}C}\big(u_{xx}-(-u_{yy})\big)\,dx\,dy=0. \tag{35.4}$$
The function $v$ so defined satisfies $v_x=-u_y$, $v_y=u_x$, i.e. the Cauchy-Riemann conditions, so
$$f=u+iv\in H(\Omega).\qquad\text{QED}$$
This statement is too general to be useful, although it gives us the general structure of a solution to $\Delta u=0$: the real part of an analytic function.
Consider the Dirichlet problem for the Laplace equation on the unit disk $\mathbb D=\{(x,y):x^2+y^2<1\}$:
$$\begin{cases}\Delta u=0\\ u|_{\partial\mathbb D}=\varphi(\theta)\ ,\quad 0\le\theta\le2\pi.\end{cases} \tag{35.5}$$
We can apply Theorem 35.2 to find a series representation for the above problem,
and we present such a solution in Appendix to this lecture.
But for now, let us find an integral representation of the solution to (35.5). We
need some ingredients.
Lemma 35.3. Let $u$ be harmonic on the unit disk. Then
$$u(0)=\frac1{2\pi}\int_0^{2\pi}u\!\left(e^{it}\right)dt. \tag{35.6}$$
Proof. By the Cauchy formula (Theorem 3.2) with $z_0=0$,
$$f(0)=\frac1{2\pi i}\oint_{\partial\mathbb D}\frac{f(z)}z\,dz=\frac1{2\pi i}\int_0^{2\pi}\frac{f(e^{it})}{e^{it}}\,ie^{it}\,dt=\frac1{2\pi}\int_0^{2\pi}f\!\left(e^{it}\right)dt,$$
and taking real parts,
$$u(0)=\operatorname{Re}f(0)=\frac1{2\pi}\int_0^{2\pi}\operatorname{Re}f\!\left(e^{it}\right)dt=\frac1{2\pi}\int_0^{2\pi}u\!\left(e^{it}\right)dt.\qquad\text{QED}$$
Remark 35.4. Lemma 35.3 is called the mean value theorem.
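The mean value property is easy to see numerically; the harmonic function below, $u=\operatorname{Re}(z+0.5)^3$, is my own illustrative choice.

```python
import numpy as np

# Mean value theorem (35.6): for u = Re f with f(z) = (z + 0.5)^3,
# the average of u over the unit circle equals u(0) = Re f(0) = 0.125.

f = lambda z: (z + 0.5)**3
n = 4096
t = 2 * np.pi * np.arange(n) / n            # equispaced points on the circle
avg = np.mean(np.real(f(np.exp(1j * t))))   # trapezoid rule on a periodic function
print(avg)  # 0.125 up to rounding
```

For a trigonometric polynomial like this one, the equispaced average kills every nonzero harmonic exactly, so the agreement is to machine precision.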
Lemma 35.5. The function $\omega(z)$ defined by
$$\omega(z)=\epsilon\,\frac{z-z_0}{1-\bar z_0z}, \tag{35.7}$$
where $\epsilon$ is unimodular, i.e. $|\epsilon|=1$, and $z_0\in\mathbb D$, transforms the unit disk onto itself. QED

One of the properties of conformal mappings is that the boundary of the domain is mapped onto the boundary of the image of the domain. Then, using the fact that $\omega(z_0)=0$ and $|\omega(0)|=|z_0|<1$ since $z_0\in\mathbb D$, one finds that the image is inside the circle, i.e. it is the disk.
Its inverse is
$$z=\omega^{-1}(w)=\frac{w/\epsilon+z_0}{1+\bar z_0\,w/\epsilon}\ ,\qquad z_0=\omega^{-1}(0).$$
Note also, for later use, that
$$1-|\omega(z)|^2=\frac{|1-\bar z_0z|^2-|z-z_0|^2}{|1-\bar z_0z|^2}=\frac{(1-|z_0|^2)(1-|z|^2)}{|1-\bar z_0z|^2},$$
and that on the boundary $|z|=1$,
$$1-\bar z_0z=z(\bar z-\bar z_0)\ \Longrightarrow\ |1-\bar z_0z|=|z-z_0|,$$
so that
$$\frac{1-|z_0|^2}{|1-\bar z_0z|^2}=\frac{1-|z_0|^2}{|z-z_0|^2}\qquad(|z|=1).$$
Exercise 35.7. Prove that if $u(z)$ is harmonic (i.e. $\Delta u=0$) on the unit disk $\mathbb D=\{z:|z|<1\}$ then so is $u\!\left(\omega^{-1}(z)\right)$, where $\omega(z)$ is defined by (35.7).

Exercise 35.8. Graph the Poisson kernels as functions of $\theta$ for different $0\le r<1$. Prove that the Poisson kernels represent a $\delta$-family $\delta_r(\theta-\theta_0)$ as $r\to1$.
Appendix. Series Representation of the Solution to the Dirichlet Problem for the Laplace Equation

Consider the Laplace equation as a Dirichlet problem on the unit disk, i.e. (35.5):
$$\Delta u=0\ ,\qquad u|_{\partial\mathbb D}=\varphi.$$
Write $u=\operatorname{Re}f$ with $f(z)=\sum_{n\ge0}a_nz^n$ analytic in $\mathbb D$ and $z=re^{i\theta}$:
$$u=\operatorname{Re}f=\operatorname{Re}\left(a_0+\sum_{n\ge1}r^na_ne^{in\theta}\right)=\frac12\sum_{n\ge1}r^na_ne^{in\theta}+\operatorname{Re}a_0+\frac12\sum_{n\ge1}r^n\bar a_ne^{-in\theta},$$
i.e.
$$u=\sum_{n\in\mathbb Z}r^{|n|}A_n\,e^{in\theta}\ ,\quad 0\le r<1\ ,\qquad A_n=\begin{cases}\frac12a_n\,,&n\ge1\\ \operatorname{Re}a_0\,,&n=0\\ \frac12\bar a_{-n}\,,&n\le-1.\end{cases} \tag{35.10}$$

Exercise 35.9. Matching (35.10) at $r=1$ with the Fourier series of $\varphi$, find the coefficients $A_n$ and derive the Poisson formula from (35.10).
LECTURE 36

Suppose $u_1,u_2$ both solve the Dirichlet problem $\Delta u=0$ in $\Omega$, $u|_{\partial\Omega}=\varphi$. Then $u=u_1-u_2$ solves
$$\Delta u=0\ ,\qquad u|_{\partial\Omega}=0. \tag{36.2}$$
We use Green's first identity
$$\oint_{\partial\Omega}u\,\frac{\partial v}{\partial n}\,ds=\iint_\Omega\left(u\,\Delta v+u_xv_x+u_yv_y\right)dx\,dy \tag{36.3}$$
(no proof). Taking $v=u$ and using $u|_{\partial\Omega}=0$,
$$0=\oint_{\partial\Omega}u\,\frac{\partial u}{\partial n}\,ds=\iint_\Omega\Big(u\underbrace{\Delta u}_{=0}+u_x^2+u_y^2\Big)\,dx\,dy=\iint_\Omega\left(u_x^2+u_y^2\right)dx\,dy.$$
Therefore
$$\iint_\Omega\left(u_x^2+u_y^2\right)dx\,dy=0\ \Longrightarrow\ u_x^2+u_y^2\equiv0\ \text{in }\Omega\ \Longrightarrow\ u=\text{const in }\Omega.$$
But since $u|_{\partial\Omega}=0$, this constant is $0$.
So we showed that (36.2) has only the trivial solution; hence $u_1=u_2$, and that means uniqueness. QED

Note that if we instead have Neumann conditions, i.e. $\dfrac{\partial u}{\partial n}\Big|_{\partial\Omega}=0$, then solutions can only differ by a constant.
2. Dirichlet Problem in a Rectangle

Consider
$$\begin{cases}\Delta u=0\ ,\quad 0<x<a\ ,\ 0<y<b\\ u(x,0)=0\ ,\ u(0,y)=0\ ,\ u(a,y)=0\\ u(x,b)=f(x).\end{cases} \tag{36.4}$$
Let us look for a solution to (36.4) in the form $u(x,y)=X(x)Y(y)$. Then
$$u_x=X'Y\ ,\quad u_{xx}=X''Y\ ,\qquad u_y=XY'\ ,\quad u_{yy}=XY'',$$
and $\Delta u=X''Y+XY''=0$. Dividing by $XY$ yields
$$\frac{X''}X+\frac{Y''}Y=0\qquad\text{or}\qquad \frac{X''(x)}{X(x)}=-\frac{Y''(y)}{Y(y)}=-\lambda,$$
i.e.
$$X''=-\lambda X, \tag{36.5a}$$
$$Y''=\lambda Y. \tag{36.5b}$$
The boundary conditions translate as
$$u(x,0)=X(x)Y(0)=0\ \Rightarrow\ Y(0)=0\ ;\quad u(0,y)=X(0)Y(y)=0\ \Rightarrow\ X(0)=0\ ;\quad u(a,y)=X(a)Y(y)=0\ \Rightarrow\ X(a)=0,$$
while the remaining condition $u(x,b)=X(x)Y(b)=f(x)$ we cannot satisfy at this stage; it will be met below by superposition. So $X(0)=X(a)=0$, and
$$X=A\sin\sqrt\lambda\,x+B\cos\sqrt\lambda\,x\ ;\qquad X(0)=B=0\ ;\qquad X(a)=A\sin\sqrt\lambda\,a=0\ \Rightarrow\ \sqrt\lambda\,a=\pi n\ ,\ n\in\mathbb Z,$$
i.e.
$$\lambda_n=\left(\frac{\pi n}a\right)^2.$$
So, by choosing $A=1$, we get
$$X_n(x)=\sin\frac{\pi n}a\,x.$$
Next,
$$Y=Ae^{\sqrt\lambda\,y}+Be^{-\sqrt\lambda\,y}\ ,\quad \lambda=\left(\frac{\pi n}a\right)^2\ ;\qquad Y(0)=A+B=0\ \Rightarrow\ B=-A,$$
so
$$Y_n(y)=\frac{e^{\frac{\pi n}a y}-e^{-\frac{\pi n}a y}}2\qquad\text{or}\qquad Y_n(y)=\sinh\frac{\pi n}a\,y.$$
By superposition,
$$u(x,y)=\sum_{n=1}^\infty c_n\sin\frac{\pi n}a x\,\sinh\frac{\pi ny}a.$$
Matching the remaining condition $u(x,b)=f(x)$: on one hand,
$$u(x,b)=\sum_{n=1}^\infty c_n\sinh\frac{\pi nb}a\,\sin\frac{\pi n}a x=f(x);$$
but on the other hand, $\{b_n\}$ are the sine-Fourier coefficients of $f(x)$, so
$$b_n=\frac2a\int_0^af(x)\sin\frac{\pi n}a x\,dx\quad\Longrightarrow\quad c_n\sinh\frac{\pi nb}a=b_n\ ,\quad c_n=\left(\sinh\frac{\pi nb}a\right)^{-1}b_n,$$
and
$$u(x,y)=\sum_{n=1}^\infty b_n\,\frac{\sinh\frac{\pi ny}a}{\sinh\frac{\pi nb}a}\,\sin\frac{\pi n}a x\ ,\qquad b_n=\frac2a\int_0^af(x)\sin\frac{\pi n}a x\,dx.$$
LECTURE 37

The exact meaning of this statement will be explained a bit later, but recall the notion of invariance: time invariance means some quantity doesn't depend on time, for example the energy of a system; here the equation is invariant with respect to conformal mapping, i.e. you can change the coordinate system (via an analytic function), and the equation still has the same form.

Proof. Consider the Laplace equation on some domain $\Omega$:
$$\Delta u=\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}=0\ ,\qquad (x,y)\in\Omega,$$
and the change of variables $w=f(z)=\xi(x,y)+i\eta(x,y)$, $u(x,y)=U(\xi,\eta)$. By the chain rule,
$$u_x=U_\xi\xi_x+U_\eta\eta_x, \tag{37.2}$$
$$u_{xx}=\frac{\partial}{\partial x}\left(U_\xi\xi_x+U_\eta\eta_x\right)=(U_\xi)_x\xi_x+U_\xi\xi_{xx}+(U_\eta)_x\eta_x+U_\eta\eta_{xx}=U_{\xi\xi}\xi_x^2+2U_{\xi\eta}\xi_x\eta_x+U_{\eta\eta}\eta_x^2+U_\xi\xi_{xx}+U_\eta\eta_{xx}, \tag{37.3}$$
and similarly for $u_{yy}$. Adding,
$$u_{xx}+u_{yy}=U_{\xi\xi}\left(\xi_x^2+\xi_y^2\right)+2U_{\xi\eta}\left(\xi_x\eta_x+\xi_y\eta_y\right)+U_{\eta\eta}\left(\eta_x^2+\eta_y^2\right)+U_\xi\Delta\xi+U_\eta\Delta\eta. \tag{37.4}$$
Now our function $f(z)=\xi(x,y)+i\eta(x,y)$ is analytic. So by Theorem 35.2, $\xi$ and $\eta$ are harmonic: $\Delta\xi=0$, $\Delta\eta=0$. Also, since $f$ is analytic, the Cauchy-Riemann conditions $\xi_x=\eta_y$, $\xi_y=-\eta_x$ apply, whence
$$\xi_x\eta_x+\xi_y\eta_y=-\xi_x\xi_y+\xi_y\xi_x=0\ ,\qquad \eta_x^2+\eta_y^2=\xi_x^2+\xi_y^2,$$
and (37.4) reads
$$\underbrace{u_{xx}+u_{yy}}_{0}=\left(U_{\xi\xi}+U_{\eta\eta}\right)\left(\xi_x^2+\xi_y^2\right)\ \Longrightarrow\ U_{\xi\xi}+U_{\eta\eta}=0.\qquad\text{QED}$$
Let us now figure out what Theorem 37.4 actually means. It allows one to solve the Laplace equation on various domains, not only disks or rectangles.

Indeed, consider the Dirichlet problem
$$\begin{cases}\Delta u=0\ \text{in }\Omega\\ u|_{\partial\Omega}=\varphi\end{cases} \tag{37.5}$$
on a domain $\Omega$ in the $z=x+iy$ plane, and suppose $f$ maps $\Omega$ conformally onto the unit disk $\mathbb D$, $w=f(z)=\xi+i\eta$. Then $U(w)=u\!\left(f^{-1}(w)\right)$ solves
$$\begin{cases}\Delta U=0\ \text{in }\mathbb D\\ U|_{\partial\mathbb D}=\varphi\!\left(f^{-1}\!\left(e^{i\theta}\right)\right).\end{cases} \tag{37.6}$$
We solve next (37.6) by Poisson's formula (35.9):
$$U(w)=\frac1{2\pi}\int_0^{2\pi}\frac{1-|w|^2}{\left|e^{i\theta}-w\right|^2}\,\varphi\!\left(f^{-1}\!\left(e^{i\theta}\right)\right)d\theta.$$
We now come back to our original variable $z$:
$$u(z)=U(w)\qquad\text{where}\quad w=f(z), \tag{37.7}$$
and problem (37.5) is solved!
Consider now the Dirichlet problem in the upper half plane $\mathbb C_+$:
$$\begin{cases}\Delta u=0\ \text{in }\mathbb C_+\\ u|_{\mathbb R}=\varphi.\end{cases} \tag{37.8}$$
Physically, this problem can be interpreted in the following way. We have some electrostatic field $u(x,y)$ in the upper half plane $\mathbb C_+$. We know the potential $\varphi(x)$ on its boundary $\mathbb R$. How do we recover the potential $u(x,y)$ in the whole of $\mathbb C_+$?

Solution. According to the procedure outlined above, we need to find a conformal mapping that maps $\mathbb C_+$ onto $\mathbb D$. This conformal mapping is well-known:³
$$z=i\,\frac{1-w}{1+w}. \tag{37.9}$$
Indeed,
$$\operatorname{Im}z=\operatorname{Re}\frac{1-w}{1+w}=\operatorname{Re}\frac{(1-w)(1+\bar w)}{|1+w|^2}=\frac{1-|w|^2}{|1+w|^2}\ge0\qquad\text{for }|w|\le1,$$
with equality for $|w|=1$. So, indeed, (37.9) maps the unit circle onto the real line, and $\mathbb D$ onto $\mathbb C_+$. It follows from (37.9) that
$$w=\frac{i-z}{i+z}\equiv f(z). \tag{37.10}$$

³Note that a conformal mapping is not unique; here, for example, we could multiply by a unimodular factor, i.e. do a rotation.
By (37.7),
$$u(z)=\frac1{2\pi}\int_0^{2\pi}\frac{1-|f(z)|^2}{\left|e^{i\theta}-f(z)\right|^2}\,\varphi\!\left(f^{-1}\!\left(e^{i\theta}\right)\right)d\theta. \tag{37.11}$$
From (37.9) one has $i\,\dfrac{1-e^{i\theta}}{1+e^{i\theta}}=t\in\mathbb R$, and if $\theta$ runs through $(0,2\pi)$ then $t$ runs through $(-\infty,\infty)$. So let's make the change of variables
$$t=i\,\frac{1-e^{i\theta}}{1+e^{i\theta}}=f^{-1}\!\left(e^{i\theta}\right)\ ,\qquad e^{i\theta}=\frac{i-t}{i+t}.$$
Differentiating the last relation,
$$ie^{i\theta}\,d\theta=\frac{-(i+t)-(i-t)}{(i+t)^2}\,dt=\frac{-2i}{(i+t)^2}\,dt\ \Longrightarrow\ d\theta=\frac{2\,dt}{1+t^2}.$$
Since $t=f^{-1}\!\left(e^{i\theta}\right)$, (37.11) then reads
$$u(z)=\frac1{2\pi}\int_{\mathbb R}\frac{1-\left|\frac{i-z}{i+z}\right|^2}{\left|\frac{i-t}{i+t}-\frac{i-z}{i+z}\right|^2}\,\varphi(t)\,\frac{2\,dt}{1+t^2}=\frac1\pi\int_{\mathbb R}\frac{\left(|i+z|^2-|i-z|^2\right)|i+t|^2}{\left|(i-t)(i+z)-(i-z)(i+t)\right|^2}\,\varphi(t)\,\frac{dt}{1+t^2}.$$
Now $|i+t|^2=1+t^2$ cancels, and
$$|i+z|^2-|i-z|^2=2i(\bar z-z)=4\operatorname{Im}z\ ,\qquad (i-t)(i+z)-(i-z)(i+t)=2i(z-t),$$
so
$$u(z)=\frac1\pi\int_{\mathbb R}\frac{4\operatorname{Im}z}{\left|2i(z-t)\right|^2}\,\varphi(t)\,dt=\frac1\pi\int_{\mathbb R}\frac{\operatorname{Im}z}{|t-z|^2}\,\varphi(t)\,dt,$$
i.e., with $z=x+iy$,
$$u(x,y)=\frac1\pi\int_{\mathbb R}\frac{y}{(t-x)^2+y^2}\,\varphi(t)\,dt\ ,\qquad y\ge0\ ,\ x\in\mathbb R. \tag{37.12}$$
Done!
The function $\dfrac1\pi\,\dfrac y{(x-t)^2+y^2}$ is also known as the Poisson kernel for the upper half plane.
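The half-plane Poisson formula can be checked against a boundary datum with a known harmonic extension; the datum $\varphi(t)=1/(1+t^2)$ below is my own choice.

```python
import numpy as np

# For phi(t) = 1/(1+t^2) the exact harmonic extension is
#     u(x,y) = (1+y)/(x^2 + (1+y)^2),
# since 1/(1+t^2) = Im(-1/(t+i)) and -1/(z+i) is analytic in C_+.

phi = lambda t: 1.0 / (1.0 + t**2)

def poisson_halfplane(x, y, n=400001, L=200.0):
    t = np.linspace(-L, L, n)
    kernel = y / ((t - x)**2 + y**2) / np.pi   # Poisson kernel (37.12)
    return np.sum(kernel * phi(t)) * (t[1] - t[0])

x, y = 0.5, 1.5
exact = (1 + y) / (x**2 + (1 + y)**2)
print(abs(poisson_halfplane(x, y) - exact))  # small
```

This mirrors the trick in Example 37.7 below: spotting an analytic function whose boundary values match $\varphi$ bypasses the integral entirely.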
Remark 37.6. Note that the solution (37.12) represents the convolution of the Poisson kernel and the boundary condition.
Note a couple of potential issues:
- if $\varphi(t)\sim|t|$, then the integral is not convergent. But $\varphi$ has physical meaning, so in practice this will not be an issue;
- how to reproduce $u(x,y)$ when $y\to0$? Well, actually, we have again a $\delta$-sequence, so this will be satisfied too (see Exercise 37.10).
In practice, even the simple integral (37.12) is hard to evaluate by hand. However, in some cases we can avoid computing (37.12). Indeed, $u(x,y)$ is the real (or imaginary) part of some analytic function $f(z)$, $z=x+iy$, by Theorem 35.2. Moreover, $\varphi(x)=\operatorname{Re}f(x)$ (or $\operatorname{Im}f(x)$). By the look of $\varphi(x)$, it is sometimes possible to tell what $f(x)$, and even $f(z)$, is.
Example 37.7. In the numbered examples below, we consider the Dirichlet problem in the upper half plane. Note that since it's a second-order equation, we would need an extra condition besides the boundary one given. This generally comes from physics. So here we will rather simply look for a continuation of our function into the upper half plane and check its analyticity. Furthermore, we make no claim of uniqueness, since here the domain is infinite.
1)
$$\begin{cases}\Delta u=0\\ u|_{\mathbb R}=1.\end{cases} \tag{37.13}$$
This problem can easily be solved by (37.12). But, on the other hand, there is an obvious analytic function $f(z)$ in $\mathbb C_+$ which is identically $1$ on $\mathbb R$ and whose real part is a bounded function:⁴ $f(z)=1$. That is,
$$u(x,y)=1\ ,\qquad (x,y)\in\mathbb C_+.$$
2)
$$\begin{cases}\Delta u=0\\ u|_{\mathbb R}=\ln|x|.\end{cases} \tag{37.14}$$
Note here that the boundary condition is not bounded, but one can easily figure out that the function $f(z)=\ln z$ is analytic on $\mathbb C_+$ and equals $\ln|x|$ on $\mathbb R$. Indeed,
$$\ln z=\ln|z|+i\arg z=\frac12\ln(x^2+y^2)+i\arctan\frac yx.$$
If $y=0$ then $f(x)=f(x+i0)=\frac12\ln x^2=\ln|x|$.
⁴Indeed, one could argue that $f(z)=1-iCz$ for some real constant $C$ is also analytic and identically $1$ on $\mathbb R$. But it is not bounded, and hence $u(x,y)=1+Cy$ does not make sense physically if we're looking for a finite-energy field.
So
$$u(x,y)=\frac12\ln(x^2+y^2)\ ,\qquad (x,y)\in\mathbb C_+.$$
The equipotential curves are the circles $x^2+y^2=r^2$, on which $u(x,y)=\ln r$.
3)
$$\begin{cases}\Delta u=0\\ u|_{\mathbb R}=\dfrac1x.\end{cases} \tag{37.15}$$
Note that $f(z)=\dfrac1z$ is analytic on $\mathbb C_+$, and one easily has
$$u(x,y)=\operatorname{Re}\frac1z=\frac x{x^2+y^2}.$$
f (z) = z 4
/4
232
1
f (z) =
2
1
z+
z
f (z)
f (z) = C
(a, b)
a
z
(a, b)
dt
elliptic integral
(1 t2 )(1 k 2 t2 )
where C, k are defined from solving the equations
1
1/k
dt
dt
p
p
=b
,
a
(t2 1)(1 k 2 t2 )
(1 t2 )(1 k 2 t2 )
0
1
b
C = 1/k
.
dt
p
(t2 1)(1 k 2 t2 )
1
This formula is a particular case of the more general Schwarz-Christoffel
conformal mappings which describe how to transform the upper half plane into
polygons. As one may imagine, such formulas get more and more complicated,
but at least they are known.
One can now figure out why, in Lecture 36, we could solve the Laplace equation
on a rectangular domain only with some restrictions.
From the above you may also appreciate the following funny story: a physicist
needed to solve a Laplace problem and decided to use a simple model for the
boundary, so he asked a mathematician to solve his problem for a square.
After many complicated calculations the mathematician gave his answer, and
then the physicist said that he now needed to have it altered to fit his original
data: a circle!
⁵ The transform itself is often referred to as the Joukowsky transform, after the same person;
Russian names can be transcribed into the Latin alphabet in multiple ways.
Exercise 37.10.
(1) Prove that the Poisson kernel on the upper half plane is a δ-sequence as y → 0.
Note that you must show, among other things, that
        (1/π) ∫_R y dt / ( (x − t)² + y² ) = 1 .
(2) Show that the kernel
        (y/π) { 1/( (x − t)² + y² ) − 1/( (x + t)² + y² ) } ,   t ≥ 0 ,
solves the following Dirichlet problem in the quarter plane:
        Δu = 0 ,           x > 0 , y > 0 ,
        u(0, y) = 0 ,      y > 0 ,
        u(x, 0) = φ(x) ,   x ≥ 0 .
(3) Solve the Dirichlet problem
        Δu = 0 ,           x, y > 0 ,
        u(x, 0) = φ(x) ,   x ≥ 0 ,
        u(0, y) = ψ(y) ,   y ≥ 0 .
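The normalization in part (1) can be checked numerically (my addition; the tails decay like 1/t², so truncating the integral at a large L is harmless):

```python
# Check that (1/pi) * integral over R of y / ((x - t)^2 + y^2) dt = 1.
import math

def poisson_mass(x, y, L=2000.0, n=200000):
    # trapezoid rule on [-L, L]
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        t = -L + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * y / ((x - t)**2 + y**2)
    return total * h / math.pi

print(poisson_mass(0.3, 1.0))  # close to 1
```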
Appendix. Change of Coordinate System and the Chain Rule

Let f(x, y) = F(ξ(x, y), η(x, y)). Then
        ∂f/∂x = (∂F/∂ξ)(∂ξ/∂x) + (∂F/∂η)(∂η/∂x) ,
        ∂f/∂y = (∂F/∂ξ)(∂ξ/∂y) + (∂F/∂η)(∂η/∂y) ,
or in compact form
        fx = Fξ ξx + Fη ηx ,
        fy = Fξ ξy + Fη ηy ,
or in matrix form
        ( fx )   ( ξx  ηx ) ( Fξ )
        ( fy ) = ( ξy  ηy ) ( Fη ) ,
where the 2 × 2 matrix is the Jacobi matrix of the change of variables.
For the second partial derivative (in x), since fx = Fξ ξx + Fη ηx , one has
        fxx = (Fξ ξx)x + (Fη ηx)x
            = Fξξ ξx² + Fξη ηx ξx + Fξ ξxx + Fηξ ξx ηx + Fηη ηx² + Fη ηxx .
LECTURE 38

The Spectrum of the Laplace Operator on Some Simple Domains

1. The Spectrum of −Δ on a Rectangle

Consider the eigenvalue problem
        −Δu = λu in Ω ,   u|∂Ω = 0 ,                            (38.1)
where Ω is the rectangle [0, a] × [0, b].
Set u(x, y) = X(x)Y(y); then (38.1) reads
        −X″(x)Y(y) − X(x)Y″(y) = λ X(x)Y(y) ,
or, dividing by XY,
        −X″/X − Y″/Y = λ        (separable equation!)
Calling the two terms λ₁ and λ₂, we get
        −X″ = λ₁ X ,
        −Y″ = λ₂ Y ,
        λ₁ + λ₂ = λ ,                                           (38.2)
and the boundary condition u|∂Ω = 0 gives
        X(0) = 0 ,  X(a) = 0 ,  Y(0) = 0 ,  Y(b) = 0 .
So we got
        −X″ = λ₁ X ,
        X(0) = 0 = X(a) ,
whose general solution is
        X = A e^{i√λ₁ x} + B e^{−i√λ₁ x} .
Find now A, B:
        X(0) = A + B = 0   ⟹   B = −A ,
        X(a) = A ( e^{i√λ₁ a} − e^{−i√λ₁ a} ) = 0   ⟹   e^{2i√λ₁ a} = 1   ⟹   2√λ₁ a = 2πn ,  n ∈ Z ,
or
        λ₁,ₙ = (πn/a)² ,  n ∈ N = {1, 2, ...} ;   Xn(x) = A ( e^{iπnx/a} − e^{−iπnx/a} ) .
Taking A = 1/(2i) we finally get
        Xn(x) = sin(πn x / a) ,   n ∈ N.
In a very similar way,
        Ym(y) = sin(πm y / b) ,   m ∈ N ,
so the eigenfunctions are
        un,m(x, y) = sin(πn x / a) sin(πm y / b) ,   n, m ∈ N.
Then
        σ(−Δ_D) = σ_d(−Δ_D) = { π² [ (n/a)² + (m/b)² ] ;  n, m ∈ N } ,
and, in exactly the same way, for the three-dimensional box [0, a] × [0, b] × [0, c],
        σ(−Δ_D) = { π² [ (n/a)² + (m/b)² + (k/c)² ] ;  n, m, k ∈ N } ,
with eigenfunctions un,m,k(x, y, z) = sin(πnx/a) sin(πmy/b) sin(πkz/c).
On the rectangle, the first few eigenvalues are
        λ₁ = π² (1/a² + 1/b²)           (n = m = 1) ,
        λ₂ = π² (1/a² + 4/b²)           (n = 1, m = 2) ,
        λ₃ = π² (4/a² + 1/b²)           (n = 2, m = 1) ,
        λ₄ = π² (4/a² + 4/b²)           (n = m = 2) ,
        ..........................
In the simplest case of a = b = π we get λ = n² + m²:
λ₁ = 1 + 1                = 2
λ₂ = 1 + 2² = 2² + 1      = 5
λ₃ = 2² + 2²              = 8
λ₄ = 1 + 3² = 3² + 1      = 10
λ₅ = 2² + 3² = 3² + 2²    = 13
λ₆ = 1 + 4² = 4² + 1      = 17
λ₇ = 3² + 3²              = 18
λ₈ = 2² + 4² = 4² + 2²    = 20
λ₉ = 3² + 4² = 4² + 3²    = 25
.............................
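The list above is easy to generate programmatically; the following sketch (my addition) enumerates the eigenvalues n² + m² on the square together with the pairs (n, m) realizing each, i.e. their multiplicities:

```python
# Enumerate the Dirichlet eigenvalues n^2 + m^2 of -Laplacian on [0, pi]^2
# together with the pairs (n, m) realizing each value (the multiplicity).
from collections import defaultdict

def square_spectrum(nmax):
    reps = defaultdict(list)
    for n in range(1, nmax + 1):
        for m in range(1, nmax + 1):
            reps[n * n + m * m].append((n, m))
    return dict(sorted(reps.items()))

spec = square_spectrum(6)
for lam in list(spec)[:8]:
    print(lam, spec[lam])
# first eigenvalues: 2, 5, 8, 10, 13, 17, 18, 20, ...
```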
[Figure: the eigenvalues {n² + m²} = {2, 5, 8, 10, 13, 17, 18, 20, ...} of −Δ on the
square [0, π]², plotted on a number line from 0 to 20, next to the spectrum {n²} of
−d²/dx² on [0, π] which was computed earlier.]
In math physics they ask the following question: Can you hear the shape of a drum?
In our case it would be a box. This means: if we know all the tones (proper
frequencies) of a box, can we then recover the shape of the box?
This problem belongs to the so-called inverse problems, and the answer is negative.
2. The Spectrum of −Δ on a Disk

Consider now
        −Δu = λu in D ,   u|∂D = 0 ,                            (38.3)
where D is the unit disk. In polar coordinates (r, θ), with U(r, θ) = u(x, y), this reads
        (1/r) ∂/∂r ( r ∂U/∂r ) + (1/r²) ∂²U/∂θ² = −λ U .        (38.4)
Set U(r, θ) = R(r)Θ(θ). Then (38.4) separates:
        (r/R(r)) ( r R′(r) )′ + λ r² = −Θ″(θ)/Θ(θ) = ν   (a constant) ,
or
        −Θ″(θ) = ν Θ(θ) ,
        r ( r R′(r) )′ + λ r² R(r) = ν R(r) ,
i.e.
        −(1/r) d/dr ( r dR/dr ) + (ν/r²) R = λ R .              (38.5)
The boundary condition U(1, θ) = 0 for all θ means R(1)Θ(θ) = 0, i.e.
        R(1) = 0 .
For Θ we also have periodicity: Θ(0) = Θ(2π), Θ′(0) = Θ′(2π). So we get the
Sturm-Liouville problem
        −(1/r) d/dr ( r dR/dr ) + (ν/r²) R = λ R ,
        R(1) = 0 ,   R ∈ L²(0, 1) .                             (38.6)
For Θ, the general solution is Θ = A e^{i√ν θ} + B e^{−i√ν θ}, and periodicity forces
        e^{i√ν · 2π} = 1   ⟹   2π√ν = 2πn   ⟹   ν = n² ,   n = 0, 1, 2, ...    (38.7)
Plugging ν = n² into (38.6) gives the Sturm-Liouville problem
        −(1/r) d/dr ( r dR/dr ) + (n²/r²) R = λ R ,   0 ≤ r ≤ 1 ,   R(1) = 0 ,
or
        −R″ − (1/r) R′ + (n²/r²) R = λ R ,
or finally
        R″ + (1/r) R′ + ( λ − n²/r² ) R = 0 ,   0 ≤ r ≤ 1 .
This is Bessel's equation. Its solution regular at r = 0 is R(r) = Jn(√λ r), and the
boundary condition R(1) = 0 means
        Jn(√λ) = 0 .
This equation has infinitely many solutions:
[Figure: graphs of J0(k), J1(k), J2(k) for 0 ≤ k ≤ 10, with their successive zeros
k01, k11, k21, k02, k12, k22, k03, k13, k23, k04, k14, k24, k05, k15, k25, k06, k16
marked on the k-axis.]
There is no nice formula for the roots, but they can be computed numerically. We
call them knm (counted by the two indices n, m). So
        √λ = knm ,  i.e.  λ = k²nm ,
and we arrive at

Theorem 38.4. Let −Δ_D be the Dirichlet Laplace operator on the unit disk D. The
spectrum σ(−Δ_D) is discrete and infinite, and
        σ(−Δ_D) = { k²nm } ,   n = 0, 1, 2, ... ,  m ∈ N ,
where knm is the m-th positive zero of the Bessel function Jn.
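The zeros knm can be computed with standard software; a self-contained sketch (my addition) using the integral representation J0(k) = (1/π)∫₀^π cos(k sin θ) dθ and bisection recovers the first zero k01 ≈ 2.4048:

```python
import math

def J0(x, n=2000):
    # integral representation J0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
    # evaluated by the trapezoid rule
    h = math.pi / n
    total = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    total += sum(math.cos(x * math.sin(i * h)) for i in range(1, n))
    return total * h / math.pi

def bisect_zero(f, a, b, tol=1e-10):
    # simple bisection; assumes f changes sign on [a, b]
    fa = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * f(c) <= 0:
            b = c
        else:
            a, fa = c, f(c)
    return 0.5 * (a + b)

k01 = bisect_zero(J0, 2.0, 3.0)
print(k01)  # approximately 2.404826
```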
LECTURE 39

Problem 39.3. Solve the nonhomogeneous problem
        −Δu = f in Ω ,   u|∂Ω = 0 .                             (39.1)

Solution. By Theorem 39.1, the spectrum of −Δ is discrete and made of eigenvalues {λn}. The corresponding eigenfunctions {en(X)} form an ONB in L²(Ω).
Hence every function u ∈ L²(Ω) can be represented as
        u(X) = Σn ûn en(X) ,   where X = (x₁, x₂, ..., xm) ,  ûn = ⟨u, en⟩ .
Plugging u = Σ ûn en and f = Σ f̂n en into (39.1),
        −Δu = Σ ûn (−Δen) = Σ λn ûn en = Σ f̂n en ,
so
        Σ (λn ûn − f̂n) en = 0 ,
and since {en} is an ONB,
        λn ûn − f̂n = 0   ⟹   ûn = f̂n / λn .
So the solution is
        u(X) = Σn (f̂n / λn) en(X) ,   X = (x₁, x₂, ..., xm) .
Note, however, that there is no way to generalize from dimension 1 how to obtain
λn, en(X) in general. So let us consider a particular example where this algorithm can
be carried out by hand (explicitly).
Example 39.4. Consider the problem
        −Δu = f in Ω ,   u|∂Ω = 0 ,
where Ω = [0, a] × [0, b] and f(x, y) = x + y.
By Theorem 38.1,
        σ(−Δ) = { π² [ (n/a)² + (m/b)² ] ;  n, m ∈ N }          (39.2)
and the corresponding eigenfunctions are
        un,m(x, y) = sin(πnx/a) sin(πmy/b) .
So ‖un,m‖² = ab/4. We now set
        en,m = (2/√(ab)) sin(πnx/a) sin(πmy/b) ,
and the {en,m} are all normalized by 1, i.e.
        ‖en,m‖ = 1 .
By Theorem 39.1, {en,m} is an ONB in L²(Ω) and hence any u ∈ L²(Ω) can be
represented as
        u(x, y) = Σ ûn,m en,m(x, y) ,   where ûn,m = ∫∫_Ω u(x, y) en,m(x, y) dx dy .
We compute the Fourier coefficients of f(x, y) = x + y:
        f̂n,m = (2/√(ab)) ∫₀ᵃ ∫₀ᵇ (x + y) sin(πnx/a) sin(πmy/b) dy dx
             = (2/√(ab)) { ∫₀ᵃ x sin(πnx/a) dx · ∫₀ᵇ sin(πmy/b) dy
                           + ∫₀ᵃ sin(πnx/a) dx · ∫₀ᵇ y sin(πmy/b) dy } .
Integrating by parts,
        ∫₀ᵃ x sin(πnx/a) dx = −(−1)ⁿ a²/(πn) ,   ∫₀ᵃ sin(πnx/a) dx = a (1 − (−1)ⁿ)/(πn) ,
and similarly in y. So
        f̂n,m = (2√(ab)/(π²nm)) { (−1)ⁿ ((−1)ᵐ − 1) a + (−1)ᵐ ((−1)ⁿ − 1) b } .
Now the solution is
        u(x, y) = Σ_{n,m ≥ 1} (f̂n,m / λn,m) en,m(x, y)   (double Fourier series),   (39.3)
where
        λn,m = π² [ (n/a)² + (m/b)² ] ,
        f̂n,m = (2√(ab)/(π²nm)) { (−1)ⁿ ((−1)ᵐ − 1) a + (−1)ᵐ ((−1)ⁿ − 1) b } ,
        en,m(x, y) = (2/√(ab)) sin(πnx/a) sin(πmy/b) .
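The closed form for f̂n,m can be sanity-checked against direct numerical integration (this check is my addition; a = 1, b = 2 are illustrative values):

```python
import math

def fhat_closed(n, m, a, b):
    # closed form derived above
    s = (-1)**n * ((-1)**m - 1) * a + (-1)**m * ((-1)**n - 1) * b
    return 2 * math.sqrt(a * b) / (math.pi**2 * n * m) * s

def fhat_numeric(n, m, a, b, N=400):
    # midpoint rule for the double integral of (x + y) * e_{n,m}(x, y)
    hx, hy, s = a / N, b / N, 0.0
    for i in range(N):
        x = (i + 0.5) * hx
        sx = math.sin(math.pi * n * x / a)
        for j in range(N):
            y = (j + 0.5) * hy
            s += (x + y) * sx * math.sin(math.pi * m * y / b)
    return s * hx * hy * 2 / math.sqrt(a * b)

print(fhat_closed(1, 2, 1.0, 2.0), fhat_numeric(1, 2, 1.0, 2.0))
```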
Problem 39.5. Consider now the general problem
        −Δu = f in Ω ,   u|∂Ω = φ .
Split it into two problems:
        (I)  −Δu₁ = 0 ,  u₁|∂Ω = φ    (homogeneous equation, nonhomogeneous boundary conditions) ,
        (II) −Δu₂ = f ,  u₂|∂Ω = 0    (nonhomogeneous equation, homogeneous boundary conditions) .
Problem (I) was the content of Lectures 35-37 (with conformal mappings and such).
Problem (II) was considered in Problem 39.3, equation (39.1).
So the solution to Problem 39.5 now is
        u = u₁ + u₂ ,  where u₁ is the solution to (I) and u₂ is the solution to (II).
In the very same manner one can treat boundary conditions other than Dirichlet.
Exercise 39.6. Solve −Δu = 1 in Ω = [0, 1]² subject to u|∂Ω = 0.
LECTURE 40

Wave and Heat Equation in Dimension Higher Than One

1. Wave Equation

Consider the problem
        utt = Δu in Ω ,
        u|∂Ω = 0 ,
        u(0, X) = φ(X) ,   ut(0, X) = ψ(X) ,   X ∈ Ω .          (40.1)
Expand u in the eigenfunctions {en} of −Δ on Ω:
        u = Σ ûn en = Σ ûn(t) en(X) ,   where ûn = ⟨u, en⟩ .
Plugging into utt = Δu,
        Σn ûn″(t) en(X) = Σn ûn(t) Δen(X) = Σn ûn(t) (−λn) en(X) ,
so for each n
        ûn″ = −λn ûn ,
with general solution
        ûn(t) = An cos(√λn t) + Bn sin(√λn t) .
Expand the initial data in the same basis:
        φ = Σ φ̂n en ,   ψ = Σ ψ̂n en .
Then
        u|t=0 = Σ ûn(0) en = Σ An en = φ = Σ φ̂n en   ⟹   An = φ̂n ,
        ut|t=0 = Σ ûn′(0) en = Σ √λn Bn en = ψ = Σ ψ̂n en   ⟹   Bn = ψ̂n / √λn ,
and the problem is completely solved. Indeed, from solving the eigenfunction problem
        −Δu = λu in Ω ,   u|∂Ω = 0 ,
we obtain the spectrum {λn} and the eigenfunctions {en}. The solution to (40.1) is then
        u(X, t) = Σn { An cos(√λn t) + Bn sin(√λn t) } en(X)
                = Σn { φ̂n cos(√λn t) + (ψ̂n/√λn) sin(√λn t) } en(X) ,   X ∈ Ω .   (40.2)
Done!
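As a quick check (my addition, not in the notes): the single-mode case of (40.2) on Ω = [0, π]² with φ = sin x sin y, ψ = 0 gives u = cos(√2 t) sin x sin y, and the identity utt = Δu can be verified by finite differences:

```python
import math

def u(x, y, t):
    # single-mode solution from (40.2): phi = sin x sin y, psi = 0, lambda = 2
    return math.cos(math.sqrt(2.0) * t) * math.sin(x) * math.sin(y)

def second_diff(f, h=1e-3):
    # central second difference of a one-variable function at 0
    return (f(h) - 2.0 * f(0.0) + f(-h)) / h**2

x0, y0, t0 = 0.8, 1.1, 0.5
utt = second_diff(lambda s: u(x0, y0, t0 + s))
lap = (second_diff(lambda s: u(x0 + s, y0, t0))
       + second_diff(lambda s: u(x0, y0 + s, t0)))
print(utt, lap)  # both close to -2 * u(x0, y0, t0)
```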
Remark 40.1. One has probably already observed that, once we know the spectrum of
−Δ, the eigenfunction expansion method works in the very same way in any dimension
(1, 2, 3 or even higher).

2. Heat Equation

Exercise 40.2. As was done for the wave equation, solve
        ut = Δu in Ω ,   u|∂Ω = 0 ,   u(0, X) = φ(X) ,  X ∈ Ω ,
and get a formula similar to (40.2).
3. Further Discussions
We will explore two further topics: nodal lines and how to handle unbounded
domains.
(1) Nodal lines. Consider the wave equation with Dirichlet and initial conditions
on Ω = [0, π]²:
        utt = Δu ,
        u|∂Ω = 0 ,
        u|t=0 = φ(X) ,   ut|t=0 = ψ(X) .
The solution is a superposition of the harmonics en,m(x, y) = sin nx sin my with
frequencies √λn,m = √(n² + m²). Nodal lines are lines where en,m(x, y) = 0, i.e.
they are lines which remain at rest while other parts vibrate. So, assuming all
coefficients equal 1 (or 0), we have the following nodal lines on [0, π]²:
[Figure: nodal lines of the simple harmonics e1,1, e1,2, e2,1, e1,3, e3,1, e3,3 and of
the double harmonics e1,2 + e2,1, e1,3 + e3,1, e1,4 + e4,1.]
e1,2 + e2,1. Here sin x sin 2y + sin 2x sin y = 2 sin x sin y (cos x + cos y). Note
that sin x = 0, sin y = 0 give us the boundary. Then, since cos(π − x) = −cos x,
the other solution in [0, π]² is the line y = π − x (the diagonal).
e1,3 + e3,1. We will use the following, derived from trigonometric identities:
sin 3x = sin x (2 cos 2x + 1). Then we have
        sin x sin 3y + sin 3x sin y = sin x sin y (2 cos 2y + 2 cos 2x + 2) = 0 .
Again the first two factors give us the boundary, and for the third we solve
cos 2y = −1 − cos 2x. In order to get a solution we need cos 2x ≤ 0 for x ∈ [0, π],
i.e. x ∈ [π/4, 3π/4]. Then we must have y ∈ [0, π] and
        2y = arccos(−1 − cos 2x)   or   2y = 2π − arccos(−1 − cos 2x) ,
i.e.
        y = (1/2) arccos(−1 − cos 2x)   or   y = π − (1/2) arccos(−1 − cos 2x) .
Note that the equation cos 2x + cos 2y = −1 is the most convenient to solve
and to plot, but it takes several other forms: cos²x + cos²y = 1/2 or
cos(x + y) cos(x − y) = −1/2, for example.
e1,4 + e4,1. Here one can check that sin 4x = 4 sin x cos x cos 2x. Then
        sin x sin 4y + sin 4x sin y = 0 ,
        4 sin x sin y cos y cos 2y + 4 sin x cos x cos 2x sin y = 0 .
Again sin x = 0, sin y = 0 give us the boundary and, noting the symmetry in
        cos x cos 2x + cos y cos 2y = 0 ,
one can see that y = π − x is again a solution, since then cos y = −cos x while
cos 2y = cos 2x. So we can also factor out cos x + cos y:
        0 = cos x cos 2x + cos y cos 2y
          = (cos x + cos y)(cos 2x + cos 2y) − cos x cos 2y − cos y cos 2x
          = (cos x + cos y)(cos 2x + cos 2y) − cos x (2 cos²y − 1) − cos y (2 cos²x − 1)
          = (cos x + cos y)(cos 2x + cos 2y − 2 cos x cos y + 1)
          = (cos x + cos y)(2 cos²x − 1 + 2 cos²y − 1 − 2 cos x cos y + 1)
          = 2 (cos x + cos y) ( cos²y − cos x cos y + cos²x − 1/2 ) .
But the second factor is a quadratic equation in Y = cos y, with X = cos x. In
order to plot the nodal line, we further solve this equation, noting that, using X
as a parameter, it has a real solution when
        D = (−X)² − 4 (X² − 1/2) = 2 − 3X² ≥ 0 ,
i.e.
        arccos(√(2/3)) ≤ x ≤ π − arccos(√(2/3)) .
Then on [0, π] we have, for suitable x-values,
        y = arccos( ( cos x ± √(2 − 3 cos²x) ) / 2 ) .
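The nodal-curve formulas for e1,3 + e3,1 and e1,4 + e4,1 can be verified numerically (a sketch I am adding; it is not part of the notes): points generated by them should annihilate the corresponding combination of harmonics.

```python
import math

# e_{1,3} + e_{3,1}: nodal curve y = (1/2) * arccos(-1 - cos 2x)
def h13(x, y):
    return math.sin(x) * math.sin(3 * y) + math.sin(3 * x) * math.sin(y)

# e_{1,4} + e_{4,1}: nodal curve y = arccos((cos x + sqrt(2 - 3 cos^2 x)) / 2)
def h14(x, y):
    return math.sin(x) * math.sin(4 * y) + math.sin(4 * x) * math.sin(y)

x = 0.9  # inside the allowed x-range for both curves
y13 = 0.5 * math.acos(-1.0 - math.cos(2 * x))
y14 = math.acos((math.cos(x) + math.sqrt(2 - 3 * math.cos(x)**2)) / 2)
print(h13(x, y13), h14(x, y14))  # both ~ 0
```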
Remark 40.3. Note that for the eigenvalue 5, for example, there are two corresponding eigenfunctions, e1,2 and e2,1; thus λ = 5 has multiplicity two. The number of
integer solutions to λ = n² + m² (a quadratic Diophantine equation in two variables)
depends on the prime decomposition of λ; although this study belongs to Algebra, it
is important to note that, as a result, the multiplicity of λ is unbounded.
Remark also that the superposition of harmonics is not trivial if the coefficients
are changed.
In addition, if we deal with a round membrane, we now have Bessel functions
instead of the double sine functions, so we would get different pictures for the nodal
lines too.
The study of nodal lines has applications in construction, for example, to
make sure certain parts won't crack under the vibrations (recall that nodal
lines on the membrane stay put).
Actually, there is a whole field of study concerned with nodal lines and
eigenfunctions of the Laplacian: in particular in seismology, for the wave
equation on a sphere, involving the Laplace-Beltrami operator (or spherical
Laplacian); or in quantum mechanics, since the Schrödinger equation
iut = −Δu + qu is involved (not quite the heat equation though; the i makes a
big difference!), but now in 2D or 3D. Sadly, at this point very little is known
about this equation in 2D or 3D compared to what is known in one dimension.
(2) Unbounded domains. For domains like Ω = Rⁿ, the condition u|∂Ω = 0 is
automatically satisfied, since L² functions must decay at ∞. But there are many
ways for a domain to be unbounded: for example, an infinite strip in R² or an
infinite rod in R³ has only one dimension carrying the unboundedness.
There is no general approach, since there are so many kinds of possible
spectrum. What we do have is that −Δ is positive definite, i.e. ⟨−Δu, u⟩ ≥ 0
for all u; hence we can only have nonnegative eigenvalues.
We will consider the specific case of Rⁿ. We need to redefine the Fourier
transform in higher dimensions so that it still works. Recall that in one dimension
we have
        f̂(ω) = (1/√(2π)) ∫_R e^{−iωx} f(x) dx .
How can we move to a vector form? We have f(x) → f(X) and dx → dX.
So we introduce ω ∈ Rⁿ, called a wave vector, and
        f̂(ω) = (1/(2π)^{n/2}) ∫_{Rⁿ} f(X) e^{−i⟨ω,X⟩} dX        (direct Fourier transform) ,
        f(X) = (1/(2π)^{n/2}) ∫_{Rⁿ} f̂(ω) e^{i⟨ω,X⟩} dω         (inverse Fourier transform) .
Note that the inner product here is the dot product in Rⁿ. We also verify
that ‖f̂(ω)‖ = ‖f(X)‖, that we have an inverse, and that the uniqueness property is
also satisfied. Furthermore,
        e_ω(X) = e^{i⟨ω,X⟩}
is an eigenfunction of the continuous spectrum, with
        −Δ e_ω(X) = ‖ω‖² e_ω(X) .
This can easily be checked in R² (and the generalization to Rⁿ is obvious): for
e_ω(X) = e^{i(ω₁x₁ + ω₂x₂)},
        −Δ e_ω = −∂²/∂x₁² e^{i(ω₁x₁+ω₂x₂)} − ∂²/∂x₂² e^{i(ω₁x₁+ω₂x₂)} = (ω₁² + ω₂²) e_ω(X) .
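The eigenvalue relation −Δe_ω = ‖ω‖²e_ω can also be confirmed by finite differences (my addition), here for the arbitrary wave vector ω = (1.5, −0.7):

```python
import cmath

W1, W2 = 1.5, -0.7  # components of the wave vector omega

def e_omega(x1, x2):
    return cmath.exp(1j * (W1 * x1 + W2 * x2))

def neg_laplacian(f, x1, x2, h=1e-4):
    # -(f_x1x1 + f_x2x2) by central second differences
    return -((f(x1 + h, x2) - 2 * f(x1, x2) + f(x1 - h, x2)) / h**2
             + (f(x1, x2 + h) - 2 * f(x1, x2) + f(x1, x2 - h)) / h**2)

val = neg_laplacian(e_omega, 0.3, 0.4)
print(val / e_omega(0.3, 0.4))  # ~ |omega|^2 = 1.5^2 + 0.7^2 = 2.74
```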
Now
        (1/(2π)^{n/2}) e^{i⟨ω,X⟩} = (1/(2π)^{n/2}) e^{i‖ω‖⟨θ,X⟩} ,
where θ = ω/‖ω‖ is a unit directional vector; we now have infinitely many directions
θ for each eigenvalue ‖ω‖² (instead of just two before), and the multiplicity of each
eigenvalue is therefore infinite.
Part 5
Green's Function
LECTURE 41
Consider the simplest differential equation
        (d/dx) F(x) = f(x)   ⟹   F(x) = ∫ f(x) dx = ∫ (dF/dx) dx .
But we have non-uniqueness here, since for each f(x) there exists an infinite number
of antiderivatives F(x) + C, where C is a constant. If we add an initial condition, then C
can be fixed and we get uniqueness. That is,
        (d/dx) F(x) = f(x) ,   F(x₀) = C   ⟹   F(x) = C + ∫_{x₀}ˣ f(t) dt .
The operator Au = ∫_{x₀}ˣ u(t) dt is the simplest integral operator.
An operator of the form
        (Kf)(x) = ∫_Ω K(x, y) f(y) dy = ∫_a^b K(x, y) f(y) dy ,   Ω = (a, b) ,
is called an integral operator. The function K(x, y) is called the kernel of the integral
operator K.
Exercise 41.2. Rewrite the operator of integration Au = ∫_{x₀}ˣ u(t) dt formally as an
integral operator.
Theorem 41.5. 1) The sum K₁ + K₂ of two integral operators is an integral operator
with kernel K₁(x, y) + K₂(x, y). 2) The product K = K₁K₂ is an integral operator
with kernel
        K(x, y) = ∫ K₁(x, z) K₂(z, y) dz .

Proof. 1) is trivial. 2)
        (Kf)(x) = (K₁K₂ f)(x) = (K₁(K₂ f))(x) = ∫ K₁(x, z) (K₂ f)(z) dz
                = ∫ { ∫ K₁(x, z) K₂(z, y) dz } f(y) dy = ∫ K(x, y) f(y) dy .    QED
Theorem 41.6. Let K be an integral operator,
        (Kf)(x) = ∫_Ω K(x, y) f(y) dy ,   f ∈ L²(Ω) ,
where K(x, y) is the kernel of K. Then the kernel K*(x, y) of the adjoint operator K*
is given by
        K*(x, y) = K̄(y, x) .

Exercise 41.7. Prove Theorem 41.6.

Corollary 41.8. For a selfadjoint integral operator we have
        K = K*   ⟺   K(x, y) = K̄(y, x) .
Remark 41.9. Theorems 41.5 & 41.6 show that one can understand an integral
operator as an analog of a continuous matrix. Indeed, addition is a linear combination
of the corresponding components; the product of integral operators looks like a matrix
product, where we take dz = 1 and i, j become the continuous variables x, y (like we did
when considering the Fourier transform vs. Fourier series). Similarly, the adjoint kernel
is a continuous analog of the adjoint matrix, since (A*)ij = Āji for discrete indices i, j.
Exercise 41.10. If F is the Fourier transform, find the kernel of its adjoint.
Let A be the integral operator on L¹(0, 1) defined by
        (Af)(x) = ∫₀ˣ f(y) dy .
Recall that ‖f‖₁ = ∫₀¹ |f(x)| dx. We estimate
        ‖Au‖₁ = ∫₀¹ | ∫₀ˣ u(t) dt | dx ≤ ∫₀¹ ∫₀ˣ |u(t)| dt dx
              ≤ ∫₀¹ ∫₀¹ |u(t)| dt dx = ∫₀¹ ‖u‖₁ dx = ‖u‖₁ .
That is,
        ‖Au‖₁ ≤ ‖u‖₁ ,
and by definition A is bounded, with ‖A‖ ≤ 1.   QED
Note that it can be proven that ‖A‖ = 1, but we don't care at this point.
Remark also that the Fourier operator is bounded on L²(R), since it is unitary:
‖Ff‖₂ = ‖f‖₂.
Exercise 41.13. Show that the operator B defined on L²(0, 1) by the formula
        Bu = (1/i) ∫₀ˣ u(t) dt ,
on functions u with zero mean (that is, ∫₀¹ u(x) dx = 0), is selfadjoint.
(Hint: if v ∈ L¹(0, 1) then (d/dx) ∫₀ˣ v(t) dt = v(x).)
Theorem 41.15. If ∫∫_{Ω×Ω} |K(x, y)|² dx dy < ∞, then the integral operator K is
bounded on L²(Ω).

Proof. We have
        ‖Kf‖₂² = ∫ |(Kf)(x)|² dx = ∫ | ∫ K(x, y) f(y) dy |² dx .
By the Cauchy-Schwarz inequality,
        ‖Kf‖₂² ≤ ∫ { ∫ |K(x, y)|² dy } dx · ∫ |f(y)|² dy = { ∫∫ |K(x, y)|² dx dy } ‖f‖₂² ,
whence
        ‖Kf‖₂ ≤ { ∫∫ |K(x, y)|² dx dy }^{1/2} ‖f‖₂ ,
and K is bounded.   QED

Note that { ∫∫ |K(x, y)|² dx dy }^{1/2} is called the Hilbert-Schmidt norm of an
integral operator.
Example 41.16 (good example). Consider K(x, y) = e^{−(x+y)}, x, y ∈ (0, 1). Then
        ∫₀¹ ∫₀¹ e^{−2(x+y)} dx dy = ∫₀¹ e^{−2x} dx · ∫₀¹ e^{−2y} dy
                                  = ( ∫₀¹ e^{−2x} dx )² = ( (1 − e^{−2}) / 2 )² .
So K is Hilbert-Schmidt.
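A quick numerical confirmation of this computation (my addition):

```python
import math

def hs_norm_sq(N=2000):
    # midpoint rule for the double integral of |K(x, y)|^2 = e^{-2(x+y)};
    # the integral factorizes into a product of two identical 1D integrals
    h = 1.0 / N
    s1 = sum(math.exp(-2 * ((i + 0.5) * h)) for i in range(N)) * h
    return s1 * s1

exact = ((1 - math.exp(-2)) / 2) ** 2
print(hs_norm_sq(), exact)
```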
The converse of Theorem 41.15 is not true, as the example below illustrates.
Example 41.17 (bad example). Let F be the Fourier transform, which is bounded
on L²(R). Is it Hilbert-Schmidt? Its kernel is (1/√(2π)) e^{−iωx}, and
        ∫_R ∫_R | (1/√(2π)) e^{−iωx} |² dω dx = (1/(2π)) ∫_R ∫_R dω dx = ∞ ,
so F is not Hilbert-Schmidt.
Usually, Hilbert-Schmidt does not like infinite intervals. It could be that the kernel
is square integrable in x but not in y (or vice versa).
LECTURE 42

The Green's Function of the Schrödinger Operator

The solution G(x, y) of the equation
        A G(x, y) = δ(x − y) ,                                  (42.1)
where x is a variable and y is a parameter, is called the Green's function of the differential operator A.
Note that this means that the Green's function is the kernel of an important integral
operator. The operator A need not be doing only differentiation; e.g. we could consider
the Schrödinger operator A = −d²/dx² + q(x).
dx
One may wonder why equation (42.1) has a solution. It is a very difficult question
which we are unable to answer in this course. But the answer is affirmative for most
of the differential operators of mathematical physics. But finding G needs to be done
on a case-by-case basis.
Why is the Greens function that important?
The reason is: if we know the Greens function G(x, y) of an operator A then we
can easily solve the equation
Au = f
(42.2)
for any f .
Indeed, let us show that
u(x) =
(42.3)
is a solution to (42.2). Note that the limits of integration depend on the problem at
hand. We have
        Au(x) = A ∫ G(x, y) f(y) dy = ∫ A G(x, y) f(y) dy = ∫ δ(x − y) f(y) dy = f(x) .
So (42.3) is a solution to (42.2). If (42.2) has a unique solution, then (42.3) is the
solution.
Remark 42.2. One can view the Green's function as the continuum matrix of the
inverse of A. Indeed, as a parallel to Linear Algebra, one would have
        u = A⁻¹ f = Gf   ⟷   ui = Σ_{k=1}ⁿ Gik fk .
Example. One can check that the Green's function of the operator A = d²/dx² + ω²
on L²(R) is
        G(x, y) = e^{iω|x−y|} / (2iω) .
Consider now the boundary value problem
        u″ + p(x)u′ + q(x)u = f(x) ,   x ∈ (a, b) ,
        (BC)  u(a) = 0 = u(b) .                                 (42.4)
Let u₁, u₂ be solutions of the homogeneous equation with u₁(a) = 0, u₂(b) = 0,
respectively. Note that we can't have u₁(b) = 0, for otherwise the Wronskian would
be 0 and u₁, u₂ would be linearly dependent. Similarly, u₂(a) ≠ 0.
Now we look for a particular solution to (42.4) in the form
        up(x) = C₁(x) u₁(x) + C₂(x) u₂(x) .
For C₁, C₂ we have
        ( u₁   u₂  ) ( C₁′ )   ( 0 )
        ( u₁′  u₂′ ) ( C₂′ ) = ( f ) ,
so
        C₁′ = −f u₂ / W ,   C₂′ = f u₁ / W ,
where W = det ( u₁ u₂ ; u₁′ u₂′ ) is the Wronskian.
:
2 (b) + C1 u1 (b) +
u(b) = C1 u1 (b) +
C2(b)u
C2u
2 (b) = 0
(
C2 = C2 (a)
C1 = C1 (b)
and
u(x) = C1 (x) C1 (b) u1 (x) + C2 (x) C2 (a) u2 (x).
But
C1 (x) C1 (b) =
x0
and
C2 (x) C2 (a) =
x1
f u2
W
f u1
W
b
(t)dt +
x0
(t)dt
x1
f u2
W
f u1
W
b
(t)dt =
(t)dt =
(42.5)
f u2
W
f u1
W
(t)dt ,
(t)dt.
u(x) = u1 (x)
x
f (t)u2 (t)
dt + u2 (x)
W (t)
f (t)u1 (t)
dt.
W (t)
(42.6)
Rewriting (42.6) as u(x) = ∫ₐᵇ G(x, t) f(t) dt, where the integrand involves
        u₂(x) u₁(t) ,   a ≤ t ≤ x ,
        u₁(x) u₂(t) ,   x < t ≤ b ,
we get
        G(x, y) = u₁(x) u₂(y) / W(y) ,   x < y ,
        G(x, y) = u₂(x) u₁(y) / W(y) ,   x > y ,
where u₁, u₂ are solutions of u″ + pu′ + qu = 0 with the conditions u₁(a) = 0, u₂(b) = 0,
respectively, and W is the Wronskian of (u₁, u₂), that is,
        W = u₁u₂′ − u₁′u₂ .
But we need a fundamental set... that's the hard part. If we have Neumann
conditions or others, we still follow the same procedure, but we figure out different
boundary conditions to impose on the fundamental set so that we can get a solution
in the simplest form.
Note that the above theorem can easily be restated for the Schrödinger operator.
Exercise 42.5. Derive the expression for the Green's function G(x, y) of the operator
        A = −d²/dx² + κ²   on L²(R), for κ > 0.
Answer: G(x, y) = e^{−κ|x−y|} / (2κ).
(Hint: modify the arguments of this section to treat (−∞, ∞).)
Note that q(x) could contain some form of spectral parameter, as above. Then we
write G_λ(x, y).
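The answer can be tested numerically (my addition) by convolving G with a right-hand side whose exact solution is known: for f(x) = cos(ax), the solution of −u″ + κ²u = f is cos(ax)/(κ² + a²).

```python
import math

KAPPA, A = 1.3, 0.9  # illustrative parameter values

def integrand(x, s):
    # G(x, x - s) * f(x - s), written as a function of s = x - y
    return math.exp(-KAPPA * abs(s)) / (2 * KAPPA) * math.cos(A * (x - s))

def u_from_green(x, L=15.0, n=30000):
    # u(x) = int G(x, y) f(y) dy, truncated to |x - y| <= L (exponential decay)
    h = 2 * L / n
    return sum(integrand(x, -L + (i + 0.5) * h) for i in range(n)) * h

x0 = 0.4
exact = math.cos(A * x0) / (KAPPA**2 + A**2)  # exact solution of -u'' + k^2 u = cos(ax)
print(u_from_green(x0), exact)
```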
Index
properties, 13
real part, 11
Complex valued function, 13
Conformal mapping, 225, 228
Joukowsky transform, 232
Möbius transform, 218
Riemann theorem, 225, 231
Schwarz-Christoffel formula, 232
Connected
multiconnected contour, 33
path-connected, 32
simply connected domain, 225
Continuous spectrum, 107, 110, 173, 174, 176,
180, 209
Contour
multiconnected, 33
Contour integral, 20
Convergence
absolute, 30
normed spaces, 89
of a sequence, 29
series, 90
uniform, 112
weak, 116
Convolution, 151, 193, 229
Convolution theorem, 152
Coordinates, 90
Coordinates of a vector, 58
Coulomb potential, 214
Harmonics
rectangular, 223
Heat equation, 193, 207, 241
Helmholtz equation, 207, 241
Hermite polynomials, 145
Hermites equation, 144
Hilbert space, 85, 212, 261
examples
L2 (a, b), 87
orthonormal basis, 90
Hilbert-Schmidt integral operator, 260
Hilbert-Schmidt norm, 260
Holomorphic function, 15
Homogeneous wave equation, 168
Indicial equation, 140142
Initial conditions, 169
Inner product, 67, 85, 212
in R3 , 69
Inner product space, see also Euclidean space
Integral operator, 257, 261
Hilbert-Schmidt, 260
Inverse
of an operator, 106
Inverse operator, 71
Invertible operator, 71
Irregular singular point, 139
Jacobi matrix, 234
Jordans lemma, 46, 161, 163
Kernel
integral operator, 257
Klein-Gordon equation, 207, 252
Kronecker delta, 68, 181
L1 space, 104
L2 space, 87
Laplace equation, 215, 217, 221, 232
nonhomogeneous, 241
Laplace operator, 211, 214, 241, 247
Laplace-Beltrami equation, 252
Laplacian, 265
Dirichlet, 215, 241, 247
Neumann, 215, 241
Robin, 215
Laurent theorem, 34
Legendre equation, 133
Legendre polynomials, 133, 135, 209
Line integral, 18
Linear operator, 59, 173
bounded, 102
differentiation, 59, 105
domain, 101
inverse, 106
kernel, 64
momentum, 110
norm, 102
resolvent, 106
spectrum, 107
unbounded, 102
Linear space, 55
basis, 56, 90
complex, 56
convergence of series, 90
dimension, 56
infinite dimensional, 83
norm, 83
real, 56
Linearly dependent, 56
Linearly independent, 56
Liouville theorem, 25
Möbius transform, 218
Matrix, 59
addition, 60
diagonal matrix, 61
multiplication, 60
orthogonal, 72
scalar multiplication, 60
square, 59
unit matrix, 61
zero matrix, 61
Matrix representation of an operator, 62
Mean value theorem, 218
Momentum operator, 73, 105, 110, 160, 175,
177, 179, 189
Moreras theorem, 27
Multiplicity, 78
Neumann
Laplacian, 215, 241
Neumann problem, 127, 222, 241
Nodal lines, 249
Norm, 83
induced by an inner product, 68
of a linear operator, 102
sup-norm, 111
Normalized, 88
Normed space, 83
Operator
adjoint, 70, 258
coordinate, 177
coordinates, 160
differential, 261
differentiation, 160
Fourier, 177, 258
integral, 257, 261
inverse, 71
invertible, 71
Laplace, 211, 214, 241, 247
momentum, 73, 105, 160, 175, 177, 179, 189
multiplication, 160, 177
rotation, 73
Schrödinger, 179, 189, 261, 264
selfadjoint, 70, 133
similar, 75, 159
unitary, 71
unitary equivalent, 176, 177
Ordinary point, 137
Orthogonal basis, 136
Orthogonal functions, 88
Orthogonal matrix, 72
Orthogonal vectors, 68
Orthonormal basis, 68, 110, 241
Parseval equation, 68, 91
Path
connected, 32
Plancherel theorem, 154
Point
irregular singular, 139
ordinary, 137
regular singular, 139
singular, 138
Point spectrum, 173
Poisson formula, 219, 227
Poisson kernel, 219
upper half plane, 229
Pole, 37
Potential, 207
Coulomb, 214
Power series, 30, 133
Power series solution, 137
Purely discrete spectrum, 110
Rectangular harmonics, 223
Regular singular point, 139
Residue, 38, 39
Residue formula, 39
Residue theorem, 40
Resolvent, 106, 173
Robin
Laplacian, 215
Robin problem, 128
Rodrigues formula, 136
Rotation operator, 73
Triangle inequality, 19