
Mathematical Methods

Lecture 6.
20/10-2008

1 Ordinary differential equations (ODEs)


We are interested in solving differential equations of the form
F(y(x), y′(x), y′′(x), ..., x) = 0.
If F is a linear function of (the derivatives of) y(x), the differential equation is said to be linear. Otherwise it is non-linear. We will write y for y(x), and differentiation is always with respect to x unless otherwise specified.
Ex.
y′′ + 3y′ + cos(x)y = x² (linear);   (y′)² + y = 0 (non-linear).

1.1 Simple tricks


y′ = f(x) → y(x) = ∫ f(x) dx + c.

We fix c by using initial conditions, like y(x₀) = y₀. For a first order equation, we need only one: the solution space is one-dimensional.
y′′ = f(x) → y(x) = ∫ (∫ f(x) dx + c₁) dx + c₂.

We fix c₁ and c₂ by using initial conditions, like y(x₀) = y₀, y′(x₀) = c₀. For second order equations, we need two: the solution space is two-dimensional.
Sometimes, we can separate the variables
g(y) y′ = f(x) → ∫ g(y) dy = ∫ f(x) dx + c.

We again fix c using initial conditions.


Ex.
9yy′ + 4x = 0 → ∫ 9y dy = −∫ 4x dx + c → (9/2)y² = −2x² + c → x²/9 + y²/4 = c̃.
Sometimes, we can use a change of variable to reduce to separable form
y′ = g(y/x),   u(x) = y/x → u + u′x = g(u) → du/(g(u) − u) = dx/x.
Ex.
2xyy′ − y² + x² = 0 → 2(y/x)y′ − (y/x)² + 1 = 0 → 2u du/(1 + u²) = −dx/x.
By integration
ln(1 + u²) = −ln|x| + c → x² + y² = cx → (x − c/2)² + y² = c²/4.
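As a sanity check, here is a minimal sympy sketch (assuming sympy is available) verifying the implicit solution x² + y² = cx by implicit differentiation:

```python
# Verify that x^2 + y^2 = c*x solves 2xy y' - y^2 + x^2 = 0.
import sympy as sp

x, c = sp.symbols("x c")
y = sp.Function("y")

# Implicit differentiation of x^2 + y(x)^2 - c*x = 0, solved for y'.
F = x**2 + y(x)**2 - c*x
yprime = sp.solve(sp.diff(F, x), sp.Derivative(y(x), x))[0]

# Eliminate c using the implicit solution itself: c = (x^2 + y^2)/x.
yprime = yprime.subs(c, (x**2 + y(x)**2)/x)

print(sp.simplify(2*x*y(x)*yprime - y(x)**2 + x**2))  # -> 0
```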

1.2 Total derivatives
The equation

f₁(y, x) dx + f₂(y, x) dy = 0

is a total differential if there exists a function u(x, y), so that

f₁(y, x) = ∂u/∂x,   f₂(y, x) = ∂u/∂y.

Then an equivalent differential equation is

du(y, x) = 0 → u(y, x) = c.

We find u by noting that


u(y, x) = ∫ f₁(y, x) dx + g₁(y),   u(y, x) = ∫ f₂(y, x) dy + g₂(x).

Sometimes the equation is not a total differential, but can be made into one by multiplying by a function F(x, y), known as an integrating factor.
Ex.
x dy − y dx = 0 → F(x) = 1/x² → dy/x − y dx/x² = 0 → u(y, x) = y/x = c → y = cx.
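A short sympy sketch of the exactness test for this example: f₁ dx + f₂ dy is a total differential iff ∂f₁/∂y = ∂f₂/∂x, which fails before, and holds after, multiplying by F = 1/x²:

```python
import sympy as sp

x, y = sp.symbols("x y")
f1, f2 = -y, x                     # x dy - y dx = 0 means f1 = -y, f2 = x

print(sp.diff(f1, y) == sp.diff(f2, x))    # False: not a total differential
F = 1/x**2                                 # integrating factor
print(sp.simplify(sp.diff(F*f1, y) - sp.diff(F*f2, x)) == 0)   # True: exact
```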

1.3 Linear first order ODEs


We are specifically interested in solving linear ordinary differential equations of the form

y′ + f(x)y = r(x).

f(x) and r(x) are assumed to be known analytic functions. If r(x) = 0 we speak of a homogeneous differential equation.
First order linear differential equations always have solutions (Theorem!), and the solution
space is one-dimensional. Hence any solution can be written as

y(x) = a y₁(x)

where y₁(x) is one solution, for some a determined from a boundary condition. In the homogeneous case, we have by separation of variables

y′ + f(x)y = 0 → dy/y = −f(x) dx → ln|y| = −∫ f(x) dx + c

which becomes

y(x) = c e^{−∫ f(x) dx}.

In the inhomogeneous case,

y′ + f(x)y = r(x),

we assume an integrating factor F depending only on x. By cross differentiation and separation
of variables we get
F(x)(fy − r) dx + F(x) dy = 0 → Ff = dF/dx → F(x) = e^{∫ f(x) dx}.
Then we use the integrating factor
(y e^{∫ f(x) dx})′ = e^{∫ f(x) dx} (y′ + fy) = e^{∫ f(x) dx} r(x) →

y(x) = e^{−∫ f(x) dx} (∫ e^{∫ f(x) dx} r(x) dx + c).

Ex.
y′ − y = e^{2x} → y(x) = e^x (∫ e^{−x} e^{2x} dx + c) = c e^x + e^{2x}.
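The example can be cross-checked with sympy's ODE solver; a minimal sketch:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

sol = sp.dsolve(sp.Eq(y(x).diff(x) - y(x), sp.exp(2*x)), y(x))
print(sol)   # y(x) = (C1 + exp(x))*exp(x), i.e. c*e^x + e^{2x}
```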

We can write this differently, starting with the solution to the homogeneous equation v(x) = exp(−∫ f(x) dx), and write for the inhomogeneous solution (for some u(x) we then need to solve for)

y(x) = u(x)v(x) → u′(x)v(x) + u(x)(v′(x) + f(x)v(x)) = r(x) → u′ = r/v.

Then
u(x) = ∫ r(x)/v(x) dx + c → y(x) = u(x)v(x) = v(x) (∫ r(x)/v(x) dx + c),

as above. This procedure is known as variation of parameters.

1.4 Linear second order ODEs


Second order linear differential equations

y′′ + f(x)y′ + g(x)y = r(x),

always have solutions (Theorem!), and the solution space is two-dimensional. Hence we can
find two linearly independent solutions y₁(x) and y₂(x).
For the homogeneous case r(x) = 0, any solution can be written as a linear combination of the
two,

y(x) = a y₁(x) + b y₂(x).

The coefficients a and b are determined from the boundary conditions.


In the special case where the coefficients are constants

y′′ + ay′ + by = 0,

we can use an ansatz for y(x) to get a simpler equation



y(x) = c e^{λx} → λ² + aλ + b = 0 → λ± = (−a ± √(a² − 4b))/2.

The eigenvalues λ± may be complex, but then they are complex conjugates. If λ₋ ≠ λ₊, we have two linearly independent solutions, and we write the general solution as

y(x) = c₁ e^{λ₊x} + c₂ e^{λ₋x}.

If λ₋ = λ₊, the solutions are not linearly independent, and we need another solution to span the solution space. By variation of parameters (using the ansatz y₂(x) = u(x)y₁(x)), we find that this is given by y₂(x) = x e^{λ±x}, and so the general solution is

y(x) = c₁ e^{λ±x} + c₂ x e^{λ±x}.

Ex.

y′′ + y′ − 2y = 0 → λ² + λ − 2 = 0 → λ± = 1, −2

so that

y(x) = c₁ e^x + c₂ e^{−2x}.

Ex.

y′′ + 8y′ + 16y = 0 → λ² + 8λ + 16 = 0 → λ± = −4,

y(x) = c₁ e^{−4x} + c₂ x e^{−4x}.
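The constant-coefficient recipe is easy to automate. A small numeric sketch (the helper name general_solution is mine, purely illustrative):

```python
import numpy as np

def general_solution(a: float, b: float) -> str:
    """Return the general solution of y'' + a y' + b y = 0 as a string."""
    lam1, lam2 = np.roots([1.0, a, b])
    # Loose tolerance: a numerical double root can split slightly.
    if abs(lam1 - lam2) > 1e-6:     # distinct roots (possibly complex)
        return f"y(x) = c1*exp({lam1}*x) + c2*exp({lam2}*x)"
    return f"y(x) = (c1 + c2*x)*exp({lam1}*x)"   # repeated root

print(general_solution(1, -2))    # roots 1 and -2, as in the first example
print(general_solution(8, 16))    # double root -4, as in the second example
```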

In the inhomogeneous case r(x) ≠ 0, we construct the general solution from the solution to the homogeneous equation,

yₕ(x) = c₁ y₁(x) + c₂ y₂(x).

Then we have

y(x) = yₚ(x) + yₕ(x),   yₚ(x) = −y₁(x) ∫ y₂(x) r(x)/W(x) dx + y₂(x) ∫ y₁(x) r(x)/W(x) dx,

where W is the Wronskian, W(y₁, y₂) = y₁y₂′ − y₂y₁′, which is non-zero if y₁ and y₂ are linearly independent.
Ex.

y′′ + y = sec(x)

Solution to the homogeneous equation:

y′′ + y = 0 → y₁(x) = cos(x), y₂(x) = sin(x) → W = 1.

yₚ(x) = −cos(x) ∫ sin(x) sec(x) dx + sin(x) ∫ cos(x) sec(x) dx = cos(x) ln|cos(x)| + x sin(x).

y(x) = yₚ(x) + c₁ y₁(x) + c₂ y₂(x).
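A minimal sympy sketch checking that this particular solution really satisfies y′′ + y = sec(x):

```python
import sympy as sp

x = sp.symbols("x")
yp = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)   # particular solution above
print(sp.simplify(yp.diff(x, 2) + yp - sp.sec(x)))   # -> 0
```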

1.5 Existence and uniqueness
Theorem: If the coefficient functions fₙ(x) are continuous on an open interval I, then the linear homogeneous n'th order equation

fₙ y^{(n)} + fₙ₋₁ y^{(n−1)} + ... + f₁ y = 0

has a unique solution obeying the initial conditions

y(x₀) = k₁,   y′(x₀) = k₂,   ...,   y^{(n−1)}(x₀) = kₙ.

All the solutions (with different initial conditions) are linear combinations of precisely n linearly independent solutions, and any linear combination of these is a solution. In other words, the solution space is an n-dimensional vector space, spanned by the basis yᵢ, i = 1, ..., n.

1.6 Power series method


Assume that the solution to some linear differential equation is analytic and write

y(x) = ∑_{m=0}^∞ c_m x^m.

Plugging this into the differential equation, as well as similar power series for f(x), g(x) and r(x), we can solve for each coefficient c_m by equating coefficients of x^m order by order.
Ex.
y′ − y = 0 → ∑_n n c_n x^{n−1} − ∑_m c_m x^m = 0.

Matching powers,

(n + 1) c_{n+1} = c_n → y(x) = c₀ ∑_n x^n/n! = c₀ e^x.
c₀ is given by the initial condition.
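A short numeric sketch of the same recursion, c_{n+1} = c_n/(n + 1), comparing the truncated series with e^x at a sample point:

```python
import math

c = [1.0]                       # c_0 = 1, i.e. the initial condition y(0) = 1
for n in range(20):
    c.append(c[n] / (n + 1))    # matching powers: (n+1) c_{n+1} = c_n

x = 0.7
print(sum(cn * x**n for n, cn in enumerate(c)))   # truncated power series
print(math.exp(x))                                # agrees to ~machine precision
```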

1.6.1 Convergence
• If a power series converges for all |x| < R and diverges for |x| > R, R is said to be its radius of convergence. It can be found from either of (see the numeric sketch after this list)

1/R = lim_{m→∞} |c_m|^{1/m},   1/R = lim_{m→∞} |c_{m+1}/c_m|.

• A power series retains its radius of convergence upon differentiation and integration term
by term.

• The sum and product of two power series are convergent in the region where they are both convergent.
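A numeric sketch of the ratio formula, using the series ln(1 + x) = ∑_{m≥1} (−1)^{m+1} x^m/m, whose radius of convergence is R = 1:

```python
def c(m: int) -> float:
    return (-1) ** (m + 1) / m     # coefficients of ln(1 + x)

for m in (10, 100, 1000):
    print(m, abs(c(m + 1) / c(m)))   # estimates of 1/R, approaching 1
```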

1.7 Frobenius method
Simple power series don’t always work. An equation of the form
y′′ + (a(x)/x) y′ + (b(x)/x²) y = 0,
has at least one solution of the form

y(x) = x^r ∑_{m=0}^∞ c_m x^m,

with r a real number.


The form of the solutions is determined by the indicial equation

r² + (a₀ − 1)r + b₀ = 0,

where a₀ and b₀ are the leading coefficients of the power series for a(x) and b(x), respectively. This equation has two roots, r₁ and r₂. The simplest case is when a(x) = a₀ and b(x) = b₀ are just constants. Then the solution is of the form y(x) = x^r, and the differential equation is a Cauchy equation.
Otherwise, three distinct cases can occur:
• If r₁ ≠ r₂ and r₁ − r₂ is not an integer, the two independent solutions come from plugging r₁ and r₂ into the power series ansatz,

y₁(x) = x^{r₁} ∑_{m=0}^∞ c_m x^m,   y₂(x) = x^{r₂} ∑_{m=0}^∞ d_m x^m.

Then put these expressions back into the differential equation and solve for all the c_m and d_m.
• If r₁ = r₂, only one solution is found from plugging into the power series ansatz. The second independent solution is of the form

y₁(x) = x^r ∑_{m=0}^∞ c_m x^m,   y₂(x) = y₁(x) ln(x) + x^r ∑_{m=0}^∞ d_m x^m.

Then put these expressions back into the differential equation and solve for all the c_m and d_m.
• If r₁ ≠ r₂ and r₁ − r₂ is an integer, y₁(x) again follows from plugging r₁ into the power series ansatz, while y₂(x) instead takes the form

y₁(x) = x^{r₁} ∑_{m=0}^∞ c_m x^m,   y₂(x) = k y₁(x) ln(x) + x^{r₂} ∑_{m=0}^∞ e_m x^m,

for some k which may be zero. Then put these expressions back into the differential equation and solve for all the c_m, e_m and k.
(Note that case 2 is just case 3 with r₁ − r₂ = 0.) A small sketch classifying the three cases follows.
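This is a sketch classifying the cases from the indicial equation r² + (a₀ − 1)r + b₀ = 0 (real roots assumed; the helper name frobenius_case is mine, purely illustrative):

```python
import numpy as np

def frobenius_case(a0: float, b0: float, tol: float = 1e-9) -> str:
    r1, r2 = sorted(np.roots([1.0, a0 - 1.0, b0]).real, reverse=True)
    if abs(r1 - r2) < tol:
        return f"double root r = {r1}: second solution needs a log term"
    if abs((r1 - r2) - round(r1 - r2)) < tol:
        return f"r = {r1}, {r2}: integer gap, log term with k possibly zero"
    return f"r = {r1}, {r2}: two plain Frobenius series"

# Bessel's equation with nu = 1/2 has a0 = 1, b0 = -1/4, so r = ±1/2:
print(frobenius_case(1.0, -0.25))
```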
What does it mean? Only the simplest non-linear differential equations are tractable. The linear ones are much simpler: use any of the tools above, and if all else fails, use a power series. If the coefficient functions are analytic (series in non-negative integer powers), then so is the solution. Otherwise, in the special Frobenius case, we can use one non-integer power x^r.

2 Orthogonal polynomials
The set of functions defined on (a subset of) the real axis can be thought of as a vector space,
with the obvious composition rules

(af + bg)(x) = af (x) + bg(x),

which is of course associative, commutative, distributive. The zero vector is

f (x) = 0,

and the inverse is

(−f )(x) = −f (x).

Consider in particular the space of functions that are square integrable over an interval [a, b],
∫_a^b dx |f(x)|² < ∞,

(the completion of) which is denoted L²[a, b]. We can define the inner product
(f, g) = (1/(b − a)) ∫_a^b dx f*(x) g(x),

and define the norm of a vector as


|f| = √(f, f).

In the case b = −a = ∞, we replace the prefactor 1/(b − a) → 1/(2π). If (f, g) = 0, the functions are said to be orthogonal. As for any vector space, individual vectors can be written as a linear combination of a set of linearly independent basis vectors h_i. It is particularly useful to have a set of orthogonal, or even orthonormal, basis vectors e_i. We can then write for any f(x)
f(x) = ∑_i a_i e_i(x),

where the a_i are complex coefficients. This is known as a generalized Fourier series for f. We have

a_i = (e_i, f),

and Parseval's theorem, which says that

|f|² = (1/(b − a)) ∫_a^b dx |f(x)|² = ∑_i |a_i|²,

as can be easily checked for a finite-dimensional space, where it is the generalisation of Pythagoras' theorem.
A (complete) inner product space is called a Hilbert space, a crucial concept in quantum mechanics. In fact, L²[a, b] is the space of wavefunctions ψ(x) defined on some interval [a, b]. This can of course be generalised to functions on, say, three-dimensional spaces, and to infinitely many degrees of freedom, as in quantum field theory.
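A numeric sketch of the generalized Fourier coefficients and Parseval's theorem on [−π, π], using the orthonormal basis e_n(x) = e^{inx} under the inner product above (prefactor 1/(2π)):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = x                                          # expand f(x) = x
inner = lambda u, v: np.trapz(np.conj(u) * v, x) / (2*np.pi)

N = 200
coeffs = [inner(np.exp(1j*n*x), f) for n in range(-N, N + 1)]  # a_n = (e_n, f)

print(sum(abs(a)**2 for a in coeffs))   # partial Parseval sum, ~3.28
print(inner(f, f).real)                 # |f|^2 = pi^2/3 ~ 3.2899
# The partial sums approach pi^2/3 only slowly: the tail falls off as 1/N.
```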

We can generalize by defining orthogonality of real functions f, g with respect to a given weight function p(x),

(f, g) = (1/(b − a)) ∫_a^b dx p(x) f(x) g(x) = 0.
What does it mean? We should think of functions as vectors in a vector space with an inner product, from which we can define norms, orthogonality and linear independence. In particular, we can look for a basis of vectors on which we can expand all functions (of the type we are interested in).

2.1 The Sturm-Liouville equation


Consider the differential equation

(r(x)y′)′ + (q(x) + λp(x)) y = 0,

or

y′′ + (r′(x)/r(x)) y′ + (q(x)/r(x)) y = −λ (p(x)/r(x)) y,

on [a, b] with boundary conditions

k₁ y(a) + k₂ y′(a) = 0,   l₁ y(b) + l₂ y′(b) = 0.
λ is a parameter. Given r, q, p, the equation has solutions y(x) for certain values of λ.
Theorem: If r, r′, q, p are real and continuous, solutions yₙ (eigenfunctions) corresponding to different eigenvalues λₙ are orthogonal with respect to the weight p(x). The eigenvalues are real if p(x) is everywhere positive or everywhere negative.
What does it mean? The Sturm-Liouville equation is a unifying description of many linear homogeneous second order ODEs that appear in physical applications. The theorem then states that the solutions/eigenfunctions corresponding to a set of eigenvalues λₙ can be used as a basis for the vector space of functions. Each basis will have some applications where it is particularly useful.

2.2 Examples of orthogonal polynomials


2.2.1 Legendre polynomials
Choosing r(x) = 1 − x², q(x) = 0, p(x) = 1 and λ = n(n + 1), with n a non-negative integer, we get Legendre's equation,
(1 − x²)y′′ − 2xy′ + n(n + 1)y = 0.
The power series solution gives the Legendre polynomials
Pₙ(x) = ∑_{m=0}^M (−1)^m (2n − 2m)! / (2^n m! (n − m)! (n − 2m)!) x^{n−2m},
with M = n/2 or M = (n − 1)/2, whichever is an integer. The first four polynomials are

P₀(x) = 1,   P₁(x) = x,   P₂(x) = (3x² − 1)/2,   P₃(x) = (5x³ − 3x)/2.
The Legendre polynomials are orthogonal on [−1; 1] with respect to the weight function p(x) = 1 (so in the L² sense).
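This is quick to verify with scipy; a sketch using Gauss-Legendre quadrature and the standard normalization ∫ P_m P_n dx = 2δ_{mn}/(2n + 1):

```python
import numpy as np
from scipy.special import eval_legendre

xq, wq = np.polynomial.legendre.leggauss(50)   # exact for degree <= 99
for m in range(4):
    for n in range(4):
        val = np.sum(wq * eval_legendre(m, xq) * eval_legendre(n, xq))
        expected = 2/(2*n + 1) if m == n else 0.0
        assert abs(val - expected) < 1e-12
print("Legendre orthogonality verified for m, n < 4")
```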

2.2.2 Bessel functions
With p(x) = x, q(x) = −ν²/x, r(x) = x (and λ = 1) we get Bessel's equation,

x²y′′ + xy′ + (x² − ν²)y = 0.

Here ν is a real positive number. Plugging in the Frobenius ansatz, we get the indicial equation

(r + ν)(r − ν) = 0.
For r₁ = ν > 0 we have the Bessel function of the first kind of order ν,

J_ν(x) = x^ν ∑_{m=0}^∞ (−1)^m x^{2m} / (2^{2m+ν} m! Γ(ν + m + 1)).

It converges for all x. Similarly we define J_{−ν}(x). If ν is not an integer, the two are linearly independent, and a complete solution to Bessel's equation is (for some a, b)

y(x) = a J_ν(x) + b J_{−ν}(x).
If ν is an integer, the two are not linearly independent, and we define instead the Bessel functions of the second kind of order ν (understood as a limit when ν is an integer),

Y_ν(x) = [J_ν(x) cos(νπ) − J_{−ν}(x)] / sin(νπ).
The complete solution to Bessel's equation is now

y(x) = a J_ν(x) + b Y_ν(x).
For real values of x we can also define complex-valued Bessel functions of the third kind (the first and second Hankel functions),

H_ν^{(1)}(x) = J_ν(x) + i Y_ν(x),
H_ν^{(2)}(x) = J_ν(x) − i Y_ν(x).
Bessel functions are orthogonal with respect to the weight function p(x) = x.
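A scipy sketch of two relations quoted above: the first Hankel function as J + iY, plus the known closed form J_{1/2}(x) = √(2/(πx)) sin(x) for a half-integer order:

```python
import numpy as np
from scipy.special import jv, yv, hankel1

x = np.linspace(0.5, 10.0, 5)
assert np.allclose(hankel1(1.5, x), jv(1.5, x) + 1j*yv(1.5, x))
assert np.allclose(jv(0.5, x), np.sqrt(2/(np.pi*x)) * np.sin(x))
print("Hankel and half-integer Bessel identities check out")
```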

2.2.3 Chebyshev polynomials


With r(x) = √(1 − x²), p(x) = 1/√(1 − x²), q(x) = 0 and λ = n² we get Chebyshev's equation,
(1 − x²)y′′ − xy′ + n²y = 0,
for n integer. The Chebyshev polynomials
Tₙ(x) = cos(n arccos(x)),

are orthogonal on [−1; 1] with respect to the weight function p(x) = 1/√(1 − x²).
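A numeric sketch of this orthogonality: substituting x = cos(θ) turns the weighted integral into ∫₀^π cos(mθ) cos(nθ) dθ, which is 0 for m ≠ n, π/2 for m = n ≠ 0, and π for m = n = 0:

```python
import numpy as np
from scipy.special import eval_chebyt

theta = np.linspace(0.0, np.pi, 10001)
for m, n in [(2, 3), (3, 3), (0, 0)]:
    integrand = eval_chebyt(m, np.cos(theta)) * eval_chebyt(n, np.cos(theta))
    print(m, n, np.trapz(integrand, theta))   # -> 0, pi/2, pi
```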

2.2.4 Laguerre polynomials


With r(x) = x e^{−x}, p(x) = e^{−x}, q(x) = 0 and λ = n we get Laguerre's equation,
xy′′ + (1 − x)y′ + ny = 0.
The solutions

L₀(x) = 1,   Lₙ(x) = (e^x/n!) dⁿ(xⁿ e^{−x})/dxⁿ,
are orthogonal on [0; ∞[ with respect to the weight function p(x) = e^{−x}.
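A scipy sketch of this orthogonality via Gauss-Laguerre quadrature (which absorbs the weight e^{−x} into the quadrature weights); with this normalization ∫₀^∞ e^{−x} L_m L_n dx = δ_{mn}:

```python
import numpy as np
from scipy.special import eval_laguerre

xq, wq = np.polynomial.laguerre.laggauss(50)
for m in range(4):
    for n in range(4):
        val = np.sum(wq * eval_laguerre(m, xq) * eval_laguerre(n, xq))
        assert abs(val - (1.0 if m == n else 0.0)) < 1e-10
print("Laguerre orthogonality verified for m, n < 4")
```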

2.2.5 Hermite polynomials
With r(x) = e^{−x²/2}, p(x) = e^{−x²/2}, q(x) = 0 and λ = n we get Weber's/Hermite's equation,

y′′ − xy′ + ny = 0.

The solutions

He₀(x) = 1,   Heₙ(x) = (−1)^n e^{x²/2} dⁿ(e^{−x²/2})/dxⁿ,
are orthogonal on ]−∞; ∞[ with respect to the weight function p(x) = e^{−x²/2}. (Note that Mathematica has a different definition, with e^{−x²} instead of e^{−x²/2}, so there are factors of 2 running around.)
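The same convention issue appears in scipy: eval_hermitenorm gives the probabilists' Heₙ used here, while eval_hermite gives the physicists' Hₙ. A sketch of the orthogonality relation ∫ e^{−x²/2} He_m He_n dx = √(2π) n! δ_{mn}:

```python
import math
import numpy as np
from scipy.special import eval_hermitenorm
from scipy.integrate import quad

for m in range(4):
    for n in range(4):
        val, _ = quad(lambda t: math.exp(-t*t/2)
                      * eval_hermitenorm(m, t) * eval_hermitenorm(n, t),
                      -np.inf, np.inf)
        expected = math.sqrt(2*math.pi) * math.factorial(n) if m == n else 0.0
        assert abs(val - expected) < 1e-6
print("Hermite (He) orthogonality verified for m, n < 4")
```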
