Calculus of Variations
by Peter J. Olver
University of Minnesota
1. Introduction.
Minimization principles form one of the most wide-ranging means of formulating math-
ematical models governing the equilibrium configurations of physical systems. Moreover,
many popular numerical integration schemes such as the powerful finite element method
are also founded upon a minimization paradigm. In these notes, we will develop the basic
mathematical analysis of nonlinear minimization principles on infinite-dimensional function
spaces — a subject known as the “calculus of variations”, for reasons that will be explained
as soon as we present the basic ideas. Classical solutions to minimization problems in the
calculus of variations are prescribed by boundary value problems involving certain types
of differential equations, known as the associated Euler–Lagrange equations. The math-
ematical techniques that have been developed to handle such optimization problems are
fundamental in many areas of mathematics, physics, engineering, and other applications.
In this chapter, we will only have room to scratch the surface of this wide ranging and
lively area of both classical and contemporary research.
The history of the calculus of variations is tightly interwoven with the history of
mathematics. The field has drawn the attention of a remarkable range of mathematical
luminaries, beginning with Newton, then initiated as a subject in its own right by the
Bernoulli family. The first major developments appeared in the work of Euler, Lagrange
and Laplace. In the nineteenth century, Hamilton, Dirichlet and Hilbert are but a few of
the outstanding contributors. In modern times, the calculus of variations has continued
to occupy center stage, witnessing major theoretical advances, along with wide-ranging
applications in physics, engineering and all branches of mathematics.
Minimization problems that can be analyzed by the calculus of variations serve to char-
acterize the equilibrium configurations of almost all continuous physical systems, ranging
through elasticity, solid and fluid mechanics, electro-magnetism, gravitation, quantum me-
chanics, string theory, and many, many others. Many geometrical configurations, such as
minimal surfaces, can be conveniently formulated as optimization problems. Moreover,
numerical approximations to the equilibrium solutions of such boundary value problems
are based on a nonlinear finite element approach that reduces the infinite-dimensional min-
imization problem to a finite-dimensional problem. See [13; Chapter 11] for full details.
Just as the vanishing of the gradient of a function of several variables singles out the
critical points, among which are the minima, both local and global, so a similar “func-
tional gradient” will distinguish the candidate functions that might be minimizers of the
functional. The finite-dimensional calculus leads to a system of algebraic equations for the
critical points; the infinite-dimensional functional analog results in a boundary value prob-
lem for a nonlinear ordinary or partial differential equation whose solutions are the critical
functions for the variational problem. So, the passage from finite to infinite dimensional
nonlinear systems mirrors the transition from linear algebraic systems to boundary value
problems.
The minimal curve problem is to find the shortest path between two specified locations.
In its simplest manifestation, we are given two distinct points
and our task is to find the curve of shortest length connecting them. “Obviously”, as you
learn in childhood, the shortest route between two points is a straight line; see Figure 1.
Mathematically, then, the minimizing curve should be the graph of the particular affine
function†
\[ y = c\,x + d = \frac{\beta - \alpha}{b - a}\,(x - a) + \alpha \tag{2.2} \]
that passes through or interpolates the two points. However, this commonly accepted
“fact” — that (2.2) is the solution to the minimization problem — is, upon closer inspec-
tion, perhaps not so immediately obvious from a rigorous mathematical standpoint.
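As a quick sanity check of this claim, one can compare arc lengths numerically. The following is a minimal Python sketch, with the endpoints and the perturbation chosen purely for illustration: any curve with the same endpoints as the affine interpolant (2.2), but perturbed away from it, should come out longer.

```python
import numpy as np

def arc_length(f, a, b, n=100_000):
    """Polyline approximation of the arc length of y = f(x) on [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

# endpoints (a, alpha) = (0, 0), (b, beta) = (1, 1), chosen only for illustration
line = lambda x: x                              # the affine interpolant (2.2)
bump = lambda x: x + 0.1*np.sin(np.pi*x)        # a perturbed curve, same endpoints

print(arc_length(line, 0.0, 1.0))   # ~ sqrt(2) = 1.4142...
print(arc_length(bump, 0.0, 1.0))   # strictly larger
```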
Let us see how we might formulate the minimal curve problem in a mathematically
precise way. For simplicity, we assume that the minimal curve is given as the graph of
†
We assume that a ≠ b, i.e., the points a, b do not lie on a common vertical line.
Minimal Surfaces
The minimal surface problem is a natural generalization of the minimal curve or
geodesic problem. In its simplest manifestation, we are given a simple closed curve C ⊂ R 3 .
The problem is to find the surface of least total area among all those whose boundary is
the curve C. Thus, we seek to minimize the surface area integral
\[ \operatorname{area} S = \iint_S dS \]
over all possible surfaces S ⊂ R 3 with the prescribed boundary curve ∂S = C. Such an
area–minimizing surface is known as a minimal surface for short. For example, if C is a
closed plane curve, e.g., a circle, then the minimal surface will just be the planar region
it encloses. But, if the curve C twists into the third dimension, then the shape of the
minimizing surface is by no means evident.
Physically, if we bend a wire in the shape of the curve C and then dip it into soapy
water, the surface tension forces in the resulting soap film will cause it to minimize surface
area, and hence be a minimal surface†. Soap films and bubbles have been the source of
much fascination, physical, æsthetical and mathematical, over the centuries. The minimal
surface problem is also known as Plateau’s Problem, named after the nineteenth-century
Belgian physicist Joseph Plateau, who conducted systematic experiments on such soap films.
A satisfactory mathematical solution to even the simplest version of the minimal surface
problem was only achieved in the mid twentieth century, [10, 11]. Minimal surfaces and
†
More accurately, the soap film will realize a local but not necessarily global minimum for
the surface area functional. Nonuniqueness of local minimizers can be realized in the physical
experiment — the same wire may support more than one stable soap film.
related variational problems remain an active area of contemporary research, and are of
importance in engineering design, architecture, and biology, including foams, domes, cell
membranes, and so on.
Let us mathematically formulate the search for a minimal surface as a problem in
the calculus of variations. For simplicity, we shall assume that the bounding curve C
projects down to a simple closed curve Γ = ∂Ω that bounds an open domain Ω ⊂ R 2 in
the (x, y) plane, as in Figure 3. The space curve C ⊂ R 3 is then given by z = g(x, y) for
(x, y) ∈ Γ = ∂Ω. For “reasonable” boundary curves C, we expect that the minimal surface
S will be described as the graph of a function z = u(x, y) parametrized by (x, y) ∈ Ω.
According to the basic calculus, the surface area of such a graph is given by the double
integral
\[ J[u] = \iint_\Omega \sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2}\; dx\, dy. \tag{2.9} \]
To find the minimal surface, then, we seek the function z = u(x, y) that minimizes the
surface area integral (2.9) when subject to the Dirichlet boundary conditions
As we will see in (5.10), the solutions to this minimization problem satisfy a complicated
nonlinear second order partial differential equation.
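The functional (2.9) is easy to evaluate numerically for trial surfaces. The following is a minimal Python sketch, illustrative only: it takes Ω to be the unit square (an assumption made purely for convenience) and compares the area of the flat graph u ≡ 0 with that of a bump having the same (zero) boundary values.

```python
import numpy as np

def area(u, n=200):
    """Riemann-sum approximation of the area functional (2.9) for z = u(x, y),
    taking Omega to be the unit square (purely for illustration)."""
    x = np.linspace(0.0, 1.0, n)
    y = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, y, indexing="ij")
    ux, uy = np.gradient(u(X, Y), x, y)          # finite-difference u_x, u_y
    h = x[1] - x[0]
    return np.sum(np.sqrt(1.0 + ux**2 + uy**2)) * h * h

flat = lambda X, Y: np.zeros_like(X)                       # boundary data g = 0
bump = lambda X, Y: 0.2*np.sin(np.pi*X)*np.sin(np.pi*Y)    # same boundary values

print(area(flat))   # ~ 1, the area of the flat graph
print(area(bump))   # strictly larger, as expected
```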
A simple version of the minimal surface problem, that still contains some interesting
features, is to find minimal surfaces with rotational symmetry. A surface of revolution is
obtained by revolving a plane curve about an axis, which, for definiteness, we take to be
the x axis. Thus, given two points a = (a, α), b = (b, β) ∈ R 2 , the goal is to find the curve
y = u(x) joining them such that the surface of revolution obtained by revolving the curve
around the x-axis has the least surface area. Each cross-section of the resulting surface is
a circle centered on the x axis. The area of such a surface of revolution is given by
\[ J[u] = 2\pi \int_a^b |u|\, \sqrt{1 + (u')^2}\; dx. \tag{2.11} \]
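As a numerical illustration of (2.11), the following sketch compares two profiles joining the symmetric endpoints (−1, cosh 1) and (1, cosh 1), chosen purely for convenience: the catenary u(x) = cosh x, which satisfies the Euler–Lagrange equation derived in Section 3 below, and the constant (cylindrical) profile through the same endpoints. The catenary produces the smaller area.

```python
import numpy as np
from scipy.integrate import quad

def revolution_area(u, up, a, b):
    """The surface-of-revolution area functional (2.11) for y = u(x) on [a, b]."""
    val, _ = quad(lambda x: 2.0*np.pi*abs(u(x))*np.sqrt(1.0 + up(x)**2), a, b)
    return val

a, b = -1.0, 1.0
alpha = beta = np.cosh(1.0)                     # symmetric endpoints (+-1, cosh 1)

catenary = (lambda x: np.cosh(x), lambda x: np.sinh(x))
constant = (lambda x: alpha + 0.0*x, lambda x: 0.0*x)     # cylindrical profile

print(revolution_area(*catenary, a, b))   # ~ 17.68
print(revolution_area(*constant, a, b))   # ~ 19.39, larger than the catenoid
```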
over all possible domains Ω ⊂ R 2 . Of course, the “obvious” solution to this problem is that
the curve must be a circle whose perimeter is ℓ, whence the name “isoperimetric”. Note
that the problem, as stated, does not have a unique solution, since if Ω is a maximizing
domain, any translated or rotated version of Ω will also maximize area subject to the
length constraint.
To make progress on the isoperimetric problem, let us assume that the boundary curve
is parametrized by its arc length, so x(s) = ( x(s), y(s) )^T with 0 ≤ s ≤ ℓ, subject to the
requirement that
\[ \left(\frac{dx}{ds}\right)^2 + \left(\frac{dy}{ds}\right)^2 = 1. \tag{2.12} \]
We can compute the area of the domain by a line integral around its boundary,
\[ \operatorname{area} \Omega = \oint_{\partial\Omega} x\, dy = \int_0^\ell x\, \frac{dy}{ds}\; ds, \tag{2.13} \]
and thus we seek to maximize the latter integral subject to the arc length constraint (2.12).
We also impose periodic boundary conditions
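A small numerical experiment supports the “obvious” answer. The sketch below, illustrative only, rescales ellipses of varying eccentricity so that each has the prescribed perimeter ℓ = 2π, and then evaluates the enclosed area by the boundary integral (2.13); the circle gives the largest value.

```python
import numpy as np
from scipy.integrate import quad

def perimeter(A, B):
    """Perimeter of the ellipse x = A cos t, y = B sin t."""
    val, _ = quad(lambda t: np.hypot(A*np.sin(t), B*np.cos(t)), 0.0, 2.0*np.pi)
    return val

def area_via_2_13(A, B):
    """Area via the boundary integral (2.13):  oint x dy = int x(t) y'(t) dt."""
    val, _ = quad(lambda t: (A*np.cos(t))*(B*np.cos(t)), 0.0, 2.0*np.pi)
    return val

ell = 2.0*np.pi                        # prescribed perimeter: that of the unit circle
for ratio in (1.0, 1.5, 2.0):          # ratio of the semi-axes; 1.0 is the circle
    s = ell / perimeter(ratio, 1.0)    # rescale so the perimeter equals ell
    print(ratio, area_via_2_13(s*ratio, s))
# the circle (ratio 1.0) yields the largest area, namely pi
```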
The integrand is known as the Lagrangian for the variational problem, in honor of La-
grange, one of the main founders of the subject. We usually assume that the Lagrangian
L(x, u, p) is a reasonably smooth function of all three of its (scalar) arguments x, u, and
p, which represents the derivative u′. For example, the arc length functional (2.3) has
Lagrangian function L(x, u, p) = √(1 + p²), whereas in the surface of revolution problem
(2.11), we have L(x, u, p) = 2π |u| √(1 + p²). (In the latter case, the points where u = 0
are slightly problematic, since L is not continuously differentiable there.)
In order to uniquely specify a minimizing function, we must impose suitable boundary
conditions. All of the usual suspects — Dirichlet (fixed), Neumann (free), as well as mixed
and periodic boundary conditions — are also relevant here. In the interests of brevity, we
shall concentrate on the Dirichlet boundary conditions
\[ \langle\, \nabla J[u] \,;\, v \,\rangle = \left.\frac{d}{dt}\, J[u + t v]\right|_{t=0}. \tag{3.3} \]
Here v(x) is a function that prescribes the “direction” in which the derivative is computed.
Classically, v is known as a variation in the function u, sometimes written v = δu, whence
the term “calculus of variations”. Similarly, the gradient operator on functionals is often
referred to as the variational derivative, and often written δJ. The inner product used in
(3.3) is usually taken (again for simplicity) to be the standard L2 inner product
\[ \langle\, f \,;\, g \,\rangle = \int_a^b f(x)\, g(x)\; dx \tag{3.4} \]
on function space. Indeed, while the formula for the gradient will depend upon the under-
lying inner product, the characterization of critical points does not, and so the choice of
inner product is not significant here.
Now, starting with (3.1), for each fixed u and v, we must compute the derivative of
the function
\[ h(t) = J[u + t v] = \int_a^b L(x,\, u + t v,\, u' + t v')\; dx. \tag{3.5} \]
Assuming sufficient smoothness of the integrand allows us to bring the derivative inside
the integral and so, by the chain rule,
\begin{align*}
h'(t) &= \frac{d}{dt}\, J[u + t v] = \int_a^b \frac{d}{dt}\, L(x,\, u + t v,\, u' + t v')\; dx \\
&= \int_a^b \left[\, v\, \frac{\partial L}{\partial u}(x,\, u + t v,\, u' + t v') + v'\, \frac{\partial L}{\partial p}(x,\, u + t v,\, u' + t v') \,\right] dx.
\end{align*}
Therefore, setting t = 0 in order to evaluate (3.3), we find
\[ \langle\, \nabla J[u] \,;\, v \,\rangle = \int_a^b \left[\, v\, \frac{\partial L}{\partial u}(x, u, u') + v'\, \frac{\partial L}{\partial p}(x, u, u') \,\right] dx. \tag{3.6} \]
The resulting integral is often referred to as the first variation of the functional J[ u ]. The
condition
h ∇J[ u ] ; v i = 0
for a minimizer is known as the weak form of the variational principle.
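Formula (3.6) can be checked numerically before proceeding. The following sketch, with an illustrative choice of Lagrangian, trial function, and variation, compares a centered finite-difference approximation of h′(0) = d/dt J[u + t v] at t = 0 with the first-variation integral (3.6); the two values agree to several digits.

```python
import numpy as np
from scipy.integrate import quad

# illustrative choices: the arc length Lagrangian, a parabolic trial function,
# and a variation vanishing at both endpoints
L   = lambda x, u, p: np.sqrt(1.0 + p**2)
L_u = lambda x, u, p: 0.0
L_p = lambda x, u, p: p/np.sqrt(1.0 + p**2)

a, b = 0.0, 1.0
u,  up = (lambda x: x**2),            (lambda x: 2.0*x)
v,  vp = (lambda x: np.sin(np.pi*x)), (lambda x: np.pi*np.cos(np.pi*x))

def J(f, fp):
    val, _ = quad(lambda x: L(x, f(x), fp(x)), a, b)
    return val

eps = 1e-6          # centered finite difference of h(t) = J[u + t v] at t = 0
fd = (J(lambda x: u(x) + eps*v(x), lambda x: up(x) + eps*vp(x))
      - J(lambda x: u(x) - eps*v(x), lambda x: up(x) - eps*vp(x))) / (2.0*eps)

first_variation, _ = quad(lambda x: v(x)*L_u(x, u(x), up(x))
                                    + vp(x)*L_p(x, u(x), up(x)), a, b)

print(fd, first_variation)    # the two values agree to roughly six digits
```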
between some function h(x) = ∇J[ u ] and the variation v. The first summand has this
form, but the derivative v ′ appearing in the second summand is problematic. However,
one can easily move derivatives around inside an integral through integration by parts. If
we set
\[ r(x) \equiv \frac{\partial L}{\partial p}\bigl(x,\, u(x),\, u'(x)\bigr), \]
we can rewrite the offending term as
\[ \int_a^b r(x)\, v'(x)\; dx = r(b)\, v(b) - r(a)\, v(a) - \int_a^b r'(x)\, v(x)\; dx, \tag{3.7} \]
\[ \widehat{u}(a) = u(a) + t\, v(a) = \alpha, \qquad \widehat{u}(b) = u(b) + t\, v(b) = \beta. \]
For this to hold, the variation v(x) must satisfy the corresponding homogeneous boundary
conditions
v(a) = 0, v(b) = 0. (3.9)
As a result, both boundary terms in our integration by parts formula (3.7) vanish, and we
can write (3.6) as
\[ \langle\, \nabla J[u] \,;\, v \,\rangle = \int_a^b \nabla J[u]\; v\; dx = \int_a^b v \left[\, \frac{\partial L}{\partial u}(x, u, u') - \frac{d}{dx}\, \frac{\partial L}{\partial p}(x, u, u') \,\right] dx. \]
Since this holds for all variations v(x), we conclude that
\[ \nabla J[u] = \frac{\partial L}{\partial u}(x, u, u') - \frac{d}{dx}\left( \frac{\partial L}{\partial p}(x, u, u') \right). \tag{3.10} \]
This is our explicit formula for the functional gradient or variational derivative of the func-
tional (3.1) with Lagrangian L(x, u, p). Observe that the gradient ∇J[ u ] of a functional
is a function.
\[ E(x, u, u', u'') = \frac{\partial L}{\partial u}(x, u, u') - \frac{\partial^2 L}{\partial x\, \partial p}(x, u, u') - u'\, \frac{\partial^2 L}{\partial u\, \partial p}(x, u, u') - u''\, \frac{\partial^2 L}{\partial p^2}(x, u, u') = 0, \tag{3.12} \]
known as the Euler–Lagrange equation associated with the variational problem (3.1), in
honor of two of the most important contributors to the subject. Any solution to the Euler–
Lagrange equation that is subject to the assumed boundary conditions forms a critical point
for the functional, and hence is a potential candidate for the desired minimizing function.
And, in many cases, the Euler–Lagrange equation suffices to characterize the minimizer
without further ado.
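The passage from a Lagrangian to its functional gradient (3.10), and hence to the Euler–Lagrange equation, is mechanical enough to automate. Here is a short SymPy sketch; the helper euler_lagrange and the placeholder symbols U, P are introduced here purely for illustration. It is applied to the arc length Lagrangian and to the surface-of-revolution Lagrangian treated below.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
U, P = sp.symbols('U P')      # placeholders for u and p = u'

def euler_lagrange(L):
    """Formula (3.10): dL/du - d/dx(dL/dp), evaluated along u = u(x), p = u'(x)."""
    along = {U: u(x), P: u(x).diff(x)}
    return sp.simplify(sp.diff(L, U).subs(along)
                       - sp.diff(sp.diff(L, P).subs(along), x))

# arc length Lagrangian: the gradient is -u''/(1 + u'^2)^(3/2), so u'' = 0
print(euler_lagrange(sp.sqrt(1 + P**2)))

# surface-of-revolution Lagrangian u*sqrt(1 + p^2) (the factor 2*pi is irrelevant);
# this reproduces equation (3.16) below
print(euler_lagrange(U*sp.sqrt(1 + P**2)))
```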
Theorem 3.1. Suppose the Lagrangian function is at least twice continuously dif-
ferentiable: L(x, u, p) ∈ C². Then any C² minimizer u(x) to the corresponding functional
\[ J[u] = \int_a^b L(x, u, u')\; dx, \]
subject to the selected boundary conditions, must satisfy the associated Euler–Lagrange
equation (3.11).
Let us now investigate what the Euler–Lagrange equation tells us about the examples
of variational problems presented at the beginning of this section. One word of caution:
there do exist seemingly reasonable functionals whose minimizers are not, in fact, C2 ,
and hence do not solve the Euler–Lagrange equation in the classical sense; see [2] for
examples. Fortunately, in most variational problems that arise in real-world applications,
such pathologies do not appear.
Let us return to the most elementary problem in the calculus of variations: finding
the curve of shortest length connecting two points a = (a, α), b = (b, β) ∈ R 2 in the
plane. As we noted in Section 3, such planar geodesics minimize the arc length integral
\[ J[u] = \int_a^b \sqrt{1 + (u')^2}\; dx \qquad \text{with Lagrangian} \qquad L(x, u, p) = \sqrt{1 + p^2}, \]
subject to the boundary conditions
\[ u(a) = \alpha, \qquad u(b) = \beta. \]
Since
\[ \frac{\partial L}{\partial u} = 0, \qquad \frac{\partial L}{\partial p} = \frac{p}{\sqrt{1 + p^2}}\,, \]
where we have omitted an irrelevant factor of 2 π and used positivity to delete the absolute
value on u in the integrand. Since
\[ \frac{\partial L}{\partial u} = \sqrt{1 + p^2}\,, \qquad \frac{\partial L}{\partial p} = \frac{u\, p}{\sqrt{1 + p^2}}\,, \]
the Euler–Lagrange equation (3.11) is
\[ \sqrt{1 + (u')^2} - \frac{d}{dx}\, \frac{u\, u'}{\sqrt{1 + (u')^2}} = \frac{1 + (u')^2 - u\, u''}{\bigl(1 + (u')^2\bigr)^{3/2}} = 0. \tag{3.16} \]
Therefore, to find the critical functions, we need to solve a nonlinear second order ordinary
differential equation — and not one in a familiar form.
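In fact, one can verify symbolically that every catenary u(x) = c cosh((x − δ)/c) satisfies (3.16), since the numerator 1 + (u′)² − u u″ then vanishes identically. A short SymPy check:

```python
import sympy as sp

x, c, delta = sp.symbols('x c delta', positive=True)
u = c*sp.cosh((x - delta)/c)                           # candidate catenary profile

residual = 1 + sp.diff(u, x)**2 - u*sp.diff(u, x, 2)   # numerator of (3.16)
print(sp.simplify(residual))                           # prints 0
```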
†
Actually, as with many tricks, this is really an indication that something profound is going on.
Noether’s Theorem, a result of fundamental importance in modern physics that relates symmetries
and conservation laws, [ 7, 12 ], underlies the integration method.
‡
The square root is real since, by (3.17), | u | ≤ | c |.
§
Here “function” must be taken in a very broad sense, as this one does not even correspond
to a generalized function!
\[ 0 = \tfrac{1}{2}\, m\, v^2 - m\, g\, u. \]
We can solve this equation to determine the bead’s speed as a function of its height:
\[ v = \sqrt{2\, g\, u}\,. \tag{3.20} \]
Substituting this expression into (3.19), we conclude that the shape y = u(x) of the wire
is obtained by minimizing the functional
\[ T[u] = \int_0^b \sqrt{\frac{1 + (u')^2}{2\, g\, u}}\; dx, \tag{3.21} \]
where c = 1/k² is a constant. (This can be checked by directly calculating dH/dx ≡ 0.)
Solving for the derivative u′ results in the first order autonomous ordinary differential
equation
\[ \frac{du}{dx} = \sqrt{\frac{c - u}{u}}\,. \]
This equation can be explicitly solved by separation of variables, and so
\[ \int \sqrt{\frac{u}{c - u}}\; du = x + k. \]
The left hand integration relies on the trigonometric substitution
\[ u = \tfrac{1}{2}\, c\, (1 - \cos\theta), \]
whereby
\[ x + k = \int \sqrt{\frac{1 - \cos\theta}{1 + \cos\theta}}\;\; \tfrac{1}{2}\, c\, \sin\theta\; d\theta = \int \tfrac{1}{2}\, c\, (1 - \cos\theta)\; d\theta = \tfrac{1}{2}\, c\, (\theta - \sin\theta). \]
The left hand boundary condition implies k = 0, and so the solutions to the Euler–Lagrange
equation are the curves parametrized by
\[ x(\theta) = \tfrac{1}{2}\, c\, (\theta - \sin\theta), \qquad u(\theta) = \tfrac{1}{2}\, c\, (1 - \cos\theta), \]
which are known as cycloids.
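A numerical comparison illustrates why the cycloid is the right answer. The sketch below, with g, the cycloid parameter c, and the endpoint chosen purely for convenience, evaluates the travel time along the cycloid parametrically using the speed law (3.20), and the travel time (3.21) along the straight chord joining the same endpoints; the cycloid comes out faster.

```python
import numpy as np
from scipy.integrate import quad

g = 9.81
c = 1.0
# cycloid through the origin:  x = (c/2)(theta - sin theta),  u = (c/2)(1 - cos theta);
# for theta in [0, pi] it ends at (b, beta) = (c*pi/2, c)
b, beta = c*np.pi/2.0, c

def time_cycloid():
    """Travel time along the cycloid, integrating dT = ds/v in the parameter theta."""
    def dT(th):
        ds = (c/2.0)*np.sqrt(2.0*(1.0 - np.cos(th)))        # arc length element
        v  = np.sqrt(2.0*g*(c/2.0)*(1.0 - np.cos(th)))      # speed law (3.20)
        return ds/v
    val, _ = quad(dT, 0.0, np.pi)
    return val

def time_straight():
    """Travel time (3.21) along the straight chord u = (beta/b) x."""
    m = beta/b
    val, _ = quad(lambda x: np.sqrt((1.0 + m**2)/(2.0*g*m*x)), 0.0, b)
    return val

print(time_cycloid())    # ~ 0.71 seconds
print(time_straight())   # ~ 0.84 seconds: the straight chord is noticeably slower
```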
\[ A(x)\, v^2 + 2\, B(x)\, v\, v' + C(x)\, (v')^2 > 0 \qquad \text{whenever} \quad a < x < b \quad \text{and} \quad v(x) \not\equiv 0, \tag{4.3} \]
then Q[ u; v ] is also positive definite.
Example 4.1. For the arc length minimization functional (2.3), the Lagrangian is
L(x, u, p) = √(1 + p²). To analyze the second variation, we first compute
\[ \frac{\partial^2 L}{\partial u^2} = 0, \qquad \frac{\partial^2 L}{\partial u\, \partial p} = 0, \qquad \frac{\partial^2 L}{\partial p^2} = \frac{1}{(1 + p^2)^{3/2}}\,. \]
For the critical straight line function
\[ u(x) = \frac{\beta - \alpha}{b - a}\,(x - a) + \alpha, \qquad \text{with} \qquad p = u'(x) = \frac{\beta - \alpha}{b - a}\,, \]
we find
\[ A(x) = \frac{\partial^2 L}{\partial u^2} = 0, \qquad B(x) = \frac{\partial^2 L}{\partial u\, \partial p} = 0, \qquad C(x) = \frac{\partial^2 L}{\partial p^2} = \frac{(b - a)^3}{\bigl[(b - a)^2 + (\beta - \alpha)^2\bigr]^{3/2}} \equiv k. \]
Therefore, the second variation functional (4.1) is
\[ Q[u\,; v] = k \int_a^b (v')^2\; dx, \]
In the second equality, we integrated the middle term by parts, using (v²)′ = 2 v v′, and
noting that the boundary terms vanish owing to our imposed boundary conditions. Since
Q̃[ v ] is positive definite, so is Q[ v ], justifying the previous claim.
To appreciate how subtle this result is, consider the almost identical quadratic func-
tional
\[ \widehat{Q}[v] = \int_0^4 \bigl[\, (v')^2 - v^2 \,\bigr]\; dx, \tag{4.5} \]
the only difference being the upper limit of the integral. A quick computation shows that
the function v(x) = x(4 − x) satisfies the boundary conditions v(0) = 0 = v(4), but
\[ \widehat{Q}[v] = \int_0^4 \bigl[\, (4 - 2x)^2 - x^2 (4 - x)^2 \,\bigr]\; dx = -\,\frac{64}{5} < 0. \]
Therefore, Q̂[ v ] is not positive definite. Our preceding analysis does not apply because
the function tan x becomes singular at x = ½ π, and so the auxiliary integral
\[ \int_0^4 (v' + v\, \tan x)^2\; dx \]
does not converge.
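The value −64/5 quoted above is easy to confirm symbolically:

```python
import sympy as sp

x = sp.symbols('x')
v = x*(4 - x)                    # satisfies v(0) = v(4) = 0
print(sp.integrate(sp.diff(v, x)**2 - v**2, (x, 0, 4)))   # prints -64/5
```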
is positive definite, so Q[ v ] > 0 for all v ≢ 0 satisfying the homogeneous Dirichlet boundary
conditions v(a) = v(b) = 0, provided
(a) C(x) > 0 for all a ≤ x ≤ b, and
(b) For any a < c ≤ b, the only solution to its linear Euler–Lagrange boundary value
problem
\[ -\,(C\, v')' + (A - B')\, v = 0, \qquad v(a) = 0 = v(c), \tag{4.6} \]
is the trivial function v(x) ≡ 0.
Remark : A value c for which (4.6) has a nontrivial solution is known as a conjugate
point to a. Thus, condition (b) can be restated as saying that the variational problem has
no conjugate points in the interval ( a, b ].
Example 4.4. The quadratic functional
\[ Q[v] = \int_0^b \bigl[\, (v')^2 - v^2 \,\bigr]\; dx \tag{4.7} \]
\[ \frac{\partial^2 L}{\partial p^2}(x, u, u') > 0 \tag{4.8} \]
for the minimizer u(x). This is known as the Legendre condition. The second, conjugate
point condition requires that the so-called linear variational equation
\[ -\,\frac{d}{dx}\left[ \frac{\partial^2 L}{\partial p^2}(x, u, u')\, \frac{dv}{dx} \right] + \left[ \frac{\partial^2 L}{\partial u^2}(x, u, u') - \frac{d}{dx}\, \frac{\partial^2 L}{\partial u\, \partial p}(x, u, u') \right] v = 0 \tag{4.9} \]
has no nontrivial solutions v(x) ≢ 0 that satisfy v(a) = 0 and v(c) = 0 for a < c ≤ b. In
this way, we have arrived at a rigorous form of the second derivative test for the simplest
functional in the calculus of variations.
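For the quadratic functional (4.7) of Example 4.4, where A = −1, B = 0, C = 1, the Legendre condition holds trivially, and the boundary value problem (4.6) reduces to v″ + v = 0, v(0) = 0 = v(c). A short SymPy sketch locates the first conjugate point:

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')

# variational equation for (4.7):  v'' + v = 0, normalized so that v'(0) = 1
sol = sp.dsolve(sp.Eq(v(x).diff(x, 2) + v(x), 0), v(x),
                ics={v(0): 0, v(x).diff(x).subs(x, 0): 1})
print(sol)                       # v(x) = sin(x), which next vanishes at x = pi
```

Hence the first conjugate point to 0 is c = π, so (4.7) is positive definite precisely when b < π, consistent with its failure on the interval [ 0, 4 ] exhibited above.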
having the form of a double integral over a prescribed domain Ω ⊂ R 2 . The Lagrangian
L(x, y, u, p, q) is assumed to be a sufficiently smooth function of its five arguments. Our
goal is to find the function(s) u = f (x, y) that minimize the value of J[ u ] when subject
to a set of prescribed boundary conditions on ∂Ω, the most important being our usual
Dirichlet, Neumann, or mixed boundary conditions. For simplicity, we concentrate on the
Dirichlet boundary value problem, and require that the minimizer satisfy
To convert (5.4) into this form, we need to remove the offending derivatives from v. In
two dimensions, the requisite integration by parts formula is based on Green’s Theorem:
\[ \iint_\Omega \left[\, w_1\, \frac{\partial v}{\partial x} + w_2\, \frac{\partial v}{\partial y} \,\right] dx\, dy = \oint_{\partial\Omega} v\, \bigl(-\,w_2\; dx + w_1\; dy\bigr) - \iint_\Omega v \left[\, \frac{\partial w_1}{\partial x} + \frac{\partial w_2}{\partial y} \,\right] dx\, dy, \tag{5.5} \]
in which w₁, w₂ are arbitrary smooth functions. Setting w₁ = ∂L/∂p, w₂ = ∂L/∂q, we find
\[ \iint_\Omega \left[\, \frac{\partial L}{\partial p}\, v_x + \frac{\partial L}{\partial q}\, v_y \,\right] dx\, dy = -\iint_\Omega v \left[\, \frac{\partial}{\partial x}\, \frac{\partial L}{\partial p} + \frac{\partial}{\partial y}\, \frac{\partial L}{\partial q} \,\right] dx\, dy, \]
where the boundary integral vanishes owing to the boundary conditions (5.3) that we
impose on the allowed variations. Substituting this result back into (5.4), we conclude
that
\[ h'(0) = \iint_\Omega v \left[\, \frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\, \frac{\partial L}{\partial p} - \frac{\partial}{\partial y}\, \frac{\partial L}{\partial q} \,\right] dx\, dy = \langle\, \nabla J[u] \,;\, v \,\rangle, \tag{5.6} \]
where
\[ \nabla J[u] = \frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\, \frac{\partial L}{\partial p} - \frac{\partial}{\partial y}\, \frac{\partial L}{\partial q} \]
is the desired first variation or functional gradient. Since the gradient vanishes at a critical
function, we conclude that the minimizer u(x, y) must satisfy the Euler–Lagrange equation
\[ \frac{\partial L}{\partial u}(x, y, u, u_x, u_y) - \frac{\partial}{\partial x}\, \frac{\partial L}{\partial p}(x, y, u, u_x, u_y) - \frac{\partial}{\partial y}\, \frac{\partial L}{\partial q}(x, y, u, u_x, u_y) = 0. \tag{5.7} \]
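For instance, feeding the area Lagrangian L = √(1 + p² + q²) from (2.9) into (5.7) produces the classical minimal surface equation, the nonlinear second order partial differential equation promised in Section 2. A SymPy sketch (the placeholder symbols U, P, Q are introduced only for this computation):

```python
import sympy as sp

x, y, U, P, Q = sp.symbols('x y U P Q')     # placeholders for u, p = u_x, q = u_y
u = sp.Function('u')

L = sp.sqrt(1 + P**2 + Q**2)                # the area Lagrangian from (2.9)

along = {U: u(x, y), P: u(x, y).diff(x), Q: u(x, y).diff(y)}
grad = (sp.diff(L, U).subs(along)
        - sp.diff(sp.diff(L, P).subs(along), x)
        - sp.diff(sp.diff(L, Q).subs(along), y))
print(sp.simplify(grad))
# clearing the denominator (1 + u_x^2 + u_y^2)^(3/2) leaves the minimal surface equation
#   (1 + u_y^2) u_xx - 2 u_x u_y u_xy + (1 + u_x^2) u_yy = 0.
```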
The parameters λ, µ are known as the Lamé moduli of the material, and govern its intrinsic
elastic properties. They are measured by performing suitable experiments on a sample of
the material. Physically, (5.11) represents the stored (or potential) energy in the body
under the prescribed displacement. Nature, as always, seeks the displacement that will
minimize the total energy.
To compute the Euler–Lagrange equations, we consider the functional variation
h(t) = J[ u + t f, v + t g ],
in which the individual variations f, g are arbitrary functions subject only to the given
homogeneous boundary conditions. If u, v minimize J, then h(t) has a minimum at t = 0,
and so we are led to compute
\[ h'(0) = \langle\, \nabla J \,;\, f \,\rangle = \iint_\Omega \bigl(\, f\, \nabla_{\!u} J + g\, \nabla_{\!v} J \,\bigr)\; dx\, dy, \]
We use the integration by parts formula (5.5) to remove the derivatives from the variations
f, g. Discarding the boundary integrals, which are used to prescribe the allowable boundary
conditions, we find
\[ h'(0) = -\iint_\Omega \Bigl( \bigl[\, (\lambda + 2\mu)\, u_{xx} + \mu\, u_{yy} + (\lambda + \mu)\, v_{xy} \,\bigr] f + \bigl[\, (\lambda + \mu)\, u_{xy} + \mu\, v_{xx} + (\lambda + 2\mu)\, v_{yy} \,\bigr] g \Bigr)\; dx\, dy. \]
The two terms in brackets give the two components of the functional gradient. Setting
them equal to zero, we derive the second order linear system of Euler–Lagrange equations
\[ (\lambda + 2\mu)\, u_{xx} + \mu\, u_{yy} + (\lambda + \mu)\, v_{xy} = 0, \qquad (\lambda + \mu)\, u_{xy} + \mu\, v_{xx} + (\lambda + 2\mu)\, v_{yy} = 0, \tag{5.12} \]
known as Navier’s equations, which can be compactly written as
\[ \mu\, \Delta \mathbf{u} + (\mu + \lambda)\, \nabla (\nabla \cdot \mathbf{u}) = 0 \tag{5.13} \]
for the displacement vector u = ( u, v )^T. The solutions to Navier’s equations are the critical
displacements that, under appropriate boundary conditions, minimize the potential energy
functional.
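The equivalence between the componentwise system (5.12) and the compact vector form (5.13) is a routine expansion, which the following SymPy sketch confirms:

```python
import sympy as sp

x, y, lam, mu = sp.symbols('x y lamda mu')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

# vector form (5.13): mu*Laplacian(u) + (mu + lambda)*grad(div u), with u = (u, v)^T
div_u = u.diff(x) + v.diff(y)
vec1 = mu*(u.diff(x, 2) + u.diff(y, 2)) + (mu + lam)*div_u.diff(x)
vec2 = mu*(v.diff(x, 2) + v.diff(y, 2)) + (mu + lam)*div_u.diff(y)

# componentwise form (5.12)
nav1 = (lam + 2*mu)*u.diff(x, 2) + mu*u.diff(y, 2) + (lam + mu)*v.diff(x, y)
nav2 = (lam + mu)*u.diff(x, y) + mu*v.diff(x, 2) + (lam + 2*mu)*v.diff(y, 2)

print(sp.simplify(vec1 - nav1), sp.simplify(vec2 - nav2))   # both print 0
```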
Since we are dealing with a quadratic functional, a more detailed algebraic analy-
sis will demonstrate that the solutions to Navier’s equations are the minimizers for the
variational principle (5.11). Although only valid in a limited range of physical and kine-
matical conditions, the solutions to the planar Navier’s equations and their three-dimensional
counterpart are successfully used to model a wide class of elastic materials.
In general, the solutions to the Euler–Lagrange boundary value problem are critical
functions for the variational problem, and hence include all (smooth) local and global min-
imizers. Determination of which solutions are genuine minima requires a further analysis
of the positivity properties of the second variation, which is beyond the scope of our intro-
ductory treatment. Indeed, a complete analysis of the positive definiteness of the second
variation of multi-dimensional variational problems is quite complicated, and still awaits
a completely satisfactory resolution!
[1] Antman, S.S., Nonlinear Problems of Elasticity, Appl. Math. Sci., vol. 107,
Springer–Verlag, New York, 1995.
[2] Ball, J.M., and Mizel, V.J., One-dimensional variational problems whose minimizers
do not satisfy the Euler–Lagrange equation, Arch. Rat. Mech. Anal. 90 (1985),
325–388.
[3] Born, M., and Wolf, E., Principles of Optics, Fourth Edition, Pergamon Press, New
York, 1970.
[4] Courant, R., and Hilbert, D., Methods of Mathematical Physics, vol. I, Interscience
Publ., New York, 1953.
[5] Dacorogna, B., Introduction to the Calculus of Variations, Imperial College Press,
London, 2004.
[6] do Carmo, M.P., Differential Geometry of Curves and Surfaces, Prentice-Hall,
Englewood Cliffs, N.J., 1976.
[7] Gel’fand, I.M., and Fomin, S.V., Calculus of Variations, Prentice–Hall, Inc.,
Englewood Cliffs, N.J., 1963.
[8] Gurtin, M.E., An Introduction to Continuum Mechanics, Academic Press, New
York, 1981.
[9] Kot, M., A First Course in the Calculus of Variations, American Mathematical
Society, Providence, R.I., 2014.
[10] Morgan, F., Geometric Measure Theory: a Beginner’s Guide, Academic Press, New
York, 2000.
[11] Nitsche, J.C.C., Lectures on Minimal Surfaces, Cambridge University Press,
Cambridge, 1988.
[12] Olver, P.J., Applications of Lie Groups to Differential Equations, 2nd ed., Graduate
Texts in Mathematics, vol. 107, Springer–Verlag, New York, 1993.
[13] Olver, P.J., Introduction to Partial Differential Equations, Undergraduate Texts in
Mathematics, Springer–Verlag, New York, to appear.