Viscosity Solutions
Jeff Calder
University of Minnesota
School of Mathematics
[email protected]
Contents
1 Introduction
1.1 An example
1.2 Motivation via dynamic programming
1.3 Motivation via vanishing viscosity
1.4 Motivation via the maximum principle
2 Definitions
3 A comparison principle
6 Boundary conditions
6.1 Time-dependent Hamilton-Jacobi equations
6.2 The Hopf-Lax Formula
10 Homogenization
Chapter 1
Introduction
These notes are concerned with viscosity solutions for fully nonlinear equa-
tions. A majority of the notes are concerned with Hamilton-Jacobi equations
of the form
H(Du, u, x) = 0.
First order equations generally do not admit classical solutions, due to the
possibility of crossing characteristics. On the other hand, there are infinitely
many Lipschitz continuous functions that satisfy the equation almost every-
where. Since the equation is nonlinear, we cannot define weak solutions via
integration by parts. In this setting, the correct notion of weak solution is the
viscosity solution, discovered by Crandall, Evans and Lions [5, 7]. At a high
level, the notion of viscosity solution selects, from among the infinitely many
Lipschitz continuous solutions, the one that is ‘physically correct’ for a very
wide range of applications.
Viscosity solutions have proven to be extremely useful, and this is largely
because very strong comparison and stability results are available via the max-
imum (or comparison) principle. As we shall see, these results come almost
directly from the definitions. As such, viscosity solutions could easily have
been called “comparison solutions” or “L∞ -stable solutions”. The term “vis-
cosity” comes from the original motivation for the definitions via the method
of vanishing viscosity (see Section 1.3 and Chapter 5). Viscosity solutions
have a wide range of applications, including problems in optimal control the-
ory. Good references for the first order theory are the book by Bardi and
Capuzzo-Dolcetta [1] and Evans [11, Chapter 10].
Since viscosity solutions are defined by, and based upon, the maximum
principle, it is natural that they extend to fully nonlinear second order equa-
tions of the form
F (D2 u, Du, u, x) = 0,
provided F satisfies some form of ellipticity. However, in the early days of the
theory, it was not clear that uniqueness would hold for second order equations,
since the standard proof of uniqueness for first order equations does not directly
extend. The first uniqueness result for second order equations is due to Jensen
[13], and his role in the theory is immortalized in Jensen’s Lemma (see Lemma
12.1), which is a crucial technical tool in the second order theory. Good
references for second order theory include the User’s Guide [6], Crandall’s
introductory paper [4], and the book by Katzourakis [14].
These notes were designed to illustrate the theory and applications of vis-
cosity solutions. They are written in a lecture style and are not meant to be a
thorough reference. We do prove the comparison principle for first and second
order equations in full generality for semi-continuous sub- and supersolutions.
When considering applications, we take simple settings where the main ideas
are present, but the proofs are particularly simple. Almost all of the applica-
tions (e.g., convergence rates, homogenization, etc.) can be stated and proved
in far more generality. However, the ideas in these notes contain the essence
of the key tools for many of these problems.
The organization of these notes is as follows. In Sections 1.1, 1.2, 1.3, and
1.4 we give several different motivational examples leading to the definition of
viscosity solution. In Chapter 2 we give the main definitions of viscosity solu-
tions, and provide a number of interesting exercises. In Chapter 3 we prove the
comparison principle for viscosity solutions of first order equations. In Chap-
ter 4 we discuss the Hamilton-Jacobi-Bellman equation from optimal control
theory in the special case of shortest path problems (i.e., distance functions).
Chapter 5 treats the method of vanishing viscosity, proving convergence via
the weak upper and lower limits, the O(√ε) convergence rate, and a one-
sided O(ε) rate when the solution is semiconcave. In Chapter 6 we briefly
discuss boundary conditions in the viscosity sense. Chapter 7 covers the Per-
ron method for establishing existence of viscosity solutions. In Chapter 8 we
discuss the inf- and sup-convolutions and their role in constructing semiconvex
and semiconcave approximate viscosity sub- and supersolutions. In Chapter 9
we construct convergent finite difference schemes for viscosity solutions, and
we prove O(√h) and one-sided O(h) convergence rates. In Chapter 10 we
give a brief introduction to homogenization, and illustrate the perturbed test
function method. Chapter 11 establishes comparison principles for first or-
der equations with discontinuous coefficients. Finally, in Chapter 12 we prove
the comparison principle for viscosity solutions of second order equations, and
discuss some applications.
While most of the notes address first order Hamilton-Jacobi equations, I
have extended results to second order equations when the proofs are similar.
In particular, Chapter 7 (the Perron method), Chapter 8 (inf- and sup-
convolutions), and Chapter 12 address general second order equations. Let me
also mention that the references are lacking; in future versions of these notes
I plan to extend the bibliography considerably.
1.1 An example
We begin with a simple example. Let Γ be a closed subset of Rn and let
u : Rn → [0, ∞) be the distance function to Γ, defined by
u(x) = dist(x, Γ) := min_{y ∈ Γ} |x − y|.   (1.1)
Exercise 1.1. Verify that u is 1-Lipschitz, that is, |u(x) − u(y)| ≤ |x − y| for
all x, y ∈ Rn .
To see this, fix z ∈ ∂B(x, r) minimizing the right hand side of (1.2). Select
y ∈ Γ such that u(z) = |z − y| and compute
u(x) ≤ |x − y| ≤ |x − z| + |z − y| = r + u(z).
For the other direction, fix y ∈ Γ such that u(x) = |x − y|. Let z ∈ ∂B(x, r)
lie on the line segment between x and y. Then
u(x) = |x − y| = |x − z| + |z − y| ≥ r + u(z).
Setting a = (x − z)/r and sending r → 0+ we deduce
Show that there are infinitely many Lipschitz almost everywhere solutions u
of (1.6).
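This nonuniqueness is easy to see numerically. The following sketch uses an illustrative 1D setting of my choosing (Γ = {0, 1} and the eikonal equation |u′| = 1 with u = 0 on Γ; none of these specifics are fixed by the text): the distance function and a sawtooth with extra teeth both satisfy the equation at every point of differentiability.

```python
import numpy as np

# Illustrative 1D setting (my assumptions): Gamma = {0, 1}, equation |u'| = 1,
# boundary condition u = 0 on Gamma. The distance function u and the sawtooth w
# both satisfy |v'| = 1 away from their kinks, so a.e. solutions are not unique.
x = np.linspace(0.0, 1.0, 1001)
u = np.minimum(x, 1.0 - x)              # distance to {0, 1}
w = np.minimum(u, np.abs(x - 0.5))      # distance to {0, 1/2, 1}: extra teeth

h = x[1] - x[0]
for v in (u, w):
    slopes = np.diff(v) / h             # one-sided slopes on each grid cell
    assert np.allclose(np.abs(slopes), 1.0)   # |v'| = 1 a.e.
    assert v[0] == 0.0 and v[-1] == 0.0       # boundary condition on Gamma
```

Bisecting each tooth again produces yet another Lipschitz a.e. solution, so there are infinitely many; the viscosity solution framework is what singles out the distance function among them.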
for r > 0 sufficiently small. Since φ is smooth, the argument in Section 1.1
can be used to conclude that
A similar argument can be used to show that for every x ∈ U and every
φ ∈ C ∞ (Rn ) such that u − φ has a local minimum at x
Since the highest order term in (1.9) is −ε∆uε , which is uniformly elliptic, we
can in very general settings prove existence and uniqueness of smooth solutions
uε of (1.9) subject to, say, Dirichlet boundary conditions u = g on ∂U . In fact,
we did this for a special case of (1.9) using Schaefer’s Fixed Point Theorem
earlier in the course. As a remark, the additional second order term ε∆uε
is called a viscosity term, since for the Navier-Stokes equations such a term
models the viscosity of the fluid.
H(Dφ(x), u(x), x) ≤ 0.
H(Dφ(x), u(x), x) ≥ 0.
H(Du, x) = 0 in U. (1.10)
H(Dφ, x) > 0 in U,
that is, the maximum principle holds when comparing u against strict supersolutions.
In fact, we can say a bit more. Since we know that Du(x) ≠ Dφ(x)
for all x ∈ U , the maximum of u − φ cannot be attained in U . This implies
that
u ≤ φ on ∂U =⇒ u < φ in U.
The observations above hold equally well for any V ⊂⊂ U . That is, if φ ∈
C ∞ (Rn ) satisfies
H(Dφ, x) > 0 in V, (1.11)
then we have
u ≤ φ on ∂V =⇒ u < φ in V. (1.12)
Similarly, if φ ∈ C ∞ (Rn ) satisfies
then we have
u ≥ φ on ∂V =⇒ u > φ in V. (1.14)
Now suppose we have a continuous function u ∈ C(U ) that satisfies the
maximum (or rather, comparison) principle against smooth strict super and
subsolutions, as above. What can we say about u? Does u solve (1.10) in
any reasonable sense? To answer these questions, we need to formulate what
it means for a continuous function to satisfy the maximum principles stated
above.
For every V ⊂⊂ U we define
S^+(V) := { φ ∈ C^∞(R^n) : H(Dφ, x) > 0 in V }
and
S^−(V) := { φ ∈ C^∞(R^n) : H(Dφ, x) < 0 in V }.
Let u ∈ C(U ). Suppose that for every V ⊂⊂ U , u satisfies (1.12) for all
φ ∈ S + (V ) and (1.14) for all φ ∈ S − (V ). Such a function u could be called
a comparison solution of (1.10), since it is defined precisely to satisfy the
comparison or maximum principle.
We now derive a much simpler property that is satisfied by u. Let ψ ∈
C ∞ (Rn ) and x ∈ U such that u − ψ has a local maximum at x. This means
that for some r > 0
Define
φ(y) := ψ(y) + u(x) − ψ(x).
Then u ≤ φ on the ball B(x, r) and u(x) = φ(x). Therefore, u − φ attains
its maximum over the ball B(x, r) at the interior point x. It follows from our
definition of u that φ ∉ S^+(B^0(x, r)), and hence
H(Dψ(x), x) ≤ 0. (1.17)
That is, for any x ∈ U and ψ ∈ C ∞ (Rn ) such that u − ψ has a local maximum
at x we deduce (1.17). It is left as an exercise to the reader to show that
whenever u − ψ has a local minimum at x we have
H(Dψ(x), x) ≥ 0.
Chapter 2
Definitions
Let us now consider a general second order nonlinear partial differential equa-
tion
H(D2 u, Du, u, x) = 0 in O, (2.1)
where H is continuous and O ⊂ Rn . We recall that a function u : O ⊂ Rn → R
is upper (resp. lower) semicontinuous at x ∈ O provided
lim sup_{O ∋ y → x} u(y) ≤ u(x)   (resp. lim inf_{O ∋ y → x} u(y) ≥ u(x)),
where
lim sup_{O ∋ y → x} u(y) := inf_{r > 0} sup{u(y) : y ∈ O ∩ B(x, r)}
and
lim inf_{O ∋ y → x} u(y) := sup_{r > 0} inf{u(y) : y ∈ O ∩ B(x, r)}.
Let USC(O) (resp. LSC(O)) denote the collection of functions that are up-
per (resp. lower) semicontinuous at all points in O. We make the following
definitions.
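Before the definitions, the two semicontinuity notions can be sanity-checked on a simple example. The step function below is my illustration (not from the text); it is upper semicontinuous at its jump because it takes the larger value there.

```python
import numpy as np

# u(x) = 1 for x >= 0 and u(x) = 0 for x < 0: USC everywhere, but not LSC at x = 0.
def u(x):
    return np.where(np.asarray(x) >= 0, 1.0, 0.0)

# Approximate lim sup / lim inf at x = 0 via sup/inf over shrinking balls B(0, r).
radii = [10.0 ** (-k) for k in range(1, 8)]
limsup0 = min(float(np.max(u(np.linspace(-r, r, 1001)))) for r in radii)
liminf0 = max(float(np.min(u(np.linspace(-r, r, 1001)))) for r in radii)

assert limsup0 <= u(0)           # lim sup <= u(0): upper semicontinuous at 0
assert not (liminf0 >= u(0))     # lim inf < u(0): not lower semicontinuous at 0
```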
Similarly, if
H(D2 φ, Dφ, x) < 0 in U (2.4)
and u ∈ LSC(U ) is a viscosity solution of H ≥ 0 in U then
H(Du, u, x) = 0 in O. (2.6)
Remark 2.5. It is possible that for some x ∈ O, there are no admissible test
functions φ in the definition of viscosity sub- or supersolution. For example, if
n = 1 and u(x) := |x|, there does not exist φ ∈ C ∞ (R) touching u from above
at x = 0 (why?). Of course, it is possible to touch u(x) = |x| from below at
x = 0 (e.g., take φ ≡ 0). A more intricate example is the function
v(x) = x sin(log(|x|)) if x ≠ 0, and v(0) = 0.
H(Du(x), u(x), x) = 0.
Remark 2.8. The set O ⊂ Rn need not be open. In some settings, we may
take O = U ∪ Γ, where U ⊂ Rn is open and Γ ⊂ ∂U . The reader should note
that u − φ is assumed to have a local max or min at x ∈ O with respect to
the set O. This allows a very wide class of test functions when x ∈ ∂O, and
as a consequence, classical solutions on non-open sets need not be viscosity
solutions at boundary points (see Exercise 2.12).
Exercise 2.11. Show that the distance function u defined by (1.1) is the
unique viscosity solution of (1.5) in the setting where Γ = ∂U and U is an
open and bounded set in Rn . [Hint: Use Theorem 2.2 and compare u against
a suitable family of strict super and subsolutions of (1.1).]
Exercise 2.13. Verify that u(x) = −|x| is a viscosity solution of |u′ (x)|−1 = 0
on R, but is not a viscosity solution of −|u′ (x)| + 1 = 0 on R. What is the
viscosity solution of the second PDE?
Exercise 2.13 shows that, roughly speaking, viscosity solutions allow ‘cor-
ners’ or ‘kinks’ in only one direction. Changing the sign of the equation reverses
the orientation of the allowable corners.
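This one-directional kink condition can be probed numerically. In the sketch below, linear test functions φp(x) = px stand in for smooth test functions at the kink of u(x) = −|x| (a simplification I am making; only the slope at the touching point matters to first order).

```python
import numpy as np

# u(x) = -|x| has a concave kink at 0. Linear functions phi_p(x) = p*x act as
# stand-ins for smooth test functions touching u at 0.
x = np.linspace(-1, 1, 2001)
u = -np.abs(x)

def touches_above(p):           # phi_p >= u near 0 with phi_p(0) = u(0)
    return bool(np.all(p * x >= u - 1e-12))

def touches_below(p):
    return bool(np.all(p * x <= u + 1e-12))

ps = np.linspace(-2, 2, 81)
above = [p for p in ps if touches_above(p)]
below = [p for p in ps if touches_below(p)]

# Subsolution test for |u'| - 1 = 0 at 0: |p| - 1 <= 0 for every p touching above.
assert all(abs(p) <= 1 + 1e-9 for p in above)
# Supersolution test is vacuous at 0: nothing touches -|x| from below there.
assert below == []
# For -|u'| + 1 = 0 the subsolution test needs |p| >= 1, and p = 0 violates it:
assert touches_above(0.0) and not (-abs(0.0) + 1 <= 0)
```

The computation reflects the picture: slopes in [−1, 1] touch from above, nothing touches from below, and the subsolution test for −|u′| + 1 = 0 fails at p = 0, which is why changing the sign of the equation reverses the allowable corners.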
Exercise 2.14. Let u : (0, 1) → R be continuous. Show that the following are
equivalent.
(i) u is nondecreasing.
(ii) u′ ≥ 0 on (0, 1) in the viscosity sense.
[Hint: For the hard direction, suppose that u′ ≥ 0 in the viscosity sense on
(0, 1), but u is not nondecreasing on (0, 1). Show that there exists 0 < x1 <
x2 < x3 < 1 such that u(x3 ) < u(x2 ) < u(x1 ). Construct a test function
φ ∈ C ∞ (R) with φ′ < 0 such that φ touches u from below somewhere in the
interval (x1 , x3 ). Drawing a picture might help.]
Construct a test function φ ∈ C ∞ (R) with φ′′ < 0 such that φ touches u from
above somewhere in the interval (x1 , x2 ).]
Exercise 2.16. Let u : Rn → R be Lipschitz continuous. Show that u is a
viscosity solution of |Du| ≤ Lip(u) and −|Du| ≥ −Lip(u) on Rn .
Exercise 2.17. We define the superdifferential of u at x to be
D^+ u(x) := { p ∈ R^n : u(y) ≤ u(x) + p · (y − x) + o(|y − x|) as y → x }.
[Hint: Show that p ∈ D+ u(x) if and only if there exists φ ∈ C 1 (Rn ) such that
Dφ(x) = p and u − φ has a local maximum at x. A similar statement holds
for the subdifferential.]
Exercise 2.18. Let u ∈ USC(Rn ). Show that the set
A := {x ∈ R^n : D^+ u(x) ≠ ∅}
for every x ∈ U . Note this is an asymptotic version of the mean value property.
Show that u is a viscosity solution of
−∆u = 0 in U.
Then verify the viscosity sub- and supersolution properties directly from the
definitions.]
Exercise 2.20.
H(D2 uk , Duk , uk , x) = 0 in U.
H(D2 u, Du, u, x) = 0 in U.
Exercise 2.22. Suppose that p ↦ H(p, x) is convex for any fixed x. Let
u ∈ C^{0,1}_loc(U) satisfy
λu + H(Du, x) ≤ 0 in U.
Give an example to show that the same result does not hold for supersolutions.
[Hint: Mollify u: uε := ηε ∗ u. For V ⊂⊂ U , use Jensen’s inequality to show
that
λuε (x) + H(Duε (x), x) ≤ hε (x) for all x ∈ V
and ε > 0 sufficiently small, where hε → 0 uniformly on V . Then apply an
argument similar to Exercise 2.21.]
Chapter 3
A comparison principle
The utility of viscosity solutions comes from the fact that we can prove exis-
tence and uniqueness under very broad assumptions on the Hamiltonian H.
Uniqueness of viscosity solutions is based on the maximum principle. In this
setting, the maximum principle gives a comparison principle, which states that
subsolutions must lie below supersolutions, provided their boundary conditions
do as well.
As motivation, let us give the formal comparison principle argument for
smooth sub- and super solutions. Let u, v ∈ C 2 (U ) ∩ C(U ) such that
H(D2 u, Du, u, x) < H(D2 v, Dv, v, x)  in U,
u ≤ v  on ∂U.    (3.1)
and
H(X, p, z, x) ≥ H(Y, p, z, x) whenever X ≤ Y. (3.4)
The condition (3.4) is called ellipticity, or sometimes degenerate ellipticity.
The condition (3.3) is the familiar monotonicity we encountered when studying
linear elliptic equations
Lu = − Σ_{i,j=1}^{n} a^{ij} u_{xi xj} + Σ_{i=1}^{n} b^{i} u_{xi} + cu,   (3.5)
This is a strict form of the monotonicity condition (3.3). Then the hy-
potheses of Corollary 3.2 hold with uk = u − 1/(γk). Notice that the mono-
tonicity condition (3.3) allows H to have no dependence on u, whereas
the strict monotonicity condition (3.12) requires such a dependence. A
special case of (3.12) is a Hamilton-Jacobi equation with zeroth order
term
u + H(Du, x) = 0 in U.
ut + H(Du, x) = 0 in U × (0, T ).
uk = εk φ + (1 − εk )u,
where εk = 1/(γk). Note that we can assume that φ ≤ 0, due to the fact
that H has no dependence on u. A special case is the eikonal equation
(1.5), in which case we can take φ ≡ 0.
Exercise 3.3. For each of the cases listed above, verify that uk is a viscosity
solution of H ≤ −1/k in U , uk ≤ u for all k, and uk → u uniformly on U .
The comparison principle from Corollary 3.2 shows that if H satisfies (3.3)
and (3.6), and any one of the conditions listed above holds, then there exists
at most one viscosity solution u ∈ C(U ) of the Dirichlet problem
H(Du, u, x) = 0 in U,   u = g on ∂U.    (3.14)
Hence, if u = v = g on ∂U , then u = v in U .
Exercise 3.4. Show that the following PDEs are degenerate elliptic.
(i) The linear elliptic operator (3.5), provided Σ_{i,j=1}^{n} a^{ij} ηi ηj ≥ 0.
(ii) The Monge-Ampère equation
− det(D2 u) + f = 0,
provided u is convex.
where
a^{ij}(Du) = δij − (u_{xi} u_{xj}) / |Du|^2 .
Verify the ellipticity when Du ≠ 0. (Remark: To handle Du = 0, we
redefine viscosity solutions by taking the upper and lower semicontinuous
envelopes of
F(X, p) = Σ_{i,j=1}^{n} a^{ij}(p) Xij ,
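For the Monge-Ampère example in Exercise 3.4, degenerate ellipticity over convex u comes down to monotonicity of the determinant on the positive semidefinite order: if 0 ≤ X ≤ Y then det(X) ≤ det(Y). A random spot-check of that fact (my illustration, not from the text):

```python
import numpy as np

# If 0 <= X <= Y as symmetric matrices then det(X) <= det(Y), so
# F(X) = -det(X) + f satisfies F(X) >= F(Y) whenever X <= Y: ellipticity (3.4),
# restricted to X >= 0 (the Hessians of convex functions).
rng = np.random.default_rng(0)
for _ in range(500):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    X = A @ A.T                  # symmetric positive semidefinite
    Y = X + B @ B.T              # Y - X >= 0, i.e. X <= Y
    dX, dY = np.linalg.det(X), np.linalg.det(Y)
    assert dX <= dY + 1e-8 * (1.0 + abs(dY))
```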
(b) Explain why uniqueness fails for (H), i.e., which hypothesis from Theorem
3.1 is not satisfied.
(c) Show that u1 (i.e., uλ with λ := 1) is the unique viscosity solution of (H)
that is positive on B^0(0, 1). [Hint: Show that if ũ is another viscosity
solution of (H) that is positive on B^0(0, 1), then w := 2√u and w̃ := 2√ũ
are both viscosity solutions of the eikonal equation in B^0(0, 1).]
Chapter 4
The Hamilton-Jacobi-Bellman
equation
We now aim to generalize the distance function example in Section 1.1. Con-
sider the following calculus of variations problem:
T (x, y) = inf { I[w] : w ∈ C^1([0, 1]; U ), w(0) = x, and w(1) = y },   (4.1)
where
I[w] := ∫_0^1 L(w′(t), w(t)) dt.   (4.2)
Here, U ⊂ Rn is open, bounded, and path connected with a Lipschitz bound-
ary, and x, y ∈ U . We assume that L : Rn × U → R is continuous,
p ↦ L(p, x) is positively 1-homogeneous,   (4.3)
and
L(p, x) > 0 for all p 6= 0, x ∈ U . (4.4)
Recall that positively 1-homogeneous means that L(αp, x) = αL(p, x)
for all α > 0 and x ∈ U .
Let us note a few consequences of these assumptions. First, the 1-homogeneity
requires that L(0, x) = 0 for all x. Since L is continuous on the compact set
{|p| = 1} × U , and L(p, x) > 0 for p 6= 0, we have
γ := inf { L(p, x) : |p| = 1, x ∈ U } > 0.
where ℓ(w) denotes the length of w. Hence, curves that minimize, or nearly
minimize I must have bounded length.
Instead of looking for minimizing curves w via the Euler-Lagrange equa-
tions, we consider the value function
holds. The compatibility condition ensures that u assumes its boundary values
u = g on ∂U continuously.
Proposition 4.1. For any x, y ∈ U such that the line segment between x and
y belongs to U we have
T (x, y) ≤ K|x − y|, (4.9)
where K = sup{ L(p, x) : x ∈ U , |p| = 1 }.
Proof. Let ε > 0. For i = 1, 2, let wi ∈ C 1 ([0, 1]; U ) such that w1 (0) = x,
w1 (1) = y, w2 (0) = y, w2 (1) = z and
Define
w(t) = w1 (2t) if 0 ≤ t ≤ 1/2, and w(t) = w2 (2t − 1) if 1/2 < t ≤ 1.
Note we can reparameterize w, if necessary, so that w ∈ C 1 ([0, 1]; U ), and we
easily compute that I[w1 ] + I[w2 ] = I[w]. Since w(0) = x and w(1) = z we
have
T (x, z) ≤ I[w] ≤ T (x, y) + T (y, z) + ε.
Sending ε → 0 completes the proof.
We now establish the important dynamic programming principle for the
value function u.
Lemma 4.3. For every B(x, r) ⊂ U we have
u(x) = inf_{y ∈ ∂B(x,r)} { u(y) + T (x, y) }.   (4.11)
Proof. Fix x ∈ U with B(x, r) ⊂ U , and let v(x) denote the right hand side
of (4.11).
We first show that u(x) ≥ v(x). Let ε > 0. Then there exist z ∈ ∂U and
w ∈ C 1 ([0, 1]; U ) such that w(0) = x, w(1) = z and
g(z) + I[w] ≤ u(x) + ε. (4.12)
Let y ∈ ∂B(x, r) and s ∈ (0, 1) such that w(s) = y. Define
w1 (t) = w(st) and w2 (t) = w(s + t(1 − s)).
Then we have I[w1 ]+I[w2 ] = I[w]. Furthermore, w1 (0) = x, w1 (1) = w2 (0) =
y and w2 (1) = z. Combining these observations with (4.12) and (4.1) we have
u(x) + ε ≥ g(z) + I[w1 ] + I[w2 ] ≥ u(y) + T (x, y) ≥ v(x).
Since ε > 0 is arbitrary, u(x) ≥ v(x).
To show that u(x) ≤ v(x), note that by Lemma 4.2 we have
g(z) + T (x, z) ≤ g(z) + T (y, z) + T (x, y),
for any y ∈ U and z ∈ ∂U . Therefore
u(x) = inf_{z ∈ ∂U} { g(z) + T (x, z) }
     ≤ inf_{z ∈ ∂U} { g(z) + T (y, z) } + T (x, y)
     = u(y) + T (x, y),   (4.13)
for any y ∈ U , and the result easily follows.
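The dynamic programming principle has an exact discrete analogue on a graph, which may help build intuition for (4.11). The grid graph, uniform edge cost h, and Dijkstra's algorithm below are illustrative choices of mine, not the text's construction.

```python
import heapq
import numpy as np

# u = graph distance to the boundary of a square, on a 4-neighbor grid graph
# with edge cost h, computed by Dijkstra's algorithm with lazy deletion.
n, h = 51, 1.0 / 50
u = np.full((n, n), float("inf"))
pq = []
for i in range(n):
    for j in range(n):
        if i in (0, n - 1) or j in (0, n - 1):   # boundary nodes: u = 0
            u[i, j] = 0.0
            heapq.heappush(pq, (0.0, i, j))
while pq:
    d, i, j = heapq.heappop(pq)
    if d > u[i, j]:
        continue                                  # stale entry
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < n and d + h < u[a, b]:
            u[a, b] = d + h
            heapq.heappush(pq, (u[a, b], a, b))

# Discrete DPP at an interior node: u(x) = min over neighbors y of { u(y) + h }.
i, j = 25, 10
nbrs = [u[i + 1, j], u[i - 1, j], u[i, j + 1], u[i, j - 1]]
assert abs(u[i, j] - (min(nbrs) + h)) < 1e-12
```

The identity checked at the end is the discrete counterpart of u(x) = inf_{y∈∂B(x,r)}{u(y) + T(x, y)} with r = h.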
Proof. Let x, y ∈ U such that the line segment between x and y is contained
in U . By (4.13) and Proposition 4.1 we have
u(x) ≤ u(y) + T (x, y) ≤ u(y) + K|x − y|.
Therefore u is Lipschitz on any convex subset of U , and hence u is locally
Lipschitz.
To show that u assumes the boundary values g, we need to use the compat-
ibility condition (4.8) and the Lipschitzness of the boundary ∂U . Fix x0 ∈ ∂U .
Up to orthogonal transformation, we may assume that x0 = 0 and
U ∩ B(0, r) = { x ∈ B(0, r) : xn ≥ h(x̃) },
for r > 0 sufficiently small, where x = (x̃, xn) ∈ R^n, h : R^{n−1} → R is Lipschitz
continuous, and h(0) = 0. Let x ∈ U ∩ B(0, r) and define
y = (x1 , . . . , xn−1 , Lip(h)|x̃|).
Then |x − y| ≤ C|x| and yn = Lip(h)|ỹ|. It follows that the line segment
from y to 0 is contained in U , as well as the segment from x to y. In light of
Proposition 4.1 and Lemma 4.2 we have
u(x) = inf_{z ∈ ∂U} { g(z) + T (x, z) }
     ≤ g(0) + T (x, 0)
     ≤ g(0) + T (x, y) + T (y, 0)
     ≤ g(0) + C|x − y| + C|y|
     ≤ g(0) + C|x|.
Now let ε > 0 and z ∈ ∂U such that
u(x) + ε ≥ g(z) + T (x, z).
Invoking the compatibility condition (4.8) we have
u(x) + ε ≥ g(0) − T (0, z) + T (x, z)
≥ g(0) − T (0, x) ≥ g(0) − C|x|.
Therefore |u(x) − g(0)| ≤ C|x| for all x ∈ U ∩ B(0, r), and the result immedi-
ately follows.
We compute
Therefore we have
0 = inf_{y ∈ ∂B(x,r)} { u(y) − u(x) + T (x, y) } ≤ inf_{y ∈ ∂B(x,r)} { φ(y) − φ(x) + T (x, y) }.
Let 0 < r < r0 . By the dynamic programming principle (4.11) there exist
y ∈ ∂B(x, r) and w ∈ C 1 ([0, 1]; U ) with w(0) = x and w(1) = y such that
u(x) ≥ u(y) + I[w] − (θ/4) r.   (4.18)
By (4.6) and Lemma 4.4 we have
γ ℓ(w) ≤ I[w] ≤ u(x) − u(y) + (θ/4) r ≤ Cr.
Fix 0 < r < r0 small enough so that ℓ(w) < r0 . Then w(t) ∈ B 0 (x, r0 ) for all
t ∈ [0, 1]. We can now invoke (4.17) to find that
u(y) − u(x) ≥ φ(y) − φ(x)
            = ∫_0^1 (d/dt) φ(w(t)) dt
            = ∫_0^1 Dφ(w(t)) · w′(t) dt
            ≥ (θ/2) ∫_0^1 |w′(t)| dt − ∫_0^1 L(w′(t), w(t)) dt     (by (4.17))
            ≥ (θ/2) r − ∫_0^1 L(w′(t), w(t)) dt.
where
T (x, y) = inf { I[w] : w ∈ C^1([0, 1]; U ), w(0) = x, w(1) = y },
I[w] = ∫_0^1 f (w(t)) |w′(t)|^q dt,
and q is the Hölder conjugate of p, i.e., 1/p + 1/q = 1.]
Chapter 5
Convergence of vanishing
viscosity
and nonnegativity
−H(0, x) ≥ 0 for all x ∈ U. (5.4)
The reason we call (5.4) nonnegativity is that when H(p, x) = G(p) − f (x) and
G(0) ≥ 0, (5.4) implies that f ≥ 0.
The main structural condition we place on U is the following exterior sphere
condition: There exists r > 0 such that for every x0 ∈ ∂U there is a point
x∗0 ∈ Rn \ U for which
B(x∗0 , r) ∩ U = {x0 }. (5.5)
Throughout this section we assume that U ⊂ Rn is open, bounded, and
satisfies the exterior sphere condition, and H satisfies (3.6) and is continuous,
Proof. The argument is based on the maximum principle. Due to the com-
pactness of U , uε must attain its minimum value at some x0 ∈ U . If x0 ∈ ∂U
then uε(x0) = 0. If x0 ∈ U then Duε(x0) = 0 and ∆uε(x0) ≥ 0. Therefore
uε(x0) = ε∆uε(x0) − H(0, x0) ≥ −H(0, x0) ≥ 0,
by (5.4). Therefore uε ≥ 0 throughout U .
Definition 5.2. Let {uε }ε>0 be a family of real-valued functions on U .
The upper weak limit ū : U → R of the family {uε }ε>0 is defined by
ū(x) := lim sup_{ε→0+, U ∋ y→x} uε(y),
and the lower weak limit u̲ is defined analogously with lim inf.
The upper and lower weak limits are fundamental objects in the theory of
viscosity solutions and allow passage to the limit in a wide variety of applica-
tions.
Lemma 5.3. Suppose the family {uε }ε>0 is uniformly bounded. Then ū ∈
USC(U ) and u̲ ∈ LSC(U ).
Proof. By the uniform boundedness assumption, ū and u̲ are bounded real-
valued functions on U . We will show that ū ∈ USC(U ); the proof that u̲ ∈
LSC(U ) is very similar.
We assume by way of contradiction that xk → x and ū(xk ) ≥ ū(x) + δ for
some δ > 0 and all k large enough, where xk , x ∈ U . By the definition of ū,
for each k there exist yk and εk such that |xk − yk | < 1/k, εk < 1/k and
uεk (yk ) ≥ ū(xk ) − δ/2 ≥ ū(x) + δ/2
for sufficiently large k. Therefore
lim inf_{k→∞} uεk (yk ) > ū(x),
It follows that
ū(x) + H(Dφ(x), x) = lim_{k→∞} [ uεk (xk ) + H(Dφ(xk ), xk ) − εk ∆φ(xk ) ]
                   ≤ lim_{k→∞} [ uεk (xk ) + H(Duεk (xk ), xk ) − εk ∆uεk (xk ) ] = 0.
Theorem 5.8. For each ε > 0, let uε ∈ C 2 (U ) ∩ C(U ) solve (5.1), and let u
be the unique viscosity solution of (5.2). Then there exists C depending only
on H such that
|u − uε | ≤ C√ε.
Proof. We first show that u − uε ≤ C√ε. Define
Φ(x, y) = u(x) − uε (y) − (α/2)|x − y|^2 ,
max_{U ×U} Φ = Φ(xα , yα ).
|xα − yα | ≤ C/α.   (5.12)
We claim that
u(xα ) − uε (yα ) ≤ C( 1/α + αε ).   (5.13)
To see this: If xα ∈ ∂U then
u(xα ) − uε (yα ) ≤ 0,
while if yα ∈ ∂U then
u(xα ) − uε (yα ) ≤ u(xα ) − u(yα ) ≤ C|xα − yα | ≤ C/α.
If (xα , yα ) ∈ U × U then x ↦ u(x) − (α/2)|x − yα |^2 has a maximum at xα and
hence
u(xα ) + H(pα , xα ) ≤ 0,   (5.14)
where pα = α(xα − yα ). Similarly, y ↦ uε (y) + (α/2)|xα − y|^2 has a minimum at
yα and hence Duε (yα ) = pα and −∆uε (yα ) ≤ αn. Therefore
u(xα ) − uε (yα ) ≤ H(pα , yα ) − H(pα , xα ) + αnε ≤ C/α + αnε,
due to (5.11), (5.12) and the inequality |pα | = α|xα −yα | ≤ C. This establishes
the claim.
By (5.13) and the definition of Φ,
max_U (u − uε ) ≤ Φ(xα , yα ) ≤ u(xα ) − uε (yα ) ≤ C( 1/α + αε ).
Selecting α = 1/√ε completes the proof.
The proof that uε − u ≤ C√ε is similar, and is left to Exercise 5.9.
u + G(Du) = f in Rn .
Since u is not generally smooth, these arguments are only a heuristic. The
following theorem makes the arguments rigorous in the viscosity sense.
Theorem 5.11. Assume p ↦ G(p) is convex, G(0) = 0, and f ∈ Cc2 (Rn ). Let
u ∈ C(Rn ) be a compactly supported viscosity solution of
u + G(Du) = f in Rn . (5.16)
− max_{|ξ|=1} uξξ ≥ −c in Rn .
max_{Rn ×Rn ×Rn} Φ = Φ(xα , yα , zα ).
It follows that
yα → y0 , xα − yα → h0 , and yα − zα → h0 ,
as α → ∞. Therefore
Φ(xα , yα , zα ) ≥ Φ(y0 +h0 , y0 , y0 −h0 ) = u(y0 +h0 )−2u(y0 )+u(y0 −h0 )−c|h0 |2 ,
and so we deduce
Therefore
α|xα − 2yα + zα |2 → 0 as α → ∞.
Passing to limits in (5.20) we have
for all y, h ∈ Rn . Now let φ ∈ C ∞ (Rn ) such that u − φ has a local minimum
at y ∈ Rn . Then
Therefore
for small |h|. It follows that φξξ (y) ≤ c for all ξ ∈ Rn with |ξ| = 1, and so
D2 φ(y) ≤ cI.
The second derivative estimate from Theorem 5.11 allows us to prove a
better one-sided rate in the method of vanishing viscosity.
uε − u ≤ Cε.
Proof. Define
v(x) = u(x) if x ∈ U , and v(x) = 0 otherwise.
v + G(Dv) = f in Rn .
Exercise 5.14.
H(Du, u, x) = 0 in U.
H((Φ′ ◦ v)Dv, Φ ◦ v, x) = 0 in U,
where Φ := Ψ^{−1} .
H(Du) = f in U,
We say that u satisfies the boundary condition from (3.14) in the strong sense
provided u = g on ∂U . This is the usual sense, and is how we have been inter-
preting boundary conditions thus far. However, depending on the geometry
of the projected characteristics, the Dirichlet problem (3.14) with boundary
conditions in the strong sense is in general overdetermined. For example, the
solution u of
ux1 + ux2 = 0 in B(0, 1) ⊂ R2
is constant along the projected characteristics
Find explicitly the solution uε and sketch its graph. Show that uε (x) → x
pointwise on [0, 1) as ε → 0.
The previous exercise suggests that u(x) = x should be the viscosity solu-
tion of
u′ (x) = 1, u(0) = u(1) = 0,
even though u(1) 6= 0. The issue is that the problem above is overdetermined,
so we lose one of the boundary conditions in the vanishing viscosity limit. The
same thing happens in a more complicated manner in higher dimensions.
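For concreteness, here is a numerical version of the previous exercise. I am assuming the viscous problem is u′ε − εu″ε = 1 on (0, 1) with uε(0) = uε(1) = 0 (the exercise statement is abbreviated above); the candidate solution uε(x) = x − (e^{x/ε} − 1)/(e^{1/ε} − 1) is checked by residual below and exhibits the boundary layer at x = 1.

```python
import numpy as np

# Assumed viscous problem: u' - eps*u'' = 1 on (0,1), u(0) = u(1) = 0.
eps = 0.05
x = np.linspace(0.0, 1.0, 20001)
u_eps = x - np.expm1(x / eps) / np.expm1(1.0 / eps)   # expm1 = exp(.) - 1

h = x[1] - x[0]
up = (u_eps[2:] - u_eps[:-2]) / (2 * h)
upp = (u_eps[2:] - 2 * u_eps[1:-1] + u_eps[:-2]) / h**2
res = up - eps * upp - 1.0

assert abs(u_eps[0]) < 1e-12 and abs(u_eps[-1]) < 1e-12   # both boundary values hold
assert np.max(np.abs(res)) < 1e-3                          # PDE residual is small
# Away from x = 1, u_eps is already close to the limit u(x) = x ...
mask = x <= 0.8
assert np.max(np.abs(u_eps[mask] - x[mask])) < 0.02
# ... but the boundary condition at x = 1 is lost in the limit:
assert abs(u_eps[-1] - 1.0) > 0.9
```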
In order to make sense of this, we should consider carefully how boundary
conditions behave in the vanishing viscosity limit. Let uε be a smooth solution
of
H(Duε , uε , x) − ε∆uε = 0 in U, (6.1)
and assume that uε ≤ g on ∂U , where g : ∂U → R is continuous. Exercise
6.1 shows that we cannot expect uε to converge uniformly on U . Instead, let
us consider the weak upper limit
H(Dφ(x), u(x), x) ≤ 0.
We can make the same argument with the weak lower limit u to find that
when u − φ has a local minimum at x ∈ ∂U we have
provided uε ≥ g on ∂U .
This motivates the following definitions.
The proof is very similar to that of Theorem 3.1, so we will briefly outline
the details. The main difficulty is to ensure that the auxiliary function assumes
its maximum on the unbounded domain U . We also remark that the theorem
holds when U = Rn and Γ1 = Γ2 = ∅.
Therefore, there exists Λ > 0 such that for all 0 < λ < Λ, uλ is a viscosity
solution of
H(Duλ , uλ , x) ≤ −ε/2 in U ∪ Γ1 .   (6.3)
We will prove that uλ ≤ v on U for all 0 < λ < Λ. To see this, assume to
the contrary that supU (uλ − v) > 0. For α > 0 define the auxiliary function
Φ(x, y) = uλ (x) − v(y) − (α/2)|x − y|^2 .   (6.4)
(1 + |pα |)|xα − yα | → 0 as α → ∞.
(a) Show that there is at most one viscosity solution u ∈ C(Rn ) of (H) satis-
fying the boundary condition at infinity
lim_{|x|→∞} u(x) = ∞.   (6.6)
[Hint: Theorem 6.4 does not apply, since u and v are unbounded. To prove
uniqueness, let u, v ∈ C(Rn ) be two viscosity solutions of (H) satisfying
(6.6). Let Ψ : R → R be a smooth function satisfying
Ψ(s) = s, if s ≤ 1,
Ψ(s) ≤ 2, for all s ∈ R,
0 < Ψ′(s) ≤ 1, for all s ∈ R.
For R > 1 define
w(x) := (R − 1) Ψ(R^{−1} u(x)).
Show that w ≤ 2R is a viscosity solution of
|Dw| + 1/R ≤ 1 in Rn \ Γ.
Use the doubling of the variables argument to show that w ≤ v on Rn \ Γ.
Complete the argument from here.]
(b) Show that the solution is not unique without (6.6).
U ∪ Γ1 = Rn × (0, T ].
and hence
φt (xk , tk ) + εk /(T − tk )^2 + H(Dφ(xk , tk ), xk ) ≤ 0.
Letting k → ∞ we find that
φt (x0 , T ) + H(Dφ(x0 , T ), x0 ) ≤ 0.
wt + H(Dw, x) = 0 in Rn × (0, T ).
Then
sup_{Rn ×[0,T ]} |u − v| ≤ sup_{x∈Rn} |u(x, 0) − v(x, 0)|.
lim_{|p|→∞} H(p)/|p| = ∞,
where
L(v) = sup_{p∈Rn} { p · v − H(p) }
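To see the Hopf-Lax formula in action, here is a small sketch. The choices H(p) = p^2/2 (so L(v) = v^2/2 and tL((x − y)/t) = (x − y)^2/(2t)) and initial data g(x) = |x| are illustrative assumptions of mine, and the closed form used for comparison is a standard hand computation for this data.

```python
import numpy as np

# Hopf-Lax: u(x,t) = min_y { t*L((x - y)/t) + g(y) }, with H(p) = p^2/2,
# L(v) = v^2/2, and g(y) = |y| (illustrative choices).
y = np.linspace(-5.0, 5.0, 20001)
g = np.abs(y)

def hopf_lax(x, t):
    return float(np.min(t * ((x - y) / t) ** 2 / 2 + g))

# For this data one checks by hand: u(x,t) = x^2/(2t) if |x| <= t, else |x| - t/2.
for x0, t0 in [(0.3, 1.0), (2.0, 1.0), (-1.5, 0.5), (0.0, 2.0)]:
    exact = x0**2 / (2 * t0) if abs(x0) <= t0 else abs(x0) - t0 / 2
    assert abs(hopf_lax(x0, t0) - exact) < 1e-6
```

Note how the kink of g at 0 immediately spreads into the smooth parabolic profile x^2/(2t): the Hopf-Lax solution regularizes corners of the allowed orientation.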
and
φ̃(x, t) := φ(x, t) + (C/r^2)( |x − x0 |^2 + |t − t0 |^2 ).
Then φ̃(x0 , t0 ) = u(x0 , t0 ), φ̃ ≥ u for (x, t) ∈ B((x0 , t0 ), r), and
for all 0 < t < t0 . The reader should notice the similarity with the proof of
Theorem 4.6. Since φ is smooth, the same arguments that showed that u is a
Lipschitz almost everywhere solution (see [11, Section 3.3]) prove that
φt + H(Dφ) ≤ 0 at (x0 , t0 ).
and
u(x) := sup{v(x) : v ∈ F}. (7.2)
The function u is a natural candidate for a viscosity solution of
(7.1).
We now establish two lemmas that are fundamental to the Perron method.
Lemma 7.1. Suppose F is nonempty. Then the upper semicontinuous function
u∗ is a viscosity subsolution of (7.1).
u∗ (x0 ) = φ(x0 ) and u∗ (x) − φ(x) ≤ −|x − x0 |4 for x ∈ B(x0 , r). (7.3)
We may assume that φ(x0 ) = u∗ (x0 ). If φ(x0 ) = w(x0 ) then w − φ has a local
minimum at x0 , which contradicts (7.4), as w is a supersolution. Therefore
φ(x0 ) < w(x0 ). Hence there exists ε > 0 and a ball B(x0 , r) ⊂ U such that
φ ≤ u∗ and φ + ε ≤ w on B(x0 , r) and
H(D2 φ(x), Dφ(x), φ(x), x) + ε ≤ 0 for x ∈ B(x0 , r). (7.5)
Set
ψ(x) := φ(x) + δ r^4/2^4 − |x − x0 |^4 ,
and choose δ > 0 small enough so that ψ ≤ w on B(x0 , r) and
H(D2 ψ(x), Dψ(x), ψ(x), x) ≤ 0 for x ∈ B(x0 , r).
Define
v(x) := max{u(x), ψ(x)} if x ∈ B(x0 , r), and v(x) := u(x) otherwise.
Since u and ψ are subsolutions of H = 0 in B(x0 , r), v is a subsolution in
B(x0 , r). Furthermore, since
ψ(x) ≤ φ(x) ≤ u(x) for x ∈ B(x0 , r) \ B(x0 , r/2),
we have u = v on the annulus B(x0 , r) \ B(x0 , r/2). Therefore v is a subsolution
of (7.1) and v ≤ w on U . Therefore v ∈ F .
By definition of the lower semicontinuous envelope u∗ , there exists a se-
quence xk → x0 such that u(xk ) → u∗ (x0 ). Since v ≥ ψ on B(x0 , r), we
have
lim inf_{k→∞} v(xk ) ≥ lim_{k→∞} ψ(xk ) = u∗ (x0 ) + δ′,
where δ′ = δ r^4/2^4 . Therefore, for k large enough
v(xk ) ≥ u(xk ) + δ′/2,
so that v(xk ) > u(xk ), contradicting the definition (7.2) of u. This completes
the proof.
The remaining ingredient for Perron’s method is a comparison principle for
(7.1). Let us illustrate the technique on the time-dependent Hamilton-Jacobi
equation (6.7). As usual, we assume H is continuous and satisfies (3.3), (3.6),
and (6.2).
Theorem 7.3. Let g : Rn → R be bounded and Lipschitz continuous, and
suppose that
K := sup{ |H(p, x)| : |p| ≤ Lip(g) and x ∈ Rn } < ∞.
Then for every T > 0 there exists a unique bounded viscosity solution u ∈
C(Rn × [0, T ]) of (6.7).
Proof. Define
w(x, t) := g(x) + Kt.
If φ ∈ C ∞ (Rn × R) and w − φ has a local minimum at (x0 , t0 ) ∈ Rn × (0, T ),
then |Dφ(x0 , t0 )| ≤ Lip(g) and φt (x0 , t0 ) = K. Therefore
φt (x0 , t0 ) + H(Dφ(x0 , t0 ), x0 ) ≥ K − K = 0.
and
u(x, t) := sup{v(x, t) : v ∈ F}.
We can verify, as before, that w̃(x, t) := g(x) − Kt is a subsolution of (6.7).
Therefore F is nonempty. Since u ≤ w and w is continuous, u∗ ≤ w and so
u∗ (x, 0) ≤ w(x, 0) = g(x). By Lemma 7.1, u∗ is a viscosity subsolution of
(6.7). Therefore u∗ ∈ F , and so u = u∗ .
Since w̃ ≤ w, w̃ ∈ F, and hence u ≥ w̃. Since w̃ is continuous, u_*(x, 0) ≥ w̃(x, 0) = g(x). By Lemma 7.2, u_* is a viscosity supersolution of (6.7). Since u^*(x, 0) = u_*(x, 0) = g(x), we can use the comparison principle (Theorem 6.6) to show that u^* ≤ u_* on R^n × [0, T]. Since the opposite inequality is true by definition, we have u_* = u^* = u. Therefore u ∈ C(R^n × [0, T]) is a bounded viscosity solution of (6.7). Uniqueness follows from Theorem 6.6.
u + H(Du, x) = 0 in Rn .
Exercise 8.1. Recall from Exercise 2.13 that u(x) = 1 − |x| is a viscosity
solution of |u′ (x)| − 1 = 0. In fact, this is the unique solution with boundary
conditions u(−1) = 0 = u(1) on the interval (−1, 1) (why?). Show that there
does not exist a sequence uk ∈ C 1 ([−1, 1]) such that uk → u and |u′k | → 1
uniformly as k → ∞. This shows that it is impossible, in general, to uniformly
approximate a viscosity solution by a classical solution.
68 CHAPTER 8. SMOOTHING VISCOSITY SOLUTIONS
convolution of u, denoted u^ε, to be
u^ε(x) := sup_{y∈U} ( u(y) − (1/(2ε))|x − y|² ).    (8.1)
The inf- and sup-convolutions are tools that originally appeared in con-
vex analysis—the inf-convolution is called the Moreau envelop in optimiza-
tion [3]—and have been appropriated in the viscosity solution literature due
to their useful approximation properties. As we show below, the inf- and
sup-convolutions of a viscosity solution u are nearly C 2 functions, and are
approximate viscosity super- and subsolutions, respectively.
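These approximation properties are easy to observe numerically. The sketch below (plain Python, written for these notes, using the hypothetical test function u(x) = −|x|) computes the sup-convolution by brute force on a grid and checks that u ≤ u^ε and that the second differences of u^ε are bounded below by −1/ε, the discrete version of semiconvexity.

```python
# Brute-force discrete sup-convolution on a 1D grid:
#   u^eps(x) = max_y ( u(y) - |x - y|^2 / (2 eps) ).
# The test function u(x) = -|x| has a concave kink at 0, which the
# sup-convolution replaces by a parabola with curvature -1/eps.

def sup_convolution(xs, u, eps):
    return [max(uy - (x - y) ** 2 / (2 * eps) for y, uy in zip(xs, u))
            for x in xs]

n, eps, h = 201, 0.1, 0.01          # grid spacing h divides eps
xs = [-1.0 + h * i for i in range(n)]
u = [-abs(x) for x in xs]
ue = sup_convolution(xs, u, eps)

# property (i): u <= u^eps everywhere
above = all(a >= b - 1e-12 for a, b in zip(ue, u))

# semiconvexity: second differences of u^eps are bounded below by -1/eps
min_second_diff = min((ue[i + 1] - 2 * ue[i] + ue[i - 1]) / h ** 2
                      for i in range(1, n - 1))
```

Near the kink the minimal second difference is close to −1/ε = −10, showing the lower curvature bound is sharp.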
We first establish some basic properties of inf- and sup-convolutions
(i) we have uε ≤ u ≤ uε ,
and recall that the supremum of a family of affine functions is convex. The proof that u_ε − (1/(2ε))|x|² is concave is similar.
For (iii), since
u^ε(x) = u(y^ε) − (1/(2ε))|x − y^ε|²
and u^ε(x) ≥ u(x), we have
(1/(2ε))|x − y^ε|² = u(y^ε) − u^ε(x) ≤ u(y^ε) − u(x) ≤ 2‖u‖_{L∞(U)}.
The proof of (iv) is similar.
For (v), by the Alexandrov theorem any convex function is twice differentiable almost everywhere. Thus, it follows from (ii) that u^ε + (1/(2ε))|x|² and u_ε − (1/(2ε))|x|² are twice differentiable almost everywhere in U, and hence so are u_ε and u^ε.
To prove the Lipschitz estimate (8.3), let x, y ∈ U and δ > 0. Let y^ε ∈ U such that
u^ε(x) ≤ u(y^ε) − (1/(2ε))|x − y^ε|² + δ.
Then we have
|x − y^ε|² ≤ 2( 2‖u‖_{L∞(U)} + δ )ε.    (8.4)
Since u^ε(y) ≥ u(y^ε) − (1/(2ε))|y − y^ε|², we have
u^ε(x) − u^ε(y) ≤ (1/(2ε))( |y − y^ε|² − |x − y^ε|² ) + δ
 ≤ (1/(2ε))( (|x − y| + |x − y^ε|)² − |x − y^ε|² ) + δ
 = (1/(2ε))( |x − y|² + 2|x − y||x − y^ε| ) + δ
 = (1/(2ε))( |x − y| + 2|x − y^ε| ) |x − y| + δ.
(iv) u_ε, u^ε ∈ C^{0,α}(U) and [u_ε]_{0,α;U}, [u^ε]_{0,α;U} ≤ C[u]_{0,α;U}, with C independent of ε > 0.
Proof. We first prove uniform convergence. Let
y^ε ∈ argmax_{y∈U} ( u(y) − (1/(2ε))|x − y|² ),
where f : U → R.
Corollary 8.9. Let U ⊂ Rn be open and bounded and suppose f ∈ C(U ) with
modulus of continuity ω. If u ∈ USC(U ) is a bounded viscosity subsolution of
(8.16) then the sup-convolution uε is a viscosity solution of
Proof. Let x0 ∈ M^ε(u) and let φ ∈ C^∞(R^n) such that u^ε − φ has a local maximum at x0. As in the proof of Theorem 8.7, we define ψ(y) := φ(y + x0 − y0), where y0 ∈ U is a point for which
u^ε(x0) = u(y0) − (1/(2ε))|x0 − y0|².
Then u − ψ has a local maximum at y0, and so
H(Du, u, x) = 0 in U. (8.19)
Theorem 8.11. Let U ⊂ R^n be open and bounded, and assume H ∈ C^{0,1}_{loc}(R^n × R × U). If u ∈ C^{0,1}(U) is a viscosity subsolution of (8.19), then the sup-convolution u^ε is a viscosity solution of
Proof. Since u ∈ C^{0,1}(U), there exists K > 0 such that ‖u‖_{L∞(U)} ≤ K, and |Du| ≤ K in U in the viscosity sense (see Exercise 2.16). Since H is locally Lipschitz, there exists C > 0 such that
H(Dψ(y0), u(y0), y0) ≤ 0.
u^δ(x) := sup_{y∈U} ( u(y) − (1/(2δ))|x − y|² ).
Then u^δ is semiconvex with constant 1/δ, i.e., −D²u^δ ≤ (1/δ)I on U in the viscosity sense, and by Theorem 8.11 there exists C > 0 such that u^δ is a viscosity solution of
u^δ + H(Du^δ, x) ≤ Cδ in U_{Cδ},
where U_{Cδ} = {x ∈ U : dist(x, ∂U) ≥ Cδ}. Let x0 ∈ U such that
max_U (u^δ − u^ε) = u^δ(x0) − u^ε(x0).
If x0 ∉ U_{Cδ}, then dist(x0, ∂U) < Cδ, and hence by Lemma 8.5 and the nonnegativity of u^ε we have
max_U (u^δ − u^ε) ≤ u^δ(x0) − u^ε(x0) ≤ Cδ.
We will always assume that 1/h is an integer. Given a function u : [0, 1]^n_h → R, we define the forward and backward difference quotients by
∇±_i u(x) := ±( u(x ± he_i) − u(x) )/h,    (9.2)
and we set
∇±u(x) = ( ∇±_1 u(x), . . . , ∇±_n u(x) ).
When u is a smooth function restricted to the grid, the forward and backward
difference quotients (9.2) offer O(h) (or first order) accurate approximations
of uxi . This can be immediately verified by expanding u via its Taylor series.
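The Taylor-expansion argument is easy to confirm numerically. The sketch below (plain Python, with the hypothetical test function sin) measures the error of the forward difference quotient against the exact derivative as h is halved; the error should roughly halve with it.

```python
import math

def forward_diff(u, x, h):
    # Forward difference quotient (u(x + h) - u(x)) / h, as in (9.2).
    return (u(x + h) - u(x)) / h

u, du = math.sin, math.cos          # smooth test function and its derivative
x = 0.7
errors = [abs(forward_diff(u, x, h) - du(x)) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
# halving h roughly halves the error, confirming O(h) accuracy
```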
The idea is to restrict (9.1) to the grid [0, 1]^n_h, and replace each partial derivative by a corresponding finite difference. However, some care must be taken in how this is done.
Exercise 9.1. Consider the following finite difference scheme for the one dimensional eikonal equation (1.6) from Exercise 1.4:
|∇⁺_1 u_h(x)| = 1 for x ∈ [0, 1)_h, and u_h(0) = u_h(1) = 0.    (9.3)
78 CHAPTER 9. FINITE DIFFERENCE SCHEMES
Show that the scheme is not well-posed, that is, depending on whether 1/h is
even or odd, there is either no solution, or there is more than one solution.
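The parity obstruction behind Exercise 9.1 can be made concrete by brute force: each grid step satisfies u_h(x + h) − u_h(x) = ±h, so with u_h(0) = u_h(1) = 0 the 1/h steps must cancel exactly. A small exhaustive search (plain Python, written for this note) counts the sign patterns that work.

```python
from itertools import product

# Any grid function with |u(x + h) - u(x)| = h moves in steps of
# exactly +-h; with u(0) = u(1) = 0, the N = 1/h steps must sum to zero.
def count_solutions(N):
    return sum(1 for signs in product((-1, 1), repeat=N)
               if sum(signs) == 0)

odd, even = count_solutions(5), count_solutions(4)
# 1/h odd: no solution; 1/h even: several distinct solutions
```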
In this case, the solution u satisfies the dynamic programming principle (4.11)
Since T (x, y) > 0, this expresses two things. First, there must exist y ∈
∂B(x, r) such that u(y) < u(x), and second, u(x) depends only on the neigh-
boring values u(y) that are smaller than u(x). Keeping these ideas in mind,
we define the monotone finite differences
∇^m_i u = m( ∇⁺_i u, ∇⁻_i u ),    (9.4)
where
m(a, b) = a if a + b < 0 and a ≤ 0, m(a, b) = b if a + b ≥ 0 and b ≥ 0, and m(a, b) = 0 otherwise.
We also define the monotone gradient by
∇^m u = ( ∇^m_1 u, . . . , ∇^m_n u ).
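In one dimension these definitions can be exercised directly. The sketch below (plain Python, written for these notes; the argument order m(∇⁺u, ∇⁻u) is one consistent reading of (9.4)) checks that for the eikonal solution u(x) = min(x, 1 − x) the monotone difference has modulus 1 at every interior grid point, including the kink, while at the upside-down kink of −u it vanishes, which is exactly the selection mechanism described above.

```python
def m(a, b):
    # Monotone selection function, following (9.4).
    if a + b < 0 and a <= 0:
        return a
    if a + b >= 0 and b >= 0:
        return b
    return 0.0

def grad_m(u, i, h):
    # Monotone difference m(forward, backward) at interior grid index i.
    fwd = (u[i + 1] - u[i]) / h
    bwd = (u[i] - u[i - 1]) / h
    return m(fwd, bwd)

N = 10                                   # 1/h
h = 1.0 / N
u = [min(i * h, 1 - i * h) for i in range(N + 1)]    # distance to {0, 1}
w = [-x for x in u]                                  # the 'wrong' kink

vals = [abs(grad_m(u, i, h)) for i in range(1, N)]   # all equal to 1
bad = abs(grad_m(w, N // 2, h))                      # 0 at the min-kink
```

So the scheme |∇^m u_h| = 1 is satisfied by the distance function but fails for −u at its minimum kink, mirroring the fact that only the former is a viscosity solution.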
Therefore H(p, x) ≤ H(q, x) whenever |pi | ≤ |qi | for all i. Combining this with
Proposition 9.3 completes the proof.
Let us suppose now that H satisfies the hypotheses of Lemma 9.4 and suppose u_h : [0, 1]^n_h → R is a solution of the numerical scheme
where
S_h : X_h × R × R^n → R,
and X_h denotes the collection of real-valued functions on [0, 1]^n_h. We remark that the first argument of S_h represents the dependence of S_h on neighboring grid points, while the second argument represents the dependence of S_h on the grid point x.
Definition 9.5. We say the scheme Sh is monotone if
We note that even though viscosity solutions are not in general smooth,
consistency need only be verified for smooth test functions φ ∈ C ∞ (Rn ).
Definition 9.7. We say the scheme Sh is stable if the solutions uh are uni-
formly bounded as h → 0+ , that is, there exists C > 0 such that
The proof is similar to Theorem 5.4. We sketch the details below. We note
that in the context of the following proof, all viscosity solutions are interpreted
in the sense of Definition 6.2.
The limits above are taken with y ∈ [0, 1]^n_h. Since the scheme is stable, both ū and u̲ are bounded real-valued functions.
φk (x) = φ(x) + γk ,
where γk = uhk (xk ) − φ(xk ). Then φk (xk ) = uhk (xk ) and uhk ≤ φk . Since the
scheme Sh is monotone we have
If x0 ∈ ∂[0, 1]^n, then we can arrange it so that xk ∈ ∂[0, 1]^n_{h_k} for all k, or xk ∈ (0, 1)^n_{h_k} for all k. In the first case, we have
due to the continuity of g. The second case proceeds as above and we find that (9.10) holds. Therefore ū is a viscosity subsolution of (9.1). That u̲ is a viscosity supersolution of (9.1) is verified similarly. By strong uniqueness we have ū = u̲. Therefore u_h → u uniformly, where u is the unique viscosity solution of (9.1).
|u_h(x) − u_h(y)| ≤ C|x − y| for all x, y ∈ [0, 1]^n_h and h > 0.
This is a stronger form of stability. Prove Theorem 9.1 without the strong
uniqueness hypothesis. You can assume that ordinary uniqueness holds, that
is, there is at most one viscosity solution of (9.1) satisfying the boundary con-
ditions in the usual sense. [Hint: Use the Arzelà-Ascoli Theorem to extract a
subsequence uhk converging uniformly to a continuous function u ∈ C([0, 1]n ).
Show that u is the unique viscosity solution of (9.1), and conclude that the
entire sequence must converge uniformly to u.]
9.2. CONVERGENCE OF MONOTONE SCHEMES 83
Exercise 9.10. Suppose that Sh depends only on the forward and backward
neighboring grid points in each direction, so that we can write
Let us set F = F (a1 , . . . , a2n , z, x). You may assume that H and F are smooth.
(c) Find a monotone and consistent scheme for the linear PDE
where a1 , . . . , an are real numbers. Compare your scheme with the direc-
tion of the projected characteristics. [Hint: Your solution should depend
on the signs of the ai .]
S_h(u, u(x), x) := H( ∇_h u(x), u(x), x ) − (ah/2) Δ_h u(x),
where
∇_h u(x) := ( (u(x + he_1) − u(x − he_1))/(2h), . . . , (u(x + he_n) − u(x − he_n))/(2h) ),
and
Δ_h u(x) := Σ_{i=1}^{n} ( u(x + he_i) − 2u(x) + u(x − he_i) )/h².
Show that the Lax-Friedrichs scheme is monotone and consistent. [Hint: Rewrite the scheme as a function of the forward and backward differences ∇±_i u(x), as above.]
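Consistency of the Lax-Friedrichs scheme is easy to probe numerically. The sketch below (plain Python; the gradient-only Hamiltonian H(p) = |p| − 1, the test function sin, and a = 1 are hypothetical choices for illustration) evaluates the 1D residual S_h φ − H(φ′) and watches it shrink as h → 0, at rate O(h) because of the artificial viscosity term.

```python
import math

def lax_friedrichs_residual(phi, H, x, h, a):
    # S_h phi(x) = H(grad_h phi(x)) - (a h / 2) Delta_h phi(x), in 1D.
    grad = (phi(x + h) - phi(x - h)) / (2 * h)
    lap = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h ** 2
    return H(grad) - a * h / 2 * lap

H = lambda p: abs(p) - 1.0          # hypothetical 1D eikonal Hamiltonian
phi, dphi = math.sin, math.cos      # smooth test function
x = 0.3
exact = H(dphi(x))
errs = [abs(lax_friedrichs_residual(phi, H, x, h, 1.0) - exact)
        for h in (0.1, 0.05, 0.025)]
# the residual decreases roughly linearly in h
```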
Exercise 9.11. Let U := B⁰(0, 1) and ε > 0. Consider the nonlocal integral equation
(I_ε)    (1 + cε²) u_ε(x) − ⨍_{B(x,ε)} u_ε dy = cε² f(x)   if x ∈ U,
         u_ε(x) = 0   if x ∈ Γ_ε,
where c = 1/(2(n + 2)), u_ε : Γ_ε ∪ U → R, f ∈ C(U), and
Γ_ε = { x ∈ R^n \ U : dist(x, ∂U) ≤ ε }.
is a contraction mapping. Use the usual norm kuk := maxU |u| on C(U ).
Then appeal to Banach’s fixed point theorem.]
(b) Define S_ε : L^∞(U ∪ Γ_ε) × R × U → R by
S_ε(u, t, x) := (1 + cε²) t − ⨍_{B(x,ε)} u dy.
(d) Use the comparison principle to show that there exists C > 0 such that
|u_ε(x)| ≤ C(1 + 3ε − |x|²),
for all x ∈ U and 0 < ε ≤ 1, where C depends only on ‖f‖ = max_U |f|. [Hint: Compare against v(x) := C(1 + 3ε − |x|²) and −v, and adjust the constant C appropriately.]
(e) Use the method of weak upper and lower limits to show that uε → u
uniformly on U , where u is the viscosity solution of (P). You may assume
a comparison principle holds for (P) for semicontinuous viscosity solutions.
That is, if u ∈ USC(U ) is a viscosity subsolution of (P) and v ∈ LSC(U )
is a viscosity supersolution, and u ≤ v on ∂U , then u ≤ v in U . [Hint:
You will find the identity in the hint from Exercise 2.19 useful.]
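The contraction structure in part (a) can be exercised directly in one dimension. Below is a minimal sketch (plain Python, written for these notes) with the hypothetical choices n = 1 (so c = 1/6), f ≡ 1, and ε = 0.2; assuming the expansion ⨍_{B(x,ε)} u ≈ u + cε²u″, the limit problem is u − u″ = 1 on (−1, 1) with zero boundary data, whose solution is 1 − cosh(x)/cosh(1).

```python
import math

# Part (a): the map T below is a contraction with factor 1/(1 + c eps^2),
# so Banach's fixed point theorem gives the unique solution of (I_eps).
n_pts, eps = 101, 0.2
h = 2.0 / (n_pts - 1)
xs = [-1.0 + i * h for i in range(n_pts)]
c = 1.0 / 6.0                       # c = 1/(2(n + 2)) with n = 1
k = round(eps / h)                  # radius of B(x, eps) in grid points

def T(u):
    out = []
    for i, x in enumerate(xs):
        if abs(x) >= 1.0 - 1e-12:
            out.append(0.0)         # boundary points lie in Gamma_eps
            continue
        # grid average over B(x, eps); indices outside the grid carry
        # the collar value u = 0 on Gamma_eps
        window = [u[j] if 0 <= j < n_pts else 0.0
                  for j in range(i - k, i + k + 1)]
        avg = sum(window) / len(window)
        out.append((avg + c * eps ** 2 * 1.0) / (1.0 + c * eps ** 2))
    return out

u = [0.0] * n_pts
for _ in range(2000):
    u = T(u)

# compare against the (assumed) limit profile 1 - cosh(x)/cosh(1)
err = max(abs(ui - (1 - math.cosh(x) / math.cosh(1)))
          for ui, x in zip(u, xs))
```

After 2000 sweeps the fixed point residual is below 10⁻⁶, and the iterate stays close to the limiting profile even at the fairly coarse ε = 0.2.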
Thus, we are assuming that the neighborhood N(x) of x contains just the forward and backward neighbors in each coordinate direction. Let us write
F = F(a_1, . . . , a_{2n}, z, x)
for notational simplicity. Recall from Exercise 9.8 that F is monotone if and only if F is nondecreasing in each a_i, i.e., F_{a_i} ≥ 0 for all i. In this case, consistency of the scheme states that
F(p_1, −p_1, . . . , p_n, −p_n, z, x) = H(p, z, x).    (9.11)
Then there exists M > 0, C > 0, c > 0 and h̄ > 0 such that for all 0 < h < h̄
Remark 9.13. The condition (9.12) says that H is not a trivial zeroth order
PDE, such as H(p, z, x) = z.
Proof. The basic idea of the proof is that the monotonicity of F ensures that
the second order terms in the Taylor expansion for u(x) − u(x ± hei ) cannot
be cancelled out to improve accuracy.
Without loss of generality, let us assume that i = 1, z = 0 and x = 0 in (9.12). If p_1 = 0, then we can find a nearby point p where p_1 ≠ 0 and (9.12) holds, by smoothness of H. Hence we may assume that p_1 > 0. By consistency (Eq. (9.11)) we have
Define
φ(x) = (1/2) p_1 (x_1 + 1)² − (1/2) p_1 + p_2 x_2 + · · · + p_n x_n.
Then Dφ(0) = p and φ(0) = 0. We also note that
∇⁺_1 φ(0) = p_1 + (1/2) p_1 h,   ∇⁻_1 φ(0) = p_1 − (1/2) p_1 h,   and   ∇±_i φ(0) = p_i
9.4 The O(√h) rate
Even though monotone schemes have O(h) local truncation errors, it turns out that the best global errors that can be established rigorously are worse; they are O(√h). This should be compared with the O(√ε) convergence rates
established in Chapter 5 for the method of vanishing viscosity. Intuitively, the
reason for this is that local truncation errors consider how the scheme acts on
smooth functions, and viscosity solutions are in general not smooth. Thus,
the usual trick of substituting the solution of the PDE into the scheme does
not convert local errors into global errors for viscosity solutions. However,
see Section 5.3 for situations where the viscosity solution satisfies a one-sided
second derivative bound. In this situation, we would expect a one-sided O(h)
rate.
Nevertheless, it is commonplace in practice to observe global errors on
the order of O(h) in numerical experiments, even when the solutions are not
smooth. There is currently no theory that fully explains this difference between
the experimental and theoretical convergence rates.
We assume that L is Lipschitz continuous, and satisfies (9.5) as well as all
of the assumptions of Chapter 4, and we take H to be given by (4.14).
Then we have
H(q, x) ≥ −q · a − L(a, x),
and so
H(p, x) − H(q, x) ≤ (q − p) · a ≤ |q − p|.
Therefore H is Lipschitz continuous.
Let u ∈ C^{0,1}([0, 1]^n) be the unique viscosity solution of
H(Du, x) = 0 in (0, 1)^n,   u = 0 on ∂(0, 1)^n.    (9.15)
∇⁻_i u(x0) ≥ ∇⁻_i v(x0)   and   ∇⁺_i u(x0) ≤ ∇⁺_i v(x0).
Hence we cannot expect equality like (9.17) at the discrete level. Monotone
schemes are designed precisely to give the correct inequality so that the max-
imum principle holds.
To see how this works, recall from Lemma 9.4 that if u(x0 ) = v(x0 ) and
u ≤ v, then H(∇m u(x0 ), x0 ) ≥ H(∇m v(x0 ), x0 ). We can rephrase this in
This is the discrete analogue of (9.17) and is exactly what allows maximum
principle arguments to hold for monotone finite difference schemes.
Proof. We will show that θu ≤ v for every θ ∈ (0, 1). Fix θ ∈ (0, 1) and assume to the contrary that max_{[0,1]^n_h}(θu − v) > 0. Let x ∈ [0, 1]^n_h be a point at which θu − v attains its positive maximum. Then by (9.18) we have
Since u ≤ 0 ≤ v on ∂(0, 1)^n_h, we must have x ∈ (0, 1)^n_h. Due to the convexity of H we have
Lemma 9.16. There exists a unique grid function u_h : [0, 1]^n_h → R satisfying the monotone scheme (9.16). Furthermore, the sequence u_h is nonnegative and uniformly bounded.
The proof of Lemma 9.16 is based on the Perron method, but is consider-
ably simpler due to the discrete setting.
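In one space dimension the discrete Perron construction can be carried out explicitly. The sketch below (plain Python, written for these notes) starts from the zero grid function, a nonnegative subsolution, and repeatedly applies the improvement step u(x) ← min(u(x − h), u(x + h)) + h, which one can check is a 1D reformulation of the monotone eikonal scheme |∇^m u_h| = 1; the iterates increase to the discrete distance function.

```python
N = 16                       # 1/h
h = 1.0 / N

def improve(u):
    # One Perron-type improvement sweep; boundary values stay 0.
    return [0.0 if i in (0, N) else min(u[i - 1], u[i + 1]) + h
            for i in range(N + 1)]

u = [0.0] * (N + 1)          # the zero function is a subsolution
increasing = True
for _ in range(N):           # N sweeps suffice on a grid of N + 1 points
    v = improve(u)
    increasing = increasing and all(a >= b for a, b in zip(v, u))
    u = v

dist = [min(i * h, 1 - i * h) for i in range(N + 1)]
```

The monotone increase of the iterates mirrors taking the supremum over the family of subsolutions, and the limit is the fixed point of the improvement map.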
Proof. Define
F = { u : [0, 1]^n_h → R : u is a nonnegative subsolution of (9.16) }.
Proof. For θ ∈ (0, 1), to be selected later, define the auxiliary function
Φ(x, y) = θu(x) − u_h(y) − (1/√h)|x − y|²,
for x ∈ [0, 1]^n and y ∈ [0, 1]^n_h. Let (x_h, y_h) ∈ [0, 1]^n × [0, 1]^n_h such that
Φ(x_h, y_h) = max_{[0,1]^n × [0,1]^n_h} Φ.
Since Φ(x_h, y_h) ≥ Φ(y_h, y_h), we have
θu(x_h) − u_h(y_h) − (1/√h)|x_h − y_h|² ≥ θu(y_h) − u_h(y_h).
Therefore
(1/√h)|x_h − y_h|² ≤ θ( u(x_h) − u(y_h) ) ≤ C|x_h − y_h|,
due to the Lipschitzness of u. Therefore
|x_h − y_h| ≤ C√h.
θu(x_h) − u_h(y_h) ≤ 0
x ↦ u(x) − (1/(θ√h))|x − y_h|²
has a maximum at x_h. Letting p = (2/√h)(x_h − y_h), we have H(p/θ, x_h) ≤ 0.
Therefore
H(p, x_h) = H( θ(p/θ) + (1 − θ)·0, x_h )
 ≤ θ H(p/θ, x_h) + (1 − θ) H(0, x_h) ≤ −(1 − θ)γ,    (9.19)
for some γ > 0 depending only on L. Note we used the convexity of H with
respect to p above.
Notice that
y ↦ u_h(y) + (1/√h)|x_h − y|²
has a minimum at y = y_h. By (9.18) we have
for all θ ∈ (0, 1), due to the Lipschitzness of H. For h sufficiently small, we set
θ = 1 − ((C + 1)/γ)√h,
to obtain
(C + 1)√h ≤ C√h.
Since this is a contradiction, case 3 is impossible.
We have shown that there exists K > 0 such that when θ := 1 − K√h we have
θu(x_h) − u_h(y_h) ≤ C√h
for h > 0 sufficiently small. Therefore, there exists h̄ > 0 such that for 0 < h ≤ h̄,
max_{[0,1]^n_h} (θu − u_h) ≤ θu(x_h) − u_h(y_h) ≤ C√h,
9.5. ONE-SIDED O(h) RATE 93
For h ≥ h̄ we have
max_{[0,1]^n_h} (u − u_h) ≤ max_{[0,1]^n} |u| + max_{[0,1]^n_h} |u_h| ≤ C ≤ C̃√h,
due to Lemma 9.16, where C̃ := C/√h̄. This completes the proof.
Exercise 9.18. Complete the proof of Theorem 9.17 by showing that u_h − u ≤ C√h.
|∇^m_i u(x0)| ≥ |u_{x_i}(x0)| − (c/2)h.    (9.20)
u(x) − (c/2)|x − x0|² ≤ u(x0) + Du(x0)·(x − x0) for all x ∈ R^n.
Therefore
( u(x0 ± he_i) − u(x0) )/h ≤ ±u_{x_i}(x0) + (c/2)h,
|∇^m_i u(x0)| = (1/h) max{ (u(x0) − u(x0 − he_i))₊ , (u(x0) − u(x0 + he_i))₊ }
 ≥ max{ ( u_{x_i}(x0) − (c/2)h )₊ , ( −u_{x_i}(x0) − (c/2)h )₊ }
 ≥ max{ (u_{x_i}(x0))₊ , (−u_{x_i}(x0))₊ } − (c/2)h
 = |u_{x_i}(x0)| − (c/2)h
for almost every x0.
u_h − u ≤ Ch.    (9.21)
for h > 0 small enough that θ > 0. By the discrete comparison principle (Lemma 9.15) we have that v_h ≤ u on [0, 1]^n_h. Therefore (1 − Ch)u_h ≤ u. Since the sequence u_h is uniformly bounded, we conclude that
u_h − u ≤ Ch
Proof. Let u denote the unique viscosity solution of (9.15). By Exercise 2.22, w
is a viscosity subsolution of (9.15), and so by comparison, w ≤ u. By Theorem
9.20, there exists a constant C such that w ≥ uh − Ch for all h > 0, where
uh is the solution of the monotone scheme (9.16). Since uh → u uniformly as
h → 0, we have w ≥ u, hence w = u.
Chapter 10
Homogenization
and
(Nonnegative)   −H(0, y) ≥ 0 for all y ∈ R^n.    (10.4)
We first record a Lipschitz estimate on the solution uε .
Lemma 10.1. There exists a constant C such that for all ε > 0
The proof of Lemma 10.1 is very similar to Lemmas 5.1 and 5.7, so we
omit it.
By the Arzelà-Ascoli Theorem, we can pass to a subsequence uεj so that
uεj → u ∈ C 0,1 (U ) uniformly on U . The goal is to identify a PDE that is
satisfied by u. To do this, we need to understand locally the structure of
uε − u. Let us suppose that near a point x0 , uε has the form
H̄(p) := λ,    (10.7)
and the heuristics above suggest that u should be the viscosity solution of
u + H̄(Du) = 0 in U,
The addition of the zeroth order term guarantees that a comparison principle
holds for (10.8) (see Theorem 6.4 and Corollary 3.2). We can prove existence
of a viscosity solution of (10.8) via the Perron method. Indeed, let C > 0 be
large enough so that
which is a contradiction.
3. Since w_δ is Z^n-periodic, similar arguments to the proof of Lemma 10.1 show that there exists C > 0 such that
‖δw_δ‖_{C(R^n)} ≤ C
and
|w_δ(x) − w_δ(y)| ≤ C|x − y| for all x, y ∈ R^n,
where C is independent of δ. We now define
v_δ := w_δ − min_{R^n} w_δ.
Utilizing the above information and the Arzelà-Ascoli Theorem, we can extract
a subsequence δj → 0 such that
H(p + Dv, y) = λ in R^n,
and, say, λ̂ > λ. By the comparison principle from Theorem 6.4, we have v ≤ v̂ in R^n. This contradicts the fact that we can add an arbitrary constant to v̂ without changing (10.9).
The proof of Theorem 10.3 is based on the “perturbed test function” tech-
nique, which was pioneered in [9, 10].
Proof. By Lemma 10.1 and the Arzelà-Ascoli Theorem, there exists a function
u ∈ C 0,1 (U ) and a subsequence εj → 0 such that uεj → u uniformly on U . We
claim that u is the unique viscosity solution of (10.10). Once this is established,
it immediately follows that uε → u uniformly on U .
We first verify that u is a viscosity subsolution of (10.10). The proof is
split into three steps.
1. Let x0 ∈ U and φ ∈ C^∞(R^n) such that u − φ has a strict local maximum at x0 and u(x0) = φ(x0). We must show that
φ(x0) + H̄(Dφ(x0)) ≤ 0.
Discontinuous coefficients
Hence, given appropriate boundary conditions, the shape from shading problem reduces to solving the eikonal equation. If the object u is not a smooth graph (it may have corners), then I, and hence f, may be discontinuous. For more
details on shape from shading and connections to viscosity solutions, we refer
the reader to [16].
Example 11.1 motivates the need for a theory of viscosity solutions with
discontinuous coefficients. Since our definition of viscosity solution (Definition
2.1) assumed continuity, we first need to revisit definitions.
As motivation, we consider the method of vanishing viscosity for f possibly
discontinuous. In the viscous regularization, we replace f with the mollification
fε := η ε ∗ f , where η ε is the standard mollifier, yielding
H(Du_ε) − εΔu_ε = f_ε in U,
u_ε = g_ε on ∂U.    (11.4)
Sending εk → 0 we have
Noting that
f_ε(x) = ∫_{B(x,ε)} η^ε(x − y) f(y) dy ≤ sup_{B(x,ε)} f,
we find that
H(Dφ(x0)) ≤ lim inf_{k→∞} sup_{B(x0,ε_k)} f ≤ f^*(x0),
H(Dφ(x)) ≤ f^*(x).
Theorem 11.2. Let U = B⁰(0, 1) and set B⁺ = U ∩ {x_n > 0}, B⁻ = U ∩ {x_n < 0}, and Γ = U ∩ {x_n = 0}. Assume that f|_{B⁺} ∈ C(B⁺), f|_{B⁻} ∈ C(B⁻), and for all x ∈ Γ
lim_{B⁻ ∋ y→x} f(y) ≤ lim_{B⁺ ∋ y→x} f(y).    (11.5)
Let ε > 0 and let u, v ∈ C 0,1 (U ) such that H(Du) ≤ f and H(Dv) ≥ f + ε in
U in the viscosity sense of Definition 11.1. Then
The proof of Theorem 11.2 uses a modified doubling the variables argu-
ment. The proof given below is borrowed in part from [8].
Φ(x_α, y_α) = max_{U×U} Φ.
We claim that
lim_{α→∞} Φ(x_α, y_α) = δ.    (11.9)
Φ(x_α, y_α) ≥ Φ( x0, x0 + (1/√α) e_n ) = u(x0) − v( x0 + (1/√α) e_n ) → δ
as α → ∞, and so
lim inf_{α→∞} Φ(x_α, y_α) ≥ δ > 0.
Therefore
|x_α − y_α + (1/√α) e_n| , |x_α − y_α| ≤ C/√α.    (11.10)
It follows that
Φ(x_α, y_α) ≤ u(x_α) − v(y_α) = u(x_α) − u(y_α) + u(y_α) − v(y_α) ≤ C/√α + δ,
and so lim sup_{α→∞} Φ(x_α, y_α) ≤ δ, which establishes the claim.
For α large enough, u(x_α) − v(y_α) ≥ δ/2, and so x_α, y_α ∈ U. By the viscosity sub- and supersolution properties we have
H(p_α) ≤ f^*(x_α) and H(p_α) ≥ f_*(y_α) + ε,
where
p_α = α( x_α − y_α + (1/√α) e_n ).
Therefore
ε ≤ f^*(x_α) − f_*(y_α).    (11.11)
Setting w_α = √α ( x_α − y_α + (1/√α) e_n ), we have
y_α = x_α + (1/√α)( e_n − w_α ).    (11.12)
Notice that
(1/2)|w_α|² = u(x_α) − v(y_α) − Φ(x_α, y_α)
 ≤ u(x_α) − u(y_α) + u(y_α) − v(y_α) − Φ(x_α, y_α)
 ≤ C/√α + δ − Φ(x_α, y_α),
and so w_α → 0 as α → ∞. It follows from (11.12) that y_{α,n} > x_{α,n} for α sufficiently large. Thus by (11.7) and (11.11) we have
ε ≤ f^*(x_α) − f_*(y_α) ≤ ω(|x_α − y_α|) ≤ ω(Cα^{−1/2}).
Sending α → ∞ yields a contradiction.
We can generalize the argument in some ways. We follow [8] and make the
assumption that
(D) For all x0 ∈ U there exist ε_{x0} > 0 and η_{x0} ∈ S^{n−1} such that
f^*(x) − f_*(x + rd) ≤ ω(|x − x0| + r),    (11.13)
for all x ∈ U, r > 0 and d ∈ S^{n−1} such that |d − η_{x0}| < ε_{x0} and x + rd ∈ U,
where ω is a modulus of continuity.
108 CHAPTER 11. DISCONTINUOUS COEFFICIENTS
This models the situation where the domain can be decomposed as the disjoint
union U = U1 ∪ U2 ∪ Γ where U1 , U2 are open and Γ = ∂U1 ∩ ∂U2 ∩ U is the
boundary between U1 and U2 . Then (D) is satisfied provided Γ is a Lipschitz
hypersurface, f |U1 ∈ C(U1 ), f |U2 ∈ C(U2 ), and
for all x ∈ Γ.
We now give a more general comparison principle assuming (D) holds, and
that H is continuous.
Φ(x_α, y_α) = max_{U×U} Φ.
Therefore
Since u, v ∈ C 0,1 (U ), there exists C > 0 such that (see Exercise 2.16)
Setting w_α = √α ( x_α − y_α + (1/√α) η_{x0} ), we have
y_α = x_α + (1/√α)( η_{x0} − w_α ).    (11.17)
for α sufficiently large. Inserting this into (11.16) and taking α → ∞ yields a
contradiction.
Chapter 12
where U ⊂ R^n, and F is degenerate elliptic (see (3.4)) and satisfies the usual monotonicity in u (see (3.3)). Our treatment will loosely follow [4], though we prefer to avoid the super/sub-jet terminology. A comprehensive reference on the theory of second order equations, with the sharpest results, is the User's Guide [6].
We first examine why the method of proof we used for first order equations
(see Theorem 3.1) does not work here. The comparison principle for first order
equations is based on doubling the variables and examining the maximum of
Φ(x, y) = u(x) − v(y) − (α/2)|x − y|²
as α → ∞. The key step was identifying that at a maximum (x_α, y_α) of Φ, the smooth function φ(x) := (α/2)|x − y_α|² touches u from above at x_α, and ψ(y) = −(α/2)|x_α − y|² touches v from below at y_α. Furthermore, we have the magic property Dφ(x_α) = Dψ(y_α), which replaces the classical identity Du(x) = Dv(x) at a maximum of u − v when u, v are differentiable. For second order equations, we also need the inequality D²u(x) ≤ D²v(x) at the max of u − v. However, D²φ(x_α) = αI while D²ψ(y_α) = −αI, which gives no information of this kind. So we appear to be at an impasse.
However, we have not used one important piece of information; namely
that (x, y) 7→ Φ(x, y) is jointly maximal at (xα , yα ). If we, for the moment,
assume u, v ∈ C 2 , then the condition that (xα , yα ) maximize Φ can be written
112 CHAPTER 12. SECOND ORDER EQUATIONS
as the block matrix inequality
( D²u(x_α)  0 ; 0  −D²v(y_α) ) ≤ α ( I  −I ; −I  I ).    (12.2)
Since the right hand side annihilates vectors of the form (η, η) for η ∈ R^n, we find that η^T D²u(x_α) η ≤ η^T D²v(y_α) η for all η ∈ R^n, that is, D²u(x_α) ≤ D²v(y_α). So when u, v are sufficiently smooth, the doubling variables argument contains enough information to utilize the maximum principle for second order equations.
This suggests performing some regularization of u and v, and then ap-
plying the doubling variables argument to the regularizations. The standard
regularizers in viscosity solutions are the inf- and sup-convolutions, defined in Chapter 8. We replace the subsolution u with the sup-convolution u^ε, and the supersolution v with the inf-convolution v_ε. Thus, we consider the doubling variables argument in the form
Φ_ε(x, y) := u^ε(x) − v_ε(y) − (α/2)|x − y|².
The key is that (see Chapter 8) u^ε remains a subsolution (approximately) and v_ε remains a supersolution, so we have not lost much by making this substitution, and we have gained a great deal of regularity. However, to use this
substitution, and we have gained a great deal of regularity. However, to use this
additional regularity, we require a more refined understanding of semiconvex
functions.
We now turn to the proof of Jensen’s Lemma. The proof requires the
area formula, which is a generalization of the change of variables formula in
Lebesgue integration. Note we write #A to denote the number of points in
A ⊂ Rn and |A| to denote the Lebesgue measure.
Remark 12.4. Since #(A ∩ f⁻¹({x})) ≥ 1 for x ∈ f(A), it follows from the area formula that
|f(A)| = ∫_{f(A)} dx ≤ ∫_A Jf(x) dx.    (12.4)
This form of the area formula is used in the proof of Jensen’s Lemma.
Proof of Jensen’s Lemma. Let r > 0 be small enough so that φ(x0 ) > φ(x)
for all x ∈ B(x0 , r) with x 6= x0 , and let a > 0 such that φ(x) + a ≤ φ(x0 ) for
all x ∈ ∂B(x0 , r). Let ε > 0 and define the mollification φε = φ ∗ η ε . Then
φε → φ uniformly on B(x0 , r) as ε → 0. Define the corresponding sets
K^ε = { y ∈ B(x0, r) : ∃ p ∈ B(0, δ) such that φ^ε_p(x) ≤ φ^ε_p(y) for x ∈ B(x0, r) },
φ^ε_p(y) − φ^ε_p(x0) = φ^ε_p(y) − φ(y) + φ(y) − φ(x0) + φ(x0) − φ^ε_p(x0)
 ≤ 2‖φ − φ^ε‖_{L∞(B(x0,r))} + 2|p|r − a.
Therefore, for ε and δ sufficiently small, φ^ε_p(y) < φ^ε_p(x0) for all p with |p| ≤ δ and all y ∈ ∂B(x0, r). Thus, every maximum of φ^ε_p with respect to B(x0, r) lies in the interior B⁰(x0, r) when |p| ≤ δ. At a maximum y ∈ B⁰(x0, r) of φ^ε_p we have Dφ^ε(y) = p, and so Dφ^ε(K^ε) ⊃ B(0, δ). For the rest of the proof we fix δ > 0 sufficiently small, as above.
Now, let λ > 0 such that φ(x) + (λ/2)|x|² is convex. This yields
Therefore
|K^ε| ≥ α(n) δ^n / λ^n.    (12.5)
Since φ^ε → φ uniformly, if x ∈ K^{ε_j} for a sequence ε_j → 0, then x ∈ K. Therefore
χ_K(x) ≥ lim sup_{m→∞} χ_{K^{1/m}}(x),
where χ_A is the indicator function of the set A. By Fatou's Lemma,
|K| = ∫_K dx ≥ lim sup_{m→∞} ∫_{K^{1/m}} dx ≥ α(n) δ^n / λ^n,
which completes the proof.
The following proposition illustrates the usefulness of Jensen’s Lemma in
establishing the maximum principle for semiconvex functions.
Proposition 12.5. Let φ : Rn → R be semiconvex and let x0 be a local
maximum of φ. Then there exists xk → x0 such that φ is twice differentiable
at xk , Dφ(xk ) → 0 as k → ∞ and D2 φ(xk ) ≤ εk I for a sequence εk → 0.
Remark 12.6. Proposition 12.5, which is a restatement of Jensen’s Lemma,
is the semiconvex analog of the condition that Dφ = 0 and D2 φ ≤ 0 at the
maximum of a C 2 function.
Proof. Define ψ(x) = φ(x) − |x − x0 |4 . Then ψ has a strict local max at x0 ,
and ψ is semiconvex. Let rk > 0 be a decreasing sequence of real numbers con-
verging to zero. By Lemma 12.1 (Jensen’s Lemma), there is a corresponding
decreasing sequence δk > 0 such that δk → 0 and
{y ∈ B(x0 , rk ) : ∃p ∈ B(0, δk ) such that ψp (x) ≤ ψp (y) for x ∈ B(x0 , rk )}
has positive measure, where ψp (x) = ψ(x) + p · (x − x0 ). Since ψ is twice
differentiable almost everywhere, there exists xk ∈ B 0 (x0 , rk ) and pk ∈ B(0, δk )
such that ψpk has a local maximum at xk and ψ is twice differentiable at xk .
Hence xk → x0 , pk → 0, Dψ(xk ) = pk and D2 ψ(xk ) = D2 ψpk (xk ) ≤ 0.
The proof is completed by noting that φ is also twice differentiable along the
sequence xk , and that
|Dφ(xk ) − pk | ≤ 4|xk − x0 |3 and D2 φ(xk ) ≤ 12|xk − x0 |2 I.
12.2. COMPARISON FOR CONTINUOUS FUNCTIONS 115
The lower bound in (12.9) follows from semiconvexity of u and −v, while the upper bound follows from D²Φ(x^k_α, y^k_α) ≤ ε_k I. By conjugating both sides with vectors of the form (η, η) ∈ R^{2n} we have
η^T D²u(x^k_α) η ≤ η^T D²v(y^k_α) η + 2ε_k |η|²,
for all η ∈ R^n, and hence D²u(x^k_α) ≤ D²v(y^k_α) + 2ε_k I. Using (12.9), we can, upon passing to a subsequence, assume that D²u(x^k_α) → X_α and D²v(y^k_α) → Y_α as k → ∞, where X_α ≤ Y_α.
By the viscosity sub- and supersolution properties, and Remark 2.7, we
have
F (D2 u(xkα ), Du(xkα ), u(xkα ), xkα ) ≤ 0 (12.10)
and
F (D2 v(yαk ), Dv(yαk ), v(yαk ), yαk ) ≥ δ. (12.11)
Taking k → ∞ and using continuity of F , u, and v we have
F(X_α, p_α, u(x_α), x_α) ≤ 0
and
F(Y_α, p_α, v(y_α), y_α) ≥ δ,
where X_α ≤ Y_α. Since Φ(x_α, y_α) ≥ max_U (u − v) > 0 we have u(x_α) > v(y_α), and so by monotonicity and degenerate ellipticity, we have
δ ≤ F(Y_α, p_α, v(y_α), y_α) ≤ F(X_α, p_α, u(x_α), y_α).
Applying (12.6) and (12.10) we find that
δ ≤ ω((1 + |pα |)|xα − yα |).
Sending α → ∞ and recalling (12.8) yields a contradiction.
Using the inf- and sup-convolutions, we can extend the semiconvex com-
parison principle to continuous functions. Here, we assume F has the form
F (X, p, z, x) = λz + H(X, p) − f (x), (12.12)
where λ ≥ 0. Then the regularity condition (12.6) is equivalent to the condi-
tion f ∈ C(U ).
Theorem 12.8 (Continuous comparison). Assume F has the form (12.12).
Let u ∈ C(U ) be a viscosity subsolution of (12.1), and let v ∈ C(U ) be a
viscosity solution of
F (D2 v, Dv, v, x) − δ ≥ 0 in U,
for some δ > 0. If u ≤ v on ∂U then u ≤ v in U .
12.3. SUPERJETS AND SUBJETS 117
Proof. For ε > 0, let uε be the sup-convolution of u, and let vε be the inf-
convolution of v, as defined in Chapter 8. By an argument similar to Corollary
8.9, we have that
λu^ε + H(D²u^ε, Du^ε) − f ≤ g(ε) in U_ε
and
λv_ε + H(D²v_ε, Dv_ε) − f ≥ δ − h(ε) in U_ε
hold in the viscosity sense, where
Uε = {x ∈ U : dist(x, ∂U ) ≥ Cε}
for a constant C > 0, and g, h are nonnegative continuous functions with g(0) = h(0) = 0. Let m_ε = sup_{U\U_ε}(u^ε − v_ε). Since u, v ∈ C(U), u^ε → u and v_ε → v uniformly, and u ≤ v on ∂U, we have m_ε → 0 as ε → 0. Define w_ε = u^ε − m_ε. Then w_ε satisfies
λw_ε + H(D²w_ε, Dw_ε) − f ≤ g(ε) − λm_ε in U_ε
in the viscosity sense, and w_ε ≤ v_ε on ∂U_ε. For ε > 0 sufficiently small, we can apply Lemma 12.7 to show that w_ε ≤ v_ε on U_ε, and so
u^ε ≤ v_ε + m_ε on U_ε.
Sending ε → 0 we recover u ≤ v on U.
and
J^{2,−} u(x0) = { (Dφ(x0), D²φ(x0)) : φ ∈ C²(R^n) and u − φ has a local min at x0 }.
Remark 12.13. The conditions (12.13) and (12.14) are sometimes given as
the definitions of viscosity solutions (see, e.g., [6]). While this notation may
seem convenient and compact, nobody quite likes this “jet” business [4].
Proof. Let u ∈ USC(U) be a viscosity subsolution of (12.1) and let x0 ∈ U and (p, X) ∈ J^{2,+} u(x0). By Proposition 12.11 there exists φ ∈ C²(R^n) such that u − φ has a strict local maximum at x0, p = Dφ(x0) and X = D²φ(x0).
Define the standard mollification φ^ε = η^ε ∗ φ. Since φ^ε → φ locally uniformly, there exist ε_k → 0 and x_k → x0 such that u(x_k) → u(x0) and u − φ^{ε_k} has a local max at x_k for each k ≥ 1. Since φ^{ε_k} ∈ C^∞(R^n), the viscosity subsolution property, applied at x_k, yields in the limit
F(X, p, u(x0), x0) ≤ 0,
which completes the proof. The proof for the superjet is similar.
Φ(x_α, y_α) = max_{U×U} Φ
and
α|x_α − y_α|² −→ 0. (12.16)
Define
f(x) = u(x + x_α) − u(x_α) − αx·(x_α − y_α)
and
g(y) = v(y + y_α) − v(y_α) − αy·(x_α − y_α).
Then we have
f(x) − g(y) − (α/2)|x − y|²
= u(x + x_α) − v(y + y_α) − (α/2)|x − y|² − α(x − y)·(x_α − y_α) + v(y_α) − u(x_α)
= u(x + x_α) − v(y + y_α) − (α/2)|x − y + x_α − y_α|² + (α/2)|x_α − y_α|² + v(y_α) − u(x_α)
= Φ(x + x_α, y + y_α) + (α/2)|x_α − y_α|² + v(y_α) − u(x_α).
Therefore
f(x) − g(y) − (α/2)|x − y|²
attains its maximum at (x, y) = (0, 0), and f(0) = g(0) = 0. Thus, we have
f(x) − g(y) ≤ (α/2)|x − y|² (12.17)
for x, y near 0.
2. We now take the sup-convolution on both sides of (12.17) jointly in (x, y) to obtain (see Exercise 12.16)
f^ε(x) − g_ε(y) ≤ (α/2)(1 − 2αε)^{−1}|x − y|². (12.18)
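The factor (1 − 2αε)^{−1} in (12.18) traces back to the sup-convolution of a quadratic. As a sketch (the joint computation requested in Exercise 12.16 reduces to a one-variable computation of this type in the difference x − y): for a > 0 with aε < 1,

```latex
\sup_{y \in \mathbb{R}^n}\left( \frac{a}{2}|y|^{2} - \frac{1}{2\varepsilon}|x - y|^{2} \right)
  = \frac{a}{2}\,(1 - a\varepsilon)^{-1}|x|^{2},
```

the supremum being attained at y = (1 − aε)^{−1}x, as one checks by setting the gradient ay + (x − y)/ε to zero; the inner function is concave precisely because aε < 1.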
and so by (12.18) we have f^ε(0) ≤ g_ε(0). Since f^ε(0) ≥ f(0) = 0 and g_ε(0) ≤ g(0) = 0, we have f^ε(0) = 0 = g_ε(0). Therefore the function
f^ε(x) − g_ε(y) − (α/2)(1 − 2αε)^{−1}|x − y|² − |x|⁴ − |y|⁴
attains a strict local maximum at (x, y) = (0, 0). By Lemma 12.1 (Jensen's Lemma), there exist x^k, y^k → 0 and ξ^k, ζ^k → 0 such that
f^ε(x) − g_ε(y) − (α/2)(1 − 2αε)^{−1}|x − y|² − |x|⁴ − |y|⁴ − ξ^k·x − ζ^k·y
attains a local maximum at (x, y) = (x^k, y^k).
and
f^ε(x^k) = f(x^k_ε) − (1/2ε)|x^k − x^k_ε|². (12.22)
By Proposition 8.6, f(x) − φ_ε(x) has a local maximum at x^k_ε, where φ_ε(x) := φ(x + x^k − x^k_ε), and we have
Dφ_ε(x^k_ε) = p^k = (1/ε)(x^k_ε − x^k),
and D²φ_ε(x^k_ε) = X^k. Since p^k → 0 as k → ∞ we have x^k_ε → 0 as k → ∞. In
u + F(D²u) = f on Rⁿ. (12.26)
The well-posedness theory is far more general; we study this simple problem to
illustrate the main ideas, and to show how to handle the unbounded domain.
Throughout this section, we assume F is uniformly continuous and degenerate
elliptic, and f is continuous.
We first prove a comparison principle.
lim_{|x|→∞} (u(x) − v(x))/|x|² = 0, (12.27)
then u ≤ v on Rⁿ.
Thus, by (12.27) there exists r_ε > 0 such that u_ε ≤ v for |x| ≥ r_ε. By Theorem 12.15 we have u_ε ≤ v on B(0, r) for all r > r_ε, and thus u_ε ≤ v on Rⁿ. Sending ε → 0 completes the proof.
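In the special linear one-dimensional case F(X) = −X, equation (12.26) becomes u − u″ = f, and the comparison principle is easy to observe in a discretization. A minimal numerical sketch (ours, not from the text; a bounded interval with zero Dirichlet data stands in for the problem on Rⁿ):

```python
import numpy as np

def solve_u_minus_u2(f_vals, h):
    """Solve u - u'' = f at the interior grid points of (0, 1), with u(0) = u(1) = 0,
    using the centered second-difference approximation of u''."""
    n = len(f_vals)
    # Dense tridiagonal matrix for I - D2 (fine at this size).
    A = np.diag(np.full(n, 1.0 + 2.0 / h**2))
    off = np.full(n - 1, -1.0 / h**2)
    A += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f_vals)

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f1 = np.sin(np.pi * x)          # f1 <= f2 pointwise...
f2 = np.sin(np.pi * x) + 0.5
u1 = solve_u_minus_u2(f1, h)
u2 = solve_u_minus_u2(f2, h)    # ...so comparison forces u1 <= u2 pointwise
```

For f1 = sin(πx) the exact solution is u = sin(πx)/(1 + π²), which the scheme reproduces to O(h²); and since the discrete operator is monotone (its matrix is an M-matrix with nonnegative inverse), f1 ≤ f2 forces u1 ≤ u2, mirroring the comparison principle at the discrete level.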
Now, define
and
u(x) := sup{v(x) : v ∈ F}. (12.33)
Since v = −w ∈ F, we have −w ≤ u ≤ w, and so u satisfies (12.30). By Lemma 7.1, u^* is a viscosity subsolution of (12.26). Since u ≤ w and w is continuous, we have u^* ≤ w, and so u^* ∈ F and u = u^*. By Lemma 7.2, u_* is a viscosity supersolution of (12.26). Since u satisfies (12.30), we can invoke Lemma 12.17 to obtain u^* ≤ u_*, and so u = u^* = u_* is the unique viscosity solution of (12.26).
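The Perron argument above relies on the semicontinuous envelopes from Chapter 7; assuming the standard definitions, these are

```latex
u^{*}(x) := \limsup_{y \to x} u(y), \qquad u_{*}(x) := \liminf_{y \to x} u(y),
```

so that u_* ≤ u ≤ u^*, u^* is upper semicontinuous, and u_* is lower semicontinuous; the inequality u^* ≤ u_* obtained from comparison therefore forces all three functions to coincide, and in particular u is continuous.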
Exercise 12.19. Modify the proof of Theorem 12.18 to hold under the con-
dition
|f(x)| ≤ C_f(1 + |x|^α) (12.34)
for some C_f > 0 and α < 2.
Bibliography
[6] M. G. Crandall, H. Ishii, and P.-L. Lions. User’s guide to viscosity solu-
tions of second order partial differential equations. Bulletin of the Amer-
ican Mathematical Society, 27(1):1–67, 1992.
[9] L. C. Evans. The perturbed test function method for viscosity solutions
of nonlinear PDE. Proceedings of the Royal Society of Edinburgh: Section
A Mathematics, 111(3-4):359–375, 1989.
[12] L. C. Evans and R. F. Gariepy. Measure theory and fine properties of func-
tions. Studies in Advanced Mathematics. CRC Press, Boca Raton, 1992.
[13] R. Jensen. The maximum principle for viscosity solutions of fully non-
linear second order partial differential equations. Archive for Rational
Mechanics and Analysis, 101(1):1–27, 1988.