Methods For Economists Lecture Notes: (In Extracts)
Frank Werner
Faculty of Mathematics
Institute of Mathematical Optimization (IMO)
http://math.uni-magdeburg.de/werner/meth-ec-ma.html
Methods for Economists
Lecture Notes
(in extracts)
Winter Term 2014/15
Annotation:
1. These lecture notes do not replace your attendance of the lecture. Numerical examples are only presented during the lecture.
2. The symbol points to additional, detailed remarks given in the lecture.
3. I am grateful to Julia Lange for her contribution in editing the lecture
notes.
Contents

1 Basic mathematical concepts
  1.1 Preliminaries
  1.2 Convex sets
  1.3 Convex and concave functions
  1.4 Quasi-convex and quasi-concave functions

2 Unconstrained and constrained optimization
  2.1 Extreme points
    2.1.1 Global extreme points
    2.1.2 Local extreme points
  2.2 Equality constraints
  2.3 Inequality constraints
  2.4 Non-negativity constraints

3 Sensitivity analysis
  3.1 Preliminaries
  3.2 Value functions and envelope results
    3.2.1 Equality constraints
    3.2.2 Properties of the value function for inequality constraints
    3.2.3 Mixed constraints
  3.3 Some further microeconomic applications
    3.3.1 Cost minimization problem
    3.3.2 Profit maximization problem of a competitive firm

4 Consumer choice and general equilibrium theory
  4.1 Some aspects of consumer choice theory
  4.2 Fundamental theorems of welfare economics
    4.2.1 Notations and preliminaries
    4.2.2 First fundamental theorem of welfare economics
    4.2.3 Second fundamental theorem of welfare economics

5 Differential equations
  5.1 Preliminaries
  5.2 Differential equations of the first order
    5.2.1 Separable equations
    5.2.2 First-order linear differential equations
  5.3 Second-order linear differential equations and systems in the plane

6 Optimal control theory
  6.1 Calculus of variations
  6.2 Control theory
    6.2.1 Basic problem
    6.2.2 Standard problem
    6.2.3 Current value formulations

7 Growth theory and monetary economics
  7.1 Some growth models
  7.2 The Solow-Swan model
Chapter 1
Basic mathematical concepts
1.1 Preliminaries
Quadratic forms and their sign
Definition 1:
If A = (a_{ij}) is a matrix of order n × n and x^T = (x_1, x_2, ..., x_n) ∈ R^n, then the term
Q(x) = x^T · A · x
is called a quadratic form.
Thus:
Q(x) = Q(x_1, x_2, ..., x_n) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} · x_i · x_j
Example 1
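The matrix form x^T · A · x and the double-sum form of a quadratic form agree; a minimal numerical sketch (the matrix A and the vector x are made up for illustration, not taken from the lecture):

```python
import numpy as np

# Hypothetical symmetric 2x2 matrix A and vector x to illustrate Q(x) = x^T A x
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])

# matrix form of the quadratic form
Q_matrix = x @ A @ x

# double-sum form: sum_i sum_j a_ij * x_i * x_j
Q_sum = sum(A[i, j] * x[i] * x[j] for i in range(2) for j in range(2))

print(Q_matrix, Q_sum)
```

Both computations return the same number, as Definition 1 states.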
Definition 2:
A matrix A of order n × n and its associated quadratic form Q(x) are said to be
1. positive definite, if Q(x) = x^T · A · x > 0 for all x^T = (x_1, x_2, ..., x_n) ≠ (0, 0, ..., 0);
2. positive semi-definite, if Q(x) = x^T · A · x ≥ 0 for all x ∈ R^n;
3. negative definite, if Q(x) = x^T · A · x < 0 for all x^T = (x_1, x_2, ..., x_n) ≠ (0, 0, ..., 0);
4. negative semi-definite, if Q(x) = x^T · A · x ≤ 0 for all x ∈ R^n;
5. indefinite, if it is neither positive semi-definite nor negative semi-definite.
Remark:
In case 5., there exist vectors x* and y* such that Q(x*) > 0 and Q(y*) < 0.
Definition 3:
The leading principal minors of a matrix A = (a_{ij}) of order n × n are the determinants

        | a_11  a_12  ...  a_1k |
  D_k = | a_21  a_22  ...  a_2k | ,   k = 1, 2, ..., n
        |  ...   ...        ... |
        | a_k1  a_k2  ...  a_kk |

(i.e., D_k is obtained from |A| by crossing out the last n − k columns and rows).
Theorem 1
Let A be a symmetric matrix of order n × n. Then:
1. A positive definite ⟺ D_k > 0 for k = 1, 2, ..., n.
2. A negative definite ⟺ (−1)^k · D_k > 0 for k = 1, 2, ..., n.
3. A positive semi-definite ⟹ D_k ≥ 0 for k = 1, 2, ..., n.
4. A negative semi-definite ⟹ (−1)^k · D_k ≥ 0 for k = 1, 2, ..., n.
now: a necessary and sufficient criterion for positive (negative) semi-definiteness
Definition 4:
An (arbitrary) principal minor ∆_k of order k (1 ≤ k ≤ n) is the determinant of a submatrix of A obtained by deleting all but k rows and columns in A with the same numbers.
Theorem 2
Let A be a symmetric matrix of order n × n. Then:
1. A positive semi-definite ⟺ ∆_k ≥ 0 for all principal minors ∆_k of order k = 1, 2, ..., n.
2. A negative semi-definite ⟺ (−1)^k · ∆_k ≥ 0 for all principal minors ∆_k of order k = 1, 2, ..., n.
Example 2
alternative criterion for checking the sign of A:
Theorem 3
Let A be a symmetric matrix of order n × n and λ_1, λ_2, ..., λ_n be the real eigenvalues of A. Then:
1. A positive definite ⟺ λ_1 > 0, λ_2 > 0, ..., λ_n > 0.
2. A positive semi-definite ⟺ λ_1 ≥ 0, λ_2 ≥ 0, ..., λ_n ≥ 0.
3. A negative definite ⟺ λ_1 < 0, λ_2 < 0, ..., λ_n < 0.
4. A negative semi-definite ⟺ λ_1 ≤ 0, λ_2 ≤ 0, ..., λ_n ≤ 0.
5. A indefinite ⟺ A has eigenvalues with opposite signs.
Example 3
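The eigenvalue criterion of Theorem 3 is easy to apply numerically; a minimal sketch (the sample matrices are chosen for illustration):

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix via the signs of its eigenvalues (Theorem 3)."""
    lam = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semi-definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])))  # eigenvalues 1 and 3
print(definiteness(np.array([[1.0, 2.0], [2.0, 1.0]])))  # eigenvalues -1 and 3
```

The first matrix is positive definite, the second has eigenvalues of opposite signs and is therefore indefinite.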
Level curve and tangent line
consider:
z = F(x, y)
level curve:
F(x, y) = C with C ∈ R
⟹ slope of the level curve F(x, y) = C at the point (x, y):
y' = − F_x(x, y) / F_y(x, y)
(See Werner/Sotskov (2006): Mathematics of Economics and Business, Theorem 11.6, implicit-function theorem.)
equation of the tangent line T:
y − y_0 = y' · (x − x_0)
y − y_0 = − [F_x(x_0, y_0) / F_y(x_0, y_0)] · (x − x_0)
⟹ F_x(x_0, y_0) · (x − x_0) + F_y(x_0, y_0) · (y − y_0) = 0
Illustration: equation of the tangent line T
Remark:
The gradient ∇F(x_0, y_0) is orthogonal to the tangent line T at (x_0, y_0).
Example 4
generalization to R^n:
let x^0 = (x_1^0, x_2^0, ..., x_n^0)
gradient of F at x^0:
∇F(x^0) = (F_{x_1}(x^0), F_{x_2}(x^0), ..., F_{x_n}(x^0))^T
⟹ equation of the tangent hyperplane T at x^0:
F_{x_1}(x^0) · (x_1 − x_1^0) + F_{x_2}(x^0) · (x_2 − x_2^0) + ... + F_{x_n}(x^0) · (x_n − x_n^0) = 0
or, equivalently:
[∇F(x^0)]^T · (x − x^0) = 0
Directional derivative
measures the rate of change of function f in an arbitrary direction r
Definition 5:
Let function f : D_f → R, D_f ⊆ R^n, be continuously partially differentiable and r = (r_1, r_2, ..., r_n)^T ∈ R^n with |r| = 1. The term
[∇f(x^0)]^T · r = f_{x_1}(x^0) · r_1 + f_{x_2}(x^0) · r_2 + ... + f_{x_n}(x^0) · r_n
is called the directional derivative of function f at the point x^0 = (x_1^0, x_2^0, ..., x_n^0) ∈ D_f.
Example 5
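The formula [∇f(x^0)]^T · r can be checked against the limit definition of the directional derivative; a small sketch (the function f(x, y) = x²y and the data are invented for illustration):

```python
import numpy as np

# f(x, y) = x^2 * y with gradient (2xy, x^2); the direction r must satisfy |r| = 1
def f(v):
    x, y = v
    return x**2 * y

def grad_f(v):
    x, y = v
    return np.array([2 * x * y, x**2])

x0 = np.array([1.0, 2.0])
r = np.array([3.0, 4.0])
r = r / np.linalg.norm(r)  # normalize to unit length

# directional derivative as [grad f(x0)]^T r
dd = grad_f(x0) @ r

# numerical check via a small step h in direction r
h = 1e-6
dd_num = (f(x0 + h * r) - f(x0)) / h
print(dd, dd_num)
```

With ∇f(1, 2) = (4, 1) and r = (0.6, 0.8), the directional derivative is 4·0.6 + 1·0.8 = 3.2, matching the finite-difference estimate.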
Homogeneous functions and Euler's theorem
Definition 6
A function f : D_f → R, D_f ⊆ R^n, is said to be homogeneous of degree k on D_f, if t > 0 and (x_1, x_2, ..., x_n) ∈ D_f imply
(t·x_1, t·x_2, ..., t·x_n) ∈ D_f and f(t·x_1, t·x_2, ..., t·x_n) = t^k · f(x_1, x_2, ..., x_n)
for all t > 0, where k can be positive, zero or negative.
Theorem 4 (Euler's theorem)
Let the function f : D_f → R, D_f ⊆ R^n, be continuously partially differentiable, where t > 0 and (x_1, x_2, ..., x_n) ∈ D_f imply (t·x_1, t·x_2, ..., t·x_n) ∈ D_f. Then:
f is homogeneous of degree k on D_f
⟺ x_1 · f_{x_1}(x) + x_2 · f_{x_2}(x) + ... + x_n · f_{x_n}(x) = k · f(x) holds for all (x_1, x_2, ..., x_n) ∈ D_f.
Example 6
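Both directions of Theorem 4 can be illustrated with a Cobb-Douglas function, a standard textbook case (the exponents and evaluation point below are chosen freely, not taken from the lecture):

```python
# Cobb-Douglas example: f(x1, x2) = x1^0.3 * x2^0.7 is homogeneous of degree k = 1
a, b = 0.3, 0.7

def f(x1, x2):
    return x1**a * x2**b

x1, x2 = 2.0, 5.0

# homogeneity: f(t*x) = t^k * f(x) with k = 1
t = 3.0
lhs_hom = f(t * x1, t * x2)
rhs_hom = t * f(x1, x2)

# Euler's identity: x1*f_x1 + x2*f_x2 = k * f(x)
f_x1 = a * x1**(a - 1) * x2**b
f_x2 = b * x1**a * x2**(b - 1)
euler = x1 * f_x1 + x2 * f_x2
print(lhs_hom, rhs_hom, euler, f(x1, x2))
```

Since x1·f_x1 = a·f and x2·f_x2 = b·f, Euler's sum equals (a + b)·f = 1·f, confirming degree k = 1.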
Linear and quadratic approximations of functions in R^2
known: Taylor's formula for functions of one variable (see Werner/Sotskov (2006), Theorem 4.20):
f(x) = f(x_0) + [f'(x_0)/1!] · (x − x_0) + [f''(x_0)/2!] · (x − x_0)^2 + ... + [f^(n)(x_0)/n!] · (x − x_0)^n + R_n(x)
R_n(x) - remainder
now: n = 2
z = f(x, y) defined around (x_0, y_0) ∈ D_f
let: x = x_0 + h, y = y_0 + k
Linear approximation of f:
f(x_0 + h, y_0 + k) = f(x_0, y_0) + f_x(x_0, y_0) · h + f_y(x_0, y_0) · k + R_1(x, y)
Quadratic approximation of f:
f(x_0 + h, y_0 + k) = f(x_0, y_0) + f_x(x_0, y_0) · h + f_y(x_0, y_0) · k
  + (1/2) · [f_xx(x_0, y_0) · h^2 + 2 · f_xy(x_0, y_0) · h·k + f_yy(x_0, y_0) · k^2] + R_2(x, y)
often: (x_0, y_0) = (0, 0)
Example 7
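A small numerical sketch of the two approximations (the function f(x, y) = e^x · y and the step sizes are invented for illustration); the quadratic approximation should beat the linear one:

```python
import math

# f(x, y) = exp(x) * y, expanded around (x0, y0) = (0, 1) with x = x0 + h, y = y0 + k
x0, y0 = 0.0, 1.0
h, k = 0.1, 0.2

f = lambda x, y: math.exp(x) * y
# partial derivatives at (x0, y0): f_x = e^x*y, f_y = e^x, f_xx = e^x*y, f_xy = e^x, f_yy = 0
fx, fy = math.exp(x0) * y0, math.exp(x0)
fxx, fxy, fyy = math.exp(x0) * y0, math.exp(x0), 0.0

linear = f(x0, y0) + fx * h + fy * k
quadratic = linear + 0.5 * (fxx * h**2 + 2 * fxy * h * k + fyy * k**2)

exact = f(x0 + h, y0 + k)
print(exact, linear, quadratic)
```

The quadratic approximation picks up the second-order terms of the remainder R_1(x, y), so its error is an order of magnitude smaller here.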
Implicitly defined functions
exogenous variables: x_1, x_2, ..., x_n
endogenous variables: y_1, y_2, ..., y_m
F_1(x_1, x_2, ..., x_n; y_1, y_2, ..., y_m) = 0
F_2(x_1, x_2, ..., x_n; y_1, y_2, ..., y_m) = 0
...
F_m(x_1, x_2, ..., x_n; y_1, y_2, ..., y_m) = 0     (1)
(m < n)
Is it possible to put this system into its reduced form:
y_1 = f_1(x_1, x_2, ..., x_n)
y_2 = f_2(x_1, x_2, ..., x_n)
...
y_m = f_m(x_1, x_2, ..., x_n)     (2)
Theorem 5
Assume that:
F_1, F_2, ..., F_m are continuously partially differentiable;
(x^0, y^0) = (x_1^0, x_2^0, ..., x_n^0; y_1^0, y_2^0, ..., y_m^0) satisfies (1);
|J(x^0, y^0)| = det( ∂F_j(x^0, y^0)/∂y_k ) ≠ 0
(i.e., the Jacobian matrix is regular).
Then the system (1) can be put into its reduced form (2).
Example 8
1.2 Convex sets
Definition 7
A set M is called convex, if for any two points (vectors) x^1, x^2 ∈ M, any convex combination
λ·x^1 + (1 − λ)·x^2 with 0 ≤ λ ≤ 1
also belongs to M.
Illustration: Convex set
Remark:
The intersection of convex sets is always a convex set, while the union of convex sets is not
necessarily a convex set.
Illustration: Union and intersection of convex sets
1.3 Convex and concave functions
Definition 8
Let M ⊆ R^n be a convex set.
A function f : M → R is called convex on M, if
f(λ·x^1 + (1 − λ)·x^2) ≤ λ·f(x^1) + (1 − λ)·f(x^2)
for all x^1, x^2 ∈ M and all λ ∈ [0, 1].
f is called concave, if
f(λ·x^1 + (1 − λ)·x^2) ≥ λ·f(x^1) + (1 − λ)·f(x^2)
for all x^1, x^2 ∈ M and all λ ∈ [0, 1].
Illustration: Convex and concave functions
Definition 9
The matrix

                                    | f_{x_1 x_1}(x^0)  f_{x_1 x_2}(x^0)  ...  f_{x_1 x_n}(x^0) |
  H_f(x^0) = (f_{x_i x_j}(x^0)) =   | f_{x_2 x_1}(x^0)  f_{x_2 x_2}(x^0)  ...  f_{x_2 x_n}(x^0) |
                                    |       ...               ...                    ...        |
                                    | f_{x_n x_1}(x^0)  f_{x_n x_2}(x^0)  ...  f_{x_n x_n}(x^0) |

is called the Hessian matrix of function f at the point x^0 = (x_1^0, x_2^0, ..., x_n^0) ∈ D_f ⊆ R^n.
Remark:
If f has continuous second-order partial derivatives, the Hessian matrix is symmetric.
Theorem 6
Let f : D_f → R, D_f ⊆ R^n, be twice continuously differentiable and M ⊆ D_f be convex. Then:
1. f is convex on M ⟺ the Hessian matrix H_f(x) is positive semi-definite for all x ∈ M;
2. f is concave on M ⟺ the Hessian matrix H_f(x) is negative semi-definite for all x ∈ M;
3. the Hessian matrix H_f(x) is positive definite for all x ∈ M ⟹ f is strictly convex on M;
4. the Hessian matrix H_f(x) is negative definite for all x ∈ M ⟹ f is strictly concave on M.
Example 9
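Theorem 6 can be illustrated on a function with a constant Hessian; a minimal sketch (the function f(x, y) = x² + xy + y² and the sample points are chosen for illustration):

```python
import numpy as np

# f(x, y) = x^2 + x*y + y^2 has the constant Hessian [[2, 1], [1, 2]]
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# leading principal minors (Theorem 1) and eigenvalues (Theorem 3) both confirm
# positive definiteness, so f is strictly convex by Theorem 6, part 3
D1 = H[0, 0]
D2 = np.linalg.det(H)
lam = np.linalg.eigvalsh(H)
print(D1, D2, lam)

# direct check of the convexity inequality of Definition 8 at two sample points
f = lambda v: v[0]**2 + v[0] * v[1] + v[1]**2
x1, x2 = np.array([1.0, -2.0]), np.array([3.0, 0.5])
lam_ = 0.4
lhs = f(lam_ * x1 + (1 - lam_) * x2)
rhs = lam_ * f(x1) + (1 - lam_) * f(x2)
print(lhs, rhs)
```

Here D_1 = 2 > 0, D_2 = 3 > 0 and both eigenvalues are positive, and the convexity inequality holds strictly at the sample points.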
Theorem 7
Let f : M → R, g : M → R and M ⊆ R^n be a convex set. Then:
1. f, g are convex on M and a ≥ 0, b ≥ 0 ⟹ a·f + b·g is convex on M;
2. f, g are concave on M and a ≥ 0, b ≥ 0 ⟹ a·f + b·g is concave on M.
Theorem 8
Let f : M → R with M ⊆ R^n being convex and let F : D_F → R with R_f ⊆ D_F. Then:
1. f is convex and F is convex and increasing ⟹ (F ∘ f)(x) = F(f(x)) is convex;
2. f is convex and F is concave and decreasing ⟹ (F ∘ f)(x) = F(f(x)) is concave;
3. f is concave and F is concave and increasing ⟹ (F ∘ f)(x) = F(f(x)) is concave;
4. f is concave and F is convex and decreasing ⟹ (F ∘ f)(x) = F(f(x)) is convex.
Example 10
1.4 Quasi-convex and quasi-concave functions
Definition 10
Let M ⊆ R^n be a convex set and f : M → R. For any a ∈ R, the set
P_a = {x ∈ M | f(x) ≥ a}
is called an upper level set for f.
Illustration: Upper level set
Theorem 9
Let M ⊆ R^n be a convex set and f : M → R. Then:
1. If f is concave, then
P_a = {x ∈ M | f(x) ≥ a}
is a convex set for any a ∈ R;
2. If f is convex, then the lower level set
P_a = {x ∈ M | f(x) ≤ a}
is a convex set for any a ∈ R.
Definition 11
Let M ⊆ R^n be a convex set and f : M → R.
Function f is called quasi-concave, if the upper level set P_a = {x ∈ M | f(x) ≥ a} is convex for any number a ∈ R.
Function f is called quasi-convex, if −f is quasi-concave.
Remark:
f quasi-convex ⟺ the lower level set P_a = {x ∈ M | f(x) ≤ a} is convex for any a ∈ R
Example 11
Remarks:
1. f convex ⟹ f quasi-convex
   f concave ⟹ f quasi-concave
2. The sum of quasi-convex (quasi-concave) functions is not necessarily quasi-convex (quasi-concave).
Definition 12
Let M ⊆ R^n be a convex set and f : M → R.
Function f is called strictly quasi-concave, if
f(λ·x^1 + (1 − λ)·x^2) > min{f(x^1), f(x^2)}
for all x^1, x^2 ∈ M with x^1 ≠ x^2 and λ ∈ (0, 1).
Function f is strictly quasi-convex, if −f is strictly quasi-concave.
Remarks:
1. f strictly quasi-concave ⟹ f quasi-concave
2. f : D_f → R, D_f ⊆ R, strictly increasing (decreasing) ⟹ f strictly quasi-concave
3. A strictly quasi-concave function cannot have more than one global maximum point.
Theorem 10
Let f : D_f → R, D_f ⊆ R^n, be twice continuously differentiable on a convex set M ⊆ R^n and

           |     0          f_{x_1}(x)      ...     f_{x_r}(x)    |
  B_r(x) = | f_{x_1}(x)   f_{x_1 x_1}(x)   ...   f_{x_1 x_r}(x) | ,   r = 1, 2, ..., n
           |    ...            ...                     ...        |
           | f_{x_r}(x)   f_{x_r x_1}(x)   ...   f_{x_r x_r}(x) |

Then:
1. A necessary condition for f to be quasi-concave is that (−1)^r · B_r(x) ≥ 0 for all x ∈ M and all r = 1, 2, ..., n;
2. A sufficient condition for f to be strictly quasi-concave is that (−1)^r · B_r(x) > 0 for all x ∈ M and all r = 1, 2, ..., n.
Example 12
Chapter 2
Unconstrained and constrained optimization
2.1 Extreme points
Consider:
f(x) → min! (or max!)
s.t.
x ∈ M,
where f : R^n → R, ∅ ≠ M ⊆ R^n
M - set of feasible solutions
x ∈ M - feasible solution
f - objective function
x_i, i = 1, 2, ..., n - decision variables (choice variables)
often:
M = {x ∈ R^n | g_i(x) ≤ 0, i = 1, 2, ..., m}
where g_i : R^n → R, i = 1, 2, ..., m
2.1.1 Global extreme points
Definition 1
Let x* = (x_1*, x_2*, ..., x_n*) be an interior point of M. A necessary condition for x* to be an extreme point is
∇f(x*) = 0,
i.e., f_{x_1}(x*) = f_{x_2}(x*) = ... = f_{x_n}(x*) = 0.
Remark:
U_ε(x*) := {x ∈ R^n | |x − x*| < ε}
is called an (open) ε-neighborhood U_ε(x*) with ε > 0.
2.1.2 Local extreme points
Definition 3
A point x* ∈ M is called a local minimum (maximum) point of f, if there exists an ε-neighborhood U_ε(x*) such that f(x) ≥ f(x*) (f(x) ≤ f(x*)) for all x ∈ M ∩ U_ε(x*). The number f(x*) is called a local minimum (maximum) of f.
Theorem 4 (sufficient optimality condition)
Let f be twice continuously differentiable and let x* satisfy ∇f(x*) = 0. If the Hessian matrix H(x*) is positive definite, then x* is a local minimum point; if H(x*) is negative definite, then x* is a local maximum point. The case |H(x*)| = 0 requires further examination.
Example 2
2.2 Equality constraints
Consider:
z = f(x_1, x_2, ..., x_n) → min! (or max!)
s.t.
g_1(x_1, x_2, ..., x_n) = 0
g_2(x_1, x_2, ..., x_n) = 0
...
g_m(x_1, x_2, ..., x_n) = 0     (m < n)
apply the Lagrange multiplier method:
L(x; λ) = L(x_1, x_2, ..., x_n; λ_1, λ_2, ..., λ_m)
        = f(x_1, x_2, ..., x_n) + Σ_{i=1}^{m} λ_i · g_i(x_1, x_2, ..., x_n)
L - Lagrangian function
λ_i - Lagrangian multiplier
Theorem 5 (necessary optimality condition, Lagrange's theorem)
Let f and g_i, i = 1, 2, ..., m, be continuously differentiable, x^0 = (x_1^0, x_2^0, ..., x_n^0) be a local extreme point subject to the given constraints and let |J(x_1^0, x_2^0, ..., x_n^0)| ≠ 0. Then there exists a λ^0 = (λ_1^0, λ_2^0, ..., λ_m^0) such that
∇L(x^0; λ^0) = 0.
The condition of Theorem 5 corresponds to
L_{x_j}(x^0; λ^0) = 0, j = 1, 2, ..., n;
L_{λ_i}(x^0; λ^0) = g_i(x_1^0, x_2^0, ..., x_n^0) = 0, i = 1, 2, ..., m.
Theorem 6 (sufficient optimality condition)
Let f and g_i, i = 1, 2, ..., m, be twice continuously differentiable and let (x^0; λ^0) with x^0 ∈ D_f be a solution of the system ∇L(x; λ) = 0.
Moreover, let

              |        0         ...        0         L_{λ_1 x_1}(x; λ)  ...  L_{λ_1 x_n}(x; λ) |
              |       ...                  ...               ...                    ...          |
  H_L(x; λ) = |        0         ...        0         L_{λ_m x_1}(x; λ)  ...  L_{λ_m x_n}(x; λ) |
              | L_{x_1 λ_1}(x; λ) ... L_{x_1 λ_m}(x; λ)  L_{x_1 x_1}(x; λ)  ...  L_{x_1 x_n}(x; λ) |
              |       ...                  ...               ...                    ...          |
              | L_{x_n λ_1}(x; λ) ... L_{x_n λ_m}(x; λ)  L_{x_n x_1}(x; λ)  ...  L_{x_n x_n}(x; λ) |

be the bordered Hessian matrix and consider its leading principal minors D_j(x^0; λ^0) of the order j = 2m + 1, 2m + 2, ..., n + m at the point (x^0; λ^0). Then:
1. If all D_j(x^0; λ^0), 2m + 1 ≤ j ≤ n + m, have the sign (−1)^m, then x^0 = (x_1^0, x_2^0, ..., x_n^0) is a local minimum point of function f subject to the given constraints.
2. If all D_j(x^0; λ^0), 2m + 1 ≤ j ≤ n + m, alternate in sign, the sign of D_{n+m}(x^0; λ^0) being that of (−1)^n, then x^0 = (x_1^0, x_2^0, ..., x_n^0) is a local maximum point of function f subject to the given constraints.
3. If neither the conditions of 1. nor those of 2. are satisfied, then x^0 is not a local extreme point of function f subject to the constraints.
Here the case when one or several principal minors have value zero is not considered as a violation of condition 1. or 2.
special case: n = 2, m = 1 ⟹ 2m + 1 = n + m = 3
⟹ consider only D_3(x^0; λ^0)
D_3(x^0; λ^0) < 0 ⟹ sign is (−1)^m = (−1)^1 = −1
⟹ x^0 is a local minimum point according to 1.
D_3(x^0; λ^0) > 0 ⟹ sign is (−1)^n = (−1)^2 = +1
⟹ x^0 is a local maximum point according to 2.
Example 3
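The special case n = 2, m = 1 can be sketched on a small invented problem (maximize f(x, y) = x·y subject to x + y − 2 = 0; the problem itself is not from the lecture):

```python
import numpy as np

# L(x, y; lam) = x*y + lam*(x + y - 2); the FOC
#   y + lam = 0, x + lam = 0, x + y - 2 = 0
# give the stationary point x = y = 1, lam = -1
x, y, lam = 1.0, 1.0, -1.0

# bordered Hessian for n = 2, m = 1 (only D_3 has to be checked):
#   row/column 1: 0, g_x, g_y; below: second derivatives of L in x
H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
D3 = np.linalg.det(H)
print(D3)
```

Here D_3 = 2 > 0, which is the sign (−1)^n = +1, so (1, 1) is a local maximum point according to case 2 of Theorem 6.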
Theorem 7 (sufficient condition for global optimality)
If there exist numbers (λ_1^0, λ_2^0, ..., λ_m^0) = λ^0 and an x^0 ∈ D_f such that ∇L(x^0; λ^0) = 0, then:
1. If L(x) = f(x) + Σ_{i=1}^{m} λ_i^0 · g_i(x) is concave in x, then x^0 is a maximum point.
2. If L(x) = f(x) + Σ_{i=1}^{m} λ_i^0 · g_i(x) is convex in x, then x^0 is a minimum point.
Example 4
2.3 Inequality constraints
Consider:
f(x_1, x_2, ..., x_n) → min!
s.t.
g_1(x_1, x_2, ..., x_n) ≤ 0
g_2(x_1, x_2, ..., x_n) ≤ 0
...
g_m(x_1, x_2, ..., x_n) ≤ 0     (3)
⟹ L(x; λ) = f(x_1, x_2, ..., x_n) + Σ_{i=1}^{m} λ_i · g_i(x_1, x_2, ..., x_n) = f(x) + λ^T · g(x),
where
λ = (λ_1, λ_2, ..., λ_m)^T and g(x) = (g_1(x), g_2(x), ..., g_m(x))^T
Definition 4
A point (x*; λ*) with λ* ≥ 0 is called a saddle point of the Lagrangian function L, if
L(x*; λ) ≤ L(x*; λ*) ≤ L(x; λ*)     (2.1)
for all x ∈ R^n, λ ∈ R^m_+.
Theorem 8
If (x*; λ*) with λ* ≥ 0 is a saddle point of the Lagrangian function L, then x* is an optimal solution of problem (3).
Remark:
Condition (2.1) is often difficult to check. It is a global condition on the Lagrangian function. If all functions f, g_1, ..., g_m are continuously differentiable and convex, then the saddle point condition of Theorem 9 can be replaced by the following equivalent local conditions.
Theorem 10
If condition (S) is satisfied and functions f, g_1, ..., g_m are continuously differentiable and convex, then x* is an optimal solution of problem (3) if and only if there exists a λ* ≥ 0 such that the KKT-conditions hold:
∇f(x*) + Σ_{i=1}^{m} λ_i* · ∇g_i(x*) = 0     (2.2)
λ_i* · g_i(x*) = 0     (2.3)
g_i(x*) ≤ 0     (2.4)
λ_i* ≥ 0     (2.5)
i = 1, 2, ..., m
Remark:
Without convexity of the functions f, g_1, ..., g_m, the KKT-conditions are only a necessary optimality condition, i.e.: if x* is an optimal solution, then there exists a λ* such that (x*; λ*) satisfies the KKT-conditions; only if the problem is convex, the KKT-conditions also imply that x* is an optimal solution.

2.4 Non-negativity constraints
Consider problem (3) with the additional non-negativity constraints x_j ≥ 0, j = 1, 2, ..., n. Introducing multipliers μ_j for these constraints, the KKT-conditions read:
∇f(x*) + Σ_{i=1}^{m} λ_i* · ∇g_i(x*) − μ* = 0     (2.6)
λ_i* · g_i(x*) = 0, i = 1, 2, ..., m     (2.7)
μ_j* · x_j* = 0, j = 1, 2, ..., n     (2.8)
g_i(x*) ≤ 0     (2.9)
x* ≥ 0, λ* ≥ 0, μ* ≥ 0     (2.10)
Using (2.6) to (2.10), we can rewrite the KKT-conditions as follows:
∇f(x*) + Σ_{i=1}^{m} λ_i* · ∇g_i(x*) ≥ 0
λ_i* · g_i(x*) = 0, i = 1, 2, ..., m
x_j* · [ f_{x_j}(x*) + Σ_{i=1}^{m} λ_i* · ∂g_i/∂x_j (x*) ] = 0, j = 1, 2, ..., n
g_i(x*) ≤ 0
x* ≥ 0, λ* ≥ 0
i.e., the new Lagrangian multipliers μ_j have been eliminated.
Example 7
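The KKT-conditions (2.2)-(2.5) can be verified on a small convex sample problem (the problem below is invented for illustration): minimize (x_1 − 2)² + (x_2 − 1)² subject to g(x) = x_1 + x_2 − 2 ≤ 0, whose candidate from the KKT system is x* = (1.5, 0.5) with λ* = 1.

```python
import numpy as np

# convex sample problem: min (x1-2)^2 + (x2-1)^2  s.t.  g(x) = x1 + x2 - 2 <= 0
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
g = lambda x: x[0] + x[1] - 2
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
grad_g = np.array([1.0, 1.0])

x_star = np.array([1.5, 0.5])
lam = 1.0

stationarity = grad_f(x_star) + lam * grad_g   # (2.2): should be the zero vector
comp_slack = lam * g(x_star)                   # (2.3): should be 0
feasible = g(x_star) <= 0                      # (2.4); (2.5): lam >= 0
print(stationarity, comp_slack, feasible)

# since f and g are convex, Theorem 10 says x* is optimal;
# spot-check against random feasible points
rng = np.random.default_rng(0)
pts = rng.uniform(-3, 3, size=(1000, 2))
feas = pts[pts.sum(axis=1) <= 2]
print(all(f(p) >= f(x_star) for p in feas))
```

Geometrically, x* is the projection of the unconstrained minimum (2, 1) onto the half-plane x_1 + x_2 ≤ 2, and no sampled feasible point beats it.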
Some comments on quasi-convex programming
Theorem 11
Consider a problem (5), where function f is continuously differentiable and quasi-convex. Assume that there exist numbers λ_1*, λ_2*, ..., λ_m* and a vector x* such that
1. the KKT-conditions are satisfied;
2. ∇f(x*) ≠ 0;
3. λ_i* · g_i(x) is quasi-convex for i = 1, 2, ..., m.
Then x* is an optimal solution.

Chapter 3
Sensitivity analysis

3.2 Value functions and envelope results
3.2.1 Equality constraints
f*(r) = f(x_1(r), x_2(r), ..., x_n(r)) - (minimum) value function
λ_i(r) (i = 1, 2, ..., m) - Lagrangian multipliers in the necessary optimality condition
Lagrangian function:
L(x; λ; r) = f(x; r) + Σ_{i=1}^{m} λ_i · g_i(x; r)
L*(r) = f(x(r); r) + Σ_{i=1}^{m} λ_i(r) · g_i(x(r); r)
Theorem 1 (Envelope Theorem for equality constraints)
For j = 1, 2, ..., k, we have:
∂f*(r)/∂r_j = [ ∂L(x; λ; r)/∂r_j ] |_{(x(r), λ(r))} = ∂L*(r)/∂r_j
Remark:
Notice that ∂L*(r)/∂r_j measures the total effect of a change in r_j on the Lagrangian function, while ∂L(x; λ; r)/∂r_j measures the partial effect of a change in r_j on the Lagrangian function with x and λ being held constant.
Example 2
3.2.2 Properties of the value function for inequality constraints
Consider:
f(x, r) → min!
s.t.
g_i(x, r) ≤ 0, i = 1, 2, ..., m
minimum value function:
b ↦ f*(b)
f*(b) = min{f(x) | g_i(x) − b_i ≤ 0, i = 1, 2, ..., m}
x(b) - optimal solution
λ_i(b) - corresponding Lagrangian multipliers
⟹ ∂f*(b)/∂b_i = −λ_i(b), i = 1, 2, ..., m
Remark:
Function f*(b) is concave.
Example 3:
A firm has L units of labour available and produces 3 goods whose values per unit of output are a, b and c, respectively. Producing x, y and z units of the goods requires x^2, y^2 and z^2 units of labour, respectively. We maximize the value of output and determine the value function.
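A numerical sketch of this firm's problem, using the closed-form solution of the Lagrangian conditions (the closed form f*(L) = sqrt((a² + b² + c²)·L) is my own derivation for this sketch, and the parameter values are invented):

```python
import math

# maximize a*x + b*y + c*z subject to x^2 + y^2 + z^2 = L;
# the Lagrangian conditions give x = a/(2*lam), y = b/(2*lam), z = c/(2*lam),
# and the value function is f*(L) = sqrt((a^2 + b^2 + c^2) * L) with lam = df*/dL
a, b, c, L = 1.0, 2.0, 2.0, 9.0

s = a * a + b * b + c * c
lam = math.sqrt(s) / (2 * math.sqrt(L))
x, y, z = a / (2 * lam), b / (2 * lam), c / (2 * lam)

value = a * x + b * y + c * z      # value of output at the optimum
f_star = math.sqrt(s * L)          # closed-form value function
print(value, f_star)

# envelope check: df*/dL equals the multiplier lam
h = 1e-6
dfdL = (math.sqrt(s * (L + h)) - math.sqrt(s * L)) / h
print(dfdL, lam)
```

The optimal plan exhausts the labour constraint, and the derivative of the value function with respect to L coincides with the Lagrangian multiplier, as the envelope result predicts.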
3.2.3 Mixed constraints
Consider:
f(x, r) → min!
s.t.
x ∈ M(r) = {x ∈ R^n | g_i(x, r) ≤ 0, i = 1, 2, ..., m'; g_i(x, r) = 0, i = m' + 1, m' + 2, ..., m}
(minimum) value function f*(r) and Lagrangian function:
L(x; λ; r) = f(x; r) + Σ_{i=1}^{m} λ_i · g_i(x; r)
L*(r) = f(x(r); r) + Σ_{i=1}^{m} λ_i(r) · g_i(x(r); r)
Theorem 3 (Envelope Theorem for mixed constraints)
For j = 1, 2, ..., k, we have:
∂f*(r)/∂r_j = [ ∂L(x; λ; r)/∂r_j ] |_{(x(r), λ(r))} = ∂L*(r)/∂r_j
Example 4
3.3 Some further microeconomic applications
3.3.1 Cost minimization problem
Consider:
C(w, y) = w^T · x(w, y) → min!
s.t.
y − f(x) ≤ 0
x ≥ 0, y ≥ 0
Assume that w > 0 and that the partial derivatives of C are > 0.
Let x(w, y) be the optimal input vector and λ(w, y) be the corresponding Lagrangian multiplier.
L(x; λ; w, y) = w^T · x + λ · (y − f(x))
⟹ ∂C/∂y = ∂L*/∂y = λ = λ(w, y)     (3.1)
i.e., λ signifies marginal costs
Shephard(-McKenzie) Lemma:
∂C/∂w_i = x_i = x_i(w, y), i = 1, 2, ..., n     (3.2)
Remark:
Assume that C is twice continuously differentiable. Then the Hessian H_C is symmetric. Differentiating (3.1) w.r.t. w_i and (3.2) w.r.t. y, we obtain
Samuelson's reciprocity relation:
∂x_j/∂w_i = ∂x_i/∂w_j and ∂x_i/∂y = ∂λ/∂w_i, for all i and j
Interpretation of the first result:
A change in the j-th factor input w.r.t. a change in the i-th factor price (output being constant) must be equal to the change in the i-th factor input w.r.t. a change in the j-th factor price.
3.3.2 Profit maximization problem of a competitive firm
Consider:
π(x, y) = p^T · y − w^T · x → max! (⟺ −π → min!)
s.t.
g(x, y) = y − f(x) ≤ 0
x ≥ 0, y ≥ 0,
where:
p > 0 - output price vector
w > 0 - input price vector
y ∈ R^m_+ - produced vector of output
x ∈ R^n_+ - used input vector
f(x) - production function
Let:
x(p, w), y(p, w) be the optimal solutions of the problem and
π*(p, w) = p^T · y(p, w) − w^T · x(p, w) be the (maximum) profit function.
L(x, y; λ; p, w) = −p^T · y + w^T · x + λ · (y − f(x))
The Envelope theorem implies
Hotelling's lemma:
1. ∂π*/∂p_i = −∂L*/∂p_i = y_i, i.e.:
∂π*/∂p_i = y_i > 0, i = 1, 2, ..., m     (3.3)
2. ∂π*/∂w_i = −∂L*/∂w_i = −x_i, i.e.:
∂π*/∂w_i = −x_i < 0, i = 1, 2, ..., n     (3.4)
Interpretation:
1. An increase in the price of any output increases the maximum profit.
2. An increase in the price of any input lowers the maximum profit.
Remark:
Let π*(p, w) be twice continuously differentiable. Using (3.3) and (3.4), we obtain
Hotelling's symmetry relation:
∂y_j/∂p_i = ∂y_i/∂p_j,  ∂x_j/∂w_i = ∂x_i/∂w_j,  ∂x_j/∂p_i = −∂y_i/∂w_j, for all i and j.
Chapter 4
Applications to consumer choice and general equilibrium theory
4.1 Some aspects of consumer choice theory
Consumer choice problem
Let:
x ∈ R^n_+ - commodity bundle of consumption
U(x) - utility function
p ∈ R^n_+ - price vector
I - income
Then:
U(x) → max! (⟺ −U(x) → min!)
s.t.
p^T · x ≤ I (g(x) = p^T · x − I ≤ 0)
x ≥ 0
assumption: U quasi-concave (⟹ −U quasi-convex)
L(x; λ) = −U(x) + λ · (p^T · x − I)
KKT-conditions:
−U_{x_i}(x) + λ · p_i ≥ 0     (4.1)
λ · (p^T · x − I) = 0     (4.2)
x_i · (−U_{x_i}(x) + λ · p_i) = 0
p^T · x − I ≤ 0
x ≥ 0, λ ≥ 0
Suppose that ∇U(x*) ≠ 0 and that x* is feasible.
Thm. 11, Ch. 2 ⟹ x* is an optimal solution (U_{x_i}(x*) ≥ 0 is assumed for i = 1, 2, ..., n)
∇U(x*) ≠ 0 ⟹ there exists a j such that U_{x_j}(x*) > 0
(4.1) ⟹ λ > 0
(4.2) ⟹ p^T · x = I, i.e., all income is spent.
Consider now the following version of the problem:
U(x) → max!
s.t.
p^T · x = I
x ≥ 0
Let:
r^T = (p, I) - vector of parameters
x(p, I) - optimal solution, U*(p, I) = U(x(p, I)) - (maximum) value function
L(x; λ; p, I) = U(x; p, I) + λ · (I − p^T · x)
∂L/∂I |_{(x(p,I), λ(p,I))} = λ = λ(p, I).
Thm. 1, Ch. 3 ⟹ ∂U*/∂I = ∂L*/∂I = λ     (4.3)
Thm. 1, Ch. 3 ⟹ ∂U*/∂p_i = ∂L/∂p_i |_{(x(p,I), λ(p,I))} = −λ · x_i, i = 1, 2, ..., n     (4.4)
(4.3), (4.4) ⟹ ∂U*/∂p_i + x_i · ∂U*/∂I = 0     (ROY's identity)
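Roy's identity can be checked on a Cobb-Douglas utility function, a standard textbook case not taken from the lecture itself (demand x_1 = a·I/p_1 and the indirect utility formula below are the well-known closed forms for this utility):

```python
# Cobb-Douglas utility U(x1, x2) = x1^a * x2^(1-a);
# demand: x1 = a*I/p1, x2 = (1-a)*I/p2,
# indirect utility: U*(p, I) = (a/p1)^a * ((1-a)/p2)^(1-a) * I
a, p1, p2, I = 0.3, 2.0, 5.0, 100.0

def U_star(p1, p2, I):
    return (a / p1)**a * ((1 - a) / p2)**(1 - a) * I

x1 = a * I / p1          # Marshallian demand for good 1

# Roy's identity: dU*/dp1 + x1 * dU*/dI = 0, i.e. x1 = -(dU*/dp1)/(dU*/dI)
h = 1e-6
dU_dp1 = (U_star(p1 + h, p2, I) - U_star(p1 - h, p2, I)) / (2 * h)
dU_dI = (U_star(p1, p2, I + h) - U_star(p1, p2, I - h)) / (2 * h)
print(-dU_dp1 / dU_dI, x1)
```

The ratio of the numerical derivatives reproduces the demand x_1 = 15, as (4.3) and (4.4) require.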
Pareto-efficient allocation of commodities
Let:
U^i(x) = U^i(x_1, x_2, ..., x_l) - utility function of consumer i = 1, 2, ..., k in dependence on the amounts x_j of commodity j, j = 1, 2, ..., l
Definition 1
An allocation x = (x_1, x_2, ..., x_l) is said to be Pareto-efficient (or Pareto-optimal), if there does not exist an allocation x' = (x'_1, x'_2, ..., x'_l) such that U^i(x') ≥ U^i(x) for i = 1, 2, ..., k and U^i(x') > U^i(x) for at least one i ∈ {1, 2, ..., k}.
If such an allocation x' would exist, x' is said to be Pareto-superior to x.
Preference relation ≽: x' ≽ x (x' is weakly preferred to x)
Definition 3
An allocation ((x^i)_{i∈I}, (y^j)_{j∈J}) is feasible, if
Σ_{i=1}^{k} x^i ≤ e + Σ_{j=1}^{l} y^j.
Interpretation: consumption ≤ endowment + production
Competitive economy with private ownership
Each consumer (household) i ∈ I is characterized by
an endowment e^i = (e^i_1, e^i_2, ..., e^i_n) ∈ R^n_+ and
the ownership share θ^i_j of firm j (j ∈ J): θ^i = (θ^i_1, ..., θ^i_l).
Competitive equilibrium for E
Definition 4
For the economy E, a triplet ((x*^i)_{i∈I}, (y*^j)_{j∈J}, p*) is a competitive equilibrium, if
1. The allocation ((x*^i)_{i∈I}, (y*^j)_{j∈J}) is feasible in E;
2. Given the equilibrium prices p*, each firm j maximizes its profit: p*^T · y*^j ≥ p*^T · y^j for all y^j;
3. Given the equilibrium prices p*, each consumer i maximizes utility on the budget set X = {x^i | p*^T · x^i ≤ p*^T · e^i + Σ_{j∈J} θ^i_j · p*^T · y*^j}.
Then: x*^i ∈ X and U^i(x*^i) ≥ U^i(x^i) for all x^i ∈ X.
Remark:
The above equilibrium is denoted as Walrasian equilibrium.
4.2.2 First fundamental theorem of welfare economics
Theorem 1
For the economy E, let ((x*^i)_{i∈I}, (y*^j)_{j∈J}, p*) be a Walrasian equilibrium. Then the Walrasian equilibrium allocation ((x*^i)_{i∈I}, (y*^j)_{j∈J}) is Pareto-efficient for E.
Interpretation: Theorem 1 states that any Walrasian equilibrium leads to a Pareto-efficient allocation of resources.
Remark:
Theorem 1 does not require convexity of tastes (preferences) and technologies.
4.2.3 Second fundamental theorem of welfare economics
Consider a more abstract economy with transfers (e.g. positive/negative taxes).
Let:
w = (w_1, w_2, ..., w_k) ∈ R^k - wealth vector
Definition 5
For a competitive economy E, the triplet ((x*^i)_{i∈I}, (y*^j)_{j∈J}, p*) is a price equilibrium with transfers w, where Σ_{i=1}^{k} w_i = p*^T · e + Σ_{j∈J} p*^T · y*^j, such that
1. The allocation ((x*^i)_{i∈I}, (y*^j)_{j∈J}) is feasible in E;
2. Given the equilibrium prices p*, p*^T · y*^j ≥ p*^T · y^j for all y^j;
3. Given the equilibrium prices p*, x*^i ∈ X and U^i(x*^i) ≥ U^i(x^i) for all x^i ∈ X, where X = {x^i | p*^T · x^i ≤ w_i}.
Theorem 2
For the economy E with strictly monotonic utility functions U^i : R^n → R, i ∈ I, let the preferences and the production sets y^j be convex. Then: To any Pareto-efficient allocation ((x*^i)_{i∈I}, (y*^j)_{j∈J}), there exists a price vector p* such that the triplet ((x*^i)_{i∈I}, (y*^j)_{j∈J}, p*) is a price equilibrium with transfers.

Chapter 5
Differential equations

5.1 Preliminaries

Definition 1
An equation F(x, y, y', y'', ..., y^(n)) = 0 between the independent variable x, a function y(x) and its derivatives is called an ordinary differential equation. The order of the differential equation is determined by the highest order of the derivatives appearing in the differential equation.
Explicit representation:
y^(n) = f(x, y, y', y'', ..., y^(n−1))
Example 1
Definition 2
A function y(x) for which the relationship F(x, y, y', y'', ..., y^(n)) = 0 holds for all x ∈ D_y is called a solution of the differential equation.
The set
S = {y(x) | F(x, y, y', y'', ..., y^(n)) = 0 for all x ∈ D_y}
is called the set of solutions or the general solution of the differential equation.
in economics often:
time t is the independent variable, solution x(t) with
ẋ = dx/dt, ẍ = d²x/dt², etc.
5.2 Differential equations of the first order
implicit form:
F(t, x, ẋ) = 0
explicit form:
ẋ = f(t, x)
Graphical solution:
given: ẋ = f(t, x)
At any point (t_0, x_0) the value ẋ = f(t_0, x_0) is given, which corresponds to the slope of the tangent at point (t_0, x_0).
⟹ graph the direction field (or slope field)
Example 2
5.2.1 Separable equations
ẋ = f(t, x) = g(t) · h(x)
⟹ ∫ dx / h(x) = ∫ g(t) dt
⟹ H(x) = G(t) + C
⟹ solve for x (if possible)
x(t_0) = x_0 given:
C is assigned a particular value
⟹ x_p - particular solution
Example 3
Example 4
5.2.2 First-order linear differential equations
ẋ + a(t) · x = q(t),  q(t) - forcing term
(a) a(t) ≡ a and q(t) ≡ q
multiply both sides by the integrating factor e^{at} > 0
⟹ ẋ e^{at} + a x e^{at} = q e^{at}
⟹ d/dt (x · e^{at}) = q e^{at}
⟹ x · e^{at} = ∫ q e^{at} dt = (q/a) e^{at} + C
i.e.
ẋ + a x = q ⟺ x = C e^{−at} + q/a  (C ∈ R)     (5.1)
C = 0 ⟹ x(t) = q/a = constant
x = q/a - equilibrium or stationary state
Remark:
The equilibrium state can be obtained by letting ẋ = 0 and solving the remaining equation for x. If a > 0, then x = C e^{−at} + q/a converges to q/a as t → ∞, and the equation is said to be stable (every solution converges to an equilibrium as t → ∞).
Example 5
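The closed form (5.1) and the stability claim can be verified directly; a minimal sketch (the coefficients a, q and the constant C are picked for illustration):

```python
import math

# check (5.1): for xdot + a*x = q with a > 0, every solution
# x(t) = C*e^(-a*t) + q/a converges to the stationary state q/a
a, q, C = 2.0, 4.0, 3.0
x = lambda t: C * math.exp(-a * t) + q / a

# x solves the equation: xdot + a*x should equal q (xdot via central difference)
t, h = 0.7, 1e-6
xdot = (x(t + h) - x(t - h)) / (2 * h)
print(xdot + a * x(t))

# convergence to the equilibrium q/a = 2
print(x(10.0))
```

The residual ẋ + a·x − q is numerically zero, and for large t the solution is indistinguishable from the equilibrium q/a.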
(b) a(t) ≡ a and q(t) arbitrary
multiply both sides by the integrating factor e^{at} > 0
⟹ ẋ e^{at} + a x e^{at} = q(t) e^{at}
⟹ d/dt (x · e^{at}) = q(t) e^{at}
⟹ x · e^{at} = ∫ q(t) e^{at} dt + C
i.e.
ẋ + a x = q(t) ⟺ x = C e^{−at} + e^{−at} ∫ e^{at} q(t) dt     (5.2)
(c) General case
multiply both sides by e^{A(t)}
⟹ ẋ e^{A(t)} + a(t) x e^{A(t)} = q(t) e^{A(t)}
choose A(t) such that A(t) = ∫ a(t) dt, because then
d/dt (x · e^{A(t)}) = ẋ e^{A(t)} + x · Ȧ(t) e^{A(t)} with Ȧ(t) = a(t)
⟹ x · e^{A(t)} = ∫ q(t) e^{A(t)} dt + C   | · e^{−A(t)}
⟹ x = C e^{−A(t)} + e^{−A(t)} ∫ q(t) e^{A(t)} dt, where A(t) = ∫ a(t) dt
Example 6
(d) Stability and phase diagrams
Consider an autonomous (i.e. time-independent) equation
ẋ = F(x)     (5.3)
and a phase diagram:
Illustration: Phase diagram
Definition 3
A point a represents an equilibrium or stationary state for equation (5.3) if F(a) = 0.
⟹ x(t) ≡ a is a solution if x(t_0) = a.
The equilibrium is stable if x(t) converges to x = a for any starting point (t_0, x_0).
Illustration: Stability
5.3 Second-order linear differential equations and systems in the plane
ẍ + a(t) ẋ + b(t) x = q(t)     (5.4)
Homogeneous differential equation:
q(t) ≡ 0 ⟹ ẍ + a(t) ẋ + b(t) x = 0     (5.5)
Theorem 1
The homogeneous differential equation (5.5) has the general solution
x_H(t) = C_1 x_1(t) + C_2 x_2(t),  C_1, C_2 ∈ R,
where x_1(t), x_2(t) are two solutions that are not proportional (i.e., linearly independent).
The non-homogeneous equation (5.4) has the general solution
x(t) = x_H(t) + x_N(t) = C_1 x_1(t) + C_2 x_2(t) + x_N(t),
where x_N(t) is any particular solution of the non-homogeneous equation.
(a) Constant coefficients: a(t) ≡ a and b(t) ≡ b
ẍ + a ẋ + b x = q(t)
Homogeneous equation:
ẍ + a ẋ + b x = 0
use the setting x(t) = e^{λt} (λ ∈ R)
⟹ ẋ(t) = λ e^{λt}, ẍ(t) = λ² e^{λt}
⟹ Characteristic equation:
λ² + a λ + b = 0     (5.6)
3 cases:
1. (5.6) has two distinct real roots λ_1, λ_2:
   x_H(t) = C_1 e^{λ_1 t} + C_2 e^{λ_2 t}
2. (5.6) has a real double root λ_1 = λ_2:
   x_H(t) = C_1 e^{λ_1 t} + C_2 t e^{λ_1 t}
3. (5.6) has two complex roots λ_1 = α + βi and λ_2 = α − βi:
   x_H(t) = e^{αt} (C_1 cos βt + C_2 sin βt)
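The three cases can be reproduced computationally: the roots of (5.6) follow from the quadratic formula. The sample coefficients below are illustrative choices, not from the lecture:

```python
import cmath

# Roots of the characteristic equation λ^2 + aλ + b = 0 decide which
# of the three cases applies (sample coefficients, one per case).
def char_roots(a, b):
    d = cmath.sqrt(a * a - 4 * b)
    return (-a + d) / 2, (-a - d) / 2

r1, r2 = char_roots(3, 2)     # case 1: distinct real roots -1, -2
assert {round(r1.real), round(r2.real)} == {-1, -2}

r1, r2 = char_roots(2, 1)     # case 2: real double root -1
assert abs(r1 - r2) < 1e-12

r1, r2 = char_roots(0, 4)     # case 3: complex roots ±2i (cos/sin solution)
assert abs(r1.imag) == 2 and abs(r1.real) < 1e-12
print("all three cases reproduced")
```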
Non-homogeneous equation:
ẍ + aẋ + bx = q(t)
Discussion of special forcing terms:
Forcing term q(t) and the corresponding setting for x_N(t):
1. q(t) = p e^{st}
   (a) x_N(t) = A e^{st}, if s is not a root of the characteristic equation
   (b) x_N(t) = A t^k e^{st}, if s is a root of multiplicity k (k ≤ 2) of the characteristic equation
2. q(t) = p_n t^n + p_{n−1} t^{n−1} + … + p_1 t + p_0
   (a) x_N(t) = A_n t^n + A_{n−1} t^{n−1} + … + A_1 t + A_0, if b ≠ 0 in the homogeneous equation
   (b) x_N(t) = t^k (A_n t^n + A_{n−1} t^{n−1} + … + A_1 t + A_0), with k = 1 if a ≠ 0, b = 0 and k = 2 if a = b = 0
3. q(t) = p cos st + r sin st
   (a) x_N(t) = A cos st + B sin st, if si is not a root of the characteristic equation
   (b) x_N(t) = t^k (A cos st + B sin st), if si is a root of multiplicity k of the characteristic equation
Use the above setting and insert it and its derivatives into the non-homogeneous equation. Then determine the coefficients A, B and A_i, respectively.
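A worked instance of case 1(a) from the table (the equation ẍ + 3ẋ + 2x = 4e^t is an illustrative choice): since s = 1 is not a root of λ² + 3λ + 2 = 0 (roots −1, −2), the setting x_N = Ae^t gives A(1 + 3 + 2) = 4, so A = 2/3.

```python
import math

# Undetermined coefficients, case 1(a): x'' + 3x' + 2x = 4 e^t.
# Insert x_N = A e^t:  A(1 + 3 + 2) e^t = 4 e^t  =>  A = 2/3.
A = 4 / (1 + 3 + 2)

def xN(t):
    return A * math.exp(t)

# Confirm the residual of the ODE vanishes (finite differences).
h = 1e-4
for t in [0.0, 0.5, 1.0]:
    d1 = (xN(t + h) - xN(t - h)) / (2 * h)
    d2 = (xN(t + h) - 2 * xN(t) + xN(t - h)) / h**2
    assert abs(d2 + 3 * d1 + 2 * xN(t) - 4 * math.exp(t)) < 1e-6
print("A =", A)
```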
Example 7
(b) Stability
Consider equation (5.4).
Definition 4
Equation (5.4) is called globally asymptotically stable if every solution x_H(t) = C_1 x_1(t) + C_2 x_2(t) of the associated homogeneous equation tends to 0 as t → ∞ for all values of C_1 and C_2.
Remark:
x_H(t) → 0 as t → ∞  ⟺  x_1(t) → 0 and x_2(t) → 0 as t → ∞
Example 8
Theorem 2
Equation ẍ + aẋ + bx = q(t) is globally asymptotically stable if and only if a > 0 and b > 0.
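Theorem 2 can be spot-checked by comparing the sign conditions on a and b with the real parts of the roots of (5.6); the coefficients below are sample values only:

```python
import cmath

# Theorem 2 check: both roots of λ^2 + aλ + b = 0 have negative real
# part exactly when a > 0 and b > 0 (verified on sample coefficients).
def stable_by_roots(a, b):
    d = cmath.sqrt(a * a - 4 * b)
    return max(((-a + d) / 2).real, ((-a - d) / 2).real) < 0

for a, b in [(1, 1), (3, 2), (0.5, 4), (-1, 1), (1, -2), (0, 1)]:
    assert stable_by_roots(a, b) == (a > 0 and b > 0)
print("criterion matches on all samples")
```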
(c) Systems of equations in the plane
Consider:
ẋ = f(t, x, y)
ẏ = g(t, x, y)   (7)
Solution: a pair (x(t), y(t)) satisfying (7)
Initial value problem:
The initial conditions x(t_0) = x_0 and y(t_0) = y_0 are given.
A solution method:
Reduce the given system (7) to a second-order differential equation in only one unknown.
1. Use the first equation in (7) to express y as a function of t, x, ẋ:
   y = h(t, x, ẋ)
2. Differentiate y w.r.t. t and substitute the terms for y and ẏ into the second equation in (7).
3. Solve the resulting second-order differential equation to determine x(t).
4. Determine
   y(t) = h(t, x(t), ẋ(t))
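The four steps can be traced on a small illustrative system (coefficients chosen here, not from the lecture): for ẋ = y, ẏ = −2x − 3y, step 1 gives y = ẋ, step 2 gives ẍ + 3ẋ + 2x = 0, step 3 gives x(t) = C_1 e^{−t} + C_2 e^{−2t}, and step 4 recovers y(t) = ẋ(t).

```python
import math

# Four-step reduction on the illustrative system
#   x' = y,  y' = -2x - 3y.
# Step 1: y = x'.              Step 2: x'' + 3x' + 2x = 0.
# Step 3: roots -1, -2, so x(t) = C1 e^{-t} + C2 e^{-2t}.
# Step 4: y(t) = x'(t) = -C1 e^{-t} - 2 C2 e^{-2t}.
C1, C2 = 1.0, -0.5

def x(t): return C1 * math.exp(-t) + C2 * math.exp(-2 * t)
def y(t): return -C1 * math.exp(-t) - 2 * C2 * math.exp(-2 * t)

# Both original equations hold along the constructed pair.
h = 1e-6
for t in [0.0, 0.4, 1.3]:
    dx = (x(t + h) - x(t - h)) / (2 * h)
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dx - y(t)) < 1e-6
    assert abs(dy - (-2 * x(t) - 3 * y(t))) < 1e-6
print("reduction checked")
```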
Example 9
(d) Systems with constant coefficients
Consider:
ẋ = a_11 x + a_12 y + q_1(t)
ẏ = a_21 x + a_22 y + q_2(t)
Solution of the homogeneous system:
( ẋ )   ( a_11  a_12 ) ( x )
( ẏ ) = ( a_21  a_22 ) ( y )
we set
(x, y)ᵀ = (z_1, z_2)ᵀ e^{λt}  ⇒  (ẋ, ẏ)ᵀ = λ (z_1, z_2)ᵀ e^{λt}
⇒ we obtain the eigenvalue problem:
( a_11  a_12 ) ( z_1 )     ( z_1 )
( a_21  a_22 ) ( z_2 ) = λ ( z_2 )
or equivalently
( a_11 − λ    a_12   ) ( z_1 )   ( 0 )
( a_21        a_22 − λ ) ( z_2 ) = ( 0 )
Determine the eigenvalues λ_1, λ_2 and the corresponding eigenvectors
z¹ = (z¹_1, z¹_2)ᵀ and z² = (z²_1, z²_2)ᵀ.
Consider now the cases in a similar way as for a second-order differential equation, e.g. λ_1 ∈ ℝ, λ_2 ∈ ℝ and λ_1 ≠ λ_2.
⇒ General solution:
(x_H(t), y_H(t))ᵀ = C_1 z¹ e^{λ_1 t} + C_2 z² e^{λ_2 t}
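The eigenvalue construction can be sketched with NumPy on an illustrative matrix (not from the lecture); `numpy.linalg.eig` returns the eigenvalues and an eigenvector matrix, from which the general solution is assembled:

```python
import numpy as np

# Homogeneous system (x', y')^T = A (x, y)^T; build the general
# solution from eigenpairs of an illustrative matrix A.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam, Z = np.linalg.eig(A)          # columns of Z are eigenvectors

C1, C2 = 2.0, -1.0
def sol(t):
    return (C1 * Z[:, 0] * np.exp(lam[0] * t)
            + C2 * Z[:, 1] * np.exp(lam[1] * t))

# Verify that the constructed pair satisfies the system.
h = 1e-6
for t in [0.0, 0.7, 1.5]:
    deriv = (sol(t + h) - sol(t - h)) / (2 * h)
    assert np.allclose(deriv, A @ sol(t), atol=1e-5)
print("eigenvalues:", sorted(lam.real))
```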
Solution of the non-homogeneous system:
A particular solution of the non-homogeneous system can be determined in a similar way as for a second-order differential equation. Note that all occurring specific functions q_1(t) and q_2(t) have to be considered in each function x_N(t) and y_N(t).
Example 10
(e) Equilibrium points for linear systems with constant coefficients and forcing term
Consider:
ẋ = a_11 x + a_12 y + q_1
ẏ = a_21 x + a_22 y + q_2
For finding an equilibrium point (state), we set ẋ = ẏ = 0 and obtain
a_11 x + a_12 y = −q_1
a_21 x + a_22 y = −q_2
Cramer's rule
⇒ equilibrium point:
x* = det( −q_1  a_12 ; −q_2  a_22 ) / det( a_11  a_12 ; a_21  a_22 ) = (a_12 q_2 − a_22 q_1) / |A|
y* = det( a_11  −q_1 ; a_21  −q_2 ) / det( a_11  a_12 ; a_21  a_22 ) = (a_21 q_1 − a_11 q_2) / |A|
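The Cramer formulas can be verified directly: at the computed point, both right-hand sides of the system must vanish. The coefficients below are sample values only:

```python
# Equilibrium of  x' = a11 x + a12 y + q1,  y' = a21 x + a22 y + q2:
# set x' = y' = 0 and solve by Cramer's rule (sample coefficients).
a11, a12, a21, a22 = -2.0, 1.0, 1.0, -3.0
q1, q2 = 4.0, 1.0

detA = a11 * a22 - a12 * a21
x_eq = (a12 * q2 - a22 * q1) / detA
y_eq = (a21 * q1 - a11 * q2) / detA

# At the equilibrium both right-hand sides must vanish.
assert abs(a11 * x_eq + a12 * y_eq + q1) < 1e-12
assert abs(a21 * x_eq + a22 * y_eq + q2) < 1e-12
print((x_eq, y_eq))
```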
Example 11
Theorem 3
Suppose that |A| ≠ 0. Then the equilibrium point (x*, y*) is globally asymptotically stable if and only if
tr(A) = a_11 + a_22 < 0  and  |A| = a_11 a_22 − a_12 a_21 > 0,
where tr(A) is the trace of A (or equivalently, if and only if both eigenvalues of A have negative real parts).
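The trace/determinant criterion of Theorem 3 can be spot-checked against the eigenvalue condition; the matrices below are illustrative samples:

```python
import numpy as np

# Theorem 3 check on samples: tr(A) < 0 and |A| > 0  <=>  both
# eigenvalues of A have negative real parts.
samples = [
    np.array([[-2.0, 1.0], [1.0, -3.0]]),   # stable
    np.array([[0.0, 1.0], [-2.0, -3.0]]),   # stable
    np.array([[1.0, 2.0], [0.0, -1.0]]),    # unstable (|A| < 0)
    np.array([[0.0, 1.0], [-1.0, 0.0]]),    # not stable (tr(A) = 0)
]
for A in samples:
    crit = np.trace(A) < 0 and np.linalg.det(A) > 0
    eig_ok = max(np.linalg.eigvals(A).real) < 0
    assert crit == eig_ok
print("trace/determinant criterion agrees with eigenvalues")
```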
Example 12
(f) Phase plane analysis
Consider an autonomous system:
ẋ = f(x, y)
ẏ = g(x, y)
The rates of change of x(t) and y(t) are given by f(x(t), y(t)) and g(x(t), y(t)), e.g.,
if f(x(t), y(t)) > 0 and g(x(t), y(t)) < 0 at a point P = (x(t), y(t)), then (as t increases) the system will move from point P down and to the right.
⇒ (ẋ(t), ẏ(t)) gives the direction of motion, and the length of (ẋ(t), ẏ(t)) gives the speed of motion.
Illustration: Motion of a system
Graph a sample of these vectors. ⇒ phase diagram
Equilibrium point: a point (a, b) with f(a, b) = g(a, b) = 0
The equilibrium points are the points of intersection of the nullclines
f(x, y) = 0 and g(x, y) = 0.
Graph the nullclines:
At a point P with f(x, y) = 0, we have ẋ = 0 and the velocity vector is vertical; it points up if ẏ > 0 and down if ẏ < 0.
At a point Q with g(x, y) = 0, we have ẏ = 0 and the velocity vector is horizontal; it points to the right if ẋ > 0 and to the left if ẋ < 0.
Continue and graph further arrows.
Example 13
Chapter 6
Optimal control theory
6.1 Calculus of variations
Consider:
∫_{t_0}^{t_1} F(t, x, ẋ) dt → max!
s.t.
x(t_0) = x_0,  x(t_1) = x_1   (8)
Illustration
necessary optimality condition:
A function x(t) can only solve problem (8) if x(t) satisfies the following differential equation.
Euler equation:
∂F/∂x − d/dt (∂F/∂ẋ) = 0   (6.1)
we have
d/dt (∂F(t, x, ẋ)/∂ẋ) = ∂²F/(∂t ∂ẋ) + ∂²F/(∂x ∂ẋ) · ẋ + ∂²F/(∂ẋ ∂ẋ) · ẍ
⇒ (6.1) can be rewritten as
∂²F/(∂ẋ ∂ẋ) · ẍ + ∂²F/(∂x ∂ẋ) · ẋ + ∂²F/(∂t ∂ẋ) − ∂F/∂x = 0
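The Euler equation can be checked on one concrete functional. With the illustrative choice F(t, x, ẋ) = ẋ² + x² (not from the lecture), (6.1) reduces to 2x − 2ẍ = 0, i.e. ẍ = x, which is solved e.g. by x(t) = cosh t:

```python
import math

# Euler equation for F(t, x, x') = x'^2 + x^2 (illustrative choice):
# F_x - d/dt F_{x'} = 2x - 2x'' = 0, i.e. x'' = x, solved by cosh t.
def x(t):
    return math.cosh(t)

# Verify x'' = x with a second central difference.
h = 1e-4
for t in [0.0, 0.5, 1.0]:
    d2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(d2 - x(t)) < 1e-6
print("Euler equation satisfied")
```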
Theorem 1
If a feasible x*(t) solves problem (9) with either (a) or (b) as the terminal condition, then x*(t) must satisfy the Euler equation. If, in addition, F(t, x, ẋ) is concave in (x, ẋ), a feasible x*(t) satisfying the Euler equation and the relevant transversality condition below is optimal.
With the terminal condition (a), the transversality condition is
[∂F/∂ẋ]_{t=t_1} = 0.   (6.2)
With the terminal condition (b), the transversality condition is
[∂F/∂ẋ]_{t=t_1} ≤ 0   ([∂F/∂ẋ]_{t=t_1} = 0, if x*(t_1) > x_1)   (6.3)
If F(t, x, ẋ) is concave in (x, ẋ), then a feasible pair (x*(t), u*(t)) for which u*(t) maximizes H(t, x*(t), u, p(t)) is optimal.
Example 4
6.2.2 Standard problem
Consider the standard end-constrained problem:
∫_{t_0}^{t_1} f(t, x, u) dt → max!,  u ∈ U ⊆ ℝ   (6.9)
s.t.
ẋ(t) = g(t, x(t), u(t)),  x(t_0) = x_0   (6.10)
with one of the following terminal conditions:
(a) x(t_1) = x_1,  (b) x(t_1) ≥ x_1  or  (c) x(t_1) free.   (6.11)
Define now the Hamiltonian function as follows:
H(t, x, u, p) = p_0 f(t, x, u) + p · g(t, x, u)
Theorem 5 (Maximum principle for standard end constraints)
Suppose that (x*(t), u*(t)) is an optimal pair for problem (6.9) - (6.11). Then there exist a constant p_0 and a continuous function p(t) such that for all t ∈ [t_0, t_1]:
1. u = u*(t) maximizes H(t, x*(t), u, p(t)) for u ∈ U;
2. ṗ(t) = −∂H(t, x*(t), u*(t), p(t))/∂x;
3. the transversality condition corresponding to the terminal condition in (6.11) holds:
(a) no condition on p(t_1)
(b) p(t_1) ≥ 0 (with p(t_1) = 0 if x*(t_1) > x_1)
(c) p(t_1) = 0
Theorem 6 (Mangasarian)
Suppose that (x*(t), u*(t)) satisfies conditions 1. - 3. of Theorem 5 with p_0 = 1 and that H(t, x, u, p(t)) is concave in (x, u). Then (x*(t), u*(t)) is optimal. The maximization in condition 1. typically yields the control in the form u*(t) = u(t, x*(t), p(t)).
Remarks:
1. If the Hamiltonian is not concave, there exists a weaker sufficient condition due to Arrow:
If the maximized Hamiltonian
Ĥ(t, x, p) = max_{u∈U} H(t, x, u, p)
is concave in x for every t ∈ [t_0, t_1] and conditions 1. - 3. of Theorem 5 are satisfied with p_0 = 1, then (x*(t), u*(t)) is optimal.
(t), u
(t) maximizes H
c
(t, x
(t) r(t) =
H
c
(t, x
(t), u
(t), (t))
x
3. The transversality conditions are:
(a) no condition on λ(t_1)
(b) λ(t_1) ≥ 0 (with λ(t_1) = 0 if x*(t_1) > x_1)
(c) λ(t_1) = 0
Remark:
The conditions in Theorem 7 are sufficient for optimality if λ_0 = 1 and
H^c(t, x, u, λ(t)) is concave in (x, u) (Mangasarian)
or, more generally,
Ĥ^c(t, x, λ(t)) = max_{u∈U} H^c(t, x, u, λ(t)) is concave in x (Arrow).
Example 6
Remark:
If explicit solutions for the system of differential equations are not obtainable, a phase diagram may be helpful.
Illustration: Phase diagram for example 6
Chapter 7
Applications to growth theory and
monetary economics
7.1 Some growth models
Example 1: Economic growth I
Let
X = X(t) - national product at time t
K = K(t) - capital stock at time t
L = L(t) - number of workers (labor) at time t
and the Cobb-Douglas production function
X = A K^{1−α} L^{α},
where k = K/L denotes capital per worker and f(k) the production function in intensive form. Then
k̇ = s·f(k) − δk,
where s·f(k) is investment per worker (s is the savings rate) and δk is depreciation per worker.
Equilibrium state k*:
k̇ = 0  ⇒  s·f(k*) = δk*   (7.1)
Illustration: Equilibrium state
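The equilibrium condition (7.1) can be illustrated numerically; the intensive technology f(k) = √k and the parameter values below are assumptions for illustration, not from the lecture:

```python
# Equilibrium k* of  k' = s f(k) - δ k  for f(k) = k^{1/2}:
# s k*^{1/2} = δ k*  =>  k* = (s/δ)^2   (illustrative s, δ).
s, delta = 0.3, 0.1

k_star = (s / delta) ** 2
assert abs(s * k_star ** 0.5 - delta * k_star) < 1e-12

# The same point is reached by iterating the dynamics from any k > 0.
k = 1.0
for _ in range(20000):
    k += 0.1 * (s * k ** 0.5 - delta * k)
assert abs(k - k_star) < 1e-6
print("k* =", k_star)
```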
CHAPTER 7. GROWTH THEORY AND MONETARY ECONOMICS 51
Golden rule level of capital accumulation
The government would choose an equilibrium state at which consumption is maximized. To alter the equilibrium state, the government must change the savings rate s:
c = f(k) − s·f(k)
(7.1)
⇒ c = f(k*) − δk*
⇒ necessary optimality condition for c → max!:
f′(k*) − δ = 0  ⇒  f′(k*) = δ   (7.2)
Using (7.1) and (7.2), we obtain:
s*·f(k) = f′(k)·k  ⇒  s* = f′(k)·k / f(k)
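For the same illustrative technology f(k) = √k and δ = 0.1 (assumed values), condition (7.2) pins down the golden-rule capital stock and savings rate:

```python
# Golden rule for f(k) = k^{1/2}, δ = 0.1 (illustrative):
# f'(k*) = δ  =>  0.5 k*^{-1/2} = 0.1  =>  k* = 25,
# and s* = f'(k*) k* / f(k*), which equals the capital share 1/2 here.
delta = 0.1
k_star = (0.5 / delta) ** 2
assert abs(k_star - 25.0) < 1e-9

s_star = (0.5 * k_star ** -0.5) * k_star / k_star ** 0.5
assert abs(s_star - 0.5) < 1e-12
print("golden-rule savings rate:", s_star)
```

Note that s* coincides with the exponent of capital in this Cobb-Douglas example, a standard feature of the golden rule.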
With population growth at rate n:
s·f(k*) = (δ + n)k*
With population growth and technological progress at rate g:
s·f(k*) = (δ + n + g)k*
⇒ f′(k*) = δ + n + g
Example 4