
Chapter 5

Convex Functions and Optimization

5.1 Convex Functions


Our next topic is that of convex functions. Again, we will concentrate on the context of
a map f : Rⁿ → R, although the situation can be generalized almost without change by
replacing Rⁿ with any real vector space V. We will also find it useful, and in fact modern
algorithms reflect this usefulness, to consider functions f : Rⁿ → R∗, where R∗ is the set
of extended real numbers introduced earlier.¹ Before beginning with the main part of
the discussion, we want to keep some examples in mind.

A simple example of a convex function is x ↦ x², x ∈ R. Indeed, in the smooth case it


is arguably the most basic. As we learn in elementary calculus, this function is infinitely
often differentiable and has a single critical point at which the function in fact takes on,
not just a relative minimum, but an absolute minimum.
A critical point is, by definition, a solution of the equation

$$ \frac{d}{dx}\,x^2 = 2x = 0. $$

We can apply the second derivative test at the point x = 0 to determine the nature of the
critical point, and we find that, since

$$ \frac{d^2}{dx^2}\,x^2 = 2 > 0, $$

the function is "concave up" and the critical point is indeed a point of relative minimum. To see that this
point gives an absolute minimum of the function, we need only remark that the function
values are bounded below by zero, since x² > 0 for all x ≠ 0.
We can give a similar example in R².

Example 5.1.1 We consider the function

$$ (x, y) \mapsto \frac{1}{2}\,x^2 + \frac{1}{3}\,y^2 =: z. $$

¹See the discussion in Appendix B.


The graph of this function is an elliptic paraboloid. In this case we expect that, once
again, the minimum will occur at the origin of coordinates and, setting f (x, y) = z, we
can compute
   
$$ \operatorname{grad}(f)(x, y) = \begin{pmatrix} x \\[2pt] \tfrac{2}{3}\,y \end{pmatrix}, \qquad H(f(x, y)) = \begin{pmatrix} 1 & 0 \\[2pt] 0 & \tfrac{2}{3} \end{pmatrix}. $$

Notice that the Hessian matrix, H(f), is positive definite at all points (x, y) ∈ R². Here
the critical points are exactly those for which grad[f(x, y)] = 0, whose only solution is
x = 0, y = 0. The second derivative test for problems of this type is just that

$$ \det H(f(x, y)) = \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2 > 0, $$

which is clearly satisfied in the present example. It is simply a condition that guarantees
that the Hessian is a positive definite matrix. Again, since z > 0 for all (x, y) ≠ (0, 0),
the origin is a point where f has an absolute minimum.
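As a quick numerical sanity check of this example (an illustrative sketch only; the helper functions num_grad and num_hess are ad hoc finite-difference approximations, not part of the text), one can confirm that the gradient of f(x, y) = ½x² + ⅓y² vanishes at the origin and that the Hessian there has the positive eigenvalues 2/3 and 1:

```python
import numpy as np

def f(p):
    x, y = p
    return 0.5 * x**2 + (1.0 / 3.0) * y**2

def num_grad(f, p, h=1e-6):
    # central-difference approximation of the gradient
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def num_hess(f, p, h=1e-4):
    # central-difference approximation of the Hessian
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

origin = np.array([0.0, 0.0])
print(num_grad(f, origin))                       # approximately [0, 0]: the origin is a critical point
print(np.linalg.eigvalsh(num_hess(f, origin)))   # approximately [2/3, 1]: positive definite Hessian
```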

As the idea of a convex set lies at the foundation of our analysis, it is natural to describe
the notion of convex functions in terms of such sets. We recall that, if A and B are two
non-empty sets, then the Cartesian product of these two sets A × B is defined as the set
of ordered pairs {(a, b) : a ∈ A, b ∈ B}. We recall that, if the two sets are convex then
their Cartesian product is as well.

Previously, we introduced the idea of the epigraph of a function f : X → R where


X ⊂ Rn (see Appendix B section 2.2). For convenience, we repeat the definition here.

Definition 5.1.2 Let X ⊂ Rn be a non-empty set. If f : X → R then epi (f ) is defined


by

epi (f ) := {(x, z) ∈ X × R | z ≥ f (x)}.

Convex functions are now defined in terms of their epigraphs:

Definition 5.1.3 Let C ⊂ Rn be convex and f : C −→ R∗ . Then the function f is called


a convex function provided epi(f) ⊂ Rⁿ × R is a convex set.

Since we admit “infinite” values we have to be careful when doing computations. As


a preliminary precaution, we will assume that all the convex functions that we treat here,
unless specifically stated to the contrary, are proper convex functions, which means that the
epigraph of f is non-empty and contains no vertical line. Hence we insist that f(x) > −∞
for all x ∈ C and that f(x) < +∞ for at least one x ∈ C.
We emphasize that Definition 5.1.3 has the advantage of directly relating the theory
of convex sets to the theory of convex functions. A more traditional definition is that a
function is convex provided that, for any x, y ∈ C and any λ ∈ [0, 1]

f ( (1 − λ) x + λ y) ≤ (1 − λ) f (x) + λ f (y) ,

which is sometimes referred to as Jensen’s inequality.

In fact, these definitions turn out to be equivalent. Indeed, we have the following result
which involves a more general form of Jensen’s Inequality.

Theorem 5.1.4 Let C ⊂ Rn be convex and f : C −→ R∗ . Then the following are


equivalent:

(a) epi(f ) is convex.


(b) For all λ₁, λ₂, …, λₙ with λᵢ ≥ 0 and λ₁ + λ₂ + ⋯ + λₙ = 1, and points x⁽ⁱ⁾ ∈ C, i =
1, 2, …, n, we have

$$ f\!\left(\sum_{i=1}^{n} \lambda_i\, x^{(i)}\right) \;\le\; \sum_{i=1}^{n} \lambda_i\, f\bigl(x^{(i)}\bigr). $$

(c) For any x, y ∈ C and λ ∈ [0, 1],

f ( (1 − λ) x + λ y ) ≤ (1 − λ) f (x) + λ f (y) .

Proof: To see that (a) implies (b) we note that, if (x⁽ⁱ⁾, f(x⁽ⁱ⁾)) ∈ epi(f) for all
i = 1, 2, …, n, then, since this latter set is convex, we have

$$ \sum_{i=1}^{n} \lambda_i \bigl(x^{(i)}, f(x^{(i)})\bigr) \;=\; \left(\sum_{i=1}^{n} \lambda_i\, x^{(i)},\; \sum_{i=1}^{n} \lambda_i\, f(x^{(i)})\right) \in \operatorname{epi}(f), $$

which, in turn, implies that

$$ f\!\left(\sum_{i=1}^{n} \lambda_i\, x^{(i)}\right) \;\le\; \sum_{i=1}^{n} \lambda_i\, f\bigl(x^{(i)}\bigr). $$

This establishes (b). It is obvious that (b) implies (c). So it remains only to show that
(c) implies (a) in order to establish the equivalence.

To this end, suppose that (x⁽¹⁾, z₁), (x⁽²⁾, z₂) ∈ epi(f) and take 0 ≤ λ ≤ 1. Then

$$ (1-\lambda)\,(x^{(1)}, z_1) + \lambda\,(x^{(2)}, z_2) \;=\; \bigl((1-\lambda)\,x^{(1)} + \lambda\,x^{(2)},\; (1-\lambda)\,z_1 + \lambda\,z_2\bigr), $$

and since f(x⁽¹⁾) ≤ z₁ and f(x⁽²⁾) ≤ z₂ we have, since (1 − λ) ≥ 0 and λ ≥ 0, that

$$ (1-\lambda)\,f(x^{(1)}) + \lambda\,f(x^{(2)}) \;\le\; (1-\lambda)\,z_1 + \lambda\,z_2. $$

Hence, by the assumption (c), f((1 − λ)x⁽¹⁾ + λx⁽²⁾) ≤ (1 − λ)z₁ + λz₂, which shows that
the point ((1 − λ)x⁽¹⁾ + λx⁽²⁾, (1 − λ)z₁ + λz₂) is in epi(f).
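The weighted form of Jensen's inequality in part (b) is easy to probe numerically. The following sketch, using the convex function x ↦ x² and randomly generated convex weights, is purely illustrative:

```python
import random

def f(x):
    return x * x          # a convex function on R

random.seed(0)
n = 5
points = [random.uniform(-3, 3) for _ in range(n)]
weights = [random.random() for _ in range(n)]
total = sum(weights)
lam = [w / total for w in weights]                    # convex weights: nonnegative, summing to 1

lhs = f(sum(l * x for l, x in zip(lam, points)))      # f of the convex combination
rhs = sum(l * f(x) for l, x in zip(lam, points))      # convex combination of the values
print(lhs <= rhs + 1e-12)                             # True: Jensen's inequality holds
```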

Convex functions are fundamental to minimization problems and we shall concentrate


on them in the following sections. But since many applications to Economics involve
maximization rather than minimization problems, many discussions in this area involve
concave functions rather than convex ones. These latter functions are simply related to
convex functions. Indeed, a function f is concave if and only if the function −f is convex.²
In any systematic presentation, it is most economical to choose one type of optimization
problem on which to concentrate; they are completely interchangeable in that minimizing
a function f is the same problem as maximizing the function −f. Here, we concentrate
on convex functions.

We can see another connection between convex sets and convex functions if we introduce
the indicator function, ψK of a set K ⊂ Rn . Indeed, ψK : Rn → R∗ is defined by

$$ \psi_K(x) = \begin{cases} 0 & \text{if } x \in K, \\[2pt] +\infty & \text{if } x \notin K. \end{cases} $$

Proposition 5.1.5 A non-empty subset D ⊂ Rn is convex if and only if its indicator


function is convex.
²This will also be true of quasi-convex and quasi-concave functions, which we will define below.

Proof: The result follows immediately from the fact that epi (ψD ) = D × R≥0 .
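In code the indicator function can be mimicked with the extended value math.inf. The sketch below, with an arbitrarily chosen interval K, spot-checks Jensen's inequality for ψ_K using the usual extended-real arithmetic:

```python
import math

K = (0.0, 1.0)  # a convex set: the closed interval [0, 1]

def psi_K(x):
    # indicator function of K in the convex-analysis sense
    return 0.0 if K[0] <= x <= K[1] else math.inf

# Jensen's inequality for psi_K: finite on the left only when both points lie in K
for x, y, lam in [(0.2, 0.8, 0.5), (0.2, 5.0, 0.5), (-3.0, 5.0, 0.25)]:
    z = (1 - lam) * x + lam * y
    lhs = psi_K(z)
    rhs = (1 - lam) * psi_K(x) + lam * psi_K(y)
    print(lhs <= rhs)   # True in every case, reflecting the convexity of K
```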

Certain simple properties follow immediately from the analytic form of the definition
(part (c) of the equivalence theorem above). Indeed, it is easy to see, and we leave it as
an exercise for the reader, that if f and g are convex functions defined on a convex set
C, then f + g is likewise convex on C provided there is no point for which f (x) = ∞ and
g(x) = −∞. The same is true if β ∈ R , β > 0 and we consider βf .

Moreover, we have the following result, which is extremely useful in broadening our class
of convex functions.

Proposition 5.1.6 Let f : Rⁿ → R be given. For fixed x⁽¹⁾, x⁽²⁾ ∈ Rⁿ define a function
ϕ : [0, 1] → R by ϕ(λ) := f((1 − λ)x⁽¹⁾ + λx⁽²⁾). Then the function f is convex on Rⁿ if
and only if the function ϕ is convex on [0, 1] for every such choice of x⁽¹⁾, x⁽²⁾ ∈ Rⁿ.

Proof: Suppose, first, that f is convex on Rⁿ, and fix x⁽¹⁾, x⁽²⁾. It is sufficient to show that
epi(ϕ) is a convex subset of R². To see this, let (λ₁, z₁), (λ₂, z₂) ∈ epi(ϕ) and set

$$ \hat{y}^{(1)} = (1-\lambda_1)\,x^{(1)} + \lambda_1\,x^{(2)}, \qquad \hat{y}^{(2)} = (1-\lambda_2)\,x^{(1)} + \lambda_2\,x^{(2)}. $$

Then

$$ f(\hat{y}^{(1)}) = \varphi(\lambda_1) \le z_1 \quad\text{and}\quad f(\hat{y}^{(2)}) = \varphi(\lambda_2) \le z_2. $$

Hence (ŷ⁽¹⁾, z₁) ∈ epi(f) and (ŷ⁽²⁾, z₂) ∈ epi(f). Since epi(f) is a convex set, we also
have (µŷ⁽¹⁾ + (1 − µ)ŷ⁽²⁾, µz₁ + (1 − µ)z₂) ∈ epi(f) for every µ ∈ [0, 1]. It follows that
f(µŷ⁽¹⁾ + (1 − µ)ŷ⁽²⁾) ≤ µz₁ + (1 − µ)z₂.

Now

$$ \mu\,\hat{y}^{(1)} + (1-\mu)\,\hat{y}^{(2)} = \mu\bigl((1-\lambda_1)x^{(1)} + \lambda_1 x^{(2)}\bigr) + (1-\mu)\bigl((1-\lambda_2)x^{(1)} + \lambda_2 x^{(2)}\bigr) $$
$$ = \bigl(\mu(1-\lambda_1) + (1-\mu)(1-\lambda_2)\bigr)\,x^{(1)} + \bigl(\mu\lambda_1 + (1-\mu)\lambda_2\bigr)\,x^{(2)}, $$

and since

$$ \mu(1-\lambda_1) + (1-\mu)(1-\lambda_2) = [\mu + (1-\mu)] - [\mu\lambda_1 + (1-\mu)\lambda_2] = 1 - [\mu\lambda_1 + (1-\mu)\lambda_2], $$

we have from the definition of ϕ that f(µŷ⁽¹⁾ + (1 − µ)ŷ⁽²⁾) = ϕ(µλ₁ + (1 − µ)λ₂). Hence
ϕ(µλ₁ + (1 − µ)λ₂) ≤ µz₁ + (1 − µ)z₂, i.e., (µλ₁ + (1 − µ)λ₂, µz₁ + (1 − µ)z₂) ∈ epi(ϕ), and
so ϕ is convex.

We leave the proof of the converse statement as an exercise.
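The proposition can also be probed numerically: restrict a convex function to the segment joining two fixed points and test the defining inequality for ϕ on a grid. The sketch below assumes the test function f(x) = ‖x‖² and arbitrarily chosen sample points:

```python
import numpy as np

def f(x):
    return float(np.dot(x, x))      # f(x) = ||x||^2, convex on R^n

x1 = np.array([1.0, -2.0, 0.5])
x2 = np.array([-3.0, 4.0, 2.0])

def phi(lam):
    # restriction of f to the segment joining x1 and x2
    return f((1 - lam) * x1 + lam * x2)

# check the convexity inequality for phi over a grid of pairs (s, t) and weights mu
grid = np.linspace(0.0, 1.0, 11)
ok = all(
    phi((1 - mu) * s + mu * t) <= (1 - mu) * phi(s) + mu * phi(t) + 1e-9
    for s in grid for t in grid for mu in grid
)
print(ok)   # True: phi is convex on [0, 1]
```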

It should be clear that if f : Rⁿ → R is linear or affine, then f is convex. Indeed,
suppose that for a vector a ∈ Rⁿ and a real number b the affine function f is given by
f(x) = ⟨a, x⟩ + b. Then we have, for any λ ∈ [0, 1],

$$ f((1-\lambda)x + \lambda y) = \langle a, (1-\lambda)x + \lambda y\rangle + b = (1-\lambda)\,\langle a, x\rangle + \lambda\,\langle a, y\rangle + (1-\lambda)\,b + \lambda\,b $$
$$ = (1-\lambda)\bigl(\langle a, x\rangle + b\bigr) + \lambda\bigl(\langle a, y\rangle + b\bigr) = (1-\lambda)\,f(x) + \lambda\,f(y), $$

and so f is convex, the weak inequality being an equality in this case.

In the case that f is linear, that is, f(x) = ⟨a, x⟩ for some a ∈ Rⁿ, it is easy
to see that the map ϕ : x ↦ [f(x)]² is also convex. Indeed, if x, y ∈ Rⁿ then, setting
α = f(x) and β = f(y), and taking 0 < λ < 1, we have

$$ (1-\lambda)\,\varphi(x) + \lambda\,\varphi(y) - \varphi((1-\lambda)x + \lambda y) = (1-\lambda)\,\alpha^2 + \lambda\,\beta^2 - \bigl((1-\lambda)\,\alpha + \lambda\,\beta\bigr)^2 = (1-\lambda)\,\lambda\,(\alpha - \beta)^2 \ge 0. $$

Note that, in particular, the function f : R → R given by f(x) = x is linear and
[f(x)]² = x², so we have a proof that the function we usually write as y = x² is a convex
function.

The next result expands our repertoire of convex functions.

Proposition 5.1.7 (a) If A : Rᵐ → Rⁿ is linear and f : Rⁿ → R∗ is convex, then
f ◦ A is convex as a map from Rᵐ to R∗.

(b) If f is as in part (a) and ϕ : R → R is convex and non-decreasing, then ϕ ◦ f :
Rⁿ → R is convex.

(c) Let {fα}α∈A be a family of convex functions fα : Rⁿ → R∗. Then its upper envelope
supα∈A fα is convex.

Proof: To prove (a) we use Jensen's inequality: given any x, y ∈ Rᵐ and λ ∈ [0, 1] we
have

(f ◦ A) ( (1 − λ)x + λ y) = f ( (1 − λ) (Ax) + λ (Ay) ) ≤ (1 − λ) f (Ax) + λ f (Ay)


= (1 − λ) (f ◦ A)(x) + λ (f ◦ A)(y) .

For part (b), again we take x, y ∈ Rn and λ ∈ [0, 1]. Then

(ϕ ◦ f ) [ (1 − λ) x + λ y ] ≤ ϕ [ (1 − λ) f (x) + λ f (y) ]
≤ (1 − λ) ϕ(f (x)) + λ ϕ(f (y)) = (1 − λ) (ϕ ◦ f ) (x) + λ (ϕ ◦ f ) (y) ,

where the first inequality comes from the convexity of f and the monotonicity of ϕ, and
the second from the convexity of this latter function. This proves part (b).

To establish part (c) we note that, since the arbitrary intersection of convex sets is
convex, it suffices to show that

$$ \operatorname{epi}\Bigl(\,\sup_{\alpha \in A} f_\alpha\Bigr) = \bigcap_{\alpha \in A} \operatorname{epi}(f_\alpha). $$

To check the equality of these two sets, start with a point (x, z) ∈ epi(supα∈A fα).
Then z ≥ supα∈A fα(x) and so, for all β ∈ A, z ≥ fβ(x). Hence, by definition, (x, z) ∈
epi(fβ) for all β, from which it follows that

$$ (x, z) \in \bigcap_{\alpha \in A} \operatorname{epi}(f_\alpha). $$

Conversely, suppose (x, z) ∈ epi(fα) for all α ∈ A. Then z ≥ fα(x) for all α ∈ A
and hence z ≥ supα∈A fα(x). But this, by definition, implies (x, z) ∈ epi(supα∈A fα). This
completes the proof of part (c), and the proposition is proved.

Next, we introduce the definition:

Definition 5.1.8 Let f : Rⁿ → R∗, and α ∈ R. Then the sets

S(f, α) := {x ∈ Rⁿ | f(x) < α}  and  S̄(f, α) := {x ∈ Rⁿ | f(x) ≤ α}

are called lower sections of the function f.

Proposition 5.1.9 If f : Rⁿ → R∗ is convex, then its lower sections are likewise convex.

The proof of this result is trivial and we omit it.

The converse of this last proposition is false, as can easily be seen from the function
x ↦ x^{1/2} from (0, ∞) to R. However, the class of functions whose lower level sets S(f, α) (or,
equivalently, whose sets S̄(f, α)) are all convex is likewise an important class of functions; such
functions are called quasi-convex. They appear in game theory, nonlinear programming
(optimization), and mathematical economics. For example, quasi-convex utility
functions imply that consumers have convex preferences. They are obviously generalizations
of convex functions, since every convex function is clearly quasi-convex. However,
they are not as easy to work with. In particular, while the sum of two convex functions
is convex, the same is not true of quasi-convex functions, as the following example shows.

Example 5.1.10 Define

$$ f(x) = \begin{cases} 0 & x \le -2, \\ -(x+2) & -2 < x \le -1, \\ x & -1 < x \le 0, \\ 0 & x > 0, \end{cases} \qquad\text{and}\qquad g(x) = \begin{cases} 0 & x \le 0, \\ -x & 0 < x \le 1, \\ x-2 & 1 < x \le 2, \\ 0 & x > 2. \end{cases} $$

Here each function is piecewise linear, and the lower level sections of each are intervals, hence convex,
so that each is quasi-convex; yet the level section corresponding to α = −1/2 for the sum
f + g is [−3/2, −1/2] ∪ [1/2, 3/2], which is not convex. Hence the sum
f + g is not quasi-convex.

It is useful for applications to have an analytic criterion for quasi-convexity. This is


the content of the next result.

Proposition 5.1.11 A function f : Rn → R∗ is quasi-convex if and only if, for any


x, y ∈ Rn and any λ ∈ [0, 1] we have

f ( (1 − λ) x + λ y) ≤ max{f (x), f (y)} .

Proof: Suppose that the sets S̄(f, α) are convex for every α. Let x, y ∈ Rⁿ and let
α̃ := max{f(x), f(y)}. Then S̄(f, α̃) is convex and, since both f(x) ≤ α̃ and f(y) ≤ α̃,
both x and y belong to S̄(f, α̃). Since this latter set is convex, we have, for any λ ∈ [0, 1],

(1 − λ)x + λy ∈ S̄(f, α̃),  that is,  f((1 − λ)x + λy) ≤ α̃ = max{f(x), f(y)}.

Conversely, suppose the inequality holds, fix α ∈ R, and let x, y ∈ S̄(f, α) and λ ∈ [0, 1]. Then
f((1 − λ)x + λy) ≤ max{f(x), f(y)} ≤ α, so (1 − λ)x + λy ∈ S̄(f, α), and the lower sections
are convex.
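The analytic criterion lends itself to random spot-checking. The sketch below applies it to x ↦ |x|^{1/2}, which is quasi-convex (its lower sections are intervals) but not convex; the sampling scheme and tolerances are, of course, only illustrative:

```python
import math, random

def h(x):
    return math.sqrt(abs(x))    # quasi-convex on R, but not convex

random.seed(1)
quasi = True
convex = True
for _ in range(10000):
    x, y = random.uniform(-4, 4), random.uniform(-4, 4)
    lam = random.random()
    z = (1 - lam) * x + lam * y
    if h(z) > max(h(x), h(y)) + 1e-12:
        quasi = False
    if h(z) > (1 - lam) * h(x) + lam * h(y) + 1e-12:
        convex = False

print(quasi)    # True: the quasi-convexity criterion holds on the sample
print(convex)   # False: Jensen's inequality fails on the sample, so h is not convex
```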

As we have seen above, the sum of two quasi-convex functions may well not be quasi-
convex. With this analytic test for quasi-convexity, we can check that there are certain
operations which preserve quasi-convexity. We leave the proof of the following result to
the reader.

Proposition 5.1.12 (a) If the functions f₁, …, f_k are quasi-convex and α₁, …, α_k are
non-negative real numbers, then the function f := max{α₁f₁, …, α_k f_k} is quasi-
convex.

(b) If ϕ : R → R is a non-decreasing function and f : Rn → R is quasi-convex, then the


composition ϕ ◦ f is a quasi-convex function.

We now return to the study of convex functions.

A simple sketch of the parabola y = x² and any horizontal chord (which necessarily lies
above the graph) will convince the reader that the points of the domain at which the
function takes values below that horizontal line form a convex set in the
domain. Indeed, this is a property of convex functions which is often useful.

Proposition 5.1.13 If C ⊂ Rn is a convex set and f : C −→ R is a convex function,


then the level sets {x ∈ C | f (x) ≤ α} and {x ∈ C | f (x) < α} are convex for all scalars
α.

Proof: We leave this proof as an exercise.

Notice that, since the intersection of convex sets is convex, the set of points simultaneously
satisfying m inequalities f₁(x) ≤ c₁, f₂(x) ≤ c₂, …, f_m(x) ≤ c_m, where each fᵢ is
a convex function, is a convex set. In particular, the polyhedral region defined by a
set of such inequalities when the fᵢ are affine is convex.

From this result, we can obtain an important fact about points at which a convex
function attains a minimum.

Proposition 5.1.14 Let C ⊂ Rⁿ be a convex set and f : C → R a convex function.
Then the set of points M ⊂ C at which f attains its minimum is convex. Moreover, any
relative minimum is an absolute minimum.

Proof: If the function does not attain its minimum at any point of C, then the set of
such points is empty, which is a convex set. So, suppose that the set of points at which
the function attains its minimum is non-empty and let m be the minimal value attained
by f. If x, y ∈ M and λ ∈ [0, 1] then certainly (1 − λ)x + λy ∈ C and so

m ≤ f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y) = m,

and so the point (1 − λ)x + λy ∈ M. Hence M, the set of minimal points, is convex.

Now, suppose that x⋆ ∈ C is a relative minimum point of f, but that there is another
point x̂ ∈ C such that f(x̂) < f(x⋆). On the segment (1 − λ)x̂ + λx⋆, 0 < λ < 1, we have

f((1 − λ)x̂ + λx⋆) ≤ (1 − λ)f(x̂) + λf(x⋆) < f(x⋆),

contradicting the fact that x⋆ is a relative minimum point.

Again, the example of the simple parabola shows that the set M may well contain only
a single point, i.e., it may well be that the minimum point is unique. We can guarantee
that this is the case for an important class of convex functions.

Definition 5.1.15 A real-valued function f, defined on a convex set C ⊂ Rⁿ, is said to be
strictly convex provided, for all x, y ∈ C with x ≠ y and all λ ∈ (0, 1), we have

f((1 − λ)x + λy) < (1 − λ)f(x) + λf(y).



Proposition 5.1.16 If C ⊂ Rⁿ is a convex set and f : C → R is a strictly convex
function, then f attains its minimum at, at most, one point.

Proof: Suppose that the set of minimal points M is not empty and contains two distinct
points x and y. Then, for any 0 < λ < 1, since M is convex, we have (1 − λ)x + λy ∈ M .
But f is strictly convex. Hence

m = f ( (1 − λ) x + λ y ) < (1 − λ) f (x) + λ f (y) = m ,

which is a contradiction.

If a function is differentiable then, as in the case in elementary calculus, we can give


characterizations of convex functions using derivatives. If f is a continuously differentiable
function defined on an open convex set C ⊂ Rn then we denote its gradient at x ∈ C, as
usual, by ∇ f (x). The excess function

E(x, y) := f(y) − f(x) − ⟨∇f(x), y − x⟩

is a measure of the discrepancy between the value of f at the point y and the value of
the tangent approximation at x to f at the point y. This is illustrated in the next figure.

Now we introduce the notion of a monotone derivative.

Definition 5.1.17 The map x ↦ ∇f(x) is said to be monotone on C ⊂ Rⁿ provided

⟨∇f(y) − ∇f(x), y − x⟩ ≥ 0,

for all x, y ∈ C.

We can now characterize convexity in terms of the function E and the monotonicity
concept just introduced. However, before stating and proving the next theorem, we need
a lemma.

Lemma 5.1.18 Let f be a real-valued, differentiable function defined on an open interval


I ⊂ R. If the first derivative f′ is a non-decreasing function on I, then the function f is
convex on I.

Proof: Choose x, y ∈ I with x < y, and for any λ ∈ [0, 1] define zλ := (1 − λ)x + λy. By
the Mean Value Theorem, there exist u, v ∈ R with x ≤ v ≤ zλ ≤ u ≤ y such that

f(y) = f(zλ) + (y − zλ)f′(u),  and  f(zλ) = f(x) + (zλ − x)f′(v).

But y − zλ = y − (1 − λ)x − λy = (1 − λ)(y − x) and zλ − x = (1 − λ)x + λy − x = λ(y − x),
and so the two expressions above may be rewritten as

f(y) = f(zλ) + (1 − λ)(y − x)f′(u),  and  f(zλ) = f(x) + λ(y − x)f′(v).

Since, by choice, v ≤ u, and since f′ is non-decreasing, the latter equation yields

f(zλ) ≤ f(x) + λ(y − x)f′(u).

Hence, multiplying this last inequality by (1 − λ) and the expression for f(y) by −λ and
adding, we get

(1 − λ)f(zλ) − λf(y) ≤ (1 − λ)f(x) + λ(1 − λ)(y − x)f′(u) − λf(zλ) − λ(1 − λ)(y − x)f′(u),

which we then rearrange to yield

(1 − λ)f(zλ) + λf(zλ) = f(zλ) ≤ (1 − λ)f(x) + λf(y),

and this is just the condition for the convexity of f.

We can now prove a theorem which gives three different characterizations of convexity
for continuously differentiable functions.

Theorem 5.1.19 Let f be a continuously differentiable function defined on an open con-


vex set C ⊂ Rn . Then the following are equivalent:

(a) E(x, y) ≥ 0 for all x, y ∈ C;

(b) the map x ↦ ∇f(x) is monotone in C;

(c) the function f is convex on C.



Proof: Suppose that (a) holds, i.e. E(x, y) ≥ 0 on C × C. Then we have both

f(y) − f(x) ≥ ⟨∇f(x), y − x⟩,

and

f(x) − f(y) ≥ ⟨∇f(y), x − y⟩ = −⟨∇f(y), y − x⟩.

From the second inequality, f(y) − f(x) ≤ ⟨∇f(y), y − x⟩, and so

⟨∇f(y) − ∇f(x), y − x⟩ = ⟨∇f(y), y − x⟩ − ⟨∇f(x), y − x⟩ ≥ (f(y) − f(x)) − (f(y) − f(x)) = 0.

Hence, the map x ↦ ∇f(x) is monotone in C.

Now suppose the map x ↦ ∇f(x) is monotone in C, and choose x, y ∈ C. Define a


function ϕ : [0, 1] → R by ϕ(t) := f (x + t(y − x)). We observe, first, that if ϕ is convex
on [0, 1] then f is convex on C. To see this, let u, v ∈ [0, 1] be arbitrary. On the one hand,

ϕ((1 − λ)u + λv) = f(x + [(1 − λ)u + λv](y − x)) = f((1 − [(1 − λ)u + λv])x + ((1 − λ)u + λv)y),

while, on the other hand, since ϕ is assumed convex,

ϕ((1 − λ)u + λv) ≤ (1 − λ)ϕ(u) + λϕ(v) = (1 − λ)f(x + u(y − x)) + λf(x + v(y − x)).

Setting u = 0 and v = 1 in the above expressions yields

f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y),

so the convexity of ϕ on [0, 1] implies the convexity of f on C.

Now, choose any α, β with 0 ≤ α < β ≤ 1. Then

$$ \varphi'(\beta) - \varphi'(\alpha) = \bigl\langle \nabla f(x + \beta(y - x)) - \nabla f(x + \alpha(y - x)),\; y - x \bigr\rangle. $$

Setting u := x + α(y − x) and v := x + β(y − x), we have v − u = (β − α)(y − x) and so

$$ \varphi'(\beta) - \varphi'(\alpha) = \langle \nabla f(v) - \nabla f(u),\, y - x\rangle = \frac{1}{\beta - \alpha}\,\langle \nabla f(v) - \nabla f(u),\, v - u\rangle \ge 0. $$

Hence ϕ′ is non-decreasing, so that, by Lemma 5.1.18, the function ϕ is convex, and therefore,
by the observation above, f is convex on C.

Finally, if f is convex on C, then, for fixed x, y ∈ C, define

h(λ) := (1 − λ)f(x) + λf(y) − f((1 − λ)x + λy).

Then λ ↦ h(λ) is a non-negative, differentiable function on [0, 1] which attains its minimum
at λ = 0. Therefore 0 ≤ h′(0) = E(x, y), and the proof is complete.
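The equivalences of the theorem can be spot-checked numerically for a concrete smooth convex function; the sketch below assumes the test function f(x) = Σᵢ exp(xᵢ) and verifies on random samples that the excess function is nonnegative and the gradient map monotone:

```python
import numpy as np

def f(x):
    return np.sum(np.exp(x))        # a smooth convex function on R^n

def grad_f(x):
    return np.exp(x)                # its gradient

def excess(x, y):
    # E(x, y) = f(y) - f(x) - <grad f(x), y - x>
    return f(y) - f(x) - np.dot(grad_f(x), y - x)

rng = np.random.default_rng(0)
ok_excess = ok_monotone = True
for _ in range(1000):
    x = rng.uniform(-2, 2, size=3)
    y = rng.uniform(-2, 2, size=3)
    if excess(x, y) < -1e-10:
        ok_excess = False
    if np.dot(grad_f(y) - grad_f(x), y - x) < -1e-10:
        ok_monotone = False

print(ok_excess, ok_monotone)   # True True, consistent with (a) and (b) of the theorem
```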

As an immediate corollary, we have

Corollary 5.1.20 Let f be a continuously differentiable convex function defined on a
convex set C. If there is a point x⋆ ∈ C such that, for all y ∈ C, ⟨∇f(x⋆), y − x⋆⟩ ≥ 0,
then x⋆ is an absolute minimum point of f over C.

Proof: By the preceding theorem, the convexity of f implies that

f(y) − f(x⋆) ≥ ⟨∇f(x⋆), y − x⋆⟩,

and so, by hypothesis,

f(y) ≥ f(x⋆) + ⟨∇f(x⋆), y − x⋆⟩ ≥ f(x⋆).

The inequality E(x, y) ≥ 0 shows that local information about a convex function
(given in terms of the derivative at a point) yields global information, in the form of a global
underestimator of the function f. In a way, this is the key property of convex functions.
For example, suppose that ∇f(x) = 0. Then, for all y ∈ dom(f), f(y) ≥ f(x), so that
x is a global minimizer of the convex function f.

It is also important to remark that the hypothesis that the convex function f is defined
on a convex set is crucial, both for the first order conditions and for the second order
conditions. Indeed, consider the function f(x) = 1/x² with domain {x ∈ R | x ≠ 0}. The
usual second order condition f″(x) > 0 holds for all x ∈ dom(f), yet f is not convex there,
so that the second order test fails.
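The failure is easy to exhibit numerically: f″ is positive at every point of the domain, yet Jensen's inequality breaks down for points on opposite sides of the excluded origin. A minimal sketch:

```python
def f(x):
    return 1.0 / (x * x)          # defined for x != 0

def f_second(x):
    return 6.0 / x**4             # f''(x) > 0 at every point of the domain

x, y, lam = -1.0, 1.0, 0.4
z = (1 - lam) * x + lam * y       # z = -0.2, still inside the (non-convex) domain
print(f_second(x) > 0, f_second(y) > 0, f_second(z) > 0)   # True True True
print(f(z) <= (1 - lam) * f(x) + lam * f(y))               # False: Jensen's inequality fails
```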

The condition E(x, y) ≥ 0 can be given an important geometrical interpretation in
terms of epigraphs. Indeed, if f is convex and x, y ∈ dom(f), then for (y, z) ∈ epi(f) the inequality

z ≥ f(y) ≥ f(x) + ∇f(x)ᵀ(y − x)

can be expressed as

$$ \begin{pmatrix} \nabla f(x) \\ -1 \end{pmatrix}^{\!\top} \left( \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} x \\ f(x) \end{pmatrix} \right) \le 0. $$

This shows that the hyperplane defined by (∇f(x), −1)ᵀ supports epi(f) at the boundary
point (x, f(x)).

We now turn to so-called second order criteria for convexity. The discussion involves
the Hessian matrix of a twice continuously differentiable function, and depends on the
question of whether this matrix is positive semi-definite or even positive definite (for strict
convexity). Let us recall some definitions.

Definition 5.1.21 A real symmetric n × n matrix A is said to be

(a) Positive definite provided xᵀAx > 0 for all x ∈ Rⁿ, x ≠ 0.

(b) Negative definite provided xᵀAx < 0 for all x ∈ Rⁿ, x ≠ 0.

(c) Positive semidefinite provided xᵀAx ≥ 0 for all x ∈ Rⁿ.

(d) Negative semidefinite provided xᵀAx ≤ 0 for all x ∈ Rⁿ.

(e) Indefinite provided xᵀAx takes on values that differ in sign.

It is important to be able to determine if a matrix is indeed positive definite. In


order to do this, a number of criteria have been developed. Perhaps the most important
characterization is in terms of the eigenvalues.

Theorem 5.1.22 Let A be a real symmetric n × n matrix. Then A is positive definite if


and only if all its eigenvalues are positive.

Proof: If A is positive definite and λ is an eigenvalue of A, then, for any eigenvector x
belonging to λ,

xᵀAx = λxᵀx = λ‖x‖².

Hence

$$ \lambda = \frac{x^\top A x}{\|x\|^2} > 0. $$

Conversely, suppose that all the eigenvalues of A are positive. Let {x₁, …, xₙ} be an
orthonormal set of eigenvectors of A. Then any x ∈ Rⁿ with x ≠ 0 can be written as

x = α₁x₁ + α₂x₂ + ⋯ + αₙxₙ

with

$$ \alpha_i = x^\top x_i \ \text{ for } i = 1, 2, \ldots, n, \qquad\text{and}\qquad \sum_{i=1}^{n} \alpha_i^2 = \|x\|^2 > 0. $$

It follows that

$$ x^\top A x = (\alpha_1 x_1 + \cdots + \alpha_n x_n)^\top (\alpha_1 \lambda_1 x_1 + \cdots + \alpha_n \lambda_n x_n) = \sum_{i=1}^{n} \alpha_i^2 \lambda_i \;\ge\; \bigl(\min_i \lambda_i\bigr)\,\|x\|^2 > 0. $$

Hence A is positive definite.

In simple cases where we can compute the eigenvalues easily, this is a useful criterion.

Example 5.1.23 Let

$$ A = \begin{pmatrix} 2 & -2 \\ -2 & 5 \end{pmatrix}. $$

Then the eigenvalues are the roots of

det(A − λI) = (2 − λ)(5 − λ) − 4 = (λ − 1)(λ − 6).

Hence the eigenvalues are both positive, and hence the matrix is positive definite. In this
particular case it is easy to check directly that A is positive definite. Indeed, for (x₁, x₂) ≠ (0, 0),

$$ (x_1, x_2) \begin{pmatrix} 2 & -2 \\ -2 & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = (x_1, x_2) \begin{pmatrix} 2x_1 - 2x_2 \\ -2x_1 + 5x_2 \end{pmatrix} = 2x_1^2 - 4x_1 x_2 + 5x_2^2 $$
$$ = 2\,[\,x_1^2 - 2x_1 x_2 + x_2^2\,] + 3x_2^2 = 2\,(x_1 - x_2)^2 + 3x_2^2 > 0. $$

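With a numerical linear-algebra library the eigenvalue criterion is immediate; the sketch below applies numpy's symmetric eigenvalue routine to the matrix of Example 5.1.23:

```python
import numpy as np

A = np.array([[2.0, -2.0],
              [-2.0, 5.0]])

eigenvalues = np.linalg.eigvalsh(A)   # eigvalsh: eigenvalues of a symmetric (Hermitian) matrix
print(eigenvalues)                    # [1. 6.]
print(np.all(eigenvalues > 0))        # True, so A is positive definite
```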
This last theorem has some immediate useful consequences. First, if A is positive
definite, then A must be nonsingular, since singular matrices have λ = 0 as an eigenvalue.
Moreover, since the determinant det(A) is the product of the eigenvalues, and since
each eigenvalue is positive, we have det(A) > 0. Finally, we have the following result, which
depends on the notion of leading principal submatrices.

Definition 5.1.24 Given any n × n matrix A, let Aᵣ denote the matrix formed by deleting
the last n − r rows and columns of A. Then Aᵣ is called the r-th leading principal submatrix
of A.

Proposition 5.1.25 If A is a symmetric positive definite matrix then the leading prin-
cipal submatrices A1 , A2 , . . . , An of A are all positive definite. In particular, det(Ar ) > 0.

Proof: Let xᵣ = (x₁, x₂, …, xᵣ)ᵀ be any non-zero vector in Rʳ. Set

x = (x₁, x₂, …, xᵣ, 0, …, 0)ᵀ ∈ Rⁿ.

Since xᵣᵀAᵣxᵣ = xᵀAx > 0, it follows that Aᵣ is positive definite, by definition.

This proposition is half of the famous criterion of Sylvester for positive definite matrices.

Theorem 5.1.26 A real, symmetric matrix A is positive definite if and only if all of its
leading principal minors det(A₁), det(A₂), …, det(Aₙ) are positive.

We will not prove this theorem here but refer the reader to his or her favorite treatise
on linear algebra.

Example 5.1.27 Let

$$ A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}. $$

Then

$$ A_1 = (2), \qquad A_2 = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}, \qquad A_3 = A, $$

so that

det A₁ = 2,  det A₂ = 4 − 1 = 3,  and  det A₃ = det A = 4.

Hence, according to Sylvester's criterion, the matrix A is positive definite.
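Sylvester's criterion is equally easy to automate: compute the determinants of the leading principal submatrices and check that all are positive. A sketch for the matrix of Example 5.1.27 (the helper function name is ad hoc):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

def leading_principal_minors(A):
    # determinants of A_1, A_2, ..., A_n (upper-left r x r blocks)
    return [np.linalg.det(A[:r, :r]) for r in range(1, A.shape[0] + 1)]

minors = leading_principal_minors(A)
print(np.round(minors, 6))            # [2. 3. 4.]
print(all(m > 0 for m in minors))     # True, so A is positive definite by Sylvester's criterion
```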

Now we are ready to look at second order conditions for convexity.

Proposition 5.1.28 Let D ⊂ Rn be an open convex set and let f : D −→ R be twice


continuously differentiable in D. Then f is convex if and only if the Hessian matrix of f
is positive semidefinite throughout D.

Proof: By Taylor's Theorem we have

$$ f(y) = f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{1}{2}\,\bigl\langle y - x,\; \nabla^2 f\bigl(x + \lambda(y - x)\bigr)(y - x)\bigr\rangle, $$

for some λ ∈ [0, 1]. Clearly, if the Hessian is positive semi-definite, we have

f(y) ≥ f(x) + ⟨∇f(x), y − x⟩,

which, in view of the definition of the excess function, means that E(x, y) ≥ 0, which in turn
implies that f is convex on D.

Conversely, suppose that the Hessian is not positive semi-definite at some point x ∈ D.
Then, by the continuity of the Hessian, there is a point y ∈ D, sufficiently close to x, so that,
for all λ ∈ [0, 1],

⟨y − x, ∇²f(x + λ(y − x))(y − x)⟩ < 0,

which, in light of the second order Taylor expansion, implies that E(x, y) < 0, and so f
cannot be convex.

Let us consider, as an example, the quadratic function f : Rⁿ → R with dom(f) = Rⁿ,
given by

$$ f(x) = \tfrac{1}{2}\, x^\top Q\, x + q^\top x + r, $$

with Q an n × n symmetric matrix, q ∈ Rⁿ and r ∈ R. Then, since as we have seen
previously ∇²f(x) = Q, the function f is convex if and only if Q is positive semidefinite.
Strict convexity of f is likewise characterized by the positive definiteness of Q.
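For a quadratic, then, the convexity test reduces to inspecting Q alone. A sketch with an arbitrarily chosen Q, q and r:

```python
import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # an assumed symmetric matrix
q = np.array([1.0, -2.0])
r = 0.5

def f(x):
    return 0.5 * x @ Q @ x + q @ x + r

eigs = np.linalg.eigvalsh(Q)
print(np.all(eigs >= 0))   # True -> f is convex
print(np.all(eigs > 0))    # True -> f is in fact strictly convex

# spot-check Jensen's inequality for this quadratic
rng = np.random.default_rng(0)
x, y, lam = rng.normal(size=2), rng.normal(size=2), 0.3
print(f((1 - lam) * x + lam * y) <= (1 - lam) * f(x) + lam * f(y) + 1e-12)   # True
```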

These first and second order criteria give us methods of showing that a
given function is convex: we either check the definition via Jensen's inequality, using
the equivalence given by Theorem 5.1.4, or show that the Hessian is positive
semi-definite. Let us look at some simple examples.

Example 5.1.29 (a) Consider the real-valued function defined on R₊ by f(x) = x ln(x). Since this
function is in C²(R₊), with f′(x) = ln(x) + 1 and f″(x) = 1/x > 0, we see that f is (even
strictly) convex.

(b) The max function f(x) = max{x₁, …, xₙ} is convex on Rⁿ. Here we can use Jensen's
inequality. Let λ ∈ [0, 1]; then

$$ f((1-\lambda)x + \lambda y) = \max_{1 \le i \le n} \bigl((1-\lambda)x_i + \lambda y_i\bigr) \le (1-\lambda)\max_{1 \le i \le n} x_i + \lambda \max_{1 \le i \le n} y_i = (1-\lambda)f(x) + \lambda f(y). $$

(c) The function q : R × R₊ → R given by q(x, y) = x²/y is convex. In this case,
∇q(x, y) = (2x/y, −x²/y²)ᵀ, while an easy computation shows

$$ \nabla^2 q(x, y) = \frac{2}{y^3} \begin{pmatrix} y^2 & -xy \\ -xy & x^2 \end{pmatrix}. $$

Since y > 0 and

$$ (u_1, u_2) \begin{pmatrix} y^2 & -xy \\ -xy & x^2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = (u_1 y - u_2 x)^2 \ge 0, $$

the Hessian of q is positive semidefinite and the function is convex.
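The sign claim for this Hessian can be confirmed numerically at sample points with y > 0; the sketch below builds ∇²q from the formula above and inspects its eigenvalues (the smallest is always 0, so the matrix is semidefinite but not definite):

```python
import numpy as np

def hess_q(x, y):
    # Hessian of q(x, y) = x^2 / y for y > 0, from the formula above
    return (2.0 / y**3) * np.array([[y * y, -x * y],
                                    [-x * y, x * x]])

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.uniform(-3, 3), rng.uniform(0.1, 3)
    eigs = np.linalg.eigvalsh(hess_q(x, y))
    print(np.round(eigs, 8), np.all(eigs >= -1e-10))   # smallest eigenvalue 0: positive semidefinite
```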

