Non-Linear Programming Problems
Classical Optimization
Classical optimization theory uses differential calculus to determine points of maxima and minima for unconstrained and constrained functions. This chapter develops the necessary and sufficient conditions for determining such maxima and minima.
Quadratic forms
Consider the function
$$f(X) = a_{11} x_1^2 + a_{22} x_2^2 + \cdots + a_{nn} x_n^2 + 2 \sum_{1 \le i < j \le n} a_{ij} x_i x_j.$$
Quadratic forms
This is the quadratic form
$$Q(X) = X^T A X = \sum_{i=1}^{n} a_{ii} x_i^2 + 2 \sum_{1 \le i < j \le n} a_{ij} x_i x_j,$$
where $A = [a_{ij}]$ is a real symmetric matrix.
2. Eigenvalue Test
Since the matrix $A$ in $X^T A X$ is real and symmetric, its eigenvalues $\lambda_i$ are all real. Then $X^T A X$ is:
positive definite (positive semidefinite) if $\lambda_i > 0$ ($\lambda_i \ge 0$) for all $i = 1, 2, \ldots, n$;
negative definite (negative semidefinite) if $\lambda_i < 0$ ($\lambda_i \le 0$) for all $i = 1, 2, \ldots, n$;
indefinite if $A$ has both positive and negative eigenvalues.
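The eigenvalue test is easy to automate. Below is a minimal NumPy sketch (the helper name and tolerance are illustrative choices, not part of the original notes):

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify X^T A X via the eigenvalues of the real symmetric matrix A."""
    eig = np.linalg.eigvalsh(A)          # eigvalsh: eigenvalues of a symmetric matrix
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semidefinite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semidefinite"
    return "indefinite"                  # both positive and negative eigenvalues

# Example: A = [[2, 1], [1, 2]] has eigenvalues 1 and 3, so it is positive definite.
print(classify_quadratic_form(np.array([[2.0, 1.0], [1.0, 2.0]])))
```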
Definition:
A point $X_0$ is a local maximum of $f(X)$ if
$$f(X_0 + h) \le f(X_0)$$
for all $h = (h_1, h_2, \ldots, h_n)^T$ with each $|h_j|$ sufficiently small, where $X_0 = (x_1^0, x_2^0, \ldots, x_n^0)^T$ and $X_0 + h = (x_1^0 + h_1, x_2^0 + h_2, \ldots, x_n^0 + h_n)^T$.
Similarly, $X_0$ is a local minimum of $f(X)$ if
$$f(X_0 + h) \ge f(X_0)$$
for all such $h$.
$X_0$ is called an absolute maximum or global maximum of $f(X)$ if
$$f(X) \le f(X_0) \quad \forall X.$$
$X_0$ is called an absolute minimum or global minimum of $f(X)$ if
$$f(X) \ge f(X_0) \quad \forall X.$$
Taylor Series expansion for functions of several variables
$$f(X_0 + h) - f(X_0) = \nabla f(X_0)\, h + \tfrac{1}{2}\, h^T H h + \text{higher-order terms},$$
where $\nabla f(X_0)$ is the gradient of first partial derivatives and $H$ is the Hessian matrix of second partial derivatives, both evaluated at $X_0$.
Theorem
A necessary condition for $X_0$ to be an optimum point of $f(X)$ is that
$$\nabla f(X_0) = 0,$$
that is, all the first-order partial derivatives $\partial f / \partial x_i$ are zero at $X_0$.
Theorem
Let $X_0$ be a stationary point of $f(X)$. A sufficient condition for $X_0$ to be a
local minimum of $f(X)$ is that the Hessian matrix $H(X_0)$ is positive definite;
local maximum of $f(X)$ is that the Hessian matrix $H(X_0)$ is negative definite.
Hessian matrix
$$H(X) = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\
\dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}$$
Example 1
Examine the following function for extreme points:
$$f(x_1, x_2, x_3) = x_1 + 2x_3 + x_2 x_3 - x_1^2 - x_2^2 - x_3^2.$$
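The necessary and sufficient conditions above can be checked symbolically. A minimal SymPy sketch for Example 1 (an illustration, not part of the original notes):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2

# Necessary condition: all first-order partial derivatives vanish.
grad = [sp.diff(f, v) for v in (x1, x2, x3)]
stationary = sp.solve(grad, (x1, x2, x3), dict=True)
print(stationary)                 # [{x1: 1/2, x2: 2/3, x3: 4/3}]

# Sufficient condition: test the definiteness of the Hessian at that point.
H = sp.hessian(f, (x1, x2, x3))   # constant here: [[-2,0,0],[0,-2,1],[0,1,-2]]
print(H.eigenvals())              # eigenvalues -1, -2, -3: all negative
# The Hessian is negative definite, so the stationary point is a local maximum.
```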
Definition:
A function $f(X) = f(x_1, x_2, \ldots, x_n)$ of $n$ variables is said to be convex if, for each pair of points $X$, $Y$ on its graph, the line segment joining these two points lies entirely above or on the graph; i.e.,
$$f\big((1-\lambda) X + \lambda Y\big) \le (1-\lambda) f(X) + \lambda f(Y)$$
for all $0 \le \lambda \le 1$.
For a function $f(x)$ of a single variable, $f$ is
convex if $\dfrac{d^2 f}{dx^2} \ge 0$, and
concave if $\dfrac{d^2 f}{dx^2} \le 0$.
For a function f(x, y) of two variables:

                    f_xx    f_yy
Convex              ≥ 0     ≥ 0
Strictly convex     > 0     > 0
Concave             ≤ 0     ≤ 0
Strictly concave    < 0     < 0
Results
(1) The sum of two convex functions is convex.
(2) Let $f(X) = X^T A X$. Then $f(X)$ is convex if $X^T A X$ is positive semidefinite.

Example: Show that the following function is convex:
$$f(x_1, x_2) = -2 x_1 x_2 + x_1^2 + 2 x_2^2.$$
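One way to verify this: write $f$ as $X^T A X$ with a symmetric $A$ and apply the eigenvalue test from earlier. A minimal numerical sketch (the matrix is read off from $f$):

```python
import numpy as np

# f(x1, x2) = x1^2 - 2*x1*x2 + 2*x2^2 = X^T A X with the symmetric matrix:
A = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

eig = np.linalg.eigvalsh(A)
print(eig)               # approx [0.382, 2.618], both positive
print(np.all(eig >= 0))  # True -> positive (semi)definite -> f is convex
```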
Constrained Optimization Problems
KKT conditions
Augment each constraint $g_i(X) \le 0$ with a squared slack variable $S_i$ and form the Lagrangian $L(X, S, \lambda) = f(X) - \lambda \big( g(X) + S^2 \big)$. The KKT necessary conditions for optimality are then given by
$$\frac{\partial L}{\partial X} = \nabla f(X) - \lambda \nabla g(X) = 0,$$
$$\frac{\partial L}{\partial S_i} = -2 \lambda_i S_i = 0, \quad i = 1, 2, \ldots, m,$$
$$\frac{\partial L}{\partial \lambda} = -\big( g(X) + S^2 \big) = 0.$$
KKT conditions
The KKT necessary conditions for the maximization problem are:
$$\lambda \ge 0,$$
$$\nabla f(X) - \lambda \nabla g(X) = 0,$$
$$\lambda_i g_i(X) = 0, \quad i = 1, 2, \ldots, m,$$
$$g_i(X) \le 0, \quad i = 1, 2, \ldots, m.$$
KKT conditions
In scalar notation, this is given by
$$\lambda_i \ge 0, \quad i = 1, 2, \ldots, m,$$
$$\frac{\partial f}{\partial x_j} - \lambda_1 \frac{\partial g_1}{\partial x_j} - \lambda_2 \frac{\partial g_2}{\partial x_j} - \cdots - \lambda_m \frac{\partial g_m}{\partial x_j} = 0, \quad j = 1, 2, \ldots, n,$$
$$\lambda_i g_i(X) = 0, \quad i = 1, 2, \ldots, m,$$
$$g_i(X) \le 0, \quad i = 1, 2, \ldots, m.$$
IMPORTANT: The KKT conditions can be satisfied at a local minimum (or maximum), at a global minimum (or maximum), as well as at a saddle point. We can use the KKT conditions to characterize all the stationary points of the problem, and then perform some additional testing to determine the optimal solutions.
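One simple form of such additional testing is a numerical KKT check at a candidate point. A minimal sketch (the helper `kkt_residuals`, the tolerance, and the toy problem are illustrative assumptions, not from the original notes):

```python
import numpy as np

def kkt_residuals(grad_f, grad_gs, gs, x, lam, tol=1e-8):
    """Check the KKT conditions for: maximize f(X) s.t. g_i(X) <= 0.

    grad_f(x)   -> gradient of the objective at x
    grad_gs[i]  -> gradient function of the i-th constraint
    gs[i]       -> the i-th constraint function
    lam         -> trial Lagrange multipliers, one per constraint
    """
    g_vals = np.array([g(x) for g in gs])
    # Stationarity: grad f - sum_i lam_i * grad g_i = 0
    stat = grad_f(x) - sum(l * gg(x) for l, gg in zip(lam, grad_gs))
    return {
        "stationarity":    np.all(np.abs(stat) <= tol),
        "feasibility":     np.all(g_vals <= tol),
        "complementarity": np.all(np.abs(lam * g_vals) <= tol),
        "nonnegativity":   np.all(np.asarray(lam) >= -tol),
    }

# Toy usage: maximize f = -(x1^2 + x2^2) subject to g = 1 - x1 <= 0.
# Candidate optimum x = (1, 0) with lambda = 2: grad f = (-2, 0), grad g = (-1, 0).
print(kkt_residuals(
    grad_f=lambda x: np.array([-2*x[0], -2*x[1]]),
    grad_gs=[lambda x: np.array([-1.0, 0.0])],
    gs=[lambda x: 1 - x[0]],
    x=np.array([1.0, 0.0]),
    lam=np.array([2.0]),
))  # all four checks print True
```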
The KKT conditions are also sufficient under the following conditions:

Sense of optimization    Objective function    Solution space
maximization             concave               convex set
minimization             convex                convex set
Example
Consider the problem:
Maximize $f(x_1, x_2) = x_1 + 2x_2 - x_2^3$
subject to
$$x_1 + x_2 \le 1, \quad x_1 \ge 0, \quad x_2 \ge 0,$$
with the constraints rewritten as
$$g_1(x_1, x_2) = x_1 + x_2 - 1 \le 0,$$
$$g_2(x_1, x_2) = -x_1 \le 0,$$
$$g_3(x_1, x_2) = -x_2 \le 0.$$

The KKT necessary conditions for the maximization problem are:
$$\lambda_i \ge 0, \qquad \nabla f(X) - \lambda \nabla g(X) = 0,$$
$$\lambda_i g_i(X) = 0, \qquad g_i(X) \le 0, \qquad i = 1, 2, 3.$$

Here these read:

$1 - \lambda_1 + \lambda_2 = 0$   (1)
$2 - 3x_2^2 - \lambda_1 + \lambda_3 = 0$   (2)
$\lambda_1 (x_1 + x_2 - 1) = 0$   (3)
$\lambda_2 x_1 = 0$   (4)
$\lambda_3 x_2 = 0$   (5)
$x_1 + x_2 - 1 \le 0$   (6)
$x_1 \ge 0$   (7)
$x_2 \ge 0$   (8)
$\lambda_1 \ge 0$   (9)
$\lambda_2 \ge 0$   (10)
$\lambda_3 \ge 0$   (11)
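The system (1)–(11) can be solved by finding all solutions of the equality conditions (1)–(5) and filtering them with (6)–(11). A minimal SymPy sketch (my own illustration, not the slides' method):

```python
import sympy as sp

x1, x2, l1, l2, l3 = sp.symbols('x1 x2 lam1 lam2 lam3', real=True)

eqs = [1 - l1 + l2,              # (1) stationarity in x1
       2 - 3*x2**2 - l1 + l3,    # (2) stationarity in x2
       l1*(x1 + x2 - 1),         # (3) complementary slackness
       l2*x1,                    # (4)
       l3*x2]                    # (5)

f = x1 + 2*x2 - x2**3            # the objective of the example above

for s in sp.solve(eqs, (x1, x2, l1, l2, l3), dict=True):
    feasible = (s[x1] + s[x2] <= 1) and (s[x1] >= 0) and (s[x2] >= 0)
    nonneg = (s[l1] >= 0) and (s[l2] >= 0) and (s[l3] >= 0)
    if feasible and nonneg:      # conditions (6)-(11)
        print(s, ' f =', sp.simplify(f.subs(s)))
# The surviving KKT point is x1 = 1 - sqrt(3)/3, x2 = sqrt(3)/3, lam1 = 1.
```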
Another example [the problem statement is not recoverable from the extracted text]: with
$$g_1 = x_1^2 + x_2^2 - 9 \le 0, \qquad g_2 = x_1 + x_2 - 1 \le 0,$$
the complementary-slackness and feasibility conditions are
$$\lambda_1 (x_1^2 + x_2^2 - 9) = 0,$$
$$\lambda_2 (x_1 + x_2 - 1) = 0,$$
$$x_1^2 + x_2^2 \le 9,$$
$$x_1 + x_2 \le 1.$$
Now $\lambda_1 = 0$ leads to a contradiction. Hence $\lambda_1 \ne 0$, and the first constraint is active:
$$x_1^2 + x_2^2 = 9. \quad (5)$$
The stationarity condition in $x_1$ gives
$$-2 x_1 (1 + \lambda_1) = 0,$$
and since $\lambda_1 \ge 0$ (so $1 + \lambda_1 > 0$) we get $x_1 = 0$, whence the optimal value is $z = 3$.
Quadratic Programming
A quadratic programming problem is a non-linear programming problem of the form:

Maximize $z = CX + X^T D X$
subject to $AX \le b$, $X \ge 0$,

where $X = (x_1, x_2, \ldots, x_n)^T$, $C = (c_1, c_2, \ldots, c_n)$, $b = (b_1, b_2, \ldots, b_m)^T$, $A = [a_{ij}]$ is the $m \times n$ matrix of constraint coefficients, and $D = [d_{ij}]$ is a symmetric $n \times n$ matrix. For the maximization problem, $X^T D X$ is taken to be negative semidefinite, so that $z$ is concave and the KKT conditions are sufficient.
Quadratic Programming
In scalar notation, the quadratic programming problem reads:
Maximize
$$z = \sum_{j=1}^{n} c_j x_j + \sum_{j=1}^{n} d_{jj} x_j^2 + 2 \sum_{1 \le i < j \le n} d_{ij} x_i x_j$$
subject to
$$a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1,$$
$$\vdots$$
$$a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m,$$
$$x_1, x_2, \ldots, x_n \ge 0.$$
Wolfe's Method
The KKT (necessary) conditions are:
1. $\lambda_1, \lambda_2, \ldots, \lambda_m,\ \mu_1, \mu_2, \ldots, \mu_n \ge 0$
2. $c_j + 2 \sum_{i=1}^{n} d_{ij} x_i - \sum_{i=1}^{m} \lambda_i a_{ij} + \mu_j = 0, \quad j = 1, 2, \ldots, n$
3. $\lambda_i \left( \sum_{j=1}^{n} a_{ij} x_j - b_i \right) = 0, \quad i = 1, 2, \ldots, m; \qquad \mu_j x_j = 0, \quad j = 1, 2, \ldots, n$
4. $\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \ldots, m$
Wolfe's Method
Denote by $s_i$ the (non-negative) slack variable for the $i$th constraint $\sum_{j=1}^{n} a_{ij} x_j \le b_i$. Then condition (3) becomes:
3. $\lambda_i s_i = 0, \quad i = 1, 2, \ldots, m; \qquad \mu_j x_j = 0, \quad j = 1, 2, \ldots, n$
Wolfe's Method
Also, condition (2) can be rewritten as:
2. $-2 \sum_{i=1}^{n} d_{ij} x_i + \sum_{i=1}^{m} \lambda_i a_{ij} - \mu_j = c_j, \quad j = 1, 2, \ldots, n$
and condition (4) as:
4. $\sum_{j=1}^{n} a_{ij} x_j + s_i = b_i, \quad i = 1, 2, \ldots, m; \qquad x_j \ge 0, \quad j = 1, 2, \ldots, n$
Wolfe's Method
Thus we have to find a solution of the following $m + n$ linear equations in the $2(m + n)$ unknowns $x_j$, $\mu_j$ ($j = 1, \ldots, n$) and $\lambda_i$, $s_i$ ($i = 1, \ldots, m$):
$$-2 \sum_{i=1}^{n} d_{ij} x_i + \sum_{i=1}^{m} \lambda_i a_{ij} - \mu_j = c_j, \quad j = 1, 2, \ldots, n,$$
$$\sum_{j=1}^{n} a_{ij} x_j + s_i = b_i, \quad i = 1, 2, \ldots, m,$$
subject to the complementarity conditions
$$\lambda_i s_i = 0, \quad i = 1, 2, \ldots, m, \qquad \mu_j x_j = 0, \quad j = 1, 2, \ldots, n,$$
and the non-negativity conditions
$$\lambda_i \ge 0, \ s_i \ge 0, \quad i = 1, 2, \ldots, m, \qquad \mu_j \ge 0, \ x_j \ge 0, \quad j = 1, 2, \ldots, n.$$
Wolfes Method
Since we are interested only in a " feasible solution
of the above system of linear equations, we use
Phase-I method to find such a feasible solution. By
the sufficiency of the KKT conditions, it will be
automatically the optimum solution of the given
quadratic programming problem.
45
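In matrix form (a restatement of the system above, using $D^T = D$ for symmetric $D$), the linear equations read:
$$\begin{pmatrix} -2D & A^T & -I_n & 0 \\ A & 0 & 0 & I_m \end{pmatrix} \begin{pmatrix} X \\ \lambda \\ \mu \\ s \end{pmatrix} = \begin{pmatrix} C^T \\ b \end{pmatrix}, \qquad X, \lambda, \mu, s \ge 0,$$
with the complementarity conditions $\lambda_i s_i = 0$ and $\mu_j x_j = 0$ enforced separately (in Wolfe's method, by the restricted-entry rule).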
Example-1:
Maximize $z = 8x_1 + 4x_2 - x_1^2 - x_2^2$
subject to
$$x_1 + x_2 \le 2, \qquad x_1, x_2 \ge 0.$$

The KKT conditions give:
$$8 - 2x_1 - \lambda_1 + \mu_1 = 0,$$
$$4 - 2x_2 - \lambda_1 + \mu_2 = 0,$$
i.e.
$$2x_1 + \lambda_1 - \mu_1 = 8,$$
$$2x_2 + \lambda_1 - \mu_2 = 4,$$
3. $x_1 + x_2 + S_1 = 2$, with $\lambda_1 S_1 = \mu_1 x_1 = \mu_2 x_2 = 0$,
and all variables $\ge 0$.
Adding artificial variables $R_1$ and $R_2$ for the Phase-I method, we solve:
$$2x_1 + \lambda_1 - \mu_1 + R_1 = 8,$$
$$2x_2 + \lambda_1 - \mu_2 + R_2 = 4,$$
$$x_1 + x_2 + S_1 = 2,$$
$$\lambda_1 S_1 = \mu_1 x_1 = \mu_2 x_2 = 0,$$
all variables $\ge 0$. (We solve by the Modified Simplex Algorithm.)
[Modified-simplex (Phase-I) tableaux omitted: the extracted columns are unreadable. The iterations drive the artificials $R_1, R_2$ to zero and terminate with the complementary feasible solution $x_1 = 2$, $x_2 = 0$, $\lambda_1 = 4$, $S_1 = 0$; the optimal value is $z = 12$.]
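For a problem this small, the complementary feasible solution can also be found by brute force: pick one member of each complementary pair ($\lambda_1 S_1$, $\mu_1 x_1$, $\mu_2 x_2$) to set to zero, solve the resulting linear system, and keep the non-negative solutions. A minimal NumPy sketch (an illustrative alternative to the modified simplex, not the slides' method):

```python
import itertools
import numpy as np

# Unknowns, in order: x1, x2, lam1, mu1, mu2, S1
names = ['x1', 'x2', 'lam1', 'mu1', 'mu2', 'S1']

# Wolfe's linear equations for Example-1:
#   2*x1        + lam1 - mu1       = 8
#          2*x2 + lam1       - mu2 = 4
#     x1 +  x2               + S1  = 2
E = np.array([[2, 0, 1, -1,  0, 0],
              [0, 2, 1,  0, -1, 0],
              [1, 1, 0,  0,  0, 1]], dtype=float)
rhs = np.array([8.0, 4.0, 2.0])

# Complementary pairs (indices into `names`): lam1*S1 = mu1*x1 = mu2*x2 = 0
pairs = [(2, 5), (3, 0), (4, 1)]

for zeros in itertools.product(*pairs):   # choose which member of each pair is 0
    M = np.vstack([E] + [np.eye(6)[k] for k in zeros])
    r = np.concatenate([rhs, np.zeros(3)])
    sol, _, rank, _ = np.linalg.lstsq(M, r, rcond=None)
    if rank == 6 and np.allclose(M @ sol, r) and np.all(sol >= -1e-9):
        print(dict(zip(names, np.round(sol, 6))))
# Prints x1 = 2, x2 = 0, lam1 = 4 (z = 12); degenerate patterns may repeat it.
```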
Example-2
Maximize $z = 8x_1 - x_1^2 + 2x_2 + x_3$
subject to
$$x_1 + 3x_2 + 2x_3 \le 12, \qquad x_1, x_2, x_3 \ge 0.$$

Example-2: Solution
Denoting the Lagrange multipliers by $\lambda_1, \mu_1, \mu_2$, and $\mu_3$, the KKT conditions are:
1. $\lambda_1, \mu_1, \mu_2, \mu_3 \ge 0$
2. $8 - 2x_1 - \lambda_1 + \mu_1 = 0$
   $2 - 3\lambda_1 + \mu_2 = 0$
   $1 - 2\lambda_1 + \mu_3 = 0$
i.e.
$$2x_1 + \lambda_1 - \mu_1 = 8,$$
$$3\lambda_1 - \mu_2 = 2,$$
$$2\lambda_1 - \mu_3 = 1.$$
3. $x_1 + 3x_2 + 2x_3 + S_1 = 12$, with $\lambda_1 S_1 = 0$, $\mu_1 x_1 = \mu_2 x_2 = \mu_3 x_3 = 0$,
and all variables $\ge 0$.

Solving this system gives the optimal solution $x_1 = 11/3$, $x_2 = 25/9$, $x_3 = 0$ with optimal $z = 193/9$.
[Modified-simplex tableaux omitted: the extracted columns are unreadable. The iterations bring $x_1$ and $x_2$ into the basis and terminate with the complementary feasible solution $x_1 = 11/3$, $x_2 = 25/9$, $x_3 = 0$.]
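As a numerical cross-check (an illustration only; SciPy's SLSQP is not the method used above), the optimum can be reproduced by minimizing $-z$:

```python
import numpy as np
from scipy.optimize import minimize

def neg_z(x):
    # Negative of z = 8*x1 - x1^2 + 2*x2 + x3 (maximize z by minimizing -z).
    return -(8*x[0] - x[0]**2 + 2*x[1] + x[2])

res = minimize(
    neg_z,
    x0=np.zeros(3),
    method='SLSQP',
    bounds=[(0, None)] * 3,                                       # x1, x2, x3 >= 0
    constraints=[{'type': 'ineq',
                  'fun': lambda x: 12 - x[0] - 3*x[1] - 2*x[2]}],  # x1+3*x2+2*x3 <= 12
)
print(res.x)       # approx [3.6667, 2.7778, 0] = (11/3, 25/9, 0)
print(-res.fun)    # approx 21.4444 = 193/9
```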
Example-3
Maximize $z = 6x_1 + 3x_2 - 4x_1 x_2 - 2x_1^2 - 3x_2^2$
subject to
$$x_1 + x_2 \le 1,$$
$$2x_1 + 3x_2 \le 4,$$
$$x_1, x_2 \ge 0.$$

The KKT conditions are:
1. $\lambda_1, \lambda_2, \mu_1, \mu_2 \ge 0$
2. $6 - 4x_1 - 4x_2 - \lambda_1 - 2\lambda_2 + \mu_1 = 0$
   $3 - 4x_1 - 6x_2 - \lambda_1 - 3\lambda_2 + \mu_2 = 0$
i.e.
$$4x_1 + 4x_2 + \lambda_1 + 2\lambda_2 - \mu_1 = 6,$$
$$4x_1 + 6x_2 + \lambda_1 + 3\lambda_2 - \mu_2 = 3.$$
3. $x_1 + x_2 + S_1 = 1$, $2x_1 + 3x_2 + S_2 = 4$, with $\lambda_1 S_1 = \lambda_2 S_2 = 0$, $\mu_1 x_1 = \mu_2 x_2 = 0$,
and all variables $\ge 0$.

Solving this by the Modified Simplex Algorithm, the optimal solution is:
$x_1 = 1$, $x_2 = 0$, and the optimal $z = 4$.
[Modified-simplex tableaux omitted: the extracted columns are unreadable. The iterations end with $x_1$ in the basis at value 1, confirming $x_1 = 1$, $x_2 = 0$, $z = 4$.]
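The reported optimum is easy to verify directly against the KKT conditions. A minimal sketch (my own illustration; the multipliers $\lambda_2 = \mu_1 = 0$ follow from complementary slackness since $2x_1 + 3x_2 < 4$ and $x_1 > 0$):

```python
import numpy as np

x1, x2 = 1.0, 0.0

# Complementary slackness: 2*x1 + 3*x2 = 2 < 4 => lam2 = 0; x1 > 0 => mu1 = 0.
lam2, mu1 = 0.0, 0.0

# Stationarity at (1, 0):
#   6 - 4*x1 - 4*x2 - lam1 - 2*lam2 + mu1 = 0   =>  lam1 = 2
#   3 - 4*x1 - 6*x2 - lam1 - 3*lam2 + mu2 = 0   =>  mu2  = 3
lam1 = 6 - 4*x1 - 4*x2 - 2*lam2 + mu1
mu2 = -(3 - 4*x1 - 6*x2 - lam1 - 3*lam2)

print(lam1, mu2)                              # 2.0, 3.0
print(lam1 >= 0 and mu2 >= 0,                 # multipliers non-negative
      np.isclose(lam1*(x1 + x2 - 1), 0),      # complementary slackness holds
      x1 + x2 <= 1 and 2*x1 + 3*x2 <= 4)      # feasibility holds
# All conditions hold, so (1, 0) satisfies the KKT conditions with z = 4.
```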
Remark:
If the problem is a minimization problem, say minimize $z$, we convert it into the maximization problem: maximize $-z$.