
Optimization CES.P. 424
Spring 2025
Lecture #8

Dr. Haydar Aljaafari
Chapter 3
Multivariable Optimization Techniques
Multivariable optimization is similar to single-variable optimization, except that the algebra required to reduce a system of equations in several unknowns to one equation in one unknown can sometimes be fairly difficult. However, once the equation is in that form, all optimization procedures are the same as previously discussed.
Multivariable optimization problems can be solved using the following techniques:
1- Analytical: unconstrained problems and constrained problems.
2- Numerical: unconstrained problems and constrained problems.
3- Graphical Method.

Multivariable Optimization Techniques
Analytical methods
Unconstrained Search
- Lagrange's Criteria Method:
Find the extreme points of a multivariable function and determine whether they are maxima or minima, for unimodal or multimodal functions.
The conditions for the extreme points are:
a- An optimum can exist only at a stationary point, provided the function and its derivatives are continuous.
b- The stationary points are defined, and can be estimated, where the first derivatives with respect to all variables are equal to zero,
i.e. $f'_x = f'_y = f'_z = \dots = f'_n = 0$
c- The value of the objective function should be evaluated at each of the stationary points to identify the desired type of optimum.

d- For the sufficient condition we must determine the sign of the second derivatives of each first derivative with respect to the other variables, such as:
$$\begin{matrix} f''_{xx} & f''_{xy} & f''_{xz} \\ f''_{yx} & f''_{yy} & f''_{yz} \\ f''_{zx} & f''_{zy} & f''_{zz} \end{matrix}$$
e- Because there are many second derivatives, Lagrange gave certain criteria to overcome this difficulty:
$$D_1 = f''_{xx}, \qquad D_2 = \begin{vmatrix} f''_{xx} & f''_{xy} \\ f''_{yx} & f''_{yy} \end{vmatrix}, \qquad D_3 = \begin{vmatrix} f''_{xx} & f''_{xy} & f''_{xz} \\ f''_{yx} & f''_{yy} & f''_{yz} \\ f''_{zx} & f''_{zy} & f''_{zz} \end{vmatrix}$$

This matrix is called the Hessian matrix.
To determine whether the optimum points are maxima or minima:
For one variable, test $D_1 = f''_{xx}$.

For two variables, test $D_1$ and $D_2$.
For three variables, test $D_1$, $D_2$ and $D_3$.
For n variables, test $D_1, D_2, D_3, \dots, D_n$.
To classify the extreme points, the following statements apply:
1- Sufficient conditions for a Minimum to exist at the stationary point are that all the determinants are positive:
$D_i > 0$ for $i = 1, 2, 3, \dots, n$.

2- Sufficient conditions for a Maximum to exist at the stationary point are that all the odd determinants are negative and all the even determinants are positive:
$D_i < 0$ for $i = 1, 3, 5, \dots$
$D_i > 0$ for $i = 2, 4, 6, \dots, n$.
3- If neither of these conditions is satisfied, the stationary point is not an optimum; it is a saddle point.
4- In the special case where all the determinants $D_i = 0$ (the analogue of $f'' = 0$ in one variable), higher derivatives must be tested.
Ex. for two variables: if $D_1 > 0$ and $D_2 > 0$ → Min point; if $D_1 < 0$ and $D_2 > 0$ → Max point; if $D_1 > 0$ and $D_2 < 0$ → Saddle point.
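These sign tests are straightforward to mechanize. The following is a minimal Python sketch (numpy assumed; the helper name classify_stationary_point is our illustration, not part of the lecture) that computes $D_1, \dots, D_n$ as the leading principal minors of a numeric Hessian and applies the rules above:

```python
import numpy as np

def classify_stationary_point(H, tol=1e-12):
    """Apply Lagrange's criteria to a numeric, symmetric Hessian H:
    D_i is the determinant of the leading i-by-i submatrix."""
    n = H.shape[0]
    D = [np.linalg.det(H[:i, :i]) for i in range(1, n + 1)]
    if all(d > tol for d in D):
        return "minimum"                       # all D_i > 0 (rule 1)
    if all(d < -tol if i % 2 else d > tol      # odd D_i < 0, even D_i > 0 (rule 2)
           for i, d in zip(range(1, n + 1), D)):
        return "maximum"
    if all(abs(d) <= tol for d in D):
        return "test higher derivatives"       # all D_i = 0 (rule 4)
    return "saddle point"                      # rule 3

print(classify_stationary_point(np.array([[ 2., 0.], [0.,  3.]])))  # minimum
print(classify_stationary_point(np.array([[-2., 0.], [0., -3.]])))  # maximum
print(classify_stationary_point(np.array([[ 2., 0.], [0., -3.]])))  # saddle point
```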

Example 3.1:
Search for the max or min of $f(x, y) = x^2 + y^3 - 6xy$.
Solution:
1- Find the partial derivatives:
$f'_x = 2x - 6y = 0; \qquad f'_y = 3y^2 - 6x = 0$
Solving these two equations (from the first, $x = 3y$; substituting into the second gives $3y^2 - 18y = 0$): $y_1 = 0$ or $y_2 = 6$, so $x_1 = 0$ or $x_2 = 18$.
We have two variables → test $D_1$ and $D_2$:
$D_1 = f''_{xx} = 2, \qquad D_2 = \begin{vmatrix} f''_{xx} & f''_{xy} \\ f''_{yx} & f''_{yy} \end{vmatrix} = \begin{vmatrix} 2 & -6 \\ -6 & 6y \end{vmatrix} = 12y - 36$
At $(0, 0)$: $D_1 = 2 > 0$ and $D_2 = -36 < 0$ → saddle point.
At $(18, 6)$: $D_1 = 2 > 0$ and $D_2 = 36 > 0$ → minimum point.
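The same classification can be checked symbolically; a minimal sketch assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**3 - 6*x*y

# Stationary points: solve f'_x = f'_y = 0 simultaneously
points = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)

H = sp.hessian(f, (x, y))        # [[2, -6], [-6, 6*y]]
for p in points:
    D1 = H[0, 0].subs(p)         # f''_xx
    D2 = H.det().subs(p)         # 12*y - 36
    print(p, D1, D2)
# {x: 0, y: 0}:  D1 = 2, D2 = -36  -> saddle point
# {x: 18, y: 6}: D1 = 2, D2 = 36   -> minimum
```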
Example 3.2:
Search for the max or min of $f(x, y, z) = -x^3 - y^2 - 3z^2 + 3xz + 2y$.
Solution:
1- Find the partial derivatives:
$f'_x = -3x^2 + 3z = 0; \qquad f'_y = 2 - 2y = 0; \qquad f'_z = 3x - 6z = 0$
Solving these equations (from $f'_z$, $z = x/2$; substituting into $f'_x$ gives $x = 1/2$, while the root $x = 0$ gives a second stationary point $(0, 1, 0)$ where $D_1 = 0$ and the criteria are inconclusive): $x_1 = 0.5$, $y_1 = 1$, $z_1 = 0.25$.
We have 3 variables → test $D_1$, $D_2$ and $D_3$:
$D_1 = f''_{xx}, \qquad D_2 = \begin{vmatrix} f''_{xx} & f''_{xy} \\ f''_{yx} & f''_{yy} \end{vmatrix}, \qquad D_3 = \begin{vmatrix} f''_{xx} & f''_{xy} & f''_{xz} \\ f''_{yx} & f''_{yy} & f''_{yz} \\ f''_{zx} & f''_{zy} & f''_{zz} \end{vmatrix}$

Evaluating the second derivatives:
$D_1 = -6x, \qquad D_2 = \begin{vmatrix} -6x & 0 \\ 0 & -2 \end{vmatrix}, \qquad D_3 = \begin{vmatrix} -6x & 0 & 3 \\ 0 & -2 & 0 \\ 3 & 0 & -6 \end{vmatrix}$
At the critical point $(x, y, z) = (0.5, 1, 0.25)$:
$D_1 = -3 < 0, \qquad D_2 = (-3)(-2) - 0 = 6 > 0$
$D_3 = \begin{vmatrix} -3 & 0 & 3 \\ 0 & -2 & 0 \\ 3 & 0 & -6 \end{vmatrix} = -3\,[(-2)(-6) - 0 \cdot 0] - 0\,[0 \cdot (-6) - 0 \cdot 3] + 3\,[0 \cdot 0 - (-2) \cdot 3]$
$D_3 = -36 + 18 = -18 < 0$, so the point $(0.5, 1, 0.25)$ is a maximum.
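A quick numerical cross-check of these three determinants (numpy assumed):

```python
import numpy as np

# Hessian of f = -x^3 - y^2 - 3z^2 + 3xz + 2y, evaluated at x = 0.5
H = np.array([[-3.0,  0.0,  3.0],
              [ 0.0, -2.0,  0.0],
              [ 3.0,  0.0, -6.0]])

D = [np.linalg.det(H[:i, :i]) for i in (1, 2, 3)]
print(np.round(D, 6))   # [-3.  6. -18.] -> D1 < 0, D2 > 0, D3 < 0: a maximum
```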

Numerical Methods
Quasi Newton's Method
It is an extension of Newton's method for a single variable. The most important choice is the starting point. If $f = f(x, y, z)$ and the starting point is $(x_0, y_0, z_0)$, then the next point on the contour can be calculated from
$$x_1 = x_0 + \alpha \left. \frac{df}{dx} \right|_{x = x_0}$$
where $\alpha$ is the step taken along the gradient direction.
- Substitute the new $x_1$ (and likewise $y_1$ and $z_1$) into the function, which then becomes a function of $\alpha$ alone.
- Find $\frac{df}{d\alpha}$ and set it equal to zero to find the value of $\alpha$.
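A minimal sketch of a single step of this procedure, assuming sympy is available (the two-variable test function below is an arbitrary illustration, not from the lecture):

```python
import sympy as sp

x, y, a = sp.symbols('x y alpha')
f = x**2 + 2*y**2                    # an arbitrary test function
x0, y0 = 2, 1                        # starting point

# Gradient evaluated at the starting point
gx = sp.diff(f, x).subs({x: x0, y: y0})   # 2*x0 = 4
gy = sp.diff(f, y).subs({x: x0, y: y0})   # 4*y0 = 4

# Next point expressed in terms of the step alpha, substituted back into f
phi = f.subs({x: x0 + a*gx, y: y0 + a*gy})

# Setting d(phi)/d(alpha) = 0 gives the optimum step
alpha = sp.solve(sp.diff(phi, a), a)[0]
print(alpha, x0 + alpha*gx, y0 + alpha*gy)   # -1/3, 2/3, -1/3
```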

Example 3.3: Search for the minimum of the following function using gradient search, starting at the point $(2, -2, 1)$:
$$f = 2x^2 + y^2 + 3z^2$$
Solution:
$$x_1 = x_0 + \alpha \left. \frac{df}{dx} \right|_{x=x_0}, \qquad y_1 = y_0 + \alpha \left. \frac{df}{dy} \right|_{y=y_0}, \qquad z_1 = z_0 + \alpha \left. \frac{df}{dz} \right|_{z=z_0}$$
$\frac{df}{dx} = 4x$; at $x_0 = 2$: $\frac{df}{dx} = 8$, so $x_1 = 2 + 8\alpha$
$\frac{df}{dy} = 2y$; at $y_0 = -2$: $\frac{df}{dy} = -4$, so $y_1 = -2 - 4\alpha$
$\frac{df}{dz} = 6z$; at $z_0 = 1$: $\frac{df}{dz} = 6$, so $z_1 = 1 + 6\alpha$
Substituting into $f$:
$$f = 2(2 + 8\alpha)^2 + (-2 - 4\alpha)^2 + 3(1 + 6\alpha)^2$$
$$\frac{df}{d\alpha} = 2 \cdot 2 \cdot 8\,(2 + 8\alpha) - 4 \cdot 2\,(-2 - 4\alpha) + 3 \cdot 2 \cdot 6\,(1 + 6\alpha) = 0$$
$\alpha = -0.23016$ → find the values of $x_1, y_1, z_1$.

Repeating the procedure gives:

Iteration       x             y             z            α            f
    0         2            -2             1             -           15
    1         0.15872      -1.0794       -0.38096      -0.23016     1.651
    2         0.004254     -0.5542        0.1752       -0.2433      0.3993
    3        -1.2*10^-4    -0.2696       -0.0947       -0.2568      0.0996
    4         3.4*10^-6    -0.1383        0.04371      -0.2436      0.02486
    5         0            -0.06727      -0.02365      -0.2568      0.0062
    6         0            -0.03452       0.0109       -0.2435      0.001548
    7         0            -1.68*10^-3   -5.9*10^-3    -0.257       2.8*10^-4
    8         0            -3*10^-6       10^-6        -0.4999      1.2*10^-11

Note that $y_{k+1} = y_k(1 + 2\alpha)$ with $1 + 2\alpha > 0$ at every step, so y keeps its negative sign throughout. The iterates converge to the minimum of $f$ at approximately $(0, 0, 0)$, where $f = 0$.
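The whole search is easy to automate. Below is a minimal numpy sketch reproducing the iterations; since $f$ is quadratic, solving $df/d\alpha = 0$ has the closed form $\alpha = -(g^T g)/(g^T H g)$, where $g$ is the gradient and $H$ the (constant) Hessian:

```python
import numpy as np

# f = 2x^2 + y^2 + 3z^2, so grad f = H @ X with constant H = diag(4, 2, 6)
H = np.diag([4.0, 2.0, 6.0])
X = np.array([2.0, -2.0, 1.0])           # starting point

for it in range(1, 9):
    g = H @ X                             # gradient at the current point
    a = -(g @ g) / (g @ H @ g)            # exact step from d f(X + a*g)/da = 0
    X = X + a * g
    f = 0.5 * X @ H @ X                   # equals 2x^2 + y^2 + 3z^2
    print(it, np.round(X, 5), round(a, 5), f)
# iteration 1: X = [0.15873, -1.07937, -0.38095], a = -0.23016, f = 1.651
```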

Constrained Problems
Equality Constraints:
This method solves the m constraint equations simultaneously for m of the n variables; these are then substituted into the objective function. This method is therefore called direct substitution, and it is used for simple problems.
Example 3.5:
Find the minimum of the function $f = 4x^2 + 5y^2$ subject to
$g = 2x + 3y - 6 = 0$
Solution
Solve the constraint for x in terms of y and substitute into the function:
$2x + 3y - 6 = 0 \;\rightarrow\; x = 3 - \frac{3}{2}y$
$f = 4\left(3 - \frac{3}{2}y\right)^2 + 5y^2$
$f' = 2 \cdot 4\left(3 - \frac{3}{2}y\right)\left(-\frac{3}{2}\right) + 10y = -12\left(3 - \frac{3}{2}y\right) + 10y = 0 \;\rightarrow\; y = \frac{9}{7} \approx 1.286$
$f'' = 18 + 10 = 28 > 0$ at $y = 1.286$, so by the same rules as in the single-variable case this is a minimum.
The value of x is found from $g = 2x + 3y - 6 = 0$: $x = 3 - \frac{3}{2}(1.286) \approx 1.071$.
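The direct substitution can also be carried out symbolically; a minimal sketch assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 4*x**2 + 5*y**2
g = 2*x + 3*y - 6

# Direct substitution: solve the constraint for x, substitute into f
x_of_y = sp.solve(g, x)[0]               # x = 3 - 3*y/2
fy = f.subs(x, x_of_y)                   # objective in terms of y alone

y_star = sp.solve(sp.diff(fy, y), y)[0]  # 9/7, i.e. about 1.286
assert sp.diff(fy, y, 2).subs(y, y_star) > 0   # f'' = 28 > 0: a minimum
print(y_star, x_of_y.subs(y, y_star), fy.subs(y, y_star))  # 9/7, 15/14, 90/7
```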

H.W 4 Due Date: Saturday, April 19th

1- Determine whether the extreme points of the function are max or min:
$f(x, y) = x^3 + y^3 + 2x^2 + 4y^2 + 6$
2- Determine whether the extreme points of the function are max or min:
$f(x, y, z) = x + 3y + yz - x^2 - y^2 - z^2$
3- Determine the minimum of the following functions:
a- $f(x) = x_1^2 + 3x_2^2$, at $x^0 = (6, 3)$
b- $y = 2x_1^2 + x_2^2 - x_1 x_2$, at $x^0 = (2, 2)$

4- The profit function of an alkylation process with a simplified reactor can be represented by the equation $P = 150 - 6(F - 10)^2 - 24(C - 95)^2$, where F is the feed and C is the conversion. Starting at F = 5 and C = 60, find the optimum values of these variables that maximize the profit, using the gradient method.
