
AN INTRODUCTION TO OPTIMIZATION USING EVOLUTIONARY ALGORITHMS

Dr B. Srinivas, Dept of Chemical Engg, GVP College of Engg

Contents
- Introduction to Optimization
- Solving optimization problems using MATLAB
- Global Optima versus Local Optima
- Introduction to GA and SA
- Solving GA and SA optimization problems using MATLAB
- Multi-objective Optimization
- Solving Multi-objective optimization problems in MATLAB

Solving optimization problems in MATLAB


Introduction
- Function Optimization
- Optimization Toolbox: routines / algorithms available

Minimization Problems
- Unconstrained
- Constrained
- Example
- The Algorithm Description

Function Optimization
Optimization concerns the minimization or maximization of functions.

Standard Optimization Problem:

  min_x f(x)

Subject to:

  h_i(x) = 0                 (equality constraints)
  g_j(x) <= 0                (inequality constraints)
  x_k^L <= x_k <= x_k^U      (side constraints)

Function Optimization

f(x) is the objective function, which measures and evaluates the performance of a system. In a standard problem, we minimize the function; maximization is equivalent to minimizing the negative of the objective function, -f(x).

x is a column vector of design variables, which can affect the performance of the system.

Function Optimization

Constraints limit the design space. They can be linear or nonlinear, explicit or implicit functions:

  h_i(x) = 0                 (equality constraints)
  g_j(x) <= 0                (inequality constraints; the algorithms require the less-than form)
  x_k^L <= x_k <= x_k^U      (side constraints)

Optimization Toolbox
A collection of functions that extend the capability of MATLAB. The toolbox includes routines for:
- Unconstrained optimization
- Constrained nonlinear optimization, including goal attainment problems, minimax problems, and semi-infinite minimization problems
- Quadratic and linear programming
- Nonlinear least squares and curve fitting
- Solving nonlinear systems of equations
- Constrained linear least squares
- Specialized algorithms for large-scale problems
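As a quick orientation, representative routine names for these problem classes (these are standard MATLAB function names, given here as a pointer rather than taken from the slides):

% fminunc, fminsearch     - unconstrained minimization
% fmincon                 - constrained nonlinear minimization
% fgoalattain, fminimax   - goal attainment and minimax problems
% quadprog, linprog       - quadratic and linear programming
% lsqnonlin, lsqcurvefit  - nonlinear least squares and curve fitting
% fsolve                  - nonlinear systems of equations
% lsqlin                  - constrained linear least squares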

Minimization Algorithm
(two slides tabulating the toolbox's minimization routines; tables not reproduced in this extract)

Implementing Opt. Toolbox


Most of these optimization routines require the definition of an M-file containing the function, f, to be minimized. Maximization is achieved by supplying the routines with -f. Default optimization parameters can be changed by passing an options structure to the routines.
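For example, a minimal sketch of maximizing by negation (the quadratic g below is a made-up illustration, not a function from the slides):

g = @(x) -(x - 2).^2 + 5;           % hypothetical function we want to MAXIMIZE
negg = @(x) -g(x);                  % minimize the negative instead
[xmax, fneg] = fminunc(negg, 0);    % xmax is approximately 2
gmax = -fneg;                       % recover the maximum value, approximately 5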

Unconstrained Minimization
Consider the problem of finding a set of values x = [x_1, x_2]^T that solves

  min_x f(x) = e^{x_1} (4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1)

Steps
1. Create an M-file that returns the function value (the objective function); call it objfun.m.
2. Invoke the unconstrained minimization routine fminunc.

Step 1: Objective Function

For x = [x_1, x_2], create objfun.m:

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

Step 2: Invoke Routine

x0 = [-1, 1];                              % starting guess
options = optimset('LargeScale', 'off');   % optimization parameter settings
[xmin, feval, exitflag, output] = fminunc(@objfun, x0, options);

Input arguments: @objfun, x0, options. Output arguments: xmin, feval, exitflag, output.

Results
xmin = 0.5000  -1.0000     (minimum point of the design variables)
feval = 1.3028e-010        (objective function value)
exitflag = 1               (exitflag tells whether the algorithm converged; exitflag > 0 means a local minimum was found)
output =
    iterations: 7
    funcCount: 40
    stepsize: 1
    firstorderopt: 8.1998e-004
    algorithm: 'medium-scale: Quasi-Newton line search'
                           (some other information about the run)

More on fminunc Input


[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)

fun: the objective function (a function handle or name).
x0: the initial guess; a vector whose size equals the number of design variables.
options: sets some of the optimization parameters (more in a few slides).
P1,P2,...: additional parameters passed through to the objective function.
Ref. Manual: Pg. 5-5 to 5-9

More on fminunc Output


[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)

xmin: vector of the minimum (optimal) point; its size equals the number of design variables.
feval: the objective function value at the optimal point.
exitflag: a value showing whether the optimization routine terminated successfully (converged if > 0).
output: a structure giving more details about the optimization.
grad: the gradient value at the optimal point.
hessian: the Hessian value at the optimal point.
Ref. Manual: Pg. 5-5 to 5-9

Options Setting optimset


options = optimset('param1',value1,'param2',value2,...)

The routines in the Optimization Toolbox have a set of default optimization parameters. However, the toolbox allows you to alter some of them, for example: the tolerance, the step size, the gradient or Hessian values, the maximum number of iterations, etc. There is also a list of features available, for example: displaying the values at each iteration, checking a user-supplied gradient or Hessian, etc. You can also choose the algorithm you wish to use.
Ref. Manual: Pg. 5-10 to 5-14
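A minimal sketch of adjusting a few common parameters (the particular values are illustrative, not recommendations):

options = optimset('Display', 'iter', ...   % print progress at each iteration
                   'TolFun', 1e-8, ...      % termination tolerance on f(x)
                   'MaxIter', 500);         % cap on the number of iterations
[xmin, feval] = fminunc(@objfun, [-1, 1], options);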

Constrained Minimization

[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)

lambda: vector of Lagrange multipliers at the optimal point.

Example

  min_x f(x) = -x_1 x_2 x_3

function f = myfun(x)
f = -x(1)*x(2)*x(3);

Subject to:

  2x_1^2 + x_2 <= 0              (nonlinear inequality)
  -x_1 - 2x_2 - 2x_3 <= 0        (linear inequalities, A x <= B)
  x_1 + 2x_2 + 2x_3 <= 72
  0 <= x_1, x_2, x_3 <= 30       (bounds)

In matrix form:

  A = [-1 -2 -2; 1 2 2],  B = [0; 72]
  LB = [0; 0; 0],  UB = [30; 30; 30]

Example (Cont.)

For the nonlinear constraint 2x_1^2 + x_2 <= 0, create a function called nonlcon that returns the two constraint vectors [C, Ceq]:

function [C, Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality, C(x) <= 0
Ceq = [];              % no nonlinear equality constraints

Remember to return an empty matrix if a constraint type does not apply.

Example (Cont.)

x0 = [10; 10; 10];       % initial guess (3 design variables)
A  = [-1 -2 -2; 1 2 2];
B  = [0 72]';
LB = [0 0 0]';
UB = [30 30 30]';

[x, feval] = fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon)

CAREFUL: match the argument order fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...); the empty matrices [] stand in for Aeq and Beq because there are no linear equality constraints.

Conclusion

Easy to use! But we do not know what is happening behind the routine, so it is still important to understand the limitations of each one.

Basic steps:
- Recognize the class of optimization problem
- Define the design variables
- Create the objective function
- Recognize the constraints
- Choose an initial guess
- Invoke a suitable routine
- Analyze the results (they might not make sense)

Local and Global Optima

Basins of attraction behave like local attractors: all initial conditions close to a local attractor converge to that point.

To reach a global optimum, we have to ensure that the initial guess we provide lies close to the attractor belonging to the global optimum.

But how do we give a guess close to the global optimum? Give different initial guesses to the simulator!
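A minimal multi-start sketch of that idea (the 1-D objective below is a made-up multimodal function, not one from the slides):

f = @(x) x.^2 + 10*sin(x);            % hypothetical objective with several local minima
opts = optimset('Display', 'off');
best = Inf;
for k = 1:20
    x0 = -10 + 20*rand;               % random initial guess in [-10, 10]
    [x, fx] = fminunc(f, x0, opts);   % local search from this guess
    if fx < best
        best = fx;  xbest = x;        % keep the best local minimum found so far
    end
end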

Genetic Algorithm

Rather than starting from a single point (or guess) within the search space, GAs are initialized with a population of guesses. These are usually random and will be spread throughout the search space. A typical algorithm then uses three operators, selection, crossover and mutation (chosen in part by analogy with the natural world), to direct the population (over a series of time steps or generations) towards convergence at the global optimum.

Selection attempts to apply pressure upon the population in a manner similar to that of natural selection found in biological systems. Poorer-performing individuals are weeded out, and better-performing, or fitter, individuals have a greater-than-average chance of promoting the information they contain to the next generation.

An analogy: you aim to produce a puppy with the loudest bark. You select dogs which bark loudly and rank them. You then mate the loudest-barking dogs out of the lot; since the parents bark loudly, it is natural that the pups they produce will bark loudly too!

Operations in GA

Crossover allows solutions to exchange information in a way similar to that used by a natural organism undergoing sexual reproduction. One method (termed single-point crossover) is to randomly choose pairs of individuals promoted by the selection operator, choose a single locus (point) within the binary strings, and swap all the information (digits) to the right of this locus between the two individuals.

Mutation is used to randomly change (flip) the value of single bits within individual strings. Mutation is typically used very sparingly.

After selection, crossover and mutation have been applied to the initial population, a new population will have been formed and the generational counter is increased by one. This process of selection, crossover and mutation is continued until a fixed number of generations has elapsed or some form of convergence criterion has been met.

Crossover

Merging two chromosomes to create new chromosomes, e.g. single-point crossover applied to the string 0100101010011011 (a worked example follows below).

Mutation

Random changing of a gene in a chromosome, e.g. flipping a single bit of 0100101010011011. Mutation can help in escaping from local optima in our state space.

An elementary example

A trivial problem might be to maximize a function f(x), where f(x) = x^2, for integer x and 0 <= x <= 4095. There are of course other ways of finding the answer (x = 4095) to this problem than using a GA, but its simplicity makes it ideal as an example.

Firstly, the exact form of the algorithm must be decided upon. As mentioned earlier, GAs can take many forms, which allows a wealth of freedom in the details of the algorithm. The following (Algorithm 1) represents just one possibility.

1. Form a population of eight random binary strings of length twelve (e.g. 101001101010, 110011001100, ...).
2. Decode each binary string to an integer x (i.e. 000000000111 implies x = 7, 000000000000 = 0, 111111111111 = 4095).
3. Test these numbers as solutions to the problem f(x) = x^2 and assign a fitness to each individual equal to the value of f(x) (e.g. the solution x = 7 has a fitness of 7^2 = 49).
4. Select the best half (those with highest fitness) of the population to go forward to the next generation.

5. Pick pairs of parent strings at random (with each string being selected exactly once) from these more successful individuals to undergo single-point crossover. Taking each pair in turn, choose a random point between the end points of the string, cut the strings at this point and exchange the tails, creating pairs of child strings.
6. Apply mutation to the children by occasionally (with probability one in six) flipping a 0 to a 1 or vice versa.
7. Allow these new strings, together with their parents, to form the new population, which will still contain only eight members.
8. Return to Step 2, and repeat until fifty generations have elapsed.
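A compact MATLAB sketch of Algorithm 1 (population size, string length, mutation rate and generation count follow the text; all variable names are our own):

nPop = 8; nBits = 12; nGen = 50; pMut = 1/6;       % parameters from Algorithm 1
pop = randi([0 1], nPop, nBits);                   % step 1: random binary strings
for gen = 1:nGen
    x   = pop * (2.^(nBits-1:-1:0))';              % step 2: decode strings to integers
    fit = x.^2;                                    % step 3: fitness f(x) = x^2
    [~, idx] = sort(fit, 'descend');
    parents = pop(idx(1:nPop/2), :);               % step 4: keep the best half
    kids = parents(randperm(nPop/2), :);           % step 5: pair parents at random
    for k = 1:2:nPop/2
        cut = randi(nBits - 1);                    % single-point crossover:
        tail = kids(k, cut+1:end);                 % swap the tails after the cut
        kids(k, cut+1:end)   = kids(k+1, cut+1:end);
        kids(k+1, cut+1:end) = tail;
    end
    mask = rand(size(kids)) < pMut;                % step 6: occasional bit flips
    kids(mask) = 1 - kids(mask);
    pop = [parents; kids];                         % step 7: parents plus children
end                                                % step 8: repeat for 50 generations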

To further clarify the crossover operator, imagine two strings, 000100011100 and 111001101010. Performing crossover between the third and fourth characters produces two new strings:

  parents:   000|100011100   111|001101010
  children:  000001101010    111100011100

It is this process of crossover which is responsible for much of the power of genetic algorithms.
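The same operation as a short MATLAB sketch, using the two strings from the text:

p1 = '000100011100';  p2 = '111001101010';
cut = 3;                                  % crossover between characters 3 and 4
c1 = [p1(1:cut) p2(cut+1:end)]            % child 1: 000001101010
c2 = [p2(1:cut) p1(cut+1:end)]            % child 2: 111100011100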

Returning to the example, let the initial population be:


  member   string          x      fitness
  1        110101100100    3428   11751184
  2        010100010111    1303    1697809
  3        101111101110    3054    9326916
  4        010100001100    1292    1669264
  5        011101011101    1885    3553225
  6        101101001001    2889    8346321
  7        101011011010    2778    7717284
  8        010011010101    1237    1530169

Population members 1, 3, 6 and 7 have the highest fitness. Deleting those four with the least fitness provides a temporary reduced population ready to undergo crossover:

  temp. pop. member   string          x      fitness
  1                   110101100100    3428   11751184
  2                   101111101110    3054    9326916
  3                   101101001001    2889    8346321
  4                   101011011010    2778    7717284

After crossover, the temporary population becomes:

  temp. pop. member   string          x      fitness
  1                   110101100100    3428   11751184
  2                   101111101110    3054    9326916
  3                   101101011010    2906    8444836
  4                   111111101110    4078   16630084

This temporary population does not contain a 1 as the last digit in any of the strings (whereas the initial population did). This implies that no string from this moment on can contain such a digit, and the maximum value that can evolve will be 111111111110, after which point this string will reproduce so as to dominate the population. This domination of the population by a single sub-optimal string gives a first indication of why mutation might be important. Any further populations will only contain the same, identical string. This is because the crossover operator can only swap bits between strings, not introduce any new information. Mutation can thus be seen in part as an operator charged with maintaining the genetic diversity of the population by preserving the diversity embodied in the initial generation.

The inclusion of mutation allows the population to leapfrog over this sticking point. It is worth reiterating that the initial population did include a 1 in all positions. Thus the mutation operator is not necessarily inventing new information but simply working as an insurance policy against premature loss of genetic information.

Rerunning the algorithm from the same initial population, but with mutation, allows the string 111111111111 to evolve and the global optimum to be found. Mutation has been included by visiting every bit in each new child string, throwing a random number between 0 and 1 and, if this number is less than 1/12, flipping the value of the bit.

Summary of GA
(flowchart of the GA loop; not reproduced in this extract)

Simulated Annealing
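The SA slides themselves are not reproduced in this extract. As a pointer back to the contents ("Solving GA and SA optimization problems using MATLAB"), a minimal sketch using MATLAB's Global Optimization Toolbox, assuming it is installed (the objective below is a made-up multimodal function):

f = @(x) (x - 2).^2 + 10*sin(2*x);      % hypothetical objective to minimize
[xga, fga] = ga(f, 1);                  % genetic algorithm, 1 design variable
[xsa, fsa] = simulannealbnd(f, 0);      % simulated annealing, starting from x0 = 0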

Don't be too ambitious with this course load. You CANNOT do optimization without a properly posed math problem.

Thank You
