Chap 5.3 Structural Optimization & Introduction To Search Methods

Chapter 5 discusses structural optimization in engineering, focusing on automated synthesis of mechanical components based on structural properties. It covers various optimization methods, including topology optimization, shape optimization, and size optimization, along with tools like gradient-based and non-gradient-based methods. Additionally, it highlights generative design as an AI-driven approach to create multiple design alternatives based on user-defined requirements.


Chap. 5 Optimization in Engineering (20.09.2024)

Contents
◼ Introduction
◼ General mathematical formulation of the optimization problem statement
◼ Shape and size optimization
◼ Topology optimized design and generative design
◼ Optimization methods and tools – introduction to:
  ◼ Gradient-based and non-gradient-based methods
  ◼ Simulated annealing
  ◼ Genetic algorithm

Introduction
◼ Structural optimization is the automated synthesis of mechanical components based on their structural properties.
◼ Structural optimization → automatically generates a component design exhibiting the required structural performance.

General problem statement:

  Minimize the objective function f(x1, x2)
  over the optimization variables x1, x2

  Subject to
  ◼ Equality constraints:   h(x1, x2) = 0
  ◼ Inequality constraints: g(x1, x2) ≤ 0
  ◼ Limits on variables:    x1_l ≤ x1 ≤ x1_u  and  x2_l ≤ x2 ≤ x2_u

Note: This is a typical constrained optimization problem statement.
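For concreteness, a minimal numerical sketch of this problem statement in Python, using SciPy's SLSQP solver, follows. The particular objective f, constraints h and g, starting point, and bounds are hypothetical illustrations, not taken from the slides.

# Minimal sketch of the constrained problem statement above (hypothetical data).
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2      # objective f(x1, x2)
h = lambda x: x[0] - 2*x[1] + 1                  # equality constraint h(x1, x2) = 0
g = lambda x: 0.25*x[0]**2 + x[1]**2 - 1         # inequality constraint g(x1, x2) <= 0

res = minimize(
    f, x0=[0.5, 0.5],
    constraints=[{"type": "eq", "fun": h},
                 # SciPy expects inequalities as fun(x) >= 0, so g <= 0 is passed as -g >= 0
                 {"type": "ineq", "fun": lambda x: -g(x)}],
    bounds=[(0.0, 3.0), (0.0, 3.0)],             # limits x_l <= x <= x_u
    method="SLSQP")
print(res.x, res.fun)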
Formulating structural optimization — general mathematical form

Minimize the objective function (optimization variables, state variables)
◼ State variables: displacement, temperature, etc., obtained from the governing differential equations
◼ The objective relates to cost, size, weight, etc.

Subject to
◼ Constraints on state variables (stiffness, strength, natural frequency, etc.)
◼ Constraints on resources
◼ Constraints on performance
◼ Limits on variables

Data: material properties, loads, etc.

Structural optimization related to geometric features — general mathematical form

  Minimize f(x1, x2)

◼ Common objective functions:
- stiffness (K)
- weight (W)
- material distribution
- manufacturability (M)
- cost (C)

◼ Five possible design variables: material, sizing, configuration, shape, and topology

  Subject to
  h(x1, x2) = 0
  g(x1, x2) ≤ 0
  x1_l ≤ x1 ≤ x1_u and x2_l ≤ x2 ≤ x2_u

◼ Common constraints:
- Location of support points (BC)
- Size/weight limitations (S)
- Maximum allowable stress (σ)
- Maximum weight (W_max)
Categories of structural optimization

◼ Structural optimization approaches are categorized into three: topology, shape, and size optimization.

Topology optimization
- Highest level in structural optimization
- Decides the connectivity and the number of holes in a structure
- The main challenge is identifying the variables that decide the topology
- Topology should be decided first

Shape optimization
- Shape of a feature in a structure
- Shape of a segment of a structure
- Shape of holes in a structure
- Identifying the shape of a feature, a segment, or a hole is easier compared to topology
- Shape optimization follows topology optimization

Size (parameter) optimization
- Size optimization is the easiest and last optimization step
- A parameter such as the radius of a circular hole is also a size variable

◼ Components of structural optimization
Note that finite element analysis must be done on the final optimized part. Justify why!
Categories of structural optimization …

Size optimization: This is the simplest of the optimization techniques. It
◼ keeps the design shape and topology unchanged
◼ modifies dimensions to obtain improved performance
◼ is implemented for easy-to-analyze structures such as
  ◼ trusses,
  ◼ frames, and
  ◼ plates

◼ Optimization variables:
  ◼ cross-sectional area,
  ◼ element configuration,
  ◼ material selection,
  ◼ etc.

The design variable can be some type of structural thickness, i.e.,
◼ cross-sectional areas of truss members, or
◼ the thickness distribution of a sheet.

Example: a sizing optimization problem for a truss structure (see the sketch below).
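As a sketch of such a truss sizing problem, the following minimizes the weight of a symmetric two-bar truss by choosing the member cross-sectional areas, subject to stress constraints. The load, geometry, material data, and allowable stress are assumed values for illustration, not from the slides.

# Sketch of a sizing optimization for a symmetric two-bar truss (assumed data):
# minimize weight = rho * L * (A1 + A2) subject to member stresses <= sigma_allow.
import numpy as np
from scipy.optimize import minimize

P, L, theta = 20e3, 1.0, np.radians(45)   # load [N], member length [m], half-angle
rho, sigma_allow = 7850.0, 150e6          # steel density [kg/m^3], allowable stress [Pa]

def weight(A):                            # A = [A1, A2], cross-sectional areas [m^2]
    return rho * L * (A[0] + A[1])

def stress_margin(A):
    F = P / (2 * np.cos(theta))           # equal member force under vertical load P
    return [sigma_allow - F / A[0],       # >= 0 required by SciPy's convention
            sigma_allow - F / A[1]]

res = minimize(weight, x0=[1e-4, 1e-4],
               constraints={"type": "ineq", "fun": stress_margin},
               bounds=[(1e-6, 1e-2)] * 2, method="SLSQP")
print("optimal areas [m^2]:", res.x)      # both approach F / sigma_allow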
Categories of structural optimization …

Shape optimization:
This technique modifies the shape while maintaining constant topology.
◼ It leads to size optimization, i.e., size optimization is a special case (by-product) of shape optimization.
◼ Design (optimization) variables: parameters defining certain features of the shape, for example,
  ◼ radius of a hole,
  ◼ side of a square,
  ◼ boundary of a solid
[Figure: shape optimization of a torque arm using FEA (Source: Neural Concept)]

Topology optimization: typical steps
◼ Why? In cases where changes in shape and size did not satisfy the design criterion for reduction of structural weight.
◼ Ensures a globally optimized shape with the best material distribution.
◼ The design variables that provide optimal performance define the particular topology of the design.
◼ Earlier applications focused on truss-like structures.
Categories of structural optimization …

Topology optimization:
◼ Topology optimization does not need the definition of an optimization parameter
  → the material distribution is used as the optimization parameter.
◼ Objective (example): minimize the structural compliance energy or maximize the natural frequency while satisfying the specified constraints.

Methods
◼ Ground structure approach — removal of elements with no or minor contribution to the stiffness of the structure
◼ Finite Element Analysis (FEA) — an unstressed member is removed after stress analysis

Two cases
◼ A discrete case, such as a truss:
- Optimization is achieved by taking the cross-sectional areas of the truss members as design variables and allowing these variables to take the value zero → the truss member is removed.
- This makes the connectivity of nodes variable, so the topology of the truss changes.
◼ A continuum-type structure, such as a two-dimensional sheet:
- Optimized by letting the thickness of the sheet take the value zero.
- For pure topological optimization, the optimal thickness should take only two values: 0 and a fixed maximum sheet thickness.
- In a 3-D case, the same effect can be achieved by letting x be a density-like variable that can only take the values 0 and 1 (see the sketch below).
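In practice the 0/1 density requirement is usually relaxed. One common scheme, SIMP (not named on the slides, but standard in topology optimization software), lets the density vary continuously in [0, 1] and penalizes intermediate values so the optimizer is driven back toward 0/1 designs. A minimal sketch, with conventional parameter values assumed:

# SIMP-style material interpolation (a standard relaxation of the 0/1
# density variable described above; the penalty exponent p = 3 is conventional).
def simp_modulus(rho, E0=210e9, Emin=1e-3, p=3.0):
    """Young's modulus of an element with relaxed density rho in [0, 1].

    p > 1 makes intermediate densities structurally inefficient, pushing
    the optimum toward rho = 0 (void) or rho = 1 (solid material).
    """
    return Emin + rho**p * (E0 - Emin)

print(simp_modulus(0.5))  # only 12.5% of full stiffness for half the material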
Categories of structural optimization …

Example: topology optimization
◼ Integrated into commercial FEA software such as ANSYS and ABAQUS
◼ Stand-alone tools:
- Altair HyperMesh 14 / OptiStruct
- solidThinking Inspire
◼ Smoothing: still a challenging phase
(Courtesy: A.W. Gebisa, 2015)

Topology optimization and generative design

◼ Generative design
◼ One of the applications of AI algorithms in design (AI- and machine-learning-driven design)
◼ A method to generate and evaluate multiple design alternatives based on input from the user.
◼ A method of autonomously creating manufacturing-ready optimal designs from a set of system design requirements.
◼ Engineers can interactively specify their design requirements and goals, including materials and manufacturing processes.
[Link]
Topology optimization and generative design …

◼ Topology Optimization (TO) is primarily used for lightweight design: a design engineer defines a design space, and the TO software removes any part of that design space that does not contribute a defined percentage to the structural integrity of the product. TO is suitable when there is a set space and an overall idea, and the goal is to make the part as lightweight as possible using a computer algorithm.
◼ Generative design is an AI-driven approach that creates many alternative designs in an evolutionary way, whereas TO creates only one design solution, optimized for structural integrity based on existing criteria.
(Source: Siemens; Aliyi and Lemu, 2019)

Generative design …

◼ Benefits (from PTC – Creo software (videos))

Improved efficiency:
Users can explore many design alternatives in a short period of time
→ more efficient and innovative designs can be created more quickly.

Enhanced performance:
- Considers a wide range of factors, such as materials, manufacturing processes, and performance requirements.
- Helps designers create designs that are optimized for specific goals
→ designs that are stronger, lighter, or more efficient than those designed using traditional methods.

Reduced design time:
Greatly reduces the design time of a product or component by automating the design process and exploring many design alternatives quickly.
Generative design …

Some typical examples
◼ Heat-exchanger geometries with different topologies (Source: [Link])
◼ Drone frame design and optimization: structural analysis and flow analysis; optimized geometry (Source: Neural Concept)
◼ Car frame generated by a deep learning algorithm (Source: BAY and ERYILDIZ, 2023)
(Video 1: [Link]; Video 2: [Link])

Software tools for generative design
◼ Creo Generative Design from PTC
◼ Fusion 360 from Autodesk
◼ nTop Platform from nTopology
◼ NX from Siemens
◼ MSC Apex Generative Design
Generative design …

Example (demo)
◼ Initial design → many alternative designs generated by AI → final design
◼ Key benefits:
- Lighter design
- Fewer parts in the assembly
- Stiffer and stronger
- Cheaper
(Courtesy: Autodesk Fusion 360)

Classification of Optimization Methods

Search methods:
- Involve numerical calculation through an iterative process.
- Gradient-based: derivatives of the objective and constraint functions guide the search.
  Example: the steepest descent method for unconstrained optimization (invented by Cauchy); see the sketch below.
- Non-gradient approaches use certain rules not based on derivatives, e.g., SA, GA, …
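A minimal sketch of the steepest descent idea mentioned above, applied to a hypothetical quadratic objective; the test function and the backtracking parameters are illustrative choices, not prescribed by the slides:

# Sketch of steepest descent for unconstrained minimization: at each iterate,
# step along the negative gradient with a simple backtracking line search.
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                      # search direction = negative gradient
        if np.linalg.norm(d) < tol:
            break
        t = 1.0                           # backtrack until Armijo condition holds
        while f(x + t * d) > f(x) - 0.5 * t * np.dot(d, d):
            t *= 0.5
        x = x + t * d
    return x

# Hypothetical test function: f(x, y) = (x - 1)^2 + 10 (y + 2)^2
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))   # approaches (1, -2)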
Search methods and optimization tools

Search methods in numerical optimization

The gradient-based approaches
◼ Require both function evaluation and gradient or sensitivity information to determine the search direction.
◼ Often lead to a local optimum.

Example: simple cantilever beam — optimization of the cross-section, f(b, h)

The bending stress is given by:

  σ = 6PL / (b·h²)

Gradient with respect to the width of the cross-section:

  ∂σ/∂b = −6PL / (b²·h²)

Gradient with respect to the height of the cross-section:

  ∂σ/∂h = −12PL / (b·h³)

❑ Both gradients are negative
→ increasing either the height or the width reduces the bending stress.
❑ Increasing the height dimension h is more effective than increasing the width b, since σ varies as 1/h² but only as 1/b (see the check below).
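These sensitivities can be checked numerically; the values of P, L, b, and h below are assumed for illustration:

# Numerical check of the bending-stress sensitivities above (assumed data).
P, L = 1000.0, 2.0          # tip load [N], beam length [m]
b, h = 0.05, 0.10           # cross-section width and height [m]

sigma = lambda b, h: 6 * P * L / (b * h**2)

dsig_db = -6 * P * L / (b**2 * h**2)    # analytical d(sigma)/db
dsig_dh = -12 * P * L / (b * h**3)      # analytical d(sigma)/dh

eps = 1e-6                              # finite-difference step
print(dsig_db, (sigma(b + eps, h) - sigma(b, h)) / eps)   # both ~ -4.8e8
print(dsig_dh, (sigma(b, h + eps) - sigma(b, h)) / eps)   # both ~ -4.8e8
# For the same *relative* increase, h is twice as effective as b,
# since sigma ~ 1/h^2 but only ~ 1/b.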
Search methods and optimization tools …

The non-gradient (gradient-free) approach
◼ Uses only the function values in the search process, without the need for gradient information.
◼ The algorithms developed in this approach are very general and can be applied to all kinds of problems: discrete, continuous, or non-differentiable functions.
◼ The methods seek global optimal solutions, as opposed to the local optimum determined by a gradient-based approach.
◼ These techniques require a large number of function evaluations.

Categories of non-gradient approaches
◼ Simulated Annealing (SA)
◼ Genetic Algorithms (GAs)
◼ Particle Swarm Optimization (PSO)
Search methods …

◼ Simulated Annealing algorithm
Simulated annealing (SA) is a stochastic approach that simulates the statistical process of growing crystals, using the annealing process to reach the absolute (global) minimum internal energy configuration. If the temperature in the annealing process is not lowered slowly and enough time is not spent at each temperature, the process can get trapped in a local minimum state of the internal energy. The resulting crystal may have many defects, or the material may even become glass, with no crystalline order. The simulated annealing method for optimization of systems emulates this process. Given a long enough time to run, an algorithm based on this concept finds global minima for continuous-discrete-integer variable nonlinear programming problems.
(Source: Introduction to Optimum Design, J.S. Arora, 3rd ed.)

SA is based on the evolution of a thermal equilibrium
◼ Used to give a good approximation to the global optimum of a given function in a large search space
◼ Used in various combinatorial optimization problems
◼ Attempts to select the "best" combination of solutions from a large set of possible discrete solutions by iterative improvement and exploration
◼ Typical example: the Traveling Salesman Problem (TSP), where a traveling salesman visits many cities at the minimum possible cost
Search methods …

◼ Working principle: According to Wong et al. (1989), the lowest energy gives the most probable state of thermal equilibrium.

The probability that a system of interacting atoms is in a given thermal equilibrium state S is proportional to

  e^(−E(S) / (k_b·T))

where E(S) is the energy associated with state S and k_b is Boltzmann's constant.

◼ Analogy between physical systems and optimization problems

SA algorithm (an example):

  Begin
    S := initial solution S0
    T := initial temperature T0
    While (not stop criteria) do
      begin
      while (not yet equilibrium) do
        begin
        S' := some random neighboring configuration of S
        Δ := E(S') − E(S)
        Prob := min(1, e^(−Δ/T))
        If random(0,1) ≤ Prob then S := S'
        end;
      Update T;
      end;
    Output best solution;
  End;
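A runnable sketch of the pseudocode above for a continuous minimization problem; the neighbor move, cooling schedule, and test function are illustrative choices, not prescribed by the slides:

# Runnable sketch of the SA pseudocode above, applied to a continuous
# test function; the neighbor move and cooling schedule are illustrative.
import math, random

def simulated_annealing(E, s0, T0=10.0, cooling=0.95,
                        n_equil=100, T_min=1e-3, step=0.5):
    s, best = s0, s0
    T = T0
    while T > T_min:                               # stop criterion
        for _ in range(n_equil):                   # "equilibrium" loop at fixed T
            s_new = [x + random.uniform(-step, step) for x in s]  # random neighbor
            delta = E(s_new) - E(s)
            # Accept improvements always; accept worse points with prob e^(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                s = s_new
                if E(s) < E(best):
                    best = s
        T *= cooling                               # update T (geometric cooling)
    return best

# Hypothetical multimodal objective (Rastrigin), global minimum at (0, 0)
E = lambda s: sum(x * x - 10 * math.cos(2 * math.pi * x) + 10 for x in s)
print(simulated_annealing(E, s0=[3.0, -2.5]))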
Search methods …

Example algorithm/flowchart:
1) Fix the initial temperature (T0).
2) Generate a starting point x0 (serves as the current best point X*).
3) Randomly generate a point Xs (neighbouring point).
4) Accept Xs as X* (current best solution) if an acceptance criterion is met. This gives the condition that the probability of accepting a worse point is greater than zero, particularly at higher temperatures.
5) If an equilibrium condition is satisfied, go to step (6); otherwise jump back to (3).
6) If the termination conditions are not met, decrease the temperature according to a certain cooling scheme and jump back to (3). If the termination conditions are satisfied, stop the calculations, accepting the current best value X* as the final ("optimal") solution.

◼ Application:
Application of the SA method requires evaluation of the cost and constraint functions only
→ continuity and differentiability of the functions are not required.
◼ Thus, the method can be useful for non-differentiable problems and for problems whose gradients cannot be calculated or are too expensive to calculate.
◼ Application of the SA technique requires careful design of the basic elements, such as:
(1) a concise description of the problem configuration
(2) systematically generating the neighboring solutions
(3) choosing a suitable cost function
(4) defining the annealing schedule, i.e.,
- specifying the initial temperature
- the rule for changing the temperature
- the duration of the search at each temperature
- the termination criteria of the algorithm
Search methods …

Genetic algorithms (GAs)
◼ GA belongs to the class of stochastic search optimization methods (non-gradient algorithms).
◼ Most computations of a GA are based on random number generation; thus,
  ◼ executed at different times, the algorithm can lead to a different sequence of designs, and
  ◼ the same initial conditions can lead to different problem solutions.
◼ GAs seek global optimum solutions, as opposed to the local solutions determined by a derivative-based optimization algorithm.
◼ Benchmark functions are used to test the performance of optimization algorithms.
Search methods …

Genetic algorithms (GAs)
◼ Based on the adaptive methods of natural populations (biological systems), i.e., the Darwinian theory of natural selection; thus also called evolutionary methods or nature-inspired methods.
◼ Optimizes a fitness function (simulates the principle of "survival of the fittest").
◼ A robust algorithm: continuity and differentiability of the functions are not required.
◼ Deals with a wide range of problem types (discrete, continuous, non-differentiable) that are difficult to solve using other methods.

◼ Optimization problem formulation:

  Minimize f(x), for x ∈ S

where S is the set of feasible designs defined by equality and inequality constraints.
◼ For unconstrained problems, S is the entire design space.
◼ For a GA solution, constrained problems must be converted to unconstrained problems (a common conversion is sketched below).
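One standard way to perform this conversion (a sketch; the slides do not prescribe a specific scheme) is an exterior penalty function that adds the constraint violations to the objective being minimized:

# Sketch of converting a constrained problem to an unconstrained one for a GA
# via an exterior penalty (one standard choice; the weight r is illustrative).
def penalized_fitness(f, g_list, h_list, x, r=1e3):
    """f: objective; g_list: inequality constraints g(x) <= 0;
    h_list: equality constraints h(x) = 0; r: penalty weight."""
    penalty = sum(max(0.0, g(x))**2 for g in g_list)   # violated inequalities only
    penalty += sum(h(x)**2 for h in h_list)            # equality violations
    return f(x) + r * penalty                          # minimized by the GA

# Hypothetical usage: minimize x1 + x2 subject to 1 - x1*x2 <= 0
val = penalized_fitness(lambda x: x[0] + x[1], [lambda x: 1 - x[0] * x[1]], [], [2.0, 0.1])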
Genetic Algorithms (GAs) …

◼ Concepts and classical steps

P = population (set of design points)
P0 = set of initial design points
Pi = set of design points at the current iteration (generation)

Chromosome: a design point in the population (whether feasible or infeasible) containing values for all design variables of the system.
Gene: a scalar component of the design vector representing the value of a particular design variable.
Generation: an iteration of the GA; it has a population of size Np that is manipulated in the GA process.
Evaluation: calculation of the fitness function, which gives the relative importance of a design. A higher fitness value → a better design.

Three genetic operators are used to accomplish this task: reproduction, crossover, and mutation.

Selection (reproduction): identifying a set of designs from the current population and carrying them over (copying) to the next generation.

Crossover: corresponds to allowing selected members of the new population to exchange characteristics of their designs among themselves. It entails selecting starting and ending positions on a pair of randomly selected mating strings and simply exchanging the string of 0s and 1s between these positions.

Mutation: a step that safeguards the process from a complete premature loss of valuable genetic material during reproduction and crossover. In terms of a binary string, this step corresponds to selecting a few members of the population, determining a location on their strings at random, and switching the 0 to 1 or vice versa.
Genetic Algorithms (GAs) …

◼ Concepts and classical steps

The basic idea of a genetic algorithm is to generate a new set of designs (population) from the current set such that the average fitness of the population is improved.

The iteration process continues until
- no further improvement is observed in the fitness function, or
- the stated termination criteria are reached.

Reproduction: the process of creating the next generation
◼ There are many different strategies to implement this reproduction operator, also called the selection process:

(1) Elitist method: elite children are the individuals in the current generation with the best fitness values. These individuals automatically survive to the next generation.

(2) Random selection: the simplest method; randomly selects two points from the population until the requisite number of pairs is completed. An ineffective approach, due to the lack of a mechanism for moving the population with a better objective function forward.

(3) Tournament method: randomly pairs up np points and selects the best point from each pair to join the mating pool.
Genetic Algorithms …

Crossover
◼ Crossover children are created by combining the vectors of a pair of parents.
◼ Crossover introduces variation into the population.
◼ It combines/mixes two different designs (chromosomes) in the population.
◼ Commonly used methods (among many):
  ◼ one-cut-point method
  ◼ two-cut-point method
(Two parents → two offspring)

Mutation
◼ Mutation children are created by introducing random changes in the genes of an individual parent.
◼ Mutation safeguards against premature loss of valuable genetic material during the reproduction and crossover steps.
◼ A few randomly selected members switch a 0 to 1 and vice versa.

Example, for (a) binary strings and (b) permutation strings:

  1 0 0 1 0  → mutation →  1 1 0 0 0
  4 X R 5 P  → mutation →  4 5 R X P

A sketch of these operators is given below.
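A minimal sketch of the one-cut-point crossover and bit-flip mutation described above, for binary-string chromosomes; the mutation probability is an illustrative value:

# Sketch of one-cut-point crossover and bit-flip mutation on binary strings.
import random

def one_point_crossover(parent1, parent2):
    """Exchange the tails of two equal-length bit lists at a random cut point."""
    cut = random.randint(1, len(parent1) - 1)          # cut strictly inside the string
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])

def mutate(chromosome, p_mut=0.05):
    """Flip each bit independently with small probability p_mut."""
    return [bit ^ 1 if random.random() < p_mut else bit for bit in chromosome]

c1, c2 = one_point_crossover([1, 0, 0, 1, 0], [1, 1, 0, 0, 0])
print(c1, c2, mutate(c1))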
Genetic Algorithms …

◼ The overall process
1) Population generation (selection), e.g., the roulette wheel approach
   [Figure: roulette wheel with selection probabilities proportional to fitness, e.g., A: 31 %, B: 16 %, C: 18 %, D: 20 %, E: 10 %, F: 6 %]
2) Encoding (create chromosomes): binary-, integer-, or real-number-based encoding
3) Crossover: a random cut point is chosen and two parents produce two offspring (applied with probability pcross)
4) Mutation (random change of genes)

General algorithm of a GA

Inputs: x_l, x_u — lower and upper bounds
Outputs: x* — best point, and f* — corresponding function value
--------------------------------
  k = 0
  Pk = {x(1), x(2), …, x(np)}                 Generate initial population
  While k < kmax Do
    Compute f(x) for all x ∈ Pk               Evaluate objective function
    Select np/2 parent pairs from Pk          Selection
    Generate a new population of np
      offspring (Pk+1)                        Crossover
    Randomly mutate some points in
      the population                          Mutation
    k = k + 1
  End while
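Putting the pieces together, a minimal runnable version of the loop above, with binary encoding decoded to one real variable; the test problem, population size, and rates are illustrative assumptions:

# Minimal runnable GA matching the loop above: roulette-wheel selection,
# one-cut-point crossover, bit-flip mutation. Problem data are illustrative.
import random

N_BITS, NP, K_MAX = 16, 40, 100
decode = lambda c: -2 + 4 * int("".join(map(str, c)), 2) / (2**N_BITS - 1)
fitness = lambda c: -(decode(c)**2)            # maximize -> minimum of x^2 at x = 0

def roulette(pop, fits):
    shifted = [f - min(fits) + 1e-12 for f in fits]    # make all weights positive
    return random.choices(pop, weights=shifted, k=2)   # spin the wheel twice

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(NP)]
for k in range(K_MAX):
    fits = [fitness(c) for c in pop]
    nxt = [max(pop, key=fitness)]                      # elitist: keep the best point
    while len(nxt) < NP:
        p1, p2 = roulette(pop, fits)                   # selection
        cut = random.randint(1, N_BITS - 1)            # one-cut-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ 1 if random.random() < 0.02 else b for b in child]  # mutation
        nxt.append(child)
    pop = nxt
print("best x ~", decode(max(pop, key=fitness)))       # approaches 0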
Genetic Algorithms …

◼ Comparison of GA with gradient-based optimization

  Classical (gradient-based) optimization       | GA-based optimization
  ----------------------------------------------|----------------------------------------------
  Generates a single point at each iteration;   | Generates a population of points at each
  the sequence of points approaches an optimal  | iteration; the best point in the population
  solution.                                     | approaches an optimal solution.
  Selects the next point in the sequence by a   | Selects the next population by computation
  deterministic computation.                    | that uses random number generators.
  Typically converges quickly to a local        | Typically takes many function evaluations to
  solution.                                     | converge; may or may not converge to a local
                                                | or global minimum.

Some applications of GAs

Design optimization problems
GAs excel at solving design optimization problems, aiming to find the best solution among a large set of possibilities. Among others, GAs find solutions to mathematical function optimization, parameter tuning, resource allocation, and more.

Combinatorial optimization
Combinatorial optimization problems, which involve finding the best arrangement or combination of elements from a finite set, are good candidates for GAs. Examples include the traveling salesman problem (TSP), the vehicle routing problem (VRP), job scheduling, bin packing, etc.

Machine learning
GAs are employed in optimizing the configuration and parameters of machine learning models, and for feature selection, searching a population of feature subsets to identify the most relevant subset for a given task.
Genetic Algorithms …

Some applications of GAs (continued)

Evolutionary robotics
GAs can solve evolutionary robotics problems, including evolving robot behavior and control strategies, by representing the robot's control parameters or policies as chromosomes. Typically, they can be applied to robotic strategies that are difficult to solve analytically.

Image and signal processing
In addition to image reconstruction, feature extraction, and pattern recognition, GAs are applied to optimize the parameters of reconstruction algorithms to enhance image quality.

Drawbacks of GAs

Two key drawbacks of genetic algorithms:
◼ They require a large amount of calculation, even for reasonably sized problems or for problems where the evaluation of the functions itself requires massive calculation.
  → Can be overcome by parallel computation.
◼ There is no absolute guarantee that a global solution has been obtained.
  → Can be overcome, to some extent, by executing the algorithm several times and allowing it to run longer.
Genetic Algorithms …

Examples
1) Sequencing-type problems: 11 bolts are to be inserted into a metal plate by a robot arm (a TSP-type formulation). The positions of the holes are given in a coordinate system.
2) GA-based shape optimization (GAs in CAD):
❑ Parametric GA: a grid structure of control points, applying NURBS/B-splines
❑ Cell GA: subdividing the space into small rectangular domains (material: 1, void: 0)

See also external sources, for example: [Link]

Summary and Questions
Particle Swarm Optimization (PSO) — to be continued …
Particle Swarm Optimization (PSO)

◼ PSO is one of the nature-inspired search methods.
◼ PSO mimics the social behavior of bird flocking or fish schooling (moving in search of food) and translates it into a computational algorithm.
◼ It is similar to other nature-inspired methods, also called metaheuristic methods, like GA and SA.
◼ It can search very large spaces of candidate solutions.
◼ It can overcome some of the challenges posed by multiple objectives, mixed design variables, irregular/noisy problem functions, implicit problem functions, expensive and/or unreliable function gradients, and uncertainty in the model and the environment.
◼ It starts with a randomly generated set of solutions called the initial population, leading to an optimum solution searched by updating generations.
◼ Unlike GA,
  ◼ PSO has fewer algorithmic parameters to specify, and
  ◼ it does not require binary encoding or decoding, and is thus easier to implement.

PSO Algorithm
◼ Search strategy: follow the bird that is closest to the food, even though the exact location of the food is unknown.
◼ Each member of the population (swarm) is a particle (i.e., a bird).
◼ Step 0. Initialization:
  ◼ Set the iteration counter at k = 1.
  ◼ Initialize the position x^(i) randomly for each particle.
  ◼ Initialize the velocity v^(i) randomly for each particle.
  ◼ Each particle searches for the optimum value by updating the generation (iteration).
◼ Step 1. Initial generation:
  ◼ Generate Np particles x^(i)_0 using a random number generator.
  ◼ Evaluate the cost function (fitness function) f(x^(i)_0) for each of these points.
  ◼ Determine the best solution among all particles as x_g(k).
Particle Swarm Optimization (PSO)

PSO Algorithm …

◼ Step 2. Update velocities and current positions.
◼ Update the velocity of each particle using

  v^(i)_(k+1) = α·v^(i)_k + β·(x^(i)_pbest − x^(i)_k)/Δt + γ·(x_gbest − x^(i)_k)/Δt;  i = 1 to Np

◼ Update the position of particle i for iteration k+1:

  x^(i)_(k+1) = x^(i)_k + v^(i)_(k+1)·Δt;  where Δt is a constant artificial time step

- The "inertia" term α·v^(i)_k determines how similar the new velocity is to that of the previous iteration through the parameter α, whose typical values are in the range [0.8, 1.2]; it controls the tendency toward local search versus global search.
- The "memory" term β·(x^(i)_pbest − x^(i)_k) is a vector pointing toward x^(i)_pbest, the best position particle i has visited in all iterations so far. The random number β in the range [0, 2] introduces a stochastic component into the algorithm.
- The "social" term γ·(x_gbest − x^(i)_k) points toward x_gbest, the best point the entire swarm has found so far; γ is a random number in the range [0, 2].

◼ The artificial time step Δt can be eliminated by multiplying the velocity update by Δt, giving the position increment

  Δx^(i)_(k+1) = α·Δx^(i)_k + β·(x^(i)_pbest − x^(i)_k) + γ·(x_gbest − x^(i)_k)
Particle Swarm Optimization (PSO)

PSO Algorithm …

◼ Step 3. Update the best solutions. Calculate the cost function at all new points, f(x^(i)_(k+1)).
◼ For each particle, perform the following check:

  If f(x^(i)_(k+1)) ≤ f(x^(i)_pbest), then x^(i)_pbest = x^(i)_(k+1);
  otherwise, x^(i)_pbest is kept, for each i = 1 to Np.

  If f(x^(i)_pbest) ≤ f(x_gbest), then x_gbest = x^(i)_pbest.

◼ x_gbest = global optimum found so far; x^(i)_pbest = optimum found by particle i.

◼ Step 4. Stopping criterion.
◼ Check for convergence of the iterative process. If a stopping criterion is satisfied (i.e., if all of the particles have converged to the best swarm solution), stop. Otherwise, set k = k + 1 and go to Step 2.

A runnable sketch of these steps is given below.
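The following is a runnable sketch of Steps 0–4 with Δt = 1, so the simplified increment form above applies. The value of α, the velocity clamp (a common practical safeguard, not mentioned on the slides), and the test function are illustrative assumptions; β and γ are random in [0, 2] as stated above.

# Runnable sketch of the PSO steps above (delta_t = 1; parameters illustrative).
import random

def pso(f, dim=2, n_p=30, k_max=200, lo=-5.0, hi=5.0, alpha=0.9):
    v_max = 0.2 * (hi - lo)                           # clamp keeps velocities bounded
    # Steps 0-1: random positions and velocities, evaluate the initial swarm
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_p)]
    v = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_p)]
    p_best = [xi[:] for xi in x]                      # best point of each particle
    g_best = min(x, key=f)[:]                         # best point of the whole swarm
    for _ in range(k_max):
        for i in range(n_p):
            beta, gamma = random.uniform(0, 2), random.uniform(0, 2)
            for d in range(dim):                      # Step 2: velocity and position
                v[i][d] = (alpha * v[i][d]
                           + beta * (p_best[i][d] - x[i][d])    # "memory" term
                           + gamma * (g_best[d] - x[i][d]))     # "social" term
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) <= f(p_best[i]):               # Step 3: update best solutions
                p_best[i] = x[i][:]
                if f(p_best[i]) <= f(g_best):
                    g_best = p_best[i][:]
    return g_best                                     # Step 4 simplified: fixed k_max

sphere = lambda x: sum(xi * xi for xi in x)           # hypothetical test function
print(pso(sphere))                                    # approaches (0, 0)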
Summary and Questions

What did we learn in this chapter?
◼ Structural optimization.
◼ Search methods of optimization.
◼ Gradient-based and gradient-free methods were compared.
◼ We observed that gradient-based methods can fail to reach the global optimum.
◼ Gradient-free methods are known for their capacity to find the global optimum.
◼ The gradient-free methods (SA, GA, and PSO) are nature-inspired stochastic methods, and their algorithms are well developed, including in MATLAB.

Next: Chap. 6: Additive Manufacturing Technologies