Chap 5.3 Structural Optimization & Introduction To Search Methods
◼ Introduction
◼ General mathematical formulation of the optimization problem statement
◼ Shape and size optimization
◼ Topology optimized design and generative design
◼ Optimization Methods and Tools
Formulating structural optimization

◼ General mathematical form:
  Minimize: objective function (optimization variables, state variables)
  Subject to:
  ◼ Constraints on state variables
  ◼ Constraints on resources
  ◼ Constraints on performance
  ◼ Limits on variables
  (Diagram annotations: state variables such as displacement and temperature obtained from the governing differential equations; data such as material properties and loads; quantities such as cost, size, weight, stiffness, strength and natural frequency appearing in the objective and constraints.)

Formulating structural optimization … related to the structural optimization of geometric features

◼ General mathematical form (example):
  Minimize f(x1, x2)   (optimization variables x1, x2)
  Subject to
    h(x1, x2) = 0
    g(x1, x2) ≤ 0
    x1_l ≤ x1 ≤ x1_u and x2_l ≤ x2 ≤ x2_u
  (A numerical sketch of this form is given below.)

◼ Common objective functions:
  - stiffness (K)
  - weight (W)
  - material distribution
  - manufacturability (M)
  - cost (C)
◼ Five possible design variables: material, sizing, configuration, shape and topology
◼ Common constraints:
  - Location of support points (BC)
  - Size/weight limitations (S)
  - Max. allowable stress (σ)
  - Max. weight (W_max)
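To make the general mathematical form above concrete, here is a minimal sketch (not from the slides) of the same structure in a numerical optimizer; the objective, constraint functions, bounds and starting point are illustrative placeholders.

# Minimal sketch (assumed example): minimize f(x1, x2) subject to h(x1, x2) = 0,
# g(x1, x2) <= 0 and bounds on x, expressed with scipy.optimize.minimize.
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2      # objective f(x1, x2)
h = lambda x: x[0] + x[1] - 2.0                      # equality constraint h(x) = 0
g = lambda x: 1.0 - x[0]*x[1]                        # inequality written as g(x) <= 0

res = minimize(
    f, x0=[0.5, 0.5],
    constraints=[{"type": "eq",   "fun": h},
                 {"type": "ineq", "fun": lambda x: -g(x)}],  # scipy expects fun(x) >= 0
    bounds=[(0.0, 3.0), (0.0, 3.0)],                         # x_l <= x <= x_u
)
print(res.x, res.fun)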
Categories of structural optimization

◼ Structural optimization approaches are categorized into three: sizing, shape and topology optimization.
◼ Components of structural optimization

Topology optimization
- Highest level in structural optimization
- Decides the connectivity and the number of holes in a structure
- The main challenge is identifying the variables that decide the topology
- Topology should be decided first

Shape optimization
- Shape of a feature in a structure
- Shape of a segment of a structure
- Shape of holes in a structure
- Identifying the shape of a feature, a segment or a hole is easier compared to topology
- Shape optimization follows topology optimization
Categories of structural optimization …

◼ Optimization variables:
  ◼ cross-sectional area,
  ◼ element configuration,
  ◼ material selection,
  ◼ etc.
Categories of structural optimization …

Shape optimization:
◼ This technique modifies the shape while maintaining a constant topology.
◼ It leads to size optimization, i.e. size optimization is a special case (by-product) of shape optimization.
◼ Design (optimization) variables: parameters defining certain features of the shape, for example,
  ◼ radius of a hole,
  ◼ side of a square,
  ◼ boundary of a solid.

Topology optimization: typical steps
◼ Why? In cases where changes in shape and size do not lead to satisfaction of the design criterion for reduction of structural weight.
◼ Ensures a globally optimized shape with the best material distribution.
◼ The design variables that provide optimal performance define the particular topology of the design.
◼ Focused, in earlier applications, on truss-like structures.
Categories of structural optimization …

Topology optimization:
◼ Does not need an explicit definition of optimization parameters describing the geometry (the material distribution itself is the variable).
◼ Objective (example): minimize the structural compliance (strain energy) or maximize the natural frequency, while satisfying the specified constraints.
▪ Two cases (illustrated in the figures).
Categories of structural optimization …

Example: topology optimization tools
◼ Integrated into commercial FEA software such as ANSYS & ABAQUS
◼ Stand-alone tools:
  - Altair HyperMesh / OptiStruct
  - solidThinking Inspire

Topology optimization and generative design

◼ Generative design
  ◼ One of the applications of AI algorithms in design (AI- and machine-learning-driven design).
  ◼ A method to generate and evaluate multiple design alternatives based on input from the user.
  ◼ A method of autonomously creating manufacturing-ready optimal designs from a set of system design requirements.
  ◼ Engineers can interactively specify their design requirements and goals, including materials and manufacturing processes.
Topology optimization and generative design …

◼ Topology optimization (TO) is primarily used for lightweight design: a design engineer defines a design space, and the TO software removes any part of that design space that does not contribute a defined percentage to the structural integrity of the product. TO is well suited when there is a set space and an overall idea, and the part needs to be made as lightweight as possible using a computer algorithm.
◼ Generative design is an AI-driven approach that creates many alternative designs in an evolutionary way, whereas TO creates only one design solution that has been optimized for structural integrity based on existing criteria.

Generative design …

◼ Benefits (from PTC – Creo software (videos))
  Improved efficiency:
  - Users can explore many design alternatives in a short period of time
  ➔ more efficient and innovative designs can be created more quickly.
  Enhanced performance:
  - Considers a wide range of factors, such as materials, manufacturing processes, and performance requirements.
  - Helps designers create designs that are optimized for specific goals.
  ➔ designs that are stronger, lighter, or more efficient than those designed using traditional methods.
Generative design …

Some typical examples (figures):
- Examples of heat-exchanger geometries with different topologies
- Optimized geometry (Source: Neural Concept)
- Car frame generated by a deep learning algorithm

Software tools for generative design
◼ Creo Generative Design from PTC
◼ NX from Siemens
◼ MSC Apex Generative Design
Generative design …

Example (Demo): initial design, many alternative designs generated by AI, final design (Courtesy: Autodesk Fusion 360)
Key benefits:
- Lighter design
- Fewer parts in the assembly
- Stiffer and stronger
- Cheaper

Classification of Optimization Methods

Search methods:
- Involve numerical calculation through an iterative process.
- Gradient-based: derivatives of the objective and constraint functions guide the search.
  Example: the steepest descent method for unconstrained optimization (invented by Cauchy); a minimal sketch follows below.
- Non-gradient approaches use certain rules not based on derivatives, e.g. SA, GA, …
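A minimal sketch of the steepest descent idea mentioned above; the quadratic test function, fixed step size and tolerance are assumptions, not values from the slides.

# Minimal steepest-descent sketch (assumed example): minimize a smooth function
# by repeatedly stepping in the direction of the negative gradient.
import numpy as np

def f(x):                      # illustrative objective f(x1, x2)
    return (x[0] - 3.0)**2 + 2.0*(x[1] + 1.0)**2

def grad_f(x):                 # its analytical gradient
    return np.array([2.0*(x[0] - 3.0), 4.0*(x[1] + 1.0)])

x = np.array([0.0, 0.0])       # starting point
alpha = 0.1                    # fixed step size (a line search is used in practice)
for _ in range(200):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:   # stop when the gradient is (nearly) zero
        break
    x = x - alpha * g
print(x, f(x))                  # approaches the minimizer (3, -1)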
Search methods and optimization tools

Search methods in numerical optimization (overview figure)

Gradient-based approaches, example: a simple cantilever beam of length L with a rectangular cross-section of width b and height h, loaded by a tip force P. The bending stress is given by:

  σ = 6PL / (b h²)
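As a hedged illustration (not from the slides), the beam example can be posed as a sizing optimization: minimize the cross-sectional area b·h subject to the bending-stress limit σ = 6PL/(b h²) ≤ σ_allow. The load, length, allowable stress and bounds below are placeholders.

# Minimal sketch (assumed data): size a cantilever's rectangular cross-section
# to minimize area while keeping the bending stress below an allowable value.
from scipy.optimize import minimize

P, L = 2000.0, 1.0            # tip load [N] and beam length [m] (illustrative)
sigma_allow = 150e6           # allowable stress [Pa] (illustrative)

area   = lambda x: x[0] * x[1]                        # objective: cross-section area b*h
stress = lambda x: 6.0 * P * L / (x[0] * x[1]**2)     # sigma = 6PL/(b h^2)

res = minimize(
    area, x0=[0.05, 0.05],
    constraints=[{"type": "ineq", "fun": lambda x: sigma_allow - stress(x)}],
    bounds=[(0.01, 0.2), (0.01, 0.3)],                # practical size limits [m]
)
b_opt, h_opt = res.x
print(b_opt, h_opt, stress(res.x))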
Search methods …

◼ Simulated Annealing (SA) algorithm
  Simulated annealing is a stochastic approach that simulates the statistical process of growing crystals, using the annealing process to reach the absolute (global) minimum internal energy configuration. If the temperature in the annealing process is not lowered slowly and enough time is not spent at each temperature, the process can get trapped in a local minimum state for the internal energy; the resulting crystal may have many defects, or the material may even become glass with no crystalline order. The simulated annealing method for optimization of systems emulates this process. Given a long enough time to run, an algorithm based on this concept finds global minima for continuous-, discrete- and integer-variable nonlinear programming problems.

◼ SA is based on the evolution of a thermal equilibrium
  ◼ Used to give a good approximation to the global optimum of a given function in a large search space
  ◼ Used in various combinatorial optimization problems
  ◼ Attempts to select the "best" combination of solutions from a large set of possible discrete solutions by iterative improvement and exploration
  ◼ Typical example: the Traveling Salesman Problem (TSP), where a traveling salesman visits many cities with the minimum possible cost.
Search methods …

◼ Working principle:
  The probability that a system of interacting atoms is in a given thermal equilibrium state S is given by

    e^( −E(S) / (k_b T) )

  where E(S) is the energy associated with state S and k_b is Boltzmann's constant.
◼ Analogy between physical systems and optimization problems

According to Wong et al. (1989), the lowest energy gives the most probable state of thermal equilibrium.

SA algorithm (an example)
Begin
  S := initial solution S0
  T := initial temperature T0
  While (not stop criteria) do
  begin
    while (not yet equilibrium) do
    begin
      S' := some random neighboring configuration of S
      Δ  := E(S') – E(S)
      Prob := min(1, e^(−Δ/T))
      If random(0,1) ≤ Prob then S := S'
    end;
    Update T;
  end;
  Output best solution;
End;
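The pseudocode above can be transcribed almost line by line into Python. This is a minimal sketch: the objective ("energy"), the neighbour move and the geometric cooling schedule are illustrative assumptions, not specified in the slides.

# Minimal simulated-annealing sketch mirroring the pseudocode above.
import math, random

def energy(x):                        # E(S): illustrative 1-D objective to minimize
    return x*x + 10.0*math.sin(3.0*x)

x = random.uniform(-5.0, 5.0)         # S := initial solution S0
T = 10.0                              # T := initial temperature T0
best = x
while T > 1e-3:                       # stop criterion: temperature floor
    for _ in range(100):              # inner loop: iterations at this temperature
        x_new = x + random.gauss(0.0, 0.5)          # S' := random neighbour of S
        delta = energy(x_new) - energy(x)           # Δ  := E(S') - E(S)
        prob = min(1.0, math.exp(-delta / T))       # Prob := min(1, e^(-Δ/T))
        if random.random() <= prob:                 # worse moves accepted with prob.
            x = x_new
            if energy(x) < energy(best):
                best = x
    T *= 0.9                          # update T (geometric cooling)
print(best, energy(best))             # output best solution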
Search methods …

Example algorithm/flowchart:
1) Fix the initial temperature (T0).
2) Generate a starting point x0 (serves as the current best point X*).
3) Randomly generate a point Xs (neighbouring point).
4) Accept Xs as X* (current best solution) if an acceptance criterion is met. The criterion is such that the probability of accepting a worse point is greater than zero, particularly at higher temperatures.
5) If an equilibrium condition is satisfied, go to step (6); otherwise jump back to (3).
6) If the termination conditions are not met, decrease the temperature according to a cooling scheme and jump back to (3). If the termination conditions are satisfied, stop, accepting the current best value X* as the final ("optimal") solution.
(Alternative flowchart shown in the figure.)

◼ Application:
  Application of the SA method requires evaluation of the cost and constraint functions only.
  ➔ Continuity and differentiability of the functions are not required.
◼ Thus, the method can be useful for nondifferentiable problems, and for problems for which gradients cannot be calculated or are too expensive to calculate.
◼ Application of the SA technique requires careful design of the basic elements, such as:
  (1) a concise description of the problem configuration
  (2) systematically generating the neighboring solutions
  (3) choosing a suitable cost function
  (4) defining the annealing schedule, i.e.,
      - specifying the initial temperature
      - the rule for changing the temperature
      - the duration of the search at each temperature
      - the termination criteria of the algorithm
Search methods …

Genetic algorithms (GAs) …
Search methods …

Genetic algorithms (GAs)
◼ Based on adaptive methods of natural populations (biological systems), i.e. the Darwinian theory of natural selection; thus also called evolutionary or nature-inspired methods.
◼ Optimize a fitness function (simulating the principle of "survival of the fittest").
◼ A robust algorithm: continuity and differentiability of the functions are not required.
◼ Deal with a wide range of problem types (discrete, continuous, non-differentiable) that are difficult to solve using other methods.

Genetic Algorithms …
◼ Optimization problem formulation:

  Minimize f(x), for x ∈ S

  where S is the set of feasible designs defined by equality and inequality constraints.
◼ For unconstrained problems, S is the entire design space.
◼ For a GA solution, constrained problems must be converted to unconstrained problems (e.g. via penalty functions).
Genetic Algorithms (GAs) …

◼ Concepts and classical steps
  P  = population (set of design points)
  P0 = set of initial design points
  Pi = set of design points at the current iteration (generation)

  Chromosome: a design point in the population (whether feasible or infeasible) containing values for all design variables of the system.
  Gene: a scalar component of the design vector, representing the value of a particular design variable.
  Generation: an iteration of the GA; it has a population of size Np that is manipulated in the GA process.
  Evaluation: calculation of the fitness function, which gives the relative importance of a design. Higher fitness value → better design.

◼ Three genetic operators are used to accomplish this task: reproduction, crossover, and mutation (see the sketch after this list).
  Selection (reproduction): identifying a set of designs from the current population and carrying them over (copying) to the next generation.
  Crossover: corresponds to allowing selected members of the new population to exchange characteristics of their designs among themselves. It entails selecting starting and ending positions on a pair of randomly selected mating strings, and simply exchanging the string of 0s and 1s between these positions.
  Mutation: a step that safeguards the process from a complete premature loss of valuable genetic material during reproduction and crossover. In terms of a binary string, it corresponds to selecting a few members of the population, determining a location on the strings at random, and switching a 0 to a 1 or vice versa.
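A minimal sketch of the crossover and mutation operators on binary strings; the string length and mutation rate are assumptions, not values from the slides.

# Minimal sketch of binary-string crossover and mutation (illustrative parameters).
import random

def crossover(parent1, parent2):
    # Exchange the segment between two random cut positions of two mating strings.
    i, j = sorted(random.sample(range(len(parent1)), 2))
    child1 = parent1[:i] + parent2[i:j] + parent1[j:]
    child2 = parent2[:i] + parent1[i:j] + parent2[j:]
    return child1, child2

def mutate(chromosome, rate=0.01):
    # Flip each bit (0 <-> 1) with a small probability to preserve diversity.
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

# Usage example with two 10-bit chromosomes
p1 = [random.randint(0, 1) for _ in range(10)]
p2 = [random.randint(0, 1) for _ in range(10)]
c1, c2 = crossover(p1, p2)
print(mutate(c1), mutate(c2))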
Genetic Algorithms …

◼ Concepts and classical steps
  The basic idea of a genetic algorithm is to generate a new set of designs (population) from the current set such that the average fitness of the population is improved.
  The iteration process continues until
  - no further improvement is observed in the fitness function, or
  - the stated termination criteria are reached.

Reproduction: the process of creating the next generation
◼ There are many different strategies to implement this reproduction operator, also called the selection process (a sketch of tournament selection follows below):
  (1) Elitist method: elite children are the individuals in the current generation with the best fitness values; these individuals automatically survive to the next generation.
  (2) Random selection: the simplest method, which randomly selects two points from the population until the requisite number of pairs is completed. It is an ineffective approach, because it lacks a mechanism for carrying the points with better objective function values forward.
  (3) Tournament method: randomly pairs up np points and selects the best point from each pair to join the mating pool.
◼ Crossover children are created by combining the vectors of a pair of parents.
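A minimal sketch of tournament selection; the fitness function and the population used here are placeholders.

# Minimal tournament-selection sketch: pair points at random and keep the fitter
# of each pair for the mating pool. Population and fitness are illustrative.
import random

def tournament_selection(population, fitness, n_pairs):
    # Return a mating pool built from the winners of n_pairs random pairings.
    pool = []
    for _ in range(n_pairs):
        a, b = random.sample(population, 2)                 # random pair of candidates
        pool.append(a if fitness(a) >= fitness(b) else b)   # higher fitness wins
    return pool

# Usage example: fitness = sum of bits of a 10-bit chromosome
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
mating_pool = tournament_selection(population, fitness=sum, n_pairs=10)
print(mating_pool[:3])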
Genetic Algorithms …

Crossover (figure)
◼ Mutation (figure)
Genetic Algorithms …

◼ The overall process
1) Population generation (selection), e.g. the roulette wheel approach
   (Figure: roulette wheel with slices proportional to fitness, e.g. A 31 %, B 16 %, C 18 %, D 20 %, E 10 %, F 6 %.)
2) Encoding (create chromosomes): binary-, integer- or real-number-based encoding
3) Cross-over: two parents, a random cut point, exchange after the crossover (figure)

General Algorithm of GA
Inputs:  x_l, x_u: lower and upper bounds on the variables
Outputs: x*: best point and f*: corresponding function value
--------------------------------
k = 0
Pk = { x(1), x(2), … , x(np) }                     Generate initial population
While k < kmax Do
  Compute f(x) for all x ∈ Pk                      Evaluate objective function
  Select np/2 parent pairs from Pk for crossover   Selection
  Crossover …
(A runnable sketch of this loop is given below.)
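A compact, runnable sketch of this overall loop. It is a minimal illustration assuming real-number encoding, roulette-wheel selection, one-point crossover and Gaussian mutation on a simple test function; none of these specifics come from the slides.

# Minimal GA loop sketch: roulette-wheel selection, one-point crossover and
# mutation on real-valued chromosomes. Problem and parameters are illustrative.
import random

def fitness(x):                               # maximize this (higher = better design)
    return -(x[0]**2 + x[1]**2)               # optimum at (0, 0)

def roulette(pop, fit):                       # pick one parent, slice ~ relative fitness
    weights = [f - min(fit) + 1e-9 for f in fit]       # shift so all weights are positive
    return random.choices(pop, weights=weights, k=1)[0]

np_, kmax, xl, xu = 20, 100, -5.0, 5.0        # population size, iterations, bounds
P = [[random.uniform(xl, xu) for _ in range(2)] for _ in range(np_)]

for k in range(kmax):
    fit = [fitness(x) for x in P]                      # evaluate objective function
    best = max(P, key=fitness)
    newP = [best[:]]                                   # elitism: keep the best point
    while len(newP) < np_:
        p1, p2 = roulette(P, fit), roulette(P, fit)    # selection
        cut = 1                                        # one-point crossover (two genes)
        child = p1[:cut] + p2[cut:]
        child = [min(xu, max(xl, g + random.gauss(0, 0.1)))   # mutation, clipped to bounds
                 if random.random() < 0.2 else g for g in child]
        newP.append(child)
    P = newP
print(max(P, key=fitness))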
Genetic Algorithms … (figures)

Drawbacks of GAs
Genetic Algorithms …

Examples
1) Sequencing-type problems: 11 bolts are to be inserted into a metal plate by a robot arm (a TSP-type formulation). The positions of the holes are given in a coordinate system.
2) GA-based shape optimization (GAs in CAD):
   ❑ Parametric GA: a grid structure of control points used in applying NURBS/B-splines

Summary and Questions

Particle Swarm Optimization (PSO) (to be continued) …
Particle Swarm Optimization (PSO)

◼ PSO mimics the social behavior of bird flocking or fish schooling (moving in search of food) and translates it into a computational algorithm.
◼ Similar to other nature-inspired methods, also called metaheuristic methods, like GA and SA.
◼ It can search very large spaces for candidate solutions.
◼ It can overcome some of the challenges due to multiple objectives, mixed design variables, irregular/noisy problem functions, implicit problem functions, expensive and/or unreliable function gradients, and uncertainty in the model and the environment.
◼ It starts with a randomly generated set of solutions called the initial population; an optimum solution is searched for by updating generations.
◼ Unlike GA,
  ◼ PSO has fewer algorithmic parameters to specify, and
  ◼ it does not require binary number encoding or decoding and is thus easier to implement.

Particle Swarm Optimization (PSO) …

◼ Search strategy: follow the bird that is closest to the food, even though the exact location of the food is unknown.
◼ Each member of the population (swarm) is a particle (i.e. a bird).
◼ Step 0. Initialization:
  ◼ Set the iteration counter at k = 1.
  ◼ Initialize the position x(i) randomly for each particle.
  ◼ Initialize the velocity v(i) randomly for each particle.
  ◼ Each particle searches for the optimum value by updating the generation (iteration).
◼ Step 1. Initial generation:
  ◼ Generate Np particles x^(0) using a random number generator.
  ◼ Evaluate the cost function (fitness function) f(x^(0)) for each of these points.
  ◼ Determine the best solution among all particles as x_g(k).
Particle Swarm Optimization (PSO) …

◼ If f(x_best^(i)(k+1)) ≤ f(x_best), then x_best = x_best^(i)
◼ x_best = global optimum found so far; x_pBest^(i) = optimum found by particle i
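A minimal PSO sketch; the inertia and acceleration coefficients, swarm size and test function are assumptions, not values from the slides.

# Minimal particle-swarm sketch: each particle tracks its personal best (pBest) and
# the swarm tracks a global best; velocities pull particles toward both.
import random

def f(x):                                        # cost function to minimize
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

Np, kmax, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients

x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(Np)]   # positions
v = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(Np)]   # velocities
pbest = [p[:] for p in x]                        # personal best positions
gbest = min(x, key=f)                            # global best position

for k in range(kmax):
    for i in range(Np):
        for d in range(dim):                     # velocity and position update
            v[i][d] = (w*v[i][d]
                       + c1*random.random()*(pbest[i][d] - x[i][d])
                       + c2*random.random()*(gbest[d]    - x[i][d]))
            x[i][d] += v[i][d]
        if f(x[i]) < f(pbest[i]):                # update personal best
            pbest[i] = x[i][:]
            if f(pbest[i]) < f(gbest):           # update global best
                gbest = pbest[i][:]
print(gbest, f(gbest))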
Summary and Questions
?
Next: Chap. 6: Additive Manufacturing Technologies