This article was downloaded by: [North Dakota State University]

On: 22 August 2013, At: 00:56


Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Engineering Optimization
Publication details, including instructions for authors and
subscription information:
[Link]

Efficient evolutionary optimization through the use of a cultural algorithm
Carlos A. Coello Coello & Ricardo Landa Becerra
CINVESTAV-IPN, Evolutionary Computation Group, Departamento de Ingeniería Eléctrica, Sección de Computación, Av. IPN No. 2508, Col. San Pedro Zacatenco, México, D.F., 07300
Published online: 12 May 2010.

To cite this article: Carlos A. Coello Coello & Ricardo Landa Becerra (2004) Efficient evolutionary optimization through the use of a cultural algorithm, Engineering Optimization, 36:2, 219-236, DOI: 10.1080/03052150410001647966

To link to this article: [Link]

PLEASE SCROLL DOWN FOR ARTICLE

Taylor & Francis makes every effort to ensure the accuracy of all the information (the
“Content”) contained in the publications on our platform. However, Taylor & Francis,
our agents, and our licensors make no representations or warranties whatsoever as to
the accuracy, completeness, or suitability for any purpose of the Content. Any opinions
and views expressed in this publication are the opinions and views of the authors,
and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content
should not be relied upon and should be independently verified with primary sources
of information. Taylor and Francis shall not be liable for any losses, actions, claims,
proceedings, demands, costs, expenses, damages, and other liabilities whatsoever
or howsoever caused arising directly or indirectly in connection with, in relation to or
arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any
substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing,
systematic supply, or distribution in any form to anyone is expressly forbidden. Terms &
Conditions of access and use can be found at [Link]
Engineering Optimization
Vol. 36, No. 2, April 2004, 219–236

EFFICIENT EVOLUTIONARY OPTIMIZATION THROUGH THE USE OF A CULTURAL ALGORITHM
CARLOS A. COELLO COELLO∗ and RICARDO LANDA BECERRA

CINVESTAV-IPN, Evolutionary Computation Group, Departamento de Ingeniería Eléctrica, Sección de Computación, Av. IPN No. 2508, Col. San Pedro Zacatenco, México, D.F. 07300

This paper introduces a cultural algorithm that uses domain knowledge to improve the performance of an evolutionary programming technique adopted for constrained optimization. The proposed approach extracts domain knowledge during the evolutionary process and builds a map of the feasible region to guide the search more efficiently. Additionally, in order to have a more efficient memory management scheme, the current implementation uses 2^n-trees to store this map of the feasible region. Results indicate that the approach is able to produce very competitive results with respect to other optimization techniques at a considerably lower computational cost.

Keywords: Cultural algorithms; Evolutionary programming; Constrained optimization

1 INTRODUCTION

The use of evolutionary algorithms for solving optimization problems has become widespread in the last few years [1, 2]. This popularity is mainly due to the robustness, ease of
use and wide applicability of evolutionary algorithms [3]. However, it is commonly the case
that evolutionary algorithms are seen as ‘blind heuristics’ in the sense that they do not use
or require any specific domain knowledge. Nevertheless, several researchers have proposed
different mechanisms to extract knowledge (or certain design patterns) from an evolutionary
algorithm in order to improve convergence of another evolutionary algorithm (see, for example
Refs. 4–6).
This paper proposes the use of a biological metaphor called a ‘cultural algorithm’ as a global
optimization technique. Cultural algorithms are based on the following notion: in advanced
societies, the improvement of individuals occurs beyond natural selection; besides the infor-
mation that an individual possesses within his genetic code (inherited from his ancestors)
there is another component called ‘culture’. Culture can be seen as a sort of repository where
individuals place the information acquired after years of experience. When a new individual
has access to this library of information, it can learn things even when it has not experienced
them directly. Humankind as a whole has reached its current degree of progress mainly due
to culture.

∗ Corresponding author. E-mail: ccoello@[Link]


Engineering Optimization
ISSN 0305-215X print; ISSN 1029-0273 online © 2004 Taylor & Francis Ltd
[Link]
DOI: 10.1080/03052150410001647966

This paper proposes an approach in which domain knowledge (using the concept of a cultural
algorithm) extracted during a run of an evolutionary algorithm is used to guide the search more
efficiently in constrained optimization problems [7, 8].
The remainder of this paper is organized as follows. Section 2 provides some basics of
cultural algorithms. Section 3 discusses the most important previous related work. The use of
cultural algorithms in constrained optimization is discussed in Section 4. The way in which
constraints are handled as a belief space is discussed in Section 5. The proposed approach
is described in Section 6. The mathematical description of the examples used to validate our
approach is provided in Section 7. Results are compared with respect to other approaches in
Section 8. Finally, conclusions and some possible paths for future research are provided in
Section 9.

2 BASICS OF CULTURAL ALGORITHMS

Cultural algorithms were developed by Robert G. Reynolds as a complement to the metaphor used by evolutionary algorithms, which had focused mainly on genetic and natural selection concepts [9].
Cultural algorithms are based on theories originating in sociology and archaeology that try to model cultural evolution. Such theories indicate that cultural evolution
can be seen as an inheritance process operating at two levels: (1) a micro-evolutionary level,
which consists of the genetic material that an offspring inherits from its parents, and (2) a
macro-evolutionary level, which consists of the knowledge acquired by individuals through
generations. This knowledge, once encoded and stored, is used to guide the behavior of the
individuals that belong to a certain population [10, 11].
Culture can be seen as a set of ideological phenomena shared by a population. Through these phenomena, an individual can interpret its experiences and decide its behavior. In these models we can clearly distinguish the part of the system that is shared by the population: the knowledge acquired by members of a society, encoded in such a way that it can be accessed by every other member of the society. There is also an individual part, which consists of the interpretation of that knowledge, encoded in the form of symbols. This interpretation produces new behaviors as a consequence of assimilating the acquired knowledge combined with the experiences of the individual itself.
Reynolds attempts to capture this double inheritance phenomenon through his proposal
of cultural algorithms [9]. The main goal of such algorithms is to increase the learning or
convergence rates of an evolutionary algorithm such that the system can respond better to a
wide variety of problems [12].
Cultural algorithms operate in two spaces. First, there is the population space, which con-
sists of (as in all evolutionary algorithms) a set of individuals. Each individual has a set of
independent features that are used to determine its fitness. Through time, such individuals can
be replaced by some of their descendants, which are obtained from a set of operators applied
to the population.
The second space is the belief space, which is where the knowledge acquired by individuals
through generations is stored. The information contained in this space must be accessible to
each individual, so that they can use it to modify their behavior.
In order to join the two spaces, it is necessary to provide a communication link, which
dictates the rules regarding the type of information that must be exchanged between the two
spaces. The pseudo-code of a cultural algorithm is shown in Algorithm 1.

ALGORITHM 1 Pseudo-code of a cultural algorithm.

Generate the initial population
Initialize the belief space
Evaluate the initial population
Repeat
    Update the belief space (with the individuals accepted)
    Apply the variation operators (under the influence of the belief space)
    Evaluate each child
    Perform selection
While the end condition is not satisfied

Most of the steps of a cultural algorithm correspond to the steps of a traditional evolutionary algorithm. It can be clearly seen that the main difference lies in the fact that cultural algorithms use a belief space. The main loop of Algorithm 1 updates the belief space. It is at this point
where the belief space incorporates the individual experiences of a select group of members of
the population. Such a group is obtained with the function accept, which is applied to the entire
population. On the other hand, the variation operators (such as recombination or mutation) are
modified by the function influence. This function applies some pressure such that the children
resulting from the variation operators can exhibit behaviors closer to the desirable ones and
farther away from the undesirable ones, according to the information stored in the belief space.
These two functions (accept and influence) constitute the communication link between the
population space and the belief space. Such interactions can be appreciated in Figure 1 [13].

FIGURE 1 Spaces used by a cultural algorithm.



In Ref. [9], Reynolds proposed the use of genetic algorithms to model the micro-evolutionary
process, and Version Spaces [14] to model the macro-evolutionary process of a cultural
algorithm. This sort of algorithm was called the Version Space guided Genetic Algorithm
(VGA). The main idea behind this approach is to preserve beliefs that are socially accepted
and discard (or prune) unacceptable beliefs. Therefore, if a cultural algorithm is used for global
optimization, the acceptable beliefs can be seen as constraints that direct the population at the
micro-evolutionary level [15].

3 RELATED WORK

Reynolds et al. [16] and Chung and Reynolds [17] have explored the use of cultural algorithms for global optimization with very encouraging results. Chung and Reynolds [17] use a
hybrid of evolutionary programming and GENOCOP [18] in which they incorporate an interval
constraint network [19] to represent the constraints of the problem at hand. An individual is
considered as ‘acceptable’ when it satisfies all the constraints of the problem. When that does
not happen, then the belief space, i.e. the intervals associated with the constraints, is adjusted.
This approach is really a more sophisticated version of a repair algorithm in which an infeasible
solution is made feasible by replacing its genes with a different value between its lower and
upper bounds. Since GENOCOP assumes a convex search space, it is relatively easy to design
operators that can exploit a search direction towards the boundary between the feasible and
infeasible regions.
In further work, Jin and Reynolds [20] proposed an n-dimensional regional-based schema,
called a belief-cell, as an explicit mechanism that supports the acquisition, storage and inte-
gration of knowledge about non-linear constraints in a cultural algorithm. This belief-cell can
be used to guide the search of an evolutionary computation (EC) technique (evolutionary pro-
gramming in this case) by pruning the instances of infeasible individuals and promoting the
exploration of promising regions of the search space. The key aspect of this work is precisely
how to represent and save the knowledge about the problem constraints in the belief space of
the cultural algorithm.
The idea of Jin and Reynolds’ approach is to build a map of the search space similar to the
‘Divide-and-Label’ approaches used for robot motion planning [21]. This map is built using
information derived from evaluating the constraints of each individual in the population of
the EC technique. The map is formed by dividing the search space into sub-areas called cells.
Each cell can be classified as: feasible (if it lies completely in a feasible region), infeasible (if
it lies completely in an infeasible region), semi-feasible (if it occupies part of a feasible and
part of an infeasible region), or unknown (if that region has not been explored yet). This map
is used to derive rules about how to guide the search of the evolutionary algorithm (avoiding
infeasible regions and promoting the exploration of feasible regions).
This previous work, however, has an important drawback: the authors do not indicate how to implement the belief space and, from their publications, one can infer that static data structures were adopted in their work. This causes important scalability issues, since even a relatively low dimensionality (about 20 decision variables) can become impractical in terms of the cell representation needed (i.e. we would run out of memory).

4 CONSTRAINED OPTIMIZATION

In this paper, cultural algorithms are used with evolutionary programming (CAEP) [17].
The basic idea is to ‘influence’ the mutation operator (the only operator in evolutionary
programming) so that current knowledge about the properties of the search space can be
properly exploited.
As indicated above, in a cultural algorithm there are two main spaces: the normal population
adopted with evolutionary programming, and the belief space. The shared acquired knowledge
is stored in the belief space during the evolution of the population. The interactions between
these two spaces are detailed below [17]:

1. Select an initial population of p candidate solutions, from a uniform distribution within the
given domain for each parameter from 1 to n.
2. Assess the performance score of each parent solution by a given objective function f .
3. Initialize the belief space with the given problem domain and candidate solutions.
4. Generate p new offspring solutions by applying a variation operator, V, as modified by the influence function, Influence. Now there are 2p solutions in the population.
5. Assess the performance score of each offspring solution by the given objective function f .
6. For each individual, select c competitors at random from the population of size 2p. Conduct pairwise competitions between the individual and the competitors.
7. Select the p solutions that have the greatest number of wins to be parents for the next
generation.
8. Update the belief space by accepting individuals using the acceptance function.
9. Go back to step 4 unless the available execution time is exhausted or an acceptable solution
has been discovered.
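Steps 6 and 7 above (the stochastic pairwise competitions of evolutionary programming) can be sketched as follows. The `score` callable and the score-based tie-breaking are illustrative assumptions; the actual comparison criterion of the proposed approach is the rule set given later in Section 6.4.

```python
import random

def ep_tournament(population, score, c=10, keep=None, rng=None):
    """Steps 6-7: each individual meets c random opponents; the half
    of the (parents + offspring) pool with most wins survives.

    `score` is the value being minimized (an illustrative stand-in
    for the pairwise comparison criterion); ties in the win count
    are broken here by better score, an assumption of this sketch.
    """
    rng = rng or random.Random(0)
    # count the wins of each individual against c random opponents
    wins = [sum(score(x) <= score(rng.choice(population)) for _ in range(c))
            for x in population]
    # rank by wins (descending), breaking ties by score (ascending)
    order = sorted(range(len(population)),
                   key=lambda i: (-wins[i], score(population[i])))
    keep = keep or len(population) // 2
    return [population[i] for i in order[:keep]]

survivors = ep_tournament([5, 1, 4, 2, 3, 6], lambda v: v)
```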

Most of the steps described above are the same as in the evolutionary algorithm adopted
(evolutionary programming [3]). The acceptance function accepts those individuals that can
contribute with their knowledge to the belief space. The update function creates the new belief
space with the beliefs of the accepted individuals. The idea is to add to the current knowledge
the new knowledge acquired by the accepted individuals.
The function to generate offspring used in evolutionary programming is modified so that it
includes the influence of the belief space in the generation of offspring. Evolutionary program-
ming uses only mutation, and the influence function indicates the most promising mutation
direction. The remaining steps are the same as those used in evolutionary programming.
For unconstrained problems, Chung [22] proposes the use of two types of knowledge: (1)
situational, which provides the exact point where the best individual of each generation was
found; and (2) normative, which stores intervals for the decision variables of the problem that
correspond to the regions where good results were found.

5 BELIEFS AS CONSTRAINTS

As mentioned before, Jin and Reynolds [20] modified Chung’s proposal so as to include in the
belief space information about feasibility of the solutions. We will explain next the changes
performed in more detail, since the current proposal is an extension of Jin and Reynolds’
algorithm.
First, Jin and Reynolds eliminated the situational knowledge and added constraints knowl-
edge. Taking advantage of the intervals of good solutions that are stored in the normative
portion of the belief space, they created what they called ‘belief cells’. These belief cells are
a subdivision of the search space within the intervals of good solutions, such that feasibility
of the cells can be determined. When the intervals of the variables are modified, the cells are
also modified. As indicated before, there are four types of cells (see Fig. 2):1 (1) feasible, (2) infeasible, (3) semi-feasible (containing part of both areas) and (4) unknown.

FIGURE 2 The figure at the top illustrates the feasible region of a problem. The figure at the bottom illustrates the representation of the constraints part of the belief space for the search space of the same problem. In this example, the intervals stored in the normative part must be [0.6, 2.6] for x1, and [3, 5] for x2.
The influence that the belief space has on the generation of offspring consists of moving
individuals that lie in infeasible cells towards feasible cells. Actually, in this process, semi-
feasible cells are given preference because in most difficult constrained problems, the opti-
mum lies on the boundary between the feasible and infeasible regions. However, Jin and
Reynolds [20] do not modify the rules used to update the normative part of the belief space
proposed by Chung [22]: the intervals are expanded if the accepted individuals do not fit within
them; conversely, they are tightened only if the accepted individuals have a better fitness. This
may reduce the intervals towards infeasible regions in which the objective function values
are higher.

6 DESCRIPTION OF THE AUTHORS’ APPROACH

The approach proposed here is a variation of Jin and Reynolds' technique [20]. However, in the proposed approach, spatial data structures (2^n-trees) are employed in order to store the map of the feasible region more efficiently. Next, the main differences between traditional evolutionary programming and the proposed approach are described.

6.1 Initialization of the Belief Space

The lower and upper boundaries of the promising intervals for each variable are stored in the
normative part of the belief space, together with the fitness for each extreme of the interval.

1 Other authors have also proposed the use of a map of the feasible region. See for example Ref. [23].

This part is initialized by setting the boundaries of the variables to the values given in the input data of the problem. The initial fitnesses are in all cases set to +∞ (assuming a minimization problem).
Regarding the constraints of the problem, the interval given in the normative part is subdivided into s subintervals, such that a portion of the search space is divided into hypercubes (see Fig. 3). The following information about each hypercube is stored: the number of feasible individuals (within that cell), the number of infeasible individuals (within that cell), and the type of region. The type of region depends on the feasibility of the individuals within it. Four types are defined:
• if feasible individuals = 0 and infeasible individuals = 0, then cell type = unknown
• if feasible individuals > 0 and infeasible individuals = 0, then cell type = feasible
• if feasible individuals = 0 and infeasible individuals > 0, then cell type = infeasible
• if feasible individuals > 0 and infeasible individuals > 0, then cell type = semi-feasible

To initialize this part, all counters are set to zero and the cell type is initialized to ‘unknown’
(other values could be used in this case, but that would obviously affect the performance of
the algorithm).
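The four classification rules map directly to a small helper; a minimal sketch:

```python
def classify_cell(feasible, infeasible):
    """Return the cell type from its two feasibility counters,
    following the four rules listed above."""
    if feasible == 0 and infeasible == 0:
        return "unknown"
    if infeasible == 0:
        return "feasible"       # feasible > 0, infeasible = 0
    if feasible == 0:
        return "infeasible"     # feasible = 0, infeasible > 0
    return "semi-feasible"      # both counters positive
```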

6.2 Updating the Belief Space

The constraints part of the belief space is updated at each generation, whereas the normative part
is updated every k generations. The update of the constraints part consists only of adding any new individuals that fall into each region to the corresponding counters of feasible or infeasible individuals. The update of the
normative part is more complex (that is the reason why it is not performed at every generation).
When the interval of each variable is updated, the cells or hypercubes of the constraints part are changed, and the counters of feasible and infeasible individuals are reinitialized. Furthermore,
this update is done taking into consideration only a portion of the population. Such a portion
is selected by the function accept(), taking as a parameter (given by the user) the percentage
of the total population size to be used. We set this percentage to 25% in our experiments,
based on some empirical testing. Note that changing this value does not significantly affect
the computational cost of the algorithm, but it may affect the results that it produces. The
interpretation of this percentage in terms of its role in the algorithm is that it regulates the
rate at which the knowledge becomes specialized. As this percentage approaches 100%, the
knowledge gets specialized at a slower rate, and vice versa. We found that 25% was a good

FIGURE 3 Graphical representation of the division of the semi-feasible cells.



compromise. The function accept() selects the best individuals, based on their number of
victories obtained during the selection process.
In the approach proposed in this paper, the conditions to reduce the intervals are stronger
than those in previous approaches (e.g. Ref. 20): an interval is reduced only if the accepted
individual has a better fitness AND it is feasible. In order to make this mechanism work, it
is necessary to modify the acceptance function so that feasible individuals are preferred and
fitness is adopted as a secondary criterion. If this is not done, then the condition for interval
reduction will not hold most of the time because the accepted individuals are more likely to
be infeasible.

6.3 Influence of Beliefs on the Mutation Operator



Mutation takes place for each variable of each individual, with the influence of the belief space
and in accordance with the following rules:

• If the variable j of the parent is outside the interval given by the normative part of the belief space, then we attempt to move it within this interval through the use of a random variable.
• If the variable is within a feasible, a semi-feasible or an unknown hypercube, the perturbation
is made trying to place it within the same hypercube or very close to it.
• Finally, if the variable is in an infeasible cell, we try to move it first to the closest semi-
feasible cell. However, if none is found, we try to move it to the closest feasible or unknown
cell. If that does not work either, then we move it to a random position within the interval
defined by the normative part.
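The three rules above can be sketched for a single variable in the one-dimensional case. The flat list of equal-width cell types is an illustrative assumption for readability (the paper stores cells in a 2^n-tree), and the Gaussian perturbation in rule 2 is likewise just one plausible choice.

```python
import random

def mutate_variable(x, interval, cells, rng=None):
    """Influenced mutation of one variable (1-D sketch of the rules above).

    `cells` is an illustrative flat list of equal-width cell types over
    `interval`: 'feasible' / 'infeasible' / 'semi-feasible' / 'unknown'.
    """
    rng = rng or random.Random(0)
    lo, hi = interval
    width = (hi - lo) / len(cells)
    if not (lo <= x <= hi):
        return rng.uniform(lo, hi)        # rule 1: pull back into the interval
    k = min(int((x - lo) / width), len(cells) - 1)
    if cells[k] != "infeasible":
        # rule 2: perturb, trying to stay within (or very close to) the cell
        return min(max(rng.gauss(x, width / 3.0), lo + k * width),
                   lo + (k + 1) * width)
    # rule 3: closest semi-feasible cell first, then feasible or unknown,
    # otherwise a random position within the normative interval
    for kinds in (("semi-feasible",), ("feasible", "unknown")):
        targets = [i for i, t in enumerate(cells) if t in kinds]
        if targets:
            t = min(targets, key=lambda i: abs(i - k))
            return rng.uniform(lo + t * width, lo + (t + 1) * width)
    return rng.uniform(lo, hi)
```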

6.4 Tournament Selection

The rules for updating the belief space may result in the knowledge becoming specialized at a slower rate. To improve the speed of the algorithm, advantage is taken of the rules for
performing tournament selection. After performing mutation, there will be a population of size 2p (p parents generate p children). The tournament is performed considering the entire population (i.e. using (μ + λ) selection with μ = λ = p). Tournaments consist of c confrontations per individual, with the c opponents randomly chosen from the entire population. When the tournaments finish, the p individuals with the largest number of victories are selected to form the following generation. The tournament rules adopted for the current proposal are very similar to those adopted by Deb in his penalty approach based on feasibility [24].
The new tournament rules adopted by the proposed approach are the following:

1. If both individuals are feasible, then the individual with the best objective function value
wins.
2. If both individuals are infeasible, then the individual with the lowest constraint violation wins. The constraint violation is measured using

       sum_g(x) = Σ_{j∈J} g_j(x) / g_max_j

   where g_max_j is the largest value of the constraint g_j found during the evolutionary process, and J = { j | g_j(x) is a constraint violated in x }. In words, the winner is the individual that presents the lower constraint violation, considering normalized constraints (this normalization is done to avoid problems with the use of different units for each of the constraints).
3. Otherwise, the feasible individual always wins.
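The three tournament rules can be expressed as a single comparison function. A minimal sketch, assuming each individual is summarized by a hypothetical (feasible, objective, violation) triple, with the objective minimized and the violation computed as the normalized sum sum_g(x) defined in rule 2:

```python
def better(a, b):
    """Return True when individual `a` wins the encounter against `b`.

    `a` and `b` are (feasible, objective, violation) triples, an
    illustrative representation assumed by this sketch.
    """
    a_feas, a_obj, a_viol = a
    b_feas, b_obj, b_viol = b
    if a_feas and b_feas:
        return a_obj < b_obj      # rule 1: better objective value wins
    if not a_feas and not b_feas:
        return a_viol < b_viol    # rule 2: lower normalized violation wins
    return a_feas                 # rule 3: the feasible individual wins
```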

6.5 Use of 2^n-Trees

One of the main drawbacks of Jin and Reynolds' approach [20] is its intense memory usage. Since the belief maps of each decision variable have to be stored, the approach runs out of memory very quickly and cannot possibly handle problems with more than a few decision variables (memory requirements grow exponentially with the number of decision variables of the problem). This led us to develop a scheme in which 2^n-trees are used to partition the feasible region into cells, so that with higher-dimensionality problems the memory usage does not increase exponentially. The idea was inspired by the popularity of spatial data structures for efficiently storing navigation maps in robotics [21] and efficiently representing 3D objects in computer graphics [25].
In order to be able to use 2^n-trees within our implementation, we have to partition only the projection of the search space in some dimensions, since 2^n-trees have practical use only when n ≤ 4, where n corresponds to the number of decision variables of our problem [21]. An example of how to partition a 2D space using a quadtree with a depth of 2 is shown in Figure 4.
Note that the decision of how to partition decision variable space so as to comply with this restriction is very important, since the number of nodes used may be increased rather than reduced! For example, if an octree is adopted, a single node division splits three dimensions at once, and our tree will have 2^3 + 1 = 9 nodes in total. However, if we use a tree that divides only one dimension and partitions a 3D space through three successive divisions, the leaf nodes will give the same result as for the octree, but using 15 nodes.
From the previous discussion it can be inferred that a 2^n-tree should be used with the largest possible n, while being careful not to use too much memory. Our conjecture is that n = 3 is the largest number with which the problem remains manageable.
Once the number of dimensions to be partitioned has been decided, it is necessary to decide
which are the dimensions to be partitioned. The idea is to choose the 3D projection that best
divides the search space into a feasible and an infeasible region (or regions). However, since the number of possible combinations of three dimensions grows cubically with the number of variables, it soon becomes impractical to try them all. Therefore, we can choose a group

FIGURE 4 Example of the partition of a 2D space using a quadtree of depth two.



of combinations to be tried such that the size of this group grows linearly with the number of
variables of the problem. In order to determine the ‘goodness’ of a certain partition, we have
to count the number of feasible and infeasible individuals in each leaf node. A node will be
considered good as long as one of these two values (i.e. feasible and infeasible individuals)
tends to zero. For example, assume that n_f is the number of feasible individuals in a certain node and that n_i is the number of infeasible individuals in that same node. Thus, a small value of min(n_f, n_i) in each node will indicate a good partition. From the previous discussion, we can say that we are looking for a partition that minimizes:

    λ = Σ_{leaf nodes} min(n_f, n_i).

Having this partition, we can continue partitioning with the same method until reaching the
maximum allowable depth. Since we have tried several partitioning methods for a single node
division, it is better to choose a small depth limit so that not much time is spent in the creation
of the tree.
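The partition score can be computed directly from the per-leaf feasibility counters. A minimal sketch, where the flat list of (n_f, n_i) pairs is an illustrative stand-in for traversing the tree's leaf nodes:

```python
def partition_score(leaf_counts):
    """lambda = sum over leaf nodes of min(n_f, n_i); lower is better.

    `leaf_counts` is a list of (n_feasible, n_infeasible) pairs, one
    per leaf node of a candidate partition (an assumed representation).
    """
    return sum(min(nf, ni) for nf, ni in leaf_counts)

# a partition that cleanly separates feasible from infeasible leaves
# scores 0, while a mixed partition scores higher
clean = partition_score([(10, 0), (0, 7)])
mixed = partition_score([(6, 4), (3, 5)])
```

Among a set of candidate partitions of the same node, the one with the smallest score would be kept, and only semi-feasible leaves would be expanded further.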
The method described to expand nodes is only done for nodes corresponding to semi-
feasible cells, and it stops when it reaches the maximum depth of the tree. The tree is rebuilt
every time the normative part is updated.

7 EXAMPLES

To validate the approach, some test functions have been used from the well-known benchmark proposed in Ref. [7], which has often been used in the literature to validate new constraint-handling techniques. Additionally, some well-known engineering optimization problems were also used. All the problems are described in Appendix A.

8 COMPARISON OF RESULTS

For all the experiments reported, 10 independent runs per problem were performed, and the following parameters were used: population size = 20, maximum number of generations = 2500, the normative part is updated every 20 generations with 25% of the population (acceptance %), tournaments consist of 10 encounters per individual (half the population size), and the maximum depth of the octree is equal to the number of decision variables of the problem. These parameters were derived empirically after numerous experiments.

8.1 Example 1: g01

In this case, the global optimum is at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where f(x*) =
−15. The constraints g1, g2, g3, g4, g5 and g6 are active. The results of our approach and the
homomorphous maps of Koziel and Michalewicz [26] are shown in Tables I and II.

8.2 Example 2: g02

The global maximum is unknown; the best reported solution [27] is f(x*) = 0.803619.
Constraint g1 is close to being active (g1 = −10⁻⁸). The results of our approach and the
homomorphous maps of Koziel and Michalewicz [26] are shown in Tables I and II.
CULTURAL ALGORITHM 229

TABLE I Results Produced by CAEP Using 2^n-Trees.

TF     Optimal       Best         Mean         Worst        Std Dev
g01    −15.0         −15.0000     −14.4999     −12.0000     1.0801
g02    0.803619      0.77351      0.66995      0.51762      0.09456
g04    −30665.539    −30665.5     −30662.5     −30636.2     9.3
g08    −0.095825     −0.095825    −0.095825    −0.095825    0.000000
g12    1.0           1.000000     0.996375     0.969375     0.009650

8.3 Example 3: g04

In this example, the optimum solution is x* = (78, 33, 29.995256025682, 45,
36.775812905788), where f(x*) = −30665.539. Constraints g1 and g6 are active. The results
of our approach and the homomorphous maps of Koziel and Michalewicz [26] are shown in
Tables I and II.

8.4 Example 4: g08

In this case, the optimum solution is located at x* = (1.2279713, 4.2453733), where
f(x*) = 0.095825. The results of our approach and the homomorphous maps of Koziel and
Michalewicz [26] are shown in Tables I and II.

8.5 Example 5: g12

In this test function, the global optimum is located at x* = (5, 5, 5), where f(x*) = 1. The
results of our approach and the homomorphous maps of Koziel and Michalewicz [26] are
shown in Tables I and II.
Note that the homomorphous maps approach of Koziel and Michalewicz [26] is one of
the best constraint-handling techniques for evolutionary algorithms known to date. Also, it is
important to indicate that the results of Koziel and Michalewicz were obtained with 1,400,000
fitness function evaluations, whereas our approach required only 50,020 fitness function eval-
uations. Note that our approach has been able to deal with problems that have several variables
(g01 has 13 decision variables and g02 has 20 decision variables).
As can be seen in Tables I and II, our approach produces very competitive results with
respect to the homomorphous maps (which is considerably more difficult to implement) at a
fraction of its computational cost (in some cases the present method converges to the global
optimum). The main reason for this cost reduction is that the belief cells are used to guide
the search of the evolutionary algorithm very efficiently, preventing it from moving to
unpromising regions of the search space.

TABLE II Results Produced by the Homomorphous Maps of Koziel and Michalewicz [26].

TF     Optimal       Best         Mean         Worst        Std Dev
g01    −15.0         −14.7864     −14.7082     −14.6154     NA
g02    0.803619      0.79953      0.79671      0.79119      NA
g04    −30665.539    −30664.5     −30655.3     −30645.9     NA
g08    −0.095825     −0.095825    −0.089157    −0.029144    NA
g12    1.0           0.999999     0.999135     0.991950     NA

Note: NA = not available.



Let us analyze now the results for the engineering optimization problems chosen for this
comparative study.

8.6 Example 6: Design of a Welded Beam

This problem was solved before by Deb [28] using a simple genetic algorithm with binary
representation, and a traditional penalty function as suggested by Goldberg [1]. It has also been
solved by Ragsdell and Phillips [29] using geometric programming. Ragsdell and Phillips also
compared their results with those produced by the methods contained in a software package
called ‘Opti-Sep’ [30], which includes the following numerical optimization techniques:
ADRANS (Gall’s adaptive random search with a penalty function), APPROX (Griffith and
Stewart’s successive linear approximation), DAVID (Davidon-Fletcher-Powell with a penalty
function), MEMGRD (Miele’s memory gradient with a penalty function), SEEK1 & SEEK2
(Hooke and Jeeves with two different penalty functions), SIMPLX (Simplex method with a
penalty function) and RANDOM (Richardson’s random method).
The results of the techniques previously indicated are compared against those produced by
the approach proposed in this paper (see Tab. III). In the case of Siddall’s techniques [30],
only the best solution produced by the techniques contained in ‘Opti-Sep’ is displayed. The
mean of the runs performed with our approach was f(x̄) = 1.9718091, with a standard
deviation of 0.4431313. The worst solution found was f(x̄) = 3.1797085, although this
solution appeared only once in the runs performed.

8.7 Example 7: Minimization of the Weight of a Tension Compression Spring

This problem was solved before by Belegundu [31] using the following numerical optimization
techniques: Feasible directions (CONMIN and OPTDYN), Pshenichny’s Recursive Quadratic
Programming (LINRM), Gradient Projection (GRP-UI), Exterior Penalty Function (SUMT),
and Multiplier Methods (M-3, M-4 and M-5). Only the best feasible result reported by him is
shown in Table IV. Additionally, Arora [32] solved this problem using a numerical optimization
technique called Constraint Correction at constant Cost (CCC). It is important to notice that
Arora’s solution is actually infeasible because it violates one of the constraints slightly. In the
experiments reported here, our approach handled all constraints as hard, so that the solutions

TABLE III Comparison of Results for the Sixth Example (Optimal Design of a Welded Beam).

                              Best solution found
Design
variables    CAEP         Deb [28]        Siddall [30]    Ragsdell [29]
x1 (h)       0.2057       0.2489          0.2444          0.2455
x2 (l)       3.4705       6.1730          6.2189          6.1960
x3 (t)       9.0366       8.1789          8.2915          8.2730
x4 (b)       0.2057       0.2533          0.2444          0.2455
g1(x̄)       −0.000472    −5758.603777    −5743.502027    −5743.826517
g2(x̄)       −0.001561    −255.576901     −4.015209       −4.715097
g3(x̄)       0.000000     −0.004400       0.000000        0.000000
g4(x̄)       −3.432984    −2.982866       −3.022561       −3.020289
g5(x̄)       −0.080730    −0.123900       −0.119400       −0.120500
g6(x̄)       −0.235540    −0.234160       −0.234243       −0.234208
g7(x̄)       −0.000779    −4465.270928    −3490.469418    −3604.275002
f(x̄)        1.7248523    2.4331160       2.3815434       2.3859373

TABLE IV Comparison of Results for the Seventh Example (Minimization
of the Weight of a Tension/Compression Spring).

                        Best solution found
Design
variables    CAEP         Arora [32]    Belegundu [31]
x1 (d)       0.050000     0.053396      0.050000
x2 (D)       0.317395     0.399180      0.315900
x3 (N)       14.031795    9.185400      14.250000
g1(x̄)       0.000000     0.000019      −0.000014
g2(x̄)       −0.000075    −0.000018     −0.003782
g3(x̄)       −3.967960    −4.123832     −3.938302
g4(x̄)       −0.755070    −0.698283     −0.756067
f(x̄)        0.0127210    0.0127303     0.0128334

produced were considered valid only if all of them were fully satisfied. Nevertheless, the
proposed approach was able to find a better (feasible) solution than Arora’s technique, as can
be seen in Table IV.
The mean of the runs performed with our approach was f(x̄) = 0.0135681, with a
standard deviation of 0.00084152. The worst solution found was f(x̄) = 0.0151156.
It can be seen that in the engineering problems chosen, as in the numerical examples, our
approach produced very competitive results at a low computational cost (the computational
costs of the other approaches against which we compared our algorithm were not available).

9 CONCLUSIONS AND FUTURE WORK

We have presented an approach based on cultural algorithms and evolutionary programming for
constrained optimization. The approach proposed has provided good results at a relatively low
computational cost both in some well-known test functions used with evolutionary algorithms
and in some engineering optimization problems.
The results suggest that the proper use of domain knowledge can certainly improve the
performance of an evolutionary algorithm. Our results also suggest that this domain knowledge
can be extracted during the evolutionary process itself, as the search moves towards the global
optimum of a problem. This contrasts with
the more conventional approach of using domain knowledge extracted from previous runs of
an evolutionary algorithm (see for example, Refs. 5, 33).
One of the main drawbacks of cultural algorithms in constrained search spaces (i.e. memory
usage) is attacked using spatial data structures that can efficiently store the belief space. To
illustrate this point, we will briefly discuss a simple example. With the 2^n-trees adopted in this
paper, a maximum depth of 5 was defined. Since each internal node of an octree (such as those
used in our approach) has exactly eight children, with a maximum depth of 5 the maximum
number of nodes of a tree will be 8^0 + 8^1 + 8^2 + 8^3 + 8^4 = 4681 nodes. However, this
number is not always reached in practice. If we now consider a static data structure that divides
a space with 12 decision variables by splitting each dimension in half, we will need 2^12 = 4096
nodes. This number is slightly lower than the one required by our trees. However, if we now
assume 20 decision variables (as in g02), our approach still requires at most the same number of
nodes, whereas a static data structure would require 2^20 = 1,048,576 nodes. This would
introduce obvious memory management problems.
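The node counts in the example above can be checked directly; this small sketch (ours, for illustration) contrasts the depth-limited octree with a static grid that halves every dimension:

```python
# Maximum node count of a depth-limited octree versus a static grid
# that splits each decision variable's range in half.

def max_octree_nodes(depth):
    # Levels 0..depth-1; every internal node has exactly 8 children.
    return sum(8 ** level for level in range(depth))

def static_grid_nodes(n_vars):
    # One cell per combination of half-ranges, one split per variable.
    return 2 ** n_vars

print(max_octree_nodes(5))      # 4681, independent of the number of variables
print(static_grid_nodes(12))    # 4096
print(static_grid_nodes(20))    # 1048576
```

The octree bound depends only on the depth limit, not on the number of decision variables, which is the point of the comparison.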

Thus, the mechanism for memory management introduced in the present approach is one of its
main contributions and it constitutes the main difference with respect to previous proposals.
As part of our future work, we are considering the possibility of using self-adaptation or
online adaptation mechanisms that make it unnecessary to fine tune the parameters required
by our approach. Additionally, we are also considering the possibility of using additional rules
in the tournaments performed, so that more feasibility information can be supplied to our
evolutionary algorithm so as to guide the search in a more effective way (for example, in g02
we were unable to converge to the best known solution). Finally, we are also considering the
possible use of alternative data structures for representing the belief space (e.g. k-d trees [34]).

Acknowledgements

The first author acknowledges support from the Consejo Nacional de Ciencia y Tecnología
(CONACyT) through project number 32999-A.
The second author acknowledges support from CONACyT through a scholarship to pursue
graduate studies at the Computer Science Section of the Electrical Engineering Department of
CINVESTAV-IPN.

References
[1] Goldberg, D. E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley
Publishing Company, Reading, Massachusetts.
[2] Bäck, T., Fogel, D. and Michalewicz, Z. (Eds.) (1997) Handbook of Evolutionary Computation, Vol. 1, IOP
Publishing Ltd. and Oxford University Press.
[3] Fogel, L. J. (1999) Artificial Intelligence Through Simulated Evolution. Forty Years of Evolutionary Programming.
John Wiley & Sons, New York.
[4] Zhang, Z. M. and Liao, T. M. (1999) Combining case-based reasoning with genetic algorithms. Late Breaking
Papers at the 1999 Genetic and Evolutionary Computation Conference, S. Brave and A. S. Wu, (Eds.), Orlando,
Florida, 305–310.
[5] Louis, S. J. and Johnson, J. (1997) Solving similar problems using genetic algorithms case-based memory.
Bäck, T. (Ed.), Proceedings of the Seventh International Conference on Genetic Algorithms, Morgan Kaufmann
Publishers, San Francisco, California, pp. 283–290.
[6] Ramsey, C. L. and Grefenstette, J. J. (1993) Case-based initialization of genetic algorithms. Proceedings of
the Fifth International Conference on Genetic Algorithms, S. Forrest, (Ed.), Morgan Kauffman Publishers, San
Mateo, California, pp. 84–91.
[7] Michalewicz, Z. and Schoenauer, M. (1996) Evolutionary algorithms for constrained parameter optimization
problems. Evolutionary Computation, 4(1), 1–32.
[8] Coello Coello, C. A. (2002) Theoretical and numerical constraint-handling techniques used with evolutionary
algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering,
191(11–12), 1245–1287.
[9] Reynolds, R. G. (1994) An introduction to cultural algorithms. In Proceedings of the Third Annual Conference
on Evolutionary Programming, A. V. Sebald and L. J. Fogel, (Eds.), World Scientific, River Edge, New Jersey,
pp. 131–139.
[10] Renfrew, A. C. (1994) Dynamic modeling in archaeology: What, when, and where? In Dynamical Modeling and
the Study of Change in Archaelogy, S. E. van der Leeuw, (Ed.), Edinburgh University Press, Edinburgh, Scotland.
[11] Durham, W. H. (1994) Co-evolution: Genes, Culture, and Human Diversity. Stanford University Press, Stanford,
California.
[12] Franklin, B. and Bergerman, M. (2000) Cultural algorithms: Concepts and experiments. Proceedings of the 2000
Congress on Evolutionary Computation, IEEE Service Center, Piscataway, New Jersey, 1245–1251.
[13] Reynolds, R. G. (1999) Cultural algorithms: Theory and applications. New Ideas in Optimization, D. Corne,
M. Dorigo, and F. Glover (Eds.), McGraw-Hill, London, UK, 367–377.
[14] Mitchell, T. (1978) Version spaces: An approach to concept learning. PhD thesis, Computer Science Department,
Stanford University, Stanford, California.
[15] Michalewicz, Z. (1995) A survey of constraint handling techniques in evolutionary computation methods.
Proceedings of the 4th Annual Conference on Evolutionary Programming, J. R. McDonnell, R. G. Reynolds
and D. B. Fogel, (Eds.), The MIT Press, Cambridge, Massachusetts, pp. 135–155.
[16] Reynolds, R. G., Michalewicz, Z. and Cavaretta, M. (1995) Using cultural algorithms for constraint handling
in GENOCOP. Proceedings of the Fourth Annual Conference on Evolutionary Programming, J. R. McDonnell,
R. G. Reynolds and D. B. Fogel (Eds.), MIT Press, Cambridge, Massachusetts, 298–305.

[17] Chung, C. J. and Reynolds, R. G. (1996) A testbed for solving optimization problems using
cultural algorithms. Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary
Programming, L. J. Fogel, P. J. Angeline and T. Bäck, (Eds.), MIT Press, Cambridge, Massachusetts.
[18] Michalewicz, Z. and Janikow, C. Z. (1991) Handling constraints in genetic algorithms. Proceedings of the Fourth
International Conference on Genetic Algorithms, R. K. Belew and L. B. Booker, (Eds.), Morgan Kaufmann
Publishers, San Mateo, California, 151–157.
[19] Davis, E. (1987) Constraint propagation with interval labels. Artificial Intelligence, 32, 281–331.
[20] Jin, X. D. and Reynolds, R. G. (1999) Using knowledge-based evolutionary computation to solve nonlinear
constraint optimization problems: A cultural algorithm approach. 1999 Congress on Evolutionary Computation,
IEEE Service Center, Washington, D.C., July 1999, 1672–1678.
[21] Latombe, J.-C. (1993) Robot Motion Planning. Kluwer Academic Publishers, Norwell, Massachusetts.
[22] Chung, C. J. (1997) Knowledge-based approaches to self-adaptation in cultural algorithms. PhD thesis, Wayne
State University, Detroit, Michigan.
[23] Mariano, C. E. and Morales, E. F. (2000) Distributed reinforcement learning for multiple objective optimization
problems. 2000 Congress on Evolutionary Computation, IEEE Service Center, Piscataway, New Jersey, 188–195.
[24] Deb, K. (2000) An efficient constraint handling method for genetic algorithms. Computer Methods in Applied
Mechanics and Engineering, 186(2/4), 311–338.


[25] Jackins, C. L. and Tanimoto, S. L. (1980) Octrees and their use in representing three-dimensional objects.
Computer Graphics and Image Processing, 14(3), 249–270.
[26] Koziel, S. and Michalewicz, Z. (1999) Evolutionary algorithms, homomorphous mappings, and constrained
parameter optimization. Evolutionary Computation, 7(1), 19–44.
[27] Runarsson, T. P. and Yao, X. (2000) Stochastic ranking for constrained evolutionary optimization. IEEE Tran-
sactions on Evolutionary Computation, 4(3), 284–294.
[28] Deb, K. (1991) Optimal design of a welded beam via genetic algorithms. AIAA Journal, 29(11), 2013–2015.
[29] Ragsdell, K. M. and Phillips, D. T. (1976) Optimal design of a class of welded structures using geometric
programming. ASME Journal of Engineering for Industry, 98(3), 1021–1025.
[30] Siddall, J. N. (1972) Analytical Decision-Making in Engineering Design, Prentice-Hall.
[31] Belegundu, A. D. (1982) A Study of Mathematical Programming Methods for Structural Optimization.
Department of Civil and Environmental Engineering, University of Iowa, Iowa City, Iowa.
[32] Arora, J. S. (1989) Introduction to Optimum Design. McGraw-Hill, New York.
[33] Pérez, E. I., Coello Coello, C. A. and Hernández Aguirre, A. (2001) Extraction of design
patterns from evolutionary algorithms using case-based reasoning. Evolvable Systems: From Biology to
Hardware (ICES’2001), Y. Liu, K. Tanaka, M. Iwata, T. Higuchi and M. Yasunaga, (Eds.), Springer-Verlag.
Lecture Notes in Computer Science No. 2210, 244–255.
[34] Bentley, J. L. and Friedman, J. H. (1979) Data structures for range searching. ACM Computing Surveys, 11(4),
397–409.
[35] Rao, S. S. (1996) Engineering Optimization. John Wiley and Sons.

APPENDIX A: TEST PROBLEMS

Example 1: g01


Minimize f(x̄) = 5 Σ_{i=1}^{4} xi − 5 Σ_{i=1}^{4} xi² − Σ_{i=5}^{13} xi

subject to g1(x̄) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
           g2(x̄) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
           g3(x̄) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
           g4(x̄) = −8x1 + x10 ≤ 0
           g5(x̄) = −8x2 + x11 ≤ 0
           g6(x̄) = −8x3 + x12 ≤ 0
           g7(x̄) = −2x4 − x5 + x10 ≤ 0
           g8(x̄) = −2x6 − x7 + x11 ≤ 0
           g9(x̄) = −2x8 − x9 + x12 ≤ 0,

where the bounds are 0 ≤ xi ≤ 1 (i = 1, . . . , 9), 0 ≤ xi ≤ 100 (i = 10, 11, 12) and
0 ≤ x13 ≤ 1.
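As an illustrative transcription (ours; the benchmark itself is from Ref. [7]), g01 can be evaluated and the optimum from Section 8.1 checked as follows:

```python
# g01 as printed above: 13 variables, 9 linear inequality constraints.

def g01_objective(x):
    return 5 * sum(x[:4]) - 5 * sum(xi ** 2 for xi in x[:4]) - sum(x[4:13])

def g01_constraints(x):
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 = x
    return [
        2 * x1 + 2 * x2 + x10 + x11 - 10,
        2 * x1 + 2 * x3 + x10 + x12 - 10,
        2 * x2 + 2 * x3 + x11 + x12 - 10,
        -8 * x1 + x10,
        -8 * x2 + x11,
        -8 * x3 + x12,
        -2 * x4 - x5 + x10,
        -2 * x6 - x7 + x11,
        -2 * x8 - x9 + x12,
    ]

x_star = [1] * 9 + [3, 3, 3, 1]                      # optimum from Section 8.1
print(g01_objective(x_star))                          # -15
print(all(g <= 0 for g in g01_constraints(x_star)))   # True: x* is feasible
```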

Example 2: g02
 
 
Maximize f(x̄) = | ( Σ_{i=1}^{n} cos⁴(xi) − 2 Π_{i=1}^{n} cos²(xi) ) / √( Σ_{i=1}^{n} i·xi² ) |

subject to g1(x̄) = 0.75 − Π_{i=1}^{n} xi ≤ 0
           g2(x̄) = Σ_{i=1}^{n} xi − 7.5n ≤ 0,

where n = 20 and 0 ≤ xi ≤ 10 (i = 1, . . . , n).
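A similar sketch for g02 (our transcription; since the best known solution is not a closed-form point, the example only checks feasibility of a sample point):

```python
import math

def g02_objective(x):
    """g02 as printed above; x is a list of n values (n = 20 here)."""
    num = sum(math.cos(xi) ** 4 for xi in x) \
        - 2 * math.prod(math.cos(xi) ** 2 for xi in x)
    den = math.sqrt(sum((i + 1) * xi ** 2 for i, xi in enumerate(x)))
    return abs(num / den)

def g02_constraints(x):
    return [0.75 - math.prod(x), sum(x) - 7.5 * len(x)]

x = [1.0] * 20
print(all(g <= 0 for g in g02_constraints(x)))  # True: the point is feasible
```

Note that `math.prod` requires Python 3.8 or later.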

Example 3: g04

Minimize f(x̄) = 5.3578547x3² + 0.8356891x1x5 + 37.293239x1 − 40792.141

subject to g1(x̄) = 85.334407 + 0.0056858x2x5 + 0.0006262x1x4
                    − 0.0022053x3x5 − 92 ≤ 0
           g2(x̄) = −85.334407 − 0.0056858x2x5 − 0.0006262x1x4
                    + 0.0022053x3x5 ≤ 0
           g3(x̄) = 80.51249 + 0.0071317x2x5 + 0.0029955x1x2
                    + 0.0021813x3² − 110 ≤ 0
           g4(x̄) = −80.51249 − 0.0071317x2x5 − 0.0029955x1x2
                    − 0.0021813x3² + 90 ≤ 0
           g5(x̄) = 9.300961 + 0.0047026x3x5 + 0.0012547x1x3
                    + 0.0019085x3x4 − 25 ≤ 0
           g6(x̄) = −9.300961 − 0.0047026x3x5 − 0.0012547x1x3
                    − 0.0019085x3x4 + 20 ≤ 0,

where 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45 and 27 ≤ xi ≤ 45 (i = 3, 4, 5).
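g04 can be transcribed the same way (our sketch); the optimum quoted in Section 8.3 is reproduced to within rounding, with g1 and g6 active:

```python
def g04_objective(x):
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def g04_constraints(x):
    x1, x2, x3, x4, x5 = x
    # u, v, w are the three shared expressions; each is bounded on both sides.
    u = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    v = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2
    w = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    return [u - 92, -u, v - 110, -v + 90, w - 25, -w + 20]

x_star = (78, 33, 29.995256025682, 45, 36.775812905788)
print(g04_objective(x_star))  # ≈ -30665.539
```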

Example 4: g08

Minimize f(x̄) = sin³(2πx1) sin(2πx2) / ( x1³ (x1 + x2) )

subject to g1(x̄) = x1² − x2 + 1 ≤ 0
           g2(x̄) = 1 − x1 + (x2 − 4)² ≤ 0,

where 0 ≤ x1 ≤ 10 and 0 ≤ x2 ≤ 10.
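g08 can be sketched likewise (our transcription). Note the sign convention: Section 8.4 quotes f(x*) = 0.095825, while Table I lists the negated value −0.095825; the code below evaluates the formula as printed:

```python
import math

def g08_objective(x1, x2):
    return (math.sin(2 * math.pi * x1) ** 3 * math.sin(2 * math.pi * x2)
            / (x1 ** 3 * (x1 + x2)))

def g08_constraints(x1, x2):
    return [x1 ** 2 - x2 + 1, 1 - x1 + (x2 - 4) ** 2]

x_star = (1.2279713, 4.2453733)                       # optimum from Section 8.4
print(g08_objective(*x_star))                          # ≈ 0.095825
print(all(g <= 0 for g in g08_constraints(*x_star)))   # True: x* is feasible
```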



Example 5: g12

Maximize f(x̄) = (100 − (x1 − 5)² − (x2 − 5)² − (x3 − 5)²) / 100

subject to g(x̄) = (x1 − p)² + (x2 − q)² + (x3 − r)² − 0.0625 ≤ 0,

where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, 2, . . . , 9. The feasible region of the search
space consists of 9³ disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist
p, q, r such that the above inequality holds.
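The disjoint-spheres feasibility test for g12 is easy to express directly (sketch ours): a point is feasible if and only if some integer centre (p, q, r) lies within distance 0.25 of it.

```python
def g12_objective(x1, x2, x3):
    return (100 - (x1 - 5) ** 2 - (x2 - 5) ** 2 - (x3 - 5) ** 2) / 100

def g12_feasible(x1, x2, x3):
    # Feasible iff the point lies in at least one of the 9^3 spheres of
    # radius 0.25 centred at integer coordinates (p, q, r), 1 <= p, q, r <= 9.
    return any(
        (x1 - p) ** 2 + (x2 - q) ** 2 + (x3 - r) ** 2 <= 0.0625
        for p in range(1, 10) for q in range(1, 10) for r in range(1, 10)
    )

print(g12_objective(5, 5, 5))   # 1.0 (the global optimum from Section 8.5)
print(g12_feasible(5, 5, 5))    # True
print(g12_feasible(5.5, 5, 5))  # False: the point falls between two spheres
```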

Example 6: design of a welded beam  A welded beam is designed for minimum cost subject
to constraints on shear stress (τ), bending stress in the beam (σ), buckling load on the bar (Pc),
end deflection of the beam (δ), and side constraints [35]. There are four design variables, as
shown in Figure 5 [35]: h(x1), l(x2), t(x3) and b(x4).
The problem can be stated as follows:

Minimize f(x̄) = 1.10471x1²x2 + 0.04811x3x4(14.0 + x2)

subject to g1(x̄) = τ(x̄) − τmax ≤ 0
           g2(x̄) = σ(x̄) − σmax ≤ 0
           g3(x̄) = x1 − x4 ≤ 0
           g4(x̄) = 0.10471x1² + 0.04811x3x4(14.0 + x2) − 5.0 ≤ 0
           g5(x̄) = 0.125 − x1 ≤ 0
           g6(x̄) = δ(x̄) − δmax ≤ 0
           g7(x̄) = P − Pc(x̄) ≤ 0,

where

τ(x̄) = √( (τ′)² + 2τ′τ″ x2/(2R) + (τ″)² )

τ′ = P / (√2 x1x2),   τ″ = MR/J,   M = P(L + x2/2)

R = √( x2²/4 + ((x1 + x3)/2)² )

J = 2{ √2 x1x2 [ x2²/12 + ((x1 + x3)/2)² ] }

σ(x̄) = 6PL/(x4x3²),   δ(x̄) = 4PL³/(Ex3³x4)

Pc(x̄) = (4.013E √(x3²x4⁶/36) / L²) (1 − (x3/(2L)) √(E/(4G)))

P = 6000 lb, L = 14 in, E = 30 × 10⁶ psi, G = 12 × 10⁶ psi

τmax = 13,600 psi, σmax = 30,000 psi, δmax = 0.25 in.

FIGURE 5 The welded beam used for the sixth example.
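The welded-beam model can be transcribed as below (our sketch). Because the Table III values are rounded to four decimals, the near-active constraints g1, g2 and g7 can drift marginally either side of zero when re-evaluated, so the example only checks the cost:

```python
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6            # lb, in, psi, psi
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_constraints(x1, x2, x3, x4):
    tau_p = P / (math.sqrt(2) * x1 * x2)                       # tau'
    M = P * (L + x2 / 2)
    R = math.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2 ** 2 / 12 + ((x1 + x3) / 2) ** 2))
    tau_pp = M * R / J                                         # tau''
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (x4 * x3 ** 2)
    delta = 4 * P * L ** 3 / (E * x3 ** 3 * x4)
    p_c = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36) / L ** 2) \
        * (1 - (x3 / (2 * L)) * math.sqrt(E / (4 * G)))
    return [tau - TAU_MAX, sigma - SIGMA_MAX, x1 - x4,
            0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1, delta - DELTA_MAX, P - p_c]

x = (0.2057, 3.4705, 9.0366, 0.2057)  # CAEP's best solution from Table III
print(welded_beam_cost(*x))  # ≈ 1.7246 (Table III reports 1.7248523 at full precision)
```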

Example 7: minimization of the weight of a tension/compression spring  This problem was
described by Arora [32] and Belegundu [31], and it consists of minimizing the weight of a
tension/compression spring (see Fig. 6) subject to constraints on minimum deflection, shear
stress, surge frequency, limits on outside diameter and on design variables. The design variables
are the mean coil diameter D, the wire diameter d and the number of active coils N.
Formally, the problem can be expressed as:

Minimize f(x̄) = (N + 2)Dd²

subject to g1(x̄) = 1 − D³N/(71785d⁴) ≤ 0
           g2(x̄) = (4D² − dD)/(12566(Dd³ − d⁴)) + 1/(5108d²) − 1 ≤ 0
           g3(x̄) = 1 − 140.45d/(D²N) ≤ 0
           g4(x̄) = (D + d)/1.5 − 1 ≤ 0.
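Finally, the spring model as we read it (our transcription); the CAEP solution from Table IV reproduces the reported weight and satisfies the constraints to within rounding:

```python
def spring_weight(d, D, N):
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    return [
        1 - D ** 3 * N / (71785 * d ** 4),                       # deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                               # shear stress
        1 - 140.45 * d / (D ** 2 * N),                           # surge frequency
        (D + d) / 1.5 - 1,                                       # outside diameter
    ]

x = (0.050000, 0.317395, 14.031795)  # CAEP's best solution from Table IV
print(round(spring_weight(*x), 7))   # 0.012721
print(all(g <= 1e-3 for g in spring_constraints(*x)))  # True within rounding
```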

FIGURE 6 Tension/compression spring used for the seventh example.
