A New Hybrid Particle Swarm Optimization Algorithm For Solving Continuous Optimization Problems
Abstract

This study deals with a new hybrid global–local optimization algorithm named PSOLVER, which combines particle swarm optimization (PSO) and a spreadsheet "Solver" to solve continuous optimization problems. In the hybrid PSOLVER algorithm, PSO and Solver are used as the global and local optimizers, respectively. Thus, PSO and Solver work mutually, feeding each other with initial and sub-initial solution points, to produce fine initial solutions and to avoid local optima. A comparative study has been carried out to show the effectiveness of PSOLVER over the standard PSO algorithm. Then, six constrained and three engineering design problems have been solved, and the obtained results are compared with those of other heuristic and non-heuristic solution algorithms. The identified results demonstrate that the hybrid PSOLVER algorithm requires fewer iterations and gives more effective results than other heuristic and non-heuristic solution algorithms.

Keywords: Particle swarm optimization; Hybridization; Spreadsheets; Solver; Optimization
1. Introduction

Optimization is the process of finding the best set of solutions to achieve an objective subject to given constraints. It is a challenging part of operations research and has a wide variety of applications in economics, engineering and the management sciences (Zahara & Kao, 2009). During the last decades, a huge number of solution algorithms have been proposed for solving optimization problems. These algorithms may be classified under two main categories: non-heuristic and heuristic algorithms. Non-heuristic algorithms are mostly gradient-based search methods and are very efficient at finding local optimum solutions within reasonable times. However, they usually require gradient information to find the search directions (Lee & Geem, 2005). Thus, they may be inefficient for solving problems where the objective function and the constraints are not differentiable. Therefore, there has been an increasing interest in using heuristic algorithms to solve optimization problems.

Heuristic optimization algorithms get their mathematical basis from natural phenomena. The most widely used heuristic optimization algorithms are genetic algorithms (GA) (Goldberg, 1989; Holland, 1975), tabu search (TS) (Glover, 1977), simulated annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983), ant colony optimization (ACO) (Dorigo & Di Caro, 1999), particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), and harmony search (HS) (Geem, Kim, & Loganathan, 2001). Although these algorithms are very effective at exploring the search space, they require a relatively long time to precisely find the local optimum (Ayvaz, Kayhan, Ceylan, & Gurarslan, 2009; Fesanghary, Mahdavi, Minary-Jolandan, & Alizadeh, 2008; Houck, Joines, & Kay, 1996; Houck, Joines, & Wilson, 1997; Michalewicz, 1992).

Recently, hybrid global–local optimization algorithms have become popular solution approaches for solving optimization problems. These algorithms integrate the global exploring feature of heuristic algorithms with the local fine-tuning feature of non-heuristic algorithms. Through this integration, optimization problems can be solved more effectively than with either a global or a local optimization algorithm alone (Shannon, 1998). In these algorithms, the global optimization process searches for the optimum solution with multiple solution vectors, and the local optimization process then adjusts the results of the global optimization by taking them as its initial solution (Ayvaz et al., 2009). However, their main drawback is that programming the non-heuristic optimization algorithms may be difficult, since they require mathematical calculations such as taking partial derivatives, calculating Jacobian and/or Hessian matrices, taking matrix inversions, etc. Besides, they may require extra effort to handle the given constraint set through non-heuristic algorithms.
Recently, the popularity of spreadsheets in solving optimization problems has been increasing through their mathematical add-ins.
Most available spreadsheet packages are coupled with a "Solver" add-in (Frontline System Inc., 1999), which can solve many nonlinear optimization problems without requiring much knowledge about the underlying non-heuristic algorithms, and which is therefore extremely easy to use (Stokes & Plummer, 2004). Solver solves optimization problems through the generalized reduced gradient (GRG) algorithm (Lasdon, Waren, Jain, & Ratner, 1978) and can handle many linear and nonlinear optimization problems (Ayvaz et al., 2009).

The main objective of this study is to develop a new hybrid global–local optimization algorithm for solving constrained optimization problems. For this purpose, a new hybrid solution algorithm, PSOLVER, is proposed. In the PSOLVER algorithm, PSO is used as the global optimizer and is integrated with a spreadsheet Solver to improve the PSO results. The performance of the PSOLVER algorithm is tested on several constrained optimization problems, and the results are compared with those of other solution methods in terms of solution accuracy and the number of function evaluations. The identified results show that the PSOLVER algorithm requires fewer function evaluations and gives more effective results than other solution algorithms.

The remainder of this study is organized as follows: First, the main structure of the PSO algorithm is described; second, the necessary steps for building the PSOLVER algorithm are presented; and finally, the performance of the proposed model is tested on different constrained optimization problems.

2. The particle swarm optimization algorithm

The PSO algorithm, first proposed by Kennedy and Eberhart (1995), was developed based on observations of the social behavior of animals, such as bird flocking or fish schooling. Like other evolutionary algorithms, PSO is a population-based optimization algorithm. In PSO, the population is called the swarm, and each individual within the swarm is called a particle. During the solution process, each particle explores the search space through its current position and velocity. To solve an optimization problem using PSO, all positions and velocities are initially generated randomly from the feasible search space. Then, the velocity of each particle is updated based on its own experience and the experiences of the other particles. This task is performed by updating the velocity of each particle using the best position visited so far by that particle and the overall best position visited by the swarm. Finally, the positions of the particles are updated through their new velocities, and this process is iterated until the given termination criterion is satisfied. This solution sequence ensures that each particle can learn from its own experience (local search) and from the experience of the group (global search). The mathematical statement of the PSO algorithm can be given as follows.

Let f be the fitness function governing the problem, n the number of particles in the swarm, and m the dimension of the problem (e.g. the number of decision variables). Let $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{im}]^T$ and $\mathbf{v}_i = [v_{i1}, v_{i2}, \ldots, v_{im}]^T$ be the vectors that contain the current position and velocity of particle i in each dimension, $\hat{\mathbf{x}}_i = [\hat{x}_{i1}, \hat{x}_{i2}, \ldots, \hat{x}_{im}]^T$ the vector that contains the current best position of each particle in each dimension, and $\hat{\mathbf{g}} = [g_1, g_2, \ldots, g_m]^T$ the vector that contains the global best position in each dimension ($\forall i = 1, 2, \ldots, n$ and $\forall j = 1, 2, \ldots, m$), where $T$ denotes the transpose operator. The new velocities of the particles are calculated as follows:

$$\mathbf{v}_i^{k+1} = \omega \mathbf{v}_i^k + c_1 r_1 \left(\hat{\mathbf{x}}_i - \mathbf{x}_i^k\right) + c_2 r_2 \left(\hat{\mathbf{g}} - \mathbf{x}_i^k\right), \quad \forall i = 1, 2, \ldots, n \qquad (1)$$

where k is the iteration index; ω is the inertia constant; c1 and c2 are the acceleration coefficients, which are used to determine how much the particle's personal best and the global best influence its movement; and r1 and r2 are uniform random numbers between 0 and 1. Note that the values of ω, c1 and c2 control the impact of the previous velocity of a particle on its current one. A larger value of ω leads to global exploration, whereas smaller values result in a finer search within the solution space. Therefore, a suitable selection of ω, c1 and c2 provides a balance between the global and local search processes (Salman, Ahmad, & Al-Madani, 2002). The terms $c_1 r_1 (\hat{\mathbf{x}}_i - \mathbf{x}_i^k)$ and $c_2 r_2 (\hat{\mathbf{g}} - \mathbf{x}_i^k)$ in Eq. (1) are called the cognition and social terms, respectively. The cognition term takes into account only the particle's own experience, whereas the social term signifies the interaction between the particles. Particle velocities in a swarm are usually bounded by a maximum velocity $\mathbf{v}^{\max} = [v_1^{\max}, v_2^{\max}, \ldots, v_m^{\max}]^T$, which is calculated as a fraction of the entire search space as follows (Shi & Eberhart, 1998):

$$\mathbf{v}^{\max} = \gamma \left(\mathbf{x}^{\max} - \mathbf{x}^{\min}\right) \qquad (2)$$

where γ is a fraction (0 ≤ γ < 1), and $\mathbf{x}^{\max} = [x_1^{\max}, x_2^{\max}, \ldots, x_m^{\max}]^T$ and $\mathbf{x}^{\min} = [x_1^{\min}, x_2^{\min}, \ldots, x_m^{\min}]^T$ are the vectors that contain the upper and lower bounds of the search space in each dimension, respectively. After the velocity updating process is performed through Eqs. (1) and (2), the new positions of the particles are calculated as follows:

$$\mathbf{x}_i^{k+1} = \mathbf{x}_i^k + \mathbf{v}_i^{k+1}, \quad \forall i = 1, 2, \ldots, n \qquad (3)$$

After the calculation of Eq. (3), the corresponding fitness values are computed based on the new positions of the particles, and the values of $\hat{\mathbf{x}}_i$ and $\hat{\mathbf{g}}$ ($\forall i = 1, 2, \ldots, n$) are updated. This solution procedure is repeated until the given termination criterion is satisfied. Fig. 1 shows the step-by-step solution procedure of the PSO algorithm (Wikipedia, 2009).
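Eqs. (1)–(3) translate directly into code. The following is a minimal NumPy sketch of the loop in Fig. 1, written here for illustration only: the parameter values (ω = 0.7, c1 = c2 = 1.5, γ = 0.5), the sphere test function and all names are our assumptions rather than the settings of this study, and the position clipping is a common safeguard that the text does not specify.

```python
import numpy as np

def pso(f, x_min, x_max, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, gamma=0.5, seed=0):
    """Minimal PSO sketch implementing Eqs. (1)-(3) with the v_max bound of Eq. (2)."""
    rng = np.random.default_rng(seed)
    m = len(x_min)
    x = rng.uniform(x_min, x_max, size=(n, m))   # initial positions
    v = np.zeros((n, m))                         # initial velocities
    v_max = gamma * (x_max - x_min)              # Eq. (2)
    p_best = x.copy()                            # personal best positions
    p_val = np.array([f(xi) for xi in x])
    g_best = p_best[p_val.argmin()].copy()       # global best position
    for _ in range(iters):
        r1 = rng.random((n, m))
        r2 = rng.random((n, m))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (1)
        v = np.clip(v, -v_max, v_max)            # bound velocities by v_max
        x = np.clip(x + v, x_min, x_max)         # Eq. (3), kept inside the box
        val = np.array([f(xi) for xi in x])
        better = val < p_val                     # update personal bests
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()   # update global best
    return g_best, float(p_val.min())

# Illustrative use on the sphere function (not one of the paper's test problems):
if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    best_x, best_f = pso(sphere, np.full(5, -10.0), np.full(5, 10.0))
    print(best_x, best_f)
```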
PSO has been applied to a wide variety of disciplines, including neural network training (Eberhart & Hu, 1999; Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995, 1997; Salerno, 1997; Van Den Bergh & Engelbrecht, 2000), biochemistry (Cockshott & Hartman, 2001), manufacturing (Tandon, El-Mounayri, & Kishawy, 2002), electromagnetism (Baumgartner, Magele, & Renhart, 2004; Brandstätter & Baumgartner, 2002; Ciuprina, Loan, & Munteanu, 2002), electrical power (Abido, 2002; Yoshida, Fukuyama, Takayama, & Nakanishi, 1999), optics (Slade, Ressom, Musavi, & Miller, 2004), structural optimization (Fourie & Groenwold, 2002; Perez & Behdinan, 2007; Venter & Sobieszczanski-Sobieski, 2004), end milling (Tandon, 2000) and structural reliability (Elegbede, 2005). Generally, it can be said that PSO is applicable to most optimization problems.

3. Development of the hybrid PSOLVER algorithm

As indicated above, PSO is an efficient optimization algorithm and has been applied successfully to the solution of optimization problems. However, like other heuristic optimization algorithms, PSO is an evolutionary computation technique and may require long computational times to precisely find an exact optimum. Therefore, hybridizing PSO with a local search method is a good idea: PSO finds the region where the global optimum exists, and the local search method then employs a fine search to locate it precisely. This kind of solution approach makes the convergence rate faster than a pure global search and prevents the trapping in local optima that a pure local search suffers from (Fan & Zahara, 2007).

The current literature includes several studies in which the PSO algorithm is integrated with local search methods. Fan, Liang, and Zahara (2004) developed a hybrid optimization algorithm that integrates PSO and the Nelder–Mead (NM) simplex search method for the optimization of multimodal test functions. Their results showed that the NM–PSO algorithm is superior to other search methods.
Victoire and Jeyakumar (2004) integrated the PSO algorithm with the sequential quadratic programming (SQP) technique for solving economic dispatch problems. In their PSO–SQP algorithm, PSO is used as the global optimizer and SQP as the local optimizer for fine-tuning each solution of PSO. They tested their model's performance on three different economic dispatch problems; the results showed that their PSO–SQP algorithm provides better solutions than those of other solution methods. Kazuhiro, Shinji, and Masataka (2006) combined PSO and sequential linear programming (SLP) to solve structural optimization problems, and their results showed that the hybrid PSO–SLP finds very efficient results. Ghaffari-Miab, Farmahini-Farahani, Faraji-Dana, and Lucas (2007) developed a hybrid solution algorithm that integrates PSO and the gradient-based quasi-Newton method; they applied their hybrid model to the solution of complex time Green's functions of multilayer media, and their results indicated that the hybrid PSO algorithm is superior compared to other optimization techniques. Zahara and Hu (2008) developed a hybrid NM–PSO algorithm for solving constrained optimization problems, which handles constraint sets by using both gradient repair and constraint fitness priority-based ranking operators. According to their results, NM–PSO with the embedded constraint-handling operators is extremely effective and efficient at locating optimal solutions. As a later study, Zahara and Kao (2009) applied the NM–PSO algorithm of Zahara and Hu (2008) to the solution of engineering design problems with great success.

As summarized above, hybridizing the PSO algorithm with local search methods is an effective and efficient way to solve optimization problems. However, programming these hybrid algorithms may be a difficult task for non-specialists, since most local search methods require some complex mathematical calculations. Therefore, in this study, PSO is hybridized with a spreadsheet Solver, an approach that requires little knowledge about the programming of local search methods.

Solver is a powerful gradient-based optimization add-in, and most commercial spreadsheet products (Lotus 1-2-3, Quattro Pro, Microsoft Excel) contain it. Solver solves linear and nonlinear optimization problems through the GRG algorithm (Lasdon et al., 1978). It works by first evaluating the functions and derivatives at a starting value of the decision vector, and then iteratively searching for a better solution using a search direction suggested by the derivatives (Stokes & Plummer, 2004). To determine a search direction, Solver uses the quasi-Newton and conjugate gradient methods. Note that the user is not required to provide the partial derivatives with respect to the decision variables in Solver; instead, forward or central difference approximations are used in the search process (OTC, 2009). This may be the main advantage of using Solver as a local optimizer in this study.

It should be noted that the global optimizer PSO and the local optimizer Solver have been integrated by developing Visual Basic for Applications (VBA) code that runs in the background of a spreadsheet platform (Excel for this study). In this integration, two separate VBA codes have been developed. The first code includes the standard PSO algorithm and is used as the global optimizer. The second code is used for calling the Solver add-in and is developed by creating a VBA macro instead of calling the Solver add-in manually. Note that a macro is a series of commands grouped together as a single command to accomplish a task automatically; it can be created through the macro recorder, which saves the series of commands in VBA (Ferreira & Salcedo, 2001). The source code of the recorded macro can then be modified easily in the spreadsheet's Visual Basic Editor (Ferreira & Salcedo, 2001; Microsoft, 1995; Rosen, 1997). By using this feature of the spreadsheets, the recorded Solver macro is integrated with the developed PSO code on the VBA platform.
Note that, concerning the use of a spreadsheet Solver as a local optimizer, Ayvaz et al. (2009) first proposed a hybrid optimization algorithm in which HS and Solver are integrated to solve engineering optimization problems. For this purpose, they developed a hybrid HS–Solver algorithm and tested its performance on 4 unconstrained, 4 constrained and 4 structural engineering problems. Their results indicated that the hybrid HS–Solver algorithm requires fewer function evaluations and finds better or identical objective function values than many non-heuristic and heuristic optimization algorithms.

It should be noted that Fesanghary et al. (2008) mention two approaches for integrating global and local search processes. In the first approach, the global search process explores the entire search space until the improvement in the objective function becomes negligible, and then the local search method performs a fine search by taking the best solution of the global search as a starting point. In the second approach, both global and local search processes work simultaneously, such that all solutions of the global search are fine-tuned by the local search. When the optimized solution of the local search has a better objective function value than that of the global search, this solution is transferred to the global search, and the solution proceeds until the given termination criterion is satisfied (Fesanghary et al., 2008; Ayvaz et al., 2009). Comparing the two approaches, it is obvious that the second provides better results than the first; however, its computational cost is usually higher, since all solutions of the global search are subjected to local search. The second approach is adopted in this study, and the PSO and Solver optimizers are integrated based on a probability Pc, such that a globally generated solution vector is subjected to local search with probability Pc. Our trials and the recommendations of Fesanghary et al. (2008) and Ayvaz et al. (2009) indicate that a fairly small Pc value is sufficient for solving many optimization problems; therefore, we have used the probability Pc = 0.01 throughout the paper. After the given convergence criteria of the Solver are satisfied, the locally improved solution is included in PSO, and the global search proceeds until termination. Fig. 2 shows the step-by-step procedure of the PSOLVER algorithm.
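To make the procedure concrete, the fragment below sketches one way to code this loop, reusing the conventions of the earlier sketches: a NumPy PSO step for the global search and SciPy's SLSQP as a stand-in for the Solver add-in. Apart from Pc = 0.01, all parameter values, names and the test function are illustrative assumptions rather than the paper's VBA implementation, and constraint handling is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def psolver_sketch(f, x_min, x_max, n=30, max_gen=1000, w=0.7, c1=1.5, c2=1.5,
                   gamma=0.5, pc=0.01, seed=0):
    """Hybrid-loop sketch: PSO global step plus probabilistic local refinement."""
    rng = np.random.default_rng(seed)
    m = len(x_min)
    x = rng.uniform(x_min, x_max, (n, m))
    v = np.zeros((n, m))
    v_max = gamma * (x_max - x_min)                      # Eq. (2)
    p_best = x.copy()
    p_val = np.array([f(xi) for xi in x])
    g = int(p_val.argmin())
    for _ in range(max_gen):                             # first stopping criterion
        r1, r2 = rng.random((n, m)), rng.random((n, m))
        v = np.clip(w * v + c1 * r1 * (p_best - x)       # Eq. (1)
                    + c2 * r2 * (p_best[g] - x), -v_max, v_max)
        x = np.clip(x + v, x_min, x_max)                 # Eq. (3)
        val = np.array([f(xi) for xi in x])
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g = int(p_val.argmin())
        if rng.random() < pc:                            # hand-off with probability Pc
            res = minimize(f, p_best[g], method="SLSQP",
                           bounds=list(zip(x_min, x_max)))
            if res.success and res.fun < p_val[g]:       # transfer improved solution
                p_best[g], p_val[g] = res.x, float(res.fun)
        # (The second stopping criterion used in Section 4, reaching a
        #  reference solution, would be checked here.)
    return p_best[g], p_val[g]

# Illustrative run on the sphere function:
best_x, best_f = psolver_sketch(lambda z: float(np.sum(z ** 2)),
                                np.full(5, -10.0), np.full(5, 10.0), max_gen=100)
print(best_x, best_f)
```

In the actual PSOLVER implementation, the refinement step is the recorded Solver macro run on the spreadsheet model rather than the SciPy call shown here.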
4. Numerical applications

In the numerical applications, two stopping criteria have been considered, such that the optimization process ends when the number of generations equals 1000 or when the reference or a better solution has been obtained.

4.1. Performance evaluation study: Michalewicz's test function

Michalewicz's function is a typical example of a nonlinear multimodal function including n! local optima (Michalewicz, 1992). The function can be given as follows:

$$\min f(\mathbf{x}) = -\sum_{k=1}^{n} \sin(x_k) \left[\sin\!\left(\frac{k\,x_k^2}{\pi}\right)\right]^{2s} \qquad (4)$$

$$\text{s.t.} \quad 0 \le x_k \le \pi, \quad k = 1, 2, \ldots, n \qquad (4a)$$

where the parameter s defines the "steepness" of the valleys or edges and is assumed to be 10 for this solution. This function has a global optimum solution of f(x*) = -4.687658 when n = 5. Fig. 3 shows the solution space of the function when n = 2.
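Eq. (4) is straightforward to evaluate in code. The sketch below is our illustration with s = 10; the test coordinates are rounded from minimizer values reported in the literature for n = 5, so the printed value only approximates the optimum quoted above.

```python
import math

def michalewicz(x, s=10):
    """Michalewicz's function, Eq. (4); minimized over 0 <= x_k <= pi."""
    return -sum(
        # Eq. (4) indexes k from 1, hence (k + 1) with 0-based enumerate.
        math.sin(xk) * math.sin((k + 1) * xk ** 2 / math.pi) ** (2 * s)
        for k, xk in enumerate(x)
    )

# Rounded near-optimal point for n = 5; prints roughly -4.68
# (cf. the quoted optimum f(x*) = -4.687658).
print(michalewicz([2.20, 1.57, 1.28, 1.92, 1.72]))
```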
It can be clearly seen from Fig. 3 that the solution of this function using a gradient-based optimization algorithm is quite a difficult task, since there are many locations where the gradient of the function equals zero. Therefore, solving this problem through gradient-based algorithms depends on the quality of the initial solutions. In order to test the performance of the PSOLVER algorithm, this problem has been solved using both the PSOLVER and the standard PSO algorithms. Note that the same random number seeds, and thus the same initial solutions, have been used in both algorithms. Fig. 4 compares the convergence histories of both algorithms.

As can be seen from Fig. 4, both algorithms start from the same initial solution. Although both the PSO and the hybrid PSOLVER algorithms find the optimum solution of f(x*) = -4.687658, PSOLVER requires far fewer function evaluations than PSO: PSOLVER requires only 456 function evaluations, whereas PSO requires 67,600 function evaluations to solve the same problem.

4.2. Example 1

This problem was previously solved using, among other methods, FSA (Hedar & Fukushima, 2006), GA (Chootinan & Chen, 2006), and NM–PSO (Zahara & Hu, 2008). After applying the PSOLVER algorithm to this problem, we obtained the best solution at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) with the corresponding objective value of f(x*) = -15.000000. Table 1 compares the identified results for the different solution algorithms.

As can be seen from Table 1, while the optimum solution was obtained using the GA, EA and NM–PSO algorithms after 95,512, 122,000 and 41,959 function evaluations, respectively, the PSOLVER algorithm requires only 679 function evaluations. Therefore, the PSOLVER algorithm is the most effective of the compared solution methods in terms of the number of function evaluations.

4.3. Example 2

This minimization problem has two decision variables and two inequality constraints as given in Eq. (6):
Table 1
Comparison of the identified results for Example 1.
Table 2
Comparison of the identified results for Example 2.

Methods                          Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
EA (Runarsson & Yao, 2005)       -6961.8139        -6961.8139        -6961.8139         0                56,000
CDE (Becerra & Coello, 2006)     -6961.8139        -6961.8139        -6961.8139         0                100,100
FSA (Hedar & Fukushima, 2006)    -6961.8139        -6961.8139        -6961.8139         0                44,538
GA (Chootinan & Chen, 2006)      -6961.8139        -6961.8139        -6961.8139         0                13,577
NM–PSO (Zahara & Hu, 2008)       -6961.8240        -6961.8240        -6961.8240         0                9856
PSOLVER                          -6961.8244        -6961.8244        -6961.8244         0                179
Table 3
Comparison of the identified results for Example 3.

Methods                          Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
EA (Runarsson & Yao, 2005)       0.053942          0.111671          0.438804           1.40E-01         109,200
CDE (Becerra & Coello, 2006)     0.056180          0.288324          0.392100           1.67E-01         100,100
FSA (Hedar & Fukushima, 2006)    0.053950          0.297720          0.438851           1.89E-01         120,268
NM–PSO (Zahara & Hu, 2008)       0.053949          0.054854          0.058301           1.26E-03         265,548
PSOLVER                          0.053949          0.053950          0.053950           1.14E-07         779
Table 4
Comparison of the identified results for Example 4.

Methods                           Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
HM (Koziel & Michalewicz, 1999)   -30,664.500       -30,665.300       -30,645.900        N/A              1,400,000
SR (Runarsson & Yao, 2000)        -30,665.539       -30,665.539       -30,665.539        0.0000200        350,000
EP (Coello & Becerra, 2004)       -30,665.500       -30,662.500       -30,662.200        9.3000000        50,020
HPSO (He & Wang, 2007)            -30,665.539       -30,665.539       -30,665.539        0.0000017        81,000
NM–PSO (Zahara & Kao, 2009)       -30,665.539       -30,665.539       -30,665.539        0.0000140        19,568
PSOLVER                           -30,665.539       -30,665.539       -30,665.539        0.0000024        328
The best solution for this problem was previously obtained by He and Wang (2007) using the HPSO algorithm, with an objective function value of f(x*) = -30,665.539 after 81,000 iterations. We obtained the best solution using the PSOLVER algorithm at x* = (78, 33, 29.995256025682, 45, 36.775812905788) with the corresponding objective value of f(x*) = -30,665.539. Table 4 compares the identified results of the different solution algorithms. It can be seen in Table 4 that PSOLVER gives the same result as SR, HPSO and NM–PSO, and a better one than HM and EP. It should be noted that the hybrid PSOLVER requires only 328 function evaluations, which is much less in comparison with the other methods.
4.6. Example 5

The fifth example has two decision variables and two inequality constraints as given in Eq. (9):

$$\max f(\mathbf{x}) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3\,(x_1 + x_2)} \qquad (9)$$

$$\text{s.t.} \quad g_1(\mathbf{x}) = x_1^2 - x_2 + 1 \le 0 \qquad (9a)$$
$$g_2(\mathbf{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0 \qquad (9b)$$
$$0 \le x_1 \le 10 \qquad (9c)$$
$$0 \le x_2 \le 10 \qquad (9d)$$

This function has the global optimum at x* = (1.2279713, 4.2453733) with a corresponding function value of f(x*) = 0.095825. This function was previously solved by using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). We applied the PSOLVER algorithm to the solution of this problem and obtained the optimum solution at x* = (1.2279713, 4.2453733) with a corresponding objective function value of f(x*) = 0.095825. This solution is obtained after 308 function evaluations. The comparison of the identified results for the different solution algorithms is given in Table 5. It can be clearly seen from Table 5 that the PSOLVER algorithm finds the optimal solution with the lowest number of function evaluations among the compared algorithms.

4.7. Example 6

This maximization problem has 3 decision variables and 1 inequality constraint, as given in Eq. (10):

$$\max f(\mathbf{x}) = \frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100} \qquad (10)$$

$$\text{s.t.} \quad g(\mathbf{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0 \qquad (10a)$$
$$0 \le x_i \le 10, \quad i = 1, 2, 3 \quad \text{and} \quad p, q, r = 1, 2, \ldots, 9 \qquad (10b)$$

For this example, the feasible region of the search space consists of 9³ disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds (Zahara & Kao, 2009). For this problem, the optimum solution is x* = (5, 5, 5) with f(x*) = 1. This problem was previously solved by using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Table 6 shows the identified results of PSOLVER and the previous studies given above.

As can be seen from Table 6, the PSOLVER algorithm results in x* = (5, 5, 5) with the objective function value of f(x*) = 1. The PSOLVER algorithm requires only 584 function evaluations to obtain the optimal solution.
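Because Eq. (10a) only has to hold for some integer triple (p, q, r), feasibility can be tested by direct enumeration of the 9³ sphere centers. The sketch below is our illustration of Eqs. (10)–(10b), not code from the study.

```python
import itertools

def f(x):
    """Objective of Eq. (10)."""
    return (100 - sum((xi - 5.0) ** 2 for xi in x)) / 100.0

def feasible(x):
    """Eq. (10a): x is feasible iff some (p, q, r) in {1..9}^3 satisfies it."""
    return any(
        sum((xi - ci) ** 2 for xi, ci in zip(x, c)) - 0.0625 <= 0.0
        for c in itertools.product(range(1, 10), repeat=3)
    )

x_star = (5.0, 5.0, 5.0)
print(feasible(x_star), f(x_star))  # True 1.0 (inside the sphere at p = q = r = 5)
```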
Table 5
Comparison of the identified results for Example 5.
Table 6
Comparison of the identified results for Example 6.
Methods                           Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
HM (Koziel & Michalewicz, 1999)   0.999999          0.999135          0.991950           N/A              1,400,000
SR (Runarsson & Yao, 2000)        1.000000          1.000000          1.000000           0                350,000
EP (Coello & Becerra, 2004)       1.000000          0.996375          0.996375           9.7E-03          50,020
HPSO (He & Wang, 2007)            1.000000          1.000000          1.000000           1.6E-15          81,000
NM–PSO (Zahara & Kao, 2009)       1.000000          1.000000          1.000000           0                923
PSOLVER                           1.000000          1.000000          1.000000           2.6E-14          584
Fig. 5. A tension/compression spring design problem.

4.8. Example 7: The tension/compression spring design problem

The tension/compression spring design problem is described in Arora (1989), and the aim is to minimize the weight f(x) of a tension/compression spring (as shown in Fig. 5) subject to constraints on the minimum deflection, shear stress, surge frequency, limits on the outside diameter, and the design variables. The design variables are the wire diameter d (x1), the mean coil diameter D (x2) and the number of active coils P (x3).

The mathematical formulation of this problem can be described as follows:

$$\min f(\mathbf{x}) = (x_3 + 2)\,x_2\,x_1^2 \qquad (11)$$

$$\text{s.t.} \quad g_1(\mathbf{x}) = 1 - \frac{x_2^3 x_3}{71{,}785\,x_1^4} \le 0 \qquad (11a)$$
$$g_2(\mathbf{x}) = \frac{4x_2^2 - x_1 x_2}{12{,}566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0 \qquad (11b)$$
$$g_3(\mathbf{x}) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0 \qquad (11c)$$
$$g_4(\mathbf{x}) = \frac{x_2 + x_1}{1.5} - 1 \le 0 \qquad (11d)$$
$$0.05 \le x_1 \le 2.00 \qquad (11e)$$
$$0.25 \le x_2 \le 1.30 \qquad (11f)$$
$$2.00 \le x_3 \le 15.00 \qquad (11g)$$

This problem has been used as a benchmark for testing different optimization methods, such as the GA-based co-evolution model (GA1) (Coello, 2000), the GA using dominance-based tournament selection (GA2) (Coello & Montes, 2002), EP (Coello & Becerra, 2004), the co-evolutionary particle swarm optimization approach (CPSO) (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). After applying the PSOLVER algorithm to this problem, the best solution is obtained at x* = (0.05186, 0.356650, 11.292950) with the corresponding value of f(x*) = 0.0126652. The best solutions obtained by the above-mentioned methods and the PSOLVER algorithm are given in Tables 7 and 8.

It can be seen from Table 8 that the standard deviation of the PSOLVER solution is the smallest. In addition, PSOLVER requires only 253 function evaluations for solving this problem, while GA2, HPSO, CPSO, and NM–PSO require 80,000, 81,000, 200,000, and 80,000 function evaluations, respectively. Therefore, PSOLVER is an efficient approach for locating the global optimum of this problem.
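For reference, the spring design model of Eqs. (11)–(11g) can be written as a reusable problem specification, for example as input to a local refiner like the one sketched in Section 3 (with the constraints negated into SciPy's g(x) ≥ 0 form). The encoding below is ours, and the trial point is arbitrary rather than a solution.

```python
def spring_objective(x):
    """Eq. (11): spring weight, with x = (d, D, P) = (x1, x2, x3)."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Eqs. (11a)-(11d) in g(x) <= 0 form; positive entries flag violations."""
    x1, x2, x3 = x
    return [
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),                        # (11a)
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,                                # (11b)
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),                              # (11c)
        (x2 + x1) / 1.5 - 1.0,                                           # (11d)
    ]

spring_bounds = [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)]  # (11e)-(11g)

# Example: evaluate a candidate design (illustrative values, not a solution).
x_trial = (0.06, 0.40, 9.0)
print(spring_objective(x_trial))
print([round(g, 4) for g in spring_constraints(x_trial)])
```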
4.9. Example 8: The welded beam design problem

This design problem, which has often been used as a benchmark problem, was first proposed by Coello (2000). In this problem, a welded beam is designed for minimum cost subject to constraints on the shear stress τ(x); the bending stress σ(x) in the beam; the buckling load on the bar Pc(x); the end deflection of the beam δ(x); and side constraints. There are four design variables, as shown in Fig. 6: h (x1), l (x2), t (x3) and b (x4).

The mathematical formulation of the problem is as follows:

$$\min f(\mathbf{x}) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4\,(14 + x_2) \qquad (12)$$

$$\text{s.t.} \quad g_1(\mathbf{x}) = \tau(\mathbf{x}) - \tau_{\max} \le 0 \qquad (12a)$$
$$g_2(\mathbf{x}) = \sigma(\mathbf{x}) - \sigma_{\max} \le 0 \qquad (12b)$$
$$g_3(\mathbf{x}) = x_1 - x_4 \le 0 \qquad (12c)$$
$$g_4(\mathbf{x}) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4\,(14 + x_2) - 5 \le 0 \qquad (12d)$$
$$g_5(\mathbf{x}) = 0.125 - x_1 \le 0 \qquad (12e)$$
$$g_6(\mathbf{x}) = \delta(\mathbf{x}) - \delta_{\max} \le 0 \qquad (12f)$$
$$g_7(\mathbf{x}) = P - P_c(\mathbf{x}) \le 0 \qquad (12g)$$
$$0.1 \le x_1, x_4 \le 2.0 \qquad (12h)$$
$$0.1 \le x_2, x_3 \le 10.0 \qquad (12i)$$
Table 7
Comparison of the best solutions for tension/compression spring design problem.
Table 8
Statistical results for tension/compression spring design problem.
Table 9
Comparison of the best solutions for welded beam design problem.
Table 10
Statistical results for welded beam design problem.
Methods                        Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
GA1 (Coello, 2000)             1.748309          1.771973          1.785835           1.12E-02         N/A
GA2 (Coello & Montes, 2002)    1.728226          1.792654          1.993408           7.47E-02         80,000
EP (Coello & Becerra, 2004)    1.724852          1.971809          3.179709           4.43E-01         N/A
CPSO (He & Wang, 2006)         1.728024          1.748831          1.782143           1.29E-02         200,000
HPSO (He & Wang, 2007)         1.724852          1.749040          1.814295           4.01E-02         81,000
NM–PSO (Zahara & Kao, 2009)    1.724717          1.726373          1.733393           3.50E-03         80,000
PSOLVER                        1.724717          1.724717          1.724717           1.62E-11         297
Table 11
Comparison of the best solutions for pressure vessel design problem.
Table 12
Statistical results for pressure vessel design problem.
Methods                        Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   No. of function evaluations
GA1 (Coello, 2000)             6288.7445         6293.8432         6308.1497          7.413E+00        N/A
GA2 (Coello & Montes, 2002)    6059.9463         6177.2533         6469.3220          1.309E+02        80,000
CPSO (He & Wang, 2006)         6061.0777         6147.1332         6363.8041          8.645E+01        200,000
HPSO (He & Wang, 2007)         6059.7143         6099.9323         6288.6770          8.620E+01        81,000
NM–PSO (Zahara & Kao, 2009)    5930.3137         5946.7901         5960.0557          9.161E+00        80,000
PSOLVER                        6059.7143         6059.7143         6059.7143          4.625E-12        310
References

Eberhart, R. C., & Shi, Y. (2001). Tracking and optimizing dynamic systems with particle swarms. In Proceedings of congress on evolutionary computation, Seoul, Korea (pp. 27–30).
Elegbede, C. (2005). Structural reliability assessment based on particle swarm optimization. Structural Safety, 27(2), 171–186.
Fan, S. S., Liang, Y. C., & Zahara, E. (2004). Hybrid simplex search and particle swarm optimization for the global optimization of multimodal functions. Engineering Optimization, 36(4), 401–418.
Fan, S. S., & Zahara, E. (2007). A hybrid simplex search and particle swarm optimization for unconstrained optimization. European Journal of Operational Research, 181, 527–548.
Ferreira, E. N. C., & Salcedo, R. (2001). Can spreadsheet solvers solve demanding optimization problems? Computer Applications in Engineering Education, 9(1), 49–56.
Fesanghary, M., Mahdavi, M., Minary-Jolandan, M., & Alizadeh, Y. (2008). Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Computer Methods in Applied Mechanics and Engineering, 197, 3080–3091.
Fourie, P. C., & Groenwold, A. A. (2002). The particle swarm optimization algorithm in size and shape optimization. Structural and Multidisciplinary Optimization, 23(4), 259–267.
Frontline System Inc. (1999). A tutorial on spreadsheet optimization.
Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76(2), 60–68.
Ghaffari-Miab, M., Farmahini-Farahani, A., Faraji-Dana, R., & Lucas, C. (2007). An efficient hybrid swarm intelligence-gradient optimization method for complex time Green's functions of multilayer media. Progress in Electromagnetics Research, 77, 181–192.
Glover, F. (1977). Heuristic for integer programming using surrogate constraints. Decision Sciences, 8(1), 156–166.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.
Hedar, A. D., & Fukushima, M. (2006). Derivative-free filter simulated annealing method for constrained continuous global optimization. Journal of Global Optimization, 35(4), 521–549.
He, Q., & Wang, L. (2006). An effective co-evolutionary particle swarm optimization for engineering optimization problems. Engineering Applications of Artificial Intelligence, 20, 89–99.
He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Applied Mathematics and Computation, 186, 1407–1422.
Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. Ann Arbor: University of Michigan Press.
Houck, C. R., Joines, J. A., & Kay, M. G. (1996). Comparison of genetic algorithms, random start, and two-opt switching for solving large location–allocation problems. Computers and Operations Research, 23(6), 587–596.
Houck, C. R., Joines, J. A., & Wilson, J. R. (1997). Empirical investigation of the benefits of partial Lamarckianism. Evolutionary Computation, 5(1), 31–60.
Hu, X., & Eberhart, R. C. (2001). Tracking dynamic systems with PSO: Where's the cheese? In Proceedings of workshop on particle swarm optimization, Indianapolis, USA.
Kannan, B. K., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116, 318–320.
Kazuhiro, I., Shinji, N., & Masataka, Y. (2006). Hybrid swarm optimization techniques incorporating design sensitivities. Transactions of the Japan Society of Mechanical Engineers, 72(719), 2264–2271.
Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks, Piscataway, USA (pp. 1942–1948).
Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarm algorithm. In Proc. IEEE int. conf. systems, man, and cybernetics (Vol. 5, pp. 4104–4108).
Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Koziel, S., & Michalewicz, Z. (1999). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7, 19–44.
Lasdon, L. S., Waren, A. D., Jain, A., & Ratner, M. (1978). Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software, 4(1), 34–49.
Lee, K. S., & Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194, 3902–3933.
Michalewicz, Z. (1992). Genetic algorithms + data structures = evolution programs. New York: Springer-Verlag.
Microsoft (1995). Microsoft Excel – Visual Basic for applications. Washington: Microsoft Press.
OTC (2009). Optimization Technology Center's web site (online). <http://www-fp.mcs.anl.gov/OTC/Guide/SoftwareGuide/Blurbs/grg2.html> (accessed 31.03.2009).
Perez, R. E., & Behdinan, K. (2007). Particle swarm approach for structural design optimization. Computers and Structures, 85, 1579–1588.
Rosen, E. M. (1997). Visual Basic for applications, Add-Ins and Excel 7.0. CACHE News, 45, 1–3.
Runarsson, T. P., & Yao, X. (2000). Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3), 284–292.
Runarsson, T. P., & Yao, X. (2005). Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, 35(2), 233–243.
Salerno, J. (1997). Using the particle swarm optimization technique to train a recurrent neural model. In Proc. of the ninth IEEE int. conf. tools and artificial intelligence, USA (pp. 45–49).
Salman, A., Ahmad, I., & Al-Madani, S. (2002). Particle swarm optimization for task assignment problem. Microprocessors and Microsystems, 26, 363–371.
Shannon, M. W. (1998). Evolutionary algorithms with local search for combinatorial optimization. PhD thesis, University of California, San Diego.
Shi, Y., & Eberhart, R. C. (1998). Parameter selection in particle swarm optimization. In Evolutionary programming VII. Lecture notes in computer science (Vol. 1447). Berlin: Springer.
Slade, W. H., Ressom, H. W., Musavi, M. T., & Miller, R. L. (2004). Inversion of ocean color observations using particle swarm optimization. IEEE Transactions on Geoscience and Remote Sensing, 42(9), 1915–1923.
Stokes, L., & Plummer, J. (2004). Using spreadsheet solvers in sample design. Computational Statistics and Data Analysis, 44(3), 527–546.
Tandon, V. (2000). Closing the gap between CAD/CAM and optimized CNC end milling. MSc thesis, Purdue School of Engineering and Technology, Indianapolis, USA.
Tandon, V., El-Mounayri, H., & Kishawy, H. (2002). NC end milling optimization using evolutionary computation. International Journal of Machine Tools and Manufacture, 42(5), 595–605.
Van Den Bergh, F., & Engelbrecht, A. P. (2000). Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal, 26, 84–90.
Venter, G., & Sobieszczanski-Sobieski, J. (2004). Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Structural and Multidisciplinary Optimization, 26(1–2), 121–131.
Victoire, T. A., & Jeyakumar, A. E. (2004). Hybrid PSO–SQP for economic dispatch with valve-point effect. Electric Power Systems Research, 71(1), 51–59.
Wikipedia (2009). The free encyclopedia web site (online). <http://www.en.wikipedia.org/wiki/Particle_swarm_optimization> (accessed 31.03.2009).
Yoshida, H., Fukuyama, Y., Takayama, S., & Nakanishi, Y. (1999). A particle swarm optimization for reactive power and voltage control in electric power systems considering voltage security assessment. In Proc. IEEE int. conf. systems, man, and cybernetics, Tokyo, Japan (pp. 497–502).
Zahara, E., & Hu, C. H. (2008). Solving constrained optimization problems with hybrid particle swarm optimization. Engineering Optimization, 40(11), 1031–1049.
Zahara, E., & Kao, Y. T. (2009). Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36, 3880–3886.