
Expert Systems with Applications 37 (2010) 6798–6808



journal homepage: www.elsevier.com/locate/eswa

PSOLVER: A new hybrid particle swarm optimization algorithm for solving continuous optimization problems

Ali Haydar Kayhan 1, Huseyin Ceylan *, M. Tamer Ayvaz 2, Gurhan Gurarslan 2
Department of Civil Engineering, Pamukkale University, TR-20070 Denizli, Turkey

Keywords: Particle swarm optimization; Hybridization; Spreadsheets; Solver; Optimization

Abstract

This study deals with a new hybrid global–local optimization algorithm named PSOLVER that combines particle swarm optimization (PSO) and a spreadsheet "Solver" to solve continuous optimization problems. In the hybrid PSOLVER algorithm, PSO and Solver are used as the global and local optimizers, respectively. Thus, PSO and Solver work mutually, feeding each other with initial and sub-initial solution points, to produce fine initial solutions and to avoid local optima. A comparative study has been carried out to show the effectiveness of PSOLVER over the standard PSO algorithm. Then, six constrained and three engineering design problems have been solved, and the obtained results are compared with those of other heuristic and non-heuristic solution algorithms. The results demonstrate that the hybrid PSOLVER algorithm requires fewer iterations and gives more effective results than other heuristic and non-heuristic solution algorithms.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Optimization is the process of finding the best set of solutions to achieve an objective subject to given constraints. It is a challenging part of operations research and has a wide variety of applications in economy, engineering and management sciences (Zahara & Kao, 2009). During the last decades, a huge number of solution algorithms have been proposed for solving optimization problems. These algorithms may be classified under two main categories: non-heuristic and heuristic algorithms. Non-heuristic algorithms are mostly gradient-based search methods and are very efficient at finding local optimum solutions in reasonable time. However, they usually require gradient information to find the search directions (Lee & Geem, 2005). Thus, they may be inefficient for solving problems where the objective function and the constraints are not differentiable. Therefore, there has been increasing interest in using heuristic algorithms to solve optimization problems.

Heuristic optimization algorithms get their mathematical basis from natural phenomena. The most widely used heuristic optimization algorithms are genetic algorithms (GA) (Goldberg, 1989; Holland, 1975), tabu search (TS) (Glover, 1977), simulated annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983), ant colony optimization (ACO) (Dorigo & Di Caro, 1999), particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), and harmony search (HS) (Geem, Kim, & Loganathan, 2001). Although these algorithms are very effective at exploring the search space, they require relatively long times to precisely locate the local optimum (Ayvaz, Kayhan, Ceylan, & Gurarslan, 2009; Fesanghary, Mahdavi, Minary-Jolandan, & Alizadeh, 2008; Houck, Joines, & Kay, 1996; Houck, Joines, & Wilson, 1997; Michalewicz, 1992).

Recently, hybrid global–local optimization algorithms have become popular solution approaches for solving optimization problems. These algorithms integrate the global exploring feature of heuristic algorithms with the local fine-tuning feature of non-heuristic algorithms. Through this integration, optimization problems can be solved more effectively than with either global or local optimization algorithms alone (Shannon, 1998). In these algorithms, the global optimization process searches for the optimum solution with multiple solution vectors, and the local optimization process then adjusts the results of the global optimization by taking them as initial solutions (Ayvaz et al., 2009). However, their main drawback is that programming the non-heuristic optimization algorithms may be difficult, since they require some mathematical calculations such as taking partial derivatives, calculating Jacobian and/or Hessian matrices, taking matrix inversions, etc. Besides, they may require extra effort to handle the given constraint set through non-heuristic algorithms.

Recently, the popularity of spreadsheets in solving optimization problems has been increasing through their mathematical add-ins. Most available spreadsheet packages are coupled with a "Solver"

* Corresponding author. Tel.: +90 258 296 3386; fax: +90 258 296 3382.
E-mail addresses: hkayhan@pamukkale.edu.tr (A.H. Kayhan), hceylan@pamukkale.edu.tr (H. Ceylan), tayvaz@pamukkale.edu.tr (M.T. Ayvaz), gurarslan@pamukkale.edu.tr (G. Gurarslan).
1 Tel.: +90 258 296 3393; fax: +90 258 296 3382.
2 Tel.: +90 258 296 3384; fax: +90 258 296 3382.

0957-4174/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2010.03.046

add-in (Frontline System Inc., 1999) which can solve many nonlinear optimization problems without requiring much knowledge about the non-heuristic algorithms, and so is extremely easy to use (Stokes & Plummer, 2004). "Solver" solves optimization problems through the generalized reduced gradient (GRG) algorithm (Lasdon, Waren, Jain, & Ratner, 1978) and can handle many linear and nonlinear optimization problems (Ayvaz et al., 2009).

The main objective of this study is to develop a new hybrid global–local optimization algorithm for solving constrained optimization problems. With this purpose, a new hybrid solution algorithm, PSOLVER, is proposed. In the PSOLVER algorithm, PSO is used as a global optimizer and is integrated with a spreadsheet "Solver" to improve the PSO results. The performance of the PSOLVER algorithm is tested on several constrained optimization problems, and the results are compared with other solution methods in terms of solution accuracy and the number of function evaluations. The results show that the PSOLVER algorithm requires fewer function evaluations and gives more effective results than other solution algorithms.

The remainder of this study is organized as follows: first, the main structure of the PSO algorithm is described; second, the necessary steps of building the PSOLVER algorithm are presented; and finally, the performance of the proposed model is tested on different constrained optimization problems.

2. The particle swarm optimization algorithm

The PSO algorithm, first proposed by Kennedy and Eberhart (1995), was developed based on observations of the social behavior of animals, such as bird flocking or fish schooling. Like other evolutionary algorithms, PSO is a population-based optimization algorithm. In PSO, the members of the population are called the swarm and each individual within the swarm is called a particle. During the solution process, each particle in the swarm explores the search space through its current position and velocity. In order to solve an optimization problem using PSO, all the positions and velocities are initially generated at random from the feasible search space. Then, the velocity of each particle is updated based on its individual experience and the experiences of the other particles; this task is performed by updating the velocities using the best position of the related particle and the overall best position visited by the other particles. Finally, the positions of the particles are updated through their new velocities, and this process is iterated until the given termination criterion is satisfied. This solution sequence lets each particle in the swarm learn from its own experience (local search) and from the experience of the group (global search). The mathematical statement of the PSO algorithm is as follows.

Let f be the fitness function governing the problem, n be the number of particles in the swarm, m be the dimension of the problem (e.g. the number of decision variables), x_i = [x_i1, x_i2, ..., x_im]^T and v_i = [v_i1, v_i2, ..., v_im]^T be the vectors that contain the current positions and velocities of the particles in each dimension, x̂_i = [x̂_i1, x̂_i2, ..., x̂_im]^T be the vector that contains the current best position of each particle in each dimension, ĝ = [g_1, g_2, ..., g_m]^T be the vector that contains the global best position in each dimension (∀i = 1, 2, ..., n and ∀j = 1, 2, ..., m), and T be the transpose operator. The new velocities of the particles are calculated as follows:

v_i^(k+1) = ω·v_i^k + c1·r1·(x̂_i − x_i^k) + c2·r2·(ĝ − x_i^k),  ∀i = 1, 2, ..., n   (1)

where k is the iteration index, ω is the inertial constant, c1 and c2 are the acceleration coefficients which determine how much the particle's personal best and the global best influence its movement, and r1 and r2 are uniform random numbers between 0 and 1. Note that the values of ω, c1 and c2 control the impact of the previous historical values of a particle's velocity on its current one. A larger value of ω leads to global exploration, whereas smaller values result in a fine search within the solution space. Therefore, a suitable selection of ω, c1 and c2 provides a balance between the global and local search processes (Salman, Ahmad, & Al-Madani, 2002). The terms c1·r1·(x̂_i − x_i^k) and c2·r2·(ĝ − x_i^k) in Eq. (1) are called the cognition and social terms, respectively. The cognition term takes into account only the particle's own experience, whereas the social term signifies the interaction between the particles. The particles' velocities in a swarm are usually bounded by a maximum velocity v^max = [v_1^max, v_2^max, ..., v_m^max]^T, which is calculated as a fraction of the entire search space as follows (Shi & Eberhart, 1998):

v^max = c·(x^max − x^min)   (2)

where c is a fraction (0 ≤ c < 1), and x^max = [x_1^max, x_2^max, ..., x_m^max]^T and x^min = [x_1^min, x_2^min, ..., x_m^min]^T are the vectors that contain the upper and lower bounds of the search space for each dimension, respectively. After the velocity updating process is performed through Eqs. (1) and (2), the new positions of the particles are calculated as follows:

x_i^(k+1) = x_i^k + v_i^(k+1),  ∀i = 1, 2, ..., n   (3)

After the calculation of Eq. (3), the corresponding fitness values are calculated based on the new positions of the particles. Then, the values of x̂_i and ĝ (∀i = 1, 2, ..., n) are updated. This solution procedure is repeated until the given termination criterion has been satisfied. Fig. 1 shows the step-by-step solution procedure of the PSO algorithm (Wikipedia, 2009).

PSO has been applied to a wide variety of disciplines including neural network training (Eberhart & Hu, 1999; Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995, 1997; Salerno, 1997; Van Den Bergh & Engelbrecht, 2000), biochemistry (Cockshott & Hartman, 2001), manufacturing (Tandon, El-Mounayri, & Kishawy, 2002), electromagnetism (Baumgartner, Magele, & Renhart, 2004; Brandstätter & Baumgartner, 2002; Ciuprina, Loan, & Munteanu, 2002), electrical power (Abido, 2002; Yoshida, Fukuyama, Takayama, & Nakanishi, 1999), optics (Slade, Ressom, Musavi, & Miller, 2004), structural optimization (Fourie & Groenwold, 2002; Perez & Behdinan, 2007; Venter & Sobieszczanski-Sobieski, 2004), end milling (Tandon, 2000) and structural reliability (Elegbede, 2005). Generally, it can be said that PSO is applicable to most optimization problems.

3. Development of the hybrid PSOLVER algorithm

As indicated above, PSO is an efficient optimization algorithm and has been successfully applied to the solution of optimization problems. However, like other heuristic optimization algorithms, PSO is an evolutionary computation technique and may require long computational times to precisely find an exact optimum. Therefore, hybridizing PSO with a local search method is attractive: PSO finds the region where the global optimum exists, and the local search method employs a fine search to precisely find the global optimum. This kind of solution approach makes the convergence rate faster than pure global search and prevents the trapping in local optima suffered by pure local search (Fan & Zahara, 2007).

The current literature includes several studies in which the PSO algorithm is integrated with local search methods. Fan, Liang, and Zahara (2004) developed a hybrid optimization algorithm which integrates PSO and the Nelder–Mead (NM) simplex search method for the optimization of multimodal test functions. Their results showed that the NM–PSO algorithm is superior to other search methods. Victoire and Jeyakumar (2004) integrated the

Fig. 1. Step by step solution procedure of PSO algorithm.
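As a concrete illustration of the update rules in Eqs. (1)–(3), one PSO iteration can be sketched in Python. Note that the paper's own implementation is VBA code on a spreadsheet; the function below and all its names (`pso_step`, `pbest`, `gbest`, `vmax`) are an illustrative stand-alone sketch, not the authors' code:

```python
import random

def pso_step(X, V, pbest, gbest, w, c1, c2, vmax):
    """One PSO iteration: Eq. (1) velocity update, clamped to the
    maximum velocity of Eq. (2), then the Eq. (3) position update."""
    n, m = len(X), len(X[0])
    for i in range(n):
        for j in range(m):
            r1, r2 = random.random(), random.random()
            v = (w * V[i][j]
                 + c1 * r1 * (pbest[i][j] - X[i][j])   # cognition term
                 + c2 * r2 * (gbest[j] - X[i][j]))     # social term
            V[i][j] = max(-vmax[j], min(vmax[j], v))   # bound by v_max
            X[i][j] += V[i][j]                         # Eq. (3)
    return X, V
```

After each such step, the fitness values at the new positions are evaluated and the personal and global bests are refreshed, exactly as the text describes.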

PSO algorithm with the sequential quadratic programming (SQP) technique for solving economic dispatch problems. In their PSO–SQP algorithm, PSO is used as the global optimizer and SQP as the local optimizer for fine-tuning each solution of PSO. They tested their model performance on three different economic dispatch problems, and the results showed that their PSO–SQP algorithm provides better solutions than those of other solution methods. Kazuhiro, Shinji, and Masataka (2006) combined PSO and sequential linear programming (SLP) to solve structural optimization problems. Their results showed that the hybrid PSO–SLP finds very efficient results. Ghaffari-Miab, Farmahini-Farahani, Faraji-Dana, and Lucas (2007) developed a hybrid solution algorithm which integrates PSO and the gradient-based quasi-Newton method. They applied their hybrid model to the solution of complex time Green's functions of multilayer media. Their results indicated that the hybrid PSO algorithm is superior compared to other optimization techniques. Zahara and Hu (2008) developed a hybrid NM–PSO algorithm for solving constrained optimization problems. Their NM–PSO algorithm handles constraint sets by using both gradient repair and constraint fitness priority-based ranking operators. According to their results, NM–PSO with embedded constraint-handling operators is extremely effective and efficient at locating optimal solutions. As a later study, Zahara and Kao (2009) applied the NM–PSO algorithm of Zahara and Hu (2008) to the solution of engineering design problems with great success.

As summarized above, hybridizing the PSO algorithm with local search methods is an effective and efficient way to solve optimization problems. However, programming these hybrid algorithms may be a difficult task for non-specialists, since most of the local search methods require some complex mathematical calculations. Therefore, in this study, PSO is hybridized with a spreadsheet Solver, since this requires little knowledge about the programming of local search methods.

Solver is a powerful gradient-based optimization add-in, and most commercial spreadsheet products (Lotus 1-2-3®, Quattro Pro®, Microsoft Excel®) contain it. Solver solves linear and nonlinear optimization problems through the GRG algorithm (Lasdon et al., 1978). It works by first evaluating the functions and derivatives at a starting value of the decision vector, and then iteratively searches for a better solution using a search direction suggested by the derivatives (Stokes & Plummer, 2004). To determine a search direction, Solver uses the quasi-Newton and conjugate gradient methods. Note that the user is not required to provide the partial derivatives with respect to the decision variables in Solver; instead, forward or central difference approximations are used in the search process (OTC, 2009). This may be the main advantage of using Solver as a local optimizer in this study.

It should be noted that the global optimizer PSO and the local optimizer Solver have been integrated by developing a running Visual Basic for Applications (VBA) code on the background of a spreadsheet platform (Excel for this study). In this integration, two separate VBA codes have been developed. The first code includes the standard PSO algorithm and is used as the global optimizer. The second code is used for calling the Solver add-in and was developed by creating a VBA macro instead of manually calling the Solver add-in. Note that a macro is a series of commands grouped together as a single command to accomplish a task automatically, and it can be created through the macro recorder that saves the series of commands in VBA (Ferreira & Salcedo, 2001). The source code of the recorded macro can be easily modified in the Visual Basic Editor of the spreadsheet (Ferreira & Salcedo, 2001; Microsoft, 1995; Rosen, 1997). By using this feature of spreadsheets, the recorded Solver macro is integrated with the developed PSO code on the VBA platform.

Note that, dealing with the use of a spreadsheet Solver as a local optimizer, Ayvaz et al. (2009) first proposed a hybrid optimization algorithm in which HS and Solver are integrated to solve engineering optimization problems. With this purpose, they developed a hybrid HS–Solver algorithm and tested its performance on 4 unconstrained, 4 constrained and 4 structural engineering problems. Their results indicated that the hybrid HS–Solver algorithm requires fewer function evaluations and finds better or identical objective function values than many non-heuristic and heuristic optimization algorithms.

It should be noted that Fesanghary et al. (2008) mention two approaches for integrating global and local search processes. In the first approach, the global search process explores the entire search space until the objective function improvement is negligible, and then the local search method performs a fine search by taking the best solution of the global search as a starting point. In the second approach, both global and local search processes work simultaneously, such that all the solutions of the global search are fine-tuned by the local search. When the optimized solution of the local search has a better objective function value than that of the global search, this solution is transferred to the global search, and the solution proceeds until

the given termination criterion is satisfied (Fesanghary et al., 2008; Ayvaz et al., 2009). Comparing the two approaches, the second clearly provides better results than the first; however, its computational cost is usually higher, since all the solutions of the global search are subjected to local search. The second approach is adopted in this study, and the PSO and Solver optimizers are integrated based on a probability Pc, such that a globally generated solution vector is subjected to local search with probability Pc. Our trials and the recommendations of Fesanghary et al. (2008) and Ayvaz et al. (2009) indicate that a fairly small Pc value is sufficient for solving many optimization problems; therefore, we have used the probability Pc = 0.01 throughout the paper. After the given convergence criteria of the Solver are satisfied, the locally improved solution is included in PSO, and the global search proceeds until termination. Fig. 2 shows the step-by-step procedure of the PSOLVER algorithm.

4. Numerical applications

In this section, the performance of the PSOLVER algorithm is tested by solving several constrained optimization problems. Before solving these examples, however, it is essential to show the efficiency of PSOLVER over the standard PSO algorithm. With this purpose, a performance evaluation study has been carried out by solving a common unconstrained optimization problem using both the PSOLVER and standard PSO algorithms. Then, six constrained benchmark problems and three well-known engineering design problems have been solved, and the results have been compared with other non-heuristic and heuristic optimization algorithms.

The solution parameters of the PSOLVER algorithm were set as follows: the number of particles is n = 21m + 1 (Zahara & Kao, 2009); the acceleration coefficients are c1 = c2 = 2; the inertia factor is ω = 0.5 + rand(0, 1)/2.0 (Eberhart & Shi, 2001; Hu & Eberhart, 2001); the maximum velocity of the particles is v^max = 0.1·(x^max − x^min); and the Solver run probability is Pc = 0.01. All the examples have been solved 30 times with different random number seeds to show the robustness of the algorithm. Two stopping criteria have been considered, such that the optimization process ends when the number of generations equals 1000 or when the reference or a better solution has been obtained.

4.1. Performance evaluation study: Michalewicz's test function

Michalewicz's function is a typical example of a nonlinear multimodal function, including n! local optima (Michalewicz, 1992). The function is given as follows:

Min f(x) = −Σ_{k=1}^{n} sin(x_k)·[sin(k·x_k²/π)]^{2s}   (4)
s.t. 0 ≤ x_k ≤ π, k = 1, 2, ..., n   (4a)

where the parameter s defines the "steepness" of the valleys or edges and is assumed to be 10 for this solution. This function has a global optimum of f(x*) = −4.687658 when n = 5. Fig. 3 shows the solution space of the function when n = 2.

Fig. 3. Michalewicz's test function.
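Eq. (4) is straightforward to evaluate directly; a minimal sketch in Python follows. The test point (2.20, 1.57) used in the usage note is the commonly cited approximate minimizer for n = 2, an assumption not stated in the text:

```python
import math

def michalewicz(x, s=10):
    """Eq. (4): f(x) = -sum_k sin(x_k) * [sin(k * x_k^2 / pi)]^(2s).
    The 1-based index k matches the summation index in the text."""
    return -sum(math.sin(xk) * math.sin(k * xk ** 2 / math.pi) ** (2 * s)
                for k, xk in enumerate(x, start=1))
```

For n = 2, `michalewicz([2.20, 1.57])` evaluates to roughly −1.80, near the function's two-dimensional minimum, while most other points in [0, π]² score close to 0, which illustrates the narrow valleys that make gradient-based search from a poor start ineffective.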

Fig. 2. Step by step solution procedure of PSOLVER algorithm.
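The overall PSOLVER procedure of Fig. 2 can be sketched as follows. Since the GRG-based Solver add-in only exists inside a spreadsheet, the local optimizer below is replaced by a crude pattern search acting as a stand-in; all names and parameter values are illustrative, with only the Pc-triggered local refinement taken from the text:

```python
import random

def local_refine(f, x, step=0.1, iters=50):
    """Stand-in for the spreadsheet Solver: a simple pattern search.
    (The paper uses the GRG-based Solver add-in; this only mimics its role.)"""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for j in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[j] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5          # shrink the pattern when stuck
    return x, fx

def psolver(f, bounds, n_particles=20, iters=100, pc=0.01, seed=1):
    """Minimize f over box bounds with PSO plus probabilistic local search."""
    rng = random.Random(seed)
    m = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * m for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]
    vmax = [0.1 * (hi - lo) for lo, hi in bounds]   # Eq. (2) with c = 0.1
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(m):
                r1, r2 = rng.random(), rng.random()
                v = (0.7 * V[i][j] + 2 * r1 * (P[i][j] - X[i][j])
                     + 2 * r2 * (gbest[j] - X[i][j]))
                V[i][j] = max(-vmax[j], min(vmax[j], v))
                X[i][j] += V[i][j]
            fx = f(X[i])
            # with probability Pc, hand this solution to the local optimizer
            if rng.random() < pc:
                X[i], fx = local_refine(f, X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf
```

On a simple convex test such as the 2-D sphere function, the locally refined solutions are folded back into the swarm as soon as they improve on the global best, which is the feedback loop the paper attributes to the second integration approach.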



Fig. 4. Convergence history of PSO and PSOLVER.

It can be clearly seen from Fig. 3 that the solution of this function using a gradient-based optimization algorithm is a quite difficult task, since there are many locations where the gradient of the function equals zero. Therefore, solving this problem through gradient-based algorithms depends on the quality of the initial solutions. In order to test the performance of the PSOLVER algorithm, this problem has been solved using both the PSOLVER and standard PSO algorithms. Note that the same random number seeds, and thus the same initial solutions, have been used in both algorithms. Fig. 4 compares the convergence histories of both algorithms.

As can be seen from Fig. 4, both algorithms start from the same initial solution. Although both PSO and the hybrid PSOLVER algorithm find the optimum solution of f(x*) = −4.687658, PSOLVER requires far fewer function evaluations: PSOLVER requires only 456 function evaluations, whereas PSO requires 67,600 function evaluations to solve the same problem.

4.2. Example 1

The first minimization problem, which includes 13 decision variables and nine inequality constraints, is given in Eq. (5):

Min f(x) = 5·Σ_{i=1}^{4} x_i − 5·Σ_{i=1}^{4} x_i² − Σ_{i=5}^{13} x_i   (5)
s.t. g1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0   (5a)
g2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0   (5b)
g3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0   (5c)
g4(x) = −8x1 + x10 ≤ 0   (5d)
g5(x) = −8x2 + x11 ≤ 0   (5e)
g6(x) = −8x3 + x12 ≤ 0   (5f)
g7(x) = −2x4 − x5 + x10 ≤ 0   (5g)
g8(x) = −2x6 − x7 + x11 ≤ 0   (5h)
g9(x) = −2x8 − x9 + x12 ≤ 0   (5i)
0 ≤ xi ≤ 1, i = 1, 2, 3, ..., 9   (5j)
0 ≤ xi ≤ 100, i = 10, 11, 12   (5k)
0 ≤ x13 ≤ 1   (5l)

The optimal solution of this problem is at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) with a corresponding function value of f(x*) = −15. This function was previously solved using the Evolutionary Algorithm (EA) (Runarsson & Yao, 2005), Cultural Differential Evolution (CDE) (Becerra & Coello, 2006), Filter Simulated Annealing (FSA) (Hedar & Fukushima, 2006), GA (Chootinan & Chen, 2006), and NM–PSO (Zahara & Hu, 2008) methods. After applying the PSOLVER algorithm to this problem, we obtained the best solution at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) with the corresponding objective value of f(x*) = −15.000000. Table 1 compares the identified results for different solution algorithms.

As can be seen from Table 1, while the optimum solution was obtained using the GA, EA and NM–PSO algorithms after 95,512, 122,000 and 41,959 function evaluations, respectively, the PSOLVER algorithm requires only 679 function evaluations. Therefore, the PSOLVER algorithm is the most effective of the compared solution methods in terms of the number of function evaluations.

4.3. Example 2

This minimization problem has two decision variables and two inequality constraints as given in Eq. (6):

Min f(x) = (x1 − 10)³ + (x2 − 20)³   (6)
s.t. g1(x) = −(x1 − 5)² − (x2 − 5)² + 100 ≤ 0   (6a)
g2(x) = (x1 − 6)² + (x2 − 5)² − 82.81 ≤ 0   (6b)
13 ≤ x1 ≤ 100   (6c)
0 ≤ x2 ≤ 100   (6d)

This function has an optimal solution at x* = (14.095, 0.84296) with a corresponding function value of f(x*) = −6961.81388. This problem was previously solved using the EA (Runarsson & Yao, 2005), CDE (Becerra & Coello, 2006), FSA (Hedar & Fukushima, 2006), GA (Chootinan & Chen, 2006), and NM–PSO (Zahara & Hu, 2008) methods. Among those studies, the best solution was reported by Zahara and Hu (2008), with an objective function value of f(x*) = −6961.8240 using the NM–PSO algorithm after 9856 iterations. We applied the PSOLVER algorithm to this problem and obtained the optimum solution at x* = (14.095, 0.842951), where the corresponding objective function value is f(x*) = −6961.8244. This solution was obtained after 179 iterations. The comparison of the identified results for different solution algorithms is given in Table 2. It can be clearly seen from Table 2 that the PSOLVER algorithm provides a better solution than the other solution algorithms with a smaller number of function evaluations.

4.4. Example 3

The third example has 3 constraints and 5 decision variables. These are:

Table 1
Comparison of the identified results for Example 1.

Methods                           Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   Function evaluations
EA (Runarsson & Yao, 2005)        −15.000000        −15.000000        −15.000000         0                122,000
CDE (Becerra & Coello, 2006)      −15.000000        −14.999996        −14.999993         0.000002         100,100
FSA (Hedar & Fukushima, 2006)     −14.999105        −14.993316        −14.979977         0.004813         205,748
GA (Chootinan & Chen, 2006)       −15.000000        −15.000000        −15.000000         0                95,512
NM–PSO (Zahara & Hu, 2008)        −15.000000        −15.000000        −15.000000         0                41,959
PSOLVER                           −15.000000        −15.000000        −15.000000         0                679
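The reported optimum of Example 1 can be checked directly against Eq. (5); the short script below (illustrative, not part of the original study) confirms the objective value and shows that the first and seventh constraints are active at x*:

```python
# Objective and selected constraints of Eq. (5) at the reported optimum
# x* = (1,...,1,3,3,3,1). The text's 1-based index i maps to x[0..12].
x = [1.0] * 9 + [3.0, 3.0, 3.0] + [1.0]
f = 5 * sum(x[:4]) - 5 * sum(v * v for v in x[:4]) - sum(x[4:13])
g1 = 2 * x[0] + 2 * x[1] + x[9] + x[10] - 10   # Eq. (5a), active at x*
g7 = -2 * x[3] - x[4] + x[9]                   # Eq. (5g), active at x*
print(f, g1, g7)   # -> -15.0 0.0 0.0
```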

Table 2
Comparison of the identified results for Example 2.

Methods                           Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   Function evaluations
EA (Runarsson & Yao, 2005)        −6961.8139        −6961.8139        −6961.8139         0                56,000
CDE (Becerra & Coello, 2006)      −6961.8139        −6961.8139        −6961.8139         0                100,100
FSA (Hedar & Fukushima, 2006)     −6961.8139        −6961.8139        −6961.8139         0                44,538
GA (Chootinan & Chen, 2006)       −6961.8139        −6961.8139        −6961.8139         0                13,577
NM–PSO (Zahara & Hu, 2008)        −6961.8240        −6961.8240        −6961.8240         0                9856
PSOLVER                           −6961.8244        −6961.8244        −6961.8244         0                179
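For Example 2, the reported optimum is easy to sanity-check against Eq. (6): at x* = (14.095, 0.84296) both inequality constraints sit essentially on their boundaries, i.e. the optimum lies at the intersection of the two circular constraint boundaries. A quick illustrative check:

```python
# Eq. (6) evaluated at the reported optimum x* = (14.095, 0.84296).
x1, x2 = 14.095, 0.84296
f = (x1 - 10) ** 3 + (x2 - 20) ** 3
g1 = -(x1 - 5) ** 2 - (x2 - 5) ** 2 + 100   # ~0: active constraint
g2 = (x1 - 6) ** 2 + (x2 - 5) ** 2 - 82.81  # ~0: active constraint
```

Both g-values come out within rounding error of zero, and f is within a few thousandths of the reported −6961.814.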

Table 3
Comparison of the identified results for Example 3.

Methods                           Best obj. value   Mean obj. value   Worst obj. value   Std. deviation   Function evaluations
EA (Runarsson & Yao, 2005)        0.053942          0.111671          0.438804           1.40E−01         109,200
CDE (Becerra & Coello, 2006)      0.056180          0.288324          0.392100           1.67E−01         100,100
FSA (Hedar & Fukushima, 2006)     0.053950          0.297720          0.438851           1.89E−01         120,268
NM–PSO (Zahara & Hu, 2008)        0.053949          0.054854          0.058301           1.26E−03         265,548
PSOLVER                           0.053949          0.053950          0.053950           1.14E−07         779
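Example 3 differs from the earlier examples in that all three of its constraints are equalities, so a useful illustrative check is that their residuals vanish at the reported optimum of Eq. (7) (script mine, not from the study):

```python
import math

# Eq. (7) at the reported optimum; the three equality constraints
# should vanish and f should match the reported 0.0539498.
xs = [-1.717143, 1.595709, 1.827247, -0.7636413, -0.763645]
f = math.exp(xs[0] * xs[1] * xs[2] * xs[3] * xs[4])
h1 = sum(v * v for v in xs) - 10           # g1(x) = 0
h2 = xs[1] * xs[2] - 5 * xs[3] * xs[4]     # g2(x) = 0
h3 = xs[0] ** 3 + xs[1] ** 3 + 1           # g3(x) = 0
```

All three residuals are on the order of 1e-5 or smaller, consistent with the rounded decision-variable values quoted in the text.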

Min f(x) = e^(x1·x2·x3·x4·x5)   (7)
s.t. g1(x) = x1² + x2² + x3² + x4² + x5² − 10 = 0   (7a)
g2(x) = x2·x3 − 5·x4·x5 = 0   (7b)
g3(x) = x1³ + x2³ + 1 = 0   (7c)
−2.3 ≤ xi ≤ 2.3, i = 1, 2   (7d)
−3.2 ≤ xi ≤ 3.2, i = 3, 4, 5   (7e)

For this problem, the optimum solution is x* = (−1.717143, 1.595709, 1.827247, −0.7636413, −0.763645), where f(x*) = 0.0539498. This problem was previously solved using the EA (Runarsson & Yao, 2005), CDE (Becerra & Coello, 2006), FSA (Hedar & Fukushima, 2006), and NM–PSO (Zahara & Hu, 2008) methods. Table 3 shows the optimal solutions of PSOLVER and the previous solution algorithms.

As can be seen from Table 3, the NM–PSO and PSOLVER algorithms give the best result, with an objective function value of f(x*) = 0.053949. It should be noted that the lowest standard deviation, which is observed with the PSOLVER algorithm, demonstrates its higher robustness in comparison with the other algorithms. The best solution vector x* = (−1.717546, 1.596176, 1.826500, −0.763605, −0.763594) has been obtained after 779 function evaluations with PSOLVER, while the NM–PSO algorithm requires 265,548.

4.5. Example 4

This example has 5 decision variables and 6 inequality constraints as given in Eq. (8):

Min f(x) = 5.3578547·x3² + 0.8356891·x1·x5 + 37.293239·x1 − 40792.141   (8)
s.t. g1(x) = 85.334407 + 0.0056858·x2·x5 + 0.0006262·x1·x4 − 0.0022053·x3·x5 − 92 ≤ 0   (8a)
g2(x) = −85.334407 − 0.0056858·x2·x5 − 0.0006262·x1·x4 + 0.0022053·x3·x5 ≤ 0   (8b)
g3(x) = 80.51249 + 0.0071317·x2·x5 + 0.0029955·x1·x2 + 0.0021813·x3² − 110 ≤ 0   (8c)
g4(x) = −80.51249 − 0.0071317·x2·x5 − 0.0029955·x1·x2 − 0.0021813·x3² + 90 ≤ 0   (8d)
g5(x) = 9.300961 + 0.0047026·x3·x5 + 0.0012547·x1·x3 + 0.0019085·x3·x4 − 25 ≤ 0   (8e)
g6(x) = −9.300961 − 0.0047026·x3·x5 − 0.0012547·x1·x3 − 0.0019085·x3·x4 + 20 ≤ 0   (8f)
78 ≤ x1 ≤ 102   (8g)
33 ≤ x2 ≤ 45   (8h)
27 ≤ xi ≤ 45, i = 3, 4, 5   (8i)

The optimal solution of the problem is at x* = (78, 33, 29.995256025682, 45, 36.775812905788) with a corresponding function value of f(x*) = −30,665.539. This function was previously solved using homomorphous mapping (HM) (Koziel & Michalewicz, 1999), Stochastic Ranking (SR) (Runarsson & Yao, 2000), evolutionary programming (EP) (Coello & Becerra, 2004), hybrid particle swarm optimization (HPSO) (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009).
6804 A.H. Kayhan et al. / Expert Systems with Applications 37 (2010) 6798–6808

Table 4
Comparison of the identified results for Example 4.

Methods                            Best f(x)     Mean f(x)     Worst f(x)    Std. dev.   No. of function evaluations
HM (Koziel & Michalewicz, 1999)    -30,664.500   -30,665.300   -30,645.900   N/A         1,400,000
SR (Runarsson & Yao, 2000)         -30,665.539   -30,665.539   -30,665.539   0.0000200   350,000
EP (Coello & Becerra, 2004)        -30,665.500   -30,662.500   -30,662.200   9.3000000   50,020
HPSO (He & Wang, 2007)             -30,665.539   -30,665.539   -30,665.539   0.0000017   81,000
NM–PSO (Zahara & Kao, 2009)        -30,665.539   -30,665.539   -30,665.539   0.0000140   19,568
PSOLVER                            -30,665.539   -30,665.539   -30,665.539   0.0000024   328
Among those studies, the best solution was obtained by He and Wang (2007) using the HPSO algorithm, with an objective function value of f(x*) = -30,665.539 after 81,000 iterations. We obtained the best solution using the PSOLVER algorithm at x* = (78, 33, 29.995256025682, 45, 36.775812905788), with the corresponding objective value of f(x*) = -30,665.539. Table 4 compares the identified results of the different solution algorithms. It can be seen in Table 4 that PSOLVER gives the same result as SR, HPSO, and NM–PSO, and a better one than HM and EP. It should be noted that the hybrid PSOLVER requires only 328 function evaluations, which is much less than the other methods.

4.6. Example 5

The fifth example has two decision variables and two inequality constraints, as given in Eq. (9):

Max f(x) = sin^3(2π x1) sin(2π x2) / (x1^3 (x1 + x2))    (9)
s.t. g1(x) = x1^2 - x2 + 1 ≤ 0                           (9a)
     g2(x) = 1 - x1 + (x2 - 4)^2 ≤ 0                     (9b)
     0 ≤ x1 ≤ 10                                         (9c)
     0 ≤ x2 ≤ 10                                         (9d)

This function has the global optimum at x* = (1.2279713, 4.2453733), with a corresponding function value of f(x*) = 0.095825. It was previously solved by using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). We applied the PSOLVER algorithm to this problem and obtained the optimum solution at x* = (1.2279713, 4.2453733), with a corresponding objective function value of f(x*) = 0.095825. This solution is obtained after 308 function evaluations. The comparison of the identified results for the different solution algorithms is given in Table 5. It can be clearly seen from Table 5 that the PSOLVER algorithm finds the optimal solution with the lowest number of function evaluations among all the algorithms.

4.7. Example 6

This maximization problem has 3 decision variables and 1 inequality constraint, as given in Eq. (10):

Max f(x) = (100 - (x1 - 5)^2 - (x2 - 5)^2 - (x3 - 5)^2) / 100          (10)
s.t. g(x) = (x1 - p)^2 + (x2 - q)^2 + (x3 - r)^2 - 0.0625 ≤ 0          (10a)
     0 ≤ xi ≤ 10,  i = 1, 2, 3  and  p, q, r = 1, 2, ..., 9            (10b)

For this example, the feasible region of the search space consists of 9^3 disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds (Zahara & Kao, 2009). For this problem, the optimum solution is x* = (5, 5, 5) with f(x*) = 1. This problem was previously solved by using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Table 6 shows the identified results of PSOLVER and the previous studies given above.

As can be seen from Table 6, the PSOLVER algorithm converges to x* = (5, 5, 5) with an objective function value of f(x*) = 1. The PSOLVER algorithm requires only 584 function evaluations to obtain the optimal solution.
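Eq. (9) is simple enough to verify directly; a short script (an illustrative check, not part of the original study) confirms that the reported optimum is feasible and attains f ≈ 0.095825:

```python
import math

# Evaluate the Example 5 objective (Eq. (9)) and constraints (9a)-(9b)
# at the optimum reported in the text.
x1, x2 = 1.2279713, 4.2453733

f = math.sin(2*math.pi*x1)**3 * math.sin(2*math.pi*x2) / (x1**3 * (x1 + x2))
g1 = x1**2 - x2 + 1          # (9a)
g2 = 1 - x1 + (x2 - 4)**2    # (9b)

assert g1 <= 0 and g2 <= 0   # both constraints strictly satisfied
assert abs(f - 0.095825) < 1e-5
```

Both constraints are satisfied with slack here, so the maximum is interior with respect to g1 and g2.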

Table 5
Comparison of the identified results for Example 5.

Methods                            Best f(x)   Mean f(x)   Worst f(x)   Std. dev.   No. of function evaluations
HM (Koziel & Michalewicz, 1999)    0.095825    0.089157    0.029144     N/A         1,400,000
SR (Runarsson & Yao, 2000)         0.095825    0.095825    0.095825     2.6E-17     350,000
EP (Coello & Becerra, 2004)        0.095825    0.095825    0.095825     0           50,020
HPSO (He & Wang, 2007)             0.095825    0.095825    0.095825     1.2E-10     81,000
NM–PSO (Zahara & Kao, 2009)        0.095825    0.095825    0.095825     3.5E-08     2,103
PSOLVER                            0.095825    0.095825    0.095825     2.7E-12     308

Table 6
Comparison of the identified results for Example 6.

Methods                            Best f(x)   Mean f(x)   Worst f(x)   Std. dev.   No. of function evaluations
HM (Koziel & Michalewicz, 1999)    0.999999    0.999135    0.991950     N/A         1,400,000
SR (Runarsson & Yao, 2000)         1.000000    1.000000    1.000000     0           350,000
EP (Coello & Becerra, 2004)        1.000000    0.996375    0.996375     9.7E-03     50,020
HPSO (He & Wang, 2007)             1.000000    1.000000    1.000000     1.6E-15     81,000
NM–PSO (Zahara & Kao, 2009)        1.000000    1.000000    1.000000     0           923
PSOLVER                            1.000000    1.000000    1.000000     2.6E-14     584
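The Eq. (10) optimum is also easy to verify by hand; the following snippet (an illustrative check, not code from the original paper) confirms that x* = (5, 5, 5) lies inside the feasible sphere centered at p = q = r = 5 and yields f(x*) = 1:

```python
# Check the Example 6 (Eq. (10)) optimum: x* = (5, 5, 5) must fall inside
# at least one of the 9^3 disjoint spheres and maximize the objective.
x = (5.0, 5.0, 5.0)

f = (100 - sum((xi - 5)**2 for xi in x)) / 100

# Feasible iff some integer center (p, q, r) in {1,...,9}^3 satisfies (10a).
feasible = any(
    (x[0] - p)**2 + (x[1] - q)**2 + (x[2] - r)**2 - 0.0625 <= 0
    for p in range(1, 10) for q in range(1, 10) for r in range(1, 10)
)

assert feasible
assert f == 1.0
```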

Fig. 5. A tension/compression string design problem.

4.8. Example 7: The tension/compression string design problem

The tension/compression string design problem is described in Arora (1989); the aim is to minimize the weight f(x) of a tension/compression spring (as shown in Fig. 5) subject to constraints on minimum deflection, shear stress, surge frequency, the outside diameter, and the design variables. The design variables are the wire diameter d (x1), the mean coil diameter D (x2), and the number of active coils P (x3).

The mathematical formulation of this problem can be described as follows:

Min f(x) = (x3 + 2) x2 x1^2                                                           (11)
s.t. g1(x) = 1 - x2^3 x3 / (71,785 x1^4) ≤ 0                                          (11a)
     g2(x) = (4 x2^2 - x1 x2) / (12,566 (x2 x1^3 - x1^4)) + 1 / (5108 x1^2) - 1 ≤ 0   (11b)
     g3(x) = 1 - 140.45 x1 / (x2^2 x3) ≤ 0                                            (11c)
     g4(x) = (x2 + x1) / 1.5 - 1 ≤ 0                                                  (11d)
     0.05 ≤ x1 ≤ 2.00                                                                 (11e)
     0.25 ≤ x2 ≤ 1.30                                                                 (11f)
     2.00 ≤ x3 ≤ 15.00                                                                (11g)

This problem has been used as a benchmark for testing different optimization methods, such as the GA-based co-evolution model (GA1) (Coello, 2000), the GA using dominance-based tournament selection (GA2) (Coello & Montes, 2002), EP (Coello & Becerra, 2004), the co-evolutionary particle swarm optimization approach (CPSO) (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). After applying the PSOLVER algorithm to this problem, the best solution is obtained at x = (0.051686, 0.356650, 11.292950), with the corresponding value of f(x) = 0.0126652. The best solutions obtained by the above-mentioned methods and by the PSOLVER algorithm are given in Tables 7 and 8.

It can be seen from Table 8 that the standard deviation of the PSOLVER solution is the smallest. In addition, the PSOLVER requires only 253 function evaluations to solve this problem, while GA2, HPSO, CPSO, and NM–PSO require 80,000, 81,000, 200,000, and 80,000 function evaluations, respectively. Therefore, the PSOLVER is an efficient approach for locating the global optimum of this problem.

4.9. Example 8: The welded beam design problem

This design problem, which has often been used as a benchmark, was first proposed by Coello (2000). In this problem, a welded beam is designed for minimum cost subject to constraints on the shear stress τ, the bending stress σ in the beam, the buckling load on the bar Pc, the end deflection of the beam δ, and side constraints. There are four design variables, as shown in Fig. 6: h (x1), l (x2), t (x3) and b (x4).

The mathematical formulation of the problem is as follows:

Min f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2)            (12)
s.t. g1(x) = τ(x) - τ_max ≤ 0                                   (12a)
     g2(x) = σ(x) - σ_max ≤ 0                                   (12b)
     g3(x) = x1 - x4 ≤ 0                                        (12c)
     g4(x) = 0.10471 x1^2 + 0.04811 x3 x4 (14 + x2) - 5 ≤ 0     (12d)
     g5(x) = 0.125 - x1 ≤ 0                                     (12e)
     g6(x) = δ(x) - δ_max ≤ 0                                   (12f)
     g7(x) = P - Pc(x) ≤ 0                                      (12g)
     0.1 ≤ x1, x4 ≤ 2.0                                         (12h)
     0.1 ≤ x2, x3 ≤ 10.0                                        (12i)

Table 7
Comparison of the best solutions for the tension/compression spring design problem.

Methods                         x1 (d)      x2 (D)      x3 (P)      f(x)
GA1 (Coello, 2000)              0.051480    0.351661    11.632201   0.0127048
GA2 (Coello & Montes, 2002)     0.051989    0.363965    10.890522   0.0126810
EP (Coello & Becerra, 2004)     0.050000    0.317395    14.031795   0.0127210
CPSO (He & Wang, 2006)          0.051728    0.357644    11.244543   0.0126747
HPSO (He & Wang, 2007)          0.051706    0.357126    11.265083   0.0126652
NM–PSO (Zahara & Kao, 2009)     0.051620    0.355498    11.333272   0.0126302
PSOLVER                         0.051686    0.356650    11.292950   0.0126652

Table 8
Statistical results for the tension/compression spring design problem.

Methods                         Best f(x)   Mean f(x)   Worst f(x)   Std. dev.   No. of function evaluations
GA1 (Coello, 2000)              0.0127048   0.0127690   0.0128220    3.94E-05    N/A
GA2 (Coello & Montes, 2002)     0.0126810   0.0127420   0.0129730    5.90E-05    80,000
EP (Coello & Becerra, 2004)     0.0127210   0.0135681   0.0151160    8.42E-04    N/A
CPSO (He & Wang, 2006)          0.0126747   0.0127300   0.0129240    5.20E-04    200,000
HPSO (He & Wang, 2007)          0.0126652   0.0127072   0.0127190    1.58E-05    81,000
NM–PSO (Zahara & Kao, 2009)     0.0126302   0.0126314   0.0126330    8.74E-07    80,000
PSOLVER                         0.0126652   0.0126652   0.0126652    2.46E-09    253

Fig. 6. The welded beam design problem.

where

τ(x) = sqrt(τ'^2 + 2 τ' τ'' x2 / (2R) + τ''^2)                                  (12j)
τ' = P / (sqrt(2) x1 x2),  τ'' = M R / J,  M = P (L + x2 / 2)                   (12k)
R = sqrt(x2^2 / 4 + ((x1 + x3) / 2)^2)                                          (12l)
J = 2 { sqrt(2) x1 x2 [ x2^2 / 12 + ((x1 + x3) / 2)^2 ] }                       (12m)
σ(x) = 6 P L / (x4 x3^2),  δ(x) = 4 P L^3 / (E x3^3 x4)                         (12n)
Pc(x) = (4.013 E sqrt(x3^2 x4^6 / 36) / L^2) (1 - (x3 / (2L)) sqrt(E / (4G)))   (12o)
P = 6000 lb,  L = 14 in.,  E = 30 × 10^6 psi,  G = 12 × 10^6 psi                (12p)
τ_max = 13,600 psi,  σ_max = 30,000 psi,  δ_max = 0.25 in.                      (12q)

The methods previously applied to this problem include GA1 (Coello, 2000), GA2 (Coello & Montes, 2002), EP (Coello & Becerra, 2004), CPSO (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Among those studies, the best solution was obtained by using NM–PSO (Zahara & Hu, 2008), with an objective function value of f(x) = 1.724717 after 80,000 function evaluations.

We applied the PSOLVER algorithm to this problem and obtained a best solution of f(x) = 1.724717. The comparison of the identified results is given in Tables 9 and 10, respectively. From Table 9, it can be seen that the best solution found by the PSOLVER is the same as that of NM–PSO and better than those obtained by the other methods. The standard deviation of the PSOLVER results is the smallest. Note that the average number of function evaluations of the PSOLVER is 297. Therefore, it can be said that the PSOLVER is the most efficient among these methods.

5. Example 9: The pressure vessel design problem

In the pressure vessel design problem, proposed by Kannan and Kramer (1994), the aim is to minimize the total cost, including the cost of material, forming, and welding. A cylindrical vessel is capped at both ends by hemispherical heads, as shown in Fig. 7. There are four design variables in this problem: Ts (x1, thickness of the shell), Th (x2, thickness of the head), R (x3, inner radius) and L (x4, length of the cylindrical section of the vessel). Among the four design variables, Ts and Th are expected to be integer multiples of 0.0625 in., while R and L are continuous variables.

Fig. 7. Pressure vessel design problem.

The problem can be formulated as follows (Kannan & Kramer, 1994):

Min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3    (13)
s.t. g1(x) = -x1 + 0.0193 x3 ≤ 0                                                (13a)
     g2(x) = -x2 + 0.00954 x3 ≤ 0                                               (13b)
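Substituting the reported best welded beam design back into Eqs. (12)–(12q) verifies the result. The script below is our illustrative check, not code from the paper; the active constraints (τ, σ and the buckling load) sit at their limits, so a little slack is allowed for the 6-digit rounding of the printed variables.

```python
import math

# Verify the PSOLVER welded beam design against Eqs. (12)-(12q).
h, l, t, b = 0.205830, 3.468338, 9.036624, 0.205730     # x1..x4
P, L, E, G = 6000.0, 14.0, 30e6, 12e6                   # (12p)
tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25  # (12q)

f = 1.10471 * h**2 * l + 0.04811 * t * b * (14 + l)     # (12)

M = P * (L + l / 2)                                               # (12k)
R = math.sqrt(l**2 / 4 + ((h + t) / 2)**2)                        # (12l)
J = 2 * (math.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2)**2))   # (12m)
tau_p = P / (math.sqrt(2) * h * l)                                # tau'
tau_pp = M * R / J                                                # tau''
tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp**2)  # (12j)
sigma = 6 * P * L / (b * t**2)                                    # (12n)
delta = 4 * P * L**3 / (E * t**3 * b)                             # (12n)
Pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36) / L**2) \
     * (1 - t / (2 * L) * math.sqrt(E / (4 * G)))                 # (12o)

assert tau <= tau_max + 5        # g1, active at the limit
assert sigma <= sigma_max + 5    # g2, active at the limit
assert h - b <= 1e-3             # g3, within rounding of the printed values
assert 0.10471 * h**2 + 0.04811 * t * b * (14 + l) - 5 <= 0  # g4
assert 0.125 - h <= 0            # g5
assert delta <= delta_max        # g6
assert P - Pc <= 5               # g7, active at the limit
assert abs(f - 1.724717) < 1e-3
```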

Table 9
Comparison of the best solutions for the welded beam design problem.

Methods                         x1 (h)      x2 (l)      x3 (t)      x4 (b)      f(x)
GA1 (Coello, 2000)              0.208800    3.420500    8.997500    0.210000    1.748309
GA2 (Coello & Montes, 2002)     0.205986    3.471328    9.020224    0.206480    1.728226
EP (Coello & Becerra, 2004)     0.205700    3.470500    9.036600    0.205700    1.724852
CPSO (He & Wang, 2006)          0.202369    3.544214    9.048210    0.205723    1.728024
HPSO (He & Wang, 2007)          0.205730    3.470489    9.033624    0.205730    1.724852
NM–PSO (Zahara & Kao, 2009)     0.205830    3.468338    9.033624    0.205730    1.724717
PSOLVER                         0.205830    3.468338    9.036624    0.205730    1.724717

Table 10
Statistical results for the welded beam design problem.

Methods                         Best f(x)   Mean f(x)   Worst f(x)   Std. dev.   No. of function evaluations
GA1 (Coello, 2000)              1.748309    1.771973    1.785835     1.12E-02    N/A
GA2 (Coello & Montes, 2002)     1.728226    1.792654    1.993408     7.47E-02    80,000
EP (Coello & Becerra, 2004)     1.724852    1.971809    3.179709     4.43E-01    N/A
CPSO (He & Wang, 2006)          1.728024    1.748831    1.782143     1.29E-02    200,000
HPSO (He & Wang, 2007)          1.724852    1.749040    1.814295     4.01E-02    81,000
NM–PSO (Zahara & Kao, 2009)     1.724717    1.726373    1.733393     3.50E-03    80,000
PSOLVER                         1.724717    1.724717    1.724717     1.62E-11    297

Table 11
Comparison of the best solutions for the pressure vessel design problem.

Methods                         x1 (Ts)   x2 (Th)   x3 (R)    x4 (L)     f(x)
GA1 (Coello, 2000)              0.8125    0.4375    40.3239   200.0000   6288.7445
GA2 (Coello & Montes, 2002)     0.8125    0.4375    42.0974   176.6540   6059.9463
CPSO (He & Wang, 2006)          0.8125    0.4375    42.0913   176.7465   6061.0777
HPSO (He & Wang, 2007)          0.8125    0.4375    42.0984   176.6366   6059.7143
NM–PSO (Zahara & Kao, 2009)     0.8036    0.3972    41.6392   182.4120   5930.3137
PSOLVER                         0.8125    0.4375    42.0984   176.6366   6059.7143

Table 12
Statistical results for the pressure vessel design problem.

Methods                         Best f(x)   Mean f(x)   Worst f(x)   Std. dev.   No. of function evaluations
GA1 (Coello, 2000)              6288.7445   6293.8432   6308.1497    7.413E+00   N/A
GA2 (Coello & Montes, 2002)     6059.9463   6177.2533   6469.3220    1.309E+02   80,000
CPSO (He & Wang, 2006)          6061.0777   6147.1332   6363.8041    8.645E+01   200,000
HPSO (He & Wang, 2007)          6059.7143   6099.9323   6288.6770    8.620E+01   81,000
NM–PSO (Zahara & Kao, 2009)     5930.3137   5946.7901   5960.0557    9.161E+00   80,000
PSOLVER                         6059.7143   6059.7143   6059.7143    4.625E-12   310
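Back-substituting the HPSO/PSOLVER solution of Table 11 into Eq. (13) reproduces the reported cost. This is an illustrative check (not from the paper); g1 and g3 are active at the optimum, so small tolerances absorb the 4-decimal rounding of the table values.

```python
import math

# Verify the PSOLVER pressure vessel design (Table 11) against Eq. (13).
Ts, Th, R, L = 0.8125, 0.4375, 42.0984, 176.6366   # x1..x4

f = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
     + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)     # (13)

g1 = -Ts + 0.0193 * R                                             # (13a), active
g2 = -Th + 0.00954 * R                                            # (13b)
g3 = -math.pi * R**2 * L - (4.0/3.0) * math.pi * R**3 + 1296000   # (13c), active
g4 = L - 240                                                      # (13d)

# Ts and Th must be integer multiples of 0.0625 in.
assert abs(Ts / 0.0625 - round(Ts / 0.0625)) < 1e-9
assert abs(Th / 0.0625 - round(Th / 0.0625)) < 1e-9

assert g1 <= 1e-3 and g2 <= 0 and g4 <= 0
assert g3 <= 10          # active; table values are rounded to 4 decimals
assert abs(f - 6059.7143) < 0.5
```

The same check applied to the NM–PSO row fails the 0.0625-multiple test, which is exactly the infeasibility argument made in the text.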

     g3(x) = -π x3^2 x4 - (4/3) π x3^3 + 1,296,000 ≤ 0    (13c)
     g4(x) = x4 - 240 ≤ 0                                 (13d)
     0 ≤ x1, x2 ≤ 100                                     (13e)
     10 ≤ x3, x4 ≤ 200                                    (13f)

This problem has been solved before by using the previously mentioned GA1 (Coello, 2000), GA2 (Coello & Montes, 2002), CPSO (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Their best solutions are compared against those produced by the PSOLVER in Tables 11 and 12, respectively.

In this problem, the decision variables x1 and x2 are expected to be integer multiples of 0.0625 in. The previous best solutions obtained by the other methods, except NM–PSO, satisfy this constraint. As can be seen from Table 11, the values of x1 and x2 given for NM–PSO are not integer multiples of 0.0625 in. Therefore, the HPSO and PSOLVER methods give the best results, considering that the NM–PSO solution is not feasible for this problem. It should be noted that the PSOLVER requires only about 310 function evaluations to obtain a feasible solution, while GA2, HPSO, CPSO, and NM–PSO require 80,000, 81,000, 200,000, and 80,000 fitness function evaluations, respectively. In addition, the standard deviation of the PSOLVER results is the smallest. Considering the statistical and comparative results, it can be concluded that PSOLVER is more efficient than the other methods for the pressure vessel design problem.

6. Conclusion

In this study, a new hybrid global–local optimization algorithm is proposed which combines PSO with the spreadsheet "Solver" for solving continuous optimization problems. In the proposed PSOLVER algorithm, PSO is used as a global optimizer and Solver as a local optimizer. During the optimization process, PSO and Solver work mutually by feeding each other with initial and sub-initial solution points. For this purpose, a VBA code has been developed on the background of an Excel spreadsheet to integrate the PSO and Solver processes. The main advantages of the PSOLVER over the standard PSO algorithm are first demonstrated in a comparative study, and then six constrained and three engineering design problems are solved using the PSOLVER algorithm. The results showed that the proposed algorithm provides better solutions than other heuristic and non-heuristic optimization techniques in terms of both objective function values and number of function evaluations. The most important contribution of the proposed hybrid PSOLVER algorithm is that it requires far fewer iterations than the other solution approaches. Although spreadsheet applications may require long run times when data processing is rendered on the PC screen, this deficiency can be overcome by deactivating the screen-updating property during the optimization process. Finally, the PSOLVER algorithm may be useful for real-world optimization problems which require significant computational effort.

References

Abido, M. A. (2002). Optimal power flow using particle swarm optimization. International Journal of Electrical Power and Energy Systems, 24(7), 563–571.
Arora, J. S. (1989). Introduction to optimum design. New York: McGraw-Hill.
Ayvaz, M. T., Kayhan, A. H., Ceylan, H., & Gurarslan, G. (2009). Hybridizing the harmony search algorithm with a spreadsheet solver for solving continuous engineering optimization problems. Engineering Optimization, 41(12), 1119–1144.
Baumgartner, U., Magele, Ch., & Renhart, W. (2004). Pareto optimality and particle swarm optimization. IEEE Transactions on Magnetics, 40(2), 1172–1175.
Becerra, R. L., & Coello, C. A. C. (2006). Cultured differential evolution for constrained optimization. Computer Methods in Applied Mechanics and Engineering, 195(33–36), 4303–4322.
Brandstätter, B., & Baumgartner, U. (2002). Particle swarm optimization - mass spring system analogon. IEEE Transactions on Magnetics, 38(2), 997–1000.
Chootinan, P., & Chen, A. (2006). Constraint handling in genetic algorithms using a gradient-based repair method. Computers and Operations Research, 33(8), 2263–2281.
Ciuprina, G., Loan, D., & Munteanu, I. (2002). Use of intelligent-particle swarm optimization in electromagnetics. IEEE Transactions on Magnetics, 38(2), 1037–1040.
Cockshott, A. R., & Hartman, B. E. (2001). Improving the fermentation medium for Echinocandin B production part II: Particle swarm optimization. Process Biochemistry, 36, 661–669.
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41, 113–127.
Coello, C. A. C., & Becerra, R. L. (2004). Efficient evolutionary optimization through the use of a cultural algorithm. Engineering Optimization, 36(2), 219–236.
Coello, C. A. C., & Montes, E. M. (2002). Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16, 193–203.
Dorigo, M., & Di Caro, G. (1999). Ant colony optimisation: A new meta-heuristic. In Proceedings of the congress on evolutionary computation (Vol. 2, pp. 1470–1477).
Eberhart, R. C., & Hu, X. (1999). Human tremor analysis using particle swarm optimization. In Proceedings of the congress on evolutionary computation, Washington, DC, USA (pp. 1927–1930).
Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proc. of the sixth int. symp. micro machine and human science, Nagoya, Japan (pp. 39–43).

Eberhart, R. C., & Shi, Y. (2001). Tracking and optimizing dynamic systems with particle swarms. In Proceedings of congress on evolutionary computation, Seoul, Korea (pp. 27–30).
Elegbede, C. (2005). Structural reliability assessment based on particle swarm optimization. Structural Safety, 27(2), 171–186.
Fan, S. S., Liang, Y. C., & Zahara, E. (2004). Hybrid simplex search and particle swarm optimization for the global optimization of multimodal functions. Engineering Optimization, 36(4), 401–418.
Fan, S. S., & Zahara, E. (2007). A hybrid simplex search and particle swarm optimization for unconstrained optimization. European Journal of Operational Research, 181, 527–548.
Ferreira, E. N. C., & Salcedo, R. (2001). Can spreadsheet solvers solve demanding optimization problems. Computer Applications in Engineering Education, 9(1), 49–56.
Fesanghary, M., Mahdavi, M., Minary-Jolandan, M., & Alizadeh, Y. (2008). Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Computer Methods in Applied Mechanics and Engineering, 197, 3080–3091.
Fourie, P. C., & Groenwold, A. A. (2002). The particle swarm optimization algorithm in size and shape optimization. Structural and Multidisciplinary Optimization, 23(4), 259–267.
Frontline Systems Inc. (1999). A tutorial on spreadsheet optimization.
Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76(2), 60–68.
Ghaffari-Miab, M., Farmahini-Farahani, A., Faraji-Dana, R., & Lucas, C. (2007). An efficient hybrid swarm intelligence-gradient optimization method for complex time Green's functions of multilayer media. Progress in Electromagnetics Research, 77, 181–192.
Glover, F. (1977). Heuristic for integer programming using surrogate constraints. Decision Sciences, 8(1), 156–166.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.
Hedar, A. D., & Fukushima, M. (2006). Derivative-free filter simulated annealing method for constrained continuous global optimization. Journal of Global Optimization, 35(4), 521–549.
He, Q., & Wang, L. (2006). An effective co-evolutionary particle swarm optimization for engineering optimization problems. Engineering Applications of Artificial Intelligence, 20, 89–99.
He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility based rule for constrained optimization. Applied Mathematics and Computation, 186, 1407–1422.
Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. Ann Arbor: University of Michigan Press.
Houck, C. R., Joines, J. A., & Kay, M. G. (1996). Comparison of genetic algorithms, random start, and two-opt switching for solving large location–allocation problems. Computers and Operations Research, 23(6), 587–596.
Houck, C. R., Joines, J. A., & Wilson, J. R. (1997). Empirical investigation of the benefits of partial Lamarckianism. Evolutionary Computation, 5(1), 31–60.
Hu, X., & Eberhart, R. C. (2001). Tracking dynamic systems with PSO: Where's the cheese? In Proceedings of workshop on particle swarm optimization, Indianapolis, USA.
Kannan, B. K., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116, 318–320.
Kazuhiro, I., Shinji, N., & Masataka, Y. (2006). Hybrid swarm optimization techniques incorporating design sensitivities. Transactions of the Japan Society of Mechanical Engineers, 72(719), 2264–2271.
Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks, Piscataway, USA (pp. 1942–1948).
Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarm algorithm. In Proc. IEEE int. conf. systems, man, and cybernetics (Vol. 5, pp. 4104–4108).
Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Koziel, S., & Michalewicz, Z. (1999). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7, 19–44.
Lasdon, L. S., Waren, A. D., Jain, A., & Ratner, M. (1978). Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software, 4(1), 34–49.
Lee, K. S., & Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194, 3902–3933.
Michalewicz, Z. (1992). Genetic algorithms + data structures = evolution programs. New York: Springer-Verlag.
Microsoft. (1995). Microsoft Excel – Visual Basic for applications. Washington: Microsoft Press.
OTC. (2009). Optimization Technology Center's Web site (online). <http://www-fp.mcs.anl.gov/OTC/Guide/SoftwareGuide/Blurbs/grg2.html> (accessed 31.03.2009).
Perez, R. E., & Behdinan, K. (2007). Particle swarm approach for structural design optimization. Computers and Structures, 85, 1579–1588.
Rosen, E. M. (1997). Visual Basic for applications, Add-Ins and Excel 7.0. CACHE News (Vol. 45, pp. 1–3).
Runarsson, T. P., & Yao, X. (2000). Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3), 284–292.
Runarsson, T. P., & Yao, X. (2005). Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, 35(2), 233–243.
Salerno, J. (1997). Using particle swarm optimization technique to train a recurrent neural model. In Proc. of the ninth IEEE int. conf. tools and artificial intelligence, USA (pp. 45–49).
Salman, A., Ahmad, I., & Al-Madani, S. (2002). Particle swarm optimization for task assignment problem. Microprocessors and Microsystems, 26, 363–371.
Shannon, M. W. (1998). Evolutionary algorithms with local search for combinatorial optimization. PhD thesis, University of California, San Diego.
Shi, Y., & Eberhart, R. C. (1998). Parameter selection in particle swarm optimization. Evolutionary Programming VII. Lecture Notes in Computer Science (Vol. 1447). Berlin: Springer.
Slade, W. H., Ressom, H. W., Musavi, M. T., & Miller, R. L. (2004). Inversion of ocean color observations using particle swarm optimization. IEEE Transactions on Geoscience and Remote Sensing, 42(9), 1915–1923.
Stokes, L., & Plummer, J. (2004). Using spreadsheet solvers in sample design. Computational Statistics and Data Analysis, 44(3), 527–546.
Tandon, V. (2000). Closing the gap between CAD/CAM and optimized CNC end milling. MSc thesis, Purdue School of Engineering and Technology, Indianapolis, USA.
Tandon, V., El-Mounayri, H., & Kishawy, H. (2002). NC end milling optimization using evolutionary computation. International Journal of Machine Tools and Manufacture, 42(5), 595–605.
Van Den Bergh, F., & Engelbrecht, A. P. (2000). Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal, 26, 84–90.
Venter, G., & Sobieszczanski-Sobieski, J. (2004). Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Structural and Multidisciplinary Optimization, 26(1–2), 121–131.
Victoire, T. A., & Jeyakumar, A. E. (2004). Hybrid PSO–SQP for economic dispatch with valve-point effect. Electric Power Systems Research, 71(1), 51–59.
Wikipedia. (2009). The free encyclopedia web site (online). <http://www.en.wikipedia.org/wiki/Particle_swarm_optimization> (accessed 31.03.2009).
Yoshida, H., Fukuyama, Y., Takayama, S., & Nakanishi, Y. (1999). A particle swarm optimization for reactive power and voltage control in electric power systems considering voltage security assessment. In Proc. IEEE int. conf. systems, man, and cybernetics, Tokyo, Japan (pp. 497–502).
Zahara, E., & Hu, C. H. (2008). Solving constrained optimization problems with hybrid particle swarm optimization. Engineering Optimization, 40(11), 1031–1049.
Zahara, E., & Kao, Y. T. (2009). Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36, 3880–3886.
