Applied Soft Computing 28 (2015) 259–275
A new modification approach on bat algorithm for solving
optimization problems
Selim Yılmaz a,∗, Ecir U. Küçüksille b
a Hacettepe University, Engineering Faculty, Department of Computer Engineering, Ankara 06800, Turkey
b Süleyman Demirel University, Engineering Faculty, Department of Computer Engineering, Isparta 32260, Turkey
∗ Corresponding author. Tel.: +90 3122977500. E-mail addresses: selimy@hacettepe.edu.tr (S. Yılmaz), ecirkucuksille@sdu.edu.tr (E.U. Küçüksille).
Article info
Article history:
Received 18 March 2014
Received in revised form 24 September 2014
Accepted 25 November 2014
Available online 11 December 2014
Keywords:
Heuristics
Bat algorithm
Real-world problems
Unconstrained problems
Abstract
Optimization can be defined as an effort of generating solutions to a problem under bounded circum-
stances. Optimization methods have arisen from a desire to utilize existing resources in the best possible
way. An important class of optimization methods is heuristic algorithms. Heuristic algorithms have gen-
erally been proposed by inspiration from the nature. For instance, Particle Swarm Optimization has been
inspired by social behavior patterns of fish schooling or bird flocking. Bat algorithm is a heuristic algorithm
proposed by Yang in 2010 and has been inspired by a property, named as echolocation, which guides
the bats’ movements during their flight and hunting even in complete darkness. In this work, local and
global search characteristics of bat algorithm have been enhanced through three different methods. To
validate the performance of the Enhanced Bat Algorithm (EBA), standard test functions and constrained
real-world problems have been employed. The results obtained on these test sets prove EBA superior to the standard algorithm. Furthermore, the method proposed in this study is compared with recently published studies on real-world problems and proves more effective than those studies on this sort of problem.
© 2014 Elsevier B.V. All rights reserved.
1. Introduction
Optimization is an effort of obtaining the optimal solution of a
problem under given circumstances. The crucial task of optimiza-
tion is to minimize wasted time or maximize desired benefit of a
given engineering system. All systems that are to be optimized have
an objective function and several decision variables that affect the
function [1].
Optimization methods can be defined as a process of achieving
optimal solutions that satisfy a given objective function [2].
Optimization algorithms are generally divided into two groups as
deterministic and stochastic algorithms. Deterministic algorithms do not contain any operators that cause randomness; this type of algorithm produces the same result as long as its initial conditions remain constant. Stochastic algorithms, on the other hand, tend to produce different solutions at each run, even when their initial conditions remain constant, due to their random nature. Most deterministic algorithms use gradient information
and these algorithms are ideal for unimodal functions having one
global optimum, while they might be troublesome for multimodal
functions having several local optima or functions that include
flat regions where the gradient is small. Stochastic algorithms are
preferred for such functions as they can escape from local minima
easily in spite of their slow convergence speed [3,4].
Stochastic algorithms are categorized into two groups: heuristic and metaheuristic algorithms. Heuristic refers to algorithms that produce high-quality results by trial and error within an acceptable computational time. The prefix meta means “beyond, in an upper level”, so the term metaheuristic refers to a higher level of heuristics. Studies in the literature tend to refer to all new stochastic algorithms as metaheuristic [5,6]. Heuristic algorithms are generally inspired by nature, hence they are also called nature-inspired algorithms. The flexible and easily applicable structure of these algorithms has made them very popular in recent years.
Swarm algorithms, regarded as a subset of heuristic algorithms, have been developed by drawing inspiration from the various types of collaborative behavior that swarms exhibit while carrying out a task [7]. There is ample literature on this type of algorithm. Kennedy and Eberhart proposed Particle Swarm Optimization (PSO), inspired by the social and cognitive behavior of fish or bird swarms [8].
http://dx.doi.org/10.1016/j.asoc.2014.11.029
1568-4946/© 2014 Elsevier B.V. All rights reserved.
Dorigo et al., inspired by the foraging behavior of ants, proposed Ant Colony Optimization (ACO) [9]. Chu et al. introduced Cat Swarm Optimization (CSO) by observing the behavior of cats as they trace and catch their prey [10]. Yang et al. proposed the Cuckoo Search (CS) algorithm, inspired by the obligate brood parasitism of some cuckoo species [11]. Pan proposed the Fruit Fly Optimization (FFO) algorithm based on the food acquisition behavior of fruit flies [12].
Krishnanand and Ghose were inspired by the luminescence capabil-
ity of glowworms and they transformed it into Glowworm Swarm
Optimization algorithm (GSO) [13]. Gandomi and Alavi proposed
an algorithm named Krill Herd (KH) based on the clever herding
behavior of krill individuals [14]. Duan and Qiao, inspired by the homing behavior of pigeons, introduced Pigeon-Inspired Optimization
(PIO) algorithm [15]. Bansal et al. proposed a new swarm algorithm
named Spider Monkey Optimization (SMO) [16]. Sur et al. trans-
formed food acquisition behavior of the Egyptian vulture into an
algorithm they named Egyptian Vulture Optimization Algorithm
(EVOA) [17].
Bat algorithm (BA), proposed by Yang in 2010, is also a swarm-based metaheuristic algorithm, inspired by a property known as echolocation. Echolocation is a type of sonar that guides bats in their flying and hunting behavior. Thanks to this capability, bats can not only move but also distinguish different types of insects even in complete darkness [18].
There are two crucial components that affect the search characteristic of an algorithm: exploration (also called diversification) and exploitation (also called intensification). Exploration is the capability of an algorithm to find promising solutions by seeking various unknown regions, while exploitation improves on solutions obtained by exploration. Exploration can move solutions away from regions where they are stuck, while exploitation increases the convergence speed of an algorithm [19]. Many studies in the literature indicate that an algorithm's exploration capability should be employed first, so that the algorithm scans the whole search space, and its exploitation capability should be employed later, to improve on the solutions obtained by exploration toward the end of the optimization process [20].
BA is a very powerful algorithm and produces robust solutions
on low dimensional functions but its performance diminishes as
the dimension of the problem increases [21]. This study aims to improve the exploration and exploitation properties of the algorithm. For this purpose, two modification structures have been embedded into the bat algorithm and it has been hybridized with the Invasive Weed Optimization [22] algorithm.
In order to verify the superiority of the proposed method, Enhanced
Bat Algorithm (EBA) has been compared with the standard BA and
results of Genetic Algorithm (GA) [23] in terms of optimization
quality within negligible CPU time on 50 unconstrained benchmark
test functions with continuous variables. Furthermore, superiority
of EBA has been measured by comparing it with some exist-
ing improved versions of BA. EBA has also been compared with
some studies in the literature on three well known constrained
real-world engineering problems; welded beam, spring design and
pressure vessel with continuous and discrete variables taken from
[24,25]. The results obtained from the unconstrained benchmark test functions show that the proposed method is superior to the standard algorithm. The enhanced algorithm also proves competitive with, and better than most of, the algorithms suggested by other studies on real-world problems.
The organization of this paper is as follows: a literature survey of BA is given in Section 2; BA and EBA are described in Sections 3 and 4, respectively; Section 5 introduces the unconstrained and constrained benchmark results; finally, the contribution of the proposed method on unconstrained and constrained problems is demonstrated in Section 6.
2. Literature review
Although BA was proposed recently, there are many variants of
BA in the literature introduced as modification or implementation
studies for different sorts of problems. From the modification point
of view, Gandomi and Yang embedded a chaos mechanism into the bat algorithm to enhance its global search behavior and optimized unconstrained functions with different chaotic maps [26]. Fister et al. hybridized BA to overcome its deficiency, especially on higher dimensional problems [21]. Wang and Guo also hybridized BA with the Harmony Search Algorithm [27,28]. Nakamura et al. introduced a discrete version of the bat algorithm to solve classification and feature selection problems, presenting the superiority of BA over well-known swarm based techniques [29]. Li and Zhou presented a new complex-valued bat algorithm to increase the diversity of the population, thus improving its exploration capability [30]. Ali proposed a new metaheuristic method based on the bat algorithm for optimal design of Power System Stabilizers in a multi-machine environment [31]. Hasançebi et al. proposed a new algorithm that makes use of BA for structural optimization [32]. Hasançebi and Carbas solved the problem of discrete size optimization of steel frames designed for minimum weight by a BA inspired method and compared the results with other metaheuristics [33]. Lin et al. proposed a chaotic bat algorithm using Lévy flights and chaotic maps for parameter estimation of dynamic biological systems [34]. Besides, the search capabilities of BA were improved in the studies [35–44]. As for implementation studies, Yang and Gandomi presented the superiority of the bat algorithm over the studies
in the literature on well-known constrained benchmark functions
[45]. Gandomi et al. used BA to solve constrained problems, addressing both well-known benchmark and real-world problems [46]. Yang et al. compared the efficiency of BA with the so-called intermittent search methods [47]. Peres et al. compared BA with other compelling metaheuristic algorithms on the power system stabilizer tuning problem [48]. Sathya and Ansari employed a BA based dual mode PI controller to tune the PI controller parameters in a multi-area interconnected thermal power system [49]. Rodrigues et al. presented a feature selection approach based on BA [50]. Bora et al. optimized mono- and multi-objective brushless DC wheel motor problems by BA and compared the results with other optimization approaches [51]. Biswal et al. employed the bat algorithm to optimize the operating cost of a thermal power plant [52]. Apart from those, BA was used to solve different kinds of problems in various other studies [53–59].
3. Bat algorithm
Bat algorithm (BA) is a heuristic algorithm proposed by Yang in 2010. It is based on the echolocation capability of microbats, which guides them in their foraging behavior [18].
3.1. Echolocation capability of bats
Most bat species use a type of sonar called echolocation to communicate, recognize different types of insects, sense the distance to their prey, and move without hitting any obstacle even in complete darkness.
All animals that use echolocation, including bats, emit pulses. These pulses contain frequencies ranging from high pitch (>200 kHz) to low pitch (∼10 kHz). Upon hitting the objects or prey around a bat, the pulses form echoes. The bat listens to an echo and then analyzes and evaluates the information encoded in it [60].
The echolocation characteristics are idealized within the framework of the following rules, based on these features of bats:
• All bats use echolocation to sense distance, and they also “know”
the difference between food/prey and background barriers in
some magical way.
• Bats fly randomly with velocity vi at position xi with a frequency
fmin, varying wavelength and loudness A0 to search for prey. They
can automatically adjust the wavelength (or frequency) of their
emitted pulses and adjust the rate of pulse emission r ∈ [0, 1],
depending on the proximity of their target.
• Although the loudness can vary in many ways, we assume that the
loudness varies from a large (positive) A0 to a minimum constant
value Amin.
3.2. The structure of bat algorithm
(a) Initialization of the bat population. The search space is assumed to be a region that contains many prey sources. The algorithm tends to find the highest or optimum quality food in the search space. Because the locations of food sources are not known, the initial population is randomly generated from real-valued vectors with dimension d and number N, taking into account the lower and upper boundaries. Then the quality of the food sources located by the population is evaluated.
xij = xmin + ϕ(xmax − xmin) (1)
where i = 1, 2, . . ., N, j = 1, 2, . . ., d, xmax and xmin are upper and
lower boundaries for dimension j, respectively. ϕ is a randomly
generated value ranging from 0 to 1.
(b) Generation of frequency, velocity and new solutions. Evaluated
fitness values of all bats influence their movements. Bats fly
with velocity vi which is affected by a randomly predefined fre-
quency f. Finally they locate their new position xi in the search
space.
fi = fmin + β(fmax − fmin) (2)
vi^t = vi^(t−1) + (xi^t − x*) fi (3)
xi^t = xi^(t−1) + vi^t (4)
where fi is the frequency value of the ith bat; fmin and fmax are the minimum and maximum frequency values, respectively; β is a randomly generated value; x* is the global best location (solution) obtained after comparing all solutions among the N bats so far; and vi^t is the velocity of the ith bat at the tth time step.
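The update rules in Eqs. (2)–(4) can be sketched as a vectorized NumPy step. This is an illustrative sketch, not code from the paper; all function and parameter names are assumptions.

```python
import numpy as np

def move_bats(x, v, x_best, f_min=0.0, f_max=1.0, seed=None):
    """One movement step per bat, following Eqs. (2)-(4)."""
    rng = np.random.default_rng(seed)
    beta = rng.random(len(x))                 # beta ~ U(0, 1), one draw per bat
    f = f_min + beta * (f_max - f_min)        # Eq. (2): per-bat frequency
    v_new = v + (x - x_best) * f[:, None]     # Eq. (3): velocity update toward/away from x*
    x_new = x + v_new                         # Eq. (4): position update
    return x_new, v_new
```

Because the frequency is redrawn each iteration, each bat takes a randomized step size along its offset from the global best solution.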
(c) Local search capability of the algorithm. In order to improve the local search capability of the algorithm, Yang introduced a random walk structure that lets a bat improve a solution in the neighborhood of an already obtained one.
x_new = x_old + ε Ā^t (5)
where x_old is a high quality solution chosen by some mechanism (e.g. roulette wheel), Ā^t is the average loudness value of all bats at the tth time step, and ε is a randomly generated value ranging from −1 to 1.
(d) Loudness and pulse emission rate. The loudness A and pulse emis-
sion rate r are updated as a bat gets closer to its target, namely
its prey. Loudness A is decreased while pulse emission rate r
is increased with respect to Eqs. (6) and (7), respectively (see
Fig. 1).
Ai^(t+1) = α Ai^t (6)
ri^(t+1) = ri^0 (1 − e^(−γt)) (7)
where γ and α are constants, and ri^0 is the initial pulse emission rate value of the ith bat.
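Eqs. (6) and (7) amount to a small per-bat update; a sketch follows, with α = γ = 0.9 assumed as typical values (the names are illustrative, not from the paper).

```python
import math

def update_loudness_and_rate(A_i, r0_i, t, alpha=0.9, gamma=0.9):
    """Eq. (6): loudness decays geometrically toward zero.
    Eq. (7): pulse emission rate rises from 0 toward its initial value r0_i."""
    A_next = alpha * A_i                           # Eq. (6)
    r_next = r0_i * (1.0 - math.exp(-gamma * t))   # Eq. (7)
    return A_next, r_next
```

Applied repeatedly, this reproduces the trends of Fig. 1: loudness shrinks while the pulse emission rate saturates at r0_i.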
Pseudo-code and flow chart of the algorithm are given in
Algorithm 1 and Fig. 2, respectively.
Algorithm 1. Pseudo code of the bat algorithm
1 Initialize bat population xi and velocity vi
2 Define frequency fi
3 Initialize pulse emission rate ri and loudness Ai
4 repeat
5   Generate new solutions by adjusting frequency and updating velocity and location by Eqs. (2) to (4)
6   if rand > ri then
7     Select a solution among the best solutions
8     Generate a new local solution around the selected best solution
9   end
10  Generate a new solution by flying randomly
11  if rand < Ai and f(xi) < f(x*) then
12    Accept the new solution
13    Decrease Ai and increase ri by Eqs. (6) and (7)
14  end
15  Rank the bats and find the current best x*
16 until termination criterion is met
17 Postprocess results and visualization
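Putting Eqs. (1)–(7) and Algorithm 1 together, a minimal NumPy sketch of the standard BA might look as follows. It is illustrative only: the local walk uses the global best as x_old, the acceptance test compares against each bat's own stored fitness (a common greedy variant; Algorithm 1 writes the comparison against f(x*)), and all names and defaults are assumptions rather than code from the paper.

```python
import numpy as np

def bat_algorithm(f, dim, x_min, x_max, n_bats=50, n_iter=1000,
                  f_min=0.0, f_max=1.0, A0=0.95, r0=0.85,
                  alpha=0.9, gamma=0.9, seed=None):
    """Sketch of Algorithm 1 for minimization; defaults follow Table 7 where given."""
    rng = np.random.default_rng(seed)
    x = x_min + rng.random((n_bats, dim)) * (x_max - x_min)   # Eq. (1)
    v = np.zeros((n_bats, dim))
    A = np.full(n_bats, A0)          # loudness per bat
    r = np.zeros(n_bats)             # pulse rate, grows toward r0 via Eq. (7)
    fit = np.array([f(xi) for xi in x])
    best = x[fit.argmin()].copy()
    for t in range(1, n_iter + 1):
        freq = f_min + rng.random(n_bats) * (f_max - f_min)   # Eq. (2)
        v = v + (x - best) * freq[:, None]                    # Eq. (3)
        x_new = np.clip(x + v, x_min, x_max)                  # Eq. (4)
        for i in range(n_bats):
            if rng.random() > r[i]:                           # line 6: local random walk
                eps = rng.uniform(-1.0, 1.0, dim)
                x_new[i] = np.clip(best + eps * A.mean(), x_min, x_max)  # Eq. (5)
            fi = f(x_new[i])
            if rng.random() < A[i] and fi < fit[i]:           # line 11 (greedy variant)
                x[i], fit[i] = x_new[i], fi
                A[i] *= alpha                                 # Eq. (6)
                r[i] = r0 * (1.0 - np.exp(-gamma * t))        # Eq. (7)
        best = x[fit.argmin()].copy()
    return best, float(fit.min())
```

Because a bat's stored solution is replaced only when the candidate improves it, the best recorded cost is non-increasing over iterations.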
4. Enhanced Bat Algorithm
The standard bat algorithm is weak at both exploration and exploitation. In order to tackle this problem, three different improvement structures, called IS1, IS2 and IS3, have been proposed for the original algorithm.
4.1. Inertia weight modification (IS1)
The update processes of velocity and location in the algorithm
have some similarities with PSO [8], and the standard bat algorithm has some deficiencies as in PSO. In order to overcome this issue, the following modification structure, inspired by the study in [61], is proposed. When the velocity equation (Eq. (3)) is analyzed, it can be seen that the equation consists of two parts. The first term of the equation is a factor that defines the velocity of the population, namely the step
Fig. 1. The changes of r and A with iterations.
Fig. 2. Flowchart of bat algorithm.
size, while the second term is another factor affecting the velocity
of the ith solution with guidance of the best solution (x*).
The first and second terms of the equation contribute to the
algorithm so that it performs global and local search, respectively.
If only the first term of Eq. (3) affects the solutions, they may be observed to overfly the search space, keeping their velocities and directions, which rapidly reduces their convergence speed. If, on the other hand, only the second term of Eq. (3) affects the solutions, they may be observed to converge to a region somewhere around the global best solution (x*), and hence they may face the premature convergence problem.
The main purpose of this modification is to intensify the first term of the equation at the beginning of the optimization process and the second term toward the end of it. The modified equation is given below.
vi^t = ω vi^(t−1) + (xi^t − x*) fi (8)
where ω is the inertia weight factor that balances the global and local search intensity of the ith solution by controlling the magnitude of the old velocity v.
This modification structure was also utilized in [62], and its
superiority to bat algorithm was proven with unconstrained bench-
mark test functions.
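The IS1 change to Eq. (3) is a one-line modification; a sketch, with illustrative names:

```python
def velocity_is1(v_prev, x, x_best, f_i, omega):
    """Eq. (8): the inertia weight omega damps the old velocity, shifting
    the balance from global search (large omega) to local search (small omega)."""
    return omega * v_prev + (x - x_best) * f_i
```

In EBA, ω itself is scheduled to decrease from 0.9 to 0.2 over the run (Eq. (14) and Table 7), so early iterations favor the inherited velocity and later ones favor the best-guided term.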
4.2. Distribution of the population modification (IS2)
As explained before, the second term of Eq. (3) provides local search with the guidance of the best solution in the standard algorithm. Exclusive use of this term may cause a premature convergence problem, so that solutions get stuck at a local minimum. The inertia weight factor has been proposed to deal with this issue. However, when the best solution is near a local minimum toward the end of the optimization process, the ith solution has no chance to get away from that undesired local minimum, as its movement then depends entirely on the best solution. This modification is illustrated in Fig. 3.
Fig. 3 shows a case in which the best solution in the population gets stuck at a local minimum. In such a case, if the ith solution moves by regarding only the best solution, it can also converge to that local minimum and thus cannot tend toward promising regions. Provided that the kth solution also affects the ith solution, the algorithm produces better solutions.
For this purpose, the velocity equation has been modified so that the kth solution can also affect the ith solution. The modified equation is
vi^t = ω vi^(t−1) + (xi^t − x*) fi ζ1 + (xi^t − xk^t) fi ζ2 (9)
ζ1 + ζ2 = 1 (10)
where xk is one of the best solutions randomly chosen among the population (i ≠ k), and ζ1 is a learning factor ranging from 0 to 1. As the value of ζ1 increases, the effect of the best solution (x*) becomes higher than that of the kth solution, and vice versa. The ζ1 value has to be updated
Fig. 3. Distribution of the population toward the end of the iteration on the Schwefel function.
Fig. 4. The changes of ζ1 and ζ2 with iterations.
as iterations proceed in order that the solution can switch from
global to local search.
ζ1 = 1 + (ζinit − 1) (itermax − iter)^n / (itermax)^n (11)
where ζinit is the initial impact factor of ζ1, itermax is the maximal number of iterations, iter is the current iteration number and n is a nonlinear modulation index. The states of ζ1 and ζ2 are demonstrated in Fig. 4 for ζinit = 0.6 and n = 3.
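Eqs. (9)–(11) can be sketched as follows; the names are illustrative, and ζ2 is obtained from Eq. (10) as 1 − ζ1:

```python
def zeta1(it, iter_max, zeta_init=0.6, n=3):
    """Eq. (11): zeta_1 grows from zeta_init toward 1 as iterations proceed."""
    return 1.0 + (zeta_init - 1.0) * (iter_max - it) ** n / iter_max ** n

def velocity_is2(v_prev, x, x_best, x_k, f_i, omega, z1):
    """Eq. (9): both the global best x* and a randomly chosen good solution
    x_k pull the ith bat; Eq. (10) fixes zeta_2 = 1 - zeta_1."""
    return (omega * v_prev
            + (x - x_best) * f_i * z1
            + (x - x_k) * f_i * (1.0 - z1))
```

Early in the run the x_k term carries weight 1 − ζinit and keeps the population diverse; by the final iteration ζ1 reaches 1 and the update reduces to Eq. (8).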
4.3. Hybridization with invasive weed optimization (IS3)
When the pseudo code of the algorithm is analyzed (see Algorithm 1), it is easily seen that local search is performed only by the solutions that satisfy the condition written in line 6. In other words, a solution performs local search when a randomly generated value ranging from 0 to 1 is greater than its pulse emission rate. As expressed before, the pulse emission rate r increases as iterations proceed (Fig. 1), so the local search ability of the algorithm weakens over time. In order to prevent this loss of local search capability, the algorithm has been hybridized (as a low-level relay hybrid) with the Invasive Weed Optimization (IWO) algorithm, which has a good exploration capability [22].
Fig. 5. Seed production in IWO.
After the solutions perform their search process, they form the population of IWO. They then fulfill all the steps of IWO introduced below, except the initialization phase of the population.
4.3.1. Invasive weed optimization
The invasive weed optimization algorithm is a heuristic algorithm inspired by invasive weed colonies.
Some rules are considered by mimicking the colonizing behavior
of invasive weeds:
• A finite number of seeds are spread over the search space (initial-
ization of population).
• Each seed flourishes to flowering plant and produces seeds with
respect to its fitness value (reproduction)(see Fig. 5).
s = ⌊ smin + (smax − smin) (ui − uworst) / (ubest − uworst) ⌋ (12)
where smax and smin are the maximum and the minimum number of seeds to be produced, respectively, uworst and ubest indicate the worst and the best fitness values, respectively, and ui indicates the fitness value of the ith solution.
• The produced seeds are randomly spread over the space by Eq.
(13).
σiter = ((itermax − iter)^n / (itermax)^n) (σinitial − σfinal) + σfinal (13)
where itermax is the maximum number of iterations, iter is the current iteration number, the initial and final standard deviation values are represented by σinitial and σfinal, and n is a nonlinear modulation index.
The dispersion of newly produced seeds during 500 iterations is demonstrated in Fig. 6. As seen in the figure, the distances of the seeds to their parent (located at the origin) reduce as iterations proceed.
• This procedure continues until the maximum population number
(pmax) is reached; now, only the better fitness valued seeds can
survive, the others are eliminated (see Fig. 7).
• The algorithm runs until the termination criterion is met.
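The reproduction and dispersal rules above (Eqs. (12) and (13)) can be sketched as below. The names, and the Gaussian dispersal around the parent, are assumptions based on the description; they are not code from the paper.

```python
import numpy as np

def n_seeds(u_i, u_best, u_worst, s_min=0, s_max=4):
    """Eq. (12): better fitness -> more seeds (minimization: u_best <= u_worst)."""
    frac = (u_i - u_worst) / (u_best - u_worst)
    return int(np.floor(s_min + (s_max - s_min) * frac))

def sigma_iter(it, iter_max, sigma_initial, sigma_final, n=3):
    """Eq. (13): the dispersal standard deviation shrinks nonlinearly."""
    return ((iter_max - it) ** n / iter_max ** n) * (sigma_initial - sigma_final) + sigma_final

def reproduce(parent, count, sigma, rng):
    """Spread `count` seeds around `parent` with standard deviation `sigma`."""
    return parent + rng.normal(0.0, sigma, size=(count, parent.size))
```

Early iterations scatter seeds widely (exploration); as σ shrinks toward σfinal the seeds land ever closer to their parent, matching Fig. 6.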
5. Experiments and discussions
5.1. Finding optimum initial values of some parameters
The parameters (A, r, fmax, ω, init) existing in both algorithms
(BA, EBA) have been trained on the functions given in Table 1 in
Table 1
The unconstrained functions used for analyzing.

No. | Function | C* | Formulation
1 | Sphere | U | f(x) = Σ_{i=1}^{n} x_i^2
2 | Rosenbrock | U | f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]
3 | Ackley | M | f(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e
4 | Griewangk | M | f(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1
5 | Rastrigin | M | f(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10]
6 | Zakharov | U | f(x) = Σ_{i=1}^{n} x_i^2 + (Σ_{i=1}^{n} 0.5 i x_i)^2 + (Σ_{i=1}^{n} 0.5 i x_i)^4
7 | Step | U | f(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2
8 | Dixon-Price | U | f(x) = (x_1 − 1)^2 + Σ_{i=2}^{n} i (2x_i^2 − x_{i−1})^2
9 | Easom | M | f(x) = −(−1)^n (Π_{i=1}^{n} cos^2(x_i)) exp(−Σ_{i=1}^{n} (x_i − π)^2)
10 | Michalewicz | M | f(x) = −Σ_{i=1}^{n} sin(x_i) [sin(i x_i^2/π)]^{2m}
Fig. 6. The distance of seeds to their parent during 500 iterations.
Fig. 7. Seed production and elimination of population.
order to gain better performance. The letters “U” and “M” in the
table represent unimodal and multimodal functions, respectively.
The parameters of population number (N), dimension of a func-
tion (d), function evaluation number (FEN) and run time (R) have
been set as 50, 10, 2 × 10^3 and 20, respectively. Only the average of the cost values obtained at the end of the runs has been considered in the comparison phase. The optimum parameter values used for the experiments in this study are summarized in Table 7.
(1) Loudness A: This parameter has been trained with different
values ranging from 0 to 1. The results have been normalized
and shown in Table 2. As seen from the results, the algorithm produces better solutions as the loudness value increases, with some exceptions. Since the algorithm produces the best cost values when A is 0.9 or 1, A has been set to their average, 0.95.
(2) Pulse emission rate r: The pulse emission rate parameter has also been trained with values ranging from 0 to 1, with A set as 0.95. The results are demonstrated in Table 3. Regardless of some exceptions, there is a positive correlation between optimization performance and the pulse emission rate value. Since the best optimization performance has been obtained when r is 0.8 and 0.9, their average (0.85) has been used.
(3) Maximum frequency value fmax: The frequency value belonging to each bat stands for its step size in the algorithm. As this value increases, the possibility of missing promising regions while seeking them also increases; on the other hand, the convergence speed reduces as the step sizes of the solutions decrease. In this study, the minimum frequency value (fmin) has been set as 0 and the maximum frequency value (fmax) has been trained with A and r set as 0.95 and 0.85, respectively. The results are shown in Table 4. As seen from the table, the algorithm produces the best result when fmax is 1.
(4) Inertia weight ω: Most strategies update the inertia value as iterations proceed. The “random (a), linear decreasing (b), nonlinear decreasing (c), chaotic term added linear decreasing (d), Sugeno function (e), linear or nonlinear decreasing (f)” inertia weight strategies (for further information refer to [63]) have been used in this study to train ω. From the results indicated in Table 5, it can easily be seen that the minimum value is obtained by the nonlinear decreasing strategy. So the nonlinear decreasing inertia weight strategy, shown in Eq. (14), has been chosen to update ω.
ω = ((itermax − iter) / itermax)^n (ωinit − ωfinal) + ωfinal (14)
Table 2
Mean normalized values of A on numerical functions.
No. A
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
1 1.28 1.18 1.16 1.03 1.13 1.45 1.01 1 1.14 1.04 1.13
2 5.32 3.98 4.55 2.74 4.43 3.83 3.62 3.35 2.23 1.26 1
3 1.22 1.21 1.22 1.23 1.19 1.17 1.07 1.10 1 1.06 1
4 1.24 1.18 1 1.12 1.18 1.18 1.42 1.28 1.17 1.09 1.10
5 1.83 1.65 1.35 1.37 1.41 1.19 1.18 1.21 1.02 1 1.05
6 3.47 4.41 4.97 4.87 4.13 2.96 3.53 3.47 2.18 1.40 1
7 1.32 1.35 1.42 1.17 1.15 1 1.14 1.36 1.37 1.07 1.17
8 18.3 16.4 17.2 19.0 6.00 6.92 11.4 4.18 4.33 1.34 1
9 1.99 1.99 1.98 1.95 1.97 1.98 1.70 1.59 1.47 1.64 1
10 1.05 1.12 1.07 1.08 1.12 1.06 1.07 1.05 1.02 1.01 1
Avg. 3.71 3.44 3.60 3.55 2.37 2.27 2.72 1.95 1.69 1.19 1.04
Table 3
Mean normalized values of r on numerical functions.
No. r
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
1 17.79 7.73 4.27 3.44 2.30 2.31 1.61 1.65 1.35 1 1.20
2 360.70 56.79 29.64 11.30 10.15 7.53 4.13 1.74 1 1.31 2.79
3 2.13 1.79 1.54 1.39 1.35 1.25 1.14 1.18 1 1 1.09
4 11.43 5.57 4.51 3.08 2.06 1.84 1.78 1.25 1.40 1 1.14
5 2.95 1.90 1.74 1.44 1.46 1.17 1.22 1.10 1 1.09 1.50
6 1.19 1.62 1.29 1.06 1.08 1.28 1 1.06 1.73 2.04 2.23
7 14.29 6.72 3.89 3.26 2.69 1.56 1.68 1.26 1.01 1.17 1
8 332.36 147.74 16.58 6.10 2.06 7.27 1.21 1.09 1.59 1 4.46
9 1.97 1.79 1.32 1 1.66 1.13 1.09 1.45 1.12 1.51 1.99
10 1 1.01 1 1.02 1.01 1.01 1.01 1.02 1.02 1.06 1.07
Avg. 74.58 23.26 6.57 3.30 2.58 2.63 1.58 1.28 1.22 1.21 1.84
Table 4
Mean normalized values of fmax on numerical functions.
No. fmax
1 2 3 4 5 6 7 8 9 10
1 1 3.24 5.33 6.16 7.02 9.54 11.64 10.80 10.73 13.56
2 1 1.76 2.03 5.56 17.92 89.75 63.09 77.42 107.15 207.92
3 1 2.98 5.34 6.05 7.54 8.14 8.19 8.40 8.11 7.60
4 1 1.64 2.44 2.31 3.91 3.95 4.42 4.92 5.74 4.81
5 1 1.05 1.05 1.40 1.47 1.29 1.74 1.43 1.71 1.98
6 1.81 4.74 3.32 17.55 31.22 7.17 24.03 33.63 97.79 1
7 1 2.84 3.45 5.28 7.09 5.98 6.97 9.35 7.16 8.86
8 1 1.15 1 1.70 1 5.53 2.56 9.84 5.97 3.77
9 4.12 1 2.47 5.67 5.68 10.33 4.00 3.90 5.50 5.51
10 1.01 1.03 1.03 1 1.04 1.03 1 1 1.04 1.03
Avg. 1.39 2.14 2.74 5.26 8.38 14.27 12.76 16.06 25.09 25.60
where ωinit and ωfinal are the initial and the final inertia values,
respectively.
(5) Initial coefficient factor ζinit: Determining the optimum initial value of this parameter improves the exploration capability and convergence performance of the algorithm. The parameter has been trained with values ranging from 0 to 1. The results are demonstrated in Table 6. From the results in the table, it is seen that the algorithm performs best when ζinit is 0.6, so this value has been chosen in this study. Fig. 4 shows the states of ζ1 and ζ2 with this initial value of ζinit.
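The nonlinear decreasing strategy chosen in item (4) (Eq. (14)) can be sketched with the Table 7 settings (ωinit = 0.9, ωfinal = 0.2, n = 2); the function name is illustrative.

```python
def inertia_weight(it, iter_max, w_init=0.9, w_final=0.2, n=2):
    """Eq. (14): omega decreases nonlinearly from w_init to w_final."""
    return ((iter_max - it) / iter_max) ** n * (w_init - w_final) + w_final
```

With n = 2 the weight falls slowly at first and faster later, keeping the global-search term dominant early in the run.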
Table 5
Mean normalized values of ω updating strategy on numerical functions.
No. a b c d e f
1 1.34e−02 6.66e−23 1.14e−05 1.14e−24 3.63e−19 9.86e+07
2 5.87 1.06 0.82 0.80 5.91 5.12e+09
3 2.98 0.36 0.60 1.92 0.58 16.35
4 2.58 4.25 3.31 3.96 6.67 120.43
5 37.16 20.80 20.05 33.83 22.44 77.27
6 3.69e−03 8.75e−18 4.08e−02 3.57e−02 1.34e−01 0.03
7 40.30 41.30 28.45 42.40 75.50 9.60e+07
8 0.63 0.67 0.63 0.60 0.67 20.77
9 −0.80 −1.00 −0.95 −1.00 −1.00 −0.59
10 −7.68 −7.73 −7.50 −7.62 −7.71 −5.58
Avg. 8.11 5.97 4.55 7.49 10.30 5.32e+08
Table 6
Mean normalized values of ζinit on numerical functions.
No. ζinit
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
1 5.32 4.08 3.80 2.91 1.84 1.54 1.45 1 1.05 1.19 1.53
2 86.92 54.87 37.73 56.46 7.81 4.02 1 5.89 2.11 3.84 2.52
3 7.73 6.66 4.49 3.20 3.25 2.52 1 1.35 1.76 1.09 2.38
4 2.85 2.36 2.00 1.54 1.45 1.30 1.21 1.02 1.12 1 1.10
5 1.19 1.16 1.10 1.01 1 1.09 1.06 1.04 1.01 1.03 1.02
6 4.04 4.92 1.16 1.15 1.16 1 1.19 1.34 1.21 1.18 1.03
7 5.47 3.34 2.49 2.32 1.56 1.83 1.43 1.67 1 1.40 1.62
8 1.14 2.56 1.05 1.04 1 1.03 1.04 1.05 1.04 1 1.04
9 4.22 4.27 4.26 4.91 2.34 2.29 1.69 1 1.01 3.68 2.99
10 1.02 1.04 1.01 1 1.03 1.01 1.09 1.06 1.03 1.07 1.06
Avg. 11.99 8.53 5.91 7.55 2.24 1.76 1.22 1.64 1.23 1.65 1.63
Table 7
Optimum parameter values obtained after training.

Parameters | BA | EBA
Run time, R | 30 | 30
Population, N | 50 | 50
Loudness, A | 0.95 | 0.95
Pulse emission, r | 0.85 | 0.85
The factors updating A and r, α and γ | – | 0.9
Minimum and maximum frequencies, fmin–fmax | 0–1 | 0–1
Initial and final values of inertia weight, ωinit–ωfinal | – | 0.9–0.2
Modulation index of inertia weight, n | – | 2
Coefficient factor, ζinit | – | 0.6
Minimum number of seeds, smin | – | 0
Maximum number of seeds, smax | – | 4
5.2. Analysis of contributions by proposed modifications
In order to verify the efficiency of proposed modification struc-
tures separately, the algorithm with proposed modifications (IS1,
IS2, IS3) is tested in different combinations via the functions given
in Table 1. The results have been shown in Table 8. Fig. 8 has also
depicted the convergence speeds of each modification structure.
The parameter values given in Table 7 have been used in the test
phase.
It can easily be seen from the results given in Table 8 that all
proposed structures (IS1, IS2, IS3) have produced better solutions
than the standard bat algorithm. IS3 is generally better than IS1 and
IS2 on all functions. As expected, the IS2 modification structure
performs better on multimodal functions than on unimodal ones.
5.3. Optimization of unconstrained benchmark functions
To measure the efficiency of the proposed method on unconstrained
problems, it has been compared with the results of BA and GA
in [23]. This experiment has been conducted on the 50 unconstrained
benchmark functions given in [23]. The benchmark set comprises
unimodal and multimodal functions with various dimensions. Unimodal
functions have been used to quantify convergence speed, while
multimodal functions have been used to detect whether the algorithm
gets stuck in a local minimum, namely the premature convergence
problem.
In order to determine whether the average objective values of the
proposed method are significantly different from those of BA and
GA, Student's t-test has been employed. The success of the competing
algorithms has been compared with respect to the t value, which has
been evaluated by Eq. (15):

t = (X1 − X2) / √( SD1²/(n1 − 1) + SD2²/(n2 − 1) )    (15)
where X1, SD1 and n1 are the mean, standard deviation and size of the
first sample (BA or GA), while X2, SD2 and n2 are those of the second
sample (EBA), respectively. The t value can be positive or negative:
a positive value means that EBA has produced better solutions during
the optimization process, while a negative value means that BA (or GA)
has. In this study, the confidence level has been set at 95%, which
corresponds to t0.05 = 1.96. When t > 1.96, the difference between the
two samples is significant and EBA is better than BA (or GA);
conversely, when t < −1.96, BA (or GA) is better than EBA. The
rightmost four columns of Tables 9 and 10
indicate which algorithm has provided the better solution.
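As an illustration of Eq. (15) and the 95% decision rule, the sketch below recomputes the t value for function 2 at D = 30 from the rounded statistics reported in Table 9; the small deviation from the tabulated t = 21.8336 is due to that rounding:

```python
import math

def t_value(mean1, sd1, n1, mean2, sd2, n2):
    # Eq. (15): sample 1 is BA (or GA), sample 2 is EBA
    return (mean1 - mean2) / math.sqrt(sd1 ** 2 / (n1 - 1) + sd2 ** 2 / (n2 - 1))

def significance(t, t_crit=1.96):
    # A positive t beyond the critical value favors EBA, a negative one BA/GA
    if t > t_crit:
        return "EBA"
    if t < -t_crit:
        return "BA/GA"
    return "N.S."

# Function 2, D = 30 (Table 9): BA mean 5.44e3, sd 1.34e3; EBA mean 9.66e-1, sd 1.06
t = t_value(5.44e3, 1.34e3, 30, 9.66e-1, 1.06, 30)
print(round(t, 2), significance(t))  # close to the tabulated 21.8336, significant for EBA
```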
To measure the convergence speed of EBA, the Convergency Rate (CR)
proposed in Ref. [64] has been used. There are four basic steps to
compute the CR value of an algorithm [64]. In this study, the mean
values of 30 runs for each problem have been regarded as one run
while computing CR.
Different types of schemes have been proposed in the literature
[65] to handle natural constraints. In this experiment, the absorbing
scheme has been used. The initial parameters, vital for the quality
of the optimization process, have been set to the values given in
Table 7. For the parameter settings of GA, refer to [23]. The "minimum,
maximum, mean, standard deviation, t and significance" values of
the results have been comparatively shown in Tables 9 and 10. The
convergence graphs of BA and EBA are given in Fig. 9.
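The absorbing scheme is commonly implemented by resetting any solution component that leaves the search range onto the violated bound; the sketch below assumes that reading (Ref. [65] gives the precise definitions of the schemes):

```python
def absorb(x, lower, upper):
    # Absorbing boundary handling: a component that leaves the search
    # range is placed back onto the bound it violated.
    return [min(max(xi, lo), up) for xi, lo, up in zip(x, lower, upper)]

# A candidate with one component below and one above the unit box
print(absorb([-0.3, 0.5, 1.7], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))  # [0.0, 0.5, 1.0]
```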
As seen in Tables 9 and 10, there are a total of 76 optimization
processes with various dimensions for the 50 functions. The t values
point out that EBA has exhibited better performance than BA on 71 of
the 76 optimization processes, while neither method defeats the other
on the remaining five. Fig. 9 shows that EBA converges much better
than BA on both unimodal and multimodal functions. Though the results
obtained from the functions numbered 18, 24, 28 and 45 seem similar,
the t values and Fig. 10 reveal that EBA is better than BA on these
functions.
When the t values of EBA vs GA in Tables 9 and 10 are considered,
it can easily be seen that EBA and GA have produced the better values
on 25 and 6 of the 50 functions, respectively. It is worth stating
that Karaboğa and Akay [23] assumed values below 10^−12 to be 0.
Under this assumption, it would not be wrong to say that EBA has
actually produced the same values as GA on the eighth function. On
the other hand, when the CR values, ranging from 0.44 to 0.99, are
considered, it is understood that EBA converges to the point where
BA reaches at the end of the iterations faster than BA does.
The results of EBA have also been compared with Hybrid Bat
Algorithm (HBA) [21], Hybrid Bat with Random Forest (HBARF) [66]
Fig. 8. Convergence rates of all modification structures.
Table 8
Performance of modification structures on bat algorithm.
No. D BA IS1 IS2 IS3 IS1  IS2 IS1  IS3 IS2  IS3 EBA
1 10 4.68e+02 4.69e−27 3.12e+02 2.02e−04 7.19e−28 3.87e−06 2.08e−04 9.95e−24
30 5.22e+03 1.23e+01 3.54e+03 8.07e−03 2.54e−00 2.17e−05 8.98e−03 1.85e−05
50 9.55e+03 4.59e+02 7.79e+03 1.17e−02 1.65e+02 2.07e−05 1.01e−02 2.18e−05
2 10 2.20e+02 1.34e−00 4.38e+02 1.30e+01 4.87e−00 9.95e−01 1.03e+01 4.13e−01
30 1.84e+05 3.01e+01 5.69e+04 5.16e+01 2.99e+01 2.99e+01 6.02e+01 2.80e+01
50 8.96e+05 2.51e+02 4.57e+05 6.69e+01 8.65e+01 5.16e+01 8.03e+01 5.54e+01
3 10 1.31e−00 6.92e−01 1.26e−00 2.01e−02 3.33e−01 9.44e−02 1.99e−02 8.56e−08
30 9.44e−00 5.68e−00 8.22e−00 1.80e−01 4.10e−00 8.09e−01 2.04e−01 2.25e−01
50 1.08e+01 8.62e−00 1.00e+01 1.96e−00 7.16e−00 2.01e−00 2.01e−00 1.76e−00
4 10 1.10e+01 4.15e−00 1.03e+01 9.87e−01 2.73e−00 1.70e−00 6.40e−01 1.14e−00
30 6.08e+01 6.86e−00 5.33e+01 1.41e−02 4.00e−00 8.62e−03 5.41e−03 6.90e−03
50 1.11e+02 2.58e−01 1.01e+02 6.59e−03 1.32e+01 3.61e−03 3.87e−03 5.26e−03
5 10 3.08e+01 2.29e+01 2.96e+01 1.73e+01 1.69e+01 2.21e+01 9.83e−00 1.19e+01
30 1.18e+02 1.21e+02 1.08e+02 9.71e+01 9.16e+01 1.19e+02 5.16e+01 5.73e+01
50 2.47e+02 2.43e+02 2.05e+02 1.91e+02 1.92e+02 2.35e+02 1.16e+02 1.20e+02
6 10 3.00e−02 4.14e−06 8.33e−02 4.99e−04 5.88e−25 1.07e−05 2.79e−04 1.41e−11
30 2.23e+02 5.69e−05 2.07e+02 6.91e−03 3.86e−05 4.85e−05 7.41e−03 4.60e−05
50 8.01e+02 3.11e−00 7.12e+02 2.35e−02 1.35e−00 3.66e−04 2.41e−02 1.46e−04
7 10 6.05e+02 3.40e+01 4.19e+02 0.00e−00 9.33e−00 0.00e−00 0.00e−00 0.00e−00
30 5.09e+03 1.91e+03 4.38e+03 6.33e−01 9.03e+02 1.63e−00 4.00e−01 8.33e−01
50 1.18e+04 5.30e+03 9.02e+03 7.70e−00 3.32e+03 9.30e−00 7.00e−00 7.67e−00
8 10 7.34e−01 6.44e−01 7.58e−01 7.04e−01 6.44e−01 5.78e−01 6.95e−01 6.67e−01
30 1.02e+01 6.71e−01 9.37e−00 6.41e−00 6.68e−01 6.68e−01 6.70e−00 6.68e−01
50 4.73e+02 6.78e−01 2.45e+02 3.17e+02 6.93e−01 8.69e−01 1.98e+02 8.28e−01
9 10 −8.14e−01 −1.00e−00 −9.08e−01 −1.00e−00 −1.00e−00 −1.00e−00 −9.99e−01 −1.00e−00
30 −3.24e−07 −6.74e−02 −7.29e−12 −6.66e−02 −1.68e−01 −3.34e−01 −2.95e−01 −3.01e−01
50 −2.70e−40 −6.78e−07 −1.59e−39 −3.47e−12 −3.40e−07 −1.07e−04 −1.05e−04 −3.35e−02
10 2 −1.80e−00 −1.80e−00 −1.80e−00 −1.80e−00 −1.80e−00 −1.80e−00 −1.80e−00 −1.80e−00
5 −3.71e−00 −4.30e−00 −3.82e−00 −4.34e−00 −4.47e−00 −4.16e−00 −4.51e−00 −4.38e−00
10 −5.61e−00 −7.56e−00 −5.75e−00 −7.79e−00 −8.09e−00 −7.37e−00 −8.48e+00 −8.17e−00
Table 9
The comparative results on unconstrained functions.
No. D BA EBA GA [23] CR EBA vs BA EBA vs GA
Minimum Maximum Mean Std. dev. Minimum Maximum Mean Std. dev. Mean t Sig. t Sig.
1 5 0 0 0 0 0 0 0 0 0 0.44 N.A.a N.S.b N.A. N.S.
2 10 5.40e+01 1.35e+03 5.22e+02 3.48e+02 0 0 0 0 N.A. 0.99 8.0755 EBA
30 2.99e+03 8.82e+03 5.44e+03 1.34e+03 0 3.00e−00 9.66e−01 1.06e−00 1.17e+03 0.99 21.8336 EBA 82.2199 EBA
50 6.39e+03 1.54e+04 1.08e+04 2.61e+03 1.00e−00 1.80e+01 7.13e−00 3.60e−00 N.A. 0.99 22.3300 EBA
3 10 1.55e+01 1.12e+03 4.37e+02 2.83e+02 1.64e−35 1.70e−30 1.13e−31 3.91e−31 N.A. 0.99 8.2932 EBA
30 2.13e+03 1.04e+04 4.65e+03 2.04e+03 6.97e−06 3.70e−05 2.06e−05 6.25e−06 1.11e+03 0.99 12.2558 EBA 80.5489 EBA
50 6.14e+03 1.89e+04 1.06e+04 3.24e+03 1.08e−05 2.84e−05 2.08e−05 4.38e−06 N.A. 0.99 17.7032 EBA
4 10 3.30e−02 1.69e−01 9.78e−02 3.61e−02 2.17e−34 1.88e−27 6.41e−29 3.43e−28 N.A. 0.92 14.5803 EBA
30 8.92e−01 6.55e+01 1.16e+01 1.38e+01 2.49e−04 1.11e−03 5.43e−04 1.77e−04 1.48e+02 0.94 4.5556 EBA 64.2277 EBA
50 3.05e+01 3.49e+02 1.53e+02 9.00e+01 6.30e−04 3.14e−03 1.13e−03 6.40e−04 N.A. 0.96 9.2078 EBA
5 10 2.80e−02 1.67e−01 9.12e−02 3.51e−02 2.96e−04 4.42e−03 1.69e−03 1.06e−03 N.A. 0.94 13.7219 EBA
30 3.92e−01 3.04e−00 1.48e−00 5.79e−01 1.18e−02 8.42e−02 3.77e−02 1.69e−02 0.1807 0.98 13.4846 EBA 24.1119 EBA
50 4.48e−01 1.04e+01 4.85e−00 2.91e−00 1.16e−02 1.21e−01 4.88e−02 2.68e−02 N.A. 0.97 8.8870 EBA
6 2 9.92e−06 8.10e−04 2.56e−04 2.45e−04 0 0 0 0 0 0.95 5.6373 EBA N.A. N.S.
7 2 −1.00e+00 0 −9.66e−01 1.83e−01 −1.00e−00 −1.00e−00 −1.00e−00 0 −1 0.95 0.9895 N.S. N.A. N.S.
8 2 6.08e−07 4.77e−05 1.22e−05 1.31e−05 1.98e−40 4.87e−37 5.72e−38 9.91e−38 0 0.95 5.0176 EBA 0 N.S.
9 4 8.27e−02 5.50e−00 1.06e−00 1.53e−00 0 0 0 0 0.01493 0.92 3.7345 EBA 10.9240 EBA
10 6 −4.99e+01 −3.13e+01 −4.80e+01 4.30e−00 −5.00e+01 −5.00e+01 −5.00e+01 0 −49.9999 0.99 2.4549 EBA 23.9341 EBA
11 10 −8.50e−00 1.47e+03 3.85e+02 3.59e+02 −2.10e+02 −2.10e+02 −2.10e+02 0 −209.476 0.99 8.9346 EBA 14.5899 EBA
12 10 1.11e−02 7.48e−02 3.46e−02 1.59e−02 2.05e−35 1.04e−09 3.49e−11 1.91e−10 0.01335 0.93 11.6985 EBA 15.8702 EBA
30 4.64e+01 4.60e+02 1.71e+02 1.00e+02 2.78e−05 7.02e−05 4.40e−05 1.00e−05 N.A. 0.99 9.1331 EBA
50 4.78e+02 1.24e+03 8.66e+02 2.15e+02 4.88e−05 1.45e−03 2.06e−04 3.01e−04 N.A. 0.99 21.6782 EBA
13 24 2.18e−01 5.13e−00 1.70e−00 1.17e−00 4.91e−04 1.07e−02 3.02e−03 2.21e−03 9.70377 0.90 7.7893 EBA 33.7490 EBA
14 10 1.24e−01 4.33e−01 3.01e−01 7.48e−02 2.15e−05 1.79e−01 1.04e−02 3.23e−02 N.A. 0.92 19.2010 EBA
30 4.92e−01 1.44e+01 4.09e−00 3.42e−00 3.15e−02 1.70e−00 3.11e−01 3.99e−01 11.0214 0.94 5.8988 EBA 39.9688 EBA
50 1.10e+01 4.46e+01 2.07e+01 7.14e−00 4.50e−01 2.78e−00 1.34e−00 7.17e−01 N.A. 0.97 14.5371 EBA
15 10 1.50e+02 2.18e+03 1.05e+03 5.01e+02 1.91e−35 5.54e−12 1.84e−13 1.01e−12 N.A. 0.99 11.3544 EBA
30 4.86e+03 2.82e+04 1.26e+04 5.66e+03 6.41e−05 2.60e−04 1.18e−04 3.96e−05 7.40e+03 0.99 12.0651 EBA 34.9563 EBA
50 1.66e+04 1.43e+05 3.95e+04 2.37e+04 1.69e−03 3.19e−02 1.07e−02 6.06e−03 N.A. 0.99 8.9742 EBA
16 10 6.72e−00 9.50e+03 7.46e+02 1.93e+03 4.19e−12 3.98e−00 1.32e−01 7.27e−01 N.A. 0.99 2.0763 EBA
30 2.54e+03 2.30e+06 1.98e+05 4.29e+05 8.34e−01 2.94e+01 2.11e+01 5.96e−00 1.96e+05 0.99 2.4871 EBA 27.4124 EBA
50 1.26e+05 2.53e+06 1.04e+06 6.12e+05 3.94e+01 1.01e+02 5.45e+01 2.16e+01 N.A. 0.99 9.1957 EBA
17 10 6.93e−01 8.20e−01 7.56e−01 3.50e−02 6.66e−01 6.66e−01 6.66e−01 1.79e−09 N.A. 0.91 13.8873 EBA
30 1.28e−00 7.45e+01 1.12e+01 1.44e+01 6.66e−01 6.69e−01 6.67e−01 6.48e−04 1.22e+03 0.93 3.9463 EBA 24.6854 EBA
50 1.15e+02 3.86e+03 4.92e+02 6.97e+02 6.67e−01 1.53e−00 6.99e−01 1.56e−01 N.A. 0.96 3.8015 EBA
18 2 9.98e−01 9.98e−01 9.98e−01 2.20e−13 9.98e−01 9.98e−01 9.98e−01 2.22e−16 0.9980 0.94 3.8986 EBA 0 N.S.
19 2 3.97e−01 3.99e−01 3.98e−01 3.11e−04 3.97e−01 3.97e−01 3.97e−01 0 0.3978 0.94 4.7628 EBA N.A. N.S.
20 2 4.29e−05 5.85e−03 1.76e−03 1.54e−03 0 0 0 0 0 0.94 6.1474 EBA N.A. N.S.
21 2 2.04e−05 1.99e−03 4.52e−04 5.41e−04 0 0 0 0 0 0.95 4.5002 EBA N.A. N.S.
a Not available.
b Not significant.
Table 10
The comparative results on unconstrained functions.
No. D BA EBA GA [23] CR EBA vs BA EBA vs GA
Minimum Maximum Mean Std. dev. Minimum Maximum Mean Std. dev. Mean t Sig. t Sig.
22 10 1.31e+01 4.37e+01 3.06e+01 7.58e−00 3.97e−00 1.98e+01 1.01e+01 4.14e−00 N.A. 0.93 12.7979 EBA
30 8.61e+01 1.96e+02 1.42e+02 2.67e+01 1.99e+01 6.76e+01 3.44e+01 1.28e+01 52.9225 0.91 19.5803 EBA 7.3399 EBA
50 8.42e+01 3.42e+02 2.25e+02 8.43e+01 3.18e+01 8.35e+01 5.15e+01 1.40e+01 N.A. 0.91 10.9442 EBA
23 10 −2.52e+03 −1.41e+03 −2.03e+03 2.67e+02 −3.39e+03 −1.58e+03 −2.55e+03 4.14e+02 N.A. 0.99 5.6800 EBA
30 −4.58e+03 −2.57e+03 −3.59e+03 4.85e+02 −8.97e+03 −4.51e+03 −6.98e+03 9.61e+02 −11593.4 0.99 16.9875 EBA −25.7313 GA
50 −7.27e+03 −3.34e+03 −4.84e+03 8.78e+02 −1.43e+03 −6.35e+03 −1.12e+04 1.56e+03 N.A. 0.99 19.2871 EBA
24 2 −1.80e−00 −1.79e−00 −1.80e−00 3.28e−03 −1.80e−00 −1.80e−00 −1.80e−00 9.03e−16 −1.8013 0.95 5.2415 EBA 0 N.S.
25 5 −4.10e−00 −3.11e−00 −3.67e−00 2.46e−01 −4.69e−00 −3.41e−00 −4.45e−00 3.33e−01 −4.6448 0.92 10.2037 EBA −3.0225 GA
26 10 −6.21e−00 −4.97e−00 −5.68e−00 3.32e−01 −9.41e−00 −6.08e−00 −8.14e−00 9.26e−01 −9.4968 0.90 13.4588 EBA −7.8004 GA
27 2 2.60e−03 7.81e−02 1.91e−02 1.92e−02 0 9.71e−03 5.50e−03 4.89e−03 0.0042 0.98 3.7028 EBA −1.0256 N.S.
28 2 −1.03e−00 −1.02e−00 −1.03e−00 7.30e−04 −1.03e−00 −1.03e−00 −1.03e-00 6.18e−16 −1.0316 0.94 5.8509 EBA 0 N.S.
29 2 5.27e−05 9.24e−03 2.55e−03 2.25e−03 0 0 0 0 0.0682 0.95 6.0833 EBA 4.6959 EBA
30 2 5.51e−05 2.32e−03 6.84e−04 6.44e−04 0 0 0 0 0 0.94 5.7218 EBA N.A. N.S.
31 2 −1.86e+02 −1.85e+02 −1.86e+02 3.53e−01 −1.86e+02 −1.86e+02 −1.86e+02 3.73e−14 −186.73 0.91 5.2554 EBA 0 N.S.
32 2 3.00e−00 3.13e−00 3.04e−00 3.19e−02 3.00e−00 3.00e−00 3.00e−00 0 5.2506 0.94 2.0647 EBA 2.0647 EBA
33 4 6.66e−04 2.03e−02 4.17e−03 6.87e−03 3.07e−04 2.03e−02 3.07e−03 6.90e−03 0.0056 0.97 0.6117 N.S. 1.2740 N.S.
34 4 −9.96e−00 −2.61e−00 −5.77e−00 3.14e−00 −1.01e+01 −2.63e−00 −8.89e−00 2.36e−00 −5.6605 0.97 4.2673 EBA 3.8391 EBA
35 4 −1.02e+01 −2.70e−00 −6.90e−00 3.14e−00 −1.04e+01 −2.76e−00 −9.46e−00 2.46e−00 −5.3440 0.97 3.4403 EBA 5.1643 EBA
36 4 −1.03e+01 −2.36e−00 −7.85e−00 3.22e−00 −1.05e+01 −5.17e−00 −1.01e+01 1.36e−00 −3.8298 0.96 3.5741 EBA 12.0427 EBA
37 4 2.52e−02 1.27e−00 5.51e−01 3.52e−01 1.00e−20 4.72e−01 7.20e−02 1.44e−01 0.3026 0.83 6.7898 EBA 5.1528 EBA
38 4 9.33e−03 2.52e−01 9.41e−02 7.32e−02 7.87e−07 4.07e−04 1.51e−04 1.45e−04 0.01040 0.85 6.9132 EBA 6,0844 EBA
39 3 −3.86e−00 −3.81e−00 −3.85e−00 9.27e−03 −3.86e−00 −3.86e−00 −3.86e−00 2.62e−15 −3.8627 0.96 7.0995 EBA 0 N.S.
40 6 −3.12e−00 −2.90e−00 −3.02e−00 5.29e−02 −3.32e−00 −3.20e−00 −3.25e−00 6.04e−02 −3.2982 0.98 15.5573 EBA −3.3068 GA
41 10 2.82e−00 2.20e+01 1.18e+01 5.44e−00 5.66e−02 2.67e−00 9.04e−01 6.57e−01 N.A. 0.99 10.7187 EBA
30 3.31e+01 9.63e+01 6.45e+01 1.82e+01 5.10e−07 4.43e−02 7.30e−03 9.46e−03 10.6334 0.99 19.0546 EBA 49.2672 EBA
50 7.05e+01 1.86e+02 1.16e+02 2.74e+01 5.77e−07 2.21e−02 4.68e−03 6.21e−03 N.A. 0.99 22.9103 EBA
42 10 9.09e−02 7.98e−00 1.63e−00 2.32e−00 4.44e−15 1.26e−07 4.21e−09 2.30e−08 N.A. 0.97 3.8019 EBA
30 4.49e−00 1.25+01 9.17e−00 2.07e−00 2.41e−03 1.50e−00 4.58e−01 5.78e−01 14.6717 0.99 21.8369 EBA 126.5533 EBA
50 8.54e−00 1.34e+01 1.08e+01 1.28e−00 1.02e−00 2.57e−00 1.80e−00 4.11e−01 N.A. 0.99 35.9427 EBA
43 30 7.68e−00 6.21e+04 2.14e+03 1.13e+04 2.64e−07 9.00e−00 9.99e−01 1.84e−00 13.3772 0.99 1.0192 N.S. 28.4637 EBA
44 30 6.66e+01 3.51e+02 1.71e+02 6.41e+01 1.87e−06 1.93e−02 7.74e−03 9.64e−03 125.06 0.99 14.4012 EBA 56.1133 EBA
45 2 −1.08e−00 −1.08e−00 −1.08e−00 2.29e−04 −1.08e−00 −1.08e−00 −1.08e−00 6.51e−16 −1.0809 0.96 7.5889 EBA 0 N.S.
46 5 −1.49e−00 −4.82e−01 −9.53e−01 3.88e−01 −1.49e−00 −9.07e−01 −1.18e−00 2.80e−00 −0.9684 0.98 2.6005 EBA 0.4048 N.S.
47 10 −7.97e−01 −2.74e−01 −4.80e−01 1.81e−01 −1.49e−00 −2.74e−01 −6.38e−01 3.04e−00 −0.6364 0.98 2.3956 EBA 0.0028 N.S.
48 2 4.36e−02 1.94e−00 5.64e−01 4.27e−01 0 0 0 0 0 0.94 7.1099 EBA N.A. N.S.
49 5 3.29e+01 1.61e+03 2.93e+02 4.51e+02 0 2.52e+02 7.70e+01 1.00e+02 0.0043 0.98 2.5197 EBA −4.1463 GA
50 10 3.03e+02 9.43e+03 2.74e+03 2.67e+03 1.71e−71 6.60e+02 1.92e+02 2.34e+02 29.573 0.96 5.1303 EBA −3.7293 GA
Fig. 9. Convergence results of the algorithms: fitness value vs. iteration (logarithmic scale) for the Sphere (D=10), SumSquares (D=10), Schwefel 1.2 (D=30), Ackley (D=10), Griewangk (D=50) and Shekel5 (D=4) functions, each comparing BA and EBA.
Table 11
The comparative results of EBA and existing improvement approaches.
No.a Method Minimum Maximum Mean Std. dev.
3 MoBA 3.73e−03 1.60e−02 8.80e−03 3.34e−03
HBA 4.83e−09 2.89e−03 1.26e−04 1.66e−07
HBARF 2.36e−06 5.90e−02 5.92e−03 1.22e−02
EBA 1.64e−35b 1.70e−30 1.13e−31 3.91e−31
16 MoBA 7.44e−00 1.64e+01 1.03e+01 1.94e−00
HBA 6.34e−02 5.10e+02 6.22e+01 7.73e−00
HBARF 5.00e−05 1.99e+00 2.64e−01 5.44e−01
EBA 4.19e−12 3.98e−00 1.32e−01 7.27e−01
22 MoBA 1.46e+01 3.48e+01 2.49e+01 4.35e−00
HBA 5.12e−00 2.38e+01 1.55e+01 1.69e+01
HBARF 3.09e−05 1.02e+01 5.92e−01 2.00e−00
EBA 3.97e−00 1.98e+01 1.01e+01 4.14e−00
41 MoBA 2.05e−00 2.06e+01 8.12e−00 5.39e−00
HBA 2.25e−09 3.97e−05 3.18e−06 1.14e−07
HBARF 1.44e−11 6.35e−04 3.92e−05 1.25e−04
EBA 5.66e−02 2.67e−00 9.04e−01 6.57e−01
42 MoBA 3.61e−02 1.79e−00 1.67e−01 3.60e−01
HBA 6.31e−04 2.00e+01 1.16e+01 1.78e+01
HBARF 7.21e−04 3.53e−01 3.14e−02 6.87e−02
EBA 4.44e−15 1.26e−07 4.21e−09 2.30e−08
a Indicates the function number given in Tables 9 and 10.
b Bold values emphasize the best value of the cluster of interest.
Fig. 10. Graphical comparison of BA and EBA in terms of best, worst, mean values.
and Modified Bat Algorithm (MoBA) [62] on a limited set of
10-dimensional benchmark functions presented in [66], to determine
the quality of EBA relative to the existing improvement approaches.
The results are shown in Table 11.
Of the four different improvement approaches tested on these
functions, Table 11 shows that EBA has outperformed the other methods
on three of the five functions, while HBA and HBARF have each
optimized one function better than EBA.
5.4. Optimization of constrained real-world problems
In this test stage, the performance of EBA on constrained engineering
problems has been investigated. For this purpose, three well-known
real-world problems have been chosen from the literature [24,25]:
welded beam design, spring design and pressure vessel design. The
results obtained from these problems have been compared with studies
in the literature (in particular, those published after 2007). For a
fair comparison, the efficiency of each approach has been measured by
its function evaluation number (FEN), which is equal to the population
size multiplied by the number of iterations. For these constrained
problems, N has been set to 20, 10 and 25 and the maximum number of
iterations to 2000, 500 and 600, respectively; R has been taken as 30.
Though BA has been proposed for solving unconstrained problems, it
can be applied to constrained engineering problems by transforming
the problem into an unconstrained one. The underlying idea is to add
a penalty function to the problem, as given below:
Φ(x) = f(x) + Σ(i=1..M) λi hi²(x) + Σ(j=1..M) μj gj²(x)    (16)

where hi and gj denote the equality and inequality constraints,
respectively, and λi and μj are the penalty parameters that take
effect when the constraints are violated; note that they should be
large enough.
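A minimal sketch of this penalty transformation follows. The quadratic-penalty form below penalizes an inequality constraint g_j(x) ≤ 0 only when it is violated, matching the description above; the objective and constraint in the example are toy placeholders, not one of the design problems:

```python
def penalized(f, eq_constraints, ineq_constraints, lam=1e6, mu=1e6):
    """Build Phi(x) = f(x) + sum_i lam*h_i(x)^2 + sum_j mu*max(0, g_j(x))^2."""
    def phi(x):
        # Equality constraints h_i(x) = 0 are penalized whenever nonzero;
        # inequality constraints g_j(x) <= 0 only when positive (violated).
        penalty = sum(lam * h(x) ** 2 for h in eq_constraints)
        penalty += sum(mu * max(0.0, g(x)) ** 2 for g in ineq_constraints)
        return f(x) + penalty
    return phi

# Toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
phi = penalized(lambda x: x * x, [], [lambda x: 1.0 - x])
print(phi(2.0))             # feasible point: no penalty, plain objective 4.0
print(phi(0.5) > phi(2.0))  # infeasible point is dominated by the penalty: True
```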
5.4.1. Welded beam problem
The main goal of the problem is to produce a welded beam design with
minimum fabrication cost. As seen in Fig. 11, the object B is to be
welded to beam A. The problem has four design parameters (x1, x2, x3
and x4) and seven constraints. The thickness of the weld is h (x1),
the length of the welded joint is l (x2), the width of the beam is
t (x3) and the thickness of the beam is b (x4). The values of h (x1)
and l (x2) are discrete and take integer multiples of 0.0065. EBA has
been adapted to solve the problem by rounding the real values to the
nearest such multiples. The boundaries of the variables are
0.125 ≤ x1 ≤ 5 and 0.1 ≤ x2, x3, x4 ≤ 10. For more details refer to
[25]. The results are given in Table 12.
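The discretization of h (x1) onto the 0.0065 grid can be sketched as follows; the helper name and the clipping step are our own additions, as the paper only states that real values are rounded to the nearest multiple:

```python
def snap(value, step=0.0065, lower=0.125, upper=5.0):
    # Round a continuous candidate to the nearest integer multiple of
    # `step`, then clip it back into the variable's bounds (here x1's).
    snapped = round(value / step) * step
    return min(max(snapped, lower), upper)

print(round(snap(0.2057), 4))  # 0.208, i.e. 32 * 0.0065
```

Note that the reported EBA solution obeys this grid: x1 = 0.2015 = 31 × 0.0065 and x2 = 3.5620 = 548 × 0.0065.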
The results in Table 12 point out that all studies have achieved
acceptable solutions without exceeding the boundaries. The studies
[74,77,71] seem to find the best cost value; however, they neglected
the discrete variables and regarded them as continuous. EBA has found
a compelling cost value within the minimum FEN without abandoning any
rules of the problem.
Fig. 11. Welded beam.
Table 12
The comparative results on welded beam problem.
Study Method x1 x2 x3 x4 Cost FEN
Bernardino et al. (2007) [67] GA-AISa 0.2443 6.2202 8.2915 0.2444 2.3812 320 000
Bernardino et al. (2008) [68] GA-AIS 0.2444 6.2183 8.2912 0.2444 2.3812 320 000
Montes and Ocana (2008) [69] BFAb 0.2057 3.4711 9.0367 0.2057 2.3868 48 000
Zhang et al. (2008) [70] DEc 0.2444 6.2175 8.2915 0.2444 2.3810 24 000
Zahara and Kao (2009) [71] NM-PSOd 0.206 3.468 9.037 0.206 1.7248 80 000
Zhang et al. (2009) [72] EAe 0.2443 6.2201 8.2940 0.2444 2.3816 28 897
Aragon et al. (2010) [73] TCAf 0.2444 6.2186 8.2915 0.2444 2.3811 320 000
Kaveh and Talatahari (2010) [74] CSSg 0.2058 3.4681 9.0380 0.2057 1.7249 N.A.l
Datta and Figueira (2011) [75] ID-PSOh 0.1875 1.7821 8.2500 0.2500 1.9553 N.A.
Gandomi et al. (2011) [76] FAi 0.2015 3.562 9.0414 0.2057 1.7312 50 000
Gandomi et al. (2013) [46] BA 0.2015 3.5620 9.0414 0.2057 1.7312 50 000
Sadollah et al. (2013) [77] MBAj 0.2057 3.4704 9.0366 0.2057 1.7248 47 340
Gandomi (2014) [64] ISAk 0.2443 6.2199 8.2915 0.2443 2.3812 30 000
Present study EBA 0.2015 3.5620 9.0414 0.2057 1.7312 40 000
a AIS: artificial immune system.
b Bacterial foraging algorithm.
c Differential evolution.
d NM: Nelder–Mead.
e Evolutionary algorithm.
f T-cell algorithm.
g Charged system search.
h ID: integer-discrete.
i Firefly algorithm.
j Mine blast algorithm.
k Interior search algorithm.
l Not available.
Fig. 12. Spring design.
5.4.2. Spring design problem
This problem is another well-known engineering problem used to
investigate the superiority of an algorithm. Its objective is to
design a spring of minimum weight by achieving optimum values of the
variables shown in Fig. 12. The problem has three design variables
(x1, x2 and x3) and four constraints. The wire diameter is d (x1),
the mean diameter is D (x2) and the number of active coils is N (x3).
The boundaries of these variables are 0.05 ≤ x1 ≤ 1, 0.25 ≤ x2 ≤ 1.3
and 2 ≤ x3 ≤ 15. For further details refer to [24]. The results of
the problem are given in Table 13.
Although all studies listed in Table 13 have managed to find
reasonable cost values, most of them have violated the boundaries to
obtain the cost values given in the table. The proposed method,
however, has managed to find the minimum cost value without violating
the boundaries of the problem.
5.4.3. Pressure vessel design problem
As seen in Fig. 13, a cylindrical pressure vessel is capped at both
ends by hemispherical heads. The objective of this problem is to
minimize the total cost, including welding, material and forming
costs. The pressure vessel problem has four design variables (x1, x2,
x3 and x4) and four constraints. The thickness of the shell is
Ts (x1), the thickness of the head is Th (x2), the inner radius is
R (x3) and the length of the cylindrical section of the vessel is
L (x4). The variables x1 and x2 are discrete and take integer
multiples of 0.0625 in. The boundaries of these parameters are
1 × 0.0625 ≤ x1, x2 ≤ 99 × 0.0625 and 10 ≤ x3, x4 ≤ 200. For more
details refer to [25]. Table 14 shows the comparative results
obtained by the competing studies.
Even though the pressure vessel problem is relatively harder to solve
than problems with continuous variables, EBA finds the minimum value
of this problem together with [46,81,24], as seen in Table 14.
Table 13
The comparative results on spring design problem.
Study Method x1 x2 x3 Cost FEN
Bernardino et al. (2007)d [67] GA-AIS 0.0516 0.3560 11.329 0.01267 36 000
He and Wang (2007) [78] PSO 0.0517 0.3576 11.244 0.01267 200 000
Hsu and Liu (2007)d [79] F-PDa 0.0523 0.3731 10.364 0.01265 N.A.
Bernardino et al. (2008) [68] GA-AIS 0.0514 0.3505 11.661 0.01267 36 000
Montes and Coello (2008) [80] ESb 0.0516 0.3553 11.397 0.01270 25 000
Aragon et al. (2010) [73] TCA 0.0516 0.3551 11.384 0.01267 36 000
Dos Santos Coelho (2010) [81] GQ-PSOc 0.0515 0.3525 11.538 0.01267 2000
Gandomi et al. (2013) [46] BA 0.0516 0.3567 11.288 0.01267 50 000
Sadollah et al. (2013)d [77] MBA 0.0516 0.3559 11.344 0.01267 7650
Gandomi (2014) [64] ISA N.A.e N.A. N.A. 0.01267 8000
Present study EBA 0.0519 0.3620 10.980 0.01267 5000
a Fuzzy proportional-derivative controller.
b Evolution strategies.
c Gaussian quantum.
d Violated studies.
e Not available.
Table 14
The comparative results on pressure vessel design problem.
Study Method Minimum Mean Maximum Std. dev. FEN
Mahdavi et al. (2007)b [82] IHSa 5849.76 N.A.c N.A. N.A. N.A.
Montes et al. (2007)b [83] DE 6059.70 6059.70 N.A. 0 24 000
He and Wang (2007) [78] PSO 6061.08 6147.13 6363.80 86.454 200 000
Bernardino et al. (2008) [68] GA-AIS 6059.85 7388.16 6545.12 124.00 80 000
Cagnina et al. (2008) [24] PSO 6059.71 N.A. N.A. N.A. 24 000
Montes and Coello (2008) [80] ES 6059.75 6850.00 7332.88 426.00 25 000
Zahara and Kao (2009)b [71] NM-PSO 5930.31 5946.79 5960.06 9.1614 80 000
Aragon et al. (2010) [73] TCA 6390.55 7694.06 6737.06 357.00 80 000
Dos Santos Coelho (2010) [81] GQ-PSO 6059.71 N.A. N.A. N.A. 8000
Datta and Figueira (2011)b [75] ID-PSO 5850.38 N.A. N.A. N.A. N.A.
Gandomi et al. (2013) [46] BA 6059.71 6179.13 6318.95 137.22 375 000
Sadollah et al. (2013)b [77] MBA 5889.32 6200.64 6392.50 160.34 70 650
Gandomi (2014) [64] ISA 6059.71 6410.08 7332.84 384.6 5000
Present study EBA 6059.71 6173.67 6370.77 142.33 15 000
a Improved harmony search.
b Violated studies.
c Not available.
Fig. 13. Pressure vessel.
When the "mean" values in the table are considered, it is noticed
that EBA is better than all of the studies except [78]. However, to
find such an objective value, He and Wang needed more than 10 times
the FEN that EBA needed. Note that only the studies producing
feasible solutions have been regarded in the evaluation.
6. Conclusion
BA is one of the recently proposed heuristic algorithms and provides
efficient, or at least adequate, solutions to different types of
problems. In contrast to traditional methods such as gradient-based
algorithms, applying heuristic algorithms to a problem is rather
convenient for researchers owing to their lucidity and applicability.
However, as with other heuristics, BA also has some insufficiencies
in its exploration and exploitation capabilities, especially when
solving unimodal functions and multimodal functions having more than
one local minimum. In this study, the global and local search
capabilities of the standard BA, which has so far undergone little
further development, have been enhanced by three approaches (IS1, IS2
and IS3). IS1 has been proposed to balance these search capabilities
during the optimization process depending on the requirements of BA.
IS2 contributes to the dispersion of the solutions of BA through the
search space. IS3 focuses on the exploitation capability rather than
exploration and sharpens it toward the end of the optimization
process. In the experimental section (see Section 5), the
contribution of each modification has been analyzed on the different
types of functions in Table 1 with various dimensions. Then, the
unconstrained unimodal and multimodal benchmark functions presented
in Ref. [23] and three well-known constrained engineering design
problems, which are rather tough to solve, have been used to
investigate the superiority and robustness of the proposed method
(EBA). Furthermore, to measure the efficiency of EBA against existing
improvement studies, an experiment comprising only five functions has
been carried out. The innovative aspect of the proposed method is
that it finds better fitness and cost values for unconstrained and
constrained problems, respectively. When the proposed method is
compared with the standard BA and GA on unconstrained functions in
terms of the statistical t value, it is seen that the proposed method
is better than BA on almost all optimization processes and better
than GA on most of them. On the other hand, the results obtained from
the real-world problems reveal that EBA produces feasible solutions
and minimum (also optimum) cost values without exceeding the
boundaries. As prospective work, it is planned to investigate the
performance of EBA both on state-of-the-art benchmark functions, as
in Refs. [84,85], and on Artificial Neural Network (ANN) training
problems. Furthermore, as the optimization results of DE with the
different natural constraint handling schemes presented in Ref. [65]
are rather intriguing, investigating the performance of EBA with
these schemes is also planned as a piece of future work.
References
[1] S. Rao, Engineering Optimization: Theory and Practice, New Age International,
1996.
[2] E.K.P. Chong, S.H. Zak, An Introduction to Optimization (Wiley-Interscience
Series in Discrete Mathematics and Optimization), third ed., Wiley-
Interscience, 2008.
[3] X. Yang, Nature-Inspired Metaheuristic Algorithms, second ed., Luniver Press,
2010.
[4] M.M. Noel, A new gradient based particle swarm optimization algorithm for
accurate computation of global minimum, Appl. Soft Comput. 12 (1) (2012)
353–359.
[5] C. Blum, A. Roli, Metaheuristics in combinatorial optimization: overview and
conceptual comparison, ACM Comput. Surv. 35 (3) (2003) 268–308.
[6] X.-S. Yang, Optimization and metaheuristic algorithms in engineering, in: X.-S.
Yang, A.H. Gandomi, S. Talatahari, A.H. Alavi (Eds.), Metaheuristics in Water,
Geotechnical and Transport Engineering, Elsevier, Oxford, 2013, pp. 1–23.
[7] A.H. Gandomi, X.-S. Yang, S. Talatahari, A.H. Alavi, Metaheuristic algorithms
in modeling and optimization, in: A.H. Gandomi, X.-S. Yang, S. Talatahari,
A.H.A. Newnes (Eds.), Metaheuristic Applications in Structures and Infras-
tructures, Elsevier, Oxford, 2013, pp. 1–24, http://dx.doi.org/10.1016/B978-
0-12-398364-0.00001-2, URL http://www.sciencedirect.com/science/article/
pii/B9780123983640000012
[8] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Neural Networks,
1995. Proceedings. IEEE International Conference on, vol. 4, 1995, pp.
1942–1948.
[9] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of
cooperating agents, IEEE Trans. Syst. Man Cybern. B: Cybern. 26 (1) (1996)
29–41.
[10] S.-C. Chu, P.-W. Tsai, J.-S. Pan, Cat swarm optimization, in: Q. Yang, G. Webb
(Eds.), PRICAI 2006: Trends in Artificial Intelligence, Vol. 4099 of Lecture Notes
in Computer Science, Springer, Berlin, Heidelberg, 2006, pp. 854–858.
[11] X.-S. Yang, S. Deb, Cuckoo search via levy flights, in: Nature Biologically Inspired
Computing, 2009. NaBIC 2009. World Congress on, 2009, pp. 210–214.
[12] W.-T. Pan, A new fruit fly optimization algorithm: taking the financial distress
model as an example, Knowl. Based Syst. 26 (2012) 69–74.
[13] K. Krishnanand, D. Ghose, Glowworm swarm optimization for simultaneous
capture of multiple local optima of multimodal functions, Swarm Intell. 3 (2)
(2009) 87–124.
[14] A.H. Gandomi, A.H. Alavi, Krill herd: a new bio-inspired optimization algorithm,
Commun. Nonlinear Sci. Numer. Simul. 17 (12) (2012) 4831–4845.
[15] H. Duan, P. Qiao, Pigeon-inspired optimization: a new swarm intelligence opti-
mizer for air robot path planning, Int. J. Intell. Comput. Cybern. 7 (1) (2014)
24–37.
[16] J. Bansal, H. Sharma, S. Jadon, M. Clerc, Spider monkey optimization algorithm
for numerical optimization, Memet. Comput. 6 (1) (2014) 31–47.
[17] C. Sur, S. Sharma, A. Shukla, Egyptian vulture optimization algorithm – a new
nature inspired meta-heuristics for knapsack problem, in: P. Meesad, H. Unger,
S. Boonkrong (Eds.), The Ninth International Conference on Computing and
Information Technology (IC2IT2013). Vol. 209 of Advances in Intelligent Sys-
tems and Computing, Springer, Berlin, Heidelberg, 2013, pp. 227–237.
[18] X.-S. Yang, A new metaheuristic bat-inspired algorithm, in: J. Gonzlez, D. Pelta,
C. Cruz, G. Terrazas, N. Krasnogor (Eds.), Nature Inspired Cooperative Strategies
for Optimization (NICSO 2010), Vol. 284 of Studies in Computational Intelli-
gence, Springer, Berlin, Heidelberg, 2010, pp. 65–74.
[19] W.-F. Gao, S.-Y. Liu, A modified artificial bee colony algorithm, Comput.
Oper. Res. 39 (3) (2012) 687–697, http://dx.doi.org/10.1016/j.cor.2011.06.007.
[20] K. Tan, S. Chiam, A. Mamun, C. Goh, Balancing exploration and exploitation with
adaptive variation for evolutionary multi-objective optimization, Eur. J. Oper.
Res. 197 (2) (2009) 701–713, http://dx.doi.org/10.1016/j.ejor.2008.07.025.
[21] I. Fister Jr., D. Fister, X.-S. Yang, A hybrid bat algorithm, CoRR abs/1303.6310 (2013).
[22] A. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from
weed colonization, Ecol. Inf. 1 (4) (2006) 355–366, http://dx.doi.org/10.1016/j.
ecoinf.2006.07.003.
[23] D. Karaboga, B. Akay, A comparative study of artificial bee colony algo-
rithm, Appl. Math. Comput. 214 (1) (2009) 108–132, http://dx.doi.org/10.
1016/j.amc.2009.03.090.
[24] L.C. Cagnina, S.C. Esquivel, Solving engineering optimization problems with
the simple constrained particle swarm optimizer, Informatica 32 (3) (2008)
319–326.
[25] A. Gandomi, X.-S. Yang, Benchmark problems in structural optimization, in:
S. Koziel, X.-S. Yang (Eds.), Computational Optimization, Methods and Algo-
rithms, Vol. 356 of Studies in Computational Intelligence, Springer, Berlin,
Heidelberg, 2011, pp. 259–281.
[26] A.H. Gandomi, X.-S. Yang, Chaotic bat algorithm, J. Comput. Sci. 5 (2) (2014)
224–232.
[27] Z. Geem, J. Kim, G. Loganathan, A new heuristic optimization algorithm: har-
mony search, Simulation 76 (2) (2001) 60–68.
[28] G. Wang, L. Guo, A novel hybrid bat algorithm with harmony search for global
numerical optimization, J. Appl. Math. 2013 (2013) 21.
[29] R.Y.M. Nakamura, L.A.M. Pereira, K.A. Costa, D. Rodrigues, J.P. Papa, X.S. Yang,
BBA: a binary bat algorithm for feature selection, in: Graphics, Patterns and
Images (SIBGRAPI), 2012 25th SIBGRAPI Conference on, 2012, pp. 291–297.
[30] L. Li, Y. Zhou, A novel complex-valued bat algorithm, Neural Comput. Appl.
(2014) 1–13.
[31] E. Ali, Optimization of power system stabilizers using BAT search algorithm,
Int. J. Electr. Power Energy Syst. 61 (2014) 683–690.
[32] O. Hasancebi, T. Teke, O. Pekcan, A bat-inspired algorithm for structural
optimization, Comput. Struct. 128 (2013) 77–90, http://dx.doi.org/10.1016/
j.compstruc.2013.07.006.
[33] O. Hasancebi, S. Carbas, Bat inspired algorithm for discrete size optimization
of steel frames, Adv. Eng. Softw. 67 (2014) 173–185, http://dx.doi.org/10.1016/
j.advengsoft.2013.10.003.
[34] J.-H. Lin, C.-W. Chou, C.-H. Yang, H.-L. Tsai, A chaotic Lévy flight bat algo-
rithm for parameter estimation in nonlinear dynamic biological systems, J.
Comput. Inf. Technol. 2 (2) (2012) 56–63.
[35] J.-H. Lin, C.-W. Chou, C.-H. Yang, H.-L. Tsai, A novel bat algorithm based on
differential operator and Lévy flights trajectory, Comput. Intell. Neurosci. 2013
(2013) 13.
[36] A. Baziar, A. Kavoosi-Fard, J. Zare, A novel self adaptive modification approach
based on bat algorithm for optimal management of renewable MG, J. Intell.
Learn. Syst. Appl. 5 (1) (2013) 11–18.
[37] A.M. Taha, A. Tang, Bat algorithm for rough set attribute reduction, J. Theor.
Appl. Inf. Technol. 51 (1) (2013) 1–8.
[38] P.-W. Tsai, J.S. Pan, B.Y. Liao, M.J. Tsai, V. Istanda, Bat algorithm inspired algo-
rithm for solving numerical optimization problems, Appl. Mech. Mater. 148
(2012) 134–137.
[39] I. Fister Jr., D. Fister, I. Fister, Differential evolution strategies with random forest
regression in the bat algorithm, in: Proceedings of the 15th Annual Conference
Companion on Genetic and Evolutionary Computation, GECCO ‘13 Companion,
2013, pp. 1703–1706.
[40] S. Yılmaz, E.U. Kucuksille, Y. Cengiz, Modified bat algorithm, Elektron. Elek-
trotech. 20 (2) (2014) 71–78.
[41] X. Cai, L. Wang, Q. Kang, W. Qidi, Bat algorithm with Gaussian walk, Int. J.
Bio-Inspired Comput. 6 (3) (2014) 166–174.
[42] S. Sabba, S. Chikhi, A discrete binary version of bat algorithm for multidimen-
sional knapsack problem, Int. J. Bio-Inspired Comput. 6 (2) (2014) 140–152.
[43] S. Tabatabaei, A new stochastic framework for optimal generation schedul-
ing considering wind power sources, J. Intell. Fuzzy Syst. 26 (3) (2014)
1571–1579.
[44] J. Guo, Y. Gao, G. Cui, The navigation of mobile robot based on hybrid dijkstra
algorithm, J. Comput. Inf. Syst. 10 (9) (2014) 3879–3886.
[45] X.-S. Yang, A. Hossein Gandomi, Bat algorithm: a novel approach for global
engineering optimization, Eng. Comput. 29 (5) (2012) 464–483.
[46] A. Gandomi, X.-S. Yang, A. Alavi, S. Talatahari, Bat algorithm for con-
strained optimization tasks, Neural Comput. Appl. 22 (6) (2013) 1239–1255,
http://dx.doi.org/10.1007/s00521-012-1028-9.
[47] X.S. Yang, S. Deb, S. Fong, Bat algorithm is better than intermittent search
strategy, J. Multiple-Valued Logic Soft Comput. 22 (3) (2014) 223–237.
[48] W. Peres, E.J. de Oliveira, J.A.P. Filho, I.C. da Silva Junior, Coordinated tuning
of power system stabilizers using bio-inspired algorithms, Int. J. Electr. Power
Energy Syst. 64 (2015) 419–428.
[49] M. Sathya, M.M.T. Ansari, Load frequency control using bat inspired algorithm
based dual mode gain scheduling of PI controllers for interconnected power
system, Int. J. Electr. Power Energy Syst. 64 (2015) 365–374.
[50] D. Rodrigues, L.A. Pereira, R.Y. Nakamura, K.A. Costa, X.-S. Yang, A.N. Souza, J.P.
Papa, A wrapper approach for feature selection based on bat algorithm and
optimum-path forest, Expert Syst. Appl. 41 (5) (2014) 2250–2258.
[51] T. Bora, L. Coelho, L. Lebensztajn, Bat-inspired optimization approach for the
brushless dc wheel motor problem, IEEE Trans. Magn. 48 (2) (2012) 947–950.
[52] S. Biswal, A. Barisal, A. Behera, T. Prakash, Optimal power dispatch using bat
algorithm, in: Energy Efficient Technologies for Sustainability (ICEETS), 2013
International Conference on, 2013, pp. 1018–1023.
[53] K. Khan, A. Sahai, A comparison of BA, GA, PSO, BP and LM for training feed forward
neural networks in e-learning context, Int. J. Intell. Syst. Appl. 4 (7) (2012)
23–29.
[54] S. Akhtar, A.R. Ahmad, E.M. Abdel-Rahman, A metaheuristic bat-inspired algo-
rithm for full body human pose estimation, in: Computer and Robot Vision
(CRV), 2012 Ninth Conference on, 2012, pp. 369–375.
[55] S. Sakthivel, R. Natarajan, P. Gurusamy, Application of bat optimization algo-
rithm for economic load dispatch considering valve point effects, Int. J. Comput.
Appl. 67 (11) (2013) 35–39.
[56] M. Marichelvam, T. Prabaharan, Y. Xin-She, M. Geetha, Solving hybrid flow shop
scheduling problems using bat algorithm, Int. J. Logist. Econ. Glob. 5 (1) (2013)
15–29.
[57] P. Musikapun, P. Pongcharoen, Solving multi-stage multi-machine multi-
product scheduling problem using bat algorithm, Int. Proc. Econ. Dev. Res. 35
(2012) 98–102.
[58] X. Cai, W. Li, L. Wang, Q. Kang, Q. Wu, X. Huang, Bat algorithm with Gaussian
walk for directing orbits of chaotic systems, Int. J. Comput. Sci. Math. 5 (2)
(2014) 198–208.
[59] S. Gholizadeh, A.M. Shahrezaei, Optimal placement of steel plate shear walls
for steel frames by bat algorithm, Struct. Des. Tall Spec. Build. 24 (1) (2014)
1–18.
[60] M. Fenton, Bat natural history and echolocation, in: R. Brigham, K. Elisabeth, J.
Gareth, P. Stuart, A. Herman (Eds.), Bat Echolocation Research Tools, Techniques
and Analysis, Bat Conservation International, 2004, pp. 2–6.
[61] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Evolution-
ary Computation Proceedings, 1998. IEEE World Congress on Compu-
tational Intelligence. The 1998 IEEE International Conference on, 1998,
pp. 69–73.
[62] S. Yılmaz, E.U. Kucuksille, Improved bat algorithm (IBA) on continuous opti-
mization problems, Lect. Notes Softw. Eng. 1 (3) (2013) 279–283.
[63] A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimiza-
tion algorithm with adaptive inertia weight, Appl. Soft Comput. 11 (4) (2011)
3658–3670, http://dx.doi.org/10.1016/j.asoc.2011.01.037.
[64] A.H. Gandomi, Interior search algorithm (ISA): a novel approach for global
optimization, ISA Trans. 53 (4) (2014) 1168–1183, http://dx.doi.org/10.1016/j.
isatra.2014.03.018.
[65] A. Gandomi, X.-S. Yang, Evolutionary boundary constraint handling scheme,
Neural Comput. Appl. 21 (6) (2012) 1449–1462.
[66] I. Fister Jr., D. Fister, I. Fister, Differential evolution strategies with random forest
regression in the bat algorithm, in: Proceedings of the 15th Annual Conference
Companion on Genetic and Evolutionary Computation, GECCO ‘13 Companion,
2013, pp. 1703–1706.
[67] H. Bernardino, I. Barbosa, A. Lemonge, A hybrid genetic algorithm for con-
strained optimization problems in mechanical engineering, in: Evolutionary
Computation, 2007. CEC 2007. IEEE Congress on, 2007, pp. 646–653.
[68] H. Bernardino, I. Barbosa, A. Lemonge, L. Fonseca, A new hybrid AIS-GA for
constrained optimization problems in mechanical engineering, in: Evolution-
ary Computation, 2008. CEC 2008 (IEEE World Congress on Computational
Intelligence). IEEE Congress on, 2008, pp. 1455–1462, http://dx.doi.org/10.
1109/CEC.2008.4630985.
[69] E. Mezura-Montes, B. Hernández-Ocaña, Bacterial foraging for engineering
design problems: Preliminary results, in: Proceedings of the Fourth Mexican
congress on evolutionary computation (COMCEV 2008), 2008.
[70] M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic stochastic
selection for constrained optimization, Inf. Sci. 178 (15) (2008) 3043–3074,
http://dx.doi.org/10.1016/j.ins.2008.02.014.
[71] E. Zahara, Y.-T. Kao, Hybrid Nelder–Mead simplex search and particle swarm
optimization for constrained engineering design problems, Expert Syst.
Appl. 36 (2, Part 2) (2009) 3880–3886, http://dx.doi.org/10.1016/
j.eswa.2008.02.039, URL http://www.sciencedirect.com/science/article/pii/
S0957417408001735
[72] J. Zhang, C. Liang, Y. Huang, J. Wu, S. Yang, An effective multiagent evolutionary
algorithm integrating a novel roulette inversion operator for engineering opti-
mization, Appl. Math. Comput. 211 (2) (2009) 392–416, http://dx.doi.org/10.
1016/j.amc.2009.01.048.
[73] V.S. Aragón, S.C. Esquivel, C.A.C. Coello, A modified version of a t-cell algo-
rithm for constrained optimization problems, Int. J. Numer. Methods Eng. 84
(3) (2010) 351–378, http://dx.doi.org/10.1002/nme.2904.
[74] A. Kaveh, S. Talatahari, A novel heuristic optimization method: charged
system search, Acta Mech. 213 (3-4) (2010) 267–289, http://dx.doi.org/10.
1007/s00707-009-0270-4.
[75] D. Datta, J.R. Figueira, A real-integer-discrete-coded particle swarm opti-
mization for design problems, Appl. Soft Comput. 11 (4) (2011) 3625–3633,
http://dx.doi.org/10.1016/j.asoc.2011.01.034, URL http://www.sciencedirect.
com/science/article/pii/S1568494611000445
[76] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Mixed variable structural optimiza-
tion using firefly algorithm, Comput. Struct. 89 (23-24) (2011) 2325–2336,
http://dx.doi.org/10.1016/j.compstruc.2011.08.002.
[77] A. Sadollah, A. Bahreininejad, H. Eskandar, M. Hamdi, Mine blast algorithm: A
new population based algorithm for solving constrained engineering optimiza-
tion problems, Appl. Soft Comput. 13 (5) (2013) 2592–2612, http://dx.doi.org/
10.1016/j.asoc.2012.11.026, URL http://www.sciencedirect.com/science/
article/pii/S1568494612005108
[78] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for
constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (1) (2007)
89–99, http://dx.doi.org/10.1016/j.engappai.2006.03.003.
[79] Y.-L. Hsu, T.-C. Liu, Developing a fuzzy proportional-derivative controller opti-
mization engine for engineering design optimization problems, Eng. Optim. 39
(6) (2007) 679–700, http://dx.doi.org/10.1080/03052150701252664.
[80] E. Mezura-Montes, C.A.C. Coello, An empirical study about the usefulness of
evolution strategies to solve constrained optimization problems, Int. J. Gen.
Syst. 37 (4) (2008) 443–473, http://dx.doi.org/10.1080/03081070701303470.
[81] L. dos Santos Coelho, Gaussian quantum-behaved particle swarm optimization
approaches for constrained engineering design problems, Expert Syst. Appl. 37
(2) (2010) 1676–1683, http://dx.doi.org/10.1016/j.eswa.2009.06.044.
[82] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algo-
rithm for solving optimization problems, Appl. Math. Comput. 188 (2) (2007)
1567–1579, http://dx.doi.org/10.1016/j.amc.2006.11.033.
[83] E. Mezura-Montes, C.A. Coello Coello, J. Velázquez-Reyes, L. Muñoz-
Dávila, Multiple trial vectors in differential evolution for engineering
design, Eng. Optim. 39 (5) (2007) 567–589, http://dx.doi.org/10.1080/
03052150701364022.
[84] J.J. Liang, B.Y. Qu, P.N. Suganthan, Problem definitions and evaluation cri-
teria for the CEC 2013 special session and competition on single objective
real-parameter numerical optimization, Tech. rep., Computational Intelligence
Laboratory, Zhengzhou University, 2013.
[85] J.J. Liang, B.Y. Qu, P.N. Suganthan, Problem definitions and evaluation cri-
teria for the CEC 2014 special session and competition on single objective
real-parameter numerical optimization, Tech. rep., Computational Intelligence
Laboratory, Zhengzhou University, 2013.

∗ Corresponding author. Tel.: +90 3122977500. E-mail addresses: selimy@hacettepe.edu.tr (S. Yılmaz), ecirkucuksille@sdu.edu.tr (E.U. Küçüksille).

1. Introduction

Optimization is an effort to obtain the optimal solution of a problem under given circumstances. The crucial task of optimization is to minimize the wasted time or maximize the desired benefit of a given engineering system. Every system to be optimized has an objective function and several decision variables that affect that function [1]. Optimization methods can thus be defined as processes of achieving optimal solutions that satisfy a given objective function [2]. Optimization algorithms are generally divided into two groups: deterministic and stochastic algorithms. Deterministic algorithms contain no operators that introduce randomness; this type of algorithm produces the same result as long as its initial conditions remain constant. Stochastic algorithms, on the other hand, tend to produce a different solution at each run even when their initial conditions remain constant, owing to their random nature. Most deterministic algorithms use gradient information and are ideal for unimodal functions with a single global optimum, whereas they can be troublesome for multimodal functions with several local optima or for functions containing flat regions where the gradient is small. Stochastic algorithms are preferred for such functions because they can escape local minima easily, despite their slower convergence speed [3,4]. Stochastic algorithms are categorized into two groups: heuristic and metaheuristic algorithms.
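The contrast between deterministic and stochastic algorithms can be illustrated with a toy one-dimensional minimization of f(x) = x². Both routines below are hypothetical sketches introduced only for illustration; they are not part of the paper:

```python
import random

def deterministic_descent(x, step=0.1, iters=100):
    """Fixed-rule search on f(x) = x*x: identical runs give identical results."""
    for _ in range(iters):
        # always move in the direction that does not increase f; no randomness
        x = x - step if (x - step) ** 2 < (x + step) ** 2 else x + step
    return x

def stochastic_search(x, iters=100, rng=None):
    """Random-walk search on f(x) = x*x: results vary from run to run."""
    rng = rng or random.Random()
    best = x
    for _ in range(iters):
        cand = best + rng.uniform(-0.5, 0.5)   # random perturbation
        if cand ** 2 < best ** 2:              # keep improvements only
            best = cand
    return best

# identical initial conditions, different behavior across runs
print(deterministic_descent(3.0) == deterministic_descent(3.0))   # True
print(stochastic_search(3.0) == stochastic_search(3.0))           # almost surely False
```

Rerunning the deterministic routine reproduces its result exactly, while the stochastic routine only reproduces its result if its random generator is seeded identically.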
The term heuristic refers to algorithms that produce high-quality results by trial and error within an acceptable computational time. The prefix meta means "beyond, in an upper level", so the term metaheuristic refers to a higher level of heuristics; studies in the literature tend to refer to all new stochastic algorithms as metaheuristics [5,6]. Heuristic algorithms are generally inspired by nature, hence they are also called nature-inspired algorithms. Their flexible and easily applicable structure has made them very popular in recent years. Swarm algorithms, regarded as a subset of heuristic algorithms, have been developed by drawing inspiration from the various types of collaborative behavior that swarms exhibit while carrying out a task [7]. There is ample literature on this type of algorithm. Kennedy and Eberhart proposed Particle Swarm Optimization (PSO), inspired by the social and cognitive behavior of fish or bird swarms [8].
Dorigo et al., inspired by the foraging behavior of ants, proposed the Ant Colony Optimization algorithm (ACO) [9]. Chu et al. introduced the Cat Swarm Optimization algorithm (CSO) by observing the behavior of cats as they trace and catch their prey [10]. Yang and Deb proposed the Cuckoo Search algorithm (CS), inspired by the obligate brood parasitism behavior of cuckoo species [11]. Pan proposed the Fruit Fly Optimization (FFO) algorithm based on the food acquisition behavior of fruit flies [12]. Krishnanand and Ghose were inspired by the luminescence capability of glowworms and transformed it into the Glowworm Swarm Optimization algorithm (GSO) [13]. Gandomi and Alavi proposed an algorithm named Krill Herd (KH) based on the herding behavior of krill individuals [14]. Duan and Qiao, inspired by the homing behavior of pigeons, introduced the Pigeon-Inspired Optimization (PIO) algorithm [15]. Bansal et al. proposed a new swarm algorithm named Spider Monkey Optimization (SMO) [16]. Sur et al. transformed the food acquisition behavior of the Egyptian vulture into an algorithm they named the Egyptian Vulture Optimization Algorithm (EVOA) [17]. The bat algorithm (BA), proposed by Yang in 2010, is also a swarm-based metaheuristic algorithm, inspired by a property known as echolocation. Echolocation is a type of sonar that guides bats in their flying and hunting behavior. Thanks to this capability, bats can not only move but also distinguish different types of insects even in complete darkness [18]. Two crucial components affect the search characteristic of an algorithm: exploration (also called diversification) and exploitation (also called intensification). Exploration is the capability of an algorithm to find promising solutions by scanning various unknown regions, while exploitation improves the solutions obtained by exploration.
Exploration can move solutions away from a region where they are stuck, while exploitation increases the convergence speed of an algorithm [19]. Many studies in the literature indicate that an algorithm's exploration capability should be employed first, so that the algorithm scans the whole search space, and its exploitation capability should be employed later in the optimization process, to improve the solutions obtained by exploration [20]. BA is a powerful algorithm that produces robust solutions on low-dimensional functions, but its performance diminishes as the dimension of the problem increases [21]. This study aims to improve the exploration and exploitation properties of the algorithm. For this purpose, two modification structures have been embedded into the bat algorithm and it has been hybridized with the Invasive Weed Optimization algorithm [22]. In order to verify the superiority of the proposed method, the Enhanced Bat Algorithm (EBA) has been compared with the standard BA and with the results of a Genetic Algorithm (GA) [23] in terms of optimization quality, within negligible CPU time, on 50 unconstrained benchmark test functions with continuous variables. Furthermore, the superiority of EBA has been measured by comparing it with some existing improved versions of BA. EBA has also been compared with some studies in the literature on three well-known constrained real-world engineering problems with continuous and discrete variables, taken from [24,25]: welded beam, spring design and pressure vessel. The results obtained on the unconstrained benchmark test functions show that the proposed method is superior to the standard algorithm. The enhanced algorithm has also proven competitive with, and better than, most of the algorithms suggested by other studies on real-world problems.
The organization of this paper is as follows: a literature survey of BA is given in Section 2; BA and EBA are described in Sections 3 and 4; Section 5 introduces the unconstrained and constrained benchmark results; finally, the contribution of the proposed method on unconstrained and constrained problems is demonstrated in Section 6.

2. Literature review

Although BA was proposed recently, there are many variants of BA in the literature, introduced as modification or implementation studies for different sorts of problems. From the modification point of view, Gandomi and Yang embedded a chaos mechanism into the bat algorithm to enhance its global search behavior and optimized unconstrained functions with different chaotic maps [26]. Fister et al. hybridized BA to overcome its deficiency, especially on higher-dimensional problems [21]. Wang and Guo hybridized BA with the Harmony Search algorithm [27,28]. Nakamura et al. introduced a discrete version of the bat algorithm to solve classification and feature selection problems, presenting the superiority of BA over well-known swarm-based techniques [29]. Li and Zhou presented a new complex-valued bat algorithm to increase the diversity of the population and thus improve its exploration capability [30]. Ali proposed a new metaheuristic method based on the bat algorithm for optimal design of power system stabilizers in a multi-machine environment [31]. Hasançebi et al. proposed a new algorithm that makes use of BA for structural optimization [32]. Hasançebi and Carbas solved the discrete size optimization of steel frames designed for minimum weight with a BA-inspired method and compared the results with other metaheuristics [33]. Lin et al. proposed a chaotic bat algorithm using Lévy flights and chaotic maps for parameter estimation of dynamic biological systems [34]. The search capabilities of BA were also improved in the studies [35–44].
As for implementation studies, Yang and Gandomi presented the superiority of the bat algorithm over other studies in the literature on well-known constrained benchmark functions [45]. Gandomi et al. used BA to solve both well-known benchmark and real-world constrained problems [46]. Yang et al. compared the efficiency of BA with the so-called intermittent search methods [47]. Peres et al. compared BA with other compelling metaheuristic algorithms on the power system stabilizer tuning problem [48]. Sathya and Ansari employed a BA-based dual-mode PI controller to tune the parameters of PI controllers in a multi-area interconnected thermal power system [49]. Rodrigues et al. presented a feature selection approach based on BA [50]. Bora et al. optimized mono- and multi-objective brushless DC wheel motor problems with BA and compared the results with other optimization approaches [51]. Biswal et al. employed the bat algorithm to optimize the operating cost of a thermal power plant [52]. Apart from those, BA was used to solve different kinds of problems in various other studies [53–59].

3. Bat algorithm

The bat algorithm (BA) is a heuristic algorithm proposed by Yang in 2010. It is based on the echolocation capability that guides micro bats in their foraging behavior [18].

3.1. Echolocation capability of bats

Most bat species use a type of sonar, called echolocation, to communicate, recognize different types of insects, sense the distance to their prey, and move without hitting any obstacle even in complete darkness. Like all animals with echolocation capability, bats emit pulses whose frequencies range from high pitch (>200 kHz) to low pitch (∼10 kHz). Upon hitting the objects or prey around a bat, these pulses form echoes; the bat listens to an echo and then analyzes and evaluates the codes it carries [60]. Benefiting from these features of bats, the echolocation characteristics are idealized within the framework of the following rules:
• All bats use echolocation to sense distance, and they also "know" the difference between food/prey and background barriers in some magical way.
• Bats fly randomly with velocity v_i at position x_i with a frequency f_min, varying wavelength and loudness A_0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target.
• Although the loudness can vary in many ways, we assume that the loudness varies from a large (positive) A_0 to a minimum constant value A_min.

3.2. The structure of bat algorithm

(a) Initialization of the bat population. The search space is assumed to be a region containing many prey sources, and the algorithm tends to find the highest-quality (optimum) food in this space. Because the locations of the food sources are not known, the initial population of N real-valued vectors of dimension d is randomly generated, taking the lower and upper boundaries into account. Then the quality of the food sources located by the population is evaluated.

x_ij = x_min,j + φ (x_max,j − x_min,j)  (1)

where i = 1, 2, ..., N; j = 1, 2, ..., d; x_max,j and x_min,j are the upper and lower boundaries for dimension j, respectively; and φ is a randomly generated value ranging from 0 to 1.

(b) Generation of frequency, velocity and new solutions. The evaluated fitness values of all bats influence their movements. Bats fly with a velocity v_i that is affected by a randomly predefined frequency f_i, and finally they locate their new position x_i in the search space.
f_i = f_min + β (f_max − f_min)  (2)
v_i^t = v_i^{t−1} + (x_i^t − x*) f_i  (3)
x_i^t = x_i^{t−1} + v_i^t  (4)

where f_i is the frequency value of the ith bat, f_min and f_max are the minimum and maximum frequency values, respectively, β is a randomly generated value, x* is the global best location (solution) obtained so far by comparing the solutions of all N bats, and v_i^t is the velocity of the ith bat at time step t.

(c) Local search capability of the algorithm. In order to improve the local search capability of the algorithm, Yang created a structure that lets a bat improve the solution in the neighborhood of an already obtained one.

x_new = x_old + ε Ā^t  (5)

where x_old is a high-quality solution chosen by some mechanism (e.g. roulette wheel), Ā^t is the average loudness of all bats at time step t, and ε is a randomly generated value ranging from −1 to 1.

(d) Loudness and pulse emission rate. The loudness A and pulse emission rate r are updated as a bat gets closer to its target, namely its prey. The loudness A is decreased while the pulse emission rate r is increased, according to Eqs. (6) and (7), respectively (see Fig. 1).

A_i^{t+1} = α A_i^t  (6)
r_i^{t+1} = r_i^0 (1 − e^{−γt})  (7)

where α and γ are constants and r_i^0 is the initial pulse emission rate of the ith bat.

The pseudo-code and flow chart of the algorithm are given in Algorithm 1 and Fig. 2, respectively.

Algorithm 1. Pseudo-code of the bat algorithm
1 Initialize bat population x_i and velocity v_i
2 Define frequency f_i
3 Initialize pulse emission rate r and loudness A
4 repeat
5   Generate new solutions by adjusting frequency and updating velocity and location by Eqs. (2)–(4)
6   if rand > r_i then
7     Select a solution among the best solutions
8     Generate a new local solution around the selected best solution
9   end
10  Generate a new solution by flying randomly
11  if rand < A_i and f(x_i) < f(x*) then
12    Accept the new solution
13    Decrease A_i and increase r_i by Eqs. (6) and (7)
14  end
15  Rank the bats and find the current best x*
16 until termination criterion is met
17 Postprocess results and visualization

Fig. 1. The changes of r and A with iterations.
Fig. 2. Flowchart of bat algorithm.

4. Enhanced Bat Algorithm

The standard bat algorithm is weak in both exploration and exploitation. In order to tackle this problem, three different improvement structures, called IS1, IS2 and IS3, are proposed for the original algorithm.

4.1. Inertia weight modification (IS1)

The update processes of velocity and location in the algorithm have some similarities with PSO [8], and the standard bat algorithm shares some of PSO's deficiencies. In order to overcome this issue, the following modification structure is proposed, inspired by the study in [61]. When the velocity equation (Eq. (3)) is analyzed, it can be seen that it consists of two parts. The first term defines the velocity of the population, namely the step size, while the second term affects the velocity of the ith solution under the guidance of the best solution (x*). The first and second terms contribute to the global and local search of the algorithm, respectively. If only the first term of Eq. (3) affected the solutions, they would overflow the search space by keeping their velocities and directions, thus rapidly reducing their convergence speed. On the other hand, if only the second term of Eq. (3) affected the solutions, they would converge to a region somewhere around the global best solution (x*) and could therefore face the premature convergence problem. The main purpose of this modification is to intensify the first term of the equation at the beginning of the optimization process and the second term toward its end. The modified equation is given below.

v_i^t = ω v_i^{t−1} + (x_i^t − x*) f_i  (8)

where ω is the inertia weight factor that balances the global and local search intensity of the ith solution by controlling the magnitude of the old velocity v. This modification structure was also utilized in [62], and its superiority to the bat algorithm was proven on unconstrained benchmark test functions.

4.2. Distribution of the population modification (IS2)

As explained before, the second term of Eq. (3) provides local search under the guidance of the best solution in the standard algorithm. Exclusive usage of this term may cause the premature convergence problem, so that solutions get stuck at a local minimum; the inertia weight factor has been proposed to deal with this issue.
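For concreteness, the base update rules of Eqs. (1)–(7) together with the inertia-weight modification of Eq. (8) can be sketched as below. The sphere objective, the parameter values and the linearly decreasing schedule for ω are illustrative assumptions only; the paper does not prescribe them:

```python
import math
import random

rng = random.Random(42)

# Assumed toy objective (not from the paper): the sphere function.
def sphere(x):
    return sum(c * c for c in x)

N, d, T = 20, 5, 200                 # population size, dimension, iterations
x_lo, x_hi = -5.0, 5.0               # search-space boundaries
f_lo, f_hi = 0.0, 2.0                # frequency range of Eq. (2)
alpha, gamma, r0 = 0.9, 0.9, 0.5     # constants of Eqs. (6) and (7)

def clip(c):
    return min(max(c, x_lo), x_hi)

# Eq. (1): random initialization inside the boundaries
x = [[x_lo + rng.random() * (x_hi - x_lo) for _ in range(d)] for _ in range(N)]
v = [[0.0] * d for _ in range(N)]
fit = [sphere(b) for b in x]
best = list(x[fit.index(min(fit))])
fit0 = min(fit)                      # initial best fitness, kept for comparison

A = [1.0] * N                        # loudness A_i
r = [0.0] * N                        # pulse emission rate r_i

for t in range(1, T + 1):
    omega = 1.0 - t / T              # one possible decreasing schedule for omega in Eq. (8)
    A_mean = sum(A) / N              # average loudness used in Eq. (5)
    for i in range(N):
        f_i = f_lo + rng.random() * (f_hi - f_lo)                  # Eq. (2)
        v[i] = [omega * v[i][j] + (x[i][j] - best[j]) * f_i        # Eq. (8);
                for j in range(d)]                                 # Eq. (3) has omega = 1
        x_new = [clip(x[i][j] + v[i][j]) for j in range(d)]        # Eq. (4)
        if rng.random() > r[i]:                                    # local search branch
            x_new = [clip(best[j] + rng.uniform(-1, 1) * A_mean)   # Eq. (5)
                     for j in range(d)]
        f_new = sphere(x_new)
        if rng.random() < A[i] and f_new < fit[i]:                 # acceptance
            x[i], fit[i] = x_new, f_new
            A[i] *= alpha                                          # Eq. (6)
            r[i] = r0 * (1.0 - math.exp(-gamma * t))               # Eq. (7)
    best = list(x[fit.index(min(fit))])

print(min(fit))   # best fitness found
```

Setting omega = 1.0 throughout recovers the standard velocity update of Eq. (3), so the same loop serves both the original algorithm and IS1.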
When the best solution is near a local minimum toward the end of the optimization process, the ith solution has no chance to escape from that undesired local minimum, since at that stage its movement depends on the best solution. This modification is described in detail in Fig. 3, which shows a case in which the best solution in the population gets stuck at a local minimum. In such a case, if the ith solution moves by regarding only the best solution, it also converges to that local minimum and cannot tend toward promising regions. Provided that the kth solution also affects the ith solution, the algorithm produces better solutions. For this purpose, the velocity equation has been modified so that the kth solution can also affect the ith solution:

v_i^t = \omega v_i^{t-1} + (x_i^t - x^*) f_i \zeta_1 + (x_i^t - x_k^t) f_i \zeta_2    (9)

\zeta_1 + \zeta_2 = 1    (10)

where x_k is one of the best solutions randomly chosen from the population (i ≠ k), and ζ1 is a learning factor ranging from 0 to 1. As the value of ζ1 increases, the effect of the best solution (x*) becomes higher than that of the kth solution, and vice versa. The ζ1 value has to be updated
Fig. 3. Distribution of the population toward the end of the iterations on the Schwefel function.

Fig. 4. The changes of ζ1 and ζ2 with iterations.

as iterations proceed so that the solution can switch from global to local search:

\zeta_1 = 1 + (\zeta_{init} - 1) \frac{(iter_{max} - iter)^n}{(iter_{max})^n}    (11)

where ζinit is the initial impact factor of ζ1, iter_max is the maximal number of iterations, iter is the current iteration number, and n is a nonlinear modulation index. The states of ζ1 and ζ2 are demonstrated in Fig. 4 for ζinit = 0.6 and n = 3.

4.3. Hybridization with invasive weed optimization (IS3)

When the pseudo code of the algorithm is analyzed (see Algorithm 1), it is easily seen that local search is performed only by the solutions that satisfy the condition in line 6. In other words, when a randomly generated value ranging from 0 to 1 is greater than the pulse emission value of the ith solution, the solution of interest performs local search. As expressed before, the pulse emission rate r increases as iterations proceed (Fig. 1); thus, the local search ability of the algorithm weakens. To prevent this lack of local search capability, the algorithm has been hybridized (low-level relay hybrid) with the Invasive Weed Optimization (IWO) algorithm, which has a good exploration capability [22].

Fig. 5. Seed production in IWO.

After the solutions perform the search process, they form the population of IWO. They fulfill all the steps of IWO introduced below, except the initialization phase of the population.

4.3.1.
Invasive weed optimization

The invasive weed optimization algorithm, inspired by invasive weed colonies, is a heuristic algorithm. The following rules mimic the colonizing behavior of invasive weeds:

• A finite number of seeds are spread over the search space (initialization of the population).
• Each seed flourishes into a flowering plant and produces seeds with respect to its fitness value (reproduction) (see Fig. 5):

s = s_{min} + (s_{max} - s_{min}) \frac{u_i - u_{worst}}{u_{best} - u_{worst}}    (12)

where s_max and s_min are the maximum and the minimum number of seeds to be produced, respectively; u_worst and u_best indicate the worst and the best fitness values, respectively; and u_i indicates the fitness value of the ith solution.
• The produced seeds are randomly spread over the space by Eq. (13):

\sigma_{iter} = \frac{(iter_{max} - iter)^n}{(iter_{max})^n} (\sigma_{initial} - \sigma_{final}) + \sigma_{final}    (13)

where iter_max is the maximum number of iterations, iter is the current iteration number, the initial and final standard deviation values are represented by σ_initial and σ_final, and n is a nonlinear modulation index. The dispersion of newly produced seeds during 500 iterations is demonstrated in Fig. 6; as seen in the figure, the distances of the seeds to their parent (located at the origin) reduce as iterations proceed.
• This procedure continues until the maximum population number (p_max) is reached; from then on, only the seeds with better fitness values survive and the others are eliminated (see Fig. 7).
• The algorithm runs until the termination criterion is met.

5. Experiments and discussions

5.1. The research on finding optimum initial values of some parameters

The parameters (A, r, f_max, ω, ζinit) existing in both algorithms (BA, EBA) have been trained on the functions given in Table 1 in
Table 1
The unconstrained functions used for analyzing.

No.  Function     C*  Formulation
1    Sphere       U   f(x) = \sum_{i=1}^{n} x_i^2
2    Rosenbrock   U   f(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
3    Ackley       M   f(x) = -20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}) - \exp((1/n)\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e
4    Griewangk    M   f(x) = (1/4000)\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos(x_i/\sqrt{i}) + 1
5    Rastrigin    M   f(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]
6    Zakharov     U   f(x) = \sum_{i=1}^{n} x_i^2 + (\sum_{i=1}^{n} 0.5 i x_i)^2 + (\sum_{i=1}^{n} 0.5 i x_i)^4
7    Step         U   f(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2
8    Dixon-Price  U   f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i(2x_i^2 - x_{i-1})^2
9    Easom        M   f(x) = -(-1)^n \prod_{i=1}^{n} \cos^2(x_i) \exp(-\sum_{i=1}^{n} (x_i - \pi)^2)
10   Michalewicz  M   f(x) = -\sum_{i=1}^{n} \sin(x_i) [\sin(i x_i^2/\pi)]^{2m}

Fig. 6. The distance of seeds to their parent during 500 iterations.

Fig. 7. Seed production and elimination of population.

order to gain better performance. The letters "U" and "M" in the table represent unimodal and multimodal functions, respectively. The population number (N), the dimension of a function (d), the function evaluation number (FEN) and the run time (R) have been set to 50, 10, 2 × 10^3 and 20, respectively. Only the average of the cost values obtained at the end of the runs has been considered in the comparison phase. The optimum parameter values used for the experiments in this study are summarized in Table 7.

(1) Loudness A: This parameter has been trained with different values ranging from 0 to 1. The results have been normalized and are shown in Table 2. As seen from the results, the algorithm produces better solutions as the loudness value increases, with some exceptions. Since the algorithm produces better cost values when A is 0.9 or 1, A has been set to 0.95.

(2) Pulse emission rate r: The pulse emission rate parameter has also been trained with values ranging from 0 to 1, with A set to 0.95.
The results are demonstrated in Table 3. Regardless of some exceptions, there is a positive correlation between the optimization performance and the pulse emission rate value. Since the best optimization performance has been obtained when r is 0.8 and 0.9, the average of these values (0.85) has been set.

(3) Maximum frequency value f_max: The frequency value belonging to each bat stands for the step size in the algorithm. As this value increases, the possibility of missing promising regions while seeking them also increases; on the other hand, the convergence speed reduces as the step sizes of the solutions decrease. In this study, the minimum frequency value (f_min) has been set to 0 and the maximum frequency value (f_max) has been trained with A and r set to 0.95 and 0.85, respectively. The results are shown in Table 4; as seen from the table, the algorithm produces the best result when f_max is 1.

(4) Inertia weight ω: Most strategies update the inertia value as iterations proceed. The "random (a), linear decreasing (b), nonlinear decreasing (c), chaotic term added linear decreasing (d), Sugeno function (e), linear or nonlinear decreasing (f)" inertia weight strategies (for further information refer to [63]) have been used in this study to train ω. From the results indicated in Table 5, it can easily be understood that the minimum value is obtained by the nonlinear decreasing strategy, so the nonlinear decreasing inertia weight strategy, shown in Eq. (14), has been chosen to update ω:

\omega = \left(\frac{iter_{max} - iter}{iter_{max}}\right)^n (\omega_{init} - \omega_{final}) + \omega_{final}    (14)
Table 2
Mean normalized values of A on numerical functions. (Rows: function No.; columns: A = 0 to 1 in steps of 0.1.)

No.   0     0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1
1     1.28  1.18  1.16  1.03  1.13  1.45  1.01  1     1.14  1.04  1.13
2     5.32  3.98  4.55  2.74  4.43  3.83  3.62  3.35  2.23  1.26  1
3     1.22  1.21  1.22  1.23  1.19  1.17  1.07  1.10  1     1.06  1
4     1.24  1.18  1     1.12  1.18  1.18  1.42  1.28  1.17  1.09  1.10
5     1.83  1.65  1.35  1.37  1.41  1.19  1.18  1.21  1.02  1     1.05
6     3.47  4.41  4.97  4.87  4.13  2.96  3.53  3.47  2.18  1.40  1
7     1.32  1.35  1.42  1.17  1.15  1     1.14  1.36  1.37  1.07  1.17
8     18.3  16.4  17.2  19.0  6.00  6.92  11.4  4.18  4.33  1.34  1
9     1.99  1.99  1.98  1.95  1.97  1.98  1.70  1.59  1.47  1.64  1
10    1.05  1.12  1.07  1.08  1.12  1.06  1.07  1.05  1.02  1.01  1
Avg.  3.71  3.44  3.60  3.55  2.37  2.27  2.72  1.95  1.69  1.19  1.04

Table 3
Mean normalized values of r on numerical functions. (Rows: function No.; columns: r = 0 to 1 in steps of 0.1.)

No.   0       0.1     0.2    0.3    0.4    0.5   0.6   0.7   0.8   0.9   1
1     17.79   7.73    4.27   3.44   2.30   2.31  1.61  1.65  1.35  1     1.20
2     360.70  56.79   29.64  11.30  10.15  7.53  4.13  1.74  1     1.31  2.79
3     2.13    1.79    1.54   1.39   1.35   1.25  1.14  1.18  1     1     1.09
4     11.43   5.57    4.51   3.08   2.06   1.84  1.78  1.25  1.40  1     1.14
5     2.95    1.90    1.74   1.44   1.46   1.17  1.22  1.10  1     1.09  1.50
6     1.19    1.62    1.29   1.06   1.08   1.28  1     1.06  1.73  2.04  2.23
7     14.29   6.72    3.89   3.26   2.69   1.56  1.68  1.26  1.01  1.17  1
8     332.36  147.74  16.58  6.10   2.06   7.27  1.21  1.09  1.59  1     4.46
9     1.97    1.79    1.32   1      1.66   1.13  1.09  1.45  1.12  1.51  1.99
10    1       1.01    1      1.02   1.01   1.01  1.01  1.02  1.02  1.06  1.07
Avg.  74.58   23.26   6.57   3.30   2.58   2.63  1.58  1.28  1.22  1.21  1.84

Table 4
Mean normalized values of fmax on numerical functions. (Rows: function No.; columns: fmax = 1 to 10.)
No.   1     2     3     4      5      6      7      8      9       10
1     1     3.24  5.33  6.16   7.02   9.54   11.64  10.80  10.73   13.56
2     1     1.76  2.03  5.56   17.92  89.75  63.09  77.42  107.15  207.92
3     1     2.98  5.34  6.05   7.54   8.14   8.19   8.40   8.11    7.60
4     1     1.64  2.44  2.31   3.91   3.95   4.42   4.92   5.74    4.81
5     1     1.05  1.05  1.40   1.47   1.29   1.74   1.43   1.71    1.98
6     1.81  4.74  3.32  17.55  31.22  7.17   24.03  33.63  97.79   1
7     1     2.84  3.45  5.28   7.09   5.98   6.97   9.35   7.16    8.86
8     1     1.15  1     1.70   1      5.53   2.56   9.84   5.97    3.77
9     4.12  1     2.47  5.67   5.68   10.33  4.00   3.90   5.50    5.51
10    1.01  1.03  1.03  1      1.04   1.03   1      1      1.04    1.03
Avg.  1.39  2.14  2.74  5.26   8.38   14.27  12.76  16.06  25.09   25.60

where ω_init and ω_final are the initial and the final inertia values, respectively.

(5) Initial coefficient factor ζinit: Determining the optimum initial value of this parameter improves the exploration capability and the convergence performance of the algorithm. The parameter has been trained with values ranging from 0 to 1, and the results are demonstrated in Table 6. From the results in the table, it is seen that the algorithm performs better when ζinit is 0.6, so this value has been chosen in this study. Fig. 4 shows the states of ζ1 and ζ2 with this initial value of ζinit.

Table 5
Mean normalized values of the ω updating strategy on numerical functions.

No.   a         b         c         d         e         f
1     1.34e−02  6.66e−23  1.14e−05  1.14e−24  3.63e−19  9.86e+07
2     5.87      1.06      0.82      0.80      5.91      5.12e+09
3     2.98      0.36      0.60      1.92      0.58      16.35
4     2.58      4.25      3.31      3.96      6.67      120.43
5     37.16     20.80     20.05     33.83     22.44     77.27
6     3.69e−03  8.75e−18  4.08e−02  3.57e−02  1.34e−01  0.03
7     40.30     41.30     28.45     42.40     75.50     9.60e+07
8     0.63      0.67      0.63      0.60      0.67      20.77
9     −0.80     −1.00     −0.95     −1.00     −1.00     −0.59
10    −7.68     −7.73     −7.50     −7.62     −7.71     −5.58
Avg.  8.11      5.97      4.55      7.49      10.30     5.32e+08
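The two nonlinear schedules used in this study, the inertia weight of Eq. (14) and the learning factor of Eq. (11), can be sketched as below. This is a minimal sketch; the parameter defaults mirror Table 7 (ω from 0.9 to 0.2, ζinit = 0.6) and the function names are illustrative assumptions.

```python
def inertia_weight(it, it_max, w_init=0.9, w_final=0.2, n=2):
    """Nonlinear decreasing inertia weight, Eq. (14)."""
    return ((it_max - it) / it_max) ** n * (w_init - w_final) + w_final

def zeta1(it, it_max, zeta_init=0.6, n=3):
    """Learning factor schedule, Eq. (11): rises from zeta_init toward 1."""
    return 1.0 + (zeta_init - 1.0) * (it_max - it) ** n / it_max ** n

# zeta2 follows from Eq. (10): zeta2 = 1 - zeta1.
# Early on, omega is large (global search) and zeta1 is small, so the kth
# solution still matters; late, omega shrinks and zeta1 approaches 1, so the
# best solution x* dominates, matching Fig. 4.
```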
Table 6
Mean normalized values of ζinit on numerical functions. (Rows: function No.; columns: ζinit = 0 to 1 in steps of 0.1.)

No.   0      0.1    0.2    0.3    0.4   0.5   0.6   0.7   0.8   0.9   1
1     5.32   4.08   3.80   2.91   1.84  1.54  1.45  1     1.05  1.19  1.53
2     86.92  54.87  37.73  56.46  7.81  4.02  1     5.89  2.11  3.84  2.52
3     7.73   6.66   4.49   3.20   3.25  2.52  1     1.35  1.76  1.09  2.38
4     2.85   2.36   2.00   1.54   1.45  1.30  1.21  1.02  1.12  1     1.10
5     1.19   1.16   1.10   1.01   1     1.09  1.06  1.04  1.01  1.03  1.02
6     4.04   4.92   1.16   1.15   1.16  1     1.19  1.34  1.21  1.18  1.03
7     5.47   3.34   2.49   2.32   1.56  1.83  1.43  1.67  1     1.40  1.62
8     1.14   2.56   1.05   1.04   1     1.03  1.04  1.05  1.04  1     1.04
9     4.22   4.27   4.26   4.91   2.34  2.29  1.69  1     1.01  3.68  2.99
10    1.02   1.04   1.01   1      1.03  1.01  1.09  1.06  1.03  1.07  1.06
Avg.  11.99  8.53   5.91   7.55   2.24  1.76  1.22  1.64  1.23  1.65  1.63

Table 7
Optimum parameter values obtained after training. (Values are given as BA / EBA where they differ; "–" means the parameter is not used by BA.)

Run time, R: 30
Population, N: 50
Loudness, A: 0.95
Pulse emission, r: 0.85
The factors updating A and r, α − γ: 0.9
Minimum and maximum frequencies, fmin − fmax: 0–1
Initial and final values of inertia weight, ωinit − ωfinal: – / 0.9–0.2
Modulation index of inertia weight, n: – / 2
Coefficient factor, ζinit: – / 0.6
Minimum number of seeds, smin: – / 0
Maximum number of seeds, smax: – / 4

5.2. Analysis of contributions by proposed modifications

In order to verify the efficiency of the proposed modification structures separately, the algorithm with the proposed modifications (IS1, IS2, IS3) has been tested in different combinations on the functions given in Table 1. The results are shown in Table 8, and Fig. 8 depicts the convergence speed of each modification structure. The parameter values given in Table 7 have been used in the test phase. It can easily be seen from the results in Table 8 that all proposed structures (IS1, IS2, IS3) have produced better solutions than the standard bat algorithm, and IS3 is generally better than IS1 and IS2 on all functions. As expected, the performance of the IS2 modification structure obtained on multimodal functions is better than that on the unimodal ones.

5.3. Optimization of unconstrained benchmark functions

To measure the efficiency of the proposed method on unconstrained problems, it has been compared with the results of BA and GA given in [23]. This experiment has been conducted on the 50 unconstrained benchmark functions given in [23]. The benchmark set comprises unimodal and multimodal functions with various dimensions. Unimodal functions have been applied to quantify the convergence speed, while multimodal functions have been applied to detect whether the algorithm faces the problem of getting stuck in a local minimum, namely the premature convergence problem. In order to determine whether the average objective values of the proposed method are significantly different from those of BA and GA, the statistical Student's t-test has been employed. The success of the competing algorithms has been compared with respect to the t value, evaluated by Eq. (15):

t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{SD_1^2/(n_1 - 1) + SD_2^2/(n_2 - 1)}}    (15)

where X̄1, SD1 and n1 are the mean, standard deviation and size of the first sample (BA or GA), while X̄2, SD2 and n2 are those of the second sample (EBA), respectively. The t value can be positive or negative; positive and negative values mean that EBA and BA (or GA), respectively, have produced better solutions during the optimization process. In this study, the confidence interval has been set at 95%, which means t0.05 = 1.96. When t > 1.96, the difference between the two samples is significant and EBA is better than BA (or GA); on the other hand, when t < −1.96, BA (or GA) is better than EBA. The rightmost four columns of Tables 9 and 10 indicate which algorithm has provided the better solution. To measure the convergence speed performance of EBA, the Convergency Rate (CR), proposed in Ref. [64], has been used.
There are four basic steps to compute the CR value of an algorithm [64]. In this study, the mean values of the 30 runs for each problem have been regarded as one run while computing CR. There are different types of schemes proposed in the literature [65] to handle natural constraints; in this experiment, the absorbing scheme has been used. The initial parameters, vital for the quality of the optimization process, have been set to the values given in Table 7; for the parameters set for GA, refer to [23]. The minimum, maximum, mean, standard deviation, t and significance values of the results are comparatively shown in Tables 9 and 10, and the convergence graphics of BA and EBA are demonstrated in Fig. 9. As seen in Tables 9 and 10, there are a total of 76 optimization processes with various dimensions for the 50 functions. The t values point out that EBA has exhibited better performance than BA on 71 of the 76 optimization processes, while neither method defeats the other on the remaining five. Fig. 9 proves that EBA converges much better than BA on both unimodal and multimodal functions. Though the results obtained on the functions numbered 18, 24, 28 and 45 seem similar, the t values and Fig. 10 reveal that EBA is better than BA on these functions. When the t values of EBA vs GA in Tables 9 and 10 are considered, it can easily be seen that EBA and GA have produced better values on 25 and 6 of the 50 functions, respectively. It is worth stating that Karaboğa and Akay [23] assumed values below 10^−12 to be 0; by this assumption, it would not be wrong to say that EBA has actually produced the same values as GA on the eighth function. On the other hand, when the CR values, ranging from 0.44 to 0.99, are considered, it is understood that EBA can converge to the point that BA reaches at the end of the iterations faster than BA does. The results of EBA have also been compared with the Hybrid Bat Algorithm (HBA) [21], Hybrid Bat with Random Forest (HBARF) [66]
Fig. 8. Convergence rates of all modification structures.

Table 8
Performance of modification structures on bat algorithm. (Columns: No., D, then BA, IS1, IS2, IS3, IS1+IS2, IS1+IS3, IS2+IS3, EBA.)

1  10  4.68e+02  4.69e−27  3.12e+02  2.02e−04  7.19e−28  3.87e−06  2.08e−04  9.95e−24
1  30  5.22e+03  1.23e+01  3.54e+03  8.07e−03  2.54e−00  2.17e−05  8.98e−03  1.85e−05
1  50  9.55e+03  4.59e+02  7.79e+03  1.17e−02  1.65e+02  2.07e−05  1.01e−02  2.18e−05
2  10  2.20e+02  1.34e−00  4.38e+02  1.30e+01  4.87e−00  9.95e−01  1.03e+01  4.13e−01
2  30  1.84e+05  3.01e+01  5.69e+04  5.16e+01  2.99e+01  2.99e+01  6.02e+01  2.80e+01
2  50  8.96e+05  2.51e+02  4.57e+05  6.69e+01  8.65e+01  5.16e+01  8.03e+01  5.54e+01
3  10  1.31e−00  6.92e−01  1.26e−00  2.01e−02  3.33e−01  9.44e−02  1.99e−02  8.56e−08
3  30  9.44e−00  5.68e−00  8.22e−00  1.80e−01  4.10e−00  8.09e−01  2.04e−01  2.25e−01
3  50  1.08e+01  8.62e−00  1.00e+01  1.96e−00  7.16e−00  2.01e−00  2.01e−00  1.76e−00
4  10  1.10e+01  4.15e−00  1.03e+01  9.87e−01  2.73e−00  1.70e−00  6.40e−01  1.14e−00
4  30  6.08e+01  6.86e−00  5.33e+01  1.41e−02  4.00e−00  8.62e−03  5.41e−03  6.90e−03
4  50  1.11e+02  2.58e−01  1.01e+02  6.59e−03  1.32e+01  3.61e−03  3.87e−03  5.26e−03
5  10  3.08e+01  2.29e+01  2.96e+01  1.73e+01  1.69e+01  2.21e+01  9.83e−00  1.19e+01
5  30  1.18e+02  1.21e+02  1.08e+02  9.71e+01  9.16e+01  1.19e+02  5.16e+01  5.73e+01
5  50  2.47e+02  2.43e+02  2.05e+02  1.91e+02  1.92e+02  2.35e+02  1.16e+02  1.20e+02
6  10  3.00e−02  4.14e−06  8.33e−02  4.99e−04  5.88e−25  1.07e−05  2.79e−04  1.41e−11
6  30  2.23e+02  5.69e−05  2.07e+02  6.91e−03  3.86e−05  4.85e−05  7.41e−03  4.60e−05
6  50  8.01e+02  3.11e−00  7.12e+02  2.35e−02  1.35e−00  3.66e−04  2.41e−02  1.46e−04
7  10  6.05e+02  3.40e+01  4.19e+02  0.00e−00  9.33e−00  0.00e−00  0.00e−00  0.00e−00
7  30  5.09e+03  1.91e+03  4.38e+03  6.33e−01  9.03e+02  1.63e−00  4.00e−01  8.33e−01
7  50  1.18e+04  5.30e+03  9.02e+03  7.70e−00  3.32e+03  9.30e−00  7.00e−00  7.67e−00
8  10  7.34e−01  6.44e−01  7.58e−01  7.04e−01  6.44e−01  5.78e−01  6.95e−01  6.67e−01
8  30  1.02e+01  6.71e−01  9.37e−00  6.41e−00  6.68e−01  6.68e−01  6.70e−00  6.68e−01
8  50  4.73e+02  6.78e−01  2.45e+02  3.17e+02  6.93e−01  8.69e−01  1.98e+02  8.28e−01
9  10  −8.14e−01  −1.00e−00  −9.08e−01  −1.00e−00  −1.00e−00  −1.00e−00  −9.99e−01  −1.00e−00
9  30  −3.24e−07  −6.74e−02  −7.29e−12  −6.66e−02  −1.68e−01  −3.34e−01  −2.95e−01  −3.01e−01
9  50  −2.70e−40  −6.78e−07  −1.59e−39  −3.47e−12  −3.40e−07  −1.07e−04  −1.05e−04  −3.35e−02
10  2  −1.80e−00  −1.80e−00  −1.80e−00  −1.80e−00  −1.80e−00  −1.80e−00  −1.80e−00  −1.80e−00
10  5  −3.71e−00  −4.30e−00  −3.82e−00  −4.34e−00  −4.47e−00  −4.16e−00  −4.51e−00  −4.38e−00
10 10  −5.61e−00  −7.56e−00  −5.75e−00  −7.79e−00  −8.09e−00  −7.37e−00  −8.48e+00  −8.17e−00
Table 9
The comparative results on unconstrained functions. (Row format: No., D | BA: minimum, maximum, mean, std. dev. | EBA: minimum, maximum, mean, std. dev. | GA mean [23] | CR | t and significance for EBA vs BA | t and significance for EBA vs GA; "–" marks entries not reported.)

1  5  | 0 0 0 0 | 0 0 0 0 | 0 | 0.44 | N.A.(a) N.S.(b) | N.A. N.S.
2  10 | 5.40e+01 1.35e+03 5.22e+02 3.48e+02 | 0 0 0 0 | N.A. | 0.99 | 8.0755 EBA | – –
2  30 | 2.99e+03 8.82e+03 5.44e+03 1.34e+03 | 0 3.00e−00 9.66e−01 1.06e−00 | 1.17e+03 | 0.99 | 21.8336 EBA | 82.2199 EBA
2  50 | 6.39e+03 1.54e+04 1.08e+04 2.61e+03 | 1.00e−00 1.80e+01 7.13e−00 3.60e−00 | N.A. | 0.99 | 22.3300 EBA | – –
3  10 | 1.55e+01 1.12e+03 4.37e+02 2.83e+02 | 1.64e−35 1.70e−30 1.13e−31 3.91e−31 | N.A. | 0.99 | 8.2932 EBA | – –
3  30 | 2.13e+03 1.04e+04 4.65e+03 2.04e+03 | 6.97e−06 3.70e−05 2.06e−05 6.25e−06 | 1.11e+03 | 0.99 | 12.2558 EBA | 80.5489 EBA
3  50 | 6.14e+03 1.89e+04 1.06e+04 3.24e+03 | 1.08e−05 2.84e−05 2.08e−05 4.38e−06 | N.A. | 0.99 | 17.7032 EBA | – –
4  10 | 3.30e−02 1.69e−01 9.78e−02 3.61e−02 | 2.17e−34 1.88e−27 6.41e−29 3.43e−28 | N.A. | 0.92 | 14.5803 EBA | – –
4  30 | 8.92e−01 6.55e+01 1.16e+01 1.38e+01 | 2.49e−04 1.11e−03 5.43e−04 1.77e−04 | 1.48e+02 | 0.94 | 4.5556 EBA | 64.2277 EBA
4  50 | 3.05e+01 3.49e+02 1.53e+02 9.00e+01 | 6.30e−04 3.14e−03 1.13e−03 6.40e−04 | N.A. | 0.96 | 9.2078 EBA | – –
5  10 | 2.80e−02 1.67e−01 9.12e−02 3.51e−02 | 2.96e−04 4.42e−03 1.69e−03 1.06e−03 | N.A. | 0.94 | 13.7219 EBA | – –
5  30 | 3.92e−01 3.04e−00 1.48e−00 5.79e−01 | 1.18e−02 8.42e−02 3.77e−02 1.69e−02 | 0.1807 | 0.98 | 13.4846 EBA | 24.1119 EBA
5  50 | 4.48e−01 1.04e+01 4.85e−00 2.91e−00 | 1.16e−02 1.21e−01 4.88e−02 2.68e−02 | N.A. | 0.97 | 8.8870 EBA | – –
6  2  | 9.92e−06 8.10e−04 2.56e−04 2.45e−04 | 0 0 0 0 | 0 | 0.95 | 5.6373 EBA | N.A. N.S.
7  2  | −1.00e+00 0 −9.66e−01 1.83e−01 | −1.00e−00 −1.00e−00 −1.00e−00 0 | −1 | 0.95 | 0.9895 N.S. | N.A. N.S.
8  2  | 6.08e−07 4.77e−05 1.22e−05 1.31e−05 | 1.98e−40 4.87e−37 5.72e−38 9.91e−38 | 0 | 0.95 | 5.0176 EBA | 0 N.S.
9  4  | 8.27e−02 5.50e−00 1.06e−00 1.53e−00 | 0 0 0 0 | 0.01493 | 0.92 | 3.7345 EBA | 10.9240 EBA
10 6  | −4.99e+01 −3.13e+01 −4.80e+01 4.30e−00 | −5.00e+01 −5.00e+01 −5.00e+01 0 | −49.9999 | 0.99 | 2.4549 EBA | 23.9341 EBA
11 10 | −8.50e−00 1.47e+03 3.85e+02 3.59e+02 | −2.10e+02 −2.10e+02 −2.10e+02 0 | −209.476 | 0.99 | 8.9346 EBA | 14.5899 EBA
12 10 | 1.11e−02 7.48e−02 3.46e−02 1.59e−02 | 2.05e−35 1.04e−09 3.49e−11 1.91e−10 | 0.01335 | 0.93 | 11.6985 EBA | 15.8702 EBA
12 30 | 4.64e+01 4.60e+02 1.71e+02 1.00e+02 | 2.78e−05 7.02e−05 4.40e−05 1.00e−05 | N.A. | 0.99 | 9.1331 EBA | – –
12 50 | 4.78e+02 1.24e+03 8.66e+02 2.15e+02 | 4.88e−05 1.45e−03 2.06e−04 3.01e−04 | N.A. | 0.99 | 21.6782 EBA | – –
13 24 | 2.18e−01 5.13e−00 1.70e−00 1.17e−00 | 4.91e−04 1.07e−02 3.02e−03 2.21e−03 | 9.70377 | 0.90 | 7.7893 EBA | 33.7490 EBA
14 10 | 1.24e−01 4.33e−01 3.01e−01 7.48e−02 | 2.15e−05 1.79e−01 1.04e−02 3.23e−02 | N.A. | 0.92 | 19.2010 EBA | – –
14 30 | 4.92e−01 1.44e+01 4.09e−00 3.42e−00 | 3.15e−02 1.70e−00 3.11e−01 3.99e−01 | 11.0214 | 0.94 | 5.8988 EBA | 39.9688 EBA
14 50 | 1.10e+01 4.46e+01 2.07e+01 7.14e−00 | 4.50e−01 2.78e−00 1.34e−00 7.17e−01 | N.A. | 0.97 | 14.5371 EBA | – –
15 10 | 1.50e+02 2.18e+03 1.05e+03 5.01e+02 | 1.91e−35 5.54e−12 1.84e−13 1.01e−12 | N.A. | 0.99 | 11.3544 EBA | – –
15 30 | 4.86e+03 2.82e+04 1.26e+04 5.66e+03 | 6.41e−05 2.60e−04 1.18e−04 3.96e−05 | 7.40e+03 | 0.99 | 12.0651 EBA | 34.9563 EBA
15 50 | 1.66e+04 1.43e+05 3.95e+04 2.37e+04 | 1.69e−03 3.19e−02 1.07e−02 6.06e−03 | N.A. | 0.99 | 8.9742 EBA | – –
16 10 | 6.72e−00 9.50e+03 7.46e+02 1.93e+03 | 4.19e−12 3.98e−00 1.32e−01 7.27e−01 | N.A. | 0.99 | 2.0763 EBA | – –
16 30 | 2.54e+03 2.30e+06 1.98e+05 4.29e+05 | 8.34e−01 2.94e+01 2.11e+01 5.96e−00 | 1.96e+05 | 0.99 | 2.4871 EBA | 27.4124 EBA
16 50 | 1.26e+05 2.53e+06 1.04e+06 6.12e+05 | 3.94e+01 1.01e+02 5.45e+01 2.16e+01 | N.A. | 0.99 | 9.1957 EBA | – –
17 10 | 6.93e−01 8.20e−01 7.56e−01 3.50e−02 | 6.66e−01 6.66e−01 6.66e−01 1.79e−09 | N.A. | 0.91 | 13.8873 EBA | – –
17 30 | 1.28e−00 7.45e+01 1.12e+01 1.44e+01 | 6.66e−01 6.69e−01 6.67e−01 6.48e−04 | 1.22e+03 | 0.93 | 3.9463 EBA | 24.6854 EBA
17 50 | 1.15e+02 3.86e+03 4.92e+02 6.97e+02 | 6.67e−01 1.53e−00 6.99e−01 1.56e−01 | N.A. | 0.96 | 3.8015 EBA | – –
18 2  | 9.98e−01 9.98e−01 9.98e−01 2.20e−13 | 9.98e−01 9.98e−01 9.98e−01 2.22e−16 | 0.9980 | 0.94 | 3.8986 EBA | 0 N.S.
19 2  | 3.97e−01 3.99e−01 3.98e−01 3.11e−04 | 3.97e−01 3.97e−01 3.97e−01 0 | 0.3978 | 0.94 | 4.7628 EBA | N.A. N.S.
20 2  | 4.29e−05 5.85e−03 1.76e−03 1.54e−03 | 0 0 0 0 | 0 | 0.94 | 6.1474 EBA | N.A. N.S.
21 2  | 2.04e−05 1.99e−03 4.52e−04 5.41e−04 | 0 0 0 0 | 0 | 0.95 | 4.5002 EBA | N.A. N.S.

(a) Not available. (b) Not significant.
Table 10
The comparative results on unconstrained functions (continued; same row format as Table 9).

22 10 | 1.31e+01 4.37e+01 3.06e+01 7.58e−00 | 3.97e−00 1.98e+01 1.01e+01 4.14e−00 | N.A. | 0.93 | 12.7979 EBA | – –
22 30 | 8.61e+01 1.96e+02 1.42e+02 2.67e+01 | 1.99e+01 6.76e+01 3.44e+01 1.28e+01 | 52.9225 | 0.91 | 19.5803 EBA | 7.3399 EBA
22 50 | 8.42e+01 3.42e+02 2.25e+02 8.43e+01 | 3.18e+01 8.35e+01 5.15e+01 1.40e+01 | N.A. | 0.91 | 10.9442 EBA | – –
23 10 | −2.52e+03 −1.41e+03 −2.03e+03 2.67e+02 | −3.39e+03 −1.58e+03 −2.55e+03 4.14e+02 | N.A. | 0.99 | 5.6800 EBA | – –
23 30 | −4.58e+03 −2.57e+03 −3.59e+03 4.85e+02 | −8.97e+03 −4.51e+03 −6.98e+03 9.61e+02 | −11593.4 | 0.99 | 16.9875 EBA | −25.7313 GA
23 50 | −7.27e+03 −3.34e+03 −4.84e+03 8.78e+02 | −1.43e+03 −6.35e+03 −1.12e+04 1.56e+03 | N.A. | 0.99 | 19.2871 EBA | – –
24 2  | −1.80e−00 −1.79e−00 −1.80e−00 3.28e−03 | −1.80e−00 −1.80e−00 −1.80e−00 9.03e−16 | −1.8013 | 0.95 | 5.2415 EBA | 0 N.S.
25 5  | −4.10e−00 −3.11e−00 −3.67e−00 2.46e−01 | −4.69e−00 −3.41e−00 −4.45e−00 3.33e−01 | −4.6448 | 0.92 | 10.2037 EBA | −3.0225 GA
26 10 | −6.21e−00 −4.97e−00 −5.68e−00 3.32e−01 | −9.41e−00 −6.08e−00 −8.14e−00 9.26e−01 | −9.4968 | 0.90 | 13.4588 EBA | −7.8004 GA
27 2  | 2.60e−03 7.81e−02 1.91e−02 1.92e−02 | 0 9.71e−03 5.50e−03 4.89e−03 | 0.0042 | 0.98 | 3.7028 EBA | −1.0256 N.S.
28 2  | −1.03e−00 −1.02e−00 −1.03e−00 7.30e−04 | −1.03e−00 −1.03e−00 −1.03e−00 6.18e−16 | −1.0316 | 0.94 | 5.8509 EBA | 0 N.S.
29 2  | 5.27e−05 9.24e−03 2.55e−03 2.25e−03 | 0 0 0 0 | 0.0682 | 0.95 | 6.0833 EBA | 4.6959 EBA
30 2  | 5.51e−05 2.32e−03 6.84e−04 6.44e−04 | 0 0 0 0 | 0 | 0.94 | 5.7218 EBA | N.A. N.S.
31 2  | −1.86e+02 −1.85e+02 −1.86e+02 3.53e−01 | −1.86e+02 −1.86e+02 −1.86e+02 3.73e−14 | −186.73 | 0.91 | 5.2554 EBA | 0 N.S.
32 2  | 3.00e−00 3.13e−00 3.04e−00 3.19e−02 | 3.00e−00 3.00e−00 3.00e−00 0 | 5.2506 | 0.94 | 2.0647 EBA | 2.0647 EBA
33 4  | 6.66e−04 2.03e−02 4.17e−03 6.87e−03 | 3.07e−04 2.03e−02 3.07e−03 6.90e−03 | 0.0056 | 0.97 | 0.6117 N.S. | 1.2740 N.S.
34 4  | −9.96e−00 −2.61e−00 −5.77e−00 3.14e−00 | −1.01e+01 −2.63e−00 −8.89e−00 2.36e−00 | −5.6605 | 0.97 | 4.2673 EBA | 3.8391 EBA
35 4  | −1.02e+01 −2.70e−00 −6.90e−00 3.14e−00 | −1.04e+01 −2.76e−00 −9.46e−00 2.46e−00 | −5.3440 | 0.97 | 3.4403 EBA | 5.1643 EBA
36 4  | −1.03e+01 −2.36e−00 −7.85e−00 3.22e−00 | −1.05e+01 −5.17e−00 −1.01e+01 1.36e−00 | −3.8298 | 0.96 | 3.5741 EBA | 12.0427 EBA
37 4  | 2.52e−02 1.27e−00 5.51e−01 3.52e−01 | 1.00e−20 4.72e−01 7.20e−02 1.44e−01 | 0.3026 | 0.83 | 6.7898 EBA | 5.1528 EBA
38 4  | 9.33e−03 2.52e−01 9.41e−02 7.32e−02 | 7.87e−07 4.07e−04 1.51e−04 1.45e−04 | 0.01040 | 0.85 | 6.9132 EBA | 6.0844 EBA
39 3  | −3.86e−00 −3.81e−00 −3.85e−00 9.27e−03 | −3.86e−00 −3.86e−00 −3.86e−00 2.62e−15 | −3.8627 | 0.96 | 7.0995 EBA | 0 N.S.
40 6  | −3.12e−00 −2.90e−00 −3.02e−00 5.29e−02 | −3.32e−00 −3.20e−00 −3.25e−00 6.04e−02 | −3.2982 | 0.98 | 15.5573 EBA | −3.3068 GA
41 10 | 2.82e−00 2.20e+01 1.18e+01 5.44e−00 | 5.66e−02 2.67e−00 9.04e−01 6.57e−01 | N.A. | 0.99 | 10.7187 EBA | – –
41 30 | 3.31e+01 9.63e+01 6.45e+01 1.82e+01 | 5.10e−07 4.43e−02 7.30e−03 9.46e−03 | 10.6334 | 0.99 | 19.0546 EBA | 49.2672 EBA
41 50 | 7.05e+01 1.86e+02 1.16e+02 2.74e+01 | 5.77e−07 2.21e−02 4.68e−03 6.21e−03 | N.A. | 0.99 | 22.9103 EBA | – –
42 10 | 9.09e−02 7.98e−00 1.63e−00 2.32e−00 | 4.44e−15 1.26e−07 4.21e−09 2.30e−08 | N.A. | 0.97 | 3.8019 EBA | – –
42 30 | 4.49e−00 1.25e+01 9.17e−00 2.07e−00 | 2.41e−03 1.50e−00 4.58e−01 5.78e−01 | 14.6717 | 0.99 | 21.8369 EBA | 126.5533 EBA
42 50 | 8.54e−00 1.34e+01 1.08e+01 1.28e−00 | 1.02e−00 2.57e−00 1.80e−00 4.11e−01 | N.A. | 0.99 | 35.9427 EBA | – –
43 30 | 7.68e−00 6.21e+04 2.14e+03 1.13e+04 | 2.64e−07 9.00e−00 9.99e−01 1.84e−00 | 13.3772 | 0.99 | 1.0192 N.S. | 28.4637 EBA
44 30 | 6.66e+01 3.51e+02 1.71e+02 6.41e+01 | 1.87e−06 1.93e−02 7.74e−03 9.64e−03 | 125.06 | 0.99 | 14.4012 EBA | 56.1133 EBA
45 2  | −1.08e−00 −1.08e−00 −1.08e−00 2.29e−04 | −1.08e−00 −1.08e−00 −1.08e−00 6.51e−16 | −1.0809 | 0.96 | 7.5889 EBA | 0 N.S.
46 5  | −1.49e−00 −4.82e−01 −9.53e−01 3.88e−01 | −1.49e−00 −9.07e−01 −1.18e−00 2.80e−00 | −0.9684 | 0.98 | 2.6005 EBA | 0.4048 N.S.
47 10 | −7.97e−01 −2.74e−01 −4.80e−01 1.81e−01 | −1.49e−00 −2.74e−01 −6.38e−01 3.04e−00 | −0.6364 | 0.98 | 2.3956 EBA | 0.0028 N.S.
48 2  | 4.36e−02 1.94e−00 5.64e−01 4.27e−01 | 0 0 0 0 | 0 | 0.94 | 7.1099 EBA | N.A. N.S.
49 5  | 3.29e+01 1.61e+03 2.93e+02 4.51e+02 | 0 2.52e+02 7.70e+01 1.00e+02 | 0.0043 | 0.98 | 2.5197 EBA | −4.1463 GA
50 10 | 3.03e+02 9.43e+03 2.74e+03 2.67e+03 | 1.71e−71 6.60e+02 1.92e+02 2.34e+02 | 29.573 | 0.96 | 5.1303 EBA | −3.7293 GA
Fig. 9. Convergence results of the algorithms (Sphere D=10, SumSquares D=10, Schwefel 1.2 D=30, Ackley D=10, Griewangk D=50, Shekel5 D=4).

Table 11
The comparative results of EBA and the existing improvement approaches.

No.(a)  Method  Minimum    Maximum    Mean       Std. dev.
3   MoBA   3.73e−03   1.60e−02   8.80e−03   3.34e−03
    HBA    4.83e−09   2.89e−03   1.26e−04   1.66e−07
    HBARF  2.36e−06   5.90e−02   5.92e−03   1.22e−02
    EBA    1.64e−35(b) 1.70e−30  1.13e−31   3.91e−31
16  MoBA   7.44e−00   1.64e+01   1.03e+01   1.94e−00
    HBA    6.34e−02   5.10e+02   6.22e+01   7.73e−00
    HBARF  5.00e−05   1.99e+00   2.64e−01   5.44e−01
    EBA    4.19e−12   3.98e−00   1.32e−01   7.27e−01
22  MoBA   1.46e+01   3.48e+01   2.49e+01   4.35e−00
    HBA    5.12e−00   2.38e+01   1.55e+01   1.69e+01
    HBARF  3.09e−05   1.02e+01   5.92e−01   2.00e−00
    EBA    3.97e−00   1.98e+01   1.01e+01   4.14e−00
41  MoBA   2.05e−00   2.06e+01   8.12e−00   5.39e−00
    HBA    2.25e−09   3.97e−05   3.18e−06   1.14e−07
    HBARF  1.44e−11   6.35e−04   3.92e−05   1.25e−04
    EBA    5.66e−02   2.67e−00   9.04e−01   6.57e−01
42  MoBA   3.61e−02   1.79e−00   1.67e−01   3.60e−01
    HBA    6.31e−04   2.00e+01   1.16e+01   1.78e+01
    HBARF  7.21e−04   3.53e−01   3.14e−02   6.87e−02
    EBA    4.44e−15   1.26e−07   4.21e−09   2.30e−08

(a) Indicates the number of the function given in Tables 9 and 10. (b) Bold entries in the original emphasize the best value of the cluster of interest.
Fig. 10. Graphical comparison of BA and EBA in terms of best, worst and mean values.

and the Modified Bat Algorithm (MoBA) [62] on a limited number of 10-dimensional benchmark functions presented in [66], in order to determine the quality of EBA relative to the existing improvement approaches. The results are shown in Table 11. From the results in Table 11, of the four improvement approaches tested on these functions, EBA has outperformed the other methods on three of the five functions, while HBA and HBARF each optimized better than EBA on one function.

5.4. Optimization of constrained real-world problems

In this test stage, it has been planned to investigate the performance of EBA on constrained engineering problems. For this purpose, three well-known real-world problems have been chosen from the literature [24,25]: welded beam design, spring design and pressure vessel design. The results obtained for these problems have been compared with the studies in the literature (in particular, those published after 2007). For a fair comparison, the efficiency of each approach has been measured by comparing the function evaluation number (FEN), which is equal to the population size multiplied by the number of iterations. For the constrained problems, N has been set to 20, 10 and 25 and the maximum number of iterations to 2000, 500 and 600 for these problems, respectively; R has been taken as 30.

Though BA has been proposed for solving unconstrained problems, it has been applied to constrained engineering problems by transforming each problem into an unconstrained one. The underlying idea is to define a penalty function for the problem as given below:

\phi(x) = f(x) + \sum_{i=1}^{M} \lambda_i \Phi_i^2(x) + \sum_{j=1}^{M} \mu_j \Psi_j^2(x)    (16)

where Φi and Ψj denote the fitness values of the equality and inequality constraints, respectively.
i and j are the penalty parameters that enable when the constraints are violated, note that they should be large enough. 5.4.1. Welded beam problem The main goal of the problem is to produce a welded beam design within minimum fabrication cost. As seen in Fig. 11, it is planned to weld the object B to beam A. The problem has four design parameters (x1, x2, x3 and x4) and seven constraints. The thickness of the weld is h (x1), the length of the welded joint is l (x2), the width of the beam is t (x3) and the thickness of the beam is b (x4). The values h (x1) and l (x2) are discrete and take integer multiplies of 0.0065. EBA has been adapted to solve the problem by rounding the real values to the nearest integer values. The boundaries of the variables are 0.125 ≤ x1 ≤ 5 and 0.1 ≤ x2, x3, x4 ≤ 10. For more details refer to [25]. The results have been given in Table 12. The results in Table 12 point out that all studies has achieved acceptable solutions without exceeding the boundaries. The stud- ies [74,77,71] seem to find best cost value, however; they neglected the discrete variables and regarded them as continuous. EBA has found the compelling cost value within minimum FEN without abandoning any rules of the problem. Fig. 11. Welded beam.
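The pieces just described can be sketched in a few lines of Python: the penalty transformation of Eq. (16), the rounding of the discrete welded beam variables to multiples of 0.0065, and the welded beam fabrication cost in its standard form from the engineering-design literature (1.10471·x1²·x2 + 0.04811·x3·x4·(14 + x2)). The constant penalty coefficients `lam` and `mu` are illustrative, and the seven constraints are omitted for brevity:

```python
def round_discrete(x, step=0.0065):
    """Snap a real value to the nearest integer multiple of `step`
    (the discrete handling used for the welded beam variables x1, x2)."""
    return round(x / step) * step

def weld_cost(x1, x2, x3, x4):
    """Welded beam fabrication cost (standard literature formulation)."""
    x1, x2 = round_discrete(x1), round_discrete(x2)
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def penalized(f, x, eq_cons, ineq_cons, lam=1e6, mu=1e6):
    """Eq. (16): raw objective plus squared constraint violations,
    scaled by large penalty parameters (lam: equality, mu: inequality)."""
    val = f(*x)
    val += sum(lam * h(*x) ** 2 for h in eq_cons)             # h(x) = 0
    val += sum(mu * max(0.0, g(*x)) ** 2 for g in ineq_cons)  # g(x) <= 0
    return val

# EBA's solution from Table 12 closely reproduces the reported cost:
print(round(weld_cost(0.2015, 3.5620, 9.0414, 0.2057), 3))  # 1.731
```

Evaluating the tabulated EBA solution this way is a useful sanity check that the cost formulation matches the one used in the comparative studies.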
Table 12
The comparative results on the welded beam problem.

Study                             Method   x1      x2      x3      x4      Cost    FEN
Bernardino et al. (2007) [67]     GA-AISa  0.2443  6.2202  8.2915  0.2444  2.3812  320 000
Bernardino et al. (2008) [68]     GA-AIS   0.2444  6.2183  8.2912  0.2444  2.3812  320 000
Montes and Ocana (2008) [69]      BFAb     0.2057  3.4711  9.0367  0.2057  2.3868  48 000
Zhang et al. (2008) [70]          DEc      0.2444  6.2175  8.2915  0.2444  2.3810  24 000
Zahara and Kao (2009) [71]        NM-PSOd  0.206   3.468   9.037   0.206   1.7248  80 000
Zhang et al. (2009) [72]          EAe      0.2443  6.2201  8.2940  0.2444  2.3816  28 897
Aragon et al. (2010) [73]         TCAf     0.2444  6.2186  8.2915  0.2444  2.3811  320 000
Kaveh and Talatahari (2010) [74]  CSSg     0.2058  3.4681  9.0380  0.2057  1.7249  N.A.l
Datta and Figueira (2011) [75]    ID-PSOh  0.1875  1.7821  8.2500  0.2500  1.9553  N.A.
Gandomi et al. (2011) [76]        FAi      0.2015  3.562   9.0414  0.2057  1.7312  50 000
Gandomi et al. (2013) [46]        BA       0.2015  3.5620  9.0414  0.2057  1.7312  50 000
Sadollah et al. (2013) [77]       MBAj     0.2057  3.4704  9.0366  0.2057  1.7248  47 340
Gandomi (2014) [64]               ISAk     0.2443  6.2199  8.2915  0.2443  2.3812  30 000
Present study                     EBA      0.2015  3.5620  9.0414  0.2057  1.7312  40 000

a AIS: artificial immune system. b Bacterial foraging algorithm. c Differential evolution. d NM: Nelder–Mead. e Evolutionary algorithm. f T-cell algorithm. g Charged system search. h ID: integer-discrete. i Firefly algorithm. j Mine blast algorithm. k Interior search algorithm. l Not available.

Fig. 12. Spring design.

5.4.2. Spring design problem

This is another well-known engineering problem used to investigate the capability of an algorithm. Its objective is to design a spring of minimum weight by finding optimum values of the variables shown in Fig. 12. The problem has three design variables (x1, x2 and x3) and four constraints. The wire diameter is d (x1), the mean coil diameter is D (x2) and the number of active coils is N (x3). The boundaries of these variables are 0.05 ≤ x1 ≤ 1, 0.25 ≤ x2 ≤ 1.3 and 2 ≤ x3 ≤ 15. For further details refer to [24]. The results of the problem are given in Table 13.

Although all of the studies listed in Table 13 have managed to find reasonable cost values, most of them neglected the boundaries in obtaining the cost values given in the table. The proposed method, however, has found the minimum cost value without violating the boundaries of the problem.

Table 13
The comparative results on the spring design problem.

Study                           Method   x1      x2      x3      Cost     FEN
Bernardino et al. (2007)d [67]  GA-AIS   0.0516  0.3560  11.329  0.01267  36 000
He and Wang (2007) [78]         PSO      0.0517  0.3576  11.244  0.01267  200 000
Hsu and Liu (2007)d [79]        F-PDa    0.0523  0.3731  10.364  0.01265  N.A.
Bernardino et al. (2008) [68]   GA-AIS   0.0514  0.3505  11.661  0.01267  36 000
Montes and Coello (2008) [80]   ESb      0.0516  0.3553  11.397  0.01270  25 000
Aragon et al. (2010) [73]       TCA      0.0516  0.3551  11.384  0.01267  36 000
Dos Santos Coelho (2010) [81]   GQ-PSOc  0.0515  0.3525  11.538  0.01267  2000
Gandomi et al. (2013) [46]      BA       0.0516  0.3567  11.288  0.01267  50 000
Sadollah et al. (2013)d [77]    MBA      0.0516  0.3559  11.344  0.01267  7650
Gandomi (2014) [64]             ISA      N.A.e   N.A.    N.A.    0.01267  8000
Present study                   EBA      0.0519  0.3620  10.980  0.01267  5000

a Fuzzy proportional-derivative controller. b Evolution strategies. c Gaussian quantum. d Violated studies. e Not available.

5.4.3. Pressure vessel design problem

As seen in Fig. 13, a cylindrical pressure vessel is capped at both ends by hemispherical heads. The objective of this problem is to minimize the total cost, including welding, material and forming costs. The pressure vessel problem has four design variables (x1, x2, x3 and x4) and four constraints. The thickness of the shell is Ts (x1), the thickness of the head is Th (x2), the inner radius is R (x3) and the length of the cylindrical section of the vessel is L (x4). The variables x1 and x2 are discrete and take integer multiples of 0.0625 in. The boundaries of these parameters are 1 × 0.0625 ≤ x1, x2 ≤ 99 × 0.0625 and 10 ≤ x3, x4 ≤ 200. For more details refer to [25]. Table 14 shows the comparative results obtained by the competing studies.

Even though the pressure vessel problem is relatively harder to solve than problems with only continuous variables, EBA finds the minimum value of this problem together with [24,46,81], as seen in Table 14.
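As an arithmetic check on the spring design results of Table 13, the spring weight objective in its standard literature form, f = (N + 2)·D·d² (the four constraints are omitted here), closely reproduces the tabulated cost of the EBA solution:

```python
def spring_weight(d, D, N):
    """Standard tension/compression spring weight: f = (N + 2) * D * d^2,
    with d the wire diameter (x1), D the mean coil diameter (x2)
    and N the number of active coils (x3)."""
    return (N + 2.0) * D * d ** 2

# EBA solution from Table 13; Table 13 reports 0.01267 for this point:
print(round(spring_weight(0.0519, 0.3620, 10.980), 5))  # 0.01266
```

The tiny discrepancy in the last digit is a rounding artifact of the four-decimal variable values printed in the table.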
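The pressure vessel cost function of Section 5.4.3 can likewise be sketched, again using the standard formulation from the cited literature, with x1 and x2 snapped to integer multiples of 0.0625 in. as the problem requires (the four inequality constraints are omitted; the evaluation point below is the best known solution from the literature):

```python
def snap(x, step=0.0625):
    """Round a thickness to the nearest integer multiple of 0.0625 in.
    (the discrete handling for x1 = Ts and x2 = Th)."""
    return round(x / step) * step

def vessel_cost(x1, x2, x3, x4):
    """Standard pressure vessel cost (welding, material and forming):
    x1 = shell thickness Ts, x2 = head thickness Th,
    x3 = inner radius R, x4 = length of the cylindrical section L."""
    x1, x2 = snap(x1), snap(x2)
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4
            + 19.84 * x1 ** 2 * x3)

# Best known solution (x1 = 13*0.0625, x2 = 7*0.0625) yields a cost near
# the 6059.71 reported in Table 14:
print(round(vessel_cost(0.8125, 0.4375, 42.0984, 176.6366), 1))
```

Note that both thicknesses of this solution are already exact multiples of 0.0625, so the snapping is a no-op here; it matters only while the search is exploring arbitrary real values.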
Table 14
The comparative results on the pressure vessel design problem.

Study                            Method  Minimum  Mean     Maximum  Std. dev.  FEN
Mahdavi et al. (2007)b [82]      IHSa    5849.76  N.A.c    N.A.     N.A.       N.A.
Montes et al. (2007)b [83]       DE      6059.70  6059.70  N.A.     0          24 000
He and Wang (2007) [78]          PSO     6061.08  6147.13  6363.80  86.454     200 000
Bernardino et al. (2008) [68]    GA-AIS  6059.85  7388.16  6545.12  124.00     80 000
Cagnina et al. (2008) [24]       PSO     6059.71  N.A.     N.A.     N.A.       24 000
Montes and Coello (2008) [80]    ES      6059.75  6850.00  7332.88  426.00     25 000
Zahara and Kao (2009)b [71]      NM-PSO  5930.31  5946.79  5960.06  9.1614     80 000
Aragon et al. (2010) [73]        TCA     6390.55  7694.06  6737.06  357.00     80 000
Dos Santos Coelho (2010) [81]    GQ-PSO  6059.71  N.A.     N.A.     N.A.       8000
Datta and Figueira (2011)b [75]  ID-PSO  5850.38  N.A.     N.A.     N.A.       N.A.
Gandomi et al. (2013) [46]       BA      6059.71  6179.13  6318.95  137.22     375 000
Sadollah et al. (2013)b [77]     MBA     5889.32  6200.64  6392.50  160.34     70 650
Gandomi (2014) [64]              ISA     6059.71  6410.08  7332.84  384.6      5000
Present study                    EBA     6059.71  6173.67  6370.77  142.33     15 000

a Improved harmony search. b Violated studies. c Not available.

Fig. 13. Pressure vessel.

When the "mean" values in the table are considered, EBA is better than all of the studies except [78]. However, to reach such an objective value, He and Wang needed more than ten times the FEN that EBA required. Note that only the studies producing feasible solutions have been considered in the evaluation.

6. Conclusion

BA is one of the recently proposed heuristic algorithms and provides efficient, or at least adequate, solutions to different types of problems. In contrast to traditional methods such as gradient-based algorithms, applying heuristic algorithms to a problem is rather convenient for researchers owing to their lucidity and applicability.
However, as with other heuristics, BA also has some deficiencies in its exploration and exploitation capabilities, especially when solving unimodal and multimodal functions with more than one local minimum. In this study, the global and local search capabilities of the standard BA, which has been taken as the base algorithm without prior modification, have been enhanced through three approaches (IS1, IS2 and IS3). IS1 balances these search capabilities during the optimization process depending on the requirements of BA. IS2 contributes to the dispersion of BA's solutions across the search space. IS3 focuses on the exploitation capability rather than the exploration capability and strengthens it toward the end of the optimization process. In the experiments (see Section 5), the contribution of each modification has been analyzed on the different types of functions in Table 1 with various dimensions. Then, unconstrained unimodal and multimodal benchmark test functions, presented in Ref. [23], and three well-known constrained engineering design problems, which are rather tough to solve, have been used to investigate the superiority and robustness of the proposed method (EBA). Furthermore, to measure the efficiency of EBA relative to other existing improvement studies, an experiment involving five functions has been carried out. The innovative aspect of the proposed method is that it finds better fitness and cost values for unconstrained and constrained problems, respectively. When the proposed method is compared with the standard BA and GA on unconstrained functions in terms of the statistical t value, the proposed method is better than BA in almost all optimization processes and better than GA in most of them. On the other hand, the results obtained on the real-world problems reveal that EBA produces feasible solutions and minimum (also optimum) cost values without exceeding the boundaries.
As future work, it is planned to investigate the performance of EBA both on state-of-the-art benchmark functions, as in Refs. [84,85], and on Artificial Neural Network (ANN) training problems. Furthermore, as the optimization results of DE with the different constraint handling schemes presented in Ref. [65] are rather intriguing, investigating the performance of EBA with these schemes is also planned as future work.

References

[1] S. Rao, Engineering Optimization: Theory and Practice, New Age International, 1996.
[2] E.K.P. Chong, S.H. Zak, An Introduction to Optimization (Wiley-Interscience Series in Discrete Mathematics and Optimization), third ed., Wiley-Interscience, 2008.
[3] X. Yang, Nature-Inspired Metaheuristic Algorithms, second ed., Luniver Press, 2010.
[4] M.M. Noel, A new gradient based particle swarm optimization algorithm for accurate computation of global minimum, Appl. Soft Comput. 12 (1) (2012) 353–359.
[5] C. Blum, A. Roli, Metaheuristics in combinatorial optimization: overview and conceptual comparison, ACM Comput. Surv. 35 (3) (2003) 268–308.
[6] X.-S. Yang, Optimization and metaheuristic algorithms in engineering, in: X.-S. Yang, A.H. Gandomi, S. Talatahari, A.H. Alavi (Eds.), Metaheuristics in Water, Geotechnical and Transport Engineering, Elsevier, Oxford, 2013, pp. 1–23.
[7] A.H. Gandomi, X.-S. Yang, S. Talatahari, A.H. Alavi, Metaheuristic algorithms in modeling and optimization, in: A.H. Gandomi, X.-S. Yang, S. Talatahari, A.H. Alavi (Eds.), Metaheuristic Applications in Structures and Infrastructures, Elsevier, Oxford, 2013, pp. 1–24, http://dx.doi.org/10.1016/B978-0-12-398364-0.00001-2.
[8] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Neural Networks, 1995. Proceedings, IEEE International Conference on, vol. 4, 1995, pp. 1942–1948.
[9] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. B: Cybern. 26 (1) (1996) 29–41.
[10] S.-C. Chu, P.-W. Tsai, J.-S. Pan, Cat swarm optimization, in: Q. Yang, G. Webb (Eds.), PRICAI 2006: Trends in Artificial Intelligence, Vol. 4099 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006, pp. 854–858.
[11] X.-S. Yang, S. Deb, Cuckoo search via Lévy flights, in: Nature & Biologically Inspired Computing, 2009. NaBIC 2009. World Congress on, 2009, pp. 210–214.
[12] W.-T. Pan, A new fruit fly optimization algorithm: taking the financial distress model as an example, Knowl. Based Syst. 26 (2012) 69–74.
[13] K. Krishnanand, D. Ghose, Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions, Swarm Intell. 3 (2) (2009) 87–124.
[14] A.H. Gandomi, A.H. Alavi, Krill herd: a new bio-inspired optimization algorithm, Commun. Nonlinear Sci. Numer. Simul. 17 (12) (2012) 4831–4845.
[15] H. Duan, P. Qiao, Pigeon-inspired optimization: a new swarm intelligence optimizer for air robot path planning, Int. J. Intell. Comput. Cybern. 7 (1) (2014) 24–37.
[16] J. Bansal, H. Sharma, S. Jadon, M. Clerc, Spider monkey optimization algorithm for numerical optimization, Memet. Comput. 6 (1) (2014) 31–47.
[17] C. Sur, S. Sharma, A. Shukla, Egyptian vulture optimization algorithm – a new nature inspired meta-heuristics for knapsack problem, in: P. Meesad, H. Unger, S. Boonkrong (Eds.), The Ninth International Conference on Computing and Information Technology (IC2IT2013), Vol. 209 of Advances in Intelligent Systems and Computing, Springer, Berlin, Heidelberg, 2013, pp. 227–237.
[18] X.-S. Yang, A new metaheuristic bat-inspired algorithm, in: J. González, D. Pelta, C. Cruz, G. Terrazas, N. Krasnogor (Eds.), Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Vol. 284 of Studies in Computational Intelligence, Springer, Berlin, Heidelberg, 2010, pp. 65–74.
[19] W.-F. Gao, S.-Y. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (3) (2012) 687–697, http://dx.doi.org/10.1016/j.cor.2011.06.007.
[20] K. Tan, S. Chiam, A. Mamun, C. Goh, Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization, Eur. J. Oper. Res. 197 (2) (2009) 701–713, http://dx.doi.org/10.1016/j.ejor.2008.07.025.
[21] I. Fister Jr., D. Fister, X.-S. Yang, A hybrid bat algorithm, CoRR abs/1303.6310.
[22] A. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from weed colonization, Ecol. Inf. 1 (4) (2006) 355–366, http://dx.doi.org/10.1016/j.ecoinf.2006.07.003.
[23] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132, http://dx.doi.org/10.1016/j.amc.2009.03.090.
[24] L.C. Cagnina, S.C. Esquivel, Solving engineering optimization problems with the simple constrained particle swarm optimizer, Informatica 32 (3) (2008) 319–326.
[25] A. Gandomi, X.-S. Yang, Benchmark problems in structural optimization, in: S. Koziel, X.-S. Yang (Eds.), Computational Optimization, Methods and Algorithms, Vol. 356 of Studies in Computational Intelligence, Springer, Berlin, Heidelberg, 2011, pp. 259–281.
[26] A.H. Gandomi, X.-S. Yang, Chaotic bat algorithm, J. Comput. Sci. 5 (2) (2014) 224–232.
[27] Z. Geem, J. Kim, G. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68.
[28] G. Wang, L. Guo, A novel hybrid bat algorithm with harmony search for global numerical optimization, J. Appl. Math. 2013 (2013) 21.
[29] R.Y.M. Nakamura, L.A.M. Pereira, K.A. Costa, D. Rodrigues, J.P. Papa, X.S. Yang, BBA: a binary bat algorithm for feature selection, in: Graphics, Patterns and Images (SIBGRAPI), 2012 25th SIBGRAPI Conference on, 2012, pp. 291–297.
[30] L. Li, Y. Zhou, A novel complex-valued bat algorithm, Neural Comput. Appl. (2014) 1–13.
[31] E. Ali, Optimization of power system stabilizers using BAT search algorithm, Int. J. Electr. Power Energy Syst. 61 (2014) 683–690.
[32] O. Hasancebi, T. Teke, O. Pekcan, A bat-inspired algorithm for structural optimization, Comput. Struct. 128 (2013) 77–90, http://dx.doi.org/10.1016/j.compstruc.2013.07.006.
[33] O. Hasancebi, S. Carbas, Bat inspired algorithm for discrete size optimization of steel frames, Adv. Eng. Softw. 67 (2014) 173–185, http://dx.doi.org/10.1016/j.advengsoft.2013.10.003.
[34] J.-H. Lin, C.-W. Chou, C.-H. Yang, H.-L. Tsai, A chaotic Lévy flight bat algorithm for parameter estimation in nonlinear dynamic biological systems, J. Comput. Inf. Technol. 2 (2) (2012) 56–63.
[35] J.-H. Lin, C.-W. Chou, C.-H. Yang, H.-L. Tsai, A novel bat algorithm based on differential operator and Lévy flights trajectory, Comput. Intell. Neurosci. 2013 (2013) 13.
[36] A. Baziar, A. Kavoosi-Fard, J. Zare, A novel self adaptive modification approach based on bat algorithm for optimal management of renewable MG, J. Intell. Learn. Syst. Appl. 5 (1) (2013) 11–18.
[37] A.M. Taha, A. Tang, Bat algorithm for rough set attribute reduction, J. Theor. Appl. Inf. Technol. 51 (1) (2013) 1–8.
[38] P.-W. Tsai, J.S. Pan, B.Y. Liao, M.J. Tsai, V. Istanda, Bat algorithm inspired algorithm for solving numerical optimization problems, Appl. Mech. Mater. 148 (2012) 134–137.
[39] I. Fister Jr., D. Fister, I. Fister, Differential evolution strategies with random forest regression in the bat algorithm, in: Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO '13 Companion, 2013, pp. 1703–1706.
[40] S. Yılmaz, E.U. Kucuksille, Y. Cengiz, Modified bat algorithm, Elektron. Elektrotech. 20 (2) (2014) 71–78.
[41] X. Cai, L. Wang, Q. Kang, W. Qidi, Bat algorithm with Gaussian walk, Int. J. Bio-Inspired Comput. 6 (3) (2014) 166–174.
[42] S. Sabba, S. Chikhi, A discrete binary version of bat algorithm for multidimensional knapsack problem, Int. J. Bio-Inspired Comput. 6 (2) (2014) 140–152.
[43] S. Tabatabaei, A new stochastic framework for optimal generation scheduling considering wind power sources, J. Intell. Fuzzy Syst. 26 (3) (2014) 1571–1579.
[44] J. Guo, Y. Gao, G. Cui, The navigation of mobile robot based on hybrid Dijkstra algorithm, J. Comput. Inf. Syst. 10 (9) (2014) 3879–3886.
[45] X.-S. Yang, A. Hossein Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (5) (2012) 464–483.
[46] A. Gandomi, X.-S. Yang, A. Alavi, S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Comput. Appl. 22 (6) (2013) 1239–1255, http://dx.doi.org/10.1007/s00521-012-1028-9.
[47] X.S. Yang, S. Deb, S. Fong, Bat algorithm is better than intermittent search strategy, J. Multiple-Valued Logic Soft Comput. 22 (3) (2014) 223–237.
[48] W. Peres, E.J. de Oliveira, J.A.P. Filho, I.C. da Silva Junior, Coordinated tuning of power system stabilizers using bio-inspired algorithms, Int. J. Electr. Power Energy Syst. 64 (2015) 419–428.
[49] M. Sathya, M.M.T. Ansari, Load frequency control using bat inspired algorithm based dual mode gain scheduling of PI controllers for interconnected power system, Int. J. Electr. Power Energy Syst. 64 (2015) 365–374.
[50] D. Rodrigues, L.A. Pereira, R.Y. Nakamura, K.A. Costa, X.-S. Yang, A.N. Souza, J.P. Papa, A wrapper approach for feature selection based on bat algorithm and optimum-path forest, Expert Syst. Appl. 41 (5) (2014) 2250–2258.
[51] T. Bora, L. Coelho, L. Lebensztajn, Bat-inspired optimization approach for the brushless DC wheel motor problem, IEEE Trans. Magn. 48 (2) (2012) 947–950.
[52] S. Biswal, A. Barisal, A. Behera, T. Prakash, Optimal power dispatch using bat algorithm, in: Energy Efficient Technologies for Sustainability (ICEETS), 2013 International Conference on, 2013, pp. 1018–1023.
[53] K. Khan, A. Sahai, A comparison of BA, GA, PSO, BP and LM for training feed forward neural networks in e-learning context, Int. J. Intell. Syst. Appl. 4 (7) (2012) 23–29.
[54] S. Akhtar, A.R. Ahmad, E.M. Abdel-Rahman, A metaheuristic bat-inspired algorithm for full body human pose estimation, in: Computer and Robot Vision (CRV), 2012 Ninth Conference on, 2012, pp. 369–375.
[55] S. Sakthivel, R. Natarajan, P. Gurusamy, Application of bat optimization algorithm for economic load dispatch considering valve point effects, Int. J. Comput. Appl. 67 (11) (2013) 35–39.
[56] M. Marichelvam, T. Prabaharan, X.-S. Yang, M. Geetha, Solving hybrid flow shop scheduling problems using bat algorithm, Int. J. Logist. Econ. Glob. 5 (1) (2013) 15–29.
[57] P. Musikapun, P. Pongcharoen, Solving multi-stage multi-machine multi-product scheduling problem using bat algorithm, Int. Proc. Econ. Dev. Res. 35 (2012) 98–102.
[58] X. Cai, W. Li, L. Wang, Q. Kang, Q. Wu, X. Huang, Bat algorithm with Gaussian walk for directing orbits of chaotic systems, Int. J. Comput. Sci. Math. 5 (2) (2014) 198–208.
[59] S. Gholizadeh, A.M. Shahrezaei, Optimal placement of steel plate shear walls for steel frames by bat algorithm, Struct. Des. Tall Spec. Build. 24 (1) (2014) 1–18.
[60] M. Fenton, Bat natural history and echolocation, in: R. Brigham, K. Elisabeth, J. Gareth, P. Stuart, A. Herman (Eds.), Bat Echolocation Research: Tools, Techniques and Analysis, Bat Conservation International, 2004, pp. 2–6.
[61] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence, The 1998 IEEE International Conference on, 1998, pp. 69–73.
[62] S. Yılmaz, E.U. Kucuksille, Improved bat algorithm (IBA) on continuous optimization problems, Lect. Notes Softw. Eng. 1 (3) (2013) 279–283.
[63] A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. 11 (4) (2011) 3658–3670, http://dx.doi.org/10.1016/j.asoc.2011.01.037.
[64] A.H. Gandomi, Interior search algorithm (ISA): a novel approach for global optimization, ISA Trans. 53 (4) (2014) 1168–1183, http://dx.doi.org/10.1016/j.isatra.2014.03.018.
[65] A. Gandomi, X.-S. Yang, Evolutionary boundary constraint handling scheme, Neural Comput. Appl. 21 (6) (2012) 1449–1462.
[66] I. Fister Jr., D. Fister, I. Fister, Differential evolution strategies with random forest regression in the bat algorithm, in: Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO '13 Companion, 2013, pp. 1703–1706.
[67] H. Bernardino, I. Barbosa, A. Lemonge, A hybrid genetic algorithm for constrained optimization problems in mechanical engineering, in: Evolutionary Computation, 2007. CEC 2007. IEEE Congress on, 2007, pp. 646–653.
[68] H. Bernardino, I. Barbosa, A. Lemonge, L. Fonseca, A new hybrid AIS-GA for constrained optimization problems in mechanical engineering, in: Evolutionary Computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence). IEEE Congress on, 2008, pp. 1455–1462, http://dx.doi.org/10.1109/CEC.2008.4630985.
[69] E. Mezura-Montes, B. Hernández-Ocaña, Bacterial foraging for engineering design problems: preliminary results, in: Proceedings of the Fourth Mexican Congress on Evolutionary Computation (COMCEV 2008), 2008.
[70] M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic stochastic selection for constrained optimization, Inf. Sci. 178 (15) (2008) 3043–3074, http://dx.doi.org/10.1016/j.ins.2008.02.014.
[71] E. Zahara, Y.-T. Kao, Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems, Expert Syst. Appl. 36 (2, Part 2) (2009) 3880–3886, http://dx.doi.org/10.1016/j.eswa.2008.02.039.
[72] J. Zhang, C. Liang, Y. Huang, J. Wu, S. Yang, An effective multiagent evolutionary algorithm integrating a novel roulette inversion operator for engineering optimization, Appl. Math. Comput. 211 (2) (2009) 392–416, http://dx.doi.org/10.1016/j.amc.2009.01.048.
[73] V.S. Aragón, S.C. Esquivel, C.A.C. Coello, A modified version of a t-cell algorithm for constrained optimization problems, Int. J. Numer. Methods Eng. 84 (3) (2010) 351–378, http://dx.doi.org/10.1002/nme.2904.
[74] A. Kaveh, S. Talatahari, A novel heuristic optimization method: charged system search, Acta Mech. 213 (3–4) (2010) 267–289, http://dx.doi.org/10.1007/s00707-009-0270-4.
[75] D. Datta, J.R. Figueira, A real-integer-discrete-coded particle swarm optimization for design problems, Appl. Soft Comput. 11 (4) (2011) 3625–3633, http://dx.doi.org/10.1016/j.asoc.2011.01.034.
[76] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Mixed variable structural optimization using firefly algorithm, Comput. Struct. 89 (23–24) (2011) 2325–2336, http://dx.doi.org/10.1016/j.compstruc.2011.08.002.
[77] A. Sadollah, A. Bahreininejad, H. Eskandar, M. Hamdi, Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems, Appl. Soft Comput. 13 (5) (2013) 2592–2612, http://dx.doi.org/10.1016/j.asoc.2012.11.026.
[78] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (1) (2007) 89–99, http://dx.doi.org/10.1016/j.engappai.2006.03.003.
[79] Y.-L. Hsu, T.-C. Liu, Developing a fuzzy proportional-derivative controller optimization engine for engineering design optimization problems, Eng. Optim. 39 (6) (2007) 679–700, http://dx.doi.org/10.1080/03052150701252664.
[80] E. Mezura-Montes, C.A.C. Coello, An empirical study about the usefulness of evolution strategies to solve constrained optimization problems, Int. J. Gen. Syst. 37 (4) (2008) 443–473, http://dx.doi.org/10.1080/03081070701303470.
[81] L. dos Santos Coelho, Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems, Expert Syst. Appl. 37 (2) (2010) 1676–1683, http://dx.doi.org/10.1016/j.eswa.2009.06.044.
[82] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput. 188 (2) (2007) 1567–1579, http://dx.doi.org/10.1016/j.amc.2006.11.033.
[83] E. Mezura-Montes, C.A. Coello Coello, J. Velázquez-Reyes, L. Muñoz-Dávila, Multiple trial vectors in differential evolution for engineering design, Eng. Optim. 39 (5) (2007) 567–589, http://dx.doi.org/10.1080/03052150701364022.
[84] J.J. Liang, B.Y. Qu, P.N. Suganthan, Problem definitions and evaluation criteria for the CEC 2013 special session and competition on single objective real-parameter numerical optimization, Technical Report, Computational Intelligence Laboratory, Zhengzhou University, 2013.
[85] J.J. Liang, B.Y. Qu, P.N. Suganthan, Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, Technical Report, Computational Intelligence Laboratory, Zhengzhou University, 2013.