Applied Intelligence
https://doi.org/10.1007/s10489-020-01763-8
A modified brain storm optimization algorithm with a special
operator to solve constrained optimization problems
Adriana Cervantes-Castillo1 · Efrén Mezura-Montes1
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
This paper presents a novel approach based on the combination of the Modified Brain Storm Optimization algorithm
(MBSO) with a simplified version of the Constraint Consensus method as special operator to solve constrained numerical
optimization problems. Regarding the special operator, which aims to reach the feasible region of the search space, the
consensus vector becomes the feasibility vector computed for the hardest (most violated) constraint of the current infeasible solution;
the operations to mix the other feasibility vectors are thus avoided. This new combined algorithm, named MBSO-R+V,
solves a suite of eighteen test problems in ten and thirty dimensions. From a set of experiments related to the location and
frequency of application of the constraint consensus method within MBSO, a suitable design of the combined approach is
presented. The proposal shows encouraging final results when compared against state-of-the-art algorithms, showing
that it is viable to add special operators to improve the capabilities of swarm-intelligence algorithms when dealing with
continuous constrained search spaces.
Keywords Brain storm optimization algorithm · Constrained numerical optimization problems · Constraint-consensus
method · Feasibility vectors · ε-constrained method
1 Introduction
Nowadays, Constrained Numerical Optimization Problems
(CNOPs) can be found in different disciplines [5]. Such
problems, assuming minimization, can be defined as: minimize

$$f(\mathbf{x}) \tag{1}$$

subject to:

$$g_i(\mathbf{x}) \le 0, \quad i = 1, 2, \ldots, m$$
$$h_j(\mathbf{x}) = 0, \quad j = 1, 2, \ldots, p$$
where $f(\mathbf{x})$ is the objective function; $g_i(\mathbf{x}), i = 1, \ldots, m$ and $h_j(\mathbf{x}), j = 1, \ldots, p$ are the inequality and equality constraints, respectively; $\mathbf{x} = [x_1, x_2, \ldots, x_n]$ is a real-valued vector with the variables of the optimization problem, where each $x_k, k = 1, \ldots, n$ is bounded by $L_k \le x_k \le U_k$, and such bounds define the search space $S$. The set of solutions which satisfy the constraints of the problem defines the feasible region $F \subseteq S$.
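To make the formulation concrete, the following minimal sketch (not from the paper; the constraint functions and the equality tolerance delta are illustrative assumptions) shows how the sum of constraint violations of a candidate solution x can be computed for such a problem:

```python
import numpy as np

def total_violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Sum of constraint violations for a CNOP candidate solution.

    ineq_constraints: callables g_i with g_i(x) <= 0 when satisfied.
    eq_constraints:   callables h_j with h_j(x) == 0 when satisfied;
                      |h_j(x)| <= delta is tolerated (delta is an assumption).
    """
    phi = sum(max(0.0, g(x)) for g in ineq_constraints)
    phi += sum(max(0.0, abs(h(x)) - delta) for h in eq_constraints)
    return phi

# Illustrative problem: g(x) = x1 + x2 - 1 <= 0 and h(x) = x1 - x2 = 0
x = np.array([0.8, 0.5])
print(total_violation(x, [lambda v: v[0] + v[1] - 1.0],
                         [lambda v: v[0] - v[1]]))  # 0.3 + (0.3 - delta)
```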
Compared with the whole search space, the size of
the feasible region can be significantly small. Therefore,
finding it and keeping the search within it becomes an
important challenge in CNOPs [9, 11, 26, 33], mostly by
the fact that variation operators are blind with respect to
constraint satisfaction. This hard task has been tackled
by different optimization methods [1, 22, 28]. In this
regard, the specialized literature shows (1) methods which
attempt to maintain feasibility, as well as (2) methods
which help infeasible solutions become feasible so as
to reduce the number of evaluations required to complete
that task. The drawback with the first approach is the
requirement of feasible initial solutions [4, 5, 25]. Among
those approaches of the second class, the Constraint
Consensus (CC) method introduced by John W. Chinneck
in 2004 [3] is of special interest in this research. This
method calculates the feasibility vectors for all the violated
constraints of a given infeasible solution, i.e., the method
estimates search directions towards the satisfaction of
each constraint. Once all the feasibility vectors are com-
puted, they are combined to form a consensus vector which
contains both, the direction and the distance to generate a
solution close or inside the feasible region. In the following,
research works are presented where the CC method has been
used to solve constrained optimization problems.
Chinneck presented the Constraint Consensus (CC)
algorithm [3] as a method to move an arbitrary infeasible
solution from its position to another one relatively near or
even inside the feasible region, i.e., generating a feasible
solution. Problems with different types of constraints,
shapes, and complexities were solved by this method in the
software MINOS [23].
In 2008, Walid Ibrahim and John W. Chinneck presented
five new variants of the CC method [17], based on feasibility
distance (FDfar, FDnear) and based on direction (DBavg,
DBmax, DBbnd). Those new variants differ only in the
way they build the consensus vector. In those based on
feasibility distance, the consensus vector becomes the
longest or shortest feasibility vector, while in those based
on direction the consensus vector is built component-wise,
where the elements that make up the feasibility vector define
the winner direction (positive or negative). The authors
solved 231 constrained problems using the commercial
packages CONOPT 3.13 [7], SNOPT 6.1-1 [10], MINOS
5.5 [23], DONLP2 [30], and KNITRO 4.0 [34], showing
the DBmax variant as the best method to provide initial
start solutions. In 2013 Laurence Smith et al. [29] presented
some improvements to the CC method. A new idea to build
the consensus vector named SUM was introduced, where
the consensus vector becomes the average of the feasibility
vectors computed in the current solution. Additionally, they
presented the augmented version, where the main idea
is seeking feasibility using information from previously
estimated consensus vectors as well as previous information
of the violated constraints. In this way, fewer computations
need to be done in comparison with the other CC variants.
In the same year [28], the same authors presented the idea of
using the CC methods to identify disjoint feasible regions in
constrained problems using different multistart algorithms.
It is clear from the above review of the specialized literature
that different commercial programs with classical optimization
algorithms have been used in combination with the
CC algorithm to solve constrained problems, showing competitive
results. Furthermore, in recent years, since 2011
[13–15], the CC method has been combined with Evolu-
tionary Algorithms like the Genetic Algorithm (GA) and
Differential Evolution (DE), solving different test problems
[18, 20].
The first efforts focused on using CC as a pre-processing
method, where at each iteration of the GA or DE, the CC
method was applied to some infeasible solutions chosen
from the current population. After that, the worst solutions
in the same current population were replaced by those
obtained by the CC method. In 2016, Hamza et al. [14]
proposed a different way to incorporate the CC method
into the DE algorithm. The authors used the CC method
into the DE mutation operator with the aim to improve
the final results while saving evaluations. The final results
of this approach outperformed the standard DE as well as
other state-of-the-art algorithms showing better results and
reducing the computational cost. Sun et al. [31] added the
basic CC method to the artificial bee colony algorithm and
their results were compared against well-known approaches
showing a good performance. However, no other CC
variants were analyzed.
The motivation of this work is based on two issues:
(1) even though the CC method has been improved with different
variants, there are no proposals designed considering
a swarm-intelligence algorithm, and (2) the Brain Storm
Optimization Algorithm, which has provided competitive
results when dealing with CNOPs [2], has not been
enriched with studies on special operators for constrained
optimization.
Based on the two above-mentioned issues, this paper
presents a novel CC variant named R+V (restriction
more violated), which considers the generation of the consensus
vector in a simple and cheap way (considering it
will be added to a population-based approach) because
the feasibility vector is the one of the hardest constraint
in turn. Such CC variant is combined with a BSO vari-
ant called Modified BSO, where the research hypothesis
is that the addition of this special operator will lead to a
performance improvement when solving constrained opti-
mization problems with different features. The contribu-
tion of this work is then a first BSO-based approach to
deal with constrained search spaces now enriched with
a cheap special operator focused on improving infeasible
solutions to get them inside or at least closer to the feasible
region.
The new CC variant is compared with previous CC pro-
posals [17, 29] in a set of well-known test problems [20].
After that, based on an empirical validation, a suitable
incorporation of the R+V variant within MBSO is pre-
sented, where its location and application frequency are
defined. Finally, the proposed MBSO-R+V is compared
against state-of-the-art algorithms presenting a highly com-
petitive performance when solving CNOPs with different
characteristics.
The organization of this paper is as follows. Section 2
includes the original Brain Storm Optimization (BSO)
algorithm, its modified version (MBSO) adopted in this
work motivated by a previous study [2], and the introduction
of the CC methods under study. Section 3 describes the
proposed approach with the CC R+V version proposed in
this research and also the MBSO-R+V algorithm. Section 4
presents the experimental design, the corresponding results
and discussion. Finally, Section 5 draws the conclusions of
this research and outlines future work.
2 BSO algorithms and CC methods
2.1 BSO algorithm
In 2011, Yuhui Shi presented the Brain Storm Optimization
(BSO) algorithm [27], which is inspired by the brainstorming
process [24], where a group of people with different
backgrounds meets with the aim of generating and combining
different ideas to propose a solution to a specific
problem. Four rules form the base of the brainstorming
process:
– No criticism.
– All proposed ideas can be considered.
– A considerable number of ideas should be generated.
– New ideas can be generated by combining current ideas.
Following the four rules presented above, a brainstorming
process considers the following steps:
1. Consider a set of people with different backgrounds.
2. Generate as many ideas as possible based on Osborn's rules [24].
3. Based on the problem owner's opinion, the best ideas are chosen.
4. Those selected ideas are used as the base to create new ideas.
5. From the set of those new ideas, the best ones, based on the problem owner's opinion, are selected.
6. Select a group of ideas to generate new ones and avoid getting stuck with the same opinions.
7. The best ideas are selected by the problem owner.
Taking the above steps as a base, and considering that
an idea is a potential solution of an optimization
problem (a CNOP in this case), the BSO algorithm is
detailed in Algorithm 1, where input parameters are the
number of ideas NP, the number of clusters M, and
probabilities preplace, pone, poneCenter, and ptwoCenter,
while rand(0,1) returns a random real number between 0 and
1 with uniform distribution.
The BSO algorithm uses four main operators:
– GroupingOperator (NP, M): The k-means algorithm
is used to cluster the NP ideas into M clusters. The
center of each cluster is defined by the best idea, i.e.,
the best solution based on fitness. The goal here is to
bias the search to different areas of the space to locate
the promising ones. As previously mentioned, this
operator promotes the exploration of the search space.
– ReplacingOperator (x): The best idea (center) x in
the selected cluster is replaced by an idea generated at
random with uniform distribution. The aim is to avoid
local optima while keeping diversity in the set of
solutions (ideas).
– CreatingOperator (xs): A new idea is generated by
considering ideas from one or two chosen clusters. Such
current ideas can be the best ones, i.e., the centers
of the clusters, or just randomly chosen ideas of the
corresponding clusters. The new idea is created by
adding Gaussian noise to the selected idea as in (2)
and (3):
$$y_i = x_s + \xi \cdot N(\mu, \sigma) \tag{2}$$
$$\xi = \mathrm{logsig}\left(\frac{0.5 \cdot T - t}{k}\right) \cdot \mathrm{rand}(0, 1) \tag{3}$$
where yi represents the new idea, xs is the selected
idea (the center of the cluster or just a randomly chosen
solution); N(μ, σ) is a vector of Gaussian numbers
with mean μ and variance σ; T is the maximum
number of BSO iterations, t is the current iteration and
k determines the step size in the logsig() function,
and rand(0,1) returns a random value with uniform
distribution between 0 and 1.
– Combine (x1, x2): When two clusters are selected, the
selected ideas are combined into a single one, xs, as in (4):

$$x_s = R \times x_1 + (1 - R) \times x_2 \tag{4}$$

where R is a random number previously selected, and
x1 and x2 are the selected ideas from cluster one and
cluster two, respectively.
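As an illustration of the two generation mechanisms just described, the sketch below is a minimal, hedged implementation of (2)-(4); T, t, k, mu, and sigma are assumed inputs, and logsig() is taken to be the logistic sigmoid:

```python
import numpy as np

def logsig(z):
    # Logistic sigmoid; assumed to match the logsig() function in (3).
    return 1.0 / (1.0 + np.exp(-z))

def create_idea(x_s, T, t, k=20.0, mu=0.0, sigma=1.0):
    """New idea via (2)-(3): selected idea plus scaled Gaussian noise."""
    xi = logsig((0.5 * T - t) / k) * np.random.rand()   # step size of (3)
    return x_s + xi * np.random.normal(mu, sigma, size=x_s.shape)

def combine(x1, x2):
    """Combination of two selected ideas as in (4)."""
    R = np.random.rand()   # the random number R
    return R * x1 + (1.0 - R) * x2
```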
This algorithm has shown success in solving different
optimization problems. In fact, it has been extended to
multi-strategy versions with adaptive parameters [19], and also to a
parallel hardware implementation [16].
2.2 MBSO algorithm
The Modified Brain Storm Optimization algorithm (MBSO),
is an improved BSO version proposed in 2012 [12]. MBSO
introduces a new clustering method in the grouping operator
called Simple Grouping Method, which follows these
steps:
1. Select randomly M ideas which become the seeds of the
M clusters.
2. Compute the Euclidean distance from each idea in the
population to each cluster seed.
3. Compare all the M distances for the current idea and add
it to the nearest cluster.
4. Repeat until all NP ideas are grouped into the M clusters.
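A minimal sketch of this grouping method follows (the function name and the NumPy-based representation are assumptions, not the authors' code):

```python
import numpy as np

def simple_grouping(population, M, rng=np.random.default_rng()):
    """Simple Grouping Method: assign every idea to the nearest of M
    randomly selected seed ideas, using Euclidean distance."""
    NP = len(population)
    seeds = population[rng.choice(NP, size=M, replace=False)]
    clusters = [[] for _ in range(M)]
    for i, idea in enumerate(population):
        d = np.linalg.norm(seeds - idea, axis=1)   # distance to each seed
        clusters[int(np.argmin(d))].append(i)      # index of nearest cluster
    return clusters                                # idea indices per cluster

pop = np.random.uniform(-5.0, 5.0, size=(100, 10))  # NP=100 ideas, 10D
print([len(c) for c in simple_grouping(pop, M=5)])
```

Note that this assignment costs O(NM) distance computations, the complexity figure quoted later in Section 4.4.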
In the creating operator, MBSO introduces a new method
to generate the new ideas. The Gaussian noise is replaced
by the Idea Difference Strategy (IDS), which adds more
information of the current population to the idea to be
generated. The IDS uses (5) to create the new idea yi. based
on a current idea xs:
$$y_i = \begin{cases} \mathrm{rand}(L, H) & \text{if } \mathrm{rand}(0, 1) < p_r; \\ x_s + \mathrm{rand}(0, 1) \times (x_a - x_b) & \text{otherwise.} \end{cases} \tag{5}$$
where xa and xb are two ideas (solutions) chosen at random
with uniform distribution from the current population and used
in the vector difference, and pr is a parameter to simulate
open-mindedness in the creation of new ideas, similar to the
brainstorming process where all ideas are welcome.
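A hedged sketch of the IDS creation step in (5) could look as follows (the bounds L, H and the random-pair selection are assumptions consistent with the description above):

```python
import numpy as np

def ids_create(x_s, population, L, H, pr=0.005, rng=np.random.default_rng()):
    """Idea Difference Strategy of (5): with probability pr, an entirely new
    random idea (open-mindedness); otherwise, the base idea plus a randomly
    scaled difference of two ideas from the current population."""
    if rng.random() < pr:
        return rng.uniform(L, H, size=x_s.shape)
    a, b = rng.choice(len(population), size=2, replace=False)
    return x_s + rng.random() * (population[a] - population[b])
```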
MBSO was chosen in this research based on a previous
study where it outperformed other BSO variants in
constrained search spaces [2].
2.3 CC methods
The Constraint Consensus (CC) method uses the concept of
projection algorithms, which are effective at moving infeasible
solutions close to the feasible region. This movement
is performed through the feasibility vector computed for each violated
constraint; such a vector includes direction and distance
information related to its corresponding constraint.
In this way, if xs is an infeasible solution and gi(xs) its
constraint violation for constraint i, then the CC method
computes the feasibility vector (fvi) for that constraint
using (6).
$$fv_i = \frac{-g_i(x_s)}{\|\nabla g_i(x_s)\|^2} \nabla g_i(x_s) \tag{6}$$
where gi(xs) is the amount of constraint violation, ∇gi(xs)
is the constraint gradient, and ‖∇gi(xs)‖ is the gradient
length. Despite the fact that feasibility vectors are exact
just for linear constraints, they are suitable approximations
for non-linear constraints and they can be successfully
applied within stochastic search algorithms [14]. As it
was mentioned in Section 1, there are different ways to
generate the consensus vector based on the feasibility
vectors obtained by using (6).
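The following sketch computes the feasibility vector of (6) for one violated inequality constraint; since the paper does not specify how gradients are obtained, a forward-difference approximation is assumed here:

```python
import numpy as np

def numerical_gradient(g, x, eps=1e-8):
    # Forward-difference gradient estimate (an assumption, not prescribed
    # by the paper; analytic gradients could be used instead).
    gx, grad = g(x), np.zeros_like(x)
    for k in range(len(x)):
        xp = x.copy()
        xp[k] += eps
        grad[k] = (g(xp) - gx) / eps
    return grad

def feasibility_vector(g, x):
    """Feasibility vector of (6) for a violated constraint g(x) > 0."""
    grad = numerical_gradient(g, x)
    norm_sq = float(np.dot(grad, grad))
    if norm_sq == 0.0:            # degenerate gradient: no usable direction
        return np.zeros_like(x)
    return (-g(x) / norm_sq) * grad
```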
The basic CC approach is detailed in Algorithm 2, where
NINF is the number of violated constraints, sj is the
sum of the feasibility vector elements for variable j, nj
is the number of violated constraints where variable j is
considered, and t is the consensus vector. As it can be
noted, the basic CC method computes the elements of the
consensus vector by an average of those values of the
feasibility vectors of the corresponding violated constraints.
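As a sketch, the consensus step of the basic CC variant can be written as below; treating a zero component as "variable j not involved in the constraint" is an assumption made for brevity:

```python
import numpy as np

def basic_consensus(feasibility_vectors):
    """Basic CC consensus vector t: per variable j, the sum s_j of the
    feasibility-vector components divided by the count n_j of violated
    constraints involving that variable (zero components are assumed to
    mean the variable is not involved)."""
    fvs = np.asarray(feasibility_vectors)        # shape (NINF, n)
    s = fvs.sum(axis=0)                          # s_j
    n = np.count_nonzero(fvs, axis=0)            # n_j
    return np.where(n > 0, s / np.maximum(n, 1), 0.0)
```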
Besides the basic CC method, other variants are tested in
this work:
FDFAR: The feasibility vector with the largest distance
becomes the consensus vector. The aim is to reach the
feasible region faster [17].
DBMAX: In this case the signs of the elements of
the feasibility vectors are considered. If more positive
values are present for a given variable, the highest value
among them is taken as the corresponding value for the
consensus vector. The same applies if more negative
values are found. Ties consider the maximum values of
the positive and negative elements and they are averaged
to get the corresponding element of the consensus vector.
DBBND: Besides considering the signs of the feasibility
vector elements, the length of the movement and the
type of constraint (equality or inequality) are taken
into account (shorter movements and larger movements,
respectively).
AUGMENTED: This variant adopts a predictor-corrector
approach [29], where the predictor is the consensus vector
obtained by the basic CC variant. The corrector is formed
by the average of the relaxation factors computed inde-
pendently for each violated constraint and it is used to
adjust the length of the vector without modifying its
direction.
3 Proposed approach
3.1 R+V: a new constraint consensus method variant
Each one of the CC variants discussed in Section 1 calcu-
lates the feasibility vector for each violated constraint of a
given solution (as in (6)). Thus, computing the gradient is
mandatory in this step, adding computational effort mostly
when the number of constraints associated with the problem
increases. A new CC variant called R+V (restriction more
violated) is proposed in this work, where the consensus
vector only includes the feasibility vector of the hardest
constraint in turn, i.e., the constraint with the highest vio-
lation. In consequence, just the gradient of such constraint
is computed, regardless of the feasibility information of the
remaining constraints. In other words, besides computing
just one feasibility vector, the consensus step is avoided
because the only feasibility vector is used to reduce the
infeasibility of a solution and such action saves computa-
tional time with respect to previous CC variants. Algorithm
3 shows the R+V steps.
Table 1 MBSO parameter values used in the experiments
Parameter Value
N 100
M 5
pr 0.005
p-replace 0.2
p-one 0.8
p-one-center 0.4
p-two-center 0.5
In the hardestConstraint() method, the constraint with the
highest violation amount is chosen (line 3 in Algorithm
3). The feasibility vector for such constraint becomes the
consensus vector in the R+V variant (line 4 in Algorithm
3) to compute the movement of the solution (line 5 in
Algorithm 3). There are two possible stop conditions: (1)
when the consensus vector length is less than a specified
tolerance α (line 5 in Algorithm 3), or (2) reaching a
pre-defined number of iterations μ (line 11 in Algorithm 3).
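Putting the pieces together, a minimal sketch of Algorithm 3 under the assumptions above (constraints given as callables with g(x) ≤ 0 when satisfied, and the feasibility_vector() helper sketched in Section 2.3) might read:

```python
import numpy as np

def r_plus_v(x, constraints, alpha=1e-6, mu=5):
    """R+V sketch: move x along the feasibility vector of only the most
    violated constraint, for at most mu iterations (stop condition 2) or
    until the vector length drops below alpha (stop condition 1)."""
    for _ in range(mu):
        violations = [max(0.0, g(x)) for g in constraints]
        if max(violations) == 0.0:                 # x is already feasible
            return x
        hardest = constraints[int(np.argmax(violations))]
        t = feasibility_vector(hardest, x)         # consensus vector == one fv
        if np.linalg.norm(t) < alpha:
            break
        x = x + t
    return x
```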
3.2 ε-constrained method
The ε-constrained method, proposed by Takahama in [32],
is adopted as the constraint-handling technique in this work
to let the MBSO algorithm deal with a constrained
search space, because the original MBSO was proposed to
solve unconstrained optimization problems. This approach
is based on a problem transformation, i.e., the constrained
problem is transformed into an unconstrained optimization
problem. It compares the solutions based either on the
constraint violation φ(x) or the objective function value
f (x) according to an ε level. The ε-constrained method
emphasizes the constraint satisfaction followed by the
optimization of f (x). However, the method promotes a
balance of promising infeasible solutions by allowing
comparison of infeasible solutions close to the feasible
region based only on their objective function values. The
ε level comparison between two solutions (f (x1), φ(x1)),
(f (x2), φ(x2)) is calculated as indicated in (7) and (8):
$$(f(x_1), \phi(x_1)) <_\varepsilon (f(x_2), \phi(x_2)) \iff \tag{7}$$
$$\begin{cases} f(x_1) < f(x_2), & \text{if } \phi(x_1), \phi(x_2) \le \varepsilon; \\ f(x_1) < f(x_2), & \text{if } \phi(x_1) = \phi(x_2); \\ \phi(x_1) < \phi(x_2), & \text{otherwise} \end{cases} \tag{8}$$
When ε = 0, the constraint violation precedes the
objective function value in the comparison. In contrast,
when ε = ∞, only the objective function value is used to
compare the solutions, i.e., the feasibility information is not
considered.
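A compact sketch of the ε-level comparison in (7)-(8), as it would replace the plain objective-value comparison inside MBSO, is given below (the strict '<' comparisons follow the reconstruction above):

```python
def eps_better(f1, phi1, f2, phi2, eps):
    """True if solution 1 is preferred over solution 2 under the
    epsilon-level comparison of (7)-(8)."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2        # both "feasible enough": compare objectives
    return phi1 < phi2        # otherwise: compare constraint violations
```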
Table 2 Test problems adopted in the experiments with different dimensions (D), separable (S), non-separable (NS) or rotated (R) constraints; both 10D and 30D are solved in this research

Test function | Search space | Objective function | Equality constraints | Inequality constraints
C01 | [0, 10]D | Non-separable | – | 2-NS
C02 | [–5.12, 5.12]D | Separable | 1-S | 2-S
C03 | [–1000, 1000]D | Non-separable | 1-NS | –
C04 | [–50, 50]D | Separable | 2-S / 2-NS | –
C05 | [–600, 600]D | Separable | 2-S | –
C06 | [–600, 600]D | Separable | 2-R | –
C07 | [–140, 140]D | Non-separable | – | 1-S
C08 | [–140, 140]D | Non-separable | – | 1-R
C09 | [–500, 500]D | Non-separable | 1-S | –
C10 | [–500, 500]D | Non-separable | 1-R | –
C11 | [–100, 100]D | Rotated | 1-NS | –
C12 | [–1000, 1000]D | Separable | 1-NS | 1-S
C13 | [–500, 500]D | Separable | – | 2-S / 1-NS
C14 | [–1000, 1000]D | Non-separable | – | 3-S
C15 | [–1000, 1000]D | Non-separable | – | 3-R
C16 | [–10, 10]D | Non-separable | 2-S | 1-S / 1-NS
C17 | [–10, 10]D | Non-separable | 1-S | 2-NS
C18 | [–50, 50]D | Non-separable | 1-S | 1-S
Equation (9) shows how the ε level value is controlled:

$$\varepsilon(0) = \phi(x_\theta)$$
$$\varepsilon(t) = \begin{cases} \varepsilon(0)\left(1 - \frac{t}{T_c}\right)^{cp}, & 0 < t < T_c; \\ 0, & t \ge T_c. \end{cases} \tag{9}$$
where t represents the current iteration, Tc is the maximum
iteration, xθ is the top θ-th solution with θ = 0.2N, and cp
regulates the reduction of the constraint tolerance.
The comparison criteria in (7) and (8) replace the
comparison based just on the objective function value used
in Algorithm 1.
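The schedule in (9) can be sketched as follows (eps0 = φ(xθ) is assumed to be computed from the initial population):

```python
def eps_level(t, eps0, Tc, cp=0.5):
    """Epsilon level of (9): starts at eps0 = phi(x_theta) and decays to
    zero at generation Tc, after which only feasibility-first comparisons
    remain."""
    return eps0 * (1.0 - t / Tc) ** cp if t < Tc else 0.0
```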
4 Experiments and results
4.1 Experimental design and parameter tuning
To investigate the performance of the proposed MBSO-R+V
algorithm, four experiments were designed as follows:
1. To determine the quality of the R+V proposal with
respect to other CC variants.
2. To define the best location of the R+V method within
MBSO.
3. To set the R+V application frequency in MBSO.
4. To compare the combined algorithm MBSO-R+V
against state-of-the-art approaches for CNOPs.
Fig. 1 Experiment A, total number of improved solutions by each CC variant, except R+V
The parameter values used in the experiments for the R+V
variant were similar to those suggested in [14], where
the CC method was added to a population-based search
algorithm: α = 0.000001; μ = 5. For the MBSO algorithm
the parameters used were those proposed in [2], where
MBSO solved different types of CNOPs. The values are
in Table 1. The test functions solved in this research are
those proposed in [20] (10D and 30D) and their details
are summarized in Table 2. The maximum number of
evaluations was Maxfes = 200,000 for 10D and Maxfes = 600,000
for 30D. The value for the ε-constrained method
parameter cp was 0.5, as proposed in [2].
Table 3 Experiment A, results obtained by each CC variant in the 10D benchmark functions
F Infeasible BASIC FDFAR DBMAX AUGMENTED DBBND R+V
C01 1 1 1 1 1 1 1
C02 100 72 78 72 64 72 76
C03 100 100 100 100 98 100 100
C04 100 61 37 32 68 37 100
C05 100 67 65 71 64 67 59
C06 100 53 62 68 64 53 59
C07 65 65 65 65 65 65 65
C08 59 56 56 56 53 56 56
C09 100 46 46 46 100 46 46
C10 100 70 70 70 88 70 70
C11 100 100 100 100 100 100 100
C12 100 100 100 100 94 100 100
C13 100 100 100 100 98 100 100
C14 100 100 100 100 97 100 100
C15 100 57 50 55 63 55 49
C16 100 95 62 79 42 90 83
C17 100 53 52 52 50 50 49
C18 100 57 100 57 57 57 100
TOTAL 1625 1253 1244 1224 1266 1219 1313
Bold data indicate best results
Fig. 2 Experiment A, total number of improved solutions by AUGMENTED and R+V variants
4.2 Experiment A: comparison of R+V against other
constraint consensus variants
To assess the R+V performance against other CC variants
proposed in [17, 29] and mentioned in Section 2.3, the
following was carried out. For each test problem 100
initial solutions were generated at random with uniform
distribution. From those solutions, the infeasible ones were
considered as starting points for each CC variant compared.
The number of infeasible solutions for each problem is
shown in the second column of Table 3. Those numbers vary
because of the different types of constraints found in each
test problem (see Table 2 for details).
The remaining columns at the right-hand side of the
table present the success obtained by each variant, i.e., the
number of infeasible solutions which became feasible or
at least their sum of constraint violation was decreased
(i.e., they were located closer to the feasible region).
Both situations, feasibility and violation decreasing, were
considered as success because the goal was to detect the
ability of the operator to improve an infeasible solution.
Because similar results were obtained in 10D
and 30D, only those in 10D are presented. Aside from R+V,
and based on Table 3, the AUGMENTED version provided
the most competitive performance. To add clarity, such
comparison is graphically presented in Fig. 1. However, as
indicated in Fig. 2, R+V outperforms the AUGMENTED
variant. It is worth remarking that, based on Table 3, R+V
outperformed the other CC variants in test problem C04,
which is the one with the most equality constraints. The results
then suggest, for the test problems adopted in this work,
that letting the CC method discard the information of all
constraints except the most difficult one to satisfy, instead of
joining the violation information of all violated constraints
(even with the relaxation factors per constraint as in the
AUGMENTED CC variant), has two advantages: (1) it helps
the solution to get closer to the feasible region or become
feasible, and (2) it eliminates the cost related to the consensus
process, since just one feasibility vector is computed.
From the above discussion it can be concluded that R+V
is a competitive CC variant and it has the advantage that
it avoids the usage of the consensus step by adopting the
hardest constraint to be satisfied as the promising search
direction to get a feasible solution.
4.3 Experiment B: locating the R+V variant
into the MBSO algorithm
Having evidence about the competitive performance of the
R+V CC variant, the next phase consists of finding a suitable
combination of this method as a special operator within
MBSO. In this sense, this experiment aimed to identify
the best location of R+V in MBSO. From Section 3, three
MBSO elements were considered: (1) the grouping operator,
(2) the replacing operator, and (3) the creating-combine operators.
Therefore, three experimental versions were designed.
1. Experimental version 1 (R+VE1): The R+V variant was
located within the MBSO algorithm before applying
the grouping operator. In this way, R+V acts only as
a preprocessing phase of the population which will
be used by the MBSO algorithm later. Five randomly
selected infeasible solutions are processed by the R+V
variant. The obtained solutions replace the original
input solutions in the current population.
2. Experimental version 2 (R+VE2): The R+V variant
is within the replacing operator. If the new solution
is infeasible, then the R+V variant is applied to such
solution before replacing it.

Table 4 Experiment B, 95%-confidence rank-sum Wilcoxon test results in 10D test problems

Versions Criteria Better Equal Worse Decision p-value
R+VE1 vs R+VE2 Best Results 3 15 0 = 0.974277525
R+VE1 vs R+VE3a 4 14 0 = 0.66585532
R+VE1 vs R+VE3b 2 15 0 = 0.961219496
R+VE1 vs R+VE3c 4 14 0 = 1
R+VE1 vs R+VE2 Average Results 2 15 1 = 0.987376927
R+VE1 vs R+VE3a 3 14 1 = 0.874296698
R+VE1 vs R+VE3b 2 16 0 = 0.824715242
R+VE1 vs R+VE3c 4 14 0 = 0.447628106

Table 5 Experiment B, 95%-confidence rank-sum Wilcoxon test results in 30D test problems

Versions Criteria Better Equal Worse Decision p-value
R+VE1 vs R+VE2 Best Results 3 15 0 = 0.843011125
R+VE1 vs R+VE3a 4 13 1 = 0.679885581
R+VE1 vs R+VE3b 5 12 1 = 0.800171553
R+VE1 vs R+VE3c 4 12 2 = 0.861838193
R+VE1 vs R+VE2 Average Results 3 15 0 = 0.65591049
R+VE1 vs R+VE3a 5 13 0 = 0.65591049
R+VE1 vs R+VE3b 4 12 2 = 1
R+VE1 vs R+VE3c 6 12 0 = 0.987378551
3. Experimental version 3: Three situations were observed.
Considering that the crossover operator
in the MBSO algorithm in (5) is similar to that of
Differential Evolution, where a base vector plus a
difference vector is computed, the following three
places are of interest to apply the R+V variant:
(a) Experimental version 3a (R+VE3a): The R+V
variant acts on the base idea xs before it is used to
generate the new solution.
(b) Experimental version 3b (R+VE3b): With xa and
xb being the difference ideas, the R+V variant acts on idea
xa, which provides the direction in such difference
ideas.
(c) Experimental version 3c (R+VE3c): The R+V
variant acts on both the base idea xs and the difference
idea xa used in the crossover operator.

Fig. 3 Experiment B, number of 10D test problems where each version was better than the other ones, based on the median value
The 95%-confidence rank-sum Wilcoxon test was applied
to the final results of a set of 30 independent runs for each
algorithm version. The results are shown in Tables 4 (10D)
and 5 (30D), where the R+VE1 version was adopted as the
base algorithm for the statistical test. Those results suggest
no significant differences among versions, i.e., the R+V
variant helps MBSO regardless of its position in the algorithm.
However, R+VE1 was slightly better than the compared
versions.
Such behavior can be clearly observed in Figs. 3 and 4,
where the number of test functions where the median value
of each version is better than those of the other versions
is graphically presented. Based on such figures, R+VE1 is
better, particularly in 30D problems, i.e., the most difficult
to solve. From the results in this experiment B, the R+V
version will be located before the grouping operator in this
research.

Fig. 4 Experiment B, number of 30D test problems where each version was better than the other ones, based on the median value

Fig. 5 Experiment C, R+V applied during all the search process in representative 10D test problem C05 (constraint violation degree and number of feasible points across generations, with R+V application marks)
Despite the fact that R+V benefits MBSO in all the
positions above mentioned, it is worth remarking that, once
the dimensions in the constrained search space increase,
the R+V usage is more convenient before the variation
operators and the replacement process. Such behavior
differs from that observed in other approaches where the CC
method has been adopted, as is the case for differential
evolution in [14], where the CC method is considered within
the mutation operator.
4.4 Experiment C: R+V frequency application
within MBSO
To analyze the frequency of application for the R+V variant
within MBSO, the expected behavior of a nature-inspired
search algorithm when solving CNOPs was considered.
Fig. 6 Experiment C, average evaluations required by the algorithm to approximate the best known solution in the whole benchmark, where R+V was applied every 5, 10, 15, 20, 25, 30, 35, 40, 45 and 50 generations during the first 15% of total generations of the algorithm in 10D and 30D test problems
Such behavior states that most infeasible solutions are
present at the beginning of the search. As the process
advances, the effect of the constraint-handling technique
allows more feasible solutions to be generated in the population.
Figure 5 presents such behavior using the MBSO algorithm
with the R+V variant along a single run on the representative
10D test problem C05.
Based on the aforementioned, the R+V variant was
applied only in the first 15% of the total number of
generations of the algorithm. However, the frequency of
application within that 15% remains to be determined.
Figure 6 reports the average evaluations required by the
algorithm to approximate the best known solution in the
whole benchmark when applying R+V every 5, 10, 15, 20,
25, 30, 35, 40, 45, and 50 generations in the first 15% of the
total generations of the algorithm.
From the results in Fig. 6, R+V saves more evaluations
when it is applied every 35 generations during the first 15%
of the total generations spent by the algorithm.
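In implementation terms, the application schedule selected in this experiment reduces to a simple gate like the following sketch (names are illustrative):

```python
def apply_r_plus_v(generation, total_generations,
                   early_fraction=0.15, frequency=35):
    """True when the R+V operator should run: only within the first 15%
    of the generations, and only every 35 generations (Experiment C)."""
    in_early_phase = generation <= early_fraction * total_generations
    return in_early_phase and generation % frequency == 0

# With 2000 generations, R+V runs at generations 35, 70, ..., 280.
print([g for g in range(1, 2001) if apply_r_plus_v(g, 2000)])
```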
To further analyze the positive effect of R+V within
MBSO, representative convergence plots are shown in
Figs. 7 and 8, for 10D and 30D test problems, respectively.
The positive effect of the R+V special operator allows
the approach to reach better results faster than the MBSO
version without it.
Regarding the computational complexity of MBSO-R+V,
the proposal has two important advantages: (1) since
the approach adopts MBSO and not BSO,
the O(n²) cost of the k-means algorithm is avoided, while the
MBSO's Simple Grouping Method is O(NM), where N is
the number of ideas and M is the number of groups; and (2)
as mentioned in Section 3.1, R+V, unlike other CC variants,
computes just one feasibility vector and also avoids the
consensus step, decreasing the operations required to obtain
the feasible direction.

Fig. 7 Experiment C, MBSO-R+V against MBSO 10D representative convergence plots
The pseudocode of the proposed MBSO-R+V is detailed
in Algorithm 4:
4.5 Experiment D: comparing MBSO-R+V against
state-of-the-art algorithms
Having the complete design of the proposed MBSO-R+V
algorithm, its performance is compared against state-of-the-
art algorithms. The results are shown in Tables 6–7. The
state-of-the-art algorithms compared are the following:
– IDFRD: Individual-dependent feasibility rule for con-
strained differential evolution [36].
– FRC-CEA: A feasible-ratio control technique for
constrained optimization [35].
– CoBe-MmDE: A multimeme DE algorithm empowered
by local search operators [6].
– EMODE: An enhanced Multi-operator DE [8].
– DEbavDBmax: A Constraint Consensus Mutation-
Based DE [14].
The 95%-confidence Kruskal-Wallis and the Bonferroni
post-hoc statistical tests were applied to the results in
Tables 6–7. Figure 9 includes such comparison, and it can be
seen that no significant differences were observed in either
10D or 30D with respect to all compared algorithms. The
statistical test results indicate that MBSO-R+V provides a
competitive performance against state-of-the-art algorithms
to solve different types of CNOPs. It is worth noting
that, with respect to the compared and recently proposed
approaches, MBSO-R+V does not require the problem
transformation [35], the modification of the constraint-
handling technique [36], the combination of different local
searches [6] or multiple operators [8]. Such requirements
might make them more difficult to either code or calibrate.
5 Conclusions and future work
This paper presented an improved brainstorm optimization
algorithm coupled with a simplified version of the
constraint consensus special operator to solve constrained
optimization problems. The new constraint consensus
version, named R+V, which is based on the search direction
of the hardest constraint to satisfy by the solution to be
updated, was compared against other Constraint Consensus
versions in thirty-six well-known constrained problems.
The results showed R+V as the most competitive variant,
even though just one feasibility vector is computed and the
consensus step and its cost are avoided. After getting a
competitive and low cost special operator, its incorporation
within the MBSO algorithm, which has provided a
competitive performance in constrained search spaces [2],
was presented. Based on empirical comparisons validated
by statistical tests, it was found that using the R+V variant
before the MBSO grouping operator, and just every 35
generations early in the search process only, provided better
results.

Fig. 8 Experiment C, MBSO-R+V against MBSO 30D representative convergence plots
This MBSO-R+V version was further compared against
five state-of-the-art algorithms for constrained optimization.
Such comparison indicated no significant differences of the
performance provided by MBSO-R+V with respect to those
obtained by the compared approaches. It is important to
remark that most algorithms used for comparison are based
on differential evolution, which has shown a particular
ability to provide highly competitive results when solving
CNOPs [21]. Moreover, the suitable addition of a simplified
variant of a special operator to the MBSO algorithm
kept its implementation simplicity when contrasted against
the compared approaches which require modifications to
the search algorithm like multiple variation operators [8],
modifications to the variation operators [14], multiple
local-search operators [6], modifications to the constraint-
handling technique [35], or using dynamic multi-objective
optimization concepts [36].
It has been shown in this research work that a suitable
special operator is able to significantly improve the search
ability of a particular swarm intelligence algorithm so as
to provide a similar performance with respect to DE-based
state-of-the-art proposals.
Based on the findings obtained in this work, the future
paths of research are: (1) the proposal of parameter control
techniques to deal with the proper MBSO parameters,
(2) the addition of R+V in other popular population-based
algorithms in constrained optimization like differential evo-
lution, (3) the study of other special operators coupled with
MBSO, and (4) considering multi-objective constrained
optimization problems.
Table 6 Experiment D, statistical results by MBSO-R+V and state-of-the-art algorithms (1/2)

F Algorithm 10D 30D
Mean Std Mean Std
C01 MBSO-R+V –7.44E-01 7.85E-03 –7.86E-01 1.99E-02
IDFRD –7.47E-01 1.87E-03 –8.19E-01 2.66E-03
FRC-CEA –7.47E-01 4.49E-13 –8.21E-01 1.88E-03
CoBe-MmDE –7.35E-01 2.72E-02 –7.37E-01 2.56E+00
DEbavDBmax –7.46E-01 2.55E-03 –8.14E-01 7.85E-03
E-MODE –7.47E-01 2.45E-16 –8.20E-01 2.86E-03
C02 MBSO-R+V –1.95E+00 3.77E-01 –1.91E+00 6.76E-01
IDFRD –2.25E+00 5.48E-02 –2.27E+00 2.31E-02
FRC-CEA –2.28E+00 3.76E-03 –2.27E+00 6.06E-03
CoBe-MmDE –2.07E+00 8.67E-02 –2.00E+00 1.43E-01
DEbavDBmax –2.20E+00 8.39E-02 –2.28E+00 3.15E-03
E-MODE –2.28E+00 2.54E-10 –2.28E+00 4.90E-03
C03 MBSO-R+V 2.43E-17 1.16E-16 5.98E+01 4.36E+01
IDFRD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
FRC-CEA 0.00E+00 0.00E+00 2.87E+01 1.22E-07
CoBe-MmDE 0.00E+00 0.00E+00 1.25E-05 1.02E-05
DEbavDBmax 0.00E+00 0.00E+00 6.38E-25 7.13E-25
E-MODE 0.00E+00 0.00E+00 3.12E-25 5.73E-25
C04 MBSO-R+V 8.19E-02 2.20E-01 1.94E+01 0.00E+00
IDFRD –1.00E-05 7.64E-13 –3.32E-06 2.52E-08
FRC-CEA 2.75E-05 9.14E-05 4.17E-03 4.17E-03
CoBe-MmDE –9.99E-06 4.60E-09 5.48E-02 1.83E-01
DEbavDBmax –1.00E-05 0.00E+00 –3.33E-06 2.77E-09
E-MODE –1.00E-05 0.00E+00 –3.33E-06 2.46E-16
C05 MBSO-R+V –3.88E+02 2.41E+02 –4.01E+02 1.20E+02
IDFRD –4.84E+02 3.03E-13 –4.84E+02 7.02E-11
FRC-CEA –4.84E+02 3.36E-02 –4.80E+02 2.45E+00
CoBe-MmDE –4.84E+02 2.42E-10 –1.89E+02 7.82E+01
DEbavDBmax –4.84E+02 4.95E-06 –4.84E+02 7.16E-09
E-MODE –4.84E+02 3.48E-13 –4.84E+02 2.04E-13
C06 MBSO-R+V –4.84E+02 2.76E+02 –3.30E+02 3.68E+02
IDFRD –5.79E+02 5.02E-07 –5.31E+02 1.10E-02
FRC-CEA –5.79E+02 2.63E-05 –5.31E+02 1.82E-02
CoBe-MmDE –5.79E+02 2.17E-07 –4.79E+02 7.42E+01
DEbavDBmax –5.76E+02 7.68E-07 –5.31E+02 2.35E-01
E-MODE –5.79E+02 3.25E-13 –5.31E+02 4.66E-10
C07 MBSO-R+V 1.59E-01 7.81E-01 3.19E-01 1.08E+00
IDFRD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
FRC-CEA 0.00E+00 0.00E+00 3.88E-01 1.05E+00
CoBe-MmDE 1.66E-01 8.14E-01 7.69E+01 8.05E+01
DEbavDBmax 1.25E-28 2.27E-28 1.59E-01 7.97E-01
E-MODE 0.00E+00 0.00E+00 1.61E-27 4.14E-27
C08 MBSO-R+V 9.89E+00 6.14E+00 3.19E+01 8.63E+01
IDFRD 8.34E+00 4.47E+00 0.00E+00 0.00E+00
FRC-CEA 3.30E+00 4.54E+00 1.43E+01 3.34E+01
CoBe-MmDE 6.48E+00 5.72E+00 1.17E+03 1.38E+03
DEbavDBmax 9.22E+00 3.60E+00 4.20E-26 4.92E-26
E-MODE 1.01E+01 3.03E+00 1.37E-27 4.09E-27
C09 MBSO-R+V 1.24E+10 6.07E+10 9.00E+10 4.39E+11
IDFRD 0.00E+00 0.00E+00 3.08E-01 1.54E+00
FRC-CEA 1.39E-01 1.10E+00 4.59E+01 3.75E+01
CoBe-MmDE 3.92E+00 1.65E+01 4.24E+05 1.30E+06
DEbavDBmax 4.54E-26 1.84E-25 4.30E-26 4.47E-26
E-MODE 0.00E+00 0.00E+00 9.28E-27 1.82E-26
Bold data indicate best results
Table 7 Experiment D, statistical results by MBSO-R+V and state-of-the-art algorithms (2/2)

F Algorithm 10D 30D
Mean Std Mean Std
C10 MBSO-R+V 1.65E+01 2.05E+01 9.93E+01 2.56E+02
IDFRD 0.00E+00 0.00E+00 3.13E+01 1.76E-01
FRC-CEA 1.42E+01 1.95E+01 9.05E+01 9.05E+01
CoBe-MmDE 3.48E+00 1.18E+01 1.27E+03 3.55E+03
DEbavDBmax 1.33E-26 3.34E-26 4.41E-20 1.21E-19
E-MODE 0.00E+00 0.00E+00 6.99E-27 1.43E-26
C11 MBSO-R+V –1.52E-03 3.36E-10 –1.48E-04 2.19E-04
IDFRD –1.52E-03 5.07E-11 –3.92E-04 3.65E-09
FRC-CEA –1.15E-02 4.01E-02 –3.72E-02 4.58E-01
CoBe-MmDE –1.52E-03 2.27E-10 –3.82E-04 –3.82E-04
DEbavDBmax –1.52E-03 1.46E-14 –3.92E-04 3.98E-10
E-MODE –1.52E-03 8.90E-17 –3.92E-04 1.07E-10
C12 MBSO-R+V –3.02E+01 1.27E+02 4.18E-01 1.95E+00
IDFRD –4.24E-01 2.90E+00 –1.99E-01 2.12E-04
FRC-CEA –2.17E+02 2.66E+02 –1.61E+02 3.26E+02
CoBe-MmDE -2.32E+01 6.80E+01 -1.49E-01 7.11E-02
DEbavDBmax -2.20E-01 6.62E-07 –1.99E-01 1.63E-09
E-MODE –9.46E+01 1.48E+02 –1.99E-01 2.90E-09
C13 MBSO-R+V –6.46E+01 2.12E+00 –6.06E+01 2.38E+00
IDFRD –6.84E+01 2.90E-14 –6.63E+01 3.00E+00
FRC-CEA –6.84E+01 3.51E-12 –6.84E+01 2.88E-01
CoBe-MmDE –5.72E+01 8.52E+00 –5.76E+01 2.56E+00
DEbavDBmax –6.75E+01 1.41E+00 –6.02E+01 5.07E+00
E-MODE –6.84E+01 2.97E-04 –6.52E+01 1.92E+00
C14 MBSO-R+V 4.69E-05 2.29E-04 1.24E+02 5.61E+02
IDFRD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
FRC-CEA 0.00E+00 0.00E+00 6.19E-01 6.20E-01
CoBe-MmDE 6.64E-01 1.52E+00 1.93E+06 4.99E+06
DEbavDBmax 3.30E-27 4.72E-27 1.01E-25 8.05E-26
E-MODE 0.00E+00 0.00E+00 2.08E-27 5.76E-27
C15 MBSO-R+V 8.74E+11 3.22E+12 1.11E+12 5.42E+12
IDFRD 2.94E-01 1.02E+00 2.21E+01 1.58E+00
FRC-CEA 1.51E+00 1.83E+00 2.16E+01 9.92E-05
CoBe-MmDE 1.87E-01 9.18E-01 3.35E+09 6.84E+09
DEbavDBmax 2.96E-25 1.09E-24 1.87E-22 2.97E-22
E-MODE 0.00E+00 0.00E+00 2.54E-27 6.81E-27
C16 MBSO-R+V 2.76E-01 3.66E-01 4.79E-02 2.02E-01
IDFRD 1.11E-02 1.92E-02 0.00E+00 0.00E+00
FRC-CEA 0.00E+00 0.00E+00 0.00E+00 0.00E+00
CoBe-MmDE 4.54E-01 5.12E-01 3.99E-03 1.28E-02
DEbavDBmax 0.00E+00 0.00E+00 0.00E+00 0.00E+00
E-MODE 0.00E+00 0.00E+00 0.00E+00 0.00E+00
C17 MBSO-R+V 4.10E+01 1.18E+02 1.88E+02 3.47E+02
IDFRD 1.44E-20 1.37E-20 7.46E-02 2.62E-01
FRC-CEA 0.00E+00 0.00E+00 7.02E+00 1.27E+01
CoBe-MmDE 0.00E+00 0.00E+00 6.49E-04 3.18E-03
DEbavDBmax 7.05E-19 1.08E-18 1.81E-12 5.28E-12
Table 7 (continued)
F Algorithm 10D 30D
Mean Std Mean Std
E-MODE 3.42E-30 1.71E-29 2.77E-21 8.44E-21
C18 MBSO-R+V 1.29E+02 4.19E+02 2.08E+02 9.56E+02
IDFRD 0.00E+00 0.00E+00 2.28E-29 7.14E-29
FRC-CEA 0.00E+00 0.00E+00 0.00E+00 0.00E+00
CoBe-MmDE 0.00E+00 0.00E+00 6.48E+01 1.46E+02
DEbavDBmax 3.89E-24 4.34E-24 2.83E-01 1.36E+00
E-MODE 1.53E-32 2.20E-32 1.34E-20 6.55E-20
Bold data indicate best results
Fig. 9 Experiment D, Kruskal-Wallis and Bonferroni post-hoc statistical tests; average values obtained in the objective function by each compared algorithm. In both 10D and 30D, no groups have mean ranks significantly different from MBSO-R+V.
Acknowledgments The first author acknowledges support from the
Mexican Council of Science and Technology (CONACyT) and the
University of Veracruz to pursue graduate studies at its Artificial
Intelligence Research Center. The second author acknowledges
support from CONACyT through project No. 220522.
Compliance with Ethical Standards
Conflict of interest The authors declare that they have no conflict of
interest.
References
1. Bonyadi MR, Michalewicz Z (2014) On the edge of feasibility:
a case study of the particle swarm optimizer. In: 2014 IEEE
Congress on evolutionary computation (CEC)
2. Cervantes-Castillo A, Mezura-Montes E (2016) A study of
constraint-handling techniques in brain storm optimization. In:
2016 IEEE Congress on evolutionary computation (CEC),
pp 3740–3746
3. Chinneck JW (2004) The constraint consensus method for finding
approximately feasible points in nonlinear programs. INFORMS J
Comput 16(3):255–265
4. Chinneck JW (2008) Feasibility and infeasibility in optimization:
algorithms and computational methods. Springer Science +
Business Media LLC
5. Datta R, Deb K (2014) Evolutionary constrained optimization.
Springer Publishing Company, Incorporated
6. Domínguez-Isidro S, Mezura-Montes E (2018) A cost-benefit
local search coordination in multimeme differential evolution
for constrained numerical optimization problems. Swarm Evol
Comput 39:249–266
7. Drud AS (1994) CONOPT: a large-scale GRG code. ORSA J Comput
6(2):207–216
8. Elsayed S, Sarker R, Coello CC (2016) Enhanced multi-operator
differential evolution for constrained optimization. In: 2016 IEEE
Congress on evolutionary computation (CEC), pp 4191–4198
9. Elsayed SM, Sarker RA, Essam DL (2011) Multi-operator based
evolutionary algorithms for solving constrained optimization
problems. Comput Oper Res 38(12):1877–1896
10. Gill PE, Murray W, Saunders MA (1997) SNOPT: an SQP algorithm
for large-scale constrained optimization. Report SOL 97-3,
Systems Optimization Laboratory, Stanford University
11. Gong W, Cai Z, Liang D (2015) Adaptive ranking mutation
operator based differential evolution for constrained optimization.
IEEE Trans Cybern 45(4):716–727
12. Zhan Zh, Zhang J, Shi Yh, Liu Hl (2012) A modified brain
storm optimization. In: 2012 IEEE Congress on evolutionary
computation (CEC), pp 1–8
13. Hamza NM, Elsayed SM, Essam DL, Sarker RA (2011)
Differential evolution combined with constraint consensus for
constrained optimization. In: 2011 IEEE Congress of evolutionary
computation (CEC), pp 865–872
14. Hamza NM, Essam DL, Sarker RA (2016) Constraint consensus
mutation-based differential evolution for constrained optimiza-
tion. IEEE Trans Evol Comput 20(3):447–459
15. Hamza NM, Sarker RA, Essam DL (2013) Differential evo-
lution with multi-constraint consensus methods for constrained
optimization. J Glob Optim 57(2):583–611
16. Hassanein A, El-Abd M, Damaj I, Ur-Rehmana H (2020)
Parallel hardware implementation of the brain storm optimization
algorithm using FPGAs. Microprocess Microsyst 74:103005
17. Ibrahim W, Chinneck JW (2008) Improving solver success in
reaching feasibility for sets of nonlinear constraints. Comput
Oper Res 35(5):1394–1411. Part Special Issue: Algorithms and
Computational Methods in Feasibility and Infeasibility
18. Liang JJ, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan
PN, Coello Coello CA, Deb K (2005) Problem definitions and
evaluation criteria for the CEC 2006 special session on constrained
real-parameter optimization. Technical report, Nanyang Tech-
nological University, Singapore, December. Available at:
http://www.lania.mx/~emezura
19. Liu J, Peng H, Wu Z, Chen J, Deng C (2020) Multi-strategy
brain storm optimization algorithm with dynamic parameters
adjustment. Appl Intell 50:1289–1315
20. Mallipeddi R, Suganthan PN (2010) Problem definitions and eval-
uation criteria for the CEC 2010 competition on constrained
real-parameter optimization. Technical Report, Nanyang Techno-
logical University, Singapore
21. Mezura-Montes E, Coello-Coello CA (2011) Constraint-handling
in nature-inspired numerical optimization: past, present and
future. Swarm Evol Comput 1:173–194
22. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms
for constrained parameter optimization problems. Evol Comput
4(1):1–32
23. Saunders MA, Murtagh BA (1993) MINOS 5.4 user's guide
(preliminary). Technical Report SOL 83-20R, Stanford University
24. Osborn AF, Bristol LH (1979) Applied imagination: principles
and procedures of creative problem-solving, 3rd edn. Scribners,
New York. Includes index
25. Rao SS (2009) Engineering optimization: theory and practice.
Wiley
26. Sarker RA, Elsayed SM, Ray T (2014) Differential evolution with
dynamic parameters selection for optimization problems. IEEE
Trans Evol Comput 18(5):689–707
27. Shi Y (2011) Brain storm optimization algorithm. In: Proc. 2nd
Int. conf. on swarm intelligence, pp 303–309
28. Smith L, Chinneck JW, Aitken V (2013) Constraint consensus
concentration for identifying disjoint feasible regions in nonlinear
programmes. Optim Methods Softw 28(2):339–363
29. Smith L, Chinneck JW, Aitken V (2013) Improved constraint
consensus methods for seeking feasibility in nonlinear programs.
Comput Optim Appl 54(3):555–578
30. Spellucci P (1998) An sqp method for general nonlinear programs
using only equality constrained subproblems. Math Program
82(3):413–448
31. Sun L, Wu Y, Liang X, He M, Chen H (2019) Constraint
consensus based artificial bee colony algorithm for constrained
optimization problems. Discrete Dynamics in Nature and Society,
Article ID 6523435, 24 pages
32. Takahama T, Sakai S, Iwane N (2006) Solving nonlinear
constrained optimization problems by the epsilon constrained
differential evolution. In: 2006 IEEE International conference on
systems, man and cybernetics, vol 3, pp 2322–2327
33. Takahama T, Sakai S (2010) Constrained optimization by
the epsilon constrained differential evolution with an archive
and gradient-based mutation. In: WCCI 2010 IEEE World
Congress on computational intelligence July, 18-23, 2010 - CCIB,
Barcelona
34. Waltz RA, Nocedal J (2003) KNITRO users manual. Technical
report OTC 2003 05. Optimization Technology Center, Northwest-
ern University, Evanston, IL, USA
35. Jiao R, Zeng S, Li C (2019) A feasible-ratio control technique for
constrained optimization. Inform Sci 502:201–217
36. Wang B-C, Feng Y, Li H-X (2020) Individual-dependent
feasibility rule for constrained differential evolution. Inform Sci
506:174–195
Publisher’s note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Adriana Cervantes-Castillo
was born in Alto Tío Diego,
Veracruz, México, in 1982.
She received the BSc in
Computer Science from
the University of Veracruz,
Xalapa, in 2008, the MSc in
Artificial Intelligence from
the University of Veracruz in
2014, and the PhD in Arti-
ficial Intelligence from the
University of Veracruz in
2018. Her research interests
are in the design, study, and
application of nature-inspired
meta-heuristic algorithms to solve complex optimization problems.
Dr. Efrén Mezura-Montes
is a full-time researcher at
the Artificial Intelligence
Research Center, Univer-
sity of Veracruz, MEXICO.
His research interests are
the design, analysis and
application of bio-inspired
algorithms to solve complex
optimization problems. He
has published over 145 papers
in peer-reviewed journals and
conferences. He also has one
edited book and over 11 book
chapters published by interna-
tional publishing companies.
From his work, Google Scholar reports over 5,800 citations.
I0341042048
I0341042048I0341042048
I0341042048
 
A Hybrid Data Clustering Approach using K-Means and Simplex Method-based Bact...
A Hybrid Data Clustering Approach using K-Means and Simplex Method-based Bact...A Hybrid Data Clustering Approach using K-Means and Simplex Method-based Bact...
A Hybrid Data Clustering Approach using K-Means and Simplex Method-based Bact...
 
Selecting the best stochastic systems for large scale engineering problems
Selecting the best stochastic systems for large scale engineering problemsSelecting the best stochastic systems for large scale engineering problems
Selecting the best stochastic systems for large scale engineering problems
 
Solving Bipolar Max-Tp Equation Constrained Multi-Objective Optimization Prob...
Solving Bipolar Max-Tp Equation Constrained Multi-Objective Optimization Prob...Solving Bipolar Max-Tp Equation Constrained Multi-Objective Optimization Prob...
Solving Bipolar Max-Tp Equation Constrained Multi-Objective Optimization Prob...
 
SOLVING BIPOLAR MAX-TP EQUATION CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION PROB...
SOLVING BIPOLAR MAX-TP EQUATION CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION PROB...SOLVING BIPOLAR MAX-TP EQUATION CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION PROB...
SOLVING BIPOLAR MAX-TP EQUATION CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION PROB...
 
F5233444
F5233444F5233444
F5233444
 
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERSFIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
 
Pak eko 4412ijdms01
Pak eko 4412ijdms01Pak eko 4412ijdms01
Pak eko 4412ijdms01
 
A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza...
A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza...A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza...
A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza...
 

More from Samantha Vargas

Narrative College Essay. 21 Narrative Essay Examples College Background - Exam
Narrative College Essay. 21 Narrative Essay Examples College Background - ExamNarrative College Essay. 21 Narrative Essay Examples College Background - Exam
Narrative College Essay. 21 Narrative Essay Examples College Background - ExamSamantha Vargas
 
Handwriting Paper - Free Printable Handwriting Pap
Handwriting Paper - Free Printable Handwriting PapHandwriting Paper - Free Printable Handwriting Pap
Handwriting Paper - Free Printable Handwriting PapSamantha Vargas
 
Freebie Space Themed Writing Paper By Teaching
Freebie Space Themed Writing Paper By TeachingFreebie Space Themed Writing Paper By Teaching
Freebie Space Themed Writing Paper By TeachingSamantha Vargas
 
Pin By Monika Fuchs On Stationery Writing Paper,
Pin By Monika Fuchs On Stationery Writing Paper,Pin By Monika Fuchs On Stationery Writing Paper,
Pin By Monika Fuchs On Stationery Writing Paper,Samantha Vargas
 
Fireman Design Paper Page Fireman Crafts, Fireman,
Fireman Design Paper Page Fireman Crafts, Fireman,Fireman Design Paper Page Fireman Crafts, Fireman,
Fireman Design Paper Page Fireman Crafts, Fireman,Samantha Vargas
 
Self Reflection Paper Example Reflective Essa
Self Reflection Paper Example Reflective EssaSelf Reflection Paper Example Reflective Essa
Self Reflection Paper Example Reflective EssaSamantha Vargas
 
Photo Essay Examples - MosOp. Online assignment writing service.
Photo Essay Examples - MosOp. Online assignment writing service.Photo Essay Examples - MosOp. Online assignment writing service.
Photo Essay Examples - MosOp. Online assignment writing service.Samantha Vargas
 
Een Inleiding Schrijven Voor Een Argumentatief Essay
Een Inleiding Schrijven Voor Een Argumentatief EssayEen Inleiding Schrijven Voor Een Argumentatief Essay
Een Inleiding Schrijven Voor Een Argumentatief EssaySamantha Vargas
 
009 High School Vs College Essay Com. Online assignment writing service.
009 High School Vs College Essay Com. Online assignment writing service.009 High School Vs College Essay Com. Online assignment writing service.
009 High School Vs College Essay Com. Online assignment writing service.Samantha Vargas
 
Analytical Essay Advanced English. Online assignment writing service.
Analytical Essay Advanced English. Online assignment writing service.Analytical Essay Advanced English. Online assignment writing service.
Analytical Essay Advanced English. Online assignment writing service.Samantha Vargas
 
Transitional Words Transition Words, Transitio
Transitional Words Transition Words, TransitioTransitional Words Transition Words, Transitio
Transitional Words Transition Words, TransitioSamantha Vargas
 
How To Write Assignment Prime. Online assignment writing service.
How To Write Assignment Prime. Online assignment writing service.How To Write Assignment Prime. Online assignment writing service.
How To Write Assignment Prime. Online assignment writing service.Samantha Vargas
 
Visual Text Analysis Essay Examples. How To Write A
Visual Text Analysis Essay Examples. How To Write AVisual Text Analysis Essay Examples. How To Write A
Visual Text Analysis Essay Examples. How To Write ASamantha Vargas
 
What Expectations Should You Have LetS Get Wri
What Expectations Should You Have LetS Get WriWhat Expectations Should You Have LetS Get Wri
What Expectations Should You Have LetS Get WriSamantha Vargas
 
Writing An Analytical Essay. How To Write An Analytical Essay Ex
Writing An Analytical Essay. How To Write An Analytical Essay ExWriting An Analytical Essay. How To Write An Analytical Essay Ex
Writing An Analytical Essay. How To Write An Analytical Essay ExSamantha Vargas
 
300 Word Essay - DexteroiChapman. Online assignment writing service.
300 Word Essay - DexteroiChapman. Online assignment writing service.300 Word Essay - DexteroiChapman. Online assignment writing service.
300 Word Essay - DexteroiChapman. Online assignment writing service.Samantha Vargas
 
Composition Topics For Grade 5 - Dorian Whitehea
Composition Topics For Grade 5 - Dorian WhiteheaComposition Topics For Grade 5 - Dorian Whitehea
Composition Topics For Grade 5 - Dorian WhiteheaSamantha Vargas
 
Essay Writing Competition 2015. Online assignment writing service.
Essay Writing Competition 2015. Online assignment writing service.Essay Writing Competition 2015. Online assignment writing service.
Essay Writing Competition 2015. Online assignment writing service.Samantha Vargas
 
😝 Descriptive Essay About A Person You Admire. Descri.pdf
😝 Descriptive Essay About A Person You Admire. Descri.pdf😝 Descriptive Essay About A Person You Admire. Descri.pdf
😝 Descriptive Essay About A Person You Admire. Descri.pdfSamantha Vargas
 
What Is Writing A Review Of Related Literature
What Is Writing A Review Of Related LiteratureWhat Is Writing A Review Of Related Literature
What Is Writing A Review Of Related LiteratureSamantha Vargas
 

More from Samantha Vargas (20)

Narrative College Essay. 21 Narrative Essay Examples College Background - Exam
Narrative College Essay. 21 Narrative Essay Examples College Background - ExamNarrative College Essay. 21 Narrative Essay Examples College Background - Exam
Narrative College Essay. 21 Narrative Essay Examples College Background - Exam
 
Handwriting Paper - Free Printable Handwriting Pap
Handwriting Paper - Free Printable Handwriting PapHandwriting Paper - Free Printable Handwriting Pap
Handwriting Paper - Free Printable Handwriting Pap
 
Freebie Space Themed Writing Paper By Teaching
Freebie Space Themed Writing Paper By TeachingFreebie Space Themed Writing Paper By Teaching
Freebie Space Themed Writing Paper By Teaching
 
Pin By Monika Fuchs On Stationery Writing Paper,
Pin By Monika Fuchs On Stationery Writing Paper,Pin By Monika Fuchs On Stationery Writing Paper,
Pin By Monika Fuchs On Stationery Writing Paper,
 
Fireman Design Paper Page Fireman Crafts, Fireman,
Fireman Design Paper Page Fireman Crafts, Fireman,Fireman Design Paper Page Fireman Crafts, Fireman,
Fireman Design Paper Page Fireman Crafts, Fireman,
 
Self Reflection Paper Example Reflective Essa
Self Reflection Paper Example Reflective EssaSelf Reflection Paper Example Reflective Essa
Self Reflection Paper Example Reflective Essa
 
Photo Essay Examples - MosOp. Online assignment writing service.
Photo Essay Examples - MosOp. Online assignment writing service.Photo Essay Examples - MosOp. Online assignment writing service.
Photo Essay Examples - MosOp. Online assignment writing service.
 
Een Inleiding Schrijven Voor Een Argumentatief Essay
Een Inleiding Schrijven Voor Een Argumentatief EssayEen Inleiding Schrijven Voor Een Argumentatief Essay
Een Inleiding Schrijven Voor Een Argumentatief Essay
 
009 High School Vs College Essay Com. Online assignment writing service.
009 High School Vs College Essay Com. Online assignment writing service.009 High School Vs College Essay Com. Online assignment writing service.
009 High School Vs College Essay Com. Online assignment writing service.
 
Analytical Essay Advanced English. Online assignment writing service.
Analytical Essay Advanced English. Online assignment writing service.Analytical Essay Advanced English. Online assignment writing service.
Analytical Essay Advanced English. Online assignment writing service.
 
Transitional Words Transition Words, Transitio
Transitional Words Transition Words, TransitioTransitional Words Transition Words, Transitio
Transitional Words Transition Words, Transitio
 
How To Write Assignment Prime. Online assignment writing service.
How To Write Assignment Prime. Online assignment writing service.How To Write Assignment Prime. Online assignment writing service.
How To Write Assignment Prime. Online assignment writing service.
 
Visual Text Analysis Essay Examples. How To Write A
Visual Text Analysis Essay Examples. How To Write AVisual Text Analysis Essay Examples. How To Write A
Visual Text Analysis Essay Examples. How To Write A
 
What Expectations Should You Have LetS Get Wri
What Expectations Should You Have LetS Get WriWhat Expectations Should You Have LetS Get Wri
What Expectations Should You Have LetS Get Wri
 
Writing An Analytical Essay. How To Write An Analytical Essay Ex
Writing An Analytical Essay. How To Write An Analytical Essay ExWriting An Analytical Essay. How To Write An Analytical Essay Ex
Writing An Analytical Essay. How To Write An Analytical Essay Ex
 
300 Word Essay - DexteroiChapman. Online assignment writing service.
300 Word Essay - DexteroiChapman. Online assignment writing service.300 Word Essay - DexteroiChapman. Online assignment writing service.
300 Word Essay - DexteroiChapman. Online assignment writing service.
 
Composition Topics For Grade 5 - Dorian Whitehea
Composition Topics For Grade 5 - Dorian WhiteheaComposition Topics For Grade 5 - Dorian Whitehea
Composition Topics For Grade 5 - Dorian Whitehea
 
Essay Writing Competition 2015. Online assignment writing service.
Essay Writing Competition 2015. Online assignment writing service.Essay Writing Competition 2015. Online assignment writing service.
Essay Writing Competition 2015. Online assignment writing service.
 
😝 Descriptive Essay About A Person You Admire. Descri.pdf
😝 Descriptive Essay About A Person You Admire. Descri.pdf😝 Descriptive Essay About A Person You Admire. Descri.pdf
😝 Descriptive Essay About A Person You Admire. Descri.pdf
 
What Is Writing A Review Of Related Literature
What Is Writing A Review Of Related LiteratureWhat Is Writing A Review Of Related Literature
What Is Writing A Review Of Related Literature
 

Recently uploaded

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerunnathinaik
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitolTechU
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxHistory Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxsocialsciencegdgrohi
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,Virag Sontakke
 

Recently uploaded (20)

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developer
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptx
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxHistory Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
 

A Modified Brain Storm Optimization Algorithm With A Special Operator To Solve Constrained Optimization Problems

  • 1. Applied Intelligence https://doi.org/10.1007/s10489-020-01763-8 A modified brain storm optimization algorithm with a special operator to solve constrained optimization problems Adriana Cervantes-Castillo1 · Efrén Mezura-Montes1 © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract This paper presents a novel approach based on the combination of the Modified Brain Storm Optimization algorithm (MBSO) with a simplified version of the Constraint Consensus method as special operator to solve constrained numerical optimization problems. Regarding the special operator, which aims to reach the feasible region of the search space, the consensus vector becomes the feasibility vector computed by the hardest constraint in turn for a current infeasible solution; then the operations to mix the other feasibility vectors are avoided. This new combined algorithm, named as MBSO-R+V, solves a suit of eighteen test problems in ten and thirty dimensions. From a set of experiments related to the location and frequency of application of the constraint consensus method within MBSO, a suitable design of the combined approach is presented. This proposal shows encouraging final results while being compared against state-of-the-art algorithms, showing that it is viable to add special operators to improve the capabilities of swarm-intelligence algorithms when dealing with continuous constrained search spaces. Keywords Brain storm optimization algorithm · Constrained numerical optimization problems · Constraint-consensus method · Feasibility vectors · ε-constrained method 1 Introduction Nowadays, Constrained Numerical Optimization Problems (CNOPs) can be found in different disciplines [5]. Such problems, assuming minimization, can be defined as to: find f (x) (1) subject to: gi(x) ≤ 0; i = 1, 2, . . . , m hj (x) = 0; j = 1, 2, . . . , p Adriana Cervantes-Castillo zS14020641@estudiantes.uv.mx Efrén Mezura-Montes emezura@uv.mx 1 Artificial Intelligence Research Center, University of Veracruz, Sebastian Camacho 5, Xalapa, Veracruz, 91000/Centro, México where f (x) is named as the objective function, gi(x), i = 1, . . . , m and hj (x), j = 1, . . . , p are a collection of inequality and equality constraints; x = [x1, x2, . . . , xn] is a real-valued vector with the variables of the optimization problem, where each xk, k = 1, . . . , n has bounds, Lk ≤ xk ≤ Uk and such bounds state the search space S. The set of solutions which satisfy the constraints of the problem defines the feasible region F ⊆ S. Compared with the whole search space, the size of the feasible region can be significantly small. Therefore, finding it and keeping the search within it becomes an important challenge in CNOPs [9, 11, 26, 33], mostly by the fact that variation operators are blind with respect to constraint satisfaction. This hard task has been tackled by different optimization methods [1, 22, 28]. In this regard, the specialized literature shows methods which (1) attempt to maintain feasibility as well as (2) methods which help infeasible solutions to become feasible so as to reduce the number of evaluations required to complete that task. The drawback with the first approach is the requirement of feasible initial solutions [4, 5, 25]. Among those approaches of the second class, the Constraint Consensus (CC) method introduced by John W. Chineck in 2004 [3] is of special interest in this research. This
constraints of a given infeasible solution, i.e., the method estimates search directions towards the satisfaction of each constraint. Once all the feasibility vectors are computed, they are combined to form a consensus vector which contains both the direction and the distance needed to generate a solution close to or inside the feasible region. In the following, research works where the CC method has been used to solve constrained optimization problems are presented.

Chinneck presented the Constraint Consensus (CC) algorithm [3] as a method to move an arbitrary infeasible solution from its position to another one relatively near or even inside the feasible region, i.e., generating a feasible solution. Problems with different types of constraints, shapes, and complexities were solved by this method in the software MINOS [23]. In 2008, Walid Ibrahim and John W. Chinneck presented five new variants of the CC method [17], based on feasibility distance (FDfar, FDnear) and based on direction (DBavg, DBmax, DBbnd). Those new variants differ only in the way they build the consensus vector. In those based on feasibility distance, the consensus vector becomes the longest or shortest feasibility vector, while in those based on direction the consensus vector is built component-wise, where the elements that make up the feasibility vectors define the winning direction (positive or negative). The authors solved 231 constrained problems using the commercial packages CONOPT 3.13 [7], SNOPT 6.1-1 [10], MINOS 5.5 [23], DONLP2 [30], and KNITRO 4.0 [34], showing the DBmax variant to be the best method to provide initial starting solutions.

In 2013, Laurence Smith et al. [29] presented some improvements to the CC method. A new way to build the consensus vector, named SUM, was introduced, where the consensus vector becomes the average of the feasibility vectors computed at the current solution. Additionally, they presented the augmented version, whose main idea is to seek feasibility using information from previously estimated consensus vectors as well as previous information about the violated constraints. In this way, fewer computations are needed in comparison with the other CC variants. In the same year [28], the same authors presented the idea of using the CC methods to identify disjoint feasible regions in constrained problems using different multistart algorithms.

It is clear from the above revision of the specialized literature that different commercial programs with classical optimization algorithms have been combined with the CC algorithm to solve constrained problems, showing competitive results. Furthermore, since 2011 [13-15], the CC method has been combined with Evolutionary Algorithms like the Genetic Algorithm (GA) and Differential Evolution (DE), solving different test problems [18, 20]. The first efforts focused on using CC as a pre-processing method, where at each iteration of the GA or DE, the CC method was applied to some infeasible solutions chosen from the current population. After that, the worst solutions in the same current population were replaced by those obtained by the CC method. In 2016, Hamza et al. [14] proposed a different way to incorporate the CC method into the DE algorithm. The authors used the CC method within the DE mutation operator with the aim of improving the final results while saving evaluations.
The final results of this approach outperformed the standard DE as well as other state-of-the-art algorithms, obtaining better results at a reduced computational cost. Sun et al. [31] added the basic CC method to the artificial bee colony algorithm, and their results were compared against well-known approaches, showing a good performance. However, no other CC variants were analyzed.

The motivation of this work is based on two issues: (1) even though the CC method has been improved with different variants, there are no proposals designed by considering a swarm-intelligence algorithm, and (2) the Brain Storm Optimization algorithm, which has provided competitive results when dealing with CNOPs [2], has not been enriched with studies on special operators for constrained optimization.

Based on the two issues mentioned above, this paper presents a novel CC variant named R+V (restriction more violated, i.e., it uses only the most violated constraint), which generates the consensus vector in a simple and cheap way (important because it will be added to a population-based approach): the consensus vector is simply the feasibility vector of the hardest constraint in turn. Such a CC variant is combined with a BSO variant called Modified BSO (MBSO), where the research hypothesis is that the addition of this special operator will lead to a performance improvement when solving constrained optimization problems with different features. The contribution of this work is then a first BSO-based approach to deal with constrained search spaces, now enriched with a cheap special operator focused on improving infeasible solutions so as to get them inside, or at least closer to, the feasible region. The new CC variant is compared with previous CC proposals [17, 29] on a set of well-known test problems [20]. After that, based on an empirical validation, a suitable incorporation of the R+V variant within MBSO is presented, where its location and application frequency are defined. Finally, the proposed MBSO-R+V is compared against state-of-the-art algorithms, presenting a highly competitive performance when solving CNOPs with different characteristics.

The organization of this paper is as follows. Section 2 includes the original Brain Storm Optimization (BSO) algorithm, its modified version (MBSO) adopted in this work motivated by a previous study [2], and the introduction of the CC methods under study. Section 3 describes the proposed approach, with the CC R+V version proposed in this research and the MBSO-R+V algorithm. Section 4 presents the experimental design, the corresponding results, and a discussion. Finally, Section 5 draws the conclusions of this research and outlines future work.
2 BSO algorithms and CC methods

2.1 BSO algorithm

In 2011, Yuhui Shi presented the Brain Storm Optimization (BSO) algorithm [27], which is inspired by the brainstorming process [24], where a group of people with different backgrounds meets with the aim of generating and combining different ideas to propose a solution to a specific problem. Four rules are the base of the brainstorming process:

– No criticism.
– All proposed ideas can be considered.
– A considerable number of ideas is expected to be generated.
– New ideas can be generated based on the combination of current ideas.

Following the four rules presented above, a brainstorming process consists of the following steps:

1. Gather a set of people with different backgrounds.
2. Create as many ideas as possible based on Osborn's rules [24].
3. Based on the problem owner's opinion, the best ideas are chosen.
4. The selected ideas are used as the base to create new ideas.
5. From the set of new ideas, the best ones are selected according to the problem owner's opinion.
6. A group of ideas is selected to generate new ones and avoid getting stuck with the same opinions.
7. The best ideas are selected by the problem owner.

Taking the above steps as a base, and considering that an idea is a potential solution of an optimization problem (a CNOP in this case), the BSO algorithm is detailed in Algorithm 1, where the input parameters are the number of ideas NP, the number of clusters M, and the probabilities p-replace, p-one, p-one-center, and p-two-center, while rand(0,1) returns a random real number between 0 and 1 with uniform distribution. The BSO algorithm uses four main operators:

– GroupingOperator(NP, M): The k-means algorithm is used to cluster the NP ideas into M clusters. The center of each cluster is defined by the best idea, i.e., the best solution based on fitness. The goal here is to bias the search to different areas of the space in order to locate the promising ones; this operator therefore promotes the exploration of the search space.

– ReplacingOperator(x): The best idea (center) x in the selected cluster is replaced by an idea generated at random with uniform distribution. The aim is to avoid local optima while keeping diversity in the set of solutions (ideas).

– CreatingOperator(xs): A new idea is generated by considering ideas from one or two chosen clusters. Such current ideas can be the best ones, i.e., the centers of the clusters, or just randomly chosen ideas of the corresponding clusters. The new idea is created by adding Gaussian noise to the selected idea as in (2) and (3):

yi = xs + ξ ∗ N(μ, σ)   (2)

ξ = logsig((0.5 ∗ T − t) / k) ∗ rand(0, 1)   (3)

where yi represents the new idea, xs is the selected idea (the center of a cluster or a randomly chosen solution); N(μ, σ) is a vector of Gaussian numbers with mean μ and variance σ; T is the maximum number of BSO iterations, t is the current iteration, and k determines the step size in the logsig() function, where rand(0,1) returns a random value with uniform distribution between 0 and 1.

– Combine(x1, x2): When two clusters are selected, the chosen ideas are combined into a single one xs as in (4):

xs = R × x1 + (1 − R) × x2   (4)

where R is a randomly generated number, and x1 and x2 are the ideas selected from cluster one and cluster two, respectively.

This algorithm has shown success in solving different optimization problems. In fact, it has been extended to multi-strategies with adaptive parameters [19], and also to a parallel hardware implementation [16].
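To make the creating and combining steps concrete, the following is a minimal Python sketch of (2)-(4). It is only an illustration under stated assumptions, not the authors' implementation; the helper name logsig and the parameter defaults (e.g., k = 20) are assumptions of this sketch.

```python
import numpy as np

def logsig(x):
    # Logistic sigmoid; shrinks the step size xi as the search advances.
    return 1.0 / (1.0 + np.exp(-x))

def creating_operator(x_s, t, T, k=20.0, mu=0.0, sigma=1.0):
    # Eqs. (2)-(3): perturb the selected idea x_s with Gaussian noise
    # N(mu, sigma); xi moves from ~1 (exploration) towards 0 (exploitation).
    xi = logsig((0.5 * T - t) / k) * np.random.rand()
    return x_s + xi * np.random.normal(mu, sigma, size=x_s.shape)

def combine(x1, x2):
    # Eq. (4): random convex combination of ideas from two clusters.
    R = np.random.rand()
    return R * x1 + (1.0 - R) * x2
```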
2.2 MBSO algorithm

The Modified Brain Storm Optimization algorithm (MBSO) is an improved BSO version proposed in 2012 [12]. MBSO introduces a new clustering method in the grouping operator, called the Simple Grouping Method, which follows these steps:

1. Randomly select M ideas, which become the seeds of the M clusters.
2. Compute the Euclidean distance from each idea in the population to each cluster seed.
3. Compare the M distances for the current idea and add it to the nearest cluster.
4. Repeat until all NP ideas are grouped into the M clusters.

In the creating operator, MBSO introduces a new method to generate the new ideas. The Gaussian noise is replaced by the Idea Difference Strategy (IDS), which adds more information from the current population to the idea to be generated. The IDS uses (5) to create the new idea yi based on a current idea xs:

yi = rand(L, H),                     if rand(0, 1) < pr;
yi = xs + rand(0, 1) × (xa − xb),    otherwise.   (5)

where rand(L, H) is a random solution generated with uniform distribution within the variable bounds, xa and xb are two ideas (solutions) chosen at random with uniform distribution from the current population and used in the vector difference, and pr is a parameter to simulate open-mindedness in the creation of new ideas, similar to the brainstorming process where all ideas are welcome. MBSO was chosen in this research based on a previous study where it outperformed other BSO variants in constrained search spaces [2].
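As an illustration of the two MBSO components just described, a minimal sketch of the Simple Grouping Method and of the IDS in (5) follows. It is NumPy-based; the function names and the population layout (one idea per row) are assumptions of this sketch.

```python
import numpy as np

def simple_grouping(population, M):
    # Simple Grouping Method: pick M random ideas as seeds and assign
    # every idea to the cluster of its nearest seed (Euclidean distance).
    seeds = population[np.random.choice(len(population), M, replace=False)]
    labels = np.array([np.argmin(np.linalg.norm(seeds - idea, axis=1))
                       for idea in population])
    return [population[labels == m] for m in range(M)]

def ids_create(x_s, population, L, H, pr=0.005):
    # Eq. (5): with probability pr create a brand-new random idea
    # (open-mindedness); otherwise move x_s along the difference of two
    # randomly chosen ideas x_a and x_b.
    if np.random.rand() < pr:
        return np.random.uniform(L, H, size=x_s.shape)
    a, b = np.random.choice(len(population), 2, replace=False)
    return x_s + np.random.rand() * (population[a] - population[b])
```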
2.3 CC methods

The Constraint Consensus (CC) method uses the concept of projection algorithms, which are effective at moving infeasible solutions close to the feasible region. This movement is performed through the feasibility vector computed for each violated constraint; such a vector therefore includes movement and distance information related to its corresponding constraint. In this way, if xs is an infeasible solution and gi(xs) its constraint violation for constraint i, then the CC method computes the feasibility vector fvi for that constraint using (6):

fvi = −gi(xs) ∇gi(xs) / ‖∇gi(xs)‖²   (6)

where gi(xs) is the amount of constraint violation, ∇gi(xs) is the constraint gradient, and ‖∇gi(xs)‖ is the gradient length. Despite the fact that feasibility vectors are exact only for linear constraints, they are suitable approximations for non-linear constraints and they can be successfully applied within stochastic search algorithms [14].

As mentioned in Section 1, there are different ways to generate the consensus vector based on the feasibility vectors obtained with (6). The basic CC approach is detailed in Algorithm 2, where NINF is the number of violated constraints, sj is the sum of the feasibility-vector elements for variable j, nj is the number of violated constraints where variable j is considered, and t is the consensus vector. As can be noted, the basic CC method computes each element of the consensus vector as the average of the corresponding elements of the feasibility vectors of the violated constraints.
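The following sketch illustrates (6) and the averaging idea of the basic CC method, treating each constraint as a black-box function g(x) whose value is positive when the constraint is violated. The forward-difference gradient and the simplification nj = NINF (every variable assumed to appear in every violated constraint) are assumptions of this sketch, not part of the original method.

```python
import numpy as np

def feasibility_vector(g, x, h=1e-8):
    # Eq. (6): fv = -g(x) * grad_g(x) / ||grad_g(x)||^2, with the gradient
    # approximated here by forward differences.
    gx = g(x)
    grad = np.array([(g(x + h * e) - gx) / h for e in np.eye(len(x))])
    norm_sq = grad @ grad
    return np.zeros_like(x) if norm_sq == 0.0 else -gx * grad / norm_sq

def basic_consensus(violated, x):
    # Basic CC (Algorithm 2), simplified: the consensus vector t is the
    # per-variable average (s_j / n_j) of the feasibility vectors of the
    # NINF violated constraints.
    if not violated:
        return np.zeros_like(x)
    return np.mean([feasibility_vector(g, x) for g in violated], axis=0)
```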
Besides the basic CC method, other variants are tested in this work:

FDFAR: The feasibility vector with the largest distance becomes the consensus vector. The aim is to reach the feasible region faster [17].

DBMAX: In this case, the signs of the elements of the feasibility vectors are considered. If more positive values are present for a given variable, the highest value among them is taken as the corresponding value of the consensus vector; the same applies if more negative values are found. In case of a tie, the maximum values of the positive and negative elements are averaged to obtain the corresponding element of the consensus vector.

DBBND: Besides considering the signs of the feasibility-vector elements, the length of the movement and the type of constraint (equality or inequality) are taken into account (shorter and larger movements, respectively).

AUGMENTED: This variant adopts a predictor-corrector approach [29], where the predictor is the consensus vector obtained by the basic CC variant. The corrector is formed by the average of the relaxation factors computed independently for each violated constraint, and it is used to adjust the length of the vector without modifying its direction.

3 Proposed approach

3.1 R+V: a new constraint consensus method variant

Each one of the CC variants discussed in Section 1 calculates the feasibility vector for each violated constraint of a given solution (as in (6)). Thus, computing the gradient is mandatory in this step, adding computational effort, mostly when the number of constraints of the problem increases. A new CC variant called R+V (restriction more violated) is proposed in this work, where the consensus vector only includes the feasibility vector of the hardest constraint in turn, i.e., the constraint with the highest violation. In consequence, just the gradient of such a constraint is computed, regardless of the feasibility information of the remaining constraints. In other words, besides computing just one feasibility vector, the consensus step is avoided because that single feasibility vector is used to reduce the infeasibility of a solution, and such action saves computational time with respect to previous CC variants. Algorithm 3 shows the R+V steps.

In the hardestConstraint() method, the constraint with the highest violation amount is chosen (line 3 in Algorithm 3). The feasibility vector of such a constraint becomes the consensus vector in the R+V variant (line 4 in Algorithm 3) and is used to compute the movement of the solution (line 5 in Algorithm 3). There are two possible stop conditions: (1) the consensus vector length is less than a specified tolerance α (line 5 in Algorithm 3), or (2) a pre-defined number of iterations μ is reached (line 11 in Algorithm 3).
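A compact sketch of the R+V loop follows, under the same black-box assumptions as above (violations measured as max(0, g(x)); equality constraints would use the absolute violation instead) and reusing feasibility_vector() from the sketch in Section 2.3. It is a sketch of Algorithm 3, not a verbatim transcription.

```python
import numpy as np

def r_plus_v(x, constraints, alpha=1e-6, mu=5):
    # R+V: the consensus vector is just the feasibility vector of the
    # hardest (most violated) constraint, so no mixing step is required.
    x = np.array(x, dtype=float)
    for _ in range(mu):                        # stop condition (2)
        violations = [max(0.0, g(x)) for g in constraints]
        worst = int(np.argmax(violations))     # hardestConstraint()
        if violations[worst] == 0.0:           # x is already feasible
            break
        fv = feasibility_vector(constraints[worst], x)
        if np.linalg.norm(fv) < alpha:         # stop condition (1)
            break
        x = x + fv                             # move towards feasibility
    return x
```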
3.2 ε-constrained method

The ε-constrained method, proposed by Takahama in [32], is adopted as the constraint-handling technique in this work to let the MBSO algorithm deal with a constrained search space, because the original MBSO was proposed to solve unconstrained optimization problems. This approach is based on a problem transformation, i.e., the constrained problem is transformed into an unconstrained optimization problem. It compares solutions based either on the constraint violation φ(x) or on the objective function value f(x), according to an ε level. The ε-constrained method emphasizes constraint satisfaction followed by the optimization of f(x). However, the method promotes a balance with promising infeasible solutions by allowing the comparison of infeasible solutions close to the feasible region based only on their objective function values. The ε-level comparison between two solutions (f(x1), φ(x1)) and (f(x2), φ(x2)) is defined as indicated in (7) and (8):

(f(x1), φ(x1)) <ε (f(x2), φ(x2)) ⇐⇒   (7)

f(x1) < f(x2),   if φ(x1), φ(x2) ≤ ε;
f(x1) < f(x2),   if φ(x1) = φ(x2);
φ(x1) < φ(x2),   otherwise.   (8)

When ε = 0, the constraint violation precedes the objective function value in the comparison. In contrast, when ε = ∞, only the objective function value is used to compare the solutions, i.e., the feasibility information is not considered. Equation (9) shows how the ε-level value is controlled:

ε(0) = φ(xθ)
ε(t) = ε(0) (1 − t/Tc)^cp,   0 < t < Tc;
ε(t) = 0,                    t ≥ Tc.   (9)

where t represents the current iteration, Tc is the maximum iteration for the ε-level decrease, xθ is the top θ-th solution with θ = 0.2N, and cp regulates the reduction of the constraint tolerance. The comparison criteria in (7) and (8) replace the comparison based only on the objective function value used in Algorithm 1.
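A direct transcription of (7)-(9) into code, as a minimal sketch that assumes φ is an aggregated constraint-violation value computed elsewhere:

```python
def eps_less_than(f1, phi1, f2, phi2, eps):
    # Eqs. (7)-(8): epsilon-level comparison. If both violations are within
    # eps (or are equal), compare by objective value f; otherwise the
    # smaller constraint violation phi wins.
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2
    return phi1 < phi2

def eps_level(t, eps0, Tc, cp=0.5):
    # Eq. (9): the epsilon level decays from eps0 = phi(x_theta) to 0
    # at generation Tc and stays at 0 afterwards.
    return eps0 * (1.0 - t / Tc) ** cp if t < Tc else 0.0
```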
4 Experiments and results

4.1 Experimental design and parameter tuning

To investigate the performance of the proposed MBSO-R+V algorithm, four experiments were designed as follows:

1. To determine the quality of the R+V proposal with respect to other CC variants.
2. To define the best location of the R+V method within MBSO.
3. To set the R+V application frequency in MBSO.
4. To compare the combined algorithm MBSO-R+V against state-of-the-art approaches for CNOPs.

The parameter values used in the experiments for the R+V variant were similar to those suggested in [14], where the CC method was added to a population-based search algorithm: α = 0.000001 and μ = 5. For the MBSO algorithm, the parameters were those proposed in [2], where MBSO solved different types of CNOPs; the values are in Table 1.

Table 1 MBSO parameter values used in the experiments

Parameter      Value
N              100
M              5
pr             0.005
p-replace      0.2
p-one          0.8
p-one-center   0.4
p-two-center   0.5

The test functions solved in this research are those proposed in [20] (10D and 30D), and their details are summarized in Table 2. The maximum number of evaluations was Maxfes = 200,000 for 10D and Maxfes = 600,000 for 30D. The value of the ε-constrained method parameter cp was 0.5, as proposed in [2].

Table 2 Test problems adopted in the experiments with different dimensions (D), separable (S), non-separable (NS) or rotated (R) constraints; 10D and 30D are solved in this research

Test function  Search space       Objective function  Equality constraints  Inequality constraints
C01            [0, 10]^D          Non Separable       –                     2-NS
C02            [−5.12, 5.12]^D    Separable           1-S                   2-S
C03            [−1000, 1000]^D    Non Separable       1-NS                  –
C04            [−50, 50]^D        Separable           2-S / 2-NS            –
C05            [−600, 600]^D      Separable           2-S                   –
C06            [−600, 600]^D      Separable           2-R                   –
C07            [−140, 140]^D      Non Separable       –                     1-S
C08            [−140, 140]^D      Non Separable       –                     1-R
C09            [−500, 500]^D      Non Separable       1-S                   –
C10            [−500, 500]^D      Non Separable       1-R                   –
C11            [−100, 100]^D      Rotated             1-NS                  –
C12            [−1000, 1000]^D    Separable           1-NS                  1-S
C13            [−500, 500]^D      Separable           –                     2-S / 1-NS
C14            [−1000, 1000]^D    Non Separable       –                     3-S
C15            [−1000, 1000]^D    Non Separable       –                     3-R
C16            [−10, 10]^D        Non Separable       2-S                   1-S / 1-NS
C17            [−10, 10]^D        Non Separable       1-S                   2-NS
C18            [−50, 50]^D        Non Separable       1-S                   1-S
4.2 Experiment A: comparison of R+V against other constraint consensus variants

To assess the performance of R+V against the other CC variants proposed in [17, 29] and described in Section 2.3, the following was carried out. For each test problem, 100 initial solutions were generated at random with uniform distribution. From those solutions, the infeasible ones were taken as starting points for each compared CC variant. The number of infeasible solutions for each problem is shown in the second column of Table 3. Those numbers vary because of the different types of constraints found in each test problem (see Table 2 for details). The remaining columns of the table present the successes obtained by each variant, i.e., the number of infeasible solutions which became feasible or at least decreased their sum of constraint violation (i.e., they were located closer to the feasible region). Both situations, reaching feasibility and decreasing the violation, were considered a success because the goal was to detect the ability of the operator to improve an infeasible solution. Because similar results were obtained in 10D and 30D, only those in 10D are presented.

Table 3 Experiment A, results obtained by each CC variant in the 10D benchmark functions (best results in bold in the original)

F      Infeasible  BASIC  FDFAR  DBMAX  AUGMENTED  DBBND  R+V
C01    1           1      1      1      1          1      1
C02    100         72     78     72     64         72     76
C03    100         100    100    100    98         100    100
C04    100         61     37     32     68         37     100
C05    100         67     65     71     64         67     59
C06    100         53     62     68     64         53     59
C07    65          65     65     65     65         65     65
C08    59          56     56     56     53         56     56
C09    100         46     46     46     100        46     46
C10    100         70     70     70     88         70     70
C11    100         100    100    100    100        100    100
C12    100         100    100    100    94         100    100
C13    100         100    100    100    98         100    100
C14    100         100    100    100    97         100    100
C15    100         57     50     55     63         55     49
C16    100         95     62     79     42         90     83
C17    100         53     52     52     50         50     49
C18    100         57     100    57     57         57     100
TOTAL  1625        1253   1244   1224   1266       1219   1313

Aside from R+V, and based on Table 3, the AUGMENTED version provided the most competitive performance. For clarity, such comparison is graphically presented in Fig. 1. However, as indicated in Fig. 2, R+V outperforms the AUGMENTED variant.

Fig. 1 Experiment A, total number of improved solutions by each CC variant, except R+V

Fig. 2 Experiment A, total number of improved solutions by the AUGMENTED and R+V variants

It is worth remarking that, based on Table 3, R+V outperformed the other CC variants in test problem C04, which is the one with the most equality constraints. The results then suggest, for the test problems adopted in this work, that letting the CC method discard the information of all constraints except the most difficult one to satisfy, instead of joining the violation information of all violated constraints (even with the relaxation factors per constraint, as in the AUGMENTED CC variant), has two advantages: (1) it helps the solution to get closer to the feasible region or become feasible, and (2) it eliminates the cost related to the consensus process, since just one feasibility vector is computed.

From the above discussion it can be concluded that R+V is a competitive CC variant with the advantage that it avoids the consensus step by adopting the hardest constraint to satisfy as the promising search direction towards a feasible solution.

4.3 Experiment B: locating the R+V variant within the MBSO algorithm

Having evidence of the competitive performance of the R+V CC variant, the next phase consists in finding a suitable combination of this method as a special operator within MBSO. In this sense, this experiment aimed to identify the best location of R+V in MBSO. From Section 3, three MBSO elements were considered: (1) the grouping operator, (2) the replacing operator, and (3) the creating-combine operators. Therefore, three experimental versions were designed.

1. Experimental version 1 (R+VE1): The R+V variant is located within the MBSO algorithm before applying the grouping operator. In this way, R+V acts only as a preprocessing phase of the population which will be used by the MBSO algorithm later. Five randomly selected infeasible solutions are processed by the R+V variant, and the obtained solutions replace the original input solutions in the current population.
2. Experimental version 2 (R+VE2): The R+V variant is within the replacing operator. If the new solution is infeasible, then the R+V variant is applied to such solution before the replacement.

3. Experimental version 3: Three situations were observed. Considering that the crossover operator of the MBSO algorithm in (5) is similar to that of Differential Evolution, where a base vector is added to a difference vector, the following three places are of interest to apply the R+V variant.

(a) Experimental version 3a (R+VE3a): The R+V variant acts on the base idea xs before it is used to generate the new solution.
(b) Experimental version 3b (R+VE3b): Being xa and xb the difference ideas, the R+V variant acts on idea xa, which provides the direction in such a difference of ideas.
(c) Experimental version 3c (R+VE3c): The R+V variant acts on both the base idea xs and the difference idea xa used in the crossover operator.
The 95%-confidence rank-sum Wilcoxon test was applied to the final results of a set of 30 independent runs per algorithm version. The results are shown in Tables 4 (10D) and 5 (30D), where the R+VE1 version was adopted as the base algorithm for the statistical test.

Table 4 Experiment B, 95%-confidence rank-sum Wilcoxon test results in 10D test problems

Versions          Criteria         Better  Equal  Worse  Decision  p-value
R+VE1 vs R+VE2    Best Results     3       15     0      =         0.974277525
R+VE1 vs R+VE3a   Best Results     4       14     0      =         0.66585532
R+VE1 vs R+VE3b   Best Results     2       15     0      =         0.961219496
R+VE1 vs R+VE3c   Best Results     4       14     0      =         1
R+VE1 vs R+VE2    Average Results  2       15     1      =         0.987376927
R+VE1 vs R+VE3a   Average Results  3       14     1      =         0.874296698
R+VE1 vs R+VE3b   Average Results  2       16     0      =         0.824715242
R+VE1 vs R+VE3c   Average Results  4       14     0      =         0.447628106

Table 5 Experiment B, 95%-confidence rank-sum Wilcoxon test results in 30D test problems

Versions          Criteria         Better  Equal  Worse  Decision  p-value
R+VE1 vs R+VE2    Best Results     3       15     0      =         0.843011125
R+VE1 vs R+VE3a   Best Results     4       13     1      =         0.679885581
R+VE1 vs R+VE3b   Best Results     5       12     1      =         0.800171553
R+VE1 vs R+VE3c   Best Results     4       12     2      =         0.861838193
R+VE1 vs R+VE2    Average Results  3       15     0      =         0.65591049
R+VE1 vs R+VE3a   Average Results  5       13     0      =         0.65591049
R+VE1 vs R+VE3b   Average Results  4       12     2      =         1
R+VE1 vs R+VE3c   Average Results  6       12     0      =         0.987378551

Those results suggest no significant differences among the versions, i.e., the R+V variant helps MBSO regardless of its position in the algorithm. However, R+VE1 was slightly better than the compared versions. Such behavior can be clearly observed in Figs. 3 and 4, where the number of test functions in which the median value of each version is better than those of the other versions is graphically presented. Based on such figures, R+VE1 is better, particularly in the 30D problems, i.e., the most difficult to solve.

Fig. 3 Experiment B, number of 10D test problems where each version was better than the other ones, based on the median value

Fig. 4 Experiment B, number of 30D test problems where each version was better than the other ones, based on the median value

From the results in this Experiment B, the R+V version is located before the grouping operator in this research. Although R+V benefits MBSO in all the positions mentioned above, it is worth remarking that, as the dimensionality of the constrained search space increases, the R+V usage is more convenient before the variation operators and the replacement process. Such behavior differs from that observed in other approaches where the CC method has been adopted, as is the case of differential evolution in [14], where the CC method is considered within the mutation operator.
4.4 Experiment C: R+V application frequency within MBSO

To analyze the frequency of application of the R+V variant within MBSO, the expected behavior of a nature-inspired search algorithm when solving CNOPs was considered. Such behavior states that most infeasible solutions are present at the beginning of the search; as the process advances, the effect of the constraint-handling technique allows more feasible solutions to be generated in the population. Figure 5 presents such behavior for the MBSO algorithm with the R+V variant along a single run in the representative 10D test problem C05.

Fig. 5 Experiment C, R+V applied during the whole search process in representative 10D test problem C05 (constraint violation degree and number of feasible points per generation, with R+V application marks)
Based on the aforementioned, the R+V variant was applied only in the first 15% of the total number of generations of the algorithm. However, the frequency of application within that 15% remained to be determined. Figure 6 reports the average number of evaluations required by the algorithm to approximate the best known solution in the whole benchmark when applying R+V every 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 generations in the first 15% of the total generations. From the results in Fig. 6, R+V saves the most evaluations when it is applied every 35 generations during the first 15% of the total generations spent by the algorithm.

Fig. 6 Experiment C, average evaluations required by the algorithm to approximate the best known solution in the whole benchmark, where R+V was applied every 5, 10, 15, 20, 25, 30, 35, 40, 45 and 50 generations during the first 15% of total generations of the algorithm in 10D and 30D test problems

To further analyze the positive effect of R+V within MBSO, representative convergence plots are shown in Figs. 7 and 8 for 10D and 30D test problems, respectively. The R+V special operator allows the approach to reach better results faster than the MBSO version without it.

Fig. 7 Experiment C, MBSO-R+V against MBSO, 10D representative convergence plots

Fig. 8 Experiment C, MBSO-R+V against MBSO, 30D representative convergence plots
The pseudocode of the proposed MBSO-R+V is detailed in Algorithm 4.

4.5 Experiment D: comparing MBSO-R+V against state-of-the-art algorithms

Having the complete design of the proposed MBSO-R+V algorithm, its performance is compared against state-of-the-art algorithms. The results are shown in Tables 6–7. The compared state-of-the-art algorithms are the following:

– IDFRD: Individual-dependent feasibility rule for constrained differential evolution [36].
– FRC-CEA: A feasible-ratio control technique for constrained optimization [35].
– CoBe-MmDE: A multimeme DE algorithm empowered by local search operators [6].
– E-MODE: An enhanced multi-operator DE [8].
– DEbavDBmax: A constraint consensus mutation-based DE [14].

The 95%-confidence Kruskal-Wallis and the Bonferroni post-hoc statistical tests were applied to the results in Tables 6–7. Figure 9 includes such a comparison, and it can be seen that no significant differences were observed, in either 10D or 30D, with respect to all compared algorithms. The statistical test results indicate that MBSO-R+V provides a competitive performance against state-of-the-art algorithms when solving different types of CNOPs. It is worth noting that, with respect to the compared and recently proposed approaches, MBSO-R+V does not require the problem transformation in [35], the modification of the constraint-handling technique in [36], the combination of different local searches [6], or multiple operators [8]. Such requirements might make those approaches more difficult to either code or calibrate.
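The paper does not state how these tests were implemented; the following sketch shows one common way to realize such a comparison with SciPy, assuming that results maps each algorithm name to its 30 final objective values on a given test problem. The Bonferroni post-hoc is approximated here by corrected pairwise rank-sum tests against the proposal.

    from scipy import stats

    def compare_against_base(results, base="MBSO-R+V", alpha=0.05):
        # `results` maps each algorithm name to its list of final
        # objective values (30 independent runs) on one test problem.
        samples = [results[name] for name in results]
        h_stat, p_omnibus = stats.kruskal(*samples)  # 95%-confidence omnibus test
        pairwise = {}
        if p_omnibus < alpha:
            others = [name for name in results if name != base]
            for name in others:
                _, p = stats.ranksums(results[base], results[name])
                pairwise[name] = min(1.0, p * len(others))  # Bonferroni correction
        return p_omnibus, pairwise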
5 Conclusions and future work

This paper presented an improved brain storm optimization algorithm coupled with a simplified version of the constraint consensus special operator to solve constrained optimization problems. The new constraint consensus version, named R+V, which is based on the search direction of the hardest constraint to satisfy by the solution to be updated, was compared against other Constraint Consensus versions in thirty-six well-known constrained problems. The results showed R+V to be the most competitive variant, even though just one feasibility vector is computed and the consensus step and its cost are avoided. After obtaining a competitive and low-cost special operator, its incorporation within the MBSO algorithm, which has provided a competitive performance in constrained search spaces [2], was presented. Based on empirical comparisons validated by statistical tests, it was found that using the R+V variant before the MBSO grouping operator, and only every 35 generations early in the search process, provided better results. This MBSO-R+V version was further compared against five state-of-the-art algorithms for constrained optimization. Such a comparison indicated no significant differences in the performance provided by MBSO-R+V with respect to those obtained by the compared approaches. It is important to remark that most algorithms used for comparison are based on differential evolution, which has shown a particular ability to provide highly competitive results when solving CNOPs [21]. Moreover, the suitable addition of a simplified variant of a special operator to the MBSO algorithm kept its implementation simple when contrasted against the compared approaches, which require modifications to the search algorithm such as multiple variation operators [8], modifications to the variation operators [14], multiple local-search operators [6], modifications to the constraint-handling technique [35], or the use of dynamic multi-objective optimization concepts [36]. This research work has shown that a suitable special operator is able to significantly improve the search ability of a particular swarm intelligence algorithm so as to provide a similar performance with respect to DE-based state-of-the-art proposals.

Based on the findings obtained in this work, the future paths of research are: (1) the proposal of parameter control techniques to deal with the proper MBSO parameters, (2) the addition of R+V to other popular population-based algorithms in constrained optimization, such as differential evolution, (3) the study of other special operators coupled with MBSO, and (4) the consideration of multi-objective constrained optimization problems.
Table 6 Experiment D, statistical results by MBSO-R+V and state-of-the-art algorithms (1/2)

F     Algorithm     Mean (10D)   Std (10D)    Mean (30D)   Std (30D)
C01   MBSO-R+V      -7.44E-01    7.85E-03     -7.86E-01    1.99E-02
      IDFRD         -7.47E-01    1.87E-03     -8.19E-01    2.66E-03
      FRC-CEA       -7.47E-01    4.49E-13     -8.21E-01    1.88E-03
      CoBe-MmDE     -7.35E-01    2.72E-02     -7.37E-01    2.56E+00
      DEbavDBmax    -7.46E-01    2.55E-03     -8.14E-01    7.85E-03
      E-MODE        -7.47E-01    2.45E-16     -8.20E-01    2.86E-03
C02   MBSO-R+V      -1.95E+00    3.77E-01     -1.91E+00    6.76E-01
      IDFRD         -2.25E+00    5.48E-02     -2.27E+00    2.31E-02
      FRC-CEA       -2.28E+00    3.76E-03     -2.27E+00    6.06E-03
      CoBe-MmDE     -2.07E+00    8.67E-02     -2.00E+00    1.43E-01
      DEbavDBmax    -2.20E+00    8.39E-02     -2.28E+00    3.15E-03
      E-MODE        -2.28E+00    2.54E-10     -2.28E+00    4.90E-03
C03   MBSO-R+V      2.43E-17     1.16E-16     5.98E+01     4.36E+01
      IDFRD         0.00E+00     0.00E+00     0.00E+00     0.00E+00
      FRC-CEA       0.00E+00     0.00E+00     2.87E+01     1.22E-07
      CoBe-MmDE     0.00E+00     0.00E+00     1.25E-05     1.02E-05
      DEbavDBmax    0.00E+00     0.00E+00     6.38E-25     7.13E-25
      E-MODE        0.00E+00     0.00E+00     3.12E-25     5.73E-25
C04   MBSO-R+V      8.19E-02     2.20E-01     1.94E+01     0.00E+00
      IDFRD         -1.00E-05    7.64E-13     -3.32E-06    2.52E-08
      FRC-CEA       2.75E-05     9.14E-05     4.17E-03     4.17E-03
      CoBe-MmDE     -9.99E-06    4.60E-09     5.48E-02     1.83E-01
      DEbavDBmax    -1.00E-05    0.00E+00     -3.33E-06    2.77E-09
      E-MODE        -1.00E-05    0.00E+00     -3.33E-06    2.46E-16
C05   MBSO-R+V      -3.88E+02    2.41E+02     -4.01E+02    1.20E+02
      IDFRD         -4.84E+02    3.03E-13     -4.84E+02    7.02E-11
      FRC-CEA       -4.84E+02    3.36E-02     -4.80E+02    2.45E+00
      CoBe-MmDE     -4.84E+02    2.42E-10     -1.89E+02    7.82E+01
      DEbavDBmax    -4.84E+02    4.95E-06     -4.84E+02    7.16E-09
      E-MODE        -4.84E+02    3.48E-13     -4.84E+02    2.04E-13
C06   MBSO-R+V      -4.84E+02    2.76E+02     -3.30E+02    3.68E+02
      IDFRD         -5.79E+02    5.02E-07     -5.31E+02    1.10E-02
      FRC-CEA       -5.79E+02    2.63E-05     -5.31E+02    1.82E-02
      CoBe-MmDE     -5.79E+02    2.17E-07     -4.79E+02    7.42E+01
      DEbavDBmax    -5.76E+02    7.68E-07     -5.31E+02    2.35E-01
      E-MODE        -5.79E+02    3.25E-13     -5.31E+02    4.66E-10
C07   MBSO-R+V      1.59E-01     7.81E-01     3.19E-01     1.08E+00
      IDFRD         0.00E+00     0.00E+00     0.00E+00     0.00E+00
      FRC-CEA       0.00E+00     0.00E+00     3.88E-01     1.05E+00
      CoBe-MmDE     1.66E-01     8.14E-01     7.69E+01     8.05E+01
      DEbavDBmax    1.25E-28     2.27E-28     1.59E-01     7.97E-01
      E-MODE        0.00E+00     0.00E+00     1.61E-27     4.14E-27
C08   MBSO-R+V      9.89E+00     6.14E+00     3.19E+01     8.63E+01
      IDFRD         8.34E+00     4.47E+00     0.00E+00     0.00E+00
      FRC-CEA       3.30E+00     4.54E+00     1.43E+01     3.34E+01
      CoBe-MmDE     6.48E+00     5.72E+00     1.17E+03     1.38E+03
      DEbavDBmax    9.22E+00     3.60E+00     4.20E-26     4.92E-26
      E-MODE        1.01E+01     3.03E+00     1.37E-27     4.09E-27
C09   MBSO-R+V      1.24E+10     6.07E+10     9.00E+10     4.39E+11
      IDFRD         0.00E+00     0.00E+00     3.08E-01     1.54E+00
      FRC-CEA       1.39E-01     1.10E+00     4.59E+01     3.75E+01
      CoBe-MmDE     3.92E+00     1.65E+01     4.24E+05     1.30E+06
      DEbavDBmax    4.54E-26     1.84E-25     4.30E-26     4.47E-26
      E-MODE        0.00E+00     0.00E+00     9.28E-27     1.82E-26

Bold data indicate best results
Table 7 Experiment D, statistical results by MBSO-R+V and state-of-the-art algorithms (2/2)

F     Algorithm     Mean (10D)   Std (10D)    Mean (30D)   Std (30D)
C10   MBSO-R+V      1.65E+01     2.05E+01     9.93E+01     2.56E+02
      IDFRD         0.00E+00     0.00E+00     3.13E+01     1.76E-01
      FRC-CEA       1.42E+01     1.95E+01     9.05E+01     9.05E+01
      CoBe-MmDE     3.48E+00     1.18E+01     1.27E+03     3.55E+03
      DEbavDBmax    1.33E-26     3.34E-26     4.41E-20     1.21E-19
      E-MODE        0.00E+00     0.00E+00     6.99E-27     1.43E-26
C11   MBSO-R+V      -1.52E-03    3.36E-10     -1.48E-04    2.19E-04
      IDFRD         -1.52E-03    5.07E-11     -3.92E-04    3.65E-09
      FRC-CEA       -1.15E-02    4.01E-02     -3.72E-02    4.58E-01
      CoBe-MmDE     -1.52E-03    2.27E-10     -3.82E-04    -3.82E-04
      DEbavDBmax    -1.52E-03    1.46E-14     -3.92E-04    3.98E-10
      E-MODE        -1.52E-03    8.90E-17     -3.92E-04    1.07E-10
C12   MBSO-R+V      -3.02E+01    1.27E+02     4.18E-01     1.95E+00
      IDFRD         -4.24E-01    2.90E+00     -1.99E-01    2.12E-04
      FRC-CEA       -2.17E+02    2.66E+02     -1.61E+02    3.26E+02
      CoBe-MmDE     -2.32E+01    6.80E+01     -1.49E-01    7.11E-02
      DEbavDBmax    -2.20E-01    6.62E-07     -1.99E-01    1.63E-09
      E-MODE        -9.46E+01    1.48E+02     -1.99E-01    2.90E-09
C13   MBSO-R+V      -6.46E+01    2.12E+00     -6.06E+01    2.38E+00
      IDFRD         -6.84E+01    2.90E-14     -6.63E+01    3.00E+00
      FRC-CEA       -6.84E+01    3.51E-12     -6.84E+01    2.88E-01
      CoBe-MmDE     -5.72E+01    8.52E+00     -5.76E+01    2.56E+00
      DEbavDBmax    -6.75E+01    1.41E+00     -6.02E+01    5.07E+00
      E-MODE        -6.84E+01    2.97E-04     -6.52E+01    1.92E+00
C14   MBSO-R+V      4.69E-05     2.29E-04     1.24E+02     5.61E+02
      IDFRD         0.00E+00     0.00E+00     0.00E+00     0.00E+00
      FRC-CEA       0.00E+00     0.00E+00     6.19E-01     6.20E-01
      CoBe-MmDE     6.64E-01     1.52E+00     1.93E+06     4.99E+06
      DEbavDBmax    3.30E-27     4.72E-27     1.01E-25     8.05E-26
      E-MODE        0.00E+00     0.00E+00     2.08E-27     5.76E-27
C15   MBSO-R+V      8.74E+11     3.22E+12     1.11E+12     5.42E+12
      IDFRD         2.94E-01     1.02E+00     2.21E+01     1.58E+00
      FRC-CEA       1.51E+00     1.83E+00     2.16E+01     9.92E-05
      CoBe-MmDE     1.87E-01     9.18E-01     3.35E+09     6.84E+09
      DEbavDBmax    2.96E-25     1.09E-24     1.87E-22     2.97E-22
      E-MODE        0.00E+00     0.00E+00     2.54E-27     6.81E-27
C16   MBSO-R+V      2.76E-01     3.66E-01     4.79E-02     2.02E-01
      IDFRD         1.11E-02     1.92E-02     0.00E+00     0.00E+00
      FRC-CEA       0.00E+00     0.00E+00     0.00E+00     0.00E+00
      CoBe-MmDE     4.54E-01     5.12E-01     3.99E-03     1.28E-02
      DEbavDBmax    0.00E+00     0.00E+00     0.00E+00     0.00E+00
      E-MODE        0.00E+00     0.00E+00     0.00E+00     0.00E+00
C17   MBSO-R+V      4.10E+01     1.18E+02     1.88E+02     3.47E+02
      IDFRD         1.44E-20     1.37E-20     7.46E-02     2.62E-01
      FRC-CEA       0.00E+00     0.00E+00     7.02E+00     1.27E+01
      CoBe-MmDE     0.00E+00     0.00E+00     6.49E-04     3.18E-03
      DEbavDBmax    7.05E-19     1.08E-18     1.81E-12     5.28E-12
Table 7 (continued)

F     Algorithm     Mean (10D)   Std (10D)    Mean (30D)   Std (30D)
C17   E-MODE        3.42E-30     1.71E-29     2.77E-21     8.44E-21
C18   MBSO-R+V      1.29E+02     4.19E+02     2.08E+02     9.56E+02
      IDFRD         0.00E+00     0.00E+00     2.28E-29     7.14E-29
      FRC-CEA       0.00E+00     0.00E+00     0.00E+00     0.00E+00
      CoBe-MmDE     0.00E+00     0.00E+00     6.48E+01     1.46E+02
      DEbavDBmax    3.89E-24     4.34E-24     2.83E-01     1.36E+00
      E-MODE        1.53E-32     2.20E-32     1.34E-20     6.55E-20

Bold data indicate best results

Fig. 9 Experiment D, Kruskal-Wallis and Bonferroni post-hoc statistical tests on the average objective function values obtained by each compared algorithm; in both the 10D and 30D panels, no groups have mean ranks significantly different from MBSO-R+V

Acknowledgments The first author acknowledges support from the Mexican Council of Science and Technology (CONACyT) and the University of Veracruz to pursue graduate studies at its Artificial Intelligence Research Center. The second author acknowledges support from CONACyT through project No. 220522.

Compliance with Ethical Standards

Conflict of interests The authors declare that they have no conflict of interest.

References

1. Bonyadi MR, Michalewicz Z (2014) On the edge of feasibility: a case study of the particle swarm optimizer. In: 2014 IEEE Congress on Evolutionary Computation (CEC)
2. Cervantes-Castillo A, Mezura-Montes E (2016) A study of constraint-handling techniques in brain storm optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp 3740–3746
3. Chinneck JW (2004) The constraint consensus method for finding approximately feasible points in nonlinear programs. INFORMS J Comput 16(3):255–265
4. Chinneck JW (2008) Feasibility and infeasibility in optimization: algorithms and computational methods. Springer Science+Business Media LLC
5. Datta R, Deb K (2014) Evolutionary constrained optimization. Springer Publishing Company, Incorporated
6. Domínguez-Isidro S, Mezura-Montes E (2018) A cost-benefit local search coordination in multimeme differential evolution for constrained numerical optimization problems. Swarm Evol Comput 39:249–266
7. Drud AS (1994) CONOPT: a large-scale GRG code. ORSA J Comput 6(2):207–216
8. Elsayed S, Sarker R, Coello CC (2016) Enhanced multi-operator differential evolution for constrained optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp 4191–4198
9. Elsayed SM, Sarker RA, Essam DL (2011) Multi-operator based evolutionary algorithms for solving constrained optimization problems. Comput Oper Res 38(12):1877–1896
10. Gill PE, Murray W, Saunders MA (1997) SNOPT: an SQP algorithm for large-scale constrained optimization. Report SOL 97-3, Systems Optimization Laboratory, Stanford University
11. Gong W, Cai Z, Liang D (2015) Adaptive ranking mutation operator based differential evolution for constrained optimization. IEEE Trans Cybern 45(4):716–727
12. Zhan Zh, Zhang J, Shi Yh, Liu Hl (2012) A modified brain storm optimization. In: 2012 IEEE Congress on Evolutionary Computation (CEC), pp 1–8
13. Hamza NM, Elsayed SM, Essam DL, Sarker RA (2011) Differential evolution combined with constraint consensus for constrained optimization. In: 2011 IEEE Congress of Evolutionary Computation (CEC), pp 865–872
14. Hamza NM, Essam DL, Sarker RA (2016) Constraint consensus mutation-based differential evolution for constrained optimization. IEEE Trans Evol Comput 20(3):447–459
15. Hamza NM, Sarker RA, Essam DL (2013) Differential evolution with multi-constraint consensus methods for constrained optimization. J Glob Optim 57(2):583–611
16. Hassanein A, El-Abd M, Damaj I, Ur-Rehmana H (2020) Parallel hardware implementation of the brain storm optimization algorithm using FPGAs. Microprocess Microsyst 74:103005
17. Ibrahim W, Chinneck JW (2008) Improving solver success in reaching feasibility for sets of nonlinear constraints. Comput Oper Res 35(5):1394–1411
18. Liang JJ, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan PN, Coello Coello CA, Deb K (2005) Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. Technical report, Nanyang Technological University, Singapore. Available at: http://www.lania.mx/~emezura
19. Liu J, Peng H, Wu Z, Chen J, Deng C (2020) Multi-strategy brain storm optimization algorithm with dynamic parameters adjustment. Appl Intell 50:1289–1315
20. Mallipeddi R, Suganthan PN (2010) Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization. Technical report, Nanyang Technological University, Singapore
21. Mezura-Montes E, Coello-Coello CA (2011) Constraint-handling in nature-inspired numerical optimization: past, present and future. Swarm Evol Comput 1:173–194
22. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms for constrained parameter optimization problems. Evol Comput 4(1):1–32
23. Saunders MA, Murtagh BA (1993) MINOS 5.4 user's guide (preliminary). Technical Report SOL 83-20R
24. Osborn AF, Bristol LH (1979) Applied imagination: principles and procedures of creative problem-solving, 3rd edn. Scribners, New York
25. Rao SS (2009) Engineering optimization: theory and practice. Wiley
26. Sarker RA, Elsayed SM, Ray T (2014) Differential evolution with dynamic parameters selection for optimization problems. IEEE Trans Evol Comput 18(5):689–707
27. Shi Y (2011) Brain storm optimization algorithm. In: Proc. 2nd Int. Conf. on Swarm Intelligence, pp 303–309
28. Smith L, Chinneck JW, Aitken V (2013) Constraint consensus concentration for identifying disjoint feasible regions in nonlinear programmes. Optim Methods Softw 28(2):339–363
29. Smith L, Chinneck JW, Aitken V (2013) Improved constraint consensus methods for seeking feasibility in nonlinear programs. Comput Optim Appl 54(3):555–578
30. Spellucci P (1998) An SQP method for general nonlinear programs using only equality constrained subproblems. Math Program 82(3):413–448
31. Sun L, Wu Y, Liang X, He M, Chen H (2019) Constraint consensus based artificial bee colony algorithm for constrained optimization problems. Discrete Dyn Nat Soc, Article ID 6523435, 24 pages
32. Takahama T, Sakai S, Iwane N (2006) Solving nonlinear constrained optimization problems by the epsilon constrained differential evolution. In: 2006 IEEE International Conference on Systems, Man and Cybernetics, vol 3, pp 2322–2327
33. Takahama T, Sakai S (2010) Constrained optimization by the epsilon constrained differential evolution with an archive and gradient-based mutation. In: WCCI 2010 IEEE World Congress on Computational Intelligence, July 18–23, 2010, CCIB, Barcelona
34. Waltz RA, Nocedal J (2003) KNITRO user's manual. Technical Report OTC 2003-05, Optimization Technology Center, Northwestern University, Evanston, IL, USA
35. Jiao R, Zeng S, Li C (2019) A feasible-ratio control technique for constrained optimization. Inform Sci 502:201–217
36. Wang B-C, Feng Y, Li H-X (2020) Individual-dependent feasibility rule for constrained differential evolution. Inform Sci 506:174–195

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Adriana Cervantes-Castillo was born in Alto Tío Diego, Veracruz, México, in 1982. She received the BSc in Computer Science from the University of Veracruz, Xalapa, in 2008, the MSc in Artificial Intelligence from the University of Veracruz in 2014, and the PhD in Artificial Intelligence from the University of Veracruz in 2018. Her research interests are in the design, study, and application of nature-inspired meta-heuristic algorithms to solve complex optimization problems.

Dr. Efrén Mezura-Montes is a full-time researcher at the Artificial Intelligence Research Center, University of Veracruz, México. His research interests are the design, analysis, and application of bio-inspired algorithms to solve complex optimization problems. He has published over 145 papers in peer-reviewed journals and conferences. He also has one edited book and over 11 book chapters published by international publishing companies. From his work, Google Scholar reports over 5,800 citations.