A. Cervantes-Castillo and E. Mezura-Montes
method calculates the feasibility vectors for all the violated
constraints of a given infeasible solution, i.e., the method
estimates search directions towards the satisfaction of
each constraint. Once all the feasibility vectors are computed, they are combined to form a consensus vector which
contains both the direction and the distance needed to generate a
solution close to or inside the feasible region. In the following,
research works are presented where the CC method has been
used to solve constrained optimization problems.
Chinneck presented the Constraint Consensus (CC)
algorithm [3] as a method to move an arbitrary infeasible
solution from its position to another one relatively near or
even inside the feasible region, i.e., generating a feasible
solution. Problems with different types of constraints,
shapes, and complexities were solved by this method in the
software MINOS [23].
In 2008, Walid Ibrahim and John W. Chinneck presented
five new variants of the CC method [17], based on feasibility
distance (FDfar, FDnear) and based on direction (DBavg,
DBmax, DBbnd). Those new variants differ only in the
way they build the consensus vector. In those based on
feasibility distance, the consensus vector becomes the
longest or shortest feasibility vector, while in those based
on direction the consensus vector is built component-wise,
where the elements that make up the feasibility vector define
the winner direction (positive or negative). The authors
solved 231 constrained problems using the commercial
packages CONOPT 3.13 [7], SNOPT 6.1-1 [10], MINOS
5.5 [23], DONLP2 [30], and KNITRO 4.0 [34], showing
the DBmax variant as the best method to provide initial
start solutions. In 2013 Laurence Smith et al. [29] presented
some improvements to the CC method. A new idea to build
the consensus vector named SUM was introduced, where
the consensus vector becomes the average of the feasibility
vectors computed in the current solution. Additionally, they
presented the augmented version, where the main idea
is seeking feasibility using information from previously
estimated consensus vectors as well as previous information
of the violated constraints. In this way, fewer computations
are required in comparison with the other CC variants.
In the same year [28], the same authors presented the idea of
using the CC methods to identify disjoint feasible regions in
constrained problems using different multistart algorithms.
It is clear from the above review of the specialized literature
that different commercial programs with classical optimiza-
tion algorithms have been used in combination with the
CC algorithm to solve constrained problems showing com-
petitive results. Furthermore, in recent years, since 2011
[13–15], the CC method has been combined with Evolu-
tionary Algorithms like the Genetic Algorithm (GA) and
Differential Evolution (DE), solving different test problems
[18, 20].
The first efforts focused on using CC as a pre-processing
method, where at each iteration of the GA or DE, the CC
method was applied to some infeasible solutions chosen
from the current population. After that, the worst solutions
in the same current population were replaced by those
obtained by the CC method. In 2016, Hamza et al. [14]
proposed a different way to incorporate the CC method
into the DE algorithm. The authors used the CC method
into the DE mutation operator with the aim of improving
the final results while saving evaluations. This approach
outperformed the standard DE as well as other
state-of-the-art algorithms while reducing the computational cost. Sun et al. [31] added the
basic CC method to the artificial bee colony algorithm and
their results were compared against well-known approaches
showing a good performance. However, no other CC
variants were analyzed.
The motivation of this work is based on two issues:
(1) even though the CC method has been improved with different
variants, there are no proposals designed with a
swarm-intelligence algorithm in mind, and (2) the Brain Storm
Optimization Algorithm, which has provided competitive
results when dealing with CNOPs [2], has not been
enriched with studies on special operators for constrained
optimization.
Based on the two above-mentioned issues, this paper
presents a novel CC variant named R+V (most violated
restriction), which generates the consensus vector in a
simple and cheap way (considering it will be added to a
population-based approach): the consensus vector is simply the
feasibility vector of the hardest (most violated) constraint
in turn. Such CC variant is combined with a BSO variant
called Modified BSO, where the research hypothesis
is that the addition of this special operator will lead to a
performance improvement when solving constrained opti-
mization problems with different features. The contribu-
tion of this work is then a first BSO-based approach to
deal with constrained search spaces now enriched with
a cheap special operator focused on improving infeasible
solutions to get them inside or at least closer to the feasible
region.
The new CC variant is compared with previous CC pro-
posals [17, 29] in a set of well-known test problems [20].
After that, based on an empirical validation, a suitable
incorporation of the R+V variant within MBSO is pre-
sented, where its location and application frequency are
defined. Finally, the proposed MBSO-R+V is compared
against state-of-the-art algorithms presenting a highly com-
petitive performance when solving CNOPs with different
characteristics.
The organization of this paper is as follows. Section 2
includes the original Brain Storm Optimization (BSO)
algorithm, its modified version (MBSO) adopted in this
work motivated by a previous study [2], and the introduction
of the CC methods under study. Section 3 describes the
proposed approach with the CC R+V version proposed in
this research and also the MBSO-R+V algorithm. Section 4
presents the experimental design, the corresponding results
and discussion. Finally, Section 5 draws the conclusions of
this research and outlines future work.
2 BSO algorithms and CC methods
2.1 BSO algorithm
In 2011, Yuhui Shi presented the Brain Storm Optimization
(BSO) algorithm [27], which is inspired by the brainstorming
process [24], where a group of people with different
backgrounds meets with the aim of generating and combining
different ideas to propose a solution to a specific
problem. The brainstorming process is based on the following
four rules:
– No criticism is allowed.
– All proposed ideas can be considered.
– A considerable number of ideas should be generated.
– New ideas can be generated by combining current ideas.
Following the four rules presented above, a brainstorming
process consists of the following steps:
1. Gather a set of people with different backgrounds.
2. Generate as many ideas as possible based on Osborn's
rules [24].
3. Based on the problem owner's opinion, the best ideas are
chosen.
4. Those selected ideas are used as the base to create new
ideas.
5. From the set of those new ideas, the best ones, according
to the problem owner's opinion, are selected.
6. Select a group of ideas to generate new ones and avoid
getting stuck with the same opinions.
7. The best ideas are selected by the problem owner.
Taking the above steps as a base, and considering that
an idea is a potential solution of an optimization
problem (a CNOP in this case), the BSO algorithm is
detailed in Algorithm 1, where input parameters are the
number of ideas NP, the number of clusters M, and
probabilities preplace, pone, poneCenter, and ptwoCenter,
while rand(0,1) returns a random real number between 0 and
1 with uniform distribution.
The BSO algorithm uses four main operators:
– GroupingOperator (NP, M): The k-means algorithm
is used to cluster the NP ideas into M clusters. The
center of each cluster is defined by the best idea, i.e.,
the best solution based on fitness. The goal here is to
bias the search to different areas of the space to locate
those promising ones. As previously mentioned, this
operator promotes the exploration of the search space.
– ReplacingOperator (x): The best idea (center) x in
the selected cluster is replaced by an idea generated at
random with uniform distribution. The aim is to avoid
local optima while keeping diversity in the set of
solutions (ideas).
– CreatingOperator (xs): A new idea is generated by
considering ideas from one or two chosen clusters. Such
current ideas can be the best ones, i.e., the centers
of the clusters, or just randomly chosen ideas of the
corresponding clusters. The new idea is created by
adding a Gaussian noise to the selected idea as in (2)
and (3):
yi = xs + ξ ∗ N(μ, σ) (2)

ξ = logsig((0.5 ∗ T − t)/k) ∗ rand(0, 1) (3)
where yi represents the new idea, xs is the selected
idea (the center of the cluster or just a randomly chosen
solution); N(μ, σ) is a vector of Gaussian numbers
with mean μ and variance σ; T is the maximum
number of BSO iterations, t is the current iteration and
k determines the step size in the logsig () function,
where rand(0,1) returns a random value with uniform
distribution between 0 and 1.
– Combine (x1, x2): When two clusters are selected, the
ideas are combined into a single one xs as in (4).
xs = R × x1 + (1 − R) × x2 (4)
where R is a previously generated random number,
x1 and x2 are the selected ideas from cluster one and
cluster two, respectively.
This algorithm has shown success in solving different
optimization problems. In fact, it has been extended to
multi-strategies with adaptive parameters [19], and also to a
parallel hardware implementation [16].
2.2 MBSO algorithm
The Modified Brain Storm Optimization algorithm (MBSO)
is an improved BSO version proposed in 2012 [12]. MBSO
introduces a new clustering method in the grouping operator,
called the Simple Grouping Method, which follows these
steps:
1. Randomly select M ideas, which become the seeds of the
M clusters.
2. Compute the Euclidean distance from each idea in the
population to each cluster seed.
3. Compare the M distances of the current idea and add
it to the nearest cluster.
4. Repeat until all NP ideas are grouped into the M clusters.
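The four steps above can be sketched as follows; this is a minimal illustration, and details such as the return representation (index lists per cluster) are assumptions.

```python
import numpy as np

def simple_grouping(ideas, M, rng=None):
    """Sketch of MBSO's Simple Grouping Method: pick M random seed ideas,
    then assign every idea to its nearest seed by Euclidean distance.
    Returns a list of M lists holding the indices of each cluster's ideas."""
    rng = np.random.default_rng() if rng is None else rng
    ideas = np.asarray(ideas, dtype=float)
    seed_idx = rng.choice(len(ideas), size=M, replace=False)  # step 1
    seeds = ideas[seed_idx]
    clusters = [[] for _ in range(M)]
    for i, x in enumerate(ideas):                             # steps 2-4
        d = np.linalg.norm(seeds - x, axis=1)                 # distance to seeds
        clusters[int(np.argmin(d))].append(i)
    return clusters
```

Unlike k-means, no iterative center update is needed, which is the source of the lower O(NM) cost mentioned later in the paper.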
In the creating operator, MBSO introduces a new method
to generate the new ideas. The Gaussian noise is replaced
by the Idea Difference Strategy (IDS), which adds more
information of the current population to the idea to be
generated. The IDS uses (5) to create the new idea yi based
on a current idea xs:

yi = rand(L, H), if rand(0, 1) < pr;
     xs + rand(0, 1) × (xa − xb), otherwise. (5)
where xa and xb are two ideas (solutions) from the current
population chosen at random with uniform distribution used
in the vector difference, and pr is a parameter that simulates
open-mindedness in the creation of new ideas, similar to the
brainstorming process where all ideas are welcome.
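A minimal sketch of the IDS rule in (5); the restart-versus-difference branching follows the equation, while the parameter passing (bounds L, H and pr as arguments) is an illustrative assumption.

```python
import numpy as np

def ids_new_idea(x_s, population, L, H, pr=0.005, rng=None):
    """Sketch of MBSO's Idea Difference Strategy, Eq. (5): with small
    probability pr restart at a uniformly random point in [L, H];
    otherwise perturb x_s with a scaled difference of two random ideas."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < pr:                         # "open-minded" random restart
        return rng.uniform(L, H, size=x_s.shape)
    a, b = rng.choice(len(population), size=2, replace=False)
    return x_s + rng.random() * (population[a] - population[b])  # diff step
```

The difference term injects population information into each new idea, which is the resemblance to Differential Evolution noted later in Section 4.3.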
MBSO was chosen in this research based on a previous
study where it outperformed other BSO variants in
constrained search spaces [2].
2.3 CC methods
The Constraint Consensus (CC) method uses the concept of
projection algorithms, which are effective at moving infeasible
solutions closer to the feasible region. This movement
is carried out through the feasibility vector computed for each violated
constraint; such a vector includes direction and distance
information related to its corresponding constraint.
In this way, if xs is an infeasible solution and gi(xs) its
constraint violation for constraint i, then the CC method
computes the feasibility vector (fvi) for that constraint
using (6).
fvi = (−gi(xs) / ‖∇gi(xs)‖²) ∇gi(xs) (6)

where gi(xs) is the amount of constraint violation, ∇gi(xs)
is the constraint gradient, and ‖∇gi(xs)‖ is the gradient
length. Despite the fact that feasibility vectors are exact
just for linear constraints, they are suitable approximations
for non-linear constraints and they can be successfully
applied within stochastic search algorithms [14]. As it
was mentioned in Section 1, there are different ways to
generate the consensus vector based on the feasibility
vectors obtained by using (6).
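For a single violated constraint, (6) can be sketched as below; the zero-gradient guard is an added assumption for the degenerate case, not part of the original formulation.

```python
import numpy as np

def feasibility_vector(g_val, grad):
    """Sketch of Eq. (6): feasibility vector for one violated constraint,
    fv = (-g(x) / ||grad g(x)||^2) * grad g(x),
    where g_val > 0 is the violation amount and grad its gradient at x."""
    grad = np.asarray(grad, dtype=float)
    norm2 = float(np.dot(grad, grad))
    if norm2 == 0.0:                 # degenerate gradient: no usable direction
        return np.zeros_like(grad)
    return (-g_val / norm2) * grad
```

For a linear constraint such as g(x) = x0 − 1 ≤ 0 evaluated at x = (3, 0), the vector is (−2, 0), which lands exactly on the constraint boundary, illustrating why the text calls feasibility vectors exact for linear constraints.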
The basic CC approach is detailed in Algorithm 2, where
NINF is the number of violated constraints, sj is the
sum of the feasibility vector elements for variable j, nj
is the number of violated constraints where variable j is
considered, and t is the consensus vector. As it can be
noted, the basic CC method computes the elements of the
consensus vector by an average of those values of the
feasibility vectors of the corresponding violated constraints.
Besides the basic CC method, other variants are tested in
this work:
FDFAR: The feasibility vector with the largest distance
becomes the consensus vector. The aim is to reach the
feasible region faster [17].
DBMAX: In this case the signs of the elements of
the feasibility vectors are considered. If more positive
values are present for a given variable, the highest value
among them is taken as the corresponding value for the
consensus vector. The same applies if more negative
values are found. Ties consider the maximum values of
the positive and negative elements and they are averaged
to get the corresponding element of the consensus vector.
DBBND: Besides considering the signs of the feasibility
vector elements, the length of the movement and the
type of constraint (equality or inequality) are taken
into account (shorter movements and larger movements,
respectively).
AUGMENTED: This variant adopts a predictor-corrector
approach [29], where the predictor is the consensus vector
obtained by the basic CC variant. The corrector is formed
by the average of the relaxation factors computed inde-
pendently for each violated constraint and it is used to
adjust the length of the vector without modifying its
direction.
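A simplified sketch of three of these consensus-building rules follows. Note two assumptions: BASIC here averages over all feasibility vectors, ignoring the per-variable participation counts nj of the exact method, and DBMAX is a direct reading of the sign-vote description above.

```python
import numpy as np

def consensus(fvs, variant="basic"):
    """Sketch of combining feasibility vectors (rows of fvs) into a
    consensus vector for the BASIC, FDFAR, and DBMAX variants."""
    fvs = np.asarray(fvs, dtype=float)
    if variant == "basic":                     # average of the vectors
        return fvs.mean(axis=0)
    if variant == "fdfar":                     # longest feasibility vector
        return fvs[np.argmax(np.linalg.norm(fvs, axis=1))]
    if variant == "dbmax":                     # sign vote per component
        out = np.zeros(fvs.shape[1])
        for j in range(fvs.shape[1]):
            col = fvs[:, j]
            pos, neg = col[col > 0], col[col < 0]
            if len(pos) > len(neg):            # positive sign wins
                out[j] = pos.max()
            elif len(neg) > len(pos):          # negative sign wins
                out[j] = neg.min()
            elif len(pos):                     # tie: average the two extremes
                out[j] = (pos.max() + neg.min()) / 2.0
        return out
    raise ValueError(variant)
```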
3 Proposed approach
3.1 R+V: a new constraint consensus method variant
Each one of the CC variants discussed in Section 1 calcu-
lates the feasibility vector for each violated constraint of a
given solution (as in (6)). Thus, computing the gradient is
mandatory in this step, adding computational effort mostly
when the number of constraints associated with the problem
increases. A new CC variant called R+V (most violated
restriction) is proposed in this work, where the consensus
vector only includes the feasibility vector of the hardest
constraint in turn, i.e., the constraint with the highest vio-
lation. In consequence, just the gradient of such constraint
is computed, regardless of the feasibility information of the
remaining constraints. In other words, besides computing
just one feasibility vector, the consensus step is avoided
because the only feasibility vector is used to reduce the
infeasibility of a solution and such action saves computa-
tional time with respect to previous CC variants. Algorithm
3 shows the R+V steps.
Table 1 MBSO parameter values used in the experiments
Parameter Value
N 100
M 5
pr 0.005
p-replace 0.2
p-one 0.8
p-one-center 0.4
p-two-center 0.5
In the hardestConstraint() method, the constraint with
the highest violation amount is chosen (line 3 in Algorithm
3). The feasibility vector for such constraint becomes the
consensus vector in the R+V variant (line 4 in Algorithm
3) to compute the movement of the solution (line 5 in
Algorithm 3). There are two possible stop conditions: (1)
when the consensus vector length is less than a specified
tolerance α (line 5 in Algorithm 3), or (2) reaching a
pre-defined number of iterations μ (line 11 in Algorithm 3).
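The R+V loop of Algorithm 3 can be sketched as follows, assuming each constraint is supplied as a pair of callables (g, grad) with g(x) > 0 meaning violation; α and μ are the tolerance and iteration limit described above, and the interface itself is an illustrative assumption.

```python
import numpy as np

def rv_move(x, constraints, alpha=1e-6, mu=5):
    """Sketch of the R+V variant: repeatedly move x along the feasibility
    vector of only the most violated constraint, Eq. (6), skipping the
    consensus step entirely."""
    x = np.asarray(x, dtype=float)
    for _ in range(mu):                              # stop condition (2)
        viols = [g(x) for g, _ in constraints]
        worst = int(np.argmax(viols))                # hardestConstraint()
        if viols[worst] <= 0.0:                      # already feasible
            break
        g, grad = constraints[worst]
        gr = grad(x)
        t = (-viols[worst] / float(np.dot(gr, gr))) * gr  # single fv = consensus
        if np.linalg.norm(t) < alpha:                # stop condition (1)
            break
        x = x + t
    return x
```

With a single linear constraint x0 − 1 ≤ 0 and starting point (4, 0), one iteration lands exactly on the boundary (1, 0), after which the loop exits.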
3.2 ε-constrained method
The ε-constrained method, proposed by Takahama in [32],
is adopted as the constraint-handling technique in this work
to let the MBSO algorithm deal with a constrained
search space, because the original MBSO was proposed to
solve unconstrained optimization problems. This approach
is based on a problem transformation, i.e., the constrained
problem is transformed into an unconstrained optimization
problem. It compares the solutions based either on the
constraint violation φ(x) or the objective function value
f (x) according to an ε level. The ε-constrained method
emphasizes the constraint satisfaction followed by the
optimization of f (x). However, the method promotes a
balance of promising infeasible solutions by allowing
comparison of infeasible solutions close to the feasible
region based only on their objective function values. The
ε level comparison between two solutions (f (x1), φ(x1)),
(f (x2), φ(x2)) is calculated as indicated in (7) and (8):
(f (x1), φ(x1)) ≤ε (f (x2), φ(x2)) ⇐⇒ (7)

f (x1) ≤ f (x2), if φ(x1), φ(x2) ≤ ε;
f (x1) ≤ f (x2), if φ(x1) = φ(x2);
φ(x1) < φ(x2), otherwise. (8)
When ε = 0, the constraint violation precedes the
objective function value in the comparison. In contrast,
when ε = ∞, only the objective function value is used to
compare the solutions, i.e., the feasibility information is not
considered.
Table 2 Test problems adopted
in the experiments with
different dimensions (D),
separable (S), non-separable
(NS) or rotated (R) constraints
Test function Search space Objective function Equality constraints Inequality constraints
C01 [0, 10]D Non Separable – 2-NS
C02 [–5.12, 5.12]D Separable 1-S 2-S
C03 [–1000, 1000]D Non Separable 1-NS –
C04 [–50, 50]D Separable 2-S / 2-NS –
C05 [–600, 600]D Separable 2-S –
C06 [–600, 600]D Separable 2-R –
C07 [–140, 140]D Non Separable – 1-S
C08 [–140, 140]D Non Separable – 1-R
C09 [–500, 500]D Non Separable 1-S –
C10 [–500, 500]D Non Separable 1-R –
C11 [–100, 100]D Rotated 1-NS –
C12 [–1000, 1000]D Separable 1-NS 1-S
C13 [–500, 500]D Separable – 2-S / 1-NS
C14 [–1000, 1000]D Non Separable – 3-S
C15 [–1000, 1000]D Non Separable – 3-R
C16 [–10, 10]D Non Separable 2-S 1-S / 1-NS
C17 [–10, 10]D Non Separable 1-S 2-NS
C18 [–50, 50]D Non Separable 1-S 1-S
10D and 30D are solved in this research
Equation (9) shows how the ε level value is controlled.

ε(0) = φ(xθ)
ε(t) = ε(0)(1 − t/Tc)^cp, if 0 < t < Tc;
ε(t) = 0, if t ≥ Tc. (9)

where t represents the current iteration; Tc is the control
iteration after which ε = 0; xθ is the top θ-th solution,
θ = 0.2N; and cp regulates the reduction of the constraint tolerance.
The comparison criteria in (7) and (8) replace the
comparison based just on the objective function value used
in Algorithm 1.
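A compact sketch of the ε-level comparison in (7)-(8) and the ε schedule in (9); the function names are illustrative, and cp defaults to 0.5 as adopted in this work.

```python
def eps_better(f1, phi1, f2, phi2, eps):
    """Sketch of Eqs. (7)-(8): True when solution 1 is preferred over
    solution 2 under the epsilon-level comparison (f = objective value,
    phi = constraint violation)."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 <= f2            # compare by objective value
    return phi1 < phi2             # otherwise by constraint violation

def eps_level(t, eps0, Tc, cp=0.5):
    """Sketch of Eq. (9): shrink eps(0) to zero by iteration Tc."""
    if t >= Tc:
        return 0.0
    return eps0 * (1.0 - t / Tc) ** cp
```

As ε(t) decays, the comparison smoothly shifts from tolerating slightly infeasible solutions (compared by f) toward strict feasibility-first selection.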
4 Experiments and results
4.1 Experimental design and parameter tuning
To investigate the performance of the proposed MBSO-R+V
algorithm, four experiments were designed as follows:
1. To determine the quality of the R+V proposal with
respect to other CC variants.
2. To define the best location of the R+V method within
MBSO.
3. To set the R+V application frequency in MBSO.
4. To compare the combined algorithm MBSO-R+V
against state-of-the-art approaches for CNOPs.
Fig. 1 Experiment A, total number of improved solutions by each CC
variant, except R+V
The parameter values used in the experiments for the R+V
variant were similar to those suggested in [14], where
the CC method was added to a population-based search
algorithm: α = 0.000001; μ = 5. For the MBSO algorithm
the parameters used were those proposed in [2], where
MBSO solved different types of CNOPs. The values are
in Table 1. The test functions solved in this research are
those proposed in [20] (10D and 30D) and their details
are summarized in Table 2. The maximum number of
evaluations was Maxfes = 200,000 for 10D and Maxfes
= 600,000 for 30D. The value of the ε-constrained method
parameter cp was 0.5, as proposed in [2].
Table 3 Experiment A, results
obtained by each CC variant in
the 10D benchmark functions
F Infeasible BASIC FDFAR DBMAX AUGMENTED DBBND R+V
C01 1 1 1 1 1 1 1
C02 100 72 78 72 64 72 76
C03 100 100 100 100 98 100 100
C04 100 61 37 32 68 37 100
C05 100 67 65 71 64 67 59
C06 100 53 62 68 64 53 59
C07 65 65 65 65 65 65 65
C08 59 56 56 56 53 56 56
C09 100 46 46 46 100 46 46
C10 100 70 70 70 88 70 70
C11 100 100 100 100 100 100 100
C12 100 100 100 100 94 100 100
C13 100 100 100 100 98 100 100
C14 100 100 100 100 97 100 100
C15 100 57 50 55 63 55 49
C16 100 95 62 79 42 90 83
C17 100 53 52 52 50 50 49
C18 100 57 100 57 57 57 100
TOTAL 1625 1253 1244 1224 1266 1219 1313
Bold data indicate best results
Fig. 2 Experiment A, total number of improved solutions by
AUGMENTED and R+V variants
4.2 Experiment A: comparison of R+V against other
constraint consensus variants
To assess the R+V performance against other CC variants
proposed in [17, 29] and mentioned in Section 2.3, the
following was carried out. For each test problem 100
initial solutions were generated at random with uniform
distribution. From those solutions, the infeasible ones were
considered as starting points for each CC variant compared.
The number of infeasible solutions for each problem is
shown in the second column of Table 3. Those numbers vary
because of the different types of constraints found in each
test problem (see Table 2 for details).
The remaining columns at the right-hand side of the
table present the success obtained by each variant, i.e., the
number of infeasible solutions which became feasible or
at least their sum of constraint violation was decreased
(i.e., they were located closer to the feasible region).
Both situations, reaching feasibility and decreasing the violation, were
considered a success because the goal was to assess the
ability of the operator to improve an infeasible solution.
Because similar results were obtained in 10D
and 30D, only those in 10D are presented. Aside from R+V,
and based on Table 3, the AUGMENTED version provided
the most competitive performance. To add clarity, such
comparison is graphically presented in Fig. 1. However, as
indicated in Fig. 2, R+V outperforms the AUGMENTED
variant. It is worth remarking that, based on Table 3, R+V
outperformed the other CC variants in test problem C04,
which is the one with the most equality constraints. The results
then suggest, for the test problems adopted in this work,
that letting the CC method discard the information of all
constraints except the most difficult one to satisfy, instead of
joining the violation information of all violated constraints
(even with the relaxation factors per constraint, as in the
AUGMENTED CC variant), has two advantages: (1) it helps
the solution get closer to the feasible region or become
feasible, and (2) it eliminates the cost related to the consensus
process, since just one feasibility vector is computed.
From the above discussion it can be concluded that R+V
is a competitive CC variant and it has the advantage that
it avoids the usage of the consensus step by adopting the
hardest constraint to be satisfied as the promising search
direction to get a feasible solution.
4.3 Experiment B: locating the R+V variant
into the MBSO algorithm
Having evidence about the competitive performance of the
R+V CC variant, the next phase consists in finding a suitable
combination of this method as a special operator within
MBSO. In this sense, this experiment aimed to identify
the best location of R+V in MBSO. From Section 3, three
MBSO elements were considered: (1) the grouping operator,
(2) the replacing operator, and (3) the creating-combine operators.
Therefore, three experimental versions were designed.
1. Experimental version 1 (R+VE1): The R+V variant was
located within the MBSO algorithm before applying
the grouping operator. In this way, R+V acts only as
Table 4 Experiment B, 95%-
confidence rank-sum Wilcoxon
test results in 10D test problems
Versions Criteria Better Equal Worse Decision p-value
R+VE1 vs R+VE2 Best Results 3 15 0 = 0.974277525
R+VE1 vs R+VE3a 4 14 0 = 0.66585532
R+VE1 vs R+VE3b 2 15 0 = 0.961219496
R+VE1 vs R+VE3c 4 14 0 = 1
R+VE1 vs R+VE2 Average Results 2 15 1 = 0.987376927
R+VE1 vs R+VE3a 3 14 1 = 0.874296698
R+VE1 vs R+VE3b 2 16 0 = 0.824715242
R+VE1 vs R+VE3c 4 14 0 = 0.447628106
Table 5 Experiment B, 95%-
confidence rank-sum Wilcoxon
test results in 30D test problems
Versions Criteria Better Equal Worse Decision p-value
R+VE1 vs R+VE2 Best Results 3 15 0 = 0.843011125
R+VE1 vs R+VE3a 4 13 1 = 0.679885581
R+VE1 vs R+VE3b 5 12 1 = 0.800171553
R+VE1 vs R+VE3c 4 12 2 = 0.861838193
R+VE1 vs R+VE2 Average Results 3 15 0 = 0.65591049
R+VE1 vs R+VE3a 5 13 0 = 0.65591049
R+VE1 vs R+VE3b 4 12 2 = 1
R+VE1 vs R+VE3c 6 12 0 = 0.987378551
a preprocessing phase of the population which will
be used by the MBSO algorithm later. Five randomly
selected infeasible solutions are processed by the R+V
variant. The obtained solutions replace the original
input solutions in the current population.
2. Experimental version 2 (R+VE2): The R+V variant
is within the replacing operator. If the new solution
is infeasible, then the R+V variant is applied to such
solution before replacing it.
3. Experimental version 3: Three situations were observed.
Considering the fact that the crossover operator
in the MBSO algorithm in (5) is similar to that of
Differential Evolution, where a base vector added to a
difference vector is computed, the following three
places are of interest to apply the R+V variant.
(a) Experimental version 3a (R+VE3a): The R+V
variant acts in the base idea xs before it is used to
generate the new solution.
(b) Experimental version 3b (R+VE3b): With xa and
xb being the difference ideas, the R+V variant acts
on idea xa, which provides the direction in such a
difference.
(c) Experimental version 3c (R+VE3c): The R+V
variant acts on both the base idea xs and the difference
idea xa used in the crossover operator.

Fig. 3 Experiment B, number of 10D test problems where each version
was better than the other ones, based on the median value
The 95%-confidence rank-sum Wilcoxon test was applied
to the final results of a set of 30 independent runs for each
algorithm version. The results are shown in Tables 4 (10D)
and 5 (30D), where the R+VE1 version was adopted as the
base algorithm for the statistical test. Those results suggest
no significant differences among versions, i.e., the R+V
variant helps MBSO regardless of its position in the algorithm.
However, R+VE1 was slightly better than the compared
versions.
Such behavior can be clearly observed in Figs. 3 and 4,
where the number of test functions where the median value
of each version is better than those of the other versions
is graphically presented. Based on such figures, R+VE1 is
better, particularly in 30D problems, i.e., the most difficult
to solve. From the results of this experiment B, the R+V
version will be located before the grouping operator in this
research.

Fig. 4 Experiment B, number of 30D test problems where each version
was better than the other ones, based on the median value

Fig. 5 Experiment C, R+V applied during the whole search process in
representative 10D test problem C05 (plot of the constraint violation
degree, the number of feasible points, and the R+V applications over
generations)
Despite the fact that R+V benefits MBSO in all the
positions above mentioned, it is worth remarking that, once
the dimensions in the constrained search space increase,
the R+V usage is more convenient before the variation
operators and the replacement process. Such behavior
differs from that observed in other approaches where the CC
method has been adopted, as it is the case for differential
evolution in [14], where the CC method is considered within
the mutation operator.
4.4 Experiment C: R+V frequency application
within MBSO
To analyze the frequency of application for the R+V variant
within MBSO, the expected behavior of a nature-inspired
search algorithm when solving CNOPs was considered.
Fig. 6 Experiment C, average
evaluations required by the
algorithm to approximate the
best known solution in the whole
benchmark, where R+V was
applied every 5, 10, 15, 20, 25,
30, 35, 40, 45 and 50 generations
during the first 15% of total
generations of the algorithm in
10D and 30D test problems
Such behavior states that most infeasible solutions are
present at the beginning of the search. As the process
advances, the effect of the constraint-handling technique
allows more feasible solutions to be generated in the population.
Figure 5 presents such behavior using the MBSO algorithm
with the R+V variant along a single run in representative
10D test problem C05.
Based on the aforementioned, the R+V variant was
applied only in the first 15% of the total number of
generations of the algorithm. However, the frequency of
application within that 15% remains to be determined.
Figure 6 reports the average evaluations required by the
algorithm to approximate the best known solution in the
whole benchmark when applying R+V every 5, 10, 15, 20,
25, 30, 35, 40, 45, and 50 generations in the first 15% of the
total generations of the algorithm.
From the results in Fig. 6, R+V saves more evaluations
when it is applied every 35 generations during the first 15%
of the total generations spent by the algorithm.
To further analyze the positive effect of R+V within
MBSO, representative convergence plots are shown in
Figs. 7 and 8, for 10D and 30D test problems, respectively.
The positive effect of the R+V special operator allows
the approach to reach better results faster than the MBSO
version without it.
Regarding the computational complexity of MBSO-R+V,
the proposal has two important advantages: (1) based on
the fact that the approach adopted MBSO and not BSO,
the O(n²) cost of the k-means algorithm is avoided while the
MBSO’s Simple Grouping Method is O(NM), where N is
the number of ideas and M is the number of groups, and (2)
as mentioned in Section 3.1, R+V, unlike other CC variants,
computes just one feasibility vector and also avoids the
consensus step, decreasing the operations required to obtain
the feasible direction.

Fig. 7 Experiment C, MBSO-R+V against MBSO 10D representative
convergence plots
The pseudocode of the proposed MBSO-R+V is detailed
in Algorithm 4:
4.5 Experiment D: comparing MBSO-R+V against
state-of-the-art algorithms
Having the complete design of the proposed MBSO-R+V
algorithm, its performance is compared against state-of-the-
art algorithms. The results are shown in Tables 6–7. The
state-of-the-art algorithms compared are the following:
– IDFRD: Individual-dependent feasibility rule for con-
strained differential evolution [36].
– FRC-CEA: A feasible-ratio control technique for
constrained optimization [35].
– CoBe-MmDE: A multimeme DE algorithm empowered
by local search operators [6].
– EMODE: An enhanced Multi-operator DE [8].
– DEbavDBmax: A Constraint Consensus Mutation-
Based DE [14].
The 95%-confidence Kruskal-Wallis and the Bonferroni
post-hoc statistical tests were applied to the results in
Tables 6–7. Figure 9 includes such a comparison, and it can be
seen that no significant differences were observed regarding
10D and 30D with respect to all compared algorithms. The
statistical tests results indicate that MBSO-R+V provides a
competitive performance against state-of-the-art algorithms
to solve different types of CNOPs. It is worth noting
that, with respect to the compared and recently proposed
approaches, MBSO-R+V does not require the problem
transformation [35], the modification of the constraint-
handling technique [36], the combination of different local
searches [6] or multiple operators [8]. Such requirements
might make them more difficult to either code or calibrate.
5 Conclusions and future work
This paper presented an improved brainstorm optimization
algorithm coupled with a simplified version of the
constraint consensus special operator to solve constrained
optimization problems. The new constraint consensus
version, named R+V, is based on the search direction of the hardest constraint to satisfy for the solution being updated; it was compared against other Constraint Consensus variants on thirty-six well-known constrained problems.
The results showed R+V to be the most competitive variant, even though it computes just one feasibility vector and avoids the consensus step and its cost. After obtaining a competitive and low-cost special operator, its incorporation
within the MBSO algorithm, which has provided a
competitive performance in constrained search spaces [2],
was presented. Based on empirical comparisons validated by statistical tests, it was found that applying the R+V variant before the MBSO grouping operator, and only every 35 generations during the early part of the search, provided better results.
Fig. 8 Experiment C, MBSO-R+V against MBSO 30D representative convergence plots
This MBSO-R+V version was further compared against
five state-of-the-art algorithms for constrained optimization.
Such comparison indicated no significant differences of the
performance provided by MBSO-R+V with respect to those
obtained by the compared approaches. It is important to remark that most algorithms used for comparison are based on differential evolution, which has shown a particular ability to provide highly competitive results when solving CNOPs [21]. Moreover, adding a simplified variant of a special operator to the MBSO algorithm preserved its implementation simplicity when contrasted with the compared approaches, which require modifications to the search algorithm such as multiple variation operators [8], modifications to the variation operators [14], multiple local-search operators [6], modifications to the constraint-handling technique [35], or the use of dynamic multi-objective optimization concepts [36].
It has been shown in this research work that a suitable special operator is able to significantly improve the search ability of a particular swarm intelligence algorithm, providing performance similar to that of DE-based state-of-the-art proposals.
Based on the findings obtained in this work, the future research paths are: (1) the proposal of parameter-control techniques to tune the MBSO parameters,
(2) the addition of R+V in other popular population-based
algorithms in constrained optimization like differential evo-
lution, (3) the study of other special operators coupled with
MBSO, and (4) considering multi-objective constrained
optimization problems.
Table 7 (continued)

F     Algorithm     10D Mean    10D Std     30D Mean    30D Std
      E-MODE        3.42E-30    1.71E-29    2.77E-21    8.44E-21
C18   MBSO-R+V      1.29E+02    4.19E+02    2.08E+02    9.56E+02
      IDFRD         0.00E+00    0.00E+00    2.28E-29    7.14E-29
      FRC-CEA       0.00E+00    0.00E+00    0.00E+00    0.00E+00
      CoBe-MmDE     0.00E+00    0.00E+00    6.48E+01    1.46E+02
      DEbavDBmax    3.89E-24    4.34E-24    2.83E-01    1.36E+00
      E-MODE        1.53E-32    2.20E-32    1.34E-20    6.55E-20

Bold data indicate best results
Fig. 9 Experiment D, Kruskal-Wallis and Bonferroni post-hoc statistical tests. Average values obtained in the objective function by each compared algorithm. In both the 10D and 30D plots, no groups have mean ranks significantly different from MBSO-R+V
Acknowledgments The first author acknowledges support from the
Mexican Council of Science and Technology (CONACyT) and the
University of Veracruz to pursue graduate studies at its Artificial
Intelligence Research Center. The second author acknowledges
support from CONACyT through project No. 220522.
Compliance with Ethical Standards
Conflict of interests The authors declare that they have no conflict of
interest.
References
1. Bonyadi MR, Michalewicz Z (2014) On the edge of feasibility:
a case study of the particle swarm optimizer. In: 2014 IEEE
Congress on evolutionary computation (CEC)
2. Cervantes-Castillo A, Mezura-Montes E (2016) A study of
constraint-handling techniques in brain storm optimization. In:
2016 IEEE Congress on evolutionary computation (CEC),
pp 3740–3746
3. Chinneck JW (2004) The constraint consensus method for finding
approximately feasible points in nonlinear programs. INFORMS J
Comput 16(3):255–265
4. Chinneck JW (2008) Feasibility and infeasibility in optimization:
algorithms and computational methods. Springer Science +
Business Media LLC
5. Datta R, Deb K (2014) Evolutionary constrained optimization.
Springer Publishing Company, Incorporated
6. Domínguez-Isidro S, Mezura-Montes E (2018) A cost-benefit
local search coordination in multimeme differential evolution
for constrained numerical optimization problems. Swarm Evol
Comput 39:249–266
7. Drud AS (1994) CONOPT: a large-scale GRG code. ORSA J Comput
6(2):207–216
8. Elsayed S, Sarker R, Coello CC (2016) Enhanced multi-operator
differential evolution for constrained optimization. In: 2016 IEEE
Congress on evolutionary computation (CEC), pp 4191–4198
9. Elsayed SM, Sarker RA, Essam DL (2011) Multi-operator based
evolutionary algorithms for solving constrained optimization
problems. Comput Oper Res 38(12):1877–1896
10. Gill PE, Murray W, Saunders MA (1997) SNOPT: an SQP algorithm
for large-scale constrained optimization. Technical Report SOL 97-3,
Systems Optimization Laboratory, Stanford University
11. Gong W, Cai Z, Liang D (2015) Adaptive ranking mutation
operator based differential evolution for constrained optimization.
IEEE Trans Cybern 45(4):716–727
12. Zhan Z-H, Zhang J, Shi Y-H, Liu H-L (2012) A modified brain
storm optimization. In: 2012 IEEE Congress on evolutionary
computation (CEC), pp 1–8
13. Hamza NM, Elsayed SM, Essam DL, Sarker RA (2011)
Differential evolution combined with constraint consensus for
constrained optimization. In: 2011 IEEE Congress of evolutionary
computation (CEC), pp 865–872
14. Hamza NM, Essam DL, Sarker RA (2016) Constraint consensus
mutation-based differential evolution for constrained optimiza-
tion. IEEE Trans Evol Comput 20(3):447–459
15. Hamza NM, Sarker RA, Essam DL (2013) Differential evo-
lution with multi-constraint consensus methods for constrained
optimization. J Glob Optim 57(2):583–611
16. Hassanein A, El-Abd M, Damaj I, Ur-Rehman H (2020)
Parallel hardware implementation of the brain storm optimization
algorithm using FPGAs. Microprocess Microsyst 74:103005
17. Ibrahim W, Chinneck JW (2008) Improving solver success in
reaching feasibility for sets of nonlinear constraints. Comput
Oper Res 35(5):1394–1411. Part Special Issue: Algorithms and
Computational Methods in Feasibility and Infeasibility
18. Liang JJ, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan
PN, Coello Coello CA, Deb K (2005) Problem definitions and
evaluation criteria for the CEC 2006 special session on constrained
real-parameter optimization. Technical report, Nanyang Tech-
nological University, Singapore, December. Available at: http://
www.lania.mx/~emezura
19. Liu J, Peng H, Wu Z, Chen J, Deng C (2020) Multi-strategy
brain storm optimization algorithm with dynamic parameters
adjustment. Appl Intell 50:1289–1315
20. Mallipeddi R, Suganthan PN (2010) Problem definitions and eval-
uation criteria for the CEC 2010 competition on constrained
real-parameter optimization. Technical Report, Nanyang Techno-
logical University, Singapore
21. Mezura-Montes E, Coello-Coello CA (2011) Constraint-handling
in nature-inspired numerical optimization: past, present and
future. Swarm Evol Comput 1:173–194
22. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms
for constrained parameter optimization problems. Evol Comput
4(1):1–32
23. Murtagh BA, Saunders MA (1993) MINOS 5.4 user's guide
(preliminary). Technical Report SOL 83-20R, Systems Optimization
Laboratory, Stanford University
24. Osborn AF, Bristol LH (1979) Applied imagination: principles
and procedures of creative problem-solving, 3rd edn. Scribners,
New York
25. Rao SS (2009) Engineering optimization: theory and practice.
Wiley
26. Sarker RA, Elsayed SM, Ray T (2014) Differential evolution with
dynamic parameters selection for optimization problems. IEEE
Trans Evol Comput 18(5):689–707
27. Shi Y (2011) Brain storm optimization algorithm. In: Proc. 2nd
Int. conf. on swarm intelligence, pp 303–309
28. Smith L, Chinneck JW, Aitken V (2013) Constraint consensus
concentration for identifying disjoint feasible regions in nonlinear
programmes. Optim Methods Softw 28(2):339–363
29. Smith L, Chinneck JW, Aitken V (2013) Improved constraint
consensus methods for seeking feasibility in nonlinear programs.
Comput Optim Appl 54(3):555–578
30. Spellucci P (1998) An SQP method for general nonlinear programs
using only equality constrained subproblems. Math Program
82(3):413–448
31. Sun L, Wu Y, Liang X, He M, Chen H (2019) Constraint
consensus based artificial bee colony algorithm for constrained
optimization problems. Discrete Dynamics in Nature and Society,
Article ID 6523435, 24 pages
32. Takahama T, Sakai S, Iwane N (2006) Solving nonlinear
constrained optimization problems by the epsilon constrained
differential evolution. In: 2006 IEEE International conference on
systems, man and cybernetics, vol 3, pp 2322–2327
33. Takahama T, Sakai S (2010) Constrained optimization by
the epsilon constrained differential evolution with an archive
and gradient-based mutation. In: WCCI 2010 IEEE World
Congress on computational intelligence July, 18-23, 2010 - CCIB,
Barcelona
34. Waltz RA, Nocedal J (2003) KNITRO user's manual. Technical
Report OTC 2003-05, Optimization Technology Center, Northwestern
University, Evanston, IL, USA
35. Jiao R, Zeng S, Li C (2019) A feasible-ratio control technique for
constrained optimization. Inform Sci 502:201–217
36. Wang B-C, Feng Y, Li H-X (2020) Individual-dependent
feasibility rule for constrained differential evolution. Inform Sci
506:174–195
Publisher’s note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Adriana Cervantes-Castillo was born in Alto Tío Diego, Veracruz, México, in 1982. She received the BSc in Computer Science from the University of Veracruz, Xalapa, in 2008, the MSc in Artificial Intelligence from the University of Veracruz in 2014, and the PhD in Artificial Intelligence from the University of Veracruz in 2018. Her research interests are in the design, study, and application of nature-inspired meta-heuristic algorithms to solve complex optimization problems.
Dr. Efrén Mezura-Montes is a full-time researcher at the Artificial Intelligence Research Center, University of Veracruz, MEXICO. His research interests are the design, analysis and application of bio-inspired algorithms to solve complex optimization problems. He has published over 145 papers in peer-reviewed journals and conferences. He also has one edited book and over 11 book chapters published by international publishing companies. From his work, Google Scholar reports over 5,800 citations.