A Hybrid Immunological Search for the Weighted
Feedback Vertex Set Problem
Vincenzo Cutello, Maria Oliva, Mario F. Pavone, Rocco A. Scollo
Department of Mathematics and Computer Science
University of Catania
V.le A. Doria 6, I-95125 Catania, Italy
cutello@unict.it, mpavone@dmi.unict.it
Abstract. In this paper we present a hybrid immunological inspired algorithm (HYBRID-IA) for solving the Minimum Weighted Feedback Vertex Set (MWFVS) problem. MWFVS is one of the most interesting and challenging combinatorial optimization problems, which finds application in many fields and in many real-life tasks. The proposed algorithm is inspired by the clonal selection principle, and therefore it takes advantage of the main strengths of the operators of (i) cloning; (ii) hypermutation; and (iii) aging. Along with these operators, the algorithm uses a local search procedure, based on a deterministic approach, whose purpose is to refine the solutions found so far. In order to evaluate the efficiency and robustness of HYBRID-IA, several experiments were performed on different instances, and for each instance it was compared to three different algorithms: (1) a memetic algorithm based on a genetic algorithm (MA); (2) a tabu search metaheuristic (XTS); and (3) an iterated tabu search (ITS). The obtained results prove the efficiency and reliability of HYBRID-IA on all instances in terms of the best solutions found, with performances similar to those of all compared algorithms, which nowadays represent the state-of-the-art for the MWFVS problem.
Keywords: Immunological algorithms, immune-inspired computation, metaheuristics, combinatorial optimization, feedback vertex set, weighted feedback vertex set.
1 Introduction
Immunological inspired computation represents an established and rich family of algorithms inspired by the dynamics and the information processing mechanisms that the Immune System uses to detect, recognise, learn and remember entities foreign to the living organism [11]. Thanks to these interesting features, immunological inspired algorithms represent successful computational methodologies in search and optimization tasks [4, 20]. Although there exist several immune theories at the basis of immunological inspired computation that could characterize their natural application to anomaly detection and classification tasks, one that has been proven to be quite effective and robust is based on the clonal selection principle. Algorithms based on the clonal selection principle work on a population of immunological cells, better known as antibodies, that proliferate, i.e. clone themselves (the number of copies depends on the quality of their foreign entity detection), and undergo a mutation process, usually at a high rate. This process, biologically called Affinity Maturation, makes these algorithms very suitable for function and combinatorial optimization problems. This statement is supported by several experimental applications [7, 6, 8, 18], as well as by theoretical analyses that prove their efficiency with respect to several Randomized Search Heuristics [25, 15, 14, 23].
In light of the above, we have designed and developed an immune inspired hypermutation algorithm, based precisely on the clonal selection principle, in order to solve a classical combinatorial optimization problem, namely the Weighted Feedback Vertex Set (WFVS).
In addition, we take into account what clearly emerges from the evolutionary computation literature: in order to obtain good performances and solve hard combinatorial optimization problems, it is necessary to combine metaheuristics with other classical optimization techniques, such as local search, dynamic programming, exact methods, etc. For this reason, we have designed a hybrid immunological algorithm, including some deterministic criteria in order to refine the solutions found and to help the convergence of the algorithm towards the global optimal solution.
The hybrid immunological algorithm proposed in this paper, hereafter simply called HYBRID-IA, takes advantage of the immunological operators of cloning, hypermutation and aging to carefully explore the search space and properly exploit the information learned, but it also makes use of local search to improve the quality of the solutions, deterministically trying to remove the vertex that breaks off one or more cycles in the graph and has the smallest weight-degree ratio. The many experiments performed have proved the fruitful impact of such a greedy idea, since it was almost always able to improve the solutions, leading HYBRID-IA towards the global optimal solution. For evaluating the reliability and efficiency of HYBRID-IA, many experiments on several different instances have been performed, taken from [2] and following the experimental protocol proposed therein. HYBRID-IA was also compared with three other metaheuristics: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [9, 1]; and Memetic Algorithm (MA) [2]. In all comparisons performed and analysed, HYBRID-IA proved to be always competitive and comparable with the best of the three metaheuristics. Furthermore, it is worth emphasizing that HYBRID-IA outperforms the three algorithms on some instances, since, unlike them, it is able to find the global best solution.
2 The Weighted Feedback Vertex Set Problem
Given a directed or undirected graph G = (V, E), a feedback vertex set of G is a subset S ⊂ V of vertices whose removal makes G acyclic. More formally, if S ⊂ V we can define the subgraph G[S] = (V \ S, E_{V\S}), where E_{V\S} = {(u, v) ∈ E : u, v ∈ V \ S}. If G[S] is acyclic, then S is a feedback vertex set. The Minimum Feedback Vertex Set problem (MFVS) is the problem of finding a feedback vertex set of minimal cardinality. If S is a feedback vertex set, we say that a vertex v ∈ S is redundant if the induced subgraph G[S \ {v}] = ((V \ S) ∪ {v}, E_{(V\S)∪{v}}) is still acyclic. It follows that S is a minimal FVS if it does not contain any redundant vertices. It is well known that the decision version of MFVS is an NP-complete problem for general graphs [12, 17] and even for bipartite graphs [24].
If we associate a positive weight w(v) to each vertex v ∈ V, and S is any subset of V, then its weight is the sum of the weights of its vertices, i.e. w(S) = Σ_{v∈S} w(v). The Minimum Weighted Feedback Vertex Set problem (MWFVS) is the problem of finding a feedback vertex set of minimal weight.
The MWFVS problem is an interesting and challenging task, as it finds application in many fields and in many real-life tasks, such as in the context of (i) operating systems, for preventing or removing deadlocks [22]; (ii) combinatorial circuit design [16]; (iii) information security [13]; and (iv) the study of monopolies in synchronous distributed systems [19]. Many heuristics and metaheuristics have been developed for the simple MFVS problem, whilst very few, instead, have been developed for the weighted variant of the problem.
3 HYBRID-IA: a Hybrid Immunological Algorithm
HYBRID-IA is an immunological algorithm inspired by the clonal selection principle, which represents an excellent example of a bottom-up intelligent strategy, where adaptation operates at the local level, whilst a complex and useful behaviour emerges at the global level. The basic idea of this metaphor is how the cells adapt for binding and eliminating foreign entities, better known as Antigens (Ag). The key features of HYBRID-IA are the cloning, hypermutation and aging operators, which, respectively, generate new populations centered on higher affinity values; explore the neighborhood of each solution in the search space; and eliminate the old cells and the less promising ones, in order to keep the algorithm from getting trapped in local optima. To these immunological operators a local search strategy is also added: by restoring one or more cycles through the addition of a previously removed vertex, it tries to break off the cycle, or cycles, again by conveniently removing a new vertex that improves the fitness of the solution. Usually the best choice is to select the node with the minimal weight-degree ratio. The existence of one or more cycles is detected via the well-known Depth First Search (DFS) procedure [5]. The aim of the local search is then to repair the random choices made by the stochastic operators via more locally appropriate choices.
HYBRID-IA is based on two main entities: the antigen, which represents the problem to tackle, and the B cell receptor, which instead represents a point (configuration) of the search space. Each B cell, in particular, represents a permutation of vertices that determines the order in which the nodes are removed: starting from the first position of the permutation, the corresponding node is selected and removed from the graph, and this is iteratively repeated, following the order in the permutation, until an acyclic graph is obtained or, in general, there are no more vertices to be removed. Once an acyclic graph is obtained, all redundant vertices are restored in V. The removed vertices then represent the set S, i.e. a solution to the problem. This process is outlined in Algorithm 1. Some clarifications about Algorithm 1:
– the input to the algorithm is an undirected graph G = (V, E);
– given any X ⊆ V, by dX(v) we denote the degree of vertex v in the graph induced by X, i.e. the graph G(X) obtained from G by removing all the vertices not in X;
– if, given any X ⊆ V, a vertex v has degree dX(v) ≤ 1, such a vertex cannot be involved in any cycle. So, when searching for a feedback vertex set, any such vertex can simply be ignored, or removed from the graph;
– the algorithm deterministically associates a subset S of the graph vertices to any given permutation of the vertices in V. The sum of the weights of the vertices in S will be the fitness value associated to the permutation.
Algorithm 1 Create Solution
X ← V
P ← permutation(V)
S ← ∅
for u ∈ P do
  if u ∈ X and G(X) not acyclic then
    X ← X \ {u}
    S ← S ∪ {u}
    while ∃v ∈ X : dX(v) < 2 do
      X ← X \ {v}
    end while
  end if
end for
Remove all redundant vertices from S
return S
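As an illustration, Algorithm 1 can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the union-find acyclicity test and all helper names are our own, and the inner `while` peels off vertices of degree < 2, which, as noted above, cannot lie on any cycle.

```python
import random

def is_acyclic(edges, X):
    """Union-find test: the subgraph induced by vertex set X is acyclic
    iff no edge joins two already-connected vertices."""
    parent = {v: v for v in X}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        if u in X and v in X:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
    return True

def create_solution(vertices, edges, weights, rng=random):
    """Sketch of Algorithm 1: turn a random permutation (a B cell) into a
    feedback vertex set S, then drop redundant vertices."""
    X = set(vertices)
    P = list(vertices)
    rng.shuffle(P)
    S = []
    deg = lambda v: sum(1 for a, b in edges if a in X and b in X and v in (a, b))
    for u in P:
        if u in X and not is_acyclic(edges, X):
            X.discard(u)
            S.append(u)
            # vertices of degree < 2 cannot lie on a cycle: peel them off
            peeled = True
            while peeled:
                peeled = False
                for v in [v for v in X if deg(v) < 2]:
                    X.discard(v)
                    peeled = True
    # remove redundant vertices (heaviest first)
    for v in sorted(S, key=weights.get, reverse=True):
        if is_acyclic(edges, set(vertices) - (set(S) - {v})):
            S.remove(v)
    return S  # fitness of the B cell = sum(weights[v] for v in S)
```

For example, on a triangle with a pendant vertex, the returned S is a single triangle vertex, since any pendant vertex placed in S is detected as redundant and removed.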
At each time step t, we maintain a population of B cells of size d that we label P^(t). Each permutation, i.e. B cell, is randomly generated during the initialization phase (t = 0) using a uniform distribution. A summary of the proposed algorithm is shown below. HYBRID-IA terminates its execution when the fixed termination criterion is satisfied. For our experiments, and for all outcomes presented in this paper, a maximum number of generations has been considered.
Algorithm 2 HYBRID-IA (d, dup, ρ, τB)
t ← 0;
P^(t) ← Initialize Population(d);
Compute Fitness(P^(t));
repeat
  P^(clo) ← Cloning(P^(t), dup);
  P^(hyp) ← Hypermutation(P^(clo), ρ);
  Compute Fitness(P^(hyp));
  (Pa^(t), Pa^(hyp)) ← Aging(P^(t), P^(hyp), τB);
  P^(select) ← (µ + λ)-Selection(Pa^(t), Pa^(hyp));
  P^(t+1) ← Local Search(P^(select));
  Compute Fitness(P^(t+1));
  t ← t + 1;
until (termination criterion is satisfied)
5
Cloning operator: this is the first immunological operator to be applied, and its aim is to reproduce the proliferation mechanism of the immune system. It simply copies each B cell dup times, producing an intermediate population P^(clo) of size d × dup. Once a B cell copy is created, i.e. the B cell is cloned, it is assigned an age that determines its lifespan, during which it can mature, evolve and improve: from the assigned age it will evolve in the population, producing robust offspring, until a maximum reachable age (a user-defined parameter), i.e. a prefixed maximum number of generations. It is important to highlight that the age assignment, together with the aging operator, plays a key role in the performances of HYBRID-IA [10, 21], since their combination has the purpose of reducing premature convergence and keeping an appropriate diversity between the B cells.
The cloning operator, coupled with the hypermutation operator, performs a local search around the cloned solutions; indeed the introduction of a high number of blind mutations produces individuals with higher fitness function values, which will afterwards be selected to form ever better progenies.
Inversely proportional hypermutation operator: this operator aims to explore the neighborhood of each clone, taking into account, however, the quality of the fitness function value of the clone it is working on. It acts on each element of the population P^(clo), performing M mutations on each clone, where the number M is determined by a law inversely proportional to the clone's fitness value: the higher the fitness function value, the lower the number of mutations performed on the B cell. Unlike many evolutionary algorithms, no mutation probability has been considered in HYBRID-IA.
Given a clone x, the number of mutations M that it will undergo is determined by the following potential mutation rate:

α = e^(−ρ f̂(x)),   (1)

where α represents the mutation rate, and f̂(x) the fitness function value normalized in [0, 1]. The number of mutations M is then given by

M = ⌊(α × ℓ) + 1⌋,   (2)

where ℓ is the length of the B cell. From equation 2 it is possible to note that at least one mutation occurs on each cloned B cell, and this happens for the solutions very close to the optimal one. Since any B cell is represented as a permutation of vertices that determines their removal order, the position occupied by each vertex in the permutation becomes crucial for reaching the global best solution. Consequently, the hypermutation operator adopted is the well-known swap mutation, through which the right permutation, i.e. the right removal order, is searched for: for any B cell x, chosen two positions i and j, the vertices xi and xj exchange their positions, becoming x′i = xj and x′j = xi, respectively.
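Equations (1) and (2), combined with the swap mutation, can be sketched as follows (a minimal illustration; the function and parameter names are ours, not from the paper):

```python
import math
import random

def hypermutate(clone, fitness_norm, rho=0.5, rng=random):
    """Inversely proportional hypermutation: apply M swap mutations to a
    clone, with M decreasing as the normalized fitness increases.

    clone        : list of vertices (a permutation, i.e. a B cell)
    fitness_norm : fitness function value normalized in [0, 1]
    rho          : mutation-rate parameter
    """
    alpha = math.exp(-rho * fitness_norm)          # Eq. (1)
    M = int(alpha * len(clone)) + 1                # Eq. (2): at least 1 mutation
    mutated = clone[:]
    for _ in range(M):
        i, j = rng.sample(range(len(mutated)), 2)  # swap mutation
        mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated
```

Note that the result is always a valid permutation, since swaps only reorder the vertices; the better the clone (fitness_norm close to 1), the fewer positions are disturbed.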
Aging operator: this operator has the main goal of helping the algorithm jump out of local optima. It simply eliminates the old B cells from the populations P^(t) and P^(hyp): once the age of a B cell exceeds the maximum age allowed (τB), it is removed from the population it belongs to, independently of its fitness value. As written above, the parameter τB represents the maximum number of generations for which every B cell is allowed to remain in the population. In this way, this operator is able to maintain a proper turnover between the B cells in the population, producing high diversity inside it, and this surely helps the algorithm to avoid premature convergence and, consequently, getting trapped in local optima.
An exception to the removal may be made for the current best solution, which is kept in the population even if its age is older than τB. This variant of the aging operator is called the elitist aging operator.
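The elitist variant can be sketched as follows; the dict-based cell layout (`"age"`, `"fitness"` keys, minimization) is a hypothetical representation of ours, not the paper's:

```python
def elitist_aging(population, tau_b):
    """Elitist aging sketch: drop every B cell older than tau_b,
    but always keep the current best cell regardless of its age."""
    best = min(population, key=lambda c: c["fitness"])  # minimization
    return [c for c in population if c["age"] <= tau_b or c is best]
```

For example, with tau_b = 20 a cell of age 25 is normally discarded, unless it happens to hold the best fitness in the population.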
(µ + λ)-Selection operator: once the aging operator has ended its work, the best d survivors from both populations Pa^(t) and Pa^(hyp) are selected to generate a temporary population P^(select) that, afterwards, will undergo the local search. For this process the classical (µ + λ)-selection operator has been considered, where, in our case, µ = d and λ = (d × dup). In a nutshell, this operator reduces the offspring B cell population (Pa^(hyp)) of size λ ≥ µ to a new population (P^(select)) of size µ = d. Since this selection mechanism identifies the d best elements among the offspring set and the old parent B cells, it guarantees monotonicity in the evolution dynamics. Nevertheless, it may happen that the number da of surviving B cells is less than the required population size d, i.e. da < d. This can easily happen depending on the chosen age assignment and the fixed value of τB. In this case, the selection mechanism randomly generates d − da new B cells.
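A sketch of this selection step (again with a hypothetical dict-based cell layout and a user-supplied `new_cell()` factory standing in for the random generation of fresh B cells):

```python
def mu_plus_lambda_selection(parents, offspring, d, new_cell):
    """(mu+lambda)-selection sketch: keep the d best cells from the union
    of parents and offspring (minimization); if fewer than d cells
    survived aging, top up with randomly generated ones via new_cell()."""
    pool = sorted(parents + offspring, key=lambda c: c["fitness"])
    selected = pool[:d]
    while len(selected) < d:
        selected.append(new_cell())
    return selected
```

Because parents compete with offspring, the best fitness in the population can never worsen from one generation to the next, which is the monotonicity property mentioned above.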
Local search: the main idea at the base of this local search is to repair, in a proper and deterministic way, the solutions produced by the stochastic mutation operator. Whilst the hypermutation operator determines the vertices to be removed in a blind and random way, i.e. choosing them independently of their weight, the local search tries to replace one or more of them with another node of lesser weight, thus improving the fitness function of that B cell. Given a solution x, all vertices in x are sorted in decreasing order with respect to their weights. Then, starting from the vertex u with the largest weight, the local search procedure iteratively works as follows: the vertex u is inserted again in V \ S, generating one or more cycles; via the classical DFS procedure, the produced cycles are computed, and one of these (if there is more than one) is taken into account, together with all the vertices involved in it. These last vertices are then sorted in increasing order with respect to their weight-degree ratio, and the vertex v with the smallest ratio is selected to break off the cycle. Of course, it may also happen that v breaks off more than one cycle. Anyway, if after the removal of v the subgraph becomes acyclic, and this removal improves the fitness function, then v is considered for the solution and replaces u; otherwise the process is repeated, taking into account a new cycle. At the end of the iterations, if the sum of the weights of the newly removed vertices (i.e. the fitness) is less than the previous one, then such vertices are inserted in the new solution in place of vertex u.
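The greedy criterion at the heart of this repair step, choosing the cycle vertex with the smallest weight-degree ratio, can be sketched in isolation (function name and arguments are ours):

```python
def pick_cycle_breaker(cycle_vertices, weights, degrees):
    """Greedy criterion of the local search sketch: among the vertices of
    a cycle, pick the one with the smallest weight/degree ratio, i.e. the
    cheapest vertex relative to how many edges (and hence potential
    cycles) its removal breaks."""
    return min(cycle_vertices, key=lambda v: weights[v] / degrees[v])
```

Intuitively, a low-weight, high-degree vertex is the most cost-effective cycle breaker, since it adds little to the fitness while potentially breaking several cycles at once.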
4 Results
In this section all the analyses, experiments, and comparisons of HYBRID-IA are presented in order to measure the goodness of the proposed approach. The main goal of these experiments is obviously to prove the reliability and competitiveness of HYBRID-IA with respect to the state-of-the-art, but also to test the efficiency and the computational impact provided by the designed local search. Thus, to properly evaluate the performances of the proposed algorithm, a set of benchmark instances proposed in [3] has been considered, which includes grid, toroidal, hypercube, and random graphs. Each instance considered, besides the topology, differs in the number of vertices, the number of edges, and the range of values for the vertex weights. In this way, as suggested in [2], it is possible to inspect the computational performances of HYBRID-IA with respect to the density of the graph and the weight ranges. Further, HYBRID-IA has also been compared with three different algorithms, which nowadays represent the state-of-the-art for the WFVS problem: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [9, 1]; and Memetic Algorithm (MA) [2]. All three algorithms, and their results, have been taken from [2].
Each experiment was performed by HYBRID-IA with population size d = 100, duplication parameter dup = 2, maximum reachable age τB = 20, mutation rate parameter ρ = 0.5, and maximum number of generations maxgen = 300. Further, each experiment reported in the tables below is the average over five instances with the same characteristics but different assignments of the vertex weights. The experimental protocol used, where not specified above, has been taken from [2]. It is important to highlight that in [2] a fixed maximum number of generations is not given; rather, the authors use a stopping criterion based on a formula (MaxIt) that depends on the density of the graph: the algorithm ends its evolution process when it reaches MaxIt consecutive iterations without improvements. Note that this threshold is reset every time an improvement occurs. From a simple calculation, it is possible to check that the 300 generations used in this work are almost always lower than or equal to the minimum number performed by the algorithms presented in [2], considering primarily the ease of fitness improvements in the first steps of the generations.
4.1 Dynamic Behaviour
Before presenting the experiments and comparisons performed, the dynamic behavior and the learning ability of HYBRID-IA are presented. An analysis of the computational impact and the convergence advantages provided by the use of the developed local search has been conducted as well.
The left plot of figure 1 shows the dynamic behavior of HYBRID-IA, displaying the curves of the (i) best fitness, (ii) average fitness of the population, and (iii) average fitness of the cloned hypermutated population over the generations. For this analysis the squared grid instance S SG9 has been considered, whose features and weight range are shown in table 1. From this plot it is possible to see how HYBRID-IA quickly descends, in very few generations, to acceptable solutions, and then oscillates between values close to each other. This oscillatory behavior, with main reference to the cloned hypermutated population curve, proves that HYBRID-IA has a good solution diversity, which is surely helpful for the exploration of the search space. These fluctuations are instead less pronounced in the curve of the average fitness of the population, and this is due to the use of the local search, which provides greater convergence stability. The last curve, that of the best fitness, indicates how the algorithm takes the best advantage of the generations, managing to still improve in the last generations. Since this is one of the few instances where HYBRID-IA did not reach the optimal solution, inspecting this curve we think that, by likely increasing the number of generations
(even just a little), HYBRID-IA will be able to find the global best.

Fig. 1. Convergence behavior of the average fitness function values of P^(t), P^(hyp), and the best B cell versus generations on the grid S SG9 instance (left plot). Average fitness function of P^(select) vs. average fitness function of P^(t) on the random S R23 instance (right plot).

The right plot of figure 1 shows the comparison between the average fitness values of the survivors' population (P^(select)) and the one produced by the local search. Although they show a very
similar parallel behavior, this plot proves the usefulness and the benefits produced by the local search, which is always able to improve the solutions generated by the stochastic immune operators. Indeed, the curve of the average fitness produced by the local search always lies below that of the survivors.
Besides the convergence analysis, it is also important to understand the learning ability of the algorithm, i.e. how much information it is able to gain during the whole evolutionary process, which affects the performances of any evolutionary algorithm in general. To analyze the learning process we have used the well-known entropy function, the Information Gain, which measures the quantity of information the system discovers during the learning phase [6, 7, 18]. Let B^t_m be the number of B cells that at time step t have fitness function value m; we define the candidate solutions distribution function f^(t)_m as the ratio between the number B^t_m and the total number of candidate solutions:

f^(t)_m = B^t_m / (Σ_{m=0}^{h} B^t_m) = B^t_m / d.   (3)

It follows that the information gain K(t, t0) and entropy E(t) can be defined as:

K(t, t0) = Σ_m f^(t)_m log(f^(t)_m / f^(t0)_m),   (4)

E(t) = − Σ_m f^(t)_m log f^(t)_m.   (5)
The gain is the amount of information the system has learned compared to the randomly generated initial population P^(t=0). Once the learning process begins, the information gain increases until it reaches a peak point.
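For illustration, the distribution of equation (3) and the quantities of equations (4) and (5) can be computed as follows (a sketch; the distributions are built from the multiset of fitness values found in each population, and the sum in the gain runs over the fitness values present at both time steps so that the logarithm stays well defined):

```python
import math
from collections import Counter

def information_gain(fitnesses_t, fitnesses_t0):
    """Information gain K(t, t0) between two populations (Eqs. 3-4)."""
    f_t = {m: c / len(fitnesses_t) for m, c in Counter(fitnesses_t).items()}
    f_t0 = {m: c / len(fitnesses_t0) for m, c in Counter(fitnesses_t0).items()}
    return sum(p * math.log(p / f_t0[m]) for m, p in f_t.items() if m in f_t0)

def entropy(fitnesses):
    """Entropy E(t) of a population's fitness distribution (Eq. 5)."""
    d = len(fitnesses)
    return -sum((c / d) * math.log(c / d) for c in Counter(fitnesses).values())
```

As a sanity check, the entropy of a population whose d cells all have distinct fitness values is log d, and the information gain of a population against itself is zero.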
Fig. 2. Learning during the evolutionary process. Information gain curves of Hybrid-IA with local search strategy (left plot) and without it (right plot). The inset plots, in both figures, display the relative standard deviations.
Figure 2 shows the information gain obtained with the use of the local search mechanism (left plot) and without it (right plot) when HYBRID-IA is applied to the random S R23 instance. In both plots we also report the standard deviation, shown as an inset plot. Inspecting both plots, it is possible to note that with the use of the local search the algorithm quickly gains high information, reaching the highest peak within the first generations, which exactly corresponds to the achievement of the optimal solution. It is important to highlight that the highest peak of the information gain corresponds to the lowest point of the standard deviation, and this confirms that HYBRID-IA reaches its maximum learning at the same time as it shows the least uncertainty. Due to the deterministic approach of the local search, once the highest peak is reached and the global optimal solution found, the algorithm begins to lose information and starts to show an oscillatory behavior. The same happens for the standard deviation. From the right plot, instead, without the use of the local search, HYBRID-IA gains information more slowly. Also for this version, the highest peak of information learned corresponds to the lowest uncertainty degree, and in this temporal window HYBRID-IA reaches the best solution. Unlike the curve produced with the use of the local search, once the optimal solution is found and the highest information peak reached, the algorithm starts to lose information as well, but its behavior seems to be more steady.
The two information gain curves are compared in figure 3 (left plot) over all generations. The relative inset plot is a zoom of the information gain behavior over the first 25 generations. From this plot it is clear how the developed local search approach helps the algorithm to learn more information already from the first generations, and quickly; the inset plot, in particular, highlights the large distance existing between the two curves. From an overall view over all 300 generations,
it is possible to see how the local search provides a more steady behavior even after reaching the global optimum.

Fig. 3. Information Gain and standard deviation.

The right plot of figure 3 instead shows the comparison
between the two standard deviations produced by HYBRID-IA with and without the use of the local search. Of course, as expected, the curve of the standard deviation using the local search is taller, as the algorithm gains a higher amount of information than the version without local search.
Finally, from the analysis of all these figures, from convergence speed to learning ability, it clearly emerges how the designed and developed local search helps HYBRID-IA to have an appropriate and correct convergence towards the global optimum, and a greater amount of information gained.
4.2 Experiments and Comparisons
In this subsection all the experimental results and the comparisons with the state-of-the-art are presented, in order to evaluate the efficiency, reliability and robustness of the HYBRID-IA performances. Many experiments have been performed, and the benchmark instances proposed in [2] have been considered for our tests and comparisons. In particular, HYBRID-IA has been compared with three different metaheuristics: ITS [3], XTS [9, 1], and MA [2]. As written in section 4, the same experimental protocol proposed in [2] has been used. Tables 1 and 2 show the results obtained by HYBRID-IA on different sets of instances: squared grid graphs, not squared grid graphs, and toroidal graphs in table 1; hypercube graphs and random graphs in table 2. In both tables are reported: the name of the instance in the 1st column; the number of vertices in the 2nd; the number of edges in the 3rd; the lower and upper bounds of the vertex weights in the 4th and 5th; and the optimal solution K* in the 6th. In the next columns are reported the results of HYBRID-IA (7th) and of the other three compared algorithms. It is important to clarify that for the grid graphs (squared and not squared) n and m indicate the number of rows and columns of the grid. The last column of both tables shows the difference between the result obtained by HYBRID-IA and the best result among the three compared algorithms. Further, in each line of the tables the best results among all are reported in boldface.
Table 1. HYBRID-IA versus MA, ITS and XTS on the sets of instances: squared grid, not squared grid, and toroidal graphs.

INSTANCE   n   m  Low  Up   K*       HYBRID-IA  ITS      XTS      MA       ±
SQUARED GRID GRAPHS
S SG1      5   5  10   25   114.0    114.0      114.0    114.0    114.0    =
S SG2      5   5  10   50   199.8    199.8      199.8    199.8    199.8    =
S SG3      5   5  10   75   312.4    312.4      312.6    312.4    312.4    =
S SG4      7   7  10   25   252.0    252.0      252.4    252.0    252.0    =
S SG5      7   7  10   50   437.6    437.6      439.8    437.6    437.6    =
S SG6      7   7  10   75   713.6    713.6      718.4    717.4    713.6    =
S SG7      9   9  10   25   442.2    444.2      444.2    442.8    442.2    +2.0
S SG8      9   9  10   50   752.2    752.2      754.6    753.0    752.2    =
S SG9      9   9  10   75   1134.4   1136.0     1138.0   1134.4   1134.4   +1.6
NOT SQUARED GRID GRAPHS
S NG1      8   3  10   25   96.8     96.8       96.8     96.8     96.8     =
S NG2      8   3  10   50   157.4    157.4      157.4    157.4    157.4    =
S NG3      8   3  10   75   220.0    220.0      220.0    220.0    220.0    =
S NG4      9   6  10   25   295.6    295.6      295.8    295.8    295.6    =
S NG5      9   6  10   50   488.6    488.6      489.4    488.6    488.6    =
S NG6      9   6  10   75   755.0    755.0      755.0    755.2    755.0    =
S NG7      12  6  10   25   398.2    398.2      399.8    398.8    398.4    −0.2
S NG8      12  6  10   50   671.8    671.8      673.4    671.8    671.8    =
S NG9      12  6  10   75   1015.2   1015.2     1017.4   1015.4   1015.2   =
TOROIDAL GRAPHS
S T1       5   5  10   25   101.4    101.4      101.4    101.4    101.4    =
S T2       5   5  10   50   124.4    124.4      124.4    124.4    124.4    =
S T3       5   5  10   75   157.8    157.8      157.8    158.8    157.8    =
S T4       7   7  10   25   195.4    195.4      197.4    195.4    195.4    =
S T5       7   7  10   50   234.2    234.2      234.2    234.2    234.2    =
S T6       7   7  10   75   269.6    269.6      269.6    269.6    269.6    =
S T7       9   9  10   25   309.6    310.2      310.4    309.8    309.8    +0.4
S T8       9   9  10   50   369.6    369.6      370.0    369.6    369.6    =
S T9       9   9  10   75   431.8    431.8      432.2    432.2    431.8    =

On the squared grid graph instances (top of table 1) it is possible to see how HYBRID-IA reaches the optimal solution in 7 instances out of 9, unlike MA, which instead reaches it in all instances. However, in those two instances (S SG7 and S SG9) the performances shown by HYBRID-IA are not the worst with respect to the other two algorithms. On the not squared grid graphs (middle of table 1), instead, HYBRID-IA reaches
the optimal solution on all instances (9 out of 9), outperforming all three algorithms on the instance S NG7, where none of the three compared algorithms is able to find the optimal solution. Also on the toroidal graphs (bottom of table 1), HYBRID-IA is able to reach the global optimal solution in 8 out of 9 instances; the exception is the instance S T7, where the solution found is 0.4 higher with respect to the best solution found by MA, and 0.6 with respect to the optimal one. Also in this instance, however, the worst performances are not by HYBRID-IA but rather by ITS. Overall, inspecting all the results in table 1, it is possible to see how HYBRID-IA shows performances competitive with the MA algorithm, except in three instances where it shows worse results, whilst it is able to outperform MA on the instance S NG7, where it reaches the optimal solution unlike MA. Analysing the results with respect to the other two algorithms, it is clear how HYBRID-IA outperforms ITS and XTS on all instances, with the only exceptions, with respect to XTS, of the instances S SG7 and S SG9.
Table 2 presents the comparisons on the hypercube and random graphs, whose problem dimensions are larger than those of the previous instances. Analysing the results obtained on the hypercube instances, HYBRID-IA clearly matches or outperforms all three algorithms on every instance (9 out of 9), and it is the only algorithm to reach the optimal solution on instance S H7, where the other three fail. On the random graphs, HYBRID-IA shows results comparable to MA except on three instances, where the difference is in any case small (1.0 on average); moreover, it obtains the best solutions on instances S R20 and S R23, where MA and the other two algorithms fail. Even on the instances of this set where HYBRID-IA loses to MA, it still always performs better than ITS and XTS. Overall, analysing all results in this table, HYBRID-IA is competitive with and comparable to MA, losing the comparison on three instances but winning on three others; compared with ITS and XTS, the proposed algorithm shows better performances on all instances, almost always finding better solutions than these two algorithms.
Finally, from the analysis of the convergence behaviour and learning ability, and from the comparisons performed, it is possible to assert that HYBRID-IA is comparable with the state-of-the-art for the MWFV S problem, showing reliable and robust performances and a good ability to learn information. This efficiency is due to the combination of the immunological operators, which introduce enough diversity into the search phase, and the designed local search, which refines the solutions via more appropriate, deterministic choices.
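The interplay described above, immunological operators for diversity plus a deterministic local search for refinement, follows the general clonal selection scheme. A minimal sketch of such a loop on bit strings is given below; all names, parameters, and the one-bit local search are illustrative choices for this sketch, not the paper's actual implementation:

```python
import random

def hybrid_ia_sketch(f, n_bits, pop_size=10, dup=2, tau_b=15,
                     max_age=20, generations=200, seed=42):
    """Toy clonal-selection loop: cloning, inversely proportional
    hypermutation, aging, and a greedy one-bit local search."""
    rng = random.Random(seed)
    # each individual is a (bit vector, age) pair
    pop = [([rng.randint(0, 1) for _ in range(n_bits)], 0)
           for _ in range(pop_size)]

    def hypermutate(bits, fitness, best, worst):
        # mutation strength shrinks as fitness approaches the current best
        rel = (fitness - best) / (worst - best + 1e-9)
        flips = max(1, int(rel * tau_b))
        child = bits[:]
        for _ in range(flips):
            i = rng.randrange(n_bits)
            child[i] ^= 1
        return child

    def local_search(bits):
        # deterministic refinement: flip the single best-improving bit
        base, best_bits = f(bits), bits
        for i in range(n_bits):
            cand = bits[:]
            cand[i] ^= 1
            if f(cand) < base:
                base, best_bits = f(cand), cand
        return best_bits

    for _ in range(generations):
        fits = [f(b) for b, _ in pop]
        lo, hi = min(fits), max(fits)
        # cloning + hypermutation: dup mutated copies per individual
        clones = [(hypermutate(b, fit, lo, hi), 0)
                  for (b, _), fit in zip(pop, fits) for _ in range(dup)]
        # aging: individuals older than max_age are discarded
        survivors = [(b, a + 1) for b, a in pop + clones if a + 1 <= max_age]
        # (mu + lambda) selection, then refine the current best
        survivors.sort(key=lambda ind: f(ind[0]))
        pop = survivors[:pop_size]
        pop[0] = (local_search(pop[0][0]), pop[0][1])
    return min(f(b) for b, _ in pop)
```

Run on a toy minimization objective such as `lambda b: sum(b)` (minimum 0 at the all-zeros string), the combination of aging-driven diversity and the deterministic local search drives the population to the optimum.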
5 Conclusion
In this paper we introduced a hybrid immunological algorithm, simply called HYBRID-IA, which takes advantage of the immunological operators (cloning, hypermutation and aging) to carefully explore the search space, introduce diversity, and avoid getting trapped in local optima, and of a local search whose aim is to refine the solutions found through appropriate choices based on a deterministic approach.
The algorithm was designed and developed for solving one of the most challenging combinatorial optimization problems, the Weighted Feedback Vertex Set, which consists, given an undirected graph, in finding a subset of vertices of minimum weight whose removal produces an acyclic graph. In order to evaluate the goodness and reliability of HYBRID-IA, many experiments were performed and the results obtained were compared with three different metaheuristics (ITS, XTS,
Table 2. HYBRID-IA versus MA, ITS and XTS on the hypercube and random graph instances.
INSTANCE n m Low Up K* HYBRID-IA ITS XTS MA ±
HYPERCUBE GRAPHS
S H1 16 32 10 25 72.2 72.2 72.2 72.2 72.2 =
S H2 16 32 10 50 93.8 93.8 93.8 93.8 93.8 =
S H3 16 32 10 75 97.4 97.4 97.4 97.4 97.4 =
S H4 32 80 10 25 170.0 170.0 170.0 170.0 170.0 =
S H5 32 80 10 50 240.6 240.6 241.0 240.6 240.6 =
S H6 32 80 10 75 277.6 277.6 277.6 277.6 277.6 =
S H7 64 192 10 25 353.4 353.4 354.6 353.8 353.8 āˆ’0.4
S H8 64 192 10 50 475.6 475.6 476.0 475.6 475.6 =
S H9 64 192 10 75 503.8 503.8 503.8 504.8 503.8 =
RANDOM GRAPHS
S R1 25 33 10 25 63.8 63.8 63.8 63.8 63.8 =
S R2 25 33 10 50 99.8 99.8 99.8 99.8 99.8 =
S R3 25 33 10 75 125.2 125.2 125.2 125.2 125.2 =
S R4 25 69 10 25 157.6 157.6 157.6 157.6 157.6 =
S R5 25 69 10 50 272.2 272.2 272.2 272.2 272.2 =
S R6 25 69 10 75 409.4 409.4 409.4 409.4 409.4 =
S R7 25 204 10 25 273.4 273.4 273.4 273.4 273.4 =
S R8 25 204 10 50 507.0 507.0 507.0 507.0 507.0 =
S R9 25 204 10 75 785.8 785.8 785.8 785.8 785.8 =
S R10 50 85 10 25 174.6 174.6 175.4 176.0 174.6 =
S R11 50 85 10 50 280.8 280.8 280.8 281.6 280.8 =
S R12 50 85 10 75 348.0 348.0 348.0 349.2 348.0 =
S R13 50 232 10 25 386.2 386.2 389.4 386.8 386.2 =
S R14 50 232 10 50 708.6 708.6 708.6 708.6 708.6 =
S R15 50 232 10 75 951.6 951.6 951.6 951.6 951.6 =
S R16 50 784 10 25 602.0 602.0 602.2 602.0 602.0 =
S R17 50 784 10 50 1171.8 1171.8 1172.2 1172.0 1171.8 =
S R18 50 784 10 75 1648.8 1648.8 1649.4 1648.8 1648.8 =
S R19 75 157 10 25 318.2 319.2 321.0 320.0 318.2 +1.0
S R20 75 157 10 50 521.6 522.2 526.2 525.0 522.6 āˆ’0.4
S R21 75 157 10 75 751.0 751.8 757.2 754.2 751.0 +0.8
S R22 75 490 10 25 635.8 637.0 638.6 635.8 635.8 +1.2
S R23 75 490 10 50 1226.6 1226.6 1230.6 1228.6 1227.6 āˆ’1.0
S R24 75 490 10 75 1789.4 1789.4 1793.6 1789.4 1789.4 =
S R25 75 1739 10 25 889.8 889.8 891.0 889.8 889.8 =
S R26 75 1739 10 50 1664.2 1664.2 1664.8 1664.2 1664.2 =
S R27 75 1739 10 75 2452.2 2452.2 2452.8 2452.2 2452.2 =
and MA), which nowadays represent the state-of-the-art. For these experiments a set of benchmark graph instances was considered, based on different topologies: grid (squared and non-squared), toroidal, hypercube, and random graphs.
An analysis of the convergence speed and learning ability of HYBRID-IA was performed in order to evaluate its efficiency as well as the computational impact and reliability provided by the developed local search. From this analysis it clearly emerges that the local search helps the algorithm to converge correctly towards the global optima, and that a high amount of information is gained during the learning process.
Finally, from the results obtained and the comparisons carried out, it is possible to assert that HYBRID-IA is competitive and comparable with the WFV S state-of-the-art, showing efficient and robust performances. In almost all tested instances it was able to reach the optimal solutions, and it is important to point out that HYBRID-IA was the only algorithm to reach the global optimum on 4 instances, unlike the three compared algorithms.
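As a concrete illustration of the problem tackled here, the feasibility and objective evaluation of a candidate solution, checking that removing a vertex subset leaves an acyclic graph and summing its weight, can be sketched as follows. The function and its names are illustrative, not the paper's implementation; acyclicity of the residual undirected graph is tested with union-find:

```python
def weighted_fvs_value(n, edges, removed, weights):
    """Return the total weight of 'removed' if deleting those vertices
    leaves an acyclic (forest) residual graph, otherwise None."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in removed or v in removed:
            continue  # edge disappears together with its endpoint
        ru, rv = find(u), find(v)
        if ru == rv:
            return None  # this edge closes a cycle in the residual graph
        parent[ru] = rv
    return sum(weights[v] for v in removed)
```

For example, on a triangle 0-1-2 with a pendant vertex 3 and weights [5, 2, 3, 1], removing vertex 1 breaks the only cycle, so the objective value is 2, while removing nothing leaves the cycle intact.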
Ā 
Clonal Selection: an Immunological Algorithm for Global Optimization over Con...
Clonal Selection: an Immunological Algorithm for Global Optimization over Con...Clonal Selection: an Immunological Algorithm for Global Optimization over Con...
Clonal Selection: an Immunological Algorithm for Global Optimization over Con...
Ā 
CFP: Optimiation on Complex Systems
CFP: Optimiation on Complex SystemsCFP: Optimiation on Complex Systems
CFP: Optimiation on Complex Systems
Ā 
Joco pavone
Joco pavoneJoco pavone
Joco pavone
Ā 
Immunological Multiple Sequence Alignments
Immunological Multiple Sequence AlignmentsImmunological Multiple Sequence Alignments
Immunological Multiple Sequence Alignments
Ā 
An Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice ModelsAn Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice Models
Ā 
Robust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global OptimizationRobust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global Optimization
Ā 
An Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection AlgorithmsAn Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection Algorithms
Ā 

Recently uploaded

CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
Ā 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
Ā 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
Ā 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvLewisJB
Ā 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
Ā 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
Ā 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
Ā 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
Ā 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
Ā 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
Ā 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
Ā 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
Ā 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
Ā 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixingviprabot1
Ā 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
Ā 

Recently uploaded (20)

POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
Ā 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
Ā 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
Ā 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
Ā 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
Ā 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
Ā 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
Ā 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
Ā 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Ā 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECH
Ā 
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Serviceyoung call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
Ā 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
Ā 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
Ā 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
Ā 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
Ā 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Ā 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Ā 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Ā 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixing
Ā 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
Ā 

A Hybrid Immunological Search for the Weighted Feedback Vertex Set Problem

A Hybrid Immunological Search for the Weighted Feedback Vertex Set Problem

Vincenzo Cutello, Maria Oliva, Mario F. Pavone, Rocco A. Scollo
Department of Mathematics and Computer Science, University of Catania
V.le A. Doria 6, I-95125 Catania, Italy
cutello@unict.it, mpavone@dmi.unict.it

Abstract. In this paper we present a hybrid immunological inspired algorithm (HYBRID-IA) for solving the Minimum Weighted Feedback Vertex Set (MWFVS) problem. MWFVS is one of the most interesting and challenging combinatorial optimization problems, and it finds application in many fields and in many real-life tasks. The proposed algorithm is inspired by the clonal selection principle, and therefore it takes advantage of the main strengths of the operators of (i) cloning; (ii) hypermutation; and (iii) aging. Along with these operators, the algorithm uses a local search procedure, based on a deterministic approach, whose purpose is to refine the solutions found so far. In order to evaluate the efficiency and robustness of HYBRID-IA, several experiments were performed on different instances, and on each instance it was compared to three different algorithms: (1) a memetic algorithm based on a genetic algorithm (MA); (2) a tabu search metaheuristic (XTS); and (3) an iterative tabu search (ITS). The obtained results prove the efficiency and reliability of HYBRID-IA on all instances in terms of the best solutions found, with performances comparable to those of all compared algorithms, which nowadays represent the state-of-the-art for the MWFVS problem.

Keywords: Immunological algorithms, immune-inspired computation, metaheuristics, combinatorial optimization, feedback vertex set, weighted feedback vertex set.
1 Introduction

Immunological inspired computation represents an established and rich family of algorithms inspired by the dynamics and the information processing mechanisms that the Immune System uses to detect, recognise, learn and remember foreign entities to the living organism [11]. Thanks to these interesting features, immunological inspired algorithms represent successful computational methodologies in search and optimization tasks [4, 20]. Although several immune theories exist at the basis of immunological inspired computation, which would suggest its natural application to anomaly detection and classification tasks, the one that has proven to be quite effective and robust is based on the clonal selection principle. Algorithms based on the clonal selection principle work on a population of immunological cells, better known as antibodies, that proliferate, i.e. clone themselves – the number of copies depends on the quality of their
foreign entities detection – and undergo a mutation process, usually at a high rate. This process, biologically called Affinity Maturation, makes these algorithms very suitable for function and combinatorial optimization problems. This statement is supported by several experimental applications [7, 6, 8, 18], as well as by theoretical analyses that prove their efficiency with respect to several Randomized Search Heuristics [25, 15, 14, 23]. In light of the above, we have designed and developed an immune inspired hypermutation algorithm, based precisely on the clonal selection principle, in order to solve a classical combinatorial optimization problem, namely the Weighted Feedback Vertex Set (WFVS).
In addition, we take into account what clearly emerges from the evolutionary computation literature: in order to obtain good performances and solve a hard combinatorial optimization problem, it is necessary to combine metaheuristics with other classical optimization techniques, such as local search, dynamic programming, exact methods, etc. For this reason, we have designed a hybrid immunological algorithm that includes some deterministic criteria, in order to refine the solutions found and to help the convergence of the algorithm towards the global optimal solution.
The hybrid immunological algorithm proposed in this paper, hereafter simply called HYBRID-IA, takes advantage of the immunological operators of cloning, hypermutation and aging to carefully explore the search space and properly exploit the information learned, but it also makes use of local search for improving the quality of the solutions: it deterministically tries to remove the vertex that breaks one or more cycles in the graph and has the smallest weight-degree ratio.
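As a concrete illustration of this greedy criterion, the sketch below (with hypothetical helper names and data layout, not code from the paper) picks, among the vertices lying on a cycle, the one with the smallest weight-degree ratio:

```python
def min_weight_degree_vertex(cycle_vertices, weight, degree):
    """Return the vertex v on the cycle minimizing weight[v] / degree[v]."""
    return min(cycle_vertices, key=lambda v: weight[v] / degree[v])

# Toy example: vertex 2 has ratio 3.0 / 3 = 1.0, the smallest of the three.
print(min_weight_degree_vertex([1, 2, 3],
                               weight={1: 10.0, 2: 3.0, 3: 8.0},
                               degree={1: 2, 2: 3, 3: 2}))  # -> 2
```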
The many experiments performed have proved the fruitful impact of such a greedy idea, since it was almost always able to improve the solutions, leading HYBRID-IA towards the global optimal solution. For evaluating the reliability and efficiency of HYBRID-IA, many experiments on several different instances have been performed, taken from [2] and following the same experimental protocol proposed therein. HYBRID-IA was also compared with three other metaheuristics: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [9, 1]; and a Memetic Algorithm (MA) [2]. In all comparisons performed and analysed, HYBRID-IA proved to be always competitive and comparable with the best of the three metaheuristics. Furthermore, it is worth emphasizing that HYBRID-IA outperforms the three algorithms on some instances, since, unlike them, it is able to find the global best solution.

2 The Weighted Feedback Vertex Set Problem

Given a directed or undirected graph G = (V, E), a feedback vertex set of G is a subset S ⊂ V of vertices whose removal makes G acyclic. More formally, if S ⊂ V we can define the subgraph G[S] = (V \ S, E_{V \ S}), where E_{V \ S} = {(u, v) ∈ E : u, v ∈ V \ S}. If G[S] is acyclic, then S is a feedback vertex set. The Minimum Feedback Vertex Set Problem (MFVS) is the problem of finding a feedback vertex set of minimal cardinality. If S is a feedback vertex set, we say that a vertex v ∈ S is redundant if the induced subgraph G[S \ {v}] = ((V \ S) ∪ {v}, E_{(V \ S) ∪ {v}}, w) is still an acyclic graph. It follows that S is a minimal FVS if it doesn't contain any redundant vertices. It is well known that the
decisional version of the MFVS is an NP-complete problem for general graphs [12, 17] and even for bipartite graphs [24]. If we associate a positive weight w(v) to each vertex v ∈ V, then the weight of any subset S of V is the sum of the weights of its vertices, i.e. Σ_{v ∈ S} w(v). The Minimum Weighted Feedback Vertex Set Problem (MWFVS) is the problem of finding a feedback vertex set of minimal weight. The MWFVS problem is an interesting and challenging task, as it finds application in many fields and in many real-life tasks, such as in the context of (i) operating systems, for preventing or removing deadlocks [22]; (ii) combinatorial circuit design [16]; (iii) information security [13]; and (iv) the study of monopolies in synchronous distributed systems [19]. Many heuristics and metaheuristics have been developed for the simple MFVS problem, whilst very few, instead, have been developed for the weighted variant of the problem.

3 HYBRID-IA: a Hybrid Immunological Algorithm

HYBRID-IA is an immunological algorithm inspired by the clonal selection principle, which represents an excellent example of a bottom-up intelligent strategy where adaptation operates at the local level, whilst a complex and useful behaviour emerges at the global level. The basic idea of this metaphor is how the cells adapt for binding and eliminating foreign entities, better known as Antigens (Ag). The key features of HYBRID-IA are the cloning, hypermutation and aging operators, which, respectively, generate new populations centered on higher affinity values; explore the neighborhood of each solution in the search space; and eliminate the old cells and the less promising ones, in order to keep the algorithm from getting trapped in local optima.
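Recalling the Section 2 definitions, checking that a candidate set S is a feedback vertex set, and computing its weight, can be sketched as follows. This is an illustrative union-find based check under our own naming conventions, not part of HYBRID-IA itself:

```python
def check_fvs(n, edges, weights, S):
    """Return (is_fvs, weight_of_S) for an undirected graph on vertices
    0..n-1 given as an edge list: S is a feedback vertex set iff the graph
    induced by V \\ S contains no cycle, i.e. it is a forest."""
    S = set(S)
    parent = list(range(n))                 # union-find over V \ S

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in S or v in S:
            continue                        # edge removed together with S
        ru, rv = find(u), find(v)
        if ru == rv:                        # edge closes a cycle in G[V \ S]
            return False, None
        parent[ru] = rv
    return True, sum(weights[v] for v in S)

# Triangle 0-1-2 plus pendant edge 2-3: removing vertex 0 breaks the cycle.
print(check_fvs(4, [(0, 1), (1, 2), (0, 2), (2, 3)], [5, 2, 4, 1], [0]))
# -> (True, 5)
```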
To these immunological operators a local search strategy is also added: by restoring one or more cycles with the addition of a previously removed vertex, it tries again to break the cycle, or cycles, by conveniently removing a new vertex that improves the fitness of the solution. Usually the best choice is to select the node with the minimal weight-degree ratio. The existence of one or more cycles is computed via the well-known Depth First Search (DFS) procedure [5]. The aim of the local search is then to repair the random choices made by the stochastic operators via more locally appropriate choices.
HYBRID-IA is based on two main entities: the antigen, which represents the problem to tackle, and the B cell receptor, which instead represents a point (configuration) of the search space. Each B cell, in particular, represents a permutation of vertices that determines the order in which the nodes are to be removed: starting from the first position of the permutation, the relative node is selected and removed from the graph, and this is iteratively repeated, following the order in the permutation, until an acyclic graph is obtained or, in general, there are no more vertices to be removed. Once an acyclic graph is obtained, all possible redundant vertices are restored in V. Now, all removed vertices represent the set S, i.e. a solution to the problem. Such a process is outlined in a simple way in Algorithm 1. Some clarifications about Algorithm 1:
– the input to the algorithm is an undirected graph G = (V, E);
– given any X ⊆ V, by d_X(v) we denote the degree of vertex v in the graph induced by X, i.e. the graph G(X) obtained from G by removing all the vertices not in X.
– if, given any X ⊆ V, a vertex v has degree d_X(v) ≤ 1, such a vertex cannot be involved in any cycle. So, when searching for a feedback vertex set, any such vertex can simply be ignored, or removed from the graph;
– the algorithm deterministically associates a subset S of the graph vertices to any given permutation of the vertices in V. The sum of the weights of the vertices in S will be the fitness value associated to the permutation.

Algorithm 1 Create Solution
  X ← V
  P ← permutation(V)
  S ← ∅
  for u ∈ P do
    if u ∈ X and G(X) not acyclic then
      X ← X \ {u}
      S ← S ∪ {u}
      while ∃v ∈ X : d_X(v) < 2 do
        X ← X \ {v}
      end while
    end if
  end for
  Remove all redundant vertices from S
  return S

At each time step t we maintain a population of B cells of size d that we label P(t). Each permutation, i.e. B cell, is randomly generated during the initialization phase (t = 0) using a uniform distribution. A summary of the proposed algorithm is shown below. HYBRID-IA terminates its execution when the fixed termination criterion is satisfied. For our experiments, and for all outcomes presented in this paper, a maximum number of generations has been considered.

Algorithm 2 HYBRID-IA (d, dup, ρ, τB)
  t ← 0
  P(t) ← Initialize_Population(d)
  Compute_Fitness(P(t))
  repeat
    P(clo) ← Cloning(P(t), dup)
    P(hyp) ← Hypermutation(P(clo), ρ)
    Compute_Fitness(P(hyp))
    (Pa(t), Pa(hyp)) ← Aging(P(t), P(hyp), τB)
    P(select) ← (µ + λ)-Selection(Pa(t), Pa(hyp))
    P(t+1) ← Local_Search(P(select))
    Compute_Fitness(P(t+1))
    t ← t + 1
  until (termination criterion is satisfied)
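A minimal Python sketch of the Create Solution procedure of Algorithm 1 (the permutation-to-solution decoding) might look as follows; the DFS-style acyclicity test and all helper names are our own illustrative assumptions, not code from the paper:

```python
import random

def induced_is_acyclic(X, adj):
    """DFS cycle test on the subgraph of a simple undirected graph induced
    by the vertex set X (adj maps each vertex to its neighbor list)."""
    X, seen = set(X), set()
    for s in X:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, None)]
        while stack:
            u, par = stack.pop()
            for v in adj[u]:
                if v not in X or v == par:
                    continue
                if v in seen:            # reaching a visited vertex -> cycle
                    return False
                seen.add(v)
                stack.append((v, u))
    return True

def create_solution(V, adj, weights, rng=random):
    """Decode a random permutation into a feedback vertex set S."""
    X = set(V)
    S = []
    for u in rng.sample(list(V), len(V)):    # random permutation of V
        if u in X and not induced_is_acyclic(X, adj):
            X.discard(u)
            S.append(u)
            # vertices with degree < 2 in G(X) cannot lie on any cycle
            pruned = True
            while pruned:
                pruned = False
                for v in list(X):
                    if sum(1 for w in adj[v] if w in X) < 2:
                        X.discard(v)
                        pruned = True
    # remove redundant vertices: v is redundant if S \ {v} is still a FVS
    for v in list(S):
        if induced_is_acyclic(set(V) - (set(S) - {v}), adj):
            S.remove(v)
    return S, sum(weights[v] for v in S)

# Triangle {0,1,2} with pendant vertex 3: any run removes exactly one
# triangle vertex, so the fitness is always 1 with unit weights.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
S, fit = create_solution([0, 1, 2, 3], adj, {0: 1, 1: 1, 2: 1, 3: 1})
print(len(S), fit)  # -> 1 1
```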
Cloning operator: it is the first immunological operator to be applied, and it has the aim of reproducing the proliferation mechanism of the immune system. Indeed, it simply copies each B cell dup times, producing an intermediate population P(clo) of size d × dup. Once a B cell copy is created, i.e. the B cell is cloned, it is assigned an age that determines its lifespan, during which it can mature, evolve and improve: from the assigned age it will evolve in the population, producing robust offspring, until a maximum reachable age (a user-defined parameter), i.e. a prefixed maximum number of generations. It is important to highlight that the age assignment, together with the aging operator, plays a key role in the performances of HYBRID-IA [10, 21], since their combination has the purpose of reducing premature convergence and keeping an appropriate diversity between the B cells. The cloning operator, coupled with the hypermutation operator, performs a local search around the cloned solutions; indeed the introduction of a high number of blind mutations produces individuals with higher fitness function values, which will then be selected to form ever better progenies.

Inversely proportional hypermutation operator: this operator has the aim of exploring the neighborhood of each clone, taking into account, however, the quality of the fitness function of the clone it is working on. It acts on each element of the population P(clo), performing M mutations on each clone, where the number M is determined by a law inversely proportional to the clone's fitness value: the higher the fitness function value, the lower the number of mutations performed on the B cell. Unlike many evolutionary algorithms, no mutation probability has been considered in HYBRID-IA.
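The inversely proportional law just described can be sketched as follows. The normalization of the fitness into [0, 1] is assumed to have been done beforehand (with higher meaning better), and the helper names are illustrative:

```python
import math
import random

def num_mutations(f_norm, rho, length):
    """M = floor(alpha * length) + 1, with alpha = exp(-rho * f_norm):
    the higher the normalized fitness, the fewer the mutations."""
    alpha = math.exp(-rho * f_norm)          # potential mutation rate
    return int(alpha * length) + 1           # number of swap mutations

def swap_mutate(perm, f_norm, rho, rng=random):
    """Apply M random swap mutations to a copy of the permutation."""
    p = list(perm)
    for _ in range(num_mutations(f_norm, rho, len(p))):
        i, j = rng.randrange(len(p)), rng.randrange(len(p))
        p[i], p[j] = p[j], p[i]
    return p

# With rho = 0.5 and a permutation of length 10:
print(num_mutations(1.0, 0.5, 10))  # best clone: floor(e^-0.5 * 10) + 1 = 7
print(num_mutations(0.0, 0.5, 10))  # worst clone: floor(1.0 * 10) + 1 = 11
```

Note that a swap mutation always returns a valid permutation, so no repair step is needed after mutating.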
Given a clone x, the number of mutations M that it will undergo is determined by the following potential mutation:

α = e^(−ρ f̂(x)),   (1)

where α represents the mutation rate, and f̂(x) the fitness function value normalized in [0, 1]. The number of mutations M is then given by

M = ⌊(α × ℓ) + 1⌋,   (2)

where ℓ is the length of the B cell. From equation 2 it is possible to note that at least one mutation occurs on each cloned B cell, and this happens for the solutions very close to the optimal one. Since any B cell is represented as a permutation of vertices that determines their removal order, the position occupied by each vertex in the permutation becomes crucial for reaching the global best solution. Consequently, the hypermutation operator adopted is the well-known Swap Mutation, through which the right permutation, i.e. the right removal order, is searched for: for any B cell x, two positions i and j are chosen, and the vertices x_i and x_j exchange positions, becoming x'_i = x_j and x'_j = x_i, respectively.

Aging operator: this operator has the main goal of helping the algorithm escape from local optima. It simply eliminates the old B cells from the populations P(t) and P(hyp): once the age of a B cell exceeds the maximum age allowed (τB), it will be removed from the population it belongs to, independently of its fitness value. As
written above, the parameter τB represents the maximum number of generations for which a B cell is allowed to remain in the population. In this way, this operator is able to maintain a proper turnover between the B cells in the population, producing high diversity inside it, and this surely helps the algorithm to avoid premature convergence and, consequently, to avoid getting trapped in local optima. An exception to the removal may be made for the current best solution, which is kept in the population even if its age exceeds τB. This variant of the aging operator is called the elitist aging operator.

(µ + λ)-Selection operator: once the aging operator has finished its work, the best d survivors from both populations Pa(t) and Pa(hyp) are selected for generating a temporary population P(select) that, afterwards, will undergo the local search. For this process the classical (µ + λ)-Selection operator has been considered, where, in our case, µ = d and λ = (d × dup). In a nutshell, this operator reduces the offspring B cell population (Pa(hyp)) of size λ ≥ µ to a new population (P(select)) of size µ = d. Since this selection mechanism identifies the d best elements between the offspring set and the old parent B cells, it guarantees monotonicity in the evolution dynamics. Nevertheless, it may happen that the surviving B cells are fewer than the required population size (d), i.e. da < d. This can easily happen depending on the chosen age assignment and the fixed value of τB. In this case, the selection mechanism randomly generates d − da new B cells.

Local search: the main idea at the base of this local search is to repair in a proper and deterministic way the solutions produced by the stochastic mutation operator. Whilst the hypermutation operator determines the vertices to be removed in a blind and random way, i.e.
choosing them independently of their weight, the local search tries to replace one or more of them with another node of smaller weight, thus improving the fitness function of that B cell. Given a solution x, all vertices in x are sorted in decreasing order with respect to their weights. Then, starting from the vertex u with the largest weight, the local search procedure iteratively works as follows: the vertex u is inserted again in V \ S, generating one or more cycles; via the classical DFS procedure, the produced cycles are computed, and one of them (if there are more) is taken into account, together with all the vertices involved in it. These last vertices are then sorted in increasing order with respect to their weight-degree ratio, and the vertex v with the smallest ratio is selected to break the cycle. Of course, it may also happen that v breaks more than one cycle. In any case, if after the removal of v the subgraph becomes acyclic, and this removal improves the fitness function, then v is considered for the solution and it replaces u; otherwise the process is repeated, taking into account a new cycle. At the end of the iterations, if the sum of the weights of the newly removed vertices (i.e. the fitness) is less than the previous one, then such vertices are inserted in the new solution in place of vertex u.

4 Results

In this section all analyses, experiments, and comparisons of HYBRID-IA are presented in order to measure the goodness of the proposed approach. The main goal of these experiments is obviously to prove the reliability and competitiveness of HYBRID-IA with respect to the state-of-the-art, but also to test the efficiency and the computational impact provided by the designed local search. Thus, for properly evaluating the performances of the proposed algorithm, a set of benchmark instances proposed in [3] has been considered, which includes grid, toroidal, hypercube, and random graphs. Each instance considered, besides the topology, differs in the number of vertices, the number of edges, and the range of values for the vertex weights. In this way, as suggested in [2], it is possible to inspect the computational performances of HYBRID-IA with respect to the density of the graph and the weight ranges. Further, HYBRID-IA has also been compared with three different algorithms, which nowadays represent the state-of-the-art for the WFVS problem: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [9, 1]; and a Memetic Algorithm (MA) [2]. All three algorithms, and their results, have been taken from [2].
Each experiment was performed by HYBRID-IA with population size d = 100, duplication parameter dup = 2, maximum reachable age τB = 20, mutation rate parameter ρ = 0.5, and maximum number of generations maxgen = 300. Further, each experiment reported in the tables below is the average over five instances with the same characteristics but different assignments of the vertex weights. The experimental protocol used, where not specified above, has been taken from [2]. It is important to highlight that in [2] a fixed maximum number of generations is not given; instead, the authors use a stop criterion based on a formula (MaxIt) that depends on the density of the graph: the algorithm ends its evolution process when it reaches MaxIt consecutive iterations without improvements. Note that this threshold is reset every time an improvement occurs.
From a simple calculation, it is possible to check that the 300 generations used in this work are almost always lower than or equal to the minimum number performed by the algorithms presented in [2], considering primarily the ease of fitness improvements in the first steps of the generations.

4.1 Dynamic Behaviour

Before presenting the experiments and comparisons performed, the dynamic behavior and the learning ability of HYBRID-IA are presented. An analysis of the computational impact and convergence advantages provided by the use of the developed local search has been conducted as well.
The left plot of figure 1 shows the dynamic behavior of HYBRID-IA, displaying the curves of the (i) best fitness, (ii) average fitness of the population, and (iii) average fitness of the cloned hypermutated population over the generations. For this analysis the squared grid instance S-SG9 has been considered, whose features and weight range are shown in table 1. From this plot it is possible to see how HYBRID-IA quickly descends, within very few generations, to acceptable solutions, and then oscillates between values close to each other. This oscillatory behavior, with main reference to the cloned hypermutated population curve, proves that HYBRID-IA has a good solution diversity, which is surely helpful for the exploration of the search space. These fluctuations are instead less pronounced in the curve of the average fitness of the population, and this is due to the use of the local search, which provides a greater convergence stability. The last curve, the one of the best fitness, indicates how the algorithm
takes full advantage of the generations, managing to improve even in the last ones. Since this is one of the few instances where HYBRID-IA did not reach the optimal solution, inspecting this curve we think that by increasing the number of generations (even just slightly) HYBRID-IA would likely be able to find the global best.

Fig. 1. Convergence behavior of the average fitness function values of P(t), P(hyp), and the best B cell versus generations on the grid S SG9 instance (left plot). Average fitness function of P(select) vs. average fitness function of P(t) on the random S R23 instance (right plot).

The right plot of figure 1 shows the comparison between the average fitness values of the survivors' population (P(select)) and the one produced by the local search. Although they show a very similar, parallel behavior, this plot proves the usefulness and benefits produced by the local search, which is always able to improve the solutions generated by the stochastic immune operators. Indeed, the curve of the average fitness produced by the local search always lies below that of the survivors. Besides the convergence analysis, it is also important to understand the learning ability of the algorithm, i.e. how much information it is able to gain during the whole evolutionary process, which affects the performance of any evolutionary algorithm in general.
For analyzing the learning process we have used the well-known entropy function, the Information Gain, which measures the quantity of information the system discovers during the learning phase [6, 7, 18]. Let $B_m^t$ be the number of B cells that at timestep $t$ have fitness function value $m$; we define the candidate solutions distribution function $f_m^{(t)}$ as the ratio between the number $B_m^t$ and the total number of candidate solutions:

    f_m^{(t)} = \frac{B_m^t}{\sum_{m=0}^{h} B_m^t} = \frac{B_m^t}{d}.    (3)

It follows that the information gain $K(t, t_0)$ and the entropy $E(t)$ can be defined as:

    K(t, t_0) = \sum_m f_m^{(t)} \log \left( f_m^{(t)} / f_m^{(t_0)} \right),    (4)

    E(t) = - \sum_m f_m^{(t)} \log f_m^{(t)}.    (5)
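The quantities in Eqs. (3), (4) and (5) can be computed directly from the multiset of fitness values of the population. The following is a minimal sketch with our own function names; fitness values are assumed to be binned into discrete levels m, and levels absent at time t0 are skipped as a simplifying assumption.

```python
from collections import Counter
from math import log

def distribution(fitnesses):
    """f_m^(t) of Eq. (3): fraction of B cells having fitness value m."""
    d = len(fitnesses)
    return {m: b / d for m, b in Counter(fitnesses).items()}

def information_gain(fit_t, fit_t0):
    """K(t, t0) of Eq. (4); values absent at time t0 are skipped to
    avoid a division by zero (a simplifying assumption of this sketch)."""
    f_t, f_t0 = distribution(fit_t), distribution(fit_t0)
    return sum(p * log(p / f_t0[m]) for m, p in f_t.items() if m in f_t0)

def entropy(fitnesses):
    """E(t) of Eq. (5), with the conventional minus sign."""
    return -sum(p * log(p) for p in distribution(fitnesses).values())
```

In this form `fit_t0` would typically hold the fitness values of the randomly generated initial population, and `fit_t` those of the population at the current generation.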
The gain is the amount of information the system has learned compared to the randomly generated initial population P(t=0). Once the learning process begins, the information gain increases until it reaches a peak point.

Fig. 2. Learning during the evolutionary process. Information gain curves of HYBRID-IA with the local search strategy (left plot) and without it (right plot). The inset plots, in both figures, display the relative standard deviations.

Figure 2 shows the information gain obtained with the use of the local search mechanism (left plot) and without it (right plot) when HYBRID-IA is applied to the random S R23 instance. In both plots we also report the standard deviation, shown as an inset plot. Inspecting both plots, it is possible to note that with the use of the local search the algorithm quickly gains information until it reaches the highest peak within the first generations, which corresponds exactly to the achievement of the optimal solution. It is important to highlight that the highest peak of the information gain corresponds to the lowest point of the standard deviation, which confirms that HYBRID-IA reaches its maximum learning at the same time as it shows the least uncertainty. Due to the deterministic approach of the local search, once the highest peak is reached and the global optimal solution found, the algorithm begins to lose information and starts to show an oscillatory behavior. The same happens for the standard deviation. The right plot shows instead that, without the use of the local search, HYBRID-IA gains information more slowly.
Also for this version, the highest peak of information learned corresponds to the lowest degree of uncertainty, and in this temporal window HYBRID-IA reaches the best solution. As with the curve produced with the use of the local search, once the optimal solution is found and the highest information peak reached, the algorithm starts to lose information as well, but here its behavior seems to be more steady-state. The two curves of the information gain are compared in figure 3 (left plot) over all generations. The relative inset plot is a zoom of the information gain behavior over the first 25 generations. From this plot it is clear how the developed local search approach helps the algorithm to learn more information, and quickly, already from the first generations; the inset plot, in particular, highlights the considerable distance between the two curves. From an overall view over all 300 generations, it is possible to see how the local search gives a more steady-state behavior even after reaching
the global optimal.

Fig. 3. Information Gain and standard deviation.

The right plot of figure 3 shows instead the comparison between the two standard deviations produced by HYBRID-IA with and without the use of the local search. Of course, as expected, the curve of the standard deviation using the local search reaches higher values, since the algorithm gains a larger amount of information than the version without local search. Finally, from the analysis of all these figures (from convergence speed to learning ability) it clearly emerges how the designed and developed local search helps HYBRID-IA to achieve an appropriate and correct convergence towards the global optimum, together with a greater amount of information gained.

4.2 Experiments and Comparisons

In this subsection all the experimental results and the comparisons with the state-of-the-art are presented, in order to evaluate the efficiency, reliability and robustness of HYBRID-IA. Many experiments have been performed, and the benchmark instances proposed in [2] have been considered for our tests and comparisons. In particular, HYBRID-IA has been compared with three different metaheuristics: ITS [3], XTS [9, 1], and MA [2]. As written in section 4, the same experimental protocol proposed in [2] has been used. Tables 1 and 2 show the results obtained by HYBRID-IA on different sets of instances: squared grid, not squared grid, and toroidal graphs in table 1; hypercube and random graphs in table 2. Both tables report: the name of the instance in the 1st column; the number of vertices in the 2nd; the number of edges in the 3rd; the lower and upper bounds of the vertex weights in the 4th and 5th; and the optimal solution K* in the 6th.
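The four instance topologies used in the benchmark are standard graph families and can be generated in a few lines. The following sketch uses our own function names (not the generator of [3]): it builds edge sets for grid, toroidal, hypercube, and G(n, m) random graphs, with uniform integer vertex weights in [Low, Up].

```python
import random
from itertools import combinations

def grid_edges(n, m, toroidal=False):
    """Edges of an n x m grid graph; with wrap-around rows and
    columns the same construction yields the toroidal graph."""
    edges = set()
    for i in range(n):
        for j in range(m):
            if i + 1 < n or toroidal:   # vertical neighbour
                edges.add(tuple(sorted([(i, j), ((i + 1) % n, j)])))
            if j + 1 < m or toroidal:   # horizontal neighbour
                edges.add(tuple(sorted([(i, j), (i, (j + 1) % m)])))
    return edges

def hypercube_edges(dim):
    """Vertices are the integers 0..2^dim - 1; edges flip a single bit."""
    return {(v, v ^ (1 << b)) for v in range(2 ** dim)
            for b in range(dim) if v < v ^ (1 << b)}

def random_graph_edges(n, m, seed=None):
    """m edges drawn uniformly among the n*(n-1)/2 vertex pairs (G(n, m))."""
    rng = random.Random(seed)
    return set(rng.sample(list(combinations(range(n), 2)), m))

def random_weights(vertices, low, up, seed=None):
    """Uniform integer vertex weights in [low, up]."""
    rng = random.Random(seed)
    return {v: rng.randint(low, up) for v in vertices}
```

For example, a 9 x 9 toroidal instance with weights in [10, 75] corresponds to `grid_edges(9, 9, toroidal=True)` plus `random_weights(...)` over its 81 vertices.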
The next columns report the results of HYBRID-IA (7th column) and of the other three compared algorithms. It is important to clarify that for the grid graphs (squared and not squared) n and m indicate the number of rows and columns of the grid. The last column of both tables shows the difference between the result obtained by HYBRID-IA and the best result among the three compared algorithms. Further, in each line of the tables the best results among all are reported in boldface.

Table 1. HYBRID-IA versus MA, ITS and XTS on the set of instances: squared grid, not squared grid and toroidal graphs.

INSTANCE   n   m   Low  Up   K*       HYBRID-IA   ITS      XTS      MA       ±
SQUARED GRID GRAPHS
S SG1      5   5   10   25   114.0    114.0       114.0    114.0    114.0    =
S SG2      5   5   10   50   199.8    199.8       199.8    199.8    199.8    =
S SG3      5   5   10   75   312.4    312.4       312.6    312.4    312.4    =
S SG4      7   7   10   25   252.0    252.0       252.4    252.0    252.0    =
S SG5      7   7   10   50   437.6    437.6       439.8    437.6    437.6    =
S SG6      7   7   10   75   713.6    713.6       718.4    717.4    713.6    =
S SG7      9   9   10   25   442.2    444.2       444.2    442.8    442.2    +2.0
S SG8      9   9   10   50   752.2    752.2       754.6    753.0    752.2    =
S SG9      9   9   10   75   1134.4   1136.0      1138.0   1134.4   1134.4   +1.6
NOT SQUARED GRID GRAPHS
S NG1      8   3   10   25   96.8     96.8        96.8     96.8     96.8     =
S NG2      8   3   10   50   157.4    157.4       157.4    157.4    157.4    =
S NG3      8   3   10   75   220.0    220.0       220.0    220.0    220.0    =
S NG4      9   6   10   25   295.6    295.6       295.8    295.8    295.6    =
S NG5      9   6   10   50   488.6    488.6       489.4    488.6    488.6    =
S NG6      9   6   10   75   755.0    755.0       755.0    755.2    755.0    =
S NG7      12  6   10   25   398.2    398.2       399.8    398.8    398.4    -0.2
S NG8      12  6   10   50   671.8    671.8       673.4    671.8    671.8    =
S NG9      12  6   10   75   1015.2   1015.2      1017.4   1015.4   1015.2   =
TOROIDAL GRAPHS
S T1       5   5   10   25   101.4    101.4       101.4    101.4    101.4    =
S T2       5   5   10   50   124.4    124.4       124.4    124.4    124.4    =
S T3       5   5   10   75   157.8    157.8       157.8    158.8    157.8    =
S T4       7   7   10   25   195.4    195.4       197.4    195.4    195.4    =
S T5       7   7   10   50   234.2    234.2       234.2    234.2    234.2    =
S T6       7   7   10   75   269.6    269.6       269.6    269.6    269.6    =
S T7       9   9   10   25   309.6    310.2       310.4    309.8    309.8    +0.4
S T8       9   9   10   50   369.6    369.6       370.0    369.6    369.6    =
S T9       9   9   10   75   431.8    431.8       432.2    432.2    431.8    =

On the squared grid instances (top of table 1) it is possible to see how HYBRID-IA reaches the optimal solution in 7 instances out of 9, unlike MA, which instead reaches it in all of them. However, on those two instances (S SG7 and S SG9) the performances shown by HYBRID-IA are not the worst with respect to the other two algorithms. On the not squared grid graphs (middle of table 1), instead, HYBRID-IA reaches the optimal solution on all instances (9 out of 9), outperforming all three algorithms on the instance S NG7, where none of the three compared algorithms is able to find the optimal solution.
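The ± column of the tables is simply the difference between the HYBRID-IA value and the best (lowest) value among ITS, XTS and MA, with "=" denoting a tie. A one-line hypothetical helper reproducing it:

```python
def gap(hybrid_ia, its, xts, ma):
    """The +/- column: HYBRID-IA minus the best (lowest) result among
    the three compared algorithms; '=' denotes a tie."""
    diff = round(hybrid_ia - min(its, xts, ma), 1)
    return "=" if diff == 0 else f"{diff:+.1f}"
```

A negative value therefore means that HYBRID-IA found a better (lighter) solution than all three compared algorithms.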
Also on the toroidal graphs (bottom of table 1) HYBRID-IA is able to reach the global optimal solution in 8 out of 9 instances; the exception is the instance S T7, where the solution found is 0.4 higher with respect to the best solution
found by MA, and 0.6 higher with respect to the optimal one. Also on this instance, however, the worst performance is not that of HYBRID-IA but rather that of ITS. Overall, inspecting all the results in table 1, it is possible to see how HYBRID-IA is competitive with the MA algorithm, except on three instances where it shows worse results, whilst it is able to outperform MA on the instance S NG7, where it reaches the optimal solution unlike MA. Analysing the results with respect to the other two algorithms, it is clear how HYBRID-IA matches or outperforms ITS and XTS on all instances, with the only exceptions, with respect to XTS, of the instances S SG7 and S SG9.

Table 2 presents the comparisons on the hypercube and random graphs, which have larger problem dimensions with respect to the previous ones. Analysing the results obtained on the hypercube instances, it is very clear how HYBRID-IA performs at least as well as all three algorithms on all instances (9 out of 9), even reaching the optimal solution on the S H7 instance, where the three compared algorithms instead fail. On the random graphs HYBRID-IA shows results comparable with MA, except on three instances where the difference is however not excessive (1.0 on average); on the other hand, it obtains the best solution on the instances S R20 and S R23 (reaching the optimum on the latter), where MA and the other two algorithms instead fail. Also in this set, on the instances where HYBRID-IA loses against MA, it still always shows better performances than ITS and XTS. Overall, analysing all the results of this table, it is possible to assert that HYBRID-IA is competitive with and comparable to MA, losing the comparison on three instances but winning on three others. Comparing HYBRID-IA with ITS and XTS, our proposed algorithm shows better performances on all instances, almost always finding better solutions than these two compared algorithms.
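All the values discussed above are total weights of feasible feedback vertex sets, i.e. subsets of vertices whose removal leaves the graph acyclic. A candidate solution can be verified with a short union-find sketch; names and the graph encoding are our assumptions, not the authors' code.

```python
def is_feedback_vertex_set(n_vertices, edges, fvs):
    """True iff removing the vertices in `fvs` leaves the undirected
    graph acyclic; acyclicity is tested with a union-find structure."""
    removed = set(fvs)
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in removed or v in removed:
            continue                        # edge disappears with its endpoint
        ru, rv = find(u), find(v)
        if ru == rv:                        # edge would close a cycle
            return False
        parent[ru] = rv
    return True

def fvs_weight(weights, fvs):
    """Objective value: total weight of the removed vertices."""
    return sum(weights[v] for v in fvs)
```

On a triangle, for example, the empty set is not a feedback vertex set, while any single vertex is; the objective value is then the weight of that vertex.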
Finally, from the analysis of the convergence behavior, the learning ability, and the comparisons performed, it is possible to assert that HYBRID-IA is comparable with the state-of-the-art for the MWFVS problem, showing reliable and robust performances and a good ability in information learning. These efficient performances are due to the combination of the immunological operators, which introduce enough diversity in the search phase, and the designed local search, which instead refines the solutions via more appropriate and deterministic choices.

5 Conclusion

In this paper we introduced a hybrid immunological algorithm, simply called HYBRID-IA, which takes advantage of the immunological operators (cloning, hypermutation and aging) to carefully explore the search space, introducing diversity and avoiding getting trapped in local optima, and of a local search whose aim is to refine the solutions found through appropriate choices based on a deterministic approach.

The algorithm was designed and developed for solving one of the most challenging combinatorial optimization problems, the Weighted Feedback Vertex Set, which simply consists, given an undirected graph, in finding the subset of vertices of minimum weight whose removal produces an acyclic graph. In order to evaluate the goodness and reliability of HYBRID-IA, many experiments have been performed, and the obtained results have been compared with three different metaheuristics (ITS, XTS,
Table 2. HYBRID-IA versus MA, ITS and XTS on the set of instances: hypercube and random graphs.

INSTANCE   n   m     Low  Up   K*       HYBRID-IA   ITS      XTS      MA       ±
HYPERCUBE GRAPHS
S H1       16  32    10   25   72.2     72.2        72.2     72.2     72.2     =
S H2       16  32    10   50   93.8     93.8        93.8     93.8     93.8     =
S H3       16  32    10   75   97.4     97.4        97.4     97.4     97.4     =
S H4       32  80    10   25   170.0    170.0       170.0    170.0    170.0    =
S H5       32  80    10   50   240.6    240.6       241.0    240.6    240.6    =
S H6       32  80    10   75   277.6    277.6       277.6    277.6    277.6    =
S H7       64  192   10   25   353.4    353.4       354.6    353.8    353.8    -0.4
S H8       64  192   10   50   475.6    475.6       476.0    475.6    475.6    =
S H9       64  192   10   75   503.8    503.8       503.8    504.8    503.8    =
RANDOM GRAPHS
S R1       25  33    10   25   63.8     63.8        63.8     63.8     63.8     =
S R2       25  33    10   50   99.8     99.8        99.8     99.8     99.8     =
S R3       25  33    10   75   125.2    125.2       125.2    125.2    125.2    =
S R4       25  69    10   25   157.6    157.6       157.6    157.6    157.6    =
S R5       25  69    10   50   272.2    272.2       272.2    272.2    272.2    =
S R6       25  69    10   75   409.4    409.4       409.4    409.4    409.4    =
S R7       25  204   10   25   273.4    273.4       273.4    273.4    273.4    =
S R8       25  204   10   50   507.0    507.0       507.0    507.0    507.0    =
S R9       25  204   10   75   785.8    785.8       785.8    785.8    785.8    =
S R10      50  85    10   25   174.6    174.6       175.4    176.0    174.6    =
S R11      50  85    10   50   280.8    280.8       280.8    281.6    280.8    =
S R12      50  85    10   75   348.0    348.0       348.0    349.2    348.0    =
S R13      50  232   10   25   386.2    386.2       389.4    386.8    386.2    =
S R14      50  232   10   50   708.6    708.6       708.6    708.6    708.6    =
S R15      50  232   10   75   951.6    951.6       951.6    951.6    951.6    =
S R16      50  784   10   25   602.0    602.0       602.2    602.0    602.0    =
S R17      50  784   10   50   1171.8   1171.8      1172.2   1172.0   1171.8   =
S R18      50  784   10   75   1648.8   1648.8      1649.4   1648.8   1648.8   =
S R19      75  157   10   25   318.2    319.2       321.0    320.0    318.2    +1.0
S R20      75  157   10   50   521.6    522.2       526.2    525.0    522.6    -0.4
S R21      75  157   10   75   751.0    751.8       757.2    754.2    751.0    +0.8
S R22      75  490   10   25   635.8    637.0       638.6    635.8    635.8    +1.2
S R23      75  490   10   50   1226.6   1226.6      1230.6   1228.6   1227.6   -1.0
S R24      75  490   10   75   1789.4   1789.4      1793.6   1789.4   1789.4   =
S R25      75  1739  10   25   889.8    889.8       891.0    889.8    889.8    =
S R26      75  1739  10   50   1664.2   1664.2      1664.8   1664.2   1664.2   =
S R27      75  1739  10   75   2452.2   2452.2      2452.8   2452.2   2452.2   =
and MA), which nowadays represent the state-of-the-art. For these experiments a set of graph benchmark instances has been considered, based on different topologies: grid (squared and not squared), toroidal, hypercube, and random graphs.

An analysis of the convergence speed and the learning ability of HYBRID-IA has been performed in order to evaluate its efficiency, but also the computational impact and reliability provided by the developed local search. From this analysis it clearly emerges how the developed local search helps the algorithm in achieving a correct convergence towards the global optima, as well as a high amount of information gained during the learning process.

Finally, from the results obtained and the comparisons done, it is possible to assert that HYBRID-IA is competitive and comparable with the WFVS state-of-the-art, showing efficient and robust performances. In almost all tested instances it was able to reach the optimal solutions. It is important to point out that HYBRID-IA was instead the only algorithm to reach the global optimal solutions on 4 instances, unlike the three compared algorithms.

References

1. L. Brunetta, F. Maffioli, M. Trubian: "Solving the Feedback Vertex Set Problem on Undirected Graphs", Discrete Applied Mathematics, vol. 101, pp. 37-51, 2000.
2. F. Carrabs, C. Cerrone, R. Cerulli: "A Memetic Algorithm for the Weighted Feedback Vertex Set Problem", Networks, vol. 64, no. 4, pp. 339-356, 2014.
3. F. Carrabs, R. Cerulli, M. Gentili, G. Parlato: "A Tabu Search Heuristic Based on k-Diamonds for the Weighted Feedback Vertex Set Problem", Proc. International Conference on Network Optimization (INOC), LNCS 6701, pp. 589-602, 2011.
4. P. Conca, G. Stracquadanio, O. Greco, V. Cutello, M. Pavone, G. Nicosia: "Packing Equal Disks in a Unit Square: an Immunological Optimization Approach", Proc. International Workshop on Artificial Immune Systems (AIS), IEEE Press, pp. 1-5, 2015.
5. T. H.
Cormen, C. E. Leiserson, R. L. Rivest, C. Stein: "Introduction to Algorithms, Third Edition", MIT Press, July 2009.
6. V. Cutello, G. Nicosia, M. Pavone: "An Immune Algorithm with Stochastic Aging and Kullback Entropy for the Chromatic Number Problem", Journal of Combinatorial Optimization, vol. 14, no. 1, pp. 9-33, 2007.
7. V. Cutello, G. Nicosia, M. Pavone, J. Timmis: "An Immune Algorithm for Protein Structure Prediction on Lattice Models", IEEE Transactions on Evolutionary Computation, vol. 11, no. 1, pp. 101-117, 2007.
8. V. Cutello, G. Nicosia, M. Pavone, I. Prizzi: "Protein Multiple Sequence Alignment by Hybrid Bio-Inspired Algorithms", Nucleic Acids Research, Oxford Journals, vol. 39, no. 6, pp. 1980-1992, 2011.
9. M. Dell'Amico, A. Lodi, F. Maffioli: "Solution of the Cumulative Assignment Problem with a Well-Structured Tabu Search Method", Journal of Heuristics, vol. 5, no. 2, pp. 123-143, 1999.
10. A. Di Stefano, A. Vitale, V. Cutello, M. Pavone: "How Long Should Offspring Lifespan Be in Order to Obtain a Proper Exploration?", Proc. IEEE Symposium Series on Computational Intelligence (IEEE SSCI), IEEE Press, pp. 1-8, 2016.
11. S. Fouladvand, A. Osareh, B. Shadgar, M. Pavone, S. Sharafia: "DENSA: An Effective Negative Selection Algorithm with Flexible Boundaries for Self-Space and Dynamic Number of Detectors", Engineering Applications of Artificial Intelligence, vol. 62, pp. 359-372, 2016.
12. M. R. Garey, D. S. Johnson: "Computers and Intractability: A Guide to the Theory of NP-Completeness", Freeman, New York, 1979.
13. D. Gusfield: "A Graph Theoretic Approach to Statistical Data Security", SIAM Journal on Computing, vol. 17, no. 3, pp. 552-571, 1988.
14. T. Jansen, C. Zarges: "Computing Longest Common Subsequences with the B-cell Algorithm", Proc. of the 11th International Conference on Artificial Immune Systems (ICARIS), LNCS 7597, pp. 111-124, 2012.
15. T. Jansen, P. S. Oliveto, C. Zarges: "On the Analysis of the Immune-Inspired B-Cell Algorithm for the Vertex Cover Problem", Proc. of the 10th International Conference on Artificial Immune Systems (ICARIS), LNCS 6825, pp. 117-131, 2011.
16. D. B. Johnson: "Finding All the Elementary Circuits of a Directed Graph", SIAM Journal on Computing, vol. 4, pp. 77-84, 1975.
17. R. M. Karp: "Reducibility among Combinatorial Problems", Complexity of Computer Computations, The IBM Research Symposia Series, Springer, pp. 85-103, 1972.
18. M. Pavone, G. Narzisi, G. Nicosia: "Clonal Selection - An Immunological Algorithm for Global Optimization over Continuous Spaces", Journal of Global Optimization, vol. 53, no. 4, pp. 769-808, 2012.
19. D. Peleg: "Size Bounds for Dynamic Monopolies", Discrete Applied Mathematics, vol. 86, pp. 263-273, 1998.
20. Y. Tian, H. Zhang: "Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy", PLoS ONE, vol. 11, no. 8, e0157994, 2016.
21. A. Vitale, A. Di Stefano, V. Cutello, M. Pavone: "The Influence of Age Assignments on the Performance of Immune Algorithms", Proc. 18th Annual UK Workshop on Computational Intelligence (UKCI), Advances in Intelligent Systems and Computing, vol. 840, pp. 16-28, 2018.
22. C. C. Wang, E. L. Lloyd, M. L.
Soffa: "Feedback Vertex Set and Cyclically Reducible Graphs", Journal of the Association for Computing Machinery (ACM), vol. 32, no. 2, pp. 296-313, 1985.
23. X. Xia, Y. Zhou: "On the Effectiveness of Immune Inspired Mutation Operators in Some Discrete Optimization Problems", Information Sciences, vol. 426, pp. 87-100, 2018.
24. M. Yannakakis: "Node-Deletion Problems on Bipartite Graphs", SIAM Journal on Computing, vol. 10, no. 2, pp. 310-327, 1981.
25. C. Zarges: "On the Utility of the Population Size for Inversely Fitness Proportional Mutation Rates", Proc. of the 10th ACM SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA), pp. 39-46, 2009.