One-Sample Face Recognition Using HMM Model of Fiducial Areas - CSCJournals
In most real-world applications, multiple image samples of individuals are not easy to collect for the direct implementation of recognition or verification systems, so these tasks must be performed even when only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images and a hidden Markov model (HMM) for training, recognition and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) with a false acceptance rate (FAR) of 0.02%.
Swarm Intelligence Heuristics for Graph Coloring Problem - Mario Pavone
In this work we present two novel swarm heuristics based on artificial ant and bee colonies, called AS-GCP and ABC-GCP respectively. The first is based mainly on the combination of Greedy Partitioning Crossover (GPX) and a local search procedure that interacts with the pheromone-trail system; the second instead relies on three evolutionary operators: a mutation operator, an improved version of GPX, and a temperature mechanism. The aim of this work is to evaluate the efficiency and robustness of both swarm heuristics on the classical Graph Coloring Problem (GCP). Many experiments were performed to measure the real contribution of the variants and novelties designed into AS-GCP and ABC-GCP.
A first study was conducted to find the best parameter tuning and to analyze the running time of both algorithms. Both swarm heuristics were then compared with 15 different algorithms on the classical DIMACS benchmark. Across all experiments, AS-GCP and ABC-GCP are very competitive with the compared algorithms, demonstrating the value of the designed variants. Moreover, comparing AS-GCP and ABC-GCP directly, although both are suitable for the GCP, they show different characteristics: AS-GCP converges quickly towards good solutions, often reaching the best coloring, whereas ABC-GCP is more robust, mainly on denser and more complex graph topologies. Overall, ABC-GCP proved more competitive with the compared algorithms than AS-GCP in terms of the average best number of colors found.
Graph coloring, one of the most challenging combinatorial problems, finds applicability in many real-world tasks. In this work we developed a new artificial bee colony algorithm, called O-BEE-COL, for solving this problem. Its special features are (i) a SmartSwap mutation operator, (ii) an optimized GPX operator, and (iii) a temperature mechanism. Several studies are presented to show the impact of the three operators, their efficiency, the robustness of O-BEE-COL, and its competitiveness with the state-of-the-art. Inspecting all experimental results we can claim that (a) disabling any one of these operators worsens O-BEE-COL's performance in terms of success rate (SR) and/or best coloring found, and (b) O-BEE-COL obtains comparable and competitive results with respect to state-of-the-art algorithms for the Graph Coloring Problem.
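As a concrete reference point for what these heuristics optimize, here is a minimal sketch (an illustrative assumption, not code from AS-GCP or O-BEE-COL) of the two building blocks any GCP heuristic needs: an initial greedy coloring and a conflict check that verifies a coloring is proper.

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest color unused by its neighbors,
    visiting vertices in decreasing-degree order."""
    colors = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

def conflicts(adj, colors):
    """Count edges whose endpoints share a color (0 means a proper coloring)."""
    return sum(colors[u] == colors[v] for u in adj for v in adj[u] if u < v)

# Toy graph: a triangle (0-1-2) plus a pendant vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
coloring = greedy_coloring(adj)
print(len(set(coloring.values())), conflicts(adj, coloring))  # → 3 0
```

Metaheuristics such as those above then try to reduce the number of colors below what the greedy baseline finds, while keeping the conflict count at zero.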
Clonal Selection: an Immunological Algorithm for Global Optimization over Con... - Mario Pavone
In this paper we present an immunological algorithm (IA) to solve global numerical optimization problems on high-dimensional instances. Such optimization problems are a crucial component of many real-world applications. We designed two versions of the IA: the first based on a binary-code representation and the second on real values, called opt-IMMALG01 and opt-IMMALG, respectively. A large set of experiments is presented to evaluate the effectiveness of the two proposed versions. Both opt-IMMALG01 and opt-IMMALG were extensively compared against several nature-inspired methodologies, including a set of Differential Evolution algorithms whose performance is known to be superior to many other bio-inspired and deterministic algorithms on the same test bed. Hybrid and deterministic global search algorithms (e.g., DIRECT, LeGO, PSwarm) were also compared with both IA versions, for a total of 39 optimization algorithms. The results suggest that the proposed immunological algorithm is effective in terms of accuracy and capable of solving large-scale instances of well-known benchmarks. Experimental results also indicate that both IA versions are comparable with, and often outperform, state-of-the-art optimization algorithms.
International Journal of Engineering Research and Development is an international premier peer-reviewed open-access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
An optimal design of current conveyors using a hybrid-based metaheuristic alg... - IJECEIAES
This paper focuses on the optimal sizing of a positive second-generation current conveyor (CCII+), employing a hybrid algorithm named DE-ACO, which is derived from the combination of differential evolution (DE) and ant colony optimization (ACO) algorithms. The basic idea of this hybridization is to apply the DE algorithm for the ACO algorithm’s initialization stage. Benchmark test functions were used to evaluate the proposed algorithm’s performance regarding the quality of the optimal solution, robustness, and computation time. Furthermore, the DE-ACO has been applied to optimize the CCII+ performances. SPICE simulation is utilized to validate the achieved results, and a comparison with the standard DE and ACO algorithms is reported. The results highlight that DE-ACO outperforms both ACO and DE.
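The core DE move that such a hybrid could use to generate the candidate solutions seeding ACO's initialization can be sketched as follows. This is a generic DE/rand/1/bin step, not the paper's exact DE-ACO; the parameters (F, CR, bounds) and population layout are illustrative assumptions.

```python
import random

def de_step(pop, i, F=0.5, CR=0.9, lo=-5.0, hi=5.0):
    """One DE/rand/1 mutation + binomial crossover for target vector pop[i]."""
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    target = pop[i]
    jrand = random.randrange(len(target))  # guarantee at least one mutated gene
    trial = []
    for j in range(len(target)):
        if random.random() < CR or j == jrand:
            v = a[j] + F * (b[j] - c[j])   # differential mutation
        else:
            v = target[j]                   # inherit from the target vector
        trial.append(min(hi, max(lo, v)))   # clip to the box constraints
    return trial

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
trial = de_step(pop, 0)
print(all(-5.0 <= x <= 5.0 for x in trial))  # → True
```

In a DE-ACO-style hybrid, trial vectors like these would be evaluated and the best used to bias the colony's initial pheromone distribution.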
A Binary Bat Inspired Algorithm for the Classification of Breast Cancer Data - ijscai
Advances in information technology have had a major impact on medical science, where researchers come up with new ideas for improving the classification rate of various diseases. Breast cancer is one such disease, killing a large number of people around the world. Diagnosing the disease at its earliest stage makes a huge difference to its treatment. The authors propose a hybrid model combining the Binary Bat Algorithm (BBA) with a feedforward neural network (FNN), where the advantages of BBA and the efficiency of FNN are exploited to classify three benchmark breast cancer datasets into malignant and benign cases. Here BBA is used with a V-shaped hyperbolic tangent function for training the network, and a fitness function is used for error minimization. FNNBBA-based classification achieves 92.61% accuracy on training data and 89.95% on testing data.
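The V-shaped hyperbolic tangent transfer function mentioned above is commonly used in binary bat/PSO variants to map a continuous velocity to a bit-flip probability. The sketch below shows that general mechanism only; it does not reproduce the paper's FNNBBA training details.

```python
import math
import random

def v_transfer(v):
    """V-shaped transfer: |tanh(v)| in [0, 1); larger |v| -> higher flip chance."""
    return abs(math.tanh(v))

def binarize(position_bit, velocity):
    """Flip the bit with probability given by the transfer function."""
    return 1 - position_bit if random.random() < v_transfer(velocity) else position_bit

print(round(v_transfer(0.0), 3), v_transfer(3.0) > 0.99)  # → 0.0 True
```

A zero velocity leaves the bit unchanged with certainty, while large velocities make a flip almost certain, which is what lets the binary variant explore the discrete search space.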
MEDICAL DIAGNOSIS CLASSIFICATION USING MIGRATION BASED DIFFERENTIAL EVOLUTION... - cscpconf
Constructing a classification model for a particular task is important in machine learning. A classification process involves assigning objects to predefined groups or classes based on a number of observed attributes of those objects. The artificial neural network is one of the classification algorithms that can be used in many application areas. This paper investigates the potential of applying the feedforward neural network architecture to the classification of medical datasets. The migration-based differential evolution algorithm (MBDE) is chosen and applied to the feedforward neural network to enhance the learning process, and the network learning is validated in terms of convergence rate and classification accuracy. In this paper, the MBDE algorithm with various migration policies is proposed for classification problems in medical diagnosis.
Contents of the presentation:
• GA – Introduction
• GA – Fundamentals
• GA – Genotype Representation
• GA – Population
• GA – Fitness Function
• GA – Parent Selection
• GA – Crossover
• GA – Mutation
• Research Paper
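The GA ingredients listed above can be tied together in a minimal sketch on the one-max problem: a bit-string genotype, fitness as the number of ones, tournament parent selection, one-point crossover, and bit-flip mutation. Population size, generation count and rates are illustrative assumptions.

```python
import random

def one_max_ga(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # one-max: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Parent selection: best of 3 randomly sampled individuals.
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for j in range(n_bits):              # bit-flip mutation, rate 1/n
                if rng.random() < 1.0 / n_bits:
                    child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(fitness(g) for g in pop)

print(one_max_ga())  # best final fitness; typically close to n_bits
```

Each bullet of the outline maps onto one line of the sketch: genotype representation (bit lists), population (list of genotypes), fitness function, parent selection (tournament), crossover (one-point), and mutation (bit flip).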
Ant colony search and heuristic techniques for optimal dispatch of energy sources in micro-grids - Beniamino Murgante
Eleonora Riva Sanseverino, University of Palermo (Italy)
Intelligent Analysis of Environmental Data (S4 ENVISA Workshop 2009)
BPSO&1-NN algorithm-based variable selection for power system stability ident... - IJAEMSJORNAL
Because of the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, delaying decision-making. Quickly detecting power system instability therefore helps the control system make timely decisions and is key to ensuring stable operation. Power system stability identification faces a large-data-set problem, so representative variables must be selected as inputs to the identifier. This paper proposes applying a wrapper method for variable selection, in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus system show that the proposed method achieves the goal of reducing variables with high accuracy.
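The wrapper idea can be sketched by scoring a binary feature mask with leave-one-out 1-NN accuracy; a search such as BPSO would then explore the space of masks. The data and mask below are toy assumptions, not the IEEE 39-bus features.

```python
def loo_1nn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features where mask is 1."""
    feats = [j for j, m in enumerate(mask) if m]
    correct = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for k in range(len(X)):
            if k == i:
                continue  # leave sample i out
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best, best_d = y[k], d
        correct += best == y[i]
    return correct / len(X)

X = [[0, 9], [1, 8], [0, 7], [10, 1], [11, 2], [10, 3]]
y = [0, 0, 0, 1, 1, 1]
print(loo_1nn_accuracy(X, y, [1, 0]))  # feature 0 alone separates classes → 1.0
```

In the full wrapper, this accuracy (possibly penalized by the number of selected features) would serve as the BPSO fitness function.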
The Traveling Salesman Problem (TSP) is an NP-hard problem that cannot be solved in polynomial time for asymptotically large values of n. In this paper a balanced combination of a genetic algorithm and simulated annealing is used. To improve the performance of finding an optimal solution in a huge search space, we use tournament and rank selection operators, and the inver-over operator mechanism for crossover and mutation. To illustrate this more clearly, an implementation in C++ (4.9.9.2) has been done.
Index Terms: Genetic Algorithm (GA), Simulated Annealing (SA), inver-over operator, Lin-Kernighan algorithm, selection operator, crossover operator, mutation operator.
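The core move behind the inver-over mechanism is segment inversion between two chosen cities. The sketch below shows only that inversion step in simplified form (the full inver-over operator of Tao and Michalewicz repeatedly picks the second city either randomly or from another tour; that loop is omitted here).

```python
def inversion_move(tour, city, other_city):
    """Invert the tour segment between two cities (simplified inver-over step)."""
    i, j = tour.index(city), tour.index(other_city)
    i, j = min(i, j), max(i, j)
    # Keep the prefix through position i, reverse the middle, keep the suffix.
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

tour = [0, 1, 2, 3, 4, 5]
print(inversion_move(tour, 1, 4))  # → [0, 1, 4, 3, 2, 5]
```

When the second city is drawn from another individual's tour, the inversion injects that individual's adjacency information, which is why a single operator can play both the crossover and mutation roles.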
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... - IAEME Publication
Close-range photogrammetry network design refers to the process of placing a set of cameras so as to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and Particle Swarm Optimization are developed to determine the optimal camera stations for computing three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and Particle Swarm Optimization for the close-range photogrammetry network is developed. This paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
An approach for breast cancer diagnosis classification using neural network - acijjournal
The artificial neural network has been widely used as an intelligent tool in various fields in recent years, such as artificial intelligence, pattern recognition, medical diagnosis and machine learning. The classification of breast cancer is a medical application that poses a great challenge for researchers and scientists. Recently, the neural network has become a popular tool for classifying cancer datasets, and classification is one of the most active research and application areas of neural networks. Major disadvantages of the artificial neural network (ANN) classifier are its sluggish convergence and its tendency to become trapped in local minima. To overcome this, the differential evolution (DE) algorithm has been used to determine optimal or near-optimal values for the ANN parameters, and previous studies have applied DE successfully to improve ANN learning. However, the DE approach still has issues such as long training times and low classification accuracy. To overcome these problems, an island-based model is proposed in this system. The aim of our study is to propose an approach for distinguishing between different classes of breast cancer, based on the Wisconsin Diagnostic and Prognostic Breast Cancer datasets. The proposed system implements island-based training to achieve better accuracy and shorter training time, using and analyzing two different migration topologies.
ENHANCED BREAST CANCER RECOGNITION BASED ON ROTATION FOREST FEATURE SELECTIO... - cscpconf
Optimization problems are predominantly solved using computational intelligence, and one issue that can be addressed in this context is attribute subset selection evaluation. This paper presents a computational intelligence technique for solving this optimization problem using a proposed model called Modified Genetic Search Algorithms (MGSA), which avoids bad local search spaces with merit and scaled fitness variables, detecting and deleting bad candidate chromosomes and thereby reducing the number of individual chromosomes in the search space and in subsequent generations. This paper also aims to show that Rotation Forest ensembles are useful in the feature selection method: the base classifier is multinomial logistic regression integrated with Haar wavelets as the projection filter, reproducing the rank of each feature with 10-fold cross-validation. It discusses the main findings and concludes with the promising results of the proposed model, which combines MGSA for optimization with Naïve Bayes classification. The results obtained using MGSA are validated mathematically using Principal Component Analysis. The goal is to improve the accuracy and quality of breast cancer diagnosis with robust machine learning algorithms. Compared to other works in the literature survey, the experimental results achieved in this paper show better results with statistical inference.
Similar to An Information-Theoretic Approach for Clonal Selection Algorithms
A Hybrid Immunological Search for the Weighted Feedback Vertex Set Problem - Mario Pavone
In this paper we present a hybrid immunological-inspired algorithm (HYBRID-IA) for solving the Minimum Weighted Feedback Vertex Set (MWFVS) problem. MWFVS is one of the most interesting and challenging combinatorial optimization problems, with applications in many fields and many real-life tasks. The proposed algorithm is inspired by the clonal selection principle and therefore takes advantage of the main strengths of the operators of (i) cloning, (ii) hypermutation, and (iii) aging. Along with these operators, the algorithm uses a deterministic local search procedure whose purpose is to refine the solutions found so far. To evaluate the efficiency and robustness of HYBRID-IA, several experiments were performed on different instances, and on each instance it was compared to three different algorithms: (1) a memetic algorithm based on a genetic algorithm (MA), (2) a tabu search metaheuristic (XTS), and (3) an iterative tabu search (ITS). The obtained results prove the efficiency and reliability of HYBRID-IA on all instances in terms of best solutions found, with performance similar to all compared algorithms, which today represent the state-of-the-art for the MWFVS problem.
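A candidate feedback vertex set can be verified by removing it and checking that the remaining undirected graph is acyclic. A minimal sketch (toy graph and union-find check are illustrative assumptions, not HYBRID-IA code):

```python
def is_feedback_vertex_set(n, edges, fvs):
    """True iff deleting the vertices in fvs leaves an acyclic (forest) graph.
    Uses union-find: adding an edge inside one component reveals a cycle."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if u in fvs or v in fvs:
            continue  # edge removed along with the deleted vertices
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # endpoints already connected -> cycle survives
        parent[ru] = rv
    return True

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # triangle 0-1-2 plus edge 2-3
print(is_feedback_vertex_set(4, edges, set()),  # → False (triangle remains)
      is_feedback_vertex_set(4, edges, {0}))    # → True  (cycle broken)
```

In the weighted setting, the search then looks for a feasible set like this whose total vertex weight is minimal.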
The Influence of Age Assignments on the Performance of Immune Algorithms - Mario Pavone
How long a B cell remains, evolves and matures inside a population plays a crucial role in an immune algorithm's ability to jump out of local optima and find the global optimum. Assigning the right age to each clone (or offspring, in general) means finding the proper balance between exploration and exploitation. In this work we present an experimental study conducted on an immune algorithm based on the clonal selection principle, performed with eleven different age assignments, with the main aim of verifying whether at least one or two of the top 4 in the efficiency ranking previously produced on the one-max problem still appear among the top 4 in a new efficiency ranking obtained on a different complex problem. The NK landscape model, a mathematical model formulated for the study of tunably rugged fitness landscapes, was therefore chosen as the test problem. From the many experiments performed, we can assert that in the elitist variant of the immune algorithm, two of the best age assignments previously discovered still appear among the top 3 of the new rankings, while three do so in the non-elitist version. Further, in the first variant none of the previous top 4 ever ranks first, unlike in the non-elitist variant, where the previous best continues to appear in 1st position more often than the others. Finally, this study confirms that assigning the cloned B cell the same age as its parent is not a good strategy, since it remains the worst in the new efficiency ranking as well.
Multi-objective Genetic Algorithm for Interior Lighting Design - Mario Pavone
This paper proposes a novel system to help in the design of interior lighting. It is based on multi-objective optimization of the key criteria involved in lighting design: meeting a given target level of illuminance, uniformity of lighting, and electrical energy saving. The proposed solution integrates the 3D graphics software Blender, used to reproduce the architectural space and to simulate the effect of illumination, with the genetic algorithm NSGA-II. This solution offers advantages in design flexibility over previous related works.
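The selection core of NSGA-II-style multi-objective optimization is the Pareto-dominance test: one lighting design dominates another if it is no worse on every criterion and strictly better on at least one. The criteria vectors below are toy assumptions (illuminance error, non-uniformity, energy use, all minimized), not values from the paper.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a <= b everywhere and a < b somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

d1 = (0.1, 0.2, 120.0)  # close to target illuminance, uniform, low energy
d2 = (0.3, 0.2, 150.0)  # worse on two criteria, equal on one
print(dominates(d1, d2), dominates(d2, d1))  # → True False
```

NSGA-II ranks the population into fronts using exactly this relation, so designs that trade illuminance against energy without being dominated all survive to the final Pareto set.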
How long should Offspring Lifespan be in order to obtain a proper exploration? - Mario Pavone
The time an offspring should live and remain in the population in order to evolve and mature is a crucial factor in the performance of population-based algorithms, both in the search for global optima and in escaping local optima. An offspring's lifespan influences correct exploration of the search space and fruitful exploitation of the knowledge learned. In this work we present an experimental study on an immunological-inspired heuristic, called OPT-IA, with the aim of understanding how long the lifespan of each clone must be to properly explore the solution space. Eleven different types of age assignment were considered and studied, for an overall total of 924 experiments, with the main goal of determining the best one as well as an efficiency ranking among all the age assignments. This work represents a first step towards verifying whether the top 4 age assignments in the obtained ranking remain valid and suitable on other discrete and continuous domains, i.e. whether they continue to be the top 4, even if in a different order.
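The lifespan mechanism discussed above can be sketched as a static aging operator in the spirit of opt-IA: each B cell carries an age that is incremented every generation, and cells older than a lifespan tau_B are removed, with the best cell optionally spared in an elitist variant. The dictionary-based population layout is an illustrative assumption.

```python
def aging(population, tau_b, elitist=True):
    """Increment ages, then drop cells older than tau_b (best may be spared)."""
    for cell in population:
        cell["age"] += 1
    best = min(population, key=lambda c: c["fitness"])  # minimization
    return [c for c in population
            if c["age"] <= tau_b or (elitist and c is best)]

pop = [{"fitness": 1.0, "age": 5},
       {"fitness": 0.2, "age": 5},   # the best cell
       {"fitness": 0.7, "age": 1}]
out = aging(pop, tau_b=5)
print(len(out))  # old non-best cell is removed → 2
```

Tuning tau_b is exactly the lifespan question the study addresses: a short lifespan forces exploration by constantly refreshing the population, while a long one lets good lineages be exploited.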
An Information-Theoretic Approach for Clonal Selection Algorithms
1. Introduction An Optimization CSA KLd , Rd and VNd Entropic Divergences Results Conclusions
An Information-Theoretic Approach for Clonal Selection Algorithms
V. Cutello, G. Nicosia, M. Pavone, G. Stracquadanio
Department of Mathematics and Computer Science
University of Catania
Viale A. Doria 6, 95125 Catania, Italy
(cutello, nicosia, mpavone, stracquadanio)@dmi.unict.it
Seminar - DMI - Catania, 12 July 2010
An Information-Theoretic Approach for Clonal Selection Algorithms
Mario Pavone – mpavone@dmi.unict.it – http://www.dmi.unict.it/mpavone/
Outline
1 Introduction
Global Optimization
Numerical Minimization Problem
Artificial Immune System
2 An Optimization CSA
i-CSA operators
pseudo-code of CSA
3 KLd, Rd and VNd Entropic Divergences
Learning Process
Kullback-Leibler divergence
Rényi generalized divergence
Von Neumann divergence
Comparison of the divergences
4 Results
5 Conclusions
Global Optimization
Global Optimization (GO): finding the best set of parameters to optimize a given objective function.
GO problems are quite difficult to solve: there exist solutions that are locally optimal but not globally optimal.
GO requires finding a setting x = (x1, x2, . . . , xn) ∈ S, where S ⊆ Rn is a bounded set on Rn, such that a certain n-dimensional objective function f : S → R is optimized.
GOAL: finding a point xmin ∈ S such that f(xmin) is a global minimum on S, i.e. ∀x ∈ S : f(xmin) ≤ f(x).
It is difficult to decide when a global (or local) optimum has been reached.
There could be very many local optima where the algorithm can be trapped.
The difficulty increases proportionally with the problem dimension.
Numerical Minimization Problem
Let x = (x1, x2, ..., xn) be the variables vector in R^n;
Bl = (Bl_1, Bl_2, ..., Bl_n) and Bu = (Bu_1, Bu_2, ..., Bu_n) the lower
and upper bounds of the variables, such that
x_i ∈ [Bl_i, Bu_i] (i = 1, ..., n).
GOAL: minimize the objective function f(x):
min f(x),  Bl ≤ x ≤ Bu
The first 13 functions from [Yao et al., IEEE TEC, 1999]
were used as a benchmark to evaluate performance and
convergence ability.
These functions belong to two different categories: unimodal,
and multimodal with many local optima.
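As a concrete illustration, two widely documented members of this suite are f1 (the unimodal sphere function) and f9 (the multimodal Rastrigin function), both with minimum 0 at the origin; a minimal Python sketch:

```python
import math

def f1_sphere(x):
    """f1 (unimodal): sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def f9_rastrigin(x):
    """f9 (multimodal, many local optima): global minimum 0 at the origin."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```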
Artificial Immune System
Artificial Immune Systems - AIS
The Immune System (IS) is primarily responsible for
protecting the organism against attacks from external
microorganisms that might cause disease;
The biological IS has to ensure recognition of each
potentially dangerous molecule or substance.
Artificial Immune Systems are a new paradigm of
biologically-inspired computing.
Three immunological theories: immune networks, negative
selection, and clonal selection.
AIS have been successfully employed in a wide variety of
different applications
[Timmis et al.: J. Ap. Soft. Comp., BioSystems, Curr. Proteomics, 2008]
Artificial Immune System
Clonal Selection Algorithms - CSA
CSA represents an effective mechanism for search and
optimization
[Cutello et al.: IEEE TEC & J. Comb. Optimization, 2007]
Cloning, Hypermutation and Aging operators: the key
features of CSA.
Cloning: triggers the growth of a new population of
high-value B cells centered on a higher affinity value.
Hypermutation: can be seen as a local search procedure
that leads to a faster maturation during the learning phase.
Aging: aims to generate diversity inside the population,
and thus to avoid getting trapped in a local optimum.
Increasing or decreasing the allowed time to stay in the
population (δ) influences the convergence process.
Artificial Immune System
Learning Capability of CSA
The same classes of functions were used to analyze the
learning process.
Three relative entropies were used: the Kullback-Leibler,
Rényi generalized, and von Neumann divergences.
The learning analysis studied the gain both with respect to
the initial distribution and with respect to the information
obtained in the previous step.
i-CSA operators
Cloning Operator
The cloning operator clones each B cell dup times (producing P^(clo)).
Thanks to this operator, CSA produces individuals with higher
affinities (higher fitness function values).
The strategy used to set the age is crucial for the search inside the
landscape.
What age should be assigned to each clone?
Each clone assigned a random age chosen in the range
[0, τB] (labelled IA) [Cutello et al.: SAC 2006 & ICANNGA 2007]
The same age as the parent, or zero when the fitness of the
mutated clone is improved [Cutello et al.: IEEE TEC 2007 & CEC 2004]
The improved version proposed here: choose the age of each clone
in the range [0, (2/3)τB] (labelled i-CSA)
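A minimal sketch of the cloning step with the i-CSA age assignment (function and data-structure names are illustrative, not from the slides):

```python
import random

def cloning(pop, dup, tau_b, rng=random):
    """Produce dup clones per B cell; i-CSA draws each clone's age
    uniformly from [0, (2/3) * tau_b]. B cells here are plain vectors;
    clones are (vector copy, age) pairs."""
    return [(list(x), rng.uniform(0.0, 2.0 * tau_b / 3.0))
            for x in pop for _ in range(dup)]
```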
i-CSA operators
Hypermutation Operator
Tries to mutate any B cell receptor: it is not based on an
explicit mutation probability.
There exist several different kinds of hypermutation
operators [Cutello et al.: ICARIS & CEC, 2004]
The Inversely Proportional Hypermutation was designed.
Mutation rate:
α = e^(−ρ f̂(x)),
with f̂(x) the fitness function normalized in [0, 1].
Number of mutations:
M = ⌊α × ℓ⌋ + 1,
where ℓ is the length of the B cell.
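These two formulas can be sketched directly in Python (the function names are illustrative):

```python
import math

def mutation_rate(f_hat, rho):
    """alpha = e^(-rho * f_hat), with f_hat the fitness normalized in [0, 1]."""
    return math.exp(-rho * f_hat)

def num_mutations(f_hat, rho, length):
    """M = floor(alpha * length) + 1, with `length` the B cell length.
    A B cell with the best normalized fitness (1.0) receives the fewest
    mutations; a poor one (f_hat near 0) is perturbed on almost every variable."""
    return int(mutation_rate(f_hat, rho) * length) + 1
```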
i-CSA operators
Number of mutations produced
Number of Mutations of the Inversely Hypermutation Operator
[Figure: number of mutations M as a function of the normalized fitness, for dim=30 (ρ=3.5), dim=50 (ρ=4.0), dim=100 (ρ=6.0), and dim=200 (ρ=7.0); an inset zooms in on normalized fitness 0.4–1.]
i-CSA operators
How the Hypermutation Operator works
Choose randomly a variable x_i (i ∈ {1, ..., ℓ = n})
Choose randomly an index j (j ∈ {1, ..., ℓ = n}, with j ≠ i)
Replace x_i according to the following rule:
x_i^(t+1) = (1 − β) x_i^(t) + β x_j^(t),
Normalization of the fitness: the best current fitness value in the
population is decreased by a user-defined threshold Θ.
This strategy aims to keep CSA as blind as possible,
since no additional information concerning the problem
is known a priori [Cutello et al.: SAC 2006]
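A hedged sketch of this mutation rule (the helper name and the uniform choice of β are assumptions; the slide only fixes the convex-combination update):

```python
import random

def hypermutate(x, beta=None):
    """Pick a random variable x_i and a second random index j != i, then
    replace x_i with the convex combination (1 - beta) * x_i + beta * x_j."""
    n = len(x)
    i = random.randrange(n)
    j = random.choice([k for k in range(n) if k != i])
    if beta is None:
        beta = random.random()  # beta in [0, 1)
    y = list(x)
    y[i] = (1.0 - beta) * x[i] + beta * x[j]
    return y
```

Because the update is a convex combination of two existing coordinates, the mutated value always stays within the range of the current coordinate values.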
i-CSA operators
Aging Operator
Eliminates all old B cells in the populations P_d^(t) and P_Nc^(hyp).
Depends on the parameter τB: the maximum number of
generations allowed.
When a B cell is τB + 1 generations old, it is erased.
GOAL: produce high diversity in the current population to
avoid premature convergence.
Static aging operator: a B cell is erased independently of
its fitness value.
Elitist static aging operator: the selection mechanism does
not allow the elimination of the best B cell.
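Both aging variants can be sketched as follows (the (vector, age, fitness) cell representation is an assumption, and lower fitness is taken as better):

```python
def aging(cells, tau_b, elitist=True):
    """Static aging: erase every B cell older than tau_b generations,
    regardless of its fitness; the elitist variant never erases the
    best cell. Cells are (vector, age, fitness) tuples."""
    best = min(cells, key=lambda c: c[2])
    return [c for c in cells if c[1] <= tau_b or (elitist and c is best)]
```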
pseudo-code of CSA
Clonal Selection Algorithm
CSA(d, dup, ρ, τB, Tmax)
fen := 0;
Nc := d × dup;
P_d^(t=0) := init_pop(d);
// x_i = Bl_i + β(Bu_i − Bl_i), with β ∈ [0, 1] a real random value
comp_fit(P_d^(t=0));
fen := fen + d;
while (fen < Tmax) do
    P_Nc^(clo) := Cloning(P_d^(t), dup);
    P_Nc^(hyp) := Hypermutation(P_Nc^(clo), ρ);
    comp_fit(P_Nc^(hyp));
    fen := fen + Nc;
    (P_ad^(t), P_aNc^(hyp)) := aging(P_d^(t), P_Nc^(hyp), τB);
    P_d^(t+1) := (µ + λ)-selection(P_ad^(t), P_aNc^(hyp));
    t := t + 1;
end_while
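The pseudo-code above can be turned into a runnable, simplified Python sketch for minimization (requires n ≥ 2); the fitness normalization, the selection details, and the elitist best-so-far tracking are simplifying assumptions, not the authors' exact implementation:

```python
import math
import random

def sphere(x):
    """Benchmark f1: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def csa(f, n, lo, hi, d=10, dup=2, rho=3.5, tau_b=15, t_max=4000, seed=0):
    """B cells are [vector, age, fitness] triples; fen counts evaluations."""
    rng = random.Random(seed)
    pop = []
    for _ in range(d):  # init_pop: x_i = lo + beta * (hi - lo)
        x = [lo + rng.random() * (hi - lo) for _ in range(n)]
        pop.append([x, 0, f(x)])
    fen = d
    b = min(pop, key=lambda c: c[2])
    best = [list(b[0]), b[1], b[2]]
    while fen < t_max:
        # Cloning: dup clones per cell, random age in [0, (2/3) tau_b] (i-CSA)
        clones = [[list(c[0]), rng.uniform(0.0, 2.0 * tau_b / 3.0), c[2]]
                  for c in pop for _ in range(dup)]
        # Inversely proportional hypermutation: M = floor(e^(-rho*f_hat)*n) + 1
        f_lo = min(c[2] for c in clones)
        f_hi = max(c[2] for c in clones)
        for c in clones:
            f_hat = 1.0 if f_hi == f_lo else (f_hi - c[2]) / (f_hi - f_lo)
            m = int(math.exp(-rho * f_hat) * n) + 1
            for _ in range(m):
                i = rng.randrange(n)
                j = rng.choice([k for k in range(n) if k != i])
                beta = rng.random()
                c[0][i] = (1.0 - beta) * c[0][i] + beta * c[0][j]
            c[2] = f(c[0])
            fen += 1
        # Static aging: drop cells older than tau_b, then age the survivors
        merged = [c for c in pop + clones if c[1] <= tau_b]
        for c in merged:
            c[1] += 1
        # (mu + lambda)-selection: keep the d best survivors
        merged.sort(key=lambda c: c[2])
        pop = merged[:d]
        if pop[0][2] < best[2]:
            best = [list(pop[0][0]), pop[0][1], pop[0][2]]
    return best
```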
Learning Process
To analyze the learning process, three entropic divergence
metrics were used:
1 Kullback-Leibler (KLd), or Information Gain
2 Rényi generalized (Rd)
3 Von Neumann (VNd)
The learning process of CSA was studied both with respect
to the initial distribution and with respect to the previous step.
Distribution function:
f_m^(t) = B_m^t / (Σ_{m=0}^{h} B_m^t) = B_m^t / d,
where B_m^t is the number of B cells at time step t with fitness
function value m.
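Computing this distribution from the population's (discretized) fitness values is straightforward; dividing the per-value counts by the population size d gives the fractions f_m^(t):

```python
from collections import Counter

def fitness_distribution(fitness_values):
    """Empirical distribution f_m^(t): the fraction of the d B cells whose
    (discretized) fitness equals m at time step t."""
    d = len(fitness_values)
    counts = Counter(fitness_values)  # counts[m] plays the role of B_m^t
    return {m: b / d for m, b in counts.items()}
```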
Kullback-Leibler divergence
KLd or Information Gain
It is one of the most frequently used information-theoretic
distance measures.
It is based on two probability distributions of a discrete random
variable, called relative information, which has found many
applications in establishing important theorems of information
theory and statistics.
It measures the quantity of information the system discovers
during the learning phase [Cutello et al.: Journal of Combinatorial
Optimization, 2007]
Formally defined as:
KLd(f_m^(t), f_m^(t0)) = Σ_m f_m^(t) log( f_m^(t) / f_m^(t0) )
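A direct implementation of this definition (the eps guard against empty fitness bins is an implementation choice, not part of the slide's formula):

```python
import math

def kl_divergence(f_t, f_t0, eps=1e-12):
    """KLd(f^(t), f^(t0)) = sum_m f_m^(t) * log(f_m^(t) / f_m^(t0)).
    Distributions are dicts mapping fitness bin m -> probability."""
    bins = set(f_t) | set(f_t0)
    return sum(f_t.get(m, 0.0)
               * math.log((f_t.get(m, 0.0) + eps) / (f_t0.get(m, 0.0) + eps))
               for m in bins if f_t.get(m, 0.0) > 0.0)
```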
Kullback-Leibler divergence
Maximum KLd Principle
The gain is the amount of information the system has
already learned during its search process.
Once the learning process begins, the information gain
increases monotonically until it reaches a final steady state.
Maximum Kullback-Leibler principle [Cutello et al., GECCO 2003]:
dK/dt ≥ 0
The learning process ends when the above derivative
reaches zero.
The maximum KLd principle can be used as a termination criterion
[Cutello et al., GECCO 2003 & JOCO 2007]
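One way to sketch this termination criterion: stop once the monotone gain curve has flattened. The window and tolerance knobs below are illustrative assumptions, not values from the slides:

```python
def kl_stopping(history, window=3, tol=1e-6):
    """Return True once the information gain (a monotone non-decreasing
    sequence) has stopped increasing: its change over the last `window`
    steps is below `tol`, i.e. dK/dt has effectively reached zero."""
    if len(history) <= window:
        return False
    return abs(history[-1] - history[-1 - window]) < tol
```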
Kullback-Leibler divergence
KLd on f5 , f7 , and f10
[Figure: information gain over generations (1–32, log scale) for functions f5, f7, and f10.]
Kullback-Leibler divergence
KLd and standard deviation on f5
[Figure: i-CSA information gain on f5 over generations 16–64; an inset shows the corresponding standard deviation.]
Kullback-Leibler divergence
Average fitness vs. best fitness on f5
[Figure: average fitness vs. best fitness of i-CSA on f5 over the first 10 generations; an inset shows the gain and entropy over generations 16–64.]
Rényi generalized divergence
Rényi generalized divergence (Rd)
Formally defined as:
Rd(f_m^(t), f_m^(t0), α) = (1 / (α − 1)) log ( Σ_m (f_m^(t))^α / (f_m^(t0))^(α−1) ),
with α > 0 and α ≠ 1
[Rényi, Proc. 4th Berkeley Symposium on Mathematics, Statistics
and Probability, 1961]
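A direct implementation of this definition (again, the eps guard for empty bins is an implementation choice):

```python
import math

def renyi_divergence(f_t, f_t0, alpha, eps=1e-12):
    """Rd(f^(t), f^(t0), alpha) = 1/(alpha-1) * log sum_m
    (f_m^(t))^alpha / (f_m^(t0))^(alpha-1), with alpha > 0, alpha != 1."""
    assert alpha > 0 and alpha != 1
    s = sum((f_t.get(m, 0.0) ** alpha) / ((f_t0.get(m, 0.0) + eps) ** (alpha - 1))
            for m in set(f_t) | set(f_t0))
    return math.log(s) / (alpha - 1)
```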
Von Neumann divergence
Von Neumann divergence (VNd)
Formally defined as:
VNd(f_m^(t), f_m^(t0)) = −(1/n) Σ_m ( f_m^(t) log f_m^(t0) ) − (1/n) Σ_m ( f_m^(t) log f_m^(t) )
[Kopp et al., Annals of Physics, 2007]
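Implementing the formula literally as written on the slide (note that, in this form, it does not vanish for identical distributions, unlike KLd; the eps guard for empty bins is an implementation choice):

```python
import math

def von_neumann_divergence(f_t, f_t0, eps=1e-12):
    """VNd(f^(t), f^(t0)) = -(1/n) sum_m f_m^(t) log f_m^(t0)
                            -(1/n) sum_m f_m^(t) log f_m^(t),
    with n the number of fitness bins considered."""
    bins = sorted(set(f_t) | set(f_t0))
    n = len(bins)
    cross = sum(f_t.get(m, 0.0) * math.log(f_t0.get(m, 0.0) + eps) for m in bins)
    self_ = sum(f_t.get(m, 0.0) * math.log(f_t.get(m, 0.0) + eps) for m in bins)
    return -(cross + self_) / n
```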
Comparison of the divergences
KLd vs. Rd and VNd, with respect to t0
[Figure: learning with respect to the initial distribution (t0): Kullback-Leibler entropy, Rényi generalized, and von Neumann (VNd) divergences over generations 16–64; an inset zooms in on the VNd curve (values in 0–1.2).]
Comparison of the divergences
KLd vs. Rd and VNd, with respect to (t − 1)
[Figure: learning with respect to the (t − 1) time step: Kullback-Leibler entropy, Rényi generalized, and von Neumann (VNd) divergences over generations 16–64; an inset zooms in on the VNd curve (values in 0–0.9).]
Results
In order to study the search capability and the ability to
escape from local optima, the first 13 functions of the
classical benchmark from [Yao et al., IEEE TEC, 1999] were
taken into account.
There are many evolutionary methodologies able to tackle
global numerical function optimization effectively.
Among such algorithms, differential evolution (DE) has
shown better performance than other evolutionary
algorithms on complex and continuous search spaces
[Price et al., Journal of Global Optimization 1997, & Springer-Verlag
2005]
i-CSA vs. IA, and DE rand/1/bin [Thomsen, et al., CEC 2004]
30 variables — columns: IA, i-CSA, DE rand/1/bin
(first line of each entry: mean; second line: standard deviation)
f1 0.0 0.0 0.0
0.0 0.0 0.0
f2 0.0 0.0 0.0
0.0 0.0 0.0
f3 0.0 0.0 2.02 × 10−9
0.0 0.0 8.26 × 10−10
f4 0.0 0.0 3.85 × 10−8
0.0 0.0 9.17 × 10−9
f5 12 0.0 0.0
13.22 0.0 0.0
f6 0.0 0.0 0.0
0.0 0.0 0.0
f7 1.521 × 10−5 7.48 × 10−6 4.939 × 10−3
2.05 × 10−5 6.46 × 10−6 1.13 × 10−3
i-CSA vs. IA, and DE rand/1/bin [Thomsen, et al., CEC 2004]
30 variables — columns: IA, i-CSA, DE rand/1/bin
(first line of each entry: mean; second line: standard deviation)
f8 −1.256041 × 10+4 −9.05 × 10+3 −1.256948 × 10+4
25.912 1.91 × 104 2.3 × 10−4
f9 0.0 0.0 0.0
0.0 0.0 0.0
f10 0.0 0.0 −1.19 × 10−15
0.0 0.0 7.03 × 10−16
f11 0.0 0.0 0.0
0.0 0.0 0.0
f12 0.0 0.0 0.0
0.0 0.0 0.0
f13 0.0 0.0 −1.142824
0.0 0.0 4.45 × 10−8
i-CSA vs. IA, and DE rand/1/bin [Thomsen, et al., CEC 2004]
100 variables — columns: IA, i-CSA, DE rand/1/bin
(first line of each entry: mean; second line: standard deviation)
f1 0.0 0.0 0.0
0.0 0.0 0.0
f2 0.0 0.0 0.0
0.0 0.0 0.0
f3 0.0 0.0 5.87 × 10−10
0.0 0.0 1.83 × 10−10
f4 6.447 × 10−7 0.0 1.128 × 10−9
3.338 × 10−6 0.0 1.42 × 10−10
f5 74.99 22.116 0.0
38.99 39.799 0.0
f6 0.0 0.0 0.0
0.0 0.0 0.0
f7 1.59 × 10−5 1.2 × 10−6 7.664 × 10−3
3.61 × 10−5 1.53 × 10−6 6.58 × 10−4
i-CSA vs. IA, and DE rand/1/bin [Thomsen, et al., CEC 2004]
100 variables — columns: IA, i-CSA, DE rand/1/bin
(first line of each entry: mean; second line: standard deviation)
f8 −4.16 × 10+4 −2.727 × 10+4 −4.1898 × 10+4
2.06 × 10+2 7.63 × 10−4 1.06 × 10−3
f9 0.0 0.0 0.0
0.0 0.0 0.0
f10 0.0 0.0 8.023 × 10−15
0.0 0.0 1.74 × 10−15
f11 0.0 0.0 5.42 × 10−20
0.0 0.0 5.42 × 10−20
f12 0.0 0.0 0.0
0.0 0.0 0.0
f13 0.0 0.0 −1.142824
0.0 0.0 2.74 × 10−8
i-CSA and IA vs. several DE variants [Coello et al., GECCO 2006]
Unimodal Functions
f1 f2 f3 f4 f6 f7
i-CSA 0.0 0.0 0.0 0.0 0.0 2.79 × 10−5
IA 0.0 0.0 0.0 0.0 0.0 4.89 × 10−5
DE rand/1/bin 0.0 0.0 0.02 1.9521 0.0 0.0
DE rand/1/exp 0.0 0.0 0.0 3.7584 0.84 0.0
DE best/1/bin 0.0 0.0 0.0 0.0017 0.0 0.0
DE best/1/exp 407.972 3.291 10.6078 1.701872 2737.8458 0.070545
DE current-to-best/1 0.54148 4.842 0.471730 4.2337 1.394 0.0
DE current-to-rand/1 0.69966 3.503 0.903563 3.298563 1.767 0.0
DE current-to-rand/1/bin 0.0 0.0 0.000232 0.149514 0.0 0.0
DE rand/2/dir 0.0 0.0 30.112881 0.044199 0.0 0.0
i-CSA and IA vs. several DE variants [Coello et al., GECCO 2006]
Multimodal Functions
f5 f9 f10 f11 f12 f13
i-CSA 16.2 0.0 0.0 0.0 0.0 0.0
IA 11.69 0.0 0.0 0.0 0.0 0.0
DE rand/1/bin 19.578 0.0 0.0 0.001117 0.0 0.0
DE rand/1/exp 6.696 97.753938 0.080037 0.000075 0.0 0.0
DE best/1/bin 30.39087 0.0 0.0 0.000722 0.0 0.000226
DE best/1/exp 132621.5 40.003971 9.3961 5.9278 1293.0262 2584.85
DE current-to-best/1 30.984666 98.205432 0.270788 0.219391 0.891301 0.038622
DE current-to-rand/1 31.702063 92.263070 0.164786 0.184920 0.464829 5.169196
DE current-to-rand/1/bin 24.260535 0.0 0.0 0.0 0.001007 0.000114
DE rand/2/dir 30.654916 0.0 0.0 0.0 0.0 0.0
Immune systems have been related to swarm systems,
since many immune algorithms operate in a very similar
manner.
i-CSA was compared with some swarm intelligence
algorithms [Karaboga et al., Journal of Global Optimization, 2007]:
1 particle swarm optimization (PSO)
2 particle swarm inspired evolutionary algorithm (PS-EA)
3 artificial bee colony
Experimental protocol: n = {10, 20, 30}, with 500, 750, and
1000 generations respectively as termination criterion.
Five functions were taken into account: f5, f9, f10, f11, and
the new function:
H(x) = (418.9829 × n) + Σ_{i=1}^{n} ( −x_i sin(√|x_i|) )
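This is the (shifted) Schwefel function, whose minimum is approximately 0 at x_i ≈ 420.9687; a direct implementation:

```python
import math

def schwefel_h(x):
    """H(x) = 418.9829 * n - sum_i x_i * sin(sqrt(|x_i|));
    minimum approximately 0 at x_i ~= 420.9687."""
    n = len(x)
    return 418.9829 * n - sum(v * math.sin(math.sqrt(abs(v))) for v in x)
```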
Conclusion 1/3
Global numerical optimization has been taken into account
to prove the effectiveness of a derivative-free clonal
selection algorithm (i-CSA).
The main features of i-CSA can be summarized as:
1 the cloning operator, which explores the neighbourhood of a
given solution
2 the inversely proportional hypermutation operator, which
perturbs each candidate solution inversely proportionally to
its fitness function value
3 the aging operator, which eliminates the oldest candidate
solutions from the current population in order to introduce
diversity and thus avoid local minima during the search
process
Conclusion 2/3
The decision of what age to assign to any B cell receptor is crucial
for the quality of the search inside the space of solutions.
We have presented a simple variant of CSA able to effectively
improve its performance by allowing a longer maturation for each
B cell.
A large set of classical numerical functions was taken into
account.
i-CSA was compared with several variants of the DE algorithm,
since DE has been shown to be effective on many optimization
problems.
i-CSA was also compared with state-of-the-art swarm
algorithms.
Conclusion 3/3
The analysis of the results shows that i-CSA is comparable with, and often
outperforms, all the compared nature-inspired algorithms, in terms of both
accuracy and effectiveness in solving large-scale instances.
By analyzing one of the most difficult functions of the benchmark (f5), we
characterize the learning capability of i-CSA.
The obtained gain has been analyzed both with respect to the initial distribution
and with respect to the one obtained in the previous step.
For this study, three different entropic metrics were used:
1 Kullback-Leibler
2 Rényi
3 von Neumann
From the relative curves, it is possible to observe a strong correlation between
the discovery of optima (peaks in the search landscape) and high relative
entropy values.