How long a B cell remains, evolves, and matures inside a population plays a crucial role in an immune algorithm's ability to escape local optima and find the global optimum. Assigning the right age to each clone (or offspring, in general) means finding the proper balance between exploration and exploitation. In this research work we present an experimental study conducted on an immune algorithm based on the clonal selection principle, performed over eleven different age assignments, with the main aim of verifying whether at least one or two of the top 4 in the efficiency ranking previously produced on the One-Max problem still appear among the top 4 in a new efficiency ranking obtained on a different, more complex problem. The NK landscape model, a mathematical model formulated for the study of tunably rugged fitness landscapes, was therefore chosen as the test problem. From the many experiments performed, we can assert that in the elitist variant of the immune algorithm two of the best age assignments previously discovered still appear among the top 3 of the new rankings produced, while in the non-elitist version three do. Furthermore, in the elitist variant none of the previous top 4 ever ranks in first position, unlike the non-elitist variant, where the previous best continues to appear in 1st position more often than the others. Finally, this study confirms that assigning the cloned B cell the same age as its parent is not a good strategy, since it remains the worst in the new efficiency ranking as well.
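Since the NK landscape is the test problem above, a minimal sketch may help make it concrete. This is the common adjacent-neighbor variant with random contribution tables; function and variable names are illustrative, not taken from the paper:

```python
import itertools
import random

def make_nk(n, k, seed=0):
    """Build a random NK instance: each locus i gets a contribution table
    over its own bit plus its K neighbors (adjacent, wrap-around variant)."""
    rng = random.Random(seed)
    neighbors = [[(i + d) % n for d in range(1, k + 1)] for i in range(n)]
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]
    return tables, neighbors

def nk_fitness(genome, tables, neighbors):
    """Fitness is the mean of the N local contributions; a larger K means
    more interactions and a more rugged landscape."""
    contribs = [
        tables[i][(genome[i],) + tuple(genome[j] for j in neighbors[i])]
        for i in range(len(genome))
    ]
    return sum(contribs) / len(contribs)
```

With K = 0 the landscape is smooth (each bit contributes independently); tuning K upward increases ruggedness, which is exactly what makes it a harder follow-up to One-Max.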
How long should Offspring Lifespan be in order to obtain a proper exploration? - Mario Pavone
The time an offspring should live and remain in the population in order to evolve and mature is a crucial factor in the performance of population-based algorithms, both in the search for global optima and in escaping from local optima. The offspring's lifespan influences a correct exploration of the search space and a fruitful exploitation of the knowledge learned. In this research work we present an experimental study on an immune-inspired heuristic, called OPT-IA, with the aim of understanding how long the lifespan of each clone must be in order to properly explore the solution space. Eleven different types of age assignment have been considered and studied, for an overall total of 924 experiments, with the main goal of determining the best one, as well as an efficiency ranking among all the age assignments. This work represents a first step towards verifying whether the top 4 age assignments in the obtained ranking remain valid and suitable in other discrete and continuous domains, i.e. whether they continue to be the top 4 even if in a different order.
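The two moving parts described above, assigning an age to each clone and purging individuals that exceed their lifespan, can be sketched in a few lines. The three strategy labels below are illustrative stand-ins, not the paper's eleven official variants:

```python
import random

def assign_age(parent_age, strategy, max_age, rng):
    """Three illustrative age-assignment rules for a newborn clone."""
    if strategy == "zero":        # clone starts a brand-new life
        return 0
    if strategy == "inherit":     # clone inherits its parent's age
        return parent_age
    if strategy == "random":      # residual lifespan drawn uniformly
        return rng.randrange(max_age)
    raise ValueError(strategy)

def aging(population, max_age):
    """Static aging operator: everyone grows older by one step, and
    individuals exceeding max_age are removed from the population."""
    survivors = []
    for ind in population:
        ind["age"] += 1
        if ind["age"] <= max_age:
            survivors.append(ind)
    return survivors
```

The "inherit" rule gives a clone little time to mature before removal (the strategy the companion abstract above reports as worst), while "zero" grants every clone a full lifespan, maximizing exploration.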
INCREASING ACCURACY THROUGH CLASS DETECTION: ENSEMBLE CREATION USING OPTIMIZE... - IJCSEA Journal
Classifier ensembles have been used successfully to improve accuracy rates of the underlying classification mechanisms. Through the use of aggregated classifications, it becomes possible to achieve lower error rates in classification than by using a single classifier instance. Ensembles are most often used with collections of decision trees or neural networks owing to their higher rates of error when used individually. In this paper, we will consider a unique implementation of a classifier ensemble which utilizes kNN classifiers. Each classifier is tailored to detecting membership in a specific class using a best subset selection process for variables. This provides the diversity needed to successfully implement an ensemble. An aggregating mechanism for determining the final classification from the ensemble is presented and tested against several well known datasets.
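The core idea, one kNN detector per class, each restricted to its own feature subset, can be sketched briefly. The helper names and the fixed feature subsets are illustrative assumptions; the paper selects subsets via a best-subset search:

```python
import math
from collections import Counter

def knn_vote(train, labels, x, k):
    """Plain kNN: majority label among the k nearest training points."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def class_detector(train, labels, target, feature_subset, x, k=3):
    """One detector per class: a kNN restricted to a class-specific
    feature subset, answering 'does x belong to `target`?'."""
    sub_train = [[row[f] for f in feature_subset] for row in train]
    sub_x = [x[f] for f in feature_subset]
    binary = [lab == target for lab in labels]
    return knn_vote(sub_train, binary, sub_x, k)
```

Because each detector sees different variables, the detectors disagree in useful ways, which supplies the diversity an ensemble needs; an aggregator then combines the per-class votes into a final label.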
Relationships between diversity of classification ensembles and single class - IEEEFINALYEARPROJECTS
Using STELLA to Explore Dynamic Single Species Models: The Magic of Making Hu... - Lisa Jensen
The use of formal, mathematical models allows stakeholders, decision makers and scientists to better visualize interactions and relationships within ecological systems. This study uses STELLA, a modeling tool, to simulate simple population dynamics for the humpback whale (Megaptera novaeangliae) to better understand the impacts of reproductive and mortality rates, as well as of the alternative solution algorithms used to drive the model. A wide range of population dynamics occurred as a result of varying the time increments for calculating populations and the choice among the available solution algorithms. Populations are most likely to achieve equilibrium when reproduction and mortality result in approximately the same number of individuals.
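The abstract's two observations, equilibrium when births balance deaths and sensitivity to the solver time step, can be reproduced with a minimal stock-and-flow sketch. The rates below are illustrative, not the study's humpback parameters:

```python
def simulate(pop0, birth_rate, death_rate, years, dt=1.0):
    """Euler integration of dN/dt = (b - d) * N, the simplest STELLA-style
    stock-and-flow model; shrinking dt mimics a finer solver time step."""
    n = pop0
    for _ in range(int(years / dt)):
        n += (birth_rate - death_rate) * n * dt
    return n
```

With birth_rate == death_rate the population holds steady for any dt; once the rates differ, both the growth curve and its numerical accuracy depend on the time increment, which is the effect the study explores.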
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses - kknsastry
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
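The concatenated trap functions used as the first test problem are easy to state; here is a sketch of the standard 5-bit trap in its usual textbook form (not necessarily this paper's exact parameterization):

```python
def trap5(u):
    """5-bit trap on the number of ones u: fitness slopes deceptively
    toward u = 0, but the global optimum is at u = 5."""
    return 5 if u == 5 else 4 - u

def concat_traps(bits):
    """Concatenated traps: the sum of trap5 over consecutive 5-bit blocks."""
    assert len(bits) % 5 == 0
    return sum(trap5(sum(bits[i:i + 5])) for i in range(0, len(bits), 5))
```

Each 5-bit block is an independent subproblem whose bits interact strongly, which is why a model-building algorithm such as hBOA must discover the block structure to solve it reliably.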
Modeling XCS in class imbalances: Population size and parameter settings - kknsastry
This paper analyzes the scalability of the population size required in XCS to maintain niches that are infrequently activated. Facetwise models have been developed to predict the effect of the imbalance ratio (the ratio between the number of instances of the majority class and of the minority class that are sampled to XCS) on population initialization and on the creation and deletion of classifiers of the minority class. While theoretical models show that, ideally, XCS scales linearly with the imbalance ratio, XCS with the standard configuration scales exponentially. The causes potentially responsible for this deviation from the ideal scalability are also investigated. Specifically, the inheritance procedure of classifiers' parameters, mutation, and subsumption are analyzed, and improvements to XCS's mechanisms are proposed to effectively and efficiently handle imbalanced problems. Once the recommendations are incorporated into XCS, empirical results show that the population size in XCS indeed scales linearly with the imbalance ratio.
Modeling selection pressure in XCS for proportionate and tournament selection - kknsastry
In this paper, we derive models of the selection pressure in XCS for proportionate (roulette wheel) selection and tournament selection. We show that these models can explain the empirical results that have been previously presented in the literature. We validate the models on simple problems showing that, (i) when the model assumptions hold, the theory perfectly matches the empirical evidence; (ii) when the model assumptions do not hold, the theory can still provide qualitative explanations of the experimental results.
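The two selection schemes whose pressure is modeled above can be sketched in a few lines each; this is the generic textbook form of both operators, not the paper's XCS-specific formulation:

```python
import random

def roulette(fitnesses, rng):
    """Proportionate (roulette-wheel) selection: index i is chosen with
    probability fitnesses[i] / sum(fitnesses)."""
    r = rng.uniform(0, sum(fitnesses))
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1

def tournament(fitnesses, size, rng):
    """Tournament selection: the fittest of `size` uniform random picks;
    pressure depends on `size`, not on the fitness scale."""
    candidates = [rng.randrange(len(fitnesses)) for _ in range(size)]
    return max(candidates, key=lambda i: fitnesses[i])
```

The key qualitative difference the models capture is visible even here: roulette pressure depends on relative fitness magnitudes, while tournament pressure is invariant to any monotone rescaling of fitness.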
Let's get ready to rumble redux: Crossover versus mutation head to head on ex... - kknsastry
This paper analyzes the relative advantages between crossover and mutation on a class of deterministic and stochastic additively separable problems with substructures of non-uniform salience. This study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for deterministic exponentially-scaled additively separable problems, BB-wise mutation is more efficient than crossover, yielding a speedup of Θ(l log l), where l is the problem size. For the noisy exponentially-scaled problems, the outcome depends on whether scaling or noise is dominant. When scaling dominates, mutation is more efficient than crossover, yielding a speedup of Θ(l log l). On the other hand, when noise dominates, crossover is more efficient than mutation, yielding a speedup of Θ(l).
A New Method for Figuring the Number of Hidden Layer Nodes in BP Algorithm - rahulmonikasharma
In the field of artificial neural networks, the BP neural network is a multi-layer feed-forward neural network. Because it is difficult to determine the number of hidden layer nodes in a BP neural network, the theoretical basis and the existing methods for choosing BP network hidden layer nodes are studied. Then, based on traditional empirical formulas, we propose a new approach to rapidly determine the number of hidden layer nodes in a two-layer network. That is, with the assistance of the empirical formulas, a horizon for the number of units in the hidden layer can be established, and the optimal value found within this horizon. Finally, a new formula for the number of hidden layer nodes is obtained by fitting the input dimension, the output dimension and the optimal number of hidden layer nodes. For a given input dimension and output dimension, the efficiency and precision of the BP algorithm may be improved by applying the proposed formula.
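One widely cited traditional empirical formula is h = sqrt(n_in + n_out) + a with a swept over 1..10; a sketch of the "establish a horizon, then scan it" idea follows. The paper's final fitted formula is not reproduced here, and the `score` callback stands in for whatever validation metric is used:

```python
import math

def hidden_range(n_in, n_out):
    """Traditional empirical horizon: h = sqrt(n_in + n_out) + a, a in 1..10."""
    base = math.sqrt(n_in + n_out)
    return [round(base + a) for a in range(1, 11)]

def pick_hidden(n_in, n_out, score):
    """Scan the horizon and keep the hidden-layer width that scores best
    under a user-supplied validation score."""
    return max(hidden_range(n_in, n_out), key=score)
```

In practice `score` would train a small BP network at each candidate width and return its validation accuracy; the sketch only shows the search structure.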
The amount of information in the form of features and variables available to machine learning algorithms is ever increasing. This can lead to classifiers that are prone to overfitting in high dimensions; high-dimensional models do not lend themselves to interpretable results; and the CPU and memory resources necessary to run on high-dimensional datasets severely limit the applicability of these approaches.
Variable and feature selection aim to remedy this by finding a subset of features that in some way captures the information provided best.
In this paper we present the general methodology and highlight some specific approaches.
A Hybrid Immunological Search for the Weighted Feedback Vertex Set Problem - Mario Pavone
In this paper we present a hybrid immunological-inspired algorithm (HYBRID-IA) for solving the Minimum Weighted Feedback Vertex Set (MWFVS) problem. MWFVS is one of the most interesting and challenging combinatorial optimization problems, which finds application in many fields and in many real-life tasks. The proposed algorithm is inspired by the clonal selection principle, and therefore takes advantage of the main strengths of the operators of (i) cloning; (ii) hypermutation; and (iii) aging. Along with these operators, the algorithm uses a local search procedure, based on a deterministic approach, whose purpose is to refine the solutions found so far. In order to evaluate the efficiency and robustness of HYBRID-IA, several experiments were performed on different instances, and on each instance it was compared to three different algorithms: (1) a memetic algorithm based on a genetic algorithm (MA); (2) a tabu search metaheuristic (XTS); and (3) an iterative tabu search (ITS). The obtained results prove the efficiency and reliability of HYBRID-IA on all instances in terms of the best solutions found, with performances similar to all compared algorithms, which today represent the state of the art for the MWFVS problem.
Introduction to Optimization with Genetic Algorithm (GA) - Ahmed Gad
Selection of the optimal parameters for machine learning tasks is challenging. Some results may be bad not because the data is noisy or the learning algorithm used is weak, but due to a bad selection of parameter values. This article gives a brief introduction to evolutionary algorithms (EAs) and describes the genetic algorithm (GA), one of the simplest random-based EAs.
References:
Eiben, Agoston E., and James E. Smith. Introduction to Evolutionary Computing. Vol. 53. Heidelberg: Springer, 2003.
https://www.linkedin.com/pulse/introduction-optimization-genetic-algorithm-ahmed-gad
https://www.kdnuggets.com/2018/03/introduction-optimization-with-genetic-algorithm.html
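The GA described in the article can be condensed into a few dozen lines; a minimal sketch maximizing the number of ones in a bit string, with tournament selection, one-point crossover, and per-bit mutation. All parameter values are illustrative:

```python
import random

def ga_onemax(n_bits=20, pop_size=30, gens=60, p_mut=0.05, seed=0):
    """Minimal GA: tournament selection (size 2), one-point crossover,
    per-bit mutation; fitness is simply the number of ones."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

Swapping the fitness function for a validation score over candidate hyperparameter encodings turns this same loop into the parameter-selection tool the article motivates.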
Dynamic Radius Species Conserving Genetic Algorithm for Test Generation for S... - ijseajournal
This paper critically examined nine different software models for modeling/developing multi-agent based systems. The study revealed that the different models examined have their various advantages/disadvantages and uniqueness in terms of practical development/deployment of the multi-agent based information system in question. The agentology model is one methodology that can be easily adopted or adapted for the development of any multi-agent based software prototype, because it was inspired by the best practices and good ideas contained in other agent-oriented methodologies like CoMoMas, MASE, GAIA, Prometheus, HIM, MASim and Tropos.
Large data sets are not available for some diseases, such as brain tumors. This presentation and its part 2 show how to find an actionable solution from a difficult cancer dataset.
EXPERIMENTS ON HYPOTHESIS "FUZZY K-MEANS IS BETTER THAN K-MEANS FOR CLUSTERING" - IJDKP
Clustering is one of the data mining techniques used to discover business intelligence by grouping objects into clusters using a similarity measure. Clustering is an unsupervised learning process with many uses in real-time applications in the fields of marketing, biology, libraries, insurance, city planning, earthquake studies and document clustering. Latent trends and relationships among data objects can be unearthed using clustering algorithms. Many clustering algorithms exist; however, the quality of the clusters must be given paramount importance. The quality objective is to achieve the highest similarity between objects of the same cluster and the lowest similarity between objects of different clusters. In this context, we studied two widely used clustering algorithms, K-Means and Fuzzy K-Means. K-Means is an exclusive clustering algorithm, while Fuzzy K-Means is an overlapping clustering algorithm. In this paper we prove the hypothesis "Fuzzy K-Means is better than K-Means for Clustering" through both a literature and an empirical study. We built a prototype application to demonstrate the differences between the two clustering algorithms. The experiments were made on a diabetes dataset obtained from the UCI repository. The empirical results reveal that the performance of Fuzzy K-Means is better than that of K-Means in terms of quality, or accuracy, of clusters. Thus, our empirical study proved the hypothesis "Fuzzy K-Means is better than K-Means for Clustering".
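The "exclusive versus overlapping" distinction above comes down to the membership rule: K-Means assigns each point wholly to one cluster, while Fuzzy K-Means (fuzzy c-means) gives it a graded membership in every cluster. A sketch of the standard membership formula for 1-D points, with the usual fuzzifier m = 2:

```python
def memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of point x in each center:
    u_j = 1 / sum_k (d_j / d_k)^(2 / (m - 1)); the memberships sum to 1."""
    d = [abs(x - c) for c in centers]
    if any(di == 0 for di in d):               # point sits exactly on a center
        return [1.0 if di == 0 else 0.0 for di in d]
    return [1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in d) for dj in d]
```

A point midway between two centers gets membership 0.5 in each, information a hard K-Means assignment throws away; this graded assignment is the usual explanation for the quality gains the paper reports.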
INFLUENCE OF PRIORS OVER MULTITYPED OBJECT IN EVOLUTIONARY CLUSTERING - cscpconf
In recent years, evolutionary clustering has become an active research area in data mining. The evolution diagnosis of any homogeneous or heterogeneous network provides an overall view of the network. Applications of evolutionary clustering include analyzing social networks and information networks with respect to their structure, properties and behavior. In this paper, the authors study the problem of the influence of priors over multi-typed objects in evolutionary clustering. Priors are defined for each type of object in a heterogeneous information network, and experimental results are presented to show how the consistency and quality of the clusters change according to the priors.
Analysis On Classification Techniques In Mammographic Mass Data Set - IJERA Editor
Data mining, the extraction of hidden information from large databases, is used to predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. Data mining classification techniques deal with determining to which group each data instance is associated. They can deal with a wide variety of data, so that large amounts of data can be involved in processing. This paper presents an analysis of various data mining classification techniques, such as decision tree induction, Naïve Bayes and k-Nearest Neighbour (kNN) classifiers, on a mammographic mass dataset.
Computational Pool-Testing with Retesting Strategy - Waqas Tariq
Pool testing is a cost-effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops a computational pool-testing strategy based on a proposed pool testing with re-testing scheme. Statistical moments based on the applied design have been generated. With the advent of computers in the 1980s, the pool-testing with re-testing strategy under discussion is handled in the context of computational statistics. From this study, it has been established that re-testing reduces misclassifications significantly compared to the Dorfman procedure, although re-testing comes at a cost, i.e. an increase in the number of tests. The re-testing considered improves the sensitivity and specificity of the testing scheme.
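The baseline Dorfman scheme that the proposed strategy adds re-testing on top of can be summarized in one formula; a sketch under the usual textbook assumptions (perfect tests, independent items with defect prevalence p):

```python
def dorfman_expected_tests(pool_size, p):
    """Expected number of tests per item under Dorfman two-stage pooling:
    one group test shared by the pool, plus one individual retest per
    member whenever the pool is positive (prob. 1 - (1 - p)**pool_size)."""
    p_pool_positive = 1 - (1 - p) ** pool_size
    return 1.0 / pool_size + p_pool_positive
```

For low prevalence the expected cost per item falls well below one test, which is the saving the study trades against the extra retests it introduces to cut misclassifications under imperfect tests.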
Prediction of Bioprocess Production Using Deep Neural Network Method - TELKOMNIKA JOURNAL
Deep learning has enhanced the state-of-the-art methods in genomics, allowing biological data to be analysed with high predictive accuracy. The training of neural networks with several hidden layers, facilitated by deep learning, has attracted increasing interest thanks to remarkable results in various fields. Thus, the extraction of bioprocess production can be implemented by pathway prediction in the genomic metabolic network of Escherichia coli, as metabolic engineering involves the manipulation of genes which have the potential to increase the yield of metabolite production. A mathematical model of this network is the foundation for the development of computational procedures that direct genetic manipulations that would eventually lead to optimized bioprocess production. Because deep learning is well suited to genomics, the biological network can be modelled: each layer reveals insights into the biological network, enabling pathway analysis to be carried out in order to extract the target bioprocess production. In this study, a deep neural network has been used to identify gene-deletion models that offer optimal results in xylitol production and its growth yield.
Neural Model-Applying Network (NeuMAN): A New Basis for Computational Cognition - aciijournal
NeuMAN represents a new model for computational cognition synthesizing important results across AI, psychology, and neuroscience. NeuMAN is based on three important ideas: (1) neural mechanisms perform all requirements for intelligence without symbolic reasoning on finite sets, thus avoiding exponential matching algorithms; (2) the network reinforces hierarchical abstraction and composition for sensing and acting; and (3) the network uses learned sequences within contextual frames to make predictions, minimize reactions to expected events, and increase responsiveness to high-value information. These systems exhibit both automatic and deliberate processes. NeuMAN accords with a wide variety of findings in neural and cognitive science and will supersede symbolic reasoning as a foundation for AI and as a model of human intelligence. It will likely become the principal mechanism for engineering intelligent systems.
highest similarity between objects of same cluster and lowest similarity between objects of different clusters. In this context, we studied two widely used clustering algorithms such as the K-Means and Fuzzy K-Means. K-Means is an exclusive clustering algorithm while the Fuzzy K-Means is an overlapping clustering algorithm. In this paper we prove the hypothesis “Fuzzy K-Means is better than K-Means for Clustering” through both literature and empirical study. We built a prototype application to demonstrate the differences between the two clustering algorithms. The experiments are made on diabetes dataset
obtained from the UCI repository. The empirical results reveal that the performance of Fuzzy K-Means is better than that of K-means in terms of quality or accuracy of clusters. Thus, our empirical study proved the hypothesis “Fuzzy K-Means is better than K-Means for Clustering”.
INFLUENCE OF PRIORS OVER MULTITYPED OBJECT IN EVOLUTIONARY CLUSTERINGcscpconf
In recent years, Evolutionary clustering is an evolving research area in data mining. The evolution diagnosis of any homogeneous as well as heterogeneous network will provide an overall view about the network. Applications of evolutionary clustering includes, analyzing, the social networks, information networks, about their structure, properties and behaviors. In this paper, the authors study the problem of influence of priors over multi-typed object in
evolutionary clustering. Priors are defined for each type of object in a heterogeneous information network and experimental results were produced to show how consistency and quality of cluster changes according to the priors
Influence of priors over multityped object in evolutionary clusteringcsandit
In recent years, Evolutionary clustering is an evolving research area in data mining. The
evolution diagnosis of any homogeneous as well as heterogeneous network will provide an
overall view about the network. Applications of evolutionary clustering includes, analyzing, the
social networks, information networks, about their structure, properties and behaviors. In this
paper, the authors study the problem of influence of priors over multi-typed object in
evolutionary clustering. Priors are defined for each type of object in a heterogeneous
information network and experimental results were produced to show how consistency and
quality of cluster changes according to the priors.
Analysis On Classification Techniques In Mammographic Mass Data SetIJERA Editor
Data mining, the extraction of hidden information from large databases, is to predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. Data-Mining classification techniques deals with determining to which group each data instances are associated with. It can deal with a wide variety of data so that large amount of data can be involved in processing. This paper deals with analysis on various data mining classification techniques such as Decision Tree Induction, Naïve Bayes , k-Nearest Neighbour (KNN) classifiers in mammographic mass dataset.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Computational Pool-Testing with Retesting StrategyWaqas Tariq
Pool testing is a cost effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops computational pool-testing strategy based on a proposed pool testing with re-testing strategy. Statistical moments based on this applied design have been generated. With advent of computers in 1980‘s, pool-testing with re-testing strategy under discussion is handled in the context of computational statistics. From this study, it has been established that re-testing reduces misclassifications significantly as compared to Dorfman procedure although re-testing comes with a cost i.e. increase in the number of tests. Re-testing considered improves the sensitivity and specificity of the testing scheme.
Prediction of Bioprocess Production Using Deep Neural Network MethodTELKOMNIKA JOURNAL
Deep learning enhanced the state-of-the-art methods in genomics allows it to be used
in analysing the biological data with high prediction. The training process of neural network with
several hidden layers which has been facilitated by deep learning has been subjected into
increased interest in achieving remarkable results in various fields. Thus, the extraction of
bioprocess production can be implemented by pathway prediction in genomic metabolic network
in eschericia coli. As metabolic engineering involves the manipulation of genes which have the
potential to increase the yield of metabolite production. A mathematical model of this network is
the foundation for the development of computational procedure that directs genetic
manipulations that would eventually lead to optimized bioprocess production. Due to the ability
of deep learning to be well suited in terms of genomics, modelling for biological network can be
implemented. Each layer reveal the insight of biological network which enable pathway analysis
to be implemented in order to extract the target bioprocess production. In this study, deep
neural network has been to identify any set of gene deletion models that offers optimal results in
xylitol production and its growth yield.
Neural Model-Applying Network (Neuman): A New Basis for Computational Cognitionaciijournal
NeuMAN represents a new model for computational cognition synthesizing important results across AI, psychology, and neuroscience. NeuMAN is based on three important ideas: (1) neural mechanisms perform all requirements for intelligence without symbolic reasoning on finite sets, thus avoiding exponential matching algorithms; (2) the network reinforces hierarchical abstraction and composition for sensing and acting; and (3) the network uses learned sequences within contextual frames to make predictions, minimize reactions to expected events, and increase responsiveness to high-value information. These systems exhibit both automatic and deliberate processes. NeuMAN accords with a wide variety of findings in neural and cognitive science and will supersede symbolic reasoning as a foundation for AI and as a model of human intelligence. It will likely become the principal mechanism for engineering intelligent systems.
Author Mr Di Chen : Ecole Polytechnique Féderale de Lausanne: Financial Engineering Section
This paper shows that complexity influences stock returns. By establishing the complexity and resilience measure of the common stock and analyzing the relationship between return, momentum, size, complexity, book-to-market ratio and resilience, three measures (size, complexity and momentum) stand out as the factors that can influence stock returns.
Feature selection of unbalanced breast cancer data using particle swarm optim...IJECEIAES
Breast cancer is one of the significant deaths causing diseases of women around the globe. Therefore, high accuracy in cancer prediction models is vital to improving patients’ treatment quality and survivability rate. In this work, we presented a new method namely improved balancing particle swarm optimization (IBPSO) algorithm to predict the stage of breast cancer using unbalanced surveillance epidemiology and end result (USEER) data. The work contributes in two directions. First, design and implement an improved particle swarm optimization (IPSO) algorithm to avoid the local minima while reducing USEER data’s dimensionality. The improvement comes primarily through employing the cross-over ability of the genetic algorithm as a fitness function while using the correlation-based function to guide the selection task to a minimal feature subset of USEER sufficiently to describe the universe. Second, develop an improved synthetic minority over-sampling technique (ISMOTE) that avoid overfitting problem while efficiently balance USEER. ISMOTE generates the new objects based on the average of the two objects with the smallest and largest distance from the centroid object of the minority class. The experiments and analysis show that the proposed IBPSO is feasible and effective, outperforms other state-of-the-art methods; in minimizing the features with an accuracy of 98.45%.
Performance Evaluation of Neural Classifiers Through Confusion Matrices To Di...Waqas Tariq
In this paper we have aimed to diagnose skin conditions using Artificial Intelligence (AI) based classifier algorithms and do the performance analyses of those presented algorithms through confusion matrices. These algorithms are being used in a large array of different areas including medicine, and display very distinct characteristics in the sense that they are grouped under different categories such as supervised, unsupervised, statistical, or optimization. The objective of this study is to diagnose skin conditions using seven different well-known and popular as well as emerging Artificial Intelligence based algorithms and to help general practitioners and/or dermatologists develop a careful and supportive approach that leads to a probable diagnosis of skin conditions or diseases. These algorithms we chose as neural classifiers include Back- Propagation (BP), Random Forest (RF), Support Vector Machines (SVMs), Linear Vector Quantization (LVQ), Self-Organizing Maps (SOMs), Naïve Bayes, and finally Bayesian Networks. All of these algorithms have been tested and their results of diagnosing skin conditions/diseases by using data set from Dermatology Database have been compared.
Similar to The Influence of Age Assignments on the Performance of Immune Algorithms (20)
Multi-objective Genetic Algorithm for Interior Lighting DesignMario Pavone
This paper proposes a novel system to help in the design of
interior lighting. It is based on multi-objective optimization of the key criteria involved in lighting design: the respect of a given target level of illuminance, uniformity of lighting, and electrical energy saving. The proposed solution integrates the 3D graphic software Blender, used to reproduce the architectural space and to simulate the eect of illumination, and the genetic algorithm NSGA-II. This solution oers advantages in design exibility over previous related works.
Swarm Intelligence Heuristics for Graph Coloring ProblemMario Pavone
In this research work we present two novel swarm
heuristics based respectively on the ants and bees artificial
colonies, called AS-GCP and ABC-GCP. The first is based
mainly on the combination of Greedy Partitioning Crossover
(GPX), and a local search approach that interact with the
pheromone trails system; the last, instead, has as strengths
three evolutionary operators, such as a mutation operator; an
improved version of GPX and a Temperature mechanism. The
aim of this work is to evaluate the efficiency and robustness of both developed swarm heuristics, in order to solve the classical Graph Coloring Problem (GCP). Many experiments have been performed in order to study what is the real contribution of variants and novelty designed both in AS-GCP and ABC-GCP.
A first study has been conducted with the purpose for setting
the best parameters tuning, and analyze the running time
for both algorithms. Once done that, both swarm heuristics
have been compared with 15 different algorithms using the
classical DIMACS benchmark. Inspecting all the experiments
done is possible to say that AS-GCP and ABC-GCP are very
competitive with all compared algorithms, demonstrating thus
the goodness of the variants and novelty designed. Moreover,
focusing only on the comparison among AS-GCP and ABCGCP is possible to claim that, albeit both seem to be suitable to solve the GCP, they show different features: AS-GCP presents a quickly convergence towards good solutions, reaching often the best coloring; ABC-GCP, instead, shows performances more robust, mainly in graphs with a more dense, and complex topology. Finally, ABC-GCP in the overall has showed to be more competitive with all compared algorithms than AS-GCP as average of the best colors found.
Graph Coloring, one of the most challenging combinatorial problems, finds applicability in many real-world tasks. In this work we have developed a new artificial bee colony algorithm (called O-BEE-COL) for solving this problem. The special features of the proposed algorithm are (i) a SmartSwap mutation operator, (ii) an optimized GPX operator, and (iii) a temperature mechanism. Various studies are presented to show the impact factor of the three operators, their efficiency, the robustness of O-BEE-COL, and finally the competitiveness of O-BEE-COL with respect to the state-of-the-art. Inspecting all experimental results we can claim that: (a) disabling one of these operators O-BEE-COL worsens the performances in term of the Success Rate (SR), and/or best coloring found; (b) O-BEE-COL obtains comparable, and competitive results with respect to state-of-the-art algorithms for the Graph Coloring Problem.
Swarm Intelligence Heuristics for Graph Coloring ProblemMario Pavone
In this research work we present two novel swarm heuristics based respectively on the ants and bees artificial colonies, called AS-GCP and ABC-GCP. The first is based mainly on the combination of Greedy Partitioning Crossover (GPX), and a local search approach that interact with the pheromone trails system; the last, instead, has as strengths three evolutionary operators, such as a mutation operator; an improved version of GPX and a Temperature mechanism. The aim of this work is to evaluate the efficiency and robustness of both developed swarm heuristics, in order to solve the classical Graph Coloring Problem (GCP). Many experiments have been performed in order to study what is the real contribution of variants and novelty designed both in AS-GCP and ABC-GCP.
A first study has been conducted with the purpose for setting the best parameters tuning, and analyze the running time for both algorithms. Once done that, both swarm heuristics have been compared with 15 different algorithms using the classical DIMACS benchmark. Inspecting all the experiments done is possible to say that AS-GCP and ABC-GCP are very competitive with all compared algorithms, demonstrating thus the goodness of the variants and novelty designed. Moreover, focusing only on the comparison among AS-GCP and ABC-GCP is possible to claim that, albeit both seem to be suitable to solve the GCP, they show different features: AS-GCP presents a quickly convergence towards good solutions, reaching often the best coloring; ABC-GCP, instead, shows performances more robust, mainly in graphs with a more dense, and complex topology. Finally, ABC-GCP in the overall has showed to be more competitive with all compared algorithms than AS-GCP as average of the best colors found.
Clonal Selection: an Immunological Algorithm for Global Optimization over Con...Mario Pavone
In this research paper we present an immunological algorithm (IA) to solve
global numerical optimization problems for high-dimensional instances. Such optimization
problems are a crucial component for many real-world applications. We designed two versions
of the IA: the first based on binary-code representation and the second based on real
values, called opt-IMMALG01 and opt-IMMALG, respectively. A large set of experiments
is presented to evaluate the effectiveness of the two proposed versions of IA. Both opt-
IMMALG01 and opt-IMMALG were extensively compared against several nature inspired
methodologies including a set of Differential Evolution algorithms whose performance is
known to be superior to many other bio-inspired and deterministic algorithms on the same
test bed. Also hybrid and deterministic global search algorithms (e.g., DIRECT, LeGO,
PSwarm) are compared with both IA versions, for a total 39 optimization algorithms.The
results suggest that the proposed immunological algorithm is effective, in terms of accuracy,
and capable of solving large-scale instances for well-known benchmarks. Experimental
results also indicate that both IA versions are comparable, and often outperform, the stateof-
the-art optimization algorithms.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Water billing management system project report.pdfKamal Acharya
Our project entitled “Water Billing Management System” aims is to generate Water bill with all the charges and penalty. Manual system that is employed is extremely laborious and quite inadequate. It only makes the process more difficult and hard.
The aim of our project is to develop a system that is meant to partially computerize the work performed in the Water Board like generating monthly Water bill, record of consuming unit of water, store record of the customer and previous unpaid record.
We used HTML/PHP as front end and MYSQL as back end for developing our project. HTML is primarily a visual design environment. We can create a android application by designing the form and that make up the user interface. Adding android application code to the form and the objects such as buttons and text boxes on them and adding any required support code in additional modular.
MySQL is free open source database that facilitates the effective management of the databases by connecting them to the software. It is a stable ,reliable and the powerful solution with the advanced features and advantages which are as follows: Data Security.MySQL is free open source database that facilitates the effective management of the databases by connecting them to the software.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
The Influence of Age Assignments on the Performance of Immune Algorithms

Alessandro Vitale¹, Antonino Di Stefano¹, Vincenzo Cutello², and Mario Pavone²

¹ Department of Electric, Electronics and Computer Science, University of Catania, V.le A. Doria 6, I-95125 Catania, Italy
² Department of Mathematics and Computer Science, University of Catania, V.le A. Doria 6, I-95125 Catania, Italy

cutello@dmi.unict.it, mpavone@dmi.unict.it
Abstract. How long a B cell remains, evolves, and matures inside a population plays a crucial role in the ability of an immune algorithm to escape local optima and find the global optimum. Assigning the right age to each clone (or, more generally, to each offspring) means finding the proper balance between exploration and exploitation. In this research work we present an experimental study conducted on an immune algorithm based on the clonal selection principle, covering eleven different age assignments, with the main aim of verifying whether at least one or two of the top 4 in the efficiency ranking previously produced on the one-max problem still appear among the top 4 in a new efficiency ranking obtained on a different, more complex problem. To this end, the NK landscape model, a mathematical model formulated for the study of tunably rugged fitness landscapes, was chosen as the test problem. From the many experiments performed it is possible to assert that, in the elitism variant of the immune algorithm, two of the best age assignments previously discovered still appear among the top 3 of the new rankings, while three of them do so in the no-elitism version. Further, in the elitism variant none of the previous top 4 ever ranks in first position, unlike in the no-elitism variant, where the previous best continues to appear in 1st position more often than the others. Finally, this study confirms that assigning the cloned B cell the same age as its parent is not a good strategy, since it remains the worst also in the new efficiency ranking.
1 Introduction

Immune algorithms are an established and successful computational methodology that takes inspiration from the information-processing mechanisms of living organisms and from their dynamics. What makes the immune system really challenging from a computational perspective is its ability to recognize, distinguish, detect, and remember (memory) entities foreign to the living organism, as well as its capability for learning and self-regulation [6]. As in any evolutionary algorithm, the key to efficient performance in immune algorithms is an appropriate and proper balance between the exploration and exploitation mechanisms: the first is carried out by the perturbation and recombination operators, whilst the latter is obtained through the adopted selection process. What makes the search process more efficient and accurate, however, and at the same time helps to learn as much information as possible, is the right maturation time of each solution. Indeed, how long a solution stays, evolves, and matures inside a population is strictly related to a good balance of the exploration/exploitation processes and therefore plays a decisive role in the performance of any evolutionary algorithm.
In a previous work published in [5], a study was conducted on how long an individual needs to stay in the population to properly explore the search space. This study was performed on an Immune Algorithm (IA), and the outcomes obtained were summarized in an efficiency ranking of the several age-assignment types considered. Moreover, this ranking was produced on the one-max problem [12, 4], one of the classical toy problems used for understanding the dynamics, variants, and search abilities of any stochastic algorithm [2]. In light of this, in this paper we present the same experimental study but tackle a different problem, the NK-model [8], a mathematical model able to describe a "tunably rugged" fitness landscape, with the primary goal of verifying whether the top 4 of the previous ranking, or at least 2 of them, still appear among the top 4 in the new efficiency ranking produced on this model. If so, we obtain a reliable age assignment that, with high probability, provides efficient and robust performance in discrete search spaces and can be used especially in uncertain environments. Thus, the same IA proposed in [1, 11] was developed, whose core components are the cloning, hypermutation, and aging operators, and eleven (11) age-assignment types were considered for study. The Elitism and No Elitism versions were investigated as well. Inspecting the new outcomes, it is possible to assert that (a) in the elitism version, two of the top 4 of the previous ranking appear among the top 3 of the new ranking, but none of them ranks in first position; (b) in the no-elitism variant, instead, three of the previous top 4 are still among the first three positions of the new efficiency ranking and, further, the best of the previous ranking continues to be the best one; finally, (c) the worst of the previous ranking still occupies the last position in the new one.
2 The NK Landscape Model
In order to validate and generalize the outcomes obtained, it is necessary to avoid tailoring the algorithm to a specific problem, keeping it instead unaware of any knowledge of the application domain. In this way, the efficiency ranking produced will be suitable and applicable to other problems as well. To this end, the NK-model was considered for validating the experiments performed and the outcomes produced. This model was developed in [8, 9] as a powerful analytic tool able to model and represent the effects of epistatic interactions in population genetics. The basic idea behind this model is that in complex systems with many components (N), the functional contribution of each component is affected by the interaction of one or more (K) other parts of the system. The multidimensional space in which each component is represented by a dimension, to which a fitness level is assigned with respect to a specific property, constitutes the fitness landscape, whose topology depends on the interactions and on the degree of interdependence of the functional contributions of the various components. This interdependence degree strictly influences the smoothness or ruggedness of the fitness landscape: if a component is affected by a variety of other parts, then the resulting landscape will be quite rugged [10]. The NK-model is a mathematical model that captures the central factors for an ensemble of tunably rugged fitness landscapes, via the interplay between the overall size of the landscape and the number of its local hills and valleys. It is based on two parameters: (1) the number N of components; and (2) a parameter K that measures the richness of interactions among components. Increasing this last parameter increases the number of peaks and valleys, thus raising the ruggedness of the corresponding fitness landscape: moving from single-peaked and smooth to multi-peaked and fully uncorrelated. Each component is assumed to be in a binary state, 0 or 1, and the fitness contribution of such a component depends on its own state and on the states of the K other components explicitly linked to it.
Formally, given a bit string s = (s_1, s_2, . . . , s_N) of length N, the model assigns a fitness contribution φ_i to each locus s_i of the N-residue chain, such that φ_i depends on s_i and on K other bits (0 ≤ K ≤ N − 1). There are 2^(K+1) combinations of states of the K + 1 loci that determine the fitness contribution of each locus. For each of these combinations a value in [0, 1] is randomly selected as its fitness contribution. The total fitness of the string s is then defined as the average of the per-locus fitness contributions, each depending on the locus itself and on the K neighbors that affect it:

    F_{NK}(s) = \frac{1}{N} \sum_{i=1}^{N} \phi_i(s_i, s_{i+1}, s_{i+2}, \ldots, s_{i+K})    (1)
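A tiny worked instance of Eq. (1) may help fix ideas; the contribution values below are hypothetical, chosen only for illustration. Take N = 3 and K = 1, and suppose the random lookup tables yield φ_1 = 0.2, φ_2 = 0.9, and φ_3 = 0.4 for the current string. Then

```latex
F_{NK}(s) = \frac{1}{3}\bigl(\phi_1 + \phi_2 + \phi_3\bigr)
          = \frac{0.2 + 0.9 + 0.4}{3} = 0.5
```

Flipping a single bit s_i changes not only φ_i but also the contribution of every locus that has s_i among its K neighbors, which is precisely the source of the landscape's ruggedness.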
The goal in dealing with the NK-model is to maximize the equation above. Note that tuning the parameter K alters how rugged the landscape is, and thus affords us a tunably rugged fitness landscape. Indeed, for K = 0 each site is independent of all the others, which generates the smoothest landscape; whilst for K = N − 1 the fitness contribution of each site depends on all the others, producing the most rugged landscapes, with very many local optima. Solving the NK-model was proved to be an NP-complete problem [13]. Thanks to its properties, this model allows us to keep the study and its validation within the discrete domain, as done in [5], and, at the same time, to test on a different level of fitness-landscape ruggedness.
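As a concrete illustration, the following minimal sketch builds a random NK landscape and evaluates Eq. (1). The function names and the choice of neighbors (the K consecutive loci following locus i, with wrap-around) are our own assumptions for illustration, not the paper's implementation.

```python
import random

def make_nk_landscape(n, k, seed=0):
    """For every locus, draw a random contribution in [0, 1] for each of
    the 2^(K+1) state combinations of the locus and its K neighbours."""
    rng = random.Random(seed)
    # tables[i][c] = contribution of locus i when its (K+1)-bit
    # neighbourhood pattern encodes to the integer c
    return [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

def nk_fitness(s, tables, k):
    """Average of the per-locus contributions phi_i(s_i, ..., s_{i+K})."""
    n = len(s)
    total = 0.0
    for i in range(n):
        # encode the states of locus i and its K successors (wrap-around)
        pattern = 0
        for j in range(k + 1):
            pattern = (pattern << 1) | s[(i + j) % n]
        total += tables[i][pattern]
    return total / n

# quick check on a small instance (sizes chosen arbitrarily)
rng = random.Random(1)
tables = make_nk_landscape(n=10, k=2)
s = [rng.randint(0, 1) for _ in range(10)]
assert 0.0 <= nk_fitness(s, tables, 2) <= 1.0
```

Since each φ_i lies in [0, 1], the total fitness is always in [0, 1]; maximizing it becomes harder as K grows and the lookup tables become more entangled.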
3 The Immune Algorithm
In order to achieve the purposes set for this research work, we have faithfully developed the immune-inspired algorithm proposed in [5, 3], whose main features are its three operators: (1) cloning, which generates a new population centered on the higher-affinity values; (2) inversely proportional hypermutation, which explores the neighborhood of each point in the search space; and (3) aging, which introduces diversity into the population and avoids getting trapped in local optima. This algorithm is based on four main (all user-defined) parameters: the population size (d); the number of clones to be produced (dup); the mutation rate (ρ); and the maximum number of generations a B cell is allowed to stay in the population (τB). Its pseudo-code is shown in Algorithm 1.
Vitale, Di Stefano, Cutello, Pavone
Algorithm 1 The Immune Algorithm (d, dup, ρ, τB)
 1:  t ← 0;
 2:  P(t) ← Initialize Population(d);
 3:  Evaluate Fitness(P(t));
 4:  repeat
 5:      Increase Age(P(t));
 6:      P(clo) ← Cloning(P(t), dup);
 7:      P(hyp) ← Hypermutation(P(clo), ρ);
 8:      Evaluate Fitness(P(hyp));
 9:      (Pa(t), Pa(hyp)) ← Aging(P(t), P(hyp), τB);
10:      P(t+1) ← (µ + λ)-Selection(Pa(t), Pa(hyp));
11:      t ← t + 1;
12:  until (stop criterion is satisfied)
The immune algorithm begins with the initialization of the population P(t=0) of size d,
generating random solutions in the binary domain ({0, 1}); after that, for each B cell
x ∈ P(t), i.e. each candidate solution, the fitness value is computed via the
Evaluate Fitness(P(t)) procedure (lines 3 and 8 in Algorithm 1). Inside the evolutionary
process the three main operators are performed together with a selection operator, which
attempts to exploit as well as possible the information gained during the evolution.
The main loop (i.e. the evolution) terminates once a fixed stopping criterion is satisfied,
that is, when the maximum number of allowed fitness function evaluations (Tmax) is
reached.
The first immune operator applied is the cloning operator, which simply copies each
solution (i.e. each B cell) dup times, producing an intermediate population P(clo) of size
(d × dup). Once a clone is produced, it is assigned an age that determines its lifetime
in the population: based on this assignment, the B cell will live until its age exceeds
τB. To this end, at the beginning of each time step t, the age of all B cells in the
population is increased by one (line 5 in Algorithm 1). The assignment of an age to
each B cell, together with the aging operator, has the purpose of reducing premature
convergence and keeping the right amount of diversity in the population. Hence, what
age to assign to each clone (or individual) plays a crucial role in the performance of
any evolutionary algorithm. Therefore, in order to analyse and confirm whether the
previously produced efficiency ranking is still valid for this new model, even if in a
different order, we have conducted the same age-assignment study proposed in [5],
evaluating the overall behaviour in terms of performance, convergence and success.
Eleven different age assignment types for each clone have been considered in this
study:
1. (type0) age zero (0);
2. (type1) random age chosen in the range [0, τB];
3. (type2) random age chosen in the range [0, (2/3)τB]. This option guarantees that
   each B cell evolves for at least a minimal number of generations ((1/3)τB in the
   worst case);
Influence of age assignments on IAs
4. (type3) random age chosen in the range [0, inherited], where inherited denotes
   the parent's age. With this option, in the worst case the clone will have the same
   age as its parent;
5. (type4) random age chosen in the range [0, (2/3)inherited]. In this way each
   offspring is guaranteed a lower age than its parent;
6. each clone is assigned the same age as its parent (inherited), but if after the M
   mutations performed on the clone its fitness value improves, then its age is updated
   as follows:
   (a) (type5) zero;
   (b) (type6) randomly chosen in the range [0, τB];
   (c) (type7) randomly chosen in the range [0, (2/3)τB];
   (d) (type8) randomly chosen in the range [0, inherited];
   (e) (type9) randomly chosen in the range [0, (2/3)inherited];
7. (type10) same age as the parent less one (inherited − 1).
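The eleven rules above can be condensed into a single dispatch function. This is an illustrative sketch rather than the authors' implementation: the function name and the dict-free signature are assumptions, and the observation it encodes is that type5–type9 reuse the rules of type0–type4 when the clone's fitness improved after its M mutations.

```python
import random

def assign_age(age_type, tau_b, inherited, improved, rng=random):
    """Sketch of the eleven age-assignment rules (type0..type10).
    `inherited` is the parent's age; `improved` says whether the clone's
    fitness improved after its M mutations (relevant for type5..type9)."""
    if age_type == 0:
        return 0                                  # type0: age zero
    if age_type == 1:
        return rng.randint(0, tau_b)              # type1: random in [0, tauB]
    if age_type == 2:
        return rng.randint(0, (2 * tau_b) // 3)   # type2: random in [0, 2/3 tauB]
    if age_type == 3:
        return rng.randint(0, inherited)          # type3: random in [0, inherited]
    if age_type == 4:
        return rng.randint(0, (2 * inherited) // 3)  # type4
    if 5 <= age_type <= 9:
        if not improved:
            return inherited                      # no improvement: keep parent's age
        # On improvement, type5..type9 apply the rules of type0..type4.
        return assign_age(age_type - 5, tau_b, inherited, improved, rng)
    if age_type == 10:
        return inherited - 1                      # type10: parent's age minus one
    raise ValueError("unknown age type")
```

Note how type10 never resets the clock, which matches its consistently poor ranking reported below: the offspring inherits almost the whole residual lifespan already consumed by its parent.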
We would like to highlight, and stress once again, that what we want to prove with
this research study is not that we obtain the very same efficiency ranking previously
produced, but rather to verify whether the previous best 4, or at least some of them,
still appear among the top positions in the new ranking. In this way, we may single out
one or more age assignments to be relied upon for efficient and robust performance,
especially when tackling uncertain environments.
The hypermutation operator acts on each solution of population P(clo), performing
M mutations, whose number is determined by an inversely proportional law: the
higher the fitness function value, the lower the number of mutations performed on the
B cell. Interestingly, this perturbation operator is not based on any mutation
probability. In particular, the number of mutations M to perform on a clone x is
governed by α = e^(−ρ fˆ(x)), where α represents the mutation rate and fˆ(x) the fitness
function value normalized in [0, 1]. The number M of mutations is then determined
by M = ⌊α × ℓ⌋ + 1, with ℓ the length of the B cell. In this way, at least one
mutation is guaranteed on each B cell, and this happens exactly when the solution is
very close to the optimal one. For each clone, the hypermutation operator randomly
chooses a bit and inverts its value (from 0 to 1, or vice versa), and this is repeated
without redundancy M times. At the end, all hypermutated clones form a new
population, labelled P(hyp). During the normalization of the fitness function into the
range [0, 1], the best current fitness, decreased by a threshold θ, is used in place of the
global optimum, because the latter is usually unknown. In this way, no a priori
knowledge about the problem is used.
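The inversely proportional law above can be sketched as follows. This is a minimal illustration under stated assumptions: the normalized fitness f_norm is passed in already computed, and "without redundancy" is implemented by sampling M distinct bit positions. The function name is hypothetical.

```python
import math
import random

def hypermutate(cell, rho, f_norm, rng=random):
    """Inversely proportional hypermutation: the better the normalized
    fitness f_norm in [0, 1], the fewer bit flips are performed.
    alpha = e^(-rho * f_norm);  M = floor(alpha * L) + 1."""
    length = len(cell)
    alpha = math.exp(-rho * f_norm)
    m = int(alpha * length) + 1          # at least one mutation is guaranteed
    clone = list(cell)
    # Flip M distinct, randomly chosen bit positions (no redundancy).
    for pos in rng.sample(range(length), min(m, length)):
        clone[pos] = 1 - clone[pos]
    return clone
```

With ρ = 5.9, as used in the experiments below, a clone with f_norm = 1 receives exactly one flip, while a clone with f_norm = 0 is perturbed on essentially every bit.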
The aging operator acts as the last immune operator, with the main goal of producing
enough diversity in the population so as to avoid premature convergence and thus
getting trapped in local optima. It eliminates the old B cells from the populations P(t)
and P(hyp): every B cell is allowed to stay in the population for a fixed number of
generations τB; as soon as a B cell becomes older than τB, it is removed from the
population it belongs to, independently of its fitness value. There exists a variant
of this operator, called the elitism version, which makes an exception for the best
solution found so far: it is always kept in the population, even if it is older than τB. In
this experimental study, both variants of the aging operator have been taken into
account: elitism and no elitism. An analysis of the benefits and efficiency of the aging
operator can be found in [7].
After the three immune operators have been performed, a new population P(t+1)
for the next iteration is produced by merging the best B cells via the (µ + λ)-Selection
operator, which selects the best d survivors of the aging step from the populations
Pa(t) and Pa(hyp). This operator, with µ = d and λ = (d × dup), reduces the offspring
B cell population of size λ ≥ µ to a new parent population of size µ = d. The selection
operator identifies the d best elements from the offspring set and the old parent B
cells, thus guaranteeing monotonicity in the evolution dynamics. Nevertheless, due to
the aging operator, it may happen that only d1 < d B cells survive; in this case, the
selection operator randomly generates d − d1 new B cells.
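The aging-then-selection step can be sketched as below. This is an illustrative, no-elitism sketch: B cells are represented as dicts with hypothetical "age" and "fitness" keys, and `new_cell` stands in for whatever random-cell factory the algorithm uses to refill the population when fewer than d cells survive.

```python
import random

def aging(pop, tau_b):
    """Remove B cells older than tau_b, regardless of fitness (no elitism)."""
    return [c for c in pop if c["age"] <= tau_b]

def mu_plus_lambda_selection(parents, offspring, d, new_cell):
    """(mu + lambda)-Selection: keep the d best survivors from parents +
    offspring; if fewer than d survived the aging step, refill with
    randomly generated B cells via the new_cell factory."""
    survivors = sorted(parents + offspring,
                       key=lambda c: c["fitness"], reverse=True)[:d]
    while len(survivors) < d:
        survivors.append(new_cell())
    return survivors
```

Selecting from parents and offspring together is what guarantees the monotonicity mentioned above, up to the losses introduced by aging.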
4 Results and Discussion
In this section all the outcomes of the experimental study are presented, whose main
aim is to investigate whether the best age assignments previously found are still valid
when solving a new model, still working in the discrete domain (bit strings) but
facing a more rugged fitness landscape. For these experiments we have considered the
NK-model, a mathematical model based on two parameters (N and K) from which it
is possible to produce tunably rugged landscapes. In particular, tuning the parameter
K alters the ruggedness of the landscape: from the smoothest, produced for K = 0, to
the most rugged one with many local optima for K = N − 1. For our test case we have
considered N = 100 and K = N/2 = 50. In this way, a rather rugged fitness landscape
with many local optima is produced, which certainly makes the search for the global
optimum harder but, on the other hand, allows us to better validate the outcomes
obtained. Figure 1 shows the fitness landscape produced for N = 12 and K = 6 = N/2.
N = 12 was chosen because of the high number of binary strings (2^N) that shape the
given landscape; it nonetheless gives an idea of the hardness of the landscape of the
problem to be solved.
In order to validate all results and produce the new efficiency ranking, we used the
same experimental protocol proposed in [5], that is: d = {50, 100}; dup = {2, 5, 10};
τB = {5, 10, 15, 20, 50, 100, 200}; Tmax = 10^5; and each experiment was performed
over 100 independent runs. Because the parameter ρ is related only to the problem
dimension (N), after several tests it was fixed to 5.9 for all experiments. Further, all
experiments were performed for both versions of the IA: the elitism and no elitism
variants. Since the optimal solution is unknown, we have considered as evaluation
measures, in order: (i) the mean of the best solutions found over the 100 independent
runs; (ii) the best solution found; and (iii) the standard deviation (σ). Of course, given
the high number of experiments performed, in this section we report only the figures
and tables of the most meaningful results.
Note that each plot in figures 2 and 3, for each group of columns (i.e. each τB value),
shows the age assignments in increasing order, as listed in section 3, from left to right
(type0 leftmost, type10 rightmost).
Figure 2, plots (a) and (b), shows the results produced by the developed immune
algorithm on the 11 age assignment test cases. Each plot reports the mean of the best
[Figure 1: 3-D surface plot, panel title "NK-Model: Number of Local Optima when K=N/2".]
Fig. 1. Rugged fitness landscape produced by setting N = 12 and K = N/2.
[Figure 2: four bar plots of MEAN versus Age Assignment Type, grouped by τB = 5, 10, 15, 20, 50, 100, 200.]
Fig. 2. Mean of the best solutions versus age types. Results are obtained by both immune algo-
rithm versions: elitism (upper plots) and without elitism (bottom plots). Plot (a) shows the results
obtained by setting d = 50 and dup = 2, whilst plot (b) those obtained using the parameters
d = 100 and dup = 2.
solutions found over the 100 independent runs, varying the τB parameter, and for both
variants of the algorithm: with elitism (upper plots) and without elitism (bottom plots).
In particular, plot (a) is obtained by setting d = 50 and dup = 2, whilst plot (b)
uses the parameters d = 100 and dup = 2. Inspecting plot (a) of figure 2, it is possible
to see that at least 2 of the previous best 4 age assignments (type0, type4, type3
and type2, in that order) often appear among the first positions, although not always
as the best ones. This appears even clearer if we analyse the no elitism version (bottom
plot). In particular, at low values of τB all 4 previous best ones appear among the new
top 4-5 positions, and sometimes even in the same order as in the previous ranking. It
is interesting to note from both plots (a) that for low values of τB the performances of
the various age types are noticeably different, whilst as τB increases the performances
become roughly equivalent. In both variants, the overall best performances, in terms of
mean of best solutions found, are obtained with τB = 20. What is instead clear in both
plots, and confirms what was asserted in the previous efficiency ranking, is that the last
age assignment type proposed (type10) still continues to be the worst. The same
analysis can be made for plot (b) of figure 2. In addition to the previous analysis, it is
possible to assert for both versions that all previous 4 top age assignment types always
appear among the 4-5 best (somewhat reflecting the same order), except for τB = 50.
Also in these experiments, the best overall performances are obtained by setting
τB = 20; besides, the last age assignment option (type10), that is, assigning the
parent's age minus 1, continues to be a bad choice.
Plots (a) and (b) in figure 3 report, instead, the performances of both variants of the
immune algorithm on all age assignment test cases when dup = 10, varying the
population size. The upper plots show the results obtained with the elitism version,
the bottom ones those of the variant without elitism. Analysing plots (a), it is possible
to see that in these experiments none of the previous top 4 appears steadily among the
new best positions, and when they do appear they are in sparse order. This is due to
the high number of clones (10 for each B cell), which requires greater diversity in the
population, better produced by the age assignment types 5 to 9. This statement is also
confirmed by the better performances at low τB values. However, if we consider only
the rank, and possible equivalent ranks, then at least one of the previous best 4 is
almost always among the 4-5 best in the new ranking. Plots (b) show a similar
behavior to the previous ones, but as the τB values increase, the presence of the
previous top age types among the new best ones becomes more pronounced. Also in
these experiments, type10 appears as the worst age assignment.
Table 1 reports the efficiency ranking produced for each pair of values (d, dup).
Boldface highlights when one of the previous best 4 [5] is ranked in the top 4
positions of the new ranking. Looking at the elitism version for d = 100, we find
type3 and type4 among the first 4 ranked for dup = {2, 5}, whilst only type3,
ranked 4th, for dup = 10. A different efficiency ranking is instead produced for
d = 50, where for dup = 2, type3 and type4 are ranked in 2nd and 3rd position
respectively, whilst for the last dup value none of the previous top 4 achieves the first
positions. This different behavior may be explained by the fact that, setting the
population size to 50, the algorithm shows a stronger form of elitism that limits the
effect of the age assignments. This means, in a nutshell, that the chosen age type alone
is not able to produce the right diversity in the population. Inspecting the no elitism
variant, it is clear that, for dup = {2, 5}, three out of four of the previous ranking still
appear among the top 4, namely type0, type2 and type4. In particular, for small
dup values type0 is
[Figure 3: four bar plots of MEAN versus Age Assignment Type, grouped by τB = 5, 10, 15, 20, 50, 100, 200.]
Fig. 3. Mean of the best solutions versus age types. Results are obtained by both immune algo-
rithm versions: elitism (upper plots) and without elitism (bottom plots). Plot (a) shows the results
obtained by setting d = 50 and dup = 10, whilst plot (b) those obtained using the parameters
d = 100 and dup = 10.
Table 1. Efficiency ranking produced by each pair (d, dup).

                    d = 50                     d = 100
type      dup = 2  dup = 5  dup = 10   dup = 2  dup = 5  dup = 10
Elitism Ranking
 0           6        8       10          5        8        9
 1          10       10        9         10       10        7
 2           8        9        8          7        9        9
 3           3        5        6          3        3        4
 4           2        6        7          2        4        8
 5           1        1        5          1        1        4
 6           9        7        2          9        7        4
 7           5        3        1          6        4        2
 8           7        4        3          8        6        3
 9           4        2        3          4        2        1
10          11       11       11         11       11       11
No Elitism Ranking
 0           1        7       10          1        4       10
 1           7        6        4          7        8        3
 2           4        3        8          4        3        7
 3           5        4        6          5        6        9
 4           2        2        9          2        1        6
 5           3        1        6          3        2        8
 6          10       10        5         10       10        5
 7           8        8        3          8        7        1
 8           9        9        2          9        9        2
 9           6        5        1          6        5        4
10          11       11       11         11       11       11
always the best option to choose; type4 is always the second best; whilst type3
fluctuates between 3rd and 4th position. If, instead, the number of clones is increased
(dup = 10), then a ranking opposite to the one shown in [5] is produced, where the
age types previously at the bottom of the ranking are now in the best ranks. This is
because they are able to produce more diversity than the others, which is a key point
when working with a high number of solutions. What emerges clearly from these
single efficiency rankings is that in every ranking type10 is always the worst. This
confirms once again the outcome obtained in the previous work.
After analysing the single experiments and the individual efficiency rankings, the goal
of this research work is to understand which age assignment types show the best
performances overall (i.e. independently of the parameter values used), and whether
one or more of the previous top 4 appear among them. To this end, table 2 reports the
summarized overall outcomes. Thus, once the single rankings for each set of values of
d and dup have been computed, table 2 summarizes everything, reporting the number
of times each age assignment appears among the top k (ϕk) over the 6 overall
experiments (varying d = 50, 100 and dup = 2, 5, 10), and the success rate in using
that age assignment type. The latter was computed only with respect to the top 4,
meaning that, inspecting all experiments, a given age type appears SRtop4 times
among the best 4. Obviously, the higher the percentage, the more robust and efficient
the performances using the corresponding age assignment type. Further, again in
table 2, ϕ5 indicates how many times the given age assignment appears among the
top 5; ϕ4 among the top 4; and so on. From this table, analysing the elitism version,
it is possible to
Table 2. Summary of the overall outcomes, where ϕk reports the number of times that each
age assignment appears among the top k, and SRtop4 the success rate, i.e. the percentage of
times the corresponding age type appears among the best 4.

                  Elitism                            no Elitism
type   ϕ5  ϕ4  ϕ3  ϕ2  ϕ1   SRtop4      ϕ5  ϕ4  ϕ3  ϕ2  ϕ1   SRtop4
 0      1   0   0   0   0       0%       3   3   2   2   2      50%
 1      0   0   0   0   0       0%       2   2   1   0   0   33.33%
 2      0   0   0   0   0       0%       4   4   2   0   0   66.67%
 3      5   4   3   0   0   66.67%       3   1   0   0   0   16.67%
 4      3   3   2   2   0   50.00%       4   4   4   4   1   66.67%
 5      6   5   4   4   4   83.33%       4   4   4   2   1   66.67%
 6      2   2   1   1   0   33.33%       2   0   0   0   0       0%
 7      5   4   3   2   1   66.67%       2   2   2   1   1   33.33%
 8      3   3   2   0   0      50%       2   2   2   2   0   33.33%
 9      6   6   4   3   1     100%       4   2   1   1   1   33.33%
10      0   0   0   0   0       0%       0   0   0   0   0       0%
note how the best age assignment in the previous ranking (type0) appears only once
among the top 5, and never among the first 4; whilst types 3 and 4, third and second
in the previous ranking, appear in these new experiments among the best 3,
respectively 3 and 2 times. It should be highlighted that type3 is among the top 3 in
3 out of 6 cases, but never appears in the first two positions, unlike type4, which
ranks in the first two positions in 2 out of 6 experiments. Focusing only on the top
position, none of the previous best 4 is able to outperform the other age assignment
types. In particular, type5 is the age type that shows the best performances, winning
4 times out of 6. In general, it is possible to assert that type5, type7 and type9,
the only ones to appear at the top, represent assignment types that introduce diversity
into the population but, at the same time, leave enough time for the maturation of all
promising solutions. Inspecting the no elitism version, we obtain a different behavior
with respect to the other variant (as we of course expected), where, instead, the
previous best one still continues to reach the top (2 out of 6) also in this case study,
together with type4 (the previous 2nd), which appears in first rank once. Looking at
the first 4 ranks, it is possible to see that 3 of the previous best are still among the top
4 in the new ranking (type4, type2 and type0), together with assignment type5.
It is worth noting, though, that although type0 is not the best among the top 4, nor
among the top 2, it is the one that appears in first position more times than any other.
Finally, keeping in mind the goals of this research work and following the analysis
conducted, it is possible to conclude that, for the elitism variant of the developed
immune algorithm, 2 of the previous top 4 (type3 and type4) still appear among
the top 4 in the new ranking and, furthermore, type4 is again among the best 2,
placing in the first two positions 2 times out of 6. For the no elitism variant, instead,
three of the previous top 4 (type4, type2 and type0) continue to appear in the new
top 3 (of course in a different order); and if we inspect only the top position, we may
state that the previous best (type0) remains so also in this new case study. What
emerges clearly, and is common to both variants of the immune algorithm, is the bad
performance produced by age assignment type10, as highlighted also in the previous
efficiency ranking.
5 Conclusions
Starting from the well-known assertion that the right balance between exploration and
exploitation is a key to success for any evolutionary algorithm, in this research work
we present an experimental study focused on understanding the right maturation time
of each solution in order to obtain a careful search process and good information
learning. Since this study follows a previous one, our main goal is to investigate
whether the best age assignments previously obtained are still valid when a new and
different problem is tackled.
An Immune Algorithm (IA) was developed for tackling and solving the NK-model,
a mathematical model able to capture the fitness interactions that produce a tunably
rugged fitness landscape. Using this model we keep the study on a discrete search
space (binary strings), as done in the previous study, but with a more rugged fitness
landscape. In order to conduct a proper study, the tunable parameter K, by which the
ruggedness of the search landscape is altered, was set to 50% of the length of the
given bit string (N), producing in this way a hard landscape with many local optima.
Eleven (11) age assignment test cases have been studied and analysed with the aim of
understanding which one shows the best performances; producing an efficiency
ranking; but, primarily, investigating whether one or more of the best age types in the
previous ranking are still among the top ranks in the new one. A total of 924
experiments have been performed, and from their analysis we may assert that: (i) for
the elitism version, two of the previous best ones (type3 and type4) still appear
among the top 3 and, moreover, type4 is ranked in 2nd position 2 times out of 6 in
the new efficiency ranking; (ii) again for the elitism version, looking only at the top
position, none of the previous best ones ever reaches 1st place, likely due to the lower
diversity they produce with respect to the other age assignment types; (iii) for the no
elitism variant, instead, three of the previous top 4 (type4, type2 and type0) still
appear among the best 3 in the new efficiency ranking; (iv) for this last variant, the
best of the previous ranking (type0), although never the best among the top 3, is the
one that appears in 1st position more often than the other age assignment types
studied; and, finally, (v) type10 continues to be the worst option, as already proved
in the previous study.
References
1. P. Conca, G. Stracquadanio, O. Greco, V. Cutello, M. Pavone, G. Nicosia: "Packing equal
disks in a unit square: an immunological optimization approach", 1st International Workshop
on Artificial Immune Systems (AIS), IEEE Press, pp. 1–5, 2015.
2. V. Cutello, A. G. De Michele, M. Pavone: "Escaping Local Optima via Parallelization and
Migration", VI International Workshop on Nature Inspired Cooperative Strategies for Opti-
mization (NICSO), Studies in Computational Intelligence, vol. 512, pp. 141–152, 2013.
3. V. Cutello, G. Morelli, G. Nicosia, M. Pavone, G. Scollo: "On discrete models and immunolog-
ical algorithms for protein structure prediction", Natural Computing, vol. 10, no. 1, pp. 91–102,
2011.
4. V. Cutello, G. Narzisi, G. Nicosia, M. Pavone: "Clonal Selection Algorithms: A Comparative
Case Study using Effective Mutation Potentials", 4th International Conference on Artificial
Immune Systems (ICARIS), LNCS 3627, pp. 13–28, 2005.
5. A. Di Stefano, A. Vitale, V. Cutello, M. Pavone: "How long should offspring lifespan be
in order to obtain a proper exploration?", 2016 IEEE Symposium Series on Computational
Intelligence (SSCI), pp. 1–8, 2016.
6. S. Fouladvand, A. Osareh, B. Shadgar, M. Pavone, S. Sharafi: "DENSA: An effective negative
selection algorithm with flexible boundaries for self-space and dynamic number of detectors",
Engineering Applications of Artificial Intelligence, vol. 62, pp. 359–372, 2017.
7. T. Jansen, C. Zarges: "On Benefits and Drawbacks of Aging Strategies for Randomized Search
Heuristics", Theoretical Computer Science, vol. 412, pp. 543–559, 2011.
8. S. A. Kauffman, S. Levin: "Towards a General Theory of Adaptive Walks on Rugged Land-
scapes", Journal of Theoretical Biology, vol. 128, no. 1, pp. 11–45, 1987.
9. S. A. Kauffman, E. D. Weinberger: "The NK Model of Rugged Fitness Landscapes And Its
Application to Maturation of the Immune Response", Journal of Theoretical Biology, vol. 141,
no. 2, pp. 211–245, 1989.
10. D. A. Levinthal: "Adaptation on Rugged Landscapes", Management Science, vol. 43, no. 7,
pp. 934–950, 1997.
11. M. Pavone, G. Narzisi, G. Nicosia: "Clonal Selection - An Immunological Algorithm for
Global Optimization over Continuous Spaces", Journal of Global Optimization, vol. 53, no. 4,
pp. 769–808, 2012.
12. J. D. Schaffer, L. J. Eshelman: "On crossover as an evolutionary viable strategy", 4th Inter-
national Conference on Genetic Algorithms, pp. 61–68, 1991.
13. E. Weinberger: "NP-Completeness of Kauffman's NK model, a Tuneably Rugged Fitness
Landscape", Santa Fe Institute Working Paper 96-02-003, 1996.