This document provides an introduction to multi-objective optimization using evolutionary algorithms. It discusses how evolutionary algorithms are well-suited for multi-objective optimization problems as they use a population approach to find multiple non-dominated solutions simultaneously. The document outlines the basic principles of evolutionary optimization for single-objective problems and then describes the key concepts and operating principles of evolutionary multi-objective optimization.
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
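The scalarization step at the heart of this methodology can be illustrated with a minimal sketch of an achievement scalarizing function, assuming the common augmented max-term form; the weight vector, reference point, and `rho` below are illustrative, not the paper's settings.

```python
# Hedged sketch of an augmented achievement scalarizing function (ASF).
# The reference point z_ref and weights w are assumptions for illustration.

def asf(f, z_ref, w, rho=1e-4):
    """ASF value: max_i w_i*(f_i - z_i) + rho * sum_i w_i*(f_i - z_i)."""
    terms = [wi * (fi - zi) for fi, zi, wi in zip(f, z_ref, w)]
    return max(terms) + rho * sum(terms)

# A solution closer to the reference point scores lower (better):
near = asf([0.2, 0.3], z_ref=[0.0, 0.0], w=[1.0, 1.0])
far = asf([0.8, 0.9], z_ref=[0.0, 0.0], w=[1.0, 1.0])
```

Minimizing this function for a given reference point drives the search toward the Pareto front in the direction the reference point defines, which is how the reference points generated from perturbed extremes steer the ASF over iterations.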
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural... – IJECEIAES
Variable selection is an important technique for reducing the dimensionality of data, and it is frequently used in data preprocessing for data mining. This paper presents a new variable selection algorithm that combines heuristic variable selection (HVS) with Minimum Redundancy Maximum Relevance (MRMR). We enhance the HVS method for variable selection by incorporating an MRMR filter. Our algorithm is based on a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. This approach selects new relevant variables; we evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that on most datasets HVS-MRMR selected fewer variables with higher classification accuracy than MRMR, HVS, and no variable selection. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
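The convex combination of the two relevance criteria can be sketched in a few lines; the scores and the mixing coefficient `alpha` below are illustrative placeholders, not values from the paper.

```python
# Sketch of a convex combination of two relevance scores, as described above:
# relevance = alpha * HVS_score + (1 - alpha) * MRMR_score, with alpha in [0, 1].

def combined_relevance(hvs_score, mrmr_score, alpha=0.5):
    assert 0.0 <= alpha <= 1.0, "alpha must keep the combination convex"
    return alpha * hvs_score + (1.0 - alpha) * mrmr_score

# Illustrative call: weight the MRMR criterion more heavily than HVS.
score = combined_relevance(0.8, 0.6, alpha=0.25)  # 0.25*0.8 + 0.75*0.6 = 0.65
```

Because the weights sum to one, the combined score always stays between the two criterion scores, so neither criterion can be drowned out entirely.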
Selection of Equipment by Using SAW and VIKOR Methods – IJERA Editor
This document discusses methods for selecting equipment using multi-criteria decision-making approaches. It presents the SAW (Simple Additive Weighting) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) methods for equipment selection and outlines the steps for both. It also discusses consistency testing to validate the results and ensure a consistency ratio below 0.1. The methods are then applied to a case study of equipment selection at a spring manufacturing unit to demonstrate the process.
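The SAW steps (normalize each criterion, then take a weighted sum) can be sketched as follows; the decision matrix, weights, and benefit/cost flags are illustrative, not from the case study.

```python
# Minimal SAW sketch: benefit criteria are normalized as x/max, cost criteria
# as min/x, then each alternative gets a weighted sum of normalized scores.

def saw_scores(matrix, weights, benefit):
    """matrix[i][j]: alternative i, criterion j; benefit[j]: True if higher is better."""
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        norm = []
        for j, x in enumerate(row):
            if benefit[j]:
                norm.append(x / max(cols[j]))
            else:
                norm.append(min(cols[j]) / x)
        scores.append(sum(w * v for w, v in zip(weights, norm)))
    return scores

# Two hypothetical machines scored on cost (lower better) and quality (higher better):
scores = saw_scores([[100, 3], [80, 5]], weights=[0.6, 0.4], benefit=[False, True])
```

The alternative with the highest total is selected; here the second machine wins on both normalized criteria.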
The approaches used in the literature for solving combinatorial optimization problems apply a specific methodology, or a specific combination of methodologies, to the problem at hand. However, less attention is paid to systematically modeling the solution to the given problem. Modeling helps in analyzing the various parts of the solution clearly, thereby identifying which parts of the applied methodology, or combination of methodologies, are efficient or inefficient. To find out how efficient the different parts of the applied methodology or methodologies are, it may be better to solve the given problem using the notion of hyper-heuristics: the different parts of the problem are solved with many different methodologies that are realized, implemented, and benchmarked, enabling the best hybrid methodology to be chosen. A theoretical model or representation of the problem's solution facilitates a clear proposal and realization of the different methodologies for the various parts of the solution. The literature reveals a need for a generic model that can represent the solution of combinatorial optimization problems. Therefore, inspired by the basic problem-solving behavior exhibited by animals in their day-to-day life, a new bio-inspired hyper-heuristic generic model for solving combinatorial optimization problems has been proposed. To demonstrate the application of the proposed generic model, a problem-specific model is derived that solves the web services selection/composition problem. This specialized model has been realized with a trip-planning case study, and the results are discussed.
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection – ijcsse
The document summarizes various multi-objective genetic algorithm (MOGA) techniques that have been proposed for intrusion detection. It discusses several MOGAs in the literature, including NSGA-II, SPEA2, PAES, PESA-II, and AMGA. The key advantages and limitations of these algorithms are highlighted. The document concludes that MOGAs show promise for generating optimal classifiers and classifier ensembles for intrusion detection due to their ability to find solutions to problems with multiple conflicting objectives.
"Temporal based Factorization Approach for Solving Drift and Decay in Sparse Scoring Matrix" proposes the TemporalMF++ approach to address weaknesses in existing temporal-based factorization recommendation systems. TemporalMF++ integrates long- and short-term user preferences, learned through k-means clustering and bacterial foraging optimization, to model preference drift and item popularity decay over time. Experimental results on the Netflix dataset show TemporalMF++ achieves an 18.5% improvement in prediction accuracy over benchmark methods.
Artificial Intelligence in Robot Path Planning – iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Genetic Approach to Parallel Scheduling – IOSR Journals
Genetic algorithms were used to solve the parallel task scheduling problem of minimizing overall completion time. The genetic algorithm represents each schedule as a chromosome. It initializes a population of random schedules and evaluates their fitness based on completion time. Selection, crossover, and mutation operators evolve the population over generations. The best schedule found assigns tasks to processors so as to minimize completion time. Testing on task graphs of varying sizes showed that the genetic algorithm finds improved schedules over generations and that tournament selection works better than roulette wheel selection.
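The selection operator the tests favored can be sketched as a generic tournament-selection routine (minimizing completion time); the population and fitness values below are illustrative stand-ins, not the paper's implementation.

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Draw k distinct individuals at random; the one with the lowest
    completion time (fitness) wins the tournament."""
    contenders = rng.sample(range(len(population)), k)
    best = min(contenders, key=lambda i: fitness[i])
    return population[best]

# Illustrative schedules with hypothetical completion times. With k equal to
# the population size the tournament is deterministic: the global best wins.
winner = tournament_select(["s1", "s2", "s3"], [12, 7, 30], k=3)
```

Smaller tournaments (k=2 or 3) keep selection pressure moderate, which is one common explanation for tournament selection outperforming roulette wheel selection when fitness values are close together.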
An integrated approach for enhancing ready mixed concrete selection using tec... – prjpublications
This document presents a study that evaluates criteria for selecting ready mixed concrete using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. The study identifies important criteria for selection like quality control, cost, and delivery. A survey was conducted with responses from ready mixed concrete plant managers, consultants, and contractors to gather data on the criteria. TOPSIS was then used to analyze the data and provide a ranking of respondents based on their relative closeness to the ideal solution. The results showed Respondent R3 had the highest ranking, making it the best option according to the TOPSIS analysis.
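The relative-closeness ranking that TOPSIS produces can be sketched as follows, assuming the standard vector normalization and Euclidean distances; the decision matrix, weights, and benefit flags are illustrative, not the survey data.

```python
import math

def topsis_closeness(matrix, weights, benefit):
    """Relative closeness to the ideal solution for each alternative (row)."""
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(weights))]
    v = [[weights[j] * row[j] / norms[j] for j in range(len(weights))] for row in matrix]
    # Ideal best and ideal worst per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nadir = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, ideal)
        d_worst = math.dist(row, nadir)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Two hypothetical respondents rated on two benefit criteria; the first
# dominates the second, so its closeness is 1.0 in this degenerate example.
scores = topsis_closeness([[3, 4], [1, 2]], weights=[0.5, 0.5], benefit=[True, True])
```

Ranking respondents by descending closeness reproduces the kind of ordering reported in the study (highest closeness = best option).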
Metasem: An R Package For Meta-Analysis Using Structural Equation Modelling: ... – Pubrica
This presentation explains the Metasem R package for meta-analysis using structural equation modelling:
1. SEM-based meta-analytical models are formulated for conducting meta-analysis, which is used to analyse structural relationships. The meta-analysis can be univariate, multivariate, or three-level.
2. The structural equation models (SEM) are, in general, optimized and fitted using the OpenMx package.
3. Routine analyses can be run in batch mode, either interactively or non-interactively, through R. A graphical interface such as RStudio is a convenient way for users to interact with the analysis.
Constructing a classification model is important in machine learning for a particular task. A classification process involves assigning objects to predefined groups or classes based on a number of observed attributes related to those objects. The artificial neural network is one of the classification algorithms that can be used in many application areas. This paper investigates the potential of applying the feed-forward neural network architecture to the classification of medical datasets. The migration-based differential evolution (MBDE) algorithm is chosen and applied to the feed-forward neural network to enhance the learning process, and the network learning is validated in terms of convergence rate and classification accuracy. In this paper, the MBDE algorithm with various migration policies is proposed for classification problems in medical diagnosis.
IRJET-Comparison between Supervised Learning and Unsupervised Learning – IRJET Journal
This document compares supervised and unsupervised learning models in artificial neural networks. It describes how supervised learning uses labeled training data to learn relationships between inputs and outputs, while unsupervised learning identifies hidden patterns in unlabeled data. The document outlines techniques for each like classification and regression for supervised learning, and clustering and density estimation for unsupervised learning. It then presents experiments applying multilayer perceptron (supervised) and k-means clustering (unsupervised) to datasets, finding unsupervised learning had higher accuracy. The document concludes unsupervised learning is favored for this task but supervised learning remains useful for problems with labeled data.
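The unsupervised half of the comparison can be illustrated with a bare-bones k-means loop; this is a minimal sketch (deterministic seeding, fixed iteration count, 2-D points), not the experimental setup used in the document.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points: assign each point to the nearest
    centroid, then move each centroid to the mean of its cluster."""
    centroids = points[:k]  # deterministic seeding, for the sketch only
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids

# Two well-separated groups of points converge to two centroids:
centroids = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

Unlike the multilayer perceptron, nothing here consults a label; the structure is discovered purely from the distances between points, which is the essential contrast the document draws.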
Decision support systems, Supplier selection, Information systems, Boolean al... – ijmpict
For organizations operating with a number of products/services and a number of suppliers, selecting the right supplier that meets all their requirements is a challenging job. Such organizations need a good decision support system to evaluate suppliers effectively. Several decision support systems have been reported that deal with the complex selection process of deciding on the right supplier, and many mathematical models have also been developed. This paper presents a new method, named the Bit Decision Making (BDM) method, which treats such a complex decision-making system as a collection and sequence of a reasonable number of meaningful and manageable sub-systems, identifying and processing the relevant decision criteria in each sub-system. Boolean logic and Boolean algebra are used to assign binary digit values to the selection criteria and to generate mathematical equations that correlate the inputs to the output at each stage of decision making. Each sub-system, with its own mathematical model, is treated as a standardized decision sub-system for that phase of the supplier evaluation. The sequence and connectivity of the sub-systems, along with their outputs, finally lead to the selection of the best supplier. A real-world case of evaluating information technology (IT) tenders is presented as an application of the proposed method. The paper discusses in detail the theory, methodology, application, and features of the new method.
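The bit-valued idea can be sketched as follows. This is a hypothetical illustration of one stage only: each criterion becomes a binary digit, and a supplier clears the stage only if the Boolean AND over its mandatory criteria is 1. The criteria and supplier names are invented; the paper's actual per-stage equations are specific to its case study.

```python
# Hypothetical single-stage sketch of a bit-style decision sub-system.

def stage_pass(criteria_bits):
    """Boolean AND across a stage's criteria: 1 only if every bit is 1."""
    result = 1
    for bit in criteria_bits:
        result &= bit
    return result

suppliers = {
    "S1": [1, 1, 1],  # meets all three criteria in this stage
    "S2": [1, 0, 1],  # fails one mandatory criterion
}
# Only suppliers that pass this stage move on to the next sub-system.
survivors = [name for name, bits in suppliers.items() if stage_pass(bits)]
```

Chaining several such stages, each with its own criteria and output equation, mirrors the sequence-of-sub-systems structure the method describes.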
Predicting the Presence of Learning Motivation in Electronic Learning: A New ... – TELKOMNIKA JOURNAL
Research affirms that electronic learning (e-learning) offers a great deal of advantages to the learning process: it provides learners with flexibility in terms of time, place, and pace. That ease is no guarantee of good learning outcomes, though: even when students receive the same learning material and course, outcomes differ from one student to another and are often not satisfactory enough. One prerequisite for a successful e-learning process is the presence of learning motivation, which is not easy to identify. We propose a novel model for predicting the presence of learning motivation in e-learning using attributes identified in previous research. The model was built using the WEKA toolkit, comparing fourteen tree-classifier algorithms with ten-fold cross-validation on a data set of 3,200 records. The best accuracy reached 91.1%, and four parameters were identified for predicting the presence of learning motivation in e-learning, with the aim of assisting teachers in identifying whether a student needs motivation. This study also confirmed that tree classifiers still have the best accuracy for predicting and classifying academic performance, reaching an average of 90.48% across the fourteen algorithms.
This paper presents a set of methods that uses a genetic algorithm for automatic test-data generation in software testing. For several years researchers have proposed methods for generating test data, each with different drawbacks. In this paper, we present various genetic algorithm (GA) based test methods, with different parameters, to automate structure-oriented test data generation on the basis of the internal program structure. The factors discovered are used in evaluating the fitness function of the genetic algorithm for selecting the best possible test method. These methods take the test populations as input and then evaluate the test cases for the program. This integration helps improve the overall performance of the genetic algorithm in search-space exploration and exploitation, with a better convergence rate.
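One common way such a fitness function is built (not necessarily this paper's exact formulation) is from a branch distance that rewards inputs close to covering a target condition in the program under test; the predicate `a == b` and the candidate inputs below are hypothetical.

```python
# Hedged sketch of branch-distance-based fitness for test-data generation.
# Target branch (hypothetical): the program takes the `a == b` path.

def branch_distance(a, b):
    """Distance to satisfying the predicate a == b (0 means covered)."""
    return abs(a - b)

def fitness(individual):
    """Map distance into (0, 1]; exactly 1.0 when the branch is covered."""
    a, b = individual
    return 1.0 / (1.0 + branch_distance(a, b))

# A small candidate population of (a, b) test inputs:
population = [(3, 9), (5, 6), (7, 7)]
best = max(population, key=fitness)
```

Selection then favors inputs like `(7, 7)` that already cover the branch, while near misses like `(5, 6)` score higher than distant inputs and keep the search moving toward coverage.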
Promise 2011: "A Principled Evaluation of Ensembles of Learning Machines for ..." – CS, NcState
The document discusses evaluating ensembles of learning machines for software effort estimation. It aims to determine if ensemble methods improve upon single learners, which ensembles perform best, and how to select models for different datasets. The study uses several public software effort estimation datasets and evaluates multiple ensemble techniques, including bagging and negative correlation learning, against single learners like decision trees. Statistical tests are used to rigorously compare the performance of different models.
A Software Measurement Using Artificial Neural Network and Support Vector Mac... – ijseajournal
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, and fuzzy logic. This study examines the effectiveness of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize generalization error. There is a close relationship between SVMs and radial basis function (RBF) classifiers; both have found numerous applications, such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with a Gaussian radial basis kernel function are better than those of the RBF network (RBFN). We also examine and summarize several points on which the SVM is superior to the RBFN.
Modelling the expected loss of bodily injury claims using gradient boosting – Gregg Barrett
This document summarizes an effort to model the expected loss of bodily injury claims using gradient boosting. Frequency and severity models are built separately and then combined to estimate expected loss. Gradient boosting is chosen as the modeling approach due to its flexibility. Tuning parameters like shrinkage, number of trees, and depth must be selected. The goal is predictive accuracy over interpretability. Performance is evaluated on a test set not used for model selection.
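The frequency-times-severity combination can be sketched in a few lines; the prediction values below are stand-ins for the outputs of the two gradient-boosted models, not real figures from the document.

```python
# Sketch of combining the two models described above: expected loss per
# policy = predicted claim frequency x predicted claim severity.

def expected_loss(frequency, severity):
    return frequency * severity

# Hypothetical per-policy predictions from the frequency and severity models:
policies = [
    {"freq": 0.05, "sev": 12000.0},  # 5% claim chance averaging 12,000
    {"freq": 0.02, "sev": 40000.0},  # rarer but much larger claims
]
losses = [expected_loss(p["freq"], p["sev"]) for p in policies]
```

Note how the second policy carries the higher expected loss despite its lower claim frequency, which is exactly why the two quantities are modeled separately before being multiplied.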
ISO 9126 based software quality evaluation using Choquet integral – ijseajournal
This document summarizes a research paper that evaluates software quality using the Choquet integral approach. The paper introduces aggregation functions commonly used to evaluate software quality, such as the arithmetic mean and weighted arithmetic mean. It then presents the Choquet integral as a new approach that considers interactions among quality criteria. The paper applies the Choquet integral to evaluate software alternatives based on ISO 9126 quality attributes, using real data from case studies. Evaluation results are compared to other aggregation methods to demonstrate the Choquet integral's ability to model interactions among criteria.
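The discrete Choquet integral the paper applies can be sketched as follows; the two criteria and the capacity values below are illustrative, not taken from the case studies.

```python
# Sketch of the discrete Choquet integral over a capacity (fuzzy measure).
# The capacity mu assigns a weight to every coalition of criteria, which is
# how interactions among criteria are modeled (unlike a weighted mean).

def choquet(values, mu):
    """values: {criterion: score}; mu: {frozenset of criteria: capacity},
    with mu(empty set) = 0 and mu(all criteria) = 1."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, x) in enumerate(items):
        coalition = frozenset(c for c, _ in items[i:])  # criteria scoring >= x
        total += (x - prev) * mu[coalition]
        prev = x
    return total

# Hypothetical capacity over two quality criteria:
mu = {
    frozenset(): 0.0,
    frozenset({"cost"}): 0.5,
    frozenset({"quality"}): 0.7,
    frozenset({"cost", "quality"}): 1.0,
}
score = choquet({"cost": 0.4, "quality": 0.9}, mu)  # 0.4*1.0 + 0.5*0.7 = 0.75
```

When the capacity is additive, this reduces to the weighted arithmetic mean; choosing a non-additive capacity is what lets the evaluation reward or penalize combinations of criteria.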
This document presents a model for evaluating design concepts using decision matrix logic. It describes the typical decision matrix process of identifying alternatives and criteria, assigning weights, rating alternatives, and summing scores. The model aims to make the decision matrix approach more logical, reproducible, accurate and reliable for designers. It also presents an example applying the model to selecting an optimal concept for a simple gearbox design. The computer-based model is intended to enhance the capabilities of novice designers in conceptual engineering design.
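The weight-rate-sum process described above can be sketched directly; the gearbox concepts, criteria, ratings, and weights below are invented for illustration.

```python
# Minimal decision-matrix sketch: each alternative gets sum(weight * rating)
# over the criteria, and the highest total wins.

def decision_matrix(alternatives, weights):
    totals = {name: sum(w * r for w, r in zip(weights, ratings))
              for name, ratings in alternatives.items()}
    best = max(totals, key=totals.get)
    return totals, best

# Hypothetical gearbox concepts rated 1-5 on cost, weight, and ease of
# manufacture, with criterion weights summing to 1:
concepts = {"A": [4, 3, 5], "B": [3, 5, 2]}
totals, best = decision_matrix(concepts, weights=[0.5, 0.3, 0.2])
```

Making the ratings and weights explicit like this is what gives the approach the reproducibility the model aims for: two designers using the same numbers get the same ranking.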
Parallel and distributed genetic algorithm with multiple objectives to impro... – khalil IBRAHIM
We argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so as to satisfy a set of hard constraints and reduce the cost of violating other constraints. This is an NP-hard problem, which informally means that the number of operations necessary to solve it grows exponentially with the size of the problem. Timetable construction is one of the most complicated problems facing many universities, and it grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suitable for the university environment and the constraints faced when building a lecture timetable.
Hybrid Classifier for Sentiment Analysis using Effective Pipelining – IRJET Journal
The document describes a hybrid approach for sentiment analysis of tweets that uses a pipeline of rules-based classification, lexicon-based classification, and machine learning classification. Tweets are first classified using rules and a lexicon, and only tweets that do not meet a confidence threshold are passed to a machine learning classifier. The hybrid approach aims to optimize performance, speed, accuracy, and processing requirements compared to using individual classification methods alone. The document provides background on sentiment analysis methods and evaluates the performance of the hybrid approach versus individual classifiers.
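The pipelined fallback logic can be sketched as follows; the rule, the lexicon entries, and the stub machine-learning classifier are invented placeholders standing in for the document's trained components.

```python
# Hedged sketch of the hybrid pipeline: rules first, then lexicon, then an
# ML fallback for tweets no earlier stage classifies with confidence.

LEXICON = {"love": 1, "great": 1, "hate": -1, "awful": -1}

def rule_classify(text):
    """Example rule stage: emoticons decide outright; otherwise defer."""
    if ":)" in text:
        return "positive"
    if ":(" in text:
        return "negative"
    return None

def lexicon_classify(text, threshold=1):
    """Sum word polarities; defer when the score is not confident enough."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    if abs(score) >= threshold:
        return "positive" if score > 0 else "negative"
    return None

def ml_classify(text):
    """Stand-in for a trained classifier; always returns a label."""
    return "neutral"

def classify(text):
    for stage in (rule_classify, lexicon_classify, ml_classify):
        label = stage(text)
        if label is not None:
            return label
```

Because the cheap stages answer most tweets, only the residue reaches the expensive ML stage, which is the speed/accuracy trade-off the document evaluates.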
Applying supervised and unsupervised learning approaches for movie recommend... – IAEME Publication
This paper compares supervised and unsupervised machine learning approaches for developing a movie recommender system. It applies classification and clustering techniques to movie rating data to classify movies as viewable or not and cluster users based on attributes. Experimental results show that classification accuracy is highest when 80% of data is used for training. Clustering identifies groups of similar movies and users. The authors conclude recommender systems built with machine learning can provide intelligent recommendations that approach or surpass human judgment.
A Genetic Algorithm on Optimization Test Functions – IJMERJOURNAL
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on the application of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks used to test the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima and that the optimal values found for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
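Two of the benchmark functions named above can be written down directly from their standard textbook definitions (the GA itself is omitted; only the objective functions are shown):

```python
import math

def rastrigin(xs):
    """Rastrigin function: highly multi-modal, global minimum 0 at the origin."""
    return 10 * len(xs) + sum(x * x - 10 * math.cos(2 * math.pi * x) for x in xs)

def rosenbrock(xs):
    """Rosenbrock 'banana' function: global minimum 0 at (1, ..., 1)."""
    return sum(100 * (xs[i + 1] - xs[i] ** 2) ** 2 + (1 - xs[i]) ** 2
               for i in range(len(xs) - 1))
```

An optimizer is judged by how reliably it drives these values to their known minima; Rastrigin's dense grid of local minima in particular is what makes it a stress test for premature convergence.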
Biology-Derived Algorithms in Engineering Optimization – Xin-She Yang
The document discusses biology-derived algorithms and their applications in engineering optimization. It describes several biology-inspired algorithms including genetic algorithms, photosynthetic algorithms, neural networks, and cellular automata. Genetic algorithms and photosynthetic algorithms are discussed in more detail. The document also provides examples of how these algorithms can be applied to problems in engineering optimization, such as parameter estimation and inverse analysis.
This document provides information about the band Coldplay and their song "The Scientist". It includes the band members' names and years of birth. It states that Chris Martin was inspired by social issues related to love and mistakes. The song "The Scientist" was written in 2001 and released in 2002, and is about a man who realizes he made a mistake in his life and wants to go back to the beginning. It provides the lyrics to the song. It also lists Coldplay's albums including Parachutes, A Rush of Blood to the Head, X&Y, and Viva la Vida or Death and All His Friends.
#fotocasaResponde: How can you claim a refund under mortgage floor clauses? – fotocasa
In which cases can you make a claim? What amounts can be recovered? How do you claim the refund? These are just some of the questions our users have sent us in recent days. To answer these and other queries, we are joined by Miguel Muñoz, a lawyer specializing in real estate law at Legálitas.
Etiology of cellulitis and clinical prediction of streptococcal disease... – Alex Castañeda-Sabogal
Etiology of cellulitis: a prospective study and clinical prediction of Streptococcus infection based on the observed frequency of streptococcal species.
1) Antibiotic de-escalation refers to narrowing or reducing the spectrum of antibiotics administered to critically ill patients once culture results are available.
2) Observational studies have found de-escalation therapy to be safely practiced in ICU patients and possibly associated with lower mortality and shorter hospital stays.
3) However, randomized trials have found possible higher risks of reinfection with de-escalation, without effects on mortality. Overall, de-escalation appears to be a well-tolerated strategy but is not widely adopted in practice.
Genetic Approach to Parallel SchedulingIOSR Journals
Genetic algorithms were used to solve the parallel task scheduling problem of minimizing overall completion time. The genetic algorithm represents each scheduling as a chromosome. It initializes a population of random schedules and evaluates their fitness based on completion time. Selection, crossover, and mutation operators evolve the population over generations. The best schedule found schedules tasks to processors to minimize completion time. Testing on task graphs of varying sizes showed that the genetic algorithm finds improved schedules over generations and that tournament selection works better than roulette wheel selection.
An integrated approach for enhancing ready mixed concrete selection using tec...prjpublications
This document presents a study that evaluates criteria for selecting ready mixed concrete using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. The study identifies important criteria for selection like quality control, cost, and delivery. A survey was conducted with responses from ready mixed concrete plant managers, consultants, and contractors to gather data on the criteria. TOPSIS was then used to analyze the data and provide a ranking of respondents based on their relative closeness to the ideal solution. The results showed Respondent R3 had the highest ranking, making it the best option according to the TOPSIS analysis.
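The TOPSIS ranking step described above can be sketched in a few lines. This is not the study's code; the matrix, weights, and criterion directions are invented for illustration. Alternatives are scored by relative closeness to an ideal solution.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, cols = criteria; benefit[j] is True
    where larger is better (False = cost criterion)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]                       # weighted, normalized matrix
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)             # distance to ideal
        d_neg = math.dist(row, worst)             # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg))    # relative closeness in [0, 1]
    return scores

# hypothetical suppliers scored on quality (benefit), cost, delivery delay
mat = [[8, 120, 3], [6, 100, 2], [9, 150, 4]]
scores = topsis(mat, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
print(scores)
```

The alternative with the highest closeness score is the recommended choice.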
Metasem: An R Package For Meta-Analysis Using Structural Equation Modelling: ...Pubrica
This presentation explains metaSEM, an R package for meta-analysis using structural equation modelling:
1. SEM-based meta-analytic models are formulated for conducting meta-analysis, which is used to analyse structural relationships. The package supports univariate, multivariate, and three-level meta-analysis.
2. The structural equation models are, in general, optimized and fitted using the OpenMx package.
3. Routine analyses can be run in batch mode, either interactively or non-interactively, through R. A graphical interface such as RStudio is a convenient way for users to interact with the analysis.
Constructing a classification model is important in machine learning for a particular task. A classification process involves assigning objects to predefined groups or classes based on a number of observed attributes related to those objects. The artificial neural network is one of the classification algorithms that can be used in many application areas. This paper investigates the potential of applying the feed-forward neural network architecture to the classification of medical datasets. The migration-based differential evolution algorithm (MBDE) is chosen and applied to the feed-forward neural network to enhance the learning process, and the network learning is validated in terms of convergence rate and classification accuracy. In this paper, an MBDE algorithm with various migration policies is proposed for classification problems in medical diagnosis.
IRJET-Comparison between Supervised Learning and Unsupervised LearningIRJET Journal
This document compares supervised and unsupervised learning models in artificial neural networks. It describes how supervised learning uses labeled training data to learn relationships between inputs and outputs, while unsupervised learning identifies hidden patterns in unlabeled data. The document outlines techniques for each like classification and regression for supervised learning, and clustering and density estimation for unsupervised learning. It then presents experiments applying multilayer perceptron (supervised) and k-means clustering (unsupervised) to datasets, finding unsupervised learning had higher accuracy. The document concludes unsupervised learning is favored for this task but supervised learning remains useful for problems with labeled data.
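The k-means clustering half of the comparison above is simple enough to sketch from scratch. This is not the document's experiment; the synthetic 2-D "blobs" and the deterministic initialization are invented for illustration.

```python
import random

def kmeans(points, k, iters=50):
    """Plain k-means: repeatedly assign points to the nearest centre,
    then move each centre to the mean of its cluster."""
    # simple deterministic init: spread centres across the point list
    # (a real implementation would use random restarts or k-means++)
    centres = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centres[c][0]) ** 2
                                + (p[1] - centres[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # guard against an empty cluster
                centres[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres

random.seed(0)
# two well-separated synthetic blobs standing in for two hidden groups
blob_a = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)]
blob_b = [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(30)]
centres = kmeans(blob_a + blob_b, k=2)
print(sorted(centres))
```

With no labels at all, the two recovered centres land near the true blob means, which is the "hidden pattern" discovery the abstract refers to.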
Decision support systems, Supplier selection, Information systems, Boolean al...ijmpict
For organizations operating with a number of products/services and a number of suppliers, selecting the right supplier that meets all their requirements is a challenging job. Such organizations need a good decision support system to evaluate suppliers effectively. Several decision support systems have been reported that deal with the complex selection process of deciding on the right supplier, and many mathematical models have also been developed. This paper presents a new method, named the Bit Decision Making (BDM) method, which treats such a complex decision-making system as a collection and sequence of a reasonable number of meaningful and manageable sub-systems, identifying and processing the relevant decision criteria in each sub-system. Boolean logic and Boolean algebra are used to assign binary values to the selection criteria and to generate mathematical equations that correlate the inputs to the output at each stage of decision making. Each sub-system, with its own mathematical model, is treated as a standardized decision sub-system for that phase of supplier evaluation. The sequence and connectivity of the sub-systems, along with their outputs, finally lead to the selection of the best supplier. A real-world case of evaluating information technology (IT) tenders is used to demonstrate the proposed method. The paper discusses in detail the theory, methodology, application, and features of the new method.
Predicting the Presence of Learning Motivation in Electronic Learning: A New ...TELKOMNIKA JOURNAL
Research affirms that electronic learning (e-learning) has a great number of advantages for the learning process; it provides learners with flexibility in terms of time, place, and pace. That ease, however, is no guarantee of good learning outcomes: even when students receive the same learning material and course, outcomes differ from one learner to another and are often unsatisfactory. One prerequisite for a successful e-learning process is the presence of learning motivation, which is not easy to identify. We propose a novel model for predicting the presence of learning motivation in e-learning using attributes identified in previous research. The model was built with the WEKA toolkit by comparing fourteen tree-classifier algorithms under ten-fold cross-validation on 3,200 records. The best accuracy reached 91.1%, and four parameters were identified for predicting the presence of learning motivation in e-learning, with the aim of helping teachers identify whether a student needs motivation. This study also confirmed that tree classifiers remain well suited to predicting and classifying academic performance, reaching an average accuracy of 90.48% across the fourteen algorithms.
This paper presents a set of methods that use a genetic algorithm for automatic test-data generation in software testing. For several years researchers have proposed various methods for generating test data, each with different drawbacks. In this paper, we present several Genetic Algorithm (GA) based test methods with different parameters to automate structure-oriented test data generation on the basis of internal program structure. The factors discovered are used in evaluating the fitness function of the genetic algorithm for selecting the best possible test method. These methods take the test populations as input and then evaluate the test cases for the program. This integration helps improve the overall performance of the genetic algorithm in search-space exploration and exploitation, with a better convergence rate.
Promise 2011: "A Principled Evaluation of Ensembles of Learning Machines for ...CS, NcState
The document discusses evaluating ensembles of learning machines for software effort estimation. It aims to determine if ensemble methods improve upon single learners, which ensembles perform best, and how to select models for different datasets. The study uses several public software effort estimation datasets and evaluates multiple ensemble techniques, including bagging and negative correlation learning, against single learners like decision trees. Statistical tests are used to rigorously compare the performance of different models.
A Software Measurement Using Artificial Neural Network and Support Vector Mac...ijseajournal
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, fuzzy logic, etc. This study examines the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize generalization error. There is a close relationship between SVMs and radial basis function (RBF) classifiers; both have found numerous applications, such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with the Gaussian radial basis kernel function are better than those of the RBFN. We also examine and summarize several points on which the SVM is superior to the RBFN.
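The Gaussian radial basis kernel at the centre of the comparison above is a one-line formula: K(x, z) = exp(-gamma * ||x - z||^2). A minimal sketch (not the study's code; the sample points and gamma value are invented for illustration):

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian radial basis kernel: exp(-gamma * ||x - z||^2).
    Similarity is 1 for identical inputs and decays with squared distance."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

X = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
# Gram (kernel) matrix: symmetric, with 1.0 on the diagonal
K = [[rbf_kernel(a, b) for b in X] for a in X]
for row in K:
    print([round(v, 4) for v in row])
```

In an SVM, this kernel replaces the inner product, so the decision function becomes a weighted sum of kernel evaluations against the support vectors, which is exactly what links SVMs to RBF classifiers.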
Modelling the expected loss of bodily injury claims using gradient boostingGregg Barrett
This document summarizes an effort to model the expected loss of bodily injury claims using gradient boosting. Frequency and severity models are built separately and then combined to estimate expected loss. Gradient boosting is chosen as the modeling approach due to its flexibility. Tuning parameters like shrinkage, number of trees, and depth must be selected. The goal is predictive accuracy over interpretability. Performance is evaluated on a test set not used for model selection.
ISO 9126-based software quality evaluation using the Choquet integral ijseajournal
This document summarizes a research paper that evaluates software quality using the Choquet integral approach. The paper introduces aggregation functions commonly used to evaluate software quality, such as the arithmetic mean and weighted arithmetic mean. It then presents the Choquet integral as a new approach that considers interactions among quality criteria. The paper applies the Choquet integral to evaluate software alternatives based on ISO 9126 quality attributes, using real data from case studies. Evaluation results are compared to other aggregation methods to demonstrate the Choquet integral's ability to model interactions among criteria.
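The discrete Choquet integral that the paper applies can be computed in a few lines: sort the criterion scores, and weight each increment by the capacity of the coalition of criteria scoring at least that much. This sketch is not from the paper; the two-criterion capacity and scores are invented to show a superadditive interaction.

```python
def choquet(values, mu):
    """Discrete Choquet integral of criterion scores `values` with respect
    to a capacity `mu` mapping frozensets of criterion indices to [0, 1]."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])   # criteria scoring >= values[i]
        total += (values[i] - prev) * mu[coalition]
        prev = values[i]
    return total

# hypothetical capacity over two criteria, e.g. functionality (0) and
# reliability (1); having both together is worth more than the sum of parts
mu = {frozenset(): 0.0,
      frozenset({0}): 0.3,
      frozenset({1}): 0.4,
      frozenset({0, 1}): 1.0}
print(choquet([0.6, 0.8], mu))
```

With an additive capacity this reduces to a weighted arithmetic mean; the non-additive capacity is what lets the method model interactions among quality criteria.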
This document presents a model for evaluating design concepts using decision matrix logic. It describes the typical decision matrix process of identifying alternatives and criteria, assigning weights, rating alternatives, and summing scores. The model aims to make the decision matrix approach more logical, reproducible, accurate and reliable for designers. It also presents an example applying the model to selecting an optimal concept for a simple gearbox design. The computer-based model is intended to enhance the capabilities of novice designers in conceptual engineering design.
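The decision matrix process described above (criteria, weights, ratings, summed scores) reduces to a weighted sum. A minimal sketch, with hypothetical gearbox concepts and made-up weights and ratings:

```python
def rank_concepts(weights, ratings):
    """Weighted-sum decision matrix: score each alternative by the
    weighted sum of its criterion ratings, and return them best first."""
    scores = {alt: sum(w * r for w, r in zip(weights, rs))
              for alt, rs in ratings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical concepts rated 1-5 on cost, weight, and reliability
weights = [0.5, 0.2, 0.3]
ratings = {"concept A": [3, 4, 5],
           "concept B": [4, 3, 3],
           "concept C": [2, 5, 4]}
ranked = rank_concepts(weights, ratings)
print(ranked)
```

Making the weights and ratings explicit like this is what gives the approach the reproducibility the model aims for: two designers with the same table get the same ranking.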
Parallel and distributed genetic algorithm with multiple objectives to impro...khalil IBRAHIM
We argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so that a set of hard constraints is satisfied and the cost of violating soft constraints is reduced. This is an NP-hard problem, a class of problems for which, informally, the number of operations needed to find a solution grows exponentially with the size of the problem. Constructing a timetable is one of the most complicated problems facing many universities, and it grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suitable for a university environment and the constraints it faces when building a lecture timetable.
Hybrid Classifier for Sentiment Analysis using Effective PipeliningIRJET Journal
The document describes a hybrid approach for sentiment analysis of tweets that uses a pipeline of rules-based classification, lexicon-based classification, and machine learning classification. Tweets are first classified using rules and a lexicon, and only tweets that do not meet a confidence threshold are passed to a machine learning classifier. The hybrid approach aims to optimize performance, speed, accuracy, and processing requirements compared to using individual classification methods alone. The document provides background on sentiment analysis methods and evaluates the performance of the hybrid approach versus individual classifiers.
Applying supervised and un supervised learning approaches for movie recommend...IAEME Publication
This paper compares supervised and unsupervised machine learning approaches for developing a movie recommender system. It applies classification and clustering techniques to movie rating data to classify movies as viewable or not and cluster users based on attributes. Experimental results show that classification accuracy is highest when 80% of data is used for training. Clustering identifies groups of similar movies and users. The authors conclude recommender systems built with machine learning can provide intelligent recommendations that approach or surpass human judgment.
A Genetic Algorithm on Optimization Test FunctionsIJMERJOURNAL
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on applications of GAs rather than theoretical justifications. In this paper, we examine and justify the suitability of genetic algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple genetic algorithm was used to solve four standard, complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks for testing the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima and that the reported optimal values for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
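A simple real-coded GA of the kind the abstract describes can be sketched against the Rastrigin function, whose global minimum is 0 at the origin. This is not the paper's algorithm; the truncation selection, blend crossover, and parameter values are assumptions chosen for brevity.

```python
import math, random

def rastrigin(x):
    """Global minimum 0 at the origin; many regularly spaced local minima."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def ga_minimize(f, dim=2, pop_size=60, generations=200, bound=5.12):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]          # truncation selection keeps the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            # blend crossover (midpoint) plus small Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=f)

random.seed(42)
best = ga_minimize(rastrigin)
print(rastrigin(best))
```

The run typically ends very close to the global optimum of 0, illustrating the convergence behaviour the paper studies on these benchmarks.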
Biology-Derived Algorithms in Engineering OptimizationXin-She Yang
The document discusses biology-derived algorithms and their applications in engineering optimization. It describes several biology-inspired algorithms including genetic algorithms, photosynthetic algorithms, neural networks, and cellular automata. Genetic algorithms and photosynthetic algorithms are discussed in more detail. The document also provides examples of how these algorithms can be applied to problems in engineering optimization, such as parameter estimation and inverse analysis.
This document provides instructions for saving a PowerPoint presentation as a movie file and uploading it to the Safari Montage platform so that it can be viewed seamlessly within Safari without needing to launch PowerPoint. The key steps are to set transitions in PowerPoint, save the presentation as a movie file, log into Safari Montage, upload the movie file, add it to a playlist, and then it will play like any other movie in Safari Montage.
IN THIS SUMMARY
Economists and business leaders alike are still trying to understand the forces that led to the United States’ current economic woes. Some believe it is a down financial cycle or a recession, but in Aftershock, David Wiedemer, Robert Wiedemer, and Cindy Spitzer detail why they believe that neither explanation is correct. They describe what they have termed a Bubblequake–a popping of the real estate bubble, the private debt bubble, the stock market bubble, and the discretionary spending bubble. More alarming is that the economy is not going back to the way it was before because there are still more economic bubbles waiting to burst.
1) The document provides 11 examples involving calculations using the normal distribution to solve probability problems related to business, quality control, and sampling.
2) Many of the examples ask the reader to calculate the probability of an event occurring or the expected number of outcomes given data about the average and standard deviation of a normal distribution.
3) The final example discusses whether a concession manager should hire additional employees given the potential costs and probabilities associated with the expected attendance at a hockey game.
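Calculations like those in the examples above are directly expressible with Python's standard-library `statistics.NormalDist`. The mean, standard deviation, and scenario here are invented for illustration, not taken from the document's examples.

```python
from statistics import NormalDist

# assume daily demand is normal with mean 500 units and std dev 50
demand = NormalDist(mu=500, sigma=50)

# P(demand exceeds 575), i.e. z = 1.5 standard deviations above the mean
p_over = 1 - demand.cdf(575)

# stock level needed to cover demand on 95% of days (inverse CDF)
stock_95 = demand.inv_cdf(0.95)

print(round(p_over, 4), round(stock_95, 1))  # prints 0.0668 582.2
```

The same two operations, the CDF for "probability of an event" and the inverse CDF for "value meeting a target probability", cover most of the business and quality-control examples the document works through.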
Presentation from Francisco Sanchez Pons at parallel session on FOTseuroFOT
The document discusses the requirements and development of a data logging system for the EuroFOT field trial. It describes:
1) The EuroFOT challenge of collecting vehicle and driver behavior data from nearly 1,000 vehicles over 12 months.
2) The mechanical, electrical, and functional requirements for the data logging system, including size, power needs, data formats, and wireless capabilities.
3) The evolution of the CTAG datalogger from version 1.0 to 2.0 to meet EuroFOT's needs, such as adding 4 CAN channels, GPRS, and remote firmware updates.
4) Test results showing the version 2.0 datalogger met requirements and was selected
Inscape makes smart workspaces through 16 approaches:
1. They focus on applications and customized solutions for unique client needs and spaces.
2. Their infrastructure accommodates power, lighting, and more, allowing infinite flexibility through movable walls, doors, and ceilings.
3. They maximize real estate through compact vertical storage and 2,808 paint colors.
15 Spanish artists and 11 Portuguese artists were interviewed about their experiences with artistic mobility. Half of the Spanish artists were between 30-40 years old and most Portuguese artists were also in that age range. The interviews uncovered professional, personal, and artistic benefits from mobility including new knowledge, critical feedback, and personal fulfillment. Artists integrated into their host communities in varying degrees from mere contact to collaboration. Mobility enhanced both artistic and personal learning in intertwined ways. Direct effects included international recognition and new professional opportunities. Artists developed greater social and cultural awareness from experiencing other ways of life.
Pubcon 2013 - Post mortem banned site forensicsBrian McDowell
Conductor is a company founded in 2006 that provides SEO support and data collection to over 5,000 brands. Brian McDowell from Conductor gave a presentation on SEO forensics tools and techniques. The presentation included case studies on recovering from link penalties and determining if a site was filtered or penalized. Conductor's platform collects 6 terabytes of SEO data weekly to help clients understand SEO performance.
This document proposes a reusable and eco-friendly packaging system for the fashion industry. It notes current problems with packaging waste and lack of organization. It draws inspiration from concepts like cradle-to-cradle design and sharing models. The proposed product line includes durable bags, boxes, and packing materials made from 100% renewable and recyclable materials. The business model involves partnering with firms to test the system and lower costs over time through a subscription service. The goal is to help reduce packaging waste while providing an affordable and sustainable solution for businesses.
This is an article I wrote in M&O 2012/4.
Change is my profession. Theories and models help me as a consultant to map out a complex reality. Unfortunately, they also fall short, because they present reality just a bit too simply. I therefore prefer management books that do not approach changing organizations and people as something simple, linear, and plannable, but instead emphasize complexity and unruliness. In this article I describe the doing-and-thinking process, intended not as a prescriptive learning model but as a heuristic aid. It contains familiar elements, but it also deviates from them, for example through the built-in friction between doing and thinking as the driver of change; it is a reactive model, not a goal-seeking one. The central idea is that thinking and doing are stacked sub-processes: thinking cannot happen without doing, but doing can happen without thinking. The crux is not choosing when to do or when to think, but how to let these two processes run well together.
The document discusses an automatic process but provides no meaningful details about what was automatic or what the process entailed. It contains a long string of non-printing characters between "Automatic" and "END" with no other context.
This document provides an overview of a survey of multi-objective evolutionary algorithms for data mining tasks. It discusses key concepts in multi-objective optimization and evolutionary algorithms. It also reviews common data mining tasks like feature selection, classification, clustering, and association rule mining that are often formulated as multi-objective problems and solved using multi-objective evolutionary algorithms. The survey focuses on reviewing applications of multi-objective evolutionary algorithms for feature selection and classification in part 1, and applications for clustering, association rule mining and other tasks in part 2.
Application Issues For Multiobjective Evolutionary AlgorithmsAmy Isleb
This document discusses the application of multiobjective evolutionary algorithms to real-world optimization problems. It addresses issues in designing MOEAs such as problem-specific data structures, evolutionary operators, and parameter determination. Three application examples are provided in electronic circuit design, design centering problems, and project scheduling to illustrate different aspects of MOEA design, implementation, and use for real-world problems. The document also covers basic concepts in multiobjective optimization and evolutionary algorithms.
Differential Evolution (DE) is a well-known optimization strategy that can solve nonlinear and comprehensive problems. DE is a simple, population-based, probabilistic approach to global optimization. It has apparently outperformed a number of evolutionary algorithms and other search heuristics, such as Particle Swarm Optimization, when tested on both benchmark and real-world problems. Nevertheless, DE, like other probabilistic optimization algorithms, sometimes exhibits premature convergence and stagnates at a suboptimal position. In order to avoid stagnation while maintaining a good convergence speed, an innovative search strategy, named memetic search in DE, is introduced. In the proposed strategy, the position update equation is modified according to a memetic search scheme in which a better solution participates more often in the position update procedure. The position update equation is inspired by the memetic search in the artificial bee colony algorithm. The proposed strategy is named Memetic Search in Differential Evolution (MSDE). To prove the efficiency and efficacy of MSDE, it is tested on 8 benchmark optimization problems and three real-world optimization problems. A comparative analysis has also been carried out between the proposed MSDE and the original DE. Results show that the proposed algorithm outperforms basic DE and its recent variants in most of the experiments.
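For reference, the basic DE that MSDE builds on is short enough to sketch in full. This is the classic DE/rand/1/bin scheme, not the paper's MSDE; the sphere objective and the parameter values F = 0.5, CR = 0.9 are conventional defaults chosen for illustration.

```python
import random

def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, generations=150):
    """Classic DE/rand/1/bin: each target vector is challenged by a trial
    vector built from a scaled difference of two other population members."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(generations):
        for i in range(np_):
            others = [p for j, p in enumerate(pop) if j != i]
            a, b, c = random.sample(others, 3)
            j_rand = random.randrange(dim)        # at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if random.random() < CR or d == j_rand else pop[i][d]
                     for d in range(dim)]
            if f(trial) <= f(pop[i]):             # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

random.seed(3)
sphere = lambda x: sum(v * v for v in x)
best = de_minimize(sphere, bounds=[(-5, 5)] * 3)
print(sphere(best))
```

MSDE's contribution, per the abstract, is to modify the position update so that better solutions participate more often, rather than drawing `a`, `b`, `c` uniformly as done here.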
A Genetic Algorithm For Constraint Optimization Problems With Hybrid Handling...Jim Jimenez
This document describes a genetic algorithm for solving constrained optimization problems. It proposes handling constraints using a hybrid scheme that initially searches only the feasible space without calculating the objective function, and then uses a penalty function once the feasible space is reached. The algorithm uses special operators like mutation based on eugenics and local search of the best solutions. It was tested on 11 benchmark functions and showed competitive performance compared to other works.
A Classification And Comparison Of Credit Assignment Strategies In Multiobjec...Dustin Pytko
The document discusses adaptive operator selection (AOS) strategies for multiobjective optimization problems. It introduces a classification of credit assignment strategies for evaluating operator performance in AOS. The classification groups strategies based on the sets of solutions used to assess impact and the fitness function used to compare solutions. The paper proposes five new credit assignment strategies to fill gaps in the classification. It then experimentally compares the performance of nine total credit assignment strategies on benchmark problems. The results show that most strategies improve the generality of multiobjective evolutionary algorithms compared to a random operator selector.
This document summarizes a lecture on evolutionary computing. It defines evolutionary computing as problem-solving techniques based on biological evolution principles like natural selection and genetic inheritance. These techniques are increasingly used to solve both practical and scientific problems. Evolutionary algorithms apply principles of evolution like selection, recombination, and mutation to populations of potential solutions to iteratively find better approximations. Over generations, the population evolves to include individuals better suited to their environment. The document outlines the structure and process of evolutionary algorithms, comparing them to traditional search methods.
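The selection-recombination-mutation loop the lecture outlines can be shown on the textbook OneMax problem (maximize the number of 1-bits). This is a generic illustrative sketch, not the lecture's material; the bitstring encoding and parameters are assumptions.

```python
import random

def one_max(bits):
    """Toy fitness: the number of 1s; the optimum is the all-ones string."""
    return sum(bits)

def evolve(n_bits=30, pop_size=20, generations=60, p_mut=0.03):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=one_max, reverse=True)
        parents = pop[: pop_size // 2]
        offspring = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)                 # recombination
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut)          # bit-flip mutation
                     for bit in child]
            offspring.append(child)
        pop = parents + offspring
    return max(pop, key=one_max)

random.seed(7)
best = evolve()
print(one_max(best))
```

Over generations the population drifts toward the all-ones string, which is the "individuals better suited to their environment" dynamic the lecture describes.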
One of the obstacles that hinder the use of mutation testing is its impracticality; the two main contributors are the large number of mutants and the large number of test cases involved in the process. Researchers usually try to address this problem by optimizing the mutants and the test cases separately. In this research, we tackle both mutant optimization and test-case optimization simultaneously using a coevolution optimization method. Coevolution is chosen for the mutation testing problem because it works by optimizing multiple collections (populations) of solutions. This research found that coevolution is better suited to multi-problem optimization than single-population methods (e.g. the genetic algorithm); we also propose a new indicator to determine the optimal coevolution cycle. Experiments were carried out on an artificial case, a laboratory case, and a real case.
This document discusses various artificial intelligence techniques for robot path planning, including ant colony optimization. It provides background on particle swarm optimization, genetic algorithms, tabu search, simulated annealing, reactive search optimization, and ant colony algorithms. It then proposes a solution for robotic path planning that uses ant colony optimization. The proposed solution involves defining a source and destination point for the robot, moving it forward one step at a time while checking for obstacles, having it take three steps back if an obstacle is encountered, and applying ant colony optimization algorithms to help the robot find an optimal path to bypass obstacles and reach the destination point.
Multi-Objective Optimization Using Evolutionary Algorithms:
An Introduction
Kalyanmoy Deb
Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, PIN 208016, India
deb@iitk.ac.in
http://www.iitk.ac.in/kangal/deb.htm
February 10, 2011
KanGAL Report Number 2011003
Abstract
As the name suggests, multi-objective optimization involves optimizing a number of objectives simultaneously. The problem becomes challenging when the objectives conflict with each other, that is, when the optimal solution of one objective function differs from that of another. In solving such problems, with or without the presence of constraints, a set of trade-off optimal solutions arises, popularly known as Pareto-optimal solutions. Due to this multiplicity of solutions, evolutionary algorithms, which use a population approach in their search procedure, were proposed as a suitable means of solving such problems. Starting with parameterized procedures in the early nineties, the so-called evolutionary multi-objective optimization (EMO) algorithms now form an established field of research and application, with many dedicated texts and edited books, commercial software and numerous freely downloadable codes, a biennial conference series running successfully since 2001, special sessions and workshops held at all major evolutionary computing conferences, and full-time researchers from universities and industries all around the globe. In this chapter, we provide a brief introduction to EMO's operating principles and outline current research and application studies of EMO.
1 Introduction
In the past 15 years, evolutionary multi-objective optimization (EMO) has become a popular and useful
field of research and application. Evolutionary optimization (EO) algorithms use a population based
approach in which more than one solution participates in an iteration and evolves a new population of
solutions in each iteration. The reasons for their popularity are many: (i) EOs do not require any derivative information, (ii) EOs are relatively simple to implement, and (iii) EOs are flexible and have wide-spread applicability. While the use of a population of solutions may sound redundant for solving single-objective optimization problems, particularly for finding a single optimal solution, an EO procedure is a perfect choice for solving multi-objective optimization problems [1]. Multi-objective optimization problems, by nature, give rise to a set of Pareto-optimal solutions which need further processing to arrive at a single preferred solution. For the first task, it is quite natural to use an EO, because the use of a population in each iteration helps an EO to simultaneously find multiple non-dominated solutions, which portray trade-offs among the objectives, in a single simulation run.
In this chapter, we present a brief description of an evolutionary optimization procedure for single-
objective optimization. Thereafter, we describe the principles of evolutionary multi-objective optimization.
Then, we discuss some salient developments in EMO research. It is clear from these discussions that EMO
is not only useful in solving multi-objective optimization problems, but is also helping to solve other kinds of optimization problems better than they are traditionally solved. As
a by-product, EMO-based solutions are helping to reveal important hidden knowledge about a problem
– a matter which is difficult to achieve otherwise. EMO procedures with a decision making concept are
discussed as well. Some of these ideas require further detailed studies and this chapter mentions some such
current and future topics in this direction.
2 Evolutionary Optimization (EO) for Single-Objective Optimization
Evolutionary optimization principles are different from classical optimization methodologies in the following
main ways [2]:
• An EO procedure does not usually use gradient information in its search process. Thus, EO method-
ologies are direct search procedures, allowing them to be applied to a wide variety of optimization
problems.
• An EO procedure uses more than one solution (a population approach) in an iteration, unlike most classical optimization algorithms, which update one solution in each iteration (a point approach).
The use of a population has a number of advantages: (i) it provides an EO with a parallel processing
power achieving a computationally quick overall search, (ii) it allows an EO to find multiple optimal
solutions, thereby facilitating the solution of multi-modal and multi-objective optimization problems,
and (iii) it provides an EO with the ability to normalize decision variables (as well as objective
and constraint functions) within an evolving population using the population-best minimum and
maximum values.
• An EO procedure uses stochastic operators, unlike deterministic operators used in most classical
optimization methods. The operators tend to achieve a desired effect by using higher probabilities
towards desirable outcomes, as opposed to using predetermined and fixed transition rules. This allows
an EO algorithm to negotiate multiple optima and other complexities better and provide them with
a global perspective in their search.
An EO begins its search with a population of solutions usually created at random within a specified lower
and upper bound on each variable. Thereafter, the EO procedure enters into an iterative operation of
updating the current population to create a new population by the use of four main operators: selection,
crossover, mutation and elite-preservation. The operation stops when one or more pre-specified termination
criteria are met.
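The overall iterative cycle described above can be sketched as follows. This is only an illustrative sketch: the specific operator choices (binary tournament, blend crossover, Gaussian mutation, single-elite preservation) and all parameter values are assumptions, not prescriptions.

```python
import random

def evolve(f, bounds, pop_size=20, generations=100, pc=0.9, pm=None):
    """Minimal generational EA sketch (minimization): binary tournament
    selection, blend crossover, Gaussian mutation, one-elite preservation."""
    n = len(bounds)
    pm = 1.0 / n if pm is None else pm           # default: one mutation/solution
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [f(x) for x in pop]
        elite = min(zip(fits, pop))[1][:]        # elite preservation
        def select():                            # binary tournament selection
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if fits[a] < fits[b] else pop[b]
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            child = ([(u + v) / 2.0 for u, v in zip(p1, p2)]  # blend crossover
                     if random.random() < pc else p1[:])
            for i, (lo, hi) in enumerate(bounds):             # Gaussian mutation
                if random.random() < pm:
                    child[i] = min(max(child[i] + random.gauss(0.0, 0.1), lo), hi)
            children.append(child)
        children[0] = elite                      # keep best-so-far
        pop = children
    fits = [f(x) for x in pop]
    return min(zip(fits, pop))
```

Applied to a two-variable sphere function, `evolve(lambda x: sum(v*v for v in x), [(-5, 5)] * 2)` returns the best fitness found and the corresponding solution vector.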
The initialization procedure usually involves a random creation of solutions. If knowledge of some good solutions is available for a problem, it is better to use such information in creating the initial population. Elsewhere [3], it is highlighted that for solving complex real-world optimization problems, such
a customized initialization is useful and also helpful in achieving a faster search. After the population
members are evaluated, the selection operator chooses above-average (in other words, better) solutions
with a larger probability to fill an intermediate mating pool. For this purpose, several stochastic selection
operators exist in the EO literature. In its simplest form (called the tournament selection [4]), two solutions
can be picked at random from the evaluated population and the better of the two (in terms of its evaluated
order) can be picked.
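The tournament selection just described may be sketched as follows; the population and fitness list are assumed inputs for illustration, and minimization is assumed:

```python
import random

def binary_tournament(pop, fitness):
    """Pick two distinct solutions at random and return a copy of the
    better one (smaller fitness value, assuming minimization)."""
    i, j = random.sample(range(len(pop)), 2)
    return list(pop[i] if fitness[i] < fitness[j] else pop[j])

# Filling a mating pool of the same size as the population:
# mating_pool = [binary_tournament(pop, fits) for _ in pop]
```

Note that under this scheme the worst member of the population can never win a tournament, while the best member wins every tournament it enters.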
The ‘variation’ operator is a collection of a number of operators (such as crossover, mutation etc.)
which are used to generate a modified population. The purpose of the crossover operator is to pick two or
more solutions (parents) randomly from the mating pool and create one or more solutions by exchanging
information among the parent solutions. The crossover operator is applied with a crossover probability
(pc ∈ [0, 1]), indicating the proportion of population members participating in the crossover operation. The
remaining (1 − pc ) proportion of the population is simply copied to the modified (child) population. In
the context of real-parameter optimization having n real-valued variables and involving a crossover with
two parent solutions, each variable may be crossed at a time. A probability distribution which depends
on the difference between the two parent variable values is often used to create two new numbers as child
values around the two parent values [5]. Besides the variable-wise recombination operators, vector-wise recombination operators have also been suggested to propagate the correlation among variables of parent solutions to the created child solutions [6, 7].
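One such parent-difference-dependent distribution is the simulated binary crossover (SBX) used in the simulation later in this section. The following is a sketch of its per-variable form (without the boundary handling that a full implementation would add):

```python
import random

def sbx(x1, x2, eta_c=10.0):
    """Simulated Binary Crossover sketch: each variable is crossed using a
    spread factor beta whose distribution is controlled by eta_c; larger
    eta_c produces children closer to the parents."""
    c1, c2 = [], []
    for p1, p2 in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
        c1.append(0.5 * ((1 + beta) * p1 + (1 - beta) * p2))
        c2.append(0.5 * ((1 - beta) * p1 + (1 + beta) * p2))
    return c1, c2
```

A useful property of this operator is that the two children are symmetric about the parents: for every variable, c1 + c2 equals p1 + p2.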
Each child solution, created by the crossover operator, is then perturbed in its vicinity by a mutation
operator [2]. Every variable is mutated with a mutation probability pm , usually set as 1/n (n is the number
of variables), so that on an average one variable gets mutated per solution. In the context of real-parameter
optimization, a simple Gaussian probability distribution with a predefined variance can be used with its
mean at the child variable value [1]. This operator allows an EO to search locally around a solution and is independent of the location of other solutions in the population.
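The Gaussian mutation just described may be sketched as follows; the standard deviation value is an illustrative assumption:

```python
import random

def gaussian_mutation(x, sigma=0.1, pm=None):
    """Mutate each variable with probability pm (default 1/n) by adding a
    zero-mean Gaussian perturbation of standard deviation sigma."""
    n = len(x)
    pm = 1.0 / n if pm is None else pm
    return [xi + random.gauss(0.0, sigma) if random.random() < pm else xi
            for xi in x]
```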
The elitism operator combines the old population with the newly created population and chooses to
keep better solutions from the combined population. Such an operation makes sure that an algorithm has
a monotonically non-degrading performance. Asymptotic convergence of a specific EO having elitism and mutation as two essential operators has been proven in [8].
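For a minimization problem, this combine-and-truncate elitism may be sketched as follows:

```python
def elitist_survival(parents, children, f):
    """Combine parent and child populations and keep the better half, so
    the best fitness in the population never degrades (minimization)."""
    combined = parents + children
    combined.sort(key=f)
    return combined[:len(parents)]
```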
Finally, the user of an EO needs to choose termination criteria. Often, a predetermined number of
generations is used as a termination criterion. For goal attainment problems, an EO can be terminated as
soon as a solution with a predefined goal or a target solution is found. In many studies [2, 9, 10, 11], a ter-
mination criterion based on the statistics of the current population vis-a-vis that of the previous population
to determine the rate of convergence is used. In other more recent studies, theoretical optimality conditions
(such as the extent of satisfaction of Karush-Kuhn-Tucker (KKT) conditions) are used to determine the
termination of a real-parameter EO algorithm [12]. Although EOs are heuristic based, the use of such
theoretical optimality concepts in an EO can also be used to test their converging abilities towards local
optimal solutions.
To demonstrate the working of the above-mentioned GA, we show four snapshots of a typical simulation
run on the following constrained optimization problem:
Minimize f(x) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2,
subject to g1(x) ≡ 26 − (x1 − 5)^2 − x2^2 ≥ 0,
           g2(x) ≡ 20 − 4x1 − x2 ≥ 0,                          (1)
           0 ≤ (x1, x2) ≤ 6.
A population of 10 points is used and the GA is run for 100 generations. The SBX recombination operator is used with
probability of pc = 0.9 and index ηc = 10. The polynomial mutation operator is used with a probability of
pm = 0.5 with an index of ηm = 50. Figures 1 to 4 show the populations at generation zero, 5, 40 and 100,
respectively. It can be observed that in only five generations, all 10 population members become feasible.
Thereafter, the points come close to each other and creep towards the constrained minimum point.
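For illustration, problem (1) can be coded directly. The quadratic penalty shown below is an assumed constraint-handling choice for this sketch, not necessarily the scheme used in the reported simulation:

```python
def f(x):
    x1, x2 = x
    return (x1**2 + x2 - 11.0)**2 + (x1 + x2**2 - 7.0)**2

def g1(x):                      # feasible when g1(x) >= 0
    x1, x2 = x
    return 26.0 - (x1 - 5.0)**2 - x2**2

def g2(x):                      # feasible when g2(x) >= 0
    x1, x2 = x
    return 20.0 - 4.0 * x1 - x2

def penalized_f(x, r=1000.0):
    """Objective plus a quadratic penalty on constraint violation."""
    violation = sum(min(0.0, g(x))**2 for g in (g1, g2))
    return f(x) + r * violation
```

A feasible point incurs no penalty, while an infeasible point is charged in proportion to the squared violation of each constraint.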
The EA procedure is a population-based stochastic search procedure which iteratively emphasizes its
better population members, uses them to recombine and perturb locally in the hope of creating new
and better populations until a predefined termination criterion is met. The use of a population helps
to achieve an implicit parallelism [2, 13, 14] in an EO’s search mechanism (causing an inherent parallel
search in different regions of the search space), a matter which makes an EO computationally attractive
for solving difficult problems. In the context of certain Boolean functions, a computational time saving in finding the optimum, varying polynomially with the population size, has been proven [15]. On one hand, the EO
procedure is flexible, thereby allowing a user to choose suitable operators and problem-specific information
to suit a specific problem. On the other hand, the flexibility comes with an onus on the part of a user to
choose appropriate and tangible operators so as to create an efficient and consistent search [16]. However,
the benefits of having a flexible optimization procedure, over their more rigid and specific optimization
algorithms, provide hope in solving difficult real-world optimization problems involving non-differentiable
objectives and constraints, non-linearities, discreteness, multiple optima, large problem sizes, uncertainties
in computation of objectives and constraints, uncertainties in decision variables, mixed type of variables,
and others.
A wiser approach to solving optimization problems of the real world would be to first understand the
niche of both EO and classical methodologies and then adopt hybrid procedures employing the better of
the two as the search progresses over varying degrees of search-space complexity from start to finish. As
demonstrated in the above typical GA simulation, there are two phases in the search of a GA. First, the
GA exhibits a more global search by maintaining a diverse population, thereby discovering potentially
good regions of interest. Second, a more local search takes place by bringing the population members
closer together. Although the above GA degenerates to both these search phases automatically without
any external intervention, a more efficient search can be achieved if the later local search phase can be
identified and executed with a more specialized local search algorithm.
Figure 1: Initial population. Figure 2: Population at generation 5.
Figure 3: Population at generation 40. Figure 4: Population at generation 100.
3 Evolutionary Multi-objective Optimization (EMO)
A multi-objective optimization problem involves a number of objective functions which are to be either
minimized or maximized. As in a single-objective optimization problem, the multi-objective optimization
problem may contain a number of constraints which any feasible solution (including all optimal solutions)
must satisfy. Since objectives can be either minimized or maximized, we state the multi-objective opti-
mization problem in its general form:
Minimize/Maximize fm(x),           m = 1, 2, . . . , M;
subject to gj(x) ≥ 0,              j = 1, 2, . . . , J;
           hk(x) = 0,              k = 1, 2, . . . , K;        (2)
           xi^(L) ≤ xi ≤ xi^(U),   i = 1, 2, . . . , n.
A solution x ∈ Rn is a vector of n decision variables: x = (x1 , x2 , . . . , xn )T . The solutions satisfying
the constraints and variable bounds constitute a feasible decision variable space S ⊂ Rn . One of the
striking differences between single-objective and multi-objective optimization is that in multi-objective
optimization the objective functions constitute a multi-dimensional space, in addition to the usual decision
variable space. This additional M -dimensional space is called the objective space, Z ⊂ RM . For each
solution x in the decision variable space, there exists a point z ∈ RM in the objective space, denoted by
f (x) = z = (z1 , z2 , . . . , zM )T . To make the descriptions clear, we refer to a ‘solution’ as a variable vector and
a ‘point’ as the corresponding objective vector.
The optimal solutions in multi-objective optimization can be defined from a mathematical concept of
partial ordering. In the parlance of multi-objective optimization, the term domination is used for this
purpose. In this section, we restrict ourselves to discuss unconstrained (without any equality, inequality
or bound constraints) optimization problems. The domination between two solutions is defined as follows
[1, 17]:
Definition 3.1 A solution x(1) is said to dominate the other solution x(2) , if both the following conditions
are true:
1. The solution x(1) is no worse than x(2) in all objectives. Thus, the solutions are compared based on
their objective function values (or location of the corresponding points (z(1) and z(2) ) on the objective
space).
2. The solution x(1) is strictly better than x(2) in at least one objective.
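Definition 3.1 translates directly into a comparison of objective vectors. The sketch below assumes, for simplicity, that all objectives are to be minimized (the definition itself also covers mixed minimization/maximization):

```python
def dominates(z1, z2):
    """True if point z1 dominates point z2: no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return (all(a <= b for a, b in zip(z1, z2)) and
            any(a < b for a, b in zip(z1, z2)))
```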
For a given set of solutions (or corresponding points on the objective space, for example, those shown
in Figure 5(a)), a pair-wise comparison can be made using the above definition and whether one point
dominates the other can be established. All points which are not dominated by any other member of the
Figure 5: A set of points and the first non-domination front are shown (f1 is to be maximized, f2 minimized).
set are called the non-dominated points of class one, or simply the non-dominated points. For the set of
six solutions shown in the figure, they are points 3, 5, and 6. One property of any two such points is that
a gain in an objective from one point to the other happens only due to a sacrifice in at least one other
objective. This trade-off property between the non-dominated points makes the practitioners interested
in finding a wide variety of them before making a final choice. These points make up a front when viewed together in the objective space; hence the non-dominated points are often visualized as representing a
non-domination front. The computational effort needed to select the points of the non-domination front
from a set of N points is O(N log N) for 2 and 3 objectives, and O(N log^(M−2) N) for M > 3 objectives [18].
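A naive O(MN^2) filter suffices to extract the first non-domination front (the faster bounds quoted above need a divide-and-conquer scheme [18]); all objectives are assumed minimized in this sketch:

```python
def first_front(points):
    """Return the points not dominated by any other point in the set
    (all objectives to be minimized); naive O(M N^2) sketch."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```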
With the above concept, now it is easier to define the Pareto-optimal solutions in a multi-objective
optimization problem. If the given set of points for the above task contains all points in the search space
(assuming a countable number), the points lying on the non-domination front, by definition, do not get
dominated by any other point in the objective space, hence are Pareto-optimal points (together they con-
stitute the Pareto-optimal front) and the corresponding pre-images (decision variable vectors) are called
Pareto-optimal solutions. However, more mathematically elegant definitions of Pareto-optimality (includ-
ing the ones for continuous search space problems) exist in the multi-objective literature [17, 19].
3.1 Principle of EMO’s Search
In the context of multi-objective optimization, the extremist principle of finding the optimum solution
cannot be applied to one objective alone, when the rest of the objectives are also important. Different
solutions may produce trade-offs (conflicting outcomes among objectives) among different objectives. A
solution that is extreme (in a better sense) with respect to one objective requires a compromise in other
objectives. This prohibits one from choosing a solution which is optimal with respect to only one objective.
This clearly suggests two ideal goals of multi-objective optimization:
1. Find a set of solutions which lie on the Pareto-optimal front, and
2. Find a set of solutions which are diverse enough to represent the entire range of the Pareto-optimal
front.
Evolutionary multi-objective optimization (EMO) algorithms attempt to follow both the above principles
similar to the other a posteriori MCDM methods (refer to Chapter ).
Although one fundamental difference between single and multiple objective optimization lies in the
cardinality in the optimal set, from a practical standpoint a user needs only one solution, no matter
whether the associated optimization problem is single or multi-objective. The user is now in a dilemma.
Since a number of solutions are optimal, the obvious question arises: Which of these optimal solutions
must one choose? This is not an easy question to answer. It involves higher-level information which is
often non-technical, qualitative and experience-driven. However, if a set of many trade-off solutions are
already worked out or available, one can evaluate the pros and cons of each of these solutions based on all
such non-technical and qualitative, yet still important, considerations and compare them to make a choice.
Thus, in a multi-objective optimization, ideally the effort must be made in finding the set of trade-off
optimal solutions by considering all objectives to be important. After a set of such trade-off solutions
are found, a user can then use higher-level qualitative considerations to make a choice. Since an EMO
procedure deals with a population of solutions in every iteration, it is intuitive to apply it to multi-objective optimization to find a set of non-dominated solutions. Like other a posteriori MCDM
methodologies, an EMO based procedure works with the following principle in handling multi-objective
optimization problems:
Step 1 Find multiple non-dominated points as close to the Pareto-optimal front as possible, with a wide
trade-off among objectives.
Step 2 Choose one of the obtained points using higher-level information.
Figure 6 schematically shows the principles followed in an EMO procedure. Since EMO procedures are heuristic based, they may not guarantee finding Pareto-optimal points, as a theoretically provable
optimization method would do for tractable (for example, linear or convex) problems. But EMO procedures
have essential operators to constantly improve the evolving non-dominated points (from the point of view of
convergence and diversity discussed above) similar to the way most natural and artificial evolving systems
Figure 6: Schematic of a two-step multi-objective optimization procedure. (Step 1: an ideal multi-objective optimizer finds multiple trade-off solutions to the problem of minimizing f1, . . . , fM subject to constraints; Step 2: one solution is chosen using higher-level information.)
continuously improve their solutions. To this effect, a recent simulation study [12] has demonstrated that a
particular EMO procedure, starting from random non-optimal solutions, can progress towards theoretical
Karush-Kuhn-Tucker (KKT) points with iterations in real-valued multi-objective optimization problems.
The main difference and advantage of using an EMO compared to a posteriori MCDM procedures is
that multiple trade-off solutions can be found in a single simulation run, as most a posteriori MCDM
methodologies would require multiple applications.
In Step 1 of the EMO-based multi-objective optimization (the task shown vertically downwards in
Figure 6), multiple trade-off, non-dominated points are found. Thereafter, in Step 2 (the task shown
horizontally, towards the right), higher-level information is used to choose one of the obtained trade-off
points. This dual task allows an interesting feature, if applied for solving single-objective optimization
problems. It is easy to realize that a single-objective optimization is a degenerate case of multi-objective
optimization, as shown in detail in another study [20]. In the case of single-objective optimization having
only one globally optimal solution, Step 1 will ideally find only one solution, thereby not requiring us to
proceed to Step 2. However, in the case of single-objective optimization having multiple global optima,
both steps are necessary to first find all or multiple global optima, and then to choose one solution from
them by using higher-level information about the problem. Thus, although it seems ideal for multi-objective optimization, the framework suggested in Figure 6 can be thought of as a generic principle for both single- and multi-objective optimization.
3.2 Generating Classical Methods and EMO
In the generating MCDM approach, the task of finding multiple Pareto-optimal solutions is achieved
by executing many independent single-objective optimizations, each time finding a single Pareto-optimal
solution. A parametric scalarizing approach (such as the weighted-sum approach, -constraint approach,
and others) can be used to convert multiple objectives into a parametric single-objective objective function.
By simply varying the parameters (weight vector or -vector) and optimizing the scalarized function,
different Pareto-optimal solutions can be found. In contrast, in an EMO, multiple Pareto-optimal solutions
are attempted to be found in a single simulation by emphasizing multiple non-dominated and isolated
solutions. We discuss a little later some EMO algorithms describing how such dual emphasis is provided,
but now discuss qualitatively the difference between a posteriori MCDM and EMO approaches.
Consider Figure 7, in which we sketch how multiple independent parametric single-objective optimiza-
tions may find different Pareto-optimal solutions. The Pareto-optimal front corresponds to global optimal
solutions of several scalarized objectives. However, during the course of an optimization task, algorithms
must overcome a number of difficulties, such as infeasible regions, local optimal solutions, flat regions of
objective functions, isolation of optimum, etc., to converge to the global optimal solution. Moreover, due
to practical limitations, an optimization task must also be completed in a reasonable computational time.
Figure 7: Generative MCDM methodology employs multiple, independent single-objective optimizations. (The sketch shows initial points negotiating infeasible regions and local fronts on their way to the Pareto-optimal front in the f1–f2 plane.)
This requires an algorithm to strike a good balance in the extent to which its search operators work to overcome the above-mentioned difficulties reliably and quickly. When multiple simulations are to be performed to find a set of Pareto-optimal solutions, this balancing act must be performed in
every single simulation. Since simulations are performed independently, no information about the success
or failure of previous simulations is used to speed up the process. In difficult multi-objective optimization
problems, such memory-less a posteriori methods may demand a large overall computational overhead
to get a set of Pareto-optimal solutions. Moreover, even though the convergence can be achieved in some
problems, independent simulations can never guarantee finding a good distribution among obtained points.
EMO, as mentioned earlier, constitutes an inherent parallel search. When a population member overcomes certain difficulties and makes progress towards the Pareto-optimal front, its variable values and
their combination reflect this fact. When a recombination takes place between this solution and other pop-
ulation members, such valuable information of variable value combinations gets shared through variable
exchanges and blending, thereby making the overall task of finding multiple trade-off solutions a parallelly
processed task.
3.3 Elitist Non-dominated Sorting GA or NSGA-II
The NSGA-II procedure [21] is one of the popularly used EMO procedures which attempt to find multiple
Pareto-optimal solutions in a multi-objective optimization problem and has the following three features:
1. It uses an elitist principle,
2. it uses an explicit diversity preserving mechanism, and
3. it emphasizes non-dominated solutions.
At any generation t, the offspring population (say, Qt ) is first created by using the parent population (say,
Pt ) and the usual genetic operators. Thereafter, the two populations are combined to form a new
population (say, Rt ) of size 2N . Then, the population Rt is classified into different non-domination classes.
Thereafter, the new population is filled by points of different non-domination fronts, one at a time. The
filling starts with the first non-domination front (of class one) and continues with points of the second
non-domination front, and so on. Since the overall population size of Rt is 2N , not all fronts can be
accommodated in the N slots available for the new population. All fronts which could not be accommodated
are deleted. When the last allowed front is being considered, there may exist more points in the front than
the remaining slots in the new population. This scenario is illustrated in Figure 8. Instead of arbitrarily
discarding some members from the last front, the points which will make the diversity of the selected points
the highest are chosen.
The crowded-sorting of the points of the last front which could not be accommodated fully is achieved
in the descending order of their crowding distance values and points from the top of the ordered list are
chosen. The crowding distance di of point i is a measure of the objective space around i which is not
occupied by any other solution in the population. Here, we simply calculate this quantity di by estimating
the perimeter of the cuboid (Figure 9) formed by using the nearest neighbors in the objective space as the
vertices (we call this the crowding distance).

Figure 8: Schematic of the NSGA-II procedure.
Figure 9: The crowding distance calculation.
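The non-dominated sorting and crowding-distance bookkeeping described above can be sketched as follows (a minimal, unoptimized Python sketch for minimization problems, not the exact NSGA-II implementation of [21]):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(F):
    # F: list of objective vectors; returns fronts as lists of indices into F
    N = len(F)
    dominated_by = [[] for _ in range(N)]   # indices that solution i dominates
    n_dom = [0] * N                         # how many solutions dominate i
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                dominated_by[i].append(j)
            elif dominates(F[j], F[i]):
                n_dom[i] += 1
    fronts, current = [], [i for i in range(N) if n_dom[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def crowding_distance(F):
    # sum, over objectives, of normalized distances between each point's
    # nearest neighbours; boundary points get an infinite distance
    N, M = len(F), len(F[0])
    d = [0.0] * N
    for m in range(M):
        order = sorted(range(N), key=lambda i: F[i][m])
        d[order[0]] = d[order[-1]] = float("inf")
        span = (F[order[-1]][m] - F[order[0]][m]) or 1.0
        for k in range(1, N - 1):
            d[order[k]] += (F[order[k + 1]][m] - F[order[k - 1]][m]) / span
    return d
```

In NSGA-II's survival step, fronts are accepted in this order, and the last, partially fitting front is truncated by keeping points with the largest crowding distances.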
Next, we show snapshots of a typical NSGA-II simulation on a two-objective test problem:
           Minimize f1 (x) = x1 ,
ZDT2 :     Minimize f2 (x) = g(x) [1 − (f1 (x)/g(x))²],
           where g(x) = 1 + (9/29) Σ_{i=2}^{30} x_i ,                  (3)
           subject to 0 ≤ x1 ≤ 1,
                      −1 ≤ xi ≤ 1, i = 2, 3, . . . , 30.
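The evaluation of ZDT2 in Equation (3) can be written directly (a small sketch; the function name is ours):

```python
def zdt2(x):
    # x: list of 30 reals with x[0] in [0, 1] and x[1:] in [-1, 1]  (Equation (3))
    f1 = x[0]
    g = 1.0 + (9.0 / 29.0) * sum(x[1:])
    f2 = g * (1.0 - (f1 / g) ** 2)   # both objectives are minimized
    return f1, f2
```

On the Pareto-optimal front g(x) = 1 (all xi = 0 for i ≥ 2), so f2 = 1 − f1², a non-convex front.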
NSGA-II is run with a population size of 100 and for 100 generations. The variables are treated as real
numbers, and an SBX recombination operator with pc = 0.9 and distribution index ηc = 10 and a
polynomial mutation operator [1] with pm = 1/n (n is the number of variables) and distribution index
ηm = 20 are used. Figure 10 shows the initial population in the objective space. Figures 11, 12, and 13
show the populations at generations 10, 30 and 100, respectively. The figures illustrate how the operators of
NSGA-II cause the population to move towards the Pareto-optimal front with generations. At generation
100, the population comes very close to the true Pareto-optimal front.
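The per-variable forms of the SBX and polynomial mutation operators used above can be sketched as follows (a simplified sketch of the standard operator equations; practical implementations also apply each operator with a per-variable probability and handle variable bounds inside SBX):

```python
import random

def sbx_pair(p1, p2, eta_c=10.0):
    # simulated binary crossover on one parent variable pair;
    # larger eta_c makes children closer to the parents
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2   # children are symmetric about the parents' mean

def polynomial_mutation(x, lo, hi, eta_m=20.0):
    # polynomial mutation of one variable, clipped to its bounds [lo, hi]
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(max(x + delta * (hi - lo), lo), hi)
```

A useful property of SBX visible in the sketch is that the two children always preserve the parents' mean.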
4 Applications of EMO
Since the early development of EMO algorithms in 1993, they have been applied to many real-world and
interesting optimization problems. Descriptions of some of these studies can be found in books [1, 22, 23],
dedicated conference proceedings [24, 25, 26, 27], and domain-specific books, journals and proceedings.
In this section, we describe one case study which clearly demonstrates the EMO philosophy which we
described in Section 3.1.
4.1 Spacecraft Trajectory Design
[28] proposed a multi-objective optimization technique using the original non-dominated sorting algorithm
(NSGA) [29] to find multiple trade-off solutions in a spacecraft trajectory optimization problem. To evaluate
a solution (trajectory), the SEPTOP (Solar Electric Propulsion Trajectory Optimization) software [30] is
called, and the delivered payload mass and the total time of flight are calculated. The multi-objective
optimization problem has eight decision variables controlling the trajectory; three objective functions: (i)
maximize the delivered payload at the destination, (ii) maximize the negative of the time of flight, and (iii)
maximize the total number of heliocentric revolutions in the trajectory; and three constraints limiting the
SEPTOP convergence error and the minimum and maximum bounds on heliocentric revolutions.
Figure 10: Initial population.
Figure 11: Population at generation 10.
Figure 12: Population at generation 30.
Figure 13: Population at generation 100.
On the Earth–Mars rendezvous mission, the study found interesting trade-off solutions [28]. Using a
population of size 150, the NSGA was run for 30 generations. The obtained non-dominated solutions are
shown in Figure 14 for two of the three objectives and some selected solutions are shown in Figure 15.

Figure 14: Obtained non-dominated solutions using NSGA.

It is clear that there exist short-time flights with smaller delivered payloads (solution marked 44) and long-time
flights with larger delivered payloads (solution marked 36). Solution 44 can deliver a mass of 685.28 kg
and requires about 1.12 years. On the other hand, an intermediate solution 72 can deliver almost 862 kg with
a travel time of about 3 years. In these figures, each continuous part of a trajectory represents a thrusting
arc and each dashed part represents a coasting arc. It is interesting to note that only a
small improvement in delivered mass occurs in the solutions between 73 and 72, with a sacrifice in flight
time of about a year.
The multiplicity of trade-off solutions, as depicted in Figure 15, is what we envisaged discovering in
a multi-objective optimization problem by using an a posteriori procedure, such as an EMO algorithm. This
aspect was also discussed with Figure 6. Once such a set of solutions with a good trade-off among objectives is
obtained, one can analyze them for choosing a particular solution. For example, in this problem context, it
makes sense to not choose a solution between points 73 and 72 due to poor trade-off between the objectives
in this range. On the other hand, choosing a solution within points 44 and 73 is worthwhile, but which
particular solution to choose depends on other mission related issues. But by first finding a wide range
of possible solutions and revealing the shape of front, EMO can help narrow down the choices and allow
a decision maker to make a better decision. Without the knowledge of such a wide variety of trade-off
solutions, proper decision-making may be a difficult task. Although one can choose a scalarized objective
(such as the ε-constraint method with a particular ε vector) and find the resulting optimal solution, the
decision-maker will always wonder what solution would have been derived if a different ε vector had been chosen.
For example, if ε1 = 2.5 years is chosen and mass delivered to the target is maximized, a solution in between
points 73 and 72 will be found. As discussed earlier, this part of the Pareto-optimal front does not provide
the best trade-offs between objectives that this problem can offer. A lack of knowledge of good trade-off
regions before a decision is made may allow the decision maker to settle for a solution which, although
optimal, may not be a good compromised solution. The EMO procedure allows a flexible and a pragmatic
procedure for finding a well-diversified set of solutions simultaneously so as to enable picking a particular
region for further analysis or a particular solution for implementation.
Figure 15: Four trade-off trajectories.
5 Constraint Handling in EMO
The constraint handling method modifies the binary tournament selection, where two solutions are picked
from the population and the better solution is chosen. In the presence of constraints, each solution can be
either feasible or infeasible. Thus, there are at most three situations: (i) both solutions are feasible, (ii)
one is feasible and the other is not, and (iii) both are infeasible. We consider each case by simply redefining the
domination principle as follows (we call it the constrained-domination condition for any two solutions x(i)
and x(j) ):
Definition 5.1 A solution x(i) is said to ‘constrained-dominate’ a solution x(j) (or x(i) ⪯c x(j) ), if any
of the following conditions is true:
1. Solution x(i) is feasible and solution x(j) is not.
2. Solutions x(i) and x(j) are both infeasible, but solution x(i) has a smaller constraint violation, which
   can be computed by adding the normalized violations of all constraints:

       CV(x) = Σ_{j=1}^{J} ⟨ḡ_j (x)⟩ + Σ_{k=1}^{K} |h̄_k (x)|,

   where ⟨α⟩ is −α if α < 0, and zero otherwise. The normalization is achieved with the population
   minimum (g_j^min) and maximum (g_j^max) constraint violations: ḡ_j (x) = (g_j (x) − g_j^min)/(g_j^max −
   g_j^min).
3. Solutions x(i) and x(j) are feasible and solution x(i) dominates solution x(j) in the usual sense (Def-
inition 3.1).
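Definition 5.1 translates into a short comparison routine (a minimal sketch for minimization; fa and fb are objective vectors, cva and cvb the constraint violations CV(x), and g_norm the normalized inequality-constraint values ḡ_j (x) ≥ 0):

```python
def constraint_violation(g_norm, h_abs):
    # CV(x): bracket-operator sum over normalized inequality constraints
    # g_j(x) >= 0, plus absolute violations of equality constraints
    return sum(-g for g in g_norm if g < 0) + sum(h_abs)

def constrained_dominates(fa, cva, fb, cvb):
    # Definition 5.1: feasibility first, then smaller CV, then usual domination
    feas_a, feas_b = cva == 0, cvb == 0
    if feas_a and not feas_b:
        return True            # condition 1: feasible beats infeasible
    if feas_b and not feas_a:
        return False
    if not feas_a and not feas_b:
        return cva < cvb       # condition 2: smaller constraint violation
    # condition 3: both feasible -- usual Pareto domination (minimization)
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))
```

Plugging this predicate in place of the usual domination check is the only change NSGA-II needs to handle constraints.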
The above change in the definition requires a minimal change in the NSGA-II procedure described earlier.
Figure 16 shows the non-domination fronts on a six-membered population due to the introduction of
two constraints (the minimization problem is described as CONSTR elsewhere [1]). In the absence of the
constraints, the non-domination fronts (shown by dashed lines) would have been ((1,3,5), (2,6), (4)),
but in their presence, the new fronts are ((4,5), (6), (2), (1), (3)). The first non-domination front
Figure 16: Non-constrained-domination fronts.
consists of the “best” (that is, non-dominated and feasible) points from the population and any feasible
point lies on a better non-domination front than an infeasible point.
6 Performance Measures Used in EMO
There are two goals of an EMO procedure: (i) a good convergence to the Pareto-optimal front and (ii) a
good diversity in obtained solutions. Since both are conflicting in nature, comparing two sets of trade-off
solutions also requires different performance measures. In the early years of EMO research, three different
sets of performance measures were used:
1. Metrics evaluating convergence to the known Pareto-optimal front (such as error ratio, distance from
reference set, etc.),
2. Metrics evaluating spread of solutions on the known Pareto-optimal front (such as spread, spacing,
etc.), and
3. Metrics evaluating certain combinations of convergence and spread of solutions (such as hypervolume,
coverage, R-metrics, etc.).
A detailed study [31] comparing most existing performance metrics based on out-performance relations
has concluded that R-metrics suggested by [32] are the best. However, a study has argued that a single
unary performance measure (any of the first two metrics described above in the enumerated list) cannot
adequately determine a true winner, as both aspects of convergence and diversity cannot be measured by
a single performance metric [33]. That study also concluded that binary performance metrics (indicating
usually two different values when a set of solutions A is compared against B and B is compared against
A), such as epsilon-indicator, binary hypervolume indicator, utility indicators R1 to R3, etc., are better
measures for multi-objective optimization. The flip side is that the binary metrics compute M (M − 1)
performance values for two algorithms in an M -objective optimization problem, by analyzing all pair-wise
performance comparisons, thereby making them difficult to use in practice. In addition, unary and binary
attainment indicators of [34, 35] are of great importance. Figures 17 and 18 illustrate the hypervolume
and attainment indicators. The attainment surface is useful to determine a representative front obtained from
multiple runs of an EMO algorithm. In general, the 50% attainment surface can be used to indicate the front that is
dominated by 50% of all obtained non-dominated points.

Figure 17: The hypervolume enclosed by the non-dominated solutions.
Figure 18: The attainment surface is created for a number of non-dominated solutions.
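For two objectives, the hypervolume with respect to a reference point reduces to a sum of rectangular areas (a minimal sketch assuming a mutually non-dominated minimization front and a reference point dominated by every front point):

```python
def hypervolume_2d(front, ref):
    # area dominated by a two-objective minimization front, bounded by `ref`;
    # on a non-dominated front, ascending f1 implies descending f2
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one horizontal strip per point
        prev_f2 = f2
    return hv
```

A larger hypervolume indicates both better convergence and better spread, which is why it is classified above as a combined metric.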
7 EMO and Decision-making
Finding a set of representative Pareto-optimal solutions using an EMO procedure is half the task; choosing
a single preferred solution from the obtained set is an equally important task. There have been three main
directions of development.
In the a-priori approach, preference information of a decision-maker (DM) is used to focus the search
effort on a part of the Pareto-optimal front, instead of the entire frontier. For this purpose, a reference
point approach [36], a reference direction approach [37], the “light beam” approach [38], etc. have been
incorporated in the NSGA-II procedure to find a preferred part of the Pareto-optimal frontier.
In the a-posteriori approach, preference information is used after a set of representative Pareto-optimal
solutions are found by an EMO procedure. The multiple criteria decision making (MCDM) approaches
including the reference point method, the Tchebyshev metric method, etc. [17] can be used. This approach is
now believed to be applicable only to two, three, or at most four-objective problems. As the number of
objectives increases, EMO methodologies exhibit difficulties in converging close to the Pareto-optimal front,
and the a-posteriori approaches become a difficult proposition.
In the interactive approach, the DM’s preference information is integrated into an EMO algorithm during
the optimization run. In the progressively interactive EMO approach [39], the DM is called after every τ
generations and is presented with a few well-diversified solutions chosen from the current non-dominated
front. The DM is then asked to rank the solutions according to preference. This information is then
processed through an optimization task to capture the DM’s preference using a utility function. This utility
function is then used to drive NSGA-II’s search until the procedure is repeated at the next DM call.
The decision-making procedure integrated with an EMO procedure makes the multi-objective opti-
mization procedure complete. More such studies must now be executed to make EMO more usable in
practice.
8 Multiobjectivization
Interestingly, the act of finding multiple trade-off solutions using an EMO procedure has found its appli-
cation outside the realm of solving multi-objective optimization problems per se. The concept of finding
multiple trade-off solutions using an EMO procedure is applied to solve other kinds of optimization prob-
lems that are otherwise not multi-objective in nature. For example, the EMO concept is used to solve
constrained single-objective optimization problems by converting the task into a two-objective optimiza-
tion task of additionally minimizing an aggregate constraint violation [40]. This eliminates the need to
specify a penalty parameter while using a penalty based constraint handling procedure. A recent study
[41] utilizes a bi-objective NSGA-II to find a Pareto-optimal frontier corresponding to minimizations of the
objective function and constraint violation. The frontier is then used to estimate an appropriate penalty
parameter, which is then used to formulate a penalty-based local search problem that is solved using a
classical optimization method. The approach is shown to require one or two orders of magnitude fewer function
evaluations than existing constraint handling methods on a number of standard test problems.
A well-known difficulty in genetic programming studies, called ‘bloating’, arises due to the continual
increase in the size of genetic programs with iteration. Reducing bloating by adding the minimization of
program size as an additional objective helped find high-performing solutions with smaller
code [42]. Minimizing the intra-cluster distance and maximizing the inter-cluster distance simultaneously in
a bi-objective formulation of a clustering problem is found to yield better solutions than the usual single-
objective minimization of the ratio of the intra-cluster distance to the inter-cluster distance [43]. A recent
edited book [44] describes many such interesting applications in which EMO methodologies have helped
solve problems which are otherwise (or traditionally) not treated as multi-objective optimization problems.
8.1 Knowledge Discovery Through EMO
One striking difference between single-objective and multi-objective optimization is the
cardinality of the solution set. In the latter, multiple solutions are the outcome, and each solution is
theoretically an optimal solution corresponding to a particular trade-off among the objectives. Thus, if an
EMO procedure can find solutions close to the true Pareto-optimal set, what we have in hand are a
EMO procedure can find solutions close to the true Pareto-optimal set, what we have in our hand are a
number of high-performing solutions trading-off the conflicting objectives considered in the study. Since
they are all near optimal, these solutions can be analyzed for finding properties which are common to them.
Such a procedure can then become a systematic approach in deciphering important and hidden proper-
ties which optimal and high-performing solutions must have for that problem. In a number of practical
problem-solving tasks, the so-called innovization procedure is shown to find important knowledge about
high-performing solutions [45]. Figure 19 shows that of the five decision variables involved in an electric
motor design problem involving minimum cost and maximum peak-torque, four variables have identical
values for all Pareto-optimal solutions [46]. Of the two allowable electric connections, the ‘Y’-type connec-
tion; of three laminations, ‘Y’-type lamination; of 10 to 80 different turns, 18 turns, and of 16 different wire
sizes, 16-gauge wire remains common to all Pareto-optimal solutions. The only way the solutions vary is in
having a different number of laminations. In fact, for a motor with more peak-torque, a linearly increasing
number of laminations becomes the recipe for a more optimal design. Such useful properties are expected
to exist in practical problems, as they follow certain scientific and engineering principles at the core, but
finding them through a systematic scientific procedure had not been paid much attention in the past. The
principle of first searching for multiple trade-off and high-performing solutions using a multi-objective op-
timization procedure and then analyzing them to discover useful knowledge certainly remains a viable way
forward. The current efforts [47] to automate the knowledge extraction procedure through a sophisticated
data-mining task is promising and should make the overall approach more appealing to the practitioners.
9 Hybrid EMO procedures
The search operators used in EMO are generic. There is no guarantee that an EMO will find any Pareto-
optimal solution in a finite number of solution evaluations for an arbitrary problem. However, as discussed
above, EMO methodologies provide adequate emphasis to currently non-dominated and isolated solutions
so that population members progress towards the Pareto-optimal front iteratively. To make the overall
procedure faster and to perform the task in a more guaranteed manner, EMO methodologies must
be combined with mathematical optimization techniques having local convergence properties.

Figure 19: Innovization study of an electric motor design problem.

A simple-minded approach would be to start the optimization task with an EMO, and the solutions obtained from
EMO can be improved by optimizing a composite objective derived from multiple objectives to ensure a
good spread by using a local search technique. Another approach would be to use a local search technique
as a mutation-like operator in an EMO so that all population members are at least guaranteed to be local
optimal solutions. A study [48] has demonstrated that the latter approach is overall better from
a computational point of view.
However, the use of a local search technique within an EMO has another advantage. Since a local search
can find a weak or a near Pareto-optimal point, the presence of such a super-individual in a population can
cause other near Pareto-optimal solutions to be found as an outcome of recombination of the super-individual
with other population members. A recent study has demonstrated this aspect [49].
10 Practical EMOs
Here, we describe some recent advances of EMO in which different practicalities are considered.
10.1 EMO for Many Objectives
With the success of EMO in two and three-objective problems, it has become an obvious quest to investigate
whether an EMO procedure can also be used to solve problems with four or more objectives. An earlier study [50]
with eight objectives revealed somewhat negative results. EMO methodologies work by emphasizing non-dominated
solutions in a population. Unfortunately, as the number of objectives increases, most population
members in a randomly created population tend to become non-dominated with respect to each other. For example, in
a three-objective scenario, about 10% of the members in a population of size 200 are non-dominated, whereas in
a 10-objective scenario, as many as 90% of the members in a population of size 200 are non-dominated.
Thus, in a large-objective problem, an EMO algorithm runs out of room to introduce new population members
into a generation, thereby causing stagnation in the performance of the EMO algorithm. Moreover,
an exponentially large population size is needed to represent a large-dimensional Pareto-optimal front. This
makes an EMO procedure slow and computationally less attractive. However, practically speaking, even
if an algorithm can find tens of thousands of Pareto-optimal solutions for a multi-objective optimization
problem, besides simply getting an idea of the nature and shape of the front, they are simply too many
to be useful for any decision making purposes. Keeping these views in mind, EMO researchers have taken
two different approaches in dealing with large-objective problems.
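The growth in the fraction of non-dominated members with the number of objectives is easy to reproduce empirically (a small Monte Carlo sketch on uniformly random objective vectors; the exact percentages quoted above depend on the sampling):

```python
import random

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_obj, seed=1):
    # fraction of mutually non-dominated points in a random population
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_obj)] for _ in range(n_points)]
    nd = sum(1 for p in pop if not any(dominates(q, p) for q in pop))
    return nd / n_points
```

Comparing, say, `nondominated_fraction(300, 2)` with `nondominated_fraction(300, 10)` shows the fraction jumping from a few percent to a large majority of the population as objectives are added.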
10.1.1 Finding a Partial Set
Instead of finding the complete Pareto-optimal front in a problem having a large number of objectives,
EMO procedures can be used to find only a part of the Pareto-optimal front. This can be achieved by
indicating preference information by various means. Ideas, such as reference point based EMO [36, 51],
‘light beam search’ [38], biased sharing approaches [52], cone dominance [53] etc. are suggested for this
purpose. Each of these studies has shown that for up to 10 and 20-objective problems, although finding the
complete frontier is difficult, finding a partial frontier corresponding to certain preference information is
not that difficult a proposition. Despite the dimension of the partial frontier being identical to that of the
complete Pareto-optimal frontier, the closeness of target points in representing the desired partial frontier
helps make only a small fraction of an EMO population non-dominated, thereby making room for
new and hopefully better solutions to be found and stored.
The computational efficiency and accuracy observed in some EMO implementations have led to a distributed
EMO study [53] in which each processor in a distributed computing environment receives a unique
cone for defining domination. The cones are designed carefully so that at the end of such a distributed
computing EMO procedure, solutions are found to exist in various parts of the complete Pareto-optimal
front. A collection of these solutions together is then able to provide a good representation of the entire
original Pareto-optimal front.
10.1.2 Identifying and Eliminating Redundant Objectives
Many practical optimization problems can easily list a large number of objectives (often more than 10),
as many different criteria or goals are often of interest to practitioners. In most instances, it is not entirely
clear whether the chosen objectives are all in conflict with each other or not. For example, minimization of
weight and minimization of cost of a component or a system are often mistaken to have an identical optimal
solution, but they may lead to a range of trade-off optimal solutions. Practitioners do not take any chances and
tend to include all (or as many as possible) objectives in the optimization problem formulation. There
is another fact which is more worrisome. Two apparently conflicting objectives may show a good trade-off
when evaluated with respect to some randomly created solutions. But if these two objectives are evaluated
for solutions close to their optima, they tend to show a good correlation. That is, although objectives can
exhibit conflicting behavior for random solutions, near the Pareto-optimal front the conflict vanishes and
the optimum of one becomes close to the optimum of the other.
Anticipating the existence of such problems in practice, recent studies [54, 55] have applied linear and
non-linear principal component analysis (PCA) to a set of EMO-produced solutions. Objectives having a
positively correlated relationship with each other on the obtained NSGA-II solutions are identified and
declared redundant. The EMO procedure is then restarted with the non-redundant objectives. This
combined EMO-PCA procedure is continued until no further reduction in the number of objectives is
possible. The procedure has handled practical problems involving five and more objectives and is shown
to reduce the choice of truly conflicting objectives to a few. On test problems, the proposed approach
has been shown to reduce an initial 50-objective problem to the correct three-objective Pareto-optimal front
by eliminating 47 redundant objectives. Another study [56] used an exact and a heuristic-based conflict
identification approach on a given set of Pareto-optimal solutions. For a given error measure, an effort is
made to identify a minimal subset of objectives which do not alter the original dominance structure on a set
of Pareto-optimal solutions. This idea has recently been introduced within an EMO [57], but a continual
reduction of objectives through a successive application of the above procedure would be interesting.
This is a promising area of EMO research, and definitely faster objective-reduction
techniques are needed for the purpose. In this direction, the use of alternative definitions of domination is
important. One such idea redefines domination: a solution is said to dominate another
solution if the former is better than the latter in more objectives. This certainly excludes finding
the entire Pareto-optimal front and helps an EMO to converge near the intermediate and central part of
the Pareto-optimal front. Another EMO study used a fuzzy-dominance relation [58] (instead of Pareto-dominance),
in which the superiority of one solution over another in any objective is defined in a fuzzy manner.
Many other such definitions are possible and can be implemented based on the problem context.
10.2 Dynamic EMO
Dynamic optimization involves objectives, constraints, or problem parameters which change over time. This
means that as an algorithm approaches the optimum of the current problem, the problem definition
may change, and the algorithm must then solve a new problem. In such dynamic optimization
problems, an algorithm is usually not expected to find the optimum; instead, it is expected to track
the changing optimum with iteration. The performance of a dynamic optimizer then depends on how closely
it is able to track the true optimum (which is changing with iteration or time). Thus, practically speaking,
optimization algorithms may hope to handle problems which do not change significantly with time. From
the algorithm’s point of view, since in these problems the problem is not expected to change too much
from one time instance to another and some good solutions to the current problem are already at hand in a
population, researchers have fancied solving such dynamic optimization problems using evolutionary algorithms
[59].
A recent study [60] proposed the following procedure for dynamic optimization involving single or
multiple objectives. Let P(t) be a problem which changes with time t (from t = 0 to t = T ). Despite the
continual change in the problem, we assume that the problem is fixed for a time period τ , which is not
known a priori and the aim of the (offline) dynamic optimization study is to identify a suitable value of τ
for an accurate as well as computationally faster approach. For this purpose, an optimization algorithm with
τ as a fixed time period is run from t = 0 to t = T , with the problem assumed fixed for every τ time period.
A measure Γ(τ ) determines the performance of the algorithm and is compared with a pre-specified and
expected value ΓL . If Γ(τ ) ≥ ΓL for the entire time domain of the execution of the procedure, we declare
τ to be a permissible length of stasis. Then, we try a reduced value of τ and check if a smaller length
of stasis is also acceptable. If not, we increase τ to allow the optimization problem to remain static for a
longer time so that the chosen algorithm has more iterations (time) to perform better. Such a
procedure will eventually arrive at a time period τ ∗ which is the smallest length of stasis allowed
for the optimization algorithm based on the chosen performance requirement. Based on this study, a
number of test problems and a hydro-thermal power dispatch problem have recently been tackled [60].
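The search for the smallest permissible stasis length τ* can be sketched as a simple loop over candidate values (a sketch under our own naming; `run_with_stasis` stands for the whole run of the chosen optimizer over [0, T] with the problem frozen for periods of length τ, returning the worst observed performance Γ(τ)):

```python
def smallest_permissible_stasis(run_with_stasis, gamma_required, taus):
    # taus: candidate stasis lengths, tried from smallest to largest;
    # the first tau with Gamma(tau) >= Gamma_L over the whole time
    # domain is the smallest permissible length of stasis tau*
    for tau in sorted(taus):
        if run_with_stasis(tau) >= gamma_required:
            return tau
    return None   # no candidate meets the performance requirement
```

Since each candidate requires a full run over the time domain, this is an offline calibration step, performed before the dynamic optimizer is deployed.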
In the case of dynamic multi-objective problem solving, there is an additional difficulty worth
mentioning here. Not only does an EMO algorithm need to find or track the changing Pareto-optimal
fronts, in a real-world implementation it must also make an immediate decision about which
solution to implement from the current front before the problem changes. Decision-making
analysis is considered time-consuming, involving execution of analysis tools, higher-level considerations,
and sometimes group discussions. If dynamic EMO is to be applied in practice, automated procedures
for making decisions must be developed. Although it is not clear how to generalize such an automated
decision-making procedure to different problems, problem-specific tools are certainly possible and constitute
a worthwhile and fertile area for research.
10.3 Uncertainty Handling Using EMO
A major surge in EMO research has taken place in handling uncertainties in decision variables and
problem parameters in multi-objective optimization. Practice is full of uncertainties, and almost no parameter,
dimension, or property can be guaranteed to remain fixed at the value it is aimed at. In such scenarios,
the evaluation of a solution is not precise, and the resulting objective and constraint function values become
probabilistic quantities. Optimization algorithms usually handle such stochasticities by
using crude methods, such as Monte Carlo simulation of the uncertain variables and parameters,
or by sophisticated stochastic programming methods involving nested optimization techniques
[61]. When these effects are taken care of during the optimization process, the resulting solution is usually
different from the optimum solution of the problem and is known as a ‘robust’ solution. Such an optimization
procedure will then find a solution which may not be the true global optimum, but one
which is less sensitive to uncertainties in decision variables and problem parameters. In the context of
multi-objective optimization, a consideration of uncertainties for multiple objective functions will result in
a robust frontier which may be different from the globally Pareto-optimal front. Each and every point on
the robust frontier is then guaranteed to be less sensitive to uncertainties in decision variables and problem
parameters. Some such studies in EMO are [62, 63].
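As a minimal sketch of the crude Monte Carlo approach mentioned above, the Python fragment below (the function names and the Gaussian noise model are illustrative assumptions, not part of the cited studies) averages an objective over perturbed copies of a solution; a robust optimizer would rank solutions by such averaged values rather than by their nominal values:

```python
import random

def robust_objective(f, x, sigma=0.05, samples=100):
    """Monte Carlo estimate of the mean objective value under Gaussian
    perturbations of the decision variables (a crude robustness measure)."""
    total = 0.0
    for _ in range(samples):
        # Perturb every decision variable independently (assumed noise model).
        xp = [xi + random.gauss(0.0, sigma) for xi in x]
        total += f(xp)
    return total / samples

# A sharp optimum looks worse once its noisy neighbourhood is averaged in,
# so a robust search may prefer a flatter, less sensitive region instead.
f = lambda x: x[0] ** 2
print(robust_objective(f, [0.0]))  # expected value is sigma**2 = 0.0025
```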
When the evaluation of constraints under uncertainties in decision variables and problem parameters is
considered, deterministic constraints become stochastic (they are also known as 'chance constraints') and
involve a reliability index (R) to handle the constraints. A constraint g(x) ≥ 0 then becomes Prob(g(x) ≥
0) ≥ R. To compute the left side of this chance constraint, a separate optimization methodology [64]
is needed, thereby making the overall algorithm a bi-level optimization procedure. Approximate single-loop
algorithms exist [65], and recently one such methodology has been integrated with an EMO [61] and shown
to find a 'reliable' frontier corresponding to a specified reliability index, instead of the Pareto-optimal
frontier, in problems having uncertainty in decision variables and problem parameters. More such
methodologies are needed, as uncertainty is an integral part of practical problem-solving, and
multi-objective optimization researchers must look for better and faster algorithms to handle it.
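The left side of the chance constraint can be estimated by plain sampling. The sketch below is an assumption-laden Monte Carlo stand-in for the separate optimization methodology of [64], shown only to make the Prob(g(x) ≥ 0) ≥ R test concrete:

```python
import random

def constraint_reliability(g, x, sigma=0.1, samples=2000):
    """Monte Carlo estimate of Prob(g(x) >= 0) under Gaussian noise on the
    decision variables, i.e. the left side of the chance constraint."""
    feasible = sum(
        1 for _ in range(samples)
        if g([xi + random.gauss(0.0, sigma) for xi in x]) >= 0
    )
    return feasible / samples

# A candidate x is treated as feasible only if its estimated reliability
# meets the required reliability index R.
g = lambda x: x[0] - 1.0   # deterministic form of the constraint: x >= 1
R = 0.95
print(constraint_reliability(g, [1.5]) >= R)
```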
10.4 Meta-model Assisted EMO
The practice of optimization algorithms is often limited by the computational overheads associated with
evaluating solutions. Certain problems involve expensive computations, such as the numerical solution of
partial differential equations describing the physics of the problem, finite difference computations for
analyzing a solution, or computational fluid dynamics simulations to study the performance of a solution
over a changing environment. In some such problems, evaluation of each solution to compute constraints
and objective functions may take a few hours to a day or two. In such scenarios, even if an optimization
algorithm needs only one hundred solution evaluations to get anywhere close to a good and feasible
solution, the application needs an easy three to six months of continuous computational time. For most
practical purposes, this is considered a 'luxury' in an industrial set-up. Optimization researchers are
therefore constantly on their toes, coming up with approximate yet faster algorithms.
Meta-models for objective functions and constraints have been developed for this purpose. Two different
approaches are mostly followed. In one approach, a sample of solutions is used to generate a meta-model
(an approximate model of the original objectives and constraints), and then efforts are made to find the
optimum of the meta-model, assuming that the optimal solutions of the meta-model and the original
problem are similar to each other [66, 67]. In the other approach, a successive meta-modeling approach is
used in which the algorithm starts to solve the first meta-model obtained from a sample of the entire search
space [68, 69, 70]. As the solutions start to focus near the optimum region of the meta-model, a new and
more accurate meta-model is generated in the region dictated by the solutions of the previous optimization.
A coarse-to-fine-grained meta-modeling technique based on artificial neural networks is shown to reduce
the computational effort by about 30 to 80% on different problems [68]. Other successful meta-modeling
implementations for multi-objective optimization based on Kriging and response surface methodologies
exist [70, 71].
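The successive (coarse-to-fine) approach can be sketched in one dimension. The toy example below uses a quadratic least-squares surrogate rather than the neural-network or Kriging models of the cited studies; the test function and all names are illustrative assumptions:

```python
import numpy as np

def expensive_f(x):
    # Stand-in for a costly simulation (hypothetical test function).
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

def surrogate_search(f, lo=0.0, hi=1.0, rounds=4, n_samples=8):
    """Successive meta-modeling sketch: fit a cheap quadratic surrogate to a
    small sample, move to its minimiser, then refit on a narrowed region."""
    for _ in range(rounds):
        xs = np.linspace(lo, hi, n_samples)      # the only expensive calls
        ys = f(xs)
        a, b, _ = np.polyfit(xs, ys, 2)          # quadratic meta-model
        x_star = float(np.clip(-b / (2 * a), lo, hi))
        half = (hi - lo) / 4                     # coarse-to-fine shrinking
        lo, hi = x_star - half, x_star + half
    return x_star

print(surrogate_search(expensive_f))
```

Each round spends its expensive evaluations on an ever smaller region around the surrogate's minimiser, which is the essence of the coarse-to-fine idea described above.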
11 Conclusions
This chapter has introduced the fast-growing field of multi-objective optimization based on evolutionary
algorithms. First, the principles of single-objective evolutionary optimization (EO) techniques have been
discussed so that readers can visualize the differences between evolutionary optimization and classical
optimization methods. The EMO principle of handling multi-objective optimization problems is to find a
representative set of Pareto-optimal solutions. Since an EO uses a population of solutions in each iteration,
EO procedures are potentially viable techniques to capture a number of trade-off near-optimal solutions in a
single simulation run. This chapter has described a number of popular EMO methodologies, presented some
simulation studies on test problems, and discussed how EMO principles can be useful in solving real-world
multi-objective optimization problems through a case study of spacecraft trajectory optimization.
Finally, this chapter has discussed the potential of EMO and its current research activities. The principle
of EMO has been utilized to solve other optimization problems that are otherwise not multi-objective in
nature. The diverse set of EMO solutions have been analyzed to find hidden common properties that can
act as valuable knowledge to a user. EMO procedures have been extended to enable them to handle various
practicalities. Finally, the EMO task is now being suitably combined with decision-making activities in
order to make the overall approach more useful in practice.
EMO addresses an important and inevitable aspect of practical problem-solving tasks: the presence of
multiple conflicting objectives. It has enjoyed a steady rise in popularity in a short time, and its
methodologies are being extended to address practicalities. In the area of evolutionary computing and
optimization, EMO research and application currently stands as one of the fastest-growing fields. EMO
methodologies are yet to be applied to many areas of science and engineering; with such applications,
the true value and importance of EMO will become evident.
Acknowledgments
This chapter contains some excerpts from previous publications by the same author entitled ‘Introduction
to Evolutionary Multi-Objective Optimization’, In J. Branke, K. Deb, K. Miettinen and R. Slowinski
(Eds.) Multiobjective Optimization: Interactive and Evolutionary Approaches (LNCS 5252), 2008, Berlin:
Springer-Verlag, pp. 59–96, and 'Recent Developments in Evolutionary Multi-Objective Optimization' in
M. Ehrgott et al. (Eds.) Trends in Multiple Criteria Decision Analysis, 2010, Berlin: Springer-Verlag, pp.
339–368. This paper will appear as a chapter in a Springer book entitled 'Multi-objective Evolutionary
Optimisation for Product Design and Manufacturing’ edited by Lihui Wang, Amos Ng, and Kalyanmoy
Deb in 2011.
References
[1] Deb K. Multi-objective optimization using evolutionary algorithms. Chichester, UK: Wiley; 2001.
[2] Goldberg DE. Genetic Algorithms for Search, Optimization, and Machine Learning. Reading, MA:
Addison-Wesley; 1989.
[3] Deb K, Reddy AR, Singh G. Optimal scheduling of casting sequence using genetic algorithms. Journal
of Materials and Manufacturing Processes. 2003;18(3):409–432.
[4] Deb K. An introduction to genetic algorithms. Sādhanā. 1999;24(4):293–315.
[5] Deb K, Agrawal RB. Simulated binary crossover for continuous search space. Complex Systems.
1995;9(2):115–148.
[6] Deb K, Anand A, Joshi D. A computationally efficient evolutionary algorithm for real-parameter
optimization. Evolutionary Computation Journal. 2002;10(4):371–395.
[7] Storn R, Price K. Differential evolution – A fast and efficient heuristic for global optimization over
continuous spaces. Journal of Global Optimization. 1997;11:341–359.
[8] Rudolph G. Convergence analysis of canonical genetic algorithms. IEEE Transactions on Neural
Network. 1994;5(1):96–101.
[9] Michalewicz Z. Genetic Algorithms + Data Structures = Evolution Programs. Berlin: Springer-Verlag;
1992.
[10] Gen M, Cheng R. Genetic Algorithms and Engineering Design. New York: Wiley; 1997.
[11] Bäck T, Fogel D, Michalewicz Z, editors. Handbook of Evolutionary Computation. Bristol: Institute
of Physics Publishing and New York: Oxford University Press; 1997.
[12] Deb K, Tiwari R, Dixit M, Dutta J. Finding trade-off solutions close to KKT points using evolutionary
multi-objective optimization. In: Proceedings of the Congress on Evolutionary Computation (CEC-
2007); 2007. p. 2109–2116.
[13] Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: MIT Press; 1975.
[14] Vose MD, Wright AH, Rowe JE. Implicit parallelism. In: Proceedings of GECCO 2003 (Lecture Notes
in Computer Science, vol. 2723–2724). Springer-Verlag; 2003.
[15] Jansen T, Wegener I. On the utility of populations. In: Proceedings of the Genetic and Evolutionary
Computation Conference (GECCO 2001). San Mateo, CA: Morgan Kaufmann; 2001. p. 375–382.
[16] Radcliffe NJ. Forma analysis and random respectful recombination. In: Proceedings of the Fourth
International Conference on Genetic Algorithms; 1991. p. 222–229.
[17] Miettinen K. Nonlinear Multiobjective Optimization. Boston: Kluwer; 1999.
[18] Kung HT, Luccio F, Preparata FP. On finding the maxima of a set of vectors. Journal of the
Association for Computing Machinery. 1975;22(4):469–476.
[19] Ehrgott M. Multicriteria Optimization. Berlin: Springer; 2000.
[20] Deb K, Tiwari S. Omni-optimizer: A generic evolutionary algorithm for global optimization. European
Journal of Operations Research (EJOR). 2008;185(3):1062–1087.
[21] Deb K, Agrawal S, Pratap A, Meyarivan T. A fast and Elitist multi-objective Genetic Algorithm:
NSGA-II. IEEE Transactions on Evolutionary Computation. 2002;6(2):182–197.
[22] Coello CAC, VanVeldhuizen DA, Lamont G. Evolutionary Algorithms for Solving Multi-Objective
Problems. Boston, MA: Kluwer; 2002.
[23] Osyczka A. Evolutionary algorithms for single and multicriteria design optimization. Heidelberg:
Physica-Verlag; 2002.
[24] Zitzler E, Deb K, Thiele L, Coello CAC, Corne DW. Proceedings of the First Evolutionary Multi-
Criterion Optimization (EMO-01) Conference (Lecture Notes in Computer Science (LNCS) 1993).
Heidelberg: Springer; 2001.
[25] Fonseca C, Fleming P, Zitzler E, Deb K, Thiele L. Proceedings of the Second Evolutionary Multi-
Criterion Optimization (EMO-03) Conference (Lecture Notes in Computer Science (LNCS) 2632).
Heidelberg: Springer; 2003.
[26] Coello CAC, Aguirre AH, Zitzler E, editors. Evolutionary Multi-Criterion Optimization: Third Inter-
national Conference. Berlin, Germany: Springer; 2005. LNCS 3410.
[27] Obayashi S, Deb K, Poloni C, Hiroyasu T, Murata T, editors. Evolutionary Multi-Criterion Optimiza-
tion, 4th International Conference, EMO 2007, Matsushima, Japan, March 5-8, 2007, Proceedings.
vol. 4403 of Lecture Notes in Computer Science. Springer; 2007.
[28] Coverstone-Carroll V, Hartmann JW, Mason WJ. Optimal multi-objective low-thrust spacecraft
trajectories. Computer Methods in Applied Mechanics and Engineering. 2000;186(2–4):387–402.
[29] Srinivas N, Deb K. Multi-Objective function optimization using non-dominated sorting genetic algo-
rithms. Evolutionary Computation Journal. 1994;2(3):221–248.
[30] Sauer CG. Optimization of multiple target electric propulsion trajectories. In: AIAA 11th Aerospace
Science Meeting; 1973. Paper Number 73-205.
[31] Knowles JD, Corne DW. On metrics for comparing nondominated sets. In: Congress on Evolutionary
Computation (CEC-2002). Piscataway, NJ: IEEE Press; 2002. p. 711–716.
[32] Hansen MP, Jaskiewicz A. Evaluating the quality of approximations to the non-dominated set. Lyngby:
Institute of Mathematical Modelling, Technical University of Denmark; 1998. IMM-REP-1998-7.
[33] Zitzler E, Thiele L, Laumanns M, Fonseca CM, Fonseca VG. Performance assessment of multiobjective
optimizers: An analysis and review. IEEE Transactions on Evolutionary Computation. 2003;7(2):117–
132.
[34] Fonseca CM, Fleming PJ. On the performance assessment and comparison of stochastic multiobjective
optimizers. In: Voigt HM, Ebeling W, Rechenberg I, Schwefel HP, editors. Parallel Problem Solving
from Nature (PPSN IV). Berlin: Springer; 1996. p. 584–593. Also available as Lecture Notes in
Computer Science 1141.
[35] Fonseca CM, da Fonseca VG, Paquete L. Exploring the performance of stochastic multiobjective opti-
misers with the second-order attainment function. In: Third International Conference on Evolutionary
Multi-Criterion Optimization, EMO-2005. Berlin: Springer; 2005. p. 250–264.
[36] Deb K, Sundar J, Uday N, Chaudhuri S. Reference Point Based Multi-Objective Optimization Us-
ing Evolutionary Algorithms. International Journal of Computational Intelligence Research (IJCIR).
2006;2(6):273–286.
[37] Deb K, Kumar A. Interactive evolutionary multi-objective optimization and decision-making using
reference direction method. In: Proceedings of the Genetic and Evolutionary Computation Conference
(GECCO-2007). New York: The Association of Computing Machinery (ACM); 2007. p. 781–788.
[38] Deb K, Kumar A. Light Beam Search Based Multi-objective Optimization using Evolutionary Algo-
rithms. In: Proceedings of the Congress on Evolutionary Computation (CEC-07); 2007. p. 2125–2132.
[39] Deb K, Sinha A, Kukkonen S. Multi-objective test problems, linkages and evolutionary methodologies.
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2006). New York:
The Association of Computing Machinery (ACM); 2006. p. 1141–1148.
[40] Coello CAC. Treating objectives as constraints for single objective optimization. Engineering Opti-
mization. 2000;32(3):275–308.
[41] Deb K, Datta R. A Fast and Accurate Solution of Constrained Optimization Problems Using a
Hybrid Bi-Objective and Penalty Function Approach. In: Proceedings of the IEEE World Congress
on Computational Intelligence (WCCI-2010); 2010.
[42] Bleuler S, Brack M, Zitzler E. Multiobjective genetic programming: Reducing bloat using SPEA2.
In: Proceedings of the 2001 Congress on Evolutionary Computation; 2001. p. 536–543.
[43] Handl J, Knowles JD. An evolutionary approach to multiobjective clustering. IEEE Transactions on
Evolutionary Computation. 2007;11(1):56–76.
[44] Knowles JD, Corne DW, Deb K. Multiobjective problem solving from nature. Springer Natural
Computing Series, Springer-Verlag; 2008.
[45] Deb K, Srinivasan A. Innovization: Innovating design principles through optimization. In: Proceedings
of the Genetic and Evolutionary Computation Conference (GECCO-2006). New York: ACM; 2006. p.
1629–1636.
[46] Deb K, Sindhya K. Deciphering innovative principles for optimal electric brushless D.C. permanent
magnet motor design. In: Proceedings of the World Congress on Computational Intelligence (WCCI-
2008). Piscataway, NJ: IEEE Press; 2008. p. 2283–2290.
[47] Bandaru S, Deb K. Towards automating the discovery of certain innovative design principles through
a clustering-based optimization technique. Engineering Optimization. In press.
[48] Deb K, Goel T. A hybrid multi-objective evolutionary approach to engineering shape design. In: Pro-
ceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO-
01); 2001. p. 385–399.
[49] Sindhya K, Deb K, Miettinen K. A local search based evolutionary multi-objective optimization
technique for fast and accurate convergence. In: Proceedings of the Parallel Problem Solving From
Nature (PPSN-2008). Berlin, Germany: Springer-Verlag; 2008.
[50] Khare V, Yao X, Deb K. Performance Scaling of Multi-objective Evolutionary Algorithms. In:
Proceedings of the Second Evolutionary Multi-Criterion Optimization (EMO-03) Conference (LNCS
2632); 2003. p. 376–390.
[51] Luque M, Miettinen K, Eskelinen P, Ruiz F. Incorporating preference information in interactive
reference point based methods for multiobjective optimization. Omega. 2009;37(2):450–462.
[52] Branke J, Deb K. Integrating user preferences into evolutionary multi-objective optimization. In: Jin
Y, editor. Knowledge Incorporation in Evolutionary Computation. Heidelberg, Germany: Springer;
2004. p. 461–477.
[53] Deb K, Zope P, Jain A. Distributed Computing of Pareto-Optimal Solutions Using Multi-Objective
Evolutionary Algorithms. In: Proceedings of the Second Evolutionary Multi-Criterion Optimization
(EMO-03) Conference (LNCS 2632); 2003. p. 535–549.
[54] Deb K, Saxena D. Searching For Pareto-Optimal Solutions Through Dimensionality Reduction for
Certain Large-Dimensional Multi-Objective Optimization Problems. In: Proceedings of the World
Congress on Computational Intelligence (WCCI-2006); 2006. p. 3352–3360.
[55] Saxena DK, Deb K. Non-linear Dimensionality Reduction Procedures for Certain Large-Dimensional
Multi-Objective Optimization Problems: Employing Correntropy and a Novel Maximum Variance
Unfolding. In: Proceedings of the Fourth International Conference on Evolutionary Multi-Criterion
Optimization (EMO-2007); 2007. p. 772–787.
[56] Brockhoff D, Zitzler E. Dimensionality Reduction in Multiobjective Optimization: The Minimum
Objective Subset Problem. In: Waldmann KH, Stocker UM, editors. Operations Research Proceedings
2006. Springer; 2007. p. 423–429.
[57] Brockhoff D, Zitzler E. Offline and Online Objective Reduction in Evolutionary Multiobjective Opti-
mization Based on Objective Conflicts. Institut für Technische Informatik und Kommunikationsnetze,
ETH Zürich; 2007. Report 269.
[58] Farina M, Amato P. A Fuzzy Definition of Optimality for Many Criteria Optimization Problems.
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. 2004;34(3):315–326.
[59] Branke J. Evolutionary Optimization in Dynamic Environments. Heidelberg, Germany: Springer;
2001.
[60] Deb K, Rao UB, Karthik S. Dynamic Multi-Objective Optimization and Decision-Making Using
Modified NSGA-II: A Case Study on Hydro-Thermal Power Scheduling Bi-Objective Optimization
Problems. In: Proceedings of the Fourth International Conference on Evolutionary Multi-Criterion
Optimization (EMO-2007); 2007.
[61] Deb K, Gupta S, Daum D, Branke J, Mall A, Padmanabhan D. Reliability-based optimization using
evolutionary algorithms. IEEE Transactions on Evolutionary Computation. In press.
[62] Deb K, Gupta H. Introducing robustness in multi-objective optimization. Evolutionary Computation
Journal. 2006;14(4):463–494.
[63] Basseur M, Zitzler E. Handling Uncertainty in Indicator-Based Multiobjective Optimization. Inter-
national Journal of Computational Intelligence Research. 2006;2(3):255–272.
[64] Cruse TR. Reliability-based mechanical design. New York: Marcel Dekker; 1997.
[65] Du X, Chen W. Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic
Design. ASME Transactions on Journal of Mechanical Design. 2004;126(2):225–233.
[66] El-Beltagy MA, Nair PB, Keane AJ. Metamodelling techniques for evolutionary optimization of
computationally expensive problems: promises and limitations. In: Proceedings of the Genetic and
Evolutionary Computation Conference (GECCO-1999). San Mateo, CA: Morgan Kaufman; 1999. p.
196–203.
[67] Giannakoglou KC. Design of optimal aerodynamic shapes using stochastic optimization methods and
computational intelligence. Progress in Aerospace Science. 2002;38(1):43–76.
[68] Nain PKS, Deb K. Computationally effective search and optimization procedure using coarse to fine
approximations. In: Proceedings of the Congress on Evolutionary Computation (CEC-2003); 2003. p.
2081–2088.
[69] Deb K, Nain PKS. An Evolutionary Multi-objective Adaptive Meta-modeling Procedure Using
Artificial Neural Networks. Berlin, Germany: Springer; 2007. p. 297–322.
[70] Emmerich MTM, Giannakoglou KC, Naujoks B. Single and multiobjective evolutionary optimization
assisted by Gaussian random field metamodels. IEEE Transactions on Evolutionary Computation.
2006;10(4):421–439.
[71] Emmerich M, Naujoks B. Metamodel-Assisted Multiobjective Optimisation Strategies and Their
Application in Airfoil Design. In: Adaptive Computing in Design and Manufacture VI. London, UK:
Springer; 2004. p. 249–260.