Constructing a classification model for a particular task is an important problem in machine learning. Classification assigns objects to predefined groups or classes based on a number of observed attributes of those objects. The artificial neural network is one classification algorithm that can be used in many application areas. This paper investigates the potential of the feed-forward neural network architecture for the classification of medical datasets. A migration-based differential evolution algorithm (MBDE) is chosen and applied to the feed-forward neural network to enhance the learning process, and network learning is validated in terms of convergence rate and classification accuracy. In this paper, the MBDE algorithm with various migration policies is proposed for medical-diagnosis classification problems.
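The idea of evolving network weights with differential evolution can be sketched as follows. This is a minimal classic DE/rand/1/bin over the flattened weight vector of a tiny 2-4-1 network on a synthetic two-blob dataset; the paper's MBDE migration policies and medical datasets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset: two Gaussian blobs (stand-ins for a medical dataset).
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1  # weights + biases of a 2-4-1 net

def forward(w, X):
    """Feed-forward pass of the 2-4-1 network with a sigmoid output."""
    W1 = w[:8].reshape(N_IN, N_HID)
    b1 = w[8:12]
    W2 = w[12:16]
    b2 = w[16]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    """Classification error rate, used as the DE fitness."""
    return np.mean((forward(w, X) > 0.5).astype(int) != y)

# Classic DE/rand/1/bin over the flattened weight vector.
NP, F, CR = 20, 0.5, 0.9
pop = rng.normal(0, 1, (NP, DIM))
fit = np.array([loss(w) for w in pop])
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mask = rng.random(DIM) < CR
        trial = np.where(mask, a + F * (b - c), pop[i])  # mutate + crossover
        f = loss(trial)
        if f <= fit[i]:                                  # greedy selection
            pop[i], fit[i] = trial, f

print("best training error:", fit.min())
```

Because DE only needs fitness values, the non-differentiable error rate can be optimized directly, which is one reason evolutionary training is attractive here.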
An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...idescitation
Nature-inspired population-based evolutionary algorithms are popular for their competitive solutions across a wide variety of applications. Teaching-Learning-Based Optimization (TLBO) is a recent population-based evolutionary algorithm modeled on the teaching-learning process of a classroom, and it does not require any algorithm-specific parameters. This paper proposes an automatic grouping of pixels into homogeneous regions using TLBO. The experimental results demonstrate the effectiveness of TLBO in image segmentation.
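The two phases that make TLBO parameter-free can be sketched on a generic continuous objective. The sphere function below is a stand-in for the paper's segmentation criterion; only population size and iteration count are supplied, as TLBO has no other tunable parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Stand-in objective; the paper optimizes a segmentation criterion instead."""
    return np.sum(x ** 2)

NP, DIM, ITERS = 15, 5, 100
pop = rng.uniform(-5, 5, (NP, DIM))
fit = np.array([sphere(p) for p in pop])

for _ in range(ITERS):
    # Teacher phase: move learners toward the best solution, away from the mean.
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    Tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
    for i in range(NP):
        cand = pop[i] + rng.random(DIM) * (teacher - Tf * mean)
        f = sphere(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
    # Learner phase: each learner interacts with a random peer.
    for i in range(NP):
        j = rng.choice([k for k in range(NP) if k != i])
        step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        cand = pop[i] + rng.random(DIM) * step
        f = sphere(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

print("best value:", fit.min())
```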
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural...IJECEIAES
Variable selection is an important technique for reducing the dimensionality of data, frequently used in data preprocessing for data mining. This paper presents a new variable-selection algorithm that uses Heuristic Variable Selection (HVS) and Minimum Redundancy Maximum Relevance (MRMR). We enhance the HVS method for variable selection by incorporating the MRMR filter. Our algorithm is based on a wrapper approach using a multilayer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. We evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy than MRMR, HVS, and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
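The MRMR half of the criterion can be sketched with plug-in mutual information over discrete features. This is only the greedy MRMR filter; the HVS criterion and the perceptron wrapper are not reproduced, and the demo data are hypothetical.

```python
import numpy as np
from collections import Counter

def mutual_info(a, b):
    """Plug-in mutual information (nats) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * np.log((c / n) / (pa[x] / n * pb[y] / n))
               for (x, y), c in pab.items())

def mrmr(X, y, k):
    """Greedy MRMR: repeatedly pick the feature maximizing
    relevance(f, y) minus mean redundancy(f, already selected)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best, best_score = None, -np.inf
        for f in remaining:
            rel = mutual_info(X[:, f], y)
            red = (np.mean([mutual_info(X[:, f], X[:, s]) for s in selected])
                   if selected else 0.0)
            if rel - red > best_score:
                best, best_score = f, rel - red
        selected.append(best)
        remaining.remove(best)
    return selected

# Demo: feature 0 copies the label, feature 1 is noise, feature 2 duplicates 0.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)
X = np.column_stack([y, rng.integers(0, 2, 200), y])
sel = mrmr(X, y, 2)
print(sel)  # picks feature 0 first; the duplicate (2) is penalized as redundant
```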
A Combined Approach for Feature Subset Selection and Size Reduction for High ...IJERA Editor
Selection of relevant features from a given feature set is one of the important issues in the field of data mining as well as classification. In general, a dataset may contain many features, but not all of them are important for a particular analysis or decision-making task, because features may share common information or be completely irrelevant to the processing at hand. This generally happens because of improper selection of features during dataset formation or because of incomplete information about the observed system. In both cases the data will contain features that merely increase the processing burden and may ultimately lead to improper outcomes when used for analysis. For these reasons, methods are required to detect and remove such features; hence in this paper we present an efficient approach that not only removes unimportant features but also reduces the size of the complete dataset. The proposed algorithm uses information theory to compute the information gain of each feature and a minimum spanning tree to group similar features; fuzzy c-means clustering is then used to remove similar entries from the dataset. Finally, the algorithm is tested with an SVM classifier on 35 publicly available real-world high-dimensional datasets, and the results show that it not only reduces the feature set and data length but also improves classifier performance.
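The information-gain step mentioned above can be sketched for discrete features; the MST grouping and fuzzy c-means stages of the pipeline are not shown, and the toy arrays are illustrative only.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Information gain of a discrete feature: H(Y) - H(Y | feature)."""
    n = len(labels)
    cond = sum(
        (np.sum(feature == v) / n) * entropy(labels[feature == v])
        for v in np.unique(feature)
    )
    return entropy(labels) - cond

y = np.array([0, 0, 1, 1, 1, 0])
informative = np.array([0, 0, 1, 1, 1, 0])  # mirrors the label exactly
noisy = np.array([0, 1, 0, 1, 0, 1])        # unrelated to the label
print(info_gain(informative, y), info_gain(noisy, y))
```

A feature that perfectly determines the label achieves the full gain H(Y), while an irrelevant one scores near zero, which is the basis for ranking features before grouping them.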
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
Feed-forward neural network (FNN) performance depends on the training algorithm and on architecture selection. Several parameters define an FNN architecture, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The number of possible architectural combinations grows exponentially and is unmanageable by hand, so a specific architecture can be designed automatically by an algorithm that builds a system with better generalization ability. FNN architecture determination can be done with numerous optimization algorithms. This paper proposes a new methodology in which the numbers of hidden neurons and hidden layers of the FNN are estimated by combining training with the self-organizing feature map (SOFM) algorithm; the best architecture is selected automatically by the SOFM from test-error criteria over a population of architectures. The proposed approach is tested on four benchmark classification datasets of different sizes.
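A minimal 1-D self-organizing feature map, the core tool the paper builds on, can be sketched as follows. This shows only the SOFM component on synthetic 2-D data, not the architecture-selection procedure itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal 1-D SOFM on 2-D data.
GRID, DIM, EPOCHS = 8, 2, 50
data = rng.uniform(0, 1, (200, DIM))
weights = rng.uniform(0, 1, (GRID, DIM))

for epoch in range(EPOCHS):
    lr = 0.5 * (1 - epoch / EPOCHS)                     # decaying learning rate
    sigma = max(GRID / 2 * (1 - epoch / EPOCHS), 0.5)   # shrinking neighborhood
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
        dist = np.abs(np.arange(GRID) - bmu)
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))             # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)

# After training, each unit's weight vector lies inside the data's range.
print(weights.min(), weights.max())
```

Each update pulls the winning unit and its grid neighbors toward the input, so the map's units end up spread over the data distribution.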
Multimodal authentication is one of the prime concepts in current real-world applications, and various approaches have been proposed in this area. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in the biometric-security setting. First, features are extracted through PCA by SVD from the chosen biometric patterns; key components are then extracted using the LU factorization technique, selected at different key sizes, and combined using a convolution-kernel method (the exponential Kronecker product, eKP) in a Context-Sensitive Exponent Associative Memory model (CSEAM). Verification proceeds in the same way and is assessed with the MSE measure. This model gives better outcomes when compared with SVD factorization [1] for feature selection. The process is computed for different key sizes and the results are presented.
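The Kronecker product as a key-combination operator, and MSE as the verification measure, can be illustrated in isolation. The "key components" below are hypothetical toy matrices; the paper's eKP/CSEAM construction is more involved than a plain Kronecker product.

```python
import numpy as np

# Hypothetical toy key components from two biometric modalities.
key_a = np.array([[1, 2],
                  [3, 4]])
key_b = np.array([[0, 1],
                  [1, 0]])

combined = np.kron(key_a, key_b)      # (2x2) Kronecker (2x2) -> 4x4 fused key
print(combined.shape)                 # (4, 4)

def mse(a, b):
    """Mean squared error, the verification measure used in the paper."""
    return np.mean((a - b) ** 2)

# A matching probe reproduces the fused key exactly (MSE 0); swapping the
# operand order does not, since the Kronecker product is not commutative.
print(mse(combined, np.kron(key_a, key_b)))
print(mse(combined, np.kron(key_b, key_a)) > 0)
```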
Multimodal Biometrics Recognition by Dimensionality Diminution MethodIJERA Editor
A multimodal biometric system uses two or more biometric modalities, e.g., face, ear, fingerprint, signature, and palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality-reduction method called Dimension Diminish Projection (DDP) in this paper. DDP not only preserves local information by capturing the intra-modal geometry, but also effectively extracts between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms, including PCA, LDA, and MFA.
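For reference, the PCA baseline that DDP is compared against can be sketched via SVD on centered data. This is only the generic baseline, not DDP itself, and the random matrix is a placeholder for real biometric features.

```python
import numpy as np

rng = np.random.default_rng(9)

# Baseline PCA (one of the comparison methods), computed via SVD.
X = rng.normal(0, 1, (50, 10))
X = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Z = X @ Vt[:k].T                       # project onto the top-k principal axes
print(Z.shape)                         # (50, 2)
```

The rows of `Vt` are ordered by singular value, so the first projected coordinate always carries at least as much variance as the second.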
PREDICTIVE EVALUATION OF THE STOCK PORTFOLIO PERFORMANCE USING FUZZY CMEANS A...ijfls
The aim of this paper is to investigate the trend of the return of a portfolio formed randomly or by any specific technique. The approach uses two fuzzy techniques: the fuzzy c-means (FCM) algorithm and the fuzzy transform, where the rules used in the fuzzy transform arise from the application of the FCM algorithm. The results show that the proposed methodology is able to predict the trend of the return of a stock portfolio, as well as the tendency of the market index. Real financial-market data from 2004 to 2007 are used.
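The FCM algorithm at the heart of the approach alternates between updating cluster centers and fuzzy memberships. Below is a minimal implementation on synthetic 2-D points standing in for return feature vectors; the fuzzy-transform rule extraction is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def fuzzy_c_means(X, c, m=2.0, iters=100):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix U."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)            # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every point to every center, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2 / (m - 1)
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_jk)^p.
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, U

# Two obvious blobs (stand-ins for return feature vectors).
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(centers))
```

Unlike hard clustering, each point keeps a degree of membership in every cluster, which is what feeds the fuzzy rules downstream.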
Textural Feature Extraction of Natural Objects for Image ClassificationCSCJournals
The field of digital image processing has been growing in scope in recent years. A digital image is represented as a two-dimensional array of pixels, where each pixel has intensity and location information. Analysis of digital images involves extracting meaningful information from them, based on certain requirements, and requires feature extraction, which transforms data in a high-dimensional space to a space of fewer dimensions. Feature vectors are n-dimensional vectors of numerical features used to represent an object. We have used Haralick features to classify various images using different classification algorithms, such as Support Vector Machines (SVM), logistic classifiers, random forests, multilayer perceptrons, and the naïve Bayes classifier. We then used cross-validation to assess how well a classifier works on a generalized dataset, as compared with the classifications obtained during training.
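Haralick features are computed from a gray-level co-occurrence matrix (GLCM). The sketch below builds a horizontal-neighbor GLCM and two of the classic features; a full pipeline would average over several offsets and angles and compute all fourteen features.

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for horizontally adjacent pixels,
    normalized to probabilities."""
    M = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            M[a, b] += 1
    return M / M.sum()

def haralick_subset(P):
    """Two of Haralick's textural features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)            # a.k.a. angular second moment
    return contrast, energy

flat = np.zeros((8, 8), dtype=int)                       # uniform texture
noisy = np.random.default_rng(5).integers(0, 4, (8, 8))  # random texture

print(haralick_subset(glcm(flat)))    # contrast 0, energy 1
print(haralick_subset(glcm(noisy)))
```

These scalar features form the feature vector handed to the SVM, logistic, random-forest, perceptron, or naïve Bayes classifier.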
With the increase in Internet users, the number of malicious users is also growing day by day, posing a serious problem in distinguishing between normal and abnormal user behavior in the network. This has led to the research area of intrusion detection, which essentially analyzes the network traffic and tries to determine normal and abnormal patterns of behavior. In this paper, we have analyzed the standard NSL-KDD intrusion dataset using several neural-network-based techniques for predicting possible intrusions. Four of the most effective classification methods, namely the Radial Basis Function Network, Self-Organizing Map, Sequential Minimal Optimization, and Projective Adaptive Resonance Theory, have been applied. To enhance the performance of the classifiers, three entropy-based feature-selection methods have been applied as data preprocessing. The performances of different combinations of classifiers and attribute-reduction methods have also been compared.
The purpose of this research is to develop the Multi-Criteria Group Decision Making (MCGDM) decision model into Interval-Valued Fuzzy Multi-Criteria Group Decision Making (IV-FMCGDM); the specific aim is to construct an Adaptive Interval-Valued Fuzzy Analytic Hierarchy Process (AIV-FAHP) decision-making model that uses Triangular Fuzzy Numbers (TFN), with group decisions aggregated by Interval-Valued Geometric Mean Aggregation (IV-GMA). The novelty of the research is to study group decision making by improving the middle point of the Interval-Valued Triangular Fuzzy Number (IV-TFN). This provides more accurate modeling, better rating performance, and more effective linguistic representation. The research produced a new AIV-FAHP-based decision-making model and algorithm, used to measure the quality of e-learning.
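The TFN building block and geometric-mean group aggregation can be sketched as follows. This shows only an ordinary TFN with component-wise geometric-mean aggregation; the paper's interval-valued extension adds a second (inner) triangle around the same middle point, and the expert ratings are hypothetical.

```python
import numpy as np

def tfn_membership(x, l, m, u):
    """Degree of membership of crisp x in the TFN (l, m, u)."""
    if l < x <= m:
        return (x - l) / (m - l)
    if m < x < u:
        return (u - x) / (u - m)
    return 1.0 if x == m else 0.0

def geometric_mean_aggregate(tfns):
    """Aggregate expert TFN judgments component-wise by geometric mean,
    as in fuzzy-AHP group aggregation."""
    arr = np.array(tfns, dtype=float)
    return tuple(np.prod(arr, axis=0) ** (1 / len(arr)))

experts = [(1, 2, 3), (2, 3, 4), (1, 3, 5)]   # three experts' ratings as TFNs
print(geometric_mean_aggregate(experts))
print(tfn_membership(2.0, 1, 2, 3))           # 1.0 at the peak
```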
Behavior study of entropy in a digital image through an iterative algorithmijscmcj
Image segmentation is a critical step in computer-vision tasks, constituting an essential issue for pattern recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images through an iterative mean-shift filtering algorithm. The order of a digital image in gray levels is defined. The behavior of the Shannon entropy is analyzed and then compared, taking into account the number of iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The equivalence classes it induces allow us to interpret entropy as a hypersurface in real m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image is used to group the iterations, in order to characterize the performance of the algorithm.
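The quantity being tracked per iteration, the Shannon entropy of the gray-level histogram, and its maximum for a given order can be sketched directly; the mean-shift filtering loop itself is not reproduced.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(6)
noisy = rng.integers(0, 256, (64, 64))   # near-maximal entropy
flat = np.full((64, 64), 128)            # constant image: zero entropy
# The maximum entropy for an image of order n distinct gray levels is log2(n).
print(image_entropy(noisy), image_entropy(flat), np.log2(256))
```

As mean-shift filtering homogenizes regions, the histogram concentrates and this entropy decreases toward zero, which is the behavior the paper studies.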
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTIONsipij
The aim of this paper is to reformulate the global image-thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of the background and object gray levels. Further, the method is less influenced by outliers owing to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared with other popular global image-thresholding methods. In this paper we also propose a performance criterion for comparing thresholding algorithms; this criterion does not depend on any ground-truth image. We have used it to compare the results of the proposed thresholding algorithm with the most-cited global thresholding algorithms in the literature.
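The general shape of global thresholding as a change-point search, a scan over candidate split points of the histogram, can be sketched with Otsu's between-class variance as a stand-in criterion; the paper's KL-divergence-based criterion would replace the variance term in the same scan.

```python
import numpy as np

def otsu_threshold(img):
    """Scan candidate thresholds t and keep the one maximizing between-class
    variance (Otsu's criterion, a stand-in for the paper's KL criterion)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

rng = np.random.default_rng(7)
# Synthetic bimodal image: dark background near 60, bright object near 190.
img = np.clip(np.concatenate([rng.normal(60, 10, 2000),
                              rng.normal(190, 10, 2000)]), 0, 255).astype(int)
t = otsu_threshold(img)
print(t)  # lands between the two modes
```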
Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based M...ijma
In content-based multimedia retrieval, multimedia information is processed to obtain descriptive features. Descriptive feature representations result in a huge feature count, which incurs processing overhead. To reduce this overhead, various dimensionality-reduction approaches have been used, of which PCA and LDA are the most common. However, these methods do not reflect the significance of feature content in terms of the interrelations among all dataset features. To achieve dimension reduction based on a histogram transformation, features with low significance can be eliminated. In this paper, we propose a feature-dimension-reduction approach based on multilinear kernel (MLK) modeling. A benchmark dataset is used for the experimental work, and the proposed method is observed to improve on the conventional system in our analysis.
ON FEATURE SELECTION ALGORITHMS AND FEATURE SELECTION STABILITY MEASURES: A C...ijcsit
Data mining is indispensable for business organizations to extract useful information from the huge volume of stored data, which can be used in managerial decision making to survive the competition. Owing to day-to-day advances in information and communication technology, the data collected from e-commerce and e-governance are mostly high dimensional, and data mining prefers small datasets to high-dimensional ones. Feature selection is an important dimensionality-reduction technique. The subsets selected in subsequent iterations of feature selection should be the same, or similar, even under small perturbations of the dataset; this property is called selection stability, and it has recently become an important topic in the research community. Selection stability has been quantified with various measures. This paper analyses the choice of a suitable search method and stability measure for feature-selection algorithms, as well as the influence of dataset characteristics, since the best approach is highly problem dependent.
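One common selection-stability measure, average pairwise Jaccard similarity between the feature subsets chosen on perturbed datasets, can be sketched as follows; the example subsets are hypothetical.

```python
import numpy as np
from itertools import combinations

def jaccard_stability(subsets):
    """Average pairwise Jaccard similarity of feature subsets selected on
    perturbed versions of a dataset: 1 = identical subsets, 0 = disjoint."""
    sims = [len(a & b) / len(a | b)
            for a, b in combinations(map(set, subsets), 2)]
    return float(np.mean(sims))

# Hypothetical subsets chosen across three bootstrap samples of a dataset.
stable = [{0, 1, 2}, {0, 1, 2}, {0, 1, 3}]
unstable = [{0, 1, 2}, {5, 6, 7}, {3, 8, 9}]
print(jaccard_stability(stable), jaccard_stability(unstable))
```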
A brief history of the evolution of social networks up to educational networks, the main functions of educational networks, the most important networks in the world, the Spanish networks, and the most important functions of redAlumnos.
2015 Colorado Business Economic OutlookKeenan Brugh
The Business Research Division (BRD) in the Leeds School of Business is proud to present our 50th annual Colorado Business Economic Outlook. In commemorating this milestone anniversary, we acknowledge the vision of Dean William Baughn in 1964 of a consensus forecast developed by our College of Business at the time, the business community, and the state government. We celebrate this partnership that relies on research conducted by our students and staff, and members of the public and private sectors in service and outreach to the state of Colorado.
This forecast analyzes changes that have occurred in all economic sectors during the past year, and looks at the opportunities and challenges that will shape population, employment, and the overall economy in the coming year. The information in this book is initially presented at the fiftieth annual Colorado Business Economic Outlook Forum in Denver, followed by roughly 50 forecast speeches that are held throughout the state during the year, ranging from presentations to industry associations and nonprofit organizations to the Federal Reserve Bank of Kansas City Regional Economic Roundtable.
http://www.colorado.edu/leeds/centers/business-research-division/brd-publications/colorado-business-economic-outlook/50th-annual
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT...IAEME Publication
Close-range photogrammetry network design refers to the process of placing a set of cameras so as to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and particle swarm optimization are developed to determine the optimal camera stations for computing three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and particle swarm optimization for the close-range photogrammetry network is developed. The paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
MULTIPROCESSOR SCHEDULING AND PERFORMANCE EVALUATION USING ELITIST NON DOMINA...ijcsa
Task scheduling plays an important part in the improvement of parallel and distributed systems. The problem of task scheduling has been shown to be NP hard. The time consuming is more to solve the problem in deterministic techniques. There are algorithms developed to schedule tasks for distributed environment, which focus on single objective. The problem becomes more complex, while considering biobjective.This paper presents bi-objective independent task scheduling algorithm using elitist Nondominated
sorting genetic algorithm (NSGA-II) to minimize the makespan and flowtime. This algorithm generates pareto global optimal solutions for this bi-objective task scheduling problem. NSGA-II is implemented by using the set of benchmark instances. The experimental result shows NSGA-II generates efficient optimal schedules.
An approach for breast cancer diagnosis classification using neural networkacijjournal
Artificial neural network has been widely used in various fields as an intelligent tool in recent years, such
as artificial intelligence, pattern recognition, medical diagnosis, machine learning and so on. The
classification of breast cancer is a medical application that poses a great challenge for researchers and
scientists. Recently, the neural network has become a popular tool in the classification of cancer datasets.
Classification is one of the most active research and application areas of neural networks. Major
disadvantages of artificial neural network (ANN) classifier are due to its sluggish convergence and always
being trapped at the local minima. To overcome this problem, differential evolution algorithm (DE) has
been used to determine optimal value or near optimal value for ANN parameters. DE has been applied
successfully to improve ANN learning from previous studies. However, there are still some issues on DE
approach such as longer training time and lower classification accuracy. To overcome these problems,
island based model has been proposed in this system. The aim of our study is to propose an approach for
breast cancer distinguishing between different classes of breast cancer. This approach is based on the
Wisconsin Diagnostic and Prognostic Breast Cancer and the classification of different types of breast
cancer datasets. The proposed system implements the island-based training method to be better accuracy
and less training time by using and analysing between two different migration topologies
Parallel and distributed genetic algorithm with multiple objectives to impro...khalil IBRAHIM
we argue that the timetabling problem reflects the problem of scheduling university courses, So you must specify the range of time periods and a group of instructors for a range of lectures to check a set of constraints and reduce the cost of other constraints ,this is the problem called NP-hard, it is a class of problems that are informally, it’s mean that necessary operations to solve the problem will increase exponentially and directly proportional to the size of the problem, The construction of timetable is the most complicated problem that was facing many universities, and increased by size of the university data and overlapping disciplines between colleges, and when a traditional algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, it also offers an opportunity to solve extremely high dimensional problems through distributed coevolution using a divide-and-conquer mechanism, Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search, by employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suitable for Universities environment and the constraints facing it when building timetable for lectures.
Optimization of Mechanical Design Problems Using Improved Differential Evolut...IDES Editor
Differential Evolution (DE) is a novel evolutionary
approach capable of handling non-differentiable, non-linear
and multi-modal objective functions. DE has been consistently
ranked as one of the best search algorithm for solving global
optimization problems in several case studies. This paper
presents an Improved Constraint Differential Evolution
(ICDE) algorithm for solving constrained optimization
problems. The proposed ICDE algorithm differs from
unconstrained DE algorithm only in the place of initialization,
selection of particles to the next generation and sorting the
final results. Also we implemented the new idea to five versions
of DE algorithm. The performance of ICDE algorithm is
validated on four mechanical engineering problems. The
experimental results show that the performance of ICDE
algorithm in terms of final objective function value, number
of function evaluations and convergence time.
Optimization of Mechanical Design Problems Using Improved Differential Evolut...IDES Editor
Differential Evolution (DE) is a novel evolutionary
approach capable of handling non-differentiable, non-linear
and multi-modal objective functions. DE has been consistently
ranked as one of the best search algorithm for solving global
optimization problems in several case studies. This paper
presents an Improved Constraint Differential Evolution
(ICDE) algorithm for solving constrained optimization
problems. The proposed ICDE algorithm differs from
unconstrained DE algorithm only in the place of initialization,
selection of particles to the next generation and sorting the
final results. Also we implemented the new idea to five versions
of DE algorithm. The performance of ICDE algorithm is
validated on four mechanical engineering problems. The
experimental results show that the performance of ICDE
algorithm in terms of final objective function value, number
of function evaluations and convergence time.
Differential Evolution (DE) is a renowned optimization stratagem that can easily solve nonlinear and comprehensive problems. DE is a well known and uncomplicated population based probabilistic approach for comprehensive optimization. It has apparently outperformed a number of Evolutionary Algorithms and further search heuristics in the vein of Particle Swarm Optimization at what time of testing over both yardstick and actual world problems. Nevertheless, DE, like other probabilistic optimization algorithms, from time to time exhibits precipitate convergence and stagnates at suboptimal position. In order to stay away from stagnation behavior while maintaining an excellent convergence speed, an innovative search strategy is introduced, named memetic search in DE. In the planned strategy, positions update equation customized as per a memetic search stratagem. In this strategy a better solution participates more times in the position modernize procedure. The position update equation is inspired from the memetic search in artificial bee colony algorithm. The proposed strategy is named as Memetic Search in Differential Evolution (MSDE). To prove efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real world optimization problems. A comparative analysis has also been carried out among proposed MSDE and original DE. Results show that the anticipated algorithm go one better than the basic DE and its recent deviations in a good number of the experiments.
EMBC'13 Poster Presentation on "A Bio-Inspired Cooperative Algorithm for Dist...Md Kafiul Islam
In this paper we propose an algorithm for distributed
optimization in mobile nodes. Compared with many
published works, an important consideration here is that the
nodes do not know the cost function beforehand. Instead of
decision-making based on linear combination of the neighbor
estimates, the proposed algorithm relies on information-rich
nodes that are iteratively identified. To quickly identify the
information rich node, the algorithm adopts a larger step size
during the initial iterations. The proposed algorithm can be used
in many different applications, such as distributed odor source
localization and mobile robots. We present simulation results to
show the performance of our proposed algorithm
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...IOSR Journals
Abstract: Path planning and navigation is essential for an autonomous robot which can move avoiding the
static obstacles in a real world and to reach the specific target. Optimizing path for the robot movement gives
the optimal distance from the source to the target and save precious time as well. With the development of
various evolutionary algorithms, the differential evolution is taking the pace in comparison to genetic algorithm.
Differential evolution has been deployed quite successfully for solving global optimization problem. Differential
evolution is a very simple yet powerful metaheuristics type problem solving method. In this paper we are
proposing a Differential Evolution based path navigation algorithm for mobile path navigation and analyze its
efficiency with other developed approaches. The proposed algorithm optimized the robot path and navigates the
robot to the proper target efficiently.
Performance Comparision of Machine Learning AlgorithmsDinusha Dilanka
In this paper Compare the performance of two
classification algorithm. I t is useful to differentiate
algorithms based on computational performance rather
than classification accuracy alone. As although
classification accuracy between the algorithms is similar,
computational performance can differ significantly and it
can affect to the final results. So the objective of this paper
is to perform a comparative analysis of two machine
learning algorithms namely, K Nearest neighbor,
classification and Logistic Regression. In this paper it
was considered a large dataset of 7981 data points and 112
features. Then the performance of the above mentioned
machine learning algorithms are examined. In this paper
the processing time and accuracy of the different machine
learning techniques are being estimated by considering the
collected data set, over a 60% for train and remaining
40% for testing. The paper is organized as follows. In
Section I, introduction and background analysis of the
research is included and in section II, problem statement.
In Section III, our application and data analyze Process,
the testing environment, and the Methodology of our
analysis are being described briefly. Section IV comprises
the results of two algorithms. Finally, the paper concludes
with a discussion of future directions for research by
eliminating the problems existing with the current
research methodology.
Optimal rule set generation using pso algorithmcsandit
Classification and Prediction is an important resea
rch area of data mining. Construction of
classifier model for any decision system is an impo
rtant job for many data mining applications.
The objective of developing such a classifier is to
classify unlabeled dataset into classes. Here
we have applied a discrete Particle Swarm Optimizat
ion (PSO) algorithm for selecting optimal
classification rule sets from huge number of rules
possibly exist in a dataset. In the proposed
DPSO algorithm, decision matrix approach was used f
or generation of initial possible
classification rules from a dataset. Then the propo
sed algorithm discovers important or
significant rules from all possible classification
rules without sacrificing predictive accuracy.
The proposed algorithm deals with discrete valued d
ata, and its initial population of candidate
solutions contains particles of different sizes. Th
e experiment has been done on the task of
optimal rule selection in the data sets collected f
rom UCI repository. Experimental results show
that the proposed algorithm can automatically evolv
e on average the small number of
conditions per rule and a few rules per rule set, a
nd achieved better classification performance
of predictive accuracy for few classes.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...IJERA Editor
Cost estimating at schematic design stage as the basis of project evaluation, engineering design, and cost
management, plays an important role in project decision under a limited definition of scope and constraints in
available information and time, and the presence of uncertainties. The purpose of this study is to compare the
performance of cost estimation models of two different hybrid artificial intelligence approaches: regression
analysis-adaptive neuro fuzzy inference system (RANFIS) and case based reasoning-genetic algorithm (CBRGA)
techniques. The models were developed based on the same 50 low-cost apartment project datasets in
Indonesia. Tested on another five testing data, the models were proven to perform very well in term of accuracy.
A CBR-GA model was found to be the best performer but suffered from disadvantage of needing 15 cost drivers
if compared to only 4 cost drivers required by RANFIS for on-par performance.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Democratizing Fuzzing at Scale by Abhishek Aryaabh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Computer Science & Information Technology (CS & IT)
machine may produce different, possibly better solutions for many problems, especially complex
ones (for example, in engineering). Separating individuals spatially from each other slows down
the information flow between them, which may have both desired and undesired effects. A slower
information flow may temporarily stop the best solution from dominating the population and
allow different building blocks or solutions to be discovered and later confronted, which is
important in the context of engineering design and creativity. On the other hand, it can prevent
successful mixing, which could otherwise lead to constructing a novel solution.
In this paper, a recent optimization algorithm, DE with an island model, is applied to a feed-forward
neural network to improve the network's learning mechanism. The island-based model works by
running multiple algorithm instances and sharing their results at regular intervals, which improves
the overall performance of the algorithm. This paper proposes the migration-based differential
evolution algorithm for the classification of medical diagnoses.
2. DIFFERENTIAL EVOLUTION ALGORITHM
Having developed an ANN-based process model, a DE algorithm is used to optimize the N-
dimensional input space of the ANN model. Conventionally, various deterministic gradient-based
methods are used for performing optimization of the phenomenological models. Most of these
methods require that the objective function should simultaneously satisfy the smoothness,
continuity, and differentiability criteria. Although the nonlinear relationships approximated by an
ANN model can be expressed in the form of generic closed-form expressions, the objective
function(s) derived thereby cannot be guaranteed to satisfy the smoothness criteria. Thus, the
gradient-based methods cannot be efficiently used for optimizing the input space of an ANN
model and, therefore, it becomes necessary to explore alternative optimization formalisms, which
are lenient towards the form of the objective function.
In recent years, Differential Evolution (DE), a member of the stochastic optimization
formalisms, has been used with great success in solving problems involving very large search
spaces. DE was originally developed as a genetic model mimicking population evolution in
natural systems. Specifically, DE, like the genetic algorithm (GA), enforces the
"survival-of-the-fittest" and "genetic propagation of characteristics" principles of biological
evolution when searching the solution space of an optimization problem. The principal features
of DE are: (i) it requires only scalar objective-function values, not first- and/or second-order
derivatives; (ii) it can handle nonlinear and noisy objective functions; (iii) it performs a global
search and is thus more likely to arrive at or near the global optimum; and (iv) it imposes no
pre-conditions, such as smoothness, differentiability, or continuity, on the form of the objective
function.
Differential Evolution (DE), an improved variant of the GA, is an exceptionally simple evolution
strategy that is significantly faster and more robust at numerical optimization and is more likely
to find a function's true global optimum. Unlike the simple GA, which uses binary coding to
represent problem parameters, DE uses real coding of floating-point numbers. Its mutation
operator is vector addition instead of the bit-wise flipping used in GA, and DE uses non-uniform
crossover and tournament selection operators to create new solution vectors. Among DE's
advantages are its simple structure, ease of use, speed, and robustness. It can be used for
optimizing functions with real variables and many local optima.
This paper demonstrates a successful application of DE with an island model. As already stated,
DE is in principle similar to GA, so, as in GA, we use a population of points in our search for the
optimum. The population size is denoted by NP, and the dimension of each vector by D.
The main operation is the set of NP competitions carried out to decide the next generation. To
start with, we have a population of NP vectors within the range of the objective function. We
select one of these NP vectors as the target vector, then randomly select two other vectors from
the population and find the difference between them (vector subtraction). This difference is
multiplied by a factor F (specified at the start) and added to a third randomly selected vector;
the result is called the noisy random vector. Subsequently, crossover is performed between the
target vector and the noisy random vector to produce the trial vector. Then, a competition
between the trial vector and the target vector is held, and the winner takes the target's place in
the population. The same procedure is carried out NP times to decide the next generation of
vectors, and the sequence is continued until some convergence criterion is met. This summarizes
the basic procedure of differential evolution; its details are described below.
Steps performed in DE
Assume that the objective function has D dimensions and is to be optimized. The weighting
constant F and the crossover constant CR are specified in advance.
Step 1. Generate NP random vectors as the initial population: generate (NP×D) random numbers
uniformly between 0 and 1 and map them linearly over the range of the function. From these
(NP×D) numbers, form NP random vectors, each of dimension D.
Step 2. Choose a target vector from the population of size NP: first generate a random number
between 0 and 1. From the value of the random number decide which population member is to be
selected as the target vector (Xi) (a linear mapping rule can be used).
Step 3. Choose two vectors from the population at random and find the weighted difference:
generate two random numbers and decide which two population members are to be selected
(Xa, Xb). Find the vector difference between the two vectors (Xa - Xb) and multiply it by F to
obtain the weighted difference: F (Xa - Xb).
Step 4. Find the noisy random vector: generate a random number. Choose the third random vector
from the population (Xc). Add this vector to the weighted difference to obtain the noisy random
vector (X’c).
Step 5. Perform the crossover between Xi and X’c to find Xt, the trial vector: generate D random
numbers. For each of the D dimensions, if the random number is greater than CR, copy from Xi
into the trial vector; if the random number is less than CR, copy the value from X’c into the trial
vector.
Step 6. Calculate the cost of the trial vector and the target vector: for a minimization problem,
calculate the function value directly and this is the cost. For a maximization problem, transform
the objective function f(x) using the rule F(x) = 1 / [1 + f(x)] and calculate the value of the cost.
Alternatively, directly calculate the value of f(x) and this yields the profit. In case the cost is
calculated, the vector that yields the lower cost replaces the population member in the initial
population. In case the profit is calculated, the vector with the greater profit replaces the
population member in the initial population.
Steps 1–6 are continued until some stopping criterion is met, which may be of two kinds. One is
a convergence criterion stating that the change in the minimum or maximum between two
successive generations should be less than some specified value; the other is an upper bound on
the number of generations. The stopping criterion may also be a combination of the two. Either
way, once it is met, the computations are terminated. Choosing the DE key parameters NP, F,
and CR is seldom difficult, and some general guidelines are available. Normally, NP ought to be
about 5 to 10 times the number of parameters in a vector. F typically lies in the range 0.4 to 1.0;
initially F = 0.5 can be tried, and F and/or NP increased if the population converges
prematurely. A good first choice for CR is 0.1, but in general CR should be as large as possible
(Price and Storn, 1997). DE has already been successfully applied to several complex problems
and is now being recognized as a source of accurate and faster optimization.
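As a concrete illustration, Steps 1–6 above can be sketched in Python. This is a minimal, self-contained sketch rather than the paper's implementation; the function name `de_minimize` and the sphere objective used in the demo are our own choices, and the parameters follow the guidelines just given (NP about 10 times D, F = 0.5, CR = 0.1):

```python
import random

def de_minimize(f, D, bounds, NP=50, F=0.5, CR=0.1, generations=200):
    """Minimal DE sketch following Steps 1-6 of the text (minimization)."""
    lo, hi = bounds
    # Step 1: NP random vectors mapped over the range of the function.
    pop = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(NP)]
    costs = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(NP):  # Step 2: each population member acts as target Xi.
            target = pop[i]
            # Steps 3-4: weighted difference F*(Xa - Xb) added to a third vector Xc.
            a, b, c = random.sample([j for j in range(NP) if j != i], 3)
            noisy = [pop[c][d] + F * (pop[a][d] - pop[b][d]) for d in range(D)]
            # Step 5: crossover between Xi and the noisy vector yields the trial vector.
            trial = [noisy[d] if random.random() < CR else target[d]
                     for d in range(D)]
            # Step 6: the lower-cost of trial and target survives into the population.
            trial_cost = f(trial)
            if trial_cost < costs[i]:
                pop[i], costs[i] = trial, trial_cost
    best = min(range(NP), key=lambda i: costs[i])
    return pop[best], costs[best]

if __name__ == "__main__":
    random.seed(1)
    sphere = lambda x: sum(v * v for v in x)  # illustrative objective
    x_best, f_best = de_minimize(sphere, D=5, bounds=(-5.0, 5.0))
    print(f_best)
```

The stopping criterion here is a fixed generation count; a convergence-based criterion, or a combination of both, could be substituted as described above.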
3. ISLAND MODEL
The main difference between the island model and the single-population model is the separation
of individuals into islands. In contrast to the master-slave model, the
communication-to-computation ratio of the island model is low, owing to the low communication
frequency between the islands. Also, separating individuals from each other results in a
qualitative change in the behaviour of the algorithm.
In the island model, each island executes a standard sequential evolutionary algorithm.
Communication between sub-populations is achieved by a migration process: some randomly
selected individuals (the migration size) migrate from one island to another after every certain
number of generations (the migration interval), according to a communication topology (the
migration topology). The two basic and most sensitive parameters of the island model are the
migration size, which indicates the number of migrating individuals and controls the quantitative
aspect of migration, and the migration interval, which denotes the frequency of migration.
Although different aspects of migration size and interval have been studied in the past, we are
unaware of any work directly studying the influence of these parameters on the behaviour of
island-model-based differential evolution, though [15] presents a similar study on a set of 8
standard functions.
3.1. Migration Topology
The migration topology describes which islands send individuals to which other islands. Many
topologies exist; this system investigates the fully connected topology. In this paper, simulations
were run with setups of five islands.
3.2. Migration Policy
A migration policy consists of two parts: first, the selection of the individuals to be migrated to
another island; second, the choice of which individuals are replaced by the newly arrived ones.
Four migration policies are proposed in this system:
• Select the best individuals, replace the worst individuals.
• Select random individuals, replace the worst individuals.
• Select the best individuals, replace random individuals.
• Select random individuals, replace random individuals.
This system experiments with all of the above migration policies and compares their results.
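The four policies differ only in how the emigrant and replaced indices are chosen, which can be sketched as follows. The function name `migration_indices` and the policy strings are illustrative assumptions, not the paper's code.

```python
import random

def migration_indices(policy, src_fitness, dst_fitness, k):
    """Pick which individuals emigrate from the source island and which
    are overwritten on the destination island, for each of the four
    policies above. Fitness is 'higher is better'; k is the migration
    size. Returns (emigrant_indices, replaced_indices).
    """
    def best(fit):    # indices of the k fittest individuals
        return sorted(range(len(fit)), key=fit.__getitem__)[-k:]
    def worst(fit):   # indices of the k least fit individuals
        return sorted(range(len(fit)), key=fit.__getitem__)[:k]
    def rand(fit):
        return random.sample(range(len(fit)), k)
    select = {"best": best, "random": rand}
    replace = {"worst": worst, "random": rand}
    sel, rep = policy.split("-")          # e.g. "best-worst"
    return select[sel](src_fitness), replace[rep](dst_fitness)
```

For example, `migration_indices("best-worst", src, dst, k)` sends the source island's k best individuals to overwrite the destination island's k worst.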
3.3. Migration Interval
In order to distribute information about good individuals among the islands, migration has to take
place. This can be done either synchronously, every n-th generation, or asynchronously, meaning
that migration takes place at non-periodic times. It is commonly accepted that a more
frequent migration leads to a higher selection pressure and therefore faster convergence, but, as
always, a higher selection pressure brings susceptibility to getting stuck in local optima. In
this system, various migration intervals are tried to find the best setting for the neural
network training.
3.4. Migration Size
A further important factor is the number of individuals that are exchanged. The migration size
has to be adapted to the size of an island's subpopulation: when only a very small percentage
migrates, the influence of the exchange is negligible, but if too many individuals are migrated,
the newcomers take over the existing population, decreasing the global diversity. In this system,
the migration sizes were chosen to be approximately 10% of the size of a subpopulation, as
suggested in [19].
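Putting topology, interval, size and the best-worst policy together, a synchronous island-model loop can be sketched roughly as follows. The helper `evolve` stands in for one DE generation and, like the other names here, is an assumption for illustration.

```python
def run_island_model(islands, objective, evolve, generations,
                     migration_interval, migration_rate=0.10):
    """Synchronous island model with a fully connected topology and the
    best-worst policy: every `migration_interval` generations, each island
    sends its best `migration_rate` fraction of individuals (about 10% of
    a subpopulation) to every other island, overwriting the worst there.
    `islands` is a list of populations; `objective` is minimized.
    """
    for gen in range(1, generations + 1):
        for pop in islands:
            evolve(pop)                       # one DE generation per island
        if gen % migration_interval == 0:
            k = max(1, int(migration_rate * len(islands[0])))
            # Snapshot migrants first so every island sends its own
            # pre-migration best, then replace each destination's worst.
            outgoing = [sorted(pop, key=objective)[:k] for pop in islands]
            for s, migrants in enumerate(outgoing):
                for d, dst in enumerate(islands):
                    if d == s:
                        continue
                    dst.sort(key=objective)              # best first
                    dst[-k:] = [m[:] for m in migrants]  # overwrite worst
    return islands
```

Migrants are copied rather than shared, so each island keeps an independent population after the exchange.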
4. THE PROPOSED MODEL
As shown in Figure 1, the training patterns of the medical datasets are used as input data.
Attributes are scaled to fall within a small specified range using min-max normalization. At the
start of the algorithm, the dataset is loaded from the database. Next, each chromosome (vector)
is initialized with random neural network weights. The fitness of each chromosome, which
defines how well it solves the problem at hand, is then evaluated: the chromosome's genes are
converted into a neural network, and the fitness is calculated for each individual. The mutation
operator produces the trial vector from the parent vector and three randomly selected vectors,
and crossover recombines the parent vector and the trial vector to produce offspring; through
mutation and crossover, some genes are modified, i.e., the weights are updated. The fitness of
the offspring is calculated and compared with the fitness of the parent vector; the chromosome
with the higher fitness survives, and the next generation begins. Individuals are then chosen
according to the migration policy, and migrated and replaced according to the migration
topology. Figure 1 presents the flow of the proposed system.
Figure 1. ANNs-MBDE algorithm training process
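The fitness evaluation described above, decoding a chromosome into feed-forward network weights and scoring it by training error, might be sketched like this. The chromosome layout and the function names are illustrative assumptions; the paper does not prescribe them.

```python
import math

SIG = lambda v: 1.0 / (1.0 + math.exp(-v))  # logistic activation

def decode(chromosome, n_in, n_hid, n_out):
    # Assumed chromosome layout: hidden weights, hidden biases,
    # output weights, output biases, flattened in that order.
    it = iter(chromosome)
    take = lambda n: [next(it) for _ in range(n)]
    w_h = [take(n_in) for _ in range(n_hid)]
    b_h = take(n_hid)
    w_o = [take(n_hid) for _ in range(n_out)]
    b_o = take(n_out)
    return w_h, b_h, w_o, b_o

def forward(net, x):
    w_h, b_h, w_o, b_o = net
    h = [SIG(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_h, b_h)]
    return [SIG(sum(w * hi for w, hi in zip(row, h)) + b)
            for row, b in zip(w_o, b_o)]

def fitness(chromosome, patterns, n_in, n_hid, n_out):
    # Fitness grows as the mean squared error over the training
    # patterns shrinks, so fitter chromosomes encode better weights.
    net = decode(chromosome, n_in, n_hid, n_out)
    mse = sum(sum((t - o) ** 2 for t, o in zip(target, forward(net, x)))
              for x, target in patterns) / len(patterns)
    return 1.0 / (1.0 + mse)
```

With this encoding, DE's mutation and crossover act directly on the flat weight vector, and the selection step of Section 2 compares the resulting fitness values.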
5. DESCRIPTION OF DATASETS
This paper uses four medical datasets, namely Breast Cancer, Heart, Liver and Pima Indian
Diabetes, from the UCI machine learning repository [18]. The size and number of attributes
differ for each dataset; the size is the number of records used for training.
5.1. Breast Cancer
The cancer dataset requires the decision maker to correctly diagnose breast lumps as either benign
or malignant based on data from automated microscopic examination of cells collected by needle
aspiration. The dataset includes 9 inputs and 2 outputs. A total of 683 instances are available in
the breast cancer dataset; 456 instances are used for training and 227 for testing.
5.2. Heart
The network architecture used for the Heart dataset has 13 continuous attributes and 2 classes.
The attributes are age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting
blood sugar > 120 mg/dl, resting electrocardiographic results, maximum heart rate achieved,
exercise induced angina, oldpeak, the slope of the peak exercise ST segment, number of major
vessels and thal. The classes are absent (1) and present (2). A total of 303 instances are
available in the heart disease dataset; 201 instances are used for training and 100 for testing.
5.3. Liver
In this paper, we use the Liver dataset from the UCI machine learning repository. There are 345
instances, 6 continuous attributes and 2 classes. The attributes are mean corpuscular volume,
alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, gamma-glutamyl
transpeptidase and the number of half-pint equivalents of alcoholic beverages drunk per day.
The classes are absent (1) and present (2). The first 5 attributes are all blood tests thought to be
sensitive to liver disorders that might arise from excessive alcohol consumption. Each record
corresponds to a single male individual.
5.4. Pima Indian Diabetes Database (PIDD)
There are 768 instances, 8 continuous attributes and 2 classes. PIDD includes the following
attributes (attributes 1-8 as input and the last attribute as the target variable): number of times
pregnant, plasma glucose concentration at 2 hours in an oral glucose tolerance test, diastolic
blood pressure (mm Hg), triceps skin fold thickness (mm), 2-hour serum insulin (mu U/ml),
body mass index (weight in kg / (height in m)^2), diabetes pedigree function and age (years).
The class to be predicted is whether the patient tested positive or negative for diabetes.
6. EXPERIMENTAL RESULT
The Java programming language, a platform-independent and general-purpose development
language, is used to implement the proposed system. First, the medical datasets are accessed;
then normalization is performed as a pre-processing step for each medical dataset. Experiments
are performed with four migration policies. Currently the system runs the island model with the
fully connected topology and the four migration policies, using five islands. The island model
uses the iteration count as the migration interval, and one-third of the old population is used to
migrate and replace. Four medical datasets are used
from the UCI, namely Breast Cancer, Heart, Liver and Pima Indian Diabetes. The results for each
dataset are compared and analysed based on the convergence rate and classification performance.
6.1. Parameters of Datasets
Table 1 below shows the parameters of the four datasets.
Table 1. Datasets Information

Parameter       Breast Cancer   Heart   Liver   Pima Diabetes
Train Data      456             201     231     509
Test Data       227             100     114     259
Output Neuron   2               2       2       2
6.2. Data Normalization
Data normalization is considered the most important pre-processing step when using neural
networks. To improve the performance of multilayer neural networks, it is better to normalize
the input data to the interval [0, 1]. To transform the data into numerical form suitable as inputs
to the neural network, scaling or normalization is performed for each attribute: the numerical
attributes are scaled to the range 0 to 1. Many types of normalization are found in the literature;
the new values obtained after normalization follow this equation:
x' = (x - min) / (max - min)                                                             (1)
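Applied column-wise, this min-max scaling can be sketched as follows (illustrative code, not the paper's implementation; the constant-column guard is an added assumption):

```python
def min_max_normalize(columns):
    """Min-max scaling of each attribute column to [0, 1].

    A column whose values are all equal is mapped to 0.0 to avoid
    division by zero.
    """
    normalized = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = hi - lo
        normalized.append([(v - lo) / span if span else 0.0 for v in col])
    return normalized
```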
6.3. Classifier Accuracy
Estimating classifier accuracy is important because it indicates how accurately a given classifier
will label future data, i.e., data on which the classifier has not been trained. Accuracy estimates
also help in comparing different classifiers. The following classification setting is used to train
and test the classifier.
Given: A collection of labeled records (training set). Each record contains a set of features
(attributes), and the true class (label).
Find: A model for the class as a function of the values of the features.
Goal: Previously unseen records should be assigned a class as accurately as possible. A test set
is used to determine the accuracy of the model. Usually, the given dataset is divided into training
and test sets, with the training set used to build the model and the test set used to validate it. The
sensitivity and specificity measures can be used to determine the accuracy. Precision may also be
used to assess the percentage of samples labeled as, for example, "cancer" that actually are
"cancer" samples. These measures are defined as
sensitivity = t_pos / pos                                                                (2)
specificity = t_neg / neg                                                                (3)
precision = t_pos / (t_pos + f_pos)                                                      (4)
where
t_pos = the number of true positives ("medical dataset class" samples that were correctly
classified as such),
pos = the number of positive ("medical dataset class") samples,
t_neg = the number of true negatives ("not medical dataset class" samples that were correctly
classified as such),
neg = the number of negative samples, and
f_pos = the number of false positives ("not medical dataset class" samples that were incorrectly
labeled as positive).
accuracy = sensitivity × pos / (pos + neg) + specificity × neg / (pos + neg)             (5)
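Computed from label lists, the four measures might be implemented as follows, an illustrative sketch using the standard definitions consistent with the variables above; the zero-division guard for precision is an added assumption:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Sensitivity, specificity, precision and accuracy from true and
    predicted label lists, using the counts defined above."""
    t_pos = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    t_neg = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    f_pos = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    pos = sum(1 for t in y_true if t == positive)
    neg = len(y_true) - pos
    sensitivity = t_pos / pos
    specificity = t_neg / neg
    precision = t_pos / (t_pos + f_pos) if (t_pos + f_pos) else 0.0
    # Accuracy as the sensitivity/specificity-weighted combination,
    # which equals (t_pos + t_neg) / (pos + neg).
    accuracy = sensitivity * pos / (pos + neg) + specificity * neg / (pos + neg)
    return sensitivity, specificity, precision, accuracy
```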
6.4. Accuracy Comparisons for UCI Medical Datasets
Four medical datasets are tested with the MBDE neural network classifier. The tables below
show the training and testing accuracy of the MBDE neural network on the medical datasets.
Table 2. Results of classification accuracy on breast cancer dataset

Migration Policies   Error Convergence   Convergence Time (sec)   Training Accuracy (%)   Testing Accuracy (%)
Best-Worst           0.0011              9                        97.82                   100
Best-Random          0.0021              10                       99.31                   99.43
Random-Worst         0.0037              13                       97.35                   98.46
Random-Random        0.0043              13                       98.76                   97.32
Table 3. Results of classification accuracy on heart dataset

Migration Policies   Error Convergence   Convergence Time (sec)   Training Accuracy (%)   Testing Accuracy (%)
Best-Worst           0.0021              11                       97.32                   99.89
Best-Random          0.0107              12                       98.09                   99.32
Random-Worst         0.0130              13                       99.14                   98.72
Random-Random        0.0160              14                       99.04                   98.04
Table 4. Results of classification accuracy on liver dataset

Migration Policies   Error Convergence   Convergence Time (sec)   Training Accuracy (%)   Testing Accuracy (%)
Best-Worst           0.0011              13                       99.72                   100
Best-Random          0.0167              11                       98.62                   98.45
Random-Worst         0.0203              11                       98.14                   97.68
Random-Random        0.0264              12                       97.68                   97.36
Table 5. Results of classification accuracy on PIDD dataset

Migration Policies   Error Convergence   Convergence Time (sec)   Training Accuracy (%)   Testing Accuracy (%)
Best-Worst           0.0035              12                       98.23                   99.01
Best-Random          0.0159              13                       98.97                   98.34
Random-Worst         0.0159              13                       98.35                   97.78
Random-Random        0.0178              14                       97.65                   97.78
The MBDE is successfully applied to the neural network and has been tested using the Breast
Cancer, Heart, Liver and Pima Indian Diabetes datasets.
7. ACCURACY COMPARISON ON MIGRATION POLICIES
This analysis is carried out to compare the results of the migration policies. To do this, the
learning patterns for the proposed system are compared using all four medical datasets. The
comparative correct classification percentages for all datasets are shown in Figure 2.
Figure 2. Comparison of correct classification percentages of migration policies
In this paper, we have introduced various migration policies with the fully connected topology
and compared their results. Figure 2 shows the accuracy comparison of the four migration
policies on the medical datasets. The proposed system achieves higher accuracy on these
datasets and also reduces the computing time. For all medical datasets, the experiments show
that the best-worst migration policy gives the best results in terms of convergence time and
correct classification percentage; the proposed algorithm converges in a short time with a high
correct classification percentage.
8. CONCLUSIONS
The proposed MBDE algorithm is successfully applied to neural networks and has been tested
using the Breast Cancer, Heart, Liver and Pima Indian Diabetes datasets. The analysis is done
by comparing the results for each dataset. The results produced by ANNs are not optimal before
using the migration based differential evolution algorithm; this paper therefore improves the
performance of ANNs by using the proposed MBDE algorithm. The main difficulty of neural
network training is adjusting the weights in order to reduce the error rate. This system presents
a neural network training algorithm using migration based differential evolution. Exploiting the
global search power of the differential evolution algorithm in conjunction with the island model
boosts the training performance, and the system converges quickly to a low mean squared error.
The island model encourages diversity among the individuals across islands, which increases
search capability, and through migration the islands can share their best experiences with each
other. By using the island model rather than a single DE, the system gains the advantages of
parallel problem solving and information sharing, which lead to a faster global search. This
system improves classification performance on the medical datasets with the feed forward
neural network and reduces the computing time.
REFERENCES
[1] R. Storn and K. Price, “Differential evolution—A simple and efficient heuristic for global
optimization over continuous spaces,” J. Glob. Optim., vol. 11, no. 4, pp. 341–359, Dec. (1997).
[2] K. Price, “An introduction to differential evolution,” in New Ideas in Optimization, D. Corne, M.
Dorigo, and F. Glover, Eds. London, U.K.: McGraw-Hill, pp. 79–108 (1999).
[3] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA:
Addison-Wesley, (1989).
[4] Z. Michalewicz, Genetic Algorithms+Data Structures=Evolution Programs. Berlin, Germany:
Springer-Verlag, (1998).
[5] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann,
(2001).
[6] K. Socha and M. Dorigo, “Ant colony optimization for continuous domains,” Eur. J. Oper. Res., vol.
185, no. 3, pp. 1155–1173, Mar. (2008).
[7] D. T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, and M. Zaidi, “The bees algorithm—A
novel tool for complex optimization problems,” in IPROMS 2006. Oxford, U.K.: Elsevier, (2006).
[8] H. G. Beyer and H. P. Schwefel, “Evolution strategies: A comprehensive introduction,” Nat.
Comput., vol. 1, no. 1, pp. 3–52, May (2002).
[9] K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global
Optimization. Berlin, Germany: Springer Verlag, (2005).
[10] R. L. Becerra and C. A. Coello Coello, “Cultured differential evolution for constrained optimization,”
Comput. Methods Appl. Mech. Eng., vol. 195, no. 33–36, pp. 4303–4322, Jul. 1, (2006).
[11] A. Slowik and M. Bialko, “Adaptive selection of control parameters in differential evolution
algorithms,” in Computational Intelligence: Methods and Applications, L. Zadeh, L. Rutkowski, R.
Tadeusiewicz, and J. Zurada, Eds. Warsaw, Poland: EXIT, pp. 244–253, (2008).
[12] J. Liu and J. Lampinen, “A fuzzy adaptive differential evolution algorithm,” Soft Comput.—A Fusion
of Foundations, Methodologies and Applications, vol. 9, no. 6, pp. 448–462, Jun. (2005).
[13] M. M. Ali and A. Torn, “Population set-based global optimization algorithms: Some modifications
and numerical studies,” Comput. Oper. Res., vol. 31, no. 10, pp. 1703–1725, Sep. (2004).
[14] E. Mezura-Montes, C. A. Coello Coello, J. Velázquez-Reyes, and L. Munoz-Dávila, “Multiple trial
vectors in differential evolution for engineering design,” Eng. Optim., vol. 39, no. 5, pp. 567–589, Jul.
(2007).
[15] Z. Skolicki and K. De Jong, “The influence of migration sizes and intervals on island models,” in
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2005), ACM Press,
(2005).
[16] R. Storn, “On the usage of differential evolution for function optimization,” in Proc. of the 1996
Biennial Conference of the North American Fuzzy Information processing society- NAFIPS, Edited
by: M. H. Smith, M. A. Lee, J. Keller and J. Yen, June 19-22, Berkeley, CA, USA, IEEE Press, New
York, pp 519-523, (1996).
[17] F. Amato, A. Lopez, E. Maria, P. Vanhara, A. Hampl, “Artificial neural networks in medical
diagnosis,” J. Appl. Biomed., vol. 11, pp. 47–58, DOI 10.2478/v10136-012-0031-x, ISSN 1214-0287,
(2013).
[18] https://archive.ics.uci.edu/ml/datasets.html
[19] Z. Skolicki and K. De Jong, “The influence of migration sizes and intervals on island models,” in
GECCO’05: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pp.
1295–1302, New York, NY, USA, ACM, (2005).