In this paper, benchmark functions are used to evaluate the particle swarm optimization (PSO) algorithm. The functions are two-dimensional but were selected with different difficulty levels and different models. To demonstrate the capability of PSO, it is compared with a genetic algorithm (GA); the two algorithms are compared in terms of objective-function values and standard deviation. Multiple runs were performed to obtain convincing results, with parameters chosen appropriately, and the experiments were implemented in Matlab. The results suggest that PSO can solve engineering problems of different dimensions and outperforms GA in terms of accuracy and speed of convergence.
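A minimal sketch of the kind of PSO evaluation described above, minimizing the two-dimensional sphere benchmark. The inertia weight and acceleration coefficients are standard textbook values, not parameters reported by the paper:

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()               # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

best, best_val = pso(sphere)
print(best_val)  # close to 0
```

Benchmarking against a GA, as in the paper, would repeat such runs for both algorithms and compare the mean and standard deviation of the final objective values.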
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY (IAEME Publication)
This research paper discusses the study and analysis of various techniques in the biometric domain. A close look at biometric enhancement techniques and their limitations is presented. This survey enables researchers to understand the research contributions in the area of DCT- and DFT-based recognition and security, and to locate some crucial limitations of these notable works. The paper summarizes the research papers relevant to the topic mentioned above. Biometric recognition and security is an important subject of research in the area of image processing.
Applying Genetic Algorithms to Information Retrieval Using Vector Space Model (IJCSEA Journal)
Genetic algorithms are commonly used in information retrieval (IR) systems to enhance the retrieval process and increase its efficiency, so that users can find exactly what they want among the growing volume of available information. Adaptive genetic algorithms help retrieve the information the user needs accurately, increasing the number of relevant files retrieved while excluding irrelevant ones. In this study, the researcher explored the problems embedded in this process and attempted solutions such as the choice of mutation probability and fitness function, using the Cranfield English corpus test collection compiled by Cyril Cleverdon and used at the University of Cranfield in 1960, containing 1400 documents and 225 queries for simulation purposes. The researcher also used cosine similarity and the Jaccard coefficient to compute similarity between queries and documents, together with two proposed adaptive fitness functions, mutation operators, and adaptive crossover. The effectiveness of the results was evaluated using the measures of precision and recall. Finally, the study concluded that adaptive genetic algorithms can yield several improvements.
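The two similarity measures named in the abstract can be sketched over toy term-weight vectors. The vocabulary and weights here are illustrative only, not from the paper's collection:

```python
import numpy as np

def cosine_sim(q, d):
    """Cosine of the angle between query and document term vectors."""
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0

def jaccard_sim(q, d):
    """Jaccard coefficient on the sets of terms with nonzero weight."""
    qs, ds = set(np.nonzero(q)[0]), set(np.nonzero(d)[0])
    union = qs | ds
    return len(qs & ds) / len(union) if union else 0.0

# Toy term-weight vectors over a 4-term vocabulary (hypothetical values).
query = np.array([1.0, 1.0, 0.0, 0.0])
doc   = np.array([1.0, 0.0, 1.0, 0.0])
print(cosine_sim(query, doc))   # 0.5
print(jaccard_sim(query, doc))  # 1/3
```

In a GA-based IR setup, such a similarity score between a candidate query and the relevant documents would typically serve as (part of) the fitness function.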
Clustering technology has been applied in numerous applications. It can enhance the performance of information retrieval systems, and it can group Internet users to help improve the click-through rate of on-line advertising, among other uses. Over the past few decades, a great many data clustering algorithms have been developed, including K-Means, DBSCAN, Bi-Clustering and Spectral clustering. In recent years, two new data clustering algorithms have been proposed: affinity propagation (AP, 2007) and density peak based clustering (DP, 2014). In this work, we empirically compare the performance of these two latest data clustering algorithms with the state of the art, using 6 external and 2 internal clustering validation metrics. Our experimental results on 16 public datasets show that the two latest clustering algorithms, AP and DP, do not always outperform DBSCAN. Therefore, to find the best clustering algorithm for a specific dataset, all of AP, DP and DBSCAN should be considered. Moreover, we find that the comparison of different clustering algorithms depends closely on the clustering evaluation metrics adopted. For instance, when using the Silhouette clustering validation metric, the overall performance of K-Means is as good as that of AP and DP. This work provides a useful reference for researchers and engineers who need to select appropriate clustering algorithms for their specific applications.
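The Silhouette metric mentioned above is an internal validation measure and is simple to compute directly. A minimal implementation (the dataset is a synthetic two-blob example, not one of the paper's 16 datasets):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette: s = (b - a) / max(a, b), where a is the mean
    distance to points in the same cluster and b the smallest mean
    distance to any other cluster."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        if not same.any():
            scores.append(0.0)  # singleton cluster contributes 0
            continue
        a = D[i][same].mean()
        b = min(D[i][labels == c].mean() for c in set(labels) - {labels[i]})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated blobs: silhouette should be close to 1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(silhouette(X, labels))  # close to 1
```

Comparing algorithms under several such metrics, as the paper does, guards against conclusions that hold only for one validation criterion.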
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH (cscpconf)
Due to the intangible nature of software, accurate and reliable software effort estimation is a challenge in the software industry. Very accurate estimates of software development effort cannot be expected because of the inherent uncertainty in software development projects and the complex and dynamic interaction of factors that impact software development. Heterogeneity exists in software engineering datasets because the data come from diverse sources. It can be reduced by defining relationships between data values and classifying them into different clusters. This study focuses on how combining clustering and regression techniques can reduce the loss of predictive efficiency caused by heterogeneity of the data. A clustered approach creates subsets of the data with a degree of homogeneity that enhances prediction accuracy. The study also observed that ridge regression performs better than the other regression techniques used in the analysis.
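The cluster-then-regress idea can be sketched as follows: partition the data, fit a ridge model per cluster, and predict with the model of the nearest cluster. The dataset here is synthetic (two groups with different linear trends), standing in for heterogeneous effort data:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):  # farthest-point init avoids empty clusters here
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge: w = (X'X + alpha*I)^-1 X'y, bias column included."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + alpha * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Heterogeneous data: two groups with different linear trends.
rng = np.random.default_rng(1)
X1 = rng.uniform(0, 1, (40, 1)); y1 = 3 * X1[:, 0] + 1
X2 = rng.uniform(5, 6, (40, 1)); y2 = -2 * X2[:, 0] + 20
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

labels, centers = kmeans(X, k=2)
models = {j: ridge_fit(X[labels == j], y[labels == j], alpha=0.01)
          for j in range(2)}

# Predict a new point with the model of its nearest cluster.
x_new = np.array([[0.5]])
j = int(np.argmin(((x_new - centers) ** 2).sum(-1)))
print(ridge_predict(x_new, models[j]))  # near 3*0.5 + 1 = 2.5
```

A single global regression on this data would average the two conflicting trends; the per-cluster models recover each trend separately, which is the homogeneity benefit the study describes.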
With the development of database technology, the volume of data stored in databases increases rapidly, and much important information is hidden in these large amounts of data. If that information can be extracted, it can create a great deal of value for the organization. The question organizations are asking is how to extract this value, and the answer is data mining. Many technologies are available to data mining practitioners, including artificial neural networks, genetic algorithms, fuzzy logic and decision trees. Many practitioners are wary of neural networks because of their black-box nature, even though they have proven themselves in many situations. This paper gives an overview of artificial neural networks and examines their position as a preferred tool of data mining practitioners.
The pertinent single-attribute-based classifier for small datasets classific... (IJECEIAES)
Classifying a dataset with machine learning algorithms can be a big challenge when the target is a small dataset. The OneR classifier can be used in such cases due to its simplicity and efficiency. In this paper, we reveal the power of a single attribute by introducing the pertinent single-attribute-based heterogeneity-ratio classifier (SAB-HR), which uses a pertinent attribute to classify small datasets. SAB-HR applies a feature selection method based on the Heterogeneity-Ratio (H-Ratio) measure to identify the most homogeneous attribute among the attributes in the set. Our empirical results on 12 benchmark datasets from the UCI machine learning repository show that the SAB-HR classifier significantly outperforms the classical OneR classifier on small datasets. In addition, using the H-Ratio as the feature selection criterion for choosing the single attribute was more effective than traditional criteria such as Information Gain (IG) and Gain Ratio (GR).
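The baseline OneR classifier that SAB-HR is compared against is small enough to sketch in full: for each attribute, predict the majority class per attribute value, and keep the attribute with the fewest training errors. (The H-Ratio selection criterion itself is the paper's contribution and is not reproduced here.) The toy weather-style data is hypothetical:

```python
from collections import Counter, defaultdict

def one_r(rows, labels, n_attrs):
    """OneR: for each attribute build a value -> majority-class rule;
    return the attribute whose rule makes the fewest training errors."""
    best = None
    for a in range(n_attrs):
        rule = {}
        by_value = defaultdict(list)
        for row, y in zip(rows, labels):
            by_value[row[a]].append(y)
        errors = 0
        for v, ys in by_value.items():
            majority, count = Counter(ys).most_common(1)[0]
            rule[v] = majority
            errors += len(ys) - count  # minority instances are errors
        if best is None or errors < best[1]:
            best = (a, errors, rule)
    return best  # (attribute index, training errors, rule)

# Toy data: attribute 0 predicts the label perfectly, attribute 1 does not.
rows = [("sunny", "hot"), ("sunny", "cool"), ("rainy", "hot"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
attr, errs, rule = one_r(rows, labels, n_attrs=2)
print(attr, errs)  # 0 0
```

SAB-HR differs, per the abstract, in how the single attribute is chosen (H-Ratio rather than error count), not in the one-attribute prediction scheme itself.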
Particle Swarm Optimization based K-Prototype Clustering Algorithm (iosrjce)
Data Preparation and Reduction Technique in Intrusion Detection Systems: ANOV... (CSCJournals)
Intrusion detection systems play a central role in detecting anomalies and suspicious behavior in many organizational environments. The detection process involves collecting and analyzing real traffic data, which in heavily loaded networks is the most challenging aspect of designing an efficient IDS. Collected data should be prepared and reduced to improve classification accuracy and computational performance.
In this research, a proposed technique called ANOVA-PCA is applied to the NSL-KDD dataset, reducing its 41 features to 10. It is tested and evaluated with three supervised classifiers: k-nearest neighbor, decision tree, and random forest. Results are obtained using various performance measures and compared with other feature selection algorithms such as neighborhood component analysis (NCA) and ReliefF. The results show that the proposed method is simple, faster to compute than the others, and achieves a good classification accuracy of 98.9%.
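The two stages of an ANOVA-then-PCA reduction can be sketched directly: score each feature with the one-way ANOVA F statistic, keep the top-scoring features, then project with PCA. The synthetic data below (one informative feature out of four) stands in for NSL-KDD; the split into "keep 2, reduce to 1" is illustrative, not the paper's 41-to-10 configuration:

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F statistic per feature: between-class variance
    over within-class variance (larger = more class-separating)."""
    classes = np.unique(y)
    grand = X.mean(0)
    ssb = sum(len(X[y == c]) * (X[y == c].mean(0) - grand) ** 2 for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0) for c in classes)
    dfb, dfw = len(classes) - 1, len(X) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

def pca(X, n_components):
    """Project onto the top principal components via SVD of centred data."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 4))
X[:, 0] += 3 * y  # feature 0 separates the two classes

scores = anova_f(X, y)                 # ANOVA stage: rank features
keep = np.argsort(scores)[-2:]         # keep the 2 highest-F features
Z = pca(X[:, keep], n_components=1)    # PCA stage: reduce dimensionality
print(int(scores.argmax()))  # 0
```

The reduced representation `Z` would then feed the k-NN, decision-tree, or random-forest classifiers the abstract evaluates.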
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT... (IJDKP)
Many applications of automatic document classification require learning accurately with little training data. Semi-supervised classification uses labeled and unlabeled data for training. This technique has been shown to be effective in some cases; however, the use of unlabeled data is not always beneficial.
On the other hand, the emergence of web technologies has given rise to the collaborative development of ontologies. In this paper, we propose the use of ontologies to improve the accuracy and efficiency of semi-supervised document classification.
We used support vector machines, one of the most effective algorithms studied for text. Our algorithm enhances the performance of transductive support vector machines through the use of ontologies. We report experimental results applying our algorithm to three different datasets. Our experiments show an increase in accuracy of 4% on average, and up to 20%, in comparison with the traditional semi-supervised model.
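The semi-supervised idea of exploiting unlabeled data can be illustrated with a simple self-training loop using a nearest-centroid base learner. Note this is a stand-in for intuition only: it is not a transductive SVM and uses no ontology; the two-blob data is synthetic:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    d = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def self_train(X_l, y_l, X_u, rounds=5):
    """Each round: fit on the current labels, pseudo-label the unlabeled
    points with the model, and refit including the pseudo-labels."""
    X, y = X_l.copy(), y_l.copy()
    for _ in range(rounds):
        classes, cents = nearest_centroid_fit(X, y)
        y_u = nearest_centroid_predict(X_u, classes, cents)
        X, y = np.vstack([X_l, X_u]), np.concatenate([y_l, y_u])
    return nearest_centroid_fit(X, y)

rng = np.random.default_rng(0)
# Two document clusters; only one labeled example per class.
X_u = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
X_l = np.array([[0.0, 0.0], [3.0, 3.0]])
y_l = np.array([0, 1])
classes, cents = self_train(X_l, y_l, X_u)
pred = nearest_centroid_predict(np.array([[2.9, 3.1]]), classes, cents)
print(pred)  # [1]
```

A transductive SVM pursues the same goal differently, placing the decision boundary in a low-density region of the combined labeled and unlabeled data; the paper's contribution is to guide that process with ontologies.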
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm (aciijournal)
In this paper we attempt to solve the automatic clustering problem by concurrently optimizing multiple objectives: automatic determination of k and a set of cluster validity indices (CVIs). The proposed automatic clustering technique uses the recent Jaya optimization algorithm as its underlying optimization strategy. This evolutionary technique aims to reach a global best solution rather than a local one on larger datasets. The exploration and exploitation imposed on the proposed method detect the number of clusters automatically, find an appropriate partitioning of the data sets, and reach near-optimal values of the CVIs. Twelve datasets of differing complexity are used to validate the performance of the proposed algorithm. The experiments show that the theoretical advantages of multi-objective clustering optimized with evolutionary approaches translate into realistic and scalable performance benefits.
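The core Jaya update is compact: every candidate moves toward the best solution and away from the worst, with no algorithm-specific tuning parameters. A minimal single-objective sketch on the sphere function (the paper applies Jaya to clustering objectives, not to this toy function):

```python
import numpy as np

def jaya(f, dim=2, pop=20, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Jaya: x' = x + r1*(best - |x|) - r2*(worst - |x|), with greedy
    acceptance of improving moves; no tunable strategy parameters."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    vals = np.array([f(x) for x in X])
    for _ in range(iters):
        best, worst = X[vals.argmin()], X[vals.argmax()]
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        Xn = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
        Xn = np.clip(Xn, lo, hi)
        vn = np.array([f(x) for x in Xn])
        improved = vn < vals  # keep a move only if it improves
        X[improved], vals[improved] = Xn[improved], vn[improved]
    return X[vals.argmin()], vals.min()

best, val = jaya(lambda x: float(np.sum(x ** 2)))  # sphere function
print(val)  # near 0
```

For automatic clustering, each candidate would instead encode a variable number of cluster centers, and `f` would combine the cluster validity indices the paper optimizes.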
A CONCEPTUAL METADATA FRAMEWORK FOR SPATIAL DATA WAREHOUSE (IJDKP)
Metadata represents information about the data stored in a data warehouse, and it is a mandatory element for building an efficient data warehouse. Metadata helps in data integration, lineage, data quality, and populating transformed data into the warehouse. Spatial data warehouses are based on spatial data, mostly collected from geographical information systems (GIS) and from transactional systems specific to an application or enterprise. Metadata design and deployment is the most critical phase in building a data warehouse, where it is essential to bring spatial information and data modeling together. In this paper, we present a holistic metadata framework that drives metadata creation for a spatial data warehouse. Theoretically, the proposed framework improves the efficiency of data access in response to frequent queries on SDWs; in other words, it decreases query response time while accurate information, including spatial information, is fetched from the data warehouse.
Improving K-NN Internet Traffic Classification Using Clustering and Principle... (journalBEEI)
K-Nearest Neighbour (K-NN) is a popular classification algorithm, and in this research it is used to classify internet traffic. K-NN is appropriate for large amounts of data and yields accurate classification, but it is computationally expensive because it calculates the distance to every instance in the dataset. Clustering is one solution to this weakness: a clustering step performed before K-NN classification groups data with similar characteristics without requiring high computation time. Fuzzy C-Means is the clustering algorithm used in this research; it does not need the number of clusters to be fixed beforehand, as clusters form naturally from the dataset that is entered. However, Fuzzy C-Means has a weakness: its results frequently differ between runs on the same input, because the initial dataset given to Fuzzy C-Means is not optimal. To optimize the initial dataset, a feature selection algorithm is needed. Feature selection produces an optimal initial dataset for Fuzzy C-Means; the feature selection algorithm used in this research is Principal Component Analysis (PCA). PCA can remove insignificant attributes or features to create an optimal dataset and can improve the performance of clustering and classification algorithms. The result of this research is a combined classification, clustering and feature selection method that successfully models internet traffic classification with higher accuracy and faster performance.
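The Fuzzy C-Means step at the heart of this pipeline alternates two updates: membership-weighted cluster centers, then soft memberships from the distances. A minimal sketch on synthetic two-blob data (not internet traffic, and without the PCA and K-NN stages):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: soft membership matrix U (n x c); centers are
    membership-weighted means; fuzzifier m > 1 controls softness."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)           # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1))        # standard FCM membership update
        U = inv / inv.sum(1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(1)
print(labels[:5], labels[-5:])  # each blob maps to one cluster
```

In the pipeline described above, PCA would first reduce the traffic features, FCM would group the reduced data, and K-NN would then classify within (or across) the resulting groups instead of against the full dataset.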
Opinion mining framework using proposed RB-Bayes model for text classification (IJECEIAES)
Data mining is a powerful idea with great potential to predict future trends and behavior. It refers to the extraction of hidden information from large data sets using techniques such as statistical analysis, machine learning, clustering, neural networks and genetic algorithms. Naive Bayes suffers from the zero-likelihood problem. This paper proposes RB-Bayes, a prediction method based on Bayes' theorem that removes the zero-likelihood problem. We also compare our method with existing methods, namely naive Bayes and SVM, and demonstrate that the technique is better than some current techniques, in particular at analyzing data sets. When the proposed approach was tested on real data sets, the results showed improved accuracy in most cases; RB-Bayes achieved an accuracy of 83.333%.
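The zero-likelihood problem the abstract refers to arises when a word never seen with a class in training makes the whole class probability zero. The classical fix is add-alpha (Laplace) smoothing; RB-Bayes is the paper's own alternative and is not reproduced here. The toy documents are hypothetical:

```python
from collections import Counter

def likelihood(word, docs, alpha=1.0):
    """P(word | class) with add-alpha (Laplace) smoothing, so a word
    unseen in the class's training documents never has zero likelihood."""
    counts = Counter(w for d in docs for w in d)
    vocab = len(counts)
    total = sum(counts.values())
    return (counts[word] + alpha) / (total + alpha * vocab)

pos_docs = [["good", "great"], ["great", "fun"]]
print(likelihood("great", pos_docs))   # (2+1)/(4+3) = 3/7
print(likelihood("boring", pos_docs))  # (0+1)/(4+3) = 1/7, not 0
```

Without smoothing (`alpha=0`), the unseen word "boring" would zero out the product of likelihoods for the class, which is exactly the failure mode the paper targets.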
PROGRAM TEST DATA GENERATION FOR BRANCH COVERAGE WITH GENETIC ALGORITHM: COMP... (cscpconf)
In search-based test data generation, the problem of test data generation is reduced to one of function minimization or maximization. Traditionally, for branch testing, test data generation has been formulated as a minimization problem. In this paper we define an alternative maximization formulation and experimentally compare it with the minimization formulation. We use a genetic algorithm as the search technique and, in addition to the usual genetic algorithm operators, we employ the path-prefix strategy as a branch ordering strategy, together with memory and elitism. Results indicate that there is no significant difference in the performance or the coverage obtained through the two approaches, and either could be used for test data generation when coupled with the path-prefix strategy, memory and elitism.
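The min-vs-max duality can be shown with the standard branch-distance fitness from search-based testing. For a predicate like `x == target`, the distance is zero exactly when the branch is taken; a maximization form can be obtained by inverting it. The target value and transformation below are illustrative, not the paper's exact formulations:

```python
def branch_distance(a, b):
    """Distance to satisfying the branch predicate `a == b`;
    zero exactly when the branch would be taken."""
    return abs(a - b)

def fitness_min(x, target=42):
    # Minimization formulation: smaller is better, 0 = branch covered.
    return branch_distance(x, target)

def fitness_max(x, target=42, k=1.0):
    # One possible maximization formulation: larger is better, and it is
    # maximal (= 1) exactly where the minimization form reaches 0.
    return k / (k + branch_distance(x, target))

candidates = [10, 40, 42, 100]
best_min = min(candidates, key=fitness_min)
best_max = max(candidates, key=fitness_max)
print(best_min, best_max)  # 42 42
```

Since both formulations induce the same ordering over candidate inputs, a GA guided by either should steer the population toward the same covering inputs, which is consistent with the paper's finding of no significant difference.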
New proximity estimate for incremental update of non uniformly distributed cl... (IJDKP)
Conventional clustering algorithms mine static databases and generate a set of patterns in the form of clusters. Many real-life databases keep growing incrementally, and for such dynamic databases the patterns extracted from the original database become obsolete. Conventional clustering algorithms are therefore not suitable for incremental databases, since they lack the capability to modify the clustering results in accordance with recent updates. In this paper, the author proposes a new incremental clustering algorithm called CFICA (Cluster Feature-based Incremental Clustering Approach for numerical data) to handle numerical data, and suggests a new proximity metric called the Inverse Proximity Estimate (IPE), which considers the proximity of a data point to a cluster representative as well as its proximity to the farthest point in its vicinity. CFICA uses the proposed proximity metric to determine the membership of a data point in a cluster.
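The abstract's idea of combining distance to a cluster representative with distance to a farthest point can be sketched as follows. The exact IPE formula is defined in the paper; the additive combination used here is only an illustrative placeholder:

```python
import numpy as np

def proximity_estimate(x, cluster):
    """Illustrative proximity in the spirit of IPE: combine the distance
    to the cluster representative (here the mean) with the distance to
    the farthest cluster member, so wide clusters are scored differently
    from tight ones. NOT the paper's exact formula."""
    rep = cluster.mean(0)
    d_rep = np.linalg.norm(x - rep)
    d_far = max(np.linalg.norm(x - p) for p in cluster)
    return d_rep + d_far

cluster_a = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])
cluster_b = np.array([[5.0, 5.0], [5.2, 5.0]])
x = np.array([0.1, 0.1])
# An incoming point joins the cluster with the smaller proximity estimate.
print(proximity_estimate(x, cluster_a) < proximity_estimate(x, cluster_b))  # True
```

In an incremental setting, such a score lets each newly arriving point be absorbed into an existing cluster (or spawn a new one) without re-clustering the whole database.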
This paper discusses possible applications of particle swarm optimization (PSO) in power systems; one such problem is economic load dispatch (ED). The discussion focuses on the cost savings, computational speed-up and expandability that can be achieved with the PSO method. The general approach of this paper couples a dynamic programming method with PSO. The feasibility of the proposed method is demonstrated, and it is compared with the lambda-iteration method in terms of solution quality and computational efficiency. The experimental results show that the proposed PSO method is indeed capable of obtaining higher-quality solutions efficiently for ED problems.
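The lambda-iteration baseline mentioned above has a compact form for quadratic cost curves C_i(P) = a_i + b_i P + c_i P^2: at the optimum all units run at equal incremental cost b_i + 2 c_i P_i, so one bisects the system lambda until total generation meets demand. The two generating units below are hypothetical, with illustrative coefficients:

```python
def economic_dispatch(units, demand, tol=1e-6):
    """Lambda iteration for ED with quadratic costs.
    `units` is a list of (a, b, c, Pmin, Pmax) tuples."""
    def output(lam):
        # For a given incremental cost lam, each unit generates
        # P = (lam - b) / (2c), clipped to its operating limits.
        return [min(max((lam - b) / (2 * c), pmin), pmax)
                for a, b, c, pmin, pmax in units]
    lo, hi = 0.0, 100.0          # bracketing range for lambda ($/MWh)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(output(lam)) < demand:
            lo = lam             # need more generation: raise lambda
        else:
            hi = lam
    return output((lo + hi) / 2)

units = [(100, 2.0, 0.01, 0, 300),   # (a, b, c, Pmin, Pmax), illustrative
         (120, 2.5, 0.02, 0, 200)]
P = economic_dispatch(units, demand=250)
print(P, sum(P))  # outputs sum to 250 MW at equal incremental cost
```

A PSO approach to the same problem, as in the paper, searches over the generation vector directly and so extends more easily to non-smooth cost curves where equal-incremental-cost conditions no longer hold.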
The pertinent single-attribute-based classifier for small datasets classific...IJECEIAES
Classifying a dataset using machine learning algorithms can be a big challenge when the target is a small dataset. The OneR classifier can be used for such cases due to its simplicity and efficiency. In this paper, we revealed the power of a single attribute by introducing the pertinent single-attributebased-heterogeneity-ratio classifier (SAB-HR) that used a pertinent attribute to classify small datasets. The SAB-HR’s used feature selection method, which used the Heterogeneity-Ratio (H-Ratio) measure to identify the most homogeneous attribute among the other attributes in the set. Our empirical results on 12 benchmark datasets from a UCI machine learning repository showed that the SAB-HR classifier significantly outperformed the classical OneR classifier for small datasets. In addition, using the H-Ratio as a feature selection criterion for selecting the single attribute was more effectual than other traditional criteria, such as Information Gain (IG) and Gain Ratio (GR).
Particle Swarm Optimization based K-Prototype Clustering Algorithm iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Data Preparation and Reduction Technique in Intrusion Detection Systems: ANOV...CSCJournals
Intrusion detection systems play a major role in detecting anomalies and suspicious behavior in many organizational environments. The detection process involves collecting and analyzing real traffic data, which in heavily loaded networks is the most challenging aspect of designing an efficient IDS. The collected data should be prepared and reduced to enhance classification accuracy and computational performance.
In this research, a proposed technique called ANOVA-PCA is applied to the NSL-KDD dataset, reducing its 41 features to 10. It is tested and evaluated with three types of supervised classifiers: k-nearest neighbor, decision tree, and random forest. Results are obtained using various performance measures and compared with other feature selection algorithms such as neighborhood component analysis (NCA) and ReliefF. The results showed that the proposed method is simple, faster in computation than the others, and achieved a good classification accuracy of 98.9%.
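The ANOVA half of the pipeline can be sketched directly: rank each feature by its one-way ANOVA F statistic across the class labels and keep the top-scoring ones. This is only the ranking step under that assumption; the PCA stage and the exact ANOVA-PCA combination used in the paper are not reproduced here.

```python
def anova_f(xs, ys):
    """One-way ANOVA F statistic for one numeric feature xs grouped by
    class labels ys: between-group variance over within-group variance."""
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    n, k = len(xs), len(groups)
    grand = sum(xs) / n
    means = {y: sum(g) / len(g) for y, g in groups.items()}
    ss_between = sum(len(g) * (means[y] - grand) ** 2 for y, g in groups.items())
    ss_within = sum((x - means[y]) ** 2 for y, g in groups.items() for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

ys = [0, 0, 0, 1, 1, 1]
informative = [1.0, 1.0, 2.0, 5.0, 5.0, 6.0]   # tracks the class
noisy = [1.0, 5.0, 2.0, 1.0, 5.0, 2.0]         # identical in both classes
```

A feature that tracks the class gets a much larger F score than one that does not, which is the ordering the feature-reduction step relies on.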
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT...IJDKP
Many applications of automatic document classification require learning accurately with little training data. The semi-supervised classification technique uses both labeled and unlabeled data for training. This technique has been shown to be effective in some cases; however, the use of unlabeled data is not always beneficial.

On the other hand, the emergence of web technologies has enabled the collaborative development of ontologies. In this paper, we propose the use of ontologies to improve the accuracy and efficiency of semi-supervised document classification.

We used support vector machines, one of the most effective algorithms studied for text. Our algorithm enhances the performance of transductive support vector machines through the use of ontologies. We report experimental results applying our algorithm to three different datasets. Our experiments show an accuracy improvement of 4% on average, and of up to 20%, in comparison with the traditional semi-supervised model.
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithmaciijournal
In this paper we attempt to solve an automatic clustering problem by concurrently optimizing multiple objectives: automatic determination of k and a set of cluster validity indices (CVIs). The proposed automatic clustering technique uses the recent Jaya optimization algorithm as its underlying optimization strategy. This evolutionary technique aims to reach a global best solution rather than a local best solution on larger datasets. The exploration and exploitation imposed by the proposed method detect the number of clusters automatically, find the appropriate partitioning of the data sets, and reach near-optimal values of the CVIs. Twelve datasets of varying complexity are used to validate the performance of the proposed algorithm. The experiments show that the theoretical advantages of multi-objective clustering optimized with evolutionary approaches translate into realistic and scalable performance gains.
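Jaya's appeal is its single parameter-free update rule, moving each solution toward the population's best and away from its worst. A minimal sketch of that rule follows; the paper's multi-objective clustering setup is not reproduced, and a simple sphere function stands in for the objective.

```python
import random

def jaya(f, dim, lo, hi, pop=20, iters=200, seed=0):
    """Minimize f with the Jaya update
    x' = x + r1*(best - |x|) - r2*(worst - |x|),
    keeping x' only if it improves on x."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        scores = [f(x) for x in xs]
        best = xs[scores.index(min(scores))]
        worst = xs[scores.index(max(scores))]
        for i, x in enumerate(xs):
            cand = [min(hi, max(lo, x[d]
                                + rng.random() * (best[d] - abs(x[d]))
                                - rng.random() * (worst[d] - abs(x[d]))))
                    for d in range(dim)]
            if f(cand) < scores[i]:
                xs[i] = cand
    return min(xs, key=f)

sphere = lambda x: sum(v * v for v in x)
sol = jaya(sphere, dim=3, lo=-5.0, hi=5.0)
```

With a fixed seed the run is deterministic and drives the sphere objective close to its global minimum at the origin.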
A CONCEPTUAL METADATA FRAMEWORK FOR SPATIAL DATA WAREHOUSEIJDKP
Metadata represents the information about the data to be stored in Data Warehouses, and it is a mandatory element in building an efficient Data Warehouse. Metadata helps in data integration, lineage, data quality, and populating transformed data into the data warehouse. Spatial data warehouses are based on spatial data, mostly collected from Geographical Information Systems (GIS) and from transactional systems specific to an application or enterprise. Metadata design and deployment is the most critical phase in building a data warehouse, where it is essential to bring spatial information and data modeling together. In this paper, we present a holistic metadata framework that drives metadata creation for a spatial data warehouse. Theoretically, the proposed metadata framework improves the efficiency of data access in response to frequent queries on SDWs. In other words, the proposed framework decreases query response time while accurate information, including the spatial information, is fetched from the Data Warehouse.
Improving K-NN Internet Traffic Classification Using Clustering and Principle...journalBEEI
K-Nearest Neighbour (K-NN) is a popular classification algorithm; in this research, K-NN is used to classify internet traffic. K-NN is appropriate for huge amounts of data and yields accurate classification, but it is computationally expensive because it calculates the distance to every record in the dataset. Clustering is one solution to this weakness: a clustering step performed before K-NN classification groups data with similar characteristics without requiring much computing time. Fuzzy C-Means is the clustering algorithm used in this research; it forms clusters naturally from the input dataset. However, Fuzzy C-Means has a weakness of its own: its clustering results are frequently inconsistent across runs on the same input because its initialization is suboptimal, so a feature selection algorithm is needed to produce an optimal initial dataset. The feature selection algorithm in this research is Principal Component Analysis (PCA), which removes non-significant attributes or features to create an optimal dataset and improves the performance of the clustering and classification algorithms. The result of this research is that the combined classification, clustering, and feature selection method successfully modeled internet traffic classification with higher accuracy and faster performance.
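The core speed-up idea, clustering first so that K-NN only searches the query's nearest cluster instead of the whole dataset, can be sketched as below. Plain k-means stands in for the paper's Fuzzy C-Means step, the PCA stage is omitted, and the tiny traffic dataset and labels are illustrative.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster_knn(train, k_clusters=2, k_nn=3, iters=15):
    """Pre-cluster the training set (plain k-means) so a query measures
    distances only within its nearest cluster, not against every row."""
    pts = [p for p, _ in train]
    centers = [list(pts[i * len(pts) // k_clusters]) for i in range(k_clusters)]
    buckets = []
    for _ in range(iters):
        buckets = [[] for _ in range(k_clusters)]
        for p, y in train:
            nearest = min(range(k_clusters), key=lambda c: dist2(p, centers[c]))
            buckets[nearest].append((p, y))
        for c, b in enumerate(buckets):
            if b:  # recompute each centroid from its members
                centers[c] = [sum(col) / len(b) for col in zip(*[p for p, _ in b])]
    def classify(q):
        b = buckets[min(range(k_clusters), key=lambda c: dist2(q, centers[c]))]
        votes = [y for _, y in sorted(b, key=lambda r: dist2(q, r[0]))[:k_nn]]
        return max(set(votes), key=votes.count)
    return classify

train = [((0.0, 0.1), "web"), ((0.1, 0.0), "web"), ((0.2, 0.1), "web"),
         ((1.0, 1.1), "p2p"), ((0.9, 1.0), "p2p"), ((1.1, 0.9), "p2p")]
classify = cluster_knn(train)
```

With two well-separated clusters, each query touches only half of the training rows, which is where the computational saving comes from.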
Opinion mining framework using proposed RB-bayes model for text classicationIJECEIAES
Data mining is a powerful concept with great potential to anticipate future trends and behavior. It refers to the extraction of hidden knowledge from large data sets using techniques such as statistical analysis, machine learning, clustering, neural networks, and genetic algorithms. Naive Bayes suffers from the zero-likelihood problem. This paper proposes the RB-Bayes method, based on Bayes' theorem, to remove the zero-likelihood problem in prediction. We also compare our method with existing methods, namely naive Bayes and SVM, and demonstrate that this technique outperforms them and can analyze data sets more effectively. When the proposed approach is tested on real data sets, the results show improved accuracy in most cases; RB-Bayes achieves an accuracy of 83.33%.
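The zero-likelihood problem is easy to demonstrate: a word never seen with a class makes that class's product of probabilities exactly zero. The RB-Bayes update itself is not specified in the abstract, so the sketch below shows a multinomial naive Bayes with Laplace (add-alpha) smoothing, the standard fix for the same problem; the toy documents and labels are illustrative.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels, alpha=1.0):
    """Multinomial naive Bayes; add-alpha smoothing keeps an unseen
    word from zeroing out a class's score."""
    vocab = {w for d in docs for w in d}
    prior = Counter(labels)
    counts = defaultdict(Counter)
    for d, y in zip(docs, labels):
        counts[y].update(d)
    def predict(doc):
        def score(y):
            total = sum(counts[y].values())
            s = math.log(prior[y] / len(labels))
            for w in doc:
                s += math.log((counts[y][w] + alpha) / (total + alpha * len(vocab)))
            return s
        return max(prior, key=score)
    return predict

docs = [["good", "great"], ["great", "fine"], ["bad", "awful"], ["awful", "poor"]]
labels = ["pos", "pos", "neg", "neg"]
predict = train_nb(docs, labels)
```

Without the `+ alpha` terms, the query `["good", "awful", "awful"]` would zero out both classes, since each contains a word the other class never saw.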
PROGRAM TEST DATA GENERATION FOR BRANCH COVERAGE WITH GENETIC ALGORITHM: COMP...cscpconf
In search-based test data generation, the problem of test data generation is reduced to one of function minimization or maximization. Traditionally, for branch testing, the problem of test data generation has been formulated as a minimization problem. In this paper we define an alternative maximization formulation and experimentally compare it with the minimization formulation. We use a genetic algorithm as the search technique and, in addition to the usual genetic algorithm operators, we also employ the path prefix strategy as a branch ordering strategy, together with memory and elitism. Results indicate that there is no significant difference in the performance or the coverage obtained through the two approaches, and either could be used in test data generation when coupled with the path prefix strategy, memory, and elitism.
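In the minimization formulation, each candidate input is scored by a branch distance that reaches zero exactly when the target branch is taken. A hedged sketch of that idea, with a made-up target predicate `x == 4242` and simple GA operators (tournament selection, averaging crossover, mutation, elitism), none of which are claimed to match the paper's exact operator set:

```python
import random

def branch_distance(x):
    # distance to satisfying a hypothetical target branch "x == 4242";
    # zero means the branch is covered
    return abs(x - 4242)

def ga_search(fitness, lo=0, hi=10000, pop=30, gens=100, seed=1):
    rng = random.Random(seed)
    xs = [rng.randint(lo, hi) for _ in range(pop)]
    best = min(xs, key=fitness)               # elitism: remember the best seen

    def pick():                               # tournament selection of size 3
        return min(rng.sample(xs, 3), key=fitness)

    for _ in range(gens):
        nxt = [best]
        while len(nxt) < pop:
            child = (pick() + pick()) // 2    # averaging crossover
            if rng.random() < 0.3:            # mutation
                child = max(lo, min(hi, child + rng.randint(-50, 50)))
            nxt.append(child)
        xs = nxt
        best = min(xs, key=fitness)
    return best

x = ga_search(branch_distance)
```

The maximization formulation the paper compares against would instead reward candidates with a score that grows as the distance shrinks, e.g. `K - branch_distance(x)` for some constant `K`.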
New proximity estimate for incremental update of non uniformly distributed cl...IJDKP
The conventional clustering algorithms mine static databases and generate a set of patterns in the form of
clusters. Many real life databases keep growing incrementally. For such dynamic databases, the patterns
extracted from the original database become obsolete. Thus the conventional clustering algorithms are not
suitable for incremental databases due to lack of capability to modify the clustering results in accordance
with recent updates. In this paper, the author proposes a new incremental clustering algorithm called
CFICA(Cluster Feature-Based Incremental Clustering Approach for numerical data) to handle numerical
data and suggests a new proximity metric called Inverse Proximity Estimate (IPE) which considers the
proximity of a data point to a cluster representative as well as its proximity to a farthest point in its vicinity.
CFICA makes use of the proposed proximity metric to determine the membership of a data point into a
cluster.
This paper discusses possible applications of particle swarm optimization (PSO) in power systems. One of the problems in power systems is Economic Load Dispatch (ED). The discussion is carried out in view of the cost savings, computational speed-up, and expandability that can be achieved by using the PSO method. The general approach of this paper is dynamic programming coupled with the PSO method. The feasibility of the proposed method is demonstrated, and it is compared with the lambda-iteration method in terms of solution quality and computational efficiency. The experimental results show that the proposed PSO method is indeed capable of efficiently obtaining higher-quality solutions to ED problems.
International Refereed Journal of Engineering and Science (IRJES)irjes
The core of the vision IRJES is to disseminate new knowledge and technology for the benefit of all, ranging from academic research and professional communities to industry professionals in a range of topics in computer science and engineering. It also provides a place for high-caliber researchers, practitioners and PhD students to present ongoing research and development in these areas.
A Genetic Algorithm on Optimization Test FunctionsIJMERJOURNAL
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted as good performers among metaheuristic algorithms, most works have concentrated on the application of GAs rather than on their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks for testing the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima and that the optimal values obtained for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
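Three of these benchmarks can be written down directly from their standard definitions, together with their commonly cited optima (the Shubert function, a product of cosine sums, is omitted here):

```python
import math

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def schwefel(x):
    return 418.9829 * len(x) - sum(v * math.sin(math.sqrt(abs(v))) for v in x)

# commonly cited optima: rosenbrock -> 0 at (1, ..., 1); rastrigin -> 0 at
# the origin; schwefel (in this form) -> approximately 0 at v = 420.9687
```

Rosenbrock's narrow curved valley, Rastrigin's dense grid of local minima, and Schwefel's distant deceptive optima each stress a different aspect of an optimizer, which is why they are used together.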
Software Effort Estimation Using Particle Swarm Optimization with Inertia WeightWaqas Tariq
Software is the most expensive element of virtually all computer-based systems. For complex custom systems, a large effort estimation error can make the difference between profit and loss, and cost (effort) overruns can be disastrous for the developer. The basic input for effort estimation is project size. A number of models have been proposed to relate software size to effort; however, effort estimation remains difficult because of the uncertainty in the input information. Accurate software effort estimation is a challenge in industry. In this paper we propose three software effort estimation models using a soft computing technique: Particle Swarm Optimization with inertia weight for tuning the effort parameters. The performance of the developed models was tested on the NASA software project dataset. The developed models were able to provide good estimation capabilities.
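The inertia-weight PSO velocity update is v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), where w damps the previous velocity. A minimal sketch follows; a sphere function stands in for the objective, since the paper's actual effort-estimation objective (an error measure over the NASA dataset) is not reproduced here.

```python
import random

def pso(f, dim, lo, hi, pop=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    vs = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    gbest = min(pbest, key=f)[:]               # swarm's best position
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]       # inertia term
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=2, lo=-10.0, hi=10.0)
```

An inertia weight below 1 (here 0.7) gradually shifts the swarm from exploration to exploitation, which is the tuning knob the paper's models exploit.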
Optimal rule set generation using pso algorithmcsandit
Classification and prediction are important research areas of data mining, and construction of a classifier model for a decision system is an important job in many data mining applications. The objective of developing such a classifier is to classify unlabeled datasets into classes. Here we have applied a discrete Particle Swarm Optimization (PSO) algorithm for selecting optimal classification rule sets from the huge number of rules that may exist in a dataset. In the proposed DPSO algorithm, a decision matrix approach was used to generate the initial possible classification rules from a dataset. The proposed algorithm then discovers the important or significant rules among all possible classification rules without sacrificing predictive accuracy. The proposed algorithm deals with discrete-valued data, and its initial population of candidate solutions contains particles of different sizes. The experiment has been carried out on the task of optimal rule selection in data sets collected from the UCI repository. Experimental results show that the proposed algorithm can automatically evolve, on average, a small number of conditions per rule and a few rules per rule set, and achieved better predictive classification accuracy for several classes.
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATAijejournal
A well-constructed classification model depends heavily on the input feature subset of a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be even more severe when dealing with medical datasets. The main aim of feature selection, as a pre-processing task, is to eliminate such features and select the most effective ones. In the literature, metaheuristic algorithms have shown successful performance in finding optimal feature subsets. In this paper, two binary metaheuristic algorithms, the S-shaped binary Sine Cosine Algorithm (SBSCA) and the V-shaped binary Sine Cosine Algorithm (VBSCA), are proposed for feature selection from medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated for each solution by two transfer functions, S-shaped and V-shaped. The proposed algorithms are compared with four recent binary optimization algorithms over five medical datasets from the UCI repository. The experimental results confirm that both bSCA variants enhance classification accuracy on these medical datasets compared with the four other algorithms.
Remote sensing and GIS application for monitoring drought vulnerability in In...riyaniaes
Agricultural drought is a hydrometeorological disaster that causes significant losses because it affects food stocks. In addition, agricultural droughts impact the physical and socio-economic development of the community. Remote sensing technology is used to monitor agricultural droughts spatially and temporally in order to minimize losses. This study reviewed the literature on remote sensing and GIS for monitoring drought vulnerability in Indonesia. The study was conducted at island scale on Java Island, at provincial scale in East Java and Bali, and at district scale in Indramayu and Kebumen. The dominant method was the drought index, which involves land surface temperature (LST), a vegetation index, land cover, a wetness index, and rainfall. Each study has strengths and weaknesses. Low-resolution satellite imagery has been used to assess drought vulnerability: at the island scale it provides an overview of drought conditions, while at the provincial scale it focuses on paddy fields and yields little detailed information. In-situ measurements at the district scale detect meteorological drought accurately, but the mapping units lack detailed information. Drought mapping using GIS and remote sensing at the district scale provides detailed spatial information on climatic and physiographic aspects, but it requires temporal data monitoring.
Comparative review on information and communication technology issues in educ...riyaniaes
The use of information and communication technology is very beneficial in the education sector because it can enhance the quality of education. However, the implementation of ICT in the education sector of developed and developing countries is a challenging task. This paper presents a comparative study of ICT issues in the education sector of developed and developing countries; in particular, we compare issues between Pakistan and high-tech countries. Our study reveals that the education sector is facing numerous ICT problems rooted in culture, finance, management, infrastructure, lack of training, lack of equipment, teachers' refusal, and ethical issues. At the end of this paper, the various issues faced in implementing ICT in the education sector of Pakistan are categorized into types, namely infrastructure, lack of IT professionals, and lack of high-speed internet and equipment. Our research is based on five key research questions related to ICT issues. We used a mixed approach, and the results of this study can be used as a set of guidelines to help make the learning environment technology-oriented, fast, planned, and productive. Future directions are also given at the end of this paper.
The development and implementation of an android-based saving and loan cooper...riyaniaes
A savings and loan cooperative serves an important purpose through its reach and services; the Permata Ngijo Savings and Loans Cooperative is one such cooperative. Activities within the cooperative include managing member data as well as savings and loan transactions. In a system that is still manual, such as Microsoft Excel, there is a risk of inaccurate data, and processing takes a long time. This manual system becomes increasingly ineffective as the number of members and transactions continues to grow. Therefore, an application is needed that provides more effective data processing and an information system that members can access more easily. This study aims to build an Android-based savings and loan application using the hypertext preprocessor (PHP) programming language and MySQL for data storage. Black-box testing of the resulting Android application shows that its functionality runs as planned, with a success rate for users of 90%.
Online medical consultation: covid-19 system using software object-oriented a...riyaniaes
The internet has long been a source of medical information, and it has been used for online medical consultation (OMC). OMC is now offered by many providers internationally, with diverse models and features; consultations and treatments are available 24/7. During the worldwide covid-19 pandemic, many people were unable to go to a hospital or clinic because of the spread of the virus. This paper tried to answer two research questions. The first is how OMC can help patients during the covid-19 pandemic; a literature review was conducted to answer it. The second is how to develop an OMC system for the covid-19 pandemic. The system was developed in Visual Studio 2019 using a software object-oriented approach. An online expert review was conducted with 6 experts from the health and academic industries to verify the model, and the system was validated by 11 users from the health and academic industries to confirm its usability. The statistical package for the social sciences (SPSS 22) was used to analyze the collected data. The result of the expert review confirmed that the covid-19 system can help patients, and the validity of the system was confirmed by the 11 users.
Successful factors determining the significant relationship between e-governa...riyaniaes
Every government's major objective is to provide the greatest services in order to establish efficiency and quality of performance. Syria's government has understood how critical it is to move in the direction of information technology. However, there are gaps and poor links across government sectors, which have tainted the image of Syrian e-governance. As a result, one of the main aims of this study is to determine what factors influence Syrians' acceptance of the e-government system. A total of 600 questionnaires were delivered to Syrian individuals as part of a survey. The data was analysed using structural equation modelling (SEM) with AMOS version 21.0. User intention to utilise an e-government system was shown to be influenced by performance expectations, effort expectations, system flexibility, citizen-centricity, and facilitating conditions. Assurance, responsiveness, reliability, tangibles, and empathy are five fundamental factors that have a major impact on government operation excellence. Behavioural intention is used as a mediator between the government operation excellence (GOE) initiative and the e-government platform.
Non-linear behavior of root and stem diameter changes in monopodial orchidriyaniaes
Precision agriculture aims to maximize yield with optimum resources. The vast majority of natural systems are acknowledged to be complex and non-linear. However, prior to the formulation of precise models, linearity tests should be performed to validate plant behavior. This study presents proof that the water uptake system in monopodial orchids is indeed non-linear. The changes in physical growth of root and stem due to temperature and relative humidity are observed. The work focused on the Ascocenda Fuchs Harvest Moon x (V. Chaophraya x Boots) orchid hybrid. Three complementary methods are presented: linearity tests through 1) regression fitting; 2) scatter plots; and 3) cross-correlation function tests. Root diameter, stem diameter, temperature, and relative humidity were logged at 15-minute intervals for a duration of 71 days. The polynomial equations derived for root diameter and stem diameter changes attained strong regression coefficients. The non-linear behavior is further confirmed by the scatter plots, where no linear associations are present between the independent and dependent variables. Subsequently, the cross-correlation function tests conducted on the temperature-root diameter, temperature-stem diameter, relative humidity-root diameter, and relative humidity-stem diameter combinations also revealed weak correlation. Despite using different techniques, the behavior of the physical changes has been consistently shown to be non-linear.
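The cross-correlation function test in step 3 slides one series against the other and computes a correlation coefficient at each lag; persistently weak values across all lags indicate no (possibly delayed) linear association. A minimal sketch using the Pearson coefficient, with made-up series standing in for the temperature and root-diameter logs:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cross_correlation(xs, ys, max_lag=3):
    # correlation of ys against xs shifted forward by each lag
    return {lag: pearson(xs[:len(xs) - lag], ys[lag:]) for lag in range(max_lag + 1)}

temp = [1.0, 5.0, 2.0, 8.0, 3.0, 9.0, 4.0, 7.0]
root = [0.0] + temp[:-1]        # toy series responding one step later
cc = cross_correlation(temp, root, max_lag=2)
```

In this toy case the lag-1 correlation is perfect because the second series is an exact one-step delay of the first; in the study, all lags stayed weak, supporting the non-linearity conclusion.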
Cucumber disease recognition using machine learning and transfer learningriyaniaes
Cucumber is grown as a cash crop and is one of the main and most popular vegetables in Bangladesh. As Bangladesh's economy is largely dependent on the agricultural sector, cucumber farming could make economic and productivity growth more sustainable. However, many diseases reduce cucumber yield and quality. Early detection of disease can help stop it from spreading to other healthy plants, and accurately identifying the disease helps reduce crop losses through specific treatments. In this paper, we present two approaches, namely traditional machine learning (ML) and CNN-based transfer learning, and compare their performance to find the most appropriate technique for recognizing cucumber diseases. The ML approach involves five steps: after collecting the images, pre-processing is done by resizing, filtering, and contrast enhancement; then various ML algorithms are compared using k-means-based image segmentation after extracting 10 relevant features. Random forest gives the best accuracy in the traditional ML approach, at 89.93%. We also studied and applied CNN-based transfer learning to investigate further improvement of recognition performance, and compared various transfer learning models such as InceptionV3, MobileNetV2, and VGG16. Between the two approaches, MobileNetV2 achieves the highest accuracy, at 93.23%.
Android mobile application for wildfire reporting and monitoringriyaniaes
Peat fires cause major environmental problems in Central Kalimantan Province, Indonesia; they threaten human health and affect the socio-economic sector. The lack of peat fire detection systems is one factor behind these recurring fires. Therefore, in this study we develop an Android mobile application and a web-based application to support citizen volunteers who want to contribute wildfire reports, and decision-makers who wish to collect, visualize, and evaluate these reports. In this paper, the global navigation satellite system (GNSS) and the global positioning system (GPS) sensor of a smartphone camera are used to indicate the close-range location of potential fire and smoke. The exchangeable image file format (EXIF) image and GPS metadata captured by a mobile phone can store raw observations and send them to the data center over the global internet. The result of this work is an easy-to-use application for monitoring potential peat fires by location and activity data. This paper focuses on developing a mobile application for peat fire reporting and a web-based application that collects peat fire locations for decision-makers. Our main objective is to detect the potential occurrence and spread of fire in peatlands as early as possible by utilizing community reports made with smartphones.
An empirical assessment of different kernel functions on the performance of s...riyaniaes
Artificial intelligence (AI) and machine learning (ML) have influenced every part of our day-to-day activities in this era of technological advancement, making life on earth more comfortable. Among the several AI and ML algorithms, the support vector machine (SVM) has become one of the most generally used algorithms for data mining, prediction, and other AI and ML activities in several domains. The SVM's performance is significantly dependent on the kernel function (KF); nonetheless, there is no universally accepted basis for selecting an optimal KF for a specific domain. In this paper, we empirically investigate the effect of different KFs on SVM performance in various fields, and we illustrate that performance through extensive experimental results. Our empirical results show that no single KF is always suitable for achieving high accuracy and generalisation in all domains. However, the Gaussian radial basis function (RBF) kernel is often the default choice, and if the KF parameters of the RBF and exponential RBF kernels are optimised, they outperform the linear and sigmoid KF-based SVM methods in terms of accuracy. The linear KF is more suitable for linearly separable datasets.
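The kernel functions being compared can be written out directly; the `gamma` and sigmoid offset values below are illustrative defaults, not the tuned parameters from the paper's experiments:

```python
import math

def linear_k(u, v):
    # linear kernel: plain dot product
    return sum(a * b for a, b in zip(u, v))

def rbf_k(u, v, gamma=0.5):
    # Gaussian radial basis function: exp(-gamma * ||u - v||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def sigmoid_k(u, v, gamma=0.5, c0=0.0):
    # sigmoid kernel: tanh(gamma * <u, v> + c0)
    return math.tanh(gamma * linear_k(u, v) + c0)
```

Tuning the RBF's `gamma` is what the paper refers to as optimising the KF parameters: small values give a smoother, nearly linear decision surface, while large values fit increasingly local structure.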
An effective classification approach for big data with parallel generalized H...riyaniaes
Advancements in information technology are contributing to the excessive rate of big data generation. Big data refers to datasets that are huge in volume and consume much time and space to process and transmit with the available resources; it also covers both unstructured and structured formats. Many agencies are currently funding research on big data analytics owing to the failure of existing data processing techniques to handle the rate at which big data is generated. This paper presents an efficient classification and reduction technique for big data based on the parallel generalized Hebbian algorithm (GHA), one of the commonly used principal component analysis (PCA) neural network (NN) learning algorithms. The new method proposed in this study was compared with existing methods to demonstrate its capabilities in reducing the dimensionality of big data. The proposed method is implemented on the Spark Radoop platform.
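Sanger's generalized Hebbian algorithm learns the leading principal components one sample at a time with the update Δw_i = η·y_i·(x − Σ_{j≤i} y_j·w_j), which is what makes it attractive for streaming big data. A single-component sketch on toy zero-mean 2-D data lying along the (1, 1) direction follows; the parallel/Spark aspects of the paper are not reproduced.

```python
import random

def gha(data, n_components=1, lr=0.01, epochs=500, seed=0):
    """Sanger's rule: after training, the rows of W approximate the
    leading principal components (unit norm) of the zero-mean data."""
    rng = random.Random(seed)
    dim = len(data[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_components)]
    for _ in range(epochs):
        for x in data:
            # component outputs y_i = w_i . x
            y = [sum(W[i][d] * x[d] for d in range(dim)) for i in range(n_components)]
            for i in range(n_components):
                for d in range(dim):
                    # subtract reconstruction from components 0..i (Sanger's term)
                    back = sum(y[j] * W[j][d] for j in range(i + 1))
                    W[i][d] += lr * y[i] * (x[d] - back)
    return W

data = [(-1.0, -1.0), (-0.5, -0.5), (0.5, 0.5), (1.0, 1.0)]
W = gha(data)
```

The learned weight vector converges to a unit vector along (1, 1), i.e. the data's first principal component, without ever forming the covariance matrix; that per-sample property is what a parallel implementation distributes across workers.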
Courier management system project report.pdfKamal Acharya
It is now-a-days very important for the people to send or receive articles like imported furniture, electronic items, gifts, business goods and the like. People depend vastly on different transport systems which mostly use the manual way of receiving and delivering the articles. There is no way to track the articles till they are received and there is no way to let the customer know what happened in transit, once he booked some articles. In such a situation, we need a system which completely computerizes the cargo activities including time to time tracking of the articles sent. This need is fulfilled by Courier Management System software which is online software for the cargo management people that enables them to receive the goods from a source and send them to a required destination and track their status from time to time.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Using particle swarm optimization to solve test functions problems

Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 6, December 2021, pp. 3422~3431
ISSN: 2302-9285, DOI: 10.11591/eei.v10i6.3244
Journal homepage: http://beei.org

Issa Ahmed Abed, May Mohammed Ali, Afrah Abood Abdul Kadhim
Basrah Engineering Technical College, Southern Technical University, Iraq
Article Info

Article history:
Received May 22, 2021
Revised Aug 17, 2021
Accepted Oct 6, 2021

ABSTRACT

In this paper, benchmark functions are used to evaluate and check the particle swarm optimization (PSO) algorithm. The functions used are two-dimensional, but they were selected with different difficulties and different models. In order to prove the capability of PSO, it is compared with the genetic algorithm (GA); the two algorithms are compared in terms of objective function values and standard deviation. Multiple runs were taken to obtain convincing results, the parameters were chosen properly, and the Matlab software was used. The suggested algorithm can solve engineering problems of different dimensions and outperforms the other in terms of accuracy and speed of convergence.
Keywords:
Best solution
Comparison
Operators
Optimization
Swarm

This is an open access article under the CC BY-SA license.
Corresponding Author:
Issa Ahmed Abed
Department of Automation and Control Technologies Engineering
Basrah Engineering Technical College
Southern Technical University, Basrah, Iraq
Email: issaahmedabed@stu.edu.iq
1. INTRODUCTION
A large number of real-life optimization problems in science, engineering, economics, and business are complex and difficult to solve; they cannot be solved exactly within a reasonable amount of time. Using approximate algorithms is the main alternative for this class of problems [1], [2]. The optimization process involves finding a single optimal solution, or a series of them, from among a very large number of possibilities; in this process, the space of potential solutions is reduced to one or a few of the best ones [3], [4]. Many researchers have constructed different methods to obtain the best solution, because in many engineering problems the solution is difficult to find. Particle swarm optimization (PSO) is a stochastic method based on the behaviour of animals. Wang et al. [5] gave the basics and details of PSO, analysing the method from various views: structure, parameters, discrete PSO, parallel PSO, and multi-objective PSO. Ozdemir [6] used PSO to reach the global minimum of suggested functions, applying the method to different benchmark functions, and showed that PSO is a successful algorithm for solving various problems. Jumaa et al. [7] suggested a method for the scheduling and running of distributed generators to reduce power loss and improve the voltage profile and reliability of the whole network; they used particle swarm optimization to estimate the best site and size of distributed generation. Garcia et al. [8] presented a technique using the genetic algorithm (GA) to find the maxima of differentiable functions, and introduced a Python library containing components of the algorithm. Alyouzbaki and Al-Rawi [9] introduced a method to extend the fault tolerance of a system, using ant colony optimization (ACO) to enhance the suggested strategy: ACO selects the better virtual machine to which a cloudlet migrates, in order to decrease energy consumption and time.
This paper presents particle swarm optimization applied to a set of suggested test functions. The rest of the paper is organized as follows: section 2 gives an overview of the genetic algorithm and PSO and the comparison between the two algorithms; section 3 presents the simulation results; section 4 is the conclusion.
2. THE PROPOSED METHODS
2.1. Genetic algorithm
The genetic algorithm, considered an adaptive technique, is used to solve many optimization problems. Following natural selection and survival of the fittest, the population improves through successive iterations. If the genetic algorithm has been encoded properly, it can transfer a solution from simulation to real applications, since the GA simulates selection and survival [10], [11]. In this algorithm the population size is kept constant while the GA goes generation after generation. During its iterations the GA passes through various operations, such as reproduction, crossover, and mutation, which produce new offspring individuals. The new population is rated by a function called the objective function, and the best new solutions are obtained through these procedures [12]-[14].
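The generational loop described above can be sketched as a minimal real-coded GA in Python (an illustrative sketch only; the selection, crossover, and mutation operators and parameter names here are our assumptions, since the paper does not detail its exact GA implementation, and its experiments used Matlab):

```python
import random

def ga(f, bounds, pop_size=10, iters=100, pc=0.9, pm=0.5):
    """Minimal real-coded GA minimizing f over bounds = [(lo, hi), ...]:
    tournament selection, arithmetic crossover, gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)                      # best individual found so far
    for _ in range(iters):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection: the fitter of two random individuals
            p1 = min(random.sample(pop, 2), key=f)
            p2 = min(random.sample(pop, 2), key=f)
            # arithmetic crossover with probability pc
            if random.random() < pc:
                a = random.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = p1[:]
            # gaussian mutation with probability pm per gene, clipped to bounds
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < pm:
                    child[j] += random.gauss(0, 0.1 * (hi - lo))
                child[j] = min(max(child[j], lo), hi)
            new_pop.append(child)
        pop = new_pop
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)
```

With pm = 0.5, as in the paper's experiments, the tracked best objective value never increases from one generation to the next.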
2.2. Particle swarm optimization
Particle swarm optimization was first given by Eberhart and Kennedy. It operates on a population of individuals called particles. PSO is inspired by the behaviour of flocking birds and schooling fish [15], [16]. It is a technique that depends on the relations between the particles of the swarm; it is a stochastic optimization in which the population searches intelligently through the search space. Each particle has a position and a velocity, and these values change through the generations, where the personal best position is the particle's own best experience and the global best position is the best experience achieved by all particles [17], [18]. The particles are vectors of real numbers, and every vector position is called a dimension. There are two relations, the first for velocity and the second for position, illustrated in (1) and (2) [19], [20]:
vel_{i,j}^{gen+1} = weight * vel_{i,j}^{gen} + acc_1 * rand_1 * (perbest_{i,j}^{gen} - par_{i,j}^{gen}) + acc_2 * rand_2 * (globalbest_j^{gen} - par_{i,j}^{gen})    (1)
par_{i,j}^{gen+1} = par_{i,j}^{gen} + vel_{i,j}^{gen+1}    (2)
where par_i represents a particle in the population of size pop with dimension dim, par_i = {par_{i,1}, par_{i,2}, ..., par_{i,dim}}. In addition, each particle has a velocity, written as vel_i = {vel_{i,1}, vel_{i,2}, ..., vel_{i,dim}}. The index i runs from 1 to pop, j runs from 1 to dim, and gen is the iteration number. perbest_{i,j}^{gen} is the jth parameter of the personal best of the ith individual, and globalbest_j is the jth parameter of the best member of the population up to generation gen. Finally, weight, acc, and rand are the inertia weight, the acceleration parameter, and a random number in [0,1] [21]-[23]. Figure 1 shows the overall algorithm.
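Equations (1) and (2) and the loop of Figure 1 can be sketched in a few lines of Python (illustrative only; the paper's experiments used Matlab, and the velocity clamping and bound handling here are our assumptions):

```python
import random

def pso(f, bounds, pop=10, iters=100, acc1=2.0, acc2=2.0):
    """Minimal PSO minimizing f over bounds = [(lo, hi), ...].
    Inertia weight decreases linearly from 0.9 to 0.4, as in the paper."""
    dim = len(bounds)
    par = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    vel = [[0.0] * dim for _ in range(pop)]
    perbest = [p[:] for p in par]               # personal best positions
    perbest_val = [f(p) for p in par]
    g = min(range(pop), key=lambda i: perbest_val[i])
    globalbest, globalbest_val = perbest[g][:], perbest_val[g]
    for gen in range(iters):
        weight = 0.9 - 0.5 * gen / iters        # inertia: 0.9 -> 0.4
        for i in range(pop):
            for j in range(dim):
                lo, hi = bounds[j]
                r1, r2 = random.random(), random.random()
                vel[i][j] = (weight * vel[i][j]
                             + acc1 * r1 * (perbest[i][j] - par[i][j])
                             + acc2 * r2 * (globalbest[j] - par[i][j]))  # eq. (1)
                vel[i][j] = max(-(hi - lo), min(hi - lo, vel[i][j]))     # clamp velocity
                par[i][j] = min(max(par[i][j] + vel[i][j], lo), hi)      # eq. (2), in range
            val = f(par[i])
            if val < perbest_val[i]:
                perbest[i], perbest_val[i] = par[i][:], val
                if val < globalbest_val:
                    globalbest, globalbest_val = par[i][:], val
    return globalbest, globalbest_val
```

With the paper's settings (population 10, 100 iterations, acceleration coefficients of 2), this sketch drives a simple two-dimensional function toward its minimum.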
2.3. The difference between PSO and some other algorithms
The following steps are found in many evolutionary methods:
− Initialization with random individuals.
− An objective function, calculated for each member, measures how close it is to the optimum.
− The objective value is also the basis for reproducing the population.
− These procedures either stop or continue.
PSO therefore shares these points with many algorithms: they generate the population randomly, they depend on the objective function to find the best solution, and all of them proceed step by step towards the best, though none guarantees success. However, PSO does not contain operations like crossover or mutation; it improves its particles through the internal velocity, and PSO has memory. There are also differences between PSO and other algorithms such as the GA: in the GA all individuals exchange information with each other, while in PSO only the global best provides information to the other individuals [24].
Figure 1. PSO procedures
3. RESULTS AND DISCUSSION
The tests were run on a PC with an Intel Core i7 processor at 2.7 GHz, 4 GB of RAM, and Windows 7 Professional. All the results were simulated using Matlab 2008a. The population size was 10 and the number of iterations was 100. In addition, p_m = 0.5 in the GA, while in PSO the inertia weight was between 0.4 and 0.9 and the acceleration coefficients were 2. To test the ability of PSO, the following functions, all of them two-dimensional, were used in this paper [25].
3.1. Beale function
This is the first function suggested here. The mathematical formula is:

f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2    (3)
This is a multimodal function with x_1, x_2 ∈ [-4.5, 4.5] and f(x_best) = 0 at x_best = (3, 0.5). The details of this function are shown in Figure 2. The particle swarm optimization was tested on the Beale function and compared with the genetic algorithm, as illustrated in Figure 3. It is obvious that PSO is faster than the GA, as well as more accurate, on this function.
Figure 2. The image of Beale function Figure 3. Functions comparison
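As a quick sanity check, the formula in (3) and its stated optimum can be reproduced in a few lines of Python (illustrative; the helper name is ours, and the paper's experiments used Matlab):

```python
def beale(x1, x2):
    # Beale function, eq. (3): multimodal, x1, x2 in [-4.5, 4.5],
    # global minimum f = 0 at (3, 0.5)
    return ((1.5 - x1 + x1 * x2) ** 2
            + (2.25 - x1 + x1 * x2 ** 2) ** 2
            + (2.625 - x1 + x1 * x2 ** 3) ** 2)

print(beale(3, 0.5))  # → 0.0
```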
3.2. McCormick function
The common relation for this function is:

f(x) = sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5 x_1 + 2.5 x_2 + 1    (4)
The boundaries of the dimensions are x_1 ∈ [-1.5, 4] and x_2 ∈ [-3, 4], while the optimum is f(x_best) ≈ -1.9133 at x_best = (-0.547, -1.547). Figure 4 presents this function. The algorithms were also tested on the McCormick function to show the ability of each method, as shown in Figure 5. From the figure it can be seen that the two methods are closely comparable, but PSO still outperforms the GA and reaches the best solution.
Figure 4. The image of McCormick function Figure 5. The PSO comparison
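The McCormick formula in (4) can be checked the same way; evaluating at the (rounded) optimum location gives approximately the reported value (illustrative Python):

```python
import math

def mccormick(x1, x2):
    # McCormick function, eq. (4): x1 in [-1.5, 4], x2 in [-3, 4],
    # minimum ≈ -1.9133 at about (-0.547, -1.547)
    return math.sin(x1 + x2) + (x1 - x2) ** 2 - 1.5 * x1 + 2.5 * x2 + 1

print(mccormick(-0.54719, -1.54719))  # ≈ -1.9132
```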
3.3. Matyas function
The expression for this function is:

f(x) = 0.26 (x_1^2 + x_2^2) - 0.48 x_1 x_2    (5)
The parameters are x_1, x_2 ∈ [-10, 10] and the best function value is f(x_best) = 0 at x_best = (0, 0). The graph of the Matyas function is given in Figure 6. Figure 7 shows the objective value for the two methods over the generations: the value obtained by PSO converges generation by generation, reaching the target point in a very suitable time.
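The Matyas formula in (5) is simple enough to verify directly (illustrative sketch; the function name is ours):

```python
def matyas(x1, x2):
    # Matyas function, eq. (5): x1, x2 in [-10, 10], minimum 0 at (0, 0)
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2

print(matyas(0, 0))  # → 0.0
```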
Figure 6. The 3-D graph for Matyas    Figure 7. The convergence of the methods
3.4. Mishra bird function
It is a function with the formula:

f(x) = sin(x_1) e^{(1 - cos(x_2))^2} + cos(x_2) e^{(1 - sin(x_1))^2} + (x_1 - x_2)^2    (6)
The dimensions should be x_1, x_2 ∈ [-2π, 2π], while the optimum is f(x_best) = -106.764537 at x_best = (4.70104, 3.15294) and x_best = (-1.58214, -3.13024). The graph of this function is given in Figure 8. PSO still holds the best position in terms of error and speed, and the GA cannot overcome this algorithm, as presented in Figure 9.
Figure 8. The function of Mishra bird    Figure 9. The comparison curves between the two algorithms on Mishra bird
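The formula in (6) and both reported minima can be checked numerically (illustrative Python; the helper name is ours):

```python
import math

def mishra_bird(x1, x2):
    # Eq. (6): x1, x2 in [-2*pi, 2*pi], minimum ≈ -106.7645
    # at (4.70104, 3.15294) and at (-1.58214, -3.13024)
    return (math.sin(x1) * math.exp((1 - math.cos(x2)) ** 2)
            + math.cos(x2) * math.exp((1 - math.sin(x1)) ** 2)
            + (x1 - x2) ** 2)

print(mishra_bird(4.70104, 3.15294))  # ≈ -106.7645
```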
3.5. Holder table function
The mathematical equation for this function is:

f(x) = -|sin(x_1) cos(x_2) exp(|1 - sqrt(x_1^2 + x_2^2)/π|)|    (7)
In this case x_1, x_2 ∈ [-10, 10] and f(x_best) = -19.2085 at x_best = (8.05502, 9.66459), (-8.05502, 9.66459), (8.05502, -9.66459), and (-8.05502, -9.66459). The graph of the Holder table function is illustrated in Figure 10. This test seems to be difficult for PSO, which is trapped in a local optimum; its accuracy is lower than that of the other algorithm, as shown in Figure 11.
Figure 10. The function of Holder table    Figure 11. The comparison between the two algorithms
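Equation (7) can likewise be checked at one of the four symmetric minima (illustrative Python; the function name is ours):

```python
import math

def holder_table(x1, x2):
    # Holder table function, eq. (7): x1, x2 in [-10, 10],
    # four minima of about -19.2085 at (±8.05502, ±9.66459)
    return -abs(math.sin(x1) * math.cos(x2)
                * math.exp(abs(1 - math.hypot(x1, x2) / math.pi)))

print(holder_table(8.05502, 9.66459))  # ≈ -19.2085
```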
3.6. Eggholder function
The equation is:

f(x) = -(x_2 + 47) sin(sqrt(|x_2 + x_1/2 + 47|)) - x_1 sin(sqrt(|x_1 - (x_2 + 47)|))    (8)
Here x_1, x_2 ∈ [-512, 512] with f(x_best) = -959.6407 at x_best = (512, 404.2319); the graph of this function is introduced in Figure 12. Figure 13 shows that PSO is still dominant over the GA, although the two are very comparable in terms of speed.
Figure 12. The graph of the Eggholder function    Figure 13. The objective curves
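A numerical check of the Eggholder formula in (8) at the reported optimum (illustrative Python):

```python
import math

def eggholder(x1, x2):
    # Eggholder function, eq. (8): x1, x2 in [-512, 512],
    # minimum ≈ -959.6407 at (512, 404.2319)
    return (-(x2 + 47) * math.sin(math.sqrt(abs(x2 + x1 / 2 + 47)))
            - x1 * math.sin(math.sqrt(abs(x1 - (x2 + 47)))))

print(eggholder(512, 404.2319))  # ≈ -959.6407
```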
3.7. Schaffer function
The mathematical equation is:
f(x) = 0.5 + (sin^2(x_1^2 - x_2^2) - 0.5) / [1 + 0.001 (x_1^2 + x_2^2)]^2    (9)
Here x_1, x_2 ∈ [-100, 100] and f(x_best) = 0 at x_best = (0, 0), as shown in Figure 14. On this function, even though PSO is better, both algorithms need further improvement to approximate the optimum value of 0, as shown in Figure 15.
Figure 14. The function of Schaffer Figure 15. The approximation of the algorithms
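The Schaffer formula in (9) evaluates exactly to 0 at the origin (illustrative Python; the helper name is ours):

```python
import math

def schaffer(x1, x2):
    # Schaffer function, eq. (9): x1, x2 in [-100, 100], minimum 0 at (0, 0)
    num = math.sin(x1 ** 2 - x2 ** 2) ** 2 - 0.5
    den = (1 + 0.001 * (x1 ** 2 + x2 ** 2)) ** 2
    return 0.5 + num / den

print(schaffer(0, 0))  # → 0.0
```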
3.8. Booth function
This function is:

f(x) = (x_1 + 2 x_2 - 7)^2 + (2 x_1 + x_2 - 5)^2    (10)
The dimensions are x_1, x_2 ∈ [-10, 10] and f(x_best) = 0 at x_best = (1, 3), as shown in Figure 16. On this function PSO won, reaching the optimal solution faster, as given in Figure 17.
Figure 16. Booth image Figure 17. Function approximation
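A direct check of (10) and its stated optimum (illustrative sketch):

```python
def booth(x1, x2):
    # Booth function, eq. (10): x1, x2 in [-10, 10], minimum 0 at (1, 3)
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

print(booth(1, 3))  # → 0
```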
3.9. Easom function
The mathematical expression for this function is:
f(x) = -cos(x_1) cos(x_2) exp(-(x_1 - π)^2 - (x_2 - π)^2)    (11)
It is unimodal, with x_1, x_2 ∈ [-100, 100] and f(x_best) = -1 at x_best = (π, π), as shown in Figure 18. On this function the GA performs very badly, while PSO is excellent and reaches the target solution in a better way, as presented in Figure 19.
Figure 18. The Easom function    Figure 19. Comparison of the algorithms
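And a check of (11) at its optimum (illustrative Python; the function name is ours):

```python
import math

def easom(x1, x2):
    # Easom function, eq. (11): unimodal, x1, x2 in [-100, 100],
    # minimum -1 at (pi, pi)
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(easom(math.pi, math.pi))  # → -1.0
```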
Table 1 summarizes the average value and the standard deviation for both algorithms on the different suggested test functions; it is clear that PSO is dominant. Additional results, from one run during the work, are given in Table 2.
Table 1. The standard deviation (SD) and the average for different functions

Function        PSO SD      PSO average    GA SD       GA average
Beale           0           0              0.330022    0.20556
McCormick       2.48E-16    -1.9132        0.005451    -1.9086
Matyas          0           0              0.000572    0.00058
Mishra bird     0           -106.788       0.158935    -106.614
Holder table    1.814672    -17.5691       0           -19.2014
Eggholder       69.2471     -891.251       74.95725    -909.32
Schaffer        0.003303    0.29434        0           0.3002
Booth           0           0              0.214113    0.18532
Easom           5.48E-5     -1             0.740961    0.40972
Table 2. Different results for the suggested problems

Function        PSO best value   PSO best variables     GA best value   GA best variables
Beale           1.45e-008        3.0002, 0.5            0.2303          2.2412, 0.2647
McCormick       -1.9132          -0.5472, -1.5472       -1.9121         -0.5294, -1.5176
Matyas          7.229e-012       -0.1353, -0.1163       6.1515e-005     0.0392, 0.0392
Mishra bird     -106.7877        -3.1229, -1.5895       -106.7139       3.0980, -1.6059
Holder table    -19.2085         8.055, 9.6646          -19.2014        8.0392, 9.6863
Eggholder       -959.6407        512.00, 404.2318       -950.5854       475.8588, 431.6863
Schaffer        0.2923           1.2651, 0.1700         0.3002          -1.1765, -3.5294
Booth           3.0702e-010      1.00, 3.00             0.1576          1.2941, 2.7843
Easom           -1.00            3.1417, 3.1416         -1.2162         -0.3922, -0.3922
4. CONCLUSION
A particle swarm optimization test was carried out to highlight the ability of the algorithm to optimize difficult problems. The experiments included various mathematical functions, with details presented in this paper. In addition, PSO was compared with another algorithm to confirm that it can outperform the others in terms of accuracy and speed of convergence. Hence, this method can be used to solve optimization problems in many fields.
ACKNOWLEDGEMENTS
Many thanks to everybody at Southern Technical University who helped us to complete this paper.
REFERENCES
[1] T. El-Ghazali, "Metaheuristics: from design to implementation," vol. 74, John Wiley & Sons, 2009.
[2] H. W. Abdulrazak, I. A. Abed, and D. K. Shary. "Management of Microgrid System Based on Optimization
Algorithm," Journal of Physics: Conference Series, vol. 1773. no. 1, 2021.
[3] Z. Jozef, "Optimization Problems and Genetic Algorithms," Review of Business Information Systems, vol. 14, no. 3, 2010, doi: 10.19030/rbis.v14i3.491.
[4] G. M. Fadhil, I. A. Abed and R. S. Jasim, "Genetic Algorithm Utilization to Fine Tune the Parameters of PID
Controller," Kufa Journal of Engineering, vol. 12, no. 2, pp. 1-12, 2021, doi: 10.30572/2018/KJE/120201.
[5] D. Wang, D. Tan and L. Liu, "Particle swarm optimization algorithm: an overview," Soft Computing, vol. 22, pp. 387-408, 2018.
[6] M. Ozdemir, "Particle swarm optimization for continuous function optimization problems," International Journal of
Applied Mathematics Electronics and Computers, vol. 5, no. 3, pp. 47-52, 2017, doi: 10.18100/ijamec.2017331879.
[7] F. A. Jumaa, O. M. Neda, and M. A. Mhawesh, "Optimal distributed generation placement using artificial intelligence for improving active radial distribution system," Bulletin of Electrical Engineering and Informatics, vol. 10, no. 5, pp. 2345-2354, 2021, doi: 10.11591/eei.v10i5.2949.
[8] J. M. Garcia, C. A. Acosta and M. J. Mesa, "Genetic algorithms for mathematical optimization," Journal of Physics:
Conference Series, vol. 1448, no. 1, 2020.
[9] Y. A. G. Alyouzbaki and M. F. Al-Rawi, "Novel load balancing approach based on ant colony optimization
technique in cloud computing," Bulletin of Electrical Engineering and Informatics, vol. 10, no. 4, pp. 2320-2326,
2021, doi: 10.11591/eei.v10i4.2947.
[10] M. S. Sami, and H. A. Kubba, "Genetic algorithm based load flow solution problem in electrical power systems,"
Journal of Engineering, vol. 15 no. 4, pp. 4142-4162, 2009.
[11] B. Tarek, L. Slimani, and M. Belkacemi, "A genetic algorithm for solving the optimal power flow problem,"
Leonardo Journal of Sciences, vol. 4, pp. 44-58, 2004.
[12] M. Younes and M. Rahli, "On the choice genetic parameters with Taguchi method applied in economic power dispatch," Leonardo Journal of Sciences, vol. 9, pp. 9-24, 2006.
[13] D. Hermawanto, "Genetic algorithm for solving simple mathematical equality problem," arXiv preprint
arXiv:1308.4675, 2013.
[14] A. B. M. Nasiruzzaman and M. G. Rabbani, "A Genetic Algorithm Based Approach for Solving Optimal Power Flow Problem," International Conference on Electronics, Computer and Communication, 2008.
[15] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of ICNN'95 - International Conference on
Neural Networks, 1995, vol. 4, pp. 1942-1948, doi: 10.1109/ICNN.1995.488968.
[16] E. A. Kaur and E. K. Mandeep, "A comprehensive survey of test functions for evaluating the performance of particle
swarm optimization algorithm," International Journal of Hybrid Information Technology, vol. 8, no. 5, pp. 97-104,
2015.
[17] M. Khesti and L. Ding, "Particle Swarm Optimization Solution for Power System Operation Problems," Particle
Swarm Optimization with Applications. IntechOpen, 2017.
[18] S. M. Majercik, "GREEN-PSO: Conserving Function Evaluations in Particle Swarm Optimization," IJCCI, 2013.
[19] G. Tomasseti and L. Cagnina, "Particle swarm algorithms to solve engineering problems: a comparison of
performance," Journal of Engineering, vol. 2013, no. 435104, 2013, doi: 10.1155/2013/435104.
[20] M. R. Bonyadi and Z. Michalewicz, "Particle Swarm Optimization for Single Objective Continuous Space
Problems: A Review," Evolutionary Computation, vol. 25, no. 1, pp. 1-54, 2017, doi: 10.1162/EVCO_r_00180.
[21] M. N. Alam, "Particle swarm optimization: Algorithm and its codes in Matlab," ResearchGate, pp. 1-10, 2016.
[22] P. Erdoğmus, "Particle swarm optimization performance on special linear programming problems," Scientific
Research and Essays, vol. 5, no. 12, pp. 1506-1518, 2013.
[23] B. S. G. de Almeida and V. C. Leite, "Particle Swarm Optimization: A Powerful Technique for Solving Engineering Problems," Swarm Intelligence-Recent Advances, New Perspectives and Applications, IntechOpen, 2019.
[24] M. Najjarzadeh and A. Ayatollahi, "A comparison between Genetic Algorithm and PSO for linear phase FIR digital
filter design," 9th International Conference on Signal Processing, 2008, pp. 2134-2137, doi:
10.1109/ICOSP.2008.4697568.
[25] I. A. Abed, S. P. Koh, K. S. M. Sahari, S. K. Tiong, M. M. Ali and A. A. A. Kadhim, "A New Proposed Pendulum-
Like with Attraction-Repulsion mechanism Algorithm for Solving Optimization Problems," IOP Conference Series:
Materials Science and Engineering, vol. 881, no. 1, p. 012132, 2020.
BIOGRAPHIES OF AUTHORS
Issa Ahmed Abed received his BSc. and MSc. in Control and Computing from the Department of Electrical Engineering, University of Basrah, Iraq, in 2002 and 2005, respectively, and his PhD, also in Control and Computing, from Universiti Tenaga Nasional, Malaysia, in 2014. Currently, he is an Asst. Prof. and the head of the control and automation department at Engineering Technical College Basrah, Southern Technical University, Iraq. His areas of interest include artificial intelligence, solar energy, control systems, robotics, and image processing.
May Mohammed Ali received the B.Sc. and M.Sc. degrees in electrical engineering from Al-
Basrah University, Basrah, Iraq, in 1996 and 2000, respectively. She is currently an assistant
lecturer of control and automation, Southern Technical University, Engineering Technical
College Basrah, Iraq. She can be contacted at email: may.mohammed@stu.edu.iq
Afrah Abood Abdul Kadhim received her BSc. and MSc. in Power and Machine from the
Department of Electrical Engineering, University of Basrah, Iraq in 2002 and 2006,
respectively. Currently, she is an assistant lecturer at Basrah Engineering Technical College,
Southern Technical University, Iraq. Her areas of interest include Power, Machine, Control,
and Artificial Intelligence.