In this paper, dimensionality reduction in large datasets is achieved using the proposed distance-based Non-negative Matrix Factorization (NMF) technique, which is intended to solve the data dimensionality problem. Here, NMF and distance measurement aim to resolve the non-orthogonality problem caused by increased dataset dimensionality. The method initially partitions the datasets, organizes them into a defined geometric structure, and captures the dataset structure through a distance-based similarity measurement. The proposed method is designed to fit dynamic datasets and includes the intrinsic structure using data geometry. The complexity of the data is further reduced using an Improved Distance based Locality Preserving Projection. The proposed method is evaluated against existing methods in terms of accuracy, average accuracy, mutual information, and average mutual information.
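As a rough sketch of the kind of pipeline this abstract describes, the factorization step can be reproduced with scikit-learn's standard NMF; the paper's own distance-based NMF variant and the Improved Distance based Locality Preserving Projection step are not reproduced here, so the distance measure and all parameters below are assumptions:

```python
# Sketch: NMF-based dimensionality reduction followed by a distance-based
# similarity check. Hypothetical parameters; the paper's distance-based NMF
# variant and its IDLPP step are not reproduced here.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics import pairwise_distances

X = np.abs(np.random.rand(200, 50))        # stand-in dataset (non-negative)

model = NMF(n_components=10, init="nndsvd", random_state=0)
W = model.fit_transform(X)                 # reduced representation (200 x 10)
H = model.components_                      # basis matrix (10 x 50)

# Distance-based similarity in the reduced space (Euclidean here; the
# paper's specific distance measure is an assumption).
D = pairwise_distances(W, metric="euclidean")
print(D.shape)                             # (200, 200)
```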
SCAF – AN EFFECTIVE APPROACH TO CLASSIFY SUBSPACE CLUSTERING ALGORITHMS (ijdkp)
Subspace clustering discovers the clusters embedded in multiple, overlapping subspaces of high-dimensional data. Many significant subspace clustering algorithms exist, each having different characteristics caused by the use of different techniques, assumptions, and heuristics. A comprehensive classification scheme that considers all such characteristics to divide subspace clustering approaches into various families is essential. The algorithms belonging to the same family satisfy common characteristics. Such a categorization will help future developers better understand the quality criteria to be used and the similar algorithms against which to compare their proposed clustering algorithms. In this paper, we first propose the concept of SCAF (Subspace Clustering Algorithms' Family). Characteristics of a SCAF are based on classes such as cluster orientation and overlap of dimensions. As an illustration, we further provide a comprehensive, systematic description and comparison of a few significant algorithms belonging to the "Axis parallel, overlapping, density based" SCAF.
Multifractal analysis of human brain MR image (eSAT Journals)
Abstract: In computer programming, a code smell may be the origin of latent problems in source code. Detecting and resolving bad smells remains time-intensive for software engineers despite proposals on bad-smell detection and refactoring tools. Numerous code smells have been recognized, yet the sequence in which the detection and resolution of different kinds of code smells should be performed remains open, because software engineers do not know how to optimize this sequence. In this paper, a novel refactoring approach is proposed to improve the performance of programs. In this recommended approach, code smells are automatically detected and refactored. The simulation results show that a reduction in time over semi-automated refactoring is achieved when code smells are refactored using multi-step automated refactoring. Keywords: code smell, multi-step refactoring, detection, code resolution, restructuring
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Parametric Comparison of K-means and Adaptive K-means Clustering Performance ... (IJECEIAES)
Image segmentation plays a major role in analyzing the area of interest in image processing, and many researchers have used different types of techniques for analyzing images. One of the most widely used techniques is K-means clustering. In this paper we use two algorithms, K-means and an advanced variant of K-means called adaptive K-means clustering. Both algorithms are applied to different types of images with successful results. By comparing the execution time, PSNR, and RMSE values from the results of both algorithms, we show that the adaptive K-means clustering algorithm gives better results than K-means clustering in image segmentation.
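A minimal sketch of the comparison protocol described (plain K-means only; the adaptive variant is not specified in the abstract, and the image, cluster count, and peak value are placeholder assumptions):

```python
# Sketch: K-means image segmentation with PSNR/RMSE evaluation, the kind of
# comparison the abstract describes. The adaptive K-means variant itself is
# not specified, so only plain K-means is shown.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(img, k=4, seed=0):
    """Cluster pixel intensities and rebuild the image from cluster centers."""
    pixels = img.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(img.shape)

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)  # stand-in image
seg = kmeans_segment(img, k=4)

rmse = np.sqrt(np.mean((img - seg) ** 2))
psnr = 20 * np.log10(255.0 / rmse)          # peak value 255 for 8-bit images
print(f"RMSE={rmse:.2f}, PSNR={psnr:.2f} dB")
```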
Web image annotation by diffusion maps manifold learning algorithm (ijfcstjournal)
Automatic image annotation is one of the most challenging problems in machine vision. The goal of this task is to predict a number of keywords automatically for images captured in real data. Many methods are based on visual features in order to calculate similarities between image samples, but the computational cost of these approaches is very high, and they require many training samples to be stored in memory. To lessen this burden, a number of techniques have been developed to reduce the number of features in a dataset. Manifold learning is a popular approach to nonlinear dimensionality reduction. In this paper, we investigate the diffusion maps manifold learning method for the web image auto-annotation task. The diffusion maps method is used to reduce the dimension of some visual features. Extensive experiments and analysis on the NUS-WIDE-LITE web image dataset with different visual features show how this manifold learning dimensionality reduction method can be applied effectively to image annotation.
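A compact sketch of a diffusion maps embedding, assuming a Gaussian kernel and a hand-picked bandwidth; the paper's visual features and NUS-WIDE-LITE handling are not reproduced:

```python
# Sketch: minimal diffusion maps with NumPy. The kernel bandwidth eps and
# diffusion time t are assumptions, not the paper's settings.
import numpy as np

def diffusion_maps(X, n_components=2, eps=1.0, t=1):
    # Gaussian kernel on pairwise squared distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / eps)
    # Row-normalize to get a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial first eigenvector; scale by eigenvalues^t
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

X = np.random.rand(100, 20)                # stand-in visual features
Y = diffusion_maps(X, n_components=2)
print(Y.shape)                             # (100, 2)
```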
Geometric Correction for Braille Document Images (csandit)
Image processing is an important research area in computer vision. Clustering is an unsupervised learning task that can also be used for image segmentation, and although many methods for image segmentation exist, segmentation remains one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm that provides image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM); KFCM provides image clustering and improves accuracy significantly compared with the classical fuzzy c-means algorithm. The new algorithm, called the Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy clustering approach aiming to guarantee noise insensitivity and image detail preservation. The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images using the clustering method, segmenting that portion separately using a content level set approach. The purpose of designing this system is to produce better segmentation results for images corrupted by noise, so that it can be useful in various fields like medical image analysis, such as tumor detection, study of anatomical structure, and treatment planning.
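A minimal sketch of a Gaussian-kernel fuzzy c-means in the spirit of the GKFCM described above, with assumed values for the fuzzifier m, kernel width sigma, and iteration count; the content level set post-processing is omitted:

```python
# Sketch: Gaussian-kernel fuzzy c-means. Parameters (m, sigma, iters) are
# assumptions; the level-set segmentation step is not included.
import numpy as np

def gk_fcm(X, c=3, m=2.0, sigma=1.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]      # initial centers
    for _ in range(iters):
        K = np.exp(-((X[:, None] - V[None, :]) ** 2).sum(-1) / sigma**2)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)      # kernel-space distance^2
        U = 1.0 / (d2 ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
        W = (U ** m) * K                             # kernel-weighted updates
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

pixels = np.random.rand(500, 1)                      # stand-in intensities
U, V = gk_fcm(pixels, c=3)
print(U.shape, V.ravel())
```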
Discretization methods for Bayesian networks in the case of the earthquake (journalBEEI)
Bayesian networks are graphical probability models that represent interactions between variables. This model has been widely applied in various fields, including disaster studies. In applying field data, we often find a mixture of variable types, that is, a combination of continuous and discrete variables. For data processing using hybrid and continuous Bayesian networks, all continuous variables must be normally distributed. If the normality condition is not satisfied, we offer a solution: discretize the continuous variables, then continue the process with discrete Bayesian networks. The discretization of a variable can be done in various ways, including equal-width, equal-frequency, and K-means. The combination of BN and K-means is a new contribution of this study, called the K-means Bayesian networks (KMBN) model. In this study, we compared the three methods of discretization using a confusion matrix. Based on the earthquake damage data, the K-means clustering method produced the highest level of accuracy. This result indicates that K-means is the best method for discretizing the data used in this study.
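The three discretization strategies compared in this abstract map directly onto scikit-learn's KBinsDiscretizer; a short sketch with placeholder data and bin count:

```python
# Sketch: equal-width, equal-frequency, and K-means discretization of a
# continuous variable. Bin count and data are placeholders.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

x = np.random.randn(300, 1) * 10           # stand-in continuous variable

for strategy in ("uniform", "quantile", "kmeans"):
    # uniform = equal-width, quantile = equal-frequency, kmeans = K-means bins
    disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy=strategy)
    codes = disc.fit_transform(x).ravel()
    print(strategy, np.bincount(codes.astype(int)))
```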
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, as well as new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Credal Fusion of Classifications for Noisy and Uncertain Data (IJECEIAES)
This paper reports on an investigation into classification techniques for noisy and uncertain data. Classification is not an easy task, and it is a significant challenge to discover knowledge from uncertain data; several problems arise. Often we do not have a good or large learning database for supervised classification, and when training data contain noise or missing values, classification accuracy is affected dramatically. Extracting groups from such data is also difficult: the groups overlap and are not well separated from each other. Another problem is the uncertainty due to measuring devices. Consequently, the classification model is not robust enough to classify new objects. In this work, we present a novel classification algorithm to address these problems. We materialize our main idea by using belief function theory to combine classification and clustering; this theory handles well the imprecision and uncertainty linked to classification. Experimental results show that our approach significantly improves the quality of classification on generic databases.
Mine Blood Donors Information through Improved K-Means Clustering (ijcsity)
The number of accidents and health diseases is increasing at an alarming rate, resulting in a huge increase in the demand for blood, so there is a need for organized analysis of blood donor databases and blood bank repositories. Clustering analysis is one of the data mining applications, and the K-means clustering algorithm is fundamental to modern clustering techniques. K-means is a traditional, iterative algorithm: at every iteration, it computes the distance from the centroid of each cluster to each data point. This paper improves the original K-means algorithm by choosing the initial centroids according to the distribution of the data. Results and discussion show that the improved K-means algorithm produces accurate clusters in less computation time to find donor information.
Smooth Support Vector Machine for Suicide-Related Behaviours Prediction (IJECEIAES)
Suicide-related behaviours need to be prevented in psychiatric patients, and prediction of those behaviours based on patient medical records would be very useful for prevention by the psychiatric hospital. This research focused on developing this prediction at the only psychiatric hospital in Bali Province by using the Smooth Support Vector Machine method, a further development of the Support Vector Machine. The method used 30,660 patient medical records from the last five years. Data cleaning gave 2,665 relevant records for this research, including 111 patients who have suicide-related behaviours and are under active treatment. The cleaned data were then transformed into ten predictor variables and a response variable, and training and testing splits of the transformed data were used for model building and accuracy evaluation. Based on the experiment, the best average accuracy of 63% is obtained by using 30% of the relevant data as testing data and by using training data with a one-to-one ratio between patients who have suicide-related behaviours and patients who do not. In future work, accuracy improvement needs to be confirmed using the Reduced Support Vector Machine method, a further development of the Smooth Support Vector Machine.
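A hedged sketch of the balanced-training protocol described, using a standard SVM as a stand-in since the Smooth Support Vector Machine is not available in scikit-learn; the data here are synthetic placeholders:

```python
# Sketch: 30% test split plus a one-to-one balanced training set, with a
# standard SVC standing in for the paper's Smooth SVM. Synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(2665, 10)                       # ten predictor variables
y = np.zeros(2665, dtype=int); y[:111] = 1         # 111 positive cases

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# One-to-one ratio: keep all positives, subsample an equal number of negatives.
pos = np.where(y_tr == 1)[0]
neg = np.random.default_rng(0).choice(np.where(y_tr == 0)[0],
                                      len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = SVC(kernel="rbf").fit(X_tr[idx], y_tr[idx])
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```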
Unsupervised Clustering Classify the Cancer Data with the Help of FCM Algorithm (IOSR Journals)
There is structure in nature, and it is believed that there is an underlying structure in most phenomena to be understood. In image recognition, molecular biology applications such as protein folding and 3D molecular structure, cancer detection, and many other areas, an underlying structure exists. By finding structure, one classifies the data according to similar patterns, features, and other characteristics; this general idea is known as classification. In classification, also termed clustering, the most important issue is deciding what criteria to classify against. This paper presents fuzzy classification techniques to classify cancer disease data. Cluster analysis, cluster validity, and the fuzzy C-means (FCM) technique are discussed and applied to cancer data classification.
Reduct generation for the incremental data using rough set theory (csandit)
In today's changing world a huge amount of data is generated and transferred frequently. Although the data is sometimes static, most commonly it is dynamic and transactional, with new data constantly being added to the old/existing data. To discover knowledge from this incremental data, one approach is to run the algorithm repeatedly on the modified data sets, which is time consuming. The paper proposes a dimension reduction algorithm that can be applied in a dynamic environment for generation of a reduced attribute set as a dynamic reduct. The method analyzes the new dataset when it becomes available and modifies the reduct accordingly to fit the entire dataset. The concepts of discernibility relation, attribute dependency, and attribute significance of Rough Set Theory are integrated for the generation of the dynamic reduct set, which not only reduces the complexity but also helps to achieve higher accuracy of the decision system. The proposed method has been applied on a few benchmark datasets collected from the UCI repository and a dynamic reduct is computed. Experimental results show the efficiency of the proposed method.
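A small sketch of the rough-set machinery the abstract builds on (attribute dependency over equivalence classes and a greedy reduct); the incremental update of the reduct itself is not shown, and the decision table is a toy example:

```python
# Sketch: rough-set attribute dependency (gamma) and a greedy reduct.
from collections import defaultdict

def partition(rows, attrs):
    """Equivalence classes induced by the given attribute indices."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, attrs, decision):
    """Fraction of objects whose decision is determined by attrs."""
    pos = sum(len(b) for b in partition(rows, attrs)
              if len({rows[i][decision] for i in b}) == 1)
    return pos / len(rows)

def greedy_reduct(rows, n_attrs, decision):
    reduct, full = [], dependency(rows, range(n_attrs), decision)
    while dependency(rows, reduct, decision) < full:
        best = max((a for a in range(n_attrs) if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# Toy decision table: condition attributes at indices 0-2, decision at 3.
table = [(0, 1, 0, 'y'), (0, 1, 1, 'y'), (1, 0, 0, 'n'), (1, 1, 0, 'n')]
print(greedy_reduct(table, 3, 3))          # e.g. [0]: attribute 0 suffices
```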
Finding Relationships between the Our-NIR Cluster Results (CSCJournals)
The problem of evaluating node importance in clustering has been an active research topic in recent years, and many methods have been developed. Most clustering algorithms deal with general similarity measures; however, in real situations the data often change over time. Clustering this type of data not only decreases the quality of the clusters but also disregards the expectations of users, who usually require recent clustering results. In this regard we previously proposed the Our-NIR method, which improves on the method proposed by Ming-Syan Chen, as demonstrated by its node importance results; computing node importance is very useful in clustering categorical data, but the method still has deficiencies in data labeling and outlier detection. In this paper we modify the Our-NIR method for evaluating node importance by introducing a probability distribution, and show by comparing the results that this performs better.
Classification of MR medical images Based Rough-Fuzzy KMeans (IOSRJM)
Image classification is very significant for many computer vision tasks and has attracted considerable attention from industry and research in recent years. We explore an algorithm based on the approximation of Fuzzy-Rough K-means (FRKM) to bring to light data dependence, data reduction, estimation of the classification (partition) of the set, and induction of rules from image databases. Rough set theory provides a successful approach for handling uncertainty and has furthermore been applied to image classification, feature similarity, dimensionality reduction, and style categorization. The suggested algorithm is derived from a K-means classifier using rough set theory for segmentation (or processing) of the image, which is moreover split into two portions. Experimental results show that the suggested method performs well and improves the classification outputs in the fuzzy areas of the image. The results show that FRKM performs better than using rough sets alone: it achieves 94.4% image classification accuracy, compared with 88.25% using only rough sets.
A Novel Clustering Method for Similarity Measuring in Text Documents (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
EDGE DETECTION IN SEGMENTED IMAGES THROUGH MEAN SHIFT ITERATIVE GRADIENT USIN... (ijscmcj)
In this paper, we propose a new method for edge detection in images obtained from the mean shift iterative algorithm. Comparable, proportional, and symmetrical images are defined, and the importance of ring theory is explained. An equivalence relation among proportional images is defined to group images into equivalence classes. The length of the mean shift vector is used to quantify the homogeneity of the neighborhoods, which gives a measure of how uniform the regions composing the image are. Edge detection is carried out using the mean shift gradient based on symmetrical images: the differences among gray-level values are accentuated or attenuated to enhance the contours of the regions of interest. The images chosen for the experiments were standard images and real images (cerebral hemorrhage images). The results were compared with the Canny detector and showed good performance with respect to edge continuity.
A Mathematical Programming Approach for Selection of Variables in Cluster Ana... (IJRES Journal)
Data clustering is a common technique for statistical data analysis, defined as a class of statistical techniques for classifying a set of observations into distinct groups. Cluster analysis seeks to minimize within-group variance and maximize between-group variance. In this study we formulate a mathematical programming model that chooses the most important variables in cluster analysis. A nonlinear binary model is suggested to select the most important variables in clustering a set of data. The idea of the suggested model is to cluster the data by minimizing the distance between observations within groups, with indicator variables used to select the most important variables in the cluster analysis.
Semi-Supervised Discriminant Analysis Based On Data Structure (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
QUANTUM CLUSTERING-BASED FEATURE SUBSET SELECTION FOR MAMMOGRAPHIC I... (ijcsit)
In this paper, we present an algorithm for feature selection, labeled QC-FS (Quantum Clustering for Feature Selection), which performs the selection in two steps. First, the original feature space is partitioned into groups of similar features using the Quantum Clustering algorithm. Then a representative of each cluster is selected using similarity measures such as the correlation coefficient (CC) and mutual information (MI); the feature that maximizes this information is chosen by the algorithm.
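A sketch of the two-step scheme, with ordinary K-means standing in for the Quantum Clustering step (QC is not in scikit-learn) and mutual information with the class label used to pick each cluster's representative; data are placeholders:

```python
# Sketch: cluster features, then keep one representative per cluster by MI.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

X = np.random.rand(300, 30)                   # stand-in feature matrix
y = np.random.randint(0, 2, 300)              # stand-in class labels

# Step 1: group similar features by clustering the transposed feature matrix.
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X.T)

# Step 2: within each group, keep the feature with maximal MI with the label.
mi = mutual_info_classif(X, y, random_state=0)
selected = [int(np.arange(30)[groups == g][np.argmax(mi[groups == g])])
            for g in np.unique(groups)]
print(selected)
```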
Blind Image Separation Using Forward Difference Method (FDM) (sipij)
In this paper, blind image separation is performed by exploiting the property of sparseness to represent images. A new sparse representation called the forward difference method is proposed. It is known that most independent component analysis (ICA) basis functions extracted from images are sparse and give an unreliable sparseness measure. In the proposed method, the image mixture is first transformed into sparse images. These images are divided into blocks, and for each block the ℓ0-norm sparseness measure is applied. The block with the greatest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
A COMPARATIVE STUDY ON DISTANCE MEASURING APPROACHES FOR CLUSTERING (IJORCS)
Clustering plays a vital role in various areas of research such as data mining, image retrieval, bio-computing, and many others, and the distance measure plays an important role in clustering data points. Choosing the right distance measure for a given dataset is a big challenge. In this paper, we study various distance measures and their effect on different clustering algorithms. This paper surveys existing distance measures for clustering and presents a comparison between them based on application domain, efficiency, benefits, and drawbacks. This comparison helps researchers quickly decide which distance measure to use for clustering. We conclude this work by identifying trends and challenges of research and development towards clustering.
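For illustration, a few of the distance measures such a survey typically compares, evaluated on one arbitrary pair of points with scipy:

```python
# Sketch: common clustering distance measures on the same pair of vectors.
import numpy as np
from scipy.spatial import distance

u = np.array([1.0, 0.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 2.0, 1.0])

for name, fn in [("euclidean", distance.euclidean),
                 ("cityblock", distance.cityblock),   # Manhattan
                 ("chebyshev", distance.chebyshev),
                 ("cosine", distance.cosine),
                 ("canberra", distance.canberra)]:
    print(f"{name:>10}: {fn(u, v):.4f}")
```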
Connectivity-Based Clustering for Mixed Discrete and Continuous Data (IJCI JOURNAL)
This paper introduces a density-based clustering procedure for datasets with variables of mixed type. The proposed procedure, which is closely related to the concept of shared neighbourhoods, works particularly well in cases where the individual clusters differ greatly in terms of the average pairwise distance of the associated objects. Using a number of concrete examples, it is shown that the proposed clustering algorithm succeeds in allowing the identification of subgroups of objects with statistically significant distributional characteristics.
A Review on Classification Based Approaches for Steganalysis Detection (Editor IJCATR)
This paper presents two scenarios of image steganalysis. In the first scenario, an alternative feature set for steganalysis is based on the rate-distortion characteristics of images. Here, the features are based on two key observations: i) data embedding typically increases the image entropy in order to encode the hidden messages; ii) data embedding methods are limited to the set of small, imperceptible distortions. The proposed feature set is used as the basis of a steganalysis algorithm, and its performance is investigated using different data hiding methods. In the second scenario, a new blind approach to image steganalysis is based on the contourlet transform and a nonlinear support vector machine. Properties of the contourlet transform are used to extract image features; the important aspect of this paper is that it uses the minimum number of features in the transform domain and gives better accuracy than many existing steganalysis methods. The efficiency of the proposed method is demonstrated through experimental results, and its performance is compared with the contourlet-based steganalyzer (WBS). The results show that the proposed method is very efficient in terms of detection accuracy and computational cost.
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a σ value, a hyper-parameter which can be manually defined and manipulated to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is a challenging task because normally such expressions are impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is useful not only for the number of solutions to look for by numerical means; it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics, and other applications.
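A numerical sketch of the quantum potential in one dimension, with the Laplacian taken by finite differences; sigma and the sample points are arbitrary choices, and the local minima of V mark candidate cluster centers:

```python
# Sketch: the QC quantum potential V = (sigma^2/2) * psi''/psi (up to an
# additive constant), evaluated numerically on a 1-D grid.
import numpy as np

pts = np.array([-2.0, -1.8, -2.2, 3.0, 3.1, 2.9])     # two apparent clusters
sigma = 0.5
x = np.linspace(-5, 6, 1101)

# Parzen-style wavefunction: a Gaussian on every data point.
psi = np.exp(-(x[:, None] - pts[None, :]) ** 2 / (2 * sigma**2)).sum(axis=1)

lap = np.gradient(np.gradient(psi, x), x)              # finite-difference psi''
V = 0.5 * sigma**2 * lap / psi

# Local minima of the potential mark candidate cluster centers.
mins = x[1:-1][(V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])]
print(np.round(mins, 2))                               # near -2 and 3
```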
Multimodal Biometrics Recognition by Dimensionality Diminution Method (IJERA Editor)
A multimodal biometric system utilizes two or more biometric modalities, e.g., face, ear, fingerprint, signature, and palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP) in this paper. DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms including PCA, LDA and MFA.
Geoid height determination is one of the major problems of geodesy because the usage of satellite techniques in geodesy is increasing. Geoid heights can be determined using different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. In this study, how to construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two different kinds of fuzzy models (two inputs one output and three inputs one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding and droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using this switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter can be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
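A hedged sketch of a NARX-style identification built from lagged inputs and outputs and fitted with a small neural network; the battery data, lag order, and network size are placeholders, not the paper's setup:

```python
# Sketch: NARX-style regression; each row holds past outputs y(t-1..t-3)
# and past inputs u(t-1..t-3), the target is y(t). Synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
current = rng.uniform(-1, 1, 500)                    # exogenous input u(t)
voltage = np.convolve(current, [0.5, 0.3, 0.2], "same") + 3.7  # stand-in y(t)

lags = 3
rows = [np.r_[voltage[t - lags:t], current[t - lags:t]]
        for t in range(lags, len(voltage))]
X, y = np.array(rows), voltage[lags:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X[:400], y[:400])
print("R^2 on held-out data:", round(model.score(X[400:], y[400:]), 3))
```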
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and draw significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods, based on the comparison of all criteria together. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection methods to select and implement in power systems with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
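A short sketch of the AHP computation underlying such weights: priorities from the principal eigenvector of a pairwise comparison matrix, plus a consistency check; the 3x3 judgment matrix is a made-up example, not the paper's comparison of detection methods:

```python
# Sketch: AHP priority weights via the principal eigenvector, with the
# consistency ratio CR = CI / RI (RI = 0.58 for a 3x3 matrix).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])              # hypothetical pairwise judgments

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                  # priority weights

n = len(A)
ci = (vals.real[k] - n) / (n - 1)             # consistency index
cr = ci / 0.58                                # random index for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```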
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38%, respectively, compared to conventional methods. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
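A minimal sketch of the unmodified perturb-and-observe loop against a toy P-V curve; the step size and power model are placeholders, and the paper's fuzzy step-size modification is not reproduced:

```python
# Sketch: plain P&O MPPT. Perturb the voltage; if power drops, reverse
# the perturbation direction. Toy unimodal power curve, max near 30 V.
def pv_power(v):
    """Toy P-V curve with a maximum of about 100 W at v = 30 V."""
    return max(0.0, v * (60.0 - v)) / 9.0

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iters):
        v += direction * step                 # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                        # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")   # oscillates near 30 V
```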
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller is effective with small synchronous tracking errors and that the chattering phenomenon is significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting the comprehensive debugging tools needed to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapid, remote monitoring of solar cell system status parameters (solar irradiance, temperature, and humidity) is critical to enhancing system efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype based on an embedded system built around the NodeMCU ESP8266 (ESP-12E) was realized and tested experimentally. Three different regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of the monitoring results obtained using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. Moreover, the solar power radiation results of the three considered regions strongly favor Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such datasets are difficult to source in a large network like IoT. To address this problem, the proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research develops an incubator system that integrates the internet of things (IoT) and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the Eclipse Mosquitto IoT broker using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory (LSTM) network method and displayed in a web application through an application programming interface (API) service. The experiments produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted values on the test data. The best results are obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions. It is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the three most efficient approaches are CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 convolutional networks at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security domains such as the military and the battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. The performance and design of the suggested framework are validated using the compact Contiki/Cooja simulator. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJ at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real-world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers provide a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can differ from the non-membership value, in which case we can model the problem using intuitionistic fuzzy numbers, as they provide flexibility by defining both a membership and a non-membership function. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the polygonal fuzzy numbers previously found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. The method is given in a simple tableau formulation and then applied to numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The potential of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG corpus. To test the viability of this approach, two types of experiments were carried out: binary classification (preictal, ictal, or postictal, each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, the Freiburg database's time-domain signals yielded an accuracy of 79.7%, and the highest accuracy of 94.02% was achieved when classifying the interictal versus the preictal stage in the TUH EEG database.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but increasing acceleration and maneuverability lead some drivers toward a dangerous driving style. Under these conditions, a method for tracking driver behavior is highly relevant. The article provides an overview of existing methods and models for assessing the operation of motor vehicles and driver behavior. On this basis, a combined algorithm for recognizing driving style is proposed. A set of input data was formed comprising 20 descriptive features covering the environment, driver behavior, and vehicle operating characteristics, collected using OBD-II. The generated dataset is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning the driving characteristics to a particular cluster makes it possible to move to the individual indicators of a specific driver and to account for individual driving habits. Applying the method makes it possible to identify potentially dangerous driving styles, which can help prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been employed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve the inherent semantic features of objects. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining intrinsic object qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist’s survey of the valley before construction through all involved disciplines, including fluid dynamics, structural engineering, generation, and mains frequency regulation, to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, and customers increasingly expect to find a business online with instant access to its products or services.
Online Grocery Store is an e-commerce website which retails various grocery products. This project allows viewing the various products available and enables registered users to purchase desired products instantly using the Paytm and UPI payment processors (Instant Pay) or to place an order using the Cash on Delivery (Pay Later) option. The project provides easy access for Administrators and Managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective of developing a basic shopping cart website for consumers and of understanding the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is hard to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to perform the above-mentioned tasks.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the cosmetic products that have reached the minimum quantity and also keep track of the expiry date of each cosmetic product. It should help the employees find the rack number in which a product is placed, making operations faster and more efficient.
from the datasets [28-30]. Feature reduction in clinical datasets is discussed in [31], and [32] explains dimensionality reduction in kernel PCA.
In this paper, we propose an Improved Distance based Locality Preserving Projections (IDLPP) technique for reducing datasets that possess high dimensionality. The notion of the proposed system was inspired by the idea of LPP. In this paper, NMF is used for eliminating the low-dimensional features. The distance estimation is computed using a probabilistic distance measurement, which represents the expected distance between two different data samples.
The main contributions involve the following:
1. The proposed solution finds the similarity between the data samples using squared distance
representation.
2. The low dimensional features are eliminated using NMF.
3. Finally, the proposed IDLPP technique is compared against other LPP methods using accuracy, average
accuracy, NMI and average NMI.
The outline of the paper is as follows: Section 2 gives an outline of LPP with the data partitioning technique and the NMF metric estimation. Section 3 discusses the similarity measurement based on the distance between the nodes. Section 4 evaluates IDLPP against other LPP methods experimentally and discusses the results. Finally, Section 5 concludes the paper.
2. LOCALITY PRESERVING PROJECTIONS
LPP is an unsupervised method used as a dimensionality reduction technique in data mining with larger datasets. This method handles the structure of such datasets better than principal component analysis. Further, the local dataset structure is preserved through the construction of adjacency graphs using the k-nearest neighbor algorithm.
2.1. Data partitioning
Consider two data samples $x_i$ and $x_j$ in a large-dimensional dataset, which lie in close proximity. The distance between these two samples is found through the k-nearest neighbor algorithm. This forms an edge between the data samples, and the edge weight between the two samples is computed as

$$S_{ij} = e^{-\|x_i - x_j\|^2 / t} \qquad (1)$$

Assume that the sample set at the parent node is represented as a matrix $X = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^{n \times d}$, where $n$ is the total number of examples in the sample set. The sample set is divided into two subsets, i.e., the left and right child nodes, based on a decision, represented as $X_1 \in \mathbb{R}^{n_1 \times d}$ and $X_2 \in \mathbb{R}^{n_2 \times d}$. Each instance has its own attributes, which are weighted through a combination weight vector $w$. The projection of a sample point $x$ in the matrix $X$ along the orientation of the weight vector $w$ is $P(x) = w \cdot x$. After defining the split value $p$, the matrix $X$ is divided into $X_1$ and $X_2$ based on the projection values $\{P(x), x \in X\}$:

$$x \in X_1 \ \text{if} \ P(x) = w \cdot x \le p, \qquad x \in X_2 \ \text{if} \ P(x) = w \cdot x > p$$

and $p$ is taken as the median $m$ of all projections of the matrix $X$: $p = m = \mathrm{median}\{P(x_i), x_i \in X, i = 1, 2, \ldots, n\}$.

The similarity matrix $S = [S_{ij}]$, $i, j = 1, \ldots, N$, obtained from (1) gives the similarity estimation between the $N$ data samples.

Assume two different samples $x_i$ and $x_j$ lie in close proximity in a subspace; then the new data samples $y_i$ and $y_j$ will lie in close proximity in the new subspace. Therefore, the projection vector $a$ is estimated by minimizing

$$\frac{1}{2} \sum_{i,j=1}^{N} (y_i - y_j)^2 S_{ij} = \frac{1}{2} \sum_{i,j=1}^{N} (a^T x_i - a^T x_j)^2 S_{ij}, \quad i = 1, 2, \ldots, N \qquad (2)$$

where $y_i = a^T x_i$ for the sample matrix $X$.
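As a minimal illustrative sketch (our own Python with hypothetical function names; the paper gives no code), the heat-kernel weighting of equation (1) and the median-based split of this subsection can be written as:

```python
import numpy as np

def heat_kernel_similarity(X, k=5, t=1.0):
    """Equation (1): S_ij = exp(-||x_i - x_j||^2 / t) for the k nearest
    neighbors of each sample, zero elsewhere (dense matrix for clarity)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    S = np.zeros(sq.shape)
    for i in range(X.shape[0]):
        nbrs = np.argsort(sq[i])[1:k + 1]        # skip the sample itself
        S[i, nbrs] = np.exp(-sq[i, nbrs] / t)
    return np.maximum(S, S.T)                    # symmetrize the kNN graph

def median_split(X, w):
    """Section 2.1 split: project each row on w and split at the median p."""
    P = X @ w                                    # P(x) = w . x for every sample
    p = np.median(P)                             # p = m = median{P(x_i)}
    return X[P <= p], X[P > p]                   # X1 and X2
```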
Further, the diagonal matrix $D$ is introduced into (2) to attain the following relation using a Laplacian matrix, which is given by

$$\frac{1}{2} \sum_{i,j} (y_i - y_j)^2 S_{ij} = \sum_i a^T x_i D_{ii} x_i^T a - \sum_{i,j} a^T x_i S_{ij} x_j^T a = a^T X (D - S) X^T a = a^T X L X^T a \qquad (3)$$

where $D$ is the diagonal matrix with $D_{ii} = \sum_j S_{ij}$, and $L = D - S$ is the Laplacian matrix.

The diagonal matrix is further used as a constraint to obtain the objective function of LPP, which is given by the following condition:

$$a^* = \arg\min_a a^T X L X^T a \quad \text{s.t.} \quad a^T X D X^T a = 1 \qquad (4)$$

The optimal projection vector $a$ is found by solving the generalized eigenvalue problem:

$$X L X^T a = \lambda X D X^T a \qquad (5)$$
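A hedged sketch of solving the generalized eigenvalue problem (5) with an off-the-shelf symmetric eigensolver follows; the data layout (samples as columns) and the small diagonal jitter are our assumptions, added for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def lpp_directions(X, S, n_components=2):
    """Solve X L X^T a = lambda X D X^T a, equations (3)-(5).
    X: (d, n) matrix with samples as columns; S: (n, n) similarity matrix."""
    D = np.diag(S.sum(axis=1))                   # D_ii = sum_j S_ij
    L = D - S                                    # Laplacian L = D - S
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-9 * np.eye(X.shape[0])  # jitter keeps B positive definite
    vals, vecs = eigh(A, B)                      # generalized symmetric eigenproblem
    return vecs[:, :n_components]                # eigenvectors a for smallest eigenvalues
```

The eigenvectors associated with the smallest eigenvalues are kept, since the objective in (4) is minimized.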
2.2. NMF metric estimation
Assume the optimal projection vector $a$ is approximated and applied over the feature space $G$, which represents the feature vectors $F$ of the sample data $X$. Each feature vector is normalized so that $f^T f = 1$, and the Gram matrix ($F^T F$) of the normalized feature vectors is then found using a metric $M$:

$$M = F^T F, \quad \text{s.t.} \ f_l^T f_l = 1, \ \forall l = 1, \ldots, q \qquad (6)$$

The label information is avoided by using the metric $M$ that estimates the Gram matrix, and the approximation of the sample data vectors over the feature space is used to obtain the metric in the feature space, i.e., $M = F^T F$.
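For the NMF step, a sketch under our reading of equation (6) (features as columns of F, each normalized to unit length; scikit-learn's NMF stands in for whatever factorization the authors used) might be:

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_gram_metric(X, q=10):
    """Factor nonnegative data X (n_samples, n_features) into q latent features,
    normalize each feature vector f_l so f_l^T f_l = 1, and return M = F^T F."""
    F = NMF(n_components=q, init="nndsvda", random_state=0).fit_transform(X)
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + 1e-12)  # unit-norm columns
    return F.T @ F                               # (q, q) metric M, with diag(M) = 1
```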
3. DLPP BASED SIMILARITY MEASUREMENT
The Euclidean distance $\eta$ between the vectors $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})^T$ and $X_j = (x_{j1}, x_{j2}, \ldots, x_{jD})^T$ is given by

$$\eta = \|X_i - X_j\|_2 = \sqrt{z}, \quad z = \sum_{d=1}^{D} (x_{id} - x_{jd})^2$$

where $z$ is the squared distance between the vectors $X_i$ and $X_j$.

The squared distance $z$ is estimated between any two vectors $X_i, X_j \in X$ with one or more missing values. Hence, we assume that the vectors $X_i$ and $X_j$ are independent. Since the distance $\eta$ is a transform of the vectors $X_i$ and $X_j$, it is regarded as a random variable. This takes into account the missing values, which are modeled below. Considering the distance $\eta$ as a non-negative random variable, the expected distance is given in terms of a probability density function $p(\eta)$:

$$E(\eta) = \int_{0}^{\infty} \eta \, p(\eta) \, d\eta$$
The statistical model used to resolve the above integral expresses the squared distance as a sum of squared random variables:

$$z = \sum_{d=1}^{D} \varphi_d^2$$

Assume a component, say $x_{id}$ or $x_{jd}$, is missing in the given data space; then the value of $z$ is considered as the summation of squared random variables ($\varphi^2$). Following [18], the distribution of the summed $\varphi^2$ is assumed to be a Gamma distribution iff the PDF of the random variable $\varphi$ is of the form

$$p(\varphi) = |\varphi|^{2\alpha - 1} \exp(-\varphi^2 / \beta)\, h(\varphi), \quad \text{so that} \ \varphi^2 \sim \mathrm{Gamma}(\alpha, \beta)$$

where $\alpha$ and $\beta$ are the distribution parameters and the function $h$ of the random variable is tied to a constant $\zeta$:

$$\forall \varphi : h(\varphi) + h(-\varphi) = \zeta$$

Since $z$ follows a Gamma distribution, it is reasonable to choose a Nakagami [12] distribution for the expected value of $\eta$. The distance is considered a Nakagami random variable, i.e., $\eta \sim \mathrm{Nakagami}(m, \Omega)$, which is obtained by using $m = \alpha$ and $\Omega = \alpha\beta$.

The Nakagami distribution is a function of two parameters (shape and spread) that models scattered data reaching the receiver through multiple paths. Based on the assumption $\eta \sim \mathrm{Nakagami}(m, \Omega)$, the expected value of the distance, $E(\eta)$, is given as

$$E(\eta) = \frac{\Gamma(m + 0.5)}{\Gamma(m)} \left(\frac{\Omega}{m}\right)^{1/2}$$

where $m$ is the shape parameter of the Nakagami distribution and $\Omega$ is the spread parameter, which derives from the Gamma distribution.
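As a minimal numerical sketch (ours; matching α and β to the sample mean and variance of z is our assumption, not a step stated in the paper), the expected distance E(η) can be computed as:

```python
import numpy as np
from scipy.special import gamma

def expected_distance(z_mean, z_var):
    """E(eta) for eta = sqrt(z) with z ~ Gamma(alpha, beta):
    eta ~ Nakagami(m, Omega) with m = alpha, Omega = alpha * beta."""
    alpha = z_mean ** 2 / z_var          # moment-matched shape
    beta = z_var / z_mean                # moment-matched scale
    m, omega = alpha, alpha * beta       # Nakagami shape and spread
    return gamma(m + 0.5) / gamma(m) * np.sqrt(omega / m)

# Example: a mean squared distance of 4.0 with variance 2.0 gives E(eta) ~ 1.97,
# slightly below sqrt(4.0) = 2.0, as Jensen's inequality requires.
print(expected_distance(4.0, 2.0))
```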
4. EXPERIMENTAL EVALUATION
The proposed IDLPP method is tested against three datasets, namely the 20 Newsgroups data shown in Figure 1, the Reuters 21578 data shown in Figure 2, and the R52 data shown in Figure 3. Initially, the data is preprocessed using the trunc5 stemming technique and a POS tagger. Then stop words are removed using a stop word removal technique, and the remaining words are accepted based on mutual information. The dataset sample selection is given in Table 1.
Figure 1. Attributes of 20 newsgroups dataset
4.1. Result discussion
Figure 4 shows the accuracy of IDLPP and the existing LPP methods over 20 samples. Figure 5 shows the average accuracy of IDLPP and the existing LPP methods over the three datasets, i.e., the 20 Newsgroups, Reuters 21578, and R52 datasets. The results show that the proposed method obtains a higher accuracy rate than the other methods. The discarding of irrelevant feature vectors from the dataset by the proposed method is more efficient and more robust than in the other existing LPP methods, which is evident from the results. Figure 6 shows the NMI of IDLPP and the existing LPP methods over 20 samples. Figure 7 shows the average NMI of IDLPP and the existing LPP methods over the three datasets. The proposed method obtains higher NMI than the other methods, owing to the effective reduction of redundant data samples from the larger datasets. The use of NMF helps to reduce the feature vectors, and the distance-based measurement reduces the distance between the dataset samples.
Figure 4. Results of accuracy using IDLPP and other LPP methods
Figure 5. Results of average accuracy using IDLPP and other LPP methods
Figure 6. Results of NMI using IDLPP and other LPP methods
Figure 7. Results of average NMI using IDLPP and other LPP methods
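The reported metrics are standard; as an illustrative sketch (ours, not the authors' evaluation code), NMI is available in scikit-learn, and clustering accuracy can be computed by best-matching cluster labels to class labels with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one map from clusters to classes."""
    clusters, classes = np.unique(y_pred), np.unique(y_true)
    overlap = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            overlap[i, j] = np.sum((y_pred == c) & (y_true == k))
    rows, cols = linear_sum_assignment(-overlap)   # maximize total overlap
    return overlap[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])               # a relabeled perfect clustering
print(clustering_accuracy(y_true, y_pred))          # 1.0
print(normalized_mutual_info_score(y_true, y_pred)) # 1.0
```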
Further, the proposed method and the other existing methods have been tested over UCI datasets. Figure 8 shows the classification accuracy on the UCI datasets; the numbers of instances, classes, and dimensions of these datasets are listed in Table 2. The classification accuracies of the proposed and existing methods have been compared, and the results show that the proposed method obtains higher classification accuracy than the other methods, demonstrating its efficacy.
Table 2. UCI Dataset Sample

Dataset         No. of Instances   No. of Classes   No. of Dimensions
Anneal                898                5                 90
Breast Tissue         106                6                  9
Colic                 368                2                 60
Hepatitis             155                2                 19
House                 232                2                 16
Hypothyroid           368                2                 60
Promoter              106                2                 57
Sonar                 208                2                 60
Wdbc                  569                2                 30
Wine                  178                3                 13
Figure 8. Classification Accuracy of UCI Datasets
5. CONCLUSION
In this paper, an IDLPP method is presented that increases the accuracy and mutual information over large-dimensional text datasets so as to retrieve results effectively for given queries. In IDLPP, the distance measurement between the sample data vectors is carried out in a probabilistic way, and this reveals the hidden geometric pattern. The method also reduces the high-dimensional irrelevant samples in large datasets while preserving the geometric information of the datasets, which increases robustness. The results show that the IDLPP method yields an improved rate of accuracy and an improved rate of NMI over other LPP methods, and that it is an improved method for preserving the locality projections.
REFERENCES
[1] Li, J., and Deshpande, A, “Ranking Continuous Probabilistic Datasets,” Proceedings of the VLDB Endowment, 3(1-
2), 638-649, 2010.
[2] Cormode, G., and Garofalakis, M, “Histograms and Wavelets on Probabilistic Data,” IEEE Transactions on
Knowledge and Data Engineering, 22(8), 1142-1157, 2010.
[3] Wiharto, W., Kusnanto, H., and Herianto, H, “System Diagnosis of Coronary Heart Disease Using a Combination
of Dimensional Reduction and Data Mining Techniques: A Review,” Indonesian Journal of Electrical Engineering
and Computer Science, 7(2), 514-523, 2017.
[4] Zhang, D., Wang, Y., Zhou, L., Yuan, H., Shen, D., and Alzheimer's Disease Neuroimaging Initiative, “Multimodal
Classification Of Alzheimer's Disease and Mild Cognitive Impairment,” Neuroimage, 55(3), 856-867, 2011.
[5] Mashhour, E. M., El Houby, E. M., Wassif, K. T., and Salah, A. I, “Feature Selection Approach based on Firefly
Algorithm and Chi-square,” International Journal of Electrical and Computer Engineering (IJECE), 8(4), 2018.
[6] Chen, L. F., Liao, H. Y. M., Ko, M. T., Lin, J. C., and Yu, G. J, “A New LDA-Based Face Recognition System
Which Can Solve The Small Sample Size Problem,” Pattern recognition, 33(10), 1713-1726, 2000.
[7] Liu, M., Miao, L., and Zhang, D, “Two-Stage Cost-Sensitive Learning for Software Defect Prediction,” IEEE
Transactions on Reliability, 63(2), 676-686, 2014.
[8] Simão, M., Neto, P., and Gibaru, O, “Using Data Dimensionality Reduction for Recognition of Incomplete
Dynamic Gestures,” Pattern Recognition Letters, DOI: 10.1016/j.patrec.2017.01.003.
[9] Tunga, B, “A Hybrid Algorithm With Cluster Analysis In Modelling High Dimensional Data,” Discrete Applied
Mathematics, 2017.
[10] Zhu, R., and Xue, J. H, “On the Orthogonal Distance to Class Subspaces for High-Dimensional Data
Classification,” Information Sciences, 417, 262-273, 2017.
[11] Liu, Z., Liu, B., Zheng, S., and Shi, N. Z, “Simultaneous Testing of Mean Vector and Covariance Matrix for High-
Dimensional Data,” Journal of Statistical Planning and Inference, 188, 82-93, 2017.
[12] Lucas, T., Silva, T. C., Vimieiro, R., and Ludermir, T. B, “A New Evolutionary Algorithm For Mining Top-K
Discriminative Patterns In High Dimensional Data,” Applied Soft Computing, DOI: 10.1016/j.asoc.2017.05.048.
[13] Zhao, S., Zhou, J., and Li, H, “Model Averaging with High-Dimensional Dependent Data,” Economics Letters,
148, 68-71, 2016.
[14] Zamora, J., Mendoza, M., and Allende, H, “Hashing-Based Clustering in High Dimensional Data,” Expert Systems
with Applications, 62, 202-211, 2016.
[15] Ultsch, A., and Lötsch, J, “Machine-Learned Cluster Identification in High-Dimensional Data,” Journal of
Biomedical Informatics, 66, 95-104, 2017.
[16] Sang, Y., Qi, H., Li, K., Jin, Y., Yan, D., and Gao, S, “An Effective Discretization Method For Disposing High-
Dimensional Data," Information Sciences, 270, 73-91, 2014.
[17] Apiletti, D., Baralis, E., Cerquitelli, T., Garza, P., Pulvirenti, F., and Michiardi, P, “A Parallel MapReduce
Algorithm to Efficiently Support Itemset Mining on High Dimensional Data,” Big Data Research, 10, 53-69, 2017.
[18] Zhou, P., Hu, X., Li, P., and Wu, X, “Online Feature Selection for High-Dimensional Class-Imbalanced Data,”
Knowledge-Based Systems, 136, 187-199, 2017.
[19] Lansangan, J. R. G., and Barrios, E. B, “Simultaneous Dimension Reduction and Variable Selection in Modeling
High Dimensional Data,” Computational Statistics & Data Analysis, 112, 242-256, 2017.
[20] Jing, L., Tian, K., and Huang, J. Z, “Stratified Feature Sampling Method for Ensemble Clustering of High
Dimensional Data,” Pattern Recognition, 48(11), 3688-3702, 2015.
[21] Liu, X., and Li, M, “Integrated Constraint Based Clustering Algorithm for High Dimensional Data,”
Neurocomputing, 142, 478-485, 2014.
[22] Cardoso, Â., and Wichert, A, “Iterative Random Projections for High-Dimensional Data Clustering,” Pattern
Recognition Letters, 33(13), 1749-1755, 2012.
[23] Moayedikia, A., Ong, K. L., Boo, Y. L., Yeoh, W. G., and Jensen, R, “Feature Selection for High Dimensional
Imbalanced Class Data Using Harmony Search,” Engineering Applications of Artificial Intelligence, 57, 38-49,
2017.
[24] Pedergnana, M., and García, S. G, “Smart Sampling And Incremental Function Learning for Very Large High
Dimensional Data,” Neural Networks, 78, 75-87, 2016.
[25] Itoh, T., Kumar, A., Klein, K., and Kim, J, “High-Dimensional Data Visualization By Interactive Construction of
Low-Dimensional Parallel Coordinate Plots,” Journal of Visual Languages & Computing, 2017.
[26] Ando, T., and Li, K. C, “A Model-Averaging Approach For High-Dimensional Regression,” Journal of the
American Statistical Association, 109(505), 254-265, 2014.
[27] Qiao, L., Chen, S., and Tan, X, “Sparsity Preserving Projections With Applications To Face Recognition,” Pattern
Recognition, 43(1), 331-341, 2010.
[28] Liu, M., Zhang, D., and Chen, S, “Attribute Relation Learning for Zero-Shot Classification,” Neurocomputing, 139,
34-46, 2014.
[29] Jiang, W., and Chung, F. L, “A Trace Ratio Maximization Approach to Multiple Kernel-Based Dimensionality
Reduction,” Neural Networks, 49, 96-106, 2014.
[30] Liu, M., and Zhang, D, “Sparsity Score: A Novel Graph-Preserving Feature Selection Method,” International
Journal of Pattern Recognition and Artificial Intelligence, 28(04), 1450009, 2014.
[31] Srividya Sivasankar, Sruthi Nair, M.V. Judy, “Feature Reduction in Clinical Data Classification using Augmented
Genetic Algorithm”, International Journal of Electrical and Computer Engineering (IJECE) Vol. 5, No. 6,
December 2015, pp. 1516-1524.
[32] Muhammad Kusban, Adhi Susanto, and Oyas Wahyunggoro, “Combination a Skeleton Filter and Reduction
Dimension of Kernel PCA-Based on Palmprint Recognition” International Journal of Electrical and Computer
Engineering (IJECE) Vol. 6, No. 6, December 2016, pp. 3255-3261.
BIOGRAPHIES OF AUTHORS
Dr. Jasem M. Alostad received his Ph.D. from The University of York, United Kingdom, in 2006,
his M.S. from Monmouth University, New Jersey, USA, in 1996, and his B.S. (Computer Science)
from Western Kentucky University, KY, USA, in 1990. He is currently the Director of the Computer
and Information Centre in The Public Authority for Applied Education and Training (PAAET),
Kuwait. He has more than 10 years of experience in both academics and management. He has
authored more than 25 technical research papers published in leading journals and conferences
from the ACM, Elsevier, Springer, etc. His current research interests include Software Engineering, HCI,
Data Mining, Data Analysis, Big Data, Cloud Computing and the Internet of Things (IoT).