Feature selection is one of the most fundamental steps in machine learning and is closely related to dimensionality reduction. A commonly used approach in feature selection is to rank the individual features according to some criterion and then search for an optimal feature subset, using an evaluation criterion to test optimality. The objective of this work is to predict the presence of Learning Disability (LD) in school-aged children more accurately, using a reduced number of symptoms. For this purpose, a novel hybrid feature selection approach is proposed by integrating a popular Rough Set based feature ranking process with a modified backward feature elimination algorithm. The feature ranking process calculates the significance or priority of each symptom of LD according to its contribution to representing the knowledge contained in the dataset; each symptom's significance or priority value reflects its relative importance for predicting LD across the various cases. Then, by eliminating the least significant features one by one and evaluating the feature subset at each stage of the process, an optimal feature subset is generated. For comparative analysis, and to establish the importance of rough set theory in feature selection, the backward feature elimination algorithm is also combined with two state-of-the-art filter-based feature ranking techniques, viz. information gain and gain ratio. The experimental results show that the proposed feature selection approach outperforms the other two in terms of data reduction. Moreover, the proposed method efficiently eliminates all the redundant attributes from the LD dataset without sacrificing classification performance.
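The rank-then-eliminate loop described above lends itself to a compact sketch. The following Python fragment is illustrative only: it stands in mutual information for the paper's rough-set-based significance scores, uses a decision tree's cross-validated accuracy as the subset evaluation criterion, and runs on a scikit-learn sample dataset rather than the LD data.

```python
# A minimal sketch of a rank-then-eliminate feature selection loop, not
# the authors' exact algorithm: features are ranked by a per-feature
# significance score, the least significant is dropped at each step, and
# the smaller subset is kept only while accuracy does not degrade.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

def evaluate(features):
    """Mean cross-validated accuracy on the given feature subset."""
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, features], y, cv=5).mean()

# Placeholder ranking: mutual information stands in for the paper's
# rough-set-based significance/priority values.
significance = mutual_info_classif(X, y, random_state=0)
selected = list(np.argsort(significance)[::-1])   # most significant first

baseline = evaluate(selected)
while len(selected) > 1:
    candidate = selected[:-1]                     # drop least significant
    if evaluate(candidate) >= baseline - 1e-3:    # keep only if accuracy holds
        selected = candidate
    else:
        break

print(f"{len(selected)} of {X.shape[1]} features retained")
```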
EFFICIENT FEATURE SUBSET SELECTION MODEL FOR HIGH DIMENSIONAL DATA (IJCI Journal)
This paper proposes a new method that reduces the size of high-dimensional datasets by identifying and removing irrelevant and redundant features. Dataset reduction is important in machine learning and data mining. A measure of dependence is used to evaluate the relationship between each feature and the target concept, and between features, for irrelevant and redundant feature removal. The proposed work first removes all irrelevant features; a minimum spanning tree of the relevant features is then constructed using Prim's algorithm. Splitting the minimum spanning tree based on the dependency between features produces forests, and a representative feature from each forest is taken to form the final feature subset.
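As a rough illustration of this MST-then-split pipeline (not the paper's implementation), the sketch below builds a complete feature graph weighted by a stand-in dependence measure (absolute Pearson correlation), extracts a minimum spanning tree with networkx's Prim implementation, cuts weak edges to form forests, and picks one representative per forest. The representative choice here is arbitrary; the paper selects the feature most relevant to the target.

```python
# Illustrative sketch of MST-based feature clustering; correlation is a
# stand-in for the paper's dependence measure, and the data is random.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 200 samples, 8 features

corr = np.abs(np.corrcoef(X, rowvar=False))
G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        # Strongly related features get small edge weights, so the MST
        # keeps the tightest feature-to-feature links.
        G.add_edge(i, j, weight=1.0 - corr[i, j])

mst = nx.minimum_spanning_tree(G, algorithm="prim")

# Splitting the tree: removing weak edges (low dependence between the
# endpoint features) breaks it into forests of mutually related features.
threshold = 0.9
weak = [(u, v) for u, v, d in mst.edges(data=True) if d["weight"] > threshold]
mst.remove_edges_from(weak)

# One representative feature per forest forms the final subset
# (arbitrary choice here; relevance-based in the paper).
subset = [min(component) for component in nx.connected_components(mst)]
print("selected features:", subset)
```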
Unsupervised Feature Selection Based on the Distribution of Features Attribut... (Waqas Tariq)
Since dealing with high-dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and simplify analysis in applications such as text categorization, signal processing, image retrieval, and gene expression. Among feature reduction techniques, feature selection is one of the most popular because it preserves the original features. However, most current feature selection methods do not perform well on imbalanced data sets, which are pervasive in real-world applications. In this paper, we propose a new unsupervised feature selection method for imbalanced data sets, which removes redundant features from the original feature space based on the distribution of features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several imbalanced data sets derived from the UCI repository illustrate the effectiveness of the proposed method in comparison with the other methods, in terms of both accuracy and the number of selected features.
Introduction to feature subset selection method (IJSRD)
Data mining is a computational process for discovering patterns in large data sets. It comprises various important techniques, one of which is classification, which has recently received great attention in the database community. Classification can solve problems in fields such as medicine, industry, business, and science. PSO (Particle Swarm Optimization) is an optimization method based on social behaviour. Feature Selection (FS) involves finding a subset of prominent features to improve predictive accuracy and remove redundant features. Rough Set Theory (RST) is a mathematical tool for dealing with the uncertainty and vagueness of decision systems.
SURVEY ON CLASSIFICATION ALGORITHMS USING BIG DATASET (Editor IJMTER)
Data mining environments produce large amounts of data that need to be analyzed. Using traditional databases and architectures, it has become difficult to process, manage, and analyze such patterns. To gain knowledge from Big Data, a proper architecture must be understood. Classification is an important data mining technique with broad applications, used to classify the many kinds of data found in nearly every field of our lives; it assigns an item to one of a predefined set of classes according to the item's features. This paper sheds light on various classification algorithms, including J48, C4.5, and Naive Bayes, using a large dataset.
Recommendation system using bloom filter in MapReduce (IJDKP)
Many clients like to use the Web to discover product details in the form of online reviews provided by other clients and specialists. Recommender systems provide an important response to the information overload problem, as they present users with more practical and personalized information services. Collaborative filtering methods are a vital component of recommender systems, as they generate high-quality recommendations by leveraging the preferences of communities of similar users; collaborative filtering assumes that people with the same tastes choose the same items. Conventional collaborative filtering systems suffer from the sparse data problem and a lack of scalability, so a new recommender system is required to deal with sparse data and produce high-quality recommendations in large-scale mobile environments. MapReduce is a programming model widely used for large-scale data analysis. The described recommendation algorithm for mobile commerce is user-based collaborative filtering using MapReduce, which reduces the scalability problem of conventional CF systems. One of the essential operations for data analysis is the join, but MapReduce is not very efficient at executing joins, as it always processes all records in the datasets even when only a small fraction is relevant to the join. This problem can be reduced by applying the bloomjoin algorithm: Bloom filters are constructed and used to filter out redundant intermediate records. The proposed Bloom filter based algorithm reduces the number of intermediate results and improves join performance.
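The bloomjoin idea hinges on the one-sided error of Bloom filters. The toy Python sketch below (filter size, hash construction, and data are all illustrative, not the paper's configuration) builds a filter over the small side's join keys and prunes records from the large side that cannot possibly join.

```python
# Minimal Bloom filter sketch illustrating the bloomjoin idea: records
# filtered out are guaranteed non-joining (no false negatives), while a
# small fraction of non-joining records may survive (false positives).
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # Derive several bit positions from seeded SHA-1 digests.
        for seed in range(self.hashes):
            digest = hashlib.sha1(f"{seed}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

small_side_keys = {"u1", "u7", "u9"}                   # join keys, small dataset
large_side = [(f"u{i}", "rating") for i in range(100)] # records, large dataset

bf = BloomFilter()
for key in small_side_keys:
    bf.add(key)

# Only the surviving candidates need to be shuffled to the join step.
candidates = [rec for rec in large_side if bf.might_contain(rec[0])]
print(len(candidates), "of", len(large_side), "records survive the filter")
```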
A statistical data fusion technique in virtual data integration environment (IJDKP)
Data fusion in a virtual data integration environment starts after detecting and clustering duplicated records from the different integrated data sources. It refers to the process of selecting or fusing attribute values from the clustered duplicates into a single record representing the real-world object. In this paper, a statistical technique for data fusion is introduced, based on probabilistic scores from both the data sources and the clustered duplicates.
Fuzzy Analytic Hierarchy Based DBMS Selection In Turkish National Identity Ca... (Ferhat Ozgur Catak)
Database Management Systems (DBMS) play an important role in supporting enterprise application development. Selecting the right DBMS is a crucial decision in the software engineering process and requires optimizing a number of criteria. Evaluating and selecting a DBMS among several candidates tends to be very complex, involving both quantitative and qualitative issues, and a wrong selection can prove costly, hampering enterprise application development and adversely affecting business processes. This study focuses on the evaluation of a multi-criteria decision problem using fuzzy logic. We demonstrate the methodological considerations regarding group decision-making and fuzziness on the DBMS selection problem, and we develop a new Fuzzy AHP based decision model for selecting a DBMS easily. In this decision model, the main criteria and their sub-criteria are first determined for the evaluation; these criteria are then weighted by pairwise comparison, and the DBMS alternatives are evaluated by assigning a rating scale.
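To make the pairwise-comparison step concrete, here is a minimal sketch of crisp AHP weighting, the core that Fuzzy AHP extends with fuzzy numbers (the fuzzification itself is omitted). The criteria and the judgment matrix are invented for illustration.

```python
# Sketch of the pairwise-comparison weighting step in (crisp) AHP.
import numpy as np

criteria = ["performance", "cost", "vendor support"]
# A[i, j] = how much more important criterion i is than j (Saaty scale);
# the matrix is reciprocal: A[j, i] = 1 / A[i, j]. Values are made up.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Criterion weights are the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()

for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
```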
Correlation of artificial neural network classification and nfrs attribute fi... (eSAT Journals)
Abstract
About 5 to 15% of women of reproductive age face Polycystic Ovarian Syndrome (PCOS), a multifaceted, heterogeneous and complex disease. Polycystic ovaries, chronic anovulation and hyperandrogenism cause long-term consequences such as endometrial hyperplasia, type 2 diabetes mellitus and coronary disease, while insulin resistance together with hypertension, abdominal obesity, dyslipidemia and hyperinsulinemia characterize the metabolic syndrome (frequent metabolic traits); these conditions underlie the common problem of anovulatory infertility. Computer-based information together with advanced data mining techniques can be used to obtain appropriate results. Classification is a classic data mining task with roots in machine learning; Naive Bayes, Artificial Neural Networks, Decision Trees and Support Vector Machines are typical classification methods. Feature selection methods involve generating subsets, evaluating each subset, criteria for stopping the search, and validation procedures, and the characteristics of the search method used are important for the time efficiency of feature selection. PCA (Principal Component Analysis), information gain subset evaluation, fuzzy rough set evaluation and Correlation-based Feature Selection (CFS) are some feature selection techniques, while greedy best-first search, ranker, etc. are search algorithms used in feature selection. In this paper, a new algorithm based on fuzzy neural rough subset evaluation and artificial neural networks is proposed, which avoids performing classification and feature selection as separate tasks; it combines neural fuzzy rough subset evaluation and an artificial neural network for better performance than carrying out the two tasks separately.
Keywords: ANN, SVM, PCA, CFS
CLASSIFICATION ALGORITHM USING RANDOM CONCEPT ON A VERY LARGE DATA SET: A SURVEY (Editor IJMTER)
Data mining environments produce large amounts of data that need to be analyzed, and patterns have to be extracted from them to gain knowledge. In this new era, with an explosion of both ordered and unordered data, it has become difficult to process, manage and analyze patterns using traditional databases and architectures. To gain knowledge from Big Data, a proper architecture must be understood. Classification is an important data mining technique with broad applications, used to classify the many kinds of data found in nearly every field of our lives; it assigns an item to one of a predefined set of classes according to the item's features. This paper provides an inclusive survey of different classification algorithms, including J48, C4.5, the k-nearest neighbor classifier, Naive Bayes, SVM, etc., using a random concept.
Data Mining System and Applications: A Review (ijdpsjournal)
In the Information Technology era, information plays a vital role in every sphere of human life. It is very important to gather data from different data sources, store and maintain the data, generate information and knowledge, and disseminate data, information and knowledge to every stakeholder. Due to the vast use of computers and electronic devices and the tremendous growth in computing power and storage capacity, there has been explosive growth in data collection. Storing data in a data warehouse enables an entire enterprise to access a reliable, current database. Analyzing this vast amount of data and drawing fruitful conclusions and inferences requires special tools called data mining tools. This paper gives an overview of data mining systems and some of their applications.
NETWORK FAULT DIAGNOSIS USING DATA MINING CLASSIFIERS (csandit)
Mobile networks are under more pressure than ever before because of the increasing number of smartphone users and the number of people relying on mobile data networks. With larger numbers of users, the issue of service quality has become more important for network operators. Faults in mobile networks that reduce the quality of service must be found within minutes so that problems can be addressed and networks returned to optimised performance. In this paper, a method of automated fault diagnosis is presented using decision trees, rules and Bayesian classifiers for visualization of network faults. Using data mining techniques, the model classifies optimisation criteria based on key performance indicator metrics to identify network faults, supporting the most efficient optimisation decisions. The goal is to help wireless providers localize key performance indicator alarms and determine which Quality of Service factors should be addressed first, and at which locations.
Survey on Various Classification Techniques in Data Mining (ijsrd.com)
Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques and several major kinds of classification methods, including decision tree induction, Bayesian networks, the k-nearest neighbor classifier, case-based reasoning, genetic algorithms and fuzzy logic techniques. The goal of this survey is to provide a comprehensive review of the different classification techniques in data mining.
A Survey on Constellation Based Attribute Selection Method for High Dimension... (IJERA Editor)
Attribute selection is an important topic in data mining because it is an effective way of reducing dimensionality, removing irrelevant and redundant data, and increasing the accuracy of the results. It is the process of identifying a subset of the most useful attributes that produces results compatible with the original entire set of attributes. Cluster analysis, or clustering, is the task of grouping a set of objects such that objects in the same group (called a cluster) are more similar, in some sense, to each other than to those in other groups (clusters). There are various approaches and techniques for attribute subset selection, namely the wrapper approach, the filter approach, the Relief algorithm, distributional clustering, etc., but each has disadvantages, such as inability to handle large volumes of data, computational complexity, no guarantee of accuracy, difficulty of evaluation, and weak redundancy detection. To get the upper hand on some of these issues, this paper proposes a technique that aims to design an effective clustering-based attribute selection method for high-dimensional data. Initially, attributes are divided into clusters using a graph-based clustering method, the minimum spanning tree (MST). In the second step, the most representative attribute, the one most strongly related to the target classes, is selected from each cluster to form the subset of attributes. The purpose is to increase accuracy, reduce dimensionality, shorten training time, and improve generalization by reducing overfitting.
Distributed Digital Artifacts on the Semantic Web (Editor IJCATR)
Distributed digital artifacts incorporate cryptographic hash values into URIs, called trusty URIs, in a distributed environment, producing verifiable and immutable web resources of good quality that counter the rising man-in-the-middle attack. The greatest challenge of a centralized system is that it gives users no way to check whether data has been modified, and communication is limited to a single server. The solution is a distributed digital artifact system, where resources are distributed among different domains to enable inter-domain communication. Due to emerging developments on the web, attacks have increased rapidly; among them, the man-in-the-middle attack (MIMA) is a serious issue that threatens user security. This work tries to prevent MIMA to an extent by providing self-reference and trusty URIs even in a distributed environment. Any manipulation of the data is efficiently identified, and any further access to that data is blocked by informing the user that the uniform location has changed. The system uses self-reference so that each resource contains its trusty URI, a lineage algorithm for generating the seed, and the SHA-512 hash generation algorithm to ensure security. It is implemented on the semantic web, an extension of the World Wide Web, using RDF (Resource Description Framework) to identify resources. The framework thus overcomes existing challenges by making digital artifacts on the semantic web distributed, enabling secure communication between different domains across the network and thereby preventing MIMA.
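A minimal sketch of the trusty-URI idea follows, assuming the general append-a-hash scheme (a base64url-encoded SHA-512 digest of the content appended to the URI); the paper's lineage/seed algorithm is not reproduced here, and the URI and content are made up.

```python
# Sketch of trusty-URI-style tamper detection: the URI embeds a hash of
# the artifact's content, so any later modification is detectable.
import base64
import hashlib

def trusty_uri(base_uri: str, content: bytes) -> str:
    digest = hashlib.sha512(content).digest()
    tag = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return f"{base_uri}.{tag}"

def verify(uri: str, content: bytes) -> bool:
    # Split at the last dot: the base64url tag itself contains no dots.
    base, _, _tag = uri.rpartition(".")
    return trusty_uri(base, content) == uri

uri = trusty_uri("http://example.org/artifact/42", b"<rdf>...</rdf>")
print(verify(uri, b"<rdf>...</rdf>"))   # True: content unchanged
print(verify(uri, b"<rdf>tampered"))    # False: manipulation detected
```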
Diagnosis of health condition is a very challenging task for every human being, because life is directly related to health. Data-mining-based classification is one of the important applications for classifying such data. In this research work, we have used various classification techniques for classifying thyroid data; CART gives the highest accuracy, 99.47%, as the best model. Feature selection plays a very important role in making a model computationally efficient and in increasing its performance. This research work focuses on the Info Gain and Gain Ratio feature selection techniques to remove irrelevant features from the original data set and computationally improve the performance of the model. We applied both feature selection techniques to the best model, i.e. CART. The proposed CART-Info Gain and CART-Gain Ratio give 99.47% and 99.20% accuracy with 25 and 3 features, respectively.
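For reference, the two filter measures named above can be computed directly. The sketch below hand-rolls information gain and gain ratio for a single categorical feature on toy data (not the thyroid dataset).

```python
# Hand-rolled information gain and gain ratio for one categorical feature.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain_and_ratio(feature, labels):
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    # Expected entropy of the class after splitting on the feature.
    conditional = sum(w * entropy(labels[feature == v])
                      for v, w in zip(values, weights))
    gain = entropy(labels) - conditional
    # Split information penalizes many-valued features; gain ratio
    # normalizes the gain by it.
    split_info = -(weights * np.log2(weights)).sum()
    return gain, gain / split_info if split_info > 0 else 0.0

feature = np.array(["a", "a", "b", "b", "b", "c"])
labels = np.array([0, 0, 1, 1, 0, 1])
print(info_gain_and_ratio(feature, labels))
```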
In the present day, a huge amount of data is generated every minute and transferred frequently. Although the data is sometimes static, most commonly it is dynamic and transactional, and newly generated data is constantly added to the existing data. To discover knowledge from such incremental data, one approach is to run the algorithm repeatedly on the modified data sets, which is time consuming. Moreover, to analyze the datasets properly, an efficient classifier model must be constructed; the objective of such a classifier is to assign unlabeled data to the appropriate classes. This paper proposes a dimension reduction algorithm that can be applied in a dynamic environment to generate a reduced attribute set as a dynamic reduct, together with an optimization algorithm that uses the reduct to build the corresponding classification system. The method analyzes new data when it becomes available and modifies the reduct accordingly to fit the entire dataset, from which interesting optimal classification rule sets are generated. The concepts of the discernibility relation, attribute dependency and attribute significance from Rough Set Theory are integrated to generate the dynamic reduct set, and optimal classification rules are selected using the PSO method, which not only reduces complexity but also helps achieve higher accuracy of the decision system. The proposed method has been applied to benchmark datasets collected from the UCI repository; the dynamic reduct is computed, and optimal classification rules are generated from it. Experimental results show the efficiency of the proposed method.
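The rough-set dependency degree that drives reduct generation can be sketched in a few lines. The fragment below computes gamma(B, d), the fraction of objects whose indiscernibility class under condition attributes B is consistent on the decision d, over an invented toy decision table; attribute significance then follows as the drop in gamma when an attribute is removed.

```python
# Sketch of the rough-set dependency degree gamma(B, d) on a toy table.
from collections import defaultdict

def dependency(table, attrs, decision):
    """table: list of dicts; attrs: condition attributes; decision: key."""
    classes = defaultdict(set)
    for row in table:
        key = tuple(row[a] for a in attrs)   # B-indiscernibility class
        classes[key].add(row[decision])
    # Positive region: objects whose class is consistent on the decision.
    positive = sum(1 for row in table
                   if len(classes[tuple(row[a] for a in attrs)]) == 1)
    return positive / len(table)

table = [
    {"fever": 1, "cough": 1, "flu": 1},
    {"fever": 1, "cough": 0, "flu": 0},
    {"fever": 0, "cough": 1, "flu": 0},
    {"fever": 1, "cough": 1, "flu": 1},
    {"fever": 1, "cough": 0, "flu": 1},   # conflicts with the second row
]
print(dependency(table, ["fever", "cough"], "flu"))  # 0.6
print(dependency(table, ["fever"], "flu"))           # 0.2: less discerning
# Significance of "cough" = gamma({fever, cough}) - gamma({fever}) = 0.4
```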
A STUDY ON SIMILARITY MEASURE FUNCTIONS ON ENGINEERING MATERIALS SELECTION (cscpconf)
When designing a new type of engineering material, one has to search for existing materials that suit the design requirements before trying to produce the new material. This selection process is itself tedious, since a few materials must be chosen from a set of hundreds of thousands. This paper therefore proposes a model for selecting a material that suits the user's requirements by using similarity/distance measuring functions. Thirteen different types of similarity/distance measures are examined. A Performance Index Measure (PIM) is calculated to verify the relative performance of each selected material against the target material, and all the results are normalised for analysis. The proposed model thus reduces the time wasted in selection and avoids haphazard selection of materials in materials design and the manufacturing industry.
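A few of the similarity/distance functions such a model might compare can be evaluated with SciPy, as sketched below; the material property vectors are made up, and in practice the features would be normalised to a common scale before comparison, much as the paper normalises its results.

```python
# Illustrative similarity/distance measures between a target material's
# property vector and a candidate's (values are invented).
import numpy as np
from scipy.spatial import distance

target = np.array([7.85, 200.0, 250.0])     # e.g. density, E, yield strength
candidate = np.array([7.80, 195.0, 260.0])

print("euclidean :", distance.euclidean(target, candidate))
print("cityblock :", distance.cityblock(target, candidate))   # Manhattan
print("chebyshev :", distance.chebyshev(target, candidate))
print("cosine sim:", 1 - distance.cosine(target, candidate))  # similarity
```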
AN EFFICIENT FEATURE SELECTION IN CLASSIFICATION OF AUDIO FILES (cscpconf)
In this paper, we focus on an efficient feature selection method for the classification of audio files. The main objective is feature selection and extraction: we select a set of features for further analysis, which form the elements of the feature vector, and by extraction we compute a numerical representation that characterizes the audio using an existing toolbox. In this study, Gain Ratio (GR) is used as the feature selection measure; GR selects the splitting attribute that separates the tuples into different classes. Pulse clarity is considered as a subjective measure and is used to calculate the gain of the audio file features. The splitting criterion is employed in the application to identify the class, or music genre, of a specific audio file from the testing database. Experimental results indicate that, using GR, the application produces satisfactory results for music genre classification. After dimensionality reduction, the best three features are selected from the various features of an audio file, and with this technique we obtain more than 90% successful classification.
New Feature Selection Model Based Ensemble Rule Classifiers Method for Datase... (ijaia)
Feature selection and classification are essential when dealing with large data sets that comprise numerous input attributes. Many search methods and classifiers have been used to find the optimal number of attributes. The aim of this paper is to find an optimal set of attributes and improve classification accuracy by adopting an ensemble rule classifiers method. The research process involves two phases: finding the optimal set of attributes, and applying the ensemble classifiers method to the classification task. Results are reported in terms of accuracy, the number of selected attributes, and the rules generated; six datasets were used for the experiments. The final output is an optimal set of attributes with an ensemble rule classifiers method. The experimental results, conducted on public real datasets, demonstrate that ensemble rule classifier methods consistently improve classification accuracy on the selected datasets, and significant improvement in accuracy together with an optimal set of selected attributes is achieved by adopting the ensemble rule classifiers method.
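As a loose sketch of the two-phase idea (attribute selection followed by an ensemble vote), the fragment below chains a filter-based selector with a majority vote over three tree learners in scikit-learn; the models, selector, and dataset are stand-ins, not the paper's pipeline.

```python
# Minimal sketch: select a reduced attribute set, then combine several
# rule-style (tree) classifiers by majority vote.
from sklearn.datasets import load_wine
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

X, y = load_wine(return_X_y=True)

ensemble = VotingClassifier([
    ("gini_tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("entropy_tree", DecisionTreeClassifier(criterion="entropy", random_state=0)),
    ("extra_tree", ExtraTreeClassifier(random_state=0)),
], voting="hard")

# Phase 1: attribute selection; phase 2: ensemble classification.
pipeline = make_pipeline(SelectKBest(mutual_info_classif, k=6), ensemble)
print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```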
Enhancing Keyword Query Results Over Database for Improving User Satisfaction (ijmpict)
Storing data in relational databases that support keyword queries is increasingly common, but keyword search does not always give effective answers to a query and is therefore inflexible from the user's perspective. It would be helpful to recognize the queries that yield low-ranked results. Here we estimate query performance prediction to determine the effectiveness of a search performed in response to a query, and we study the features of such hard queries by taking into account the contents of the database and the result list. One relevant database problem is the presence of missing data, which can be handled by imputation. An inTeractive Retrieving-Inferring data imputation method (TRIP) is used, which alternates retrieving and inferring to fill in the missing attribute values in the database. By considering both the prediction of hard queries and imputation over the database, we can obtain better keyword search results.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Courier management system project report.pdfKamal Acharya
It is now-a-days very important for the people to send or receive articles like imported furniture, electronic items, gifts, business goods and the like. People depend vastly on different transport systems which mostly use the manual way of receiving and delivering the articles. There is no way to track the articles till they are received and there is no way to let the customer know what happened in transit, once he booked some articles. In such a situation, we need a system which completely computerizes the cargo activities including time to time tracking of the articles sent. This need is fulfilled by Courier Management System software which is online software for the cargo management people that enables them to receive the goods from a source and send them to a required destination and track their status from time to time.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
2. 128 Computer Science & Information Technology (CS & IT)
RST requires no prior knowledge and uses only the information contained within the dataset alone [2]. In
this work, RST is employed as a feature selection tool to select the most significant features and thereby
improve the diagnostic accuracy of an SVM classifier. For this purpose, a popular Rough Set based
feature ranking algorithm called the PRS relevance approach is implemented to rank the various
symptoms of the LD dataset. Then, by integrating this feature ranking technique with backward
feature elimination [15], a new hybrid feature selection technique is proposed. Through this approach, a
combination of four relevant symptoms is identified from the LD dataset that gives the same
classification accuracy as the whole set of sixteen features. This implies that these four features deserve
close attention from the physicians and teachers handling LD when they conduct a diagnosis.
The rest of the paper is organized as follows. A review of the Rough Set based feature ranking
process is given in Section 2. In Section 3, conventional feature selection procedures are
described. A brief description of the Learning Disability dataset is presented in Section 4. Section
5 presents the proposed feature selection approach. Experimental results are reported in Section 6
and discussed in Section 7. The last section concludes this research work.
2. ROUGH SET BASED ATTRIBUTE RANKING
Rough Set Theory (RST) proposed by Z. Pawlak is a mathematical approach to intelligent data
analysis and data mining. RST is concerned with the classificatory analysis of imprecise,
uncertain or incomplete information expressed in terms of data acquired from experience. In
RST all computations are done directly on collected data and performed by making use of the
granularity structure of the data. The set of all indiscernible (similar) objects is called an
elementary set or a category and forms a basic granule (atom) of the knowledge about the data
contained in the dataset. The indiscernibility relation generated in this way is the mathematical
basis of RST [18].
The entire knowledge available in a high dimensional dataset is not always necessary to define the
various categories represented in it. Though machine learning and data mining techniques are
suitable for a wide range of data analysis problems, they may not be effective for handling
high dimensional data. This motivates the need for efficient automated feature selection
processes in data mining. In RST, a dataset is termed a decision table. A decision table presents
some basic facts about the Universe along with the decisions (actions) taken by the experts based
on the given facts. An important issue in data analysis is whether the complete set of attributes
given in the decision table is necessary to define the knowledge involved in the equivalence class
structure induced by the set of all attributes. This problem arises in many practical applications
and is referred to as knowledge reduction. With the help of RST, we can eliminate all superfluous
attributes from the dataset, preserving only the indispensable ones [18]. In the reduction of
knowledge, the basic roles are played by two fundamental concepts of RST: reduct and core. A
reduct is a subset of the set of attributes which by itself can fully characterize the knowledge in
the given decision table; it keeps the essential information of the original table. A decision table
may have more than one reduct. The set of attributes common to all reducts is called the core
[18]. The core may be thought of as the set of indispensable attributes which cannot be eliminated
while reducing the knowledge involved in the information system: eliminating a core attribute
causes the collapse of the category structure given by the original decision table. To determine
the core attributes, we take the intersection of all the reducts of the information system.
In the following section, a popular and effective reduct based feature ranking approach known as
the PRS relevance method [19] is presented. In this method, the ranking is done with the help of
the relevance of each attribute/feature, calculated from its frequency of occurrence in the various
reducts generated from the dataset.
2.1. Proportional Rough Set (PRS) Relevance Method
This is an effective Rough Set based method for attribute ranking proposed by Maria Salamó and
López-Sánchez [19]. The concept of reducts is used as the basic idea of this approach; the same
idea is also used by Li and Cercone to rank the decision rules generated from a rule mining
algorithm [20, 21, 22, 23]. Multiple reducts may exist for a dataset, and each reduct is a
representative of the original data. Most data mining operations require only a single reduct for
decision making, but selecting any one reduct leads to the elimination of the representative
information contained in all the others. The main idea behind this reduct based feature ranking
approach is the following: the more frequently a conditional attribute appears in the reducts, the
more relevant the attribute is. Hence the number of times an attribute appears in the reducts,
together with the total number of reducts, determines the significance (priority) of each attribute
in representing the knowledge contained in the dataset. This idea is used for measuring the
significance of the various features in the PRS relevance feature ranking approach [19]. With the
help of these priority values, the features available in the dataset can be arranged in decreasing
order of priority.
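
To make the ranking concrete, the following is a minimal Python sketch (an illustration, not the
authors' implementation) that computes the reduct-based priorities from a precomputed list of
reducts; generating the reducts themselves (e.g. via a discernibility matrix, as done with ROSE2
in Section 6) is assumed to be handled elsewhere.

    # Minimal sketch of PRS relevance ranking, assuming the reducts have
    # already been generated (e.g. by a discernibility-matrix algorithm).
    # Each reduct is a set of attribute names; beta(a) is the fraction of
    # reducts in which attribute a occurs.
    def prs_relevance(attributes, reducts):
        """Return (attribute, priority) pairs in decreasing order of priority."""
        p = len(reducts)
        beta = {a: sum(a in r for r in reducts) / p for a in attributes}
        return sorted(beta.items(), key=lambda kv: kv[1], reverse=True)

    # Toy usage: 'x' occurs in every reduct (a core attribute, beta = 1),
    # while 'z' occurs in none (beta = 0).
    reducts = [{"x", "y"}, {"x", "w"}, {"x", "y", "w"}]
    print(prs_relevance(["x", "y", "w", "z"], reducts))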
3. FEATURE SELECTION
Feature selection is a search process that selects a subset of significant features from a data
domain for building efficient learning models, and it is closely related to dimensionality
reduction. Most datasets contain relevant as well as irrelevant and redundant features. Irrelevant
and redundant features contribute nothing to determining the target class and, at the same time,
deteriorate the quality of the results of the intended data mining task. The process of eliminating
these types of features from a dataset is referred to as feature selection. In a decision table, a
feature that is highly correlated with the decision feature is relevant, while a feature that is highly
correlated with other features is redundant. Hence the search for a good feature subset involves
finding those features that are highly correlated with the decision feature but uncorrelated with
each other [1]. Feature selection reduces the dimensionality of the dataset; the goal of
dimensionality reduction is to map a set of observations from a high dimensional space M into a
low dimensional space m (m << M) while preserving the semantics of the original high
dimensional dataset. Let I = (U, A) be an information system (dataset), where U = {x1, x2, …, xn}
is the set of objects and A = {a1, a2, …, aM} is the set of attributes used to characterize each
object in I. Each object xi in the information system can then be represented as an M dimensional
vector [a1(xi), a2(xi), …, aM(xi)], where aj(xi) yields the j-th (j = 1, 2, …, M) attribute value of the
i-th (i = 1, 2, …, n) data object. Dimensionality reduction techniques transform the given dataset
I of size n × M into a new low dimensional dataset Y of size n × m.
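
As a minimal illustration (assuming numpy; the indices below are hypothetical), feature selection
realizes this n × M to n × m transformation simply by keeping the selected columns, so the
retained attributes keep their original meaning:

    import numpy as np

    # Illustrative only: an n x M dataset reduced to n x m by keeping a
    # subset of columns (hypothetical indices for the selected features).
    n, M = 500, 16                        # e.g. the LD dataset dimensions
    I = np.random.randint(0, 2, (n, M))   # placeholder for the real data
    selected = [3, 1, 9, 12]              # hypothetical selected feature indices
    Y = I[:, selected]                    # reduced dataset of size n x m, m = 4
    assert Y.shape == (n, len(selected))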
While constructing a feature selection method, two different factors, namely search strategies and
evaluation measures [2], are to be considered. Commonly used search strategies are complete or
exhaustive [3], heuristic [4] and random [5][6]. Feature selection methods based on exhaustive
approaches are quite impractical in many cases, especially for high dimensional datasets, due to
the high computational cost involved in the searching process [25]. To reduce this complexity,
heuristic or random search methods are employed as an alternate strategy in modern feature
selection algorithms.
Based on the procedures used for evaluating the quality of the generated subset, heuristic or
random search methods are further classified into three groups: classifier specific or wrapper
methods [7][8][9][10][11], classifier independent or filter methods [12][13][14], and hybrid
models [15] which combine the filter and wrapper approaches to achieve better classification
performance. In a classifier specific feature selection method, the quality of the selected features
is evaluated with the help of a learning algorithm and the corresponding classification accuracy is
determined. If it satisfies the desired accuracy, the selected feature subset is considered optimal;
otherwise it is modified and the process is repeated in search of a better one. The process of
feature selection using the wrapper (classifier specific) approach is depicted in Figure 1. Even
though the wrapper method may produce better results, it is computationally expensive and can
encounter problems when dealing with huge datasets.
Figure 1: Wrapper approach to feature selection
In the case of classifier independent methods, the significance of the selected features is
evaluated using one or more classifier independent measures such as inter-class distance [12],
mutual information [16][17] and dependence measures [13][18]. In this approach, feature
selection is treated as a completely independent pre-processing operation, and irrelevant/noisy
attributes are filtered out as its outcome. All filter based methods use heuristics based on general
characteristics of the data, rather than a learning algorithm, to evaluate the optimality of feature
subsets. As a result, filter methods are generally much faster than wrapper methods, and since
they do not depend on any particular learning algorithm, they are more suitable for managing
high dimensional data.
In the case of the hybrid model, features are first ranked using some distance criterion or
similarity measure, and then an optimal feature subset is generated with the help of a wrapper
model. The method usually starts with an initial subset of features heuristically selected
beforehand. Features are then added (forward selection) or removed (backward elimination)
iteratively until an optimal feature subset is obtained.
4. LEARNING DISABILITY DATASET
Learning disability (LD) is a neurological condition that affects the child's brain, resulting in
difficulty in learning and using certain skills such as reading, writing, listening, speaking and
reasoning. Learning disabilities affect children both academically and socially, and about 10% of
children enrolled in schools are affected by this problem. With the right help at the right time,
children with learning disabilities can learn successfully, so identifying students with LD and
assessing the nature and depth of the disability is essential for helping them to overcome it. As
the nature and symptoms of LD may vary from child to child, it is difficult to assess. A variety of
tests are available for evaluating LD, and there are many approaches for managing it, by teachers
as well as parents.
To apply the proposed methodology to a real world dataset, a dataset consisting of the signs and
symptoms of learning disabilities in school-age children is selected. It is collected from various
sources, which include a child care clinic providing assistance for handling learning disability in
children and three different schools conducting LD assessment studies. This dataset is helpful in
determining the existence of LD in a suspected child. It is selected with a view to providing tools
for researchers and physicians handling learning disabilities to analyze the data and to facilitate
the decision making process.
The dataset contains 500 student records with 16 conditional attributes as signs and symptoms of
LD, and the existence of LD in a child as the decision attribute. The signs and symptoms
collected include information regarding whether the child has any difficulty with reading (DR),
any difficulty with spelling (DS), any difficulty with handwriting (DH) and so on. There are no
missing values or inconsistencies in the dataset. Table 1 gives a portion of the dataset used for
the experiment; in this table, t represents the attribute value true and f represents false. Table 2
gives the key used for representing the symptoms and their abbreviations.
Table 1: Learning Disability (LD) dataset
Table 2: Key used for representing the symptoms of LD

Key/Abbreviation   Symptom                                    Key/Abbreviation   Symptom
DR                 Difficulty with Reading                    LM                 Lack of Motivation
DS                 Difficulty with Spelling                   DSS                Difficulty with Study Skills
DH                 Difficulty with Handwriting                DNS                Does Not like School
DWE                Difficulty with Written Expression         DLL                Difficulty in Learning a Language
DBA                Difficulty with Basic Arithmetic           DLS                Difficulty in Learning a Subject
DHA                Difficulty with Higher Arithmetic skills   STL                Is Slow To Learn
DA                 Difficulty with Attention                  RG                 Repeated a Grade
ED                 Easily Distracted                          LD                 Learning Disability
DM                 Difficulty with Memory
5. PROPOSED APPROACH
The proposed method of feature selection follows a hybrid approach which utilizes the
complementary strengths of the wrapper and filter approaches. Before feature selection begins,
each feature is evaluated independently with respect to the class to identify its significance in the
data domain. Features are then ranked in decreasing order of their significance [26]. In this work,
the PRS relevance approach is used to calculate the significance of, and to rank, the various
features of the LD dataset. To explain the feature ranking process, consider a decision table
T = {U, A, d}, where U is the non-empty finite set of objects called the Universe, A = {a1, a2, …, an}
is the non-empty finite set of conditional attributes/features and d is the decision attribute. Let
{r1, r2, …, rp} be the set of reducts generated from T. Then, for each conditional attribute ai ∈ A,
the reduct based
attribute priority/significance β(ai) is defined as [19, 20, 21]:

    β(ai) = |{ rj : ai ∈ rj, j = 1, 2, …, p }| / p,    i = 1, 2, …, n        (1)
where the numerator of Eq. 1 gives the occurrence frequency of the attribute ai in the various
reducts.

From Eq. 1 it is clear that an attribute a not appearing in any of the reducts has priority value
β(a) = 0, while an attribute a which is a member of the core of the decision table has priority
value β(a) = 1. For the remaining attributes, the priority values are proportional to the number of
reducts in which the attribute appears as a member. These reduct based priority values provide a
ranking for the considered features.
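
As a worked instance of Eq. 1, using the numbers reported later in Section 6: 63 reducts are
generated from the LD dataset, and the attribute DS occurs in 34 of them, so
β(DS) = 34/63 ≈ 0.5397; DR occurs in all 63 reducts and is therefore a core attribute with
β(DR) = 1.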
After ranking the features, the search process starts with all available features and successively
removes the least significant features one by one (backward elimination), evaluating the
influence of each removal on the classification accuracy, until the selected feature subset gives
the best classification performance. If eliminating a feature leaves the current best classification
accuracy unchanged, the feature is redundant. If the classification accuracy increases as a result
of the elimination, the removed feature is considered a feature with a negative influence on the
classification accuracy. In these two cases, the selected feature is permanently removed from the
feature subset; otherwise it is retained. Feature evaluation starts by taking the classification
accuracy obtained from all available features as the current best accuracy. The search terminates
when no single attribute deletion contributes any improvement to the current best classification
accuracy. At this stage, the remaining feature subset is considered optimal.
For classification, the Sequential Minimal Optimization (SMO) algorithm with a polynomial
kernel is used in this work, implemented through the Weka data mining toolkit [24]. This
algorithm is used for the prediction of LD because it is simple, easy to implement and generally
fast. The proposed feature selection algorithm FeaSel is presented below. The algorithm accepts
the ranked set of features obtained from the PRS relevance approach as input and generates an
optimal feature subset consisting of the significant features as output. The overall feature
selection process is represented in Figure 2.
Algorithm FeaSel(Fn, Y, n, Xn)
// Fn = {f1, f2, ..., fn} – set of features obtained from the PRS relevance approach,
//      ranked in descending order of their significance.
// Y – class; n – total number of features.
// Xn – the optimal feature subset (output).
{
    Xn = Fn;
    max_acc = acc(Fn, Y);          // acc() returns the classification accuracy given by the classifier
    for (i = n to 1 step -1) do    // evaluate features from least to most significant
    {
        Fn = Fn - {fi};            // tentatively eliminate feature fi
        curr_acc = acc(Fn, Y);
        if (curr_acc == max_acc)          // accuracy unchanged: fi is redundant, keep it removed
            Xn = Fn;
        else if (curr_acc > max_acc)      // accuracy improved: fi had a negative influence
        {
            Xn = Fn;
            max_acc = curr_acc;
        }
        else                              // accuracy degraded: fi is significant, restore it
            Xn = Fn ∪ {fi};
        Fn = Xn;
    }
    return (Xn, max_acc);
}
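
For readers who prefer an executable form, the following is a minimal Python sketch of the
FeaSel loop (a paraphrase, not the authors' code); accuracy is assumed to be any callable that
scores a feature subset, standing in for the SMO classifier used in the paper.

    # Hypothetical sketch of the FeaSel backward-elimination loop.
    # `ranked` is the feature list from the PRS relevance ranking, most
    # significant first; `accuracy` is any callable scoring a feature subset.
    def feasel(ranked, accuracy):
        selected = list(ranked)
        best_acc = accuracy(selected)
        for feature in reversed(ranked):       # least significant first
            trial = [f for f in selected if f != feature]
            if not trial:                      # never empty the subset entirely
                continue
            acc = accuracy(trial)
            if acc > best_acc:                 # negative influence: drop, raise the bar
                selected, best_acc = trial, acc
            elif acc == best_acc:              # redundant: drop
                selected = trial
            # else: significant feature, keep it
        return selected, best_acc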
Figure 2: Block diagram of the feature selection process
6. EXPERIMENTAL ANALYSIS AND RESULTS
In order to implement the PRS relevance approach to rank the features, as a first step various
reducts are generated from the LD dataset. For this purpose, the discernibility matrix approach of
the Rough Set Data Explorer software package ROSE2 is used, which generates 63 reducts from
the original LD dataset. The frequencies with which the various features occur in these reducts
are then computed; they are given in Table 3. Based on these frequencies and by applying Eq. 1,
the priority/significance values of the various features are calculated. The features ranked by
their significance are shown in Table 4.
Table 3: Frequencies of various attributes in reducts

Feature   Frequency      Feature   Frequency
DR        63             DSS       18
DS        34             DNS       23
DWE       32             DHA       21
DBA       41             DH        16
DA        44             DLL       50
ED        63             DLS       27
DM        63             RG        36
LM        41             STL       27
Table 4: Attributes with priority values

Rank   Feature   Significance      Rank   Feature   Significance
1      DR        1                 9      DS        0.5397
2      ED        1                 10     DWE       0.5079
3      DM        1                 11     DLS       0.4286
4      DLL       0.7937            12     STL       0.4286
5      DA        0.6984            13     DNS       0.3651
6      LM        0.6508            14     DHA       0.3333
7      DBA       0.6508            15     DSS       0.2857
8      RG        0.5714            16     DH        0.2540
For feature selection using the proposed algorithm, the classification accuracy of the whole LD
dataset with all available features is determined first; in the feature selection algorithm, the
construction of the best feature subset is mainly based on this value. Then, the set of features
ranked using the PRS relevance approach is given to the proposed feature selection algorithm
FeaSel. Since the features are ranked in decreasing order of significance, features with lower
ranks get eliminated during the initial stages. The algorithm starts with all features of LD, and in
the first iteration it selects the lowest ranked feature, DH, as a test feature. Since no change
occurs in the original classification accuracy when this feature is eliminated, it is designated as
redundant and permanently removed from the feature set. The same situation continues for the
features DSS, DHA, DNS, STL and DLS, selected in order from right to left from the ranked
feature set, and hence all these features are removed as well. But when the next feature, DWE, is
selected, there is a reduction in the classification accuracy, which signifies the influence of this
feature on the classification accuracy, and hence it is retained in the feature set. The process
continues until all features are evaluated. The performance of the various symptoms of LD
during the feature selection process is depicted in Figure 3.
Figure 3. Influence of various symptoms in classification
After evaluating all features of the LD dataset, the algorithm retains the feature set {DWE, DS,
DLL, DM}. These four features are significant because all the other features can be removed
from the LD dataset without affecting the classification performance. Table 5 shows the results
obtained from the classifier before and after the feature selection process. To determine the
accuracy, 10-fold cross validation is used.
Table 5: Classification results given by SMO
                                        Dataset prior to        Dataset reduced using
                                        feature selection       the proposed approach
No. of features                         16                      4
Classification accuracy (%)             98.6                    98.6
Time taken to build the model (sec.)    0.11                    0.01
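
As an illustrative sketch of this evaluation setup (not the actual experimental code), the following
uses scikit-learn's SVC with a polynomial kernel and 10-fold cross validation as a stand-in for
Weka's SMO; the file name is hypothetical, and the columns are the four selected symptoms with
t/f values as described in Section 4.

    # Illustrative stand-in for the Weka SMO evaluation: a polynomial-kernel
    # SVM scored with 10-fold cross validation (scikit-learn instead of Weka).
    import pandas as pd
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("ld_dataset.csv")          # hypothetical file of the 500 records
    X = (df[["DWE", "DS", "DLL", "DM"]] == "t") # four selected symptoms as booleans
    y = df["LD"]                                # decision attribute

    clf = SVC(kernel="poly")                    # polynomial kernel, as in the paper
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross validation
    print("mean accuracy:", scores.mean())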
7. DISCUSSION
From the experimental results presented in Table 5 it is clear that, for the proposed approach, a
75% reduction in the number of features does not affect the classification accuracy. It follows
that the original dataset contains about 75% redundant attributes, and that the presented feature
selection approach is efficient in removing these redundant attributes without affecting the
classification accuracy. From the comparison of results it can also be seen that, when the selected
significant features are used for classification, the time taken to build the learning model is
greatly reduced. This shows that an information system may contain non-relevant features, and
that identifying and removing them enables learning algorithms to operate faster. In other words,
increasing the number of features in a dataset is not always helpful for increasing the
classification performance: progressively adding features may reduce the classification rate after
a peak, a behaviour known as the peaking phenomenon.
8. CONCLUSION
In this paper, a novel hybrid feature selection approach is proposed to predict Learning
Disability in a cost effective way. The approach follows a method of assigning priorities to the
various symptoms of the LD dataset based on the general characteristics of the data alone. Each
symptom's priority value reflects its relative importance for predicting LD among the various
cases. By ranking the symptoms in decreasing order of significance, the least significant features
are eliminated one by one according to their involvement in predicting the learning disability.
The experimental results reveal the value of feature selection in classification for improving
performance measures such as learning speed and predictive accuracy. With the help of the
proposed method, redundant attributes can be removed efficiently from the LD dataset without
sacrificing the classification performance.
REFERENCES
[1] Richard Jensen (2005) Combining rough and fuzzy sets for feature selection, Ph.D thesis from
Internet.
[2] Yumin Chen, Duoqian Miao & Ruizhi Wang, (2010) “A Rough Set approach to feature selection
based on ant colony optimization”, Pattern Recognition Letters, Vol. 31, pp. 226-233.
[3] Petr Somol, Pavel Pudil & Josef Kittler, (2004) “Fast Branch & Bound Algorithms for Optimal
Feature Selection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 7,
pp. 900-912.
[4] Ning Zhong, Juzhen Dong & Setsuo Ohsuga, (2001) “Using Rough Sets with heuristics for feature
selection”, Journal of Intelligence Information systems, Vol. 16, pp.199-214.
[5] Raymer M L, Punch W F, Goodman E D, Kuhn L A & Jain A K, (2000) “Dimensionality Reduction
Using Genetic Algorithms”, IEEE Trans. Evolutionary Computation, Vol. 4, No. 2, pp. 164-171.
[6] Carmen Lai, Marcel J.T. Reinders & Lodewyk Wessels, (2006) "Random subspace method for
multivariate feature selection”, Pattern Recognition letters, Vol. 27, pp. 1067-1076.
[7] Ron Kohavi & Dan Sommerfield, (1995) “Feature subset selection using the wrapper method:
Overfitting and dynamic search space topology”, Proceedings of the First International Conference on
Knowledge Discovery and Data Mining, pp. 192-197.
[8] Isabelle Guyon, Jason Weston, Stephen Barnhill & Vladimir Vapnik, (2002) “Gene selection for
cancer classification using support vector machines”, Machine Learning, Kluwer Academic
Publishers, Vol. 46, pp. 389-422.
[9] Neumann J, Schnörr C & Steidl G, (2005) “Combined SVM based feature selection and
classification”, Machine Learning, Vol.61, pp.129-150.
[10] Gasca E,Sanchez J S & Alonso R, (2006) “Eliminating redundancy and irrelevance using a new
MLP based feature selection method”, Pattern Recognition, Vol. 39, pp. 313-315.
[11] Zong-Xia Xie, Qing-Hua Hu & Da-Ren Yu, (2006) “Improved feature selection algorithm base on
SVM and Correlation”, LNCS, Vol. 3971, pp. 1373-1380.
[12] Kira K & Rendell L A, (1992) “The feature selection problem: Traditional methods and a new
algorithm”, Proceedings of the International conference AAAI-92, San Jose, CA, pp. 129-134.
[13] Modrzejewski M, (1993) “Feature selection using Rough Set theory”, Proceedings of the European
Conference on Machine Learning ECML'93, Springer-Verlag, pp. 213-226.
[14] Manoranjan Dash & Huan Liu, (2003) “Consistency based search in feature selection”, Artificial
Intelligence, Vol. 151, pp. 155-176.
[15] Swati Shilaskar & Ashok Ghatol. Article, (2013) “Dimensionality Reduction Techniques for
Improved Diagnosis of Heart Disease”, International Journal of Computer Applications, Vol. 61, No.
5, pp. 1-8.
[16] Yao Y Y, (2003) “Information-theoretic measures for knowledge discovery and data mining”, in
Entropy Measures, Maximum Entropy and Emerging Applications, Springer Berlin, pp. 115-136.
[17] Miao D. Q & Hou, L, (2004) “A Comparison of Rough Set methods and representative learning
algorithms”, Fundamenta Informaticae. Vol. 59, pp. 203-219.
[18] Pawlak Z, (1991) Rough Sets: Theoretical aspects of Reasoning about Data, Kluwer Academic
Publishing, Dordrecht.
[19] Salamó M & López-Sánchez M, (2011) “Rough Set approaches to feature selection for
Case-Based Reasoning classifiers”, Pattern Recognition Letters, Vol. 32, pp. 280-292.
[20] Li J. & Cercone N, (2006) “ Discovering and Ranking Important Rules”, Proceedings of KDM
Workshop, Waterloo, Canada.
[21] Li J, (2007) Rough Set Based Rule Evaluations and their Applications, Ph.D thesis from Internet.
[22] Shen Q. & Chouchoulas A, (2001) “Rough Set – Based Dimensionality Reduction for Supervised and
Unsupervised Learning”, International Journal of Applied Mathematics and Computer Sciences, Vol.
11, No. 3, pp. 583-601.
[23] Jensen R, (2005) Combining rough and fuzzy sets for feature selection, Ph.D thesis from
Internet.
[24] Ian H. Witten & Eibe Frank, (2005) Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed., Morgan Kaufmann/Elsevier.
[25] Alper U., Alper, M. & Ratna Babu C, (2011) “A maximum relevance minimum redundancy feature
selection method based on swarm intelligence for support vector machine classification”, Information
Science, Vol. 181, pp. 4625-4641.
[26] Pablo Bermejo, Jose A. Gámez & Jose M. Puerta, (2011) “A GRASP algorithm for fast hybrid
(filter-wrapper) feature subset selection in high-dimensional datasets”, Science Direct, Pattern
Recognition Letters, Vol. 32, pp. 701-711.
ACKNOWLEDGEMENTS
This work was supported by University Grants Commission (UGC), New Delhi, India under the
Minor Research Project program.
AUTHOR
Sabu M K, received his Ph. D degree from Mahatma Gandhi University, Kottayam,
Kerala, India in 2014. He is currently an Associate Professor and also the Head of the
Department of Computer Applications in M.E.S College, Marampally, Aluva, Kerala.
His research interests include data mining, rough set theory, machine learning and soft
computing.