This document provides an overview of classification techniques for machine learning. It defines classification as predicting categorical class labels based on a training set. Decision tree induction and Bayes classification methods are described as common classification approaches. Decision trees are constructed recursively to partition data based on attribute tests. Information gain, gain ratio, and Gini index are discussed as measures for selecting the best attributes to test at each node. The naïve Bayes classifier is introduced as a simplified Bayesian approach based on conditional independence assumptions between attributes.
Jiawei Han, Micheline Kamber and Jian Pei
Data Mining: Concepts and Techniques, 3rd ed.
The Morgan Kaufmann Series in Data Management Systems
Morgan Kaufmann Publishers, July 2011. ISBN 978-0123814791
2. Classification: Basic Concepts (chapter outline)
Classification: Basic Concepts
Decision Tree Induction
Bayes Classification Methods
Rule-Based Classification
Model Evaluation and Selection
Techniques to Improve Classification Accuracy: Ensemble Methods
Summary
3. Prediction Problems: Classification vs. Numeric Prediction
Classification
predicts categorical class labels (discrete or nominal)
classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses the model to classify new data
Numeric Prediction
models continuous-valued functions, i.e., predicts unknown or missing values
Typical applications
Credit/loan approval
Medical diagnosis: is a tumor cancerous or benign?
Fraud detection: is a transaction fraudulent?
Web page categorization: which category does a page belong to?
4. Decision Tree Induction: An Example
Training data set: Buys_computer
The data set follows an example of Quinlan's ID3 (Playing Tennis)

age   | income | student | credit_rating | buys_computer
<=30  | high   | no      | fair          | no
<=30  | high   | no      | excellent     | no
31..40| high   | no      | fair          | yes
>40   | medium | no      | fair          | yes
>40   | low    | yes     | fair          | yes
>40   | low    | yes     | excellent     | no
31..40| low    | yes     | excellent     | yes
<=30  | medium | no      | fair          | no
<=30  | low    | yes     | fair          | yes
>40   | medium | yes     | fair          | yes
<=30  | medium | yes     | excellent     | yes
31..40| medium | no      | excellent     | yes
31..40| high   | yes     | fair          | yes
>40   | medium | no      | excellent     | no

Resulting tree:
age?
├── <=30: student?
│     ├── no → no
│     └── yes → yes
├── 31..40 → yes
└── >40: credit_rating?
      ├── excellent → no
      └── fair → yes
5. Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm; see the sketch below)
Tree is constructed in a top-down recursive divide-and-conquer manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Conditions for stopping partitioning:
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
There are no samples left
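As a rough illustration of this loop (a minimal sketch, not the textbook's pseudocode; the names build_tree, select_attribute, and majority_class are ours, and select_attribute stands for whichever attribute selection measure is plugged in):

```python
from collections import Counter

def majority_class(rows, target):
    """Most common class label among the rows (used for leaf voting)."""
    return Counter(r[target] for r in rows).most_common(1)[0][0]

def build_tree(rows, attributes, target, select_attribute):
    classes = {r[target] for r in rows}
    if len(classes) == 1:          # all samples belong to the same class
        return classes.pop()
    if not attributes:             # no remaining attributes: majority voting
        return majority_class(rows, target)
    best = select_attribute(rows, attributes, target)  # greedy heuristic step
    node = {best: {}}
    for value in {r[best] for r in rows}:              # partition recursively
        subset = [r for r in rows if r[best] == value]
        remaining = [a for a in attributes if a != best]
        node[best][value] = build_tree(subset, remaining, target, select_attribute)
    return node
```

With the Buys_computer table from Slide 4 as rows (class label at index 4) and an information-gain-based select_attribute (defined on the next slides), this reproduces the tree shown above.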
7. Attribute Selection Measure: Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Let p_i be the probability that an arbitrary tuple in D belongs to class C_i, estimated by |C_{i,D}| / |D|
Expected information (entropy) needed to classify a tuple in D:

    Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)

Information needed (after using A to split D into v partitions) to classify D:

    Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)

Information gained by branching on attribute A:

    Gain(A) = Info(D) - Info_A(D)
8. Attribute Selection: Information Gain
Class P: buys_computer = "yes" (9 tuples); Class N: buys_computer = "no" (5 tuples)

    Info(D) = I(9,5) = -\frac{9}{14}\log_2\left(\frac{9}{14}\right) - \frac{5}{14}\log_2\left(\frac{5}{14}\right) = 0.940

age   | p_i | n_i | I(p_i, n_i)
<=30  | 2   | 3   | 0.971
31..40| 4   | 0   | 0
>40   | 3   | 2   | 0.971

    Info_age(D) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694

Here the term \frac{5}{14} I(2,3) means "age <=30" has 5 out of 14 samples, with 2 yes'es and 3 no's. Hence

    Gain(age) = Info(D) - Info_age(D) = 0.246

Similarly,
    Gain(income) = 0.029
    Gain(student) = 0.151
    Gain(credit_rating) = 0.048

(Training data: the Buys_computer table from Slide 4. The values above are checked in the sketch below.)
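As a check on these numbers, here is a small self-contained Python sketch (our illustration, not code from the deck) that computes Info(D) and Gain(A) for the Buys_computer table; later sketches reuse rows, info(), and gain() from this block:

```python
from collections import Counter, defaultdict
from math import log2

# The Buys_computer training table from Slide 4:
# (age, income, student, credit_rating, buys_computer)
rows = [
    ("<=30", "high", "no", "fair", "no"),
    ("<=30", "high", "no", "excellent", "no"),
    ("31..40", "high", "no", "fair", "yes"),
    (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),
    (">40", "low", "yes", "excellent", "no"),
    ("31..40", "low", "yes", "excellent", "yes"),
    ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),
    (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"),
    ("31..40", "medium", "no", "excellent", "yes"),
    ("31..40", "high", "yes", "fair", "yes"),
    (">40", "medium", "no", "excellent", "no"),
]

def info(labels):
    """Expected information (entropy): Info(D) = -sum_i p_i log2(p_i)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(rows, attr_idx, class_idx=4):
    """Gain(A) = Info(D) - Info_A(D), where Info_A(D) is the size-weighted
    entropy of the partitions induced by attribute A."""
    partitions = defaultdict(list)
    for r in rows:
        partitions[r[attr_idx]].append(r[class_idx])
    info_a = sum(len(p) / len(rows) * info(p) for p in partitions.values())
    return info([r[class_idx] for r in rows]) - info_a

print(round(info([r[4] for r in rows]), 3))  # 0.94 -> Info(D)
for idx, name in enumerate(["age", "income", "student", "credit_rating"]):
    print(name, round(gain(rows, idx), 3))
# age 0.247, income 0.029, student 0.152, credit_rating 0.048
# (the slide truncates the first two to 0.246 and 0.151)
```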
9. Computing Information Gain for Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute
Must determine the best split point for A:
Sort the values of A in increasing order
Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (a_i + a_{i+1})/2 is the midpoint between the values of a_i and a_{i+1}
The point with the minimum expected information requirement for A is selected as the split-point for A
Split: D_1 is the set of tuples in D satisfying A ≤ split-point, and D_2 is the set of tuples in D satisfying A > split-point
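A minimal sketch of this midpoint scan, reusing info() from the Slide 8 block; the ages list below is hypothetical raw data, chosen only to be consistent with the <=30 / 31..40 / >40 buckets in the table:

```python
def best_numeric_split(values, labels):
    """Try the midpoint of each pair of adjacent sorted values and return
    the split-point with minimum expected information Info_A(D)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_info, best_split = float("inf"), None
    for i in range(n - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue  # equal adjacent values yield no midpoint
        split = (pairs[i][0] + pairs[i + 1][0]) / 2
        d1 = [lbl for v, lbl in pairs if v <= split]
        d2 = [lbl for v, lbl in pairs if v > split]
        info_a = (len(d1) * info(d1) + len(d2) * info(d2)) / n
        if info_a < best_info:
            best_info, best_split = info_a, split
    return best_split, best_info

# Hypothetical raw ages (one per row of the Buys_computer table)
ages = [25, 28, 35, 45, 50, 42, 38, 29, 26, 48, 27, 36, 39, 51]
print(best_numeric_split(ages, [r[4] for r in rows]))
```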
10. Gain Ratio for Attribute Selection (C4.5)
The information gain measure is biased towards attributes with a large number of values
C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain):

    SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\left(\frac{|D_j|}{|D|}\right)

    GainRatio(A) = Gain(A) / SplitInfo_A(D)

Ex.: gain_ratio(income) = 0.029 / 1.557 = 0.019
The attribute with the maximum gain ratio is selected as the splitting attribute
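Continuing the running sketch (rows, gain(), Counter, and log2 come from the Slide 8 block), gain ratio is a small addition:

```python
def split_info(rows, attr_idx):
    """SplitInfo_A(D): the entropy of the partition sizes themselves,
    which grows with the number of (balanced) attribute values."""
    counts = Counter(r[attr_idx] for r in rows)
    n = len(rows)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain_ratio(rows, attr_idx):
    return gain(rows, attr_idx) / split_info(rows, attr_idx)

print(round(split_info(rows, 1), 3))  # 1.557 -> SplitInfo_income(D)
print(round(gain_ratio(rows, 1), 3))  # 0.019 -> gain_ratio(income)
```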
11. Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as

    gini(D) = 1 - \sum_{j=1}^{n} p_j^2

where p_j is the relative frequency of class j in D
If a data set D is split on A into two subsets D_1 and D_2, the gini index gini_A(D) is defined as

    gini_A(D) = \frac{|D_1|}{|D|}\, gini(D_1) + \frac{|D_2|}{|D|}\, gini(D_2)

Reduction in impurity:

    \Delta gini(A) = gini(D) - gini_A(D)

The attribute that provides the smallest gini_A(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)
12. Computation of Gini Index
Ex.: D has 9 tuples in buys_computer = "yes" and 5 in "no":

    gini(D) = 1 - \left(\frac{9}{14}\right)^2 - \left(\frac{5}{14}\right)^2 = 0.459

Suppose the attribute income partitions D into 10 tuples in D_1: {low, medium} and 4 in D_2: {high}:

    gini_{income \in \{low, medium\}}(D) = \frac{10}{14}\, gini(D_1) + \frac{4}{14}\, gini(D_2) = 0.443

Gini_{low,high} is 0.458; Gini_{medium,high} is 0.450. Thus, split on {low, medium} (and {high}) since it has the lowest Gini index (the three values are checked in the sketch below)
All attributes are assumed continuous-valued
May need other tools, e.g., clustering, to get the possible split values
Can be modified for categorical attributes
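A quick check of these Gini values, again reusing rows and Counter from the Slide 8 sketch:

```python
def gini(labels):
    """gini(D) = 1 - sum_j p_j^2."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_binary_split(rows, attr_idx, left_values, class_idx=4):
    """gini_A(D) for a binary split of a categorical attribute into the
    tuples whose value is in left_values (D1) and the rest (D2)."""
    d1 = [r[class_idx] for r in rows if r[attr_idx] in left_values]
    d2 = [r[class_idx] for r in rows if r[attr_idx] not in left_values]
    n = len(rows)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

print(round(gini([r[4] for r in rows]), 3))                      # 0.459
print(round(gini_binary_split(rows, 1, {"low", "medium"}), 3))   # 0.443
print(round(gini_binary_split(rows, 1, {"low", "high"}), 3))     # 0.458
print(round(gini_binary_split(rows, 1, {"medium", "high"}), 3))  # 0.45
```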
13. Comparing Attribute Selection Measures
The three measures, in general, return good results, but:
Information gain: biased towards multivalued attributes
Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others
Gini index: biased towards multivalued attributes; has difficulty when the number of classes is large; tends to favor tests that result in equal-sized partitions and purity in both partitions
14. Other Attribute Selection Measures
CHAID: a popular decision tree algorithm; measure based on the χ2 test for independence
C-SEP: performs better than information gain and the Gini index in certain cases
G-statistic: has a close approximation to the χ2 distribution
MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest bits to both (1) encode the tree and (2) encode the exceptions to the tree
Multivariate splits (partition based on multiple variable combinations): CART finds multivariate splits based on a linear combination of attributes
Which attribute selection measure is the best? Most give good results; none is significantly superior to the others
15. Overfitting and Tree Pruning
Overfitting: an induced tree may overfit the training data
Too many branches, some of which may reflect anomalies due to noise or outliers
Poor accuracy for unseen samples
Two approaches to avoid overfitting (a postpruning sketch follows below):
Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold (difficult to choose an appropriate threshold)
Postpruning: remove branches from a "fully grown" tree, producing a sequence of progressively pruned trees; use a set of data different from the training data to decide which is the "best pruned tree"
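The deck names no particular pruning implementation; as one hedged illustration, scikit-learn's cost-complexity postpruning produces exactly such a sequence of progressively pruned trees, with a held-out set choosing among them (a sketch assuming scikit-learn is installed; the dataset is just a stand-in):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
# The held-out split plays the role of the "set of data different
# from the training data" used to pick the best pruned tree.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Each ccp_alpha along the path yields a progressively more pruned tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

trees = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr)
         for a in path.ccp_alphas]
best = max(trees, key=lambda t: t.score(X_val, y_val))
print(best.get_n_leaves(), round(best.score(X_val, y_val), 3))
```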
16. Enhancements to Basic Decision Tree Induction
Allow for continuous-valued attributes: dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals
Handle missing attribute values: assign the most common value of the attribute, or assign a probability to each of the possible values
Attribute construction: create new attributes based on existing ones that are sparsely represented; this reduces fragmentation, repetition, and replication
17. Classification in Large Databases
Classification: a classical problem extensively studied by statisticians and machine learning researchers
Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed
Why is decision tree induction popular?
relatively faster learning speed (than other classification methods)
convertible to simple and easy-to-understand classification rules
can use SQL queries for accessing databases
comparable classification accuracy with other methods
RainForest (VLDB'98: Gehrke, Ramakrishnan & Ganti) builds an AVC-list (attribute, value, class label); a toy version of the counting it relies on appears below
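The essential observation behind RainForest is that split measures need only (attribute value, class label) counts, never the raw tuples. A toy sketch of building those counts for one attribute (our illustration, not RainForest's actual implementation; reuses rows, Counter, and defaultdict from the Slide 8 block):

```python
def avc_set(rows, attr_idx, class_idx=4):
    """Class-label counts per attribute value. Information gain and Gini
    can be evaluated from these counts alone, which is what lets tree
    induction scale to data sets that do not fit in memory."""
    counts = defaultdict(Counter)
    for r in rows:
        counts[r[attr_idx]][r[class_idx]] += 1
    return dict(counts)

print(avc_set(rows, 0))
# {'<=30': Counter({'no': 3, 'yes': 2}),
#  '31..40': Counter({'yes': 4}),
#  '>40': Counter({'yes': 3, 'no': 2})}
```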
18. Bayesian Classification: Why?
A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities
Foundation: based on Bayes' theorem
Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has comparable performance with decision tree and selected neural network classifiers
Incremental: each training example can incrementally increase/decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
19. Bayes' Theorem: Basics
Total probability theorem:

    P(B) = \sum_{i=1}^{M} P(B \mid A_i)\, P(A_i)

Bayes' theorem:

    P(H \mid X) = \frac{P(X \mid H)\, P(H)}{P(X)}

Let X be a data sample ("evidence"): class label is unknown
Let H be a hypothesis that X belongs to class C
Classification is to determine P(H|X), the posterior probability: the probability that the hypothesis holds given the observed data sample X
P(H) (prior probability): the initial probability, e.g., that X will buy a computer, regardless of age, income, …
P(X): the probability that the sample data is observed
P(X|H) (likelihood): the probability of observing the sample X given that the hypothesis holds, e.g., given that X will buy a computer, the probability that X is 31..40 with medium income
20. Prediction Based on Bayes' Theorem
Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem:

    P(H \mid X) = \frac{P(X \mid H)\, P(H)}{P(X)}

Informally, this can be viewed as: posterior = likelihood × prior / evidence
Predict that X belongs to C_i iff the probability P(C_i|X) is the highest among all the P(C_k|X) for all k classes
Practical difficulty: it requires initial knowledge of many probabilities, involving significant computational cost
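In code, the decision rule is just an argmax over per-class scores; a generic sketch (names illustrative):

```python
def bayes_predict(x, classes, prior, likelihood):
    """Predict the class Ci with the highest posterior P(Ci|X).
    P(X) is identical for every class, so comparing the unnormalized
    products P(X|Ci) * P(Ci) is sufficient."""
    return max(classes, key=lambda c: likelihood(x, c) * prior[c])
```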
21. Naïve Bayes Classifier
A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes):

    P(X \mid C_i) = \prod_{k=1}^{n} P(x_k \mid C_i) = P(x_1 \mid C_i) \times P(x_2 \mid C_i) \times \cdots \times P(x_n \mid C_i)

This greatly reduces the computation cost: only the class distribution needs to be counted
If A_k is categorical, P(x_k|C_i) is the number of tuples in C_i having value x_k for A_k, divided by |C_{i,D}| (the number of tuples of C_i in D)
If A_k is continuous-valued, P(x_k|C_i) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:

    g(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

    P(x_k \mid C_i) = g(x_k, \mu_{C_i}, \sigma_{C_i})
22. Naïve Bayes Classifier: Training Dataset
Classes:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'
Data to be classified:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
Training data: the Buys_computer table from Slide 4
23. Naïve Bayes Classifier: An Example
(Training data: the Buys_computer table from Slide 4.)
P(Ci): P(buys_computer = "yes") = 9/14 = 0.643
P(buys_computer = "no") = 5/14 = 0.357
Compute P(X|Ci) for each class:
P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
P(X|Ci): P(X | buys_computer = "yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
P(X | buys_computer = "no") = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
P(X|Ci) × P(Ci): P(X | buys_computer = "yes") × P(buys_computer = "yes") = 0.028
P(X | buys_computer = "no") × P(buys_computer = "no") = 0.007
Therefore, X belongs to class buys_computer = "yes" (these numbers are reproduced by the sketch below)
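The same arithmetic as a short sketch over the rows table from the Slide 8 block:

```python
def nb_scores(x, rows, class_idx=4):
    """Unnormalized naive Bayes scores P(X|Ci) * P(Ci), with each
    P(xk|Ci) estimated by simple counting, as on this slide."""
    scores = {}
    for c in {r[class_idx] for r in rows}:
        in_class = [r for r in rows if r[class_idx] == c]
        score = len(in_class) / len(rows)        # prior P(Ci)
        for k, xk in enumerate(x):
            match = sum(1 for r in in_class if r[k] == xk)
            score *= match / len(in_class)       # P(xk|Ci)
        scores[c] = score
    return scores

x = ("<=30", "medium", "yes", "fair")
print(nb_scores(x, rows))  # {'yes': ~0.028, 'no': ~0.007} -> predict "yes"
```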
24. Avoiding the Zero-Probability Problem
Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero:

    P(X \mid C_i) = \prod_{k=1}^{n} P(x_k \mid C_i)

Ex.: suppose a dataset with 1000 tuples: income = low (0), income = medium (990), and income = high (10)
Use the Laplacian correction (or Laplacian estimator): add 1 to each case
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
The "corrected" probability estimates are close to their "uncorrected" counterparts
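A one-line version of the correction, reproducing the slide's numbers:

```python
def laplace_prob(count, total, n_values):
    """Laplacian correction: add 1 to each value's count (and n_values
    to the denominator) so no estimated probability is ever zero."""
    return (count + 1) / (total + n_values)

income_counts = {"low": 0, "medium": 990, "high": 10}
for value, count in income_counts.items():
    print(value, laplace_prob(count, 1000, len(income_counts)))
# low 1/1003 ≈ 0.001, medium 991/1003 ≈ 0.988, high 11/1003 ≈ 0.011
```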
25. Naïve Bayes Classifier: Comments
Advantages:
Easy to implement
Good results obtained in most of the cases
Disadvantages:
Assumption of class conditional independence, and therefore loss of accuracy
Practically, dependencies exist among variables
E.g., hospitals: patient profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
Dependencies among these cannot be modeled by the naïve Bayes classifier
How to deal with these dependencies? Bayesian belief networks
26. Naïve Bayes Classifier: Types
Gaussian Naive Bayes: continuous values associated with each feature are assumed to be distributed according to a Gaussian distribution (also called the normal distribution).
Multinomial Naive Bayes: feature vectors represent the frequencies with which certain events have been generated by a multinomial distribution. This is the event model typically used for document classification.
Bernoulli Naive Bayes: in the multivariate Bernoulli event model, features are independent booleans (binary variables) describing inputs. Like the multinomial model, it is popular for document classification tasks, where binary term-occurrence features (i.e., whether a word occurs in a document or not) are used rather than term frequencies (i.e., how often a word occurs in the document). A usage sketch of all three follows below.
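For concreteness, a hedged usage sketch of the three variants as implemented in scikit-learn (assuming it is installed; the tiny datasets are made up purely for illustration):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB

# Gaussian NB: continuous features, per-class normal distributions
X = np.array([[6.0, 180], [5.9, 190], [5.1, 100], [5.4, 110]])
y = ["a", "a", "b", "b"]
print(GaussianNB().fit(X, y).predict([[5.8, 130]]))

# Multinomial vs. Bernoulli NB on toy documents: term frequencies
# vs. binary term occurrence (BernoulliNB binarizes counts itself)
docs = ["free money now", "meeting at noon", "free free prize", "noon project update"]
labels = ["spam", "ham", "spam", "ham"]
counts = CountVectorizer().fit_transform(docs)
print(MultinomialNB().fit(counts, labels).predict(counts))
print(BernoulliNB().fit(counts, labels).predict(counts))
```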
27. Summary (I)
Classification is a form of data analysis that extracts models describing important data classes.
Effective and scalable methods have been developed for decision tree induction, naive Bayesian classification, rule-based classification, and many other classification methods.
Evaluation metrics include: accuracy, sensitivity, specificity, precision, recall, F measure, and Fß measure.
Stratified k-fold cross-validation is recommended for accuracy estimation. Bagging and boosting can be used to increase overall accuracy by learning and combining a series of individual models.
28. Summary (II)
Significance tests and ROC curves are useful for model selection.
There have been numerous comparisons of the different classification methods; the matter remains a research topic.
No single method has been found to be superior over all others for all data sets.
Issues such as accuracy, training time, robustness, scalability, and interpretability must be considered and can involve trade-offs, further complicating the quest for an overall superior method.
29. References (1)
C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.
C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2): 121-168, 1998.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. KDD'95.
H. Cheng, X. Yan, J. Han, and C.-W. Hsu. Discriminative Frequent Pattern Analysis for Effective Classification. ICDE'07.
H. Cheng, X. Yan, J. Han, and P. S. Yu. Direct Discriminative Pattern Mining for Effective Classification. ICDE'08.
W. Cohen. Fast effective rule induction. ICML'95.
G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for gene expression data. SIGMOD'05.
30. References (2)
A. J. Dobson. An Introduction to Generalized Linear Models. Chapman & Hall, 1990.
G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99.
R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed. John Wiley, 2001.
U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI'94.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 1997.
J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest: A framework for fast decision tree construction of large datasets. VLDB'98.
J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT: Optimistic Decision Tree Construction. SIGMOD'99.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001.
D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 1995.
W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. ICDM'01.
31. References (3)
T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 2000.
J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, Blackwell Business, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. EDBT'96.
T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey. Data Mining and Knowledge Discovery, 2(4): 345-389, 1998.
J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML'93.
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96.
32. References (4)
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. VLDB'98.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. VLDB'96.
J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, 1990.
P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley, 2005.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.
S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2ed. Morgan Kaufmann, 2005.
X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03.
H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical clusters. KDD'03.