This document covers classification and prediction techniques in data mining, including decision tree induction, Bayesian classification, and support vector machines. It also discusses scaling classification to large databases, evaluating model accuracy, and presenting classification results visually. The key methods are decision tree construction using information gain, the naïve Bayesian classifier based on Bayes' theorem, and scalable tree learning with techniques such as RainForest.
Ch06
1. Chapter 6. Classification and Prediction

- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian classification
- Rule-based classification
- Classification by backpropagation
- Support Vector Machines (SVM)
- Associative classification
- Other classification methods
- Prediction
- Accuracy and error measures
- Ensemble methods
- Model selection
- Summary
2. Classification vs. Prediction

Classification:
- predicts categorical class labels (discrete or nominal)
- classifies data (constructs a model) based on the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data

Prediction:
- models continuous-valued functions, i.e., predicts unknown or missing values

Typical applications:
- Credit approval
- Target marketing
- Medical diagnosis
- Fraud detection
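To make the distinction concrete, here is a minimal sketch using scikit-learn (not part of the original slides; the features and values are made up for illustration). The classifier returns a discrete label, while the predictor (a regressor) returns a continuous estimate:

```python
# Sketch only: contrasting classification (categorical output) with
# prediction/regression (continuous output). Feature values are invented.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

X_train = [[25, 30000], [40, 80000], [35, 50000], [50, 120000]]  # [age, income]

# Classification: predict a discrete class label ("approve" / "reject")
y_labels = ["reject", "approve", "reject", "approve"]
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_labels)
print(clf.predict([[45, 90000]]))   # -> a class label, e.g. ['approve']

# Prediction: model a continuous-valued function (e.g. a credit limit)
y_values = [1000.0, 5000.0, 2500.0, 9000.0]
reg = LinearRegression().fit(X_train, y_values)
print(reg.predict([[45, 90000]]))   # -> a continuous estimate
```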
3. April 20, 2014 Data Mining: Concepts and Techniques 3
Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined
class, as determined by the class label attribute
The set of tuples used for model construction is training set
The model is represented as classification rules, decision
trees, or mathematical formulae
Model usage: for classifying future or unknown objects
Estimate accuracy of the model
The known label of test sample is compared with the
classified result from the model
Accuracy rate is the percentage of test set samples that are
correctly classified by the model
Test set is independent of training set, otherwise over-fitting
will occur
If the accuracy is acceptable, use the model to classify data
tuples whose class labels are not known
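A minimal sketch of the two steps (assuming Python with scikit-learn, which the slides do not prescribe; the feature encoding below is hypothetical):
```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[3, 0], [7, 0], [2, 1], [7, 1], [6, 0], [3, 1]]  # e.g., (years, rank encoded as 0/1)
y = ["no", "yes", "yes", "yes", "no", "no"]           # class label: tenured

# Step 1: model construction on a training set independent of the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: estimate accuracy on the held-out test set before classifying unseen data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```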
4. April 20, 2014 Data Mining: Concepts and Techniques 4
Process (1): Model Construction
Training
Data
NAME RANK YEARS TENURED
Mike Assistant Prof 3 no
Mary Assistant Prof 7 yes
Bill Professor 2 yes
Jim Associate Prof 7 yes
Dave Assistant Prof 6 no
Anne Associate Prof 3 no
Classification
Algorithms
IF rank = ‘professor’
OR years > 6
THEN tenured = ‘yes’
Classifier
(Model)
5. April 20, 2014 Data Mining: Concepts and Techniques 5
Process (2): Using the Model in Prediction
Classifier
Testing
Data
NAME RANK YEARS TENURED
Tom Assistant Prof 2 no
Merlisa Associate Prof 7 no
George Professor 5 yes
Joseph Assistant Prof 7 yes
Unseen Data
(Jeff, Professor, 4)
Tenured?
6. April 20, 2014 Data Mining: Concepts and Techniques 6
Supervised vs. Unsupervised Learning
Supervised learning (classification)
Supervision: The training data (observations,
measurements, etc.) are accompanied by labels
indicating the class of the observations
New data is classified based on the training set
Unsupervised learning (clustering)
The class labels of training data are unknown
Given a set of measurements, observations, etc., the aim is
to establish the existence of classes or clusters in the data
8. April 20, 2014 Data Mining: Concepts and Techniques 8
Issues: Data Preparation
Data cleaning
Preprocess data in order to reduce noise and handle
missing values
Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
Data transformation
Generalize and/or normalize data
9. April 20, 2014 Data Mining: Concepts and Techniques 9
Issues: Evaluating Classification Methods
Accuracy
classifier accuracy: predicting class label
predictor accuracy: guessing value of predicted
attributes
Speed
time to construct the model (training time)
time to use the model (classification/prediction time)
Robustness: handling noise and missing values
Scalability: efficiency in disk-resident databases
Interpretability
understanding and insight provided by the model
Other measures, e.g., goodness of rules, such as decision
tree size or compactness of classification rules
11. April 20, 2014 Data Mining: Concepts and Techniques 11
Decision Tree Induction: Training Dataset
age income student credit_rating buys_computer
<=30 high no fair no
<=30 high no excellent no
31…40 high no fair yes
>40 medium no fair yes
>40 low yes fair yes
>40 low yes excellent no
31…40 low yes excellent yes
<=30 medium no fair no
<=30 low yes fair yes
>40 medium yes fair yes
<=30 medium yes excellent yes
31…40 medium no excellent yes
31…40 high yes fair yes
>40 medium no excellent no
This follows an example of Quinlan's ID3 (Playing Tennis)
12. April 20, 2014 Data Mining: Concepts and Techniques 12
Output: A Decision Tree for “buys_computer”
age?
  <=30   → student?
             no  → no
             yes → yes
  31..40 → yes
  >40    → credit rating?
             excellent → no
             fair      → yes
13. April 20, 2014 Data Mining: Concepts and Techniques 13
Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down recursive divide-and-conquer
manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are
discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or
statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning –
majority voting is employed for classifying the leaf
There are no samples left
14. April 20, 2014 Data Mining: Concepts and Techniques 14
Attribute Selection Measure:
Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Let pi be the probability that an arbitrary tuple in D
belongs to class Ci, estimated by |Ci, D|/|D|
Expected information (entropy) needed to classify a tuple in D:

Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)

Information needed (after using A to split D into v partitions) to classify D:

Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)

Information gained by branching on attribute A:

Gain(A) = Info(D) - Info_A(D)
15. April 20, 2014 Data Mining: Concepts and Techniques 15
Attribute Selection: Information Gain
Class P: buys_computer = "yes" (9 tuples); Class N: buys_computer = "no" (5 tuples)

Info(D) = I(9,5) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940

age     p_i   n_i   I(p_i, n_i)
<=30    2     3     0.971
31…40   4     0     0
>40     3     2     0.971

Info_{age}(D) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694

The term \frac{5}{14} I(2,3) means "age <= 30" has 5 out of 14 samples, with 2 yes'es and 3 no's. Hence

Gain(age) = Info(D) - Info_{age}(D) = 0.246

Similarly, Gain(income) = 0.029, Gain(student) = 0.151, and Gain(credit_rating) = 0.048, so age is selected as the splitting attribute.
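A small Python sketch of these calculations (the `rows`/`labels` names are illustrative; the dataset is the 14-tuple table above):
```python
from collections import Counter
from math import log2

def info(labels):
    """Expected information (entropy): Info(D) = -sum p_i * log2(p_i)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(rows, labels, attr):
    """Information gain of splitting dataset `rows` (a list of dicts) on `attr`."""
    n = len(labels)
    info_a = 0.0
    for v in set(r[attr] for r in rows):
        subset = [y for r, y in zip(rows, labels) if r[attr] == v]
        info_a += len(subset) / n * info(subset)   # |Dj|/|D| * Info(Dj)
    return info(labels) - info_a

# On the buys_computer data, info(labels) == 0.940 and
# gain(rows, labels, "age") == 0.246, matching the worked example.
```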
16. April 20, 2014 Data Mining: Concepts and Techniques 16
Enhancements to Basic Decision Tree Induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that
partition the continuous attribute value into a discrete
set of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones that are
sparsely represented
This reduces fragmentation, repetition, and replication
17. April 20, 2014 Data Mining: Concepts and Techniques 17
Classification in Large Databases
Classification—a classical problem extensively studied by
statisticians and machine learning researchers
Scalability: Classifying data sets with millions of examples
and hundreds of attributes with reasonable speed
Why decision tree induction in data mining?
relatively faster learning speed (than other classification
methods)
convertible to simple and easy to understand
classification rules
can use SQL queries for accessing databases
comparable classification accuracy with other methods
18. April 20, 2014 Data Mining: Concepts and Techniques 18
Scalable Decision Tree Induction Methods
SLIQ (EDBT'96 — Mehta et al.)
Builds an index for each attribute; only the class list and
the current attribute list reside in memory
SPRINT (VLDB'96 — J. Shafer et al.)
Constructs an attribute list data structure
PUBLIC (VLDB'98 — Rastogi & Shim)
Integrates tree splitting and tree pruning: stops growing
the tree earlier
RainForest (VLDB'98 — Gehrke, Ramakrishnan & Ganti)
Builds an AVC-list (attribute, value, class label)
BOAT (PODS'99 — Gehrke, Ganti, Ramakrishnan & Loh)
Uses bootstrapping to create several small samples
19. April 20, 2014 Data Mining: Concepts and Techniques 19
Scalability Framework for RainForest
Separates the scalability aspects from the criteria that
determine the quality of the tree
Builds an AVC-list: AVC (Attribute, Value, Class_label)
AVC-set (of an attribute X )
Projection of training dataset onto the attribute X and
class label where counts of individual class label are
aggregated
AVC-group (of a node n )
Set of AVC-sets of all predictor attributes at the node n
20. April 20, 2014 Data Mining: Concepts and Techniques 20
Rainforest: Training Set and Its AVC Sets
Training Examples: the 14-tuple buys_computer dataset shown earlier

AVC-set on Age
Age      Buy_Computer: yes   no
<=30     3                   2
31..40   4                   0
>40      3                   2

AVC-set on income
income   yes   no
high     2     2
medium   4     2
low      3     1

AVC-set on Student
student  yes   no
yes      6     1
no       3     4

AVC-set on credit_rating
credit_rating  yes   no
fair           6     2
excellent      3     3
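A hedged Python sketch of how an AVC-set and AVC-group might be built (the dict-based representation is an illustration, not RainForest's actual disk layout):
```python
from collections import defaultdict

def avc_set(rows, attr, class_attr="buys_computer"):
    """AVC-set of one attribute: aggregate class-label counts per attribute value."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in rows:
        counts[r[attr]][r[class_attr]] += 1
    return {v: dict(c) for v, c in counts.items()}

def avc_group(rows, predictors):
    """AVC-group of a node: the AVC-sets of all predictor attributes at that node."""
    return {a: avc_set(rows, a) for a in predictors}

# On the training examples above, avc_set(rows, "age") yields
# {"<=30": {"yes": 3, "no": 2}, "31..40": {"yes": 4}, ">40": {"yes": 3, "no": 2}}
```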
21. April 20, 2014 Data Mining: Concepts and Techniques 21
BOAT (Bootstrapped Optimistic Algorithm
for Tree Construction)
Use a statistical technique called bootstrapping to create
several smaller samples (subsets), each of which fits in memory
Each subset is used to create a tree, resulting in several trees
These trees are examined and used to construct a new tree T'
It turns out that T' is very close to the tree that would
be generated using the whole data set together
Advantage: requires only two scans of the DB; BOAT is an incremental algorithm
22. April 20, 2014 Data Mining: Concepts and Techniques 22
Presentation of Classification Results
23. April 20, 2014 Data Mining: Concepts and Techniques 23
Visualization of a Decision Tree in SGI/MineSet 3.0
24. April 20, 2014 Data Mining: Concepts and Techniques 24
Interactive Visual Mining by Perception-Based
Classification (PBC)
26. April 20, 2014 Data Mining: Concepts and Techniques 26
Bayesian Classification: Why?
A statistical classifier: performs probabilistic prediction,
i.e., predicts class membership probabilities
Foundation: Based on Bayes' Theorem.
Performance: A simple Bayesian classifier, naïve Bayesian
classifier, has comparable performance with decision tree
and selected neural network classifiers
Incremental: Each training example can incrementally
increase/decrease the probability that a hypothesis is
correct — prior knowledge can be combined with observed
data
Standard: Even when Bayesian methods are
computationally intractable, they can provide a standard
of optimal decision making against which other methods
can be measured
27. April 20, 2014 Data Mining: Concepts and Techniques 27
Bayesian Theorem: Basics
Let X be a data sample ("evidence"); its class label is unknown
Let H be a hypothesis that X belongs to class C
Classification is to determine P(H|X), the posterior probability
that the hypothesis holds given the observed data sample X
P(H) (prior probability): the initial probability
E.g., X will buy a computer, regardless of age, income, …
P(X): the probability that the sample data is observed
P(X|H) (the likelihood): the probability of observing
the sample X, given that the hypothesis holds
E.g., given that X will buy a computer, the probability that X is
31..40 with medium income
28. April 20, 2014 Data Mining: Concepts and Techniques 28
Bayesian Theorem
Given training data X, the posterior probability of a
hypothesis H, P(H|X), follows Bayes' theorem:

P(H|X) = \frac{P(X|H)\, P(H)}{P(X)}

Informally, this can be written as
posterior = likelihood × prior / evidence
Predict that X belongs to Ci iff the probability P(Ci|X) is the
highest among all the P(Ck|X) for the k classes
Practical difficulty: requires initial knowledge of many
probabilities, at significant computational cost
29. April 20, 2014 Data Mining: Concepts and Techniques 29
Towards Naïve Bayesian Classifier
Let D be a training set of tuples and their associated class
labels, and each tuple is represented by an n-D attribute
vector X = (x1, x2, …, xn)
Suppose there are m classes C1, C2, …, Cm.
Classification is to derive the maximum a posteriori class, i.e., the
class with maximal P(Ci|X)
This can be derived from Bayes' theorem:

P(C_i|X) = \frac{P(X|C_i)\, P(C_i)}{P(X)}

Since P(X) is constant for all classes, only
P(X|C_i)\, P(C_i) needs to be maximized
30. April 20, 2014 Data Mining: Concepts and Techniques 30
Derivation of Naïve Bayes Classifier
A simplified assumption: attributes are conditionally
independent (i.e., no dependence relation between attributes):

P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i) = P(x_1|C_i) \times P(x_2|C_i) \times \dots \times P(x_n|C_i)

This greatly reduces the computation cost: only count
the class distribution
If A_k is categorical, P(x_k|C_i) is the # of tuples in C_i having
value x_k for A_k, divided by |C_i,D| (# of tuples of C_i in D)
If A_k is continuous-valued, P(x_k|C_i) is usually computed
based on a Gaussian distribution with mean \mu and
standard deviation \sigma:

g(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

and P(x_k|C_i) = g(x_k, \mu_{C_i}, \sigma_{C_i})
31. April 20, 2014 Data Mining: Concepts and Techniques 31
Naïve Bayesian Classifier: Training Dataset
Class:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'
Data sample:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
(Training data: the same 14-tuple buys_computer dataset shown in the decision-tree example above.)
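A minimal Python sketch of naïve Bayesian training and scoring on this data (function and variable names are illustrative). On the sample X above it reproduces the classic result P(X|yes)P(yes) ≈ 0.028 > P(X|no)P(no) ≈ 0.007, so X is classified as buys_computer = yes:
```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Estimate priors P(Ci) and conditional counts from frequency counts."""
    class_counts = Counter(labels)
    priors = {c: n / len(labels) for c, n in class_counts.items()}
    cond = defaultdict(lambda: defaultdict(Counter))   # class -> attr -> value counts
    for r, c in zip(rows, labels):
        for attr, val in r.items():
            cond[c][attr][val] += 1
    return priors, cond, class_counts

def posterior_scores(x, priors, cond, class_counts):
    """P(X|Ci)P(Ci) for each class, under the independence assumption."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for attr, val in x.items():
            p *= cond[c][attr][val] / class_counts[c]  # P(x_k | Ci)
        scores[c] = p
    return scores   # pick the class with the highest score
```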
33. April 20, 2014 Data Mining: Concepts and Techniques 33
Avoiding the 0-Probability Problem
Naïve Bayesian prediction requires each conditional prob. be non-
zero. Otherwise, the predicted prob. will be zero
Ex. Suppose a dataset with 1000 tuples, income=low (0), income=
medium (990), and income = high (10),
Use Laplacian correction (or Laplacian estimator)
Adding 1 to each case
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
The "corrected" prob. estimates are close to their "uncorrected"
counterparts
(Recall the product form P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i): a single zero factor zeroes the whole posterior.)
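A one-function sketch of the Laplacian correction (Python; names are illustrative):
```python
def laplace(counts, total):
    """Laplacian correction: add 1 to each value count.
    counts: dict value -> frequency; total: number of tuples."""
    q = len(counts)   # number of distinct values of the attribute
    return {v: (c + 1) / (total + q) for v, c in counts.items()}

# laplace({"low": 0, "medium": 990, "high": 10}, 1000) gives
# 1/1003, 991/1003, and 11/1003, as on the slide.
```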
34. April 20, 2014 Data Mining: Concepts and Techniques 34
Naïve Bayesian Classifier: Comments
Advantages
Easy to implement
Good results obtained in most of the cases
Disadvantages
Assumption: class conditional independence, therefore
loss of accuracy
Practically, dependencies exist among variables
E.g., hospital patient data: profile (age, family history, etc.),
symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
Dependencies among these cannot be modeled by Naïve
Bayesian Classifier
How to deal with these dependencies?
Bayesian Belief Networks
35. April 20, 2014 Data Mining: Concepts and Techniques 35
Bayesian Belief Networks
A Bayesian belief network allows a subset of the variables
to be conditionally independent
A graphical model of causal relationships
Represents dependency among the variables
Gives a specification of joint probability distribution
(Figure: a small DAG with nodes X, Y, Z, P and links X→Z, Y→Z, Y→P.)
Nodes: random variables
Links: dependency
X and Y are the parents of Z, and Y is
the parent of P
No dependency between Z and P
Has no loops or cycles
36. April 20, 2014 Data Mining: Concepts and Techniques 36
Bayesian Belief Network: An Example
(Figure: a belief network over the variables FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea.)

The conditional probability table (CPT) for variable LungCancer:

        (FH, S)   (FH, ~S)   (~FH, S)   (~FH, ~S)
LC      0.8       0.5        0.7        0.1
~LC     0.2       0.5        0.3        0.9

The CPT shows the conditional probability for each possible
combination of the values of its parents
Derivation of the probability of a particular combination of values of
X, from the CPT:

P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid Parents(Y_i))
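A small Python sketch of reading the LungCancer CPT and using it as one factor of the product above (the boolean encoding is an assumption):
```python
# P(LungCancer | FamilyHistory, Smoker), values taken from the slide's CPT,
# keyed by (fh, s); the stored number is P(LC = True | fh, s).
cpt_lc = {
    (True,  True ): 0.8, (True,  False): 0.5,
    (False, True ): 0.7, (False, False): 0.1,
}

def p_lung_cancer(lc, fh, s):
    """One factor P(x_i | Parents(Y_i)) of the joint-probability product."""
    p = cpt_lc[(fh, s)]
    return p if lc else 1.0 - p

# e.g., the LungCancer factor for (FH=True, S=False, LC=True):
assert p_lung_cancer(True, fh=True, s=False) == 0.5
```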
37. April 20, 2014 Data Mining: Concepts and Techniques 37
Training Bayesian Networks
Several scenarios:
Given both the network structure and all variables
observable: learn only the CPTs
Network structure known, some hidden variables:
gradient descent (greedy hill-climbing) method,
analogous to neural network learning
Network structure unknown, all variables observable:
search through the model space to reconstruct
network topology
Unknown structure, all hidden variables: No good
algorithms known for this purpose
Ref. D. Heckerman: Bayesian networks for data mining
39. April 20, 2014 Data Mining: Concepts and Techniques 39
Using IF-THEN Rules for Classification
Represent the knowledge in the form of IF-THEN rules
R: IF age = youth AND student = yes THEN buys_computer = yes
Rule antecedent/precondition vs. rule consequent
Assessment of a rule: coverage and accuracy
ncovers = # of tuples covered by R
ncorrect = # of tuples correctly classified by R
coverage(R) = ncovers /|D| /* D: training data set */
accuracy(R) = ncorrect / ncovers
If more than one rule is triggered, need conflict resolution
Size ordering: assign the highest priority to the triggering rule that has
the "toughest" requirement (i.e., the most attribute tests)
Class-based ordering: decreasing order of prevalence or misclassification
cost per class
Rule-based ordering (decision list): rules are organized into one long
priority list, according to some measure of rule quality or by experts
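A short Python sketch of the coverage and accuracy computations (the rule representation is hypothetical):
```python
def assess_rule(rows, labels, antecedent, consequent):
    """coverage(R) = n_covers / |D|;  accuracy(R) = n_correct / n_covers."""
    covered = [(r, y) for r, y in zip(rows, labels)
               if all(r.get(a) == v for a, v in antecedent.items())]
    n_covers = len(covered)
    n_correct = sum(1 for _, y in covered if y == consequent)
    return n_covers / len(rows), (n_correct / n_covers) if n_covers else 0.0

# R: IF age = "youth" AND student = "yes" THEN buys_computer = "yes"
# coverage, accuracy = assess_rule(rows, labels,
#                                  {"age": "youth", "student": "yes"}, "yes")
```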
40. April 20, 2014 Data Mining: Concepts and Techniques 40
(The buys_computer decision tree from the earlier slide: age? splits to <=30 → student?, 31..40 → yes, >40 → credit rating?.)
Example: Rule extraction from our buys_computer decision-tree
IF age = young AND student = no THEN buys_computer = no
IF age = young AND student = yes THEN buys_computer = yes
IF age = mid-age THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = no
IF age = old AND credit_rating = fair THEN buys_computer = yes
Rule Extraction from a Decision Tree
Rules are easier to understand than large trees
One rule is created for each path from the root
to a leaf
Each attribute-value pair along a path forms a
conjunction: the leaf holds the class prediction
Rules are mutually exclusive and exhaustive
41. April 20, 2014 Data Mining: Concepts and Techniques 41
Rule Extraction from the Training Data
Sequential covering algorithm: Extracts rules directly from training data
Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
Rules are learned sequentially, each for a given class Ci will cover many
tuples of Ci but none (or few) of the tuples of other classes
Steps:
Rules are learned one at a time
Each time a rule is learned, the tuples covered by the rules are
removed
The process repeats on the remaining tuples until a termination
condition holds, e.g., when there are no more training examples or when the quality
of a rule returned is below a user-specified threshold
Compare with decision-tree induction, which learns a set of rules simultaneously
43. April 20, 2014 Data Mining: Concepts and Techniques 43
Classification:
predicts categorical class labels
E.g., personal homepage classification
x_i = (x_1, x_2, x_3, …), y_i = +1 or -1
x_1: # of occurrences of the word "homepage"; x_2: # of occurrences of the word "welcome"
Mathematically: x \in X = \Re^n, y \in Y = \{+1, -1\}
We want a function f: X \to Y
Classification: A Mathematical Mapping
44. April 20, 2014 Data Mining: Concepts and Techniques 44
Linear Classification
Binary classification problem
The data above the red line belongs to class 'x';
the data below the red line belongs to class 'o'
Examples: SVM, Perceptron, Probabilistic Classifiers
(Figure: a 2-D scatter plot of 'x' and 'o' points separated by a line.)
45. April 20, 2014 Data Mining: Concepts and Techniques 45
Discriminative Classifiers
Advantages
prediction accuracy is generally high
As compared to Bayesian methods – in general
robust, works when training examples contain errors
fast evaluation of the learned target function
Bayesian networks are normally slow
Criticism
long training time
difficult to understand the learned function (weights)
Bayesian networks can be used easily for pattern discovery
not easy to incorporate domain knowledge
Easy in the form of priors on the data or distributions
46. April 20, 2014 Data Mining: Concepts and Techniques 46
Perceptron & Winnow
Vector: x, w; scalar: x, y, w
Input: {(x_1, y_1), …}
Output: a classification function f(x) with
f(x_i) > 0 for y_i = +1 and
f(x_i) < 0 for y_i = -1
f(x): w·x + b = 0, or w_1 x_1 + w_2 x_2 + b = 0
Perceptron: update w additively
Winnow: update w multiplicatively
(Figure: points in the (x_1, x_2) plane with a separating line.)
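A minimal perceptron training loop in Python illustrating the additive update (a sketch, not a tuned implementation):
```python
import numpy as np

def perceptron(X, y, epochs=100, lr=1.0):
    """Additive updates: w <- w + lr*y*x whenever sign(w.x + b) != y (y in {+1,-1})."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified tuple
                w += lr * yi * xi               # Perceptron: additive update
                b += lr * yi                    # (Winnow would scale w multiplicatively)
    return w, b
```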
47. April 20, 2014 Data Mining: Concepts and Techniques 47
Classification by Backpropagation
Backpropagation: A neural network learning algorithm
Started by psychologists and neurobiologists to develop
and test computational analogues of neurons
A neural network: A set of connected input/output units
where each connection has a weight associated with it
During the learning phase, the network learns by
adjusting the weights so as to be able to predict the
correct class label of the input tuples
Also referred to as connectionist learning due to the
connections between units
48. April 20, 2014 Data Mining: Concepts and Techniques 48
Neural Network as a Classifier
Weakness
Long training time
Requires a number of parameters typically best determined
empirically, e.g., the network topology or "structure"
Poor interpretability: difficult to interpret the symbolic meaning
behind the learned weights and of "hidden units" in the network
Strength
High tolerance to noisy data
Ability to classify untrained patterns
Well-suited for continuous-valued inputs and outputs
Successful on a wide array of real-world data
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of
rules from trained neural networks
49. April 20, 2014 Data Mining: Concepts and Techniques 49
A Neuron (= a perceptron)
The n-dimensional input vector x is mapped into variable y by
means of the scalar product and a nonlinear function mapping
(Figure: an n-dimensional input vector x = (x_0, …, x_n) with weight vector w = (w_0, …, w_n) feeds a weighted sum, which passes through an activation function f to produce output y; \mu_k is the bias.)

For example, y = sign\left(\sum_{i=0}^{n} w_i x_i - \mu_k\right)
50. April 20, 2014 Data Mining: Concepts and Techniques 50
A Multi-Layer Feed-Forward Neural Network
(Figure: input vector X feeds the input layer; weighted connections w_{ij} lead through a hidden layer to the output layer, which emits the output vector.)

I_j = \sum_i w_{ij} O_i + \theta_j
O_j = \frac{1}{1 + e^{-I_j}}
Err_j = O_j (1 - O_j)(T_j - O_j)   (for an output unit)
Err_j = O_j (1 - O_j) \sum_k Err_k w_{jk}   (for a hidden unit)
w_{ij} = w_{ij} + (l)\, Err_j\, O_i
\theta_j = \theta_j + (l)\, Err_j
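A numpy sketch of one backpropagation step implementing exactly these update rules with sigmoid units (the layer sizes and learning rate l are illustrative):
```python
import numpy as np

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

def backprop_step(x, target, W1, b1, W2, b2, l=0.1):
    """One forward + backward pass. W1: (hidden, n_in); W2: (n_out, hidden)."""
    O_h = sigmoid(W1 @ x + b1)                  # I_j = sum_i w_ij O_i + theta_j
    O_o = sigmoid(W2 @ O_h + b2)
    err_o = O_o * (1 - O_o) * (target - O_o)    # output-layer Err_j
    err_h = O_h * (1 - O_h) * (W2.T @ err_o)    # hidden-layer Err_j, propagated back
    W2 += l * np.outer(err_o, O_h); b2 += l * err_o
    W1 += l * np.outer(err_h, x);   b1 += l * err_h
    return W1, b1, W2, b2
```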
51. April 20, 2014 Data Mining: Concepts and Techniques 51
How Does a Multi-Layer Neural Network Work?
The inputs to the network correspond to the attributes measured
for each training tuple
Inputs are fed simultaneously into the units making up the input
layer
They are then weighted and fed simultaneously to a hidden layer
The number of hidden layers is arbitrary, although usually only one
The weighted outputs of the last hidden layer are input to units
making up the output layer, which emits the network's prediction
The network is feed-forward in that none of the weights cycles
back to an input unit or to an output unit of a previous layer
From a statistical point of view, networks perform nonlinear
regression: Given enough hidden units and enough training
samples, they can closely approximate any function
52. April 20, 2014 Data Mining: Concepts and Techniques 52
Defining a Network Topology
First decide the network topology: # of units in the
input layer, # of hidden layers (if > 1), # of units in each
hidden layer, and # of units in the output layer
Normalize the input values for each attribute measured in
the training tuples to [0.0, 1.0]
Discrete-valued attributes may be encoded with one input unit
per domain value, each initialized to 0
For classification with more than two classes, one output unit per class is used
Once a network has been trained and its accuracy is
unacceptable, repeat the training process with a different
network topology or a different set of initial weights
53. April 20, 2014 Data Mining: Concepts and Techniques 53
Backpropagation
Iteratively process a set of training tuples & compare the network's
prediction with the actual known target value
For each training tuple, the weights are modified to minimize the
mean squared error between the network's prediction and the
actual target value
Modifications are made in the "backwards" direction: from the output
layer, through each hidden layer, down to the first hidden layer, hence
"backpropagation"
Steps
Initialize weights (to small random #s) and biases in the network
Propagate the inputs forward (by applying activation function)
Backpropagate the error (by updating weights and biases)
Terminating condition (when error is very small, etc.)
54. April 20, 2014 Data Mining: Concepts and Techniques 54
Backpropagation and Interpretability
Efficiency of backpropagation: each epoch (one iteration through the
training set) takes O(|D| * w) time, with |D| tuples and w weights, but the # of
epochs can be exponential in n, the number of inputs, in the worst
case
Rule extraction from networks: network pruning
Simplify the network structure by removing weighted links that
have the least effect on the trained network
Then perform link, unit, or activation value clustering
The set of input and activation values are studied to derive rules
describing the relationship between the input and hidden unit
layers
Sensitivity analysis: assess the impact that a given input variable has
on a network output. The knowledge gained from this analysis can be
represented in rules
56. April 20, 2014 Data Mining: Concepts and Techniques 56
SVM—Support Vector Machines
A new classification method for both linear and nonlinear
data
It uses a nonlinear mapping to transform the original
training data into a higher dimension
With the new dimension, it searches for the linear optimal
separating hyperplane (i.e., the "decision boundary")
With an appropriate nonlinear mapping to a sufficiently
high dimension, data from two classes can always be
separated by a hyperplane
SVM finds this hyperplane using support vectors
("essential" training tuples) and margins (defined by the
support vectors)
57. April 20, 2014 Data Mining: Concepts and Techniques 57
SVM—History and Applications
Vapnik and colleagues (1992) laid the groundwork, building on Vapnik
& Chervonenkis' statistical learning theory from the 1960s
Features: training can be slow but accuracy is high owing
to their ability to model complex nonlinear decision
boundaries (margin maximization)
Used both for classification and prediction
Applications:
handwritten digit recognition, object
recognition, speaker identification, benchmarking time-
series prediction tests
58. April 20, 2014 Data Mining: Concepts and Techniques 58
SVM—General Philosophy
(Figure: two candidate decision boundaries, one with a small margin and one with a large margin; the circled tuples on the margins are the support vectors.)
59. April 20, 2014 Data Mining: Concepts and Techniques 59
SVM—Margins and Support Vectors
60. April 20, 2014 Data Mining: Concepts and Techniques 60
SVM—When Data Is Linearly Separable
Let the data D be (X_1, y_1), …, (X_|D|, y_|D|), where the X_i are the training
tuples and the y_i their associated class labels
There are infinitely many lines (hyperplanes) separating the two classes, but we want to
find the best one (the one that minimizes classification error on unseen data)
SVM searches for the hyperplane with the largest margin m, i.e., the maximum
marginal hyperplane (MMH)
61. April 20, 2014 Data Mining: Concepts and Techniques 61
SVM—Linearly Separable
A separating hyperplane can be written as
W · X + b = 0
where W = {w_1, w_2, …, w_n} is a weight vector and b a scalar (bias)
For 2-D it can be written as
w_0 + w_1 x_1 + w_2 x_2 = 0
The hyperplanes defining the sides of the margin:
H1: w_0 + w_1 x_1 + w_2 x_2 ≥ 1 for y_i = +1, and
H2: w_0 + w_1 x_1 + w_2 x_2 ≤ -1 for y_i = -1
Any training tuples that fall on hyperplanes H1 or H2 (i.e., the
sides defining the margin) are support vectors
This becomes a constrained (convex) quadratic optimization
problem: a quadratic objective function with linear constraints,
solved by Quadratic Programming (QP) using Lagrangian multipliers
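As a hedged illustration (assuming scikit-learn, which the slides do not prescribe), a linear SVM exposes exactly these quantities: the support vectors, and W and b of W · X + b = 0:
```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 2], [1, 2], [5, 5], [6, 5], [5, 6]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
print(clf.support_vectors_)                   # the tuples lying on H1/H2
print(clf.coef_, clf.intercept_)              # W and b of the separating hyperplane
```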
62. April 20, 2014 Data Mining: Concepts and Techniques 62
Why Is SVM Effective on High Dimensional Data?
The complexity of trained classifier is characterized by the # of
support vectors rather than the dimensionality of the data
The support vectors are the essential or critical training examples —
they lie closest to the decision boundary (MMH)
If all other training examples are removed and the training is
repeated, the same separating hyperplane would be found
The number of support vectors found can be used to compute an
(upper) bound on the expected error rate of the SVM classifier, which
is independent of the data dimensionality
Thus, an SVM with a small number of support vectors can have good
generalization, even when the dimensionality of the data is high
63. April 20, 2014 Data Mining: Concepts and Techniques 63
SVM—Linearly Inseparable
Transform the original input data into a higher dimensional
space
Search for a linear separating hyperplane in the new space
(Figure: data plotted on axes A1 and A2 that are not linearly separable in the original space.)
64. April 20, 2014 Data Mining: Concepts and Techniques 64
SVM—Kernel functions
Instead of computing the dot product on the transformed data tuples,
it is mathematically equivalent to apply a kernel function
K(X_i, X_j) to the original data, i.e., K(X_i, X_j) = \Phi(X_i) \cdot \Phi(X_j)
Typical kernel functions:
Polynomial kernel of degree h: K(X_i, X_j) = (X_i \cdot X_j + 1)^h
Gaussian radial basis function kernel: K(X_i, X_j) = e^{-\|X_i - X_j\|^2 / 2\sigma^2}
Sigmoid kernel: K(X_i, X_j) = \tanh(\kappa X_i \cdot X_j - \delta)
SVM can also be used for classifying multiple (> 2) classes and for
regression analysis (with additional user parameters)
65. April 20, 2014 Data Mining: Concepts and Techniques 65
SVM vs. Neural Network
SVM
Relatively new concept
Deterministic algorithm
Nice Generalization
properties
Hard to learn – learned
in batch mode using
quadratic programming
techniques
Using kernels can learn
very complex functions
Neural Network
Relatively old
Nondeterministic
algorithm
Generalizes well but
doesn't have a strong
mathematical foundation
Can easily be learned in
incremental fashion
To learn complex
functions—use multilayer
perceptron (not that
trivial)
66. April 20, 2014 Data Mining: Concepts and Techniques 66
SVM Related Links
SVM Website
http://www.kernel-machines.org/
Representative implementations
LIBSVM: an efficient implementation of SVM, multi-class
classifications, nu-SVM, one-class SVM, including also various
interfaces with java, python, etc.
SVM-light: simpler, but performance is not better than LIBSVM;
supports only binary classification; C language only
SVM-torch: another recent implementation, also written in C
67. April 20, 2014 Data Mining: Concepts and Techniques 67
SVM—Introduction Literature
"Statistical Learning Theory" by Vapnik: extremely hard to
understand, and it contains many errors too
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern
Recognition. Knowledge Discovery and Data Mining, 2(2), 1998
Better than Vapnik's book, but still too hard as an
introduction, and the examples are not intuitive
The book "An Introduction to Support Vector Machines" by N.
Cristianini and J. Shawe-Taylor
Also hard as an introduction, but its explanation of
Mercer's theorem is better than in the above literature
The neural network book by Haykin
Contains one nice chapter of SVM introduction
69. April 20, 2014 Data Mining: Concepts and Techniques 69
Associative Classification
Associative classification
Association rules are generated and analyzed for use in classification
Search for strong associations between frequent patterns
(conjunctions of attribute-value pairs) and class labels
Classification: based on evaluating a set of rules of the form
p_1 ∧ p_2 ∧ … ∧ p_l → "class = C" (conf, sup)
Why effective?
It explores highly confident associations among multiple attributes
and may overcome some constraints introduced by decision-tree
induction, which considers only one attribute at a time
In many studies, associative classification has been found to be more
accurate than some traditional classification methods, such as C4.5
70. April 20, 2014 Data Mining: Concepts and Techniques 70
Typical Associative Classification Methods
CBA (Classification By Association: Liu, Hsu & Ma, KDD‘98)
Mine possible association rules of the form
cond-set (a set of attribute-value pairs) → class label
Build classifier: Organize rules according to decreasing precedence
based on confidence and then support
CMAR (Classification based on Multiple Association Rules: Li, Han, Pei, ICDM‘01)
Classification: Statistical analysis on multiple rules
CPAR (Classification based on Predictive Association Rules: Yin & Han, SDM‘03)
Generation of predictive rules (FOIL-like analysis)
High efficiency, accuracy similar to CMAR
RCBT (Mining top-k covering rule groups for gene expression data, Cong et al. SIGMOD‘05)
Explore high-dimensional classification, using top-k rule groups
Achieve high classification accuracy and high run-time efficiency
71. April 20, 2014 Data Mining: Concepts and Techniques 71
Associative Classification May Achieve High
Accuracy and Efficiency (Cong et al. SIGMOD05)
72. April 20, 2014 Data Mining: Concepts and Techniques 72
The k-Nearest Neighbor Algorithm
All instances correspond to points in the n-D space
Nearest neighbors are defined in terms of
Euclidean distance, dist(X_1, X_2)
The target function could be discrete- or real-valued
For discrete-valued targets, k-NN returns the most common
value among the k training examples nearest to x_q
Voronoi diagram: the decision surface induced by 1-NN
for a typical set of training examples
(Figure: '+' and '-' training points around a query point x_q, and the Voronoi cells induced by 1-NN.)
73. April 20, 2014 Data Mining: Concepts and Techniques 73
Discussion on the k-NN Algorithm
k-NN for real-valued prediction for a given unknown tuple
Returns the mean values of the k nearest neighbors
Distance-weighted nearest neighbor algorithm
Weight the contribution of each of the k neighbors
according to their distance to the query x_q,
giving greater weight to closer neighbors:

w \equiv \frac{1}{d(x_q, x_i)^2}

Robust to noisy data by averaging the k nearest neighbors
Curse of dimensionality: the distance between neighbors could
be dominated by irrelevant attributes
To overcome it, stretch the axes or eliminate the least
relevant attributes
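A small numpy sketch of distance-weighted k-NN with this weighting (the epsilon guard is an addition to avoid division by zero):
```python
import numpy as np

def knn_predict(X_train, y_train, xq, k=3):
    """Distance-weighted k-NN vote with w = 1 / d(xq, xi)^2."""
    d = np.linalg.norm(X_train - xq, axis=1)   # Euclidean distances to the query
    nearest = np.argsort(d)[:k]                # indices of the k nearest neighbors
    votes = {}
    for i in nearest:
        w = 1.0 / (d[i] ** 2 + 1e-12)          # guard against d = 0
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)           # class with the largest weighted vote
```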
74. April 20, 2014 Data Mining: Concepts and Techniques 74
Genetic Algorithms (GA)
Genetic Algorithm: based on an analogy to biological evolution
An initial population is created consisting of randomly generated rules
Each rule is represented by a string of bits
E.g., if A1 and ¬A2 then C2 can be encoded as 100
If an attribute has k > 2 values, k bits can be used
Based on the notion of survival of the fittest, a new population is
formed consisting of the fittest rules and their offspring
The fitness of a rule is represented by its classification accuracy on a
set of training examples
Offspring are generated by crossover and mutation
The process continues until a population P evolves in which each rule
satisfies a prespecified fitness threshold
Slow but easily parallelizable
75. April 20, 2014 Data Mining: Concepts and Techniques 75
Rough Set Approach
Rough sets are used to approximately or “roughly” define
equivalent classes
A rough set for a given class C is approximated by two sets: a lower
approximation (certain to be in C) and an upper approximation
(cannot be described as not belonging to C)
Finding the minimal subsets (reducts) of attributes for feature
reduction is NP-hard but a discernibility matrix (which stores the
differences between attribute values for each pair of data tuples) is
used to reduce the computation intensity
76. April 20, 2014 Data Mining: Concepts and Techniques 76
Fuzzy Set Approaches
Fuzzy logic uses truth values between 0.0 and 1.0 to
represent the degree of membership (such as using
fuzzy membership graph)
Attribute values are converted to fuzzy values
e.g., income is mapped into the discrete categories
{low, medium, high} with fuzzy values calculated
For a given new sample, more than one fuzzy value may
apply
Each applicable rule contributes a vote for membership
in the categories
Typically, the truth values for each predicted category
are summed, and these sums are combined
78. April 20, 2014 Data Mining: Concepts and Techniques 78
What Is Prediction?
(Numerical) prediction is similar to classification
construct a model
use model to predict continuous or ordered value for a given input
Prediction is different from classification
Classification predicts categorical class labels
Prediction models continuous-valued functions
Major method for prediction: regression
model the relationship between one or more independent or
predictor variables and a dependent or response variable
Regression analysis
Linear and multiple regression
Non-linear regression
Other regression methods: generalized linear model, Poisson
regression, log-linear models, regression trees
79. April 20, 2014 Data Mining: Concepts and Techniques 79
Linear Regression
Linear regression: involves a response variable y and a single
predictor variable x
y = w0 + w1 x
where w0 (y-intercept) and w1 (slope) are regression coefficients
Method of least squares: estimates the best-fitting straight line
Multiple linear regression: involves more than one predictor variable
Training data is of the form (X1, y1), (X2, y2),…, (X|D|, y|D|)
Ex. For 2-D data, we may have: y = w0 + w1 x1+ w2 x2
Solvable by extension of least square method or using SAS, S-Plus
Many nonlinear functions can be transformed into the above
w_1 = \frac{\sum_{i=1}^{|D|} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{|D|} (x_i - \bar{x})^2}, \qquad w_0 = \bar{y} - w_1 \bar{x}
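A direct numpy transcription of this closed form (a sketch; multiple regression would solve the normal equations instead):
```python
import numpy as np

def least_squares(x, y):
    """Method of least squares for y = w0 + w1*x:
    w1 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2);  w0 = ybar - w1*xbar."""
    xbar, ybar = x.mean(), y.mean()
    w1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    w0 = ybar - w1 * xbar
    return w0, w1
```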
80. April 20, 2014 Data Mining: Concepts and Techniques 80
Some nonlinear models can be modeled by a polynomial
function
A polynomial regression model can be transformed into a
linear regression model. For example,
y = w_0 + w_1 x + w_2 x^2 + w_3 x^3
is convertible to linear form with the new variables x_2 = x^2, x_3 = x^3:
y = w_0 + w_1 x + w_2 x_2 + w_3 x_3
Other functions, such as the power function, can also be
transformed into a linear model
Some models are intractably nonlinear (e.g., a sum of
exponential terms)
It is still possible to obtain least-square estimates through
extensive calculation on more complex formulae
Nonlinear Regression
82. April 20, 2014 Data Mining: Concepts and Techniques 82
Classifier Accuracy Measures
Accuracy of a classifier M, acc(M): percentage of test set tuples that are
correctly classified by the model M
Error rate (misclassification rate) of M = 1 – acc(M)
Given m classes, CMi,j, an entry in a confusion matrix, indicates #
of tuples in class i that are labeled by the classifier as class j
Alternative accuracy measures (e.g., for cancer diagnosis)
sensitivity = t-pos/pos /* true positive recognition rate */
specificity = t-neg/neg /* true negative recognition rate */
precision = t-pos/(t-pos + f-pos)
accuracy = sensitivity * pos/(pos + neg) + specificity * neg/(pos + neg)
This model can also be used for cost-benefit analysis
classes              buy_computer = yes   buy_computer = no   total   recognition(%)
buy_computer = yes   6954                 46                  7000    99.34
buy_computer = no    412                  2588                3000    86.27
total                7366                 2634                10000   95.52

             predicted C1      predicted C2
actual C1    true positives    false negatives
actual C2    false positives   true negatives
83. April 20, 2014 Data Mining: Concepts and Techniques 83
Predictor Error Measures
Measure predictor accuracy: measure how far off the predicted value is
from the actual known value
Loss function: measures the error between y_i and the predicted value y_i'
Absolute error: |y_i - y_i'|
Squared error: (y_i - y_i')^2
Test error (generalization error): the average loss over the test set
Mean absolute error: \frac{1}{d}\sum_{i=1}^{d} |y_i - y_i'|
Mean squared error: \frac{1}{d}\sum_{i=1}^{d} (y_i - y_i')^2
Relative absolute error: \frac{\sum_{i=1}^{d} |y_i - y_i'|}{\sum_{i=1}^{d} |y_i - \bar{y}|}
Relative squared error: \frac{\sum_{i=1}^{d} (y_i - y_i')^2}{\sum_{i=1}^{d} (y_i - \bar{y})^2}
The mean squared error exaggerates the presence of outliers
Popularly used: the (square) root mean squared error and, similarly, the
root relative squared error
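These measures in a few lines of numpy (a sketch; names are illustrative):
```python
import numpy as np

def predictor_errors(y, y_pred):
    ybar = y.mean()
    mae = np.mean(np.abs(y - y_pred))            # mean absolute error
    mse = np.mean((y - y_pred) ** 2)             # mean squared error
    rae = np.sum(np.abs(y - y_pred)) / np.sum(np.abs(y - ybar))
    rse = np.sum((y - y_pred) ** 2) / np.sum((y - ybar) ** 2)
    return {"MAE": mae, "MSE": mse, "RMSE": np.sqrt(mse),
            "RAE": rae, "RSE": rse}
```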
84. April 20, 2014 Data Mining: Concepts and Techniques 84
Evaluating the Accuracy of a Classifier
or Predictor (I)
Holdout method
Given data is randomly partitioned into two independent sets
Training set (e.g., 2/3) for model construction
Test set (e.g., 1/3) for accuracy estimation
Random sampling: a variation of holdout
Repeat holdout k times, accuracy = avg. of the accuracies
obtained
Cross-validation (k-fold, where k = 10 is most popular)
Randomly partition the data into k mutually exclusive subsets,
each approximately equal size
At i-th iteration, use Di as test set and others as training set
Leave-one-out: k folds where k = # of tuples, for small sized data
Stratified cross-validation: folds are stratified so that class dist. in
each fold is approx. the same as that in the initial data
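A plain k-fold cross-validation sketch in numpy (`fit`/`predict` are caller-supplied placeholders; stratification is omitted for brevity):
```python
import numpy as np

def k_fold_accuracy(X, y, fit, predict, k=10, seed=0):
    """Partition the data into k folds; at iteration i use fold i as the test set."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)               # k mutually exclusive subsets
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return np.mean(accs)                          # average accuracy over the k folds
```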
85. April 20, 2014 Data Mining: Concepts and Techniques 85
Evaluating the Accuracy of a Classifier
or Predictor (II)
Bootstrap
Works well with small data sets
Samples the given training tuples uniformly with replacement
i.e., each time a tuple is selected, it is equally likely to be
selected again and re-added to the training set
Several bootstrap methods exist; a common one is the .632 bootstrap
Suppose we are given a data set of d tuples. The data set is sampled d
times, with replacement, resulting in a training set of d samples. The data
tuples that did not make it into the training set end up forming the test set.
About 63.2% of the original data will end up in the bootstrap sample, and the
remaining 36.8% will form the test set (since (1 - 1/d)^d ≈ e^{-1} = 0.368)
Repeat the sampling procedure k times; the overall accuracy of the
model is:

acc(M) = \sum_{i=1}^{k} \left( 0.632 \times acc(M_i)_{test\_set} + 0.368 \times acc(M_i)_{train\_set} \right)
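A sketch of the .632 bootstrap (averaged over the k repetitions, which is one common reading of the formula above; `fit`/`predict` are caller-supplied placeholders):
```python
import numpy as np

def bootstrap_632(X, y, fit, predict, k=10, seed=0):
    rng = np.random.default_rng(seed)
    d = len(y)
    accs = []
    for _ in range(k):
        boot = rng.integers(0, d, size=d)        # sample d tuples with replacement
        oob = np.setdiff1d(np.arange(d), boot)   # unsampled tuples form the test set
        model = fit(X[boot], y[boot])
        acc_test = np.mean(predict(model, X[oob]) == y[oob])
        acc_train = np.mean(predict(model, X[boot]) == y[boot])
        accs.append(0.632 * acc_test + 0.368 * acc_train)
    return np.mean(accs)
```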
87. April 20, 2014 Data Mining: Concepts and Techniques 87
Ensemble Methods: Increasing the Accuracy
Ensemble methods
Use a combination of models to increase accuracy
Combine a series of k learned
models, M1, M2, …, Mk, with the aim of creating an
improved model M*
Popular ensemble methods
Bagging: averaging the prediction over a collection of
classifiers
Boosting: weighted vote with a collection of classifiers
Ensemble: combining a set of heterogeneous classifiers
88. April 20, 2014 Data Mining: Concepts and Techniques 88
Bagging: Bootstrap Aggregation
Analogy: diagnosis based on multiple doctors' majority vote
Training
Given a set D of d tuples, at each iteration i, a training set Di of d
tuples is sampled with replacement from D (i.e., bootstrap)
A classifier model Mi is learned for each training set Di
Classification: to classify an unknown sample X
Each classifier Mi returns its class prediction
The bagged classifier M* counts the votes and assigns the class
with the most votes to X
Prediction: can be applied to the prediction of continuous values by
taking the average value of each prediction for a given test tuple
Accuracy
Often significantly better than a single classifier derived from D
For noisy data: not considerably worse, and more robust
Proven improved accuracy in prediction
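A compact bagging sketch (`fit`/`predict` are caller-supplied placeholders):
```python
import numpy as np
from collections import Counter

def bagging_predict(X, y, x_new, fit, predict, k=25, seed=0):
    """Train k models on bootstrap samples of D and take a majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(k):
        boot = rng.integers(0, len(y), size=len(y))  # sample with replacement
        model = fit(X[boot], y[boot])
        votes.append(predict(model, x_new))
    return Counter(votes).most_common(1)[0][0]       # class with the most votes
```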
89. April 20, 2014 Data Mining: Concepts and Techniques 89
Boosting
Analogy: Consult several doctors, based on a combination of weighted
diagnoses—weight assigned based on the previous diagnosis accuracy
How does boosting work?
Weights are assigned to each training tuple
A series of k classifiers is iteratively learned
After a classifier Mi is learned, the weights are updated to allow the
subsequent classifier, Mi+1, to pay more attention to the training
tuples that were misclassified by Mi
The final M* combines the votes of each individual classifier, where
the weight of each classifier's vote is a function of its accuracy
The boosting algorithm can be extended for the prediction of
continuous values
Comparing with bagging: boosting tends to achieve greater accuracy,
but it also risks overfitting the model to misclassified data
90. April 20, 2014 Data Mining: Concepts and Techniques 90
Adaboost (Freund and Schapire, 1997)
Given a set of d class-labeled tuples, (X1, y1), …, (Xd, yd)
Initially, all the weights of tuples are set the same (1/d)
Generate k classifiers in k rounds. At round i,
Tuples from D are sampled (with replacement) to form a
training set Di of the same size
Each tuple's chance of being selected is based on its weight
A classification model Mi is derived from Di
Its error rate is calculated using Di as a test set
If a tuple is misclassified, its weight is increased; otherwise it is
decreased
Error rate: err(Xj) is the misclassification error of tuple Xj. Classifier
Mi's error rate is the sum of the weights of the misclassified tuples:

error(M_i) = \sum_{j=1}^{d} w_j \times err(X_j)

The weight of classifier Mi's vote is

\log \frac{1 - error(M_i)}{error(M_i)}
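A sketch of one AdaBoost reweighting round; following the usual formulation, correctly classified tuples have their weights multiplied by error(Mi)/(1 - error(Mi)) before renormalization:
```python
import numpy as np

def adaboost_round(weights, misclassified):
    """weights: tuple weights summing to 1; misclassified: boolean array for Mi.
    Returns updated tuple weights and the classifier's vote weight."""
    error = np.sum(weights[misclassified])        # error(Mi) = sum of wj * err(Xj)
    alpha = np.log((1 - error) / error)           # weight of Mi's vote
    weights = weights * np.where(misclassified, 1.0, error / (1 - error))
    return weights / weights.sum(), alpha         # renormalize so weights sum to 1
```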
92. April 20, 2014 Data Mining: Concepts and Techniques 92
Model Selection: ROC Curves
ROC (Receiver Operating Characteristics)
curves: for visual comparison of
classification models
Originated from signal detection theory
Shows the trade-off between the true
positive rate and the false positive rate
The area under the ROC curve is a
measure of the accuracy of the model
Rank the test tuples in decreasing order:
the one that is most likely to belong to the
positive class appears at the top of the list
The closer to the diagonal line (i.e., the
closer the area is to 0.5), the less accurate
is the model
Vertical axis represents
the true positive rate
Horizontal axis rep. the
false positive rate
The plot also shows a
diagonal line
A model with perfect
accuracy will have an
area of 1.0
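A short numpy sketch that ranks tuples by decreasing score and traces the ROC curve (labels are assumed to be 0/1):
```python
import numpy as np

def roc_points(scores, labels):
    """Rank tuples by decreasing score; sweeping the threshold traces (FPR, TPR)."""
    order = np.argsort(-np.asarray(scores))       # most-likely-positive first
    labels = np.asarray(labels)[order]
    P, N = labels.sum(), len(labels) - labels.sum()
    tpr = np.cumsum(labels) / P                   # true positive rate (vertical axis)
    fpr = np.cumsum(1 - labels) / N               # false positive rate (horizontal axis)
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    return np.trapz(tpr, fpr)                     # area under the ROC curve
```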
94. April 20, 2014 Data Mining: Concepts and Techniques 94
Summary (I)
Classification and prediction are two forms of data analysis that can
be used to extract models describing important data classes or to
predict future data trends.
Effective and scalable methods have been developed for decision
tree induction, naïve Bayesian classification, Bayesian belief
networks, rule-based classifiers, backpropagation, Support Vector
Machines (SVM), associative classification, nearest neighbor classifiers,
and case-based reasoning, and for other classification methods such as
genetic algorithms, rough set and fuzzy set approaches.
Linear, nonlinear, and generalized linear models of regression can be
used for prediction. Many nonlinear problems can be converted to
linear problems by performing transformations on the predictor
variables. Regression trees and model trees are also used for
prediction.
95. April 20, 2014 Data Mining: Concepts and Techniques 95
Summary (II)
Stratified k-fold cross-validation is a recommended method for
accuracy estimation. Bagging and boosting can be used to increase
overall accuracy by learning and combining a series of individual
models.
Significance tests and ROC curves are useful for model selection
There have been numerous comparisons of the different classification
and prediction methods, and the matter remains a research topic
No single method has been found to be superior over all others for all
data sets
Issues such as accuracy, training time, robustness, interpretability, and
scalability must be considered and can involve trade-offs, further
complicating the quest for an overall superior method
96. April 20, 2014 Data Mining: Concepts and Techniques 96
References (1)
C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.
C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2): 121-168, 1998.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. KDD'95.
W. Cohen. Fast effective rule induction. ICML'95.
G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for gene expression data. SIGMOD'05.
A. J. Dobson. An Introduction to Generalized Linear Models. Chapman and Hall, 1990.
G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99.
97. April 20, 2014 Data Mining: Concepts and Techniques 97
References (2)
R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed. John Wiley and Sons, 2001.
U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI'94.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 1997.
J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest: A framework for fast decision tree construction of large datasets. VLDB'98.
J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT -- Optimistic Decision Tree Construction. SIGMOD'99.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001.
D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 1995.
M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree induction: Efficient classification in data mining. RIDE'97.
B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining. KDD'98.
W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. ICDM'01.
98. April 20, 2014 Data Mining: Concepts and Techniques 98
References (3)
T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 2000.
J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, Blackwell Business, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. EDBT'96.
T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey. Data Mining and Knowledge Discovery, 2(4): 345-389, 1998.
J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML'93.
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96.
99. April 20, 2014 Data Mining: Concepts and Techniques 99
References (4)
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. VLDB'98.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. VLDB'96.
J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, 1990.
P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley, 2005.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.
S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2ed. Morgan Kaufmann, 2005.
X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03.
H. Yu, J. Yang, and J. Han. Classifying large data sets using SVMs with hierarchical clusters. KDD'03.