This document provides an overview of data mining and machine learning techniques. It discusses how machine learning can be used to extract patterns and insights from large amounts of data. Examples are provided of applications in domains like credit lending, oil spill detection, load forecasting, and machine fault diagnosis. The document also covers key concepts in data mining like the generalization process, bias, and some ethical considerations around using these techniques.
The document provides an overview of concepts related to computing for bioinformatics including machine learning, data mining, knowledge discovery, statistics, databases, and data visualization. It discusses techniques like classification, clustering, association rule mining, and anomaly detection. It also presents examples of applying these techniques to problems like weather prediction, contact lens recommendation, and soybean disease diagnosis.
Probability Forecasting - a Machine Learning Perspective
David Lindsay's document discusses probability forecasting from a machine learning perspective. It provides an overview of probability forecasting and assessments of reliability and resolution. It describes experiments testing various machine learning algorithms on multiple datasets to evaluate their probability forecasts. Key findings are that traditional assessment methods only measure resolution, not reliability, and that most machine learning algorithms produce forecasts that are accurate but unreliable. The Probability Calibration Graph is introduced as a new visual tool for assessing reliability.
The document discusses machine learning techniques for finding patterns in data. It covers classification algorithms like decision trees and neural networks that can predict outcomes for new data based on patterns learned from training examples. The document also discusses concepts like bias, which refers to the assumptions built into machine learning algorithms that guide their search for patterns and prevent overfitting to noise in the training data. Examples are provided to illustrate classification problems and solutions like rules learned to predict gameplay based on weather conditions.
The document discusses machine learning techniques for finding patterns in data and using those patterns to make predictions. It covers topics like classification algorithms, decision trees, neural networks, learning as a search process, and how machine learning systems use bias to avoid overfitting training data. Examples are provided on classifying weather data to determine if a baseball game should be played, classifying iris flowers, predicting CPU performance, and diagnosing soybean diseases.
This document provides an overview of data mining and machine learning concepts. It discusses the data mining process, examples of data mining applications in areas like loan approval, image analysis, and load forecasting. It also covers machine learning techniques like decision trees and rules. Additionally, it addresses topics such as the role of domain knowledge, generalization as a search problem, and biases that influence machine learning models.
The document discusses machine learning concepts related to classification, including linear regression, decision trees, and neural networks. It provides an example of using weather data to classify whether a game will be played or not based on attributes like temperature and humidity. Rules are generated to make the classification based on patterns in the data.
As we move into a new era of ITSM computing, new big data and machine learning tools and methodologies are being developed to support IT staff by intelligently extracting insights and making predictions from the enormous amounts of data accumulated across the organization. According to Gartner, I&O leaders must take a comprehensive approach to incorporating advanced big data and machine learning technologies into their organizations or risk becoming irrelevant. But what exactly are big data and machine learning all about? And how can you introduce these concepts into your existing Service Desk?
Join USF’s distinguished Computer Science and Engineering Professor Lawrence Hall and SunView Software’s VP of Marketing and Product Strategy John Prestridge as they break down the fundamentals of big data and machine learning and provide real-world examples of the impact the technologies will have on ITSM.
Introduction to Data Mining (Why Mine Data? Commercial Viewpoint)
The document provides an introduction to data mining, including why it is used from both commercial and scientific viewpoints. It discusses that data is being collected exponentially but much of it goes unanalyzed. Data mining aims to automatically discover useful information from large datasets. It describes different types of data mining tasks like classification, regression, clustering, and association rule learning. Classification involves using attributes to predict unknown values, while clustering finds patterns without predefined labels.
This document discusses machine learning, including differentiating it from artificial intelligence and deep learning. It covers the need for machine learning due to increasing data volumes and how machine learning processes work through experiences to build rules and logic from data. The types of machine learning are described as supervised learning, unsupervised learning, and reinforcement learning. Examples of machine learning applications like recommendation engines and spam filters are also provided.
The document describes decision tree induction and classification. It provides examples of datasets that can be used to build decision trees, including examples on loan approvals, playing sailboats, and shapes. It explains the basic TDIDT (Top Down Induction of Decision Trees) algorithm for building decision trees from datasets in a recursive partitioning manner. Key aspects of decision tree learning covered include attribute selection criteria, information gain, entropy, and residual information.
This document outlines an agenda for a data science boot camp covering various machine learning topics over several hours. The agenda includes discussions of decision trees, ensembles, random forests, data modelling, and clustering. It also provides examples of data leakage problems and discusses the importance of evaluating model performance. Homework assignments involve building models with Weka and identifying the minimum attributes needed to distinguish between red and white wines.
Influx/Days 2017 San Francisco | Baron Schwartz
WHAT GOOD IS ANOMALY DETECTION?
Static thresholds on metrics have been falling out of fashion for a while, and for good reason. Modern tooling lets you analyze and monitor a lot more data points than you used to be able to, resulting in lots more noise. The hope is that anomaly detection answers some of this, by replacing static thresholds (anomalies) with dynamic ones. But it doesn’t work as well as most people think it will. In this talk I’ll explain how anomaly detection works, so you can understand why it isn’t a good general-purpose solution, and which specific cases it’s good at.
Automating fetal heart monitor using machine learning
This is a webinar held by the IEEE student branch of the University of Chittagong. It discusses how a beginner can gain expert-level knowledge in machine learning and deep learning using online resources, focusing on how the presenter solved a biomedical engineering problem with machine learning. It also points to advice given by leaders of the machine learning field.
- Big data is growing rapidly in both commercial and scientific databases. Data mining is commonly used to extract useful information from large datasets. It helps with customer service, hypothesis formation, and more.
- Recent technological advances are generating large amounts of medical and genomic data. Data mining offers potential solutions for automated analysis of patient histories, gene function prediction, and drug discovery. Traditional techniques may be unsuitable due to data enormity, dimensionality, and heterogeneity.
- Data mining involves tasks like classification, association rule mining, clustering, and outlier detection. Various machine learning algorithms are applied including decision trees, naive Bayes, and neural networks.
An introduction to machine learning and statistics
This document provides an overview of machine learning and predictive modeling. It begins by describing how predictive models can be used in various domains like healthcare, finance, telecom, and business. It then discusses the differences between machine learning and predictive modeling, noting that machine learning aims to allow machines to learn autonomously using feedback mechanisms, while predictive modeling focuses on building statistical models to predict outcomes. The document also uses examples like Microsoft's Tay chatbot to illustrate how machine learning systems can be exposed to real-world data to continuously learn and improve. It concludes by explaining how predictive analytics fits within machine learning as the starting point to build initial predictive models and continuously monitor and refine them.
The document discusses applications of deep learning in various medical domains including ophthalmology, dermatology, pathology, cardiology, neurosurgery and EMR. For ophthalmology, it describes using deep learning models for tasks like diabetic retinopathy detection from fundus images and age-related macular degeneration classification from OCT retina images. It also summarizes a study on diabetic retinopathy detection that achieved high performance but had limitations in reproducibility. For dermatology, it outlines the need for skin cancer classification from images and challenges in early detection.
This document provides an introduction to machine learning. It discusses the history of machine learning, including early work in neural networks and decision trees. It defines machine learning as the ability to improve performance on tasks based on experience. The key components of a learning problem are identified as the task, data used for learning, and a performance measure. Linear regression, decision trees, instance-based learning, Bayesian learning, support vector machines, neural networks, and clustering are listed as machine learning algorithms. Designing a learning system involves choosing training experiences, a target function, representation of that function, and a learning algorithm.
2023 Supervised Learning for Orange3 from scratch
This document provides an overview of supervised learning and decision tree models. It discusses supervised learning techniques for classification and regression. Decision trees are explained as a method that uses conditional statements to classify examples based on their features. The document reviews node splitting criteria like information gain that help determine the most important features. It also discusses evaluating models for overfitting/underfitting and techniques like bagging and boosting in random forests to improve performance. Homework involves building a classification model on a healthcare dataset and reporting the results.
Machine Learning Essentials Demystified part 1 | Big Data Demystified
Machine Learning Essentials Abstract:
Machine Learning (ML) is one of the hottest topics in the IT world today. But what is it really all about?
In this session we will talk about what ML actually is and in which cases it is useful.
We will talk about a few common algorithms for creating ML models and demonstrate their use with Python. We will also take a peek at Deep Learning (DL) and Artificial Neural Networks, explain how they work (without too much math), and demonstrate a DL model with Python.
The target audience is developers, data engineers, and DBAs who do not have prior experience with ML and want to know how it actually works.
Finding interesting patterns in data can lead to uncovering new knowledge. New patterns that haven’t occurred before can signify events of interest. Depending on context, these can be called novelties, anomalies, outliers or events. Whatever they are called, they are interesting because they tell a story different from the norm. In this talk, we will call them anomalies. Two diverse applications of anomaly detection are detecting fraudulent credit card transactions and identifying astronomical anomalies such as solar flares.
However, there are many challenges in anomaly detection including high false positive rates and low predictive accuracy. Ensemble learning is a way of combining many algorithms or models to obtain better predictive performance. Anomaly detection is generally an unsupervised task, that is, we do not train models using labelled data. Constructing an unsupervised anomaly detection ensemble is challenging because we do not know the labels. In this talk we discuss two topics in anomaly detection. First, we introduce an anomaly detection ensemble using Item Response Theory (IRT) – a class of models used in educational psychometrics. Using IRT we construct an ensemble that can downplay noisy, non-discriminatory methods and accentuate sharper methods.
Then we explore anomaly detection in computer network security. With cyber incidents and data breaches becoming increasingly common, we have seen a massive increase in computer network attacks over the years. Anomaly detection methods, even though used to detect suspicious behaviour, are criticized for high false positive rates. In addition, computer networks produce a large amount of complex data. We go through the end-to-end process of detecting anomalies in this scenario and show how we can minimize false positives and visualise anomalies developing over time.
Semi-Supervised Insight Generation from Petabyte Scale Text Data
Existing state-of-the-art supervised methods in Machine Learning require large amounts of annotated data to achieve good performance and generalization. However, manually constructing such a training data set with sentiment labels is a labor-intensive and time-consuming task. With the proliferation of data acquisition in domains such as images, text and video, the rate at which we acquire data is greater than the rate at which we can label them. Techniques that reduce the amount of labeled data needed to achieve competitive accuracies are of paramount importance for deploying scalable, data-driven, real-world solutions.
At Envestnet | Yodlee, we have deployed several advanced state-of-the-art Machine Learning solutions that process millions of data points on a daily basis with very stringent service level commitments. A key aspect of our Natural Language Processing solutions is Semi-supervised learning (SSL): a family of methods that also make use of unlabelled data for training – typically a small amount of labeled data with a large amount of unlabelled data. Pure supervised solutions fail to exploit the rich syntactic structure of the unlabelled data to improve decision boundaries. There is an abundance of published work in the field, but few papers have succeeded in showing significantly better results than state-of-the-art supervised learning. Often, methods have simplifying assumptions that fail to transfer to real-world scenarios. There is a lack of practical guidelines for deploying effective SSL solutions. We attempt to bridge that gap by sharing our learning from successful SSL models deployed in production.
The document is a presentation about using Topological Data Analysis (TDA) as an insight extraction tool. It begins with an introduction to TDA and the speaker. It then discusses challenges in current data science and machine learning techniques. It argues that TDA can help address these challenges by using shape as an organizing principle to analyze data. The presentation includes a step-by-step tutorial of TDA, examples of real-life applications, and a discussion of pros and cons.
Machine learning is a type of artificial intelligence that allows systems to learn from data without being explicitly programmed. The document provides an introduction to machine learning, explaining what it is, why it is used, common algorithms, advantages, and challenges. Some key challenges discussed include poor quality data, overfitting or underfitting training data, the complexity of machine learning processes, lack of training data, slow implementation speeds, and imperfections in algorithms as data grows.
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...Marlon Dumas
This webinar discusses the limitations of traditional approaches to business process simulation based on hand-crafted models with restrictive assumptions. It shows how process mining techniques can be assembled to discover high-fidelity digital twins of end-to-end processes from event data.
Generative Classifiers: Classifying with Bayesian decision theory, Bayes’ rule, Naïve Bayes classifier.
Discriminative Classifiers: Logistic Regression; Decision Trees: Training and Visualizing a Decision Tree, Making Predictions, Estimating Class Probabilities, The CART Training Algorithm, Attribute Selection Measures (Gini Impurity, Entropy), Regularization Hyperparameters, Regression Trees; Linear Support Vector Machines.
1. Data Mining
Practical Machine Learning Tools and Techniques
Slides for Chapter 1, What’s it all about?
of Data Mining by I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal
2. Chapter 1: What’s it all about?
•Data mining and machine learning
•Simple examples: the weather problem and others
•Fielded applications
•The data mining process
•Machine learning and statistics
•Generalization as search
•Data mining and ethics
3. Information is crucial
•Example 1: in vitro fertilization
• Given: embryos described by 60 features
• Problem: selection of embryos that will survive
• Data: historical records of embryos and outcome
•Example 2: cow culling
• Given: cows described by 700 features
• Problem: selection of cows that should be culled
• Data: historical records and farmers’ decisions
4. From data to information
•Society produces huge amounts of data
• Sources: business, science, medicine, economics, geography,
environment, sports, …
•This data is a potentially valuable resource
•Raw data is useless: need techniques to automatically
extract information from it
• Data: recorded facts
• Information: patterns underlying the data
• We are concerned with machine learning techniques for
automatically finding patterns in data
• Patterns that are found may be represented as structural
descriptions or as black-box models
5. Structural descriptions
• Example: if-then rules
Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
Young           Myope                   No           Reduced               None
Young           Hypermetrope            No           Normal                Soft
Pre-presbyopic  Hypermetrope            No           Reduced               None
Presbyopic      Myope                   Yes          Normal                Hard
…               …                       …            …                     …
If tear production rate = reduced
then recommendation = none
Otherwise, if age = young and astigmatic = no
then recommendation = soft
6. Machine learning
•Definitions of “learning” from the dictionary:
 • To get knowledge of by study, experience, or being taught
 • To become aware by information or from observation
   (difficult to measure)
 • To commit to memory
 • To be informed of, ascertain; to receive instruction
   (trivial for computers)
•Operational definition:
 Things learn when they change their behavior in a way that makes them perform better in the future.
 • Does a slipper learn?
 • Does learning imply intention?
7. Data mining
•Finding patterns in data that provide insight or enable
fast and accurate decision making
•Strong, accurate patterns are needed to make decisions
• Problem 1: most patterns are not interesting
• Problem 2: patterns may be inexact (or spurious)
• Problem 3: data may be garbled or missing
• Machine learning techniques identify patterns in data and
provide many tools for data mining
• Of primary interest are machine learning techniques that
provide structural descriptions
8. The weather problem
• Conditions for playing a certain game
Outlook   Temperature  Humidity  Windy  Play
Sunny     Hot          High      False  No
Sunny     Hot          High      True   No
Overcast  Hot          High      False  Yes
Rainy     Mild         Normal    False  Yes
…         …            …         …      …
If outlook = sunny and humidity = high then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity = normal then play = yes
If none of the above then play = yes
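To make the rule list concrete, here is a minimal Python sketch (an illustration, not part of the slides; the function and argument names are ours) that applies the rules above in order, first match wins:

def classify_play(outlook, temperature, humidity, windy):
    # Apply the weather rules in order; the first matching rule decides.
    # Note: temperature is not used by this particular rule set.
    if outlook == "sunny" and humidity == "high":
        return "no"
    if outlook == "rainy" and windy:
        return "no"
    if outlook == "overcast":
        return "yes"
    if humidity == "normal":
        return "yes"
    return "yes"  # "if none of the above then play = yes"

print(classify_play("sunny", "hot", "high", False))  # -> no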
9. Classification vs. association rules
•Classification rule:
predicts value of a given attribute (the classification of an example)
•Association rule:
predicts value of arbitrary attribute (or combination)
If outlook = sunny and humidity = high
then play = no
If temperature = cool then humidity = normal
If humidity = normal and windy = false
then play = yes
If outlook = sunny and play = no
then humidity = high
If windy = false and play = no
then outlook = sunny and humidity = high
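As a rough sketch of how such association rules can be read off the data (not from the slides; this assumes the book’s standard 14-day weather dataset, of which only four rows appear above), support and confidence are simple counts:

# The 14-day weather data: (outlook, temperature, humidity, windy, play)
data = [
    ("sunny", "hot", "high", False, "no"),      ("sunny", "hot", "high", True, "no"),
    ("overcast", "hot", "high", False, "yes"),  ("rainy", "mild", "high", False, "yes"),
    ("rainy", "cool", "normal", False, "yes"),  ("rainy", "cool", "normal", True, "no"),
    ("overcast", "cool", "normal", True, "yes"),("sunny", "mild", "high", False, "no"),
    ("sunny", "cool", "normal", False, "yes"),  ("rainy", "mild", "normal", False, "yes"),
    ("sunny", "mild", "normal", True, "yes"),   ("overcast", "mild", "high", True, "yes"),
    ("overcast", "hot", "normal", False, "yes"),("rainy", "mild", "high", True, "no"),
]

# Rule: if temperature = cool then humidity = normal
antecedent = [r for r in data if r[1] == "cool"]
both = [r for r in antecedent if r[2] == "normal"]
print("support:", len(both), "of", len(data))        # 4 of 14
print("confidence:", len(both) / len(antecedent))    # 1.0: holds on every cool day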
10. Weather data with mixed attributes
• Some attributes have numeric values
Outlook   Temperature  Humidity  Windy  Play
Sunny     85           85        False  No
Sunny     80           90        True   No
Overcast  83           86        False  Yes
Rainy     75           80        False  Yes
…         …            …         …      …
If outlook = sunny and humidity > 83 then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity < 85 then play = yes
If none of the above then play = yes
12. A complete and correct rule set
If tear production rate = reduced then recommendation = none
If age = young and astigmatic = no
and tear production rate = normal then recommendation = soft
If age = pre-presbyopic and astigmatic = no
and tear production rate = normal then recommendation = soft
If age = presbyopic and spectacle prescription = myope
and astigmatic = no then recommendation = none
If spectacle prescription = hypermetrope and astigmatic = no
and tear production rate = normal then recommendation = soft
If spectacle prescription = myope and astigmatic = yes
and tear production rate = normal then recommendation = hard
If age = young and astigmatic = yes
and tear production rate = normal then recommendation = hard
If age = pre-presbyopic
and spectacle prescription = hypermetrope
and astigmatic = yes then recommendation = none
If age = presbyopic and spectacle prescription = hypermetrope
and astigmatic = yes then recommendation = none
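Transcribed directly into code (a sketch; the function and argument names are ours), the rule set is just an ordered chain of tests:

def recommend(age, prescription, astigmatism, tear_rate):
    # The nine rules above, tried in order; the first match decides.
    if tear_rate == "reduced":
        return "none"
    if age == "young" and astigmatism == "no" and tear_rate == "normal":
        return "soft"
    if age == "pre-presbyopic" and astigmatism == "no" and tear_rate == "normal":
        return "soft"
    if age == "presbyopic" and prescription == "myope" and astigmatism == "no":
        return "none"
    if prescription == "hypermetrope" and astigmatism == "no" and tear_rate == "normal":
        return "soft"
    if prescription == "myope" and astigmatism == "yes" and tear_rate == "normal":
        return "hard"
    if age == "young" and astigmatism == "yes" and tear_rate == "normal":
        return "hard"
    if age == "pre-presbyopic" and prescription == "hypermetrope" and astigmatism == "yes":
        return "none"
    if age == "presbyopic" and prescription == "hypermetrope" and astigmatism == "yes":
        return "none"

print(recommend("young", "myope", "no", "normal"))  # -> soft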
16. Data from labor negotiations
Attribute                        Type                       1     2     3     …  40
Duration                         (Number of years)          1     2     3     …  2
Wage increase first year         Percentage                 2%    4%    4.3%  …  4.5
Wage increase second year        Percentage                 ?     5%    4.4%  …  4.0
Wage increase third year         Percentage                 ?     ?     ?     …  ?
Cost of living adjustment        {none,tcf,tc}              none  tcf   ?     …  none
Working hours per week           (Number of hours)          28    35    38    …  40
Pension                          {none,ret-allw,empl-cntr}  none  ?     ?     …  ?
Standby pay                      Percentage                 ?     13%   ?     …  ?
Shift-work supplement            Percentage                 ?     5%    4%    …  4
Education allowance              {yes,no}                   yes   ?     ?     …  ?
Statutory holidays               (Number of days)           11    15    12    …  12
Vacation                         {below-avg,avg,gen}        avg   gen   gen   …  avg
Long-term disability assistance  {yes,no}                   no    ?     ?     …  yes
Dental plan contribution         {none,half,full}           none  ?     full  …  full
Bereavement assistance           {yes,no}                   no    ?     ?     …  yes
Health plan contribution         {none,half,full}           none  ?     full  …  half
Acceptability of contract        {good,bad}                 bad   good  good  …  good
18. Soybean classification
Attribute group  Attribute                Number of values  Sample value
Environment      Time of occurrence       7                 July
                 Precipitation            3                 Above normal
                 …
Seed             Condition                2                 Normal
                 Mold growth              2                 Absent
                 …
Fruit            Condition of fruit pods  4                 Normal
                 Fruit spots              5                 ?
Leaf             Condition                2                 Abnormal
                 Leaf spot size           3                 ?
                 …
Stem             Condition                2                 Abnormal
                 Stem lodging             2                 Yes
                 …
Root             Condition                3                 Normal
Diagnosis                                 19                Diaporthe stem canker
19. The role of domain knowledge
If leaf condition is normal
 and stem condition is abnormal
 and stem cankers is below soil line
 and canker lesion color is brown
then diagnosis is rhizoctonia root rot
If leaf malformation is absent
 and stem condition is abnormal
 and stem cankers is below soil line
 and canker lesion color is brown
then diagnosis is rhizoctonia root rot
But in this domain, “leaf condition is normal” implies “leaf malformation is absent”!
20. Fielded applications
•The result of learning—or the learning method itself—is
deployed in practical applications
• Processing loan applications
• Screening images for oil slicks
• Electricity supply forecasting
• Diagnosis of machine faults
• Marketing and sales
• Separating crude oil and natural gas
• Reducing banding in rotogravure printing
• Finding appropriate technicians for telephone faults
• Scientific applications: biology, astronomy, chemistry
• Automatic selection of TV programs
• Monitoring intensive care patients
21. Processing loan applications (American Express)
•Given: questionnaire with
financial and personal information
•Question: should money be lent?
•Simple statistical method covers 90% of cases
•Borderline cases referred to loan officers
•But: 50% of accepted borderline cases defaulted!
•Solution: reject all borderline cases?
• No! Borderline cases are most active customers
22. Enter machine learning
•1000 training examples of borderline cases
•20 attributes:
• age
• years with current employer
• years at current address
• years with the bank
• other credit cards possessed,…
•Learned rules: correct on 70% of cases
• human experts only 50%
•Rules could be used to explain decisions to customers
23. Screening images
•Given: radar satellite images of coastal waters
•Problem: detect oil slicks in those images
•Oil slicks appear as dark regions with changing size
and shape
•Not easy: lookalike dark regions can be caused by
weather conditions (e.g. high wind)
•Expensive process requiring highly trained personnel
24. Enter machine learning
•Extract dark regions from normalized image
•Attributes:
• size of region
• shape, area
• intensity
• sharpness and jaggedness of boundaries
• proximity of other regions
• info about background
•Constraints:
• Few training examples—oil slicks are rare!
• Unbalanced data: most dark regions aren’t slicks
• Regions from same image form a batch
• Requirement: adjustable false-alarm rate
25. Load forecasting
•Electricity supply companies
need forecast of future demand
for power
•Forecasts of min/max load for each hour
=> significant savings
•Given: manually constructed load model that assumes
“normal” climatic conditions
•Problem: adjust for weather conditions
•The static model consists of:
• base load for the year
• load periodicity over the year
• effect of holidays
26. Enter machine learning
•Prediction corrected using “most similar” days
•Attributes:
• temperature
• humidity
• wind speed
• cloud cover readings
• plus difference between actual load and predicted load
•Average difference among three “most similar” days added
to static model
•Linear regression coefficients form attribute weights in
similarity function
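A minimal sketch of the correction step (illustrative only; the slides do not give the exact formula, and all names and the distance function here are our assumptions): find the three most similar past days under an attribute-weighted distance and add their average residual to the static model’s forecast.

import math

def load_correction(today, history, weights, k=3):
    # history: list of (attributes, residual) pairs, where residual is
    # the actual load minus the static model's prediction for that day.
    def distance(a, b):
        # Attribute-weighted distance; the slide suggests taking the
        # weights from linear regression coefficients.
        return math.sqrt(sum(w * (a[name] - b[name]) ** 2
                             for name, w in weights.items()))
    nearest = sorted(history, key=lambda day: distance(today, day[0]))[:k]
    return sum(residual for _, residual in nearest) / k

# Usage (hypothetical attribute names):
#   forecast = static_model(today) + load_correction(today, history,
#       weights={"temperature": 1.0, "humidity": 0.3,
#                "wind_speed": 0.5, "cloud_cover": 0.2})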
27. Diagnosis of machine faults
•Diagnosis: classical domain
of expert systems
•Given: Fourier analysis of vibrations measured at
various points of a device’s mounting
•Question: which fault is present?
•Preventative maintenance of electromechanical
motors and generators
•Information very noisy
•So far: diagnosis by expert/hand-crafted rules
28. Enter machine learning
•Available: 600 faults with expert’s diagnosis
•~300 unsatisfactory, rest used for training
•Attributes augmented by intermediate concepts that
embodied causal domain knowledge
•Expert not satisfied with initial rules because they did not
relate to his domain knowledge
•Further background knowledge resulted in more complex
rules that were satisfactory
•Learned rules outperformed hand-crafted ones
29. Marketing and sales I
•Companies precisely record massive amounts of
marketing and sales data
•Applications:
• Customer loyalty:
identifying customers that are likely to defect by detecting
changes in their behavior
(e.g. banks/phone companies)
• Special offers:
identifying profitable customers
(e.g. reliable owners of credit cards that need extra money
during the holiday season)
30. Marketing and sales II
•Market basket analysis
• Association techniques find groups of items that tend to
occur together in a transaction
(used to analyze checkout data)
•Historical analysis of purchasing patterns
•Identifying prospective customers
• Focusing promotional mailouts
(targeted campaigns are cheaper than mass-marketed ones)
32. Machine learning and statistics
•Historical difference (grossly oversimplified):
• Statistics: testing hypotheses
• Machine learning: finding the right hypothesis
•But: huge overlap
• Decision trees (C4.5 and CART)
• Nearest-neighbor methods
•Today: perspectives have converged
• Most machine learning algorithms employ statistical
techniques
33. Generalization as search
•Inductive learning: find a concept description that fits
the data
•Example: rule sets as description language
• Enormous, but finite, search space
•Simple solution:
• enumerate the concept space
• eliminate descriptions that do not fit examples
• surviving descriptions contain target concept
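A sketch of this enumerate-and-eliminate idea for a toy concept language, single conjunctive rules over the weather attributes, with None standing for “don’t care” (our illustration, not from the slides):

from itertools import product

OUTLOOK = ["sunny", "overcast", "rainy", None]  # each attribute's values
TEMP    = ["hot", "mild", "cool", None]         # plus a "don't care" option
HUMID   = ["high", "normal", None]
WINDY   = [True, False, None]

def matches(rule, instance):
    # A rule matches if every non-None condition equals the instance's value.
    return all(c is None or c == v for c, v in zip(rule, instance))

# 4 x 4 x 3 x 3 antecedents times 2 class values = 288 candidate rules
candidates = [(r, cls) for r in product(OUTLOOK, TEMP, HUMID, WINDY)
              for cls in ("yes", "no")]

examples = [(("sunny", "hot", "high", False), "no"),
            (("overcast", "hot", "high", False), "yes")]

# Eliminate every rule that misclassifies an example it matches;
# the survivors are the descriptions still consistent with the data.
surviving = [(r, cls) for r, cls in candidates
             if all(not matches(r, x) or cls == y for x, y in examples)]
print(len(candidates), len(surviving))  # 288, then the consistent subset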
34. Enumerating the concept space
•Search space for weather problem
• 4 x 4 x 3 x 3 x 2 = 288 possible combinations
• With 14 rules => 2.7x10^34 possible rule sets (checked in the snippet after this list)
•Other practical problems:
• More than one description may survive
• No description may survive
• Language is unable to describe target concept
• or data contains noise
• Another view of generalization as search:
hill-climbing in description space according to pre-specified
matching criterion
• Many practical algorithms use heuristic search that cannot guarantee to
find the optimum solution
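The counts on this slide are easy to verify (a one-off arithmetic check, not from the slides):

rules = 4 * 4 * 3 * 3 * 2    # values per attribute (plus "don't care") x 2 classes
print(rules)                 # 288
print(f"{rules ** 14:.1e}")  # 2.7e+34 possible lists of 14 rules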
35. Bias
•Important decisions in learning systems:
• Concept description language
• Order in which the space is searched
• Way that overfitting to the particular training data is avoided
•These form the “bias” of the search:
• Language bias
• Search bias
• Overfitting-avoidance bias
36. Language bias
•Important question:
• is language universal
or does it restrict what can be learned?
•Universal language can express arbitrary subsets of
examples
•If language includes logical or (“disjunction”), it is
universal
•Example: rule sets
•Domain knowledge can be used to exclude some
concept descriptions a priori from the search
37. Search bias
•Search heuristic
• “Greedy” search: performing the best single step
• “Beam search”: keeping several alternatives
• …
•Direction of search
• General-to-specific
• E.g. specializing a rule by adding conditions
• Specific-to-general
• E.g. generalizing an individual instance into a rule
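A sketch of greedy, general-to-specific search (in the spirit of covering algorithms; an illustration with our own names, not an algorithm given in the slides): start from the empty rule and repeatedly add the single condition that most increases accuracy on the examples the rule covers.

def greedy_specialize(examples, target, attributes):
    # examples: list of (attribute_dict, class_label) pairs.
    conditions = {}
    while True:
        covered = [(x, y) for x, y in examples
                   if all(x[a] == v for a, v in conditions.items())]
        best = None
        best_acc = sum(y == target for _, y in covered) / len(covered)
        for attr, values in attributes.items():
            if attr in conditions:
                continue
            for value in values:
                sub = [(x, y) for x, y in covered if x[attr] == value]
                if sub:
                    acc = sum(y == target for _, y in sub) / len(sub)
                    if acc > best_acc:
                        best, best_acc = (attr, value), acc
        if best is None:        # no single condition improves the rule
            return conditions
        conditions[best[0]] = best[1]
        if best_acc == 1.0:     # the rule is pure: stop specializing
            return conditions

weather = [({"outlook": "sunny", "humidity": "high"}, "no"),
           ({"outlook": "sunny", "humidity": "normal"}, "yes"),
           ({"outlook": "overcast", "humidity": "high"}, "yes")]
print(greedy_specialize(weather, "no",
                        {"outlook": ["sunny", "overcast"],
                         "humidity": ["high", "normal"]}))
# -> {'outlook': 'sunny', 'humidity': 'high'}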
38. Overfitting-avoidance bias
•Can be seen as a form of search bias
•Modified evaluation criterion
• E.g., balancing simplicity and number of errors
•Modified search strategy
• E.g., pruning (simplifying a description)
• Pre-pruning: stops at a simple description before search proceeds to
an overly complex one
• Post-pruning: generates a complex description first and simplifies it
afterwards
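Post-pruning under such a modified criterion can be sketched in a few lines (the score here, errors plus a simplicity penalty, is a made-up example of balancing simplicity against number of errors, not a formula from the slides): generate a complex rule first, then greedily drop conditions while the penalized score does not worsen.

def errors(rule, examples, target):
    # Count covered examples that do not belong to the target class.
    covered = [(x, y) for x, y in examples
               if all(x[a] == v for a, v in rule.items())]
    return sum(y != target for _, y in covered)

def post_prune(rule, examples, target, alpha=0.5):
    # Greedily drop conditions while score = errors + alpha * size improves.
    score = lambda r: errors(r, examples, target) + alpha * len(r)
    improved = True
    while improved:
        improved = False
        for attr in list(rule):
            pruned = {a: v for a, v in rule.items() if a != attr}
            if score(pruned) <= score(rule):
                rule, improved = pruned, True
                break
    return rule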
39. Data mining and ethics I
•Ethical issues arise in
practical applications
•Anonymizing data is difficult
•85% of Americans can be identified from just zip
code, birth date and sex
•Data mining often used to discriminate
• E.g., loan applications: using some information (e.g., sex,
religion, race) is unethical
•Ethical situation depends on application
• E.g., same information ok in medical application
•Attributes may contain problematic information
• E.g., area code may correlate with race
40. Data mining and ethics II
•Important questions:
• Who is permitted access to the data?
• For what purpose was the data collected?
• What kind of conclusions can be legitimately drawn from it?
•Caveats must be attached to results
•Purely statistical arguments are never sufficient!
•Are resources put to good use?