The document discusses data mining and decision trees. It provides an example of how a bank used data mining on customer records to better target home equity loan offers. The bank was able to more than double the acceptance rate for offers by using data mining to identify interesting customer clusters and rules to predict responses. The document also discusses classification and regression problems in data mining and how decision trees can be used as a data mining method to help solve these problems.
This document outlines a course on data warehousing and data mining. It introduces key concepts like relational databases, data warehouses, dimensional modeling, and data mining techniques. It also details the course objectives, schedule, assignments, and policies. The goal is for students to gain experience applying data mining methods and understanding the relationship between data mining and other fields.
The Business Value of Reinforcement Learning and Causal Inference (Hanan Shteingart)
Israeli Reinforcement Learning Day 2021
A talk by Hanan Shteingart (VIANAI) about the business value of causal inference and reinforcement learning.
The document proposes a probabilistic approach to ranking results of database queries when multiple tuples satisfy the query criteria. It calculates global and conditional scores for tuples based on attribute correlations learned from the query workload and data, and combines these into a ranking function. User studies on real estate and movie datasets show the conditional ranking approach outperforms a baseline that only considers global scores. The approach can be efficiently implemented using indexing and optimization techniques.
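The combined scoring idea can be sketched in a few lines. The attribute values, workload probabilities, and the simple log-sum combination below are illustrative assumptions, not the paper's exact model:

```python
import math

# Hypothetical sketch of a global + conditional ranking function.
# global_score[v]: P(attribute value v) learned from the query workload.
# cond_score[(v, w)]: P(v | w) learned from attribute correlations.
global_score = {"pool": 0.10, "fireplace": 0.40}
cond_score = {("pool", "waterfront"): 0.60, ("fireplace", "waterfront"): 0.35}

def rank_score(unspecified, specified):
    """Combine log-probabilities of a tuple's unspecified attribute
    values, conditioned on the query's specified values."""
    score = 0.0
    for v in unspecified:
        # Fall back to the global score when no correlation is known.
        probs = [cond_score.get((v, w), global_score[v]) for w in specified]
        score += sum(math.log(p) for p in (probs or [global_score[v]]))
    return score

# For a "waterfront" query, the house with a pool now ranks above the
# one with a fireplace, even though the pool's global score is lower.
houses = {"A": ["pool"], "B": ["fireplace"]}
ranked = sorted(houses, key=lambda h: rank_score(houses[h], ["waterfront"]), reverse=True)
```

The point of the example: under global scores alone "fireplace" (0.40) beats "pool" (0.10), but conditioning on "waterfront" flips the order, which is the behavior the user studies reward.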
Rajani Sade Data examination Identifying physical.docx (felicidaddinwoodie)
Rajani Sade
Data examination: Identifying physical properties and meaning
After data acquisition, the next step is data examination: studying the physical properties of the data to learn what it contains and what we can get from it. The properties to observe are type, size, and condition. This is largely mechanical work of inspecting the data's surface characteristics. Type tells us whether a variable is qualitative or quantitative; a qualitative variable may be free text, nominal, or ordinal, while a quantitative variable may be interval or ratio, and either continuous or discrete. The type determines which statistical analysis methods are appropriate. Size is the number of bytes a variable occupies in the database; for each column we determine the data format and the maximum length it requires. Condition describes the quality of the data: checks include missing values, erroneous values, inconsistencies, duplicate records, incorrect dates, special characters, and leading or trailing blanks. From the condition we can estimate what cleanup is needed to make the data useful for analysis.
Reference:
Kirk, A. (2016). Data Visualisation: A Handbook for Data Driven Design. Thousand Oaks, CA: Sage Publications, Ltd. ISBN: 978-1-4739-1214-4
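The condition checks described in the post can be sketched as a small, dependency-free routine. The table, column names, and the specific checks chosen are illustrative:

```python
# Hypothetical sketch of "condition" checks (missing values, erroneous
# values, leading/trailing blanks, duplicates) over an in-memory table.
rows = [
    {"name": "Alice", "age": "34", "joined": "2020-05-01"},
    {"name": "alice ", "age": "", "joined": "2020-05-01"},   # trailing blank, missing age
    {"name": "Bob", "age": "abc", "joined": "31-13-2020"},   # erroneous value, bad date
]

def examine(rows):
    issues = {"missing": 0, "erroneous": 0, "blanks": 0, "duplicates": 0}
    seen = set()
    for r in rows:
        if r["age"] == "":
            issues["missing"] += 1
        elif not r["age"].isdigit():
            issues["erroneous"] += 1
        if r["name"] != r["name"].strip():
            issues["blanks"] += 1
        key = r["name"].strip().lower()  # normalize before duplicate check
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
    return issues

report = examine(rows)
```

A real examination would also profile each column's type, byte size, and maximum length, as the post describes; the sketch covers only the quality checks.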
Niteshkumar Laxmidas Patel
Data Examination
Data examination is the process of examining and understanding the data that has been gathered. When working with data, it is important to study it before making further decisions. Examination confirms whether our results are valid, reproducible, and verifiable. It is also the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making (Kirk, 2016). Data examination has many facets and approaches, encompassing diverse techniques under a variety of names, and it is used across business, science, and social science domains. At leading data-driven organizations such as Amazon and Google, the results of data examination feed directly into decisions, for example in recommendation engines, PageRank, and demand-forecasting systems. It is therefore important to examine data before making business decisions (Rosenthal & Rosnow, 1991).
References:
Kirk, A. (2016). Data Visualisation: A Handbook for Data Driven Design (p. 50). SAGE Publications.
Rosenthal, R., & Rosnow, R. L. (1991). Essentials of behavioral research: ...
Impact of Recruitment & Selection Processes on Employee Performance: A Study ... (Sheheryar Alvi)
This study examined the impact of recruitment and selection processes on employee performance in Pakistan's telecom industry. A survey was administered to 232 employees across telecom companies. The results showed that recruitment practices and sources had a significant positive relationship with employee performance, while meritocracy and corporate image did not. Specifically, effective recruitment sources like social media and proper recruitment practices like training opportunities were linked to higher employee performance. The findings suggest managers should focus on selection processes and sources to hire high-performing candidates.
This document outlines the objectives, content, evaluation, and prerequisites for a course on Knowledge Acquisition in Decision Making, which introduces students to data mining techniques and how to apply them to solve business problems using SAS Enterprise Miner and WEKA. The course covers topics such as data preprocessing, predictive modeling with decision trees and neural networks, descriptive modeling with clustering and association rules, and a project presentation. Students will be evaluated based on assignments, case studies, a project, quizzes, class participation, and a final exam.
Data mining involves discovering patterns from large amounts of data. It can be used for applications like credit ratings, targeted marketing, fraud detection, and customer relationship management. Some common data mining techniques include classification, clustering, regression, and association rule mining. Decision trees are a popular classification technique that uses a tree structure with internal nodes representing attributes and leaf nodes representing target classes.
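The core decision-tree step, choosing the attribute that best splits the data, can be sketched with Gini impurity on an illustrative toy dataset; real decision-tree learners apply this recursively to grow the full tree:

```python
from collections import Counter

# Minimal sketch of a one-level decision tree (a "stump"): pick the
# attribute whose split yields the purest child nodes by Gini impurity.
# The records and attribute names are illustrative.
data = [
    {"income": "high", "student": "no",  "buys": "yes"},
    {"income": "high", "student": "yes", "buys": "yes"},
    {"income": "low",  "student": "no",  "buys": "no"},
    {"income": "low",  "student": "yes", "buys": "no"},
]

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_i^2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(data, attrs, target):
    """Return the attribute with the lowest weighted child impurity."""
    def split_impurity(attr):
        total = 0.0
        for value in {r[attr] for r in data}:
            subset = [r[target] for r in data if r[attr] == value]
            total += len(subset) / len(data) * gini(subset)
        return total
    return min(attrs, key=split_impurity)

root = best_split(data, ["income", "student"], "buys")
```

Here "income" separates the classes perfectly (weighted impurity 0), so it becomes the root's internal node, with the leaf nodes holding the target classes, exactly the structure the summary describes.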
This document outlines a course on knowledge acquisition in decision making, including the course objectives of introducing data mining techniques and enhancing skills in applying tools like SAS Enterprise Miner and WEKA to solve problems. The course content is described, covering topics like the knowledge discovery process, predictive and descriptive modeling, and a project presentation. Evaluation includes assignments, case studies, and a final exam.
Data Visualization and Learning Analytics with xAPI (Margaret Roth)
With the Experience API we are able to collect more granular, high-resolution data from our learning tools and platforms. But once we have that data, how do we present it in ways that easily communicate the right insights to our stakeholders?
In this presentation from the xAPI Cohort's Spring 2018 session, you'll find a brief historical survey of data visualizations, three keys to designing good data visualizations, and case studies of xAPI specific data visualizations and the insights they provided to organizations.
Description of the DaCENA approach to the contextual exploration of knowledge graphs. We use machine learning to learn user preferences using a limited number of user inputs. Through these inputs, we learn a personalized ranking function over semantic associations (semi-paths in a knowledge graph) that best fit users' interests. References for the presentation are:
Bianchi et al.: Actively Learning to Rank Semantic Associations for Personalized Contextual Exploration of Knowledge Graphs. ESWC (1) 2017: 120-135.
Palmonari et al.: DaCENA: Serendipitous News Reading with Data Contexts. ESWC (Satellite Events) 2015: 133-137.
This document provides an introduction and overview of data mining and the data mining process. It discusses different types of data like transactional data, temporal data, spatial data, and unstructured data. It also covers common data mining tasks like classification, clustering, association rule mining and frequent pattern mining. Additionally, it discusses related fields like statistics, machine learning, databases and visualization and how they differ from data mining. Finally, it provides examples of different data mining models and tasks.
Full Tutorial With Pictures: https://www.scienceez.com/build-recommender-system/
Macedonian Computer Science Faculty (FCSE) lecture by Andrea Kulakov, PhD. Topic: Recommender Systems
This document discusses three case studies that use data analysis methods to address financial and risk-related questions. The first case study looks at predicting changes in corporate earnings using economic indicators. The second predicts the accuracy of Zillow home valuation estimates. The third examines factors that influence returns on initial public offerings of Japanese companies. The document then discusses dimensions of information quality that can impact the ability of a given dataset and analysis method to achieve a specified goal.
An increasing amount of valuable semi-structured data has become available online. In this talk, we survey the state of the art in entity ranking over structured data ("linked data").
The document discusses machine learning and data science concepts. It begins with an introduction to machine learning and the machine learning process. It then provides an overview of select machine learning algorithms and concepts like bias/variance, generalization, underfitting and overfitting. It also discusses ensemble methods. The document then shifts to discussing time series, functions for manipulating time series, and laying the foundation for time series prediction and forecasting. It provides examples of applying techniques like median filtering to smooth time series data. Overall, the document provides a high-level introduction and overview of key machine learning and time series concepts.
This document describes a methodology for crowdsourcing the assessment of Linked Data quality using both Linked Data experts and Amazon Mechanical Turk workers. It presents research on detecting three types of quality issues in DBpedia data via crowdsourcing: incorrect/incomplete object values, incorrect data types, and incorrect outlinks. The methodology uses a two-phase approach, with experts identifying issues in the "Find" phase and workers verifying issues in the "Verify" phase via microtasks. The results indicate that crowdsourcing is effective for detecting quality issues and that experts and workers are suited for different tasks based on required skills.
The document outlines a tutorial on misinformation and biased news. It discusses fact-checking methods, challenges with fact-checking datasets, and evidence retrieval models for fact-checking. The tutorial covers introducing misinformation problems, detecting misinformation through techniques like fact-checking and analyzing social context, characterizing bias and propaganda, and applications and future directions for fighting fake and biased news.
How Minitab Revolutionizes Statistical Analysis in Social Science Research (charlessmithshd)
Minitab is a statistical software package designed specifically for data analysis. It offers a comprehensive suite of tools for statistical analysis, process control, and data visualization, making it suitable for both academic and professional use.
This document discusses using statistical learning theory and logistic regression to analyze big data and classify car gas mileage levels. It begins with an introduction to big data characteristics and challenges. Then it covers supervised vs unsupervised statistical learning methods and tradeoffs between prediction accuracy and model interpretability. Finally, it demonstrates applying logistic regression in R to classify car gas mileage levels using data from an online source.
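The document's demonstration uses R; as a language-neutral illustration, here is a dependency-free Python sketch of the same idea, fitting a logistic regression by gradient descent to classify high- vs low-mileage cars from weight. The data points are illustrative, loosely echoing the pattern that lighter cars tend to get better mileage:

```python
import math

# Logistic regression by batch gradient descent on log-loss.
# x = car weight (tons); y = 1 for high mileage, 0 for low (illustrative).
weights = [1.5, 1.8, 2.1, 2.5, 3.2, 3.6, 4.0, 4.5]
high_mpg = [1, 1, 1, 1, 0, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = 0.0, 0.0   # intercept and slope
lr = 0.5
for _ in range(5000):
    g0 = g1 = 0.0
    for x, y in zip(weights, high_mpg):
        err = sigmoid(b0 + b1 * x) - y   # gradient of log-loss
        g0 += err
        g1 += err * x
    b0 -= lr * g0 / len(weights)
    b1 -= lr * g1 / len(weights)

def predict(x):
    return int(sigmoid(b0 + b1 * x) >= 0.5)
```

The fitted slope is negative (heavier cars get a lower probability of high mileage), so a light car is classified as high-mileage and a heavy car as low, which mirrors the R demonstration the document describes.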
IRJET- Big Data and Bayes Theorem used Analyze the Student's Performance in E... (IRJET Journal)
This document discusses using big data and Bayes' theorem to analyze student performance in educational settings. It proposes a system to check for plagiarism in student assignments using probabilistic methods. Bayes' theorem is introduced as a way to calculate conditional probabilities. The system would analyze assignments to determine the probability that a student copied work from another based on characteristics like gender and past copying behavior. This probabilistic approach could help teachers efficiently check assignments for plagiarism using big data on student performance patterns.
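The Bayes'-theorem step can be sketched with a small calculation: the probability an assignment was copied given that a similarity check flagged it. The prior and detector rates below are illustrative assumptions, not figures from the paper:

```python
# Bayes' theorem: P(copied | flagged) =
#   P(flagged | copied) * P(copied) / P(flagged)
p_copied = 0.10              # prior: fraction of assignments that are copied
p_flag_given_copied = 0.90   # detector sensitivity (illustrative)
p_flag_given_clean = 0.05    # false-positive rate (illustrative)

# Total probability of a flag, over copied and clean assignments.
p_flag = (p_flag_given_copied * p_copied
          + p_flag_given_clean * (1 - p_copied))

p_copied_given_flag = p_flag_given_copied * p_copied / p_flag
```

With these numbers a flag raises the probability of copying from the 10% prior to about 67%, which is the kind of conditional-probability update the proposed system relies on.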
[Webinar] "How to Keep Top Talent & Improve Your Bottom Line" (Steven Wardell)
Professor Rob Cross, DBA, shares the latest network-driven methods to help you identify your critical team members, including high-performers, hidden talent, marginalized employees, and overloaded individuals - before they leave the company.
The document provides an overview of a 3-day data analytics training program held in Jakarta, Indonesia from April 24-26, 2019. It discusses topics that will be covered including big data overview, data for business analysis, data analytics concepts, and data analytics tools. The training is led by Dr. Ir. John Sihotang and is aimed at management trainees of the company Sucofindo.
This document provides an overview and introduction to the course CIS 674 Introduction to Data Mining. It defines data mining, outlines basic data mining tasks such as classification, clustering, and association rule mining. It also discusses the relationship between data mining and knowledge discovery in databases (KDD), and highlights some issues in data mining such as handling large datasets, high dimensionality, and interpretation of results.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application-layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this, the paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) networks. A recent DNP3 intrusion detection dataset, which focuses on unauthorized commands and Denial of Service (DoS) attacks, was used to train and test the model. Experimental results show that the CNN-LSTM method outperforms other deep learning classifiers at detecting smart grid intrusions, improving accuracy, precision, recall, and F1 score and achieving a detection accuracy of 99.50%.
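The metrics the abstract reports all follow from a confusion matrix; a small sketch with illustrative labels (1 = intrusion, 0 = benign), not data from the paper's experiments:

```python
# Accuracy, precision, recall, and F1 from true/predicted labels.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # detected intrusions
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # missed intrusions
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # correct benign

accuracy = (tp + tn) / len(pairs)
precision = tp / (tp + fp)   # of raised alarms, how many were real
recall = tp / (tp + fn)      # of real intrusions, how many were caught
f1 = 2 * precision * recall / (precision + recall)
```

For an IDS, recall matters most (a missed intrusion is costly) while precision keeps operators from drowning in false alarms; F1 balances the two.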
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions (Victor Morales)
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
ACEP Magazine 4th edition launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
DataMining.ppt
1. Bellwether Analysis
TECS 2007 R. Ramakrishnan, Yahoo! Research
Data Mining
(with many slides due to Gehrke, Garofalakis, Rastogi)
Raghu Ramakrishnan
Yahoo! Research
University of Wisconsin–Madison (on leave)
2. Bee-Chung Chen, Raghu Ramakrishnan, Jude Shavlik, Pradeep Tamma
TECS 2007, Data Mining R. Ramakrishnan, Yahoo! Research 2
Introduction
3. Definition
Data mining is the exploration and analysis of large quantities of data in
order to discover valid, novel, potentially useful, and ultimately
understandable patterns in data.
Valid: The patterns hold in general.
Novel: We did not know the pattern beforehand.
Useful: We can devise actions from the patterns.
Understandable: We can interpret and comprehend the
patterns.
4. Case Study: Bank
• Business goal: Sell more home equity loans
• Current models:
– Customers with college-age children use home equity loans to
pay for tuition
– Customers with variable income use home equity loans to even
out stream of income
• Data:
– Large data warehouse
– Consolidates data from 42 operational data sources
5. Case Study: Bank (Contd.)
1. Select subset of customer records who have received
home equity loan offer
– Customers who declined
– Customers who signed up
Income | Number of Children | Average Checking Account Balance | … | Response
$40,000 | 2 | $1,500 | … | Yes
$75,000 | 0 | $5,000 | … | No
$50,000 | 1 | $3,000 | … | No
… | … | … | … | …
6. Case Study: Bank (Contd.)
2. Find rules to predict whether a customer would
respond to home equity loan offer
IF (Salary < 40k) and
(numChildren > 0) and
(ageChild1 > 18 and ageChild1 < 22)
THEN YES
…
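The first rule above, sketched as a Python predicate (the field names salary, num_children, and age_child1 are illustrative, not from the bank's actual schema):

```python
def predicts_yes(customer):
    """Apply the mined rule: low salary, at least one college-age child."""
    return (customer["salary"] < 40_000
            and customer["num_children"] > 0
            and 18 < customer["age_child1"] < 22)

# A customer matching the rule's profile
print(predicts_yes({"salary": 35_000, "num_children": 2, "age_child1": 20}))  # True
```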
7. Case Study: Bank (Contd.)
3. Group customers into clusters and investigate
clusters
Group 2
Group 3
Group 4
Group 1
8. Case Study: Bank (Contd.)
4. Evaluate results:
– Many “uninteresting” clusters
– One interesting cluster! Customers with both
business and personal accounts; unusually high
percentage of likely respondents
9. Example: Bank (Contd.)
Action:
• New marketing campaign
Result:
• Acceptance rate for home equity offers more
than doubled
10. Example Application: Fraud Detection
• Industries: Health care, retail, credit card
services, telecom, B2B relationships
• Approach:
– Use historical data to build models of fraudulent
behavior
– Deploy models to identify fraudulent instances
11. Fraud Detection (Contd.)
• Examples:
– Auto insurance: Detect groups of people who stage accidents to
collect insurance
– Medical insurance: Fraudulent claims
– Money laundering: Detect suspicious money transactions (US
Treasury's Financial Crimes Enforcement Network)
– Telecom industry: Find calling patterns that deviate from a norm
(origin and destination of the call, duration, time of day, day of
week).
12. Other Example Applications
• CPG: Promotion analysis
• Retail: Category management
• Telecom: Call usage analysis, churn
• Healthcare: Claims analysis, fraud detection
• Transportation/Distribution: Logistics management
• Financial Services: Credit analysis, fraud detection
• Data service providers: Value-added data analysis
13. What is a Data Mining Model?
A data mining model is a description of a certain aspect
of a dataset. It produces output values for an
assigned set of inputs.
Examples:
• Clustering
• Linear regression model
• Classification model
• Frequent itemsets and association rules
• Support Vector Machines
14. Data Mining Methods
15. Overview
• Several well-studied tasks
– Classification
– Clustering
– Frequent Patterns
• Many methods proposed for each
• Focus in database and data mining community:
– Scalability
– Managing the process
– Exploratory analysis
16. Classification
Goal:
Learn a function that assigns a record to one of several
predefined classes.
Requirements on the model:
– High accuracy
– Understandable by humans, interpretable
– Fast construction for very large training databases
17. Classification
Example application: telemarketing
18. Classification (Contd.)
• Decision trees are one approach to
classification.
• Other approaches include:
– Linear Discriminant Analysis
– k-nearest neighbor methods
– Logistic regression
– Neural networks
– Support Vector Machines
19. Classification Example
• Training database:
– Two predictor attributes:
Age and Car-type (Sport, Minivan
and Truck)
– Age is ordered; Car-type is a categorical attribute
– Class label indicates whether the person bought the product
– Dependent attribute is categorical
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
20. Classification Problem
• If Y is categorical, the problem is a classification
problem, and we use C instead of Y. |dom(C)| = J, the
number of classes.
• C is the class label, d is called a classifier.
• Let r be a record randomly drawn from P.
Define the misclassification rate of d:
RT(d,P) = P(d(r.X1, …, r.Xk) != r.C)
• Problem definition: Given dataset D that is a random
sample from probability distribution P, find classifier d
such that RT(d,P) is minimized.
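On a finite dataset, RT(d, P) is estimated by the fraction of misclassified records. A minimal sketch, with a stand-in classifier d:

```python
def misclassification_rate(d, records):
    """Empirical estimate of RT(d, P): fraction of records where d(X) != C."""
    errors = sum(1 for r in records if d(r["X"]) != r["C"])
    return errors / len(records)

# Stand-in classifier: predict "Yes" when age < 30
d = lambda x: "Yes" if x["age"] < 30 else "No"
data = [{"X": {"age": 25}, "C": "Yes"},
        {"X": {"age": 40}, "C": "Yes"},
        {"X": {"age": 20}, "C": "No"}]
print(misclassification_rate(d, data))  # 2 errors out of 3 records
```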
21. Regression Problem
• If Y is numerical, the problem is a regression problem.
• Y is called the dependent variable, d is called a
regression function.
• Let r be a record randomly drawn from P.
Define mean squared error rate of d:
RT(d,P) = E[(r.Y - d(r.X1, …, r.Xk))²]
• Problem definition: Given dataset D that is a random
sample from probability distribution P, find regression
function d such that RT(d,P) is minimized.
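The empirical counterpart for regression is the average squared residual on the sample. A minimal sketch, again with a stand-in regression function:

```python
def mean_squared_error(d, records):
    """Empirical estimate of RT(d, P) = E[(Y - d(X))^2]."""
    return sum((r["Y"] - d(r["X"])) ** 2 for r in records) / len(records)

d = lambda x: 10 * x["age"]  # stand-in regression function
data = [{"X": {"age": 20}, "Y": 200},
        {"X": {"age": 30}, "Y": 150},
        {"X": {"age": 25}, "Y": 300}]
print(mean_squared_error(d, data))  # (0 + 150**2 + 50**2) / 3
```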
22. Regression Example
• Example training database
– Two predictor attributes:
Age and Car-type (Sport, Minivan
and Truck)
– Spent indicates how much
person spent during a recent visit
to the web site
– Dependent attribute is numerical
Age Car Spent
20 M $200
30 M $150
25 T $300
30 S $220
40 S $400
20 T $80
30 M $100
25 M $125
40 M $500
20 S $420
23. Decision Trees
24. What are Decision Trees?
[Figure: a decision tree. The root splits on Age (<30 vs. >=30); the <30 branch splits on Car Type (Minivan: YES; Sports, Truck: NO); the >=30 branch predicts YES. A companion plot shows the same partitioning of the (Age, Car Type) space.]
25. Decision Trees
• A decision tree T encodes d (a classifier or
regression function) in form of a tree.
• A node t in T without children is called a leaf
node. Otherwise t is called an internal node.
26. Internal Nodes
• Each internal node has an associated splitting
predicate. Most common are binary predicates.
Example predicates:
– Age <= 20
– Profession in {student, teacher}
– 5000*Age + 3*Salary – 10000 > 0
27. Leaf Nodes
Consider leaf node t:
• Classification problem: Node t is labeled with
one class label c in dom(C)
• Regression problem: Two choices
– Piecewise constant model:
t is labeled with a constant y in dom(Y).
– Piecewise linear model:
t is labeled with a linear model
Y = yt + Σ aiXi
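The two leaf-model choices can be contrasted in a few lines of Python (a sketch, not tied to any particular tree implementation):

```python
# Piecewise-constant leaf: every record reaching the leaf gets the same value.
def constant_leaf(y_const):
    return lambda record: y_const

# Piecewise-linear leaf: Y = y_t + sum(a_i * X_i) for the leaf's coefficients.
def linear_leaf(intercept, coeffs):
    return lambda record: intercept + sum(a * record[x] for x, a in coeffs.items())

leaf1 = constant_leaf(250.0)
leaf2 = linear_leaf(50.0, {"age": 10.0})
print(leaf1({"age": 20}))  # 250.0
print(leaf2({"age": 20}))  # 50 + 10*20 = 250.0
```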
28. Example
Encoded classifier:
If (age<30 and
carType=Minivan)
Then YES
If (age <30 and
(carType=Sports or
carType=Truck))
Then NO
If (age >= 30)
Then YES
[Figure: the decision tree being encoded. Root split on Age; the <30 branch splits on Car Type (Minivan: YES; Sports, Truck: NO); the >=30 branch predicts YES.]
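The encoded classifier translates directly into code:

```python
def classify(age, car_type):
    """Decision tree from the example: split on age, then on car type."""
    if age < 30:
        return "YES" if car_type == "Minivan" else "NO"  # Sports, Truck -> NO
    return "YES"

print(classify(25, "Minivan"))  # YES
print(classify(25, "Sports"))   # NO
print(classify(45, "Truck"))    # YES
```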
29. Issues in Tree Construction
• Three algorithmic components:
– Split Selection Method
– Pruning Method
– Data Access Method
30. Top-Down Tree Construction
BuildTree(Node n, Training database D,
Split Selection Method S)
[ (1) Apply S to D to find splitting criterion ]
(1a) for each predictor attribute X
(1b) Call S.findSplit(AVC-set of X)
(1c) endfor
(1d) S.chooseBest();
(2) if (n is not a leaf node) ...
S: C4.5, CART, CHAID, FACT, ID3, GID3, QUEST, etc.
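A hedged in-memory sketch of this top-down skeleton, using gini-based split selection over one numeric attribute (real methods such as CART or C4.5 differ in many details, and the AVC-set machinery is omitted):

```python
def gini(records):
    """Gini impurity of a list of (x, label) pairs."""
    n = len(records)
    counts = {}
    for _, label in records:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def build_tree(records, depth=0, max_depth=3):
    labels = [label for _, label in records]
    if len(set(labels)) <= 1 or depth == max_depth:
        return max(set(labels), key=labels.count)  # leaf: majority class
    best = None
    for t in sorted({x for x, _ in records}):      # candidate splits "x <= t"
        left = [r for r in records if r[0] <= t]
        right = [r for r in records if r[0] > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(records)
        if best is None or score < best[0]:
            best = (score, t, left, right)
    if best is None:
        return max(set(labels), key=labels.count)
    _, t, left, right = best
    return {"split": t,
            "le": build_tree(left, depth + 1, max_depth),
            "gt": build_tree(right, depth + 1, max_depth)}

tree = build_tree([(20, "No"), (25, "No"), (30, "Yes"), (40, "Yes")])
print(tree)  # splits at x <= 25, then pure leaves on both sides
```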
31. Split Selection Method
• Numerical Attribute: Find a split point that
separates the (two) classes
[Figure: training records plotted on the Age axis with their class labels (Yes/No); a candidate split point between 30 and 35 separates the two classes.]
32. Split Selection Method (Contd.)
• Categorical Attributes: How to group?
[Figure: class distributions for the values Sport, Truck, and Minivan.]
Candidate groupings:
– (Sport, Truck) vs. (Minivan)
– (Sport) vs. (Truck, Minivan)
– (Sport, Minivan) vs. (Truck)
33. Impurity-based Split Selection Methods
• Split selection method has two parts:
– Search space of possible splitting criteria.
Example: All splits of the form “age <= c”.
– Quality assessment of a splitting criterion
• Need to quantify the quality of a split: Impurity
function
• Example impurity functions: Entropy, gini-index,
chi-square index
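The two most common impurity functions, written out for a vector of class counts:

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution, e.g. counts = [5, 5] -> 1.0 bit."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

def gini(counts):
    """Gini index: 1 minus the sum of squared class probabilities."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(entropy([5, 5]))           # 1.0 (maximally impure for two classes)
print(gini([5, 5]))              # 0.5
print(entropy([10, 0]) == 0.0)   # True (pure node)
```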
34. Data Access Method
• Goal: Scalable decision tree construction, using
the complete training database
35. AVC-Sets
Age Yes No
20 1 2
25 1 1
30 3 0
40 2 0
Car Yes No
Sport 2 1
Truck 0 2
Minivan 5 0
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
Above: the AVC-sets for Age and Car (top), derived from the training database (bottom).
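An AVC-set is just a per-attribute contingency table of (attribute value, class label) counts, so the AVC-sets of all predictors can be built in a single pass over the data. A sketch:

```python
from collections import Counter

def avc_sets(records, predictors, class_attr):
    """One pass over the data builds the AVC-set of every predictor."""
    sets = {p: Counter() for p in predictors}
    for r in records:
        for p in predictors:
            sets[p][(r[p], r[class_attr])] += 1
    return sets

train = [{"Age": 20, "Car": "M", "Class": "Yes"},
         {"Age": 30, "Car": "M", "Class": "Yes"},
         {"Age": 25, "Car": "T", "Class": "No"},
         {"Age": 20, "Car": "T", "Class": "No"},
         {"Age": 20, "Car": "S", "Class": "No"}]
avc = avc_sets(train, ["Age", "Car"], "Class")
print(avc["Age"][(20, "No")])  # 2
```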
36. Motivation for Data Access Methods
[Figure: the split Age <30 / >=30 partitions the training database into a left and a right partition.]
In principle, one pass over training database for each node.
Can we improve?
37. RainForest Algorithms: RF-Hybrid
First scan:
[Figure: one scan of the database builds the AVC-sets of the root in main memory.]
38. RainForest Algorithms: RF-Hybrid
Second Scan:
[Figure: a second scan builds the AVC-sets for the two children of the root split (Age<30) in main memory.]
39. RainForest Algorithms: RF-Hybrid
Third Scan:
[Figure: the tree now has internal splits Age<30, Sal<20k, and Car==S, with four leaf partitions.]
As we expand the tree, we run out of memory and have to "spill" partitions to disk, then recursively read and process them later.
40. RainForest Algorithms: RF-Hybrid
Further optimization: While writing partitions, concurrently build AVC-groups of
as many nodes as possible in-memory. This should remind you of Hybrid
Hash-Join!
[Figure: partitions on disk; AVC-groups of several nodes are built concurrently in main memory.]
41. CLUSTERING
42. Problem
• Given points in a multidimensional space, group
them into a small number of clusters, using
some measure of “nearness”
– E.g., Cluster documents by topic
– E.g., Cluster users by similar interests
43. Clustering
• Output: (k) groups of records called clusters, such that the records
within a group are more similar to each other than to records in
other groups
– Representative points for each cluster
– Labeling of each record with each cluster number
– Other description of each cluster
• This is unsupervised learning: No record labels are
given to learn from
• Usage:
– Exploratory data mining
– Preprocessing step (e.g., outlier detection)
44. Clustering (Contd.)
• Requirements: Need to define “similarity”
between records
• Important: Use the “right” similarity (distance)
function
– Scale or normalize all attributes. Example:
seconds, hours, days
– Assign different weights to reflect importance of
the attribute
– Choose appropriate measure (e.g., L1, L2)
45. Approaches
• Centroid-based: Assume we have k clusters,
guess at the centers, assign points to
nearest center, e.g., K-means; over time,
centroids shift
• Hierarchical: Assume there is one cluster per
point, and repeatedly merge nearby clusters
using some distance threshold
Scalability: Do this with the fewest possible passes over the data,
ideally sequentially
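A minimal single-machine sketch of the centroid-based idea for 1-D points (scalable K-means variants make far fewer passes over the data):

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's iterations: assign points to the nearest center, recompute means.
    Note: in this sketch a center that attracts no points is simply dropped."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centers = [sum(pts) / len(pts) for pts in clusters.values() if pts]
    return sorted(centers)

print(kmeans_1d([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], centers=[0.0, 5.0]))
# the centroids shift toward the two natural groups: [2.0, 11.0]
```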
46. Scalable Clustering Algorithms for Numeric Attributes
CLARANS
DBSCAN
BIRCH
CLIQUE
CURE
• Above algorithms can be used to cluster documents
after reducing their dimensionality using SVD
…
47. Birch [ZRL96]
Pre-cluster data points using “CF-tree” data structure
48. Clustering Feature (CF)
Allows incremental merging of clusters!
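A clustering feature in BIRCH is the triple CF = (N, LS, SS): the point count, the linear sum, and the squared sum. Two CFs merge by component-wise addition, which is what makes incremental merging cheap. A sketch for 1-D points (the CF-tree bookkeeping is omitted):

```python
def cf(points):
    """Clustering feature of a set of 1-D points: (N, linear sum, squared sum)."""
    return (len(points), sum(points), sum(p * p for p in points))

def merge(cf1, cf2):
    """Merging two clusters = adding their CFs component-wise."""
    return tuple(a + b for a, b in zip(cf1, cf2))

def centroid(cf_):
    n, ls, _ = cf_
    return ls / n

a, b = cf([1.0, 2.0]), cf([4.0])
print(merge(a, b))  # (3, 7.0, 21.0)
```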
49. Points to Note
• Basic algorithm works in a single pass to
condense metric data using spherical
summaries
– Can be incremental
• Additional passes cluster CFs to detect non-spherical clusters
• Approximates density function
• Extensions to non-metric data
50. Market Basket Analysis: Frequent Itemsets
51. Market Basket Analysis
• Consider shopping cart filled with several items
• Market basket analysis tries to answer the
following questions:
– Who makes purchases?
– What do customers buy?
52. Market Basket Analysis
• Given:
– A database of customer
transactions
– Each transaction is a set
of items
• Goal:
– Extract rules
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
53. Market Basket Analysis (Contd.)
• Co-occurrences
– 80% of all customers purchase items X, Y and Z
together.
• Association rules
– 60% of all customers who purchase X and Y also buy
Z.
• Sequential patterns
– 60% of customers who first buy X also purchase Y
within three weeks.
54. Confidence and Support
We prune the set of all possible association rules
using two interestingness measures:
• Confidence of a rule:
– X => Y has confidence c if P(Y|X) = c
• Support of a rule:
– X => Y has support s if P(XY) = s
We can also define
• Support of a co-occurrence XY:
– XY has support s if P(XY) = s
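These definitions can be checked directly against the example transactions (items grouped into baskets by TID):

```python
baskets = {
    111: {"Pen", "Ink", "Milk", "Juice"},
    112: {"Pen", "Ink", "Milk"},
    113: {"Pen", "Milk"},
    114: {"Pen", "Ink", "Juice"},
}

def support(itemset):
    """P(XY): fraction of baskets containing every item of the itemset."""
    return sum(1 for b in baskets.values() if itemset <= b) / len(baskets)

def confidence(x, y):
    """P(Y|X) = support(X union Y) / support(X)."""
    return support(x | y) / support(x)

print(support({"Pen", "Ink"}))         # in 3 of 4 baskets: 0.75
print(confidence({"Ink"}, {"Juice"}))  # 2 of the 3 Ink baskets contain Juice
```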
56. Exercise
• Can you find all itemsets
with
support >= 75%?
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
57. Exercise
• Can you find all
association rules with
support >= 50%?
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
58. Extensions
• Imposing constraints
– Only find rules involving the dairy department
– Only find rules involving expensive products
– Only find rules with “whiskey” on the right hand
side
– Only find rules with “milk” on the left hand side
– Hierarchies on the items
– Calendars (every Sunday, every 1st of the month)
59. Market Basket Analysis: Applications
• Sample Applications
– Direct marketing
– Fraud detection for medical insurance
– Floor/shelf planning
– Web site layout
– Cross-selling
60. DBMS Support for DM
61. Why Integrate DM into a DBMS?
[Figure: today, data is copied/extracted from the database into a separate tool, mined there, and models are produced outside the DBMS, raising consistency questions.]
62. Integration Objectives
For analysts (users):
• Avoid isolation of querying from mining (difficult to do "ad-hoc" mining)
• Provide a simple programming approach to creating and using DM models
For DM vendors:
• Make it possible to add new models
• Make it possible to add new, scalable algorithms
63. SQL/MM: Data Mining
• A collection of classes that provide a standard
interface for invoking DM algorithms from SQL
systems.
• Four data models are supported:
– Frequent itemsets, association rules
– Clusters
– Regression trees
– Classification trees
64. DATA MINING SUPPORT IN MICROSOFT SQL SERVER *
* Thanks to Surajit Chaudhuri for permission to use/adapt his slides
65. Key Design Decisions
• Adopt relational data representation
– A Data Mining Model (DMM) as a “tabular” object (externally;
can be represented differently internally)
• Language-based interface
– Extension of SQL
– Standard syntax
66. DM Concepts to Support
• Representation of input (cases)
• Representation of models
• Specification of training step
• Specification of prediction step
Should be independent of specific algorithms
67. What are "Cases"?
• DM algorithms analyze “cases”
• The “case” is the entity being categorized and classified
• Examples
– Customer credit risk analysis: Case = Customer
– Product profitability analysis: Case = Product
– Promotion success analysis: Case = Promotion
• Each case encapsulates all we know about the entity
68. Cases as Records: Examples
Cust ID | Age | Marital Status | Wealth
1 | 35 | M | 380,000
2 | 20 | S | 50,000
3 | 57 | M | 470,000
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
69. Types of Columns
Cust ID | Age | Marital Status | Wealth | Product Purchases (Product, Quantity, Type)
1 | 35 | M | 380,000 | (TV, 1, Appliance); (Coke, 6, Drink); (Ham, 3, Food)
• Keys: Columns that uniquely identify a case
• Attributes: Columns that describe a case
– Value: A state associated with the attribute in a specific case
– Attribute Property: Columns that describe an attribute; unique for a
specific attribute value (TV is always an appliance)
– Attribute Modifier: Columns that represent additional "meta" information
for an attribute (e.g., weight of a case, certainty of prediction)
70. More on Columns
• Properties describe attributes
– Can represent generalization hierarchy
• Distribution information associated with
attributes
– Discrete/Continuous
– Nature of Continuous distributions
• Normal, Log_Normal
– Other Properties (e.g., ordered, not null)
71. Representing a DMM
• Specifying a Model
– Columns to predict
– Algorithm to use
– Special parameters
• Model is represented as a (nested) table
– Specification = Create table
– Training = Inserting data into the table
– Predicting = Querying the table
[Figure: an example decision-tree DMM. Root split on Age; the <30 branch splits on Car Type (Minivan: YES; Sports, Truck: NO); the >=30 branch predicts YES.]
72. CREATE MINING MODEL
CREATE MINING MODEL [Age Prediction]
(
[Gender] TEXT DISCRETE ATTRIBUTE,
[Hair Color] TEXT DISCRETE ATTRIBUTE,
[Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT
)
USING [Microsoft Decision Tree]
[Age Prediction] is the name of the model; [Microsoft Decision Tree] is the name of the algorithm.
73. CREATE MINING MODEL
CREATE MINING MODEL [Age Prediction]
(
[Customer ID] LONG KEY,
[Gender] TEXT DISCRETE ATTRIBUTE,
[Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT,
[ProductPurchases] TABLE (
[ProductName] TEXT KEY,
[Quantity] DOUBLE NORMAL CONTINUOUS,
[ProductType] TEXT DISCRETE RELATED TO [ProductName]
)
)
USING [Microsoft Decision Tree]
Note that the ProductPurchases column is a nested table.
SQL Server computes this field when data is “inserted”.
74. Training a DMM
• Training a DMM requires passing it “known” cases
• Use an INSERT INTO in order to “insert” the data to the
DMM
– The DMM will usually not retain the inserted data
– Instead it will analyze the given cases and build the DMM content
(decision tree, segmentation model)
• INSERT [INTO] <mining model name>
[(columns list)]
<source data query>
75. INSERT INTO
INSERT INTO [Age Prediction]
(
[Gender],[Hair Color], [Age]
)
OPENQUERY([Provider=MSOLESQL…,
‘SELECT
[Gender], [Hair Color], [Age]
FROM [Customers]’
)
76. Executing Insert Into
• The DMM is trained
– The model can be retrained or incrementally refined
• Content (rules, trees, formulas) can be explored
• Prediction queries can be executed
77. What are Predictions?
• Predictions apply the trained model to estimate
missing attributes in a data set
• Predictions = Queries
• Specification:
– Input data set
– A trained DMM (think of it as a truth table, with one row per
combination of predictor-attribute values; this is only
conceptual)
– Binding (mapping) information between the input data and
the DMM
Prediction Join
SELECT [Customers].[ID],
       MyDMM.[Age],
       PredictProbability(MyDMM.[Age])
FROM MyDMM PREDICTION JOIN [Customers]
ON  MyDMM.[Gender] = [Customers].[Gender] AND
    MyDMM.[Hair Color] = [Customers].[Hair Color]
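Conceptually, a prediction join binds each input row to the trained model on the predictor columns, much like joining against the "truth table" view of the DMM described earlier. A minimal Python sketch (not DMX; the model entries and customer rows are made up for illustration):

```python
# Conceptual sketch: a trained DMM viewed as a lookup table keyed by
# predictor values, joined against input rows. All values are invented.
model = {
    ("M", "Black"): (33.0, 0.82),   # (predicted age, probability)
    ("F", "Brown"): (27.0, 0.75),
}

customers = [
    {"ID": 1, "Gender": "M", "Hair Color": "Black"},
    {"ID": 2, "Gender": "F", "Hair Color": "Brown"},
]

def prediction_join(model, rows):
    """Bind each input row to the model on the predictor columns."""
    out = []
    for r in rows:
        age, prob = model[(r["Gender"], r["Hair Color"])]
        out.append({"ID": r["ID"], "Age": age, "PredictProbability": prob})
    return out

for row in prediction_join(model, customers):
    print(row)
```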
Exploratory Mining:
Combining OLAP and DM
Databases and Data Mining
• What can database systems offer in the grand
challenge of understanding and learning from
the flood of data we’ve unleashed?
– The plumbing
– Scalability
– Ideas!
• Declarativeness
• Compositionality
• Ways to conceptualize your data
Multidimensional Data Model
• One fact table D=(X,M)
– X=X1, X2, ... Dimension attributes
– M=M1, M2,… Measure attributes
• Domain hierarchy for each dimension attribute Xi:
– Collection of domains Hier(Xi) = (Di(1), ..., Di(k))
– The extended domain: EXi = ∪1≤k≤t DXi(k)
• Value mapping function: γD1→D2(x)
– e.g., γmonth→year(12/2005) = 2005
– Form the value hierarchy graph
– Stored as dimension table attribute (e.g., week for a time
value) or conversion functions (e.g., month, quarter)
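The value-mapping function γ above can be realized either as a conversion function or as an explicit dimension-table lookup. A small sketch of both forms (the state-to-region table is an assumed example, matching the figures that follow):

```python
# Sketch: a value-mapping function gamma maps a value in a finer domain
# to its ancestor in a coarser domain.

def gamma_month_year(month_value):
    """Conversion-function form: map a 'MM/YYYY' month up to its year."""
    return int(month_value.split("/")[1])

# Dimension-table form: rollup of a state to its region (assumed table).
STATE_TO_REGION = {"NY": "East", "MA": "East", "TX": "West", "CA": "West"}

def gamma_state_region(state):
    return STATE_TO_REGION[state]

print(gamma_month_year("12/2005"))   # the slide's example: 2005
print(gamma_state_region("MA"))
```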
Multidimensional Data (example)
• LOCATION hierarchy: ALL → Region (East, West) → State (NY, MA under
East; TX, CA under West)
• Automobile hierarchy: ALL → Category (Sedan, Truck) → Model (Civic,
Camry under Sedan; F150, Sierra under Truck)
• Fact table with dimension attributes Auto, Loc and measure Repair:
FactID Auto Loc Repair
p1 F150 NY 100
p2 Sierra NY 500
p3 F150 MA 100
p4 Sierra MA 200
Cube Space
• Cube space: C = EX1 × EX2 × … × EXd
• Region: hyper-rectangle in cube space
– c = (v1, v2, …, vd), vi ∈ EXi
• Region granularity:
– gran(c) = (d1, d2, ..., dd), di = Domain(c.vi)
• Region coverage:
– coverage(c) = all facts in c
• Region set: All regions with same granularity
OLAP Over Imprecise Data
with Doug Burdick, Prasad Deshpande, T.S. Jayram, and
Shiv Vaithyanathan
In VLDB 05 and 06; joint work with IBM Almaden
Imprecise Data (example)
Same hierarchies as above; the new fact p5 is imprecise: it records
only the category Truck rather than a specific model.
FactID Auto Loc Repair
p1 F150 NY 100
p2 Sierra NY 500
p3 F150 MA 100
p4 Sierra MA 200
p5 Truck MA 100
Querying Imprecise Facts
FactID Auto Loc Repair
p1 F150 NY 100
p2 Sierra NY 500
p3 F150 MA 100
p4 Sierra MA 200
p5 Truck MA 100
Query: Auto = F150, Loc = MA
SUM(Repair) = ??? How do we treat p5?
Allocation (1)
The imprecise fact p5 = [Truck, MA] overlaps two cells of the finest
granularity: [F150, MA] and [Sierra, MA].
Allocation (2)
Split p5 across its possible completions, each with an allocation
weight:
ID FactID Auto Loc Repair Weight
1 p1 F150 NY 100 1.0
2 p2 Sierra NY 500 1.0
3 p3 F150 MA 100 1.0
4 p4 Sierra MA 200 1.0
5 p5 F150 MA 100 0.5
6 p5 Sierra MA 100 0.5
(Huh? Why 0.5 / 0.5? Hold on to that thought.)
Allocation (3)
ID FactID Auto Loc Repair Weight
1 p1 F150 NY 100 1.0
2 p2 Sierra NY 500 1.0
3 p3 F150 MA 100 1.0
4 p4 Sierra MA 200 1.0
5 p5 F150 MA 100 0.5
6 p5 Sierra MA 100 0.5
Query: Auto = F150, Loc = MA
SUM(Repair) = 100 + 0.5 × 100 = 150. Query the extended data model!
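The weighted-sum query over the extended data model can be sketched in a few lines of Python; the table and weights below mirror the slides' running example:

```python
# Extended data model from the slides: imprecise fact p5 is split across
# its possible completions with allocation weights 0.5 / 0.5.
extended = [
    # (FactID, Auto, Loc, Repair, Weight)
    ("p1", "F150",   "NY", 100, 1.0),
    ("p2", "Sierra", "NY", 500, 1.0),
    ("p3", "F150",   "MA", 100, 1.0),
    ("p4", "Sierra", "MA", 200, 1.0),
    ("p5", "F150",   "MA", 100, 0.5),
    ("p5", "Sierra", "MA", 100, 0.5),
]

def weighted_sum(rows, auto, loc):
    """SUM(Repair) over a cell, weighting each (possibly allocated) fact."""
    return sum(w * repair for _, a, l, repair, w in rows
               if a == auto and l == loc)

print(weighted_sum(extended, "F150", "MA"))   # 100 + 0.5*100 = 150.0
```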
Allocation Policies
• The procedure for assigning allocation weights
is referred to as an allocation policy:
– Each allocation policy uses different information to
assign allocation weights
– Reflects assumption about the correlation structure in
the data
• Leads to EM-style iterative algorithms for allocating imprecise
facts, maximizing likelihood of observed data
Allocation Policy: Count
(Example: an additional precise fact p6 lies in cell c1 = [F150, MA];
c2 = [Sierra, MA]. So Count(c1) = 2 and Count(c2) = 1.)
p(c1, p5) = Count(c1) / (Count(c1) + Count(c2)) = 2 / (2 + 1)
p(c2, p5) = Count(c2) / (Count(c1) + Count(c2)) = 1 / (2 + 1)
Allocation Policy: Measure
ID Sales
p1 100
p2 150
p3 300
p4 200
p5 250
p6 400
(With c1 = [F150, MA] containing p3 and p6, and c2 = [Sierra, MA]
containing p4: Sales(c1) = 700 and Sales(c2) = 200.)
p(c1, p5) = Sales(c1) / (Sales(c1) + Sales(c2)) = 700 / (700 + 200)
p(c2, p5) = Sales(c2) / (Sales(c1) + Sales(c2)) = 200 / (700 + 200)
Allocation Policy Template
General form: for an imprecise fact r and a cell c ∈ region(r),
p(c, r) = Q(c) / Σc'∈region(r) Q(c') = Q(c) / Qsum(r)
In the two-cell example (c1 and c2):
p(c1, p5) = Q(c1) / (Q(c1) + Q(c2))
p(c2, p5) = Q(c2) / (Q(c1) + Q(c2))
with Q = Count giving the count policy and Q = Sales giving the
measure policy.
What is a Good Allocation Policy?
We propose desiderata that enable appropriate definition of query
semantics for imprecise data. (Running example query: COUNT.)
Desideratum I: Consistency
• Consistency specifies the relationship between answers to related
queries on a fixed data set
Desideratum II: Faithfulness
• Faithfulness specifies the relationship between answers
to a fixed query on related data sets
(Figure: the same facts p1–p5 arranged as three related data sets,
Data Set 1, Data Set 2, and Data Set 3.)
Results on Query Semantics
• Evaluating queries over extended data model yields
expected value of the aggregation operator over all
possible worlds
• Efficient query evaluation algorithms available for
SUM, COUNT; more expensive dynamic
programming algorithm for AVERAGE
– Consistency and faithfulness for SUM, COUNT are satisfied
under appropriate conditions
– (Bound-)Consistency does not hold for AVERAGE, but holds
for E(SUM)/E(COUNT)
• Weak form of faithfulness holds
– Opinion pooling with LinOP: Similar to AVERAGE
(Figure: completing the imprecise facts in different ways yields four
possible worlds, w1, w2, w3, w4.)
Imprecise facts lead to many possible worlds [Kripke63, …]
Query Semantics
• Given all possible worlds together with their
probabilities, queries are easily answered using
expected values
– But number of possible worlds is exponential!
• Allocation gives facts weighted assignments to
possible completions, leading to an extended
version of the data
– Size increase is linear in number of (completions of)
imprecise facts
– Queries operate over this extended version
Exploratory Mining:
Prediction Cubes
with Beechun Chen, Lei Chen, and Yi Lin
In VLDB 05; EDAM Project
The Idea
• Build OLAP data cubes in which cell values represent
decision/prediction behavior
– In effect, build a tree for each cell/region in the cube—
observe that this is not the same as a collection of trees
used in an ensemble method!
– The idea is simple, but it leads to promising data mining
tools
– Ultimate objective: Exploratory analysis of the entire space
of “data mining choices”
• Choice of algorithms, data conditioning parameters …
Example (1/7): Regular OLAP
Location Time # of App.
… … ...
AL, USA Dec, 04 2
… … …
WY, USA Dec, 04 3
Goal: Look for patterns of unusually high numbers of applications.
Z: Dimensions; Y: Measure
• Time hierarchy: ALL → Year (85, 86, …, 04) → Month (Jan 86, …,
Dec 86)
• Location hierarchy: ALL → Country (Japan, USA, Norway) → State (AL,
…, WY)
Example (2/7): Regular OLAP
Location Time # of App.
… … ...
AL, USA Dec, 04 2
… … …
WY, USA Dec, 04 3
Goal: Look for patterns of unusually
high numbers of applications:
Cell value: Number of loan applications
Z: Dimensions; Y: Measure
(Figure: a cube of monthly application counts per location for
2003-2004. Rolling up gives coarser regions, e.g. yearly counts per
country; drilling down gives finer regions, e.g. monthly counts per
state.)
Example (3/7): Decision Analysis
Goal: Analyze a bank's loan decision process w.r.t. two dimensions:
Location and Time
Model h(X, Z(D)), e.g., a decision tree, is built on a cube subset
Z(D) of the fact table D.
Fact table D (Z: Dimensions; X: Predictors; Y: Class):
Location Time Race Sex … Approval
AL, USA Dec, 04 White M … Yes
… … … … … …
WY, USA Dec, 04 Black F … No
(Time and Location hierarchies as before: ALL → Year → Month and
ALL → Country → State.)
Example (3/7): Decision Analysis
• Are there branches (and time windows) where
approvals were closely tied to sensitive attributes
(e.g., race)?
– Suppose you partitioned the training data by location and
time, chose the partition for a given branch and time window,
and built a classifier. You could then ask, “Are the
predictions of this classifier closely correlated with race?”
• Are there branches and times with decision making
reminiscent of 1950s Alabama?
– Requires comparison of classifiers trained using different
subsets of data.
Example (4/7): Prediction Cubes
Model h(X, [USA, Dec 04](D)), e.g., a decision tree, is built from the
data subset [USA, Dec 04](D):
Location Time Race Sex … Approval
AL, USA Dec, 04 White M … Y
… … … … … …
WY, USA Dec, 04 Black F … N
1. Build a model using the data from USA in Dec 04
2. Evaluate that model
Measure in a cell:
• Accuracy of the model
• Predictiveness of Race measured based on that model
• Similarity between that model and a given model
Cell values (one per location and month):
      2004 (Jan … Dec)   2003 (Jan … Dec)   …
CA    0.4  0.8  0.9      0.6  0.8  …        …
USA   0.2  0.3  0.5      …    …    …
…
Example (5/7): Model-Similarity
Given:
- Data table D (Location, Time, Race, Sex, …, Approval)
- Target model h0(X)
- Test set Δ w/o labels (Race, Sex, …)
For each cell at level [Country, Month]: build a model on the cell's
data, apply it and h0(X) to the test set Δ, and record their
prediction similarity as the cell value:
      2004 (Jan … Dec)   2003 (Jan … Dec)   …
CA    0.4  0.2  0.3      0.6  0.5  …        …
USA   0.2  0.3  0.9      …    …    …
…
Reading: the loan decision process in USA during Dec 04 was similar to
a discriminatory decision model h0(X).
Location Time Race Sex … Approval
AL, USA Dec, 04 White M … Yes
… … … … … …
WY, USA Dec, 04 Black F … No
Example (6/7): Predictiveness
Given:
- Data table D
- Attributes V (e.g., {Race})
- Test set Δ w/o labels
For each cell at level [Country, Month]: build models h(X) and
h(X − V) on the cell's data, apply both to the test set Δ, and record
the predictiveness of V (the divergence between their predictions) as
the cell value:
      2004 (Jan … Dec)   2003 (Jan … Dec)   …
CA    0.4  0.2  0.3      0.6  0.5  …        …
USA   0.2  0.3  0.9      …    …    …
…
Reading: Race was an important predictor of the loan approval decision
in USA during Dec 04.
Model Accuracy
• A probabilistic view of classifiers: A dataset is a
random sample from an underlying pdf p*(X, Y), and
a classifier
h(x; D) = argmax_y p*(Y = y | X = x, D)
– i.e., A classifier approximates the pdf by predicting the
“most likely” y value
• Model Accuracy:
– E_{x,y}[ I( h(x; D) = y ) ], where (x, y) is drawn from p*(X, Y | D),
and I(·) = 1 if the statement is true; I(·) = 0, otherwise
– In practice, since p* is an unknown distribution, we use a
set-aside test set or cross-validation to estimate model
accuracy.
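The set-aside-test-set estimate of accuracy can be sketched as below; the "classifier" is a trivial majority-class stand-in and all data are invented:

```python
# Sketch: estimating E[I(h(x) = y)] on a set-aside test set.
def train_majority(train):
    """'Train' by memorizing the most common label (toy classifier)."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(model_label, test):
    """Empirical accuracy: fraction of test cases the model gets right."""
    return sum(1 for _, y in test if y == model_label) / len(test)

train = [({"race": "White"}, "Yes"), ({"race": "Black"}, "Yes"),
         ({"race": "White"}, "No")]
test  = [({"race": "White"}, "Yes"), ({"race": "Black"}, "No")]

h = train_majority(train)
print(accuracy(h, test))   # 0.5
```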
Model Similarity
• The prediction similarity between two models, h1(X) and h2(X), on
test set Δ is
    (1/|Δ|) Σx∈Δ I( h1(x) = h2(x) )
• The KL-distance between two models, h1(X) and h2(X), on test set Δ
is
    (1/|Δ|) Σx∈Δ Σy p_h1(y | x) log( p_h1(y | x) / p_h2(y | x) )
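Both distances can be sketched directly from their definitions; the two toy models below (and their probabilities) are invented for illustration:

```python
import math

# Sketch of prediction similarity and KL-distance for models given as
# functions returning a class-probability dict per test example.
def h1(x):  # toy model 1 (made-up probabilities)
    if x["race"] == "White":
        return {"Yes": 0.8, "No": 0.2}
    return {"Yes": 0.4, "No": 0.6}

def h2(x):  # toy model 2: ignores the input
    return {"Yes": 0.7, "No": 0.3}

def predict(h, x):
    probs = h(x)
    return max(probs, key=probs.get)

def prediction_similarity(ha, hb, tests):
    """(1/|D|) * sum over x of I(ha(x) = hb(x))"""
    return sum(predict(ha, x) == predict(hb, x) for x in tests) / len(tests)

def kl_distance(ha, hb, tests):
    """(1/|D|) * sum over x, y of p_ha(y|x) * log(p_ha(y|x)/p_hb(y|x))"""
    total = 0.0
    for x in tests:
        p, q = ha(x), hb(x)
        total += sum(p[y] * math.log(p[y] / q[y]) for y in p)
    return total / len(tests)

tests = [{"race": "White"}, {"race": "Black"}]
print(prediction_similarity(h1, h2, tests))   # 0.5
print(kl_distance(h1, h2, tests))
```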
Attribute Predictiveness
• Intuition: V ⊆ X is not predictive if and only if V is independent
of Y given the other attributes X − V; i.e.,
p*(Y | X − V, D) = p*(Y | X, D)
• In practice, we can use the distance between h(X; D)
and h(X – V; D)
• Alternative approach: Test if h(X; D) is more
accurate than h(X – V; D) (e.g., by using cross-
validation to estimate the two model accuracies
involved)
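The distance-based variant can be sketched as the disagreement rate between a model trained on all attributes X and one trained on X − V; both "models" below are hypothetical lookup stand-ins:

```python
# Sketch: predictiveness of attribute set V = {race}, measured as the
# disagreement between h(X) and h(X - V). Models are invented stand-ins.
def h_full(x):          # uses race and sex
    return "Yes" if x["race"] == "White" and x["sex"] == "M" else "No"

def h_without_race(x):  # uses sex only
    return "Yes" if x["sex"] == "M" else "No"

def predictiveness(h, h_minus_v, tests):
    """Fraction of test cases where dropping V changes the prediction."""
    return sum(h(x) != h_minus_v(x) for x in tests) / len(tests)

tests = [{"race": "White", "sex": "M"}, {"race": "Black", "sex": "M"},
         {"race": "White", "sex": "F"}, {"race": "Black", "sex": "F"}]
print(predictiveness(h_full, h_without_race, tests))   # 0.25
```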
Example (7/7): Prediction Cube
Cell value: Predictiveness of Race
      2004 (Jan … Dec)   2003 (Jan … Dec)   …
CA    0.4  0.1  0.3      0.6  0.8  …        …
USA   0.7  0.4  0.3      0.3  …    …        …
…
(Figure: drilling down replaces each country row with per-state rows,
e.g. USA → AL, …, WY; rolling up replaces the monthly cells with
yearly cells for 03 and 04.)
Efficient Computation
• Reduce prediction cube computation to data
cube computation
– Represent a data-mining model as a distributive or
algebraic (bottom-up computable) aggregate
function, so that data-cube techniques can be
directly applied
Bottom-Up Data Cube
Computation
1985 1986 1987 1988
Norway 10 30 20 24
… 23 45 14 32
USA 14 32 42 11
1985 1986 1987 1988
All 47 107 76 67
All
Norway 84
… 114
USA 99
All
All 297
Cell Values: Numbers of loan applications
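Bottom-up computation works because SUM is distributive: coarser cells are obtained by adding finer ones. A sketch reproducing the roll-ups above (the "…" row of the table is labeled "Other" here, an assumed name):

```python
# Sketch: bottom-up cube aggregation with a distributive function (SUM),
# reproducing the loan-application roll-ups shown above.
base = {
    ("Norway", 1985): 10, ("Norway", 1986): 30, ("Norway", 1987): 20, ("Norway", 1988): 24,
    ("Other",  1985): 23, ("Other",  1986): 45, ("Other",  1987): 14, ("Other",  1988): 32,
    ("USA",    1985): 14, ("USA",    1986): 32, ("USA",    1987): 42, ("USA",    1988): 11,
}

def rollup(cells, keep):
    """Aggregate finer cells into coarser ones; keep() picks surviving dims."""
    out = {}
    for key, v in cells.items():
        out[keep(key)] = out.get(keep(key), 0) + v
    return out

by_country = rollup(base, lambda k: k[0])         # (country, year) -> country
by_year    = rollup(base, lambda k: k[1])         # (country, year) -> year
grand      = rollup(by_country, lambda k: "All")  # country -> All

print(by_country["Norway"], by_year[1985], grand["All"])   # 84 47 297
```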
Scoring Function
• Represent a model as a function of sets
• Conceptually, a machine-learning model h(X; Z(D)) is
a scoring function Score(y, x; Z(D)) that gives each
class y a score on test example x
– h(x; Z(D)) = argmax_y Score(y, x; Z(D))
– Score(y, x; Z(D)) ≈ p(y | x, Z(D))
– Z(D): The set of training examples (a cube subset of D)
Machine-Learning Models
• Naïve Bayes:
– Scoring function: algebraic
• Kernel-density-based classifier:
– Scoring function: distributive
• Decision tree, random forest:
– Neither distributive, nor algebraic
• PBE: Probability-based ensemble (new)
– To make any machine-learning model distributive
– Approximation
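Naïve Bayes scoring is algebraic because its sufficient statistics (class counts and per-class attribute-value counts) combine by simple addition across sub-cells; a minimal sketch of that merge property, with made-up records:

```python
# Sketch: why naive Bayes is bottom-up computable. Its sufficient
# statistics are counts, and counts from two sub-cells simply add.
from collections import Counter

def stats(records):
    """Per-cell statistics: class counts and (class, attr, value) counts."""
    cls, joint = Counter(), Counter()
    for x, y in records:
        cls[y] += 1
        for a, v in x.items():
            joint[(y, a, v)] += 1
    return cls, joint

def merge(s1, s2):
    """Combine sub-cell statistics into the parent cell's statistics."""
    return s1[0] + s2[0], s1[1] + s2[1]

cell_a = [({"sex": "M"}, "Yes"), ({"sex": "F"}, "No")]
cell_b = [({"sex": "M"}, "Yes")]

merged = merge(stats(cell_a), stats(cell_b))
whole  = stats(cell_a + cell_b)
print(merged == whole)   # True: merging sub-cells equals recomputing
```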
Efficiency Comparison
(Figure: execution time in seconds vs. number of records, 40K to 200K,
comparing the exhaustive methods RFex, KDCex, NBex, J48ex against
bottom-up score computation with NB, KDC, RF-PBE, J48-PBE.)
Bellwether Analysis:
Global Aggregates from Local Regions
with Beechun Chen, Jude Shavlik, and Pradeep Tamma
In VLDB 06
Motivating Example
• A company wants to predict the first year worldwide profit
of a new item (e.g., a new movie)
– By looking at features and profits of previous (similar) movies, we
predict expected total profit (1-year US sales) for new movie
• Wait a year and write a query! If you can’t wait, stay awake …
– The most predictive “features” may be based on sales data
gathered by releasing the new movie in many “regions” (different
locations over different time periods).
• Example “region-based” features: 1st week sales in Peoria, week-to-
week sales growth in Wisconsin, etc.
• Gathering this data has a cost (e.g., marketing expenses, waiting
time)
• Problem statement: Find the most predictive region
features that can be obtained within a given “cost budget”
Key Ideas
• Large datasets are rarely labeled with the targets that we
wish to learn to predict
– But for the tasks we address, we can readily use OLAP
queries to generate features (e.g., 1st week sales in
Peoria) and even targets (e.g., profit) for mining
• We use data-mining models as building blocks in
the mining process, rather than thinking of them
as the end result
– The central problem is to find data subsets
(“bellwether regions”) that lead to predictive features
which can be gathered at low cost for a new case
Motivating Example
• A company wants to predict the first year’s
worldwide profit for a new item, by using its
historical database
• Database Schema:
– Profit Table(Time, Location, CustID, ItemID, Profit)
– Item Table(ItemID, Category, R&D Expense)
– Ad Table(Time, Location, ItemID, AdExpense, AdSize)
• The combination of the underlined attributes forms a key
A Straightforward Approach
• Build a regression model to predict item profit
• There is much room for accuracy improvement!
By joining and aggregating tables in the historical database we can
create a training set (item-table features plus the Profit target):
ItemID Category R&D Expense Profit
1 Laptop 500K 12,000K
2 Desktop 100K 8,000K
… … … …
An example regression model:
Profit = β0 + β1·Laptop + β2·Desktop + β3·RdExpense
Using Regional Features
• Example region: [1st week, HK]
• Regional features:
– Regional Profit: The 1st week profit in HK
– Regional Ad Expense: The 1st week ad expense in HK
• A possibly more accurate model:
Profit[1yr, All] = β0 + β1·Laptop + β2·Desktop + β3·RdExpense +
                   β4·Profit[1wk, HK] + β5·AdExpense[1wk, HK]
• Problem: Which region should we use?
– The smallest region that improves the accuracy the most
– We give each candidate region a cost
– The most “cost-effective” region is the bellwether region
Basic Bellwether Problem
• Historical database: DB
• Training item set: I
• Candidate region set: R
– E.g., { [1-n week, Location] }
• Target generation query: τi(DB) returns the target value of item
i ∈ I
– E.g., sum(Profit) over item i's records in [1-52, All] of
ProfitTable
• Feature generation query: φi,r(DB), for i ∈ Ir and r ∈ R
– Ir: The set of items in region r
– E.g., [ Categoryi, RdExpensei, Profiti,[1-n, Loc], AdExpensei,[1-n, Loc] ]
• Cost query: κr(DB), r ∈ R: the cost of collecting data from r
• Predictive model: hr(x), r ∈ R, trained on {(φi,r(DB), τi(DB)) : i ∈ Ir}
– E.g., linear regression model
– E.g., linear regression model
Location domain hierarchy: All → Country (CA, US, KR) → State (AL, …,
WI)
Basic Bellwether Problem
(Figure: for region r = [1-2, USA], the features φi,r(DB) are computed
by aggregating the data records in r, e.g. Profit[1-2, USA] = 45K for
a Desktop item i; the target τi(DB) is the item's total profit in
[1-52, All], e.g. 2,000K.)
For each region r, build a predictive model hr(x); then choose as the
bellwether region the r for which:
• Coverage(r), the fraction of all items in the region, is at least
the minimum coverage support
• Cost(r, DB) is at most the cost threshold
• Error(hr) is minimized
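The selection rule above can be sketched in a few lines; the candidate regions, coverages, costs, and errors below are made up for illustration:

```python
# Sketch of bellwether selection: among candidate regions meeting the
# coverage and cost constraints, pick the one with minimum model error.
regions = [
    # (name, coverage, cost, error of the model h_r) -- invented numbers
    ("[1-2, USA]", 0.9, 30, 0.12),
    ("[1-1, NY]",  0.4, 10, 0.10),   # cheap, but too little coverage
    ("[1-8, MD]",  0.8, 60, 0.08),   # accurate, but costly
]

def bellwether(candidates, min_coverage, cost_budget):
    ok = [r for r in candidates
          if r[1] >= min_coverage and r[2] <= cost_budget]
    return min(ok, key=lambda r: r[3])[0] if ok else None

print(bellwether(regions, min_coverage=0.5, cost_budget=40))   # [1-2, USA]
```

Raising the cost budget can change the answer, which is exactly what the error-vs-budget experiments below explore.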
Experiment on a Mail Order Dataset
(Figure: Error-vs-Budget plot, RMSE vs. budget from 5 to 85; the
bellwether region found is [1-8 month, MD].)
• Bel Err: The error of the bellwether region found using a given
budget
• Avg Err: The average error of all the cube regions with costs under
a given budget
• Smp Err: The error of a set of randomly sampled (non-cube) regions
with costs under a given budget
(RMSE: Root Mean Square Error)
Experiment on a Mail Order Dataset
(Figure: Uniqueness plot, fraction of indistinguishable regions vs.
budget from 5 to 85, for the bellwether region [1-8 month, MD].)
• Y-axis: Fraction of regions that are as good as the bellwether
region
– The fraction of regions that satisfy the constraints and have
errors within the 99% confidence interval of the error of the
bellwether region
• We have 99% confidence that [1-8 month, MD] is a quite unusual
bellwether region
Subset-Based Bellwether Prediction
• Motivation: Different subsets of items may have
different bellwether regions
– E.g., The bellwether region for laptops may be
different from the bellwether region for clothes
• Two approaches:
– Bellwether Tree: a decision tree over item attributes (e.g., first
split on R&D Expense vs. 50K, then on Category = Desktop/Laptop),
with a bellwether region such as [1-1, NY], [1-2, WI], or [1-3, MD]
at each leaf
– Bellwether Cube: a cube over item attributes, with a bellwether
region in each cell, e.g.:
                    R&D Expenses: Low   Medium    High
Software  OS               [1-3,CA]  [1-1,NY]  [1-2,CA]
…         ...                 …         …         …
Hardware  Laptop           [1-4,MD]  [1-1,NY]  [1-3,WI]
…         …                   …         …         …
Conclusions
Related Work: Building models on
OLAP Results
• Multi-dimensional regression [Chen, VLDB 02]
– Goal: Detect changes of trends
– Build linear regression models for cube cells
• Step-by-step regression in stream cubes [Liu, PAKDD 03]
• Loglinear-based quasi cubes [Barbara, J. IIS 01]
– Use loglinear model to approximately compress dense regions of
a data cube
• NetCube [Margaritis, VLDB 01]
– Build a Bayes Net over the entire dataset to approximately answer
count queries
Related Work (Contd.)
• Cubegrades [Imielinski, J. DMKD 02]
– Extend cubes with ideas from association rules
– How does the measure change when we rollup or drill down?
• Constrained gradients [Dong, VLDB 01]
– Find pairs of similar cell characteristics associated with big
changes in measure
• User-cognizant multidimensional analysis [Sarawagi,
VLDBJ 01]
– Help users find the most informative unvisited regions in a data
cube using max entropy principle
• Multi-Structural DBs [Fagin et al., PODS 05, VLDB 05]
Take-Home Messages
• Promising exploratory data analysis paradigm:
– Can use models to identify interesting subsets
– Concentrate only on subsets in cube space
• Those are meaningful, tractable subsets
– Precompute results and provide the users with an interactive
tool
• A simple way to plug “something” into cube-style
analysis:
– Try to describe/approximate “something” by a distributive or
algebraic function
Big Picture
• Why stop with decision behavior? Can apply to other
kinds of analyses too
• Why stop at browsing? Can mine prediction cubes in
their own right
• Exploratory analysis of mining space:
– Dimension attributes can be parameters related to algorithm,
data conditioning, etc.
– Tractable evaluation is a challenge:
• Large number of “dimensions”, real-valued dimension
attributes, difficulties in compositional evaluation
• Active learning for experiment design, extending
compositional methods