1. 1
Data Mining:
Concepts and Techniques
(3rd ed.)
Chapter 3
Subrata Kumer Paul
Assistant Professor, Dept. of CSE, BAUET
sksubrata96@gmail.com
2. 2
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
3. 3
Data Quality: Why Preprocess the Data?
Measures for data quality: A multidimensional view
Accuracy: correct or wrong, accurate or not
Completeness: not recorded, unavailable, …
Consistency: some modified but some not, dangling, …
Timeliness: timely update?
Believability: how much the data can be trusted to be correct?
Interpretability: how easily the data can be
understood?
4. 4
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
5. 5
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
6. 6
Data Cleaning
Data in the Real World Is Dirty: Lots of potentially incorrect data,
e.g., faulty instruments, human or computer error, transmission errors
incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
e.g., Occupation=“ ” (missing data)
noisy: containing noise, errors, or outliers
e.g., Salary=“−10” (an error)
inconsistent: containing discrepancies in codes or names, e.g.,
Age=“42”, Birthday=“03/07/2010”
Was rating “1, 2, 3”, now rating “A, B, C”
discrepancy between duplicate records
Intentional (e.g., disguised missing data)
Jan. 1 as everyone’s birthday?
7. 7
Incomplete (Missing) Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the
time of entry
history or changes of the data not registered
Missing data may need to be inferred
8. 8
How to Handle Missing Data?
Ignore the tuple: usually done when class label is missing
(when doing classification)—not effective when the % of
missing values per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Fill in it automatically with
a global constant : e.g., “unknown”, a new class?!
the attribute mean
the attribute mean for all samples belonging to the
same class: smarter
the most probable value: inference-based such as
Bayesian formula or decision tree
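A minimal pandas sketch (an illustration, not from the slides) of the automatic fill-in strategies listed above; the DataFrame, the income attribute, and the class column are hypothetical.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "income": [50000.0, np.nan, 72000.0, np.nan, 61000.0],  # hypothetical attribute with missing values
        "label":  ["A", "A", "B", "B", "A"],                    # hypothetical class label
    })

    # Global constant: replace every missing value with a sentinel such as "unknown" (here 0)
    const_filled = df["income"].fillna(0)

    # Attribute mean: replace missing values with the overall mean of the attribute
    mean_filled = df["income"].fillna(df["income"].mean())

    # Class-conditional mean (smarter): mean over samples belonging to the same class
    class_mean_filled = df.groupby("label")["income"].transform(lambda s: s.fillna(s.mean()))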
9. 9
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
10. 10
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g.,
deal with possible outliers)
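A short numpy sketch (illustrative only) of equal-frequency binning followed by smoothing by bin means and by bin boundaries; the sorted price values are used here as hypothetical input.

    import numpy as np

    prices = np.array([4, 8, 15, 21, 21, 24, 25, 28, 34], dtype=float)  # already sorted

    bins = np.array_split(prices, 3)  # three equal-frequency (equal-depth) bins

    # Smoothing by bin means: every value in a bin is replaced by the bin's mean
    by_means = np.concatenate([np.full(len(b), b.mean()) for b in bins])

    # Smoothing by bin boundaries: each value moves to the nearer of the bin's min or max
    by_bounds = np.concatenate([
        np.where(np.abs(b - b.min()) <= np.abs(b - b.max()), b.min(), b.max())
        for b in bins
    ])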
11. 11
Data Cleaning as a Process
Data discrepancy detection
Use metadata (e.g., domain, range, dependency, distribution)
Check field overloading
Check uniqueness rule, consecutive rule and null rule
Use commercial tools
Data scrubbing: use simple domain knowledge (e.g., postal
code, spell-check) to detect errors and make corrections
Data auditing: by analyzing data to discover rules and
relationship to detect violators (e.g., correlation and clustering
to find outliers)
Data migration and integration
Data migration tools: allow transformations to be specified
ETL (Extraction/Transformation/Loading) tools: allow users to
specify transformations through a graphical user interface
Integration of the two processes
Iterative and interactive (e.g., Potter’s Wheel)
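As a rough illustration of rule-based discrepancy detection (not any specific commercial tool), the pandas sketch below checks a uniqueness rule, a null rule, and a metadata-derived domain rule; the column names and the four-digit postal-code rule are hypothetical.

    import pandas as pd

    df = pd.DataFrame({
        "cust_id":     [101, 102, 102, 104],            # hypothetical key column
        "postal_code": ["1205", "99XX", None, "1340"],  # hypothetical field with errors
    })

    # Uniqueness rule: a key attribute must not repeat
    duplicate_keys = df[df["cust_id"].duplicated(keep=False)]

    # Null rule: flag records violating a "value must be present" constraint
    missing_values = df[df["postal_code"].isna()]

    # Domain rule from metadata: postal codes must be exactly four digits
    bad_domain = df[~df["postal_code"].fillna("").str.fullmatch(r"\d{4}")]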
12. 12
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
13. 13
Data Integration
Data integration:
Combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
Integrate metadata from different sources
Entity identification problem:
Identify real world entities from multiple data sources, e.g., Bill
Clinton = William Clinton
Detecting and resolving data value conflicts
For the same real world entity, attribute values from different
sources are different
Possible reasons: different representations, different scales, e.g.,
metric vs. British units
14. 14
Handling Redundancy in Data Integration
Redundant data occur often when integration of multiple
databases
Object identification: The same attribute or object
may have different names in different databases
Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
Redundant attributes may be able to be detected by
correlation analysis and covariance analysis
Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
15. 15
Correlation Analysis (Nominal Data)
Χ2 (chi-square) test
The larger the Χ2 value, the more likely the variables are
related
The cells that contribute the most to the Χ2 value are
those whose actual count is very different from the
expected count
Correlation does not imply causality
# of hospitals and # of car-theft in a city are correlated
Both are causally linked to the third variable: population
Χ2 = Σ (Observed − Expected)² / Expected
16. 16
Chi-Square Calculation: An Example
Χ2 (chi-square) calculation (numbers in parentheses are
expected counts calculated based on the data distribution
in the two categories)
It shows that like_science_fiction and play_chess are
correlated in the group
Χ2 = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93
                           Play chess   Not play chess   Sum (row)
Like science fiction         250 (90)        200 (360)         450
Not like science fiction     50 (210)       1000 (840)        1050
Sum (col.)                        300             1200         1500
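A small scipy sketch that reproduces the expected counts and the Χ2 value above from the observed contingency table (Yates' continuity correction is disabled so the plain Χ2 formula is used).

    import numpy as np
    from scipy.stats import chi2_contingency

    observed = np.array([[250, 200],    # like science fiction
                         [50, 1000]])   # not like science fiction

    chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
    print(expected)   # [[ 90. 360.] [210. 840.]] -- the counts shown in parentheses
    print(chi2)       # ~507.93, so the two attributes are strongly correlated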
17. 17
Correlation Analysis (Numeric Data)
Correlation coefficient (also called Pearson’s product
moment coefficient)
where n is the number of tuples, mean(A) and mean(B) are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ(aibi) is the sum of the AB cross-product.
If rA,B > 0, A and B are positively correlated (A’s values
increase as B’s). The higher, the stronger correlation.
rA,B = 0: independent; rA,B < 0: negatively correlated
rA,B = Σi (ai − mean(A))(bi − mean(B)) / ((n − 1) σA σB) = (Σi ai·bi − n·mean(A)·mean(B)) / ((n − 1) σA σB)
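A numpy sketch of both forms of the correlation coefficient above, checked against numpy's built-in Pearson correlation; the sample values are hypothetical.

    import numpy as np

    a = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
    b = np.array([5.0, 8.0, 10.0, 11.0, 14.0])
    n = len(a)
    sA, sB = a.std(ddof=1), b.std(ddof=1)   # sample standard deviations

    # Definition: centered cross-products over (n - 1) * sigma_A * sigma_B
    r_def = np.sum((a - a.mean()) * (b - b.mean())) / ((n - 1) * sA * sB)

    # Computational form: (sum of a_i*b_i minus n*mean(A)*mean(B)) over the same denominator
    r_comp = (np.sum(a * b) - n * a.mean() * b.mean()) / ((n - 1) * sA * sB)

    assert np.isclose(r_def, np.corrcoef(a, b)[0, 1]) and np.isclose(r_comp, r_def)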
19. 19
Correlation (viewed as linear
relationship)
Correlation measures the linear relationship
between objects
To compute correlation, we standardize data
objects, A and B, and then take their dot product
a'k = (ak − mean(A)) / std(A)
b'k = (bk − mean(B)) / std(B)
correlation(A, B) = (A' · B') / (n − 1), i.e., the dot product of the standardized objects, scaled by n − 1 when sample standard deviations are used
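A quick numpy check (illustrative) that standardizing the two objects and taking their dot product, divided by n − 1, reproduces the same Pearson coefficient; the values are the same hypothetical pairs as above.

    import numpy as np

    a = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
    b = np.array([5.0, 8.0, 10.0, 11.0, 14.0])

    a_prime = (a - a.mean()) / a.std(ddof=1)   # standardized object A'
    b_prime = (b - b.mean()) / b.std(ddof=1)   # standardized object B'

    r_dot = np.dot(a_prime, b_prime) / (len(a) - 1)
    assert np.isclose(r_dot, np.corrcoef(a, b)[0, 1])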
20. 20
Covariance (Numeric Data)
Covariance is similar to correlation:
Cov(A,B) = E[(A − mean(A))(B − mean(B))] = Σi (ai − mean(A))(bi − mean(B)) / n
where n is the number of tuples, mean(A) and mean(B) are the respective mean or expected values of A and B, and σA and σB are the respective standard deviations of A and B.
Positive covariance: If CovA,B > 0, then A and B both tend to be larger
than their expected values.
Negative covariance: If CovA,B < 0 then if A is larger than its expected
value, B is likely to be smaller than its expected value.
Independence: CovA,B = 0 but the converse is not true:
Some pairs of random variables may have a covariance of 0 but are not
independent. Only under some additional assumptions (e.g., the data follow
multivariate normal distributions) does a covariance of 0 imply independence
Correlation coefficient: rA,B = Cov(A,B) / (σA σB)
21. Co-Variance: An Example
It can be simplified in computation as Cov(A,B) = E(A·B) − E(A)·E(B)
Suppose two stocks A and B have the following values in one week:
(2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
Question: If the stocks are affected by the same industry trends, will
their prices rise or fall together?
E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
Cov(A,B) = (2×5+3×8+5×10+4×11+6×14)/5 − 4 × 9.6 = 4
Thus, A and B rise together since Cov(A, B) > 0.
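The same example as a short Python sketch, using the simplified form Cov(A, B) = E(A·B) − Ā·B̄:

```python
a = [2, 3, 5, 4, 6]      # stock A prices over the week
b = [5, 8, 10, 11, 14]   # stock B prices over the week

mean_a = sum(a) / len(a)                           # E(A) = 4.0
mean_b = sum(b) / len(b)                           # E(B) = 9.6
e_ab = sum(x * y for x, y in zip(a, b)) / len(a)   # E(A*B) = 42.4

cov = e_ab - mean_a * mean_b
print(cov)   # 4.0 > 0, so A and B tend to rise together
```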
22. 22
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
23. 23
Data Reduction Strategies
Data reduction: Obtain a reduced representation of the data set that
is much smaller in volume but yet produces the same (or almost the
same) analytical results
Why data reduction? — A database/data warehouse may store
terabytes of data. Complex data analysis may take a very long time to
run on the complete data set.
Data reduction strategies
Dimensionality reduction, e.g., remove unimportant attributes
Wavelet transforms
Principal Components Analysis (PCA)
Feature subset selection, feature creation
Numerosity reduction (some simply call it: Data Reduction)
Regression and Log-Linear Models
Histograms, clustering, sampling
Data cube aggregation
Data compression
24. 24
Data Reduction 1: Dimensionality
Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly sparse
Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
The possible combinations of subspaces grow exponentially
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality reduction techniques
Wavelet transforms
Principal Component Analysis
Supervised and nonlinear techniques (e.g., feature selection)
25. 25
Mapping Data to a New Space
[Figure: two sine waves; the same two sine waves plus noise; and their frequency-domain representations]
Fourier transform
Wavelet transform
26. 26
What Is Wavelet Transform?
Decomposes a signal into
different frequency subbands
Applicable to n-
dimensional signals
Data are transformed to
preserve relative distance
between objects at different
levels of resolution
Allow natural clusters to
become more distinguishable
Used for image compression
27. 27
Wavelet Transformation
Discrete wavelet transform (DWT) for linear signal
processing, multi-resolution analysis
Compressed approximation: store only a small fraction of
the strongest of the wavelet coefficients
Similar to discrete Fourier transform (DFT), but better
lossy compression, localized in space
Method:
Length, L, must be an integer power of 2 (pad with 0’s when necessary)
Each transform has 2 functions: smoothing, difference
Applies to pairs of data, resulting in two sets of data of length L/2
Applies the two functions recursively until the data reaches the desired length
[Figure: Haar-2 and Daubechies-4 wavelet basis functions]
28. 28
Wavelet Decomposition
Wavelets: A math tool for space-efficient hierarchical
decomposition of functions
S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S^ = [2¾, −1¼, ½, 0, 0, −1, −1, 0] (i.e., [2.75, −1.25, 0.5, 0, 0, −1, −1, 0])
Compression: many small detail coefficients can be
replaced by 0’s, and only the significant coefficients are
retained
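A minimal sketch of the averaging-and-differencing (Haar) decomposition that reproduces the coefficients above; this uses the unnormalized Haar convention, which is not the only one in use.

```python
def haar_decompose(signal):
    # repeatedly replace the data with pairwise averages,
    # prepending the pairwise differences (detail coefficients)
    data, details = list(signal), []
    while len(data) > 1:
        averages = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        diffs    = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
        details = diffs + details          # finer-resolution details go to the right
        data = averages
    return data + details

print(haar_decompose([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```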
30. 30
Why Wavelet Transform?
Use hat-shape filters
Emphasize regions where points cluster
Suppress weaker information at their boundaries
Effective removal of outliers
Insensitive to noise, insensitive to input order
Multi-resolution
Detect arbitrary shaped clusters at different scales
Efficient
Complexity O(N)
Only applicable to low dimensional data
31. 31
Principal Component Analysis (PCA)
[Figure: data points in the (x1, x2) plane with the principal eigenvector e capturing the direction of greatest variation]
Find a projection that captures the largest amount of variation in data
The original data are projected onto a much smaller space, resulting
in dimensionality reduction. We find the eigenvectors of the
covariance matrix, and these eigenvectors define the new space
32. 32
Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors
(principal components) that can be best used to represent data
Normalize input data: Each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data (vector) is a linear combination of the k principal
component vectors
The principal components are sorted in order of decreasing
“significance” or strength
Since the components are sorted in decreasing order of significance, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance; using the strongest principal components, it is possible to reconstruct a good approximation of the original data
Works for numeric data only
Principal Component Analysis (Steps)
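A minimal NumPy sketch of these steps on a small made-up numeric data set (the matrix X is illustrative, not from the slides):

```python
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

X_centered = X - X.mean(axis=0)             # normalize input data (zero mean per attribute)
cov = np.cov(X_centered, rowvar=False)      # covariance matrix of the attributes
eigvals, eigvecs = np.linalg.eigh(cov)      # orthonormal eigenvectors = principal components

order = np.argsort(eigvals)[::-1]           # sort components by decreasing variance
components = eigvecs[:, order]

k = 1                                       # keep only the strongest component(s)
X_reduced = X_centered @ components[:, :k]  # project the data onto the reduced space
print(X_reduced.shape)                      # (8, 1)
```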
33. 33
Attribute Subset Selection
Another way to reduce dimensionality of data
Redundant attributes
Duplicate much or all of the information contained in
one or more other attributes
E.g., purchase price of a product and the amount of
sales tax paid
Irrelevant attributes
Contain no information that is useful for the data
mining task at hand
E.g., students' ID is often irrelevant to the task of
predicting students' GPA
34. 34
Heuristic Search in Attribute Selection
There are 2^d possible attribute combinations of d attributes
Typical heuristic attribute selection methods:
Best single attribute under the attribute independence
assumption: choose by significance tests
Best step-wise feature selection:
The best single attribute is picked first
Then the next best attribute conditioned on the first, ...
Step-wise attribute elimination:
Repeatedly eliminate the worst attribute
Best combined attribute selection and elimination
Optimal branch and bound:
Use attribute elimination and backtracking
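A minimal sketch of best step-wise (forward) feature selection; the score function is a placeholder for whatever evaluation the mining task uses (e.g., a significance test or cross-validated accuracy) and is an assumption here.

```python
def forward_selection(attributes, score, max_features):
    """Greedy forward selection: add the attribute that helps most, conditioned on those chosen."""
    selected = []
    while len(selected) < max_features:
        remaining = [a for a in attributes if a not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda a: score(selected + [a]))
        if score(selected + [best]) <= score(selected):
            break                         # stop when no attribute improves the score
        selected.append(best)
    return selected

# Usage sketch (hypothetical attribute names and scoring function):
# chosen = forward_selection(["age", "income", "zip_code"], my_score_fn, max_features=2)
```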
35. 35
Attribute Creation (Feature Generation)
Create new attributes (features) that can capture the
important information in a data set more effectively than
the original ones
Three general methodologies
Attribute extraction
Domain-specific
Mapping data to new space (see: data reduction)
E.g., Fourier transformation, wavelet
transformation, manifold approaches (not covered)
Attribute construction
Combining features (see: discriminative frequent
patterns in Chapter 7)
Data discretization
36. 36
Data Reduction 2: Numerosity
Reduction
Reduce data volume by choosing alternative, smaller
forms of data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard
the data (except possible outliers)
Ex.: Log-linear models: obtain the value at a point in m-D space as the product of appropriate marginal subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling, …
37. 37
Parametric Data Reduction:
Regression and Log-Linear Models
Linear regression
Data modeled to fit a straight line
Often uses the least-square method to fit the line
Multiple regression
Allows a response variable Y to be modeled as a
linear function of multidimensional feature vector
Log-linear model
Approximates discrete multidimensional probability
distributions
38. 38
Regression Analysis
Regression analysis: A collective name for
techniques for the modeling and analysis
of numerical data consisting of values of a
dependent variable (also called
response variable or measurement) and
of one or more independent variables (aka.
explanatory variables or predictors)
The parameters are estimated so as to give
a "best fit" of the data
Most commonly the best fit is evaluated by
using the least squares method, but
other criteria have also been used
Used for prediction
(including forecasting of
time-series data), inference,
hypothesis testing, and
modeling of causal
relationships
[Figure: scatter plot with fitted regression line y = x + 1; for an observed point (X1, Y1), Y1' is the value predicted by the line]
39. 39
Linear regression: Y = w X + b
Two regression coefficients, w and b, specify the line and are to be
estimated by using the data at hand
Estimated using the least-squares criterion on the known values of Y1, Y2, …, and X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
Many nonlinear functions can be transformed into the above
Log-linear models:
Approximate discrete multidimensional probability distributions
Estimate the probability of each point (tuple) in a multi-dimensional
space for a set of discretized attributes, based on a smaller subset
of dimensional combinations
Useful for dimensionality reduction and data smoothing
Regression Analysis and Log-Linear Models
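A minimal sketch of estimating w and b with the least-squares method, using a few made-up points lying near the slide's line y = x + 1:

```python
def linear_regression(x, y):
    # closed-form least-squares estimates for Y = w*X + b
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    w = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    b = mean_y - w * mean_x
    return w, b

x = [1, 2, 3, 4, 5]
y = [2.1, 2.9, 4.2, 5.0, 5.8]          # noisy observations of y = x + 1
w, b = linear_regression(x, y)
print(round(w, 2), round(b, 2))        # 0.95 1.15
```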
40. 40
Histogram Analysis
Divide data into buckets and
store average (sum) for each
bucket
Partitioning rules:
Equal-width: equal bucket
range
Equal-frequency (or equal-
depth)
[Figure: equal-width histogram of prices, with buckets from 10,000 to 100,000 and frequencies between 0 and 40]
41. 41
Clustering
Partition data set into clusters based on similarity, and
store cluster representation (e.g., centroid and diameter)
only
Can be very effective if data is clustered but not if data
is “smeared”
Can have hierarchical clustering and be stored in multi-
dimensional index tree structures
There are many choices of clustering definitions and
clustering algorithms
Cluster analysis will be studied in depth in Chapter 10
42. 42
Sampling
Sampling: obtaining a small sample s to represent the
whole data set N
Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
Key principle: Choose a representative subset of the data
Simple random sampling may have very poor
performance in the presence of skew
Develop adaptive sampling methods, e.g., stratified
sampling:
Note: Sampling may not reduce database I/Os (page at a
time)
43. 43
Types of Sampling
Simple random sampling
There is an equal probability of selecting any particular
item
Sampling without replacement
Once an object is selected, it is removed from the
population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set, and draw samples from each
partition (proportionally, i.e., approximately the same
percentage of the data)
Used in conjunction with skewed data
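A minimal sketch of the three schemes with Python's standard library; the stratified example assumes records are dictionaries with a stratum key, which is an illustrative choice only.

```python
import random
from collections import defaultdict

data = list(range(1000))

srswor = random.sample(data, 50)                    # simple random sampling without replacement
srswr = [random.choice(data) for _ in range(50)]    # simple random sampling with replacement

def stratified_sample(records, stratum_of, fraction):
    """Draw approximately the same percentage from each stratum."""
    strata = defaultdict(list)
    for r in records:
        strata[stratum_of(r)].append(r)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(random.sample(group, k))
    return sample

# Usage sketch (hypothetical records keyed by "age_group"):
# sample = stratified_sample(customers, lambda r: r["age_group"], fraction=0.05)
```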
46. 46
Data Cube Aggregation
The lowest level of a data cube (base cuboid)
The aggregated data for an individual entity of interest
E.g., a customer in a phone calling data warehouse
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate levels
Use the smallest representation which is enough to
solve the task
Queries regarding aggregated information should be
answered using data cube, when possible
47. 47
Data Reduction 3: Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless, but only limited manipulation is
possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be
reconstructed without reconstructing the whole
Time sequences are not audio: typically short and varying slowly with time
Dimensionality and numerosity reduction may also be
considered as forms of data compression
49. 49
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
50. 50
Data Transformation
A function that maps the entire set of values of a given attribute to a
new set of replacement values s.t. each old value can be identified
with one of the new values
Methods
Smoothing: Remove noise from data
Attribute/feature construction
New attributes constructed from the given ones
Aggregation: Summarization, data cube construction
Normalization: Scaled to fall within a smaller, specified range
min-max normalization
z-score normalization
normalization by decimal scaling
Discretization: Concept hierarchy climbing
51. 51
Normalization
Min-max normalization: to [new_minA, new_maxA]
v' = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA
Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to (73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716
Z-score normalization (μ: mean, σ: standard deviation):
v' = (v − μA) / σA
Ex. Let μ = 54,000, σ = 16,000. Then (73,600 − 54,000) / 16,000 = 1.225
Normalization by decimal scaling:
v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
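A minimal sketch of the three normalization methods, reproducing the income example; the decimal-scaling helper takes the maximum absolute value of the attribute as an argument for simplicity.

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mu, sigma):
    return (v - mu) / sigma

def decimal_scaling(v, max_abs):
    j = 0
    while max_abs / (10 ** j) >= 1:   # smallest j such that max(|v'|) < 1
        j += 1
    return v / (10 ** j)

print(round(min_max(73600, 12000, 98000), 3))   # 0.716
print(round(z_score(73600, 54000, 16000), 3))   # 1.225
print(decimal_scaling(917, 986))                # 0.917 (j = 3 since max |v| = 986)
```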
52. 52
Discretization
Three types of attributes
Nominal—values from an unordered set, e.g., color, profession
Ordinal—values from an ordered set, e.g., military or academic
rank
Numeric—real numbers, e.g., integer or real numbers
Discretization: Divide the range of a continuous attribute into intervals
Interval labels can then be used to replace actual data values
Reduce data size by discretization
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Prepare for further analysis, e.g., classification
53. 53
Data Discretization Methods
Typical methods: All the methods can be applied recursively
Binning
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (unsupervised, top-down split or
bottom-up merge)
Decision-tree analysis (supervised, top-down split)
Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
54. 54
Simple Discretization: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B –A)/N.
The most straightforward, but outliers may dominate presentation
Skewed data is not handled well
Equal-depth (frequency) partitioning
Divides the range into N intervals, each containing approximately
same number of samples
Good data scaling
Managing categorical attributes can be tricky
55. 55
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26,
28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
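A minimal sketch that reproduces the equal-depth bins and both smoothing variants for the price list above (bin means are rounded to integers, as on the slide):

```python
def equal_depth_bins(values, n_bins):
    values = sorted(values)
    size = len(values) // n_bins
    return [values[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    # replace each value with whichever bin boundary (min or max) is closer
    return [[min(b) if x - min(b) <= max(b) - x else max(b) for x in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equal_depth_bins(prices, 3)
print(bins)                        # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```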
57. 57
Discretization by Classification &
Correlation Analysis
Classification (e.g., decision tree analysis)
Supervised: Given class labels, e.g., cancerous vs. benign
Using entropy to determine split point (discretization point)
Top-down, recursive split
Details to be covered in Chapter 7
Correlation analysis (e.g., Chi-merge: χ2-based discretization)
Supervised: use class information
Bottom-up merge: find the best neighboring intervals (those
having similar distributions of classes, i.e., low χ2 values) to merge
Merge performed recursively, until a predefined stopping condition
58. 58
Concept Hierarchy Generation
Concept hierarchy organizes concepts (i.e., attribute values)
hierarchically and is usually associated with each dimension in a data
warehouse
Concept hierarchies facilitate drilling and rolling in data warehouses to view data at multiple granularities
Concept hierarchy formation: Recursively reduce the data by collecting
and replacing low level concepts (such as numeric values for age) by
higher level concepts (such as youth, adult, or senior)
Concept hierarchies can be explicitly specified by domain experts
and/or data warehouse designers
Concept hierarchy can be automatically formed for both numeric and
nominal data. For numeric data, use discretization methods shown.
59. 59
Concept Hierarchy Generation
for Nominal Data
Specification of a partial/total ordering of attributes
explicitly at the schema level by users or experts
street < city < state < country
Specification of a hierarchy for a set of values by explicit
data grouping
{Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
E.g., only street < city, not others
Automatic generation of hierarchies (or attribute levels) by
the analysis of the number of distinct values
E.g., for a set of attributes: {street, city, state, country}
60. 60
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
The attribute with the most distinct values is placed at
the lowest level of the hierarchy
Exceptions, e.g., weekday, month, quarter, year
country: 15 distinct values
province_or_state: 365 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
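A minimal sketch of ordering attributes by their distinct-value counts to form a default hierarchy; the counts mirror the slide, and in practice they would be computed from the data set, with exceptions such as weekday/month/quarter/year still handled manually.

```python
distinct_counts = {
    "country": 15,
    "province_or_state": 365,
    "city": 3567,
    "street": 674339,
}

# fewest distinct values -> highest (most general) level of the hierarchy
levels = sorted(distinct_counts, key=distinct_counts.get)
print(" < ".join(reversed(levels)))   # street < city < province_or_state < country
```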
61. 61
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
62. 62
Summary
Data quality: accuracy, completeness, consistency, timeliness,
believability, interpretability
Data cleaning: e.g. missing/noisy values, outliers
Data integration from multiple sources:
Entity identification problem
Remove redundancies
Detect inconsistencies
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation