The document discusses data preprocessing techniques for data mining. It covers why preprocessing is important to ensure quality data and mining results. The major tasks covered are data cleaning, integration, transformation, reduction, and discretization. Data cleaning involves techniques for handling missing data, noisy data, and inconsistencies. Data integration combines multiple data sources. Data transformation includes normalization, aggregation, and feature construction. Data reduction strategies aim to reduce data size for mining while maintaining analytical quality and include cube aggregation, dimensionality reduction, and numerosity reduction.
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Why Data Preprocessing?
Data in the real world is dirty
incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
noisy: containing errors or outliers
inconsistent: containing discrepancies in codes or names
No quality data, no quality mining results!
Quality decisions must be based on quality data
A data warehouse needs consistent integration of quality data
Multi-Dimensional Measure of Data Quality
A well-accepted multidimensional view:
Accuracy
Completeness
Consistency
Timeliness
Believability
Value added
Interpretability
Accessibility
Broad categories: intrinsic, contextual, representational, and accessibility
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains a reduced representation in volume but produces the same or similar analytical results
Data discretization
Part of data reduction but of particular importance, especially for numerical data
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Cleaning
Data cleaning tasks
Fill in missing values
Identify outliers and smooth out noisy data
Correct inconsistent data
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistency with other recorded data, leading to deletion
data not entered due to misunderstanding
certain data not being considered important at the time of entry
history or changes of the data not being registered
Missing data may need to be inferred.
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
Fill in the missing value manually: tedious and often infeasible
Use a global constant to fill in the missing value: e.g., "unknown", a new class?!
Use the attribute mean to fill in the missing value
Use the attribute mean of all samples belonging to the same class to fill in the missing value: smarter
Use the most probable value to fill in the missing value: inference-based, such as a Bayesian formula or a decision tree
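To make the mean and per-class-mean strategies concrete, here is a minimal pandas sketch; the `cls` and `income` column names and the values are hypothetical, not from the slides.

```python
import numpy as np
import pandas as pd

# Hypothetical sales data with missing customer income values
df = pd.DataFrame({
    "cls":    ["A", "A", "B", "B", "B"],
    "income": [50_000, np.nan, 30_000, np.nan, 40_000],
})

# Strategy 1: fill with the overall attribute mean
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Strategy 2: fill with the mean of samples in the same class (smarter)
df["income_class_mean"] = df["income"].fillna(
    df.groupby("cls")["income"].transform("mean")
)

print(df)
```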
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitations
inconsistency in naming conventions
Other data problems that require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning method:
first sort the data and partition it into (equi-depth) bins
then smooth by bin means, bin medians, bin boundaries, etc.
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and have a human check them
Regression
smooth by fitting the data to regression functions
Simple Discretization Methods: Binning
Equal-width (distance) partitioning:
divides the range into N intervals of equal size: a uniform grid
if A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A)/N
the most straightforward approach
but outliers may dominate the presentation
skewed data is not handled well
Equal-depth (frequency) partitioning:
divides the range into N intervals, each containing approximately the same number of samples
good data scaling
managing categorical attributes can be tricky
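A minimal pandas sketch of the two partitioning schemes, using the price values from the next slide as illustrative data: `pd.cut` produces equal-width bins, `pd.qcut` produces equal-depth (equal-frequency) bins.

```python
import pandas as pd

prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-width partitioning: 3 intervals of equal size W = (max - min)/N
equal_width = pd.cut(prices, bins=3)

# Equal-depth partitioning: 3 intervals with roughly equal numbers of samples
equal_depth = pd.qcut(prices, q=3)

print(pd.DataFrame({"price": prices,
                    "equal_width": equal_width,
                    "equal_depth": equal_depth}))
```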
Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
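A small NumPy sketch that reproduces the smoothing example above (equi-depth bins of four values, then smoothing by bin means and by bin boundaries):

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = prices.reshape(3, 4)            # already sorted; 3 equi-depth bins of 4 values

# Smoothing by bin means: replace every value with its bin's (rounded) mean
by_means = np.repeat(bins.mean(axis=1).round().astype(int), 4).reshape(3, 4)

# Smoothing by bin boundaries: snap each value to the nearest bin boundary
lo, hi = bins[:, [0]], bins[:, [-1]]
by_boundaries = np.where(bins - lo <= hi - bins, lo, hi)

print(by_means)        # [[ 9  9  9  9] [23 23 23 23] [29 29 29 29]]
print(by_boundaries)   # [[ 4  4  4 15] [21 21 25 25] [26 26 26 34]]
```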
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Integration
Data integration:
combines data from multiple sources into a coherent store
Schema integration
integrate metadata from different sources
Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
for the same real-world entity, attribute values from different sources differ
possible reasons: different representations, different scales, e.g., metric vs. British units
Handling Redundant Data in Data Integration
Redundancy often arises when multiple databases are integrated
The same attribute may have different names in different databases
One attribute may be a "derived" attribute in another table, e.g., annual revenue
Redundant attributes may be detected by correlation analysis
Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies and improve mining speed and quality
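As a sketch of correlation analysis for redundancy detection (the column names and values are made up), a high absolute correlation between two numeric attributes suggests that one of them may be redundant:

```python
import pandas as pd

# Hypothetical integrated table: monthly_revenue is derivable from annual_revenue
df = pd.DataFrame({
    "annual_revenue":  [120, 240, 360, 600, 480],
    "monthly_revenue": [10, 20, 30, 50, 40],
    "employees":       [3, 9, 7, 20, 12],
})

corr = df.corr()                 # Pearson correlation matrix
print(corr)

# Flag attribute pairs whose |correlation| is suspiciously high
threshold = 0.95
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) > threshold:
            print(f"possible redundancy: {a} ~ {b}")
```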
Data Transformation
Smoothing: remove noise from the data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scale values to fall within a small, specified range
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction
new attributes constructed from the given ones
Data Transformation: Normalization
min-max normalization:
v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
z-score normalization:
v' = (v - mean_A) / stand_dev_A
normalization by decimal scaling:
v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
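A minimal NumPy sketch of the three normalizations above; the sample values and the target range [0, 1] for min-max normalization are assumptions for illustration only.

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

# Min-max normalization to a new range [new_min, new_max] = [0, 1]
new_min, new_max = 0.0, 1.0
minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Z-score normalization
zscore = (v - v.mean()) / v.std()

# Decimal scaling: divide by 10^j with the smallest j such that max(|v'|) < 1
j = 0
while np.abs(v / 10 ** j).max() >= 1:
    j += 1
decimal = v / 10 ** j

print(minmax, zscore, decimal, sep="\n")
```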
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Reduction Strategies
A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set
Data reduction
obtains a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Data reduction strategies
Data cube aggregation
Dimensionality reduction
Numerosity reduction
Discretization and concept hierarchy generation
Data Cube Aggregation
The lowest level of a data cube
the aggregated data for an individual entity of interest, e.g., a customer in a phone-calling data warehouse
Multiple levels of aggregation in data cubes
further reduce the size of the data to deal with
Reference appropriate levels
use the smallest representation that is sufficient to solve the task
Queries regarding aggregated information should be answered using the data cube, when possible
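As a sketch of cube aggregation, rolling quarterly call data up to yearly totals with pandas keeps only the aggregation level a query actually needs; the table and column names here are invented for illustration.

```python
import pandas as pd

calls = pd.DataFrame({
    "customer": ["c1", "c1", "c1", "c1", "c2", "c2", "c2", "c2"],
    "year":     [2008, 2008, 2009, 2009, 2008, 2008, 2009, 2009],
    "quarter":  [1, 2, 1, 2, 1, 2, 1, 2],
    "minutes":  [300, 250, 410, 380, 120, 90, 200, 150],
})

# Roll up from (customer, year, quarter) to (customer, year): a smaller cube level
yearly = calls.groupby(["customer", "year"], as_index=False)["minutes"].sum()
print(yearly)
```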
Dimensionality Reduction
Feature selection (i.e., attribute subset selection):
select a minimum set of features such that the probability distribution of the different classes given the values of those features is as close as possible to the original distribution given the values of all features
fewer attributes appear in the discovered patterns, making the patterns easier to understand
Heuristic methods (needed because the number of possible subsets is exponential), one of which is sketched below:
step-wise forward selection
step-wise backward elimination
combining forward selection and backward elimination
decision-tree induction
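A hedged sketch of step-wise forward selection: the scoring function and the toy attribute scores are placeholders, and any model-quality measure could be plugged in instead.

```python
from typing import Callable, List, Set

def forward_selection(features: List[str],
                      score: Callable[[Set[str]], float],
                      k: int) -> Set[str]:
    """Greedily add, one at a time, the feature that most improves `score`."""
    selected: Set[str] = set()
    while len(selected) < k:
        best_feature, best_score = None, float("-inf")
        for f in features:
            if f in selected:
                continue
            s = score(selected | {f})
            if s > best_score:
                best_feature, best_score = f, s
        selected.add(best_feature)
    return selected

# Toy scorer: pretend A4, A1 and A6 are the informative attributes
toy_scores = {"A1": 0.3, "A2": 0.05, "A3": 0.02, "A4": 0.5, "A5": 0.01, "A6": 0.2}
print(forward_selection(list(toy_scores), lambda s: sum(toy_scores[f] for f in s), k=3))
# -> {'A1', 'A4', 'A6'} (set order may vary)
```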
Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
(Figure: the induced tree tests A4 at the root, then A1 on one branch and A6 on the other; the leaves are Class 1 and Class 2.)
=> Reduced attribute set: {A1, A4, A6}
Data Compression
String compression
there are extensive theories and well-tuned algorithms
typically lossless
but only limited manipulation is possible without expansion
Audio/video compression
typically lossy compression, with progressive refinement
sometimes small fragments of the signal can be reconstructed without reconstructing the whole
Time sequences are not audio
typically short, and they vary slowly with time
Data Compression
(Figure: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data.)
Wavelet Transforms
Discrete wavelet transform (DWT): linear signal processing
Compressed approximation: store only a small fraction of the strongest wavelet coefficients
Similar to the discrete Fourier transform (DFT), but gives better lossy compression and is localized in space
Method:
the length L must be an integer power of 2 (pad with 0s when necessary)
each transform has 2 functions: smoothing and difference
apply them to pairs of data, resulting in two sets of data of length L/2
apply the two functions recursively until the desired length is reached
Example wavelet families: Haar-2, Daubechies-4
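A minimal sketch of the Haar transform described above; normalizing by 1/2 rather than 1/sqrt(2) is an assumed convention, and the input values are illustrative. Recursing on the smooth part gives the multi-level transform.

```python
import numpy as np

def haar_step(x: np.ndarray):
    """One level of the Haar DWT: pairwise smoothing and difference."""
    assert len(x) % 2 == 0, "length must be even (pad with zeros if needed)"
    pairs = x.reshape(-1, 2)
    smooth = pairs.mean(axis=1)                # smoothing function
    detail = (pairs[:, 0] - pairs[:, 1]) / 2   # difference function
    return smooth, detail

def haar_transform(x: np.ndarray):
    """Apply the two functions recursively until length 1 is reached."""
    coeffs = []
    while len(x) > 1:
        x, d = haar_step(x)
        coeffs.append(d)
    return x, coeffs   # overall average plus detail coefficients per level

avg, details = haar_transform(np.array([2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0]))
print(avg, details)
```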
Principal Component Analysis
Given N data vectors in k dimensions, find c <= k orthogonal vectors that can best be used to represent the data
The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)
Each data vector is a linear combination of the c principal component vectors
Works for numeric data only
Used when the number of dimensions is large
Principal Component Analysis
(Figure: a 2-D point cloud on the original axes X1 and X2, with the principal component axes Y1 and Y2 drawn along the directions of greatest variance.)
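A compact NumPy sketch of PCA via the covariance eigendecomposition; the synthetic data and the choice c = 1 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])  # N=100, k=2

# Center the data, then take eigenvectors of the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]

c = 1                                       # keep the top c principal components
components = eigvecs[:, order[:c]]          # k x c matrix of component vectors
X_reduced = Xc @ components                 # N x c: the reduced representation

print(X_reduced.shape)   # (100, 1)
```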
Numerosity Reduction
Parametric methods
assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
log-linear models: obtain the value at a point in m-dimensional space as a product over appropriate marginal subspaces
Non-parametric methods
do not assume models
major families: histograms, clustering, sampling
Regression and Log-Linear Models
Linear regression: data are modeled to fit a straight line
often uses the least-squares method to fit the line
Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions
Regression Analysis and Log-Linear Models
Linear regression: Y = α + β X
the two parameters α and β specify the line and are estimated from the data at hand
fit by applying the least-squares criterion to the known values Y1, Y2, ... and X1, X2, ...
Multiple regression: Y = b0 + b1 X1 + b2 X2
many nonlinear functions can be transformed into the above form
Log-linear models:
the multi-way table of joint probabilities is approximated by a product of lower-order tables
Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd
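A short NumPy sketch of least-squares linear regression, estimating α and β for Y = α + β X; the toy data points are made up.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])        # roughly y = 2x

# Closed-form least-squares estimates of the slope beta and intercept alpha
beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()

print(f"Y ~ {alpha:.2f} + {beta:.2f} X")
# Storing only (alpha, beta) instead of the raw points is the numerosity reduction
```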
Histograms
A popular data reduction technique
Divide the data into buckets and store the average (or sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems
(Figure: an example histogram of values between 10,000 and 100,000 grouped into equal-width buckets, with bucket counts ranging from 0 to 40.)
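A minimal NumPy sketch of histogram-based reduction: store only bucket edges and per-bucket counts/means instead of the raw values. The synthetic data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
values = rng.integers(10_000, 100_000, size=5_000)

# Reduce 5,000 raw values to 10 equal-width buckets
counts, edges = np.histogram(values, bins=10)

# Per-bucket means: the reduced representation used to answer queries approximately
bucket_idx = np.digitize(values, edges[1:-1])
bucket_means = np.array([values[bucket_idx == i].mean() for i in range(10)])

print(counts)
print(edges)
print(bucket_means.round(1))
```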
Clustering
Partition the data set into clusters, and store only a cluster representation
Can be very effective if the data is clustered, but not if the data is "smeared"
Can use hierarchical clustering and store the result in multi-dimensional index tree structures
There are many choices of clustering definitions and clustering algorithms, further detailed in Chapter 8
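As a sketch (using scikit-learn's KMeans, an assumed external dependency, on made-up data), the cluster centers plus per-cluster counts can stand in for the raw points:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(500, 2)) for c in (0.0, 3.0, 6.0)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Reduced representation: 3 centers and their member counts instead of 1,500 points
centers = km.cluster_centers_
counts = np.bincount(km.labels_)
print(centers.round(2), counts)
```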
Sampling
Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data
Choose a representative subset of the data
simple random sampling may perform very poorly in the presence of skew
Develop adaptive sampling methods
Stratified sampling:
approximate the percentage of each class (or subpopulation of interest) in the overall database
used in conjunction with skewed data
Sampling may not reduce database I/Os (data is read a page at a time).
Sampling
(Figure: raw data sampled with SRSWOR, simple random sampling without replacement, and with SRSWR, simple random sampling with replacement.)
Sampling
(Figure: raw data compared with a cluster/stratified sample.)
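A small pandas sketch of the sampling schemes just illustrated; the `cls` and `value` columns and the class proportions are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "cls":   np.repeat(["rare", "common"], [50, 950]),
    "value": rng.normal(size=1_000),
})

srswor = df.sample(n=100, replace=False, random_state=0)   # without replacement
srswr  = df.sample(n=100, replace=True,  random_state=0)   # with replacement

# Stratified sample: keep roughly 10% of each class, preserving class proportions
stratified = df.groupby("cls", group_keys=False).sample(frac=0.1, random_state=0)

print(srswor["cls"].value_counts(), stratified["cls"].value_counts(), sep="\n")
```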
Hierarchical Reduction
Use a multi-resolution structure with different degrees of reduction
Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters"
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation
an index tree hierarchically divides a data set into partitions by value ranges of some attributes
each partition can be considered a bucket
thus an index tree with aggregates stored at each node is a hierarchical histogram
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization
Three types of attributes:
Nominal: values from an unordered set
Ordinal: values from an ordered set
Continuous: real numbers
Discretization:
divide the range of a continuous attribute into intervals
some classification algorithms only accept categorical attributes
reduce data size by discretization
prepare for further analysis
Discretization and Concept Hierarchy
Discretization
reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals; interval labels can then be used to replace actual data values
Concept hierarchies
reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)
Discretization and Concept Hierarchy Generation for Numeric Data
Binning (see the sections before)
Histogram analysis (see the sections before)
Clustering analysis (see the sections before)
Entropy-based discretization
Segmentation by natural partitioning
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
E(S, T) = (|S1| / |S|) * Ent(S1) + (|S2| / |S|) * Ent(S2)
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is recursively applied to the partitions obtained until some stopping criterion is met, e.g., the information gain Ent(S) - E(T, S) falls below a threshold δ
Experiments show that it may reduce data size and improve classification accuracy
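A hedged sketch of finding the single best binary split by minimizing E(S, T); the recursion and the stopping test are omitted for brevity, and the toy data are invented.

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """Ent(S): entropy of the class-label distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_split(values: np.ndarray, labels: np.ndarray):
    """Boundary T minimizing E(S, T) = |S1|/|S| Ent(S1) + |S2|/|S| Ent(S2)."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best_t, best_e = None, float("inf")
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        t = (v[i] + v[i - 1]) / 2
        e = (i / len(v)) * entropy(y[:i]) + ((len(v) - i) / len(v)) * entropy(y[i:])
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

ages = np.array([23, 25, 27, 35, 41, 43, 45, 60])
cls  = np.array(["n", "n", "n", "n", "y", "y", "y", "y"])
print(best_split(ages, cls))   # boundary 38.0 with entropy 0.0 after the split
```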
Segmentation by Natural Partitioning
The 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals (a sketch follows the list):
if an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
if it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
if it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
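A simplified sketch of a single 3-4-5 partitioning step over a given low/high range; the example range is made up, and the full rule would also round the endpoints and recurse into each interval.

```python
import math

def three_four_five(low: float, high: float):
    """Partition [low, high] into 3, 4 or 5 equi-width intervals (one 3-4-5 step)."""
    msd = 10 ** math.floor(math.log10(high - low))   # most significant digit position
    distinct = round((high - low) / msd)              # distinct values at that digit
    if distinct in (3, 6, 7, 9):
        n = 3
    elif distinct in (2, 4, 8):
        n = 4
    else:                                             # 1, 5, or 10 distinct values
        n = 5
    width = (high - low) / n
    return [(low + i * width, low + (i + 1) * width) for i in range(n)]

print(three_four_five(-400_000, 2_000_000))
# -> 4 intervals of width 600,000: the range spans 2 distinct values at the 10^6 digit
```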
Concept Hierarchy Generation for Categorical Data
Specification of a partial ordering of attributes explicitly at the schema level by users or experts
Specification of a portion of a hierarchy by explicit data grouping
Specification of a set of attributes, but not of their partial ordering
Specification of only a partial set of attributes
Specification of a Set of Attributes
A concept hierarchy can be generated automatically based on the number of distinct values per attribute in the given attribute set: the attribute with the most distinct values is placed at the lowest level of the hierarchy.
country: 15 distinct values
province_or_state: 65 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
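A minimal pandas sketch of this heuristic on a made-up location table: order the attributes by distinct-value count, from fewest (top of the hierarchy) to most (bottom).

```python
import pandas as pd

locations = pd.DataFrame({
    "country":  ["US", "US", "US", "CA", "CA"],
    "province": ["NY", "CA", "CA", "ON", "ON"],
    "city":     ["New York", "Los Angeles", "San Diego", "Toronto", "Toronto"],
    "street":   ["5th Ave", "Sunset Blvd", "Harbor Dr", "Yonge St", "Bloor St"],
})

# Fewest distinct values -> highest level of the concept hierarchy
hierarchy = locations.nunique().sort_values().index.tolist()
print(hierarchy)   # ['country', 'province', 'city', 'street']
```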
Chapter 3: Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Summary
Data preparation is a big issue for both warehousing and mining
Data preparation includes
data cleaning and data integration
data reduction and feature selection
discretization
A lot of methods have been developed, but data preparation remains an active area of research
References
D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Communications of the ACM, 42:73-78, 1999.
Jagadish et al. Special Issue on Data Reduction Techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), December 1997.
D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
T. Redman. Data Quality: Management and Technology. Bantam Books, New York, 1992.
Y. Wand and R. Wang. Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 39:86-95, 1996.
R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995.