Machine Learning and Data Mining: 15 Data Exploration and Preparation
Course "Machine Learning and Data Mining" for the degree of Computer Engineering at the Politecnico di Milano. In this lecture we discuss the data exploration and preparation.

Course "Machine Learning and Data Mining" for the degree of Computer Engineering at the Politecnico di Milano. In this lecture we discuss the data exploration and preparation.

Transcript

  • 1. Data Exploration and Preparation. Machine Learning and Data Mining (Unit 15). Prof. Pier Luca Lanzi
  • 2. References. Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques", 2nd Edition, The Morgan Kaufmann Series in Data Management Systems. Tom M. Mitchell, "Machine Learning", McGraw Hill, 1997. Pang-Ning Tan, Michael Steinbach, Vipin Kumar, "Introduction to Data Mining", Addison Wesley. Ian H. Witten, Eibe Frank, "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations", 2nd Edition.
  • 3. Outline. Data Exploration: types of data, descriptive statistics, visualization. Data Preprocessing: aggregation, sampling, dimensionality reduction, feature creation, discretization, attribute transformation, compression, concept hierarchies.
  • 4. Types of data sets. Record: data matrix, document data, transaction data. Graph: World Wide Web, molecular structures. Ordered: spatial data, temporal data, sequential data, genetic sequence data.
  • 5. Important Characteristics of Structured Data. Dimensionality (curse of dimensionality). Sparsity (only presence counts). Resolution (patterns depend on the scale).
  • 6. Record Data. Data that consists of a collection of records, each of which consists of a fixed set of attributes.

    Tid  Refund  Marital Status  Taxable Income  Cheat
    1    Yes     Single          125K            No
    2    No      Married         100K            No
    3    No      Single          70K             No
    4    Yes     Married         120K            No
    5    No      Divorced        95K             Yes
    6    No      Married         60K             No
    7    Yes     Divorced        220K            No
    8    No      Single          85K             Yes
    9    No      Married         75K             No
    10   No      Single          90K             Yes
  • 7. Data Matrix. If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute. Such a data set can be represented by an m by n matrix, where there are m rows, one for each object, and n columns, one for each attribute.

    Projection of x Load  Projection of y Load  Distance  Load  Thickness
    10.23                 5.27                  15.22     2.7   1.2
    12.65                 6.25                  16.22     2.2   1.1
  • 8. Document Data. Each document becomes a 'term' vector: each term is a component (attribute) of the vector, and the value of each component is the number of times the corresponding term occurs in the document. (The column order of the table below is reconstructed; the rotated column labels were scrambled in the transcript.)

    Document    team  coach  play  ball  score  game  win  lost  timeout  season
    Document 1  3     0      5     0     2      6     0    2     0        2
    Document 2  0     7      0     2     1      0     0    3     0        0
    Document 3  0     1      0     0     1      2     2    0     3        0
  • 9. Transaction Data. A special type of record data, where each record (transaction) involves a set of items. For example, consider a grocery store: the set of products purchased by a customer during one shopping trip constitutes a transaction, while the individual products that were purchased are the items.

    TID  Items
    1    Bread, Coke, Milk
    2    Beer, Bread
    3    Beer, Coke, Diaper, Milk
    4    Beer, Bread, Diaper, Milk
    5    Coke, Diaper, Milk
  • 10. Graph Data. Generic graph and HTML links, e.g.:

    <a href="papers/papers.html#bbbb">Data Mining</a>
    <li><a href="papers/papers.html#aaaa">Graph Partitioning</a>
    <li><a href="papers/papers.html#aaaa">Parallel Solution of Sparse Linear System of Equations</a>
    <li><a href="papers/papers.html#ffff">N-Body Computation and Dense Linear System Solvers</a>
  • 11. Chemical Data. Benzene molecule: C6H6.
  • 12. Ordered Data. Sequences of transactions: each element of the sequence is a set of items/events.
  • 13. Ordered Data. Genomic sequence data, e.g.: GGTTCCGCCTTCAGCCCCGCGCC CGCAGGGCCCGCCCCGCGCCGTC GAGAAGGGCCCGCCTGGCGGGCG GGGGGAGGCGGGGCCGCCCGAGC CCAACCGAGTCCGACCAGGTGCC CCCTCTGCTCGGCCTAGACCTGA GCTCATTAGGCGGCAGCGGACAG GCCAAGTAGAACACGCGAAGCGC TGGGCTGCCTGCTGCGACCAGGG
  • 14. Ordered Data. Spatio-temporal data, e.g., the average monthly temperature of land and ocean.
  • 15. What is data exploration? A preliminary exploration of the data to better understand its characteristics. Key motivations of data exploration include: helping to select the right tool for preprocessing or analysis, and making use of humans' ability to recognize patterns, since people can recognize patterns not captured by data analysis tools. Data exploration is related to the area of Exploratory Data Analysis (EDA), created by the statistician John Tukey; the seminal book is "Exploratory Data Analysis" by Tukey. A nice online introduction can be found in Chapter 1 of the NIST Engineering Statistics Handbook, http://www.itl.nist.gov/div898/handbook/index.htm
  • 16. Techniques Used in Data Exploration. In EDA, as originally defined by Tukey, the focus was on visualization; clustering and anomaly detection were viewed as exploratory techniques. In data mining, clustering and anomaly detection are major areas of interest, and not thought of as just exploratory. In our discussion of data exploration, we focus on summary statistics, visualization, and Online Analytical Processing (OLAP).
  • 17. Iris Sample Data Set. Many of the exploratory data techniques are illustrated with the Iris Plant data set, available from the UCI Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html. The data set is due to the statistician R. A. Fisher. Three flower types (classes): Setosa, Virginica, Versicolour. Four (non-class) attributes: sepal width and length, petal width and length. (Photo: Virginica. Robert H. Mohlenbrock. USDA NRCS. 1995. Northeast wetland flora: Field office guide to plant species. Northeast National Technical Center, Chester, PA. Courtesy of USDA NRCS Wetland Science Institute.)
  • 18. Mining Data Descriptive Characteristics. Motivation: to better understand the data in terms of central tendency, variation, and spread. Data dispersion characteristics: median, max, min, quantiles, outliers, variance, etc. Numerical dimensions correspond to sorted intervals, so dispersion can be analyzed at multiple granularities of precision: boxplot or quantile analysis on sorted intervals, dispersion analysis on computed measures, folding measures into numerical dimensions, boxplot or quantile analysis on the transformed cube.
  • 19. Summary Statistics. Summary statistics are numbers that summarize properties of the data. Summarized properties include frequency, location, and spread. Examples: location, mean; spread, standard deviation. Most summary statistics can be calculated in a single pass through the data.
  • 20. Frequency and Mode. The frequency of an attribute value is the percentage of times the value occurs in the data set. For example, given the attribute 'gender' and a representative population of people, the gender 'female' occurs about 50% of the time. The mode of an attribute is its most frequent value. The notions of frequency and mode are typically used with categorical data.
  • 21. Percentiles. For continuous data, the notion of a percentile is more useful. Given an ordinal or continuous attribute x and a number p between 0 and 100, the p-th percentile is a value x_p of x such that p% of the observed values of x are less than x_p. For instance, the 50th percentile is the value x_50 such that 50% of all values of x are less than x_50.
  • 22. Measures of Location: Mean and Median. The mean is the most common measure of the location of a set of points. However, the mean is very sensitive to outliers, so the median or a trimmed mean is also commonly used.
  • 23. Measures of Spread: Range and Variance. The range is the difference between the max and min. The variance or standard deviation is the most common measure of the spread of a set of points. However, these are also sensitive to outliers, so other measures are often used.
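Most of these statistics are one-liners in standard analysis libraries. The following minimal sketch is not from the lecture; the sample values are made up, and it assumes NumPy, pandas, and SciPy are available:

```python
# Location, spread, percentiles, mode, and frequencies for a small sample.
import numpy as np
import pandas as pd
from scipy import stats

x = pd.Series([5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9])  # e.g., sepal lengths (cm)

print("mean:        ", x.mean())
print("median:      ", x.median())               # robust alternative to the mean
print("trimmed mean:", stats.trim_mean(x, 0.1))  # drop 10% of the values at each end
print("range:       ", x.max() - x.min())
print("variance:    ", x.var())                  # sample variance; std dev is x.std()
print("percentiles: ", np.percentile(x, [25, 50, 75]))

gender = pd.Series(["F", "M", "F", "F", "M"])
print("mode:        ", gender.mode().iloc[0])    # most frequent categorical value
print("frequencies: ", gender.value_counts(normalize=True).to_dict())
```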
  • 24. Visualization. Visualization is the conversion of data into a visual or tabular format so that the characteristics of the data and the relationships among data items or attributes can be analyzed or reported. Visualization of data is one of the most powerful and appealing techniques for data exploration. Humans have a well developed ability to analyze large amounts of information that is presented visually: they can detect general patterns and trends, as well as outliers and unusual patterns.
  • 25. Example: Sea Surface Temperature. The figure on this slide shows the Sea Surface Temperature (SST) for July 1982; tens of thousands of data points are summarized in a single figure.
  • 26. Representation. Representation is the mapping of information to a visual format. Data objects, their attributes, and the relationships among data objects are translated into graphical elements such as points, lines, shapes, and colors. Example: objects are often represented as points; their attribute values can be represented as the position of the points or as characteristics of the points, e.g., color, size, and shape. If position is used, then the relationships of points, i.e., whether they form groups or a point is an outlier, are easily perceived.
  • 27. Arrangement. Arrangement is the placement of visual elements within a display. It can make a large difference in how easy it is to understand the data.
  • 28. Selection. Selection is the elimination or the de-emphasis of certain objects and attributes. Selection may involve choosing a subset of attributes: dimensionality reduction is often used to reduce the number of dimensions to two or three; alternatively, pairs of attributes can be considered. Selection may also involve choosing a subset of objects: a region of the screen can only show so many points, so one can sample, but may want to preserve points in sparse areas.
  • 29. Visualization Techniques: Histograms. A histogram usually shows the distribution of values of a single variable: divide the values into bins and show a bar plot of the number of objects in each bin; the height of each bar indicates the number of objects. The shape of the histogram depends on the number of bins. Example: petal width (10 and 20 bins, respectively).
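A minimal sketch of the bin-count comparison above, assuming matplotlib and scikit-learn (which bundles the Iris data) are available; it is illustrative, not the figure used in the lecture:

```python
# Histograms of Iris petal width with 10 and 20 bins: the shape changes with the bin count.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

petal_width = load_iris().data[:, 3]  # fourth attribute: petal width (cm)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, bins in zip(axes, [10, 20]):
    ax.hist(petal_width, bins=bins)
    ax.set_title(f"{bins} bins")
    ax.set_xlabel("petal width (cm)")
plt.show()
```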
  • 30. Two-Dimensional Histograms. They show the joint distribution of the values of two attributes. Example: petal width and petal length. What does this tell us?
  • 31. Visualization Techniques: Box Plots. Box plots were invented by J. Tukey and are another way of displaying the distribution of data. The figure on this slide shows the basic parts of a box plot: outliers, 90th percentile (top whisker), 75th percentile, 50th percentile, and 25th percentile (the box), and 10th percentile (bottom whisker).
  • 32. Example of Box Plots. Box plots can be used to compare attributes.
  • 33. Visualization Techniques: Scatter Plots. Attribute values determine the position. Two-dimensional scatter plots are most common, but there can also be three-dimensional scatter plots. Often additional attributes can be displayed by using the size, shape, and color of the markers that represent the objects. Arrays of scatter plots can compactly summarize the relationships of several pairs of attributes, as shown on the next slide and in the sketch below.
  • 34. Scatter Plot Array of Iris Attributes.
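A sketch of such a scatter plot array using pandas' built-in scatter_matrix (an assumption; the lecture does not specify the tooling). Color encodes the class attribute:

```python
# Pairwise scatter plots of the four Iris attributes, colored by species.
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
scatter_matrix(iris.data, c=iris.target, figsize=(8, 8), diagonal="hist")
plt.show()
```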
  • 35. Visualization Techniques: Contour Plots. Contour plots are useful when a continuous attribute is measured on a spatial grid. They partition the plane into regions of similar values. The contour lines that form the boundaries of these regions connect points with equal values. The most common example is contour maps of elevation, but they can also display temperature, rainfall, air pressure, etc.
  • 36. Contour Plot Example: SST, December 1998 (degrees Celsius).
  • 37. Visualization Techniques: Matrix Plots. One can plot the data matrix itself. This can be useful when objects are sorted according to class. Typically, the attributes are normalized to prevent one attribute from dominating the plot. Plots of similarity or distance matrices can also be useful for visualizing the relationships between objects. Examples of matrix plots are presented on the next two slides.
  • 38. Visualization of the Iris Data Matrix (values shown in units of standard deviation).
  • 39. Visualization of the Iris Correlation Matrix.
  • 40. Visualization Techniques: Parallel Coordinates. Parallel coordinates are used to plot the attribute values of high-dimensional data. Instead of using perpendicular axes, they use a set of parallel axes. The attribute values of each object are plotted as a point on each corresponding coordinate axis, and the points are connected by a line; thus, each object is represented as a line. Often, the lines representing a distinct class of objects group together, at least for some attributes. The ordering of attributes is important in seeing such groupings.
  • 41. Parallel Coordinates Plots for Iris Data.
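A sketch of a parallel coordinates plot for the Iris data, again with pandas' built-in helper (an assumption, not the lecture's tooling):

```python
# Each Iris flower becomes a line across the four attribute axes;
# lines of the same species tend to group together.
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame.copy()
df["species"] = iris.target_names[iris.target]   # replace class codes with names
parallel_coordinates(df.drop(columns="target"), "species")
plt.show()
```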
  • 42. Other Visualization Techniques. Star plots: a similar approach to parallel coordinates, but the axes radiate from a central point, and the line connecting the values of an object forms a polygon. Chernoff faces: an approach created by Herman Chernoff that associates each attribute with a characteristic of a face; the values of each attribute determine the appearance of the corresponding facial characteristic, so each object becomes a separate face. This relies on humans' ability to distinguish faces.
  • 43. Star Plots for Iris Data: Setosa, Versicolour, Virginica.
  • 44. Chernoff Faces for Iris Data: Setosa, Versicolour, Virginica.
  • 45. Why Data Preprocessing? Data in the real world is dirty. Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data, e.g., occupation="". Noisy: containing errors or outliers, e.g., Salary="-10". Inconsistent: containing discrepancies in codes or names, e.g., Age="42" but Birthday="03/07/1997"; a rating that was "1, 2, 3" is now "A, B, C"; discrepancies between duplicate records.
  • 46. Why Is Data Dirty? Incomplete data may come from: "not applicable" data values when collected; different considerations between the time when the data was collected and when it is analyzed; human, hardware, or software problems. Noisy data (incorrect values) may come from: faulty data collection instruments; human or computer error at data entry; errors in data transmission. Inconsistent data may come from: different data sources; functional dependency violations (e.g., modifying some linked data). Duplicate records also need data cleaning.
  • 47. Why Is Data Preprocessing Important? No quality data, no quality mining results! Quality decisions must be based on quality data; e.g., duplicate or missing data may cause incorrect or even misleading statistics. A data warehouse needs consistent integration of quality data. Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse.
  • 48. Data Quality. What kinds of data quality problems are there? How can we detect problems with the data? What can we do about these problems? Examples of data quality problems: noise and outliers, missing values, duplicate data.
  • 49. Multi-Dimensional Measure of Data Quality. A well-accepted multidimensional view: accuracy, completeness, consistency, timeliness, believability, value added, interpretability, accessibility. Broad categories: intrinsic, contextual, representational, and accessibility.
  • 50. Noise. Noise refers to the modification of original values. Examples: distortion of a person's voice when talking on a poor phone, and "snow" on a television screen. (Figures: two sine waves, and the same two sine waves with noise added.)
  • 51. Outliers. Outliers are data objects with characteristics that are considerably different from most of the other data objects in the data set.
  • 52. Missing Values. Reasons for missing values: information is not collected (e.g., people decline to give their age and weight); attributes may not be applicable to all cases (e.g., annual income is not applicable to children). Handling missing values: eliminate data objects; estimate missing values; ignore the missing value during analysis; replace with all possible values (weighted by their probabilities). A small sketch of the first three strategies follows.
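A minimal pandas sketch; the tiny frame and its columns are hypothetical:

```python
# Eliminate, estimate, or flag-and-ignore missing values.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 47, 31],
                   "income": [30e3, 45e3, np.nan, 52e3]})

dropped = df.dropna()              # eliminate data objects with missing values
filled  = df.fillna(df.mean())     # estimate: replace with the attribute mean
mask    = df["age"].isna()         # flag, so analysis can ignore the missing value
print(dropped, filled, mask, sep="\n\n")
```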
  • 53. Duplicate Data. A data set may include data objects that are duplicates, or almost duplicates, of one another. This is a major issue when merging data from heterogeneous sources. Example: the same person with multiple email addresses. Data cleaning is the process of dealing with duplicate data issues.
  • 54. Data Cleaning as a Process. Data discrepancy detection: use metadata (e.g., domain, range, dependency, distribution); check field overloading; check uniqueness rules, consecutive rules, and null rules; use commercial tools (data scrubbing: use simple domain knowledge, e.g., postal codes and spell-checking, to detect errors and make corrections; data auditing: analyze the data to discover rules and relationships and detect violators, e.g., correlation and clustering to find outliers). Data migration and integration: data migration tools allow transformations to be specified; ETL (Extraction/Transformation/Loading) tools allow users to specify transformations through a graphical user interface. Integration of the two processes: iterative and interactive (e.g., Potter's Wheel).
  • 55. Data Preprocessing. Aggregation, sampling, dimensionality reduction, feature subset selection, feature creation, discretization and binarization, attribute transformation.
  • 56. Aggregation. Combining two or more attributes (or objects) into a single attribute (or object). Purpose: data reduction (reduce the number of attributes or objects); change of scale (cities aggregated into regions, states, countries, etc.); more "stable" data (aggregated data tends to have less variability).
  • 57. Aggregation: Variation of Precipitation in Australia. (Figures: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation.)
  • 58. Sampling. Sampling is the main technique employed for data selection. It is often used both for the preliminary investigation of the data and for the final data analysis. Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming; sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming.
  • 59. Key Principle for Effective Sampling. Using a sample will work almost as well as using the entire data set if the sample is representative. A sample is representative if it has approximately the same property (of interest) as the original set of data.
  • 60. Types of Sampling. Simple random sampling: there is an equal probability of selecting any particular item. Sampling without replacement: as each item is selected, it is removed from the population. Sampling with replacement: objects are not removed from the population as they are selected for the sample, so the same object can be picked more than once. Stratified sampling: split the data into several partitions, then draw random samples from each partition. A sketch of the three schemes follows.
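A sketch with pandas (the toy frame is made up for illustration):

```python
# Simple random sampling with/without replacement, and stratified sampling.
import pandas as pd

df = pd.DataFrame({"value": range(100), "group": ["a", "b"] * 50})

no_replacement   = df.sample(n=10, replace=False, random_state=0)
with_replacement = df.sample(n=10, replace=True, random_state=0)   # same object may recur
stratified = df.groupby("group", group_keys=False).apply(
    lambda g: g.sample(n=5, random_state=0))                       # equal draw per stratum
```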
  • 61. Sample Size. (Figures: the same data set with 8000, 2000, and 500 points.)
  • 62. Sample Size. What sample size is necessary to get at least one object from each of 10 groups?
  • 63. Curse of Dimensionality. When dimensionality increases, data becomes increasingly sparse in the space that it occupies, and definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful. (Figure: randomly generate 500 points and compute the difference between the max and min distance between any pair of points, as a function of dimensionality.)
  • 64. Dimensionality Reduction. Purpose: avoid the curse of dimensionality; reduce the amount of time and memory required by data mining algorithms; allow data to be more easily visualized; possibly eliminate irrelevant features or reduce noise. Techniques: Principal Component Analysis, Singular Value Decomposition, and other supervised and non-linear techniques.
  • 65. Dimensionality Reduction: Principal Component Analysis (PCA). Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data. Steps: normalize the input data, so that each attribute falls within the same range; compute k orthonormal (unit) vectors, i.e., the principal components; each input data vector is a linear combination of the k principal component vectors. The principal components are sorted in order of decreasing "significance" or strength; since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data). Works for numeric data only. Used when the number of dimensions is large.
  • 66. Principal Component Analysis. (Figure: original axes X1, X2 and principal component axes Y1, Y2.)
  • 67. Dimensionality Reduction: PCA. The goal is to find a projection that captures the largest amount of variation in the data.
  • 68. Dimensionality Reduction: PCA. Find the eigenvectors of the covariance matrix; the eigenvectors define the new space. A minimal sketch follows.
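A minimal scikit-learn sketch of the steps above (normalize, compute components, keep the strongest); the choice of k = 2 is illustrative:

```python
# PCA on the standardized Iris attributes.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = load_iris().data
X_std = StandardScaler().fit_transform(X)   # normalize: zero mean, unit variance

pca = PCA(n_components=2)                   # keep the k=2 strongest components
X_2d = pca.fit_transform(X_std)
print(pca.explained_variance_ratio_)        # variance captured by each component
```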
  • 69. Feature Subset Selection. Another way to reduce the dimensionality of data. Redundant features duplicate much or all of the information contained in one or more other attributes; example: the purchase price of a product and the amount of sales tax paid. Irrelevant features contain no information that is useful for the data mining task at hand; example: students' IDs are often irrelevant to the task of predicting students' GPAs.
  • 70. Feature Subset Selection. Brute-force approach: try all possible feature subsets as input to the data mining algorithm. Embedded approaches: feature selection occurs naturally as part of the data mining algorithm. Filter approaches: features are selected using a procedure that is independent of a specific data mining algorithm, e.g., attributes are selected based on correlation measures. Wrapper approaches: use a data mining algorithm as a black box to find the best subset of attributes, e.g., apply a genetic algorithm and a decision tree algorithm to find the best set of features for a decision tree. A sketch of a filter approach follows.
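A sketch of a filter approach using scikit-learn's univariate scores (one possible filter among many; the lecture does not prescribe a specific one):

```python
# Keep the k=2 attributes with the highest ANOVA F-scores, independently
# of whatever mining algorithm is applied afterwards.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
print(selector.get_support())     # boolean mask of the selected attributes
X_reduced = selector.transform(X)
```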
  • 71. Feature Creation. Create new attributes that can capture the important information in a data set much more efficiently than the original attributes; e.g., given the birthday, create the attribute age. Three general methodologies: feature extraction (domain-specific), mapping data to a new space, and feature construction (combining features).
  • 72. Mapping Data to a New Space. Examples: the Fourier transform and the wavelet transform. (Figures: two sine waves, the same two sine waves with noise added, and the corresponding frequency spectrum.)
  • 73. Discretization. Three types of attributes: nominal, values from an unordered set, e.g., color; ordinal, values from an ordered set, e.g., military or academic rank; continuous, numeric values, e.g., integer or real numbers. Discretization divides the range of a continuous attribute into intervals. It is used because some classification algorithms only accept categorical attributes, to reduce data size, and to prepare for further analysis.
  • 74. Discretization Approaches. Supervised: attributes are discretized using the class information; this generates intervals that try to minimize the loss of information about the class. Unsupervised: attributes are discretized solely based on their values.
  • 75. Discretization Using Class Labels. Entropy-based approach. (Figures: 3 categories for both x and y; 5 categories for both x and y.)
  • 76. Discretization Without Using Class Labels. (Figures: original data, equal interval width, equal frequency, and k-means discretization.) A sketch of the three unsupervised schemes follows.
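A sketch using scikit-learn's KBinsDiscretizer, whose strategies map directly onto equal width, equal frequency, and k-means binning (the data is randomly generated for illustration):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

x = np.random.default_rng(0).normal(size=(200, 1))

for strategy in ["uniform", "quantile", "kmeans"]:   # width / frequency / k-means
    disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy=strategy)
    bins = disc.fit_transform(x).ravel().astype(int)
    print(strategy, np.bincount(bins))               # objects per interval
```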
  • 77. Discretization and Concept Hierarchy. Discretization reduces the number of values for a given continuous attribute by dividing the range of the attribute into intervals; interval labels can then be used to replace actual data values. It can be supervised or unsupervised, split-based (top-down) or merge-based (bottom-up), and it can be performed recursively on an attribute. Concept hierarchy formation recursively reduces the data by collecting and replacing low-level concepts (such as numeric values for age) with higher-level concepts (such as young, middle-aged, or senior).
  • 78. Typical Discretization Methods. Entropy-based discretization: supervised, top-down split. Interval merging by χ2 analysis: supervised, bottom-up merge. Segmentation by natural partitioning: unsupervised, top-down split.
  • 79. Entropy-Based Discretization. Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the information after partitioning is I(S, T) = (|S1|/|S|) Entropy(S1) + (|S2|/|S|) Entropy(S2). Entropy is calculated based on the class distribution of the samples in the set: given m classes, the entropy of S1 is Entropy(S1) = −Σ_{i=1..m} p_i log2(p_i), where p_i is the probability of class i in S1. The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization, and the process is applied recursively to the partitions obtained until some stopping criterion is met. Such boundaries may reduce data size and improve classification accuracy.
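A bare-bones sketch of one binary split, written from the formulas above (not the lecture's code): evaluate every candidate boundary T and keep the one minimizing I(S, T).

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(x, y):
    order = np.argsort(x)
    x, y = x[order], y[order]
    candidates = (x[1:] + x[:-1]) / 2          # midpoints between sorted values
    best_t, best_i = None, np.inf
    for t in candidates:
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:  # skip degenerate boundaries
            continue
        i = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if i < best_i:                         # minimize the weighted entropy I(S, T)
            best_t, best_i = t, i
    return best_t, best_i
```

Recursing on the two resulting partitions, until the stopping criterion is met, gives the full discretization.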
  • 80. Interval Merge by χ2 Analysis. Merging-based (bottom-up), as opposed to splitting-based, methods find the best neighboring intervals and merge them to form larger intervals recursively. ChiMerge [Kerber AAAI 1992; see also Liu et al. DMKD 2002]: initially, each distinct value of a numerical attribute A is considered to be one interval; χ2 tests are performed for every pair of adjacent intervals; adjacent intervals with the least χ2 values are merged together, since low χ2 values for a pair indicate similar class distributions. This merge process proceeds recursively until a predefined stopping criterion is met (such as a significance level, max-interval, max inconsistency, etc.).
  • 81. Segmentation by Natural Partitioning. A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals. If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals. If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals. If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals.
  • 82. Example of the 3-4-5 Rule. Consider profit values with min = -$351, low (5th percentile) = -$159, high (95th percentile) = $1,838, and max = $4,700. Step 1: determine min, low, high, and max. Step 2: the most significant digit is at $1,000, so low is rounded to -$1,000 and high to $2,000, giving the range (-$1,000, $2,000). Step 3: this range covers 3 distinct values at the most significant digit, so it is partitioned into 3 equi-width intervals: (-$1,000, 0], (0, $1,000], ($1,000, $2,000]. Step 4: since min and max fall outside the rounded range, the boundary intervals are adjusted: the first interval shrinks to (-$400, 0] and the interval ($2,000, $5,000] is added; each top-level interval is then partitioned recursively by the same rule, e.g., (-$400, 0] into four $100-wide intervals, (0, $1,000] into five $200-wide intervals, ($1,000, $2,000] into five $200-wide intervals, and ($2,000, $5,000] into three $1,000-wide intervals.
  • 83. Attribute Transformation. A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values. Simple functions: x^k, log(x), e^x, |x|. Standardization and normalization. A small sketch follows.
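A small sketch of such transformations in NumPy (the values are illustrative):

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])

log_x  = np.log(x)                            # compress a long right tail
z      = (x - x.mean()) / x.std()             # standardization: zero mean, unit variance
minmax = (x - x.min()) / (x.max() - x.min())  # normalization to [0, 1]
```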
  • 84. On-Line Analytical Processing (OLAP). Relational databases put data into tables, while OLAP uses a multidimensional array representation. Such representations of data previously existed in statistics and other fields. There are a number of data analysis and data exploration operations that are easier with such a data representation.
  • 85. Creating a Multidimensional Array. First, identify which attributes are to be the dimensions and which attribute is to be the target attribute, whose values appear as entries in the multidimensional array. The attributes used as dimensions must have discrete values. The target value is typically a count or a continuous value, e.g., the cost of an item; there can also be no target variable at all, except the count of objects that have the same set of attribute values. Second, find the value of each entry in the multidimensional array by summing the values (of the target attribute), or counting the objects, that have the attribute values corresponding to that entry.
  • 86. Example: Iris Data. We show how the attributes petal length, petal width, and species type can be converted to a multidimensional array. First, we discretize petal width and length to have the categorical values low, medium, and high; this yields a table of counts for each combination (note the count attribute).
  • 87. Example: Iris Data (continued). Each unique tuple of petal width, petal length, and species type identifies one element of the array. This element is assigned the corresponding count value; all non-specified tuples are 0. (The figure on this slide illustrates the result.)
  • 88. Example: Iris Data (continued). Slices of the multidimensional array are shown by cross-tabulations. What do these tables tell us? A sketch of the construction follows.
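A sketch of the whole construction with pandas (the bin count and low/medium/high labels follow the example above; the exact bin boundaries are an assumption):

```python
# Discretize petal width/length, then cross-tabulate to slice the count array.
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame.copy()
labels = ["low", "medium", "high"]
df["petal width"] = pd.cut(df["petal width (cm)"], bins=3, labels=labels)
df["petal length"] = pd.cut(df["petal length (cm)"], bins=3, labels=labels)
df["species"] = iris.target_names[iris.target]

# One cross-tabulation per species: slices of the 3-dimensional count array.
per_species = pd.crosstab([df["species"], df["petal length"]], df["petal width"])
print(per_species)

# Summing over species gives a two-dimensional aggregate of the cube.
aggregate = pd.crosstab(df["petal length"], df["petal width"])
print(aggregate)
```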
  • 89. OLAP Operations: Data Cube. The key operation of OLAP is the formation of a data cube. A data cube is a multidimensional representation of data, together with all possible aggregates. By all possible aggregates, we mean the aggregates that result from selecting a proper subset of the dimensions and summing over all remaining dimensions. For example, if we choose the species type dimension of the Iris data and sum over all other dimensions, the result is a one-dimensional array with three entries, each of which gives the number of flowers of each type.
  • 90. Data Cube Example. Consider a data set that records the sales of products at a number of company stores at various dates. This data can be represented as a 3-dimensional array. There are 3 two-dimensional aggregates (3 choose 2), 3 one-dimensional aggregates, and 1 zero-dimensional aggregate (the overall total).
  • 91. Data Cube Example (continued). The table on this slide shows one of the two-dimensional aggregates, along with two of the one-dimensional aggregates and the overall total.
  • 92. OLAP Operations: Slicing and Dicing. Slicing is selecting a group of cells from the entire multidimensional array by specifying a specific value for one or more dimensions. Dicing involves selecting a subset of cells by specifying a range of attribute values; this is equivalent to defining a subarray from the complete array. In practice, both operations can also be accompanied by aggregation over some dimensions.
  • 93. OLAP Operations: Roll-up and Drill-down. Attribute values often have a hierarchical structure. Each date is associated with a year, month, and week. A location is associated with a continent, country, state (province, etc.), and city. Products can be divided into various categories, such as clothing, electronics, and furniture. Note that these categories often nest and form a tree or lattice: a year contains months, which contain days; a country contains states, which contain cities.
  • 94. OLAP Operations: Roll-up and Drill-down. This hierarchical structure gives rise to the roll-up and drill-down operations. For sales data, we can aggregate (roll up) the sales across all the dates in a month. Conversely, given a view of the data where the time dimension is broken into months, we could split the monthly sales totals (drill down) into daily sales totals. Likewise, we can drill down or roll up on the location or product ID attributes.
  • 95. Data Compression. String compression: there are extensive theories and well-tuned algorithms; compression is typically lossless, but only limited manipulation is possible without expansion. Audio/video compression: typically lossy, with progressive refinement; sometimes small fragments of the signal can be reconstructed without reconstructing the whole. Time sequences are not audio: they are typically short and vary slowly with time.
  • 96. Dimensionality Reduction: Wavelet Transformation (e.g., Haar-2, Daubechies-4). The discrete wavelet transform (DWT) is a linear signal processing technique for multi-resolution analysis. Compressed approximation: store only a small fraction of the strongest wavelet coefficients. It is similar to the discrete Fourier transform (DFT), but gives better lossy compression and is localized in space. Method: the length L must be an integer power of 2 (pad with 0s when necessary); each transform uses two functions, smoothing and difference, applied to pairs of data, resulting in two sets of data of length L/2; the two functions are applied recursively until the desired length is reached.
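A tiny sketch of one level of the Haar transform, written from the description above (smoothing and difference applied to pairs, each yielding L/2 values):

```python
import numpy as np

def haar_step(signal):
    a, b = signal[0::2], signal[1::2]
    smooth = (a + b) / np.sqrt(2)    # low-frequency approximation
    detail = (a - b) / np.sqrt(2)    # high-frequency differences
    return smooth, detail

x = np.array([2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0])  # length must be a power of 2
smooth, detail = haar_step(x)
# For compression: recurse on `smooth` down to the desired length, then store
# only the strongest coefficients.
```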
  • 97. Concept Hierarchy Generation for Categorical Data. Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts: street < city < state < country. Specification of a hierarchy for a set of values by explicit data grouping: {Urbana, Champaign, Chicago} < Illinois. Specification of only a partial set of attributes: e.g., only street < city, not the others. Automatic generation of hierarchies (or attribute levels) by analyzing the number of distinct values: e.g., for the set of attributes {street, city, state, country}.
  • 98. Automatic Concept Hierarchy Generation. Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set: the attribute with the most distinct values is placed at the lowest level of the hierarchy. There are exceptions, e.g., weekday, month, quarter, year. Example: country, 15 distinct values; province, 365 distinct values; city, 3,567 distinct values; street, 674,339 distinct values. A sketch follows.
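A sketch of this heuristic in pandas; the tiny location table is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "country":  ["IT", "IT", "IT", "US", "US", "US"],
    "province": ["MI", "MI", "TO", "CA", "CA", "NY"],
    "city":     ["Milan", "Milan", "Turin", "LA", "SF", "NYC"],
    "street":   ["Via A", "Via B", "Via C", "1st St", "2nd St", "3rd St"],
})

order = df.nunique().sort_values(ascending=False).index  # most distinct values first
print(" < ".join(order))  # street < city < province < country (lowest level first)
```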
  • 99. Summary. Data exploration and preparation (or preprocessing) is a big issue for both data warehousing and data mining. Descriptive data summarization is needed for quality data preprocessing. Data preparation includes data cleaning and data integration, data reduction and feature selection, and discretization. A lot of methods have been developed, but data preprocessing is still an active area of research.