Why Data Preprocessing?
• Data in the real world is dirty
Incomplete data may come from attribute values that were never recorded,
e.g., occupation=“ ”
Noisy data may come from faulty data collection instruments
Major Tasks in Data Preprocessing
• Data cleaning
Fill in missing values, smooth noisy data, identify or
remove outliers, and resolve inconsistencies
• Data integration
Integration of multiple databases, data cubes, or files
• Data transformation
Normalization and aggregation
• Data reduction
Obtains reduced representation in volume but
produces the same or similar analytical results
• Data discretization
Part of data reduction but with particular importance,
especially for numerical data
Descriptive Data Summarization
• These techniques can be used to identify the typical
properties of the data and highlight which data values
should be treated as noise or outliers
• Measures of central tendency include the mean,
median, mode, and midrange
Graphic Displays of Basic Descriptive Data Summaries
• Aside from the bar charts, pie charts, and line
graphs used in most statistical or graphical data
presentation software packages, other popular
displays include:
• Quantile plots
• q-q plots
• scatter plots
• loess curves.
• Data cleaning (or data cleansing) routines
attempt to
• fill in missing values
• identify outliers
• correct inconsistencies
1. Ignore the tuple
2. Fill in the missing value manually
3. Use a global constant to fill in the missing value
4. Use the attribute mean to fill in the missing value
5. Use the attribute mean for all samples belonging
to the same class as the given tuple
6. Use the most probable value to fill in the missing value
Method 6, however, is a popular strategy.
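As a minimal sketch, method 4 (filling with the attribute mean) can be written in a few lines of Python; the function name and sample data are illustrative, not from the source:

```python
from statistics import mean

def fill_missing_with_mean(values):
    """Replace None entries with the mean of the observed values (method 4)."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

# Hypothetical income attribute with two missing entries
incomes = [30, None, 50, 40, None]
print(fill_missing_with_mean(incomes))  # [30, 40.0, 50, 40, 40.0]
```

Method 5 would follow the same pattern but compute the mean only over tuples of the same class, and method 6 would replace the mean with a model-based prediction.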
• The sorted values are distributed into a number
of “buckets,” or bins
ex: Bin 1 = 4, 8, 15
• Smoothing by bin means - each value is replaced by
the mean of its bin
Bin 1 = 9, 9, 9
• Smoothing by bin boundaries - each value is replaced
by the closest bin boundary (the bin minimum or maximum)
Bin 1 = 4, 4, 15
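Both smoothing variants can be sketched directly from the definitions above; the equal-frequency bin size and the sample data extending the Bin 1 example are assumptions for illustration:

```python
def smooth_by_bin_means(sorted_values, bin_size):
    """Partition sorted values into equal-frequency bins and replace
    each value by its bin's mean."""
    out = []
    for i in range(0, len(sorted_values), bin_size):
        bin_ = sorted_values[i:i + bin_size]
        m = sum(bin_) / len(bin_)
        out.extend([m] * len(bin_))
    return out

def smooth_by_bin_boundaries(sorted_values, bin_size):
    """Replace each value by the closest bin boundary (bin min or max)."""
    out = []
    for i in range(0, len(sorted_values), bin_size):
        bin_ = sorted_values[i:i + bin_size]
        lo, hi = bin_[0], bin_[-1]
        out.extend([lo if v - lo <= hi - v else hi for v in bin_])
    return out

data = [4, 8, 15, 21, 21, 24, 25, 28, 34]  # already sorted
print(smooth_by_bin_means(data, 3))       # [9.0, 9.0, 9.0, 22.0, 22.0, 22.0, 29.0, 29.0, 29.0]
print(smooth_by_bin_boundaries(data, 3))  # [4, 4, 15, 21, 21, 24, 25, 25, 34]
```

Note how the first bin (4, 8, 15) reproduces the slide's example: its mean is 9, and under boundary smoothing 8 moves to the nearer boundary 4.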
• Data can be smoothed by fitting the data to a
function, such as with regression
• Linear regression involves finding the “best” line
to fit two attributes
• so that one attribute can be used to predict the other
• Multiple linear regression is an extension of linear
regression involving more than two attributes
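A minimal least-squares sketch of the two-attribute case, assuming the usual closed-form slope and intercept formulas (the function name and toy data are illustrative):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x, so attribute x can predict y."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x,y divided by variance of x
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept
    return a, b

a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(a, b)  # 0.0 2.0  (the "best" line here is y = 2x)
```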
• Outliers may be detected by clustering, where
similar values are organized into groups, or “clusters”
• The values that fall outside of the set of
clusters may be considered outliers
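One simple way to sketch this idea for a single sorted attribute is gap-based clustering: values separated by more than a chosen gap start a new cluster, and values in very small clusters fall outside the main groups. The gap threshold, minimum cluster size, and function names are assumptions, not a method prescribed by the slides:

```python
def gap_clusters(sorted_values, max_gap):
    """Group sorted values into clusters; a value farther than max_gap
    from the previous value starts a new cluster."""
    clusters = [[sorted_values[0]]]
    for v in sorted_values[1:]:
        if v - clusters[-1][-1] <= max_gap:
            clusters[-1].append(v)
        else:
            clusters.append([v])
    return clusters

def outliers(sorted_values, max_gap, min_size=2):
    """Values in clusters smaller than min_size fall outside the main
    set of clusters and may be considered outliers."""
    return [v for c in gap_clusters(sorted_values, max_gap)
            for v in c if len(c) < min_size]

print(outliers([1, 2, 2, 3, 9, 10, 11, 25], max_gap=2))  # [25]
```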
• The data should also be examined regarding known rules:
• unique rules - each value of the given attribute must be
different from all other values of that attribute
• consecutive rules - there can be no missing values between
the lowest and highest values of the attribute
• null rules - a null rule specifies the use of blanks,
question marks, or special characters to indicate the null condition
• Data integration combines data from
multiple sources into a coherent data store.
• Data integration techniques:
• schema integration
• correlation analysis
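Correlation analysis helps detect redundancy: if two numerical attributes are strongly correlated, one may be derivable from the other. A minimal sketch of the Pearson correlation coefficient (function name and data are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient in [-1, 1]; values near +1 or -1
    suggest one attribute may be redundant given the other."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two perfectly correlated attributes from hypothetical merged sources
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```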
• In data transformation, the data are transformed
or consolidated into forms appropriate for mining.
• Data transformation can involve the following:
• Smoothing - to remove noise from the data.
• Aggregation - summary or aggregation operations
are applied to the data.
• Ex : the daily sales data may be aggregated so as
to compute monthly and annual total amounts.
• Generalization - low-level or “primitive” (raw)
data are replaced by higher-level concepts
through the use of concept hierarchies.
• Normalization - the attribute data are scaled
so as to fall within a small specified range,
such as −1.0 to 1.0, or 0.0 to 1.0.
• Attribute construction - new attributes are
constructed and added from the given set of attributes.
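The normalization step above can be sketched as min-max normalization, which linearly rescales an attribute into a specified range; the function name and sample values are illustrative:

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Scale values linearly so they fall within [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min
            for v in values]

# Hypothetical attribute values scaled into the range 0.0 to 1.0
print(min_max_normalize([20, 30, 40, 60]))  # [0.0, 0.25, 0.5, 1.0]
```

Passing `new_min=-1.0` would instead scale the values into the range −1.0 to 1.0 mentioned above.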
• Data reduction techniques can be applied to obtain a
reduced representation of the data set that is much
smaller in volume.
1. Data cube aggregation
where aggregation operations are applied to the data
in the construction of a data cube.
2. Attribute subset selection
where irrelevant, weakly relevant, or redundant
attributes or dimensions may be detected and removed.
3. Dimensionality reduction
where encoding mechanisms are used to
reduce the data set size.
4. Numerosity reduction
where the data are replaced or estimated by
alternative, smaller data representations, such as
parametric models or nonparametric methods
(e.g., clustering, sampling, histograms).
• Data discretization techniques can be used to
reduce the number of values for a given
continuous attribute by dividing the range of the
attribute into intervals.
• Histogram Analysis
• Entropy-Based Discretization
• Interval Merging by χ² Analysis
• Cluster Analysis
• Discretization by Intuitive Partitioning
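The simplest of these ideas, dividing the attribute's range into intervals, can be sketched as equal-width binning; the function name, bin count, and data are assumptions for illustration:

```python
def equal_width_bins(values, k):
    """Divide the attribute's range into k equal-width intervals and
    map each value to an interval index from 0 to k-1."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # Cap at k-1 so the maximum value falls in the last interval
    return [min(int((v - lo) / width), k - 1) for v in values]

print(equal_width_bins([1, 7, 12, 18, 25, 30], k=3))  # [0, 0, 1, 1, 2, 2]
```

Histogram analysis applies this same partitioning recursively, while entropy-based and χ²-based methods choose interval boundaries from class information rather than from the raw range.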