ML | OVERVIEW OF DATA CLEANING
Dr. Sheetal Dhande-Dandge
Professor, CSE
SIPNA COET
Data cleaning is one of the important parts
of machine learning. It plays a significant
part in building a model. It surely isn’t the
fanciest part of machine learning and at the
same time, there aren’t any hidden tricks or
secrets to uncover.
However, the success or failure of a project
relies on proper data cleaning. Professional data
scientists usually invest a very large portion of
their time in this step because of the belief
that “Better data beats fancier algorithms”.
If we have a well-cleaned dataset, there is a good
chance that we can achieve strong results even with
simple algorithms, which can be especially beneficial
in terms of computation when the dataset is large.
Obviously, different types of data will require
different types of cleaning. However, this
systematic approach can always serve as a good
starting point.
STEPS INVOLVED IN DATA CLEANING:
Data cleaning is a crucial step in the
machine learning (ML) pipeline, as it
involves identifying and handling any
missing, duplicate, or irrelevant data.
The goal of data cleaning is to ensure
that the data is accurate, consistent,
and free of errors, as incorrect or
inconsistent data can negatively
impact the performance of the ML
model.
 The main steps involved in data cleaning are (a brief pandas sketch follows the list):
 Handling missing data: This step involves
identifying and handling missing data, which can be
done by removing the missing data, imputing
missing values with a suitable estimate, or using
techniques such as multiple imputation.
 Removing duplicates: This step involves
identifying and removing any duplicate data, which
can be done using deduplication techniques or
dedicated deduplication algorithms.
 Handling outliers: This step involves identifying
and handling any outliers in the data, which can be
done by removing the outliers or transforming the
data to reduce their impact.
 Correcting errors: This step involves identifying
and correcting any errors in the data, which can be
done by using techniques such as data validation or
data correction algorithms.
 It is important to note that data cleaning is an iterative
process, as it may be necessary to repeat some of the
steps several times to ensure that the data is accurate
and consistent.
 The choice of data cleaning techniques will depend
on the specific requirements of the project, including the
size and complexity of the data and the desired
outcome.
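A minimal pandas sketch of the four steps above, using a small toy dataset. The column names and values here are illustrative assumptions, not data from these slides:

import pandas as pd

# Hypothetical toy dataset: "age" has a missing value and an
# implausible entry, "city" has inconsistent capitalization
df = pd.DataFrame({
    "age": [25.0, None, 40.0, 40.0, 130.0],
    "city": ["Pune", "pune", "Mumbai", "Mumbai", "Nagpur"],
})

# 1. Handling missing data: impute with the median
df["age"] = df["age"].fillna(df["age"].median())

# 2. Removing duplicates: drop exact duplicate rows
df = df.drop_duplicates()

# 3. Handling outliers: clip values outside 1.5 * IQR
q1, q3 = df["age"].quantile([0.25, 0.75])
iqr = q3 - q1
df["age"] = df["age"].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# 4. Correcting errors: normalize inconsistent capitalization
df["city"] = df["city"].str.strip().str.title()

print(df)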
REMOVAL OF UNWANTED OBSERVATIONS
 This includes deleting duplicate/redundant or
irrelevant values from your dataset. Duplicate
observations most frequently arise during data
collection, and irrelevant observations are those that
don’t actually fit the specific problem that you’re
trying to solve.
 Redundant observations reduce efficiency to a
great extent: because the data repeats, it may add
weight towards either the correct or the incorrect
side, thereby producing unreliable results.
 Irrelevant observations are any type of data that is of
no use to us and can be removed directly.
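A brief sketch of both removals; the file name, the "region" column, and its value are hypothetical:

import pandas as pd

df = pd.read_csv("data.csv")  # hypothetical input file

# Inspect how many exact duplicate rows exist before deleting them
print(df.duplicated().sum(), "duplicate rows found")
df = df.drop_duplicates()

# Drop irrelevant observations, e.g. rows outside the problem's
# scope (hypothetical column and value)
df = df[df["region"] != "out_of_scope"]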
FIXING STRUCTURAL ERRORS
 The errors that arise during measurement, transfer of
data, or other similar situations are called structural
errors. Structural errors include typos in the names of
features, the same attribute appearing under different
names, mislabeled classes (i.e. separate classes that should
really be the same), and inconsistent capitalization.
 For example, the model will treat “America” and “america”
as different classes or values, though they represent the
same value; or red, yellow, and red-yellow as different
classes or attributes, though one class can be included in
the other two classes. These are some structural errors
that make our model inefficient and give poor-quality
results.
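A minimal sketch of fixing such errors with pandas; the column and values are hypothetical:

import pandas as pd

df = pd.DataFrame({"country": ["America", "america", " AMERICA ", "USA", "India"]})

# Normalize capitalization and stray whitespace so "America",
# "america" and " AMERICA " collapse into a single class
df["country"] = df["country"].str.strip().str.title()

# Merge classes that should really be the same with an explicit map
df["country"] = df["country"].replace({"Usa": "America"})

print(df["country"].unique())  # ['America' 'India']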
MANAGING UNWANTED OUTLIERS
 Outliers can cause problems with certain
types of models. For example, linear
regression models are less robust to outliers
than decision tree models. Generally, we
should not remove outliers unless we have a
legitimate reason to remove them.
Sometimes, removing them improves
performance, sometimes not. So, one must
have a good reason to remove the outlier,
such as suspicious measurements that are
unlikely to be part of real data.
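A small sketch of flagging suspicious values with the interquartile range; the column name and data are hypothetical, and the 1.5 multiplier is a common rule of thumb rather than a rule from these slides:

import pandas as pd

df = pd.DataFrame({"price": [10, 12, 11, 13, 9, 500]})  # 500 looks suspicious

# Flag values outside 1.5 * IQR instead of silently deleting them
q1, q3 = df["price"].quantile([0.25, 0.75])
iqr = q3 - q1
is_inlier = df["price"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

print(df[~is_inlier])     # inspect suspected outliers first
df_clean = df[is_inlier]  # drop them only with a legitimate reason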
HANDLING MISSING DATA
Missing data is a deceptively tricky issue in
machine learning. We cannot just ignore or
remove the missing observations. They must
be handled carefully, as they can be an
indication of something important. The two
most common ways to deal with missing
data are:
• Dropping observations with missing values.
  • The fact that the value was missing may be informative in itself.
  • Plus, in the real world, you often need to make predictions on new data even if some of the features are missing!
• Imputing the missing values from past observations.
  • Again, “missingness” is almost always informative in itself, and you should tell your algorithm if a value was missing.
  • Even if you build a model to impute your values, you’re not adding any real information. You’re just reinforcing the patterns already provided by other features.
Missing data is like missing a puzzle piece. If
you drop it, that’s like pretending the puzzle
slot isn’t there. If you impute it, that’s like
trying to squeeze in a piece from
somewhere else in the puzzle.
So, missing data is almost always informative
and an indication of something important,
and we must make our algorithm aware of
missing data by flagging it. By using this
technique of flagging and filling, you are
essentially allowing the algorithm to
estimate the optimal constant for
missingness, instead of just filling it in with
the mean.
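A minimal sketch of this flag-and-fill technique in pandas; the column and values are hypothetical:

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25.0, np.nan, 40.0, np.nan, 31.0]})

# Flag missingness first, so the algorithm is told a value was missing
df["age_was_missing"] = df["age"].isna().astype(int)

# Then fill; with the flag present, a model can learn its own optimal
# constant for missing rows rather than trusting the mean blindly
df["age"] = df["age"].fillna(df["age"].mean())

print(df)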
SOME DATA CLEANSING TOOLS
• OpenRefine
• Trifacta Wrangler
• TIBCO Clarity
• Cloudingo
• IBM InfoSphere QualityStage
 Data cleaning is an important step in the machine learning process because it
can have a significant impact on the quality and performance of a model.
Data cleaning involves identifying and correcting or removing errors and
inconsistencies in the data.
 Here is a simple example of data cleaning in Python:

import pandas as pd

# Load the data
df = pd.read_csv("data.csv")

# Drop rows with missing values
df = df.dropna()

# Remove duplicate rows
df = df.drop_duplicates()

# Remove unnecessary columns
df = df.drop(columns=["col1", "col2"])

# Normalize numerical columns (z-score standardization)
df["col3"] = (df["col3"] - df["col3"].mean()) / df["col3"].std()

# One-hot encode categorical columns (pd.get_dummies returns new
# columns, so assign back to the whole DataFrame)
df = pd.get_dummies(df, columns=["col4"])

# Save the cleaned data
df.to_csv("cleaned_data.csv", index=False)
 The code above does not have any explicit output statements, so it
will not produce any output when it is run. Instead, it modifies the data
stored in the df DataFrame and saves it to a new CSV file.
 If you want to see the cleaned data, you can print the df DataFrame or
read the saved CSV file. For example, add the following line at the end of
the code:

print(df)
ADVANTAGES OF DATA CLEANING IN MACHINE LEARNING:
Improved model performance: Data
cleaning helps improve the performance of
the ML model by removing errors,
inconsistencies, and irrelevant data, which
can help the model to better learn from the
data.
Increased accuracy: Data cleaning helps
ensure that the data is accurate, consistent,
and free of errors, which can help improve
the accuracy of the ML model.
Better representation of the data: Data
cleaning allows the data to be transformed
into a format that better represents the
underlying relationships and patterns in the
data, making it easier for the ML model to
learn from the data.
DISADVANTAGES OF DATA CLEANING IN MACHINE LEARNING:
1. Time-consuming: Data cleaning can be a time-consuming task, especially for large and complex datasets.
2. Error-prone: Data cleaning can be error-prone, as it involves transforming and cleaning the data, which can result in the loss of important information or the introduction of new errors.
3. Limited understanding of the data: Data cleaning can lead to a limited understanding of the data, as the transformed data may not be representative of the underlying relationships and patterns in the data.
CONCLUSION:
 So, we have discussed four different steps in
data cleaning to make the data more reliable
and to produce good results. After properly
completing the data cleaning steps, we’ll have
a robust dataset that avoids many of the most
common pitfalls. This step should not be
rushed, as it proves very beneficial in the
later stages of the process.