2. WHAT IS CLASSIFICATION &
PREDICTION?
❖ There are two forms of data analysis that can be used to extract
models describing important classes or to predict future data trends. These two
forms are as follows −
➢ Classification
➢ Prediction
❖ Classification models predict categorical class labels, while prediction models
predict continuous-valued functions. For example, we can build a
classification model to categorize bank loan applications as either safe or
risky, or a prediction model to predict the expenditure in dollars of potential
customers on computer equipment given their income and occupation.
April 6, 2024
2 Data Mining: Concepts and Techniques
3. WHAT IS CLASSIFICATION?
❖ Following are examples of cases where the data analysis task is
Classification −
➢ A bank loan officer wants to analyze the data in order to know which
customers (loan applicants) are risky and which are safe.
➢ A marketing manager at a company needs to analyze a customer with a
given profile to determine whether that customer will buy a new computer.
❖ In both of the above examples, a model or classifier is constructed to
predict the categorical labels. These labels are risky or safe for the loan
application data and yes or no for the marketing data.
4. WHAT IS PREDICTION?
❖ Following are examples of cases where the data analysis task is
Prediction −
❖ Suppose the marketing manager needs to predict how much a given
customer will spend during a sale at his company. In this example we
are asked to predict a numeric value. Therefore the data analysis
task is an example of numeric prediction. In this case, a model or a
predictor is constructed that predicts a continuous-valued function, or
ordered value.
❖ Note − Regression analysis is a statistical methodology that is most
often used for numeric prediction.
6. HOW DOES CLASSIFICATION
WORK?
❖ With the help of the bank loan application that we have discussed above, let us
understand the working of classification. The Data Classification process includes two
steps −
➢ Building the Classifier or Model
➢ Using Classifier for Classification
7. Classification—A Two-Step
Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class,
as determined by the class label attribute.
The set of tuples used for model construction is the training set.
The model is represented as classification rules, decision trees,
or mathematical formulae
Model usage: for classifying future or unknown objects
Estimate accuracy of the model
The known label of test sample is compared with the classified
result from the model
Accuracy rate is the percentage of test set samples that are
correctly classified by the model
Test set is independent of training set, otherwise over-fitting will
occur
If the accuracy is acceptable, use the model to classify data
tuples whose class labels are not known
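The two steps above can be sketched in code. The following is a minimal illustration, not the book's method: it uses a toy loan dataset and a deliberately trivial majority-class "model" just to show the construction/usage workflow; all names are hypothetical.

```python
# Step 1: build a model from labeled training tuples.
# Step 2: estimate its accuracy on an independent test set.

def build_classifier(training_set):
    """Model construction: here, a trivial majority-class classifier."""
    labels = [label for _, label in training_set]
    majority = max(set(labels), key=labels.count)
    return lambda features: majority  # always predicts the majority class

def accuracy(classifier, test_set):
    """Model usage: fraction of test tuples classified correctly."""
    correct = sum(1 for features, label in test_set
                  if classifier(features) == label)
    return correct / len(test_set)

training_set = [({"income": "high"}, "safe"), ({"income": "low"}, "risky"),
                ({"income": "high"}, "safe")]
test_set = [({"income": "high"}, "safe"), ({"income": "low"}, "risky")]

model = build_classifier(training_set)
print(accuracy(model, test_set))  # 0.5: the majority class misses "risky"
```

Because the test set is disjoint from the training set, the accuracy estimate is not inflated by over-fitting.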
9. Using the Model in Prediction
10. ISSUES REGARDING
CLASSIFICATION & PREDICTION
❖ The major issue is preparing the data for Classification and
Prediction. Preparing the data involves the following activities −
➢ Data Cleaning − Data cleaning involves removing the noise and
treating missing values. The noise is removed by applying
smoothing techniques, and the problem of missing values is solved
by replacing a missing value with the most commonly occurring value
for that attribute.
➢ Relevance Analysis − The database may also contain irrelevant
attributes. Correlation analysis is used to know whether any two
given attributes are related.
11. ISSUES REGARDING
CLASSIFICATION & PREDICTION
➢ Data Transformation and Reduction − The data can be transformed by any of
the following methods.
■ Normalization − The data is transformed using normalization.
Normalization involves scaling all values for a given attribute in order to
make them fall within a small specified range. Normalization is used
when neural networks or methods involving distance measurements
are used in the learning step.
■ Generalization − The data can also be transformed by generalizing it to
a higher-level concept. For this purpose we can use concept hierarchies.
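Min-max scaling is a common way to realize the normalization described above; the sketch below maps an attribute's values into [0, 1] (the attribute name is hypothetical).

```python
# Min-max normalization: rescale each value of an attribute so that
# all values fall within a small specified range, here [0, 1].

def min_max_normalize(values, new_min=0.0, new_max=1.0):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min
            for v in values]

incomes = [20_000, 50_000, 80_000]
print(min_max_normalize(incomes))  # [0.0, 0.5, 1.0]
```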
12. COMPARISON OF
CLASSIFICATION AND
PREDICTION METHODS
❖ Here are the criteria for comparing the methods of Classification and Prediction −
➢ Accuracy − The accuracy of a classifier refers to its ability to predict the
class label correctly; the accuracy of a predictor refers to how well a given
predictor can guess the value of the predicted attribute for new data.
➢ Speed − This refers to the computational cost in generating and using the
classifier or predictor.
➢ Robustness − It refers to the ability of the classifier or predictor to make correct
predictions from given noisy data.
➢ Scalability − Scalability refers to the ability to construct the classifier or
predictor efficiently given a large amount of data.
➢ Interpretability − It refers to the extent to which the classifier or predictor
is understood.
14. Classification by Decision Tree
Induction
Decision tree
A flow-chart-like tree structure
Internal node denotes a test on an attribute
Branch represents an outcome of the test
Leaf nodes represent class labels or class distribution
Decision tree generation consists of two phases
Tree construction
At start, all the training examples are at the root
Partition examples recursively based on selected attributes
Tree pruning
Identify and remove branches that reflect noise or outliers
Use of decision tree: Classifying an unknown sample
Test the attribute values of the sample against the decision
tree
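Classifying an unknown sample amounts to descending from the root, testing one attribute per internal node. A minimal sketch, assuming a hypothetical nested-dict encoding of the tree (internal nodes name an attribute; leaves hold class labels):

```python
# A decision tree for buys_computer, matching the training data below:
# internal nodes test an attribute, branches are test outcomes,
# leaves are class labels.
tree = {"attr": "age",
        "branches": {"<=30": {"attr": "student",
                              "branches": {"no": "no", "yes": "yes"}},
                     "31…40": "yes",
                     ">40": {"attr": "credit_rating",
                             "branches": {"fair": "yes", "excellent": "no"}}}}

def classify(tree, sample):
    node = tree
    while isinstance(node, dict):                     # descend to a leaf
        node = node["branches"][sample[node["attr"]]]
    return node                                       # leaf = class label

sample = {"age": "<=30", "student": "yes", "credit_rating": "fair"}
print(classify(tree, sample))  # yes
```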
15. Training Dataset
age income student credit_rating buys_computer
<=30 high no fair no
<=30 high no excellent no
31…40 high no fair yes
>40 medium no fair yes
>40 low yes fair yes
>40 low yes excellent no
31…40 low yes excellent yes
<=30 medium no fair no
<=30 low yes fair yes
>40 medium yes fair yes
<=30 medium yes excellent yes
31…40 medium no excellent yes
31…40 high yes fair yes
>40 medium no excellent no
16. Output: A Decision Tree for “buys_computer”
17. Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down recursive divide-and-
conquer manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are
discretized in advance)
Examples are partitioned recursively based on selected
attributes
Test attributes are selected on the basis of a heuristic or
statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning –
majority voting is employed for classifying the leaf
There are no samples left
18. Attribute Selection Measure
Information gain (ID3/C4.5)
All attributes are assumed to be categorical
Can be modified for continuous-valued attributes
Gini index (IBM IntelligentMiner)
All attributes are assumed continuous-valued
Assume there exist several possible split values for
each attribute
May need other tools, such as clustering, to get the
possible split values
Can be modified for categorical attributes
19. Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Assume there are two classes, P and N
Let the set of examples S contain p elements of class P
and n elements of class N
The amount of information needed to decide if an
arbitrary example in S belongs to P or N is defined as

I(p, n) = −(p / (p + n)) log2(p / (p + n)) − (n / (p + n)) log2(n / (p + n))
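The quantity I(p, n) can be computed directly; a minimal sketch (terms with a zero count contribute nothing, by the usual 0·log 0 = 0 convention):

```python
from math import log2

def info(p, n):
    """Expected information I(p, n) for a set with p examples of class P
    and n examples of class N."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:                       # skip zero counts (0 * log2 0 = 0)
            prob = count / total
            result -= prob * log2(prob)
    return result

print(f"{info(9, 5):.3f}")  # 0.940
```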
20. Information Gain in Decision
Tree Induction
Assume that using attribute A a set S will be partitioned
into sets {S1, S2, …, Sv}
If Si contains pi examples of P and ni examples of N,
the entropy, or the expected information needed to
classify objects in all subtrees Si, is

E(A) = Σ (i = 1 … v) ((pi + ni) / (p + n)) · I(pi, ni)

The encoding information that would be gained by
branching on A is

Gain(A) = I(p, n) − E(A)
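The two formulas above combine into a short sketch; here `partitions` is assumed to be the list of (pi, ni) count pairs, one per subset Si induced by attribute A.

```python
from math import log2

def info(p, n):
    """I(p, n): expected information for p positive, n negative examples."""
    total = p + n
    return -sum((c / total) * log2(c / total) for c in (p, n) if c)

def expected_info(partitions):
    """E(A): weighted average of I(pi, ni) over the subsets Si."""
    total = sum(p + n for p, n in partitions)
    return sum((p + n) / total * info(p, n) for p, n in partitions)

def gain(p, n, partitions):
    """Gain(A) = I(p, n) - E(A)."""
    return info(p, n) - expected_info(partitions)

# age partitions the 14 training tuples into (2,3), (4,0), (3,2):
print(f"{gain(9, 5, [(2, 3), (4, 0), (3, 2)]):.3f}")
```

With unrounded intermediates this prints 0.247; the slides' 0.246 comes from rounding I and E to three decimals first.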
21. Attribute Selection by Information
Gain Computation
Class P: buys_computer = "yes"
Class N: buys_computer = "no"
I(p, n) = I(9, 5) = 0.940
Compute the entropy for age:

age    pi  ni  I(pi, ni)
<=30   2   3   0.971
31…40  4   0   0
>40    3   2   0.971

E(age) = (5/14) I(2, 3) + (4/14) I(4, 0) + (5/14) I(3, 2) = 0.694
Hence
Gain(age) = I(p, n) − E(age) = 0.246
Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
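These gains can be recomputed from the 14-tuple training set; a minimal sketch (exact decimals differ slightly from the slide's values, which round intermediate results):

```python
from math import log2
from collections import Counter

# The training set from the earlier slide: (age, income, student,
# credit_rating, buys_computer).
data = [
    ("<=30", "high", "no", "fair", "no"),
    ("<=30", "high", "no", "excellent", "no"),
    ("31…40", "high", "no", "fair", "yes"),
    (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),
    (">40", "low", "yes", "excellent", "no"),
    ("31…40", "low", "yes", "excellent", "yes"),
    ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),
    (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"),
    ("31…40", "medium", "no", "excellent", "yes"),
    ("31…40", "high", "yes", "fair", "yes"),
    (">40", "medium", "no", "excellent", "no"),
]
attrs = ["age", "income", "student", "credit_rating"]

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total)
                for c in Counter(labels).values())

def gain(col):
    """Information gain of splitting on column `col`."""
    labels = [row[-1] for row in data]
    e = 0.0
    for value in set(row[col] for row in data):
        subset = [row[-1] for row in data if row[col] == value]
        e += len(subset) / len(data) * entropy(subset)
    return entropy(labels) - e

for i, name in enumerate(attrs):
    print(f"Gain({name}) = {gain(i):.3f}")
# age has the highest gain, so it becomes the root test of the tree
```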
22. Gini Index (IBM IntelligentMiner)
If a data set T contains examples from n classes, the gini index
gini(T) is defined as

gini(T) = 1 − Σ (j = 1 … n) pj²

where pj is the relative frequency of class j in T.
If a data set T is split into two subsets T1 and T2 with sizes N1
and N2 respectively, the gini index of the split data is defined as

gini_split(T) = (N1/N) gini(T1) + (N2/N) gini(T2)

The attribute that provides the smallest gini_split(T) is chosen to
split the node (need to enumerate all possible split points
for each attribute).
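Both formulas above translate directly; a minimal sketch for the two-way split case:

```python
# gini(T) = 1 - sum of squared class frequencies;
# gini_split(T) = size-weighted average of the two subsets' gini values.

def gini(labels):
    total = len(labels)
    return 1.0 - sum((labels.count(c) / total) ** 2 for c in set(labels))

def gini_split(left, right):
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

labels = ["yes"] * 9 + ["no"] * 5
print(round(gini(labels), 3))                         # 0.459
print(round(gini_split(["yes"] * 9, ["no"] * 5), 3))  # 0.0 (a pure split)
```

A split separating the classes perfectly drives gini_split to 0, which is why the attribute with the smallest gini_split is preferred.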
23. Extracting Classification Rules from Trees
Represent the knowledge in the form of IF-THEN rules
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction
The leaf node holds the class prediction
Rules are easier for humans to understand
Example
IF age = “<=30” AND student = “no” THEN buys_computer = “no”
IF age = “<=30” AND student = “yes” THEN buys_computer = “yes”
IF age = “31…40” THEN buys_computer = “yes”
IF age = “>40” AND credit_rating = “excellent” THEN buys_computer =
“yes”
IF age = “<=30” AND credit_rating = “fair” THEN buys_computer = “no”
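The extracted rules can be applied directly as code; a sketch that encodes the slide's five rules as a chain of tests, in the same order (the function name is hypothetical):

```python
# One test per IF-THEN rule; the first rule whose conjunction of
# attribute-value pairs matches determines the class prediction.

def buys_computer(age, student=None, credit_rating=None):
    if age == "<=30" and student == "no":
        return "no"
    if age == "<=30" and student == "yes":
        return "yes"
    if age == "31…40":
        return "yes"
    if age == ">40" and credit_rating == "excellent":
        return "yes"
    if age == "<=30" and credit_rating == "fair":
        return "no"
    return None  # no rule fires

print(buys_computer("31…40"))  # yes
```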
24. Avoid Overfitting in Classification
The generated tree may overfit the training data
Too many branches, some may reflect anomalies due
to noise or outliers
This results in poor accuracy for unseen samples
Two approaches to avoid overfitting
Prepruning: Halt tree construction early—do not split a
node if this would result in the goodness measure
falling below a threshold
Difficult to choose an appropriate threshold
Postpruning: Remove branches from a “fully grown”
tree—get a sequence of progressively pruned trees
Use a set of data different from the training data to decide
which is the “best pruned tree”
25. Approaches to Determine the Final Tree Size
Separate training (2/3) and testing (1/3) sets
Use cross validation, e.g., 10-fold cross validation
Use all the data for training
but apply a statistical test (e.g., chi-square) to estimate
whether expanding or pruning a node may improve the
entire distribution
Use minimum description length (MDL) principle:
halting growth of the tree when the encoding is
minimized
26. Enhancements to Basic Decision
Tree Induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that
partition the continuous attribute value into a discrete set
of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones that are
sparsely represented
This reduces fragmentation, repetition, and replication
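The first missing-value strategy above (assign the most common value of the attribute) is easy to sketch; the helper name and record layout here are hypothetical:

```python
from collections import Counter

def impute_most_common(rows, attr):
    """Replace missing (None) values of `attr` with the attribute's
    most common observed value."""
    observed = [r[attr] for r in rows if r.get(attr) is not None]
    most_common = Counter(observed).most_common(1)[0][0]
    return [dict(r, **{attr: r.get(attr) or most_common}) for r in rows]

rows = [{"income": "high"}, {"income": None}, {"income": "high"},
        {"income": "low"}]
print(impute_most_common(rows, "income")[1])  # {'income': 'high'}
```

The second strategy would instead spread a fractional weight over each possible value in proportion to its observed probability.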
28. Why decision tree induction in data
mining?
relatively faster learning speed (than other
classification methods)
convertible to simple and easy to understand
classification rules
can use SQL queries for accessing databases
comparable classification accuracy with other
methods