Prof. Pier Luca Lanzi
Evaluation of Classification Models
Data Mining andText Mining (UIC 583 @ Politecnico di Milano)
Prof. Pier Luca Lanzi
Evaluation & Credibility Issues
• What measure should we use? (accuracy might not be enough!)
• How reliable are the predicted results?
• How much should we believe in what was learned?
§Error on the training data is not a good indicator of
performance on future data
§The classifier was computed from the very same training data,
so any estimate based on that data will be optimistic.
§In addition, new data will probably not be exactly
the same as the training data!
2
Prof. Pier Luca Lanzi
How to evaluate the performance of a model?
How to obtain reliable estimates?
How to compare the relative
performance among competing models?
Given two equally performing models,
which one should we prefer?
3
Prof. Pier Luca Lanzi
Model Evaluation
• Metrics for Performance Evaluation
§How to evaluate the performance of a model?
• Methods for Performance Evaluation
§How to obtain reliable estimates?
• Methods for Model Comparison
§How to compare the performance of competing models?
• Model Selection
§Which model should we prefer?
4
Prof. Pier Luca Lanzi
How to evaluate the performance of a model?
(Metrics for Performance Evaluation)
5
Prof. Pier Luca Lanzi
Metrics for Performance Evaluation
• Focus on the predictive capability of a model
• Confusion Matrix:
6
PREDICTED CLASS
TRUE
CLASS
Yes No
Yes a (TP) b (FN)
No c (FP) d (TN)
a:TP (true positive)
b: FN (false negative)
c: FP (false positive)
d:TN (true negative)
Prof. Pier Luca Lanzi
Metrics for Performance Evaluation
• Most widely-used metric: accuracy (see the formula after the confusion matrix below)
7
PREDICTED CLASS
ACTUAL
CLASS
Yes No
Yes a (TP) b (FN)
No c (FP) d (TN)
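The accuracy formula itself was an image in the original slide; it is presumably the standard definition in terms of the confusion matrix entries above:

\text{accuracy} = \frac{a + d}{a + b + c + d} = \frac{TP + TN}{TP + TN + FP + FN}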
Prof. Pier Luca Lanzi
Sometimes Accuracy is not Enough
• Consider a 2-class problem
§Number of Class 0 examples = 9990
§Number of Class 1 examples = 10
• If model predicts everything to be class 0,
accuracy is 9990/10000 = 99.9 %
• Accuracy is misleading because model does not
detect any class 1 example
8
Prof. Pier Luca Lanzi
Cost Matrix 9
C(x): Cost of misclassifying examples of type x
PREDICTED CLASS
ACTUAL
CLASS
Yes No
Yes C(TP) C(FN)
No C(FP) C(TN)
Prof. Pier Luca Lanzi
Computing Cost of Classification 10
Cost
Matrix
PREDICTED CLASS
ACTUAL
CLASS
C(.) + -
+ -1 100
- 1 0
Model M1 PREDICTED CLASS
ACTUAL
CLASS
+ -
+ 150 40
- 60 250
Model M2 PREDICTED CLASS
ACTUAL
CLASS
+ -
+ 250 45
- 5 200
Model M1: Accuracy = 80%, Cost = 3910
Model M2: Accuracy = 90%, Cost = 4255
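A minimal sketch of how these numbers can be reproduced; the NumPy code and the matrix layout are illustrative assumptions, only the counts and the cost matrix come from the slide:

import numpy as np

# cost matrix C[actual][predicted], classes ordered (+, -) as on the slide
cost = np.array([[-1, 100],
                 [ 1,   0]])

# confusion matrices N[actual][predicted] for the two models
M1 = np.array([[150,  40],
               [ 60, 250]])
M2 = np.array([[250,  45],
               [  5, 200]])

for name, cm in (("M1", M1), ("M2", M2)):
    accuracy = np.trace(cm) / cm.sum()   # (TP + TN) / total
    total_cost = (cm * cost).sum()       # count of each outcome times its cost
    print(name, "accuracy =", accuracy, "cost =", total_cost)
# M1 accuracy = 0.8 cost = 3910
# M2 accuracy = 0.9 cost = 4255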
Prof. Pier Luca Lanzi
Cost-Based Model Evaluation
• Predictive accuracy in this case is defined as
• True Positives, true negatives
§ Safe and risky loans predicted as safe and risky respectively
• False Positive Errors
§ We accept a risky loan which we predicted was safe
§ We are likely not to get the money back
(let’s say on average 30000 euros)
• False Negative Errors
§ We don’t give a safe loan since we predicted it was risky
§ We will lose the interest money
(let's say on average 10000 euros)
11
Prof. Pier Luca Lanzi
Cost-Based Model Evaluation
• Two models, similar performance but different types of errors
• First Model
§1576 false positives and 1224 false negatives
§Total cost is 59525443
• Second Model
§1407 false positives and 1093 false negatives
§Total cost is 53147717 (more than 6 millions saved)
• What if we can change the way our model makes mistakes?
§1093 false positives and 1407 false negatives
§Total cost becomes 46852283 (more than 12 millions saved)
12
Prof. Pier Luca Lanzi
Costs can be used to evaluate
an existing classification models
Or can be used by the algorithm
to guide the search for the model
Some algorithms can use the cost matrix to
build the model (e.g., decision trees)
Prof. Pier Luca Lanzi
Precision & Recall
• Alternatives to accuracy, introduced in the area of information
retrieval and search engines
• Precision
§Focuses on the percentage of examples classified as positive
that are actually positive
§In the information retrieval context, precision represents the percentage
of documents shown as results that are actually good (relevant)
• Recall
§Focuses on the percentage of actual positive examples
that are classified as positive
§In the information retrieval context, recall represents the
percentage of good documents shown with respect to the
existing ones.
18
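The precision and recall formulas were images in the original deck; in terms of the confusion matrix they are presumably the standard definitions:

\text{precision} = \frac{TP}{TP + FP} \qquad \text{recall} = \frac{TP}{TP + FN}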
Prof. Pier Luca Lanzi
Cost-Sensitive Measures
• Precision is biased towards C(TP) & C(FP),
• Recall is biased towards C(TP) & C(FN)
• F1-measure is biased towards all except C(TN),
it is high when both precision and recall are reasonably high
19
The higher the precision, the
lower the FPs
The higher the recall, the
lower the FNs
The higher the F1,
the lower the FPs & FNs
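For reference, the F1-measure mentioned above is the harmonic mean of precision and recall (the formula was an image in the original deck):

F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}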
Prof. Pier Luca Lanzi
How to obtain reliable estimates?
(Methods for Performance Evaluation)
20
Prof. Pier Luca Lanzi
Classification Step 1: Split the Data
Between Train and Test Sets
21
[Diagram: the data is split into a training set and a testing set]
Prof. Pier Luca Lanzi
Classification Step 2: Compute a
Model using the Data in the Training
Set
22
[Diagram: the training set is fed to the model builder, which produces a model; the testing set is held out]
Prof. Pier Luca Lanzi
Classification Step 3: Evaluate the
Model using the Test Set
23
[Diagram: the model built from the training set is applied to the testing set; its predictions (+/-) are compared against the true labels to evaluate the model]
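A minimal scikit-learn sketch of the three steps above; the dataset and the decision-tree classifier are illustrative choices, not prescribed by the slides:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Step 1: split the data between train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=42)

# Step 2: compute a model using only the data in the training set
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Step 3: evaluate the model using the test set
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))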
Prof. Pier Luca Lanzi
Note on Parameter Tuning
• It is important that the test data is not used
in any way to create the classifier
• Some learning schemes operate in two stages:
§Stage 1: builds the basic structure
§Stage 2: optimizes parameter settings
• The test data can’t be used for parameter tuning!
• Proper procedure uses three sets: training data, validation data,
and test data. Validation data is used to optimize parameters
24
Prof. Pier Luca Lanzi
Making the most of the data
• Once evaluation is complete, all the data can be used to build the
final classifier
• Generally, the larger the training data the better the classifier
(but returns diminish)
• The larger the test data the more accurate the error estimate
25
Prof. Pier Luca Lanzi
The Full Process: Training, Validation,
and Testing
26
[Diagram: the data is split into training, validation, and testing sets; a model is built on the training set and evaluated on the validation set (e.g., to tune parameters), and the resulting model receives its final evaluation on the testing set]
Prof. Pier Luca Lanzi
Methods for Performance Evaluation
• How to obtain a reliable estimate of performance?
• Performance of a model may depend on
other factors besides the learning algorithm:
§Class distribution
§Cost of misclassification
§Size of training and test sets
27
Prof. Pier Luca Lanzi
Methods of Estimation
• Holdout
§Reserve ½ for training and ½ for testing
§Reserve 2/3 for training and 1/3 for testing
• Random subsampling
§Repeated holdout
• Cross validation
§Partition data into k disjoint subsets
§k-fold: train on k-1 partitions, test on the remaining one
§Leave-one-out: k=n
• Stratified sampling
§oversampling vs undersampling
• Bootstrap
§Sampling with replacement
28
Prof. Pier Luca Lanzi
Holdout Evaluation & “Small”
Datasets
• Reserves a certain amount for testing and uses the remainder for
training, typically,
§Reserve ½ for training and ½ for testing
§Reserve 2/3 for training and 1/3 for testing
• For small or “unbalanced” datasets, samples might not be
representative
• For instance, it might generate training or testing datasets with few or
no instances of some classes
• Stratified sampling
§Makes sure that each class is represented with approximately equal
proportions in both subsets
29
Prof. Pier Luca Lanzi
Repeated Holdout
• Holdout estimate can be made more reliable by repeating the
process with different subsamples
• In each iteration, a certain proportion is randomly selected for
training (possibly with stratification)
• The error rates on the different iterations are averaged to yield
an overall error rate
• Still not optimal, since the different test sets overlap
30
Prof. Pier Luca Lanzi
Cross-Validation
• First step
§Data is split into k subsets of equal size
• Second step
§Each subset in turn is used for testing and the remainder for training
• This is called k-fold cross-validation and avoids overlapping test sets
• Often the subsets are stratified before cross-validation is performed
• The error estimates are averaged to yield an overall error estimate
31
Prof. Pier Luca Lanzi
Ten-fold Crossvalidation 32
[Diagram: the data is partitioned into ten folds numbered 1 to 10; in each of the ten iterations a different fold is used for testing while the remaining nine are used for training, yielding the performance estimates p1, p2, ..., p10]
The final performance is computed as the average of p1, ..., p10
Prof. Pier Luca Lanzi
Cross-Validation
• The standard method for evaluation is stratified ten-fold cross-validation
• Why ten? Extensive experiments have shown that this is the best
choice to get an accurate estimate
• Stratification reduces the estimate’s variance
• Even better: repeated stratified cross-validation
• E.g. ten-fold cross-validation is repeated ten times and results are
averaged (reduces the variance)
• Other approaches appear to be robust, e.g., 5x2 crossvalidation
33
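A sketch of (repeated) stratified ten-fold cross-validation with scikit-learn; the classifier and the dataset are placeholders, not the ones used in the course notebooks:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, StratifiedKFold, RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
model = DecisionTreeClassifier(random_state=0)

# stratified ten-fold cross-validation
cv10 = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# ten times repeated stratified ten-fold cross-validation (reduces the variance)
rcv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
rscores = cross_val_score(model, X, y, cv=rcv)
print("10x10-fold CV accuracy: %.3f +/- %.3f" % (rscores.mean(), rscores.std()))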
Prof. Pier Luca Lanzi
Leave-One-Out Cross-Validation
• It is a particular form of cross-validation
§Set number of folds to number of training instances
§I.e., for n training instances, build classifier n times
• Makes best use of the data
• Involves no random subsampling
• Computationally expensive
34
Prof. Pier Luca Lanzi
Leave-One-Out and Stratification
• Disadvantage of Leave-One-Out is that:
stratification is not possible
• It guarantees a non-stratified sample because
there is only one instance in the test set!
• Extreme example: random dataset split equally into two classes
§Best inducer predicts majority class
§50% accuracy on fresh data
§Leave-One-Out-CV estimate is 100% error!
35
Prof. Pier Luca Lanzi
Bootstrapping
• Cross-validation uses sampling without replacement
§The same instance, once selected, can not be selected again
for a particular training/test set
• Bootstrap uses sampling with replacement
§Sample a dataset of n instances n times with
replacement to form a new dataset of n instances
§Use this data as the training set
§Use the instances from the original dataset that don’t occur in
the new training set for testing
36
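A small sketch of a single bootstrap round, assuming a generic dataset of n indexed instances:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
indices = np.arange(n)

# sample n instances with replacement to form the training set
train_idx = rng.choice(indices, size=n, replace=True)
# instances that were never drawn form the test set
test_idx = np.setdiff1d(indices, train_idx)

print("fraction of distinct instances in training:", len(np.unique(train_idx)) / n)  # about 0.632
print("fraction of instances left for testing:", len(test_idx) / n)                  # about 0.368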
Prof. Pier Luca Lanzi
The 0.632 Bootstrap
• An instance has a probability of 1–1/n of not being picked at each draw
• Thus its probability of ending up in the test data is (1 – 1/n)^n ≈ e^(-1) ≈ 0.368
• This means the training data will contain approximately 63.2% of the
(distinct) instances
37
Prof. Pier Luca Lanzi
Estimating Error using Bootstrap
• The error estimate on the test data will be very pessimistic, since
training was on just around 63% of the instances
• Therefore, we compute the overall error ε by combining the errors
on the training and test data as ε = 0.632 · e_test + 0.368 · e_train
• The training error gets less weight than the error on the test data
• Repeat process several times with different replacement samples;
average the results
38
Prof. Pier Luca Lanzi
How to compare the relative
performance among competing models?
(Model Comparison)
39
Prof. Pier Luca Lanzi
How to Compare the Performance of
Two Models?
• Two models,
§Model M1: accuracy = 85%, tested on 30 instances
§Model M2: accuracy = 75%, tested on 5000 instances
• Can we say M1 is better than M2?
• How much confidence can we place on accuracy of M1 and M2?
• Can the difference in performance measure be explained as a
result of random fluctuations in the test set?
40
Prof. Pier Luca Lanzi
Comparing Data Mining Schemes
• Which of two learning schemes performs better?
• The answer is obviously domain-dependent
• The typical approach is to compare 10-fold cross-validation (CV)
estimates using the Student’s t-test
• Student’s t-test tells whether the means of two samples are
significantly different
• Use a paired t-test when the individual samples are paired, that is,
when the two schemes are evaluated on the same cross-validation folds;
use an unpaired t-test otherwise
41
Prof. Pier Luca Lanzi
Confidence Interval for Accuracy
• Prediction can be regarded as a Bernoulli trial with two possible
outcomes, correct or wrong
• A collection of Bernoulli trials has a Binomial distribution
• Given,
§The number of correct test predictions S
§The number of test instances N,
§The accuracy f= S/N
• Can we predict the true accuracy of the model from accuracy f?
42
Prof. Pier Luca Lanzi
Mean and Variance
• Assume that the accuracy f comes from a Bernoulli distribution
with mean p and variance p(1-p)
• When N is large enough, f follows a normal distribution with
mean p and variance p(1-p)/N
• The c% confidence interval [-z ≤ X ≤ z] for a random variable X of
mean 0 is defined by Pr[-z ≤ X ≤ z] = c
• With a symmetric distribution, we have Pr[-z ≤ X ≤ z] = 1 – 2·Pr[X ≥ z]
43
Prof. Pier Luca Lanzi
Confidence Interval for Accuracy
• For instance, if X has mean 0 and standard deviation 1,
the confidence interval for c% is defined by the z values,
• Thus, for the 90% confidence interval we have Pr[-1.65 ≤ X ≤ 1.65] = 90%,
since Pr[X ≥ 1.65] = 5%
• To compute the confidence interval for the
expected accuracy f we need to,
§ Normalize it to a distribution with mean 0 and standard deviation 1
§ Compute the confidence interval
§ Remap it to the actual distribution
44
Pr[X ≥z] z
0.1% 3.09
0.5% 2.58
1% 2.33
5% 1.65
10% 1.28
20% 0.84
25% 0.69
40% 0.25
Prof. Pier Luca Lanzi
Confidence Interval for Accuracy
• First, transform f using,
• Compute the confidence interval,
• Solve it for p,
45
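The three formulas on this slide were images in the original deck; a reconstruction consistent with the worked examples on the next slide is:

Z = \frac{f - p}{\sqrt{p(1-p)/N}}

\Pr\left[ -z \le \frac{f - p}{\sqrt{p(1-p)/N}} \le z \right] = c

p = \frac{f + \frac{z^2}{2N} \pm z \sqrt{\frac{f}{N} - \frac{f^2}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}}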
Prof. Pier Luca Lanzi
Examples
• When f = 75%, N=1000, c=80% (z=1.28), the confidence
interval for the actual performance is [0.732, 0.767]
• When f = 75%, N=100, c=80% (z=1.28), the confidence interval
for the actual performance is [0.691, 0.801]
• Note that the assumption about f being normally distributed only
applies for large values of N (at least 30)
• When f = 75%, N=10, c=80% (z=1.28), the confidence interval
for the actual performance is [0.549, 0.881]
46
Prof. Pier Luca Lanzi
William Gosset (1876-1937)
• Obtained a post as a chemist in the Guinness
brewery in Dublin in 1899.
• Invented the t-test to handle small samples for
quality control in brewing
• Wrote under the name "Student”
• Unpaired and paired two-sample t-test
• Unequal/equal sample size
• Unequal/equal sample variance
47
Prof. Pier Luca Lanzi
Comparing the Performance of
Two Models (Paired)
• Fix a significance level
• If a difference is significant at the c% level,
there is a (100-c)% chance that the true means differ
• Divide the significance level by two because the test is two-tailed
• Look up the value for z that corresponds to c/2
• Compute the value of t based on the observed performance
estimates for the schemes being compared
• If t is less than –z or greater than z then the difference is
statistically significant
48
Prof. Pier Luca Lanzi
“paired” when the estimates
are from the same datasets
“unpaired” when the estimates
are from different datasets
49
Prof. Pier Luca Lanzi
How do we know that the difference in
performance is not just due to chance?
We compute the odds of it!
Apply the t-test and compute the p-value
The p-value represents the probability
that the reported difference is due to chance
Prof. Pier Luca Lanzi
In repeated experiments at this sample size,
how often would you see such a difference
assuming that there is no difference between
the two groups?
Prof. Pier Luca Lanzi
run the python notebook dedicated
to classifier comparison
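A minimal sketch of what such a comparison could look like with SciPy; the per-fold accuracies below are illustrative placeholders, not the notebook's actual data:

from scipy import stats

# per-fold accuracies of two classifiers evaluated on the same CV folds
acc_a = [0.96, 0.93, 0.92, 0.95, 0.96, 0.94, 0.95, 0.94, 0.95, 0.98]
acc_b = [0.91, 0.90, 0.89, 0.93, 0.92, 0.90, 0.92, 0.91, 0.92, 0.94]

# paired t-test, since both classifiers were evaluated on the same folds
t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
print("t =", t_stat, "p-value =", p_value)
# a small p-value means the difference is unlikely to be due to chance alone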
Prof. Pier Luca Lanzi
Multiple Testing
• Say that you perform a statistical test with a 0.05 threshold, but
you repeat the test on twenty different observations.
• For example, you want to compare the performance of several
classification algorithms
• Assume that all of the observations are explainable by the null
hypothesis
• What is the chance that at least one of the observations will
receive a p-value less than 0.05?
46
Prof. Pier Luca Lanzi
Multiple Testing - Example
• Say that you perform a statistical test with a 0.05 threshold, but
you repeat the test on 20 different observations. Assuming that
all of the observations are explainable by the null hypothesis,
what is the chance that at least one of the observations will
receive a p-value less than 0.05?
• P(making a mistake) = 0.05
• P(not making a mistake) = 0.95
• P(not making any mistake) = 0.95^20 ≈ 0.358
• P(making at least one mistake) = 1 - 0.358 = 0.642
• There is a 64.2% chance of making at least one mistake.
47
Prof. Pier Luca Lanzi
Bonferroni Correction
• Assume that individual tests are independent.
• Divide the desired p-value threshold by the number of tests
performed.
• Example
§We now have, the threshold set to 0.05/20 = 0.0025.
§P(making a mistake) = 0.0025
§P(not making a mistake) = 0.9975
§P(not making any mistake) = 0.9975^20 ≈ 0.9512
§P(making at least one mistake) = 1 - 0.9512 = 0.0488
48
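The arithmetic of the last two slides as a short sketch:

alpha, m = 0.05, 20
print("no correction:", 1 - (1 - alpha) ** m)      # ~0.642, chance of at least one mistake
print("Bonferroni:", 1 - (1 - alpha / m) ** m)     # ~0.0488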
Prof. Pier Luca Lanzi
Non Parametric Tests
• Do not make any assumption about the distribution of the
variable in the population
• Mann-Whitney U Test
§Nonparametric equivalent of the independent t-test
• Wilcoxon matched-pairs signed rank test
§Used to compare two related groups
61
Prof. Pier Luca Lanzi
Mann-Whitney U Test
• To compute the Mann-Whitney U:
§Rank the scores in both
groups (together) from
highest to lowest.
§Sum the ranks of the
scores for each group.
§The sums of ranks of the two
groups are used to make
the statistical comparison.
Income Rank No Income Rank
25 12 27 10
32 5 19 17
36 3 16 20
40 1 33 4
22 14 30 7
37 2 17 19
20 16 21 15
18 18 23 13
31 6 26 11
29 8 28 9
Sum of ranks: 85 (Income), 125 (No Income)
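For reference, the same comparison can be run with SciPy (the two samples are the Income / No Income scores from the table above; the function used is SciPy's, not something introduced on the slides):

from scipy.stats import mannwhitneyu

income = [25, 32, 36, 40, 22, 37, 20, 18, 31, 29]
no_income = [27, 19, 16, 33, 30, 17, 21, 23, 26, 28]

# two-sided Mann-Whitney U test on the two independent samples
u_stat, p_value = mannwhitneyu(income, no_income, alternative="two-sided")
print("U =", u_stat, "p-value =", p_value)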
Prof. Pier Luca Lanzi
Wilcoxon Matched-pairs
Signed Rank test
• Step 1
§ Rank the data of both the groups in ascending order
§ If any values are equal average their ranks
• Step 2
§ Add up the ranks in group with smaller sample size
§ If the two groups are of the same size either one may be picked
§ T= sum of ranks in group with smaller sample size
• Step 3
§ Compare this sum with the critical ranges given in table
§ Look up the rows corresponding to the sample sizes of the two groups
§ A range will be shown for the 5% significance level
63
Prof. Pier Luca Lanzi
* 17, 18 & 19 are tied hence the ranks are averaged
Non-smokers (n=15) Heavy smokers (n=14)
Birth wt (Kg) Rank Birth wt (Kg) Rank
3.99 27 3.18 7
3.79 24 2.84 5
3.60* 18 2.90 6
3.73 22 3.27 11
3.21 8 3.85 26
3.60* 18 3.52 14
4.08 28 3.23 9
3.61 20 2.76 4
3.83 25 3.60* 18
3.31 12 3.75 23
4.13 29 3.59 16
3.26 10 3.63 21
3.54 15 2.38 2
3.51 13 2.34 1
2.71 3
Sum=272 Sum=163
Prof. Pier Luca Lanzi
Nonparametric tests (nominal data | ordinal data) vs. parametric tests (ordinal, interval, ratio data):
One group: Chi square goodness of fit | Wilcoxon signed rank test | One group t-test
Two unrelated groups: Chi square | Wilcoxon rank sum test, Mann-Whitney test | Student's t-test
Two related groups: McNemar's test | Wilcoxon signed rank test | Paired Student's t-test
K unrelated groups: Chi square test | Kruskal-Wallis one-way analysis of variance | ANOVA
K related groups: (none listed) | Friedman matched samples | ANOVA with repeated measurements
Prof. Pier Luca Lanzi
T-Test Using R
> j48 <- c(96,93.33,92.00,94.67,96.00,94.00,94.67,94.0,94.67,98)
> zeror <- c(33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33)
> t.test(j48,zeror)
Welch Two Sample t-test
data: j48 and zeror
t = 117.9102, df = 9, p-value = 1.153e-15
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
60.22594 62.58206
sample estimates:
mean of x mean of y
94.734 33.330
68
Prof. Pier Luca Lanzi
Probabilistic Classifiers
70
Prof. Pier Luca Lanzi
Probabilistic Classifiers &
Classification Thresholds
• Up to now we used logistic regression to predict class labels; however,
logistic regression actually returns a probability
• Given an example xi, its predicted class is the label with the largest
probability, which is equivalent to using a threshold of 0.5 to decide
which class to assign to an example
• However, we can use a different threshold and, for instance, label an
example as positive only when P(+1|x)>0.75
• This would label as positive only the cases for which we are more
confident that they should be labeled as positive.
71
Prof. Pier Luca Lanzi
How Does the Classification Threshold
Influence Precision and Recall?
• Suppose we use a threshold near one to classify positive examples
• Then, we will classify as positive only examples for which we are very
confident (this is a pessimistic classifier)
• Precision will be high
§In fact, we are unlikely to produce false positives
• Recall will be low
§In fact, we are likely to produce more false negatives
72
Prof. Pier Luca Lanzi
How Does the Classification Threshold
Influence Precision and Recall?
• Suppose we use a threshold near zero to classify positive examples
• Then, we will classify everything as positive
(this is an optimistic classifier)
• Precision will be low, as we are going to generate the maximum
number of false positives (everything is positive!)
• Recall will be high, since by classifying everything as positive we
produce no false negatives
73
Prof. Pier Luca Lanzi
We can use the threshold to
optimize our precision and recall
a higher threshold increases
precision and lowers recall
a lower threshold decreases precision
and increases recall
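A sketch of this tradeoff with scikit-learn; the dataset and the logistic-regression model are illustrative stand-ins for the loans notebook:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]          # P(positive | x)

for threshold in (0.25, 0.50, 0.75):
    y_hat = (proba >= threshold).astype(int)   # label as positive above the threshold
    print(threshold,
          "precision = %.3f" % precision_score(y_te, y_hat),
          "recall = %.3f" % recall_score(y_te, y_hat))
# raising the threshold should increase precision and decrease recall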
Prof. Pier Luca Lanzi
Run the notebook dedicated
to the precision-recall tradeoff
Prof. Pier Luca Lanzi
Precision-Recall Tradeoff Example
• In the notebook, a simple logistic regression model applied to the loans data
returns the first confusion matrix below, corresponding to a precision of 0.82 and
a recall of 0.99
• By increasing the classification threshold for positive (+1) examples to 0.75 we
obtain the second confusion matrix, with a precision of 0.86 and a recall of about 0.80
• Overall, we reduced the number of false positives (we accepted fewer risky
loans) at the price of more false negatives
76
Default threshold (0.5):
Classified -1 Classified +1
Labeled -1 1212 21910
Labeled +1 1057 98283
Threshold 0.75:
Classified -1 Classified +1
Labeled -1 10632 12490
Labeled +1 20106 79234
Prof. Pier Luca Lanzi
we can explore this tradeoff
using precision-recall curves
Prof. Pier Luca Lanzi
Precision-Recall Curves
• Plot precision as a function of
recall for varying threshold values
• The best classifier would be the
one that always has a precision
equal to one (but this never happens)
• More generally, classifiers will show curves of different shapes
• How to decide among several classifiers?
§Use the area under the curve (the nearer to one, the better)
§Use the F1 measure
78
Prof. Pier Luca Lanzi
Precision-recall curve for the loan dataset (run the python notebook for details)
Prof. Pier Luca Lanzi
Receiver Operating
Characteristic (ROC) Curves
Prof. Pier Luca Lanzi
ROC Curve
(Receiver Operating Characteristic)
• Developed in the 1950s for signal detection theory to analyze signals
• Plot the True Positive Rate (TPR=TP/(TP+FN))
against the False Positive Rate (FPR=FP/(TN+FP))
• Performance of each classifier represented
as a point on the ROC curve
• Changing the classification threshold, sample distribution or cost
matrix changes the location of the point
81
Prof. Pier Luca Lanzi
ROC Curve
• (FPR, TPR)
§(0,0): declare everything
to be negative class
§(1,1): declare everything
to be positive class
§(0,1): ideal
• Diagonal line:
§Random guessing
§Below diagonal line,
prediction is opposite
of the true class
82
Prof. Pier Luca Lanzi
ROC Curve 83
Prof. Pier Luca Lanzi
Using ROC for Model Comparison
• No model consistently outperforms the other
• M1 is better for small FPR
• M2 is better for large FPR
• Area Under the ROC curve
• Ideal, area = 1
• Random guess, area = 0.5
84
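A sketch of computing the ROC curve and its area with scikit-learn (again on an illustrative dataset and model):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = LogisticRegression(max_iter=5000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

fpr, tpr, thresholds = roc_curve(y_te, scores)    # one (FPR, TPR) point per threshold
print("area under the ROC curve:", roc_auc_score(y_te, scores))  # 1 = ideal, 0.5 = random guessing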
Prof. Pier Luca Lanzi
Lift Charts
• Lift is a measure of the effectiveness of a predictive model
calculated as the ratio between the results obtained with and
without the predictive model
• Cumulative gains and lift charts are visual aids for measuring
model performance
• Both charts consist of a lift curve and a baseline
• The greater the area between the lift curve and the baseline, the
better the model
89
Prof. Pier Luca Lanzi
Example: Direct Marketing
• Mass mailout of promotional offers to 1,000,000 households
• The proportion who normally respond is 0.1%, that is, 1000
responses
• A data mining tool can identify a subset of 100,000 households for which
the response rate is 0.4%, that is, 400 responses
• In marketing terminology, this increase in response rate is known
as the lift factor yielded by the model (here, a lift factor of 4)
• The same data mining tool may be able to identify 400,000
households for which the response rate is 0.2%, that is, 800
respondents, corresponding to a lift factor of 2
• The overall goal is to find subsets of test instances that have a
high proportion of true positives
90
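The lift factors in this example reduce to a ratio of response rates; a two-line check:

overall_rate = 1000 / 1000000                     # 0.1% response rate on the full mailout
print((400 / 100000) / overall_rate)              # 4.0, targeting the best 100,000 households
print((800 / 400000) / overall_rate)              # 2.0, targeting the best 400,000 households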
Prof. Pier Luca Lanzi
An Example from Direct Marketing 91
Prof. Pier Luca Lanzi
Which model should we prefer?
(Model selection)
92
Prof. Pier Luca Lanzi
Model Selection Criteria
• Model selection criteria attempt to find a good compromise
between:
§The complexity of a model
§Its prediction accuracy on the training data
• Reasoning: a good model is a simple model that achieves high
accuracy on the given data
• Also known as Occam’s Razor :
the best theory is the smallest one
that describes all the facts
93
William of Ockham, born in the village of Ockham in Surrey (England)
about 1285, was the most influential philosopher of the 14th century
and a controversial theologian.
Prof. Pier Luca Lanzi
The MDL Principle
• MDL stands for minimum description length
• The description length is defined as:
space required to describe a theory
+
space required to describe the theory’s mistakes
• In our case the theory is the classifier and the mistakes are the
errors on the training data
• We seek the classifier with the minimum description length
94
Prof. Pier Luca Lanzi
MDL and compression
• MDL principle relates to data compression
• The best theory is the one that compresses the data the most
• I.e. to compress a dataset we generate a model and
then store the model and its mistakes
• We need to compute
(a) size of the model, and
(b) space needed to encode the errors
• (b) easy: use the informational loss function
• (a) need a method to encode the model
95
Prof. Pier Luca Lanzi
Which Classification Algorithm?
• Among the several algorithms, which one is the “best”?
§Some algorithms have a lower computational complexity
§Different algorithms provide different representations
§Some algorithms allow the specification of prior knowledge
• If we are interested in the generalization performance,
are there any reasons to prefer one classifier over another?
• Can we expect any classification method to be superior or
inferior overall?
• According to the No Free Lunch Theorem,
the answer to all these questions is known
96
Prof. Pier Luca Lanzi
No Free Lunch Theorem
• If the goal is to obtain good generalization performance, there are no
context-independent or usage-independent reasons to favor one
classification method over another
• If one algorithm seems to outperform another in a certain situation, it is
a consequence of its fit to the particular problem, not the general
superiority of the algorithm
• When confronting a new problem, this theorem suggests that we
should focus on the aspects that matter most
§Prior information
§Data distribution
§Amount of training data
§Cost or reward
• The theorem also justifies skepticism regarding studies that
“demonstrate” the overall superiority of a certain algorithm
97
Prof. Pier Luca Lanzi
No Free Lunch Theorem
• "[A]ll algorithms that search for an extremum of a cost [objective]
function perform exactly the same, when averaged over all possible
cost functions.“ [1]
• “[T]he average performance of any pair of algorithms across all possible
problems is identical.” [2]
• Wolpert, D.H., Macready, W.G. (1995), No Free Lunch Theorems for
Search, Technical Report SFI-TR-95-02-010 (Santa Fe Institute).
• Wolpert, D.H., Macready, W.G. (1997), No Free Lunch Theorems for
Optimization, IEEE Transactions on Evolutionary Computation 1, 67.
98
More Related Content

What's hot

Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...
Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...
Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...Simplilearn
 
Machine Learning With Logistic Regression
Machine Learning  With Logistic RegressionMachine Learning  With Logistic Regression
Machine Learning With Logistic RegressionKnoldus Inc.
 
Machine Learning with Decision trees
Machine Learning with Decision treesMachine Learning with Decision trees
Machine Learning with Decision treesKnoldus Inc.
 
2.2 decision tree
2.2 decision tree2.2 decision tree
2.2 decision treeKrish_ver2
 
Curse of dimensionality
Curse of dimensionalityCurse of dimensionality
Curse of dimensionalityNikhil Sharma
 
Anomaly detection Workshop slides
Anomaly detection Workshop slidesAnomaly detection Workshop slides
Anomaly detection Workshop slidesQuantUniversity
 
Intro to Feature Selection
Intro to Feature SelectionIntro to Feature Selection
Intro to Feature Selectionchenhm
 
Feature selection
Feature selectionFeature selection
Feature selectionDong Guo
 
Instance based learning
Instance based learningInstance based learning
Instance based learningSlideshare
 
3.7 outlier analysis
3.7 outlier analysis3.7 outlier analysis
3.7 outlier analysisKrish_ver2
 
Introduction to Linear Discriminant Analysis
Introduction to Linear Discriminant AnalysisIntroduction to Linear Discriminant Analysis
Introduction to Linear Discriminant AnalysisJaclyn Kokx
 
Feature selection concepts and methods
Feature selection concepts and methodsFeature selection concepts and methods
Feature selection concepts and methodsReza Ramezani
 
Model selection and cross validation techniques
Model selection and cross validation techniquesModel selection and cross validation techniques
Model selection and cross validation techniquesVenkata Reddy Konasani
 
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...Edureka!
 
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...Simplilearn
 
Unsupervised learning clustering
Unsupervised learning clusteringUnsupervised learning clustering
Unsupervised learning clusteringArshad Farhad
 
Cross validation
Cross validationCross validation
Cross validationRidhaAfrawe
 
Logistic regression
Logistic regressionLogistic regression
Logistic regressionYashwantGahlot1
 

What's hot (20)

Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...
Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...
Support Vector Machine - How Support Vector Machine works | SVM in Machine Le...
 
Machine Learning With Logistic Regression
Machine Learning  With Logistic RegressionMachine Learning  With Logistic Regression
Machine Learning With Logistic Regression
 
Data Preprocessing
Data PreprocessingData Preprocessing
Data Preprocessing
 
Machine Learning with Decision trees
Machine Learning with Decision treesMachine Learning with Decision trees
Machine Learning with Decision trees
 
K means Clustering Algorithm
K means Clustering AlgorithmK means Clustering Algorithm
K means Clustering Algorithm
 
2.2 decision tree
2.2 decision tree2.2 decision tree
2.2 decision tree
 
Curse of dimensionality
Curse of dimensionalityCurse of dimensionality
Curse of dimensionality
 
Anomaly detection Workshop slides
Anomaly detection Workshop slidesAnomaly detection Workshop slides
Anomaly detection Workshop slides
 
Intro to Feature Selection
Intro to Feature SelectionIntro to Feature Selection
Intro to Feature Selection
 
Feature selection
Feature selectionFeature selection
Feature selection
 
Instance based learning
Instance based learningInstance based learning
Instance based learning
 
3.7 outlier analysis
3.7 outlier analysis3.7 outlier analysis
3.7 outlier analysis
 
Introduction to Linear Discriminant Analysis
Introduction to Linear Discriminant AnalysisIntroduction to Linear Discriminant Analysis
Introduction to Linear Discriminant Analysis
 
Feature selection concepts and methods
Feature selection concepts and methodsFeature selection concepts and methods
Feature selection concepts and methods
 
Model selection and cross validation techniques
Model selection and cross validation techniquesModel selection and cross validation techniques
Model selection and cross validation techniques
 
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
 
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...
K Means Clustering Algorithm | K Means Clustering Example | Machine Learning ...
 
Unsupervised learning clustering
Unsupervised learning clusteringUnsupervised learning clustering
Unsupervised learning clustering
 
Cross validation
Cross validationCross validation
Cross validation
 
Logistic regression
Logistic regressionLogistic regression
Logistic regression
 

Similar to Prof. Pier Luci Lanzi's Guide to Model Evaluation

DMTM 2015 - 14 Evaluation of Classification Models
DMTM 2015 - 14 Evaluation of Classification ModelsDMTM 2015 - 14 Evaluation of Classification Models
DMTM 2015 - 14 Evaluation of Classification ModelsPier Luca Lanzi
 
DMTM Lecture 03 Regression
DMTM Lecture 03 RegressionDMTM Lecture 03 Regression
DMTM Lecture 03 RegressionPier Luca Lanzi
 
Week 11 Model Evalaution Model Evaluation
Week 11 Model Evalaution Model EvaluationWeek 11 Model Evalaution Model Evaluation
Week 11 Model Evalaution Model Evaluationkhairulhuda242
 
Cross validation.pptx
Cross validation.pptxCross validation.pptx
Cross validation.pptxYouKnowwho28
 
6 Evaluating Predictive Performance and ensemble.pptx
6 Evaluating Predictive Performance and ensemble.pptx6 Evaluating Predictive Performance and ensemble.pptx
6 Evaluating Predictive Performance and ensemble.pptxmohammedalherwi1
 
Bridging the Gap: Machine Learning for Ubiquitous Computing -- Evaluation
Bridging the Gap: Machine Learning for Ubiquitous Computing -- EvaluationBridging the Gap: Machine Learning for Ubiquitous Computing -- Evaluation
Bridging the Gap: Machine Learning for Ubiquitous Computing -- EvaluationThomas Ploetz
 
Replicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender SystemsReplicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender SystemsAlejandro Bellogin
 
Statistical Learning and Model Selection (1).pptx
Statistical Learning and Model Selection (1).pptxStatistical Learning and Model Selection (1).pptx
Statistical Learning and Model Selection (1).pptxrajalakshmi5921
 
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...Maninda Edirisooriya
 
IME 672 - Classifier Evaluation I.pptx
IME 672 - Classifier Evaluation I.pptxIME 672 - Classifier Evaluation I.pptx
IME 672 - Classifier Evaluation I.pptxTemp762476
 
Lecture 3: Basic Concepts of Machine Learning - Induction & Evaluation
Lecture 3: Basic Concepts of Machine Learning - Induction & EvaluationLecture 3: Basic Concepts of Machine Learning - Induction & Evaluation
Lecture 3: Basic Concepts of Machine Learning - Induction & EvaluationMarina Santini
 
DMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesDMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesPier Luca Lanzi
 
eMOOCs2015 Does peer grading work?
eMOOCs2015 Does peer grading work?eMOOCs2015 Does peer grading work?
eMOOCs2015 Does peer grading work?RĂ©mi Bachelet
 

Similar to Prof. Pier Luci Lanzi's Guide to Model Evaluation (20)

DMTM 2015 - 14 Evaluation of Classification Models
DMTM 2015 - 14 Evaluation of Classification ModelsDMTM 2015 - 14 Evaluation of Classification Models
DMTM 2015 - 14 Evaluation of Classification Models
 
DMTM Lecture 03 Regression
DMTM Lecture 03 RegressionDMTM Lecture 03 Regression
DMTM Lecture 03 Regression
 
Week 11 Model Evalaution Model Evaluation
Week 11 Model Evalaution Model EvaluationWeek 11 Model Evalaution Model Evaluation
Week 11 Model Evalaution Model Evaluation
 
evaluation and credibility-Part 1
evaluation and credibility-Part 1evaluation and credibility-Part 1
evaluation and credibility-Part 1
 
crossvalidation.pptx
crossvalidation.pptxcrossvalidation.pptx
crossvalidation.pptx
 
Cross validation.pptx
Cross validation.pptxCross validation.pptx
Cross validation.pptx
 
ai4.ppt
ai4.pptai4.ppt
ai4.ppt
 
ai4.ppt
ai4.pptai4.ppt
ai4.ppt
 
6 Evaluating Predictive Performance and ensemble.pptx
6 Evaluating Predictive Performance and ensemble.pptx6 Evaluating Predictive Performance and ensemble.pptx
6 Evaluating Predictive Performance and ensemble.pptx
 
Bridging the Gap: Machine Learning for Ubiquitous Computing -- Evaluation
Bridging the Gap: Machine Learning for Ubiquitous Computing -- EvaluationBridging the Gap: Machine Learning for Ubiquitous Computing -- Evaluation
Bridging the Gap: Machine Learning for Ubiquitous Computing -- Evaluation
 
ai4.ppt
ai4.pptai4.ppt
ai4.ppt
 
ai4.ppt
ai4.pptai4.ppt
ai4.ppt
 
Replicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender SystemsReplicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender Systems
 
Statistical Learning and Model Selection (1).pptx
Statistical Learning and Model Selection (1).pptxStatistical Learning and Model Selection (1).pptx
Statistical Learning and Model Selection (1).pptx
 
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...
Lecture 10 - Model Testing and Evaluation, a lecture in subject module Statis...
 
evaluation and credibility-Part 2
evaluation and credibility-Part 2evaluation and credibility-Part 2
evaluation and credibility-Part 2
 
IME 672 - Classifier Evaluation I.pptx
IME 672 - Classifier Evaluation I.pptxIME 672 - Classifier Evaluation I.pptx
IME 672 - Classifier Evaluation I.pptx
 
Lecture 3: Basic Concepts of Machine Learning - Induction & Evaluation
Lecture 3: Basic Concepts of Machine Learning - Induction & EvaluationLecture 3: Basic Concepts of Machine Learning - Induction & Evaluation
Lecture 3: Basic Concepts of Machine Learning - Induction & Evaluation
 
DMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification EnsemblesDMTM 2015 - 15 Classification Ensembles
DMTM 2015 - 15 Classification Ensembles
 
eMOOCs2015 Does peer grading work?
eMOOCs2015 Does peer grading work?eMOOCs2015 Does peer grading work?
eMOOCs2015 Does peer grading work?
 

More from Pier Luca Lanzi

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i VideogiochiPier Luca Lanzi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiPier Luca Lanzi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomePier Luca Lanzi
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaPier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Pier Luca Lanzi
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationPier Luca Lanzi
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationPier Luca Lanzi
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningPier Luca Lanzi
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningPier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesPier Luca Lanzi
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationPier Luca Lanzi
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringPier Luca Lanzi
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringPier Luca Lanzi
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringPier Luca Lanzi
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringPier Luca Lanzi
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesPier Luca Lanzi
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsPier Luca Lanzi
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesPier Luca Lanzi
 

More from Pier Luca Lanzi (20)

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparation
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data exploration
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph mining
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text mining
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluation
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clustering
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 Clustering
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensembles
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethods
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rules
 

Recently uploaded

Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Quarter 4 Peace-education.pptx Catch Up Friday
Quarter 4 Peace-education.pptx Catch Up FridayQuarter 4 Peace-education.pptx Catch Up Friday
Quarter 4 Peace-education.pptx Catch Up FridayMakMakNepo
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxLigayaBacuel1
 
AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.arsicmarija21
 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxRaymartEstabillo3
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........LeaCamillePacle
 

Recently uploaded (20)

Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Quarter 4 Peace-education.pptx Catch Up Friday
Quarter 4 Peace-education.pptx Catch Up FridayQuarter 4 Peace-education.pptx Catch Up Friday
Quarter 4 Peace-education.pptx Catch Up Friday
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptx
 
AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.
 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........
 
Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"
 

Prof. Pier Luci Lanzi's Guide to Model Evaluation

  • 1. Prof. Pier Luca Lanzi Evaluation of Classification Models Data Mining andText Mining (UIC 583 @ Politecnico di Milano)
  • 2. Prof. Pier Luca Lanzi Evaluation & Credidibility Issues • What measure should we use? (accuracy might not be enough!) • How reliable are the predicted results? • How much should we believe in what was learned? §Error on the training data is not a good indicator of performance on future data §The classifier was computed from the very same training data, any estimate based on that data will be optimistic. §In addition, new data will probably not be exactly the same as the training data! 2
  • 3. Prof. Pier Luca Lanzi How to evaluate the performance of a model? How to obtain reliable estimates? How to compare the relative performance among competing models? Given two equally performing models, which one should we prefer? 3
  • 4. Prof. Pier Luca Lanzi Model Evaluation • Metrics for Performance Evaluation §How to evaluate the performance of a model? • Methods for Performance Evaluation §How to obtain reliable estimates? • Methods for Model Comparison §How to compare the performance of competing models? • Model Selection §Which model should we prefer? 4
  • 5. Prof. Pier Luca Lanzi How to evaluate the performance of a model? (Metrics for Performance Evaluation) 5
  • 6. Prof. Pier Luca Lanzi Metrics for Performance Evaluation • Focus on the predictive capability of a model • Confusion Matrix: 6 PREDICTED CLASS TRUE CLASS Yes No Yes a (TP) b (FN) No c (FP) d (TN) a:TP (true positive) b: FN (false negative) c: FP (false positive) d:TN (true negative)
  • 7. Prof. Pier Luca Lanzi Metrics for Performance Evaluation • Most widely-used metric: 7 PREDICTED CLASS ACTUAL CLASS Yes No Yes a (TP) b (FN) No c (FP) d (TN)
  • 8. Prof. Pier Luca Lanzi Sometimes Accuracy is not Enough • Consider a 2-class problem §Number of Class 0 examples = 9990 §Number of Class 1 examples = 10 • If model predicts everything to be class 0, accuracy is 9990/10000 = 99.9 % • Accuracy is misleading because model does not detect any class 1 example 8
  • 9. Prof. Pier Luca Lanzi Cost Matrix 9 C(x): Cost of misclassifying examples of type x PREDICTED CLASS ACTUAL CLASS Yes No Yes C(TP) C(FN) No C(FP) C(TN)
  • 10. Prof. Pier Luca Lanzi Computing Cost of Classification 10 Cost Matrix PREDICTED CLASS ACTUAL CLASS C(.) + - + -1 100 - 1 0 Model M1 PREDICTED CLASS ACTUAL CLASS + - + 150 40 - 60 250 Model M2 PREDICTED CLASS ACTUAL CLASS + - + 250 45 - 5 200 Accuracy = 80% Cost = 3910 Accuracy = 90% Cost = 4255
  • 11. Prof. Pier Luca Lanzi Cost-Based Model Evaluation • Predictive accuracy in this case is defined as • True Positives, true negatives § Safe and risky loans predicted as safe and risky respectively • False Positive Errors § We accept a risky loan which we predicted was safe § We are likely not to get the money back (let’s say on average 30000 euros) • False Negative Errors § We don’t give a safe loan since we predicted it was risky § We will lose the interest money (let’s say on average 10000 euros) 11
  • 12. Prof. Pier Luca Lanzi Cost-Based Model Evaluation • Two models, similar performance but different types of errors • First Model §1576 false positives and 1224 false negatives §Total cost is 59525443 • Second Model §1407 false positives and 1093 false negatives §Total cost is 53147717 (more than 6 millions saved) • What if we can change the way our model makes mistakes? §1093 false positives and 1407 false negatives §Total cost becomes 46852283 (more than 12 millions saved) 12
  • 13. Prof. Pier Luca Lanzi Costs can be used to evaluate an existing classification model, or they can be used by the algorithm to guide the search for the model. Some algorithms can use the cost matrix to build the model (e.g., decision trees)
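The cost computation on the "Computing Cost of Classification" slide can be reproduced in a few lines of Python. This is a minimal sketch, not part of the original deck: it multiplies each confusion-matrix cell by the corresponding cost-matrix entry and sums the results, reproducing the totals of 3910 and 4255 for M1 and M2.

cost = {('+', '+'): -1, ('+', '-'): 100, ('-', '+'): 1, ('-', '-'): 0}   # cost[(actual, predicted)]

def total_cost(confusion):
    # confusion[(actual, predicted)] holds the counts of the confusion matrix
    return sum(confusion[(a, p)] * cost[(a, p)] for a in ('+', '-') for p in ('+', '-'))

m1 = {('+', '+'): 150, ('+', '-'): 40, ('-', '+'): 60, ('-', '-'): 250}
m2 = {('+', '+'): 250, ('+', '-'): 45, ('-', '+'): 5, ('-', '-'): 200}
print(total_cost(m1))   # 3910
print(total_cost(m2))   # 4255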
  • 14. Prof. Pier Luca Lanzi Precision & Recall • Alternatives to accuracy, introduced in the area of information retrieval and search engines • Precision §Focuses on the percentage of examples classified as positive that are actually positive §In the information retrieval context, it represents the percentage of documents shown as results that are actually good (relevant). • Recall §Focuses on the percentage of actual positive examples that are correctly classified as positive §In the information retrieval context, recall represents the percentage of good documents shown with respect to the existing ones. 18
  • 15. Prof. Pier Luca Lanzi Cost-Sensitive Measures • Precision is biased towards C(TP) & C(FP) • Recall is biased towards C(TP) & C(FN) • F1-measure is biased towards all except C(TN), it is high when both precision and recall are reasonably high 19 The higher the precision, the lower the FPs The higher the recall, the lower the FNs The higher the F1, the lower the FPs & FNs
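The precision, recall, and F1 formulas referenced on these two slides appear as images in the deck; written out, precision = TP/(TP+FP), recall = TP/(TP+FN), and F1 is the harmonic mean of the two. A minimal Python sketch with hypothetical counts:

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)   # harmonic mean of precision and recall

# hypothetical counts for illustration
print(precision(tp=90, fp=10))   # 0.9
print(recall(tp=90, fn=30))      # 0.75
print(f1(tp=90, fp=10, fn=30))   # about 0.82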
  • 16. Prof. Pier Luca Lanzi How to obtain reliable estimates? (Methods for Performance Evaluation) 20
  • 17. Prof. Pier Luca Lanzi Classification Step 1: Split the Data Between Train and Test Sets 21 Training set Testing set data
  • 18. Prof. Pier Luca Lanzi Classification Step 2: Compute a Model using the Data in the Training Set 22 Training set Testing set data Model Builder Y N
  • 19. Prof. Pier Luca Lanzi Classification Step 3: Evaluate the Model using the Test Set 23 Training set Testing set data Model Builder Predictions Y N Evaluate + - + -
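A minimal sketch of the three steps using scikit-learn; the library, the Iris dataset, and the decision-tree learner are my assumptions, since the slides only show a generic model builder.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: split the data between train and test sets (here 2/3 vs 1/3, stratified)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=0)

# Step 2: compute a model using the data in the training set
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Step 3: evaluate the model using the test set
print(accuracy_score(y_test, model.predict(X_test)))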
  • 20. Prof. Pier Luca Lanzi Note on Parameter Tuning • It is important that the test data is not used in any way to create the classifier • Some learning schemes operate in two stages: §Stage 1: builds the basic structure §Stage 2: optimizes parameter settings • The test data can’t be used for parameter tuning! • Proper procedure uses three sets: training data, validation data, and test data. Validation data is used to optimize parameters 24
  • 21. Prof. Pier Luca Lanzi Making the most of the data • Once evaluation is complete, all the data can be used to build the final classifier • Generally, the larger the training data the better the classifier (but returns diminish) • The larger the test data the more accurate the error estimate 25
  • 22. Prof. Pier Luca Lanzi The Full Process: Training, Validation, and Testing 26 Training set Validation set data Model Builder Predictions Y N Evaluate + - + - Model Builder Testing set Y N + - + - Final Evaluation
  • 23. Prof. Pier Luca Lanzi Methods for Performance Evaluation • How to obtain a reliable estimate of performance? • Performance of a model may depend on other factors besides the learning algorithm: §Class distribution §Cost of misclassification §Size of training and test sets 27
  • 24. Prof. Pier Luca Lanzi Methods of Estimation • Holdout §Reserve ½ for training and ½ for testing §Reserve 2/3 for training and 1/3 for testing • Random subsampling §Repeated holdout • Cross validation §Partition data into k disjoint subsets §k-fold: train on k-1 partitions, test on the remaining one §Leave-one-out: k=n • Stratified sampling §oversampling vs undersampling • Bootstrap §Sampling with replacement 28
  • 25. Prof. Pier Luca Lanzi Holdout Evaluation & “Small” Datasets • Reserves a certain amount for testing and uses the remainder for training, typically, §Reserve ½ for training and ½ for testing §Reserve 2/3 for training and 1/3 for testing • For small or “unbalanced” datasets, samples might not be representative • For instance, it might generate training or testing datasets with few or no instances of some classes • Stratified sampling §Makes sure that each class is represented with approximately equal proportions in both subsets 29
  • 26. Prof. Pier Luca Lanzi Repeated Holdout • Holdout estimate can be made more reliable by repeating the process with different subsamples • In each iteration, a certain proportion is randomly selected for training (possibly with stratification) • The error rates on the different iterations are averaged to yield an overall error rate • Still not optimum since the different test sets overlap 30
  • 27. Prof. Pier Luca Lanzi Cross-Validation • First step §Data is split into k subsets of equal size • Second step §Each subset in turn is used for testing and the remainder for training • This is called k-fold cross-validation and avoids overlapping test sets • Often the subsets are stratified before cross-validation is performed • The error estimates are averaged to yield an overall error estimate 31
  • 28. Prof. Pier Luca Lanzi Ten-fold Cross-Validation 32 [diagram: the data is split into ten folds 1–10; in each of the ten rounds one fold is used for testing and the remaining nine for training, producing performances p1, p2, …, p10] The final performance is computed as the average of the pi
  • 29. Prof. Pier Luca Lanzi Cross-Validation • Standard method for evaluation: stratified ten-fold cross-validation • Why ten? Extensive experiments have shown that this is the best choice to get an accurate estimate • Stratification reduces the estimate’s variance • Even better: repeated stratified cross-validation • E.g., ten-fold cross-validation is repeated ten times and results are averaged (reduces the variance) • Other approaches also appear to be robust, e.g., 5x2 cross-validation 33
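A sketch of stratified ten-fold cross-validation, and of the repeated stratified variant mentioned above, again assuming scikit-learn and the Iris data as stand-ins:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, StratifiedKFold, RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(random_state=0)

# stratified ten-fold cross-validation: the estimate is the mean of the ten fold scores
scores = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print(scores.mean(), scores.std())

# repeated (10 x 10) stratified cross-validation reduces the variance of the estimate
repeated = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
print(cross_val_score(model, X, y, cv=repeated).mean())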
  • 30. Prof. Pier Luca Lanzi Leave-One-Out Cross-Validation • It is a particular form of cross-validation §Set number of folds to number of training instances §I.e., for n training instances, build classifier n times • Makes best use of the data • Involves no random subsampling • Computationally expensive 34
  • 31. Prof. Pier Luca Lanzi Leave-One-Out and Stratification • Disadvantage of Leave-One-Out is that stratification is not possible • It guarantees a non-stratified sample because there is only one instance in the test set! • Extreme example: random dataset split equally into two classes §Best inducer predicts majority class §50% accuracy on fresh data §Leave-One-Out CV estimate is 100% error! 35
  • 32. Prof. Pier Luca Lanzi Bootstrapping • Cross-validation uses sampling without replacement §The same instance, once selected, cannot be selected again for a particular training/test set • Bootstrap uses sampling with replacement §Sample a dataset of n instances n times with replacement to form a new dataset of n instances §Use this data as the training set §Use the instances from the original dataset that don’t occur in the new training set for testing 36
  • 33. Prof. Pier Luca Lanzi The 0.632 Bootstrap • An instance has a probability of 1 − 1/n of not being picked in a single draw • Thus its probability of ending up in the test data is (1 − 1/n)^n ≈ e^−1 ≈ 0.368 • This means the training data will contain approximately 63.2% of the instances 37
  • 34. Prof. Pier Luca Lanzi Estimating Error using Bootstrap • The error estimate on the test data will be very pessimistic, since training was on just around 63% of the instances • Therefore, we compute the overall error ε by combining the errors on the train and test data as ε = 0.632 · err_test + 0.368 · err_train • The training error gets less weight than the error on the test data • Repeat the process several times with different replacement samples; average the results 38
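A sketch of a single bootstrap round and of the 0.632 error estimate described above; NumPy, scikit-learn, and the Iris data are stand-ins for whatever learner and dataset are actually used.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
n = len(X)
rng = np.random.default_rng(0)

# sample n instances with replacement to form the training set (~63.2% unique instances)
train_idx = rng.integers(0, n, size=n)
# instances that were never picked form the test set
test_idx = np.setdiff1d(np.arange(n), train_idx)

model = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
err_train = 1 - model.score(X[train_idx], y[train_idx])
err_test = 1 - model.score(X[test_idx], y[test_idx])

# 0.632 bootstrap: the optimistic training error gets less weight than the test error
print(0.632 * err_test + 0.368 * err_train)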
  • 35. Prof. Pier Luca Lanzi How to compare the relative performance among competing models? (Model Comparison) 39
  • 36. Prof. Pier Luca Lanzi How to Compare the Performance of Two Models? • Two models, §Model M1: accuracy = 85%, tested on 30 instances §Model M2: accuracy = 75%, tested on 5000 instances • Can we say M1 is better than M2? • How much confidence can we place on accuracy of M1 and M2? • Can the difference in performance measure be explained as a result of random fluctuations in the test set? 40
  • 37. Prof. Pier Luca Lanzi Comparing Data Mining Schemes • Which of two learning schemes performs better? • The answer is obviously domain-dependent • The typical approach is to compare 10-fold cross-validation (CV) estimates using the Student’s t-test • Student’s t-test tells whether the means of two samples are significantly different • Use a paired t-test when the individual samples are paired, that is, when the two schemes are evaluated on the same CV folds. Use an unpaired t-test otherwise 41
  • 38. Prof. Pier Luca Lanzi Confidence Interval for Accuracy • Prediction can be regarded as a Bernoulli trial with two possible outcomes, correct or wrong • A collection of Bernoulli trials has a Binomial distribution • Given, §The number of correct test predictions S §The number of test instances N, §The accuracy f= S/N • Can we predict the true accuracy of the model from accuracy f? 42
  • 39. Prof. Pier Luca Lanzi Mean and Variance • Assume that the accuracy f comes from a Bernoulli process with success probability p, so that each prediction has mean p and variance p(1−p) • When N is large enough, f follows a normal distribution with mean p and variance p(1−p)/N • A c% confidence interval [−z ≤ X ≤ z] for a random variable of mean 0 is given by Pr[−z ≤ X ≤ z] = c • With a symmetric distribution, we have Pr[−z ≤ X ≤ z] = 1 − 2 · Pr[X ≥ z] 43
  • 40. Prof. Pier Luca Lanzi Confidence Interval for Accuracy • For instance, if X has mean 0 and standard deviation 1, the confidence interval for c% is defined by the z values in the table below • Thus, for the confidence interval for 90% we have Pr[−1.65 ≤ X ≤ 1.65] = 90% • To compute the confidence interval for the expected accuracy f we need to § Normalize it to a distribution with mean 0 and standard deviation 1 § Compute the confidence interval § Remap it to the actual distribution 44 Table of Pr[X ≥ z] and the corresponding z: 0.1% → 3.09, 0.5% → 2.58, 1% → 2.33, 5% → 1.65, 10% → 1.28, 20% → 0.84, 25% → 0.69, 40% → 0.25
  • 41. Prof. Pier Luca Lanzi Confidence Interval for Accuracy • First, transform f using z = (f − p) / sqrt(p(1−p)/N) • Compute the confidence interval Pr[−z ≤ (f − p)/sqrt(p(1−p)/N) ≤ z] = c • Solve it for p, which gives p = ( f + z²/(2N) ± z · sqrt( f/N − f²/N + z²/(4N²) ) ) / ( 1 + z²/N ) 45
  • 42. Prof. Pier Luca Lanzi Examples • When f = 75%, N=1000, c=80% (z=1.28), the confidence interval for the actual performance is [0.732, 0.767] • When f = 75%, N=100, c=80% (z=1.28), the confidence interval for the actual performance is [0.691, 0.801] • Note that the assumption about f being normally distributed only applies for large values of N (at least 30) • When f = 75%, N=10, c=80% (z=1.28), the confidence interval for the actual performance is [0.549, 0.881] 46
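A small sketch that evaluates the interval formula from the previous slide and reproduces the three intervals above (only the function name is mine):

from math import sqrt

def accuracy_interval(f, N, z):
    # solve (f - p)^2 = z^2 * p * (1 - p) / N for p
    center = f + z * z / (2 * N)
    width = z * sqrt(f / N - f * f / N + z * z / (4 * N * N))
    denom = 1 + z * z / N
    return (center - width) / denom, (center + width) / denom

print(accuracy_interval(0.75, 1000, 1.28))   # about (0.732, 0.767)
print(accuracy_interval(0.75, 100, 1.28))    # about (0.691, 0.801)
print(accuracy_interval(0.75, 10, 1.28))     # about (0.549, 0.881)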
  • 43. Prof. Pier Luca Lanzi William Gosset (1876-1937) • Obtained a post as a chemist in the Guinness brewery in Dublin in 1899. • Invented the t-test to handle small samples for quality control in brewing • Wrote under the name "Student” • Unpaired and paired two-sample t-test • Unequal/equal sample size • Unequal/equal sample variance 47
  • 44. Prof. Pier Luca Lanzi Comparing the Performance of Two Models (Paired) • Fix a significance level • If a difference is significant at the c% level, there is a (100-c)% chance that the true means differ • Divide the significance level by two because the test is two-tailed • Look up the value for z that corresponds to c/2 • Compute the value of t based on the observed performance estimates for the schemes being compared • If t is less than –z or greater than z then the difference is statistically significant 48
  • 45. Prof. Pier Luca Lanzi “paired” when the estimates are from the same datasets “unpaired” when the estimates are from different datasets 49
  • 46. Prof. Pier Luca Lanzi How do we know that the difference in performance is not just due to chance? We compute the odds of it! Apply the t-test and compute the p-value The p-value represents the probability of observing such a difference by chance when there is no real difference between the two schemes
  • 47. Prof. Pier Luca Lanzi In repeated experiments at this sample size, how often would you see such a difference assuming that there is no difference between the two groups?
  • 48. Prof. Pier Luca Lanzi run the python notebook dedicated to classifier comparison
  • 49. Prof. Pier Luca Lanzi Multiple Testing • Say that you perform a statistical test with a 0.05 threshold, but you repeat the test on twenty different observations. • For example, you want to compare the performance of several classification algorithms • Assume that all of the observations are explainable by the null hypothesis • What is the chance that at least one of the observations will receive a p-value less than 0.05? 46
  • 50. Prof. Pier Luca Lanzi Multiple Testing - Example • Say that you perform a statistical test with a 0.05 threshold, but you repeat the test on 20 different observations. Assuming that all of the observations are explainable by the null hypothesis, what is the chance that at least one of the observations will receive a p-value less than 0.05? • P(making a mistake) = 0.05 • P(not making a mistake) = 0.95 • P(not making any mistake) = 0.95^20 = 0.358 • P(making at least one mistake) = 1 - 0.358 = 0.642 • There is a 64.2% chance of making at least one mistake. 47
  • 51. Prof. Pier Luca Lanzi Bonferroni Correction • Assume that individual tests are independent. • Divide the desired p-value threshold by the number of tests performed. • Example §We now have the threshold set to 0.05/20 = 0.0025. §P(making a mistake) = 0.0025 §P(not making a mistake) = 0.9975 §P(not making any mistake) = 0.9975^20 = 0.9512 §P(making at least one mistake) = 1 - 0.9512 = 0.0488 48
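The two probability calculations on these slides, written as a short Python sketch with the same numbers:

alpha, tests = 0.05, 20

# without correction: chance of at least one spurious "significant" result
print(1 - (1 - alpha) ** tests)       # about 0.642

# Bonferroni correction: the per-test threshold becomes alpha / tests
corrected = alpha / tests             # 0.0025
print(1 - (1 - corrected) ** tests)   # about 0.0488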
  • 52. Prof. Pier Luca Lanzi Non Parametric Tests • Do not make any assumption about the distribution of the variable in the population • Mann-Whitney U Test §Nonparametric equivalent of the independent t-test • Wilcoxon matched-pairs signed rank test §Used to compare two related groups 61
  • 53. Prof. Pier Luca Lanzi Mann Whitney U Test • To compute the Mann Whitney U: §Rank the scores in both groups (together) from highest to lowest. §Sum the ranks of the scores for each group. §The sums of ranks for each group are used to make the statistical comparison. Income group, score (rank): 25 (12), 32 (5), 36 (3), 40 (1), 22 (14), 37 (2), 20 (16), 18 (18), 31 (6), 29 (8); sum of ranks = 85. No Income group, score (rank): 27 (10), 19 (17), 16 (20), 33 (4), 30 (7), 17 (19), 21 (15), 23 (13), 26 (11), 28 (9); sum of ranks = 125.
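The same comparison can be run with SciPy; scipy.stats.mannwhitneyu is my choice of implementation (not mentioned in the slides), and the scores are the ones from the table above.

from scipy.stats import mannwhitneyu

income = [25, 32, 36, 40, 22, 37, 20, 18, 31, 29]        # sum of ranks = 85
no_income = [27, 19, 16, 33, 30, 17, 21, 23, 26, 28]     # sum of ranks = 125

u, p = mannwhitneyu(income, no_income, alternative="two-sided")
print(u, p)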
  • 54. Prof. Pier Luca Lanzi Wilcoxon Matched-pairs Signed Rank test • Step 1 § Rank the data of both the groups in ascending order § If any values are equal average their ranks • Step 2 § Add up the ranks in group with smaller sample size § If the two groups are of the same size either one may be picked § T= sum of ranks in group with smaller sample size • Step 3 § Compare this sum with the critical ranges given in table § Look up the rows corresponding to the sample sizes of the two groups § A range will be shown for the 5% significance level 63
  • 55. Prof. Pier Luca Lanzi Birth weight data in kg with ranks; the three values of 3.60 are tied for ranks 17, 18 and 19, so their ranks are averaged to 18 (marked *). Non-smokers (n=15): 3.99 (27), 3.79 (24), 3.60* (18), 3.73 (22), 3.21 (8), 3.60* (18), 4.08 (28), 3.61 (20), 3.83 (25), 3.31 (12), 4.13 (29), 3.26 (10), 3.54 (15), 3.51 (13), 2.71 (3); sum of ranks = 272. Heavy smokers (n=14): 3.18 (7), 2.84 (5), 2.90 (6), 3.27 (11), 3.85 (26), 3.52 (14), 3.23 (9), 2.76 (4), 3.60* (18), 3.75 (23), 3.59 (16), 3.63 (21), 2.38 (2), 2.34 (1); sum of ranks = 163.
  • 56. Prof. Pier Luca Lanzi Choosing between nonparametric and parametric tests (nonparametric tests apply to nominal or ordinal data; parametric tests apply to ordinal, interval, or ratio data) • One group: Chi-square goodness of fit (nominal), Wilcoxon signed rank test (ordinal), one-group t-test (parametric) • Two unrelated groups: Chi-square (nominal), Wilcoxon rank sum test / Mann-Whitney test (ordinal), Student’s t-test (parametric) • Two related groups: McNemar’s test (nominal), Wilcoxon signed rank test (ordinal), paired Student’s t-test (parametric) • K unrelated groups: Chi-square test (nominal), Kruskal-Wallis one-way analysis of variance (ordinal), ANOVA (parametric) • K related groups: Friedman matched samples (ordinal), ANOVA with repeated measurements (parametric)
  • 57. Prof. Pier Luca Lanzi T-Test Using R
> j48 <- c(96, 93.33, 92.00, 94.67, 96.00, 94.00, 94.67, 94.0, 94.67, 98)
> zeror <- c(33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33, 33.33)
> t.test(j48, zeror)
Welch Two Sample t-test
data: j48 and zeror
t = 117.9102, df = 9, p-value = 1.153e-15
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval: 60.22594 62.58206
sample estimates: mean of x mean of y 94.734 33.330
68
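For completeness, a sketch of the same Welch test in Python; scipy.stats.ttest_ind with equal_var=False corresponds to R's default t.test (SciPy is my assumption, the deck only shows the R session).

from scipy.stats import ttest_ind

j48 = [96, 93.33, 92.00, 94.67, 96.00, 94.00, 94.67, 94.0, 94.67, 98]
zeror = [33.33] * 10

# Welch two-sample t-test (unequal variances), as in the R output above
t, p = ttest_ind(j48, zeror, equal_var=False)
print(t, p)   # t is about 117.9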
  • 59. Prof. Pier Luca Lanzi Probabilistic Classifiers 70
  • 60. Prof. Pier Luca Lanzi Probabilistic Classifiers & Classification Thresholds • Up to now we used logistic regression to predict class labels; however, logistic regression returns a probability • Given an example xi, its predicted class is the label with the largest probability, so it is equivalent to using a threshold of 0.5 to decide which class to assign to an example • However, we can use a different threshold and, for instance, label as positive (+1) only the examples for which P(+1|x) > 0.75 • This would label as positive only the cases for which we are more confident that they should be labeled as positive. 71
  • 61. Prof. Pier Luca Lanzi How Does the Classification Threshold Influence Precision and Recall? • Suppose we use a threshold near one to classify positive examples • Then, we will classify as positive only the examples for which we are very confident (this is a pessimistic classifier) • Precision will be high §In fact, we are not likely to produce false positives • Recall will be low §In fact, we are likely to produce more false negatives 72
  • 62. Prof. Pier Luca Lanzi How Does the Classification Threshold Influence Precision and Recall? • Suppose we use a threshold near zero to classify positive examples • Then, we will classify everything as positive (this is an optimistic classifier) • Precision will be low as we are going to generate the maximum number of false positives (everything is positive!) • Recall will be high since, by classifying everything as positive, we produce no false negatives 73
  • 63. Prof. Pier Luca Lanzi We can use the threshold to trade off precision and recall: a higher threshold increases precision and lowers recall; a lower threshold decreases precision and increases recall
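A sketch of how such a threshold is applied in practice with a scikit-learn logistic regression; the breast-cancer dataset stands in for the loans data used in the notebook, and the 0.75 threshold mirrors the example that follows.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]          # P(positive | x)
for threshold in (0.5, 0.75):
    y_pred = (proba >= threshold).astype(int)      # label as positive only above the threshold
    print(threshold, precision_score(y_test, y_pred), recall_score(y_test, y_pred))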
  • 64. Prof. Pier Luca Lanzi Run the notebook dedicated to the precision-recall tradeoff
  • 65. Prof. Pier Luca Lanzi Precision-Recall Tradeoff Example • In the notebook, a simple logistic regression model applied to the loans data returns the first confusion matrix below, corresponding to a precision of 0.82 and a recall of 0.99 • By increasing the classification threshold for positive (+1) examples to 0.75 we obtained the second confusion matrix, with a precision of 0.86 and a lower recall of about 0.80 • Overall, we reduced the number of false positives (we accepted fewer risky loans) at the price of more false negatives 76 First matrix: Labeled -1: 1212 classified -1, 21910 classified +1; Labeled +1: 1057 classified -1, 98283 classified +1. Second matrix: Labeled -1: 10632 classified -1, 12490 classified +1; Labeled +1: 20106 classified -1, 79234 classified +1.
  • 66. Prof. Pier Luca Lanzi we can explore this tradeoff using precision-recall curves
  • 67. Prof. Pier Luca Lanzi Precision-Recall Curves • Plot precision as a function of recall for varying threshold values • The best classifier would be the one that always has a precision equal to one (but this never happens) • More generally, different classifiers will show curves of different shapes • How to decide among several classifiers? §Use the area under the curve (the nearer to one, the better) §Use the F1 measure 78
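A sketch of how such a curve can be computed with scikit-learn; the dataset and model are the same stand-ins as in the earlier threshold sketch, since the loan data of the notebook is not available here.

import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, auc

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=5000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_test, proba)
print(auc(recall, precision))   # area under the precision-recall curve: the closer to 1, the better

plt.plot(recall, precision)
plt.xlabel("recall")
plt.ylabel("precision")
plt.show()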
  • 68. Prof. Pier Luca Lanzi Precision-recall curve for the loan dataset (run the python notebook for details)
  • 69. Prof. Pier Luca Lanzi Receiver Operating Characteristic (ROC) Curves
  • 70. Prof. Pier Luca Lanzi ROC Curve (Receiver Operating Characteristic) • Developed in the 1950s for signal detection theory to analyze signals • Plot the True Positive Rate (TPR = TP/(TP+FN)) against the False Positive Rate (FPR = FP/(TN+FP)) • The performance of each classifier is represented as a point on the ROC curve • Changing the classification threshold, sample distribution or cost matrix changes the location of the point 81
  • 71. Prof. Pier Luca Lanzi ROC Curve • (FPR, TPR) §(0,0): declare everything to be negative class §(1,1): declare everything to be positive class §(0,1): ideal • Diagonal line: §Random guessing §Below diagonal line, prediction is opposite of the true class 82
  • 72. Prof. Pier Luca Lanzi ROC Curve 83
  • 73. Prof. Pier Luca Lanzi Using ROC for Model Comparison • No model consistently outperforms the other • M1 is better for small FPR • M2 is better for large FPR • Area Under the ROC curve • Ideal: area = 1 • Random guess: area = 0.5 84
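A sketch of an ROC curve and its area with scikit-learn, under the same stand-in dataset and model assumptions as the previous sketches:

import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=5000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

fpr, tpr, thresholds = roc_curve(y_test, proba)   # one (FPR, TPR) point per threshold
print(roc_auc_score(y_test, proba))               # 1.0 is ideal, 0.5 is random guessing

plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], linestyle="--")          # the random-guessing diagonal
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.show()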
  • 74. Prof. Pier Luca Lanzi Lift Charts • Lift is a measure of the effectiveness of a predictive model calculated as the ratio between the results obtained with and without the predictive model • Cumulative gains and lift charts are visual aids for measuring model performance • Both charts consist of a lift curve and a baseline • The greater the area between the lift curve and the baseline, the better the model 89
  • 75. Prof. Pier Luca Lanzi Example: Direct Marketing • Mass mailout of promotional offers (1000000) • The proportion who normally respond is 0.1%, that is, 1000 responses • A data mining tool can identify a subset of 100000 for which the response rate is 0.4%, that is, 400 responses (a lift factor of 4) • In marketing terminology, this increase of the response rate is known as the lift factor yielded by the model • The same data mining tool may be able to identify 400000 households for which the response rate is 0.2%, that is, 800 respondents, corresponding to a lift factor of 2 • The overall goal is to find subsets of test instances that have a high proportion of true positives 90
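The lift factors in this example follow directly from the response rates; a small Python sketch with the numbers from the slide:

baseline_rate = 1000 / 1000000           # 0.1% response rate without a model

def lift(responses, contacted):
    return (responses / contacted) / baseline_rate

print(lift(400, 100000))    # 4.0: the top 100000 households respond at 0.4%
print(lift(800, 400000))    # 2.0: the top 400000 households respond at 0.2%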
  • 76. Prof. Pier Luca Lanzi An Example from Direct Marketing 91
  • 77. Prof. Pier Luca Lanzi Which model should we prefer? (Model selection) 92
  • 78. Prof. Pier Luca Lanzi Model Selection Criteria • Model selection criteria attempt to find a good compromise between: §The complexity of a model §Its prediction accuracy on the training data • Reasoning: a good model is a simple model that achieves high accuracy on the given data • Also known as Occam’s Razor: the best theory is the smallest one that describes all the facts 93 William of Ockham, born in the village of Ockham in Surrey (England) about 1285, was the most influential philosopher of the 14th century and a controversial theologian.
  • 79. Prof. Pier Luca Lanzi The MDL Principle • MDL stands for minimum description length • The description length is defined as: space required to describe a theory + space required to describe the theory’s mistakes • In our case the theory is the classifier and the mistakes are the errors on the training data • We seek a classifier with Minimum Description Length 94
  • 80. Prof. Pier Luca Lanzi MDL and compression • MDL principle relates to data compression • The best theory is the one that compresses the data the most • I.e. to compress a dataset we generate a model and then store the model and its mistakes • We need to compute (a) size of the model, and (b) space needed to encode the errors • (b) easy: use the informational loss function • (a) need a method to encode the model 95
  • 81. Prof. Pier Luca Lanzi Which Classification Algorithm? • Among the several algorithms, which one is the “best”? §Some algorithms have a lower computational complexity §Different algorithms provide different representations §Some algorithms allow the specification of prior knowledge • If we are interested in the generalization performance, are there any reasons to prefer one classifier over another? • Can we expect any classification method to be superior or inferior overall? • According to the No Free Lunch Theorem, the answer to all these questions is known 96
  • 82. Prof. Pier Luca Lanzi No Free Lunch Theorem • If the goal is to obtain good generalization performance, there are no context-independent or usage-independent reasons to favor one classification method over another • If one algorithm seems to outperform another in a certain situation, it is a consequence of its fit to the particular problem, not the general superiority of the algorithm • When confronting a new problem, this theorem suggests that we should focus on the aspects that matter most §Prior information §Data distribution §Amount of training data §Cost or reward • The theorem also justifies skepticism regarding studies that “demonstrate” the overall superiority of a certain algorithm 97
  • 83. Prof. Pier Luca Lanzi No Free Lunch Theorem • "[A]ll algorithms that search for an extremum of a cost [objective] function perform exactly the same, when averaged over all possible cost functions.“ [1] • “[T]he average performance of any pair of algorithms across all possible problems is identical.” [2] • Wolpert, D.H., Macready, W.G. (1995), No Free Lunch Theorems for Search, Technical Report SFI-TR-95-02-010 (Santa Fe Institute). • Wolpert, D.H., Macready, W.G. (1997), No Free Lunch Theorems for Optimization, IEEE Transactions on Evolutionary Computation 1, 67. 98