These lecture slides, from a course on deriving knowledge from data at scale, cover building machine learning models: data preparation, feature selection, classification algorithms such as decision trees and support vector machines, and model evaluation. They apply these techniques to a Titanic passenger dataset to predict survival, and emphasize the importance of data wrangling and of choosing among feature selection methods.
10. Deriving Knowledge from Data at Scale
https://www.kaggle.com/c/springleaf-marketing-response
Determine whether or not to send a direct mail piece to a customer
20. Deriving Knowledge from Data at Scale
Does the new data point x* exactly match a previous point xi?
If so, assign it to the same class as xi
Otherwise, just guess.
This is the “rote” classifier
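The rote classifier above can be sketched in a few lines (Python used here for illustration; the function and variable names are my own):

```python
# Hedged sketch of the "rote" classifier: predict the class of an exact
# training match, otherwise just guess among the known classes.
import random

def rote_classify(train_X, train_y, x_new, classes, seed=0):
    """Return the label of an exact training match, else a random guess."""
    for xi, yi in zip(train_X, train_y):
        if xi == x_new:          # exact match on every attribute
            return yi
    return random.Random(seed).choice(classes)  # otherwise, just guess

train_X = [("female", 1), ("male", 3)]
train_y = ["yes", "no"]
print(rote_classify(train_X, train_y, ("male", 3), ["yes", "no"]))  # "no"
```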
21. Deriving Knowledge from Data at Scale
Does the new data point x* match a set of previous points xi on some specific attribute?
If so, take a vote to determine class.
Example: If most females survived, then assume every female survives
But there are lots of possible rules like this.
And an attribute can have more than two values.
If most people under 4 years old survive, then assume everyone under 4 survives
If most people with 1 sibling survive, then assume everyone with 1 sibling survives
How do we choose?
22. Deriving Knowledge from Data at Scale
IF sex=‘female’ THEN survive=yes
ELSE IF sex=‘male’ THEN survive = no
confusion matrix
no yes <-- classified as
468 109 | no
81 233 | yes
(468 + 233) / (468+109+81+233) = 79% correct (and 21% incorrect)
Not bad!
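As a quick check, the accuracy arithmetic from the confusion matrix can be reproduced directly:

```python
# Recomputing the accuracy figure from the confusion matrix on the slide.
tn, fp = 468, 109   # actual "no":  correctly / incorrectly classified
fn, tp = 81, 233    # actual "yes": incorrectly / correctly classified

correct = tn + tp
total = tn + fp + fn + tp
accuracy = correct / total
print(f"{accuracy:.1%}")  # 78.7%, reported as 79% on the slide
```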
24. Deriving Knowledge from Data at Scale
IF pclass=‘1’ THEN survive=yes
ELSE IF pclass=‘2’ THEN survive=yes
ELSE IF pclass=‘3’ THEN survive=no
confusion matrix
no yes <-- classified as
372 119 | no
177 223 | yes
(372 + 223) / (372+119+223+177) = 67% correct (and 33% incorrect)
a little worse
28. Deriving Knowledge from Data at Scale
[Figure: data flow x → f → y_est over a scatter of linearly separable points; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
How would you classify this data?
Estimation:
w: weight vector
x: data vector
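A minimal sketch of this decision function; the weight vector and bias below are invented for illustration, not taken from the slides:

```python
# The slides' linear decision function, f(x, w, b) = sign(w . x - b).
import numpy as np

def f(x, w, b):
    # +1 on one side of the separating hyperplane, -1 on the other
    return np.sign(np.dot(w, x) - b)

w = np.array([2.0, -1.0])   # weight vector (illustrative values)
b = 0.5                     # bias / threshold (illustrative value)
print(f(np.array([1.0, 0.0]), w, b))   # 1.0
print(f(np.array([0.0, 1.0]), w, b))   # -1.0
```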
29. Deriving Knowledge from Data at Scale
[Figure: a candidate linear separator; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
How would you classify this data?
30. Deriving Knowledge from Data at Scale
[Figure: another candidate linear separator; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
How would you classify this data?
31. Deriving Knowledge from Data at Scale
[Figure: yet another candidate linear separator; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
How would you classify this data?
32. Deriving Knowledge from Data at Scale
[Figure: several candidate linear separators; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
Any of these would be fine…
…but which is best?
33. Deriving Knowledge from Data at Scale
[Figure: a linear separator with its margin shown; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
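For reference, this margin has a standard closed form not shown on the slide: under the usual canonical scaling, where the nearest points satisfy y_i(w · x_i - b) = 1, the margin width is

```latex
% Canonical max-margin scaling: the closest points satisfy y_i (w \cdot x_i - b) = 1.
% The distance between the two margin hyperplanes is then
\text{margin} = \frac{2}{\lVert w \rVert}
```

This is why maximizing the margin is equivalent to minimizing the norm of w.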
36. Deriving Knowledge from Data at Scale
[Figure: the maximum margin separator; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM).
Linear SVM
37. Deriving Knowledge from Data at Scale
[Figure: maximum margin separator with the support vectors highlighted; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM).
Support vectors are those datapoints that the margin pushes up against.
Linear SVM
38. Deriving Knowledge from Data at Scale
[Figure: maximum margin separator with support vectors highlighted; one class of markers denotes +1, the other denotes -1]
f(x, w, b) = sign(w · x - b)
Support vectors are those datapoints that the margin pushes up against.
Why maximum margin?
1. Intuitively this feels safest.
2. If we've made a small error in the location of the boundary, this gives us the least chance of causing a misclassification.
3. LOOCV (leave-one-out cross-validation) is easy, since the model is immune to removal of any non-support-vector data points.
4. Empirically it works very well.
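These properties can be seen concretely; a sketch assuming scikit-learn is available (the toy points below are invented for illustration):

```python
# Fit a (near) hard-margin linear SVM on a toy 2-D problem and inspect
# the support vectors -- the points the margin "pushes up against".
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [1, 0], [3, 3], [4, 4], [3, 4]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
print(clf.support_vectors_)  # only the boundary points appear here
# Removing any non-support-vector point and refitting leaves the model
# unchanged, which is why LOOCV is cheap for SVMs.
```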
47. Deriving Knowledge from Data at Scale
Choosing the kernel is probably the trickiest part of using an SVM.
RBF is a good first option…
It depends on your data; try several.
• Kernels have even been developed for nonnumeric data like sequences,
structures, and trees/graphs.
May help to use a combination of several kernels.
Don’t touch your evaluation data while you’re trying out different
kernels and parameters.
– Use cross-validation for this if you’re short on data
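A hedged sketch of this advice, assuming scikit-learn (the dataset and parameter grid are illustrative):

```python
# Try several kernels and parameters with cross-validation on the training
# data only, keeping the evaluation (test) data untouched until the end.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X_train, y_train)
print(search.best_params_)           # chosen on training folds only
print(search.score(X_test, y_test))  # touch the test set exactly once, at the end
```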
48. Deriving Knowledge from Data at Scale
The complexity of the optimization problem depends only on the dimensionality of the input space, not on the (possibly much higher) dimensionality of the feature space!
50. Deriving Knowledge from Data at Scale
• SVM 1 learns “Output==1” vs “Output != 1”
• SVM 2 learns “Output==2” vs “Output != 2”
….
• SVM N learns “Output==N” vs “Output != N”
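The one-vs-rest scheme above can be sketched as follows, assuming scikit-learn (the dataset and helper names are illustrative; scikit-learn's SVC can also do this internally):

```python
# One-vs-rest: train one binary SVM per class, SVM k learning
# "class == k" vs "class != k", then predict with the most confident one.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# One binary SVM per class
machines = {k: SVC(kernel="linear").fit(X, (y == k).astype(int)) for k in classes}

def predict(x):
    # decision_function gives a signed distance to each one-vs-rest boundary
    scores = {k: m.decision_function([x])[0] for k, m in machines.items()}
    return max(scores, key=scores.get)

print(predict(X[0]), y[0])  # these should usually agree
```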
59. Deriving Knowledge from Data at Scale
IF pclass=‘1’ THEN survive=yes
ELSE IF pclass=‘2’ THEN survive=yes
ELSE IF pclass=‘3’ THEN survive=no
confusion matrix
no yes <-- classified as
372 119 | no
177 223 | yes
(372 + 223) / (372+119+223+177) = 67% correct (and 33% incorrect)
a little worse
60. Deriving Knowledge from Data at Scale
Support Vector Machine Model, Titanic Data, Linear Kernel
62. Deriving Knowledge from Data at Scale
Support Vector Machine Model, RBF Kernel
Titanic Data
overfitting?
63. Deriving Knowledge from Data at Scale
Bill Howe, UW
Support Vector Machine Model, RBF Kernel
Titanic Data
Gamma: a parameter that controls/balances model complexity against accuracy
64. Deriving Knowledge from Data at Scale
How They Won It!
Lessons from data mining past Kaggle contests…
70. Deriving Knowledge from Data at Scale
• Rule of thumb: 5,000 or more instances desired
• Rule of thumb: for each attribute, 10 or more instances
• Rule of thumb: >100 for each class
71. Deriving Knowledge from Data at Scale
Data cleaning
Data integration
Data transformation
Data reduction
Data discretization
73. Deriving Knowledge from Data at Scale
1. Missing values
2. Outliers
3. Coding
4. Constraints
74. Deriving Knowledge from Data at Scale
• ReplaceMissingValues
• RemoveMisclassified
• MergeTwoValues
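A rough pandas analogue of ReplaceMissingValues (an assumption on my part, not the Weka filter itself; the tiny DataFrame is invented):

```python
# Fill numeric gaps with the column mean and nominal gaps with the mode,
# which is essentially what Weka's ReplaceMissingValues filter does.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [22, np.nan, 30], "sex": ["male", None, "female"]})
df["age"] = df["age"].fillna(df["age"].mean())   # mean of observed ages
df["sex"] = df["sex"].fillna(df["sex"].mode()[0])  # most frequent observed value
print(df)
```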
75. Deriving Knowledge from Data at Scale
Missing values – UCI machine learning repository, 31 of 68 data sets
reported to have missing values. “Missing” can mean many things…
MAR: "Missing at Random":
– usually best case
– usually not true
Non-randomly missing
Presumed normal, so not measured
Causally missing
– attribute value is missing because of other attribute values (or because of
the outcome value!)
81. Deriving Knowledge from Data at Scale
Simple transformations can often have a large impact in performance
Example transformations (not all for performance improvement):
• Difference of two date attributes, distance between coordinates, …
• Ratio of two numeric (ratio-scale) attributes, average for smoothing, …
• Concatenating the values of nominal attributes
• Encoding (probabilistic) cluster membership
• Adding noise to data (for robustness tests)
• Removing data randomly or selectively
• Obfuscating the data (for anonymity)
Intuition: add features that increase class discrimination (entropy, information gain)…
Data Transformation
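A few of these transformations, sketched in pandas with invented columns:

```python
# Example transformations from the slide: date differences, ratios of
# numeric attributes, and concatenated nominal attributes.
import pandas as pd

df = pd.DataFrame({
    "depart": pd.to_datetime(["2015-01-01", "2015-03-01"]),
    "arrive": pd.to_datetime(["2015-01-05", "2015-03-02"]),
    "fare": [100.0, 30.0],
    "distance": [200.0, 100.0],
    "pclass": ["1", "3"],
    "sex": ["female", "male"],
})

df["trip_days"] = (df["arrive"] - df["depart"]).dt.days   # difference of two dates
df["fare_per_km"] = df["fare"] / df["distance"]           # ratio of two numerics
df["pclass_sex"] = df["pclass"] + "_" + df["sex"]         # concatenated nominals
print(df[["trip_days", "fare_per_km", "pclass_sex"]])
```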
82. Deriving Knowledge from Data at Scale
• Combine attributes
• Normalizing data
• Simplifying data
95. Deriving Knowledge from Data at Scale
Resample (an instance filter): a random subsample
with or without replacement;
with or without replacement;
To replace or not…
Same random seed, will result in
same (repeatable) sample.
Sample size, as percentage of
original data set size.
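The same resampling controls exist in pandas, sketched here (an analogue, not the Weka filter itself):

```python
# Random subsampling: with or without replacement, repeatable via the seed,
# with sample size given as a fraction of the original data set.
import pandas as pd

df = pd.DataFrame({"x": range(10)})

sub = df.sample(frac=0.5, replace=False, random_state=42)  # 50%, no replacement
boot = df.sample(frac=1.0, replace=True, random_state=42)  # bootstrap sample
print(len(sub), len(boot))  # 5 10
# The same random_state always yields the same (repeatable) sample.
```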
98. Deriving Knowledge from Data at Scale
Curse of Dimensionality: the amount of data needed to cover the space grows exponentially with the number of dimensions.
In many cases the information that is lost by discarding variables is made up for by a more accurate mapping/sampling in the lower-dimensional space!
100. Deriving Knowledge from Data at Scale
A good sample should work almost as well as using the entire data set; it should have the same property (of interest) as the original set of data.
110. Deriving Knowledge from Data at Scale
Feature selection starts with you…
Goal: the smallest subset of attributes.
111. Deriving Knowledge from Data at Scale
                       What is evaluated?
Evaluation method      Attributes    Subsets of attributes
Independent            Filters       Filters
Learning algorithm                   Wrappers
112. Deriving Knowledge from Data at Scale
                       What is evaluated?
Evaluation method      Attributes    Subsets of attributes
Independent            Filters       Filters
Learning algorithm                   Wrappers
113. Deriving Knowledge from Data at Scale
A list of attributes, evaluated individually, from which a subset is selected.
115. Deriving Knowledge from Data at Scale
A correlation coefficient shows the degree of linear dependence of x and y. In other words, the coefficient shows
how close two variables lie along a line. If the coefficient is equal to 1 or -1, all the points lie along a line. If the
correlation coefficient is equal to zero, there is no linear relation between x and y. However, this does not
necessarily mean that there is no relation between the two variables; for example, there could be a non-linear relation.
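This caveat can be demonstrated numerically, since a perfect quadratic relation has zero Pearson correlation:

```python
# A perfect non-linear (quadratic) relation with zero Pearson correlation.
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2                      # y depends entirely on x ...
r = np.corrcoef(x, y)[0, 1]
print(round(r, 10))             # 0.0 -- ... yet there is no *linear* relation
```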
117. Deriving Knowledge from Data at Scale
Interface for classes that evaluate attributes…
Interface for ranking or searching for a subset of attributes…
118. Deriving Knowledge from Data at Scale
Select CorrelationAttributeEval for Pearson Correlation…
False: doesn’t return R scores;
True: returns R scores;
119. Deriving Knowledge from Data at Scale
Ranks attributes by their individual evaluations, used in
conjunction with GainRatio, Entropy, Pearson, etc…
Number of attributes to return,
-1 returns all ranked attributes;
Attributes to ignore (skip) in the
evaluation, format: [1, 3-5, 10];
Cutoff at which attributes can
be discarded, -1 no cutoff;
120. Deriving Knowledge from Data at Scale
Predicting Self-Reported Health Status
The Data Set, NHANES_data.csv (National Health and Nutrition Examination Survey)
How would you say your health in general is?
Excellent predictor of mortality, health care utilization & disability
How I processed it…
• 4000 variables;
• Attributes with > 30% missing values removed (dropped column);
• 105 variables remaining;
• Chi-square test between each variable and the target; remove variables with P value ≥ .20;
• Impute all missing values using expectation maximization;
• 85 variables remaining;
Pearson Correlation Exercise…
122. Deriving Knowledge from Data at Scale
NHANES_data.csv
• Convert the last column from numeric to nominal
• Find the top 15 features using Pearson Correlation
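For comparison, the same ranking idea outside Weka, sketched with pandas on synthetic data (NHANES_data.csv is not loaded here; columns and k are illustrative):

```python
# Rank features by |Pearson r| with the target and keep the top k.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 6)), columns=[f"v{i}" for i in range(6)])
df["target"] = 2 * df["v0"] - df["v3"] + rng.normal(scale=0.1, size=200)

k = 2  # the exercise above uses k = 15
scores = df.drop(columns="target").corrwith(df["target"]).abs()
top_k = scores.sort_values(ascending=False).head(k).index.tolist()
print(top_k)  # v0 and v3 should dominate, since they drive the target
```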
123. Deriving Knowledge from Data at Scale
• OneRAttributeEval
• GainRatioAttributeEval
• InfoGainAttributeEval
• ChiSquaredAttributeEval
• ReliefFAttributeEval
124. Deriving Knowledge from Data at Scale
• Right Click on the new line in the Result list;
• From the pop-up menu, select the item
Save reduced data…
125. Deriving Knowledge from Data at Scale
• Right Click on the new line in the Result list;
• From the pop-up menu, select the item
Save reduced data…
• Save the dataset with 15 selected attributes
to file NHanesPearson.arff
126. Deriving Knowledge from Data at Scale
• Right Click on the new line in the Result list;
• From the pop-up menu, select the item
Save reduced data…
• Save the dataset with 15 selected attributes
to file NHanesPearson.arff
• Switch to the Preprocess mode in Explorer
• Click on Open file… and open the file
NHanesPearson.arff
• Switch to the Classify submode
• Click on Choose, select classifier and use this
feature set and data to build a predictive
model;
127. Deriving Knowledge from Data at Scale
• Anything below 0.3 isn’t highly
correlated with the target…
128. Deriving Knowledge from Data at Scale
                       What is evaluated?
Evaluation method      Attributes    Subsets of attributes
Independent            Filters       Filters
Learning algorithm                   Wrappers
135. Deriving Knowledge from Data at Scale
Interface for classes that evaluate attributes…
Interface for ranking or searching for a subset of attributes…
137. Deriving Knowledge from Data at Scale
Forward, Backward, Bi-Directional
Attributes to “seed” the search,
listed individually or by range.
Cutoff for backtracking…
138. Deriving Knowledge from Data at Scale
True: Adds features that are correlated
with class and NOT intercorrelated with
other features already in selection.
False: Eliminates redundant features.
Precompute the correlation matrix in
advance, useful for fast backtracking, or
compute lazily. When given a large
number of attributes, compute lazily…
CfsSubsetEval
139. Deriving Knowledge from Data at Scale
NHANES_data.csv
• Convert the last column from numeric to nominal
• Set the search method as Best First, Forward
• Set the attribute evaluator as CfsSubsetEval
• Run across all attributes in data set…
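A greedy forward search like this can be sketched with scikit-learn; note that this version scores subsets with a learner (a wrapper), not with Weka's CFS merit, and it does no backtracking (dataset and learner are illustrative):

```python
# Greedy forward feature selection: at each step, add the feature that most
# improves cross-validated accuracy; stop when no feature helps.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    scored = [(cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=5).mean(), j)
              for j in remaining]
    score, j = max(scored)
    if score <= best_score:      # no improvement: stop (no backtracking here)
        break
    best_score = score
    selected.append(j)
    remaining.remove(j)

print(selected, round(best_score, 3))
```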
140. Deriving Knowledge from Data at Scale
• Feature selection can significantly increase the performance of a learning
algorithm (both accuracy and computation time) – but it is not easy!
• Relevance <-> Optimality
• Correlation and Mutual information between single variables and the target are
often used as Ranking-Criteria of variables.
Important points 1/2
141. Deriving Knowledge from Data at Scale
Important points 2/2
• One cannot automatically discard variables with small scores – they may still be
useful together with other variables.
• Filters – Wrappers - Embedded Methods
• How to search the space of all feature subsets?
• How to assess the performance of a learner that uses a particular feature subset?
142. Deriving Knowledge from Data at Scale
It’s not all about accuracy:
• Filtering is fast, linear, and intuitive
• But filtering is model-oblivious and may not be optimal
• Wrappers are model-aware, but slow and non-intuitive
• PCA and SVD are lossy, and work on the entire data set
• Rule of thumb: start with fast feature filtering first
• Sometimes it is best NOT to use any feature selection