Machine Learning

"Roughly speaking, for a given learning task, with a given finite amount of training data, the best generalization performance will be achieved if the right balance is struck between the accuracy attained on that particular training set, and the 'capacity' of the machine, that is, the ability of the machine to learn any training set without error. A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist's lazy brother, who declares that if it's green, it's a tree. Neither can generalize well. The exploration and formalization of these concepts has resulted in one of the shining peaks of the theory of statistical learning." (Vapnik, 1979)
What is machine learning?

[Diagram: training examples (Data) → Model → Output: predictions, classifications, clusters, ordinals]

Why: face recognition?
Categories of problems

By output:
• Clustering
• Regression
• Prediction
• Classification
• Ordinal regression

By input:
• Vector, X
• Time series, x(t)
One size never fits all…

• Improving an algorithm:
  – First option: better features
    • Visualize classes
    • Trends
    • Histograms (e.g., in WEKA or GGOBI)
  – Next: make the algorithm smarter (more complicated)
    • Interaction of features
    • Better objective and training criteria
Categories of ML algorithms

By training:
• Supervised (labeled data)
• Unsupervised (unlabeled data)

By model:
• Non-parametric: raw data only
• Kernel methods
• Parametric: model parameters only

[Figure: three fits to the same input/output data, ranging from raw-data interpolation to the parametric curve y = 1 + 0.5t + 4t^2 - t^3]
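A minimal sketch of the parametric case from the figure: fit the cubic y = 1 + 0.5t + 4t^2 - t^3 by least squares. The noise level and sample size are illustrative choices, not from the slides; the point is that after training, only the four coefficients are kept.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-4, 6, 50)
y = 1 + 0.5 * t + 4 * t**2 - t**3 + rng.normal(scale=3.0, size=t.shape)

# Design matrix with columns [1, t, t^2, t^3]
X = np.vander(t, N=4, increasing=True)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered coefficients:", np.round(coeffs, 2))  # close to [1, 0.5, 4, -1]
```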
Training a ML algorithm

• Choose data
• Optimize model parameters according to an objective function:
  – Regression: e.g., mean square error
  – Classification: e.g., max margin

[Figure: a regression fit minimizing mean square error, and a classification boundary maximizing the margin]
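A minimal sketch of "optimize model parameters according to an objective function": gradient descent on mean squared error for a one-dimensional linear model. The learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 6, size=100)
y = 2.0 * x - 1.0 + rng.normal(scale=0.5, size=x.shape)

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    err = w * x + b - y
    # Gradients of MSE = mean((w*x + b - y)^2) with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)
print(f"w = {w:.2f}, b = {b:.2f}")  # close to 2 and -1
```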
Pitfalls of ML algorithms

• Clean your features:
  – Training volume: more is better
  – Outliers: remove them!
  – Dynamic range: normalize it!
• Generalization
  – Overfitting
  – Underfitting
• Speed: parametric vs. non-parametric
• What are you learning? …features, features, features…
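A minimal sketch of the "dynamic range" pitfall: z-score normalization with statistics computed on the training split only, then reused on the test split. The toy values are made up for illustration.

```python
import numpy as np

X_train = np.array([[1.0, 200.0], [2.0, 240.0], [3.0, 180.0]])
X_test = np.array([[1.5, 210.0]])

mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)
X_train_n = (X_train - mu) / sigma
X_test_n = (X_test - mu) / sigma  # reuse training statistics; never refit on test data
print(X_train_n, X_test_n, sep="\n")
```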
K-Means clustering

• Planar decision boundaries (depending on the space you are in…)
• Highly efficient
• Not always great (but usually pretty good)
• Needs a good initialization
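A minimal k-means sketch (Lloyd's algorithm), assuming random initialization from the data points, which is exactly where the initialization sensitivity noted above comes from.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assign each point to its nearest center (this induces planar boundaries)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster goes empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```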
K-Nearest Neighbor

• Arbitrary decision boundaries
• Not so efficient at query time…
• With enough data in each class, approaches the optimal (Bayes) error rate
• Easy to train; known as a "lazy" classifier
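A minimal k-nearest-neighbor sketch, assuming integer class labels (required by np.bincount). There is no training beyond storing the data, hence "lazy": all the work happens at query time.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)  # distance to every stored point
    nearest = np.argsort(d)[:k]              # indices of the k closest
    return np.bincount(y_train[nearest]).argmax()  # majority vote
```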
Mixture of Gaussians

• Arbitrary decision boundaries, given enough components
• Efficient, depending on the number of models and Gaussians
• Can represent more than just Gaussian distributions
• Generative; sometimes tough to train
• Prone to spurious singularities
• Can give a distribution for a specific class and feature(s), yielding a Bayesian classifier
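A minimal EM sketch for a one-dimensional, two-component Gaussian mixture. The initialization is a crude assumption; real implementations work in log space and floor the variances to avoid the spurious singularities mentioned above.

```python
import numpy as np

def gmm_em_1d(x, iters=50):
    mu = np.array([x.min(), x.max()])   # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var
```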
Components Analysis (principal or independent)

• Reduces dimensionality
• All other classifiers then work in a rotated space
• Remember eigenvalues and eigenvectors?
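A minimal PCA sketch: the eigenvectors of the covariance matrix are the rotated axes, and keeping the top k reduces dimensionality.

```python
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)            # center the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1]     # sort by descending eigenvalue
    W = vecs[:, order[:k]]
    return Xc @ W, W                   # projected data and the new basis
```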
Tree Classifiers

• Arbitrary decision boundaries
• Can be quite efficient (or not!)
• Needs a good criterion for splitting
• Easy to visualize
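A minimal sketch of one splitting criterion a tree classifier can optimize: exhaustively choose the (feature, threshold) pair minimizing weighted Gini impurity, which builds a single node of a decision tree.

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[0]:
                best = (score, j, t)
    return best  # (impurity, feature index, threshold)
```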
Multi-Layer Perceptron

• Arbitrary (non-linear) decision boundaries
• Can be quite efficient (or not!)
• What did it learn? (hard to interpret)
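A minimal forward-pass sketch of a one-hidden-layer perceptron. The weights here are random placeholders, not trained values; fitting them would use backpropagation. The non-linearity in the hidden layer is what bends the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input dim 2 -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> 1 output

def mlp(x):
    h = np.tanh(x @ W1 + b1)                   # non-linear hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output in (0, 1)

print(mlp(np.array([0.5, -1.0])))
```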
Support Vector Machines

• Arbitrary decision boundaries
• Efficiency depends on the number of support vectors and the feature dimensionality
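A minimal sketch of SVM prediction with an RBF kernel: the decision function is a sum over support vectors only, which is why efficiency scales with their number. The alphas and bias b are placeholders for values a trainer (e.g., SMO) would produce.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def svm_decision(x, support_vecs, sv_labels, alphas, b):
    # Sum of kernel evaluations against the support vectors; sign gives the class
    return np.sum(alphas * sv_labels * rbf(support_vecs, x)) + b
```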
Hidden Markov Models

• Arbitrary decision boundaries
• Efficiency depends on the state space and the number of models
• Generalizes to incorporate features that change over time
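A minimal Viterbi sketch for decoding an HMM: the cost per time step grows with the square of the state count, matching the note on state-space efficiency above. Log probabilities avoid underflow on long sequences.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    # obs: observation indices; log_pi: (S,); log_A: (S, S); log_B: (S, V)
    S, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # (S, S): previous state x next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                        # most likely state sequence
```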
More sophisticated approaches

• Graphical models (like an HMM)
  – Bayesian networks
  – Markov random fields
• Boosting
  – AdaBoost
• Voting
• Cascading
• Stacking…
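A minimal AdaBoost-style sketch using single-feature threshold "stumps" as weak learners, assuming labels in {-1, +1}. Each round reweights the examples the previous learner got wrong, so later learners focus on the hard cases.

```python
import numpy as np

def adaboost(X, y, rounds=10):
    N = len(y)
    w = np.full(N, 1.0 / N)
    ensemble = []
    for _ in range(rounds):
        # Weak learner: best single-feature threshold under current weights
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        alpha = 0.5 * np.log((1 - max(err, 1e-10)) / max(err, 1e-10))
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)  # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```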