MLlecture1.ppt


  1. CS760 – Machine Learning
      • Course Instructor: David Page
        - email: dpage@cs.wisc.edu
        - office: MSC 6743 (University & Charter)
        - hours: TBA
      • Teaching Assistant: Daniel Wong
        - email: wong@cs.wisc.edu
        - office: TBA
        - hours: TBA
  2. Textbooks & Reading Assignment
      • Machine Learning (Tom Mitchell)
      • Selected on-line readings
      • Read in Mitchell (posted on class web page):
        - Preface
        - Chapter 1
        - Sections 2.1 and 2.2
        - Chapter 8
  3. Monday, Wednesday, and Friday?
      • We'll meet 30 times this term (may or may not include the exam in this count)
      • We'll meet on FRIDAY this and next week, in order to cover material for HW 1 (plus I have some business travel this term)
      • Default: we WILL meet on Friday unless I announce otherwise
  4. Course "Style"
      • Primarily algorithmic & experimental
      • Some theory, both mathematical & conceptual (much on statistics)
      • "Hands on" experience, interactive lectures/discussions
      • Broad survey of many ML subfields, including
        - "symbolic" (rules, decision trees, ILP)
        - "connectionist" (neural nets)
        - support vector machines, nearest-neighbors
        - theoretical ("COLT")
        - statistical ("Bayes rule")
        - reinforcement learning, genetic algorithms
  5. "MS vs. PhD" Aspects
      • MS'ish topics
        - mature, ready for practical application
        - first 2/3 to 3/4 of the semester
        - Naive Bayes, nearest-neighbors, decision trees, neural nets, support vector machines, ensembles, experimental methodology (10-fold cross validation, t-tests)
      • PhD'ish topics
        - inductive logic programming, statistical relational learning, reinforcement learning, SVMs, use of prior knowledge
        - Other machine learning material covered in Bioinformatics (CS 576/776) and Jerry Zhu's CS 838
  6. Two Major Goals
      • To understand what a learning system should do
      • To understand how (and how well) existing systems work
        - Issues in algorithm design
        - Choosing algorithms for applications
  7. Background Assumed
      • Languages
        - Java (see CS 368 tutorial online)
      • AI Topics
        - Search
        - FOPC
        - Unification
        - Formal deduction
      • Math
        - Calculus (partial derivatives)
        - Simple prob & stats
      • No previous ML experience assumed (so some overlap with CS 540)
  8. Requirements
      • Bi-weekly programming HW's
        - "hands on" experience valuable
        - HW0 – build a dataset
        - HW1 – simple ML algorithms and experimental methodology
        - HW2 – decision trees (?)
        - HW3 – neural nets (?)
        - HW4 – reinforcement learning (in a simulated world)
      • "Midterm" exam (in class, about 90% through the semester)
      • Find a project of your choosing
        - during the last 4-5 weeks of class
  9. Grading
      • HW's                  35%
      • "Midterm"             40%
      • Project               20%
      • Quality Discussion     5%
  10. Late HW's Policy
      • HW's due @ 4pm
      • You have 5 late days to use over the semester
        - (Fri 4pm -> Mon 4pm is 1 late "day")
      • SAVE UP late days!
        - extensions only for extreme cases
      • Penalty points after late days are exhausted
      • Can't be more than ONE WEEK late
  11. Academic Misconduct (also on course homepage)
      • All examinations, programming assignments, and written homeworks must be done individually. Cheating and plagiarism will be dealt with in accordance with University procedures (see the Academic Misconduct Guide for Students). Hence, for example, code for programming assignments must not be developed in groups, nor should code be shared. You are encouraged to discuss ideas, approaches, and techniques broadly with your peers, the TAs, or the instructor, but not at a level of detail where specific implementation issues are described by anyone. If you have any questions on this, please ask the instructor before you act.
  12. What Do You Think Learning Means?
  13. What is Learning?
      • "Learning denotes changes in the system that … enable the system to do the same task … more effectively the next time."
        - Herbert Simon
      • "Learning is making useful changes in our minds."
        - Marvin Minsky
  14. Today's Topics
      • Memorization as Learning
      • Feature Space
      • Supervised ML
      • K-NN (K-Nearest Neighbor)
  15. Memorization (Rote Learning)
      • Employed by the first machine learning systems, in the 1950s
        - Samuel's Checkers program
        - Michie's MENACE: Matchbox Educable Noughts and Crosses Engine
      • Prior to these, some people believed computers could not improve at a task with experience
  16. Rote Learning is Limited
      • Memorize I/O pairs and perform exact matching with new inputs
      • If the computer has not seen the precise case before, it cannot apply its experience
      • Want the computer to "generalize" from prior experience
  17. Some Settings in Which Learning May Help
      • Given an input, what is the appropriate response (output/action)?
        - Game playing – board state/move
        - Autonomous robots (e.g., driving a vehicle) – world state/action
        - Video game characters – state/action
        - Medical decision support – symptoms/treatment
        - Scientific discovery – data/hypothesis
        - Data mining – database/regularity
  18. Broad Paradigms of Machine Learning
      • Inducing Functions from I/O Pairs
        - Decision trees (e.g., Quinlan's C4.5 [1993])
        - Connectionism / neural networks (e.g., backprop)
        - Nearest-neighbor methods
        - Genetic algorithms
        - SVMs
      • Learning without Feedback/Teacher
        - Conceptual clustering
        - Self-organizing systems
        - Discovery systems
        - (Not in Mitchell's textbook; covered in CS 776)
  19. IID (Completion of Lec #2)
      • We are assuming examples are IID: independently, identically distributed
      • E.g., we are ignoring temporal dependencies (covered in time-series learning)
      • E.g., we assume the learner has no say in which examples it gets (covered in active learning)
  20. Supervised Learning Task Overview
      • (Diagram) Real World -> Feature Space -> Concepts/Classes/Decisions
        - Real World -> Feature Space: feature selection (usually done by humans) (HW 0)
        - Feature Space -> Concepts/Classes/Decisions: classification rule construction (done by the learning algorithm) (HW 1-3)
  21. Supervised Learning Task Overview (cont.)
      • Note: the mappings on the previous slide are not necessarily 1-to-1
        - Bad for the first mapping?
        - Good for the second (in fact, it's the goal!)
  22. Empirical Learning: Task Definition
      • Given
        - A collection of positive examples of some concept/class/category (i.e., members of the class) and, possibly, a collection of negative examples (i.e., non-members)
      • Produce
        - A description that covers (includes) all/most of the positive examples and none/few of the negative examples  [The Key Point!]
        - (and, hopefully, properly categorizes most future examples!)
      • Note: one can easily extend this definition to handle more than two classes
  23. Example
      • (Figure: positive examples, negative examples, and a new symbol; how does this symbol classify?)
      • Concept
        - Solid red circle in a (regular?) polygon
      • What about?
        - Figures on the left side of the page
        - Figures drawn before 5pm 2/2/89 <etc>
  24. Concept Learning
      • Learning systems differ in how they represent concepts. From the same training examples:
        - Rules (AQ, FOIL): e.g., Φ <- X ^ Y,  Φ <- Z
        - Decision trees (C4.5, CART)
        - Neural nets (Backpropagation)
        - SVMs: e.g., If 5x1 + 9x2 - 3x3 > 12 Then +
  25. Feature Space
      • If examples are described in terms of values of features, they can be plotted as points in an N-dimensional space
        - e.g., an example with Size = Big, Color = Gray, Weight = 2500 is one such point
      • A "concept" is then a (possibly disjoint) volume in this space
  26. Learning from Labeled Examples
      • Most common and successful form of ML
      • (Venn diagram of + and - examples)
      • Examples – points in a multi-dimensional "feature space"
      • Concepts – a "function" that labels every point in feature space
        - (as +, -, and possibly ?)
  27. Brief Review
      • Conjunctive concept ("and")
        - Color(?obj1, red) ^ Size(?obj1, large)
      • Disjunctive concept ("or")
        - Color(?obj2, blue) v Size(?obj2, small)
      • More formally, a "concept" (over instances) is of the form
        - ∀x ∀y ∀z  F(x, y, z) -> Member(x, Class1)
  28. Empirical Learning and Venn Diagrams
      • (Venn diagram: feature space of labeled + and - points, with regions A and B)
      • Concept = A or B (disjunctive concept)
      • Examples = labeled points in feature space
      • Concept = a label for a set of points
  29. Aspects of an ML System
      • "Language" for representing classified examples  (HW 0)
      • "Language" for representing "concepts"
      • Technique for producing a concept "consistent" with the training examples  (other HW's)
      • Technique for classifying new instances
      • Each of these limits the expressiveness/efficiency of the supervised learning algorithm
  30. Nearest-Neighbor Algorithms
      • (a.k.a. exemplar models, instance-based learning (IBL), case-based learning)
      • Learning ≈ memorize training examples
      • Problem solving = find the most similar example in memory; output its category
      • (Venn diagram of + and - examples with a new point ?; "Voronoi Diagrams", pg 233)
  31. Simple Example: 1-NN
      • (1-NN ≡ one nearest neighbor)
      • Training Set
        - Ex 1: a=0, b=0, c=1   +
        - Ex 2: a=0, b=0, c=0   -
        - Ex 3: a=1, b=1, c=1   -
      • Test Example
        - a=0, b=1, c=0   ?
      • "Hamming Distance" to the test example
        - Ex 1 = 2
        - Ex 2 = 1
        - Ex 3 = 2
      • So output -  (the nearest neighbor, Ex 2, is negative)
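
To make the slide's arithmetic concrete, here is a minimal Python sketch of 1-NN with Hamming distance on exactly this training set. The helper names (hamming_distance, predict_1nn) are illustrative, not from any course code.

    # Minimal 1-NN with Hamming distance, reproducing the slide's example.
    def hamming_distance(x, y):
        """Number of features on which two Boolean feature vectors disagree."""
        return sum(a != b for a, b in zip(x, y))

    def predict_1nn(train, test):
        """Return the label of the single nearest training example."""
        features, label = min(train, key=lambda ex: hamming_distance(ex[0], test))
        return label

    train = [((0, 0, 1), '+'),   # Ex 1
             ((0, 0, 0), '-'),   # Ex 2
             ((1, 1, 1), '-')]   # Ex 3
    test = (0, 1, 0)

    print([hamming_distance(f, test) for f, _ in train])  # [2, 1, 2]
    print(predict_1nn(train, test))                       # '-' (Ex 2 is nearest)
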
  32. Sample Experimental Results (see UCI archive for more)
      • Testset correctness:

            Testbed             1-NN    D-Trees   Neural Nets
            Wisconsin Cancer     98%      95%        96%
            Heart Disease        78%      76%         ?
            Tumor                37%      38%         ?
            Appendicitis         83%      85%        86%

      • Simple algorithm works quite well!
  33. K-NN Algorithm
      • Collect the K nearest neighbors, select the majority classification (or somehow combine their classes)
      • What should K be?
        - It probably is problem dependent
        - Can use tuning sets (later) to select a good setting for K (a sketch follows this slide)
      • (Plot: tuning-set error rate vs. K = 1, 2, 3, 4, 5; shouldn't really "connect the dots" - why?)
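
A small sketch of the K-NN vote and of picking K on a tuning set, as described above. The distance argument is any pairwise distance (e.g., the hamming_distance from the 1-NN sketch), and the candidate K values and function names are illustrative assumptions.

    # Majority-vote K-NN plus tuning-set selection of K.
    from collections import Counter

    def knn_predict(train, test, k, distance):
        """Majority vote over the k nearest training examples."""
        neighbors = sorted(train, key=lambda ex: distance(ex[0], test))[:k]
        votes = Counter(label for _, label in neighbors)
        return votes.most_common(1)[0][0]

    def choose_k(train, tune, distance, candidate_ks=(1, 2, 3, 4, 5)):
        """Pick the K with the lowest error rate on a held-out tuning set."""
        def error_rate(k):
            wrong = sum(knn_predict(train, x, k, distance) != y for x, y in tune)
            return wrong / len(tune)
        return min(candidate_ks, key=error_rate)
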
  34. Data Representation
      • Creating a dataset of fixed-length feature vectors
      • Be sure to include – on a separate 8x11 sheet – a photo and a brief bio
      • HW0 out on-line
        - Due next Friday
  35. HW0 – Create Your Own Dataset (repeated from lecture #1)
      • Think about before next class
        - Read HW0 (on-line)
      • Google to find:
        - UCI archive (or UCI KDD archive)
        - UCI ML archive (UCI ML repository)
        - More links on HW0's web page
  36. HW0 – Your "Personal Concept"
      • Step 1: Choose a Boolean (true/false) concept
        - Books I like/dislike
        - Movies I like/dislike
        - WWW pages I like/dislike
          - Subjective judgment (can't articulate)
        - "Time will tell" concepts
          - Stocks to buy
          - Medical treatment
            - at time t, predict outcome at time (t + ∆t)
        - Sensory interpretation
          - Face recognition (see textbook)
          - Handwritten digit recognition
          - Sound recognition
        - Hard-to-program functions
  37. Some Real-World Examples
      • Car steering (Pomerleau, Thrun): digitized camera image -> learned function -> steering angle
      • Medical diagnosis (Quinlan): medical record (e.g., age=13, sex=M, wgt=18) -> learned function -> sick vs. healthy
      • DNA categorization
      • TV-pilot rating
      • Chemical-plant control
      • Backgammon playing
  38. HW0 – Your "Personal Concept"
      • Step 2: Choosing a feature space (the chosen features define a space)
        - We will use fixed-length feature vectors
          - Choose N features
          - Each feature has Vi possible values
          - Each example is represented by a vector of N feature values (i.e., is a point in the feature space)
          - e.g.: <red, 50, round>  (color, weight, shape)
        - Feature types (in HW0 we will use a subset; see next slide)
          - Boolean
          - Nominal
          - Ordered
          - Hierarchical
      • Step 3: Collect examples ("I/O" pairs)
  39. Standard Feature Types (for representing training examples – a source of "domain knowledge")
      • Nominal
        - No relationship among possible values
        - e.g., color є {red, blue, green}  (vs. color = 1000 Hertz)
      • Linear (or Ordered)
        - Possible values of the feature are totally ordered
        - e.g., size є {small, medium, large}  <- discrete
                weight є [0…500]               <- continuous
      • Hierarchical
        - Possible values are partially ordered in an ISA hierarchy
        - e.g., for shape: closed -> {polygon, continuous}; polygon -> {triangle, square}; continuous -> {circle, ellipse}
  40. Our Feature Types (for CS 760 HW's)
      • Discrete
        - tokens (char strings, w/o quote marks and spaces)
      • Continuous
        - numbers (int's or float's)
          - If only a few possible values (e.g., 0 & 1), use discrete
      • i.e., merge nominal and discrete-ordered (or convert discrete-ordered into 1, 2, …)
      • We will ignore hierarchical info and only use the leaf values (common approach)
  41. Example Hierarchy (KDD* Journal, Vol 5, No. 1-2, 2001, page 17)
      • Structure of one feature!
        - (Hierarchy levels: Product -> 99 product classes -> 2302 product subclasses -> ~30k products; nodes shown include Pct Foods, Tea, Canned Cat Food, Dried Cat Food, and "Friskies Liver, 250g")
      • "The need to be able to incorporate hierarchical (knowledge about data types) is shown in every paper."
        - From the editors' introduction to the special issue (on applications) of the KDD journal, Vol 15, 2001
      • * Officially, "Data Mining and Knowledge Discovery", Kluwer Publishers
  42. HW0: Creating Your Dataset
      • Example: IMDB has a lot of data that are not discrete or continuous or binary-valued for a target function (category)
      • (Schema sketch)
        - Studio: name, country, list of movies (Made)
        - Movie: title, genre, year, opening weekend box-office receipts, list of actors/actresses, release season
        - Director/Producer: name, year of birth, list of movies (Directed / Produced)
        - Actor: name, year of birth, gender, Oscar nominations, list of movies (Acted in)
  43. HW0: Sample DB
      • Choose a Boolean or binary-valued target function (category)
        - Opening weekend box-office receipts > $2 million
        - Movie is drama? (action, sci-fi, …)
        - Movies I like/dislike (e.g., TiVo)
  44. HW0: Representing as a Fixed-Length Feature Vector
      • <discuss on chalkboard>
      • Note: some advanced ML approaches do not require such "feature mashing" (e.g., ILP)
  45. [email_address]
      • David Jensen's group at UMass uses Naïve Bayes and other ML algorithms on the IMDB
        - Opening weekend box-office receipts > $2 million
          - 25 attributes
          - Accuracy = 83.3%
          - Default accuracy = 56% (default algorithm?)
        - Movie is drama?
          - 12 attributes
          - Accuracy = 71.9%
          - Default accuracy = 51%
        - http://kdl.cs.umass.edu/proximity/about.html
  46. First Algorithm in Detail
      • K-Nearest Neighbors / Instance-Based Learning (k-NN/IBL)
        - Distance functions
        - Kernel functions
        - Feature selection (applies to all ML algorithms)
        - IBL summary
      • Chapter 8 of Mitchell
  47. Some Common Jargon (Discrete/Real Outputs)
      • Classification
        - Learning a discrete-valued function
      • Regression
        - Learning a real-valued function
      • IBL is easily extended to regression tasks (and to multi-category classification); a sketch follows this slide
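
One way to see the extension to regression mentioned above: replace the majority vote with an average of the neighbors' real-valued outputs. A minimal sketch under that reading; knn_regress and the distance callback (e.g., the hamming_distance from the earlier sketch) are illustrative.

    def knn_regress(train, test, k, distance):
        """Predict the mean of the k nearest neighbors' real-valued outputs."""
        neighbors = sorted(train, key=lambda ex: distance(ex[0], test))[:k]
        return sum(value for _, value in neighbors) / k
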
  48. Variations on a Theme (from Aha, Kibler and Albert in ML Journal)
      • IB1 – keep all examples
      • IB2 – keep the next instance only if it is incorrectly classified by using the previous instances (sketched after slide 49)
        - Uses less storage (good)
        - Order dependent (bad)
        - Sensitive to noisy data (bad)
  49. Variations on a Theme (cont.)
      • IB3 – extend IB2 to more intelligently decide which examples to keep (see article)
        - Better handling of noisy data
      • Another idea – cluster groups, keep one example from each (median/centroid)
        - Less storage, faster lookup
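
A rough sketch of the IB2 idea from slide 48: store a new instance only when the instances kept so far misclassify it (here via a 1-NN check). This is a simplified reading of the idea, not the exact algorithm published by Aha, Kibler and Albert; stream, stored, and distance are illustrative names.

    def ib2_filter(stream, distance):
        """Keep an example only if the examples stored so far misclassify it (1-NN)."""
        stored = []
        for features, label in stream:
            if stored:
                _, nearest_label = min(stored, key=lambda ex: distance(ex[0], features))
                if nearest_label == label:
                    continue          # correctly classified: do not store it
            stored.append((features, label))
        return stored

Because storage depends on the order in which examples arrive, this sketch also makes the slide's "order dependent" drawback easy to see.
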
  50. Distance Functions
      • Key issue in IBL (instance-based learning)
      • One approach: assign weights to each feature
  51. Distance Functions (sample)
      • A weighted sum over the features (a sketch follows this slide):
        - dist(e1, e2) = Σi wi · disti(e1, e2)
      • where
        - dist(e1, e2) is the distance between examples 1 and 2
        - wi is a numeric weighting factor
        - disti(e1, e2) is the distance for feature i only between examples 1 and 2
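
A small sketch of such a weighted distance, assuming numeric features and per-feature absolute differences; the weights and the choice of per-feature distance are placeholders rather than anything prescribed by the slides.

    # Weighted distance: sum of per-feature distances, each scaled by a weight w_i.
    # Per-feature absolute difference is just one reasonable choice for dist_i.
    def weighted_distance(x, y, weights):
        return sum(w * abs(a - b) for w, a, b in zip(weights, x, y))

    # Example: weight the first feature twice as heavily as the others.
    print(weighted_distance((1.0, 3.0, 5.0), (2.0, 3.0, 1.0), weights=(2.0, 1.0, 1.0)))  # 6.0
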
  52. Kernel Functions and k-NN
      • The term "kernel" comes from statistics
      • Major topic in support vector machines (SVMs)
      • Weights the interaction between pairs of examples
  53. Kernel Functions and k-NN (continued)
      • Assume we have
        - the k nearest neighbors e1, ..., ek
        - associated output categories O1, ..., Ok
      • Then the output for test case et is
        - argmaxc Σi=1..k K(ei, et) · δ(Oi, c)
        - where K is the kernel and δ(Oi, c) is the "delta" function (= 1 if Oi = c, else = 0)
  54. Sample Kernel Functions K(ei, et)
      • K(ei, et) = 1
        - simple majority vote
      • K(ei, et) = 1 / dist(ei, et)
        - inverse distance weighting
      • (Diagram: example '?' has three neighbors, two of which are '-' and one of which is '+')
        - with the majority-vote kernel, ? is classified as -
        - with the inverse-distance kernel, ? could be classified as +
  55. Gaussian Kernel
      • Heavily used in SVMs
      • K(ei, et) = exp( - dist(ei, et)² / (2σ²) )   (the base e is Euler's constant; σ controls the width)
      • (A sketch of kernel-weighted voting follows this slide)
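
To make the kernel-weighted vote of slide 53 concrete, here is a sketch that can plug in either the inverse-distance kernel or a Gaussian kernel. The sigma value, the convention of passing the kernels a precomputed distance, and the function names are illustrative assumptions; distance is any pairwise distance such as the earlier hamming_distance.

    # Kernel-weighted k-NN vote: each neighbor votes for its own class with
    # weight K(e_i, e_t); the predicted class maximizes the summed weights.
    import math
    from collections import defaultdict

    def inverse_distance_kernel(d):
        """1/d weighting; an exact match (d == 0) dominates the vote."""
        return 1.0 / d if d > 0 else float('inf')

    def gaussian_kernel(d, sigma=1.0):
        """exp(-d^2 / (2 sigma^2)); sigma here is an illustrative choice."""
        return math.exp(-(d ** 2) / (2 * sigma ** 2))

    def kernel_knn_predict(train, test, k, kernel, distance):
        """Weighted vote over the k nearest neighbors."""
        neighbors = sorted(train, key=lambda ex: distance(ex[0], test))[:k]
        scores = defaultdict(float)
        for features, label in neighbors:
            scores[label] += kernel(distance(features, test))
        return max(scores, key=scores.get)

With kernel=lambda d: 1.0 this reduces to the simple majority vote of slide 54.
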
  56. Local Learning
      • Collect the k nearest neighbors
      • Give them to some supervised ML algorithm
      • Apply the learned model to the test example (a sketch follows this slide)
      • (Diagram: train only on the labeled examples nearest to ?)
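
A hedged sketch of that recipe: gather the k nearest neighbors of the test point, fit some supervised learner on just those, and apply it. The fit_model callback is an assumed interface; with the trivial majority-class learner shown it reduces to plain k-NN, but fit_model could equally build a decision tree or neural net on the neighborhood.

    def majority_class(local_train):
        """Trivial stand-in learner: always predict the neighborhood's majority label."""
        labels = [label for _, label in local_train]
        majority = max(set(labels), key=labels.count)
        return lambda features: majority

    def local_learn_predict(train, test, k, distance, fit_model=majority_class):
        """Fit a model on only the k nearest neighbors, then apply it to the test point."""
        neighbors = sorted(train, key=lambda ex: distance(ex[0], test))[:k]
        model = fit_model(neighbors)   # train on the local neighborhood only
        return model(test)
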
  57. Instance-Based Learning (IBL) and Efficiency
      • IBL algorithms postpone work from training to testing
        - Pure k-NN/IBL just memorizes the training data
        - Sometimes called lazy learning
      • Computationally intensive
        - Match all features of all training examples
  58. Instance-Based Learning (IBL) and Efficiency (cont.)
      • Possible speed-ups
        - Use a subset of the training examples (Aha)
        - Use clever data structures (A. Moore)
          - KD trees, hash tables, Voronoi diagrams
        - Use a subset of the features
  59. Number of Features and Performance
      • Too many features can hurt test set performance
      • Too many irrelevant features mean many spurious correlation possibilities for an ML algorithm to detect
        - "Curse of dimensionality"
  60. Feature Selection and ML (general issue for ML)
      • Filtering-based feature selection
        - all features -> FS algorithm -> subset of features -> ML algorithm -> model
      • Wrapper-based feature selection
        - all features -> FS algorithm <-> ML algorithm -> model
        - the FS algorithm calls the ML algorithm many times and uses it to help select features
  61. Feature Selection as a Search Problem
      • State = set of features
        - Start state = empty set (forward selection) or full set (backward selection)
        - Goal test = highest-scoring state
      • Operators
        - add/subtract features
      • Scoring function
        - accuracy on the training (or tuning) set of the ML algorithm using this state's feature set
  62. Forward and Backward Selection of Features
      • Hill-climbing ("greedy") search, scored by accuracy on the tuning set (our heuristic function); a sketch follows this slide
        - Forward: start from {} (50%); adding F1 gives {F1} (62%), ..., adding FN gives {FN} (71%); keep greedily adding features
        - Backward: start from {F1, F2, ..., FN} (73%); subtracting F1 gives {F2, ..., FN} (79%), subtracting F2 gives another candidate; keep greedily removing features
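
The sketch referenced on slide 62: greedy forward selection in the wrapper style, where evaluate(feature_subset) is an assumed callback that trains the ML algorithm with only those features and returns tuning-set accuracy. Backward selection is the mirror image, starting from the full set and greedily removing features.

    # Greedy forward feature selection (hill climbing): repeatedly add the single
    # feature that most improves tuning-set accuracy; stop when nothing helps.
    def forward_selection(all_features, evaluate):
        selected = set()
        best_score = evaluate(selected)
        while True:
            candidates = [(evaluate(selected | {f}), f)
                          for f in all_features if f not in selected]
            if not candidates:
                break
            score, feature = max(candidates)
            if score <= best_score:        # hill climbing: stop at a local optimum
                break
            selected.add(feature)
            best_score = score
        return selected, best_score
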
  63. Forward vs. Backward Feature Selection
      • Forward
        - Faster in early steps because fewer features to test
        - Fast for choosing a small subset of the features
        - Misses useful features whose usefulness requires other features (feature synergy)
      • Backward
        - Fast for choosing all but a small subset of the features
        - Preserves useful features whose usefulness requires other features
          - Example: area is important, but the features are length and width
  64. Some Comments on k-NN
      • Positive
        - Easy to implement
        - Good "baseline" algorithm / experimental control
        - Incremental learning is easy
        - Psychologically plausible model of human memory
      • Negative
        - No insight into the domain (no explicit model)
        - Choice of distance function is problematic
        - Doesn't exploit/notice structure in examples
  65. Questions about IBL (Breiman et al. – CART book)
      • Computationally expensive to save all examples; slow classification of new examples
        - Addressed by IB2/IB3 of Aha et al. and the work of A. Moore (CMU; now Google)
        - Is this really a problem?
  66. Questions about IBL (Breiman et al. – CART book)
      • Intolerant of noise
        - Addressed by IB3 of Aha et al.
        - Addressed by the k-NN version
        - Addressed by feature selection – can discard the noisy feature
      • Intolerant of irrelevant features
        - Since the algorithm is very fast, can experimentally choose good feature sets (Kohavi, Ph.D. – now at Amazon)
  67. More IBL Criticisms
      • High sensitivity to the choice of similarity (distance) function
        - Euclidean distance might not be the best choice
      • Handling non-numeric features and missing feature values is not natural, but doable
        - How might we do this? (Part of HW1)
      • No insight into the task (the learned concept is not interpretable)
  68. Summary
      • IBL can be a very effective machine learning algorithm
      • Good "baseline" for experiments
