Introduction to machine learning



  1. Introduction to Machine Learning
  2. Reminders
     - HW6 posted
     - All HW solutions posted
  3. A Review
     - You will be asked to learn the parameters of a probabilistic model of C given A, B, using some data
     - The parameters model the entries of a conditional probability distribution as coin flips:

       P(C|A,B):   ¬A,¬B: θ4    ¬A,B: θ3    A,¬B: θ2    A,B: θ1
  4. A Review
     - Example data (number of observations for each A, B, C combination):

       A B C | #obs
       0 0 0 |  5
       0 0 1 |  6
       0 1 0 |  7
       0 1 1 |  4
       1 0 0 | 10
       1 0 1 |  1
       1 1 0 |  5
       1 1 1 |  3

     - ML estimates: θ4 = 6/11 (¬A,¬B), θ3 = 4/11 (¬A,B), θ2 = 1/11 (A,¬B), θ1 = 3/8 (A,B)
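The ML computation on this slide is simple enough to sketch in a few lines of Python (added here, not part of the deck): each parameter of P(C=1 | A,B) is just the fraction of C=1 observations among those that share the same (A,B) values.

```python
# Counts copied from the slide's table: (A, B, C) -> number of observations.
counts = {
    (0, 0, 0): 5,  (0, 0, 1): 6,
    (0, 1, 0): 7,  (0, 1, 1): 4,
    (1, 0, 0): 10, (1, 0, 1): 1,
    (1, 1, 0): 5,  (1, 1, 1): 3,
}

def ml_estimate(a, b):
    """ML estimate of P(C=1 | A=a, B=b): fraction of C=1 among matching rows."""
    pos, neg = counts[(a, b, 1)], counts[(a, b, 0)]
    return pos / (pos + neg)

for a in (0, 1):
    for b in (0, 1):
        print(f"P(C=1 | A={a}, B={b}) = {ml_estimate(a, b):.3f}")
# Reproduces the slide's estimates: 6/11, 4/11, 1/11, and 3/8.
```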
  5. Maximum Likelihood for BN
     - For any BN, the ML parameters of any CPT can be derived as the fraction of observed values in the data
     - Example (Burglar / Earthquake / Alarm network, N = 1000 samples): E observed 500 times → P(E) = 0.5; B observed 200 times → P(B) = 0.2
     - CPT for the Alarm node, from the observed fractions:

       E B | P(A|E,B)
       T T | 19/20   = 0.95
       T F | 170/500 = 0.34
       F T | 188/200 = 0.94
       F F | 1/380   ≈ 0.003
  6. Maximum A Posteriori
     - Example data: same counts as on slide 4
     - MAP estimates assuming a Beta prior with a = b = 3, i.e. virtual counts of 2 heads and 2 tails:
       θ4 = 8/15 (¬A,¬B), θ3 = 6/15 (¬A,B), θ2 = 3/15 (A,¬B), θ1 = 5/12 (A,B)
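The MAP version only changes the counts: a Beta(a, b) prior contributes a−1 virtual positive and b−1 virtual negative observations. A minimal sketch (not from the deck), reusing the `counts` table defined above:

```python
def map_estimate(a_val, b_val, alpha=3, beta=3):
    """MAP estimate of P(C=1 | A=a_val, B=b_val) under a Beta(alpha, beta)
    prior, i.e. (alpha - 1) virtual heads and (beta - 1) virtual tails."""
    pos = counts[(a_val, b_val, 1)] + (alpha - 1)
    neg = counts[(a_val, b_val, 0)] + (beta - 1)
    return pos / (pos + neg)

# E.g. map_estimate(0, 0) = (6 + 2) / (11 + 4) = 8/15, matching theta_4 above.
```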
  7. Discussion
     - Where are we now?
     - How does it relate to the future of AI?
     - How will you be tested on it?
  8. Motivation
     - An agent has made observations (data)
     - It must now make sense of them (hypotheses)
       - Hypotheses alone may be important (e.g., in basic science)
       - For inference (e.g., forecasting)
       - To take sensible actions (decision making)
     - A basic component of economics, the social and hard sciences, engineering, …
  9. Topics in Machine Learning
     - Applications: document retrieval, document classification, data mining, computer vision, scientific discovery, robotics, …
     - Tasks & settings: classification, ranking, clustering, regression, decision-making; supervised, unsupervised, semi-supervised, active, reinforcement learning
     - Techniques: Bayesian learning, decision trees, neural networks, support vector machines, boosting, case-based reasoning, dimensionality reduction, …
  10. What is Learning?
     - Mostly generalization from experience: "Our experience of the world is specific, yet we are able to formulate general theories that account for the past and predict the future" (M.R. Genesereth and N.J. Nilsson, Logical Foundations of AI, 1987)
     - → Concepts, heuristics, policies
     - Supervised vs. unsupervised learning
  11. Inductive Learning
     - Basic form: learn a function from examples
     - f is the unknown target function; an example is a pair (x, f(x))
     - Problem: find a hypothesis h such that h ≈ f, given a training set D of examples
     - This is an instance of supervised learning:
       - Classification task: f(x) ∈ {0, 1, …, C}
       - Regression task: f(x) ∈ ℝ
  12. Inductive learning method
     - Construct/adjust h to agree with f on the training set
     - (h is consistent if it agrees with f on all examples)
     - E.g., curve fitting [figure not included]
  13.-16. Inductive learning method (same text repeated over a sequence of curve-fitting figures, not included here; each successive slide shows another hypothesis h that agrees with f on the training set)
  17. Inductive learning method
     - Construct/adjust h to agree with f on the training set
     - (h is consistent if it agrees with f on all examples)
     - E.g., curve fitting: h = D is a trivial, but perhaps uninteresting, solution (caching); a sketch of this idea follows below
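A small NumPy sketch of the point slides 12-17 make (an illustration added here, not from the deck): many hypotheses can agree with f on the training set, from a straight line that misses some points to a degree-(N−1) polynomial that interpolates all N of them, the curve-fitting analogue of the trivial h = D.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)                               # training inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(8)   # noisy targets f(x)

line   = np.polyfit(x, y, deg=1)   # simple h: not consistent, may generalize
interp = np.polyfit(x, y, deg=7)   # 8 points, degree 7: fits every example

# The interpolant is "consistent": zero training error, like caching the data.
print(np.abs(np.polyval(interp, x) - y).max())   # ~0 (up to round-off)
print(np.abs(np.polyval(line, x) - y).max())     # noticeably larger
```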
  18. Classification Task
     - The concept f(x) takes on the values True and False
     - An example is positive if f is True, else it is negative
     - The set X of all examples is the example set
     - The training set is a subset of X (a small one!)
  19. Logic-Based Inductive Learning
     - Here, examples (x, f(x)) take on discrete values
  20. Logic-Based Inductive Learning
     - Here, examples (x, f(x)) take on discrete values
     - Concept: note that the training set does not say whether an observable predicate is pertinent or not
  21. Rewarded Card Example
     - Deck of cards, with each card designated by [r,s], its rank and suit, and some cards "rewarded"
     - Background knowledge KB:
       ((r=1) ∨ … ∨ (r=10)) ⇔ NUM(r)
       ((r=J) ∨ (r=Q) ∨ (r=K)) ⇔ FACE(r)
       ((s=S) ∨ (s=C)) ⇔ BLACK(s)
       ((s=D) ∨ (s=H)) ⇔ RED(s)
     - Training set D:
       REWARD([4,C]) ∧ REWARD([7,C]) ∧ REWARD([2,S]) ∧ ¬REWARD([5,H]) ∧ ¬REWARD([J,S])
  22. Rewarded Card Example (continued)
     - Same background knowledge KB and training set D as above
     - Possible inductive hypothesis: h ≡ (NUM(r) ∧ BLACK(s) ⇔ REWARD([r,s]))
     - There are several possible inductive hypotheses
  23. Learning a Logical Predicate (Concept Classifier)
     - Set E of objects (e.g., cards)
     - Goal predicate CONCEPT(x), where x is an object in E, that takes the value True or False (e.g., REWARD)
     - Example: CONCEPT may describe the precondition of an action, e.g., Unstack(C,A)
       - E is the set of states
       - CONCEPT(x) ⇔ HANDEMPTY ∈ x, BLOCK(C) ∈ x, BLOCK(A) ∈ x, CLEAR(C) ∈ x, ON(C,A) ∈ x
       - Learning CONCEPT is a step toward learning an action description
  24. Learning a Logical Predicate (Concept Classifier)
     - Set E of objects (e.g., cards)
     - Goal predicate CONCEPT(x), where x is an object in E, that takes the value True or False (e.g., REWARD)
     - Observable predicates A(x), B(x), … (e.g., NUM, RED)
     - Training set: values of CONCEPT for some combinations of values of the observable predicates
  25. Learning a Logical Predicate (continued)
     - Same setting as slide 24
     - Find a representation of CONCEPT in the form CONCEPT(x) ⇔ S(A,B,…), where S(A,B,…) is a sentence built with the observable predicates, e.g.:
       CONCEPT(x) ⇔ A(x) ∧ (¬B(x) ∨ C(x))
  26. Hypothesis Space
     - A hypothesis is any sentence of the form CONCEPT(x) ⇔ S(A,B,…), where S(A,B,…) is a sentence built using the observable predicates
     - The set of all hypotheses is called the hypothesis space H
     - A hypothesis h agrees with an example if it gives the correct value of CONCEPT
  27. Inductive Learning Scheme
     [Diagram: example set X = {[A, B, …, CONCEPT]} containing positive and negative examples; a training set D is drawn from X, and an inductive hypothesis h is chosen from the hypothesis space H = {CONCEPT(x) ⇔ S(A,B,…)}]
  28. Size of Hypothesis Space
     - n observable predicates
     - 2^n entries in the truth table defining CONCEPT, and each entry can be filled with True or False
     - In the absence of any restriction (bias), there are 2^(2^n) hypotheses to choose from
     - n = 6 → 2×10^19 hypotheses!
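A quick sanity check of this count (added here, not part of the deck):

```python
# n observable predicates -> 2**n truth-table rows for CONCEPT, and each row
# can independently be True or False, giving 2**(2**n) distinct hypotheses.
n = 6
print(2 ** (2 ** n))   # 18446744073709551616, i.e. about 2 x 10^19
```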
  29. Multiple Inductive Hypotheses
     h1 ≡ NUM(r) ∧ BLACK(s) ⇔ REWARD([r,s])
     h2 ≡ BLACK(s) ∧ ¬(r=J) ⇔ REWARD([r,s])
     h3 ≡ ([r,s]=[4,C]) ∨ ([r,s]=[7,C]) ∨ ([r,s]=[2,S]) ⇔ REWARD([r,s])
     h4 ≡ ¬([r,s]=[5,H]) ∧ ¬([r,s]=[J,S]) ⇔ REWARD([r,s])
     All of these agree with all the examples in the training set
  30. Multiple Inductive Hypotheses (continued)
     - h1 through h4 all agree with all the examples in the training set (verified in the sketch below)
     - Need for a system of preferences, called an inductive bias, to compare possible hypotheses
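As a concrete check (a sketch added here; the predicate helpers are just inlined Python versions of the KB on slide 21), all four hypotheses can be tested against the five training examples:

```python
# Inlined versions of the background predicates from the Rewarded Card KB.
NUM   = lambda r: r in range(1, 11)    # ranks 1..10
BLACK = lambda s: s in ("S", "C")      # spades, clubs

# The four candidate hypotheses h1..h4, each mapping (rank, suit) -> REWARD.
h1 = lambda r, s: NUM(r) and BLACK(s)
h2 = lambda r, s: BLACK(s) and r != "J"
h3 = lambda r, s: (r, s) in {(4, "C"), (7, "C"), (2, "S")}
h4 = lambda r, s: (r, s) not in {(5, "H"), ("J", "S")}

# Training set D: (rank, suit) -> observed REWARD value.
D = {(4, "C"): True, (7, "C"): True, (2, "S"): True,
     (5, "H"): False, ("J", "S"): False}

for name, h in [("h1", h1), ("h2", h2), ("h3", h3), ("h4", h4)]:
    consistent = all(h(r, s) == reward for (r, s), reward in D.items())
    print(name, "consistent with D:", consistent)   # True for all four
```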
  31. Notion of Capacity
     - Capacity refers to the ability of a machine to learn any training set without error
     - A machine with too much capacity is like a botanist with photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything he has seen before
     - A machine with too little capacity is like the botanist's lazy brother, who declares that if it's green, it's a tree
     - Good generalization can only be achieved when the right balance is struck between the accuracy attained on the training set and the capacity of the machine
  32. Keep-It-Simple (KIS) Bias
     - Examples:
       - Use far fewer observable predicates than training examples
       - Constrain the learnt predicate, e.g., to use only "high-level" observable predicates such as NUM, FACE, BLACK, and RED, and/or to have simple syntax
     - Motivation:
       - If a hypothesis is too complex, it is not worth learning (data caching does the job as well)
       - There are far fewer simple hypotheses than complex ones, hence the hypothesis space is smaller
  33. Keep-It-Simple (KIS) Bias (continued)
     - Same examples and motivation as slide 32
     - Einstein: "A theory must be as simple as possible, but not simpler than this"
  34. Keep-It-Simple (KIS) Bias (continued)
     - If the bias allows only sentences S that are conjunctions of k << n predicates picked from the n observable predicates, then the size of H is O(n^k)
  35. Relation to Bayesian Learning
     - Recall MAP learning: θ_MAP = argmax_θ P(D|θ) P(θ)
       - P(D|θ) is the likelihood (~ accuracy on the training set)
       - P(θ) is the prior (~ inductive bias)
     - Equivalently: θ_MAP = argmin_θ [−log P(D|θ) − log P(θ)], where the first term measures accuracy on the training set and the second is the minimum-length encoding of θ
     - MAP learning is a form of KIS bias
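To make the argmax-to-argmin step explicit (a standard manipulation, spelled out here rather than taken from the deck): since log is monotone increasing, maximizing the posterior is the same as minimizing its negative log, which splits the objective into a data-fit term and a complexity term:

```latex
\theta_{MAP} = \arg\max_\theta \, P(D \mid \theta)\, P(\theta)
             = \arg\min_\theta \, \bigl[\, -\log P(D \mid \theta) \;-\; \log P(\theta) \,\bigr]
```

Under a minimum-description-length reading, −log P(θ) is the encoding length of the hypothesis, so preferring high-prior (simple) hypotheses is exactly a keep-it-simple bias.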
  36. Supervised Learning Flow Chart
     [Diagram: a target concept (the unknown concept we want to approximate) generates datapoints; the observations we have seen form the training set, which the learner, given a hypothesis space and a choice of learning algorithm, turns into an inductive hypothesis; its predictions are assessed on a test set (observations we will see in the future), a better quantity for assessing performance]
  37. Capacity is Not the Only Criterion
     - Accuracy on the training set isn't the best measure of performance
     [Diagram: learn on the training set D, a subset of the example set X; test on the rest of X]
  38. Generalization Error
     - A hypothesis h is said to generalize well if it achieves low error on all examples in X
     [Diagram: learn and test over the whole example set X]
  39. Assessing Performance of a Learning Algorithm
     - Samples from all of X are typically unavailable
     - So hold out part of the training set:
       - Train on the remaining training set
       - Test on the excluded instances
       - This is cross-validation
  40. Cross-Validation
     - Split the original set of examples D; train on one part
     [Diagram: examples split into a training portion and a held-out portion]
  41. Cross-Validation
     - Evaluate the hypothesis on the testing set
  42. Cross-Validation
     - Evaluate the hypothesis on the testing set
     [Diagram: the hypothesis's predictions on the held-out examples]
  43. Cross-Validation
     - Compare the true concept against the prediction: 9/13 correct in the example shown
  44. Common Splitting Strategies
     - k-fold cross-validation: split the dataset into k folds; each fold serves once as the test set while the remaining folds are used for training
     - Leave-one-out: the special case where each fold is a single example
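A minimal k-fold cross-validation sketch (added here, not from the deck; `fit` and `error` are hypothetical stand-ins for an actual learner and loss function):

```python
import random

def k_fold_cv(examples, k, fit, error, seed=0):
    """Average held-out error of `fit` over k folds of `examples`."""
    examples = examples[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(examples)
    folds = [examples[i::k] for i in range(k)]
    total = 0.0
    for i, test_fold in enumerate(folds):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        h = fit(train)                         # train on the other k-1 folds
        total += error(h, test_fold)           # evaluate on the held-out fold
    return total / k

# Leave-one-out is the special case k = len(examples).
```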
  45. Cardinal Sins of Machine Learning
     - Thou shalt not:
       - Train on examples in the testing set
       - Form assumptions by "peeking" at the testing set, then formulating inductive bias
  46. Tennis Example
     - Evaluate a learning algorithm that learns hypotheses of the form PlayTennis = S(Temperature, Wind) [data table not included]
  47. Tennis Example
     - Trained hypothesis: PlayTennis = (T = Mild or Cool) ∧ (W = Weak)
     - Training errors = 3/10; testing errors = 4/4
  48. Tennis Example
     - Trained hypothesis: PlayTennis = (T = Mild or Cool)
     - Training errors = 3/10; testing errors = 1/4
  49. Tennis Example
     - Trained hypothesis: PlayTennis = (T = Mild or Cool)
     - Training errors = 3/10; testing errors = 2/4
  50. Tennis Example
     - Sum of all testing errors across the three folds: 7/12 (a sketch of this evaluation follows below)
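The slides' data tables did not survive the export, so the following sketch uses made-up (Temperature, Wind, PlayTennis) rows purely to show how the error counts above would be computed; only the hypothesis from slide 47 comes from the deck.

```python
def error_count(h, examples):
    """Number of examples on which hypothesis h disagrees with the label."""
    return sum(h(t, w) != play for t, w, play in examples)

# Hypothesis from slide 47: PlayTennis = (T = Mild or Cool) and (W = Weak).
h47 = lambda t, w: t in ("Mild", "Cool") and w == "Weak"

# Hypothetical labeled rows standing in for the missing table.
test_fold = [("Hot", "Strong", False), ("Mild", "Weak", True),
             ("Cool", "Strong", True), ("Mild", "Strong", False)]

wrong = error_count(h47, test_fold)
print(f"Testing errors = {wrong}/{len(test_fold)}")   # 1/4 on these rows
```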
  51. How to Construct a Better Learner?
     - Ideas?
  52. Next Time
     - Decision tree learning
