Machine Learning and Inductive Inference
Transcript

  • 1. Machine Learning and Inductive Inference (Hendrik Blockeel, 2001-2002)
  • 2. 1 Introduction
      - Practical information
      - What is "machine learning and inductive inference"?
      - What is it useful for? (some example applications)
      - Different learning tasks
      - Data representation
      - Brief overview of approaches
      - Overview of the course
  • 3. Practical information about the course
      - 10 lectures (2h) + 4 exercise sessions (2.5h)
      - Audience with diverse backgrounds
      - Course material:
          - Book: Machine Learning (Mitchell, 1997, McGraw-Hill)
          - Slides & notes, http://www.cs.kuleuven.ac.be/~hendrik/ML/
      - Examination:
          - oral exam (20') with written preparation (+/- 2h)
          - 2/3 theory, 1/3 exercises
          - only topics discussed in the lectures / exercises
  • 4. What is machine learning?
      - The study of how to make programs improve their performance on certain tasks from their own experience
          - "performance" = speed, accuracy, ...
          - "experience" = a set of previously seen cases ("observations")
      - For instance (a simple method):
          - experience: taking action A in situation S yielded result R
          - situation S arises again
              - if R was undesirable: try something else
              - if R was desirable: try action A again
  • 5. This is a very simple example
      - it only works if precisely the same situation is encountered
          - what if a similar situation arises? -> need for generalisation
      - how about choosing another action even if a good one is already known? (you might find a better one) -> need for exploration
      - This course focuses mostly on generalisation, or inductive inference
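The "simple method" of slides 4-5 can be sketched in a few lines of Python. This is only an illustration of the idea, not anything from the course material; the situations, actions, and results are hypothetical. Note how it exposes exactly the two weaknesses named above: it never generalises to similar situations, and it stops exploring once any desirable action is found.

```python
import random

# Minimal sketch of the "simple method": remember which action worked in which
# exact situation and reuse it; after an undesirable result, try something else.
# All situation/action names here are hypothetical.
class MemoryLearner:
    def __init__(self, actions):
        self.actions = list(actions)
        self.good = {}   # situation -> action that yielded a desirable result
        self.bad = {}    # situation -> set of actions that yielded bad results

    def choose(self, situation):
        if situation in self.good:           # exactly the same situation again
            return self.good[situation]
        tried = self.bad.get(situation, set())
        candidates = [a for a in self.actions if a not in tried] or self.actions
        return random.choice(candidates)     # no generalisation to *similar* situations

    def observe(self, situation, action, desirable):
        if desirable:
            self.good[situation] = action
        else:
            self.bad.setdefault(situation, set()).add(action)

learner = MemoryLearner(["A", "B", "C"])
learner.observe("S1", "A", desirable=True)
print(learner.choose("S1"))                  # -> A (repeats the action that worked)
print(learner.choose("S2"))                  # unseen situation: nothing carries over
```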
  • 6. Inductive inference
      - = reasoning from the specific to the general
      - e.g. statistics: from a sample, infer properties of the population
      - [Figure: sample vs. population; observation: "these dogs are all brown"; hypothesis: "all dogs are brown"]
  • 7. Note: inductive inference is more general than statistics
      - statistics mainly consists of numerical methods for inference
          - infer the mean, probability distribution, ... of a population
      - other approaches:
          - find a symbolic definition of a concept ("concept learning")
          - find laws with complicated structure that govern the data
          - study induction from a logical, philosophical, ... point of view
  • 8. Applications of inductive inference
      - Machine learning
          - "sample" of observations = experience
          - generalizing to the population = finding patterns in the observations that generally hold and may be used for future tasks
      - Knowledge discovery (data mining)
          - "sample" = database
          - generalizing = finding patterns that hold in this database and can also be expected to hold on similar data not in the database
          - discovered knowledge = a comprehensible description of these patterns
  • 9. What is it useful for?
      - Scientifically: for understanding learning and intelligence in humans and animals
          - interesting for psychologists, philosophers, biologists, ...
      - More practically:
          - for building AI systems
              - expert systems that improve automatically with time
              - systems that help scientists discover new laws
          - also useful outside "classical" AI-like applications
              - when we don't know how to program something ourselves
              - when a program should adapt regularly to new circumstances
              - when a program should tune itself towards its user
  • 10. Knowledge discovery
      - Scientific knowledge discovery
          - some "toy" examples:
              - BACON: rediscovered some laws of physics (e.g. Kepler's laws of planetary motion)
              - AM: rediscovered some mathematical theorems
          - more serious recent examples:
              - mining the human genome
              - mining the web for information on genes, proteins, ...
              - drug discovery
                  - context: robots perform lots of experiments at a high rate; this yields lots of data to be studied and interpreted by humans; try to automate this process (because humans can't keep up with the robots)
  • 11. Example: given molecules that are active against some disease, find out what they have in common; this is probably the reason for their activity.
  • 12. Data mining in databases, looking for "interesting" patterns
      - e.g. for marketing
          - based on the data in the DB, who should be interested in this new product? (useful for direct mailing)
          - study customer behaviour to identify typical customer profiles
          - find out which products in a store are often bought together
      - e.g. in a hospital: help with the diagnosis of patients
  • 13. Learning to perform difficult tasks
      - Difficult for humans...
          - LEX system: learned how to perform symbolic integration of functions
      - ... or "easy" for humans, but difficult to program
          - humans can do it, but can't explain how they do it
          - e.g.:
              - learning to play games (chess, go, ...)
              - learning to fly a plane, drive a car, ...
              - recognising faces
  • 14. Adaptive systems
      - Robots in a changing environment
          - must continuously adapt their behaviour
      - Systems that adapt to the user
          - based on user modelling:
              - observe the behaviour of the user
              - build a model describing this behaviour
              - use the model to make the user's life easier
          - e.g. adaptive web pages, intelligent mail filters, adaptive user interfaces (e.g. an intelligent Unix shell), ...
  • 15. Illustration: building a system that learns checkers
      - Learning = improving on task T, with respect to performance measure P, based on experience E
      - In this example:
          - T = playing checkers
          - P = % of games won in the world tournament
          - E = games played against itself
              - possible problem: is this experience representative of the real task?
      - Questions to be answered:
          - exactly what is given, exactly what is learnt, and what representation & learning algorithm should we use?
  • 16. What do we want to learn?
      - given a board situation, which move to make
      - What is given?
          - direct or indirect evidence?
              - direct: e.g., which moves were good and which were bad
              - indirect: consecutive moves in a game, plus the outcome of the game
          - in our case: indirect evidence
              - direct evidence would require a teacher
  • 17. What exactly shall we learn?
      - Choose the type of target function:
          - ChooseMove: Board -> Move?
              - directly applicable
          - V: Board -> R?
              - indicates the quality of a state
              - when playing, choose the move that leads to the best state
              - note: a reasonable definition for V is easy to give:
                  - V(won) = 100, V(lost) = -100, V(draw) = 0, V(s) = V(e) with e the best state reachable from s when playing optimally
                  - not feasible in practice (requires exhaustive minimax search)
          - Let's choose the V function here
  • 18. Choose a representation for the target function:
      - a set of rules?
      - a neural network?
      - a polynomial function of numerical board features?
      - Let's choose: V = w1*bp + w2*rp + w3*bk + w4*rk + w5*bt + w6*rt
          - bp, rp: number of black / red pieces
          - bk, rk: number of black / red kings
          - bt, rt: number of black / red pieces threatened
          - wi: constants to be learnt from experience
  • 19. How to obtain training examples?
      - we need a set of examples [bp, rp, bk, rk, bt, rt, V]
      - bp etc. are easy to determine; but how to guess V?
          - we have indirect evidence only!
      - possible method:
          - with V(s) the true target function, V'(s) the learnt function, and Vt(s) the training value for a state s:
              - Vt(s) <- V'(successor(s))
              - adapt V' using the Vt values (making V' and Vt converge)
              - hope that V' will converge to V
          - intuitively: V for end states is known; propagate V values from later states to earlier states in the game
  • 20. Training algorithm: how to adapt the weights wi?
      - possible method:
          - look at the "error": error(s) = Vt(s) - V'(s)
          - adapt the weights so that the error is reduced
          - e.g. using a gradient descent method
              - for each feature fi: wi <- wi + c * fi * error(s), with c some small constant
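The update rule of slide 20 can be sketched directly (it is the LMS rule from Mitchell, Ch. 1), taking error(s) = Vt(s) - V'(s) so that the additive update reduces the error. The board features and the training value below are invented, and a single position is fitted repeatedly just to show the weights converging:

```python
# Sketch of the linear evaluation function and its gradient-descent update.
# The feature vector [bp, rp, bk, rk, bt, rt] and the training value 25.0
# are hypothetical.
def V(weights, features):
    """Linear evaluation: V = w1*bp + w2*rp + ... + w6*rt."""
    return sum(w * f for w, f in zip(weights, features))

def lms_update(weights, features, v_train, c=0.001):
    """wi <- wi + c * fi * error(s), with error(s) = Vt(s) - V'(s)."""
    error = v_train - V(weights, features)
    return [w + c * f * error for w, f in zip(weights, features)]

weights = [0.0] * 6
board = [12, 12, 0, 0, 2, 1]          # hypothetical [bp, rp, bk, rk, bt, rt]
for _ in range(100):                  # repeatedly fit the training value 25.0
    weights = lms_update(weights, board, v_train=25.0)
print(round(V(weights, board), 2))    # -> 25.0
```

The constant c must be small: with these feature magnitudes, a larger c (e.g. 0.01) makes the updates overshoot and the weights diverge.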
  • 21. Overview of design choices (flowchart)
      - type of training experience: games against self / games against expert / table of good moves / ...
      - determine the type of target function: Board -> R / Board -> Move / ...
      - determine the representation: linear function of 6 features / ...
      - determine the learning algorithm: gradient descent / ...
      - ... ready!
  • 22. Some issues that influence the choices
      - Which algorithms are useful for what types of functions?
      - How is learning influenced by
          - the number of training examples
          - the complexity of the hypothesis (function) representation
          - noise in the data
      - Theoretical limits of learning?
      - Can we help the learner with prior knowledge?
      - Could a system alter its representation itself?
  • 23. Typical learning tasks
      - Concept learning
          - learn a definition of a concept
          - supervised vs. unsupervised
      - Function learning ("predictive modelling")
          - discrete ("classification") or continuous ("regression")
          - concept = a function with a boolean result
      - Clustering
      - Finding descriptive patterns
  • 24. Concept learning: supervised
      - Given positive (+) and negative (-) examples of a concept, infer the properties that cause instances to be positive or negative (= the concept definition)
      - [Figure: instance space X with + and - examples, and a concept C : X -> {true, false} separating them]
  • 25. Concept learning: unsupervised
      - Given examples of instances:
          - invent reasonable concepts (= clustering)
          - find definitions for these concepts
      - Cf. a taxonomy of animals, identification of market segments, ...
      - [Figure: unlabelled instances in X grouped into concepts C1, C2, C3]
  • 26. Function learning
      - Generalises over concept learning
      - Learn a function f : X -> S where
          - S is a finite set of values: classification
          - S is a continuous range of reals: regression
      - [Figure: instances in X with numeric labels (0.6, 0.9, 1.4, 2.1, 2.7) and a learnt function f : X -> [0, 3]]
  • 27. Clustering
      - Finding groups of instances that are similar
      - May be a goal in itself (unsupervised classification)
      - ... but is also used for other tasks
          - regression
          - flexible prediction: when it is not known in advance which properties to predict from which other properties
      - [Figure: instances in X grouped into clusters]
  • 28. Finding descriptive patterns
      - Descriptive patterns = any kind of patterns, not necessarily directly useful for prediction
          - generalises over predictive modelling (= finding predictive patterns)
      - Examples of patterns:
          - "fast cars usually cost more than slower cars"
          - "people are never married to more than one person at the same time"
  • 29. Representation of data
      - Numerical data: instances are points in R^n
          - many techniques focus on this kind of data
      - Symbolic data (true/false, black/white/red/blue, ...)
          - can be converted to numeric data
          - some techniques work directly with symbolic data
      - Structural data
          - instances have internal structure (graphs, sets, ...; cf. molecules)
          - difficult to convert to a simpler format
          - few techniques can handle it directly
  • 30. Brief overview of approaches
      - Symbolic approaches:
          - version spaces, induction of decision trees, induction of rule sets, inductive logic programming, ...
      - Numeric approaches:
          - neural networks, support vector machines, ...
      - Probabilistic approaches ("Bayesian learning")
      - Miscellaneous:
          - instance-based learning, genetic algorithms, reinforcement learning
  • 31. Overview of the course
      - Introduction (today) (Ch. 1)
      - Concept learning: version spaces (Ch. 2 - brief)
      - Induction of decision trees (Ch. 3)
      - Artificial neural networks (Ch. 4 - brief)
      - Evaluating hypotheses (Ch. 5)
      - Bayesian learning (Ch. 6)
      - Computational learning theory (Ch. 7)
      - Support vector machines (brief)
  • 32. Overview of the course (continued)
      - Instance-based learning (Ch. 8)
      - Genetic algorithms (Ch. 9)
      - Induction of rule sets & association rules (Ch. 10)
      - Reinforcement learning (Ch. 13)
      - Clustering
      - Inductive logic programming
      - Combining different models
          - bagging, boosting, stacking, ...
  • 33. 2 Version Spaces
      - Recall the basic principles from the AI course
          - stressing important concepts for later use
      - Difficulties with version space approaches
      - Inductive bias
      - Reading: Mitchell, Ch. 2
  • 34. Basic principles
      - Concept learning as search
          - given: hypothesis space H and data set S
          - find: all h in H consistent with S
          - this set is called the version space, VS(H,S)
      - How to search in H
          - enumerate all h in H: not feasible
          - prune the search using some generality ordering
              - h1 more general than h2 <=> (x in h2 => x in h1)
      - See Mitchell, Chapter 2, for examples
  • 35. An example
      - +: belongs to the concept; -: does not
          - S = the set of these + and - examples
      - Assume hypotheses are rectangles
          - i.e., H = the set of all rectangles
      - VS(H,S) = the set of all rectangles that contain all + and no -
      - [Figure: + and - examples scattered in the plane]
  • 36. Example of a consistent hypothesis: the green rectangle
      - [Figure: a rectangle containing all + examples and none of the - examples]
  • 37. h1 more general than h2 <=> h2 lies totally inside h1
      - [Figure: h2 more specific than h1; h3 incomparable with h1]
  • 38. Version space boundaries
      - Bound the version space by giving its most specific (S) and most general (G) borders
          - S: rectangles that cannot become smaller without excluding some +
          - G: rectangles that cannot become larger without including some -
      - Any hypothesis h consistent with the data
          - must be more general than some element of S
          - must be more specific than some element of G
      - Thus, G and S completely specify the VS
  • 39. Example, continued: so what are S and G here?
      - S = {h1}, G = {h2, h3}
      - [Figure: h1 the most specific hypothesis; h2 and h3 two most general hypotheses]
  • 40. Computing the version space
      - Computing G and S is sufficient to know the full version space
      - Algorithms in Mitchell's book:
          - Find-S: computes only the S set
              - S is always a singleton in Mitchell's examples
          - Candidate Elimination: computes S and G
  • 41. Candidate Elimination algorithm: demonstration with rectangles
      - Algorithm: see Mitchell
      - Representation:
          - concepts are rectangles
          - a rectangle is represented with 2 attributes: <Xmin-Xmax, Ymin-Ymax>
      - Graphical representation:
          - a hypothesis is consistent with the data if
              - all + are inside the rectangle
              - no - is inside the rectangle
  • 42. Start: S = {none}, G = {all}
      - S = {<Ø,Ø>}, G = {<1-6, 1-6>}
      - [Figure: a 6x6 grid; G is the whole domain, S is empty]
  • 43. Example e1 = (3,2): + appears; it is not covered by S
      - S = {<Ø,Ø>}, G = {<1-6, 1-6>}
  • 44. S is extended to cover e1
      - S = {<3-3, 2-2>}, G = {<1-6, 1-6>}
  • 45. Example e2 = (5,4): - appears; it is covered by G
      - S = {<3-3, 2-2>}, G = {<1-6, 1-6>}
  • 46. G is changed to avoid covering e2
      - note: G now consists of 2 parts; each part covers all + and no -
      - S = {<3-3, 2-2>}, G = {<1-4, 1-6>, <1-6, 1-3>}
  • 47. Example e3 = (2,4): - appears; it is covered by G
      - S = {<3-3, 2-2>}, G = {<1-4, 1-6>, <1-6, 1-3>}
  • 48. One part of G is affected and reduced
      - S = {<3-3, 2-2>}, G = {<3-4, 1-6>, <1-6, 1-3>}
  • 49. Example e4 = (5,3): + appears; it is not covered by S
      - S = {<3-3, 2-2>}, G = {<3-4, 1-6>, <1-6, 1-3>}
  • 50. S is extended to cover e4
      - S = {<3-5, 2-3>}, G = {<3-4, 1-6>, <1-6, 1-3>}
  • 51. The part of G not covering the new S is removed
      - S = {<3-5, 2-3>}, G = {<1-6, 1-3>}
  • 52. The current version space contains all rectangles covering S and covered by G, e.g. h = <2-5, 2-3>
      - S = {<3-5, 2-3>}, G = {<1-6, 1-3>}
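The final S and G of this demonstration can be checked with a brute-force sketch. The code below is not the incremental Candidate Elimination algorithm itself: it simply enumerates every rectangle <xlo-xhi, ylo-yhi> with integer bounds in 1..6, keeps those consistent with e1..e4, and reads off S and G as the minimal and maximal consistent hypotheses under the containment ordering.

```python
from itertools import combinations_with_replacement

# All rectangles with integer bounds in 1..6: ((xlo, xhi), (ylo, yhi)).
bounds = list(combinations_with_replacement(range(1, 7), 2))
H = [(xb, yb) for xb in bounds for yb in bounds]

def covers(h, p):
    (xlo, xhi), (ylo, yhi) = h
    return xlo <= p[0] <= xhi and ylo <= p[1] <= yhi

def more_general(h1, h2):  # h1 more general than h2 <=> h2 lies inside h1
    (a1, b1), (c1, d1) = h1
    (a2, b2), (c2, d2) = h2
    return a1 <= a2 and b1 >= b2 and c1 <= c2 and d1 >= d2

pos = [(3, 2), (5, 3)]     # e1, e4
neg = [(5, 4), (2, 4)]     # e2, e3
VS = [h for h in H
      if all(covers(h, p) for p in pos) and not any(covers(h, n) for n in neg)]

# S: no strictly more specific hypothesis in VS; G: no strictly more general one.
S = [h for h in VS if not any(more_general(h, g) and g != h for g in VS)]
G = [h for h in VS if not any(more_general(g, h) and g != h for g in VS)]
print(S)   # -> [((3, 5), (2, 3))]  i.e. <3-5, 2-3>
print(G)   # -> [((1, 6), (1, 3))]  i.e. <1-6, 1-3>
```

This agrees with the boundaries reached on slide 51, and every other member of VS lies between these two rectangles.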
  • 53. Interesting points
      - We here use an extended notion of generality
          - in the book: Ø < value < ?
          - here: e.g. Ø < 2-3 < 2-5 < 1-5 < ?
      - We still use a conjunctive concept definition
          - each concept is 1 rectangle
          - this could be extended as well (but it gets complicated)
  • 54. Difficulties with version space approaches
      - The idea of the VS provides a nice theoretical framework
      - But it is not very useful for most practical problems
      - Difficulties with these approaches:
          - not very efficient
              - the borders G and S may be very large (they may grow exponentially)
          - not noise resistant
              - the VS "collapses" when no consistent hypothesis exists
              - often we would like to find the "best" hypothesis in that case
          - in Mitchell's examples: only conjunctive definitions
      - We will compare with other approaches...
  • 55. Inductive bias
      - After having seen a limited number of examples, we believe we can make predictions for unseen cases
      - From seen cases to unseen cases = the inductive leap
      - Why do we believe this? Is there any guarantee that such a prediction will be correct? What extra assumptions do we need to guarantee correctness?
      - Inductive bias: a minimal set of extra assumptions that guarantees the correctness of the inductive leap
  • 56. Equivalence between inductive and deductive systems
      - inductive system: training examples + new instance -> result (by inductive leap)
      - deductive system: training examples + new instance + inductive bias -> result (by proof)
  • 57. Definition of inductive bias <ul><li>More formal definition of inductive bias (Mitchell): </li></ul><ul><li>L(x,D) denotes classification assigned to instance x by learner L after training on D </li></ul><ul><li>The inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples D, </li></ul><ul><li>∀x ∈ X: (B ∧ D ∧ x) ⊢ L(x,D) </li></ul>
  • 58. Effect of inductive bias <ul><li>Different learning algorithms give different results on same dataset because each may have a different bias </li></ul><ul><li>Stronger bias means less learning </li></ul><ul><ul><li>more is assumed in advance </li></ul></ul><ul><li>Is learning possible without any bias at all? </li></ul><ul><ul><li>I.e., “pure” learning, without any assumptions in advance </li></ul></ul><ul><ul><li>The answer is No . </li></ul></ul>
  • 59. Inductive bias of version spaces <ul><li>Bias of candidate elimination algorithm: target concept is in H </li></ul><ul><li>H typically consists of conjunctive concepts </li></ul><ul><ul><li>in our previous illustration, rectangles </li></ul></ul><ul><li>H could be extended towards disjunctive concepts </li></ul><ul><li>Is it possible to use version spaces with H = set of all imaginable concepts, thereby eliminating all bias ? </li></ul>
  • 60. Unbiased version spaces <ul><li>Let U be the example domain </li></ul><ul><li>Unbiased: target concept C can be any subset of U </li></ul><ul><ul><li>hence, H = 2 U </li></ul></ul><ul><li>Consider VS(H,D) with D a strict subset of U </li></ul><ul><li>Assume you see an unseen instance x (x ∈ U \ D) </li></ul><ul><li>For each h ∈ VS that predicts x ∈ C, there is an h’ ∈ VS that predicts x ∉ C, and vice versa </li></ul><ul><ul><li>just take h = h’ ∪ {x}: since x ∉ D, h and h’ are exactly the same w.r.t. D; so either both are in VS, or neither is </li></ul></ul>
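The argument can be checked mechanically on a toy domain. A small sketch (domain and labels are made up for illustration): with H = 2^U, the hypotheses remaining in the version space split exactly evenly on any unseen instance.

```python
from itertools import combinations

U = {0, 1, 2, 3}                 # toy example domain
D = {0: True, 1: False}          # labelled training examples: x -> (x in C?)
x = 2                            # unseen instance, x in U \ D

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

H = powerset(U)                  # unbiased hypothesis space: H = 2^U
VS = [h for h in H if all((e in h) == label for e, label in D.items())]
pos = sum(1 for h in VS if x in h)       # hypotheses predicting x in C
neg = sum(1 for h in VS if x not in h)   # hypotheses predicting x not in C
```

Since `pos == neg`, the unbiased version space says nothing at all about x.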
  • 61. <ul><li>Conclusion: version spaces without any bias do not allow generalisation </li></ul><ul><li>To be able to make an inductive leap, some bias is necessary . </li></ul><ul><li>We will see many different learning algorithms that all differ in their inductive bias. </li></ul><ul><li>When choosing one in practice, bias should be an important criterion </li></ul><ul><ul><li>unfortunately: not always well understood… </li></ul></ul>
  • 62. To remember <ul><li>Definition of version space, importance of generality ordering for searching </li></ul><ul><li>Definition of inductive bias, practical importance, why it is necessary for learning, how it relates inductive systems to deductive systems </li></ul>
  • 63. 3 Induction of decision trees <ul><li>What are decision trees? </li></ul><ul><li>How can they be induced automatically? </li></ul><ul><ul><li>top-down induction of decision trees </li></ul></ul><ul><ul><li>avoiding overfitting </li></ul></ul><ul><ul><li>converting trees to rules </li></ul></ul><ul><ul><li>alternative heuristics  </li></ul></ul><ul><ul><li>a generic TDIDT algorithm  </li></ul></ul><ul><li> Mitchell, Ch. 3 </li></ul>
  • 64. What are decision trees? <ul><li>Represent sequences of tests </li></ul><ul><li>According to outcome of test, perform a new test </li></ul><ul><li>Continue until the result is known </li></ul><ul><li>Cf. guessing a person using only yes/no questions: </li></ul><ul><ul><li>ask some question </li></ul></ul><ul><ul><li>depending on answer, ask a new question </li></ul></ul><ul><ul><li>continue until answer known </li></ul></ul>
  • 65. Example decision tree 1 <ul><li>Mitchell’s example: Play tennis or not? (depending on weather conditions) </li></ul>Outlook Humidity Wind No Yes No Yes Yes Sunny Overcast Rainy High Normal Strong Weak
  • 66. Example decision tree 2 <ul><li>Again from Mitchell: tree for predicting whether C-section necessary </li></ul><ul><li>Leaves are not pure here; ratio pos/neg is given </li></ul>Fetal_Presentation Previous_Csection + - - 1 2 3 0 1 [3+, 29-] .11+ .89- [8+, 22-] .27+ .73- [55+, 35-] .61+ .39- Primiparous … …
  • 67. Representation power <ul><li>Typically: </li></ul><ul><ul><li>examples represented by array of attributes </li></ul></ul><ul><ul><li>1 node in tree tests value of 1 attribute </li></ul></ul><ul><ul><li>1 child node for each possible outcome of test </li></ul></ul><ul><ul><li>Leaf nodes assign classification </li></ul></ul><ul><li>Note: </li></ul><ul><ul><li>tree can represent any boolean function </li></ul></ul><ul><ul><ul><li>i.e., also disjunctive concepts (&lt;-&gt; VS examples) </li></ul></ul></ul><ul><ul><li>tree can allow noise (non-pure leaves) </li></ul></ul>
  • 68. Representing boolean formulae <ul><li>E.g., A ∨ B </li></ul><ul><li>Similarly (try yourself): </li></ul><ul><ul><li>A ∧ B, A xor B, (A ∧ B) ∨ (C ∧ ¬D ∧ E) </li></ul></ul><ul><ul><li>“ M of N” (at least M out of N propositions are true) </li></ul></ul><ul><ul><li>What about complexity of tree vs. complexity of original formula? </li></ul></ul>A false true B true true false true false
  • 69. Classification, Regression and Clustering trees <ul><li>Classification trees represent function X -&gt; C with C discrete (like the decision trees we just saw) </li></ul><ul><li>Regression trees predict numbers in leaves </li></ul><ul><ul><li>could use a constant (e.g., mean), or linear regression model, or … </li></ul></ul><ul><li>Clustering trees just group examples in leaves </li></ul><ul><li>Most (but not all) research in machine learning focuses on classification trees </li></ul>
  • 70. Example decision tree 3 (from study of river water quality) <ul><li>&quot;Data mining&quot; application </li></ul><ul><li>Given: descriptions of river water samples </li></ul><ul><ul><li>biological description: occurrence of organisms in water (“abundance”, graded 0-5) </li></ul></ul><ul><ul><li>chemical description: 16 variables (temperature, concentrations of chemicals (NH 4 , ...)) </li></ul></ul><ul><li>Question: characterize chemical properties of water using organisms that occur </li></ul>
  • 71. Clustering tree abundance(Tubifex sp.,5) ? T = 0.357111 pH = -0.496808 cond = 1.23151 O2 = -1.09279 O2sat = -1.04837 CO2 = 0.893152 hard = 0.988909 NO2 = 0.54731 NO3 = 0.426773 NH4 = 1.11263 PO4 = 0.875459 Cl = 0.86275 SiO2 = 0.997237 KMnO4 = 1.29711 K2Cr2O7 = 0.97025 BOD = 0.67012 abundance(Sphaerotilus natans,5) ? yes no T = 0.0129737 pH = -0.536434 cond = 0.914569 O2 = -0.810187 O2sat = -0.848571 CO2 = 0.443103 hard = 0.806137 NO2 = 0.4151 NO3 = -0.0847706 NH4 = 0.536927 PO4 = 0.442398 Cl = 0.668979 SiO2 = 0.291415 KMnO4 = 1.08462 K2Cr2O7 = 0.850733 BOD = 0.651707 yes no abundance( ...) &lt;- &amp;quot;standardized&amp;quot; values (how many standard deviations above mean)
  • 72. Top-Down Induction of Decision Trees <ul><li>Basic algorithm for TDIDT: (later more formal version) </li></ul><ul><ul><li>start with full data set </li></ul></ul><ul><ul><li>find test that partitions examples as well as possible </li></ul></ul><ul><ul><ul><li>“ good” = examples with same class, or otherwise similar examples, should be put together </li></ul></ul></ul><ul><ul><li>for each outcome of test, create child node </li></ul></ul><ul><ul><li>move examples to children according to outcome of test </li></ul></ul><ul><ul><li>repeat procedure for each child that is not “pure” </li></ul></ul><ul><li>Main question: how to decide which test is “best” </li></ul>
  • 73. Finding the best test (for classification trees) <ul><li>For classification trees: find test for which children are as “pure” as possible </li></ul><ul><li>Purity measure borrowed from information theory: entropy </li></ul><ul><ul><li>is a measure of “missing information”; more precisely, #bits needed to represent the missing information, on average, using optimal encoding </li></ul></ul><ul><li>Given set S with instances belonging to class i with probability p i : Entropy(S) = - Σ i p i log 2 p i </li></ul>
  • 74. Entropy <ul><li>Intuitive reasoning: </li></ul><ul><ul><li>use shorter encoding for more frequent messages </li></ul></ul><ul><ul><li>information theory: message with probability p should get -log 2 p bits </li></ul></ul><ul><ul><ul><li>e.g. A,B,C,D each 25% probability: 2 bits for each (00,01,10,11) </li></ul></ul></ul><ul><ul><ul><li>if some are more probable, it is possible to do better </li></ul></ul></ul><ul><ul><li>average #bits for a message is then - Σ i p i log 2 p i </li></ul></ul>
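A quick numeric check of this intuition (a sketch, not from the slides): four equally likely messages need 2 bits on average, while a skewed distribution allows a shorter expected code length.

```python
import math

def expected_bits(probs):
    """Expected code length -sum(p * log2 p) under an optimal encoding."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = expected_bits([0.25, 0.25, 0.25, 0.25])   # A,B,C,D each 25%: 2 bits
skewed = expected_bits([0.5, 0.25, 0.125, 0.125])   # shorter codes pay off: 1.75 bits
```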
  • 75. Entropy <ul><li>Entropy in function of p, for 2 classes: </li></ul>
  • 76. Information gain <ul><li>Heuristic for choosing a test in a node: </li></ul><ul><ul><li>choose that test that on average provides most information about the class </li></ul></ul><ul><ul><li>this is the test that, on average, reduces class entropy most </li></ul></ul><ul><ul><ul><li>on average: class entropy reduction differs according to outcome of test </li></ul></ul></ul><ul><ul><li>expected reduction of entropy = information gain </li></ul></ul><ul><li>Gain(S,A) = Entropy(S) - Σ v |S v |/|S| Entropy(S v ) </li></ul>
  • 77. Example <ul><li>Assume S has 9 + and 5 - examples; partition according to Wind or Humidity attribute </li></ul>Humidity Wind High Normal Strong Weak S: [9+,5-] S: [9+,5-] S: [3+,4-] S: [6+,1-] S: [6+,2-] S: [3+,3-] E = 0.985 E = 0.592 E = 0.811 E = 1.0 E = 0.940 E = 0.940 Gain(S, Humidity) = .940 - (7/14).985 - (7/14).592 = 0.151 Gain(S, Wind) = .940 - (8/14).811 - (6/14)1.0 = 0.048
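The numbers in this example can be reproduced with a few lines of Python (a sketch; the slide's 0.151 and 0.048 come from rounding the intermediate entropies):

```python
import math

def entropy(pos, neg):
    total, result = pos + neg, 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            result -= p * math.log2(p)
    return result

def gain(pos, neg, parts):
    """parts: one (pos_v, neg_v) pair per outcome of the test."""
    n = pos + neg
    return entropy(pos, neg) - sum(
        (pv + nv) / n * entropy(pv, nv) for pv, nv in parts)

# S = [9+,5-]; Humidity splits into [3+,4-], [6+,1-]; Wind into [6+,2-], [3+,3-]
g_humidity = gain(9, 5, [(3, 4), (6, 1)])   # ~0.152 (slide: 0.151, rounded)
g_wind = gain(9, 5, [(6, 2), (3, 3)])       # ~0.048
```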
  • 78. <ul><li>Assume Outlook was chosen: continue partitioning in child nodes </li></ul>Outlook ? ? Yes Sunny Overcast Rainy [9+,5-] [2+,3-] [3+,2-] [4+,0-]
  • 79. Hypothesis space search in TDIDT <ul><li>Hypothesis space H = set of all trees </li></ul><ul><li>H is searched in a hill-climbing fashion, from simple to complex </li></ul>...
  • 80. Inductive bias in TDIDT <ul><li>Note: for e.g. boolean attributes, H is complete: each concept can be represented! </li></ul><ul><ul><li>given n attributes, can keep on adding tests until all attributes tested </li></ul></ul><ul><li>So what about inductive bias? </li></ul><ul><ul><li>Clearly no “restriction bias” (H ⊂ 2 U ) as in cand. elim. </li></ul></ul><ul><ul><li>Preference bias : some hypotheses in H are preferred over others </li></ul></ul><ul><ul><li>In this case: preference for short trees with informative attributes at the top </li></ul></ul>
  • 81. Occam’s Razor <ul><li>Preference for simple models over complex models is quite generally used in machine learning </li></ul><ul><li>Similar principle in science: Occam’s Razor </li></ul><ul><ul><li>roughly: do not make things more complicated than necessary </li></ul></ul><ul><li>Reasoning, in the case of decision trees: more complex trees have higher probability of overfitting the data set </li></ul>
  • 82. Avoiding Overfitting <ul><li>Phenomenon of overfitting: </li></ul><ul><ul><li>keep improving a model, making it better and better on training set by making it more complicated … </li></ul></ul><ul><ul><li>increases risk of modelling noise and coincidences in the data set </li></ul></ul><ul><ul><li>may actually harm predictive power of theory on unseen cases </li></ul></ul><ul><li>Cf. fitting a curve with too many parameters </li></ul>
  • 83. Overfitting: example (figure: mostly-separated + and - examples with an overly specific decision boundary; area with probably wrong predictions)
  • 84. Overfitting: effect on predictive accuracy <ul><li>Typical phenomenon when overfitting: </li></ul><ul><ul><li>training accuracy keeps increasing </li></ul></ul><ul><ul><li>accuracy on unseen validation set starts decreasing </li></ul></ul>(figure: accuracy vs. size of tree, for training data and unseen data; overfitting starts where the unseen-data curve turns down)
  • 85. How to avoid overfitting when building classification trees? <ul><li>Option 1: </li></ul><ul><ul><li>stop adding nodes to tree when overfitting starts occurring </li></ul></ul><ul><ul><li>need stopping criterion </li></ul></ul><ul><li>Option 2: </li></ul><ul><ul><li>don’t bother about overfitting when growing the tree </li></ul></ul><ul><ul><li>after the tree has been built, prune it back </li></ul></ul>
  • 86. Stopping criteria <ul><li>How do we know when overfitting starts? </li></ul><ul><ul><li>a) use a validation set : data not considered for choosing the best test </li></ul></ul><ul><ul><ul><li>when accuracy goes down on validation set: stop adding nodes to this branch </li></ul></ul></ul><ul><ul><li>b) use some statistical test </li></ul></ul><ul><ul><ul><li>significance test: e.g., is the change in class distribution still significant? (χ 2 -test) </li></ul></ul></ul><ul><ul><ul><li>MDL : minimal description length principle </li></ul></ul></ul><ul><ul><ul><ul><li>fully correct theory = tree + corrections for specific misclassifications </li></ul></ul></ul></ul><ul><ul><ul><ul><li>minimize size(f.c.t.) = size(tree) + size(misclassifications(tree)) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Cf. Occam’s razor </li></ul></ul></ul></ul>
  • 87. Post-pruning trees <ul><li>After learning the tree: start pruning branches away </li></ul><ul><ul><li>For all nodes in tree: </li></ul></ul><ul><ul><ul><li>Estimate effect of pruning tree at this node on predictive accuracy </li></ul></ul></ul><ul><ul><ul><ul><li>e.g. using accuracy on validation set </li></ul></ul></ul></ul><ul><ul><li>Prune node that gives greatest improvement </li></ul></ul><ul><ul><li>Continue until no improvements </li></ul></ul><ul><li>Note : this pruning constitutes a second search in the hypothesis space </li></ul>
  • 88. (figure: accuracy vs. size of tree, for training data and unseen data, showing the effect of pruning)
  • 89. Comparison <ul><li>Advantage of Option 1: no superfluous work </li></ul><ul><li>But: tests may be misleading </li></ul><ul><ul><li>E.g., validation accuracy may go down briefly, then go up again </li></ul></ul><ul><li>Therefore, Option 2 (post-pruning) is usually preferred (though more work, computationally) </li></ul>
  • 90. Turning trees into rules <ul><li>From a tree a rule set can be derived </li></ul><ul><ul><li>Path from root to leaf in a tree = 1 if-then rule </li></ul></ul><ul><li>Advantage of such rule sets </li></ul><ul><ul><li>may increase comprehensibility </li></ul></ul><ul><ul><li>can be pruned more flexibly </li></ul></ul><ul><ul><ul><li>in 1 rule, 1 single condition can be removed </li></ul></ul></ul><ul><ul><ul><ul><li>vs. tree: when removing a node, the whole subtree is removed </li></ul></ul></ul></ul><ul><ul><ul><li>1 rule can be removed entirely </li></ul></ul></ul>
  • 91. Rules from trees: example Outlook Humidity Wind No Yes No Yes Yes Sunny Overcast Rainy High Normal Strong Weak if Outlook = Sunny and Humidity = High then No if Outlook = Sunny and Humidity = Normal then Yes …
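The path-to-rule conversion shown above can be sketched as follows, using a nested-tuple encoding of the tennis tree (the encoding is our own choice, not from the course):

```python
# Nested-tuple encoding of the tennis tree: (attribute, {value: subtree}),
# with a bare string as a leaf.
tree = ("Outlook", {
    "Sunny": ("Humidity", {"High": "No", "Normal": "Yes"}),
    "Overcast": "Yes",
    "Rainy": ("Wind", {"Strong": "No", "Weak": "Yes"}),
})

def to_rules(node, conds=()):
    if not isinstance(node, tuple):              # leaf: one finished rule
        return [(list(conds), node)]
    attr, children = node
    rules = []
    for value, child in children.items():
        rules.extend(to_rules(child, conds + ((attr, value),)))
    return rules

rules = to_rules(tree)   # one if-then rule per root-to-leaf path
```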
  • 92. Pruning rules <ul><li>Possible method: </li></ul><ul><ul><li>1. convert tree to rules </li></ul></ul><ul><ul><li>2. prune each rule independently </li></ul></ul><ul><ul><ul><li>remove conditions that do not harm accuracy of rule </li></ul></ul></ul><ul><ul><li>3. sort rules (e.g., most accurate rule first) </li></ul></ul><ul><ul><ul><li>before pruning: each example covered by 1 rule </li></ul></ul></ul><ul><ul><ul><li>after pruning, 1 example might be covered by multiple rules </li></ul></ul></ul><ul><ul><ul><li>therefore some rules might contradict each other </li></ul></ul></ul>
  • 93. Pruning rules: example A false true B true true false true false if A=true then true if A=false and B=true then true if A=false and B=false then false Tree representing A  B Rules represent A  (  A  B) A  B
  • 94. Alternative heuristics for choosing tests <ul><li>Attributes with continuous domains (numbers) </li></ul><ul><ul><li>cannot create a different branch for each possible outcome </li></ul></ul><ul><ul><li>allow, e.g., binary test of the form Temperature &lt; 20 </li></ul></ul><ul><li>Attributes with many discrete values </li></ul><ul><ul><li>unfair advantage over attributes with few values </li></ul></ul><ul><ul><ul><li>cf. question with many possible answers is more informative than yes/no question </li></ul></ul></ul><ul><ul><li>To compensate: divide gain by “max. potential gain” SI </li></ul></ul><ul><ul><li>Gain Ratio : GR(S,A) = Gain(S,A) / SI(S,A) </li></ul></ul><ul><ul><ul><li>Split-information SI(S,A) = - Σ i |S i |/|S| log 2 (|S i |/|S|) </li></ul></ul></ul><ul><ul><ul><li>with i ranging over different results of test A </li></ul></ul></ul>
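A small sketch of split information, illustrating why it compensates for many-valued attributes: an even 2-way split of 14 examples has SI = 1 bit, while a 14-way split has SI = log2(14) ≈ 3.81 bits, so the gain of the many-valued test is divided by a much larger number.

```python
import math

def split_info(sizes):
    """SI(S,A) = -sum |Si|/|S| log2(|Si|/|S|), over the subsets a test creates."""
    n = sum(sizes)
    return -sum(s / n * math.log2(s / n) for s in sizes if s)

def gain_ratio(g, sizes):
    return g / split_info(sizes)

si_2way = split_info([7, 7])     # even binary split: 1 bit
si_14way = split_info([1] * 14)  # one branch per example: log2(14) bits
```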
  • 95. <ul><li>Tests may have different costs </li></ul><ul><ul><li>e.g. medical diagnosis: blood test, visual examination, … have different costs </li></ul></ul><ul><ul><li>try to find tree with low expected cost </li></ul></ul><ul><ul><ul><li>instead of low expected number of tests </li></ul></ul></ul><ul><ul><li>alternative heuristics, taking cost into account, have been proposed </li></ul></ul>
  • 96. Properties of good heuristics <ul><li>Many alternatives exist </li></ul><ul><ul><li>ID3 uses information gain or gain ratio </li></ul></ul><ul><ul><li>CART uses “Gini criterion” (not discussed here) </li></ul></ul><ul><li>Q: Why not simply use accuracy as a criterion? </li></ul>A1 80-, 20+ 40-,0+ 40-,20+ A2 80-, 20+ 40-,10+ 40-,10+ How would - accuracy - information gain rate these splits?
  • 97. Heuristics compared Good heuristics are strictly concave
  • 98. Why concave functions? E E 1 E 2 p p 2 p 1 Assume node with size n , entropy E and proportion of positives p is split into 2 nodes with n 1 , E 1 , p 1 and n 2 , E 2 , p 2 . We have p = (n 1 /n)p 1 + (n 2 /n) p 2 and the new average entropy E’ = (n 1 /n)E 1 +(n 2 /n)E 2 is therefore found by linear interpolation between ( p 1 ,E 1 ) and ( p 2 ,E 2 ) at p . Gain = difference in height between ( p, E ) and ( p,E’ ). (n 1 /n)E 1 +(n 2 /n)E 2 Gain
  • 99. Handling missing values <ul><li>What if result of test is unknown for example? </li></ul><ul><ul><li>e.g. because value of attribute unknown </li></ul></ul><ul><li>Some possible solutions, when training: </li></ul><ul><ul><li>guess value: just take most common value (among all examples, among examples in this node / class, …) </li></ul></ul><ul><ul><li>assign example partially to different branches </li></ul></ul><ul><ul><ul><li>e.g. counts for 0.7 in yes subtree, 0.3 in no subtree </li></ul></ul></ul><ul><li>When using tree for prediction: </li></ul><ul><ul><li>assign example partially to different branches </li></ul></ul><ul><ul><li>combine predictions of different branches </li></ul></ul>
  • 100. Generic TDIDT algorithm function TDIDT( E : set of examples) returns tree; T' := grow_tree( E ); T := prune ( T' ); return T ; function grow_tree( E : set of examples) returns tree; T := generate_tests ( E ); t := best_test ( T , E ); P := partition induced on E by t ; if stop_criterion ( E , P ) then return leaf( info ( E )) else for all E j in P : t j := grow_tree( E j ); return node( t , {( j , t j )});
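One possible instantiation of this skeleton in Python, for classification with information gain as best_test, purity as stop_criterion, majority class as info, and no pruning (the encoding of trees and examples is our own, just a sketch):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def grow_tree(examples, attrs):
    labels = [y for _, y in examples]
    if len(set(labels)) == 1 or not attrs:            # stop_criterion: purity
        return Counter(labels).most_common(1)[0][0]   # info: majority class

    def gain(a):                                      # best_test: information gain
        parts = {}
        for x, y in examples:
            parts.setdefault(x[a], []).append(y)
        return entropy(labels) - sum(
            len(p) / len(labels) * entropy(p) for p in parts.values())

    best = max(attrs, key=gain)
    branches = {}                                     # partition induced by best
    for x, y in examples:
        branches.setdefault(x[best], []).append((x, y))
    return (best, {v: grow_tree(sub, [a for a in attrs if a != best])
                   for v, sub in branches.items()})

def predict(tree, x):
    while isinstance(tree, tuple):
        attr, children = tree
        tree = children[x[attr]]
    return tree

# Example: learn the (disjunctive!) concept A or B from all four cases
data = [({"A": a, "B": b}, a or b) for a in (False, True) for b in (False, True)]
tree = grow_tree(data, ["A", "B"])
```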
  • 101. For classification... <ul><li>prune : e.g. reduced-error pruning, ... </li></ul><ul><li>generate_tests : Attr=val, Attr&lt;val, ... </li></ul><ul><ul><li>for numeric attributes: generate val </li></ul></ul><ul><li>best_test : Gain, Gainratio, ... </li></ul><ul><li>stop_criterion : MDL, significance test (e.g. χ 2 -test), ... </li></ul><ul><li>info : most frequent class (&quot;mode&quot;) </li></ul><ul><li>Popular systems: C4.5 (Quinlan 1993), C5.0 ( www.rulequest.com ) </li></ul>
  • 102. For regression... <ul><li>change </li></ul><ul><ul><li>best_test : e.g. minimize average variance </li></ul></ul><ul><ul><li>info : mean </li></ul></ul><ul><ul><li>stop_criterion : significance test (e.g., F-test), ... </li></ul></ul>A1 A2 {1,3,4,7,8,12} {1,3,4,7,8,12} {1,4,12} {3,7,8} {1,3,7} {4,8,12}
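Checking the example splits on this slide (a sketch): the average variance of the partitions induced by A1 versus A2 on {1,3,4,7,8,12} shows which test a variance-minimizing best_test would pick.

```python
def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def avg_variance(parts):
    n = sum(len(p) for p in parts)
    return sum(len(p) / n * variance(p) for p in parts)

var_a1 = avg_variance([[1, 4, 12], [3, 7, 8]])   # split by A1
var_a2 = avg_variance([[1, 3, 7], [4, 8, 12]])   # split by A2: lower, so preferred
```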
  • 103. CART <ul><li>Classification and regression trees (Breiman et al., 1984) </li></ul><ul><li>Classification: info : mode, best_test : Gini </li></ul><ul><li>Regression: info : mean, best_test : variance </li></ul><ul><li>prune : &quot;error complexity&quot; pruning </li></ul><ul><ul><li>penalty α for each node </li></ul></ul><ul><ul><li>the higher α , the smaller the tree will be </li></ul></ul><ul><ul><li>optimal α obtained empirically (cross-validation) </li></ul></ul>
  • 104. n-dimensional target spaces <ul><li>Instead of predicting 1 number, predict vector of numbers </li></ul><ul><li>info : mean vector </li></ul><ul><li>best_test : variance (mean squared distance) in n-dimensional space </li></ul><ul><li>stop_criterion : F-test </li></ul><ul><li>mixed vectors (numbers and symbols)? </li></ul><ul><ul><li>use appropriate distance measure </li></ul></ul><ul><li>-&gt; &quot;clustering trees&quot; </li></ul>
  • 105. Clustering tree abundance(Tubifex sp.,5) ? T = 0.357111 pH = -0.496808 cond = 1.23151 O2 = -1.09279 O2sat = -1.04837 CO2 = 0.893152 hard = 0.988909 NO2 = 0.54731 NO3 = 0.426773 NH4 = 1.11263 PO4 = 0.875459 Cl = 0.86275 SiO2 = 0.997237 KMnO4 = 1.29711 K2Cr2O7 = 0.97025 BOD = 0.67012 abundance(Sphaerotilus natans,5) ? yes no T = 0.0129737 pH = -0.536434 cond = 0.914569 O2 = -0.810187 O2sat = -0.848571 CO2 = 0.443103 hard = 0.806137 NO2 = 0.4151 NO3 = -0.0847706 NH4 = 0.536927 PO4 = 0.442398 Cl = 0.668979 SiO2 = 0.291415 KMnO4 = 1.08462 K2Cr2O7 = 0.850733 BOD = 0.651707 yes no abundance( ...) &lt;- &amp;quot;standardized&amp;quot; values (how many standard deviations above mean)
  • 106. To Remember <ul><li>Decision trees &amp; their representational power </li></ul><ul><li>Generic TDIDT algorithm and how to instantiate its parameters </li></ul><ul><li>Search through hypothesis space, bias, tree to rule conversion </li></ul><ul><li>For classification trees: details on heuristics, handling missing values, pruning, … </li></ul><ul><li>Some general concepts: overfitting, Occam’s razor </li></ul>
  • 107. 4 Neural networks <ul><li>(Brief summary - studied in detail in other courses) </li></ul><ul><li>Basic principle of artificial neural networks </li></ul><ul><li>Perceptrons and multi-layer neural networks </li></ul><ul><li>Properties </li></ul><ul><li> Mitchell, Ch. 4 </li></ul>
  • 108. Artificial neural networks <ul><li>Modelled after biological neural systems </li></ul><ul><ul><li>Complex systems built from very simple units </li></ul></ul><ul><ul><li>1 unit = neuron </li></ul></ul><ul><ul><ul><li>has multiple inputs and outputs, connecting the neuron to other neurons </li></ul></ul></ul><ul><ul><ul><li>when input signal sufficiently strong, neuron fires (i.e., propagates the signal) </li></ul></ul></ul>
  • 109. <ul><li>ANNs consist of </li></ul><ul><ul><li>neurons </li></ul></ul><ul><ul><li>connections between them </li></ul></ul><ul><ul><ul><li>these connections have weights associated with them </li></ul></ul></ul><ul><ul><li>input and output </li></ul></ul><ul><li>ANNs can learn to associate inputs to outputs by adapting the weights </li></ul><ul><li>For instance (classification): </li></ul><ul><ul><li>inputs = pixels of photo </li></ul></ul><ul><ul><li>outputs = classification of photo (person? tree? …) </li></ul></ul>
  • 110. Perceptrons <ul><li>Simplest type of neural network </li></ul><ul><ul><li>Perceptron simulates 1 neuron </li></ul></ul><ul><ul><li>Fires if sum of (inputs * weights) &gt; some threshold </li></ul></ul><ul><li>Schematically: </li></ul> computes Σ w i x i X Y threshold function: Y = -1 if X&lt;t, Y=1 otherwise x 1 x 2 x 3 x 4 x 5 w 1 w 5
  • 111. 2-input perceptron <ul><li>represent inputs in 2-D space </li></ul><ul><li>perceptron learns a function of following form: </li></ul><ul><ul><li>if aX + bY &gt; c then +1, else -1 </li></ul></ul><ul><ul><li>i.e., creates linear separation between classes + and - </li></ul></ul>+1 -1
  • 112. n-input perceptrons <ul><li>In general, perceptrons construct a hyperplane in an n-dimensional space </li></ul><ul><ul><li>one side of hyperplane = +, other side = - </li></ul></ul><ul><li>Hence, classes must be linearly separable, otherwise perceptron cannot learn them </li></ul><ul><li>E.g.: learning boolean functions </li></ul><ul><ul><li>encode true/false as +1, -1 </li></ul></ul><ul><ul><li>is there a perceptron that encodes 1. A and B? 2. A or B? 3. A xor B? </li></ul></ul>
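This can be verified by brute force on a small weight grid (a sketch with true/false encoded as +1/-1): some perceptron computes AND (and similarly OR), but no perceptron on the grid computes XOR, since XOR is not linearly separable.

```python
from itertools import product

def perceptron(w, t, x):
    """Fire (+1) iff the weighted input sum exceeds threshold t."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > t else -1

INPUTS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]        # true = +1, false = -1
AND = {x: (1 if x == (1, 1) else -1) for x in INPUTS}
XOR = {x: (1 if x[0] != x[1] else -1) for x in INPUTS}

def separable(target):
    """Does some (w1, w2, t) on a coarse grid classify the target correctly?"""
    grid = [i / 2 for i in range(-4, 5)]             # values -2.0 .. 2.0
    return any(all(perceptron((w1, w2), t, x) == y for x, y in target.items())
               for w1, w2, t in product(grid, repeat=3))
```

For example, weights (1, 1) with threshold 1.5 realise AND; no grid point works for XOR.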
  • 113. Multi-layer networks <ul><li>Increase representation power by combining neurons in a network </li></ul>+1 -1 +1 -1 X Y neuron 1 neuron 2 +1 -1 output -1 -1 inputs hidden layer output layer
  • 114. <ul><li>“ Sigmoid” function instead of crisp threshold </li></ul><ul><ul><li>changes continuously instead of in 1 step </li></ul></ul><ul><ul><li>has advantages for training multi-layer networks </li></ul></ul> x 1 x 2 x 3 x 4 x 5 w 1 w 5
  • 115. <ul><li>Non-linear sigmoid function causes non-linear decision surfaces </li></ul><ul><ul><li>e.g., 5 areas for 5 classes a,b,c,d,e </li></ul></ul><ul><li>Very powerful representation </li></ul>a b c d e
  • 116. <ul><li>Note : previous network had 2 layers of neurons </li></ul><ul><li>Layered feedforward neural networks: </li></ul><ul><ul><li>neurons organised in n layers </li></ul></ul><ul><ul><li>each layer has output from previous layer as input </li></ul></ul><ul><ul><ul><li>neurons fully interconnected </li></ul></ul></ul><ul><ul><li>successive layers = different representations of input </li></ul></ul><ul><li>2-layer feedforward networks very popular… </li></ul><ul><li>… but many other architectures possible! </li></ul><ul><ul><li>e.g. recurrent NNs </li></ul></ul>
  • 117. <ul><li>Example: 2-layer net representing ID function </li></ul><ul><ul><li>8 input patterns, mapped to same pattern in output </li></ul></ul><ul><ul><li>network converges to binary representation in hidden layer </li></ul></ul>for instance: 1 101 2 100 3 011 4 111 5 000 6 010 7 110 8 001
  • 118. Training neural networks <ul><li>Trained by adapting the weights </li></ul><ul><li>Popular algorithm: backpropagation </li></ul><ul><ul><li>minimizing error through gradient descent </li></ul></ul><ul><ul><li>principle: output error of a layer is attributed to </li></ul></ul><ul><ul><ul><li>1: weights of connections in that layer </li></ul></ul></ul><ul><ul><ul><ul><li>adapt these weights </li></ul></ul></ul></ul><ul><ul><ul><li>2: inputs of that layer (except if first layer) </li></ul></ul></ul><ul><ul><ul><ul><li>“ backpropagate” error to these inputs </li></ul></ul></ul></ul><ul><ul><ul><ul><li>now use same principle to adapt weights of previous layer </li></ul></ul></ul></ul><ul><ul><li>Iterative process, may be slow </li></ul></ul>
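A minimal backpropagation loop for a 2-input, 2-hidden-unit, 1-output sigmoid network on XOR (a sketch: the architecture, initial weights and learning rate are our own illustrative choices, not from the course):

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

# fixed asymmetric initial weights; last entry of each row is the bias
wh = [[0.5, -0.4, 0.1], [-0.3, 0.6, -0.2]]   # hidden layer: 2 units
wo = [0.7, -0.5, 0.05]                       # output unit

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
    o = sig(wo[0] * h[0] + wo[1] * h[1] + wo[2])
    return h, o

def sq_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

initial_error = sq_error()
lr = 0.5
for _ in range(5000):
    for x, y in DATA:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)                  # output-layer delta
        d_h = [d_o * wo[j] * h[j] * (1 - h[j]) for j in range(2)]  # backpropagate
        for j in range(2):                           # update output weights
            wo[j] -= lr * d_o * h[j]
        wo[2] -= lr * d_o
        for j in range(2):                           # update hidden weights
            wh[j][0] -= lr * d_h[j] * x[0]
            wh[j][1] -= lr * d_h[j] * x[1]
            wh[j][2] -= lr * d_h[j]
final_error = sq_error()
```

Gradient descent on the squared error drives the weights toward a configuration with lower error; with these settings the error after training is below its initial value.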
  • 119. Properties of neural networks <ul><li>Useful for modelling complex, non-linear functions of numerical inputs &amp; outputs </li></ul><ul><ul><li>symbolic inputs/outputs representable using some encoding, cf. true/false = 1/-1 </li></ul></ul><ul><ul><li>2 or 3 layer networks can approximate a huge class of functions (if enough neurons in hidden layers) </li></ul></ul><ul><li>Robust to noise </li></ul><ul><ul><li>but: risk of overfitting! (because of high expressiveness) </li></ul></ul><ul><ul><ul><li>may happen when training for too long </li></ul></ul></ul><ul><ul><li>usually handled using e.g. validation sets </li></ul></ul>
  • 120. <ul><li>All inputs have some effect </li></ul><ul><ul><li>cf. decision trees: selection of most important attributes </li></ul></ul><ul><li>Explanatory power of ANNs is limited </li></ul><ul><ul><li>model represented as weights in network </li></ul></ul><ul><ul><li>no simple explanation why the network makes a certain prediction </li></ul></ul><ul><ul><ul><li>contrast with e.g. trees: can give a “rule” that was used </li></ul></ul></ul>
  • 121. <ul><li>Hence, ANNs are good when </li></ul><ul><ul><li>high-dimensional input and output (numeric or symbolic) </li></ul></ul><ul><ul><li>interpretability of model unimportant </li></ul></ul><ul><li>Examples: </li></ul><ul><ul><li>typical: image recognition, speech recognition, … </li></ul></ul><ul><ul><ul><li>e.g. images: one input per pixel </li></ul></ul></ul><ul><ul><ul><li>see http://www.cs.cmu.edu/~tom/faces.html for illustration </li></ul></ul></ul><ul><ul><li>less typical: symbolic problems </li></ul></ul><ul><ul><ul><li>cases where e.g. trees would work too </li></ul></ul></ul><ul><ul><ul><li>performance of networks and trees then often comparable </li></ul></ul></ul>
  • 122. To remember <ul><li>Perceptrons, neural networks: </li></ul><ul><ul><li>inspiration </li></ul></ul><ul><ul><li>what they are </li></ul></ul><ul><ul><li>how they work </li></ul></ul><ul><ul><li>representation power </li></ul></ul><ul><ul><li>explanatory power </li></ul></ul>
