# Using Qualitative Knowledge in Numerical Learning

• I’ll give an example of the learning problem for the QUIN algorithm. The red points in the figure are example points for the function z = … These are noisy examples; QUIN has to be able to learn from noisy data. We can see some qualitatively similar regions: there are four qualitatively different regions. These examples are the learning examples for QUIN: z is the class variable, and x and y are the attributes.
• From these learning examples, QUIN induces the following qualitative tree, which defines a partition of the attribute space into areas with a common behaviour of the class variable. The leaves contain QCFs. For example, the rightmost leaf, which applies when x and y are both positive, says that z is … We say that z is …
• A basic algorithm for learning qualitative trees uses MDL to learn the … QCF, that is, the QCF that fits the examples best. To learn a qualitative tree, a top-down greedy algorithm similar to decision-tree learning algorithms is used: … QUIN is a heuristic improvement of this basic algorithm that also considers the consistency and prox…
• The results given for ZooChange are multiplied by 1000 (the actual values are 1000 times smaller).
• The improvements of Q2 are even more obvious on the INTEC wheel model. The blue line denotes the time behaviour of the toe angle alpha on the most difficult test trace.
• The red line is alpha as predicted by LWR,
• and the orange line is alpha as predicted by M5.
• The green line corresponds to the Q2 prediction learned from the same data. Q2 clearly has the best numerical fit. With other state-of-the-art numerical predictors, too, qualitative errors are clearly visible.
• To evaluate the accuracy benefits of Q2 learning, we compared … Because QFILTER optimally adjusts a base-learner’s predictions to be consistent with a qualitative tree, the differences … We experimented with three base-learners, using our implementation of model and regression trees. RRE is the root mean squared error normalized by the root mean squared error of the average class value.
• Now I will describe the experiments with 5 UCI and 3 dynamic domains. We used the five smallest data sets from the UCI repository with a majority of continuous attributes. A further reason for choosing these data sets is that Quinlan gives results for M5 and several other regression methods on them, which enables a better comparison of Q2 to other methods. The other three data sets are from dynamic domains where QUIN has typically been applied previously, to explain the underlying control skill and to use the induced qualitative models to control a dynamic system. Until now, it was not possible to measure the numerical accuracy of the learned qualitative trees or to compare it to other learning methods. The data set AntiSway was used in reverse-engineering an industrial gantry crane controller. This so-called anti-sway crane is used in metallurgical companies to reduce the swing of the load and increase the productivity of slab transportation. The data sets CraneSkill1 and CraneSkill2 are the logged data of two experienced human operators controlling a crane simulator. Such control traces are typically used to reconstruct the underlying operator’s control skill. The learning task is to predict the velocity of the crane trolley given the position of the trolley, the rope angle and its velocity.
• The graph gives the RREs of LWR and Q2 on these 8 datasets using 10-fold cross-validation. Q2 is much better in all domains except the AutoMPG domain.
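The RRE measure used in these comparisons can be sketched directly (a minimal illustration; the function name is ours, not from the talk):

```python
import math

def rre(actual, predicted):
    """Relative root mean squared error: the RMSE of the predictor,
    normalized by the RMSE of always predicting the mean class value."""
    n = len(actual)
    mean = sum(actual) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    rmse_mean = math.sqrt(sum((a - mean) ** 2 for a in actual) / n)
    return rmse / rmse_mean
```

By construction, a predictor that always outputs the mean class value has RRE 1, and lower is better.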
### Using Qualitative Knowledge in Numerical Learning

1. USING QUALITATIVE KNOWLEDGE IN NUMERICAL LEARNING. Ivan Bratko, Faculty of Computer and Information Science, University of Ljubljana, Slovenia
2. THIS TALK IS ABOUT: AUTOMATED MODELLING FROM DATA WITH MACHINE LEARNING; COMBINING NUMERICAL AND QUALITATIVE REPRESENTATIONS
3. BUILDING MODELS FROM DATA: data from the observed system → machine learning / numerical regression → model of the system
4. EXAMPLE: POPULATION DYNAMICS. A lake with zooplankton, phytoplankton and nutrient nitrogen. Variables in the system: Nut, Phyto, Zoo
5. POPULATION DYNAMICS. Observed behaviour in time. Data provided by Todorovski & Džeroski
6. PRIOR KNOWLEDGE. We would like our modelling methods to make use of the expert’s prior knowledge (possibly qualitative): Phytoplankton feeds on Nutrient, Zooplankton feeds on Phytoplankton. Nutrient → Phyto → Zoo
7. QUALITATIVE DIFFICULTIES OF NUMERICAL LEARNING. Learn the time behaviour of the water level: h = f(t, initial_outflow)
8. TIME BEHAVIOUR OF WATER LEVEL. Initial_outflow = 12.5
9. VARYING INITIAL OUTFLOW. Initial_outflow = 12.5, 11.25, 10.0, 8.75, 6.25
10. PREDICTING WATER LEVEL WITH M5. Initial_outflow = 12.5, 11.25, 10.0, 8.75, 7.5, 6.25. The M5 prediction is qualitatively incorrect: the water level cannot increase
11. QUALITATIVE ERRORS OF NUMERICAL LEARNERS. Experiments with regression (model) trees (M5; Quinlan 92), LWR (Atkeson et al. 97) in Weka (Witten & Frank 2000), neural nets, ... Qualitative errors: the water level should never increase; the water level should not be negative. An expert might accept numerical errors, but such qualitative errors are particularly disturbing
12. Q2 LEARNING AIMS AT OVERCOMING THESE DIFFICULTIES
13. Q2 LEARNING (Šuc, Vladušič, Bratko; IJCAI’03, AIJ 2004, IJCAI’05). Aims at overcoming these difficulties of numerical learning. Q2 = Qualitatively faithful Quantitative learning. Q2 makes use of qualitative constraints
14. QUALITATIVE CONSTRAINTS FOR WATER LEVEL. For any initial outflow: the level is always decreasing with time. For any time point: the greater the initial outflow, the greater the level
15. SUMMARY OF Q2 LEARNING. Standard numerical learning approaches make qualitative errors; as a result, numerical predictions are qualitatively inconsistent with expectations. Q2 learning (Qualitatively faithful Quantitative prediction) is a method that enforces qualitative consistency. The resulting numerical models enable clearer interpretation and also significantly improve quantitative prediction
16. IDEA OF Q2: first find qualitative laws in the data, then respect these qualitative laws in numerical learning
17. CONTENTS OF REST OF TALK. Building blocks of Q2 learning: ideas from Qualitative Reasoning; algorithms QUIN, QFILTER, QCGRID. Experimental analysis. Applications: car modelling, ecological modelling, behavioural cloning (operating a crane, flying an aircraft)
18. HOW CAN WE DESCRIBE QUALITATIVE PROPERTIES? We can use concepts from the field of qualitative reasoning in AI. Related terms: qualitative physics, naive physics, qualitative modelling
19. QUALITATIVE MODELLING IN AI. Naive physics, as opposed to "proper physics". Qualitative modelling, as opposed to quantitative modelling
20. ESSENCE OF NAIVE PHYSICS. Describe physical processes qualitatively, without numbers or exact numerical relations. "Naive physics", as opposed to "proper physics". Close to common-sense descriptions
21. EXAMPLE: BATH TUB. What will happen? The amount of water will keep increasing, and so will the level, until the level reaches the top
22. EXAMPLE: U-TUBE. What will happen? Level La will be decreasing, and Lb increasing, until La = Lb
23. QUALITATIVE REASONING ABOUT U-TUBE. The total amount of water in the system is constant. If La > Lb, there is flow from A to B. The flow causes the amount in A to decrease and the amount in B to increase. All changes in time happen continuously and smoothly
24. QUALITATIVE REASONING ABOUT U-TUBE. In any container: the greater the amount, the greater the level. So La will keep decreasing and Lb increasing
25. QUALITATIVE REASONING ABOUT U-TUBE. La will keep decreasing and Lb increasing, until they equalise
26. THIS REASONING IS VALID FOR ALL CONTAINERS OF ANY SHAPE AND SIZE, REGARDLESS OF ACTUAL NUMBERS!
27. WHY REASON QUALITATIVELY? Because it is easier than reasoning quantitatively. Because it is easy to understand, which facilitates explanation. We want to exploit these advantages in ML
28. RELATION BETWEEN AMOUNT AND LEVEL. The greater the amount, the greater the level: A = M+(L), i.e. A is a monotonically increasing function of L
29. MONOTONIC FUNCTIONS. Y = M+(X) specifies a family of functions
30. MONOTONIC QUALITATIVE CONSTRAINTS, MQCs. A generalisation of monotonically increasing functions to several arguments. Example: Z = M+,-(X, Y): Z increases with X, and decreases with Y. More precisely: if X increases and Y stays unchanged, then Z increases
31. EXAMPLE: BEHAVIOUR OF GAS. Pressure = M+,-(Temperature, Volume): pressure increases with temperature and decreases with volume
32. Q2 LEARNING. Numerical data → induce qualitative constraints (QUIN) → qualitative-to-quantitative transformation (Q2Q; one possibility: QFILTER) → a numerical predictor that respects the qualitative constraints and fits the data numerically
33. PROGRAM QUIN: INDUCING QUALITATIVE CONSTRAINTS FROM NUMERICAL DATA. Šuc 2001 (PhD thesis, also as book 2003); Šuc and Bratko, ECML’01
34. QUIN = Qualitative INduction. Numerical examples → QUIN → qualitative tree. A qualitative tree is similar to a decision tree, with qualitative constraints in the leaves
35. EXAMPLE PROBLEM FOR QUIN. Noisy examples: z = x² - y² + noise (st. dev. 50)
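A minimal sketch of how such a noisy training set could be generated (the function and parameter names are illustrative, not from QUIN itself):

```python
import random

def make_examples(n=400, noise_sd=50.0, seed=0):
    """Generate noisy learning examples for z = x^2 - y^2 + noise.

    Each example is a tuple (x, y, z): x and y are the attributes,
    z is the (noisy) class variable QUIN learns from.
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        x = rng.uniform(-50.0, 50.0)
        y = rng.uniform(-50.0, 50.0)
        z = x * x - y * y + rng.gauss(0.0, noise_sd)
        examples.append((x, y, z))
    return examples
```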
36. EXAMPLE PROBLEM FOR QUIN. In this region: z = M+,+(x, y)
37. INDUCED QUALITATIVE TREE FOR z = x² - y² + noise: split on x, then on y. If x ≤ 0: y ≤ 0 gives z = M-,+(x, y); y > 0 gives z = M-,-(x, y). If x > 0: y ≤ 0 gives z = M+,+(x, y); y > 0 gives z = M+,-(x, y)
38. QUIN ALGORITHM: OUTLINE. A top-down greedy algorithm (similar to induction of decision trees). For every possible split, find the "most consistent" MQC (minimum error-cost) for each subset of examples. Select the best split according to MDL
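The "most consistent MQC" step can be illustrated by exhaustively scoring sign vectors against the pairwise ordering constraints that MQCs impose. This is a simplified sketch that counts violated pairs; it is not QUIN's actual error-cost or MDL machinery, and all names are ours:

```python
from itertools import combinations, product

def mqc_violations(examples, signs):
    """Count pairs of examples that violate the MQC with the given signs.

    examples: tuples (x1, ..., xn, z); signs[i] in {+1, -1} says whether
    z should increase (+1) or decrease (-1) with attribute i.
    """
    bad = 0
    for a, b in combinations(examples, 2):
        # orient the pair so it is covered by the MQC: every attribute
        # moves in the direction that should make z increase
        for p, q in ((a, b), (b, a)):
            if all(s * (q[i] - p[i]) > 0 for i, s in enumerate(signs)):
                if q[-1] <= p[-1]:  # class value failed to increase
                    bad += 1
    return bad

def best_mqc(examples, n_attrs):
    """Pick the sign vector whose MQC violates the fewest pairs."""
    return min(product((1, -1), repeat=n_attrs),
               key=lambda s: mqc_violations(examples, s))
```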
39. Q2Q: QUALITATIVE TO QUANTITATIVE TRANSFORMATION
40. Q2Q EXAMPLE. If X < 5 then Y = M+(X), else Y = M-(X)
41. QUALITATIVE TREES IMPOSE NUMERICAL CONSTRAINTS. MQCs impose numerical constraints on class values between pairs of examples. y = M+(x) requires: if x1 > x2 then y1 > y2
42. RESPECTING MQCs NUMERICALLY. z = M+,+(x, y) requires: if x1 < x2 and y1 < y2, then z1 < z2
43. QFILTER: AN APPROACH TO Q2Q TRANSFORMATION. Šuc and Bratko, ECML’03
44. TASK OF QFILTER. Given: a qualitative tree; points with class predictions by an arbitrary numerical learner; learning examples (optionally). Modify the class predictions to achieve consistency with the qualitative tree
45. QFILTER IDEA. Force the numerical predictions to respect the qualitative constraints: find minimal changes of the predicted values so that the qualitative constraints become satisfied. "Minimal" = minimal sum of squared changes, which is a quadratic programming problem
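For a single constraint Y = M+(X) over points ordered by increasing X, this quadratic program reduces to isotonic regression, solvable by the pool-adjacent-violators algorithm. A sketch of that one-dimensional special case (illustrative only, not QFILTER's general solver):

```python
def qfilter_1d(preds):
    """Minimally adjust predictions (in the least-squares sense) so that
    they are non-decreasing: the 1-D special case of QFILTER's quadratic
    program, solved by pool-adjacent-violators.

    preds: numerical predictions ordered by increasing x, to be made
    consistent with y = M+(x).
    """
    blocks = []  # each block is [sum, count]; its value is the mean
    for v in preds:
        blocks.append([v, 1])
        # merge while the newest block's mean drops below the previous one's
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out
```

For example, `[1, 3, 2, 4]` violates monotonicity at the middle pair; pooling 3 and 2 into their mean 2.5 is the least-squares minimal repair.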
46. RESPECTING MQCs NUMERICALLY. Y = M+(X)
47. QFILTER APPLIED TO WATER OUTFLOW. The qualitative constraint that applies: h = M-,+(time, InitialOutflow). This could be supplied by a domain expert, or induced from data by QUIN
48. PREDICTING WATER LEVEL WITH M5. Initial_outflow = 7.5; M5 prediction
49. QFILTER’S PREDICTION. QFILTER predictions vs. true values
50. POPULATION DYNAMICS. An aquatic ecosystem with zooplankton, phytoplankton and nutrient nitrogen. Phyto feeds on Nutrient, Zoo feeds on Phyto. Nutrient → Phyto → Zoo
51. POPULATION DYNAMICS WITH Q2. Behaviour in time
52. PREDICTION PROBLEM. Predict the change in the zooplankton population: ZooChange(t) = Zoo(t + 1) - Zoo(t). Biologist’s rough idea: ZooChange = Growth - Mortality, where Growth = M+,+(Zoo, Phyto) and Mortality = M+(Zoo)
53. APPROXIMATE QUALITATIVE MODEL OF ZOO CHANGE. Induced from data by QUIN
54. EXPERIMENT WITH NOISY DATA. All results as MSE (Mean Squared Error), LWR vs. Q2 on the ZooChange domain: no noise 0.015 vs. 0.008; 5% noise 0.112 vs. 0.102; 20% noise 2.269 vs. 1.889
55. APPLICATIONS OF Q2 TO REAL ECOLOGICAL DATA. Growth of algae in the Lagoon of Venice. Plankton in Lake Glumsoe
56. Lake Glumsø. Location and properties: located in a sub-glacial valley in Denmark; average depth 2 m; surface area 266,000 m². Pollution: receives waste water from a community with 3000 inhabitants (mainly agricultural); high nitrogen and phosphorus concentrations in the waste water caused hypereutrophication; no submerged vegetation, low transparency of the water, oxygen deficit at the bottom of the lake
57. Lake Glumsø – data. Relevant variables for modelling: phytoplankton (phyto), zooplankton (zoo), soluble nitrogen (ns), soluble phosphorus (ps), water temperature (temp)
58. PREDICTION ACCURACY. Over all 40 experiments, Q2 was better than LWR in 75% of the test cases (than M5 in 83%). The differences were found significant (t-test) at the 0.02 significance level
59. OTHER ECOLOGICAL MODELLING APPLICATIONS. Predicting ozone concentrations in Ljubljana and Nova Gorica. Predicting flooding of the Savinja river. The Q2 model is by far superior to any predictor so far used in practice
60. CASE STUDY: INTEC’S CAR SIMULATION MODELS. Goal: simplify INTEC’s car models to speed up simulation. Context: the Clockwork European project (engineering design)
61. INTEC’s wheel model
62. Learning Manoeuvres. The learning manoeuvres were very simple: sinus bump, turning left, turning right. Road excitation; steering position
63. WHEEL MODEL: PREDICTING TOE ANGLE α
64. WHEEL MODEL: PREDICTING TOE ANGLE α
65. WHEEL MODEL: PREDICTING TOE ANGLE α
66. WHEEL MODEL: PREDICTING TOE ANGLE α. Qualitative errors; Q2-predicted alpha
67. BEHAVIOURAL CLONING. Given a skilled operator, reconstruct the human’s subcognitive skill
68. EXAMPLE: GANTRY CRANE. Control force; load; carriage
69. USE MACHINE LEARNING: BASIC IDEA. Observe the controller and the system (states and actions) to obtain an execution trace; a learning program then induces a reconstructed controller (a "clone")
70. CRITERIA OF SUCCESS. The induced controller description has to: be comprehensible; work as a controller
71. WHY COMPREHENSIBILITY? To help the user’s intuition about the essential mechanism and causalities that enable the controller to achieve the goal
72. SKILL RECONSTRUCTION IN CRANE. Control forces: Fx, FL. State: X, dX, φ, dφ, L, dL
73. CARRIAGE CONTROL. QUIN: dX_des = f(X, φ, dφ). Induced tree: if X < 20.7, dX_des = M+(X) (first the trolley velocity is increasing); if 20.7 ≤ X < 60.1, dX_des = M-(X) (from about the middle distance until the goal the trolley velocity is decreasing); if X ≥ 60.1, dX_des = M+(φ) (at the goal, reduce the swing of the rope by accelerating the trolley when the rope angle increases)
74. CARRIAGE CONTROL: dX_des = f(X, φ, dφ). Trees induced for two operators (S and L) share the splits X < 20.7 and X < 60.1 but differ in further splits (e.g. X < 29.3, dφ < -0.02) and leaves (e.g. M-(X), M-,+(X, φ), M+,+,-(X, φ, dφ)). This enables the reconstruction of individual differences in control styles
75. CASE STUDY IN REVERSE ENGINEERING: ANTI-SWAY CRANE
76. ANTI-SWAY CRANE. An industrial crane controller minimising load swing ("anti-sway crane"). Developed by M. Valasek (Czech Technical University, CTU). Reverse engineering of the anti-sway crane was a case study in the Clockwork European project
77. ANTI-SWAY CRANE OF CTU. Crane parameters: travel distance 100 m; height 15 m, width 30 m; load 80-120 tons. In daily use at the Nova Hut metallurgical factory, Ostrava
78. EXPLAINING HOW THE CONTROLLER WORKS. Load swinging to the right: accelerate the cart to the right to reduce the swing
79. EMPIRICAL EVALUATION. Compare the errors of base-learners and the corresponding Q2 learners; the differences between a base-learner and a Q2 learner are due only to the induced qualitative constraints. Experiments with three base-learners: Locally Weighted Regression (LWR), model trees, regression trees
80. Robot Arm Domain. A two-link, two-joint robot arm; link 1 is extendible: L1 ∈ [2, 10]. Y1 = L1 sin(θ1); Y2 = L1 sin(θ1) + 5 sin(θ1 + θ2). Derived attribute: θsum = θ1 + θ2. Four learning problems (of varying difficulty for Q2): A: Y1 = f(L1, θ1); B: Y2 = f(L1, θ1, θ2, θsum, Y1); C: Y2 = f(L1, θ1, θ2, θsum); D: Y2 = f(L1, θ1, θ2)
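The arm kinematics defining these four learning problems can be written down directly (a sketch; the function name is ours):

```python
import math

def robot_arm_example(L1, th1, th2):
    """Kinematics of the two-link arm from the Robot Arm domain
    (link 1 extendible, L1 in [2, 10]):
        Y1 = L1 * sin(th1)
        Y2 = L1 * sin(th1) + 5 * sin(th1 + th2)
    Returns (Y1, Y2, th_sum), where th_sum = th1 + th2 is the
    derived attribute used by learning problems B and C.
    """
    y1 = L1 * math.sin(th1)
    th_sum = th1 + th2
    y2 = y1 + 5 * math.sin(th_sum)
    return y1, y2, th_sum
```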
81. Robot Arm: LWR and Q2 at different noise levels. Problems A-D, each at 0, 5 and 10% noise. Q2 outperforms LWR on all four learning problems at all three noise levels
82. UCI and Dynamic Domains. The five smallest regression data sets from UCI. Dynamic domains: typical domains where QUIN was applied so far, to explain the control skill or to control the system; until now it was not possible to measure the accuracy of the learned concepts (qualitative trees). AntiSway: logged data from an anti-sway crane controller. CraneSkill1, CraneSkill2: logged data of experienced human operators controlling a crane
83. UCI and Dynamic Domains: LWR compared to Q2. Similar results with the other two base-learners. Q2 is significantly better than the base-learners in 18 out of 24 comparisons (24 = 8 datasets × 3 base-learners)
84. Q2 - CONCLUSIONS. A novel approach to numerical learning that can take qualitative prior knowledge into account. Advantages: qualitative consistency between the induced models and the data, important for the interpretation of the induced models; improved numerical accuracy of predictions
85. Q2 TEAM + ACKNOWLEDGEMENTS. Q2 learning, QUIN, QFILTER, QCGRID (AI Lab, Ljubljana): Dorian Šuc, Daniel Vladušič. Car modelling data: Wolfgang Rulka (INTEC, Munich), Zbinek Šika (Czech Technical Univ.). Population dynamics data: Sašo Džeroski, Ljupčo Todorovski (J. Stefan Institute, Ljubljana). Lake Glumsoe: Sven Joergensen, Boris Kompare, Jure Žabkar, D. Vladušič
86. RELEVANT PAPERS. Clark and Matwin 93 (also used qualitative constraints in numerical predictions); Šuc, Vladušič and Bratko, IJCAI’03; Šuc, Vladušič and Bratko, Artificial Intelligence Journal, 2004; Šuc and Bratko, ECML’03; Šuc and Bratko, IJCAI’05