Chapter 13 Data Mining - 2 -


  1. Chapter 13 Data Mining
  2. Recommended References
     - This lecture assumes some knowledge of learning systems. We recommend:
       - P. Langley: Elements of Machine Learning. Morgan Kaufmann, 1996.
       - T. M. Mitchell: Machine Learning. McGraw-Hill, 1997.
       - R. Bergmann: Slides on "Lernende Systeme"; also: M. M. Richter: Lernende Systeme, Vorlesungsmanuskript, Kaiserslautern.
       - R. Bergmann, S. Stahl: Similarity Measures for Object-Oriented Case Representations. Proceedings of the European Workshop on Case-Based Reasoning, EWCBR'98, 1998.
     - Data mining references:
       - P. Adriaans, D. Zantinge: Data Mining. Addison-Wesley, 1996.
       - Th. Reinartz: Focusing Solutions for Data Mining. Springer Lecture Notes in AI 1623, 1998.
       - S. M. Weiss, N. Indurkhya: Predictive Data Mining. Morgan Kaufmann, 1997.
  3. Data Mining, Learning and Performance (1)
     - The ultimate goal is to achieve an optimal performance of some process P.
     - The meaning of "optimal" is given by the user's utility.
     - In order to achieve an optimal performance, certain knowledge is necessary. This knowledge may be implicit in the available data and has to be made usable, i.e. it has to be learned.
     - For learning one needs to know:
       - What precisely are the goals?
       - How to measure the achievement of the goals?
       - How to react if goals are not achieved?
  4. Data Mining, Learning and Performance (2)
     (Diagram: the user's view on the performance of P is compared with a formal evaluation function F for P.)
     - Coincidence of the user's view on the performance and the result of the evaluation is wanted.
     - Often the coincidence can only be approximated.
     - The performance of the process P is tested in experiments which generate certain data D. These data are the input to some evaluation function F.
  5. Data Mining, Learning and Performance (3)
     (Diagram: a process P with knowledge K generates data D in an experiment; the data yield an evaluation result and an analysis result; data mining analyzes the data and the evaluation result; learning updates them into an improved process P' and knowledge K'.)
  6. KDD: Knowledge Discovery in Databases
     - Knowledge Discovery in Databases is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad).
     - Data mining is often used as a synonym for KDD, but sometimes restricted to a crucial step in KDD:
     - The step of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns over the data.
  7. KDD Phases
     - Business understanding
     - Data understanding
     - Data preparation
     - Data exploration
     - Data mining
     - Evaluation
     - Deployment
  8. Requirement Analysis for KDD Processes
     (Diagram: the requirement analysis covers application properties - application requirements and application goals - data characteristics - volume, quality, representation - as well as domain characteristics and system context characteristics.)
  9. Data Mining and the Pre-Sales Process
     - The purpose of data mining for the pre-sales process is to get knowledge which allows the supplier to attract more customers from the intended target groups.
     - The knowledge obtained can concern
       - the market in general
       - the market with respect to certain products
       - the behavior of certain customer classes:
         - Marketing campaign management: How do customers react to marketing actions?
         - Basket analysis: What do customers typically buy?
       - individual customers and their behavior
     - The general strategy for data mining of a company is the strategic model, which in turn is influenced by feedback from the results obtained.
 10. Data Mining and the Sales Process
     - The purpose of data mining for the sales process is to get knowledge which allows the supplier to improve the quality of his processes in such a way that customers who have contacted the supplier
       - are guided efficiently through the sales process
       - make a positive purchase decision.
     - This includes
       - offering the products appropriately
       - offering adequate alternatives
       - guiding efficiently through the dialogue.
     - This influences the diagnostic and the action model.
 11. Data Mining in the After-Sales Process
     - The purpose of data mining for the after-sales process is to get knowledge which allows the supplier to deal with customer questions and complaints more efficiently.
     - The goals are to
       - improve the recognition of reasons for calls
       - avoid repeated calls
       - reach solutions efficiently.
     - Useful knowledge is mainly contained in experiences, and therefore the collection of experiences is central.
     - Experiences are best stored as cases in CBR.
 12. The Starting Point: Data (1)
     - Data have a certain quality
       - correctness and completeness problems
     - It is essential to address the problem of data quality: if you feed garbage into the system, you will get garbage out!
       - The insights obtained from the data lead to incorrect consequences (wrong data).
       - The insights are too general to be useful (incomplete data).
 13. Starting Point: Data (2)
     - Data may be noisy.
     - Incorrect data:
       - wrong values for the attributes
       - incorrect classification
       - duplicate data
     - Incomplete data:
       - missing values for some attributes
       - missing attributes
       - missing objects
     - Data not usable:
       - free text is difficult to cope with
       - terminology not understood
       - not suitable for the intended goals
 14. Starting Point: Data (3)
     - Knowledge management task: quality management!
     - Data sampling:
       - Define the goals.
       - Quality is more important than quantity.
       - Make use of existing information sources to ensure completeness of the base.
       - Create your own sources.
       - Data have to arrive in time: data which are too old are not useful (updating problem).
     - See chapter 15.
 15. Data for what Knowledge?
     - The way data are obtained depends on the type of knowledge one is interested in.
     - We distinguish three main types:
       - Knowledge about some market. This will influence the strategic model, the diagnostic model and the action model of the supplier.
       - Knowledge about individual customers. It is used to treat the customer individually, e.g. by making special offers.
       - Knowledge about technical objects: their quality, how to operate them, etc.
     - With each type of knowledge, different goals of the supplier and different data sources are connected.
 16. Data Warehouse
     - Idea: store knowledge like physical objects.
     - This allows access, delivery and manipulation as for physical objects.
     - A data warehouse
       - gives access to knowledge for immediate use
       - makes knowledge available for improving quality.
     - The data warehouse is managed by the knowledge manager.
 17. From Data to Knowledge (1)
     (Diagram: the ladder from data to wisdom, with understanding increasing from association via relationships to connectivity.)
     - Data: facts - answer "what?"
     - Information: description, definition, perspective - answers "what, when, where, who?" (understand relations)
     - Knowledge: strategy, practice, method - answers "how, why?" (understand models, rules, patterns)
     - Wisdom: insight, moral - answers "implications?" (understand principles)
 18. From Data to Knowledge (2)
     - Data are raw products.
     - Information pieces are semifinished products.
     - Knowledge and wisdom are high-quality products (obtained by abstraction).
     - But: when using knowledge, access to actual data and information is necessary. How to do this?
 19. From Data to Knowledge (3)
     - It is a knowledge management task to provide for each application of knowledge the needed actual data:
     (Diagram: task to perform - knowledge applied - data needed.)
 20. From Data to Knowledge (4)
     - Only explicit knowledge can be used directly.
     - Explicit knowledge is directly formulated:
       - prescriptions, rules, norms
       - suggestions, ways to behave
       - general laws, exceptions
       - hierarchical relations
       - properties, constraints
       - ...
 21. From Data to Knowledge (5)
     - Implicit knowledge cannot be used directly.
     - Implicit knowledge is:
       - contained in data and information
       - often hidden and difficult to discover
       - not directly applicable
       - tacit ("silent") knowledge
 22. From Data to Knowledge (6)
     - Implicit knowledge:
       - Sales statistics contain implicit knowledge about customer preferences.
       - Databases about accidents contain implicit knowledge about dangerous situations.
       - Test data contain implicit knowledge about quality.
 23. From Data to Knowledge (7)
     - Data and pieces of information have to be correct (or exact tolerances have to be given).
     - Knowledge need not be totally correct in order to be useful:
       - probabilities, evidences
       - heuristics
       - rules of thumb
       - vague statements ("this is not reliable", "the weather there is not nice in November")
       - fuzzy statements
     - A correct statement in a complex situation may even be useless because it is too complicated.
 24. Wisdom
     - Wisdom is usually referred to as a very advanced type of knowledge.
     - It refers to the understanding of basic background principles.
     - Only in the exact sciences can it be expressed in precise terms.
     - Wisdom is of relevance for the strategic model (which is mainly informal).
 25. Make Knowledge Explicit (1)
     - General properties of products need to be represented differently in different situations:
       - Vacations in Tirol are
         - nice and warm (for persons from Alaska)
         - nice and cool (for persons from Brazil)
       - A car
         - is good and speedy on small and hilly roads (Germany)
         - is comfortable (USA)
 26. Make Knowledge Explicit (2)
     - Use the properties of a product in order to
       - guarantee the satisfaction of different safety regulations
       - satisfy different types of demands
       - respect different types of sensitivities.
     - Describe these properties in different ways.
     - For such purposes one has to extract the specific views from the overall knowledge.
 27. Reliability of Knowledge (1)
     (Diagram: the extension of knowledge, where darkness indicates reliability.) Knowledge can be:
     - obtained by direct retrieval
     - obtained by logical deduction
     - obtained by approximative reasoning
     - obtained by CBR
     - obtained by learning and data mining.
     This assumes that the underlying data and information bases are reliable.
 28. Reliability of Knowledge (2)
     - This schema is only a rough and general indication.
     - The success in applications depends heavily on, e.g.,
       - correctness, amount and typicality of the data
       - adequate choice of the specific method and the precision with which it is applied
       - the number of experiments carried out
       - testing of the results.
     - Therefore the success depends on the invested effort.
     - There is again the utility question: costs of obtaining knowledge versus gain of applying knowledge.
 29. Sources of Data
     - General analysis, public domain:
       - accessible to everyone, but often widely distributed and hard to collect
     - General analysis, performed by the company itself or some paid institution:
       - expensive, but can be tailored to the needs of the company
     - History of customers:
       - requires customers who buy regularly
       - has to be updated regularly
     - Internal analysis of customer behavior:
       - reaction to changes in
         - prices
         - dialogue strategies, etc.
     - Cases:
       - collected experiences, failure statistics, etc.
 30. History of Customers
     - Knowledge about the behavior of individual customers should in general not be obtained by asking personal questions but rather automatically.
     - One possibility is to do this at the cashier if the customer pays by customer or credit card. For e-commerce, a method is to record the data when the customer orders directly over the net.
     - There may be certain restrictions by law.
     - The history can contain, among others,
       - the main products ordered and their quantities
       - times or events when ordered (weekend, holidays, time of the year, ...).
     - The history should contain (if possible) information about the customer (for the description of customer classes):
       - age, sex, profession, place of residence, ...
 31. Cases (1)
     - In the after-sales process, histories have to be recorded if they are available; they are the material for the cases.
     - Often there are not enough cases available to cover all or most of the relevant problem situations.
     - In this situation artificial cases can be created, which is done by variation of relevant parameters.
     - Both collecting and creating cases require some a priori understanding of the tasks to be performed.
     - To build a CBR system one has to define the four containers: vocabulary, case base, similarity measure and solution transformation.
 32. Cases (2)
     - There are commercial systems like CBR-Works which support the collection and representation of cases (see also chapters 3 and 12).
     - A general methodology for developing CBR systems for applications in the help desk area is described in:
       - R. Bergmann, S. Breen, M. Göker, M. Manago, S. Wess: Developing Industrial Case-Based Reasoning Applications - The INRECA Methodology. Springer Lecture Notes in AI 1612, 1999.
 33. From Data to Information Using Knowledge
     - Raw data becomes valuable information by using knowledge. Example:
       - Query: Customer: Company X, architects. PC component: Matrox G100?
       - Raw data about Company X: 1x PC Dual-Pentium XL437, sold 4/97; 2x ML 649 (P233/124/9.6), sold 5/97; SW: high-end, CAD & 3D visualization, TCP/IP network, ...
       - Raw data about the G100: entry-level graphics card, AGP slot necessary, very good price/power relation, limited 3D power, ...
       - Resulting information: "The G100 is of little use for Company X because the architects use high-end 3D graphics software. The G100 is an entry-level graphics card and additionally needs an AGP slot, which is not built into the current hardware configuration of the PCs."
 34. Three Main Phases
     - Measurement: collects numerical data about the intended utility.
     - Evaluation: extracts statements about the utility from the data (excellent, good, sufficient, improved, insufficient, ...).
     - Sensitivity analysis: extracts the influence factors responsible for the result of the evaluation.
     - The learning and data mining tools can
       - use the results of all three phases
       - improve these phases.
 35. Measurement
     - The utility is often only informal and implicit in the head of the user.
     - The measurement problem is
       - to map it onto quantitative magnitudes
       - to define procedures which measure these quantities.
     - The measurement procedures are often difficult to define and expensive.
     - The parameters in the procedures have to be named precisely such that the procedure can be applied repeatedly (as, e.g., in the exact sciences).
 36. Evaluation
     - The evaluation of the measured data has to close the gap between the data and the utility of the user:
       - The evaluation predicate should (at least ideally) coincide with the predicate which the user assigns to the performance (see also the relation between similarity and utility in chapter 6).
     - The evaluation should contain a statement about its reliability, e.g.
       - tolerances for errors
       - error probabilities
       - confidence intervals.
     - The reliability depends heavily on the input data (volume, representativeness, correctness, noise, etc.).
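As an illustration of such reliability statements, the following minimal sketch (all scores are hypothetical) computes a normal-approximation confidence interval for an evaluation score from repeated measurements:

```python
import math

def confidence_interval(samples, z=1.96):
    """Normal-approximation confidence interval for the mean of
    repeated evaluation measurements (default z = 1.96, ~95%)."""
    n = len(samples)
    mean = sum(samples) / n
    # sample variance (n - 1 in the denominator)
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Hypothetical evaluation scores from eight repeated experiments:
scores = [0.82, 0.79, 0.85, 0.80, 0.84, 0.78, 0.83, 0.81]
low, high = confidence_interval(scores)
print(f"evaluation score in [{low:.3f}, {high:.3f}] with ~95% confidence")
```

The interval is centered on the sample mean; more experiments (a larger n) shrink it, which is exactly the dependence on data volume mentioned above.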
 37. Sensitivity Analysis
     - This is the most difficult and the most important phase.
     - The evaluation is given as a function Ev(d1, ..., dn), where the di are data obtained by the measurement.
     - The data di are, on the other hand, an indirect consequence of parameters pi which can be directly influenced by the person who designs the process (or product etc.) which is evaluated:
       - Ev(d1, ..., dn) = Influence(p1, ..., pm)
       - where the function Influence is in general unknown.
     - We call a parameter pi an (important) influence factor if small variations of pi result in large variations of the function Influence(..., pi, ...).
     - The determination of influence factors is the basis for learning improvements of the object under consideration.
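The definition of an influence factor can be sketched numerically: if Influence were known (here an assumed toy function for illustration), the sensitivity of each parameter could be estimated by finite differences.

```python
def sensitivity(influence, params, eps=1e-4):
    """Estimate d(Influence)/d(p_i) for each parameter by a
    central finite difference around the given parameter vector."""
    base = list(params)
    grads = []
    for i in range(len(base)):
        up = base.copy();   up[i] += eps
        down = base.copy(); down[i] -= eps
        grads.append((influence(up) - influence(down)) / (2 * eps))
    return grads

# Assumed toy influence function: p1 matters a lot, p3 hardly at all.
def influence(p):
    return 5.0 * p[0] + 0.5 * p[1] - 0.01 * p[2]

factors = sensitivity(influence, [1.0, 1.0, 1.0])
# The parameter with the largest |gradient| is the most important influence factor.
most_important = max(range(len(factors)), key=lambda i: abs(factors[i]))
print(most_important)  # parameter index 0
```

In practice Influence is unknown, so the variations have to come from real experiments rather than function evaluations; the finite-difference idea stays the same.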
 38. QMCB: Quantitative Models of Consumer Behavior
     - Goal: the calculation and prediction of meaningful market diagnostics on the basis of data.
     - A possible approach: integration of statistical methods and models as well as econometric models in a knowledge-based system.
     - Tasks:
       - descriptive (a posteriori) analysis of data
       - model-based simulation of future buying behavior.
     - The special types of tasks require special data representations for useful evaluations.
 39. Different Types of Forecasts
     - The types vary with respect to the knowledge they contain and the usefulness of the prognosis. From the QMCB one should be able to compute directly (examples):
       - market share of a product
       - product purchase probability, expectation and variance
       - brand purchase probability, expectation and variance
       - heterogeneity in purchase rates.
     - Indirect consequences:
       - relative product attraction
       - relative brand attraction
       - etc.
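A minimal sketch of the first group of diagnostics, computed from a hypothetical sample of per-customer purchase counts for one product:

```python
# Hypothetical purchase counts of one product, one entry per customer:
purchases = [0, 2, 1, 0, 0, 3, 1, 0, 2, 1]

n = len(purchases)
# Product purchase probability: fraction of customers buying at least once.
p_buy = sum(1 for q in purchases if q > 0) / n
# Expectation and (population) variance of the purchase rate.
expectation = sum(purchases) / n
variance = sum((q - expectation) ** 2 for q in purchases) / n
print(p_buy, expectation, variance)
```

Real QMCB systems would fit a parametric purchase model to such counts rather than report raw sample moments, but the quantities being estimated are the same.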
 40. Example System: KVASS (1)
     - KVASS (KaufVerhaltensAnalyse und SimulationsSystem) is an example of a model- and knowledge-based data analysis system.
       - Reference: R. Decker: Knowledge Based Selection and Application of Quantitative Models of Consumer Behavior. In: Information Systems and Data Analysis (eds. H. H. Bock, W. Lenski, M. M. Richter), Springer Verlag 1994, pp. 405-414.
     - Basic idea: model data with a predefined set of descriptors. These are essentially attributes with their domains, e.g.
       - estimation method: {undefined, least squares, ..., moments}
       - type of recording: {undefined, diary, ..., interview}
 41. Example System: KVASS (2)
     - Classes of descriptors are:
       - essential aspects for a general description (type of recording, market share, etc.)
       - temporal aspects (periods of data collection, etc.)
       - information on the models used for computation (e.g. the estimation method)
       - technical descriptors for the interpretation of the representation (e.g. ordinal, nominal, etc.).
     - Combinations of descriptors allow one to represent complex situations; this can be translated into more understandable relational representations (see chapter 4).
 42. Example System: KVASS (3)
     - The system describes essentially a measurement procedure, i.e. the first phase.
     - The purpose is not to make an evaluation of the success of a product or process of the company.
     - The correctness condition is that the results provided by the analysis of the system coincide with reality.
     - The results of the system are, on the other hand, important for the sensitivity analysis concerning the success or failure of processes or products designed by the company.
 43. Causal Analysis (1)
     - Causal analysis is a kind of sensitivity analysis. Task: make causal relations explicit.
     - Suppose the Xi are activities and the Yi are sales results. Notation:
       - Xi -(+)-> Yi: positive influence
       - Xi -(-)-> Yi: negative influence
       - no arrow: neutral.
     - Initial situation: a suspected model for the influence.
     - Then either an experiment (variation of the Xi and measurement of the Yi) or an analysis across several companies.
     - Data analysis: e.g. by analyzing the covariance structure.
     - Result: a revised and refined model.
 44. Causal Analysis (2)
     - Example (artificially created):
       - X1: effort in catalogues
       - X2: effort in dynamic forms
       - X3: effort in recording and applying customer histories
       - Y1: return from book sales
       - Y2: return from high-tech product sales
     - Initial model based on qualitative knowledge (all influences positive): X1 -(+)-> Y1, X3 -(+)-> Y1, X1 -(+)-> Y2, X2 -(+)-> Y2, X3 -(+)-> Y2.
 45. Causal Analysis (3)
     - Revised model: as before, but X3 -(-)-> Y1 is now a negative influence.
     - A possibility for arriving at a refined quantitative model is to assume a linear model (which may be justified by some knowledge). This leads to the linear equations
       - Y1 = a11*X1 - a13*X3
       - Y2 = a21*X1 + a22*X2 + a23*X3.
     - The solutions for the coefficients aik determine a quantitative model.
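The coefficients of such a linear model can be determined by least squares. The following sketch fits the first equation, Y1 = a11*X1 - a13*X3, by solving its 2x2 normal equations directly; the observations are synthetic, generated from assumed true coefficients a11 = 2 and a13 = 0.5.

```python
def fit_two_coeffs(xs1, xs3, ys):
    """Least-squares fit of Y1 = a11*X1 - a13*X3 by solving
    the 2x2 normal equations with Cramer's rule.
    Design columns are X1 and -X3 (note the model's minus sign)."""
    s11 = sum(a * a for a in xs1)
    s13 = sum(a * (-b) for a, b in zip(xs1, xs3))
    s33 = sum(b * b for b in xs3)
    b1 = sum(a * y for a, y in zip(xs1, ys))
    b3 = sum((-b) * y for b, y in zip(xs3, ys))
    det = s11 * s33 - s13 * s13
    a11 = (b1 * s33 - s13 * b3) / det
    a13 = (s11 * b3 - s13 * b1) / det
    return a11, a13

# Synthetic observations generated with a11 = 2, a13 = 0.5:
xs1 = [1.0, 2.0, 3.0, 4.0]
xs3 = [4.0, 3.0, 2.0, 1.0]
ys = [2.0 * a - 0.5 * b for a, b in zip(xs1, xs3)]
a11, a13 = fit_two_coeffs(xs1, xs3, ys)
print(a11, a13)  # recovers 2.0 and 0.5
```

With noisy real data the recovered coefficients would only approximate the true ones, and their signs then confirm or refute the suspected causal model.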
 46. Quality Management: Internal Analysis
     - As the first step, the goals of the analysis have to be defined:
       - Where are the weak points?
       - What has to be improved or optimized?
       - Where are improvements possible?
     - This is part of the requirements analysis.
     - Further steps include:
       - identify groups of objects with similar quality characteristics
       - identify properties of these groups
       - describe these groups
       - draw conclusions for quality improvements.
 47. Example: Quality Analysis for Dialogues (1)
     - Classification of dialogues (evaluation by the user):
       - successfully finished
       - quit because no adequate product was available
       - quit for unknown reasons: this is the failure class.
     - Measurement:
       - has to collect data which arise during the dialogue
       - These data may not be recorded during an ordinary dialogue, e.g.
         - which questions raised by the customer were dealing with a certain property type of the product
         - which actions were performed by customers from a certain customer class.
       - The quality of the measured data has to be considered.
 48. Quality Analysis for Dialogues (2)
     - The evaluation is simple because it is the same as that of the user.
     - The sensitivity analysis has two phases here:
       - (1) Describe the evaluation result in terms of measured quantities and determine the influence factors of this description.
       - (2) Describe the evaluation result in terms of factors which define the dialogue.
     - The first phase already involves a learning step:
       - The classification of the dialogue in terms of measured quantities has to be learned. This classification approximates the real classes obtained from the evaluation.
 49. Quality Analysis for Dialogues (3)
     - The analysis of the first phase is based on the dialogue situations and additionally measured data.
     - Typical candidates for interesting data for classifying types of situations are
       - length of the dialogue
       - terms that were not understood
       - customer questions (How often? Typical ones?)
       - etc.
     - The selection of these candidates depends on a hypothesis for a preliminary dependency model. The data mining and learning methods are used in order to refine and correct this model.
 50. Quality Analysis for Dialogues (4)
     - The result allows a prognosis of the dialogue class from the occurrence of dialogue situations which are important influence factors (but here in terms of measured data!), in particular a description of failure situations, i.e. situations which lead with high probability to a failure dialogue.
     - The description of the failure situations is refined in order to
       - discover dependencies between influence factors
       - in particular, obtain definitions of earliest failure situations in dialogues, i.e. the earliest situations in the dialogue which will lead to a failure.
     - The earliest failure situations give rise to the second phase of the sensitivity analysis.
 51. Quality Analysis for Dialogues (5)
     - Second phase: analysis of the reasons for reaching earliest failure situations, mainly:
       - Which elements in the strategy are responsible?
       - Weak points of the knowledge base (e.g. wrong prices for products)?
     - These reasons can be directly influenced when the dialogue is designed.
     - Consequences of the analysis (learned results):
       - an improved knowledge base
       - possible changes of the strategy
       - possible disadvantages of changes.
     - Final recommendation: update.
 52. Discussion
     - The dialogue and the situations can be given in a (possibly object-oriented) attribute-value representation. Some virtual attributes (like the length of dialogues) can be useful; they contain valuable knowledge.
     - One way to proceed is to use cluster analysis techniques and machine learning algorithms (e.g. CN2, C4.5) for learning the classification.
     - Another way is to consider the database as a case base and start with an initial similarity measure which is improved during the development of a CBR system for the classification and the improvement suggestions.
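The core idea behind decision-tree learners like C4.5 is selecting the attribute with the highest information gain. A minimal sketch on hypothetical dialogue cases in attribute-value representation (the attributes, values and classes below are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(cases, labels, attr):
    """Information gain of splitting the cases on one attribute,
    as used by C4.5-style split selection."""
    total = entropy(labels)
    n = len(cases)
    by_value = {}
    for case, label in zip(cases, labels):
        by_value.setdefault(case[attr], []).append(label)
    return total - sum(len(ls) / n * entropy(ls) for ls in by_value.values())

# Hypothetical dialogue cases:
cases = [
    {"length": "long",  "unknown_terms": "yes"},
    {"length": "long",  "unknown_terms": "no"},
    {"length": "short", "unknown_terms": "yes"},
    {"length": "short", "unknown_terms": "no"},
]
labels = ["failure", "failure", "success", "success"]  # dialogue classes
best = max(cases[0], key=lambda a: information_gain(cases, labels, a))
print(best)  # "length" - it perfectly predicts the class here
```

A full learner would apply this split selection recursively; here the single best split already separates the classes.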
 53. Learning Informal Concepts
     - Many concepts in e-commerce, in particular in connection with CRM and customer classes, are of informal character, and no direct formal equivalent exists.
     - Computer support requires a formal notion which approximates the informal concept as well as possible.
     - Such formal versions have to be learned, and the learning process requires data mining activities which are again based on studies of customers and their behavior.
     - It has to be taken into account that informal concepts are usually not stable over time.
 54. The Correctness Problem
     - The correctness problem for the statement that two expressions are logically equivalent reduces to a formal proof.
     - But how to "prove" that an informal and a formal concept are equivalent?
       - Formal systems do not have access to informal notions.
       - Humans usually have difficulties comparing both types of notions because this refers to a broad scope of intended uses.
     - Required is a kind of Turing test which decides whether a human who uses the informal version and a machine which uses the formal version refer to the same concept.
     - The ordering principle is that the test does not deal with the concept itself but with partial orderings related to the concept.
 55. The Ordering Principle and a Turing Test (1)
     - Suppose there is a partial ordering "<" associated with the concept C. The partial ordering then again has two versions: the informal human version of C and the formal version of C.
     - The Turing test refers to these two versions of "<".
     - The goal: when variations of the arguments of < are presented, the human says "up" if and only if the formal system says "up".
 56. The Ordering Principle and a Turing Test (2)
     - The partial ordering approximates the concept C in the sense that the semantics of y < z is: z is more typical for C than y is.
     - Example - concept to grasp: the typical lion.
       - The formal version uses the ordering given by the quotient length/height ("better" = closer to the typical quotient).
       - The human uses an aesthetic property ("better" looking).
 57. The Ordering Principle and a Turing Test (3)
     - Advantages of the ordering principle:
       - The equivalence of formal and informal concepts can be effectively validated by Turing tests, i.e. by experiments.
       - If there are several orderings involved, this can be done for all of them.
       - The search for a formal counterpart of an informal concept can be performed in an approximative way, and partial validation is possible.
     - The formal partial ordering is what has to be learned.
     - The learning process is an approximation process whose aim is to pass the Turing test sufficiently well.
  58. 58. The Learning Scenario <ul><li>(1) The informal concept C on a set U is regarded as a fuzzy set for which a set of prototypes P ⊆ U is known. </li></ul><ul><li>(2) An informal relation r x (y,z) states “y is more similar to x than z is”. </li></ul><ul><li>The object to be learned is a similarity measure </li></ul><ul><li> sim: U × P → [0,1]. </li></ul><ul><li>Turing test: the relation induced by the formal similarity measure (y above z iff sim(y,x) > sim(z,x)) and the informal relation r x agree. </li></ul><ul><li>We decompose the approach into two basic steps: </li></ul><ul><ul><li>A first step to obtain a suitable representation language: concept learning. </li></ul></ul><ul><ul><li>A second step for learning the similarity measure: subsymbolic learning. </li></ul></ul>
  59. 59. Learning of Weights <ul><li>Learning similarities is an example of subsymbolic learning and often reduces to learning weights. We distinguish: </li></ul><ul><ul><li>global weights: one weight per attribute </li></ul></ul><ul><ul><li>prototype-specific weights w i,c : the relevance matrix </li></ul></ul><ul><li>A change of weights is a change of the relevance of features. </li></ul><ul><li>The error function is determined by the Turing test. </li></ul><ul><li>Learning procedures can be supervised or unsupervised. </li></ul>
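To make the role of prototype-specific weights concrete, here is a minimal sketch of a weighted similarity measure over binary-coded attributes. The particular form (weighted fraction of matching attributes) is an assumption for illustration; the slides do not commit to a specific local measure.

```python
def weighted_similarity(case, prototype, weights):
    """Similarity of `case` to `prototype` (lists of 0/1 attribute
    values). `weights` holds one relevance weight per attribute for
    this prototype (one row of the relevance matrix).
    Returns a value in [0, 1]."""
    total = sum(weights)
    if total == 0:
        return 0.0
    matched = sum(w for c, p, w in zip(case, prototype, weights) if c == p)
    return matched / total

case      = [1, 0, 1, 1]
prototype = [1, 1, 1, 0]
weights   = [2.0, 1.0, 2.0, 1.0]   # attributes 1 and 3 are more relevant
print(weighted_similarity(case, prototype, weights))  # 4/6 ≈ 0.667
```

Changing a weight changes how strongly the corresponding feature influences the result, which is what the learning procedures below adjust.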
  60. 60. Learning of Weights with/without Feedback <ul><li>Many algorithms for both learning types are known. </li></ul><ul><li>Learning without feedback for retrieval / reuse </li></ul><ul><ul><li>Use the distribution of cases in the case base to determine the relevance of attributes </li></ul></ul><ul><li>Learning with feedback </li></ul><ul><ul><li>A correct or incorrect choice of cases / classification result leads to a change of the weights </li></ul></ul> (Diagram: positive and negative cases are well separated along attribute A 1 but not along A 2 , so A 1 is more important than A 2 .)
  61. 61. Learning of Weights without Feedback <ul><li>Determination of class-specific weights: </li></ul><ul><ul><li>Binary coding of the attributes by </li></ul></ul><ul><ul><ul><li>discretizing real-valued attributes </li></ul></ul></ul><ul><ul><ul><li>transforming each symbolic attribute with n values into n binary attributes </li></ul></ul></ul><ul><ul><li>Suppose </li></ul></ul><ul><ul><ul><li>w ik is the weight of attribute i for class k </li></ul></ul></ul><ul><ul><ul><li>class(c) is the class (solution) of case c </li></ul></ul></ul><ul><ul><ul><li>c i is attribute i in case c </li></ul></ul></ul><ul><ul><li>Put: w ik = P ( class(c)=k | c i ), the conditional probability that the class of a case is k under the condition that attribute i is present. </li></ul></ul><ul><ul><li>The probabilities are estimated from samples of the case base. </li></ul></ul>
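The estimation step above can be sketched as simple counting over a sample of binary-coded cases: w ik is the relative frequency of class k among the cases in which attribute i is present. The data layout and function name below are assumptions for illustration.

```python
from collections import defaultdict

def estimate_weights(cases):
    """`cases` is a list of (attributes, class_label) pairs with
    binary (0/1) attribute vectors. Returns a dict mapping
    (i, k) to the estimate of P(class = k | attribute i present)."""
    present = defaultdict(int)   # how often attribute i has value 1
    joint = defaultdict(int)     # ... and the case belongs to class k
    for attrs, k in cases:
        for i, a in enumerate(attrs):
            if a == 1:
                present[i] += 1
                joint[(i, k)] += 1
    return {(i, k): joint[(i, k)] / present[i] for (i, k) in joint}

cases = [([1, 0], "A"), ([1, 1], "A"), ([1, 1], "B"), ([0, 1], "B")]
w = estimate_weights(cases)
print(w[(0, "A")])  # 2/3: attribute 0 is present in 3 cases, 2 of class A
```

With larger samples these relative frequencies approach the conditional probabilities of the slide, so no feedback from the classification result is needed.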
  62. 62. Learning of Weights with Feedback <ul><li>A correct or incorrect classification leads to a correction of the weights: w ik := w ik + Δ w ik </li></ul><ul><li>There are several ways to adapt the weights. </li></ul><ul><li>Approach of Salzberg (1991) for binary attributes: </li></ul><ul><ul><li>Feedback = positive (i.e. correct classification): </li></ul></ul><ul><ul><ul><li>Weights of attributes with the same values increase </li></ul></ul></ul><ul><ul><ul><li>Weights of attributes with different values decrease </li></ul></ul></ul><ul><ul><li>Feedback = negative (i.e. wrong classification): </li></ul></ul><ul><ul><ul><li>Weights of attributes with the same values decrease </li></ul></ul></ul><ul><ul><ul><li>Weights of attributes with different values increase </li></ul></ul></ul><ul><li>The increment Δ w ik remains constant. </li></ul>
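The Salzberg-style update rules above can be condensed into a single branch: a weight increases exactly when matching attributes coincide with a correct classification or differing attributes coincide with a wrong one. The following sketch assumes a constant increment `delta` and this function signature; both are illustrative choices, not taken from Salzberg (1991).

```python
def salzberg_update(weights, case, prototype, correct, delta=0.1):
    """Adjust one weight per binary attribute. On a correct
    classification, attributes with the same value are rewarded and
    differing ones penalized; on a wrong classification it is the
    other way round. The increment `delta` is constant."""
    updated = []
    for w, c, p in zip(weights, case, prototype):
        same = (c == p)
        if correct == same:      # same values & correct, or
            updated.append(w + delta)  # different values & incorrect
        else:
            updated.append(w - delta)
    return updated

w = [1.0, 1.0, 1.0]
w = salzberg_update(w, case=[1, 0, 1], prototype=[1, 1, 1], correct=True)
print(w)  # [1.1, 0.9, 1.1]
```

Repeating this update over many feedback events gradually raises the relevance of attributes that support correct classifications.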
  63. 63. Summary <ul><li>Relations between data mining and KDD. </li></ul><ul><li>Relations between data mining, learning, and performance. </li></ul><ul><li>The way from data to knowledge. </li></ul><ul><li>Making knowledge explicit. </li></ul><ul><li>Collecting cases and building a CBR system. </li></ul><ul><li>Examples: </li></ul><ul><ul><li>Quantitative models of consumer behavior (external analysis) </li></ul></ul><ul><ul><li>Causal analysis (external analysis) </li></ul></ul><ul><ul><li>Quality analysis of dialogues (internal analysis) </li></ul></ul><ul><li>Learning of informal concepts can be reduced to learning of similarity measures. </li></ul>