Introduction To Data Mining



    1. Introduction to Data Mining
    2. Course Overview
       • Introduction to Knowledge Discovery in Databases and Data Mining
         - Why Data Mining? What Is Data Mining? On What Kind of Data?
       • Applications of Data Mining
         - Application Domains and Examples
       • Knowledge Discovery in Databases and Data Mining Process
         - Processing Steps
         - Data Quality, Preparation, and Transformations
       • Data Mining Tools
         - D2K, SAS, Clementine, Intelligent Miner, Insightful Miner, K-Wiz
       • Data Mining Methods
         - Association Rules
         - Decision Trees
         - Information Visualization
       • Summary
    3. Acknowledgement
       • Contributions:
         - Michael Welge, Loretta Auvil, Lisa Gatzke, Automated Learning Group, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign
         - Jiawei Han, Computer Science, University of Illinois at Urbana-Champaign
    4. Literature
       • Data Mining: Concepts and Techniques by J. Han & M. Kamber, Morgan Kaufmann Publishers, 2001
       • Pattern Classification by R. Duda, P. Hart, and D. Stork, 2nd edition, John Wiley & Sons, 2001
    5. Introduction to Knowledge Discovery in Databases and Data Mining
    6. Computational Knowledge Discovery
    7. Terminology
       • Data Mining
         - A step in the knowledge discovery process consisting of particular algorithms (methods) that, under some acceptable objective, produce a particular enumeration of patterns (models) over the data.
       • Knowledge Discovery Process
         - The process of using data mining methods (algorithms) to extract (identify) what is deemed knowledge according to the specifications of measures and thresholds, using a database along with any necessary preprocessing or transformations.
    8. Terminology - A Working Definition
       • Data Mining is a "decision support" process in which we search for patterns of information in data.
       • Data Mining is a process of discovering advantageous patterns in data.
       • A pattern is a conservative statement about a probability distribution.
         - Webster: A pattern is (a) a natural or chance configuration, (b) a reliable sample of traits, acts, tendencies, or other observable characteristics of a person, group, or institution.
    9. Data Mining: On What Kind of Data?
       • Relational Databases
       • Data Warehouses
       • Transactional Databases
       • Advanced Database Systems
         - Object-Relational
         - Spatial and Temporal
         - Time-Series
         - Multimedia
         - Text
         - Heterogeneous, Legacy, and Distributed
         - WWW
       (Figure: structure - 3D anatomy; function - 1D signal; metadata - annotation)
    10. Data Mining: Confluence of Multiple Disciplines
       (Figure: a 20x20 binary image admits 2^400, roughly 10^120, possible patterns)
    11. Why Do We Need Data Mining?
       • Data volumes are too large for classical analysis approaches:
         - Large numbers of records (10^8 - 10^12 bytes)
         - High-dimensional data (10^2 - 10^4 attributes)
       • How do you explore millions of records, with tens or hundreds of fields, and find patterns?
    12. Why Do We Need Data Mining?
       • Leverage the organization's data assets
         - Only a small portion (typically 5%-10%) of the collected data is ever analyzed.
         - Data that may never be analyzed continues to be collected, at great expense, out of fear that something which may prove important in the future would otherwise be missed.
         - The growth rate of data precludes the traditional "manually intensive" approach.
    13. Why Do We Need Data Mining?
       • As databases grow, supporting the decision support process with traditional query languages becomes infeasible
         - Many queries of interest are difficult to state in a query language (the query formulation problem):
         - "Find all cases of fraud."
         - "Find all individuals likely to buy a Ford Expedition."
         - "Find all documents that are similar to this customer's problem."
       (Figure: a query whose result is coordinate pairs - (Latitude, Longitude)1, (Latitude, Longitude)2)
    14. What Is It?
       • Knowledge Discovery in Databases is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data.
       • The understandable patterns are used to:
         - Make predictions or classifications about new data
         - Explain existing data
         - Summarize the contents of a large database to support decision making
         - Provide graphical data visualization to aid humans in discovering deeper patterns
    15. Applications of Data Mining
    16. Data Mining Applications
       • Market analysis
       • Risk analysis and management
       • Fraud detection and detection of unusual patterns (outliers)
       • Text mining (newsgroups, email, documents) and Web mining
       • Stream data mining
       • DNA and bio-data analysis
    17. Market Analysis
       • Where does the data come from?
         - Credit card transactions, loyalty cards, discount coupons, customer complaint calls, plus (public) lifestyle studies
       • Target marketing
         - Find clusters of "model" customers who share the same characteristics: interests, income level, spending habits, etc.
         - Determine customer purchasing patterns over time
       • Cross-market analysis
         - Associations/correlations between product sales, and prediction based on such associations
       • Customer profiling
         - What types of customers buy what products (clustering or classification)
       • Customer requirement analysis
         - Identify the best products for different customers
         - Predict what factors will attract new customers
    18. Corporate Analysis & Risk Management
       • Finance planning and asset evaluation
         - Cash flow analysis and prediction
         - Contingent claim analysis to evaluate assets
         - Cross-sectional and time-series analysis (financial ratios, trend analysis, etc.)
       • Resource planning
         - Summarize and compare resources and spending
       • Competition
         - Monitor competitors and market directions
         - Group customers into classes and set a class-based pricing procedure
         - Set pricing strategy in a highly competitive market
    19. Fraud Detection & Mining Unusual Patterns
       • Approaches: clustering and model construction for fraud, outlier analysis
       • Applications: health care, retail, credit card services, telecommunications
         - Auto insurance: rings of collisions
         - Money laundering: suspicious monetary transactions
         - Medical insurance
           - Professional patients, rings of doctors, and rings of references
           - Unnecessary or correlated screening tests
         - Telecommunications: phone-call fraud
           - Phone call model: destination of the call, duration, time of day or week; analyze patterns that deviate from the expected norm
         - Retail industry
           - Analysts estimate that 38% of retail shrink is due to dishonest employees
         - Anti-terrorism
    20. Data Mining and Business Intelligence
    21. Knowledge Discovery in Databases Process
    22. KDD Process
       • Develop an understanding of the application domain
         - Relevant prior knowledge, problem objectives, success criteria, current solution, inventory of resources, constraints, terminology, costs and benefits
       • Create the target data set
         - Collect initial data, describe it, focus on a subset of variables, verify data quality
       • Data cleaning and preprocessing
         - Remove noise, outliers, missing fields, time-sequence information, and known trends; integrate data
       • Data reduction and projection
         - Feature subset selection, feature construction, discretization, aggregation
       (Figure: precision farming filter example)
    23. KDD Process
       • Select the data mining task
         - Classification, segmentation, deviation detection, link analysis
       • Select the data mining approach
       • Apply data mining to extract patterns or models
       • Interpret and evaluate the patterns/models
       • Consolidate the discovered knowledge
    24. Knowledge Discovery
    25. Required Effort for Each KDD Step
       • Arrows indicate the direction we hope the effort should go.
    26. Data Mining Tools
    27. Commercial and Research Tools
       • Data To Knowledge (D2K)
       • SAS
       • Clementine
       • Intelligent Miner
       • Insightful Miner
       • K-Wiz
    28. Software Engineering in Data Mining
       • Conceptual software hierarchy
         - Operating System (Windows, Mac OS, UNIX, Linux)
         - Programming Language (Java)
         - Modules = sequences of programming language commands
         - Itineraries = linked modules
         - Streamlines = linked itineraries
       • Software for
         - Users with various levels of programming skills
         - Collaborating users
    29. D2K - Software Environment for Data Mining
       • Visual programming system employing a scalable framework
       • Robust computational infrastructure
         - Enables processor-intensive applications; supports distributed computing
         - Enables data-intensive applications; supports multi-processor, shared-memory architectures and thread pooling
         - Very low granularity, fast data-flow paradigm, integrated control flow
       • Reduction of development time
         - Increases code reuse and sharing
         - Expedites custom software development
         - Relieves the distributed computing burden
       • Flexible and extensible architecture
         - Plug-and-play subsystem architectures and standard APIs
       • Rapid application development (RAD) environment
       • Integrated environment for models and visualization
    30. D2K Architecture
       • D2K Infrastructure
         - Defines the D2K API
       • D2K Modules
         - Computational units written in Java that follow the D2K API
       • D2K Itineraries
         - A group of modules that are connected to form an application
       • D2K Toolkit
         - User interface
       • D2K-Driven Applications
         - Applications that use D2K modules
         - D2K SL
    31. Data Flow Programming Environment: D2K
       (Screenshot: jump-up panes, workspace, tool bar, tool menu, side tab panes)
    32. D2K Programming and Runtime Environment
    33. Streamlined Data Mining Environment: D2K SL
       (Screenshot: KDD steps, session, KDD options, workspace)
    34. Data Mining Techniques in D2K
       • Discovery
         - Association rules, link analysis, self-organizing maps
       • Predictive modeling
         - Classification: naive Bayesian, neural networks, decision trees
         - Regression: neural networks, regression trees
       • Deviation detection
         - Visualization
       • Text To Knowledge (T2K)
       • Image To Knowledge (I2K)
       • Audio, Touch, Scent, and Savor To Knowledge
       • Knowledge To Wisdom (K2W)
    35. Data Mining at Work
       (Figure: project objectives arranged by number of data sources - single, multiple, numerous - including diagnostics, target marketing, effluent quality control, decision support automation, transaction management, cost prediction (warranty, insurance claims), warranty clustering, territorial ratemaking, web information retrieval, archival and clustering, auto loss ratio predictions, precision farming, bio-informatics, functional foods, heterogeneous data visualization, crime data analysis, data fusion and visualization, and a survey study of disability)
    36. Examples of Data Mining Methods
    37. Three Primary Data Mining Paradigms
       • Discovery
         - Example: Association Rules
       • Predictive Modeling
         - Classification Example: Decision Trees
       • Deviation Detection
         - Visualization
    38. Association Rules and Market Basket Analysis
    39. What Is Market Basket Analysis?
       • Customer analysis
         - Market basket analysis uses information about what a customer purchases to give us insight into who they are and why they make certain purchases.
       • Product analysis
         - Market basket analysis gives us insight into the merchandise by telling us which products tend to be purchased together and which are most amenable to purchase.
    40. Market Basket Example
       • Is soda typically purchased with bananas? Does the brand of soda make a difference?
       • Where should detergents be placed in the store to maximize their sales?
       • Are window cleaning products purchased when detergents and orange juice are bought together?
       • How are the demographics of the neighborhood affecting what customers are buying?
    41. Association Rules
       • There has been a considerable amount of research in the area of market basket analysis. Its appeal comes from the clarity and utility of its results, which are expressed in the form of association rules.
       • Given
         - A database of transactions
         - Each transaction contains a set of items
       • Find all rules X -> Y that correlate the presence of one set of items X with another set of items Y
         - Example: When a customer buys bread and butter, they buy milk 85% of the time.
    42. Results: Useful, Trivial, or Inexplicable?
       • While association rules are easy to understand, they are not always useful.
         - Useful: On Fridays, convenience store customers often purchase diapers and beer together.
         - Trivial: Customers who purchase maintenance agreements are very likely to purchase large appliances.
         - Inexplicable: When a new superstore opens, one of the most commonly sold items is light bulbs.
    43. How Does It Work?
       • Grocery point-of-sale transactions:
         1. Orange juice, soda
         2. Milk, orange juice, window cleaner
         3. Orange juice, detergent
         4. Orange juice, detergent, soda
         5. Window cleaner, soda
       • Co-occurrence of products:
         |                | OJ | Window Cleaner | Milk | Soda | Detergent |
         | OJ             |  4 |       1        |  1   |  2   |     1     |
         | Window Cleaner |  1 |       2        |  1   |  1   |     0     |
         | Milk           |  1 |       1        |  1   |  0   |     0     |
         | Soda           |  2 |       1        |  0   |  3   |     1     |
         | Detergent      |  1 |       0        |  0   |  1   |     2     |
    44. How Does It Work?
       • The co-occurrence table contains some simple patterns:
         - Orange juice and soda are more likely to be purchased together than any other two items.
         - Detergent is never purchased with window cleaner or milk.
         - Milk is never purchased with soda or detergent.
       • These simple observations are examples of associations and may suggest a formal rule like:
         - IF a customer purchases soda, THEN the customer also purchases orange juice.
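The co-occurrence counting behind the table above can be sketched in a few lines of Python (a minimal illustration, not any particular tool's implementation; the transactions are the five from the slide):

```python
# The five point-of-sale transactions from the slide.
transactions = [
    {"OJ", "Soda"},
    {"Milk", "OJ", "Window Cleaner"},
    {"OJ", "Detergent"},
    {"OJ", "Detergent", "Soda"},
    {"Window Cleaner", "Soda"},
]

items = ["OJ", "Window Cleaner", "Milk", "Soda", "Detergent"]

# cooc[a][b] counts the transactions containing both a and b;
# the diagonal cooc[a][a] is simply the item's own frequency.
cooc = {a: {b: 0 for b in items} for a in items}
for t in transactions:
    for a in t:
        for b in t:
            cooc[a][b] += 1

print(cooc["Soda"]["OJ"])   # 2
print(cooc["OJ"]["OJ"])     # 4
```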
    45. How Good Are the Rules?
       • In the data, two of the five transactions include both soda and orange juice. These two transactions support the rule, so the support for the rule is two out of five, or 40%.
       • Since both transactions that contain soda also contain orange juice, there is a high degree of confidence in the rule. In fact, every transaction that contains soda contains orange juice, so the rule "IF soda, THEN orange juice" has a confidence of 100%.
    46. Confidence and Support - How Good Are the Rules?
       • A rule must have some minimum user-specified confidence.
         - "1 & 2 -> 3" has 90% confidence if, when a customer bought 1 and 2, in 90% of those cases the customer also bought 3.
       • A rule must have some minimum user-specified support.
         - "1 & 2 -> 3" should hold in some minimum percentage of transactions to have value.
    47. Confidence and Support
       • Transactions (for minimum support = 50% = 2 transactions and minimum confidence = 50%):
         | Transaction ID | Items     |
         | 1              | {1, 2, 3} |
         | 2              | {1, 3}    |
         | 3              | {1, 4}    |
         | 4              | {2, 5, 6} |
       • Frequent item sets:
         | One-item set | Support |   | Two-item set | Support |
         | {1}          | 75%     |   | {1, 2}       | 25%     |
         | {2}          | 50%     |   | {1, 3}       | 50%     |
         | {3}          | 50%     |   | {1, 4}       | 25%     |
         | {4}          | 25%     |   | {2, 3}       | 25%     |
       • For the rule 1 -> 3:
         - Support = Support({1, 3}) = 50%
         - Confidence(1 -> 3) = Support({1, 3}) / Support({1}) = 66%
         - Confidence(3 -> 1) = Support({1, 3}) / Support({3}) = 100%
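The support and confidence figures above can be reproduced with a short sketch (a minimal illustration; the transactions are the four from the slide):

```python
# Transactions from the slide (item IDs are integers).
transactions = [{1, 2, 3}, {1, 3}, {1, 4}, {2, 5, 6}]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / n

def confidence(lhs, rhs):
    """Support of the combined item set divided by support of the left-hand side."""
    return support(lhs | rhs) / support(lhs)

print(support({1, 3}))        # 0.5
print(confidence({1}, {3}))   # 0.666...  (66%)
print(confidence({3}, {1}))   # 1.0       (100%)
```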
    48. Association Examples
       • Find all rules that have "Diet Coke" as the result. These rules may help plan what the store should do to boost sales of Diet Coke.
       • Find all rules that have "yogurt" in the condition. These rules may help determine what products may be impacted if the store discontinues selling yogurt.
       • Find all rules that have "brats" in the condition and "mustard" in the result. These rules may help determine which additional items have to be sold together to make it highly likely that mustard will also be sold.
       • Find the best k rules that have "yogurt" in the result.
    49. The Basic Process
       • Choosing the right set of items
         - Taxonomies
       • Generating rules
         - IF condition THEN result
         - Negation
       • Overcoming the practical limits imposed by thousands or tens of thousands of products
         - Minimum-support pruning
    50. Choosing the Right Set of Items
       (Figure: partial product taxonomy from general to specific - Frozen Foods splits into Frozen Desserts, Frozen Vegetables, and Frozen Dinners; Frozen Desserts into Frozen Yogurt, Frozen Fruit Bars, and Ice Cream; Frozen Vegetables into Peas, Carrots, Mixed, and Other; Ice Cream into Rocky Road, Chocolate, Strawberry, Vanilla, Cherry Garcia, and Other)
    51. Example - Minimum-Support Pruning / Rule Generation
       • Transactions:
         | Transaction ID | Items        |
         | 1              | {1, 3, 4}    |
         | 2              | {2, 3, 5}    |
         | 3              | {1, 2, 3, 5} |
         | 4              | {2, 5}       |
       • Scan the database and count each item: {1}: 2, {2}: 3, {3}: 3, {4}: 1, {5}: 3.
       • Prune below the support threshold, keeping {2}, {3}, and {5} (support 3 each).
       • Form pairings of the surviving items and scan again: {2, 3}: 2, {2, 5}: 3, {3, 5}: 2; only {2, 5} (support 3) survives.
       • The two rules with the highest support for the two-item set: 2 -> 5 and 5 -> 2.
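The scan-prune-pair loop above is the heart of minimum-support pruning (the Apriori idea). A minimal sketch, assuming the support threshold of 3 transactions that the slide's pruning implies:

```python
from itertools import combinations

# Transactions from the slide; a minimum support of 3 transactions is
# implied by which item sets survive the pruning there.
transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
MIN_SUPPORT = 3

def count(itemset):
    """Number of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions)

# Level 1: frequent single items.
items = sorted({i for t in transactions for i in t})
frequent1 = [frozenset([i]) for i in items
             if count(frozenset([i])) >= MIN_SUPPORT]

# Level 2: candidate pairs are built only from the surviving single items
# (the pruning step), then counted against the database in a second scan.
survivors = sorted(i for s in frequent1 for i in s)
frequent2 = [frozenset(p) for p in combinations(survivors, 2)
             if count(frozenset(p)) >= MIN_SUPPORT]

print(frequent1)  # [frozenset({2}), frozenset({3}), frozenset({5})]
print(frequent2)  # [frozenset({2, 5})]
```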
    52. Other Association Rule Applications
       • Quantitative association rules
         - Age[35..40] and Married[Yes] -> NumCars[2]
       • Association rules with constraints
         - Find all association rules where the prices of items are > 100 dollars
       • Temporal association rules
         - Diaper -> Beer (1% support, 80% confidence)
         - Diaper -> Beer (20% support) 7:00-9:00 PM weekdays
       • Optimized association rules
         - Given a rule (l < A < u) and X -> Y, find values for l and u such that the rule exceeds a certain support threshold and maximizes support and confidence.
         - Check Balance[$30,000 .. $50,000] -> Certificate of Deposit (CD) = Yes
    53. Strengths of Market Basket Analysis
       • It produces easy-to-understand results
       • It supports undirected data mining
       • It works on variable-length data
       • Rules are relatively easy to compute
    54. Weaknesses of Market Basket Analysis
       • Its computational cost grows exponentially with the number of items
       • It is difficult to determine the optimal number of items
       • It discounts rare items
       • It is limited in the support it provides for attributes
    55. Decision Tree Learning
    56. Example: Supervised Learning with Decision Trees
    57. Decision Tree Learning
       • Start with the data at the root node
       • Select an attribute and form a logical test on that attribute
       • Branch on each outcome of the test, moving the subset of examples satisfying that outcome to the corresponding child node
       • Recurse on each child node
       • A termination rule specifies when to declare a node a leaf node
       • Note: this is a one-step-lookahead, non-backtracking search through the space of all decision trees
       • Critical steps
         - Formulation of good logical tests
         - Selection measure for attributes
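The recursion above can be sketched as follows (a minimal illustration, not D2K's implementation; `choose_test` and `is_leaf` are hypothetical stand-ins for the attribute-selection measure and the termination rule):

```python
from collections import Counter

def build_tree(examples, choose_test, is_leaf):
    """Recursively grow a decision tree from (features, label) pairs.

    `choose_test` picks an attribute test and partitions a node's examples;
    `is_leaf` is the termination rule. Both are supplied by the caller.
    """
    labels = [label for _, label in examples]
    if is_leaf(examples):
        # Leaf node: predict the majority class at this node.
        return {"leaf": Counter(labels).most_common(1)[0][0]}
    attribute, split = choose_test(examples)  # one-step lookahead, no backtracking
    return {
        "test": attribute,
        "children": {
            outcome: build_tree(subset, choose_test, is_leaf)
            for outcome, subset in split.items()
        },
    }

# A toy run: split PlayTennis-style examples on a single attribute.
examples = [({"Wind": "Strong"}, "No"),
            ({"Wind": "Light"}, "Yes"),
            ({"Wind": "Light"}, "Yes")]

def choose_test(exs):
    groups = {}
    for x, y in exs:
        groups.setdefault(x["Wind"], []).append((x, y))
    return "Wind", groups

def is_leaf(exs):
    return len({y for _, y in exs}) == 1  # stop when the node is pure

tree = build_tree(examples, choose_test, is_leaf)
print(tree["test"])  # Wind
```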
    58. Decision Trees
       • Classifiers
         - Instances (unlabeled examples): represented as attribute ("feature") vectors
       • Internal nodes: tests on attribute values
         - Typical: equality test (e.g., "Wind = ?")
         - Inequality and other tests are possible
       • Branches: attribute values
         - One-to-one correspondence (e.g., "Wind = Strong", "Wind = Light")
       • Leaves: assigned classifications (class labels)
    59. Decision Tree for Concept: PlayTennis
       • Outlook?
         - Sunny -> Humidity? (High -> No; Normal -> Yes)
         - Overcast -> Yes
         - Rain -> Wind? (Strong -> No; Light -> Yes)
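Read as code, the PlayTennis tree above is just nested conditionals (a Python sketch):

```python
def play_tennis(outlook, humidity, wind):
    """Classify a day with the PlayTennis decision tree from the slide."""
    if outlook == "Sunny":
        return "No" if humidity == "High" else "Yes"
    if outlook == "Overcast":
        return "Yes"
    # Remaining branch: outlook == "Rain"
    return "No" if wind == "Strong" else "Yes"

print(play_tennis("Sunny", "Normal", "Light"))  # Yes
print(play_tennis("Rain", "High", "Strong"))    # No
```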
    60. Decision Trees and Decision Boundaries
       • How can we visualize decision trees?
       • Example: dividing the instance space into axis-parallel rectangles
       • What about more than two variables?
       (Figure: + and - points in the (x, y) plane, partitioned by a tree of tests y > 7?, x < 3?, y < 5?, x < 1?)
    61. An Illustrative Example
       • Training examples for the concept PlayTennis:
         | Day | Outlook  | Temperature | Humidity | Wind   | PlayTennis? |
         | 1   | Sunny    | Hot         | High     | Light  | No          |
         | 2   | Sunny    | Hot         | High     | Strong | No          |
         | 3   | Overcast | Hot         | High     | Light  | Yes         |
         | 4   | Rain     | Mild        | High     | Light  | Yes         |
         | 5   | Rain     | Cool        | Normal   | Light  | Yes         |
         | 6   | Rain     | Cool        | Normal   | Strong | No          |
         | 7   | Overcast | Cool        | Normal   | Strong | Yes         |
         | 8   | Sunny    | Mild        | High     | Light  | No          |
         | 9   | Sunny    | Cool        | Normal   | Light  | Yes         |
         | 10  | Rain     | Mild        | Normal   | Light  | Yes         |
         | 11  | Sunny    | Mild        | Normal   | Strong | Yes         |
         | 12  | Overcast | Mild        | High     | Strong | Yes         |
         | 13  | Overcast | Hot         | Normal   | Light  | Yes         |
         | 14  | Rain     | Mild        | High     | Strong | No          |
    62. Constructing a Decision Tree for PlayTennis
       • The initial decision tree is a single leaf covering all 14 examples above: [9+, 5-], with error E(D) = min(9/14, 5/14) = 5/14 = 36%
       • Goal: maximize the error reduction E, where the error reduction relative to attribute A is the expected reduction in error due to splitting on A
       • Question: what attribute A, and what value of A, should we split on?
    63. Constructing a Decision Tree for PlayTennis
       • Potential splits of the root node [9+, 5-]:
         - Humidity: High [3+, 4-], Normal [6+, 1-]
         - Wind: Light [6+, 2-], Strong [3+, 3-]
         - Outlook: Sunny [2+, 3-], Overcast [4+, 0-], Rain [3+, 2-]
         - Temperature: Cool [3+, 1-], Hot [2+, 2-], Mild [4+, 2-]
       • Error reduction for each split:
         - E(Split on Outlook) = 5/14 - ((5/14)(min(2/5, 3/5)) + (4/14)(min(4/4, 0/4)) + (5/14)(min(3/5, 2/5))) = 7%
         - E(Split on Temperature) = 5/14 - ((4/14)(min(3/4, 1/4)) + (6/14)(min(4/6, 2/6)) + (4/14)(min(2/4, 2/4))) = 0%
         - E(Split on Humidity) = 5/14 - ((7/14)(min(3/7, 4/7)) + (7/14)(min(6/7, 1/7))) = 7%
         - E(Split on Wind) = 5/14 - ((8/14)(min(6/8, 2/8)) + (6/14)(min(3/6, 3/6))) = 0%
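The error-reduction arithmetic above can be checked mechanically (a sketch using exact fractions; the class counts per attribute value are the ones listed on the slide):

```python
from fractions import Fraction

# Class counts (positive, negative) for each value of each candidate
# attribute at the root node [9+, 5-].
splits = {
    "Outlook":     [(2, 3), (4, 0), (3, 2)],
    "Temperature": [(2, 2), (4, 2), (3, 1)],
    "Humidity":    [(3, 4), (6, 1)],
    "Wind":        [(6, 2), (3, 3)],
}

def error_reduction(groups, total=14, root_error=Fraction(5, 14)):
    """Root error minus the expected error after splitting into `groups`."""
    after = sum(Fraction(p + n, total) * Fraction(min(p, n), p + n)
                for p, n in groups)
    return root_error - after

for name, groups in splits.items():
    print(name, error_reduction(groups))
# Outlook and Humidity both give 1/14 (about 7%);
# Temperature and Wind give 0, matching the slide.
```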
    64. Constructing a Decision Tree for PlayTennis
       • Top-down induction produces the final tree:
         - Root: Outlook? over examples 1-14 [9+, 5-]
         - Sunny (1, 2, 8, 9, 11 [2+, 3-]) -> Humidity?: High (1, 2, 8 [0+, 3-]) -> No; Normal (9, 11 [2+, 0-]) -> Yes
         - Overcast (3, 7, 12, 13 [4+, 0-]) -> Yes
         - Rain (4, 5, 6, 10, 14 [3+, 2-]) -> Wind?: Strong (6, 14 [0+, 2-]) -> No; Light (4, 5, 10 [3+, 0-]) -> Yes
       • Top-down induction
         - For discrete-valued attributes, terminates in Θ(n) splits
         - Makes at most one pass through the data set at each level (why?)
    65. Strengths of Decision Trees
       • Decision trees can generate understandable results
       • Decision trees perform classification without requiring much computation
       • Decision trees can handle both continuous and categorical variables
       • Decision trees provide a clear indication of which attributes are most important for prediction or classification
    66. Weaknesses of Decision Trees
       • They are error-prone with too many classes
       • Quick partitioning of the data results in fast deterioration of attribute-selection quality
       • They have trouble with non-rectangular regions
    67. Visualization
    68. Visualization Example: Naive Bayesian
       • Three flower types; petal- and sepal-based classification
    69. Naive Bayesian Visualization
       • The right-hand pane shows the distribution of the classes.
       • The left-hand pane shows the attributes and each of their values, listed in order of significance.
         - The message box shows details about each pie chart when brushed.
         - Clicking on a pie chart shows how knowing this information changes the overall class prediction.
         - Clicking on multiple pie charts calculates conditional probabilities.
         - Zoom in and out using the right mouse button.
       • Notice that Iris-versicolor has a 33% likelihood.
    70. Rule Association Visualization
       • Read rules down the columns.
       • Example - the rule in the column labeled 2 is:
         - IF petal-width (binned) = (..., 2.) THEN flower-type = Iris-setosa
         - Support = 25%
         - Confidence = 100%
    71. Discovery Using Rule Association
       • What services are purchased together?
       • What products or transactions are executed by customers on a single visit to your website?
       • What are the relationships in the data?
    72. Parallel Coordinates - Visualization
       • Each vertical line represents a field, with the minimum and maximum values at the bottom and top.
       • Each record has a line that connects it to its value at each field.
       • Lines are colored based on the output field.
       • Clicking on the label boxes allows the lines to be rearranged.
       • Zoom by dragging a box over the desired area; clicking returns to the original view.
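The per-field min-max scaling that places each record on those vertical axes can be sketched as follows (a minimal illustration with made-up, iris-like records; not the D2K implementation):

```python
def parallel_coordinates(records, fields):
    """Scale each field to [0, 1] so every axis spans its own min..max.

    Returns, per record, the list of axis positions its polyline would
    pass through in a parallel-coordinates plot. Assumes no field is
    constant (which would make the min..max range zero).
    """
    lo = {f: min(r[f] for r in records) for f in fields}
    hi = {f: max(r[f] for r in records) for f in fields}
    return [
        [(r[f] - lo[f]) / (hi[f] - lo[f]) for f in fields]
        for r in records
    ]

# Hypothetical records, just to show the scaling.
records = [
    {"petal_len": 1.4, "sepal_len": 5.1},
    {"petal_len": 4.5, "sepal_len": 6.4},
    {"petal_len": 5.9, "sepal_len": 6.9},
]
lines = parallel_coordinates(records, ["petal_len", "sepal_len"])
print(lines[0])  # [0.0, 0.0] - the smallest record sits at the bottom of both axes
```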
    73. Scatterplots - Visualization
    74. Image To Knowledge (I2K): Data Visualization
       • Hyperspectral image with 120 bands
    75. Image To Knowledge (I2K): Visualization of Results
       • Classification results
         - Class labels per pixel
         - Class labels per geographical entity
         - Class labels of aggregations
       • Alignment results
         - Overlays
         - Summary charts
       • Image operations
         - Enhancements
         - Image restoration
         - Filtering
    76. T2K - Text To Knowledge: Topic Evolution
       • Any chronologically ordered text
       • News feeds
       • Email
    77. Protein Consumption Dynamics
       • Objective
         - To understand, through database visualization, global protein consumption patterns by providing a means to directly compare historical and simulated data.
       • Presented at the Global Soy Forum, 1999
    78. Data Comparison, Reduction & Synthesis
       • Goal
         - Development of a 3D visualization tool for multi-channel on-board sensor data. The tool allows for multiple time-series comparison, reduction, and synthesis.
       • Related projects
         - Derivative monitoring
         - Real-time system monitoring
    79. Summary
       • Curious? Puzzled?
       • Found an application? Domain-specific questions?
       • Learn!
       • Become familiar with data mining terminology
       • Introduction to Data Mining
       • Look for tools
       • Apply data mining techniques to problems
       • Ask for help