Transcript

  • 1. Data Mining and Knowledge Discovery. Part of the “New Media and eScience” MSc Programme and the “Statistics” MSc Programme, Fall semester 2004/05. Nada Lavrač, Jožef Stefan Institute, Ljubljana, Slovenia. Thanks to Blaz Zupan, Saso Dzeroski and Peter Flach for contributing some slides to this course material.
  • 2. Course participants
    • I. NMeS MPSJS students
      • Robert Blatnik
      • Joel Plisson
      • Jadran Prodan
      • Viljem Tisnikar
    • II. Statistics students
      • Borut Kodrič
      • Borut Rajer
      • Maja Sever
    • III. Other participants
    • Dept. of Knowledge Technologies members, students, scholars
      • Matjaz Depolli, Borut Lužar, Primož Lukšič, …
    • Faculty of Mechanical Engineering MSc students
      • Jože Jenkole, Viktor Zaletelj, Damir Husejnagič, Andrej Jermol
  • 3. Courses in Knowledge Technologies: Fall 2004/05
    • 10 Nov., 15h-19h: Data Mining and Knowledge Discovery, prof. dr. Nada Lavrač
    • 11 Nov., 12h-13h: ??????? Concept of Sustainable Development, prof. dr. Ivo Šlaus
    • 11 Nov., 15h-19h: Decision Support, prof. dr. Marko Bohanec
    • 17 Nov., 15h-19h: Selected topics in New Media and eScience, prof. dr. Sašo Džeroski
  • 4. Courses in Knowledge Technologies: Fall 2004/05
    • 25 Nov., 15h-19h: ??????? Data Mining and Knowledge Discovery, prof. dr. Nada Lavrač
    • 15 Dec., 15h-19h: New Media and Knowledge Management, Nada Lavrač, Mitja Jermol, Tanja Urbančič, Sašo Džeroski, Tomaž Erjavec
    • 13 Jan., 15h-19h: Language Technologies, to be defined
    • Text and Web Mining, Active Learning, Relational Data Mining, Equation Discovery, ..: Mladenić, Grobelnik, Todorovski, ...
  • 5. Advanced Course on Knowledge Technologies: ACAI-05 Ljubljana, June 27–July 8, 2005
  • 6. Credits and coursework
    • “New Media and eScience” MSc Programme
    • 6 credits
    • 30 hours
      • 10 – lectures
      • 10 – hands-on
      • 10 – seminar
    • Individual workload distribution and/or consultations: to be agreed by mail/phone
    • “Statistics” MSc Programme
    • 12 credits
    • 36 hours
      • 24 – lectures
      • 12 – seminar
    • Individual workload distribution and/or consultations: to be agreed by mail/phone
  • 7. Credits and coursework: Sample individual programmes
    • “New Media and eScience” MSc Programme
    • 6 credits, 30 hours
      • Lectures (with/without ACAI lectures)
      • e.g., ACAI hands-on (1x, 2x or 3x4 hours)
      • Seminar based on the results of ACAI hands-on work
    • “Statistics” MSc Programme
    • 12 credits, 36 hours
      • Lectures (e.g., with ACAI lectures)
      • e.g., WEKA ACAI hands-on (1x4 hours)
      • Individual seminar work, using your own data (e.g., using WEKA for survey data analysis)
  • 8. Outline of 10 Nov. and 25 Nov. lectures on DM and KDD
    • I. Introduction
      • Data Mining and KDD process
      • Why DM: Examples of discovered patterns and applications
      • Classification of DM tasks and techniques
      • Visualization and overview of DM tools
      • (Ch. 1,2,11,12,13 of DM&DS book)
    • II. DM Techniques
      • Classification of DM tasks and techniques
      • Predictive DM
        • Decision Tree induction (Ch. 3 of Mitchell’s book)
        • Learning sets of rules (Ch. 7 of IDA book, Ch. 10 of Mitchell’s book)
      • Descriptive DM
        • Association rule induction
        • Subgroup discovery
        • Hierarchical clustering
    • III. Evaluation
      • Evaluation methodology
      • Evaluation measures
    • IV. Relational Data Mining
      • What is RDM?
      • Propositionalization
      • Inductive Logic Programming
      • (Ch. 3,4,11 of RDM book)
    • V. Concluding Remarks
  • 9. Introduction to data mining
    • Data Mining (DM) and related areas
    • Why DM: Examples of discovered patterns and applications
    • Classification of DM tasks and techniques
    • Visualization and overview of DM tools
  • 10. What is data mining
    • Extraction of useful information from data: discovering relationships that have not been previously known
    • The viewpoint in this course: DM is the application of machine learning techniques to “hard” real-life problems
  • 11. Related areas
    • Database technology
    • and data warehouses
    • efficient storage, access and manipulation of data
    [Diagram: DM and its related areas: statistics, machine learning, visualization, text and Web mining, soft computing, pattern recognition, databases]
  • 12.
    • Statistics,
    • machine learning,
    • pattern recognition
    • and soft computing*
    • techniques for classification and knowledge extraction from data
    Related areas. *) neural networks, fuzzy logic, genetic algorithms, probabilistic reasoning. [Diagram as on slide 11]
  • 13. Related areas
    • Text and Web mining
    • Web page analysis
    • text categorization
    • acquisition, filtering and structuring of textual information
    • natural language processing
    [Diagram as on slide 11]
  • 14. Related areas
    • Visualization
    • visualization of data and discovered knowledge
    [Diagram as on slide 11]
  • 15. Point of view in this tutorial
    • Data mining with machine learning methods
    • Emphasis on relation with statistics
    [Diagram as on slide 11]
  • 16. Machine learning and statistics
    • Both have a long tradition of developing inductive techniques for data analysis
      • reasoning from properties of data samples to properties of a population
    • DM = statistics + marketing? No! DM = statistics + ... + machine learning
    • Statistics is particularly appropriate for hypothesis testing and data analysis under certain theoretical expectations about data distribution, independence, random sampling, sample size, …
    • Machine learning is particularly appropriate for inducing generalizations that consist of easily understandable patterns, induced from both large and small samples
  • 17. DM and KDD
    • DM is a way of doing data analysis, aimed at finding patterns, revealing hidden regularities and relationships
    • Knowledge Discovery in Databases (KDD) provides a broader view:
    • - KDD is defined as “the process of identifying valid, novel, potentially useful and ultimately understandable patterns in data” *
    • - KDD provides tools to automate the entire process of data analysis, including the statistician’s art of hypothesis selection
    • DM is the key element in this much more elaborate KDD process
    * Usama M. Fayyad et al., The KDD Process for Extracting Useful Knowledge from Volumes of Data. Comm. ACM, Nov. 1996
  • 18. The KDD process
    • KDD involves several phases:
      • data preparation (selection, pre-processing, transformation)
      • data mining
      • interpretation and evaluation of discovered patterns
    • Data mining is the key phase, taking only 15-25% of the effort of the overall KDD process (a code sketch of the three phases follows below)
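A minimal sketch of how the three KDD phases can look in code, using scikit-learn. The file name survey.csv, the target column and the choice of a decision tree as the mining step are illustrative assumptions, not part of the original course material.

```python
# Hypothetical illustration of the three KDD phases (file and columns assumed).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# 1. Data preparation: selection, pre-processing, transformation
data = pd.read_csv("survey.csv")                  # hypothetical prepared dataset
X = data.drop(columns=["target"])                 # attribute selection
y = data["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Data mining: induce a model (here a decision tree)
model = Pipeline([
    ("transform", StandardScaler()),              # transformation step
    ("mine", DecisionTreeClassifier(max_depth=4)),
])
model.fit(X_train, y_train)

# 3. Interpretation and evaluation of the discovered patterns
print(classification_report(y_test, model.predict(X_test)))
```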
  • 19. Part I. Introduction
    • Data Mining and the KDD process
    • Why DM: Examples of discovered patterns and applications
    • Classification of DM tasks and techniques
    • Visualization and overview of DM tools
  • 20. The SolEuNet Project
    • European 5FP project “Data Mining and Decision Support for Business Competitiveness: A European Virtual Enterprise”, 2000-2003
    • Scientific coordinator: Jožef Stefan Institute; administrative coordinator: Fraunhofer Gesellschaft
    • 3 M€, 12 partners (8 academic and 4 business) from 7 countries
    • Main project objectives:
      • development of prototype solutions for end-users
      • foundation of a virtual enterprise for marketing data mining and decision support expertise, involving business and academia
  • 21. Data mining application prototypes
    • Mediana – analysis of media research data
    • Kline & Kline – improved brand name recognition
    • Australian financial house – customer quality evaluation, stock market prediction
    • Czech health farm – predict the use of resources
    • UK County Council - analysis of traffic accident data
    • Portuguese statistical bureau – Web page access analysis for better page organization
    • Detection of coronary heart disease risk groups
    • Analysis of online dating
    • EC Harris, UK - analysis of building construction projects
    • European Commission - analysis of 5FP IST projects: better understanding of large amounts of text documents, “clique” identification
  • 22. Mediana case study
    • Questionnaires about journal/magazine reading, watching TV programs and listening to radio programs, published annually since 1992; about 1200 questions/attributes (frequency of reading/listening/watching, distribution w.r.t. sex, age, education, buying power, interests, ...)
    • Data for 1998, about 8000 questionnaires
    • Good quality, “clean” data
    • Table of n-tuples (rows: individuals, columns: attributes)
  • 23. Mediana case study
    • Target patterns:
      • Which other journals/magazines are read by readers of a particular journal/magazine?
      • What are the properties of individuals that are consumers of a particular media?
      • Which properties are distinctive for readers of various journals?
    • Induced models: description (association rules, clusters) and classification (decision trees, classification rules)
  • 24. Decision trees
    • Finding reader profiles: decision tree for classifying people into readers and non-readers of a teenage magazine
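A hedged sketch of how such a reader-profile tree could be induced with scikit-learn. The tiny table below and its column names are invented for illustration; they are not the Mediana data.

```python
# Toy reader/non-reader example (all values invented).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

survey = pd.DataFrame({
    "age":            [14, 16, 35, 42, 15, 17, 50, 13],
    "reads_daily":    [0, 0, 1, 1, 0, 1, 1, 0],
    "watches_tv_pop": [1, 1, 0, 0, 1, 1, 0, 1],
    "reads_teen_mag": [1, 1, 0, 0, 1, 0, 0, 1],   # target: reader of the teenage magazine
})
X = survey[["age", "reads_daily", "watches_tv_pop"]]
y = survey["reads_teen_mag"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # readable reader profile
```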
  • 25. Classification rules. Set of rules: if Cond then Class. Interpretation: if-then ruleset, or if-then-else decision list. Class: reading of the daily newspaper EN (Evening News)
    • if a person does not read MM (Maribor Magazine) and rarely reads the weekly magazine “7Days”, then the person does not read EN
    • else if a person rarely reads MM and does not read the weekly magazine SN (Sunday News), then the person reads EN
    • else if a person rarely reads MM, then the person does not read EN
    • else the person reads EN
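The same decision list written out directly as an if-then-else cascade; the string encodings "no" and "rarely" are assumptions about how the questionnaire answers were coded.

```python
# The if-then-else decision list from the slide above, as plain Python.
def reads_evening_news(reads_MM: str, reads_7days: str, reads_SN: str) -> bool:
    if reads_MM == "no" and reads_7days == "rarely":
        return False          # does not read EN
    elif reads_MM == "rarely" and reads_SN == "no":
        return True           # reads EN
    elif reads_MM == "rarely":
        return False          # does not read EN
    else:
        return True           # reads EN

print(reads_evening_news("no", "rarely", "regularly"))  # -> False
```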
  • 26. Association rules
    • Rules X => Y, where X and Y are conjunctions of binary attributes
    • Support: Sup(X,Y) = #XY / #D = p(XY)
    • Confidence: Conf(X,Y) = #XY / #X = p(XY) / p(X) = p(Y|X)
    • Task: Find all association rules that satisfy minimum support and minimum confidence constraints (a small computation sketch follows after this slide).
    • Example association rule about readers of yellow press daily newspaper SloN (Slovenian News):
    • read_Love_Stories_Magazine => read_SloN
    • sup = 3.5% (3.5% of the whole dataset population reads both LSM and SloN)
    • conf = 61% (61% of those reading LSM also read SloN)
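A small sketch of the support and confidence definitions above, computed on an invented set of reading "transactions" (the counts do not reproduce the real Mediana figures).

```python
def support(D, itemset):
    """Sup(X) = #X / #D = p(X): fraction of rows containing all items of `itemset`."""
    return sum(itemset <= row for row in D) / len(D)

def confidence(D, X, Y):
    """Conf(X => Y) = p(XY) / p(X) = p(Y|X)."""
    return support(D, X | Y) / support(D, X)

# Each row is the set of publications one respondent reads (invented data).
D = [
    {"LSM", "SloN"}, {"LSM", "SloN"}, {"LSM"}, {"SloN"},
    {"Delo"}, {"LSM", "SloN", "Delo"}, {"Delo", "SloN"}, {"LSM"},
]
X, Y = {"LSM"}, {"SloN"}
print(f"sup = {support(D, X | Y):.2f}, conf = {confidence(D, X, Y):.2f}")
```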
  • 27. Association rules: Finding profiles of readers of the Delo daily newspaper
    • 1. read_Marketing magazine 116 => read_Delo 95 (0.82)
    • 2. read_Financial_News 223 => read_Delo 180 (0.81)
    • 3. read_Views 201 => read_Delo 157 (0.78)
    • 4. read_Money 197 => read_Delo 150 (0.76)
    • 5. read_Vip 181 => read_Delo 134 (0.74)
    • Interpretation: most readers of Marketing magazine, Financial News, Views, Money and Vip also read Delo.
  • 28. Analysis of UK traffic accidents
    • End-user: Hampshire County Council (HCC, UK)
      • Can records of road traffic accidents be analysed to produce road safety information valuable to county surveyors?
      • HCC is sponsored to carry out a research project Road Surface Characteristics and Safety
      • Research includes an analysis of the STATS19 Accident Report Form Database to identify trends over time in the relationships between recorded road-user type/injury, vehicle position/damage, and road surface characteristics
  • 29. The STATS19 database
    • Over 5 million accidents recorded in 1979-1999
    • 3 data tables
    • Accident table ACC7999 (~5 million accidents, 30 variables): Where? When? How many?
    • Vehicle table VEH7999 (~9 million vehicles, 24 variables): Which vehicles? What movement? Which consequences?
    • Casualty table CAS7999 (~7 million injuries, 16 variables): Who was injured? What injuries? ...
  • 30. Data understanding
  • 31. Data quality: Accident location
  • 32. Data preparation
    • There are 51 police force areas in the UK
    • For each area we count the number of accidents in each of the following (a pandas-style sketch follows after this list):
      • Year
      • Month
      • Day of Week
      • Hour of Day
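A pandas-style sketch of this counting step. The file name, column names and date/time encodings are assumptions for illustration; the real STATS19 export uses its own coding scheme.

```python
import pandas as pd

# Hypothetical extract of the accident table with assumed columns.
acc = pd.read_csv("ACC7999.csv", parse_dates=["date"])
acc["year"] = acc["date"].dt.year
acc["month"] = acc["date"].dt.month
acc["weekday"] = acc["date"].dt.day_name()
acc["hour"] = pd.to_datetime(acc["time"], format="%H:%M").dt.hour

# One table per time dimension: rows = police force areas, columns = periods.
counts = {
    dim: acc.groupby(["police_force", dim]).size().unstack(fill_value=0)
    for dim in ["year", "month", "weekday", "hour"]
}
print(counts["weekday"].head())
```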
  • 33. Data preparation
  • 34. Simple visualization of short time series
    • Used for data understanding
    • A very informative and easy-to-understand format
    • UK traffic accident analysis: Distributions of number of accidents over different time periods (year, month, day of week, and hour)
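A sketch of the heat-map style views shown on the next slides (Year/Month, Day of Week/Month, ...), here with randomly generated counts standing in for a table like the ones built above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
year_month = rng.poisson(lam=400, size=(21, 12))   # 1979-1999 x Jan-Dec, fake counts

fig, ax = plt.subplots()
im = ax.imshow(year_month, cmap="Greys")           # darker colour = more accidents
ax.set_xlabel("Month (Jan-Dec)")
ax.set_ylabel("Year (1979-1999)")
fig.colorbar(im, label="Number of accidents")
plt.show()
```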
  • 35. Year/Month distribution [Year x Month heat map: darker color = more accidents]
  • 36. Day of Week/Month distribution [Day-of-week x Month heat map]: all weekdays (Mon-Fri) are worse in deep winter; Friday is the worst
  • 37. Hour/Month distribution
    • More accidents at “rush hour”; the afternoon rush hour is the worst
    • More holiday traffic (less rush hour) in August
  • 38. Day of Week/Hour distribution
    • 1. More accidents at “rush hour”; the afternoon rush hour is the worst and lasts longer, with an “early finish” on Fridays
    • 2. More leisure traffic on Saturday/Sunday
  • 39. Traffic: different modeling approaches
    • association rule learning
    • static subgroup discovery
    • dynamic subgroup discovery
    • clustering of short time series (a sketch follows after this list)
    • text mining
    • multi-relational approaches
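A sketch of the "clustering of short time series" approach: grouping police force areas by the shape of their hour-of-day accident profile. The data are random stand-ins and k=4 is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(1)
profiles = rng.poisson(lam=50, size=(51, 24)).astype(float)  # 51 areas x 24 hours
profiles = normalize(profiles, norm="l1")  # compare shapes, not absolute volumes

labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(profiles)
print(labels)  # cluster assignment for each of the 51 areas
```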
  • 40. Some discovered association rules
    • Association rules: Road number and Severity of accident
      • The probability of a fatal or serious accident on the “K8” road is 2.2 times greater than the probability of fatal or serious accidents in the county generally.
      • The probability of fatal accidents on the “K7” road is 2.8 times greater than the probability of fatal accidents in the county generally (when the road is dry and the speed limit = 70).
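The "x times greater than the probability in the county generally" statements are lift values. A minimal sketch of the computation, with invented counts for the hypothetical road "K8":

```python
n_total = 100_000         # all recorded accidents in the county (invented)
n_serious = 12_000        # fatal or serious accidents overall (invented)
n_k8 = 500                # accidents on road K8 (invented)
n_k8_serious = 132        # fatal or serious accidents on K8 (invented)

p_serious = n_serious / n_total             # baseline probability
p_serious_given_k8 = n_k8_serious / n_k8    # confidence of the rule K8 => serious
lift = p_serious_given_k8 / p_serious
print(f"lift = {lift:.1f}")                 # -> 2.2 with these invented counts
```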
  • 41. Analysis of documents of European IST project
    • Data source:
    • List of IST project descriptions as 1-2 page text summaries from the Web (database www.cordis.lu/)
    • The IST 5FP comprises 2786 projects in which 7886 organizations participate
    • Analysis tasks:
    • Visualization of project topics
    • Analysis of collaboration
    • Connectedness between organizations
    • Community/clique identification
    • Thematic consortia identification
    • Simulation of 6FP IST
  • 42. Analysis of documents of European IST project
  • 43. Visualization into 25 project groups [topic map; example group labels: Health, Data analysis, Knowledge Management, Mobile computing]
  • 44. Institutional backbone of IST [collaboration graph; labels: Telecommunication, Transport, Electronics; no. of joint projects]
  • 45. Collaboration between countries (top 12) [chart; labels: most active country, number of collaborations]
  • 46. Part I. Introduction
    • Data Mining and the KDD process
    • Why DM: Examples of discovered patterns and applications
    • Classification of DM tasks and techniques
    • Visualization and overview of DM tools
  • 47. Types of DM tasks
    • Predictive DM:
      • Classification (learning of rulesets, decision trees, ...)
      • Prediction and estimation (regression)
      • Predictive relational DM (RDM, ILP)
    • Descriptive DM:
      • description and summarization
      • dependency analysis (association rule learning)
      • discovery of properties and constraints
      • segmentation (clustering)
      • subgroup discovery
    • Text, Web and image analysis
  • 48. Predictive vs. descriptive induction
    • Predictive induction
    • Descriptive induction
  • 49. Predictive vs. descriptive induction
    • Predictive induction: Inducing classifiers for solving classification and prediction tasks,
      • Classification rule learning, Decision tree learning, ...
      • Bayesian classifier, ANN, SVM, ...
      • Data analysis through hypothesis generation and testing
    • Descriptive induction: Discovering interesting regularities in the data, uncovering patterns, ... for solving KDD tasks
      • Symbolic clustering, Association rule learning, Subgroup discovery, ...
      • Exploratory data analysis
  • 50. Predictive vs. descriptive induction: A rule learning perspective
    • Predictive induction: Induces rulesets acting as classifiers for solving classification and prediction tasks
    • Descriptive induction: Discovers individual rules describing interesting regularities in the data
    • Therefore: Different goals, different heuristics, different evaluation criteria
  • 51. Supervised vs. unsupervised learning: A rule learning perspective
    • Supervised learning: Rules are induced from labeled instances (training examples with class assignment) - usually used in predictive induction
    • Unsupervised learning: Rules are induced from unlabeled instances (training examples with no class assignment) - usually used in descriptive induction
    • Exception: Subgroup discovery
    • Discovers individual rules describing interesting regularities in the data from labeled examples
  • 52. Subgroups vs. classifiers
    • Classifiers:
      • Classification rules aim at pure subgroups
      • A set of rules forms a domain model
    • Subgroups:
      • Rules describing subgroups aim at a significantly higher proportion of positives
      • Each rule is an independent chunk of knowledge
    • Link:
      • SD can be viewed as a form of cost-sensitive classification
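One common way to score subgroup rules that "aim at a significantly higher proportion of positives" is weighted relative accuracy (WRAcc), which trades subgroup size against the gain over the overall positive rate. A minimal sketch with invented counts:

```python
def wracc(n, n_pos, n_sub, n_sub_pos):
    """WRAcc(rule) = p(subgroup) * (p(pos | subgroup) - p(pos))."""
    return (n_sub / n) * (n_sub_pos / n_sub - n_pos / n)

# 1000 patients, 200 with CHD risk; a candidate subgroup covers 100, of which 45 positive.
print(wracc(n=1000, n_pos=200, n_sub=100, n_sub_pos=45))  # -> 0.025
```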
  • 53. Part I. Introduction
    • Data Mining and the KDD process
    • Why DM: Examples of discovered patterns and applications
    • Classification of DM tasks and techniques
    • Visualization and overview of DM tools
  • 54. Visualization
    • can be used on its own (usually for description and summarization tasks)
    • can be used in combination with other DM techniques, for example
      • visualization of decision trees
      • cluster visualization
      • visualization of association rules
      • subgroup visualization
  • 55. Data visualization: Scatter plot
  • 56. Daisy Graph Visualization by B. Zupan et al.
  • 57. Daisy Graph: patients were mostly female
  • 58. Daisy Graph: the older the patient, the higher the difference in HHS between two follow-ups
  • 59. Data visualization: time dependency. Cumulative ineffectiveness of the antibiotics gentamicin, clindamycin, cefpiramide, and cefotaxime [Bohanec et al., “PTAH: A system for supporting nosocomial infection therapy”, IDAMAP book, 1997]
  • 60. Subgroup visualization Subgroups of patients with CHD risk [Gamberger, Lavrac & Wettschereck, IDAMAP2002]
  • 61. Subgroup visualization Subgroups of patients with CHD risk [Gamberger, Lavrac & Wettschereck, IDAMAP2002]
  • 62. Subgroup visualization Subgroups of patients with CHD risk [Gamberger & Lavrac, ICML2002]
  • 63. DB Miner: Association rule visualization
  • 64. MineSet: Association Rule Visualization
  • 65. MineSet: Decision tree visualization
  • 66. DM tools
  • 67. Clementine
  • 68. S-Plus
  • 69. Part I: Summary
    • KDD is the overall process of discovering useful knowledge in data
      • many steps including data preparation, cleaning, transformation, pre-processing
    • Data Mining is the data analysis phase in KDD
      • DM takes only 15%-25% of the effort of the overall KDD process
      • employing techniques from machine learning and statistics
    • Predictive and descriptive induction have different goals: classifier vs. pattern discovery
    • Many application areas
    • Many powerful tools available
  • 70. Part I: Introduction. Questions
