SDSC Summer Institute 2004 TUTORIAL: Data Mining for Scientific Applications
Peter Shin, Hector Jasso
San Diego Supercomputer Center, UCSD
Overview
- Introduction to data mining
  - Definitions, concepts, applications
  - Machine learning methods for KDD
    - Supervised learning – classification
    - Unsupervised learning – clustering
- Cyberinfrastructure for data mining
  - SDSC/NPACI resources – hardware and software
- Survey of applications at SKIDL
- Break
- Hands-on tutorial with IBM Intelligent Miner and SKIDLkit
  - Customer targeting
  - Microarray analysis (leukemia dataset)
Data Mining Definition
The search for interesting patterns and models, in large data collections, using statistical and machine learning methods, and high-performance computational infrastructure.
Key point: applications are data-driven and compute-intensive.
Analysis Levels and Infrastructure
- Informal methods – graphs, plots, visualizations, exploratory data analysis (yes, Excel is a data mining tool)
- Advanced query processing and OLAP – e.g., the National Virtual Observatory (NVO)
- Machine learning (compute-intensive statistical methods)
  - Supervised – classification, prediction
  - Unsupervised – clustering
- Computational infrastructure is needed at all levels: collections management, information integration, high-performance database systems, web services, grid services, scientific workflows, the global IT grid
The Case for Data Mining: Data Reality
- Deluge from new sources
  - Remote sensing
  - Microarray processing
  - Wireless communication
  - Simulation models
  - Instrumentation – microscopes, telescopes
  - Digital publishing
  - Federation of collections
- "5 exabytes (5 million terabytes) of new information was created in 2002" (source: UC Berkeley researchers Peter Lyman and Hal Varian)
- This reflects a recent paradigm shift: from hypothesis-driven data collection to data mining
- Data destination: legacy archives and independent collection activities
Knowledge Discovery Process
Data -> Collection -> Processing/Cleansing/Corrections -> Analysis/Modeling -> Presentation/Visualization -> Application/Decision Support -> Knowledge
Supporting all stages: Management/Federation/Warehousing
"Data is not information; information is not knowledge; knowledge is not wisdom." – Gary Flake, Principal Scientist and Head of Yahoo! Research Labs, July 2004
Characteristics of Data Mining Applications
- Data:
  - Lots of data, numerous sources
  - Noisy – missing values, outliers, interference
  - Heterogeneous – mixed types, mixed media
  - Complex – scale, resolution, temporal and spatial dimensions
- Relatively little domain theory, few quantitative causal models
- Lack of valid ground truth
- Advice: don't choose problems that have all of these characteristics …
Scientific vs. Commercial Data Mining
- Goals:
  - Science – theories: need for insight and theory-based models, interpretable model structures, generating domain rules or causal structures, support for theory development
  - Commercial – profits: black boxes are OK
- Types of data:
  - Science – images, sensors, simulations
  - Commercial – transaction data
  - Both – spatial and temporal dimensions, heterogeneous
- Trend – common IT (information technology) tools fit both enterprises
  - Database systems (Oracle, DB2, etc.), integration tools (Information Integrator), web services (Blue Titan, .NET)
  - This is good!
Introduction to Machine Learning
- Basic machine learning theory
- Concepts and feature vectors
- Supervised and unsupervised learning
- Model development
  - Training and testing methodology
  - Model validation
  - Overfitting
  - Confusion matrices
- Survey of algorithms
  - Decision tree classification
  - k-means clustering
  - Hierarchical clustering
  - Bayesian networks and probabilistic inference
  - Support vector machines
Basic Machine Learning Theory
- Basic inductive learning hypothesis:
  - Given a large number of observations, we can approximate the rule that generated the data, and thus build a model (using some algorithm)
- No Free Lunch Theorem:
  - There is no ultimate algorithm: in the absence of prior information about the problem, there is no reason to prefer one learning algorithm over another
- Conclusion:
  - There is no problem-independent "best" learning system; formal theory and algorithms are not enough
  - Machine learning is an empirical subject
Concepts are described as feature vectors
- Example: vehicles
  - Has wheels
  - Runs on gasoline
  - Carries people
  - Flies
  - Weighs less than 500 pounds
- Boolean feature vectors for vehicles:
  - car254        [ 1 1 1 0 0 ]
  - motorcycle14  [ 1 1 1 0 1 ]
  - airplane132   [ 1 1 1 1 0 ]
Easy to generalize to complex data types:
- Number of wheels
- Fuel type
- Carrying capacity
- Flies
- Weight
  - car254        [ 4, gas, 6, 0, 2000 ]
  - motorcycle14  [ 2, gas, 2, 0, 400 ]
  - airplane132   [ 10, jetfuel, 110, 1, 35000 ]
Most machine learning algorithms expect feature vectors, stored in text files or databases.
Suggestions:
- Identify the target concept
- Organize your data to fit the feature vector representation (see the sketch below)
- Design your database schemas to support generating data in this format
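As a minimal sketch (not from the tutorial), here is one way to encode such mixed-type records as numeric feature vectors in Python; the records and the hand-rolled one-hot encoding of the categorical fuel field are illustrative assumptions.

```python
# Turn mixed-type records into numeric feature vectors.
# Categorical "fuel type" is one-hot encoded by hand.
vehicles = {
    "car254":       (4,  "gas",     6,   0,  2000),
    "motorcycle14": (2,  "gas",     2,   0,   400),
    "airplane132":  (10, "jetfuel", 110, 1, 35000),
}

fuel_types = ["gas", "jetfuel"]  # categories observed in the data

def to_feature_vector(record):
    wheels, fuel, capacity, flies, weight = record
    one_hot = [1.0 if fuel == f else 0.0 for f in fuel_types]
    return [float(wheels)] + one_hot + [float(capacity), float(flies), float(weight)]

for name, record in vehicles.items():
    print(name, to_feature_vector(record))
```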
Supervised vs. Unsupervised Learning
- Supervised – each feature vector belongs to a class (label). Labels are given externally, and algorithms learn to predict the labels of new samples/observations.
- Unsupervised – finds structure in the data by clustering similar elements together. No previous knowledge of classes is needed.
Model Development
- Training and testing: train -> test -> apply
- Model validation (see the sketch below):
  - Hold-out validation (2/3, 1/3 splits)
  - Cross validation, simple and n-fold (reuse)
  - Bootstrap validation (sample with replacement)
  - Jackknife validation (leave one out)
  - When possible, hide a subset of the data until train-test is complete
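As a concrete illustration, here is a hedged Python sketch of hold-out and n-fold cross validation using scikit-learn (a modern stand-in for the tools named in this tutorial; the iris dataset is just a placeholder).

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold-out validation: train on 2/3, test on the held-out 1/3.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# n-fold cross validation: every sample is tested exactly once.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print("10-fold CV accuracy:", scores.mean())
```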
Avoid Overfitting
[Figure: training vs. test error as a function of model depth. Training error keeps falling while test error turns upward past the optimal depth, where overfitting begins.]
Confusion Matrices

                    Predicted Negative   Predicted Positive
Actual Negative            124                   15
Actual Positive              8                   84

Accuracy            = (124 + 84) / (124 + 15 + 8 + 84)   "proportion of predictions correct"
True positive rate  = 84 / (8 + 84)      "proportion of positive cases correctly identified"
False positive rate = 15 / (124 + 15)    "proportion of negative cases incorrectly classified as positive"
True negative rate  = 124 / (124 + 15)   "proportion of negative cases correctly identified"
False negative rate = 8 / (8 + 84)       "proportion of positive cases incorrectly classified as negative"
Precision           = 84 / (15 + 84)     "proportion of predicted positive cases that were correct"
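These metrics are simple arithmetic on the matrix cells; the short Python below recomputes the slide's numbers.

```python
# Recompute the slide's metrics from its 2x2 confusion matrix.
tn, fp = 124, 15   # actual negative: predicted negative / positive
fn, tp = 8, 84     # actual positive: predicted negative / positive

accuracy  = (tn + tp) / (tn + fp + fn + tp)   # ~0.900
tpr       = tp / (fn + tp)                    # ~0.913 (recall)
fpr       = fp / (tn + fp)                    # ~0.108
tnr       = tn / (tn + fp)                    # ~0.892
fnr       = fn / (fn + tp)                    # ~0.087
precision = tp / (fp + tp)                    # ~0.848
print(accuracy, tpr, fpr, tnr, fnr, precision)
```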
Classification – Decision Tree

Annual Precipitation   Ecosystem
        63             Prairie
       116             Forest
         5             Desert
       104             Forest
       120             Forest
         2             Desert
Split: Precipitation > 63?
- YES: 104 Forest, 116 Forest, 120 Forest
- NO:  63 Prairie, 5 Desert, 2 Desert
Split again: Precipitation > 5?
- Precipitation > 63?
  - YES: 104 Forest, 116 Forest, 120 Forest
  - NO: Precipitation > 5?
    - YES: 63 Prairie
    - NO:  5 Desert, 2 Desert
Learned model:
  If (Precip > 63) then "Forest"
  else if (Precip > 5) then "Prairie"
  else "Desert"

Classification accuracy on the training data is 100%.

Confusion matrix (training data):
              Predicted D   Predicted F   Predicted P
Actual D           2             0             0
Actual F           0             3             0
Actual P           0             0             1
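For comparison, a short scikit-learn sketch (an assumed stand-in for Intelligent Miner) fits a tree to the same six training rows; the learned thresholds may differ from the slide's, but training accuracy is likewise 100%.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[63], [116], [5], [104], [120], [2]]          # annual precipitation
y = ["Prairie", "Forest", "Desert", "Forest", "Forest", "Desert"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=["precip"]))  # learned split rules
print("training accuracy:", tree.score(X, y))       # 1.0, as on the slide
```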
Testing Set Results

Learned model:
  If (Precip > 63) then Forest
  else if (Precip > 5) then Prairie
  else Desert

Test data   True      Predicted
     72     Prairie   Forest
    116     Forest    Forest
      4     Desert    Desert
     55     Prairie   Prairie
    100     Forest    Forest
      8     Desert    Prairie

Result: accuracy 67%. The model shows overfitting and generalizes poorly.

Confusion matrix (test data):
              Predicted D   Predicted F   Predicted P
Actual D           1             0             1
Actual F           0             2             0
Actual P           0             1             1
Pruning to Improve Generalization

Pruned decision tree: split on Precipitation < 60?
  If (Precip < 60) then Desert
  else [P(Forest) = .75] and [P(Prairie) = .25]
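A rough way to mimic this pruning with scikit-learn (an assumption, not the tutorial's tool) is to cap the tree at a single split; the learned threshold differs from the slide's 60, but the effect is the same: one coarse split, training accuracy drops to about 0.83, and the tree generalizes better.

```python
from sklearn.tree import DecisionTreeClassifier

X = [[63], [116], [5], [104], [120], [2]]
y = ["Prairie", "Forest", "Desert", "Forest", "Forest", "Desert"]

# max_depth=1 allows only one split, a crude form of pruning.
pruned = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)
print("training accuracy after pruning:", pruned.score(X, y))  # ~0.83
```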
Decision Trees Summary
- Simple to understand
- Works with mixed data types
- Heuristic search is sensitive to local minima
- Models non-linear functions
- Handles classification and regression
- Many successful applications
- Readily available tools
Overview of Clustering
- Definition:
  - Clustering is the discovery of classes
  - Unlabeled examples => unsupervised learning
- Survey of applications:
  - Grouping of web-visit data, clustering of genes according to their expression values, grouping of customers into distinct profiles
- Survey of methods:
  - k-means clustering
  - Hierarchical clustering
  - Expectation Maximization (EM) algorithm
  - Gaussian mixture modeling
- Cluster analysis:
  - Concept (class) discovery
  - Data compression/summarization
  - Bootstrapping knowledge
Clustering – k-Means

Precipitation   Temperature
      49             32
      76             17
      45             49
      63             62
      70             71
      81              8
[Figure sequence: k-means iterations on the points above. Centroids are placed, each point is assigned to its nearest centroid, and the centroids are recomputed; the process repeats until assignments stabilize into three clusters.]
Resulting cluster ranges:

Cluster   Temperature   Precipitation
  C1         0 – 25        70 – 85
  C2        25 – 55        35 – 60
  C3        50 – 80        50 – 80
Clusters matched to ecosystems:

Cluster   Temperature   Precipitation   Ecosystem
  C1         0 – 25        70 – 85       Desert
  C2        25 – 55        35 – 60       Prairie
  C3        50 – 80        50 – 80       Forest
Using k-Means (a short sketch follows)
- Requires a priori knowledge of 'k'
- The final outcome depends on the initial choice of the k means – inconsistency
- Sensitive to outliers, which can skew the means of their clusters
- Favors spherical clusters – clusters may not match domain boundaries
- Requires real-valued features
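The sketch below runs k-means on the six points above with scikit-learn (an assumed substitute for the tutorial's tools). Note that k = 3 must be supplied up front, and n_init reruns the algorithm from several random starts to soften the initialization sensitivity noted above.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[49, 32], [76, 17], [45, 49],
              [63, 62], [70, 71], [81, 8]])   # (precipitation, temperature)

# k must be chosen a priori; n_init restarts mitigate bad initializations.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("labels:   ", km.labels_)
print("centroids:", km.cluster_centers_)
```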
Cyberinfrastructure for Data Mining
- Resources – hardware and software (analysis tools and middleware)
- Policies – allocating resources to the scientific community; challenges to the traditional supercomputer model; requirements for interactive and real-time analysis resources
NSF TeraGrid: Building Integrated National CyberInfrastructure
- Prototype for CyberInfrastructure
  - Ubiquitous computational resources
  - Plug-in compatibility
- National reach: SDSC, NCSA, CIT, ANL, PSC
- High-performance network: 40 Gb/s backbone, 30 Gb/s to each site
- Over 20 teraflops of compute power
- Over 1 PB online storage
- 8.9 PB archival storage
SDSC is a Data-Intensive Center
[Figure slides.]
SDSC Machine Room Data Architecture
- Philosophy: enable the SDSC configuration to serve the grid as a Data Center
- 0.5 PB disk
- 6 PB archive
- 1 GB/s disk-to-tape
- Optimized support for DB2/Oracle
[Architecture diagram: Blue Horizon (1,152-processor IBM SP, 1.7 teraflops), a 4 TF Linux cluster, a Sun F15K, and Power 4 database-engine, data-miner, and vis-engine nodes with 50 TB local disk, connected by a LAN (multiple GbE, TCP/IP) and a SAN (2 Gb/s SCSI; SCSI/IP or FC/IP); FC disk cache (400 TB) and FC GPFS disk (100 TB, 200 MB/s per controller); HPSS silos and tape (6 PB, 1 GB/s disk to tape, 32 tape drives at 30 MB/s per drive, over 600 TB stored); 30 Gb/s WAN.]
SDSC IBM Regatta – DataStar
- 100+ TB disk
- Numerous fast CPUs
- 64 GB of RAM per node
- DB2 v8.x ESE
- IBM Intelligent Miner
- SAS Enterprise Miner
- Platform for high-performance database, data mining, and comparative IT studies …
Data Mining Tools Used at SDSC
- SAS Enterprise Miner (protein crystallization – JCSG)
- IBM Intelligent Miner (protein crystallization – JCSG; corn yield – Michigan State University; security logs – SDSC)
- CART (protein crystallization – JCSG)
- Matlab SVM package (TeraBridge health monitoring – UCSD Structural Engineering Department; North Temperate Lakes monitoring – LTER)
- PyML (text mining – NSDL; hyperspectral data – LTER)
- SKIDLkit by SDSC (microarray analysis – UCSD Cancer Center; hyperspectral data – LTER)
- SVMlight (hyperspectral data – LTER)
- LSI by Telcordia (text mining – NSDL)
- CoClustering by Fair Isaac (text mining – NSDL)
- Matlab Bayes Net package
- WEKA
SKIDLkit
- Toolkit for feature selection and classification
  - Filter methods
  - Wrapper methods
  - Data normalization
  - Feature selection
  - Support Vector Machine & Naïve Bayesian clustering
  - http://daks.sdsc.edu/skidl
- We will use it in the hands-on demo; a rough analogue of its workflow is sketched below.
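SKIDLkit itself is not shown here; the following is a hedged scikit-learn sketch of the same normalize / filter-select / SVM workflow, with the bundled breast-cancer dataset standing in for real data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    StandardScaler(),                 # data normalization
    SelectKBest(f_classif, k=10),     # filter-method feature selection
    SVC(kernel="linear"),             # support vector machine classifier
).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```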
Survey of Applications at SDSC
- Sensor networks for bridge monitoring (with the Structural Engineering Dept., UCSD)
- Text mining the NSDL (National Science Digital Library) collection
- Hyperspectral remote sensing data for groundcover classification (with the Long Term Ecological Research Network – LTER)
- Microarray analysis for tumor detection (with the UCSD Cancer Center)
Sensor Networks for Bridge Monitoring
- Task: detection and classification
  - Identify damaged piers from the stream of acceleration measurements
  - Determine which sensors are key to assessing bridge health
  - Multi-resolution analysis -> rational resource management
- Testbed: Humboldt Bay Bridge, with 8 piers
- Assumptions:
  - Damage occurs only at the lower end of each pier (location of the plastic hinge)
  - Only one pier is damaged at a time
Text Mining the NSDL

Processing pipeline:
1. Variously formatted documents -> strip formatting
2. Pick out content words using "stop lists"
3. Stemming
4. Discard words that appear in every document or in only one
5. Word count and term weighting -> generate term-document matrix
6. Query: for a list of words, retrieve the documents with the highest scores
7. Various retrieval schemes (LSI, classification, or clustering modules)

A toy sketch of the early stages follows.
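A toy version of the early pipeline stages (stop lists, term weighting, a term-document matrix, and query scoring) might look like this in Python with scikit-learn; the three documents are invented, and stemming is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "data mining finds patterns in large scientific collections",
    "supervised learning predicts labels for new observations",
    "clustering groups similar observations without labels",
]

# Stop-word removal and per-term weighting; stemming would need e.g. NLTK.
vec = TfidfVectorizer(stop_words="english")
tdm = vec.fit_transform(docs)            # term-document matrix

# Query: return the document with the highest score for a list of words.
query = vec.transform(["clustering observations"])
scores = (tdm @ query.T).toarray().ravel()
print("best match:", docs[scores.argmax()])
```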
Hyperspectral Image Classification
- Characteristics of the data:
  - Over 200 bands
  - Small number of samples, from a labor-intensive collecting process
  - Collaboration with the Long Term Ecological Research Network
- Tasks:
  - Classify the vegetation (e.g., juniper tree, sage, etc.)
  - Identify key bands
  - Detect spatio-temporal patterns
Microarray Analysis for Tumor Detection
- Characteristics of the data:
  - 88 prostate tissue samples: 37 labeled "no tumor", 51 labeled "tumor"
  - Each tissue has 10,600 gene expression measurements
  - Collected by the UCSD Cancer Center, analyzed at SDSC
- Tasks:
  - Build a model to classify new, unseen tissues as either "no tumor" or "tumor"
  - Identify key genes to determine their biological significance in the cancer process
Some genes are more useful than others for building classification models.
Example: genes 36569_at and 36495_at are useful.
[Figure: expression values for these genes in the "no tumor" vs. "tumor" samples; a ranking sketch follows.]
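A hedged sketch of ranking genes individually with a filter method follows. The expression matrix here is synthetic, with the real study's shape (88 tissues by 10,600 genes) and five artificially informative genes planted for illustration.

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 10600))     # synthetic expression matrix
y = np.array([0] * 37 + [1] * 51)    # 37 "no tumor", 51 "tumor"
X[y == 1, :5] += 1.5                 # plant 5 informative genes

F, _ = f_classif(X, y)               # per-gene ANOVA F-score (filter method)
print("top genes:", np.argsort(F)[::-1][:5])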
Results Using an Independent Test Set
[Figure: classification results on the held-out test set.]
Break
Hands-on Analysis
- Decision tree classification with IBM Intelligent Miner
- Using classification models to make rational decisions
Data Mining Example: Targeting Customers
- Problem characteristics:
  1. We make a $50 profit on a sale of $200 shoes.
  2. A preliminary study shows that people who make over $50k buy the shoes at a rate of 5% when they receive the brochure.
  3. People who make less than $50k buy the shoes at a rate of 1% when they receive the brochure.
  4. It costs $1 to send a brochure to a potential customer.
  5. In general, we do not know whether a person makes more than $50k or not.
Available Information
- Variable description: please refer to the hand-out.
Possible Marketing Plans
- We will send out 30,000 brochures.
- Plan A: ignore the data and randomly send brochures (a.k.a. the ran-dumb plan)
- Plan B: use data mining to target a specific group with a high probability of responding (a.k.a. the Intelligent Target (IT) plan)
Plan A (ran-dumb plan)
- Strategy: send brochures to anyone
- Cost of sending one brochure = $1
- Probability of response:
  - 1% response among the 76% of the population making <= $50k
  - 5% response among the 24% of the population making > $50k
  - Resulting in a (1% * 76% + 5% * 24%) = 1.96% final response rate
- Earnings:
  - Expected profit from one brochure = probability of response * profit – cost of a brochure
  - 1.96% * $50 - $1 = -$0.02
  - Expected earning = expected profit from one brochure * number of brochures sent
  - -$0.02 * 30,000 = -$600
Plan B (Intelligent Target (IT) plan)
- Strategy: send brochures only to people who are: married; college-educated or above; in managerial/professional/sales/tech support/protective service/armed forces occupations; age >= 28.5; hours_per_week >= 31
- Cost of sending one brochure = $1
- Probability of response:
  - 1% response among the 20.6% of the targeted group making <= $50k
  - 5% response among the 79.4% of the targeted group making > $50k
  - Resulting in a (1% * 20.6% + 5% * 79.4%) = 4.176% final response rate
- Earnings:
  - Expected profit from one brochure = probability of response * profit – cost of a brochure
  - 4.176% * $50 - $1 = $1.088
  - Expected earning = $1.088 * 30,000 = $32,640
Comparison of the Two Plans
- Expected earning from the ran-dumb plan: -$600
- Expected earning from the IT plan: $32,640
- Net difference: $32,640 – (-$600) = $33,240
(Both computations are reproduced in the sketch below.)
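The arithmetic behind both plans is easy to verify; this short script reproduces the -$600, $32,640, and $33,240 figures from the slides.

```python
# Expected earnings for both mailing plans, from the slides' figures.
profit, cost, n = 50.0, 1.0, 30000

def expected_earning(p_low, p_high):
    rate = 0.01 * p_low + 0.05 * p_high      # blended response rate
    return (rate * profit - cost) * n

plan_a = expected_earning(0.76, 0.24)        # random mailing   -> -$600
plan_b = expected_earning(0.206, 0.794)      # targeted mailing -> $32,640
print(plan_a, plan_b, plan_b - plan_a)       # net difference:     $33,240
```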
Acknowledgements
- Original source: Census Bureau (1994)
- Data processed and donated by Ron Kohavi and Barry Becker (Data Mining and Visualization, SGI)
Slide notes
- Little control over data collection leads to challenges in the data; machine learning algorithms handle some of these issues, and HPC is needed to meet the computational requirements raised by ML algorithms and big data.
- Practical definition; workshop definition.
- KDD vs. data mining vs. machine learning: note the roles of processing, management, and computational infrastructure; keep human processing to a minimum and optimize integration.