Data Mining

Speaker notes:
  • Little control over the data collection leads to challenges in the data. Machine learning algorithms handle some of these issues; HPC is needed to handle the computational requirements raised by ML algorithms and big data.
  • Practical definition; workshop definition.
  • KDD vs. data mining vs. machine learning. Role of processing and management. Role of computational infrastructure. Keep human processing to a minimum; optimize integration.
  • Transcript

    • 1. SDSC Summer Institute 2005 Tutorial: Data Mining for Scientific Applications
      Peter Shin, Hector Jasso
      San Diego Supercomputer Center, UCSD
    • 2. Overview
      • Introduction to data mining
        • Definitions, concepts, applications
        • Machine learning methods for KDD
          • Supervised learning – classification
          • Unsupervised learning – clustering
      • Cyberinfrastructure for data mining
        • SDSC resources – hardware and software
      • Survey of Applications at SKIDL
      • Break
      • Hands on tutorial with IBM Intelligent Miner and SKIDLkit
        • Targeted Marketing
        • Microarray analysis (leukemia dataset)
    • 3. Data Mining Definition
      • The search for interesting patterns and models,
      • in large data collections,
      • using statistical and machine learning methods,
      • and high-performance computational infrastructure.
      Key point: applications are data-driven and compute-intensive
    • 4. Analysis Levels and Infrastructure
      • Informal methods – graphs, plots, visualizations, exploratory data analysis (yes – Excel is a data mining tool)
      • Advanced query processing and OLAP – e.g., National Virtual Observatory (NVO)
      • Machine learning (compute-intensive statistical methods)
        • Supervised – classification, prediction
        • Unsupervised – clustering
      • Computational infrastructure needed at all levels – collections management, information integration, high-performance database systems, web services, grid services, scientific workflows, the global IT grid, observing systems
    • 5. The Case for Data Mining: Data Reality
      • Deluge from new sources
        • Remote sensing
        • Microarray processing
        • Wireless communication
        • Simulation models
        • Instrumentation – microscopes, telescopes
        • Digital publishing
        • Federation of collections
      • “5 exabytes (5 million terabytes) of new information was created in 2002” (source: UC Berkeley researchers Peter Lyman and Hal Varian)
      • This is the result of a recent paradigm shift: from hypothesis-driven data collection to data mining
      • Data destination: Legacy archives and independent collection activities
    • 6. Knowledge Discovery Process
      Data → Collection → Processing/Cleansing/Corrections → Analysis/Modeling → Presentation/Visualization → Application/Decision Support → Knowledge, supported throughout by Management/Federation/Warehousing
      “Data is not information; information is not knowledge; knowledge is not wisdom.” Gary Flake, Principal Scientist & Head of Yahoo! Research Labs, July 2004.
    • 7. Characteristics of Data Mining Applications
      • Data :
        • Lots of data, numerous sources
        • Noisy – missing values, outliers, interference
        • Heterogeneous – mixed types, mixed media
        • Complex – scale, resolution, temporal, spatial dimensions
      • Relatively little domain theory , few quantitative causal models
      • Lack of valid ground truth
      • Advice: don’t choose problems that have all these characteristics …
    • 8. Scientific vs. Commercial Data Mining
      • Goals:
        • Science – Theories: Need for insight and theory-based models, interpretable model structures, generate domain rules or causal structures, support for theory development
        • Commercial – Profits: black boxes OK
      • Types of data:
        • Science – Images, sensors, simulations
        • Commercial - Transaction data
        • Both - Spatial and temporal dimensions, heterogeneous
      • Trend – Common IT (information technology) tools fit both enterprises
        • Database systems (Oracle, DB2, etc), integration tools (Information Integrator), web services (Blue Titan, .NET)
        • This is good!
    • 9. Introduction to Machine Learning
      • Basic machine learning theory
      • Concepts and feature vectors
      • Supervised and unsupervised learning
      • Model development
        • training and testing methodology,
        • model validation,
        • overfitting
        • confusion matrices
      • Survey of algorithms
        • Decision Trees classification
        • k-means clustering
        • Hierarchical clustering
        • Bayesian networks and probabilistic inference
        • Support vector machines
    • 10. Basic Machine Learning Theory
      • Basic inductive learning hypothesis:
        • Given a large number of observations, we can approximate the rule that describes how the data was generated, and thus build a model (using some algorithm)
      • No Free Lunch Theorem :
        • There is no ultimate algorithm: In the absence of prior information about the problem, there are no reasons to prefer one learning algorithm over another.
      • Conclusion :
        • There is no problem-independent “best” learning system. Formal theory and algorithms are not enough.
        • Machine learning is an empirical subject.
    • 11. Concepts are described as feature vectors
      • Example: vehicles
        • Has wheels
        • Runs on gasoline
        • Carries people
        • Flies
        • Weighs less than 500 pounds
      • Boolean feature vectors for vehicles
        • car254 [ 1 1 1 0 0 ]
        • motorcycle14 [ 1 1 1 0 1 ]
        • airplane132 [ 1 1 1 1 0 ]
    • 12.
      • Easy to generalize to complex data types:
        • Number of wheels
        • Fuel type
        • Carrying capacity
        • Flies
        • Weight
        • car254 [ 4, gas, 6, 0, 2000 ]
        • motorcycle14 [ 2, gas, 2, 0, 400 ]
        • airplane132 [ 10, jetfuel, 110, 1, 35000 ]
      • Most machine learning algorithms expect feature vectors, stored in text files or databases
      • Suggestions:
        • Identify the target concept
        • Organize your data to fit feature vector representation
        • Design your database schemas to support generation of data in this format
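A minimal sketch of the feature-vector representation described in slides 11-12 (the vehicle names come from the slides; the column ordering and the integer encoding of fuel type are assumptions for illustration). Most learning algorithms expect exactly this kind of numeric table:

```python
# Minimal sketch: feature vectors for the vehicle example (column order assumed).
# Categorical values such as fuel type usually need to be encoded numerically
# before most machine learning algorithms can use them.
import numpy as np

feature_names = ["num_wheels", "fuel_type", "capacity", "flies", "weight_lbs"]
vehicles = {
    "car254":       [4, "gas", 6, 0, 2000],
    "motorcycle14": [2, "gas", 2, 0, 400],
    "airplane132":  [10, "jetfuel", 110, 1, 35000],
}

# One simple encoding: map each fuel type to an integer code.
fuel_codes = {"gas": 0, "jetfuel": 1}
X = np.array([[v[0], fuel_codes[v[1]], v[2], v[3], v[4]] for v in vehicles.values()],
             dtype=float)
print(feature_names)
print(X)
```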
    • 13. Supervised vs. Unsupervised Learning
      • Supervised – Each feature vector belongs to a class (label). Labels are given externally, and algorithms learn to predict the label of new samples/observations.
      • Unsupervised – Finds structure in the data, by clustering similar elements together. No previous knowledge of classes needed.
    • 14. Model development
      • Model validation
        • Hold-out validation (2/3, 1/3 splits)
        • Cross validation, simple and n-fold (reuse)
        • Bootstrap validation (sample with replacement)
        • Jackknife validation (leave one out)
        • When possible hide a subset of the data until train-test is complete.
      Training and testing: Train → Test → Apply (a short splitting sketch follows below)
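The validation schemes listed above can be sketched with any ML toolkit; here scikit-learn (not one of the tools named later in this tutorial) is used only to illustrate a 2/3 train, 1/3 test hold-out split and n-fold cross validation on toy data:

```python
# Minimal sketch of hold-out and n-fold cross validation using scikit-learn
# (used here only to illustrate the splitting schemes, not the tutorial's tools).
import numpy as np
from sklearn.model_selection import train_test_split, KFold

X = np.random.rand(90, 5)          # 90 toy samples, 5 features
y = np.random.randint(0, 2, 90)    # binary labels

# Hold-out validation: 2/3 train, 1/3 test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
print(len(X_train), len(X_test))

# 5-fold cross validation: every sample is used for testing exactly once.
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")
```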
    • 15. Train Test Overfitting Optimal Depth Avoid overfitting
    • 16. Train Test Overfitting Optimal Depth Avoid overfitting
    • 17. Confusion matrices
      Confusion matrix (rows = actual, columns = predicted):
                            Predicted Negative   Predicted Positive
      Actual Negative              124                   15
      Actual Positive                8                   84
      Accuracy = (124 + 84) / (124 + 15 + 8 + 84): “proportion of predictions correct”
      True positive rate = 84 / (8 + 84): “proportion of positive cases correctly identified”
      False positive rate = 15 / (124 + 15): “proportion of negative cases incorrectly classified as positive”
      True negative rate = 124 / (124 + 15): “proportion of negative cases correctly identified”
      False negative rate = 8 / (8 + 84): “proportion of positive cases incorrectly classified as negative”
      Precision = 84 / (15 + 84): “proportion of predicted positive cases that were correct”
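A minimal sketch that recomputes the slide's metrics directly from the four cell counts:

```python
# Minimal sketch: the confusion-matrix metrics from the slide, computed from
# the four cell counts (TN=124, FP=15, FN=8, TP=84).
tn, fp, fn, tp = 124, 15, 8, 84

accuracy            = (tp + tn) / (tp + tn + fp + fn)   # proportion of predictions correct
true_positive_rate  = tp / (tp + fn)                     # positives correctly identified
false_positive_rate = fp / (tn + fp)                     # negatives incorrectly called positive
true_negative_rate  = tn / (tn + fp)                     # negatives correctly identified
false_negative_rate = fn / (tp + fn)                     # positives incorrectly called negative
precision           = tp / (tp + fp)                     # predicted positives that were correct

for name, value in [("accuracy", accuracy), ("TPR", true_positive_rate),
                    ("FPR", false_positive_rate), ("TNR", true_negative_rate),
                    ("FNR", false_negative_rate), ("precision", precision)]:
    print(f"{name}: {value:.3f}")
```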
    • 18. Classification – Decision Tree
      Annual Precipitation   Ecosystem
      63                     Prairie
      116                    Forest
      5                      Desert
      104                    Forest
      120                    Forest
      2                      Desert
    • 19. Split on “Precipitation > 63?”
      YES: 104 Forest, 116 Forest, 120 Forest
      NO: 63 Prairie, 5 Desert, 2 Desert
    • 20. Split the NO branch again on “Precipitation > 5?”
      Precipitation > 63? YES: 104 Forest, 116 Forest, 120 Forest
      Precipitation > 63? NO, Precipitation > 5? YES: 63 Prairie
      Precipitation > 63? NO, Precipitation > 5? NO: 5 Desert, 2 Desert
    • 21. Learned model:
      If (Precip > 63) then “Forest”
      else If (Precip > 5) then “Prairie”
      else “Desert”
      Classification accuracy on training data is 100%
      Confusion matrix on the training data (rows = actual, columns = predicted; all counts fall on the diagonal):
                 Desert   Forest   Prairie
      Desert        2        0        0
      Forest        0        3        0
      Prairie       0        0        1
    • 22. Testing Set Results
      Learned model: IF (Precip > 63) then Forest; Else If (Precip > 5) then Prairie; Else Desert
      Test data (actual → predicted):
      72 Prairie → Forest
      116 Forest → Forest
      4 Desert → Desert
      55 Prairie → Prairie
      100 Forest → Forest
      8 Desert → Prairie
      Result: Accuracy 67%. Model shows overfitting, generalizes poorly.
      Confusion matrix (rows = actual, columns = predicted):
                 Desert   Forest   Prairie
      Desert        1        0        1
      Forest        0        2        0
      Prairie       0        1        1
    • 23. Pruning to improve generalization
      Pruned decision tree: split only on “Precipitation < 60?”
      IF (Precip < 60) then Desert
      Else, [P(Forest) = .75] & [P(Prairie) = .25]
      Leaves on the training data: {5 Desert, 2 Desert} and {63 Prairie, 104 Forest, 116 Forest, 120 Forest}
    • 24. Decision Trees Summary
      • Simple to understand
      • Works with mixed data types
      • Heuristic search sensitive to local minima
      • Models non-linear functions
      • Handles classification and regression
      • Many successful applications
      • Readily available tools
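A minimal sketch of the decision-tree example from slides 18-23 (scikit-learn's DecisionTreeClassifier stands in here for IBM Intelligent Miner, the tool actually used in the hands-on session):

```python
# Minimal sketch of the precipitation/ecosystem example as a decision tree
# (scikit-learn is an illustration only, not the tutorial's tool).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Training data from the slides: annual precipitation -> ecosystem.
X_train = np.array([[63], [116], [5], [104], [120], [2]])
y_train = np.array(["Prairie", "Forest", "Desert", "Forest", "Forest", "Desert"])

# An unpruned tree reproduces the 100%-training-accuracy model; limiting depth
# is one simple way to "prune" and reduce overfitting.
tree = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=["precipitation"]))

# Test data from the slides.
X_test = np.array([[72], [116], [4], [55], [100], [8]])
y_test = np.array(["Prairie", "Forest", "Desert", "Prairie", "Forest", "Desert"])
print("test accuracy:", tree.score(X_test, y_test))
```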
    • 25. Overview of Clustering
      • Definition:
          • Clustering is the discovery of classes
          • Unlabeled examples => unsupervised learning.
      • Survey of Applications
          • Grouping of web-visit data, clustering of genes according to their expression values, grouping of customers into distinct profiles,
      • Survey of Methods
          • k-means clustering
          • Hierarchical clustering
          • Expectation Maximization (EM) algorithm
          • Gaussian mixture modeling
      • Cluster analysis
        • Concept (class) discovery
        • Data compression/summarization
        • Bootstrapping knowledge
    • 26. Clustering – k-Means
      Precipitation   Temperature
      49              32
      76              17
      45              49
      63              62
      70              71
      81              8
    • 27. Clustering – k-Means
    • 28. Clustering – k-Means
    • 29. Clustering – k-Means
    • 30. Clustering – k-Means
    • 31. Clustering – k-Means
    • 32. Clustering – k-Means
    • 33. Clustering – k-Means
      Cluster   Temperature   Precipitation
      C1        0 - 25        70 - 85
      C2        25 - 55       35 - 60
      C3        50 - 80       50 - 80
    • 34. Clustering – k-Means (same cluster table as slide 33)
    • 35. Clustering – k-Means
      Cluster   Temperature   Precipitation   Ecosystem
      C1        0 - 25        70 - 85         Desert
      C2        25 - 55       35 - 60         Prairie
      C3        50 - 80       50 - 80         Forest
    • 36. Using k-means
      • Requires a priori knowledge of ‘k’
      • The final outcome depends on the initial choice of the k means, so results can be inconsistent
      • Sensitive to outliers, which can skew the means of their clusters
      • Favors spherical clusters – clusters may not match domain boundaries
      • Requires real-valued features
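A minimal sketch of k-means on the six precipitation/temperature points from slide 26, with k = 3 chosen a priori (scikit-learn is used only for illustration, and the row pairing of the two columns is assumed):

```python
# Minimal sketch of k-means on the precipitation/temperature points from the
# slides, with k chosen a priori as 3.
import numpy as np
from sklearn.cluster import KMeans

# Columns: precipitation, temperature (pairing assumed from the slide table).
points = np.array([[49, 32], [76, 17], [45, 49], [63, 62], [70, 71], [81, 8]], dtype=float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
for point, label in zip(points, km.labels_):
    print(point, "-> cluster", label)
print("cluster centers:\n", km.cluster_centers_)
```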
    • 37. Cyberinfrastructure for Data Mining
      • Resources – hardware and software (analysis tools and middleware)
      • Policies – allocating resources to the scientific community. Challenges to the traditional supercomputer model. Requirements for interactive and real-time analysis resources.
    • 38. NSF TeraGrid Building Integrated National CyberInfrastructure
      • Prototype for CyberInfrastructure
        • Ubiquitous computational resources
        • Plug-in compatibility
      • National Reach:
        • SDSC, NCSA, CIT, ANL, PSC
      • High Performance Network:
        • 40 Gb/s backbone, 30 Gb/s to each site
      • Over 20 Teraflops compute power
      • Over 1PB Online Storage
      • 8.9PB Archival Storage
    • 39. SDSC is a Data-Intensive Center
    • 40. SDSC is a Data-Intensive Center
    • 41. SDSC Machine Room Data Architecture
      • Philosophy: enable SDSC configuration to serve the grid as Data Center
      • .5 PB disk
      • 6 PB archive
      • 1 GB/s disk-to-tape
      • Optimized support for DB2 /Oracle
      [Architecture diagram: Blue Horizon (1152-processor IBM SP, 1.7 Teraflops), HPSS (over 600 TB of data stored), Linux Cluster (4 TF), Sun F15K, Power 4 database engine, data miner, and vis engine; LAN (multiple GbE, TCP/IP), SAN (2 Gb/s, SCSI), WAN (30 Gb/s), SCSI/IP or FC/IP; FC disk cache (400 TB), FC GPFS disk (100 TB) at 200 MB/s per controller, local disk (50 TB); silos and tape, 6 PB, 1 GB/s disk to tape, 32 tape drives at 30 MB/s per drive]
    • 42. SDSC IBM Regatta - DataStar
      • 100+ TB Disk
      • Numerous fast CPUs
      • 64 GB of RAM per node
      • DB2 v8.x ESE
      • IBM Intelligent Miner
      • SAS Enterprise Miner
      • Platform for high-performance database, data mining, comparative IT studies …
    • 43. Data Mining Tools used at SDSC
      • SAS Enterprise Miner (Protein crystallization - JCSG)
      • IBM Intelligent Miner (Protein crystallization - JCSG, Corn Yield – Michigan State University, Security logs - SDSC)
      • CART (Protein crystallization - JCSG)
      • Matlab SVM package (TeraBridge health monitoring – UCSD Structural Engineering Department, North Temperate Lakes Monitoring - LTER)
      • PyML (Text Mining – NSDL, Hyperspectral data - LTER)
      • SKIDLkit by SDSC (Microarray analysis – UCSD Cancer Center, Hyperspectral data - LTER)
      • SVMlight (Hyperspectral data, LTER)
      • LSI by Telecordia (Text Mining – NSDL)
      • CoClustering by Fair Isaac (Text Mining – NSDL)
      • Matlab Bayes Net package
      • WEKA
    • 44. SKIDLkit
      • Toolkit for feature selection and classification
        • Filter methods
        • Wrapper methods
        • Data normalization
        • Feature selection
        • Support Vector Machine & Naïve Bayesian Clustering
        • http://daks.sdsc.edu/skidl
      • Will use it in the hands-on demo…
    • 45. Survey of Applications at SDSC
      • Text mining the NSDL (National Science Digital Library) collection
      • Sensor networks for bridge monitoring (with Structural Engineering Dept., UCSD)
      • Spatio-temporal Analysis of 9-1-1 Call Stream Data
      • Hyperspectral remote sensing data for groundcover classification (with Long Term Ecological Research Network - LTER)
      • Microarray analysis for tumor detection (with UCSD Cancer Center)
    • 46. Application: Text Mining the National Science Digital Library (NSDL) Collection
    • 47. Project Goal
      • Assist educators and students in finding relevant information by categorizing the materials by scientific discipline and grade level using contextual information
      General Approach: Based on various metadata in the NSDL community, study the contents of the associated documents and apply machine learning algorithms
    • 48. Source of Vocabulary
      • Eisenhower National Clearinghouse
        • 8417 documents with labels specifying intended grade level
        • Documents are intended for the teachers
        • Selected a subset of about 1350 documents that could be associated with an AAAS category
          • Kindergarten-2nd
          • 3rd-5th
          • 6th - 8th
          • 9th - 12th
    • 49. Processing
      • Identify the words used by teachers at the kindergarten-2nd grade level
      • Identify the new words used in each of the AAAS categories
      • Characterize the growth of the vocabulary
      • Characterize the complexity of the new terms (number of words from prior grade levels used to explain the new word).
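A minimal sketch of the vocabulary-growth part of this processing (the grade bands are from the slides, but the word sets are tiny invented stand-ins, not the ENC/NSDL vocabulary; characterizing term complexity would additionally require the documents themselves):

```python
# Minimal sketch of vocabulary growth across grade bands: count how many words
# at each level were not already seen at earlier levels.
levels = ["K-2", "3-5", "6-8", "9-12"]
vocab_per_level = {
    "K-2":  {"sun", "water", "plant"},
    "3-5":  {"sun", "water", "plant", "energy", "cell"},
    "6-8":  {"sun", "water", "plant", "energy", "cell", "photosynthesis"},
    "9-12": {"sun", "water", "plant", "energy", "cell", "photosynthesis", "chloroplast"},
}

seen = set()
for level in levels:
    words = vocab_per_level[level]
    new_words = words - seen                       # words introduced at this level
    pct_new = 100.0 * len(new_words) / len(words)  # the "% new words" column
    print(f"{level}: total={len(words)} new={len(new_words)} ({pct_new:.0f}% new)")
    seen |= words
```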
    • 50. Characterization of Learning
      AAAS Level          # of documents   Total words   % new words   Complexity
      Kindergarten-2nd    150              2907          -             1
      3rd-5th             220              4155          30%           3
      6th-8th             430              6681          37%           5
      9th-12th            540              10226         35%           10
    • 51. Characterization of Learning
      • About 33% more words are learned at each AAAS category
        • This is exponential growth and must eventually saturate
      • Complexity grows by about a factor of 2 per AAAS category
        • In later grades, it takes more of your old vocabulary to interpret new words
    • 52. Text Mining the NSDL – Processing pipeline
      Variously formatted documents → Strip formatting → Pick out content words using “stop lists” → Stemming → Discard words that appear in every document or in only one → Word count, term weighting → Generate term-document matrix → Various retrieval schemes (LSI, classification, or clustering modules) → Query: for a list of words, get the documents with the highest score
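A minimal sketch of the middle of this pipeline (stop-list filtering, word counts, term weighting, and the term-document matrix) in plain Python; the documents and stop list are invented stand-ins for the NSDL collection:

```python
# Minimal sketch: stop-list filtering, word counts, and a tf-idf style
# term-document matrix over three toy documents.
import math
from collections import Counter

docs = ["plants use light energy to make food",
        "animals eat plants or other animals for energy",
        "light is a form of energy"]
stop_list = {"to", "or", "for", "is", "a", "of", "the", "other", "use"}

# Tokenize and drop stop words (a very crude stand-in for stemming as well).
tokenized = [[w for w in d.split() if w not in stop_list] for d in docs]
vocabulary = sorted({w for doc in tokenized for w in doc})

# Term-document matrix with simple tf-idf weighting; words appearing in every
# document get weight zero, matching the "discard" step above.
n_docs = len(docs)
doc_freq = {w: sum(w in doc for doc in tokenized) for w in vocabulary}
matrix = [[Counter(doc)[w] * math.log(n_docs / doc_freq[w]) for doc in tokenized]
          for w in vocabulary]

for word, row in zip(vocabulary, matrix):
    print(f"{word:10s}", [round(x, 2) for x in row])
```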
    • 53. Application: Sensor Stream Mining
    • 54. Sensor Networks for Bridge Monitoring
      • Task:
        • Identify which pier is damaged based on the data stream fed by the sensors at the span middles.
        • Apply multi-resolution technique
      • Assumption:
        • The lower end of a pier can be damaged (location of plastic hinge)
        • There is only one damaged pier at a time.
      [Diagram: piers with sensors at the span middles]
    • 55.  
    • 56. Application: Spatiotemporal Analysis of 9-1-1 Call Stream Data
    • 57. Project Goal
      • Perform spatiotemporal analysis on 9-1-1 call data to improve:
        • Overall emergency planning
        • Real-time emergency decision support
      General Approach: Correlate call data “signatures” (unusual spatiotemporal trends) with statewide and local events: earthquakes, forest fires, weather events
    • 58. Study Area and Dates: San Francisco Bay Area, April 2005
    • 59. First Analysis: “Call Rhythm”
    • 60. Application: Classification of Land Types Using Hyperspectral Data
    • 61. Study Area: Sevilleta National Wildlife Refuge, New Mexico
    • 62. Previously Available Image/Map Types: relief shaded map, Landsat image
    • 63. New image type: “hyperspectral images” – AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) scans from NASA’s JPL (Jet Propulsion Lab)
      • Scanned from an altitude of 20 km, 10 km flightline
      • 201 bands of electromagnetic information per pixel, spanning infrared to ultraviolet
    • 64. Hyperspectral Scans for Study Area: complete AVIRIS scan of the Sevilleta Wildlife Refuge, 20 m per pixel
    • 65.  
    • 66. Data set
    • 67. Results: Support Vector Machine, one-against-one, wavelet transformation: 97.1% accuracy on test data
    • 68. Application: Microarray Analysis for Tumor Detection
    • 69. Microarray Analysis for Tumor Detection
      • Characteristics of the Data:
        • 88 prostate tissue samples:
          • 37 labeled “no tumor”,
          • 51 labeled “tumor”
        • Each tissue with 10,600 gene expression measurements
        • Collected by the UCSD Cancer Center, analyzed at SDSC
      • Tasks:
        • Build model to classify new, unseen tissues as either “no tumor” or “tumor”
        • Identify key genes to determine their biological significance in the process of cancer
    • 70. Simple classifier based on expression levels for two genes [Scatter plot: Tumor vs. No Tumor samples]
    • 71. Results
    • 72. Break
    • 73. Hands-on Analysis
      • Part I:
        • Decision Tree classification using IBM Intelligent Miner
        • Using classification models to make rational decisions
        • Peter Shin
      • Part II:
        • Feature selection, Naïve Bayes Classifiers and Support Vector Machines using SKIDLkit
        • Classification of microarray data
        • Hector Jasso
    • 74. Data Mining Example: Targeting Customers
      • Problem Characteristics:
          • 1. We make $50 profit on the sale of a $200 pair of shoes.
          • 2. A preliminary study shows that people who make over $50k will buy the shoes at a rate of 5% when they receive the brochure.
          • 3. People who make less than $50k will buy the shoes at a rate of 1% when they receive the brochure.
          • 4. It costs $1 to send a brochure to a potential customer.
          • 5. In general, we do not know whether a person will make more than $50k or not.
    • 75. Available Information
      • Variable Description
          • Please refer to the hand-out.
    • 76. Possible Marketing Plans
      • We will send out 30,000 brochures.
      • Plan A: Ignore data and randomly send brochures
      • Plan B: Use data mining to target a specific group with high probabilities of responding
    • 77. Plan A
      • Strategy:
          • Send brochures to anyone
      • Cost of sending one brochure = $1
      • Probability of Response
          • 1% of the population who make <= $50k ( 76% )
          • 5% of the population who make > $50k ( 24% )
          • Resulting in:
          • ( 1% * 76% + 5% * 24% ) = 1.96% final response rate
      • Earnings
          • Expected profit from one brochure = (Probability of response * profit – Cost of a brochure)
          • (1.96% * $50 - $1) = -$0.02
          • Expected Earning = Expected profit from one brochure * number of brochures sent
          • -$0.02 * 30000 = -$600
    • 78. Plan B
      • Strategy:
          • Send brochures only to: married, college or above, managerial/professional/sales/tech. support/protective service/armed forces, age >= 28.5, hours_per_week >= 31
      • Cost of sending one brochure = $1
      • Probability of Response
          • 1% of the population who make <= $50k ( 20.6% )
          • 5% of the population who make > $50k ( 79.4% )
          • Resulting in:
          • ( 1% * 20.6% + 5% * 79.4% ) = 4.176% final response rate
      • Earnings
          • Expected profit from one brochure = (Probability of response * profit – Cost of a brochure)
          • (4.176% * $50 - $1) = $1.088
          • Expected Earning = Expected profit from one brochure * number of brochures sent
          • $1.088 * 30000 = $32,640
    • 79. Comparison of Two Plans
      • Expected earning from plan A
        • -$600
      • Expected earning from plan B
        • $32,640
      • Net Difference
        • $32,640 – (-$600) = $33,240
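The expected-earnings arithmetic for the two plans can be checked with a few lines of code; a minimal sketch using the response rates and population shares from the slides (the function name and structure are illustrative only):

```python
# Minimal sketch of the expected-earnings arithmetic for Plans A and B.
def expected_earnings(share_low, share_high, n_brochures=30_000,
                      profit=50.0, cost=1.0, rate_low=0.01, rate_high=0.05):
    """Expected earnings when share_low / share_high of recipients earn <=$50k / >$50k."""
    response_rate = rate_low * share_low + rate_high * share_high
    profit_per_brochure = response_rate * profit - cost
    return response_rate, profit_per_brochure * n_brochures

# Plan A: send to anyone (76% of the population earns <= $50k).
print("Plan A:", expected_earnings(0.76, 0.24))    # ~1.96% response, about -$600

# Plan B: target the group selected by data mining (20.6% / 79.4%).
print("Plan B:", expected_earnings(0.206, 0.794))  # ~4.18% response, about +$32,640
```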
    • 80. Acknowledgements
      • Original source Census Bureau (1994)
      • Data processed and donated by Ron Kohavi and Barry Becker (Data Mining and Visualization, SGI)
    • 81. Data Mining Example: Microarray Analysis
      “Labeled” cases: 38 bone marrow samples (27 AML, 11 ALL), each with 7129 gene expression values
      → Train model (using Neural Networks, Support Vector Machines, Bayesian nets, etc.) → Model
      → Apply model to 34 new unlabeled bone marrow samples → AML/ALL predictions, key genes
    • 82. Microarray Data: Challenges to Machine Learning Algorithms
      • Few samples for analysis (38 labeled)
      • Extremely high-dimensional data (7129 gene expression values per sample)
      • Noisy data
      • Complex underlying mechanisms, not fully understood
    • 83. Some genes are more useful than others for building classification models Example: genes 36569_at and 36495_at are useful
    • 84. Some genes are more useful than others for building classification models. Example: genes 36569_at and 36495_at are useful [Plot of AML vs. ALL samples]
    • 85. Some genes are more useful than others for building classification models Example: genes 37176_at and 36563_at not useful
    • 86. Importance of Feature (Gene) Selection
      • Majority of genes are not directly related to leukemia
      • Having a large number of features enhances the model’s flexibility, but makes it prone to overfitting
      • Noise and the small number of training samples makes this even more likely
      • Some types of models, like Neural Networks, do not scale well with many features
    • 87. With 7129 genes, how do we choose the best?
      • Use distance metrics to capture class separation (HIGH score vs. LOW score)
      • Rank genes according to distance metric score
      • Choose the top n ranked genes
    • 88. Distance Metrics
      • Tamayo’s Relative Class Separation
      • t-test
      • Bhattacharyya distance
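A minimal sketch of filter-style gene ranking with one of these metrics, the two-sample t-test (SciPy stands in here for SKIDLkit, and random numbers stand in for the real expression matrix):

```python
# Minimal sketch of the filter approach: score each gene with a two-sample
# t-statistic and keep the top-n ranked genes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 200
aml = rng.normal(size=(27, n_genes))   # 27 AML samples (toy data)
all_ = rng.normal(size=(11, n_genes))  # 11 ALL samples (toy data)

# |t| per gene measures how well that gene separates the two classes.
t_scores = np.abs(stats.ttest_ind(aml, all_, axis=0, equal_var=False).statistic)

top_n = 10
top_genes = np.argsort(t_scores)[::-1][:top_n]
print("top-ranked gene indices:", top_genes)
```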
    • 89. A gene with an undetected outlier could score artificially high (score jumps from 0.00651 to 0.042566)
    • 90. How Support Vector Machines (SVMs) work
    • 91. How Support Vector Machines (SVMs) work
    • 92. How Support Vector Machines (SVMs) work
    • 93. How Support Vector Machines (SVMs) work
    • 94. How Support Vector Machines (SVMs) work
    • 95. How Support Vector Machines (SVMs) work
    • 96. How Support Vector Machines (SVMs) work margin Support vectors
    • 97. How Support Vector Machines (SVMs) work margin Support vectors
    • 98. How Support Vector Machines (SVMs) work margin Support vectors
    • 99. Characteristics of SVMs
      • Scales well to high-dimensional problems
      • Fast convergence to solution
      • Has well-defined statistical properties
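A minimal sketch of a linear SVM finding a maximum-margin separator on toy 2-D data (scikit-learn stands in for SVM tools mentioned in the deck such as SVMlight; the data and parameters are invented for illustration):

```python
# Minimal sketch: a linear SVM fit to two separable toy classes; the fitted
# model exposes the support vectors that define the margin.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two toy classes in 2-D, shifted apart so a margin exists.
X = np.vstack([rng.normal(loc=-2.0, size=(20, 2)), rng.normal(loc=2.0, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

svm = SVC(kernel="linear", C=1.0).fit(X, y)
print("number of support vectors per class:", svm.n_support_)
print("training accuracy:", svm.score(X, y))
print("prediction for a new point:", svm.predict([[0.5, 1.5]]))
```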
    • 100. Naïve Bayesian Classifiers [Diagram: output variable X (Class) linked to input variables w1, w2, w3, …, wn]
    • 101. Characteristics of Naïve Bayesian Classifiers
      • Scales well to high-dimensional problems
      • Fast to compute
      • Based on Bayesian probability theory
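A minimal sketch of a naïve Bayesian classifier in the spirit of the diagram above: each input variable is treated as conditionally independent given the class (the Gaussian likelihoods, toy data, and class names are assumptions; SKIDLkit, not scikit-learn, is the tutorial's actual tool):

```python
# Minimal sketch: Gaussian naive Bayes on two toy classes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(loc=0.0, size=(30, 4)), rng.normal(loc=3.0, size=(30, 4))])
y_train = np.array(["no_tumor"] * 30 + ["tumor"] * 30)

nb = GaussianNB().fit(X_train, y_train)
print(nb.predict([[2.8, 3.1, 2.9, 3.3]]))        # predicted class
print(nb.predict_proba([[2.8, 3.1, 2.9, 3.3]]))  # class probabilities
```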
    • 102. Approaches to Feature Selection
      Filter approach: input features → feature selection by distance metric score → train model → model
      Wrapper approach: input features → feature selection search over feature sets (importance of features given by the model) → train model → model
    • 103. Software Available: SKIDLkit
      • Developed at SDSC: http://daks.sdsc.edu/skidl
      • Implements:
        • Filter and wrapper approaches
        • Naïve Bayesian Net and SVM
        • t-test, Prediction Strength, Bhattacharyya distance
        • Outlier detection
    • 104. Leukemia Dataset
      • Collected by the Whitehead Institute Center for Genome Research
      • Made available at:
        • http://www-genome.wi.mit.edu/cgi-bin/cancer/datasets.cgi
        • Under “Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression”
        • Also available as a sample dataset in SKIDLkit
