Part XIV
    Presentation Transcript

    • Computing Concepts for Bioinformatics http://amadeus.biosci.arizona.edu/~nirav
      • Introduction to Machine Learning, Data Mining and Knowledge Discovery
      • Introduction to WEKA
      • Final Project
      • MySQL exercise
    • Systems Biology: confluence of omics. Systems Biology sits at the intersection of Genomics, Functional Genomics, Metabolomics, Proteomics, Pharmacogenomics, Modelling, and Clinical Pathways
    • The players: Statistics, Machine Learning, Databases, Data Visualization, Data Mining and Knowledge Discovery
    • Useful Websites:
      • Obtaining WEKA
      • http://www.cs.waikato.ac.nz/ml/weka/
      • Data Mining
      • http://www.kdnuggets.com/dmcourse/index.html
    • Statistics, Machine Learning and Data Mining
      • Statistics:
        • more theory-based
        • more focused on testing hypotheses
      • Machine learning
        • more heuristic
        • focused on improving performance of a learning agent
        • also looks at real-time learning and robotics – areas not part of data mining
      • Data Mining and Knowledge Discovery
        • integrates theory and heuristics
        • focus on the entire process of knowledge discovery, including data cleaning, learning, and integration and visualization of results
      • Distinctions are fuzzy
      witten&eibe
    • Problems Suitable for Data-Mining
      • require knowledge-based decisions
      • have a changing environment
      • have sub-optimal current methods
      • have accessible, sufficient, and relevant data
      • provide high payoff for the right decisions!
    • Knowledge Discovery Definition
      • Knowledge Discovery in Data is the
      • non-trivial process of identifying
        • valid
        • novel
        • potentially useful
        • and ultimately understandable patterns in data.
      • from Advances in Knowledge Discovery and Data Mining, Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy, (Chapter 1), AAAI/MIT Press 1996
    • Many Names of Data Mining
      • Data Fishing, Data Dredging (1960-)
        • used by statisticians
      • Data Mining (1990-)
        • used by the database and business communities
      • Knowledge Discovery in Databases (1989-)
        • used by the AI and machine learning communities
      • AKA Data Archaeology, Information Harvesting, Information Discovery, Knowledge Extraction, ...
      Currently: Data Mining and Knowledge Discovery are used interchangeably Piatetsky-Shapiro
    • Major Data Mining Tasks
      • Classification: predicting an item class
      • Clustering: finding clusters in data
      • Associations: e.g. A & B & C occur frequently
      • Visualization: to facilitate human discovery
      • Summarization: describing a group
      • Deviation Detection : finding changes
      • Estimation: predicting a continuous value
      • Link Analysis: finding relationships
      Piatetsky-Shapiro
    • Finding patterns
      • Goal: programs that detect patterns and regularities in the data
      • Strong patterns → good predictions
        • Problem 1: most patterns are not interesting
        • Problem 2: patterns may be inexact (or spurious)
        • Problem 3: data may be garbled or missing
    • Machine learning techniques
      • Algorithms for acquiring structural descriptions from examples
      • Structural descriptions represent patterns explicitly
        • Can be used to predict outcome in new situation
        • Can be used to understand and explain how prediction is derived ( may be even more important )
      • Methods originate from artificial intelligence, statistics, and research on databases
      witten&eibe
    • Classification
      • Learn a method for predicting the instance class from pre-labeled (classified) instances
      • Many approaches: Regression, Decision Trees, Bayesian, Neural Networks, ...
      • Given a set of points from known classes, what is the class of a new point?
    • Classification: Linear Regression
      • Linear Regression
        • w0 + w1x + w2y >= 0
      • Regression computes the weights wi from the data, minimizing squared error to ‘fit’ the data
      • Not flexible enough
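      The idea can be sketched in a few lines of Python. This is an illustration, not WEKA's implementation: encode the two classes as +1/-1 targets, minimize squared error by gradient descent, and classify by the sign of the fitted function (the data points and learning rate below are invented for the example).

      ```python
      # Minimal sketch: fit w0 + w1*x + w2*y to +/-1 class labels by gradient
      # descent on squared error, then classify a point by the sign.
      def fit_linear(points, labels, lr=0.01, epochs=2000):
          w = [0.0, 0.0, 0.0]                    # w0 (bias), w1, w2
          for _ in range(epochs):
              for (x, y), t in zip(points, labels):
                  err = (w[0] + w[1] * x + w[2] * y) - t   # squared-error gradient
                  w[0] -= lr * err
                  w[1] -= lr * err * x
                  w[2] -= lr * err * y
          return w

      def classify(w, x, y):
          return 1 if w[0] + w[1] * x + w[2] * y >= 0 else -1

      # Two linearly separable blobs
      pts = [(0, 0), (1, 0), (0, 1), (4, 4), (5, 4), (4, 5)]
      lbl = [-1, -1, -1, 1, 1, 1]
      w = fit_linear(pts, lbl)
      print([classify(w, x, y) for x, y in pts])
      ```

      On separable data like this the fitted line recovers the labels; the "not flexible enough" point is that a single line cannot separate more intricate class regions.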
    • Classification: Decision Trees
      • Axis-parallel splits on X and Y (thresholds at X = 2, X = 5, Y = 3):
      • if X > 5 then blue
        else if Y > 3 then blue
        else if X > 2 then green
        else blue
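      The decision tree reads directly as nested conditionals. The same rule chain in Python, with the thresholds and colors taken from the slide:

      ```python
      # The slide's decision tree written as nested conditionals.
      def classify(x, y):
          if x > 5:
              return "blue"
          if y > 3:
              return "blue"
          if x > 2:
              return "green"
          return "blue"

      print(classify(6, 0))   # right of X = 5   -> blue
      print(classify(3, 4))   # above Y = 3      -> blue
      print(classify(3, 1))   # 2 < X <= 5, low  -> green
      print(classify(1, 1))   # left of X = 2    -> blue
      ```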
    • Classification: Neural Nets
      • Can select more complex regions
      • Can be more accurate
      • Also can overfit the data – find patterns in random noise
    • The weather problem
      • Given past data, can you come up with the rules for Play/Not Play? What is the game?

      Outlook   Temperature  Humidity  Windy  Play
      sunny     85           85        false  no
      sunny     80           90        true   no
      overcast  83           86        false  yes
      rainy     70           96        false  yes
      rainy     68           80        false  yes
      rainy     65           70        true   no
      overcast  64           65        true   yes
      sunny     72           95        false  no
      sunny     69           70        false  yes
      rainy     75           80        false  yes
      sunny     75           70        true   yes
      overcast  72           90        true   yes
      overcast  81           75        false  yes
      rainy     71           91        true   no
    • The weather problem
      • Conditions for playing golf:

      Outlook   Temperature  Humidity  Windy  Play
      Sunny     Hot          High      False  No
      Sunny     Hot          High      True   No
      Overcast  Hot          High      False  Yes
      Rainy     Mild         Normal    False  Yes
      …         …            …         …      …

      If outlook = sunny and humidity = high then play = no
      If outlook = rainy and windy = true then play = no
      If outlook = overcast then play = yes
      If humidity = normal then play = yes
      If none of the above then play = yes
      witten&eibe
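      A rule set like this is applied top-down: the first matching rule fires, and the last rule is the default. A small Python sketch (the dictionary layout and lowercased attribute names are assumptions made for the example):

      ```python
      # The slide's rule set, applied top-down: the first matching rule fires.
      RULES = [
          (lambda d: d["outlook"] == "sunny" and d["humidity"] == "high", "no"),
          (lambda d: d["outlook"] == "rainy" and d["windy"], "no"),
          (lambda d: d["outlook"] == "overcast", "yes"),
          (lambda d: d["humidity"] == "normal", "yes"),
      ]

      def play(day):
          for cond, verdict in RULES:
              if cond(day):
                  return verdict
          return "yes"   # default: "if none of the above then play = yes"

      print(play({"outlook": "sunny", "humidity": "high", "windy": False}))    # no
      print(play({"outlook": "overcast", "humidity": "high", "windy": True}))  # yes
      ```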
    • Weather data with mixed attributes
      • Some attributes have numeric values

      Outlook   Temperature  Humidity  Windy  Play
      Sunny     85           85        False  No
      Sunny     80           90        True   No
      Overcast  83           86        False  Yes
      Rainy     75           80        False  Yes
      …         …            …         …      …

      If outlook = sunny and humidity > 83 then play = no
      If outlook = rainy and windy = true then play = no
      If outlook = overcast then play = yes
      If humidity < 85 then play = yes
      If none of the above then play = yes
      witten&eibe
    • The contact lenses data

      Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
      Young           Myope                   No           Reduced               None
      Young           Myope                   No           Normal                Soft
      Young           Myope                   Yes          Reduced               None
      Young           Myope                   Yes          Normal                Hard
      Young           Hypermetrope            No           Reduced               None
      Young           Hypermetrope            No           Normal                Soft
      Young           Hypermetrope            Yes          Reduced               None
      Young           Hypermetrope            Yes          Normal                Hard
      Pre-presbyopic  Myope                   No           Reduced               None
      Pre-presbyopic  Myope                   No           Normal                Soft
      Pre-presbyopic  Myope                   Yes          Reduced               None
      Pre-presbyopic  Myope                   Yes          Normal                Hard
      Pre-presbyopic  Hypermetrope            No           Reduced               None
      Pre-presbyopic  Hypermetrope            No           Normal                Soft
      Pre-presbyopic  Hypermetrope            Yes          Reduced               None
      Pre-presbyopic  Hypermetrope            Yes          Normal                None
      Presbyopic      Myope                   No           Reduced               None
      Presbyopic      Myope                   No           Normal                None
      Presbyopic      Myope                   Yes          Reduced               None
      Presbyopic      Myope                   Yes          Normal                Hard
      Presbyopic      Hypermetrope            No           Reduced               None
      Presbyopic      Hypermetrope            No           Normal                Soft
      Presbyopic      Hypermetrope            Yes          Reduced               None
      Presbyopic      Hypermetrope            Yes          Normal                None
      witten&eibe
    • A complete and correct rule set
      If tear production rate = reduced then recommendation = none
      If age = young and astigmatic = no and tear production rate = normal then recommendation = soft
      If age = pre-presbyopic and astigmatic = no and tear production rate = normal then recommendation = soft
      If age = presbyopic and spectacle prescription = myope and astigmatic = no then recommendation = none
      If spectacle prescription = hypermetrope and astigmatic = no and tear production rate = normal then recommendation = soft
      If spectacle prescription = myope and astigmatic = yes and tear production rate = normal then recommendation = hard
      If age = young and astigmatic = yes and tear production rate = normal then recommendation = hard
      If age = pre-presbyopic and spectacle prescription = hypermetrope and astigmatic = yes then recommendation = none
      If age = presbyopic and spectacle prescription = hypermetrope and astigmatic = yes then recommendation = none
      witten&eibe
    • A decision tree for this problem witten&eibe
    • Classifying iris flowers

           Sepal length  Sepal width  Petal length  Petal width  Type
      1    5.1           3.5          1.4           0.2          Iris setosa
      2    4.9           3.0          1.4           0.2          Iris setosa
      …
      51   7.0           3.2          4.7           1.4          Iris versicolor
      52   6.4           3.2          4.5           1.5          Iris versicolor
      …
      101  6.3           3.3          6.0           2.5          Iris virginica
      102  5.8           2.7          5.1           1.9          Iris virginica
      …

      If petal length < 2.45 then Iris setosa
      If sepal width < 2.10 then Iris versicolor
      ...
      witten&eibe
    • Predicting CPU performance
      • Example: 209 different computer configurations
      • Linear regression function

           MYCT             MMIN      MMAX        CACH        CHMIN  CHMAX  PRP
           Cycle time (ns)  Main memory (Kb)      Cache (Kb)  Channels      Performance
      1    125              256       6000        256         16     128    198
      2    29               8000      32000       32          8      32     269
      …
      208  480              512       8000        32          0      0      67
      209  480              1000      4000        0           0      0      45

      PRP = -55.9 + 0.0489 MYCT + 0.0153 MMIN + 0.0056 MMAX
                  + 0.6410 CACH - 0.2700 CHMIN + 1.480 CHMAX
      witten&eibe
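      The regression function can be evaluated directly. As a quick sketch, plugging the first machine from the table into the slide's coefficients:

      ```python
      # Evaluate the slide's linear regression function for CPU performance.
      def predict_prp(myct, mmin, mmax, cach, chmin, chmax):
          # Coefficients as given on the slide
          return (-55.9 + 0.0489 * myct + 0.0153 * mmin + 0.0056 * mmax
                  + 0.6410 * cach - 0.2700 * chmin + 1.480 * chmax)

      # First machine in the table: MYCT=125, MMIN=256, MMAX=6000,
      # CACH=256, CHMIN=16, CHMAX=128 (actual PRP was 198)
      print(round(predict_prp(125, 256, 6000, 256, 16, 128), 1))
      ```

      As with any least-squares fit, the prediction for an individual row need not match its actual PRP; the coefficients minimize squared error over all 209 configurations.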
    • Soybean classification

      Group        Attribute                Number of values  Sample value
      Environment  Time of occurrence       7                 July
                   Precipitation            3                 Above normal
                   …
      Seed         Condition                2                 Normal
                   Mold growth              2                 Absent
                   …
      Fruit        Condition of fruit pods  4                 Normal
                   Fruit spots              5                 ?
      Leaves       Condition                2                 Abnormal
                   Leaf spot size           3                 ?
                   …
      Stem         Condition                2                 Abnormal
                   Stem lodging             2                 Yes
                   …
      Roots        Condition                3                 Normal
      Diagnosis                             19                Diaporthe stem canker
      witten&eibe
    • The role of domain knowledge
      If leaf condition is normal and stem condition is abnormal and stem cankers is below soil line and canker lesion color is brown then diagnosis is rhizoctonia root rot
      If leaf malformation is absent and stem condition is abnormal and stem cankers is below soil line and canker lesion color is brown then diagnosis is rhizoctonia root rot
      • But in this domain, “leaf condition is normal” implies “leaf malformation is absent”!
      witten&eibe
    • Learning as search
      • Inductive learning: find a concept description that fits the data
      • Example: rule sets as description language
        • Enormous, but finite, search space
      • Simple solution:
        • enumerate the concept space
        • eliminate descriptions that do not fit examples
        • surviving descriptions contain target concept
      witten&eibe
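      The enumerate-and-eliminate idea above can be sketched on a toy two-attribute domain (the attribute values are invented for illustration; "*" is a don't-care that matches anything):

      ```python
      from itertools import product

      # Sketch of "enumerate the concept space, eliminate descriptions that
      # do not fit the examples" for a tiny toy domain.
      ATTRS = [["sunny", "rainy", "*"], ["windy", "calm", "*"]]

      def covers(desc, example):
          return all(d == "*" or d == e for d, e in zip(desc, example))

      def consistent(desc, positives, negatives):
          return (all(covers(desc, p) for p in positives)
                  and not any(covers(desc, n) for n in negatives))

      positives = [("sunny", "calm")]
      negatives = [("rainy", "windy"), ("rainy", "calm")]

      space = list(product(*ATTRS))            # 3 x 3 = 9 descriptions
      survivors = [d for d in space if consistent(d, positives, negatives)]
      print(survivors)
      ```

      The surviving descriptions are exactly those that cover every positive example and no negative one; the target concept, if expressible in this language, is among them.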
    • Enumerating the concept space
      • Search space for weather problem
        • 4 x 4 x 3 x 3 x 2 = 288 possible combinations
        • With 14 rules → 2.7×10^34 possible rule sets
      • Solution: candidate-elimination algorithm
      • Other practical problems:
        • More than one description may survive
        • No description may survive
          • Language is unable to describe target concept
          • or data contains noise
      witten&eibe
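      The counts on this slide are easy to check: 288 is the number of possible rule conditions for the weather problem (each attribute's values plus a don't-care option, times the two classes), and a sequence of 14 such rules gives 288^14 combinations:

      ```python
      # Checking the slide's arithmetic for the weather-problem search space.
      combos = 4 * 4 * 3 * 3 * 2     # attribute options (incl. don't-care) x 2 classes
      print(combos)                  # 288 possible rules
      rule_sets = combos ** 14       # 14 rules per set
      print(f"{rule_sets:.1e}")      # about 2.7e+34
      ```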
    • The version space
      • Space of consistent concept descriptions
      • Completely determined by two sets
        • L : most specific descriptions that cover all positive examples and no negative ones
        • G : most general descriptions that cover all the positive examples and none of the negative ones
      • Only L and G need be maintained and updated
      • But: still computationally very expensive
      • And: does not solve other practical problems
      witten&eibe
    • Machine Learning with WEKA
    • WEKA: the bird Copyright: Martin Kramer (mkramer@wxs.nl)
    • WEKA: the software
      • Machine learning/data mining software written in Java (distributed under the GNU General Public License)
      • Used for research, education, and applications
      • Complements “Data Mining” by Witten & Frank
      • Main features:
        • Comprehensive set of data pre-processing tools, learning algorithms and evaluation methods
        • Graphical user interfaces (incl. data visualization)
        • Environment for comparing learning algorithms
    • WEKA: versions
      • There are several versions of WEKA:
        • WEKA 3.0: “book version” compatible with description in data mining book
        • WEKA 3.2: “GUI version” adds graphical user interfaces (old book version is command-line only)
        • WEKA 3.4: “Latest Stable” with lots of improvements
      • The next slides are based on the latest snapshot of WEKA 3.3
      • @relation heart-disease-simplified
      • @attribute age numeric
      • @attribute sex { female, male}
      • @attribute chest_pain_type { typ_angina, asympt, non_anginal, atyp_angina}
      • @attribute cholesterol numeric
      • @attribute exercise_induced_angina { no, yes}
      • @attribute class { present, not_present}
      • @data
      • 63,male,typ_angina,233,no,not_present
      • 67,male,asympt,286,yes,present
      • 67,male,asympt,229,yes,present
      • 38,female,non_anginal,?,no,not_present
      • ...
      WEKA only deals with “flat” files: this is a flat file in ARFF format. age and cholesterol are numeric attributes; sex, chest_pain_type, exercise_induced_angina, and class are nominal attributes
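      For illustration, a minimal reader for the simple ARFF subset shown above. Real code should use WEKA itself or an ARFF library such as liac-arff; this sketch ignores quoting, sparse data, and other ARFF features:

      ```python
      # Minimal ARFF reader sketch: collect attribute names and data rows.
      def parse_arff(text):
          attributes, data, in_data = [], [], False
          for line in text.splitlines():
              line = line.strip()
              if not line or line.startswith("%"):   # skip blanks and comments
                  continue
              low = line.lower()
              if low.startswith("@attribute"):
                  attributes.append(line.split(None, 2)[1])
              elif low.startswith("@data"):
                  in_data = True
              elif in_data:
                  data.append(line.split(","))
          return attributes, data

      sample = """@relation heart-disease-simplified
      @attribute age numeric
      @attribute sex { female, male}
      @data
      63,male
      67,male
      """
      attrs, rows = parse_arff(sample)
      print(attrs)
      print(rows)
      ```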
    • Explorer: pre-processing the data
      • Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
      • Data can also be read from a URL or from an SQL database (using JDBC)
      • Pre-processing tools in WEKA are called “filters”
      • WEKA contains filters for:
        • Discretization, normalization, resampling, attribute selection, transforming and combining attributes, …
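      As an example of what such a filter does, min-max normalization rescales each numeric attribute to [0, 1], column by column (a sketch of the idea, not WEKA's filter code):

      ```python
      # Min-max normalization: rescale every numeric column to [0, 1].
      def normalize(rows):
          cols = list(zip(*rows))
          scaled = []
          for col in cols:
              lo, hi = min(col), max(col)
              span = hi - lo or 1.0          # avoid divide-by-zero on constant columns
              scaled.append([(v - lo) / span for v in col])
          return [list(r) for r in zip(*scaled)]

      out = normalize([[125, 256], [29, 8000], [480, 512]])
      print(out)
      ```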
    • Explorer: building “classifiers”
      • Classifiers in WEKA are models for predicting nominal or numeric quantities
      • Implemented learning schemes include:
        • Decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes’ nets, …
      • “ Meta”-classifiers include:
        • Bagging, boosting, stacking, error-correcting output codes, locally weighted learning, …
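      The bagging idea can be sketched independently of WEKA: train a weak base learner (here a one-split "decision stump", written inline) on bootstrap resamples of the data and combine the models by majority vote. The data set, sample count, and model count below are invented for the example:

      ```python
      import random

      # Toy bagging sketch: stumps trained on bootstrap samples, majority vote.
      def train_stump(data):
          # pick the (feature, threshold, sign) split with fewest training errors
          best = None
          for f in range(len(data[0][0])):
              for x, _ in data:
                  t = x[f]
                  for sign in (1, -1):
                      errs = sum(1 for xi, yi in data
                                 if (sign if xi[f] > t else -sign) != yi)
                      if best is None or errs < best[0]:
                          best = (errs, f, t, sign)
          _, f, t, sign = best
          return lambda x, f=f, t=t, sign=sign: sign if x[f] > t else -sign

      def bagged(data, n_models=11, seed=0):
          rng = random.Random(seed)
          models = [train_stump([rng.choice(data) for _ in data])
                    for _ in range(n_models)]
          return lambda x: 1 if sum(m(x) for m in models) > 0 else -1

      data = [([0.0], -1), ([1.0], -1), ([2.0], -1),
              ([5.0], 1), ([6.0], 1), ([7.0], 1)]
      model = bagged(data)
      print([model(x) for x, _ in data])
      ```

      Voting over resampled models reduces the variance of the unstable base learner, which is the point of bagging.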
    • Final Project
      • Involves:
        • Data Aggregation
        • Data Visualization
        • Typical laboratory environment
      • Will use:
        • Perl
        • MySQL
        • GFF
        • Public websites (Ensembl, GBrowse, etc.)
    • Final Project
      • Two groups in different labs working on the same region of the genome
      • Team members gather specific information and perform specific tasks
      • A method to visualize all the information in a genome browser
    • Final Project: Due dates
      • Description available as a PDF on my site
      • I will put all the data and hints up by Dec 5th, midnight
      • Due on Dec 15th, 4:00 PM
    • MySQL exercise
      • Using haplo.csv from hw-3 (class13)
      • Create a MySQL table haplo_scores and load the data (from haplo.csv) into the table
      • Write an SQL statement to show samples where the methods disagree (feel free to use a web-based tool)
      • Create a program, mysql_sieve.pl, to save the output into a file called disagree.txt
      • Now modify the above script to show samples that both methods agree on, and save the results into a file called agree.txt
      • How many rows in each file?
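      The exercise targets MySQL (driven from Perl), but the SQL has the same shape in any engine. A runnable sketch using Python's built-in sqlite3 for illustration; the column names sample/method1/method2 and the toy rows are hypothetical, since haplo.csv's actual schema isn't shown here:

      ```python
      import sqlite3

      # In-memory stand-in for the MySQL table; the SQL shape is the same.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE haplo_scores (sample TEXT, method1 TEXT, method2 TEXT)")
      toy_rows = [("s1", "A", "A"), ("s2", "A", "B"), ("s3", "B", "B"), ("s4", "B", "A")]
      conn.executemany("INSERT INTO haplo_scores VALUES (?, ?, ?)", toy_rows)

      # Samples where the two methods disagree / agree
      disagree = conn.execute(
          "SELECT sample FROM haplo_scores WHERE method1 <> method2 ORDER BY sample").fetchall()
      agree = conn.execute(
          "SELECT sample FROM haplo_scores WHERE method1 = method2 ORDER BY sample").fetchall()
      print([s for (s,) in disagree])
      print([s for (s,) in agree])
      ```

      The same two WHERE clauses, run against the real MySQL table, produce the contents of disagree.txt and agree.txt.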
    • Gratitude
      • Susan Miller
      • Gavin Nelson
      • Biochemistry for providing access to this lab
      • IGERT program