1. BCB 444/544, Lecture 32: Machine Learning (#32_Nov07)
2. Required Reading (before lecture)
- Fri Nov 2 - Lecture 30: Phylogenetics, Distance-Based Methods (Chp 11, pp 142-169)
- Mon Nov 5 - Lecture 31: Phylogenetics, Parsimony and ML (Chp 11, pp 142-169)
- Wed Nov 7 - Lecture 32: Machine Learning
- Fri Nov 9 - Lecture 33: Functional and Comparative Genomics (Chp 17 and Chp 18)
3. BCB 544 Only: New Homework Assignment (544 Extra#2)
- Due: √ Part 1 - ASAP; Part 2 - meeting prior to 5 PM Fri Nov 2
- Part 1 - Brief outline of project; email it to Drena & Michael. After response/approval, then:
- Part 2 - More detailed outline of the project. Read a few papers and summarize the status of the problem; schedule a meeting with Drena & Michael to discuss ideas
4. Seminars this Week
- BCB list of URLs for seminars related to bioinformatics: http://www.bcb.iastate.edu/seminars/index.html
- Nov 7 Wed - BBMB Seminar, 4:10 in 1414 MBB: Sharon Roth Dent (MD Anderson Cancer Center), role of chromatin and chromatin-modifying proteins in regulating gene expression
- Nov 8 Thurs - BBMB Seminar, 4:10 in 1414 MBB: Jianzhi George Zhang (U. Michigan), evolution of new functions for proteins
- Nov 9 Fri - BCB Faculty Seminar, 2:10 in 102 SciI: Amy Andreotti (ISU), something about NMR
5. Chp 11: Phylogenetic Tree Construction Methods and Programs
- SECTION IV: MOLECULAR PHYLOGENETICS
- Xiong, Chp 11: Phylogenetic Tree Construction Methods and Programs
  - Distance-Based Methods
  - Character-Based Methods
  - Phylogenetic Tree Evaluation
  - Phylogenetic Programs
6. Phylogenetic Tree Evaluation
- Bootstrapping
- Jackknifing
- Bayesian simulation
- Statistical difference tests (are two trees significantly different?)
  - Kishino-Hasegawa test (paired t-test)
  - Shimodaira-Hasegawa test (χ² test)
7. Bootstrapping
- A bootstrap sample is obtained by sampling sites randomly with replacement
  - This yields a data matrix with the same number of taxa and characters as the original
- Construct a tree for each bootstrap sample
- For each branch in the original tree, compute the fraction of bootstrap samples in which that branch appears
  - This assigns a bootstrap support value to each branch
- Idea: if a grouping has strong support, it will be supported by at least some positions in most of the bootstrap samples
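As a concrete illustration of the resampling step (a sketch added here, not from the lecture), bootstrap replicates of an alignment can be generated by drawing columns with replacement; `build_tree` below is a hypothetical stand-in for whatever tree-construction method is being evaluated:

```python
import random

def bootstrap_sample(alignment):
    """Resample alignment columns (sites) with replacement.

    alignment: list of equal-length strings (rows = taxa,
    columns = sites). Returns a matrix with the same number
    of taxa and characters as the original.
    """
    n_sites = len(alignment[0])
    cols = [random.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

# Hypothetical usage: build many replicate trees, then record, for
# each branch of the original tree, the fraction of replicates in
# which that branch (bipartition) reappears.
# replicates = [build_tree(bootstrap_sample(aln)) for _ in range(1000)]
```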
8. Bootstrapping Comments
- Bootstrapping doesn't really assess the accuracy of a tree; it only indicates the consistency of the data
- To get reliable statistics, bootstrapping needs to be repeated 500-1000 times, which is a big problem if your tree took a few days to construct
9. Jackknifing
- Another resampling technique
- Randomly delete half of the sites in the dataset
- Construct a new tree with this smaller dataset and see how often taxa are grouped together
- Advantage: sites aren't duplicated
- Disadvantage: again, this really only measures the consistency of the data
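The corresponding delete-half jackknife step (again only an illustrative sketch): half of the sites are dropped at random, without replacement, so no column is duplicated:

```python
import random

def jackknife_sample(alignment):
    """Randomly delete half of the sites; kept columns are never duplicated."""
    n_sites = len(alignment[0])
    keep = sorted(random.sample(range(n_sites), n_sites // 2))
    return ["".join(seq[c] for c in keep) for seq in alignment]
```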
10. Bayesian Simulation
- Producing a tree with a Bayesian ML method automatically calculates the probability of many trees during the search
- Most trees sampled in the Bayesian ML search are near an optimal tree
11. Phylogenetic Programs
- Huge list at: http://evolution.genetics.washington.edu/phylip/software.html
- PAUP* - one of the most popular programs; commercial, Mac and Unix only, nice user interface
- PHYLIP - free, multiplatform; a bit difficult to use, but web servers make it easier
- WebPhylip - another online interface for PHYLIP
12. Phylogenetic Programs
- TREE-PUZZLE - uses a heuristic to allow ML on large datasets; also available as a web server
- PHYML - web based, uses a genetic algorithm
- MrBayes - Bayesian program; fast and can handle large datasets; multiplatform download
- BAMBE - web-based Bayesian program
13. Final Comments on Phylogenetics
- No method is perfect
- Different methods make very different assumptions
- If multiple methods with different assumptions produce similar results, we should trust those results more than the output of any single method
14. Machine Learning
- What is learning?
- What is machine learning?
- Learning algorithms
- Machine learning applied to bioinformatics and computational biology
- Some slides adapted from Dr. Vasant Honavar and Dr. Byron Olson
15. What is Learning?
- Learning is a process by which the learner improves its performance on a task or a set of tasks as a result of experience within some environment
16. Types of Learning
- Rote learning - useful when it is less expensive to store and retrieve information than to compute it
- Learning from instruction - transform instructions into useful knowledge
- Learning from examples - extract predictive or descriptive regularities from data
- Learning from deduction - generalize instances of deductive problem-solving
- Learning from exploration - learn to choose actions that maximize reward
17. What is Machine Learning?
- Machine learning is an area of artificial intelligence concerned with the development of techniques that allow computers to "learn"
- Machine learning is a method for creating computer programs by the analysis of data sets
- We understand a phenomenon when we can write a computer program that models it at the desired level of detail
18. Contributing Disciplines
- Computer science - artificial intelligence, algorithms and complexity, databases, data mining
- Statistics - statistical inference, experimental design, exploratory data analysis
- Mathematics - abstract algebra, logic, information theory, probability theory
- Psychology and neuroscience - behavior, perception, learning, memory, problem solving
- Philosophy - ontology, epistemology, philosophy of mind, philosophy of science
19. Machine Learning Applications
- Bioinformatics and computational biology
- Environmental informatics
- Medical informatics
- Cognitive science
- E-commerce
- Human-computer interaction
- Robotics
- Engineering
20. Machine Learning Algorithms
- There are many types of algorithms, differing in the structure of the learning problem as well as the approach to learning used:
  - Regression vs. classification
  - Supervised vs. unsupervised
  - Generative vs. discriminative
  - Linear vs. non-linear
21. Machine Learning Algorithms: Regression vs. Classification
- A structural difference
- Regression algorithms attempt to map inputs to numeric outputs (integers, real numbers, etc.)
- Classification algorithms attempt to map inputs to one of a discrete set of classes (colors, cellular locations, good and bad credit risks, etc.)
22. Machine Learning Algorithms: Supervised vs. Unsupervised
- A difference in the data
- Supervised learning uses pairs of input/output relationships (labeled pairs, often denoted {X, Y}) to learn an input-output mapping
- Unsupervised learning examines input data alone to find patterns (e.g., clustering)
23. Machine Learning Algorithms: Generative vs. Discriminative
- A philosophical difference
- Generative models attempt to recreate or understand the process that generated the data
- Discriminative models simply attempt to separate or determine the class of input data, without regard to the generating process
24. Machine Learning Algorithms: Linear vs. Non-linear
- A modeling difference
- Linear models involve only linear combinations of the input variables
- Non-linear models are not restricted in their form (they commonly include exponential or quadratic terms)
25. Linear vs. Non-linear
26. Summary of Machine Learning Algorithms
- This is only the tip of the iceberg
- No single algorithm works best for every application
- Some simple algorithms are effective on many data sets
- Better results can often be obtained by preprocessing the data to suit the algorithm, or by adapting the algorithm to suit the characteristics of the data
27. Measuring Performance
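The body of this slide was a figure, presumably presenting the standard confusion-matrix quantities. For reference, a short sketch (added here, not from the slide) of how sensitivity and specificity are computed from binary predictions:

```python
def confusion_stats(actual, predicted):
    """Sensitivity and specificity from 0/1 labels (1 = positive class)."""
    pairs = list(zip(actual, predicted))
    tp = sum(1 for a, p in pairs if a == 1 and p == 1)
    tn = sum(1 for a, p in pairs if a == 0 and p == 0)
    fp = sum(1 for a, p in pairs if a == 0 and p == 1)
    fn = sum(1 for a, p in pairs if a == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true positives found
    specificity = tn / (tn + fp)   # fraction of true negatives found
    return sensitivity, specificity
```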
28. Trade-Off Between Specificity and Sensitivity
- The classification threshold controls a trade-off between specificity and sensitivity
- High specificity - predict fewer instances, with higher confidence
- High sensitivity - predict more instances, with lower confidence
- Commonly shown as a Receiver Operating Characteristic (ROC) curve
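A sketch of how sweeping the threshold traces out the ROC curve (illustrative only; it assumes the classifier emits a confidence score per example):

```python
def roc_points(scores, actual):
    """One (false positive rate, true positive rate) point per threshold.

    Lowering the threshold predicts more positives, so sensitivity
    rises while specificity falls; plotting the points gives the ROC.
    """
    points = []
    for threshold in sorted(set(scores), reverse=True):
        predicted = [1 if s >= threshold else 0 for s in scores]
        pairs = list(zip(actual, predicted))
        tp = sum(1 for a, p in pairs if a == 1 and p == 1)
        fp = sum(1 for a, p in pairs if a == 0 and p == 1)
        fn = sum(1 for a, p in pairs if a == 1 and p == 0)
        tn = sum(1 for a, p in pairs if a == 0 and p == 0)
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points
```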
29. Measuring Performance
- Using any single measure of performance is problematic
- Accuracy can be misleading: when 95% of examples are negative, we can achieve 95% accuracy by predicting all negative. We are 95% accurate, but 100% wrong on the positive examples
30. Machine Learning in Bioinformatics
- Gene finding
- Binding site identification
- Protein structure prediction
- Protein function prediction
- Genetic network inference
- Cancer diagnosis
- etc.
31. Sample Learning Scenario: Protein Function Prediction
32. Some Examples of Algorithms
- Naïve Bayes
- Neural network
- Support vector machine
33. Predicting RNA Binding Sites in Proteins
- Problem: given an amino acid sequence, classify each residue as RNA binding or non-RNA binding
- The input to the classifier is a string of amino acid identities
- The output from the classifier is a class label: binding or not
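One common way to set this problem up (consistent with the windowed example on slide 38, though the exact encoding used in the lecture is not shown) is to classify each residue from a fixed-size window of sequence centered on it:

```python
def windows(sequence, half_width=5, pad="X"):
    """Yield a window of residues centered on each position.

    Ends of the sequence are padded with a dummy residue so every
    window has the same length, 2 * half_width + 1.
    """
    padded = pad * half_width + sequence + pad * half_width
    width = 2 * half_width + 1
    for i in range(len(sequence)):
        yield padded[i:i + width]

# Each window is one input vector X = (X1, ..., Xn); the label is
# whether the center residue binds RNA.
```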
34. Bayes Theorem

$$P(A \mid B) = \frac{P(B \mid A)\; P(A)}{P(B)}$$

- P(A) = prior probability
- P(A|B) = posterior probability
35. Bayes Theorem Applied to Classification
With A = the class label c and B = the observed input X, Bayes' theorem gives $P(c \mid X) = P(X \mid c)\, P(c) / P(X)$.
36. Naïve Bayes Algorithm

$$\frac{P(c=1 \mid X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)}{P(c=0 \mid X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)} = \frac{P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid c=1)\; P(c=1)}{P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid c=0)\; P(c=0)}$$

Under the naïve (conditional independence) assumption, each likelihood factors as $P(X_1 = x_1, \ldots, X_n = x_n \mid c) = \prod_i P(X_i = x_i \mid c)$.
37. Naïve Bayes Algorithm
Assign c = 1 if the posterior odds ratio above is at least a threshold θ:

$$\frac{P(c=1 \mid X = x)}{P(c=0 \mid X = x)} \geq \theta$$
38. Example
Window centered on ARG 6: T S K K K R Q R G S R. Predict binding if

$$\frac{p(X_1 = T \mid c=1)\; p(X_2 = S \mid c=1) \cdots}{p(X_1 = T \mid c=0)\; p(X_2 = S \mid c=0) \cdots} \geq \theta$$
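A minimal sketch of this classifier in Python (illustrative only; the probability tables and priors would be estimated from labeled training windows, which are not given in the slides). Working in log space avoids underflow from multiplying many small probabilities:

```python
import math

def log_posterior_odds(window, likelihoods, priors):
    """log[ P(c=1 | x) / P(c=0 | x) ] under the Naive Bayes assumption.

    window:      residues in the window, e.g. "TSKKKRQRGSR"
    likelihoods: likelihoods[c][i][aa] ~ P(X_i = aa | c), estimated
                 with pseudocounts so no probability is exactly zero
    priors:      priors[c] ~ P(c)
    """
    score = math.log(priors[1]) - math.log(priors[0])
    for i, aa in enumerate(window):
        score += math.log(likelihoods[1][i][aa])
        score -= math.log(likelihoods[0][i][aa])
    return score

# Predict "binding" when the odds ratio is at least theta:
# predict = 1 if log_posterior_odds(w, L, P) >= math.log(theta) else 0
```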
39. Predictions for Ribosomal Protein L15 (PDB ID 1JJ2, chain K)
(Figure slide comparing actual and predicted RNA-binding residues.)
40. Neural Networks
- The most successful methods for predicting secondary structure are based on neural networks. The overall idea is that neural networks can be trained to recognize amino acid patterns in known secondary structure units, and to use these patterns to distinguish between the different types of secondary structure.
- Neural networks classify "input vectors" or "examples" into categories (two or more).
- They are loosely based on biological neurons.
41. Biological Neurons
- Dendrites receive inputs; the axon gives the output
- Image from Christos Stergiou and Dimitrios Siganos, http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
42. Artificial Neuron: the "Perceptron"
- Image from Christos Stergiou and Dimitrios Siganos, http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
43. The Perceptron
- Inputs X1, X2, ..., XN, with weights w1, w2, ..., wN, feed a threshold unit T that produces the output
- The perceptron classifies the input vector X into two categories
- If the weights and threshold T are not known in advance, the perceptron must be trained. Ideally, it should return the correct answer on all training examples and perform well on examples it has never seen
- The training set must contain both types of data (i.e., examples with output "1" and with output "0")
44. The Perceptron
- The input is a vector X, and the weights can be stored in another vector W
- The perceptron computes the dot product S = X·W
- The output F is a function of S; it is often discrete (i.e., 1 or 0), in which case F is the step function
- For continuous output, a sigmoid is often used: F(S) = 1 / (1 + e^(-S)), which rises from 0 through 1/2 (at S = 0) toward 1
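A small sketch of this forward computation (not from the slides):

```python
import math

def perceptron_output(x, w, threshold=0.0, discrete=True):
    """Single-perceptron forward pass.

    x, w: equal-length sequences of inputs and weights.
    discrete=True applies the step function at `threshold`;
    otherwise a sigmoid squashes the dot product into (0, 1).
    """
    s = sum(xi * wi for xi, wi in zip(x, w))    # S = X . W
    if discrete:
        return 1 if s >= threshold else 0       # step function
    return 1.0 / (1.0 + math.exp(-s))           # sigmoid F(S)
```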
45. The Perceptron
- Training a perceptron: find the weights W that minimize the error function

  $$E(W) = \sum_{i=1}^{P} \big( F(W \cdot X_i) - t(X_i) \big)^2$$

  - P: number of training data
  - X_i: training vectors
  - F(W·X_i): output of the perceptron
  - t(X_i): target value for X_i
- Use steepest descent:
  - compute the gradient $\nabla_W E$
  - update the weight vector: $W \leftarrow W - \epsilon \nabla_W E$
  - iterate ($\epsilon$: learning rate)
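A corresponding training sketch with the sigmoid output (a minimal illustration; the learning rate, epoch count, and the AND-function example are arbitrary choices, not values from the lecture):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_perceptron(examples, n_inputs, rate=0.5, epochs=2000):
    """Steepest descent on the squared-error function E(W).

    examples: (x, target) pairs; x has n_inputs numbers, target is 0/1.
    """
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, t in examples:
            f = sigmoid(sum(xi * wi for xi, wi in zip(x, w)))
            # dE/dw_j for E = (f - t)^2, using sigmoid' = f (1 - f)
            grad = 2.0 * (f - t) * f * (1.0 - f)
            w = [wi - rate * grad * xi for wi, xi in zip(w, x)]
    return w

# Example: learn AND of two inputs (third input = constant bias).
data = [([a, b, 1.0], 1 if a and b else 0)
        for a in (0.0, 1.0) for b in (0.0, 1.0)]
weights = train_perceptron(data, 3)
```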
46. Biological Neural Network
- Image from http://en.wikipedia.org/wiki/Biological_neural_network
47. Artificial Neural Network
A complete neural network is a set of perceptrons interconnected such that the outputs of some units become the inputs of other units. Many topologies are possible! Neural networks are trained just like perceptrons, by minimizing an error function over all the weights:

$$E(W) = \sum_{i=1}^{P} \big( \mathrm{output}(X_i) - t(X_i) \big)^2$$
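For concreteness, a minimal two-layer (one hidden layer) forward pass; the topology and dimensions here are arbitrary, not taken from the slides:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def mlp_forward(x, hidden_weights, output_weights):
    """Feed-forward network: perceptron outputs become the next
    layer's inputs.

    hidden_weights: one weight vector per hidden unit
    output_weights: one weight vector over the hidden activations
    """
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, w)))
              for w in hidden_weights]
    return sigmoid(sum(hi * wi for hi, wi in zip(hidden, output_weights)))
```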
48. Support Vector Machines (SVMs)
- Image from http://en.wikipedia.org/wiki/Support_vector_machine
49. An SVM Finds the Maximum-Margin Hyperplane
- Image from http://en.wikipedia.org/wiki/Support_vector_machine
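The slides do not name an SVM implementation; for experimentation, one widely used option is scikit-learn (an assumption here, not part of the lecture):

```python
from sklearn.svm import SVC

# Toy data: 2-D points in two classes.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]]
y = [0, 0, 1, 1]

# A linear kernel finds the maximum-margin separating hyperplane.
clf = SVC(kernel="linear")
clf.fit(X, y)
print(clf.predict([[0.1, 0.2], [0.8, 0.9]]))   # expect [0, 1]
```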
50. Kernel Function
51. Kernel Function
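Both kernel slides were figures. The underlying idea is that a kernel evaluates an inner product in an implicit feature space, so a linear separator there corresponds to a non-linear boundary in the original input space. A common example (assumed here; the specific kernel shown on the slides is not recoverable) is the Gaussian/RBF kernel:

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: K(x, z) = exp(-gamma * ||x - z||^2).

    Acts like an inner product in a very high-dimensional feature
    space, letting a linear method such as an SVM fit non-linear
    decision boundaries.
    """
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)
```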
52. Take Home Messages
- You must consider how to set up the learning problem (supervised or unsupervised, generative or discriminative, classification or regression, etc.)
- There are lots of algorithms out there
- No algorithm performs best on all problems