Learning Algorithms For Life Scientists

This was a very brief introduction to the basics of learning algorithms for life scientists that I was asked to give to the incoming first-year students at TSRI in the fall of 2005. It covers the very basics of how the algorithms work (sans the complex math) and, more importantly, how they can be appropriately understood and applied by chemists and biologists.

1. Artificial Intelligence and Learning Algorithms
   Presented by Brian M. Frezza, 12/1/05

2. Game Plan
   - What's a Learning Algorithm?
   - Why should I care?
     - Biological parallels
   - Real-world examples
   - Getting our hands dirty with the algorithms
     - Bayesian Networks
     - Hidden Markov Models
     - Genetic Algorithms
     - Neural Networks
   - Artificial neural networks vs. neuron biology
     - "Fraser's Rules"
   - Frontiers in AI

3. Hard Math

4. What's a Learning Algorithm?
   - "An algorithm which predicts data's future behavior based on its past performance."
     - The programmer can be ignorant of the data's trends.
       - Not rationally designed!
     - Training data
     - Test data

5. Why do I care?
   - Use in informatics
     - Predict trends in "fuzzy" data
       - Subtle patterns in data
       - Complex patterns in data
       - Noisy data
     - Network inference
     - Classification inference
   - Analogies to chemical biology
     - Evolution
     - Immunological response
     - Neurology
   - Fundamental theories of intelligence
     - That's heavy, dude

6. Street Smarts
   - CMU's Navlab-5 (No Hands Across America)
     - 1995 neural-network-driven car
     - Pittsburgh to San Diego: 2,797 miles (98.2%)
     - A single hidden-layer backpropagation network!
   - Subcellular location through fluorescence
     - "A neural network classifier capable of recognizing the patterns of all major subcellular structures in fluorescence microscope images of HeLa cells," M. V. Boland and R. F. Murphy, Bioinformatics (2001) 17(12), 1213-1223
   - Protein secondary structure prediction
   - Intron/exon predictions
   - Protein/gene network inference
   - Speech recognition
   - Face recognition

7. The Algorithms
   - Bayesian Networks
   - Hidden Markov Models
   - Genetic Algorithms
   - Neural Networks

8. Bayesian Networks: Basics
   - Requires models of how the data behave
     - A set of hypotheses: {H}
   - Keeps track of the likelihood of each model being accurate as data become available
     - P(H)
   - Predicts as a weighted average over all hypotheses
     - P(E) = Sum( P(H) * H(E) )

9. Bayesian Network Example
   - What color hair will Paul Schaffer's kids have if he marries a redhead?
     - Hypotheses
       - Ha (rr): rr x rr -> 100% redhead
       - Hb (Rr): rr x Rr -> 50% redhead, 50% not
       - Hc (RR): rr x RR -> 100% not
   - Initially clueless:
     - So P(Ha) = P(Hb) = P(Hc) = 1/3

10. Bayesian Network: Trace
   - Hypotheses: Ha: 100% redhead; Hb: 50% redhead, 50% not; Hc: 100% not redhead
   - History: Redhead: 0, Not: 0
   - Current weights: P(Ha) = 1/3, P(Hb) = 1/3, P(Hc) = 1/3
   - Prediction: will their next kid be a redhead?
     P(red) = P(red|Ha)*P(Ha) + P(red|Hb)*P(Hb) + P(red|Hc)*P(Hc)
            = (1)*(1/3) + (1/2)*(1/3) + (0)*(1/3) = 1/2

11. Bayesian Network: Trace
   - History: Redhead: 1, Not: 0
   - Current weights: P(Ha) = 1/2, P(Hb) = 1/2, P(Hc) = 0
   - Prediction: P(red) = (1)*(1/2) + (1/2)*(1/2) + (0)*(0) = 3/4

12. Bayesian Network: Trace
   - History: Redhead: 2, Not: 0
   - Current weights: P(Ha) = 3/4, P(Hb) = 1/4, P(Hc) = 0
   - Prediction: P(red) = (1)*(3/4) + (1/2)*(1/4) + (0)*(0) = 7/8

13. Bayesian Network: Trace
   - History: Redhead: 3, Not: 0
   - Current weights: P(Ha) = 7/8, P(Hb) = 1/8, P(Hc) = 0
   - Prediction: P(red) = (1)*(7/8) + (1/2)*(1/8) + (0)*(0) = 15/16

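To make the bookkeeping above concrete, here is a minimal Python sketch (not from the slides) of the weighted-average prediction from slide 8 together with a hypothesis re-weighting step. The dictionary and function names are mine, and the update uses the textbook Bayes rule, so the exact weights it prints may differ from those shown in the trace.

```python
# A minimal sketch of Bayesian prediction and updating for the red-hair example.
hypotheses = {          # P(redhead child | hypothesis)
    "Ha (rr x rr)": 1.0,
    "Hb (rr x Rr)": 0.5,
    "Hc (rr x RR)": 0.0,
}
priors = {h: 1.0 / 3.0 for h in hypotheses}   # initially clueless

def predict(priors):
    """P(next kid is a redhead) = sum over H of P(H) * P(red | H)."""
    return sum(priors[h] * hypotheses[h] for h in hypotheses)

def update(priors, redhead):
    """Re-weight each hypothesis by how well it explains the new observation (Bayes rule)."""
    likelihood = {h: (hypotheses[h] if redhead else 1.0 - hypotheses[h]) for h in hypotheses}
    unnormalized = {h: likelihood[h] * priors[h] for h in hypotheses}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print("P(redhead) before any kids:", predict(priors))
for kid in range(3):                           # observe three redheaded kids in a row
    priors = update(priors, redhead=True)
    print("after kid", kid + 1, priors, "-> P(redhead) =", predict(priors))
```

Running it shows the prediction climbing toward 1 as each additional redheaded kid makes Ha more and more credible and Hc stays disproved.
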
14. Bayesian Networks Notes
   - Never reject a hypothesis unless it is directly disproved
   - Learns based on rational models of behavior
     - Models can be extracted!
   - The programmer needs to form the hypotheses beforehand.

15. The Algorithms
   - Bayesian Networks
   - Hidden Markov Models
   - Genetic Algorithms
   - Neural Networks

16. Hidden Markov Models (HMMs)
   - Discrete learning algorithm
     - The programmer must be able to categorize predictions
   - HMMs also assume a model of the world working behind the data
   - Models are also extractable
   - Common uses
     - Speech recognition
     - Secondary structure prediction
     - Intron/exon predictions
     - Categorization of data

17. Hidden Markov Models: Take a Step Back
   - 1st-order Markov models:
     - Q{States}
     - Pr{Transition}
     - The sum of all transition probabilities out of a state = 1
   [Diagram: four states Q1-Q4 connected by transitions labeled P1, P2, 1-P1-P2, P3, 1-P3, 1, P4, and 1-P4]

18. 1st-order Markov Model Setup
   - Pick an initial state: Q1
   - Pick transition probabilities: P1 = 0.6, P2 = 0.2, P3 = 0.9, P4 = 0.4
   - For each time step
     - Pick a random number between 0.0 and 1.0

19. 1st-order Markov Model Trace
   - Current state: Q1, time step = 1
   - Transition probabilities: P1 = 0.6, P2 = 0.2, P3 = 0.9, P4 = 0.4
   - Random number: 0.22341
   - So next state:
     - 0.22341 < P1, so take P1
     - Q2

20. 1st-order Markov Model Trace
   - Current state: Q2, time step = 2
   - Random number: 0.64357
   - So next state:
     - No choice, P = 1
     - Q3

21. 1st-order Markov Model Trace
   - Current state: Q3, time step = 3
   - Random number: 0.97412
   - So next state:
     - 0.97412 > 0.9, so take 1-P3
     - Q4

22. 1st-order Markov Model Trace
   - Current state: Q4, time step = 4
   - I'm going to stop here.
   - Markov chain: Q1, Q2, Q3, Q4

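The stepping rule in this trace, draw one random number per time step and compare it against the cumulative transition probabilities, is easy to code. Below is a minimal Python sketch; the slides fix P1-P4 as 0.6, 0.2, 0.9, and 0.4 but the diagram does not pin down every edge's destination, so the transition table here is an illustrative topology, not the slide's exact model.

```python
import random

# A minimal sketch of simulating a first-order Markov chain.
transitions = {
    "Q1": [("Q2", 0.6), ("Q3", 0.2), ("Q4", 0.2)],   # P1, P2, 1-P1-P2 (topology assumed)
    "Q2": [("Q3", 1.0)],                             # only one way out
    "Q3": [("Q1", 0.9), ("Q4", 0.1)],                # P3, 1-P3
    "Q4": [("Q4", 0.4), ("Q1", 0.6)],                # P4, 1-P4
}

def step(state):
    """Draw a random number in [0, 1) and walk the cumulative transition probabilities."""
    r, cumulative = random.random(), 0.0
    for nxt, p in transitions[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return transitions[state][-1][0]   # guard against floating-point round-off

state, chain = "Q1", ["Q1"]
for _ in range(3):
    state = step(state)
    chain.append(state)
print("Markov chain:", chain)          # e.g. ['Q1', 'Q2', 'Q3', 'Q4']
```

Each run produces a different chain; the final comment shows one possible outcome, matching the trace above.
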
23. What else can Markov do?
   - Higher-order models
     - Kth order
   - Metropolis-Hastings
     - Determining thermodynamic equilibrium
   - Continuous Markov models
     - The time step varies according to a continuous distribution
   - Hidden Markov models
     - Discrete model learning

24. Hidden Markov Models (HMMs)
   - A Markov model drives the world, but it is hidden from direct observation and its status must be inferred from a set of observables.
     - Voice recognition
       - Observable: sound waves
       - Hidden states: words
     - Intron/exon prediction
       - Observable: nucleotide sequence
       - Hidden states: exon, intron, non-coding
     - Secondary structure prediction for proteins
       - Observable: amino acid sequence
       - Hidden states: alpha helix, beta sheet, unstructured

25. Hidden Markov Models: Example
   - Secondary structure prediction
   [Diagram: three hidden states (Unstructured, Alpha Helix, Beta Sheet), each emitting the 20 amino acids as observable states]

26. Hidden Markov Models: Smaller Example
   - Exon/intron mapping
   [Diagram: hidden states Exon (Ex), Intron (It), and Intergenic (Ig) linked by transition probabilities such as P(Ex|Ex) and P(It|Ex), each emitting the observable bases A, C, G, and T with probabilities such as P(A|Ex)]

27. Hidden Markov Models: Smaller Example
   - Exon/intron mapping
   - Hidden state transition probabilities, P(to | from):

     From \ To   Ex     Ig     It
     Ex          0.70   0.10   0.20
     Ig          0.49   0.50   0.01
     It          0.18   0.02   0.80

   - Observable state probabilities, P(base | hidden state):

     State   A      T      G      C
     Ex      0.33   0.42   0.11   0.14
     Ig      0.25   0.25   0.25   0.25
     It      0.14   0.16   0.50   0.20

   - Starting distribution: Ex = 0.10, Ig = 0.89, It = 0.01

28. Hidden Markov Model
   - How to predict outcomes from an HMM
     - Brute force: try every possible Markov chain
       - Which chain has the greatest probability of generating the observed data?
     - Viterbi algorithm
       - A dynamic programming approach

29. Viterbi Algorithm: Trace
   - Example sequence: ATAATGGCGAGATG
   - Model parameters (hidden state transition probabilities, observable state probabilities, and starting distribution) as given on slide 27.
   - The first base (A) is scored from the starting distribution:
     - Exon = P(A|Ex) * Start(Ex) = 0.33 * 0.10 = 3.3e-2
     - Intergenic = P(A|Ig) * Start(Ig) = 0.25 * 0.89 = 2.2e-1
     - Intron = P(A|It) * Start(It) = 0.14 * 0.01 = 1.4e-3

30. Viterbi Algorithm: Trace
   - Every later base keeps, for each hidden state, only the most probable path into it:
     - Exon = Max( P(Ex|Ex)*P_n-1(Ex), P(Ex|Ig)*P_n-1(Ig), P(Ex|It)*P_n-1(It) ) * P(base|Ex)
     - Intergenic = Max( P(Ig|Ex)*P_n-1(Ex), P(Ig|Ig)*P_n-1(Ig), P(Ig|It)*P_n-1(It) ) * P(base|Ig)
     - Intron = Max( P(It|Ex)*P_n-1(Ex), P(It|Ig)*P_n-1(Ig), P(It|It)*P_n-1(It) ) * P(base|It)
   - Second base (T): Exon = 4.6e-2, Intergenic = 2.8e-2, Intron = 1.1e-3

31. Viterbi Algorithm: Trace
   - Third base (A): Exon = 1.1e-2, Intergenic = 3.5e-3, Intron = 1.3e-3

32. Viterbi Algorithm: Trace
   - Fourth base (A): Exon = 2.4e-3, Intergenic = 4.3e-4, Intron = 2.9e-4

33. Viterbi Algorithm: Trace
   - Fifth base (T): Exon = 7.2e-4, Intergenic = 6.1e-5, Intron = 7.8e-5

34. Viterbi Algorithm: Trace
   - Sixth base (G): Exon = 5.5e-5, Intergenic = 1.8e-5, Intron = 7.2e-5

35. Viterbi Algorithm: Trace
   - Seventh base (G): Exon = 4.3e-6, Intergenic = 2.2e-6, Intron = 2.9e-5

36. Viterbi Algorithm: Trace
   - Completed table over the full sequence:

     Base   Exon      Intergenic   Intron
     A      3.3e-2    2.2e-1       1.4e-3
     T      4.6e-2    2.8e-2       1.1e-3
     A      1.1e-2    3.5e-3       1.3e-3
     A      2.4e-3    4.3e-4       2.9e-4
     T      7.2e-4    6.1e-5       7.8e-5
     G      5.5e-5    1.8e-5       7.2e-5
     G      4.3e-6    2.2e-6       2.9e-5
     C      7.2e-7    2.8e-7       4.6e-6
     G      9.1e-8    3.5e-8       1.8e-6
     A      1.1e-7    9.1e-9       2.0e-7
     G      8.4e-9    2.7e-9       8.2e-8
     A      4.9e-9    4.1e-10      9.2e-9
     T      1.4e-9    1.2e-10      1.2e-9
     G      1.1e-10   3.6e-11      4.7e-10

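Here is a minimal Python sketch of the same recurrence, filling one column of best-path probabilities per base with the slide-27 parameters and then backtracking the most probable hidden-state path. The function and variable names are mine; the model numbers come from the slides.

```python
# A minimal Viterbi sketch for the exon/intron/intergenic HMM traced above.
start = {"Ex": 0.10, "Ig": 0.89, "It": 0.01}
trans = {  # P(to | from)
    "Ex": {"Ex": 0.70, "Ig": 0.10, "It": 0.20},
    "Ig": {"Ex": 0.49, "Ig": 0.50, "It": 0.01},
    "It": {"Ex": 0.18, "Ig": 0.02, "It": 0.80},
}
emit = {   # P(base | hidden state)
    "Ex": {"A": 0.33, "T": 0.42, "G": 0.11, "C": 0.14},
    "Ig": {"A": 0.25, "T": 0.25, "G": 0.25, "C": 0.25},
    "It": {"A": 0.14, "T": 0.16, "G": 0.50, "C": 0.20},
}
states = ["Ex", "Ig", "It"]

def viterbi(sequence):
    """Fill the dynamic-programming table and backtrack the best hidden-state path."""
    table = [{s: start[s] * emit[s][sequence[0]] for s in states}]
    backpointer = [{}]
    for base in sequence[1:]:
        prev = table[-1]
        column, pointers = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: prev[p] * trans[p][s])
            column[s] = prev[best_prev] * trans[best_prev][s] * emit[s][base]
            pointers[s] = best_prev
        table.append(column)
        backpointer.append(pointers)
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: table[-1][s])]
    for pointers in reversed(backpointer[1:]):
        path.append(pointers[path[-1]])
    return table, list(reversed(path))

table, path = viterbi("ATAATGGCGAGATG")
for base, column in zip("ATAATGGCGAGATG", table):
    print(base, {s: f"{p:.1e}" for s, p in column.items()})
print("Most probable hidden-state path:", path)
```

Each printed column should match the corresponding row of the completed table above, up to rounding.
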
37. Hidden Markov Models
   - How to train an HMM
     - The forward-backward algorithm
       - Ugly probability-theory math:
       - Starts with an initial guess of the parameters
       - Refines the parameters by attempting to reduce the errors they provoke when fitted to the data.
         - The normalized "forward" probability of arriving at the state given the observable, cross-multiplied by the "backward" probability of generating that observable given the parameter.
   CENSORED

38. The Algorithms
   - Bayesian Networks
   - Hidden Markov Models
   - Genetic Algorithms
   - Neural Networks

39. Genetic Algorithms
   - Individuals are series of bits which represent candidate solutions
     - Functions
     - Structures
     - Images
     - Code
   - Based on Darwinian evolution
     - Individuals mate, mutate, and are selected based on a fitness function

40. Genetic Algorithms
   - Encoding rules
     - "Gray" bit encoding
       - Bit distance proportional to value distance
   - Selection rules
     - Digital / analog threshold
     - Linear amplification vs. weighted amplification
   - Mating rules
     - Mutation parameters
     - Recombination parameters

41. Genetic Algorithms
   - When are they useful?
     - When movements in sequence space are funnel-shaped with respect to the fitness function
       - Systems where evolution actually applies!
   - Examples
     - Medicinal chemistry
     - Protein folding
     - Amino acid substitutions
     - Membrane trafficking modeling
     - Ecological simulations
     - Linear programming
     - Traveling salesman

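As a concrete illustration of the encoding, selection, and mating rules listed on slides 39-40, here is a minimal genetic-algorithm sketch in Python. The fitness function (counting 1-bits, the classic "OneMax" toy problem), the population size, and the rates are illustrative choices of mine, not values from the slides.

```python
import random

# A minimal genetic algorithm on bit strings: fitness-weighted selection,
# single-point recombination, and per-bit mutation.
GENOME_LENGTH, POPULATION, GENERATIONS = 20, 30, 40
MUTATION_RATE = 0.02

def fitness(individual):
    return sum(individual)                       # count of 1-bits (OneMax)

def select(population):
    """Fitness-weighted ("roulette wheel") selection of one parent."""
    return random.choices(population, weights=[fitness(i) + 1 for i in population])[0]

def mate(mom, dad):
    """Single-point recombination followed by per-bit mutation."""
    cut = random.randrange(1, GENOME_LENGTH)
    child = mom[:cut] + dad[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population = [mate(select(population), select(population)) for _ in range(POPULATION)]
best = max(population, key=fitness)
print("best individual:", best, "fitness:", fitness(best))
```

Swapping in a different fitness function, say a docking score or a folding energy, is what turns this toy into the medicinal-chemistry or protein-folding applications listed above.
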
42. The Algorithms
   - Bayesian Networks
   - Hidden Markov Models
   - Genetic Algorithms
   - Neural Networks

43. Neural Networks
   - 1943: McCulloch and Pitts model of how neurons process information
     - The field immediately splits
       - Studying brains
         - Neurology
       - Studying artificial intelligence
         - Neural networks

44. Neural Networks: A Neuron, Node, or Unit
   [Diagram: inputs a and b arrive over weighted edges W_a,c and W_b,c; the unit computes Σ(W) - W_0,c, where W_0,c is the bias, passes the result through an activation function, and sends the output z on to the next units over weight W_c,n]

45. Neural Networks: Activation Functions
   - Threshold function: the output jumps from 0 to +1 at the zero point set by the bias
   - Sigmoid function (logistic function): a smooth version of the same step
   [Figure: plots of both functions, input on the horizontal axis, output rising to +1 on the vertical axis]

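A minimal Python sketch of the two activation functions and of a single unit's output (the weighted input sum minus the bias, passed through the activation); the example weights used in the prints are arbitrary.

```python
import math

def threshold(x):
    """Fires (outputs 1) only when the biased input sum is above zero."""
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    """Smooth version of the threshold -- the logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def unit(inputs, weights, bias, activation):
    """One neuron: weighted sum of inputs, minus the bias, through the activation."""
    return activation(sum(w * a for w, a in zip(weights, inputs)) - bias)

print(unit([1, 1], [1.0, 1.0], 1.5, threshold))   # 1.0 -- the AND gate of slide 46
print(unit([1, 0], [1.0, 1.0], 1.5, sigmoid))     # ~0.38 -- the soft version of the same unit
```
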
46. Threshold Functions Can Make Logic Gates with Neurons!
   - Logical AND
   - Weights: W_a,c = 1, W_b,c = 1; bias W_0,c = 1.5
   - If ( Σ(w) - W_0,c > 0 ) then FIRE, else don't
   - Truth table (A AND B): 0,0 -> 0; 0,1 -> 0; 1,0 -> 0; 1,1 -> 1

47. AND Gate: Trace
   - A off, B off: 0 - 1.5 = -1.5 < 0, so output off

48. AND Gate: Trace
   - A on, B off: 1 - 1.5 = -0.5 < 0, so output off

49. AND Gate: Trace
   - A off, B on: 1 - 1.5 = -0.5 < 0, so output off

50. AND Gate: Trace
   - A on, B on: 2 - 1.5 = 0.5 > 0, so output on

51. Threshold Functions Can Make Logic Gates with Neurons!
   - Logical OR
   - Weights: W_a,c = 1, W_b,c = 1; bias W_0,c = 0.5
   - If ( Σ(w) - W_0,c > 0 ) then FIRE, else don't
   - Truth table (A OR B): 0,0 -> 0; 0,1 -> 1; 1,0 -> 1; 1,1 -> 1

52. OR Gate: Trace
   - A off, B off: 0 - 0.5 = -0.5 < 0, so output off

53. OR Gate: Trace
   - A on, B off: 1 - 0.5 = 0.5 > 0, so output on

54. OR Gate: Trace
   - A off, B on: 1 - 0.5 = 0.5 > 0, so output on

55. OR Gate: Trace
   - A on, B on: 2 - 0.5 = 1.5 > 0, so output on

56. Threshold Functions Can Make Logic Gates with Neurons!
   - Logical NOT
   - Weight: W_a,c = -1; bias W_0,c = -0.5
   - If ( Σ(w) - W_0,c > 0 ) then FIRE, else don't
   - Truth table (NOT A): 0 -> 1; 1 -> 0

57. NOT Gate: Trace
   - A off: 0 - (-0.5) = 0.5 > 0, so output on

58. NOT Gate: Trace
   - A on: -1 - (-0.5) = -0.5 < 0, so output off

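The three gates traced on slides 46-58 fit in a few lines of Python; this sketch just re-uses the firing rule and the weights from those slides.

```python
# Threshold units as logic gates, with the weights and biases from slides 46, 51, and 56.
def fires(inputs, weights, bias):
    """Fire (1) if the weighted input sum minus the bias is above zero."""
    return int(sum(w * a for w, a in zip(weights, inputs)) - bias > 0)

def AND(a, b):
    return fires([a, b], [1, 1], bias=1.5)    # slide 46: W_a = W_b = 1, W_0 = 1.5

def OR(a, b):
    return fires([a, b], [1, 1], bias=0.5)    # slide 51: W_a = W_b = 1, W_0 = 0.5

def NOT(a):
    return fires([a], [-1], bias=-0.5)        # slide 56: W_a = -1, W_0 = -0.5

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b}  AND={AND(a, b)}  OR={OR(a, b)}")
print("NOT(0) =", NOT(0), " NOT(1) =", NOT(1))
```

Printing the truth tables reproduces the traces above.
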
59. Feed-Forward vs. Recurrent Networks
   - Feed-forward
     - No cyclic connections
     - A function of its current inputs
     - No internal state other than the weights of the connections
       - "Out of time"
   - Recurrent
     - Cyclic connections
     - Dynamic behavior
       - Stable
       - Oscillatory
       - Chaotic
     - Response depends on the current state
       - "In time"
     - Short-term memory!

60. Feed-Forward Networks
   - "Knowledge" is represented by the weights on the edges
     - Modeless!
   - "Learning" consists of adjusting weights
   - Customary arrangements
     - One Boolean output for each value
     - Arranged in layers
       - Layer 1 = inputs
       - Layers 2 to (n-1) = hidden
       - Layer n = outputs
         - "Perceptron": a 2-layer feed-forward network

61. Layers
   [Diagram: an input layer feeding a hidden layer feeding the output]

62. Perceptron Learning
   - Gradient descent is used to reduce the error
   - Essentially:
     - New weight = old weight + adjustment
     - Adjustment = α * error * input * d( activation function )
       - α = learning rate
   CENSORED

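Here is a minimal Python sketch of the update rule stated above (new weight = old weight + α × error × input × derivative of the activation), applied to a single sigmoid unit. The learning rate, the epoch count, and the choice of the OR truth table as training data are illustrative assumptions, not values from the slide.

```python
import math, random

# Perceptron-style gradient descent on one sigmoid unit learning logical OR.
random.seed(0)
alpha = 0.5                                                   # learning rate
weights, bias = [random.uniform(-1, 1) for _ in range(2)], random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # logical OR

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for epoch in range(2000):
    for inputs, target in data:
        output = sigmoid(sum(w * a for w, a in zip(weights, inputs)) - bias)
        error = target - output
        gradient = output * (1 - output)            # derivative of the sigmoid at this input
        weights = [w + alpha * error * a * gradient for w, a in zip(weights, inputs)]
        bias = bias - alpha * error * gradient      # bias is subtracted, so its sign flips

for inputs, target in data:
    output = sigmoid(sum(w * a for w, a in zip(weights, inputs)) - bias)
    print(inputs, "->", round(output, 2), "(target", target, ")")
```

After training, the outputs should end up close to the 0/1 targets, since OR is linearly separable and needs no hidden layer.
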
63. Hidden Network Learning
   - Back-propagation
   - Essentially:
     - Start with gradient descent from the output
     - Assign "blame" to the inputting neurons in proportion to their weights
     - Adjust the weights at the previous level using gradient descent based on that "blame"
   CENSORED

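And a minimal sketch of the same idea with one hidden layer: the output unit's error is computed first, each hidden unit is "blamed" in proportion to its weight into the output, and both layers are then adjusted by gradient descent. The 2-3-1 architecture, the XOR training set (chosen because it cannot be learned without a hidden layer), the learning rate, and the epoch count are illustrative assumptions; with these settings the network usually, though not always, converges.

```python
import math, random

# Back-propagation for one hidden layer of sigmoid units.
random.seed(1)
alpha, HIDDEN = 0.5, 3
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR needs a hidden layer

w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_hidden = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_out = random.uniform(-1, 1)

for epoch in range(10000):
    for inputs, target in data:
        # Forward pass.
        hidden = [sigmoid(sum(w * a for w, a in zip(w_hidden[j], inputs)) - b_hidden[j])
                  for j in range(HIDDEN)]
        output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) - b_out)
        # Gradient descent at the output unit.
        delta_out = (target - output) * output * (1 - output)
        # Blame each hidden unit in proportion to its weight into the output.
        delta_hidden = [w_out[j] * delta_out * hidden[j] * (1 - hidden[j]) for j in range(HIDDEN)]
        # Weight adjustments: output layer first, then the hidden layer.
        w_out = [w + alpha * delta_out * hidden[j] for j, w in enumerate(w_out)]
        b_out -= alpha * delta_out
        for j in range(HIDDEN):
            w_hidden[j] = [w + alpha * delta_hidden[j] * a for w, a in zip(w_hidden[j], inputs)]
            b_hidden[j] -= alpha * delta_hidden[j]

for inputs, target in data:
    hidden = [sigmoid(sum(w * a for w, a in zip(w_hidden[j], inputs)) - b_hidden[j]) for j in range(HIDDEN)]
    output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) - b_out)
    print(inputs, "->", round(output, 2), "(target", target, ")")
```
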
64. They Don't Get It Either: Issues That Aren't Well Understood
   - α (learning rate)
   - Depth of the network (number of layers)
   - Size of the hidden layers
     - Overfitting
     - Cross-validation
   - Minimum connectivity
     - Optimal Brain Damage algorithm
   - No extractable model!

65. How Are Neural Nets Different From My Brain?
   - Neural nets are feed-forward
     - Brains can be recurrent, with feedback loops
   - Neural nets do not distinguish between + and - connections
     - In brains, excitatory and inhibitory neurons have different properties
       - Inhibitory neurons are short-distance
   - Neural nets exist "out of time"
     - Our brains clearly do exist "in time"
   - Neural nets learn VERY differently
     - We have very little idea how our brains are learning
   - "Fraser's Rules": "In theory one can, of course, implement biologically realistic neural networks, but this is a mammoth task. All kinds of details have to be gotten right, or you end up with a network that completely decays to unconnectedness, or one that ramps up its connections until it basically has a seizure."

66. Frontiers in AI
   - Applications of the current algorithms
   - New algorithms for determining parameters from training data
     - Forward-backward
     - Backpropagation
   - Better classification of the mysteries of neural networks
   - Pathology modeling in neural networks
   - Evolutionary modeling
