1. Conditional Random Fields - A probabilistic graphical model
   Stefan Mutter
2. Motivation
   (overview diagram: Bayesian Network, Naive Bayes, Markov Random Field, Hidden Markov Model, Logistic Regression, Linear Chain Conditional Random Field, General Conditional Random Field)
3. Outline
   - different views on building a conditional random field (CRF)
     - from directed to undirected graphical models
     - from generative to discriminative models
     - sequence models
       - from HMMs to CRFs
       - CRFs and maximum entropy markov models (MEMMs)
   - parameter estimation / inference
   - applications
4. Overview: directed graphical models
   (same overview diagram as slide 2)
5. Bayesian Networks: directed graphical models
   - in general:
     - a graphical model is a family of probability distributions that factorise according to an underlying graph
     - one-to-one correspondence between nodes and random variables
     - a set V of random variables consisting of a set X of input variables and a set Y of output variables to predict
   - independence assumption using a topological ordering:
     - a node v is conditionally independent of its predecessors given its direct parents π(v) (local Markov property)
   - direct probabilistic interpretation:
     - the family of distributions factorises into p(v₁, ..., vₙ) = ∏_{v∈V} p(v | π(v)), as sketched below
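A minimal sketch of this factorisation in code (not from the slides; the rain/sprinkler network and all probabilities are invented for illustration): the joint distribution is just the product of node-given-parents factors.

```python
# Toy Bayesian network: Rain -> WetGrass <- Sprinkler (illustrative numbers).
# Each node carries a conditional probability table given its parents.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.1, False: 0.9}
# p(wet | rain, sprinkler), keyed by (wet, rain, sprinkler)
p_wet = {
    (True,  True,  True):  0.99, (False, True,  True):  0.01,
    (True,  True,  False): 0.90, (False, True,  False): 0.10,
    (True,  False, True):  0.85, (False, False, True):  0.15,
    (True,  False, False): 0.05, (False, False, False): 0.95,
}

def joint(rain, sprinkler, wet):
    """Joint probability as the product of node-given-parents factors."""
    return p_rain[rain] * p_sprinkler[sprinkler] * p_wet[(wet, rain, sprinkler)]

print(joint(True, False, True))  # 0.2 * 0.9 * 0.90 = 0.162
```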
6. Overview: undirected graphical models
   (same overview diagram as slide 2)
7. Markov Random Fields: undirected graphical models
   - an undirected graph for the joint probability p(x) allows no direct probabilistic interpretation
     - define potential functions Ψ_A on the maximal cliques A
       - each maps a joint assignment of its clique to a non-negative real number
       - this requires normalisation:
         p(x) = (1/Z) ∏_A Ψ_A(x_A),  with  Z = Σ_x ∏_A Ψ_A(x_A)
   - (a brute-force sketch of the normalisation follows below)
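A minimal sketch of this in code, assuming a toy three-node chain with a hand-picked potential; it shows why the potentials alone carry no probabilistic meaning until Z normalises them.

```python
import itertools

# Toy chain x1 - x2 - x3 with binary variables; the maximal cliques are the
# edges {x1,x2} and {x2,x3}. The potential favours agreeing neighbours but is
# not itself a probability.
def psi(a, b):
    return 2.0 if a == b else 1.0

def unnormalised(x1, x2, x3):
    return psi(x1, x2) * psi(x2, x3)

# Normalisation constant Z: sum of the clique products over all assignments.
Z = sum(unnormalised(*x) for x in itertools.product([0, 1], repeat=3))

def p(x1, x2, x3):
    return unnormalised(x1, x2, x3) / Z

print(p(0, 0, 0))  # 4/18 ≈ 0.222; the probabilities of all assignments sum to 1
```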
8. Markov Random Fields and CRFs
   - a CRF is a Markov Random Field globally conditioned on X
   - what do the potential functions Ψ_A look like?
9. Overview: generative vs. discriminative models
   (same overview diagram as slide 2)
10. Generative models
   - based on the joint probability distribution p(y, x)
   - include a model of p(x), which is not needed for classification
   - interdependent features:
     - either enhance the model structure to represent them
       - complexity problems
     - or make simplifying independence assumptions
       - e.g. naive bayes: once the class label is known, all features are independent
11. Discriminative models
   - based directly on the conditional probability p(y|x)
   - need no model for p(x)
   - simply: make independence assumptions among y, but not among x
   - in general: any joint model induces a conditional p(y|x) = p(y, x) / Σ_{y'} p(y', x), computed by inference; the conditional approach has more freedom to fit the data
12. Naive bayes and logistic regression (1)
   - naive bayes and logistic regression are a generative-discriminative pair
   - naive bayes: p(y, x) = p(y) ∏_k p(x_k | y)
   - it can be shown that a gaussian naive bayes (GNB) classifier implies the parametric form of p(y|x) of its discriminative pair, logistic regression (LR)!
   - LR is an MRF globally conditioned on X
   - use log-linear models as potential functions in CRFs
   - LR is a very simple CRF (see the sketch below)
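As a sketch of that last point, here is logistic regression written as a log-linear model, i.e. a CRF whose graph is a single output node conditioned on x; the feature names and weights are invented for illustration.

```python
import math

# Logistic regression as p(y|x) ∝ exp(Σ_k λ_k f_k(y, x)) with illustrative weights.
weights = {("y=1", "bias"): -0.5, ("y=1", "x1"): 1.2, ("y=1", "x2"): -0.7}

def score(y, x):
    f = {"bias": 1.0, **x}
    return sum(weights.get((f"y={y}", name), 0.0) * v for name, v in f.items())

def p(y, x):
    # local normalisation over the single output variable y only
    exps = {label: math.exp(score(label, x)) for label in (0, 1)}
    return exps[y] / sum(exps.values())

print(p(1, {"x1": 2.0, "x2": 1.0}))  # sigmoid(-0.5 + 1.2*2 - 0.7*1) ≈ 0.769
```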
13. Naive bayes and logistic regression (2)
   - if the GNB assumptions hold, then GNB and LR converge asymptotically toward identical classifiers
   - in a generative model, the set of parameters must represent both the input distribution and the conditional well
   - discriminative models are not as strongly tied to their input distribution
     - e.g. LR fits its parameters to the data even though the naive bayes assumption might be violated
   - in other words: there are more (complex) joint models than GNB whose conditionals also have the "LR form"
   - GNB and LR mirror the relationship between the HMM and the linear chain CRF
14. Overview: sequence models
   (same overview diagram as slide 2)
15. Sequence models: HMMs
   - power of graphical models: model many interdependent variables
   - an HMM models the joint distribution p(y, x) = ∏_t p(y_t | y_{t-1}) p(x_t | y_t)
     - it uses two independence assumptions to do so tractably:
       - given its direct predecessor, each state is independent of its ancestors
       - each observation depends only on the current state
   - (a sketch of this factorisation follows below)
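A minimal sketch (all probabilities are toy numbers, invented for illustration) of the joint distribution these two assumptions yield:

```python
# Two hidden states "A"/"B", two observation symbols "u"/"v".
init = {"A": 0.6, "B": 0.4}                      # p(y_1)
trans = {("A", "A"): 0.7, ("A", "B"): 0.3,       # p(y_t | y_{t-1})
         ("B", "A"): 0.4, ("B", "B"): 0.6}
emit = {("A", "u"): 0.5, ("A", "v"): 0.5,        # p(x_t | y_t)
        ("B", "u"): 0.1, ("B", "v"): 0.9}

def hmm_joint(states, observations):
    """p(y, x) = p(y_1) p(x_1|y_1) * prod_t p(y_t|y_{t-1}) p(x_t|y_t)."""
    p = init[states[0]] * emit[(states[0], observations[0])]
    for prev, cur, obs in zip(states, states[1:], observations[1:]):
        p *= trans[(prev, cur)] * emit[(cur, obs)]
    return p

print(hmm_joint(["A", "A", "B"], ["u", "v", "v"]))  # 0.6*0.5 * 0.7*0.5 * 0.3*0.9
```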
16. From HMMs to linear chain CRFs (1)
   - key: the conditional distribution p(y|x) of an HMM is a CRF with a particular choice of feature functions
     - parameters are not required to be log probabilities, therefore introduce a normalisation constant Z:
       p(y, x) = (1/Z) ∏_t exp{ Σ_{i,j} λ_ij 1{y_t = i} 1{y_{t-1} = j} + Σ_{i,o} μ_oi 1{y_t = i} 1{x_t = o} }
     - using feature functions f_k(y_t, y_{t-1}, x_t), this becomes:
       p(y, x) = (1/Z) ∏_t exp{ Σ_k λ_k f_k(y_t, y_{t-1}, x_t) }
17. From HMMs to linear chain CRFs (2)
   - last step: write the conditional probability for the HMM:
     p(y|x) = p(y, x) / Σ_{y'} p(y', x) = ∏_t exp{ Σ_k λ_k f_k(y_t, y_{t-1}, x_t) } / Σ_{y'} ∏_t exp{ Σ_k λ_k f_k(y'_t, y'_{t-1}, x_t) }
   - this is a linear chain CRF that includes only HMM features; richer features are possible (a sketch of the rewriting follows below)
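The following sketch rewrites the toy HMM tables from the earlier example with indicator feature functions and weights λ_k set to log probabilities; with exactly these features and weights the CRF factor reproduces the HMM factor, which is the point of the derivation.

```python
import math

# Same toy transition/emission tables as in the HMM sketch above.
trans = {("A", "A"): 0.7, ("A", "B"): 0.3, ("B", "A"): 0.4, ("B", "B"): 0.6}
emit = {("A", "u"): 0.5, ("A", "v"): 0.5, ("B", "u"): 0.1, ("B", "v"): 0.9}

# One indicator feature per (transition) and per (state, observation) pair;
# its weight is the log of the corresponding HMM probability.
lam = {("trans",) + k: math.log(p) for k, p in trans.items()}
lam.update({("emit",) + k: math.log(p) for k, p in emit.items()})

def local_score(y_prev, y_cur, x_cur):
    # Σ_k λ_k f_k(y_t, y_{t-1}, x_t): with indicator features, exactly one
    # transition feature and one emission feature fire per position.
    return lam[("trans", y_prev, y_cur)] + lam[("emit", y_cur, x_cur)]

# exp(local_score) recovers the HMM factor p(y_t|y_{t-1}) * p(x_t|y_t)
print(math.exp(local_score("A", "B", "v")))  # 0.3 * 0.9 = 0.27
```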
18. Linear chain conditional random fields
   - definition:
     p(y|x) = (1/Z(x)) ∏_{t=1}^T exp{ Σ_k λ_k f_k(y_t, y_{t-1}, x_t) },
     with Z(x) = Σ_y ∏_{t=1}^T exp{ Σ_k λ_k f_k(y_t, y_{t-1}, x_t) }
   - for general CRFs, use arbitrary cliques
   - (a brute-force sketch of the definition follows below)
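A brute-force sketch of this definition (weights are toy values; Z(x) is computed by enumerating all label sequences, which is only feasible for tiny examples; real implementations compute Z(x) with the forward algorithm):

```python
import itertools
import math

LABELS = ["A", "B"]  # illustrative label set

def local_score(y_prev, y_cur, x_cur, lam):
    # Σ_k λ_k f_k(y_t, y_{t-1}, x_t) with sparse indicator features
    return (lam.get(("trans", y_prev, y_cur), 0.0)
            + lam.get(("emit", y_cur, x_cur), 0.0))

def unnorm(y, x, lam):
    """prod_t exp{Σ_k λ_k f_k(...)}, with a dummy START label before y_1."""
    return math.exp(sum(local_score(p, c, o, lam)
                        for p, c, o in zip(("START",) + tuple(y), y, x)))

def p_cond(y, x, lam):
    Z = sum(unnorm(cand, x, lam)
            for cand in itertools.product(LABELS, repeat=len(x)))
    return unnorm(y, x, lam) / Z

lam = {("trans", "START", "A"): 0.5, ("trans", "A", "B"): 1.0,
       ("emit", "A", "u"): 2.0, ("emit", "B", "v"): 2.0}
print(p_cond(("A", "B"), ("u", "v"), lam))
```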
19. Side trip: maximum entropy markov models
   - entropy: a measure of the uniformity of a distribution
   - a maximum entropy model maximises entropy, subject to constraints imposed by the training data
   - model the conditional probability of reaching a state given an observation o and the previous state s', instead of joint probabilities
     - observations sit on transitions
     - split P(s | s', o) into |S| separately trained transition functions P_{s'}(s | o)
       - this leads to per-state normalisation
20. Side trip: label bias problem
   - CRFs are like log-linear models, but per-state normalised models suffer from the label bias problem:
     - per-state normalisation requires that the probabilities of transitions leaving a state must sum to one
       - conservation of probability mass
       - states with only one outgoing transition ignore the observation (see the numeric sketch below)
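A small numeric sketch of per-state normalisation at work (all states and weights invented for illustration): a state with several successors can react to the observation, while a state with a single successor cannot.

```python
import math

def memm_transition(scores_from_state, observation_weight):
    """Per-state normalisation: the transitions leaving one state sum to 1."""
    exps = {s: math.exp(w * observation_weight)
            for s, w in scores_from_state.items()}
    Z = sum(exps.values())
    return {s: v / Z for s, v in exps.items()}

# State with two successors: the observation can shift probability mass.
print(memm_transition({"s1": 1.0, "s2": -1.0}, observation_weight=3.0))
# State with one successor: always {"s3": 1.0}, however strong the evidence
# against it; the observation is ignored. This is the label bias problem.
print(memm_transition({"s3": 1.0}, observation_weight=-100.0))
```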
21. Inference in a linear chain CRF
   - slight variants of the HMM algorithms:
     - Viterbi: use the recursion from the HMM,
     - but define the per-position factor Ψ_t(j, i, x_t) = exp{ Σ_k λ_k f_k(j, i, x_t) },
     - because the CRF model can be written as p(y|x) = (1/Z(x)) ∏_t Ψ_t(y_t, y_{t-1}, x_t), where Z(x) = Σ_y ∏_t Ψ_t(y_t, y_{t-1}, x_t)
   - (a decoding sketch follows below)
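A sketch of Viterbi decoding with the factors Ψ_t defined above (labels, features, and weights are invented toy values): the recursion is identical to the HMM's, except that the per-position quantity is an unnormalised factor rather than a probability.

```python
import math

LABELS = ["A", "B"]  # illustrative label set

def factor(y_prev, y_cur, x_cur, lam):
    """Psi_t(j, i, x_t) = exp{Σ_k λ_k f_k(j, i, x_t)} with sparse features."""
    return math.exp(lam.get(("trans", y_prev, y_cur), 0.0)
                    + lam.get(("emit", y_cur, x_cur), 0.0))

def viterbi(x, lam):
    # delta[t][j]: best score of any label prefix ending in label j at position t
    delta = [{j: factor("START", j, x[0], lam) for j in LABELS}]
    back = []
    for x_t in x[1:]:
        back.append({})
        row = {}
        for j in LABELS:
            best_i = max(LABELS, key=lambda i: delta[-1][i] * factor(i, j, x_t, lam))
            row[j] = delta[-1][best_i] * factor(best_i, j, x_t, lam)
            back[-1][j] = best_i
        delta.append(row)
    # backtrace from the best final label
    y = [max(LABELS, key=lambda j: delta[-1][j])]
    for pointers in reversed(back):
        y.append(pointers[y[-1]])
    return list(reversed(y))

lam = {("emit", "A", "u"): 2.0, ("emit", "B", "v"): 2.0, ("trans", "A", "B"): 1.0}
print(viterbi(["u", "u", "v"], lam))  # ['A', 'A', 'B']
```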
22. Parameter estimation in general
   - so far, one major drawback remains:
   - generative models tend to have a higher asymptotic error, but
   - a generative model approaches its asymptotic error faster than a discriminative one, with a number of training examples logarithmic rather than linear in the number of parameters
   - remember: discriminative models make no independence assumptions for the observations x
23. Principles in parameter estimation
   - basic principle: maximum likelihood estimation with the conditional log likelihood of the training data, ℓ(λ) = Σ_i log p(y⁽ⁱ⁾ | x⁽ⁱ⁾)
     - advantage: the conditional log likelihood is concave, therefore every local optimum is a global one
   - use gradient-based optimisation: quasi-Newton methods
   - runtime is in O(tm²ng): t length of the sequences, m number of labels, n number of training instances, g number of required gradient computations (a gradient sketch follows below)
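As a sketch of the gradient such optimisers consume: the derivative of the conditional log likelihood with respect to λ_k is the observed feature count minus the expected count under p(y|x). The toy version below (invented labels and features) computes the expectations by brute-force enumeration; real trainers obtain them from forward-backward and hand the gradient to a quasi-Newton method such as L-BFGS.

```python
import itertools
import math

LABELS = ["A", "B"]  # illustrative label set

def features(y, x):
    """Sparse feature counts Σ_t f_k(y_t, y_{t-1}, x_t) for one sequence."""
    counts = {}
    for p, c, o in zip(("START",) + tuple(y), y, x):
        for k in (("trans", p, c), ("emit", c, o)):
            counts[k] = counts.get(k, 0.0) + 1.0
    return counts

def unnorm(y, x, lam):
    return math.exp(sum(lam.get(k, 0.0) * v for k, v in features(y, x).items()))

def gradient(y_obs, x, lam):
    cands = list(itertools.product(LABELS, repeat=len(x)))
    Z = sum(unnorm(y, x, lam) for y in cands)
    grad = dict(features(y_obs, x))      # observed feature counts
    for y in cands:                      # minus expected counts under p(y|x)
        w = unnorm(y, x, lam) / Z
        for k, v in features(y, x).items():
            grad[k] = grad.get(k, 0.0) - w * v
    return grad

print(gradient(("A", "B"), ("u", "v"), lam={}))
```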
24. Application: gene prediction
   - use finite-state CRFs to locate introns and exons in DNA sequences
   - advantages of CRFs:
     - the ability to straightforwardly incorporate homology evidence from protein databases
     - used feature functions:
       - e.g. frequencies of base conjunctions and disjunctions in sliding windows over 20 bases upstream and 40 bases downstream (motivation: splice site detection)
         - how many times did "C or G" occur in the prior 40 bases, with a sliding window of size 5? (see the sketch below)
       - e.g. frequencies of how many times a base appears in a related protein (found via BLAST search)
   - outperforms a 5th-order hidden semi-markov model by a 10% reduction in error of the harmonic mean of precision and recall (86.09 vs. 84.55)
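One plausible reading of that sliding-window feature, sketched in code: the window sizes follow the slide, the DNA string is made up, and this is only an interpretation of the cited feature, not the authors' implementation.

```python
def disjunction_count(sequence, pos, bases=("C", "G"), upstream=40, window=5):
    """Count the size-`window` windows in the prior `upstream` bases in which
    at least one base from the disjunction `bases` appears."""
    region = sequence[max(0, pos - upstream):pos]
    return sum(any(b in bases for b in region[i:i + window])
               for i in range(max(0, len(region) - window + 1)))

dna = "ATGCGTACGATCGATTTACGGATCCATGCATGCACGTGCATTAGC"  # made-up sequence
print(disjunction_count(dna, pos=44))
```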
25. Summary: graphical models
26. The end. Questions?
27. References
   - Charles Sutton and Andrew McCallum. An Introduction to Conditional Random Fields for Relational Learning. In: Introduction to Statistical Relational Learning, edited by Lise Getoor and Ben Taskar. MIT Press, 2006. (including figures and formulae)
   - Hanna Wallach. Efficient Training of Conditional Random Fields. Master's thesis, University of Edinburgh, 2002. http://citeseer.ist.psu.edu/wallach02efficient.html
   - John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of ICML-01, pages 282-289, 2001.
   - Aron Culotta, David Kulp, and Andrew McCallum. Gene Prediction with Conditional Random Fields. Technical Report UM-CS-2005-028, University of Massachusetts, Amherst, April 2005.
28. References (continued)
   - Kevin Murphy. An Introduction to Graphical Models. Intel Research Technical Report, 2001. http://citeseer.ist.psu.edu/murphy01introduction.html
   - Andrew Y. Ng and Michael Jordan. On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes. In NIPS 14, 2002.
   - Tom Minka. Discriminative Models, Not Discriminative Training. Technical report, Microsoft Research Cambridge, 2005.
   - Phil Blunsom. Maximum Entropy Classification. Lecture slides 433-680, 2005. http://www.cs.mu.oz.au/680/lectures/week06a.pdf