Pattern Recognition and Machine Learning
Chapter 8: Graphical Models

Bayesian Networks
  Directed Acyclic Graph (DAG)

Bayesian Networks
  General factorization
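The general factorization referred to on this slide (the equation itself was part of the slide image) is the standard one for a directed acyclic graph over x_1, ..., x_K:

  p(\mathbf{x}) = \prod_{k=1}^{K} p(x_k \mid \mathrm{pa}_k)

where pa_k denotes the set of parents of x_k in the graph.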
Bayesian Curve Fitting (1)
  Polynomial

Bayesian Curve Fitting (2)
  Plate

Bayesian Curve Fitting (3)
  Input variables and explicit hyperparameters
Bayesian Curve Fitting: Learning
  Condition on the data

Bayesian Curve Fitting: Prediction
  Predictive distribution
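The predictive distribution named here was shown only as an image; for the polynomial regression model of these slides it has the standard form of the likelihood averaged over the posterior on the weights w (a reconstruction under that assumption, not recovered text):

  p(\hat{t} \mid \hat{x}, \mathbf{x}, \mathbf{t}) = \int p(\hat{t} \mid \hat{x}, \mathbf{w}) \, p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}) \, d\mathbf{w},
  \qquad p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}) \propto p(\mathbf{w}) \prod_{n=1}^{N} p(t_n \mid x_n, \mathbf{w}).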
Generative Models
  Causal process for generating images
Discrete Variables (1)
  General joint distribution over two K-state variables: K^2 - 1 parameters
  Independent joint distribution: 2(K - 1) parameters

Discrete Variables (2)
  General joint distribution over M variables: K^M - 1 parameters
  M-node Markov chain: (K - 1) + (M - 1)K(K - 1) parameters
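A quick check of these counts for small numbers (not on the slides): with K = 2 states and M = 3 variables,

  K^M - 1 = 2^3 - 1 = 7 \ \text{(full joint)}, \qquad (K - 1) + (M - 1)K(K - 1) = 1 + 2 \cdot 2 \cdot 1 = 5 \ \text{(Markov chain)},

so the chain representation grows linearly in M while the full joint grows exponentially.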
Discrete Variables: Bayesian Parameters (1)

Discrete Variables: Bayesian Parameters (2)
  Shared prior
Parameterized Conditional Distributions
  If x_1, ..., x_M are discrete K-state variables, the general conditional p(y | x_1, ..., x_M) has O(K^M) parameters.
  The parameterized form requires only M + 1 parameters.
Linear-Gaussian Models
  Directed graph with vector-valued Gaussian nodes
  Each node is Gaussian; its mean is a linear function of its parents.
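The defining conditional for each node (the equation was lost with the figure) takes the standard linear-Gaussian form

  p(\mathbf{x}_i \mid \mathrm{pa}_i) = \mathcal{N}\!\Bigl(\mathbf{x}_i \,\Big|\, \sum_{j \in \mathrm{pa}_i} \mathbf{W}_{ij}\, \mathbf{x}_j + \mathbf{b}_i,\; \boldsymbol{\Sigma}_i\Bigr),

so that the joint distribution over all nodes is itself Gaussian.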
Conditional Independence
  a is independent of b given c: p(a | b, c) = p(a | c)
  Equivalently: p(a, b | c) = p(a | c) p(b | c)
  Notation: a ⊥⊥ b | c
Conditional Independence: Example 1

Conditional Independence: Example 1

Conditional Independence: Example 2

Conditional Independence: Example 2

Conditional Independence: Example 3
  Note: this is the opposite of Example 1, with c unobserved.

Conditional Independence: Example 3
  Note: this is the opposite of Example 1, with c observed.
“Am I out of fuel?”
  B = Battery (0 = flat, 1 = fully charged)
  F = Fuel Tank (0 = empty, 1 = full)
  G = Fuel Gauge Reading (0 = reads empty, 1 = reads full)

“Am I out of fuel?”
  The probability of an empty tank is increased by observing G = 0.

“Am I out of fuel?”
  The probability of an empty tank is reduced by also observing B = 0.
  This is referred to as “explaining away”.
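A minimal numerical illustration of explaining away in this network. The probability tables below are assumed for illustration (they are not recovered from the slides); the sketch computes p(F = 0 | G = 0) and p(F = 0 | G = 0, B = 0) by brute-force marginalization of p(B) p(F) p(G | B, F).

```python
# Illustrative (assumed) probability tables for the fuel-gauge network.
p_B = {1: 0.9, 0: 0.1}          # battery charged / flat
p_F = {1: 0.9, 0: 0.1}          # tank full / empty
p_G1 = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.1}  # p(G=1 | B, F)

def p_G(g, b, f):
    return p_G1[(b, f)] if g == 1 else 1.0 - p_G1[(b, f)]

def posterior_F0(observed):
    """p(F=0 | observed) by summing the joint p(B) p(F) p(G | B, F)."""
    num = den = 0.0
    for b in (0, 1):
        for f in (0, 1):
            for g in (0, 1):
                # Skip configurations inconsistent with the evidence.
                if any(observed.get(k) is not None and observed[k] != v
                       for k, v in (("B", b), ("F", f), ("G", g))):
                    continue
                joint = p_B[b] * p_F[f] * p_G(g, b, f)
                den += joint
                if f == 0:
                    num += joint
    return num / den

print(posterior_F0({"G": 0}))           # ~0.257: empty tank more likely after G = 0
print(posterior_F0({"B": 0, "G": 0}))   # ~0.111: flat battery "explains away" the reading
```

With these numbers, observing G = 0 raises the probability of an empty tank from 0.1 to about 0.26, and additionally observing a flat battery drops it back to about 0.11.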
D-separation
  A, B, and C are non-intersecting subsets of nodes in a directed graph.
  A path from A to B is blocked if it contains a node such that either
    the arrows on the path meet head-to-tail or tail-to-tail at the node and the node is in the set C, or
    the arrows meet head-to-head at the node, and neither the node nor any of its descendants is in the set C.
  If all paths from A to B are blocked, A is said to be d-separated from B by C.
  If A is d-separated from B by C, then the joint distribution over all the variables in the graph satisfies A ⊥⊥ B | C.

D-separation: Example
D-separation: I.I.D. Data

Directed Graphs as Distribution Filters
The Markov Blanket
  Factors independent of x_i cancel between numerator and denominator.
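The cancellation mentioned here can be written out explicitly (the standard derivation, reproduced rather than recovered from the slide):

  p(x_i \mid x_{\setminus i}) = \frac{\prod_k p(x_k \mid \mathrm{pa}_k)}{\sum_{x_i} \prod_k p(x_k \mid \mathrm{pa}_k)}.

Every factor that does not mention x_i appears in both numerator and denominator and cancels, leaving only p(x_i | pa_i) and the conditionals of the children of x_i; the variables in the surviving factors (parents, children, and co-parents) make up the Markov blanket of x_i.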
Markov Random Fields
  Markov blanket

Cliques and Maximal Cliques
  Clique
  Maximal clique
Joint Distribution
  The joint distribution factorizes over cliques: ψ_C(x_C) is the potential over clique C and Z is the normalization coefficient.
  Note: for M K-state variables, Z contains K^M terms.
  Energies and the Boltzmann distribution
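The factorization and normalization constant referred to above (both shown only as images) take the standard form

  p(\mathbf{x}) = \frac{1}{Z} \prod_{C} \psi_C(\mathbf{x}_C), \qquad Z = \sum_{\mathbf{x}} \prod_{C} \psi_C(\mathbf{x}_C),

and for strictly positive potentials one can write \psi_C(\mathbf{x}_C) = \exp\{-E(\mathbf{x}_C)\}, which gives the Boltzmann-distribution form mentioned on the slide.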
Illustration: Image De-Noising (1)
  Original image; noisy image

Illustration: Image De-Noising (2)

Illustration: Image De-Noising (3)
  Noisy image; restored image (ICM)

Illustration: Image De-Noising (4)
  Restored image (graph cuts); restored image (ICM)
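A minimal sketch of ICM restoration for a binary image x with pixels in {-1, +1} and noisy observation y, under the usual Ising-style energy E(x, y) = h Σ_i x_i − β Σ_{i,j} x_i x_j − η Σ_i x_i y_i. The coefficient values and the toy image are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def icm_denoise(y, beta=1.0, eta=2.1, h=0.0, n_sweeps=10):
    """Iterated conditional modes on an Ising-style de-noising model.
    y: noisy image with pixel values in {-1, +1}; returns the restored image."""
    x = y.copy()
    rows, cols = x.shape
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                # Sum of the 4-neighbourhood of pixel (i, j).
                nb = 0.0
                if i > 0:        nb += x[i - 1, j]
                if i < rows - 1: nb += x[i + 1, j]
                if j > 0:        nb += x[i, j - 1]
                if j < cols - 1: nb += x[i, j + 1]
                # Local energy for x_ij = +1 vs. x_ij = -1; keep the lower one
                # (greedy coordinate-wise minimization).
                e_plus  = h - beta * nb - eta * y[i, j]
                e_minus = -h + beta * nb + eta * y[i, j]
                x[i, j] = 1 if e_plus < e_minus else -1
    return x

# Toy usage: flip 10% of the pixels of a synthetic block image and restore it.
rng = np.random.default_rng(0)
clean = -np.ones((32, 32), dtype=int)
clean[8:24, 8:24] = 1
noisy = clean * np.where(rng.random(clean.shape) < 0.1, -1, 1)
restored = icm_denoise(noisy)
```

ICM is a greedy local search, so it typically removes isolated noise but can get stuck in local minima; graph cuts, as the final slide notes, finds a better (for this energy, globally optimal) restoration.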
Converting Directed to Undirected Graphs (1)

Converting Directed to Undirected Graphs (2)
  Additional links

Directed vs. Undirected Graphs (1)

Directed vs. Undirected Graphs (2)
Inference in Graphical Models
Inference on a Chain
Inference on a Chain
  To compute local marginals:
  - Compute and store all forward messages, μ_α(x_n).
  - Compute and store all backward messages, μ_β(x_n).
  - Compute Z at any node x_m.
  - Compute p(x_n) ∝ μ_α(x_n) μ_β(x_n) for every variable required (see the sketch after the Trees slide below).

Trees
  Undirected tree; directed tree; polytree
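Returning to the chain-inference procedure above: a minimal sketch that computes every single-node marginal and Z for a discrete chain p(x) ∝ ∏_n ψ_n(x_{n-1}, x_n), given the pairwise potentials as K × K matrices (an illustration, not code from the slides).

```python
import numpy as np

def chain_marginals(psis):
    """psis: list of N-1 pairwise potential matrices psi_n(x_{n-1}, x_n).
    Returns (marginals, Z) for the chain p(x) = (1/Z) * prod_n psi_n."""
    n_nodes = len(psis) + 1
    K = psis[0].shape[0]
    # Forward messages: mu_alpha[n] has one entry per state of x_n.
    mu_alpha = [np.ones(K)]
    for psi in psis:
        mu_alpha.append(psi.T @ mu_alpha[-1])
    # Backward messages, filled from the right-hand end of the chain.
    mu_beta = [np.ones(K) for _ in range(n_nodes)]
    for n in range(n_nodes - 2, -1, -1):
        mu_beta[n] = psis[n] @ mu_beta[n + 1]
    Z = float(mu_alpha[-1].sum())          # the same value at every node
    marginals = [mu_alpha[n] * mu_beta[n] / Z for n in range(n_nodes)]
    return marginals, Z

# Toy usage: a 4-node chain of 3-state variables with random positive potentials.
rng = np.random.default_rng(0)
psis = [rng.random((3, 3)) + 0.1 for _ in range(3)]
marginals, Z = chain_marginals(psis)
print(marginals[1].sum())  # each marginal sums to 1
```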
Factor Graphs

Factor Graphs from Directed Graphs

Factor Graphs from Undirected Graphs
The Sum-Product Algorithm (1)
  Objective:
  - obtain an efficient, exact inference algorithm for finding marginals;
  - in situations where several marginals are required, allow computations to be shared efficiently.
  Key idea: the distributive law, e.g. ab + ac = a(b + c).
The Sum-Product Algorithm (2)

The Sum-Product Algorithm (3)

The Sum-Product Algorithm (4)

The Sum-Product Algorithm (5)

The Sum-Product Algorithm (6)

The Sum-Product Algorithm (7)
  Initialization
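The messages that these slides build up (the formulas themselves were part of the figures) are the standard sum-product updates on a tree-structured factor graph:

  \mu_{f \to x}(x) = \sum_{x_1, \dots, x_M} f(x, x_1, \dots, x_M) \prod_{m \in \mathrm{ne}(f) \setminus \{x\}} \mu_{x_m \to f}(x_m),
  \qquad
  \mu_{x \to f}(x) = \prod_{l \in \mathrm{ne}(x) \setminus \{f\}} \mu_{f_l \to x}(x),

with leaf initialization \mu_{x \to f}(x) = 1 for a leaf variable node and \mu_{f \to x}(x) = f(x) for a leaf factor node, and marginals given by p(x) \propto \prod_{s \in \mathrm{ne}(x)} \mu_{f_s \to x}(x).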
The Sum-Product Algorithm (8)
  To compute local marginals:
  - Pick an arbitrary node as root.
  - Compute and propagate messages from the leaf nodes to the root, storing received messages at every node.
  - Compute and propagate messages from the root to the leaf nodes, storing received messages at every node.
  - Compute the product of received messages at each node for which the marginal is required, and normalize if necessary.

Sum-Product: Example (1)
Sum-Product: Example (2)

Sum-Product: Example (3)

Sum-Product: Example (4)
The Max-Sum Algorithm (1)
  Objective: an efficient algorithm for finding
  - the value x_max that maximizes p(x);
  - the value of p(x_max).
  In general, the maxima of the individual marginals ≠ the joint maximum.
The Max-Sum Algorithm (2)
  Maximizing over a chain (max-product)

The Max-Sum Algorithm (3)
  Generalizes to tree-structured factor graphs, maximizing as close to the leaf nodes as possible.
The Max-Sum Algorithm (4)
  Max-Product → Max-Sum
  For numerical reasons, work with ln p(x), turning products into sums.
  Again, use the distributive law: max(a + b, a + c) = a + max(b, c).
The Max-Sum Algorithm (5)
  Initialization (leaf nodes)
  Recursion

The Max-Sum Algorithm (6)
  Termination (root node)
  Back-track, for all nodes i with l factor nodes to the root (l = 0).

The Max-Sum Algorithm (7)
  Example: Markov chain
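For the Markov-chain example on this slide, the max-sum recursion reduces to the Viterbi algorithm. A minimal sketch, assuming unary log-potentials `log_phi` and a shared pairwise log-potential `log_psi` (these names and the toy data are illustrative, not from the slides).

```python
import numpy as np

def viterbi(log_phi, log_psi):
    """Max-sum on a chain: log_phi is an (N, K) array of unary log-potentials,
    log_psi a (K, K) array of pairwise log-potentials shared across links.
    Returns the jointly most probable state sequence and its log-score."""
    N, K = log_phi.shape
    omega = log_phi[0].copy()            # max-sum messages at node 0
    backptr = np.zeros((N, K), dtype=int)
    for n in range(1, N):
        # scores[prev, cur] = best log-score ending in (x_{n-1}=prev, x_n=cur)
        scores = omega[:, None] + log_psi + log_phi[n][None, :]
        backptr[n] = scores.argmax(axis=0)
        omega = scores.max(axis=0)
    # Termination at the final node, then back-track the maximizing path.
    states = np.zeros(N, dtype=int)
    states[-1] = int(omega.argmax())
    for n in range(N - 1, 0, -1):
        states[n - 1] = backptr[n, states[n]]
    return states, float(omega.max())

# Toy usage: 5 nodes, 3 states, random log-potentials.
rng = np.random.default_rng(0)
path, score = viterbi(rng.normal(size=(5, 3)), rng.normal(size=(3, 3)))
```

Storing the back-pointers during the forward pass is exactly the book-keeping the "Back-track" slide above refers to.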
The Junction Tree Algorithm
  - Exact inference on general graphs.
  - Works by turning the initial graph into a junction tree and then running a sum-product-like algorithm.
  - Intractable on graphs with large cliques.

Loopy Belief Propagation
  - Sum-product on general graphs.
  - Initial unit messages are passed across all links, after which messages are passed around until convergence (not guaranteed!).
  - Approximate but tractable for large graphs.
  - Sometimes works well, sometimes not at all.