# Chapter 11: Models for chains and trees (Computer vision: models, learning and inference, ©2011 Simon J.D. Prince)


### Transcript

• 1. Computer vision: models, learning and inference. Chapter 11: Models for chains and trees. Please send errata to s.prince@cs.ucl.ac.uk
• 2. Structure
  - Chain and tree models
  - MAP inference in chain models
  - MAP inference in tree models
  - Maximum marginals in chain models
  - Maximum marginals in tree models
  - Models with loops
  - Applications
• 3. Chain and tree models
  - Given a set of measurements x_1...x_N and world states w_1...w_N, infer the world states from the measurements.
  - Problem: if N is large, the model relating the two will have a very large number of parameters.
  - Solution: build sparse models where we only describe subsets of the relations between variables.
• 4. Chain and tree models
  - Chain model: only model connections between each world variable and its preceding and subsequent variables.
  - Tree model: connections between world variables are organized as a tree (no loops). Disregard the directionality of connections for directed models.
• 5. Assumptions. We'll assume that:
  - World states are discrete.
  - There is one observed data variable for each world state.
  - The nth data variable is conditionally independent of all other data variables and world states given its associated world state.
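A sketch of what the third assumption implies for the likelihood, written in the x_n / w_n notation used later in the deck (the slide's own equation is not preserved in this transcript):

$$
Pr(x_{1\ldots N} \mid w_{1\ldots N}) \;=\; \prod_{n=1}^{N} Pr(x_n \mid w_n)
$$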
• 6. Gesture tracking (example application).
• 7. Directed model for chains (hidden Markov model). Figure annotations: compatibility of measurement and world state; compatibility of world state and previous world state.
• 8. Undirected model for chains. Figure annotations: compatibility of measurement and world state; compatibility of world state and previous world state.
• 9. Equivalence of chain models
  - Directed:
  - Undirected:
  - Equivalence:
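For reference, a sketch of the standard chain factorizations this slide compares (my notation; the slide's own equations are not preserved in this transcript):

$$
\text{Directed:}\quad Pr(w_{1\ldots N}, x_{1\ldots N}) \;=\; Pr(w_1) \prod_{n=2}^{N} Pr(w_n \mid w_{n-1}) \prod_{n=1}^{N} Pr(x_n \mid w_n)
$$

$$
\text{Undirected:}\quad Pr(w_{1\ldots N}, x_{1\ldots N}) \;=\; \frac{1}{Z} \prod_{n=1}^{N} \phi_n(x_n, w_n) \prod_{n=2}^{N} \psi_n(w_n, w_{n-1})
$$

Equivalence: choosing $\phi_n(x_n, w_n) = Pr(x_n \mid w_n)$ (with $Pr(w_1)$ absorbed into the first potential) and $\psi_n(w_n, w_{n-1}) = Pr(w_n \mid w_{n-1})$ makes the two identical with $Z = 1$.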
• 10. Chain model for the sign language application
  - Observations are normally distributed, with parameters that depend on the sign k.
  - The world state is categorically distributed, with parameters that depend on the previous world state.
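A sketch of those two distributions (the symbols for the per-sign parameters and the transition parameters are mine, not the slide's):

$$
Pr(x_n \mid w_n = k) = \text{Norm}_{x_n}\!\left[\mu_k, \Sigma_k\right], \qquad
Pr(w_n = k \mid w_{n-1} = j) = \lambda_{jk}
$$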
• 11. Structure (outline as on slide 2; next: MAP inference in chain models).
• 12. MAP inference in the chain model
  - Directed model:
  - MAP inference:
  - Substituting in:
• 13. MAP inference in the chain model
  - Takes the general form:
  - Unary term:
  - Pairwise term:
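A sketch of the objective these two slides refer to, in generic notation (U_n and P_n denote the unary and pairwise terms; the slides' own symbols are not preserved here):

$$
\hat{w}_{1\ldots N} \;=\; \arg\max_{w_{1\ldots N}} \left[ \sum_{n=1}^{N} U_n(w_n) \;+\; \sum_{n=2}^{N} P_n(w_n, w_{n-1}) \right],
$$

$$
U_n(w_n) = \log Pr(x_n \mid w_n), \qquad P_n(w_n, w_{n-1}) = \log Pr(w_n \mid w_{n-1}).
$$

Negating these log terms turns the maximization into the minimum-cost path problem used on the dynamic programming slides that follow.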
• 14. Dynamic programming
  - Maximizes functions of the form given above (a sum of unary and pairwise terms).
  - Set up as a cost for traversing a graph: each path from left to right is one possible configuration of world states.
• 15. Dynamic programming. Algorithm (see the sketch below):
  1. Work through the graph, computing the minimum possible cost to reach each node.
  2. When we get to the last column, find the minimum.
  3. Trace back to see how we got there.
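A minimal Python sketch of this algorithm (my own illustration, not code from the book), assuming unary costs are given as an N x K array and pairwise costs as a K x K array:

```python
import numpy as np

def dp_map(unary, pairwise):
    """Minimum-cost label sequence for a chain model.

    unary:    (N, K) array; unary[n, k] is the cost of label k at variable n.
    pairwise: (K, K) array; pairwise[j, k] is the cost of moving from label j to label k.
    Returns the minimum-cost labels and the total cost.
    """
    N, K = unary.shape
    cost = np.zeros((N, K))               # minimum cost to reach each node
    parent = np.zeros((N, K), dtype=int)  # best previous label for each node

    cost[0] = unary[0]
    for n in range(1, N):
        # candidate[j, k]: reach label j at n-1, then move to label k at n
        candidate = cost[n - 1][:, None] + pairwise
        parent[n] = np.argmin(candidate, axis=0)
        cost[n] = unary[n] + candidate[parent[n], np.arange(K)]

    # find the cheapest final label, then trace back through the parents
    labels = np.zeros(N, dtype=int)
    labels[-1] = int(np.argmin(cost[-1]))
    for n in range(N - 1, 0, -1):
        labels[n - 1] = parent[n, labels[n]]
    return labels, float(cost[-1].min())
```

For the worked example that follows, `pairwise` would be 0 on the diagonal, 2 for labels that differ by one, and a very large value everywhere else.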
• 16. Worked example. Unary costs (see figure). Pairwise costs:
  1. Zero cost to stay at the same label.
  2. Cost of 2 to change the label by 1.
  3. Infinite cost for changing by more than one (not shown).
• 17. Worked example. The minimum cost to reach a node in the first column is just its unary cost.
• 18. Worked example. The minimum cost is the minimum over the two possible routes to get here:
  - Route 1: 2.0 + 0.0 + 1.1 = 3.1
  - Route 2: 0.8 + 2.0 + 1.1 = 3.9
• 19. Worked example. The minimum cost is the minimum over the two possible routes to get here:
  - Route 1: 2.0 + 0.0 + 1.1 = 3.1 (this is the minimum; note it down)
  - Route 2: 0.5 + 2.0 + 1.1 = 3.6
• 20. Worked example. General rule:
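A sketch of that general rule in the notation used above (S_n(w_n) denotes the minimum cost of reaching label w_n at variable n; the slide's own equation is not preserved here):

$$
S_n(w_n) \;=\; U_n(w_n) \;+\; \min_{w_{n-1}} \left[ S_{n-1}(w_{n-1}) + P_n(w_n, w_{n-1}) \right]
$$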
• 21. Worked example. Work through the graph, computing the minimum cost to reach each node.
• 22. Worked example. Keep going until we reach the end of the graph.
• 23. Worked example. Find the minimum possible cost to reach the final column.
• 24. Worked example. Trace back the route that brought us here; this is the minimum-cost configuration.
• 25. Structure (outline as on slide 2; next: MAP inference in tree models).
• 26. MAP inference for trees.
• 27. MAP inference for trees.
• 28. Worked example.
• 29. Worked example. Variables 1-4 proceed as for the chain example.
• 30. Worked example. At variable n = 5 we must consider all pairs of paths coming into the current node (one from each incoming branch).
• 31. Worked example. Variable 6 proceeds as normal. Then we trace back through the variables, splitting at the junction.
• 32. Structure (outline as on slide 2; next: maximum marginals in chain models).
• 33. Marginal posterior inference
  - Start by computing the marginal distribution over the Nth variable.
  - Then we'll consider how to compute the other marginal distributions.
• 34. Computing one marginal distribution
  - Compute the posterior using Bayes' rule:
  - We compute this expression by writing out the joint probability:
• 35. Computing one marginal distribution
  - Problem: computing all K^N states and marginalizing explicitly is intractable.
  - Solution: re-order terms and move the summations to the right.
• 36. Computing one marginal distribution
  - Define a function of variable w1 (the two rightmost terms).
  - Then compute a function of variable w2 in terms of the previous function.
  - This leads to the recursive relation (see the sketch below).
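A sketch of that recursion (f_n is my name for the intermediate function; the slides' own symbols are not preserved here):

$$
f_1(w_1) = Pr(x_1 \mid w_1)\,Pr(w_1), \qquad
f_n(w_n) = Pr(x_n \mid w_n) \sum_{w_{n-1}} Pr(w_n \mid w_{n-1})\, f_{n-1}(w_{n-1})
$$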
• 37. Computing one marginal distribution
  - We work our way through the sequence using this recursion.
  - At the end we normalize the result to compute the posterior.
  - The total number of summations is (N-1)K, as opposed to K^N for the brute-force approach.
• 38. Forward-backward algorithm
  - We could compute the other N-1 marginal posterior distributions using a similar set of computations.
  - However, this is inefficient as much of the computation is duplicated.
  - The forward-backward algorithm computes all of the marginal posteriors at once.
  - Solution: compute all of the first terms using a recursion, compute all of the second terms using a recursion, and take products.
• 39. Forward recursion. Derivation steps: apply the conditional probability rule, then use the conditional independence relations. The result is the same recursion as before.
• 40. Backward recursion. Derivation steps: apply the conditional probability rule, then use the conditional independence relations. The result is another recursion of the same form.
• 41. Forward-backward algorithm. Compute the marginal posterior distribution as a product of two terms:
  - Forward terms:
  - Backward terms:
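A minimal Python sketch of the forward-backward computation under the chain-model assumptions above (my own illustration; `prior`, `trans`, and `like` are assumed names, not from the slides):

```python
import numpy as np

def forward_backward(prior, trans, like):
    """Marginal posteriors for every variable in a chain (HMM).

    prior: (K,) initial distribution Pr(w_1).
    trans: (K, K) transition matrix, trans[j, k] = Pr(w_n = k | w_{n-1} = j).
    like:  (N, K) likelihoods, like[n, k] = Pr(x_n | w_n = k).
    Returns an (N, K) array of marginal posteriors Pr(w_n | x_1..N).
    """
    N, K = like.shape
    alpha = np.zeros((N, K))   # forward messages (unnormalized)
    beta = np.ones((N, K))     # backward messages

    alpha[0] = prior * like[0]
    for n in range(1, N):
        alpha[n] = like[n] * (alpha[n - 1] @ trans)

    for n in range(N - 2, -1, -1):
        beta[n] = trans @ (like[n + 1] * beta[n + 1])

    posterior = alpha * beta
    return posterior / posterior.sum(axis=1, keepdims=True)
```

In practice each alpha[n] and beta[n] would also be normalized at every step to avoid numerical underflow on long chains.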
• 42. Belief propagation
  - The forward-backward algorithm is a special case of a more general technique called belief propagation.
  - The intermediate functions in the forward and backward recursions can be thought of as messages conveying beliefs about the variables.
  - We'll examine the sum-product algorithm.
  - The sum-product algorithm operates on factor graphs.
• 43. Sum product algorithm (the same points as the previous slide, now under the sum-product heading).
• 44. Factor graphs
  - One node for each variable.
  - One node for each function relating variables.
• 45. Sum product algorithm
  - Forward pass: distributes evidence through the graph.
  - Backward pass: collates the evidence.
  - Both phases involve passing messages between nodes:
    - The forward phase can proceed in any order, as long as outgoing messages are not sent until all incoming ones have been received.
    - The backward phase proceeds in the reverse order to the forward phase.
• 46. Sum product algorithm. Three kinds of message:
  - Messages from unobserved variables to functions.
  - Messages from observed variables to functions.
  - Messages from functions to variables.
• 47. Sum product algorithm
  - Message type 1: messages from unobserved variables z to a function g. Take the product of the incoming messages. Interpretation: combining beliefs.
  - Message type 2: messages from observed variables z to a function g. Interpretation: conveys certain belief that the observed values are true.
• 48. Sum product algorithm
  - Message type 3: messages from a function g to a variable z. Takes beliefs from all incoming variables except the recipient and uses the function g to compute a belief about the recipient.
  - Computing marginal distributions: after the forward and backward passes, we compute the marginal distributions as the product of all incoming messages at each variable node (see the sketch below).
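A sketch of the three message types and the marginal computation in common factor-graph notation (my own symbols; the slides' equations are not preserved in this transcript). Here ne(·) denotes the neighbours of a node in the factor graph:

$$
\text{Type 1 (unobserved } z \to g\text{):}\quad \mu_{z \to g}(z) = \prod_{h \in \text{ne}(z)\setminus g} \mu_{h \to z}(z)
$$

$$
\text{Type 2 (observed } z = z^{*} \to g\text{):}\quad \mu_{z \to g}(z) = \delta\!\left(z - z^{*}\right)
$$

$$
\text{Type 3 (function } g \to z\text{):}\quad \mu_{g \to z}(z) = \sum_{\text{ne}(g)\setminus z} g(\cdot) \prod_{y \in \text{ne}(g)\setminus z} \mu_{y \to g}(y)
$$

$$
\text{Marginal:}\quad Pr(z \mid \text{observed data}) \;\propto\; \prod_{g \in \text{ne}(z)} \mu_{g \to z}(z)
$$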
• 49. Sum product: forward pass. Message from x1 to g1. By rule 2:
• 50. Sum product: forward pass. Message from g1 to w1. By rule 3:
• 51. Sum product: forward pass. Message from w1 to g1,2. By rule 1 (the product of all incoming messages):
• 52. Sum product: forward pass. Message from g1,2 to w2. By rule 3:
• 53. Sum product: forward pass. Messages from x2 to g2 and from g2 to w2:
• 54. Sum product: forward pass. Message from w2 to g2,3: the same recursion as in the forward-backward algorithm.
• 55. Sum product: forward pass. Message from w2 to g2,3:
• 56. Sum product: backward pass. Message from wN to gN,N-1:
• 57. Sum product: backward pass. Message from gN,N-1 to wN-1:
• 58. Sum product: backward pass. Message from gn,n-1 to wn-1: the same recursion as in the forward-backward algorithm.
• 59. Sum product: collating evidence
  - The marginal distribution is the product of all incoming messages at a node.
  - Proof:
• 60. Structure (outline as on slide 2; next: maximum marginals in tree models).
• 61. Marginal posterior inference for trees. Apply the sum-product algorithm to the tree-structured graph.
• 62. Structure (outline as on slide 2; next: models with loops).
• 63. Tree-structured graphs. This graph contains loops, but the associated factor graph has the structure of a tree, so we can still use belief propagation.
• 64. Learning in chains and trees
  - Supervised learning (where we know the world states wn) is relatively easy.
  - Unsupervised learning (where we do not know the world states wn) is more challenging. Use the EM algorithm:
    - E-step: compute posterior marginals over states.
    - M-step: update model parameters.
  - For the chain model (hidden Markov model) this is known as the Baum-Welch algorithm.
• 65. Grid-based graphs. Often in vision we have one observation associated with each pixel in the image grid.
• 66. Why not dynamic programming? When we trace back from the final node, the paths are not guaranteed to converge.
• 67. Why not dynamic programming?
• 68. Why not dynamic programming? But:
• 69. Approaches to inference for grid-based models
  1. Prune the graph: remove edges until a tree remains.
• 70. Approaches to inference for grid-based models
  2. Combine variables: merge variables to form compound variables with more states, until what remains is a tree. Not practical for large grids.
• 71. Approaches to inference for grid-based models
  3. Loopy belief propagation: just apply belief propagation. It is not guaranteed to converge, but in practice it works well.
  4. Sampling approaches: draw samples from the posterior (easier for directed models).
  5. Other approaches: tree-reweighted message passing, graph cuts.
• 72. Structure (outline as on slide 2; next: applications).
• 73. Gesture tracking.
• 74. Stereo vision
  - Two images taken from slightly different positions.
  - The matching point in image 2 is on the same scanline as in image 1.
  - The horizontal offset is called the disparity.
  - Disparity is inversely related to depth.
  - Goal: infer disparities wm,n at pixel (m,n) from images x(1) and x(2). Use the likelihood sketched below.
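A sketch of the kind of likelihood this refers to (my own formulation, not the slide's equation; the sign of the disparity shift depends on the camera convention): a pixel in image 1 should resemble the disparity-shifted pixel in image 2, for example

$$
Pr\!\left(x^{(1)}_{m,n} \,\middle|\, w_{m,n} = k\right) \;=\; \text{Norm}_{x^{(1)}_{m,n}}\!\left[\, x^{(2)}_{m,\,n-k},\; \sigma^2 \right]
$$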
• 75. Stereo vision.
• 76. Stereo vision. 1. Independent pixels.
• 77. Stereo vision. 2. Scanlines as a chain model (hidden Markov model).
• 78. Stereo vision. 3. Pixels organized as a tree (from Veksler 2005).
• 79. Pictorial structures.
• 80. Pictorial structures.
• 81. Segmentation.
• 82. Conclusion
  - For the special case of chains and trees, we can perform MAP inference and compute marginal posteriors efficiently.
  - Unfortunately, many vision problems are defined on a pixel grid; this requires special methods.