MAP Estimation Algorithms in Computer Vision - Part II. M. Pawan Kumar, University of Oxford; Pushmeet Kohli, Microsoft Research
Example: Image Segmentation E(x) =  ∑  c i  x i  +  ∑   c ij  x i (1-x j )   E: {0,1} n   ->   R 0  ->  fg 1  ->  bg Image (D) i i,j n = number of pixels
Example: Image Segmentation E(x) =  ∑  c i  x i  +  ∑   c ij  x i (1-x j )   E: {0,1} n   ->   R 0  ->  fg 1  ->  bg i i,j Unary Cost (c i ) Dark ( negative )  Bright (positive) n = number of pixels
Example: Image Segmentation E(x) =  ∑  c i  x i  +  ∑   c ij  x i (1-x j )   E: {0,1} n   ->   R 0  ->  fg 1  ->  bg i i,j Discontinuity Cost (c ij ) n = number of pixels
Example: Image Segmentation E(x) =  ∑  c i  x i  +  ∑   c ij  x i (1-x j )   E: {0,1} n   ->   R 0  ->  fg 1  ->  bg i i,j Global Minimum (x * ) x *   =  arg min  E(x)   x How to minimize E(x)? n = number of pixels
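To make the energy function concrete before discussing how to minimize it, here is a minimal sketch, not part of the original tutorial, that simply evaluates E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) for a given labelling on a small 4-connected grid; the 2x2 grid, the unary costs and the single discontinuity weight lambda standing in for all c ij are illustrative assumptions.

#include <cstdio>
#include <vector>

// Illustrative sketch: evaluate E(x) = sum_i c_i*x_i + sum_{i,j} c_ij*x_i*(1-x_j)
// on a W x H 4-connected pixel grid, using one discontinuity weight lambda for all c_ij.
double energy(const std::vector<int>& x, const std::vector<double>& c,
              double lambda, int W, int H) {
    double E = 0.0;
    for (int p = 0; p < W * H; ++p) E += c[p] * x[p];               // unary terms c_i * x_i
    for (int r = 0; r < H; ++r) {
        for (int col = 0; col < W; ++col) {
            int p = r * W + col;
            int nbrs[2] = { (col + 1 < W) ? p + 1 : -1,             // right neighbour
                            (r + 1 < H) ? p + W : -1 };             // down neighbour
            for (int q : nbrs)
                if (q >= 0)                                         // both ordered pairs (p,q) and (q,p)
                    E += lambda * (x[p] * (1 - x[q]) + x[q] * (1 - x[p]));
        }
    }
    return E;
}

int main() {
    int W = 2, H = 2;                                               // a toy 2x2 "image"
    std::vector<double> c = { -1.0, -0.5, 0.7, 0.9 };               // dark pixels: negative unary cost
    std::vector<int> x = { 1, 1, 0, 0 };                            // a candidate labelling (0 = fg, 1 = bg)
    std::printf("E(x) = %.2f\n", energy(x, c, 0.5, W, H));
    return 0;
}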
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Recent Advances and Open Problems Connection between st-mincut and energy minimization?
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
The st-Mincut Problem: a directed graph with two special terminal nodes, the source and the sink, where every edge has a non-negative cost (capacity). Source Sink v 1 v 2 2 5 9 4 2 1
The st-Mincut Problem Source Sink v 1 v 2 2 5 9 4 2 1 What is a st-cut?
The st-Mincut Problem Source Sink v 1 v 2 2 5 9 4 2 1 What is a st-cut? An st-cut ( S , T ) divides the nodes between source and sink. What is the cost of a st-cut? Sum of cost of all edges going from S to T 5 + 2 + 9 = 16
The st-Mincut Problem What is a st-cut? An st-cut ( S , T ) divides the nodes between source and sink. What is the cost of a st-cut? Sum of cost of all edges going from S to T What is the st-mincut? st-cut with the minimum cost Source Sink v 1 v 2 2 5 9 4 2 1 2 + 1 + 4 = 7
How to compute the st-mincut? Source Sink v 1 v 2 2 5 9 4 2 1 Solve the dual maximum flow problem: compute the maximum flow between Source and Sink subject to the Constraints: Edges: Flow ≤ Capacity; Nodes: Flow in = Flow out. In every network, the maximum flow equals the cost of the st-mincut (Min-cut/Max-flow Theorem).
Maxflow Algorithms Augmenting Path Based Algorithms: (1) find a path from source to sink with positive capacity, (2) push the maximum possible flow through this path, (3) repeat until no augmenting path can be found. Source Sink v 1 v 2 2 5 9 4 2 1 Algorithms assume non-negative capacity Flow = 0
Maxflow Algorithms Source Sink v 1 v 2 2-2 5-2 9 4 2 1 Algorithms assume non-negative capacity Flow = 0 + 2
Maxflow Algorithms Source Sink v 1 v 2 0 3 9 4 2 1 Algorithms assume non-negative capacity Flow = 2
Maxflow Algorithms Source Sink v 1 v 2 0 3 5 0 2 1 Algorithms assume non-negative capacity Flow = 2 + 4
Maxflow Algorithms Source Sink v 1 v 2 0 3 5 0 2 1 Algorithms assume non-negative capacity Flow = 6
Maxflow Algorithms Source Sink v 1 v 2 0 2 4 0 2+1 1-1 Algorithms assume non-negative capacity Flow = 6 + 1
Maxflow Algorithms Source Sink v 1 v 2 0 2 4 0 3 0 Algorithms assume non-negative capacity Flow = 7
History of Maxflow Algorithms [Slide credit: Andrew Goldberg] Augmenting Path  and  Push-Relabel n:  # nodes m:  # edges U:  maximum edge weight Algorithms assume non-negative edge weights
Augmenting Path based Algorithms a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Ford Fulkerson:  Choose  any  augmenting path
Augmenting Path based Algorithms a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Bad Augmenting Paths Ford Fulkerson:  Choose  any  augmenting path
a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Augmenting Path based Algorithms Bad Augmenting Path Ford Fulkerson:  Choose  any  augmenting path
a 1 a 2 999 0 Sink Source 1000 1000 999 1 Augmenting Path based Algorithms Ford Fulkerson:  Choose  any  augmenting path
Augmenting Path based Algorithms a 1 a 2 999 0 Sink Source 1000 1000 999 1 Ford Fulkerson:  Choose  any  augmenting path  n:  # nodes m:  # edges We will have to perform 2000 augmentations! Worst case complexity: O (m x Total_Flow) (Pseudo-polynomial bound: depends on flow)
Augmenting Path based Algorithms Dinic:  Choose  shortest  augmenting path  n:  # nodes m:  # edges Worst case Complexity: O (m n 2 ) a 1 a 2 1000 1 Sink Source 1000 1000 1000 0
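For readers who want to reproduce the walkthrough above, here is a small self-contained sketch of a BFS-based augmenting-path (Edmonds-Karp style) maxflow, not the tutorial's own code, run on the example graph from the earlier slides (source, v 1 , v 2 , sink with capacities 2, 5, 9, 4, 2, 1); it should report a maximum flow of 7, matching the st-mincut cost.

#include <algorithm>
#include <cstdio>
#include <queue>
#include <vector>

// Edmonds-Karp: repeatedly find a shortest augmenting path with BFS and
// push the bottleneck flow along it, until no augmenting path remains.
int maxflow(std::vector<std::vector<int>> cap, int s, int t) {
    int n = (int)cap.size(), flow = 0;
    while (true) {
        std::vector<int> parent(n, -1);
        parent[s] = s;
        std::queue<int> q;
        q.push(s);
        while (!q.empty() && parent[t] == -1) {            // BFS on the residual graph
            int u = q.front(); q.pop();
            for (int v = 0; v < n; ++v)
                if (parent[v] == -1 && cap[u][v] > 0) { parent[v] = u; q.push(v); }
        }
        if (parent[t] == -1) return flow;                  // no augmenting path left
        int bottleneck = 1 << 30;
        for (int v = t; v != s; v = parent[v])             // find the bottleneck capacity
            bottleneck = std::min(bottleneck, cap[parent[v]][v]);
        for (int v = t; v != s; v = parent[v]) {           // update residual capacities
            cap[parent[v]][v] -= bottleneck;
            cap[v][parent[v]] += bottleneck;
        }
        flow += bottleneck;
    }
}

int main() {
    // nodes: 0 = source, 1 = v1, 2 = v2, 3 = sink
    std::vector<std::vector<int>> cap(4, std::vector<int>(4, 0));
    cap[0][1] = 2; cap[1][3] = 5;                          // source -> v1, v1 -> sink
    cap[0][2] = 9; cap[2][3] = 4;                          // source -> v2, v2 -> sink
    cap[1][2] = 2; cap[2][1] = 1;                          // pairwise edges
    std::printf("maxflow = %d\n", maxflow(cap, 0, 3));     // expected: 7
    return 0;
}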
Maxflow in Computer Vision: specialized maxflow algorithms exist for the grid-structured, sparsely connected graphs typical of vision problems; efficient code is available on the web: http://www.adastral.ucl.ac.uk/~vladkolm/software.html
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
St-mincut and Energy Minimization E(x) = ∑ i c i x i + ∑ i,j c ij x i (1-x j ), c ij ≥ 0, E: {0,1} n -> R. Minimizing a Quadratic Pseudoboolean function E(x). Pseudoboolean? Functions of boolean variables. Polynomial time st-mincut algorithms require non-negative edge weights. T S st-mincut
So how does this work? Construct a graph such that (1) every st-cut corresponds to an assignment of x and (2) the cost of the cut equals the energy E(x) of that assignment; the st-mincut then gives the minimum-energy Solution. T S st-mincut
Graph Construction Sink (1) Source (0)   a 1 a 2 E(a 1 ,a 2 )
Graph Construction Sink (1) Source (0)   a 1 a 2 E(a 1 ,a 2 ) =  2a 1 2
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1  +  5ā 1 2 5 Sink (1) Source (0)
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1  + 5ā 1 +  9a 2  + 4ā 2 2 5 9 4 Sink (1) Source (0)
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1  + 5ā 1 + 9a 2  + 4ā 2  +  2a 1 ā 2 2 5 9 4 2 Sink (1) Source (0)
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1  + 5ā 1 + 9a 2  + 4ā 2  + 2a 1 ā 2  +   ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2 a 1   + 5 ā 1 + 9 a 2   + 4 ā 2   + 2 a 1 ā 2   +   ā 1 a 2 2 5 9 4 2 1 a 1  = 1  a 2  = 1 E   (1,1) = 11 Cost of cut = 11 Sink (1) Source (0)
Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2 a 1   + 5 ā 1 + 9 a 2   + 4 ā 2   + 2 a 1 ā 2   +   ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)   a 1  = 1  a 2  = 0 E   (1,0) = 8 st-mincut cost = 8
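Since there are only two variables, the claim that the st-mincut cost (8) equals the global minimum of E can be checked by exhaustive enumeration; a tiny self-contained check, not from the original slides:

#include <cstdio>

// E(a1,a2) = 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2
int E(int a1, int a2) {
    return 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2;
}

int main() {
    for (int a1 = 0; a1 <= 1; ++a1)
        for (int a2 = 0; a2 <= 1; ++a2)
            std::printf("E(%d,%d) = %d\n", a1, a2, E(a1, a2));
    // prints E(0,0)=9, E(0,1)=15, E(1,0)=8, E(1,1)=11: the minimum 8 at (1,0)
    // matches the st-mincut cost of 8 found by the graph construction.
    return 0;
}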
Energy Function Reparameterization Two functions E 1 and E 2 are reparameterizations if E 1 ( x ) = E 2 ( x ) for all x . For instance: E 1 (a 1 ) = 1 + 2a 1 + 3ā 1 and E 2 (a 1 ) = 3 + ā 1 : for a 1 = 0 (ā 1 = 1), E 1 = 4 and E 2 = 4; for a 1 = 1 (ā 1 = 0), E 1 = 3 and E 2 = 3.
Flow and Reparametrization a 1 a 2 E(a 1 ,a 2 ) = 2a 1  + 5ā 1 + 9a 2  + 4ā 2  + 2a 1 ā 2  +   ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)
Flow and Reparametrization a 1 a 2 E(a 1 ,a 2 ) =  2a 1  + 5ā 1 + 9a 2  + 4ā 2  + 2a 1 ā 2  +   ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)   2a 1  + 5ā 1  = 2(a 1 +ā 1 ) + 3ā 1  = 2 + 3ā 1
Flow and Reparametrization Sink (1) Source (0)   a 1 a 2 E(a 1 ,a 2 ) =  2   + 3ā 1 + 9a 2  + 4ā 2  + 2a 1 ā 2  +   ā 1 a 2 0 3 9 4 2 1 2a 1  + 5ā 1  = 2(a 1 +ā 1 ) + 3ā 1  = 2 + 3ā 1
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 E(a 1 ,a 2 ) = 2   + 3ā 1 +  9a 2  + 4ā 2  + 2a 1 ā 2  +   ā 1 a 2 0 3 9 4 2 1 9a 2  + 4ā 2  = 4(a 2 +ā 2 ) + 5a 2  = 4 + 5a 2
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 E(a 1 ,a 2 ) = 2   + 3ā 1 +  5a 2  + 4   + 2a 1 ā 2  +   ā 1 a 2 0 3 5 0 2 1 9a 2  + 4ā 2  = 4(a 2 +ā 2 ) + 5a 2  = 4 + 5a 2
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 E(a 1 ,a 2 ) = 6   + 3ā 1 + 5a 2  + 2a 1 ā 2  +   ā 1 a 2 0 3 5 0 2 1
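The terminal-edge reparameterizations used on the preceding slides (first for a 1 , then for a 2 ) are the same elementary operation: push flow through a node's two terminal edges and move the pushed amount into the constant. A minimal sketch with hypothetical names:

#include <algorithm>
#include <cstdio>

// For one variable with terminal capacities (cap_src = coefficient of a_i,
// cap_snk = coefficient of its complement), subtract the smaller one from both
// and add it to the constant: c*a + d*abar = min(c,d) + (c-min)*a + (d-min)*abar.
void reparam_terminal(int& cap_src, int& cap_snk, int& constant) {
    int m = std::min(cap_src, cap_snk);
    cap_src -= m;
    cap_snk -= m;
    constant += m;          // exactly the flow pushed source -> a_i -> sink
}

int main() {
    int constant = 0;
    int src1 = 2, snk1 = 5;                 // 2*a1 + 5*abar1
    int src2 = 9, snk2 = 4;                 // 9*a2 + 4*abar2
    reparam_terminal(src1, snk1, constant); // -> 2 + 3*abar1
    reparam_terminal(src2, snk2, constant); // -> 4 + 5*a2 (total constant 6)
    std::printf("constant = %d, a1 terms: %d/%d, a2 terms: %d/%d\n",
                constant, src1, snk1, src2, snk2);
    return 0;
}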
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 E(a 1 ,a 2 ) = 6   +  3ā 1 +  5a 2   +  2a 1 ā 2   +   ā 1 a 2 0 3 5 0 2 1. Reparameterize the pairwise part: 3ā 1 + 5a 2 + 2a 1 ā 2 = 2(ā 1 + a 2 + a 1 ā 2 ) + ā 1 + 3a 2 = 2(1 + ā 1 a 2 ) + ā 1 + 3a 2 , using F1 = ā 1 + a 2 + a 1 ā 2 and F2 = 1 + ā 1 a 2 , which agree on all assignments: (a 1 ,a 2 ) = (0,0): F1 = F2 = 1; (0,1): F1 = F2 = 2; (1,0): F1 = F2 = 1; (1,1): F1 = F2 = 1.
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 E(a 1 ,a 2 ) = 8   +  ā 1 +  3a 2   + 3ā 1 a 2 0 1 3 0 0 3 (using the identity above: 3ā 1 + 5a 2 + 2a 1 ā 2 = 2(1 + ā 1 a 2 ) + ā 1 + 3a 2 ).
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8   + ā 1 + 3a 2  + 3ā 1 a 2 No more augmenting paths possible
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8   + ā 1 + 3a 2  + 3ā 1 a 2 . The Total Flow (the constant 8) is a bound on the optimal solution, and the Residual Graph has only positive coefficients, so inference of the optimal solution becomes trivial because the bound is tight.
Flow and Reparametrization Sink (0) Source (1)   a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8   + ā 1 + 3a 2  + 3ā 1 a 2 . a 1  = 1, a 2  = 0, E(1,0) = 8, st-mincut cost = 8: the Total Flow bounds the optimal solution, the bound is tight, and the optimal labelling is read off the Residual Graph (positive coefficients).
Example: Image Segmentation E(x) =  ∑  c i  x i  +  ∑   c ij  x i (1-x j )   E: {0,1} n   ->   R 0  ->  fg 1  ->  bg i i,j Global Minimum (x * ) x *   =  arg min  E(x)   x How to minimize E(x)?
How does the code look like? Sink (1) Source (0)   Graph *g; For all pixels p  /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q),  cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1)
How does the code look like? Graph *g; For all pixels p  /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q),  cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0)   fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 )
How does the code look like? Graph *g; For all pixels p  /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q),  cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0)   fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q)
How does the code look like? Graph *g; For all pixels p  /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q),  cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0)   fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q) a 1  = bg  a 2  = fg
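A runnable counterpart of the pseudocode above, written for a two-variable example against the interface of the publicly available Boykov-Kolmogorov maxflow library (graph.h); the class and method names used here (Graph, add_node, add_tweights, add_edge, maxflow, what_segment) are my recollection of that library and should be checked against the version you download, and the costs simply reuse the two-variable energy from the earlier slides.

#include <cstdio>
#include "graph.h"                        // Boykov-Kolmogorov maxflow library, assumed to be on the include path

typedef Graph<int, int, int> GraphType;   // edge capacities, terminal capacities and flow are ints here

int main() {
    GraphType g(/*max nodes*/ 2, /*max edges*/ 1);

    int a1 = g.add_node();                // one node per variable / pixel
    int a2 = g.add_node();

    // E(a1,a2) = 2 a1 + 5 ~a1 + 9 a2 + 4 ~a2 + 2 a1 ~a2 + ~a1 a2, with Source = 0, Sink = 1:
    // the coefficient of a_i goes on the source->a_i edge, that of ~a_i on the a_i->sink edge.
    g.add_tweights(a1, 2, 5);
    g.add_tweights(a2, 9, 4);
    // ~a1 a2 (coefficient 1) is the a1->a2 edge, 2 a1 ~a2 is the reverse a2->a1 edge.
    g.add_edge(a1, a2, 1, 2);

    int flow = g.maxflow();               // expected: 8, the minimum of E
    std::printf("maxflow = %d\n", flow);
    std::printf("a1 = %d, a2 = %d\n",
                g.what_segment(a1) == GraphType::SOURCE ? 0 : 1,
                g.what_segment(a2) == GraphType::SOURCE ? 0 : 1);   // expected: a1 = 1, a2 = 0
    return 0;
}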
Image Segmentation in Video Image Flow Global Optimum s t = 0 = 1 E(x)  x * n-links st-cut
Image Segmentation in Video Image Flow Global Optimum
Dynamic Energy Minimization E B computationally expensive operation   E A Recycling Solutions Can we do better? Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) S B S A minimize
Dynamic Energy Minimization E B computationally expensive operation   E A cheaper operation Kohli & Torr (ICCV05, PAMI07) 3x to 100,000x speedup! Reuse flow Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) S B S A minimize Simpler energy E B* differences between A and B A and B similar Reparametrization
Dynamic Energy Minimization Reparametrized  Energy Kohli & Torr (ICCV05, PAMI07) Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) E(a 1 ,a 2 ) = 2a 1  + 5ā 1 + 9a 2  + 4ā 2  +  2a 1 ā 2   +   ā 1 a 2 E(a 1 ,a 2 ) = 8   + ā 1 + 3a 2  + 3ā 1 a 2 Original Energy E(a 1 ,a 2 ) = 2a 1  + 5ā 1 + 9a 2  + 4ā 2  +  7a 1 ā 2   +   ā 1 a 2 E(a 1 ,a 2 ) = 8   + ā 1 + 3a 2  + 3ā 1 a 2  +  5a 1 ā 2   New Energy New Reparametrized  Energy
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
Minimizing Energy Functions: the Space of Function Minimization Problems contains NP-Hard problems (e.g. MAXCUT) as well as families that can be minimized in polynomial time, such as Submodular Functions and Functions defined on trees.
Submodular Set Functions Let E = {a 1 , a 2 , .... a n } be a set. A set function f: 2^E -> ℝ (2^|E| = number of subsets of E).
Submodular Set Functions A set function f: 2^E -> ℝ is submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ E. Important Property: the sum of two submodular functions is submodular.
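The definition can be checked mechanically on a small ground set by encoding subsets as bitmasks; a brief illustrative sketch, where the example function f(A) = sqrt(|A|) is an assumption chosen only because a concave function of the subset size is known to be submodular:

#include <cmath>
#include <cstdio>

// Number of elements in a subset encoded as a bitmask.
int card(unsigned A) { int c = 0; while (A) { c += A & 1u; A >>= 1; } return c; }

// Example set function (illustrative assumption): f(A) = sqrt(|A|).
double f_example(unsigned A) { return std::sqrt((double)card(A)); }

// Check f(A) + f(B) >= f(A u B) + f(A n B) for all subsets A, B of an n-element set.
bool is_submodular(double (*f)(unsigned), int n) {
    for (unsigned A = 0; A < (1u << n); ++A)
        for (unsigned B = 0; B < (1u << n); ++B)
            if (f(A) + f(B) < f(A | B) + f(A & B) - 1e-9)
                return false;
    return true;
}

int main() {
    std::printf("f(A) = sqrt(|A|) submodular on 4 elements: %s\n",
                is_submodular(f_example, 4) ? "yes" : "no");
    return 0;
}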
Minimizing Submodular Functions: general submodular set functions can be minimized in polynomial time; quadratic pseudoboolean submodular functions can be minimized far more efficiently via st-mincut.
Submodular Pseudoboolean Functions: functions defined over boolean vectors x = {x 1 ,x 2 , .... x n }. Definition: every projection f p of f onto two variables (all other variables fixed) is submodular, i.e. f p (0,1) + f p (1,0) ≥ f p (0,0) + f p (1,1).
Quadratic Submodular Pseudoboolean Functions E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) is submodular if θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij.
Quadratic Submodular Pseudoboolean Functions: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) with θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij is Equivalent (transformable) to E(x) = ∑ i c i x i + ∑ i,j c ij x i (1-x j ) with c ij ≥ 0, i.e. All submodular QPBFs are st-mincut solvable.
How are they equivalent? Write A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). Then θ ij (x i ,x j ) = A + (C-A) x i + (D-C) x j + (B+C-A-D) (1-x i ) x j , i.e. θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . In table form (rows x i = 0,1; columns x j = 0,1): [A B; C D] = A + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]: a constant A, a term added if x i = 1 (add C-A), a term added if x j = 1 (add D-C), and a pairwise term whose coefficient satisfies B+C-A-D ≥ 0, which is true from the submodularity of θ ij .
Quadratic Submodular Pseudoboolean Functions: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ), x in {0,1} n , with θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij, is Equivalent (transformable) to an st-mincut problem. T S st-mincut
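The decomposition on the previous slide maps directly onto graph construction; a minimal sketch (the struct and helper names are hypothetical, and only a single pairwise term is handled) that converts A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1) into terminal capacities, one directed pairwise edge of capacity B+C-A-D and a constant:

#include <cstdio>

// One submodular pairwise term theta_ij with values
//   A = theta(0,0), B = theta(0,1), C = theta(1,0), D = theta(1,1), B + C - A - D >= 0.
// Decomposition: theta(xi,xj) = A + (C-A) xi + (D-C) xj + (B+C-A-D)(1-xi) xj.
// Under the Source(0)/Sink(1) convention, a positive coefficient of xi becomes a
// source->i edge, a negative one becomes an i->sink edge (with the constant adjusted),
// and B+C-A-D becomes the capacity of the i->j edge.
struct Term { double constant, src_i, snk_i, src_j, snk_j, edge_ij; };

Term pairwise_to_graph(double A, double B, double C, double D) {
    Term t = { A, 0, 0, 0, 0, B + C - A - D };      // edge_ij >= 0 by submodularity
    double ci = C - A, cj = D - C;                  // unary coefficients of xi and xj
    if (ci >= 0) t.src_i = ci; else { t.snk_i = -ci; t.constant += ci; }
    if (cj >= 0) t.src_j = cj; else { t.snk_j = -cj; t.constant += cj; }
    return t;
}

int main() {
    // Potts-like example (an illustrative assumption): A = D = 0, B = C = 3.
    Term t = pairwise_to_graph(0, 3, 3, 0);
    std::printf("const=%.1f  s->i=%.1f i->t=%.1f  s->j=%.1f j->t=%.1f  i->j=%.1f\n",
                t.constant, t.src_i, t.snk_i, t.src_j, t.snk_j, t.edge_ij);
    return 0;
}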
Minimizing Non-Submodular Functions: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) with θ ij (0,1) + θ ij (1,0) < θ ij (0,0) + θ ij (1,1) for some ij. Minimizing general non-submodular functions is NP-hard; in practice a relaxation of the problem is minimized. [Slide credit: Carsten Rother]
Minimization using Roof-dual Relaxation: the energy is split into unary, pairwise submodular, and pairwise nonsubmodular parts. [Slide credit: Carsten Rother]
Minimization using Roof-dual Relaxation: the relaxation treats the Non-submodular part and the Submodular part separately. [Slide credit: Carsten Rother]
Minimization using Roof-dual Relaxation. Property of the problem: the relaxed problem is submodular and can be minimized exactly using st-mincut.
Minimization using Roof-dual Relaxation. Property of the solution: the solution is partially optimal; for every variable whose label is determined consistently by the relaxation, that label is the optimal label.
Recap: exact minimization of submodular quadratic pseudoboolean functions using st-mincut, and partially optimal solutions of non-submodular functions using the roof-dual relaxation.
But ... many problems need more than binary pairwise energies: multi-label variables x ϵ Labels L = {l 1 , l 2 , … , l k } and higher order terms over cliques c ⊆ V: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) + ∑ c θ c (x c )
Transforming problems in QBFs Multi-label  Functions Pseudoboolean Functions Higher order  Pseudoboolean Functions Quadratic  Pseudoboolean Functions
Higher order to Quadratic: consider f(x) = 0 if all x i = 0, C 1 otherwise, with x ϵ L = {0,1} n (a Higher Order Submodular Function). Then min x f(x) = min x,a ϵ {0,1} [ C 1 a + C 1 ∑ i ā x i ] (a Quadratic Submodular Function): if ∑ x i = 0 the minimum is at a = 0 (ā = 1) and equals 0 = f(x); if ∑ x i ≥ 1 it is at a = 1 (ā = 0) and equals C 1 = f(x).
Higher order to Quadratic: min x f(x) = min x,a [ C 1 a + C 1 ∑ ā x i ]; plotted against ∑ x i , the two terms are the lines for a = 1 (constant C 1 ) and a = 0 (C 1 ∑ x i ), and the construction takes their lower envelope. The lower envelope of concave functions is concave.
Higher order to Quadratic (general form): min x f(x) = min x,a [ f 1 (x) a + f 2 (x) ā ], the lower envelope of the concave functions f 1 (x) and f 2 (x).
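The construction can be sanity-checked by brute force for a small clique: for every labelling x, minimizing the quadratic form over the auxiliary variable a should recover f(x). A short illustrative sketch (the clique size and C 1 are arbitrary choices):

#include <algorithm>
#include <cstdio>

int main() {
    const int n = 3, C1 = 5;                       // clique size and penalty, chosen arbitrarily
    for (unsigned mask = 0; mask < (1u << n); ++mask) {
        int sum_x = 0;
        for (int i = 0; i < n; ++i) sum_x += (mask >> i) & 1;
        int f = (sum_x == 0) ? 0 : C1;             // f(x) = 0 if all x_i = 0, C1 otherwise
        // quadratic form: min over a in {0,1} of C1*a + C1*(1-a)*sum_i x_i
        int quad = std::min(C1 * 1, C1 * sum_x);   // a = 1 vs a = 0
        std::printf("x-mask=%u  f(x)=%d  min_a quadratic=%d\n", mask, f, quad);
    }
    return 0;
}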
Transforming problems in QBFs Multi-label  Functions Pseudoboolean Functions Higher order  Pseudoboolean Functions Quadratic  Pseudoboolean Functions
Multi-label to Pseudo-boolean: So what is the problem? Transform the Multi-label Problem E m (y 1 ,y 2 , ..., y n ), y i ϵ L = {l 1 , l 2 , … , l k }, into a Binary label Problem E b (x 1 ,x 2 , ..., x m ), x i ϵ {0,1}, such that a minimizer of the binary problem yields a minimizer of the multi-label problem.
Multi-label to Pseudo-boolean: Ishikawa's result: E(y) = ∑ i θ i (y i ) + ∑ i,j θ ij (y i ,y j ), y ϵ Labels L = {l 1 , l 2 , … , l k }, can be transformed exactly when θ ij (y i ,y j ) = g(|y i -y j |) for a Convex Function g.
Multi-label to Pseudo-boolean: Schlesinger & Flach ’06: θ ij (l i+1 ,l j ) + θ ij (l i ,l j+1 ) ≥ θ ij (l i ,l j ) + θ ij (l i+1 ,l j+1 ). Covers all Submodular multi-label functions; More general than Ishikawa. E(y) = ∑ i θ i (y i ) + ∑ i,j θ ij (y i ,y j ), y ϵ Labels L = {l 1 , l 2 , … , l k }
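The Schlesinger & Flach condition is easy to test for a concrete pairwise term over ordered labels; a small illustrative sketch that verifies it for a linear (convex) potential and shows that a truncated linear potential violates it (the truncation value 1 and the label count 4 are arbitrary choices):

#include <algorithm>
#include <cstdio>
#include <cstdlib>

// Schlesinger & Flach '06 condition for a pairwise term over ordered labels 0..k-1:
// theta(i+1, j) + theta(i, j+1) >= theta(i, j) + theta(i+1, j+1) for all i, j.
bool is_multilabel_submodular(double (*theta)(int, int), int k) {
    for (int i = 0; i + 1 < k; ++i)
        for (int j = 0; j + 1 < k; ++j)
            if (theta(i + 1, j) + theta(i, j + 1) < theta(i, j) + theta(i + 1, j + 1) - 1e-9)
                return false;
    return true;
}

double linear(int a, int b)    { return std::abs(a - b); }                 // convex: satisfies the condition
double trunc_lin(int a, int b) { return std::min(std::abs(a - b), 1); }    // truncated linear: violates it

int main() {
    std::printf("linear: %d, truncated linear: %d\n",
                (int)is_multilabel_submodular(linear, 4),
                (int)is_multilabel_submodular(trunc_lin, 4));               // expect 1, 0
    return 0;
}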
Multi-label to Pseudo-boolean: Problems: the exact transformations apply only to a restricted class of energy functions and require large graphs (roughly one node per variable-label pair), which motivates the move making algorithms discussed next.
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
St-mincut based Move algorithms: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ), x ϵ Labels L = {l 1 , l 2 , … , l k }. Minimize the multi-label energy by solving a sequence of binary (st-mincut) sub-problems; efficient in practice, but in general only a local minimum is guaranteed.
Move Making Algorithms Solution Space Energy
Move Making Algorithms Search Neighbourhood Current Solution Optimal Move Solution Space Energy
Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move x c ( t ) Solution Space Energy. Key Property of the Move Space: a bigger move space gives better solutions, but makes computing the optimal move harder.
Moves using Graph Cuts: Search Neighbourhood, Current Solution; Space of Solutions (x): L N , Move Space (t): 2 N , where N = Number of Variables and L = Number of Labels.
Moves using Graph Cuts: start from the Current Solution, construct a move function, minimize the move function to get the optimal move, and move to the new solution. How to minimize move functions?
General Binary Moves Minimize over move variables  t  to get the optimal move  x =  t  x 1  + (1-  t )   x 2 New solution Current Solution Second solution E m ( t ) = E( t  x 1  + (1-  t )   x 2 ) Boykov, Veksler and Zabih, PAMI 2001 Move energy is a submodular QPBF (Exact Minimization Possible)
Swap Move: variables labelled α and β can swap their labels. [Boykov, Veksler, Zabih]
Swap Move: Sky, House, Tree, Ground; Swap Sky, House. [Boykov, Veksler, Zabih]
Swap Move: the move energy is submodular if the pairwise potential is a semi-metric: θ ij (l a ,l b ) ≥ 0 and θ ij (l a ,l b ) = 0 ⇔ a = b. Examples: Potts model, Truncated Convex. [Boykov, Veksler, Zabih]
Expansion Move: variables either take the label α or keep their current label. [Boykov, Veksler, Zabih]
Expansion Move: Sky, House, Tree, Ground; Initialize with Tree; Status: Expand Ground, Expand House, Expand Sky. [Boykov, Veksler, Zabih]
Expansion Move: the move energy is submodular if the pairwise potential is a metric (Semi metric + Triangle Inequality): θ ij (l a ,l b ) + θ ij (l b ,l c ) ≥ θ ij (l a ,l c ). Examples: Potts model, Truncated linear; cannot solve truncated quadratic. [Boykov, Veksler, Zabih]
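The expansion-move algorithm's outer structure is simply a sweep over labels, each iteration solving one binary move energy with st-mincut; the sketch below shows only that outer loop, with energy and solve_expansion_move as hypothetical callbacks standing in for the graph-cut move construction described above:

#include <functional>
#include <vector>

// Alpha-expansion outer loop (sketch). For each label alpha, one binary move problem is
// solved; solve_expansion_move(x, alpha) must return the best labelling reachable from x
// by letting every variable either keep its label or switch to alpha (via st-mincut).
std::vector<int> alpha_expansion(
    std::vector<int> x, int num_labels,
    const std::function<double(const std::vector<int>&)>& energy,
    const std::function<std::vector<int>(const std::vector<int>&, int)>& solve_expansion_move) {
    bool improved = true;
    while (improved) {                       // repeat sweeps until no label lowers the energy
        improved = false;
        for (int alpha = 0; alpha < num_labels; ++alpha) {
            std::vector<int> proposal = solve_expansion_move(x, alpha);
            if (energy(proposal) < energy(x)) {   // accept the move only if it helps
                x = proposal;
                improved = true;
            }
        }
    }
    return x;
}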
General Binary Moves: x = t x 1 + (1-t) x 2 ; minimize over move variables t. New solution, First solution, Second solution. Move functions can be non-submodular!! Move Type / First Solution / Second Solution / Guarantee: Expansion: Old solution / All alpha / Metric; Fusion: Any solution / Any solution.
Solving Continuous Problems using Fusion Move x =  t  x 1  + (1-t)   x 2 (Lempitsky et al. CVPR08, Woodford et al. CVPR08) x 1 , x 2  can be continuous F x 1 x 2 x Optical Flow  Example Final Solution Solution from Method 1 Solution from Method 2
Range Moves: the move variables can take more than two values, x = (t == 1) x 1 + (t == 2) x 2 + … + (t == k) x k ; useful for truncated convex pairwise potentials θ ij (y i ,y j ) = min(|y i -y j |, T). O. Veksler, CVPR 2007
Move Algorithms for Solving Higher Order Energies: E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) + ∑ c θ c (x c ), x ϵ Labels L = {l 1 , l 2 , … , l k }, Clique c ⊆ V. Certain classes of higher order potentials yield submodular move energies, so they can be handled within the same move making framework. [Kohli, Kumar and Torr, CVPR07] [Kohli, Ladicky and Torr, CVPR08]
Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
Solving Mixed Programming Problems: E(x, ω) = C(ω) + ∑ i θ i (ω, x i ) + ∑ i,j θ ij (ω, x i ,x j ), where x is a binary image segmentation (x i ∊ {0,1}) and ω is a non-local parameter that lives in some large set Ω; C(ω) is a constant, the θ i (ω, x i ) are unary potentials encoding a Rough Shape Prior (e.g. a Stickman Model with Pose ω), and the pairwise potentials are ≥ 0.
Open Problems: Submodular Functions and st-mincut: characterizing exactly which energy functions are st-mincut Equivalent, i.e. can be transformed into submodular quadratic pseudoboolean functions and minimized exactly.
Minimizing General Higher Order Functions
Summary: a Labelling Problem is turned, by an Exact Transformation (global optimum) or a Relaxed transformation (partially optimal), into a Submodular Quadratic Pseudoboolean Function that is solved with st-mincut (T, S); Move making algorithms solve a sequence of such Sub-problems.
Thanks. Questions?
Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels E(x 1 ,x 2 ,x 3 ) =  θ 12  (x 1 ,x 2 ) +  θ 23  (x 2 ,x 3 )  θ ij  (x i ,x j )  = 0 if x i =x j C otherwise { E(6,6,6) = 0 + 0 = 0
Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels E(x 1 ,x 2 ,x 3 ) =  θ 12  (x 1 ,x 2 ) +  θ 23  (x 2 ,x 3 )  θ ij  (x i ,x j )  = 0 if x i =x j C otherwise { E(6,6,6) = 0 + 0 = 0 E(6,7,7) = 1 + 0 = 1
Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels Pairwise potentials penalize slanted planar surfaces E(x 1 ,x 2 ,x 3 ) =  θ 12  (x 1 ,x 2 ) +  θ 23  (x 2 ,x 3 )  θ ij  (x i ,x j )  = 0 if x i =x j C otherwise { E(6,6,6) = 0 + 0 = 0 E(6,7,7) = 1 + 0 = 1 E(6,7,8) = 1 + 1 = 2
Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move x E ( x ) x c Transformation function T ( t ) T ( x c , t )  =   x n  = x c  + t
Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move E ( x ) x c Transformation function T E m Move Energy ( t ) x E m ( t )  =  E ( T ( x c , t )) T ( x c , t )  =   x n  = x c  + t
Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move E ( x ) x c E m ( t )  =  E ( T ( x c , t )) Transformation function T E m Move Energy T ( x c , t )  =   x n  = x c  + t  minimize t* Optimal Move ( t ) x

More Related Content

What's hot

HMPC for Upper Stage Attitude Control
HMPC for Upper Stage Attitude ControlHMPC for Upper Stage Attitude Control
HMPC for Upper Stage Attitude ControlPantelis Sopasakis
 
Megadata With Python and Hadoop
Megadata With Python and HadoopMegadata With Python and Hadoop
Megadata With Python and Hadoopryancox
 
Oracle-based algorithms for high-dimensional polytopes.
Oracle-based algorithms for high-dimensional polytopes.Oracle-based algorithms for high-dimensional polytopes.
Oracle-based algorithms for high-dimensional polytopes.Vissarion Fisikopoulos
 
Fast parallelizable scenario-based stochastic optimization
Fast parallelizable scenario-based stochastic optimizationFast parallelizable scenario-based stochastic optimization
Fast parallelizable scenario-based stochastic optimizationPantelis Sopasakis
 
Performing Iterations in EES
Performing Iterations in EESPerforming Iterations in EES
Performing Iterations in EESNaveed Rehman
 
Computer graphics lab report with code in cpp
Computer graphics lab report with code in cppComputer graphics lab report with code in cpp
Computer graphics lab report with code in cppAlamgir Hossain
 
Programming for Mechanical Engineers in EES
Programming for Mechanical Engineers in EESProgramming for Mechanical Engineers in EES
Programming for Mechanical Engineers in EESNaveed Rehman
 
The Uncertain Enterprise
The Uncertain EnterpriseThe Uncertain Enterprise
The Uncertain EnterpriseClarkTony
 
Kuliah teori dan analisis jaringan - linear programming
Kuliah teori dan analisis jaringan - linear programmingKuliah teori dan analisis jaringan - linear programming
Kuliah teori dan analisis jaringan - linear programmingHarun Al-Rasyid Lubis
 
Operations Research Modeling Toolset
Operations Research Modeling ToolsetOperations Research Modeling Toolset
Operations Research Modeling ToolsetFellowBuddy.com
 
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and SparkCrystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and SparkJivan Nepali
 
Volume and edge skeleton computation in high dimensions
Volume and edge skeleton computation in high dimensionsVolume and edge skeleton computation in high dimensions
Volume and edge skeleton computation in high dimensionsVissarion Fisikopoulos
 
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO Systems
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO SystemsPilot Optimization and Channel Estimation for Multiuser Massive MIMO Systems
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO SystemsT. E. BOGALE
 
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay ApproachSampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay ApproachBehzad Samadi
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Alexander Litvinenko
 

What's hot (20)

HMPC for Upper Stage Attitude Control
HMPC for Upper Stage Attitude ControlHMPC for Upper Stage Attitude Control
HMPC for Upper Stage Attitude Control
 
Megadata With Python and Hadoop
Megadata With Python and HadoopMegadata With Python and Hadoop
Megadata With Python and Hadoop
 
Recursive Compressed Sensing
Recursive Compressed SensingRecursive Compressed Sensing
Recursive Compressed Sensing
 
Absorbing Random Walk Centrality
Absorbing Random Walk CentralityAbsorbing Random Walk Centrality
Absorbing Random Walk Centrality
 
Oracle-based algorithms for high-dimensional polytopes.
Oracle-based algorithms for high-dimensional polytopes.Oracle-based algorithms for high-dimensional polytopes.
Oracle-based algorithms for high-dimensional polytopes.
 
E 2017 1
E 2017 1E 2017 1
E 2017 1
 
Fast parallelizable scenario-based stochastic optimization
Fast parallelizable scenario-based stochastic optimizationFast parallelizable scenario-based stochastic optimization
Fast parallelizable scenario-based stochastic optimization
 
Performing Iterations in EES
Performing Iterations in EESPerforming Iterations in EES
Performing Iterations in EES
 
Computer graphics lab report with code in cpp
Computer graphics lab report with code in cppComputer graphics lab report with code in cpp
Computer graphics lab report with code in cpp
 
Programming for Mechanical Engineers in EES
Programming for Mechanical Engineers in EESProgramming for Mechanical Engineers in EES
Programming for Mechanical Engineers in EES
 
The Uncertain Enterprise
The Uncertain EnterpriseThe Uncertain Enterprise
The Uncertain Enterprise
 
Kuliah teori dan analisis jaringan - linear programming
Kuliah teori dan analisis jaringan - linear programmingKuliah teori dan analisis jaringan - linear programming
Kuliah teori dan analisis jaringan - linear programming
 
Relatório
RelatórioRelatório
Relatório
 
Operations Research Modeling Toolset
Operations Research Modeling ToolsetOperations Research Modeling Toolset
Operations Research Modeling Toolset
 
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and SparkCrystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark
 
Volume and edge skeleton computation in high dimensions
Volume and edge skeleton computation in high dimensionsVolume and edge skeleton computation in high dimensions
Volume and edge skeleton computation in high dimensions
 
Lecture13 controls
Lecture13  controls Lecture13  controls
Lecture13 controls
 
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO Systems
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO SystemsPilot Optimization and Channel Estimation for Multiuser Massive MIMO Systems
Pilot Optimization and Channel Estimation for Multiuser Massive MIMO Systems
 
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay ApproachSampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
 

Similar to ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 2

Lecture02_Part02.pptx
Lecture02_Part02.pptxLecture02_Part02.pptx
Lecture02_Part02.pptxMahdiAbbasi31
 
Mit15 082 jf10_lec01
Mit15 082 jf10_lec01Mit15 082 jf10_lec01
Mit15 082 jf10_lec01Saad Liaqat
 
Page rank
Page rankPage rank
Page rankCarlos
 
Introduction to Neural Networks and Deep Learning from Scratch
Introduction to Neural Networks and Deep Learning from ScratchIntroduction to Neural Networks and Deep Learning from Scratch
Introduction to Neural Networks and Deep Learning from ScratchAhmed BESBES
 
Extended network and algorithm finding maximal flows
Extended network and algorithm finding maximal flows Extended network and algorithm finding maximal flows
Extended network and algorithm finding maximal flows IJECEIAES
 
TMPA-2017: The Quest for Average Response Time
TMPA-2017: The Quest for Average Response TimeTMPA-2017: The Quest for Average Response Time
TMPA-2017: The Quest for Average Response TimeIosif Itkin
 
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by OraclesEfficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by OraclesVissarion Fisikopoulos
 
Network problems 1 (1)
Network problems 1 (1)Network problems 1 (1)
Network problems 1 (1)Jabnon Nonjab
 
COCOA: Communication-Efficient Coordinate Ascent
COCOA: Communication-Efficient Coordinate AscentCOCOA: Communication-Efficient Coordinate Ascent
COCOA: Communication-Efficient Coordinate Ascentjeykottalam
 
Gate 2013 complete solutions of ec electronics and communication engineering
Gate 2013 complete solutions of ec  electronics and communication engineeringGate 2013 complete solutions of ec  electronics and communication engineering
Gate 2013 complete solutions of ec electronics and communication engineeringmanish katara
 
Relaxation methods for the matrix exponential on large networks
Relaxation methods for the matrix exponential on large networksRelaxation methods for the matrix exponential on large networks
Relaxation methods for the matrix exponential on large networksDavid Gleich
 
Lec10: Medical Image Segmentation as an Energy Minimization Problem
Lec10: Medical Image Segmentation as an Energy Minimization ProblemLec10: Medical Image Segmentation as an Energy Minimization Problem
Lec10: Medical Image Segmentation as an Energy Minimization ProblemUlaş Bağcı
 
Electrical Engineering Assignment Help
Electrical Engineering Assignment HelpElectrical Engineering Assignment Help
Electrical Engineering Assignment HelpEdu Assignment Help
 
Sampling-Based Planning Algorithms for Multi-Objective Missions
Sampling-Based Planning Algorithms for Multi-Objective MissionsSampling-Based Planning Algorithms for Multi-Objective Missions
Sampling-Based Planning Algorithms for Multi-Objective MissionsMd Mahbubur Rahman
 
mws_gen_nle_ppt_secant.ppt
mws_gen_nle_ppt_secant.pptmws_gen_nle_ppt_secant.ppt
mws_gen_nle_ppt_secant.pptsaadnaeem424
 

Similar to ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 2 (20)

Lecture02_Part02.pptx
Lecture02_Part02.pptxLecture02_Part02.pptx
Lecture02_Part02.pptx
 
L21-MaxFlowPr.ppt
L21-MaxFlowPr.pptL21-MaxFlowPr.ppt
L21-MaxFlowPr.ppt
 
Mit15 082 jf10_lec01
Mit15 082 jf10_lec01Mit15 082 jf10_lec01
Mit15 082 jf10_lec01
 
Page rank
Page rankPage rank
Page rank
 
Introduction to Neural Networks and Deep Learning from Scratch
Introduction to Neural Networks and Deep Learning from ScratchIntroduction to Neural Networks and Deep Learning from Scratch
Introduction to Neural Networks and Deep Learning from Scratch
 
Extended network and algorithm finding maximal flows
Extended network and algorithm finding maximal flows Extended network and algorithm finding maximal flows
Extended network and algorithm finding maximal flows
 
TMPA-2017: The Quest for Average Response Time
TMPA-2017: The Quest for Average Response TimeTMPA-2017: The Quest for Average Response Time
TMPA-2017: The Quest for Average Response Time
 
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by OraclesEfficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles
 
An Efficient and Parallel Abstract Interpreter in Scala — First Algorithm
An Efficient and Parallel Abstract Interpreter in Scala — First AlgorithmAn Efficient and Parallel Abstract Interpreter in Scala — First Algorithm
An Efficient and Parallel Abstract Interpreter in Scala — First Algorithm
 
algorithm Unit 3
algorithm Unit 3algorithm Unit 3
algorithm Unit 3
 
Network problems 1 (1)
Network problems 1 (1)Network problems 1 (1)
Network problems 1 (1)
 
COCOA: Communication-Efficient Coordinate Ascent
COCOA: Communication-Efficient Coordinate AscentCOCOA: Communication-Efficient Coordinate Ascent
COCOA: Communication-Efficient Coordinate Ascent
 
Ec gate'13
Ec gate'13Ec gate'13
Ec gate'13
 
Gate 2013 complete solutions of ec electronics and communication engineering
Gate 2013 complete solutions of ec  electronics and communication engineeringGate 2013 complete solutions of ec  electronics and communication engineering
Gate 2013 complete solutions of ec electronics and communication engineering
 
Linkedin_PowerPoint
Linkedin_PowerPointLinkedin_PowerPoint
Linkedin_PowerPoint
 
Relaxation methods for the matrix exponential on large networks
Relaxation methods for the matrix exponential on large networksRelaxation methods for the matrix exponential on large networks
Relaxation methods for the matrix exponential on large networks
 
Lec10: Medical Image Segmentation as an Energy Minimization Problem
Lec10: Medical Image Segmentation as an Energy Minimization ProblemLec10: Medical Image Segmentation as an Energy Minimization Problem
Lec10: Medical Image Segmentation as an Energy Minimization Problem
 
Electrical Engineering Assignment Help
Electrical Engineering Assignment HelpElectrical Engineering Assignment Help
Electrical Engineering Assignment Help
 
Sampling-Based Planning Algorithms for Multi-Objective Missions
Sampling-Based Planning Algorithms for Multi-Objective MissionsSampling-Based Planning Algorithms for Multi-Objective Missions
Sampling-Based Planning Algorithms for Multi-Objective Missions
 
mws_gen_nle_ppt_secant.ppt
mws_gen_nle_ppt_secant.pptmws_gen_nle_ppt_secant.ppt
mws_gen_nle_ppt_secant.ppt
 

More from zukun

My lyn tutorial 2009
My lyn tutorial 2009My lyn tutorial 2009
My lyn tutorial 2009zukun
 
ETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCVETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCVzukun
 
ETHZ CV2012: Information
ETHZ CV2012: InformationETHZ CV2012: Information
ETHZ CV2012: Informationzukun
 
Siwei lyu: natural image statistics
Siwei lyu: natural image statisticsSiwei lyu: natural image statistics
Siwei lyu: natural image statisticszukun
 
Lecture9 camera calibration
Lecture9 camera calibrationLecture9 camera calibration
Lecture9 camera calibrationzukun
 
Brunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer visionBrunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer visionzukun
 
Modern features-part-4-evaluation
Modern features-part-4-evaluationModern features-part-4-evaluation
Modern features-part-4-evaluationzukun
 
Modern features-part-3-software
Modern features-part-3-softwareModern features-part-3-software
Modern features-part-3-softwarezukun
 
Modern features-part-2-descriptors
Modern features-part-2-descriptorsModern features-part-2-descriptors
Modern features-part-2-descriptorszukun
 
Modern features-part-1-detectors
Modern features-part-1-detectorsModern features-part-1-detectors
Modern features-part-1-detectorszukun
 
Modern features-part-0-intro
Modern features-part-0-introModern features-part-0-intro
Modern features-part-0-introzukun
 
Lecture 02 internet video search
Lecture 02 internet video searchLecture 02 internet video search
Lecture 02 internet video searchzukun
 
Lecture 01 internet video search
Lecture 01 internet video searchLecture 01 internet video search
Lecture 01 internet video searchzukun
 
Lecture 03 internet video search
Lecture 03 internet video searchLecture 03 internet video search
Lecture 03 internet video searchzukun
 
Icml2012 tutorial representation_learning
Icml2012 tutorial representation_learningIcml2012 tutorial representation_learning
Icml2012 tutorial representation_learningzukun
 
Advances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionAdvances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionzukun
 
Gephi tutorial: quick start
Gephi tutorial: quick startGephi tutorial: quick start
Gephi tutorial: quick startzukun
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysiszukun
 
Object recognition with pictorial structures
Object recognition with pictorial structuresObject recognition with pictorial structures
Object recognition with pictorial structureszukun
 
Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities zukun
 

More from zukun (20)

My lyn tutorial 2009
My lyn tutorial 2009My lyn tutorial 2009
My lyn tutorial 2009
 
ETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCVETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCV
 
ETHZ CV2012: Information
ETHZ CV2012: InformationETHZ CV2012: Information
ETHZ CV2012: Information
 
Siwei lyu: natural image statistics
Siwei lyu: natural image statisticsSiwei lyu: natural image statistics
Siwei lyu: natural image statistics
 
Lecture9 camera calibration
Lecture9 camera calibrationLecture9 camera calibration
Lecture9 camera calibration
 
Brunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer visionBrunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer vision
 
Modern features-part-4-evaluation
Modern features-part-4-evaluationModern features-part-4-evaluation
Modern features-part-4-evaluation
 
Modern features-part-3-software
Modern features-part-3-softwareModern features-part-3-software
Modern features-part-3-software
 
Modern features-part-2-descriptors
Modern features-part-2-descriptorsModern features-part-2-descriptors
Modern features-part-2-descriptors
 
Modern features-part-1-detectors
Modern features-part-1-detectorsModern features-part-1-detectors
Modern features-part-1-detectors
 
Modern features-part-0-intro
Modern features-part-0-introModern features-part-0-intro
Modern features-part-0-intro
 
Lecture 02 internet video search
Lecture 02 internet video searchLecture 02 internet video search
Lecture 02 internet video search
 
Lecture 01 internet video search
Lecture 01 internet video searchLecture 01 internet video search
Lecture 01 internet video search
 
Lecture 03 internet video search
Lecture 03 internet video searchLecture 03 internet video search
Lecture 03 internet video search
 
Icml2012 tutorial representation_learning
Icml2012 tutorial representation_learningIcml2012 tutorial representation_learning
Icml2012 tutorial representation_learning
 
Advances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionAdvances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer vision
 
Gephi tutorial: quick start
Gephi tutorial: quick startGephi tutorial: quick start
Gephi tutorial: quick start
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysis
 
Object recognition with pictorial structures
Object recognition with pictorial structuresObject recognition with pictorial structures
Object recognition with pictorial structures
 
Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities
 

ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 2

  • 1. MAP Estimation Algorithms in M. Pawan Kumar, University of Oxford Pushmeet Kohli, Microsoft Research Computer Vision - Part II
  • 2. Example: Image Segmentation E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) E: {0,1} n -> R 0 -> fg 1 -> bg Image (D) i i,j n = number of pixels
  • 3. Example: Image Segmentation E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) E: {0,1} n -> R 0 -> fg 1 -> bg i i,j Unary Cost (c i ) Dark ( negative ) Bright (positive) n = number of pixels
  • 4. Example: Image Segmentation E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) E: {0,1} n -> R 0 -> fg 1 -> bg i i,j Discontinuity Cost (c ij ) n = number of pixels
  • 5. Example: Image Segmentation E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) E: {0,1} n -> R 0 -> fg 1 -> bg i i,j Global Minimum (x * ) x * = arg min E(x) x How to minimize E(x)? n = number of pixels
  • 6. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Recent Advances and Open Problems Connection between st-mincut and energy minimization?
  • 7. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
  • 8.
  • 9. The st-Mincut Problem Source Sink v 1 v 2 2 5 9 4 2 1 What is a st-cut?
  • 10. The st-Mincut Problem Source Sink v 1 v 2 2 5 9 4 2 1 What is a st-cut? An st-cut ( S , T ) divides the nodes between source and sink. What is the cost of a st-cut? Sum of cost of all edges going from S to T 5 + 2 + 9 = 16
  • 11. The st-Mincut Problem What is a st-cut? An st-cut ( S , T ) divides the nodes between source and sink. What is the cost of a st-cut? Sum of cost of all edges going from S to T What is the st-mincut? st-cut with the minimum cost Source Sink v 1 v 2 2 5 9 4 2 1 2 + 1 + 4 = 7
  • 12. How to compute the st-mincut? Source Sink v 1 v 2 2 5 9 4 2 1 Solve the dual maximum flow problem In every network, the maximum flow equals the cost of the st-mincut Min-cutax-flow Theorem Compute the maximum flow between Source and Sink Constraints Edges: Flow < Capacity Nodes: Flow in = Flow out
  • 13.
  • 14.
  • 15.
  • 16.
  • 17.
  • 18.
  • 19.
  • 20.
  • 21.
  • 22.
  • 23.
  • 24.
  • 25. History of Maxflow Algorithms [Slide credit: Andrew Goldberg] Augmenting Path and Push-Relabel n: # nodes m: # edges U: maximum edge weight Algorithms assume non-negative edge weights
  • 26. History of Maxflow Algorithms [Slide credit: Andrew Goldberg] Augmenting Path and Push-Relabel n: # nodes m: # edges U: maximum edge weight Algorithms assume non-negative edge weights
  • 27. Augmenting Path based Algorithms a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Ford Fulkerson: Choose any augmenting path
  • 28. Augmenting Path based Algorithms a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Bad Augmenting Paths Ford Fulkerson: Choose any augmenting path
  • 29. a 1 a 2 1000 1 Sink Source 1000 1000 1000 0 Augmenting Path based Algorithms Bad Augmenting Path Ford Fulkerson: Choose any augmenting path
  • 30. a 1 a 2 999 0 Sink Source 1000 1000 999 1 Augmenting Path based Algorithms Ford Fulkerson: Choose any augmenting path
  • 31. Augmenting Path based Algorithms a 1 a 2 999 0 Sink Source 1000 1000 999 1 Ford Fulkerson: Choose any augmenting path n: # nodes m: # edges We will have to perform 2000 augmentations! Worst case complexity: O (m x Total_Flow) (Pseudo-polynomial bound: depends on flow)
  • 32. Augmenting Path based Algorithms Dinic: Choose shortest augmenting path n: # nodes m: # edges Worst case Complexity: O (m n 2 ) a 1 a 2 1000 1 Sink Source 1000 1000 1000 0
  • 33.
  • 34.
  • 35. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
  • 36. St-mincut and Energy Minimization E: {0,1} n -> R Minimizing a Qudratic Pseudoboolean function E(x) Functions of boolean variables Pseudoboolean? Polynomial time st-mincut algorithms require non-negative edge weights E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) c ij ≥0 i,j i T S st-mincut
  • 37.
  • 38. Graph Construction Sink (1) Source (0) a 1 a 2 E(a 1 ,a 2 )
  • 39. Graph Construction Sink (1) Source (0) a 1 a 2 E(a 1 ,a 2 ) = 2a 1 2
  • 40. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 2 5 Sink (1) Source (0)
  • 41. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 2 5 9 4 Sink (1) Source (0)
  • 42. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 2 5 9 4 2 Sink (1) Source (0)
  • 43. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)
  • 44. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)
  • 45. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2 a 1 + 5 ā 1 + 9 a 2 + 4 ā 2 + 2 a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 a 1 = 1 a 2 = 1 E (1,1) = 11 Cost of cut = 11 Sink (1) Source (0)
  • 46. Graph Construction a 1 a 2 E(a 1 ,a 2 ) = 2 a 1 + 5 ā 1 + 9 a 2 + 4 ā 2 + 2 a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0) a 1 = 1 a 2 = 0 E (1,0) = 8 st-mincut cost = 8
  • 47. Energy Function Reparameterization Two functions E 1 and E 2 are reparameterizations if E 1 ( x ) = E 2 ( x ) for all x For instance: E 1 (a 1 ) = 1+ 2a 1 + 3ā 1 E 2 (a 1 ) = 3 + ā 1 a 1 ā 1 1+ 2a 1 + 3ā 1 3 + ā 1 0 1 4 4 1 0 3 3
  • 48. Flow and Reparametrization a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0)
  • 49. Flow and Reparametrization a 1 a 2 E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 2 5 9 4 2 1 Sink (1) Source (0) 2a 1 + 5ā 1 = 2(a 1 +ā 1 ) + 3ā 1 = 2 + 3ā 1
  • 50. Flow and Reparametrization Sink (1) Source (0) a 1 a 2 E(a 1 ,a 2 ) = 2 + 3ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 0 3 9 4 2 1 2a 1 + 5ā 1 = 2(a 1 +ā 1 ) + 3ā 1 = 2 + 3ā 1
  • 51. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 2 + 3ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 0 3 9 4 2 1 9a 2 + 4ā 2 = 4(a 2 +ā 2 ) + 5ā 2 = 4 + 5ā 2
  • 52. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 2 + 3ā 1 + 5a 2 + 4 + 2a 1 ā 2 + ā 1 a 2 0 3 5 0 2 1 9a 2 + 4ā 2 = 4(a 2 +ā 2 ) + 5ā 2 = 4 + 5ā 2
  • 53. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 6 + 3ā 1 + 5a 2 + 2a 1 ā 2 + ā 1 a 2 0 3 5 0 2 1
  • 54. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 6 + 3ā 1 + 5a 2 + 2a 1 ā 2 + ā 1 a 2 0 3 5 0 2 1
  • 55. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 6 + 3ā 1 + 5a 2 + 2a 1 ā 2 + ā 1 a 2 0 3 5 0 2 1 3ā 1 + 5a 2 + 2a 1 ā 2 = 2(ā 1 +a 2 +a 1 ā 2 ) +ā 1 +3a 2 = 2(1+ā 1 a 2 ) +ā 1 +3a 2 F1 = ā 1 +a 2 +a 1 ā 2 F2 = 1+ā 1 a 2 a 1 a 2 F1 F2 0 0 1 1 0 1 2 2 1 0 1 1 1 1 1 1
  • 56. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 0 1 3 0 0 3 3ā 1 + 5a 2 + 2a 1 ā 2 = 2(ā 1 +a 2 +a 1 ā 2 ) +ā 1 +3a 2 = 2(1+ā 1 a 2 ) +ā 1 +3a 2 F1 = ā 1 +a 2 +a 1 ā 2 F2 = 1+ā 1 a 2 a 1 a 2 F1 F2 0 0 1 1 0 1 2 2 1 0 1 1 1 1 1 1
  • 57. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 No more augmenting paths possible
  • 58. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 Total Flow Residual Graph (positive coefficients) bound on the optimal solution Inference of the optimal solution becomes trivial because the bound is tight
  • 59. Flow and Reparametrization Sink (0) Source (1) a 1 a 2 0 1 3 0 0 3 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 a 1 = 1 a 2 = 0 E (1,0) = 8 st-mincut cost = 8 Total Flow bound on the optimal solution Inference of the optimal solution becomes trivial because the bound is tight Residual Graph (positive coefficients)
  • 60. Example: Image Segmentation E(x) = ∑ c i x i + ∑ c ij x i (1-x j ) E: {0,1} n -> R 0 -> fg 1 -> bg i i,j Global Minimum (x * ) x * = arg min E(x) x How to minimize E(x)?
  • 61. What does the code look like? Sink (1) Source (0) Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1)
  • 62. What does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 )
  • 63. What does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q)
  • 64. What does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q) a 1 = bg a 2 = fg
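A concrete version of the pseudocode above, written against the interface of the publicly available Boykov-Kolmogorov maxflow library (Graph with add_node, add_tweights, add_edge, maxflow, what_segment), might look roughly as follows. This is a sketch, not the authors' code: fgCost, bgCost and cost are placeholders exactly as in the slides, the 4-connected neighbourhood is my choice, and exact signatures may differ between library versions.

```cpp
// Sketch only: assumes the Boykov-Kolmogorov maxflow library (graph.h / maxflow.cpp).
#include "graph.h"
#include <vector>

typedef Graph<int, int, int> GraphType;

std::vector<int> segment(int W, int H,
                         int (*fgCost)(int), int (*bgCost)(int),
                         int (*cost)(int, int)) {
    const int N = W * H;
    GraphType g(/*estimated #nodes*/ N, /*estimated #edges*/ 2 * N);

    std::vector<GraphType::node_id> id(N);
    for (int p = 0; p < N; ++p) {
        id[p] = g.add_node();
        // Terminal edges: first capacity is the source t-link, second the sink t-link.
        // Which of fgCost/bgCost goes where depends on whether label 0 (fg) is identified
        // with the source side or the sink side in your convention.
        g.add_tweights(id[p], fgCost(p), bgCost(p));
    }

    // Pairwise (n-link) edges between 4-connected neighbours.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int p = y * W + x;
            if (x + 1 < W) g.add_edge(id[p], id[p + 1], cost(p, p + 1), cost(p + 1, p));
            if (y + 1 < H) g.add_edge(id[p], id[p + W], cost(p, p + W), cost(p + W, p));
        }

    g.maxflow();  // computes the maximum flow / st-mincut

    std::vector<int> label(N);
    for (int p = 0; p < N; ++p)
        label[p] = (g.what_segment(id[p]) == GraphType::SOURCE) ? 0 : 1;
    return label;
}
```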
  • 65. Image Segmentation in Video (figure panels: Image, Flow, Global Optimum; s-t graph with n-links and the st-cut, terminals labelled 0 and 1, energy E(x), minimizer x * )
  • 66. Image Segmentation in Video Image Flow Global Optimum
  • 67. Dynamic Energy Minimization E B computationally expensive operation E A Recycling Solutions Can we do better? Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) S B S A minimize
  • 68. Dynamic Energy Minimization E B computationally expensive operation E A cheaper operation Kohli & Torr (ICCV05, PAMI07) 3 – 100000 times speedup! Reuse flow Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) S B S A minimize Simpler energy E B* differences between A and B A and B similar Reparametrization
  • 69. Dynamic Energy Minimization Reparametrized Energy Kohli & Torr (ICCV05, PAMI07) Boykov & Jolly ICCV’01, Kohli & Torr (ICCV05, PAMI07) E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 2a 1 ā 2 + ā 1 a 2 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 Original Energy E(a 1 ,a 2 ) = 2a 1 + 5ā 1 + 9a 2 + 4ā 2 + 7a 1 ā 2 + ā 1 a 2 E(a 1 ,a 2 ) = 8 + ā 1 + 3a 2 + 3ā 1 a 2 + 5a 1 ā 2 New Energy New Reparametrized Energy
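Read in equations, the example on this slide says: frame B's energy differs from frame A's only in the coefficient of a 1 ā 2 (2 becomes 7), and that same difference can simply be added to A's already reparametrized energy. All coefficients stay non-negative, so the flow computed for A remains valid and maxflow only has to be restarted on the residual edges that changed, which is where the reported speedups come from.

```latex
\begin{align*}
E_B(a_1,a_2) &= E_A(a_1,a_2) + 5\,a_1\bar a_2
  && \text{(only the } a_1\bar a_2 \text{ coefficient changed: } 2 \to 7\text{)} \\
E_B^{\mathrm{repar}}(a_1,a_2) &= \underbrace{8 + \bar a_1 + 3a_2 + 3\bar a_1 a_2}_{E_A^{\mathrm{repar}}} \;+\; 5\,a_1\bar a_2
  && \text{(add the same difference to the residual energy)}
\end{align*}
```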
  • 70. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
  • 71.
  • 72. Submodular Set Functions Set function f : 2 |E| → ℝ, where 2 |E| = #subsets of E. Let E = {a 1 ,a 2 , .... a n } be a set
  • 73. Submodular Set Functions Set function f : 2 |E| → ℝ is submodular if f( A ) + f( B ) ≥ f( A ∪ B ) + f( A ∩ B ) for all A , B ⊆ E ( 2 |E| = #subsets of E; E = {a 1 ,a 2 , .... a n } ) Important Property: Sum of two submodular functions is submodular
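As a sanity check of the definition, the sketch below enumerates all pairs of subsets A, B of a small ground set (encoded as bitmasks) and verifies f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B). The particular f, a small coverage function, is my own illustrative example, not one from the slides; coverage functions are a standard instance of submodularity.

```cpp
#include <bitset>
#include <cstdio>

// Ground set E = {a_0, ..., a_3}; element e "covers" the items in covers[e].
// f(S) = number of distinct items covered by the elements in S (a coverage function).
static const unsigned covers[4] = { 0b0011u, 0b0110u, 0b1100u, 0b0101u };

static int f(unsigned S) {
    unsigned covered = 0;
    for (int e = 0; e < 4; ++e)
        if (S & (1u << e)) covered |= covers[e];
    return static_cast<int>(std::bitset<32>(covered).count());
}

int main() {
    bool ok = true;
    for (unsigned A = 0; A < 16; ++A)
        for (unsigned B = 0; B < 16; ++B)
            // Submodularity: f(A) + f(B) >= f(A u B) + f(A n B) for all A, B subset of E.
            if (f(A) + f(B) < f(A | B) + f(A & B)) ok = false;
    std::printf("submodular: %s\n", ok ? "yes" : "no");
    return 0;
}
```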
  • 74.
  • 75.
  • 76. Quadratic Submodular Pseudoboolean Functions E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) with θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij
  • 77. Quadratic Submodular Pseudoboolean Functions E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ) with θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij Equivalent (transformable) to E(x) = ∑ i c i x i + ∑ i,j c ij x i (1-x j ) with c ij ≥ 0, i.e. all submodular QPBFs are st-mincut solvable
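To see why the c ij ≥ 0 form on this slide satisfies the condition, write each pairwise term as θ ij (x i ,x j ) = c ij x i (1-x j ) and substitute:

```latex
\theta_{ij}(0,0)=\theta_{ij}(0,1)=\theta_{ij}(1,1)=0, \qquad \theta_{ij}(1,0)=c_{ij}
\;\;\Longrightarrow\;\;
\theta_{ij}(0,1)+\theta_{ij}(1,0) \ge \theta_{ij}(0,0)+\theta_{ij}(1,1)
\;\Longleftrightarrow\; c_{ij} \ge 0 .
```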
  • 78. How are they equivalent? A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . As 2x2 tables over (x i ,x j ): [A B; C D] = A [1 1; 1 1] + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]. If x i = 1 add C-A; if x j = 1 add D-C. B+C-A-D ≥ 0 is true from the submodularity of θ ij .
  • 79. How are they equivalent? A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . As 2x2 tables over (x i ,x j ): [A B; C D] = A [1 1; 1 1] + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]. If x i = 1 add C-A; if x j = 1 add D-C. B+C-A-D ≥ 0 is true from the submodularity of θ ij .
  • 80. How are they equivalent? A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . As 2x2 tables over (x i ,x j ): [A B; C D] = A [1 1; 1 1] + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]. If x i = 1 add C-A; if x j = 1 add D-C. B+C-A-D ≥ 0 is true from the submodularity of θ ij .
  • 81. How are they equivalent? A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . As 2x2 tables over (x i ,x j ): [A B; C D] = A [1 1; 1 1] + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]. If x i = 1 add C-A; if x j = 1 add D-C. B+C-A-D ≥ 0 is true from the submodularity of θ ij .
  • 82. How are they equivalent? A = θ ij (0,0), B = θ ij (0,1), C = θ ij (1,0), D = θ ij (1,1). θ ij (x i ,x j ) = θ ij (0,0) + ( θ ij (1,0) - θ ij (0,0)) x i + ( θ ij (1,1) - θ ij (1,0)) x j + ( θ ij (1,0) + θ ij (0,1) - θ ij (0,0) - θ ij (1,1)) (1-x i ) x j . As 2x2 tables over (x i ,x j ): [A B; C D] = A [1 1; 1 1] + [0 0; C-A C-A] + [0 D-C; 0 D-C] + [0 B+C-A-D; 0 0]. If x i = 1 add C-A; if x j = 1 add D-C. B+C-A-D ≥ 0 is true from the submodularity of θ ij .
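In code, the decomposition above is exactly how one pairwise term is added to the graph: the constant A is accumulated separately, C-A and D-C go onto terminal edges, and B+C-A-D becomes the n-link. A hedged sketch against the same BK maxflow interface as before (add_pairwise_term is my own helper name; the convention assumed is label 0 = source side, label 1 = sink side):

```cpp
// Sketch: add one pairwise submodular term theta_ij with
//   A = theta(0,0), B = theta(0,1), C = theta(1,0), D = theta(1,1).
#include "graph.h"

typedef Graph<int, int, int> GraphType;

int add_pairwise_term(GraphType& g,
                      GraphType::node_id i, GraphType::node_id j,
                      int A, int B, int C, int D) {
    // Submodularity (B + C >= A + D) guarantees a non-negative n-link capacity.

    // "if x_i = 1 add C-A": charged via the t-links of node i. Only the difference
    // between the two terminal capacities matters for the labelling, so a negative
    // value here is tolerated by add_tweights.
    g.add_tweights(i, C - A, 0);
    // "if x_j = 1 add D-C": charged via the t-links of node j.
    g.add_tweights(j, D - C, 0);
    // The remaining cost B+C-A-D is paid only for (x_i, x_j) = (0, 1),
    // i.e. when the edge i -> j crosses the cut.
    g.add_edge(i, j, B + C - A - D, 0);

    return A;  // constant part, to be added to the energy offset
}
```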
  • 83. Quadratic Submodular Pseudoboolean Functions E(x) = ∑ i θ i (x i ) + ∑ i,j θ ij (x i ,x j ), x in {0,1} n , with θ ij (0,1) + θ ij (1,0) ≥ θ ij (0,0) + θ ij (1,1) for all ij Equivalent (transformable) to an st-mincut problem ( S , T )
  • 84.
  • 85. Minimization using Roof-dual Relaxation (figure: energy split into unary, pairwise submodular, and pairwise nonsubmodular terms) [Slide credit: Carsten Rother]
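The next few slides in the deck are figures. As a practical pointer, roof-dual minimization is available in Kolmogorov's publicly released QPBO code; a minimal usage sketch, assuming that library's interface (AddNode, AddUnaryTerm, AddPairwiseTerm, Solve, ComputeWeakPersistencies, GetLabel), is shown below. The numbers are illustrative only, and names or semantics should be checked against the version you actually download.

```cpp
// Minimal roof-dual (QPBO) usage sketch; assumes Kolmogorov's QPBO library (QPBO.h).
#include "QPBO.h"
#include <cstdio>

int main() {
    const int nVars = 2, nEdges = 1;
    QPBO<int> q(nVars, nEdges);
    q.AddNode(nVars);

    // Unary terms theta_i(0), theta_i(1) -- illustrative numbers.
    q.AddUnaryTerm(0, 5, 2);
    q.AddUnaryTerm(1, 4, 9);

    // A pairwise term that is NOT submodular: E01 + E10 < E00 + E11.
    q.AddPairwiseTerm(0, 1, /*E00*/ 2, /*E01*/ 0, /*E10*/ 0, /*E11*/ 1);

    q.Solve();
    q.ComputeWeakPersistencies();

    for (int i = 0; i < nVars; ++i) {
        int xi = q.GetLabel(i);  // 0 or 1 if labelled, negative if left unlabelled
        // Labelled variables are partially optimal: they agree with some global minimum.
        std::printf("x%d = %d\n", i, xi);
    }
    return 0;
}
```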
  • 86.
  • 87.
  • 88.
  • 89.
  • 90.
  • 91.
  • 92. Transforming problems into QBFs Multi-label Functions Pseudoboolean Functions Higher order Pseudoboolean Functions Quadratic Pseudoboolean Functions
  • 93. Transforming problems into QBFs Multi-label Functions Pseudoboolean Functions Higher order Pseudoboolean Functions Quadratic Pseudoboolean Functions
  • 94.
  • 95. Higher order to Quadratic min x f( x ) = min x,a ϵ {0,1} C 1 a + C 1 ∑ ā x i (left: Higher Order Submodular Function; right: Quadratic Submodular Function) (plot: the constant C 1 and the line C 1 ∑ x i against ∑ x i = 1, 2, 3)
  • 96. Higher order to Quadratic min x f( x ) = min x,a ϵ {0,1} C 1 a + C 1 ∑ ā x i (left: Higher Order Submodular Function; right: Quadratic Submodular Function) (plot: the constant C 1 , selected by a=1, and the line C 1 ∑ x i , selected by a=0, against ∑ x i = 1, 2, 3) Lower envelope of concave functions is concave
  • 97. Higher order to Quadratic min x f( x ) = min x,a ϵ {0,1} f 1 (x) a + f 2 (x) ā (left: Higher Order Submodular Function; right: Quadratic Submodular Function) (plot: f 1 (x) and f 2 (x) against ∑ x i = 1, 2, 3; a=1 branch marked) Lower envelope of concave functions is concave
  • 98. Higher order to Quadratic min x f( x ) = min x,a ϵ {0,1} f 1 (x) a + f 2 (x) ā (left: Higher Order Submodular Function; right: Quadratic Submodular Function) (plot: f 1 (x) and f 2 (x) against ∑ x i = 1, 2, 3; a=1 and a=0 branches marked) Lower envelope of concave functions is concave
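In equations, the construction on these slides: if the higher-order term is the lower envelope of (here two) functions f 1 , f 2 , then its minimization is equivalent to minimizing a function with one extra switching variable a:

```latex
f(\mathbf{x}) = \min\big(f_1(\mathbf{x}),\, f_2(\mathbf{x})\big)
\quad\Longrightarrow\quad
\min_{\mathbf{x}\in\{0,1\}^n} f(\mathbf{x})
= \min_{\mathbf{x}\in\{0,1\}^n,\; a\in\{0,1\}} a\, f_1(\mathbf{x}) + (1-a)\, f_2(\mathbf{x}) .
```

In the earlier C 1 example, f 1 (x) = C 1 and f 2 (x) = C 1 ∑ x i , so the right-hand side is C 1 a + C 1 (1-a) ∑ x i : quadratic in (x, a), and submodular provided C 1 ≥ 0.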
  • 99. Transforming problems into QBFs Multi-label Functions Pseudoboolean Functions Higher order Pseudoboolean Functions Quadratic Pseudoboolean Functions
  • 100.
  • 101.
  • 102.
  • 103.
  • 104.
  • 105. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
  • 106.
  • 107. Move Making Algorithms Solution Space Energy
  • 108. Move Making Algorithms Search Neighbourhood Current Solution Optimal Move Solution Space Energy
  • 109.
  • 110.
  • 111.
  • 112. General Binary Moves Minimize over move variables t to get the optimal move x = t x 1 + (1- t ) x 2 New solution Current Solution Second solution E m ( t ) = E( t x 1 + (1- t ) x 2 ) Boykov, Veksler and Zabih, PAMI 2001 Move energy is a submodular QPBF (Exact Minimization Possible)
  • 113.
  • 114.
  • 115.
  • 116.
  • 117.
  • 118.
  • 119. General Binary Moves Minimize over move variables t x = t x 1 + (1-t) x 2 New solution First solution Second solution Move functions can be non-submodular!! Move Type (First Solution, Second Solution) -> Guarantee: Expansion (old solution, all alpha) -> metric; Fusion (any solution, any solution) -> –
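As a concrete picture of how such binary moves are used, here is a hedged sketch of the outer loop of a move-making algorithm. Both energy and solve_binary_move are placeholders supplied by the caller, not functions from the slides: for expansion moves the move energy is a submodular QPBF and can be minimized exactly with st-mincut, while for fusion moves it may be non-submodular and roof-dual/QPBO would be used instead.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Outer loop of an expansion/fusion style algorithm.
// 'x' is the current solution x1; each proposal plays the role of the second solution x2.
std::vector<int> move_making_minimize(
    std::vector<int> x,
    const std::vector<std::vector<int>>& proposals,
    const std::function<double(const std::vector<int>&)>& energy,
    const std::function<std::vector<int>(const std::vector<int>&,
                                         const std::vector<int>&)>& solve_binary_move) {
    bool improved = true;
    while (improved) {
        improved = false;
        for (const auto& x2 : proposals) {               // e.g. constant-alpha labellings
            std::vector<int> t = solve_binary_move(x, x2);
            std::vector<int> xn(x.size());
            for (std::size_t p = 0; p < x.size(); ++p)
                xn[p] = t[p] ? x[p] : x2[p];             // x_new = t*x1 + (1-t)*x2
            if (energy(xn) < energy(x)) { x = xn; improved = true; }
        }
    }
    return x;
}
```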
  • 120. Solving Continuous Problems using Fusion Move x = t x 1 + (1-t) x 2 (Lempitsky et al. CVPR08, Woodford et al. CVPR08) x 1 , x 2 can be continuous F x 1 x 2 x Optical Flow Example Final Solution Solution from Method 1 Solution from Method 2
  • 121.
  • 122.
  • 123. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
  • 124. Solving Mixed Programming Problems x – binary image segmentation (x i ∊ {0,1}) ω – non-local parameter (lives in some large set Ω ) E(x, ω ) = C( ω ) + ∑ i θ i ( ω, x i ) + ∑ i,j θ ij ( ω, x i ,x j ) (constant + unary potentials + pairwise potentials ≥ 0) (figure: stickman model as a rough shape prior; ω = pose; θ i ( ω, x i ) = shape prior)
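One straightforward (if potentially expensive) way to read this formulation: for any fixed ω the energy in x is a submodular QPBF, so it can be minimized exactly with one st-mincut; minimizing over ω can then be done by searching over candidate values and keeping the best pair. The sketch below only illustrates that baseline; segment_given_omega is a placeholder wrapping the graph construction shown earlier, and in practice successive similar values of ω would reuse flow (dynamic graph cuts) or a smarter search over Ω would be used.

```cpp
#include <functional>
#include <limits>
#include <utility>
#include <vector>

// Baseline: enumerate candidate omegas, solve the inner segmentation exactly for each,
// and keep the (omega, x) pair with the lowest energy E(x, omega).
std::pair<int, std::vector<int>> minimize_over_omega(
    const std::vector<int>& omegas,  // ids of candidate non-local parameters (e.g. poses)
    const std::function<std::pair<double, std::vector<int>>(int)>& segment_given_omega) {
    double best = std::numeric_limits<double>::infinity();
    std::pair<int, std::vector<int>> argbest{ -1, {} };
    for (int w : omegas) {
        std::pair<double, std::vector<int>> result = segment_given_omega(w);  // (E(x*, w), x*)
        if (result.first < best) {
            best = result.first;
            argbest = { w, std::move(result.second) };
        }
    }
    return argbest;
}
```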
  • 125.
  • 126.
  • 127. Summary Exact Transformation (global optimum) Or Relaxed transformation (partially optimal) Labelling Problem Submodular Quadratic Pseudoboolean Function Move making algorithms Sub-problem T S st-mincut
  • 129. Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels E(x 1 ,x 2 ,x 3 ) = θ 12 (x 1 ,x 2 ) + θ 23 (x 2 ,x 3 ) θ ij (x i ,x j ) = 0 if x i =x j C otherwise { E(6,6,6) = 0 + 0 = 0
  • 130. Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels E(x 1 ,x 2 ,x 3 ) = θ 12 (x 1 ,x 2 ) + θ 23 (x 2 ,x 3 ) θ ij (x i ,x j ) = 0 if x i =x j C otherwise { E(6,6,6) = 0 + 0 = 0 E(6,7,7) = 1 + 0 = 1
  • 131. Use of Higher order Potentials Stereo - Woodford et al. CVPR 2008 P 1 P 2 P 3 5 6 7 8 Pixels Disparity Labels Pairwise potentials penalize slanted planar surfaces E(x 1 ,x 2 ,x 3 ) = θ 12 (x 1 ,x 2 ) + θ 23 (x 2 ,x 3 ) θ ij (x i ,x j ) = { 0 if x i =x j ; C otherwise } (here C = 1) E(6,6,6) = 0 + 0 = 0 E(6,7,7) = 1 + 0 = 1 E(6,7,8) = 1 + 1 = 2
  • 132. Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move x E ( x ) x c Transformation function T ( t ) T ( x c , t ) = x n = x c + t
  • 133. Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move E ( x ) x c Transformation function T E m Move Energy ( t ) x E m ( t ) = E ( T ( x c , t )) T ( x c , t ) = x n = x c + t
  • 134. Computing the Optimal Move Search Neighbourhood Current Solution Optimal Move E ( x ) x c E m ( t ) = E ( T ( x c , t )) Transformation function T E m Move Energy T ( x c , t ) = x n = x c + t minimize t* Optimal Move ( t ) x
