This document describes a method called Inference by Learning (IbyL) that speeds up inference in Markov random fields (MRFs) while maintaining solution accuracy. IbyL builds a coarse-to-fine hierarchy of MRFs and learns pruning classifiers at each level to iteratively reduce the solution space, exploiting the piecewise smooth structure of MAP solutions. On stereo matching and optical flow problems, IbyL achieves significant speed-ups over direct optimization while obtaining solutions of comparably low energy that agree closely with the best computed solutions.
Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers
On-line resources: http://imagine.enpc.fr/~conejob/ibyl/
TBD, X/X/2014
Bruno Conejo, PhD student, Ecole des Ponts ParisTech / Research Analyst, GPS, Caltech
Nikos Komodakis, Associate Professor, Ecole des Ponts ParisTech
with S. Leprince & J.P. Avouac (GPS, Caltech)
Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers (NIPS 2014)
B. Conejo, N. Komodakis – Inference by Learning
Introduction:
Associated materials:
Paper, code, slides and poster are available on-line: http://imagine.enpc.fr/~conejob/ibyl/
Motivations:
1. Speed up the inference of MRFs that have a piecewise smooth MAP.
2. Maintain the accuracy of the inference solution.
Approach:
1. Exploit the piecewise smooth structure of the MAP by iteratively optimizing a coarse-to-fine representation of the MRF.
2. Rely on learning to progressively reduce the solution space.
Notations:
Let's consider a discrete MRF defined over a support graph G = (V, E), where V is the set of vertices, E the set of edges, and L the discrete label set:
θ_i : the unary term of vertex i (the θ_i form the set of unary terms)
θ_ij : the pairwise term of edge ij (the θ_ij form the set of pairwise terms)
x_i ∈ L : the configuration (label) of vertex i
x = (x_i)_{i ∈ V} : the configuration of the MRF
Notations:
The energy, i.e., the total cost, is given by:
E(x) = Σ_{i ∈ V} θ_i(x_i) + Σ_{ij ∈ E} θ_ij(x_i, x_j)
The MAP (Maximum A Posteriori) solution minimizes the energy:
x* = argmin_x E(x)
The pruning matrix P is defined by its entries p_i(a) ∈ {0, 1}, one per vertex i and label a, and its associated solution space is:
X(P) = { x : p_i(x_i) = 1, ∀i ∈ V }
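In code, the energy and the pruned solution space look like the following sketch (the array layout and function names are my own, not the paper's):

```python
import numpy as np

def mrf_energy(unary, pairwise, edges, x):
    """E(x) = sum_i theta_i(x_i) + sum_{ij in E} theta_ij(x_i, x_j).

    unary:    (n_vertices, n_labels) array, unary[i, a] = theta_i(a)
    pairwise: dict mapping an edge (i, j) to an (n_labels, n_labels) array
    edges:    list of (i, j) vertex pairs
    x:        length-n_vertices label assignment
    """
    e = sum(unary[i, x[i]] for i in range(len(x)))
    e += sum(pairwise[(i, j)][x[i], x[j]] for (i, j) in edges)
    return float(e)

def in_solution_space(pruning, x):
    """x belongs to X(P) iff every assigned label is still active, p_i(x_i) = 1."""
    return all(pruning[i, x[i]] == 1 for i in range(len(x)))
```

Setting a row of the pruning matrix to mostly zeros removes those labels from the search, which is exactly what makes the restricted optimization cheaper.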
Approach:
To obtain an inference speed-up we solve the restricted problem:
argmin_{x ∈ X(P)} E(x)
where, ideally:
1. The MAP solution belongs to X(P).
2. Most elements of P are 0 (i.e., most labels are pruned).
The IbyL framework iteratively estimates P by:
1. Building a coarse-to-fine set of MRFs from the original one.
2. Learning, at each scale, pruning classifiers to refine P.
Model coarsening:
Given MRF^(s) and a grouping function g^(s) mapping each fine vertex to a coarse vertex, we create a coarsened MRF^(s+1) (with a slight abuse of notation):
The unary and pairwise potentials of MRF^(s+1) are obtained by summing the fine potentials over each group: since all grouped vertices share one label, intra-group pairwise terms fold into the coarse unaries, and cross-group pairwise terms sum into the coarse pairwise terms.
The vertices and edges of MRF^(s+1) are the images of V^(s) and E^(s) under g^(s): each group becomes a single coarse vertex, and two coarse vertices are linked whenever a fine edge connects their groups.
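A minimal sketch of this coarsening, under the convention above that grouped vertices share one label (the data layout and function names are my own assumptions):

```python
import numpy as np
from collections import defaultdict

def coarsen(unary, pairwise, edges, group):
    """Coarsen an MRF under a grouping function `group` (fine vertex -> coarse
    vertex index). All vertices of a group share one label, so:
      - coarse unaries sum the grouped unaries, plus the diagonal
        theta_ij(a, a) of intra-group pairwise terms;
      - cross-group pairwise terms are summed into the coarse edge terms.
    """
    n_labels = unary.shape[1]
    c_unary = np.zeros((max(group) + 1, n_labels))
    for i, g in enumerate(group):
        c_unary[g] += unary[i]
    c_pair = defaultdict(lambda: np.zeros((n_labels, n_labels)))
    for (i, j) in edges:
        gi, gj = group[i], group[j]
        if gi == gj:
            # intra-group edge: both endpoints take the same label a -> theta_ij(a, a)
            c_unary[gi] += np.diag(pairwise[(i, j)])
        else:
            # orient every coarse edge as (min, max), transposing when needed
            key = (min(gi, gj), max(gi, gj))
            c_pair[key] += pairwise[(i, j)] if gi < gj else pairwise[(i, j)].T
    return c_unary, dict(c_pair), sorted(c_pair)
```

Applying this repeatedly yields the hierarchy of progressively coarser MRFs used in the next slides.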
Coarse-to-fine optimization and label pruning:
We iteratively apply the previous coarsening to build a coarse-to-fine set of N+1 progressively coarser MRFs:
MRF^(0), MRF^(1), ..., MRF^(N), with MRF^(0) the original MRF and MRF^(s+1) the coarsening of MRF^(s).
We set all elements of the coarsest pruning matrix P^(N) to 1 (no pruning).
At each scale s, from coarsest to finest, we apply the following steps:
1. Optimize the current MRF.
2. Update the next scale's pruning matrix.
3. Up-sample the current solution for the next scale.
Coarse-to-fine optimization and label pruning:
Step 1: Optimize the current MRF:
x^(s) = argmin_{x ∈ X(P^(s))} E^(s)(x)
Step 2: Update the next scale's pruning matrix:
i. Compute the feature map from the current MRF and its solution.
ii. Update the pruning matrix P^(s-1) using an off-line trained classifier.
Step 3: Up-sample the current solution to initialize the next scale.
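The three steps can be sketched end-to-end on a toy problem. Both helpers below are deliberately simplified stand-ins: the paper optimizes with Fast-PD and prunes with learned classifiers, whereas this sketch picks the cheapest active label per vertex (ignoring pairwise terms) and prunes by a unary-cost margin:

```python
import numpy as np

def optimize(unaries, pruning):
    """Stand-in for step 1: pick, per vertex, the cheapest ACTIVE label.
    (The paper uses a full MRF solver, Fast-PD; pairwise terms are ignored
    here only to keep the sketch short.)"""
    return np.where(pruning == 1, unaries, np.inf).argmin(axis=1)

def prune(unaries, x, margin):
    """Stand-in for step 2: keep label a at vertex i only if theta_i(a) is
    within `margin` of the current solution's cost. In IbyL this decision
    comes from an off-line trained classifier over the feature map."""
    ref = unaries[np.arange(len(x)), x][:, None]
    return (unaries <= ref + margin).astype(int)

# Two-scale toy run on 4 vertices with 3 labels; the coarse scale groups
# vertices in consecutive pairs, so coarse unaries are sums over each pair.
fine = np.array([[0., 4., 9.], [1., 5., 9.], [6., 0., 9.], [7., 1., 9.]])
coarse = fine[0::2] + fine[1::2]                 # grouped unaries (cf. coarsening)

pruning = np.ones(coarse.shape, dtype=int)       # coarsest scale: no pruning
x = optimize(coarse, pruning)                    # step 1: optimize current MRF
pruning = prune(coarse, x, margin=3.0)           # step 2: next scale's pruning
pruning = np.repeat(pruning, 2, axis=0)          # step 3: up-sample to fine scale
x = optimize(fine, pruning)                      # finest scale, restricted labels
```

The final scale only considers 4 of the 12 (vertex, label) pairs, which is where the speed-up comes from.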
Inference by learning framework:
We still need to define:
1. How to compute the feature map.
2. How to train the classifiers.
Feature map:
We stack K individual scalar features defined for each (vertex, label) pair:
1. Presence of strong discontinuity (PSD).
2. Local energy variation.
3. Unary coarsening.
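As an illustration only, here is a feature stack with simplified stand-ins for two of the features above; the exact feature definitions are given in the paper and are not reproduced here:

```python
import numpy as np

def feature_map(unaries, edges, x):
    """Illustrative K=2 feature stack per (vertex, label) pair.
      - 'local energy variation' stand-in: theta_i(a) - theta_i(x_i)
      - 'presence of strong discontinuity' stand-in: 1 if vertex i has a
        neighbor with a different label in the current solution x
    Returns an array of shape (n_vertices, n_labels, 2)."""
    n, n_labels = unaries.shape
    cur = unaries[np.arange(n), x][:, None]      # cost of the current labels
    f_var = unaries - cur                        # energy variation stand-in
    psd = np.zeros(n)
    for (i, j) in edges:                         # discontinuity stand-in
        if x[i] != x[j]:
            psd[i] = psd[j] = 1.0
    f_psd = np.repeat(psd[:, None], n_labels, axis=1)
    return np.stack([f_var, f_psd], axis=-1)
```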
Learning pruning classifiers:
On a training set of MRFs, we run the IbyL framework without any pruning, i.e., with every pruning matrix set to all ones. We keep track of the features and compute, for every (vertex, label) pair, a ground-truth pruning decision from the unpruned solutions, where decisions are aggregated with the binary OR operator (∨).
Learning pruning classifiers:
We split the training samples into two groups w.r.t. the PSD feature.
For each group:
1. We have two classes (c0 = to prune, c1 = to remain active), defined from the ground-truth pruning decisions.
2. We weight c0 by 1, and c1 by the pruning aggressiveness factor.
3. We train a linear C-SVM classifier.
During testing, we apply the trained classifier of the corresponding group (w.r.t. the PSD feature).
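A dependency-free sketch of class-weighted linear SVM training (subgradient descent on the weighted hinge loss; in practice one would use an off-the-shelf linear C-SVM package, and the function names here are my own):

```python
import numpy as np

def train_weighted_linear_svm(X, y, w0, w1, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM via subgradient descent on a class-weighted hinge
    loss. Class c0 ('to prune') is weighted by w0 and class c1 ('to remain
    active') by w1: raising w1 makes wrongly pruning an active label more
    costly, which is how aggressive the pruning is controlled."""
    w = np.zeros(X.shape[1])
    b = 0.0
    s = np.where(y == 1, 1.0, -1.0)              # classes as +/-1
    cw = np.where(y == 1, w1, w0)                # per-sample class weight
    for _ in range(epochs):
        viol = s * (X @ w + b) < 1               # hinge-violating samples
        gw = lam * w - (cw[viol] * s[viol]) @ X[viol] / len(y)
        gb = -np.sum(cw[viol] * s[viol]) / len(y)
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, X):
    return (X @ w + b >= 0).astype(int)          # 1 = keep the label active
```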
Experiments: Setup
We experiment with two problems:
1. Stereo matching.
2. Optical flow.
We evaluate different values of the pruning aggressiveness factor, and compute:
1. The speed-up w.r.t. the direct optimization.
2. The ratio of active labels.
3. The energy ratio w.r.t. the direct optimization.
4. The MAP agreement w.r.t. the best computed solution.
As an optimization sub-routine we use Fast-PD.
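The four reported quantities reduce to simple ratios; a small sketch (argument names are my own):

```python
def evaluation_metrics(t_direct, t_ibyl, e_direct, e_ibyl,
                       n_active, n_total, n_agree, n_vertices):
    """The four quantities reported in the experiments. Energy ratio and MAP
    agreement close to 1 mean the pruned optimization matches the quality of
    the direct one; a small active-label ratio explains the speed-up."""
    return {
        "speed_up":           t_direct / t_ibyl,
        "active_label_ratio": n_active / n_total,
        "energy_ratio":       e_ibyl / e_direct,
        "map_agreement":      n_agree / n_vertices,
    }
```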
Conclusions
The IbyL framework:
1. Gives an important speed-up while maintaining excellent accuracy of the solution.
2. Can easily be adapted to any MRF task by computing task-dependent features.
3. Can easily be adapted to higher-order MRFs.
4. Is available for download at: http://imagine.enpc.fr/~conejob/ibyl/