Computer vision: models, learning and inference
Chapter 6: Learning and inference in vision

Please send errata to s.prince@cs.ucl.ac.uk
Structure
• Computer vision models
    – Three types of model
•   Worked example 1: Regression
•   Worked example 2: Classification
•   Which type should we choose?
•   Applications



          Computer vision: models, learning and inference. ©2011 Simon J.D. Prince   2
Computer vision models
• Observe measured data, x
• Draw inferences from it about state of world, w

Examples:

   – Observe adjacent frames in video sequence
   – Infer camera motion

   – Observe image of face
   – Infer identity

   – Observe images from two displaced cameras
   – Infer 3d structure of scene


Regression vs. Classification
• Observe measured data, x
• Draw inferences from it about world, w

When the world state w is continuous, we call this regression

When the world state w is discrete, we call this classification




Ambiguity of visual world
• Unfortunately visual measurements may be
  compatible with more than one world state w
  – Measurement process is noisy
  – Inherent ambiguity in visual data
• Conclusion: the best we can do is compute a
  probability distribution Pr(w|x) over possible
  states of world


Refined goal of computer vision
• Take observations x
• Return probability distribution Pr(w|x) over
  possible worlds compatible with data

(not always tractable – might have to settle for
  an approximation to this distribution, samples
  from it, or the best (MAP) solution for w)


Components of solution
We need

• A model that mathematically relates the visual data x to the world state w. The model specifies a family of relationships; the particular relationship depends on parameters θ

• A learning algorithm: fits the parameters θ from paired training examples {xi, wi}
• An inference algorithm: uses the model to return Pr(w|x) given new observed data x

Types of Model
The model mathematically relates the visual data x to
  the world state w. Every model falls into one of
  three categories

1. Model contingency of the world on the data Pr(w|x)
2. Model joint occurrence of world and data Pr(x,w)
3. Model contingency of data on world Pr(x|w)




Generative vs. Discriminative
1. Model contingency of the world on the data Pr(w|x)
               (DISCRIMINATIVE MODEL)

2. Model joint occurrence of world and data Pr(x,w)
3. Model contingency of data on world Pr(x|w)
                  (GENERATIVE MODELS)

Generative models define a probability model over the data, so when we draw samples from the model we GENERATE new data

Type 1: Model Pr(w|x) -
                Discriminative
How to model Pr(w|x)?
  1. Choose an appropriate form for Pr(w)
  2. Make parameters a function of x
  3. Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: just evaluate Pr(w|x)

Type 2: Model Pr(x,w) - Generative

How to model Pr(x,w)?
  1. Concatenate x and w to make z = [xT wT]T
  2. Model the pdf of z
  3. Pdf takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: compute Pr(w|x) using Bayes' rule

Type 3: Pr(x|w) - Generative

How to model Pr(x|w)?
  1. Choose an appropriate form for Pr(x)
  2. Make parameters a function of w
  3. Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: define prior Pr(w) and then compute Pr(w|x) using Bayes' rule

Summary
Three different types of model depend on the quantity of interest:
  1. Pr(w|x) Discriminative
  2. Pr(x,w) Generative
  3. Pr(x|w) Generative

Inference in discriminative models is easy as we directly model the posterior Pr(w|x). Generative models require a more complex inference process using Bayes' rule
Structure
• Computer vision models
    – Three types of model
•   Worked example 1: Regression
•   Worked example 2: Classification
•   Which type should we choose?
•   Applications



Worked example 1: Regression

Consider simple case where

• we make a univariate continuous measurement x
• use this to predict a univariate continuous state w


(regression as world state is continuous)




Regression application 1:
  Pose from Silhouette




Regression application 2:
  Changing face pose




Regression application 3:
 Head pose estimation




Worked example 1: Regression

Consider simple case where

• we make a univariate continuous measurement x
• use this to predict a univariate continuous state w


(regression as world state is continuous)




Type 1: Model Pr(w|x) -
                Discriminative

How to model Pr(w|x)?
  1. Choose an appropriate form for Pr(w)
  2. Make parameters a function of x
  3. Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: just evaluate Pr(w|x)

Type 1: Model Pr(w|x) -
                Discriminative
How to model Pr(w|x)?
  1. Choose an appropriate form for Pr(w)
  2. Make parameters a function of x
  3. Function takes parameters q that define its shape

1. Choose normal distribution over w
2. Make mean μ a linear function of x (variance constant)
3. Parameters are φ0, φ1, σ².

This model is called linear regression.
Parameters φ0, φ1, σ² are the y-offset, slope and variance

Learning algorithm: learn θ from training data x,w, e.g. by MAP estimation

Inference algorithm: just evaluate Pr(w|x) for new data x

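The linear-regression recipe above can be sketched in a few lines. This is an illustrative maximum-likelihood fit (a MAP fit, as on the learning slide, would add a prior over the parameters); the function names and toy data are our own:

```python
import numpy as np

def fit_linear_regression(x, w):
    """ML fit of Pr(w|x) = Norm_w[phi0 + phi1*x, sigma2]."""
    phi1 = np.cov(x, w, bias=True)[0, 1] / np.var(x)   # slope
    phi0 = np.mean(w) - phi1 * np.mean(x)              # y-offset
    sigma2 = np.mean((w - phi0 - phi1 * x) ** 2)       # ML (biased) variance
    return phi0, phi1, sigma2

def predictive_density(x_new, w_query, phi0, phi1, sigma2):
    """Inference: evaluate Pr(w|x) at w_query for a new measurement x_new."""
    mu = phi0 + phi1 * x_new
    return np.exp(-0.5 * (w_query - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

# Toy data: w = 2 + 3x plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
w = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 200)
phi0, phi1, sigma2 = fit_linear_regression(x, w)
```

Inference really is just evaluating the fitted density at the new x, which is why discriminative models are convenient.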
Type 2: Model Pr(x,w) - Generative
How to model Pr(x,w)?

   1. Concatenate x and w to make z = [xT wT]T
   2. Model the pdf of z
   3. Pdf takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: compute Pr(w|x) using Bayes' rule

Type 2: Model Pr(x,w) - Generative
How to model Pr(x,w)?

  1. Concatenate x and w to make z = [xT wT]T
  2. Model the pdf of z
  3. Pdf takes parameters θ that define its shape

1. Concatenate x and w to make z = [xT wT]T
2. Model the pdf of z as a normal distribution
3. Pdf takes parameters μ and Σ that define its shape
Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: compute Pr(w|x) using Bayes rule



                                                                (normalize each column in figure)
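For this 1-D case, fitting the joint Gaussian and applying Bayes' rule reduces to estimating a 2-D normal and conditioning it on the observed x. A minimal sketch (function names and toy data are our own; the conditioning formulas are the standard Gaussian results):

```python
import numpy as np

def fit_joint_gaussian(x, w):
    """ML fit of Pr(x,w) as a 2-D normal over z = [x, w]^T."""
    z = np.stack([x, w])           # shape (2, N)
    mu = z.mean(axis=1)
    Sigma = np.cov(z, bias=True)   # 2x2 ML covariance
    return mu, Sigma

def posterior_w_given_x(x_new, mu, Sigma):
    """Bayes' rule for a joint Gaussian reduces to conditioning:
    Pr(w|x) is normal with the mean and variance returned here."""
    mu_w = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (x_new - mu[0])
    var_w = Sigma[1, 1] - Sigma[1, 0] ** 2 / Sigma[0, 0]
    return mu_w, var_w

# Toy data: w depends linearly on x plus noise
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
w = 1.0 + 2.0 * x + rng.normal(0.0, 0.2, 500)
mu, Sigma = fit_joint_gaussian(x, w)
mu_w, var_w = posterior_w_given_x(0.5, mu, Sigma)
```

Conditioning a column of the joint density and renormalizing is exactly the "normalize each column in figure" step described above.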
Type 3: Pr(x|w) - Generative
How to model Pr(x|w)?

   1. Choose an appropriate form for Pr(x)
   2. Make parameters a function of w
   3. Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: define prior Pr(w) and then compute Pr(w|x) using Bayes' rule

Type 3: Pr(x|w) - Generative
How to model Pr(x|w)?

  1. Choose an appropriate form for Pr(x)
  2. Make parameters a function of w
  3. Function takes parameters θ that define its shape

1. Choose normal distribution over x
2. Make mean μ a linear function of w (variance constant)
3. Parameters are φ0, φ1, σ².
Learning algorithm: learn θ from training data x,w, e.g. by MAP estimation
Pr(x|w) × Pr(w) = Pr(x,w)

Can get back to the joint probability Pr(x,w)
Inference algorithm: compute Pr(w|x) using Bayes rule



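Because the likelihood Pr(x|w) = Norm_x[φ0 + φ1 w, σ²] and a normal prior Pr(w) are both Gaussian in w, Bayes' rule here gives a closed-form Gaussian posterior. A sketch under those assumptions (the helper name is ours):

```python
import numpy as np

def posterior_w(x_obs, phi0, phi1, sigma2, mu_p, sigma2_p):
    """Combine likelihood Pr(x|w) = Norm_x[phi0 + phi1*w, sigma2]
    with prior Pr(w) = Norm_w[mu_p, sigma2_p] via Bayes' rule.
    Both factors are Gaussian in w, so the posterior is Gaussian:
    its precision is the sum of the two precisions."""
    prec = phi1 ** 2 / sigma2 + 1.0 / sigma2_p
    mean = (phi1 * (x_obs - phi0) / sigma2 + mu_p / sigma2_p) / prec
    return mean, 1.0 / prec

# Tight likelihood, broad prior: posterior concentrates near (x - phi0)/phi1
mean, var = posterior_w(4.0, phi0=0.0, phi1=2.0, sigma2=0.01,
                        mu_p=0.0, sigma2_p=100.0)
```

With a broad prior the posterior mean lands close to inverting the measurement model, 4.0 / 2.0 = 2.0; a tighter prior would pull it toward mu_p.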
Structure
• Computer vision models
    – Three types of model
•   Worked example 1: Regression
•   Worked example 2: Classification
•   Which type should we choose?
•   Applications



          Computer vision: models, learning and inference. ©2011 Simon J.D. Prince   34
Worked example 2: Classification

Consider simple case where

• we make a univariate continuous measurement x
• use this to predict a discrete binary world state w

(classification as world state is discrete)

Classification Example 1:
     Face Detection




Classification Example 2:
  Pedestrian Detection




Classification Example 3:
    Face Recognition




Classification Example 4:
Semantic Segmentation




Worked example 2: Classification

Consider simple case where

• we make a univariate continuous measurement x
• use this to predict a discrete binary world state w

(classification as world state is discrete)

Type 1: Model Pr(w|x) -
                Discriminative
How to model Pr(w|x)?

   – Choose an appropriate form for Pr(w)
   – Make parameters a function of x
   – Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: just evaluate Pr(w|x)

Type 1: Model Pr(w|x) -
               Discriminative
How to model Pr(w|x)?

  1. Choose an appropriate form for Pr(w)
  2. Make parameters a function of x
  3. Function takes parameters θ that define its shape

1. Choose Bernoulli distribution for Pr(w)
2. Make parameters a function of x
3. Function takes parameters φ0 and φ1

This model is called logistic regression.
Two parameters, φ0 and φ1

Learning by standard methods (ML, MAP, Bayesian)
Inference: just evaluate Pr(w|x)
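A minimal sketch of this slide's model, Pr(w=1|x) = sig[φ0 + φ1 x], fit by simple gradient ascent on the log-likelihood (an ML fit; the slide also mentions MAP and Bayesian learning). Function names and toy data are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_logistic(x, w, lr=0.5, n_iter=2000):
    """ML fit of Pr(w=1|x) = sig[phi0 + phi1*x] by gradient ascent
    on the mean Bernoulli log-likelihood."""
    phi0 = phi1 = 0.0
    for _ in range(n_iter):
        p = sigmoid(phi0 + phi1 * x)
        phi0 += lr * np.mean(w - p)          # gradient w.r.t. offset
        phi1 += lr * np.mean((w - p) * x)    # gradient w.r.t. slope
    return phi0, phi1

# Toy data: class 0 centred at x = -1, class 1 at x = +1
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-1, 0.5, 200), rng.normal(1, 0.5, 200)])
w = np.concatenate([np.zeros(200), np.ones(200)])
phi0, phi1 = fit_logistic(x, w)
```

Inference is again just an evaluation: sigmoid(phi0 + phi1 * x_new) is Pr(w=1|x) directly.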
Type 2: Model Pr(x,w) - Generative
How to model Pr(x,w)?

   1. Concatenate x and w to make z = [xT wT]T
   2. Model the pdf of z
   3. Pdf takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: compute Pr(w|x) using Bayes' rule

Type 2 Model
Can’t build this model very easily:

• Concatenate continuous vector x and discrete
  w to make z
• No obvious probability distribution to model
  joint probability of discrete and continuous



         Computer vision: models, learning and inference. ©2011 Simon J.D. Prince   45
Type 3: Pr(x|w) - Generative
How to model Pr(x|w)?

   1. Choose an appropriate form for Pr(x)
   2. Make parameters a function of w
   3. Function takes parameters θ that define its shape

Learning algorithm: learn parameters θ from training data x,w
Inference algorithm: define prior Pr(w) and then compute Pr(w|x) using Bayes' rule

Type 3: Pr(x|w) - Generative
How to model Pr(x|w)?

  1. Choose an appropriate form for Pr(x)
  2. Make parameters a function of w
  3. Function takes parameters θ that define its shape

1. Choose a Gaussian distribution for Pr(x)
2. Make parameters a function of discrete binary w
3. Function takes parameters μ0, μ1, σ²0, σ²1 that define its shape
Learn parameters μ0, μ1, σ²0, σ²1 that define its shape
Inference algorithm: Define prior Pr(w) and
      then compute Pr(w|x) using Bayes’ rule




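The class-conditional model above can be sketched as follows: fit a normal to x for each class, estimate the prior Pr(w) from class frequencies, and apply Bayes' rule at inference time. Names and toy data are ours:

```python
import numpy as np

def fit_class_conditional(x, w):
    """Fit Pr(x|w=k) = Norm_x[mu_k, sigma2_k] for k in {0,1},
    plus the prior Pr(w=1) from class frequencies."""
    params = {}
    for k in (0, 1):
        xk = x[w == k]
        params[k] = (xk.mean(), xk.var())
    prior1 = np.mean(w)
    return params, prior1

def norm_pdf(x, mu, s2):
    return np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2 * np.pi * s2)

def posterior_w1(x_new, params, prior1):
    """Bayes' rule: Pr(w=1|x) = Pr(x|w=1)Pr(w=1) / sum_k Pr(x|w=k)Pr(w=k)."""
    l0 = norm_pdf(x_new, *params[0]) * (1.0 - prior1)
    l1 = norm_pdf(x_new, *params[1]) * prior1
    return l1 / (l0 + l1)

# Toy data: class 0 centred at x = -1, class 1 at x = +1
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(1, 0.5, 300)])
w = np.concatenate([np.zeros(300), np.ones(300)])
params, prior1 = fit_class_conditional(x, w)
```

Unlike the discriminative version, inference here needs the extra Bayes'-rule normalization step, which is the trade-off the summary slides describe.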
Structure
• Computer vision models
    – Three types of model
•   Worked example 1: Regression
•   Worked example 2: Classification
•   Which type should we choose?
•   Applications



Which type of model to use?
1. Generative methods model the data itself; this is costly, and many aspects of the data may have no influence on the world state

Which type of model to use?
2. Inference simple in discriminative models
3. Data really is generated from world –
   generative matches this
4. If missing data, then generative preferred
5. Generative allows imposition of prior
   knowledge specified by user



Structure
• Computer vision models
    – Three types of model
•   Worked example 1: Regression
•   Worked example 2: Classification
•   Which type should we choose?
•   Applications



Application: Skin Detection




Application: Background subtraction




Application: Background subtraction




But consider this scene in which the foliage is blowing in
  the wind. A normal distribution is not good enough!
  Need a way to make more complex distributions
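As a baseline for comparison, here is the simple per-pixel normal background model that the slide argues is not good enough for moving foliage; a mixture of distributions per pixel would be the usual extension. All names, thresholds, and the synthetic frames are illustrative:

```python
import numpy as np

def fit_background(frames):
    """Per-pixel normal background model from a stack of frames (T, H, W)."""
    mu = frames.mean(axis=0)
    sigma2 = frames.var(axis=0) + 1e-6   # avoid division by zero
    return mu, sigma2

def foreground_mask(frame, mu, sigma2, thresh=3.0):
    """Flag pixels more than `thresh` standard deviations from the mean."""
    return np.abs(frame - mu) > thresh * np.sqrt(sigma2)

# Synthetic static background plus sensor noise
rng = np.random.default_rng(4)
frames = rng.normal(100.0, 2.0, (50, 8, 8))
mu, sigma2 = fit_background(frames)

# A new frame with a bright "foreground" object in one corner
test_frame = frames[0].copy()
test_frame[2:4, 2:4] = 160.0
mask = foreground_mask(test_frame, mu, sigma2)
```

A pixel whose true appearance alternates (leaf, sky, leaf, ...) violates the single-normal assumption, which is exactly why the slide calls for more complex distributions.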
Future Plan
• Seen three types of model




  – Probability density function
  – Linear regression
  – Logistic regression
• Next three chapters concern these models
Conclusion
• To do computer vision we build a model relating the image data x to the world state w that we wish to estimate

• Three types of model
   • Model Pr(w|x) -- discriminative
   • Model Pr(x,w) -- generative
   • Model Pr(x|w) -- generative

Icml2012 tutorial representation_learningIcml2012 tutorial representation_learning
Icml2012 tutorial representation_learningzukun
 
Advances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionAdvances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionzukun
 
Gephi tutorial: quick start
Gephi tutorial: quick startGephi tutorial: quick start
Gephi tutorial: quick startzukun
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysiszukun
 
Object recognition with pictorial structures
Object recognition with pictorial structuresObject recognition with pictorial structures
Object recognition with pictorial structureszukun
 
Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities zukun
 

More from zukun (20)

My lyn tutorial 2009
My lyn tutorial 2009My lyn tutorial 2009
My lyn tutorial 2009
 
ETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCVETHZ CV2012: Tutorial openCV
ETHZ CV2012: Tutorial openCV
 
ETHZ CV2012: Information
ETHZ CV2012: InformationETHZ CV2012: Information
ETHZ CV2012: Information
 
Siwei lyu: natural image statistics
Siwei lyu: natural image statisticsSiwei lyu: natural image statistics
Siwei lyu: natural image statistics
 
Lecture9 camera calibration
Lecture9 camera calibrationLecture9 camera calibration
Lecture9 camera calibration
 
Brunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer visionBrunelli 2008: template matching techniques in computer vision
Brunelli 2008: template matching techniques in computer vision
 
Modern features-part-4-evaluation
Modern features-part-4-evaluationModern features-part-4-evaluation
Modern features-part-4-evaluation
 
Modern features-part-3-software
Modern features-part-3-softwareModern features-part-3-software
Modern features-part-3-software
 
Modern features-part-2-descriptors
Modern features-part-2-descriptorsModern features-part-2-descriptors
Modern features-part-2-descriptors
 
Modern features-part-1-detectors
Modern features-part-1-detectorsModern features-part-1-detectors
Modern features-part-1-detectors
 
Modern features-part-0-intro
Modern features-part-0-introModern features-part-0-intro
Modern features-part-0-intro
 
Lecture 02 internet video search
Lecture 02 internet video searchLecture 02 internet video search
Lecture 02 internet video search
 
Lecture 01 internet video search
Lecture 01 internet video searchLecture 01 internet video search
Lecture 01 internet video search
 
Lecture 03 internet video search
Lecture 03 internet video searchLecture 03 internet video search
Lecture 03 internet video search
 
Icml2012 tutorial representation_learning
Icml2012 tutorial representation_learningIcml2012 tutorial representation_learning
Icml2012 tutorial representation_learning
 
Advances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer visionAdvances in discrete energy minimisation for computer vision
Advances in discrete energy minimisation for computer vision
 
Gephi tutorial: quick start
Gephi tutorial: quick startGephi tutorial: quick start
Gephi tutorial: quick start
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysis
 
Object recognition with pictorial structures
Object recognition with pictorial structuresObject recognition with pictorial structures
Object recognition with pictorial structures
 
Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities Iccv2011 learning spatiotemporal graphs of human activities
Iccv2011 learning spatiotemporal graphs of human activities
 

CV Models Learning Inference

  • 1. Computer vision: models, learning and inference Chapter 6 Learning and inference in vision Please send errata to s.prince@cs.ucl.ac.uk
  • 2. Structure • Computer vision models – Three types of model • Worked example 1: Regression • Worked example 2: Classification • Which type should we choose? • Applications Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 2
  • 3. Computer vision models • Observe measured data, x • Draw inferences from it about state of world, w Examples: – Observe adjacent frames in video sequence – Infer camera motion – Observe image of face – Infer identity – Observe images from two displaced cameras – Infer 3d structure of scene Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 3
  • 4. Regression vs. Classification • Observe measured data, x • Draw inferences from it about world, w When the world state w is continuous we’ll call this regression When the world state w is discrete we call this classification Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 4
  • 5. Ambiguity of visual world • Unfortunately visual measurements may be compatible with more than one world state w – Measurement process is noisy – Inherent ambiguity in visual data • Conclusion: the best we can do is compute a probability distribution Pr(w|x) over possible states of world Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 5
  • 6. Refined goal of computer vision • Take observations x • Return probability distribution Pr(w|x) over possible worlds compatible with data (not always tractable – might have to settle for an approximation to this distribution, samples from it, or the best (MAP) solution for w) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 6
  • 7. Components of solution We need • A model that mathematically relates the visual data x to the world state w. The model specifies a family of relationships; the particular relationship depends on parameters θ • A learning algorithm: fits parameters θ from paired training examples xi, wi • An inference algorithm: uses the model to return Pr(w|x) given new observed data x. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 7
  • 8. Types of Model The model mathematically relates the visual data x to the world state w. Every model falls into one of three categories 1. Model contingency of the world on the data Pr(w|x) 2. Model joint occurrence of world and data Pr(x,w) 3. Model contingency of data on world Pr(x|w) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 8
  • 9. Generative vs. Discriminative 1. Model contingency of the world on the data Pr(w|x) (DISCRIMINATIVE MODEL) 2. Model joint occurrence of world and data Pr(x,w) 3. Model contingency of data on world Pr(x|w) (GENERATIVE MODELS) Generative models define a probability distribution over the data, so when we draw samples from the model we GENERATE new data Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 9
  • 10. Type 1: Model Pr(w|x) - Discriminative How to model Pr(w|x)? 1. Choose an appropriate form for Pr(w) 2. Make parameters a function of x 3. Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: just evaluate Pr(w|x) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 10
  • 11. Type 2: Model Pr(x,w) - Generative How to model Pr(x,w)? 1. Concatenate x and w to make z = [xᵀ wᵀ]ᵀ 2. Model the pdf of z 3. Pdf takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 11
  • 12. Type 3: Pr(x|w) - Generative How to model Pr(x|w)? 1. Choose an appropriate form for Pr(x) 2. Make parameters a function of w 3. Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: Define prior Pr(w) and then compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 12
  • 13. Summary Three different types of model depend on the quantity of interest: 1. Pr(w|x) Discriminative 2. Pr(x,w) Generative 3. Pr(x|w) Generative Inference in discriminative models is easy, as we directly model the posterior Pr(w|x). Generative models require a more complex inference process using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 13
  • 14. Structure • Computer vision models – Three types of model • Worked example 1: Regression • Worked example 2: Classification • Which type should we choose? • Applications Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 14
  • 15. Worked example 1: Regression Consider simple case where • we make a univariate continuous measurement x • use this to predict a univariate continuous state w (regression as world state is continuous) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 15
  • 16. Regression application 1: Pose from Silhouette Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 16
  • 17. Regression application 2: Changing face pose Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 17
  • 18. Regression application 3: Head pose estimation Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 18
  • 19. Worked example 1: Regression Consider simple case where • we make a univariate continuous measurement x • use this to predict a univariate continuous state w (regression as world state is continuous) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 19
  • 20. Type 1: Model Pr(w|x) - Discriminative How to model Pr(w|x)? 1. Choose an appropriate form for Pr(w) 2. Make parameters a function of x 3. Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: just evaluate Pr(w|x) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 20
  • 21. Type 1: Model Pr(w|x) - Discriminative How to model Pr(w|x)? 1. Choose an appropriate form for Pr(w) 2. Make parameters a function of x 3. Function takes parameters θ that define its shape 1. Choose normal distribution over w 2. Make mean μ a linear function of x (variance constant) 3. Parameters are φ0, φ1, σ². This model is called linear regression. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 21
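The linear regression model above can be sketched numerically. This is a minimal illustration, not from the slides: the training pairs are synthetic, and the ML fit of φ0, φ1 under a normal likelihood reduces to least squares, with σ² estimated from the residuals.

```python
import numpy as np

# Synthetic training pairs (x_i, w_i); stand-ins for real labeled data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
w = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, size=50)

# Learning (ML): maximizing the normal likelihood reduces to least squares.
X = np.column_stack([np.ones_like(x), x])    # design matrix [1, x]
phi, *_ = np.linalg.lstsq(X, w, rcond=None)  # phi = [phi0, phi1]
sigma2 = np.mean((w - X @ phi) ** 2)         # ML estimate of the variance

# Inference: Pr(w|x*) = Norm(phi0 + phi1 * x*, sigma2) -- just evaluate it.
x_star = 4.0
mean_star = phi[0] + phi[1] * x_star
```

Inference here really is a single evaluation: the posterior over w is the normal distribution whose mean and variance we just computed.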
  • 22. Parameters are y-offset, slope and variance Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 22
  • 23. Learning algorithm: learn θ from training data x,w. E.g. MAP Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 23
  • 24. Inference algorithm: just evaluate Pr(w|x) for new data x Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 24
  • 25. Type 2: Model Pr(x,w) - Generative How to model Pr(x,w)? 1. Concatenate x and w to make z = [xᵀ wᵀ]ᵀ 2. Model the pdf of z 3. Pdf takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 25
  • 26. Type 2: Model Pr(x,w) - Generative How to model Pr(x,w)? 1. Concatenate x and w to make z = [xᵀ wᵀ]ᵀ 2. Model the pdf of z 3. Pdf takes parameters θ that define its shape 1. Concatenate x and w to make z = [xᵀ wᵀ]ᵀ 2. Model the pdf of z as a normal distribution. 3. Pdf takes parameters μ and Σ that define its shape Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 26
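A sketch of this type 2 model under assumed synthetic data: fit a joint normal over z = [x, w]ᵀ by ML (sample mean and covariance), then inference is the standard Gaussian conditioning formula, which is what Bayes' rule reduces to for a joint normal.

```python
import numpy as np

# Synthetic training pairs; in practice these come from labeled data.
rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, size=500)
w = 1.0 + 0.8 * x + rng.normal(0.0, 0.5, size=500)

# Learning: ML fit of a joint normal over z = [x, w]^T (mean + covariance).
z = np.stack([x, w])
mu = z.mean(axis=1)   # [mu_x, mu_w]
Sigma = np.cov(z)     # 2x2 covariance matrix

# Inference: conditioning a joint Gaussian gives a Gaussian posterior
# (equivalent to normalizing a column of the joint, as on the next slide).
x_star = 6.0
post_mean = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (x_star - mu[0])
post_var = Sigma[1, 1] - Sigma[1, 0] ** 2 / Sigma[0, 0]
```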
  • 27. Learning algorithm: learn parameters θ from training data x,w Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 27
  • 28. Inference algorithm: compute Pr(w|x) using Bayes rule (normalize each column in figure) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 28
  • 29. Type 3: Pr(x|w) - Generative How to model Pr(x|w)? 1. Choose an appropriate form for Pr(x) 2. Make parameters a function of w 3. Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: Define prior Pr(w) and then compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 29
  • 30. Type 3: Pr(x|w) - Generative How to model Pr(x|w)? 1. Choose an appropriate form for Pr(x) 2. Make parameters a function of w 3. Function takes parameters θ that define its shape 1. Choose normal distribution over x 2. Make mean μ a linear function of w (variance constant) 3. Parameters are φ0, φ1, σ². Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 30
  • 31. Learning algorithm: learn θ from training data x,w. E.g. MAP Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 31
  • 32. Pr(x|w) × Pr(w) = Pr(x,w) Can get back to joint probability Pr(x,w) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 32
  • 33. Inference algorithm: compute Pr(w|x) using Bayes rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 33
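The Bayes'-rule inference for this type 3 model can be sketched by evaluating the posterior on a grid of w values. The parameter values and the normal prior Pr(w) below are illustrative assumptions, not numbers from the book.

```python
import numpy as np

def normal_pdf(v, mean, var):
    return np.exp(-0.5 * (v - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Assumed (illustrative) learned parameters of Pr(x|w) = Norm(phi0 + phi1*w, sigma2)
phi0, phi1, sigma2 = 1.0, 0.8, 0.25

# User-specified prior over the world state: Pr(w) = Norm(0, 4), on a grid.
w_grid = np.linspace(-10.0, 10.0, 2001)
dw = w_grid[1] - w_grid[0]
prior = normal_pdf(w_grid, 0.0, 4.0)

# Bayes' rule: Pr(w|x) is proportional to Pr(x|w) Pr(w); normalize numerically.
x_obs = 3.0
posterior = normal_pdf(x_obs, phi0 + phi1 * w_grid, sigma2) * prior
posterior /= posterior.sum() * dw

w_map = w_grid[np.argmax(posterior)]   # MAP solution for w
```

For this linear-Gaussian case the posterior is available in closed form; the grid version just makes the Bayes'-rule mechanics explicit.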
  • 34. Structure • Computer vision models – Three types of model • Worked example 1: Regression • Worked example 2: Classification • Which type should we choose? • Applications Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 34
  • 35. Worked example 2: Classification Consider simple case where • we make a univariate continuous measurement x • use this to predict a discrete binary world w (classification as world state is discrete) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 35
  • 36. Classification Example 1: Face Detection Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 36
  • 37. Classification Example 2: Pedestrian Detection Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 37
  • 38. Classification Example 3: Face Recognition Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 38
  • 39. Classification Example 4: Semantic Segmentation Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 39
  • 40. Worked example 2: Classification Consider simple case where • we make a univariate continuous measurement x • use this to predict a discrete binary world w (classification as world state is discrete) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 40
  • 41. Type 1: Model Pr(w|x) - Discriminative How to model Pr(w|x)? – Choose an appropriate form for Pr(w) – Make parameters a function of x – Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: just evaluate Pr(w|x) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 41
  • 42. Type 1: Model Pr(w|x) - Discriminative How to model Pr(w|x)? 1. Choose an appropriate form for Pr(w) 2. Make parameters a function of x 3. Function takes parameters θ that define its shape 1. Choose Bernoulli dist. for Pr(w) 2. Make parameters a function of x 3. Function takes parameters φ0 and φ1 This model is called logistic regression. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 42
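A minimal logistic regression sketch on synthetic binary data. The gradient-ascent ML fit below is one standard choice of learning method, not necessarily the one used in the book.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Synthetic binary data: class w=1 tends to have larger measurements x.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-1.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
w = np.concatenate([np.zeros(100), np.ones(100)])

# Learning (ML): gradient ascent on the Bernoulli log-likelihood.
phi0, phi1, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = sigmoid(phi0 + phi1 * x)       # Pr(w=1|x) under current parameters
    phi0 += lr * np.mean(w - p)        # gradient wrt phi0
    phi1 += lr * np.mean((w - p) * x)  # gradient wrt phi1

# Inference: just evaluate Pr(w=1|x*) -- no Bayes' rule needed.
p_star = sigmoid(phi0 + phi1 * 2.0)
```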
  • 43. Two parameters Learning by standard methods (ML, MAP, Bayesian) Inference: Just evaluate Pr(w|x) Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 43
  • 44. Type 2: Model Pr(x,w) - Generative How to model Pr(x,w)? 1. Concatenate x and w to make z = [xᵀ wᵀ]ᵀ 2. Model the pdf of z 3. Pdf takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 44
  • 45. Type 2 Model Can’t build this model very easily: • Concatenate continuous vector x and discrete w to make z • No obvious probability distribution to model joint probability of discrete and continuous Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 45
  • 46. Type 3: Pr(x|w) - Generative How to model Pr(x|w)? 1. Choose an appropriate form for Pr(x) 2. Make parameters a function of w 3. Function takes parameters θ that define its shape Learning algorithm: learn parameters θ from training data x,w Inference algorithm: Define prior Pr(w) and then compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 46
  • 47. Type 3: Pr(x|w) - Generative How to model Pr(x|w)? 1. Choose an appropriate form for Pr(x) 2. Make parameters a function of w 3. Function takes parameters θ that define its shape 1. Choose a Gaussian distribution for Pr(x) 2. Make parameters a function of discrete binary w 3. Function takes parameters μ0, μ1, σ0², σ1² that define its shape Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 47
  • 48. Learn parameters μ0, μ1, σ0², σ1² that define its shape Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 48
  • 49. Inference algorithm: Define prior Pr(w) and then compute Pr(w|x) using Bayes’ rule Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 49
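A sketch of this type 3 classifier: fit one Gaussian per class by ML, take the prior Pr(w) from class frequencies, and apply Bayes' rule at inference time. The data and parameter values are illustrative, not from the slides.

```python
import numpy as np

def normal_pdf(v, mean, var):
    return np.exp(-0.5 * (v - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Synthetic measurements for each world state (illustrative values).
rng = np.random.default_rng(3)
x0 = rng.normal(-1.0, 1.0, 200)   # measurements observed when w = 0
x1 = rng.normal(2.0, 1.5, 200)    # measurements observed when w = 1

# Learning: fit the class-conditional densities Pr(x|w=k) separately (ML).
mu0, var0 = x0.mean(), x0.var()
mu1, var1 = x1.mean(), x1.var()
prior1 = len(x1) / (len(x0) + len(x1))   # prior Pr(w=1) from class frequencies

# Inference: Bayes' rule, Pr(w=1|x) = Pr(x|w=1)Pr(w=1) / sum_k Pr(x|w=k)Pr(w=k)
def posterior_w1(x):
    num = normal_pdf(x, mu1, var1) * prior1
    den = num + normal_pdf(x, mu0, var0) * (1.0 - prior1)
    return num / den
```

Note the contrast with logistic regression: here we model the data for each class and only obtain the posterior via Bayes' rule.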
  • 50. Structure • Computer vision models – Three types of model • Worked example 1: Regression • Worked example 2: Classification • Which type should we choose? • Applications Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 50
  • 51. Which type of model to use? 1. Generative methods model data – costly and many aspects of data may have no influence on world state Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 51
  • 52. Which type of model to use? 2. Inference simple in discriminative models 3. Data really is generated from world – generative matches this 4. If missing data, then generative preferred 5. Generative allows imposition of prior knowledge specified by user Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 52
  • 53. Structure • Computer vision models – Three types of model • Worked example 1: Regression • Worked example 2: Classification • Which type should we choose? • Applications Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 53
  • 54. Application: Skin Detection Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 54
  • 55. Application: Background subtraction Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 55
  • 56. Application: Background subtraction But consider this scene in which the foliage is blowing in the wind. A normal distribution is not good enough! Need a way to make more complex distributions Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 56
  • 57. Future Plan • Seen three types of model – Probability density function – Linear regression – Logistic regression • Next three chapters concern these models Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 57
  • 58. Conclusion • To do computer vision we build a model relating the image data x to the world state w that we wish to estimate • Three types of model • Model Pr(w|x) -- discriminative • Model Pr(x,w) -- generative • Model Pr(x|w) -- generative Computer vision: models, learning and inference. ©2011 Simon J.D. Prince 58