A Machine Learning (Theory)
Perspective on Computer Vision

            Peter Auer
      Montanuniversität Leoben
Outline

 What I am doing and how computer
 vision approached me (in 2002).
 Some modern machine learning
 algorithms used in computer vision,
 and their development:
   Boosting
   Support Vector Machines
 Concluding remarks
My background
 COLT 1993
   Conference on Learning Theory
   „On-Line Learning of Rectangles in Noisy
   Environments“

 FOCS 1995
   Symp. Foundations of Computer Science
„Gambling in a Rigged Casino: The Adversarial
Multi-Armed Bandit Problem“
   with N. Cesa-Bianchi, Y. Freund, R. Schapire

 ICML, NIPS, STOC, …
A computer vision project

 EU-Project LAVA, 2002
   “Learning for adaptable visual
   assistants”
   XRCE: Ch. Dance, R. Mohr
   INRIA Grenoble: C. Schmid, B. Triggs
   RHUL: J. Shawe-Taylor
   IDIAP: S. Bengio
LAVA Proposal
 Vision (goals)
   Recognition of generic objects and events
   Attention Mechanisms
   Baseline and high-level descriptors
 Learning (means)
   Statistical Analysis
   Kernels and models and features
   Online Learning
Online learning
 Online Information Setting
   An input is received, a prediction is made, and
   then feedback is acquired.
   Goal: To make good predictions with respect
   to a (large) set of fixed predictors.
 Online Computation Setting
   The amount of computation per new example –
   to update the learned information – is constant
   (or small).
   Goal: To be fast computationally.
 (Near) real-time learning?
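
The online information setting is just a loop; a minimal sketch in Python (the learner interface and the example stream are hypothetical placeholders, not from the talk):

    def run_online(learner, stream):
        """One pass of the online protocol: receive input, predict, get feedback."""
        mistakes = 0
        for x, y in stream:              # an input is received ...
            y_hat = learner.predict(x)   # ... a prediction is made ...
            if y_hat != y:               # ... then feedback is acquired
                mistakes += 1
            learner.update(x, y)         # constant (or small) work per example
        return mistakes                  # judged against the best fixed predictor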
Learning for vision around 2002
 Viola, Jones, CVPR 2001:
   Rapid object detection using a boosted cascade
   of simple features. (Boosting)
 Agarwal, Roth, ECCV 2002:
   Learning a Sparse Representation for Object
   Detection. (Winnow)
 Fergus, Perona, Zisserman, CVPR 2003:
   Object class recognition by unsupervised
   scale-invariant learning. (EM-type algorithm)
 Wallraven, Caputo, Graf, ICCV 2003:
   Recognition with local features: the kernel
   recipe. (SVM)
Our contribution in LAVA

 Opelt, Fussenegger, Pinz, Auer,
 ECCV 2004:
   Weak hypotheses and boosting for
   generic object detection and
   recognition.
Image classification as a learning problem

       Images are represented as vectors x = (x1, . . . , xn) ∈ X ⊂ R^n.

       Given
            training images x(1), . . . , x(m) ∈ X
            with their classifications y(1), . . . , y(m) ∈ Y = {−1, +1},
       a classifier H : X → Y is learned.

       We consider linear classifiers Hw, w ∈ R^n:

                        Hw(x) = +1 if w · x ≥ 0
                                −1 if w · x < 0

       (w · x = Σ_{i=1}^{n} wi xi).



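A minimal sketch of such a linear classifier in Python (NumPy is an illustrative choice):

    import numpy as np

    def H(w, x):
        """Linear classifier: +1 if w · x >= 0, else -1."""
        return 1 if np.dot(w, x) >= 0 else -1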
The Perceptron algorithm (Rosenblatt, 1958)
   The Perceptron algorithm maintains a weight vector w (t) as its
   current classifier.
       Initialization: w(1) = 0.
       Predict ŷ(t) = +1 if w(t) · x(t) ≥ 0, −1 if w(t) · x(t) < 0.
       If ŷ(t) = y(t) then w(t+1) = w(t),
       else w(t+1) = w(t) + η y(t) x(t).
       (η is the learning rate.)
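
As a sketch, the whole algorithm fits in a few lines (the stream of (x, y) examples is an assumed interface):

    import numpy as np

    def perceptron(examples, n, eta=1.0):
        """Rosenblatt's Perceptron: additive, mistake-driven updates."""
        w = np.zeros(n)                              # w(1) = 0
        for x, y in examples:                        # y in {-1, +1}
            y_hat = 1 if np.dot(w, x) >= 0 else -1
            if y_hat != y:                           # update only on a mistake
                w = w + eta * y * x
        return w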


       The Perceptron was abandoned in 1969, when Minsky and
       Papert showed that Perceptrons cannot learn some
       simple functions.
       It was revived only in the 1980s, when neural networks
       became popular.

Perceptron cannot learn XOR




 No single line can separate the green
 from the red boxes.
Non-linear classifiers



       Extending the feature space (or using kernels) avoids the
       problem:
       Since XOR is a quadratic function, use (1, x1, x2, x1², x2², x1x2)
       instead of (x1, x2).
       For x1, x2 ∈ {+1, −1},

                              x1 XOR x2 = x1 x2.




Winnow (Littlestone 1987)


      Works like the Perceptron algorithm except for the update of
      the weights:

                     wi(t+1) = wi(t) · exp(η y(t) xi(t))

      for some η > 0. (w(1) = 1.)

      Observe the multiplicative update of the weights and

            log wi(t+1) = log wi(t) + η y(t) xi(t).
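
Since Winnow differs from the Perceptron only in its update, the earlier sketch changes in a single line (the prediction rule is not specified on the slide, so the Perceptron's is assumed):

    import numpy as np

    def winnow(examples, n, eta=0.5):
        """Winnow: multiplicative, mistake-driven weight updates."""
        w = np.ones(n)                               # w(1) = 1
        for x, y in examples:
            y_hat = 1 if np.dot(w, x) >= 0 else -1
            if y_hat != y:
                w = w * np.exp(eta * y * x)          # multiplicative, not additive
        return w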


      Closely related work:
      The Weighted Majority Algorithm (Littlestone, Warmuth)


Comparison of the Perceptron algorithm and Winnow


      Perceptron and Winnow scale differently with respect to
      relevant, used, and irrelevant attributes:

                         all attributes        n
                         relevant attributes   k
                         used attributes       d

                                     # training ex.
                      Perceptron        √(dk)
                      Winnow            k log n
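
To make the difference concrete, plug in illustrative numbers (not from the talk), say n = 10^6 attributes of which k = 10 are relevant and d = 10^5 are used:

    import math

    n, k, d = 10**6, 10, 10**5
    print(math.sqrt(d * k))    # Perceptron: ~1000 training examples
    print(k * math.log(n))     # Winnow:     ~ 138 training examples

Winnow's logarithmic dependence on n pays off when most attributes are irrelevant.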




AdaBoost (Freund, Schapire, 1995)


      AdaBoost maintains weights vt(s) on the training examples
      (x(s), y(s)) over time t:

      Initialize weights v0(s) = 1.
      For t = 1, 2, . . .
           Select the coordinate it with maximal correlation with the
           labels, Σs vt(s) y(s) xi(s), as weak hypothesis.
           Choose αt which minimizes Σs vt(s) exp(−αt y(s) xit(s)).
           Update vt+1(s) = vt(s) exp(−αt y(s) xit(s)).
      For x = (x1, . . . , xn) predict sign(Σt αt xit).
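
A sketch of this coordinate-wise AdaBoost in Python; the closed-form αt below assumes ±1-valued features, and the array interfaces are illustrative choices:

    import numpy as np

    def adaboost(X, y, T):
        """X: (m, n) array with entries in {-1, +1}; y: (m,) labels in {-1, +1}."""
        m, n = X.shape
        v = np.ones(m)                              # v0(s) = 1
        coords, alphas = [], []
        for t in range(T):
            corr = (v * y) @ X                      # correlation of each coordinate with the labels
            i_t = int(np.argmax(corr))              # weak hypothesis: best single coordinate
            # alpha_t minimizing sum_s v(s) exp(-alpha y(s) x_it(s)) for +-1 features
            eps = v[y * X[:, i_t] < 0].sum() / v.sum()
            alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
            v = v * np.exp(-alpha * y * X[:, i_t])  # reweight the training examples
            coords.append(i_t)
            alphas.append(alpha)
        return lambda x: np.sign(sum(a * x[i] for a, i in zip(alphas, coords)))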



History of Boosting (1)
 Rob Schapire:
 The strength of weak learnability, 1990.
   Showed that classifiers which are only 51%
   correct can be combined into a 99% correct
   classifier.
   Rather a theoretical result, since the algorithm
   was complicated and not practical.
   I know people who thought that this was not
   an interesting result.
History of Boosting (2)

 Yoav Freund:
 Boosting a weak learning algorithm
 by majority, 1995.
   Improved boosting algorithm, but still
   complicated and theoretical.
   Only logarithmically many examples
   are forwarded to the weak learner!
History of Boosting (3)
 Y. Freund and R. Schapire:
 A decision-theoretic generalization of on-line
 learning and an application to boosting, 1995.
   Very simple boosting algorithm, easy to implement.
   Theoretically less interesting.
   Performs very well in practice.

 Won the Gödel Prize in 2003 and the Kanellakis
 Prize in 2004. (Both are prestigious prizes in
 Theoretical Computer Science.)

 Since then many variants of Boosting (mainly to
 improve error robustness):
   BrownBoost, Soft margin boosting, LPBoost.
Support Vector Machines (SVMs)
 In its vanilla version the SVM also learns a linear classifier.

 It maximizes the distance between the decision
 boundary and the nearest training points.
    Formulates learning as a well-behaved optimization
    problem.

 Invented by Vladimir Vapnik
 (1979, Russian paper).
    Translated into English in 1982.
    No practical applications,
    since it required linear separability.
Practical SVMs
 Vapnik:
    The Nature of Statistical Learning Theory, 1995.
    Statistical Learning Theory, 1998.

 Shawe-Taylor, Cristianini:
 Support Vector Machines, 2000.

 Soft margin SVMs:
    Tolerate incorrectly labeled training examples (by
    using slack variables).

 Non-linear classification using the “kernel trick”.
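
As a usage sketch, a soft-margin SVM with a quadratic kernel solves the XOR example from earlier (scikit-learn and the toy data are illustrative choices, not from the talk):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]])
    y = np.array([+1, -1, -1, +1])          # XOR labels under the slide's encoding, y = x1*x2

    clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0)   # C penalizes slack
    clf.fit(X, y)
    print(clf.predict(X))                   # [ 1 -1 -1  1]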
Support Vector Machines (SVMs)



       [Figure: two classes of training points (+ and −) separated by a
       maximum-margin hyperplane; the nearest points on each side are the
       support vectors.]
The kernel trick (1)

       Recall the perceptron update,

            w(t+1) = w(t) + η y(t) x(t) = η Σ_{τ=1}^{t} y(τ) x(τ),

       and classification,

            ŷ = sign(w(t+1) · x) = sign(Σ_{τ=1}^{t} y(τ) (x(τ) · x)).

       A kernel function generalizes the inner product,

            ŷ = sign(Σ_{τ=1}^{t} y(τ) K(x(τ), x)).
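
In code, kernelizing the Perceptron's prediction only requires storing the seen examples (a sketch; the interfaces are assumptions):

    import numpy as np

    def kernel_predict(K, stored, x):
        """sign( sum_tau y(tau) K(x(tau), x) ) over stored (x(tau), y(tau)) pairs."""
        s = sum(y_t * K(x_t, x) for x_t, y_t in stored)
        return 1 if s >= 0 else -1

    quad_kernel = lambda u, v: (1.0 + np.dot(u, v)) ** 2    # one possible kernel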



The kernel trick (2)


       The inner product x(τ) · x is a measure of similarity:
       for normalized vectors, x(τ) · x is maximal if x(τ) = x.

       The kernel function is a similarity measure in feature space,

            K(x(τ), x) = Φ(x(τ)) · Φ(x).


       Kernel functions can be designed to capture the relevant
       similarities of the domain.
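
A quick, illustrative check that such a kernel really is an inner product in a feature space, K(u, v) = Φ(u) · Φ(v), for the quadratic kernel (1 + u · v)²:

    import numpy as np

    def phi(x):
        x1, x2 = x
        return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                         x1**2, x2**2, np.sqrt(2) * x1 * x2])

    u, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
    assert np.isclose((1.0 + u @ v) ** 2, phi(u) @ phi(v))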


       Aizerman, Braverman, Rozonoer:
       Theoretical foundations of the potential function method in
       pattern recognition learning, 1964.


Where are we going?

 New learning algorithms?
 Better image descriptors!
 Probably they need to be learned.
 Probably they need to be
 hierarchical.
 We need (to use) more data.
Final remark on algorithm evaluation
and benchmarks

 Computer vision is in the state machine
 learning was in 10 years ago (at least
 for object classification).

 Benchmark datasets are starting to become
 available, e.g. PASCAL VOC.
