Brunelli 2008: template matching techniques in computer vision


  1. Template Matching Techniques in Computer Vision. Roberto Brunelli, FBK - Fondazione Bruno Kessler, 1 September 2008.
  2. Table of contents: 1. Overview; 2. Detection as hypothesis testing; 3. Training and testing; 4. Bibliography.
  3. Template matching. template/pattern: 1. anything fashioned, shaped, or designed to serve as a model from which something is to be made; a model, design, plan, outline; 2. something formed after a model or prototype, a copy; a likeness, a similitude; 3. an example, an instance; esp. a typical model or a representative instance. matching: to compare in respect of similarity; to examine the likeness or difference of.
  4. ... template variability ... (figure slide illustrating template variability).
  5. ... and Computer Vision. Many important computer vision tasks can be solved with template matching techniques: object detection/recognition, object comparison, depth computation. Template matching depends on physics (imaging), probability and statistics, and signal processing.
  6. Imaging (figure: perspective camera vs. telecentric camera).
  7. Imaging: photon noise (Poisson). The quantum nature of light results in appreciable photon noise. With $r$ photons per unit time and gathering time $\Delta t$,
      $$p(n) = e^{-r\Delta t}\,\frac{(r\Delta t)^n}{n!}, \qquad \mathrm{SNR} \le \frac{I}{\sigma_I} = \frac{n}{\sqrt{n}} = \sqrt{n}$$
  8. Finding them ... A sliding window approach:
      $$d(\mathbf{x}, \mathbf{y}) = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2, \qquad s(\mathbf{x}, \mathbf{y}) = \frac{1}{1 + d(\mathbf{x}, \mathbf{y})}$$
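      A minimal NumPy sketch of the sliding-window search above (the function name and the brute-force double loop are illustrative; practical implementations accelerate this with FFT-based correlation):

         import numpy as np

         def match_template_ssd(image, template):
             """Slide `template` over `image`, returning at every offset the
             mean squared difference d and the similarity s = 1 / (1 + d)."""
             H, W = image.shape
             h, w = template.shape
             N = h * w
             d = np.empty((H - h + 1, W - w + 1))
             for r in range(H - h + 1):
                 for c in range(W - w + 1):
                     diff = image[r:r + h, c:c + w] - template
                     d[r, c] = np.sum(diff ** 2) / N
             return d, 1.0 / (1.0 + d)

      The best match is the offset maximizing s, e.g. np.unravel_index(np.argmax(s), s.shape).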
  9. ... robustly. Specularities and noise can result in outliers: abnormally large differences that may adversely affect the comparison.
  10. ... robustly. We downweight outliers by changing the metric
      $$\sum_{i=1}^{N} z_i^2 \;\to\; \sum_{i=1}^{N} \rho(z_i), \qquad z_i = x_i - y_i$$
      to one with a more favourable influence function $\psi(z) = d\rho(z)/dz$ (constant factors dropped):
      $$\rho(z) = z^2 \Rightarrow \psi(z) = z; \qquad \rho(z) = |z| \Rightarrow \psi(z) = \operatorname{sign} z; \qquad \rho(z) = \log\!\left(1 + \frac{z^2}{a^2}\right) \Rightarrow \psi(z) = \frac{2z}{a^2 + z^2}$$
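      A small sketch contrasting the quadratic metric with the last (Lorentzian) rho of the table; the scale parameter a is free and the data are purely illustrative:

         import numpy as np

         def d_quadratic(x, y):
             z = x - y
             return np.mean(z ** 2)            # rho(z) = z^2

         def d_lorentzian(x, y, a=1.0):
             z = x - y
             # rho(z) = log(1 + z^2 / a^2) grows only logarithmically,
             # so a few outliers cannot dominate the comparison
             return np.mean(np.log1p((z / a) ** 2))

         x = np.zeros(100)
         y = x.copy()
         y[0] = 50.0                           # one specular outlier
         # d_quadratic(x, y) = 25.0, d_lorentzian(x, y) ~ 0.078:
         # the robust metric barely reacts to the single outlier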
  11. Illumination effects 1/3. Illumination variations affect images in a complex way, reducing the effectiveness of template matching techniques (figure: additional template variability).
  12. Contrast and edge maps 2/3. Image transforms such as local contrast can reduce the effect of illumination (figure: local contrast and edge maps). With $K_\sigma$ a smoothing kernel and $*$ the convolution $(f * g)(x) = \int f(y)\, g(x - y)\, dy$, the local contrast $N = I / (I * K_\sigma)$ is remapped as
      $$N' = \begin{cases} N & \text{if } N \le 1 \\ 2 - 1/N & \text{if } N > 1 \end{cases}$$
  13. Ordinal transforms 3/3 (Census transform invariance). Let us consider a pixel $I(\mathbf{x})$ and its neighborhood $W(\mathbf{x}, l)$ of size $l$. Denoting with $\bigotimes$ the operation of concatenation and with $\theta$ the unit step function, the Census transform is defined as
      $$C(\mathbf{x}) = \bigotimes_{\mathbf{x}' \in W(\mathbf{x}, l)} \theta\big(I(\mathbf{x}) - I(\mathbf{x}')\big)$$
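      A sketch of the transform for a (2l+1)x(2l+1) neighborhood (the bit ordering is an arbitrary choice; the codes fit in 64 bits only for l <= 3):

         import numpy as np

         def census_transform(img, l=1):
             """Map each pixel to the concatenation of the bits
             theta(I(x) - I(x')) over its neighborhood of size l."""
             H, W = img.shape
             out = np.zeros((H - 2 * l, W - 2 * l), dtype=np.uint64)
             center = img[l:H - l, l:W - l]
             for dy in range(-l, l + 1):
                 for dx in range(-l, l + 1):
                     if dy == 0 and dx == 0:
                         continue                      # skip the center pixel
                     neigh = img[l + dy:H - l + dy, l + dx:W - l + dx]
                     bit = (center >= neigh).astype(np.uint64)
                     out = (out << np.uint64(1)) | bit
             return out

      Being based only on the ordering of intensities, the codes are invariant to any monotonic transformation of the gray levels.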
  14. Matching variable patterns 1/2. Patterns of a single class may span a complex manifold of a high dimensional space: we may try to find a compact space enclosing it, possibly attempting multiple local linear descriptions (figure: different criteria, different bases; a step edge parameterized by orientation $\theta$ and axial distance $\rho$).
  15. Subspace approaches 2/2 (PCA, ICA (I and II), LDA). PCA: the eigenvectors of the covariance matrix; ICA: the directions onto which the data project with maximal non-Gaussianity; LDA: the directions maximizing between-class scatter over within-class scatter.
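      A minimal PCA sketch following the first bullet (one pattern per row of X; k is the subspace dimension; names are illustrative):

         import numpy as np

         def pca_basis(X, k):
             """Top-k eigenvectors of the sample covariance matrix."""
             C = np.cov(X, rowvar=False)
             w, V = np.linalg.eigh(C)          # eigenvalues in ascending order
             return V[:, np.argsort(w)[::-1][:k]]

         # a pattern x is described compactly by its projection:
         #   B = pca_basis(X, k); c = B.T @ (x - X.mean(axis=0))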
  16. Deformable templates 1/2 (eye potentials). The circle representing the iris is characterized by its radius $r$ and its center $\mathbf{x}_c$. The interior of the circle is attracted to low intensity values while its boundary is attracted to edges in image intensity:
      $$k_v = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$
  17. Deformable templates 2/2 (brain warping). Diffeomorphic matching (a bijective map $\mathbf{u}(\mathbf{x})$ such that both it and its inverse $\mathbf{u}^{-1}$ are differentiable):
      $$(A \circ \mathbf{u})(\mathbf{x}) = A(\mathbf{u}(\mathbf{x})) \approx B(\mathbf{x})$$
      $$\hat{\mathbf{u}} = \operatorname*{argmin}_{\mathbf{u}} \int_\Omega \Delta(A \circ \mathbf{u}, B; \mathbf{x})\, d\mathbf{x} + \Delta(\mathbf{u}), \qquad \Delta(A, B) = \int_\Omega (A(\mathbf{x}) - B(\mathbf{x}))^2\, d\mathbf{x}$$
      with $\Delta(\mathbf{u}) = \|\mathbf{u} - I\|^2_{H^1}$ an $H^1$ regularizer penalizing the deviation of $\mathbf{u}$ from the identity,
      $$\|\mathbf{a}\|^2_{H^1} = \int_{\mathbf{x} \in \Omega} \left(\|\mathbf{a}(\mathbf{x})\|^2 + \|\partial\mathbf{a}/\partial\mathbf{x}\|_F^2\right) d\mathbf{x}$$
  18. Linear structures: Radon/Hough transforms.
      $$R_{s(\mathbf{q})}(I; \mathbf{q}) = \int_{\mathbb{R}^d} \delta(K(\mathbf{x}; \mathbf{q}))\, I(\mathbf{x})\, d\mathbf{x}$$
      In the Radon approach (left), the supporting evidence for a shape with parameter $\mathbf{q}$ is collected by integrating over its support $s(\mathbf{q})$. In the Hough approach (right), each potentially supporting pixel (e.g. edge pixels A, B, C) votes for all shapes to which it can potentially belong (all circles whose centers lie respectively on circles a, b, c); a voting sketch follows below.
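      A voting sketch for the Hough case with circles of known radius (the discretization of the parameter space into an accumulator the size of the image is an illustrative choice):

         import numpy as np

         def hough_circles(edge_mask, radius):
             """Each edge pixel votes for all centers lying on a circle
             of the given radius around it; peaks mark likely centers."""
             H, W = edge_mask.shape
             acc = np.zeros((H, W))
             thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
             for y, x in zip(*np.nonzero(edge_mask)):
                 cy = np.round(y + radius * np.sin(thetas)).astype(int)
                 cx = np.round(x + radius * np.cos(thetas)).astype(int)
                 ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
                 np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
             return acc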
  19. Detection as learning. Given a set $\{(\mathbf{x}_i, y_i)\}_i$, we search for a function $\hat{f}$ minimizing the empirical (approximation) squared error
      $$E^{\mathrm{MSE}}_{\mathrm{emp}} = \frac{1}{N}\sum_i (y_i - f(\mathbf{x}_i))^2, \qquad \hat{f} = \operatorname*{argmin}_{f} E^{\mathrm{MSE}}_{\mathrm{emp}}(f; \{(\mathbf{x}_i, y_i)\}_i)$$
      This ill-posed problem can be regularized, turning the optimization problem above into
      $$\hat{f}(\lambda) = \operatorname*{argmin}_{f \in H} \frac{1}{N}\sum_i (y_i - f(\mathbf{x}_i))^2 + \lambda \|f\|_H$$
      where $\|f\|_H$ is the norm of $f$ in the (function) space $H$ to which we restrict our quest for a solution.
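      For linear functions $f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x}$, taking the penalty as the squared norm of $\mathbf{w}$, the regularized problem has a closed-form solution (ridge regression); a sketch under that assumption (the slide itself does not fix a hypothesis space):

         import numpy as np

         def ridge_fit(X, y, lam):
             """argmin_w (1/N) ||y - X w||^2 + lam ||w||^2 (closed form)."""
             N, d = X.shape
             return np.linalg.solve(X.T @ X / N + lam * np.eye(d),
                                    X.T @ y / N)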
  20. Detection as testing. The problem of template detection fits within game theory ("gaming with nature"). The game proceeds along the following steps: 1. nature chooses a state $\theta \in \Theta$; 2. a hint $x$ is generated according to the conditional distribution $P_X(x|\theta)$; 3. the computational agent makes its guess $\phi(x) = \delta$; 4. the agent experiences a loss $C(\theta, \delta)$.
  21. Hypothesis testing and templates. Two cases are relevant to the problem of template matching: 1. $\Delta = \{\delta_0, \delta_1, \ldots, \delta_{K-1}\}$, which corresponds to hypothesis testing, and in particular the case $K = 2$, corresponding to binary hypothesis testing; many problems of pattern recognition fall within this category. 2. $\Delta = \mathbb{R}^n$, corresponding to the problem of point estimation of a real parameter vector, a typical problem being that of model parameter estimation. Template detection can be formalized as a binary hypothesis test:
      $$H_0: \mathbf{x} \sim p_\theta(\mathbf{x}),\ \theta \in \Theta_0 \qquad H_1: \mathbf{x} \sim p_\theta(\mathbf{x}),\ \theta \in \Theta_1$$
  22. Signal vs. noise. Template detection in the presence of additive white Gaussian noise $\boldsymbol{\eta} \sim N(\mathbf{0}, \sigma^2 I)$:
      $$H_0: \mathbf{x} = \boldsymbol{\eta} \quad \text{(simple)} \qquad H_1: \mathbf{x} = \alpha\mathbf{f} + \mathbf{o} + \boldsymbol{\eta} \quad \text{(composite)}$$
      A hypothesis test (or classifier) is a mapping $\phi: (\mathbb{R}^{n_d})^N \to \{0, \ldots, M-1\}$. The test $\phi$ returns a hypothesis for every possible input, partitioning the input space into a disjoint collection $R_0, \ldots, R_{M-1}$ of decision regions: $R_k = \{(\mathbf{x}_1, \ldots, \mathbf{x}_N)\,|\,\phi(\mathbf{x}_1, \ldots, \mathbf{x}_N) = k\}$.
  23. Error types (false alarms and detection). The probability of a type I error (false alarm) is $P_F$ (size, $\alpha$): $\alpha = P_F = P(\phi = 1|H_0)$. The detection probability is $P_D$ (power, $\beta$): $\beta(\theta) = P_D = P(\phi = 1|\theta \in \Theta_1)$. The probability of a type II error, or miss probability, is $P_M = 1 - P_D$.
  24. The Bayes risk. The Bayes approach is characterized by the assumption that the occurrence probability $\pi_i$ of each hypothesis is known a priori. The optimal test is the one that minimizes the Bayes risk $C_B$:
      $$C_B = \sum_{i,j} C_{ij}\, P(\phi(X) = i|H_j)\,\pi_j = \sum_{i,j} C_{ij}\,\pi_j \int_{R_i} p_j(\mathbf{x})\,d\mathbf{x}$$
      $$= \int_{R_0} (C_{00}\pi_0 p_0(\mathbf{x}) + C_{01}\pi_1 p_1(\mathbf{x}))\,d\mathbf{x} + \int_{R_1} (C_{10}\pi_0 p_0(\mathbf{x}) + C_{11}\pi_1 p_1(\mathbf{x}))\,d\mathbf{x}$$
  25. The likelihood ratio. We may minimize the Bayes risk by assigning each possible $\mathbf{x}$ to the region whose integrand at $\mathbf{x}$ is smaller:
      $$L(\mathbf{x}) \equiv \frac{p_1(\mathbf{x})}{p_0(\mathbf{x})} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})} \equiv \nu$$
      where $L(\mathbf{x})$ is called the likelihood ratio. When $C_{00} = C_{11} = 0$ and $C_{10} = C_{01} = 1$,
      $$L(\mathbf{x}) \equiv \frac{p_1(\mathbf{x})}{p_0(\mathbf{x})} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\pi_0}{\pi_1} \equiv \nu$$
      equivalent to the maximum a posteriori (MAP) rule $\phi(\mathbf{x}) = \operatorname*{argmax}_{i \in \{0,1\}} \pi_i\, p_i(\mathbf{x})$.
  26. Frequentist testing. The alternative to Bayesian hypothesis testing is based on the Neyman-Pearson criterion and follows a classic, frequentist approach based on
      $$P_F = \int_{R_1} p_0(\mathbf{x})\,d\mathbf{x}, \qquad P_D = \int_{R_1} p_1(\mathbf{x})\,d\mathbf{x}.$$
      We should design the decision rule in order to maximize $P_D$ without exceeding a predefined bound on $P_F$:
      $$\hat{R}_1 = \operatorname*{argmax}_{R_1 : P_F \le \alpha} P_D.$$
  27. ... likelihood ratio again. The problem can be solved with the method of Lagrange multipliers:
      $$E = P_D + \lambda(P_F - \alpha') = \int_{R_1} p_1(\mathbf{x})\,d\mathbf{x} + \lambda\left(\int_{R_1} p_0(\mathbf{x})\,d\mathbf{x} - \alpha'\right) = -\lambda\alpha' + \int_{R_1} (p_1(\mathbf{x}) + \lambda p_0(\mathbf{x}))\,d\mathbf{x}$$
      where $\alpha' \le \alpha$. In order to maximize $E$, the integrand should be positive, leading to the following condition (as we are considering region $R_1$):
      $$\frac{p_1(\mathbf{x})}{p_0(\mathbf{x})} \overset{H_1}{>} -\lambda$$
  28. The Neyman-Pearson lemma. In the binary hypothesis testing problem, if $\alpha_0 \in [0, 1)$ is the size constraint, the most powerful test of size $\alpha \le \alpha_0$ is given by the decision rule
      $$\phi(\mathbf{x}) = \begin{cases} 1 & \text{if } L(\mathbf{x}) > \nu \\ \gamma & \text{if } L(\mathbf{x}) = \nu \\ 0 & \text{if } L(\mathbf{x}) < \nu \end{cases}$$
      where $\nu$ is the largest constant for which $P_0(L(\mathbf{x}) \ge \nu) \ge \alpha_0$ and $P_0(L(\mathbf{x}) \le \nu) \ge 1 - \alpha_0$. The test is unique up to sets of probability zero under $H_0$ and $H_1$.
  29. An important example. Discriminate two deterministic multidimensional signals corrupted by zero-average Gaussian noise:
      $$H_0: \mathbf{x} \sim N(\boldsymbol{\mu}_0, \Sigma) \qquad H_1: \mathbf{x} \sim N(\boldsymbol{\mu}_1, \Sigma)$$
      Using the Mahalanobis distance $d^2_\Sigma(\mathbf{x}, \mathbf{y}) = (\mathbf{x} - \mathbf{y})^T \Sigma^{-1} (\mathbf{x} - \mathbf{y})$ we get
      $$p_i(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}\, d^2_\Sigma(\mathbf{x}, \boldsymbol{\mu}_i)\right), \qquad i = 0, 1$$
  30. ... with an explicit solution. The decision based on the log-likelihood ratio is
      $$\phi(\mathbf{x}) = \begin{cases} 1 & \mathbf{w}^T(\mathbf{x} - \mathbf{x}_0) \ge \nu_\Lambda \\ 0 & \mathbf{w}^T(\mathbf{x} - \mathbf{x}_0) < \nu_\Lambda \end{cases}$$
      with
      $$\mathbf{w} = \Sigma^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_0), \qquad \mathbf{x}_0 = \frac{1}{2}(\boldsymbol{\mu}_1 + \boldsymbol{\mu}_0)$$
      and $P_F$, $P_D$ depend only on the distance of the means of the two classes normalized by the amount of noise, which is a measure of the SNR of the classification problem. When $\Sigma = \sigma^2 I$ and $\boldsymbol{\mu}_0 = \mathbf{0}$ we have matching by projection:
      $$r_u = \boldsymbol{\mu}_1^T \mathbf{x} \underset{H_0}{\overset{H_1}{\gtrless}} \nu_\Lambda$$
  31. ... more details.
      $$P_F = P_0(\Lambda(\mathbf{x}) \ge \nu) = Q\left(\frac{\nu + \sigma_0^2/2}{\sigma_0}\right) = Q(z), \qquad P_D = P_1(\Lambda(\mathbf{x}) \ge \nu) = Q\left(\frac{\nu - \sigma_0^2/2}{\sigma_0}\right) = Q(z - \sigma_0)$$
      $$\sigma_0^2(\Lambda(\mathbf{x})) = \sigma_1^2(\Lambda(\mathbf{x})) = \mathbf{w}^T \Sigma\, \mathbf{w}, \qquad z = \nu/\sigma_0 + \sigma_0/2$$
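      A sketch of slides 29-31 ($Q$ is the Gaussian tail probability, here scipy's norm.sf; the function name is illustrative and all arguments are NumPy vectors/matrices):

         import numpy as np
         from scipy.stats import norm

         def gaussian_lrt(mu0, mu1, Sigma, nu):
             """Log-likelihood-ratio test for two Gaussians with a common
             covariance: decide H1 when w^T (x - x0) >= nu_Lambda."""
             w = np.linalg.solve(Sigma, mu1 - mu0)
             x0 = 0.5 * (mu0 + mu1)
             sigma0 = np.sqrt(w @ Sigma @ w)
             PF = norm.sf((nu + sigma0 ** 2 / 2) / sigma0)   # false alarms
             PD = norm.sf((nu - sigma0 ** 2 / 2) / sigma0)   # detections
             phi = lambda x: int(w @ (x - x0) >= nu)
             return phi, PF, PD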
  32. Variable patterns ... A common source of signal variability is its scaling by an unknown gain factor $\alpha$, possibly coupled to a signal offset $\beta$:
      $$\mathbf{x}' = \alpha\mathbf{x} + \beta\mathbf{1}$$
      A practical strategy is to normalize both the reference signal and the pattern to be classified to zero average and unit variance:
      $$\mathbf{x}' = \frac{\mathbf{x} - \bar{x}}{\sigma_x}, \qquad \bar{x} = \frac{1}{n_d}\sum_{i=1}^{n_d} x_i, \qquad \sigma_x^2 = \frac{1}{n_d}\sum_{i=1}^{n_d} (x_i - \bar{x})^2 = \frac{1}{n_d}\sum_{i=1}^{n_d} x_i^2 - \bar{x}^2$$
  33. Correlation. Or, equivalently, replacing matching by projection with
      $$r_P(\mathbf{x}, \mathbf{y}) = \frac{\sum_i (x_i - \mu_x)(y_i - \mu_y)}{\sqrt{\sum_i (x_i - \mu_x)^2 \sum_i (y_i - \mu_y)^2}}$$
      which is related to the fraction of the variance in $\mathbf{y}$ accounted for by a linear fit $\hat{y} = \hat{a}x + \hat{b}$ of $\mathbf{x}$ to $\mathbf{y}$:
      $$r_P^2 = 1 - \frac{s^2_{y|x}}{s^2_y}, \qquad s^2_{y|x} = \sum_{i=1}^{n_d} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n_d} (y_i - \hat{a}x_i - \hat{b})^2, \qquad s^2_y = \sum_{i=1}^{n_d} (y_i - \bar{y})^2$$
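      A direct transcription of $r_P$, invariant by construction to gain and offset changes of either argument:

         import numpy as np

         def pearson(x, y):
             xc, yc = x - x.mean(), y - y.mean()
             return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

         # invariance check: pearson(x, 3.0 * y + 7.0) == pearson(x, y)
         # up to floating point rounding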
  34. (Maximum likelihood) estimation. The likelihood function is defined as
      $$l(\boldsymbol{\theta}\,|\,\{\mathbf{x}_i\}_{i=1}^N) = \prod_{i=1}^{N} p(\mathbf{x}_i\,|\,\boldsymbol{\theta})$$
      where $\mathbf{x}^N = \{\mathbf{x}_i\}_{i=1}^N$ is our (fixed) dataset, and it is considered to be a function of $\boldsymbol{\theta}$. The maximum likelihood estimator (MLE) $\hat{\boldsymbol{\theta}}$ is defined as
      $$\hat{\boldsymbol{\theta}} = \operatorname*{argmax}_{\boldsymbol{\theta}} l(\boldsymbol{\theta}\,|\,\mathbf{x}^N)$$
      resulting in the parameter that maximizes the likelihood of our observations.
  35. Bias and variance. Definition: the bias of an estimator $\hat{\theta}$ is $\mathrm{bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$, where $\theta$ represents the true value; if $\mathrm{bias}(\hat{\theta}) = 0$ the estimator is said to be unbiased. Definition: the mean squared error (MSE) of an estimator is $\mathrm{MSE}(\hat{\theta}) = E((\hat{\theta} - \theta)^2)$.
  36. MLE properties. 1. The MLE is asymptotically unbiased, i.e., its bias tends to zero as the number of samples increases to infinity. 2. The MLE is asymptotically efficient: asymptotically, no unbiased estimator has lower mean squared error than the MLE. 3. The MLE is asymptotically normal.
  37. Shrinkage (James-Stein estimators).
      $$\mathrm{MSE}(\hat{\theta}) = \mathrm{var}(\hat{\theta}) + \mathrm{bias}^2(\hat{\theta})$$
      We may reduce the MSE by trading off bias for variance, using a linear combination of estimators $T$ and $S$,
      $$T_s = \lambda T + (1 - \lambda) S$$
      shrinking $S$ towards $T$.
  38. James-Stein theorem. Let $X$ be distributed according to an $n_d$-variate normal distribution $N(\boldsymbol{\theta}, \sigma^2 I)$. Under the squared loss, the usual estimator $\delta(X) = X$ exhibits a higher loss for any $\boldsymbol{\theta}$, being therefore dominated, than
      $$\delta_a(X) = \boldsymbol{\theta}_0 + \left(1 - \frac{a\sigma^2}{\|X - \boldsymbol{\theta}_0\|^2}\right)(X - \boldsymbol{\theta}_0)$$
      for $n_d \ge 3$ and $0 < a < 2(n_d - 2)$; $a = n_d - 2$ gives the uniformly best estimator in the class. The risk of $\delta_{n_d - 2}$ at $\boldsymbol{\theta}_0$ is constant and equal to $2\sigma^2$ (instead of $n_d\sigma^2$ for the usual estimator).
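      A sketch of the estimator with the uniformly best choice $a = n_d - 2$ (theta0 defaults to the origin; averaging the squared error over many simulated draws exhibits the dominance for $n_d \ge 3$):

         import numpy as np

         def james_stein(x, sigma2, theta0=None):
             """Shrink the usual estimator X towards theta0."""
             nd = x.size
             theta0 = np.zeros(nd) if theta0 is None else theta0
             r = x - theta0
             shrink = 1.0 - (nd - 2) * sigma2 / np.sum(r ** 2)
             return theta0 + shrink * r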
  39. JS estimation of covariance matrices. The unbiased sample estimate of the covariance matrix is
      $$\hat{\Sigma} = \frac{1}{N - 1}\sum_i (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^T$$
      and it benefits from shrinking in the small-sample, high-dimensionality case, avoiding the singularity problem. The optimal shrinking parameter can be obtained in closed form for many useful shrinking targets. Significant improvements are reported in template (face) detection tasks using similar approaches.
  40. Error breakdown. A detailed error breakdown can be exploited to improve system performance. Error measures should be invariant to translation, scaling, and rotation (figure: eye localization errors).
  41. Error scoring. Error weighting or scoring functions can be tuned to tasks: errors are mapped into the range [0, 1]; the lower the score, the worse the error. A single face detection system can be scored differently when considered as a detection or localization system by changing the parameters controlling the weighting functions, using more peaked scoring functions for localization (figure: task-selective penalties).
  42. Error impact. The final verification error $\Delta_v$ must be expressed as a function of the detailed error information that can be associated with each localization $\mathbf{x}_i$:
      $$\Delta_v(\{\mathbf{x}_i\}) = \sum_i f(\boldsymbol{\delta}(\mathbf{x}_i); \boldsymbol{\theta}), \qquad \boldsymbol{\delta}(\mathbf{x}_i) = (\delta_{x_1}(\mathbf{x}_i), \delta_{x_2}(\mathbf{x}_i), \delta_s(\mathbf{x}_i), \delta_\alpha(\mathbf{x}_i))$$
      (figure: system impact on face verification; FAR = false acceptances/impostors, FRR = false rejections/true clients, HTER = (FAR + FRR)/2).
  43. Training and testing: concepts. Let $X$ be the space of possible inputs (without label), $L$ the set of labels, $S = X \times L$ the space of labeled samples, and $D = \{\mathbf{s}_1, \ldots, \mathbf{s}_N\}$, where $\mathbf{s}_i = (\mathbf{x}_i, l_i) \in S$, be our dataset. A classifier is a function $C: X \to L$, while an inducer is an operator $I: D \to C$ that maps a dataset into a classifier.
  44. ... and methods. The accuracy $\epsilon$ of a classifier is the probability $p(C(\mathbf{x}) = l,\ (\mathbf{x}, l) \in S)$ that its label attribution is correct. The problem is to find a low-bias and low-variance estimate $\hat{\epsilon}(C)$ of $\epsilon$. There are three main approaches to accuracy estimation and model selection: 1. hold-out, 2. bootstrap, 3. k-fold cross validation.
  45. Hold out. A subset $D_h$ of $n_h$ points is extracted from the complete dataset and used as the testing set, while the remaining set $D_t = D \setminus D_h$ of $N - n_h$ points is provided to the inducer to train the classifier. The accuracy is estimated as
      $$\hat{\epsilon}_h = \frac{1}{n_h}\sum_{\mathbf{x}_i \in D_h} \delta[J(D_t; \mathbf{x}_i), l_i]$$
      where $\delta(i, j) = 1$ when $i = j$ and 0 otherwise. It (approximately) follows a Gaussian distribution $N(\epsilon, \epsilon(1 - \epsilon)/n_h)$, from which an estimate of the variance of $\hat{\epsilon}_h$ follows.
  46. Bootstrap. The accuracy and its variance are estimated from the results of the classifier over a sequence of bootstrap samples, each of them obtained by randomly sampling with replacement $N$ instances from the original dataset. The accuracy $\epsilon_{\mathrm{boot}}$ is then estimated as
      $$\epsilon_{\mathrm{boot}} = 0.632\,\epsilon_b + 0.368\,\epsilon_r$$
      where $\epsilon_r$ is the re-substitution accuracy and $\epsilon_b$ is the accuracy on the bootstrap subset. Multiple bootstrap subsets $D_{b,i}$ must be generated, the corresponding values being used to estimate the accuracy
      $$\bar{\epsilon}_{\mathrm{boot}} = \frac{1}{n}\sum_{i=1}^{n} \epsilon_{\mathrm{boot}}(D_{b,i})$$
      and its variance by averaging the results.
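      A sketch of the .632 estimator; train_and_test is a hypothetical callable returning an accuracy in [0, 1] when trained on its first argument and tested on its second:

         import numpy as np

         def bootstrap_632(train_and_test, D, n_boot=100, seed=None):
             rng = np.random.default_rng(seed)
             N = len(D)
             e_r = train_and_test(D, D)                 # re-substitution accuracy
             acc = []
             for _ in range(n_boot):
                 idx = rng.integers(0, N, size=N)       # sample with replacement
                 out = np.setdiff1d(np.arange(N), idx)  # left-out instances
                 e_b = train_and_test([D[i] for i in idx],
                                      [D[i] for i in out])
                 acc.append(0.632 * e_b + 0.368 * e_r)
             return np.mean(acc), np.var(acc)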
  47. Cross validation. k-fold cross validation is based on the subdivision of the dataset into $k$ mutually exclusive subsets of (approximately) equal size: each one of them is used in turn for testing while the remaining $k - 1$ groups are given to the inducer to estimate the parameters of the classifier. If we denote with $D_{(i)}$ the subset that includes instance $i$,
      $$\hat{\epsilon}_k = \frac{1}{N}\sum_{i=1}^{N} \delta[J(D \setminus D_{(i)}; \mathbf{x}_i), l_i]$$
      Complete cross validation would require averaging over all $\binom{N}{N/k}$ possible choices of the $N/k$ testing instances out of $N$ and is too expensive, with the exception of test sets of size one ($k = N$), which is also known as leave-one-out (LOO).
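      A companion sketch for k-fold cross validation, with the same hypothetical train_and_test callable as above:

         import numpy as np

         def kfold_accuracy(train_and_test, D, k=10, seed=None):
             rng = np.random.default_rng(seed)
             idx = rng.permutation(len(D))
             hits = 0.0
             for fold in np.array_split(idx, k):        # k disjoint test folds
                 held = set(fold.tolist())
                 train = [D[i] for i in idx if i not in held]
                 test = [D[i] for i in fold]
                 hits += train_and_test(train, test) * len(fold)
             return hits / len(D)                       # pooled accuracy estimate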
  48. ROC representation (ROC points and curves). The ROC curve describes the performance of a classifier when varying the Neyman-Pearson constraint on $P_F$:
      $$P_D = f(P_F) \quad \text{or} \quad T_p = f(F_p)$$
      ROC diagrams are not affected by class skewness, and are also invariant to error costs.
  49. ROC convex hull. The expected cost of a classifier can be computed from its ROC coordinates (operating conditions):
      $$\hat{C} = p(\mathrm{p})(1 - T_p)\,C_{\eta\mathrm{p}} + p(\mathrm{n})\,F_p\, C_{\pi\mathrm{n}}$$
      Proposition: for any set of costs $(C_{\eta\mathrm{p}}, C_{\pi\mathrm{n}})$ and class distributions $(p(\mathrm{p}), p(\mathrm{n}))$, there is a point on the ROC convex hull (ROCCH) with minimum expected cost.
  50. ROC interpolation (ROC convex hull hybrid). Proposition: given two classifiers $J_1$ and $J_2$ represented in ROC space by the points $\mathbf{a}_1 = (F_{p1}, T_{p1})$ and $\mathbf{a}_2 = (F_{p2}, T_{p2})$, it is possible to generate a classifier for each point $\mathbf{a}_x$ on the segment joining $\mathbf{a}_1$ and $\mathbf{a}_2$ with a randomized decision rule that samples $J_1$ with probability
      $$p(J_1) = \frac{a_2 - a_x}{a_2 - a_1}$$
      allowing a system to satisfy given operating constraints.
  51. AUC (scoring classifiers). The area under the curve (AUC) gives the probability that the classifier will score a randomly chosen positive instance higher than a randomly chosen negative one. This value is equivalent to the Wilcoxon rank test statistic $W$:
      $$W = \frac{1}{N_P N_N}\sum_{i:\, l_i = \mathrm{p}}\ \sum_{j:\, l_j = \mathrm{n}} w(s(\mathbf{x}_i), s(\mathbf{x}_j))$$
      where, assuming no ties, $w(s(\mathbf{x}_i), s(\mathbf{x}_j)) = 1$ if $s(\mathbf{x}_i) > s(\mathbf{x}_j)$ and 0 otherwise. The closer the area is to 1, the better the classifier.
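      A direct computation of $W$ from classifier scores (ties, excluded on the slide, are counted here with weight 1/2, a common convention):

         import numpy as np

         def auc_wilcoxon(scores_pos, scores_neg):
             """Fraction of (positive, negative) pairs ranked correctly."""
             sp = np.asarray(scores_pos)[:, None]
             sn = np.asarray(scores_neg)[None, :]
             return np.mean((sp > sn) + 0.5 * (sp == sn))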
  52. Rendering. The appearance of a surface point is determined by solving the rendering equation:
      $$L_o(\mathbf{x}, -\hat{I}, \lambda) = L_e(\mathbf{x}, -\hat{I}, \lambda) + \int_\Omega f_r(\mathbf{x}, \hat{L}, -\hat{I}, \lambda)\, L_i(\mathbf{x}, -\hat{L}, \lambda)\, (-\hat{L} \cdot \hat{N})\, d\hat{L}$$
  53. Describing reality: RenderMan. A minimal scene description:

         Projection "perspective" "fov" 35
         WorldBegin
           LightSource "pointlight" 1 "intensity" 40 "from" [4 2 4]
           Translate 0 0 5
           Color 1 0 0
           Surface "roughMetal" "roughness" 0.01
           Cylinder 1 0 1.5 360
         WorldEnd

      A simple shader:

         color roughMetal(normal Nf; color basecolor;
                          float Ka, Kd, Ks, roughness;)
         {
             extern vector I;
             return basecolor * (Ka * ambient() +
                                 Kd * diffuse(Nf) +
                                 Ks * specular(Nf, -normalize(I), roughness));
         }
  54. How realistic is it? Basic phenomena, including straight propagation, specular reflection, diffuse reflection (Lambertian surfaces), selective reflection, refraction, reflection and polarization (Fresnel's law), exponential absorption of light (Bouguer's law); complex phenomena, including non-Lambertian surfaces, anisotropic surfaces, multilayered surfaces, complex volumes, translucent materials, polarization; spectral effects, including spiky illumination, dispersion, interference, diffraction, Rayleigh scattering, fluorescence, and phosphorescence.
  55. Thematic rendering (automatic ground truth). We can shade a pixel so that its color represents: the temperature of the surface, its distance from the observer, its surface coordinates, the material, or an object-unique identification code.
  56. References:
      R. Brunelli and T. Poggio, 1997. Template matching: matched spatial filters and beyond. Pattern Recognition 30, 751–768.
      R. Brunelli, 2009. Template Matching Techniques in Computer Vision: Theory and Practice. J. Wiley & Sons.
      T. Moon and W. Stirling, 2000. Mathematical Methods and Algorithms for Signal Processing. Prentice-Hall.
      J. Piper, I. Poole and A. Carothers, 1994. Stein's paradox and improved quadratic discrimination of real and simulated data by covariance weighting. Proc. of the 12th IAPR International Conference on Pattern Recognition (ICPR'94), vol. 2, pp. 529–532.
