Gil Shapira's Active Appearance Model slides

1. Active Appearance Models
   Presenter: Gil Shapira
   Instructor: Lior Wolf
   Date: November 6, 2006 [4]
2. Paper
   - T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active Appearance Models. In Proc. 5th European Conference on Computer Vision, Freiburg, Germany, 1998.
3. Agenda
   - Introduction:
     - Motivation
     - PCA
   - Active Appearance Models:
     - Training Set
     - Shape Model
     - Texture Model
     - Combined Model
     - Examples
   - Image Interpretation:
     - Overview
     - Algorithm
     - Results
   - Conclusion
4. Motivation
   - A statistical model that captures the appearance of an image.
   - Can generate photo-realistic synthetic images.
   - A robust and efficient algorithm for interpreting images.
5. PCA (Rehash)
   - Approximate a dataset using a representation in a lower dimension (a minimal sketch follows below). [2]
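Since the deck repeatedly applies PCA to different data (shapes, textures, combined vectors), here is a minimal numpy sketch of that step, assuming the usual covariance/eigenvector formulation; the function and variable names are illustrative, not from the slides.

```python
import numpy as np

def pca(data, n_modes):
    """data: (n_samples, n_dims). Returns the mean, the matrix P of the top
    n_modes eigenvectors, and per-sample parameters b, so data[i] ~ mean + P @ b[i]."""
    mean = data.mean(axis=0)
    centered = data - mean
    # Eigen-decomposition of the sample covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_modes]   # eigh sorts ascending; keep the largest
    P = eigvecs[:, order]                         # (n_dims, n_modes) matrix of modes
    b = centered @ P                              # (n_samples, n_modes) parameter vectors
    return mean, P, b
```

A sample x is then approximated as mean + P @ b, which is the form all of the following models take.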
6. Agenda
   (Repeat of slide 3, introducing the Active Appearance Models section.)
7. Training Set
   - The training set consists of images with landmark points.
   - Landmarking can be done manually or automatically.
   - Landmark placement should be consistent between images. [2]
8. Shape Model
   - We'd like to model the shape of the image independently of its texture.
   - Represent the shape of each image by the 2n-D vector x of its landmark points.
   - Align the shapes to the same coordinate frame. (Figure: the mean shape.) [2]
9. Shape Model
   - Apply PCA to the aligned shapes (a short sketch of building this model follows):
     - Model of shape: x ≈ x̄ + P_s b_s.
       - The mean shape x̄ and the matrix of eigenvectors P_s define the model.
       - b_s is a vector of parameters of the model.
   - How many modes of variation do we want?
     - Enough that a chosen proportion of the total variance is represented?
     - The smallest number that still reconstructs held-out examples (an all-but-one / leave-one-out test)?
     - Careful of overfitting!
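A sketch of how the shape model could be assembled with the pca helper above; the data file, the number of modes, and the names x_bar, P_s, b_s (matching the slide's x̄, P_s, b_s) are illustrative assumptions.

```python
import numpy as np

# Assumed: aligned_shapes.npy holds an (n_images, 2n) array of landmark
# coordinates (x1, y1, ..., xn, yn), already aligned to a common frame,
# and pca() is the helper sketched earlier.
aligned_shapes = np.load("aligned_shapes.npy")          # placeholder data source

x_bar, P_s, b_s_all = pca(aligned_shapes, n_modes=10)   # 10 shape modes is an example choice

def shape_from_params(b_s):
    """Synthesize a shape from parameters: x ~ x_bar + P_s b_s."""
    return x_bar + P_s @ b_s
```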
10. Texture Model
   - Model the texture independently of the shape.
   - Warp each image so that its landmark points match the mean shape.
     - Delaunay triangulation (a small sketch follows).
   (Figure: an example image warped to the mean shape.) [2]
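The slide names a Delaunay triangulation of the landmarks as the basis for the piecewise-affine warp. A minimal scipy sketch of the triangulation step; the landmark coordinates here are placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

# Placeholder mean-shape landmarks (x, y); in practice these come from the shape model.
landmarks = np.array([[10, 10], [120, 15], [60, 90], [30, 160], [110, 150]], dtype=float)

tri = Delaunay(landmarks)     # triangulate the mean-shape landmarks
print(tri.simplices)          # triangles as index triples into `landmarks`

# Each triangle defines an affine map between an image's landmarks and the mean
# shape's landmarks; warping applies that map to the pixels inside the triangle.
```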
11. Texture Model
   - The intensity values g_im are sampled from the convex hull of the warped image's landmark points.
   - Normalize the values to minimize the effect of global lighting (see the sketch below):
     - g = (g_im − β·1) / α, where:
       - α is the scaling.
       - β is the offset.
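A sketch of the normalization g = (g_im − β·1) / α. The slide does not fix how α and β are chosen, so taking β as the mean intensity and α as the norm of the centered vector is an assumption for illustration.

```python
import numpy as np

def normalize_texture(g_im):
    """Remove the global lighting offset and scale: g = (g_im - beta*1) / alpha."""
    beta = g_im.mean()                  # assumed: offset = mean intensity
    centered = g_im - beta
    alpha = np.linalg.norm(centered)    # assumed: scale = norm of the centered vector
    return centered / alpha
```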
12. Texture Model
   - Apply PCA to the normalized values:
     - Model of texture: g ≈ ḡ + P_g b_g.
       - The mean texture ḡ and the matrix of eigenvectors P_g define the model.
       - b_g is a vector of parameters of the model.
13. Combined Model
   - The shape and texture are modelled by b_s and b_g respectively.
   - Appearance = Shape + Texture, so combine shape (b_s) and texture (b_g) into one model.
   - Generate concatenated vectors b = (W_s b_s ; b_g) for each training example.
   - The diagonal matrix W_s accounts for the different units of the shape and texture parameters.
14. Combined Model
   - We have a concatenated vector b = (W_s b_s ; b_g) for each training example.
   - Now apply PCA to the combined vectors (a small sketch follows):
     - Model of appearance: b ≈ Q c.
       - The matrix of eigenvectors Q defines the model.
       - c is a vector of parameters of the model.
   - To sum up: x = x̄ + Q_s c and g = ḡ + Q_g c, where Q_s = P_s W_s⁻¹ Q_cs and Q_g = P_g Q_cg, with Q partitioned into its shape rows Q_cs and texture rows Q_cg.
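A sketch of the combined-model step, assuming b_s_all and b_g_all are the parameter matrices produced by the earlier pca helper on the aligned shapes and normalized textures; the scalar weighting r (ratio of total variances) and the 15 appearance modes are common but assumed choices, not stated on the slide.

```python
import numpy as np

# Assumed inputs: b_s_all (n_images, t_s) shape parameters and
# b_g_all (n_images, t_g) texture parameters from the earlier pca helper.
r = np.sqrt(b_g_all.var(axis=0).sum() / b_s_all.var(axis=0).sum())
W_s = r * np.eye(b_s_all.shape[1])               # diagonal weighting matrix W_s = r I

# Concatenate b = (W_s b_s ; b_g) per training example and apply PCA once more.
b_combined = np.hstack([b_s_all @ W_s, b_g_all])
b_bar, Q, c_all = pca(b_combined, n_modes=15)    # rows of c_all are appearance parameters c
# b_bar is ~0 here because the shape and texture parameters are already zero-mean.
```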
15. Examples
   - First two modes of shape variation. [2]
16. Examples
   - First two modes of texture variation. [2]
17. Examples
   - First four modes of appearance variation. [2]
18. Examples
   - First four modes of appearance variation (continued). [2]
19. Examples
   - Reconstructing a previously unseen image. [2]
20. Agenda
   (Repeat of slide 3, introducing the Image Interpretation section.)
21. Image Interpretation
   - Ingredients:
     - An image to be interpreted.
     - An appearance model.
     - A reasonable starting position.
   - Goal:
     - Find the best-matching synthetic image.
   - Method:
     - Adjust the model parameters in an intelligent way.
22. Image Interpretation
   - Let δI = I_i − I_m be the difference vector, where:
     - I_i is the input image.
     - I_m is the synthetic image for the current estimate of the parameters.
   - Search for the best estimate by minimizing Δ = |δI|².
   - This seems tough…
23. Image Interpretation
   - Knowing the matching error δI, we'd like to know how to improve the parameter vector c, i.e. calculate δc.
   - Approximate δc by δc = A δI.
   - Computing A:
     - Add extra parameters to c:
       - Account for 2D translation, rotation, and scaling.
     - Take δI = δg in the shape-normalized frame.
     - Generate pairs (δc, δg) and use linear regression to estimate the matrix A (see the sketch below).
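A sketch of how A could be estimated, assuming `perturbations` holds the known δc (plus pose) displacements and `errors` the corresponding δg vectors sampled in the shape-normalized frame; plain least squares is used, as the slide suggests, and the array names are illustrative.

```python
import numpy as np

def estimate_A(perturbations, errors):
    """perturbations: (n_trials, n_params) of known dc; errors: (n_trials, n_pixels) of dg.
    Solve dc ~ A dg for A in the least-squares sense."""
    # Row-wise we want perturbations ~ errors @ A.T, so solve for A.T with lstsq.
    A_T, *_ = np.linalg.lstsq(errors, perturbations, rcond=None)
    return A_T.T    # (n_params, n_pixels): prediction is dc = A @ dg
```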
24. Image Interpretation
   - Verify the quality of the prediction δc = A δg:
     - Perturb the model.
     - Try to predict the perturbation given the resulting error. [2]
25. Image Interpretation
   - A single iteration (sketched in code below):
     - Evaluate the error vector δg_0.
     - Predict the displacement of the parameters: δc = A δg_0.
     - Set k = 1.
     - Let c_1 = c_0 − k δc.
     - Sample the image at this new prediction and calculate a new error vector δg_1.
     - If |δg_1|² < |δg_0|², accept the new estimate c_1.
     - Otherwise try k = 1.5, k = 0.5, k = 0.25, etc.
   - Repeat iterations until the error no longer improves, then declare convergence.
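A sketch of the single iteration described above; `residual_for(c)` is a hypothetical stand-in for "render the model at parameters c, sample the image, and return δg in the shape-normalized frame", which depends on the rest of the model.

```python
import numpy as np

def search_step(c0, A, residual_for):
    """One AAM search iteration as on the slide: predict dc = A dg0, then damp with k."""
    dg0 = residual_for(c0)
    dc = A @ dg0
    e0 = np.sum(dg0 ** 2)
    for k in (1.0, 1.5, 0.5, 0.25):          # damping factors in the slide's order
        c1 = c0 - k * dc
        e1 = np.sum(residual_for(c1) ** 2)
        if e1 < e0:
            return c1, e1, True              # improved: accept the new estimate
    return c0, e0, False                     # no improvement: caller may declare convergence
```

The outer loop simply calls search_step until it reports no improvement, which matches the slide's convergence criterion.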
26. Results [2]
27. Results
   - The model was trained on 88 hand-labeled 200×200 images.
   - It was tested on 100 other images. [2]
28. Results (continued)
   - Same setup: 88 hand-labeled 200×200 training images, tested on 100 other images. [2]
29. Agenda
   (Repeat of slide 3, introducing the Conclusion.)
30. Conclusion
   - Active Appearance Models model the shape and texture of images.
   - They provide a lower-dimensional approximation of an image, which can be used to generate synthetic variations and entirely new images.
   - The search algorithm converges to a satisfactory match with high probability, provided the initial position is not too far from the target.
31. References
   1. T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active Appearance Models. In Proc. 5th European Conference on Computer Vision, Freiburg, Germany, 1998.
   2. T.F. Cootes and C.J. Taylor. Statistical Models of Appearance for Computer Vision. Technical report, University of Manchester, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering, Manchester M13 9PT, United Kingdom, September 1999.
   3. Denis Simakov, Active Appearance Models slides, http://www.wisdom.weizmann.ac.il/~deniss/
   4. Mikkel B. Stegmann, Active Appearance Models website, http://www2.imm.dtu.dk/~aam
32. Thank You!
