Ph.D. Research


  • Subsequent efforts in graphics advance the sophistication of the model for animation purposes.
  • Chen: Motion data concatenated with appearance data in the feature vector; Zhang: Masseter muscle; Pamudurthy: Could identify disguised subjects and even identical twins
  • Note on motion discontinuity
  • Drawback: Discretization of objects with irregular geometry becomes extremely involved, requiring extensive computational resources for data storage and system solving
  • Relevance: Allows us to use the dynamic equations governing an elastic body
  • The FE face model is anatomically based. It is a model that avoids any pixel-based computation. The number of patches is again a topic not properly investigated. It is apparent that the more patches there are, the richer the model will be. Here we define these patches to provide a proof of concept.
  • Young’s modulus is used as a smoothing factor
  • The strain images from the FE method are masked in the eye, nose, and mouth regions to fall in line with our earlier approach of focusing on the regions that undergo elastic deformation
  • A general drawback of matching using strain patterns: the query expression must be identical to the enrolled expression
  • Genetic Algorithms are a particular class of evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover.
  • A large effort spanning a longer timeline and including a bigger dataset is a potential topic for future studies
  • The distance measures commonly used are Euclidean and Mahalanobis

    1. Defense of a Doctoral Dissertation
       Computer Science and Engineering, University of South Florida
       Facial Skin Motion Properties from Video: Modeling and Applications
       Vasant Manohar
       Examining Committee: Autar K. Kaw, Ph.D. (Chairperson); Dmitry B. Goldgof, Ph.D. (Co-Major Professor); Sudeep Sarkar, Ph.D. (Co-Major Professor); Rangachar Kasturi, Ph.D.; Tapas K. Das, Ph.D.; Thomas A. Sanocki, Ph.D.
       October 29, 2009
    2. Presentation Outline
       Introduction
         Motivation, existing work, overview of the developed method
       Strain-based Characterization
         Finite Difference Method (profile faces)
         Finite Element Method (frontal faces)
       Material Constants-based Characterization
         Matching faces using Young's modulus values
       Conclusions
         Contribution to research, literature, ideas for future work
    3. A Generic Example
       Is it possible to find a discriminative feature between the two balls?
    4. The Face Analogy
       Is it possible to extract a stable feature between the camouflaged face and the normal one?
    5. Deformable Modeling of Soft Tissues
       Applications
         HCI: facial expression recognition – Essa and Pentland (1997)
         Age estimation: Kwon and Lobo (1999)
         Person identification: Zhang et al. (2004), Pamudurthy et al. (2005), Manohar et al. (2007)
       Classes of approach
         Physical models
         Non-physical models
       Review and applications: Metaxas (1996), Gibson and Mirtich (1997)
    6. Physical Models
       Issues:
         Observed physical phenomena can be very complex
         Solving the underlying partial differential equations (PDEs) requires substantial computational cost
       Solution strategy
         Find an adequately simplified model of the given problem covering the essential observations
         Apply efficient numerical techniques for solving the PDEs
       Our proposal
         Strain pattern extracted from non-rigid facial motion as a simple and adequate representation
         Modeling Young's modulus of facial regions from observed motion
    7. Existing Work: Face Modeling and Biomechanics
       Highly accurate models: Terzopoulos and Waters (1993)
         Anatomical details of the face: bones, musculature, and skin tissues
         Drawback: high computational cost
       Task-driven reduced models: Essa and Pentland (1997)
         Finite element model to estimate visual muscle activations and to generate motion energy templates for expression analysis
         Drawback: automatic identification of the action units that estimate the muscle activations
       Our approach
         Quantify soft tissue properties through their elasticity
         Effectively represent them by means of strain maps
         Model sub-regions of the face using their stiffness values
    8. Existing Work: Person Identification
       Chen et al. (2001) augmented an appearance-based method with facial motion to overcome illumination problems
       Zhang et al. (2004) used the strain pattern from two face images of closed and open jaw positions to reveal underlying muscular characteristics for recognition
       Pamudurthy et al. (2005) used a motion image derived from feature displacements
       Our contribution
         Extension of strain maps to videos, with automated computation
         Substantiated with a comprehensive system design and extensive experimental results
         Exploration of an expression-invariant approach using material constants
    9. Unique Features of this Work
       Strain pattern, instead of image intensity, used as a classification feature
         Related to the biomechanical properties of facial tissues, which are distinct for each individual
         Less sensitive to illumination differences (between registered and query sequences) and face camouflage
       Finite element modeling based method enforces regularization
         Mitigates issues related to automatic motion estimation
       Using material constants for matching presents a unique opportunity for an expression-invariant face matching process
       No special imaging equipment is needed to capture facial deformation
    10. Theoretical Background
        Optical flow: reflects the changes in the image due to motion
        Strain: a measure to quantify the deformation undergone
        Principal Component Analysis: a dimensionality reduction technique that identifies the salient and rich information hidden in raw data
    11. System Flow: Face Matching using Strain Pattern
        Pipeline: input video sequence of an expression → geometric normalization and masking → optic flow (correspondence between two subsequent frames) → displacement vectors for frame-pairs across the sequence → link flow values from each frame-pair → displacement vector for the complete sequence → strain computation module → strain map of a subject (strain mapped to intensity)
        Training: coordinate point extraction → Principal Component Analysis → Euclidean subspace
        Testing: distances in the projected subspace → nearest-neighbor classifier → intra- and inter-subject variation, ROC curves
    12. Strain Computation from Dense Motion Field: The Finite Difference Method (FDM)
        A linear strain tensor capable of describing small deformations is defined as ε = ½(∇u + (∇u)ᵀ)
        In 2D image coordinates, this gives the normal strains εxx = ∂u/∂x and εyy = ∂v/∂y and the shear strain εxy = ½(∂u/∂y + ∂v/∂x)
        Spatial derivatives are computed with the central difference method
        Strain magnitude is computed from the normal strains
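The FDM slide above can be sketched in a few lines of dependency-free Python: central differences over a dense motion field (u, v), then a per-pixel strain magnitude. The magnitude definition below (root sum of squares of the normal strains) is an assumption for illustration; the dissertation's exact formula is not reproduced in this transcript.

```python
# Sketch of FDM strain computation from a dense motion field (u, v) on the
# image grid. Central differences in the interior, one-sided at the borders.
import math

def central_diff_x(f):
    """Estimate df/dx (column direction) with central differences."""
    h, w = len(f), len(f[0])
    d = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
            d[y][x] = (f[y][x1] - f[y][x0]) / (x1 - x0)
    return d

def central_diff_y(f):
    """Estimate df/dy (row direction) with central differences."""
    h, w = len(f), len(f[0])
    d = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            d[y][x] = (f[y1][x] - f[y0][x]) / (y1 - y0)
    return d

def strain_magnitude(u, v):
    """Normal strains exx = du/dx, eyy = dv/dy; assumed magnitude sqrt(exx^2 + eyy^2)."""
    exx, eyy = central_diff_x(u), central_diff_y(v)
    h, w = len(u), len(u[0])
    return [[math.sqrt(exx[y][x] ** 2 + eyy[y][x] ** 2) for x in range(w)]
            for y in range(h)]

# Toy field: u grows linearly in x (du/dx = 0.1), v is constant (dv/dy = 0),
# so the strain magnitude is 0.1 everywhere.
u = [[0.1 * x for x in range(4)] for _ in range(4)]
v = [[0.0] * 4 for _ in range(4)]
mag = strain_magnitude(u, v)
```

On real data the flow field would come from the optic flow step described earlier, and the magnitude image would then be masked and normalized into a strain map.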
    13. Motion and Strain Images
        Video conditions: normal lighting, low lighting, shadow lighting, camouflaged face
        Motion and strain images: video frames 12 and 15, horizontal motion, vertical motion, strain magnitude image (input to the next step)
    14. Analysis of Strain as a Feature
        Discriminatory criterion: strain maps of subjects 1–5
        Stability criterion: normal light, low light, shadow light, camouflaged face
    15. Experimental Set-up
        A total of 60 subjects
        All videos (from a Canon Optura 20) were profile views of the face with opening the mouth as the expression
        Experiments for FDM-based strain computation
        Results were obtained using the Principal Component Analysis algorithm with the Mahalanobis distance for computing metric scores
    16. Within-subject and Between-subject Variation
        Receiver operating characteristic curve
        Test 1 (normal vs. shadow lighting)
    17. Within-subject and Between-subject Variation
        Test 2 (regular vs. camouflaged faces)
    18. FDM-Based Method: Summary
        Presented strain pattern as a unique and stable feature
          Less vulnerable to the illumination variations and face camouflage that often plague image analysis tasks
        FDM carried out on an image grid makes the computational strategy efficient
        Drawbacks
          Requires a dense motion field
          Restricted as a modeling platform
            Limitations with respect to the material type
          Doesn't scale well for objects with irregular geometry
            Requires extensive computational resources for data storage and system solving
    19. Strain Computation from Sparse Motion Field: The Finite Element Method (FEM)
        State-of-the-art technique in physics-based modeling
          Used for finding approximate solutions of partial differential equations
          The approach replaces the differential equation with a system of algebraic equations
          The primary challenge is creating a numerically stable formulation that approximates the equation under study
        Relevance to our work
          Easy incorporation of the material constants associated with facial tissues
          A sparse motion field suffices
        We used the commercial software ANSYS for the FEM implementation
    20. Finite Element Face Model
        Discretization and geometry
        Linear elastic approximation of soft tissue behavior (Koch et al. 1996)
          Equation of motion (Newton's second law)
          Strain-displacement equation
          Constitutive equations (Hooke's law)
    21. Finite Element Face Model
        Each homogeneous and isotropic face region is characterized by
          Compressibility – Poisson's ratio
          Stiffness – Young's modulus
        Poisson's ratio of 0.4 (Gladilin 2002)
        Learning Young's modulus
          Concept of relative stiffness: forehead – reference material; nose – highly rigid; eyes – varying stiffness; relative stiffness the same for the left and right cheeks
          Optimization function: fit of the model-predicted motion to the observed motion field
          Used 1/4 of the motion field to drive the model; the remaining 3/4 for validation
          Done once per subject on normal-lighting videos
    22. Motion and Strain Images
        Video conditions, motion vectors, and strain images
    23. Experimental Set-up
        A total of 20 subjects
        All videos were frontal views of the face with opening the mouth as the expression
        Experiments for FEM-based strain computation
        Results were obtained using the Principal Component Analysis algorithm with the Mahalanobis distance for computing metric scores
    24. Non-Camouflage Experiments
        Within-subject and between-subject variation (Test 1)
        Within-subject and between-subject variation (Test 2)
    25. Camouflage Experiments
        Within-subject and between-subject variation (Test 3)
    26. FEM-Based Method: Summary
        Presented a computational strategy that needs just 1/25 of the motion vectors
        The FE model enforces regularization
          Mitigates issues related to automatic motion estimation
        The model includes the material constants associated with facial tissues
        Presented a first method to learn the material constants at a coarse level sufficient for accurate strain computation
        Drawbacks
          Uses just one expression to estimate Young's modulus
          Generic face model
          Primitive search technique
          Coarse sub-divisions
    27. Modeling Young's Modulus from Multiple Facial Expressions
        Attempt a more accurate estimation of Young's modulus by using motion from multiple expressions
        Scalable matching process
        Refined search technique for better estimation of values
        Finer sub-divisions in the face model
        Subject-specific face model conforming to the individual's facial feature locations
    28. System Flow: Face Matching using Material Constants
        Pipeline: input video sequence of an expression → optic flow (correspondence between two subsequent frames) → displacement vectors for frame-pairs across the sequence → link flow values from each frame-pair → displacement vector for the complete sequence → Young's modulus learning module → Young's modulus distribution of a subject's face
        Training (repeated for every expression in the training set): ElasticFace, an FE face model with learned material constants → Euclidean space of the Young's modulus of face patches
        Testing: distances in ElasticFace space → score-level fusion techniques → intra- and inter-subject variation, ROC curves
    29. Modeling Algorithm
        Step 1: Concept of relative stiffness; forehead – reference material; nose – highly rigid; eyes – varying stiffness
        Step 2: Optimization function
        Step 3: Use 1/4 of the motion field to drive the model; the remaining 3/4 for computing the fitness function value
        Step 4: Run a search algorithm to explore this solution space and use the converged values
        Repeat Steps 1–4 for every sequence
        Match based on parameter values along the appropriate dimensions
    30. Facial Feature Detection
        Motivation
          System automation
          Reducing the computational cost of optic flow by looking only at the region of interest
          Building an individual-specific face model
        Viola–Jones object detector
          Rectangular Haar-like binary feature wavelets
          Cascade of weak classifiers
        We used the OpenCV implementation of Haar object detection
        Feature detection results on the BU dataset
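To make the "cascade of weak classifiers" idea concrete, here is a toy, dependency-free sketch of the attentional cascade used by Viola–Jones style detectors: each stage sums the responses of a few weak threshold classifiers and rejects a window early if the sum falls below the stage threshold. The features, weights, and thresholds below are invented for illustration; a real detector (such as OpenCV's) learns them with AdaBoost over Haar-like features.

```python
# Toy attentional cascade: cheap early rejection of non-face windows.

def weak_response(window, feature):
    """A weak classifier: compare a crude row-sum contrast to a threshold."""
    (r1, r2), thresh, weight = feature      # rows to compare, threshold, vote weight
    score = sum(window[r1]) - sum(window[r2])   # crude "Haar-like" contrast
    return weight if score >= thresh else 0.0

def cascade_detect(window, stages):
    """Run the window through each stage; reject as soon as one stage fails."""
    for weak_features, stage_threshold in stages:
        total = sum(weak_response(window, f) for f in weak_features)
        if total < stage_threshold:
            return False    # early rejection: most windows exit here cheaply
    return True             # survived every stage: report a detection

# Toy 2-row "window": bright top row over dark bottom row (an edge-like pattern).
window = [[9, 9, 9], [1, 1, 1]]
stage1 = ([((0, 1), 5, 1.0)], 0.5)                    # one weak classifier
stage2 = ([((0, 1), 10, 1.0), ((0, 1), 20, 1.0)], 0.5)  # two weak classifiers
found = cascade_detect(window, [stage1, stage2])
```

The cascade's efficiency comes from the early `return False`: the vast majority of scanned windows are rejected by the first, cheapest stage.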
    31. Face Model
        Finer sub-divisions to attempt defining a richer FE model
        Specific to every subject, based on the results from the feature detection step
    32. Search Algorithm
        Plot of the fitness function
          The objective function is not smooth
          Multiple local optima
    33. Gradient-based vs. Random Algorithms
        Gradient-based approaches
          Often progress slowly when the number of parameters is large
          Make no allowance for multiple optima
        Random algorithms then seem to be a reasonable choice
          The course of the algorithm is decided by random numbers
        Genetic algorithms are a particular class of evolutionary algorithms (EA)
    34. Genetic Coding
        Young's modulus of the regions as the chromosome in the GA
        One-to-one mapping
        Each chromosome in the pool represents a possible Young's modulus distribution
    35. GA Parameter Settings
        From findings in the literature for a similar domain, we use the following settings for the GA
        We used a Gaussian mutation operator with mean = 0 and standard deviation = 1
    36. Training
        Out of the 6 expressions, use 5 to estimate the Young's modulus values
        At least 40% of the elements in a region should deform for the region to be considered for optimization
        A note on the sad, fear, and angry expressions
          Used the converged Young's modulus values for regions where there was substantial deformation
        Use the mean of the converged values from multiple expressions as the final value for the region
    37. Multi-Feature Classification Systems: Combination Rules
        Treat the Young's modulus from each patch as a separate feature
        Numerous combination techniques: sensor-level, feature-level, score-level, and decision-level
        Popular score-level fusion techniques
          Product rule
          Sum rule
          Max rule
        We investigate both the sum and the max rule
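The sum and max rules named above reduce to one line each once every patch has produced a match score. A minimal sketch, assuming each patch yields a similarity score for one probe-gallery comparison (the patch names and score values are illustrative only):

```python
# Score-level fusion over per-patch Young's modulus match scores.

def sum_rule(scores):
    return sum(scores)      # combined score = sum of per-feature scores

def max_rule(scores):
    return max(scores)      # combined score = best single feature

# Hypothetical per-patch similarity scores for one probe-gallery comparison.
patch_scores = {"forehead": 0.80, "left_cheek": 0.55, "right_cheek": 0.60}
s = sum_rule(patch_scores.values())   # 1.95
m = max_rule(patch_scores.values())   # 0.80
```

The sum rule rewards consistent agreement across patches, while the max rule lets one strongly matching patch dominate, which is why the two can rank candidates differently.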
    38. Binghamton University 4D Facial Expression Dataset: BU-4DFE
        High-resolution (1040 x 1329) 3D dynamic facial expression database
        Objective: analyze facial behavior in dynamic 3D space
        Video rate: 25 frames per second
        Six prototypical expressions: anger, disgust, happiness, fear, sadness, and surprise
        101 subjects (58 female and 43 male) with a wide ethnic/racial variety
    39. Experiments: Non-rigid Motion Tracking
        Given a subset of motion vectors, we can estimate displacements in other regions using the equation of motion
        Evaluation
          Compare against the Black and Anandan optic flow output
            Generate a nearly identical dense motion field from a sparse set of motion vectors
          Compare against a simple bi-cubic interpolation method
            Emphasize the value added by modeling material constants in the deformation domain
    40. Experiments: Non-rigid Motion Tracking
        Snapshot of the table comparing the two methods
        Observations:
          The average error from the model is within 7% and the worst-case error is within 11%
          The average error from the model is always less than that of the interpolation technique
    41. Experiments: Expression Invariant Matching
        A total of 40 subjects (20 male and 20 female)
        Performed a leave-one-expression-out experiment: train on 5 expressions and test on the 6th; repeat by changing the test expression
        Investigated both the sum and the max rule
        Metric computation only along the relevant regions
          40% deformation threshold, as earlier
    42. Experiments: Expression Invariant Matching
    43. Experiments: Expression Invariant Matching
    44. Experiments: Expression Invariant Matching
        A first step towards expression-invariant matching of faces
        Due to lack of deformation, performance for some query expressions is not good: sadness, fear, or anger
          The disparity in performance aligns with findings in the literature
        The max rule outperforms the sum rule in almost all the tests
    45. Modeling Young's Modulus: Summary
        Presented a method for modeling material constants (Young's modulus) in sub-regions of the face
          An efficient way of describing the underlying material properties
          Deformable modeling techniques are gauged by their simplicity and adequacy
        A first, novel attempt at expression-invariant matching of face templates
    46. Conclusions
        Used strain pattern as an effective and efficient way of characterizing the material properties of facial soft tissues
          Impact on applications such as facial expression recognition, age estimation, and person identification from video
        Discussed two methods for computing the strain pattern
          FDM-based: efficient when carried out on the image grid
          FEM-based: better characterization of facial tissues by incorporating relative material properties; works well with a sparse motion field
        Experiments emphasize that strain pattern is a discriminative and stable feature
          Its value is further justified by performance under shadow lighting and camouflage
    47. Conclusions
        Developed a method for modeling material constants from the motion observed in multiple facial expressions
          Impact on deformable modeling techniques
        Presented a novel expression-invariant matching strategy
          Impact on biometrics
        Due to the limited population size, this study so far can provide only a baseline evaluation of the performance of the presented methods
    48. Conclusions
        Intellectual merit
          The facial strain pattern adds a new dimension in characterizing the face
            Important auxiliary information that can be exploited in multimodal techniques
          Fosters a new way to capture facial dynamics from video
          Presents a very first attempt at matching faces with different expressions
          Presents a simple and adequate way of modeling deformable objects (implications for real-time methods)
        Broader impact
          Addresses the long-standing problem of motion analysis of elastic objects
          Cross-disciplinary nature
            Applying image analysis algorithms to the material property characterization of facial soft tissues and its applications
          Utilizes video processing to enhance our ability to make unique discoveries through facial dynamics in video
    49. Future Directions
        Fusion with intensity information in a recognition framework
          Further justify the orthogonal information provided by strain maps
        Capture the dynamics inherent in a facial expression
          Snapshots of the variation of the strain pattern
          Use manifolds of strain patterns in image analysis tasks
    50. Contribution to Literature
        Facial Motion Analysis:
          V. Manohar, Y. Zhang, D. Goldgof, and S. Sarkar, "Facial Strain Pattern as a Soft Forensic Evidence", In the Eighth IEEE Workshop on Applications of Computer Vision, Page: 42, 2007
          V. Manohar, Y. Zhang, D. Goldgof, and S. Sarkar, "Video-based Person Identification using Facial Strain Pattern", To be submitted to the IEEE Transactions on Systems, Man, and Cybernetics – Part B
          V. Manohar, M. Shreve, D. Goldgof, and S. Sarkar, "Finite Element Modeling of Facial Deformation in Videos for Computing Strain Pattern", In the International Conference on Pattern Recognition, ISBN 978-1-4244-2174-9, Pages: 1-4
          M. Shreve, S. Godavarthy, V. Manohar, D. Goldgof, and S. Sarkar, "Towards Macro- and Micro-Expression Spotting in Video using Strain Patterns", In the IEEE Workshop on Applications of Computer Vision, 2009
          Y. Zhang, J.R. Sullins, D. Goldgof, and V. Manohar, "Computing Strain Elastograms of Skin Using an Optical Flow Based Method", In the Fifth International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2006
        Medical Imaging:
          Y. Qiu, V. Manohar, V. Korzhova, X. Sun, and D. Goldgof, "Two-View Mammography Registration using 3D Finite Element Model of the Breast", Submitted to Computerized Medical Imaging and Graphics
          Y. Qiu, X. Sun, V. Manohar, and D. Goldgof, "Towards Registration of Temporal Mammograms by Finite Element Simulation of MR Breast Volumes", In the SPIE Medical Imaging: Visualization, Image-guided Procedures, and Modeling, Vol. 6918, 6918-86, 2008
          Y. Zhang, R.W. Kramer, D. Goldgof, and V. Manohar, "Development of a Robust Algorithm for Imaging Complex Tissue Elasticity", In the Fifth International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2006
    51. Contribution to Literature
        Performance Evaluation:
          R. Kasturi, D. Goldgof, P. Soundararajan, V. Manohar, J. Garofolo, R. Bowers, M. Boonstra, V. Korzhova, and J. Zhang, "Framework for Performance Evaluation of Face, Text, and Vehicle Detection and Tracking in Video: Data, Metrics, and Protocol", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 2, Pages: 319-336, Feb 2009
          V. Manohar, P. Soundararajan, H. Raju, D. Goldgof, R. Kasturi, and J. Garofolo, "Performance Evaluation of Object Detection and Tracking in Video", In the Seventh Asian Conference on Computer Vision, LNCS 3852, pp: 151-161, 2006
          V. Manohar, P. Soundararajan, M. Boonstra, H. Raju, D. Goldgof, R. Kasturi, and J. Garofolo, "Performance Evaluation of Text Detection and Tracking in Video", In the Seventh IAPR Workshop on Document Analysis Systems, LNCS 3872, pp: 576-587, 2006
          V. Manohar, M. Boonstra, V. Korzhova, P. Soundararajan, D. Goldgof, R. Kasturi, S. Prasad, H. Raju, R. Bowers, and J. Garofolo, "PETS vs. VACE Evaluation Programs: A Comparative Study", In the Ninth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), pp: 1-6, In Conjunction with CVPR, 2006
          V. Manohar, P. Soundararajan, V. Korzhova, M. Boonstra, D. Goldgof, R. Kasturi, R. Bowers, and J. Garofolo, "A Baseline Algorithm for Face Detection and Tracking in Video", In the SPIE Europe Symposium on Security and Defence: Optics and Photonics for Counter-Terrorism and Crime-Fighting, Vol. 6741, 6741-09, 2007
    52. QUESTIONS?
    53. Motion Estimation: Optical Flow Method
        Reflects the changes in the image due to motion
        Computation is based on the following assumptions:
          The observed brightness of any object point is constant over time
          Nearby points in the image plane move in a similar manner
        Minimization problem: a brightness-constancy term plus a smoothness term
        Robust estimation framework (Black and Anandan, 1996)
          Recast the least-squares formulation with a robust error-norm function in place of the quadratic
        Coarse-to-fine strategy
          Construct a pyramid of spatially filtered and sub-sampled images
          Compute flow values at the lowest resolution and project to the next level in the pyramid
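The brightness-constancy plus smoothness minimization on the slide above can be sketched in its classic Horn–Schunck form. The dissertation uses the robust Black and Anandan (1996) variant, which replaces the quadratic error norm and adds the coarse-to-fine pyramid; this dependency-free toy keeps only the core iterative update.

```python
# Minimal Horn-Schunck style flow on two tiny frames (nested lists).

def gradients(f1, f2):
    """Crude forward-difference estimates of Ix, Iy, It."""
    h, w = len(f1), len(f1[0])
    Ix = [[f1[y][min(x + 1, w - 1)] - f1[y][x] for x in range(w)] for y in range(h)]
    Iy = [[f1[min(y + 1, h - 1)][x] - f1[y][x] for x in range(w)] for y in range(h)]
    It = [[f2[y][x] - f1[y][x] for x in range(w)] for y in range(h)]
    return Ix, Iy, It

def neighbor_avg(f, y, x):
    """Average of the 4-neighborhood (smoothness term)."""
    h, w = len(f), len(f[0])
    nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    vals = [f[j][i] for j, i in nbrs if 0 <= j < h and 0 <= i < w]
    return sum(vals) / len(vals)

def horn_schunck(f1, f2, alpha=1.0, iters=100):
    """Iteratively trade off brightness constancy against flow smoothness."""
    h, w = len(f1), len(f1[0])
    Ix, Iy, It = gradients(f1, f2)
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                ub, vb = neighbor_avg(u, y, x), neighbor_avg(v, y, x)
                ix, iy, it = Ix[y][x], Iy[y][x], It[y][x]
                k = (ix * ub + iy * vb + it) / (alpha ** 2 + ix * ix + iy * iy)
                u[y][x], v[y][x] = ub - ix * k, vb - iy * k
    return u, v

# An intensity ramp shifted right by one pixel: the recovered horizontal
# flow u should converge toward 1 everywhere, with v staying 0.
f1 = [[float(x) for x in range(4)] for _ in range(4)]
f2 = [[float(x) - 1.0 for x in range(4)] for _ in range(4)]
u, v = horn_schunck(f1, f2)
```

On a single frame pair this recovers only small, smooth motions, which is exactly why the slide's coarse-to-fine pyramid is needed for larger displacements.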
    54. Principal Component Analysis
        Dimensionality reduction technique
        Basic idea
          Features in the subspace provide more salient and richer information than the raw images themselves
        Representation
          Strain images represented as a low-dimensional vector of weights (feature vector)
        Training
          Learning these weights using a set of training images
        Testing
          Calculate distances to each of the training patterns in the projected subspace
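The PCA slide's "vector of weights" representation can be illustrated without any libraries on 2-D data: centre the samples, form the covariance matrix, take its leading eigenvector (here via the closed form for a symmetric 2x2 matrix), and represent each sample by its projection onto that axis. Real strain images are high-dimensional, so the eigenfaces-style PCA in the dissertation applies the same idea at scale.

```python
# Toy 1-component PCA on 2-D points, dependency-free.
import math

def pca_1d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix [[a, b], [b, c]]
    a = sum(x * x for x, _ in centred) / n
    b = sum(x * y for x, y in centred) / n
    c = sum(y * y for _, y in centred) / n
    # Leading eigenvalue of a symmetric 2x2 matrix, in closed form
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    ex, ey = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(ex, ey)
    ex, ey = ex / norm, ey / norm
    weights = [x * ex + y * ey for x, y in centred]   # 1-D feature vector
    return (ex, ey), weights

# Points along the line y = x: the principal axis is ~(0.707, 0.707).
axis, w = pca_1d([(0, 0), (1, 1), (2, 2), (3, 3)])
```

At test time, matching then reduces to comparing these weight vectors with a distance such as Euclidean or Mahalanobis, as the earlier slides describe.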
    55. Steps involved in FEM
        Discretization: the problem domain is discretized into a collection of simple shapes, or elements; the continuous equations become finite differences and summations (instead of derivatives and integrals)
        Assembly: the element equations for each element in the FEM mesh are assembled into a set of global equations that model the properties of the entire system
        Application of boundary conditions: these reflect the known values of certain primary unknowns
        Solve for primary unknowns: the modified global equations are solved for the primary unknowns at the nodes; values between nodes are interpolated
        [Diagram: object → discretize into a nodal mesh with a local model → assemble the global model → solve → interpolate values between nodes]
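The four steps above can be run end to end on the simplest possible problem: a 1-D bar modelled as two linear spring elements, fixed at the left end, with a force pulling on the right end. The stiffness and force values are made up for illustration; the dissertation's face model applies the same pipeline in 2-D via ANSYS.

```python
# The four FEM steps on a 1-D bar: two spring elements, three nodes.

def solve_bar(k1, k2, F):
    # 1) Discretization: each element contributes k * [[1, -1], [-1, 1]]
    # 2) Assembly into the 3x3 global stiffness matrix K u = f
    K = [[k1, -k1, 0.0],
         [-k1, k1 + k2, -k2],
         [0.0, -k2, k2]]
    f = [0.0, 0.0, F]
    # 3) Boundary condition u0 = 0: drop row/column 0, leaving a 2x2 system
    a, b = K[1][1], K[1][2]
    c, d = K[2][1], K[2][2]
    f1, f2 = f[1], f[2]
    # 4) Solve for the primary unknowns (nodal displacements), Cramer's rule
    det = a * d - b * c
    u1 = (f1 * d - b * f2) / det
    u2 = (a * f2 - f1 * c) / det
    return [0.0, u1, u2]

# Springs in series: elongations F/k1 = 0.5 and F/k2 = 1.0,
# so the nodal displacements are [0, 0.5, 1.5].
u = solve_bar(k1=2.0, k2=1.0, F=1.0)
```

Values between the nodes would then be interpolated with the element shape functions (linear, for these elements).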
    56. Genetic Algorithm
        [Start] Generate a random population of n chromosomes
        [Fitness] Evaluate the fitness function value of each chromosome in the population
        [New population] Create a new population by repeating the following steps:
          [Selection] Select 2 parent chromosomes from the population
          [Crossover] Cross over the parents with some probability to form new offspring
          [Mutation] Mutate the offspring at each position with some probability
          [Accepting] Place the new offspring in the population
        [Replace] Use the newly generated population for a further run of the algorithm
        [Test] If the end condition is satisfied, stop and return the best solution in the current population
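The GA steps above can be sketched in a few dozen lines, including the Gaussian mutation (mean 0, standard deviation 1) from the GA-settings slide. The fitness function here is a stand-in: it rewards chromosomes (stiffness vectors) close to a hypothetical known target, whereas the dissertation's fitness compares model-predicted and observed facial motion.

```python
# Minimal GA: tournament selection, single-point crossover, Gaussian mutation.
import random

random.seed(0)
TARGET = [5.0, 2.0, 8.0]    # hypothetical "true" stiffness values

def fitness(chrom):
    """Higher is better: negative squared distance to the target."""
    return -sum((c - t) ** 2 for c, t in zip(chrom, TARGET))

def evolve(pop_size=30, generations=60, p_cross=0.9, p_mut=0.1):
    # [Start] random population of stiffness vectors
    pop = [[random.uniform(0, 10) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # [Selection] better of two random chromosomes (tournament)
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # [Crossover] single-point, with probability p_cross
            if random.random() < p_cross:
                cut = random.randrange(1, len(TARGET))
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # [Mutation] add Gaussian noise (mean 0, std 1) per gene
            child = [g + random.gauss(0, 1) if random.random() < p_mut else g
                     for g in child]
            # [Accepting] place the offspring in the new population
            new_pop.append(child)
        pop = new_pop   # [Replace]
    return max(pop, key=fitness)   # [Test] best solution at termination

best = evolve()
```

Because the objective surface for the face model is non-smooth with multiple local optima (slide 32), a population-based search like this is preferred over gradient descent.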
    57. Road Map of Research (2004–2009)
        Performance evaluation of object detection and tracking systems
        FDM-based method: profile faces
        Range images / 2D images / video: performance of strain
        FEM-based method: frontal faces
        Modeling material constants: expression-invariant matching
        Registration of temporal mammograms: finite element modeling
        Defense!