Ph.D. Research
Slide notes
  • Subsequent efforts in graphics advance the sophistication of the model for animation purposes.
  • Chen: Motion data concatenated with appearance data in the feature vector; Zhang: Masseter muscle; Pamudurthy: Could identify disguised subjects and even identical twins
  • Note on motion discontinuity
  • Drawback: Discretization of objects with irregular geometry becomes extremely involved requiring extensive computational resources for data storage and system solving
  • Relevance: Allows us to use the dynamic equations governing an elastic body
  • The FE face model is anatomically based; it avoids any pixel-based computation. The number of patches is a topic not properly investigated; it is apparent that the more patches there are, the richer the model will be. Here we define these patches to provide a proof of concept.
  • Young’s modulus is used as a smoothing factor
  • The strain images from the FE method are masked in the eyes, nose, and mouth regions to fall in line with our earlier approach of focusing on the regions that undergo elastic deformation
  • A general drawback of matching using strain pattern: the query expression should be identical to the enrolled expression.
  • Genetic Algorithms are a particular class of evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover.
  • A large effort spanning longer timeline that includes a bigger dataset is a potential topic for future studies
  • The distance measures commonly used are Euclidean and Mahalanobis
  • Transcript

    • 1. Defense of a Doctoral Dissertation, Computer Science and Engineering, University of South Florida
      Facial Skin Motion Properties from Video: Modeling and Applications
      Vasant Manohar
      Examining Committee: Autar K. Kaw, Ph.D. – Chairperson; Dmitry B. Goldgof, Ph.D. – Co-Major Professor; Sudeep Sarkar, Ph.D. – Co-Major Professor; Rangachar Kasturi, Ph.D.; Tapas K. Das, Ph.D.; Thomas A. Sanocki, Ph.D.
      October 29, 2009
    • 2.
      Presentation Outline
      Introduction
      Motivation, existing work, overview of the developed method
      Strain-based Characterization
      Finite Difference Method (profile faces)
      Finite Element Method (frontal faces)
      Material Constants-based Characterization
      Matching faces using Young’s modulus values
      Conclusions
      Contribution to research, literature, ideas for future work
      10/29/2009
      2
    • 3.
      A Generic Example
      Is it possible to find a discriminative feature between the two balls?
    • 4.
      The Face Analogy
      Is it possible to extract a stable feature between the camouflage face and the normal one?
    • 5.
      Deformable Modeling of Soft Tissues
      Applications
      HCI: facial expression recognition – Essa and Pentland (1997)
      Age estimation: Kwon and Lobo (1999)
      Person Identification: Zhang et al. (2004), Pamudurthy et al. (2005), Manohar et al. (2007)
      Classes of approach
      Physical models
      Non-physical models
      Review and applications: Metaxas (1996), Gibson and Mirtich (1997)
    • 6.
      Physical Models
      Issues:
      Observed physical phenomena can be very complex
      Solving underlying partial differential equations (PDEs) requires substantial computational cost
      Solution strategy
      Find an adequate simplified model of given problem covering essential observations
      Apply efficient numerical techniques for solving the PDEs
      Our proposal
      Strain pattern extracted from non-rigid facial motion as a simplified and adequate way
      Modeling Young’s modulus of facial regions from observed motion
    • 7.
      Existing Work: Face Modeling and Biomechanics
      Highly accurate models: Terzopoulos and Waters (1993)
      Anatomical details of face: bones, musculature, and skin tissues
      Drawback: high computational cost
      Task driven reduced models: Essa and Pentland (1997)
      Finite element model to estimate visual muscle activations and to generate motion energy templates for expression analysis
      Drawback: automatic identification of action units that estimate the muscle activations
      Our approach
      Quantify soft tissue properties through their elasticity
      Effectively represent them by means of strain maps
      Model sub-regions of the face using their stiffness values
    • 8.
      Existing Work: Person Identification
      Chen et al. (2001) augmented appearance-based method with facial motion to overcome illumination problems
      Zhang et al. (2004) used strain pattern from 2 face images of closed and open jaw positions to reveal underlying muscular characteristics for recognition
      Pamudurthy et al. (2005) used motion image derived from feature displacements
      Our contribution
      Extension of Strain maps to videos where the computation is automated
      Substantiated with comprehensive system design and extensive experimental results
      Explore an expression invariant approach using material constants
    • 9.
      Unique Features of this Work
      Strain pattern, instead of image intensity, used as a classification feature
      Related to the biomechanical properties of facial tissues that are distinct for each individual
      Less sensitive to illumination differences (between registered and query sequences) and face camouflage
      Finite element modeling based method enforces regularization
      Mitigates issues related to automatic motion estimation
      Using material constants for matching presents a unique opportunity for an expression invariant face matching process
      No special imaging equipment is needed to capture facial deformation
    • 10.
      Theoretical Background
      Optical Flow: Reflects the changes in the image due to motion
      Strain: A measure to quantify the deformation undergone:
      Principal Component Analysis: Dimensionality reduction technique that identifies the salient and rich information hidden in raw data
    • 11.
      System Flow: Face Matching using Strain Pattern
      Input Video Sequence of Expression
      Geometric Normalization and Masking
      Correspondence between two subsequent frames
      Principal Component Analysis
      Optic Flow
      Training
      Coordinate point extraction
      Displacement vectors for frame-pairs across sequence
      Euclidean Subspace
      Link flow values from each frame-pair
      Testing
      Displacement vector for the complete sequence
      Distances in Projected Subspace
      Strain Computation Module
      Nearest neighbor classifier
      Strain Map of a Subject (Strain to Intensity)
      Intra- & Inter-Subject Variation, ROC Curves
    • 12.
      Strain Computation from Dense Motion Field: The Finite Difference Method (FDM)
      A linear strain tensor capable of describing small deformations is defined as:
      In 2D image coordinates, this becomes:
      Computing spatial derivatives (Central Difference Method)
      Computing strain magnitude from normal strains:
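The computation above is compact in code. A minimal numpy sketch, assuming the optic flow is given as per-pixel displacement arrays; the way the normal strains are combined into a magnitude is an assumption, since the slide's own formula did not survive the transcript:

```python
import numpy as np

def strain_from_flow(u, v):
    """Linear (infinitesimal) strain from a dense 2D displacement field.

    u, v: 2D arrays of horizontal and vertical displacement (optic flow).
    Spatial derivatives use central differences (np.gradient), matching the
    Central Difference Method named on the slide.
    """
    du_dy, du_dx = np.gradient(u)   # np.gradient returns per-axis derivatives (rows, cols)
    dv_dy, dv_dx = np.gradient(v)
    exx = du_dx                     # normal strain along x
    eyy = dv_dy                     # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)     # shear strain
    # Strain magnitude from the normal strains; a root-sum-square combination
    # is assumed here.
    magnitude = np.sqrt(exx ** 2 + eyy ** 2)
    return exx, eyy, exy, magnitude
```

For a pure horizontal stretch (u proportional to x, v = 0), the sketch recovers a constant normal strain, as expected.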
    • 13.
      Motion and Strain Images
      Video conditions: Normal Lighting, Low Lighting, Shadow Lighting, Camouflage Face
      Motion and Strain Images: Video Frame 12, Video Frame 15, Horizontal Motion, Vertical Motion (input to next step), Strain Magnitude Image
    • 14.
      Analysis of Strain as a Feature
      Discriminatory Criterion: Subject 1, Subject 2, Subject 3, Subject 4, Subject 5
      Stability Criterion: Normal Light, Low Light, Shadow Light, Camouflage Face
    • 15.
      Experimental Set-up
      A total of 60 subjects
      All videos (from Canon Optura 20) were profile views of the face, with opening the mouth as the expression
      Experiments for FDM-based Strain Computation:
      Results were obtained using the Principal Component Analysis algorithm with Mahalanobis distance for computing metric scores
    • 16.
      Within-subject and Between-subject Variation
      Receiver-Operating Characteristic Curve
      Test-1 (Normal vs. Shadow Lighting)
    • 17.
      Within-subject and Between-subject Variation
      Test-2 (Regular vs. Camouflage Faces)
    • 18.
      FDM-Based Method: Summary
      Presented strain pattern as a unique and stable feature
      Less vulnerable to illumination variations and face camouflage that often plague image analysis tasks
      FDM carried out on an image grid makes the computational strategy efficient
      Drawbacks
      Requires a dense motion field
      Restricted as a modeling platform
      Limitations with respect to the material type
      Doesn’t scale well for objects with irregular geometry
      Requires extensive computational resources for data storage and system solving
    • 19.
      Strain Computation from Sparse Motion Field: The Finite Element Method (FEM)
      State-of-the-art technique in physics-based modeling
      Used for finding approximate solutions of partial differential equations
      Approach is based on eliminating the differential equation completely
      Primary challenge is in creating a numerically stable equation that approximates the equation to be studied
      Relevance to our work
      Easy incorporation of material constants associated with facial tissues
      Sparse motion field would suffice
      We used the commercial software, ANSYS, for FEM implementation
    • 20.
      Finite Element Face Model
      Discretization
      Geometry:
      Linear elastic approximation of soft tissue behavior (Koch et al. 1996)
      Equation of motion (Newton’s second law)
      Strain-displacement equation
      Constitutive equations (Hooke’s law)
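Written out, the three governing relations named above take their standard linear elasticity form (a reconstruction, since the slide's own equations are not in the transcript):

```latex
% Equation of motion (Newton's second law), with mass, damping, and stiffness:
M \ddot{u} + D \dot{u} + K u = f
% Strain-displacement equation (linear, small deformations):
\varepsilon = \tfrac{1}{2}\left( \nabla u + \nabla u^{\mathsf{T}} \right)
% Constitutive equation (Hooke's law for a homogeneous, isotropic material):
\sigma = \lambda \, \mathrm{tr}(\varepsilon)\, I + 2 \mu \varepsilon
% The Lam\'e constants follow from Young's modulus E and Poisson's ratio \nu:
\lambda = \frac{E \nu}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)}
```

The last line is why learning Young's modulus (with Poisson's ratio fixed at 0.4, next slide) fully determines the elastic behavior of each region.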
    • 21.
      Finite Element Face Model
      Each homogeneous and isotropic face region characterized by
      Compressibility – Poisson’s Ratio
      Stiffness – Young’s Modulus
      Poisson’s Ratio of 0.4 (Gladilin 2002)
      Learning Young’s Modulus
      Concept of relative stiffness
      Forehead – reference material; nose – highly rigid; eyes – varying stiffness; relative stiffness same for left and right cheeks
      Optimization function:
      Used 1/4th of motion field to drive the model; remaining 3/4th for validation
      Done once per subject on normal lighting videos
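A sketch of the learning loop's ingredients, under the assumption that the optimization function (not reproduced in the transcript) penalizes squared displacement error at the held-out validation nodes:

```python
import numpy as np

def fitness(pred_disp, obs_disp):
    """Fitness of a candidate Young's-modulus assignment: disagreement between
    the FE model's predicted displacements and the observed optic flow at the
    validation nodes. A mean squared displacement error is assumed here."""
    pred_disp, obs_disp = np.asarray(pred_disp), np.asarray(obs_disp)
    return float(np.mean(np.sum((pred_disp - obs_disp) ** 2, axis=1)))

def split_motion_field(n_vectors, drive_fraction=0.25, seed=0):
    """Hold out 1/4 of the motion vectors to drive the FE model; keep the
    remaining 3/4 for validation. A random split is assumed here."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_vectors)
    k = int(n_vectors * drive_fraction)
    return idx[:k], idx[k:]
```

A perfect candidate scores 0; worse candidates score higher, so the search minimizes this value.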
    • 22.
      Motion and Strain Images
      Video Conditions:
      Motion Vectors:
      Strain Images:
    • 23.
      Experimental Set-up
      A total of 20 subjects
      All videos were frontal views of the face, with opening the mouth as the expression
      Experiments for FEM-based Strain Computation:
      Results were obtained using the Principal Component Analysis algorithm with Mahalanobis distance for computing metric scores
    • 24.
      Non-Camouflage Experiments
      Within-subject and Between-subject Variation (Test-1)
      Within-subject and Between-subject Variation (Test-2)
    • 25.
      Camouflage Experiments
      Within-subject and Between-subject Variation (Test-3)
    • 26.
      FEM-Based Method: Summary
      Presented a computational strategy that just needs 1/25th of the motion vectors
      The FE model enforces regularization
      Mitigates issues related to automatic motion estimation
      The model includes the material constants associated with facial tissues
      Presented a first method to learn the material constants at a coarse level sufficient for accurate strain computation
      Drawbacks
      Uses just one expression to estimate Young’s modulus
      Generic face model
      Primitive search technique
      Coarse sub-divisions
    • 27.
      Modeling Young’s Modulus from Multiple Facial Expressions
      Attempt a more accurate estimation of Young’s modulus by using motion from multiple expressions
      Scalable matching process
      Refine search technique for better estimation of values
      Finer sub-divisions in the face model
      Subject-specific face model conforming to individual’s facial feature locations
    • 28.
      System Flow: Face Matching using Material Constants
      Input Video Sequence of Expression
      ElasticFace: FE Face Model with learned material constants
      Correspondence between two subsequent frames
      Optic Flow
      Repeat for every expression in the training set
      Euclidean Space of Young’s Modulus of Face Patches
      Displacement vectors for frame-pairs across sequence
      Link flow values from each frame-pair
      Testing
      Displacement vector for the complete sequence
      Distances in ElasticFace Space
      Young’s Modulus Learning Module
      Score-level Fusion techniques
      Young’s Modulus Distribution of a Subject’s face
      Intra- & Inter-Subject Variation, ROC Curves
    • 29.
      Modeling Algorithm
      Step 1: Concept of relative stiffness; forehead – reference material; nose – highly rigid; eyes – varying stiffness
      Step 2: Optimization function:
      Step 3: Use 1/4th of motion field to drive the model; remaining 3/4th for computing the fitness function value
      Step 4: Run a search algorithm to explore this solution space and use the converged values
      Repeat Steps 1-4 for every sequence
      Match based on parameter values along appropriate dimensions
    • 30.
      Facial Feature Detection
      Motivation
      System automation
      Reducing computational cost of optic flow by just looking at the region of interest
      Building an individual-specific face model
      Viola-Jones Object Detector
      Rectangular Haar-like binary feature wavelets
      Cascade of weak classifiers
      We used the OpenCV implementation of the Haar Object detection
      Feature detection results on the BU dataset
    • 31.
      Face Model
      Finer sub-divisions to attempt defining a richer FE model
      Specific to every subject based on the results from the feature detection step
    • 32.
      Search Algorithm
      Plot of the fitness function
      Objective function is not smooth
      Multiple local optima
    • 33.
      Gradient-based vs. Random Algorithms
      Gradient based approaches
      Often progress slowly when the number of parameters is large
      Make no allowance for multiple optima
      Random algorithms then seem to be a reasonable choice
      Course of the algorithm is decided by random numbers
      Genetic Algorithms are a particular class of evolutionary algorithms (EA)
    • 34.
      Genetic Coding
      Young’s modulus of regions as the chromosome in GA
      One-to-one mapping
      Each chromosome in the pool represents a possible Young’s modulus distribution
    • 35.
      GA Parameter Settings
      From the findings in literature for a similar domain, we use the following settings for the GA
      We used a Gaussian mutation operator with mean = 0 and standard deviation = 1
    • 36.
      Training
      Out of the 6 expressions, use 5 to estimate the Young’s modulus values
      At least 40% of the elements in a region should deform in order to be considered for optimization
      A note on sad, fear, and angry expressions
      Used the converged values of Young’s modulus for regions where there was substantial deformation
      Use the mean of converged values from multiple expressions as the final value for the region
    • 37.
      Multi-Feature Classification Systems: Combination Rules
      Treat the Young’s modulus from each patch as a separate feature
      Numerous combination techniques: sensor-level, feature-level, score-level, and decision-level
      Popular score-level fusion techniques
      Product Rule
      Sum Rule
      Max Rule
      We investigate both Sum and the Max Rule
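The investigated rules, sketched for per-patch similarity scores (treating each patch's Young's modulus as a separate feature, as above; the assumption that scores are similarity values normalized to a common range is mine, since the slide does not give a normalization scheme):

```python
import numpy as np

def fuse_scores(patch_scores, rule="sum"):
    """Score-level fusion across face patches.

    patch_scores: array of shape (n_patches, n_gallery) -- one match score per
    face patch against each gallery subject, assumed normalized to [0, 1].
    """
    patch_scores = np.asarray(patch_scores, dtype=float)
    if rule == "sum":        # sum rule: add scores over patches
        return patch_scores.sum(axis=0)
    if rule == "max":        # max rule: keep the best single-patch score
        return patch_scores.max(axis=0)
    if rule == "product":    # product rule: multiply scores over patches
        return patch_scores.prod(axis=0)
    raise ValueError(f"unknown rule: {rule}")
```

The identified subject is then the gallery entry with the highest fused score (`fused.argmax()`).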
    • 38.
      Binghamton University 4D Facial Expression Dataset: BU-4DFE
      High-resolution (1040 x 1329) 3D dynamic facial expression database
      Objective – analyze facial behavior in dynamic 3D space
      Video rate – 25 frames per second
      Six prototypical expressions: anger, disgust, happiness, fear, sadness, and surprise
      101 subjects (58 female and 43 male) with wide ethnic/racial variety
    • 39.
      Experiments: Non-rigid Motion Tracking
      Given a subset of motion vectors, we can estimate displacements in other regions using the equation of motion
      Evaluation
      Compare against Black and Anandan optic flow output
      Generate fairly identical dense motion field from a sparse set of motion vectors
      Compare against a simple bi-cubic interpolation method
      Emphasize the value added by modeling of material constants in the deformation domain
    • 40.
      Experiments: Non-rigid Motion Tracking
      Snapshot of the table comparing the two methods
      Observations:
      Average error from the model is within 7% and the worst case error is within 11%
      Average error from the model is always less than the interpolation technique
    • 41.
      Experiments: Expression Invariant Matching
      A total of 40 subjects (20 male and 20 female)
      Performed a leave-one-expression-out experiment where we train on 5 expressions and test on the 6th expression; Repeat tests by changing the test expression
      Investigated both Sum and Max rule
      Metric computation only along relevant regions
      40% threshold as earlier
    • 42.
      Experiments: Expression Invariant Matching
    • 43.
      Experiments: Expression Invariant Matching
    • 44.
      Experiments: Expression Invariant Matching
      A first step towards expression invariant matching of faces
      Due to lack of deformation, performance for some query expressions (sadness, fear, anger) is not good
      The disparity in performance aligns with findings in literature
      Max-rule outperforms sum-rule in almost all the tests
    • 45.
      Modeling Young’s modulus: Summary
      Presented a method for modeling material constants (Young’s modulus) in sub-regions of the face
      Efficient way of describing underlying material properties
      Deformable modeling techniques are gauged by their simplicity and adequacy
      First and novel attempt for expression invariant matching of face templates
    • 46.
      Conclusions
      Used strain pattern as an effective and efficient way of characterizing the material properties of facial soft tissues
      Impact on applications such as facial expression recognition, age estimation, and person identification from video
      Discussed two methods for computing strain pattern
      FDM-based: efficient when carried out on image grid;
      FEM-based: better characterization of facial tissues by incorporating relative material properties; works well with sparse motion field
      Experiments emphasize that strain pattern is a discriminative and stable feature
      Value further justified by performance under shadow lighting and camouflage
    • 47.
      Conclusions
      Developed a method for modeling material constants from the motion observed in multiple facial expressions
      Impact on deformable modeling techniques
      Presented a novel expression invariant matching strategy
      Impact on biometrics
      Due to limited population size, this study so far can only provide a baseline evaluation on performance of the presented methods
    • 48.
      Conclusions
      Intellectual Merit
      Facial strain pattern adds a new dimension in characterizing the face
      Important auxiliary information that can be exploited in multimodal techniques
      Fosters a newer way to capture facial dynamics from video
      Presents a very first attempt on matching faces with different expressions
      Presents a simple and adequate way of modeling deformable objects (implications on real-time methods)
      Broader Impact
      Addresses the long-standing problem of motion analysis of elastic objects
      Cross-disciplinary nature
      Applying image analysis algorithms for material property characterization of facial soft tissues and its applications
      Utilizes video processing to enhance our abilities to make unique discoveries through facial dynamics in video
    • 49.
      Future Directions
      Fusion with intensity information in a recognition framework
      Further justify the orthogonal information provided by strain maps
      Capture the dynamics inherent in a facial expression
      Snapshots of the variation of strain pattern
      Use manifolds of strain patterns in image analysis tasks
    • 50.
      Contribution to Literature
      Facial Motion Analysis:
      V. Manohar, Y. Zhang, D. Goldgof, and S. Sarkar, “Facial Strain Pattern as a Soft Forensic Evidence”, In the Eighth IEEE Workshop on Applications of Computer Vision, Page: 42, 2007
      V. Manohar, Y. Zhang, D. Goldgof, and S. Sarkar, “Video-based Person Identification using Facial Strain Pattern”, To be submitted to the IEEE Transactions on System, Man, and Cybernetics – Part B
      V. Manohar, M. Shreve, D. Goldgof, and S. Sarkar, “Finite Element Modeling of Facial Deformation in Videos for Computing Strain Pattern”, In the International Conference on Pattern Recognition, ISBN 978-1-4244-2174-9, Pages: 1-4
      M. Shreve, S. Godavarthy, V. Manohar, D. Goldgof, and S. Sarkar, “Towards Macro- and Micro-Expression Spotting in Video using Strain Patterns”, In the IEEE Workshop on Applications of Computer Vision, 2009
      Y. Zhang, J.R. Sullins, D. Goldgof, and V. Manohar, “Computing Strain Elastograms of Skin Using an Optical Flow Based Method”, In the Fifth International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2006
      Medical Imaging:
      Y. Qiu, V. Manohar, V. Korzhova, X. Sun, and D. Goldgof, “Two-View Mammography Registration using 3D Finite Element Model of the Breast”, Submitted to Computerized Medical Imaging and Graphics
      Y. Qiu, X. Sun, V. Manohar, and D. Goldgof, "Towards Registration of Temporal Mammograms by Finite Element Simulation of MR Breast Volumes", In the SPIE Medical Imaging: Visualization, Image-guided Procedures, and Modeling, Vol. 6918, 6918-86, 2008
      Y. Zhang, R.W. Kramer, D. Goldgof, and V. Manohar, "Development of a Robust Algorithm for Imaging Complex Tissue Elasticity", In the Fifth International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2006
    • 51.
      Contribution to Literature
      Performance Evaluation:
      R. Kasturi, D. Goldgof, P. Soundararajan, V. Manohar, J. Garofolo, R. Bowers, M. Boonstra, V. Korzhova, and J. Zhang, "Framework for Performance Evaluation of Face, Text, and Vehicle Detection and Tracking in Video: Data, Metrics, and Protocol", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 2, Pages: 319-336, Feb 2009
      V. Manohar, P. Soundararajan, H. Raju, D. Goldgof, R. Kasturi, and J. Garofolo, "Performance Evaluation of Object Detection and Tracking in Video", In the Seventh Asian Conference on Computer Vision, LNCS 3852, pp: 151-161, 2006
      V. Manohar, P. Soundararajan, M. Boonstra, H. Raju, D. Goldgof, R. Kasturi, and J. Garofolo, "Performance Evaluation of Text Detection and Tracking in Video", In the Seventh IAPR Workshop on Document Analysis Systems, LNCS 3872, pp: 576-587, 2006
      V. Manohar, M. Boonstra, V. Korzhova, P. Soundararajan, D. Goldgof, R. Kasturi, S. Prasad, H. Raju, R. Bowers, and J. Garofolo, "PETS vs. VACE Evaluation Programs: A Comparative Study", In the Ninth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance(PETS), pp: 1-6, In Conjunction with CVPR, 2006
      V. Manohar, P. Soundararajan, V. Korzhova, M. Boonstra, D. Goldgof, R. Kasturi, R. Bowers, and J. Garofolo, "A Baseline Algorithm for Face Detection and Tracking in Video", In the SPIE Europe Symposium on Security and Defence: Optics and Photonics for Counter-Terrorism and Crime-Fighting, Vol. 6741, 6741-09, 2007
    • 52.
      QUESTIONS?
    • 53.
      Motion Estimation: Optical Flow Method
      • Reflects the changes in the image due to motion
      • Computation is based on the following assumptions:
      • observed brightness of any object point is constant over time
      • nearby points in the image plane move in a similar manner
      • Minimization problem: (brightness constraint) (smoothness constraint)
      • Robust estimation framework (Black and Anandan, 1996)
      • Recast the least squares formulation with a different error-norm function instead of quadratic
      • Coarse-to-fine strategy
      • Construct a pyramid of spatially filtered and sub-sampled images
      • Compute flow values at lowest resolution and project to next level in the pyramid
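The minimization problem referenced above combines the brightness and smoothness constraints in the classical optical flow energy; the robust framework swaps the quadratic for an error norm. A reconstruction, since the slide's equations are not in the transcript:

```latex
% Quadratic (Horn-Schunck style) formulation: data term + smoothness term
E(u, v) = \iint \left( I_x u + I_y v + I_t \right)^2
        + \lambda \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \, dx \, dy
% Robust estimation (Black and Anandan, 1996): replace the quadratic with a
% robust error norm \rho (e.g., the Lorentzian), applied to both terms:
E(u, v) = \sum_{\mathbf{x}} \rho\!\left( I_x u + I_y v + I_t, \sigma_d \right)
        + \lambda \sum_{\mathbf{x}} \sum_{\mathbf{x}' \in \mathcal{N}(\mathbf{x})}
          \left[ \rho(u_{\mathbf{x}} - u_{\mathbf{x}'}, \sigma_s) + \rho(v_{\mathbf{x}} - v_{\mathbf{x}'}, \sigma_s) \right]
```

The robust norm down-weights outliers at motion boundaries, which is the motion-discontinuity concern noted in the slide notes.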
    • 63.
      Principal Component Analysis
      Dimensionality Reduction Technique
      Basic idea
      Features in subspace provide more salient and richer information than the raw images themselves
      Representation
      Strain images represented as vector of weights of low dimensionality (feature vector)
      Training
      Learning these weights using a set of training images
      Testing
      Calculate distances to each of the training patterns in the projected subspace
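The training and testing steps above can be sketched as follows; the Mahalanobis variant used (inverse-eigenvalue weighting in the projected subspace) is the standard PCA form and is an assumption here:

```python
import numpy as np

def pca_train(X, k):
    """Learn a k-dimensional subspace from training strain images.

    X: (n_samples, n_pixels) matrix of vectorized strain images.
    Returns the mean image, the top-k principal components, and eigenvalues.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the principal components directly.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = (S ** 2) / (len(X) - 1)
    return mean, Vt[:k], eigvals[:k]

def project(x, mean, components):
    """Represent a strain image as a low-dimensional weight vector."""
    return components @ (x - mean)

def mahalanobis(w1, w2, eigvals):
    """Distance between weight vectors, each dimension scaled by 1/eigenvalue."""
    d = w1 - w2
    return float(np.sqrt(np.sum(d * d / eigvals)))
```

At test time, a query strain image is projected and matched to the nearest training pattern under this distance (the nearest neighbor classifier of the system-flow slide).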
    • 64.
      Steps involved in FEM
      Discretization: the problem domain is discretized into a collection of simple shapes, or elements; the continuous equations are discretized as finite differences and summations (instead of integrals and derivatives)
      Assembly: The element equations for each element in the FEM mesh are assembled into a set of global equations that model the properties of the entire system
      Application of Boundary Conditions: They reflect the known values for certain primary unknowns
      Solve for Primary Unknowns: Modified global equations are solved for the primary unknowns at the nodes; interpolate for values between nodes
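The four steps can be illustrated on the simplest possible case, a 1D elastic bar fixed at one end (an analogue of the 2D face model, not the model itself; all values are made up):

```python
import numpy as np

def solve_bar(n_elem, length, E, A, tip_force):
    """The four FEM steps for a 1D elastic bar fixed at one end and pulled
    at the other with a point force. E: Young's modulus, A: cross-section."""
    n_nodes = n_elem + 1
    h = length / n_elem
    # Step 1 (discretize): identical 2-node elements with local stiffness.
    k_local = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    # Step 2 (assemble): sum element matrices into the global system K u = f.
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += k_local
    f = np.zeros(n_nodes)
    f[-1] = tip_force
    # Step 3 (boundary conditions): node 0 is fixed -> drop its row/column.
    K_red, f_red = K[1:, 1:], f[1:]
    # Step 4 (solve for primary unknowns): nodal displacements; values between
    # nodes would be interpolated with the element shape functions.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K_red, f_red)
    return u
```

For this load case the analytical displacement is u(x) = Fx/(EA), and linear elements reproduce it exactly at the nodes.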
      [Diagram: object → discretize → nodal mesh + local model → assemble into global model → solve → interpolate values between nodes]
    • 65.
      Genetic Algorithm
      [Start] Generate random population of n chromosomes
      [Fitness] Evaluate the fitness function value of each chromosome in population
      [New Population] Create new population by repeating following steps:
      [Selection] Select 2 parent chromosomes from population
      [Crossover] Crossover parents with some probability to form new offspring
      [Mutation] Mutate offspring at each position with some probability
      [Accepting] Place the new offspring in population
      [Replace] Use the new generated population for a further run of the algorithm
      [Test] If the end condition is satisfied, stop, and return the best solution in current population
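The steps above, as a minimal real-coded GA sketch: the chromosome encoding follows the genetic-coding slide (one Young's modulus value per face region) and the Gaussian mutation (mean 0, std 1) follows the parameter-settings slide; the selection scheme, crossover operator, and all hyperparameter defaults here are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

def genetic_search(fitness, n_params, pop_size=20, n_gen=150,
                   crossover_p=0.7, mutation_p=0.1, seed=0):
    """Minimal GA over Young's-modulus chromosomes (lower fitness = better)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.1, 10.0, size=(pop_size, n_params))      # [Start]
    for _ in range(n_gen):
        fit = np.array([fitness(c) for c in pop])                # [Fitness]
        new_pop = []
        while len(new_pop) < pop_size:                           # [New Population]
            # [Selection]: binary tournament picks each parent.
            i, j, k, l = rng.integers(pop_size, size=4)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            p2 = pop[k] if fit[k] < fit[l] else pop[l]
            child = p1.copy()
            if n_params > 1 and rng.random() < crossover_p:      # [Crossover]
                cut = rng.integers(1, n_params)
                child[cut:] = p2[cut:]
            mask = rng.random(n_params) < mutation_p             # [Mutation]
            child[mask] += rng.normal(0.0, 1.0, int(mask.sum()))
            new_pop.append(child)                                # [Accepting]
        pop = np.array(new_pop)                                  # [Replace]
    fit = np.array([fitness(c) for c in pop])                    # [Test]
    return pop[int(fit.argmin())]
```

In the dissertation's setting, `fitness` would run the FE model with the candidate Young's modulus values and score the predicted displacements against the held-out motion vectors.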
    • 66.
      Road Map of Research
      2004
      2005
      2006
      2007
      2008
      2009
      Performance Evaluation of Object Detection & Tracking Systems
      FDM-Based Method: Profile Faces
      Range Images/2D Images/Video: Performance of Strain
      FEM-Based Method: Frontal Faces
      Modeling Material Constants: Expression Invariant Matching
      Registration of Temporal Mammograms: Finite Element Modeling
      Defense!