Human Action Recognition Based on Spatio-temporal Features


  1. Human Action Recognition Based on Spatio-temporal Features. Nikhil Sawant and Dr. K.K. Biswas, Dept. of CSE, Indian Institute of Technology, Delhi. Third International Conference on Pattern Recognition and Machine Intelligence (PReMi'09).
  2. Human activity recognition. (Figure: related tasks, Pose Estimation, Event Detection, Action Classification, Tracking and Activity Recognition, arranged along axes of higher resolution and longer time scale. Courtesy: Y. Ke, Fathi and Mori, Bobick and Davis, Schuldt et al., Leibe et al., Vaswani et al.)
  3. Uses of action recognition
     - Video surveillance
     - Interactive environments
     - Video classification and indexing
     - Movie search
     - Assisted care
     - Sports annotation
  4. Broad outline of our technique
     - Input: video with human actions
     - Motion analysis using the Lucas-Kanade technique yields motion features
     - Shape analysis using Viola-Jones features yields shape features
     - Motion and shape features are combined over a finite time interval into spatio-temporal features
     - The features are learned through AdaBoost to discriminate Action Class 1, Action Class 2, ..., Action Class n
  5. Target localization
     - The possible search space is the x-y-t cube
     - The action needs to be localized in space and time
     - Target localization helps reduce the search space
     - Background subtraction is applied and the ROI is marked (see the sketch after this list)
     (Figures: original video, silhouette, original video with ROI marked)
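A minimal sketch of this localization step in Python with OpenCV (version 4 API). The MOG2 background subtractor and the largest-contour heuristic for marking the ROI are illustrative assumptions; the slides do not name a particular subtraction method.

```python
import cv2
import numpy as np

def localize_target(video_path):
    """Yield (frame, roi) pairs, where roi is an (x, y, w, h) bounding box
    around the largest foreground blob, or None if nothing is moving."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground silhouette
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        roi = None
        if contours:
            largest = max(contours, key=cv2.contourArea)
            roi = cv2.boundingRect(largest)  # ROI marked as (x, y, w, h)
        yield frame, roi
    cap.release()
```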
  6. Motion estimation
     - Optical flow is used for motion estimation
     - Optical flow is the pattern of relative motion between object/feature points and the viewer/camera
     - We use the Lucas-Kanade two-frame differential method, which yields comparatively robust and dense optical flow (see the sketch below)
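A compact NumPy/SciPy sketch of the two-frame Lucas-Kanade method: for every pixel, the flow (u, v) is the least-squares solution of the brightness-constancy constraints accumulated over a small window. The window size and the singularity guard are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lucas_kanade_flow(prev, curr, win=7):
    """Dense Lucas-Kanade flow between two grayscale float frames."""
    Iy, Ix = np.gradient(prev)          # spatial gradients
    It = curr - prev                    # temporal gradient
    # Accumulate the normal-equation terms over a (win x win) window.
    Sxx = uniform_filter(Ix * Ix, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Sxt = uniform_filter(Ix * It, win)
    Syt = uniform_filter(Iy * It, win)
    det = Sxx * Syy - Sxy ** 2
    det[np.abs(det) < 1e-6] = np.inf    # flow stays zero where the system is singular
    u = (-Syy * Sxt + Sxy * Syt) / det  # horizontal flow
    v = (Sxy * Sxt - Sxx * Syt) / det   # vertical flow
    return u, v
```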
  7. Noise reduction
     - Noise is removed by averaging
     - Optical flow vectors with magnitude greater than C * O_mean are ignored, where C is a constant in [1.5, 2] and O_mean is the mean optical flow magnitude within the ROI (see the sketch below)
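A short sketch of the thresholding rule from the slide, with C fixed to a value inside the stated [1.5, 2] range:

```python
import numpy as np

def suppress_flow_outliers(u, v, roi_mask, C=1.75):
    """Zero out flow vectors whose magnitude exceeds C * O_mean, where
    O_mean is the mean flow magnitude inside the ROI."""
    mag = np.hypot(u, v)
    o_mean = mag[roi_mask].mean() if roi_mask.any() else 0.0
    keep = mag <= C * o_mean
    return np.where(keep, u, 0.0), np.where(keep, v, 0.0)
```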
  8. Organizing optical flows
     - Optical flow vectors are concentrated near the motion
     - The flow needs to be represented in a meaningful way
     - A fixed-size grid is laid over the ROI
  9. Organizing optical flows (simple averaging)
     - The magnitude and direction of the optical flow within each box b_ij are averaged and assigned to its centre c_ij
     - All optical flow vectors carry the same weight (see the sketch below)
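A sketch of the simple (unweighted) averaging: a fixed grid is laid over the ROI and the mean flow of each cell is assigned to that cell. The grid size is an illustrative parameter.

```python
import numpy as np

def grid_average_flow(u, v, roi, grid=(4, 4)):
    """Return a (rows, cols, 2) array holding the mean (u, v) of each grid
    cell laid over the ROI; each average represents the cell centre c_ij."""
    x, y, w, h = roi
    rows, cols = grid
    cell = np.zeros((rows, cols, 2))
    for i in range(rows):
        for j in range(cols):
            ys = slice(y + i * h // rows, y + (i + 1) * h // rows)
            xs = slice(x + j * w // cols, x + (j + 1) * w // cols)
            patch_u, patch_v = u[ys, xs], v[ys, xs]
            if patch_u.size:                      # skip degenerate empty cells
                cell[i, j] = patch_u.mean(), patch_v.mean()
    return cell
```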
  10. Organizing optical flows (weighted averaging)
     - Each optical flow vector is given a weight
     - The greater the distance from the centre c_ij, the smaller the weight, and vice versa (see the sketch below)
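For the weighted variant, one possible weighting is sketched below: the slides only say the weight decreases with distance from the cell centre c_ij, so the inverse-distance form here is an assumption.

```python
import numpy as np

def weighted_cell_average(u_cell, v_cell):
    """Distance-weighted mean flow of one grid cell: vectors far from the
    cell centre contribute less than vectors near it."""
    h, w = u_cell.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.hypot(yy - cy, xx - cx)
    weight = 1.0 / (1.0 + dist)   # larger distance -> smaller weight
    weight /= weight.sum()
    return (u_cell * weight).sum(), (v_cell * weight).sum()
```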
  11. Organizing optical flows
     - The optical flow vectors are now arranged in a structured manner
     - Structured optical flow is easier to analyze
  12. Shape descriptor
     - Shape gives information about the action
     - Viola-Jones box features are used to obtain shape features
     - The shape information is combined with the motion information (see the sketch below)
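Viola-Jones box features are differences of pixel sums over adjacent rectangles, computed cheaply from an integral image. The sketch below shows the basic two-rectangle form; the exact feature set used in the paper is not given on the slides.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row and column."""
    return np.pad(img.astype(float), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, x, y, w, h):
    """Sum of pixel values in the box at (x, y) of size (w, h)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """Two-rectangle feature: left half-box sum minus right half-box sum."""
    half = w // 2
    return box_sum(ii, x, y, half, h) - box_sum(ii, x + half, y, half, h)
```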
  13. Spatio-temporal descriptor (figure: the descriptor's temporal window, labelled with the parameters TLEN and TSPAN)
  14. Spatio-temporal descriptor
     - Shape and motion features are combined over a span of time to form spatio-temporal features (see the sketch below)
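A sketch of how per-frame shape and motion features might be stacked into one spatio-temporal vector. Interpreting TLEN as the number of sampled frames and TSPAN as the spacing between them is an assumption about the figure on the previous slide.

```python
import numpy as np

def spatio_temporal_descriptor(frame_features, t, tlen, tspan):
    """Concatenate the (shape + motion) feature vectors of `tlen` frames,
    sampled every `tspan` frames starting at time t."""
    window = [frame_features[t + k * tspan] for k in range(tlen)]
    return np.concatenate(window)
```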
  15. Learning with AdaBoost
     - AdaBoost is a state-of-the-art learning algorithm
     - Linear decision stumps are used as the weak hypotheses
     - The weak hypotheses combine to form a strong hypothesis
     - The strong hypothesis is a weighted sum of the weak hypotheses
     - Training and testing data are kept mutually exclusive (see the sketch below)
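A minimal training sketch with scikit-learn, whose AdaBoostClassifier uses depth-1 decision trees (decision stumps) as its default weak learner; the number of boosting rounds and the train/test split are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def train_action_classifier(X, y):
    """Fit a boosted stump classifier on spatio-temporal feature vectors X
    with action labels y, keeping training and test data mutually exclusive."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100)   # weak hypotheses: decision stumps
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```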
  16. Results
  17. Results (Weizmann dataset)
  18. Thank You
