Action Recognition (Thesis presentation)

  1. Human action recognition using spatio-temporal features
     Nikhil Sawant (2007MCS2899)
     Guide: Dr. K.K. Biswas
  2. Human activity recognition
     (Figure: the problem space from higher resolution to longer time scale, spanning Pose Estimation, Tracking, Action Recognition, Action Classification, and Activity Recognition. Courtesy: Y. Ke, Fathi and Mori, Bobick and Davis, Schuldt et al., Leibe et al., Vaswani et al.)
  3. Uses of action recognition
     - Video surveillance
     - Interactive environments
     - Video classification & indexing
     - Movie search
     - Assisted care
     - Sports annotation
  4. Goals
     - Action recognition against a stable background
     - Action classification
     - Event detection
     - Scale-invariant action recognition
     - Resistance to changes in view up to a certain degree

  5. Goals (continued)
     - Action recognition in a cluttered background
     - Action detection invariant to speed
  6. Existing approaches
     - Tracking interest points
     - Flow-based approaches
     - Shape-based approaches
  7. Tracking interest points
     - Moving light displays (MLDs), Johansson, 1973
       - Not feasible as additional constraints are added
     - Silhouette and geodesic distance, P. Correa
       - Five crucial points are tracked (head, two hands, two feet); they mostly lie at local maxima on the plot of geodesic distance
     (Images courtesy: P. Correa)
  8. Tracking interest points (limitations)
     - It is difficult to track all the crucial points all the time
     - Occlusion creates problems in tracking
     - Complex actions involving occlusion of body parts are hard to track
     - Results depend on the quality of the silhouette
  9. Flow-based approaches
     - Action recognition is done using the flow generated by motion
       - Optical flows
       - Spatio-temporal features
       - Spatio-temporal regularity-based features
  10. Shape-based approaches
     - Blank et al. showed that an action can be described as a space-time shape
       - Poisson equation for features
       - Local space-time saliency
       - Action dynamics
       - Shape structure and orientation
     (Images courtesy: M. Blank)
  11. Our approach
     - Flow-based features + shape-based features
     - Spatio-temporal features
     - Viola-Jones type rectangular features
     - AdaBoost
     - Steps:
       - Target localization via background subtraction
       - Local oriented histogram
       - Formation of the descriptor
       - Learning with AdaBoost
  12. Optical flow and motion features
  13. Target localization
     - The possible search space is the whole x-y-t cube, so the action must be localized in space and time
     - Target localization reduces the search space
     - Background subtraction produces a silhouette, from which the ROI is marked (see the sketch below)
     (Figures: original video, silhouette, original video with ROI marked)
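
The slides do not spell out the exact background-subtraction method, so the following is a minimal sketch assuming a static reference background frame is available; the filenames and the threshold value are hypothetical:

```python
# Target-localization sketch: subtract a static background frame, threshold
# to get a silhouette, and mark the bounding box of the largest blob as ROI.
import cv2
import numpy as np

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(frame, background)
_, silhouette = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # 25: assumed threshold
silhouette = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = (x, y, w, h)  # search space reduced from the full frame to this box
```
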
  14. Motion estimation
     - Optical flow is used for motion estimation
     - Optical flow is the pattern of relative motion between object/object feature points and the viewer/camera
     - It underlies several methods: motion-compensation encoding, object segmentation, etc.
     - We use the Lucas-Kanade two-frame differential method (OpenCV implementation)
  15. Noise removal
     - Optical flow estimates are noisy, so noise is removed by averaging: flows with magnitude > C * O_mean are ignored, where C is a constant in [1.5, 2] and O_mean is the mean flow magnitude within the ROI (see the sketch below)
     (Figures: noisy optical flows; after noise removal)
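
A sketch of the motion-estimation and noise-removal steps together, using OpenCV's pyramidal Lucas-Kanade (the slide confirms the OpenCV implementation); the corner-based point selection and C = 1.5 are assumptions within the stated [1.5, 2] range:

```python
# Lucas-Kanade flow inside the ROI, then the slide-15 filter: drop flows
# whose magnitude exceeds C * O_mean, the mean flow magnitude in the ROI.
import cv2
import numpy as np

def roi_flows(prev_gray, next_gray, roi, C=1.5):
    x, y, w, h = roi
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y+h, x:x+w], 200, 0.01, 5)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    pts = pts.reshape(-1, 2) + np.array([x, y], np.float32)   # full-frame coords
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts.reshape(-1, 1, 2), None)
    ok = status.ravel() == 1
    pts, flows = pts[ok], nxt.reshape(-1, 2)[ok] - pts[ok]
    if len(flows) == 0:
        return pts, flows
    mag = np.linalg.norm(flows, axis=1)
    keep = mag <= C * mag.mean()                              # noise removal by averaging
    return pts[keep], flows[keep]
```
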
  16. Organizing optical flow
     - Local oriented histogram
     - Weighted averaging
  17. Organizing optical flow (local oriented histogram)
     - We fix an X_DIV x Y_DIV grid over the ROI
     - A flow O_n(u, v) is assigned to bin b_ij if x_i < u < x_{i+1} and y_j < v < y_{j+1}
     - The bin's effective flow is the average of its members, for all i < X_DIV and j < Y_DIV:
       O_{b_ij} = Σ O_n(u, v) / Σ 1, over all (u, v) with x_i < u < x_{i+1} and y_j < v < y_{j+1}
  18. Organizing optical flow (local oriented histogram)
     - The membership of an optical flow should be inversely proportional to its distance from the bin centre
     (Figure: flows O_1 and O_2 at distances d_1 and d_2 from the centre C(0,0) contribute to the effective flow O_e)
  19. Organizing optical flow (weighted averaging)
     - O_j = (O_1, O_2, ..., O_m), such that for all i ∈ {1, ..., N}
     (The weighting formula itself appeared as an image on the slide)
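
The weighting formula on slide 19 was an image, so the sketch below assumes simple inverse-distance weights, matching slide 18's requirement that membership fall off with distance from the cell centre; the 4x4 grid default is also an assumption:

```python
# Effective flow per grid cell: bin flows into an X_DIV x Y_DIV grid over
# the ROI and take a weighted average, weight = 1 / distance-to-centre.
import numpy as np

def effective_flows(pts, flows, roi, XDIV=4, YDIV=4, eps=1e-6):
    x, y, w, h = roi
    O = np.zeros((YDIV, XDIV, 2))      # weighted flow sums
    W = np.zeros((YDIV, XDIV))         # weight sums
    for (u, v), f in zip(pts, flows):
        i = min(int((u - x) * XDIV / w), XDIV - 1)    # grid column of (u, v)
        j = min(int((v - y) * YDIV / h), YDIV - 1)    # grid row of (u, v)
        cx, cy = x + (i + 0.5) * w / XDIV, y + (j + 0.5) * h / YDIV
        wgt = 1.0 / (np.hypot(u - cx, v - cy) + eps)  # inverse-distance weight
        O[j, i] += wgt * np.asarray(f)
        W[j, i] += wgt
    return O / np.maximum(W, eps)[..., None]          # O_e per cell
```
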
  20. Organizing optical flows
  21. Formation of the motion descriptor
     - Each optical flow is represented in x-y component form
     - The effective flow from each box is written into a single row vector: [O_ex00, O_ey00, O_ex10, O_ey10, ...] (see the sketch below)
     - Vectors for each action are stored for every training subject
     - AdaBoost is used to learn the patterns
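
Flattening the grid row-wise then gives the per-frame motion descriptor; a short sketch continuing from the `effective_flows` helper above:

```python
import numpy as np

def motion_descriptor(pts, flows, roi):
    # Row-wise flattening of the cell flows gives the vector
    # [O_ex00, O_ey00, O_ex10, O_ey10, ...] for one frame; one such
    # vector per frame is stored per action and training subject.
    return effective_flows(pts, flows, roi).reshape(-1)
```
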
  22. Learning with AdaBoost
     (Figure: a strong classifier built from weighted weak classifiers over the feature vector)
  23. Classification example (taken from Antonio Torralba @ MIT)
     - Weak learners come from the family of lines; a classifier h with p(error) = 0.5 is at chance
     - Each data point has a class label y_t = +1 or -1 and a weight w_t = 1
  24. Classification example
     - One line seems to be the best: it is a 'weak classifier' that performs slightly better than chance
  25-28. Classification example (boosting rounds)
     - At each round we set a new problem for which the previous weak classifier performs at chance
     - The weights are updated: w_t ← w_t exp{-y_t H_t}
  29. Classification example
     - The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4 (see the sketch below)
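
The slides sketch the boosting loop but not the weak-learner family used on the motion descriptors, so the sketch below uses axis-aligned decision stumps as stand-in weak learners, with the w_t ← w_t exp{-y_t H_t} update from slides 25-28:

```python
# Discrete AdaBoost with decision stumps: weak learners slightly better
# than chance, combined with weights alpha into a strong classifier.
import numpy as np

def train_adaboost(X, y, rounds=50):        # y: array of +1/-1 labels
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):         # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    h = sign * np.where(X[:, f] > thr, 1, -1)
                    err = w[h != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        h = sign * np.where(X[:, f] > thr, 1, -1)
        w *= np.exp(-alpha * y * h)         # the w_t <- w_t exp{-y_t H_t} update
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    # Strong (non-linear) classifier: sign of the weighted sum of weak ones.
    H = sum(a * s * np.where(X[:, f] > t, 1, -1) for a, f, t, s in ensemble)
    return np.sign(H)
```
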
  30. Our dataset
     - Video resolution 320 x 240, stable background

     Action        Subjects  Videos
     Walking          8        34
     Running          8        20
     Flying           5        25
     Waving           5        25
     Pick up          6        24
     Stand up         6        48
     Sitting down     6        24
  31. Our dataset (tennis actions)
     - Small tennis dataset

     Action     Subjects  Videos
     Forehand      3        11
     Backhand      3        10
     Service       2         9
  32. Training and testing dataset
     - Training and testing data are mutually exclusive
     - Training and testing subjects are mutually exclusive
     - Frames used for training and testing:

     Action        Training  Testing
     Walking         1184      1710
     Running          183       335
     Flying           182       373
     Waving           198       317
     Pick up          111       160
     Stand up         128       187
     Sitting down     230       282
  33. Classification result (framewise); overall error: 12.21%
     Confusion counts per true class (classes: Walking, Running, Flying, Waving, Pick up, Sit down, Stand up; empty cells were dropped from the slide, values listed in slide order):
     Walking:  1644, 46, 0, 17, 1, 2 (error 3.86%)
     Running:  35, 295, 3, 2 (error 11.94%)
     Flying:   1, 2, 349, 11, 9, 1 (error 6.43%)
     Waving:   11, 8, 269, 29 (error 15.14%)
     Pick up:  8, 7, 1, 120, 23, 1 (error 25%)
     Sit down: 1, 1, 26, 179 (error 14.97%)
     Stand up: 23, 282 (error 8.15%)
  34. Classification results (clipwise); overall error: 6.94%
     Confusion counts per true class (classes: Walking, Running, Waving1, Waving2, Bending, Sit-down, Stand-up; empty cells were dropped from the slide):
     Walking: 10 (error 0.0%), Running: 10 (0.0%), Waving1: 9, 1 (10.0%), Waving2: 10 (0.0%), Bending: 9, 1 (10.0%), Sit-down: 10 (0.0%), Stand-up: 1, 9 (10.0%)
  35. Action classification
  36. Classification results (tennis events); overall error: 19.17% (per frame)
     Confusion counts per true class (classes: Forehand, Backhand, Service; empty cells were dropped from the slide):
     Forehand: 54, 7, 11 (error 21.95%); Backhand: 11, 53 (error 10.75%); Service: 8, 49 (error 14.04%)
  37. Event detection
     - Confusion arises at the junction of two actions
     - Prediction logic is used over a window of frames (see the sketch after the next slide)
     (Figure: the current frame f with the previous n frames f-1 ... f-n and the next n frames f+1 ... f+n)
  38. Event detection
     (Figures: detections without prediction logic vs. with prediction logic)
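
The slides do not give the exact prediction logic, so this is a plausible minimal sketch assuming each per-frame label is replaced by the majority vote over the previous n and next n frames shown on slide 37:

```python
# Prediction-logic sketch: smooth per-frame action labels with a majority
# vote over a (2n+1)-frame window centred on the current frame f.
from collections import Counter

def smooth_labels(labels, n=5):
    out = []
    for f in range(len(labels)):
        window = labels[max(0, f - n): f + n + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out
```
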
  39. Weizmann dataset

     Action  Subjects  Videos
     Bend       9         9
     Jack       9         9
     Jump       9         9
     Pjump      9         9
     Run        9        10
     Side       9         9
     Skip       9        10
     Walk       9        10
     Wave1      9         9
     Wave2      9         9
  40. Standard dataset (Weizmann)
     (Figure: sample frames for Walk, Side, Skip, Wave1, Wave2, Bend, Run, Jack, Jump, Pjump)
  41. Confusion matrix (framewise); overall error: 29.17% (per frame)
     Confusion counts per true class (classes: Bend, Jack, Jump, Pjump, Run, Side, Skip, Walk, Wave1, Wave2; empty cells were dropped from the slide, values listed in slide order):
     Bend:  271, 1, 1, 20, 3, 30, 11
     Jack:  18, 368, 8, 48, 3, 2, 3, 9, 16
     Jump:  9, 3, 157, 8, 2, 26, 19, 7
     Pjump: 36, 26, 237, 22, 6
     Run:   4, 2, 5, 158, 3, 50, 6, 1, 2
     Side:  11, 9, 77, 1, 1, 84, 3, 58, 2, 1
     Skip:  3, 9, 76, 43, 5, 109, 24, 1, 7
     Walk:  2, 5, 16, 2, 13, 5, 395
     Wave1: 47, 2, 12, 238, 27
     Wave2: 30, 6, 1, 4, 1, 55, 269
  42. Weizmann dataset (observations)
     - Smaller resolution (180 x 144) than before (320 x 240)
     - Weaker motion vectors compared to the previous experiments: magnitudes 0-1.75 px on Weizmann vs. 0-5.5 px earlier
     - No background frames were available, so the provided, poor-quality silhouettes were used
  43. Use of MV + shape information (SI)
     - Motion vectors (MV) alone are not enough; the person's shape also carries information about the action
     - SI is the number of foreground pixels in each grid box
     - Error: 23.45%
  44. Use of MV + differential SI
     - We calculate differential shape information using Viola-Jones rectangular features
     - The rectangular features are applied at grid level rather than pixel level (see the sketch below)
     - Error: 19.69%
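
A sketch of Viola-Jones style two-rectangle features evaluated on the grid of per-cell foreground-pixel counts (the shape information of slide 43) rather than on raw pixels; only horizontal side-by-side rectangles are shown, vertical ones are analogous:

```python
# Differential shape information: sums over side-by-side cell rectangles
# are subtracted, computed quickly from an integral image over the grid.
import numpy as np

def two_rect_features(S):                     # S: YDIV x XDIV foreground counts
    R, C = S.shape
    ii = np.zeros((R + 1, C + 1))
    ii[1:, 1:] = S.cumsum(0).cumsum(1)        # padded integral image
    def rsum(r0, c0, r1, c1):                 # sum over cells [r0, r1) x [c0, c1)
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    feats = []
    for hgt in range(1, R + 1):
        for wid in range(1, C // 2 + 1):
            for r in range(R - hgt + 1):
                for c in range(C - 2 * wid + 1):
                    feats.append(rsum(r, c, r + hgt, c + wid)
                                 - rsum(r, c + wid, r + hgt, c + 2 * wid))
    return np.array(feats)
```
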
  45. Confusion matrix (framewise)
     Confusion counts per true class (classes: Bend, Jack, Jump, Pjump, Run, Side, Skip, Walk, Wave1, Wave2; empty cells were dropped from the slide, values listed in slide order):
     Bend:  326, 7, 2, 2
     Jack:  6, 418, 39, 1, 3, 8
     Jump:  18, 1, 189, 1, 5, 4, 13
     Pjump: 11, 55, 243, 6, 1, 11
     Run:   2, 2, 173, 2, 45, 7
     Side:  8, 30, 11, 1, 152, 12, 33
     Skip:  1, 20, 32, 83, 4, 121, 13, 1, 2
     Walk:  1, 1, 2, 1, 1, 432
     Wave1: 43, 1, 10, 10, 232, 30
     Wave2: 13, 25, 328
  46. Spatio-temporal features
     (Figure: a spatio-temporal volume parameterized by TSPAN and TLEN)
  47. Spatio-temporal descriptor
     - The volume descriptor in row form: [Frame1 | Frame2 | Frame3 | Frame4 | Frame5 | ...]
     - Motion and differential shape information are collected for the whole volume (see the sketch below)
     - Error: 8.472% (per frame)
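
A sketch of the volume descriptor, assuming TSPAN is the spacing between sampled frames and TLEN the number of frames in the template (slide 56 uses TSPAN = 1 with TLEN equal to the sequence length):

```python
# Spatio-temporal descriptor: per-frame descriptors (motion + differential
# shape) concatenated into one row [Frame1 | Frame2 | Frame3 | ...].
import numpy as np

def volume_descriptor(frame_descs, start, TLEN, TSPAN=1):
    return np.concatenate([frame_descs[start + k * TSPAN] for k in range(TLEN)])
```
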
  48. Event classification (clipwise); error: 2.15%
     - Better than the 12.7% error rate reported by T. Goodhart et al., "Action recognition using spatio-temporal regularity based features", 2008
     Confusion counts per true class (classes: Bend, Jack, Jump, Pjump, Run, Side, Skip, Walk, Wave1, Wave2; empty cells were dropped from the slide):
     Bend: 9 (error 0.0%), Jack: 9 (0.0%), Jump: 9 (0.0%), Pjump: 9 (0.0%), Run: 9, 1 (10.0%), Side: 9 (0.0%), Skip: 10 (0.0%), Walk: 10 (0.0%), Wave1: 8, 1 (11.1%), Wave2: 9 (0.0%)
  49. Action recognition in cluttered background
  50. Cluttered environment
     - The background is not stable
     - The actor might be occluded
     - Slight changes in camera location (panning)
     - Scale variation
     - Speed variation
  51. Training
     - Training is done without background subtraction, i.e. with the noisy background
     - The start and end of each action are marked manually in the training videos, along with a bounding box around the actor
     - No shape information is added to the training data
     - Currently the bending and drinking actions are supported
  52. Training data: drinking
  53. Training data: bending
  54. Template length
     - Bending: average 55 frames per action (range 40-110); TLEN = 45 frames
     - Drinking: average 50 frames per action (range 35-70); TLEN = 40 frames
  55. Single template formation
     - The template length is kept constant by eliminating some frames: one action, one template (see the sketch below)
     - This adds robustness to the training and tackles speed variation across training clips
     (Figure: 15-frame and 12-frame sequences reduced to a common fixed length by dropping frames)
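
Slide 55 only shows frames being dropped; a minimal sketch assuming uniformly spaced frames are kept to reach the constant template length:

```python
# Single-template formation: reduce a variable-length clip to TLEN frames
# by keeping uniformly spaced ones, absorbing speed variation in training.
import numpy as np

def fixed_length_template(frames, TLEN):
    idx = np.linspace(0, len(frames) - 1, TLEN).round().astype(int)
    return [frames[i] for i in idx]
```
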
  56. Optical flow and AdaBoost
     - We now have constant-length sequences, over which optical flows are calculated
     - No shape information is used: the background is cluttered, so background subtraction is not possible
     - A spatio-temporal template is formed with TSPAN = 1 and TLEN = sequence length (constant)
     - The templates are learned with AdaBoost
  57. Testing
     - An action cuboid is formed with a specific height, width, and length
     - The cuboid is moved over every valid starting location in the video
     (Figure: the cuboid's height, width, and length)
  58. (Figure: the video as a volume with axes x, y, and t)
  59. Testing (continued)
     - A spatio-temporal template is formed for each cuboid location and tested with AdaBoost
     - The resulting confidence is entered into the confidence matrix
     - Height, width, and length are updated for scale and speed invariance
  60. Confidence matrix
     - The confidence matrix is a 3D matrix with an entry for every valid location of the cuboid in the video
     - It holds the confidence values given by AdaBoost over the various iterations
     - We expect true positives to be surrounded by a dense fog of large confidence values, so averaging is done to reduce the effect of false positives (see the sketch after the next slide)
  61. Confidence matrix (figure)
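
A sketch of the cuboid scan and confidence matrix; `score` is a hypothetical wrapper returning the AdaBoost strong classifier's real-valued confidence for one cuboid's spatio-temporal template, and the step size and 3x3x3 averaging window are assumptions:

```python
# Scan an H x W x L action cuboid over every valid (t, y, x) start in the
# video, fill the 3D confidence matrix, and average it so isolated false
# positives are suppressed while true positives keep their dense 'fog'.
import numpy as np
from scipy.ndimage import uniform_filter

def scan_video(video, H, W, L, score, step=4):
    T, VH, VW = video.shape[:3]
    conf = np.zeros((len(range(0, T - L + 1, step)),
                     len(range(0, VH - H + 1, step)),
                     len(range(0, VW - W + 1, step))))
    for ti, t in enumerate(range(0, T - L + 1, step)):
        for yi, y in enumerate(range(0, VH - H + 1, step)):
            for xi, x in enumerate(range(0, VW - W + 1, step)):
                conf[ti, yi, xi] = score(video[t:t+L, y:y+H, x:x+W])
    return uniform_filter(conf, size=3)   # local averaging of confidences
```
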
  62-67. Results (result figures)
  68. Key references
     1. Y. Ke, R. Sukthankar, M. Hebert, "Spatio-temporal Shape and Flow Correlation for Action Recognition", Proc. Visual Surveillance Workshop, 2007.
     2. P. Viola and M. Jones, "Robust real-time face detection", ICCV, vol. 20(11), pp. 1254-1259, 2001.
     3. M. Lucena, J. M. Fuertes and N. P. de la Blanca, "Using Optical Flow for Tracking", Progress in Pattern Recognition, Speech and Image Analysis, vol. 2905, 2003.
     4. Y. Ke, R. Sukthankar, and M. Hebert, "Event detection in crowded videos", ICCV, 2007.
     5. F. Niu and M. Abdel-Mottaleb, "View-Invariant Human Activity Recognition Based on Shape and Motion Features", Proc. IEEE Sixth International Symposium on Multimedia Software Engineering, pp. 546-556, 2004.
     6. D. M. Gavrila, "The visual analysis of human movement: A survey", Computer Vision and Image Understanding, 73:82-98, 1999.
     7. D. M. Gavrila, "A Bayesian, exemplar-based approach to hierarchical shape matching", IEEE Trans. Pattern Anal. Mach. Intell., 29(8):1408-1421, 2007.
     8. K. Gaitanis, P. Correa, and B. Macq, "Human Action Recognition using silhouette based feature extraction and Dynamic Bayesian Networks".
     9. M. Ahmad, S. Lee, "Human action recognition using shape and CLG-motion flow from multi-view image sequences", 7th IEEE International Conference on Automatic Face and Gesture Recognition, April 2006.
     10. I. Haritaoglu, D. Harwood, and L. Davis, "W4: real-time surveillance of people and their activities", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22:809-830, Aug 2000.
     11. I. Haritaoglu, D. Harwood, and L. S. Davis, "W4: Who? When? Where? What? A Real-time System for Detecting and Tracking People", Proc. Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, pp. 222-227, 1998.
     12. P. Correa, J. Czyz, T. Umeda, F. Marqués, X. Marichal, B. Macq, "Silhouette-based probabilistic 2D human motion estimation for real time application", ICIP, 2005.
     13. Y. Ke, R. Sukthankar, and M. Hebert, "Efficient visual event detection using volumetric features", ICCV, 2005.
