Object tracking survey
1. Object Tracking: A Survey. Nhat 'Rich' Nguyen, Vision Seminar, February 2010. Based on the survey paper by Yilmaz et al.
2. Definition: Tracking is the problem of estimating the trajectory of an object in the image plane as it moves around a scene.
3. Applications: motion recognition, automated surveillance, video indexing, human-computer interaction, traffic monitoring, vehicle navigation.
4. Problems: projection, noise, complex shape, complex motion, non-rigid objects, occlusions, lighting changes, real-time requirements.
5. Questions: Which object representation is suitable? Which image features should be used? How should the motion and appearance of the object be modeled? Answering these questions helps you design an object tracking system.
6. Overview: object representations, feature selection, object detection, object tracking, future directions.
7. Section 1: Object Representation [how to represent an object for tracking]
8. Shape - points: centroid, multiple points, control points.
9. Shape - patches: rectangular patch, elliptical patch, multiple patches.
10. Shape - contours: complete contour, skeletal model, silhouette.
11. Appearance - probability densities: Gaussian, histogram, mixture of Gaussians.
12. Appearance - models: geometric template, active contour, multi-view appearance.
13. Section 2: Feature Selection [which features can be easily distinguished?]
14. Color: RGB, HSV, LAB.
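A binned color histogram in any of these spaces is a common appearance feature for tracking. A minimal sketch, with histogram intersection as the similarity measure (function names and bin counts are illustrative, not from the survey):

```python
def rgb_histogram(pixels, bins_per_channel=4):
    """Quantize each channel into bins and count pixels per (r, g, b) bin."""
    hist = [0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + g // step) * bins_per_channel + b // step
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]  # normalize so histograms are comparable

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```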
15. Edges: Canny edge detector.
16. Optical flow: a dense field of displacement vectors defining the translation of each pixel.
17. Texture: gray-level co-occurrence matrix.
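The co-occurrence matrix counts how often pairs of gray levels appear at a fixed pixel offset; texture statistics such as contrast are then read off the matrix. A small pure-Python sketch (a minimal version, assuming the image is already quantized to a few gray levels):

```python
def glcm(image, levels, dr=0, dc=1):
    """Count co-occurrences of gray levels at offset (dr, dc)."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    """Contrast statistic: co-occurrence counts weighted by squared level difference."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total
```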
18. Texture, Laws' measures: 1-D kernels for level, edge, spot, wave, and ripple; 2-D measures obtained by convolving a vertical and a horizontal 1-D kernel.
19. Section 3: Object Detection [to track, we first detect]
20. Approaches: point detectors (Harris, SIFT); background subtraction; segmentation (mean shift, graph cuts, active contours); supervised learning (adaptive boosting, support vector machines).
21. Point detectors: Harris, SIFT.
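The Harris detector scores a pixel by the gradient structure in its neighborhood: two strong, independent gradient directions indicate a corner, a single dominant direction only an edge. A sketch of the response computation, assuming the (Ix, Iy) gradient samples for a window have already been collected:

```python
def harris_response(grads, k=0.04):
    """Harris corner response det(M) - k * trace(M)^2, where M is the
    second-moment matrix accumulated from (Ix, Iy) gradient samples."""
    sxx = sum(ix * ix for ix, iy in grads)
    syy = sum(iy * iy for ix, iy in grads)
    sxy = sum(ix * iy for ix, iy in grads)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```

A corner-like window (gradients in both directions) scores positive, while an edge-like window (gradients in one direction only) scores negative.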
22. Background subtraction.
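A common background-subtraction scheme keeps a per-pixel running average of the scene and flags pixels that deviate from it. A minimal sketch on flattened grayscale frames (the learning rate and threshold here are illustrative, not values from the survey):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=20):
    """Pixels far from the background model are flagged as foreground."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```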
23. Segmentation.
24. Segmentation: mean shift.
25. Segmentation: mean shift (continued).
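Mean shift seeks a mode of a sample density by repeatedly moving a window to the mean of the samples it covers; the same iteration underlies both this segmentation approach and the mean shift tracker later in the talk. A 1-D sketch with a flat kernel:

```python
def mean_shift(points, start, bandwidth, iters=50, tol=1e-6):
    """Iterate toward a density mode: move to the mean of the points
    inside the current window until the shift falls below tol."""
    x = start
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x
```

Starting near either cluster of samples, the iteration converges to that cluster's mode.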
26. Segmentation: graph cuts.
27. Segmentation: active contours.
28. Supervised learning: training examples and their features are fed to a supervised learner; new input is then classified.
29. Adaptive boosting.
30. Support vector machines.
31. Section 4: Object Tracking [state-of-the-art methods]
32. Approaches: point tracking (multi-point correspondence), kernel tracking (parametric transformation), silhouette tracking (contour evolution).
33. Taxonomy.
34. Deterministic correspondence: reduce all possible associations to unique associations via multi-frame correspondence; optimal assignment methods: Hungarian vs. greedy.
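The greedy alternative to the optimal Hungarian assignment repeatedly commits to the cheapest remaining track-detection pair; it is fast but can be far worse than the optimum. A sketch, where cost[i][j] is the cost of matching track i to detection j:

```python
def greedy_assign(cost):
    """Greedy one-to-one assignment: repeatedly take the globally
    cheapest (track, detection) pair whose track and detection are free."""
    pairs = sorted((cost[i][j], i, j)
                   for i in range(len(cost)) for j in range(len(cost[0])))
    used_i, used_j, match = set(), set(), {}
    for c, i, j in pairs:
        if i not in used_i and j not in used_j:
            match[i] = j
            used_i.add(i)
            used_j.add(j)
    return match
```

On the cost matrix [[1, 2], [2, 100]] the greedy result {0: 0, 1: 1} costs 101, while the optimal Hungarian assignment {0: 1, 1: 0} costs only 4, which is why the distinction matters.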
35. Motion constraints: proximity, maximum velocity, small change in velocity, common motion, rigidity.
36. Examples: rotating dish, flying birds.
37. State estimation.
38. Filters. Kalman filter: estimates the state of a linear system when the state is Gaussian distributed. Particle filter: handles states that are not Gaussian distributed. Joint probabilistic data association: offers a probabilistic approach to data association instead of nearest neighbor, but assumes no objects enter or exit the scene. Multiple hypothesis tracking: exhaustively enumerates all possible associations.
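For a single tracked quantity, one Kalman predict/update cycle reduces to a few lines. This sketch assumes a random-walk state model with a direct, noisy measurement; the noise variances q and r are illustrative defaults, not values from the survey:

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, p: estimate variance, z: new measurement."""
    # Predict: the state is unchanged, uncertainty grows by process noise q.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p
```

Fed a constant measurement, the estimate converges to it while the variance shrinks.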
39. Evaluation.
40. Template matching: brute-force search; similarity is measured by cross-correlation between the candidate template position and the object template from the previous frame.
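Brute-force template matching slides the template over every candidate position and keeps the one with the highest normalized cross-correlation. A 1-D sketch of the idea:

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_match(signal, template):
    """Brute force: score every candidate position, return the best one."""
    scores = [(ncc(signal[i:i + len(template)], template), i)
              for i in range(len(signal) - len(template) + 1)]
    return max(scores)[1]
```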
41. Mean shift tracker.
42. KLT feature tracker: computes the translation of a rectangular region centered on an interest point; track quality is evaluated by computing the affine transformation between corresponding patches.
43. Eigentracker: a subspace-based approach for multi-view appearance; uses eigenspace similarity instead of SSD or correlation; allows distortion in the template.
44. SVM tracker: positive samples are images of the object to be tracked, negative samples are images of background objects; the object position is estimated by maximizing the SVM classification score over the image region, so knowledge about background objects is explicitly incorporated in the tracker.
45. Evaluation.
46. Shape matching: similar to template matching; uses the Hausdorff distance to identify the most mismatched edges; emphasizes the parts of the model that are not drastically affected by object motion (for a walking person: head and torso rather than arms and legs).
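The Hausdorff distance between two edge-point sets is the largest distance from any point in one set to its nearest neighbor in the other, so the points responsible for that maximum are exactly the worst-mismatched edges. A sketch over 2-D point sets:

```python
def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for bx, by in b)
               for ax, ay in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```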
47. State space models: the state consists of the shape and motion parameters of the contour; the control points of the contour move according to spring-stiffness parameters; measurements consist of the image edges computed along the normal direction of the contour.
48. Gradient descent: a direct minimization algorithm. To find a local minimum of a function, take steps proportional to the negative of the gradient of the function at the current point.
49. Contour evolution.
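The gradient descent rule just stated, stepping against the gradient at the current point, in a minimal form:

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Repeatedly step against the gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x
```

For example, minimizing f(x) = (x - 3)^2 via its gradient 2(x - 3) converges to the local (here global) minimum at x = 3.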
50. Evaluation.
51. Section 5: Future Directions [what's left for us?]
52. Multiple-camera tracking: depth information, occlusion resolution, moving cameras, non-overlapping views.
53. Unconstrained videos: broadcast news or home videos; noisy, compressed, unstructured, with multiple views; severe occlusion leaving objects only partially visible; employ audio in addition to video.
54. Efficient online estimation: the ability to learn object models online; unsupervised learning of object models for multiple non-rigid moving objects from a single camera.
55. Concluding remarks: tracking requires detection at some point; state-of-the-art methods rely on point correspondence, geometric models, or contour evolution; the right choice depends on the context of use; the survey gives valuable insight and encourages new research.