Autonomous Vehicle For Object Tracking
- Group Members: Prathamesh Joshi [15], Anirudh Panchal [31]
- Project Guide: Mr. Kiran Bhandari
Goal
- "Build a mobile robot platform, including mechanics and electronics, and implement and test a purely vision-based image matching algorithm under real-time constraints."
Object matching using SIFT
- SIFT: Scale Invariant Feature Transform
- Extracts distinctive invariant features for reliable matching and object recognition
- The features are:
  - invariant to scale
  - invariant to rotation
  - robust to (partially invariant to) affine distortion
  - robust to changes in 3D viewpoint
  - robust to noise
  - robust to changes in illumination
 
SIFT System
- Extract features from reference images in a database
- Extract features from a new given image
- Select candidate key-points (features) from the new image and match them with the features in the database
Major Stages of Computing Image Features
1. Scale-space extrema detection
2. Key-point localization
3. Orientation assignment of key-points
4. Calculation of the descriptor vector of key-points
- The image I(x, y) is convolved with a variable-scale Gaussian G(x, y, σ):
  L(x, y, σ) = G(x, y, σ) * I(x, y)    ... (1)
- To efficiently detect stable key-points, the difference of Gaussians is computed (see the code sketch below):
  DoG(x, y, σ) = L(x, y, kσ) − L(x, y, σ)    ... (2)
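As a rough illustration of equations (1) and (2), here is a minimal Python sketch (OpenCV and NumPy are our own assumptions, not tools named in the slides) that blurs an image at a ladder of scales and subtracts adjacent blurs to form one octave of DoG images:

```python
import cv2
import numpy as np

def dog_octave(gray, sigma=1.6, k=2 ** 0.5, num_scales=5):
    """One octave: Gaussian-blurred images L(x, y, sigma_i), eq. (1),
    and their adjacent differences DoG_i, eq. (2)."""
    gray = gray.astype(np.float32) / 255.0                     # work in [0, 1]
    blurred = [cv2.GaussianBlur(gray, (0, 0), sigma * k ** i)  # L(x, y, sigma * k^i)
               for i in range(num_scales)]
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]   # L(k*sigma) - L(sigma)
    return blurred, dogs
```

For further octaves, SIFT downsamples the most-blurred image by a factor of two and repeats the same procedure.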
 
Scale-space images (first through fourth octaves).
Difference-of-Gaussian images (first through fourth octaves).
Finding extrema
- A sample point is selected only if it is a minimum or a maximum of its 26 neighbours in the DoG scale space: 8 in its own DoG image and 9 in each adjacent scale (a minimal check is sketched below).
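A minimal version of that check, assuming `dogs` is the list of DoG images from the earlier sketch (the helper name is ours):

```python
import numpy as np

def is_extremum(dogs, s, y, x):
    """True if the DoG sample at (scale s, row y, col x) is strictly larger
    or strictly smaller than all 26 neighbours: 8 in its own DoG image and
    9 in each of the two adjacent scales."""
    centre = dogs[s][y, x]
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dogs[s - 1:s + 2]])
    neighbours = np.delete(cube.ravel(), 13)   # index 13 is the centre sample
    return centre > neighbours.max() or centre < neighbours.min()
```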
Filtering
- For each candidate key-point (both tests are sketched below):
  - Key-points with low contrast are removed
  - Responses along edges are eliminated
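A simplified sketch of both tests on a single DoG image (the 0.03 contrast threshold and the edge ratio r = 10 are the values from Lowe's paper; the function itself is our own simplification and skips sub-pixel refinement):

```python
def keep_keypoint(dog, y, x, contrast_thresh=0.03, edge_ratio=10.0):
    """Reject low-contrast candidates (DoG values assumed normalised to [0, 1])
    and edge-like candidates whose principal-curvature ratio, estimated from
    the 2x2 Hessian of the DoG image, exceeds edge_ratio."""
    if abs(dog[y, x]) < contrast_thresh:            # low contrast -> reject
        return False
    # Hessian entries from central finite differences.
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2 * dog[y, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    trace, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:                                    # curvatures of opposite sign -> reject
        return False
    return trace * trace / det < (edge_ratio + 1) ** 2 / edge_ratio
```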
Orientation assignment
Descriptor
- The descriptor has 3 dimensions: (x, y, θ)
- Orientation histogram of gradient magnitudes (sketched below)
- The position and orientation of each gradient sample are rotated relative to the key-point orientation
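A bare-bones gradient-orientation histogram of the kind used here (NumPy only; 36 bins matches the orientation-assignment stage, while the descriptor uses 8 bins per 4x4 sub-region; Gaussian weighting and the rotation to the key-point orientation are omitted for brevity):

```python
import numpy as np

def orientation_histogram(patch, bins=36):
    """Histogram of gradient orientations over a square patch around a
    key-point, each sample weighted by its gradient magnitude."""
    dy, dx = np.gradient(patch.astype(np.float32))
    magnitude = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 360.0),
                           weights=magnitude)
    return hist
```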
Recognition using SIFT features
- Compute SIFT features on the input image
- Match these features to the SIFT feature database (a matching sketch follows)
- Each key-point specifies 4 parameters: 2D location (x, y), scale, and orientation
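In practice the detect-and-match step can be delegated to OpenCV's SIFT implementation. The wrapper below is our own sketch (it assumes an OpenCV build that ships SIFT, e.g. opencv-python 4.4 or later) and filters matches with Lowe's ratio test:

```python
import cv2

def match_object(reference_path, frame_bgr, ratio=0.75):
    """Return SIFT key-points of a stored reference image and a camera frame,
    plus the 'good' matches that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    kp_frame, des_frame = sift.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_ref, des_frame, k=2)
            if m.distance < ratio * n.distance]
    return kp_ref, kp_frame, good
```

A count of good matches above a chosen threshold can then serve as the detection criterion for the tracked object.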
 
Depth measurement in real-time frames
- Using Lagrange's interpolation
Observation Table for Depth Detection
- The Lagrange interpolation polynomial reduces to the following equation (an interpolation sketch follows):
  F(x) = 0.0004 x² − 0.2283 x + 44.3429
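For reference, Lagrange interpolation over the calibration pairs of such an observation table can be written in a few lines of plain Python (the function names are ours; the quadratic quoted above is the fit the slides report for their own measurements):

```python
def lagrange_interpolate(x_obs, y_obs, x):
    """Evaluate the Lagrange interpolating polynomial defined by the
    calibration points (x_obs[i], y_obs[i]) at the query value x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_obs, y_obs)):
        term = yi
        for j, xj in enumerate(x_obs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis polynomial l_i(x)
        total += term
    return total

def depth_from_fit(x):
    """Quadratic reported in the slides: F(x) = 0.0004 x^2 - 0.2283 x + 44.3429."""
    return 0.0004 * x * x - 0.2283 * x + 44.3429
```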
 
This method has several advantages:
a) It uses only a single camera for depth finding.
b) It has no direct dependency on camera parameters such as focal length.
c) The calculations are uncomplicated.
d) It requires no auxiliary devices.
e) It has a constant response time, because the amount of calculation is fixed.
f) It can be used for both stationary and moving targets.
HARDWARE
The hardware consists of:
- Wireless video camera
- RS-232 cable
- PCB board
- Robot platform equipped with a DC motor
Wireless Video Camera
- Consists of a transmitter (Tx) and a receiver (Rx) unit
- Operates at 1.2 GHz
- Compatible with a PC via a TV tuner card
- Requires 9 V for the camera and 12 V for the Rx module
CIRCUIT DIAGRAM
PCB BOARD AND LAYOUT
 
Operation
- Data is sent by the serial module of RoboRealm to the PCB board via RS-232 (a serial-link sketch follows).
- The data is processed by the microcontroller, and control signals are sent to the motors depending on the input data.
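The host side of that serial link could look like the pyserial sketch below; the single-byte commands, the port name, and the 9600 baud rate are assumptions of ours, since the slides do not specify the protocol expected by the microcontroller firmware:

```python
import serial  # pyserial

# Hypothetical one-byte drive commands; the real mapping is defined by the
# microcontroller firmware, which the slides do not describe.
COMMANDS = {"forward": b"F", "back": b"B", "left": b"L", "right": b"R", "stop": b"S"}

def send_drive_command(port, action):
    """Send a single drive command to the robot's PCB over the RS-232 link."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(COMMANDS[action])

# Example: send_drive_command("COM3", "forward")
```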
Snapshot of the Vehicle Platform
Conclusion
Drawbacks:
- The algorithm is computationally intensive.
- False triggering.
- The speed of the moving object varies.
- The object may move outside the camera's field of vision.
- Lack of integration between RoboRealm and MATLAB.
References
- A Moving Object Tracked by a Mobile Robot with Real-Time Obstacles Avoidance Capacity -- Chung-Hao Chen, Chang Cheng, David Page, Andreas Koschan, and Mongi Abidi.
- A Vision System for IIT Kanpur Mirosot Robot Soccer Team, CS497 Special Topics in Computer Science, Semester II, 2002-03 -- Manu Chhabra (99211).
- A New Method for Depth Detection Using Interpolation Functions -- Mahdi Mirzabaki, Azad University of Tabriz, Computer Engineering Department.
- DepthFinder, a Real-Time Depth Detection System for Aided Driving -- Yi Lu Murphey, Jie Chen, Jacob Crossman, Jianxin Zhang, Paul Richardson, and Larry.
References (continued)
- A Real-Time Image Recognition System for Tiny Autonomous Mobile Robots -- Stefan Mahlknecht, Roland Oberhammer, Gregor Novak.
- Multipurpose Control System and Mobile Robot Development, for Control
- Motion Detection and Object Tracking in Image Sequences -- Zoran Živković.
- Object Recognition from Local Scale-Invariant Features -- David G. Lowe.
- An Image Identification Algorithm Using Scale Invariant Feature Detection.
THANK YOU
 