  1. Localization of MAV in GPS-denied Environment Using Embedded Stereo Camera. Syaril Azrad, Farid Kendoul, Fadhil Mohamad, Kenzo Nonami. Department of Mechanical Engineering, Nonami Lab, Chiba University
  2. Research Background • Vision-based autonomous quadrotor research at Chiba University – Started in 2008; participated in the 1st US-Asian Demonstration and Assessment of Micro Aerial and Unmanned Ground Vehicle Technology (MAV'08), Agra, India – Single-camera approach • Color-based object-tracking algorithm (visual servoing) • Feature-based / Optical Flow (OF) – Stereo-camera approach • Ground-stereo-camera-based hovering • Fully embedded object tracking
  3. Current Research Background • Localization of MAV using an embedded camera – Research by Farid et al. used a single embedded camera with Optical Flow to localize the rotorcraft position – Height-estimation improvement • Fusion with a pressure sensor • Using a stereo camera for higher precision
  4. Altitude Estimation for MAVs? • GPS • Pressure sensor • Laser range finder • Radar sensors • Vision system – Single camera – Stereo camera
  5. For altitude estimation on small UAVs (MAVs) we propose… • An embedded lightweight stereo camera • Fusion of Optical Flow (SFM algorithm) with image homography • Fusion of Optical Flow (SFM algorithm) with Scale-Invariant Feature Transform (SIFT)-based feature matching
  6. Our Platform Introduction • Quadrotor airframe • Micro-controller • Sensors • Vision system
  7. 18th Aug. 2010, MOVIC 2010, Tokyo, JAPAN • Size: 54 cm x 54 cm x 20 cm • Total weight: < 700 grams • Flight time: 10 min • Range: 1–2 km • Total cost: < 4000 USD
  8. Proposed vision-based MAV localization algorithm • Horizontal position using an Optical Flow-based algorithm • Altitude using Optical Flow fused with a homography/SIFT-based matching approach
  9. Computing Optical Flow: for the implementation we use the Lucas-Kanade method.
  10. The feature point (x, y) in image I1 will be at (x+dx, y+dy) in the next image I2, and the same process is repeated at the next step (t+Δt), providing the optical flow and a simple feature-tracking algorithm.
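The single-window Lucas-Kanade step described above can be sketched as follows (a minimal NumPy sketch with our own function names, not the pyramidal implementation used in practice): the brightness-constancy residuals in a window are stacked and solved by least squares for the displacement (dx, dy).

```python
import numpy as np

def lucas_kanade_step(I1, I2, x, y, win=7):
    """Estimate the displacement (dx, dy) of the feature at (x, y)
    between images I1 and I2 with one Lucas-Kanade least-squares step."""
    h = win // 2
    p1 = I1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p2 = I2[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    # Spatial gradients of the first patch, temporal gradient between patches
    Iy, Ix = np.gradient(p1)
    It = p2 - p1
    # Brightness constancy: Ix*dx + Iy*dy + It = 0, solved in least squares
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (N, 2)
    b = -It.ravel()                                  # (N,)
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)

# Synthetic check: a horizontal ramp image shifted right by one pixel
I1 = np.tile(np.arange(32, dtype=float), (32, 1))
I2 = np.roll(I1, 1, axis=1)
dx, dy = lucas_kanade_step(I1, I2, 16, 16)
```

On this synthetic pair the recovered flow is (dx, dy) ≈ (1, 0), matching the one-pixel shift.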
  11. Horizontal Position Estimation (OF): Imaging Model
  12. A perspective-central camera model maps a 3-D point Pi to the image point pi(xi, yi). The velocity of pi can then be expressed in image coordinates, giving the OF equation below. What does this equation mean?
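The mapping and its time derivative can be written out concretely. Below is a minimal NumPy sketch (focal length 1, our own function names) that checks the analytic image-point velocity, obtained by differentiating x = X/Z and y = Y/Z, against a finite difference:

```python
import numpy as np

def project(P):
    """Perspective-central (pinhole, focal length 1) projection of P = (X, Y, Z)."""
    X, Y, Z = P
    return np.array([X / Z, Y / Z])

def image_velocity(P, V):
    """Velocity of the projected point p = (x, y): differentiating x = X/Z
    gives xdot = Xdot/Z - x*Zdot/Z, and likewise for y."""
    X, Y, Z = P
    Xd, Yd, Zd = V
    x, y = project(P)
    return np.array([Xd / Z - x * Zd / Z, Yd / Z - y * Zd / Z])

# Check the analytic velocity against a finite difference
P = np.array([1.0, -0.5, 4.0])   # 3-D point
V = np.array([0.2, 0.1, -0.3])   # its 3-D velocity
dt = 1e-6
fd = (project(P + V * dt) - project(P)) / dt
an = image_velocity(P, V)
```

The 1/Z factor in the velocity is what couples the measured flow to depth, which is why the same equation later serves for altitude estimation.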
  13. Because the OF calculated from images contains a rotational and a translational part, IMU data on the rotational velocity of the camera mounted on the MAV body lets us extract the purely translational part, which carries the velocity and position information (Kendoul et al., 2007).
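Assuming the standard normalized pinhole model (sign conventions vary between papers, so treat this as a sketch), the rotation-induced flow is fully predicted by the IMU angular rates and can be subtracted pointwise:

```python
import numpy as np

def rotational_flow(x, y, omega):
    """Optical flow induced purely by camera rotation omega = (wx, wy, wz)
    at normalized image coordinates (x, y), focal length 1."""
    wx, wy, wz = omega
    u = x * y * wx - (1.0 + x**2) * wy + y * wz
    v = (1.0 + y**2) * wx - x * y * wy - x * wz
    return np.array([u, v])

def translational_flow(of_measured, x, y, omega_imu):
    """Subtract the IMU-predicted rotational part from the measured OF,
    leaving the translational component used for velocity/depth estimation."""
    return np.asarray(of_measured) - rotational_flow(x, y, omega_imu)

# Pure rotation about the optical axis: the translational residual is ~0
of = rotational_flow(0.2, -0.1, (0.0, 0.0, 0.5))
res = translational_flow(of, 0.2, -0.1, (0.0, 0.0, 0.5))
```

Note the rotational terms do not depend on depth, which is exactly why they can be removed with attitude data alone.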
  14. The strategy is an estimation problem with the state vector. For the KF dynamics model we can write the measurement equation plus noise, with H as below, from our optical flow expression in equation (5).
  15. After estimating OFtrans, we can use it as the measurement vector for the MAV translational velocity and the structure parameter. Both cameras are mounted on the MAV and, assuming the camera motion is smooth, we can write the dynamic model as below, with γ the camera acceleration. We use the model proposed by Kendoul et al. for depth (altitude), and write the discrete dynamic system using the state vector X.
  16. Since the system is non-linear, we implement an Extended Kalman Filter, as proposed by Kendoul et al., to estimate the translational velocity and depth (altitude). The observation of the discrete model is as below.
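An EKF step for this kind of model can be sketched in NumPy. The reduced state [vx, Z] below is our own simplification, not the paper's full state vector; the nonlinearity is the observation h(X) = vx/Z (translational flow). Note that from OF alone only the ratio vx/Z is observable, which is why the deck proposes fusing a pressure sensor or stereo to fix the scale.

```python
import numpy as np

# State X = [vx, Z]: lateral velocity and depth (altitude).
# Observation: translational OF, h(X) = vx / Z -- nonlinear, hence the EKF.

def f(X, gamma, dt):
    vx, Z = X
    return np.array([vx + gamma * dt, Z])   # constant-depth sketch

def h(X):
    vx, Z = X
    return np.array([vx / Z])

def H_jac(X):
    vx, Z = X
    return np.array([[1.0 / Z, -vx / Z**2]])  # dh/dX at the prediction

def ekf_step(X, P, z, gamma, dt, Q, R):
    # Predict (f is linear in X here, so F = I)
    F = np.eye(2)
    Xp = f(X, gamma, dt)
    Pp = F @ P @ F.T + Q
    # Update with the linearized observation
    H = H_jac(Xp)
    S = H @ Pp @ H.T + R
    K = Pp @ H.T @ np.linalg.inv(S)
    Xn = Xp + (K @ (z - h(Xp))).ravel()
    Pn = (np.eye(2) - K @ H) @ Pp
    return Xn, Pn

# Feed a constant flow measurement of 0.5: the estimate converges
# so that vx/Z matches it (the ratio, not the individual values).
X, P = np.array([0.0, 1.0]), np.eye(2)
Q, R = 1e-4 * np.eye(2), np.array([[1e-4]])
for _ in range(200):
    X, P = ekf_step(X, P, np.array([0.5]), 0.0, 0.02, Q, R)
```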
  17. Pipeline: raw OF data from the camera and attitude data from the IMU → estimate the translational part of the OF, separating out the rotational part → estimate the camera velocity and depth (altitude).
  18. Verification experiment of the image algorithm fused with IMU data: move along the x-axis with various attitudes. [Plot: X, Y distance [m] versus time [s]]
  19. Localization of MAV using Optical Flow and SIFT-matching technique • SIFT? – Detects and describes local image features • In our computation we use SIFTGPU (Wu, 2009) to speed up the matching • The matching result is filtered by a RANSAC algorithm to separate inliers from outliers • By threading the computations we get the Optical Flow-based algorithm at 50 fps and SIFT matching at 7–8 fps, including triangulation.
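The RANSAC filtering step can be sketched as follows. For illustration we use a toy pure-translation motion model (one match is a minimal sample), rather than the homography/stereo geometry of the real pipeline; all names are our own:

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=2.0, seed=0):
    """Filter putative feature matches p1[i] <-> p2[i] with RANSAC under
    a pure-translation model p2 = p1 + t; return t and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))            # minimal sample: one match
        t = p2[i] - p1[i]
        err = np.linalg.norm(p2 - (p1 + t), axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the model on all inliers of the best hypothesis
    t = (p2[best_inliers] - p1[best_inliers]).mean(axis=0)
    return t, best_inliers

# 20 matches consistent with t = (5, -3), plus 5 gross outliers
rng = np.random.default_rng(1)
p1 = rng.uniform(0, 100, size=(25, 2))
p2 = p1 + np.array([5.0, -3.0])
p2[20:] += rng.uniform(30, 60, size=(5, 2))   # corrupt the last 5 matches
t, inl = ransac_translation(p1, p2)
```

The recovered translation is (5, -3) and exactly the 20 consistent matches survive as inliers; in the real system the surviving matches feed the triangulation.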
  20. Manual flying test with the embedded stereo camera
  21. Results (x, y) vs. ground truth
  22. Results (z)
  23. Implementation Strategy: (1) Vicon data, (2) control program, (3) receive image data, (4) process image data. Image processing: (1) Optical Flow-based, (2) stereo feature matching, on the GPU (Graphical Processing Unit). Connected over a socket: one computer running two separate processes, or two computers.
  24. Implementation • Implemented on one computer because of its high capability, with the two processes sharing the GPU • Core-i7 (4 cores, 8 threads) – Separate core for image processing – Separate thread for control – Separate core for receiving image data – Separate thread for Vicon data
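The producer/consumer split between image processing and the rest of the system can be sketched with threads and queues (a structural sketch only: Python threads are not pinned to cores the way the Core-i7 setup above is, and the queues stand in for the socket between processes):

```python
import queue
import threading

# Queues stand in for the socket: frames flow in, pose estimates flow out.
image_q = queue.Queue()
state_q = queue.Queue()

def image_worker():
    """Stand-in for the image-processing core (OF / stereo matching)."""
    while True:
        frame = image_q.get()
        if frame is None:              # poison pill -> shut down cleanly
            break
        state_q.put(("pose", frame))   # pretend we estimated a pose

worker = threading.Thread(target=image_worker, daemon=True)
worker.start()
for i in range(3):                     # the "receive image data" side
    image_q.put(i)
image_q.put(None)
worker.join()
poses = [state_q.get() for _ in range(3)]   # the control side consumes poses
```

With a single worker the FIFO queue preserves frame order, so `poses` comes back as `[("pose", 0), ("pose", 1), ("pose", 2)]`.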
  25. Results • Processes implemented at stable frequencies – PID control (30 Hz) – Vicon data acquisition (30 Hz) – Image processing: SIFT (8 Hz) and Optical Flow (15 Hz)
  26. Strategy for improving the results
  27. Before improvement
  28. Result
  29. Future Work & Suggestions • Apply SIFTGPU for horizontal odometry estimation and fuse it with the OF-based horizontal odometry – Successive-frame feature matching • On-board camera / image-processing development, technology sharing.
  30. Thank you…