Slide 1
Kyungpook National University
Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
Mary B. Alatise and Gerhard P. Hancke
Sensors Journal, September 21, 2017
Slide 2
Motivation
The objective of this paper is to estimate the pose of a mobile robot. The paper presents a fusion of IMU and vision data to determine the robot's position accurately.
Slide 3
Why We Need Fusion of the IMU Sensor and Vision
IMU limitations: accumulated drift; sensor calibration.
Vision limitations: illumination changes; visual occlusion; blurred features under fast, unpredicted motions; the limited field of view of the camera.
The fusion uses the complementary features of IMU and vision-based methods.
Advantages: low-cost implementation, fast computation, and improved accuracy.
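The complementary behaviour described above can be illustrated with a one-state Kalman filter on the yaw angle: the (drifting) gyro rate drives the prediction step, and an absolute yaw fix from vision drives the correction step. This is only a minimal sketch of the idea, not the paper's full extended Kalman filter; the function name and the noise variances q and r are assumed values.

```python
import numpy as np

def fuse_yaw(gyro_rates, vision_yaws, dt=0.1, q=0.01, r=4.0):
    """One-state Kalman filter on yaw (degrees). Illustrative sketch.

    gyro_rates:  angular rates in deg/s (a biased gyro drifts if integrated alone).
    vision_yaws: absolute yaw fixes in degrees, or None when no landmark is visible.
    q, r:        assumed process and measurement noise variances.
    """
    yaw, P = 0.0, 1.0
    estimates = []
    for rate, z in zip(gyro_rates, vision_yaws):
        yaw += rate * dt        # predict: integrate the gyro rate
        P += q                  # uncertainty grows between vision fixes
        if z is not None:       # correct with the absolute vision measurement
            K = P / (P + r)     # Kalman gain
            yaw += K * (z - yaw)
            P *= (1.0 - K)
        estimates.append(yaw)
    return np.array(estimates)
```

Integrating the gyro alone accumulates its bias as drift; each vision fix pulls the estimate back toward the absolute yaw, which is the complementary behaviour the slide describes.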
Slide 6
Results and Discussion
Simulated Results of Object Detection and Recognition in an Image
Pipeline: the proposed object (query image) and the training image (RGB) are converted from RGB to grayscale; lens distortion is removed; features are matched (outliers and inliers); only the inliers are kept; a display box is drawn around the recognized object.
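The outlier-rejection step in this pipeline is RANSAC. As a NumPy-only sketch (the paper pairs RANSAC with SURF features and a full CV pipeline; the function names here are illustrative), RANSAC repeatedly fits a planar homography to four random correspondences and keeps the model with the most inliers:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src -> dst (4+ points)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_inliers(src, dst, iters=200, thresh=3.0, rng=None):
    """Return a boolean mask of matches consistent with the best homography."""
    rng = rng or np.random.default_rng(0)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = fit_homography(src[idx], dst[idx])
        pts = np.c_[src, np.ones(len(src))] @ H.T      # project all points
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = pts[:, :2] / pts[:, 2:3]
            err = np.linalg.norm(proj - dst, axis=1)   # reprojection error
        inliers = err < thresh
        if inliers.sum() > best.sum():                 # keep the best model
            best = inliers
    return best
```

The matches flagged False are the outliers discarded before the recognized object's bounding box is computed.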
Slide 7
Analysis of Experimental Results
Euler angles from the IMU. Roll: red; pitch: green; yaw: blue.
The figure shows the robot travelling on a flat surface.
Up to 49 s, the roll and pitch angles stayed close to zero, until there was a change in direction.
At the point where the robot turned 90 degrees to the right, the yaw angle was 91.25 degrees.
The maximum values obtained for the pitch and roll angles were 15 degrees and 18 degrees, respectively.
From the experimental results, it can be concluded that Euler angles were a good choice for this experiment, because the pitch angle never reached ±90 degrees, so no gimbal lock occurred.
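The gimbal-lock remark can be made concrete with the standard ZYX quaternion-to-Euler conversion (a generic textbook formula, not code from the paper): pitch comes from an arcsine, so it saturates at ±90 degrees, which is exactly where roll and yaw become indistinguishable.

```python
import math

def quat_to_euler(w, x, y, z):
    """Unit quaternion -> (roll, pitch, yaw) in degrees, ZYX convention."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    s = 2.0 * (w * y - z * x)
    s = max(-1.0, min(1.0, s))   # clamp: |s| -> 1 is the gimbal-lock case
    pitch = math.asin(s)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))
```

A pure 90-degree turn about the vertical axis appears only in yaw, consistent with the near-zero roll and pitch reported for the flat surface.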
Slide 8
Orientation Results for Fused Sensors
When the robot made a 90-degree right turn:
Yaw from the IMU: 91.25 degrees
Yaw from vision: 88 degrees
Pitch: 1.2 degrees
Roll: 4 degrees
From these results, it can be concluded that the proposed method reduces accumulated errors and drift.
Figures: roll, pitch, and yaw angles.
Slide 9
Comparison of the Three Directions of the Mobile Robot from the Vision Method
Experimental position in the X, Y, and Z directions from vision data.
The figure shows a distinctive estimate of the mobile robot's position.
The position estimate based on the reference object in the image is relative to the position of the mobile robot and the world coordinate frame, with the median vector of the planar object for the Z-axis close to 1 and -1.
From the experimental position results, it can be concluded that the combination of the SURF and RANSAC algorithms can be used for accurate position estimation.
Slide 10
Ground Truth Data
The ground truth data were collected with an external camera placed in the experimental environment.
The camera was placed on flat terrain, 4.90 m from the mobile robot.
Figure: ground truth system based on a camera.
Slide 12
Performance: Accuracy
The accuracy of the proposed method is evaluated by the root mean square error (RMSE).
The figures show the difference between the ground truth and the proposed method.
The maximum position error is 0.145 m.
The maximum orientation error is 0.95 degrees.
Figures: RMSE for position; RMSE for orientation.
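As a small illustration of the metric (a generic definition, with illustrative names), the RMSE between an estimated and a ground-truth trajectory is:

```python
import numpy as np

def rmse(estimate, ground_truth):
    """Root mean square error between two equally sampled trajectories."""
    err = np.asarray(estimate, float) - np.asarray(ground_truth, float)
    return float(np.sqrt(np.mean(err ** 2)))
```

The same function applies per axis for position (metres) and per angle for orientation (degrees).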
Slide 13
RMSE of Position and Orientation
The position error increases slightly over time.
Both the pitch and yaw error angles decrease as time increases.
The roll error gradually increased from the start until about 80 s, then decreased.
Slide 14
Conclusion
Computer vision and IMU sensor fusion was presented.
For object recognition, the SURF and RANSAC algorithms were used to detect and match features in images.
Experimental results show that the proposed method performs better than either individual method alone.
The proposed method gives high position accuracy.
The error values from the proposed method are reasonable for indoor localization.