This document summarizes a final year project on developing a visual compass for global localization of a robot soccer player. The student developed a prototype desktop application using OpenCV and SURF/SIFT algorithms to build a panoramic visual map from images and determine the robot's direction by matching query images to the map. Experimental results found errors of 18% for SURF and 29% for SIFT. Future work involves porting the application to the Nao robot and reducing errors by creating visual maps from multiple positions on the pitch.
2. What is RoboCup
• RoboCup is an annual international robotics soccer competition founded in 1997.
• The official goal of RoboCup: robots capable of beating the FIFA World Cup winning team by 2050.
• RoboEireann, Maynooth's RoboCup team, have competed in the Standard Platform League (SPL) since 2008.
3. Overview of Background
Mobile robot localisation can be separated into two distinct problems:
• Local localisation (a.k.a. incremental localisation): the robot has an initial estimate of its pose.
• Global localisation (a.k.a. the kidnapped robot problem): the robot doesn't have initial position information.
6. Research Questions
Is a visual compass effective for finding the direction the robot is currently facing?
How do we measure success?
7. Goals of Project
Develop a visual compass for global localisation:
• Use images of the visual appearance of fixed objects surrounding the pitch (or potentially above the pitch).
• Construct a panorama using a sequence of images.
• Match a query image to the panorama: detect features and find matched features.
• Compute the direction of the robot.
8. Approach to Goals
• Build a visual map using OpenCV and the SURF algorithm (a hedged stitching sketch follows this list).
• The visual map represents a full 360° view.
• Find the direction the robot is currently looking.
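As a hedged illustration of the map-building step (the slides include no code; the file names, frame count, and the use of OpenCV's high-level Stitcher are my assumptions, written against the OpenCV 2.4-era C++ API the project would have used):

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/stitching/stitcher.hpp>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // Load a sequence of frames taken while rotating on the spot
        // (file names and frame count are illustrative, not from the project).
        std::vector<cv::Mat> frames;
        for (int i = 0; i < 12; ++i) {
            char name[32];
            std::sprintf(name, "frame_%02d.png", i);
            cv::Mat frame = cv::imread(name);
            if (!frame.empty())
                frames.push_back(frame);
        }

        // Stitch the frames into one panoramic visual map covering 360 degrees.
        cv::Mat panorama;
        cv::Stitcher stitcher = cv::Stitcher::createDefault();
        if (stitcher.stitch(frames, panorama) == cv::Stitcher::OK)
            cv::imwrite("panorama.png", panorama);
        return 0;
    }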
16. Conclusions & Future Work
• Developed a prototype desktop visual compass application.
• Evaluated it on a real image dataset.
Problems encountered:
• Ubuntu OS troubleshooting, the Qt toolkit, and the OpenCV libraries.
• Designing the GUI application in Qt.
• Creating the histogram.
Future Work: port to the Nao robot.
17. Possible Solution to Reduce the Error
Create visual maps from several different positions on the pitch.
By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.
The local localisation problem tracks the pose of the robot over time: the robot has an initial estimate of its pose, which it updates through the robot's odometry and information it gathers from its sensors. The more challenging problem of global localisation occurs where a robot doesn't have initial position information, i.e., it must handle the kidnapped robot problem, in which a robot is picked up and carried to some unknown location.
YES: it is possible to use a panoramic image representation as a visual map of the area surrounding the pitch, by matching unseen images against it to find the direction the robot is currently looking.
A measure of success: take a random image from inside the lab, for example, and the system should tell you which direction of the lab it is pointing in.
Using a sequence of images, create a panorama; then, given the panorama and one query image, detect features in both images and find the best-matched features.
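A minimal sketch of this detect-and-match step, assuming the OpenCV 2.x nonfree module where SURF lived at the time (in OpenCV 3+ it moved to opencv_contrib's xfeatures2d); the Hessian threshold and the distance cut-off for "good" matches are illustrative values, not taken from the project:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>  // SURF (nonfree in OpenCV 2.x)
    #include <vector>

    int main()
    {
        cv::Mat panorama = cv::imread("panorama.png", 0);  // greyscale
        cv::Mat query    = cv::imread("query.png", 0);

        // Detect SURF keypoints and compute descriptors in both images.
        cv::SURF surf(400.0);  // Hessian threshold (illustrative value)
        std::vector<cv::KeyPoint> kpPano, kpQuery;
        cv::Mat descPano, descQuery;
        surf(panorama, cv::Mat(), kpPano, descPano);
        surf(query, cv::Mat(), kpQuery, descQuery);

        // Match query descriptors against the panorama (L2 distance).
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<cv::DMatch> matches;
        matcher.match(descQuery, descPano, matches);

        // Keep only the "good" matches, i.e. those with a small distance.
        double minDist = 1e9;
        for (size_t i = 0; i < matches.size(); ++i)
            if (matches[i].distance < minDist) minDist = matches[i].distance;

        std::vector<cv::DMatch> good;
        for (size_t i = 0; i < matches.size(); ++i)
            if (matches[i].distance < 3 * minDist)  // illustrative cut-off
                good.push_back(matches[i]);
        return 0;
    }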
This is the final view of the project, displaying the panorama and the query image with features detected and the number of detected features shown. The SURF algorithm matches the best features, a histogram of these good matches is created, and the direction of the robot is found from the peak of the histogram.
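One way the histogram-and-peak step could be realised (a sketch under assumptions: the function name, bin count, and the linear x-to-angle mapping are mine; the source only says a histogram of good matches is built and the peak gives the direction):

    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    // Map each good match to a heading via the x position of its panorama
    // keypoint (the panorama spans 360 degrees), bin the headings, and
    // return the centre of the fullest bin as the robot's direction.
    double directionFromMatches(const std::vector<cv::DMatch>& good,
                                const std::vector<cv::KeyPoint>& kpPano,
                                int panoWidth, int numBins = 36)
    {
        std::vector<int> hist(numBins, 0);
        for (size_t i = 0; i < good.size(); ++i) {
            // trainIdx indexes the panorama keypoints when the panorama
            // descriptors were passed as the matcher's "train" set.
            double x = kpPano[good[i].trainIdx].pt.x;
            int bin = static_cast<int>(x / panoWidth * numBins);
            if (bin >= 0 && bin < numBins)
                hist[bin]++;
        }
        int peak = 0;
        for (int b = 1; b < numBins; ++b)
            if (hist[b] > hist[peak]) peak = b;
        return (peak + 0.5) * 360.0 / numBins;  // bin centre in degrees
    }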
This is the graph of experimental results, showing the direction angles produced by the different algorithms against the expected results. Comparing the expected and actual output, we find an 18% error with SURF and a 29% error with SIFT.
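For reference, comparing an expected and an actual heading needs wrap-around at 360°; a small sketch follows (the exact error metric behind the 18%/29% figures isn't given in the source, so the percentage normalisation here is an assumption):

    #include <cmath>

    // Smallest angular difference between two headings, in degrees.
    double angularError(double expectedDeg, double actualDeg)
    {
        double diff = std::fabs(expectedDeg - actualDeg);
        return diff > 180.0 ? 360.0 - diff : diff;  // go the short way round
    }

    // Expressed as a percentage of the maximum possible error (180 degrees);
    // this normalisation is an assumption, not the project's stated metric.
    double errorPercent(double expectedDeg, double actualDeg)
    {
        return angularError(expectedDeg, actualDeg) / 180.0 * 100.0;
    }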
In the previous slide's graph, image29 and image30 both have the same direction of view, 327°, with SIFT.
Image35 had zero features detected: the direction of view was never found with SURF, but the SIFT detector gave a result of 266°.
The reasoning is that we would then have several different visual maps, which would likely cover many more features to compare against the query image and so reduce the error.