VA3DR: Visual Autonomy through 3-D Rendering
David Tenorio HMC’17
Veronica Rivera HMC’17
Aaron Leondar OSU’17
Julio Medina HMC’18
Maddie Gaumer HMC’19
Project Advisor: Zach Dodds
Robots: iRobot Create, Nerf USB Rocket Launcher, and Parrot AR.Drone 2.0

Helping a robot find its location...
Image matching system
We needed a robust image-comparison system that lets the robot identify its best
match in our image database. To build this system, we gathered several image-matching
algorithms and grouped them into two families: color and geometry.
These algorithms were implemented using OpenCV 3.0 and SciPy.
The problem: how can a robot know where it is with respect to its 3-D environment?
The goal: autonomous, vision-based robotic movement.
The matching plan
➔ Apply color algorithms
➔ Keep the images that perform well enough (the "winners")
➔ Apply geometry algorithms to the winning images
➔ Overall winner = best match

Geometry Algorithms
➔ Use the ORB algorithm to identify geometric features in pictures (shown below with
dots) and find similarities between these features (shown with red lines)
➔ E.g., image homography, ORB visual distances
Once the match is found:
➔ Each image in the environment has a set of coordinates
➔ The coordinates correspond to a global position in the environment

2-D screenshots from the model / Location of the "camera" within the model!

...now the robot knows its location!
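This last step can be sketched as a simple lookup, assuming each database image is stored with the pose from which it was rendered; the image names and coordinates below are made up for illustration:

```python
# Hypothetical database: each rendered image maps to the (x, y, heading)
# pose of the model "camera" that produced it (values are invented)
poses = {
    "wall_north_01.png": (0.0, 3.5, 90.0),
    "corner_east_02.png": (4.2, 1.0, 180.0),
}

def locate(best_match):
    # The pose stored with the best-matching image becomes the
    # robot's position estimate in the global frame
    return poses[best_match]

x, y, heading = locate("corner_east_02.png")
```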
Finding a match for this image... (candidate matches ranked Bad / Better / Good)
Odometry: same position / Actual: different positions
Odometry: different positions / Actual: same position

While following a path, there is a disconnect between each robot's odometry (where it thinks it is) and its actual position.

Image Matching System
Color Algorithms
➔ Use histograms of color distribution and pixel-by-pixel color comparisons
➔ The histogram of the query image is compared to the histograms of the images in the
database using four different comparison methods
The drone flying in the room...
The best match!
What the drone sees
Recognizing its position, the drone rotates...
Recalculates its position...
...and successfully lands in the desired location!
Autonomously navigating Nerf tank!
Acknowledgements: The team would like to thank the National Science Foundation for the opportunity to embark on this project,
the Harvey Mudd Computer Science Department, J. Philipp de Graaff for the PS-Drone API, Adrian Rosebrock for inspiration and
starter code for our image matching work, and our tireless advisor, Professor Zachary Dodds, for driving the project forward.