Estimation of Terrain Gradient Conditions & Obstacle Detection Using a Monocular Vision-based System

estimation of terrain gradient conditions &
obstacle detection using a monocular
vision-based system
Final Demonstration
Connor Luke Goddard (clg11)
Department of Computer Science, Aberystwyth University
Background
1
background
Figure 1: Tracked features from MER [2].
The ability of an autonomous robot to
navigate from one location to another
in a manner that is both safe and
objective-driven is vital to the survivability
of the machine and to the success
of the mission it is undertaking.
Vision-based obstacle detection has enjoyed a high level of research
focus over recent years. Less work has concentrated on providing a
means of reliably detecting changes in terrain slope through the
exploitation of observed changes in motion.
2
related work
[1] A Robust Visual Odometry and Precipice Detection System Using
Consumer-grade Monocular Vision, J. Campbell et al., IEEE, 2005
[2] Obstacle Detection using Optical Flow, T. Low and G. Wyeth, School
of Information Technology and Electrical Engineering, University of
Queensland, 2011
[3] Appearance-based Obstacle Detection with Monocular Color
Vision, I. Ulrich and I. Nourbakhsh, AAAI, 2000
Figure 2: Left: Precipice detection [1], Right: Obstacle colour detection [3]
3
project overview
Focus
Investigation into a system capable of utilising a single,
forward-facing colour camera to provide an estimate of current
terrain gradient conditions, obstacle location & characteristics, and
robot ego-motion.
Potential Metrics For Observation
• Changes in terrain gradient (i.e. slopes).
• Location & characteristics of positive and negative obstacles (i.e.
rock or pit respectively).
• Current robot speed of travel.
• Changes in robot orientation.
4
Working Hypothesis
5
working hypothesis
“From images captured using a non-calibrated monocular
camera system, analysis of optical flow vectors extracted using
localised appearance-based dense optical flow techniques can
provide certain estimates into the current condition of terrain
gradient, the location and characteristics of obstacles, and robot
ego-motion.”
6
motion parallax
The approach focusses on the effects of motion parallax:
objects/features that are at a greater distance from the camera
appear to move less from frame to frame than those that are closer.
Figure 3: Typical example of motion parallax [4].
7
inference of terrain gradient & obstacle detection
Key Aspects
1. The exclusive use of appearance-based template matching
techniques to provide a localised variant of dense optical flow
analysis, rather than sparse optical flow techniques that rely
upon a high number of available image features.
2. The use of a formalised model to represent detected changes in
vertical displacement as part of efforts to estimate changes in
terrain gradient, in addition to the location and characteristics of
potential obstacles.
8
vertical displacement model
Provides a mechanism by which to evaluate the prediction:
“Features captured towards the bottom of an image will show greater
displacement between subsequent frames than those features captured
towards the top of the image.”

Figure 4: Example of a “perfect” vertical displacement model, plotting
Vertical Displacement (px) against Image Row (Height) (px) and
demonstrating the expected general trend between image row and
vertical displacement.
9
inference of terrain slope & potential obstacles
From observing discrepancies between the current model and
“baseline” model, it should be possible to infer the presence of
potential obstacles:
Figure 5: Vertical displacement model (Vertical Displacement (px) vs.
Image Row (Height) (px)) indicating the potential presence of a positive
obstacle: the recorded vertical displacement for row #160 matches the
model vertical displacement for row #230 (model vertical displacement
shown alongside the last-recorded vertical displacement).
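As a hedged illustration of this comparison step (not code from the project itself), the sketch below flags rows whose last-recorded displacement deviates from the calibrated baseline model by more than an assumed tolerance; the returned row indices are candidates for an obstacle or a slope change.

import numpy as np

def find_model_discrepancies(baseline_model, current_displacements, threshold_px=5.0):
    # Both arguments are arrays indexed by image row, holding the modelled and the
    # last-recorded vertical displacement (px) for that row. threshold_px is an
    # assumed tolerance, not a value taken from the investigation.
    baseline = np.asarray(baseline_model, dtype=np.float32)
    current = np.asarray(current_displacements, dtype=np.float32)
    deviation = current - baseline
    # Rows deviating noticeably from the flat-terrain model (in either direction)
    # are candidates for a positive/negative obstacle or a change in slope.
    return np.nonzero(np.abs(deviation) > threshold_px)[0]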
10
inference of terrain slope & potential obstacles
Differences in model discrepancy behaviour should provide enough
information to distinguish between a potential obstacle and a change
in terrain slope:
Figure 6: Example of observed differences between detection of obstacles and change in terrain
slope.
11
robot rotation
Two types of observed motion:
• Translation (Vertical component of optical flow vectors)
• Rotation (Horizontal component of optical flow vectors)
Figure 7: Diagram demonstrating the expected difference in behaviour
between the separated vertical (translation) and horizontal (rotation)
components of observed optical flow vectors calculated from
forward-turning motion.
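A minimal sketch of this decomposition, assuming the optical flow vectors are already available as (dx, dy) pixel offsets: the vertical components are attributed to forward translation and the horizontal components to rotation, following the model in Figure 7.

import numpy as np

def split_translation_rotation(flow_vectors):
    # flow_vectors: array of shape (N, 2) holding one (dx, dy) offset per tracked patch.
    flow = np.asarray(flow_vectors, dtype=np.float32)
    rotation_component = flow[:, 0]      # horizontal motion -> rotation
    translation_component = flow[:, 1]   # vertical motion -> translation
    return translation_component.mean(), rotation_component.mean()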
12
robot speed
By recording multiple “calibration” models at different speeds, it
should be possible to later infer the current speed using the
standard speed-distance-time equation.
The optimum point at which to infer speed is the one presenting the
lowest risk of interference from moving objects while recording and
comparing the current displacement with the model.

Figure 8: Diagram indicating examples of alternative vertical displacement
models (Vertical Displacement (px) vs. Image Row (Height) (px)) calculated
while travelling at different constant speeds (5 mph, 10 mph and 15 mph).
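One plausible way to use such calibration models (an assumption for illustration, not something implemented in the project) is to interpolate between the displacement values recorded at known constant speeds for a single reference row:

import numpy as np

def estimate_speed(observed_px, reference_row, calibration_models):
    # calibration_models: dict mapping a known constant speed (e.g. mph) to a
    # displacement model array indexed by image row.
    speeds = sorted(calibration_models)
    displacements = [calibration_models[s][reference_row] for s in speeds]
    # Assumes faster travel always yields a larger per-frame displacement, so the
    # x-coordinates passed to np.interp are monotonically increasing.
    return float(np.interp(observed_px, displacements, speeds))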
13
investigation aims
Primary Aims
1. Establish which of the following appearance-based template
matching similarity measures:
• Euclidean Distance
• Correlation Coefficient
• Histogram Correlation
• Histogram Chi-Square
best supports the predicted positive correlation between the
vertical position of features within an image, and the level of
vertical displacement demonstrated.
2. Use the generated model of vertical displacement to identify
potential obstacles and changes in terrain gradient based upon
the predicted ‘comparative trend’ behaviour.
14
investigation aims
Secondary Aims
1. Further extend the capabilities of the vertical displacement model
to provide estimates of the current speed of travel demonstrated
by the robot.
2. Add support for providing an estimate of the change in robot
orientation.
15
Experiment Methods
16
gathering datasets
No existing imagery datasets were found to be suitable, so new
sets had to be collected.
Figure 9: Camera-rig setup used to capture the experiment datasets from front and back profiles.
17
gathering datasets
Images were taken at a number of locations, but four were eventually
selected based upon their variety in terms of terrain texture and
lighting conditions.
Figure 10: Examples of the first frame captured from each of the four location datasets.
18
experiment methods
Over time, the investigation focus changed from exploring a series of
reasonably “generalised” research aims to concentrating
almost exclusively on one small, but very important, aspect:
ensuring that accurate results for appearance-based motion
displacement could continue to be achieved across terrain lacking
any significant features and/or demonstrating highly repetitive
patterns.
Three Experiments
1. Template Matching - Multiple Small Patches
2. Template Matching - Full-width Patches (Non-Scaled)
3. Template Matching - Full-width Patches (Geometrically Scaled)
19
experiment one - multiple small patches
Stage One
Import two consecutive images, and convert from RGB (BGR in
OpenCV) colour space to HSV.
The ’V’ channel is then removed in order to improve robustness to
lighting changes between frames.
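A minimal sketch of this stage using OpenCV (file names are illustrative assumptions):

import cv2

def load_hs_frame(path):
    # Load an image (OpenCV returns BGR), convert to HSV and drop the 'V'
    # channel to reduce sensitivity to lighting changes between frames.
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, :2]    # keep only the H and S channels

frame1 = load_hs_frame("calibration_000.png")   # hypothetical file names
frame2 = load_hs_frame("calibration_001.png")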
20
experiment one - multiple small patches
Stage Two
Extract a percentage-width region of interest (ROI) from the centre of
the first image.
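A sketch of the ROI extraction; roi_fraction is an assumed example value rather than the percentage used in the investigation:

def extract_centre_roi(image, roi_fraction=0.5):
    # Take a roi_fraction-width column from the centre of the frame.
    height, width = image.shape[:2]
    roi_width = int(width * roi_fraction)
    start = (width - roi_width) // 2
    return image[:, start:start + roi_width], start

roi1, roi_x = extract_centre_roi(frame1)   # frame1 from the Stage One sketch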
21
region-of-interest
Why do we need to extract a ROI?
Focus of expansion: objects in images do not actually move in one
dimension (i.e. straight down the image); they radiate outwards from
the focus of expansion. This effect is minimised towards the centre of the image.
Figure 11: Diagram indicating the perceived effects caused by Focus of Expansion. Courtesy: [5]
22
experiment one - multiple small patches
Stage Three
Extract patches of a fixed size around each pixel within the extracted
ROI.
Figure 12: Simplified example of patch extraction within ROI.
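A sketch of per-pixel patch extraction within the ROI (patch_size is an assumed example value):

def extract_patch(image, row, col, patch_size=15):
    # Return a fixed-size patch centred on (row, col), or None if the patch
    # would fall outside the image bounds.
    half = patch_size // 2
    if (row - half < 0 or col - half < 0 or
            row + half >= image.shape[0] or col + half >= image.shape[1]):
        return None
    return image[row - half:row + half + 1, col - half:col + half + 1]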
23
experiment one - multiple small patches
Stage Four
For each patch extracted from image one, move down through a
localised search window (column) in image two searching for the
best match against the template patch.
Figure 13: Example of “best match” search within local column.
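Continuing the sketch (reusing extract_patch from the Stage Three sketch), the search walks down a local column in the second frame and scores every candidate position; max_search is an assumed limit on the search-window length, and SSD is used here purely as a placeholder score:

import numpy as np

def search_down_column(template, next_frame, row, col, max_search=100):
    scores = []
    for offset in range(max_search):
        candidate = extract_patch(next_frame, row + offset, col,
                                  patch_size=template.shape[0])
        if candidate is None:            # ran off the bottom of the image
            break
        ssd = np.sum((template.astype(np.float32) -
                      candidate.astype(np.float32)) ** 2)
        scores.append((offset, ssd))     # sum-of-squared-differences score
    return scores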
24
experiment one - multiple small patches
Stage Five
Identify the patch within the localised search window that provides
the “best match” according to the chosen similarity measure (e.g.
Euclidean distance, SSD or correlation coefficient).
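The similarity measure can be swapped in and out. The sketches below show plausible forms of the measures named in the investigation aims (histograms are computed over a single channel purely for illustration); for distance-style measures the best match is the minimum score, for correlation-style measures it is the maximum.

import cv2
import numpy as np

def euclidean_distance(a, b):
    return float(np.sqrt(np.sum((a.astype(np.float32) - b.astype(np.float32)) ** 2)))

def correlation_coefficient(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def histogram_correlation(a, b, bins=32):
    ha = cv2.calcHist([a], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([b], [0], None, [bins], [0, 256])
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

def histogram_chi_square(a, b, bins=32):
    ha = cv2.calcHist([a], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([b], [0], None, [bins], [0, 256])
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CHISQR)

def best_match_offset(scores, lower_is_better=True):
    # scores: list of (offset, score) pairs from the Stage Four sketch.
    if not scores:
        return None
    pick = min if lower_is_better else max
    offset, _ = pick(scores, key=lambda pair: pair[1])
    return offset        # vertical displacement (px) for this template patch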
25
experiment one - multiple small patches
Stage Six
Average all measured displacements for each pixel along a given row.
Outliers are removed by ignoring any displacements that lie outside
of (2 x Standard Deviation) of the mean.
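A sketch of the outlier rejection and averaging for one row:

import numpy as np

def average_row_displacement(displacements):
    # displacements: per-pixel vertical displacements measured along one row.
    d = np.asarray(displacements, dtype=np.float32)
    if d.size == 0:
        return None
    mean, std = d.mean(), d.std()
    inliers = d[np.abs(d - mean) <= 2 * std]   # drop values beyond 2 standard deviations
    return float(inliers.mean()) if inliers.size else float(mean)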
26
experiment one - multiple small patches
Repeat Stages 1-6
Repeat stages 1-6 for an entire collection of “calibration” images
taken of flat, unobstructed terrain.
27
experiment one - multiple small patches
Stage Seven
Plot the average displacement for each ROI row, calculated from the
displacements recorded over all calibration images.
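A sketch of the plotting step, assuming matplotlib and a mapping from image row to the mean displacement accumulated over all calibration images:

import matplotlib.pyplot as plt

def plot_displacement_model(row_to_mean_displacement):
    rows = sorted(row_to_mean_displacement)
    means = [row_to_mean_displacement[r] for r in rows]
    plt.plot(rows, means)
    plt.xlabel("Image Row (Height) (px)")
    plt.ylabel("Vertical Displacement (px)")
    plt.title("Calibrated vertical displacement model")
    plt.show()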
28
experiment two - full-width patches (non-scaled)
Focus
Establishing whether adopting fewer, larger patches provided
better appearance-based matching accuracy than using many more,
but critically much smaller, overlapping patches.
While the underlying method remained the same as in the first
experiment, some key changes were made:
• Extracting a central region of interest relative to the ground plane,
as opposed to extracting a fixed-width column from the image
plane.
• Moving away from multiple overlapping small patches, in favour of
adopting a single, full-width patch to represent a single row in the
image.
29
exhaustive vs. non-exhaustive search
An issue was discovered whereby non-exhaustive searching
caused erroneous results. All tests in experiments two and three
were therefore conducted using both exhaustive and non-exhaustive
approaches.
Figure 14: Example of potential early termination of a non-exhaustive
search: the search may stop at a local minimum of the Euclidean distance
caused by noise, rather than at the global minimum which represents the
true best match.
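The difference between the two strategies can be sketched over a column of match scores (lower is better, e.g. Euclidean distance): the non-exhaustive variant stops at the first local minimum it meets and can therefore be trapped by noise, whereas the exhaustive variant always scans the full window.

import numpy as np

def exhaustive_best(scores):
    return int(np.argmin(scores))               # global minimum over the window

def non_exhaustive_best(scores):
    for i in range(1, len(scores) - 1):
        if scores[i] <= scores[i - 1] and scores[i] <= scores[i + 1]:
            return i                             # may only be a noisy local minimum
    return int(np.argmin(scores))                # fall back to a full scan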
30
perspective distortion calibration tool
To extract a region-of-interest along the ground plane, the system
would need to take account of perspective distortion. Therefore, a
simple calibration tool was implemented, enabling users to define
the required region of interest within an image.
Figure 15: The perspective distortion calibration tool, used to define the
required region of interest within an image.
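One plausible way (an assumption, not the tool's documented behaviour) to use the calibration output is to interpolate the ground-plane search-window width for any image row from two user-defined reference rows:

def calibrated_width(row, top_row, top_width, bottom_row, bottom_width):
    # Linearly interpolate the perspective-corrected width between the two
    # calibrated reference rows defined with the tool.
    t = (row - top_row) / float(bottom_row - top_row)
    return top_width + t * (bottom_width - top_width)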
31
experiment three - full-width patches (scaled)
Focus
Add geometric scaling of the template patch in order to account for
objects appearing larger as they approach the camera.
Original Template Patch | Re-scaled Template Patch (Geometric
Transformation of Pixel Coordinates) | Re-scaled Template Patch (Linear
Interpolation)
Figure 16: Comparison between various approaches to scaling the original
template patch image. In the case of linear interpolation, additional
information has been added to the image as a result of “estimating” the
colour that lies between two original pixels.
32
experiment three - full-width patches (scaled)
6 Stage Process
1. Obtain the width and height of the current template patch.
2. Obtain the calibrated width of the search window with respect to the
current image row.
3. Calculate the independent scale factor for the width and height between
the template patch and the search window.
4. Treating each pixel as a 2D coordinate in geometric space, scale the
position of each pixel coordinate within the template patch relative to the
position of the centre coordinate.
5. For each scaled pixel coordinate, extract the value of the pixel at the
corresponding position within the search window and add it to a temporary
“image” data structure.
6. Perform template matching between the original template patch, and the
new “extracted” temporary image.
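A hedged sketch of the six-stage process above: for each template pixel, the geometrically scaled coordinate is sampled directly from the search window (no interpolation), building a temporary image the same size as the template for the final comparison.

import numpy as np

def extract_scaled_candidate(template, search_window):
    t_h, t_w = template.shape[:2]                          # 1. template dimensions
    s_h, s_w = search_window.shape[:2]                     # 2. calibrated window dimensions
    scale_y, scale_x = s_h / float(t_h), s_w / float(t_w)  # 3. independent scale factors
    t_cy, t_cx = (t_h - 1) / 2.0, (t_w - 1) / 2.0
    s_cy, s_cx = (s_h - 1) / 2.0, (s_w - 1) / 2.0
    candidate = np.zeros_like(template)
    for y in range(t_h):
        for x in range(t_w):
            # 4. scale this pixel coordinate about the patch centre
            sy = int(round(s_cy + (y - t_cy) * scale_y))
            sx = int(round(s_cx + (x - t_cx) * scale_x))
            sy = min(max(sy, 0), s_h - 1)
            sx = min(max(sx, 0), s_w - 1)
            # 5. sample the corresponding search-window pixel into the temporary image
            candidate[y, x] = search_window[sy, sx]
    return candidate   # 6. template-match the original patch against this image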
33
experiment three - full-width patches (scaled)
Testing of Scaling Approach
A simple test was devised to confirm geometric scaling worked as
expected under “perfect” conditions.
Figure 17: Results of the verification test for the approach towards geometric scaling of template
patch pixel coordinates.
34
additional experimental work
Some additional experimental work was conducted as part of the larger
project investigation, but it did not actively contribute to the final
investigation results.
• Investigation of the Campbell et al. [1] approach to feature tracking to provide
an optical flow field (based heavily upon the C# implementation provided by Dr.
Rainer Hessmer (Source))
• Investigation of the edge-based template matching discussed by Perveen et
al. [3]
Figure 18: Left: Example of feature tracking to establish optical flow field. Right: Edge-based
template matching using Canny edge detector.
35
Investigation Results
View Results (Courtesy: nbviewer)
36
results
The results can be described as generally inconclusive.
However, from the results obtained, we have learned:
1. Under certain terrain conditions, it may be possible to
establish a relationship between row height in an image and
average downwards pixel displacement.
2. The performance of each appearance-based similarity measure
can vary widely, based upon factors including:
• Terrain texture (e.g. highly patterned, isotropic or homogeneous)
• Patch size (a balance between reducing “noise” and representing
true displacement behaviour)
• Exhaustive vs. non-exhaustive search
• Scaling vs. non-scaling
37
results: approach comparison - ‘living room rug’ dataset
Figure 19: Comparison of results for the ‘Living room rug’ dataset using an exhaustive search and
100px patch size. Small patches (Graph 1), full-width patches (Graph 2) and geometrically scaled
full-width patches (Graph 3).
38
results: approach comparison - ‘brick-paved road’ dataset
Figure 20: Comparison of results for the ‘Brick-paved road’ dataset using an exhaustive search
and 100px patch size. Small patches (Graph 1), full-width patches (Graph 2) and geometrically
scaled full-width patches (Graph 3).
39
results: exhaustive vs. non-exhaustive search
Non-Exhaustive | Exhaustive
Figure 21: Comparison between non-exhaustive and exhaustive search performance using
geometrically scaled template patches and the ‘Living room rug’ dataset.
40
results: patch size comparison - ‘slate footpath’ dataset
Figure 22: Comparison of results for the ‘Slate footpath’ dataset using geometrically-scaled
patches of 50px, 100px and 200px heights under an exhaustive search approach.
41
Conclusion
42
future work
Great potential for future work, most likely focussing on areas
including:
• Further experimentation to establish the most appropriate
combination of similarity measure, search approach and patch
size for a given terrain type.
• Development of a system to identify “significant” changes in the
vertical displacement model that are potentially indicative of an
obstacle or a change in terrain gradient.
• Development of a system to estimate the change in robot heading.
• Development of a system to estimate the current robot speed.
43
conclusion summary
The project has laid the groundwork upon which further development
can subsequently be conducted.
While not all of the original aims were accomplished, research
efforts had to be focussed on confirming the underlying hypothesis,
which, as a general concept, has been proven.
No single approach or solution will suffice for all terrain
conditions.
44
Questions?
Project Blog
Slide Design: Matthias Vogelgesang - (Github)
45
bibliography
[1] Jason Campbell, Rahul Sukthankar, Illah Nourbakhsh, and Aroon Pahwa. A robust
visual odometry and precipice detection system using consumer-grade monocular
vision. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE
International Conference on, pages 3421–3427. IEEE, 2005.
[2] Mark Maimone, Yang Cheng, and Larry Matthies. Two years of visual odometry on
the mars exploration rovers. Journal of Field Robotics, 24(3):169–186, 2007.
[3] Nazil Perveen, Darshan Kumar, and Ishan Bhardwaj. An Overview on Template
Matching Methodologies and its Applications. IJRCCT, 2(10):988–995, 2013.
[4] Jonathan Pillow. Motion Perception 2.
http://homepage.psy.utexas.edu/homepage/faculty/pillow/
courses/perception09/slides/Lec13A_Motion_part2.pdf, October
2009.
[5] Akshay Roongta. Depth Cue Theory.
http://akshayroongta.in/notes/depth-cue-theory/, October 2013.
46