Stixel based scene understanding for autonomous vehicles

Zygfryd Wieszok
Supervisor: Dr Nabil Aouf
zygfryd.wieszok@cranfield.ac.uk
www.cranfield.ac.uk
Introduction
Autonomous vehicles are a future means of transport. Two key tasks in autonomous driving are scene understanding and obstacle detection. This study presents a scene understanding method based on Stixel World estimation. The method determines the free space ahead of the car and detects obstacles, which are then classified into two meaningful classes.

Methodology
The method is a pipeline of five stages: (1) ground plane estimation, (2) stixel distance estimation, (3) stixel height estimation, (4) stixel semantic segmentation, and (5) stixel visualisation. Each stage is described in the numbered sections below.
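All five stages operate on a simple per-column data structure. As a minimal sketch (the field names are illustrative assumptions, not taken from the poster), each stixel might carry:

```python
from dataclasses import dataclass

@dataclass
class Stixel:
    """One column-aligned obstacle segment passed between the pipeline stages."""
    column: int        # image column (u coordinate)
    base_row: int      # bottom of the obstacle, where it meets the ground plane
    top_row: int       # top of the obstacle, set by the height estimation stage
    disparity: float   # representative disparity, giving the stixel distance
    label: str = ""    # semantic class assigned in the segmentation stage
```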
1. Ground plane estimation
1.1 Online learned colour road model
A colour model of the road surface is learned online and used to support the ground plane estimation.
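The poster does not spell out how the colour road model is built. One common way to learn such a model online, sketched here as an assumption rather than as the poster's actual method, is to accumulate a colour histogram over an image region assumed to be road and then score every pixel against it:

```python
import numpy as np

def learn_road_colour_model(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Learn a normalised RGB histogram from a patch assumed to show road
    (here: a strip at the bottom centre of the image). image is HxWx3 uint8."""
    h, w, _ = image.shape
    patch = image[int(0.8 * h):, int(0.3 * w):int(0.7 * w)]            # assumed road region
    idx = np.minimum(patch.reshape(-1, 3) // (256 // bins), bins - 1)  # quantised colours
    hist = np.zeros((bins, bins, bins), dtype=np.float64)
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return hist / max(hist.sum(), 1.0)

def road_likelihood(image: np.ndarray, model: np.ndarray) -> np.ndarray:
    """Per-pixel likelihood of belonging to the road under the learned model."""
    bins = model.shape[0]
    q = np.minimum(image // (256 // bins), bins - 1)
    return model[q[..., 0], q[..., 1], q[..., 2]]
```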
1.2 Improved ground line estimation
The ground line is estimated from the V-disparity image. Replacing the least-squares (LS) line fit with an iteratively reweighted least squares (IRLS) fit reduces the estimation error:

                     Error (L1 norm)   Error (L2 norm)
LS approximation          5.482             8.228
IRLS approximation        3.847             5.708
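The fitting code itself is not given on the poster. A minimal IRLS sketch for fitting the ground line d ≈ a·v + b to candidate (row, disparity) points taken from the V-disparity image could look as follows; the specific reweighting rule is an assumption, one of several common robust choices:

```python
import numpy as np

def irls_ground_line(v: np.ndarray, d: np.ndarray, n_iter: int = 10, eps: float = 1.0):
    """Robustly fit d ~ a*v + b. v: row indices, d: candidate ground disparities,
    both taken from the V-disparity image (e.g. the strongest bin per row)."""
    v = v.astype(float)
    d = d.astype(float)
    A = np.stack([v, np.ones_like(v)], axis=1)
    w = np.ones_like(v)                                  # uniform weights: plain LS fit
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        (a, b), *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * d, rcond=None)
        residual = np.abs(d - (a * v + b))
        w = 1.0 / np.maximum(residual, eps)              # down-weight outliers (L1-like)
    return a, b
```

With uniform weights the first iteration reproduces the plain LS fit; the reweighting is what provides the robustness reflected in the table above.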
2. Improved stixel distance estimation
The improved estimation roughly halves the error compared with the original algorithm:

                     Error (L1 norm)   Error (L2 norm)
Original paper            3.847             5.708
This paper                1.770             2.600
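Given the ground line and the disparity map, each stixel's distance follows from the standard stereo relation Z = f·B/d. The sketch below assumes one representative disparity per column (for example the median over the stixel), which is a choice not stated on the poster:

```python
import numpy as np

def stixel_distances(base_disparity: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert one representative disparity per stixel (pixels) into a metric
    distance in metres via Z = f*B/d. Invalid or zero disparities map to inf."""
    d = np.asarray(base_disparity, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z
```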
3. Improved stixel height estimation
The stixel height is estimated with a joint membership function that combines colour and disparity information.
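The poster states only that colour and disparity are combined in a joint membership. One plausible reading, sketched here with assumed Gaussian terms and an assumed threshold, is to scan each column upwards from the stixel base until the combined membership drops:

```python
import numpy as np

def estimate_stixel_top(column_rgb: np.ndarray, column_disp: np.ndarray,
                        base_row: int, sigma_d: float = 1.0, sigma_c: float = 20.0,
                        threshold: float = 0.3) -> int:
    """Scan one image column upwards from the stixel base and return the top row.
    Membership of a row is the product of a disparity term (similarity to the
    base disparity) and a colour term (similarity to the colour at the base)."""
    base_disp = float(column_disp[base_row])
    base_col = column_rgb[base_row].astype(float)
    top = base_row
    for v in range(base_row, -1, -1):                     # move up the column
        m_disp = np.exp(-0.5 * ((column_disp[v] - base_disp) / sigma_d) ** 2)
        m_col = np.exp(-0.5 * (np.linalg.norm(column_rgb[v] - base_col) / sigma_c) ** 2)
        if m_disp * m_col < threshold:                     # membership dropped: obstacle ends
            break
        top = v
    return top
```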
4. Scene understanding
Each detected obstacle stixel is assigned to one of two classes: vegetation and infrastructure, or car and pedestrian.
                 Vegetation and infrastructure    Car and pedestrian
                 Recall      Precision            Recall      Precision
Pixel level      81.50%      89.57%               67.68%      51.79%
Stixel level     88.25%      97.47%               87.80%      58.36%
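The recall and precision values above follow the standard definitions. For reference, a minimal sketch of how they can be computed from predicted and ground-truth label maps (the integer class encoding is an assumption):

```python
import numpy as np

def recall_precision(pred: np.ndarray, truth: np.ndarray, cls: int):
    """Recall and precision for one class from integer label maps. The same
    formula applies at stixel level if pred/truth hold one label per stixel."""
    tp = np.sum((pred == cls) & (truth == cls))
    fn = np.sum((pred != cls) & (truth == cls))
    fp = np.sum((pred == cls) & (truth != cls))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision
```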
5. 3D visualisation
The stixels are rendered in 3D alongside the laser measurements collected with the Velodyne.
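To place the stixels next to the Velodyne points, their image coordinates can be back-projected with the usual pinhole/stereo relations. The function below is a sketch, and the intrinsics (f, cx, cy, baseline) would come from the dataset calibration:

```python
import numpy as np

def stixel_to_3d(u: int, v_top: int, v_base: int, disparity: float,
                 f: float, cx: float, cy: float, baseline: float) -> np.ndarray:
    """Back-project a stixel's top and base points into camera coordinates so
    the stixel can be drawn next to the laser points. Pinhole stereo model."""
    z = f * baseline / disparity                 # distance in metres

    def backproject(v: float) -> np.ndarray:
        x = (u - cx) * z / f
        y = (v - cy) * z / f
        return np.array([x, y, z])

    return np.stack([backproject(v_top), backproject(v_base)])   # 2x3: top and base
```

The laser points themselves additionally need the Velodyne-to-camera extrinsic transform before both can be shown in the same coordinate frame.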
Conclusions
• The ground plane error is reduced by a factor of two compared with the original algorithm.
• Fewer obstacles are mistakenly detected.
• Height estimation is improved with an innovative joint membership based on colour and disparity.
• The scene understanding approach, with good overall accuracy and high per-class recall, provides a usable scene segmentation for high-level tasks.
• 3D visualisation shows the stixels in the context of the laser measurements collected with the Velodyne.
MSc Computational and Software Techniques in Engineering 2016 (Digital Signal and Image Processing Option)