Fisheye/Omnidirectional View in
Autonomous Driving IV
Yu Huang
Outline
• FisheyeMultiNet: Real-time Multi-task Learning Architecture for
Surround-view Automated Parking System
• Generalized Object Detection on Fisheye Cameras for Autonomous
Driving: Dataset, Representations and Baseline
• SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for
Autonomous Driving
• Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors
for ADAS
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
• Automated parking is a low-speed maneuvering scenario that is quite
unstructured and complex, requiring full 360° near-field sensing around the
vehicle.
• This paper discusses the design and implementation of an automated parking
system from the perspective of camera-based deep learning algorithms.
• It provides a holistic overview of an industrial system, covering the embedded
system, the use cases and the deep learning architecture.
• It demonstrates a real-time multi-task deep learning network, FisheyeMultiNet,
which detects all the objects necessary for parking on a low-power embedded
system.
• FisheyeMultiNet runs at 15 fps across 4 cameras and has three tasks, namely object
detection, semantic segmentation and soiling detection.
• A partial dataset of 5,000 images containing semantic segmentation and
bounding box detection ground truth is released via the WoodScape project.
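The bullets above describe a shared-encoder, three-head design. A minimal structural sketch of that pattern, in plain NumPy with toy stand-in layers (all names, shapes and operations here are illustrative assumptions, not the paper's actual network):

```python
import numpy as np

# Illustrative sketch only: the "layers" are toy stand-ins, not the
# actual FisheyeMultiNet architecture.

def shared_encoder(image):
    """Stand-in for the shared CNN backbone: downsample 4x, pool channels."""
    return image[::4, ::4, :].mean(axis=2, keepdims=True)  # (H/4, W/4, 1)

def detection_head(feats):
    """One objectness score per feature cell (YOLO-grid-style stand-in)."""
    return feats.squeeze(-1)

def segmentation_head(feats):
    """Per-cell logits for 3 assumed classes (e.g. road, curb, marking)."""
    return np.repeat(feats, 3, axis=2)

def soiling_head(feats):
    """Single image-level soiling score (clean vs. soiled lens)."""
    return float(feats.mean())

def fisheye_multinet(image):
    feats = shared_encoder(image)  # computed once, shared by all three tasks
    return {
        "detection": detection_head(feats),
        "segmentation": segmentation_head(feats),
        "soiling": soiling_head(feats),
    }

out = fisheye_multinet(np.zeros((64, 64, 3)))
```

The point of the pattern is that the expensive encoder runs once per frame and only the lightweight heads are task-specific, which is what makes 15 fps over four cameras plausible on a low-power SoC.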
Classification of parking scenarios: (a) parallel backward parking, (b) perpendicular backward parking, (c)
perpendicular forward parking, (d) ambiguous parking and (e) fishbone parking with road markings.
Illustration of the FisheyeMultiNet architecture, comprising the object detection, semantic segmentation and soiling detection tasks.
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
• Object detection is a comprehensively studied problem in autonomous driving.
• However, it has been relatively less explored in the case of fisheye cameras.
• The standard bounding box fails on fisheye cameras due to the strong radial distortion,
particularly in the image's periphery.
• This work explores better representations, such as the oriented bounding box, ellipse and
generic polygon, for object detection in fisheye images.
• These representations are compared via the IoU metric against accurate instance
segmentation ground truth.
• A novel curved bounding box model with optimal properties for fisheye
distortion models is designed.
• A curvature-adaptive perimeter sampling method for obtaining polygon
vertices improves the relative mAP score by 4.9% compared to uniform sampling.
• Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%.
• The dataset of 10,000 images, along with ground truth, will be made public.
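The curvature-adaptive perimeter sampling can be sketched as follows: allocate polygon vertices in proportion to the local turning angle of the instance contour, so high-curvature regions receive more vertices than straight runs. This is an illustrative reconstruction of the idea, not the paper's exact procedure:

```python
import numpy as np

def adaptive_sample(contour, n_vertices):
    """Pick n_vertices from a closed contour (N x 2 array), weighting by the
    local turning angle so high-curvature regions get more vertices.
    Illustrative reconstruction of curvature-adaptive perimeter sampling."""
    pts = np.asarray(contour, dtype=float)
    v_in = pts - np.roll(pts, 1, axis=0)     # incoming edge at each point
    v_out = np.roll(pts, -1, axis=0) - pts   # outgoing edge at each point
    a_in = np.arctan2(v_in[:, 1], v_in[:, 0])
    a_out = np.arctan2(v_out[:, 1], v_out[:, 0])
    turn = np.abs(np.angle(np.exp(1j * (a_out - a_in))))  # wrapped to [0, pi]
    weight = turn + 1e-3   # small floor so straight runs still get sampled
    cdf = np.cumsum(weight) / weight.sum()
    targets = (np.arange(n_vertices) + 0.5) / n_vertices
    idx = np.clip(np.searchsorted(cdf, targets), 0, len(pts) - 1)
    return pts[idx]
```

On a square contour almost all of the weight sits at the four corners, so the sampled vertices cluster there; uniform perimeter sampling would instead spend most vertices on the straight edges.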
Left: Illustration of the fisheye distortion of the projection of an open cube. A 4th-degree polynomial models the radial
distortion; one can visually notice that the box deforms into a curved box. Right: the proposed Curved Bounding Box
uses a circle with an arbitrary center and radius, as illustrated. It captures the radial distortion and obtains a
better footpoint. The center of the circle can be equivalently reparameterized using the object center (x̂, ŷ).
Generic polygon representations. Left: Uniform angular sampling, where the intersection of the polygon with each
radial line is represented by one parameter per point (r). Middle: Uniform contour sampling using L2 distance. It can
be parameterized in polar coordinates using 3 parameters (r, θ, α), where α denotes the number of polygon vertices within
the sector and may be used to simplify the training. Alternatively, 2 parameters (x, y) can be used, as shown in the
figure on the right. Right: Variable-step contour sampling. The straight line at the bottom receives fewer
points than curved regions such as the wheel, so this representation maximizes the utilization of
vertices according to the local curvature.
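The uniform angular sampling on the left can be sketched as a simple polar encode/decode: one radius per fixed ray from the object center. The sector rule below (keep the farthest contour point falling on each ray) is an assumption for illustration:

```python
import math

def polar_encode(contour, center, n_rays=24):
    """Uniform angular sampling: one radius per fixed ray from the center.
    The per-sector rule (keep the farthest point) is an assumed choice."""
    cx, cy = center
    sector = 2 * math.pi / n_rays
    r = [0.0] * n_rays
    for x, y in contour:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(ang / sector) % n_rays
        r[k] = max(r[k], math.hypot(x - cx, y - cy))
    return r

def polar_decode(r, center):
    """Rebuild the polygon vertices from the per-ray radii."""
    cx, cy = center
    step = 2 * math.pi / len(r)
    return [(cx + ri * math.cos(k * step), cy + ri * math.sin(k * step))
            for k, ri in enumerate(r)]
```

With one scalar per ray the representation is compact (n_rays parameters for the whole polygon), which is exactly why the slide contrasts it with the 2-parameter (x, y) vertex encoding.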
FisheyeYOLO is an extension of YOLOv3 that can output
any of the different representations above.
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
• Four fisheye cameras with a 190° field of view cover the 360° around the vehicle.
• Due to their high radial distortion, standard algorithms do not extend easily to fisheye images.
• This work releases a synthetic version of the surround-view dataset, covering many of its
weaknesses and extending it.
• Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in the
real dataset.
• Secondly, WoodScape did not record all four cameras simultaneously, as frames were sampled
for diversity.
• This means that multi-camera algorithms could not be designed on WoodScape, which the new
dataset enables.
• The surround-view fisheye geometric projections were implemented in the CARLA simulator,
matching WoodScape's configuration, to create SynWoodScape.
• 80k images with annotations for 10+ tasks are released.
• The baseline code and supporting scripts are also released.
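A minimal sketch of the 4th-degree polynomial fisheye projection that such a simulator must implement: image radius as a polynomial of the incident angle. The coefficients and principal point below are made-up values for illustration, not a calibrated WoodScape lens:

```python
import math

# Assumed coefficients a1..a4 (pixels); a real lens needs calibration.
K = [340.0, 20.0, -5.0, 0.5]

def project(point3d, cx=640.0, cy=480.0):
    """Map a 3D point in the camera frame (z forward) to fisheye pixels:
    r(theta) = a1*theta + a2*theta^2 + a3*theta^3 + a4*theta^4."""
    x, y, z = point3d
    theta = math.atan2(math.hypot(x, y), z)  # angle off the optical axis
    r = sum(a * theta ** (i + 1) for i, a in enumerate(K))
    phi = math.atan2(y, x)                   # azimuth is preserved by the lens
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))
```

Unlike a pinhole model, theta can exceed 90° here, which is how a 190° FOV camera can image points slightly behind the sensor plane.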
Overview of the surround-view-camera-based multi-task
visual perception framework. The distance estimation task
(blue blocks) makes use of semantic guidance and dynamic
object masking from semantic/motion estimation (green
and blue-haze blocks) and camera-geometry adaptive
convolutions (orange block). Additionally, the detection
decoder features (gray blocks) are guided by the semantic
features. The encoder block (shown in the same color) is
common to all the tasks. The framework consists of
processing blocks to train the self-supervised distance
estimation (blue blocks), semantic segmentation
(green blocks), motion segmentation (blue-haze blocks)
and polygon-based fisheye object detection (gray blocks).
Surround-view geometric information is obtained by
post-processing the predicted distance maps in 3D space
(perano block). The camera tensor Ct (orange block) helps
OmniDet yield distance maps on multiple camera
viewpoints and makes the network camera-independent.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
• This paper proposes a self-calibration method that can be applied to multiple larger
field-of-view (FOV) camera models on ADAS.
• Firstly, steps such as edge detection, length thresholding and edge grouping are
performed to segregate robust line candidates from the pool of initial distorted line segments.
• A straightness cost constraint with a cross-entropy loss is imposed on the selected line
candidates, and that loss is exploited to optimize the lens-distortion parameters using
the Levenberg-Marquardt (LM) optimization approach.
• The best-fit distortion parameters are used for the undistortion of an image frame,
thereby enabling various high-end vision-based tasks on the distortion-rectified frame.
• Experimental approaches such as parameter sharing between multiple camera systems
and a model-specific empirical γ residual rectification factor are investigated.
• Quantitative comparisons between the proposed method and the traditional OpenCV
method are made on the KITTI dataset with synthetically generated distortion ranges.
• A pragmatic qualitative analysis is conducted by streamlining high-end vision-based
tasks such as object detection, localization and mapping, and auto-parking on
undistorted frames.
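A toy version of the pipeline's core loop, under strong simplifying assumptions: a one-parameter division model stands in for the paper's multi-parameter lens model, the straightness cost is a plain line-fit residual (no cross-entropy term), and a scalar Levenberg-Marquardt loop replaces the full optimization:

```python
import numpy as np

def undistort(pts, lam):
    """One-parameter division model (assumed): p_u = p_d / (1 + lam*|p_d|^2)."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts / (1.0 + lam * r2)

def residuals(lines, lam):
    """One straightness residual per edge: the smallest singular value of the
    centered undistorted points, i.e. their spread off the best-fit line."""
    res = []
    for pts in lines:
        u = undistort(pts, lam)
        u = u - u.mean(axis=0)
        res.append(np.linalg.svd(u, compute_uv=False)[-1])
    return np.array(res)

def calibrate(lines, lam0=0.0, iters=40):
    """Scalar Levenberg-Marquardt: damped Gauss-Newton steps on lam."""
    lam, mu, eps = lam0, 1e-3, 1e-7
    r = residuals(lines, lam)
    for _ in range(iters):
        J = (residuals(lines, lam + eps) - r) / eps   # finite-difference Jacobian
        step = -(J @ r) / (J @ J + mu)
        r_new = residuals(lines, lam + step)
        if (r_new ** 2).sum() < (r ** 2).sum():
            lam, r, mu = lam + step, r_new, 0.5 * mu  # accept, relax damping
        else:
            mu *= 10.0                                # reject, damp harder
    return lam
```

Given edges that are straight in the world, the recovered lam is the parameter whose undistortion makes them straight again, which is the self-supervision signal the paper exploits.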
Proposed Pipeline on ADAS workbench (a) ADAS Platform: Camera sensors setup and image acquisition
Proposed Pipeline on ADAS workbench (b) Proposed method with block schematics.
Structural anomaly induced into a
scene by heavy lens distortion
caused by wide-angle cameras with
field of view 120° < FOV < 140°.
Lens Projection Models: (a) Standard Camera Pinhole Projection Model. (b) Larger FOV Lens Orthogonal Projection Model.
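The two geometries contrasted in this figure follow the usual idealized formulas, with f the focal length and theta the ray's angle off the optical axis: pinhole r = f·tan(theta), orthogonal fisheye r = f·sin(theta). The pinhole radius diverges as theta approaches 90°, which is why near-180° FOV lenses need a fisheye model:

```python
import math

def pinhole_radius(theta, f=1.0):
    """Perspective (pinhole) projection: r grows without bound near 90 deg."""
    return f * math.tan(theta)

def orthogonal_radius(theta, f=1.0):
    """Orthogonal fisheye projection: r is bounded by f."""
    return f * math.sin(theta)
```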
Proposed Self-calibration design
Pre-processing of line candidates and
estimation of the straightness constraint.
Schematic of distortion parameter
estimation using LM optimization in normal
mode and parameter-sharing mode.
Severe distortion cases rectified
using several approaches
[28,29] and the proposed method with
and without the empirical γ
hyperparameter.
Data acquisition scenarios using
various camera models.
Auto-parking scenario on the rear fisheye camera: real-time visual SLAM pipeline on lens-distortion-rectified sensor data.
Open Source codes of trajectory prediction & behavior planningOpen Source codes of trajectory prediction & behavior planning
Open Source codes of trajectory prediction & behavior planning
 
Lidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rainLidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rain
 
Autonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucksAutonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucks
 

Recently uploaded

IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...Chandu841456
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIkoyaldeepu123
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 

Recently uploaded (20)

Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECH
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AI
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 

Fisheye/Omnidirectional View in Autonomous Driving IV

FisheyeMultiNet: Real-time Multi-task Learning Architecture for Surround-view Automated Parking System
Illustration of the FisheyeMultiNet architecture, comprising object detection, semantic segmentation and soiling detection tasks.
FisheyeMultiNet: Real-time Multi-task Learning Architecture for Surround-view Automated Parking System
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
• Object detection is a comprehensively studied problem in autonomous driving.
• However, it has been relatively less explored in the case of fisheye cameras.
• The standard bounding box fails in fisheye cameras due to the strong radial distortion, particularly in the image's periphery.
• explore better representations like the oriented bounding box, ellipse, and generic polygon for object detection in fisheye images in this work.
• use the IoU metric to compare these representations using accurate instance segmentation ground truth.
• design a novel curved bounding box model that has optimal properties for fisheye distortion models.
• also design a curvature-adaptive perimeter sampling method for obtaining polygon vertices, improving the relative mAP score by 4.9% compared to uniform sampling.
• Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%.
• The dataset, comprising 10,000 images along with ground truth, will be made public.
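The comparison of box, ellipse, curved box and polygon representations against instance masks reduces to mask IoU once each representation is rasterized; a minimal numpy sketch of that metric (the function name is illustrative):

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-Union between two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter / union) if union else 0.0
```

Each candidate representation can be rasterized to a mask of the image size and scored against the instance segmentation ground truth with this metric.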
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
Left: Illustration of the fisheye distortion of the projection of an open cube. A 4th-degree polynomial models the radial distortion; one can visually notice that the box matures into a curved box. Right: the proposed Curved Bounding Box uses a circle with an arbitrary center and radius, as illustrated. It captures the radial distortion and obtains a better footpoint. The center of the circle can be equivalently reparameterized using the object center (x̂, ŷ).
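The curved box can be made concrete with a simple parameterization; a numpy sketch that bounds the object between two arcs of concentric circles (an illustrative parameterization, not necessarily the paper's exact one; all names are hypothetical):

```python
import numpy as np

def curved_box_boundary(cx, cy, r, theta1, theta2, height, n=16):
    """Boundary of a curved bounding box: the region between two arcs of
    concentric circles centered at (cx, cy), with radii r and r + height,
    swept from polar angle theta1 to theta2."""
    t = np.linspace(theta1, theta2, n)
    # inner arc traversed forward, outer arc traversed backward,
    # so the concatenation forms a closed polygon
    inner = np.stack([cx + r * np.cos(t), cy + r * np.sin(t)], axis=1)
    outer = np.stack([cx + (r + height) * np.cos(t[::-1]),
                      cy + (r + height) * np.sin(t[::-1])], axis=1)
    return np.concatenate([inner, outer])   # shape (2n, 2)
```

As the slide notes, the circle center can equivalently be reparameterized relative to the object center.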
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
Generic Polygon Representations. Left: Uniform angular sampling, where the intersection of the polygon with each radial line is represented by one parameter per point (r). Middle: Uniform contour sampling using the L2 distance. It can be parameterized in polar coordinates using 3 parameters (r, θ, α), where α denotes the number of polygon vertices within the sector and may be used to simplify the training. Alternatively, 2 parameters (x, y) can be used, as shown in the figure on the right. Right: Variable-step contour sampling. The straight line at the bottom has fewer points than curved parts such as the wheel; this representation maximizes the utilization of vertices according to local curvature.
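Variable-step sampling can be sketched by treating the turning angle along a dense contour as a curvature proxy and spending the vertex budget where it accumulates; a hedged numpy sketch (an illustration of the idea, not the paper's exact algorithm):

```python
import numpy as np

def adaptive_sample(contour, budget):
    """Pick up to `budget` vertices from a dense closed contour,
    favouring high-curvature regions so straight runs get few points."""
    nxt = np.roll(contour, -1, axis=0)
    prv = np.roll(contour, 1, axis=0)
    # turning angle at each vertex as a local curvature proxy
    a = np.arctan2(*(nxt - contour).T[::-1])
    b = np.arctan2(*(contour - prv).T[::-1])
    turn = np.abs((a - b + np.pi) % (2 * np.pi) - np.pi)
    # cumulative curvature, sampled uniformly; small floor so
    # straight segments still receive an occasional vertex
    w = turn + 1e-3
    cdf = np.cumsum(w) / np.sum(w)
    idx = np.searchsorted(cdf, np.linspace(0, 1, budget, endpoint=False))
    return contour[np.unique(idx)]
```

On a square contour this concentrates the selected vertices at the four corners, mirroring how the wheel in the figure receives more points than the straight line.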
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
FisheyeYOLO is an extension of YOLOv3 that can output the different representations.
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
• Four fisheye cameras with a 190° field of view cover the 360° around the vehicle.
• Due to their high radial distortion, standard algorithms do not extend easily.
• This work releases a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it.
• Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in the real dataset.
• Secondly, WoodScape did not have all four cameras annotated simultaneously, in order to sample diverse frames.
• However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset.
• implemented surround-view fisheye geometric projections in the CARLA Simulator matching WoodScape's configuration and created SynWoodScape.
• release 80k images with annotations for 10+ tasks.
• also release the baseline code and supporting scripts.
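The fisheye geometry used here follows WoodScape's 4th-degree polynomial model r(θ) = a1·θ + a2·θ² + a3·θ³ + a4·θ⁴, mapping the angle off the optical axis to an image radius. A minimal sketch with made-up coefficients (real values come from the per-camera calibration files shipped with the dataset; the function name is illustrative):

```python
import numpy as np

# Illustrative coefficients a1..a4 of r(theta); real values are
# per-camera and come from the WoodScape/SynWoodScape calibration.
K = [340.0, 20.0, -4.0, 0.5]

def project_fisheye(X, Y, Z, cx=640.0, cy=480.0):
    """Project a 3D camera-frame point with the 4th-degree
    polynomial fisheye model r(theta) = sum_i a_i * theta^i."""
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle off the optical axis
    r = sum(a * theta ** (i + 1) for i, a in enumerate(K))
    phi = np.arctan2(Y, X)                  # azimuth in the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

A point on the optical axis projects to the principal point; points at larger incident angles land further out, up to the 190° FOV edge.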
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
Overview of the surround-view camera based multi-task visual perception framework. The distance estimation task (blue block) makes use of semantic guidance and dynamic object masking from semantic/motion estimation (green and blue haze blocks) and camera-geometry adaptive convolutions (orange block). Additionally, the detection decoder features (gray block) are guided with the semantic features. The encoder block (shown in the same color) is common to all the tasks. The framework consists of processing blocks to train the self-supervised distance estimation (blue blocks), semantic segmentation (green blocks), motion segmentation (blue haze blocks), and polygon-based fisheye object detection (gray blocks). Surround-view geometric information is obtained by post-processing the predicted distance maps in 3D space (perano block). The camera tensor Ct (orange block) helps OmniDet yield distance maps on multiple camera viewpoints and makes the network camera-independent.
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
• This paper proposes a self-calibration method that can be applied to multiple larger field-of-view (FOV) camera models on ADAS.
• Firstly, steps such as edge detection, length thresholding, and edge grouping are performed to segregate robust line candidates from the pool of initial distorted line segments.
• A straightness cost constraint with a cross-entropy loss is imposed on the selected line candidates, and that loss is exploited to optimize the lens-distortion parameters using the Levenberg–Marquardt (LM) optimization approach.
• The best-fit distortion parameters are used for the undistortion of an image frame, thereby enabling various high-end vision-based tasks on the distortion-rectified frame.
• Experimental approaches such as parameter sharing between multiple camera systems and a model-specific empirical γ-residual rectification factor are investigated.
• Quantitative comparisons between the proposed method and the traditional OpenCV method are conducted on the KITTI dataset with synthetically generated distortion ranges.
• A pragmatic qualitative analysis is conducted by streamlining high-end vision-based tasks such as object detection, localization and mapping, and auto-parking on undistorted frames.
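The optimization step can be sketched with a division distortion model and a plain least-squares straightness residual standing in for the paper's cross-entropy formulation, using SciPy's Levenberg–Marquardt solver (all names and the lens model are stand-ins, not the paper's exact implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, k1, k2):
    """Division-model undistortion (a common stand-in; the paper's
    exact larger-FOV lens model may differ)."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)
    return pts / (1.0 + k1 * r2 + k2 * r2 ** 2)

def straightness_residuals(k, lines):
    """Perpendicular distances of each undistorted line candidate's
    points to its best-fit line: zero iff every candidate is straight."""
    res = []
    for pts in lines:
        u = undistort(pts, *k)
        c = u - u.mean(axis=0)
        # principal direction of the point set via SVD
        _, _, vt = np.linalg.svd(c, full_matrices=False)
        res.append(c @ vt[1])   # distance along the minor axis
    return np.concatenate(res)

def calibrate(lines):
    """LM optimization of (k1, k2) from line candidates, mirroring
    the pipeline's Levenberg-Marquardt step."""
    return least_squares(straightness_residuals, x0=[0.0, 0.0],
                         args=(lines,), method='lm').x
```

Purely radial lines through the image center stay straight under radial distortion, which is why the pre-processing stage's segregation of robust, well-distributed line candidates matters for a well-conditioned fit.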
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Proposed pipeline on the ADAS workbench: (a) ADAS platform: camera sensor setup and image acquisition.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Proposed pipeline on the ADAS workbench: (b) proposed method with block schematics.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Structural anomaly induced into a scene due to heavy lens distortion caused by wide-angle cameras with field-of-view 120° < FOV < 140°.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Lens projection models: (a) standard camera pinhole projection model; (b) larger FOV lens orthogonal projection model.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Proposed self-calibration design.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Pre-processing of line candidates and estimation of the straightness constraint.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Schematic of distortion parameter estimation using LM optimization in normal mode and parameter-sharing mode.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Severe distortion cases rectified using several approaches [28,29], and the proposed method with and without the empirical γ-hyperparameter.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Data acquisition scenarios using various camera models.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS
Auto-parking scenario on the rear fisheye camera: real-time visual SLAM pipeline on lens-distortion-rectified sensor data.
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for ADAS