Fisheye/Omnidirectional View in
Autonomous Driving IV
Yu Huang
Outline
• FisheyeMultiNet: Real-time Multi-task Learning Architecture for
Surround-view Automated Parking System
• Generalized Object Detection on Fisheye Cameras for Autonomous
Driving: Dataset, Representations and Baseline
• SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for
Autonomous Driving
• Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors
for ADAS
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
• Automated parking is a low-speed maneuvering scenario that is quite
unstructured and complex, requiring full 360° near-field sensing around the
vehicle.
• The paper discusses the design and implementation of an automated parking
system from the perspective of camera-based deep learning algorithms.
• It provides a holistic overview of an industrial system covering the embedded
system, the use cases, and the deep learning architecture.
• It demonstrates a real-time multi-task deep learning network, FisheyeMultiNet,
which detects all the objects necessary for parking on a low-power embedded
system (see the sketch after this list).
• FisheyeMultiNet runs at 15 fps for 4 cameras and has three tasks, namely object
detection, semantic segmentation, and soiling detection.
• The authors release a partial dataset of 5,000 images with semantic segmentation
and bounding-box detection ground truth via the WoodScape project.
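To make the shared-encoder, three-head structure concrete, below is a minimal PyTorch sketch of a FisheyeMultiNet-style multi-task network. The encoder depth, channel widths, class counts, and head designs are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a FisheyeMultiNet-style multi-task network (PyTorch).
# Layer widths, class counts, and head designs are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskFisheyeNet(nn.Module):
    def __init__(self, num_det_classes=4, num_seg_classes=5):
        super().__init__()
        # Shared encoder: one feature extractor feeds all three task heads,
        # which is what keeps the network cheap enough for embedded hardware.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Object detection head: per-cell box offsets + objectness + classes.
        self.det_head = nn.Conv2d(128, 5 + num_det_classes, 1)
        # Semantic segmentation head: upsample back to input resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, num_seg_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        # Soiling detection head: image-level classification (clean/soiled).
        self.soil_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 2),
        )

    def forward(self, x):
        feats = self.encoder(x)  # shared computation across tasks
        return {
            "detection": self.det_head(feats),
            "segmentation": self.seg_head(feats),
            "soiling": self.soil_head(feats),
        }

# One forward pass per camera; the deck reports 15 fps for 4 cameras.
net = MultiTaskFisheyeNet()
out = net(torch.randn(1, 3, 256, 256))
```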
Classification of parking scenarios: (a) parallel backward parking, (b) perpendicular backward parking, (c)
perpendicular forward parking, (d) ambiguous parking, and (e) fishbone parking with road markings.
Illustration of the FisheyeMultiNet architecture, comprising the object detection, semantic segmentation, and soiling detection tasks.
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
• Object detection is a comprehensively studied problem in autonomous driving.
• However, it has been relatively less explored for fisheye cameras.
• The standard bounding box fails on fisheye cameras due to the strong radial distortion,
particularly in the image's periphery.
• This work explores better representations such as the oriented bounding box, ellipse, and
generic polygon for object detection in fisheye images.
• The IoU metric is used to compare these representations against accurate instance
segmentation ground truth.
• A novel curved bounding box model is designed with optimal properties for fisheye
distortion models.
• A curvature-adaptive perimeter sampling method is also designed for obtaining polygon
vertices, improving the relative mAP score by 4.9% compared to uniform sampling (see the
sketch after this list).
• Overall, the proposed polygon model improves the relative mIoU accuracy by 40.3%.
• The dataset, comprising 10,000 images along with ground truth, will be made public.
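The curvature-adaptive perimeter sampling can be pictured as allocating polygon vertices along the instance contour in proportion to local turning angle, so straight stretches get few points and curved regions (e.g., wheels) get many. The following NumPy sketch is a hedged reconstruction of that idea, not the authors' exact algorithm; the curvature proxy and importance weighting are assumptions.

```python
import numpy as np

def adaptive_perimeter_sampling(contour, n_vertices=24, eps=1e-3):
    """Allocate polygon vertices along a closed contour in proportion to
    local curvature. A sketch of the idea, not the paper's algorithm."""
    contour = np.asarray(contour, dtype=float)        # (N, 2) closed contour
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v1 = contour - prev_pts
    v2 = next_pts - contour
    # Discrete curvature proxy: wrapped turning angle at each contour point.
    ang1 = np.arctan2(v1[:, 1], v1[:, 0])
    ang2 = np.arctan2(v2[:, 1], v2[:, 0])
    turn = np.abs(np.angle(np.exp(1j * (ang2 - ang1))))
    # Importance combines arc length with curvature weight; eps keeps
    # perfectly straight segments from receiving zero vertices.
    seg_len = np.linalg.norm(v2, axis=1)
    importance = seg_len * (eps + turn)
    cdf = np.cumsum(importance)
    cdf /= cdf[-1]
    # Pick contour indices at uniform quantiles of the importance CDF.
    targets = (np.arange(n_vertices) + 0.5) / n_vertices
    idx = np.searchsorted(cdf, targets)
    return contour[np.clip(idx, 0, len(contour) - 1)]
```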
Left: illustration of fisheye distortion in the projection of an open cube. A 4th-degree polynomial models the
radial distortion; one can visually notice that the box deforms into a curved box. Right: the proposed Curved
Bounding Box uses a circle with an arbitrary center and radius, as illustrated. It captures the radial distortion
and obtains a better footpoint. The center of the circle can be equivalently reparameterized using the object
center (x̂, ŷ).
Generic polygon representations. Left: uniform angular sampling, where the intersection of the polygon with
each radial line is represented by one parameter per point (r). Middle: uniform contour sampling using L2
distance. It can be parameterized in polar coordinates using 3 parameters (r, θ, α), where α denotes the number
of polygon vertices within the sector and may be used to simplify training. Alternatively, 2 parameters (x, y)
can be used, as shown in the right figure. Right: variable-step contour sampling. The straight line at the bottom
has fewer points than curved regions such as the wheel. This representation maximizes the utilization of
vertices according to local curvature.
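Decoding the uniform-angular-sampling representation back into polygon vertices is a one-liner in polar coordinates; a minimal sketch, assuming the rays are spaced uniformly over 360° around the object center:

```python
import numpy as np

def decode_polar_polygon(center, radii):
    """Decode the uniform-angular-sampling representation: one radius per
    fixed ray from the object center gives one polygon vertex per sector."""
    cx, cy = center
    n = len(radii)
    thetas = 2 * np.pi * np.arange(n) / n     # fixed, uniformly spaced rays
    xs = cx + radii * np.cos(thetas)
    ys = cy + radii * np.sin(thetas)
    return np.stack([xs, ys], axis=1)         # (n, 2) polygon vertices

poly = decode_polar_polygon(center=(64.0, 64.0), radii=np.full(36, 20.0))
```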
FisheyeYOLO is an extension of YOLOv3 that can output these different output representations.
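Since only the output parameterization changes relative to YOLOv3, the head can be pictured as a 1×1 convolution whose per-anchor channel count depends on the chosen representation. The channel counts below are illustrative assumptions, not the paper's exact head design.

```python
import torch.nn as nn

# Channels regressed per anchor for each representation (illustrative):
# axis-aligned box (x, y, w, h) = 4; oriented box / ellipse add an angle = 5;
# a 24-vertex polygon as per-sector radii = 24. Objectness + classes are shared.
REPR_CHANNELS = {"box": 4, "oriented_box": 5, "ellipse": 5, "polygon24": 24}

def make_fisheye_yolo_head(in_ch, num_classes, num_anchors, representation):
    per_anchor = REPR_CHANNELS[representation] + 1 + num_classes  # +1 objectness
    return nn.Conv2d(in_ch, num_anchors * per_anchor, kernel_size=1)
```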
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
• Four fisheye cameras with a 190° field of view cover 360° around the vehicle.
• Due to their high radial distortion, standard algorithms do not extend easily.
• This work releases a synthetic version of the surround-view dataset, covering many of its
weaknesses and extending it.
• Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in the
real dataset.
• Secondly, WoodScape did not sample all four cameras simultaneously, in order to favor
diverse frames; this means multi-camera algorithms cannot be designed on it, which is
enabled in the new dataset.
• The authors implemented surround-view fisheye geometric projections in the CARLA
simulator matching WoodScape's configuration and created SynWoodScape (a projection
sketch follows this list).
• They release 80k images with annotations for 10+ tasks.
• They also release the baseline code and supporting scripts.
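A sketch of the kind of fisheye geometric projection implemented in CARLA to match WoodScape's configuration, where the image radius is a 4th-degree polynomial of the incident angle. The coefficients and principal point below are placeholders, not a real calibration.

```python
import numpy as np

def project_fisheye(point_cam, k, cx, cy):
    """Project a 3D point in camera coordinates onto a fisheye image with a
    4th-degree polynomial radial model:
        rho(theta) = k1*theta + k2*theta^2 + k3*theta^3 + k4*theta^4,
    where theta is the angle between the ray and the optical axis.
    Coefficients k and the principal point are placeholder assumptions."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # incident angle from the axis
    rho = sum(ki * theta ** (i + 1) for i, ki in enumerate(k))
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    return cx + rho * np.cos(phi), cy + rho * np.sin(phi)

# Placeholder coefficients; a real camera's k comes from calibration.
u, v = project_fisheye((1.0, 0.5, 2.0), k=(330.0, -5.0, 2.0, -0.2),
                       cx=640.0, cy=480.0)
```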
Overview of the surround-view-camera-based multi-task
visual perception framework. The distance estimation task
(blue block) makes use of semantic guidance and dynamic
object masking from semantic/motion estimation (green
and blue haze blocks) and camera-geometry adaptive
convolutions (orange block). Additionally, the semantic
features guide the detection decoder features (gray block).
The encoder block (shown in the same color) is common to
all tasks. The framework consists of processing blocks to
train self-supervised distance estimation (blue blocks),
semantic segmentation (green blocks), motion segmentation
(blue haze blocks), and polygon-based fisheye object
detection (gray blocks). Surround-view geometric
information is obtained by post-processing the predicted
distance maps in 3D space (perano block). The camera
tensor Ct (orange block) helps OmniDet yield distance maps
across multiple camera viewpoints and makes the network
camera-independent.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
• This paper proposes a self-calibration method that can be applied to multiple larger
field-of-view (FOV) camera models on ADAS.
• Firstly, steps such as edge detection, length thresholding, and edge grouping are
performed to segregate robust line candidates from the pool of initial distorted line
segments.
• A straightness cost constraint with a cross-entropy loss is imposed on the selected line
candidates, and that loss is exploited to optimize the lens-distortion parameters using
the Levenberg–Marquardt (LM) optimization approach (see the sketch after this list).
• The best-fit distortion parameters are used to undistort an image frame, thereby
enabling various high-end vision-based tasks on the distortion-rectified frame.
• Experimental approaches such as parameter sharing between multiple camera systems
and a model-specific empirical γ-residual rectification factor are investigated.
• Quantitative comparisons are made between the proposed method and the traditional
OpenCV method on the KITTI dataset with synthetically generated distortion ranges.
• A pragmatic qualitative analysis is conducted by streamlining high-end vision-based
tasks such as object detection, localization and mapping, and auto-parking on
undistorted frames.
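The optimization loop can be sketched with SciPy's Levenberg–Marquardt solver. For brevity the sketch assumes a one-parameter division model and plain least-squares line-fit residuals in place of the paper's cross-entropy straightness loss; it illustrates the idea of "optimize distortion until line candidates become straight", not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(points, lam, center):
    """One-parameter division model (a simplifying assumption here):
    p_u = (p_d - c) / (1 + lam * r^2) + c."""
    d = points - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

def straightness_residuals(params, lines, center):
    """For each undistorted line candidate, residual = distance of its
    points to their best-fit straight line (minor PCA axis offsets)."""
    lam = params[0]
    res = []
    for pts in lines:
        u = undistort(pts, lam, center)
        u = u - u.mean(axis=0)
        _, _, vt = np.linalg.svd(u, full_matrices=False)
        res.append(u @ vt[-1])   # signed offsets from the fitted line
    return np.concatenate(res)

def calibrate(lines, center, lam0=1e-7):
    """LM refinement of the distortion parameter.
    lines: list of (N_i, 2) arrays of grouped edge points;
    center: np.array([cx, cy]) distortion center."""
    sol = least_squares(straightness_residuals, x0=[lam0],
                        args=(lines, center), method="lm")
    return sol.x[0]
```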
Proposed pipeline on the ADAS workbench: (a) ADAS platform, camera sensor setup and image acquisition.
Proposed pipeline on the ADAS workbench: (b) proposed method with block schematics.
Structural anomaly induced into a scene by heavy lens distortion from wide-angle cameras with
field of view 120° < FOV < 140°.
Lens Projection Models: (a) Standard Camera Pinhole Projection Model. (b) Larger FOV Lens Orthogonal Projection Model.
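The two lens projection models differ only in how the incident angle θ maps to image radius r; a small numeric sketch (the focal length is an arbitrary assumption):

```python
import numpy as np

f = 1.0                                        # arbitrary focal length (assumption)
theta = np.deg2rad(np.array([10.0, 40.0, 70.0]))   # incident angles from the axis

r_pinhole = f * np.tan(theta)   # standard pinhole: diverges as theta -> 90 deg
r_ortho = f * np.sin(theta)     # orthogonal model: stays bounded by f,
                                # which is why it suits larger-FOV lenses
```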
Proposed Self-calibration design
Pre-processing of line candidates and estimation of the straightness constraint.
Schematic of distortion parameter estimation using LM optimization in normal mode and
parameter-sharing mode.
Severe distortion cases rectified using several approaches [28,29] and the proposed method
with and without the empirical γ hyper-parameter.
Data acquisition scenarios using
various camera models.
Auto-parking scenario on a rear fisheye camera: real-time visual SLAM pipeline on lens-distortion-rectified sensor data.