Fisheye based Perception for
Autonomous Driving VI
Yu Huang
Outline
• Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
• SVDistNet: Self-Supervised Near-Field Distance Estimation on
Surround View Fisheye Cameras
• FisheyeDistanceNet++: Self-Supervised Fisheye Distance Estimation
with Self-Attention, Robust Loss Function and Camera View
Generalization
• An Online Learning System for Wireless Charging Alignment using
Surround-view Fisheye Cameras
• RoadEdgeNet: Road Edge Detection System Using Surround View
Camera Images
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
• The 3D visual perception for vehicles with the surround-view fisheye camera
system is a critical and challenging task for low-cost urban autonomous driving.
• Existing monocular 3D object detection methods do not perform well enough
on fisheye images for mass production, partly due to the lack of 3D datasets
of such images.
• In this paper, overcome and avoid the difficulty of acquiring large-scale,
accurate 3D labeled ground-truth data by breaking down the 3D object detection
task into sub-tasks such as the vehicle’s contact point detection, type
classification, re-identification and unit assembling.
• Particularly, propose the concept of a Multidimensional Vector to include the
utilizable information generated in different dimensions and stages, instead of
a descriptive approach such as a BEV box or a cube of eight points.
• Experiments on real fisheye images demonstrate that the solution achieves
state-of-the-art accuracy while running in real time in practice.
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
The inputs are four channels of fisheye images, which construct a surround-view environment for the
ego-vehicle. The final output is a vector map containing the shape of each object under the bird’s-eye view.
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Diagram of translating the pixel
coordinates of the contact points in
the fisheye image to physical
coordinates.
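A minimal sketch of this translation, assuming a calibrated fisheye model whose radial distortion maps the ray incidence angle to an image radius via a polynomial, known camera extrinsics, and a contact point lying on the flat ground plane z = 0 in the vehicle frame. The helper names and the numerical inversion are illustrative, not the paper's exact formulation.

```python
import numpy as np

def contact_point_to_ground(u, v, K, dist_poly, R, t):
    """Translate a contact point from fisheye pixel coordinates (u, v)
    to physical vehicle coordinates, assuming it lies on the ground
    plane z = 0. K: intrinsics dict; dist_poly: polynomial coefficients
    (highest order first) mapping incidence angle theta to image radius;
    R, t: camera-to-vehicle rotation and camera position (assumptions)."""
    x = (u - K["cx"]) / K["fx"]
    y = (v - K["cy"]) / K["fy"]
    r = np.hypot(x, y)                      # radial distance in the image
    # Invert r = poly(theta) numerically (poly assumed monotone over FOV).
    thetas = np.linspace(0.0, np.pi / 2, 10000)
    theta = np.interp(r, np.polyval(dist_poly, thetas), thetas)
    # Viewing ray of the pixel in camera coordinates.
    phi = np.arctan2(y, x)
    ray_cam = np.array([np.sin(theta) * np.cos(phi),
                        np.sin(theta) * np.sin(phi),
                        np.cos(theta)])
    # Rotate into the vehicle frame and intersect with the ground plane.
    ray_veh = R @ ray_cam
    s = -t[2] / ray_veh[2]                  # scale that makes z = 0
    return t + s * ray_veh                  # physical (x, y, 0) coordinates
```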
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
The specific composition of the Multidimensional Vector.
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
The ReID module is divided into three stages.
The first stage fuses the vectors of the three
branches, the second stage generates an ID
for each object of each channel, and the
third stage merges the vectors that
describe the same object into one vector.
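A structural sketch of those three stages, assuming detections are carried as small dicts; the matching rule (`same_object`), branch fusion (`fuse_branches`) and merge rule (`merge`) are hypothetical helpers standing in for details the slides do not specify.

```python
def reid_three_stages(channels, fuse_branches, same_object, merge):
    """Three-stage ReID sketch. `channels` holds, per fisheye channel,
    the vectors produced by the three branches."""
    # Stage 1: fuse the vectors of the three branches within each channel.
    fused = [fuse_branches(branches) for branches in channels]
    # Stage 2: generate an ID for each object of each channel.
    for ch_idx, objs in enumerate(fused):
        for obj_idx, vec in enumerate(objs):
            vec["id"] = (ch_idx, obj_idx)
    # Stage 3: merge vectors that describe the same physical object
    # (seen in different channels) into one vector.
    merged = []
    for objs in fused:
        for vec in objs:
            match = next((m for m in merged if same_object(m, vec)), None)
            if match is None:
                merged.append(vec)
            else:
                merge(match, vec)
    return merged
```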
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Channel fusion and category fusion. (1) Wheels from different channels
are fused into one vehicle; α and β are both 0.5, indicating the weights
assigned to the front wheels of the two vehicles. (2) Wheels and a bumper
from different categories are fused into one vehicle.
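As a concrete reading of the weighted channel fusion, a tiny sketch (variable names ours): the two observations of the same front wheel are combined with the weights α = β = 0.5 from the figure.

```python
import numpy as np

def fuse_wheel(obs_a, obs_b, alpha=0.5, beta=0.5):
    """Fuse two position estimates of the same front wheel coming from
    two overlapping fisheye channels; alpha = beta = 0.5 as in the
    figure. A sketch of the weighted combination, not the paper's code."""
    return alpha * np.asarray(obs_a, float) + beta * np.asarray(obs_b, float)

# e.g. fuse_wheel([2.10, 0.95], [2.16, 1.01]) -> array([2.13, 0.98])
```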
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Two cases of calculating the heading angles of the target vehicles: (a) two
wheels on one side are visible; (b) only one wheel and one bumper are visible.
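A sketch of both cases in BEV coordinates. For case (b) we spell out our own assumption, that the bumper segment is perpendicular to the vehicle's longitudinal axis, which the slides do not state explicitly.

```python
import numpy as np

def heading_from_wheels(rear_wheel, front_wheel):
    """Case (a): two wheels on one side are visible; take the heading as
    the direction of the line from the rear to the front contact point."""
    dx, dy = np.subtract(front_wheel, rear_wheel)
    return np.arctan2(dy, dx)

def heading_from_bumper(bumper_p1, bumper_p2):
    """Case (b): only one wheel and one bumper are visible. Assuming the
    bumper segment is perpendicular to the longitudinal axis (our
    assumption), the heading follows from the bumper's orientation."""
    dx, dy = np.subtract(bumper_p2, bumper_p1)
    return np.arctan2(dy, dx) + np.pi / 2.0  # rotate 90 deg to the axis
```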
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Disentangling and Vectorization: A 3D Visual Perception Approach for
Autonomous Driving Based on Surround-View Fisheye Cameras
Time consumption tests on the hardware platform
SVDistNet: Self-Supervised Near-Field Distance
Estimation on Surround View Fisheye Cameras
• The depth estimation model must be tested on a variety of cameras fitted to millions of cars
with varying camera geometries. Even within a single car, intrinsics vary due to manufacturing
tolerances.
• Deep learning models are sensitive to these changes, and it is practically infeasible to train and
test on each camera variant.
• present camera-geometry adaptive multi-scale convolutions which utilize the camera parameters
as a conditional input, enabling the model to generalize to previously unseen fisheye cameras (see the sketch after this list).
• improve the encoder with pairwise and patchwise vector-based self-attention networks.
• demonstrate generalization across different camera viewing angles through extensive experiments.
• To enable comparison with other approaches, evaluate on the front-camera data of the KITTI dataset
(pinhole camera images) and achieve state-of-the-art performance among self-supervised
monocular methods.
• Baseline code and dataset will be made public: https://github.com/valeoai/WoodScape
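A minimal sketch of the camera-geometry adaptive idea referenced above: per-pixel camera maps Ct (contents assumed here, e.g. normalized pixel coordinates or viewing-ray angles derived from calibration) are concatenated to the features as extra channels, so a single set of weights can condition on whichever camera produced the image. An illustration, not the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CameraGeometryConv(nn.Module):
    """Convolution conditioned on a per-pixel camera tensor Ct, so one
    network can adapt to the camera geometry it is looking through."""
    def __init__(self, in_ch, out_ch, ct_ch=3, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + ct_ch, out_ch, k, padding=k // 2)

    def forward(self, feat, cam_tensor):
        # Resample the camera maps to the feature resolution, then
        # concatenate them as extra input channels.
        ct = F.interpolate(cam_tensor, size=feat.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.conv(torch.cat([feat, ct], dim=1))
```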
SVDistNet: Self-Supervised Near-Field Distance
Estimation on Surround View Fisheye Cameras
The surround-view distance estimation framework is
facilitated by employing a single network on
images from multiple cameras. Surround-view
coverage of geometric information is obtained for
an autonomous vehicle by utilizing and post-
processing the distance maps from all cameras.
SVDistNet: Self-Supervised Near-Field Distance
Estimation on Surround View Fisheye Cameras
SVDistNet: Self-Supervised Near-Field Distance
Estimation on Surround View Fisheye Cameras
• High-level overview of the surround-view self-supervised distance estimation
framework, which employs semantic guidance as well as camera-geometry
adaptive convolutions (orange blocks).
• The framework comprises training units for self-supervised distance estimation
(blue blocks) and semantic segmentation (green blocks).
• The camera tensor Ct assists SVDistNet in producing distance maps across
multiple camera viewpoints and makes the network camera-independent.
• Ct can also be applied to standard camera models.
• The multi-task loss from (9) weights and optimizes both modalities at the
same time; see the sketch after this list.
• By post-processing the predicted distance maps in 3D space, surround-view
geometric information can be obtained with the proposed framework.
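The slides reference the weighted multi-task loss without its formula, so here is one standard form it could take: learnable homoscedastic-uncertainty weighting (Kendall et al.) over the distance and segmentation terms. The exact form in the paper may differ.

```python
import torch
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Two-task loss sketch that weights the self-supervised distance
    term and the semantic segmentation term and optimizes both at once,
    using learned log-variances (one common choice, assumed here)."""
    def __init__(self):
        super().__init__()
        self.log_var_dist = nn.Parameter(torch.zeros(()))
        self.log_var_seg = nn.Parameter(torch.zeros(()))

    def forward(self, loss_dist, loss_seg):
        # Each task is scaled by a learned precision and regularized by
        # its log-variance so neither task dominates training.
        return (torch.exp(-self.log_var_dist) * loss_dist + self.log_var_dist
                + torch.exp(-self.log_var_seg) * loss_seg + self.log_var_seg)
```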
SVDistNet: Self-Supervised Near-Field Distance
Estimation on Surround View Fisheye Cameras
Overview of the proposed network architecture for semantically guided self-supervised distance estimation.
It consists of a shared vector-based self-attention encoder and task-specific decoders. The encoder is a
self-attention network with pairwise and patchwise variants, while the decoder uses pixel-adaptive
convolutions; both are complemented by camera-geometry convolutions.
FisheyeDistanceNet++: Self-Supervised Fisheye Distance Estimation with
Self-Attention, Robust Loss Function and Camera View Generalization
The radial distortion models.
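For context, one radial distortion model widely used for automotive fisheye lenses (and in the WoodScape toolchain) maps the incidence angle θ of a viewing ray to an image radius with a fourth-order polynomial; the coefficient names below are illustrative.

```python
import numpy as np

def fisheye_radius(theta, a1, a2, a3, a4):
    """Image radius as a 4th-order polynomial of the incidence angle:
    r(theta) = a1*theta + a2*theta**2 + a3*theta**3 + a4*theta**4."""
    return a1 * theta + a2 * theta**2 + a3 * theta**3 + a4 * theta**4

# For comparison, a pinhole camera gives r = f * tan(theta), which
# diverges near 90 degrees; the polynomial stays finite over the
# fisheye field of view.
```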
FisheyeDistanceNet++: Self-Supervised Fisheye Distance Estimation with
Self-Attention, Robust Loss Function and Camera View Generalization
Distance estimation results of the same network evaluated on four different fisheye cameras of
a surround-view camera system. The SVDistNet model generalizes well across
different viewing angles and consistently produces high-quality distance outputs.
FisheyeDistanceNet++: Self-Supervised Fisheye Distance Estimation with
Self-Attention, Robust Loss Function and Camera View Generalization
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
• In parallel to the electrification of the vehicular fleet, automated parking systems that
make use of surround-view camera systems are becoming increasingly popular.
• In this work, propose a system based on the surround-view camera architecture to
detect, localize, and automatically align the vehicle with the inductive charge pad.
• The visual design of the charge pads is not standardized and not necessarily known
beforehand.
• Therefore, a system that relies on offline training will fail in some situations.
• Thus, propose a self-supervised online learning method that leverages the driver’s
actions when manually aligning the vehicle with the charge pad, and combine it with
weak supervision from semantic segmentation and depth to learn a classifier that
auto-annotates the charge pad in the video for further training. In this way, when
faced with a previously unseen charge pad, the driver needs to align the vehicle
manually only a single time.
• As the charge pad is flat on the ground, it is not easy to detect it from a distance. Thus,
propose using a Visual SLAM pipeline to learn landmarks relative to the charge pad to
enable alignment from a greater range.
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Various commercial charge pads.
ArUco patterns on a charging station.
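Detecting ArUco markers like the ones shown is standard with OpenCV's aruco module (opencv-contrib); a minimal sketch, with the dictionary choice and file name as assumptions.

```python
import cv2

# Detect ArUco markers on a charging station image; DICT_4X4_50 and the
# file name are assumptions, real stations may use another marker family.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
image = cv2.imread("charging_station.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# (OpenCV >= 4.7 also offers the cv2.aruco.ArucoDetector class API.)
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    # Marker corners give 2D anchor points for localizing the charge pad.
    cv2.aruco.drawDetectedMarkers(image, corners, ids)
```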
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Three-task perception stack.
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Synthetic analysis of the pixel size of charge pads
at various distances from the vehicle.
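A back-of-the-envelope version of that analysis (our numbers, pinhole approximation; a real fisheye compresses the periphery further): the pad's image width shrinks roughly as f·W/d.

```python
# Approximate pixel width of a charge pad versus distance. Focal length
# (in pixels) and pad width are assumptions for illustration only.
f_px = 320.0      # focal length in pixels (assumed)
pad_w = 0.6       # charge pad width in metres (assumed)
for d in [1.0, 2.0, 4.0, 8.0]:
    print(f"{d:>4.1f} m -> ~{f_px * pad_w / d:5.1f} px wide")
```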
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Employing Visual SLAM to predict the position of the charge pad in
an image corresponding to a previous vehicle position.
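A sketch of that prediction step, assuming the SLAM map stores the pad's anchored 3D position and provides the current camera pose; `project` stands in for the fisheye projection function and is hypothetical.

```python
import numpy as np

def predict_pad_in_image(pad_xyz_world, T_world_to_cam, project):
    """The charge pad's 3D position was anchored to mapped landmarks when
    the driver aligned manually; given the current camera pose from
    Visual SLAM, reproject it into the live image."""
    p = np.append(np.asarray(pad_xyz_world, float), 1.0)  # homogeneous
    p_cam = (T_world_to_cam @ p)[:3]                      # camera frame
    if p_cam[2] <= 0:
        return None                 # pad is behind the camera
    return project(p_cam)           # pixel (u, v) in the fisheye image
```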
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Overall system architecture for online charge pad learning and vehicle-charge pad alignment
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Qualitative results of charge pad
detection and tracking in different
scenarios, namely outdoor (top),
indoor (2nd row), synthetic (3rd row),
and augmented Valeo logo (bottom).
An Online Learning System for Wireless Charging
Alignment using Surround-view Fisheye Cameras
Examples of visual features in an image
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
• Road edge is defined as the borderline where there is a change from the road
surface to a non-road surface.
• Most existing solutions for road edge detection use only a single front camera
to capture the input image; hence, the system’s performance and robustness suffer.
• An efficient CNN trained on a very diverse dataset yields more than 98% semantic
segmentation accuracy for the road surface, which is then used to obtain road edge
segments for the individual camera images.
• Afterward, the raw road edges from the multiple cameras are transformed into world
coordinates, and RANSAC curve fitting is used to get the final road edges on both
sides of the vehicle for driving assistance (see the sketch after this list).
• The road edge extraction process is also very computationally efficient, as it
reuses the same generic road segmentation output that is computed along with other
semantic segmentation tasks for driving assistance and autonomous driving.
• The RoadEdgeNet algorithm is designed for automated driving in series production;
the various challenges and limitations of the current algorithm are discussed.
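The RANSAC curve-fitting step referenced above could look like the following sketch: a low-order polynomial y = f(x) fit to candidate edge points in vehicle coordinates with outlier rejection. Degree, iteration count and inlier tolerance are our assumptions.

```python
import numpy as np

def ransac_poly_fit(pts, degree=2, iters=200, tol=0.15, rng=None):
    """Fit a polynomial road-edge curve to candidate points (Nx2 array
    of x, y in metres), rejecting outliers from segmentation noise."""
    rng = rng or np.random.default_rng(0)
    x, y = pts[:, 0], pts[:, 1]
    best_coeffs, best_inliers = None, 0
    for _ in range(iters):
        sample = rng.choice(len(x), size=degree + 1, replace=False)
        coeffs = np.polyfit(x[sample], y[sample], degree)
        inliers = np.abs(np.polyval(coeffs, x) - y) < tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            # Refit on all inliers for the final edge curve.
            best_coeffs = np.polyfit(x[inliers], y[inliers], degree)
    return best_coeffs
```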
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
Overall Road Edge Detection System Architecture
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
RoadEdgeNet
Architecture
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
Road edge candidate points (left), edge points from curve fitting (middle), and road edges overlaid on the front camera image (right).
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
Obtaining candidate left and right road edge points.
Note that this figure illustrates the front-view camera
image; the same logic is applied to the rear-view
image. For the mirror cameras, only steps 1-3 are
performed, since each mirror camera sees edge points
on only the left or the right side. Step 1: Get the far-left
road pixel (x1, y1) and the far-right road pixel (x2, y2)
from the segmented binary image. Step 2: Scan each
row from left to right; when a road pixel is reached,
store the point and go to the next row, skipping the
other pixels in that row. Step 3: Repeat step 2 for each
row until y1 is reached and skip everything below y1.
Step 4: Scan each row from right to left; when a road
pixel is reached, store the point and go to the next row,
skipping all other pixels in that row. Step 5: Repeat
step 4 for each row until y2 is reached and skip
everything below y2. Step 6: Transform the points into
vehicle coordinates with the image-to-world
transformation for further processing.
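A compact sketch of steps 1-5 on a binary road mask (NumPy; image rows grow downward, so "below y1" means rows with larger y). Step 6 would follow with the image-to-world transformation.

```python
import numpy as np

def edge_candidates(road_mask):
    """Scan each row of a binary road mask from the left for the first
    road pixel (left-edge candidate) and from the right for the last one
    (right-edge candidate), stopping at the extreme road rows y1 / y2."""
    ys, xs = np.nonzero(road_mask)
    if len(xs) == 0:
        return [], []
    y1 = ys[np.argmin(xs)]          # row of the far-left road pixel
    y2 = ys[np.argmax(xs)]          # row of the far-right road pixel
    left, right = [], []
    for y in range(road_mask.shape[0]):
        cols = np.nonzero(road_mask[y])[0]
        if len(cols) == 0:
            continue
        if y <= y1:                  # steps 2-3: left-to-right scan
            left.append((cols[0], y))
        if y <= y2:                  # steps 4-5: right-to-left scan
            right.append((cols[-1], y))
    return left, right
```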
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images
RoadEdgeNet: Road Edge Detection System
Using Surround View Camera Images