Autonomous vehicles have the potential to significantly improve road safety by reducing the more than 90% of crashes caused by human error. Fully autonomous vehicles that can operate in any condition without human intervention (Level 5 autonomy) are the long-term goal, but we are many years away from achieving it. In the meantime, vehicles with partial automation (Levels 3 and 4), which can take control in certain conditions or domains and hand control back to the driver, are being developed and tested; ensuring a safe handoff between vehicle and driver, however, presents challenges. The technology relies on sensors such as LIDAR, radar, and cameras, along with detailed 3D maps, to let the vehicle perceive its environment, detect objects, and navigate and maneuver safely.
Why? Safety
• 5.5 million crashes*
• 1.5 million injuries*
• 35 thousand fatalities*
• 93% of fatalities caused by human error*
* US only, data reported per year
LEVEL 5: GOAL
FULLY AUTONOMOUS DRIVING:
• Any traffic or weather condition
• Any place, at any time
• No human required
LEVEL 4: SPECIFIC OPERATIONAL DOMAINS
FULLY AUTONOMOUS DRIVING, but:
• Limited areas of operation
• Limited speeds
• Limited times of the day
MOBILITY AS A SERVICE:
• New business model (like Uber, but without the driver)
• Shared mobility: ride-sharing, car-sharing
TIMELINE: ~10 years
LEVEL 3: HANDOFF CONTROL BACK
HANDOFF:
• Autonomous mode that may hand control back to the driver
• The system should always detect such conditions
• The system must give the driver sufficient warning
SAFETY ISSUES:
• Research shows: the longer the driver is disengaged, the longer it takes to regain control
• At 100 km/h a car travels ~30 m every second; to give a 15-second warning, the system should see ~500 m (5 football fields) ahead!
TIMELINE: Deployed on the new Audi A8
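The warning-distance arithmetic above can be sketched as a small helper (a minimal sketch; the 100 km/h speed and 15-second warning window are the figures from the slide):

```python
def required_sensing_range(speed_kmh: float, warning_s: float) -> float:
    """Distance (in meters) a car travels during the warning window."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_ms * warning_s

# At 100 km/h the car covers ~27.8 m/s, so a 15-second warning
# requires roughly 417 m of sensing range (~500 m with margin).
print(round(required_sensing_range(100, 15)))  # 417
```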
LEVEL 2: HANDOFF AT ANY TIME
HANDOFF:
• Only a second or two of warning
• The human must be ready for every such moment!
• The human driver should always supervise the system!
SAFETY ISSUES:
• The autonomy can fail to see danger ahead
• Drivers under-trust or over-trust the system
• Example: braking when the adaptive cruise control sensor doesn't detect debris on the road
TIMELINE: Exists today: Tesla vehicles and others
LEVEL 1: HANDS-ON, ASSISTANTS
ADAS / Active Systems:
• Adaptive Cruise Control
• Lane Keeping Assistance / Lane Departure Alert
• Pre-collision systems (with pedestrian detection) / Automatic Emergency Braking
• Traffic Jam Assist
• Traffic Sign Recognition
• Intelligent High-beam Control
TIMELINE: Exists today: Mobileye devices in many cars and others
Mapping & Localization: Map variations
1. NO MAPS
• Without maps there is no need to localize
2. NAVIGATION MAPS
• Localization at GPS accuracy, ~10 m (5-30 m)
• What happens when you lose the GPS signal (as in big cities)?
3. HIGH-DEFINITION MAPS
• Detailed maps with many landmarks
• Providers like TomTom, HERE
• Localization should be at ~10 cm accuracy
4. 3D MAPS, DENSE POINT CLOUD
• Very detailed 3D point-cloud maps
• Google is the main contributor
• Localization should also be at ~10 cm accuracy to localize properly
Mapping & Localization: Data collection
MAP UPDATES:
• Should be near real-time.
• Imagine: traffic signs have changed, or even road directions, and no longer match what the car senses!
• It should all be crowdsourced across car manufacturers and map providers.
AMOUNT OF DATA COLLECTED:
• Landmark types: traffic signs, traffic lights, road markings, road structure, reflectors, …
• Data collected per vehicle should be small, so HD maps are preferable in this case.
• After sensor post-processing, landmarks are just a small subset of the data.
• Size of data transfer: ~10 KB to 1 MB per kilometer.
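To get a feel for the ~10 KB to 1 MB per kilometer figure above, here is a rough back-of-the-envelope sketch; the fleet size and daily mileage are illustrative assumptions, not numbers from the slides:

```python
# Rough daily map-update upload volume for a crowdsourcing fleet.
# Fleet size and km/day below are illustrative assumptions.
kb_per_km_low, kb_per_km_high = 10, 1000   # ~10 KB to 1 MB per km (slide figure)
fleet_size = 100_000                       # assumed number of vehicles
km_per_day = 50                            # assumed daily distance per vehicle

total_km = fleet_size * km_per_day
low_gb = total_km * kb_per_km_low / 1e6    # KB -> GB
high_gb = total_km * kb_per_km_high / 1e6
print(f"{low_gb:.0f} GB to {high_gb:.0f} GB per day")  # 50 GB to 5000 GB per day
```

Even under these modest assumptions the spread is two orders of magnitude, which is why compact landmark-based HD-map updates are preferred over raw sensor data.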
Sensing: examples of major problems
1. 3D object detection with semantic 3D bounding boxes
2. Free space segmentation / drivable paths
3. Lane detection / fusion
4. Task-specific issues:
   4.1. Semantic lane information (country-road vocabulary, types of merges, …)
   4.2. Road semantic information: slope estimation (predicting road curves)
   4.3. Accurate distance prediction to every object
Sensing: 3D Object Detection & Trajectories
VEHICLES AT ANY ANGLE:
• The 3D bounding box is crucial
• Not all faces of the cube are visible in every frame
• Color codes mark every side of the car
• Added complexity: distance/range measurement, trajectory prediction
Sensing: Semantic Segmentation
SEMANTIC FREE SPACE:
• What is road and what is not? Context matters a lot.
• We need semantic information to help classify such cases.
• A road area separated by a small elevation could be a parking slot!
• What about snow or a wet road (reflections)? Algorithm generalization is key!
Sensing: Lane Detection / Fusion
LANE FUSION:
• Inputs: lane detection, holistic path planning, free space
• Steps: alignment, confidence improvements
TASK: fuse the center of the lane
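One simple way to fuse several lane-center estimates into a single one is a confidence-weighted average. A minimal sketch; the source names and the offset/confidence numbers are made up for illustration:

```python
def fuse_lane_center(estimates):
    """Confidence-weighted average of lateral lane-center offsets (meters).

    estimates: list of (offset_m, confidence) pairs from different sources.
    """
    total_w = sum(conf for _, conf in estimates)
    if total_w == 0:
        raise ValueError("no confident estimates to fuse")
    return sum(off * conf for off, conf in estimates) / total_w

# Hypothetical per-source estimates of the lane-center offset:
sources = [(0.10, 0.9),   # lane detection (camera)
           (0.20, 0.5),   # holistic path planning
           (0.00, 0.3)]   # free-space boundary
print(round(fuse_lane_center(sources), 3))  # 0.112
```

Production systems do considerably more (temporal filtering, outlier rejection, alignment between sources), but the weighting idea is the same.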
Driver policy & route planning
DRIVER POLICY:
• Planning the future.
• Examples: double lane merge; negotiating entry into a roundabout.
TECHNOLOGY:
• Multi-agent game
• Reinforcement learning:
– States, actions, rewards
– Need to classify and predict other drivers' behavior
Reference: "A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles", B. Paden et al., 2016
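The states/actions/rewards framing can be illustrated with tabular Q-learning on a toy merge decision. The two-state MDP and its reward numbers below are entirely made up for illustration; real driving-policy work uses far richer state spaces:

```python
import random

# Toy MDP: state 0 = "gap ahead", state 1 = "car alongside".
# Actions: 0 = merge, 1 = wait. All rewards are illustrative only.
def step(state, action):
    if action == 0:                                  # merge
        return None, (1.0 if state == 0 else -5.0)   # "crash" if car alongside
    return random.choice([0, 1]), -0.1               # waiting costs a little

random.seed(0)
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, eps = 0.1, 0.9, 0.1
for _ in range(5000):
    s = random.choice([0, 1])
    while s is not None:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        target = r if s2 is None else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])        # temporal-difference update
        s = s2

# Learned policy: merge when there is a gap, wait when a car is alongside.
print(max((0, 1), key=lambda a: Q[0][a]), max((0, 1), key=lambda a: Q[1][a]))
```

The harder, open part the slide points at is not the update rule but modeling the other agents: predicting what nearby drivers will do so that the state and reward reflect a multi-agent negotiation rather than a static environment.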
Summary: safety first
• ~1.2 million people are killed on roads worldwide every year
• With proper, very careful execution of the technology, we can achieve safer roads
• New businesses and models will dramatically change our life experience