While Deep Neural Networks (DNNs) have revolutionized applications that rely on computer vision, their characteristics introduce substantial challenges to automotive safety engineering. The behavior of a DNN is not explicitly expressed by an engineer in source code; instead, enormous amounts of annotated data are used to learn a mapping between input and output. Functional safety as defined by ISO 26262 is not sufficient to match the needs of the new generation of data-driven software.
Earlier this year, ISO published ISO/PAS 21448 Safety of the Intended Functionality (SOTIF). SOTIF is a Publicly Available Specification (PAS), a response to the pressing need for an automotive safety standard appropriate for machine learning. A PAS is a stepping stone toward a new ISO standard, and SOTIF is intended to complement conventional functional safety as defined in ISO 26262.
In this presentation, we introduce the SOTIF process and present our contributions on how to support safety of the intended functionality. First, we present search-based software testing to efficiently and effectively identify test scenarios that cause safety violations in simulated environments. Second, we present a safety cage architecture that helps perception systems reject input that does not resemble the training data.
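The safety cage idea mentioned above can be sketched as an out-of-distribution gate in front of the perception system: inputs that do not resemble anything seen during training are rejected instead of being classified. The class name and the nearest-neighbour novelty score below are illustrative assumptions, not the architecture presented in the talk.

```python
import math
import random

class SafetyCage:
    """Hypothetical sketch: reject inputs far from the training data."""

    def __init__(self, training_data, threshold):
        self.training_data = training_data  # list of feature vectors
        self.threshold = threshold          # max accepted novelty score

    def novelty(self, x):
        """Euclidean distance to the nearest training sample."""
        return min(math.dist(x, t) for t in self.training_data)

    def accept(self, x):
        """True if x lies close enough to the training distribution."""
        return self.novelty(x) <= self.threshold

# Toy usage: training features cluster around the origin.
random.seed(0)
train = [[random.gauss(0, 1) for _ in range(4)] for _ in range(500)]
cage = SafetyCage(train, threshold=2.0)
print(cage.accept([0, 0, 0, 0]))      # in-distribution input: accepted
print(cage.accept([10, 10, 10, 10]))  # far from training data: rejected
```

In a real pipeline the novelty measure would operate on learned features rather than raw inputs, but the gating principle is the same.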
Trained, Not Coded - Still Safe?
1. Trained, Not Coded – Still Safe?
Software Technology Exchange Workshop
Lund, Nov 14, 2019
Markus Borg
@mrksbrg
mrksbrg.com
RISE Research Institutes of Sweden AB
2. With ML, not only bugs are dangerous…
SOTIF: Safety of the Intended Function
CC BY-NC 2.0
Flickr: @andreas_komodromos
3. Who is Markus?
Board member
Senior researcher, Lund
Adjunct lecturer, Lund University
6. ”a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program”
https://medium.com/@karpathy/software-2-0-a64152b37c35
Andrej Karpathy
Director of AI at Tesla
7. Karpathy’s Software 2.0
Software 1.0
• Humans write source code
• Other humans comprehend the source code
Software 2.0
• Humans curate data and specify goals
• Backprop. and gradient descent produce millions of weights
• Humans cannot comprehend the mapping from input to output
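The "Software 2.0" loop the slide refers to, where a human specifies the goal and gradient descent produces the weights, can be illustrated with a toy one-parameter fit. The data and learning rate below are made up for illustration; real perception DNNs learn millions of weights the same way.

```python
# Toy "Software 2.0": the human provides data and a goal (minimize
# squared error); gradient descent produces the weight w so y ≈ w * x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # samples of y = 3x

w = 0.0    # the learned "program" is just this number
lr = 0.02  # learning rate
for _ in range(500):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward 3.0
```

Note that nobody wrote the rule "multiply by 3" anywhere; it emerged from the data, which is exactly why the resulting mapping is hard to inspect at DNN scale.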
10. Definition of Functional Safety
”absence of unreasonable risk due to hazards resulting from malfunctions of the electrical/electronic system”
What if…
Not a bug – functionality delivered according to the training!
No object detected
11. “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky…”
– Tesla Team, June 30, 2016
(Fred Lambert, Electrek)
(US NTSB)
14. Automotive Software Safety
Absence of unreasonable risk due to…
…malfunctions of the electrical/electronic system (ISO 26262)
…functional insufficiencies (ISO/PAS 21448)
38. With ML, not only bugs are dangerous…
SOTIF: Safety of the Intended Function
CC BY-NC 2.0
Flickr: @andreas_komodromos
markus.borg@ri.se
@mrksbrg
mrksbrg.com
Editor's Notes
Formalities.
Edition 2: Everything but mopeds. www.mentor.com
In: reasonably foreseeable human misuse
Out: security and antagonistic attacks
“Inability of function to correctly comprehend situation”
“Insufficient robustness of sensors or diverse environmental conditions”
Instead of V-model: knowns and unknowns, safe and unsafe
No easy fix here, machine learning will not do it for you. Hard systematic work by safety engineers.
2 s when driving 90 km/h
Harm = only consequences
Triggers = causes
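The 2 s window in the note above corresponds to a substantial distance at highway speed. A quick check, assuming constant speed over the interval:

```python
# Distance covered in 2 s at 90 km/h (constant-speed assumption).
speed_kmh = 90.0
speed_ms = speed_kmh / 3.6   # 90 km/h = 25 m/s
time_s = 2.0
distance_m = speed_ms * time_s
print(distance_m)  # 50.0 m travelled before any reaction takes effect
```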
Avoid: Improve sensor performance, Improve actuators, Improve algorithms
Reduce: Restriction of intended function, Degradation when poor sensor data detected
Mitigate: Improve Human-Machine Interface, Improve warnings and degradation
ProSivic