PwC's recently released Responsible AI Diagnostic surveyed around 250 senior business executives from May to June 2019. According to the survey, 84% of CEOs agree that AI-based decisions need to be explainable in order to be trusted. In recent years, deep learning has shown remarkable results across applications, making it a first choice for many AI use cases. However, deep learning models are hard to explain, and since the majority of CEOs expect AI solutions to be explainable, deep learning faces a serious challenge. Daniel Kahneman, in his book Thinking, Fast and Slow, presents two systems the human brain uses to form thoughts and decisions: System 1 is fast, intuitive and hard to explain; System 2 is slow, conscious and easy to explain. In this talk I will present: A) the PwC Responsible AI Survey, B) a proposed deep learning framework that mimics the two systems of thinking, and C) recent advances in the neural-symbolic learning field.
Deep learning fast and slow, a responsible and explainable AI framework - Ahmad Haj Mosa
2. Deep Learning Fast and Slow, A Responsible and Explainable AI Framework
Presentation by Ahmad Haj Mosa, Head of AI
3. PwC Global CEO Survey
84% of CEOs agree that AI-based decisions need to be explainable in order to be trusted.
Risks of AI

Performance
• Risk of errors
• Risk of bias
• Risk of opaqueness
• Risk of performance instability

Control
• Lack of human agency in AI-supported processes
• Inability to detect/control rogue AI

Security
• Adversarial attacks
• Cyber intrusion risks
• Privacy risks
• Open source software risks

Ethical
• Lack of values risk
• Value alignment risk

Economic
• Reputational risk
• Autonomous weapons proliferation
• Risk of intelligence divide
• Job displacement
• Liability risk
• Risk of "winner takes all" concentration of power
4. Governments are actively developing legal frameworks for holding algorithms accountable

Algorithmic Accountability Act (US)
Companies are legally required to assess automated systems based on training data, model, fairness, performance, bias, discrimination, privacy and security. Companies must conduct impact evaluations and rectify any identified issues.

General Data Protection Regulation (EU)
Companies should use personal information in a manner that prevents discriminatory effects. Appropriate mathematical procedures should be adopted for consumer profiling. The data subject possesses the Right to Explanation: the right to seek meaningful information about the logic involved in automated decision making.

California Consumer Privacy Act (US)
Modeled on the GDPR in many ways, such as limiting data usage and requiring a right to erasure. At this moment, the CCPA does not contain provisions for a right to object to automated decision making.

Globally, countries are exploring ethical AI standards. Recently, 42 countries signed on to the OECD Principles on Artificial Intelligence.

Government Task Forces
Globally, governments are also launching task forces and other exploratory bodies on AI and AI governance. Bodies include:
- UK All-Party Parliamentary Group on AI (Jan 2017)
- NYC AI Task Force (May 2018)
- Victoria All-Party Parliamentary Group on AI (March 2018)
10. "Our minds contain processes that enable us to solve problems we consider difficult. 'Intelligence' is our name for those processes we don't yet understand."
— Marvin Minsky
11. "Our models contain processes that enable us to solve problems we consider difficult. 'Black box' is our name for those processes we don't yet understand."
— after Marvin Minsky
12. What is Explainable AI?

Why? High-level abstract explanations that are comprehensible to users (non data scientists).
How? Explanation of the mathematical models and of how data flows from inputs to decisions.
13. Explainability Types

• Interpretation: the scientist studies the task, data and model, and interprets the model's behaviour for themselves.
• Explanation: the scientist studies the task, data and model, and produces an explanation for the end user.
• Self-Learned Explanation: the model itself learns to produce an explanation for the end user alongside its decision.
14. Explanation methods compared

Criteria, each rated Strong / Average / Weak:
1. Explanation on feature level
2. Reasoning/conceptual-based explanation
3. Perceivable by humans
4. Possible integration in self-learned explanations

Methods: LIME, Saliency Maps, SHAP, TCAV.
[Comparison matrix: each of the four methods rated Strong/Average/Weak against the four criteria]
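For illustration, a minimal sketch of a feature-level explanation (criterion 1) using the shap library; the toy dataset, sizes and random-forest model below are invented for the example:

# Minimal sketch: feature-level explanation of a tabular model with SHAP.
# The data and model are toy stand-ins, not from the talk.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # toy features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label driven by features 0 and 1

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # model-specific explainer
shap_values = explainer.shap_values(X[:10])      # per-feature attributions
                                                 # (return shape depends on shap version)

# Each attribution says how much a feature pushed one prediction up or down:
# an explanation on the feature level (criterion 1), but not a
# conceptual/reasoning-based one (criterion 2).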
17. 2. A Framework of Representing Knowledge
Source: Marvin Minsky, The Society of Mind
18. 3. The Consciousness Prior

1. High-dimensional concepts are represented in the unconscious state h, produced by an encoder from the input x.
2. Low-dimensional conscious thoughts/states are represented in c, selected from h by an attention mechanism.

Input x → unconscious state h → attention → conscious state c

Source: Yoshua Bengio, The Consciousness Prior
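A minimal sketch of this idea, assuming top-k attention as the selection mechanism and illustrative sizes; this is a simplification, not Bengio's exact formulation:

# Sketch: encoder produces a high-dimensional unconscious state h;
# attention selects a few elements of h as the low-dimensional conscious state c.
import torch
import torch.nn as nn

class ConsciousnessPriorSketch(nn.Module):
    def __init__(self, x_dim=64, h_dim=256, k=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, h_dim))  # x -> unconscious h
        self.attn = nn.Linear(h_dim, h_dim)   # one relevance score per element of h
        self.k = k

    def forward(self, x):
        h = self.encoder(x)                   # high-dimensional unconscious state
        scores = self.attn(h)
        topk = scores.topk(self.k, dim=-1)    # attention picks the k most relevant elements
        c = h.gather(-1, topk.indices)        # low-dimensional conscious state
        return h, c

x = torch.randn(8, 64)
h, c = ConsciousnessPriorSketch()(x)          # h: (8, 256), c: (8, 4)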
19. 4. Concept Activation Vector (TCAV)

A domain expert defines the task and a set of high-level concepts; the scientist trains the model; the concept activation vector v_C then links the model's internal activations to those concepts, yielding the explanation.
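A minimal sketch of computing a CAV, following the TCAV recipe of fitting a linear classifier on layer activations; the activations and the gradient below are randomly generated stand-ins:

# Sketch: a CAV is the normal of a hyperplane separating concept-image
# activations from random-image activations at one layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 128))  # activations for concept images
random_acts = rng.normal(loc=0.0, size=(100, 128))   # activations for random images

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

v_C = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # the concept activation vector

# Conceptual sensitivity of one input: directional derivative of the task
# logit along v_C. grad_logit stands in for the model's gradient at that layer.
grad_logit = rng.normal(size=128)
sensitivity = grad_logit @ v_C                       # > 0: the concept pushes the prediction up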
21. Self-learned conceptual explanations

The Consciousness Prior (input x → unconscious state h → attention → conscious state c) is combined with the Framework of Representing Knowledge. On top of the conscious state, one logistic classifier per concept detects whether that concept is active (a minimal sketch follows):
• Positive concepts: dog face, husky face
• Negative concepts: grass, sand
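A minimal sketch of such concept heads, assuming the small conscious state c from the earlier consciousness-prior sketch; dimensions are illustrative and the concept names follow the slide:

# Sketch: one logistic classifier per concept on top of the conscious state c.
import torch
import torch.nn as nn

concepts = ["dog face", "husky face", "grass", "sand"]
c_dim = 4
heads = nn.ModuleDict({name.replace(" ", "_"): nn.Linear(c_dim, 1)
                       for name in concepts})

c = torch.randn(8, c_dim)                                    # conscious state from the encoder
probs = {name: torch.sigmoid(head(c)) for name, head in heads.items()}
# "dog_face"/"husky_face" should fire (positive concepts);
# "grass"/"sand" should stay low (negative concepts).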
22. Self-learned conceptual explanations

The concept classifiers (dog face and husky face as positive concepts; grass and sand as negative concepts) are combined with the TCAV setup: the domain expert supplies the task and the high-level concepts, the scientist builds the model, and high-level reasoning over the detected concepts produces the explanation.
23. Self-learned conceptual explanations

Model Building Process:
1. Domain expert defines a task: criminal face detection
2. Domain expert defines positive and negative concepts:
   – Positive concepts: eyes, nose or hair
   – Negative concepts: background or face color
3. Scientist uses/prepares training data for the task, and a query engine for the concepts
4. Training the model (see the sketch below):
   – Backpropagation from the task targets (cross-entropy on the logits) for some iterations
   – Query images (e.g. Google search) that represent the positive and negative concepts
   – Apply TCAV
   – Use REINFORCE techniques to force the representation to use positive concepts and to reject negative concepts
   – Repeat until convergence
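A minimal sketch of one training step from step 4 above. The model interface, the CAV tensors and the 0.1 weight are assumptions, and the REINFORCE step from the slide is approximated here by a differentiable concept-alignment reward for brevity:

# Sketch: combine the task loss with a reward for representations that
# align with positive-concept CAVs and avoid negative-concept CAVs.
import torch.nn.functional as F

def train_step(model, optimizer, x, y, pos_cavs, neg_cavs):
    logits, h = model(x)                      # task logits and inner representation h
    task_loss = F.cross_entropy(logits, y)    # backpropagation from the task targets

    # Concept reward: alignment of h with positive minus negative CAVs.
    reward = sum((h @ v).mean() for v in pos_cavs) \
           - sum((h @ v).mean() for v in neg_cavs)

    loss = task_loss - 0.1 * reward           # 0.1 is an illustrative weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()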
24. Use Case: Anomaly detection in VAT data

• Detection of possible faults
• Reduces risk
• Increases data quality

[Diagram: a trading company's VAT filings checked against VAT regulation]
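Purely for illustration, a sketch of such an anomaly detector using an isolation forest; the columns, values and injected faults are invented and do not reflect the actual PwC use case:

# Sketch: flag anomalous VAT line items for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
vat = np.column_stack([
    rng.normal(10_000, 2_000, 1000),   # net amount per line item (invented)
    np.full(1000, 0.20),               # applied VAT rate
])
vat[::100, 1] = 0.05                   # inject a few suspicious rates

model = IsolationForest(random_state=0).fit(vat)
flags = model.predict(vat)             # -1 marks possible faults for review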
27. Summary

Why?
• For business, risk, ethics and law, Explainable AI is a key feature to consider in your development
• 84% of global CEOs think Explainable AI is important

When?
• When the task an AI is automating usually requires explanation from humans
• When the legal, life or cost risk of AI decisions is high

How?
• If the explanation is only needed when things go wrong (e.g. a self-driving car accident), then interpretation or a post-training explanation from the developer is enough
• If the explanation is needed at every decision (e.g. medical diagnosis), then self-learned explanation is important

Takeaways
• Neural Symbolic AI
• The Consciousness Prior
• Concept Activation Vectors