1. Australia’s National Science Agency
Responsible AI
The Australian Approach
Liming Zhu
Research Director, CSIRO’s Data61
Chair, Blockchain & Distributed Ledger Technology, Standards Australia
Expert on working groups:
• ISO/IEC JTC 1/WG 13 Trustworthiness
• ISO/IEC JTC 1/SC 42/WG3 - Artificial intelligence – Trustworthiness
• WEF Quantum Computing Governance & ethical framework
2. CSIRO’s Data61: Australia’s Largest Data & Digital Innovation R&D Organisation
• 1000+ talented people (including affiliates/students)
• Home of Australia’s National AI Centre
• 18+ spin-outs and 130+ patent groups generated
• 200+ government & corporate partners
• 300+ PhD students; 30+ university collaborators
• Facilities: Mixed-Reality Lab, Robotics Innovation Centre, AI4Cyber HPC Enclave
• Focus areas: Responsible Tech/AI; Privacy & RegTech; Engineering & Design of AI Systems; Resilient & Recovery Tech; Cybersecurity; Digital Twin; Spark (bushfire) toolkit
Responsible AI: The Australian Approach
3. Responsible Innovation, Technology and Software
Responsible Software
• Data61 work: J. Whittle, M. Ferrario, W. Simm, W. Hussain: A Case for Human Values in Software Engineering. IEEE Softw. 38(1): 106-113 (2021)
4. • Responsible AI: “the development of intelligent systems according to fundamental human principles and values.” [1]
• Being legal is only the minimum requirement for responsibility; responsibility is the duty you have to others.
• What are these “Principles”? E.g. the AI Ethics Principles. They make sure that “you build the right things”.
• How can you be sure in a verifiable way? “Trustworthy AI” makes sure that “you build in the right ways”.
Responsible AI & AI Ethics Principles
Australia’s AI Ethics Principles
1) Human, societal and environmental wellbeing
2) Human-centred values:
3) Fairness
4) Privacy protection and security
5) Reliability and safety
6) Transparency and explainability
7) Contestability
8) Accountability
5. • Trustworthiness: the ability to meet stakeholders' expectations in a verifiable way.
• Trust: degree to which a user or other stakeholder has confidence that a product or system
will behave as intended.
Trustworthy Technology May Not Gain Trust…
• System trustworthiness
• User/Stakeholder Trust
• Calibrated Trust
Data61 work: Ibarra, Georgina; Douglas, David; Tharmarajah, Meena. Machine
Learning and Responsibility in Criminal Investigation. Sydney, Australia: CSIRO; 2020.
https://doi.org/10.25919/5f8dd4294a47f
6. “It never does just what I want, but only what I tell it.”
• Value alignment problem
• given an optimisation algorithm, how to make sure the
optimisation of its objective function results in outcomes that
we actually want, in all respects? [1]
• impossible (not simply hard) to accurately and completely
specify all the goals, undesirable side-effects and constraints
(including ethical ones)
• Autonomy & Agency
solve problems autonomously, without explicit guidance from a human being
• greater degree of adaptability, interactivity …
Responsible AI – What’s unique?
[2] Data61 work: L. Zhu, X. Xu, Q. Lu, G. Governatori, and J. Whittle,
“AI and Ethics - Operationalising Responsible AI”, Humanity Driven AI
(2021). https://arxiv.org/abs/2105.08867
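The value alignment point above can be made concrete with a toy sketch (illustrative only, not from the talk): an optimiser is handed a proxy objective that omits a constraint we actually care about, and the proxy optimum duly violates that constraint.

```python
# Toy "value alignment" failure (all names and numbers are illustrative).
# We ask an optimiser to maximise a proxy objective (task throughput) but
# forget to encode a constraint we actually care about (effort <= 5).

def proxy_objective(effort: float) -> float:
    # Throughput grows with effort; the specifier forgot any penalty.
    return 10 * effort

def true_utility(effort: float) -> float:
    # What we actually want: throughput, but effort above 5 is unacceptable.
    return 10 * effort if effort <= 5 else float("-inf")

candidates = [float(e) for e in range(11)]           # effort in 0..10
best_proxy = max(candidates, key=proxy_objective)    # optimiser sees only the proxy
best_true = max(candidates, key=true_utility)        # what we actually wanted

print(best_proxy, best_true)  # the proxy optimum breaks the real constraint
```

The gap between `best_proxy` and `best_true` is exactly the slide's point: the system does what it was told (maximise the proxy), not what we want.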
7. Case studies from our AI Ethics Principles pilot
Australia’s AI Ethics Principles:
1) Human, societal and environmental wellbeing
2) Human-centred values
3) Fairness
4) Privacy protection and security
5) Reliability and safety
6) Transparency and explainability
7) Contestability
8) Accountability
8. Principle 1: Human, societal and environmental wellbeing
AI systems should benefit individuals, society and the environment.
9. • Environmental, Social, and Corporate Governance (ESG)
• Blockchain-based ESG certification platform
• Provide verifiable evidence to improve human trust
• Wide range of potential users/stakeholders in the supply chain
ESG Certificates for Process/Products (inc. AI)
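One way a blockchain-style ledger provides the "verifiable evidence" mentioned above can be sketched with an append-only, hash-linked log of ESG attestations (a minimal illustration, not the Data61 platform; all record fields are hypothetical): tampering with any earlier certificate breaks re-verification of the chain.

```python
# Illustrative hash-linked certificate log (not the actual platform).
import hashlib
import json

def _digest(record: dict, prev_hash: str) -> str:
    # Each entry's hash covers its content AND the previous hash,
    # chaining the entries together.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class CertificateLog:
    def __init__(self):
        self.entries = []  # list of (record, entry_hash)

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        h = _digest(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        prev = "genesis"
        for record, h in self.entries:
            if _digest(record, prev) != h:
                return False
            prev = h
        return True

log = CertificateLog()
log.append({"product": "widget", "claim": "carbon-neutral", "auditor": "X"})
log.append({"product": "widget", "claim": "fair-labour", "auditor": "Y"})
assert log.verify()

# Any later edit to an earlier certificate breaks verification.
log.entries[0] = ({"product": "widget", "claim": "forged", "auditor": "X"},
                  log.entries[0][1])
print(log.verify())  # False
```

On a real blockchain the chain head is replicated across parties, so no single participant can silently rewrite history.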
10. Principle 2: Human-centred values
AI systems should respect human rights, diversity, and the autonomy of individuals.
11. Human Values in Responsible Software
• Hussain, W., Perera, H., Whittle, J., et al.: Human values in software engineering: contrasting case studies of practice. IEEE Transactions on Software Engineering (2021)
12. Principle 4: Privacy protection and security
AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
13. Privacy/Security via Federated Learning
• Data61 work: SK Lo, Q Lu, L Zhu, HY Paik, X Xu, C Wang: Architectural patterns for the design of federated learning systems. Journal of Systems and Software (2021)
• Data61 work: SK Lo, Q Lu, HY Paik, L Zhu: FLRA: A Reference Architecture for Federated Learning Systems. European Conference on Software Architecture (2021)
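The core federated learning idea these papers build on can be sketched as toy federated averaging on a one-parameter model (an illustration, not the FLRA reference architecture): clients train locally and share only model parameters, never raw data, and the server aggregates by weighted average.

```python
# Toy FedAvg on the model y = w * x (illustrative sketch).
import random

def local_update(w, data, lr=0.1):
    # One local pass of least-squares gradient descent; raw data stays here.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    # Server sees only parameters, aggregated weighted by dataset size.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

random.seed(0)
true_w = 3.0

def make_client(n):
    data = []
    for _ in range(n):
        x = random.random()
        data.append((x, true_w * x))  # noiseless toy data
    return data

clients = [make_client(20) for _ in range(4)]

global_w = 0.0
for _round in range(30):
    local = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(local, [len(d) for d in clients])

print(round(global_w, 2))  # converges toward the true weight 3.0
```

The privacy benefit is architectural: the server never receives training examples, only the locally updated parameters.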
14. Privacy-by-Design via Privacy Patterns
• Data61 work: Su Yen Chia, Xiwei Xu, Hye-Young Paik, Liming Zhu: Analysing and extending privacy patterns with architectural context. SAC 2021
GDPR & Australian Privacy Principles
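One classic privacy-by-design pattern of the kind such catalogues describe is keyed pseudonymisation (an illustrative sketch; the key and field names are assumptions, not from the paper): direct identifiers are replaced before data leaves the trusted boundary, preserving record linkage without reversibility.

```python
# Illustrative "pseudonymous identity" pattern (all names hypothetical).
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # held only inside the trusted boundary

def pseudonymise(user_id: str) -> str:
    # Keyed HMAC: pseudonyms are stable (so records can still be linked)
    # but cannot be reversed or re-derived without the key, unlike a
    # plain unkeyed hash of a guessable identifier.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "purchase": "book"}
shared = {"user_id": pseudonymise(record["user_id"]), "purchase": record["purchase"]}

assert shared["user_id"] != record["user_id"]          # identifier removed
assert pseudonymise("alice@example.com") == shared["user_id"]  # linkage preserved
```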
15. Principle 6: Transparency and explainability
There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
16. Explainability (& Interpretability) is Complex
• Data61 work: R Hughes, C Edmond, L Wells, M Glencross, L Zhu, T Bednarz: eXplainable AI (XAI): An introduction to the XAI landscape with practical examples. SIGGRAPH Asia 2020
Interpretable by different stakeholders/users with different interests and technical literacy:
• AI experts, software developers/designers, managers, boards, decision makers, users, affected subjects, external auditors, regulators, the public…
Properties of an Explanation (Miller 2019):
○ contrastive
■ i.e. given in response to counterfactual information (e.g. in response to “why did X happen instead of Y?”)
○ selected
■ i.e. from a range of almost infinite causes, we select (in a biased way) the most useful
○ refer to causes, not probabilities
○ social
■ i.e. presented as part of a conversation or interaction, in the context of the beliefs of the explainer and the explainee
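Miller's "contrastive" property can be illustrated with a toy counterfactual search (a sketch using a made-up scoring model and hypothetical feature names, not a production XAI method): we answer "why decline instead of approve?" by finding the smallest change to one feature that flips the decision.

```python
# Toy contrastive/counterfactual explanation (model and features are made up).

def model(applicant: dict) -> str:
    # A transparent linear scoring rule, purely for illustration.
    score = 0.5 * applicant["income"] + 2.0 * applicant["years_employed"]
    return "approve" if score >= 40 else "decline"

def counterfactual(applicant, feature, step, max_steps=1000):
    """Increase one feature until the decision flips; return the change needed."""
    original = model(applicant)
    probe = dict(applicant)
    for i in range(1, max_steps + 1):
        probe[feature] = applicant[feature] + i * step
        if model(probe) != original:
            return probe[feature] - applicant[feature]
    return None  # no flip found within the search range

applicant = {"income": 50, "years_employed": 5}   # score 35 -> decline
delta = counterfactual(applicant, "income", step=1.0)
print(f"Declined; would be approved if income were higher by {delta}")
```

The resulting explanation ("you were declined; an income higher by 10 would flip it") is contrastive and selected: it answers "why X instead of Y?" with one useful cause rather than a full probability breakdown.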
17. Principle 8: Accountability
People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
18. AI Governance is an Ecosystem Problem
Shneiderman, B.: Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 10(4) (2020)
Data61 work: S. Lee, L. Zhu, R. Jeffery: Data Governance Decisions for Platform Ecosystems. HICSS 2019: 1-10
Industry + Organisation + Teams
Data + Model
19. Operationalising via Design and Process Patterns
Data61 work: L. Zhu, X. Xu, Q. Lu, G. Governatori, and J. Whittle: AI and Ethics - Operationalising Responsible AI. Humanity Driven AI (2021). https://arxiv.org/abs/2105.08867
Data61 work: Q. Lu, L. Zhu, et al.: Software engineering for responsible AI: an empirical study and operationalised mechanisms
20. Now & Next in Operationalising Responsible AI/Tech
• Responsible technology
• Requirements engineering
• Data/AI governance and RegTech
• Responsible “AI Engineering”
• Software Engineering (SE) for responsible AI
• Empirical studies for insights
• Trust architectures and design patterns
• Trustworthy business & development processes
• Trustworthy AI
• Trust between human and machines
• Hybrid analytics and visualisation
• Human-Robotics teaming
• Collaborative intelligence
• Design and UX
• Specific to Australia’s 8 AI Ethics Principles
• Social-technical systems (1)
• Human values in software (2)
• Requirements, data and federated learning implications for fairness (3)
• Privacy-preserving technology (4)
• Cybersecurity for AI (4)
• Robotics/Factory safety (5)
• Transparency via trusted model/data/code provenance and integrity (6)
• Explainability via causal inference & provenance (6)
• Outcome-driven continuous validation & AIOps (7)
• Accountability via trusted traceability (8)
21. Collaborating with Data61/D61+ Network
• Collaborative Responsible AI R&D projects with Data61 & its network
• Trialling Responsible Tech
• Deep partnership via shared technology roadmaps
• Making Responsible Tech your competitive advantage
• Culture and awareness via executive training
Contact:
Liming.Zhu@data61.csiro.au
Wilma.James@data61.csiro.au