Australia’s National Science Agency
GenAI in Research
with Responsible AI
Dr/Prof Liming Zhu
Research Director, CSIRO’s Data61
• Expert, OECD.AI – AI Risks and Accountability
• Expert, ISO/SC42/WG3 – AI Trustworthiness
• Member, National AI Centre (NAIC) Think Tank
All pencil drawings in this presentation were created by AI
• Australian government
• “An engineered system that generates predictive outputs such as content,
forecasts, recommendations or decisions for a given set of human-defined
objectives or parameters without explicit programming. AI systems are designed
to operate with varying levels of automation.”
• EU AI Act
• “Software that is developed with one or more of the techniques and approaches
listed in Annex I and can, for a given set of human-defined objectives, generate
outputs such as content, predictions, recommendations, or decisions influencing
the environments they interact with.”
AI Definition – Examples
• Example – animal image classification
• Data -> Features: number of legs, presence of fur, shape of the ears…
• Rule/logic-based
• data, features, rules derived from data, feedback; AI helps manage/derive complex rules
• Machine learning
• Learned model: P = Σᵢ weightᵢ × featureᵢ ; human-designed learning algorithm
• Supervised: labelled data, features, AI learns rules, feedback
• Unsupervised: no labelled data, features, AI learns rules, feedback
• Learning new features (potentially replacing human-designed features and processes)
– Human-interpretable but not used before: texture of fur to size ratio
– Human-interpretable but not easily human-usable: fur colour variance in milliseconds in video
– Not fully interpretable to humans: complex relationship edge orientation + texture gradient + shadowing +…
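The supervised, feature-based pipeline above can be sketched as a tiny logistic-regression classifier: humans design the features (number of legs, fur, ear shape), and the learning algorithm only fits the weights in P = Σᵢ weightᵢ × featureᵢ. Feature values and labels below are invented for illustration.

```python
import numpy as np

# Hand-designed features per animal: [number_of_legs, has_fur, ear_pointiness]
# Labels: 1 = cat, 0 = bird (toy data, for illustration only)
X = np.array([
    [4, 1, 0.9],   # cat
    [4, 1, 0.8],   # cat
    [2, 0, 0.1],   # bird
    [2, 0, 0.2],   # bird
], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learned model: P = sigmoid(sum_i weight_i * feature_i + bias)
rng = np.random.default_rng(0)
w, b = rng.normal(size=3) * 0.01, 0.0

# Supervised learning: labelled data + feedback (gradient of the loss)
# adjust only the weights; the features stay human-designed.
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)  # recovers the labels: [1 1 0 0]
```

The division of labour is the point: the human supplies the representation, the algorithm supplies the weights.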
Approaches & Role of Human Expertise
• Deep learning/neural networks (billions of weights/features)
• No feature engineering; “dumb” algorithm + big data; emergent/alien capabilities
• Non-domain experts improve learning efficiency; domain expert feedback
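A minimal sketch of the shift away from feature engineering, using XOR as a stand-in for a problem with no obvious hand-designed linear feature: a small two-layer network whose hidden layer learns its own internal features from raw inputs and labels alone (toy dimensions and seed are illustrative, not a real deep network).

```python
import numpy as np

# XOR: no single hand-crafted linear feature separates it; the hidden
# layer must learn its own internal features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: learned features
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)          # hidden activations = learned features
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))  # squared error as a progress metric
    dp = p - y                        # cross-entropy gradient w.r.t. the logits
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)     # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= 0.2 * dW1; b1 -= 0.2 * db1
    W2 -= 0.2 * dW2; b2 -= 0.2 * db2

print((p > 0.5).astype(int).ravel())  # after training, approximates XOR
```

Nothing in the code names a feature; the hidden weights W1 end up encoding whatever combination of raw inputs the task demands, which is why such learned features can be hard to interpret.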
Approaches & Role of Human Expertise
Encoding human expertise ->
Learning human-interpretable expertise from human expertise and data ->
Invalidating human expertise and human processes ->
Explaining alien intelligence in human-understandable terms
Deep Neural Network -> ChatGPT
Reinforcement Learning
AI learns to make decisions by interacting
with an environment to maximize
cumulative reward through trial & error.
https://www.understandingai.org/p/large-language-models-explained-with
https://huyenchip.com/2023/05/02/rlhf.html
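The trial-and-error loop described above can be sketched with tabular Q-learning on a toy five-state corridor (environment, rewards, and hyperparameters are invented for illustration; in RLHF, a reward model learned from human preferences stands in for the hand-coded reward used here).

```python
import numpy as np

# Tabular Q-learning on a 5-state corridor: start at state 0, reward 1 for
# reaching state 4. The agent maximizes cumulative reward by trial & error.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for _ in range(500):                # episodes of trial and error
    s = 0
    while s != 4:
        # epsilon-greedy: mostly exploit current Q, sometimes explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)  # the learned policy moves right from every non-terminal state
```

No one programs the policy; it emerges from interaction with the environment and the reward signal, which is the core idea the RLHF links above build on.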
Foundation Models – Generality is Free?
Problem-specific training + generalization --> general capability training + adaptation
Value of unique data & science expertise in training vs predicting?
Bommasani, R. et al., 2022. On the Opportunities and Risks of Foundation Models.
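One common concrete form of “general capability training + adaptation” is a linear probe: the pretrained model is frozen and reused as a feature extractor, and only a small task-specific head is trained on the downstream data. A minimal sketch, with a fixed random projection standing in for the frozen foundation model (all data and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen "foundation model": maps raw inputs to a general
# feature space and is never updated during adaptation.
W_frozen = rng.normal(size=(10, 64))
def frozen_features(x):
    return np.tanh(x @ W_frozen)

# Downstream task data: label depends on the sum of the raw inputs.
X = rng.normal(size=(200, 10))
y = (X.sum(axis=1) > 0).astype(float)

# Adaptation: train only the small linear head (logistic regression).
H = frozen_features(X)
w, b = np.zeros(64), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(H @ w + b)))
    w -= 0.1 * (H.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = float(np.mean(((H @ w + b) > 0) == (y > 0.5)))
print(f"train accuracy of the adapted head: {acc:.2f}")
```

The cost structure inverts: the expensive general training happens once, and each new problem pays only for the cheap head, which is why unique data and expertise may matter more at adaptation/prediction time than at training time.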
Generative AI
•Text
•Image/video
•Code/Scripts
•Data
Predictive
Diagnostic
Generative AI – Generate Anything?
Prescriptive
• Hypothesis
• Experiment plans
• New data
• Scripts to run/analyze experiments
• Diagnostic results
• Predictions/Answers
• Programs/Scripts for Predictions/Answers
…
Science Discovery with GenAI/FM
• General Capability
• Smart interns/colleagues or tools
• Ease of Access
• Cost-benefit analysis/plan -> low-cost experiments
• Changing nature/role of scientist expertise
• Explanation & understanding
• Changing x-discipline collaboration mode
• Reverse Conway’s law
• One all-discipline science FM
• No fancy curriculum tricks
• Discipline data WGs
• Responsible AI & Trustworthiness
Trillion Parameter Consortium
Example: AI CoPilot for Science
Responsible AI – Regulation & Ethics
Australia’s AI Ethics Principles (developed by Data61)
1) Human, societal and environmental wellbeing
2) Human-centred values
3) Fairness
4) Privacy protection and security
5) Reliability and safety
6) Transparency and explainability
7) Contestability
8) Accountability
Australia’s Responsible AI Network (RAIN)
Minister Husic: “I'm determined that we go further than ethics principles. I
want Australia to become the world leader in responsible AI.”
Best Practices for Responsible (Generative) AI
Lu, Q., Zhu, L., Xu, X., Xing, Z., Whittle, J., 2023. Towards Responsible AI in the Era of ChatGPT: A Reference
Architecture for Designing Foundation Model-based AI Systems. http://arxiv.org/abs/2304.11090
CSIRO Responsible AI (RAI)
Pattern Catalogue
• RAI-by-Design Products
• Development Processes
• Governance
https://research.csiro.au/ss/science/projects/responsible-ai-pattern-catalogue/
Summary & Questions
Science Discovery with this new wave of AI
• General capabilities/“interns” vs specific tools
• Low-cost experimentation vs problem-driven planning
• Value of unique data & scientist knowledge
• Responsible AI for science
• Multi-use opportunities and risks
• Trustworthiness and trust in science
More info & Contact
https://research.csiro.au/ss/
Liming.Zhu@data61.csiro.au
Brendan.Omalley@data61.csiro.au
Coming out late 2023
For the latest, follow me on
Twitter: @limingz
LinkedIn: Liming Zhu
Collaborate with CSIRO’s Data61 on
• (Responsible) AI Engineering best practices & governance
• LLM/Foundation model-based system design/eval
