At the recent Generative AI Conference, this talk defined deepfakes, explained the widespread damage misinformation can cause, and built awareness of the ethical implications of deepfakes. At the National AI Centre, Responsible AI and the Responsible AI Network give us a way to use AI that is aligned with Australia's AI Ethics Principles.
Dissecting the dangers of deepfakes and their impact on reputation - Generative AI Conference
1. The National AI Centre is funded by the Australian Government and coordinated by CSIRO
Rita Arrigo | July 2023
Strategic Engagement, National AI Centre
Dissecting the dangers of deepfakes and their impact on reputation
2. • About my journey
• Defining what a deepfake is and the widespread damage misinformation can cause
• You don't know the power of the dark side – understanding why deepfakes can seem like a reasonable use of AI
• Building awareness of the ethical implications of deepfakes
• National AI Centre, Responsible AI and the Responsible AI Network
• LLMs
• Responsible AI Network: how to get involved
In this session…
3. 1988 – My first role: PC support
1990–95 – HP hardware dealer management
1994–2000 – 3RRR, Byte Into It
1995 – Australia's first internet café
1997–2000 – Telstra
2000–2004 – Optus/Singtel
2005–2010 – Digital agencies
2010–2016 – Collaboration/digital consulting
2011–2016 – Mbug/CloudBug
2016–2020 – Microsoft Digital Advisor, AI Ambassador
2017 – Leonardo, Science Gallery Melbourne
2021 – Digital Lead at Frazer-Nash
2022 – RMIT CIAIRI, Centre of Industrial AI Research and Innovation
2023 – CSIRO National AI Centre
My Journey
4. Let's take a look at Deepfakes
https://youtu.be/mPtcU9VmIIE
5.
6. https://youtu.be/FrdhsX8R_AY
sharenting
7. • Deepfakes pose a serious threat to our digital society by fuelling the spread of misinformation. It is essential to develop techniques that both detect them and alert the human user to their presence.
• Using sophisticated machine learning (deep learning), we are able to produce convincing depictions of individuals doing or saying things without their consent or knowledge
• Blurring the line between fact and fiction, undermining public trust in images and videos
Deepfake Threats
8. • Deepfake detection systems are a young but active field, with many novel architectures proposed to detect videos or images where faces have been digitally manipulated
Deepfake creators and Deepfake detectors
Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines | DeepAI
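Most video deepfake detectors score individual frames with a classifier and then aggregate those scores into one video-level decision. A minimal sketch of that aggregation step, assuming the per-frame fake probabilities come from some frame classifier not shown here:

```python
# Toy video-level decision built from per-frame deepfake scores.
# The frame scores themselves are assumed inputs (produced by a
# frame-level classifier that is not implemented here).

def video_fake_score(frame_scores, top_k=3):
    """Average the top-k most suspicious frame scores.

    Averaging only the top-k keeps a video with a few heavily
    manipulated frames from being diluted by its clean frames.
    """
    if not frame_scores:
        raise ValueError("no frame scores given")
    k = min(top_k, len(frame_scores))
    worst = sorted(frame_scores, reverse=True)[:k]
    return sum(worst) / k

def is_deepfake(frame_scores, threshold=0.5):
    """Flag the video when the aggregated score crosses the threshold."""
    return video_fake_score(frame_scores) >= threshold
```

For example, a video whose frames score [0.1, 0.2, 0.9, 0.95, 0.1] is flagged, because its three most suspicious frames average well above the threshold even though most frames look clean.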
9. • Deepfake Detection Challenge Results: An open initiative to advance AI (meta.com)
• Contributing Data to Deepfake Detection Research – Google Research Blog (googleblog.com)
Both Meta and Google are investing in Deepfake Detection
10. • A Dark Pattern (DP) is an interface maliciously crafted to deceive users into performing actions they did not mean to do.
• DPs make customers/users unhappy and cause them to lose (digital) trust in a business.
• They disregard human values, try to manipulate user behaviour and even steal users' personal data and money.
• On 15 March 2022, the EDPB adopted Guidelines on dark patterns in social media platform interfaces.
Dark Pattern Detection
Percentage of dark patterns by dataset:
• Our test set (1,023): 56.2% contain a DP, 43.8% do not
• Shopping websites (11K): 11.1% contain a DP, 88.9% do not
• Mobile apps (250): 95% contain a DP, 5% do not
website: www.darkpatterns.online/web
11. • You are required to subscribe to Alarmy so that you can use their service
• They offer a "Free Trial", but you need to enter your account details
• What you may miss is the sentence at the bottom…
• "7 days free, then $84.00/year"
Example
website: www.darkpatterns.online/web
12. • Paula searched for the Commonwealth Bank website and filled out an enquiry form
• A man claiming to be a Commonwealth Bank account manager phoned and emailed Paula, and tricked her into putting $75,000 into a fake investment
• Scammer emails
• A general warning is not enough
• Macquarie Bank warning customers about government bond scams
• Fake website detection
website: www.darkpatterns.online/web
https://www.9news.com.au/national/sydney-woman-loses-75k-after-buying-fake-government-bonds/dc071e0a-76db-42bf-8035-fb6762592c81
13. The Solution
Dark Patterns Database
- Understand and avoid DPs
Dark Pattern Detection Tool
- Automatic detection and mitigation
- Educational effect
website: www.darkpatterns.online/web
15. • Screen Level
• Static dark patterns
• Dynamic dark patterns
• Element Level
• Six Core Properties
– Element Types and their coordinates
– Text Content
– Element Status
– Icon Semantics
– Text Colour and Background Colour
– Element Relationship
Characteristic Analysis
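The six element-level properties above can be modelled as one feature record per detected UI element. A minimal sketch of such a record, plus one toy check built on it; the field names and the contrast heuristic are my own illustration, not UIGuard's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UIElement:
    """One detected UI element carrying the six core properties
    described above (illustrative schema, not UIGuard's real one)."""
    element_type: str        # e.g. "checkbox", "button"
    bbox: tuple              # (x, y, width, height) on screen
    text: str                # visible text content
    status: str              # e.g. "checked", "disabled"
    icon_semantics: str      # meaning of any icon, e.g. "close"
    text_colour: tuple       # RGB of the text
    background_colour: tuple # RGB behind the text
    related_ids: list = field(default_factory=list)  # element relationships

def low_contrast(el: UIElement, min_diff: int = 100) -> bool:
    """Crude check for a common dark pattern: text whose colour barely
    differs from the background (e.g. a visually buried opt-out)."""
    diff = sum(abs(a - b) for a, b in zip(el.text_colour, el.background_colour))
    return diff < min_diff
```

Screen-level detectors (static and dynamic) would then combine many such element records per screen; the point here is only that each of the six properties maps naturally onto a field.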
16. Our Solution – UIGuard
• Good explainability
• End-users can see why and how the AI system makes decisions
24. “The development of intelligent systems according to fundamental human principles and values.” (Dignum 2019)
Australia’s AI Ethics Principles
1) Human, societal and environmental wellbeing
2) Human-centred values
3) Fairness
4) Privacy protection and security
5) Reliability and safety
6) Transparency and explainability
7) Contestability
8) Accountability
What is Responsible AI?
25. Build Australia’s Responsible AI Advantage
Enable responsible AI adoption across industry
Grow a responsible AI industry in Australia
National AI Centre (NAIC)
26. National AI Centre
Get Started: AI Innovation Pathways
Uplift Practice: Responsible AI Network
Get Connected: Ecosystem Discoverability Portal
Accelerate Business Innovation | Strengthen AI Ecosystem
Accelerating positive AI adoption and innovation that benefits Australia’s business and community.
Stay in touch: www.csiro.au/naic, naic@csiro.au
29. The Australian: CSIRO leads on Responsible AI
InnovationAus: Businesses offered a crash course on AI ethics
Information Age: Don't blindly trust AI vendors - CSIRO warns of legal and reputational risk
Manufacturers Monthly: New report to help businesses implement responsible AI
Technology Decisions: CSIRO publishes report on ethical use of AI
TechXplore: New report to help businesses implement responsible AI
Information and Data Manager: Guide on How to Implement Responsible AI
2GB radio: featured in news updates throughout the day on the 22nd
• DISR and Minister social media posts: LinkedIn post, Twitter post
• Gradient Institute social media posts: LinkedIn, Twitter
• CSIRO social media posts: LinkedIn, Twitter
Recently Launched Paper
31. Jensen's Kitchen was a lie: Nvidia reveals GTC 2021 keynote nearly 100% fake
• Using thousands of photos, a digital model of the kitchen was created, including a volumetric capture of Jensen
Jensen's Kitchen was a lie: Nvidia reveals GTC 2021 keynote nearly 100% fake | TechRadar
32. A perfect virtual replica of Huang’s kitchen, complete with a digital clone of the CEO himself.
To create a virtual Jensen, teams did a full face and body scan to create a 3D model, then trained an AI to mimic his gestures and expressions and applied some AI magic to make his clone realistic.
33. Have I Been Trained
A tool to find out if a particular image was used in model training
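Lookups like this are typically built on perceptual hashes: every training image is hashed into a short fingerprint, and a query image matches if its fingerprint is within a few bits of one in the index. A toy average-hash version, assuming tiny grayscale pixel grids as input (an illustration of the idea, not how Have I Been Trained actually works):

```python
def average_hash(pixels):
    """Average hash of a small grayscale image (list of rows of 0-255
    values): each bit records whether a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def in_index(query, index, max_dist=2):
    """Return True if the query image's hash is within max_dist bits
    of any hash in the (pre-built) training-set index."""
    q = average_hash(query)
    return any(hamming(q, h) <= max_dist for h in index)
```

Because the hash survives small edits (resizing, mild compression), a near-duplicate of an indexed image still matches, which is what makes this useful for training-data lookups.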
34. C2PA: Coalition for Content Provenance and Authenticity (C2PA) Releases Specification of World's First Industry Standard for Content Provenance
Microsoft will ID its AI art with a hidden watermark | PCWorld
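Content provenance of this kind works by binding a signed claim ("this was produced by generator X") to the exact bytes of the content via a hash. A minimal HMAC-based sketch of that binding; real C2PA uses X.509 certificates and a structured manifest format, not this toy scheme:

```python
import hashlib
import hmac
import json

SECRET = b"signer-private-key"  # stand-in for a real signing key

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a provenance claim to the content bytes via a hash,
    then sign the claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid AND that the content bytes
    still match the hash in the claim (i.e. nothing was tampered with)."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Changing even one byte of the content breaks verification, which is the property that lets viewers trust a "made by AI" label travelling with an image.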
35. • Singer Taylor Swift and actress Emma Watson have been victims of deepfake porn
• There are first-hand testimonies of women, from academics to activists, who were shocked to discover their faces in deepfake porn
• 96 percent of deepfake videos online are non-consensual pornography, and most of them depict women, according to a 2019 study by the Dutch AI company Sensity.
Sextortion Schemes & AI Porn
• An expanding "cottage industry" around AI-enhanced porn, with many deepfake creators taking paid requests to generate content featuring a person of the customer's choice.
• In June 2023, the FBI issued a warning about "sextortion schemes," in which fraudsters capture photos and videos from social media to create "sexually themed" deepfakes that are then used to extort money
36. Voyager: the first LLM-powered embodied lifelong learning agent in Minecraft, which continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention
37. Holographic Clinical Simulations
The patient's condition improves or deteriorates as you make decisions and perform interventions.
ChatGPT-generated conversations personalised for the learner
42. • Deepfakes can spread misinformation, but they can also be used for good.
• After deepfake detection comes watermarking
• Get involved in understanding more about Responsible AI
Conclusion
43. Follow NAIC on LinkedIn: https://www.linkedin.com/company/national-ai-centre/
Join the Responsible AI Network: https://www.csiro.au/naic
Connect with Australia’s AI industry: https://www.csiro.au/naic
Join the AI conversation: #AIAustralia
Thank you!
Rita Arrigo
Strategic Engagement, National AI Centre
NAIC@csiro.au, https://www.csiro.au/naic
Editor's Notes
I was an early, early adopter of bleeding edge technology starting out in the stone age of 1988 and have covered a lot of ground and continue to do so exploring new terrain.
I know I look good for my age - I’m actually a deep fake.
Billed as “art meets artificial intelligence”, Dalí Lives was created by pulling more than 6,000 frames from old video interviews and processing them through 1,000 hours of machine learning. The text was comprised of quotes from interviews. This deepfake has interactivity, where 45 minutes of footage split over 125 videos allows for more than 190,000 possible combinations depending on visitor responses and even includes comments on the weather. It finishes with Dalí turning around and snapping a selfie with his audience. Dalí claimed it was unlikely he would ever die, and maybe he was right, because he was brought to life a second time recently by Samsung’s AI lab in Moscow, this time by training AI on landmark facial features from just a handful of images rather than the usual thousands.
Deepfakes of kids are a big problem: the kids become victims without ever consenting to having their photos online or public. But of course, anyone with pictures online is at risk of being targeted.
Figure 1: A. Overview of our framework, which uses an artifact attention module that highlights defects to improve classifier performance. Through supervision with human labels, we guide this module to learn artifacts visible to humans, then amplify them to generate Deepfake Caricatures: transformations of the original videos where artifacts are more visible. B. Example frames of a standard deepfake video (right) and a deepfake caricature (left). Our method distorts the fake video to exacerbate artifacts and unnatural motion produced by the deepfake process. The caricatures are best experienced in video form. See supp. material for caricature videoclips
CSIRO is investing in Dark Pattern Detection
https://www.toptal.com/designers/ux/dark-patterns
https://www.toptal.com/designers/ux/dark-patterns
If we could provide some hints to her
https://www.toptal.com/designers/ux/dark-patterns
Dark Pattern Examination
End-users may be easily tricked by DPs contained in many websites/apps due to the lack of awareness. Our tool can highlight potential DPs to get end-users informed and help them make a reasonable decision.
User interfaces are designed by designers. However, designers often gain inspiration from others' designs, which may carry over unnoticed dark patterns. Our tool helps them avoid this situation.
Regulators are paying more attention to dark patterns. However, it is hard to examine all websites and apps. Our tools help regulators implement their policies, examine violations and build up digital trust.
Real-time audio deepfakes will be a huge problem for organisational fraud, with the highest threat to organisations being impersonation.
When we design AI Systems in accordance with human values.
The National AI Centre (NAIC) is funded by the Australian Government and coordinated by CSIRO (Australia’s National Science Agency). NAIC is supported by Foundation Partners Google and CEDA.
In the last few years there has been an increased focus on the social (and environmental) impact of AI; how AI is governed by organisations and a push by governments to regulate AI.
It’s led to a strong growth in research in what we call responsible or ethical AI and how we can sustainably develop and use AI.
In the same way that cars evolved into regulation with seat belts, air bags, licenses, AI is at that stage of evolution
The demo is the epitome of what GTC represents: It combined the work of NVIDIA’s deep learning and graphics research teams with several engineering teams and the company’s incredible in-house creative team.
Digital Jensen was then brought into a replica of his kitchen that was deconstructed to reveal the holodeck within Omniverse, surprising the audience and making them question how much of the keynote was real, or rendered.
Microsoft will ID its AI art with a hidden watermark | PCWorld
The Coalition for Content Provenance and Authority (C2PA) began work in 2021 to develop an open standard for indicating the origin of digital images, and whether they were authentic or AI-generated. The issue was thrust into the spotlight in March, when AI-generated images of the Pope in a stylish puffy jacket went viral, and AI-art generator Midjourney clamped down to prevent even more. Microsoft has agreed to sign all AI art that its apps generate with a cryptographic watermark indicating it was made with an algorithm.
Ai deepfake: In age of AI, women battle rise of deepfake porn - The Economic Times (indiatimes.com)
Holoscenario: what you would expect students to experience if they were in a clinical ward.