The document discusses various scenarios for the future of artificial intelligence (AI). It notes that AI capabilities are increasing quickly and may lead to better or worse outcomes depending on human choices. Within a few years, AI could match or surpass human-level intelligence and creativity, disrupting many fields and taking over most jobs. However, AI systems today still sometimes operate in unexpected or harmful ways. The document argues that more focus is needed on understanding AI risks and safety so that AI is developed and used beneficially. Strong oversight and controls may be needed to prevent unintentional or malicious outcomes from advanced AI. Overall, the future remains uncertain, but positive outcomes are possible through openness, cooperation, and prioritizing safety.
The Future of AI: Scenarios, Ethics, and Regulations
1. @dw2 Page 1
The future of AI? Scenarios, Ethics, and Regulations
David Wood (@dw2)
2. The future of AI (in one slide)
• Whatever you think AI might do in the next 3-5 years
• It will probably do more
• AI will cause multiple disruptions (in every field of life & business)
‒Impractical, low-value approaches become viable game-changers
‒Slow, slow, slow, then fast, FAST, FAST
‒So we all need to be better at anticipating and managing disruptions
• More than that: AI might cause a Singularity (in 3, 10, or 25 years)
‒Humans no longer the most important “players” in “the game”
‒AI will produce the best creativity
‒AI will produce the best science, engineering, and medicine
‒AI will take all the (best) jobs
‒AI will take control of the planet?!
“The Economic Singularity”
3. When will the first weakly general AI system be devised, tested, and publicly announced?
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
[Chart: Metaculus community median estimate, plotted quarterly from Apr 2020 to Oct 2023. Successive readings: Sep 2055, Oct 2046, Oct 2033, Aug 2035, Apr 2038, Sep 2042, Apr 2047, Feb 2042, Feb 2043, Nov 2028, Oct 2028, Nov 2027, Nov 2026, May 2026, Feb 2027 – with swings of +35 yrs, +13 yrs, and +6 yrs annotated along the way. Milestones marked: early generative AIs (Midjourney, DALL-E, GPT-3), then ChatGPT & GPT-4, with speculative markers for GPT-5 and GPT-6. Latest distribution: 25% confidence by June 2024, 75% confidence by 2030 – i.e. possibly less than 3 years away.]
4. Geoffrey Hinton (age 75)
University of Toronto; Google Brain; “Godfather of Deep Learning”
time.com/6273743/thinking-that-could-doom-us-with-ai/
“Until quite recently, I thought it was going to be like 30 to 50 years before we have general purpose AI. Now I think it may be 20 years or less. Some people think it could be like 5. I wouldn’t completely rule that possibility out now, whereas a few years ago I would have said ‘no way’.”
“Are we close to the computers coming up with their own ideas for improving themselves?” – “Yes, we might be.”
“Then it could just go fast?” – “That’s an issue. We have to think hard about how to control that.”
“And can we?” – “We don’t know. We haven’t been there yet. But we can try.”
“What do you think the chances are of AI just wiping out humanity?” – “It’s not inconceivable.”
He resigned.
5. Four approaches to the future
• Techno-ignorant: not sufficiently aware of the scale of change
• Techno-suppressive: too much good potential to stop all development
• Techno-promiscuous: powerful tech can magnify wrong impulses
• Techno-agile: use the steering wheel, brakes, and accelerator
(Spectrum: Techno-sceptical – Techno-conservative – Techno-accelerator – Techno-progressive)
“Don’t look up!” “Go back!”
6. “EXTINCTION BAD” / “If you can’t steer, don’t race”
Protest, Parliament Square, Central London, Oct 21st 2023
“Just don’t build AGI until there is expert consensus that it won’t cause human extinction”
8. Ethics and regulation (in one slide)
• Governments (post Bletchley) are going to impose some controls
‒Incentives, procurement rules, standards, regulations, penalties
• The market will demand regulations – evidence of safety
• Companies will actually request regulations – and enforcement
‒Smart companies will get ahead of the regulatory curve
• Two main scenarios:
‒Suicide race (ethics abandoned)
‒Cooperation for sustainable superabundance (ethics upheld)
• Three ethical choices (vs. “convenience”, “raw power”):
‒Truth and understanding (vs. Deception and wishful thinking)
‒Trust and reliability (vs. Manipulation and populism)
‒Togetherness and sustainability (vs. Tribalism and partisanship)
9. “Google accused of directing motorist to drive off collapsed bridge”
https://www.bbc.co.uk/news/world-us-canada-66873982, 22nd Sept 2023, Philip Paxson, Hickory, North Carolina
Human vandals had recently damaged some warning signs
Bad human behaviour + Bad AI implementation -> Catastrophe
Misguided humans + Misguided AI -> Catastrophe
10. AI technology that deeply exploits human psychology?
AI technology designed to make money for social media platforms by keeping users engaged
12. Lieutenant-Colonel Stanislav Petrov
https://en.wikipedia.org/wiki/Stanislav_Petrov
Context: Yuri Andropov, USSR Premier, Nov 1982 to Feb 1984
KAL 007, 1 Sept 1983: shot down by a Soviet missile; all 269 on board killed, including a member of the US House of Representatives. Ronald Reagan: “The Korean Air Massacre”
26 Sept 1983: an alarm system indicated incoming US missile(s). Protocol dictated that Petrov urgently inform his superiors. Petrov declined to follow orders.
Recognition: World Citizen Award; “The Man Who Saved The World”; Future of Life Award
14. Lion Air Flight 610
Domestic flight inside Indonesia
29 October 2018
189 people on board
Ethiopian Airlines Flight 302
Addis Ababa, Ethiopia to Nairobi, Kenya
10 March 2019
157 people on board
Both flights used Boeing 737 Max aircraft
A (very safe) Boeing 737 design, pushed to the “max”
Airplane could become unstable in some circumstances
Hence introduced MCAS: Maneuvering Characteristics Augmentation System (AI)
Automatically push down the airplane nose in some emergency(?) situations
Pilots could in theory override this, but needed specialist training (skipped)
Jan 2021: Boeing paid fines of over $2.5 billion after being charged with fraud
Total financial impact on Boeing: $20B to $60B
15. AI now and future: Big picture
• AI can (sometimes) produce very good outcomes
• AI sometimes produces bad outcomes
• AI doesn’t always operate as hoped (over-hyped?)
• AI capabilities are changing (increasingly) quickly
16. Reasons AI will improve
• Demand: AI breakthroughs are commercially important and geopolitically important -> intense interest in improvements
• There are many ways to multiply effort: education, communities, templates, tools (e.g. AutoML), AI improving AI
• There’s a huge “supply line” of new ideas to be explored: Large Language Models (GPT-5), combination models, more insight from the brain, other biological metaphors, quantum algorithms, causation models, decentralised networks…
• Each new generation of AI will help people produce the next generation of AI more quickly
17. AI now and future: Big picture
• AI can (sometimes) produce very good outcomes
• AI sometimes produces bad outcomes
• AI doesn’t always operate as hoped (over-hyped?)
• AI capabilities are changing (increasingly) quickly
• Greater AI capabilities could lead to better and worse outcomes
The outcome depends on us – our human insight, choices, and skills
Book: “The Abolition of Aging”
19. The way forwards: Hard but not impossible
• Don’t pause AI, but do pause AGI
‒Pause the training of next generation frontier models
• Much more focus on shared understanding of AI risks & AI safety
‒Shared understanding of risks -> shared desire for global cooperation
‒Options for (e.g.) tamper-proof remote monitoring and remote switch-off
‒Redeploy expert resources from other fields to AI risks and AI safety
• Don’t allow AI developers to “mark their own homework”
‒Independent auditors, supported by governments worldwide (G7+)
‒Applies to open source as well as closed source
• Avoid allowing AI to develop its own volition / autonomy
‒Superintelligent AI should be a passive tool
‒Independent verification of AI recommendations before real-world action
20. The 7 most important characteristics for success over the next 2-3 years
Characteristics: fast-learning; agile; insightfully collaborative; emotionally resilient; trustable; politically astute
Supporting practices: unlearning; uncertainty -> sprints; feedback -> pivots; partner selection (and deselection); community participation; building and managing coalitions; honest communications; fail forward; understand big picture; agile regulations & incentives; integrity; inspire resilience; use tech to boost learning; revised social contract; aware of key open questions; learning by doing
VitalSyllabus.org