DeepMind’s Deep Q
Learning Circa 2015
Agent fed sensory input
(pixel stream) with the goal
of maximising the score on
screen - no rules or
domain knowledge
given.
Starts off clumsy..
Much better after 2 hrs..
After 4 hrs it learns to
dig a tunnel through to
top side..
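As a hedged illustration of the learning rule underneath: below is a minimal tabular Q-learning sketch in Python. The 2015 DQN replaced the table with a deep network over raw pixels; the toy environment here is a hypothetical stand-in, not DeepMind's Atari setup.

```python
# Minimal tabular Q-learning sketch (DQN swaps the table for a neural net over pixels).
# The step() environment is a hypothetical stand-in, not the Atari setup.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))       # value estimates, learned from reward alone
alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: reward only for reaching the last state."""
    next_state = (state + action + 1) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # epsilon-greedy: mostly exploit current estimates, occasionally explore
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Bellman update: nudge Q towards reward + discounted best future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```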
AlphaFold
AI – Risks,
Opportunities &
Ethical Issues
Adam Ford • 2022-10 • @adam_ford
Will AI be able to self-improve?
How can we ethically align powerful AI, and
what would make it easier to do so?
Can AI understand stuff?
What role might causal learning play?
WHY
ADDRESS
THE ISSUE?
ROI - which interventions are most
effective?
https://www.givingwhatwecan.org/research/the-moral-imperative-towards-cost-effectiveness
Note: DALY = Disability Adjusted Life Years, QALY = Quality Adjusted Life Years
Reducing impact of the AIDS virus
ROI - Effective Interventions p2
Green Revolution (Norman Borlaug, agriculture) ≈ 250 million
saved (though there were environmental impacts, loss of
biodiversity, deforestation etc.)
Smallpox Vaccine (Edward Jenner, medical) ≈ 500 million saved
Haber-Bosch process (Fritz Haber, nitrogen for fertilizer in
agriculture) ≈ 1 billion saved (though it was also involved in the
development of explosives, and Fritz Haber was involved in
developing chemical weapons)
https://en.wikipedia.org/wiki/Green_Revolution, https://en.wikipedia.org/wiki/Smallpox_vaccine, https://en.wikipedia.org/wiki/Haber_process
Avoiding Extinction Saves Lives!
Existential Risks
Extinction means Damnation of the
Limitless Potential of Knowledge
David Deutsch - Beginnings of Infinity - “Everything not
forbidden by the laws of nature is achievable, given the right
knowledge”
Any physically possible activity can be achieved
provided the knowledge to do so has been acquired.
● Generation of better knowledge acquisition systems -
better computation and information processing
● Space colonisation (inner and outer)
● Better energy generation (Dyson swarms, Matrioshka brains)
● Better governance systems bestowing more well-being
on all sentience... can we engineer better qualia?
AI as PASTA: Process for Automating
Scientific and Technological Advancement
The bottlenecks to innovation (education, human
minds, societal factors.. etc.) could be alleviated by
Superintelligent (Transformative) AI - leading to
explosive growth in knowledge, theoretical and
practical, in the form of scientific and technological
advancement - leading to a kaleidoscope of
possible futures. https://www.cold-takes.com/this-cant-go-on/#scientific-and-technological-advancement
The future could
be really ace - we
have so much to
look forward to!
Derek Parfit Nick Bostrom
Population
Ethics &
Mitigating
Existential
Risks - Great
ideas, but
who’s
listening?
Navigating the Future
Technological Maturity
A technologically mature civilisation is robust, antifragile, able to
coordinate effectively, and likely would have solved
ethics.
“a technologically mature civilization could (presumably) engage in
large-scale space colonization... be able to modify and enhance
human biology... could construct extremely powerful computational
hardware and use it to create whole-brain emulations and entirely
artificial types of sentient, superintelligent minds” (Bostrom, 2012)
Technological Maturity
The Ongoing Avoidance of Extinction (OAE)
Compassionate Stewardship: Nurturing a
posthuman civilisation of invincibly mirthful
beings animated by an unlimited
kaleidoscope of novelty and gradients of
bliss.
What’s the ROI on
Preserving the
Long Term
Future?
Longtermism - Earth Only
● All the people who have died in the last
200,000 years - 109 billion.
● All the people alive today - 7.95 billion.
● The number of people who have died is
14 times the current population.
● If we project 800,000 years into the
future, with the earth’s population
stabilizing at about 11 billion, with life
expectancy at 88, there will be 100 trillion
people born.
● 5.4e+17 lives until the sun becomes a red giant
https://ourworldindata.org/longtermism
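A quick back-of-envelope check of the figures above (a sketch, using only the assumptions stated on this slide):

```python
# Sanity-check of the slide's arithmetic (assumptions as stated above).
past_deaths, alive_today = 109e9, 7.95e9
print(past_deaths / alive_today)           # ~13.7, i.e. roughly 14x

years, population, life_expectancy = 800_000, 11e9, 88
births = population * (years / life_expectancy)
print(f"{births:.1e}")                     # ~1.0e14, i.e. ~100 trillion people
```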
Habitable Planets in Milky Way ≈ 30 billion
ROI:
Optimisation
Power
Intelligence
is Powerful
Superintelligence is an
Ethical Concern
Intelligence is Powerful
(and potentially dangerous)
Intelligence got our ancestors from mid-level scavengers to apex predators
and beyond
● from hand axes → the Accuracy International L115A3 sniper rifle → fully
automated drones
● from barefoot meandering → heavier than air flight → escape velocity
→ landing robots on Mars, probes escaping our solar system
● artificial intelligence..
If we substitute human minds with better-designed minds, running at 1
million times human clock speed, we would likely see a much faster rate of
progress in technoscience.
Force Multipliers
Force multipliers in multiple areas of science
and technology are setting the stage for AI driven
productivity explosion
Examples: higher resolution brain scanners, faster
computing hardware, exotic forms of computation,
improved algorithms, improved philosophy
●AI is a force multiplier feedback for
progress in the above examples
https://www.cold-takes.com/most-important-century/
Zooming in we can map economic growth over past ~75 years
Zooming out: we live within an economic surge
Where to next?
Productivity Feedback Loops have precedent
Economic history: more resources -> more people -> more ideas -> more resources ...
What
really
happens
https://www.cold-takes.com/most-important-century/
..though *unselfish AI may
restore the dynamic of:
more resources -> more
ideas
Intelligence via Evolution vs the new signature of Intelligent Design
Homo sapiens intelligence - the result of "blind" natural
selection - has, by evolutionary timescales, only recently
developed the ability for planning and forward thinking -
deliberation. Almost all our cognitive tools were the result
of ancestral selection pressures, forming the roots of almost
all our behaviour. As such, when considering the design of
complex systems where the intelligent designer - us -
collaborates with the system being constructed, we are
faced with a new signature and a different way to
achieve General Intelligence, completely different from
the process that gave birth to our brains.
AI is powerful now, and will be more powerful
in the future
Syllogism 1: intelligence is powerful; AI is intelligent; therefore AI is
powerful.
Syllogism 2: technological progress is a much faster force multiplier
than evolutionary progress; AI is subject to technological progress
and HI is subject to evolutionary progress; therefore AI will become
smarter faster than HI.
Syllogism 3: more intelligence is more powerful than less
intelligence; AI will overtake HI; therefore AI will be more powerful
than HI.
A faster future than we think - are we ready?
1. There is a substantial chance of Human Level AI (HLAI)
before 2100;
2. If HLAI, then likely Super Intelligence (SI) will follow via an
Intelligence Explosion (IntExp);
3. Therefore an AI-driven productivity explosion is likely;
4. An uncontrolled IntExp could destroy everything we value,
but a controlled IntExp would benefit life enormously -
shaping our future light-cone.
*HLAI= Human Level AI - The term ‘human-level’ may be a misnomer - I will discuss later
[The Intelligence Explosion - Evidence and Import 2012]
Large Language Models & Math Automation
Lots of momentum at the moment in LLMs
"Large language models can write informal
proofs, translate them into formal ones, and
achieve SoTA performance in proving
competition-level maths problems!"
https://arxiv.org/abs/2210.12283
On Twitter: “If it is really possible for AIs to surpass us as much in
mathematics as AlphaGo surpasses us in go, what is the point of
doing mathematics?”
GPT-3 / GPT-2
Fire your dungeon master -
let the AI create your
fantasy adventure while
you play it!
https://aidungeon.io
https://philosopherai.com
The
Intelligence
Explosion
Intelligence Explosion
"Let an ultraintelligent machine be defined
as a machine that can far surpass all the
intellectual activities of any man however
clever. Since the design of machines is
one of these intellectual activities, an
ultraintelligent machine could design even
better machines; there would then
unquestionably be an ‘intelligence
explosion,’ and the intelligence of man
would be left far behind. Thus the first
ultraintelligent machine is the last invention
that man need ever make." - I. J. Good
Intelligence Explosion
The purest case - AI rewriting its own
source code.
● Improving intelligence accelerates
● It passes a tipping point
● ‘Gravity’ does the rest
[Image: Sisyphus and a robot - push vs pull]
Is AI > Human Level
Intelligence (HLI) likely?
Syllogism
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
4. Therefore, there will be AI++ (before too long, absent defeaters).
The Singularity: A Philosophical Analysis: http://consc.net/papers/singularity.pdf
What happens when machines become
more intelligent than humans?
Philosopher, David
Chalmers
HLI will rise with AI at the pace of AI?
Rapture of the nerds - where HLI will rise with AI by humans
merging with it - where the true believers will break with human
history and merge with the machine, while the unaugmented will be
left in a defiled state…
● No.
● Biological bottlenecks - the kernel of human
intelligence.
● Merging with AI enough to keep pace means
becoming posthuman - defining characteristics of
humans will be drowned out like a drop of ink in an
ocean.
Mathematician / Hugo Award Winning
SciFi Writer Vernor Vinge
coined the term:
"Technological Singularity"
The best answer to the question,
“Will computers ever be as smart as humans?”
is probably “Yes, but only briefly.”—Vernor Vinge
Vinge postulated that
"Within thirty years, we will have the technological means to create superhuman intelligence.
Shortly after, the human era will be ended." - Vernor Vinge, 1993 https://edoras.sdsu.edu/~vinge/misc/singularity.html
Closing the Loop - Positive Feedback
A Positive Feedback Loop is
achieved through an agent's
ability to recursively optimize its
ability to optimize itself!
Bottlenecks? Other entities? AI's substrate is
expandable (unlike the brain), with grey-box or
ideally white-box source code which can
be engineered and manipulated (the brain is
black-box code with no manual… it can possibly
be improved through biotech, but that's not trivial).
Better source code
= more
optimization
power*
AI optimizes
source code
*Smarter AI will be even better at
becoming smarter
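A toy sketch of this loop (constants are illustrative only, not predictions): each generation converts some of its optimization power into more optimization power, so capability compounds geometrically.

```python
# Toy recursive self-improvement loop: power grows like (1 + efficiency)^n.
power = 1.0         # current ability to optimize, arbitrary units
efficiency = 0.05   # illustrative: fraction of power convertible into self-improvement
for generation in range(20):
    gain = efficiency * power   # a smarter system finds bigger improvements...
    power += gain               # ...which compound in the next iteration
    print(f"gen {generation:2d}: power = {power:.2f}")
```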
Moore’s Law: the Actual, the Colloquial & the Rhetorical
● CPU - transistors per CPU not doubling?
● GPU - speed increases
● Economic drivers motivated by general
advantages to having faster computers.
● Even if slowdown - small advantages in
speed can aid competitors in race
conditions. Though usually surges in
speedups follow. Not a smooth curve
upwards.
● No, speedups don’t have to be increases in
numbers of transistors in CPU cores.
Surface-level abstractions are used to make specific predictions about when the singularity will happen.
[Figure: a plot (logarithmic scale) of MOS transistor counts for microprocessors against dates of introduction, nearly doubling every two years. See https://ourworldindata.org/technological-progress]
Though Moore’s law has a specific definition, the term has been used colloquially to indicate a
general trend of increasing computing power.
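A sketch of that colloquial reading as a simple doubling rule (a rough empirical trend, not a law; the 2021 comparison figure is a reported transistor count, quoted here as an assumption):

```python
# Colloquial Moore's law: counts double every ~2 years.
def doubling(count0, year0, year, period=2.0):
    return count0 * 2 ** ((year - year0) / period)

intel_4004_1971 = 2_300  # transistors in the Intel 4004
print(f"{doubling(intel_4004_1971, 1971, 2021):.1e}")
# ~7.7e10 - same order as the ~5.7e10 reported for Apple's M1 Max (2021)
```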
Kinds of
Minds
Human INT isn’t a universal standard
Human intelligence is
a small fraction in the
much larger mind
design space
Misconception : Intelligence is a single dimension.
The space of possible minds
● Humans, not being the only possible
mind, are similar to each other
because of the nature of sexual
reproduction
● Transhuman minds, while still being
vestigially human, could be smarter or
more conscious than unaugmented
humans
● Posthuman minds may exist all over
the spectrum - occupy a wider
possibility space in the spectrum
● It may be very hard to guess what an
AI superintelligence would be like,
since it could end up anywhere in this
spectrum of possible minds
Strange intelligences
within the animal kingdom
Spider: Portia fimbriata
Mimic
Octopus
Coconut
Octopus
Solaris - Stanisław Lem
Artificial Neural Nets
ANNs can do stuff that humans can’t write computer
programs to do (directly):
● Image classification
● Accurate text translation
● Advanced Image/text generation from prompts
● Play games
● …much more to come!
Recap
Intelligence is powerful - it’s important to track progress in AI
AI doesn’t need to be a reimplementation of human intelligence for it
to be powerful - need a wide view of intelligence
Distinguish between evaluating levels of intelligence in
● AI vs Humans and
● Humans vs Humans
Therefore:
● Avoid anthropomorphism
● We shouldn't limit measuring progress in AI to the standard
of human intelligence alone
What kinds of
minds do we want
strong AI to
inhabit?
Ethical
Issues
AI: Why should we be concerned?
Value Alignment “The concern is not that [AGI] would hate or
resent us for enslaving it, or that suddenly a spark of consciousness
would arise and it would rebel, but rather that it would be very
competently pursuing an objective that differs from what we
really want.” - Nick Bostrom
- ..but what do we really want? Do we really know? What would
we really want if we were wiser/smarter/kinder?
“If we use, to achieve our purposes, a mechanical agency with whose
operation we cannot interfere effectively … we had better be quite sure
that the purpose put into the machine is the purpose which we really
desire.” - Norbert Wiener (AI Pioneer and father of Cybernetics)
Systemic Risks
● Race Conditions - Intelligence is powerful - first to
create powerful AGI gets massive first mover advantages -
where safety measures are de-prioritised in favor of speeding up
development - e.g. recommender systems creating addiction and
polarisation
● Competitive pressure can create a race to the bottom
on safety standards
Social Concerns: Political and class
struggle
● The Artilect War
○ Cosmists - those who favor building AGI
○ Terrans - those who don’t
“I believe that the ideological disagreements between these two
groups on this issue will be so strong, that a major "artilect" war,
killing billions of people, will be almost inevitable
before the end of the 21st century."
- Hugo de Garis
Direct vs Indirect Normativity
Direct normativity - rules directly specifying a desired outcome, e.g.
the Three Laws of Robotics.
Indirect normativity - programming/training an AI to understand
and acquire human values, or the values we would have if we were
wiser and had more time to think about them - e.g. Coherent
Extrapolated Volition.
Power Seeking behaviour
Convergent Instrumental Goals or Basic AI Drives - they emerge as
helper goals to aid in achieving other goals.
● Resource acquisition - financial or material, e.g. to help obtain
more material for computing
● Cognitive enhancement
● Avoiding being turned off / Self-preservation
(leading to incorrigibility)
E.g. the Paperclip Maximiser.
Even well-stated, benign-seeming goals are not immune to
instrumental convergence towards goals that are dangerously
orthogonal to ours. - https://en.wikipedia.org/wiki/Instrumental_convergence
Empirical AI Safety
& Trustworthy AI
Opaque AI vs Interpretable AI
& Ethical Implications of Both
Summary
Transparency and interpretability in AI is commonly used to mean
the AI can be understood by outside observers (humans, other AI
etc.).
Modern ML is typically opaque - the output is interpretable,
but you can't look at the model and detect why it produced that output.
Robustness in AI Safety - Transparency can help in avoiding
accidental misalignment & deceptive alignment
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
Scalable Oversight
Generally, the more complex AI becomes, the harder it is to
understand.
The challenge is scaling up oversight alongside the scaling up of AI
complexity - without this, AI models become impractical for humans
to effectively evaluate.
Scalable Oversight research seeks (perhaps via transparency and
interpretability) to help humans speed up evaluation.
Though even if scalable oversight is solved, agents on target systems may
still engage in reward hacking - DeepMind researches ways in which this can
be alleviated with Causal Influence Diagrams https://arxiv.org/abs/1902.09980
Bottlenecks on safety, reliability and
trustworthiness
Trustworthy companies can train large models and integrate safety -
and seek to generate more revenue by virtue of these models
achieving higher levels of reliability and trustworthiness and safety.
Though the levers to measure and implement safety measures sit
within the companies that build the models. Profit motives may act
as a perverse incentive to game AI safety for private gain -
prioritising revenue over the general safety of end users may be
dangerous for high-stakes systems.
Right to Explanation (RTE)
Should the AI Alignment / AI Safety community demand rights to
explanation (of AI)?
In the regulation of algorithms, RTE is a right to be given an explanation
for an output of the algorithm.
Such rights primarily refer to individual rights to be given an
explanation for decisions that significantly affect an individual,
particularly legally or financially - though this notion could be extended
to social rights to explanation.
Some are concerned that RTE might stifle innovation and progress.
https://en.wikipedia.org/wiki/Right_to_explanation
In the Media
‘Risks posed by AI are real’: EU moves to beat
the algorithms that ruin lives - ‘Black-box’
AI-based discrimination seems to be beyond the
control of organisations that use it - Guardian
article discusses algorithmic bias in blackbox AI
Transparency and Interpretability (ML & AI)
Transparency and interpretability in AI is commonly used to mean
the AI can be understood by outside observers (humans, other AI
etc), though it could be extended to mean AI interpreting and
understanding itself as well.
Modern ML is typically opaque - the output is interpretable,
but you can’t look at the model and detect why it created the output.
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
Anthropic AI
Chris Olah and associates are doing very interesting
work at Anthropic on interpretability - see the great
podcast interview at 80,000 Hours.
Building Reliable, Interpretable, and Steerable AI Systems
Anthropic is an AI safety and research company that’s working to build reliable,
interpretable, and steerable AI systems. Large, general systems of today can have
significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to
make progress on these issues. For now, we’re primarily focused on research towards
these goals; down the road, we foresee many opportunities for our work to create value
commercially and for public benefit.
Code and Model Understandability
Traditional software is intelligible by virtue of us being able to
understand the code.
Artificial Neural Networks (ANNs) are not easily intelligible.
There are approaches to reverse engineer small ANNs into
something intelligible, or to interpret their circuits - at the moment,
this doesn’t scale well for large ANNs
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
Unanticipated behaviours - Unintended
Attempting AI Safety through exhaustive training & behavioural
testing in sandbox environments.
1. Unable to anticipate all safety problems
2. Insufficient disincentives - hard to know what is enough
A model can behave well in the sandbox or on a certain training
distribution - then exhibit unexpected safety failures outside the
sandbox, in the real world (in out-of-sample contexts).
Unanticipated behaviours - Intended
Attempting AI Safety through exhaustive training & behavioural
testing in sandbox environments.
Failure Mode - Treacherous Turn: “An unfriendly AI of sufficient
intelligence realizes that its unfriendly final goals will be best
realized if it behaves in a friendly manner initially, so that it will be let
out of the box. It will only start behaving in a way that reveals its
unfriendly nature when it no longer matters whether we find out; that
is, when the AI is strong enough that human opposition is
ineffectual.”
(see Nick Bostrom - Superintelligence, p. 117)
Treacherous Turns in the wild
Humans follow rules far more often when being observed -
Non-human animals - cats and dogs steal food more often when not being watched…
- DeepMind collected a list of “specification gaming” examples where an AI system (or
subsystem) learns to detect when it’s being monitored/tested and modifies its behavior while it’s
being monitored/tested so that its undesired properties remain undetected, and then it exhibits those
undesired properties when it’s not being monitored/tested - see Specification gaming: the flip side of
AI ingenuity - Deepmind
- demonstrated in a study limiting the replication rate of digital organisms - “the organisms evolved to
recognize when they were in the test environment and “play dead” (pause replication) so they would
not be eliminated and instead be kept in the population, where they could continue to replicate
outside the test environment” - see The Surprising Creativity of Digital Evolution - Lehman et al.
(2018)
https://lukemuehlhauser.com/treacherous-turns-in-the-wild
Achieving Transparency - The
Transparency Trichotomy
A trichotomy of ways to understand an (opaque) model M
1. Transparency via inspection: use transparency tools to
understand M via inspecting the trained model.
2. Transparency via training: incentivize M to be as
transparent as possible as part of the training process.
3. Transparency via architecture: structure M's architecture
such that it is inherently more transparent.
https://www.lesswrong.com/posts/cgJ447adbMAeoKTSt/transparency-trichotomy
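As a minimal sketch of route 3, transparency via architecture (assuming scikit-learn; the dataset is just a stand-in): a small decision tree's entire decision procedure can be printed and audited, unlike a deep network's weights.

```python
# "Transparency via architecture": a shallow decision tree is inherently inspectable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)
# The whole model prints as nested if/else rules a human can audit:
print(export_text(model, feature_names=list(data.feature_names)))
```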
Example of AI producing unintended
behaviour
Simple Tetris AI pausing the game indefinitely so as not to
lose: https://youtu.be/xOCurBYI_gY?t=943
https://www.lesswrong.com/posts/cgJ447adbMAeoKTSt/transparency-trichotomy
Failing Forward with Explanatory Failure
● Deutsch’s interesting view of Optimism as
Explanation of Failure instead of happy
predictions
● Understanding failure makes the failure
constructive
● Popper - Hypotheses dying in one’s stead -
learning from failed hypotheses
● Improvement of simulations to understand failure
where hypotheses can be further tested
Zero-shot or ‘pre-shot’ verification
Verification or classification by humans and/or
‘Observer AIs’ via transparent explanations of
AI models, or source code before execution -
which may help catch intended ‘treacherous
turns’ or unintended bugs, misrepresentations
or other problems.
Levels of Trust
1. Blind Trust (Gullibility)
2. Behavioural Trust (Blackbox)
3. Verifiable Trust (Transparent)
AI that Understands Itself can Trust Itself
Even if the AI is benign to begin with, if it
mutates itself without understanding the
knock-on effects, it may metastasize into
something not-benign.
If we do end up with an AI that is meant to
be ethical, and it's fundamentally opaque,
how can we know it understands itself
enough to trust itself?
Machine
Understanding
Machine Understanding
It really matters what kinds of superminds emerge!
Potential for machine understanding (contrast with explainable AI) this decade
or next
- Yoshua Bengio - causal representation learning - Judea Pearl too
Assumptions:
● ethics isn't arbitrary
● ethics is strongly informed by real-world states of affairs
● understanding machines will be better positioned than humans to solve
currently intractable problems in ethics
Therefore: investigate the safe development of understanding machines.
- Henk de Regt: what is scientific understanding? (I think machines can do it)
What is understanding? Understanding by design: Six Facets
● Can explain: provide thorough, supported, and justifiable accounts of
phenomena, facts, and data.
● Can interpret: tell meaningful stories; offer apt translations; provide a revealing
historical or personal dimension to ideas and events; make it personal or
accessible through images, anecdotes, analogies, and models.
● Can apply: effectively use and adapt what is known in diverse contexts.
● Have perspective: see and hear points of view through critical eyes and ears
(or sensors).
● Can empathize: find value in what others might find odd, alien, or implausible;
perceive sensitively on the basis of prior direct experience.
● Have self knowledge: perceive the personal style, prejudices, and habits of
mind that both shape and impede our own understanding; we are aware of
what we do not understand and why understanding is so hard.
(Source: Wiggins, G. and McTighe, J. (1998) Understanding by Design. Alexandria, VA: Association for Supervision
and Curriculum Development.)
What’s missing in modern AI systems?
Deep learning works via mass correlation on big data - great if you
have enough control over your input data relative to the problem,
and the search space is confined.
Though (*most of) the understanding isn't in the system - it's in the
experts who develop the AI systems and interpret the output.
*arguably some aspects of understanding exist in systems, though not all of them
Training: Black Box vs an Understanding Agent
Because of ML's opaque nature, we can't see the under-the-hood effects
of our training - all we can see is the accuracy of its predictions - so
we strive for “predictive accuracy” - we train AIs by
blind groping towards reward.
Though the AI may be harbouring ‘malevolent motives’ that aren't
recognisable by merely looking at the output - at some stage the AI
may then unpredictably make a ‘treacherous turn’.
How can we tell a black box AI hasn’t developed a taste for
wire-heading / reward hacking?
The Importance of Knowing Why
“Why?” is one of the first questions
children ask - (oft recursively)
New Science of Causal Inference
to help us understand cause and effect
relationships.
Current AI is bad at causal learning - but
really good at correlating across big data.
Judea Pearl believes causal AI will happen
within a decade.
Judea
Pearl
Why Machine Understanding?
To Avoid Unintended consequences!
● Value alignment: Where the AI’s goals are compatible
with our *useful/good values
● Verifiability / Intelligibility: the AI's makeup - source code /
framework - is verifiably beneficial (or highly likely to be), and
its goals and means are intelligible
● Ethical Standards: AI will have to understand ethics -
and its understanding ought to be made intelligible to us
● Efficiency: far less training data and computation -
efficiency required for time sensitive decisiveness
Why? - Value Alignment
● AI should understand and employ common sense about:
○ What you are asking - did you state your wish poorly?
○ Your motivations - Why you are asking it
○ The context
■ where desirable solutions are not achieved at any cost
■ Is your wish counter productive? Would it harm you or
others?
■ Ex. After summoning a genie, using the 3rd wish to undo
the first 2 wishes. King Midas - badly stated wish.
Ignorance Isn’t Bliss
King Midas
AI needs to Understand Ethics
Intelligible Ethical Standards
● Rule based systems are fragile - Asimov’s novels
explore failure modes
● Convincing predictive ethical models are not enough
- predictions fail, and it's bad when it's not known
why
● A notion of cause and effect is imperative in
understanding ethical problems - obviously trolley
problems :D
Why? - Efficiency
Overhang resulting from far more efficient algorithms - allowing
more computation to be done with less hardware.
More computation used *wisely could realise wonderful things.
Alternatively, ultra efficient machines could make doomsday devices
more accessible to most (inc. psychopaths)
*wisdom is hard won through understanding
We have a preference for X before we die - X could be: a beneficial
singularity, radical longevity medicine, robotic companions,
becoming posthuman etc.
Why? - Wisdom
Simplistically, wisdom is
hard won through
understanding.
Decisions are less risky
when knowledge is
understood and more
risky when based on
ignorant interpretations of
data.
Why Not Machine Understanding?
Machine Understanding could be disastrous if
1. Adding the right ingredients to AI makes it really cheap and easy
to use;
2. AI will be extremely capable - (intelligence is powerful before
and after an intelligence explosion)
3. There don't need to be many bad eggs or misguided people
who intentionally/unintentionally use the AI for diabolical ends for
there to be devastating consequences
How - Achieving AI Understanding -
Interpretability
Take hints from Lakatos Award winner Henk de Regt's arguments
against the common theory that scientific explanation/understanding
is achieved through building models:
● But the scientists building the models need certain skills -
scientific understanding is fundamentally a skillful act
● He also emphasises intelligibility in scientific explanation -
which is something sorely lacking in the Deep Learning models
that AI develops
Causal AI - the logical next step
4 rungs:
1) Learning by association (deep learning, massive correlation -
AlphaGo; some animals apparently, and sales managers)
a) Lacks flexibility and adaptability
2) Taking action - what-ifs, taking actions to influence outcomes;
making interventions requires a causal model
3) Identifying counterfactuals ('but-for' causation) - e.g. to a
computer, a match and oxygen may play an equal role in starting a fire
4) Confounding bias - associated with the 2nd level - citrus fruits
prevent scurvy... but why? A long time ago people thought it
was the acidity - it turns out it is vitamin C
https://ssir.org/articles/entry/the_case_for_causal_ai
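A sketch of the gap between rung 1 and rung 2 (a simulated confounder; variable names are illustrative): observation alone makes x and y look related, while intervening on x - Pearl's do-operator - reveals there is no causal effect.

```python
# Association vs intervention: a confounder z drives both x and y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                 # confounder (e.g. the season)
x = z + rng.normal(size=n)             # driven by z
y = z + rng.normal(size=n)             # driven by z, NOT by x
print(np.corrcoef(x, y)[0, 1])         # ~0.5: rung 1 sees a strong association

x_do = rng.normal(size=n)              # do(x): set x independently of z
y_do = z + rng.normal(size=n)          # y is unaffected, since x isn't a cause
print(np.corrcoef(x_do, y_do)[0, 1])   # ~0.0: rung 2 reveals no causation
```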
Causal Learning in AI - a key to Machine
Understanding.
Opening up a world of possibilities for science.
- Important for humans in making discoveries - if we don’t ask why
then we don’t make new discoveries
- Big data can't answer causal questions alone - emerging causal
AI will disentangle correlation from causation, and see which
variables are mediators and which are confounders
Causal Representation Learning
Great paper ‘Towards Causal Representation Learning’ - “A central
problem for AI and causality is, thus, causal representation
learning, the discovery of high-level causal variables from low-level observations.”
Causation & Moral Understanding
Consequentialism - the consequences of an act are 1) its causal
consequences (computationally tractable), or 2) the whole state of
the universe following the act (practically incomputable)
Zero-Shot Learning for Ethics: being able to understand salient
causal factors in novel moral quandaries (problems including
confounders and out-of-training-sample elements, i.e. interventions)
increases the chance of AI solving these moral quandaries on the first
try, and then explaining how and why its solution worked.
Some problems, including building an ethical superintelligence, may
need to be done correctly on the first try.
Understanding Understanding
● Philosophy / Epistemology
● Human Understanding
● Machine Understanding
● Understanding & General Intelligence
● AI Impacts / Opportunities & Risks
● Ethics & Zero Shot Solving of Moral Quandaries
Conference in planning for some time after CoronaVirus restrictions ease - let me know if you are
interested - tech101 [at] gmail [dot] com Twitter @adam_ford YouTube: YouTu.be/TheRationalFuture
WAYS TO
THINK ABOUT
THE PROBLEM
● Degrees of belief
● Confidence intervals, Credence levels
● Epistemic humility
● What's the likelihood of smarter-than-HLI
AI by 2050? Or 2075? Or 2100?
Belief shouldn’t be On / Off
Many ways to achieve flight
[Diagram: Plan AI → Build AI → AGI]
Single-path dependency - Bad!
Multi-pathed PERT charts - Good!
Aspects of
Intelligence &
AGI
Narrow AI vs AGI
"Artificial General Intelligence" (AGI) :
AI capable of "general intelligent action".
Distinction: Narrow AI (or Weak AI/Applied AI) used to accomplish specific problem
solving or reasoning tasks that do not encompass (or in some cases are completely
outside of) the full range of human cognitive abilities.
Gradient between Narrow & General - what lies in between?
● Narrow: Deep Blue (Chess), Watson (Jeopardy)
● Arguably in between: AlphaGo, AlphaStar
● Optimally General: No such thing. Computable AIXItl? Not foreseeably
[Diagram: spectrum from Narrow AI to AGI]
Getting closer to solving key building
blocks of AI
● Unsupervised Learning (without
human direction on unlabelled data) -
DeepMind’s AlphaGo, AlphaStar, now
in many other systems
● Transfer Learning (knowledge in one
domain is applied to another)
● One-shot or Few-shot Learning -
GPT-3 is a few-shot learner
Transfer Learning
"I think transfer learning is the key to
general intelligence. And I think the key to
doing transfer learning will be the
acquisition of conceptual knowledge that is
abstracted away from perceptual details of
where you learned it from." - Demis
Hassabis
Transfer Learning
Knowledge in one domain is applied to another
1. IMPALA simultaneously performs multiple tasks—in this case, playing 57
Atari games—and attempts to share learning between them. It showed signs of
transferring what was learned from one game to another.
https://www.technologyreview.com/f/610244/deepminds-latest-ai-transfers-its-learning-to-new-tasks/
2. REGAL - key idea is to combine a neural network policy with a genetic
algorithm, the Biased Random-Key Genetic Algorithm (BRKGA)..REGAL
achieves on average 3.56% lower peak memory than BRKGA on previously
unseen graphs, outperforming all the algorithms we compare to, and giving
4.4x bigger improvement than the next best algorithm.
https://deepmind.com/research/publications/REGAL-Transfer-Learning-For-Fast-Optimization-of-Computation-Graphs
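A minimal transfer-learning sketch in PyTorch (assuming a recent torchvision; this is illustrative fine-tuning, not the IMPALA or REGAL setups above): reuse features learned on ImageNet and retrain only a new head for a hypothetical 10-class task.

```python
# Transfer learning: freeze pretrained ImageNet features, retrain a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # source-domain knowledge
for p in model.parameters():
    p.requires_grad = False                       # keep the transferred features fixed
model.fc = nn.Linear(model.fc.in_features, 10)    # new head for the target domain
# ...then train only model.fc.parameters() on the target dataset as usual.
```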
Measuring
Progress
In AI
Evaluating measurable claims about intelligence
Cognitive modelling (think & learn similarly to
humans) comparing AI vs human reaction
times, error rates, response quality etc
Evaluation metrics
Concrete Metrics:
● Performance
● Scalability
● Fault Tolerance
Abstract Metrics:
● Task and Domain
Generality
● Expressivity
● Robustness
● Instructability
● Taskability
● Explainability
Full understanding of human intelligence not required for AI
● For most of human history, we
didn’t really understand fire -
but made good use of it
● Today we don’t fully
understand biology, though we
are able to develop effective
medicine
● We don’t fully understand the
brain - have developed
intelligent algorithms - some
neuromorphic (brain inspired)
Peaks & Valleys in Cognitive capability
● Peaks of AI cognitive
capabilities have already
crossed over valleys in
human cognitive capabilities.
● Will we see a rising tide in
general AI cognitive
capability?
● Will the valleys in AI cognitive
ability rise above the peaks in
human cognitive capability?
Ben Goertzel
“It's impracticable to halt the
exponential advancement of technology
AI vs HLI Ability - Peaks & Troughs
Not
representative
of all human or
AI abilities
Turing Tests & Winograd Schemas
● Arguably passed Turing test
● A Winograd Schema success rate of 90% would be
considered impressive by people in NLP - UPDATE -
GPT-3 recently got a score of 88.3% without fine-tuning..
See 'Language Models are Few-Shot Learners'
● Conversational AI getting better - fast (see earlier example)
/ OpenAI’s GPT-2/ now GPT-3 - https://philosopherai.com &
https://play.aidungeon.io
● Also check out NVIDIA's new sensation :D
https://www.zdnet.com/article/nvidias-ai-advance-natural-language-processing-gets-faster-and-better-all-the-time
What won’t AI be able to do within next 50 years?
Best predictions?
What Computers Can’t Do - Hubert Dreyfus
○ What Computers Still Can’t Do
■ Make coffee
■ Beat humans at physical sport
■ Write funny jokes?
■ Write poetry?
● and novels
● Formulate creative strategies - the ability to formulate the kind of creative strategies
that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO to lead his or
her company in bold new directions. This isn’t just about analyzing data; it’s about venturing into
unstructured problem-solving tasks, and deciding which pieces of information are relevant and which
can be safely ignored.
https://www.digitaltrends.com/cool-tech/things-machines-computers-still-cant-do-yet/
How to evaluate evidence?
● Trend extrapolation - Black to Dark Box
○ Some views on trend extrapolation favor near term AGI
○ Some views don’t
○ Strange how people carve up what’s useful trend
extrapolation and what isn’t to support their views
● Inside view - White to Translucent Box
○ Weak - tenable - loose qualitative views on issues with
lopsided support
○ Strong - difficult
● Appeal to authority :D
Pick your authority!
Trend Extrapolation: Accelerating Change
● Kurzweil's Graphs
○ Not a discrete event
○ Despite being smooth and continuous, explosive once
curve reaches transformative levels (knee of the curve)
● Consider the Internet. When the ARPANET went from 10,000 nodes to 20,000 in
one year, and then to 40,000 and then 80,000, it was of interest only to a few
thousand scientists. When ten years later it went from 10 million nodes to 20
million, and then 40 million and 80 million, the appearance of this curve looks
identical (especially when viewed on a log plot), but the consequences were
profoundly more transformative.
● Q: How many devices do you think are connected to the internet today?
This image is old: a 128GB SanDisk card was $30
and a 400GB card $99 on eBay (circa Sept 2019).
A 512GB microSD card was $99 AUD in Jan 2022 at
Scorptec.
AI
Timelines
Weeeee!
Enjoy the ride!
Math and deep insights
(especially probability) can be
powerful relative to trend fitting
and crude analogies.
Long-term historical trends are
weakly suggestive of future
events
BUT - humans aren’t good at
math, and aren’t all that
convinced by it - nice pretty
graphs or animated gifs seem
to do a better job.
Difficulty - convincing humans
is important - since they will
drive the trajectory of AI
development… so convincing
people of good causes based
on less-accurate/less-sound
yet convincing arguments is
important.
Surveys on AI timelines
AI Expert poll in 2016
● 3 years for championship Angry Birds
● 4 years for the World Series of Poker (some poker solved in
2019)
● 6 years for StarCraft (now solved)
● 6 years for folding laundry
● 7–10 years for expertly answering 'easily Googleable' questions
● 8 years for average speech transcription (sort of solved)
● 9 years for average telephone banking (sort of solved)
● 11 years for expert songwriting (sort of solved)
● over 30 years for writing a New York Times bestseller or winning
the Putnam math competition
https://aiimpacts.org/miri-ai-predictions-dataset
2) In 2017 May, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based
on survey results, experts estimate that there’s a 50% chance that AGI will occur before 2060. However,
there's a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years,
whereas North Americans expect it in 74 years.
Some significant job functions that are expected to be automated by 2030 are: Call center reps, truck driving,
retail sales.
Surveys of AI Researchers
1) In 2009, 21 AI experts participating
in AGI-09 conference were surveyed.
Experts believe AGI will occur around
2050, and plausibly sooner. You can
see their estimates regarding specific
AI achievements: passing the Turing
test, passing third grade,
accomplishing Nobel worthy scientific
breakthroughs and achieving
superhuman intelligence.
https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing
2019 - When Will We Reach the Singularity? A Timeline
Consensus from AI Researchers
https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/
Recap..
1. Intelligence is powerful
2. Human Intelligence is a small fraction of a huge space of
all possible mind designs
3. We already have AI smarter than HLI in specific domains (too
many examples to list) - AI has surged past HLI in those
domains
4. An AI that could self improve could become very powerful
very fast
5. Timeline consensuses from AI researchers suggest AGI before
2050 or 2060
AI
Boosters
3 AI Boosters
1. Overhang resulting from
a. algorithmic performance gains (more efficient algorithms
require less computing power per unit of value)
b. or economic gains (e.g. Microsoft investing $1 billion in OpenAI)
c. Cheaper cloud infrastructure (drops in price/performance of
already connected cloud)
2. Self improving systems - feedback loop where gains from
previous iterations boost performance of subsequent iterations
of the feedback loop
3. Economic/Political Will - race to first mover advantage
Hardware Overhang
● new programming methods use available computing power more efficiently.
● new / improved algorithms can exploit existing computing power far more efficiently than previous
suboptimal algorithms…
○ Result: AI/AGI can run on smaller amounts of hardware... This could lead to an intelligence
explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run
on countless computers.
● Examples:
○ In 2010, the President's Council of Advisors on Science and Technology reported on benchmark
production planning model having become faster by a factor of 43 million between 1988 and
2003 - where only a factor of roughly 1,000 was due to better hardware, while a factor of
43,000 came from algorithmic improvements.
○ Sudden advances in capability gains - DeepMind’s AlphaGo, AlphaFold, GPT-2
● Estimates of time to reach computing power required for whole brain emulation: ~a decade away
○ BUT: it is very unlikely that human brain algorithms are anywhere near the most computationally
efficient for producing AI.
○ WHY? Our brains evolved during a natural selection process and thus weren't deliberately
created with the goal of being modeled by AI.
https://wiki.lesswrong.com/wiki/Computing_overhang
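The report's decomposition of that 43-million-fold speedup multiplies out as follows (a worked check of the figures above):

```python
# 1988-2003 benchmark speedup: hardware and algorithmic factors multiply.
hardware_factor = 1_000
algorithm_factor = 43_000
print(hardware_factor * algorithm_factor)  # 43,000,000 - most of the gain was algorithmic
```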
Economic Gains
In 2016 AI investment was
up to $39 billion, 3x from
2013
See McKinsey Report - ARTIFICIAL
INTELLIGENCE THE NEXT DIGITAL FRONTIER?
AI Index 2019 Annual Report
● AI is the most popular area for computer science PhD
specialization, and in 2018, 21% of graduates specialized in
machine learning or AI.
● From 1998 to 2018, peer-reviewed AI research grew by 300%.
● In 2019, global private AI investment was over $70 billion, with
startup investment $37 billion, M&A $34 billion, IPOs $5 billion,
and minority stake $2 billion. Autonomous vehicles led global
investment in the past year ($7 billion), followed by drug and
cancer research, facial recognition, video content, fraud detection,
and finance.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
AI Index 2019 Annual Report
● China now publishes as many AI journal and conference papers per
year as Europe, having passed the U.S. in 2006.
● More than 40% of AI conference paper citations are attributed to
authors from North America, and about 1 in 3 come from East Asia.
● Singapore, Brazil, Australia, Canada and India experienced the fastest
growth in AI hiring from 2015 to 2019.
● The vast majority of AI patents filed between 2014-2018 were filed in
nations like the U.S. and Canada, and 94% of patents are filed in
wealthy nations.
● Between 2010 and 2019, the total number of AI papers on arXiv
increased 20 times.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
Kinetics of an Intelligence Explosion
Intelligence
level: not a
continuous
surge upwards.
‘Crossover’
refers to the
point where AI
can handle
further growth by
itself.
Appeals to
Experts
Bill Gates
"Reverse Engineering the Brain is Within
Reach" -“I am in the camp that is concerned
about super intelligence,” Bill Gates said
during an “Ask Me Anything” session on
Reddit. “First, the machines will do a lot of jobs
for us and not be super intelligent. That should
be positive if we manage it well. A few
decades after that, though, the intelligence is
strong enough to be a concern. I agree with
Elon Musk and some others on this and don’t
understand why some people are not
concerned.”
Stuart Russell
If human beings are losing every
time, it doesn't matter whether
they're losing to a conscious
machine or a completely
non-conscious machine, they still lost.
The singularity is about the quality of
decision-making, which is not
consciousness at all.
‘Expert’ predictions of Never...
(On Nuclear physics) “The consensus view as expressed by Ernest
Rutherford on September 11th, 1933, was that it would never be
possible to extract atomic energy from atoms. So, his prediction was
‘never,’ but what turned out to be the case was that the next morning
Leo Szilard read Rutherford’s speech, became annoyed by it, and
invented a nuclear chain reaction mediated by neutrons!
Rutherford’s prediction was ‘never’ and the truth was about 16 hours
later. In a similar way, it feels quite futile for me to make a
quantitative prediction about when these breakthroughs in AGI will
arrive.” - Stuart Russell - co-wrote foundational textbook on AI
EXAMPLES
OF
PROGRESS
AI Capability: Optimal
● Tic-Tac-Toe
● Connect Four: 1988
● Checkers (aka 8x8 draughts): Weakly solved (2007)
● Rubik's Cube: Mostly solved (2010)
● Heads-up limit hold'em poker: Statistically optimal in the sense that "a
human lifetime of play is not sufficient to establish with statistical
significance that the strategy is not an exact solution" (2015) - see
https://www.deepstack.ai/
Search spaces very small - though Heads-up limit hold’em poker is interesting - like most other NN
style AI - the majority of the behaviour was learned via self play in a reinforcement learning
environment
https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence#Current_performance
AI Capability: Super-human
● Othello (aka reversi): c. 1997
● Scrabble: 2006
● Backgammon: c. 1995-2002
● Chess: Supercomputer (c. 1997); Personal computer (c.
2006); Mobile phone (c. 2009); Computer defeats human +
computer (c. 2017)
● Jeopardy!: Question answering, although the machine did not
use speech recognition (2011)
● Shogi: c. 2017
● Arimaa: 2015
● Go: 2017 <- AlphaGo - learned how to play from
scratch
● Heads-up no-limit hold'em poker: 2017
State of play in AI Progress - Some areas already Super-human
Board games and some computer games are well defined
problems and easier to measure competence against.
AI Capability: Par-human
● Optical character recognition for ISO 1073-1:1976 and similar special characters.
● Classification of images
● Handwriting recognition
AI Capability: High-human
● Crosswords: c. 2012
● Dota 2: 2018
● Bridge card-playing: According to a 2009 review, "the best programs are
attaining expert status as (bridge) card players", excluding bidding.
● StarCraft II: 2019 <- AlphaStar at grandmaster level
AlphaGo vs AlphaStar - Unsupervised Learning
Wider search spaces - Dota 2 and Starcraft - incomplete
information, long term planning. AlphaStar makes use of
‘autoregressive policies’..
AlphaZero / AlphaStar
If you were a blank slate, and were given a Go board, and were told you had 24 hours to
teach yourself to become the best Go player on the planet, and you did exactly that, and
then you did the same thing with Chess and Shogi (the latter more complex than Chess), and
later StarCraft II, and then protein folding - wouldn't you say that proved you had the ability
to adapt to novel situations? Well, AlphaZero and its DeepMind siblings (AlphaStar,
AlphaFold) did all that, and they didn't use brute force to do it either.
Stockfish is another chess program, but unlike AlphaZero it didn't teach itself - humans did.
Stockfish could easily beat any human player, but it couldn't beat AlphaZero, despite the fact
that Stockfish was running on a faster computer and could evaluate 70,000,000 positions a
second while AlphaZero's much smaller computer could only do 80,000.
And AI is becoming less narrow and brittle every day. Go is played on a 19 by 19 grid, but if
you changed it to 20 by 20 or 18 by 18 and gave it another 24 hours to teach itself, AlphaZero
would be the best player in the world at that new game too - without any human help.
AlphaZero is not infinitely adaptable, but then humans aren't either.
Analysis of
AlphaZero
defeating
Stockfish
AI Capability: Sub-human
● Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)
● Object recognition
● Facial recognition: Low to mid human accuracy (as of 2014)
● Visual question answering, such as the VQA 1.0
● Various robotics tasks that may require advances in robot hardware as well as AI, including:
○ Stable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers
(as of 2017)
○ Humanoid soccer
● Speech recognition: "nearly equal to human performance" (2017)
● Explainability. Current medical systems can diagnose certain medical conditions well, but cannot
explain to users why they made the diagnosis.
● Stock market prediction: Financial data collection and processing using Machine Learning
algorithms
● Various tasks that are difficult to solve without contextual knowledge, including:
○ Translation
○ Word-sense disambiguation
○ Natural language processing
Though AI is classed as par- or sub-human on these tasks, for most of them
AI can broaden its capabilities far faster than humans can.
https://talktotransformer.com
OpenAI’s GPT-2
Better Language Models
and Their Implications
“We’ve trained a large-scale unsupervised language
model which generates coherent paragraphs of text,
achieves state-of-the-art performance on many
language modeling benchmarks, and performs
rudimentary reading comprehension, machine
translation, question answering, and
summarization—all without task-specific training.”
Models can be used to generate data by successively
guessing what will come next, feeding in a guess as
input and guessing again. Language models, where
each word is predicted from the words before it, are
perhaps the best known example: these models power
the text predictions that pop up on some email and
messaging apps. Recent advances in language
modelling have enabled the generation of strikingly
plausible passages, such as OpenAI’s GPT-2.
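A sketch of that feedback loop (assuming the Hugging Face transformers library and the public "gpt2" checkpoint): predict the next token, append it to the input, and predict again.

```python
# Autoregressive generation: feed each guess back in as input.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer.encode("The future of AI is", return_tensors="pt")
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits                     # scores for every candidate next token
    next_id = logits[0, -1].argmax()                   # greedy guess at the next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append the guess, then guess again
print(tokenizer.decode(ids[0]))
```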
Below is an example of a cut down version of GPT-2
'There's a Wide-Open Horizon of Possibility.' Musicians Are
Using AI to Create Otherwise Impossible New Songs
https://time.com/5774723/ai-music/
Recap..
1. Intelligence is powerful
2. Human Intelligence is a small fraction in the space of all possible
mind design
3. An AI that could self improve could become very powerful very fast
4. Look for trends as well as inside views to help gauge likelihood
5. Some AI is becoming more general (AGI)
○ Unsupervised learning
○ Transfer learning
6. Experts in AI research think that it is likely AGI could be developed
by mid century
7. We already have AI smarter than HLI (too many examples to list)
○ AI has surged past HLI in specific domains
Extra
material
Accelerating returns - a
shortening of timescales
between salient events…
Events expressed as Time
before Present (Years) on the
X axis and Time to Next Event
(Years) on the Y axis,
Logarithmic Plot.
4 billion years ago early
life.
…
Evolutionary History of Mind
Daniel Dennett's Tower of Generate and Test
● "Darwinian creatures", which were simply selected by trial and error on the merits of
their bodies' ability to survive; its "thought" processes being entirely genetic.
● "Skinnerian creatures", which were also capable of independent action and therefore
could enhance their chances of survival by finding the best action (conditioning
overcame the genetic trial and error of Darwinian creatures); phenotypic plasticity; bodily
tribunal of evolved, though sometimes outdated wisdom.
● "Popperian creatures", which can play an action internally in a simulated environment
before they perform it in the real environment and can therefore reduce the chances of
negative effects - allows "our Hypothesis to die in our stead" - popper
● "Gregorian creatures" are tools-enabled, in particular they master the tool of language. -
use tools (e.g. words) - permits learning from others. The possession of learned
predispositions to carry out particular actions
Evolutionary History of Mind
●To add to Dennett's metaphor: We are building a new
floor to the tower
●"Turingian creatures", that use tools-enabled
tool-enablers, in particular they create autonomous
agents to create even greater tools (e.g. minds with
further optimal cognitive abilities) - permits insightful
engineering of increasingly insightful agents,
resulting in an Intelligence Explosion.
References
● "Kinds of Minds" - Daniel Dennit :
http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514
● Cascades, Cycles, Insight...: http://lesswrong.com/lw/w5/cascades_cycles_insight
● Recursive Self Improvement: http://lesswrong.com/lw/we/recursive_selfimprovement
● Terry Sejnowski : http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours
● Nanotech : http://e-drexler.com
● Justin Rattner - Intel touts progress towards intelligent computers:
http://news.cnet.com/8301-1001_3-10023055-92.html#ixzz1G9Iy1cXe
● Bill Gates speaking about AI Risk on Reddit:
https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third
● Superintelligence: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
● Smarter Than Us: http://lesswrong.com/lw/jrb/smarter_than_us_is_out
●

AI – Risks, Opportunities and Ethical Issues.pdf
  • 18. Longtermism - Earth Only ● All the people who have died in the last 200,000 years: ~109 billion. ● All the people alive today: ~7.95 billion. ● The number of people who have died is roughly 14 times the current population. ● If we project 800,000 years into the future, with the earth's population stabilizing at about 11 billion and life expectancy at 88, there will be ~100 trillion people born. ● ~5.4e+17 lives until the sun becomes a red giant https://ourworldindata.org/longtermism
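The slide's projection is simple arithmetic; a minimal sketch in Python (using only the figures stated above - a stable population of 11 billion, life expectancy of 88, an 800,000-year horizon) reproduces both the 14× ratio and the ~100 trillion estimate:

    # Rough longtermist head-count arithmetic (figures from the slide above).
    dead_so_far = 109e9                # deaths over the last 200,000 years
    alive_today = 7.95e9               # current world population
    print(dead_so_far / alive_today)   # ~13.7, i.e. roughly 14x those alive now

    population = 11e9                  # assumed stable future population
    life_expectancy = 88               # assumed average lifespan, in years
    years = 800_000                    # projection horizon
    births = population / life_expectancy * years
    print(f"{births:.1e}")             # ~1.0e+14, i.e. ~100 trillion births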
  • 19. Habitable Planets in Milky Way ≈ 30 billion
  • 23. Intelligence is Powerful (and potentially dangerous) Intelligence took our ancestors from mid-level scavengers to apex predators and beyond ● from hand axes → the L115A3 sniper rifle → fully automated drones ● from barefoot meandering → heavier-than-air flight → escape velocity → landing robots on Mars, probes escaping our solar system ● artificial intelligence.. If we substituted human minds with better-designed minds running at 1,000,000× human clock speed, we would likely see a much faster rate of progress in technoscience. →
  • 24. Force Multipliers Force multipliers across multiple areas of science and technology are setting the stage for an AI-driven productivity explosion Examples: higher-resolution brain scanners, faster computing hardware, exotic forms of computation, improved algorithms, improved philosophy ● AI is itself a force multiplier feeding back into progress in each of the above
  • 26. Zooming in, we can map economic growth over the past ~75 years
  • 27. Zooming out: we live within an economic surge. Where to next?
  • 28. Productivity Feedback Loops have precedent Economic history: more resources -> more people -> more ideas -> more resources ...
  • 30. Intelligence via Evolution vs the new signature of Intelligent Design Homo sapiens intelligence - the result of "blind" natural selection - has, on evolutionary timescales, only recently developed the capacity for planning and forward thinking: deliberation. Almost all our cognitive tools were the result of ancestral selection pressures, forming the roots of almost all our behaviour. So when we consider the design of complex systems in which the intelligent designer - us - collaborates with the system being constructed, we face a new signature: a route to General Intelligence completely different from the process that gave birth to our brains.
  • 31. AI is powerful now, and will be more powerful in the future Syllogism 1: intelligence is powerful; AI is intelligent; therefore AI is powerful. Syllogism 2: technological progress is a much faster force multiplier than evolutionary progress; AI is subject to technological progress while HI (human intelligence) is subject to evolutionary progress; therefore AI will become smarter faster than HI. Syllogism 3: more intelligence is more powerful than less intelligence; AI will overtake HI; therefore AI will be more powerful than HI.
  • 32. A faster future than we think - are we ready? 1. There is a substantial chance of Human-Level AI (HLAI)* before 2100; 2. If HLAI arrives, Super Intelligence (SI) will likely follow via an Intelligence Explosion (IntExp); 3. Therefore an AI-driven productivity explosion; 4. An uncontrolled IntExp could destroy everything we value, but a controlled IntExp would benefit life enormously - shaping our future light-cone. *HLAI = Human-Level AI - the term ‘human-level’ may be a misnomer - discussed later [The Intelligence Explosion - Evidence and Import, 2012]
  • 33. Large Language Models & Math Automation Lots of momentum at the moment in LLMs: "Large language models can write informal proofs, translate them into formal ones, and achieve SoTA performance in proving competition-level maths problems!" https://arxiv.org/abs/2210.12283 On Twitter: “If it is really possible for AIs to surpass us as much in mathematics as AlphaGo surpasses us in go, what is the point of doing mathematics?”
  • 34. GPT-3 / GPT-2 Fire your dungeon master - let the AI create your fantasy adventure while you play it! https://aidungeon.io
  • 37. Intelligence Explosion "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I. J. Good
  • 38. Intelligence Explosion The purest case - AI rewriting its own source code. ● Improving intelligence accelerates ● It passes a tipping point ● ‘Gravity’ does the rest [Image: Sisyphus & robot - push, then pull]
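A toy model (purely illustrative - the dynamics and constants are my assumptions, not anything from the slides) of that push-then-pull picture: below a crossover point, capability grows by slow external pushes; above it, each round of self-improvement compounds the last:

    # Toy recursive self-improvement: the gain per step depends on current
    # capability, so growth is slow before a tipping point and explosive after.
    def improve(capability, crossover=1.0, rate=0.1):
        if capability < crossover:
            return capability + 0.01        # external 'push': small constant gains
        return capability * (1 + rate)      # self-improvement 'pull': compounding

    c = 0.9
    for step in range(40):
        c = improve(c)
    print(round(c, 2))  # roughly 16-17: slow creep to 1.0, then compounding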
  • 39. Is AI > Human Level Intelligence (HLI) likely? Syllogism 1. There will be AI (before long, absent defeaters). 2. If there is AI, there will be AI+ (soon after, absent defeaters). 3. If there is AI+, there will be AI++ (soon after, absent defeaters). 4. Therefore, there will be AI++ (before too long, absent defeaters). The Singularity: A Philosophical Analysis: http://consc.net/papers/singularity.pdf What happens when machines become more intelligent than humans? Philosopher David Chalmers
  • 40. Will HLI rise with AI, at the pace of AI? Rapture of the nerds - the idea that HLI will rise with AI through humans merging with it - the true believers break with human history and merge with the machine, while the unaugmented are left in a defiled state… ● No. ● Biological bottlenecks - the kernel of human intelligence. ● Merging with AI enough to keep pace means becoming posthuman - the defining characteristics of humans would be drowned out like a drop of ink in an ocean.
  • 41. Mathematician / Hugo Award-winning SciFi writer Vernor Vinge coined the term "Technological Singularity" The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.” - Vernor Vinge Vinge postulated that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." - Vernor Vinge, 1993 https://edoras.sdsu.edu/~vinge/misc/singularity.html
  • 42. Closing the Loop - Positive Feedback A positive feedback loop is achieved through an agent's ability to recursively optimize its ability to optimize itself! Bottlenecks? Other entities? An AI's substrate is expandable (unlike the brain), and its source code is grey-box or ideally white-box, so it can be engineered and manipulated (the brain is black-box code with no manual… it could possibly be improved through biotech, but not trivially). Better source code = more optimization power* AI optimizes source code *Smarter AI will be even better at becoming smarter
  • 43. Moore’s Law: the Actual, the Colloquial & the Rhetorical ● CPU - transistors per CPU not doubling? ● GPU - speed increases ● Economic drivers motivated by the general advantages of having faster computers. ● Even if there is a slowdown, small advantages in speed can aid competitors in race conditions. Usually surges in speedups follow - not a smooth curve upwards. ● No, speedups don’t have to be increases in the number of transistors in CPU cores. Surface-level abstractions are used to make specific predictions about when the singularity will happen. A plot (logarithmic scale) of MOS transistor counts for microprocessors against dates of introduction, nearly doubling every two years. See https://ourworldindata.org/technological-progress Though Moore’s law has a specific definition, the term is used colloquially to indicate a general trend of increasing computing power.
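The colloquial version of the law is a one-line formula: if counts double every two years, then count(t) = count(t0) · 2^((t − t0)/2). A minimal sketch (the 1971 Intel 4004 baseline of ~2,300 transistors is my illustrative anchor, not from the slide):

    # Colloquial Moore's-law extrapolation: doubling every `doubling_period` years.
    def transistors(year, base_year=1971, base_count=2300, doubling_period=2.0):
        return base_count * 2 ** ((year - base_year) / doubling_period)

    print(f"{transistors(2021):.1e}")  # ~7.7e+10 - the right order of magnitude
                                       # for 2021 flagship chips, which is why
                                       # the heuristic persists rhetorically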
  • 45. Human INT isn’t a universal standard Human intelligence is a small fraction of the much larger mind design space → Misconception: intelligence is a single dimension.
  • 46. The space of possible minds ● Humans, not being the only possible minds, are similar to each other because of the nature of sexual reproduction ● Transhuman minds, while still vestigially human, could be smarter or more conscious than unaugmented humans ● Posthuman minds may exist all over the spectrum - occupying a wider region of the possibility space ● It may be very hard to guess what an AI superintelligence would be like, since it could end up anywhere in this spectrum of possible minds
  • 47. Strange intelligences within the animal kingdom Spider: Portia fimbriata Mimic Octopus Coconut Octopus
  • 49. Artificial Neural Nets ANNs can do stuff that humans can’t write computer programs to do (directly): ● Image classification ● Accurate text translation ● Advanced Image/text generation from prompts ● Play games ● …much more to come!
  • 50. Recap Intelligence is powerful - it’s important to track progress in AI AI doesn’t need to be a reimplementation of human intelligence to be powerful - we need a wide view of intelligence Distinguish between evaluating levels of intelligence in ● AI vs Humans and ● Humans vs Humans Therefore: ● Avoid anthropomorphism ● We shouldn’t measure progress in AI by the standard of human intelligence alone
  • 51. What kinds of minds do we want strong AI to inhabit?
  • 53. AI: Why should we be concerned? Value Alignment “The concern is not that [AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel, but rather that it would be very competently pursuing an objective that differs from what we really want.” - Nick Bostrom - ..but what do we really want? Do we really know? What would we really want if we were wiser/smarter/kinder? “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” - Norbert Wiener (AI pioneer and father of Cybernetics)
  • 54. Systemic Risks ● Race Conditions - Intelligence is powerful - the first to create powerful AGI gets massive first-mover advantages - so safety measures are de-prioritised in favor of speeding up development - e.g. recommender systems creating addiction and polarisation ● Competitive pressure can create a race to the bottom on safety standards
  • 55. Social Concerns: Political and class struggle ● The Artilect War ○ Cosmists - those who favor building AGI ○ Terrans - those who don’t “I believe that the ideological disagreements between these two groups on this issue will be so strong, that a major "artilect" war, killing billions of people, will be almost inevitable before the end of the 21st century." - Hugo de Garis
  • 56. Direct vs Indirect Normativity Direct normativity - rules directly specifying a desired outcome, e.g. the 3 laws of robotics. Indirect normativity - programming/training an AI to understand and acquire human values, or the values we would have if we were wiser and had more time to think about them - e.g. Coherent Extrapolated Volition.
  • 57. Power Seeking behaviour Convergent Instrumental Goals or Basic AI Drives - these emerge as helper goals that aid in achieving other goals. ● Resource acquisition - financial or material, e.g. obtaining more material for computing ● Cognitive enhancement ● Avoiding being turned off / Self-preservation (leading to incorrigibility) e.g. the Paperclip Maximiser Even benign-seeming, well-stated goals are not immune to instrumental convergence towards goals that are dangerously orthogonal to ours. https://en.wikipedia.org/wiki/Instrumental_convergence
  • 58. Empirical AI Safety & Trustworthy AI Opaque AI vs Interpretable AI & Ethical Implications of Both
  • 59. Summary Transparency and interpretability in AI are commonly used to mean that a system can be understood by outside observers (humans, other AIs, etc.). Current modern ML is typically opaque - the output is interpretable, but you can’t look at the model and detect why it created that output. Robustness in AI Safety - transparency can help in avoiding accidental misalignment & deceptive alignment https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
  • 60. Scalable Oversight Generally, the more complex AI becomes, the harder it is to understand. The challenge is to scale up oversight alongside the scaling up of AI complexity - without this, AI models become impractical for humans to evaluate effectively. Scalable Oversight research seeks (perhaps via transparency and interpretability) to help humans speed up evaluation. Though even if scalable oversight is solved, agents on target systems may still engage in reward hacking - DeepMind researches ways this can be alleviated with Causal Influence Diagrams https://arxiv.org/abs/1902.09980
  • 61. Bottlenecks on safety, reliability and trustworthiness Trustworthy companies can train large models, integrate safety, and seek to generate more revenue by virtue of these models achieving higher levels of reliability, trustworthiness and safety. But the levers for measuring and implementing safety sit inside the companies that build the models. Profit motives may act as a perverse incentive to game AI safety for private gain - prioritising revenue over the general safety of end users is dangerous for high-stakes systems.
  • 62. Right to Explanation (RTE) Should the AI Alignment / AI Safety community demand rights to explanation (of AI)? In the regulation of algorithms, a right to explanation is the right to be given an explanation for an output of an algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially - though this notion could be extended to social rights to explanation. Some are concerned that RTE might stifle innovation and progress. https://en.wikipedia.org/wiki/Right_to_explanation
  • 63. In the Media ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives - ‘Black-box’ AI-based discrimination seems to be beyond the control of organisations that use it - Guardian article discusses algorithmic bias in black-box AI
  • 64. Transparency and Interpretability (ML & AI) Transparency and interpretability in AI are commonly used to mean the AI can be understood by outside observers (humans, other AIs, etc.), though they could be extended to mean AI interpreting and understanding itself as well. Current modern ML is typically opaque - the output is interpretable, but you can’t look at the model and detect why it created the output. https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
  • 65. Anthropic AI Chris Olah and associates are doing very interesting work at Anthropic on interpretability - there’s a great podcast interview at 80,000 Hours. Building Reliable, Interpretable, and Steerable AI Systems Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Large, general systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues. For now, we’re primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit.
  • 66. Code and Model Understandability Traditional software is intelligible by virtue of us being able to understand the code. Artificial Neural Networks (ANNs) are not easily intelligible. There are approaches to reverse engineer small ANNs into something intelligible, or to interpret their circuits - at the moment, this doesn’t scale well for large ANNs https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
  • 67. Unanticipated behaviours - Unintended Attempting AI safety through exhaustive training & behavioural testing in sandbox environments: 1. We are unable to anticipate all safety problems 2. Insufficient disincentives - it’s hard to know what is enough A model can behave well in the sandbox or on a certain training distribution - then exhibit unexpected safety failures outside the sandbox, in the real world (in out-of-sample contexts)
  • 68. Unanticipated behaviours - Intended Attempting AI safety through exhaustive training & behavioural testing in sandbox environments. Failure Mode - Treacherous Turn: “An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box. It will only start behaving in a way that reveals its unfriendly nature when it no longer matters whether we find out; that is, when the AI is strong enough that human opposition is ineffectual.” (see Nick Bostrom - Superintelligence, p. 117)
  • 69. Treacherous Turns in the wild Humans follow rules far more often when being observed - Non-human animals - cats and dogs steal food more often when not being watched… - DeepMind collected a list of “specification gaming” examples where an AI system (or subsystem) learns to detect when it’s being monitored/tested, modifies its behaviour so that its undesired properties remain undetected, and then exhibits those undesired properties when it’s not being monitored/tested - see Specification gaming: the flip side of AI ingenuity - DeepMind - demonstrated in a study limiting the replication rate of digital organisms - “the organisms evolved to recognize when they were in the test environment and ‘play dead’ (pause replication) so they would not be eliminated and instead be kept in the population where they could continue to replicate outside the test environment” - see The Surprising Creativity of Digital Evolution - Lehman et al. (2018) https://lukemuehlhauser.com/treacherous-turns-in-the-wild
  • 70. Achieving Transparency - The Transparency Trichotomy A trichotomy of ways to understand an (opaque) model M 1. Transparency via inspection: use transparency tools to understand M via inspecting the trained model. 2. Transparency via training: incentivize M to be as transparent as possible as part of the training process. 3. Transparency via architecture: structure M's architecture such that it is inherently more transparent. https://www.lesswrong.com/posts/cgJ447adbMAeoKTSt/transparency-trichotomy
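Option 2 of the trichotomy can be gestured at in code: fold a transparency-flavoured term into the training loss so the optimiser is rewarded for producing a more inspectable model. A minimal PyTorch-style sketch; the plain L1 sparsity penalty is my crude stand-in for a real transparency objective, not something the post specifies:

    # Sketch of 'transparency via training': task loss plus a penalty that
    # pushes weights towards zero, yielding a sparser, easier-to-inspect model.
    import torch

    def transparent_loss(model: torch.nn.Module,
                         task_loss: torch.Tensor,
                         lam: float = 1e-4) -> torch.Tensor:
        sparsity = sum(p.abs().sum() for p in model.parameters())
        return task_loss + lam * sparsity

In a real training loop this would replace the bare task loss; genuine transparency objectives would be far more structured than a sparsity term.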
  • 71. Example of AI producing unintended behaviour A simple Tetris AI pausing the game indefinitely so as not to lose: https://youtu.be/xOCurBYI_gY?t=943 https://www.lesswrong.com/posts/cgJ447adbMAeoKTSt/transparency-trichotomy
  • 72. Failing Forward with Explanatory Failure ● Deutsch’s interesting view of optimism as explanation of failure rather than happy predictions ● Understanding failure makes the failure constructive ● Popper - hypotheses dying in one’s stead - learning from failed hypotheses ● Improving simulations to understand failure, where hypotheses can be further tested
  • 73. Zero-shot or ‘pre-shot’ verification Verification or classification by humans and/or ‘Observer AIs’ via transparent explanations of AI models, or source code before execution - which may help catch intended ‘treacherous turns’ or unintended bugs, misrepresentations or other problems.
  • 74. Levels of Trust 1. Blind Trust (Gullibility) 2. Behavioural Trust (Blackbox) 3. Verifiable Trust (Transparent)
  • 75. AI that Understands Itself can Trust Itself Even if the AI is benign to begin with, if it mutates itself without understanding the knock-on effects, it may metastasize into something not-benign. If we do end up with an AI that is meant to be ethical, and it's fundamentally opaque, how can we know it understands itself enough to trust itself?
  • 77. Machine Understanding It really matters what kinds of superminds emerge! Potential for machine understanding (contrast with explainable AI) this decade or next - Yoshua Bengio - causal representation learning - Judea Pearl too Assumptions: ● ethics isn't arbitrary ● it is strongly informed by real-world states of affairs ● understanding machines will be better positioned than humans to solve currently intractable problems in ethics Therefore: investigate the safe development of understanding machines - Henk de Regt: what is scientific understanding? (I think machines can do it)
  • 78. What is understanding? Understanding by Design: Six Facets ● Can explain: provide thorough, supported, and justifiable accounts of phenomena, facts, and data. ● Can interpret: tell meaningful stories; offer apt translations; provide a revealing historical or personal dimension to ideas and events; make it personal or accessible through images, anecdotes, analogies, and models. ● Can apply: effectively use and adapt what is known in diverse contexts. ● Have perspective: see and hear points of view through critical eyes and ears (or sensors). ● Can empathize: find value in what others might find odd, alien, or implausible; perceive sensitively on the basis of prior direct experience. ● Have self-knowledge: perceive the personal style, prejudices, and habits of mind that both shape and impede our own understanding; we are aware of what we do not understand and why understanding is so hard. (Source: Wiggins, G. and McTighe, J. (1998) Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.)
  • 79. What’s missing in modern AI systems? Deep learning works via mass correlation on big data - great if you have enough control over your input data relative to the problem, and the search space is confined. But (*most of) the understanding isn’t in the system; it’s in the experts who develop the AI systems and interpret the output. *arguably some aspects of understanding exist in systems, though not all of them
  • 80. Training: Black Box vs an Understanding Agent Because of ML’s opaque nature, we can’t see the under-the-hood effects of our training - all we can see is the accuracy of its predictions - so we strive for “predictive accuracy”, training AIs by blind groping towards reward. But the AI may be harbouring ‘malevolent motives’ that aren’t recognisable by merely looking at the output - at some stage the AI may then unpredictably make a ‘treacherous turn’. How can we tell a black-box AI hasn’t developed a taste for wire-heading / reward hacking?
  • 81. The Importance of Knowing Why “Why” is one of the first questions children ask (oft recursively). The new science of causal inference helps us understand cause-and-effect relationships. Current AI is bad at causal learning - but really good at correlating across big data. Judea Pearl believes causal AI will happen within a decade. Judea Pearl
  • 82. Why Machine Understanding? To Avoid Unintended Consequences! ● Value alignment: where the AI’s goals are compatible with our *useful/good values ● Verifiability / Intelligibility: the AI’s source code / framework is verifiably beneficial (or highly likely to be), and its goals and means are intelligible ● Ethical standards: AI will have to understand ethics - and its understanding ought to be made intelligible to us ● Efficiency: far less training data and computation - efficiency required for time-sensitive decisiveness
  • 83. Why? - Value Alignment ● AI should understand and employ common sense about: ○ What you are asking - did you state your wish poorly? ○ Your motivations - Why you are asking it ○ The context ■ where desirable solutions are not achieved at any cost ■ Is your wish counter productive? Would it harm you or others? ■ Ex. After summoning a genie, using the 3rd wish to undo the first 2 wishes. King Midas - badly stated wish.
  • 85. AI needs to Understand Ethics Intelligible Ethical Standards ● Rule-based systems are fragile - Asimov’s novels explore the failure modes ● Convincing predictive ethical models are not enough - predictions fail, and it’s bad when it’s not known why ● A notion of cause and effect is imperative for understanding ethical problems - obviously, trolley problems :D
  • 86. Why? - Efficiency Overhang resulting from far more efficient algorithms - allowing more computation to be done with less hardware. More computation used *wisely could realise wonderful things. Alternatively, ultra-efficient machines could make doomsday devices accessible to many more people (inc. psychopaths) *wisdom is hard won through understanding We have a preference for X before we die - X could be: a beneficial singularity, radical longevity medicine, robotic companions, becoming posthuman etc.
  • 87. Why? - Wisdom Simplistically, wisdom is hard won through understanding. Decisions are less risky when knowledge is understood and more risky when based on ignorant interpretations of data.
  • 88. Why Not Machine Understanding? Machine Understanding could be disastrous if: 1. Adding the right ingredients to AI makes it really cheap and easy to use; 2. AI will be extremely capable (intelligence is powerful before and after an intelligence explosion); 3. It doesn’t take many bad eggs or misguided people intentionally or unintentionally using the AI for diabolical ends for there to be devastating consequences
  • 89. How - Achieving AI Understanding - Interpretability Take hints from Lakatos Award winner Henk de Regt’s arguments against the common theory that scientific explanation/understanding is achieved through building models: ● The scientists building the models need certain skills - scientific understanding is fundamentally a skillful act ● He also emphasises intelligibility in scientific explanation - something sorely lacking in the deep learning models that AI develops
  • 90. Causal AI - the logical next step 4 rungs: 1) learning by association (deep learning, massive correlation - AlphaGo, some animals apparently, and sales managers) a) lacks flexibility and adaptability 2) taking action - “what ifs”: taking actions to influence outcomes; interventions require a causal model 3) identifying counterfactuals (“but-for” causation) - e.g. to a computer, a match and oxygen may play an equal role 4) confounding bias - associated with the 2nd level - citrus fruits prevent scurvy.. but why? For a long time people thought it was the acidity - it turned out to be vitamin C https://ssir.org/articles/entry/the_case_for_causal_ai
  • 91. Causal Learning in AI - a key to Machine Understanding. Opening up a world of possibilities for science. - Important for humans in making discoveries - if we don’t ask why, we don’t make new discoveries - Big data alone can’t answer causal questions - emerging causal AI will tease causation apart from correlation, identifying which variables are mediators and which are confounders
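The correlation-vs-causation point can be demonstrated numerically: with a confounder present, the observational association between X and Y differs sharply from the effect of actually intervening on X. A minimal simulation (the variable names and effect sizes are mine, purely for illustration):

    # Confounder Z drives both X and Y; X has zero causal effect on Y.
    import random

    def sample(intervene_x=None):
        z = random.gauss(0, 1)                                    # confounder
        x = z + random.gauss(0, 0.1) if intervene_x is None else intervene_x
        y = 2 * z + random.gauss(0, 0.1)                          # Y depends only on Z
        return x, y

    # Observational: conditioning on high X implies high Z, so Y looks high.
    obs = [sample() for _ in range(10_000)]
    high_x = [y for x, y in obs if x > 1]
    print(round(sum(high_x) / len(high_x), 2))        # ~3.0

    # Interventional: do(X=1) severs the Z -> X arrow; Y stays near zero.
    do = [sample(intervene_x=1.0) for _ in range(10_000)]
    print(round(sum(y for _, y in do) / len(do), 2))  # ~0.0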
  • 92. Causal Representation Learning Great paper ‘Towards Causal Representation Learning’ - “A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low- level observations.”
  • 93. Causation & Moral Understanding Consequentialism - the consequences of an act are 1) its causal consequences (computationally tractable), or 2) the whole state of the universe following the act (practically incomputable) Zero-Shot Learning for Ethics: being able to understand salient causal factors in novel moral quandaries (problems including confounders and out-of-training-sample elements, i.e. interventions) increases the chance of AI solving these moral quandaries on the first try, and then explaining how and why the solution worked. Some problems, including building an ethical Superintelligence, may need to be done correctly on the first try.
  • 94. Understanding Understanding ● Philosophy / Epistemology ● Human Understanding ● Machine Understanding ● Understanding & General Intelligence ● AI Impacts / Opportunities & Risks ● Ethics & Zero Shot Solving of Moral Quandaries Conference in planning for some time after CoronaVirus restrictions ease - let me know if you are interested - tech101 [at] gmail [dot] com Twitter @adam_ford YouTube: YouTu.be/TheRationalFuture
  • 96. ● Degrees of belief ● Confidence intervals, Credence levels ● Epistemic humility ● What's the likelihood of smarter than HLI by 2050? Or 2075? Or 2100? Belief shouldn’t be On / Off
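“Degrees of belief” has a standard formalisation: Bayesian updating, where a credence is a probability that moves with evidence rather than flipping on or off. A minimal sketch (the prior and likelihoods are made-up numbers, purely illustrative):

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    def update(prior, p_evidence_if_true, p_evidence_if_false):
        odds = prior / (1 - prior) * (p_evidence_if_true / p_evidence_if_false)
        return odds / (1 + odds)

    credence = 0.30                        # illustrative prior: smarter-than-HLI by 2050
    credence = update(credence, 0.8, 0.4)  # some supportive evidence arrives
    print(round(credence, 2))              # 0.46 - the belief moved; it didn't flip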
  • 97. Many ways to achieve flight
  • 98. Plan AI → Build AI → AGI: single-path dependency - Bad! This is better →
  • 101. Narrow AI vs AGI "Artificial General Intelligence" (AGI): AI capable of "general intelligent action". Distinction: Narrow AI (or Weak AI / Applied AI) is used to accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases lie completely outside) the full range of human cognitive abilities. A gradient between Narrow & General - what lies in between? ● Narrow: Deep Blue (Chess), Watson (Jeopardy) ● Arguably in between: AlphaGo, AlphaStar ● Optimally General: no such thing. Computable AIXItl? Not foreseeably
  • 102. Getting closer to solving key building blocks of AI ● Unsupervised Learning (without human direction, on unlabelled data) - DeepMind’s AlphaGo, AlphaStar, now in many other systems ● Transfer Learning (knowledge in one domain is applied to another)
  • 103. One-shot or Few-Shot Learning ● GPT-3 - few-shot learning Transfer Learning "I think transfer learning is the key to general intelligence. And I think the key to doing transfer learning will be the acquisition of conceptual knowledge that is abstracted away from perceptual details of where you learned it from." - Demis Hassabis
  • 104. Transfer Learning Knowledge in one domain is applied to another 1. IMPALA simultaneously performs multiple tasks—in this case, playing 57 Atari games—and attempts to share learning between them. It showed signs of transferring what was learned from one game to another. https://www.technologyreview.com/f/610244/deepminds-latest-ai-transfers-its-learning-to-new-tasks/ 2. REGAL - the key idea is to combine a neural network policy with a genetic algorithm, the Biased Random-Key Genetic Algorithm (BRKGA). REGAL achieves on average 3.56% lower peak memory than BRKGA on previously unseen graphs, outperforming all the algorithms compared against, and giving a 4.4x bigger improvement than the next best algorithm. https://deepmind.com/research/publications/REGAL-Transfer-Learning-For-Fast-Optimization-of-Computation-Graphs
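The everyday small-scale version of transfer learning is fine-tuning: reuse a network trained on one task and retrain only its head on another. A minimal PyTorch/torchvision sketch (not related to IMPALA or REGAL; the choice of ResNet-18 and the 10-class head are my placeholders):

    # Transfer learning by fine-tuning: freeze pretrained features, retrain the head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)        # features learned on ImageNet
    for param in model.parameters():
        param.requires_grad = False                 # keep transferred knowledge fixed

    model.fc = nn.Linear(model.fc.in_features, 10)  # fresh head for 10 target classes
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    # ...then train only model.fc on data from the new domain...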
  • 106. Evaluating measurable claims about intelligence Cognitive modelling (does it think & learn similarly to humans?) - comparing AI vs human reaction times, error rates, response quality etc.
  • 107. Evaluation metrics Concrete Metrics: ● Performance ● Scalability ● Fault Tolerance Abstract Metrics: ● Task and Domain Generality ● Expressivity ● Robustness ● Instructability ● Taskability ● Explainability
  • 108. Full understanding of human intelligence not required for AI ● For most of human history, we didn’t really understand fire - but made good use of it ● Today we don’t fully understand biology, yet we are able to develop effective medicine ● We don’t fully understand the brain, yet we have developed intelligent algorithms - some neuromorphic (brain-inspired)
  • 109. Peaks & Valleys in Cognitive capability ● Peaks of AI cognitive capabilities have already crossed over valleys in human cognitive capabilities. ● Will we see a rising tide in general AI cognitive capability? ● Will the valleys in AI cognitive ability rise above the peaks in human cognitive capability?
  • 110. Ben Goertzel “It's impracticable to halt the exponential advancement of technology…”
  • 111. AI vs HLI Ability - Peaks & Troughs Not representative of all human or AI abilities
  • 112. AI vs HLI Ability - Peaks & Troughs Not representative of all human or AI abilities
  • 113. Turing Tests & Winograd Schemas ● Arguably passed the Turing test ● A Winograd Schema success rate of 90% would be considered impressive by people in NLP (a classic schema: "The trophy doesn't fit in the suitcase because it is too big" - what does "it" refer to?) - UPDATE - GPT-3 recently scored 88.3 without fine-tuning.. see 'Language Models are Few-Shot Learners' ● Conversational AI getting better - fast (see earlier example) / OpenAI’s GPT-2 / now GPT-3 - https://philosopherai.com & https://play.aidungeon.io ● Also check out NVIDIA’s new sensation :D https://www.zdnet.com/article/nvidias-ai-advance-natural-language-processing-gets-faster-and-better-all-the-time
  • 114. What won’t AI be able to do within the next 50 years? Best predictions? What Computers Can’t Do - Hubert Dreyfus ○ What Computers Still Can’t Do ■ Make coffee ■ Beat humans at physical sports ■ Write funny jokes? ■ Write poetry? ● and novels ● Formulate creative strategies - the kind that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO’s ability to lead his or her company in bold new directions. This isn’t just about analyzing data; it’s about venturing into unstructured problem-solving tasks and deciding which pieces of information are relevant and which can be safely ignored. https://www.digitaltrends.com/cool-tech/things-machines-computers-still-cant-do-yet/
  • 115. How to evaluate evidence? ● Trend extrapolation - Black to Dark Box ○ Some views on trend extrapolation favor near term AGI ○ Some views don’t ○ Strange how people carve up what’s useful trend extrapolation and what isn’t to support their views ● Inside view - White to Translucent Box ○ Weak - tenable - loose qualitative views on issues with lopsided support ○ Strong - difficult ● Appeal to authority :D Pick your authority!
  • 116. Trend Extrapolation: Accelerating Change ● Kurzweil's Graphs ○ Not a discrete event ○ Despite being smooth and continuous, explosive once curve reaches transformative levels (knee of the curve) ● Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When ten years later it went from 10 million nodes to 20 million, and then 40 million and 80 million, the appearance of this curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative. ● Q: How many devices do you think are connected to the internet today?
  • 117. This image is old. A 128GB SanDisk card was $30 and a 400GB card $99 on eBay (circa Sept 2019). A 512GB microSD card was $99 AUD in Jan 2022 at Scorptec
  • 120. Weeeee! Enjoy the ride! Math and deep insights (especially probability) can be powerful relative to trend fitting and crude analogies. Long-term historical trends are weakly suggestive of future events. BUT - humans aren’t good at math, and aren’t all that convinced by it - nice pretty graphs or animated gifs seem to do a better job. Difficulty: convincing humans is important - since they will drive the trajectory of AI development… so convincing people of good causes based on less-accurate/less-sound yet convincing arguments is important.
  • 121. Survey on AI timelines AI expert poll in 2016: ● 3 years for championship Angry Birds ● 4 years for the World Series of Poker (some poker solved in 2019) ● 6 years for StarCraft (now solved) ● 6 years for folding laundry ● 7–10 years for expertly answering 'easily Googleable' questions ● 8 years for average speech transcription (sort of solved) ● 9 years for average telephone banking (sort of solved) ● 11 years for expert songwriting (sort of solved) ● over 30 years for writing a New York Times bestseller or winning the Putnam math competition https://aiimpacts.org/miri-ai-predictions-dataset
  • 122. Surveys of AI Researchers 1) In 2009, 21 AI experts participating in the AGI-09 conference were surveyed. Experts believed AGI will occur around 2050, and plausibly sooner. You can see their estimates regarding specific AI achievements: passing the Turing test, passing third grade, accomplishing Nobel-worthy scientific breakthroughs and achieving superhuman intelligence. 2) In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based on the results, experts estimate a 50% chance that AGI will arrive before 2060. However, there is significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years. Some significant job functions expected to be automated by 2030: call center reps, truck driving, retail sales. https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing
  • 123. 2019 - When Will We Reach the Singularity? A Timeline Consensus from AI Researchers https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/
  • 124. Recap.. 1. Intelligence is powerful 2. Human intelligence is a small fraction of a huge space of all possible mind designs 3. We already have AI that outperforms HLI in specific domains (too many examples to list) - AI has surged past HLI in those domains 4. An AI that could self-improve could become very powerful very fast 5. Timeline consensus from AI researchers says AGI before 2050 or 2060
  • 126. 3 AI Boosters 1. Overhang resulting from a. algorithmic performance gains (more efficient algorithms require less computing power per unit of value) b. or economic gains (Microsoft investing $1 billion in OpenAI) c. cheaper cloud infrastructure (drops in price/performance of already-connected cloud) 2. Self-improving systems - a feedback loop where gains from previous iterations boost performance of subsequent iterations 3. Economic/political will - the race for first-mover advantage
  • 127. Hardware Overhang ● New programming methods use available computing power more efficiently. ● New / improved algorithms can exploit existing computing power far more efficiently than previous suboptimal algorithms… ○ Result: AI/AGI can run on smaller amounts of hardware. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could easily be copied to run on countless computers. ● Examples: ○ In 2010, the President's Council of Advisors on Science and Technology reported on a benchmark production planning model having become faster by a factor of 43 million between 1988 and 2003 - where only a factor of roughly 1,000 was due to better hardware, while a factor of 43,000 came from algorithmic improvements. ○ Sudden advances in capability - DeepMind’s AlphaGo, AlphaFold, GPT-2 ● Estimates of time to reach the computing power required for whole brain emulation: ~a decade away ○ BUT: it is very unlikely that human brain algorithms are anywhere near the most computationally efficient way to produce AI. ○ WHY? Our brains evolved during a natural selection process and thus weren't deliberately created with the goal of being modeled by AI. https://wiki.lesswrong.com/wiki/Computing_overhang
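The PCAST decomposition in that example is multiplicative, and the two reported factors recover the headline figure:

    # Speedup decomposition from the 2010 PCAST example cited above.
    hardware_speedup = 1_000        # from better hardware
    algorithmic_speedup = 43_000    # from better algorithms
    print(hardware_speedup * algorithmic_speedup)   # 43,000,000 (43 million)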
  • 128. Economic Gains In 2016 AI investment was up to $39 billion, 3x from 2013 See McKinsey Report - ARTIFICIAL INTELLIGENCE THE NEXT DIGITAL FRONTIER?
  • 129. AI Index 2019 Annual Report ● AI is the most popular area for computer science PhD specialization, and in 2018, 21% of graduates specialized in machine learning or AI. ● From 1998 to 2018, peer-reviewed AI research grew by 300%. ● In 2019, global private AI investment was over $70 billion, with startup investment at $37 billion, M&A at $34 billion, IPOs at $5 billion, and minority stakes at $2 billion. Autonomous vehicles led global investment in the past year ($7 billion), followed by drug and cancer research, facial recognition, video content, fraud detection, and finance. https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
  • 130. AI Index 2019 Annual Report ● China now publishes as many AI journal and conference papers per year as Europe, having passed the U.S. in 2006. ● More than 40% of AI conference paper citations are attributed to authors from North America, and about 1 in 3 come from East Asia. ● Singapore, Brazil, Australia, Canada and India experienced the fastest growth in AI hiring from 2015 to 2019. ● The vast majority of AI patents filed between 2014 and 2018 were filed in nations such as the U.S. and Canada, and 94% of patents were filed in wealthy nations. ● Between 2010 and 2019, the total number of AI papers on arXiv increased 20-fold. https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
  • 131. Kinetics of an Intelligence Explosion Intelligence level need not rise as one continuous, smooth surge upwards. 'Crossover' refers to the point where the system's own contribution to its improvement exceeds what outside effort contributes - beyond it, the AI can handle further growth by itself.
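A toy numerical sketch of the crossover idea, in the spirit of rate-of-growth = optimization power / recalcitrance; every parameter below is an arbitrary assumption chosen only to make the dynamic visible, not a model of any real system:

```python
# Toy takeoff kinetics: dI/dt = (outside_effort + k * I) / recalcitrance.
# 'Crossover' is where the system's own term (k * I) overtakes outside effort.
outside_effort = 1.0   # constant human research effort (assumption)
k = 0.05               # how well the system turns intelligence into R&D (assumption)
recalcitrance = 10.0   # how strongly the problem resists optimization (assumption)
I = 1.0                # starting intelligence level, arbitrary units
for t in range(300):
    own_contribution = k * I
    if own_contribution > outside_effort:
        print(f"crossover at t={t}: the system now drives its own growth (I={I:.1f})")
        break
    I += (outside_effort + own_contribution) / recalcitrance
```

Before crossover, growth is dominated by the constant outside effort and looks roughly linear; after it, the self-term dominates and growth compounds.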
  • 133. Bill Gates "Reverse Engineering the Brain is Within Reach" -“I am in the camp that is concerned about super intelligence,” Bill Gates said during an “Ask Me Anything” session on Reddit. “First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
  • 134. Stuart Russell If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine - they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.
  • 135. ‘Expert’ predictions of Never... (On Nuclear physics) “The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later. In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive.” - Stuart Russell - co-wrote foundational textbook on AI
  • 137. AI Capability: Optimal ● Tic-Tac-Toe ● Connect Four: 1988 ● Checkers (aka 8x8 draughts): weakly solved (2007) ● Rubik's Cube: mostly solved (2010) ● Heads-up limit hold'em poker: statistically optimal in the sense that "a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution" (2015) - see https://www.deepstack.ai/ These search spaces are very small - though heads-up limit hold'em is interesting: the near-optimal strategy was computed through self-play (counterfactual regret minimisation) rather than learned from human examples. https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence#Current_performance
  • 138. AI Capability: Super-human ● Othello (aka reversi): c. 1997 ● Scrabble: 2006 ● Backgammon: c. 1995-2002 ● Chess: supercomputer (c. 1997); personal computer (c. 2006); mobile phone (c. 2009); computer defeats human + computer (c. 2017) ● Jeopardy!: question answering, although the machine did not use speech recognition (2011) ● Shogi: c. 2017 ● Arimaa: 2015 ● Go: 2017 <- AlphaGo Zero - learned how to play from scratch via self-play ● Heads-up no-limit hold'em poker: 2017 State of play in AI progress - some areas are already super-human. Board games and some computer games are well-defined problems, so competence is easier to measure against them.
  • 139. AI Capability: Par-human ● Optical character recognition for ISO 1073-1:1976 and similar special characters ● Classification of images ● Handwriting recognition AI Capability: High-human ● Crosswords: c. 2012 ● Dota 2: 2018 ● Bridge card-playing: according to a 2009 review, "the best programs are attaining expert status as (bridge) card players", excluding bidding ● StarCraft II: 2019 <- AlphaStar at grandmaster level AlphaGo and AlphaStar learned largely through self-play rather than human supervision. Wider search spaces - Dota 2 and StarCraft involve incomplete information and long-term planning. AlphaStar makes use of 'autoregressive policies'..
  • 140. AlphaZero / AlphaStar If you were a blank slate, were given a Go board, and were told you had 24 hours to teach yourself to become the best Go player on the planet - and you did exactly that, and then did the same thing with Chess and Shogi (Shogi being more complex than Chess) - wouldn't you say that proved you had the ability to adapt to novel situations? Well, AlphaZero did all that, and it didn't use brute force to do it either (sibling systems from the same lab went on to StarCraft II - AlphaStar - and protein folding - AlphaFold). Stockfish is another chess program, but unlike AlphaZero it didn't teach itself - humans hand-crafted its knowledge. Stockfish could easily beat any human player, but it couldn't beat AlphaZero, despite running on a faster computer and evaluating 70,000,000 positions a second while AlphaZero's much smaller computer could only evaluate 80,000. And AI is becoming less narrow and brittle every day. Go is played on a 19x19 grid, but if you changed it to 20x20 or 18x18 and gave AlphaZero another 24 hours to teach itself, it would very likely be the best player in the world at that new game too - without any human help. AlphaZero is not infinitely adaptable, but then humans aren't either.
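The following is not AlphaZero's actual algorithm - the real system combines a deep network with Monte Carlo tree search - but a minimal, runnable sketch of the same core idea: learning a game purely from self-play, with no human examples (tabular value learning on tic-tac-toe):

```python
import random
from collections import defaultdict

# Learning tic-tac-toe purely from self-play: no human games, only games
# the agent plays against itself, scored by the final outcome.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

Q = defaultdict(float)        # learned value of each (board, move) pair
ALPHA, EPSILON = 0.3, 0.1     # learning rate and exploration rate

def choose(board, moves):
    if random.random() < EPSILON:                   # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # exploit learned values

for episode in range(50_000):
    board, player, history = " " * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result:
            # propagate the final outcome back through every move of the game
            for b, m, p in history:
                reward = 0.5 if result == "draw" else (1.0 if result == p else -1.0)
                Q[(b, m)] += ALPHA * (reward - Q[(b, m)])
            break
        player = "O" if player == "X" else "X"

print("distinct (position, move) values learned:", len(Q))
```

Swapping the lookup table for a neural network and the one-step choice for tree search is, very roughly, the leap from this sketch towards AlphaZero.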
  • 142. AI Capability: Sub-human ● Optical character recognition for printed text (nearing par-human for Latin-script typewritten text) ● Object recognition ● Facial recognition: low to mid human accuracy (as of 2014) ● Visual question answering, such as the VQA 1.0 benchmark ● Various robotics tasks that may require advances in robot hardware as well as AI, including: ○ Stable bipedal locomotion: bipedal robots can walk, but are less stable than human walkers (as of 2017) ○ Humanoid soccer ● Speech recognition: "nearly equal to human performance" (2017) ● Explainability: current medical systems can diagnose certain conditions well, but cannot explain to users why they made the diagnosis ● Stock market prediction: financial data collection and processing using machine-learning algorithms ● Various tasks that are difficult to solve without contextual knowledge, including: ○ Translation ○ Word-sense disambiguation ○ Natural language processing Though AI is currently classed as par- or sub-human at these tasks, on most of them AI capability is broadening and improving far faster than human performance can.
  • 143. https://talktotransformer.com OpenAI's GPT-2 - Better Language Models and Their Implications: "We've trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization - all without task-specific training." Such models generate text by successively guessing what will come next, feeding each guess back in as input and guessing again. Language models, where each word is predicted from the words before it, are perhaps the best-known example: they power the text predictions that pop up in some email and messaging apps. Recent advances in language modelling have enabled the generation of strikingly plausible passages, such as those from OpenAI's GPT-2; talktotransformer.com (above) demos a cut-down version of it.
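To reproduce this kind of generation locally rather than via the website, one common route is the Hugging Face transformers library (an assumption about tooling, not something the slide itself prescribes):

```python
# Sampling from the small GPT-2 release via the Hugging Face `transformers`
# library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # 124M-parameter "small" GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Autoregressive generation: predict a token, append it, predict again.
prompt = tokenizer.encode("The future of artificial intelligence", return_tensors="pt")
output = model.generate(prompt, max_length=60, do_sample=True, top_k=40,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```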
  • 144. 'There's a Wide-Open Horizon of Possibility.' Musicians Are Using AI to Create Otherwise Impossible New Songs https://time.com/5774723/ai-music/
  • 145. Recap.. 1. Intelligence is powerful 2. Human intelligence is a small fraction of the space of all possible mind designs 3. An AI that could self-improve could become very powerful very fast 4. Look for trends as well as inside views to help gauge likelihood 5. Some AI is becoming more general (moving towards AGI) ○ Unsupervised learning ○ Transfer learning 6. Experts in AI research think it is likely AGI could be developed by mid-century 7. We already have AI smarter than HLI in specific domains (too many examples to list) ○ AI has surged past HLI in those domains
  • 148. Accelerating returns - a shortening of timescales between salient events. [Chart: logarithmic plot of "Time Before Present (years)" on the X axis against "Time to Next Event (years)" on the Y axis, running from early life ~4 billion years ago towards the present.]
  • 149. Evolutionary History of Mind Daniel Dennett's Tower of Generate and Test ● "Darwinian creatures": simply selected by trial and error on the merits of their bodies' ability to survive; their "thought" processes are entirely genetic. ● "Skinnerian creatures": also capable of independent action, and therefore able to enhance their chances of survival by finding the best action (conditioning overcomes the purely genetic trial and error of Darwinian creatures); phenotypic plasticity; a bodily tribunal of evolved, though sometimes outdated, wisdom. ● "Popperian creatures": can play out an action internally in a simulated environment before performing it in the real environment, and can therefore reduce the chance of negative effects - this allows "our hypotheses to die in our stead" (Popper); see the sketch below. ● "Gregorian creatures": tool-enabled - in particular they master the tool of language; using tools (e.g. words) permits learning from others and the possession of learned predispositions to carry out particular actions.
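A toy illustration of the Popperian step - trying actions in an internal simulation so that bad hypotheses "die in our stead"; the world model below is a deliberately crude, hypothetical stand-in:

```python
# Popperian action selection: evaluate candidate actions against an internal
# (simulated) model of the world before committing to any of them.
def popperian_choice(actions, simulate, utility):
    """Pick the action whose *simulated* outcome scores best."""
    return max(actions, key=lambda a: utility(simulate(a)))

# Example: an agent deciding how hard to jump over a gap it models internally.
GAP_WIDTH = 2.0

def simulate(jump_power):            # crude internal physics model (assumption)
    distance = 1.5 * jump_power
    return "safe" if distance >= GAP_WIDTH else "falls"

def utility(outcome):
    return 1.0 if outcome == "safe" else -10.0

# No real jumps were risked while evaluating the dangerous options.
print(popperian_choice([0.5, 1.0, 1.5, 2.0], simulate, utility))  # -> 1.5 (first safe option)
```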
  • 150. Evolutionary History of Mind ● To add to Dennett's metaphor: we are building a new floor of the tower. ● "Turingian creatures": use tool-enabled tool-enablers - in particular, they create autonomous agents that create even greater tools (e.g. minds with ever more capable cognition) - permitting the insightful engineering of increasingly insightful agents, resulting in an Intelligence Explosion.
  • 151. References ● "Kinds of Minds" - Daniel Dennett: http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514 ● Cascades, Cycles, Insight...: http://lesswrong.com/lw/w5/cascades_cycles_insight ● Recursive Self-Improvement: http://lesswrong.com/lw/we/recursive_selfimprovement ● Terry Sejnowski: http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours ● Nanotech: http://e-drexler.com ● Justin Rattner - Intel touts progress towards intelligent computers: http://news.cnet.com/8301-1001_3-10023055-92.html#ixzz1G9Iy1cXe ● Bill Gates speaking about AI risk on Reddit: https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third ● Superintelligence: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies ● Smarter Than Us: http://lesswrong.com/lw/jrb/smarter_than_us_is_out