AI – Risks, Opportunities and Ethical Issues April 2023.pdf
1.
DeepMind’s Deep Q-Learning, circa 2015
Agent fed sensory input (a pixel stream) with the goal of maximising the on-screen score - no rules or domain knowledge given.
Starts off clumsy…
Much better after 2 hrs…
After 4 hrs it learns to dig a tunnel through to the top side…
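A minimal sketch of the idea described above (not DeepMind's DQN): a value-learning agent sees only observations and a score signal, and nudges its action-value estimates toward the Bellman target r + γ·max Q(s′, ·). The linear Q-function and the environment stub are hypothetical stand-ins for the deep network and the Atari pixel stream.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 16, 4          # stand-ins for pixel features / joystick actions
W = np.zeros((n_features, n_actions))  # linear Q-function: Q(s) = s @ W
gamma, alpha, epsilon = 0.99, 0.01, 0.1

def env_step(state, action):
    """Hypothetical environment stub standing in for the Atari emulator."""
    next_state = rng.normal(size=n_features)
    reward = float(next_state[action] > 0)   # arbitrary stand-in for the on-screen score
    return next_state, reward

state = rng.normal(size=n_features)
for t in range(10_000):
    q_values = state @ W
    # epsilon-greedy: mostly exploit current value estimates, occasionally explore
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(q_values.argmax())
    next_state, reward = env_step(state, action)
    # Q-learning update toward the bootstrapped target r + gamma * max_a' Q(s', a')
    td_error = reward + gamma * (next_state @ W).max() - q_values[action]
    W[:, action] += alpha * td_error * state
    state = next_state
```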
Ethics of Strong AI – Risks & Opportunities
Adam Ford • 2023-04 • @adam_ford
Will AI be able to self-improve?
Can we trust powerful AI, and how can we
ethically align it? Which values are we to align
it to?
What could go awfully wrong?
Can AI Understand stuff? (inc. us & itself)
What role might causal learning play?
ROI - which interventions are most effective?
https://www.givingwhatwecan.org/research/the-moral-imperative-towards-cost-effectiveness
Note: DALY = Disability Adjusted Life Years, QALY = Quality Adjusted Life Years
Reducing the impact of the AIDS virus
8.
ROI - Effective Interventions p2
Green Revolution (Norman Borlaug, agriculture) ≈ 250 million saved (though there were environmental impacts, loss of biodiversity, deforestation etc.)
Smallpox Vaccine (Edward Jenner, medical) ≈ 500 million saved
Haber-Bosch process (Fritz Haber, nitrogen for fertilizer in agriculture) ≈ 1 billion saved (though the process was also used in the development of explosives, and Fritz Haber was involved in developing chemical weapons)
https://en.wikipedia.org/wiki/Green_Revolution, https://en.wikipedia.org/wiki/Smallpox_vaccine, https://en.wikipedia.org/wiki/Haber_process
Extinction means Damnation of the Limitless Potential of Knowledge
David Deutsch - The Beginning of Infinity - “Everything not forbidden by the laws of nature is achievable, given the right knowledge”
Any physically possible activity can be achieved
provided the knowledge to do so has been acquired.
● Generation of better knowledge acquisition systems -
better computation and information processing
● Space colonisation (inner and outer)
● Huge gains in energy generation (Dyson swarms, Matrioshka brains)
● Better governance systems bestowing more well-being
on all sentience .. can we engineer better qualia?
11.
AI as PASTA: Process for Automating Scientific and Technological Advancement
The bottlenecks to innovation (education, human
minds, societal factors.. etc.) could be alleviated by
Superintelligent (Transformative) AI - leading to
explosive growth in knowledge, theoretical and
practical, in the form of scientific and technological
advancement - leading to a kaleidoscope of
possible futures. https://www.cold-takes.com/this-cant-go-on/#scientific-and-technological-advancement
Technological Maturity
A technologically mature civ is robust, antifragile, able to
coordinate effectively, and likely would have solved
ethics.
“a technologically mature civilization could (presumably) engage in
large-scale space colonization... be able to modify and enhance
human biology... could construct extremely powerful computational
hardware and use it to create whole-brain emulations and entirely
artificial types of sentient, superintelligent minds” (Bostrom, 2012)
16.
Technological Maturity
The Ongoing Avoidance of Extinction (OAE)
Compassionate Stewardship: Nurturing a
posthuman civilisation of invincibly mirthful
beings animated by an unlimited
kaleidoscope of novelty and gradients of
bliss.
Longtermism - Earth Only
● All the people who have died in the last
200,000 years - 109 billion.
● All the people alive today - 7.95 billion.
● The number of people who have died is about 14 times the current population.
● If we project 800,000 years into the
future, with the earth’s population
stabilizing at about 11 billion, with life
expectancy at 88, there will be 100 trillion
people born.
● 5.4e+17 lives until the sun becomes a red giant
https://ourworldindata.org/longtermism
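A quick sanity check of the figures above (back-of-the-envelope arithmetic only, using the slide's own numbers):

```python
past_deaths = 109e9        # people who have died over the last ~200,000 years
alive_today = 7.95e9
print(past_deaths / alive_today)     # ~13.7, i.e. roughly 14x the current population

# Births over the next 800,000 years if population stabilises at ~11 billion
# with life expectancy ~88: births per year ~ population / life expectancy.
births = (11e9 / 88) * 800_000
print(f"{births:.1e}")               # ~1e14, i.e. ~100 trillion people born
```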
Intelligence is Powerful (and potentially dangerous)
Intelligence got our ancestors from mid-level scavengers to apex predators
and beyond
● from hand axes → the Accuracy International L115A3 sniper rifle → fully automated drones
● from barefoot meandering → heavier-than-air flight → escape velocity → landing robots on Mars, probes escaping our solar system
● artificial intelligence..
If we substitute human minds with better-designed minds running at 1 million × human clock speed, we would likely see a much faster rate of progress in technoscience.
24.
Force Multipliers
Force multipliers in multiple areas of science and technology are setting the stage for an AI-driven productivity explosion
Examples: higher resolution brain scanners, faster
computing hardware, exotic forms of computation,
improved algorithms, improved philosophy
● AI itself is a force-multiplier feedback for progress in the above examples
Intelligence via Evolution vs the new signature of Intelligent Design
Homo sapiens intelligence - the result of "blind" natural selection - has, on evolutionary timescales, only recently developed the ability for planning and forward thinking: deliberation. Almost all our cognitive tools were the result of ancestral selection pressures, forming the roots of almost all our behavior. So when we consider the design of complex systems in which the intelligent designer - us - collaborates with the system being constructed, we are faced with a new signature: a way of achieving General Intelligence completely different from the process that gave birth to our brains.
31.
AI is powerful now, and will be more powerful in the future
Syllogism 1: intelligence is powerful; AI is intelligent; therefore AI is powerful.
Syllogism 2: technological progress is a much faster force multiplier than evolutionary progress; AI is subject to technological progress and HI is subject to evolutionary progress; therefore AI will become smarter faster than HI.
Syllogism 3: more intelligence is more powerful than less intelligence; AI will overtake HI; therefore AI will be more powerful than HI.
32.
A faster future than we think - are we ready?
1. There is a substantial chance of Human Level AI (HLAI) before 2100;
2. If HLAI, then likely Super Intelligence (SI) will follow via an Intelligence Explosion (IntExp);
3. Therefore an AI-driven productivity explosion is likely;
4. An uncontrolled IntExp could destroy everything we value,
but a controlled IntExp would benefit life enormously -
shaping our future light-cone.
*HLAI= Human Level AI - The term ‘human-level’ may be a misnomer - I will discuss later
[The Intelligence Explosion - Evidence and Import 2012]
33.
Large Language Models & Math Automation
Lots of momentum at the moment in LLMs
"Large language models can write informal
proofs, translate them into formal ones, and
achieve SoTA performance in proving
competition-level maths problems!"
https://arxiv.org/abs/2210.12283
On Twitter: “If it is really possible for AIs to surpass us as much in
mathematics as AlphaGo surpasses us in go, what is the point of
doing mathematics?”
34.
GPT-3 / GPT-2
Fire your dungeon master - let the AI create your fantasy adventure while you play it!
https://aidungeon.io
● Degrees of belief
● Confidence intervals, Credence levels
● Epistemic humility
● What's the likelihood of smarter-than-HLI AI by 2050? Or 2075? Or 2100?
Belief shouldn’t be On / Off
Narrow AI vs AGI
"Artificial General Intelligence" (AGI) :
AI capable of "general intelligent action".
Distinction: Narrow AI (or Weak AI/Applied AI) used to accomplish specific problem
solving or reasoning tasks that do not encompass (or in some cases are completely
outside of) the full range of human cognitive abilities.
Gradient between Narrow & General - what lies in between?
● Narrow: Deep Blue (Chess), Watson (Jeopardy)
● Arguably in between: AlphaGo, AlphaStar
● Optimally General: No such thing. Computable AIXItl? Not foreseeably
43.
Getting closer to solving key building
blocks of AI
● Unsupervised Learning (without
human direction on unlabelled data) -
DeepMind’s AlphaGo, AlphaStar, now
in many other systems
● Transfer Learning (knowledge in one
domain is applied to another) -
44.
One-shot or Few-Shot Learning
● GPT-3 - few shot learning
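A hedged sketch of what "few-shot" means here: a handful of worked examples go directly into the prompt and the model is asked to continue the pattern, with no gradient updates or fine-tuning. The translation examples mirror those used in the GPT-3 paper ('Language Models are Few-Shot Learners'); the prompt below is only illustrative and not tied to any particular API.

```python
# Few-shot prompt: the "training" is entirely in-context.
prompt = """Translate English to French:
sea otter => loutre de mer
cheese => fromage
plush giraffe => girafe en peluche
ice cream =>"""

# A completion model would be asked to continue this text; the hoped-for
# continuation is "glace". One worked example instead of several would be
# one-shot; no examples at all would be zero-shot.
print(prompt)
```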
Transfer Learning
"I think transfer learning is the key to
general intelligence. And I think the key to
doing transfer learning will be the
acquisition of conceptual knowledge that is
abstracted away from perceptual details of
where you learned it from." - Demis
Hassabis
45.
Transfer Learning
Knowledge in one domain is applied to another
1. IMPALA simultaneously performs multiple tasks—in this case, playing 57
Atari games—and attempts to share learning between them. It showed signs of
transferring what was learned from one game to another.
https://www.technologyreview.com/f/610244/deepminds-latest-ai-transfers-its-learning-to-new-tasks/
2. REGAL - the key idea is to combine a neural network policy with a genetic algorithm, the Biased Random-Key Genetic Algorithm (BRKGA). REGAL
achieves on average 3.56% lower peak memory than BRKGA on previously
unseen graphs, outperforming all the algorithms we compare to, and giving
4.4x bigger improvement than the next best algorithm.
https://deepmind.com/research/publications/REGAL-Transfer-Learning-For-Fast-Optimization-of-Computation-Graphs
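A generic sketch of the transfer-learning recipe (not IMPALA or REGAL specifically): reuse a network trained on one domain, freeze its learned features, and train only a new head for the new task. Assumes torch and torchvision (≥0.13) are installed; the 10-class target task is hypothetical.

```python
import torch.nn as nn
import torchvision.models as models

# Backbone carrying knowledge from ImageNet (the "source" domain)
backbone = models.resnet18(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False               # keep the transferred features fixed

# New output head for a hypothetical 10-class target domain; only this part is trained
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
```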
Intelligence Explosion
"Let anultraintelligent machine be defined
as a machine that can far surpass all the
intellectual activities of any man however
clever. Since the design of machines is
one of these intellectual activities, an
ultraintelligent machine could design even
better machines; there would then
unquestionably be an ‘intelligence
explosion,’ and the intelligence of man
would be left far behind. Thus the first
ultraintelligent machine is the last invention
that man need ever make." - I. J. Good
48.
Intelligence Explosion
The purest case - AI rewriting its own
source code.
● Improving intelligence accelerates
● It passes a tipping point
● ‘Gravity’ does the rest
(Diagram: push vs pull - Sisyphus & robot)
49.
Is AI > Human Level Intelligence (HLI) likely?
Syllogism
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
4. Therefore, there will be AI++ (before too long, absent defeaters).
The Singularity: A Philosophical Analysis: http://consc.net/papers/singularity.pdf
What happens when machines become
more intelligent than humans?
Philosopher David Chalmers
50.
HLI will rise with AI at the pace of AI?
Rapture of the nerds - where HLI will rise with AI by humans
merging with it - where the true believers will break with human
history and merge with the machine, while the unaugmented will be
left in a defiled state…
● No.
● Biological bottlenecks - the kernel of human
intelligence.
● Merging with AI enough to keep pace means
becoming posthuman - defining characteristics of
humans will be drowned out like a drop of ink in an
ocean.
51.
Mathematician / Hugo Award-Winning SciFi Writer Vernor Vinge
coined the term:
"Technological Singularity"
The best answer to the question,
“Will computers ever be as smart as humans?”
is probably “Yes, but only briefly.”—Vernor Vinge
Vinge postulated that
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." - Vernor Vinge, 1993 https://edoras.sdsu.edu/~vinge/misc/singularity.html
52.
Closing the Loop - Positive Feedback
A Positive Feedback Loop is achieved through an agent's ability to recursively optimize its ability to optimize itself!
Bottlenecks? Other entities? The substrate is expandable (unlike the brain); grey-box or ideally white-box source code can be engineered and manipulated (the brain is black-box code with no manual… it can possibly be improved through biotech, but that's not trivial).
(Feedback loop diagram: AI optimizes its source code → better source code = more optimization power* → … *Smarter AI will be even better at becoming smarter.)
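A toy numeric sketch of this feedback loop (an illustration of the compounding structure, not a forecast): each round, current optimization power determines how much the source code can be improved, and the improved code raises optimization power in turn.

```python
def recursive_improvement(power=1.0, gain=0.1, rounds=30):
    """Each round: the improvement found is proportional to current power."""
    trajectory = [power]
    for _ in range(rounds):
        power += gain * power     # better code => more optimization power
        trajectory.append(power)
    return trajectory

# Compounds as (1 + gain)**rounds rather than growing linearly.
print(recursive_improvement()[-1])
```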
53.
Moore’s Law: the Actual, the Colloquial & the Rhetorical
● CPU - transistors per CPU not doubling anymore?
● GPU - speed increases
● Economic drivers motivated by general
advantages to having faster computers.
● Even if slowdown - small advantages in
speed can aid competitors in race
conditions. Though usually surges in
speedups follow. Not a smooth curve
upwards.
● No, speedups don’t have to be increases in
numbers of transistors in CPU cores.
Surface-level abstractions are used to make specific predictions about when the singularity will happen.
(Figure: a plot, on a logarithmic scale, of MOS transistor counts for microprocessors against dates of introduction, nearly doubling every two years. See https://ourworldindata.org/technological-progress)
Though Moore’s law has a specific definition, the term has been used colloquially to indicate a
general trend of increasing computing power.
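A quick illustration of the colloquial reading (doubling roughly every two years), starting from the commonly cited transistor count of the Intel 4004 - a rough sanity check, not a precise history:

```python
count_1971 = 2_300                    # Intel 4004 (1971), commonly cited figure
doublings = (2023 - 1971) / 2         # one doubling per two years
print(f"{count_1971 * 2 ** doublings:.1e}")   # ~1.5e11, the order of today's largest chips
```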
Human INT isn’t a universal standard
Human intelligence is a small fraction of the much larger mind design space
Misconception : Intelligence is a single dimension.
56.
The space of possible minds
● Humans, not being the only possible
mind, are similar to each other
because of the nature of sexual
reproduction
● Transhuman minds, while still being
vestigially human, could be smarter or
more conscious than unaugmented
humans
● Posthuman minds may exist all over the spectrum - occupying a wider possibility space
● It may be very hard to guess what an
AI superintelligence would be like,
since it could end up anywhere in this
spectrum of possible minds
Artificial Neural Nets
ANNs can do things that humans can't (directly) write computer programs to do:
● Image classification
● Accurate text translation
● Advanced Image/text generation from prompts
● Play games
● …much more to come!
60.
Recap
Intelligence is powerful - it's important to track progress in AI
AI doesn't need to be a reimplementation of human intelligence for it to be powerful - we need a wide view of intelligence
Distinguish between evaluating levels of intelligence in:
● AI vs Humans, and
● Humans vs Humans
Therefore:
● Avoid anthropomorphism
● We shouldn't limit how we measure progress in AI to the standard of human intelligence alone
AI: Why should we be concerned?
Value Alignment “The concern is not that [AGI] would hate or
resent us for enslaving it, or that suddenly a spark of consciousness
would arise and it would rebel, but rather that it would be very
competently pursuing an objective that differs from what we
really want.” - Nick Bostrom
- ..but what do we really want? Do we really know? What would we really want if we were wiser/smarter/kinder?
“If we use, to achieve our purposes, a mechanical agency with whose
operation we cannot interfere effectively … we had better be quite sure
that the purpose put into the machine is the purpose which we really
desire.” - Norbert Wiener (AI Pioneer and father of Cybernetics)
64.
Systemic Risks
● Race Conditions - intelligence is powerful - the first to create powerful AGI gets massive first-mover advantages - so safety measures are de-prioritised in favor of speeding up development - e.g. recommender systems creating addiction and polarisation
● Competitive pressure can create a race to the bottom
on safety standards
Capability Control - AI Stunting
Risks of creating an
AI system with
capabilities that are
artificially limited to
prevent it from
posing a threat.
67.
Teaching AI - real values are complex
Human values can be
complex, ambiguous,
and
context-dependent -
extremely difficult to
translate them into a
formal set of rules that
an AI system can
follow.
68.
Social Concerns: Political and class struggle
● The Artilect War
○ Cosmists - those who favor building AGI
○ Terrans - those who don’t
“I believe that the ideological disagreements between these two
groups on this issue will be so strong, that a major "artilect" war,
killing billions of people, will be almost inevitable
before the end of the 21st century."
- Hugo de Garis
69.
Game Theory - Multipolar Traps
Convergences to disastrous outcomes.
● Multi-polar (many-player) traps - situations in which multiple parties are incentivized to take actions that lead to a collectively unfavorable outcome (see the toy payoff sketch below)
● Self-interest and the tragedy of the commons
Where do multipolar traps occur? How to recognise the traps before falling in?
○ AI arms races, first-to-market advantages, geopolitical arms races
○ Races to the bottom (of ethical & safety standards)
○ Racing ahead with opaque (inscrutable) AI - leading to accidental misalignment
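A toy payoff sketch of the trap (illustrative numbers, not from the slide): each lab chooses to "race" (cut safety) or "cooperate" (hold safety standards). Racing is individually dominant, yet mutual racing leaves both worse off than mutual cooperation - the structure of a race to the bottom.

```python
# (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "race"):      (0, 5),
    ("race",      "cooperate"): (5, 0),
    ("race",      "race"):      (1, 1),
}

for my_move in ("cooperate", "race"):
    # whatever the other player does, "race" pays me more...
    print(my_move, [payoffs[(my_move, other)][0] for other in ("cooperate", "race")])
# ...so both parties race, and both end up at (1, 1) instead of (3, 3).
```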
70.
Bottlenecks on safety, reliability and trustworthiness
Trustability through orgs agreeing to safety standards - and seeking to generate more revenue by virtue of these models achieving higher levels of reliability, trustworthiness and safety.
Though the levers to measure and implement safety measures are in the
companies that build the models.
Profit motives may act as a perverse incentive to game AI safety for quick
profit - prioritising revenue over general safety in high stakes systems is
dangerous.
71.
Direct vs Indirect Normativity
Direct normativity - rules directly specifying a desired outcome, e.g. the 3 laws of robotics.
Indirect normativity - programming/training an AI to understand and acquire human values, or the values we would have if we were wiser and had more time to think about them - e.g. Coherent Extrapolated Volition.
Endowing AI with noble laws and goals may not prevent unintended
consequences
72.
Power Seeking behaviour
Convergent Instrumental Goals or Basic AI Drives - they emerge as helper goals that aid in achieving other goals.
● Resource acquisition - financial or material, e.g. obtaining more material for computing
● Cognitive enhancement
● Avoiding being turned off / self-preservation (leading to incorrigibility)
E.g. the Paperclip Maximiser.
Even well-stated, benign-seeming goals are not immune to instrumental convergence towards goals that are dangerously orthogonal to ours. - https://en.wikipedia.org/wiki/Instrumental_convergence
73.
Evolving with AI
Can we merge with the machines?
Will our values/goals constrain the machine? (btw, humans don’t
have aligned goals and values)
Will machine goals and values replace our own?
Are our values what they would
be if we were more ethical, wiser,
and .. post-human?
After a ‘long reflection’?
Pareto-efficient human/AI
merger?
74.
Empirical AI Safety & Trustworthy AI
Opaque AI vs Interpretable AI
& Ethical Implications of Both
Summary
Transparency and interpretability in AI is commonly used to mean the AI can be understood by outside observers (humans, other AI, etc).
Current modern ML is typically opaque - the output is interpretable, but you
can’t look at the model and detect why and how it created the output.
Robustness in AI Safety - Transparency can help in avoiding accidental
misalignment & deceptive alignment
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
Scalable Oversight
Generally, the more complex AI becomes, the harder it is to understand.
The challenge is to scale up oversight alongside the scaling up of AI complexity - without this, AI models would be impractical for humans to effectively evaluate.
Scalable Oversight research seeks to (perhaps by transparency and
interpretability) help humans speed up evaluation.
Though even if scalable oversight is solved, agents on target systems may still engage in reward hacking - DeepMind researches ways in which this can be alleviated with Causal Influence Diagrams https://arxiv.org/abs/1902.09980
79.
Bottlenecks on safety, reliability and trustworthiness
Ideally, trustworthy companies could train large models and
integrate safety - and seek to generate more revenue by virtue of
these models achieving higher levels of reliability and
trustworthiness and safety.
Though the levers to measure and implement safety measures sit within the companies that build the models. Profit motives may act as a perverse incentive to game AI safety for personal gain - prioritising revenue over general safety for end users may be dangerous in high-stakes systems.
80.
Right to Explanation (RTE)
Should the AI Alignment / AI Safety community demand rights to
explanation (of AI)?
In the regulation of algorithms, a right to be given an explanation for an
output of the algorithm.
Such rights primarily refer to individual rights to be given an
explanation for decisions that significantly affect an individual,
particularly legally or financially - though this notion could be extended
to social rights to explanation.
Some are concerned that RTE might stifle innovation and progress.
https://en.wikipedia.org/wiki/Right_to_explanation
81.
In the Media
‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives - ‘Black-box’ AI-based discrimination seems to be beyond the control of organisations that use it - Guardian article discusses algorithmic bias in black-box AI
82.
Transparency and Interpretability (ML & AI)
Transparency and interpretability in AI is commonly used to mean
the AI can be understood by outside observers (humans, other AI
etc), though it could be extended to mean AI interpreting and
understanding itself as well.
Current modern ML is typically opaque - the output is interpretable,
but you can’t look at the model and detect why it created the output.
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
83.
Anthropic AI
Chris Olah and associates are doing very interesting work at Anthropic on interpretability - great podcast interview at 80,000 Hours.
Building Reliable, Interpretable, and Steerable AI Systems
Anthropic is an AI safety and research company that’s working to build reliable,
interpretable, and steerable AI systems. Large, general systems of today can have
significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to
make progress on these issues. For now, we’re primarily focused on research towards
these goals; down the road, we foresee many opportunities for our work to create value
commercially and for public benefit.
84.
Code and Model Understandability
Traditional software is intelligible by virtue of us being able to
understand the code.
Artificial Neural Networks (ANNs) are not easily intelligible.
There are approaches to reverse engineer small ANNs into
something intelligible, or to interpret their circuits - at the moment,
this doesn’t scale well for large ANNs
https://www.lesswrong.com/tag/transparency-interpretability-ml-and-ai
85.
Unanticipated behaviours - Unintended
Attempting AI Safety through exhaustive training & behavioural
testing in sandbox environments.
1. Unable to anticipate all safety problems
2. Insufficient disincentives - hard to know what is enough
A model can behave well in the sandbox or on a certain training distribution - then exhibit unexpected safety failures outside the sandbox, in the real world (in out-of-sample contexts)
86.
Unanticipated behaviours - Intended
Attempting AI Safety through exhaustive training & behavioural
testing in sandbox environments.
Failure Mode - Treacherous Turn: “An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let
out of the box. It will only start behaving in a way that reveals its
unfriendly nature when it no longer matters whether we find out; that
is, when the AI is strong enough that human opposition is
ineffectual.”
(see Nick Bostrom - Superintelligence, p. 117)
87.
Treacherous Turns in the wild
Humans follow rules far more often when being observed -
Non-human animals - cats and dogs steal food more often when not being watched…
- DeepMind collected a list of “specification gaming” examples where an AI system (or
subsystem) learns to detect when it’s being monitored/tested and modifies its behavior while it’s
being monitored/tested so that its undesired properties remain undetected, and then it exhibits those
undesired properties when it’s not being monitored/tested - see Specification gaming: the flip side of
AI ingenuity - Deepmind
- demonstrated in a study limiting the replication rate of digital organisms - “the organisms evolved to recognize when they were in the test environment and ‘play dead’ (pause replication) so they would not be eliminated and instead be kept in the population where they could continue to replicate outside the test environment” - see The Surprising Creativity of Digital Evolution - Lehman et al. (2018)
https://lukemuehlhauser.com/treacherous-turns-in-the-wild
88.
Achieving Transparency - The Transparency Trichotomy
A trichotomy of ways to understand an (opaque) model M
1. Transparency via inspection: use transparency tools to
understand M via inspecting the trained model.
2. Transparency via training: incentivize M to be as
transparent as possible as part of the training process.
3. Transparency via architecture: structure M's architecture
such that it is inherently more transparent.
https://www.lesswrong.com/posts/cgJ447adbMAeoKTSt/transparency-trichotomy
89.
AI exhibiting unintended behaviour
Simple Tetris AI pausing the game indefinitely so as not to lose: https://youtu.be/xOCurBYI_gY?t=943
90.
Failing Forward with Explanatory Failure
● Deutsch's interesting view of optimism as explanation of failure - contrast with naively positive predictions
● Understanding failure means making the failure
constructive
● Popper - Hypotheses dying in one’s stead -
learning from failed hypotheses
● Improvement of simulations to understand failure
where hypotheses can be further tested
91.
Zero-shot or ‘pre-shot’ verification
Verification or classification via transparent explanations of AI models, or of source code before execution - which may help catch intended ‘treacherous turns’ or unintended bugs, misrepresentations or other problems.
AI that Understands Itself can Trust Itself
Even if the AI is benign to begin with, if it mutates itself without understanding the knock-on effects, it may metastasize into something non-benign.
If we do end up with an AI that is meant to
be ethical, and it's fundamentally opaque,
how can we know it understands itself
enough to trust itself?
94.
AI that Understands Itself can Trust Itself
AI which is inherently and explicitly interpretable would be able to more accurately predict where and when in its cycles of recursive self-improvement it would become uninterpretable.
95.
Overlapping phases of AI progress
AI progress phases:
A) Not ethically sound phase - the phase in which AI is still
developing a complex and nuanced understanding of human
(and posthuman) values and goals, and ethical frameworks -
and may not be trustworthy
B) Opaque phase - the phase in which the AI is inscrutable
(opaque) - where we have to rely on forming judgements
only on the observed behaviour the AI exhibits
C) Transparent phase - the phase in which the AI is scrutable (transparent), and the important reasons (causal influences) for its strategies and actions are still understandable by humans.
If the ‘Opaque’ phase lasts longer than the ‘Not Ethically Sound’ phase, we are left judging a possibly untrustworthy AI by its observed behaviour alone.
Rice’s Theorem / Halting Problem
Rice's theorem states that all non-trivial semantic properties
of programs are undecidable.
Turing’s halting problem is
the problem of
determining, from a
description of an arbitrary
computer program and an
input, whether the program
will finish running, or
continue to run forever.
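The standard sketch of the argument (not from the slide): assume a total, always-correct halting oracle existed; the program below then contradicts it when fed to itself.

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing showed no such total, correct function can exist."""
    raise NotImplementedError

def paradox(program):
    if halts(program, program):   # if the oracle says we halt...
        while True:               # ...loop forever;
            pass
    return "halted"               # ...otherwise, halt immediately.

# Whatever halts(paradox, paradox) answered, it would be wrong - so the assumed
# oracle cannot exist. Rice's theorem generalises this diagonal argument to every
# non-trivial semantic property of programs.
```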
Machine Understanding
It really matters what kinds of superminds emerge!
Potential for machine understanding (contrast with explainable AI) this decade
or next
- Yoshua Bengio - causal representation learning - Judea Pearl too
Assumptions:
● ethics isn't arbitrary
● strongly informed by real world states of affairs,
● understanding machines will be better positioned than humans to solve currently intractable problems in ethics
Therefore: investigate safe development of understanding machines
- Henk de Regt: what is scientific understanding? (I think machines can do it)
100.
What is understanding? Understanding by Design: Six Facets
● Can explain: provide thorough, supported, and justifiable accounts of
phenomena, facts, and data.
● Can interpret: tell meaningful stories; offer apt translations; provide a revealing
historical or personal dimension to ideas and events; make it personal or
accessible through images, anecdotes, analogies, and models.
● Can apply: effectively use and adapt what is known in diverse contexts.
● Have perspective: see and hear points of view through critical eyes and ears
(or sensors).
● Can empathize: find value in what others might find odd, alien, or implausible;
perceive sensitivity on the basis of prior direct experience.
● Have self knowledge: perceive the personal style, prejudices, and habits of
mind that both shape and impede our own understanding; we are aware of
what we do not understand and why understanding is so hard.
(Source: Wiggins, G. and McTighe, J. (1998) Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.)
101.
What’s missing in modern AI systems?
Deep learning works via mass correlation on big data - great if you have enough control over your input data relative to the problem, and the search space is confined.
*Most of the understanding isn't in the system; it's in the experts who develop the AI systems and interpret the output, and/or in the crystallized understanding in the corpora of training data.
* arguably some aspects of understanding exist in systems, and we may be in the midst of AI gaining real understanding now or in the near future
102.
Training: Black Box vs an Understanding Agent
Because of this opaque nature, we can't see the under-the-hood effects of our training on ML - all we can see is the accuracy of its predictions - so we strive for “predictive accuracy”, training AIs by blind groping towards reward.
Though the AI may be harbouring ‘malevolent motives’ undetectable through behavioural testing - at some stage the AI may then unpredictably make a ‘treacherous turn’.
How can we tell a black box AI hasn’t developed a taste for
wire-heading / reward hacking?
103.
The Importance of Knowing Why
Why is one of the first questions children ask (oft recursively)
New Science of Causal Inference
to help us understand cause and effect
relationships.
Current AI is bad at causal learning - but
really good at correlating across big data.
Judea Pearl believes causal AI will happen in
less than a decade.
104.
Why Machine Understanding?
To Avoid Unintended Consequences!
● Value alignment: where the AI's goals are compatible with our *useful/good values
● Verifiability / Intelligibility: the AI's makeup (source code / framework) is verifiably beneficial (or highly likely to be), and its goals and means are intelligible
● Ethical Standards: AI will have to understand ethics - and its understanding ought to be made intelligible to us
● Efficiency: far less training data and computation - efficiency is required for time-sensitive decisiveness
105.
Why? - Value Alignment
● AI should understand and employ common sense about:
○ What you are asking - did you state your wish poorly?
○ Your motivations - Why you are asking it
○ The context
■ where desirable solutions are not achieved at any cost
■ Is your wish counter productive? Would it harm you or
others?
■ Ex. After summoning a genie, using the 3rd wish to undo
the first 2 wishes. King Midas - badly stated wish.
AI needs to Understand Ethics
Intelligible Ethical Standards
● Rule based systems are fragile - Asimov’s novels
explore failure modes
● Convincing predictive ethical models are not enough - predictions fail, and it's bad when it's not known why
● A notion of cause and effect is imperative in understanding ethical problems - obviously trolley problems :D
108.
Why? - Efficiency
Overhang resulting from far more efficient algorithms - allowing more computation to be done with less hardware, and making use of the liberated compute.
More computation used *wisely could realise wonderful things.
Alternatively, ultra efficient machines could make doomsday devices
more accessible to most (inc. psychopaths)
*contextual wisdom is hard won through understanding
109.
Why? - Wisdom
Decisions are:
● less risky when knowledge is understood, and
● more risky when based on ignorant interpretations of data.
110.
Why Not Machine Understanding?
Machine Understanding could be disastrous if
1. Adding the right ingredients to AI makes it really cheap and easy
to use;
2. AI will be extremely capable - (intelligence is powerful before
and after an intelligence explosion)
3. There don't need to be very many bad eggs or misguided people who intentionally/unintentionally use the AI for diabolical ends for there to be devastating consequences
111.
How - Achieving AI Understanding - Interpretability
Take hints from Lakatos Award winner Henk de Regt's arguments against the common theory that scientific explanation/understanding is achieved simply through building models:
● The scientists building the models need certain skills - scientific understanding is fundamentally a skillful act
● He also emphasises intelligibility in scientific explanation -
which is something sorely lacking in the Deep Learning models
that AI develops
112.
Causal AI - the logical next step
4 rungs:
1) Learning by association (deep learning, massive correlation - AlphaGo; apparently some animals, and sales managers)
   a) Lacks flexibility and adaptability
2) Taking action - “what ifs”, taking actions to influence outcomes; interventions require a causal model
3) Identifying counterfactuals (‘but-for’ causation) - e.g. to a computer, a match and oxygen may play an equal role
4) Confounding bias - associated with the 2nd level: citrus fruits prevent scurvy… but why? For a long time people thought it was the acidity - it turns out it is vitamin C
https://ssir.org/articles/entry/the_case_for_causal_ai
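A small simulated illustration (hypothetical data, not from the cited article) of why rung-1 association differs from rung-2 intervention: a confounder Z drives both X and Y, so X and Y correlate strongly, yet setting X by intervention does nothing to Y.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)               # confounder
x = z + 0.1 * rng.normal(size=z.size)      # X is driven by Z
y = z + 0.1 * rng.normal(size=z.size)      # Y is driven by Z too (not by X)

print(np.corrcoef(x, y)[0, 1])             # ~0.99: strong observational association

x_do = rng.normal(size=z.size)             # do(X): set X independently of Z
print(np.corrcoef(x_do, y)[0, 1])          # ~0: intervening on X leaves Y unchanged
```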
113.
Causal Learning in AI - a key to Machine Understanding.
Opening up a world of possibilities for science.
- Important for humans in making discoveries - if we don't ask why, then we don't make new discoveries
- Big data can't answer causal questions alone - emerging causal AI will tease causal relationships apart from mere correlations, to see which variables are mediators and which are confounders
114.
Causal Representation Learning
Great paper, ‘Towards Causal Representation Learning’ - “A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations.”
Same distribution generalization
vs
Horizontal / Strong / Out-of-distribution
generalization
115.
Causation & Moral Understanding
Consequentialism - the consequences of an act are 1) its causal consequences (computationally tractable), or 2) the whole state of the universe following the act (practically incomputable).
Zero-Shot Learning for Ethics: being able to understand salient causal factors in novel moral quandaries (problems including confounders and out-of-training-sample features, i.e. interventions) increases the chance of AI solving these moral quandaries on the first try, and then explaining how and why the solution worked.
Some problems, including solving ethical Superintelligence, may need to be done correctly on the first try.
Hype Cycles & Recursive Self-Improvement
1. Hype tries to draw attention to growth spurts in AI - often resulting in excited overshoots; after it's over, the hype looks very exaggerated.
2. There have been long time periods between growth
spurts in AI.
3. Strong arguments that at some stage AI will acquire
the capability of recursive self improvement (an
Intelligence Explosion).
This could allow for
a. growth spurts to be faster,
b. extend far further, and
c. occur more frequently.
120.
Distinguishing hype, anti-hype, and useful predictions
1. Evidence-based predictions - based on evidence and reason - falsifiable
2. Expert opinions and consensus forming - track records
3. Prediction specificity - less…
4. Clear, interpretable models that produce the predictions
5. Acknowledgement of uncertainty and risks - hopefully this comes out in the models
121.
Anti-hype backlashes
Hype cycles in AI can ferment unrealistic expectations, leading to dashed hopes and general misunderstandings.
Anti-hype or "hype backlash" - also leading to misunderstandings,
less investment and attention, could slow down useful R&D,
including alignment research.
Epistemic humility requires recognizing limitations and uncertainties
affecting our ability to make accurate predictions.
122.
Full understanding of human intelligence not required for AI
● For most of human history, we
didn’t really understand fire -
but made good use of it
● Today we don’t fully
understand biology, though we
are able to develop effective
medicine
● We don't fully understand the brain - yet have developed intelligent algorithms - some neuromorphic (brain-inspired)
123.
Peaks & Valleys in Cognitive capability
● Peaks of AI cognitive
capabilities have already
crossed over valleys in
human cognitive capabilities.
● Will we see a rising tide in
general AI cognitive
capability?
● Will the valleys in AI cognitive
ability rise above the peaks in
human cognitive capability?
AI vs HLI Ability - Peaks & Troughs
(Chart - not representative of all human or AI abilities)
126.
Turing Tests & Winograd Schemas
● Turing test arguably passed
● A Winograd Schema success rate of 90% would be considered impressive by people in NLP - UPDATE - GPT-3 recently got a score of 88.3% without fine-tuning.
See 'Language Models are Few-Shot Learners'
● Conversational AI getting better - fast (see earlier example)
/ OpenAI’s GPT-2/ now GPT-3 - https://philosopherai.com &
https://play.aidungeon.io
● Also check out NVIDIA's new sensation :D
https://www.zdnet.com/article/nvidias-ai-advance-natural-language-processing-
gets-faster-and-better-all-the-time
128.
What won’t AI be able to do within the next 50 years?
Best predictions?
What Computers Can’t Do - Hubert Dreyfus
○ What Computers Still Can’t Do
■ Make coffee
■ Beat humans at physical sport
■ Write funny jokes?
■ Write poetry?
● and novels
● Formulate creative strategies - the ability to formulate the kind of creative strategies
that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO to lead his or
her company in bold new directions. This isn’t just about analyzing data; it’s about venturing into
unstructured problem-solving tasks, and deciding which pieces of information are relevant and which
can be safely ignored.
https://www.digitaltrends.com/cool-tech/things-machines-computers-still-cant-do-yet/
129.
How to evaluate evidence?
● Trend extrapolation - Black to Dark Box
○ Some views on trend extrapolation favor near term AGI
○ Some views don’t
○ Strange how people carve up what’s useful trend
extrapolation and what isn’t to support their views
● Inside view - White to Translucent Box
○ Weak - tenable - loose qualitative views on issues with
lopsided support
○ Strong - difficult
● Appeal to authority :D
Pick your authority!
130.
Trend Extrapolation: Accelerating Change
● Kurzweil's Graphs
○ Not a discrete event
○ Despite being smooth and continuous, explosive once
curve reaches transformative levels (knee of the curve)
● Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in
one year, and then to 40,000 and then 80,000, it was of interest only to a few
thousand scientists. When ten years later it went from 10 million nodes to 20
million, and then 40 million and 80 million, the appearance of this curve looks
identical (especially when viewed on a log plot), but the consequences were
profoundly more transformative.
● Q: How many devices do you think are connected to the internet today?
131.
This image is old.
A 128GB SanDisk was $30 and a 400GB was $99 on eBay (circa Sept 2019).
A 512GB microSD card was $99 AUD in Jan 2022 at Scorptec.
AI comes at you fast!
“nowhere near solved” - from “A Brief History of AI”, published in January 2021.
https://www.amazon.com/Brief-History-Artificial-Intelligence-Where/dp/1250770742
135.
Despite their briefhistory,
computers and AI have
fundamentally changed what
we see, what we know, and
what we do.
Little is as important for the future of the
world, and our own lives, as how this
history continues.
https://t.co/pD4oeW4YDR
136.
Nvidia predicts AI models one million times more powerful than
ChatGPT within 10 years
https://www.pcgamer.com/nvidia-predicts-ai-models-one-million-times-more-powerful-than-chatgpt-within-10-years/
137.
GPT-4 Beats 90% Of Lawyers Trying To Pass The Bar
“GPT-4 exhibits human-level
performance on the majority of
these professional and academic
exams,” says OpenAI.
138.
Weeeee!
Enjoy the ride!
Math and deep insights (especially probability) can be powerful relative to trend fitting and crude analogies.
Long-term historical trends are weakly suggestive of future events.
BUT - humans aren’t good at
math, and aren’t all that
convinced by it - nice pretty
graphs or animated gifs seem
to do a better job.
Difficulty - convincing humans
is important - since they will
drive the trajectory of AI
development… so convincing
people of good causes based
on less-accurate/less-sound
yet convincing arguments is
important.
139.
Survey on timeline of AI
AI Expert poll in 2016
● 3 years for championship Angry Birds
● 4 years for the World Series of Poker (some poker solved in
2019)
● 6 years for StarCraft (now solved)
● 6 years for folding laundry
● 7–10 years for expertly answering 'easily Googleable' questions
● 8 years for average speech transcription (sort of solved)
● 9 years for average telephone banking (sort of solved)
● 11 years for expert songwriting (sort of solved)
● over 30 years for writing a New York Times bestseller or winning
the Putnam math competition
https://aiimpacts.org/miri-ai-predictions-dataset
140.
2) In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based
on survey results, experts estimate that there’s a 50% chance that AGI will occur before 2060. However,
there’s significant difference of opinion based on geography: Asian respondents expect AGI in 30 years,
whereas North Americans expect it in 74 years.
Some significant job functions that are expected to be automated by 2030 are: Call center reps, truck driving,
retail sales.
Surveys of AI Researchers
1) In 2009, 21 AI experts participating
in AGI-09 conference were surveyed.
Experts believe AGI will occur around
2050, and plausibly sooner. You can
see their estimates regarding specific
AI achievements: passing the Turing
test, passing third grade,
accomplishing Nobel worthy scientific
breakthroughs and achieving
superhuman intelligence.
https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing
141.
2019 - When Will We Reach the Singularity? A Timeline
Consensus from AI Researchers
https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/
142.
Recap…
1. Intelligence is powerful
2. Human Intelligence is a small fraction of a huge space of all possible mind designs
3. We already have AI smarter than HLI (too many examples to list) - you can already see AI outperform HLI - AI has surged past HLI in specific domains
4. An AI that could self-improve could become very powerful very fast
5. Timeline consensuses from AI researchers suggest AGI before 2050 or 2060
3 AI Boosters
1. Overhang resulting from
a. algorithmic performance gains (more efficient algorithms require less computing power per unit of value)
b. or economic gains (e.g. Microsoft's $1 billion investment in OpenAI)
c. cheaper cloud infrastructure (drops in price/performance of already-connected cloud)
2. Self improving systems - feedback loop where gains from
previous iterations boost performance of subsequent iterations
of the feedback loop
3. Economic/Political Will - race to first mover advantage
145.
Hardware Overhang
● new programming methods use available computing power more efficiently.
● new / improved algorithms can exploit existing computing power far more efficiently than previous
suboptimal algorithms…
○ Result: AI/AGI can run on smaller amounts of hardware... This could lead to an intelligence
explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run
on countless computers.
● Examples:
○ In 2010, the President's Council of Advisors on Science and Technology reported on benchmark
production planning model having become faster by a factor of 43 million between 1988 and
2003 - where only a factor of roughly 1,000 was due to better hardware, while a factor of
43,000 came from algorithmic improvements.
○ Sudden advances in capability gains - DeepMind’s AlphaGo, AlphaFold, GPT-2
● Estimates of time to reach computing power required for whole brain emulation: ~a decade away
○ BUT: it is very unlikely that human brain algorithms are anywhere near the most computationally efficient for producing AI.
○ WHY? our brains evolved during a natural selection process and thus weren't deliberately
created with the goal of being modeled by AI.
https://wiki.lesswrong.com/wiki/Computing_overhang
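The PCAST example above as arithmetic - the total speedup factors into hardware and algorithmic gains that multiply together:

```python
hardware_factor  = 1_000      # better hardware, 1988-2003 (figure cited above)
algorithm_factor = 43_000     # better algorithms over the same period
print(hardware_factor * algorithm_factor)   # 43,000,000-fold overall speedup
```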
146.
Economic Gains
In 2016, AI investment was up to $39 billion, 3x the 2013 level.
See McKinsey Report - ARTIFICIAL
INTELLIGENCE THE NEXT DIGITAL FRONTIER?
147.
AI Index 2019 Annual Report
● AI is the most popular area for computer science PhD
specialization, and in 2018, 21% of graduates specialized in
machine learning or AI.
● From 1998 to 2018, peer-reviewed AI research grew by 300%.
● In 2019, global private AI investment was over $70 billion, with
startup investment $37 billion, M&A $34 billion, IPOs $5 billion,
and minority stake $2 billion. Autonomous vehicles led global
investment in the past year ($7 billion), followed by drug and
cancer, facial recognition, video content, fraud detection, and
finance.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
148.
AI Index 2019 Annual Report
● China now publishes as many AI journal and conference papers per
year as Europe, having passed the U.S. in 2006.
● More than 40% of AI conference paper citations are attributed to
authors from North America, and about 1 in 3 come from East Asia.
● Singapore, Brazil, Australia, Canada and India experienced the fastest
growth in AI hiring from 2015 to 2019.
● The vast majority of AI patents filed between 2014-2018 were filed in
nations like the U.S. and Canada, and 94% of patents are filed in
wealthy nations.
● Between 2010 and 2019, the total number of AI papers on arXiv
increased 20 times.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
149.
Kinetics of an Intelligence Explosion
(Chart: intelligence level over time - not a continuous surge upwards. ‘Crossover’ refers to the point where the AI can handle further growth by itself.)
Bill Gates
"Reverse Engineeringthe Brain is Within
Reach" -“I am in the camp that is concerned
about super intelligence,” Bill Gates said
during an “Ask Me Anything” session on
Reddit. “First, the machines will do a lot of jobs
for us and not be super intelligent. That should
be positive if we manage it well. A few
decades after that, though, the intelligence is
strong enough to be a concern. I agree with
Elon Musk and some others on this and don’t
understand why some people are not
concerned.”
152.
Stuart Russell
If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine; they still lost.
The singularity is about the quality of
decision-making, which is not
consciousness at all.
153.
‘Expert’ predictions of Never...
(On Nuclear physics) “The consensus view as expressed by Ernest
Rutherford on September 11th, 1933, was that it would never be
possible to extract atomic energy from atoms. So, his prediction was
‘never,’ but what turned out to be the case was that the next morning
Leo Szilard read Rutherford’s speech, became annoyed by it, and
invented a nuclear chain reaction mediated by neutrons!
Rutherford’s prediction was ‘never’ and the truth was about 16 hours
later. In a similar way, it feels quite futile for me to make a
quantitative prediction about when these breakthroughs in AGI will
arrive.” - Stuart Russell - co-wrote foundational textbook on AI
AI Capability: Optimal
● Tic-Tac-Toe
● Connect Four: 1988
● Checkers (aka 8x8 draughts): Weakly solved (2007)
● Rubik's Cube: Mostly solved (2010)
● Heads-up limit hold'em poker: Statistically optimal in the sense that "a
human lifetime of play is not sufficient to establish with statistical
significance that the strategy is not an exact solution" (2015) - see
https://www.deepstack.ai/
Search spaces very small - though Heads-up limit hold’em poker is interesting - like most other NN
style AI - the majority of the behaviour was learned via self play in a reinforcement learning
environment
https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence#Current_performance
156.
AI Capability: Super-human
● Othello (aka reversi): c. 1997
● Scrabble: 2006
● Backgammon: c. 1995-2002
● Chess: Supercomputer (c. 1997); Personal computer (c.
2006); Mobile phone (c. 2009); Computer defeats human +
computer (c. 2017)
● Jeopardy!: Question answering, although the machine did not
use speech recognition (2011)
● Shogi: c. 2017
● Arimaa: 2015
● Go: 2017 <- AlphaGo - learned how to play from
scratch
● Heads-up no-limit hold'em poker: 2017
State of play in AI Progress - Some areas already Super-human
Board games and some computer games are well defined
problems and easier to measure competence against.
157.
AI Capability: Par-human
● Optical character recognition for ISO 1073-1:1976 and similar special characters.
● Classification of images
● Handwriting recognition
AI Capability: High-human
● Crosswords: c. 2012
● Dota 2: 2018
● Bridge card-playing: According to a 2009 review, "the best programs are
attaining expert status as (bridge) card players", excluding bidding.
● StarCraft II: 2019 <- AlphaStar at grandmaster level
AlphaGo vs AlphaStar - Unsupervised Learning
Wider search spaces - Dota 2 and Starcraft - incomplete
information, long term planning. AlphaStar makes use of
‘autoregressive policies’..
158.
AlphaZero / AlphaStar
If you were a blank slate, and were given a Go board, and were told you had 24 hours to teach yourself to become the best Go player on the planet, and you did exactly that, and then you did the same thing with Chess and Shogi (the latter more complex than Chess), and later StarCraft II, and then protein folding - wouldn't you say that proved you had the ability to adapt to novel situations? Well, AlphaZero did all that, and it didn't use brute force to do it either.
Stockfish is another chess program, but unlike AlphaZero it didn't teach itself - humans did the teaching.
Stockfish could easily beat any human player but it couldn't beat AlphaZero despite the fact
that Stockfish was running on a faster computer and could evaluate 70,000,000 positions a
second while AlphaZero's much smaller computer could only do 80,000.
And AI is becoming less narrow and brittle every day. GO is played on a 19 by 19 grid, but if
you changed it to 20 by 20 or 18 by 18 and gave it another 24 hours to teach itself AlphaZero
would be the best player in the world at that new game too - without any human help.
AlphaZero is not infinitely adaptable, but then humans aren't either.
AI Capability: Sub-human
● Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)
● Object recognition
● Facial recognition: Low to mid human accuracy (as of 2014)
● Visual question answering, such as the VQA 1.0
● Various robotics tasks that may require advances in robot hardware as well as AI, including:
○ Stable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers
(as of 2017)
○ Humanoid soccer
● Speech recognition: "nearly equal to human performance" (2017)
● Explainability. Current medical systems can diagnose certain medical conditions well, but cannot
explain to users why they made the diagnosis.
● Stock market prediction: Financial data collection and processing using Machine Learning
algorithms
● Various tasks that are difficult to solve without contextual knowledge, including:
○ Translation
○ Word-sense disambiguation
○ Natural language processing
Though AI is classed as par or sub-human, for most of these tasks AI can
achieve wider capabilities far faster than humans can.
161.
https://talktotransformer.com
OpenAI’s GPT-2
Better Language Models
and Their Implications
“We’ve trained a large-scale unsupervised language
model which generates coherent paragraphs of text,
achieves state-of-the-art performance on many
language modeling benchmarks, and performs
rudimentary reading comprehension, machine
translation, question answering, and
summarization—all without task-specific training.”
Models can be used to generate data by successively
guessing what will come next, feeding in a guess as
input and guessing again. Language models, where
each word is predicted from the words before it, are
perhaps the best known example: these models power
the text predictions that pop up on some email and
messaging apps. Recent advances in language
modelling have enabled the generation of strikingly
plausible passages, such as OpenAI’s GPT-2.
Below is an example of a cut down version of GPT-2
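Separately from the GPT-2 sample referred to above, here is a minimal toy sketch of the autoregressive loop just described: each next word is predicted from the words before it, the guess is fed back in, and prediction runs again. A tiny bigram table stands in for a trained language model - this is an illustration of the mechanism, nothing like GPT-2's scale.

```python
import random

corpus = "the cat sat on the mat and the cat slept on the mat".split()
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)     # toy "model": next-word counts

def generate(prompt, n_words=8):
    words = prompt.split()
    for _ in range(n_words):
        candidates = bigrams.get(words[-1], corpus)  # predict from the last word only
        words.append(random.choice(candidates))      # feed the guess back in as input
    return " ".join(words)

print(generate("the cat"))
```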
162.
'There's a Wide-Open Horizon of Possibility.' Musicians Are
Using AI to Create Otherwise Impossible New Songs
https://time.com/5774723/ai-music/
163.
Recap..
1. Intelligence is powerful
2. Human Intelligence is a small fraction of the space of all possible mind designs
3. An AI that could self improve could become very powerful very fast
4. Look for trends as well as inside views to help gauge likelihood
5. Some AI is becoming more general (AGI)
○ Unsupervised learning
○ Transfer learning
6. Experts in AI research think that it is likely AGI could be developed
by mid century
7. We already have AI smarter than HLI (too many examples to list)
○ AI has surged past HLI in specific domains
Accelerating returns - a shortening of timescales between salient events…
(Figure: events plotted with Time before Present (years) on the X axis and Time to Next Event (years) on the Y axis, logarithmic plot; early life ~4 billion years ago.)
167.
Evolutionary History of Mind
Daniel Dennett's Tower of Generate and Test
● "Darwinian creatures", which were simply selected by trial and error on the merits of
their bodies' ability to survive; its "thought" processes being entirely genetic.
● "Skinnerian creatures", which were also capable of independent action and therefore
could enhance their chances of survival by finding the best action (conditioning
overcame the genetic trial and error of Darwinian creatures); phenotypic plasticity; bodily
tribunal of evolved, though sometimes outdated wisdom.
● "Popperian creatures", which can play an action internally in a simulated environment
before they perform it in the real environment and can therefore reduce the chances of
negative effects - allows "our hypotheses to die in our stead" - Popper
● "Gregorian creatures" are tools-enabled, in particular they master the tool of language. -
use tools (e.g. words) - permits learning from others. The possession of learned
predispositions to carry out particular actions
168.
Evolutionary History of Mind
● To add to Dennett's metaphor: we are building a new floor on the tower
● "Turingian creatures", that use tools-enabled
tool-enablers, in particular they create autonomous
agents to create even greater tools (e.g. minds with
further optimal cognitive abilities) - permits insightful
engineering of increasingly insightful agents,
resulting in an Intelligence Explosion.
169.
References
● "Kinds ofMinds" - Daniel Dennit :
http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514
● Cascades, Cycles, Insight...: http://lesswrong.com/lw/w5/cascades_cycles_insight
● Recursive Self Improvement: http://lesswrong.com/lw/we/recursive_selfimprovement
● Terry Sejnowski : http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours
● Nanotech : http://e-drexler.com
● Justin Rattner - Intel touts progress towards intelligent computers:
http://news.cnet.com/8301-1001_3-10023055-92.html#ixzz1G9Iy1cXe
● Bill Gates speaking about AI Risk on Reddit:
https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third
● Superintelligence: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
● Smarter Than Us: http://lesswrong.com/lw/jrb/smarter_than_us_is_out
170.
Understanding Understanding
● Philosophy / Epistemology
● Human Understanding
● Machine Understanding
● Understanding & General Intelligence
● AI Impacts / Opportunities & Risks
● Ethics & Zero Shot Solving of Moral Quandaries
A conference is in planning for some time after coronavirus restrictions ease - let me know if you are interested - tech101 [at] gmail [dot] com • Twitter: @adam_ford • YouTube: YouTu.be/TheRationalFuture