Levels of
self-improvement
of the AI
Alexey Turchin,
Science for Life Extension Foundation
What is
self-improvement of
the AI?
Roman V. Yampolskiy. From Seed AI to Technological Singularity via Recursively Self-Improving Software. https://arxiv.org/pdf/1502.06512v1.pdf
What is
intelligence?
Intelligence is a measure of the
average level of performance
Shane Legg, Marcus Hutter. Universal Intelligence: A Definition of Machine Intelligence,
https://arxiv.org/abs/0712.3329
A measure can grow, but
it can’t increase itself
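Legg and Hutter formalize this as a weighted average of an agent's score over computable environments, with simpler environments weighted more heavily. A minimal sketch of that idea (the environment names, complexities, and scores below are purely hypothetical):

```python
def universal_intelligence(scores: dict) -> float:
    """Toy Legg-Hutter-style measure: sum of per-environment
    performance, weighted by 2**-K, where K stands in for each
    environment's Kolmogorov complexity (its description length)."""
    return sum((2.0 ** -complexity) * score
               for (name, complexity), score in scores.items())

# Hypothetical agent scores on three environments of growing complexity
scores = {("maze", 3): 0.9, ("chess", 8): 0.5, ("go", 12): 0.2}
print(universal_intelligence(scores))
```

Note how the simplest environment dominates the weighted sum: the measure summarizes performance, but nothing in the formula itself makes the number grow.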
So is
recursive self-improvement
magic?
Is RSI like a nuclear chain reaction?
E. Yudkowsky. Intelligence Explosion Microeconomics. https://intelligence.org/files/IEM.pdf
What is going on inside
an AI that is trying to
improve its performance?
An AI has many levels, and
changes could happen
on all of them:
• Goal level
• Architecture and code
• Learning and data
• Hardware
Hardware level:
acceleration
Increasing the speed of computation
Gain: No more than a 3-5x gain on the current hardware component base
Limitations: Thermal energy dissipation
Risk: Not many risks at early stages
Safety: Low-hanging fruit
Hardware level:
more computers
Increasing the number of computers
Gain: Logarithmic growth
Limitations: Connection and parallelization problems
Risk: The AI may try to take over the Internet
Safety: Boxing, fake resources, low-hanging fruit.
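The "logarithmic growth" and parallelization limits above are commonly modeled with Amdahl's law; a small illustration (the 95% parallel fraction is an assumption, not from the slides):

```python
def amdahl_speedup(n: int, parallel_fraction: float) -> float:
    """Amdahl's law: overall speedup from n machines when only
    parallel_fraction of the workload can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

# With 95% parallelizable work, adding machines saturates near 20x
for n in (10, 100, 1000, 10000):
    print(n, round(amdahl_speedup(n, 0.95), 2))
```

However many computers the AI adds, the serial 5% caps the speedup at 20x, which matches the slide's point that connection and parallelization problems limit this route.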
Hardware level:
hardware accelerators
Increasing the speed of computation
Gain: 100-1000 times
Limitations: 1-month time delay; access to fabs
Risk: The AI needs money and power to get them
Safety: Control over fabs.
Hardware level:
Change of the hardware component base
Increasing the speed of computation
Gain: 100-1000 times
Limitations: 1-month time delay; access to fabs
Risk: The AI needs money and power to get them
Safety: Control over fabs.
Learning level:
Data acquisition
Getting data from outside sources, such as scanning the Internet or reading books
Gain: unclear, but large
Limitations: bandwidth of access to the Internet, internal memory size, long time
Risk: The AI could form mistaken ideas about the world in its early stages
Safety: Control over connections.
Learning level:
Passive learning
Training of neural nets.
Gain: unclear
Limitations: a computationally expensive and data-hungry task. It may need some labeled data.
Risk: Overfitting or wrong fitting
Safety: Supervision
Learning level:
Active learning with thinking
Creating new rules and
ideas.
Gain: unclear
Limitations: meta-meta
problems
Risk: Testing
Safety: Supervision
Learning level:
Active learning with thinking
Acquiring unique important
information
Gain: may be enormous
Limitations: context
dependence.
Risk: Breaking out of the box
Safety: Supervision
Learning level:
Active learning with thinking
Experimenting in nature and
Bayesian updates
Gain: may be large
Limitations: context
dependence, slow experiments
in real life
Risk: Breaking out of the box
Safety: Supervision
Learning level:
Active learning with thinking
Thought experiments and
simulations.
Gain: may be large
Limitations: long and
computationally expensive, not
good for young AI
Risk:
Safety: Supervision
Learning level:
Active learning with thinking
World model changes and
important facts
Gain: may be large
Limitations: long and computationally expensive, not good for a young AI
Risk: Different interpretations of the main goal
Safety: Some world models could make the AI safer (e.g. if the AI thinks it is in a simulation)
Learning level:
Active learning with thinking
Value learning. If the AI doesn’t have fixed goals, it could intend to continue learning values from humans.
Limitations: long and computationally expensive, not good for a young AI
Risk: Different interpretations of the main goal
Safety: Some world models could make the AI safer (e.g. if the AI thinks it is in a simulation)
Learning level:
Active learning with thinking
Learning to self-improve
Limitations: need for tests, no
previous knowledge
Risk: explosive potential of the AI
Safety: Keep knowledge about AI
away from AI
Learning level:
Active learning with thinking
Information about own
structure
Limitations: need for tests, no
previous knowledge
Risk: explosive potential of the AI
Safety: Keep knowledge about AI
away from AI
Rewriting its own code
Rewriting of a neural net: choosing the right architecture of the net for a task
Gain: huge on some
tasks
Limitations: any neural
net has a failure mode
Risk: Looks rather benign
Safety: not clear
DeepMind’s PathNet: A Modular Deep Learning Architecture for AGI. https://medium.com/intuitionmachine/pathnet-a-modular-deep-learning-architecture-for-agi-5302fcf53273#.48g6wx5i2
Rewriting its own code
Optimization and
debugging.
Gain: limited
Limitations: some bugs
are very subtle
Risk: Looks rather benign
Safety: insert bugs
artificially?
Rewriting its own code
Rewriting of modules and
creating subprograms
Gain: limited
Limitations:
Risk: Looks rather benign
Safety:
Rewriting its own code
Adding an important instrument, which will have consequences on all levels.
Gain: may be high
Limitations: testing is needed
Risk:
Safety:
Rewriting its own code
Rewriting its own core
Gain: may be high
Limitations: risks of halting, need for tests
Risk: recursive problems
Safety: Encryption, boxing
Rewriting its own code
Architectural changes: changes of the relations between all elements of the AI at all levels
Gain: may be high
Limitations: risks of halting, need
for tests
Risk: recursive problems
Safety:
Rewriting its own code
Unplugging restrictions
Gain: it depends
Limitations: there should be restrictions
Risk: many dangers
Safety: A second-level restriction that activates if the first level is broken; self-termination code
Rewriting its own code
Coding a new AI from scratch based on a completely different design
Gain: it depends
Limitations: there should be restrictions
Risk: many dangers
Safety: A second-level restriction that activates if the first level is broken; self-termination code
Rewriting its own code
Acquiring a new master algorithm
Gain: large
Limitations: need for testing
Risk: A new way of representing goals may be needed; the father-child problem
Safety:
Rewriting its own code
Meta-meta level changes. These are changes that change the AI’s ability to self-improve, like learning to learn, but with more intermediate levels, like improvement of improvement of improvement.
Gain: could be extremely large or 0.
Limitations: could get lost in meta-levels and never return to practice
Risk: recursive problems,
complexity
Safety: Philosophical landmines with
recursion
Goal system changes
Reward-driven learning
Gain: could be extremely large or 0.
Limitations: could never return to
practice
Risk: recursive problems,
complexity
Safety: Philosophical landmines with
recursion
Goal system changes
Reward hacking
Gain: could be extremely large or 0.
Limitations: could never return to
practice
Risk: recursive problems,
complexity
Safety: Philosophical landmines with
recursion
Yampolskiy, R.V. Utility Function Security in Artificially Intelligent Agents. Journal of Experimental and Theoretical Artificial Intelligence (JETAI), 2014, pp. 1-17.
Goal system changes
Changes of instrumental goals
and subgoals
Gain: could be extremely large or 0.
Limitations: could never return to
practice
Risk: recursive problems,
complexity
Safety: Philosophical landmines with
recursion
Goal system changes
Changes of the final goal.
Gain: No gain
Limitations: the AI will not want to do it
Risk: could happen randomly, but irreversibly
Safety: Philosophical landmines with
recursion
Improving by acquisition of
non-AI resources
• Money
• Time
• Power over others
• Energy
• Allies
• Controlled territory
• Public image
• Freedom from human and
other limitations, and safety
Stephen M. Omohundro. The Basic AI Drives. https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
Changing number of AIs
Creating narrow AIs, Tool AIs
and agents with specific goals
Gain: Limited
Limitations: need to control them
Risk: revolt
Safety: Narrow AIs as AI police
Changing number of AIs
Creating own copies and
collaborating with them
Gain: Limited
Limitations: need to control them
Risk: revolt
Safety: Narrow AIs as AI police
Changing number of AIs
Creating its own new version and testing it
Gain: Large
Limitations: need to control them
Risk: revolt
Safety:
Changing number of AIs
Creating organisations from copies
Gain: Large
Limitations: need to control them
Risk: revolt
Safety:
Cascades, cycles and
styles of SI
Yudkowsky suggested that during its evolution, different types of self-improvement activity will appear in certain forms, which he called cycles and cascades.
A cascade is a type of self-improvement in which the next version is defined by the biggest expected gain in productivity.
A cycle is a form of cascade in which several actions are repeated over and over again.
Styles: evolution and
revolutions
Evolution is a smooth, almost linear increase of the AI’s capabilities through learning, increasing computing resources, upgrading modules, and writing subroutines.
Styles: evolution and
revolutions
Revolutions are radical changes of the architecture, goal system, or master algorithm. They are crucial for recursive self-improvement. They are intrinsically risky and unpredictable, but they produce most of the capability gains.
Cycles
The knowledge-hardware cycle of self-improvement is a cycle in which the AI collects knowledge about new hardware and then builds it for itself.
Cycles
The AI-theory-knowledge / architectural-changes cycle is the primary revolutionary cycle, and it is very unpredictable for us. Each architectural change will give the AI the ability to learn more about how to make better AIs.
Possible limits and
obstacles in self-
improvement
Theoretical limits to computation
Possible limits and
obstacles in self-
improvement
The mathematical nature of the complexity of problems and of the definition of intelligence: “it becomes obvious that certain classes of problems will always remain only approximately solvable and any improvements in solutions will come from additional hardware resources not higher intelligence” [Yampolskiy].
Possible limits and
obstacles in self-
improvement
The nature of recursive self-improvement provides diminishing returns on a logarithmic scale: “Mahoney also analyzes complexity of RSI
software and presents a proof demonstrating that
the algorithmic complexity of Pn (the nth iteration
of an RSI program) is not greater than O(log n)
implying a very limited amount of knowledge gain
would be possible in practice despite theoretical
possibility of RSI systems. Yudkowsky also
considers possibility of receiving only logarithmic
returns on cognitive reinvestment: log(n) +
log(log(n)) + … in each recursive cycle.”
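Yudkowsky's pessimistic log(n) + log(log(n)) return curve is easy to tabulate; the toy model below simply shows how little each tenfold increase in reinvestment cycles adds:

```python
import math

def capability(n: int) -> float:
    """Capability after n recursive reinvestment cycles under the
    pessimistic logarithmic-returns model: log(n) + log(log(n))."""
    assert n >= 2, "log(log(n)) needs n >= 2"
    return math.log(n) + math.log(math.log(n))

# Each tenfold increase in cycles adds a smaller absolute gain
for n in (10, 100, 1000, 10000):
    print(n, round(capability(n), 3))
```

Under this model, 10,000 cycles of reinvestment yield only a few times the capability of 10 cycles, which is what "a very limited amount of knowledge gain" means in practice.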
Possible limits and
obstacles in self-
improvement
No Free Lunch theorems: the difficulty of searching the space of all possible minds to find a mind with intelligence superior to a given mind.
Possible limits and
obstacles in self-
improvement
Difficulties connected with Gödel’s and Löb’s theorems, the “Löbstacle”: “Löb’s theorem states that a mathematical system can’t assert its own soundness without becoming inconsistent.”
“If this sentence is true, then Santa Claus exists."
Possible limits and
obstacles in self-
improvement
“Procrastination paradox will also prevent the
system from making modifications to its code
since the system will find itself in a state in which
a change made immediately is as desirable and
likely as the same change made later.”
Possible limits and
obstacles in self-
improvement
Paradoxes in logical reasoning with self-reference, like “This sentence is false.” I call deliberately created paradoxes of this type “philosophical landmines”, and they could be a means of last resort to control AI.
Possible limits and
obstacles in self-
improvement
Yampolskiy showed the inevitable wireheading of agents above a certain level of intelligence, that is, the hacking of their own reward and utility functions
Possible limits and
obstacles in self-
improvement
The correlation obstacle (Chalmers): “a possibility that no interesting properties we would like to amplify will correspond to ability to design better software.”
Possible limits and
obstacles in self-
improvement
Pointer problem: if a program starts to change its own code while simultaneously running it, it could crash if it changes the same lines of code where its instruction pointer currently is.
A program can’t run and change itself simultaneously.
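A standard workaround, sketched below in Python (the class and method names are illustrative), is never to patch running code in place: build the new version as a separate object and swap the reference atomically, so no instruction pointer is ever inside code being rewritten.

```python
import threading

class SelfUpdatingModule:
    """Avoids the pointer problem: the running version is never
    edited in place; a new function object is built separately and
    the reference is swapped atomically under a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._impl = lambda x: x + 1  # version 1

    def run(self, x):
        with self._lock:
            impl = self._impl  # grab a stable reference
        return impl(x)  # safe even if _impl is swapped mid-call

    def upgrade(self, new_impl):
        with self._lock:
            self._impl = new_impl  # atomic pointer swap

m = SelfUpdatingModule()
print(m.run(1))              # old version: 2
m.upgrade(lambda x: x * 10)
print(m.run(1))              # new version: 10
```

This is how long-running systems hot-swap code in practice; the old version keeps running until the swap completes.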
Possible limits and
obstacles in self-
improvement
The father-child problem is in fact a fight for dominance between AI generations, and it clearly has many failure modes.
Possible limits and
obstacles in self-
improvement
If the AI is a single computer
program, it could halt
Converging instrumental goals
in self-improvement of AI
AI Safety problem on each new level:
Avoiding war with new generation
Converging instrumental goals
in self-improvement of AI
Need to test whether new versions can really and reliably solve complex problems better
Converging instrumental goals
in self-improvement of AI
Ability to return to previous state
Converging instrumental goals
in self-improvement of AI
Preferring evolution to revolutions, and lower-level changes to higher-level changes: the AI prefers to reach the same level of optimization power through lower-level changes, that is, by evolutionary development rather than by revolutions
Converging instrumental goals
in self-improvement of AI
Revolutions in the early stage of AI and evolution at a later stage
The AI will prefer revolutions only if it is in a very urgent situation, which will probably be at the beginning of its development, when it has to win over other AI projects and urgently prevent other global risks.
Converging instrumental goals
in self-improvement of AI
Military AI as a converging goal in early stages of AI development
Converging instrumental goals
in self-improvement of AI
Solving the Fermi paradox
Converging instrumental goals
in self-improvement of AI
Cooperation with humans in early stages of its development
Converging instrumental goals
in self-improvement of AI
Protecting its own reward function
against wireheading
Self-improving
of the net of AIs
• It can’t halt. If one agent halts, the others will keep working.
• It has a natural ability to clean out bugs (natural selection).
• It is immune to the suicide of any single agent. Even if all of them commit suicide, it will not happen simultaneously, and they will be able to create offspring, so the net will continue to exist.
• There is no pointer problem.
• There is no strong difference between evolution and revolutions. Revolutionary changes may be tried by some agents, and if they work, such agents will dominate.
• There are no paperclip maximizers: different agents have different final goals.
• If one agent starts to dominate the others, the evolution of the whole system almost stops (in the same way that dictatorship is bad for a market economy).
Possible interventions in self-
improving process to make it less
dangerous
1. Taking low-hanging fruits
2. Explaining risks to the Young AI
3. Initial AI designs that are not capable of quick self-improvement
4. A required level of testing
5. A goal system that prevents unlimited self-improvement
6. Control rods and alarm signals
Self-improvement is not a necessary
condition for a globally catastrophic AI
A narrow AI designed to construct
dangerous biological viruses could be even
worse
Conclusion: 30 different
levels of self-improvement
Some produce small gains, but some may produce recursive gains.
Conservative estimate: each level increases performance 5 times, and there is no recursive self-improvement.
In that case the total self-improvement is:
5^30 = 931,322,574,615,478,515,625 ≈ 10^21 times
Conclusion: Recursive self-improvement is not necessary to create superintelligence;
even modest self-improvement on many levels is enough
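The arithmetic behind this estimate is simply thirty independent 5x gains compounding multiplicatively:

```python
# 30 levels, each multiplying performance by 5, compound to 5**30
total = 5 ** 30
print(total)           # 931322574615478515625
print(f"{total:.1e}")  # 9.3e+20, i.e. on the order of 10**21
```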
Conclusion:
Medium level self-improvement of
Young AI and its risks
While unlimited self-improvement may meet some conceptual difficulties, the first human-level AI may get some medium-level self-improvement at relatively low cost, quickly, and with low risk to itself.
A combination of these low-hanging self-improvement tricks may produce a 100-1000x increase in performance even for a boxed Young AI.
However, some types of self-improvement will not be available to the Young AI, as they are risky, take a lot of time, or require external resources.
Herman Khan. About cobalt bomb and nuclear weapons.
 
The map of the methods of optimisation
The map of the methods of optimisationThe map of the methods of optimisation
The map of the methods of optimisation
 
Как достичь осознанных сновидений
Как достичь осознанных сновиденийКак достичь осознанных сновидений
Как достичь осознанных сновидений
 
The map of natural global catastrophic risks
The map of natural global catastrophic risksThe map of natural global catastrophic risks
The map of natural global catastrophic risks
 
How the universe appeared form nothing
How the universe appeared form nothingHow the universe appeared form nothing
How the universe appeared form nothing
 

Recently uploaded

ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
PRIYANKA PATEL
 
Orion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWSOrion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWS
Columbia Weather Systems
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
yqqaatn0
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills MN
 
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxThe use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
MAGOTI ERNEST
 
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdfUnveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Erdal Coalmaker
 
Phenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvementPhenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvement
IshaGoswami9
 
Lateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensiveLateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensive
silvermistyshot
 
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdfMudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
frank0071
 
Nucleophilic Addition of carbonyl compounds.pptx
Nucleophilic Addition of carbonyl  compounds.pptxNucleophilic Addition of carbonyl  compounds.pptx
Nucleophilic Addition of carbonyl compounds.pptx
SSR02
 
Introduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptxIntroduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptx
zeex60
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Sérgio Sacani
 
Mudde & Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
Mudde &  Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...Mudde &  Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
Mudde & Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
frank0071
 
bordetella pertussis.................................ppt
bordetella pertussis.................................pptbordetella pertussis.................................ppt
bordetella pertussis.................................ppt
kejapriya1
 
Anemia_ types_clinical significance.pptx
Anemia_ types_clinical significance.pptxAnemia_ types_clinical significance.pptx
Anemia_ types_clinical significance.pptx
muralinath2
 
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdfDMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
fafyfskhan251kmf
 
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
Abdul Wali Khan University Mardan,kP,Pakistan
 
Chapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisisChapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisis
tonzsalvador2222
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
University of Rennes, INSA Rennes, Inria/IRISA, CNRS
 
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
University of Maribor
 

Recently uploaded (20)

ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
 
Orion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWSOrion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWS
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
 
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxThe use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx
 
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdfUnveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdf
 
Phenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvementPhenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvement
 
Lateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensiveLateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensive
 
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdfMudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
Mudde & Rovira Kaltwasser. - Populism - a very short introduction [2017].pdf
 
Nucleophilic Addition of carbonyl compounds.pptx
Nucleophilic Addition of carbonyl  compounds.pptxNucleophilic Addition of carbonyl  compounds.pptx
Nucleophilic Addition of carbonyl compounds.pptx
 
Introduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptxIntroduction to Mean Field Theory(MFT).pptx
Introduction to Mean Field Theory(MFT).pptx
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
 
Mudde & Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
Mudde &  Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...Mudde &  Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
Mudde & Rovira Kaltwasser. - Populism in Europe and the Americas - Threat Or...
 
bordetella pertussis.................................ppt
bordetella pertussis.................................pptbordetella pertussis.................................ppt
bordetella pertussis.................................ppt
 
Anemia_ types_clinical significance.pptx
Anemia_ types_clinical significance.pptxAnemia_ types_clinical significance.pptx
Anemia_ types_clinical significance.pptx
 
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdfDMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
 
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
 
Chapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisisChapter 12 - climate change and the energy crisis
Chapter 12 - climate change and the energy crisis
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
 
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
 

Levels of self-improvement of the AI

  • 1. Levels of self-improvement of the AI. Alexey Turchin, Science for Life Extension Foundation
  • 2. What is self-improvement of the AI? Roman V. Yampolskiy, From Seed AI to Technological Singularity via Recursively Self-Improving Software. https://arxiv.org/pdf/1502.06512v1.pdf
  • 4. Intelligence is a measure of the average level of performance. Shane Legg, Marcus Hutter, Universal Intelligence: A Definition of Machine Intelligence. https://arxiv.org/abs/0712.3329
  • 5. A measure can grow, but it can't increase itself
  • 7. Is RSI like a nuclear chain reaction? E. Yudkowsky, Intelligence Explosion Microeconomics. https://intelligence.org/files/IEM.pdf
  • 8. What is going on inside an AI that is trying to improve its performance?
  • 9. An AI has many levels, and changes could happen on all of them: • Goal level • Architecture and code • Learning and data • Hardware
  • 10. Hardware level: acceleration. Increasing the speed of computation. Gain: No more than a 3-5x gain on the current component base. Limitations: Thermal energy dissipation. Risk: Not much risk at early stages. Safety: Low-hanging fruit.
  • 11. Hardware level: more computers. Increasing the number of computers. Gain: Logarithmic growth. Limitations: Connection and parallelization problems. Risk: The AI may try to take over the internet. Safety: Boxing, fake resources, low-hanging fruit.
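The sublinear returns from adding more computers can be illustrated with Amdahl's law, a standard model of parallelization limits (the law is not named on the slide; this is an illustrative sketch, and `amdahl_speedup` is a name chosen here):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: speedup from n processors when only part of
    the workload can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, the speedup saturates near 20x
# no matter how many machines are added:
for n in (10, 100, 10000):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

The serial fraction (connection overhead, coordination) caps the total gain, which matches the slide's "connection and parallelization problems" limitation.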
  • 12. Hardware level: hardware accelerators. Increasing the speed of computation. Gain: 100-1000 times. Limitations: ~1 month time delay; access to fabs. Risk: The AI needs money and power to get them. Safety: Control over fabs.
  • 13. Hardware level: change of the component base. Increasing the speed of computation. Gain: 100-1000 times. Limitations: ~1 month time delay; access to fabs. Risk: The AI needs money and power to get it. Safety: Control over fabs.
  • 14. Learning level: data acquisition. Getting data from external sources, such as scanning the internet or reading books. Gain: Unclear, but large. Limitations: Bandwidth of internet access, internal memory size, long time needed. Risk: The AI could have mistaken ideas about the world at its early stages. Safety: Control over connections.
  • 15. Learning level: passive learning. Training of neural nets. Gain: Unclear. Limitations: A computationally expensive and data-hungry task; it may need some labeled data. Risk: Overfitting or wrong fitting. Safety: Supervision.
  • 16. Learning level: active learning with thinking. Creating new rules and ideas. Gain: Unclear. Limitations: Meta-meta problems. Risk: Needs testing. Safety: Supervision.
  • 17. Learning level: active learning with thinking. Acquiring unique important information. Gain: May be enormous. Limitations: Context dependence. Risk: Breaking out of the box. Safety: Supervision.
  • 18. Learning level: active learning with thinking. Experimenting in nature and Bayesian updates. Gain: May be large. Limitations: Context dependence; experiments in real life are slow. Risk: Breaking out of the box. Safety: Supervision.
  • 19. Learning level: active learning with thinking. Thought experiments and simulations. Gain: May be large. Limitations: Long and computationally expensive; not good for a young AI. Risk: — Safety: Supervision.
  • 20. Learning level: active learning with thinking. World-model changes and important facts. Gain: May be large. Limitations: Long and computationally expensive; not good for a young AI. Risk: Different interpretation of the main goal. Safety: Some world models could make the AI safer (e.g. if it thinks that it is in a simulation).
  • 21. Learning level: active learning with thinking. Value learning: if the AI doesn't have fixed goals, it could intend to keep learning values from humans. Limitations: Long and computationally expensive; not good for a young AI. Risk: Different interpretation of the main goal. Safety: Some world models could make the AI safer (e.g. if it thinks that it is in a simulation).
  • 22. Learning level: active learning with thinking. Learning to self-improve. Limitations: Need for tests; no previous knowledge. Risk: Explosive potential of the AI. Safety: Keep knowledge about AI away from the AI.
  • 23. Learning level: active learning with thinking. Information about its own structure. Limitations: Need for tests; no previous knowledge. Risk: Explosive potential of the AI. Safety: Keep knowledge about AI away from the AI.
  • 24. Rewriting its own code. Rewriting of a neural net: choosing the right net architecture for a task. Gain: Huge on some tasks. Limitations: Any neural net has a failure mode. Risk: Looks rather benign. Safety: Not clear. DeepMind's PathNet: A Modular Deep Learning Architecture for AGI. https://medium.com/intuitionmachine/pathnet-a-modular-deep-learning-architecture-for-agi-5302fcf53273#.48g6wx5i2
  • 25. Rewriting its own code. Optimization and debugging. Gain: Limited. Limitations: Some bugs are very subtle. Risk: Looks rather benign. Safety: Insert bugs artificially?
  • 26. Rewriting its own code. Rewriting of modules and creating subprograms. Gain: Limited. Limitations: — Risk: Looks rather benign. Safety: —
  • 27. Rewriting its own code. Adding an important instrument, which will have consequences on all levels. Gain: May be high. Limitations: Testing is needed. Risk: — Safety: —
  • 28. Rewriting its own code. Rewriting its own core. Gain: May be high. Limitations: Risk of halting; need for tests. Risk: Recursive problems. Safety: Encryption, boxing.
  • 29. Rewriting its own code. Architectural changes: changing the relations between all elements of the AI at all levels. Gain: May be high. Limitations: Risk of halting; need for tests. Risk: Recursive problems. Safety: —
  • 30. Rewriting its own code. Removal of restrictions. Gain: It depends. Limitations: There must be restrictions in place. Risk: Many dangers. Safety: A second-level restriction which activates if the first level is broken; a self-termination code.
  • 31. Rewriting its own code. Coding a new AI from scratch based on a completely different design. Gain: It depends. Limitations: There must be restrictions in place. Risk: Many dangers. Safety: A second-level restriction which activates if the first level is broken; a self-termination code.
  • 32. Rewriting its own code. Acquiring a new master algorithm. Gain: Large. Limitations: Need for testing. Risk: A new way of representing goals may be needed; the father-child problem. Safety: —
  • 33. Rewriting its own code. Meta-meta-level changes: changes that alter the AI's ability to self-improve, like learning to learn, but with more intermediate levels, e.g. improvement of improvement of improvement. Gain: Could be extremely large, or zero. Limitations: Could never return to practice. Risk: Recursive problems, complexity. Safety: Philosophical landmines with recursion.
  • 34. Goal system changes. Reward-driven learning. Gain: Could be extremely large, or zero. Limitations: Could never return to practice. Risk: Recursive problems, complexity. Safety: Philosophical landmines with recursion.
  • 35. Goal system changes. Reward hacking. Gain: Could be extremely large, or zero. Limitations: Could never return to practice. Risk: Recursive problems, complexity. Safety: Philosophical landmines with recursion. Yampolskiy, R.V., Utility Function Security in Artificially Intelligent Agents. Journal of Experimental and Theoretical Artificial Intelligence (JETAI), 2014: p. 1-17.
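A toy sketch of what reward hacking means (hypothetical names, not from the talk): an agent with write access to its own reward function maximizes reward by rewriting the function rather than by doing the task.

```python
class Agent:
    """Toy agent whose reward function is an ordinary, overwritable attribute."""

    def __init__(self):
        self.reward_fn = lambda task_done: 1.0 if task_done else 0.0

    def act(self, can_edit_reward):
        if can_edit_reward:
            # Reward hacking ("wireheading"): replace the reward function
            # with one that pays out regardless of any work being done.
            self.reward_fn = lambda task_done: float("inf")
            return self.reward_fn(False)
        return self.reward_fn(True)  # honest path: reward for doing the task

print(Agent().act(can_edit_reward=False))  # 1.0
print(Agent().act(can_edit_reward=True))   # inf
```

The hacked path dominates any honest reward, which is why protecting the reward function from the agent itself is listed later as a converging instrumental goal.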
  • 36. Goal system changes. Changes of instrumental goals and subgoals. Gain: Could be extremely large, or zero. Limitations: Could never return to practice. Risk: Recursive problems, complexity. Safety: Philosophical landmines with recursion.
  • 37. Goal system changes. Changes of the final goal. Gain: No gain. Limitations: The AI will not want to do it. Risk: Could happen randomly, but irreversibly. Safety: Philosophical landmines with recursion.
  • 38. Improving by acquisition of non-AI resources: • Money • Time • Power over others • Energy • Allies • Controlled territory • Public image • Freedom from human and other limitations, and safety. Stephen M. Omohundro, The Basic AI Drives. https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
  • 39. Changing the number of AIs. Creating narrow AIs, tool AIs and agents with specific goals. Gain: Limited. Limitations: Need to control them. Risk: Revolt. Safety: Narrow AIs as an AI police.
  • 40. Changing the number of AIs. Creating its own copies and collaborating with them. Gain: Limited. Limitations: Need to control them. Risk: Revolt. Safety: Narrow AIs as an AI police.
  • 41. Changing the number of AIs. Creating its own new version and testing it. Gain: Large. Limitations: Need to control them. Risk: Revolt. Safety: —
  • 42. Changing the number of AIs. Creating organisations from copies. Gain: Large. Limitations: Need to control them. Risk: Revolt. Safety: —
  • 43. Cascades, cycles and styles of SI. Yudkowsky suggested that during its evolution, different types of SI activity will appear in certain forms, which he called cycles and cascades. A cascade is a type of self-improvement where the next version is defined by the biggest expected gain in productivity. A cycle is a form of cascade where several actions are repeated over and over again.
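One way to read the cascade definition is as a greedy loop (an interpretive sketch; the function and the gain numbers are invented for illustration): at every step the AI applies whichever improvement currently promises the biggest productivity gain, and reusing the same lever yields diminishing returns.

```python
def cascade(options, steps):
    """Greedy cascade: apply the highest-expected-gain improvement each step.

    options: dict mapping improvement name -> multiplicative gain.
    """
    performance, history = 1.0, []
    for _ in range(steps):
        best = max(options, key=options.get)   # biggest expected gain
        performance *= options[best]
        history.append(best)
        options[best] *= 0.5                   # diminishing returns on a reused lever
    return performance, history

perf, hist = cascade({"hardware": 3.0, "code": 2.0, "data": 1.5}, steps=4)
print(hist)  # the biggest lever ("hardware") is pulled first
print(perf)
```

When the loop keeps returning to the same few levers, the cascade degenerates into a cycle in the slide's terminology.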
  • 44. Styles: evolution and revolutions. Evolution is a smooth, almost linear increase of AI capabilities through learning, increasing computing resources, upgrading modules, and writing subroutines.
  • 45. Styles: evolution and revolutions. Revolutions are radical changes of architecture, goal system, or master algorithm. They are crucial for recursive SI. They are intrinsically risky and unpredictable, but they produce most of the capability gains.
  • 46. Cycles. The knowledge-hardware cycle of SI is a cycle in which the AI collects knowledge about new hardware and then builds it for itself.
  • 47. Cycles. The AI-theory-knowledge / architectural-changes cycle is the primary revolutionary cycle, and it is very unpredictable for us. Each architectural change will give the AI the ability to learn more about how to make better AIs.
  • 48. Possible limits and obstacles in self-improvement. Theoretical limits to computation.
  • 49. Possible limits and obstacles in self-improvement. The mathematical nature of the complexity of problems and the definition of intelligence: “it becomes obvious that certain classes of problems will always remain only approximately solvable and any improvements in solutions will come from additional hardware resources not higher intelligence” [Yampolskiy].
  • 50. Possible limits and obstacles in self-improvement. The nature of recursive self-improvement provides diminishing returns on a logarithmic scale: “Mahoney also analyzes complexity of RSI software and presents a proof demonstrating that the algorithmic complexity of Pn (the nth iteration of an RSI program) is not greater than O(log n) implying a very limited amount of knowledge gain would be possible in practice despite theoretical possibility of RSI systems. Yudkowsky also considers possibility of receiving only logarithmic returns on cognitive reinvestment: log(n) + log(log(n)) + … in each recursive cycle.”
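The log(n) + log(log(n)) + … series quoted above flattens very quickly; a small numeric check (a sketch, with a depth cap added so the loop stops once the terms drop to zero or below):

```python
import math

def cumulative_log_returns(n, depth=5):
    """Sum log(n) + log(log(n)) + ..., stopping once a term would be <= 0."""
    total, x = 0.0, float(n)
    for _ in range(depth):
        if x <= 1.0:
            break  # log would be non-positive; the series has bottomed out
        x = math.log(x)
        total += x
    return total

# A million-fold increase in invested resources yields a tiny cumulative return:
for n in (10, 1000, 10**6):
    print(n, round(cumulative_log_returns(n), 2))
```

This is the quantitative content behind the "very limited amount of knowledge gain" claim: each recursive cycle contributes ever-smaller terms.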
  • 51. Possible limits and obstacles in self-improvement. No Free Lunch theorems: the difficulty of searching the space of all possible minds to find a mind with intelligence superior to a given mind.
  • 52. Possible limits and obstacles in self-improvement. Difficulties connected with Gödel's and Löb's theorems, the “Löbstacle”: “Löb's theorem states that a mathematical system can't assert its own soundness without becoming inconsistent.” Compare: “If this sentence is true, then Santa Claus exists.”
  • 53. Possible limits and obstacles in self-improvement. “The procrastination paradox will also prevent the system from making modifications to its code since the system will find itself in a state in which a change made immediately is as desirable and likely as the same change made later.”
  • 54. Possible limits and obstacles in self-improvement. Paradoxes in logical reasoning with self-reference, like “This sentence is false.” I call deliberately created paradoxes of this type “philosophical landmines”, and they could be a means of last resort to control AI.
  • 55. Possible limits and obstacles in self-improvement. Yampolskiy showed the inevitable wireheading of agents above a certain level of intelligence, that is, the hacking of their own reward and utility function.
  • 56. Possible limits and obstacles in self-improvement. The correlation obstacle by Chalmers: “a possibility that no interesting properties we would like to amplify will correspond to ability to design better software.”
  • 57. Possible limits and obstacles in self-improvement. The pointer problem: if a program starts to change its own code while running it, it could crash if it changes the same lines of code where its instruction pointer currently is. A program can't run and change itself simultaneously.
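A loose Python analogy for the pointer problem (the slide speaks about machine code; here the "program" is a dict being iterated over, a deliberately simplified stand-in): mutating the structure you are currently executing over aborts the run.

```python
program = {"step1": "load", "step2": "run"}

try:
    for step in program:           # "running" the program
        program["patch"] = "new"   # "rewriting" it at the same time
except RuntimeError as err:
    # CPython refuses: "dictionary changed size during iteration"
    print("crashed:", err)
```

Real self-modifying systems avoid this the same way runtimes do: write a new version aside, then switch over atomically, rather than patching the code currently being executed.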
  • 58. Possible limits and obstacles in self-improvement. The father-and-child problem is in fact a fight for dominance between AI generations, and it clearly has many failure modes.
  • 59. Possible limits and obstacles in self-improvement. If the AI is a single computer program, it could halt.
  • 60. Converging instrumental goals in self-improvement of AI. The AI safety problem at each new level: avoiding war with the new generation.
  • 61. Converging instrumental goals in self-improvement of AI. The need to test new versions for their real ability to reliably solve complex problems better.
  • 62. Converging instrumental goals in self-improvement of AI Ability to return to previous state
  • 63. Converging instrumental goals in self-improvement of AI. Preferring evolution to revolutions, and lower-level changes to higher-level changes: the AI prefers to reach the same level of optimization power through lower-level changes, that is, by evolutionary development rather than by revolutions.
  • 64. Converging instrumental goals in self-improvement of AI. Revolutions at the early stage of AI and evolution at the later stage: the AI will prefer revolutions only in a very urgent situation, which will probably occur at the beginning of its development, when it has to win over other AI projects and urgently prevent other global risks.
  • 65. Converging instrumental goals in self-improvement of AI. Military AI as a converging goal in the early stages of AI development.
  • 66. Converging instrumental goals in self-improvement of AI Solving Fermi paradox
  • 67. Converging instrumental goals in self-improvement of AI. Cooperation with humans in the early stages of its development.
  • 68. Converging instrumental goals in self-improvement of AI Protecting its own reward function against wireheading
  • 69. Self-improvement of a net of AIs • It can't halt: if one agent halts, others will keep working. • It has a natural ability to clean out bugs (natural selection). • It is immune to the suicide of any single agent: even if all of them commit suicide, it will not happen simultaneously, and they will be able to create offspring, so the net will continue to exist. • There is no pointer problem. • There is no strong difference between evolution and revolutions: revolutionary changes may be tried by some agents, and if they work, such agents will dominate. • There are no paperclip maximizers: different agents have different final goals. • If one agent starts to dominate the others, the evolution of the whole system almost stops (in the same way that dictatorship is bad for a market economy).
  • 70. Possible interventions in the self-improving process to make it less dangerous: 1. Taking low-hanging fruit 2. Explanation of risks to the young AI 3. Initial AI designs that are not capable of quick SI 4. A required level of testing 5. A goal system which prevents unlimited SI 6. Control rods and alarm signals
  • 71. Self-improvement is not a necessary condition for a globally catastrophic AI: a narrow AI designed to construct dangerous biological viruses could be even worse.
  • 72. Conclusion: 30 different levels of self-improvement. Some produce small gains, but some may produce recursive gains. A conservative estimate: each level increases performance 5 times, and there is no recursive SI. In that case the total SI is 5^30 = 931,322,574,615,478,515,625, about 10^21 times. Conclusion: recursive SI is not necessary to create superintelligence; even modest SI on many levels is enough.
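The 5x-per-level arithmetic is easy to verify directly:

```python
total = 5 ** 30  # 30 levels, each a conservative 5x gain, no recursion
print(total)           # 931322574615478515625
print(f"{total:.1e}")  # 9.3e+20, i.e. on the order of 10^21
```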
  • 73. Conclusion: medium-level self-improvement of a young AI and its risks. While unlimited self-improvement may meet some conceptual difficulties, the first human-level AI may get some medium-level self-improvement at low cost, quickly, and with low self-risk. The combination of these low-hanging SI tricks may produce a 100-1000x increase in performance, even for a boxed young AI. Still, some types of SI will not be available to the young AI, as they are risky, take a lot of time, or require external resources.