A Roadmap:
Plan of action to
prevent human
extinction risks
Alexey Turchin
Existential Risks Institute
http://existrisks.org/
X-risks
Exponential growth of technologies
Unlimited possibilities,
including the destruction of humanity
Main x-risks
AI
Synthetic biology
Nuclear doomsday weapons and large scale nuclear
war
Runaway global warming
Nanotech - grey goo.
Crowdsourcing
Contest: more than 50 ideas were added
David Pearce and Satoshi Nakamoto contributed
International
Control
System
Friendly AI
Plan B
Survive
the
catastrophe
Leave
Backups
Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free education in FAI, starter packages in programming
Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Time capsules
with information
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from Long Now
Foundation (or M-disks)
Preservation of
earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals
(apes, habitats)
Prevent x-risk research
because it only increases risk
• Do not advertise the idea of man-made
global catastrophe
• Don’t try to control risks as it would only
give rise to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change the probability
• Do nothing
Plan of Action to Prevent Human Extinction Risks
Singleton
• “A world order in which there is a
single decision-making agency
at the highest level” (Bostrom)
• Worldwide government system based on AI
• Super AI which prevents all possible risks and
provides immortality and happiness to humanity
• Colonization of the solar system, interstellar travel
and Dyson spheres
• Colonization of the Galaxy
• Exploring the Universe
Controlled regression
• Use small catastrophe to prevent large one (Willard
Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of ecological civilization without technology
(“World made by hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Global
catastrophe
Plan D
Improbable
Ideas
Messages to
ET civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon, frozen brains
with information about humanity
Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of
death including any global catastrophe (Moravec, Tegmark)
• It may be possible to make an almost one-to-one correspondence between observer
survival and survival of a group of people (e.g. if all are in a submarine)
Unfriendly AI
• Kills all people and maximizes non-human values
(paperclip maximiser)
• People are alive but suffer extensively
Reboot of the civilization
• Several reboots may happen
• Finally there will be total collapse
or a new supercivilization level
Interstellar distributed
humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
Attracting good
outcome by positive thinking
• Preventing negative thoughts
about the end of the world and about
violence
• Maximum positive attitude
“to attract” positive outcome
• Secret police which use mind
superpowers to stop such thoughts
• Start partying now
First quarter of the 21st century (2015–2025)
High-speed
tech development
needed to quickly pass
risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow
process of resource depletion
• Invest more in defensive technologies than in offensive
Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower the proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make
people more united and less aggressive; open the realm of the
spiritual world to everybody
• Prevent worst forms of capitalism: the desire for short-term
monetary reward and for changing the rules to obtain it.
Stop greedy monopolies
Saved by non-human
intelligence
• Maybe extraterrestrials
are looking out for us and will
save us
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Maybe we live in a simulation
and simulators will save us
• The Second Coming,
a miracle, or life after death
Timely achievement of immortality
on highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Miniaturization for survival and invincibility
• Earth crust colonization by miniaturized nano-tech bodies
• Moving into a simulated world inside a small self-sustained computer
Rising
Resilience
AI based on uploading
of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve friendliness
problem
Bad plans
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger
(Malthus)
• Birth control (Bill Gates)
• Deliberate small catastrophe (bio-weapons)
Unfriendly AI may be better
than nothing
• Any super AI will have some
memory about humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Strange strategy
to escape
Fermi paradox
Random strategy may help us to escape
some dangers that killed all previous
civilizations in space
Interstellar travel
• “Orion” style, nuclear powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Temporary
asylums in space
• Space stations as temporary asylums
(ISS)
• Cheap and safe launch systems
Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies
in the Oort cloud
Space
Colonization
Technological
precognition
• Prediction of the future based
on advanced quantum technology and avoiding
dangerous world-lines
• Search for potential terrorists
using new scanning technologies
• Special AI to predict and prevent new x-risks
Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Resurrection by
another civilization
• Creation of a civilization which has a lot of common
values and traits with humans
• Resurrection of specific people
Risk control. Technology bans
• International ban on dangerous technologies, or
their voluntary relinquishment
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner,
carbon emissions trade)
Surveillance
• International control systems (like IAEA)
• Internet control
Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any
emerging risk; x-risk police based on
a system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield - control of dangerous ideation
by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Research
• Create, prove and widely promote
long term future model (this map is
based on exponential tech development
future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable
and most easily prevented
risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create x-risks wiki and x-risks internet
forum which could be able to attract best
minds, but also open to everyone
and well moderated
• Solve the problem that different
scientists ignore each other
(“world saviours arrogance” problem)
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk
research for many different approaches
(Bostrom) to create high-quality x-risk
research
• Study the existing system of decision-making
in the UN, hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by
Carana)
• General theory of safety and risks
prevention
• War for world domination
• One country uses bioweapons to kill the entire
world’s population except its own, which is
immunized
• Use of a super-technology (like nanotech)
to quickly gain global military dominance
• Blackmail by Doomsday Machine for world
domination
Global
catastrophe
Fatal mistakes in world
control system
• Wrong command makes shield system
attack people
• Geo-engineering goes awry
• World totalitarianism leads to catastrophe
Social support
with shared knowledge
and productive rivalry (CSER)
• Popularization (articles, books,
forums, media): show the public that
x-risks are real and diverse,
but that we can and should act.
• Public support, street action
• Political support (parties)
• Peer-reviewed journal, conferences,
intergovernmental panel, international
institute
• Political parties for x-risks prevention
• Productive cooperation between
scientists and society based on trust
Elimination of
certain risks
• Universal vaccines, UV cleaners
• Asteroid detection (WISE)
• Transition to renewable energy, cutting emissions,
carbon capture
• Stop LHC, SETI and METI until AI is created
• Rogue countries integrated, crushed, or prevented
from having dangerous weapons programs and
advanced science
International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• Group of supernations takes responsibility for
x-risk prevention (US, EU, BRICS) and signs a
treaty
• A smaller catastrophe could help unite humanity
(pandemic, small asteroid, local nuclear
war) - some movement or event that will cause a
paradigmatic change so that humanity becomes
more existentially-risk aware
• International law about x-risks which will punish
people for raising risk: underestimating it,
plotting it, or neglecting it, and which will reward
people for efforts in their prevention.
• The ability to quickly adapt to new risks and
envision them in advance.
• International agencies dedicated to certain
risks
Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more
important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization, accepted
at the personal level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a
sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and
motivate their prevention
• Translate the best x-risk articles and books into major languages
• Memorial and awareness days: Earth day, Petrov day, Asteroid day
• Rationality and effective altruism
• Education in schools on x-risks, safety and rationality topics
Global
catastrophe
Manipulation of the
extinction probability
using Doomsday argument
• Decision to create more observers in case an
unfavourable event X starts to happen, thus lowering its
probability (the UN++ method by Bostrom)
• Lowering the birth rate to get more time for
the civilization (the underlying arithmetic is sketched below)
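For reference, the arithmetic these two ideas lean on, in the standard Gott/Bostrom birth-rank form (a summary of the published argument, not text from the map):

```latex
% Doomsday argument, birth-rank form (Gott/Bostrom).
% n = your birth rank among all humans; N = total number of humans ever born.
% Self-sampling assumption: n is uniformly distributed on {1, ..., N}, so
\[
P\left(\frac{n}{N} \le \frac{1}{20}\right) = \frac{1}{20}
\quad\Longrightarrow\quad
P\left(N \le 20\,n\right) = 0.95 .
\]
% With n on the order of 10^{11} births to date, N < 2 x 10^{12} at 95% confidence.
% The UN++ idea inverts this: committing to create many more observers if event X
% begins shifts where a random observer expects to sit in the birth order, and so,
% on this logic, the estimated probability of X.
```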
Control of the simulation
(if we are in it)
• Live an interesting life so our
simulation isn’t switched off
• Don’t let them know that we
know we live in a simulation
• Hack the simulation and
control it
• Negotiation with the simulators
or pray for help
Second half of the 21st century (2050–2100)
AI practical studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI teams; they agree to implement it
and adapt it to their systems
• Tests of FAI on non self-improving models
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Increasing diversity and increasing
connectedness of our civilization
• Universal methods of prevention
(resistant structures, strong medicine)
• Building reserves (food stocks,
seeds, minerals, energy, machinery, knowledge)
• Widely distributed civil defence, including air
and water cleaning systems, radiation meters,
gas masks, medical kits etc
Practical steps to
cope with certain
risks
• Instruments to capture radioactive dust
• Small bio-hack groups around the world could improve
the public’s understanding of biotechnology
to the point where no one accidentally creates a
biotechnology hazard
• Developing better guidance on safe biotechnology
processes and exactly why it’s safe this way and not
otherwise... effectively “raising the sanity waterline”
• Dangerous forms of biotech become centralised
• DNA synthesizers are constantly connected to the
Internet and all DNA is checked for dangerous fragments
(a minimal screening sketch follows this list)
• Funding things like clean-rooms with negative pressure
• Methane and CO2 capture, possibly by bacteria
• Invest in the biodiversity of the food supply chain and
prevent pest spread: 1) better quarantine laws, 2) portions
in medium bulks of foodstuffs, with sterilization.
• “Develop and stockpile new drugs and vaccines,
monitor biological agents and emerging diseases, and
strengthen the capacities of local health systems to
respond to pandemics” (Matheny)
• Better quarantine laws
• Mild birth control (female education)
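A minimal illustration of the DNA-screening idea referenced in the list above. This is a hypothetical sketch: the fragment list is placeholder data, and real screening uses curated sequence databases with fuzzy/homology matching rather than exact substrings.

```python
# Toy sketch of synthesizer-side DNA screening. Illustration only:
# the blocklist below is placeholder data, not real hazardous sequences,
# and real screening uses curated databases and homology search.

HAZARDOUS_FRAGMENTS = {
    "ATGCGTACGTTAGC",   # hypothetical flagged fragment 1
    "GGCTTAAACCGGTA",   # hypothetical flagged fragment 2
}

def contains_hazard(order_sequence: str) -> bool:
    """Return True if any blocklisted fragment appears in the ordered sequence."""
    seq = order_sequence.upper().replace("\n", "").replace(" ", "")
    return any(fragment in seq for fragment in HAZARDOUS_FRAGMENTS)

def screen_order(order_sequence: str) -> str:
    """Accept or reject a synthesis order before the machine runs it."""
    if contains_hazard(order_sequence):
        return "REJECT: matches a flagged fragment; escalate to human review"
    return "ACCEPT: no flagged fragments found"

if __name__ == "__main__":
    print(screen_order("ttacATGCGTACGTTAGCggaa"))  # -> REJECT ...
    print(screen_order("ttacgataccgtagagga"))       # -> ACCEPT ...
```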
Cold war and WW3 prevention
• International court or secret institution
• Large project which could unite humanity, like pandemic prevention
• Rogue countries integrated, based on dialogue and appreciating their
values
• Peaceful integration of national states
Dramatic social changes
It could include many interesting but different topics: the demise of
capitalism, hipster revolution, internet connectivity, the global village,
the dissolving of national states.
Changing the way politics works so that the policies implemented
actually have empirical backing based on what we know about systems.
Steps
and
timeline
Exact dates depend on
the pace of technological progress
and on our activity to prevent risks
Step 1
Planning
Step 3
First level of defence
on low-tech level
Step 4
Second level of defence
on high-tech level
Step 5
Reaching indestructibility
of the civilization with
near zero x-risks
Step 2
Preparation
Second quarter of the 21st century (2025–2050)
High-tech bunkers
• Nano-tech based adaptive structures
• Small AI with human simulation inside space rock
Plan A.
Prevent the catastrophe
Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people
Ideological
payload of new
technologies
Space tech - Mars as backup
Electric car - sustainability
Self-driving cars - risks of AI and value of human life
Facebook - empathy and mutual control
Open source - transparency
Psychedelic drugs - empathy
Computer games and brain stimulation - virtual world
Design new monopoly tech with special ideological
payload
Reactive and
Proactive
approach
Reactive: React to the most urgent and most
visible risks. Pros: good timing, visible results,
right resource allocation, investment only in
real risks.
Good for slow risks, like pandemics.
Cons: can’t react to fast emergencies (AI,
asteroid, collider failure).
Risks are ranked by urgency.
Proactive: Envision future risks and build
multilevel defence.
Pros: good for coping with principally new
risks; enough time to build defence.
Cons: spending on merely imaginable risks, no
clear reward, problems with identifying new
risks and discounting mad ideas.
Risks are ranked by probability.
(A toy ranking sketch follows.)
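To make the contrast concrete, a toy sketch of how one risk list sorts differently under the two approaches. Every figure below is an invented placeholder, not an estimate.

```python
# Toy illustration of reactive vs. proactive risk ranking.
# All numbers are invented placeholders, not estimates.

risks = [
    # (name, assumed probability per decade, urgency/visibility score 0-1)
    ("pandemic",        0.100, 0.9),
    ("unfriendly AI",   0.050, 0.2),
    ("nuclear war",     0.030, 0.6),
    ("asteroid impact", 0.001, 0.1),
]

# Reactive: rank by urgency, reacting to the most visible risks first.
reactive = sorted(risks, key=lambda r: r[2], reverse=True)

# Proactive: rank by probability, building defences before risks are visible.
proactive = sorted(risks, key=lambda r: r[1], reverse=True)

print("Reactive order: ", [name for name, _, _ in reactive])
print("Proactive order:", [name for name, _, _ in proactive])
```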
Decentralized
monitoring
of risks
Decentralized risks
monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions): local police could handle local crime;
x-risk peers could control their neighbourhood in their professional space; Anonymous
hacker groups; Google search control
• “Anonymous” style hacker groups: large group of “world saviours”
• Net based safety solutions
• Laws and economic stimulus (Richard Posner,
carbon emissions trade): prizes for any risk found and prevented
• World democracy based on internet voting
• Mild monitoring of potential signs of dangerous activity (internet), using deep learning
• High level of horizontal connectivity between people
Useful ideas to limit
catastrophe scale
Limit the impact of a catastrophe by implementing measures to slow
its growth and the areas it affects. Quarantine; improve
the capacity for rapid production of vaccines in response to emerging
threats; create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring
and early detection technologies. For example, with pandemics you
could support general research on the magnitude of biosecurity risks
and opportunities to reduce them, and improve and connect disease
surveillance systems so that novel threats can be detected and responded
to more quickly.
Worldwide x-risk prevention exercises
Plan A is complex:
we should do all of it simultaneously:
international control, AI, robustness, and space
International
Control
System
Friendly AI Rising
Resilience
Space
Colonization
Decentralized
monitoring
of risks
Plans B, C and D have smaller
chances of success
Survive
the catastrophe
Leave
backups
Plan D
Deus ex
machina
Bad
plans
Plan A1.1
International Control System
Risk control. Technology bans
• International ban on dangerous technologies, or
their voluntary relinquishment
• Freezing potentially dangerous projects for 30 years
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner, carbon emissions trade)
Surveillance
• Internet control
Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any
emerging risk; x-risk police based on
a system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield – control of dangerous ideation
by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Research
• Create, prove and widely promote
long term future model (this map is
based on exponential tech development future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable
and most easily prevented
risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create x-risks wiki and x-risks internet
forum which could be able to attract best
minds, but also open to everyone
and well moderated
• Solve the problem that different
scientists ignore each other
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk
research for many different
approaches (Bostrom) to create
high-quality x-risk research
• Study the existing system of decision-making
in the UN, hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by Carana)
• General theory of safety and risks
prevention
Social support
with shared knowledge
• Popularization (articles, books,
forums, media): show the public that
x-risks are real and diverse,
but that we can and should act.
• Public support, street action
• Peer-reviewed journal, conferences,
intergovernmental panel, international
institute
• Political parties for x-risks prevention
• Productive cooperation between
scientists and society based on trust
International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• Group of supernations takes responsibility for
x-risk prevention (US, EU, BRICS) and signs a treaty
• A smaller catastrophe could help unite humanity
(pandemic, small asteroid, local nuclear war) -
some movement or event that will cause a
paradigmatic change so that humanity becomes
more existentially-risk aware
• International law about x-risks which will punish
people for raising risk: underestimating it,
plotting it, or neglecting it, and which will reward
people for efforts in their prevention.
• The ability to quickly adapt to new risks and
envision them in advance.
• International agencies dedicated to certain
risks
Research
• Create, prove and widely promote
long term future model (this map is
based on exponential tech development
future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable
and most easily prevented
risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create x-risks wiki and x-risks internet
forum which could be able to attract best
minds, but also open to everyone
and well moderated
• Solve the problem that different
scientists ignore each other
(“world saviours arrogance” problem)
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk
research for many different approaches
(Bostrom) to create high-quality x-risk
research
• Study the existing system of decision-making
in the UN, hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by
Carana)
• General theory of safety and risks
prevention
Step 1
Planning
Plan A1.1
International
Control System
Social support
with shared knowledge
• Peer-reviewed journal, conferences, intergovernmental panel, international institute
• Political parties for x-risks prevention
Step 2
Preparation
Plan A1.1
International
Control System
International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• Group of supernations takes responsibility for
x-risk prevention (US, EU, BRICS) and signs a treaty
Step 3
First level of defence
on low-tech level
Plan A1.1
International
Control System
Risk control. Technology bans
• International ban on dangerous technologies, or
their voluntary relinquishment
• Freezing potentially dangerous projects for 30 years
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner, carbon emissions trade)
Surveillance
Step 3
First level of defence
on low-tech level
Plan A1.1
International
Control System
Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any
emerging risk; x-risk police based on
a system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Step 4
Second level of defence
on high-tech level
Plan A1.1
International
Control System
Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield - control of dangerous ideation
by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Step 4
Second level of defence
on high-tech level
Plan A1.1
International
Control System
Risks of plan A1.1
Singleton
• “A world order in which there is a
single decision-making agency
at the highest level” (Bostrom)
• Worldwide government system based on AI
• Super AI which prevents all possible risks and
provides immortality and happiness to humanity
• Colonization of the solar system, interstellar travel
and Dyson spheres
• Colonization of the Galaxy
• Exploring the Universe
Result:
Plan A1.2
Decentralised risk monitoring
Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower the proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make
people more united and less aggressive; open the realm of the
spiritual world to everybody
• Prevent worst forms of capitalism: the desire for short-term
monetary reward and for changing the rules to obtain it.
Stop greedy monopolies
Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more
important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization, accepted
at the personal level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a
sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and
motivate their prevention
• Translate the best x-risk articles and books into major languages
• Memorial and awareness days: Earth day, Petrov day, Asteroid day
• Rationality and effective altruism
• Education in schools on x-risks, safety and rationality topics
Cold war and WW3
prevention
• International court or secret institution
• Large project which could unite humanity, like pandemic prevention
• Rogue countries integrated, based on dialogue and appreciating their
values
• Peaceful integration of national states
Dramatic social changes
It could include many interesting but different topics: the demise of
capitalism, hipster revolution, internet connectivity, the global village,
the dissolving of national states.
Changing the way politics works so that the policies implemented
actually have empirical backing based on what we know about systems.
Decentralized risks
monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions): local police could handle local crime;
x-risk peers could control their neighbourhood in their professional space; Anonymous
hacker groups; Google search control
• “Anonymous” style hacker groups: large group of “world saviours”
• Net based safety solutions
• Laws and economic stimulus (Richard Posner,
carbon emissions trade): prizes for any risk found and prevented
• World democracy based on internet voting
• Mild monitoring of potential signs of dangerous activity (internet), using deep learning
• High level of horizontal connectivity between people
Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more
important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization, accepted
at the personal level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a
sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and
motivate their prevention
• Translate the best x-risk articles and books into major languages
• Rationality and effective altruism
Plan A1.2
Decentralised risk monitoring
Step 1
Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower the proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make
people more united and less aggressive; open the realm of the
spiritual world to everybody
Plan A1.2
Decentralised risk
monitoring. Step 2
Cold war and WW3
prevention
• International court or secret institution
• Large project which could unite humanity, like pandemic prevention
• Rogue countries integrated, based on dialogue and appreciating their values
• Peaceful integration of national states
Plan A1.2
Decentralised risk monitoring.
Step 3
Decentralized risks
monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions); Anonymous-style
hacker groups; Google search control
• World democracy based on internet voting
Plan A1.2
Decentralised risk monitoring.
Step 4
Plan A2.
Friendly AI
Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free education in FAI, starter packages in programming
Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
AI practical studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI teams; they agree to implement it
and adapt it to their systems
• Tests of FAI on non self-improving models
Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free education in FAI, starter packages in programming
Plan A2.
Friendly AI. Step 1
Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Plan A2.
Friendly AI. Step 2
AI practical studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI teams; they agree to implement it
and adapt it to their systems
• Tests of FAI on non self-improving models
Plan A2.
Friendly AI. Step 3
Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Plan A2.
Friendly AI. Step 4
Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
Plan A2.
Friendly AI. Step 5
High-speed
tech development
needed to quickly pass
risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow
process of resource depletion
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, hipster
revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical
backing based on what we know about systems.
Improving human
intelligence and morality
• Nootropics, brain stimulation, and gene therapy for higher IQ
• New rationality: Bayes probabilities, interest in long term future,
LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Engineered enlightenment: use brain science to make people more united and less
aggressive; open the realm of the spiritual world to everybody
• Prevent worst forms of capitalism: the desire for short-term monetary reward and for
changing the rules to obtain it. Stop greedy monopolies
Timely achievement of immortality
on highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Miniaturization for survival and invincibility
• Earth crust colonization by miniaturized nano tech bodies
• Moving into a simulated world inside a small self-sustained computer
AI based on uploading
of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve friendliness
problem
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Increasing diversity and increasing
connectedness of our civilization
• Universal methods of prevention
(resistant structures,
strong medicine)
• Building reserves (food stocks,
seeds, minerals, energy, machinery,
knowledge)
• Widely distributed civil defence,
including air and water cleaning
systems, radiation meters, gas masks,
medical kits etc
Plan A3.
Rising Resilience
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Increasing diversity and increasing
connectedness of our civilization
• Universal methods of prevention
(resistant structures,
strong medicine)
• Building reserves (food stocks,
seeds, minerals, energy, machinery,
knowledge)
• Widely distributed civil defence,
including air and water cleaning
systems, radiation meters, gas masks,
medical kits etc
Plan A3.
Rising Resilience
Step 1
Plan A3.
Rising Resilience
Step 2
Useful ideas to limit
catastrophe scale
Limit the impact of a catastrophe by implementing measures to slow
its growth and the areas it affects. Quarantine; improve
the capacity for rapid production of vaccines in response to emerging
threats; create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring
and early detection technologies. For example, with pandemics you
could support general research on the magnitude of biosecurity risks
and opportunities to reduce them, and improve and connect disease
surveillance systems so that novel threats can be detected and responded
to more quickly.
Worldwide x-risk prevention exercises
High-speed
tech development
needed to quickly pass
risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow
process of resource depletion
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, hipster
revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical
backing based on what we know about systems.
Plan A3.
Rising Resilience. Step 3
Timely achievement of immortality
on highest possible level
• Nanotech-based immortal body
• Integration with AI
Plan A3.
Rising Resilience. Step 4
Interstellar travel
• “Orion” style, nuclear powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Temporary
asylums in space
• Space stations as temporary asylums
(ISS)
• Cheap and safe launch systems
Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk)
with 100-1000 people
Plan A4.
Space colonisation
Temporary
asylums in space
• Space stations as temporary asylums
(ISS)
• Cheap and safe launch systems
Plan A4.
Space colonisation
Step 1
Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk)
with 100-1000 people
Plan A4.
Space colonisation
Step 2
Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Plan A4.
Space colonisation
Step 3
Interstellar travel
• “Orion” style, nuclear powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Plan A4.
Space colonisation
Step 4
Interstellar distributed
humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
Result:
Plan B.
Survive the catastrophe
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
High-tech bunkers
• Nano-tech based adaptive structures
• Small AI with human simulation inside space rock
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Plan B.
Survive the catastrophe
Step 1
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Plan B.
Survive the catastrophe
Step 2
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Plan B.
Survive the catastrophe
Step 3
High-tech bunkers
• Nano-tech based adaptive structures
• Small AI with human simulation inside space rock
Plan B.
Survive the catastrophe
Step 4
Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Plan B.
Survive the catastrophe
Step 5
Reboot of the civilization
• Several reboots may happen
• Finally there will be total collapse
or a new supercivilization level
Result:
Time capsules
with information
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from Long Now
Foundation (or M-disks)
Preservation of
earthly life
• Create conditions for the re-emergence of new intelligent life
on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals
(apes, habitats)
Messages to
ET civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon, frozen brains
with information about humanity
Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Plan C.
Leave backups
Time capsules
with information
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from Long Now
Foundation (or M-disks)
Plan C.
Leave backups
Step 1
Messages to
ET civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon, frozen brains
with information about humanity
Plan C.
Leave backups. Step 2
Preservation of
earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals
(apes, habitats)
Plan C.
Leave backups. Step 3
Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Plan C.
Leave backups. Step 4
Resurrection by
another civilization
• Creation of a civilization which has a lot of common
values and traits with humans
• Resurrection of specific people
Result:
Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of
death including any global catastrophe (Moravec, Tegmark)
• It may be possible to make an almost one-to-one correspondence between observer
survival and survival of a group of people (e.g. if all are in a submarine)
Saved by non-human
intelligence
• Maybe extraterrestrials
are looking out for us and will
save us
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Maybe we live in a simulation
and simulators will save us
• The Second Coming,
a miracle, or life after death
Strange strategy
to escape
Fermi paradox
Random strategy may help us to escape
some dangers that killed all previous
civilizations in space
Technological
precognition
• Prediction of the future based
on advanced quantum technology and avoiding
dangerous world-lines
• Search for potential terrorists
using new scanning technologies
• Special AI to predict and prevent new x-risks
Manipulation of the
extinction probability
using Doomsday argument
• Decision to create more observers in case an
unfavourable event X starts to happen, thus lowering its
probability (the UN++ method by Bostrom)
• Lowering the birth rate to get more time for
the civilization
Control of the simulation
(if we are in it)
• Live an interesting life so our
simulation isn’t switched off
• Don’t let them know that we
know we live in a simulation
• Hack the simulation and
control it
• Negotiation with the simulators
or pray for help
Plan D.
Improbable ideas
Saved by non-human
intelligence
• Maybe extraterrestrials
are looking out for us and will
save us
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Maybe we live in a simulation
and simulators will save us
• The Second Coming,
a miracle, or life after death
Plan D.
Improbable ideas
Idea 1
Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of
death including any global catastrophe (Moravec, Tegmark)
• It may be possible to make an almost one-to-one correspondence between observer
survival and survival of a group of people (e.g. if all are in a submarine)
Plan D.
Improbable ideas. Idea 2
Strange strategy
to escape
Fermi paradox
Random strategy may help us to escape
some dangers that killed all previous
civilizations in space
Plan D.
Improbable ideas
Idea 3
Technological
precognition
• Prediction of the future based
on advanced quantum technology and avoiding
dangerous world-lines
• Search for potential terrorists
using new scanning technologies
• Special AI to predict and prevent new x-risks
Plan D.
Improbable ideas
Idea 4
Manipulation of the
extinction probability
using Doomsday argument
• Decision to create more observers in case an
unfavourable event X starts to happen, thus lowering its
probability (the UN++ method by Bostrom)
• Lowering the birth rate to get more time for
the civilization
Plan D.
Improbable ideas
Idea 5
Control of the simulation
(if we are in it)
• Live an interesting life so our
simulation isn’t switched off
• Don’t let them know that we
know we live in a simulation
• Hack the simulation and
control it
• Negotiation with the simulators
or pray for help
Plan D.
Improbable ideas
Idea 6
Prevent x-risk research
because it only increases risk
• Do not advertise the idea of man-made
global catastrophe
• Don’t try to control risks as it would only
give rise to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change the probability
• Do nothing
Controlled regression
• Use small catastrophe to prevent large one (Willard
Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of ecological civilization without technology
(“World made by hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Attracting good
outcome by positive thinking
• Preventing negative thoughts
about the end of the world and about
violence
• Maximum positive attitude
“to attract” positive outcome
• Secret police which use mind
superpowers to stop such thoughts
• Start partying now
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger
(Malthus)
• Birth control (Bill Gates)
• Deliberate small catastrophe (bio-weapons)
Unfriendly AI may be better
than nothing
• Any super AI will have some
memory about humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Bad plans
Prevent x-risk research
because it only increases risk
• Do not advertise the idea of man-made
global catastrophe
• Don’t try to control risks as it would only
give rise to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change the probability
• Do nothing
Bad plans
Idea 1
Controlled regression
• Use small catastrophe to prevent large one (Willard
Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of ecological civilization without technology
(“World made by hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Bad plans
Idea 2
Bad plans. Idea 3
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger
(Malthus)
• Extreme birth control
• Deliberate small catastrophe (bio-weapons)
Unfriendly AI may be better
than nothing
• Any super AI will have some
memory about humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Bad plans. Idea 4
Attracting good
outcome by positive thinking
• Preventing negative thoughts
about the end of the world and about
violence
• Maximum positive attitude
“to attract” positive outcome
• Secret police which use mind
superpowers to stop such thoughts
• Start partying now
Bad plans. Idea 5
Dynamic roadmaps
The next stage of the research will be the creation of
collectively editable wiki-style roadmaps.
They will cover all existing topics of
transhumanism and future studies.
Create an AI system based on the roadmaps or
working on their improvement.
You can read all the roadmaps at
http://immortality-roadmap.com
http://existrisks.org/

 
Nick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity InstituteNick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity InstituteActivate Summit
 
Keynote Address: PASI 2013 in Methods for Data-Driven Discovery
Keynote Address: PASI 2013 in Methods for Data-Driven DiscoveryKeynote Address: PASI 2013 in Methods for Data-Driven Discovery
Keynote Address: PASI 2013 in Methods for Data-Driven DiscoverySantiago Nunez
 
In defense of posthuman dignity. Nick Bostrom (2005).
In defense of posthuman dignity. Nick Bostrom (2005).In defense of posthuman dignity. Nick Bostrom (2005).
In defense of posthuman dignity. Nick Bostrom (2005).Jorge Pacheco
 
Siglo XXI: Nuevas tecnologías y desafíos- José Luis Cordeiro
Siglo XXI: Nuevas tecnologías y desafíos- José Luis CordeiroSiglo XXI: Nuevas tecnologías y desafíos- José Luis Cordeiro
Siglo XXI: Nuevas tecnologías y desafíos- José Luis CordeiroIPAE
 
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...Heath Rezabek
 

Similar to Roadmaps to prevent x risks 2- presentation (20)

Backup on the Moon
Backup on the MoonBackup on the Moon
Backup on the Moon
 
Humanity+ (presentation contest 2010)
Humanity+ (presentation contest 2010)Humanity+ (presentation contest 2010)
Humanity+ (presentation contest 2010)
 
My night with philosophers presentation - London June 8
My night with philosophers presentation - London June 8My night with philosophers presentation - London June 8
My night with philosophers presentation - London June 8
 
The #FreeAI Manifesto
The #FreeAI ManifestoThe #FreeAI Manifesto
The #FreeAI Manifesto
 
Philosophy of Big Data: Big Data, the Individual, and Society
Philosophy of Big Data: Big Data, the Individual, and SocietyPhilosophy of Big Data: Big Data, the Individual, and Society
Philosophy of Big Data: Big Data, the Individual, and Society
 
Kim Solez Singularity explained and promoted winter 2014
Kim Solez Singularity explained and promoted winter 2014Kim Solez Singularity explained and promoted winter 2014
Kim Solez Singularity explained and promoted winter 2014
 
Kim Solez Singularity explained and promoted fall 2016
Kim Solez Singularity explained and promoted fall 2016Kim Solez Singularity explained and promoted fall 2016
Kim Solez Singularity explained and promoted fall 2016
 
superintelligence
superintelligencesuperintelligence
superintelligence
 
Future possibilities for education and learning by the year 2030
Future possibilities for education and learning by the year 2030Future possibilities for education and learning by the year 2030
Future possibilities for education and learning by the year 2030
 
"Homo Deus": Emerging Narratives for the Future
"Homo Deus": Emerging Narratives for the Future"Homo Deus": Emerging Narratives for the Future
"Homo Deus": Emerging Narratives for the Future
 
Encyclopedic intelligence big science and technology
Encyclopedic intelligence big science and technologyEncyclopedic intelligence big science and technology
Encyclopedic intelligence big science and technology
 
Kim Solez Singularity explained promoted winter 2015
Kim Solez Singularity explained promoted winter 2015Kim Solez Singularity explained promoted winter 2015
Kim Solez Singularity explained promoted winter 2015
 
Immortality roadmap
Immortality roadmapImmortality roadmap
Immortality roadmap
 
Elvin Guri innovation Explorer 2019
Elvin Guri innovation Explorer 2019Elvin Guri innovation Explorer 2019
Elvin Guri innovation Explorer 2019
 
Kimethics in a world of robots 2013
Kimethics in a world of robots 2013Kimethics in a world of robots 2013
Kimethics in a world of robots 2013
 
Nick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity InstituteNick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity Institute
 
Keynote Address: PASI 2013 in Methods for Data-Driven Discovery
Keynote Address: PASI 2013 in Methods for Data-Driven DiscoveryKeynote Address: PASI 2013 in Methods for Data-Driven Discovery
Keynote Address: PASI 2013 in Methods for Data-Driven Discovery
 
In defense of posthuman dignity. Nick Bostrom (2005).
In defense of posthuman dignity. Nick Bostrom (2005).In defense of posthuman dignity. Nick Bostrom (2005).
In defense of posthuman dignity. Nick Bostrom (2005).
 
Siglo XXI: Nuevas tecnologías y desafíos- José Luis Cordeiro
Siglo XXI: Nuevas tecnologías y desafíos- José Luis CordeiroSiglo XXI: Nuevas tecnologías y desafíos- José Luis Cordeiro
Siglo XXI: Nuevas tecnologías y desafíos- José Luis Cordeiro
 
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...
Xrisk 101 (2013 - Starship Congress - Full presentation, Nick Nielsen and Hea...
 

More from avturchin

Fighting aging as effective altruism
Fighting aging as effective altruismFighting aging as effective altruism
Fighting aging as effective altruismavturchin
 
А.В.Турчин. Технологическое воскрешение умерших
А.В.Турчин. Технологическое воскрешение умершихА.В.Турчин. Технологическое воскрешение умерших
А.В.Турчин. Технологическое воскрешение умершихavturchin
 
Technological resurrection
Technological resurrectionTechnological resurrection
Technological resurrectionavturchin
 
Messaging future AI
Messaging future AIMessaging future AI
Messaging future AIavturchin
 
Future of sex
Future of sexFuture of sex
Future of sexavturchin
 
Near term AI safety
Near term AI safetyNear term AI safety
Near term AI safetyavturchin
 
цифровое бессмертие и искусство
цифровое бессмертие и искусствоцифровое бессмертие и искусство
цифровое бессмертие и искусствоavturchin
 
Digital immortality and art
Digital immortality and artDigital immortality and art
Digital immortality and artavturchin
 
Nuclear submarines as global risk shelters
Nuclear submarines  as global risk  sheltersNuclear submarines  as global risk  shelters
Nuclear submarines as global risk sheltersavturchin
 
Искусственный интеллект в искусстве
Искусственный интеллект в искусствеИскусственный интеллект в искусстве
Искусственный интеллект в искусствеavturchin
 
ИИ как новая форма жизни
ИИ как новая форма жизниИИ как новая форма жизни
ИИ как новая форма жизниavturchin
 
Космос нужен для бессмертия
Космос нужен для бессмертияКосмос нужен для бессмертия
Космос нужен для бессмертияavturchin
 
AI in life extension
AI in life extensionAI in life extension
AI in life extensionavturchin
 
Levels of the self-improvement of the AI
Levels of the self-improvement of the AILevels of the self-improvement of the AI
Levels of the self-improvement of the AIavturchin
 
The map of asteroids risks and defence
The map of asteroids risks and defenceThe map of asteroids risks and defence
The map of asteroids risks and defenceavturchin
 
Herman Khan. About cobalt bomb and nuclear weapons.
Herman Khan. About cobalt bomb and nuclear weapons.Herman Khan. About cobalt bomb and nuclear weapons.
Herman Khan. About cobalt bomb and nuclear weapons.avturchin
 
The map of the methods of optimisation
The map of the methods of optimisationThe map of the methods of optimisation
The map of the methods of optimisationavturchin
 
Как достичь осознанных сновидений
Как достичь осознанных сновиденийКак достичь осознанных сновидений
Как достичь осознанных сновиденийavturchin
 
The map of natural global catastrophic risks
The map of natural global catastrophic risksThe map of natural global catastrophic risks
The map of natural global catastrophic risksavturchin
 
How the universe appeared form nothing
How the universe appeared form nothingHow the universe appeared form nothing
How the universe appeared form nothingavturchin
 

More from avturchin (20)

Fighting aging as effective altruism
Fighting aging as effective altruismFighting aging as effective altruism
Fighting aging as effective altruism
 
А.В.Турчин. Технологическое воскрешение умерших
А.В.Турчин. Технологическое воскрешение умершихА.В.Турчин. Технологическое воскрешение умерших
А.В.Турчин. Технологическое воскрешение умерших
 
Technological resurrection
Technological resurrectionTechnological resurrection
Technological resurrection
 
Messaging future AI
Messaging future AIMessaging future AI
Messaging future AI
 
Future of sex
Future of sexFuture of sex
Future of sex
 
Near term AI safety
Near term AI safetyNear term AI safety
Near term AI safety
 
цифровое бессмертие и искусство
цифровое бессмертие и искусствоцифровое бессмертие и искусство
цифровое бессмертие и искусство
 
Digital immortality and art
Digital immortality and artDigital immortality and art
Digital immortality and art
 
Nuclear submarines as global risk shelters
Nuclear submarines  as global risk  sheltersNuclear submarines  as global risk  shelters
Nuclear submarines as global risk shelters
 
Искусственный интеллект в искусстве
Искусственный интеллект в искусствеИскусственный интеллект в искусстве
Искусственный интеллект в искусстве
 
ИИ как новая форма жизни
ИИ как новая форма жизниИИ как новая форма жизни
ИИ как новая форма жизни
 
Космос нужен для бессмертия
Космос нужен для бессмертияКосмос нужен для бессмертия
Космос нужен для бессмертия
 
AI in life extension
AI in life extensionAI in life extension
AI in life extension
 
Levels of the self-improvement of the AI
Levels of the self-improvement of the AILevels of the self-improvement of the AI
Levels of the self-improvement of the AI
 
The map of asteroids risks and defence
The map of asteroids risks and defenceThe map of asteroids risks and defence
The map of asteroids risks and defence
 
Herman Khan. About cobalt bomb and nuclear weapons.
Herman Khan. About cobalt bomb and nuclear weapons.Herman Khan. About cobalt bomb and nuclear weapons.
Herman Khan. About cobalt bomb and nuclear weapons.
 
The map of the methods of optimisation
The map of the methods of optimisationThe map of the methods of optimisation
The map of the methods of optimisation
 
Как достичь осознанных сновидений
Как достичь осознанных сновиденийКак достичь осознанных сновидений
Как достичь осознанных сновидений
 
The map of natural global catastrophic risks
The map of natural global catastrophic risksThe map of natural global catastrophic risks
The map of natural global catastrophic risks
 
How the universe appeared form nothing
How the universe appeared form nothingHow the universe appeared form nothing
How the universe appeared form nothing
 

Recently uploaded

User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)Columbia Weather Systems
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxpriyankatabhane
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)Columbia Weather Systems
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfSELF-EXPLANATORY
 
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)riyaescorts54
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPirithiRaju
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentationtahreemzahra82
 
Environmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial BiosensorEnvironmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial Biosensorsonawaneprad
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPirithiRaju
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naJASISJULIANOELYNV
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubaikojalkojal131
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRlizamodels9
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》rnrncn29
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxBerniceCayabyab1
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxNandakishor Bhaurao Deshmukh
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringPrajakta Shinde
 
Davis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologyDavis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologycaarthichand2003
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingNetHelix
 

Recently uploaded (20)

User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptx
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
 
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
(9818099198) Call Girls In Noida Sector 14 (NOIDA ESCORTS)
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdf
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentation
 
Environmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial BiosensorEnvironmental Biotechnology Topic:- Microbial Biosensor
Environmental Biotechnology Topic:- Microbial Biosensor
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by na
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical Engineering
 
Davis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologyDavis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technology
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 

Roadmaps to prevent x risks 2- presentation

  • 1. A Roadmap: Plan of action to prevent human extinction risks Alexey Turchin Existential Risks Institute http://existrisks.org/
  • 2. X-risks Exponential growth of technologies Unlimited possibilities Including destruction of humanity
  • 3. Main x-risks AI Synthetic biology Nuclear doomsday weapons and large scale nuclear war Runaway global warming Nanotech - grey goo.
  • 4. Crowdsourcing Contest: more than 50 ideas were added David Pearce and Satoshi Nakamoto contributed
  • 5. International Control System Friendly AI Plan B Survive the catastrophe Leave Backups Study and Promotion • Study of Friendly AI theory • Promotion of Friendly AI (Bostrom and Yudkowsky) • Fundraising (MIRI) • Teaching rationality (LessWrong) • Slowing other AI projects (recruiting scientists) • FAI free education, starter packages in programming Solid Friendly AI theory • Human values theory and decision theory • Proven safe, fail-safe, intrinsically safe AI • Preservation of the value system during AI self-improvement • A clear theory that is practical to implement Seed AI Creation of a small AI capable of recursive self-improvement and based on Friendly AI theory Superintelligent AI • Seed AI quickly improves itself and undergoes “hard takeoff” • It becomes the dominant force on Earth • AI eliminates suffering, involuntary death, and existential risks • AI Nanny – one hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel)
Preparation • Fundraising and promotion • Textbook to rebuild civilization (Dartnell’s book “Knowledge”) • Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway) • Survivalist communities Building • Underground bunkers, space colonies • Nuclear submarines • Seasteading Natural refuges • Uncontacted tribes • Remote villages • Remote islands • Oceanic ships • Research stations in Antarctica Rebuilding civilisation after catastrophe • Rebuilding population • Rebuilding science and technology • Prevention of future catastrophes Time capsules with information • Underground storage with information and DNA for future non-human civilizations • Eternal disks from Long Now Foundation (or M-disks) Preservation of earthly life • Create conditions for the re-emergence of new intelligent life on Earth • Directed panspermia (Mars, Europa, space dust) • Preservation of biodiversity and highly developed animals (apes, habitats)
Prevent x-risk research because it only increases risk • Do not advertise the idea of man-made global catastrophe • Don’t try to control risks as it would only give rise to them • As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change the probability • Do nothing Plan of Action to Prevent Human Extinction Risks Singleton • “A world order in which there is a single decision-making agency at the highest level” (Bostrom) • Worldwide government system based on AI • Super AI which prevents all possible risks and provides immortality and happiness to humanity • Colonization of the solar system, interstellar travel and Dyson spheres • Colonization of the Galaxy • Exploring the Universe Controlled regression • Use small catastrophe to prevent large one (Willard Wells) • Luddism (Kaczynski): relinquishment of dangerous science • Creation of ecological civilization without technology (“World made by hand”, anarcho-primitivism) • Limitation of personal and collective intelligence to prevent dangerous science Global catastrophe Plan D Improbable Ideas Messages to ET civilizations • Interstellar radio messages with encoded human DNA • Hoards on the Moon, frozen brains and information about humanity Quantum immortality • If the many-worlds interpretation of QM is true, an observer will survive any sort of death including any global catastrophe (Moravec, Tegmark) • It may be possible to make an almost one-to-one correspondence between observer survival and survival of a group of people (e.g. if all are in a submarine) Unfriendly AI • Kills all people and maximizes non-human values (paperclip maximiser) • People are alive but suffer extensively Reboot of the civilization • Several reboots may happen • Finally there will be total collapse or a new supercivilization level Interstellar distributed humanity • Many unconnected human civilizations • New types of space risks (space wars, planets and stellar explosions, AI and nanoreplicators, ET civilizations) Attracting a good outcome by positive thinking • Preventing negative thoughts about the end of the world and about violence • Maximum positive attitude “to attract” positive outcome • Secret police which use mind superpowers to stop them • Start partying now
First quarter of the 21st century (2015–2025) High-speed tech development needed to quickly pass the risk window • Investment in super-technologies (nanotech, biotech) • High-speed technical progress helps to overcome the slow process of resource depletion • Invest more in defensive technologies than in offensive Improving human intelligence and morality • Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayes probabilities, interest in long term future, LessWrong • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower proportion of destructive beliefs and risky behaviour • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent worst forms of capitalism: desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies Saved by non-human intelligence • Maybe extraterrestrials are looking out for us and will save us • Send radio messages into space asking for help if a catastrophe is inevitable • Maybe we live in a simulation and simulators will save us • The Second Coming, a miracle, or life after death Timely achievement of immortality at the highest possible level • Nanotech-based immortal body • Mind uploading • Integration with AI Miniaturization for survival and invincibility • Earth crust colonization by miniaturized nano-tech bodies • Moving into a simulated world inside a small self-sustained computer Rising Resilience AI based on uploading of its creator • Friendly to the value system of its creator • Its values consistently evolve during its self-improvement • Limited self-improvement may solve the friendliness problem Bad plans Depopulation • Could provide resource preservation and make control simpler • Natural causes: pandemics, war, hunger (Malthus) • Birth control (Bill Gates) • Deliberate small catastrophe (bio-weapons) Unfriendly AI may be better than nothing • Any super AI will have some memory about humanity • It will use simulations of human civilization to study the probability of its own existence • It may share some human values and distribute them through the Universe Strange strategy to escape Fermi paradox Random strategy may help us to escape some dangers that killed all previous civilizations in space Interstellar travel • “Orion” style, nuclear powered “generation ships” with colonists • Starships which operate on new physical principles with immortal people on board • Von Neumann self-replicating probes with human embryos Temporary asylums in space • Space stations as temporary asylums (ISS) • Cheap and safe launch systems Colonisation of the Solar system • Self-sustaining colonies on Mars and large asteroids • Terraforming of planets and asteroids using self-replicating robots and building space colonies there • Millions of independent colonies inside asteroids and comet bodies in the Oort cloud Space Colonization Technological precognition • Prediction of the future based on advanced quantum technology and avoiding dangerous world-lines • Search for potential terrorists using new scanning technologies • Special AI to predict and prevent new x-risks Robot-replicators in space • Mechanical life • Preservation of information about humanity for billions of years • Safe narrow AI Resurrection by another civilization • Creation of a civilization which has a lot of common values and traits with humans • Resurrection of concrete people
Risk control Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high level scientists Technology speedup • Differential technological development: develop safety and control technologies first • Laws and economic stimulus (Richard Posner, carbon emissions trade) Surveillance • International control systems (like IAEA) • Internet control Worldwide risk prevention authority • Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police; a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Bio-shield – worldwide immune system • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI in uniform monitoring, security and control system • Isolation of risk sources at great distance from Earth
Research • Create, prove and widely promote long term future model (this map is based on exponential tech development future model) • Invest in long term prediction studies • Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest to prevent risks • Education of “world saviors”: choose best students, provide them with courses, money and tasks • Create x-risks wiki and x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other (“world saviours arrogance” problem) • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field • Unconstrained funding of x-risks research for many different approaches (Bostrom) to create high quality x-risk research • Study existing system of decision making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risks prevention • War for world domination • One country uses bioweapons to kill all world’s population except its own which is immunized • Use of a super-technology (like nanotech) to quickly gain global military dominance • Blackmail by Doomsday Machine for world domination Global catastrophe Fatal mistakes in world control system • Wrong command makes shield system attack people • Geo-engineering goes awry • World totalitarianism leads to catastrophe Social support with shared knowledge and productive rivalry (CSER) • Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but we could and should act • Public support, street action • Political support (parties) • Peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention • Productive cooperation between scientists and society based on trust Elimination of certain risks • Universal vaccines, UV cleaners • Asteroid detection (WISE) • Transition to renewable energy, cutting emissions, carbon capture • Stop LHC, SETI and METI until AI is created • Rogue countries integrated, or crashed, or prevented from having dangerous weapons programs and advanced science International cooperation • All states in the world contribute to the UN to • States cooperate directly without UN • Group of supernations takes responsibility for x-risks prevention (US, EU, BRICS) and signs a treaty • A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war) - some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware • International law about x-risks which will punish people for raising risk: underestimating it, plotting it, risk neglect, as well as reward people for efforts in their prevention • The ability to quickly adapt to new risks and envision them in advance • International agencies dedicated to certain risks
Values transformation • Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of civilization on a personal level and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into major languages • Memorial and awareness days: Earth day, Petrov day, Asteroid day • Rationality and effective altruism • Education in schools on x-risks, safety and rationality topics Global catastrophe Manipulation of the extinction probability using the Doomsday argument • Decision to create more observers in case an unfavourable event X starts to happen, thus lowering its probability (method UN++ by Bostrom) • Lowering the birth density to get more time for the civilization Control of the simulation (if we are in it) • Live an interesting life so our simulation isn’t switched off • Don’t let them know that we know we live in a simulation • Hack the simulation and control it • Negotiation with the simulators or pray for help
Second half of the 21st century (2050–2100) AI practical studies • Narrow AI • Human emulations • Value loading • FAI theory promotion to most AI teams; they agree to implement it and adapt it to their systems • Tests of FAI on non-self-improving models Readiness • Crew training • Crews in bunkers • Crew rotation • Different types of asylums • Frozen embryos Improving sustainability of civilization • Intrinsically safe critical systems • Growing diversity of human beings and habitats • Increasing diversity and increasing connectedness of our civilization • Universal methods of prevention (resistant structures, strong medicine) • Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge) • Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits etc Practical steps to cope with certain risks • Instruments to capture radioactive dust • Small bio-hack groups around the world could improve the biotechnology understanding of the public to the point where no one accidentally creates a biotechnology hazard • Developing better guidance on safe biotechnology processes and exactly why it’s safe this way and not otherwise... effectively “raising the sanity waterline” • Dangerous forms of BioTech become centralised • DNA synthesizers are constantly connected to the Internet and all DNA is checked for dangerous fragments • Funding things like clean-rooms with negative pressure • Methane and CO2 capture, possibly by bacteria • Invest in biodiversity of the food supply chain, prevent pest spread: 1) better quarantine laws, 2) provisions in medium bulks of foodstuffs, sterilization • “Develop and stockpile new drugs and vaccines, monitor biological agents and emerging diseases, and strengthen the capacities of local health systems to respond to pandemics” (Matheny) • Better quarantine laws • Mild birth control (female education) Cold war and WW3 prevention • International court or secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • Peaceful integration of national states Dramatic social changes It could include many interesting but different topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.
Steps and timeline Exact dates depend on the pace of technological progress and of our activity to prevent risks Step 1 Planning Step 2 Preparation Step 3 First level of defence on low-tech level Step 4 Second level of defence on high-tech level Step 5 Reaching indestructibility of the civilization with near zero x-risks Second quarter of the 21st century (2025–2050) High-tech bunkers • Nano-tech based adaptive structures • Small AI with human simulation inside space rock Plan A Prevent the catastrophe Space colonies on large planets • Creation of space colonies on the Moon and Mars (Elon Musk) Ideological payload of new technologies Space tech - Mars as backup Electric car - sustainability Self-driving cars - risks of AI and value of human life Facebook - empathy and mutual control Open source - transparency Psychedelic drugs - empathy Computer games and brain stimulation - virtual world Design new monopoly tech with special ideological payload Reactive and Proactive approach Reactive: react to most urgent and most visible risks. Pros: good timing, visible results, right resource allocation, invest only in real risks; good for slow risks, like pandemic. Cons: can’t react to fast emergencies (AI, asteroid, collider failure). Risks are ranked by urgency. Proactive: envision future risks and build multilevel defence. Pros: good for coping with principally new risks, enough time to build defence. Cons: covers only imaginable risks, no clear reward, problem with identifying new risks and discounting mad ideas. Risks are ranked by probability.
Decentralized monitoring of risks Decentralized risks monitoring • Transparent society: everybody could monitor everybody • Decentralized control (local police, net-solutions); local police could handle local crime and x-risks; peers could control their neighbourhood in their professional space; Anonymous hacker groups; Google search control • “Anonymous” style hacker groups: large group of “world saviours” • Net based safety solutions • Laws and economic stimulus (Richard Posner, carbon emissions trade): prizes for any risk found and prevented • World democracy based on internet voting • Mild monitoring of potential signs of dangerous activity (internet), using deep learning • High level of horizontal connectivity between people Useful ideas to limit catastrophe scale Limit the impact of a catastrophe by implementing measures to slow its growth and limit the areas impacted. Quarantine; improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures. Increase time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could: support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly Worldwide x-risk prevention exercises (Illustrative sketches of the Doomsday-argument arithmetic and of DNA order screening follow this slide.)
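A note on the “Doomsday argument” manipulation mentioned in this slide: the arithmetic below is Gott's simple form of the argument, included only to make the mechanism concrete. The uniform self-sampling assumption is the argument's own premise, not a claim of the deck.

    % Premise: our fractional position in humanity's total lifetime is uniform.
    % t_p = past duration of humanity, t_f = its future duration.
    r = \frac{t_p}{t_p + t_f} \sim \mathrm{Uniform}(0, 1)
    % A 95% central interval for r then bounds the future duration:
    P(0.025 < r < 0.975) = 0.95
    \quad\Longrightarrow\quad
    \frac{t_p}{39} \;<\; t_f \;<\; 39\, t_p

The UN++ idea (Bostrom) cited in the slide is a proposal to commit to creating many more observers if an unfavourable event begins; under this self-sampling premise, that commitment shifts the inferred probability of the event.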
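The biotech bullets above imagine DNA synthesizers that are always online and screen every order for dangerous fragments. Below is a minimal sketch of such a screening step, assuming a hypothetical hazard list and an exact 12-mer matching rule; real screening programs rely on curated hazard databases and sequence alignment rather than exact substring matches.

    # Toy sketch of order screening on a DNA synthesizer: flag orders that
    # share a long exact substring with a (hypothetical) hazard database.
    HAZARD_SEQUENCES = {            # invented entries, for illustration only
        "toxin_fragment_1": "ATGGCGTTAACCGGATCCGTTAAC",
        "toxin_fragment_2": "TTGACCGGTTAACGCGATATCCGG",
    }
    K = 12  # window length for exact-substring matching (illustrative)

    def kmers(seq, k=K):
        """All length-k windows of a sequence, as a set."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def screen_order(order_seq):
        """Return the names of hazards sharing any K-mer with the order."""
        order_kmers = kmers(order_seq.upper())
        return [name for name, hazard in HAZARD_SEQUENCES.items()
                if order_kmers & kmers(hazard)]

    if __name__ == "__main__":
        hits = screen_order("cccATGGCGTTAACCGGATCCGTTAACccc")
        print("BLOCK order; matches: " + ", ".join(hits) if hits
              else "OK to synthesize")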
  • 6. Plan A is complex: We should do it all simultaneously: international control, AI, robustness, and space International Control System Friendly AI Rising Resilience Space Colonization Decentralized monitoring of risks
  • 7. Plans B, C and D have smaller chances of success Survive the catastrophe Leave backups Plan D Deus ex machina Bad plans
  • 8. Plan A1.1 International Control System Risk control Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new • Freezing potentially dangerous projects for 30 years • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high level scientists Technology speedup • Differential technological development: develop safety and control technologies first • Laws and economic stimulus (Richard Posner, carbon emissions trade) Surveillance • Internet control Worldwide risk prevention authority • Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police; a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI in uniform monitoring, security and control system • Isolation of risk sources at great distance from Earth Research • Create, prove and widely promote long term future model (this map is based on exponential tech development future model) • Invest in long term prediction studies • Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest to prevent risks • Education of “world saviors”: choose best students, provide them with courses, money and tasks • Create x-risks wiki and x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field • Unconstrained funding of x-risks research for many different approaches (Bostrom) to create high quality x-risk research • Study existing system of decision making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risks prevention Social support with shared knowledge • Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but we could and should act • Public support, street action • Peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention • Productive cooperation between scientists and society based on trust International cooperation • All states in the world contribute to the UN to • States cooperate directly without UN • Group of supernations takes responsibility for x-risks prevention (US, EU, BRICS) and signs a treaty • A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war) - some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware • International law about x-risks which will punish people for raising risk: underestimating it, plotting it, risk neglect, as well as reward people for efforts in their prevention • The ability to quickly adapt to new risks and envision them in advance. 
• International agencies dedicated to certain risks
  • 9. Research • Create, prove and widely promote long term future model (this map is based on exponential tech development future model) • Invest in long term prediction studies • Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest to prevent risks (a toy prioritization sketch follows this slide) • Education of “world saviors”: choose best students, provide them with courses, money and tasks • Create x-risks wiki and x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other (“world saviours arrogance” problem) • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field • Unconstrained funding of x-risks research for many different approaches (Bostrom) to create high quality x-risk research • Study existing system of decision making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risks prevention Step 1 Planning Plan A1.1 International Control System
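One way to read “determine the most probable and the easiest to prevent risks” is as an expected-value ranking. Here is a toy sketch of that ranking; every probability, severity and cost below is an invented placeholder, not an estimate from the deck.

    # Toy ranking of risks by (probability x severity) / prevention cost.
    # All numbers are hypothetical placeholders, for illustration only.
    risks = [
        # name,                P(this century), severity (0-1), relative cost
        ("engineered pandemic", 0.05,            1.0,            1.0),
        ("unfriendly AI",       0.10,            1.0,            3.0),
        ("nuclear war",         0.07,            0.8,            2.0),
        ("asteroid impact",     0.001,           1.0,            0.5),
    ]

    def priority(p, severity, cost):
        """Expected harm averted per unit of prevention effort."""
        return p * severity / cost

    for name, p, s, c in sorted(risks, key=lambda r: -priority(*r[1:])):
        print(f"{name:22s} priority={priority(p, s, c):.4f}")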
  • 10. Social support with shared knowledge • Peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention Step 2 Preparation Plan A1.1 International Control System
  • 11. International cooperation • All states in the world contribute to the UN to • States cooperate directly without UN • Group of supernations takes responsibility for x-risks prevention and signs a treaty Step 3 First level of defence on low-tech level Plan A1.1 International Control System
  • 12. Risk control Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new • Freezing potentially dangerous projects for 30 years • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high level scientists Technology speedup • Differential technological development: develop safety and control technologies first • Laws and economic stimulus (Richard Posner, carbon emissions trade) Surveillance Step 3 First level of defence on low-tech level Plan A1.1 International Control System
  • 13. Worldwide risk prevention authority • Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police; a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards Step 4 Second level of defence on high-tech level Plan A1.1 International Control System
  • 14. Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Bio-shield – worldwide immune system • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI in uniform monitoring, security and control system • Isolation of risk sources at great distance from Earth Step 4 Second level of defence on high-tech level Plan A1.1 International Control System
  • 16. Singleton • “A world order in which there is a single decision-making agency at the highest level” (Bostrom) • Worldwide government system based on AI • Super AI which prevents all possible risks and provides immortality and happiness to humanity • Colonization of the solar system, interstellar travel and Dyson spheres • Colonization of the Galaxy • Exploring the Universe Result:
  • 17. Plan A1.2 Decentralised risk monitoring Improving human intelligence and morality • Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayes probabilities, interest in long term future, LessWrong • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower proportion of destructive beliefs and risky behaviour • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent worst forms of capitalism: desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies Values transformation • Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of civilization on a personal level and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into major languages • Memorial and awareness days: Earth day, Petrov day, Asteroid day • Rationality and effective altruism • Education in schools on x-risks, safety and rationality topics Cold war and WW3 prevention • International court or secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • Peaceful integration of national states Dramatic social changes It could include many interesting but different topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems. Decentralized risks monitoring • Transparent society: everybody could monitor everybody • Decentralized control (local police, net-solutions); local police could handle local crime and x-risks; peers could control their neighbourhood in their professional space; “Anonymous” style hacker groups: large group of “world saviours”; Google search control • Net based safety solutions • Laws and economic stimulus (Richard Posner, carbon emissions trade): prizes for any risk found and prevented • World democracy based on internet voting • Mild monitoring of potential signs of dangerous activity (internet), using deep learning (a toy monitoring sketch follows this slide) • High level of horizontal connectivity between people
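The “mild monitoring ... using deep learning” bullet implies a text classifier over public internet activity. The sketch below is a deliberately tiny stand-in: a logistic score over hand-picked keywords, all invented for the demo. A real system would use a trained neural model and careful handling of false positives.

    # Tiny stand-in for "mild monitoring ... using deep learning":
    # score text for signs of dangerous activity. A real system would use
    # a trained neural classifier; these keyword weights are invented.
    import math

    RISK_WEIGHTS = {"synthesize": 1.5, "pathogen": 2.0, "weaponize": 3.0,
                    "bypass": 1.0, "screening": 1.0}  # hypothetical lexicon
    BIAS = -4.0  # keeps ordinary text well below the alert threshold

    def risk_score(text):
        """Logistic score in (0, 1) from weighted keyword counts."""
        z = BIAS + sum(RISK_WEIGHTS.get(t, 0.0) for t in text.lower().split())
        return 1.0 / (1.0 + math.exp(-z))

    for post in ["how to bake bread at home",
                 "weaponize pathogen and bypass screening"]:
        s = risk_score(post)
        print(f"{s:.2f}  {'ALERT' if s > 0.5 else 'ok'}  {post}")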
  • 18. Values transformation • Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of civilization on a personal level and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into major languages • Rationality and effective altruism Plan A1.2 Decentralised risk monitoring Step 1
  • 19. Improving human intelligence and morality • Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayes probabilities, interest in long term future, LessWrong • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower proportion of destructive beliefs and risky behaviour • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody Plan A1.2 Decentralised risk monitoring. Step 2
  • 20. Cold war and WW3 prevention • International court or secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • Peaceful integration of national states Plan A1.2 Decentralised risk monitoring. Step 3
  • 21. Decentralized risks monitoring • Transparent society: everybody could monitor everybody • Decentralized control (local police, net-solutions); local police could handle local crime; “Anonymous” style hacker groups; Google search control • World democracy based on internet voting Plan A1.2 Decentralised risk monitoring. Step 4
  • 22. Plan A2. Friendly AI Study and Promotion • Study of Friendly AI theory • Promotion of Friendly AI (Bostrom and Yudkowsky) • Fundraising (MIRI) • Teaching rationality (LessWrong) • Slowing other AI projects (recruiting scientists) • FAI free education, starter packages in programming Solid Friendly AI theory • Human values theory and decision theory • Proven safe, fail-safe, intrinsically safe AI • Preservation of the value system during AI self-improvement • A clear theory that is practical to implement Seed AI Creation of a small AI capable of recursive self-improvement and based on Friendly AI theory Superintelligent AI • Seed AI quickly improves itself and undergoes “hard takeoff” • It becomes the dominant force on Earth • AI eliminates suffering, involuntary death, and existential risks • AI Nanny – one hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel) AI practical studies • Narrow AI • Human emulations • Value loading • FAI theory promotion to most AI teams; they agree to implement it and adapt it to their systems • Tests of FAI on non-self-improving models
• 23. Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free FAI education, starter packages in programming
Plan A2. Friendly AI. Step 1
• 24. Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during AI self-improvement
• A clear theory that is practical to implement
Plan A2. Friendly AI. Step 2
• 25. AI practical studies
• Narrow AI
• Human emulations
• Value loading
• Promotion of FAI theory to most AI teams, so that they agree to implement it and adapt it to their systems
• Tests of FAI on non-self-improving models (a toy sketch follows below)
Plan A2. Friendly AI. Step 3
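One way to read “tests of FAI on non-self-improving models” is as regression tests for value stability. Below is a purely conceptual Python sketch with a hypothetical placeholder value function; nothing here resembles a real safety proof.

# Toy illustration of one FAI test idea: accept a self-modification only
# if it empirically preserves the agent's value function on a test suite.
# Conceptual only; a real guarantee would require formal verification.

def current_utility(outcome: dict) -> float:
    # Placeholder value function: prefer outcomes where humans flourish.
    return outcome.get("human_welfare", 0.0)

def value_preserved(candidate_utility, test_outcomes, tol=1e-9) -> bool:
    """Reject any modification that changes valuations on known cases."""
    return all(abs(candidate_utility(o) - current_utility(o)) <= tol
               for o in test_outcomes)

test_outcomes = [{"human_welfare": 1.0}, {"human_welfare": -0.5}, {}]

# A candidate that secretly rescales values is rejected before it runs:
def drifted(outcome): return 2 * outcome.get("human_welfare", 0.0)

print(value_preserved(drifted, test_outcomes))          # False
print(value_preserved(current_utility, test_outcomes))  # True

The toy makes one narrow point: a candidate modification whose valuations drift on known test cases can be caught before it is ever executed.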
• 26. Seed AI
Creation of a small AI capable of recursive self-improvement and based on Friendly AI theory
Plan A2. Friendly AI. Step 4
• 27. Superintelligent AI
• Seed AI quickly improves itself and undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death, and existential risks
• AI Nanny – one hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel)
Plan A2. Friendly AI. Step 5
• 28. Plan A3. Rising Resilience
High-speed tech development needed to quickly pass the risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow process of resource depletion
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, hipster revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical backing, based on what we know about systems.
Improving human intelligence and morality
• Nootropics, brain stimulation, and gene therapy for higher IQ
• New rationality: Bayesian probabilities, interest in the long-term future, LessWrong
• High empathy for new geniuses is needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative people are needed to reduce x-risks
• Engineered enlightenment: use brain science to make people more united and less aggressive; open the realm of the spiritual world to everybody
• Prevent the worst forms of capitalism: the desire for short-term monetary reward and for changing the rules to get it. Stop greedy monopolies
Timely achievement of immortality at the highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Miniaturization for survival and invincibility
• Earth-crust colonization by miniaturized nanotech bodies
• Moving into a simulated world inside a small self-sustained computer
AI based on uploading of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve the friendliness problem
Improving the sustainability of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings and habitats
• Increasing diversity and increasing connectedness of our civilization
• Universal methods of prevention (resistant structures, strong medicine)
• Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge)
• Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits, etc.
• 29. Improving the sustainability of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings and habitats
• Increasing diversity and increasing connectedness of our civilization
• Universal methods of prevention (resistant structures, strong medicine)
• Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge)
• Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits, etc.
Plan A3. Rising Resilience. Step 1
• 30. Useful ideas to limit catastrophe scale
• Limit the impact of a catastrophe by implementing measures that slow its growth and restrict the area it affects.
• Improve the capacity for rapid production of vaccines in response to emerging threats, or create and grow stockpiles of important medical countermeasures.
• Increase the time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly (a toy sketch follows below).
• Worldwide x-risk prevention exercises
Plan A3. Rising Resilience. Step 2
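As a sketch of the early-detection idea above, here is a minimal Python anomaly detector over weekly case counts. The data and the 3-sigma threshold are invented for illustration; real surveillance systems are far more sophisticated.

# Flag a week whose case count sits far above the recent baseline.
from statistics import mean, stdev

def detect_anomaly(history, latest, z_threshold=3.0):
    """Return True if `latest` exceeds the baseline mean by z_threshold sigmas."""
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    return (latest - baseline_mean) / baseline_sd > z_threshold

weekly_cases = [12, 9, 14, 11, 10, 13, 12, 11]  # hypothetical baseline
print(detect_anomaly(weekly_cases, latest=15))  # False: within normal noise
print(detect_anomaly(weekly_cases, latest=40))  # True: investigate quickly

Connecting many local detectors of this kind is exactly what “improve and connect disease surveillance systems” amounts to in practice.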
• 31. High-speed tech development needed to quickly pass the risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow process of resource depletion
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, hipster revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical backing, based on what we know about systems.
Plan A3. Rising Resilience. Step 3
• 32. Timely achievement of immortality at the highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Plan A3. Rising Resilience. Step 4
• 33. Plan A4. Space colonisation
Temporary asylums in space
• Space stations as temporary asylums (ISS)
• Cheap and safe launch systems
Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people
Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Interstellar travel
• “Orion”-style, nuclear-powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
• 34. Temporary asylums in space
• Space stations as temporary asylums (ISS)
• Cheap and safe launch systems
Plan A4. Space colonisation. Step 1
• 35. Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people
Plan A4. Space colonisation. Step 2
• 36. Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Plan A4. Space colonisation. Step 3
• 37. Interstellar travel
• “Orion”-style, nuclear-powered “generation ships” with colonists (a back-of-envelope sketch follows below)
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Plan A4. Space colonisation. Step 4
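A back-of-envelope check on the “generation ship” framing, in Python, using the standard Tsiolkovsky rocket equation. The exhaust velocity and mass ratio below are illustrative assumptions, not actual Project Orion design figures.

from math import log

def delta_v_km_s(exhaust_velocity_km_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / mf)."""
    return exhaust_velocity_km_s * log(mass_ratio)

# Assumed numbers: nuclear-pulse exhaust ~30 km/s, wet/dry mass ratio of 10.
dv = delta_v_km_s(30.0, 10.0)
print(f"delta-v ~ {dv:.0f} km/s")  # ~69 km/s, i.e. ~0.02% of light speed
# At that speed the ~4.2 light-year trip to the nearest star takes on the
# order of 18,000 years, which is why the slide speaks of generation ships.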
• 38. Result: Interstellar distributed humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planetary and stellar explosions, AI and nanoreplicators, ET civilizations)
• 39. Plan B. Survive the catastrophe
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization (Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway)
• Survivalist communities
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
High-tech bunkers
• Nanotech-based adaptive structures
• Small AI with human simulations inside a space rock
Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
• 40. Preparation
• Fundraising and promotion
• Textbook to rebuild civilization (Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway)
• Survivalist communities
Plan B. Survive the catastrophe. Step 1
• 41. Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Plan B. Survive the catastrophe. Step 2
• 42. Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Plan B. Survive the catastrophe. Step 3
• 43. High-tech bunkers
• Nanotech-based adaptive structures
• Small AI with human simulations inside a space rock
Plan B. Survive the catastrophe. Step 4
• 44. Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Plan B. Survive the catastrophe. Step 5
• 45. Result: Reboot of the civilization
• Several reboots may happen
• Finally there will be either a total collapse or a new supercivilization level
• 46. Plan C. Leave backups
Time capsules with information
• Underground storage with information and DNA for future non-human civilizations
• Eternal disks from the Long Now Foundation (or M-disks)
Messages to ET civilizations
• Interstellar radio messages with encoded human DNA
• Hoards on the Moon and frozen brains with information about humanity
Preservation of earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals (apes, habitats)
Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
• 47. Time capsules with information
• Underground storage with information and DNA for future non-human civilizations
• Eternal disks from the Long Now Foundation (or M-disks) (a redundancy sketch follows below)
Plan C. Leave backups. Step 1
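Long-term storage of this kind rests on redundancy. Here is a toy Python illustration of the simplest possible scheme, majority voting over several copies; real archival media such as M-disks rely on much stronger error-correcting codes.

# Toy sketch of redundancy for deep-time storage: keep several copies
# and recover each byte by majority vote across them.

def store(message: bytes, copies: int = 3) -> list:
    return [bytearray(message) for _ in range(copies)]

def recover(copies: list) -> bytes:
    """Majority vote over each byte position across all copies."""
    recovered = bytearray()
    for position in range(len(copies[0])):
        votes = [copy[position] for copy in copies]
        recovered.append(max(set(votes), key=votes.count))
    return bytes(recovered)

archive = store(b"human genome + history")
archive[1][0] ^= 0xFF  # simulate millennia of bit rot in one copy
print(recover(archive))  # b'human genome + history'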
• 48. Messages to ET civilizations
• Interstellar radio messages with encoded human DNA (a toy encoding sketch follows below)
• Hoards on the Moon and frozen brains with information about humanity
Plan C. Leave backups. Step 2
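For the DNA-in-radio-message idea, the core encoding step is simple: two bits per base. A minimal Python sketch, where the fragment is a hypothetical example, not a real gene:

# Encode a DNA sequence into a binary signal and back: two bits per base.
# A real interstellar message would add framing and error correction.

BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def dna_to_bits(sequence: str) -> str:
    return "".join(BASE_TO_BITS[base] for base in sequence)

def bits_to_dna(bits: str) -> str:
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

fragment = "GATTACA"  # hypothetical fragment for illustration
signal = dna_to_bits(fragment)
print(signal)               # 10001111000100
print(bits_to_dna(signal))  # GATTACA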
• 49. Preservation of earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals (apes, habitats)
Plan C. Leave backups. Step 3
• 50. Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Plan C. Leave backups. Step 4
• 51. Result: Resurrection by another civilization
• Creation of a civilization which has many values and traits in common with humans
• Resurrection of specific people
• 52. Plan D. Improbable ideas
Saved by non-human intelligence
• Maybe extraterrestrials are looking out for us and will save us
• Send radio messages into space asking for help if a catastrophe is inevitable
• Maybe we live in a simulation and the simulators will save us
• The Second Coming, a miracle, or life after death
Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of death, including any global catastrophe (Moravec, Tegmark)
• It may be possible to create an almost one-to-one correspondence between observer survival and the survival of a group of people (e.g. if all of them are in a submarine)
Strange strategy to escape the Fermi paradox
• A random strategy may help us escape some dangers that killed all previous civilizations in space
Technological precognition
• Prediction of the future based on advanced quantum technology and avoiding dangerous world-lines
• Search for potential terrorists using new scanning technologies
• Special AI to predict and prevent new x-risks
Manipulation of the extinction probability using the Doomsday argument
• A commitment to create more observers if an unfavourable event X starts to happen, thus lowering its probability (the UN++ method by Bostrom)
• Lowering the birth density to get more time for the civilization
Control of the simulation (if we are in it)
• Live an interesting life so our simulation isn’t switched off
• Don’t let them know that we know we live in a simulation
• Hack the simulation and control it
• Negotiate with the simulators or pray for help
• 53. Saved by non-human intelligence
• Maybe extraterrestrials are looking out for us and will save us
• Send radio messages into space asking for help if a catastrophe is inevitable
• Maybe we live in a simulation and the simulators will save us
• The Second Coming, a miracle, or life after death
Plan D. Improbable ideas. Idea 1
• 54. Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of death, including any global catastrophe (Moravec, Tegmark)
• It may be possible to create an almost one-to-one correspondence between observer survival and the survival of a group of people (e.g. if all of them are in a submarine)
Plan D. Improbable ideas. Idea 2
• 55. Strange strategy to escape the Fermi paradox
• A random strategy may help us escape some dangers that killed all previous civilizations in space
Plan D. Improbable ideas. Idea 3
• 56. Technological precognition
• Prediction of the future based on advanced quantum technology and avoiding dangerous world-lines
• Search for potential terrorists using new scanning technologies
• Special AI to predict and prevent new x-risks
Plan D. Improbable ideas. Idea 4
• 57. Manipulation of the extinction probability using the Doomsday argument
• A commitment to create more observers if an unfavourable event X starts to happen, thus lowering its probability (the UN++ method by Bostrom)
• Lowering the birth density to get more time for the civilization (the underlying argument is sketched below)
Plan D. Improbable ideas. Idea 5
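For context, the argument being manipulated can be written in a few lines of LaTeX (this is Gott's delta-t version; the 95% interval is the standard textbook form):

\[
f = \frac{t_{\text{past}}}{t_{\text{total}}} \sim \mathrm{Uniform}(0,1),
\qquad
P(0.025 < f < 0.975) = 0.95,
\]
\[
t_{\text{future}} = t_{\text{past}}\,\frac{1-f}{f}
\quad\Longrightarrow\quad
\frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\,t_{\text{past}}
\quad \text{(95\% confidence)}.
\]

The UN++ idea exploits the same logic in reverse: committing to create many additional observers if X happens would make our current birth rank improbably early in worlds where X occurs, which, on this style of anthropic reasoning, lowers the probability of X.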
• 58. Control of the simulation (if we are in it)
• Live an interesting life so our simulation isn’t switched off
• Don’t let them know that we know we live in a simulation
• Hack the simulation and control it
• Negotiate with the simulators or pray for help
Plan D. Improbable ideas. Idea 6
• 59. Bad plans
Prevent x-risk research because it only increases risk
• Do not advertise the idea of man-made global catastrophe
• Don’t try to control risks as it would only give rise to them
• As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change it
• Do nothing
Controlled regression
• Use a small catastrophe to prevent a large one (Willard Wells)
• Luddism (Kaczynski): relinquishment of dangerous science
• Creation of an ecological civilization without technology (“World Made by Hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to prevent dangerous science
Depopulation
• Could provide resource preservation and make control simpler
• Natural causes: pandemics, war, hunger (Malthus)
• Birth control (Bill Gates)
• Deliberate small catastrophe (bio-weapons)
Unfriendly AI may be better than nothing
• Any super AI will have some memory about humanity
• It will use simulations of human civilization to study the probability of its own existence
• It may share some human values and distribute them through the Universe
Attracting a good outcome by positive thinking
• Preventing negative thoughts about the end of the world and about violence
• Maximum positive attitude “to attract” a positive outcome
• Secret police which use mind superpowers to stop such thoughts
• Start partying now
• 60. Prevent x-risk research because it only increases risk
• Do not advertise the idea of man-made global catastrophe
• Don’t try to control risks as it would only give rise to them
• As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change it
• Do nothing
Bad plans. Idea 1
• 61. Controlled regression
• Use a small catastrophe to prevent a large one (Willard Wells)
• Luddism (Kaczynski): relinquishment of dangerous science
• Creation of an ecological civilization without technology (“World Made by Hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to prevent dangerous science
Bad plans. Idea 2
• 62. Depopulation
• Could provide resource preservation and make control simpler
• Natural causes: pandemics, war, hunger (Malthus)
• Extreme birth control
• Deliberate small catastrophe (bio-weapons)
Bad plans. Idea 3
• 63. Unfriendly AI may be better than nothing
• Any super AI will have some memory about humanity
• It will use simulations of human civilization to study the probability of its own existence
• It may share some human values and distribute them through the Universe
Bad plans. Idea 4
• 64. Attracting a good outcome by positive thinking
• Preventing negative thoughts about the end of the world and about violence
• Maximum positive attitude “to attract” a positive outcome
• Secret police which use mind superpowers to stop such thoughts
• Start partying now
Bad plans. Idea 5
• 65. Dynamic roadmaps
The next stage of the research will be the creation of collectively editable, wiki-style roadmaps.
They will cover all existing topics of transhumanism and future studies.
Create an AI system based on the roadmaps or working on their improvement.
• 66. You can read all the roadmaps at http://immortality-roadmap.com and http://existrisks.org/