Near-term AI safety
Alexey Turchin
Moscow, “Kocherga club”
October 14, 15.00
Could AI cause human
extinction in the next 10 years?
Main ideas:
• The neural net boom started in 2012.
• MIRI has increased its estimate of the probability of AI arriving before 2035.
• Key measures of neural net performance have been doubling roughly every year
for the last 5 years.
• NNs reached superhuman performance in some important areas in
2016-2017.
• Most areas are 5-10 doublings away from human level, which
puts human-level AI in the 2020s, perhaps as early as 2023.
• AI could become a global risk before superintelligence or even AGI,
and thus earlier than usually assumed.
• 5 independent methods of AI timing prediction produce the same result,
which is surprising.
Which level of AI
is dangerous?
• Superintelligence is not a necessary condition for a
human extinction event.
• Omnicide is a computational task of a certain
complexity.
• An AI able to solve the omnicide task is dangerous.
• We will call such a system a dangerously powerful AI.
Which level of AI
is dangerous?
• AI power is growing.
• Availability of powerful AIs is growing.
• The complexity of omnicide is diminishing as new technologies appear.
• Example: a narrow AI able to calculate DNA combinations for dangerous viruses.
[Chart: over time, extinction “task” complexity (green) falls while AI power (blue) rises.]
Which level of AI
is dangerous?
• “Superintelligence is needed to calculate nanotech” is a popular
narrative about AI risk, as in Yudkowsky, 2006.
• Now dangerous technology could be created before
superintelligence, using narrow AI as a helper.
• The number of “bad actors” grows with the total number of actors, as it is roughly a constant share of them.
• AGI and superintelligence still remain risks.
[Chart: over time, extinction “task” complexity (green) falls while AI power (blue) rises.]
Earliest AI arrival timing
cutoff
• If the risk grows linearly, it is about 0.003 % per day,
which is unacceptable even for tomorrow (see the arithmetic sketch below).
• Threshold suggestion: 5 per cent cumulative probability, reached around 2021.
• Exponential growth conjecture: AGI appearance
probability is concentrated at the end of any time
period, while nuclear war probability accumulates linearly (uniform density).
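A quick back-of-envelope check of these numbers, as a Python sketch; the 1.25 %-per-year rate is taken from the survey slide later in the deck:

```python
# Linear-risk arithmetic behind this slide. The annual rate is the
# ~1.25%/year cumulative AGI probability from the Grace et al. poll.
annual_rate = 1.25                       # per cent per year

print(annual_rate / 365)                 # ~0.0034 -> the slide's "0.003% a day"

threshold = 5.0                          # per cent, the suggested cutoff
print(2017 + threshold / annual_rate)    # 2021 -> the slide's threshold year
```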
Two questions about
near-term AI risks
• What is the evidence that catastrophic-level AI
will appear in the next 10 years?
• What kinds of catastrophes could AI create before
reaching the superintelligent stage?
1. Types of evidence
• Surveys: the median is at 2062, but the first 10 per cent is at 2025.
• Hardware: we already have enough hardware for AI, and much more will come soon.
• Neural net performance growth: doubling roughly every year, 5-10 years to human level based on
many metrics.
• Hyperbolic acceleration trends: if they hold, potentially catastrophic instability will begin in
the 2020s.
• Randomness of the moment of AI creation: if the non-linearity of the distribution and the moment
of observation are taken into account, then AGI will appear in the next several years (very speculative).
Surveys:
• K. Grace et al. poll: median time of AGI arrival is 2062.
• But from a risk-analysis standpoint, we need the earliest plausible arrival time.
• The same poll:
6.25 % before 2022
12.5 % before 2027
15 % before 2030
• The growth is almost linear, at about 1.25 per cent a year (checked in the sketch below).
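A minimal check of the near-linearity claim, computing the implied annual rate for each poll point (assuming the poll year is 2017):

```python
# Cumulative probability of AGI by a given year, from the poll above.
points = {2022: 6.25, 2027: 12.5, 2030: 15.0}
for year, pct in points.items():
    rate = pct / (year - 2017)      # implied per-year rate since the 2017 poll
    print(year, round(rate, 2))     # 1.25, 1.25, ~1.15 -> roughly linear
```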
Hardware evidence
Moore’s law
• Even if Moore’s law is dead, it will not stop AI progress now, as
progress in AI now mostly depends on algorithms. In any case, we have, or
will soon have, enough computing power to run AIs comparable to a human brain.
• The real Moore’s law is the falling price of computation; that is the essence
that matters for AI timing.
• Classical semiconductor Moore’s law will probably still allow at least several
doublings in chip performance over the next 5-10 years.
Hardware evidence
Total computational power of the internet
• Not only is the performance of individual chips growing; the total number of
interconnected devices is growing exponentially.
• Total power = (number of devices) x (mean performance of one device); see the estimate below.
• “While world storage capacity doubles every three years, world computing capacity
doubles every year and a half, Hilbert said.” https://www.livescience.com/54094-how-big-is-the-internet.html
• How many devices now? 7 billion cell phones, plus billions of PCs, webcams, and IoT devices.
Probably 20 billion devices in total.
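A rough order-of-magnitude estimate using the formula above; the per-device performance figure is an assumption for illustration, not a number from the slide:

```python
devices = 20e9            # the slide's estimate of connected devices
flops_per_device = 1e10   # assumed mean sustained performance (~10 GFLOPS/device)
print(f"{devices * flops_per_device:.0e} FLOPS total")  # ~2e20 under these assumptions
```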
Hardware evidence
Bitcoin network as an example of powerful
specialised hardware
In 2017 the Bitcoin network reached 3 exahashes per second. https://news.bitcoin.com/why-bitcoin-is-close-to-a-record-breaking-3-exahashes-processing-power/
The network’s power doubling time is around 8 months. If hashes were calculated on ordinary
computers at roughly 12 000 flops per hash, the total power of the Bitcoin network would be about
3.6E22 flops (verified in the sketch below).
But Bitcoin mining uses specialised ASICs (same source above).
However, neural net processing also needs specialised ASICs, just different ones from Bitcoin’s.
One example is Google’s TPU chips for TensorFlow.
This means that, given a sufficient monetary incentive, an equally powerful distributed AI system
could be built in a couple of years.
Markram expected that human brain simulation would require 1 exaflops (10^18). That means
the current blockchain network is computationally equal to 36 000 human brains (though it can’t run
the needed type of calculations). Such computing power is probably enough to run a superintelligence.
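The slide’s arithmetic, reproduced step by step as a sketch; the 12 000-flops-per-hash conversion is the slide’s own rough estimate:

```python
hashrate = 3e18              # 3 exahashes/s, the 2017 Bitcoin network
flops_per_hash = 12_000      # slide's estimate for one hash on an ordinary CPU
total_flops = hashrate * flops_per_hash
print(f"{total_flops:.1e}")          # 3.6e+22 flops, as stated on the slide

brain_flops = 1e18                   # Markram's 1-exaflops brain estimate
print(total_flops / brain_flops)     # ~36 000 human-brain equivalents
```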
Hardware evidence
A self-driving car has to be
near human brain capabilities
• New Nvidia system: “Pegasus can handle 320 trillion operations per second, representing
roughly a 13-fold increase over the calculating power of the current PX 2 line (Tesla 3).” https://cleantechnica.com/2017/10/11/nvidia-shows-off-new-self-driving-car-chips-used-deutsche-post-dhl-others/ The companies stated the system would be capable of Level 5 autonomous driving.
• To be delivered in 2018.
• New processor: 21 bn transistors, 10 000 engineer-years, several bn dollars.
• Nvidia expects GPUs to outperform CPUs 1000-fold by 2025.
• Fully self-driving cars are expected in the 2020s.
• Specialised markets like cars and video games fund excess computation power.
Hardware evidence
Human mind information capacity
could be overestimated
The “Pegasus” system promises Level 5 self-driving autonomy at 320 trillion operations per second:
human-level performance on that task, operating on larger data streams than human brains handle.
The conscious memory of most humans is around 1 GB, which is rather small
compared to the storage capacity of most contemporary computers.
Human working memory is only about 7 items; human typing speed is several bytes a second.
The lowest estimate of the computing power needed to simulate a human brain is around 100
trillion operations per second (10^14). Markram estimated the needed computer at 1 exaflops, or
10^18, which we can treat as a median estimate; there is no upper estimate. (See the comparison sketch below.)
“Power needed to simulate” is not “computational power of the brain”; it is rather a
measure of the inefficiency of simulation.
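A one-line comparison of the numbers on this slide, showing where Pegasus sits between the lower-bound and median brain-simulation estimates:

```python
pegasus = 320e12             # Pegasus: 320 trillion ops/s
low, median = 1e14, 1e18     # brain-simulation estimates from this slide
print(pegasus / low)         # ~3.2x the lower-bound estimate
print(pegasus / median)      # ~0.0003x Markram's median estimate
```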
Hardware evidence
AI-related computational power is growing
even without Moore’s law
Not only the price of computation matters, but also the budgets of AI
research organisations. Budgets were small during the AI
winter; now they have grown hundreds of times.
As the global economy grows, a bigger share of it can be
spent on building computational power. The largest limiting factor
now is energy consumption.
Owning a large computer is expensive, but renting time in the cloud can be
more cost-efficient, as you pay only for the time you use and there is no
downtime. Alternatively, you can earn money by mining during downtime.
Large tech giants like Google and IBM can order specialised
chips (like the TPU) for their software with a turnaround of one to
several months.
Neural net performance evidence
Most metrics are doubling almost every year
• We should count from 2012, when the neural net era started.
• Performance increased: the error rate fell from 27 to 1.5.
• That is an 18-fold improvement in 5.5 years.
• The implied doubling time is about 1.3 years (computed in the sketch below).
Data from “AI Progress Measurement”
https://www.eff.org/ai/metrics
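The doubling-time arithmetic behind the bullets above, as a sketch:

```python
import math

ratio = 27 / 1.5                  # 18-fold error reduction
years = 5.5                       # 2012 to mid-2017
print(years / math.log2(ratio))   # ~1.3 years per doubling
```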
Neural net performance evidence
Most metrics are doubling every year
[Nine further chart slides, each showing a different benchmark metric doubling roughly every year.]
Data from “AI Progress Measurement”
https://www.eff.org/ai/metrics
Neural net performance evidence
Dataset size is critical for neural net performance
“Since 2012, there have been significant advances in representation capabilities of the models and
computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant.
What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing
the clouds of mystery surrounding the relationship between `enormous data' and deep learning. By exploiting
the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the
performance of current vision tasks would change if this data was used for representation learning”. “Once
again we observe that the performance increases logarithmically as the pre-training dataset increases.”
Revisiting Unreasonable Effectiveness of Data in Deep Learning Era
https://arxiv.org/abs/1707.02968
Neural net performance evidence
Human size dataset
1) Humans have existed for hundreds of thousands of years but evolved very slowly,
until they created rich culture about 5000 years ago.
2) Humans brought up by animals behave like animals.
3) Animals taught by humans can learn some language, e.g.
chimps and dogs.
4) The human brain is very robust to injuries and genetic variation. There is no
fragile mechanism inside: neural nets are the ultimate answer to the
nature of human brains.
5) Human dataset: 100 000 hours of video stream with audio, or
something like 100 TB of data (compressed). ImageNet is 100 000
times smaller.
6) Most people don’t “think”; they repeat patterns of their culture, which
sometimes looks like thinking. In the same way, a neural net was trained to
do basic math without understanding it.
Neural net performance evidence
“Human size” dataset
• The biggest success in neural nets came after a very large dataset was introduced:
ImageNet, 1 million images, in 2012.
• In 2016 Google increased the dataset to 300 mln images and got state-of-the-art
performance using a very simple, standard neural net architecture.
• That corresponds to a 300-fold growth in dataset size over 5 years, or a doubling time of less
than 1 year (around 8 months).
• The “human dataset” is equal to 100 000 hours of video (all of a human’s life
experience in childhood).
• The “human dataset” could be estimated as something like 100 billion images.
• Artificial neural net datasets will match the human dataset around 2023: at the current
doubling time of 8 months, that size is about 8 doublings away, i.e. 5-6 years,
that is 2022-2023 (see the sketch below).
• Larger datasets are technically possible, as years of YouTube video are available.
• “If a machine could do a whole bunch of those translations successfully it would
demonstrate that they really understand what’s going on, but I think the sets need to
be about 1000 times bigger than they are now in machine translation for it to work
perfectly” - Geoffrey Hinton, https://www.re-work.co/blog/interview-yoshua-bengio-yann-lecun-geoffrey-hinton
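The dataset extrapolation from the bullets above, as a sketch; the 100-billion-image target is the slide’s own estimate of the “human dataset”:

```python
import math

current, target = 300e6, 100e9            # JFT-300M vs the assumed "human dataset"
doublings = math.log2(target / current)   # ~8.4 doublings
doubling_time = 8 / 12                    # 8 months, per the slide
print(2017 + doublings * doubling_time)   # ~2022.6, i.e. 2022-2023
```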
Neural net performance evidence
Human size dataset
• Performance has grown logarithmically with dataset size: 13 IOU units (from 63 to
76; the maximum is 100, total recognition) as the dataset grew
from 10 to 300 mln elements.
• IOU, “intersection over union”, measures the overlap between predicted
object boundaries and the actual ones, from 0 to 100 per cent.
• If we extrapolate (sketch below), a 100 bn dataset would give about 97 IOU, very close
to the absolute maximum (while a 10 bn dataset would give only 89 IOU).
• This supports the intuition that a “human size” dataset of 100 bn images is
needed to get human-level performance.
• Google plans to test larger datasets.
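The extrapolation itself, as a sketch: a log-linear fit through the two data points given on the slide.

```python
import math

x1, y1 = 10e6, 63       # 10 mln images -> 63 IOU
x2, y2 = 300e6, 76      # 300 mln images -> 76 IOU
slope = (y2 - y1) / math.log10(x2 / x1)   # ~8.8 IOU per tenfold data increase

for n in (10e9, 100e9):
    iou = y2 + slope * math.log10(n / x2)
    print(f"{n:.0e} images -> {iou:.0f} IOU")  # ~89 and ~98, close to the slide's 89 and 97
```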
Neural net performance evidence
Number of parameters in the neural net
• The size (number of parameters, or connections, roughly analogous to synapses) of Google’s
2012 cat recogniser was 1 billion.
• Later, most private research was done on graphics cards, and the number of parameters was
limited by GPU memory, which recently reached 12 GB. For
example, Karpathy’s famous RNN had only 3 million parameters but was able to
generate grammatically correct text.
• The latest work by Google (2016) created a neural net with 130 billion parameters,
now used in Google Translate. They showed that quality grows with the size of
the net, though some diminishing returns are observed. https://arxiv.org/pdf/1701.06538.pdf
• So the number of parameters in Google’s best neural nets grew about 100-fold in 5
years, and a trillion-parameter net is planned soon.
• The human brain has around 150 trillion synapses in the prefrontal cortex.
• If the growth rate of the best neural nets continues, a 150-trillion-parameter net is 5-10
years away, somewhere in 2022-27 (extrapolated in the sketch below).
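The extrapolation in the last bullet, as a sketch under the stated 100x-per-5-years assumption:

```python
import math

current, target = 130e9, 150e12     # today's largest net vs prefrontal-cortex synapses
yearly_growth = 100 ** (1 / 5)      # 100x per 5 years -> ~2.5x per year
years = math.log(target / current) / math.log(yearly_growth)
print(2017 + years)                 # ~2024-25, inside the slide's 2022-27 window
```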
Neural net performance evidence
Number of parameters in the neural net
How to train such large neural nets?
OpenAI found an easily scalable approach by changing the way the net is trained:
not backpropagation, but evolution strategies, a black-box search that estimates
gradient directions from random perturbations in a very large parameter space
(a minimal sketch follows). https://blog.openai.com/evolution-strategies/
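A minimal evolution-strategies loop in the spirit of the OpenAI post; a sketch, where the toy objective and all hyperparameters are illustrative, not OpenAI’s:

```python
import numpy as np

def fitness(w):
    # Toy objective: get close to a fixed target vector (higher is better).
    return -np.sum((w - np.array([0.5, 0.1, -0.3])) ** 2)

npop, sigma, alpha = 50, 0.1, 0.01   # population size, noise scale, learning rate
w = np.random.randn(3)               # initial parameters
for _ in range(300):
    noise = np.random.randn(npop, w.size)             # random perturbations
    rewards = np.array([fitness(w + sigma * n) for n in noise])
    adv = (rewards - rewards.mean()) / rewards.std()  # normalise rewards
    w += alpha / (npop * sigma) * noise.T @ adv       # step toward good perturbations
print(w)                             # converges near [0.5, 0.1, -0.3]
```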
Neural net performance evidence
Number of parameters in the neural net
“IBM’s Artificial Brain Has Grown From 256 Neurons to 64 Million
Neurons in 6 Years – 10 Billion Projected by 2020” https://www.singularityarchive.com/ibms-artificial-brain-has-grown-from-256-neurons-to-64-million-neurons-in-6-years/
Hyperbolic predictions
• All hyperbolic predictions converge around 2030.
• The “risk period” will start earlier, as instability grows.
Accelerating factors
• Hype: more people have started to learn AI and to believe AGI is coming soon.
• An arms race between the main companies.
• An arms race with China.
• Russia could create an “AI Chernobyl”.
Conclusion about predictions
• AI powerful enough to be a global risk will appear
between 2020 and 2030.
Next milestone is a “robotic brain”
• Robotic brain: walks, speaks, has a world model and limited common sense.
• The “Turing test” for such a robot is its ability to prepare
breakfast.
• Self-driving cars, home robots, military AIs.
• Probably 5 years from now.
• After that there are many possible paths.
2. What are the risks?
2.1. Risks of narrow AI before AGI:
• AI instruments helped to elect Trump and he starts a nuclear war? :)
• Military AI gives a strategic and weaponry advantage, which, however, results in a new arms
race, new and even more destructive weapons, and a catastrophic WW3 in the end.
• Narrow AI infects billions of robots and cars, and they start hunting humans.
• Narrow AI in the hands of a bioterrorist helps him create dangerous bioweapons.
• The Bitcoin economy becomes Scott Alexander’s “Moloch”.
KANSI, or Prosaic AI as mild superintelligence
• KANSI: known-algorithm non-self-improving agent. “"Known-algorithm non-
self-improving" (KANSI) is a strategic scenario and class of possibly-
attainable AI designs, where the first pivotal powerful AI has been
constructed out of known, human-understood algorithms and is not engaging
in extensive self-modification” https://arbital.com/p/KANSI/
• Messy prosaic AI: an idea by Paul Christiano, https://ai-alignment.com/prosaic-ai-control-b959644d79c2
2. What are the risks?
The main questions:
1) What is the threshold of independent AI self-improvement, and how far is it from a basic
robotic brain?
2) Could AI get a decisive strategic advantage without superintelligence?
3) What is the threshold of dangerously powerful AI relative to the robotic brain?
We don’t know, but we could “get” answers to these questions in the 2020s.
However, there are two thresholds:
The near one: 2022, 5 years from now.
The farther one: 2030.
Yudkowsky, 2017: what should count as a fire alarm?
A narrow-AI accident? An almost human-like robot?