
Near term AI safety

Presentation on the mini-conference "Near-term AI safety"

Published in: Science

  1. Near-term AI safety. Alexey Turchin. Moscow, “Kocherga club”, October 14, 15:00
  2. Could AI cause human extinction in the next 10 years? Main ideas: • The neural net boom started in 2012. • MIRI has increased its estimate of the probability of AI arriving before 2035. • Key measures of neural net performance have doubled roughly every year for the last 5 years. • NNs reached superhuman performance in some important areas in 2016-2017. • Most areas are 5-10 doublings away from human level, which implies AI in the 2020s, maybe as early as 2023. • AI could become a global risk before superintelligence or even AGI, and thus earlier. • 5 independent ways of predicting AI timing produce the same result, which is surprising.
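The timing claim in the bullets above is simple doubling arithmetic. A minimal sketch, using the slide's assumed doubling time and gap to human level (illustrative numbers, not measured data):

```python
# Illustrative arithmetic only, using the slide's assumptions:
# key metrics double roughly every year, and most areas are
# 5-10 doublings short of human level.
doubling_time_years = 1.0
doublings_to_human = (5, 10)
start_year = 2017  # year of the talk

arrival = [start_year + d * doubling_time_years for d in doublings_to_human]
print(arrival)  # [2022.0, 2027.0] -- i.e. sometime in the 2020s
```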
  3. Which level of AI is dangerous? • Superintelligence is not a necessary condition for a human extinction event. • Omnicide is a computational task of a certain complexity. • An AI able to solve the omnicide task is dangerous. • We will call it a dangerously powerful AI.
  4. Which level of AI is dangerous? • AI power is growing. • The availability of powerful AIs is growing. • The complexity of omnicide is diminishing as new technology appears. • Example: a narrow AI able to calculate DNA combinations for dangerous viruses. [Chart: extinction “task” complexity (green) and AI power (blue) over time.]
  5. Which level of AI is dangerous? • “Superintelligence is needed to calculate nanotech” is a popular narrative about AI, as in Yudkowsky, 2006. • Now dangerous technology could be created before superintelligence, using narrow AI as a helper. • The number of “bad actors” is growing, as it is simply a share of all actors. • AGI and superintelligence are still risks. [Chart: extinction “task” complexity (green) and AI power (blue) over time.]
  6. Earliest AI arrival timing cutoff • If risk grows linearly, it is about 0.003 % a day, which is unacceptable even for tomorrow. • Threshold suggestion: 5 per cent, or 2021. • Exponential growth conjecture: AGI appearance probability is concentrated at the end of any time period, while nuclear war probability density is linear.
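The 0.003 % figure follows from the survey data cited later (roughly 1.25 percentage points of cumulative AGI probability per year). A sketch of the arithmetic, assuming 2017 as the start year:

```python
# Sketch of the slide's arithmetic, assuming the poll-based ~1.25 %/year
# linear growth in cumulative AGI probability (see the survey slide).
annual_growth_pct = 1.25
daily_growth_pct = annual_growth_pct / 365
print(round(daily_growth_pct, 4))  # 0.0034 -- roughly 0.003 % per day

# Year in which the cumulative probability crosses a 5 % threshold:
threshold_pct = 5.0
years_to_threshold = threshold_pct / annual_growth_pct
print(2017 + years_to_threshold)   # 2021.0
```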
  7. Two questions about near-term AI risks • What is the evidence that catastrophic-level AI will appear in the next 10 years? • What kinds of catastrophes could AI create before reaching the superintelligent stage?
  8. 1. Types of evidence • Surveys: the median is 2062, but the first 10 per cent arrive by 2025. • Hardware: we already have enough hardware for AI, and much more will come soon. • Neural net performance growth: doubling every year, 5-10 years to human level on many metrics. • Hyperbolic acceleration trends: if they hold, potentially catastrophic instability will begin in the 2020s. • Randomness of the moment of AI creation: if the non-linearity of the distribution and the moment of observation are taken into account, AGI will appear in the next several years (very speculative).
  9. Surveys • K. Grace's poll: the median time of AGI arrival is 2062. • But from a risk-analysis point of view we need the earliest arrival time. • The same poll: 6.25 % before 2022, 12.5 % before 2027, 15 % before 2030. • The growth is almost linear at about 1.25 per cent a year.
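The "almost linear" claim can be checked directly from the three poll figures quoted above:

```python
# Cumulative AGI probabilities as quoted on the slide (Grace et al. poll).
poll = {2022: 6.25, 2027: 12.5, 2030: 15.0}

# Growth rate between adjacent points, in percentage points per year:
years = sorted(poll)
rates = [round((poll[b] - poll[a]) / (b - a), 2) for a, b in zip(years, years[1:])]
print(rates)  # [1.25, 0.83] -- roughly linear, about 1-1.25 points/year
```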
  10. Hardware evidence: Moore's law • Even if Moore's law is dead, it will not stop AI progress now, as progress in AI mostly depends on algorithms. In any case, we have, or will soon have, enough computing power to run AIs similar to a human brain. • The real Moore's law is the price of computation - that is its essence, which matters for AI timing. • The classical semiconductor Moore's law will probably allow at least several doublings in chip performance over the next 5-10 years.
  11. Hardware evidence: total computational power of the internet • Not only is the performance of computer chips growing; the total number of interconnected devices is also growing exponentially. • Total power = (number of devices) x (average performance of 1 device). • “While world storage capacity doubles every three years, world computing capacity doubles every year and a half, Hilbert said.” https://www.livescience.com/54094-how-big-is-the-internet.html • How many devices now? 7 billion cell phones, plus billions of PCs, webcams, and IoT devices. Probably 20 billion devices.
  12. Hardware evidence: the Bitcoin network as an example of powerful specialised hardware • In 2017 the Bitcoin network reached 3 exahashes of processing power. https://news.bitcoin.com/why-bitcoin-is-close-to-a-record-breaking-3-exahashes-processing-power/ • The network's power doubles roughly every 8 months. • If a hash were calculated on an ordinary computer, it would cost about 12 000 flops, so the total power of the Bitcoin network would be 3.6E22 flops. But Bitcoin runs on specialised ASICs. • Neural net processing also needs specialised ASICs, though different from Bitcoin's; one example is Google's TensorFlow TPU chips. This means that, given a sufficient monetary incentive, an equally powerful distributed AI system could be built in a couple of years. • Markram expected that human brain simulation would require 1 exaflop (10^18 flops). That means the current blockchain network is computationally equal to 36 000 human brains (but cannot run the needed type of calculations). Such computing power is probably enough to run a superintelligence.
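The slide's back-of-envelope estimate can be reproduced in two lines (the 12 000 flops-per-hash figure is the slide's assumption, not a measured constant):

```python
# Reproducing the slide's estimate of Bitcoin-network compute.
hash_rate = 3e18          # 3 exahash/s (2017 Bitcoin network)
flops_per_hash = 12_000   # assumed CPU cost of one hash (slide's figure)
total_flops = hash_rate * flops_per_hash
print(f"{total_flops:.1e}")  # 3.6e+22 flops

brain_flops = 1e18        # Markram's 1 exaflop brain-simulation estimate
print(round(total_flops / brain_flops))  # 36000 "brain equivalents"
```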
  13. Hardware evidence: a self-driving car has to be near human brain capabilities • The new Nvidia system: “Pegasus can handle 320 trillion operations per second, representing roughly a 13-fold increase over the calculating power of the current PX 2 line (Tesla 3).” https://cleantechnica.com/2017/10/11/nvidia-shows-off-new-self-driving-car-chips-used-deutsche-post-dhl-others/ The companies stated the system would be capable of Level 5 autonomous driving. • It will be delivered in 2018. • New processor: 21 bn transistors, 10 000 engineer-years, several bn dollars. • Nvidia expects GPUs to outperform CPUs 1000-fold over 15 years, by 2025. • Full self-driving cars are expected in the 2020s. • Specialised markets like cars and videogames fund excess computing power.
  14. Hardware evidence: human mind information capacity could be overestimated • The “Pegasus” system promises Level 5 self-driving autonomy at 320 trillion operations per second - human-level performance, operating on larger data streams than a human brain handles. • The conscious memory of most humans is around 1 GB, which is rather small compared to the data storage of most contemporary computers. Human working memory is only 7 units; human typing speed is several bytes a second. • The lowest estimate of the computing power needed to simulate a human brain is around 100 trillion operations per second (10^14). Markram estimated 1 exaflop (10^18), which we can treat as a median estimate; there is no upper estimate. • “Power needed to simulate” is not “computational power of the brain”; it is just a measure of the inefficiency of simulation.
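Putting the slide's numbers side by side shows how wide the brain-compute estimates are, and where Pegasus falls between them:

```python
# Comparing the slide's figures: brain-simulation estimates span
# four orders of magnitude, and Pegasus sits just above the lower bound.
low_estimate = 1e14        # ~100 trillion ops/s (slide's lower bound)
median_estimate = 1e18     # Markram's 1 exaflop estimate
pegasus = 320e12           # Nvidia Pegasus, 320 trillion ops/s

print(round(pegasus / low_estimate, 2))     # 3.2 -- already past the low bound
print(round(pegasus / median_estimate, 6))  # 0.00032 -- far below the median
```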
  15. Hardware evidence: AI-related computational power is growing even without Moore's law • Not only the price of computation matters, but also the budgets of AI research organisations. Budgets were small during the AI winter; now they have grown hundreds of times. • As the global economy grows, a bigger share of it can be spent on building computational power. The largest limiting factor now is energy consumption. • Owning a large computer is expensive, but renting time in the cloud is more cost-efficient, as you pay only for the time you work and there is no downtime. Or you can earn money by mining during downtime. • Large tech giants like Google and IBM can order specialised computer chips (like TPUs) for their software with a turnaround of one to several months.
  16. Neural net performance evidence: most metrics are doubling almost every year • We should measure from 2012, when the age of neural nets started. • Image-recognition error fell from 27 % to 1.5 %. • That is an 18-fold improvement in 5.5 years. • The doubling time is about 1.3 years. Data from “AI Progress Measurement” https://www.eff.org/ai/metrics
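The doubling time follows from the two error figures on the slide; a sketch of the calculation:

```python
import math

# The slide's doubling-time arithmetic for image-recognition error rates.
error_2012, error_2017 = 27.0, 1.5   # error rates, % (slide's figures)
years = 5.5

improvement = error_2012 / error_2017        # total improvement factor
doubling_time = years / math.log2(improvement)
print(improvement)                # 18.0
print(round(doubling_time, 2))    # 1.32 years per doubling
```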
  17.-25. Neural net performance evidence: most metrics are doubling every year [nine chart slides]. Data from “AI Progress Measurement” https://www.eff.org/ai/metrics
  26. Neural net performance evidence: dataset size is critical for neural net performance “Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between `enormous data' and deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning.” “Once again we observe that the performance increases logarithmically as the pre-training dataset increases.” Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, https://arxiv.org/abs/1707.02968
  27. Neural net performance evidence: human-size dataset 1) Humans have existed for hundreds of thousands of years but evolved very slowly, until they created a rich culture 5000 years ago. 2) Humans brought up by animals are animals. 3) Animals taught by humans are able to learn some language, like chimps and dogs. 4) The human brain is very robust to injuries and genetic variations; there is no fragile mechanism inside. Neural nets are the ultimate answer to the nature of human brains. 5) The human dataset: 100 000 hours of video stream with audio, or something like 100 TB of data (if compressed). ImageNet is 100 000 times smaller. 6) Most people don't “think” - they repeat patterns of the culture, which sometimes looks like thinking. In the same way, a neural net was trained to do basic math without understanding math.
  28. Neural net performance evidence: “human-size” dataset • The biggest success in neural nets came after a very large dataset was introduced - ImageNet, 1 million images, in 2012. • In 2016 Google increased the dataset to 300 mln images and got state-of-the-art performance using a very simple standard neural net architecture. • This corresponds to dataset growth of 300 times in 5 years, or a doubling time of less than 1 year (around 8 months). • The “human dataset” equals about 100 000 hours of video (all human life experience in childhood), or something like 100 billion images. • At the current doubling time of 8 months, artificial neural net datasets will match the human dataset in about 8 doublings, or 5-6 years - that is, 2022-2023. • Larger datasets are technically possible, as years of YouTube video are available. • “If a machine could do a whole bunch of those translations successfully it would demonstrate that they really understand what's going on, but I think the sets need to be about 1000 times bigger than they are now in machine translation for it to work perfectly” - Geoffrey Hinton, https://www.re-work.co/blog/interview-yoshua-bengio-yann-lecun-geoffrey-hinton
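The 2022-2023 estimate is a doubling extrapolation from the slide's figures (the 100-billion-image "human dataset" is the slide's assumption):

```python
import math

# Extrapolating dataset growth per the slide's assumptions.
current = 300e6           # Google's 2016 dataset, images
human = 100e9             # assumed "human-size" dataset, images
doubling_months = 8       # observed dataset doubling time

doublings = math.log2(human / current)
years = doublings * doubling_months / 12
print(round(doublings, 1))       # 8.4 doublings needed
print(round(2017 + years, 1))    # 2022.6 -- i.e. around 2022-2023
```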
  29. Neural net performance evidence: human-size dataset • Performance has grown logarithmically with dataset size: 13 IOU units (63 to 76; the maximum, 100, is total recognition) as the dataset grew from 10 to 300 mln elements. • IOU, “intersection over union”, measures the overlap between predicted and actual object boundaries, from 0 to 100 per cent. • Extrapolating, a 100 bn dataset would give about 97 IOU, very close to the absolute maximum (and a 10 bn dataset only about 89 IOU). • This supports the intuition that a “human-size” dataset of 100 bn images is needed to reach human-level performance. • Google plans to test larger datasets.
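A log-linear fit through the two data points on the slide reproduces its extrapolation (the simple fit below gives ~98 IOU at 100 bn images, essentially the slide's ~97):

```python
import math

# Log-linear extrapolation of the IOU figures quoted on the slide.
x1, y1 = 10e6, 63.0     # 10 mln images -> 63 IOU
x2, y2 = 300e6, 76.0    # 300 mln images -> 76 IOU
slope = (y2 - y1) / (math.log10(x2) - math.log10(x1))  # IOU per decade

def predict(n_images):
    # Extend the same log-linear trend to a larger dataset.
    return y2 + slope * (math.log10(n_images) - math.log10(x2))

print(round(predict(10e9)))    # 89 IOU at 10 bn images
print(round(predict(100e9)))   # 98 IOU at 100 bn images
```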
  30. Neural net performance evidence: number of parameters in the neural net • The size (number of parameters, or connections, roughly equivalent to synapses) of Google's 2012 cat recogniser was 1 billion. • Later, most private research was done on graphics cards, and the number of parameters was limited by the card's memory, which recently reached 12 GB. For example, Karpathy's famous RNN had only 3 million parameters but was able to generate grammatically correct text. • In 2016 Google created a neural net with 130 billion parameters, now used in Google Translate. They showed that quality grows with the size of the net, though with some diminishing returns. https://arxiv.org/pdf/1701.06538.pdf • So the number of parameters in Google's best neural nets grew 100 times in 5 years, and they are planning a trillion-parameter net soon. • The human brain has around 150 trillion synapses in the prefrontal cortex. • If the growth rate of the best neural nets continues, a 150-trillion-parameter net is 5-10 years away, somewhere in 2022-27.
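The 2022-27 window follows from extending the observed growth rate; a sketch, assuming the 100x-per-5-years trend holds:

```python
import math

# The slide's extrapolation from net size to "brain-scale" parameter counts.
params_2016 = 130e9       # Google's 2016 net, parameters
brain_synapses = 150e12   # synapses in prefrontal cortex (slide's figure)
growth_per_5y = 100       # nets grew ~100x over the previous 5 years

# At 100x per 5 years, size grows by 100**(t/5) in t years; solve for t:
t = 5 * math.log(brain_synapses / params_2016) / math.log(growth_per_5y)
print(round(2016 + t, 1))  # 2023.7 -- inside the slide's 2022-27 window
```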
  31. Neural net performance evidence: number of parameters in the neural net How can such large neural nets be trained? OpenAI found an easily scalable solution that changes the way the net is trained: not backpropagation, but evolution strategies - black-box search via random perturbations in a very large parameter space. https://blog.openai.com/evolution-strategies/
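A minimal toy sketch of the evolution-strategies idea from the OpenAI post: sample random perturbations of the parameters, score each perturbed copy, and move the parameters along the reward-weighted average of the noise, with no backpropagation. The objective and all constants here are illustrative, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.5, -0.3, 0.8])

def reward(theta):
    # Toy objective: higher reward the closer theta is to target.
    return -np.sum((theta - target) ** 2)

theta = np.zeros(3)
sigma, alpha, n_pop = 0.1, 0.1, 50  # noise scale, step size, population size

for _ in range(200):
    noise = rng.standard_normal((n_pop, 3))
    rewards = np.array([reward(theta + sigma * eps) for eps in noise])
    # Reward-weighted average of the noise approximates the gradient,
    # without ever differentiating the reward function.
    grad_est = noise.T @ (rewards - rewards.mean()) / (n_pop * sigma)
    theta += alpha * grad_est

print(np.round(theta, 2))  # approximately [0.5, -0.3, 0.8]
```

The appeal for very large nets is that each population member can be evaluated on a separate machine, and only the random seed and scalar reward need to be communicated.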
  32. Neural net performance evidence: number of parameters in the neural net “IBM's Artificial Brain Has Grown From 256 Neurons to 64 Million Neurons in 6 Years - 10 Billion Projected by 2020” https://www.singularityarchive.com/ibms-artificial-brain-has-grown-from-256-neurons-to-64-million-neurons-in-6-years/
  33. Hyperbolic predictions • All hyperbolic predictions converge around 2030. • The “risk period” will start earlier, as instability grows.
  34. Accelerating factors • Hype: more people have started to learn AI and to believe AGI is coming soon. • An arms race between the main companies. • An arms race with China. • Russia could create an “AI Chernobyl”.
  35. Conclusion about predictions • AI powerful enough to pose a global risk will appear between 2020 and 2030.
  36. The next milestone is a “robotic brain” • Robotic brain: walks, speaks, has a world model and limited common sense. • The “Turing test” for such a robot is its ability to prepare breakfast. • Self-driving cars, home robots, military AIs. • Probably 5 years from now. • After that there are many possible paths.
  37. 2. What are the risks? 2.1. Risks of narrow AI before AGI: • AI instruments helped elect Trump, and he starts a nuclear war? :) • Military AI gives a strategic and weaponry advantage which, however, results in a new arms race, in new, even more destructive weapons, and in a catastrophic WW3 at the end. • Narrow AI infects billions of robots and cars, and they start hunting humans. • Narrow AI in the hands of a bioterrorist helps him create dangerous bioweapons. • The Bitcoin economy becomes Scott's “Moloch”.
  38. KANSI, or prosaic AI as mild superintelligence • KANSI: “Known-algorithm non-self-improving (KANSI) is a strategic scenario and class of possibly-attainable AI designs, where the first pivotal powerful AI has been constructed out of known, human-understood algorithms and is not engaging in extensive self-modification.” https://arbital.com/p/KANSI/ • Messy prosaic AI - an idea by Paul Christiano, https://ai-alignment.com/prosaic-ai-control-b959644d79c2
  39. 2. What are the risks? The main questions: 1) What is the threshold of independent AI self-improvement, and how far is it from a basic robotic brain? 2) Could AI gain a decisive strategic advantage without superintelligence? 3) What is the threshold of dangerously powerful AI relative to the robotic brain? We don't know, but we could get answers to these questions in the 2020s. However, there are two thresholds: a near one, 2022, 5 years from now, and a farther one, 2030. Yudkowsky, 2017: what should be a fire alarm? A narrow AI accident? An almost human-like robot?
