
20-Year Artificial Intelligence Risk


Recently, we interviewed and reached out to a total of 33 artificial intelligence researchers (all but one hold a PhD) and asked them about the AI risks that they believe to be the most pressing in the next 20 years. While it’s important to bear in mind that the categorization was done after the fact (it could be argued that other categories could have been used to couch these responses), and that 33 researchers is by no means an extensive consensus, the resulting trends and thoughts of PhDs, most of whom have spent their careers in various segments of AI, are interesting and worth considering.


  1. TECHEMERGENCE CONSENSUS: 20-YEAR ARTIFICIAL INTELLIGENCE RISK The data from this consensus was collected between October and December of 2015.
  2. In this TechEmergence Consensus, we contacted a total of 33 artificial intelligence researchers (all but one hold a PhD) and asked them about the AI risks that they believe to be the most pressing in the next 20 years. This slide deck displays the major trends in their responses, as well as some of the most poignant quotes from the recognized experts we spoke with.
  3. The complete data set, with all quotes and answers from all the researchers we connected with for our 20-year and 100-year AI Risk Consensus, as well as our AI Consciousness Consensus, is available as a free download (spreadsheet or Google Sheet) at the link below: >> CLICK HERE
  5. We’ve selected one or two quotes from each of the major response categories, including “automation and technology mismanagement.” Beneath each quote is a link (if available) to our complete interview with that guest on the TechEmergence Podcast. * These consensus answers were recorded separately from our podcast interviews, but most of the podcasts focus on related topics around the implications and applications of artificial intelligence.
  6. AUTOMATION / ECONOMY “The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage-based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.” - Dr. Joscha Bach, Cognitive Scientist at MIT Media Lab and Harvard Program for Evolutionary Dynamics >> CLICK HERE to listen to or read our full interview with Dr. Joscha Bach
  7. AUTOMATION / ECONOMY “I don’t think anything particularly world-shattering will occur in the next twenty years, although I have some concern that we may completely lose our ability to manage or steer nanosecond-sensitive stock markets in constructive ways. They could explode or crash in some chaotic fashion that occurs so fast we don’t have time to mitigate the damage.” - Dr. Keith Wiley, Researcher, Author, Senior Software Engineer at Atigeo >> CLICK HERE to listen to or read our full interview with Dr. Keith Wiley
  8. NONE GIVEN “The biggest risk is some sort of weaponization of advanced narrow AI or early-stage AGI. This wouldn’t have to be killer robots; it could, e.g., be an artificial scientist narrowly engineered to create synthetic pathogens — or something else we’re not currently worrying about. Human narrow-mindedness and aggression, aided by AI tools that are more narrowly clever than generally intelligent or conscious. That’s what we should be worrying about, if anything.” - Dr. Ben Goertzel, Chief Scientist at Hanson Robotics & Aidyia Holdings >> CLICK HERE to listen to or read our full interview with Dr. Ben Goertzel
  9. NONE GIVEN “It is hard to believe that AI will be an actual risk. Any advanced technology has its own risks; for example, the flight control of the space shuttle can fail and cause an accident. However, the technology used to control the space shuttle is not itself dangerous.” - Dr. Eduardo Torres Jara, Assistant Professor of Robotics Engineering at Worcester Polytechnic Institute >> CLICK HERE to read our full interview with Dr. Eduardo Torres Jara
  10. GENERAL MISMANAGEMENT OF AI TECHNOLOGY “The increasing interactions between autonomous computer systems may cause unpredictable, untraceable, and perhaps undesirable outcomes.” - Dr. Mehdi Dastani, Associate Professor at the Intelligent Systems Group, Utrecht University >> CLICK HERE to listen to or read our full interview with Dr. Mehdi Dastani
  11. KILLER ROBOTS “The beginning of automating armed conflict.” - Dr. Noel Sharkey, Emeritus Professor of Artificial Intelligence and Professor of Public Engagement, University of Sheffield, UK >> CLICK HERE to listen to or read our full interview with Dr. Noel Sharkey
  12. SURVEILLANCE / SECURITY CONCERNS “The ability to dress down our critical system designs, expose their safety and security holes, and deliver action plans to exploit these holes.” - Dr. Pieter Mosterman, Senior Research Scientist at MathWorks >> CLICK HERE to listen to or read our full interview with Dr. Pieter Mosterman
  13. SUPERINTELLIGENCE “Cybercrime/cyberweapons: I expect major risk from increasingly human-like AI bots, which will dispense with human limits and proliferate indefinitely to conduct advanced persistent attacks on essential institutions (hospitals, utilities); defraud banks, funds, insurance and stock markets; infiltrate intelligence agencies; impersonate potential love interests; and conduct adversarial activities well beyond the boundaries we’re familiar with.” - Dr. Amnon Eden, Scientific Consultant, Software Design, Quality and Re-engineering and Machine Learning, IT Sector (UK)
  14. SUPERINTELLIGENCE “The most acute danger is for the community of superintelligences to start fighting among themselves in such a manner that everything will be destroyed as a side effect of that fighting.” - Dr. Michael (Mishka) Bukatin, Senior Software Engineer at Nokia
  15. MALICIOUS AI “AI designed to be malicious on purpose.” - Dr. Roman Yampolskiy, Associate Professor of Computer Science at the Speed School of Engineering, University of Louisville >> CLICK HERE to listen to or read our full interview with Dr. Roman Yampolskiy
  16. If you’ve enjoyed this presentation and you’d like to see the full set of 33 AI researcher responses from our 20-year and 100-year AI Risk Consensus, as well as our AI Consciousness Consensus, the entire dataset is made freely available as a simple spreadsheet accessible via the form below: >> CLICK HERE
  17. Thanks for viewing our presentation. If you’d like to stay ahead of the curve on cutting-edge research trends and insights in the field of artificial intelligence, be sure to stay connected on social media by clicking the icons below: | © TechEmergence LLC 2016 All Rights Reserved | Design by J. Daniel Samples