2. In this TechEmergence Consensus, we contacted a
total of 33 artificial intelligence researchers (all
but one holding a PhD) and asked them about the
AI risks they believe will be the most pressing in
the next 20 years.
This slide deck presents the major trends in their
responses, along with some of the most poignant
quotes from the recognized experts we spoke with.
3. The complete data set, including all quotes
and answers from every researcher we
connected with for our 20-year and 100-year AI
Risk Consensus, as well as our AI Consciousness
Consensus, is available for free download as a
spreadsheet or Google Sheet at the link below:
>> CLICK HERE
5. We’ve selected one or two quotes from each of
the major response categories, including
“automation and technology mismanagement.”
Beneath each quote is a link (if available) to our
complete interview with that guest on the
TechEmergence Podcast.
* These consensus answers were recorded separately from our podcast interviews, but most podcasts are focused on
related topics around the implications and applications of artificial intelligence.
6. AUTOMATION / ECONOMY
The risks brought about by near-term AI may turn out to be the
same risks that are already inherent in our society. Automation
through AI will increase productivity, but won’t improve our living
conditions if we don’t move away from a labor/wage based
economy. It may also speed up pollution and resource exhaustion,
if we don’t manage to install meaningful regulations. Even in the
long run, making AI safe for humanity may turn out to be the same
as making our society safe for humanity.
- Dr. Joscha Bach
Cognitive Scientist at MIT Media Lab and Harvard Program for Evolutionary Dynamics
Listen to or read our full interview with Dr. Joscha Bach at techemergence.com:
>> CLICK HERE
7. AUTOMATION / ECONOMY
I don’t think anything particularly world-shattering will occur in
the next twenty years, although I have some concern that we may
completely lose our ability to manage or steer nanosecond-sensitive
stock markets in constructive ways. They could explode or
crash in some chaotic fashion that occurs so fast we don’t have
time to mitigate the damage.
- Dr. Keith Wiley
Researcher, Author, Senior Software Engineer at Atigeo
Listen to or read our full interview with Dr. Keith Wiley at techemergence.com:
>> CLICK HERE
8. NONE GIVEN
The biggest risk is some sort of weaponization of advanced narrow
AI or early-stage AGI. This wouldn’t have to be killer robots, it could
e.g. be an artificial scientist narrowly engineered to create synthetic
pathogens — or something else we’re not currently worrying about.
Human narrow-mindedness and aggression, aided by AI tools that are
more narrowly clever than generally intelligent or conscious. That’s
what we should be worrying about, if anything.
- Dr. Ben Goertzel
Chief Scientist at Hanson Robotics & Aidyia Holdings
Listen to or read our full interview with Dr. Ben Goertzel at techemergence.com:
>> CLICK HERE
9. NONE GIVEN
It is hard to believe that AI will be an actual risk. Any advanced
technology has its own risks; for example, the flight control of
the space shuttle can fail and cause an accident. However, the
technology used to control the space shuttle is not itself
dangerous.
- Dr. Eduardo Torres Jara
Assistant Professor of Robotics Engineering at Worcester Polytechnic Institute
Read our full interview with Dr. Eduardo Torres Jara at techemergence.com:
>> CLICK HERE
10. GENERAL MISMANAGEMENT
OF AI TECHNOLOGY
The increasing interactions between autonomous computer
systems may cause unpredictable, untraceable, and perhaps
undesirable outcomes.
- Dr. Mehdi Dastani
Associate Professor at the Intelligent Systems Group at Utrecht University
Listen to or read our full interview with Dr. Mehdi Dastani at techemergence.com:
>> CLICK HERE
11. KILLER ROBOTS
The beginning of automating armed conflict
- Dr. Noel Sharkey
Emeritus Professor of Artificial Intelligence and Professor of Public Engagement, University of Sheffield, UK
Listen to or read our full interview with Dr. Noel Sharkey at techemergence.com:
>> CLICK HERE
12. SURVEILLANCE/SECURITY
CONCERNS
The ability to dress down our critical system designs, expose their
safety and security holes, and deliver action plans to exploit these
holes
- Dr. Pieter Mosterman
Senior Research Scientist at MathWorks
Listen to or read our full interview with Dr. Pieter Mosterman at techemergence.com:
>> CLICK HERE
13. SUPERINTELLIGENCE
Cybercrime/cyberweapons: I expect major risk from increasingly
human-like AI bots, which will dispense with human limits and
proliferate indefinitely to conduct advanced persistent attacks on
essential institutions (hospitals, utilities); defraud banks, funds,
insurance and stock markets; infiltrate intelligence agencies;
impersonate potential love interests; and conduct adversarial
activities well beyond the boundaries we’re familiar with.
- Dr. Amnon Eden
Scientific Consultant, Software Design, Quality and Re-engineering and
Machine Learning at IT Sector (UK)
14. SUPERINTELLIGENCE
The most acute danger is for the community of superintelligences to
start fighting among themselves in such a manner that everything
will be destroyed as a side effect of that fighting.
- Dr. Michael (Mishka) Bukatin
Senior Software Engineer at Nokia
15. MALICIOUS AI
AI designed to be malicious on purpose
- Dr. Roman Yampolskiy
Associate Professor of Computer Science at Speed School of Engineering,
University of Louisville
Listen to or read our full interview with Dr. Roman Yampolskiy at techemergence.com:
>> CLICK HERE
16. If you’ve enjoyed this presentation and you’d like
to see the full set of 33 AI researcher responses
from our 20-year and 100-year AI Risk Consensus,
as well as our AI Consciousness Consensus, the
entire dataset is made freely available in a simple
spreadsheet accessible via the form below:
>> CLICK HERE