
Igor Markov, Software Engineer, Google at MLconf SEA - 5/20/16

Can AI Become a Dystopian Threat to Humanity? – A Hardware Perspective: Viewing future AI as a possible threat to humanity has long been common in the movie industry, and some serious thinkers (Hawking, Musk) have also promoted this perspective, even though prominent ML experts don't see it happening any time soon. Why is this topic attracting so much attention? What can we learn from the past? This talk draws attention to physical limitations on possible threats, such as energy sources and the ability to reproduce. These limitations can be made more reliable and harder to circumvent, and the hardware of future AI systems can be designed with particular attention to physical limits.

  1. Can AI Become a Dystopian Threat to Humanity? A Hardware Perspective. Igor Markov, Google and The University of Michigan. The views expressed are my own and do not represent my employers.
  2. Threats to humanity?
  3. Threats to humanity?
  4. Threats to humanity?
  5. Why are we so obsessed?
  6. How did humanity survive?
  7. How did humanity survive?
     • by being smart
     • by knowing the adversary
     • by controlling physical resources
     • by using the physical world to advantage
  8. Now, back to the dystopian AI myth
     • AI may become smarter than us
     • Possibly malicious
     • The physical embodiments are unclear
  9. The Black Death killed 50M people in the 14th century
  10. Intelligence – hostile or friendly – is limited by physical resources
  11. Computing machinery is designed using an abstraction hierarchy
     • From transistors to CPUs to data centers
     • Each level has a well-defined function
  12. Introduce hard boundaries between different levels of intelligence and trust
     • Can toasters and doorknobs be trusted?
     • Who can use weapons?
     • Each agent should have a key weakness
  13. Limit self-replication, self-repair and self-improvement
  14. Limit AI's access to energy
     • Firmly control the electric grid
     • No long-lasting batteries, fuel cells or reactors
  15. Tame potential threats and use them for protection
  16. Constraints on AI to intercept dystopian threats:
     1. Hard boundaries between levels of intelligence and trust
     2. Limits on self-replication, self-repair and self-improvement
     3. Limits on access to energy
     4. Physical and network security of critical infrastructure
     Tame potential threats and use them for protection.
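The constraints on slide 16 are stated at the level of hardware and infrastructure policy rather than code, but a rough sketch can make the intent of constraints 1 and 3 concrete. The Python below is a hypothetical illustration, not anything from the talk: all names are invented, agents receive a trust level and an energy budget from the surrounding infrastructure, and a gatekeeper outside the agent's control refuses actuation requests that cross a trust boundary or exceed the remaining budget.

```python
# Hypothetical illustration of constraints 1 and 3 (not from the talk):
# trust boundaries and energy budgets are enforced by infrastructure
# that the agent itself does not control.

from dataclasses import dataclass

TRUST_LEVELS = {"appliance": 0, "assistant": 1, "operator": 2}

@dataclass
class Agent:
    name: str
    trust: int            # assigned by the infrastructure, not chosen by the agent
    energy_budget: float  # joules granted by the grid controller

@dataclass
class Actuator:
    name: str
    required_trust: int   # hard boundary: lower-trust agents are refused
    energy_cost: float

def request_actuation(agent: Agent, actuator: Actuator) -> bool:
    """Gatekeeper running outside the agent's control."""
    if agent.trust < actuator.required_trust:
        print(f"DENY  {agent.name}: trust {agent.trust} < {actuator.required_trust}")
        return False
    if agent.energy_budget < actuator.energy_cost:
        print(f"DENY  {agent.name}: energy budget exhausted")
        return False
    agent.energy_budget -= actuator.energy_cost   # budget only shrinks here;
    print(f"ALLOW {agent.name} -> {actuator.name}")  # only the grid controller refills it
    return True

if __name__ == "__main__":
    toaster = Agent("toaster", TRUST_LEVELS["appliance"], energy_budget=5.0)
    door_lock = Actuator("door_lock", required_trust=TRUST_LEVELS["operator"], energy_cost=1.0)
    request_actuation(toaster, door_lock)   # refused: insufficient trust
```

The design choice that mirrors the talk is that both the enforcement point and the budget refill live outside the agent: a limit the agent can rewrite or replenish for itself is not a hard boundary.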
