
Igor Markov, Software Engineer, Google at The AI Conference 2017

Igor L. Markov is currently working at Google on search infrastructure, while holding appointments at Michigan and occasionally teaching at Stanford. He received his M.A. in Mathematics and Ph.D. in Computer Science from UCLA. He is an IEEE Fellow and an ACM Distinguished Scientist.
Igor is interested in computers that make computers, including algorithms and mathematical models, as well as software and hardware. Some of his results have led to order-of-magnitude improvements in practice, and many are used in commercial tools and open-source software. During the 2011 redesign of the ACM Computing Classification System, he led the effort on the Hardware tree. At the University of Michigan, he chaired the undergraduate program in Computer Engineering (ranked #7 in the US) for a number of years.
Igor co-authored five books and over 200 refereed publications. He served on program committees and chaired tracks at top conferences in Electronic Design Automation. Twelve Ph.D. degrees were defended under his guidance. Current and former students interned at or have been employed by AMD, Altera, Amazon, Berkeley, Cadence, Calypto, Columbia University, the US Department of Defense, the US Department of Energy, the Howard Hughes Medical Inst., Google, IBM Research, Lockheed Martin, Microsoft, MIT, Qualcomm, Samsung, Synopsys, Texas Instruments.
Igor led student teams that won first place in multi-month research competitions in optimization software (organized by IBM Research and Intel Labs) in 2007, 2009, 2010, 2012, and 2013.

Dealing with AI as a Dystopian Threat to Humanity
We have seen how an emergent AI could attack humanity without warning, either to terminate or to enslave it. How realistic are such sci-fi scenarios in our future? How can they be avoided? In this presentation, we argue that humanity's evolutionary obsession with survival has deep roots and teaches us to rely on the physical world when addressing grave threats. We propose a set of rules and constraints on AI to limit its destructive potential. We also point out that defending humanity may become more difficult as technology changes who we are.

Transcript

  1. Dealing with AI as a Dystopian Threat to Humanity. Igor Markov, Google and The University of Michigan. The views expressed are my own and do not represent my employers. June 2, 2017
  2. … 5. Virtual reality = reality (late 2020s) 4. Computers surpass humans soon (2029) 3. Humans become machines (2030) 2. Earth will be made of computers (2045) 1. Universe will be a supercomputer (2099)
  3. A new Intel study reports that self-driving vehicle services, including ride-hailing, cargo delivery, and in-car entertainment, will be worth $7 trillion by the year 2050. Intel calls this the "passenger economy."
  4. A new CRISPR trial, which hopes to eliminate the human papillomavirus (HPV), is set to be the first to attempt to use the technique inside the human body. In the non-invasive treatment, scientists will apply a gel that carries the necessary DNA coding for the CRISPR machinery to the cervixes of 60 women between the ages of 18 and 50. The team aims to disable the tumor growth mechanism in HPV cells. The trial stands in contrast to the usual CRISPR method of extracting cells and re-injecting them into the affected area, although it will still use the Cas9 enzyme (which acts as a pair of ‘molecular scissors’) and the guide RNA typical of the process. Twenty trials are set to begin in the rest of 2017 and early 2018.
  5. The physical world is changing faster than we can keep up. New technologies affect how we live and how we die.
  6. Threats to humanity?
  7. Threats to humanity?
  8. An evolutionary obsession with survival
  9. How did Homo sapiens survive? • by being smart • by knowing the adversary • by controlling physical resources • by using the physical world to its advantage
  10. Now, back to the dystopian AI myth • AI may become smarter than us • Possibly malicious • The physical embodiments are unclear
  11. The Black Death killed 50M people in the 14th century
  12. Intelligence – hostile or friendly – is limited by physical resources
  13. Computing machinery is designed using an abstraction hierarchy • From transistors to CPUs to data centers • Each level has a well-defined function • Each level can be regulated
  14. Introduce hard boundaries between different levels of intelligence and trust • Can toasters and doorknobs be trusted? • Who can use weapons? • Each agent should have a key weakness
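The trust boundaries on slide 14 can be made concrete in software. Below is a minimal sketch in Python, assuming a hypothetical TrustLevel scale and capability table (neither is specified in the talk): every capability request passes through a gatekeeper the agent cannot modify, and unknown capabilities are denied by default.

    from enum import IntEnum

    class TrustLevel(IntEnum):
        """Hypothetical trust tiers; the talk does not name a specific scheme."""
        APPLIANCE = 0   # toasters, doorknobs: minimal trust
        ASSISTANT = 1   # planners, conversational agents
        OPERATOR = 2    # systems controlling weapons or infrastructure

    # Minimum trust level required to invoke each capability (illustrative names).
    REQUIRED_LEVEL = {
        "toast_bread": TrustLevel.APPLIANCE,
        "schedule_meeting": TrustLevel.ASSISTANT,
        "actuate_machinery": TrustLevel.OPERATOR,
    }

    class Gatekeeper:
        """Mediates every capability request; agents cannot bypass or alter it."""
        def authorize(self, agent_level: TrustLevel, capability: str) -> bool:
            required = REQUIRED_LEVEL.get(capability)
            if required is None:
                return False  # deny-by-default for unknown capabilities
            return agent_level >= required

    gate = Gatekeeper()
    assert gate.authorize(TrustLevel.APPLIANCE, "toast_bread")
    assert not gate.authorize(TrustLevel.APPLIANCE, "actuate_machinery")

The deny-by-default rule mirrors the slide's questions: a toaster or doorknob holds no authority it was not explicitly granted, and only agents above a hard boundary may touch weapons-grade capabilities.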
  15. Limit self-replication, self-repair and self-improvement
  16. Limit AI's access to energy • Firmly control the electric grid • No long-lasting batteries, fuel cells or reactors
  17. Tame potential threats and use them for protection
  18. Constraints on AI to intercept dystopian threats 1. Hard boundaries between levels of intelligence and trust 2. Limits on self-replication, self-repair and self-improvement 3. Limits on access to energy 4. Physical and network security of critical infrastructure Tame potential threats and use them for protection
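Constraints 2 and 3 in this summary lend themselves to a runtime-budget framing. The sketch below is one hedged illustration, with all names and limits hypothetical: each agent carries a budget object, and requests to replicate or draw energy fail once externally imposed caps are reached.

    from dataclasses import dataclass

    @dataclass
    class AgentBudget:
        """Hypothetical per-agent caps reflecting constraints 2 and 3."""
        max_replicas: int = 1            # limit on self-replication
        max_energy_joules: float = 1e6   # limit on access to energy
        replicas: int = 1
        energy_used: float = 0.0

        def request_replication(self) -> bool:
            if self.replicas >= self.max_replicas:
                return False             # cap reached: replication denied
            self.replicas += 1
            return True

        def request_energy(self, joules: float) -> bool:
            if self.energy_used + joules > self.max_energy_joules:
                return False             # draw exceeds the external budget
            self.energy_used += joules
            return True

    budget = AgentBudget()
    assert not budget.request_replication()  # default cap of one instance
    assert budget.request_energy(1e5)        # small draw within budget

The design point is that the caps live outside the agent, in infrastructure covered by constraint 4, so self-improvement cannot raise them.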
  19. Specific dystopian scenarios vs. abstract constraints on AI. Analysis of specific vulnerabilities: 1. Nuclear weapons & early warning systems 2. Disruption and abuse of existing energy facilities 3. Risks in mass transit 4. Smart artificial diseases. Abstract rules and constraints: • AI will be applied in many new ways • We can’t foresee all dystopian scenarios or superhuman AI • Accidental, unexpected interactions may turn dangerous
  20. Us and them?
