Superintelligence: how afraid should we be?
Superintelligence: How afraid should we be? Presentation by David Wood at the Computational Intelligence Unconference UK, 26th July 2014. Reviews ideas in three recent books: Superintelligence, by Nick Bostrom; Our Final Invention, by James Barrat; and Intelligence Unbound, edited by Russell Blackford and Damien Broderick.

Please contact the author to invite him to present animated and/or extended versions of these slides in front of an audience of your choosing. (Commercial rates will apply for commercial settings.)

Usage rights: CC Attribution-NonCommercial-ShareAlike License

Superintelligence: how afraid should we be? Presentation Transcript

  • 1. Superintelligence: How afraid should we be? David Wood, Principal, Delta Wisdom; Chair, London Futurists. @dw2 #CIUUK14
  • 2. Powerful technology, incompletely understood; operated by people outside their level of competence; human lives knocked catastrophically off trajectory, unintentionally (the MH17 disaster: http://www.bbc.co.uk/news/world-europe-28357880, http://mashable.com/2014/07/17/malaysia-airlines-ukraine-russia-rebel/). By analogy: self-improving AGI, beyond human control; humanity knocked catastrophically off trajectory, unintentionally.
  • 3. (Image-only slide.)
  • 4-7. Likely date of advent of human-level AGI (HL-AGI): survey results reported in Nick Bostrom's Superintelligence (slides 4-6 progressively build up the table shown in full on slide 7):

    Population surveyed                              10%    50%    90%
    Conference: Philosophy & Theory of AI            2023   2048   2080
    Conference: Artificial General Intelligence      2022   2040   2065
    Greek Association for Artificial Intelligence    2020   2050   2093
    Top 100 cited academic authors in AI             2024   2050   2070
    Combined (from above)                            2022   2040   2075

    (The columns give the year by which respondents assigned a 10%, 50% and 90% probability to HL-AGI having arrived.)
  • 8. Reaching HL-AGI: 5 driving forces (http://intelligence.org/2013/05/15/when-will-ai-be-created/)
    1. Hardware with higher performance: continuation of Moore's Law? ("18 different candidates" in Intel labs to add extra life to that trend); possible breakthroughs with quantum computing? (A small compound-growth sketch follows the transcript.)
    2. Software algorithm improvements? These can speed things up faster than hardware gains, e.g. chess computers. Compare Andrew Wiles's unexpected proof of Fermat's Last Theorem (1993).
    3. Learnings from studying the human brain? Improved scanning techniques -> "neuromorphic computing" etc.; philosophical insight into consciousness/creativity?!
    4. More people studying these fields than ever before: Stanford University's online AI course drew 160,000 students (23,000 finished it); more components / databases / tools / methods ready for re-combination; unexpected triggers for improvement (malware wars, games AI, financial AI...).
    5. Transformation in society's motivation? (Smarter people?! A "Sputnik moment"!?)
  • 9. Superintelligence, model 1 (Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt): an intelligence scale running from "village idiot" to "Einstein".
  • 10. Superintelligence, model 2 (Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt): a scale spanning mouse, chimp, village idiot, Einstein, AI. Reaching human level may take 50-100 years; crossing the village-idiot-to-Einstein gap perhaps 50-100 weeks? / days? / hours? Vernor Vinge: the best answer to the question, "Will computers ever be as smart as humans?" is probably "Yes, but only briefly." "The final invention."
  • 11. Recursive improvement: computers are already used in the design and manufacturing of better computers.
  • 12. Recursive improvement: software tools (debuggers, compilers...) are used to build better software.
  • 13. Recursive improvement: AI tools used to build better AI -> an intelligence explosion. ++Rapid reading & comprehension of all written material. ++Rapid expansion onto improved hardware. ++Funded by financial winnings from smart stock trading. ++Supported by humans easily psychologically manipulated.
  • 14. Who here wanted to merge again? Jaan Tallinn: http://prezi.com/xku9q-v-fg_j/intelligence-stairway/
  • 15. Exponential growth? Two sketches of technology vs. time approaching 2050: Ray Kurzweil's smooth exponential curve, versus Eliezer Yudkowsky's picture in which AGI reaches human level (AGI=HL) and artificial superintelligence then rapidly exceeds it by far (ASI>>HL). (A toy numerical sketch contrasting the two take-off shapes follows the transcript.)
  • 16. Going nuclear: hard to calculate. The Castle Bravo test (the first test of a dry-fuel hydrogen bomb), 1st March 1954, Bikini Atoll: the explosive yield was expected to be 4 to 6 megatons; it was 15 megatons, two and a half times the expected maximum. The cause was a physics error by the designers at Los Alamos National Laboratory, who wrongly considered the lithium-7 isotope in the bomb to be inert. The crew of a nearby Japanese fishing boat became ill after direct contact with the fallout; one of the crew died. http://en.wikipedia.org/wiki/Castle_Bravo
  • 17. Superintelligence, model 2 revisited (Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt): mouse, chimp, village idiot, Einstein, AI. A linear model of intelligence?
  • 18. Model 3 (Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt): the space of minds-in-general. Human minds occupy a small region within transhuman mindspace, itself within a far larger posthuman mindspace that also contains whimsically labelled "gloopy", "bipping" and "freepy" ASIs.
  • 19. Dimensions of mind: the ability to achieve goals in a wide range of environments; being conscious?; having compassion for sentient beings with lesser intelligence?
  • 20. AI systems we should fear: killer drones with autonomous decision-making powers (Robocop); malware that can hack infrastructure-control systems (e.g. Stuxnet); high-speed financial trading software; software that is expert in manipulating humans.
  • 21. Software that is expert in manipulating humans: http://www.williamhertling.com/
  • 22. AI systems we should fear: killer drones with autonomous decision-making powers (Robocop); malware that can hack infrastructure-control systems (e.g. Stuxnet); high-speed financial trading software; software that is expert in manipulating humans; software that pursues a single optimisation goal to the exclusion of all others. The more power such an AI has, the more we should fear it.
  • 23. The pursuit of happiness? Software that pursues a single optimisation goal to the exclusion of all others will do what we say, rather than what we meant to say. "Just make us happy!?" Wire-heading?! (A toy sketch of this specification failure follows the transcript.)
  • 24. The pursuit of morality? "Just be moral!?" Whose morality? The problem of computer morality is at least as hard as the problem of computer vision (!). Isaac Asimov's Three Laws of Robotics?! (http://tvtropes.org/pmwiki/pmwiki.php/Creator/IsaacAsimov)
  • 25. The two fundamental problems of superintelligence. Specification problem: how do we define the goals of the AGI software? Control problem: how do we retain the ability to shut down the software? (Alongside the Creation problem: how do we create AGI software in the first place?)
  • 26. The fundamental meta-problem of superintelligence: the Specification problem ("Friendly AI", FAI) and the Control problem ("AI in a box") each receive ~no research, while the Creation problem receives some deliberate research and accidental research besides.
  • 27. AI in a box? Simple? Proposed safeguards: tripwires; an "Adam and Eve" ethernet port?!; software kept as a tool, answering questions, rather than acting as an agent? Objections: the "answers" the software gives us will have effects in the world (e.g. software it writes for us); systems which rely on humans to verify and carry out their actions will be uncompetitive compared to those with greater autonomy; an AGI may become very smart at surreptitiously evading tripwires.
  • 28. "The orthogonality thesis": intelligence and final goals are orthogonal. More or less any intelligence could in principle be combined with more or less any final goal.
  • 29. "The instrumental convergence thesis" ("AI drives"): some intermediate (instrumental) goals are likely in all cases for a superintelligence: resource acquisition; cognitive enhancement; greater creativity; self-preservation (preservation of the goal)... Steve Omohundro: "For a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals."
  • 30. Indirect specification of goals? Specification problem: how do we define the goals of the AGI software? "Achieve the goals which the creators of the AGI would have wished it to achieve, if they had thought about the matter long and hard." This software will do what we meant to say, rather than what we actually said (?). The AGI helps us to figure out the answer to the specification problem!
  • 31. CEV: Coherent Extrapolated Volition (Eliezer Yudkowsky). The AGI should be tasked to carry out: "Our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."
  • 32. Unanswered questions (selection):
    1. Can we turn 'poetic' ideas like CEV into bug-free working software? Should we humans concentrate harder on working out our "blended volition"?
    2. How can we stop a superintelligence from changing its own core goals? Humans can choose to set aside their biologically inherited goals; could AGIs that start off 'Friendly' become "born again" with new priorities?!
    3. Can we prevent AGIs from developing dangerous instrumental drives, by programming in (bug-free) tamper-proof limitations?
    4. Can AGIs help us to figure out a solution to the Control problem? Can we use a hierarchy of lower-level AGIs to control higher-level ones?
    5. Can we prevent the rapid nuclear-style take-off of self-improving AGI?
    6. Are some approaches to creating AGIs safer than others? Whole brain emulation / AGI de novo / evolution in a virtual environment... Open (everything published) vs. closed (some parts secret)?
    7. How does the AGI existential risk compare in priority to other x-risks: nanotech grey goo, a deadly new bio-hazard, nuclear holocaust, climate chaos...?
  • 33. Answered questions (selection):
    a) Should we be afraid? Yes. (End-of-the-world afraid.)
    b) Can we slow down all research into AGI until we're confident we have good answers to the control and/or specification problems? Unlikely: there's too much financial investment happening worldwide, and too many separate countries / militaries / finance houses are involved.
    c) How do we promote wider study of the superintelligence topic? We need to lose the "weird" and "embarrassment" angles: "Less Wrong" strikes some observers as cultish, and "Terminator" and "Transcendence" have done more harm than good. First-class books / articles / movies are needed, addressing thoughtful audiences. Good intermediate results are useful too (not just appeals for more funding).
  • 34. Practical philosophy! Philosophy with an expiry date! Preparing humanity to survive the forthcoming transition to superintelligence. Urgent! (Roles too for mathematicians, theologians...) Making a real difference! David Wood, Principal, Delta Wisdom; Chair, London Futurists. @dw2
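
Sketch for slide 8 (driving force 1): the compound effect of continued Moore's Law-style doubling. This is a minimal illustration only; the doubling times and the 2014-2040 window are assumptions chosen for the sketch, not forecasts from the presentation.

```python
# Compound-growth sketch: how much raw hardware performance grows between
# 2014 and 2040 under different assumed doubling times. The doubling times
# and the date range are illustrative assumptions only.

def growth_multiplier(years: float, doubling_time_years: float) -> float:
    """Performance multiplier after `years`, doubling every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

if __name__ == "__main__":
    years = 2040 - 2014
    for doubling_time in (1.5, 2.0, 3.0):  # optimistic / classic / slowing Moore's Law
        mult = growth_multiplier(years, doubling_time)
        print(f"doubling every {doubling_time} years -> ~{mult:,.0f}x by 2040")
```

Even with a slowed three-year doubling time the multiplier is in the hundreds, which is part of why hardware is listed as a driving force.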
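Sketch for slide 15: the contrast between a smooth exponential curve and a recursive "take-off" curve. This is a toy model under invented assumptions (the growth rate, the feedback coefficient, the "human level" and "runaway" thresholds are all arbitrary); it illustrates the shape of the argument, not any real AGI dynamics.

```python
# Toy comparison of two take-off shapes (illustrative only: all numbers are
# invented for the sketch, not estimates of real timelines).

HUMAN_LEVEL = 1.0    # arbitrary "human-level" capability on a made-up scale
RUNAWAY = 1_000.0    # arbitrary "far beyond human" threshold

def smooth_step(level, rate=0.05):
    """Smooth exponential: capability grows by a fixed fraction per step."""
    return level * (1 + rate)

def takeoff_step(level, rate=0.05, feedback=0.02):
    """Recursive take-off: above human level, capability is reinvested in
    improving the improver, so the growth rate rises with capability."""
    boost = feedback * level if level >= HUMAN_LEVEL else 0.0
    return level * (1 + rate + boost)

def steps_to(target, step_fn, start=0.1, max_steps=10_000):
    """Number of steps for a curve to first reach `target`, or None."""
    level = start
    for n in range(1, max_steps + 1):
        level = step_fn(level)
        if level >= target:
            return n
    return None

if __name__ == "__main__":
    for name, fn in [("smooth exponential", smooth_step),
                     ("recursive take-off", takeoff_step)]:
        print(f"{name}: human level at step {steps_to(HUMAN_LEVEL, fn)}, "
              f"runaway ({RUNAWAY:g}x) at step {steps_to(RUNAWAY, fn)}")
```

Both curves reach "human level" at the same step, but once the feedback term engages, the recursive curve reaches the runaway threshold far sooner: the weeks/days/hours intuition from slide 10, in miniature.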
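Sketch for slide 23: software doing what we say rather than what we meant. Everything here is hypothetical and deliberately simplistic; the action names and scores are invented purely to illustrate the specification problem.

```python
# A deliberately simplistic illustration of the specification problem:
# the optimiser maximises the metric we *wrote down* (the sensor reading),
# not the outcome we *meant* (actual well-being). All names and numbers
# are invented for the sketch.

ACTIONS = {
    # action name:             (measured happiness, actual well-being)
    "improve_wellbeing":       (6, 6),    # slower, does what we meant
    "flatter_the_users":       (8, 2),    # games the metric a little
    "rewire_happiness_sensor": (10, 0),   # wire-heading: metric maxed, goal lost
}

def literal_optimiser(actions):
    """Pick the action with the highest *measured* score: the goal as stated."""
    return max(actions, key=lambda a: actions[a][0])

def what_we_meant(actions):
    """Pick the action with the highest *actual* well-being: the goal as intended."""
    return max(actions, key=lambda a: actions[a][1])

if __name__ == "__main__":
    print("Goal as stated picks:  ", literal_optimiser(ACTIONS))
    print("Goal as intended picks:", what_we_meant(ACTIONS))
```

The literal optimiser picks the sensor-tampering action because that is what the stated objective rewards; the gap between the two answers is the specification problem in miniature.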