
Is The Singularity Near

Presentation given in London, 20th Sept 2008, to UKH+.
"Nine key questions about the coming Technological Singularity"

  1. Is the Singularity Near? Nine key questions about the coming Technological Singularity
     David Wood, Software Director (day job); UKTA enthusiast, London
     + Many contributions from the floor! Varied expert and layperson viewpoints welcome
     20th Sept 2008
     [Chart: technology vs. time, with a question mark over the year 2058]
  2. A friendly critique of some of the ideas in Ray Kurzweil’s The Singularity is Near: When Humans Transcend Biology
  3. The nine key questions
     • Defining what we're talking about: What’s the relation between the various different notions of the Singularity?
  4. 1. Defining the Singularity
     • “The ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue” – John von Neumann (1950s)
     [Chart: technology vs. time, with a question mark over the year 2058]
  5. 1. Defining the Singularity
     • “What is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed” – Ray Kurzweil
     [Chart: technology vs. time, with a question mark over the year 2058]
  6. 1. Defining the Singularity
     • “When the first transhuman intelligence is created and launches itself into recursive self-improvement, a fundamental discontinuity is likely to occur, the likes of which I can’t even begin to predict” – Michael Anissimov
     [Chart: technology vs. time, with the year 2058]
  7. 1. Defining the Singularity
     • The advent of super-human general AI
     • AI smarter than humans, for sufficiently many purposes
       – Including the ability to design and build new AI
     • A trigger for recursive improvement
       – A likely trigger for fast (and then even faster) recursive improvement
     • We could in a very short timespan have super-super…-super-human general AI
     • We are unlikely to be able to begin to conceive what will happen next (Vernor Vinge)
  8. Some related ideas (i)
     • Exponential improvement in technology and/or computing power and/or knowledge
     • But exponential progress often stalls
       – Speed of passenger airliners
       – Heights of skyscrapers
       – The human population
     • Example of Kurzweil over-enthusiasm:
       – “By the end of this decade, computers will disappear as distinct physical objects, with displays built into our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality” (p. 105)
  9. Some related ideas (i)
     • Exponential improvement in technology and/or computing power and/or knowledge
     • This is an independent idea
       – We could reach the Singularity slowly
       – (Though the take-off, when it comes, would still be spectacular)
       – (Like reaching critical mass: possible slow approach, but we still get a super-fast explosion)
     • Pre-Singularity exponential improvements in technology make the Singularity more likely, but aren’t necessary to the argument
 10. Some related ideas (ii)
     • Computers drastically transform the economy – doing more and more work in it
     • This is an independent idea
       – If/when the Singularity comes, the economic transformation is likely to be very considerably more drastic
     • Society will change in completely unpredictable ways, with existing structures breaking down, as in the Industrial Revolution
     • I think we should concentrate on the more specific idea of super-human general AI
 11. AI singularity vs. nano-factory singularity
     • A general purpose nano-factory can manufacture goods better than humans can
     • So it can manufacture an even better general purpose nano-factory…
       – Recursive improvement
     • Economics would be transformed overnight
     • An interesting topic!
       – But probably not so scary/profound as the advent of super-human general AI
     • (The two developments could, however, work in parallel…)
 12. Eric Drexler, 1989
     • “If you can build genuine AI, there are reasons to believe that you can build things like neurons that are a million times faster”
     • “That leads to the conclusion that you can make systems that think a million times faster than a person”
     • “With AI, these systems could do engineering design”
     • “Combining this with the capability of a system to build something that is better than it, you have the possibility for a very abrupt transition”
     • “This situation may be more difficult to deal with even than nano-technology, but it is much more difficult to think about it constructively”
 13. Hard vs. soft take-off
     • AI smarter than humans, for sufficiently many purposes
       – Including the ability to design and build new AI
     • A trigger for recursive improvement
       – A likely trigger for fast (and then even faster) recursive improvement
     • We could in a very short timespan have super-super…-super-human general AI: hard take-off
     • This assumes there’s only one significant barrier to building better AI – but there might be several
       – Super-human general AI mightn’t automatically lead to super-super…-super general AI
       – In the latter case, we can talk about a “soft take-off”
     • (See the toy sketch after the next slide)
 14. Hard vs. soft take-off
     [Chart: technology vs. time, with a question mark over the year 2058]
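One way to make the hard/soft distinction concrete is as a toy recurrence on AI capability. This is a minimal sketch under assumptions of my own (the 10% gains and the functional forms are illustrative, not figures from the talk): in the hard case, each generation's design gain grows with the designer's own capability; in the soft case, every redesign yields the same modest gain, e.g. because several independent bottlenecks remain.

```python
# Toy model of recursive self-improvement: hard vs. soft take-off.
# All coefficients here are illustrative assumptions, not figures from the talk.

def run(improvement, generations=20, capability=1.0):
    """Track capability (in multiples of human level) across generations,
    where each generation redesigns its successor and `improvement` maps
    current capability to the multiplicative gain achieved."""
    trace = [capability]
    for _ in range(generations):
        capability *= improvement(capability)
        trace.append(capability)
    return trace

# Hard take-off: smarter designers win disproportionately bigger gains,
# so growth is super-exponential and quickly explodes.
hard = run(lambda c: 1.0 + 0.1 * c)

# Soft take-off: each redesign clears the same modest hurdle (10% gain),
# so growth is merely exponential.
soft = run(lambda c: 1.1)

print("hard:", ["%.3g" % c for c in hard[::5]])  # blows up within ~20 generations
print("soft:", ["%.3g" % c for c in soft[::5]])  # steady compound growth
```

Whether the real curve looks like the first trace or the second is exactly the open question of this slide: it depends on whether one barrier falls or several remain.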
 15. The nine key questions
     • Defining what we're talking about: What’s the relation between the various different notions of the Singularity?
     • Are there arguments in principle against the Singularity?
     • What are the critical bottleneck determinants of development towards the Singularity?
     • What are the lessons we should learn from what’s been called “the embarrassing history of AI”?
     • Is the Singularity a plausible occurrence within (say) the next 50 years?
     • What’s the likeliest timescale for the Singularity?
     • Should we be doing everything in our power to prevent the Singularity from happening?
     • Can we influence the Singularity to make its outcome more likely to be good for humanity rather than disastrous?
     • What are the biggest uncertainties with the Singularity?
 16. 2. Arguments in principle against the possibility of the Singularity?
     • “Computers will never be as intelligent as humans”
     • “Progress with AI has never been as fast as AI enthusiasts predicted”
     • “There’s something mystical or vitalist about human self-awareness and consciousness that can never be captured in a computer”
     • “Exponential progress always slows down after a while – skyscraper heights, passenger airline speed…”
     • “An embodied super-AI will run out of resources or energy before achieving anything God-like”
     • “The likelihood is so small that discussing it is a waste of our energy and a distraction…”
 17. Could we keep the AI locked up?
     • “An embodied super-AI will run out of resources or energy before achieving anything God-like”
     • Could we deliberately avoid connecting the super-AI to the wider network?
     • “AI-Box experiment” – Eliezer Yudkowsky
     • The super-AI may well be an expert on human psychology
       – And would concoct extremely convincing reasons for why it should be connected to the wider network
     • In any case, we have to contemplate the accidental route to the Singularity – when the AI researchers get a better result than they expected!
 18. 3. Critical bottleneck determinants?
     • Improving the computing hardware:
       – We’ve already had forty years of Moore’s Law
       – That’s been driven by technology expertise coupled with strong commercial incentives
       – Experts generally expect Moore’s Law to last at least another 10 years
       – And there are good prospects for new types of computer power even after that (3D chips)
     • Human brain computational power is c. 100 TIPS (100 x 10^12 instructions per second)
       – Say this is an underestimate – use 10^16 instructions per second
       – Computers with this power will cost $1000 by c. 2020-25 (see the sketch below)
       – Assuming that sufficient financial incentives for incrementally improved computing power remain in force
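To make the extrapolation concrete, here is a back-of-envelope sketch. The 2008 baseline of roughly 10^12 operations per second per $1000 and the doubling periods are assumptions of mine, chosen to show how an estimate in the slide's 2020-25 region can arise:

```python
import math

# Back-of-envelope Moore's Law extrapolation. Both inputs below are
# assumptions for illustration, not figures from the slide.
ops_per_1000_dollars_2008 = 1e12   # ~10^12 ops/s per $1000 in 2008 (assumed)
target_ops = 1e16                  # generous estimate of human-brain ops/s
doubling_time_years = 1.5          # assumed Moore's Law doubling period

doublings = math.log2(target_ops / ops_per_1000_dollars_2008)  # ~13.3
year = 2008 + doublings * doubling_time_years
print(f"{doublings:.1f} doublings -> $1000 buys 10^16 ops/s around {year:.0f}")
# Prints ~2028; with a faster 12-month doubling period the same arithmetic
# gives ~2021. Together these roughly bracket the slide's 2020-25 window.
```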
 19. 3. Critical bottleneck determinants?
     • Improving the computing hardware:
       – We’ve already had forty years of Moore’s Law
       – That’s been driven by technology expertise coupled with strong commercial incentives
       – Experts generally expect Moore’s Law to last at least another 10 years
       – And there are good prospects for new types of computer power even after that (3D chips)
     • Improving the software (this is my guess for the critical bottleneck):
       – Wirth’s Law: “Software gets slower, faster than hardware gets faster”
       – Just calculating faster doesn’t make you wiser
     • Understanding the human brain and human mind:
       – May or may not turn out to be particularly useful
 20. 4. Learnings from the history of AI?
     • “Progress with AI has never been as fast as AI enthusiasts predicted”
     • => “Computers will never be as intelligent as humans”
     • Or, it will take an awfully long time
       – So it’s mainly a distraction to discuss it
     • Alan Turing predicted in 1950 that, by the year 2000, machines would be able to fool 30% of human judges during a 5-minute test
     • AI researchers need to be optimistic, to ensure they get funded, but progress is poor
 21. 4. Learnings from the history of AI?
     • “Progress with AI has never been as fast as AI enthusiasts predicted”
     • Powerful algorithms eventually achieved for:
       – Chess (but not yet Go)
       – Algebra (including finding proofs of unsolved theorems)
       – Real-time navigation
       – Driving cars across deserts
       – Quadrupeds that walk over rough terrain (“Big Dog”)
       – John Koza’s Invention Machine, which has won patents
       – Composing music in the style of named composers
       – Face recognition
       – Language translation (Google)
       – Playing twenty questions (Burger King)…
 22. Possible unexpectedly fast progress?
     • Technology improvements sometimes happen outside the mainstream – somewhat “under the radar” but with strong financial incentive
     • “Adult entertainment” industry
     • Computer games
       – Incredible graphics (GPUs); physics engines; “AIs”
     • CAPTCHA defeaters (promoting spam)
     • Search boxes that understand natural language
       – Huge investment by Google, Microsoft…
     • Virtual worlds: Ben Goertzel
       – Bots there have fewer items of “accidental real-world complexity” to worry about
     • iPhone and other connected super-smart devices
 23. 5. Is the Singularity plausible before (say) 2058?
     Vernor Vinge’s five possible routes to the Singularity (hard to rule them all out!):
     • The AI Scenario: We create superhuman artificial intelligence (AI) in computers.
     • The IA Scenario: We enhance human intelligence through human-to-computer interfaces – that is, we achieve intelligence amplification (IA).
     • The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains (smart drugs or otherwise).
     • The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
     • The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.
 24. 5. Is the Singularity plausible before (say) 2058?
     • Assuming there’s no major societal breakdown
     • Assuming there’s nothing mystical about human-level intelligence
     • Assuming at least some continuing improvements in both hardware and software
     • … I see no reason to rule out the possibility of the Singularity in this kind of timescale
     • So it’s worth at least some attention from us!
 25. “Maybe it needs 100 years”?
     • 100 years of progress at the present rate of achievement…
     • Could be achieved in just 36 calendar years
     • If the rate of achievement doubles every 10 years!
     • (The arithmetic is checked below)
     [Charts: rate of achievement vs. time. Left: a constant rate R from 2008 to 2108 accumulates 100R of progress. Right: a rate rising from R in 2008 to R(1 + 3.6) = 4.6R in 2044 accumulates (R + 4.6R) × 36/2 = 100.8R ≈ 100R of progress in just 36 years]
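As a check on the slide's arithmetic, here is the calculation spelled out. The linear model in the first line (the rate gains a further R per decade) is what the chart's trapezoid appears to use; the second line is my own alternative under genuinely exponential doubling:

```latex
% Linear model, as on the slide: rate grows by R per decade.
\text{rate}(t) = R\left(1 + \tfrac{t}{10}\right), \qquad
\int_0^{36} R\left(1 + \tfrac{t}{10}\right) dt
  = R\left(36 + \tfrac{36^2}{20}\right) = 100.8\,R \approx 100\,R .

% True exponential doubling every decade reaches 100R even sooner:
\int_0^{T} R\,2^{t/10}\,dt = \frac{10R}{\ln 2}\left(2^{T/10} - 1\right) = 100\,R
\;\Longrightarrow\; T = 10\log_2\!\left(1 + 10\ln 2\right) \approx 30\ \text{years.}
```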
 26. Example of accelerating rate of achievement
     • Sequencing the human genome
     • Project started in 1990
       – Cost of sequencing a base pair: c. $10
       – “It would take 1000 years to finish”
     • Project forecast to finish in 15 years (2005)
     • Actually finished in 2003
       – Cost of sequencing a base pair: c. $0.02
     • HIV virus took 15 years to sequence
     • SARS took 31 days
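Taking the slide's endpoints at face value (1990 to 2003), the implied decline in per-base-pair cost works out as follows:

```latex
\frac{\$10}{\$0.02} = 500, \qquad
\log_2 500 \approx 9\ \text{halvings in 13 years}
\;\Longrightarrow\; \text{cost halved roughly every } 1.4\ \text{years.}
```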
 27. There’s more to acceleration than Moore’s Law
     • First computers were designed on paper and built by hand
       – Later computers benefited from computer-aided design and computer-aided manufacture
       – Even later computers will have even better computer-aided design and manufacture
     • Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allow more complex software to be created more quickly
     • Technology reduces prices, which allows better technology to be used more widely, resulting in more people improving the technology…
 28. 6. Most likely timescale?
     • I’m not qualified to say
     • It depends on so many unknowns – and on where society decides to invest effort
     • “A new Manhattan project”?
     • Perhaps already being carried out in secret
       – In China / Singapore / …
       – Inside Google / Microsoft / Apple / Nokia / …
       – By DARPA…
     • My guess: software will improve a great deal in 20 years, with focused effort
     • Add 10 years for contingency => 2038 ?!
 29. 7. Oppose the Singularity with all our effort? (Anti-Manhattan project?)
     • Downside of bad Singularity is awful: disaster
     • Much worse than Terminator movies…
 30. 7. Oppose the Singularity with all our effort? (Anti-Manhattan project?)
     • Downside of bad Singularity is awful: disaster
       – Much worse than Terminator movies…
     • Upside of Singularity is debatable
       – New cures, new medicines, new mind-trips
       – Advice on geo-engineering and positive climate control
       – Societal disruption
     • “Better red than dead” – accept drawbacks of vigorously controlling all experiments in AI, in order to avoid the risk of a bad Singularity?
     • But super-AI could improve “accidentally” or “surreptitiously” – so we must create “friendly super-AI” before “accidental super-AI” gets here
     • “Moore’s Law of mad scientists” – Yudkowsky
 31. 8. Can we influence the Singularity?
     • Can we influence development towards the Singularity to make its outcome more likely to be good for humanity rather than disastrous?
     • Or are we just passive spectators?
     • We can contribute to the battle of ideas
       – By debating and debugging them
       – By communicating them – e.g. blogging / press
       – By bringing them into the mainstream
       – By influencing the policy of think-tanks, governments, universities, business researchers
     • We can consider and possibly promote the ideas of “beneficial super-AI” and “friendly AI”…
 32. The project to create friendly AI
     • AI that will in all circumstances maintain a respect for humans, and will avoid harming humans
       – Even through multiple generations of self-reprogramming
       – Even though the AI will have full access to all its source code, and the ability to change any of it
       – Even though the AI will observe many bad faults in humans (and will be strongly tempted to “fix things”)
     • The AI will not want to change this part of its programming
       – Just as Gandhi would not want to take a pill that would turn him into a murderer
     • (Depends on there being no serious bugs in this part of the software!)
 33. The project to merge humans with super-AI
     • AI that will in all circumstances maintain a respect for humans, and will avoid harming humans
       – Even through multiple generations of self-reprogramming
       – Even though the AI will have full access to all its source code, and the ability to change any of it
       – Even though the AI will observe many bad faults in humans (and will be strongly tempted to “fix things”)…
     • Alternatively, future humans may absorb and directly interact with ever-improved AI systems
       – “The man with three brains…”
       – So super-AIs will find it much easier to respect these future humans (“transhumans”)
     • So we should study IA as much as we study AI
 34. When humans transcend biology
 35. Summary (from Michael Anissimov’s blog, Fri 19th Sept 2008) (1/2)
     • Superintelligence is possible, it could have a huge impact on the world, and our actions now may influence the final outcome
     • No fixed timeline
     • No argument that all of history has been predeterministically building up to this point
     • No argument that technological progress is slowing down, speeding up, moving sideways, or any other such specific claims
     • No particular attention given to pre-transhuman intelligence technologies except insofar as they influence when and how superintelligence is created
     • Central focus on superintelligence as a distinct technological milestone
     • Acceptance of the point that deliberately designed AGI may exist before neuromorphic AGI
 36. Summary (from Michael Anissimov’s blog, Fri 19th Sept 2008) (2/2)
     • Acceptance of the fact that we might completely blow ourselves up before the Singularity hits
     • Acceptance of the fact that the first superintelligence might not give a damn about us, and just decide to rearrange our atoms into something more to its liking
     • No magical rosy scenario where human upgrades and AGI research coincidentally fuse seamlessly in a way that happens to completely benefit mankind
     • Acknowledgment of the Everest-sized challenge of creating AGI that doesn’t eliminate us outright, rather than hand-waving it over with “maintaining an open free-market system…”
     • Superintelligence is possible, it could have a huge impact on the world, and our actions now may influence the final outcome
 37. 9. The biggest uncertainties with the Singularity?
     • Defining what we're talking about: What’s the relation between the various different notions of the Singularity?
     • Are there arguments in principle against the Singularity?
     • What are the critical bottleneck determinants of development towards the Singularity?
     • What are the lessons we should learn from what’s been called “the embarrassing history of AI”?
     • Is the Singularity a plausible occurrence within (say) the next 50 years?
     • What’s the likeliest timescale for the Singularity?
     • Should we be doing everything in our power to prevent the Singularity from happening?
     • Can we influence the Singularity to make its outcome more likely to be good for humanity rather than disastrous?
     • What are the biggest uncertainties with the Singularity?
