Extreme Simulation Scenarios

Presentation given by Amon Twyman, DPhil, to UKH+, 11th July 2009.
"Extreme Simulation Scenarios: Thinking about the promise, risk, and plausibility of AI & VR"

Extreme Simulation Scenarios
Thinking about the promise, risk, and plausibility of AI & VR
Amon Twyman, DPhil

A) Transhumanism, simulation, VR & AI
B) ESS: Extreme Simulation Scenarios
   - Whole Brain Simulation (AKA Uploading)
   - Virtual Autonomous Zones
   - Utility Fog
C) Historical precedent
D) Criticisms of ESS
   - Technical arguments
   - Moral arguments
   - Metaphysical arguments
   - Convergence, chaos, & unpredictability
E) Assessment of ESS: Two types of bias
   - Worldview bias
   - Competence bias
   - Errors of Judgment & Decision Making (JDM)
   - Conflation of factors
     1. Promise
     2. Risk
     3. Technological plausibility
     4. Congruence with belief system

Transhumanism, simulation, VR & AI

ESS: Extreme Simulation Scenarios
Whole Brain Emulation (AKA Uploading)
   Replication of CNS activity in a simulation at one or more of several functional levels.
Virtual Autonomous Zones (VAZ or ‘Polis’)
   A VR which hosts AI or uploaded minds and acts as a sovereign “state” (terms from Hakim Bey & Greg Egan).
Utility Fog
   Distributed nano-scale manipulators which, among other things, could instantiate VR in the “physical” world.

Whole Brain Simulation (AKA Uploading)
Functional simulation of CNS activity, which according to functionalist or pattern identity theories would re-create phenomenological awareness.
Simulation could be at one or more of the following functional levels:
- Abstract function (meta-physiology)
- Large-scale neural aggregates
- Individual neurons
- Molecular or atomic modelling
- Sub-atomic (quantum) effects

VAZ: Virtual Autonomous Zones
After Hakim Bey’s Temporary Autonomous Zones – an abstract and ‘poetic’ concept.
A VAZ (or ‘Polis’, after Greg Egan) would be a VR platform capable of hosting large numbers of AIs or Uploads.
The ‘autonomous’ part refers to such a system being physically and perhaps even politically self-sufficient. A world (or worlds) unto itself.

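As a minimal sketch of what simulation at the “individual neurons” functional level might involve, here is a leaky integrate-and-fire (LIF) neuron, one of the simplest standard single-neuron models. This is purely illustrative: the function name and all parameter values are arbitrary choices for the example, not anything from the presentation, and a real whole-brain emulation would need vastly richer models at whichever level was chosen.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, resistance=10.0):
    """Euler-integrate a leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step of size dt.
    Returns the list of spike times.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive for 100 time steps produces a regular spike train.
spike_times = simulate_lif([2.0] * 100)
```

Even this toy shows why the choice of functional level matters: the model says nothing about molecular or sub-atomic processes, so it implicitly assumes they carry no information essential to the neuron's function.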
Utility Fog
Nano-robotic “swarms” which can rapidly reconfigure to create physical objects or environments, but apparently disappear when evenly distributed through a space.
“Utility fog” and “Foglets” were terms coined by Dr John Storrs Hall, although the idea of nano-swarms has been around since at least 1964.
Utility fog could, in principle, be the realization of Ivan Sutherland’s “Ultimate Display”.

Historical precedent
1950s-1960s:
- Turing & the Dartmouth Conferences
- Ivan Sutherland (“The Ultimate Display”)
- Happenings & Installation Art
- Timothy Leary & Psychedelia
1990s-2000s:
- End of the “AI Winter”
- Video & Net.Art
- VPL & Autodesk
- Leary Redux

Criticisms of ESS
Technical arguments
   Is this really a possible technology?
Moral arguments
   Even if we can do this, should we?
Metaphysical arguments
   Souls, wrath of the FSM, and other problems
Convergence, chaos, & unpredictability
   Linear forecasting is no forecasting at all

Technical arguments
How can we be sure we’re not leaving vital information out of a Whole Brain Emulation? Do molecular, atomic, or sub-atomic processes matter?
In what sense could a VAZ be both technically feasible and autonomous? If it is based on existing networks, regulation and network security become issues. If not, what are we talking about? A lump of “cold” nano-machinery buried under Antarctic ice?
How would you set up a Utility Fog that is impervious to the equivalent of “bluejacking”? Anything less could easily be fatal.

Moral arguments
Moral arguments tend to focus on risk, but often are not concerned with wholly physical risks. Perhaps the best example is the bio-conservative Francis Fukuyama.
Fukuyama is among those who claim that the current human condition is closely tied to human dignity, and therefore that enhancement means a loss of dignity (alluding to metaphysical and, ironically, libertarian arguments).
These kinds of critics, being inherently conservative, invariably focus on near-term biological enhancement scenarios.

Metaphysical arguments
These also fall under the category of opposed or incompatible worldviews.
Metaphysical objections can range from the subtle (e.g. is there a “soul” that would be missing from an Upload?) to the fundamental (e.g. God would disapprove – a variant of this being the “hubris argument”).
And don’t forget the Simulation Theory… the idea that we already live in a vast simulation of some sort. Might such a reality impose its own constraints upon simulation?
The answer to these is a combination of Occam’s Razor and a healthy dose of willingness to Question Authority.

Convergence, chaos & unpredictability
Hans Moravec, Ray Kurzweil, and a number of others have described the Law of Accelerating Returns, which suggests extreme technological change in a short time frame.
Even without resorting to chaos theory, we can see that technological and cultural change feed into each other. Actual technological development and implementation are almost certainly not as predictable as some transhumanists seem to think.

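The arithmetic behind the Law of Accelerating Returns can be made concrete with a toy sketch: if a capability doubles every fixed period, its growth over a few decades dwarfs any linear extrapolation. The two-year doubling period here is an illustrative assumption for the example, not a figure from the presentation.

```python
def capability_after(years, doubling_period=2.0):
    """Relative capability after `years`, assuming fixed-period doubling."""
    return 2.0 ** (years / doubling_period)

# Over 40 years at a 2-year doubling period: 2**20 = 1,048,576x.
exponential_growth = capability_after(40)

# A linear extrapolation from the first doubling guesses only ~21x,
# which is the intuition the Law of Accelerating Returns argues against.
linear_guess = 1 + 40 / 2.0
```

Of course, as the slide itself cautions, a smooth exponential is a model, not a forecast: feedback between technological and cultural change makes actual trajectories far less predictable.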
Assessment of ESS: Two types of bias
1. Worldview bias
   Assumptions or preferences of the assessor may discourage them from accepting particular conclusions.
2. Competence bias
   The assessor may be unable to draw correct conclusions from the evidence despite being willing to do so.

Worldview bias
An assessor may be logically impeccable and working with good evidence, and yet be unwilling to accept particular conclusions because of a priori beliefs.
This is not necessarily a case of “pig-headedness”… the assessor may not even be aware of the larger belief system lying behind their specific objections.

Competence bias
Various biases or logical flaws may cause an assessor to draw erroneous conclusions from the evidence to hand, despite a general openness to accepting whatever conclusions are produced.
Such biases are often considered part of Judgment and Decision Making (JDM) research.

Errors of Judgment & Decision Making (JDM)
- What is judgment?
- What is Decision Making?
- Advice and Trust
- Heuristics & Biases
- Heuristics that make us smart
- DSS, optimised DM, & H+

Conflation of factors
1. Promise
2. Risk
3. Technological plausibility
4. Congruence with belief systems

1. Promise
Life and death no longer mediated by biology. An end to biological disease and scarcity.
New vistas for exploration 1: space exploration no longer limited by human “payload” factors.
New vistas for exploration 2: reconfiguration of perceptual apparatus will allow exploration of a broader psychological “reality space” (i.e. interpretation or phase space).

2. Risk
- “Zombie” Uploads
- Grey Goo (requiring Blue Goo)
- Totalitarian control
- Terminator/SkyNet scenarios
- Ecological disaster
- Black Swans (unforeseen risks)

3. Technological plausibility
Is this a credible technology? E.g.:
- Can CNS activity be wholly modelled? Would that model capture all essential features?
- Might computational architecture limitations, economic factors, or viruses/network security make a VAZ unfeasible?
- Could we manage the sheer complexity of interwoven networks of Foglets?

4. Congruence with belief systems
This is essentially the inclusion of Worldview Biases as a factor when assessing the value of a potential technology.
What is your perspective, and how might that be colouring your assessment?
Such explicit acknowledgment of personal perspective has been advocated within the philosophy of science since WWII.

Thanks!
