Extreme Simulation Scenarios

Presentation given by Amon Twyman, DPhil, to UKH+, 11th July 2009.

"Extreme Simulation Scenarios: Thinking about the promise, risk, and plausibility of AI & VR"

    Presentation Transcript

    • Extreme Simulation Scenarios
      Thinking about the promise, risk, and plausibility of AI & VR
      Amon Twyman, DPhil
    • A) Transhumanism, simulation, VR & AI
      B) ESS: Extreme Simulation Scenarios
      Whole Brain Emulation (AKA Uploading)
      Virtual Autonomous Zones
      Utility Fog
      C) Historical precedent
      D) Criticisms of ESS
      Technical arguments
      Moral arguments
      Metaphysical arguments
      Convergence, chaos, & unpredictability
      E) Assessment of ESS: Two types of bias
      Worldview bias
      Competence bias
      Errors of Judgment & Decision Making (JDM)
      Conflation of factors
      1. Promise
      2. Risk
      3. Technological plausibility
      4. Congruence with belief system
    • Transhumanism, simulation, VR & AI
    • ESS: Extreme Simulation Scenarios
      Whole Brain Emulation (AKA Uploading)
      Replication of CNS activity in a simulation at one or more of several functional levels.
      Virtual Autonomous Zones (VAZ or ‘Polis’)
      A VR which hosts AI or uploaded minds and acts as a sovereign “state” (terms from Hakim Bey & Greg Egan).
      Utility Fog
      Distributed nano-scale manipulators which, among other things, could instantiate VR in the “physical” world.
    • Whole Brain Emulation (AKA Uploading)
      Functional simulation of CNS activity, which according to functionalist or pattern identity theories would re-create phenomenological awareness.
      Simulation could be at one or more of the following functional levels:
      • Abstract function (meta-physiology)
      • Large-scale neural aggregates
      • Individual neurons
      • Molecular or atomic modelling
      • Sub-atomic (quantum) effects
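      To give a concrete sense of the “individual neurons” level, below is a toy leaky integrate-and-fire model in Python. It is purely illustrative: the parameter values are textbook placeholders, not values drawn from any real emulation project, and scaling from one such unit to the roughly 10^11 neurons and 10^14 synapses of a human CNS is exactly where the technical arguments later in this talk begin to bite.

      # Toy "individual neurons" level model: leaky integrate-and-fire (LIF).
      # Illustrative only; all parameter values are textbook placeholders.
      import numpy as np

      def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                       v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
          """Integrate dV/dt = (v_rest - V + R*I) / tau; spike and reset at threshold."""
          v = v_rest
          spikes, trace = [], []
          for t, i_in in enumerate(input_current):
              v += (v_rest - v + r_m * i_in) * (dt / tau)  # leaky integration step
              if v >= v_thresh:                            # threshold crossing
                  spikes.append(t * dt)                    # record spike time (s)
                  v = v_reset                              # reset membrane potential
              trace.append(v)
          return np.array(trace), spikes

      # A constant 2 nA input for 200 ms yields a regular spike train.
      trace, spikes = simulate_lif(np.full(2000, 2e-9))
      print(f"{len(spikes)} spikes in 200 ms")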
    • VAZ: Virtual Autonomous Zones
      After Hakim Bey’s Temporary Autonomous Zones – an abstract and ‘poetic’ concept.
      A VAZ (or ‘Polis’, after Greg Egan) would be a VR-platform capable of hosting large numbers of AIs or Uploads.
      The ‘autonomous’ part refers to such a system being physically and perhaps even politically self-sufficient. A world (or worlds) unto itself.
    • Utility fog
      Nano-robotic “swarms” which can rapidly reconfigure to create physical objects or environments, but apparently disappear when evenly distributed through a space.
      “Utility fog” & “Foglets” were terms coined by Dr John Storrs Hall, although the idea of nano-swarms has been around since at least 1964.
      Utility fog could, in principle, be the realization of Ivan Sutherland’s “Ultimate display”.
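      To make “reconfigure to create physical objects” concrete, here is a deliberately crude Python sketch (all names and dynamics invented for illustration): point agents standing in for foglets start uniformly scattered, then each homes in on an assigned point of a target shape, so an object “condenses” out of an evenly distributed fog. Real foglets would also need local-only sensing, mechanical linking, and authenticated control (see the security criticism under the technical arguments below).

      # Crude foglet toy: agents converge from a uniform scatter onto a circle.
      # Purely illustrative; real utility fog would require local sensing,
      # physical linking between foglets, and secure command channels.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      foglets = rng.uniform(0.0, 1.0, size=(n, 2))  # evenly spread: effectively invisible

      # Target "object": n points on a circle of radius 0.2 centred at (0.5, 0.5).
      theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
      target = 0.5 + 0.2 * np.column_stack([np.cos(theta), np.sin(theta)])

      for _ in range(100):
          foglets += 0.1 * (target - foglets)  # each foglet homes on its assigned slot

      print("mean distance to target:",
            np.linalg.norm(foglets - target, axis=1).mean())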
    • Historical precedent
      1950s-1960s:
      Turing & the Dartmouth Conferences
      Ivan Sutherland (“The Ultimate Display”)
      Happenings & Installation Art
      Timothy Leary & Psychedelia
      1990s-2000s:
      End of the “AI Winter”
      Video & Net.Art
      VPL & Autodesk
      Leary Redux
    • Criticisms of ESS
      Technical arguments
      Is this really a possible technology?
      Moral arguments
      Even if we can do this, should we?
      Metaphysical arguments
      Souls, wrath of the FSM, and other problems
      Convergence, chaos, & unpredictability
      Linear forecasting is no forecasting at all
    • Technical arguments
      How can we be sure we’re not leaving vital information out of a Whole Brain Emulation? Do molecular, atomic, or sub-atomic processes matter?
      In what sense could a VAZ be both technically feasible and autonomous? If it is built on existing networks, regulation and network security become issues. If not, what are we talking about? A lump of “cold” nano-machinery buried under Antarctic ice?
      How would you set up a Utility Fog that is impervious to the equivalent of “bluejacking”? Anything less could easily be fatal.
    • Moral arguments
      Moral arguments tend to focus on risk, but often on risks that are not wholly physical. Perhaps the best-known example is the bio-conservative Francis Fukuyama.
      Fukuyama is among those who claim that the current human condition is closely related to human dignity. Therefore enhancement = loss of dignity (alluding to metaphysical and, ironically, libertarian arguments).
      Critics of this kind, being inherently conservative, invariably focus on near-term biological enhancement scenarios.
    • Metaphysical arguments
      These objections also fall under the category of opposed or incompatible worldviews.
      Metaphysical objections can range from the subtle (e.g. is there a “soul” that would be missing from an Upload?) to the fundamental (e.g. God would disapprove – a variant of this being the “hubris argument”).
      And don’t forget the Simulation Theory… the idea that we already live in a vast simulation of some sort. Might such a reality impose its own constraints upon simulation?
      The answer to these is a combination of Occam’s Razor and a healthy dose of willingness to Question Authority.
    • Convergence, chaos & unpredictability
      Hans Moravec, Ray Kurzweil, and a number of others have described the Law of Accelerating Returns. This suggests extreme technological change in a short time frame.
      Even without resorting to chaos theory, we can see that technological and cultural change feed into each other. Actual technological development and implementation are almost certainly not as predictable as some transhumanists seem to think.
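      A minimal numerical illustration of that last point (in Python; the 18-month doubling time is an arbitrary placeholder, not a claim about any real trend): a linear forecast fitted to the first three years of an exponential trend underestimates year twenty by a factor of several hundred.

      # Linear extrapolation vs. exponential growth. Doubling time is a placeholder.
      def exponential(years, doubling_time=1.5):
          return 2 ** (years / doubling_time)

      def linear_forecast(years, horizon=3.0):
          # Naive forecast: extend the average growth rate of the first `horizon` years.
          slope = (exponential(horizon) - exponential(0)) / horizon
          return 1 + slope * years

      for y in (3, 10, 20):
          print(f"year {y:2d}: exponential {exponential(y):8.1f}x, "
                f"linear {linear_forecast(y):5.1f}x")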
    • Assessment of ESS: Two types of bias
      1. Worldview bias
      Assumptions or preferences of the assessor may discourage them from accepting particular conclusions.
      2. Competence bias
      The assessor may be unable to draw correct conclusions from the evidence despite being willing to do so.
    • Worldview bias
      An assessor may be logically impeccable and working with good evidence, and yet be unwilling to accept particular conclusions because of a priori beliefs.
      This is not necessarily a case of “pig-headedness”… the assessor may not even be aware of the larger belief system lying behind their specific objections.
    • Competence bias
      Various biases or logical flaws may cause an assessor to draw erroneous conclusions from the evidence to hand, despite a general openness to accepting whatever conclusions are produced.
      Such biases are often considered part of Judgment and Decision Making (JDM) research.
    • Errors of Judgment & Decision Making (JDM)
      • What is judgment?
      • What is Decision Making?
      • Advice and Trust
      • Heuristics & Biases
      • Heuristics that make us smart
      • DSS, optimised DM, & H+
    • Conflation of factors
      Promise
      Risk
      Technological plausibility
      Congruence with belief systems
    • 1. Promise
      Life and death no longer mediated by biology. An end to biological disease and scarcity.
      New vistas for exploration 1: Space exploration no longer limited by human “payload” factors.
      New vistas for exploration 2: Reconfiguration of perceptual apparatus will allow exploration of a broader psychological “reality space” (i.e. interpretation or phase space).
    • 2. Risk
      • “Zombie” Uploads
      • Grey Goo (requiring Blue Goo)
      • Totalitarian control
      • Terminator/SkyNet scenarios
      • Ecological disaster
      • Black Swans (unforeseen risks)
    • 3. Technological plausibility
      Is this a credible technology? E.g.:
      Can CNS activity be wholly modelled? Would that model capture all essential features?
      Might computational architecture limitations, economic factors or viruses/network security make a VAZ unfeasible?
      Could we manage the sheer complexity of interwoven networks of Foglets?
    • 4. Congruence with belief systems
      This is essentially the inclusion of Worldview Biases as a factor when assessing the value of a potential technology.
      What is your perspective, and how might that be colouring your assessment?
      Such explicit acknowledgment of personal perspective has been advocated within philosophy of science since WWII.
    • Thanks!