
Hyperapocalypse rev


A summary of an argument for the claim that "hyperplastic" agents that can modify every aspect of their structure would not be understandable as rational valuers, desirers or believers. They would be agents "outside the space of reasons".


  1. HYPERAPOCALYPSE: A HOLE IN THE SPACE OF REASONS?
  2. Can we situate all agents in the space of reasons? • It can be argued that there are certain rational norms that generalize from actual human persons to hypothetical posthumans, given that the latter would be intelligent, goal-driven systems. Such beings would thus count as rational subjects able to evaluate their behaviour according to explicit goals and generalizable rules (Brassier 2011). Brassier, R. 2011. “The View from Nowhere”. Identities: Journal for Politics, Gender and Culture 17: 7–23.
  3. Consider a Hyperplastic Agent… • Hyperplastic agents have an excellent model of their own hardware and software and the power to modify them. • S is a hyperplastic agent iff S has unlimited capacity to modify its own physical and functional structure. • If s is a physically typed state of S that occurs in a range of contexts {ci}, then s can occur in a set of contexts {cj} in some modified version of S, where {ci} ≠ {cj}. (A toy illustration of such self-modification appears after the slide list.)
  4. The Problem We can situate a hyperplastic agent S in the space of reasons only if attributions of values, desires and beliefs would be robust under different iterations of S. But there are reasons to think that this would not be possible…
  5. A maxim of Rational Desire Consider: RD: If a rational subject x has some overriding value v, x should want to be motivated by the desire that v is realized. (One possible formal rendering appears after the slide list.)
  6. Omohundro on AI Goals Steve Omohundro assumes that RD applies to self-modifying AIs of the kind that would qualify as hyperplastics. Such entities would thus be rationally motivated to ensure that their values survive future modifications. “So how can [a self-modifying AI] ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit.” Omohundro, S. M. 2008. “The Basic AI Drives”. Frontiers in Artificial Intelligence and Applications 171: 483. (A toy sketch of the implicit/explicit contrast appears after the slide list.)
  7. The Problem of Holism • The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary. There is nothing about a circle or a square or the colour blue that signifies “station”. It is only the correspondence between the relations among the icons and the relations among the stations of the metro system it represents that does this. • Likewise, the value expressed by an internal state s under some configuration of S must depend holistically on some inner context c (like a cortical map) in which s is related to many other states of a similar kind (Fodor and Lepore 1992). • Thus the values and contents of states like s are liable to depend on their relations to other inner states within specific contexts. (A toy gloss appears after the slide list.)
  8. Clamping Q: How do we ensure that system state s means value v*? A: “Clamp” s to contexts in which s means v*, and NOT v**, v***, etc.!
  9. Problem: each [s + c] is just another system state which can occur in further disparate contexts in modified versions of S, so exactly the same problem arises for those states as for our original s.
  10. A hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem will be replicated for any combination of states [s + c . . . + . . .] that the machine could assume within its configuration space, since each of these states will have to be repeatable in yet other contexts, and so on. Any answer to the question “What are the contexts in which s means value v?” thus yields a list of states for which the same problem arises. The problem has no solution in a form that could be used to clamp S’s internal states in line with RD. (A schematic sketch of this regress appears after the slide list.)
  11. Conclusion: If this schematic argument is sound, then we have reason to think that a hyperplastic agent would not be able to arrange itself so that it conforms to a value or desire under future iterations of itself. This is because none of its internal states could be relied upon to fix that desire in place. Could such a system have second-order desires of the kind specified in maxim RD? Well, if the system were smart it would “know” that it could not ensure that any desire would survive some self-modification: desires would just not “show up” in its internal sensor probes. So a smart hyperplastic would differ from the rational subjects we know: the normative claim RD would not be applicable to it. Likewise, attributing desires to other hyperplastics would not be a helpful way of predicting their actions, because desire attributions would not be robust under future iterations of the same being – some self-modifying tweak could always push its desires out of whack. How about beliefs? Well, here the situation seems even worse. On most conceptions of rational subjectivity, rational believers should adjust their beliefs only in the light of good reasons. But if beliefs locally supervene on internal structure, no hyperplastic could ensure that its doxastic commitments would survive some arbitrary self-tweak. So (once more) it seems that attributing beliefs to oneself or others would not be a good mind-reading strategy for a hyperplastic agent. The upshot of this is that the framework of belief-desire psychology and the normative practices that facilitate our interpersonal life might be inapplicable to hyperplastics. Hyperplastic agents would not be situated in the space of reasons.
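The following toy sketch is my own illustration (in Python, chosen only for concreteness) of the slide 3 definition of hyperplasticity, not anything from the original deck: the rule that governs the agent's transitions is itself part of its rewritable state, so the role a given physically typed state plays is not fixed across iterations of the agent.

```python
# Illustrative toy only: an "agent" whose update rule is ordinary, rewritable state.
# A hyperplastic agent, per slide 3, could rewrite any such component without limit,
# so a state token s may occur in quite different contexts {cj} in the modified system.

class ToyAgent:
    def __init__(self):
        self.state = {"s": 0}                               # a physically typed state s
        self.update_rule = lambda st: {"s": st["s"] + 1}    # the current context for s

    def step(self):
        self.state = self.update_rule(self.state)

    def self_modify(self, new_rule):
        # Unlimited self-modification: replace the very rule that governs future behaviour.
        self.update_rule = new_rule


agent = ToyAgent()
agent.step()
print(agent.state)                                   # {'s': 1}
agent.self_modify(lambda st: {"s": st["s"] * -10})
agent.step()
print(agent.state)                                   # {'s': -10}: same state type, new role
```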
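For readers who want the RD maxim of slide 5 in symbols, here is one possible regimentation; the notation (including the deontic operator O for “ought”) is my own gloss rather than the author's.

```latex
% RD, rendered with a deontic operator O (my notation, not the slide's):
% if v is an overriding value for a rational subject x, then x ought to want
% to be motivated by the desire that v is realized.
\[
\mathrm{RD}:\quad
\forall x\,\forall v\;
\Bigl[\bigl(\mathrm{Rational}(x)\wedge\mathrm{OverridingValue}(x,v)\bigr)
\;\rightarrow\;
\mathbf{O}\,\mathrm{Desire}\bigl(x,\ \mathrm{Realized}(v)\bigr)\Bigr]
\]
```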
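Omohundro's contrast on slide 6 between objectives that are “only implicit in the structure of a complex circuit or program” and objectives made explicit can be given a rough flavour with the following sketch (my own toy example, not Omohundro's formalism).

```python
# Illustrative only: an objective buried in procedural code vs. one stored as
# explicit, inspectable data that a self-modifying system could preserve.

# Implicit objective: "prefer cheaper options" lives only in how this function
# happens to compare candidates; a later rewrite could silently lose it.
def choose_implicit(options):
    return min(options, key=lambda o: o["cost"])

# Explicit objective: the goal is a first-class piece of state that the system
# can reflect on, and check against, when it modifies its own decision procedure.
objective = {"minimize": "cost"}

def choose_explicit(options, objective):
    return min(options, key=lambda o: o[objective["minimize"]])

options = [{"name": "a", "cost": 3}, {"name": "b", "cost": 1}]
print(choose_implicit(options)["name"])              # b
print(choose_explicit(options, objective)["name"])   # b
```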
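The metro-map analogy on slide 7 can be glossed with a small toy example (mine; the “degree signature” below is only a crude stand-in for the full pattern of relations): what an icon stands for is carried by its connections, not by its intrinsic shape or colour.

```python
# A toy gloss on the metro-map point: what an icon stands for is fixed by its
# pattern of connections, not by its intrinsic shape or colour. The "degree
# signature" below is a deliberately crude proxy for that relational pattern.

map_a = {"blue_circle": ["red_square"],
         "red_square": ["blue_circle", "green_dot"],
         "green_dot": ["red_square"]}

map_b = {"star": ["triangle"],
         "triangle": ["star", "hexagon"],
         "hexagon": ["triangle"]}

def degree_signature(graph):
    # Only the connection structure matters here; the icon names contribute nothing.
    return sorted(len(neighbours) for neighbours in graph.values())

print(degree_signature(map_a) == degree_signature(map_b))  # True: same structure, different icons

# Change the relations and the "same" icons now carry a different significance.
map_a_rewired = {"blue_circle": [],
                 "red_square": ["green_dot"],
                 "green_dot": ["red_square"]}
print(degree_signature(map_a) == degree_signature(map_a_rewired))  # False
```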
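Finally, the clamping regress of slides 8–10 can be sketched schematically (my construction, assuming the slides' framing): every attempt to fix the value of s by binding it to a context yields a new composite state for which the same question arises, so the procedure never bottoms out.

```python
# Schematic sketch of the regress in slides 8-10: "clamping" a state to a
# context yields a composite state that can itself recur in further contexts,
# so the question "in which contexts does it mean v*?" re-arises at every level.

from itertools import islice

def clamp(state, context):
    # Binding a state to a context that is meant to fix its value.
    return ("clamped", state, context)

def regress(state, contexts):
    # Each attempted fix produces a new composite state needing its own fix.
    for context in contexts:
        state = clamp(state, context)
        yield state

def fresh_contexts():
    # A hyperplastic agent can always realise its states in yet other contexts.
    i = 0
    while True:
        yield f"c{i}"
        i += 1

for level in islice(regress("s", fresh_contexts()), 3):
    print(level)
# ('clamped', 's', 'c0')
# ('clamped', ('clamped', 's', 'c0'), 'c1')
# ('clamped', ('clamped', ('clamped', 's', 'c0'), 'c1'), 'c2')
```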
