I, Robot

Slides for a talk given to a philosophical audience, on what a computational theory of consciousness might look like.

Transcript

  • 1. I, Robot Pat Hayes, IHMC, U. West Florida. CAP-2000
  • 2. I, Robot or What would it take to make a robot with a self?
  • 3. I, Robot or What would it take to make a robot with a sense of itself?
  • 4. Philosophical debate about consciousness • Maybe THIS is how consciousness works (yaddah, yaddah)…. • Pschaw! I can imagine something just like that without it being conscious. • I don’t think you can. • Oh no? Let me tell you, I can imagine something which is just like you, an exact copy right down to the atoms, and it behaves just like you and it even believes what you believe and wants what you want, but it’s not conscious. It’s just a zombie. So there. • That seems impossible to me. • You just haven’t got enough imagination, that’s all.
  • 5. Philosophical debate about consciousness • You just haven’t got enough imagination, that’s all. • ----------- • It’s hard to see quite how to argue against this claim directly, so rather than try to give SUFFICIENT conditions for consciousness, I’m going to sketch some NECESSARY conditions, to try to raise the imagination-jump bar a little higher.
  • 6. Philosophical debate about consciousness • You just haven’t got enough imagination, that’s all. • ----------- • It’s hard to see quite how to argue against this claim directly, so rather than try to give SUFFICIENT conditions for consciousness, I’m going to sketch some NECESSARY conditions, to try to raise the imagination-jump bar a little higher. • Basic idea is that consciousness requires a self.
  • 7. methodology • Want to give a functional account of what is essentially a matter of phenomenology • Danger of vacuous functional structure (Eg a C-box) • Some disciplinary rigor provided by requirement of evolutionary plausibility. No epiphanies. • Humans are complicated beasties, but we don’t have subjective reports from nonhumans. So we have to be willing to extrapolate to simpler cases.
  • 8. GOFCogSci Standard Model • Anything known is somehow internally represented as propositions expressed in a ‘language of thought’ • Senses keep internal world-description up to date • World-knowledge is used to plan, react, navigate, etc. • Awareness is restricted to content of LoT. • Cognitive activity involves ‘information processing’ in the LoT.
  • 9. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from.
  • 10. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from.
  • 11. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from. on(cup,table)
  • 12. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from. this was seen on(cup,table)
  • 13. (Diagram: a proposition P with example provenances attached: recorded in memory; registered by sense S; explanation of Q; confirmed by Q, R, ...; inferred from Q, R, ...)
  • 14. GOFCogSci Standard Model (with a small addition.) • Provenances are under the control of the machinery. • They are needed for truth maintenance, ie keeping track of corrections. • (philosophical aside) Knowing a set of propositions might involve more than just knowing their conjunction.
  • 15. GOFCogSci Standard Model (with a small addition.) • Provenances are under the control of the machinery. • They are needed for ‘truth maintenance’, ie keeping track of corrections. • (philosophical aside) Knowing a set of propositions might involve more than just knowing their conjunction. • (This solves the Problem of Mary, by the way.)
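
To make the provenance idea concrete, here is a minimal Python sketch (my own illustration, not anything shown in the talk) of beliefs that carry provenances and of the simple truth-maintenance step they make possible: when a belief is corrected, anything derived from it is withdrawn too. All class and function names are hypothetical.

```python
# Minimal sketch: propositions carrying provenances, plus naive truth
# maintenance that retracts beliefs whose support has been withdrawn.
from dataclasses import dataclass, field

@dataclass
class Belief:
    proposition: str                              # e.g. "on(cup, table)"
    provenance: str                               # e.g. "seen", "inferred", "told"
    supports: list = field(default_factory=list)  # propositions it was derived from

class BeliefStore:
    def __init__(self):
        self.beliefs = {}

    def add(self, proposition, provenance, supports=()):
        self.beliefs[proposition] = Belief(proposition, provenance, list(supports))

    def retract(self, proposition):
        """Withdraw a belief and, recursively, anything derived from it."""
        self.beliefs.pop(proposition, None)
        dependents = [b.proposition for b in self.beliefs.values()
                      if proposition in b.supports]
        for d in dependents:
            self.retract(d)

store = BeliefStore()
store.add("on(cup, table)", "seen")
store.add("cup is reachable", "inferred", supports=["on(cup, table)"])
store.retract("on(cup, table)")   # the correction propagates
print(store.beliefs)              # {} -- the derived belief goes with it
```

The point of the sketch is only that provenances are bookkeeping used by the machinery, not extra propositions the creature reasons about.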
  • 16. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself.
  • 17. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • Details get complicated. (Need a meta-theoretic self-description supported by a reflexive architectural layer…) (Diagram: Meta-management, Deliberative Reasoning, Reactive Mechanisms; from A. Sloman 1999.)
  • 18. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory?
  • 19. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory? • What does ‘I’ refer to? (Body, mind, soul, ego, Will,…?) Certainly not our own inference processes.
  • 20. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory? • What does ‘I’ refer to? (Body, mind, soul…?) Certainly not our own inference processes. • Are you the same “I” you were yesterday?
  • 21. “I look at those old movies, and I wonder how I did them. It was someone else who made them, not me. I can recognise part of me in them, but they were made by someone else, not by me.” - Terry Gilliam
  • 22. • The human self-concept has several aspects
  • 23. • The human self-concept has several aspects • bodily location (I am not in Kansas)
  • 24. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust)
  • 25. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.)
  • 26. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?)
  • 27. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce)
  • 28. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.)
  • 29. • The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.) • …and probably more.
  • 30. • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.)
  • 31. bodily location ‘mental map’ requires a ‘thishere’ token to relate perceptual input to position of body in the terrain. This is a primitive ‘sense of self’
  • 32. bodily location ‘mental map’ requires a ‘thishere’ token to relate perceptual input to position of subject in the terrain. This is a primitive ‘sense of self’ Purely geographical, it has no implications for mental state or agency. Required in some form by anything which navigates using non-egocentric spatial model. This is routine in AI robotics and probably evolved fairly early in animals. For things with an articulated body it gets quite complicated.
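
As a rough illustration of why a non-egocentric map forces a ‘thishere’ token, here is a small sketch (assumed by me, not taken from the slides): percepts arrive in body-centred coordinates, and only the creature’s own pose lets them be written into the terrain-centred map. The names are invented for the example.

```python
# A non-egocentric 'mental map' needs a 'thishere' token -- the subject's
# own pose -- to place egocentric percepts into map coordinates.
import math

class MentalMap:
    def __init__(self):
        self.landmarks = {}              # name -> (x, y) in the map frame
        self.thishere = (0.0, 0.0, 0.0)  # x, y, heading: the primitive 'self'

    def register_percept(self, name, range_m, bearing_rad):
        """Percepts arrive egocentrically (range and bearing); the thishere
        pose is what lets them be stored allocentrically."""
        x, y, heading = self.thishere
        self.landmarks[name] = (x + range_m * math.cos(heading + bearing_rad),
                                y + range_m * math.sin(heading + bearing_rad))

m = MentalMap()
m.thishere = (2.0, 1.0, math.pi / 2)            # the creature knows where it is
m.register_percept("tree", range_m=3.0, bearing_rad=0.0)
print(m.landmarks["tree"])                      # roughly (2.0, 4.0): fixed in the terrain
```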
  • 33. locus of narrative memory We humans certainly have a well-developed narrative (episodic) memory; but what is it for?
  • 34. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future; adds ‘now’ to ‘thishere’.
  • 35. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa
  • 36. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa
  • 37. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa bb leads to a after a short delay
  • 38. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa bb leads to a after a short delay ….ghfklbnmsdfbb (now I can see ahead) Delicate balance needed; too general means weak predictions, too specific means narrow applicability. This is still a research area in AI.
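
The kind of regularity-extraction the slide gestures at can be shown with a toy Python sketch (mine, reusing the slide’s example string): scan the episodic record for a cue, check how reliably the effect follows within a short delay, and use the result to ‘see ahead’. The cue, effect, and delay window are exactly the delicate balance just mentioned.

```python
# Toy extraction of a causal regularity from an episodic record.
experience = "abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa"

def regularity(history, cue="bb", effect="a", max_delay=3):
    """Count how often the effect appears within max_delay steps of the cue."""
    hits, total = 0, 0
    for i in range(len(history) - len(cue)):
        if history[i:i + len(cue)] == cue:
            total += 1
            window = history[i + len(cue): i + len(cue) + max_delay]
            hits += effect in window
    return hits, total

hits, total = regularity(experience)
print(f"'bb' led to 'a' within 3 steps in {hits}/{total} cases")

# Later, on seeing the cue again, the creature can predict the effect:
now = "ghfklbnmsdfbb"
if now.endswith("bb") and hits == total:
    print("prediction: 'a' is coming soon")
```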
  • 39. WARNING Here we enter somewhat wilder areas of speculation, where AI has never ventured. Please follow me carefully and stay alert.
  • 40. stability and fickleness • Unlike AI systems, organisms must eat, and are liable to get eaten. So they have a standing requirement to treat other organisms in a rather special way, one that may require sudden and precipitate action. • It would be folly to rely solely on induction to learn the causal habits of things that were liable to eat you. • Beasties need to make a conceptual division of the things in their surroundings into at least two categories: things which are causally predictable, and things which aren’t, but which require immediate attention when detected.
  • 41. stability and fickleness • Something is causally stable when one can reliably predict its future behavior on the basis of past experience with things of that sort, ie when it is reasonable to learn about its behavior by using induction.
  • 42. stability and fickleness • Something is causally stable when one can reliably predict its future behavior on the basis of past experience with things of that sort, ie when it is reasonable to treat it as having a learnable causal behavior. • It is causally fickle when one knows that it is not causally stable. Probably very old; examples from human experience include surprise when you find someone (but not someTHING) in your personal space unexpectedly (“making someone jump”). Seems to be a crucial distinction between other ‘agents’ and other things.
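
One crude way to picture the stable/fickle division (my own reading of the slide, not the speaker’s implementation) is to classify a thing by how well induction over its past behaviour predicts it; the threshold and the example entities below are invented for illustration.

```python
# Treat something as causally stable when a learned predictor gets it
# right almost always, and as fickle -- demanding attention -- otherwise.
def classify(observations, predictor, threshold=0.9):
    correct = sum(predictor(situation) == outcome
                  for situation, outcome in observations)
    return "stable" if correct / len(observations) >= threshold else "fickle"

# A rock rolls downhill every time; the predator sometimes lunges, sometimes not.
rock = [("pushed", "rolls")] * 10
predator = [("approached", "lunges"), ("approached", "ignores")] * 5

print(classify(rock, lambda s: "rolls"))        # -> stable
print(classify(predator, lambda s: "lunges"))   # -> fickle
```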
  • 43. animacy Being causally fickle is a basic aspect of animacy. Animate entities do things for their own reasons, not because they are causally influenced by other things. Evidence of agency in unexpected places is often perceived as highly startling (eg movies, automobiles, reactive automata) until one gets used to their repertoire and feels able to recognise them. The ‘intentional stance’ (Dennett) or a description at the ‘knowledge level’ (Newell) represents one way to gain some predictive power over animate entities (and it’s pretty useful even for complicated inanimate ones.) We are not very good at integrating these frameworks, eg tensions felt by surgeons. I suspect that notions like ‘agency’ and ‘intentionality’ in their full-blooded senses evolved only recently (humans and chimps may be the only creatures who attribute mental states to others), but causal fickleness is likely to be much older.
  • 44. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience.
  • 45. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware.
  • 46. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware…. …but its provenance machinery ‘knows’ something about its own knowledge.
  • 47. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware…. …but its provenance machinery ‘knows’ something about its own knowledge. Epistemic access to its own truth-adjusting machinery would be one way to achieve reflexivity of knowledge, ie knowing that it knows some of what it in fact knows.
  • 48. Knowing about knowing ‘Reflexivity’ of knowledge, ie knowing that it knows some of what it in fact knows, could be of actual practical use (unlike reflexive knowledge of its own cognitive machinery.) Eg one can take actions to fill gaps in one’s own knowledge (exploration) or avoid taking actions when their outcome might depend critically on information known to be missing (not stepping into the dark).
  • 49. Knowing about knowing ‘Reflexivity’ of knowledge, ie knowing that it knows some of what it in fact knows, could be of actual practical use (unlike reflexive knowledge of its own cognitive machinery.) Eg one can take actions to fill gaps in one’s own knowledge (exploration), or avoid taking actions when their outcome might depend critically on information known to be missing. This is current AI research, eg NASA ‘reactive planners’.
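
A toy sketch of the practical payoff (illustrative only; it is not meant to depict any actual NASA planner): an agent that can distinguish what it knows from what it knows it does not know can choose to explore, or to hold back, instead of stepping into the dark. The predicates and actions are invented for the example.

```python
# Acting differently on known facts, known gaps, and unnoticed gaps.
known = {"door_is_open": True}               # facts with known truth values
known_gaps = {"floor_beyond_door_is_solid"}  # gaps the agent knows it has

def choose_action(precondition):
    if precondition in known:
        return "step_through_door" if known[precondition] else "stay_put"
    if precondition in known_gaps:
        return "shine_torch_first"           # fill the gap before committing
    return "stay_put"                        # not even aware of the gap: play safe

print(choose_action("floor_beyond_door_is_solid"))   # -> shine_torch_first
```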
  • 50. Epistemic gradients The creature so far knows quite a lot about its world, and can learn more from its experience. On the whole, it knows more about things closer to it in space and time, and less about things which are further away. There is an epistemic gradient with itself at the peak. The gradient can provide another way to identify a ‘self’: the self is the agent which knows things about this-here-now which nothing else knows.
  • 51. Epistemic gradients The self is the agent which knows things about this-here-now which nothing else knows. This also can be of direct practical use, eg knowing that nobody else knows where this-here-now is. Some mental illness seems to be associated with a breakdown of this, eg feelings of ‘ego transparency’ in schizophrenia.
  • 52. Epistemic gradients The self is the agent which knows things about this-here-now which nothing else knows. This also can be of direct practical use, eg knowing that nobody else knows where this-here-now is. (This also fixes the van Fraassen ‘two gods’ argument.)
  • 53. Epistemic gradients Notice that the provenance of a reflexive belief is simply the presence of a (closely related) belief. Provenances of reflexive beliefs are something like ‘simple introspection’. - How do you know the cup is on the table? - Because I saw it. -How do you know that you know that? -?? I just know, that’s all. (What else can I say?)
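
A few lines of Python make the point about ‘simple introspection’ concrete (again my illustration, not the speaker’s): the reflexive belief’s provenance is nothing but the presence of the belief it is about, and the step can be iterated.

```python
# The provenance of a reflexive belief is just the presence of the belief
# it is about -- there is nothing further to point to.
beliefs = {"on(cup, table)": "seen"}   # proposition -> provenance

def reflect(proposition):
    if proposition in beliefs:         # that presence is the whole warrant
        beliefs[f"I know that {proposition}"] = f"presence of '{proposition}'"

reflect("on(cup, table)")
reflect("I know that on(cup, table)")  # and it can iterate, if it cares to
for p, why in beliefs.items():
    print(p, "<-", why)
```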
  • 54. sketch of overall picture Suppose the ‘thisherenow’ is treated as a first-class entity in the world model. The creature must either have a very complete understanding of its own inner functioning (which would be of no practical use), or treat itself as causally fickle. It knows that it does not know why it does what it does. In its own view of itself, its actions necessarily have no causes. It believes itself to have ‘free will’.
  • 55. sketch of overall picture Suppose the ‘thisherenow’ is treated as a first-class entity in the world model. The creature must either have a very complete understanding of its own inner functioning (which would be of no practical use), or treat itself as causally fickle. It knows that it does not know why it does what it does. In its own view of itself, its actions necessarily have no causes. It believes itself to have ‘free will’. If it ever goes to graduate school, it will probably think of itself as having ‘original intentionality’ as well.
  • 56. sketch of overall picture This creature knows that it is an agent, and it knows quite a lot about itself which (it knows) isn’t known to other agents. Much of this knowledge has a characteristic kind of ‘immediate’ provenance. All its ‘private’ beliefs about its self have a recursive provenance, in that they are derived from other beliefs about the self, or are ‘immediate’.
  • 57. sketch of overall picture This creature knows that it is an agent, and it knows quite a lot about itself which (it knows) isn’t known to other agents. Much of this knowledge has a characteristic kind of ‘immediate’ provenance. All its ‘private’ beliefs about its self have a recursive provenance, in that they are derived from other beliefs about the self, or are ‘immediate’. One might characterize self-beliefs as a system of stable orbits forming the origin of the provenance field. Cogito, ergo sum.
  • 58. What kind of game are we playing here? • We are talking about creatures as though they were robots. • We are using ideas from semantics, evolutionary biology and philosophy, but talking in a technical vocabulary rooted in computer science. • Broader question: is this way of talking legitimate, and why (or why not)?
  • 59. What kind of game are we playing here? • We are talking about creatures as though they were robots. • We are using ideas from semantics, evolutionary biology and philosophy, but talking in a technical vocabulary rooted in computer science. • Broader question: is this way of talking legitimate, and why (or why not)? • Now, THERE is a topic, surely, where philosophy should have something to say about computers.