Searle, Intentionality, and the Future of Classifier Systems
David E. Goldberg reflects on the reality of social constructs and the future of learning classifier systems.
Presentation Transcript

  • Searle, Intentionality, and the Future of Classifier Systems
    David E. Goldberg
    Illinois Genetic Algorithms Laboratory
    Department of General Engineering
    University of Illinois at Urbana-Champaign, Urbana, IL 61801
    deg@uiuc.edu
  • 1980 v. Now
    - Remember thinking how cool LCSs were. Just apply them to gas pipelines and voila, all AI problems of Western Civilization would be solved.
    - Started to ask John for examples of successful application.
    - Found out that I was in the middle of an interesting idea, not a working computer program.
    (Photo: John H. Holland, b. 1929)
  • Roadmap
    - Are we happy with LCSs?
    - What's Searle got to do with it?
    - Revisiting the Chinese room.
    - Art Burks had it right.
    - Designing a conscious computer.
    - Searlean program for LCSs:
      - Computational consciousness not impossible.
      - From consciousness to intentionality.
      - Intentionality and beyond.
    - What are we missing?
    - What should we do?
  • Are We Happy With LCSs?
    - Have made progress:
      - Increasingly competent; solve hard problems quickly, reliably, and accurately.
      - Done in a principled manner.
    - But they don't seem very intelligent:
      - Do what we tell them.
      - Not autonomous in any serious sense.
      - Our discussions are largely technical.
      - Are we focused on the right problems?
  • What's Searle Got to Do With It?
    - Mills Professor of Philosophy at Berkeley.
    - Philosopher of language and mind.
    - Early work took off from Austin's work on speech acts.
    - Searle is the Darth Vader of artificial intelligence.
    - His notorious Chinese Room argument has always puzzled me: couldn't understand how he could miss the possibility of more than mere syntactical translation.
    - In many ways, Searle is the high philosophical priest of emergence.
    - Rejects dualism & materialism.
    (Photo: John R. Searle, b. 1932)
  • The Chinese Room Argument
    - Claim: strong AI is not possible on a computer.
    - A monolingual English speaker sits in a room with:
      - Chinese writing (a story).
      - A 2nd set of Chinese symbols (questions).
      - Instructions in English for relating the first set of symbols to the second.
      - A 3rd set of Chinese symbols (answers).
    - The English speaker does not understand Chinese even if the answers are indistinguishable from those of a Chinese speaker.
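
A minimal Python sketch may make the argument concrete: the operator applies purely formal lookup rules, so fluent answers require no understanding. The symbols and rulebook here are illustrative stand-ins, not Searle's actual example.

    # Toy Chinese Room: map question symbols to answer symbols by rote.
    # To the operator these are meaningless shapes; only pattern
    # matching on form is involved, never meaning.
    RULEBOOK = {
        "你好吗": "我很好",   # "How are you?" -> "I am fine"
        "你是谁": "我是人",   # "Who are you?" -> "I am a person"
    }

    def chinese_room(question):
        """Follow the English instructions: match shapes, emit shapes."""
        return RULEBOOK.get(question, "不知道")  # default: "don't know"

    # Answers may be indistinguishable from a native speaker's, yet
    # nothing in the lookup understands Chinese.
    print(chinese_room("你好吗"))  # -> 我很好
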
  • Cracks in the Chinese Room
    - Mind, Language & Society, Basic Books, 1998, p. 53:
    - "When I say that the brain is a biological organ and consciousness a biological process, I do not, of course, say or imply that it would be impossible to produce an artificial brain out of nonbiological materials."
  • More Searle
    - "The heart is also a biological organ, and the pumping of blood a biological process, but it is possible to build an artificial heart that pumps blood. There is no reason, in principle, why we could not similarly make an artificial brain that causes consciousness."
    - Searle was complaining about the direct approach to intelligence.
    - Without consciousness and intentionality there cannot be intelligence.
    - How do we create an intelligent, conscious being?
  • Arthur Burks Had Interesting Take
    - Robots and Free Minds, University of Michigan, 1986.
    - "Tonight I will advocate the thesis: A FINITE DETERMINISTIC AUTOMATON CAN PERFORM ALL NATURAL HUMAN FUNCTIONS."
  • Chapter 5: Evolution and Intentionality
    - "The course of biological evolution from cells to Homo sapiens has been a gradual development of intentional systems from direct-response systems."
    - "The [intentional] system contains a model of its present status in relation to its goal and regularly updates that model on the basis of the information it receives. Finally, it decides what to do after consulting a strategy, which has value assessments attached to various alternative courses of action."
  • CS-1 Had Bio/Psycho Roots
    - CS-1 had reservoirs for hunger and thirst (Holland & Reitman, 1978); a toy sketch follows below.
    - The schemata processors paper had reservoirs, too (Holland, 1971).
    - CS-1 worked in a maze-running task.
    - But the design was Lockean: tabula rasa for everything except rule firing, apportionment of credit, and rule discovery.
    - Is this enough?
    - Thesis: can't take a shortcut around consciousness and intentionality.
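
To make the reservoir idea concrete, here is a toy sketch; the names, constants, and dynamics are assumptions for illustration, not the actual CS-1 implementation of Holland & Reitman.

    # Toy drive reservoirs: each drains over time, and the most
    # depleted one selects the system's current goal in the maze.
    class Reservoir:
        def __init__(self, name, level=1.0, drain=0.05):
            self.name, self.level, self.drain = name, level, drain

        def step(self):
            self.level = max(0.0, self.level - self.drain)

        def replenish(self, amount):   # payoff refills the reservoir
            self.level = min(1.0, self.level + amount)

    hunger = Reservoir("hunger", drain=0.05)
    thirst = Reservoir("thirst", drain=0.08)
    targets = {"hunger": "food", "thirst": "water"}

    for t in range(5):
        for r in (hunger, thirst):
            r.step()
        # The most depleted reservoir sets the current goal.
        need = min((hunger, thirst), key=lambda r: r.level)
        print(t, "seek", targets[need.name])
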
  • So You Want a Conscious Computer
    - What does this mean?
    - Consciousness is a complex, emergent phenomenon.
    - How can we design it? Don't throw pieces together and hope for the best.
    - My experience: emergent phenomena emerge when (a) the key elements are present and (b) the system is tuned properly.
    - Consider more Searle.
  • Shooting for C Not Crazy
    - Shooting for GA competence was crazy. Have accomplished it.
    - How:
      - Considered essential elements.
      - Built qualitative/quantitative theories of how they worked.
      - Designed until limits of performance were achieved.
    - Can do the same for consciousness/intentionality!!
  • Searle's Greatest Hits
    - Mind as biological phenomenon.
    - Function of consciousness.
    - Features of consciousness.
    - How the mind works: intentionality.
    - The good stuff comes from intentionality: language & other institutional facts.
    - What are we missing?
  • Mind as Biology
    - Consciousness is the primary feature of minds.
    - 3 features of consciousness:
      - Inner: in the body and in a sequence of events.
      - Qualitative: a certain way they feel.
      - Subjective: first-person ontology (does not preclude objective epistemology).
    - Enormous variety of consciousness: smell a rose, worry about income taxes, sudden rage at a driver, etc.
  • Functions of Consciousness
    - What does it do? What is its survival value?
    - What doesn't it do for our species?
    - Consciousness is central to our survival.
    - All actions are a result of conscious thought followed by action.
  • Consciousness, Intentionality, & Causation
    - Represent the world, and act on representations.
    - Intentional causation: not billiard-ball causation.
    - Not all consciousness is intentionally causal, but much is.
    - Should be the best understood; are we not in touch with it always? Descartes's error.
    - Yet difficult to describe: can describe objects, moods, thoughts, but not C itself.
    - Problems:
      - Not itself an object of observation (consciousness observes but is not observed).
      - Tradition of separating mind/body: dualism.
  • Features of C
    1. Ontological subjectivity.
    2. C comes in unified form: thinking and feeling go on at the same time in the same field of C (vertical & horizontal).
    3. C connects us to the world (tie to intentionality).
    4. C states come in moods.
    5. Always structured.
    6. Varying degrees of attention.
    7. C is situated.
    8. Varying degrees of familiarity.
    9. Refer to other things.
    10. Always pleasurable or unpleasurable.
  • How the Mind Works: Intentionality
    - Primary evolutionary role of C is to relate us to the environment.
    - Cannot eliminate intentionality of mind by appealing to language; language already presupposes the intentionality of the mind.
    - Searle: the urge to reduce it to something else is faulty.
    - DEG: as designers we need to reduce it to something and then find conditions of emergence among those things.
  • Intentionality as Biology
    - Thirst and hunger as basic, causing the desire to drink or eat.
    - Once this is granted, the camel's nose is under the tent: intentions based on other sensory states follow.
    - Isn't reality "confirmed" by our "success" in achieving intentional goals over and over again?
  • Structure of Intentional States
    - Intentionality as the way mental states are directed at objects & states of affairs.
    - Can be directed at things that don't exist? How can this be?
    - Distinguish between the type of intentional state and its content.
    - Content: rain; types: hope, believe, fear rain.
    - Structural features:
      - Direction of fit.
      - Conditions of satisfaction.
  • Direction of Fit
    - Term from Austin, foreshadowed by Wittgenstein, examples from Anscombe.
    - Anscombe's lists:
      - Shopping list: beer, butter, bacon. Husband matches world to list.
      - Detective's list: follows shopper, "beer, butter, bacon," matches list to world.
    - Not all intentional states are like this: e.g., when you are sorry, you assume a match between mind and world. The direction of fit is null.
  • Conditions of Satisfaction
    - Beliefs can be true or false.
    - Goals can be achieved or not.
    - Easier to understand in terms of speech acts, which have 5 illocutionary points or types (a toy encoding follows below):
      - Assertive: commit to the truth.
      - Directive: direct the hearer to do something.
      - Commissive: speaker promises to do something.
      - Expressive: speaker expresses an opinion about the state of the world.
      - Declaration: speaker creates something with the utterance.
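
The type/content analysis, together with direction of fit and conditions of satisfaction, suggests a simple data structure. The Python encoding below is a hedged illustration of the distinctions, not a canonical formalization of Searle.

    from dataclasses import dataclass
    from enum import Enum

    class Fit(Enum):
        MIND_TO_WORLD = "mind-to-world"  # beliefs: mind should match world
        WORLD_TO_MIND = "world-to-mind"  # desires/intentions: world should match mind
        NULL = "null"                    # e.g., being sorry: the match is presupposed

    @dataclass
    class IntentionalState:
        kind: str     # type of state: "believe", "hope", "fear", ...
        content: str  # the proposition the state is directed at
        fit: Fit

        def satisfied(self, world):
            """Conditions of satisfaction: does the world make the content true?"""
            return self.content in world

    # Same content, different types of state:
    belief = IntentionalState("believe", "it is raining", Fit.MIND_TO_WORLD)
    hope = IntentionalState("hope", "it is raining", Fit.WORLD_TO_MIND)
    world = {"it is raining"}
    print(belief.satisfied(world), hope.satisfied(world))  # True True
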
  • Intentional Causation
    - Intend to move body → body moves: example of intentional causation.
    - Differs from billiard-ball or Humean causation.
    - Self-referential: I intend to move my body, and my body moves because I intended it → intentional causation.
    - Critical to distinguishing the natural versus the social sciences.
    - Intentional explanations are not deterministic: could have done otherwise → the gap is free will.
  • Good Stuff from Intentionality
    - Searle goes on to talk about language and institutional facts (money, college degrees, etc.).
    - Disappointment with LCS is that it can't get to the good stuff:
      - Can't do language.
      - Can't form contracts.
      - Can't create new institutional facts.
  • Construction of Social Reality
    - Need to clarify observer-independent & observer-dependent features of the world.
    - Need 3 new elements:
      - Collective intentionality.
      - Assignment of function.
      - Constitutive rules.
  • Observer Independent v. Dependent
    - Many features of the world are independent of our observations of them: observer independence.
    - Many are observer dependent: something is a characteristic because of observer judgment, but not relative to other observers.
    - OI vs. OD is more important than mind-body.
    - DEG aside: isn't it dualism through the back door, though?
  • Collective Intentionality
    - Need the notion of "we intend together."
    - Attempts to reduce it to individual intention are complex.
    - The existence of biological organisms with collective intentionality suggests CI is a primitive.
    - DEG aside: are social insects intentional in the Searlean sense? Could be that social affiliation is primitive and certain behaviors hard-wired. Then CI results from (a) naming the group, (b) attributing intention to it (as-if intentionality), and (c) treating the as-if as real.
  • Assignment of Function
    - Use of objects as tools:
      - Monkey uses a stick to get a banana.
      - Man sits on a rock.
    - Physical existence facilitates function, but function is observer relative.
    - All function assignment is observer relative.
  • Constitutive Rules
    - How to distinguish between brute facts and institutional facts?
    - Types of rules:
      - Some rules regulate: "Drive on the right side of the road."
      - Some rules regulate and constitute: the rules of chess both regulate the conduct of the game and create it.
    - Constitutive rules have the form: X counts as Y in context C.
    - "Move two and over one" counts as a knight's move in chess.
  • Simple Model of Construction of Social Reality
    - Strong thesis: all institutional reality is explained by 3 things:
      - Collective intentionality.
      - Assigned function → a wall keeps people out physically, but a low fence or boundary marker keeps people out by convention.
      - Constitutive rules.
    - Money example: evolution from valuable commodity to fiat currency.
    - Institutional reality is powerful: "X counts as Y in C" can be iterated and stacked, forming a powerful network of institutional facts (see the sketch below).
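
The form "X counts as Y in C" iterates naturally: the Y of one rule can serve as the X of the next. A toy Python sketch, with a hypothetical rule chain loosely following the money example:

    # Constitutive rules as (X, Y, C) triples: X counts as Y in context C.
    RULES = [
        ("this piece of paper", "a dollar bill", "the US monetary system"),
        ("a dollar bill", "legal tender", "US commerce"),
        ("legal tender", "settlement of this debt", "our contract"),
    ]

    def counts_as(x):
        """Apply the first matching constitutive rule to x, if any."""
        for rx, y, c in RULES:
            if rx == x:
                print(f"'{x}' counts as '{y}' in {c}")
                return y
        return None

    # Iterate the rules: brute fact stacks into layered institutional facts.
    fact = "this piece of paper"
    while fact is not None:
        fact = counts_as(fact)
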
  • What Are We Missing?
    - Do not have C-machines.
    - Searle's 10:
      1. Ontological subjectivity.
      2. C comes in unified form.
      3. C connects us to the world.
      4. C states come in moods.
      5. Always structured.
      6. Varying degrees of attention.
      7. C is situated.
      8. Varying degrees of familiarity.
      9. Refer to other things.
      10. Always pleasurable or unpleasurable.
  • Unity Missing
    - Can argue that we have vertical unity in the message board.
    - Do not have horizontal unity.
    - My first proposal recommended modifications to permit time series:
      - Modifications to the rules.
      - Modifications to the boards.
  • Moods & Pleasant/Unpleasant Missing
    - This is big.
    - Emotions are "engagement with the world" (Solomon).
    - Necessary for judgment and values.
    - Don't want a simulation.
    - Emotions have:
      - A physiological component.
      - A judgmental component.
  • Other Things Missing
    - Attention.
    - Gestalt structure.
    - Situatedness & familiarity.
    - Reference to other things (may have this).
  • What Should We Do?
    - Stuff we've gotten right: sensors, association, models (anticipation), learning.
    - Can't continue to work on the same thing.
    - No serious architectural changes have been proposed to LCS. Why?
    - Need (see the speculative sketch below):
      - Emotions: as judgments, a source of values, and the arbiter of attention.
      - Multiple boards: as a source of difference and similarity; the main hope for quality of consciousness & unity.
      - A center of intention rooted in "biological needs."
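
One speculative way such an architecture might be prototyped: several message boards instead of one, with an emotion signal arbitrating which messages win attention. Every name and mechanism in this sketch is an assumption for illustration, not a worked-out design.

    # Toy multi-board architecture with emotion as arbiter of attention.
    class Board:
        def __init__(self, name):
            self.name = name
            self.messages = []  # (salience, message) pairs

        def post(self, message, salience):
            self.messages.append((salience, message))

    def attend(boards, emotion_weights):
        """Weight each board's messages by the current emotional
        appraisal of that board; return the winning message."""
        scored = [(sal * emotion_weights.get(b.name, 1.0), msg, b.name)
                  for b in boards for sal, msg in b.messages]
        return max(scored, default=None)

    perception = Board("perception")
    memory = Board("memory")
    perception.post("wall ahead", salience=0.4)
    memory.post("water was left of start", salience=0.6)

    # A thirst-driven emotional state upweights need-relevant memory.
    print(attend([perception, memory], {"memory": 2.0, "perception": 0.5}))
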
  • How Do We Break This Down?
    - Tough problem.
    - If C is a complex building block, what are the minimal essential elements to achieve it?
    - How do we know we've achieved it (first-person ontology, third-person epistemology)? Sets of tests and experiments.
    - What theory is needed to set the parameters of C?
    - Not unlike the approach that cracked innovation.
  • Summary & Conclusions
    - Have accomplished quite a bit in classifier systems.
    - Many of our questions are technical.
    - Deeper questions remain about whether we're attacking the right questions.
    - Need consciousness and intention to get the "good stuff" of intelligent behavior.
    - Wrestling with Searle's categories is not a bad place to start.