Introduction to AI - Third Lecture: Presentation Transcript

  • Introduction to AI – 3rd Lecture
    1960’s – The Golden Years of AI
    Wouter Beek
    me@wouterbeek.com
    22 September 2010
  • Overview of the 1960’s
    Part I
  • 1964-1966 ELIZA
    Joseph Weizenbaum @ MIT
    Natural language processing scripts.
    Script DOCTOR, implementing a Rogerian psychotherapist (still available in Emacs).
    Pattern-matching techniques (a minimal sketch follows below).
    Even though it was intended as a parody, many users took this chatterbot seriously.
    The first successful Turing test?
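    A minimal sketch of this kind of pattern matching, in Prolog (the rules are invented for illustration; Weizenbaum’s DOCTOR script was far richer and was not written in Prolog):

        % respond(+Input, -Response): both are lists of word atoms.
        respond(Input, [why, are, you | Rest]) :-
            append(_, [i, am | Rest], Input), !.     % "... I am X" -> "Why are you X?"
        respond(Input, [do, you, often, think, of | Rest]) :-
            append(_, [i, remember | Rest], Input), !.
        respond(_, [please, tell, me, more]).        % default when nothing matches

        % ?- respond([well, i, am, unhappy], R).
        % R = [why, are, you, unhappy].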
  • 1966-1972, Shakey
    Shakey: the first mobile robot able to reason about its own actions.
    Charles Rosen (and many others), 1966-1972, SRI.
    Natural language processing: interpretation of the goal.
    Computer vision: e.g. Hough transformation for feature extraction.
    Robotics: e.g. visibility graph method for Euclidean shortest path finding.
    Other techniques: LISP, A* search (a compact sketch follows below).
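    Shakey’s own planning and vision code is not reproduced here; the following is only a compact, self-contained A* sketch over an invented toy graph, with made-up edge costs and heuristic values:

        % edge(From, To, Cost) and h(Node, Estimate) define a toy problem.
        % h must never overestimate the true remaining cost (admissibility).
        edge(s, a, 1).  edge(s, b, 4).
        edge(a, b, 2).  edge(a, goal, 12).
        edge(b, goal, 3).
        h(s, 5).  h(a, 4).  h(b, 2).  h(goal, 0).

        % astar(+Start, +Goal, -Path, -Cost): the frontier is a list of
        % F-G-Path triples kept sorted by F = G + h (cost so far + estimate).
        astar(Start, Goal, Path, Cost) :-
            h(Start, H0),
            astar_([H0-0-[Start]], Goal, RevPath, Cost),
            reverse(RevPath, Path).

        astar_([_-G-[Goal|Seen] | _], Goal, [Goal|Seen], G).
        astar_([_-G-[N|Seen] | Rest], Goal, Path, Cost) :-
            findall(F1-G1-[M,N|Seen],
                    ( edge(N, M, C), \+ member(M, [N|Seen]),
                      G1 is G + C, h(M, H), F1 is G1 + H ),
                    Children),
            append(Rest, Children, Open0),
            msort(Open0, Open),              % cheapest f-value first
            astar_(Open, Goal, Path, Cost).

        % ?- astar(s, goal, P, C).
        % P = [s, a, b, goal], C = 6.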
  • Late 1960’s, Micro-Worlds
    Marvin Minsky @ MIT
    Choose limited problems in idealized settings (as, arguably, is the case in physics).
    Blocks World: the world consists of a limited number of blocks of different colors and shapes, and all actions in the world involve arranging these blocks (a toy encoding follows below).
    E.g. SHRDLU, Terry Winograd, 1968-1970
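    To give a feel for how small such a micro-world is, here is a toy Blocks World state as Prolog facts (block names and colors invented; SHRDLU itself was written in LISP and Micro-Planner):

        block(b1).  block(b2).  block(b3).
        color(b1, red).  color(b2, green).  color(b3, red).
        on(b2, b1).  on(b1, table).  on(b3, table).

        clear(B) :- block(B), \+ on(_, B).   % nothing rests on a clear block

        % ?- color(B, red), clear(B).        % "find a clear red block"
        % B = b3.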
  • Two SHRDLU examples
    Person: Pick up a big red block.
    Computer: OK.
    Person: Grasp the pyramid.
    Computer: I don't understand which pyramid you mean.
    Person (changing their mind):
    Find a block which is taller than the one you are holding and put it into the box.
    Computer: By "it", I assume you mean the block which is taller than the one I am holding.
    Person: What does the box contain?
    Computer: The blue pyramid and the blue block.
    Person: What is the pyramid supported by?
    Computer: The box.
    Person: How many blocks are not in the box?
    Computer: Four of them.
    Person: Is at least one of them narrower than the one which I told you to pick up?
    Computer: Yes, the red cube.
  • 1956-1974, Golden Years of AI
    1957, Sputnik launch
    The U.S. fears that the Russians are technologically ahead; huge numbers of Russian scientific articles are translated.
    The U.S. National Research Council starts funding automatic translation research.
    DARPA funding
    Based on ambitious claims:
    “In from three to eight years we will have a machine with the general intelligence of an average human being.” [Marvin Minsky, 1970, Life Magazine]
  • 1974, first AI winter
    Overly ambitious claims:
    “The vodka is good, but the meat is rotten.”
    – allegedly a machine’s round-trip translation of
    “The spirit is willing, but the flesh is weak.”
    (an allusion to Mark 14:38)
    1966: after a negative report by an advisory committee (ALPAC), government funding of automatic translation is cancelled.
    Limited knowledge of the outside world:
    Restricted to micro-worlds (e.g. Blocks World)
    Restricted to pattern-matching (e.g. ELIZA)
    Inherent limitations of computability:
    Intractability, combinatorial explosion (to be discussed next week).
    Undecidability
  • Inherent limitations: halting problem
    Decision problem: any yes-no question on an infinite set of inputs.
    Halting problem: Given a description of a program and a finite input, decide whether the program finishes running or will run forever.
    No resource limitations on space (memory) or time (processing power).
    Example of a program that will finish:
    writef('Hello, world!').
    Example of a program that will run forever:
    lala(X) :- lala(X). with the query ?- lala(a).
    Rephrasing the problem: the halting problem is decidable iff the following function h is computable:
    h(x, y) := 1, if program x halts on input y;
               0, if program x does not halt on input y.
     
  • Halting problem
    Take any total computable function f(x, y); we show that f cannot be h.
    Define a partial function g:
    g(x) := 0, if f(x, x) = 0;
            undefined, otherwise.
    If f is computable, then g is partially computable.
    Let e be (a description of) the algorithm that computes g.
    Two possibilities:
    If g(e) = 0, then f(e, e) = 0 (definition of g), but h(e, e) = 1 (since e halts on input e); so f(e, e) ≠ h(e, e).
    If g(e) is undefined, then f(e, e) ≠ 0 (definition of g), but h(e, e) = 0 (since e does not halt on input e); again f(e, e) ≠ h(e, e).
    So no total computable function f can be h, i.e. the halting problem is undecidable (the argument is rendered in Prolog below).
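    The same diagonal argument can be phrased in Prolog terms. The predicate halts/2 below is hypothetical, which is exactly the point: the argument shows it cannot be implemented, so it is given in comments rather than as runnable code.

        loop :- loop.                            % a goal that never terminates

        % Suppose halts(P, I) were a total, computable test of whether
        % program P halts on input I. Then we could define:
        %
        %     diagonal(P) :- halts(P, P), loop.  % P halts on itself: loop forever
        %     diagonal(P) :- \+ halts(P, P).     % P loops on itself: halt at once
        %
        % Does diagonal(diagonal) halt? If it halts, halts(diagonal, diagonal)
        % succeeds and the first clause loops forever; if it loops, the second
        % clause halts at once. Contradiction either way, so halts/2 cannot exist.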
     
  • Some undecidable problems
    Halting problem
    But also: first-order logic (FOL)
    Used for the blocks world, Logic Theorist, etc.
    More generally: any logical language that includes the equality predicate and at least one other binary predicate.
    Entailment in FOL is semidecidable (a Prolog miniature follows below):
    There is a single algorithm that, for every entailed sentence S, halts and reports the entailment.
    But if S is not entailed, that algorithm may run forever; no algorithm halts on all non-entailed sentences.
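    Prolog gives a miniature of this behavior: its proof search finds a proof for every entailed goal of the program below, but runs forever on a goal that is not entailed. The program is an invented toy example:

        num(0).
        num(s(N)) :- num(N).

        % ?- num(s(s(0))).      % entailed: the search halts and answers true.
        % ?- num(X), X = foo.   % not entailed: backtracking enumerates
        %                       % 0, s(0), s(s(0)), ... and never halts.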
  • Physical symbol systems
    Part II
  • Physical Symbol System (PSS): Ingredients
    Symbols: physical patterns.
    Expressions / symbol structures: (certain) sequences of symbols.
    Processes: functions mapping from and to expressions (a toy formalization follows below).
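    A toy formalization of these ingredients in Prolog (an invented encoding, not Newell and Simon’s own formulation): symbols as atoms, expressions as lists of symbols, processes as predicates from expressions to expressions.

        symbol(a).  symbol(b).

        expression([]).
        expression([S|E]) :- symbol(S), expression(E).

        % A process: a mapping from expressions to expressions.
        duplicate(E, D) :- append(E, E, D).

        % ?- duplicate([a, b], D).
        % D = [a, b, a, b].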
  • PSS: Designation & interpretation
    E is an expression, P is a process, S is a physical symbol system.
    We call all physical entities objects, e.g. O.
    Symbols are objects.
    Expressions are objects, and are collections of objects that adhere to a certain structure.
    Processes are objects!
    Machines are objects, and are collections of the foregoing objects.
    E designates O according to S iff:
    (I) given E, S can affect O, or
    (II) given E, S can behave in ways that depend on O.
    S interprets E iff:
    E designates a process P (in the sense of (II)) and, given E, S can carry out P.
    Machines are experimental setups for designating and interpreting symbols.
  • PSS Hypothesis
    “A Physical Symbol System has the necessary and sufficient means for general intelligent action.”
    Necessary: anything that exhibits general intelligent action must be a PSS.
    Sufficient: a PSS (of sufficient size) can be organized to exhibit general intelligent action.
    General intelligent action: the same scope of intelligence as we see in human action.
    Behavioral or functional interpretation of intelligence (as in Turing 1950).
  • Remember: Church-Turing Thesis
    Church-Turing Thesis: any computation that is realizable can be realized by a Universal Machine (or Turing Machine, or general-purpose computer).
    The thesis is plausible because the following three formalizations of computability were developed independently, yet turned out to be equivalent:
    Post production systems (Emil Post)
    Recursive functions and the lambda calculus (Alonzo Church)
    Turing Machines (Alan Turing)
  • PSS: Conceptual History
    Reasoning as formal symbol manipulation (Frege, Whitehead, Russell, Shannon)
    Reasoning/information/communication theory abstracts away from content.
    Think of Shannon’s notion of information entropy and of logical deduction.
    Automating: computation is a physical process.
    Stored program concept: programs are represented and operated on as data (a Prolog miniature follows below).
    Think of the tape in a Turing Machine.
    Interpretation in a PSS.
    List processing: patterns that have referents
    Designation in a PSS.
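    A Prolog miniature of the stored-program idea (an invented example, not a claim about any historical machine): the program is stored as an ordinary term, i.e. as data, and only executed on demand.

        % The body of a program, stored as data in the database.
        stored(greet, (write('Hello, world!'), nl)).

        % run(+Name): fetch the stored term and execute it as a program.
        run(Name) :- stored(Name, Body), call(Body).

        % ?- run(greet).
        % Hello, world!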
  • PSS: Evaluating the hypothesis
    Remember the PSS Hypothesis: “A Physical Symbol System has the necessary and sufficient means for general intelligent action.”
    This is not a theorem.
    The connection between PSS and intelligence cannot be proven.
    This is an empirical generalization.
    Whether it is true or false is found out by creating machines and observing their behavior.
    This makes AI an empirical science (like physics).
    AI can corroborate hypotheses, but cannot prove theorems.