
- 1. Introduction to AI – 3rd Lecture: 1960s – The Golden Years of AI
  Wouter Beek
  me@wouterbeek.com
  22 September 2010
- 2. Part I: Overview of the 1960s
- 3. 1964–1966: ELIZA
  Joseph Weizenbaum @ MIT.
  Natural-language-processing scripts.
  The script DOCTOR implemented a Rogerian psychotherapist (still available in Emacs).
  Pattern-matching techniques.
  Even though it was intended as a parody, many users took this chatterbot seriously.
  The first successful Turing test?
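
ELIZA's pattern-matching technique can be sketched in a few lines. The rules and responses below are illustrative inventions, not Weizenbaum's original DOCTOR script: each rule pairs a regular expression with a response template that echoes the captured text back at the user.

```python
import re

# A minimal ELIZA-style responder (an illustrative sketch, not the
# original DOCTOR script): each rule is a regex plus a response
# template that reuses the text the regex captured.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching rule's response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I am tired"))   # → "How long have you been tired?"
```

The generic fallback ("Please go on.") is what gives such systems their illusion of understanding: the program never models the world, it only rearranges the user's own words.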
- 4. 1966–1972: Shakey
  Shakey: the first mobile robot able to reason about its own actions.
  Charles Rosen (and many others), 1966–1972, SRI.
  Natural language processing: interpretation of the goal.
  Computer vision: e.g. the Hough transform for feature extraction.
  Robotics: e.g. the visibility-graph method for Euclidean shortest-path finding.
  Other techniques: LISP, A* search.
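
A* search, developed for the Shakey project, can be sketched on a toy grid world (the grid and Manhattan heuristic below are illustrative assumptions; Shakey itself planned over visibility graphs):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    Returns the length of a shortest path, or None if unreachable."""
    def h(cell):  # Manhattan distance: an admissible heuristic here
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]            # entries: (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue                              # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # → 6 (the path must detour around the wall)
```

The heuristic is what distinguishes A* from uniform-cost search: because Manhattan distance never overestimates the remaining cost on this grid, the first time the goal is popped, its path length is optimal.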
- 6. Late 1960s: Micro-Worlds
  Marvin Minsky @ MIT.
  Choose limited problems in idealized settings (as is, arguably, the case in physics).
  Blocks World: the world consists of a limited number of blocks of different colors and shapes, and all actions in the world involve arranging these blocks.
  E.g. SHRDLU, Terry Winograd, 1968–1970.
- 7. Two SHRDLU examples
  Person: Pick up a big red block.
  Computer: OK.
  Person: Grasp the pyramid.
  Computer: I don't understand which pyramid you mean.
  Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
  Computer: By "it", I assume you mean the block which is taller than the one I am holding.
  Person: What does the box contain?
  Computer: The blue pyramid and the blue block.
  Person: What is the pyramid supported by?
  Computer: The box.
  Person: How many blocks are not in the box?
  Computer: Four of them.
  Person: Is at least one of them narrower than the one which I told you to pick up?
  Computer: Yes, the red cube.
- 8. 1956–1974: The Golden Years of AI
  1957: Sputnik launch.
  The U.S. fears that the Russians are technologically ahead: huge numbers of Russian scientific articles are translated.
  The U.S. National Research Council starts funding automatic-translation research.
  DARPA funding, based on ambitious claims:
  "In from three to eight years we will have a machine with the general intelligence of an average human being." [Marvin Minsky, 1970, Life Magazine]
- 9. 1974: the first AI winter
  Claims had been too ambitious. A famous (possibly apocryphal) machine-translation anecdote: "The spirit is willing, but the flesh is weak" (an allusion to Mark 14:38), translated into Russian and back, came out as "The vodka is good, but the meat is rotten."
  1966: after a negative report by an advisory committee, government funding of automatic translation is cancelled.
  Limited knowledge of the outside world:
  restricted to micro-worlds (e.g. Blocks World);
  restricted to pattern matching (e.g. ELIZA).
  Inherent limitations of computability:
  intractability and combinatorial explosion (to be discussed next week);
  undecidability.
- 10. Inherent limitations: the halting problem
  Decision problem: any yes/no question over an infinite set of inputs.
  Halting problem: given a description of a program and a finite input, decide whether the program finishes running or runs forever.
  No resource limitations on space (memory) or time (processing power) are assumed.
  Example of a program that finishes: writef('Hello, world!').
  Example of a program that runs forever: lala(X) :- lala(X). with query lala(a).
  Rephrasing the problem: is the following function h computable?
  h(x, y) := 1 if program x halts on input y; 0 if program x does not halt on input y.
- 11. The halting problem is undecidable
  Take any total computable function f(x, y); we show f cannot be h.
  Define a partial function g: g(x) := 0 if f(x, x) = 0; undefined otherwise.
  If f is computable, then g is partially computable.
  Let e be the program that computes g.
  Two possibilities:
  If g(e) = 0, then f(e, e) = 0 (definition of g), but then h(e, e) = 1 (since e halts on input e).
  If g(e) is undefined, then f(e, e) ≠ 0 (definition of g), but then h(e, e) = 0 (since e does not halt when run on e).
  Either way f(e, e) ≠ h(e, e), so no total computable function f can be h, i.e. the halting problem is undecidable.
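
The diagonal argument above can be sketched in code. The sketch assumes a hypothetical oracle `halts(program, data)` playing the role of h; the whole point of the proof is that no such oracle can actually be implemented, so here it merely raises an error.

```python
# Sketch of the diagonalization: suppose a total function
# halts(program, data) existed (the oracle h from the slide).
# It cannot be implemented, so this stand-in just raises.

def halts(program, data):
    """Hypothetical halting oracle -- NOT implementable."""
    raise NotImplementedError("no such total function can exist")

def g(program):
    """The partial function g from the slide, with g itself playing
    the role of e: halt with 0 when the oracle says `program` does
    not halt on itself; run forever ('undefined') otherwise."""
    if not halts(program, program):
        return 0
    while True:      # 'undefined': loops forever
        pass

# Feeding g its own description yields a contradiction either way:
#   - if g(g) halts, the oracle claimed g does not halt on g;
#   - if g(g) runs forever, the oracle claimed g halts on g.
# Hence no total computable halts() exists.
```

The contradiction lives entirely in the comments: the code is a scaffold for the argument, not a runnable decision procedure.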
- 12. Some undecidable problems
  The halting problem.
  But also entailment in first-order logic (FOL):
  used for the Blocks World, the Logic Theorist, etc.;
  more generally, any logical language including the equality predicate and at least one other binary predicate.
  Entailment in FOL is semi-decidable:
  for every sentence S, if S is entailed, then there is an algorithm that eventually says so;
  but there is no algorithm that, for every sentence S that is not entailed, says so.
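
The shape of a semidecision procedure is exhaustive search: for FOL, enumerate all proofs and halt when one derives S. A toy analogue (my own illustrative example, not a theorem prover) is searching for an integer root of a polynomial: if a root exists the search halts, and if none exists it runs forever, exactly mirroring the "yes detectable, no not detectable" asymmetry.

```python
from itertools import count

def semidecide_root(poly):
    """Semidecision sketch: try n = 0, 1, 2, ... as a root of `poly`.
    If some nonnegative integer root exists, we eventually find and
    return it; if none exists, the loop never terminates -- the 'yes'
    answer is detectable, the 'no' answer is not."""
    for n in count():
        if poly(n) == 0:
            return n

print(semidecide_root(lambda n: n * n - 9))  # → 3, after trying 0, 1, 2
```

Calling it on a root-free polynomial such as `lambda n: n + 1` would loop forever, which is why this is a semidecision procedure rather than a decision procedure.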
- 13. Part II: Physical symbol systems
- 14. Physical Symbol System (PSS): ingredients
  Symbols: physical patterns.
  Expressions (symbol structures): (certain) sequences of symbols.
  Processes: functions mapping expressions to expressions.
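
The three ingredients can be rendered as a toy sketch (my own illustration, not Newell and Simon's formalism): symbols as atomic tokens, expressions as sequences of symbols, and processes as functions from expressions to expressions.

```python
# A toy rendering of the PSS ingredients (illustrative sketch only):
# symbols are atomic tokens, expressions are tuples of symbols, and
# processes are functions from expressions to expressions.

Symbol = str            # a symbol: here, just a token
Expression = tuple      # an expression: a sequence of symbols

def reverse_process(expr: Expression) -> Expression:
    """A process: a function mapping expressions to expressions."""
    return expr[::-1]

e: Expression = ("A", "B", "C")   # an expression over symbols A, B, C
print(reverse_process(e))          # → ('C', 'B', 'A')
```

Nothing in the sketch depends on what "A", "B", "C" mean: like Shannon's theory, a PSS manipulates patterns while abstracting away from content.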
- 15. PSS: designation and interpretation
  Let E be an expression, P a process, and S a physical symbol system.
  We call all physical entities objects (e.g. O).
  Symbols are objects.
  Expressions are objects: collections of objects that adhere to certain structural constraints.
  Processes are objects!
  Machines are objects: collections of all the foregoing objects.
  E designates O according to S when:
  (i) given E, S can affect O, or
  (ii) given E, S can behave in ways that depend on O.
  S interprets E when E designates a process P, as in (ii).
  Machines are experimental setups for designating and interpreting symbols.
- 16. The PSS Hypothesis
  "A physical symbol system has the necessary and sufficient means for general intelligent action."
  Necessary: if something is intelligent, then it must be a PSS.
  Sufficient: if something is a PSS, then it can be organized to exhibit general intelligence.
  General intelligent action: the same scope of intelligence as we see in human action.
  This is a behavioral or functional interpretation of intelligence (as in Turing 1950).
- 17. Recall: the Church–Turing Thesis
  Church–Turing Thesis: any computation that is realizable can be realized by a Universal Machine (a Turing Machine, or general-purpose computer).
  The thesis is plausible because the following three formalizations of computability were developed independently and yet turned out to be equivalent:
  Post production systems (Emil Post)
  recursive functions and the lambda calculus (Alonzo Church)
  Turing Machines (Alan Turing)
- 18. PSS: conceptual history
  Reasoning as formal symbol manipulation (Frege, Whitehead, Russell, Shannon):
  reasoning / information / communication theory abstracts away from content.
  Think of Shannon's notion of information entropy and of logical deduction.
  Computation as a physical process, via the stored-program concept: programs are represented and operated on as data.
  Think of the tape in a Turing Machine.
  This corresponds to interpretation in a PSS.
  List processing: patterns that have referents.
  This corresponds to designation in a PSS.
- 19. PSS: evaluating the hypothesis
  Remember the PSS Hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."
  This is not a theorem: the connection between PSSs and intelligence cannot be proven.
  It is an empirical generalization: whether it is true or false is found out by building machines and observing their behavior.
  This makes AI an empirical science (like, e.g., physics).
  AI can corroborate hypotheses, but cannot prove theorems.
