Introduction to AI - Fourth Lecture

  • E.g. if a self-driving car recognizes an approaching tree only after 30 minutes, that is useless. But if a vision program that checks whether a painting was made by the original artist takes the same amount of time, the very same technique is considered tractable.
  • As Moravec writes: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

    "The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived. ... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come." [Steven Pinker]
  • Transcript

    • 1. Introduction to AI, 4th Lecture: the 1970s, the first AI winter
      Wouter Beek, me@wouterbeek.com, 29 September 2010
    • 2. The cyclic process of pessimism
      ◦ The cycle:
        1. Pessimism in the research community
        2. Pessimism in the press
        3. Cutback in funding
        4. End of serious research; goto (1)
      ◦ Common in emerging technologies:
        ◦ Railway mania
        ◦ Dot-com bubble
    • 3. 1966, NRC/ALPAC report on MT
      ◦ NRC: National Research Council
      ◦ ALPAC: Automatic Language Processing Advisory Committee
      ◦ 1966: a very negative report, Language and Machines: Computers in Translation and Linguistics
      ◦ Compared the cost and effectiveness of human and computer translators.
      ◦ Human translation turned out to be cheaper and more accurate than machine translation.
      ◦ Funding cancelled.
    • 4. 1969, Mansfield Amendment
      ◦ In the 1960s, DARPA funded research without a clear application.
      ◦ Director Licklider: "We fund people, not projects."
      ◦ The 1969 amendment: DARPA should fund "mission-oriented research, rather than basic undirected research".
      ◦ American Study Group: AI research is unlikely to produce military applications in the foreseeable future.
      ◦ Funding cancelled.
    • 5. 1973, Lighthill report
      ◦ The UK Parliament asked professor Lighthill to evaluate the state of AI research.
      ◦ He identified the following problems:
        ◦ Combinatorial explosion
        ◦ Intractability
        ◦ Limited applicability ('toy problems')
      ◦ Research funding cancelled across Europe.
    • 6. 1974, SUR program
      ◦ SUR: Speech Understanding Research, at Carnegie Mellon University
      ◦ Goal: a system that responds to a pilot's voice commands.
      ◦ It worked! If the words were spoken in a particular order…
      ◦ Funding cancelled by DARPA.
      ◦ Still incredibly useful research: hidden Markov models for speech recognition (see the sketch below).
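      The HMM idea that survived SUR became the backbone of later speech recognizers. As a minimal illustration (my own toy sketch, not the SUR system; every state name, observation, and probability below is invented), here is Viterbi decoding over a two-state model in Python:

        # Toy HMM: which hidden states ("silence"/"speech") best explain a
        # sequence of coarse acoustic observations? All numbers are invented.
        states = ("silence", "speech")
        observations = ("low", "high", "high")      # toy acoustic features
        start_p = {"silence": 0.8, "speech": 0.2}
        trans_p = {"silence": {"silence": 0.7, "speech": 0.3},
                   "speech":  {"silence": 0.4, "speech": 0.6}}
        emit_p  = {"silence": {"low": 0.9, "high": 0.1},
                   "speech":  {"low": 0.2, "high": 0.8}}

        def viterbi(obs):
            # V[t][s] = probability of the best state sequence ending in s at time t
            V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
            path = {s: [s] for s in states}
            for t in range(1, len(obs)):
                V.append({})
                new_path = {}
                for s in states:
                    prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                                     for p in states)
                    V[t][s] = prob
                    new_path[s] = path[prev] + [s]
                path = new_path
            best = max(states, key=lambda s: V[-1][s])
            return V[-1][best], path[best]

        print(viterbi(observations))  # most likely hidden state sequence

      Real recognizers of the period applied the same dynamic-programming idea at the phoneme level and at much larger scale.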
    • 7. Unrealistic predictions
      ◦ "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more." [Hans Moravec]
      ◦ "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." [John Markoff]
    • 8. Qualitative/quantitative distinction
      ◦ Researchers of the 1960s saw only quantitative hurdles.
      ◦ "The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity." [Turing 1950]
      ◦ The real hurdles of AI are qualitative:
        ◦ Commonsense knowledge problem
        ◦ Intractability / combinatorial explosion
        ◦ Moravec's paradox
        ◦ Qualification problem
        ◦ Frame problem
        ◦ Various objections (disability, informality, mathematical)
        ◦ Philosophical underpinnings
    • 9. Commonsense knowledge problem
      ◦ Disambiguation: in order to translate a sentence, a machine must have some idea of what the sentence is about.
        ◦ "The spirit is willing but the flesh is weak." → "The vodka is good but the meat is rotten."
        ◦ "Out of sight, out of mind." → "Blind idiot!"
      ◦ Partially due to the micro-world approach of the 1960s.
      ◦ This relates to what Lighthill called 'toy problems'!
    • 10. Intractability, combinatorial explosion
      ◦ Richard Karp showed that many problems are NP-complete and thus, as far as anyone knows, require exponential time (i.e. 2^n steps for input size n).
      ◦ This means that only toy problems can be handled (a brute-force sketch follows below).
      ◦ Intractable problem: a problem that cannot be solved fast enough for the solution to be useful.
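      To make the 2^n figure concrete, the sketch below (my own illustration; the clause set is arbitrary) counts the satisfying assignments of a small Boolean formula by sheer enumeration. Each added variable doubles the running time, which is exactly why brute force only handles toy instances:

        from itertools import product
        import time

        def count_models(n_vars, clauses):
            # A clause is a list of (variable_index, wanted_value) literals;
            # an assignment satisfies the formula if every clause has a true literal.
            return sum(
                all(any(assignment[i] == v for i, v in clause) for clause in clauses)
                for assignment in product((False, True), repeat=n_vars)
            )

        # Toy formula: (x0 or not x1) and (x1 or x2)
        clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]

        for n in (10, 15, 20):
            start = time.perf_counter()
            count_models(n, clauses)
            elapsed = time.perf_counter() - start
            print(f"n={n}: 2^{n} = {2**n:>9} assignments, {elapsed:.3f}s")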
    • 11. Moravec's paradox
      ◦ Optimism, because machines could do things that are difficult for humans:
        ◦ Solve geometrical problems
        ◦ Give logical proofs
        ◦ Play a game of chess
      ◦ But things that are easy for humans are often difficult for machines:
        ◦ Taking the garbage out
        ◦ Recognizing that the man walking across the street is Joe
      ◦ Sensorimotor skills and instincts are (arguably) necessary for intelligent behavior, but pose enormous problems for machines.
    • 12. Qualification problem
      ◦ In order to fully specify the conditions under which a rule applies, one has to provide an impractical number of qualifications.
      ◦ Example: I want to cross a river in a rowboat.
        ◦ The rowboat must have two oars.
        ◦ The two oars must have approximately the same length.
          ◦ Further specify what 'approximately' means here!
        ◦ The water must not be too cold.
        ◦ The oars must not be made of cardboard.
        ◦ The oars must not be too heavy (e.g. lead).
        ◦ The rowboat must not have a hole in it.
        ◦ Etc. (see the sketch below)
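      Here is what the rowboat rule looks like once you actually try to write the qualifications down. This is a hypothetical sketch: every class, predicate, and threshold is invented for illustration, and the point is that the conjunction never closes:

        from dataclasses import dataclass

        @dataclass
        class Oar:
            length_m: float
            material: str
            weight_kg: float

        @dataclass
        class Boat:
            has_hole: bool

        @dataclass
        class Water:
            too_cold: bool

        def similar_length(oars, tolerance_m=0.1):
            # 'Approximately the same length' itself needs a qualification: a tolerance.
            return max(o.length_m for o in oars) - min(o.length_m for o in oars) <= tolerance_m

        def can_cross_river(boat, oars, water):
            return (len(oars) == 2
                    and similar_length(oars)
                    and not water.too_cold
                    and all(o.material != "cardboard" for o in oars)
                    and all(o.weight_kg < 10 for o in oars)   # not made of lead, say
                    and not boat.has_hole
                    # ... and the river is not frozen, the boat is not tied down,
                    # and so on: the list of qualifications never closes.
                    )

        print(can_cross_river(Boat(has_hole=False),
                              [Oar(2.0, "wood", 3.0), Oar(2.05, "wood", 3.1)],
                              Water(too_cold=False)))  # True, until the next exception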
    • 13. Frame problem (1/2)
      ◦ Fluent: a condition whose truth value changes over time.
      ◦ Fred the turkey and a gun:
        ◦ t=1: load the gun
        ◦ t=2: wait for a bit
        ◦ t=3: shoot the gun, killing poor Fred
      ◦ Axioms: alive(0), ~loaded(0), true → loaded(1), loaded(2) → ~alive(3)
      ◦ Frame problem: this is consistent with ~alive(1), but Fred didn't die at t=1!
      ◦ The problem is that we only describe what changes, not what stays the same.
      ◦ We would need many additional propositions stating what does not change!
    • 14. Frame problem (2/2)
      ◦ Axioms: alive(0), ~loaded(0), true → loaded(1), loaded(2) → ~alive(3)
      ◦ Proposed solution: minimize the changes to exactly those due to the actions.
      ◦ One possible minimization: alive(0,1,2), ~alive(3), ~loaded(0), loaded(1,2,3)
      ◦ Another possible minimization: alive(0,1,2,3), ~loaded(0,2,3), loaded(1)
      ◦ More advanced solutions are needed… (see the model-enumeration sketch below)
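      The ambiguity on this slide can be checked mechanically. The sketch below (my own illustration of the scenario, not from the lecture) enumerates every assignment of the two fluents over the four time points, keeps those consistent with the axioms, and then selects the models with the fewest fluent changes. Both minimizations from the slide survive, along with further spurious ones, which is why change-minimization alone is not enough:

        from itertools import product

        # Fluents alive(t), loaded(t) for t = 0..3, as on the slide.
        def consistent(alive, loaded):
            return (alive[0] and not loaded[0]            # initial state
                    and loaded[1]                         # load:  true -> loaded(1)
                    and (not loaded[2] or not alive[3]))  # shoot: loaded(2) -> ~alive(3)

        def changes(alive, loaded):
            # How many times a fluent flips between consecutive time points.
            return sum(alive[t] != alive[t - 1] for t in range(1, 4)) + \
                   sum(loaded[t] != loaded[t - 1] for t in range(1, 4))

        models = [(a, l)
                  for a in product((False, True), repeat=4)
                  for l in product((False, True), repeat=4)
                  if consistent(a, l)]

        fewest = min(changes(a, l) for a, l in models)
        for a, l in models:
            if changes(a, l) == fewest:
                print("alive:", a, "loaded:", l)
        # Prints the intended model (Fred dies at t=3) *and* anomalous ones
        # (e.g. the gun spontaneously unloads, or Fred dies early).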
    • 15. Arguments from disability
      ◦ Arguments of the form: "A machine can never do X."
      ◦ Introduced by [Turing 1950].
      ◦ Remember his refutation of this objection in quantitative terms!
      ◦ Instances of tasks filling in for X:
        ◦ Moravec's paradox: simple/instinctive tasks
        ◦ Creative tasks (art, humor, taste)
        ◦ Emotional tasks (empathy, love)
    • 16. Arguments from informality
      ◦ Machines can only follow rules as supplied by humans, and that is not sufficient for intelligence.
      ◦ Originated with Ada Lovelace:
        ◦ "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."
    • 17. Mathematical objections
      ◦ Undecidability: remember last week's halting problem.
      ◦ Gödel's first incompleteness theorem:
        ◦ No formal theory capable of expressing elementary arithmetic can be both consistent and complete.
        ◦ In other words: every interesting consistent formal theory contains a statement that is true but not provable in that theory.
        ◦ Gödel sentences: sentences that are true but unprovable.
      ◦ Gödel's second incompleteness theorem:
        ◦ For any formal theory T that includes basic arithmetic and the notion of formal provability: T proves its own consistency if and only if T is inconsistent.
        ◦ A consequence of formulating the first incompleteness theorem within the theory itself.
      ◦ (More careful statements follow below.)
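      For reference, the standard textbook statements, here in my own paraphrase for a recursively axiomatized theory T extending basic arithmetic:

        % Standard statements of Gödel's incompleteness theorems, for a
        % recursively axiomatized theory $T$ extending basic arithmetic.
        \paragraph{First theorem.} If $T$ is consistent, there is a sentence
        $G_T$ (a Gödel sentence) with
        \[ T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T, \]
        although $G_T$ is true in the standard model of arithmetic.

        \paragraph{Second theorem.} Writing $\mathrm{Con}(T)$ for the
        arithmetized statement ``$T$ is consistent'':
        \[ T \vdash \mathrm{Con}(T) \iff T \text{ is inconsistent}. \]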
    • 18. Philosophical positions
      ◦ Behaviorism: a mental state is attributed when certain external observations regarding an entity have been made.
      ◦ Functionalism: a mental state is determined by the causal connections between input and output.
        ◦ A chip with the same connections as a brain is intelligent.
      ◦ Biological naturalism: the existence of a mental state crucially depends on it being present in a neurological substrate.
        ◦ If a chip has the same connections as a brain, then this can at most be a simulation of intelligence.
        ◦ The Turing Test does not suffice for establishing intelligence.
    • 19. Weak AI || Strong AI
      ◦ Weak AI: machines simulate intelligence / behave as if they are intelligent.
        ◦ Biological naturalism
      ◦ Strong AI: machines are intelligent.
        ◦ Behaviorism, functionalism
