Introduction to AI - Second Lecture


  • Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[38] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[39]
  • The body is from the material world; the soul is from the world of ideas; only the soul can access truths, since it does not exist in time and space. “So then with the mind, I myself serve God’s law, but with the flesh, the sin’s law.” [Romans 7:25] “If Christ is in you, the body is dead because of sin, but the spirit is alive because of righteousness.” [Romans 8:10]
  • René Descartes’s illustration of dualism. Inputs are passed on by the sensory organs to the epiphysis (pineal gland) in the brain and from there to the immaterial spirit.
  • A partial order (PO) is (1) reflexive, (2) antisymmetric, (3) transitive.

    1. Introduction to AI – 2nd Lecture
       1950’s – The Inception of AI
       Wouter Beek
       15 September 2010
    2. Overview of the 1950’s
       Part I
    3. 1948 – Information Theory
       Shannon 1948, A Mathematical Theory of Communication
       Source: thought → message
       Transmitter: message → signal
       Channel: signal → signal′ (because of noise)
       Receiver: signal′ → message′
       Destination: message′ → thought′
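Shannon’s pipeline can be sketched as a chain of functions, with noise injected in the channel. This is a toy illustration only: the function names and the 8-bit-per-character encoding are our own assumptions, not anything from Shannon’s paper.

```python
import random

def transmitter(message):
    # message -> signal: encode each character as 8 bits
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def channel(signal, flip_prob, rng):
    # signal -> signal': noise may independently flip each bit
    return [b ^ (rng.random() < flip_prob) for b in signal]

def receiver(signal):
    # signal' -> message': decode 8-bit groups back into characters
    chunks = [signal[i:i + 8] for i in range(0, len(signal), 8)]
    return "".join(chr(int("".join(map(str, bits)), 2)) for bits in chunks)

rng = random.Random(42)
message = "AI"
sent_bits = transmitter(message)

# A noiseless channel delivers the message intact:
print(receiver(channel(sent_bits, 0.0, rng)))  # AI

# A noisy channel corrupts some bits on the way:
noisy_bits = channel(sent_bits, 0.5, rng)
print(sum(a != b for a, b in zip(sent_bits, noisy_bits)), "bits flipped")
```

With `flip_prob=0.0` the received message′ equals the message; as `flip_prob` grows, the destination’s reconstruction degrades, which is exactly the problem entropy and coding theory address.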
    4. Information Entropy
       Quantifies the information contained in a message.
       Discrete random variable X with possible outcomes x1, …, xn.
       Entropy: H(X) = −Σ_{i=1}^{n} p(x_i) · log_b p(x_i)
       The base b of the logarithm is 2 for bit encoding.
       We set 0 · log_b 0 = 0 (its limit value).
       Coin toss: p(heads) = 1 − p(tails)
       If you know that the coin has heads on both sides, then telling you the outcome of the next toss tells you nothing, i.e. H(X) = 0.
       If you know that the coin is fair, then telling you the outcome of the next toss gives you the maximum amount of information, i.e. H(X) = 1.
       If you know that the coin has any other bias, then you receive information with entropy strictly between 0 and 1.
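The formula and the three coin cases can be checked in a few lines of Python (a minimal sketch; the function name is ours):

```python
from math import log

def entropy(probs, base=2):
    # H(X) = -sum_i p(x_i) * log_b p(x_i), with the convention 0*log 0 = 0
    # (terms with p = 0 are skipped, which realizes that convention)
    return sum(-p * log(p, base) for p in probs if p > 0)

print(entropy([1.0, 0.0]))  # two-headed coin: 0.0 (outcome is certain)
print(entropy([0.5, 0.5]))  # fair coin: 1.0 (the maximum for one bit)
print(entropy([0.9, 0.1]))  # biased coin: strictly between 0 and 1
```

Base 2 gives entropy in bits, matching the slide’s bit-encoding remark.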
    5. 1946 – ENIAC
       The first general-purpose, electronic computer.
       Electronic Numerical Integrator And Computer
       Turing-complete, i.e. able to simulate a Turing Machine.
    6. 1937 – Turing Machine
       Unbounded tape on which you can read/write 0 or 1.
       The reading/writing head can move Left or Right.
       Formalism for natural numbers: a sequence of 1’s.
       Convention: start at the first 1 of the first argument; separate arguments by a single 0.
       Software for addition:
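An addition program of the kind the slide refers to can be written as a transition table for a tiny simulator. This is our own sketch of the standard unary-addition trick (the state names are invented for illustration): with numbers written as runs of 1’s separated by a single 0, addition amounts to filling the separating 0 and erasing one surplus 1.

```python
def run_tm(tape, rules, state="start", pos=0):
    """Run transition rules {(state, symbol): (write, move, next_state)}."""
    tape = list(tape)
    while state != "halt":
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Addition m + n: scan right over the first argument, overwrite the
# separating 0 with 1 (joining both blocks), then erase the last 1.
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip the first argument
    ("start", "0"): ("1", "R", "join"),   # fill the gap -> one long block
    ("join",  "1"): ("1", "R", "join"),   # skip the second argument
    ("join",  "0"): ("0", "L", "trim"),   # end of input, step back
    ("trim",  "1"): ("0", "R", "halt"),   # erase one surplus 1
}

print(run_tm("1101110", rules))  # 2 + 3 -> "1111100" (five 1's)
```

The head starts at the first 1 of the first argument, per the slide’s convention; the trailing 0 marks the end of the input.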
    7. 1937 – Turing Machine – Computational implications
       Effective computation: a method of computation, each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps.
       Church–Turing Thesis: every effectively computable function can be computed by a Turing Machine.
    8. 1955 – Logic Theorist (LT)
       “Over Christmas, Al[len] Newell and I invented a thinking machine.” [Herbert Simon, January 1956]
       LT proved 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica.
       The proof for one theorem was shorter than the one in Principia.
       The editors of the Journal of Symbolic Logic rejected a paper about the LT, coauthored by Newell and Simon.
    9. Philosophical Ramifications
       “[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.” [Simon]
       This opposes the traditional mind-body dichotomy:
       Plato’s Forms
       The Christian concept of the separation of body and soul, due to St. Paul in the Letter to the Romans.
       Simon is right only under the following presupposition:
       “A physical symbol system has the necessary and sufficient means for general intelligent action.” [Newell and Simon, 1976, Computer Science as an Empirical Inquiry]
    10. Cartesian dualism
        Descartes: the immaterial mind and the material body are
        ontologically distinct, yet
        causally related.
        Compare this to the Turing Test:
        a behavioral or functional interpretation of thought, and
        the claim that mechanical devices will pass the test.
    11. 1956 – Dartmouth Conference (1/2)
        Organizers: John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon
        “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
        [Dartmouth Conference Proposal, 1955, italics added]
    12. 1956 – Dartmouth Conference (2/2)
        John McCarthy coined the term ‘artificial intelligence’ to designate the field.
        Newell and Simon showed off their LT.
        AI@50 / Dartmouth Artificial Intelligence Conference: The Next Fifty Years
        July 13–15, 2006
        50th anniversary commemoration.
    13. Newell, Shaw, Simon 1958 – GPS
        Part II
    14. General Problem Solver (GPS)
        Problem: the perceived difference between the desired object and the current object.
        Objects: the things that the problem is about (e.g. theorems in logic).
        Differences exist between pairs of objects.
        Operator: something that can be applied to objects in order to produce different objects (e.g. the rules of inference in logic).
        Operators are restricted to apply only to certain kinds of objects.
        Operators are indexed with respect to the differences that they are able to mitigate.
        Heuristic information: that which aids the problem-solver in solving a problem.
        It relates operators to differences between objects.
        What is and what is not (heuristic) information is relative to the problem at hand.
        Theory of problem solving: discovering and understanding systems of heuristics.
    15. General Problem Solver (GPS) – Generalized reasoning
        1. Task-environment vocabulary
           proper nouns → common nouns
        2. Problem-solving vocabulary
        3. Conversion between 1 and 2
           Correlative definitions
    16. Means-Ends Analysis (MEA) – Ancient Origin
        “We deliberate not about ends, but about means. […] They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby; while if it is achieved by one means only they consider how it will be achieved by this and by what means this will be achieved, till they come to the first cause, which in the order of discovery is last …”
        [Aristotle, Nicomachean Ethics, III.3.1112b]
    17. Means-Ends Analysis (MEA) – Modern Origin
        “I want to take my son to nursery school. What’s the difference between what I have and what I want? One of distance. What changes distance? My automobile. My automobile won’t work. What is needed to make it work? A new battery. What has new batteries? An auto repair shop. I want the repair shop to put in a new battery; but the shop doesn’t know I need one. What is the difficulty? One of communication. What allows communication? A telephone . . . and so on.” [Newell and Simon]
        The principle of subgoal reduction.
        Part of every heuristic.
    18. Means-Ends Analysis (MEA) – What it is
        A way of controlling search in problem solving.
        Input: current state, goal state.
        Output: a sequence of operators that, when applied to the current state, delivers the goal state.
        The output is derived from the input by mapping operators onto differences.
        Presupposes a criterion for two states being the same.
        Presupposes a criterion for identifying the difference between two states.
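Newell and Simon’s nursery-school example can be sketched as a toy means-ends loop. This is illustrative only: the state representation and the operator table are our own assumptions, not GPS code. Each operator is indexed by the difference it reduces, and an operator’s unmet preconditions become subgoals, exactly the principle of subgoal reduction.

```python
# difference reduced -> (operator name, preconditions, effects)
OPERATORS = {
    "at_school":   ("drive",     {"car_works": True},   {"at_school": True}),
    "car_works":   ("fix_car",   {"has_battery": True}, {"car_works": True}),
    "has_battery": ("call_shop", {},                    {"has_battery": True}),
}

def differences(state, goal):
    # the features on which the current state differs from the goal
    return [k for k, v in goal.items() if state.get(k) != v]

def mea(state, goal, plan=None):
    """Reduce one difference at a time, recursing on unmet preconditions."""
    plan = plan if plan is not None else []
    diffs = differences(state, goal)
    if not diffs:
        return plan
    name, pre, eff = OPERATORS[diffs[0]]   # operator indexed by difference
    if differences(state, pre):            # subgoal: make the operator apply
        plan = mea(state, pre, plan)
        state = {**state, **pre}
    state.update(eff)                      # apply the operator
    plan.append(name)
    return mea(state, goal, plan)

print(mea({"at_school": False}, {"at_school": True}))
# -> ['call_shop', 'fix_car', 'drive']
```

The operator ordering mirrors the slide’s quote: distance is reduced by driving, which needs a working car, which needs a battery, which needs a call to the shop.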
    19. Means-Ends Analysis (MEA)
        Presuppositions that make MEA always succeed:
        For every two objects A and B there exists a sequence of operators F1, …, Fn such that Fn(…F1(A)…) = B.
        The sequence F1, …, Fn is finite.
        In the search space of finite sequences, F1, …, Fn can be found in finite time.
        This is the subject of search techniques.
    20. Means-Ends Analysis (MEA) – Performance Limitations
        The brute-force variant has to try every operator w.r.t. every object. Improvements:
        Include operator restrictions, i.e. an operator only works on specific kinds of objects.
        Include operator indexing w.r.t. the categories of differences that operators mitigate.
        This requires a preliminary categorization of differences.
        Impose a partial order (PO) on the set of differences (or categories of differences).
        Prefer operators that reduce complex differences to simpler differences.
        But regardless of all this: MEA can only see one step ahead.
    21. Planning
        Constructing a proposed solution in general terms before working out the details.
        Ingredients:
        T: the original task environment (from object A to B).
        T′: an abstracted task environment (from object A′ to B′).
        A translation from problems in T to problems in T′ (A to A′ and B to B′).
        A translation from solutions in T′ to plans for solutions in T (from the sequence of operators F′1, …, F′m to F1, …, Fn).
        In both T and T′ we use MEA.
    22. Planning
        Presupposition that makes planning always succeed:
        Every operator F in T is covered by an abstracted operator F′ in T′, such that for every object A in T there is an object A′ in T′ such that: if F′(A′) = B′, then F(A) = B.
        (Under the condition that the problem can be solved in T at all, of course…)