AI Philosophy.



  1. 1. AI Philosophy: Computers and Their Limits G51IAI – Introduction to AI Andrew Parkes
  2. 2. Natural Questions <ul><li>Can a computer only have a limited intelligence? or maybe none at all? </li></ul><ul><li>Are there any limits to what computers can do? </li></ul><ul><li>What is a “computer” anyway? </li></ul>
  3. 3. Turing Test <ul><li>The test is conducted with two people and a machine. </li></ul><ul><li>One person plays the role of an interrogator and is in a separate room from the machine and the other person. </li></ul><ul><li>The interrogator only knows the person and machine as A and B. The interrogator does not know which is the person and which is the machine. </li></ul><ul><li>Using a teletype, the interrogator can ask A and B any question he/she wishes. The aim of the interrogator is to determine which is the person and which is the machine. </li></ul><ul><li>The aim of the machine is to fool the interrogator into thinking that it is a person. </li></ul><ul><li>If the machine succeeds then we can conclude that machines can think. </li></ul>
  4. 4. Turing Test: Modern <ul><li>You’re on the internet and open a chat line (modern teletype) to two others “A” and “B” </li></ul><ul><li>Out of A and B </li></ul><ul><ul><li>one is a person </li></ul></ul><ul><ul><li>one is a machine trying to imitate a person (e.g. capable of discussing the X-factor?) </li></ul></ul><ul><li>If you can’t tell the difference then the machine must be intelligent </li></ul><ul><li>Or at least act intelligent? </li></ul>
  5. 5. Turing Test <ul><li>Often “forget” the second person </li></ul><ul><li>Informally, the test is whether the “machine” behaves like it is intelligent </li></ul><ul><li>This is a test of behaviour </li></ul><ul><li>It does not ask “does the machine really think?” </li></ul>
  6. 6. Turing Test Objections <ul><li>It is too culturally specific? </li></ul><ul><ul><li>If B had never heard of “The X-Factor” then does it preclude intelligence? </li></ul></ul><ul><ul><li>What if B only speaks Romanian? </li></ul></ul><ul><ul><li>Think about this issue! </li></ul></ul><ul><li>It tests only behaviour not real intelligence? </li></ul>
  7. 7. Chinese Room <ul><li>The system comprises: </li></ul><ul><ul><li>a human, who only understands English </li></ul></ul><ul><ul><li>a rule book, written in English </li></ul></ul><ul><ul><li>two stacks of paper. </li></ul></ul><ul><ul><ul><li>One stack of paper is blank. </li></ul></ul></ul><ul><ul><ul><li>The other has indecipherable symbols on it. </li></ul></ul></ul><ul><li>In computing terms </li></ul><ul><ul><li>the human is the CPU </li></ul></ul><ul><ul><li>the rule book is the program </li></ul></ul><ul><ul><li>the two stacks of paper are storage devices. </li></ul></ul><ul><li>The system is housed in a room that is totally sealed with the exception of a small opening. </li></ul>
  8. 8. Chinese Room: Process <ul><li>The human sits inside the room waiting for pieces of paper to be pushed through the opening. </li></ul><ul><li>The pieces of paper have indecipherable symbols written upon them. </li></ul><ul><li>The human has the task of matching the symbols from the &quot;outside&quot; with the rule book. </li></ul><ul><li>Once the symbol has been found the instructions in the rule book are followed. </li></ul><ul><ul><li>may involve writing new symbols on blank pieces of paper, </li></ul></ul><ul><ul><li>or looking up symbols in the stack of supplied symbols. </li></ul></ul><ul><li>Eventually, the human will write some symbols onto one of the blank pieces of paper and pass these out through the opening. </li></ul>
  9. 9. Chinese Room: Summary <ul><li>A simple rule-processing system in which the “rule processor” happens to be intelligent but has no understanding of the rules </li></ul><ul><li>The set of rules might be very large </li></ul><ul><li>But this is philosophy, so ignore the practical issues </li></ul>
  10. 10. Searle’s Claim <ul><li>We have a system that is capable of passing the Turing Test and is therefore intelligent according to Turing. </li></ul><ul><li>But the system does not understand Chinese as it just comprises a rule book and stacks of paper which do not understand Chinese. </li></ul><ul><li>Therefore, running the right program does not necessarily generate understanding. </li></ul>
  11. 11. Replies to Searle <ul><li>The Systems Reply </li></ul><ul><li>The Robot Reply </li></ul><ul><li>The Brain Simulator Reply </li></ul>
  12. 12. Blame the System! <ul><li>The Systems Reply states that the system as a whole understands. </li></ul><ul><li>Searle responds that the system could be internalised into a brain and yet the person would still claim not to understand Chinese </li></ul>
  13. 13. “Make Data”? <ul><li>The Robot Reply argues we could internalise everything inside a robot (android) so that it appears like a human. </li></ul><ul><li>Searle argues that nothing has been achieved by adding motors and perceptual capabilities. </li></ul>
  14. 14. Brain-in-a-Vat <ul><li>The Brain Simulator Reply argues we could write a program that simulates the brain (neurons firing etc.) </li></ul><ul><li>Searle argues we could emulate the brain using a series of water pipes and valves. Can we now argue that the water pipes understand? He claims not. </li></ul>
  15. 15. AI Terminology <ul><li>“Weak AI” </li></ul><ul><ul><li>machines can possibly act intelligently </li></ul></ul><ul><li>“Strong AI” </li></ul><ul><ul><li>machines can actually think intelligently </li></ul></ul><ul><li>AIMA: “Most AI researchers take the weak hypothesis for granted, and don’t care about the strong AI hypothesis” (Chap. 26, p. 947) </li></ul><ul><li>What is your opinion? </li></ul>
  16. 16. What is a computer? <ul><li>In discussions of “Can a computer be intelligent?” </li></ul><ul><li>Do we need to specify the “type” of the computer? </li></ul><ul><ul><li>Does the architecture matter? </li></ul></ul><ul><li>Matters in practice: need a fast machine, lots of memory, etc </li></ul><ul><li>But does it matter “in theory”? </li></ul>
  17. 17. Turing Machine <ul><li>A very simple computing device </li></ul><ul><ul><li>storage: a tape on which one can read/write symbols from a list </li></ul></ul><ul><ul><li>processing: a “finite state automaton” </li></ul></ul>
  18. 18. Turing Machine: Storage <ul><li>Storage: a tape on which one can read/write symbols from some fixed alphabet </li></ul><ul><ul><li>tape is of unbounded length </li></ul></ul><ul><ul><ul><li>you never run out of tape </li></ul></ul></ul><ul><ul><li>have the options to </li></ul></ul><ul><ul><ul><li>move to next “cell” of the tape </li></ul></ul></ul><ul><ul><ul><li>read/write a symbol </li></ul></ul></ul>
  19. 19. Turing Machine: Processing <ul><li>“finite state automaton” </li></ul><ul><ul><li>The processor has a fixed finite number of internal states </li></ul></ul><ul><ul><li>there are “transition rules” that take the current symbol from the tape and tell it </li></ul></ul><ul><ul><ul><li>what to write </li></ul></ul></ul><ul><ul><ul><li>whether to move the head left or right </li></ul></ul></ul><ul><ul><ul><li>which state to go to next </li></ul></ul></ul>
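The tape-plus-FSA model on the last two slides can be sketched in a few lines of Python. This is a minimal illustrative simulator, not anything from the slides: `run_tm`, the rule-table format, and the example machine are all made up for illustration.

```python
# Minimal Turing machine simulator: a sparse dict stands in for the
# unbounded tape, and a rule table stands in for the finite state
# automaton. All names here are illustrative, not from the lecture.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (write, move, next_state);
    move is +1 (right) or -1 (left); state 'halt' stops the machine."""
    cells = dict(enumerate(tape))      # sparse tape: grows on demand
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # '_' is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, then halt on the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_tm(flip, "0110"))  # -> 1001_
```

Note how the whole “processor” is just the `flip` rule table: three transition rules and two states are enough for this machine.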
  20. 20. Turing Machine Equivalences <ul><li>The set of tape symbols does not matter! </li></ul><ul><li>If you have a Turing machine that uses one alphabet, then you can convert it to use another alphabet by changing the FSA properly </li></ul><ul><li>Might as well just use binary 0,1 for the tape alphabet </li></ul>
  21. 21. Universal Turing Machine <ul><li>This is a fixed machine that can simulate any other Turing machine </li></ul><ul><ul><li>the “program” for the other TM is written on the tape </li></ul></ul><ul><ul><li>the UTM then reads the program and executes it </li></ul></ul><ul><li>Cf. on any computer we can write a “DOS emulator” and so read a program from a “.exe” file </li></ul>
  22. 22. Church-Turing Hypothesis <ul><li>“All methods of computing can be performed on a Universal Turing Machine (UTM)” </li></ul><ul><li>Many “computers” are equivalent to a UTM and hence all equivalent to each other </li></ul><ul><li>Based on the observation that </li></ul><ul><ul><li>when someone comes up with a new method of computing </li></ul></ul><ul><ul><li>then it has always turned out that a UTM can simulate it, </li></ul></ul><ul><ul><li>and so it is no more powerful than a UTM </li></ul></ul>
  23. 23. Church-Turing Hypothesis <ul><li>If you run an algorithm on one computer then you can get it to work on any other </li></ul><ul><ul><li>as long as they have enough time and space, computers can all emulate each other </li></ul></ul><ul><ul><li>an operating system of 2070 will still be able to run a 1980’s .exe file </li></ul></ul><ul><li>Implies that abstract philosophical discussions of AI can ignore the actual hardware? </li></ul><ul><ul><li>or maybe not? (see the Penrose argument later!) </li></ul></ul>
  24. 24. Does a Computer have any known limits? <ul><li>Would like to answer: “Does a computer have any limit on intelligence?” </li></ul><ul><li>Simpler to answer “Does a computer have any limits on what it can compute?” </li></ul><ul><ul><li>e.g. ask the question of whether certain classes of program can exist in principle </li></ul></ul><ul><ul><li>best-known example uses program termination: </li></ul></ul>
  25. 25. Program Termination <ul><li>Prog 1: </li></ul><ul><ul><li>i=2 ; while ( i >= 0 ) { i++; } </li></ul></ul><ul><li>Prog 2: </li></ul><ul><ul><li>i=2 ; while ( i <= 10 ) { i++; } </li></ul></ul><ul><li>Prog 1 never halts (assuming unbounded integers; a fixed-width int would eventually overflow) </li></ul><ul><li>Prog 2 halts </li></ul>
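The two programs above can be transcribed into Python, where integers are unbounded, so Prog 1 genuinely never halts (in C, a fixed-width `int` would eventually overflow). The function names are just labels for this sketch.

```python
# The two slide programs, transcribed into Python.

def prog1():
    i = 2
    while i >= 0:    # i only ever grows, so this condition never fails:
        i += 1       # with Python's unbounded ints, prog1 never halts

def prog2():
    i = 2
    while i <= 10:   # stops as soon as i reaches 11
        i += 1
    return i

print(prog2())  # -> 11   (prog1() would run forever, so don't call it!)
```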
  26. 26. Program Termination <ul><li>Determining program termination </li></ul><ul><li>Decide whether or not a program – with some given input – will eventually stop </li></ul><ul><ul><li>would seem to need intelligence? </li></ul></ul><ul><ul><li>would exhibit intelligence? </li></ul></ul>
  27. 27. Halting Problem <ul><li>SPECIFICATION: HALT-CHECKER </li></ul><ul><li>INPUT: 1) the code for a program P 2) an input I </li></ul><ul><li>OUTPUT: determine whether or not P halts eventually when given input I </li></ul><ul><li> return true if “P halts on I”, false if it never halts </li></ul><ul><li>HALT-CHECKER itself must always halt eventually </li></ul><ul><ul><li>i.e. it must always be able to answer true/false to “P halts on I” </li></ul></ul>
  28. 28. Halting Problem <ul><li>SPECIFICATION: HALT-CHECKER </li></ul><ul><li>INPUT: the code for a program P, and an input I </li></ul><ul><li>OUTPUT: true if “P halts on I”, false otherwise </li></ul><ul><li>HALT-CHECKER could merely “run” P on I? </li></ul><ul><li>If “P halts on I” then eventually it will return true; but what if “P loops on I”? </li></ul><ul><li>BUT cannot wait forever to say it fails to halt! </li></ul><ul><li>Maybe we can detect all the loop states? </li></ul>
  29. 29. Halting Problem <ul><li>TURING RESULT: HALT-CHECKER (HC) cannot be programmed on a standard computer (Turing Machine) </li></ul><ul><ul><li>it is “noncomputable” </li></ul></ul><ul><li>Proof: Create a program by “feeding HALT-CHECKER to itself” and deriving a contradiction (you do not need to know the proof) </li></ul><ul><li>IMPACT: A solid mathematical result that a certain kind of program cannot exist </li></ul>
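The shape of the “feed HALT-CHECKER to itself” contradiction can be sketched in Python. Everything here is hypothetical by construction: `halt_checker` is the program the theorem proves cannot exist, so it is written as a stub.

```python
# Sketch of Turing's diagonal argument. Suppose halt_checker(P, I)
# existed and always correctly returned True ("P halts on I") or
# False ("P loops on I"). It is a placeholder here: the theorem
# says no correct implementation is possible.

def halt_checker(program, input_data):
    raise NotImplementedError("provably impossible in general")

def troublemaker(program):
    # Ask the checker about the program run on its own code...
    if halt_checker(program, program):
        while True:   # ...if told "it halts", loop forever
            pass
    else:
        return        # ...if told "it loops", halt immediately

# Now consider troublemaker(troublemaker):
#  - if halt_checker says True (it halts), troublemaker loops forever
#  - if halt_checker says False (it loops), troublemaker halts
# Either answer is wrong, so halt_checker cannot exist.
```

This is the contradiction the slide refers to: the mere assumption that HALT-CHECKER exists lets us build a program it must misjudge.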
  30. 30. Other Limits? <ul><li>“Physical Symbol System Hypothesis” is basically </li></ul><ul><ul><li>“a symbol-pushing system can be intelligent” </li></ul></ul><ul><li>For the “symbol manipulation” let’s consider a “formal system”: </li></ul>
  31. 31. “Formal System” <ul><li>Consists of </li></ul><ul><li>Axioms </li></ul><ul><ul><li>statements taken as true within the system </li></ul></ul><ul><li>Inference rules </li></ul><ul><ul><li>rules used to derive new statements from the axioms and from other derived statements </li></ul></ul><ul><li>Classic Example: </li></ul><ul><li>Axioms: </li></ul><ul><ul><li>All men are mortal </li></ul></ul><ul><ul><li>Socrates is a man </li></ul></ul><ul><li>Inference Rule: “if something holds ‘for all X’ then it holds for any one X” </li></ul><ul><li>Derive </li></ul><ul><ul><li>Socrates is mortal </li></ul></ul>
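The Socrates example is small enough to run as pure symbol manipulation. The sketch below applies the single inference rule mechanically, with no understanding of what “man” or “mortal” mean; the tuple encoding and the function name are invented for this illustration.

```python
# The slide's formal system as mechanical symbol pushing.
# ("forall", P, Q) encodes "all P are Q"; ("is", x, P) encodes "x is a P".

axioms = {("forall", "man", "mortal"),   # All men are mortal
          ("is", "Socrates", "man")}     # Socrates is a man

def derive(statements):
    """Apply the one rule -- from (forall, P, Q) and (is, x, P),
    derive (is, x, Q) -- until no new statements appear."""
    derived = set(statements)
    changed = True
    while changed:
        changed = False
        for kind, p, q in list(derived):
            if kind != "forall":
                continue
            for k2, x, p2 in list(derived):
                if k2 == "is" and p2 == p and ("is", x, q) not in derived:
                    derived.add(("is", x, q))
                    changed = True
    return derived

print(("is", "Socrates", "mortal") in derive(axioms))  # -> True
```

The point of the exercise: the derivation is purely mechanical, which is exactly the property Gödel's theorem (next slides) puts limits on.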
  32. 32. Limits of Formal Systems <ul><li>Systems can do logic </li></ul><ul><li>They have the potential to act (be?) intelligent </li></ul><ul><li>What can we do with “formal systems”? </li></ul>
  33. 33. “Theorem Proving” <ul><li>Bertrand Russell & Alfred Whitehead </li></ul><ul><li>Principia Mathematica 1910-13 </li></ul><ul><li>Attempts to derive all mathematical truths from axioms and inference rules </li></ul><ul><li>Presumption was that </li></ul><ul><ul><li>all mathematics is just </li></ul></ul><ul><ul><ul><li>set up the reasoning </li></ul></ul></ul><ul><ul><ul><li>then “turn the handle” </li></ul></ul></ul><ul><li>Presumption was destroyed by Gödel: </li></ul>
  34. 34. Kurt Gödel <ul><li>Logician, 1906-1978 </li></ul><ul><li>1931, Incompleteness results </li></ul><ul><li>1940s, “invented time travel” </li></ul><ul><ul><li>demonstrated existence of “rotating universes”, solutions to Einstein's general relativity with paths for which .. </li></ul></ul><ul><ul><li>“on doing the loop you arrive back before you left” </li></ul></ul><ul><li>Died of malnutrition </li></ul>
  35. 35. Gödel's Theorem (1931) <ul><li>Applies to systems that are: </li></ul><ul><li>formal: </li></ul><ul><ul><li>proof is by means of axioms and inference rules following some mechanical set of rules </li></ul></ul><ul><ul><li>no external “magic” </li></ul></ul><ul><li>“consistent” </li></ul><ul><ul><li>there is no statement X for which we can prove both X and “not X” </li></ul></ul><ul><li>powerful enough to at least do arithmetic </li></ul><ul><ul><li>the system has to be able to reason about the natural numbers 0,1,2,… </li></ul></ul>
  36. 36. Gödel's Theorem (1931) <ul><li>In consistent formal systems that include arithmetic then </li></ul><ul><li>“There are statements that are true but the formal system cannot prove” </li></ul><ul><li>Note: it states the proof does not exist, not merely that we cannot find one </li></ul><ul><li>Very few people understand this theorem properly </li></ul><ul><ul><li>I’m not one of them! </li></ul></ul><ul><ul><li>I don’t expect you to understand it either! … </li></ul></ul><ul><ul><li>just be aware of its existence as a known limit of what one can do with one kind of “symbol manipulation” </li></ul></ul>
  37. 37. Lucas/Penrose Claims <ul><li>Book: “The Emperor's New Mind”, 1989, Roger Penrose, Oxford Professor, Mathematics </li></ul><ul><li>(Similar arguments also came from Lucas, 1960s) </li></ul><ul><li>Inspired by Gödel’s Theorem: </li></ul><ul><ul><li>Can create a statement that they can see is true in a system, but that cannot be shown to be true within the system </li></ul></ul><ul><li>Claim: we are able to show something that is true but that a Turing Machine would not be able to show </li></ul><ul><li>Claim: this demonstrates that the human is doing something a computer can never do </li></ul><ul><li>Generated a lot of controversy!! </li></ul>
  38. 38. Penrose Argument <ul><li>Based on the logic of Gödel’s Theorem </li></ul><ul><li>That there are things humans do that a computer cannot do </li></ul><ul><li>That humans do this because of physical processes within the brain that are noncomputable, i.e. that cannot be simulated by a computer </li></ul><ul><ul><li>compare to “brain in a vat” !? </li></ul></ul><ul><li>Hypothesis: quantum mechanical processes are responsible for the intelligence </li></ul><ul><li>Many (most?) believe that this argument is wrong </li></ul>
  39. 39. Penrose Argument <ul><li>Some physical processes within the brain are noncomputable, i.e. cannot be simulated by a computer (UTM) </li></ul><ul><li>These processes contribute to our intelligence </li></ul><ul><li>Hypothesis: quantum mechanical and quantum gravity processes are responsible for the intelligence (!!) </li></ul><ul><li>(Many believe that this argument is wrong) </li></ul>
  40. 40. One Reply to Penrose <ul><li>Humans are not consistent and so Gödel's theorem does not apply </li></ul><ul><li>Penrose Response: </li></ul><ul><ul><li>In the end, people are consistent </li></ul></ul><ul><ul><li>E.g. one mathematician might make mistakes, but in the end the mathematical community is consistent and so the theorem applies </li></ul></ul>
  41. 41. Summary <ul><li>Church-Turing Hypothesis </li></ul><ul><ul><li>all known computers are equivalent in power </li></ul></ul><ul><ul><li>a simple Turing Machine can run anything we can program </li></ul></ul><ul><li>Physical Symbol System Hypothesis </li></ul><ul><ul><li>intelligence is just symbol pushing </li></ul></ul><ul><li>There are known limits on “symbol-pushing” computers </li></ul><ul><ul><li>halting problem, Gödel’s theorem </li></ul></ul><ul><li>Penrose-Lucas: we can do things symbol pushing computers can’t </li></ul><ul><ul><li>Some “Turing Tests” will be failed by a computer </li></ul></ul><ul><ul><li>Some tasks cannot be performed by a “Chinese room” </li></ul></ul><ul><ul><li>but the argument is generally held to be in error </li></ul></ul>
  42. 42. Questions?