Why machines can't think (logically)

Published in: Technology
Transcript

  • 1. Why Machines Can’t Think (logically). André Vellino (vellino@sympatico.ca), Carleton University Cognitive Science Program.
  • 2. Outline
    - General question: what do Logic, Complexity Theory and Automated Theorem Proving have to say about the question “can machines think?” (or at least, “can machines reason?”)
    - The role of “logics” in AI
    - Results on the complexity of automated theorem-proving procedures
    - Why machines can’t think: the argument
    - The logicist response
  • 3. Role of “Logics” in AI
    “[AI is] the study of the computations that make it possible to perceive, reason and act” (Pat Winston)
    The role of “logics” is:
    - (a) to provide a formal system powerful enough to model various representations of knowledge, belief and action;
    - (b) to characterize mechanisms that specify permissible (aka “valid”) inferences.
  • 4. Examples of “Logics” for AI
    - 2-valued Propositional Calculus
    - First-Order Predicate Calculus
    - Modal Logic (possibility and necessity)
    - Deontic Logic (permissions and obligations)
    - Relevance Logic (logic of “relevant” implication)
    - Conditional Logic (counterfactuals)
    - Default Logic (“common sense” reasoning)
    - Epistemic Logic (beliefs and knowledge)
    - Description Logics (knowledge representation)
  • 5. Example: Defeasible Reasoning
    If the traffic light is red then stop (defeasible rule) [in the absence of any further information, i.e. under normal conditions]: Red ⊃ Stop
    If the light for going straight is green, then go straight (absolute rule): Green → Go
  • 6. Expressive Power of a Logic
    Depends on the complexity of the semantics, i.e. the expressive power of the model theory.
    [Diagram: nested expressiveness: 2-valued Propositional Calculus, other propositional calculi, 1st-order Predicate Calculus, other 1st-order theories.]
  • 7. Propositional Calculus (PC)
    PC is the language whose well-formed formulas are composed of a finite combination of:
    - Logical constants: { ~, ∨, &, ≡, → }
    - An infinite set of atomic propositional variables: {a, b, c, ..., a1, b1, c1, ...}
    e.g. (p → (q → p)) & ((~a ∨ b) ≡ (a → b))
    Without loss of generality, consider only formulas in Conjunctive Normal Form, or “sets of clauses” (clauses are disjunctions), e.g. {(p ∨ q ∨ r), (~p ∨ s), (r ∨ t)}
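The clause-set notation can be sketched concretely. The following encoding is illustrative and not from the slides: a literal is a string, a leading `~` marks negation, a clause is a set of literals, and a CNF formula is a list of clauses.

```python
# Illustrative encoding (not from the slides): literals are strings,
# '~p' is the negation of 'p', a clause is a set of literals, and a
# CNF formula is a list of clauses.
cnf = [{"p", "q", "r"},   # (p v q v r)
       {"~p", "s"},       # (~p v s)
       {"r", "t"}]        # (r v t)

def eval_clause(clause, a):
    """A clause (disjunction) is true iff at least one literal is true
    under the assignment a (a dict from variable name to bool)."""
    return any(not a[lit[1:]] if lit.startswith("~") else a[lit]
               for lit in clause)

def eval_cnf(cnf, a):
    """A CNF formula (conjunction of clauses) is true iff every clause is."""
    return all(eval_clause(clause, a) for clause in cnf)
```

For instance, the assignment p=T, q=F, r=T, s=T, t=F makes every clause of the example true, while the all-false assignment falsifies the first clause.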
  • 8. Satisfiability / Unsatisfiability
    A set of clauses Σ = {C1, C2, ..., Cn} is satisfiable (SAT) if ∃ an assignment of truth values to the literals in Σ such that C1 & C2 & ... & Cn is true.
    A set of clauses Σ = {C1, C2, ..., Cn} is unsatisfiable (co-SAT) if no assignment of truth values to the literals in Σ makes C1 & C2 & ... & Cn true.
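The definition suggests an obvious (exponential-time) decision procedure: enumerate all truth assignments. A minimal sketch, assuming clauses are encoded as sets of literal strings with `~` marking negation:

```python
from itertools import product

def satisfiable(cnf):
    """Decide SAT by exhaustive search over all 2**n truth assignments.
    Exponential time: this is the naive 'guess' half of guess-and-verify."""
    names = sorted({lit.lstrip("~") for clause in cnf for lit in clause})
    for values in product([True, False], repeat=len(names)):
        a = dict(zip(names, values))
        if all(any(not a[l[1:]] if l.startswith("~") else a[l]
                   for l in clause)
               for clause in cnf):
            return True   # found a satisfying assignment: SAT
    return False          # no assignment works: unsatisfiable (co-SAT)
```

Note the asymmetry the later slides exploit: answering "SAT" needs only one witnessing assignment, but answering "unsatisfiable" requires (here) checking all 2^n of them.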
  • 9. Theorem Provers for co-SAT
    To prove that T is a tautology, assume ~T and derive ∅ using a theorem prover such as:
    - Truth Tables (Wittgenstein / Frege / Carroll)
    - Semantic Tableaux (Beth)
    - Resolution (Robinson / Davis-Putnam)
    - Sequent Calculus (Gentzen systems)
    - Axioms with substitution (Frege systems)
  • 10. Example 1: Semantic Tableaux
    Simple example: prove the inconsistency of (a ∨ b) & (e ∨ f) & (~a ∨ b) & ~b, i.e. {ab, ef, ~ab, ~b}.
    [Diagram: tableau tree for the clause set; every branch closes (X) on a pair of complementary literals.]
  • 11. Example 2: Resolution
    Resolution rule: (a ∨ B) & (~a ∨ C) ∴ (B ∨ C)
    For the set of clauses {ab, ef, ~ab, ~b}:
    1) ab    premise
    2) ~ab   premise
    3) ~b    premise
    4) b     by resolving on a in 1 & 2
    5) ∅     by resolving on b in 4 & 3
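A resolution refutation like the one above can be mechanized by saturating the clause set under the resolution rule until the empty clause appears. A sketch, again assuming the string-literal encoding of clauses (sets of literals, `~` for negation):

```python
def neg(lit):
    """Complement of a literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    return [(c1 - {lit}) | (c2 - {neg(lit)})
            for lit in c1 if neg(lit) in c2]

def refute(cnf):
    """Saturate under resolution; True iff the empty clause is derivable,
    i.e. the clause set is unsatisfiable (resolution is refutation-complete
    for propositional logic)."""
    clauses = {frozenset(c) for c in cnf}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:
                        return True        # derived the empty clause
                    new.add(frozenset(r))
        if new <= clauses:
            return False                   # saturated without deriving the empty clause
        clauses |= new
```

On the slide's example {ab, ef, ~ab, ~b}, the first round derives {b}, and the next round resolves it against {~b} to produce the empty clause.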
  • 12. Computability, Decidability and Feasibility
    - Computable: there exists a Turing machine (“decision procedure” / “algorithm”) that halts.
    - Decidable: given {Σ, T}, it is computable whether Σ |− T or whether Σ |− ~T.
    - Feasibly decidable: decidable by a Turing machine in polynomial time.
  • 13. Polynomial vs. Exponential
    - Polynomial complexity: time (space) grows as a function n^k, where n is proportional to the size of the input and k is a constant.
    - Exponential complexity: time (space) grows as a function k^n, where n is proportional to the size of the input and k is a constant.
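The practical gap between the two growth rates is easy to see numerically. A small illustration (numbers chosen arbitrarily, with k = 2 in both roles):

```python
def steps(n, k=2):
    """Steps taken by a polynomial n**k algorithm vs an exponential k**n one."""
    return n ** k, k ** n

# Comparable at small n; the exponential soon dwarfs the polynomial.
for n in (5, 10, 20, 40):
    poly, expo = steps(n)
    print(f"n={n:>2}  n^2={poly:>5}  2^n={expo}")
```

At n = 40 the polynomial count is 1,600 while the exponential count already exceeds 10^12.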
  • 14. The Class P
    P is the class of languages recognizable by a deterministic Turing machine in polynomial time. Examples:
    - Tautology (falsifiability) of propositional biconditionals without negation, e.g. ((a ≡ b) ≡ (c ≡ b)) ≡ (a ≡ c)
    - Integer divisibility (indivisibility) by 2
    co-P is the complement of P, and P = co-P.
  • 15. The Class NP / NP-Complete
    NP is the class of languages recognizable by a non-deterministic Turing machine in polynomial time, e.g.:
    - all problems in P
    - all “guess and verify” problems such as SAT, 3-SAT, Traveling Salesman, Subgraph Isomorphism
    co-NP is the class of languages in the complement of NP, e.g. co-SAT.
    L is NP-complete if, for every problem L' in NP, there exists a polynomial-time transformation from L' to L.
  • 16. Open Problem: is P = NP? (Steve Cook, 1971)
    [Diagram: the two possibilities: either P ⊊ NP, with NP-complete and NP-intermediate (NP-I) problems outside P, or P = NP.]
  • 17. Strategy for a Proof that P ≠ NP
    If P = NP then co-NP = NP (since co-P = P); ∴ co-NP ≠ NP implies P ≠ NP.
    ∃ an efficient proof method for TAUT iff co-NP = NP.
    ∴ if no theorem-proving procedure can produce, for every tautology, a proof whose length is a polynomial function of the length of the tautology (i.e. if some tautologies have only exponentially long proofs), then P ≠ NP.
  • 18. Summary
    - Verify SAT (in P): given p ∨ q & r ∨ ~q and the assignment T F T T, check it.
    - Find SAT (in NP): given p ∨ q & r ∨ ~q, find an assignment ? ? ? ?
    - Prove UNSAT (in co-NP): a ∨ b & ~a ∨ b & ~b
  • 19. Complexity vs. AI
    - The complexity game (co-NP = NP?): find “hard examples” for increasingly general propositional theorem-proving procedures.
    - The AI reasoning game: find “efficient” and practical theorem-proving procedures in logics for AI.
  • 20. Hard Problems for Resolution
    Pigeonhole clauses (Haken ’85): n balls can’t fit into n−1 holes.
    Each ball is in some hole (n clauses); for 3 balls and 2 holes:
    ball_1_is_in_hole_1 v ball_1_is_in_hole_2
    ball_2_is_in_hole_1 v ball_2_is_in_hole_2
    ball_3_is_in_hole_1 v ball_3_is_in_hole_2
    Each hole can hold at most one ball (n(n−1)²/2 clauses):
    ~ball_1_is_in_hole_1 v ~ball_2_is_in_hole_1
    ~ball_1_is_in_hole_1 v ~ball_3_is_in_hole_1
    ~ball_2_is_in_hole_1 v ~ball_3_is_in_hole_1
    ~ball_1_is_in_hole_2 v ~ball_2_is_in_hole_2
    ~ball_1_is_in_hole_2 v ~ball_3_is_in_hole_2
    ~ball_2_is_in_hole_2 v ~ball_3_is_in_hole_2
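The pigeonhole family generalizes mechanically to any n. A sketch generator (the variable names `b{i}h{j}` are shortened stand-ins for the slide's `ball_i_is_in_hole_j`; the encoding conventions are assumptions, not from the slides):

```python
from itertools import combinations

def pigeonhole_cnf(n):
    """Clauses asserting that n balls fit into n-1 holes; unsatisfiable
    for n >= 2. Variable 'b{i}h{j}' reads 'ball i is in hole j'."""
    balls, holes = range(1, n + 1), range(1, n)
    # each ball is in some hole: n clauses
    in_some_hole = [{f"b{i}h{j}" for j in holes} for i in balls]
    # no two balls share a hole: (n-1) * C(n, 2) = n(n-1)^2/2 clauses
    one_per_hole = [{f"~b{i}h{j}", f"~b{k}h{j}"}
                    for j in holes for i, k in combinations(balls, 2)]
    return in_some_hole + one_per_hole

print(len(pigeonhole_cnf(3)))  # 3 + 6 = 9 clauses, matching the slide
```

Haken's result is that every resolution refutation of this family grows exponentially in n, even though the clause set itself grows only polynomially.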
  • 21. Search Space vs. Proof Length
    - For problems in NP (SAT), the search space is exponentially large but the proof (a satisfying assignment) is polynomial in size.
    - For problems in co-NP (co-SAT), the minimal-length proof is exponential and the search space is even larger.
  • 22. Why Machines Can’t Think
    If (any) “reasoning” is done by “logical rule-following”, and
    if some problems that people solve (feasibly) can’t be solved (feasibly) by following rules of logic,
    then either
    - people don’t reason logically, or
    - logic is no foundation for artificial intelligence.
  • 23. A Few Responses
    1) Worst-case complexity is irrelevant because average-case complexity is what matters in practice;
    2) exponential growth is irrelevant if the exponent is small for all realistic inputs;
    3) there are efficient theorem-proving methods that are sound but incomplete;
    4) computational complexity can be overcome by increasing the power of the logic.
  • 24. Selman, Mitchell & Levesque ‘96
  • 25. “Exponential” isn’t bad if the exponent is small
  • 26. Devise Sound, Tractable but Incomplete ATPs
    - Vivid reasoning (Levesque): wants to make “believers out of computers” and devise incomplete but tractable logics that are psychologically realistic (e.g. capture the logic of the “mental models” theory of Johnson-Laird).
    - Bounded rationality (Cherniak): “rational agents” need to use “a better than random, but not perfect, gambling strategy for identifying sound inferences”.
  • 27. Use “Stronger” Logics
    - People don’t map ordinary problems (e.g. the pigeonhole problem) into languages (PC) that are computationally hard.
    - Use a different, more powerful logic in which propositionally-hard-to-prove formulae are easy to prove (e.g. extended resolution).
    - Problem: this merely shifts the exponential-length-of-proof constraint to a search-for-a-short-proof problem.
  • 28. Concluding Remarks
    - If the “language of thought” has a structure that can be represented as, or even modeled by, a logic, then you need to characterize what is “infeasibly computable” about it and why.
    - If you can establish experimentally which inferences are “cognitively hard” for people, then you can test hypotheses about which “logics” people use to draw inferences.
