# Jarrar.lecture notes.aai.2011s.ch7.p logic

## by PalGov on Sep 17, 2011


## Presentation Transcript

• Logic & Logic Agents, Chapter 7 (& background). Dr. Mustafa Jarrar, University of Birzeit, [email_address], www.jarrar.info. Lecture Notes, Advanced Artificial Intelligence (SCOM7341), University of Birzeit, 2nd Semester, 2011.
• Lecture Outline
• Discuss some history of Logic
• Review some logic background
• Logic agents (ch.7)
• What is logic?
• A logic allows the axiomatization of the domain information, and the drawing of conclusions from that information.
• Syntax
• Semantics
• Logical inference = reasoning
• History of Logic (timeline figure: 322 BC, 1037, 1198, 1960s, 2001; complete, scalable, automated)
• Some Logic Background
• Propositions and Validity of Arguments: Is it a valid argument?
• Tarski’s World: describe Tarski’s world using universal and existential quantifiers.
• Universal and Existential Quantifiers
• There is an item that was chosen by every student. → true
• There is a student who chose every available item. → false
• There is a student who chose at least one item from every station. → true
• Every student chose at least one item from every station. → false
• Logic allows us to represent knowledge precisely (Syntax and Semantics).
Motivation Example
• However, representation alone is not enough.
• We also need to process this knowledge and make use of it, i.e. Logical inference = (Reasoning).
∀x PhDStudent(x) → Student(x)
∀x Employee(x) → Person(x)
∀x Student(x) → Person(x)
∀x PhDStudent(x) → Employee(x)
(Diagram: PhD Student inside both Student and Employee, which are inside Person.)
• Motivation Example: Reasoning
∀x PhDStudent(x) → Student(x)
∀x Employee(x) → Person(x)
∀x Student(x) → Person(x)
∀x PhDStudent(x) → Employee(x)
⊨ ∀x PhDStudent(x) → Person(x)
• How can we process the above axioms to determine that one axiom can be derived from the others?
• Motivation Example: Reasoning
∀x PhDStudent(x) → Student(x)
∀x Employee(x) → Person(x)
∀x Student(x) → Person(x)
∀x PhDStudent(x) → Employee(x)
• How can we process the above axioms to determine that one axiom can be derived from the others?
• Find contradictions (satisfiability), e.g., ∀x Student(x) ∧ Employee(x) = ⊥ (Student and Employee disjoint), … etc.
• Ch.7 Logic Agents
• Objectives
• Know the important aspects of propositional and predicate logic
• syntax, semantics, models, inference rules, complexity
• Understand the limitations of propositional and predicate logic
• Apply simple reasoning techniques to specific tasks
• Learn about the basic principles of predicate logic
• Apply predicate logic to the specification of knowledge-based systems and agents
• Use inference rules to deduce new knowledge from existing knowledge bases
• A Knowledge-Based Agent
• A knowledge-based agent consists of a knowledge base (KB) and an inference engine (IE).
• A knowledge base is a set of representations of what one knows about the world (objects and classes of objects, facts about objects, relationships among objects, etc.)
• Each individual representation is called a sentence .
• The sentences are expressed in a knowledge representation language .
• Examples of sentences
• The moon is made of green cheese
• If A is true then B is true
• A is false
• All humans are mortal
• Confucius is a human
• A Knowledge-Based Agent
• The Inference engine derives new sentences from the input and KB
• The inference mechanism depends on representation in KB
• The agent operates as follows:
• 1. It receives percepts from environment
• 2. It computes what action it should perform (by IE and KB)
• 3. It performs the chosen action (some actions are simply inserting inferred new facts into KB).
(Diagram: input from the environment and the Knowledge Base feed the Inference Engine, which produces output (actions) and learning (KB updates).)
• Knowledge Level.
• The most abstract level -- describe agent by saying what it knows.
• Example: A taxi agent might know that the Golden Gate Bridge connects San Francisco with Marin County.
• Logical Level.
• The level at which the knowledge is encoded into sentences.
• Implementation Level.
• The physical representation of the sentences in the logical level.
• Example: “(Links GoldenGateBridge, SanFrancisco, MarinCounty)”
KB can be viewed at different levels
• The Wumpus World
• Demo (Video)
• Performance measure
• gold +1000, death -1000
• -1 per step, -10 for using the arrow
• Environment
• Squares adjacent to the wumpus are smelly
• Squares adjacent to a pit are breezy
• Glitter iff gold is in the same square
• Shooting kills the wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up the gold if in the same square
• Releasing drops the gold in the same square
• Sensors: Stench, Breeze, Glitter, Bump, Scream
• Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
• Entailment (Logical Implication)
• Entailment means that one thing follows from another:
• KB ╞ α
• Knowledge base KB entails sentence α
• if and only if
• α is true in all worlds where KB is true
• - E.g., the KB containing “the Giants won” and “the Reds won” entails “Either the Giants won or the Reds won”
• - Entailment is a relationship between sentences (i.e., Syntax) that is based on Semantics.
• - E.g., x+y = 4 entails 4 = x+y
• Models
• Logicians typically think in terms of models , which are formally structured worlds with respect to which truth can be evaluated.
• We say m is a model of a sentence α if α is true in m
• M(α) is the set of all models of α
• Then KB ╞ α iff M(KB) ⊆ M(α)
• E.g.
• KB = “Giants won” and “Reds won”
• α = “Giants won”
• Or
• α = “Reds won”
• Or
• α = either the Giants or the Reds won
• Wumpus Models
• KB = wumpus-world rules + observations
• α1 = "[1,2] is safe"
• KB ╞ α1, proved by model checking
• Exercise
• Construct a model satisfying the following:
• Φ = ∀x (Circle(x) → Above(x, f))
• Model:
• Circle( a )
• Circle( c )
• Triangle( f )
• a in [3,5]
• c in [2,4]
• f in [3,3]
We say that this model is an interpretation of Φ.
• Inference - Deduction - Reasoning
• KB ├ i α means that sentence α can be derived from KB by procedure i
• Soundness : i is sound if
• whenever KB ├ i α, it is also true that KB ╞ α
• Completeness : i is complete if
• whenever KB ╞ α, it is also true that KB ├ i α
• Preview: we will define a logic (first-order logic) which is expressive enough to say almost anything of interest, and for which there exists a sound and complete inference procedure.
• That is, the procedure will answer any question whose answer follows from what is known by the KB .
KB ├ i α
• Propositional logic
• Propositional logic: Syntax
• Propositional logic is the simplest logic –illustrates basic ideas
• Propositional Logic: Semantics
• Each model specifies true/false for each proposition symbol
• E.g., with symbols P1,2, P2,2, P3,1, one model is m = {P1,2 = false, P2,2 = true, P3,1 = false}
• With these symbols, there are 8 (= 2³) possible models, which can be enumerated automatically.
• Rules for evaluating truth with respect to a model m :
• ¬S is true iff S is false
• S1 ∧ S2 is true iff S1 is true and S2 is true
• S1 ∨ S2 is true iff S1 is true or S2 is true
• S1 ⇒ S2 is true iff S1 is false or S2 is true
• i.e., it is false iff S1 is true and S2 is false
• S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true
• Simple recursive process evaluates an arbitrary sentence, e.g.,
• ¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
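The recursive evaluation rules above can be sketched in code. The tuple encoding of sentences and the function name `truth` are illustrative assumptions, not part of the slides:

```python
def truth(sentence, model):
    """Recursively evaluate a propositional sentence in a model.

    A sentence is either a proposition symbol (a string) or a tuple
    (op, arg1, ...) with op in {'not', 'and', 'or', 'imp', 'iff'};
    a model maps each symbol to True or False.
    """
    if isinstance(sentence, str):            # proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not truth(args[0], model)
    if op == 'and':
        return all(truth(a, model) for a in args)
    if op == 'or':
        return any(truth(a, model) for a in args)
    if op == 'imp':                          # false iff premise true, conclusion false
        return (not truth(args[0], model)) or truth(args[1], model)
    if op == 'iff':
        return truth(args[0], model) == truth(args[1], model)
    raise ValueError(f"unknown connective: {op}")

# The slide's example: ¬P1,2 ∧ (P2,2 ∨ P3,1) in the model
# {P1,2 = false, P2,2 = true, P3,1 = false}
model = {'P12': False, 'P22': True, 'P31': False}
sentence = ('and', ('not', 'P12'), ('or', 'P22', 'P31'))
print(truth(sentence, model))  # True
```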
• A simple knowledge base
• Let Pi,j be true if there is a pit in [i, j].
• Let Bi,j be true if there is a breeze in [i, j].
• Knowledge Base:
• B1,1 ⇔ (P1,2 ∨ P2,1): [1,1] is breezy if and only if there is a pit in a neighboring square
• B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• ¬B1,1 (percept: no breeze in [1,1])
• B2,1 (percept: breeze in [2,1])
•  These sentences are true in all wumpus worlds.
• ¬P1,1: there is no pit in [1, 1]
• Validity and Satisfiability
• A sentence is valid if it is true in all models,
• e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
• Validity is connected to inference via the Deduction Theorem :
• KB ╞ α if and only if (KB ⇒ α) is valid
• A sentence is satisfiable if it is true in some model
• e.g., A ∨ B, C
• A sentence is unsatisfiable if it is true in no models
• e.g., A ∧ ¬A
• Satisfiability is connected to inference via the following:
• KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
• Validity and Satisfiability
• An interpretation I is a model of φ:
• I ╞ φ
• A formula φ is
• satisfiable, if there is some I that satisfies φ,
• unsatisfiable, if φ is not satisfiable,
• falsifiable, if there is some I that does not satisfy φ,
• valid (i.e., a tautology), if every I is a model of φ.
• Two formulas are logically equivalent (φ ≡ ψ), if for all I:
• I ╞ φ iff I ╞ ψ
• Inference Methods
• Enumeration Method
• Propositional Inference: Enumeration Method Truth Tables for Inference
• Propositional Inference: Enumeration Method
• Let α = A ∨ B and KB = (A ∨ C) ∧ (B ∨ ¬C)
• Is it the case that KB ╞ α?
• Check all possible models: α must be true wherever KB is true.
• Propositional inference: Enumeration method
• Depth-first enumeration of all models is sound and complete
• For n symbols, time complexity is O(2^n), space complexity is O(n)
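The enumeration method can be sketched as a small model checker. The representation of KB and α as Python functions over a model is an assumption of mine; the example formulas are the ones from the slide:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Model checking by enumeration: KB ╞ alpha iff alpha is true in
    every model where KB is true. O(2^n) time, O(n) space for n symbols.

    kb and alpha are functions from a model (dict symbol -> bool) to bool.
    """
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False                      # found a KB-model that falsifies alpha
    return True

# The slide's example: KB = (A ∨ C) ∧ (B ∨ ¬C), α = A ∨ B
kb    = lambda m: (m['A'] or m['C']) and (m['B'] or not m['C'])
alpha = lambda m: m['A'] or m['B']
print(tt_entails(kb, alpha, ['A', 'B', 'C']))  # True
```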
• Inference rules
• Inference Rule: Modus Ponens
• The rule is written as:
α ⇒ β,  α
⊢ β
It means: whenever sentences of the form α ⇒ β and α are given, the sentence β can be inferred. ("If α, then β. α. Therefore, β.") For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.
• More Inference Rules: Logical Equivalences
• Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ╞ β and β ╞ α
All rules are sound if used with search algorithms, but they might be inadequate to reach a goal (i.e., completeness is not guaranteed).
• Resolution: proof by contradiction, i.e., show that KB ∧ ¬α is unsatisfiable.
• Resolution
• Resolution is a rule of inference leading to a refutation theorem-proving technique for sentences in propositional logic.
• That is, applying the resolution rule in a suitable way allows for telling whether a propositional formula is satisfiable;
• Resolution was introduced by John Alan Robinson in 1965.
• Suppose we have a knowledge base in this form:
• By resolving (A ∨ B) and (A ∨ ¬B), we obtain (A ∨ A), which is reduced to A.
• Notice that this rule applies only when the knowledge base is in the form of conjunctions of disjunctions of literals.
A ∨ B,  A ∨ ¬B
⊢ A
• Resolution
• We first write/convert the formulas into Conjunctive Normal Form (CNF):
• conjunction of disjunctions of literals (clauses)
• E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)
• A literal is a propositional variable or the negation of a propositional variable.
• Resolution inference rule (for CNF), where li and mj are complementary literals (one is the negation of the other):
l1 ∨ … ∨ lk,   m1 ∨ … ∨ mn
⊢ l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn
• E.g., from P1,3 ∨ P2,2 and ¬P2,2 infer P1,3; from a ∨ b and ¬a ∨ c infer b ∨ c.
• ⇒ Resolution is sound and complete for propositional logic.
• Conversion to CNF
• Any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form.
• Example: B1,1 ⇔ (P1,2 ∨ P2,1)
• 1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
• (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
• 2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
• (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
• 3. Move ¬ inwards using De Morgan's rules and double negation:
• (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
• 4. Apply the distributivity law (∨ over ∧) and flatten:
• (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
• Resolution Algorithm
• Steps:
• All sentences in KB and the negation of the sentence to be proved (the conjecture ) are conjunctively connected.
• The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S , of clauses.
• The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting sentence is simplified by removing repeated literals. If the sentence contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S , it is added to S , and is considered for further resolution inferences.
• If after applying a resolution rule the empty clause is derived, the complete formula is unsatisfiable (or contradictory ), and hence it can be concluded that the initial conjecture follows from the axioms.
• If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base.
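The refutation procedure described above can be sketched as follows. The clause representation (frozensets of string literals, `-` marking negation) and the restriction of the query to a single literal are simplifying assumptions of mine; the example KB is the one from Exercises 1 and 2 below:

```python
from itertools import combinations

def complement(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolvents(ci, cj):
    """All clauses obtainable by resolving ci with cj on one pair of
    complementary literals; tautologies are discarded."""
    out = []
    for lit in ci:
        if complement(lit) in cj:
            new = (ci - {lit}) | (cj - {complement(lit)})
            if not any(complement(l) in new for l in new):
                out.append(frozenset(new))
    return out

def pl_resolution(kb_clauses, query_literal):
    """Refutation: KB ╞ α iff KB ∧ ¬α is unsatisfiable,
    i.e. iff the empty clause is derivable by resolution."""
    clauses = set(kb_clauses) | {frozenset([complement(query_literal)])}
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolvents(ci, cj):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                 # no new clauses: α is not entailed
        clauses |= new

# KB of the exercises: B1,1 ⇔ (P1,2 ∨ P2,1) in CNF, plus the percept ¬B1,1
kb = [frozenset({'-B11', 'P12', 'P21'}),
      frozenset({'-P12', 'B11'}),
      frozenset({'-P21', 'B11'}),
      frozenset({'-B11'})]
print(pl_resolution(kb, '-P12'))  # True: "no pit in [1,2]" is entailed
print(pl_resolution(kb, 'P12'))   # False: "pit in [1,2]" is not entailed
```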
• Resolution Algorithm ( in short )
• The resolution algorithm tries to prove:
• KB ╞ α, which is equivalent to:
• KB ∧ ¬α being unsatisfiable
• Generate all new sentences from KB and the query.
• One of two things can happen:
• We find a case like P ∧ ¬P, which is unsatisfiable, which means we can entail the query.
• We find no contradiction: there is a model that satisfies the sentence KB ∧ ¬α (non-trivial), and hence we cannot entail the query.
• Resolution Algorithm
• Proof by contradiction, i.e., show KB   α unsatisfiable.
• Example
• KB = ((P ⇒ Q) ⇒ Q) ∧ ((P ∨ ¬P) ⇒ R) ∧ ((R ⇒ S) ⇒ ¬(S ⇒ Q))
• α = R
• Does KB entail α (KB ╞ α)?
Yes! Converting KB to CNF gives the clauses: 1. P ∨ Q, 2. P ∨ R, 3. ¬P ∨ R, 4. R ∨ S, 5. R ∨ ¬Q, 6. ¬S ∨ ¬Q. Adding the negated goal, 7. ¬R, resolution derives: 8. S (from 4, 7), 9. ¬Q (from 6, 8), 10. P (from 1, 9), 11. R (from 3, 10), 12. the empty clause (from 7, 11).
• Exercise 1
• KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) (breeze in [1,1] iff there is a pit in [1,2] or [2,1])
• ¬B1,1 (there is no breeze in [1,1])
• α = ¬P1,2 (no pit in [1,2]?)
• Does KB entail α (KB ╞ α)?
KB ∧ ¬α is false in all worlds, so yes: KB ╞ α. True!
• Exercise 2
• KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) (breeze in [1,1] iff there is a pit in [1,2] or [2,1])
• ¬B1,1 (there is no breeze in [1,1])
• α = P1,2 (pit in [1,2]?)
• Does KB entail α (KB ╞ α)?
• Completeness of the Resolution Method
• You should be able to prove the completeness of the resolution method (at least informally).
• Forward and backward Chaining
• Horn Clauses
• Resolution can be exponential in space and time.
• If we can reduce all clauses to “ Horn clauses ” resolution is linear in space and time.
• A Horn clause has at most 1 positive literal.
• e.g. A ∨ ¬B ∨ ¬C
• P1 ∧ P2 ∧ P3 ∧ … ∧ Pn ⇒ Q
• ¬a ∨ b ∨ c ∨ ¬d is not a Horn clause (it has two positive literals).
• Every Horn clause can be rewritten as an implication with a conjunction of positive literals as the premises and a single positive literal as the conclusion, e.g. B ∧ C ⇒ A.
• Can be used with forward chaining or backward chaining algorithms .
• These algorithms are very natural and run in linear time!
• Forward Chaining Example
• Idea: fire any rule whose premises are satisfied in the KB ,
• add its conclusion to the KB , until query is found
Rules and facts (AND/OR graph):
• I feel sleepy ⇒ I am happy
• I am at home ∧ Heating on ⇒ I feel sleepy
• I am at home ∧ It's snowing ⇒ Heating on
• Today is holiday ∧ I feel sleepy ⇒ I am at home
• Today is holiday ∧ It's snowing ⇒ I am at home
• Today is holiday
• It's snowing
Query: I am happy. ⇒ Forward chaining is sound and complete for Horn KBs.
• (Animation frames omitted: each rule keeps a count of premises not yet known true; the counts decrease as facts are added, a rule fires when its count reaches 0, and the process continues until the query "I am happy" is derived.)
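The count-based procedure illustrated by the forward-chaining example can be sketched as follows. The encoding of rules as (premises, conclusion) pairs and the proposition strings are assumptions of mine; the KB is the one from the example above:

```python
def fc_entails(rules, facts, query):
    """Forward chaining for Horn KBs: each rule keeps a count of
    premises not yet known true; when a fact appears in a premise the
    count is decremented, and when it reaches 0 the rule fires and its
    conclusion joins the agenda. Runs in time linear in the KB size.

    rules: list of (premises, conclusion) pairs; facts: known symbols.
    """
    count = [len(premises) for premises, _ in rules]
    inferred = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(conclusion)
    return False

# The example KB from the slides
rules = [
    (['I feel sleepy'], 'I am happy'),
    (['I am at home', 'Heating on'], 'I feel sleepy'),
    (['I am at home', "It's snowing"], 'Heating on'),
    (['Today is holiday', 'I feel sleepy'], 'I am at home'),
    (['Today is holiday', "It's snowing"], 'I am at home'),
]
facts = ['Today is holiday', "It's snowing"]
print(fc_entails(rules, facts, 'I am happy'))  # True
```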
• Suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the knowledge base contains the following:
• If (X croaks and eats flies) - Then (X is a frog)
• If (X chirps and sings) - Then (X is a canary)
• If (X is a frog) - Then (X is green)
• If (X is a canary) - Then (X is yellow)
This knowledge base would be searched and the first rule would be selected, because its antecedent (If Fritz croaks and eats flies) matches our given data. The consequent (Fritz is a frog) is added to the data. The rule base is searched again, and this time the third rule is selected, because its antecedent (If Fritz is a frog) matches the data that was just confirmed. The new consequent (Fritz is green) is added to our data. Nothing more can be inferred from this information, but we have now accomplished the goal of determining the color of Fritz.
• Backward Chaining
• p1 ∧ p2 ∧ … ∧ pn ⇒ q
• Idea: work backwards from the query q
• check if q is known already, or
• prove by BC all premises of some rule concluding q
• Hence BC maintains a stack of sub-goals that need to be proved to get to q.
• Avoid loops: check if new sub-goal is already on the goal stack
• Avoid repeated work: check if the new sub-goal has already been proved true, or has already failed
• Backward chaining is the basis for “logic programming,”
• e.g., Prolog
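A recursive sketch of backward chaining with a goal stack for loop avoidance. The rule encoding reuses the forward-chaining example KB and is an assumption of mine:

```python
def bc_entails(rules, facts, goal, stack=frozenset()):
    """Backward chaining for Horn KBs: prove `goal` either because it
    is a known fact or because some rule concludes it and all of that
    rule's premises can be proved recursively. The `stack` of goals
    currently being pursued is checked to avoid infinite loops."""
    if goal in facts:
        return True
    if goal in stack:                  # loop: goal is already a pending sub-goal
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
                bc_entails(rules, facts, p, stack | {goal}) for p in premises):
            return True
    return False

# Same example KB as used for forward chaining
rules = [
    (['I feel sleepy'], 'I am happy'),
    (['I am at home', 'Heating on'], 'I feel sleepy'),
    (['I am at home', "It's snowing"], 'Heating on'),
    (['Today is holiday', 'I feel sleepy'], 'I am at home'),
    (['Today is holiday', "It's snowing"], 'I am at home'),
]
facts = {'Today is holiday', "It's snowing"}
print(bc_entails(rules, facts, 'I am happy'))  # True
```

Note that the rule "Today is holiday ∧ I feel sleepy ⇒ I am at home" creates exactly the kind of loop the goal stack guards against: proving "I feel sleepy" requires "I am at home", which via that rule would require "I feel sleepy" again.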
• Backward chaining example
• (Animation frames omitted. Note the loop in the example: we need P to prove L and L to prove P, which is why new sub-goals are checked against the goal stack.)
• One More Backward Chaining Example
• Pig(y) ∧ Slug(z) ⇒ Faster(y, z)
• Slimy(a) ∧ Creeps(a) ⇒ Slug(a)
• Pig(Pat)
• Slimy(Steve)
• Creeps(Steve)
Example in First-Order Logic
• Forward vs. backward chaining
• FC is data-driven (bottom-up reasoning): automatic, unconscious processing,
• e.g., object recognition, routine decisions
• May do lots of work that is irrelevant to the goal
• BC is goal-driven (top-down reasoning), appropriate for problem-solving,
• e.g., Where are my keys?
• Complexity of BC can be much less than linear in size of KB
Expressiveness limitation of propositional logic
• The KB contains "physics" sentences for every single square.
• For every time t and every location [x, y]:
• Lx,yᵗ ∧ FacingRightᵗ ∧ Forwardᵗ ⇒ Lx+1,yᵗ⁺¹
• (Lx,yᵗ means the agent is at position (x, y) at time t.)
• Rapid proliferation of clauses.
• ⇒ First-order logic is designed to deal with this through the introduction of variables.
• Summary
• Logical agents apply inference to a knowledge base to derive new information and make decisions
• Basic concepts of logic:
• syntax : formal structure of sentences
• semantics : truth of sentences wrt models
• entailment : necessary truth of one sentence given another
• inference : deriving sentences from other sentences
• soundness : derivations produce only entailed sentences
• completeness : derivations can produce all entailed sentences
• Wumpus world requires the ability to represent partial and negated information, reason by cases, etc.
• Resolution is complete for propositional logic. Forward and backward chaining are linear-time and complete for Horn clauses.
• Propositional logic lacks expressive power