The document discusses predicate logic and how it can be used to represent facts and relationships between entities. Predicate logic builds upon propositional logic by adding predicates and quantifiers. Key points include:
- Predicate logic uses predicates and quantifiers like "all" and "some" to make statements about relationships between entities.
- Common predicates include things like "man(Marcus)" to represent that Marcus is a man. Quantifiers like "∀x" are used to represent statements about all or some entities.
- Examples are given to show how facts about a person named Marcus can be represented using predicates and quantifiers in predicate logic.
- Additional concepts such as computable functions and predicates, resolution, and conversion to clause form are also covered.
Knowledge Representation and Predicate Logic, by Amey Kerkar
This presentation is specifically designed for in-depth coverage of predicate logic and the resolution inference mechanism.
Feel free to write to me at: amecop47@gmail.com
2. Logic
• A scientific study of the process of reasoning
• A system of rules and procedures that help in the reasoning process
• Logic can be used to represent simple facts
• The facts are claims about the world that are True or False
• Logic defines ways of putting symbols together so that users can define legal sentences in the language that represent True facts.
Types of Logic
1. Propositional Logic
2. Predicate Logic
11. Quantifiers
Universal quantification
• (∀x) P(x) means that P holds for all values of x in the domain associated with that variable
• E.g., (∀x) dolphin(x) → mammal(x)
Existential quantification
• (∃x) P(x) means that P holds for some value of x in the domain associated with that variable
• E.g., (∃x) mammal(x) ∧ lays-eggs(x)
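Over a finite domain, both kinds of quantified statement can be checked directly by exhaustive enumeration. A minimal Python sketch (the domain and predicate definitions are illustrative assumptions, not from the slides):

```python
# A small finite domain of animals (illustrative).
domain = ["dolphin", "platypus", "cat"]

def dolphin(x):
    return x == "dolphin"

def mammal(x):
    return x in {"dolphin", "platypus", "cat"}

def lays_eggs(x):
    return x == "platypus"

# (∀x) dolphin(x) → mammal(x): the implication p → q is written (not p) or q.
universal = all((not dolphin(x)) or mammal(x) for x in domain)

# (∃x) mammal(x) ∧ lays-eggs(x)
existential = any(mammal(x) and lays_eggs(x) for x in domain)

print(universal, existential)  # True True
```

Over an infinite domain this enumeration is impossible, which is why predicate logic relies on inference rules rather than brute-force checking.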
12. Consider the following example that shows the use of predicate logic as a way of
representing knowledge.
1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers they are not loyal to.
8. Marcus tried to assassinate Caesar.
13. The facts described by these sentences can be represented as a set of well-formed formulas (wffs) as follows:
1. Marcus was a man
man(Marcus)
2. Marcus was a Pompeian.
Pompeian(Marcus)
3. All Pompeians were Romans.
∀x: Pompeian(x) → Roman(x)
4. Caesar was a ruler.
ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
• Inclusive-or:
∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
• Exclusive-or:
∀x: Roman(x) → [(loyalto(x, Caesar) ∧ ¬hate(x, Caesar)) ∨ (¬loyalto(x, Caesar) ∧ hate(x, Caesar))]
14. 6. Everyone is loyal to someone.
∀x: ∃y: loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to.
∀x: ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
8. Marcus tried to assassinate Caesar.
tryassassinate(Marcus, Caesar)
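The ground facts among these wffs can be mirrored directly in code. A toy sketch (the tuple encoding is an assumption for illustration) that also applies rule 3, ∀x: Pompeian(x) → Roman(x), by forward chaining:

```python
# Ground facts from wffs 1, 2, 4, and 8, encoded as (predicate, args...) tuples.
facts = {
    ("man", "Marcus"),
    ("Pompeian", "Marcus"),
    ("ruler", "Caesar"),
    ("tryassassinate", "Marcus", "Caesar"),
}

def apply_roman_rule(kb):
    """Forward-chain rule 3: every Pompeian is also a Roman."""
    derived = {("Roman", f[1]) for f in kb if f[0] == "Pompeian"}
    return kb | derived

kb = apply_roman_rule(facts)
print(("Roman", "Marcus") in kb)  # True
```

A real system would store the rule itself and let an inference engine apply it; here the rule is hard-coded to keep the sketch short.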
15. • Now suppose we want to use these statements to answer the question: Was Marcus loyal to Caesar?
• Let's try to produce a formal proof, reasoning backward from the desired goal:
¬loyalto(Marcus, Caesar)
• In order to prove the goal, we need to use the rules of inference to transform it into another goal (or possibly a set of goals) that can, in turn, be transformed, and so on, until there are no unsatisfied goals remaining.
Figure: An attempt to prove ¬loyalto(Marcus, Caesar)
16. • The problem is that, although we know that Marcus was a man, we do not have any way to conclude from that that Marcus was a person. We need to add the representation of another fact to our system, namely:
∀x: man(x) → person(x)
• Now we can satisfy the last goal and produce a proof that Marcus was not loyal to Caesar.
• From this simple example, we see that three important issues must be addressed in the process of converting English sentences into logical statements and then using those statements to deduce new ones:
1. Many English sentences are ambiguous. Choosing the correct interpretation may be difficult.
2. There is often a choice of how to represent the knowledge. Simple representations are desirable, but they may exclude certain kinds of reasoning.
3. Even in very simple situations, a set of sentences is unlikely to contain all the information necessary to reason about the topic at hand. In order to use a set of statements effectively, it is usually necessary to have access to another set of statements that represent facts that people consider too obvious to mention.
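The backward reasoning just described can be sketched as ground (already-instantiated) backward chaining. A real prover would unify variables instead of pre-substituting Marcus and Caesar, so treat this only as an illustration:

```python
# Facts and ground rule instances for the Marcus example.
facts = {"man(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus, Caesar)"}

# Each goal maps to the list of subgoals that establish it.
rules = {
    # Rule 7, instantiated: person ∧ ruler ∧ tryassassinate → ¬loyalto
    "~loyalto(Marcus, Caesar)": [
        "person(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus, Caesar)"],
    # The added axiom ∀x: man(x) → person(x), instantiated
    "person(Marcus)": ["man(Marcus)"],
}

def prove(goal):
    """Succeed if the goal is a known fact or all subgoals of some rule succeed."""
    if goal in facts:
        return True
    return goal in rules and all(prove(sub) for sub in rules[goal])

print(prove("~loyalto(Marcus, Caesar)"))  # True
```

Removing the person rule makes the proof fail at the person(Marcus) subgoal, which is exactly the gap the added axiom closes.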
17. Representing Instance and ISA Relationships
• Specific attributes instance and isa play an important role particularly in a useful form of reasoning
called property inheritance.
• The predicates instance and isa explicitly capture the relationships they are used to express, namely class membership and class inclusion.
• Figure shows the first five sentences of the last section represented in logic in three different ways.
• The first part of the figure contains the representations we have already discussed. In these representations, class membership is represented with unary predicates (such as Roman), each of which corresponds to a class.
• Asserting that P(x) is true is equivalent to asserting that x is an instance (or element) of P.
• The second part of the figure contains representations that use the instance predicate explicitly.
18. • man(Marcus): the class man is represented using a unary predicate.
• instance(Marcus, man): the predicate is binary; the first argument is an instance (element) of the second argument, which is usually a class.
• isa(Pompeian, Roman): relates the subclass (Pompeian) to the superclass (Roman).
• Isa relationships simplify the representation of wffs and combine statements to express knowledge in a shorter form.
19. • The predicate instance is a binary one, whose first argument is an object and whose second argument is a
class to which the object belongs.
• But these representations do not use an explicit isa predicate.
• Instead, subclass relationships, such as that between Pompeians and Romans, are described as shown in sentence 3.
• The implication rule states that if an object is an instance of the subclass Pompeian then it is an instance of
the superclass Roman.
• Note that this rule is equivalent to the standard set-theoretic definition of the subclass superclass
relationship.
• The third part contains representations that use both the instance and isa predicates explicitly.
• The use of the isa predicate simplifies the representation of sentence 3, but it requires that one additional
axiom (shown here as number 6) be provided.
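The instance and isa predicates, and the class-inclusion reasoning they support, can be sketched as follows (a toy Python illustration; the set-of-pairs encoding is an assumption):

```python
# instance(obj, class) and isa(subclass, superclass) as sets of pairs.
instance = {("Marcus", "man"), ("Marcus", "Pompeian")}
isa = {("Pompeian", "Roman"), ("man", "person")}

def subclass_of(sub, sup):
    """True if sub equals sup or reaches it by following isa links."""
    if sub == sup:
        return True
    return any(s == sub and subclass_of(p, sup) for (s, p) in isa)

def member_of(obj, cls):
    """Property-inheritance check: obj belongs to cls or to one of its subclasses."""
    return any(subclass_of(c, cls) for (o, c) in instance if o == obj)

print(member_of("Marcus", "Roman"))   # True, via isa(Pompeian, Roman)
print(member_of("Marcus", "person"))  # True, via isa(man, person)
```

This mirrors the implication rule on the slide: membership in a subclass entails membership in its superclasses.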
20. Computable Functions and Predicates
• To express simple facts, such as the following greater-than and less-than relationships:
• gt(1, 0)   lt(0, 1)
• gt(2, 1)   lt(1, 2)
• gt(3, 2)   lt(2, 3)
• It is often also useful to have computable functions as well as computable predicates. Thus we might want to
be able to evaluate the truth of gt(2 + 3,1)
• To do so requires that we first compute the value of the plus function given the arguments 2 and 3, and then
send the arguments 5 and 1 to gt.
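A computable predicate evaluates its arguments directly instead of being looked up among stored facts; a minimal sketch of the gt(2 + 3, 1) evaluation:

```python
# gt and lt are computable predicates; plus is a computable function.
def gt(a, b):
    return a > b

def lt(a, b):
    return a < b

def plus(a, b):
    return a + b

# Evaluating gt(2 + 3, 1): first compute plus(2, 3) = 5, then test gt(5, 1).
print(gt(plus(2, 3), 1))  # True
```

The alternative, storing every gt and lt fact explicitly, is impossible for an infinite set of numbers, which is the motivation for computable predicates.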
21. Consider the following set of facts, again involving Marcus:
1) Marcus was a man.
man(Marcus)
2) Marcus was a Pompeian.
Pompeian(Marcus)
3) Marcus was born in 40 A.D.
born(Marcus, 40)
4) All men are mortal.
∀x: man(x) → mortal(x)
5) All Pompeians died when the volcano erupted in 79 A.D.
erupted(volcano, 79) ∧ ∀ x : [Pompeian(x) → died(x, 79)]
6) No mortal lives longer than 150 years.
∀x: ∀t1: ∀t2: mortal(x) ∧ born(x, t1) ∧ gt(t2 − t1, 150) → dead(x, t2)
7) It is now 1991.
now = 1991
22. • The example shows how these ideas of computable functions and predicates can be useful. It also makes
use of the notion of equality and allows equal objects to be substituted for each other whenever it appears
helpful to do so during a proof.
• Now suppose we want to answer the question “Is Marcus alive?”
• From the statements suggested here, there may be two ways of deducing an answer.
• Either we can show that Marcus is dead because he was killed by the volcano, or we can show that he must be dead because he would otherwise be more than 150 years old, which we know is not possible.
• As soon as we attempt to follow either of those paths rigorously, however, we discover, just as we did in the last example, that we need some additional knowledge. For example, our statements talk about dying, but they say nothing that relates to being alive, which is what the question is asking.
23. So we add the following facts:
8) Alive means not dead.
∀x: ∀t: [alive(x, t) → ¬dead(x, t)] ∧ [¬dead(x, t) → alive(x, t)]
9) If someone dies, then he is dead at all later times.
∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → dead(x, t2)
24. Now let's attempt to answer the question “Is Marcus alive?” by proving: ¬alive(Marcus, now)
25. • The term nil at the end of each proof indicates that the list of conditions remaining to be
proved is empty and so the proof has succeeded.
• Notice in those proofs that whenever a statement of the form:
a ∧ b → c
was used, a and b were set up as independent subgoals.
• In one sense they are, but in another sense they are not if they share the same bound variables, since in that case consistent substitutions must be made in each of them.
• For example, we can satisfy the goal born(Marcus, t1) using statement 3 by binding t1 to 40, but then we must also bind t1 to 40 in gt(now − t1, 150).
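The shared-binding point can be made concrete with a tiny numeric check, using the facts born(Marcus, 40) and now = 1991 from the slides:

```python
now = 1991   # statement 7
t1 = 40      # binding from matching born(Marcus, t1) against born(Marcus, 40)

def gt(a, b):
    return a > b

# The same binding t1 = 40 must be reused in gt(now - t1, 150).
print(gt(now - t1, 150))  # True: 1951 > 150, so Marcus must be dead
```

Had the two subgoals been solved with inconsistent bindings for t1, the conclusion would not follow.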
26. From looking at the proofs we have just shown, two things should be clear:
• Even very simple conclusions can require many steps to prove.
• A variety of processes, such as matching, substitution, and application of modus ponens, are involved in the production of a proof.
27. RESOLUTION
• A proof procedure that carries out in a single operation the variety of processes involved in reasoning with statements in predicate logic.
• Resolution is such a procedure, which gains its efficiency from the fact that it operates on statements that have been converted to a very convenient standard form.
• Resolution produces proofs by refutation (refutation: the action of proving a statement or theory to be false).
• In other words, to prove a statement (i.e., show that it is valid), resolution attempts to show that the negation of the statement produces a contradiction with the known statements (i.e., that it is unsatisfiable).
28. Resolution uses the following two steps:
a) Conversion of axioms to canonical or clause form
b) Refutation: showing that the negation of the statement to be proved produces a contradiction with the known statements
• An atomic formula and its negation are called literals.
• A disjunction (OR) of literals is called a clause.
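As a concrete sketch (the encoding is illustrative, not from the source), literals and clauses can be represented directly in Python: a literal as a (sign, atom) pair and a clause as a frozenset of literals, so a clause is exactly a disjunction of literals:

```python
# A literal is an atom plus a sign; a clause is a disjunction (set) of literals.
# Illustrative encoding: ("+", "winter") is winter, ("-", "winter") is ¬winter.

def neg(literal):
    """Return the negation of a literal."""
    sign, atom = literal
    return ("-" if sign == "+" else "+", atom)

# The clause  winter V ¬cold  as a frozenset of literals:
clause = frozenset([("+", "winter"), ("-", "cold")])

print(neg(("+", "winter")))   # ('-', 'winter')
```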
30. Conversion to Clause form:
This can be achieved by using the Conjunctive normal form
“Suppose we know that all Romans who know Marcus either hate Caesar or think that anyone who hates anyone is crazy.”
Wff form:-
∀x: [Roman(x) ∧ know(x, Marcus)] → [hate(x, Caesar) V (∀y: ∃z: hate (y, z) → thinkcrazy(x, y))]
• To use this formula in a proof requires a complex matching process. Then, having matched one piece of it, such as
thinkcrazy(x, y), it is necessary to do the right thing with the rest of the formula including the pieces in which the
matched part is embedded and those in which it is not.
• If the formula were in a simpler form, this process would be much easier. The formula would be easier to work with if
-It was flatter, i.e., there was less embedding of components.
- The quantifiers were separated from the rest of the formula so that they did not need to be considered.
• The formula given above for the feelings of Romans who know Marcus would be represented in conjunctive normal form
as
¬ Roman(x) V ¬ know(x, Marcus) V
hate(x, Caesar) V ¬ hate(y, z) V thinkcrazy(x, y)
31. • We need to reduce a set of wff's to a set of clauses, where a clause is defined to be a wff in conjunctive normal form but with
no instances of the connective ∧.
• This is done by first converting each wff into conjunctive normal form and then breaking apart each such expression into clauses, one for
each conjunct.
• All the conjuncts will be considered to be conjoined together as the proof procedure operates.
32. Algorithm: Convert to Clause Form
1. Eliminate →, using the fact that a → b is equivalent to ¬ a V b. Performing this transformation on the wff given above
yields.
∀x: ¬ [Roman(x) ∧know(x, Marcus)] V[hate(x, Caesar) V (∀y :¬(∃z : hate(y, z)) V thinkcrazy(x, y))]
2. Reduce the scope of each ¬ to a single term, using the fact that ¬ (¬ p) = p, deMorgan's laws [which say that ¬ (a ∧b) = ¬
a V ¬ b and ¬ (a V b) = ¬ a ∧ ¬ b ], and the standard correspondences between quantifiers [¬ ∀x: P(x) = ∃x: ¬ P(x) and ¬∃x:
P(x) = ∀x: ¬P(x)]. Performing this transformation on the wff from step 1 yields
∀x: [¬ Roman(x) V ¬ know(x, Marcus)] V
[hate(x, Caesar) V (∀y: ∀z: ¬ hate(y, z) Vthinkcrazy(x, y))]
3. Standardize variables so that each quantifier binds a unique variable. Since variables are just dummy names, this
process cannot affect the truth value of the wff.
∀x: P(x) V∀x: Q(x)
Would be converted to
∀x:P(x) V ∀y: Q(y)
33. 4. Move all quantifiers to the left of the formula without changing their relative order. This is possible since there is no
conflict among variable names.
∀x: ∀y: ∀z: [¬ Roman(x) V ¬ know(x, Marcus)] V
[hate(x, Caesar) V (¬ hate(y, z) V thinkcrazy(x, y))]. The formula at this point is said to be in prenex normal form.
5. Eliminate existential quantifiers. (Skolemization) A formula that contains an existentially quantified variable asserts that
there is a value that can be substituted for the variable that makes the formula true. Eliminate the quantifier by substituting
for the variable a reference to a function that produces the desired value.
∃y: President(y)
can be transformed into the formula
President(S1)
where S1 is a function with no arguments that somehow produces a value that satisfies President. S1 is called a Skolem
constant. If the existential quantifier occurs within the scope of universal quantifiers, the new function must take those
universally quantified variables as arguments (a Skolem function).
6. Drop the prefix. At this point, all remaining variables are universally quantified, so the prefix can just be dropped and any
proof procedure we use can simply assume that any variable it sees is universally quantified.
[¬ Roman(x) V¬ know(x, Marcus)] V
[hate(x, Caesar) V (¬ hate(y, z) V thinkcrazy(x, y))]
34. 7. Convert the matrix into a conjunction of disjuncts. In the case of our example, since there are no and’s, it is only necessary
to exploit the associative property of or [i.e., (a V b) V c = a V (b V c)] and simply remove the parentheses, giving
¬ Roman(x) V ¬ know(x, Marcus) V
hate(x, Caesar) V¬ hate(y, z) V thinkcrazy(x, y)
8. Create a separate clause corresponding to each conjunct. In order for a wff to be true, all the clauses that are generated
from it must be true. If we are going to be working with several wff’s, all the clauses generated by each of them can now be
combined to represent the same set of facts as were represented by the original wff’s.
9. Standardize apart the variables in the set of clauses generated in step 8. By this we mean rename the variables so that no
two clauses make reference to the same variable. In making this transformation, we rely on the fact that
(∀x: P(x) ∧ Q(x)) = ∀x: P(x) ∧∀x: Q(x)
After applying the entire procedure to a set of wff’s, we have a set of clauses, each of which is a disjunction of literals.
Since each clause is a separate conjunct and since all the variables are universally quantified, there need be no
relationship between the variables of two clauses, even if they were generated from the same wff.
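The propositional core of this procedure (steps 1, 2, and 7; the quantifier steps are omitted) can be sketched in Python. The nested-tuple encoding and function names are illustrative assumptions, not from the source:

```python
# Minimal propositional sketch of steps 1, 2, and 7 of the clause-form algorithm.
# Formulas are nested tuples: ("->", p, q), ("not", p), ("and", p, q), ("or", p, q).

def elim_implies(f):
    """Step 1: a -> b  ==  ¬a V b."""
    if isinstance(f, str):
        return f
    op, *args = f
    args = [elim_implies(a) for a in args]
    if op == "->":
        return ("or", ("not", args[0]), args[1])
    return (op, *args)

def push_not(f):
    """Step 2: reduce the scope of each ¬ to a single term (De Morgan, ¬¬p = p)."""
    if isinstance(f, str):
        return f
    op, *args = f
    if op == "not":
        g = args[0]
        if isinstance(g, str):
            return f
        gop, *gargs = g
        if gop == "not":
            return push_not(gargs[0])
        if gop == "and":
            return ("or", push_not(("not", gargs[0])), push_not(("not", gargs[1])))
        if gop == "or":
            return ("and", push_not(("not", gargs[0])), push_not(("not", gargs[1])))
    return (op, *[push_not(a) for a in args])

def distribute(f):
    """Step 7: distribute V over ∧ to get a conjunction of disjuncts."""
    if isinstance(f, str) or f[0] == "not":
        return f
    op, a, b = f
    a, b = distribute(a), distribute(b)
    if op == "or":
        if not isinstance(a, str) and a[0] == "and":
            return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return (op, a, b)

def to_cnf(f):
    return distribute(push_not(elim_implies(f)))

print(to_cnf(("->", "p", "q")))   # ('or', ('not', 'p'), 'q')
```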
35. The Basis of Resolution
• The resolution procedure is a simple iterative process: at each step, two clauses, called the parent clauses, are compared
(resolved), yielding a new clause that has been inferred from them.
• The new clause represents ways that the two parent clauses interact with each other.
• Suppose that there are two clauses in the system:
winter V summer
¬ winter V cold
Observe that precisely one of winter and ¬ winter will be true at any point. If winter is true, then cold must be true to
guarantee the truth of the second clause. If ¬ winter is true, then summer must be true to guarantee the truth of the first
clause. From these two clauses we can deduce
summer V cold
• This is the deduction that the resolution procedure will make.
• Resolution operates by taking two clauses that each contain the same literal (in this example, winter).
• The literal must occur in positive form in one clause and in negative form in the other.
• The resolvent is obtained by combining all of the literals of the two parent clauses except the ones that cancel.
• If the clause that is produced is the empty clause, then a contradiction has been found.
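A minimal sketch of this deduction step in Python, with literals encoded as (sign, atom) pairs and clauses as frozensets (an illustrative encoding, not from the source):

```python
# Propositional resolvent sketch: clauses are frozensets of literals,
# where a literal is ("+", atom) or ("-", atom).

def resolve(c1, c2):
    """Yield every resolvent of the two parent clauses."""
    for sign, atom in c1:
        complement = ("-" if sign == "+" else "+", atom)
        if complement in c2:
            # Combine all literals of both parents except the cancelling pair.
            yield (c1 - {(sign, atom)}) | (c2 - {complement})

c1 = frozenset([("+", "winter"), ("+", "summer")])   # winter V summer
c2 = frozenset([("-", "winter"), ("+", "cold")])     # ¬winter V cold
print(list(resolve(c1, c2)))   # one resolvent: summer V cold
```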
36. • The two clauses
winter
¬ winter
will produce the empty clause. If a contradiction exists, then eventually it will be found. Of course, if no contradiction exists,
it is possible that the procedure will never terminate, although as we will see, there are often ways of detecting that no
contradiction exists.
• In predicate logic, all possible ways of substituting values for the variables are considered.
• The theoretical basis of the resolution procedure in predicate logic is Herbrand’s theorem
-To show that a set of clauses S is unsatisfiable, it is necessary to consider only interpretations over a particular set, called the
Herbrand universe of S.
-A set of clauses S is unsatisfiable if and only if a finite subset of ground instances(in which all bound variables have had a
value substituted for them) of S is unsatisfiable.
37. Resolution in Propositional Logic
• In propositional logic, the procedure for producing a proof by resolution of proposition P with respect to a set of
axioms F is the following.
Algorithm: Propositional Resolution
1. Convert all the propositions of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in step 1.
3. Repeat until either a contradiction is found or no progress can be made:
a) Select two clauses. Call these the parent clauses.
b) Resolve them together. The resulting clause, called the resolvent, will be the disjunction of all of the literals of both
of the parent clauses with the following exception: If there are any pairs of literals L and ¬ L such that one of the parent
clauses contains L and the other contains ¬ L, then select one such pair and eliminate both L and ¬ L from the resolvent.
c) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses
available to the procedure.
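A runnable sketch of this algorithm for propositional logic (the clause set used in the demonstration is assumed from the worked example that follows: P, ¬P V ¬Q V R, ¬T V Q, T, with goal R):

```python
from itertools import combinations

# Propositional resolution by refutation, following the algorithm above.
# Literals are ("+", atom) / ("-", atom); clauses are frozensets of literals.

def resolvents(c1, c2):
    out = []
    for sign, atom in c1:
        comp = ("-" if sign == "+" else "+", atom)
        if comp in c2:
            out.append((c1 - {(sign, atom)}) | (c2 - {comp}))
    return out

def refutes(axioms, goal_literal):
    """Return True if negating the goal leads to the empty clause."""
    sign, atom = goal_literal
    clauses = set(axioms)
    clauses.add(frozenset([("-" if sign == "+" else "+", atom)]))  # step 2: negate P
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):        # step 3a: pick parents
            for r in resolvents(c1, c2):               # step 3b: resolve
                if not r:
                    return True                        # empty clause: contradiction
                new.add(r)
        if new <= clauses:
            return False                               # no progress can be made
        clauses |= new

# Clause set assumed from the worked example; prove R.
axioms = [
    frozenset([("+", "P")]),
    frozenset([("-", "P"), ("-", "Q"), ("+", "R")]),
    frozenset([("-", "T"), ("+", "Q")]),
    frozenset([("+", "T")]),
]
print(refutes(axioms, ("+", "R")))   # True
```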
38. Simple example: suppose we want to prove R.
We negate R, producing ¬ R, which is already in clause form.
One way of viewing the resolution process is that it takes a set of
clauses that are all assumed to be true and, based on information
provided by the others, it generates new clauses that represent
restrictions on the way each of those original clauses can be made
true. A contradiction occurs when a clause becomes so restricted that
there is no way it can be true. This is indicated by the generation of
the empty clause
40. • In order for proposition 2 to be true, one of three things must be true: ¬ P, ¬ Q, or R. But we are assuming that ¬ R
is true.
• Given that, the only way for proposition 2 to be true is for one of two things to be true: ¬ P or ¬ Q.
• That is what the first resolvent clause says. But proposition 1 says that P is true, which means that ¬ P cannot be
true, which leaves only one way for proposition 2 to be true, namely for ¬ Q to be true (as shown in the second
resolvent clause). Proposition 4 can be true if either ¬ T or Q is true. But since we now know that ¬ Q must be true,
the only way for proposition 4 to be true is for ¬ T to be true (the third resolvent).
• But proposition 5 says that T is true. Thus there is no way for all of these clauses to be true in a single
interpretation. This is indicated by the empty clause.
41. The Unification Algorithm
• In propositional logic, it is easy to determine that two literals cannot both be true at the same time: simply look for L and ¬ L.
In predicate logic, this matching process is more complicated, since the arguments of the predicates must be considered.
• Example: man(John) and ¬ man(John) is a contradiction, while man(John) and ¬ man(Spot) is not.
• In order to determine contradictions, we need a matching procedure that compares two literals and discovers whether there exists a
set of substitutions that makes them identical.
• The substitution of variables to make resolution possible is called unification and proof in predicate logic can be based on this joint
method of unification and Resolution.
• Unification is a process of making two different logical atomic expressions identical by finding a substitution.
• The basic idea of unification is very simple. To attempt to unify two literals, first check whether their initial predicate symbols are the same.
If so, we can proceed; otherwise there is no way they can be unified, regardless of their arguments. For example,
tryassassinate (Marcus, Caesar)
hate(Marcus, Caesar)
cannot be unified.
42. • If the predicate symbols match, then we must check the arguments, one pair at a time. If the first pair matches, we can
continue with the second, and so on. To test each argument pair, we can simply call the unification procedure
recursively. The matching rules are simple. Different constants or predicates cannot match; identical ones can. A
variable can match another variable, any constant, or a predicate expression, with the restriction that the predicate
expression must not contain any instances of the variable being matched.
43. Example 1: To unify the expressions
P(x, x)
P(y, z)
The two instances of P match fine. Next we compare x and y, and decide that if we substitute y for x, they could
match. We will write that substitution as
y/x
We cannot also substitute z for x (y/x, z/x is not a consistent substitution), so after finding the first substitution we
apply it throughout the literals:
P(y, y)
P(y, z)
Now we can succeed with a substitution of z/y.
Example 2: hate(x, y)
hate(Marcus, z)
Could be unified with any of the following substitutions:
(Marcus/x, z/y)
(Marcus/x, y/z)
(Marcus/x, Caesar/y, Caesar/z)
44. Algorithm: Unify (L1, L2)
1. If L1 or L2 are both variables or constants, then:
(a) If L1 and L2 are identical, then return NIL.
(b) Else if L1 is a variable, then if L1 occurs in L2 then return {FAIL}, else return (L2/L1).
(c) Else if L2 is a variable, then if L2 occurs in L1 then return {FAIL}, else return (L1/L2).
(d) Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)
5. For i ← 1 to number of arguments in L1 :
(a) Call Unify with the ith argument of L1 and the ith argument of L2, putting result in S.
(b) If S contains FAIL then return {FAIL}.
(c) If S is not equal to NIL then:
(i) Apply S to the remainder of both L1 and L2.
(ii) SUBST: = APPEND(S, SUBST).
6. Return SUBST.
L2/L1: Substitute L2 for L1
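A Python sketch of this unification procedure. Variables are strings starting with "?", predicates are tuples, and failure is signalled by None rather than {FAIL}; these conventions are illustrative, not from the source:

```python
# Sketch of Unify(L1, L2): variables are strings starting with "?",
# constants are plain strings, predicates are tuples (name, arg1, ...).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def occurs(v, t):
    """Occur check: does variable v appear anywhere in term t?"""
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a) for a in t[1:])

def substitute(subst, t):
    """Apply the substitution found so far to a term."""
    if is_var(t):
        return substitute(subst, subst[t]) if t in subst else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(subst, a) for a in t[1:])
    return t

def unify(l1, l2, subst=None):
    """Return a substitution dict {var: term}, or None on failure."""
    subst = dict(subst or {})
    l1, l2 = substitute(subst, l1), substitute(subst, l2)
    if l1 == l2:
        return subst
    if is_var(l1):
        if occurs(l1, l2):
            return None                      # occur check fails
        subst[l1] = l2
        return subst
    if is_var(l2):
        return unify(l2, l1, subst)
    if not (isinstance(l1, tuple) and isinstance(l2, tuple)):
        return None                          # two different constants
    if l1[0] != l2[0] or len(l1) != len(l2):
        return None                          # predicate symbol / arity mismatch
    for a1, a2 in zip(l1[1:], l2[1:]):       # unify arguments one pair at a time
        subst = unify(a1, a2, subst)
        if subst is None:
            return None
    return subst

# hate(x, y) unified with hate(Marcus, z):
print(unify(("hate", "?x", "?y"), ("hate", "Marcus", "?z")))
# {'?x': 'Marcus', '?y': '?z'}  -- one most-general unifier
```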
45. Resolution in Predicate Logic
• Two literals are contradictory if one of them can be unified with the negation of the other.
So, for example,
man(x) and ¬ man(Spot) are contradictory, since man(x) and man(Spot) can be unified.
• To use resolution for expressions in predicate logic, we use the unification algorithm to locate pairs of literals that
cancel out, and the unifier produced by the unification algorithm to generate the resolvent clause. For
example, suppose we want to resolve two clauses:
1. man(Marcus)
2. ¬ man(x1) V mortal(x1)
• The literal man(Marcus) can be unified with the literal man(x1) with the substitution Marcus/x1, telling us that for
x1 = Marcus, ¬ man(Marcus) is false.
But we cannot simply cancel out the two man literals as we did in propositional logic and generate the
resolvent mortal(x1). Clause 2 says that for a given x1, either ¬ man(x1) or mortal(x1) is true. So we can now
conclude only that mortal(Marcus) must be true.
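This resolution step can be sketched for flat literals (variables marked with "?"; a simplified argument-by-argument match stands in for the full recursive unifier, so nested terms are not handled — an illustrative sketch, not a complete implementation):

```python
# Resolve man(Marcus) against ¬man(?x1) V mortal(?x1): unify the complementary
# pair, then apply the unifier to the remaining literals (flat-term sketch).
# A literal is (sign, predicate, *args).

def unify_args(args1, args2):
    subst = {}
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if a.startswith("?"):
            subst[a] = b
        elif b.startswith("?"):
            subst[b] = a
        else:
            return None                   # two different constants
    return subst

def resolve_on(lit, clause1, clause2):
    """Resolve clause1 (which contains lit) against a complementary literal in clause2."""
    sign, name, *args = lit
    for sign2, name2, *args2 in clause2:
        if name2 == name and sign2 != sign:
            subst = unify_args(args, args2)
            if subst is not None:
                rest = [l for l in clause1 if l != lit]
                rest += [l for l in clause2 if l != (sign2, name2, *args2)]
                # Apply the unifier to the surviving literals.
                return [(s, n) + tuple(subst.get(a, a) for a in a_s)
                        for s, n, *a_s in rest]
    return None

c1 = [("+", "man", "Marcus")]
c2 = [("-", "man", "?x1"), ("+", "mortal", "?x1")]
print(resolve_on(("+", "man", "Marcus"), c1, c2))   # [('+', 'mortal', 'Marcus')]
```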
46. Algorithm: Resolution
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in 1.
3. Repeat until either a contradiction is found, no progress can be made, or a predetermined amount of effort has been
expended.
a) Select two clauses. Call these the parent clauses.
b) Resolve them together. The resolvent will be the disjunction of all the literals of both parent clauses with appropriate
substitutions performed and with the following exception: If there is one pair of literals T1 and ¬ T2 such that one of the
parent clauses contains T1 and the other contains ¬ T2, and if T1 and T2 are unifiable, then neither T1 nor ¬ T2 should appear
in the resolvent. We call T1 and T2 complementary literals. Use the substitution produced by the unification to create the
resolvent. If there is more than one pair of complementary literals, only one pair should be omitted from the resolvent.
c) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses
available to the procedure
If the choice of clauses to resolve together at each step is made in certain systematic ways, then the resolution
procedure will find a contradiction if one exists.
47. There exist strategies for making the choice that can speed up the process considerably:
• Only resolve pairs of clauses that contain complementary literals, since only such resolutions produce new clauses that
are harder to satisfy than their parents.
• Eliminate certain clauses as soon as they are generated so that they cannot participate in later resolutions. Two kinds of
clauses should be eliminated: tautologies (which can never be unsatisfied) and clauses that are subsumed by other
clauses.
• Whenever possible, resolve either with one of the clauses that is part of the statement we are trying to refute or with a
clause generated by a resolution with such a clause. (This is the set-of-support strategy.)
• Whenever possible, resolve with clauses that have a single literal. Such resolutions generate new clauses with fewer
literals than the larger of their parent clauses and thus are probably closer to the goal of a resolvent with zero terms.
This method is called the unit-preference strategy.
50. Suppose our knowledge base contained the two additional statements
9) persecute(x, y) → hate(y, x)
10) hate(x, y) → persecute(y, x)
Converting to clause form, we get
9) ¬persecute(x5, y2) V hate(y2, x5)
10) ¬hate(x6, y3)V persecute (y3, x6)
51. Variable y occurs in both clause 1 and clause 2.
The substitution at the second resolution step
produces a clause that is too restricted and so does
not lead to the contradiction that is present in the
database.
If the clause
¬ Father(Chris, y) had been produced, the
contradiction with clause 4 would have emerged.
This would have happened if clause 2 had been
rewritten as
¬ Mother(a, b) V woman(a)
52. The need to try several substitutions
Resolution provides a good way of finding a refutation
proof without actually trying all the substitutions that
Herbrand’s theorem suggests.
But it does not always eliminate the need to try
more than one substitution.
Eg: To prove that Marcus hates some ruler, we may have to try
each substitution in turn until one produces a contradiction.
55. What happened in 79 AD?
∃x: event(x, 79)
Erupted(volcano)
can be represented as
Event(erupted(volcano), 79)
56. Natural Deduction
• It is a way of doing machine theorem proving that corresponds more closely to the
processes used in human theorem proving.
• Resolution is an easily implementable proof procedure that relies for its simplicity on
a uniform representation of the statements it uses.
• In converting everything to clause form, however, we often lose valuable heuristic information
that is contained in the original representation of the facts.
57. For example, suppose we believe that all judges who are not crooked are well-educated, which can be represented as
∀x: judge(x) ∧ ¬ crooked(x) → educated(x)
If converted to clause form, this becomes
¬ judge(x) V crooked(x) V educated(x)
The clause can still be used to conclude that a non-crooked judge is educated, but it appears also to be a way of deducing
that someone is not a judge by showing that he is not crooked and not educated. The heuristic information contained in the
original statement — that the rule was meant to be used in one direction — has been lost in the transformation.
59. The declarative representation is one in which the knowledge is specified, but how that knowledge is to
be used is not given.
Declarative knowledge answers the question 'What do you know?'
It is your understanding of things, ideas, or concepts.
In other words, declarative knowledge can be thought of as the who, what, when, and where of information.
Declarative knowledge is normally discussed using nouns, like the names of people, places, or things or dates that
events occurred.
The procedural representation is one in which the control information that is necessary to use the knowledge is considered
to be embedded in the knowledge itself.
Procedural knowledge answers the question 'What can you do?'
While declarative knowledge is demonstrated using nouns,
Procedural knowledge relies on action words, or verbs.
It is a person's ability to carry out actions to complete a task.
60. The real difference between the declarative and procedural views of knowledge lies in where the control information resides.
Example:
The statements 1, 2 and 3 are procedural knowledge and 4 is a declarative knowledge.
61. Forward & Backward Reasoning
The object of a search procedure is to discover a path through a problem space from an initial configuration to a goal state.
There are actually two directions in which such a search could proceed:
Forward Reasoning,
from the start states
LHS of the rules must match the current state
Eg: A → B, B→C => A→C
Backward Reasoning,
from the goal states
RHS of the rules must match the goal state
Eg: 8-Puzzle Problem
In both cases, the control strategy must cause motion and be systematic. The production system model of the search
process provides an easy way of viewing forward and backward reasoning as symmetric processes.
62. Consider the problem of solving a particular instance of the 8-puzzle problem. The rules to be used for solving the puzzle
can be written as:
63. Reasoning Forward from Initial State:
Begin building a tree of move sequences that might be solutions, with the initial configuration at the root of the tree.
Generate the next level of the tree by finding all the rules whose left sides match the root node and using their right
sides to create the new configurations.
Generate the next level by taking each node generated at the previous level and applying to it all of the rules whose left
sides match it.
Continue until a configuration that matches the goal state is generated.
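The forward-reasoning procedure above amounts to a breadth-first growth of the move tree, which can be sketched generically in Python (rules as hypothetical (match, apply) function pairs; the toy integer domain is for illustration only):

```python
from collections import deque

# Sketch of reasoning forward: breadth-first growth of the move tree.
# A rule is a (matches, apply) pair of functions over hashable states.

def forward_search(start, goal, rules):
    """Return a list of states from start to goal, or None if none is found."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                     # goal configuration generated
            return path
        for matches, apply_rule in rules:     # left side must match the node
            if matches(state):
                nxt = apply_rule(state)       # right side creates the new node
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
    return None

# Toy domain: states are integers, rules increment or double.
rules = [(lambda s: True, lambda s: s + 1),
         (lambda s: True, lambda s: s * 2)]
print(forward_search(1, 6, rules))   # [1, 2, 3, 6]
```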
64. Reasoning Backward from Goal State:
Begin building a tree of move sequences that might be solutions, with the goal configuration at the root of the tree.
Generate the next level of the tree by finding all the rules whose right sides match the root node. These are all the rules that,
if only we could apply them, would generate the state we want. Use the left sides of the rules to generate the nodes at this
second level of the tree.
Generate the next level of the tree by taking each node at the previous level and finding all the rules whose right sides match
it. Then use the corresponding left sides to generate the new nodes.
Continue until a node that matches the initial state is generated.
This method of reasoning backward from the desired final state is often called goal- directed reasoning.
65. To reason forward, the left sides (preconditions) are matched against the current state and the right sides (results) are used to
generate new nodes until the goal is reached. To reason backward, the right sides are matched against the current node and
the left sides are used to generate new nodes representing new goal states to be achieved.
The following 4 factors influence whether it is better to reason Forward or Backward:
1. Are there more possible start states or goal states? We would like to move from the smaller set of states to the larger (and
thus easier to find) set of states.
2. In which direction is the branching factor (i.e., the average number of nodes that can be reached directly from a single
node) greater? We would like to proceed in the direction with the lower branching factor.
3. Will the program be used to justify its reasoning process to a user? If so, it is important to proceed in the direction that
corresponds more closely with the way the user will think.
4. What kind of event is going to trigger a problem-solving episode? If it is arrival of a new fact, forward reasoning makes
sense. If it is a query to which a response is desired, backward reasoning is more natural.
66. Backward-Chaining Rule Systems
Backward-chaining rule systems are good for goal-directed problem solving.
For example, a query system would probably use backward chaining to reason about and answer user questions.
Unification tries to find a set of bindings for variables to equate a (sub) goal with the head of some rule.
Medical expert system, diagnostic problems
Forward-Chaining Rule Systems
Instead of being directed by goals, we sometimes want to be directed by incoming data.
For example, suppose you sense searing heat near your hand. You are likely to jerk your hand away.
Rules that match dump their right-hand side assertions into the state and the process repeats.
Matching is typically more complex for forward-chaining systems than backward ones.
Synthesis systems – Design/Configuration
67. Example of Typical Forward Chaining
Rules
1) If hot and smoky then ADD fire
2) If alarm_beeps then ADD smoky
3) If fire then ADD switch_on_sprinklers
Facts
(1) alarm_beeps (given)
(2) hot (given)
.........
(3) smoky (from F1 by R2)
(4) fire (from F2, F3 by R1)
(5) switch_on_sprinklers (from F4 by R3)
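The forward-chaining run above can be reproduced in a few lines of Python (rule and fact names are taken from the example; the loop structure is an illustrative sketch):

```python
# The fire-alarm example run data-driven: keep applying rules whose conditions
# are all in the fact base, ADDing their conclusions until nothing new fires.

rules = [
    ({"hot", "smoky"}, "fire"),            # R1
    ({"alarm_beeps"}, "smoky"),            # R2
    ({"fire"}, "switch_on_sprinklers"),    # R3
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # dump the right-hand side into state
                changed = True
    return facts

print(sorted(forward_chain({"alarm_beeps", "hot"}, rules)))
# ['alarm_beeps', 'fire', 'hot', 'smoky', 'switch_on_sprinklers']
```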
Example of Typical Backward Chaining
Goal: Should I switch on sprinklers?
68. Combining Forward and Backward Reasoning
• Sometimes certain aspects of a problem are best handled via forward chaining and other aspects by backward chaining.
Consider a forward-chaining medical diagnosis program. It might accept twenty or so facts about a patient’s condition then
forward chain on those concepts to try to deduce the nature and/or cause of the disease.
• Now suppose that at some point, the left side of a rule was nearly satisfied – nine out of ten of its preconditions were met.
It might be efficient to apply backward reasoning to satisfy the tenth precondition in a directed manner, rather than wait for
forward chaining to supply the fact by accident.
• Whether it is possible to use the same rules for both forward and backward reasoning also depends on the form of the
rules themselves. If both left sides and right sides contain pure assertions, then forward chaining can match assertions on the
left side of a rule and add to the state description the assertions on the right side. But if arbitrary procedures are allowed as the
right sides of rules then the rules will not be reversible.
69. Logic Programming
Logic Programming is a programming language paradigm in which logical assertions are viewed as programs.
There are several logic programming systems in use today, the most popular of which is PROLOG.
A PROLOG program is described as a series of logical assertions, each of which is a Horn clause.
A Horn clause is a clause that has at most one positive literal. Thus p, ¬ p V q, and p → q are all Horn clauses.
Programs written in pure PROLOG are composed only of Horn clauses.
70. Syntactic Difference between the logic and the PROLOG representations, including:
In logic, variables are explicitly quantified. In PROLOG, quantification is provided implicitly by the way the variables are
interpreted.
o The distinction between variables and constants is made in PROLOG by having all variables begin with uppercase letters
and all constants begin with lowercase letters.
In logic, there are explicit symbols for and (ꓥ) and or (V). In PROLOG, there is an explicit symbol for and (,), but there is
none for or.
In logic, implications of the form “p implies q” are written as p → q. In PROLOG, the same implication is written
“backward”, as q :- p.
71. The first two of these differences arise naturally from the fact that PROLOG
programs are actually sets of Horn Clauses that have been transformed as
follows:
1. If the Horn clause contains no negative literals (i.e., it contains a single
literal, which is positive), then leave it as it is.
2. Otherwise, rewrite the Horn clause as an implication, combining all of the
negative literals into the antecedent of the implication and leaving the
single positive literal (if there is one) as the consequent.
This procedure causes a clause, which originally consisted of a disjunction
of literals (all but one of which were negative), to be transformed to single
implication whose antecedent is a conjunction of (what are now positive)
literals.
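The transformation can be sketched in Python (the signed-atom encoding is illustrative, not from the source):

```python
# Sketch of the transformation above: a Horn clause is a list of signed atoms
# with at most one positive literal; rewrite it as (antecedents, consequent).

def horn_to_rule(clause):
    positives = [a for s, a in clause if s == "+"]
    negatives = [a for s, a in clause if s == "-"]
    assert len(positives) <= 1, "not a Horn clause"
    if not negatives:                       # a bare fact: leave it as it is
        return ([], positives[0])
    # Negative literals become the antecedent conjunction; the single
    # positive literal (if there is one) becomes the consequent.
    return (negatives, positives[0] if positives else None)

# ¬man(X) V mortal(X)  becomes  mortal(X) :- man(X)  in PROLOG notation.
print(horn_to_rule([("-", "man(X)"), ("+", "mortal(X)")]))
# (['man(X)'], 'mortal(X)')
```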
73. Matching
• We described the process of using search to solve problems as the application of appropriate rules to individual problem
states to generate new states to which the rules can then be applied and so forth until a solution is found.
• How do we extract from the entire collection of rules those that can be applied at a given point? Doing so requires some kind
of matching between the current state and the preconditions of the rules. How should this be done? The answer to this
question can be critical to the success of a rule-based system.
• A more complex matching process is required when the preconditions of a rule specify required properties that are not stated
explicitly in the description of the current state. In this case, a separate set of rules must be used to describe how some
properties can be inferred from others.
• An even more complex matching process is required if rules should be applied when their preconditions approximately
match the current situation. This is often the case in situations involving physical descriptions of the world.
74. Indexing
One way to select applicable rules is to do a simple search through all the rules, comparing each one’s preconditions to
the current state and extracting all the ones that match. There are two problems with this simple solution:
i. For interesting problems, a large number of rules will be necessary, and scanning through all of them at every step would be inefficient.
ii. It is not always obvious whether a rule’s preconditions are satisfied by a particular state.
Solution: instead of searching through the rules, use the current state as an index into the rules and select the matching
ones immediately.
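A sketch of this idea: key the rules by an abstraction of the state they apply to, so retrieval is a dictionary lookup rather than a scan (the state keys and rule names here are hypothetical):

```python
from collections import defaultdict

# Indexing sketch: rules are stored under a key describing the states they
# apply to, so selection is a dict access instead of a scan over every rule.

rule_index = defaultdict(list)

def add_rule(state_key, rule_name):
    rule_index[state_key].append(rule_name)

add_rule("blank_in_corner", "slide_up")
add_rule("blank_in_corner", "slide_left")
add_rule("blank_in_center", "slide_any")

def applicable(state_key):
    """Use the current state as an index into the rules."""
    return rule_index[state_key]

print(applicable("blank_in_corner"))   # ['slide_up', 'slide_left']
```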
75. Matching with Variables
• The problem of selecting applicable rules is made more difficult when preconditions are not stated as exact descriptions
of particular situations but rather describe properties that the situations must have. It often turns out that discovering
whether there is a match between a particular situation and the preconditions of a given rule must itself involve a
significant search process.
• Backward-chaining systems usually use depth-first backtracking to select individual rules, but forward-chaining systems
generally employ sophisticated conflict resolution strategies to choose among the applicable rules.
• While it is possible to apply unification repeatedly over the cross product of preconditions and state description
elements, it is more efficient to consider the many-many match problem, in which many rules are matched against many
elements in the state description simultaneously.
One efficient many-many match algorithm is RETE.
76. RETE Matching Algorithm
The matching consists of 3 parts
1. Rules & Productions
2. Working Memory
3. Inference Engine
The inference engine runs the production-system cycle: match, select, execute.
77. The above cycle is repeated until no rules are put in the conflict set or until a stopping condition is reached. Verifying
several conditions on every cycle is time consuming; to eliminate the need to perform thousands of matches per cycle, an
efficient matching algorithm called RETE is used.
The algorithm rests on two ideas:
1. Only the changes to working memory need to be examined on each cycle.
2. Rules which share the same conditions are grouped and linked to their common terms.
RETE is a many-many match algorithm (in which many rules are matched against many elements). RETE is used in
forward-chaining systems, which generally employ sophisticated conflict resolution strategies to choose among
applicable rules. RETE gains efficiency from 3 major sources:
1. Temporal redundancy: RETE maintains a network of rule conditions and uses changes in the state description to determine which new rules
might apply. Full matching is only pursued for candidates that could be affected by incoming/outgoing data.
2. Structural similarity in rules: RETE stores the rules so that they share structure in memory; a set of conditions that appears in several rules
is matched only once per cycle.
3. Persistence of variable binding consistency: while all the individual preconditions of a rule might be met, there may be variable binding
conflicts that prevent the rule from firing. Recomputing bindings can be minimized, since RETE remembers its previous calculations and is
able to merge new binding information efficiently.
78. Approximate Matching:
Rules should be applied if their preconditions approximately match the current situation.
Eg: a speech understanding program
Rules: a description of a physical waveform to phones
Physical signal: varies with differences in the way individuals speak and with background noise.
Conflict Resolution:
When several rules match at once, the system must choose which to fire; this is the problem of conflict resolution.
There are 3 approaches to the problem of conflict resolution in a production system:
1. Preference based on rule match:
a. Physical order of rules in which they are presented to the system
b. Priority is given to rules in the order in which they appear
2. Preference based on the objects match:
a. Considers importance of objects that are matched
b. Considers the position of the match able objects in terms of Long Term Memory (LTM) & Short Term Memory(STM)
LTM: Stores a set of rules
STM (Working Memory): Serves as storage area for the facts deduced by rules in
long term memory
3. Preference based on the Action:
a. One way is to fire all the matched rules temporarily and examine the results of each.
Using a heuristic function that can evaluate each of the resulting states, compare the merits of the results and then select the
preferred one.
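A toy sketch of preference-based selection, combining rule order with specificity as the tie-break (the scheme and rule names are illustrative, not from the source):

```python
# Conflict-resolution sketch: when several rules match, pick one by a stated
# preference -- here, rule order first, then specificity (condition count).

def select_rule(conflict_set):
    """conflict_set: list of (order, conditions, action) for matched rules."""
    # Preference 1: earlier rules win; more specific rules break ties.
    return min(conflict_set, key=lambda r: (r[0], -len(r[1])))[2]

matched = [
    (2, {"hot"}, "open_window"),
    (1, {"hot", "smoky"}, "sound_alarm"),
]
print(select_rule(matched))   # sound_alarm
```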
79. Search Control Knowledge:
It is knowledge about which paths are most likely to lead quickly to a goal state
Search Control Knowledge requires Meta Knowledge.
It can take many forms: knowledge about
o which states are more preferable to others.
o which rule to apply in a given situation
o the Order in which to pursue sub goals
o useful Sequences of rules to apply.