CHAPTER ONE
INTRODUCTION
emnetzewdu2121@gmail.com
Introduction to AI 1
Goals/Objectives of AI
The goal of research in Artificial Intelligence is to develop a technology
whereby computers behave intelligently, for example:
 Intelligent behavior by simulated persons or other actors in a computer game
 Mobile robots in hostile or dangerous environments that behave intelligently
 Intelligent responses to user requests in information search systems
such as Google or Wikipedia
 Process control systems that supervise complex processes and that
are able to respond intelligently to a broad variety of malfunctioning
and other unusual situations.
 Intelligent execution of user requests in the operating system and
the basic services (document handling, electronic mail) of standard
computers.
Generally, the goal of AI is to create intelligent machines.
But what does it mean to be intelligent?
emnetzewdu2121@gmail.com
Introduction to AI 2
What is AI?
Definitions of AI vary along two dimensions: THOUGHT vs. BEHAVIOUR, and HUMAN vs. RATIONAL:
 Systems that think like humans
 Systems that think rationally
 Systems that act like humans
 Systems that act rationally
emnetzewdu2121@gmail.com
Introduction to AI 3
Systems that act like humans:
Turing Test approach
“The art of creating machines that perform functions that require intelligence
when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people
are better.” (Rich and Knight, 1991)
emnetzewdu2121@gmail.com
Introduction to AI 4
Turing test approach:
 You enter a room which has a computer terminal. You have a fixed
period of time to type what you want into the terminal, and study
the replies. At the other end of the line is either a human being or a
computer system.
 If it is a computer system, and at the end of the period you cannot
reliably determine whether it is a system or a human, then the
system is deemed to be intelligent.
emnetzewdu2121@gmail.com
Introduction to AI 5
Cont.…
To pass the Turing test the machine needs:
 Natural language processing :- for communication with humans
 Knowledge representation :- to store information effectively & efficiently
 Automated reasoning :- to retrieve the stored information & answer questions using it
 Machine learning :- to adapt to new circumstances
OR
To pass the total Turing Test, the computer will need
 computer vision to perceive objects(Seeing)
 robotics to manipulate objects and move about(acting)
emnetzewdu2121@gmail.com
Introduction to AI 6
Systems that think like humans (thinking humanly):
The Cognitive Modelling approach
“The exciting new effort to make computers think . . . machines with minds, in the
full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities
such as decision-making, problem solving, learning . . .” (Bellman, 1978).
Humans as observed from the ‘inside’: how do we know how
humans think?
 Introspection vs. psychological experiments
Cognitive Science
 The interdisciplinary field that brings together computer models from AI and
experimental techniques from psychology to construct precise and testable theories of
the human mind.
emnetzewdu2121@gmail.com
Introduction to AI 7
Systems that think ‘rationally’
"laws of thought“ approach
“The study of mental faculties through the use of
computational models” (Charniak and McDermott)
“The study of the computations that make it possible
to perceive, reason, and act” (Winston)
 Humans are not always ‘rational’.
 Rational - defined in terms of logic?
 Logic can’t express everything (e.g. uncertainty)
 Logical approach is often not feasible in terms of
computation time (needs ‘guidance’).
emnetzewdu2121@gmail.com
Introduction to AI 8
Systems that act rationally: rational
agent approach
“Computational Intelligence is the study of the design of
intelligent agents.” (Poole et al., 1998)
“AI . . . is concerned with intelligent behavior in artifacts.”
(Nilsson, 1998)
Rational behavior: doing the right thing
The right thing: that which is expected to maximize goal achievement,
given the available information
Giving answers to questions is ‘acting’.
We don’t care whether a system:
 replicates human thought processes
 makes the same decisions as humans
 uses purely logical reasoning
emnetzewdu2121@gmail.com
Introduction to AI 9
Cont.
Study AI as rational agent –
2 advantages:
 It is more general than using logic only
Because: LOGIC + Domain knowledge
 It allows extension of the approach with more
scientific methodologies
emnetzewdu2121@gmail.com
Introduction to AI 10
Rational agents
 An agent is an entity that perceives and acts
 Abstractly, an agent is a function from percept histories to actions:
f : P* → A
 For any given class of environments and tasks, we seek the agent (or class of
agents) with the best performance
 Limitation: computational limitations make perfect rationality unachievable
 ⇒ design the best program for the given machine resources
 Artificial
 Produced by human art or effort, rather than
originating naturally.
• Intelligence
 “is the ability to acquire knowledge and use it”
[Pigford and Baur]
emnetzewdu2121@gmail.com
Introduction to AI 11
Cont.
So AI was defined as:
 AI is the study of ideas that enable computers to be intelligent.
 AI is the part of computer science concerned with design of computer
systems that exhibit human intelligence(From the Concise Oxford
Dictionary)
From the above two definitions, we can see that AI has two major roles:
 Study the intelligent part concerned with humans.
 Represent those actions using computers.
emnetzewdu2121@gmail.com
Introduction to AI 12
Group Assignment
The advancement of the Turing
test (arguments on the Turing test)
The foundation of AI
The History of AI
The state of the art(Current status of AI)
emnetzewdu2121@gmail.com
Introduction to AI 13
Intelligent Agents
Agents and Environments
What is an agent?
 An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators/
effectors to maximize progress towards its goal
What is Intelligent Agent?
 Intelligent agents are software entities that carry out some set of operations on
behalf of a user or another program, with some degree of independence or
autonomy and in so doing, employ some knowledge or representation of the user’s
goals or desires.” (“The IBM Agent”)
 PAGE (Percepts, Actions, Goals, Environment)description
emnetzewdu2121@gmail.com
Introduction to AI 14
Cont.
Percepts: information acquired through the agent’s sensory system
Actions: operations performed by the agent on the environment through its actuators
Goals: desired outcome of the task, with a measurable performance
Environment: surroundings beyond the control of the agent
emnetzewdu2121@gmail.com
Introduction to AI 15
emnetzewdu2121@gmail.com
Introduction to AI 16
Examples of Agents
 human agent
 eyes, ears, skin, taste buds, etc. for sensors
 hands, fingers, legs, mouth, etc. for actuators
 powered by muscles
 robot
 camera, infrared, bumper, etc. for sensors
 grippers, wheels, lights, speakers, etc. for actuators
 often powered by motors
 software agent
 functions as sensors
 information provided as input to functions in the form of encoded
bit strings or symbols
 functions as actuators
 results deliver the output
emnetzewdu2121@gmail.com
Introduction to AI 17
Cont.
Mathematically speaking, we say that an agent’s behavior is
described by the agent function that maps any given percept
sequence to an action
f : P* → A
The agent program runs on the physical architecture to produce f
the agent function for an artificial agent will be implemented by an agent
program.
 The agent function is an abstract mathematical description;
 the agent program is a concrete implementation, running within some
physical system
Example: The vacuum-cleaner
emnetzewdu2121@gmail.com
Introduction to AI 18
Percepts: location and contents, e.g., [A,Dirty]
Actions: Left, Right, Suck, NoOp
emnetzewdu2121@gmail.com
Introduction to AI 19
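To make the agent-function idea concrete, here is a minimal Python sketch (not from the slides) of a reflex agent program for this two-location vacuum world; the percept is modelled as a (location, contents) pair, matching [A, Dirty] above.

def reflex_vacuum_agent(percept):
    # percept is (location, contents), e.g. ("A", "Dirty")
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))   # -> Right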
THE CONCEPT OF
RATIONALITY
A rational agent is one that does the right thing—
conceptually speaking, every entry in the table for the agent
function is filled out correctly.
The right action is the one that will cause the agent to be
most successful
Q: What does it mean to do the right thing?
 It may depend on the consequences of the agent’s behavior:
 The agent performs a sequence of actions based on the percepts it receives.
 This causes the environment to go through a sequence of states.
 If that sequence is desirable, then the agent has performed well.
 Desirability is captured by a performance measure, which evaluates any
given sequence of environment states.
emnetzewdu2121@gmail.com
Introduction to AI 20
Cont.
Example: a performance measure for the vacuum-cleaner agent.
 Measuring performance by the amount of dirt cleaned up in a single eight-hour
shift is a poor choice: a rational agent can maximize it by cleaning up the dirt,
dumping it all back on the floor, cleaning it up again, and so on.
 A better measure rewards the agent for having a clean floor.
As a general rule, it is better to design performance measures
according to what one actually wants in the environment,
rather than according to how one thinks the agent should
behave. With a rational agent, of course, what you ask for is what you get.
emnetzewdu2121@gmail.com
Introduction to AI 21
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by
the percept sequence and whatever built-in knowledge
the agent has.
emnetzewdu2121@gmail.com
Introduction to AI 22
Cont.
Notes:
Rationality is distinct from omniscience (“all knowing”).
We can behave rationally even when faced with
incomplete information.
Agents can perform actions in order to modify future
percepts so as to obtain useful information: information
gathering, exploration.
An agent is autonomous if its behavior is determined by
its own experience (with ability to learn and adapt).
emnetzewdu2121@gmail.com
Introduction to AI 23
Characterizing a Task
Environment
Must first specify the setting for intelligent agent design/Rational agent
 PEAS: Performance measure, Environment, Actuators, Sensors
Performance measure: used to evaluate how well an agent solves the task at hand
Environment: surroundings beyond the control of the agent
Actuators: determine the actions the agent can perform
Sensors: provide information about the current state of the environment
emnetzewdu2121@gmail.com
Introduction to AI 24
Cont.
Consider, e.g., the task of designing an
automated taxi driver:
 Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
 Environment: Roads, other traffic, pedestrians, customers
 Actuators: Steering wheel, accelerator, brake, signal, horn
 Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine
sensors, keyboard
emnetzewdu2121@gmail.com
Introduction to AI 25
Cont.
Agent: Medical diagnosis system
 PEAS???
Performance measure: Healthy patient, minimize costs,
lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's
answers)
emnetzewdu2121@gmail.com
Introduction to AI 26
Cont.
Agent: Interactive English tutor
Performance measure: Maximize student's score on
test
Environment: Set of students
Actuators: Screen display (exercises, suggestions,
corrections)
Sensors: Keyboard
emnetzewdu2121@gmail.com
Introduction to AI 27
Recap
Agents and Environment
 PAGE- agent description
Rationality
 PEAS- task environment description
emnetzewdu2121@gmail.com
Introduction to AI 28
Properties of Task Environment
 The range of task environments is obviously vast. However, we can identify some
dimensions along which to categorize task environments fairly well.
 These dimensions determine:
 To a large extent
 The appropriate agent design
 The applicability of each of the principal families of techniques for agent
implementation.
Fully observable vs. partially observable:
 Fully observable: the agent's sensors give it access to the complete state of the environment at
each point in time,
i.e. the sensors detect all aspects that are relevant to the choice of action
(relevance depends on the performance measure).
 Partially observable: because of noisy and inaccurate sensors, or because
parts of the state are simply missing from the sensor data.
emnetzewdu2121@gmail.com
Introduction to AI 29
Cont.
Deterministic vs. stochastic:
 If next state of the environment is completely determined by the
current state and the action executed by the agent, otherwise, it is
stochastic. (If the environment is deterministic except for the actions of
other agents, then the environment is strategic)
Episodic vs. sequential:
 Episodic: the agent's experience is divided into atomic "episodes" (each episode
consists of the agent perceiving and then performing a single action),
and the choice of action in each episode depends only on the episode
itself.
 Sequential: the agent needs to think ahead; the current decision can affect
future decisions.
i.e. episodic environments are much simpler than sequential environments;
there is no need to think ahead.
emnetzewdu2121@gmail.com
Introduction to AI 30
Cont.
Static vs. dynamic:
 If the environment is unchanged while an agent is deliberating
then, the env’t is static for that agent otherwise dynamic.
Static environments are easy to deal with because the agent need
not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time.
Dynamic environments are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do
nothing.
 The environment is semi dynamic if the environment itself does
not change with the passage of time but the agent's
performance score does
emnetzewdu2121@gmail.com
Introduction to AI 31
Cont.
Discrete vs. continuous:
 A limited number of distinct, clearly defined
percepts and actions.
e.g. chess has discrete states, percepts and
actions
taxi driving is continuous state and continuous
time problem
Single agent vs. multiagent:
 An agent operating by itself in an
environment.
emnetzewdu2121@gmail.com
Introduction to AI 32
Structure of the Agent
emnetzewdu2121@gmail.com
Introduction to AI 33
 Agent’s strategy is a mapping from percept sequence to action
 How to encode an agent’s strategy?
 Long list of what should be done for each possible percept sequence
 vs. shorter specification (e.g. algorithm)
function SKELETON-AGENT(percept) returns action
  static: memory, the agent’s memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
On each invocation, the agent’s memory is updated to reflect the new percept, the best
action is chosen, and the fact that the action was taken is also stored in the memory.
The memory persists from one invocation to the next.
emnetzewdu2121@gmail.com
Introduction to AI 34
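As a rough illustration, the skeleton above can be rendered in Python as follows; UPDATE-MEMORY and CHOOSE-BEST-ACTION are left as placeholders, since a real agent would fill them with domain-specific code.

class SkeletonAgent:
    def __init__(self):
        self.memory = []                      # the agent's memory of the world

    def update_memory(self, item):
        self.memory.append(item)              # placeholder: simply record everything

    def choose_best_action(self):
        return "NoOp"                         # placeholder decision rule

    def __call__(self, percept):
        self.update_memory(percept)           # memory <- UPDATE-MEMORY(memory, percept)
        action = self.choose_best_action()    # action <- CHOOSE-BEST-ACTION(memory)
        self.update_memory(action)            # memory <- UPDATE-MEMORY(memory, action)
        return action

The memory persists from one invocation to the next because it is stored on the agent object.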
Agent types
I) Table-lookup driven agents
II) Simple reflex agents
III)Model-based reflex agents
IV) Goal-based agents
V) Utility-based agents
VI)Learning agents
Table-lookup driven
agents
Uses a percept sequence / action table in memory to find the next
action. Implemented as a (large) lookup table.
Drawbacks:
– Huge table (often simply too large)
– Takes a long time to build/learn the
table
emnetzewdu2121@gmail.com
Introduction to AI 35
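A hypothetical Python sketch of a table-driven agent follows; the tiny table below covers only a few vacuum-world percept sequences, which is exactly why the approach does not scale.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table            # maps a tuple of percepts so far -> action
        self.percepts = []

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck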
emnetzewdu2121@gmail.com
Introduction to AI 36
Toy example:
Vacuum world.
Percepts: the robot senses its location and “cleanliness.”
So, location and contents, e.g., [A, Dirty], [B, Clean].
With 2 locations, we get 4 different possible sensor inputs.
Actions: Left, Right, Suck, NoOp
Simple reflex agents
Agents do not have memory of past world states or percepts.
So, actions depend only on current percept.
Action becomes a “reflex.”
Uses condition-action rules.
emnetzewdu2121@gmail.com
Introduction to AI 37
emnetzewdu2121@gmail.com
Introduction to AI 38
Agent selects actions on the basis
of current percept only.
If tail-light of car in
front is red, then
brake.
Model-based reflex agents
Key difference (wrt simple reflex agents):
 Agents have internal state, which is used
to keep track of past states of the world.
 Agents have the ability to represent
change in the World.
emnetzewdu2121@gmail.com
Introduction to AI 39
emnetzewdu2121@gmail.com
Introduction to AI 40
Example: the agent infers “potentially dangerous driver in front.”
If “dangerous driver in front,” then “keep distance.”
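A minimal Python sketch of this idea, with an invented percept format and rule, might look like the following; the internal state records what the agent has inferred about the driver in front.

class ModelBasedDriverAgent:
    def __init__(self):
        self.state = {"dangerous_driver_in_front": False}   # internal model of the world

    def update_state(self, percept):
        # percept is a dict such as {"swerving": True}; the rule here is illustrative
        if percept.get("swerving") or percept.get("tailgating"):
            self.state["dangerous_driver_in_front"] = True

    def __call__(self, percept):
        self.update_state(percept)
        if self.state["dangerous_driver_in_front"]:
            return "keep distance"
        return "drive normally"

agent = ModelBasedDriverAgent()
print(agent({"swerving": True}))    # -> keep distance
print(agent({}))                    # -> keep distance (the state persists)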
Goal-based agents
Key difference wrt Model-Based Agents:
In addition to state information, have goal information
that
describes desirable situations to be achieved.
Agents of this kind take future events into consideration.
What sequence of actions can I take to achieve certain
goals?
Choose actions so as to (eventually) achieve a (given or
computed) goal.
emnetzewdu2121@gmail.com
Introduction to AI 41
emnetzewdu2121@gmail.com
Introduction to AI 42
Agent keeps track of the world state as well as set
of goals it’s trying to achieve: chooses
actions that will (eventually) lead to the goal(s).
More flexible than reflex agents ⇒ may involve
search and planning
Utility-based agents
When there are multiple possible alternatives, how to decide which
one is best?
Goals are qualitative: A goal specifies a crude distinction between a
happy and unhappy state, but often need a more general
performance measure that describes “degree of happiness.”
Important for making tradeoffs: Allows decisions comparing choice
between conflicting goals, and choice between likelihood of success
and importance of goal (if achievement is uncertain).
Use decision theoretic models: e.g., faster vs. safer.
emnetzewdu2121@gmail.com
Introduction to AI 43
emnetzewdu2121@gmail.com
Introduction to AI 44
Decision theoretic actions:
e.g. faster vs. safer
emnetzewdu2121@gmail.com
Introduction to AI 45
Learning agents
Adapt and improve over time
To Summarize
emnetzewdu2121@gmail.com
Introduction to AI 46
 (1) Table-driven agents
 use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup
table.
 (2) Simple reflex agents
 are based on condition-action rules, implemented with an appropriate production system. They are
stateless devices which do not have memory of past world states.
 (3) Agents with memory - Model-based reflex agents
 have internal state, which is used to keep track of past states of the world.
 (4) Agents with goals – Goal-based agents
 are agents that, in addition to state information, have goal information that describes desirable
situations. Agents of this kind take future events into consideration.
 (5) Utility-based agents
 base their decisions on classic axiomatic utility theory in order to act rationally.
 (6) Learning agents
 they have the ability to improve performance through learning.
Chapter Next:
Solving Problems By Searching
emnetzewdu2121@gmail.com
Introduction to AI 47
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms
Cont.
Goal-based agents consider future actions
and the desirability of their outcomes.
A problem-solving agent is one kind of goal-
based agent; it uses atomic representations.
Goal-based agents that use more advanced
factored or structured representations are
usually called planning agents.
emnetzewdu2121@gmail.com
Introduction to AI 48
Problem Solving Agents
Goal formulation, based on the current situation and
the agent’s performance measure; first step in problem
solving.
 We will consider a goal to be a set of world states—
exactly those states in which the goal is satisfied.
Problem formulation is the process of deciding what
actions and states to consider, given a goal.
Search for solution: Given the problem, search for
a solution --- a sequence of actions to achieve the
goal starting from the initial state.
Execution of the solution
emnetzewdu2121@gmail.com
Introduction to AI 49
emnetzewdu2121@gmail.com
Introduction to AI 50
 Formulate goal:
 be in Bucharest
(Romania)
 Formulate problem:
 action: drive between
pair of connected cities
(direct road)
 state: be in a city
(20 world states)
 Find solution:
 sequence of cities
leading from start to
goal state, e.g., Arad,
Sibiu, Fagaras,
Bucharest
 Execution
 drive from Arad to
Bucharest according to
the solution
Example: path-finding problem (map of Romania; initial state Arad, goal state Bucharest)
Environment: fully observable (map), deterministic, and the
agent knows effects of each action. Is this really the case?
Note: Map is somewhat of a “toy” example. Our real
interest: Exponentially large spaces, with e.g. 10^100 or
more states. Far beyond full search. Humans can often
still handle those! One of the mysteries of cognition.
Problem and Solution
Definition
A problem can be defined formally by five components:
I. Initial State: the state that the agent starts in,
e.g. In(Arad).
II. Transition Model: a description of what each action does; RESULT(s, a) is the function
that returns the state that results from doing action a in state s.
A successor is any state reachable from a given state by a single action.
e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind).
III. State Space: the set of all states reachable from the initial state by any sequence
of actions,
i.e. implicitly defined by the initial state, actions, and transition model.
 Graph: formed by the state space, in which the nodes are states and the links between
nodes are actions.
 Path: a path in the state space is a sequence of states connected by a sequence of actions.
emnetzewdu2121@gmail.com
Introduction to AI 51
Cont.
IV. Goal Test: determines whether a given state is a goal state.
 Explicit: there is a set of possible goal states, and the test checks whether the given
state is one of them.
V. Path Cost: a function which assigns a numeric cost to each path.
 The agent chooses a cost function which reflects its own performance
measure.
Here we assume that the cost of a path can be described as the sum of the costs of the
individual actions along the path.
 The step cost of taking action a in state s to reach state s’ is
denoted by c(s, a, s’).
Solution quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
 The process of removing detail from a state description is called
abstraction.
emnetzewdu2121@gmail.com
Introduction to AI 52
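A minimal Python sketch of these five components (the class name and layout are illustrative, not from the text): a concrete problem subclasses this interface, and the search algorithms later in the chapter only need these methods.

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):                  # actions applicable in `state`
        raise NotImplementedError

    def result(self, state, action):           # transition model RESULT(s, a)
        raise NotImplementedError

    def goal_test(self, state):                # is `state` a goal state?
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        return 1                               # c(s, a, s'): unit cost by default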
Example problem (toy problem): vacuum-world
state space graph
emnetzewdu2121@gmail.com
Introduction to AI 53
 states? The agent is in one of 8 possible world states.
 actions? Left, Right, Suck [simplified: No-op left out]
 goal test? No dirt at any location (i.e., the agent is in one of the bottom two states of the graph).
 path cost? 1 per action.
Minimum path from the start state to a goal state: 3 actions.
An alternative, longer plan: 4 actions.
Note: paths with thousands of steps before reaching the goal also exist.
Real world problems
Route-finding problems:
 Route-finding algorithms are used in a variety of applications:
 Websites
 Routing video streams in computer networks
 Airline travel-planning systems
 Car navigation systems; driving-direction solving
 Military operations planning
Airline travel planning:
States: each location and time, status of flight, …
Actions: take any flight from the current location, in any seat class,
leaving at any time, with enough transfer time in the airport
Goal test: be in the final destination specified by the user
Path cost: depends on monetary cost, waiting time, customs and
immigration procedures, seat quality, time of day, airplane type, …
emnetzewdu2121@gmail.com Introduction to AI 54
emnetzewdu2121@gmail.com Introduction to AI
55
Robotic Assembly
 states?: real-valued coordinates of robot joint angles and of the parts of the object
to be assembled
 actions?: continuous motions of robot joints
 goal test?: complete assembly
 path cost?: time to execute
Touring problems (with respect to route finding):
 Its actions correspond to trips between adjacent cities.
 State space: each state must include the set of cities the agent has visited, in addition
to the current location.
 Goal test: checks whether the agent is in the goal state and all the other cities have been
visited.
Traveling salesperson problem (TSP): a touring problem in which each city must be visited
exactly once.
 The aim is to find the shortest tour.
Other Examples:
 Robot navigation/planning
 VLSI layout: positioning millions of components and connections on a
chip to minimize area, circuit delays, etc.
 Protein design : sequence of amino acids that will fold into the 3-
dimensional protein with the right properties.
 Automatic assembly of complex objects
Literally thousands of combinatorial search / reasoning / parsing / matching
problems can be formulated as search problems in exponential size state
spaces.
emnetzewdu2121@gmail.com Introduction to AI 56
emnetzewdu2121@gmail.com Introduction to AI
57
Search
Techniques
Searching for Solutions
emnetzewdu2121@gmail.com
Introduction to AI 58
Search through the state space.
We will consider search techniques that use an
explicit search tree that is generated by the
initial state + successor function.
initialize (initial node)
loop:
  choose a node for expansion according to strategy
  goal node? ⇒ done
  expand node with successor function
Tree-search algorithms
emnetzewdu2121@gmail.com Introduction to AI
59
 Basic idea:
 simulated exploration of state space by generating successors of
already-explored states (a.k.a. ~ expanding states)
Note: 1) Here we only check a node for possibly being a goal state after we select the
node for expansion.
2) A “node” is a data structure containing a state + additional info (parent
node, etc.).
Tree Search example
emnetzewdu2121@gmail.com Introduction to AI
60
Node selected
for expansion.
emnetzewdu2121@gmail.com
Introduction to AI 61
Nodes added to tree.
emnetzewdu2121@gmail.com
Introduction to AI 62
Selected for expansion.
Added to tree.
Note: Arad added (again) to
tree!
(reachable from Sibiu)
Not necessarily a problem, but
in Graph-Search, we will avoid
this by maintaining an
“explored” list.
Graph-search
emnetzewdu2121@gmail.com Introduction to AI 63
Note:
1) Uses “explored” set to avoid visiting already explored states.
2) Uses “frontier” set to store states that remain to be explored and
expanded.
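A compact Python sketch of this graph-search idea is given below; the frontier is fixed to a FIFO queue purely for concreteness, and the problem object is assumed to follow the hypothetical interface sketched earlier (initial_state, actions, result, goal_test).

from collections import deque

def graph_search(problem):
    start = problem.initial_state
    frontier = deque([(start, [start])])        # (state, path from the initial state)
    frontier_states = {start}
    explored = set()                            # "explored" set of already visited states
    while frontier:
        state, path = frontier.popleft()        # choose a node for expansion
        frontier_states.discard(state)
        if problem.goal_test(state):            # goal checked when selected for expansion
            return path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier_states:
                frontier.append((child, path + [child]))
                frontier_states.add(child)
    return None                                 # no solution found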
Implementation: states
vs. nodes
emnetzewdu2121@gmail.com Introduction to AI 64
 A state is a representation of a physical configuration.
 A node is a data structure constituting part of a search tree; it includes a
state, the parent node in the tree, the action (applied to the parent), the path cost g(x)
(from the initial state to the node), and the depth.
Search strategies
emnetzewdu2121@gmail.com
Introduction to AI 65
 A search strategy is defined by picking the order of node expansion.
 Criteria might be used to choose among the available algorithms: we can
evaluate an algorithm’s performance along four dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: how long does it take to find a solution (number of
nodes generated)?
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree(maximum number of
successors of any node
 d: the depth of the shallowest goal node (i.e., the number of steps
along the path from the root) (depth of the least-cost solution)
 m: maximum depth of the state space (may be ∞)
Search Algorithms
 Informed (heuristic): have some guidance on where to look
for solutions.
 Uninformed (blind): given no information about the problem
other than its definition. Some of them can
solve any kind of problem, but not efficiently.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
emnetzewdu2121@gmail.com Introduction to AI
66
Key issue: type of queue used for the fringe of the search tree
Fringe is the collection of nodes that have been generated but not (yet)
expanded. Each node of the fringe is a leaf node.
Breadth- First Search
Breadth-first search is a simple strategy in which the root
node is expanded first, then all the
successors of the root node are expanded next, then their
successors, and so on.
In general, all the nodes at a given depth in the search tree are
expanded before any nodes at the next level.
Breadth-first search is an instance of the general graph-
search algorithm in which the shallowest unexpanded
node is chosen for expansion.
The goal test is applied to each node when it is generated
rather than when it is selected for expansion.
emnetzewdu2121@gmail.com
Introduction to AI 67
Cont.
Implementation:
Fringe is : FIFO queue used to expand the
frontier.
 At the back of the queue: New nodes(always
deeper than their parents).
 At top of the queue: Old nodes (shallower than
new nodes) .
emnetzewdu2121@gmail.com Introduction to AI 68
Fringe queue:
<A>
Select A from
queue and expand.
Gives
<B, C>
Cont.
Implementation:
Fringe is : FIFO queue used to expand the
frontier.
 At the back of the queue: New nodes(always
deeper than their parents).
 At top of the queue: Old nodes (shallower than
new nodes) .
emnetzewdu2121@gmail.com Introduction to AI 69
Queue: <B, C>
Select B from
front, and expand.
Put children at the
end.
Gives
<C, D, E>
Cont.
Implementation:
Fringe is : FIFO queue used to expand the
frontier.
 At the back of the queue: New nodes(always
deeper than their parents).
 At top of the queue: Old nodes (shallower than
new nodes) .
emnetzewdu2121@gmail.com Introduction to AI 70
Fringe queue: <C, D, E>
Cont.
Implementation:
Fringe is : FIFO queue used to expand the
frontier.
 At the back of the queue: New nodes(always
deeper than their parents).
 At top of the queue: Old nodes (shallower
than new nodes) .
emnetzewdu2121@gmail.com Introduction to AI
71
Fringe queue: <D, E, F, G>
Assuming no further children,
queue becomes
<E, F, G>, <F, G>, <G>, <>. Each
time node checked
for goal state.
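A small Python sketch of this FIFO walkthrough, on the same kind of example tree (A expands to B and C, B to D and E, C to F and G); the successor map is assumed for illustration.

from collections import deque

successors = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def breadth_first(start, goal):
    frontier = deque([start])                   # fringe: FIFO queue
    while frontier:
        node = frontier.popleft()               # oldest (shallowest) node first
        print("expanding", node, "| queue:", list(frontier))
        if node == goal:
            return node
        frontier.extend(successors.get(node, []))   # new nodes go to the back
    return None

breadth_first("A", "G")   # expands A, B, C, D, E, F before reaching G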
Properties of breadth-first
search
emnetzewdu2121@gmail.com Introduction to AI 72
b: maximum branching factor of
the search tree
d: depth of the least-cost solution
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Note: check for
goal only when
node is expanded.
Space? O(b^(d+1)) (keeps every node in memory;
needed also to reconstruct soln. path)
Optimal soln. found?
Yes (if all step costs are identical)
Space is the bigger problem (more than time)
Uniform-cost search
Expand least-cost (of path to) unexpanded node
(e.g. useful for finding shortest path
on map)
Implementation:
 fringe = queue ordered by path cost
Equivalent to breadth-first if step costs all equal
emnetzewdu2121@gmail.com Introduction to AI
73
Uniform-cost search sample:
(worked example on a weighted graph: nodes are expanded in order of increasing path cost g)
emnetzewdu2121@gmail.com
Introduction to AI 78
Uniform-cost search
Complete? Yes
Time? # of nodes with g ≤ cost of the optimal solution,
O(b^(1 + C*/ε)), where C* is the cost of the optimal solution and ε is some
small positive lower bound on the cost of every step.
Space? # of nodes with g ≤ cost of the optimal solution,
O(b^(C*/ε))
Optimal? Yes
emnetzewdu2121@gmail.com
Introduction to AI 79
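A minimal Python sketch of uniform-cost search using a priority queue ordered by path cost g; the example graph and its step costs are made up for illustration.

import heapq

graph = {                                   # node -> list of (neighbour, step cost)
    "S": [("A", 1), ("B", 5), ("C", 15)],
    "A": [("G", 10)],
    "B": [("G", 5)],
    "C": [("G", 5)],
}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]        # (g, state, path), ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)   # least-cost node first
        if state == goal:
            return g, path
        for nbr, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None

print(uniform_cost_search("S", "G"))   # -> (10, ['S', 'B', 'G'])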
Depth-first search
Expand deepest unexpanded node
Implementation:
 fringe = LIFO queue, i.e., put successors at front
emnetzewdu2121@gmail.com
Introduction to AI 80
Properties of depth-
first search
Complete? No: fails in infinite-depth spaces, spaces with
loops
 Modify to avoid repeated states along path
 complete in finite spaces
Time? O(b^m): terrible if m is much larger than d
 but if solutions are dense, may be much faster than breadth-
first
Space? O(bm), i.e., linear space!
Optimal? No
emnetzewdu2121@gmail.com
Introduction to AI 92
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
emnetzewdu2121@gmail.com
Introduction to AI 93
Iterative deepening search
emnetzewdu2121@gmail.com
Introduction to AI 94
Iterative deepening search, l = 0
emnetzewdu2121@gmail.com
Introduction to AI 95
Iterative deepening search, l = 1
emnetzewdu2121@gmail.com
Introduction to AI 96
Iterative deepening search, l = 2
emnetzewdu2121@gmail.com
Introduction to AI 97
Iterative deepening search, l = 3
emnetzewdu2121@gmail.com
Introduction to AI 98
Properties of iterative
deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
emnetzewdu2121@gmail.com
Introduction to AI 99
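The idea can be sketched in Python as a depth-limited search called with increasing limits l = 0, 1, 2, ...; the small successor map is illustrative only.

successors = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def depth_limited(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                          # cutoff reached
    for child in successors.get(node, []):
        result = depth_limited(child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):       # l = 0, 1, 2, ...
        result = depth_limited(start, goal, limit)
        if result is not None:
            return result
    return None

print(iterative_deepening("A", "G"))   # -> ['A', 'C', 'G']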
Bidirectional Search
To go both ways, Top
Down and Bottom Up.
emnetzewdu2121@gmail.com
Introduction to AI 100
Summary of algorithms
emnetzewdu2121@gmail.com
Introduction to AI 101
Repeated states
Do not return to the state you just
came from.
Do not create paths with cycles in
them.
Do not generate any state that was
ever generated before.
 Hashing
emnetzewdu2121@gmail.com
Introduction to AI 102
Constraint
Satisfaction Search
Reading Assignment
emnetzewdu2121@gmail.com
Introduction to AI 103
Summary
Problem formulation usually requires abstracting away real-
world details to define a state space that can feasibly be
explored
Variety of uninformed search strategies
 Breadth First Search
 Depth First Search
 Uniform Cost Search (Best First Search)
 Depth Limited Search
 Iterative Deepening
 Bidirectional Search
emnetzewdu2121@gmail.com
Introduction to AI 104
Knowledge
Representation
Introduction.
The objective of research into
intelligent machines is to produce
systems which can reason with
available knowledge and so behave
intelligently.
One of the major issues then is how to
incorporate knowledge into these
systems.
The Problem Of Knowledge
Representation
How is the whole abstract concept of
knowledge reduced into forms which
can be written into a computer’s
memory?
This is called the problem of Knowledge
Representation.
Fields of Knowledge
The concept of Knowledge is central to
a number of fields of established
academic study, including Philosophy,
Psychology, Logic,and Education.
Even in Mathematics and Physics people
like Isaac Newton and Leibniz reflected
that since physics has its foundations in
a mathematical formalism, the laws of
all nature should be similarly described.
Major Representation
Schemes
Classical Representation schemes
include
 Logic
 Procedural
 Semantic Nets
 Frames
 Direct
 Production Schemes
Each Representation Scheme has a
reasoning mechanism built in.
PROPOSITIONAL
LOGIC
Introduction to AI
What is a Logic?
• When most people say ‘logic’, they mean either propositional logic or first-
order predicate logic.
• However, the precise definition is quite broad, and literally hundreds of logics
have been studied by philosophers, computer scientists and mathematicians.
• Formal Logic is a classical approach of representing Knowledge.
• It was developed by philosophers and mathematicians as a calculus of the
process of making inferences from facts.
• The simplest logic formalism is that of Propositional Calculus which is
effectively an equivalent form of Boolean Logic, the basis of Computing.
• We can make Statements or Propositions which can either be atomic ( i.e.
they can stand alone) or composite which are sentences constructed using
atomic statements joined by logical connectives
• Any ‘formal system’ can be considered a logic if it has:
– a well-defined syntax;
– a well-defined semantics; and
– a well-defined proof-theory.
•The syntax of a logic defines the syntactically acceptable
objects of the language, which are properly called well-formed
formulae (wff). (We shall just call them formulae.)
•The semantics of a logic associate each formula with a
meaning.
•The proof theory is concerned with manipulating formulae
according to certain rules.
Introduction to AI
•The simplest, and most abstract logic we can study is called propositional
logic.
•Definition: A proposition is a statement that can be either true or false; it
must be one or the other, and it cannot be both.
•EXAMPLES. The following are propositions:
– the reactor is on;
– the wing-flaps are up;
– Abiy Ahmed is prime minister.
whereas the following are not:
– are you going out somewhere?
– 2+3
-Good Morning.
Introduction to AI
Propositional Logic
•It is possible to determine whether any given statement is a proposition by
prefixing it with:
It is true that . . .
and seeing whether the result makes grammatical sense.
•We now define atomic propositions. Intuitively, these are the set of smallest
propositions.
•Definition: An atomic proposition is one whose truth or falsity does not
depend on the truth or falsity of any other proposition.
•So all the above propositions are atomic.
Introduction to AI
•Now, rather than write out propositions in full, we will
abbreviate them by using propositional variables.
•It is standard practice to use the lower-case Latin letters
p, q, r, . . . to stand for propositions.
•If we do this, we must define what we mean by writing
something like:
Let p be “Abiy Ahmed is prime minister”.
•Another alternative is to write something like reactor is on, so
that the interpretation of the propositional variable becomes
obvious.
Introduction to AI
The Connectives
•Now, the study of atomic propositions is pretty boring. We therefore
now introduce a number of connectives which will allow us to build
up complex propositions.
•The connectives we introduce are:
∧ and (& or .)
∨ or (| or +)
¬ not (∼)
⇒ implies (⊃ or →)
⇔ iff
•(Some books use other notations; these are given in parentheses.)
Introduction to AI
And
•Any two propositions can be combined to form a third
proposition called the conjunction of the original
propositions.
•Definition: If p and q are arbitrary propositions, then the
conjunction of p and q is written
p ∧ q
and will be true iff both p and q are true.
Introduction to AI
•We can summarise the operation of ∧ in a truth table. The idea of a
truth table for some formula is that it describes the behaviour of a
formula under all possible interpretations of the primitive
propositions the are included in the formula.
•If there are n different atomic propositions in some formula, then
there are 2n different lines in the truth table for that formula. (This is
because each proposition can take one 1 of 2 values — true or false.)
•Let us write T for truth, and F for falsity. Then the truth table for p ∧ q is:
p q p ∧ q
F F F
F T F
T F F
T T T
Introduction to AI
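Truth tables like the one above are easy to generate mechanically; the short Python sketch below (not part of the slides) prints the table for any formula given as a function of Boolean variables.

from itertools import product

def truth_table(formula, names):
    print(" ".join(names), "| formula")
    for values in product([False, True], repeat=len(names)):
        row = " ".join("T" if v else "F" for v in values)
        print(row, "|", "T" if formula(*values) else "F")

truth_table(lambda p, q: p and q, ["p", "q"])   # the table for p ∧ q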
Or
• Any two propositions can be combined by the word ‘or’ to form a third
proposition called the disjunction of the originals.
• Definition: If p and q are arbitrary propositions, then the disjunction of p and q is
written
p ∨ q
and will be true iff either p is true, or q is true, or both p and q are true.
The operation of ∨ is summarized in the following truth table:
p q p ∨ q
F F F
F T T
T F T
T T T
Introduction to AI
If. . . Then. . .
•Many statements, particularly in mathematics, are of the form:
if p is true then q is true.
Another way of saying the same thing is to write:
p implies q.
•In propositional logic, we have a connective that combines two
propositions into a new proposition called the conditional, or
implication of the originals, that attempts to capture the sense
of such a statement.
Introduction to AI
•Definition: If p and q are arbitrary propositions, then the
conditional of p and q is written
p ⇒ q
and will be true iff either p is false or q is true.
•The truth table for ⇒ is:
p q p ⇒ q
F F T
F T T
T F F
T T T
Introduction to AI
•The ⇒ operator is the hardest to understand of the operators we
have considered so far, and yet it is extremely important.
•If you find it difficult to understand, just remember that the p ⇒
q means ‘if p is true, then q is true’.
If p is false, then we don’t care about q, and by default, make p
⇒ q evaluate to T in this case.
•Terminology: if φ is the formula p ⇒ q, then p is the antecedent
of φ and q is the consequent.
Introduction to AI
Iff
•Another common form of statement in logic is:
p is true if, and only if, q is true.
•The sense of such statements is captured using the biconditional operator.
•Definition: If p and q are arbitrary propositions, then the biconditional of p and
q is written:
p ⇔ q
and will be true iff either:
1. p and q are both true; or
2. p and q are both false.
•The truth table for ⇔ is:
p q p ⇔ q
F F T
F T F
T F F
T T T
•If p ⇔ q is true, then p and q are said
to be logically equivalent. They will be
true under exactly the same circumstances.
Not
•All of the connectives we have considered so far have been binary: they have taken
two arguments.
•The final connective we consider here is
unary. It only takes one argument.
•Any proposition can be prefixed by the word ‘not’ to form a second proposition
called the negation of the original.
•Definition: If p is an arbitrary proposition then the negation of p is written
¬p
and will be true iff p is false.
•Truth table for ¬:
p ¬p
F T
T F
Introduction to AI
Comments
•We can nest complex formulae as deeply as we want.
•We can use parentheses i.e., ),(, to
disambiguate formulae.
•EXAMPLES. If p, q, r, s and t are atomic propositions, then all of
the following are formulae:
p ∧ q ⇒ r p ∧ (q ⇒ r)
(p ∧ (q ⇒ r)) ∨ s
((p ∧ (q ⇒ r)) ∨ s) ∧ t
whereas none of the following is:
p ∧
p ∧ q) p¬
Introduction to AI
Tautologies, Contradiction & Consistency
• When we consider formulae in terms of interpretations, it turns out that some have
interesting properties.
• Definition:
1. A formula is a tautology iff it is true under every valuation;
2. A formula is consistent iff it is true under at least one valuation;
3. A formula is inconsistent (a contradiction) iff it is not made true under any
valuation.
• Now, each line in the truth table of a formula corresponds to a valuation.
• So, we can use truth tables to determine whether or not formulae are tautologies,
contradictions, etc.
Introduction to AI
emnetzewdu2121@gmail.com
Introduction to AI 127
128
Propositional
Equivalences
CS/APMA 202, Spring 2005
Rosen, section 1.2
Aaron Bloomfield
129
Tautology and Contradiction
A tautology is a statement that is always true
 p ∨ ¬p will always be true (Negation Law)
A contradiction is a statement that is always
false
 p ∧ ¬p will always be false (Negation Law)
p p ∨ ¬p p ∧ ¬p
T T F
F T F
130
Logical Equivalence
A logical equivalence means that the two
sides always have the same truth values
 Symbol is ≡ or ⇔ (we’ll use ≡)
131
p  T ≡ p Identity law
p  F ≡ F Domination law
Logical Equivalences of And
p T pT
T T T
F T F
p F pF
T F F
F F F
132
p  p ≡ p Idempotent law
p  q ≡ q  p Commutative law
Logical Equivalences of And
p p pp
T T T
F F F
p q pq qp
T T T T
T F F F
F T F F
F F F F
133
(p  q)  r ≡ p  (q  r) Associative law
Logical Equivalences of And
p q r pq (pq)r qr p(qr)
T T T T T T T
T T F T F F F
T F T F F F F
T F F F F F F
F T T F F T F
F T F F F F F
F F T F F F F
F F F F F F F
134
p  T ≡ T Identity law
p  F ≡ p Domination law
p  p ≡ p Idempotent law
p  q ≡ q  p Commutative law
(p  q)  r ≡ p  (q  r) Associative law
Logical Equivalences of Or
135
Corollary of the Associative Law
(p  q)  r ≡ p  q  r
(p  q)  r ≡ p  q  r
Similar to (3+4)+5 = 3+4+5
Only works if ALL the operators are the same!
136
¬(¬p) ≡ p Double negation law
p  ¬p ≡ T Negation law
p  ¬p ≡ F Negation law
Logical Equivalences of Not
137
DeMorgan’s Law
Probably the most important logical equivalence
To negate p ∧ q (or p ∨ q), you “flip” the sign, and
negate BOTH p and q
 Thus, ¬(p ∧ q) ≡ ¬p ∨ ¬q
 Thus, ¬(p ∨ q) ≡ ¬p ∧ ¬q
p q ¬p ¬q p∧q ¬(p∧q) ¬p∨¬q p∨q ¬(p∨q) ¬p∧¬q
T T F F T F F T F F
T F F T F T T T F F
F T T F F T T T F F
F F T T F T T F T T
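Equivalences such as De Morgan's law can also be checked by brute force over all valuations; here is a small Python sketch (illustrative, not from the slides).

from itertools import product

def equivalent(f, g, n):
    # f and g take n Boolean arguments; they are equivalent iff they agree everywhere
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n))

print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # True: ¬(p∧q) ≡ ¬p∨¬q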
138
Yet more equivalences
Distributive:
p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
Absorption:
p ∧ (p ∨ q) ≡ p
p ∨ (p ∧ q) ≡ p
139
How to prove two propositions
are equivalent?
Two methods:
 Using truth tables
Not good for long formula
In this course, only allowed if specifically stated!
 Using the logical equivalences
The preferred method
Example:
 Show that (p → r) ∨ (q → r) ≡ (p ∧ q) → r
140
Using Truth Tables
p q r p→r q→r (p→r)∨(q→r) p∧q (p∧q)→r
T T T T T T T T
T T F F F F T F
T F T T T T F T
T F F F T T F T
F T T T T T F T
F T F T F T F T
F F T T T T F T
F F F T T T F T
The columns for (p→r)∨(q→r) and (p∧q)→r agree under every valuation, so
(p → r) ∨ (q → r) ≡ (p ∧ q) → r
141
Using Logical Equivalences
Show that (p → r) ∨ (q → r) ≡ (p ∧ q) → r:
(p → r) ∨ (q → r)                        Original statement
≡ (¬p ∨ r) ∨ (¬q ∨ r)                   Definition of implication
≡ ¬p ∨ ¬q ∨ r ∨ r                       Re-arranging (Associativity of Or)
≡ ¬p ∨ ¬q ∨ r                           Idempotent Law (r ∨ r ≡ r)
≡ ¬(p ∧ q) ∨ r                          DeMorgan’s Law
≡ (p ∧ q) → r                           Definition of implication
142
143
First-Order Logic
Pros and cons of propositional
logic
Propositional logic is declarative
 Propositional logic allows partial/disjunctive/negated information
 (unlike most data structures and databases)
Propositional logic is compositional:
 Meaning in propositional logic is context-independent
 (unlike natural language, where meaning depends on context)
 Propositional logic has very limited expressive power
145
First-order logic
Whereas propositional logic assumes the world contains facts,
first-order logic (like natural language) assumes the world contains
 Objects, which are things with individual identities
 Properties of objects that distinguish them from other objects
 Relations that hold among sets of objects
 Functions, which are a subset of relations where there is only
one “value” for any given “input”
Examples:
 Objects: Students, lectures, companies, cars ...
 Relations: Brother-of, bigger-than, outside, part-of, has-color,
occurs-after, owns, visits, precedes, ...
 Properties: blue, oval, even, large, ...
 Functions: father-of, best-friend, second-half, one-more-than ...
User provides
Constant symbols, which represent individuals in the
world
 Mary
 3
 Green
Function symbols, which map individuals to individuals
 father-of(Mary) = John
 color-of(Sky) = Blue
Predicate symbols, which map individuals to truth
values
 greater(5,3)
 green(Grass)
 color(Grass, Green)
FOL Provides
Variable symbols
 E.g., x, y, foo
Connectives
 Same as in PL: not (¬), and (∧), or (∨), implies (⇒), if
and only if (⇔, biconditional)
Quantifiers
 Universal: ∀x or (Ax)
 Existential: ∃x or (Ex)
Sentences are built from terms and atoms
A term (denoting a real-world individual) is a
constant symbol, a variable symbol, or an n-
place function of n terms.
x and f(x1, ..., xn) are terms, where each xi is a term.
A term with no variables is a ground term
An atomic sentence (which has value true or
false) is an n-place predicate of n terms
A complex sentence is formed from atomic
sentences connected by the logical
connectives:
¬P, P∧Q, P∨Q, P⇒Q, P⇔Q where P and Q are
sentences
Quantifiers
Universal quantification
 (∀x)P(x) means that P holds for all values of x in the
domain associated with that variable
 E.g., (∀x) dolphin(x) ⇒ mammal(x)
Existential quantification
 (∃x)P(x) means that P holds for some value of x in
the domain associated with that variable
 E.g., (∃x) mammal(x) ∧ lays-eggs(x)
 Permits one to make a statement about some object
without naming it
Quantifiers
Universal quantifiers are often used with
“implies” to form “rules”:
(∀x) student(x) ⇒ smart(x) means “All students are
smart”
Universal quantification is rarely used to make
blanket statements about every individual in the
world:
(∀x) student(x) ∧ smart(x) means “Everyone in the world
is a student and is smart”
Existential quantifiers are usually used with
“and” to specify a list of properties about an individual
Quantifier Scope
Switching the order of universal quantifiers
does not change the meaning:
 (∀x)(∀y)P(x,y) ↔ (∀y)(∀x)P(x,y)
Similarly, you can switch the order of
existential quantifiers:
 (∃x)(∃y)P(x,y) ↔ (∃y)(∃x)P(x,y)
Switching the order of universals and
existentials does change meaning:
 Everyone likes someone: (∀x)(∃y) likes(x,y)
 Someone is liked by everyone: (∃y)(∀x) likes(x,y)
Connections between All and Exists
We can relate sentences involving ∀ and ∃
using De Morgan’s laws:
(∀x) ¬P(x) ↔ ¬(∃x) P(x)
¬(∀x) P(x) ↔ (∃x) ¬P(x)
(∀x) P(x) ↔ ¬(∃x) ¬P(x)
(∃x) P(x) ↔ ¬(∀x) ¬P(x)
Translating English to FOL
Every gardener likes the sun.
(∀x) gardener(x) ⇒ likes(x, Sun)
You can fool some of the people all of the time.
(∃x)(∀t) person(x) ∧ time(t) ⇒ can-fool(x, t)
You can fool all of the people some of the time (the two forms below are equivalent).
(∀x)(∃t) (person(x) ⇒ time(t) ∧ can-fool(x, t))
(∀x) (person(x) ⇒ (∃t) (time(t) ∧ can-fool(x, t)))
All purple mushrooms are poisonous.
(∀x) (mushroom(x) ∧ purple(x)) ⇒ poisonous(x)
No purple mushroom is poisonous (the two forms below are equivalent).
¬(∃x) purple(x) ∧ mushroom(x) ∧ poisonous(x)
(∀x) (mushroom(x) ∧ purple(x)) ⇒ ¬poisonous(x)
There are exactly two purple mushrooms.
(∃x)(∃y) mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y) ∧ ¬(x=y) ∧
(∀z) (mushroom(z) ∧ purple(z)) ⇒ ((x=z) ∨ (y=z))
Clinton is not tall.
¬tall(Clinton)
X is above Y iff X is directly on top of Y or there is a pile of one or more
other objects directly on top of one another starting with X and ending
with Y.
155
emnetzewdu2121@gmail.com
Introduction to AI 156
Logical Equivalence
You might have noticed that the final column in the truth table
for ¬P∨Q is identical to the final column in the truth table
for P→Q:
P Q P→Q ¬P∨Q
T T T T
T F F F
F T T T
F F T T
This says that no matter what P and Q are, the statements ¬P∨Q and P→Q are either both
true or both false. We therefore say these statements are logically equivalent.
We can also prove this by showing, via a truth table, that (¬P∨Q) ↔ (P→Q) is a tautology.
emnetzewdu2121@gmail.com
Introduction to AI 157
Example
Prove that the statements ¬(P→Q) and P∧¬Q are logically
equivalent without using truth tables.
Chapter Next
Logical Agents
Russell and Norvig: Chapter 7
emnetzewdu2121@gmail.com
Introduction to AI 158
“Thinking Rationally”
Computational models of human “thought”
processes
Computational models of human behavior
Computational systems that “think” rationally
Computational systems that behave rationally
emnetzewdu2121@gmail.com
Introduction to AI 159
Logical Agents
Reflex agents find their way from Arad to Bucharest
by dumb luck
Chess program calculates legal moves of its king, but
doesn’t know that no piece can be on 2 different
squares at the same time
Logical(Knowledge-Based) agents combine general
knowledge with current percepts to infer hidden
aspects of current state prior to selecting actions
 Crucial in partially observable environments
emnetzewdu2121@gmail.com
Introduction to AI 160
Roadmap
Knowledge-based agents
Wumpus world
Logic in general
Propositional and first-order logic
 Inference, validity, equivalence and satisfiability
 Reasoning patterns
 Resolution
 Forward/backward chaining
emnetzewdu2121@gmail.com
Introduction to AI 161
Knowledge-Based Agent
(Architecture: the agent is connected to its environment through sensors and actuators;
internally, an inference engine (domain-independent algorithms) operates on a
knowledge base (domain-specific content).)
emnetzewdu2121@gmail.com
Introduction to AI 162
Knowledge Base
Knowledge Base: set of sentences represented in a
knowledge representation language and represents
assertions about the world.
Inference rule: when one ASKs questions of the KB,
the answer should follow from what has been
TELLed to the KB previously.
tell
ask
emnetzewdu2121@gmail.com
Introduction to AI 163
Generic KB-Based Agent
emnetzewdu2121@gmail.com
Introduction to AI 164
Abilities KB agent
Agent must be able to:
 Represent states and actions,
 Incorporate new percepts
 Update internal representation of the world
 Deduce hidden properties of the world
 Deduce appropriate actions
emnetzewdu2121@gmail.com
Introduction to AI 165
Description level
The KB agent is similar to agents with
internal state
Agents can be described at different levels
 Knowledge level
 What they know, regardless of the actual
implementation. (Declarative description)
 Implementation level
 Data structures in KB and algorithms that manipulate
them e.g propositional logic and resolution.
emnetzewdu2121@gmail.com
Introduction to AI 166
Types of Knowledge
Procedural, e.g.: functions
Such knowledge can only be used in
one way -- by executing it
Declarative, e.g.: constraints and rules
It can be used to perform many
different sorts of inferences
emnetzewdu2121@gmail.com
Introduction to AI 167
The Wumpus World
The Wumpus computer game
The agent explores a cave consisting of rooms
connected by passageways.
Lurking somewhere in the cave is the Wumpus, a
beast that eats any agent that enters its room.
Some rooms contain bottomless pits that trap any
agent that wanders into the room.
Occasionally, there is a heap of gold in a room.
The goal is to collect the gold and exit the world
without being eaten
emnetzewdu2121@gmail.com
Introduction to AI 168
Wumpus PEAS description
Performance measure:
gold +1000, death -1000,
-1 per step, -10 use arrow
Environment:
Squares adjacent to wumpus are stench
Squares adjacent to pit are breezy
Glitter iff gold is in the same square
Bump iff move into a wall
Woeful scream iff the wumpus is killed
Shooting kills wumpus if you are facing it
Shooting uses up the only arrow
Grabbing picks up gold if in same square
Releasing drops the gold in same square
Sensors: Stench, Breeze, Glitter, Bump, Scream
Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
emnetzewdu2121@gmail.com
Introduction to AI 169
A typical Wumpus world
The agent always
starts in [1,1].
The task of the
agent is to find the
gold, return to the
field [1,1] and climb
out of the cave.
emnetzewdu2121@gmail.com
Introduction to AI 170
Wumpus World Characteristics
Observable?
Deterministic?
Episodic or Sequential?
Static?
Discrete?
Single-agent?
emnetzewdu2121@gmail.com
Introduction to AI 171
Wumpus World Characterization
Observable?
 No, only local perception
Deterministic?
 Yes, outcome exactly specified
Episodic?
 No, sequential at the level of actions
Static?
 Yes, Wumpus and pits do not move
Discrete?
 Yes
Single-agent?
 Yes, Wumpus is essentially a natural feature.
emnetzewdu2121@gmail.com
Introduction to AI 172
The Wumpus agent’s first step
[1,1] The KB initially contains the rules of the environment. The first percept is [None,
None, None, None, None], so move to a safe cell, e.g. [2,1].
[2,1] A breeze is perceived, which indicates that there is a pit in [2,2] or [3,1]; return to [1,1]
to try the next safe cell.
emnetzewdu2121@gmail.com
Introduction to AI 173
Next….
[1,2] A stench is perceived, which means the wumpus is in [1,1], [2,2] or [1,3]
YET … not in [1,1], which is already known to be safe
YET … not in [2,2], or a stench would have been detected in [2,1]
THUS … the wumpus is in [1,3]
THUS [2,2] is safe because of the lack of breeze in [1,2]
THUS the pit is in [3,1]
Move to the next safe cell [2,2]
emnetzewdu2121@gmail.com
Introduction to AI 175
Then…
[2,2] move to [2,3]
[2,3] glitter, stench and breeze are detected
THUS pick up the gold
THUS there is a pit in [3,3] or [2,4]
emnetzewdu2121@gmail.com
Introduction to AI 176
What is a logic?
A formal language
 Syntax – what expressions are legal (well-formed)
 Semantics – what legal expressions mean
 in logic the truth of each sentence with respect to each possible
world.
E.g., the language of arithmetic
 x + 2 ≥ y is a sentence; x2 + y is not a sentence
 x + 2 ≥ y is true in a world where x = 7 and y = 1
 x + 2 ≥ y is false in a world where x = 0 and y = 6
emnetzewdu2121@gmail.com
Introduction to AI 177
Connection World-Representation
[Diagram: sentences in the agent's representation entail other sentences; through the conceptualization, sentences represent facts about the world W, and the facts that hold in W correspond to the entailed sentences.]
emnetzewdu2121@gmail.com
Introduction to AI 178
Entailment
One thing follows from another
KB ⊨ α
KB entails sentence α if and only if α is true
in all worlds where KB is true.
E.g., x + y = 4 entails 4 = x + y
Entailment is a relationship between
sentences that is based on semantics.
emnetzewdu2121@gmail.com
Introduction to AI 179
Models
Models are formal definitions of possible
states of the world
We say m is a model of a sentence α if α is
true in m
M(α) is the set of all models of α
Then KB ⊨ α if and only if M(KB) ⊆ M(α)
[Venn diagram: M(KB) is contained in M(α)]
emnetzewdu2121@gmail.com
Introduction to AI 180
Wumpus models
emnetzewdu2121@gmail.com
Introduction to AI 181
Wumpus models
KB = wumpus-world rules + observations
emnetzewdu2121@gmail.com
Introduction to AI 182
Wumpus models
KB = wumpus-world rules + observations
α1 = "[1,2] is safe", KB ⊨ α1, proved by model checking
emnetzewdu2121@gmail.com
Introduction to AI 183
Wumpus models
KB = wumpus-world rules + observations
emnetzewdu2121@gmail.com
Introduction to AI 184
Wumpus models
KB = wumpus-world rules + observations
α2 = "[2,2] is safe", KB ⊨ α2
emnetzewdu2121@gmail.com
Introduction to AI 185
Inference
KB ├i α = sentence α can be derived from KB by
procedure i
Soundness: i is sound if whenever KB ├i α, it is also true
that KB ⊨ α
Completeness: i is complete if whenever KB ⊨ α, it is also
true that KB ├i α
Preview: we will define a logic which is expressive
enough to say almost anything of interest, and for which
there exists a sound and complete inference procedure.
That is, the procedure will answer any question whose
answer follows from what is known by the KB.
(i is an algorithm that derives α from KB )
emnetzewdu2121@gmail.com
Introduction to AI 186
Propositional logic: Syntax
Propositional logic is the simplest logic – illustrates basic
ideas
The proposition symbols P1, P2 etc are sentences
 If S is a sentence, ¬S is a sentence (negation)
 If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
 If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
 If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
 If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)
emnetzewdu2121@gmail.com
Introduction to AI 187
Propositional logic: Semantics
Each model specifies true/false for each proposition symbol
E.g. P1,2 = false, P2,2 = true, P3,1 = false
With these three symbols, there are 8 possible models, which can be enumerated automatically.
Rules for evaluating truth with respect to a model m:
¬S is true iff S is false (Negation)
S1 ∧ S2 is true iff S1 is true and S2 is true (Conjunction)
S1 ∨ S2 is true iff S1 is true or S2 is true (Disjunction)
S1 ⇒ S2 is true iff S1 is false or S2 is true (Implication)
i.e., it is false iff S1 is true and S2 is false
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true (Biconditional)
Simple recursive process evaluates an arbitrary sentence, e.g.,
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
emnetzewdu2121@gmail.com
Introduction to AI 188
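As an illustration, a minimal Python sketch of this recursive evaluation, assuming sentences are encoded as nested tuples of the form (operator, arguments...) with bare strings for proposition symbols:

def pl_true(sentence, model):
    # A proposition symbol, e.g. "P12", is looked up directly in the model
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    if op == "=>":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "<=>":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator {op}")

# The example from the slide: ¬P1,2 ∧ (P2,2 ∨ P3,1) in the model P1,2=false, P2,2=true, P3,1=false
model = {"P12": False, "P22": True, "P31": False}
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(pl_true(s, model))   # True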
Truth tables for connectives
emnetzewdu2121@gmail.com
Introduction to AI 189
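For reference, the standard truth table for the five connectives (P and Q are arbitrary sentences):

P     Q     ¬P    P∧Q   P∨Q   P⇒Q   P⇔Q
false false true  false false true  true
false true  true  false true  true  false
true  false false false true  false false
true  true  false true  true  true  true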
Wumpus world sentences
Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if there is a breeze in [i, j].
R1: ¬P1,1
R4: ¬B1,1
R5: B2,1
"Pits cause breezes in adjacent squares"
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
(R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5) is the state of the Wumpus world
emnetzewdu2121@gmail.com
Introduction to AI 190
Truth tables for inference
emnetzewdu2121@gmail.com
Introduction to AI 191
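A minimal sketch of entailment by model checking (truth-table enumeration), reusing the pl_true evaluator and the tuple encoding sketched earlier:

from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB ⊨ alpha by enumerating every model over the given symbols."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False        # found a model of KB in which alpha is false
    return True                 # alpha holds in every model of KB

# e.g. R2 ∧ R4, with R2: B1,1 ⇔ (P1,2 ∨ P2,1) and R4: ¬B1,1, entails ¬P1,2
kb = ("and", ("<=>", "B11", ("or", "P12", "P21")), ("not", "B11"))
alpha = ("not", "P12")
print(tt_entails(kb, alpha, ["B11", "P12", "P21"]))   # True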
Order of Precedence
¬, ∧, ∨, ⇒, ⇔ (from highest to lowest)
Examples:
¬A ∧ B ⇒ C is equivalent to ((¬A) ∧ B) ⇒ C
emnetzewdu2121@gmail.com
Introduction to AI 192
Logical equivalence
Two sentences are logically equivalent iff true in the
same models: α ≡ β iff α ⊨ β and β ⊨ α
emnetzewdu2121@gmail.com
Introduction to AI 193
Validity & Satisfiability
A sentence is valid if it is true in all models,
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
 Tautologies are necessarily true statements
Validity is connected to inference via the Deduction Theorem:
KB ⊨ α if and only if (KB ⇒ α) is valid
A sentence is satisfiable if it is true in some model
A sentence is unsatisfiable if it is true in no models
e.g., A ∧ ¬A
Satisfiability is connected to inference via the following:
KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable
emnetzewdu2121@gmail.com
Introduction to AI 194
Rules of Inference
• Valid Rules of Inference:
 Modus Ponens
 And-Elimination
 And-Introduction
 Or-Introduction
 Double Negation
 Unit Resolution
 Resolution
emnetzewdu2121@gmail.com
Introduction to AI 195
Examples in Wumpus World
Modus Ponens: from α ⇒ β and α, infer β
(WumpusAhead ∧ WumpusAlive) ⇒ Shoot,
(WumpusAhead ∧ WumpusAlive)
⊢ Shoot
And-Elimination: from α ∧ β, infer α
(WumpusAhead ∧ WumpusAlive)
⊢ WumpusAlive
Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ
(WumpusDead ∨ WumpusAhead),
(¬WumpusAhead ∨ Shoot)
⊢ (WumpusDead ∨ Shoot)
emnetzewdu2121@gmail.com
Introduction to AI 196
Rules of Inference (continued)
And-Introduction: from α1, α2, …, αn, infer α1 ∧ α2 ∧ … ∧ αn
Or-Introduction: from αi, infer α1 ∨ α2 ∨ … ∨ αi ∨ … ∨ αn
Double Negation: from ¬¬α, infer α
Unit Resolution (special case of resolution): from α ∨ β and ¬β, infer α
emnetzewdu2121@gmail.com
Introduction to AI 197
Wumpus World KB
Proposition Symbols for each i,j:
 Let Pi,j be true if there is a pit in square i,j
 Let Bi,j be true if there is a breeze in square i,j
Sentences in KB
 “There is no pit in square 1,1”
R1: ¬P1,1
 “A square is breezy iff there is a pit in a neighboring square”
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P3,1 ∨ P2,2)
 “Square 1,1 has no breeze”, “Square 2,1 has a breeze”
R4: ¬B1,1
R5: B2,1
emnetzewdu2121@gmail.com
Introduction to AI 198
Inference in Wumpus World
Apply biconditional elimination to R2:
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
Apply And-Elimination to R6:
R7: ((P1,2 ∨ P2,1) ⇒ B1,1)
Contrapositive of R7:
R8: (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1))
Modus Ponens with R8 and R4 (¬B1,1):
R9: ¬(P1,2 ∨ P2,1)
De Morgan:
R10: ¬P1,2 ∧ ¬P2,1
emnetzewdu2121@gmail.com
Introduction to AI 199
Searching for Proofs
Finding proofs is exactly like finding solutions
to search problems.
Can search forward (forward chaining) to
derive goal or search backward (backward
chaining) from the goal.
In the worst case, searching for proofs is no more
efficient than enumerating models, but in many
practical cases it is more efficient, because the
proof can ignore irrelevant propositions
emnetzewdu2121@gmail.com
Introduction to AI 200
Full Resolution Rule Revisited
The full resolution rule is a generalization
of unit resolution.
For clauses of length two it takes the form:
ℓ1 ∨ ℓ2, ¬ℓ2 ∨ ℓ3 ⊢ ℓ1 ∨ ℓ3
emnetzewdu2121@gmail.com
Introduction to AI 201
Resolution Applied to Wumpus
World
R11: ¬B1,2.
R12: B1,2 ⇔(P1,1∨P2,2∨P1,3).
At some point we determine the absence
of a pit in square 2,2:
R13: ¬P2,2
Biconditional elimination applied to R3
followed by modus ponens with R5:
R15: P1,1 ∨ P2,2 ∨ P3,1
Resolve R15 and R13:
R16: P1,1 ∨ P3,1
Resolve R16 and R1:
R17: P3,1 (there is a pit in [3,1])
emnetzewdu2121@gmail.com
Introduction to AI 202
Resolution: Complete
Inference Procedure
Any complete search algorithm,
applying only the resolution rule, can
derive any conclusion entailed by any
knowledge base in propositional logic.
emnetzewdu2121@gmail.com
Introduction to AI 203
Cont.
The resolution rule applies only to clauses (that is,
disjunctions of literals), so it would seem to be
relevant only to knowledge bases and queries
consisting of clauses.
How, then, can it lead to a complete inference
procedure for all of propositional logic?
 The answer is that every sentence of propositional logic is
logically equivalent to a conjunction of clauses.
A sentence expressed as a conjunction of clauses is
said to be in conjunctive normal form or CNF
emnetzewdu2121@gmail.com
Introduction to AI 204
Conjunctive Normal Form
Conjunctive Normal Form is a conjunction
of clauses (a clause is a disjunction of literals).
Example:
(A  B   C) (B  D) ( A) (B  C)
clause
literals
emnetzewdu2121@gmail.com
Introduction to AI 205
converting the sentence
B1,1 ⇔(P1,2∨P2,1)
emnetzewdu2121@gmail.com
Introduction to AI 206
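The conversion follows the standard steps:
1. Eliminate ⇔: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards (De Morgan): (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Distribute ∨ over ∧: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)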
Resolution Algorithm
To show KB ⊨ α, we show that (KB ∧ ¬α) is
unsatisfiable.
This is a proof by contradiction.
First convert (KB ∧ ¬α) into CNF. Then apply the
resolution rule to the resulting clauses.
The process continues until:
 there are no new clauses that can be added
(KB does not entail α), or
 two clauses resolve to yield the empty clause,
in which case KB entails α.
emnetzewdu2121@gmail.com
Introduction to AI 207
Simple Inference in Wumpus
World
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
Prove ¬P1,2 by adding the negation P1,2
Convert (KB ∧ P1,2) to CNF
PL-RESOLUTION algorithm
emnetzewdu2121@gmail.com
Introduction to AI 208
Horn clauses and Definite clauses
Horn Form
 definite clause: a disjunction of literals of which exactly one is positive.
e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) is a definite clause, but (¬B1,1 ∨ P1,2 ∨ P2,1) is not
 Horn clause: a disjunction of literals of which at most one is positive.
e.g. (¬B2,2 ∨ B1,1) and (¬L1,1 ∨ ¬B2,2) are Horn clauses; (¬B1,1 ∨ P1,2 ∨ P2,1) is not
So all definite clauses are Horn clauses
KBs containing only definite clauses are interesting since:
 Every definite clause can be written as an implication.
e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) is equivalent to (L1,1 ∧ B2,2) ⇒ B1,1
Note: In Horn form, the premise is called the body and the conclusion is
called the head. A sentence consisting of a single positive literal,
such as L1,1, is called a fact.
emnetzewdu2121@gmail.com
Introduction to AI 209
 Inference with Horn clauses can be done through forward-chaining
and backward-chaining algorithms. Both are easily understandable
by humans, and this type of inference is the basis for logic programming.
 Deciding entailment with Horn clauses can be done in time that is linear
in the size of the knowledge base—a pleasant surprise
210
emnetzewdu2121@gmail.com
Introduction to AI
Forward Chaining
Fire any rule whose premises are
satisfied in the KB.
Add its conclusion to the KB, and repeat
until the query is found.
emnetzewdu2121@gmail.com
Introduction to AI 211
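A minimal Python sketch of forward chaining over definite clauses, with each rule encoded as a (premises, conclusion) pair and facts as bare symbols (the encoding is illustrative; the rule base is the one used in the example on the next slide):

from collections import deque

def fc_entails(rules, facts, query):
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}  # unsatisfied premises per rule
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:          # all premises satisfied: fire the rule
                    agenda.append(concl)
    return False

rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
         (["A", "P"], "L"), (["A", "B"], "L")]
print(fc_entails(rules, ["A", "B"], "Q"))   # True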
Forward Chaining Example
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 212
Backward Chaining
Motivation: Need goal-directed
reasoning in order to keep from getting
overwhelmed with irrelevant
consequences
Main idea:
 Work backwards from query q
 To prove q:
 Check if q is known already
 Prove by backward chaining all premises of
some rule concluding q
emnetzewdu2121@gmail.com
Introduction to AI 220
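A matching Python sketch of backward chaining over the same rule encoding; the visited set guards against looping on goals that are already being explored:

def bc_entails(rules, facts, query, visited=None):
    visited = visited or set()
    if query in facts:
        return True
    if query in visited:
        return False
    visited = visited | {query}
    # try every rule that concludes the query and prove all of its premises
    for prem, concl in rules:
        if concl == query and all(bc_entails(rules, facts, p, visited) for p in prem):
            return True
    return False

rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
         (["A", "P"], "L"), (["A", "B"], "L")]
print(bc_entails(rules, ["A", "B"], "Q"))   # True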
Backward Chaining Example
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 221
Forward Chaining vs.
Backward Chaining
FC is data-driven—it may do lots of
work irrelevant to the goal
BC is goal-driven—appropriate for
problem-solving
emnetzewdu2121@gmail.com
Introduction to AI 232
First-Order Logic
emnetzewdu2121@gmail.com
Introduction to AI 233
Russell and Norvig: Chapter 8
Outline
Why FOL?
Syntax and semantics of FOL
Using FOL
Wumpus world in FOL
emnetzewdu2121@gmail.com
Introduction to AI 234
Pros and cons of propositional
logic
 Propositional logic is declarative
 Propositional logic allows partial/disjunctive/negated
information
 (unlike most data structures and databases)
 Meaning in propositional logic is context-independent
 (unlike natural language, where meaning depends on
context)
 Propositional logic has very limited expressive power
 (unlike natural language)
emnetzewdu2121@gmail.com
Introduction to AI 235
First-order logic
Whereas propositional logic assumes the world
contains facts,
first-order logic (like natural language) assumes the
world contains
 Objects: people, houses, numbers, colors, baseball
games, wars, …
 Relations: red, round, prime, brother of, bigger
than, part of, comes between, …
 Functions: father of, best friend, one more than,
plus, …
emnetzewdu2121@gmail.com
Introduction to AI 236
Syntax of FOL: Basic elements
Constants KingJohn, 2, NUS,...
Predicates Brother, >,...
Functions Sqrt, LeftLegOf,...
Variables x, y, a, b,...
Connectives ¬, ∧, ∨, ⇒, ⇔
Equality =
Quantifiers ∀, ∃
emnetzewdu2121@gmail.com
Introduction to AI 237
Atomic sentences
Atomic sentence = predicate (term1,...,termn)
or term1 = term2
Term = function (term1,...,termn)
or constant or
variable
E.g., Brother(KingJohn, RichardTheLionheart)
>(Length(LeftLegOf(Richard)),
Length(LeftLegOf(KingJohn)))
emnetzewdu2121@gmail.com
Introduction to AI 238
Complex sentences
Complex sentences are made from
atomic sentences using connectives
¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2
E.g. Sibling(KingJohn,Richard) ⇒
Sibling(Richard,KingJohn)
>(1,2) ∨ ≤(1,2)
>(1,2) ∧ ¬>(1,2)
emnetzewdu2121@gmail.com
Introduction to AI 239
Truth in first-order logic
Sentences are true with respect to a model and an
interpretation
Model contains objects (domain elements) and relations among
them
Interpretation specifies referents for
constant symbols → objects
predicate symbols → relations
function symbols → functional relations
An atomic sentence predicate(term1,...,termn) is true
iff the objects referred to by term1,...,termn
are in the relation referred to by predicate
emnetzewdu2121@gmail.com
Introduction to AI 240
Models for FOL: Example
emnetzewdu2121@gmail.com
Introduction to AI 241
Universal quantification
∀<variables> <sentence>
Everyone at NUS is smart:
∀x At(x,NUS) ⇒ Smart(x)
∀x P is true in a model m iff P is true with x being
each possible object in the model
Roughly speaking, equivalent to the conjunction of
instantiations of P
(At(KingJohn,NUS) ⇒ Smart(KingJohn))
∧ (At(Richard,NUS) ⇒ Smart(Richard))
∧ (At(NUS,NUS) ⇒ Smart(NUS))
∧ ...
242
emnetzewdu2121@gmail.com
Introduction to AI
A common mistake to avoid
Typically, ⇒ is the main connective with ∀
Common mistake: using ∧ as the main
connective with ∀:
∀x At(x,NUS) ∧ Smart(x)
means “Everyone is at NUS and everyone is smart”
emnetzewdu2121@gmail.com
Introduction to AI 243
Existential quantification
∃<variables> <sentence>
Someone at NUS is smart:
∃x At(x,NUS) ∧ Smart(x)
∃x P is true in a model m iff P is true with x being
some possible object in the model
Roughly speaking, equivalent to the disjunction of
instantiations of P
(At(KingJohn,NUS) ∧ Smart(KingJohn))
∨ (At(Richard,NUS) ∧ Smart(Richard))
∨ (At(NUS,NUS) ∧ Smart(NUS))
∨ ...
emnetzewdu2121@gmail.com
Introduction to AI 244
Another common mistake to
avoid
Typically, ∧ is the main connective with ∃
Common mistake: using ⇒ as the main
connective with ∃:
∃x At(x,NUS) ⇒ Smart(x)
is true if there is anyone who is not at NUS!
emnetzewdu2121@gmail.com
Introduction to AI 245
Properties of quantifiers
x y is the same as y x
x y is the same as y x
x y is not the same as y x
x y Loves(x,y)
 “There is a person who loves everyone in the world”
y x Loves(x,y)
 “Everyone in the world is loved by at least one person”
Quantifier duality: each can be expressed using the other
x Likes(x,IceCream) x Likes(x,IceCream)
x Likes(x,Banana) x Likes(x,Banana)
emnetzewdu2121@gmail.com
Introduction to AI 246
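A small finite-domain illustration in Python of quantifier duality (the list of people and the Likes tables are made-up values for the example):

people = ["KingJohn", "Richard", "Mary"]
likes_icecream = {"KingJohn": True, "Richard": True, "Mary": True}
likes_banana = {"KingJohn": True, "Richard": False, "Mary": True}

forall = all(likes_icecream[x] for x in people)
not_exists_not = not any(not likes_icecream[x] for x in people)
print(forall == not_exists_not)     # True: ∀x P(x) ≡ ¬∃x ¬P(x)

exists = any(likes_banana[x] for x in people)
not_forall_not = not all(not likes_banana[x] for x in people)
print(exists == not_forall_not)     # True: ∃x P(x) ≡ ¬∀x ¬P(x)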
Equality
term1 = term2 is true under a given
interpretation if and only if term1 and term2
refer to the same object
E.g., definition of Sibling in terms of Parent:
∀x,y Sibling(x,y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f) ∧
Parent(m,x) ∧ Parent(f,x) ∧ Parent(m,y) ∧
Parent(f,y)]
emnetzewdu2121@gmail.com
Introduction to AI 247
Using FOL
The kinship domain:
Brothers are siblings
∀x,y Brother(x,y) ⇒ Sibling(x,y)
One's mother is one's female parent
∀m,c Mother(c) = m ⇔ (Female(m) ∧ Parent(m,c))
“Sibling” is symmetric
∀x,y Sibling(x,y) ⇔ Sibling(y,x)
emnetzewdu2121@gmail.com
Introduction to AI 248
Interacting with FOL KBs
Suppose a wumpus-world agent is using an FOL KB
and perceives a smell and a breeze (but no glitter) at
t=5:
Tell(KB,Percept([Smell,Breeze,None],5))
Ask(KB,a BestAction(a,5))
I.e., does the KB entail some best action at t=5?
Answer: Yes, {a/Shoot}
emnetzewdu2121@gmail.com
Introduction to AI 249
Knowledge base for the
wumpus world
Perception
 ∀t,s,b Percept([s,b,Glitter],t) ⇒ Glitter(t)
Reflex
 ∀t Glitter(t) ⇒ BestAction(Grab,t)
emnetzewdu2121@gmail.com
Introduction to AI 250
Deducing hidden properties
∀x,y,a,b Adjacent([x,y],[a,b]) ⇔
[a,b] ∈ {[x+1,y], [x-1,y], [x,y+1], [x,y-1]}
Properties of squares:
∀s,t At(Agent,s,t) ∧ Breeze(t) ⇒ Breezy(s)
Squares are breezy near a pit:
 Diagnostic rule---infer cause from effect
∀s Breezy(s) ⇒ ∃r Adjacent(r,s) ∧ Pit(r)
 Causal rule---infer effect from cause
∀r Pit(r) ⇒ [∀s Adjacent(r,s) ⇒ Breezy(s)]
emnetzewdu2121@gmail.com
Introduction to AI 251
Knowledge engineering in FOL
1. Identify the task
2. Assemble the relevant knowledge
3. Decide on a vocabulary of predicates,
functions, and constants
4. Encode general knowledge about the
domain
5. Encode a description of the specific problem
instance
6. Pose queries to the inference procedure and
get answers
7. Debug the knowledge base
emnetzewdu2121@gmail.com
Introduction to AI 252
Summary
First-order logic:
 objects and relations are semantic
primitives
 syntax: constants, functions, predicates,
equality, quantifiers
Increased expressive power: sufficient
to define wumpus world
emnetzewdu2121@gmail.com
Introduction to AI 253
Knowledge Representation
emnetzewdu2121@gmail.com
Introduction to AI 254
Russell and Norvig: Chapter 12
Categories and Objects
The organization of objects into categories is a vital part of KR.
Important relationships are
subclass relation (AKO - a kind of)
<category> AKO <category>.
instance relation ( ISA - is a)
<object> ISA <category>.
emnetzewdu2121@gmail.com
Introduction to AI 255
The upper ontology(*)
Anything
 AbstractObjects: Sets, Numbers, Representations (Categories, Sentences, Measurements: Times, Weights)
 GeneralizedEvents: Intervals (Moments), Places, PhysicalObjects (Things, Stuff), Processes
 Things include Animals and Agents (e.g. Humans); Stuff includes Solid, Liquid, Gas
(*) This is AIMA ’s version. Other authors have other partitions.
See bus_semantics
emnetzewdu2121@gmail.com
Introduction to AI 256
Bus semantics
% thing top node
unkn ako thing.
event ako thing.
set ako thing.
cardinality ako number.
member ako thing.
abstract ako thing.
activity ako thing.
agent ako thing.
company ako agent.
contact ako thing.
content ako thing.
group ako thing.
identity ako thing.
information ako thing.
list ako thing.
measure ako thing.
place ako thing.
story ako thing.
study ako activity.
subset ako thing.
version ako abstract.
accident ako activity.
activation ako activity. %% start of it
addition ako abstract.
address ako place.
advice ako abstract.
age ako year. %% (measure)
agreement ako abstract.
analysis ako abstract.
animate ako agent.
application ako activity.
area ako measure.
% …..
% + (750 ako items)
257
emnetzewdu2121@gmail.com
Introduction to AI
Categories
• Category is a kind of set and denotes a set of objects.
• A category has a set of properties that is common to all its members.
• Categories are formally represented in logic as predicates, but we will
also regard categories as a special kind of objects.
• We then actually introduce a restricted form of second order logic, since
the terms that occur may be predicates.
Example: Elephants and Mammals are categories.
The set denoted by Elephants is a subset of the set denoted by Mammals.
The set of properties common to Elephants is a superset of the set of properties
common to Mammals.
emnetzewdu2121@gmail.com
Introduction to AI 258
Taxonomy
• Subcategory relations organize categories into a taxonomy or
taxonomic hierarchy.
Other names are type hierarchy or class hierarchy .
• We state that a category is a subcategory of another category by
using the notation for subsets
Basketball ⊂ Ball
We will also use the notation
ako(basketball,ball).
emnetzewdu2121@gmail.com
Introduction to AI 259
Category representations
 There are two choices of representing categories in first order logic:
predicates and objects. That is, we can use the predicate Basketball(b)
or we can reify the category as an ”object” basketball. We could then
write
member(x,basketball) or
x ∈ basketball
 We will also use the notation
isa(x,basketball).
Basketball is a subset or subcategory of Ball, which is abbreviated
Basketball ⊂ Ball
We will also use the notation
ako(basketball,ball).
emnetzewdu2121@gmail.com
Introduction to AI 260
Inheritance
 Categories serve to organize and simplify the
knowledge base through inheritance.
e.g. If we say that all instances of Food are edible (edible is
in the property set of Food), and if we assert that Fruit
is a subcategory of Food, and Apple is a subcategory of
Fruit, then we know that every apple is edible.
i.e. We say that the individual apples inherit the property
of edibility, in this case from their membership in the
Food category.
emnetzewdu2121@gmail.com
Introduction to AI 261
Reifying properties
• An individual object may have a property.
For example, a specific ball, BB9 can be round.
In ordinary FOL, we write
Round(BB9).
As for categories, we can regard Round as higher order object, and
say
BB9 has the property Round
We will also use the notation
hasprop(BB9,round).
emnetzewdu2121@gmail.com
Introduction to AI 262
Reifying Property Values
Some properties are determined by an attribute and a value. For
example, my basketball BB9 has diameter 9.5:
Diameter(BB9 )=9.5
We can also use the notation
has(bb9,diameter,9.5).
An alternative representation for properties, when regarded as
Boolean attributes, is
has(BB9,round,true).
In the same manner, we can express that a red ball has colour = red.
has(BB9,colour,red).
emnetzewdu2121@gmail.com
Introduction to AI 263
Logical expressions on categories
• An object is a member of a category
isa(bb9,basketball).
• A category is a subclass of another category
ako(basketball,ball).
• All members of a category have some properties
isa(X,basketball) => hasprop(X,round).
• Members of a category can be recognized by some properties, for example:
hasprop(X,orange) and hasprop(X,round) and
has(X,diameter,9.5) and isa(X,ball) => isa(X,basketball)
• A category as a whole has some properties
isa(teacher,profession).
Here, it is a fallacy to conclude that
isa(tore,teacher) and isa(teacher,profession)=>isa(tore,profession).
264
emnetzewdu2121@gmail.com
Introduction to AI
Category Decompositions
• We can say that both Male and Female are subclasses of Animal,
but we have not said that a male cannot be a female. That is
expressed by
Disjoint({Male,Female})
• If we know that all animals are either male or female, (they
exhaust the possibilities)
Exhaustive({Male,Female},Animals).
• A disjoint exhaustive decomposition is known as a partition
Partition({Male,Female},Animals).
emnetzewdu2121@gmail.com
Introduction to AI 265
Physical Compositions
One object can be a part of another object.
Example, declaring direct parts
part(bucharest,romania).
part(romania,eastern_europe).
part(eastern_europe,europe).
part(europe,earth).
We can make a transitive extension partof
part(Y,Z) and partof(X,Y) => partof(X,Z).
and reflexive (*)
partof(X,X).
Therefore we can conclude that partof(bucharest,earth)
emnetzewdu2121@gmail.com Introduction to AI 266
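A small Python sketch of the same idea: the direct parts are stored as a set of (part, whole) pairs, and partof is computed as the reflexive, transitive closure (the recursion assumes the part relation is acyclic, as part-of hierarchies normally are):

part = {("bucharest", "romania"),
        ("romania", "eastern_europe"),
        ("eastern_europe", "europe"),
        ("europe", "earth")}

def partof(x, z, facts):
    # partof(X,X); part(Y,Z) and partof(X,Y) => partof(X,Z)
    if x == z:
        return True
    return any(partof(x, y, facts) for (y, w) in facts if w == z)

print(partof("bucharest", "earth", part))   # True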
Bunch
It is also useful to define composite objects with definite
parts but no particular structure. For example, we might say
”The apples in the bag weigh two pounds”
It is advised that we don’t regard these apples as the set of
(all) apples, but instead define them as a bunch of apples.
For example, if the apples are Apple1,Apple2 and Apple3,
then
BunchOf({Apple1,Apple2,Apple3})
denotes the composite object with three apples as parts, not
elements.
emnetzewdu2121@gmail.com
Introduction to AI 267
Substances and Objects
• The real world can be seen as consisting of primitive objects (particles) and
composite objects. A common characteristic is that they can be counted
(individuated)
However, there are objects that cannot be individuated like
Butter, Water, Nitrogen, Wine, Grass, etc.
They are called stuff, and are denoted in English without articles or quantifiers
(not ”a water”).
Typically, they can be divided without losing their intrinsic properties. When
we take a part of a substance, we still have the same substance.
isa(X,butter) and partof(Y,X)=>isa(Y,butter).
We can say that butter melts at 30 degrees centigrade
isa(X,butter) =>has(X,meltingpoint,30).
268
emnetzewdu2121@gmail.com
Introduction to AI
Measures, Abstracts ,Mentals
• In the common world, objects have height, mass, cost and so on. The
values we assign for these properties are called measures.
• We imagine that the universe includes abstract ”measure objects” such as
length. Measure objects are given as a number of units, e.g. meters. Logically,
we can combine this with unit functions
Length(L1) = Inches(1.5) = Centimeters(3.81)
• Another way is to use predicates
Length(L1,1.5,inches)
Abstract concepts like ”autonomy”, ”quality” are difficult to represent
without seeking artificial measurements. (e.g. IQ).
Mental concepts are beliefs, thoughts, feelings etc.
emnetzewdu2121@gmail.com
Introduction to AI 269
Reasoning systems for categories
Semantic networks and Description Logics are two closely
related systems for reasoning with categories. Both can be
described using logic.
•Semantic networks provide graphical aids of visualizing the
knowledge base, together with efficient algorithms for inferring
properties of an object on the basis of its category
membership.
•Description logics provide a formal language for
constructing and combining category definitions, and efficient
algorithms for deciding subsets and superset relationships
between categories.
emnetzewdu2121@gmail.com
Introduction to AI 270
Semantic Networks Example
[Figure: a semantic network. Person ako Mammal; Male ako Person; Female ako Person; Person HaveMother Female; Person Legs 2; John Legs 1; Mary isa Female; John isa Male; John sister Mary; Mary brother John.]
emnetzewdu2121@gmail.com
Introduction to AI 271
Link types in semantic nets
There are 3 types of entities in a semantic nets
categories, objects and values (other than these)
Then there could be 9 different types of relations between these. They are drawn
with certain conventions. Note that objects can act as values also.
category → category: ako(C1,C2), every C1 is a kind of C2
category → category: haveatt(C1,R,C2), every C1 has an R that is a kind of C2
category → value: have(C1,R,V), every C1 has attribute value R = V
object → category: isa(O,C), O is a C
object → object: has(O1,R,O2), O1 has relation R to O2
object → value: has(O,R,V), O has attribute value R = V
object → object: partof(O1,O2), O1 is a part of O2
In addition, we have all kinds of relations between values,
value → value: e.g. V1 > 2*V2 + 5
272
emnetzewdu2121@gmail.com
Introduction to AI
Further comments on link types
We know that persons have female persons as mothers, but we cannot draw
a HasMother link from Persons to FemalePersons because HasMother is a
relation between a person and his or her mother, and categories do not have
mothers. For this reason, we use a special notation – the double boxed link.
In logic, we have given it the name haveatt, e.g.
haveatt(person,mother,femaleperson).
Compare this to
haveatt(lion,mother,lioness).
We also want to express that persons normally have two legs. As before, we
must be careful not to assert that categories have legs; instead we use a
single-boxed link. In logic, we have given it the name have, e.g.
have(person,legs,2).
emnetzewdu2121@gmail.com
Introduction to AI 273
Content of semantic net
A paraphrase of the
knowledge
All persons are mammals
All females are persons
Persons have a mother who is female
Persons have normally 2 legs
John has 1 leg
Mary isa female
John is a male
John has a sister Mary
Mary has a brother John
Logic representation
ako(person,mammal).
ako(female,person).
haveatt(person,mother,female).
have(person,legs,2).
has(john,legs,1).
isa(mary,female).
isa(john,male).
has(john,sister,mary).
has(mary,brother,john).
274
emnetzewdu2121@gmail.com
Introduction to AI
Inheritance and inference in semantic nets
The rules of inheritance can now be automated using our logic
representation of semantic nets.
isa(X,Y) and ako(Y,Z) => isa(X,Z).
have(Y,R,V) and isa(X,Y) => has(X,R,V).
haveatt(C,R,M) and isa(X,C) and has(X,R,V) =>
isa(V,M).
With these definitions, we can prove that Mary has two legs, even if
this information is not explicitly represented.
emnetzewdu2121@gmail.com
Introduction to AI 275
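A Python sketch of these inheritance rules over simple fact tuples (only the ako, isa, have and has relations are modelled; letting an explicit has fact override an inherited have default is a design choice of this sketch, matching the “normally” reading above):

ako = {("person", "mammal"), ("female", "person"), ("male", "person")}
isa_facts = {("mary", "female"), ("john", "male")}
have = {("person", "legs", 2)}          # default properties of a category
has_facts = {("john", "legs", 1)}       # explicit properties of objects

def ako_star(cat):
    """All categories that cat is a kind of, including cat itself (ako links)."""
    result = {cat}
    for sub, sup in ako:
        if sub == cat:
            result |= ako_star(sup)
    return result

def isa(obj, cat):
    # isa(X,Y) and ako(Y,Z) => isa(X,Z)
    return any(cat in ako_star(direct) for o, direct in isa_facts if o == obj)

def has(obj, attr):
    for o, r, v in has_facts:           # explicit facts about the object
        if o == obj and r == attr:
            return v
    for c, r, v in have:                # have(Y,R,V) and isa(X,Y) => has(X,R,V)
        if r == attr and isa(obj, c):
            return v
    return None

print(isa("mary", "person"))   # True
print(has("mary", "legs"))     # 2, inherited from person
print(has("john", "legs"))     # 1, the explicit fact wins over the default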
Example of inheritance
isa(X,Y)and ako(Y,Z)=>isa(X,Z)
have(X,Y,Z)and isa(A1,X)=>has(A1,Y,Z)
haveatt(X,Y,C) and isa(A1,X) and
has(A1,Y,V) => isa(V,C).
t=>ako(female,person)
t=>ako(male,person)
t=>haveatt(person,mother,female)
t=>have(person,legs,2)
t=>isa(mary,female)
t=>isa(john,male)
t=>has(john,legs,1)
t=>has(john,sister,mary)
t=>has(mary,brother,john)
PROOF:
has(mary,legs,2) because
have(person,legs,2) and
isa(mary,person)
have(person,legs,2) is true
isa(mary,person) because
isa(mary,female) and
ako(female,person)
isa(mary,female) is true
ako(female,person) is true
emnetzewdu2121@gmail.com
Introduction to AI 276
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent
Introduction part of Artificial Intelligent

More Related Content

Similar to Introduction part of Artificial Intelligent

901470_Chap1.ppt about to Artificial Intellgence
901470_Chap1.ppt about to Artificial Intellgence901470_Chap1.ppt about to Artificial Intellgence
901470_Chap1.ppt about to Artificial Intellgence
chougulesup79
 

Similar to Introduction part of Artificial Intelligent (20)

901470_Chap1.ppt
901470_Chap1.ppt901470_Chap1.ppt
901470_Chap1.ppt
 
901470_Chap1.ppt
901470_Chap1.ppt901470_Chap1.ppt
901470_Chap1.ppt
 
AI_Intro1.ppt
AI_Intro1.pptAI_Intro1.ppt
AI_Intro1.ppt
 
Chap1.ppt
Chap1.pptChap1.ppt
Chap1.ppt
 
901470_Chap1.ppt
901470_Chap1.ppt901470_Chap1.ppt
901470_Chap1.ppt
 
901470_Chap1.ppt
901470_Chap1.ppt901470_Chap1.ppt
901470_Chap1.ppt
 
901470_Chap1.ppt
901470_Chap1.ppt901470_Chap1.ppt
901470_Chap1.ppt
 
901470_Chap1.ppt about to Artificial Intellgence
901470_Chap1.ppt about to Artificial Intellgence901470_Chap1.ppt about to Artificial Intellgence
901470_Chap1.ppt about to Artificial Intellgence
 
artificial intelligence basis-introduction
artificial intelligence basis-introductionartificial intelligence basis-introduction
artificial intelligence basis-introduction
 
901470 chap1
901470 chap1901470 chap1
901470 chap1
 
901470_Chap1 (1).ppt
901470_Chap1 (1).ppt901470_Chap1 (1).ppt
901470_Chap1 (1).ppt
 
Artificial Intelligence for Business.ppt
Artificial Intelligence for Business.pptArtificial Intelligence for Business.ppt
Artificial Intelligence for Business.ppt
 
UNIT I - AI.pptx
UNIT I - AI.pptxUNIT I - AI.pptx
UNIT I - AI.pptx
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
MODULE -1-FINAL (3).pptx
MODULE -1-FINAL (3).pptxMODULE -1-FINAL (3).pptx
MODULE -1-FINAL (3).pptx
 
Lect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdfLect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdf
 
introduction to Artificial Intelligence for computer science
introduction to Artificial Intelligence for computer scienceintroduction to Artificial Intelligence for computer science
introduction to Artificial Intelligence for computer science
 
AGI Part 1.pdf
AGI Part 1.pdfAGI Part 1.pdf
AGI Part 1.pdf
 
Lect 01, 02
Lect 01, 02Lect 01, 02
Lect 01, 02
 
Artificial intelligence_ class 12 KATHIR.pptx
Artificial intelligence_ class 12  KATHIR.pptxArtificial intelligence_ class 12  KATHIR.pptx
Artificial intelligence_ class 12 KATHIR.pptx
 

Recently uploaded

Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
FIDO Alliance
 

Recently uploaded (20)

JavaScript Usage Statistics 2024 - The Ultimate Guide
JavaScript Usage Statistics 2024 - The Ultimate GuideJavaScript Usage Statistics 2024 - The Ultimate Guide
JavaScript Usage Statistics 2024 - The Ultimate Guide
 
Easier, Faster, and More Powerful – Notes Document Properties Reimagined
Easier, Faster, and More Powerful – Notes Document Properties ReimaginedEasier, Faster, and More Powerful – Notes Document Properties Reimagined
Easier, Faster, and More Powerful – Notes Document Properties Reimagined
 
Introduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptxIntroduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptx
 
الأمن السيبراني - ما لا يسع للمستخدم جهله
الأمن السيبراني - ما لا يسع للمستخدم جهلهالأمن السيبراني - ما لا يسع للمستخدم جهله
الأمن السيبراني - ما لا يسع للمستخدم جهله
 
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
 
Event-Driven Architecture Masterclass: Integrating Distributed Data Stores Ac...
Event-Driven Architecture Masterclass: Integrating Distributed Data Stores Ac...Event-Driven Architecture Masterclass: Integrating Distributed Data Stores Ac...
Event-Driven Architecture Masterclass: Integrating Distributed Data Stores Ac...
 
AI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by Anitaraj
 
Cyber Insurance - RalphGilot - Embry-Riddle Aeronautical University.pptx
Cyber Insurance - RalphGilot - Embry-Riddle Aeronautical University.pptxCyber Insurance - RalphGilot - Embry-Riddle Aeronautical University.pptx
Cyber Insurance - RalphGilot - Embry-Riddle Aeronautical University.pptx
 
2024 May Patch Tuesday
2024 May Patch Tuesday2024 May Patch Tuesday
2024 May Patch Tuesday
 
UiPath manufacturing technology benefits and AI overview
UiPath manufacturing technology benefits and AI overviewUiPath manufacturing technology benefits and AI overview
UiPath manufacturing technology benefits and AI overview
 
State of the Smart Building Startup Landscape 2024!
State of the Smart Building Startup Landscape 2024!State of the Smart Building Startup Landscape 2024!
State of the Smart Building Startup Landscape 2024!
 
Design and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data ScienceDesign and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data Science
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
Intro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptxIntro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptx
 
Design Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptxDesign Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptx
 
ChatGPT and Beyond - Elevating DevOps Productivity
ChatGPT and Beyond - Elevating DevOps ProductivityChatGPT and Beyond - Elevating DevOps Productivity
ChatGPT and Beyond - Elevating DevOps Productivity
 
Working together SRE & Platform Engineering
Working together SRE & Platform EngineeringWorking together SRE & Platform Engineering
Working together SRE & Platform Engineering
 
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
 
Generative AI Use Cases and Applications.pdf
Generative AI Use Cases and Applications.pdfGenerative AI Use Cases and Applications.pdf
Generative AI Use Cases and Applications.pdf
 
How we scaled to 80K users by doing nothing!.pdf
How we scaled to 80K users by doing nothing!.pdfHow we scaled to 80K users by doing nothing!.pdf
How we scaled to 80K users by doing nothing!.pdf
 

Introduction part of Artificial Intelligent

  • 2. Goals/Objectives of AI The goal of research in Artificial Intelligence is to develop a technology whereby computers behave intelligently, for example:  Intelligent behavior by simulated persons or other actors in a computer game  Mobile robots in hostile or dangerous environments that behave intelligently  Intelligent responses to user requests in information search systems such as Google or Wikipedia  Process control systems that supervise complex processes and that are able to respond intelligently to a broad variety of malfunctioning and other unusual situations.  Intelligent execution of user requests in the operating system and the basic services (document handling, electronic mail) of standard computers. Generally :- is to create intelligent machines. But what does it mean to be intelligent? emnetzewdu2121@gmail.com Introduction to AI 2
  • 3. What is AI? Systems that act rationally Systems that think like humans Systems that think rationally Systems that act like humans THOUGHT BEHAVIOUR HUMAN RATIONAL emnetzewdu2121@gmail.com Introduction to AI 3
  • 4. Systems that act like humans: Turing Test approach “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990) “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991) emnetzewdu2121@gmail.com Introduction to AI 4
  • 5. Turing test approach: ?  You enter a room which has a computer terminal. You have a fixed period of time to type what you want into the terminal, and study the replies. At the other end of the line is either a human being or a computer system.  If it is a computer system, and at the end of the period you cannot reliably determine whether it is a system or a human, then the system is deemed to be intelligent. ???????? emnetzewdu2121@gmail.com Introduction to AI 5
  • 6. Cont.… To pass the Turing test the machine needs:  Natural language processing :- for communication with human  Knowledge representation :- to store information effectively & efficiently  Automated reasoning :- to retrieve & answer questions using the stored information  Machine learning:- to adapt to new circumstances OR To pass the total Turing Test, the computer will need  computer vision to perceive objects(Seeing)  robotics to manipulate objects and move about(acting) emnetzewdu2121@gmail.com Introduction to AI 6
  • 7. Systems that think like humans(thinking humanly): The Cognitive Modelling approach “The exciting new effort to make computers think . . . machines with minds, in the full and literal sense.” (Haugeland, 1985) “[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978). Humans as observed from ‘inside’, How do we know how humans think?  Introspection vs. psychological experiments Cognitive Science  The interdisciplinary field , brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind. emnetzewdu2121@gmail.com Introduction to AI 7
  • 8. Systems that think ‘rationally’ "laws of thought“ approach “The study of mental facilities through the use of computational models” (Charniak and McDermott) “The study of the computations that make it possible to perceive, reason, and act” (Winston)  Humans are not always ‘rational’.  Rational - defined in terms of logic?  Logic can’t express everything (e.g. uncertainty)  Logical approach is often not feasible in terms of computation time (needs ‘guidance’). emnetzewdu2121@gmail.com Introduction to AI 8
  • 9. Systems that act rationally: rational agent approach “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998) “AI . . . is concerned with intelligent behavior in artifacts.” (Nilsson, 1998) Rational behavior: doing the right thing The right thing: that which is expected to maximize goal achievement, given the available information Giving answers to questions is ‘acting’. We don’t care whether a system:  replicates human thought processes  makes the same decisions as humans  uses purely logical reasoning emnetzewdu2121@gmail.com Introduction to AI 9
  • 10. Cont. Study AI as rational agent – 2 advantages:  It is more general than using logic only Because: LOGIC + Domain knowledge  It allows extension of the approach with more scientific methodologies emnetzewdu2121@gmail.com Introduction to AI 10
  • 11. Rational agents  An agent is an entity that perceives and acts  Abstractly, an agent is a function from percept histories to actions: [f: P*  A]  For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance  Limitation: computational limitations make perfect rationality unachievable   design best program for given machine resources  Artificial  Produced by human art or effort, rather than originating naturally. • Intelligence  is the ability to acquire knowledge and use it" [Pigford and Baur] emnetzewdu2121@gmail.com Introduction to AI 11
  • 12. Cont. So AI was defined as:  AI is the study of ideas that enable computers to be intelligent.  AI is the part of computer science concerned with design of computer systems that exhibit human intelligence(From the Concise Oxford Dictionary) From the above two definitions, we can see that AI has two major roles:  Study the intelligent part concerned with humans.  Represent those actions using computers. emnetzewdu2121@gmail.com Introduction to AI 12
  • 13. Group Assignment The advancement on turing test(Arguments on turing test) The foundation of AI The History of AI The state of the art(Current status of AI) emnetzewdu2121@gmail.com Introduction to AI 13
  • 14. Intelligent Agents Agents and Environments What is an agent?  An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators/ effectors to maximize progress towards its goal What is Intelligent Agent?  Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy and in so doing, employ some knowledge or representation of the user’s goals or desires.” (“The IBM Agent”)  PAGE (Percepts, Actions, Goals, Environment)description emnetzewdu2121@gmail.com Introduction to AI 14
  • 15. Cont. Percepts Actions Goals Environment information acquired through the agent’s sensory system operations performed by the agent on the environment through its actuators desired outcome of the task with a measurable performance surroundings beyond the control of the agent emnetzewdu2121@gmail.com Introduction to AI 15
  • 17. Examples of Agents  human agent  eyes, ears, skin, taste buds, etc. for sensors  hands, fingers, legs, mouth, etc. for actuators  powered by muscles  robot  camera, infrared, bumper, etc. for sensors  grippers, wheels, lights, speakers, etc. for actuators  often powered by motors  software agent  functions as sensors  information provided as input to functions in the form of encoded bit strings or symbols  functions as actuators  results deliver the output emnetzewdu2121@gmail.com Introduction to AI 17
  • 18. Cont. Mathematically speaking, we say that an agent’s behavior is described by the agent function that maps any given percept sequence to an action [f: P*  A] The agent program runs on the physical architecture to produce f the agent function for an artificial agent will be implemented by an agent program.  The agent function is an abstract mathematical description;  the agent program is a concrete implementation, running within some physical system Example: The vacuum-cleaner emnetzewdu2121@gmail.com Introduction to AI 18
  • 19. Percepts: location and contents, e.g., [A,Dirty] Actions: Left, Right, Suck, NoOp emnetzewdu2121@gmail.com Introduction to AI 19
  • 20. THE CONCEPT OF RATIONALITY A rational agent is one that does the right thing— conceptually speaking, every entry in the table for the agent function is filled out correctly. The right action is the one that will cause the agent to be most successful Q:- what does it mean to do the right thing?  It may depend on the consequences of the agent’s behavior  Sequence of actions based the percepts it received.  Causes the env’t to go through sequence of states.  If the sequence is desirable, then the agent has performed well.  Desirability is capture by Performance measure which evaluate any given sequence of env’t states. emnetzewdu2121@gmail.com Introduction to AI 20
  • 21. Cont. Example:Performance measure for vacuum cleaner. by the amount of dirt cleaned up in a single eight- hour shift reward the agent for having a clean floor. As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. Since : With a rational agent, of course, what you ask for is what you get and A rational agent can maximize this performance measure by cleaning up the dirt, then dumping it all on the floor, then cleaning it up again, and so on. emnetzewdu2121@gmail.com Introduction to AI 21
  • 22. Rationality What is rational at any given time depends on four things: • The performance measure that defines the criterion of success. • The agent’s prior knowledge of the environment. • The actions that the agent can perform. • The agent’s percept sequence to date This leads to a definition of a rational agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. emnetzewdu2121@gmail.com Introduction to AI 22
  • 23. Cont. Notes: Rationality is distinct from omniscience (“all knowing”). We can behave rationally even when faced with incomplete information. Agents can perform actions in order to modify future percepts so as to obtain useful information: information gathering, exploration. An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt). emnetzewdu2121@gmail.com Introduction to AI 23
  • 24. Characterizing a Task Environment Must first specify the setting for intelligent agent design/Rational agent  PEAS: Performance measure, Environment, Actuators, Sensors Performance Measures Environment Actuators Sensors used to evaluate how well an agent solves the task at hand surroundings beyond the control of the agent determine the actions the agent can perform provide information about the current state of the environment emnetzewdu2121@gmail.com Introduction to AI 24
  • 25. Cont. Consider, e.g., the task of designing an automated taxi driver:  Performance measure: Safe, fast, legal, comfortable trip, maximize profits  Environment: Roads, other traffic, pedestrians, customers  Actuators: Steering wheel, accelerator, brake, signal, horn  Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard emnetzewdu2121@gmail.com Introduction to AI 25
  • 26. Cont. Agent: Medical diagnosis system  PEAS??? Performance measure: Healthy patient, minimize costs, lawsuits Environment: Patient, hospital, staff Actuators: Screen display (questions, tests, diagnoses, treatments, referrals) Sensors: Keyboard (entry of symptoms, findings, patient's answers) emnetzewdu2121@gmail.com Introduction to AI 26
  • 27. Cont. Agent: Interactive English tutor Performance measure: Maximize student's score on test Environment: Set of students Actuators: Screen display (exercises, suggestions, corrections) Sensors: Keyboard emnetzewdu2121@gmail.com Introduction to AI 27
  • 28. Recap Agents and Environment  PAGE- agent description Rationality  PEAS- task environment description emnetzewdu2121@gmail.com Introduction to AI 28
  • 29. Properties of Task Environment  The range of the task environment is obviously vast. However, we can identify some dimensions along to categorize the task environment fairly.  These dimensions determine:  To a large extent  The appropriate agent design  The applicability of each of the principal families of techniques for agent implementation. Fully observable vs. partially observable:  An agent's sensors give it access to the complete state of the environment at each point in time. i.e. the sensor detect all aspects that are relevant to the choice of action. whereas Relevance depends on the performance measure.  Partially observable because of noisy and inaccurate sensors or Parts of the state are simply missing from the sensor data. emnetzewdu2121@gmail.com Introduction to AI 29
  • 30. Cont. Deterministic vs. stochastic:  If next state of the environment is completely determined by the current state and the action executed by the agent, otherwise, it is stochastic. (If the environment is deterministic except for the actions of other agents, then the environment is strategic) Episodic vs. sequential:  The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.  If the agent need to think ahead; the current decision could affect the future decision(sequential environments). i.e. episodic environment is much simpler than sequential environments; no need of think emnetzewdu2121@gmail.com Introduction to AI 30
  • 31. Cont. Static vs. dynamic:  If the environment is unchanged while an agent is deliberating then, the env’t is static for that agent otherwise dynamic. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Dynamic environments are continuously asking the agent what it wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.  The environment is semi dynamic if the environment itself does not change with the passage of time but the agent's performance score does emnetzewdu2121@gmail.com Introduction to AI 31
  • 32. Cont. Discrete vs. continuous:  A limited number of distinct, clearly defined percepts and actions. e.g. chess has discrete states, percepts and actions taxi driving is continuous state and continuous time problem Single agent vs. multiagent:  An agent operating by itself in an environment. emnetzewdu2121@gmail.com Introduction to AI 32
  • 33. Structure of the Agent emnetzewdu2121@gmail.com Introduction to AI 33  Agent’s strategy is a mapping from percept sequence to action  How to encode an agent’s strategy?  Long list of what should be done for each possible percept sequence  vs. shorter specification (e.g. algorithm)
function SKELETON-AGENT(percept) returns action
  static: memory, the agent’s memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
On each invocation, the agent’s memory is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in the memory. The memory persists from one invocation to the next.
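A minimal Python sketch of the same skeleton; the class name and the two helper methods are illustrative placeholders, not part of the original pseudocode:

# Hypothetical sketch of the SKELETON-AGENT loop in Python.
class SkeletonAgent:
    def __init__(self):
        self.memory = {}                      # persists across invocations

    def update_memory(self, percept=None, action=None):
        # Placeholder: record the latest percept and/or action.
        if percept is not None:
            self.memory['last_percept'] = percept
        if action is not None:
            self.memory['last_action'] = action

    def choose_best_action(self):
        # Placeholder decision rule; a real agent's strategy goes here.
        return 'NoOp'

    def __call__(self, percept):
        self.update_memory(percept=percept)   # memory <- UPDATE-MEMORY(memory, percept)
        action = self.choose_best_action()    # action <- CHOOSE-BEST-ACTION(memory)
        self.update_memory(action=action)     # memory <- UPDATE-MEMORY(memory, action)
        return action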
  • 34. emnetzewdu2121@gmail.com Introduction to AI 34 Agent types I) Table-lookup driven agents II) Simple reflex agents III)Model-based reflex agents IV) Goal-based agents V) Utility-based agents VI)Learning agents
  • 35. Table-lookup driven agents Uses a percept sequence / action table in memory to find the next action. Implemented as a (large) lookup table. Drawbacks: – Huge table (often simply too large) – Takes a long time to build/learn the table emnetzewdu2121@gmail.com Introduction to AI 35
  • 36. emnetzewdu2121@gmail.com Introduction to AI 36 Toy example: Vacuum world. Percepts: robot senses its location and “cleanliness.” So, location and contents, e.g., [A, Dirty], [B, Clean]. With 2 locations, we get 4 different possible sensor inputs. Actions: Left, Right, Suck, NoOp
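A toy table-driven agent for this vacuum world can be sketched in Python as below; for readability the table here is keyed on the current percept only, whereas a true table-driven agent indexes the entire percept sequence, which is exactly why its table blows up:

# Hypothetical table-driven vacuum agent (locations 'A' and 'B' as above).
table = {
    ('A', 'Clean'): 'Right',
    ('A', 'Dirty'): 'Suck',
    ('B', 'Clean'): 'Left',
    ('B', 'Dirty'): 'Suck',
}

def table_driven_agent(percept):
    # Look the percept up in the table; fall back to NoOp if it is missing.
    return table.get(percept, 'NoOp')

print(table_driven_agent(('A', 'Dirty')))   # -> Suck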
  • 37. Simple reflex agents Agents do not have memory of past world states or percepts. So, actions depend only on current percept. Action becomes a “reflex.” Uses condition-action rules. emnetzewdu2121@gmail.com Introduction to AI 37
  • 38. emnetzewdu2121@gmail.com Introduction to AI 38 Agent selects actions on the basis of current percept only. If tail-light of car in front is red, then brake.
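A minimal Python sketch of a simple reflex agent implementing the braking rule above (the percept format is an assumption for illustration):

# Hypothetical simple reflex agent: condition-action rules over the current percept only.
def simple_reflex_driver(percept):
    if percept.get('tail_light_in_front') == 'red':
        return 'brake'
    return 'keep-going'

print(simple_reflex_driver({'tail_light_in_front': 'red'}))   # -> brake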
  • 39. Model-based reflex agents Key difference (wrt simple reflex agents):  Agents have internal state, which is used to keep track of past states of the world.  Agents have the ability to represent change in the World. emnetzewdu2121@gmail.com Introduction to AI 39
  • 40. emnetzewdu2121@gmail.com Introduction to AI 40 “Infers potentially dangerous driver in front.” If “dangerous driver in front,” then “keep distance.”
  • 41. Goal-based agents Key difference wrt Model-Based Agents: In addition to state information, have goal information that describes desirable situations to be achieved. Agents of this kind take future events into consideration. What sequence of actions can I take to achieve certain goals? Choose actions so as to (eventually) achieve a (given or computed) goal. emnetzewdu2121@gmail.com Introduction to AI 41
  • 42. emnetzewdu2121@gmail.com Introduction to AI 42 Agent keeps track of the world state as well as set of goals it’s trying to achieve: chooses actions that will (eventually) lead to the goal(s). More flexible than reflex agents  may involve search and planning
  • 43. Utility-based agents When there are multiple possible alternatives, how to decide which one is best? Goals are qualitative: A goal specifies a crude distinction between a happy and unhappy state, but often need a more general performance measure that describes “degree of happiness.” Important for making tradeoffs: Allows decisions comparing choice between conflicting goals, and choice between likelihood of success and importance of goal (if achievement is uncertain). Use decision theoretic models: e.g., faster vs. safer. emnetzewdu2121@gmail.com Introduction to AI 43
  • 44. emnetzewdu2121@gmail.com Introduction to AI 44 Decision theoretic actions: e.g. faster vs. safer
  • 45. emnetzewdu2121@gmail.com Introduction to AI 45 Learning agents Adapt and improve over time
  • 46. To Summarize emnetzewdu2121@gmail.com Introduction to AI 46  (1) Table-driven agents  use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.  (2) Simple reflex agents  are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.  (3) Agents with memory - Model-based reflex agents  have internal state, which is used to keep track of past states of the world.  (4) Agents with goals – Goal-based agents  are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.  (5) Utility-based agents  base their decisions on classic axiomatic utility theory in order to act rationally.  (6) Learning agents  they have the ability to improve performance through learning.
  • 47. Chapter Next: Solving Problems By Searching emnetzewdu2121@gmail.com Introduction to AI 47 Problem-solving agents Problem types Problem formulation Example problems Basic search algorithms
  • 48. Cont. Goal-based agents:- consider future actions and the desirability of their outcomes. A problem-solving agent is one kind of goal-based agent that uses atomic representations. Goal-based agents that use more advanced factored or structured representations are usually called planning agents. emnetzewdu2121@gmail.com Introduction to AI 48
  • 49. Problem Solving Agents Goal formulation, based on the current situation and the agent’s performance measure; first step in problem solving.  We will consider a goal to be a set of world states— exactly those states in which the goal is satisfied. Problem formulation is the process of deciding what actions and states to consider, given a goal. Search for solution: Given the problem, search for a solution --- a sequence of actions to achieve the goal starting from the initial state. Execution of the solution emnetzewdu2121@gmail.com Introduction to AI 49
  • 50. emnetzewdu2121@gmail.com Introduction to AI 50  Formulate goal:  be in Bucharest (Romania)  Formulate problem:  action: drive between pair of connected cities (direct road)  state: be in a city (20 world states)  Find solution:  sequence of cities leading from start to goal state, e.g., Arad, Sibiu, Fagaras, Bucharest  Execution  drive from Arad to Bucharest according to the solution Example: Path Finding problem Initial State Goal State Environment: fully observable (map), deterministic, and the agent knows effects of each action. Is this really the case? Note: Map is somewhat of a “toy” example. Our real interest: Exponentially large spaces, with e.g. 10^100 or more states. Far beyond full search. Humans can often still handle those! One of the mysteries of cognition.
  • 51. Problem and Solution Definition A problem can be defined formally by five components: I. Initial State : the state that the agent starts in. e.g. In(Arad). II. Transition Model: A description of what each action does; RESULT(s,a) :- function that returns the state that results from doing action a in state s. Successor:- refers to any state reachable from a given state by a single action. e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind) . III.State Space:- the set of all states reachable from the initial state by any sequence of actions. i.e. implicitly defined by the initial state, actions, and transition model  Graph :- the state space forms a graph in which the nodes are states and the links between nodes are actions.  Path :- a path in the state space is a sequence of states connected by a sequence of actions. emnetzewdu2121@gmail.com Introduction to AI 51
  • 52. Cont. IV. Goal Test:- determines whether a given state is a goal state.  Explicit:- if there is a set of possible goal states, the test checks whether the given state is one of them. V. Path Cost:- a function which assigns a numeric cost to each path  The agent chooses a cost function that reflects its own performance measure. Here we assume that the cost of a path can be described as the sum of the costs of the individual actions along the path.  The step cost of taking action a in state s to reach state s’ is denoted by c(s, a, s’). Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.  Removing detail from a state description to obtain a simpler representation is called abstraction. emnetzewdu2121@gmail.com Introduction to AI 52
  • 53. Ex .Prob.: Toy Prob.; Vacuum world state space graph emnetzewdu2121@gmail.com Introduction to AI 53  states?  actions?  goal test?  path cost? The agent is in one of 8 possible world states. Start state Left, Right, Suck [simplified: left out No-op] Goal (reach one in this set of states) No dirt at all locations (i.e., in one of bottom two states). 1 per action Minimum path from Start to Goal state: Alternative, longer plan: 3 actions 4 actions Note: path with thousands of steps before reaching goal also exist.
  • 54. Real world problems Route-finding problems:  Route-finding algorithms are used in a variety of applications:  Websites  Routing video streams in computer networks  Airline travel-planning systems  Car navigation systems; driving-direction solving  Military operation planning Airline travel-planning: - States:- each location and time, status of flight….. Actions:- any flight from the current location, in any seat class, leaving at any time, with enough time in the airport for connections Goal Test:- be at the final destination specified by the user Path cost:- depends on monetary cost, waiting time, customs and immigration procedures, seat quality, time of day, airplane type…. emnetzewdu2121@gmail.com Introduction to AI 54
  • 55. emnetzewdu2121@gmail.com Introduction to AI 55 Robotic Assembly  states?: real-valued coordinates of robot joint angles parts of the object to be assembled  actions?: continuous motions of robot joints  goal test?: complete assembly  path cost?: time to execute
  • 56. Touring Problems:- closely related to route-finding  Its actions correspond to trips between adjacent cities.  State space:- each state must include the set of cities the agent has visited, in addition to the current location.  Goal test:- checks whether the agent is in the goal city and all other cities have been visited. Traveling salesperson problem (TSP):- a touring problem in which each city must be visited exactly once.  The aim is to find the shortest tour. Other Examples:  Robot navigation/planning  VLSI layout: positioning millions of components and connections on a chip to minimize area, circuit delays, etc.  Protein design : sequence of amino acids that will fold into the 3- dimensional protein with the right properties.  Automatic assembly of complex objects Literally thousands of combinatorial search / reasoning / parsing / matching problems can be formulated as search problems in exponential size state spaces. emnetzewdu2121@gmail.com Introduction to AI 56
  • 57. emnetzewdu2121@gmail.com Introduction to AI 57 Search Techniques
  • 58. Searching for Solutions emnetzewdu2121@gmail.com Introduction to AI 58 Search through the state space. We will consider search techniques that use an explicit search tree that is generated by the initial state + successor function. initialize (initial node) Loop choose a node for expansion according to strategy goal node?  done expand node with successor function
  • 59. Tree-search algorithms emnetzewdu2121@gmail.com Introduction to AI 59  Basic idea:  simulated exploration of state space by generating successors of already-explored states (a.k.a. ~ expanding states) Note: 1) Here we only check a node for possibly being a goal state, after we select the node for expansion. 2) A “node” is a data structure containing state + additional info (parent node, etc.
  • 60. Tree Search example emnetzewdu2121@gmail.com Introduction to AI 60 Node selected for expansion.
  • 62. emnetzewdu2121@gmail.com Introduction to AI 62 Selected for expansion. Added to tree. Note: Arad added (again) to tree! (reachable from Sibiu) Not necessarily a problem, but in Graph-Search, we will avoid this by maintaining an “explored” list.
  • 63. Graph-search emnetzewdu2121@gmail.com Introduction to AI 63 Note: 1) Uses “explored” set to avoid visiting already explored states. 2) Uses “frontier” set to store states that remain to be explored and expanded.
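A minimal Python sketch of graph search with a frontier and an explored set; the problem interface (initial state, goal test, successor function) is an assumption for illustration:

from collections import deque

def graph_search(initial, is_goal, successors):
    # Frontier holds (state, path-of-actions); here it is FIFO, so the search is breadth-first.
    frontier = deque([(initial, [])])
    explored = set()
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        explored.add(state)
        for action, nxt in successors(state):
            # Skip states already explored or already waiting on the frontier.
            if nxt not in explored and all(nxt != s for s, _ in frontier):
                frontier.append((nxt, path + [action]))
    return None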
  • 64. Implementation: states vs. nodes emnetzewdu2121@gmail.com Introduction to AI 64  A state is a --- representation of --- a physical configuration.  A node is a data structure constituting part of a search tree includes state, tree parent node, action (applied to parent), path cost (initial state to node) g(x), depth
  • 65. Search strategies emnetzewdu2121@gmail.com Introduction to AI 65  A search strategy is defined by picking the order of node expansion.  Criteria that might be used to choose among the available algorithms. We can evaluate an algorithm’s performance along four dimensions:  completeness: does it always find a solution if one exists?  time complexity: how long does it take to find a solution (number of nodes generated)?  space complexity: maximum number of nodes in memory  optimality: does it always find a least-cost solution?  Time and space complexity are measured in terms of  b: maximum branching factor of the search tree (maximum number of successors of any node)  d: the depth of the shallowest goal node (i.e., the number of steps along the path from the root) (depth of the least-cost solution)  m: maximum depth of the state space (may be ∞)
  • 66. Search Algorithms  Informed (heuristic) - have some guidance on where to look for solutions.  Uninformed (blind) - given no information about the problem other than its definition. Some of them can solve any kind of problem, but not efficiently. 1. Breadth-first search 4. Depth-limited search 2. Uniform-cost search 5. Iterative deepening search 3. Depth-first search 6. Bidirectional search emnetzewdu2121@gmail.com Introduction to AI 66 Key issue: type of queue used for the fringe of the search tree. The fringe is the collection of nodes that have been generated but not (yet) expanded. Each node of the fringe is a leaf node.
  • 67. Breadth- First Search Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level. Breadth-first search is an instance of the general graph- search algorithm in which the shallowest unexpanded node is chosen for expansion. The goal test is applied to each node when it is generated rather than when it is selected for expansion. emnetzewdu2121@gmail.com Introduction to AI 67
  • 68. Cont. Implementation: Fringe is : FIFO queue used to expand the frontier.  At the back of the queue: New nodes(always deeper than their parents).  At top of the queue: Old nodes (shallower than new nodes) . emnetzewdu2121@gmail.com Introduction to AI 68 Fringe queue: <A> Select A from queue and expand. Gives <B, C>
  • 69. Cont. Implementation: Fringe is : FIFO queue used to expand the frontier.  At the back of the queue: New nodes(always deeper than their parents).  At top of the queue: Old nodes (shallower than new nodes) . emnetzewdu2121@gmail.com Introduction to AI 69 Queue: <B, C> Select B from front, and expand. Put children at the end. Gives <C, D, E>
  • 70. Cont. Implementation: Fringe is : FIFO queue used to expand the frontier.  At the back of the queue: New nodes(always deeper than their parents).  At top of the queue: Old nodes (shallower than new nodes) . emnetzewdu2121@gmail.com Introduction to AI 70 Fringe queue: <C, D, E>
  • 71. Cont. Implementation: Fringe is : FIFO queue used to expand the frontier.  At the back of the queue: New nodes(always deeper than their parents).  At top of the queue: Old nodes (shallower than new nodes) . emnetzewdu2121@gmail.com Introduction to AI 71 Fringe queue: <D, E, F, G> Assuming no further children, queue becomes <E, F, G>, <F, G>, <G>, <>. Each time node checked for goal state.
  • 72. Properties of breadth-first search emnetzewdu2121@gmail.com Introduction to AI 72 b: maximum branching factor of the search tree d: depth of the least-cost solution Complete? Yes (if b is finite) Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)) Note: check for goal only when node is expanded. Space? O(b^(d+1)) (keeps every node in memory; needed also to reconstruct soln. path) Optimal soln. found? Yes (if all step costs are identical) Space is the bigger problem (more than time)
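A minimal breadth-first search sketch on a small, made-up fragment of the Romania map (the connectivity below is illustrative, not the full map); as noted earlier, the goal test is applied when a node is generated:

from collections import deque

graph = {
    'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
    'Sibiu': ['Fagaras', 'Rimnicu'],
    'Fagaras': ['Bucharest'],
    'Rimnicu': ['Pitesti'],
    'Pitesti': ['Bucharest'],
    'Timisoara': [], 'Zerind': [],
}

def bfs(start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])        # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt in explored:
                continue
            if nxt == goal:            # goal test at generation time
                return path + [nxt]
            explored.add(nxt)
            frontier.append(path + [nxt])
    return None

print(bfs('Arad', 'Bucharest'))   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']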
  • 73. Uniform-cost search Expand least-cost (of path to) unexpanded node (e.g. useful for finding shortest path on map) Implementation:  fringe = queue ordered by path cost Equivalent to breadth-first if step costs all equal emnetzewdu2121@gmail.com Introduction to AI 73
  • 74. Uniform-cost search Expand least-cost unexpanded node Implementation:  fringe = queue ordered by path cost Equivalent to breadth-first if step costs all equal emnetzewdu2121@gmail.com Introduction to AI 74
  • 79. Uniform-cost search Complete? Yes Time? # of nodes with g ≤ cost of optimal solution, O(b^(1 + C*/ε)), where C* is the cost of the optimal solution and ε is some small positive constant bounding the cost of every step. Space? # of nodes with g ≤ cost of optimal solution, O(b^(C*/ε)) Optimal? Yes emnetzewdu2121@gmail.com Introduction to AI 79
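A minimal uniform-cost search sketch using a priority queue ordered by path cost g(n); the weighted graph is a made-up example:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]          # (path cost g, state, path)
    best_cost = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                     # goal test when the node is expanded
            return cost, path
        for nxt, step in graph.get(state, []):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float('inf')):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

g = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2)], 'B': [('G', 2)]}
print(uniform_cost_search(g, 'S', 'G'))   # (5, ['S', 'A', 'B', 'G'])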
  • 80. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 80
  • 81. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 81
  • 82. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 82
  • 83. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front. emnetzewdu2121@gmail.com Introduction to AI 83
  • 84. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 84
  • 85. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 85
  • 86. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 86
  • 87. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 87
  • 88. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 88
  • 89. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 89
  • 90. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 90
  • 91. Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front emnetzewdu2121@gmail.com Introduction to AI 91
  • 92. Properties of depth- first search Complete? No: fails in infinite-depth spaces, spaces with loops  Modify to avoid repeated states along path  complete in finite spaces Time? O(b^m): terrible if m is much larger than d  but if solutions are dense, may be much faster than breadth- first Space? O(b·m), i.e., linear space! Optimal? No emnetzewdu2121@gmail.com Introduction to AI 92
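A minimal depth-first search sketch: the fringe is a LIFO stack, so successors of the most recently generated node are expanded first (the graph is a made-up example, and repeated states are avoided only along the current path):

def depth_first_search(graph, start, goal):
    stack = [[start]]                    # LIFO stack of paths
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        for nxt in reversed(graph.get(state, [])):
            if nxt not in path:          # avoid cycles along the current path
                stack.append(path + [nxt])
    return None

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['G']}
print(depth_first_search(g, 'A', 'G'))   # ['A', 'B', 'D', 'G']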
  • 93. Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors Recursive implementation: emnetzewdu2121@gmail.com Introduction to AI 93
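A minimal recursive Python sketch of depth-limited search, assuming the same graph representation as the earlier sketches; it returns a path, None for failure, or the string 'cutoff' if the depth limit was reached:

def depth_limited_search(graph, state, goal, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return 'cutoff'                       # depth limit reached
    cutoff_occurred = False
    for nxt in graph.get(state, []):
        if nxt in path:                       # avoid cycles along the current path
            continue
        result = depth_limited_search(graph, nxt, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None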
  • 95. Iterative deepening search l =0 emnetzewdu2121@gmail.com Introduction to AI 95
  • 96. Iterative deepening search l =1 emnetzewdu2121@gmail.com Introduction to AI 96
  • 97. Iterative deepening search l =2 emnetzewdu2121@gmail.com Introduction to AI 97
  • 98. Iterative deepening search l =3 emnetzewdu2121@gmail.com Introduction to AI 98
  • 99. Properties of iterative deepening search Complete? Yes Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d) Space? O(b·d) Optimal? Yes, if step cost = 1 emnetzewdu2121@gmail.com Introduction to AI 99
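A minimal iterative deepening sketch that simply reruns a depth-limited search with limits 0, 1, 2, ...; the inner dls helper mirrors the depth-limited sketch above so the block is self-contained:

def iterative_deepening_search(graph, start, goal, max_depth=50):
    def dls(state, limit, path):
        path = path + [state]
        if state == goal:
            return path
        if limit == 0:
            return 'cutoff'
        hit_cutoff = False
        for nxt in graph.get(state, []):
            if nxt in path:
                continue
            res = dls(nxt, limit - 1, path)
            if res == 'cutoff':
                hit_cutoff = True
            elif res is not None:
                return res
        return 'cutoff' if hit_cutoff else None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [])
        if result != 'cutoff':
            return result          # either a path or None (no solution at any depth)
    return None

g = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['G']}
print(iterative_deepening_search(g, 'A', 'G'))   # ['A', 'B', 'D', 'G']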
  • 100. Bidirectional Search Run two simultaneous searches: one forward from the initial state and one backward from the goal, stopping when the two frontiers meet. emnetzewdu2121@gmail.com Introduction to AI 100
  • 102. Repeated states Do not return to the state you just came from. Do not create paths with cycles in them. Do not generate any state that was ever generated before.  Hashing emnetzewdu2121@gmail.com Introduction to AI 102
  • 104. Summary Problem formulation usually requires abstracting away real- world details to define a state space that can feasibly be explored Variety of uninformed search strategies  Breadth First Search  Depth First Search  Uniform Cost Search (Best First Search)  Depth Limited Search  Iterative Deepening  Bidirectional Search emnetzewdu2121@gmail.com Introduction to AI 104
  • 106. Introduction. The objective of research into intelligent machines is to produce systems which can reason with available knowledge and so behave intelligently. One of the major issues then is how to incorporate knowledge into these systems.
  • 107. The Problem Of Knowledge Representation How is the whole abstract concept of knowledge reduced into forms which can be written into a computer's memory? This is called the problem of Knowledge Representation.
  • 108. Fields of Knowledge The concept of Knowledge is central to a number of fields of established academic study, including Philosophy, Psychology, Logic, and Education. Even in Mathematics and Physics, people like Isaac Newton and Leibniz reflected that since physics has its foundations in a mathematical formalism, the laws of all nature should be similarly described.
  • 109. Major Representation Schemes Classical Representation schemes include  Logic  Procedural  Semantic Nets  Frames  Direct  Production Schemes Each Representation Scheme has a reasoning mechanism built in.
  • 111. What is a Logic? • When most people say ‘logic’, they mean either propositional logic or first- order predicate logic. • However, the precise definition is quite broad, and literally hundreds of logics have been studied by philosophers, computer scientists and mathematicians. • Formal Logic is a classical approach of representing Knowledge. • It was developed by philosophers and mathematicians as a calculus of the process of making inferences from facts. • The simplest logic formalism is that of Propositional Calculus which is effectively an equivalent form of Boolean Logic, the basis of Computing. • We can make Statements or Propositions which can either be atomic ( i.e. they can stand alone) or composite which are sentences constructed using atomic statements joined by logical connectives • Any ‘formal system’ can be considered a logic if it has: – a well-defined syntax; – a well-defined semantics; and – a well-defined proof-theory. Introduction to AI
  • 112. •The syntax of a logic defines the syntactically acceptable objects of the language, which are properly called well-formed formulae (wff). (We shall just call them formulae.) •The semantics of a logic associate each formula with a meaning. •The proof theory is concerned with manipulating formulae according to certain rules. Introduction to AI
  • 113. •The simplest, and most abstract logic we can study is called propositional logic. •Definition: A proposition is a statement that can be either true or false; it must be one or the other, and it cannot be both. •EXAMPLES. The following are propositions: – the reactor is on; – the wing-flaps are up; – Abiy Ahmed is prime minister. whereas the following are not: – are you going out somewhere? – 2+3 -Good Morning. Introduction to AI Propositional Logic
  • 114. •It is possible to determine whether any given statement is a proposition by prefixing it with: It is true that . . . and seeing whether the result makes grammatical sense. •We now define atomic propositions. Intuitively, these are the set of smallest propositions. •Definition: An atomic proposition is one whose truth or falsity does not depend on the truth or falsity of any other proposition. •So all the above propositions are atomic. Introduction to AI
  • 115. •Now, rather than write out propositions in full, we will abbreviate them by using propositional variables. •It is standard practice to use the lower-case Latin letters p, q, r, . . . to stand for propositions. •If we do this, we must define what we mean by writing something like: Let p be Abiy Ahmed is prime minister. •Another alternative is to write something like reactor is on, so that the interpretation of the propositional variable becomes obvious. Introduction to AI
  • 116. The Connectives •Now, the study of atomic propositions is pretty boring. We therefore now introduce a number of connectives which will allow us to build up complex propositions. •The connectives we introduce are: ∧ and (& or .) ∨ or (| or +) ¬ not (∼) ⇒ implies (⊃ or →) ⇔ iff •(Some books use other notations; these are given in parentheses.) Introduction to AI
  • 117. And •Any two propositions can be combined to form a third proposition called the conjunction of the original propositions. •Definition: If p and q are arbitrary propositions, then the conjunction of p and q is written p ∧ q and will be true iff both p and q are true. Introduction to AI
  • 118. •We can summarise the operation of ∧ in a truth table. The idea of a truth table for some formula is that it describes the behaviour of a formula under all possible interpretations of the primitive propositions that are included in the formula. •If there are n different atomic propositions in some formula, then there are 2^n different lines in the truth table for that formula. (This is because each proposition can take one of 2 values — true or false.) •Let us write T for truth, and F for falsity. Then the truth table for p ∧ q is:
p q | p ∧ q
F F |   F
F T |   F
T F |   F
T T |   T
Introduction to AI
  • 119. • Any two propositions can be combined by the word ‘or’ to form a third proposition called the disjunction of the originals. • Definition: If p and q are arbitrary propositions, then the disjunction of p and q is written p ∨ q and will be true iff either p is true, or q is true, or both p and q are true. The operation of ∨ is summarized in the following truth table:
p q | p ∨ q
F F |   F
F T |   T
T F |   T
T T |   T
Introduction to AI
  • 120. If. . . Then. . . •Many statements, particularly in mathematics, are of the form: if p is true then q is true. Another way of saying the same thing is to write: p implies q. •In propositional logic, we have a connective that combines two propositions into a new proposition called the conditional, or implication of the originals, that attempts to capture the sense of such a statement. Introduction to AI
  • 121. •Definition: If p and q are arbitrary propositions, then the conditional of p and q is written p ⇒ q and will be true iff either p is false or q is true. •The truth table for ⇒ is:
p q | p ⇒ q
F F |   T
F T |   T
T F |   F
T T |   T
Introduction to AI
  • 122. •The ⇒ operator is the hardest to understand of the operators we have considered so far, and yet it is extremely important. •If you find it difficult to understand, just remember that the p ⇒ q means ‘if p is true, then q is true’. If p is false, then we don’t care about q, and by default, make p ⇒ q evaluate to T in this case. •Terminology: if φ is the formula p ⇒ q, then p is the antecedent of φ and q is the consequent. Introduction to AI
  • 123. Iff •Another common form of statement in logic is: p is true if, and only if, q is true. •The sense of such statements is captured using the biconditional operator. •Definition: If p and q are arbitrary propositions, then the biconditional of p and q is written: p ⇔ q and will be true iff either: 1. p and q are both true; or 2. p and q are both false. •The truth table for ⇔ is:
p q | p ⇔ q
F F |   T
F T |   F
T F |   F
T T |   T
•If p ⇔ q is true, then p and q are said to be logically equivalent. They will be true under exactly the same circumstances.
  • 124. Not •All of the connectives we have considered so far have been binary: they have taken two arguments. •The final connective we consider here is unary. It only takes one argument. •Any proposition can be prefixed by the word ‘not’ to form a second proposition called the negation of the original. •Definition: If p is an arbitrary proposition then the negation of p is written ¬p and will be true iff p is false. •Truth table for ¬:
p | ¬p
F |  T
T |  F
Introduction to AI
  • 125. Comments •We can nest complex formulae as deeply as we want. •We can use parentheses i.e., ),(, to disambiguate formulae. •EXAMPLES. If p, q, r, s and t are atomic propositions, then all of the following are formulae: p ∧ q ⇒ r p ∧ (q ⇒ r) (p ∧ (q ⇒ r)) ∨ s ((p ∧ (q ⇒ r)) ∨ s) ∧ t whereas none of the following is: p ∧ p ∧ q) p¬ Introduction to AI
  • 126. • When we consider formulae in terms of interpretations, it turns out that some have interesting properties. • Definition: 1. A formula is a tautology iff it is true under every valuation; 2. A formula is consistent iff it is true under at least one valuation; 3. A formula is inconsistent (a contradiction) iff it is not made true under any valuation. • Now, each line in the truth table of a formula corresponds to a valuation. • So, we can use truth tables to determine whether a formula is a tautology, consistent, or a contradiction. Tautologies , Contradiction & Consistency Introduction to AI
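A minimal Python sketch that enumerates all valuations to classify a formula; the formula is passed as a Python function of its atoms, which is an illustrative convention, not part of the slides:

from itertools import product

def classify(formula, n_atoms):
    # Evaluate the formula under every valuation of its atoms.
    values = [formula(*row) for row in product([False, True], repeat=n_atoms)]
    if all(values):
        return 'tautology'
    if any(values):
        return 'consistent'
    return 'contradiction'

print(classify(lambda p: p or not p, 1))        # tautology
print(classify(lambda p, q: p and not q, 2))    # consistent
print(classify(lambda p: p and not p, 1))       # contradiction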
  • 128. 128 Propositional Equivalences CS/APMA 202, Spring 2005 Rosen, section 1.2 Aaron Bloomfield
  • 129. 129 Tautology and Contradiction A tautology is a statement that is always true  p ∨ ¬p will always be true (Negation Law) A contradiction is a statement that is always false  p ∧ ¬p will always be false (Negation Law)
p | p ∨ ¬p | p ∧ ¬p
T |    T   |    F
F |    T   |    F
  • 130. 130 Logical Equivalence A logical equivalence means that the two sides always have the same truth values  Symbol is ≡ or ⇔ (we’ll use ≡)
  • 131. 131 p ∧ T ≡ p Identity law p ∧ F ≡ F Domination law Logical Equivalences of And
p | p ∧ T | p ∧ F
T |   T   |   F
F |   F   |   F
  • 132. 132 p ∧ p ≡ p Idempotent law p ∧ q ≡ q ∧ p Commutative law Logical Equivalences of And
p | p ∧ p
T |   T
F |   F
p q | p ∧ q | q ∧ p
T T |   T   |   T
T F |   F   |   F
F T |   F   |   F
F F |   F   |   F
  • 133. 133 (p ∧ q) ∧ r ≡ p ∧ (q ∧ r) Associative law Logical Equivalences of And
p q r | p ∧ q | (p ∧ q) ∧ r | q ∧ r | p ∧ (q ∧ r)
T T T |   T   |      T      |   T   |      T
T T F |   T   |      F      |   F   |      F
T F T |   F   |      F      |   F   |      F
T F F |   F   |      F      |   F   |      F
F T T |   F   |      F      |   T   |      F
F T F |   F   |      F      |   F   |      F
F F T |   F   |      F      |   F   |      F
F F F |   F   |      F      |   F   |      F
  • 134. 134 p ∨ T ≡ T Domination law p ∨ F ≡ p Identity law p ∨ p ≡ p Idempotent law p ∨ q ≡ q ∨ p Commutative law (p ∨ q) ∨ r ≡ p ∨ (q ∨ r) Associative law Logical Equivalences of Or
  • 135. 135 Corollary of the Associative Law (p ∧ q) ∧ r ≡ p ∧ q ∧ r (p ∨ q) ∨ r ≡ p ∨ q ∨ r Similar to (3+4)+5 = 3+4+5 Only works if ALL the operators are the same!
  • 136. 136 ¬(¬p) ≡ p Double negation law p ∨ ¬p ≡ T Negation law p ∧ ¬p ≡ F Negation law Logical Equivalences of Not
  • 137. 137 DeMorgan’s Law Probably the most important logical equivalence To negate p ∧ q (or p ∨ q), you “flip” the sign, and negate BOTH p and q  Thus, ¬(p ∧ q) ≡ ¬p ∨ ¬q  Thus, ¬(p ∨ q) ≡ ¬p ∧ ¬q
p q | ¬p ¬q | p ∧ q  ¬(p ∧ q)  ¬p ∨ ¬q | p ∨ q  ¬(p ∨ q)  ¬p ∧ ¬q
T T |  F  F |   T       F         F    |   T       F         F
T F |  F  T |   F       T         T    |   T       F         F
F T |  T  F |   F       T         T    |   T       F         F
F F |  T  T |   F       T         T    |   F       T         T
  • 138. 138 Yet more equivalences Distributive: p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r) p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) Absorption p ∨ (p ∧ q) ≡ p p ∧ (p ∨ q) ≡ p
  • 139. 139 How to prove two propositions are equivalent? Two methods:  Using truth tables Not good for long formulae In this course, only allowed if specifically stated!  Using the logical equivalences The preferred method Example:  Show that: (p → r) ∨ (q → r) ≡ (p ∧ q) → r
  • 140. 140 Using Truth Tables (p → r) ∨ (q → r) ≡ (p ∧ q) → r
p q r | p→r | q→r | (p→r)∨(q→r) | p∧q | (p∧q)→r
T T T |  T  |  T  |      T      |  T  |    T
T T F |  F  |  F  |      F      |  T  |    F
T F T |  T  |  T  |      T      |  F  |    T
T F F |  F  |  T  |      T      |  F  |    T
F T T |  T  |  T  |      T      |  F  |    T
F T F |  T  |  F  |      T      |  F  |    T
F F T |  T  |  T  |      T      |  F  |    T
F F F |  T  |  T  |      T      |  F  |    T
  • 141. 141 Using Logical Equivalences Show that (p → r) ∨ (q → r) ≡ (p ∧ q) → r:
(p → r) ∨ (q → r)
≡ (¬p ∨ r) ∨ (¬q ∨ r)   Definition of implication
≡ ¬p ∨ ¬q ∨ r ∨ r       Re-arranging / Associativity of Or
≡ ¬p ∨ ¬q ∨ r           Idempotent Law (r ∨ r ≡ r)
≡ ¬(p ∧ q) ∨ r          DeMorgan’s Law
≡ (p ∧ q) → r           Definition of implication
  • 142. Cont. Example: Show that is logically equivalent to Solution: 142
  • 144. Pros and cons of propositional logic Propositional logic is declarative  Propositional logic allows partial/disjunctive/negated information  (unlike most data structures and databases) Propositional logic is compositional:  Meaning in propositional logic is context-independent  (unlike natural language, where meaning depends on context)  Propositional logic has very limited expressive power
  • 146. First-order logic Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contain  Objects, which are things with individual identities  Properties of objects that distinguish them from other objects  Relations that hold among sets of objects  Functions, which are a subset of relations where there is only one “value” for any given “input” Examples:  Objects: Students, lectures, companies, cars ...  Relations: Brother-of, bigger-than, outside, part-of, has-color, occurs-after, owns, visits, precedes, ...  Properties: blue, oval, even, large, ...  Functions: father-of, best-friend, second-half, one-more-than ...
  • 147. User provides Constant symbols, which represent individuals in the world  Mary  3  Green Function symbols, which map individuals to individuals  father-of(Mary) = John  color-of(Sky) = Blue Predicate symbols, which map individuals to truth values  greater(5,3)  green(Grass)  color(Grass, Green)
  • 148. FOL Provides Variable symbols  E.g., x, y, foo Connectives  Same as in PL: not (¬), and (∧), or (∨), implies (⇒), if and only if (biconditional ⇔) Quantifiers  Universal ∀x or (Ax)  Existential ∃x or (Ex)
  • 149. Sentences are built from terms and atoms A term (denoting a real-world individual) is a constant symbol, a variable symbol, or an n-place function of n terms. x and f(x1, ..., xn) are terms, where each xi is a term. A term with no variables is a ground term An atomic sentence (which has value true or false) is an n-place predicate of n terms A complex sentence is formed from atomic sentences connected by the logical connectives: ¬P, P∧Q, P∨Q, P⇒Q, P⇔Q where P and Q are sentences
  • 150. Quantifiers Universal quantification  (∀x)P(x) means that P holds for all values of x in the domain associated with that variable  E.g., (∀x) dolphin(x) ⇒ mammal(x) Existential quantification  (∃x)P(x) means that P holds for some value of x in the domain associated with that variable  E.g., (∃x) mammal(x) ∧ lays-eggs(x)  Permits one to make a statement about some object without naming it
  • 151. Quantifiers Universal quantifiers are often used with “implies” to form “rules”: (∀x) student(x) ⇒ smart(x) means “All students are smart” Universal quantification is rarely used to make blanket statements about every individual in the world: (∀x) student(x) ∧ smart(x) means “Everyone in the world is a student and is smart” Existential quantifiers are usually used with “and” to specify a list of properties about an individual
  • 152. Quantifier Scope Switching the order of universal quantifiers does not change the meaning:  (∀x)(∀y)P(x,y) ↔ (∀y)(∀x) P(x,y) Similarly, you can switch the order of existential quantifiers:  (∃x)(∃y)P(x,y) ↔ (∃y)(∃x) P(x,y) Switching the order of universals and existentials does change meaning:  Everyone likes someone: (∀x)(∃y) likes(x,y)  Someone is liked by everyone: (∃y)(∀x) likes(x,y)
  • 153. Connections between All and Exists We can relate sentences involving ∀ and ∃ using De Morgan’s laws: ¬(∀x) P(x) ↔ (∃x) ¬P(x) ¬(∃x) P(x) ↔ (∀x) ¬P(x) (∀x) P(x) ↔ ¬(∃x) ¬P(x) (∃x) P(x) ↔ ¬(∀x) ¬P(x)
  • 154. Translating English to FOL Every gardener likes the sun. ∀x gardener(x) ⇒ likes(x, Sun) You can fool some of the people all of the time. ∃x ∀t person(x) ∧ time(t) ⇒ can-fool(x,t) You can fool all of the people some of the time. ∀x ∃t (person(x) ⇒ time(t) ∧ can-fool(x,t)) ∀x (person(x) ⇒ ∃t (time(t) ∧ can-fool(x,t))) (Equivalent) All purple mushrooms are poisonous. ∀x (mushroom(x) ∧ purple(x)) ⇒ poisonous(x) No purple mushroom is poisonous. ¬∃x purple(x) ∧ mushroom(x) ∧ poisonous(x) ∀x (mushroom(x) ∧ purple(x)) ⇒ ¬poisonous(x) (Equivalent) There are exactly two purple mushrooms. ∃x ∃y mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y) ∧ ¬(x=y) ∧ ∀z (mushroom(z) ∧ purple(z)) ⇒ ((x=z) ∨ (y=z)) Clinton is not tall. ¬tall(Clinton) X is above Y iff X is directly on top of Y or there is a pile of one or more other objects directly on top of one another starting with X and ending with Y.
  • 156. emnetzewdu2121@gmail.com Introduction to AI 156 Logical Equivalence You might have noticed that the final column in the truth table for ¬P∨Q is identical to the final column in the truth table for P→Q:
P Q | P→Q | ¬P∨Q
T T |  T  |  T
T F |  F  |  F
F T |  T  |  T
F F |  T  |  T
This says that no matter what P and Q are, the statements ¬P∨Q and P→Q are either both true or both false. We therefore say these statements are logically equivalent. We can also prove this by showing that the truth table of (¬P∨Q) ↔ (P→Q) is a tautology.
  • 157. emnetzewdu2121@gmail.com Introduction to AI 157 Example Prove that the statements ¬(P→Q) and P∧¬Q are logically equivalent without using truth tables.
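A sketch of one possible solution, using only the equivalences introduced earlier:
¬(P→Q)
≡ ¬(¬P ∨ Q)     (since P→Q ≡ ¬P∨Q, shown on the previous slide)
≡ ¬¬P ∧ ¬Q      (De Morgan’s Law)
≡ P ∧ ¬Q        (Double Negation Law)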
  • 158. Chapter Next Logical Agents Russell and Norvig: Chapter 7 emnetzewdu2121@gmail.com Introduction to AI 158
  • 159. “Thinking Rationally” Computational models of human “thought” processes Computational models of human behavior Computational systems that “think” rationally Computational systems that behave rationally emnetzewdu2121@gmail.com Introduction to AI 159
  • 160. Logical Agents Reflex agents find their way from Arad to Bucharest by dumb luck Chess program calculates legal moves of its king, but doesn’t know that no piece can be on 2 different squares at the same time Logical(Knowledge-Based) agents combine general knowledge with current percepts to infer hidden aspects of current state prior to selecting actions  Crucial in partially observable environments emnetzewdu2121@gmail.com Introduction to AI 160
  • 161. Roadmap Knowledge-based agents Wumpus world Logic in general Propositional and first-order logic  Inference, validity, equivalence and satisfiability  Reasoning patterns  Resolution  Forward/backward chaining emnetzewdu2121@gmail.com Introduction to AI 161
  • 162. Knowledge-Based Agent environment agent ? sensors actuators Knowledge base Inference Engine Domain-independent algorithms Domain-specific content emnetzewdu2121@gmail.com Introduction to AI 162
  • 163. Knowledge Base Knowledge Base: set of sentences represented in a knowledge representation language and represents assertions about the world. Inference rule: when one ASKs questions of the KB, the answer should follow from what has been TELLed to the KB previously. tell ask emnetzewdu2121@gmail.com Introduction to AI 163
  • 165. Abilities KB agent Agent must be able to:  Represent states and actions,  Incorporate new percepts  Update internal representation of the world  Deduce hidden properties of the world  Deduce appropriate actions emnetzewdu2121@gmail.com Introduction to AI 165
  • 166. Description level The KB agent is similar to agents with internal state Agents can be described at different levels  Knowledge level  What they know, regardless of the actual implementation. (Declarative description)  Implementation level  Data structures in KB and algorithms that manipulate them e.g propositional logic and resolution. emnetzewdu2121@gmail.com Introduction to AI 166
  • 167. Types of Knowledge Procedural, e.g.: functions Such knowledge can only be used in one way -- by executing it Declarative, e.g.: constraints and rules It can be used to perform many different sorts of inferences emnetzewdu2121@gmail.com Introduction to AI 167
  • 168. The Wumpus World The Wumpus computer game The agent explores a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the Wumpus, a beast that eats any agent that enters its room. Some rooms contain bottomless pits that trap any agent that wanders into the room. Occasionally, there is a heap of gold in a room. The goal is to collect the gold and exit the world without being eaten emnetzewdu2121@gmail.com Introduction to AI 168
  • 169. Wumpus PEAS description Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow Environment: Squares adjacent to the wumpus are smelly (stench) Squares adjacent to a pit are breezy Glitter iff gold is in the same square Bump iff the agent moves into a wall Woeful scream iff the wumpus is killed Shooting kills the wumpus if you are facing it Shooting uses up the only arrow Grabbing picks up the gold if in the same square Releasing drops the gold in the same square Sensors: Stench, Breeze, Glitter, Bump, Scream Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot emnetzewdu2121@gmail.com Introduction to AI 169
  • 170. A typical Wumpus world The agent always starts in [1,1]. The task of the agent is to find the gold, return to the field [1,1] and climb out of the cave. emnetzewdu2121@gmail.com Introduction to AI 170
  • 171. Wumpus World Characteristics Observable? Deterministic? Episodic or Sequential? Static? Discrete? Single-agent? emnetzewdu2121@gmail.com Introduction to AI 171
  • 172. Wumpus World Characterization Observable?  No, only local perception Deterministic?  Yes, outcome exactly specified Episodic?  No, sequential at the level of actions Static?  Yes, Wumpus and pits do not move Discrete?  Yes Single-agent?  Yes, Wumpus is essentially a natural feature. emnetzewdu2121@gmail.com Introduction to AI 172
  • 173. The Wumpus agent’s first step [1,1] The KB initially contains the rules of the environment. The first percept is [none, none,none,none,none], move to safe cell e.g. 2,1 [2,1] breeze which indicates that there is a pit in [2,2] or [3,1], return to [1,1] to try next safe cell emnetzewdu2121@gmail.com Introduction to AI 173
  • 174. The Wumpus agent’s first step [1,1] The KB initially contains the rules of the environment. The first percept is [none, none,none,none,none], move to safe cell e.g. 2,1 [2,1] breeze which indicates that there is a pit in [2,2] or [3,1], return to [1,1] to try next safe cell emnetzewdu2121@gmail.com Introduction to AI 174
  • 175. Next…. [1,2] Stench in cell which means that wumpus is in [1,3] or [2,2] YET … not in [1,1] YET … not in [2,2] or stench would have been detected in [2,1] THUS … wumpus is in [1,3] THUS [2,2] is safe because of lack of breeze in [1,2] THUS pit in [3,1] move to next safe cell [2,2] emnetzewdu2121@gmail.com Introduction to AI 175
  • 176. Then… [2,2] move to [2,3] [2,3] detect glitter , smell, breeze THUS pick up gold THUS pit in [3,3] or [2,4] emnetzewdu2121@gmail.com Introduction to AI 176
  • 177. What is a logic? A formal language  Syntax – what expressions are legal (well-formed)  Semantics – what legal expressions mean  in logic the truth of each sentence with respect to each possible world. E.g the language of arithmetic  X+2 >= y is a sentence, x2+y is not a sentence  X+2 >= y is true in a world where x=7 and y =1  X+2 >= y is false in a world where x=0 and y =6 emnetzewdu2121@gmail.com Introduction to AI 177
  • 178. Connection World-Representation World W Conceptualization Facts about W hold hold Sentences represent Facts about W represent Sentences entail emnetzewdu2121@gmail.com Introduction to AI 178
  • 179. Entailment One thing follows from another KB |= α KB entails sentence α if and only if α is true in all worlds where KB is true. e.g. x+y=4 entails 4=x+y Entailment is a relationship between sentences that is based on semantics. emnetzewdu2121@gmail.com Introduction to AI 179
  • 180. Models Models are formal definitions of possible states of the world We say m is a model of a sentence α if α is true in m M(α) is the set of all models of α Then KB ╞ α if and only if M(KB) ⊆ M(α) emnetzewdu2121@gmail.com Introduction to AI 180
  • 182. Wumpus models KB = wumpus-world rules + observations emnetzewdu2121@gmail.com Introduction to AI 182
  • 183. Wumpus models KB = wumpus-world rules + observations α1 = "[1,2] is safe", KB ╞ α1, proved by model checking emnetzewdu2121@gmail.com Introduction to AI 183
  • 184. Wumpus models KB = wumpus-world rules + observations emnetzewdu2121@gmail.com Introduction to AI 184
  • 185. Wumpus models KB = wumpus-world rules + observations α2 = "[2,2] is safe", KB ╞ α2 emnetzewdu2121@gmail.com Introduction to AI 185
  • 186. Inference KB ├i α = sentence α can be derived from KB by procedure i Soundness: i is sound if whenever KB ├i α, it is also true that KB╞ α Completeness: i is complete if whenever KB╞ α, it is also true that KB ├i α Preview: we will define a logic which is expressive enough to say almost anything of interest, and for which there exists a sound and complete inference procedure. That is, the procedure will answer any question whose answer follows from what is known by the KB. (i is an algorithm that derives α from KB ) emnetzewdu2121@gmail.com Introduction to AI 186
  • 187. Propositional logic: Syntax Propositional logic is the simplest logic – illustrates basic ideas The proposition symbols P1, P2 etc are sentences  If S is a sentence, ¬S is a sentence (negation)  If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)  If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)  If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)  If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional) emnetzewdu2121@gmail.com Introduction to AI 187
  • 188. Propositional logic: Semantics Each model specifies true/false for each proposition symbol E.g. P1,2 P2,2 P3,1 false true false With these symbols, 8 possible models, can be enumerated automatically. Rules for evaluating truth with respect to a model m: ¬S is true iff S is false Negation S1 ∧ S2 is true iff S1 is true and S2 is true Conjunction S1 ∨ S2 is true iff S1 is true or S2 is true Disjunction S1 ⇒ S2 is true iff S1 is false or S2 is true Implication i.e., is false iff S1 is true and S2 is false S1 ⇔ S2 is true iff S1⇒S2 is true and S2⇒S1 is true Bicond. Simple recursive process evaluates an arbitrary sentence, e.g., ¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true emnetzewdu2121@gmail.com Introduction to AI 188
  • 189. Truth tables for connectives emnetzewdu2121@gmail.com Introduction to AI 189
  • 190. Wumpus world sentences Let Pi,j be true if there is a pit in [i, j]. Let Bi,j be true if there is a breeze in [i, j]. R1: ¬P1,1 R4: ¬B1,1 R5: B2,1 "Pits cause breezes in adjacent squares" R2: B1,1 ⇔ (P1,2 ∨ P2,1) R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) ( R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 ) is the State of the Wumpus World emnetzewdu2121@gmail.com Introduction to AI 190
  • 191. Truth tables for inference emnetzewdu2121@gmail.com Introduction to AI 191
  • 192. Order of Precedence ¬ ∧ ∨ ⇒ ⇔ Examples: ¬A ∧ B ⇒ C is equivalent to ((¬A)∧B) ⇒ C emnetzewdu2121@gmail.com Introduction to AI 192
  • 193. Logical equivalence Two sentences are logically equivalent iff true in same models: α ≡ ß iff α╞ β and β╞ α emnetzewdu2121@gmail.com Introduction to AI 193
  • 194. Validity & Satisfiability A sentence is valid if it is true in all models, e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B  Tautologies are necessarily true statements Validity is connected to inference via the Deduction Theorem: KB ╞ α if and only if (KB ⇒ α) is valid A sentence is satisfiable if it is true in some model A sentence is unsatisfiable if it is true in no models e.g., A ∧ ¬A Satisfiability is connected to inference via the following: KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable emnetzewdu2121@gmail.com Introduction to AI 194
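A minimal model-enumeration sketch of the entailment check KB ╞ α; sentences are represented here as Python functions over a dictionary of symbol values, which is an illustrative convention, not part of the slides:

from itertools import product

def tt_entails(kb, alpha, symbols):
    # KB |= alpha iff alpha is true in every model in which KB is true.
    for row in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, row))
        if kb(model) and not alpha(model):
            return False
    return True

# Example: KB = (A or B) and (not A); alpha = B
kb = lambda m: (m['A'] or m['B']) and not m['A']
alpha = lambda m: m['B']
print(tt_entails(kb, alpha, ['A', 'B']))   # True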
  • 195. Rules of Inference • Valid Rules of Inference:  Modus Ponens  And-Elimination  And-Introduction  Or-Introduction  Double Negation  Unit Resolution  Resolution emnetzewdu2121@gmail.com Introduction to AI 195
  • 196. Examples in Wumpus World Modus Ponens: α ⇒ β, α ⊢ β (WumpusAhead ∧ WumpusAlive) ⇒ Shoot, (WumpusAhead ∧ WumpusAlive) ⊢ Shoot And-Elimination: α ∧ β ⊢ α (WumpusAhead ∧ WumpusAlive) ⊢ WumpusAlive Resolution: α ∨ β, ¬β ∨ γ ⊢ α ∨ γ (WumpusDead ∨ WumpusAhead), (¬WumpusAhead ∨ Shoot) ⊢ (WumpusDead ∨ Shoot) emnetzewdu2121@gmail.com Introduction to AI 196
  • 197. Rules of Inference (continued) And-Introduction: α1, α2, …, αn ⊢ α1 ∧ α2 ∧ … ∧ αn Or-Introduction: αi ⊢ α1 ∨ α2 ∨ … ∨ αi ∨ … ∨ αn Double Negation: ¬¬α ⊢ α Unit Resolution (special case of resolution): α ∨ β, ¬β ⊢ α emnetzewdu2121@gmail.com Introduction to AI 197
  • 198. Wumpus World KB Proposition Symbols for each i,j:  Let Pi,j be true if there is a pit in square i,j  Let Bi,j be true if there is a breeze in square i,j Sentences in KB  “There is no pit in square 1,1” R1: ¬P1,1  “A square is breezy iff pit in a neighboring square” R2: B1,1 ⇔ (P1,2 ∨ P2,1) R3: B2,1 ⇔ (P1,1 ∨ P3,1 ∨ P2,2)  “Square 1,1 has no breeze”, “Square 2,1 has a breeze” R4: ¬B1,1 R5: B2,1 emnetzewdu2121@gmail.com Introduction to AI 198
  • 199. Inference in Wumpus World Apply biconditional elimination to R2: R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) Apply And-Elimination to R6: R7: ((P1,2 ∨ P2,1) ⇒ B1,1) Contrapositive of R7: R8: (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)) Modus Ponens with R8 and R4 (¬B1,1): R9: ¬(P1,2 ∨ P2,1) de Morgan: R10: ¬P1,2 ∧ ¬P2,1 emnetzewdu2121@gmail.com Introduction to AI 199
  • 200. Searching for Proofs Finding proofs is exactly like finding solutions to search problems. Can search forward (forward chaining) to derive goal or search backward (backward chaining) from the goal. Searching for proofs is not more efficient than enumerating models, but in many practical cases, it’s more efficient because we can ignore irrelevant propositions emnetzewdu2121@gmail.com Introduction to AI 200
  • 201. Full Resolution Rule Revisited Full Resolution Rule is a generalization of this rule: For clauses of length two: emnetzewdu2121@gmail.com Introduction to AI 201
  • 202. Resolution Applied to Wumpus World R11: ¬B1,2. R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3). At some point we determine the absence of a pit in square 2,2: R13: ¬P2,2 Biconditional elimination applied to R3 followed by modus ponens with R5: R15: P1,1 ∨ P2,2 ∨ P3,1 Resolve R15 and R13: R16: P1,1 ∨ P3,1 Resolve R16 and R1: R17: P3,1 emnetzewdu2121@gmail.com Introduction to AI 202
  • 203. Resolution: Complete Inference Procedure Any complete search algorithm, applying only the resolution rule, can derive any conclusion entailed by any knowledge base in propositional logic. emnetzewdu2121@gmail.com Introduction to AI 203
  • 204. Cont. The resolution rule applies only to clauses (that is, disjunctions of literals), so it would seem to be relevant only to knowledge bases and queries consisting of clauses. How, then, can it lead to a complete inference procedure for all of propositional logic?  The answer is that every sentence of propositional logic is logically equivalent to a conjunction of clauses. A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form or CNF emnetzewdu2121@gmail.com Introduction to AI 204
  • 205. Conjunctive Normal Form Conjunctive Normal Form is a conjunction of clauses (each clause is a disjunction of literals). Example: (A ∨ B ∨ ¬C) ∧ (B ∨ D) ∧ (¬A) ∧ (B ∨ C): each parenthesized disjunction is a clause; A, B, ¬C, D, ¬A, C are literals. emnetzewdu2121@gmail.com Introduction to AI 205
  • 206. converting the sentence B1,1 ⇔(P1,2∨P2,1) emnetzewdu2121@gmail.com Introduction to AI 206
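A sketch of the conversion, using the standard steps (biconditional elimination, implication elimination, De Morgan, distribution):
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α): (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards with De Morgan’s Law: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Distribute ∨ over ∧: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)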
  • 207. Resolution Algorithm To show KB ╞ α, we show (KB ∧ ¬α) is unsatisfiable. This is a proof by contradiction. First convert (KB ∧ ¬α) into CNF. Then apply the resolution rule to the resulting clauses. The process continues until:  there are no new clauses that can be added (KB does not entail α), or  two clauses resolve to yield the empty clause, in which case KB entails α. emnetzewdu2121@gmail.com Introduction to AI 207
  • 208. Simple Inference in Wumpus World KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1 Prove ¬P1,2 by adding the negation P1,2 Convert KB ∧ P1,2 to CNF PL-RESOLUTION algorithm emnetzewdu2121@gmail.com Introduction to AI 208
  • 209. Horn clauses and Definite clauses Horn Form  definite clause: a disjunction of literals of which exactly one is positive. e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) is a definite clause, but (¬B1,1 ∨ P1,2 ∨ P2,1) is not  Horn clause: a disjunction of literals of which at most one is positive. e.g. (¬B2,2 ∨ B1,1) and (¬L1,1 ∨ ¬B2,2) are Horn clauses, while (¬B1,1 ∨ P1,2 ∨ P2,1) is not So all definite clauses are Horn clauses KBs containing only definite clauses are interesting since:  Every definite clause can be written as an implication. e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) => (L1,1 ∧ B2,2) ⇒ B1,1 Note: In Horn form, the premise is called the body and the conclusion is called the head. A sentence consisting of a single positive literal, such as L1,1, is called a fact. emnetzewdu2121@gmail.com Introduction to AI 209
  • 210.  Inference with Horn clauses can be done through forward-chaining and backward- chaining algorithms. Both are easily understandable by human and this type of inference is base for logic programming.  Deciding entailment with Horn clauses can be done in time that is linear in the size of the knowledge base—a pleasant surprise 210 emnetzewdu2121@gmail.com Introduction to AI
  • 211. Forward Chaining Fire any rule whose premises are satisfied in the KB. Add its conclusion to the KB until query is found. emnetzewdu2121@gmail.com Introduction to AI 211
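A minimal forward-chaining sketch for definite clauses; each rule is a (premises, conclusion) pair, and the rule base below matches the example on the next slide:

def forward_chaining(rules, facts, query):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire any rule whose premises are all known and whose conclusion is new.
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
                if conclusion == query:
                    return True
    return query in known

rules = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
         ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]
print(forward_chaining(rules, {'A', 'B'}, 'Q'))   # True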
  • 212. Forward Chaining Example KB: P ⇒ Q; L ∧ M ⇒ P; B ∧ L ⇒ M; A ∧ P ⇒ L; A ∧ B ⇒ L; A; B. Query: Q. Forward chaining on this KB derives, in order: L (from A ∧ B ⇒ L), then M (from B ∧ L ⇒ M), then P (from L ∧ M ⇒ P), and finally the query Q (from P ⇒ Q). emnetzewdu2121@gmail.com Introduction to AI 212
  • 220. Backward Chaining Motivation: Need goal-directed reasoning in order to keep from getting overwhelmed with irrelevant consequences Main idea:  Work backwards from query q  To prove q:  Check if q is known already  Prove by backward chaining all premises of some rule concluding q emnetzewdu2121@gmail.com Introduction to AI 220
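A minimal backward-chaining sketch over the same definite-clause KB used in the forward-chaining sketch above: to prove a query, try to prove all premises of some rule that concludes it.

def backward_chaining(rules, facts, query, seen=None):
    seen = seen or set()
    if query in facts:
        return True
    if query in seen:                    # avoid looping on a subgoal we are already trying
        return False
    seen = seen | {query}
    for premises, conclusion in rules:
        if conclusion == query and all(
                backward_chaining(rules, facts, p, seen) for p in premises):
            return True
    return False

rules = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
         ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]
print(backward_chaining(rules, {'A', 'B'}, 'Q'))   # True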
  • 221. Backward Chaining Example KB: P ⇒ Q; L ∧ M ⇒ P; B ∧ L ⇒ M; A ∧ P ⇒ L; A ∧ B ⇒ L; A; B. Query: Q. Backward chaining works from the query: Q requires P; P requires L and M; L follows from the known facts A and B; M requires B and L; since A and B are facts, all subgoals succeed and Q is proved. emnetzewdu2121@gmail.com Introduction to AI 221
  • 222. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 222
  • 223. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 223
  • 224. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 224
  • 225. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 225
  • 226. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 226
  • 227. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 227
  • 228. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 228
  • 229. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 229
  • 230. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 230
  • 231. Backward Chaining Example P  Q L M  P B L  M A P  L A B  L A B emnetzewdu2121@gmail.com Introduction to AI 231
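A matching Python sketch of backward chaining on the same rules (again with hypothetical helper names): a goal succeeds if it is a known fact, or if every premise of some rule concluding it can itself be proved; a visited set guards against looping on repeated goals.

    rules = [({'P'}, 'Q'), ({'L', 'M'}, 'P'), ({'B', 'L'}, 'M'),
             ({'A', 'P'}, 'L'), ({'A', 'B'}, 'L')]

    def bc_entails(rules, facts, goal, visited=frozenset()):
        if goal in facts:
            return True
        if goal in visited:                  # goal already on the current proof path
            return False
        visited = visited | {goal}
        return any(concl == goal and
                   all(bc_entails(rules, facts, p, visited) for p in prem)
                   for prem, concl in rules)

    print(bc_entails(rules, {'A', 'B'}, 'Q'))  # True: Q is proved top-down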
  • 232. Forward Chaining vs. Backward Chaining FC is data-driven—it may do lots of work irrelevant to the goal BC is goal-driven—appropriate for problem-solving emnetzewdu2121@gmail.com Introduction to AI 232
  • 233. First-Order Logic emnetzewdu2121@gmail.com Introduction to AI 233 Russell and Norvig: Chapter 8
  • 234. Outline Why FOL? Syntax and semantics of FOL Using FOL Wumpus world in FOL emnetzewdu2121@gmail.com Introduction to AI 234
  • 235. Pros and cons of propositional logic  Propositional logic is declarative  Propositional logic allows partial/disjunctive/negated information  (unlike most data structures and databases)  Meaning in propositional logic is context-independent  (unlike natural language, where meaning depends on context)  Propositional logic has very limited expressive power  (unlike natural language) emnetzewdu2121@gmail.com Introduction to AI 235
  • 236. First-order logic Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains  Objects: people, houses, numbers, colors, baseball games, wars, …  Relations: red, round, prime, brother of, bigger than, part of, comes between, …  Functions: father of, best friend, one more than, plus, … emnetzewdu2121@gmail.com Introduction to AI 236
  • 237. Syntax of FOL: Basic elements Constants KingJohn, 2, NUS,... Predicates Brother, >,... Functions Sqrt, LeftLegOf,... Variables x, y, a, b,... Connectives ¬, ∧, ∨, ⇒, ⇔ Equality = Quantifiers ∀, ∃ emnetzewdu2121@gmail.com Introduction to AI 237
  • 238. Atomic sentences Atomic sentence = predicate (term1,...,termn) or term1 = term2 Term = function (term1,...,termn) or constant or variable E.g., Brother(KingJohn,RichardTheLionheart) > (Length(LeftLegOf(Richard)), Length(LeftLegOf(KingJohn))) emnetzewdu2121@gmail.com Introduction to AI 238
  • 239. Complex sentences Complex sentences are made from atomic sentences using connectives: ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2 E.g. Sibling(KingJohn,Richard) ⇒ Sibling(Richard,KingJohn) >(1,2) ∨ ≤(1,2) >(1,2) ∧ ¬>(1,2) emnetzewdu2121@gmail.com Introduction to AI 239
  • 240. Truth in first-order logic Sentences are true with respect to a model and an interpretation Model contains objects (domain elements) and relations among them Interpretation specifies referents for constant symbols → objects predicate symbols → relations function symbols → functional relations An atomic sentence predicate(term1,...,termn) is true iff the objects referred to by term1,...,termn are in the relation referred to by predicate emnetzewdu2121@gmail.com Introduction to AI 240
  • 241. Models for FOL: Example emnetzewdu2121@gmail.com Introduction to AI 241
  • 242. Universal quantification ∀<variables> <sentence> Everyone at NUS is smart: ∀x At(x,NUS) ⇒ Smart(x) ∀x P is true in a model m iff P is true with x being each possible object in the model Roughly speaking, equivalent to the conjunction of instantiations of P: (At(KingJohn,NUS) ⇒ Smart(KingJohn)) ∧ (At(Richard,NUS) ⇒ Smart(Richard)) ∧ (At(NUS,NUS) ⇒ Smart(NUS)) ∧ ... 242 emnetzewdu2121@gmail.com Introduction to AI
  • 243. A common mistake to avoid Typically, ⇒ is the main connective with ∀ Common mistake: using ∧ as the main connective with ∀: ∀x At(x,NUS) ∧ Smart(x) means “Everyone is at NUS and everyone is smart” emnetzewdu2121@gmail.com Introduction to AI 243
  • 244. Existential quantification ∃<variables> <sentence> Someone at NUS is smart: ∃x At(x,NUS) ∧ Smart(x) ∃x P is true in a model m iff P is true with x being some possible object in the model Roughly speaking, equivalent to the disjunction of instantiations of P: (At(KingJohn,NUS) ∧ Smart(KingJohn)) ∨ (At(Richard,NUS) ∧ Smart(Richard)) ∨ (At(NUS,NUS) ∧ Smart(NUS)) ∨ ... emnetzewdu2121@gmail.com Introduction to AI 244
  • 245. Another common mistake to avoid Typically, ∧ is the main connective with ∃ Common mistake: using ⇒ as the main connective with ∃: ∃x At(x,NUS) ⇒ Smart(x) is true if there is anyone who is not at NUS! emnetzewdu2121@gmail.com Introduction to AI 245
  • 246. Properties of quantifiers ∀x ∀y is the same as ∀y ∀x ∃x ∃y is the same as ∃y ∃x ∃x ∀y is not the same as ∀y ∃x ∃x ∀y Loves(x,y)  “There is a person who loves everyone in the world” ∀y ∃x Loves(x,y)  “Everyone in the world is loved by at least one person” Quantifier duality: each can be expressed using the other ∀x Likes(x,IceCream) ≡ ¬∃x ¬Likes(x,IceCream) ∃x Likes(x,Banana) ≡ ¬∀x ¬Likes(x,Banana) emnetzewdu2121@gmail.com Introduction to AI 246
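Because ∀ behaves like a conjunction and ∃ like a disjunction over the objects of a model, both readings (and the duality between them) can be checked directly on a small finite model; the domain and predicate extensions in this Python sketch are invented for illustration:

    domain = ['KingJohn', 'Richard', 'NUS']
    at_nus = {'KingJohn', 'Richard'}          # extension of At(x, NUS)
    smart  = {'KingJohn', 'Richard'}          # extension of Smart(x)

    def at_implies_smart(x):                  # At(x,NUS) => Smart(x)
        return (x not in at_nus) or (x in smart)

    forall_version = all(at_implies_smart(x) for x in domain)             # everyone at NUS is smart
    exists_version = any((x in at_nus) and (x in smart) for x in domain)  # someone at NUS is smart
    duality = forall_version == (not any(not at_implies_smart(x) for x in domain))
    print(forall_version, exists_version, duality)   # True True True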
  • 247. Equality term1 = term2 is true under a given interpretation if and only if term1 and term2 refer to the same object E.g., definition of Sibling in terms of Parent: ∀x,y Sibling(x,y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f) ∧ Parent(m,x) ∧ Parent(f,x) ∧ Parent(m,y) ∧ Parent(f,y)] emnetzewdu2121@gmail.com Introduction to AI 247
  • 248. Using FOL The kinship domain: Brothers are siblings ∀x,y Brother(x,y) ⇒ Sibling(x,y) One's mother is one's female parent ∀m,c Mother(c) = m ⇔ (Female(m) ∧ Parent(m,c)) “Sibling” is symmetric ∀x,y Sibling(x,y) ⇔ Sibling(y,x) emnetzewdu2121@gmail.com Introduction to AI 248
  • 249. Interacting with FOL KBs Suppose a wumpus-world agent is using an FOL KB and perceives a smell and a breeze (but no glitter) at t=5: Tell(KB,Percept([Smell,Breeze,None],5)) Ask(KB,∃a BestAction(a,5)) I.e., does the KB entail some best action at t=5? Answer: Yes, {a/Shoot} emnetzewdu2121@gmail.com Introduction to AI 249
  • 250. Knowledge base for the wumpus world Perception ∀t,s,b Percept([s,b,Glitter],t) ⇒ Glitter(t) Reflex ∀t Glitter(t) ⇒ BestAction(Grab,t) emnetzewdu2121@gmail.com Introduction to AI 250
  • 251. Deducing hidden properties ∀x,y,a,b Adjacent([x,y],[a,b]) ⇔ [a,b] ∈ {[x+1,y], [x-1,y], [x,y+1], [x,y-1]} Properties of squares: ∀s,t At(Agent,s,t) ∧ Breeze(t) ⇒ Breezy(s) Squares are breezy near a pit:  Diagnostic rule---infer cause from effect ∀s Breezy(s) ⇒ ∃r Adjacent(r,s) ∧ Pit(r)  Causal rule---infer effect from cause ∀r Pit(r) ⇒ [∀s Adjacent(r,s) ⇒ Breezy(s)] emnetzewdu2121@gmail.com Introduction to AI 251
  • 252. Knowledge engineering in FOL 1. Identify the task 2. Assemble the relevant knowledge 3. Decide on a vocabulary of predicates, functions, and constants 4. Encode general knowledge about the domain 5. Encode a description of the specific problem instance 6. Pose queries to the inference procedure and get answers 7. Debug the knowledge base emnetzewdu2121@gmail.com Introduction to AI 252
  • 253. Summary First-order logic:  objects and relations are semantic primitives  syntax: constants, functions, predicates, equality, quantifiers Increased expressive power: sufficient to define wumpus world emnetzewdu2121@gmail.com Introduction to AI 253
  • 255. Categories and Objects The organization of objects into categories is a vital part of knowledge representation (KR). The important relationships are the subclass relation (AKO, a kind of): <category> AKO <category>, and the instance relation (ISA, is a): <object> ISA <category>. emnetzewdu2121@gmail.com Introduction to AI 255
  • 256. The upper ontology(*) [Figure: the upper-ontology taxonomy. Anything divides into AbstractObjects and GeneralizedEvents. AbstractObjects: Sets, Numbers, Representations (Categories, Sentences, Measurements, with Times and Weights as measurements). GeneralizedEvents: Intervals (Moments), Places, PhysicalObjects (Things and Stuff), Processes. Things include Animals and Agents (Humans); Stuff includes Solid, Liquid and Gas.] (*) This is AIMA’s version. Other authors have other partitions. See bus_semantics emnetzewdu2121@gmail.com Introduction to AI 256
  • 257. Bus semantics % thing top node unkn ako thing. event ako thing. set ako thing. cardinality ako number. member ako thing. abstract ako thing. activity ako thing. agent ako thing. company ako agent. contact ako thing. content ako thing. group ako thing. identity ako thing. information ako thing. list ako thing. measure ako thing. place ako thing. story ako thing. study ako activity. subset ako thing. version ako abstract. accident ako activity. activation ako activity. %% start of it addition ako abstract. address ako place. advice ako abstract. age ako year. %% (measure) agreement ako abstract. analysis ako abstract. animate ako agent. application ako activity. area ako measure. % ….. % + (750 ako items) 257 emnetzewdu2121@gmail.com Introduction to AI
  • 258. Categories • Category is a kind of set and denotes a set of objects. • A category has a set of properties that is common to all its members. • Categories are formally represented in logic as predicates, but we will also regard categories as a special kind of objects. • We then actually introduce a restricted form of second-order logic, since the terms that occur may be predicates. Example: Elephants and Mammals are categories. The set denoted by Elephants is a subset of the set denoted by Mammals. The set of properties common to Elephants is a superset of the set of properties common to Mammals. emnetzewdu2121@gmail.com Introduction to AI 258
  • 259. Taxonomy • Subcategory relations organize categories into a taxonomy or taxonomic hierarchy. Other names are type hierarchy or class hierarchy. • We state that a category is a subcategory of another category by using the notation for subsets: Basketball ⊂ Ball We will also use the notation ako(basketball,ball). emnetzewdu2121@gmail.com Introduction to AI 259
  • 260. Category representations  There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an ”object” basketball. We could then write member(x,basketball) or x ∈ basketball  We will also use the notation isa(x,basketball). Basketball is a subset or subcategory of Ball, which is abbreviated Basketball ⊂ Ball We will also use the notation ako(basketball,ball). emnetzewdu2121@gmail.com Introduction to AI 260
  • 261. Inheritance  Categories serve to organize and simplify the knowledge base through inheritance. e.g. If we say that all instances of Food are edible (edible is in the property set of Food), and if we assert that Fruit is a subcategory of Food, and Apple is a subcategory of Fruit, then we know that every apple is edible. i.e. We say that the individual apples inherit the property of edibility, in this case from their membership in the Food category. emnetzewdu2121@gmail.com Introduction to AI 261
  • 262. Reifying properties • An individual object may have a property. For example, a specific ball, BB9, can be round. In ordinary FOL, we write Round(BB9). As for categories, we can regard Round as a higher-order object, and say BB9 has the property Round We will also use the notation hasprop(BB9,round). emnetzewdu2121@gmail.com Introduction to AI 262
  • 263. Reifying Property Values Some properties are determined by an attribute and a value. For example, my basketball BB9 has diameter 9.5: Diameter(BB9) = 9.5 We can also use the notation has(bb9,diameter,9.5). An alternative representation for properties, when regarded as Boolean attributes, is has(BB9,round,true). In the same manner, we can express that a red ball has colour = red: has(BB9,colour,red). emnetzewdu2121@gmail.com Introduction to AI 263
  • 264. Logical expressions on categories • An object is a member of a category isa(bb9,basketball). • A category is a subclass of another category ako(basketball,ball). • All members of a category have some properties isa(X,basketball) => hasprop(X,round). • Members of a category can be recognized by some properties, for example: hasprop(X,orange) and hasprop(X,round) and has(X,diameter,9.5) and isa(X,ball) => isa(X,basketball) • A category as a whole has some properties isa(teacher,profession). Here, it is a fallacy to conclude that isa(tore,teacher) and isa(teacher,profession) => isa(tore,profession). 264 emnetzewdu2121@gmail.com Introduction to AI
  • 265. Category Decompositions • We can say that both Male and Female are subclasses of Animal, but we have not said that a male cannot be a female. That is expressed by Disjoint({Male,Female}) • If we know that all animals are either male or female (the two categories exhaust the possibilities), we write Exhaustive({Male,Female},Animals). • A disjoint, exhaustive decomposition is known as a partition Partition({Male,Female},Animals). emnetzewdu2121@gmail.com Introduction to AI 265
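In set terms, a partition is simply pairwise disjointness plus exhaustiveness; a tiny illustrative check in Python (the individuals are invented for the example):

    male    = {'john', 'rex'}
    female  = {'mary', 'daisy'}
    animals = {'john', 'rex', 'mary', 'daisy'}

    disjoint   = male.isdisjoint(female)        # Disjoint({Male, Female})
    exhaustive = (male | female) == animals     # Exhaustive({Male, Female}, Animals)
    print(disjoint and exhaustive)              # True: Partition({Male, Female}, Animals)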
  • 266. Physical Compositions One object can be a part of another object. For example, declaring direct parts: part(bucharest,romania). part(romania,eastern_europe). part(eastern_europe,europe). part(europe,earth). We can make a transitive extension partof part(Y,Z) and partof(X,Y) => partof(X,Z). and reflexive (*) partof(X,X). Therefore we can conclude that partof(bucharest,earth) emnetzewdu2121@gmail.com Introduction to AI 266
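The derived partof relation is just the reflexive, transitive closure of the direct part facts; a minimal Python sketch (fact names as on the slide):

    part = {('bucharest', 'romania'), ('romania', 'eastern_europe'),
            ('eastern_europe', 'europe'), ('europe', 'earth')}

    def partof(x, z):
        if x == z:                              # reflexive case: partof(X, X)
            return True
        return any(a == x and partof(b, z)      # follow one direct part link
                   for (a, b) in part)

    print(partof('bucharest', 'earth'))         # True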
  • 267. Bunch It is also useful to define composite objects with definite parts but no particular structure. For example, we might say ”The apples in the bag weigh two pounds”. It is advised that we don’t regard these apples as the set of (all) apples, but instead define them as a bunch of apples. For example, if the apples are Apple1, Apple2 and Apple3, then BunchOf({Apple1,Apple2,Apple3}) denotes the composite object with the three apples as parts, not elements. emnetzewdu2121@gmail.com Introduction to AI 267
  • 268. Substances and Objects • The real world can be seen as consisting of primitive objects (particles) and composite objects. A common characteristic is that they can be counted (individuated). However, there are objects that cannot be individuated, like Butter, Water, Nitrogen, Wine, Grass, etc. They are called stuff, and are denoted in English without articles or quantifiers (not ”a water”). Typically, they can be divided without losing their intrinsic properties: when we take a part of a substance, we still have the same substance. isa(X,butter) and partof(Y,X) => isa(Y,butter). We can say that butter melts at 30 degrees centigrade: isa(X,butter) => has(X,meltingpoint,30). 268 emnetzewdu2121@gmail.com Introduction to AI
  • 269. Measures, Abstracts, Mentals • In the common world, objects have height, mass, cost and so on. The values we assign to these properties are called measures. • We imagine that the universe includes abstract ”measure objects” such as length. Measure objects are given as a number of units, e.g. meters. Logically, we can combine this with unit functions Length(L1) = Inches(1.5) = Centimeters(3.81) • Another way is to use predicates Length(L1,1.5,inches) Abstract concepts like ”autonomy” or ”quality” are difficult to represent without resorting to artificial measurements (e.g. IQ). Mental concepts are beliefs, thoughts, feelings etc. emnetzewdu2121@gmail.com Introduction to AI 269
  • 270. Reasoning systems for categories Semantic networks and Description Logics are two closely related systems for reasoning with categories. Both can be described using logic. • Semantic networks provide graphical aids for visualizing the knowledge base, together with efficient algorithms for inferring properties of an object on the basis of its category membership. • Description logics provide a formal language for constructing and combining category definitions, and efficient algorithms for deciding subset and superset relationships between categories. emnetzewdu2121@gmail.com Introduction to AI 270
  • 271. Semantic Networks Example [Figure: a semantic network with categories Mammal, Person, Male and Female and objects Mary and John; ako links from Person to Mammal and from Male and Female to Person; isa links from Mary to Female and from John to Male; a double-boxed HaveMother link from Person to Female; a Legs attribute with value 2 on Person and value 1 on John; and sister/brother links between John and Mary.] emnetzewdu2121@gmail.com Introduction to AI 271
  • 272. Link types in semantic nets There are 3 types of entities in a semantic net: categories, objects and values (anything that is neither a category nor an object). There could then be 9 different types of relations between these, drawn with certain conventions. Note that objects can act as values also. category → category: ako(C1,C2), every C1 is a.k.o. C2 category → category: haveatt(C1,R,C2), every C1 has an R that is a.k.o. C2 category → value: have(C1,R,V), every C1 has attribute value R = V object → category: isa(O,C), O is a C object → object: has(O1,R,O2), O1 has relation R to O2 object → value: has(O,R,V), O has attribute value R = V object → object: partof(O1,O2), O1 is a part of O2 In addition, we have all kinds of relations between values, e.g. V1 > 2*V2 + 5 272 emnetzewdu2121@gmail.com Introduction to AI
  • 273. Further comments on link types We know that persons have female persons as mothers, but we cannot draw a HasMother link from Persons to FemalePersons, because HasMother is a relation between a person and his or her mother, and categories do not have mothers. For this reason, we use a special notation – the double-boxed link. In logic, we have given it the name haveatt, e.g. haveatt(person,mother,femaleperson). Compare this to haveatt(lion,mother,lioness). We also want to express that persons normally have two legs. As before, we must be careful not to assert that categories have legs; instead we use a single-boxed link. In logic, we have given it the name have, e.g. have(person,legs,2). emnetzewdu2121@gmail.com Introduction to AI 273
  • 274. Content of semantic net A paraphrase of the knowledge, with its logic representation: All persons are mammals: ako(person,mammal). All females are persons: ako(female,person). Persons have a mother who is female: haveatt(person,mother,female). Persons normally have 2 legs: have(person,legs,2). John has 1 leg: has(john,legs,1). Mary is a female: isa(mary,female). John is a male: isa(john,male). John has a sister Mary: has(john,sister,mary). Mary has a brother John: has(mary,brother,john). 274 emnetzewdu2121@gmail.com Introduction to AI
  • 275. Inheritance and inference in semantic nets The rules of inheritance can now be automated using our logic representation of semantic nets. isa(X,Y) and ako(Y,Z) => isa(X,Z). have(Y,R,V) and isa(X,Y) => has(X,R,V). haveatt(C,R,M) and isa(X,C) and has(X,R,V) => isa(V,M). With these definitions, we can prove that Mary has two legs, even if this information is not explicitly represented. emnetzewdu2121@gmail.com Introduction to AI 275
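The three rules above can be run directly over the logic representation of the example net; a minimal Python sketch (facts as on slides 274 and 276, helper names invented). Note that the object-level value for John's legs is made to win over the category default here, which captures the intended default reading; the plain rules on the slide would derive both values.

    ako  = {('person', 'mammal'), ('female', 'person'), ('male', 'person')}
    isa  = {('mary', 'female'), ('john', 'male')}
    have = {('person', 'legs', 2)}                  # category-level default
    has  = {('john', 'legs', 1), ('john', 'sister', 'mary'), ('mary', 'brother', 'john')}

    def ako_star(sub, sup):                         # transitive closure of ako
        return (sub, sup) in ako or any(s == sub and ako_star(c, sup) for (s, c) in ako)

    def is_a(obj, cat):                             # isa(X,Y) and ako(Y,Z) => isa(X,Z)
        return any(c == cat or ako_star(c, cat) for (o, c) in isa if o == obj)

    def attribute(obj, rel):                        # have(Y,R,V) and isa(X,Y) => has(X,R,V)
        for (o, r, v) in has:
            if o == obj and r == rel:
                return v
        for (cat, r, v) in have:
            if r == rel and is_a(obj, cat):
                return v
        return None

    print(attribute('mary', 'legs'))                # 2, inherited from person
    print(attribute('john', 'legs'))                # 1, the explicit value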
  • 276. Example of inheritance isa(X,Y)and ako(Y,Z)=>isa(X,Z) have(X,Y,Z)and isa(A1,X)=>has(A1,Y,Z) haveatt(X,Y,C) and isa(A1,X) and has(A1,Y,V) => isa(V,C). t=>ako(female,person) t=>ako(male,person) t=>haveatt(person,mother,female) t=>have(person,legs,2) t=>isa(mary,female) t=>isa(john,male) t=>has(john,legs,1) t=>has(john,sister,mary) t=>has(mary,brother,john) PROOF: has(mary,legs,2) because have(person,legs,2) and isa(mary,person) have(person,legs,2) is true isa(mary,person) because isa(mary,female) and ako(female,person) isa(mary,female) is true ako(female,person) is true emnetzewdu2121@gmail.com Introduction to AI 276