1. IT201 Basics of Intelligent
Systems
Text books:
[T1] Madisetti Vijay and Bahga Arshdeep, Internet of Things (A Hands-on-Approach), 1st Edition,
VPT, 2014.
[T2] Buyya Rajkumar, Vecchiola Christian & Selvi S. Thamarai, Mastering Cloud Computing, McGraw
Hill Publication, New Delhi, 2013.
[T3] Engelbrecht Andries P., Computational Intelligence: An Introduction, Wiley.
Reference Books:
[R1] Raj Pethuru and Raman Anupama C., The Internet of Things: Enabling Technologies, Platforms,
and Use Cases, CRC Press.
[R2] Konar Amit, Computational Intelligence: Principles, Techniques and Applications, Springer.
[R3] Russell Stuart and Norvig Peter, Artificial Intelligence: A Modern Approach, 2nd edition, Prentice Hall.
Dr. Shripal Vijayvargiya, Dr. Vishwambhar Pathak, Dept of CSE, BITMESRA Jaipur Campus
2. Module I
AI Concepts
• Introduction to AI and Intelligent Agents
• AI problems and Solution approaches
• Problem solving using Search and Heuristics
• AI Knowledge base: creation, updating, and
reasoning
• Broad categories of branches in AI and
intelligent systems. (8 L)
3. Module I : AI Concepts- Introduction to AI
https://www.cs.utexas.edu/~mooney/cs343/slide-handouts/intro.4.pdf
Definition of AI- Art of designing Systems that { think like humans +
think rationally + act like humans + act rationally }
Thinking Humanly: Cognitive Modelling- Interdisciplinary field (AI,
psychology, linguistics, philosophy, anthropology) that tries to form
computational theories of human cognition
Thinking Rationally : Laws of Thought- Formalize “correct”
reasoning using a mathematical model (e.g. of deductive reasoning)
Acting Humanly: The Turing Test - If the response of a computer to
an unrestricted textual natural-language conversation cannot be
distinguished from that of a human being then it can be said to be
intelligent
Acting Rationally: Rational Agents- rationality involves maximizing
goals within the computational and other resources available.
4. Module I : AI Concepts- Introduction to AI
https://www.cs.utexas.edu/~mooney/cs343/slide-handouts/intro.4.pdf
Foundations of AI: Philosophy, Mathematics, Psychology, Computer
Science, Linguistics
Expert Systems: detailed knowledge of the specific domain can help
control search and lead to expert level performance for restricted tasks
Typical Applications:
Industrial Applications: Character and hand-writing recognition;
Speech recognition; Processing credit card applications; Financial
prediction; Chemical process control;
Intelligent agents and Internet applications (softbots, believable
agents, intelligent information access);
Scheduling/configuration applications (Successful companies: I2, Red
Pepper, Trilogy)
5. AI = “Thinking Humanly”
• Get inside the actual working of human minds.
• Two ways to do that
– Through Introspection- to catch our own thoughts as
they go by.
– Through psychological experiments
• If the program's input/output and timing behaviors match
corresponding human behaviors, that is evidence that some of
the program's mechanisms could also be operating in humans
• Cognitive Science: brings together computer
models from AI and experimental techniques
from psychology to try to construct precise and
testable theories of the human mind.
6. AI= “Thinking Rationally”: “Law of Thought”
• Aristotle first codified “right thinking” i.e. perfect
reasoning process.
• He proposed syllogisms, that provided patterns for
argument structures that always yielded correct
conclusions when given correct premises-
– e.g. "Socrates is a man; all men are mortal; therefore, Socrates
is mortal."
• These laws of thought were supposed to govern
the operation of the mind; their study initiated the
field called LOGIC.
• Logical notations provided by Propositional calculus
and Predicate logic can be programmed using
languages like LISP, PROLOG etc.
7. AI = “Acting Humanly”- The Turing Test approach (1950)
• Was designed to provide a satisfactory operational definition of
intelligence.
• The computer passes the test if a human interrogator, after posing some
written questions, cannot tell whether the written responses come from
a person or not.
• For programming a computer to pass the test, the computer would need
to possess the following 6 capabilities:
– natural language processing- to enable it to communicate successfully in
English/specific language
– knowledge representation- to store what it knows or hears
– automated reasoning- to use the stored information to answer questions and to draw
new conclusions
– machine learning- to adapt to new circumstances and to detect and extrapolate
patterns
– computer vision to perceive objects
– robotics to manipulate objects and move about
• Total Turing Test- includes a video signal so that the interrogator can test
the subject's perceptual abilities, as well as the opportunity for the
interrogator to pass physical objects "through the hatch." To pass the total
Turing Test, the computer will need computer vision and robotics.
8. AI= “Acting Rationally”
• The study of AI as rational-agent design has at
least two advantages.
– First, it is more general than the "laws of
thought" approach, because correct inference is
just one of several possible mechanisms for
achieving rationality.
– Second, it is more amenable to scientific
development than are approaches based on
human behavior or human thought, because the
standard of rationality is clearly defined and completely general.
9. Agents
• An agent is an entity that perceives and acts.
• So an agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators
• Example:
– Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for
actuators;
– Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
• An agent is completely specified by the agent function mapping percept sequences
(histories) to actions: [f: P* → A]
• The agent program runs on the physical architecture to produce f:
– agent = architecture + program
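The mapping f : P* → A and the split "agent = architecture + program" can be sketched in Python. The class and names below are illustrative, not code from the slides:

```python
# Minimal sketch of the agent abstraction: the program implements the
# agent function f : P* -> A by choosing an action from the percept
# history (the percept sequence accumulated so far).

class Agent:
    def __init__(self, program):
        self.program = program      # maps percept history -> action
        self.percepts = []          # P*: the percept sequence so far

    def step(self, percept):
        """Record the new percept, then run the agent program."""
        self.percepts.append(percept)
        return self.program(self.percepts)

# A trivial program that acts only on the latest percept.
reflex = Agent(lambda percepts: "Suck" if percepts[-1] == "Dirty" else "Right")
print(reflex.step("Dirty"))   # Suck
print(reflex.step("Clean"))   # Right
```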
10. Acting rationally: Rational agent
• Rational behavior: For each possible percept sequence, a rational agent
should select the right action.
• The right thing/action: that which is expected to maximize its
performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has. Doesn't
necessarily involve thinking – e.g., blinking reflex – but thinking should be
in the service of rational action.
• Performance measure: An objective criterion for success of an agent's
behavior.
– E.g. performance measure of a vacuum-cleaner agent could be amount of dirt
cleaned up, amount of time taken, amount of electricity consumed, amount of
noise generated, etc.
• Rationality is distinct from omniscience (all-knowing with infinite
knowledge).
• Caveat: computational limitations make perfect rationality unachievable
→ design the best program for the given machine resources
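The vacuum-cleaner performance measure described above can be made concrete as a simple scoring function. The weights below are invented purely for illustration:

```python
# Illustrative objective performance measure for a vacuum-cleaner agent:
# reward dirt cleaned, penalize time, electricity, and noise.
# The weights (10, 1, 2, 0.5) are hypothetical, not from the slides.

def performance(dirt_cleaned, time_taken, electricity_used, noise_made):
    """Higher is better: an objective criterion for success."""
    return 10 * dirt_cleaned - 1 * time_taken - 2 * electricity_used - 0.5 * noise_made

print(performance(dirt_cleaned=5, time_taken=8, electricity_used=3, noise_made=4))  # 34.0
```

A rational agent would select actions expected to maximize such a measure, given its percept sequence and built-in knowledge.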
11. …Rational agents
• Agents can perform actions in order to modify future percepts
so as to obtain useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is determined by its own
experience (with ability to learn and adapt)
15. PEAS
• Agent: Part-picking robot
– Performance measure: Percentage of parts in
correct bins
– Environment: Conveyor belt with parts, bins
– Actuators: Jointed arm and hand
– Sensors: Camera, joint angle sensors
16. PEAS
• Agent: Interactive English tutor
– Performance measure: Maximize student's score on test
– Environment: Set of students
– Actuators: Screen display (exercises, suggestions, corrections)
– Sensors: Keyboard
17. Review Questions
• Examine the AI literature to discover whether the following
tasks can currently be solved by computers:
a. Playing a decent game of table tennis (ping-pong).
b. Driving in the center of Cairo.
c. Buying a week's worth of groceries at the market.
d. Buying a week’s worth of groceries on the web.
e. Playing a decent game of bridge at a competitive level.
f. Discovering and proving new mathematical theorems.
g. Writing an intentionally funny story.
h. Giving competent legal advice in a specialized area of law.
i. Translating spoken English into spoken Swedish in real time.
j. Performing a complex surgical operation.
18. Overview of Steps of System Design (additional related topic)
https://nptel.ac.in/courses/Webcourse-contents/IISc-BANG/System%20Analysis%20and%20Design/pdf/PPTs/mod2.pdf
• SDLC includes the following activities
1. Requirements Determination
2. Requirements Specifications
3. Feasibility Analysis
4. Final Specifications
5. Hardware Study
6. System Design
7. System Implementation
8. System Evaluation
9. System Modification
• * Feasibility Assessment: (will guide
setting up the ASSUMPTIONS for
PEAS specification)
– Managerial feasibility
– Technical feasibility
– Financial feasibility
– Operational feasibility
– Legal viability
19. Agent functions and programs
• An agent is completely specified by the agent
function mapping percept sequences to
actions
• One agent function (or a small equivalence
class) is rational
• Aim: find a way to implement the rational
agent function concisely
20. Agent types
• Four basic types in order of increasing
generality:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
26. Module I : AI Concepts- Intelligent Agents
[Russell] [http://aima.cs.berkeley.edu/algorithms.pdf]
TABLE-DRIVEN-AGENT; REFLEX-VACUUM-AGENT; SIMPLE-REFLEX-AGENT; MODEL-BASED-REFLEX-AGENT
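A Python rendering of the REFLEX-VACUUM-AGENT from the referenced AIMA pseudocode; the percept is assumed here to be a (location, status) pair:

```python
# Simple reflex agent for the two-cell vacuum world: the action depends
# only on the current percept, not on the percept history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
print(reflex_vacuum_agent(("B", "Clean")))   # Left
```

A TABLE-DRIVEN-AGENT would instead index a table by the entire percept sequence, which is why the table grows impractically large.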
28. Module I : AI Concepts- Intelligent Agents
[Russell] [http://aima.cs.berkeley.edu/algorithms.pdf]
SEARCH BASED PROBLEM SOLVING AGENTS
29. Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode depends
only on the episode itself.
30. Environment types
• Static (vs. dynamic): The environment is unchanged
while an agent is deliberating. (The environment is
semidynamic if the environment itself does not
change with the passage of time but the agent's
performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent operating by
itself in an environment.
31. Problem Solving
•Rational agents need to perform sequences of actions in order
to achieve goals.
•Intelligent behavior can be generated by having a look-up table
or reactive policy that tells the agent what to do in every
circumstance, but:
- Such a table or policy is difficult to build
- All contingencies must be anticipated
•A more general approach is for the agent to have knowledge of
the world and how its actions affect it and be able to simulate
execution of actions in an internal model of the world in order
to determine a sequence of actions that will accomplish its
goals.
•This is the general task of problem solving and is typically
performed by searching through an internally modelled space
of world states.
32. Problem Solving Task
•Given:
-An initial state of the world
-A set of possible actions or operators that can be
performed.
-A goal test that can be applied to a single state of the world to
determine if it is a goal state.
•Find:
-A solution stated as a path of states and operators that shows
how to transform the initial state into one that satisfies the
goal test.
•The initial state and set of operators implicitly define a state
space of states of the world and operator transitions between
them. May be infinite.
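The problem-solving task above (initial state, operators, goal test) can be packaged as a small class. The interface below is one common convention, not code from the text:

```python
# A problem is implicitly a state space: the initial state plus the
# operators define all reachable states and transitions between them.

class Problem:
    def __init__(self, initial_state, operators, goal_test):
        self.initial_state = initial_state
        self.operators = operators    # list of (name, state -> state or None)
        self.goal_test = goal_test    # state -> bool

    def successors(self, state):
        """All (operator-name, next-state) pairs reachable in one step."""
        for name, apply in self.operators:
            nxt = apply(state)
            if nxt is not None:
                yield name, nxt

# Toy example: count from 0 up to 3 with a single "+1" operator.
p = Problem(0, [("inc", lambda s: s + 1 if s < 3 else None)], lambda s: s == 3)
print(list(p.successors(0)))   # [('inc', 1)]
```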
33. Measuring Performance
•Path cost: a function that assigns a cost to a path,
typically by summing the cost of the individual
operators in the path. May want to find minimum
cost solution.
•Search cost: The computational time and space
(memory) required to find the solution.
•Generally there is a trade-off between path cost
and search cost, and one must satisfice: find the
best solution in the time that is available.
34. Foundations of AI
• Philosophy
– Can formal rules be used to draw valid conclusions?
– How does the mind arise from a physical brain?
– Where does knowledge come from?
– How does knowledge lead to action?
• Mathematics
– What are the formal rules to draw valid conclusions?
– What can be computed?
– How do we reason with uncertain information?
• Economics
– How should we make decisions so as to maximize payoff?
– How should we do this when others may not go along?
– How should we do this when the payoff may be far in the future?
• Neuroscience
– How do brains process information?
• Psychology
– How do humans and animals think and act?
• Computer Engineering
– How can we build an efficient computer?
• Control Theory and Cybernetics
– How can artifacts operate under their own control?
• Linguistics
– How does language relate to thought?
36. Problem types
• Deterministic, fully observable single-state problem
– Agent knows exactly which state it will be in; solution is a
sequence
• Non-observable sensorless problem (conformant problem)
– Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable contingency
problem
– percepts provide new information about current state
– often interleave search, execution
• Unknown state space exploration problem
37. Module I : AI Concepts- AI problems and Solution approaches
[Russell / Mooney] [https://www.cs.utexas.edu/~mooney/cs343/slide-handouts/search.4.pdf]
The general task of problem solving is typically performed by
searching through an internally modelled space of world states
Problem Solving Task
o Given: -An initial state of the world; A set of possible actions or
operators that can be performed; A goal test that can be applied to a single
state of the world to determine if it is a goal state.
o Find: -A solution stated as a path of states and operators that shows how to
transform the initial state into one that satisfies the goal test.
o The initial state and set of operators implicitly define a state space of states
of the world and operator transitions between them. May be infinite.
Measuring Performance
o Path cost: a function that assigns a cost to a path, typically by summing the
cost of the individual operators in the path. May want to find minimum cost
solution.
o Search cost: The computational time and space (memory) required to find
the solution.
o Generally there is a trade-off between path cost and search cost and one
must satisfice and find the best solution in the time that is available.
38. Module I : AI Concepts- Common Problems
[Russell / Mooney] [https://www.cs.utexas.edu/~mooney/cs343/slide-handouts/search.4.pdf]
Route Finding,
8-Puzzle
8-Queen
Missionaries and cannibals
39. Module I : AI Concepts- Solution Approaches
[Russell / Mooney] [https://www.cs.utexas.edu/~mooney/cs343/slide-handouts/search.4.pdf]
Search Problem
Approaches
Uninformed search,
Informed search
40. Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
• Path cost: Number of intermediate cities, distance traveled,
expected travel time
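This formulation can be run as an actual search. The road map below is a hand-copied fragment of the AIMA Romania map (adjacency only, distances omitted), so treat it as illustrative:

```python
from collections import deque

# Fragment of the Romania road map as an adjacency list.
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti", "Craiova"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest", "Craiova"],
    "Bucharest": ["Fagaras", "Pitesti"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad", "Lugoj"],
}

def route(start, goal):
    """Breadth-first search for a fewest-cities path from start to goal."""
    frontier = deque([[start]])     # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads.get(path[-1], []):
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])

print(route("Arad", "Bucharest"))   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note this minimizes the number of intermediate cities, not distance traveled; minimizing distance needs uniform-cost search over edge lengths.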
42. Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
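A sketch of this 8-puzzle formulation in Python; the state encoding (tuple of 9 tiles, 0 for the blank) and action names are my own choices:

```python
# 8-puzzle formulation: states are tuples of 9 tiles (0 = blank),
# actions move the blank, the goal test compares against GOAL.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def moves(state):
    """Yield (action, next-state) pairs for each legal blank move."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for action, dr, dc in [("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the neighboring tile
            yield action, tuple(s)

def goal_test(state):
    return state == GOAL

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)     # one "Left" away from the goal
print([a for a, _ in moves(start)])      # ['Down', 'Left', 'Right']
```

Path cost is 1 per move, so an optimal solver minimizes the number of moves.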
43. “Toy” Problems
•8-queens problem (N-queens problem)
•Missionaries and cannibals
Identity of individuals irrelevant, best to represent
state as
(M,C,B) M = number of missionaries on left bank
C = number of cannibals on left bank
B = number of boats on left bank (0 or 1)
Operators to move: 1M, 1C, 2M, 2C, 1M1C
Goal state: (0,0,0)
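The (M,C,B) formulation can be checked with a short successor function. The safety rule assumed here is the standard one: missionaries are never outnumbered by cannibals on either bank.

```python
# Missionaries and cannibals: state (M, C, B) counts missionaries,
# cannibals, and boats on the LEFT bank.  Start (3,3,1), goal (0,0,0).

MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # 1M, 1C, 2M, 2C, 1M1C

def safe(m, c):
    """Safe if missionaries on each bank are absent or not outnumbered."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1     # boat carries people away from its bank
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - b)

print(list(successors((3, 3, 1))))   # [(3, 2, 0), (3, 1, 0), (2, 2, 0)]
```

From the start state only three moves are safe: ferrying 1C, 2C, or 1M1C across.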
44. More Realistic Problems
• Route finding
• Travelling salesman problem
• VLSI layout
• Robot navigation
• Web searching
45. Searching Concepts
•A state can be expanded by generating all states that can be
reached by applying a legal operator to the state.
•State space can also be defined by a successor function that
returns all states produced by applying a single legal operator.
•A search tree is built by generating search nodes,
successively expanding states starting from the
initial state as the root.
•A search node in the tree can contain
-Corresponding state
-Parent node
-Operator applied to reach this node
-Length of path from root to node (depth)
-Path cost of path from initial state to node
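The search-node record listed above might look like this in Python; the field names follow the slide, but the class itself is illustrative:

```python
# A search node records its state, parent, the operator that produced it,
# its depth, and the path cost from the initial state.

class Node:
    def __init__(self, state, parent=None, operator=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.operator = operator
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

    def path(self):
        """States from the root down to this node, via parent links."""
        node, states = self, []
        while node:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("A")
child = Node("B", parent=root, operator="go-B", step_cost=4)
print(child.depth, child.path_cost, child.path())   # 1 4 ['A', 'B']
```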
47. Search Algorithm
• Easiest way to implement various search
strategies is to maintain a queue of unexpanded
search nodes.
• Different strategies result from different methods
for inserting new nodes in the queue.
• Properties of search strategies
-Completeness
-Time Complexity
-Space Complexity
-Optimality
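The queue-based skeleton described above can be sketched so that only the insertion function differs between strategies; all names below are illustrative:

```python
# Generic search over a queue of unexpanded paths.  The search strategy
# is determined entirely by how new nodes are inserted into the queue.

def generic_search(start, successors, goal_test, insert):
    frontier = [[start]]               # queue of partial paths
    while frontier:
        path = frontier.pop(0)         # always take the head of the queue
        state = path[-1]
        if goal_test(state):
            return path
        children = [path + [s] for s in successors(state)]
        frontier = insert(frontier, children)
    return None

bfs_insert = lambda queue, children: queue + children    # FIFO: append at end
dfs_insert = lambda queue, children: children + queue    # LIFO: push at front

succ = {1: [2, 3], 2: [4], 3: [5]}.get
print(generic_search(1, lambda s: succ(s, []), lambda s: s == 5, bfs_insert))
```

Swapping `bfs_insert` for `dfs_insert` (or a cost-ordered insert) changes the strategy without touching the loop, which is the point of the slide.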
48. Search Strategies
• Uninformed search strategies (blind, exhaustive,
brute-force) do not guide the search with any additional
information about the problem.
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
• Informed search strategies (heuristic, intelligent) use
information about the problem (estimated distance from
a state to the goal) to guide the search.
49. Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at
end
53. Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
54. Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution,
O(b^⌈C*/ε⌉) where C* is the cost of the optimal
solution
• Space? # of nodes with g ≤ cost of optimal solution,
O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing order of
g(n)
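A minimal uniform-cost search using a priority queue ordered by path cost g(n); the example graph and its edge costs are invented for illustration:

```python
import heapq

# Uniform-cost search: the fringe is a priority queue ordered by the
# path cost g(n), so nodes are expanded in increasing order of g.

def uniform_cost_search(start, goal, edges):
    frontier = [(0, start, [start])]        # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in edges.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

edges = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(uniform_cost_search("A", "C", edges))   # (2, ['A', 'B', 'C'])
```

Here the direct A→C edge costs 5, but UCS correctly prefers A→B→C with total cost 2; with all step costs equal it degenerates to breadth-first search.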
55. Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
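A minimal iterative depth-first search matching the LIFO-fringe description above; the example tree is invented:

```python
# Depth-first search: the fringe behaves as a LIFO stack, so the most
# recently generated (deepest) paths are expanded first.

def depth_first_search(start, successors, goal_test):
    fringe = [[start]]                 # LIFO: push/pop at the end
    visited = set()                    # avoid repeated states along paths
    while fringe:
        path = fringe.pop()            # deepest unexpanded node
        state = path[-1]
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for s in successors(state):
            fringe.append(path + [s])
    return None

tree = {1: [2, 3], 2: [4, 5], 3: [6]}
print(depth_first_search(1, lambda s: tree.get(s, []), lambda s: s == 5))
```

The visited-set check implements the "avoid repeated states" modification mentioned on the properties slide, which makes DFS complete in finite spaces.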
67. Properties of depth-first search
• Complete? No: fails in infinite-depth spaces, spaces
with loops
– Modify to avoid repeated states along path →
complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than
breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No