FOUNDATIONS OF ARTIFICIAL
INTELLIGENCE
Introduction
What is AI?
Views of AI fall into four categories:
Thinking humanly Thinking rationally
Acting humanly Acting rationally
The textbook advocates "acting rationally"
Artificial Intelligence
• Definition:
 The art of creating machines that perform
functions that require intelligence when
performed by people.
 The study of how to make computers do
things at which, at the moment, people are better.
 The study of the computations that make it
possible to perceive, reason, and act.
Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?"  "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
• Anticipated all major arguments against AI in following 50 years
• Suggested major components of AI: knowledge, reasoning, language
understanding, learning
Capabilities required of a computer
• Natural language processing: to enable it to
communicate successfully in English.
• Knowledge representation: to store what it knows
or hears.
• Automated reasoning: to use the stored information
to answer questions and to draw new conclusions.
• Machine learning: to adapt to new circumstances
and to detect and extrapolate patterns.
Total Turing Test
The computer will need
* Computer vision: to perceive objects.
* Robotics: to manipulate objects and move.
Thinking humanly: cognitive modeling
• 1960s "cognitive revolution": information-processing
psychology
• Requires scientific theories of internal activities of
the brain
• How to validate? Requires
1) Predicting and testing behavior of human subjects (top-down),
or 2) Direct identification from neurological data (bottom-up)
• Both approaches (roughly, Cognitive Science and
Cognitive Neuroscience) are now distinct from AI.
• Cognitive science is the interdisciplinary scientific
study of how information concerning faculties such
as perception (sensing), language, reasoning, and
emotion is represented and transformed in a (human
or other animal) nervous system or machine (e.g., a
computer).
Thinking rationally: "laws of thought"
Eg:
Statements:
All men are mortal.
Socrates is a man.
Conclusion:
Socrates is mortal.
• These laws of thought were supposed to govern the
operation of the mind; their study initiated the field
called logic.
Acting rationally: rational agent
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize
goal achievement, given the available information.
• An agent is something that acts.
• Computer agents are expected to have other attributes:
- operating under autonomous control,
- perceiving their environment,
- adapting to change.
Rational agents
• An agent is an entity that perceives and acts.
• A rational agent is one that acts so as to achieve the
best outcome.
• Making correct inferences is part of being a rational
agent.
• To act rationally is to reason logically to the
conclusion that a given action will achieve one’s goal.
Applications
• Autonomous planning & scheduling:
NASA’s Remote Agent program controlled the
operation of a spacecraft.
• Game playing:
IBM’s Deep Blue defeated world chess champion
Garry Kasparov in a match.
• Autonomous control:
ALVINN, a computer vision system, steered a car.
• Diagnosis:
Medical diagnosis program based on
probabilistic analysis-to perform at the level of
an expert physician.
• Logistics planning:
U.S. forces deployed a Dynamic Analysis and
Replanning Tool (DART) to do automated
logistics planning and scheduling for
transportation.
Logistics planning
• Logistics (military definition): the science of
planning and carrying out the movement and
maintenance of forces; those aspects of military
operations that deal with the design, development,
storage, movement, and distribution of materiel;
the movement, evacuation, and hospitalization of
personnel; and the maintenance, operation, and
disposition of facilities.
• Robotics:
Many surgeons now use robot assistants in
microsurgery; the system creates a 3D model of
the patient’s internal anatomy.
• Language understanding and problem
solving:
PROVERB, a computer program that solves
crossword puzzles using word filters.
Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon
that environment through actuators
• Human agent: eyes, ears, and other organs for
sensors; hands, legs, mouth, and other body parts
for actuators.
• Robotic agent: cameras and infrared range finders for
sensors; various motors for actuators.
Agents and environments
• The agent function maps from percept histories to
actions:
f: P* → A
• The agent program runs on the physical architecture
to produce f
• agent = architecture + program
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
Percept sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B,Clean] Left
[B,Dirty] Suck
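The percept-to-action table above can be implemented directly as a lookup. A minimal Python sketch (the function and table names are illustrative, not from the text):

```python
# Agent function for the two-square vacuum world, written as a lookup table.
VACUUM_TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ("A", "Dirty")
    return VACUUM_TABLE[percept]
```

A full table-driven agent would index on the entire percept history, which is why tables are impractical in general; here a single percept suffices.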
Rational agents
• An agent should strive to "do the right thing", based
on what it can perceive and the actions it can
perform. The right action is the one that will cause
the agent to be most successful
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be:
1. amount of dirt cleaned up,
2. amount of time taken,
3. amount of electricity consumed,
4. amount of noise generated, etc.
Rational agents
• Agents can perform actions in order to modify
future percepts so as to obtain useful
information (information gathering,
exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with ability
to learn and adapt)
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• e.g., the task of designing an automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• Must first specify the setting for intelligent agent
design
• e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic.
– Actuators: Steering wheel, accelerator, brake, signal, horn.
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard.
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests,
diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
Environment types
• Fully observable (vs. partially observable): An
agent's sensors give it access to the complete
state of the environment (the sensors detect all
relevant aspects).
An environment may be partially observable because of
noisy and inaccurate sensors (e.g., the vacuum agent).
• Deterministic (vs. stochastic): The next state of
the environment is completely determined by
the current state and the agent's action (e.g., the vacuum world).
If the environment is partially observable, it may
appear stochastic.
(If the environment is deterministic except for
the actions of other agents, then the
environment is strategic.)
• Episodic (vs. sequential):
The agent's experience is divided into
atomic "episodes" (each episode consists of
the agent perceiving and then performing a
single action).
In a sequential environment, the current decision
can affect all future decisions.
Eg: chess and taxi driving are sequential.
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating
(e.g., a puzzle).
A dynamic environment is continuously asking the
agent what it wants to do next (e.g., taxi driving).
(The environment is semidynamic if the
environment itself does not change with the
passage of time but the agent's performance
score does, as in timed chess.)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and
actions (e.g., chess).
Continuous: taxi driving (the speed and location of
the taxi are continuous values).
• Single agent (vs. multiagent): An agent
operating by itself in an environment.
Eg: Crossword puzzle: single-agent.
Chess: two-agent, competitive multiagent environment.
Taxi driving: cooperative multiagent environment.
Agent functions and programs
• agent = architecture + program
• Agent program: takes the current percept as
input.
• Agent function: takes the entire percept
history.
Agent types
Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
• Select actions on the basis of the current
percept, ignoring the rest of the percept
history.
• Eg: the vacuum agent
• Its decision is based only on the current
location and on whether that location contains dirt.
Simple Reflex agent
function REFLEX-VACUUM-AGENT([location, status]) returns an
action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
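The pseudocode above translates almost line for line into Python (a minimal sketch; the function name mirrors the pseudocode):

```python
def reflex_vacuum_agent(location, status):
    # Act on the current percept only; no memory of earlier percepts.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"  # location == "B"
```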
Simple reflex agents
function SIMPLE-REFLEX-AGENT(percept) returns an action
static: rules, a set of condition-action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
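The same rule-matching scheme can be sketched generically in Python. The helper names below (`make_simple_reflex_agent`, the rule list) are illustrative, not from the text; each rule pairs a condition predicate with an action, mirroring RULE-MATCH and RULE-ACTION:

```python
def make_simple_reflex_agent(rules, interpret_input):
    # rules: list of (condition_predicate, action) pairs.
    def agent(percept):
        state = interpret_input(percept)       # INTERPRET-INPUT
        for condition, action in rules:        # RULE-MATCH
            if condition(state):
                return action                  # RULE-ACTION
        return "NoOp"
    return agent

# Example: the vacuum agent expressed as condition-action rules.
vacuum_rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
vacuum_agent = make_simple_reflex_agent(vacuum_rules, lambda p: p)
```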
Model-based reflex agents
• Partial observability: maintain some sort of
internal state that depends on the percept
history and thereby reflects unobserved aspects
of the current state.
Two kinds of knowledge are needed:
1) How the world evolves independently of the
agent.
2) How the agent’s own actions affect the world.
This knowledge of “how the world works” is the agent's model of the world.
Model-based reflex agents
function REFLEX-AGENT-WITH-STATE(percept) returns an
action
static: state, a description of the current world state
rules, a set of condition-action rules
action, the most recent action, initially none
state ← UPDATE-STATE(state, action, percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
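In Python, the pseudocode's "static" variables can be kept in a closure, so the internal state persists across calls. The sketch below is illustrative (the model and rule names are hypothetical): the agent remembers the last-seen status of each square, so it can stop once it believes both squares are clean, which a simple reflex agent cannot do.

```python
def make_model_based_agent(update_state, rules):
    # Internal state persists across calls; the dict plays the role of
    # the pseudocode's "static" variables.
    memory = {"state": None, "action": None}

    def agent(percept):
        memory["state"] = update_state(memory["state"], memory["action"], percept)
        for condition, action in rules:
            if condition(memory["state"]):
                memory["action"] = action
                return action
        memory["action"] = "NoOp"
        return "NoOp"

    return agent

# Hypothetical vacuum-world model: remember each square's last-seen status.
def update_vacuum_state(state, action, percept):
    state = dict(state or {})
    location, status = percept
    state[location] = status
    state["loc"] = location
    return state

vacuum_rules = [
    (lambda s: s[s["loc"]] == "Dirty", "Suck"),
    (lambda s: s.get("A") == "Clean" and s.get("B") == "Clean", "NoOp"),
    (lambda s: s["loc"] == "A", "Right"),
    (lambda s: s["loc"] == "B", "Left"),
]
model_based_vacuum = make_model_based_agent(update_vacuum_state, vacuum_rules)
```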
Goal-based agents
• Along with the current state description, the agent
needs goal information.
Eg: taxi driving: the passenger’s destination.
• Goal-based action selection can be straightforward.
• The agent asks: “What will happen if I do such and such?”
Goal-based agents
Utility-based agents
• Goals alone are not enough to generate high-quality
behavior.
• Goals just provide a crude distinction between “happy”
and “unhappy” states.
Two kinds of cases where goals are inadequate:
• Conflicting goals: only some of them can be
achieved.
• Several goals: none can be achieved with certainty,
so the relative importance of the goals must be weighed.
Utility-based agents
Learning agents
• Learning allows the agent to operate in initially
unknown environments.
4 components:
1) Learning element.
2) Performance element.
3) Critic.
4) Problem generator.
• Learning element:
Responsible for making improvements.
• Performance element:
Responsible for selecting external actions; it is
what we previously considered the entire agent:
it takes in percepts and decides on actions.
• Critic:
The learning element uses feedback from the critic
on how the agent is doing, and determines how
the performance element should be modified to
do better in the future.
• Problem generator:
Responsible for suggesting actions that will
lead to new and informative experiences.
Learning agents
Eg: Taxi driving
- Driving actions: performance element.
- Observing the world and giving feedback to the
learning element: critic.
- Formulating a rule about a bad action: learning element.
- Installing the new rule to modify behavior:
performance element.
- Identifying areas of behavior in need of improvement
and suggesting experiments: problem generator.
Formulating problem
Example: Romania
Formulating problem
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Vacuum world state space graph
• states? integer dirt and robot location
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
Real-world problems
• Touring problem.
• Traveling salesman problem.
• VLSI layout. (positioning components and connections on a
chip to minimize area, circuit delay, and capacitance, and
maximize manufacturing yield)
• Automatic assembly sequencing. (assembling the parts
of some object; protein design: finding a sequence of amino
acids)
• Internet searching. (looking for answers to questions)
Tree search algorithms
EXAMPLE
General tree search
Search strategies
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
Fringe:
Collection of nodes. Each element in the fringe is a leaf node, a node
with no successors in the tree.
Uninformed search strategies
• Uninformed search strategies use only the
information available in the problem
definition(Blind search)
• Breadth-first search.
• Uniform-cost search.
• Depth-first search.
• Depth-limited search.
• Iterative deepening search.
• Bidirectional search.
Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at
end
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
b: branching factor
d: depth of the shallowest goal
m: maximum length of any path
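A minimal sketch of breadth-first search with a FIFO fringe, assuming a `successors` function that returns a node's children (the names are illustrative):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    # Expand the shallowest unexpanded node first: the fringe is a FIFO
    # queue, so new successors go at the end.
    fringe = deque([[start]])
    visited = {start}
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])
    return None  # no solution
```

Because nodes are expanded in order of depth, the first path found to the goal uses the fewest steps, which is why BFS is optimal when every step costs the same.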
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost.
• Expands the node n with the lowest path cost g(n);
if all step costs are equal, this is identical to
breadth-first search.
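Ordering the fringe by path cost is naturally done with a priority queue. A sketch using Python's `heapq`, assuming `successors(n)` yields `(child, step_cost)` pairs (illustrative names):

```python
import heapq

def uniform_cost_search(start, goal, successors):
    # Fringe is a priority queue ordered by path cost g.
    fringe = [(0, start, [start])]
    best_g = {start: 0}
    while fringe:
        g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for child, cost in successors(node):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(fringe, (new_g, child, path + [child]))
    return None
```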
Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and spaces
with loops
– Modify to avoid repeated states along the path
 complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than
breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
• Eg:
20 cities to travel,
depth limit l = 19.
Two kinds of failure:
 failure (no solution exists).
 cutoff (no solution within the depth limit).
Iterative deepening search
Iterative deepening search l =0
Iterative deepening search l =1
Iterative deepening search l =2
Iterative deepening search l =3
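Iterative deepening simply runs depth-limited search with limits l = 0, 1, 2, … until the result is not a cutoff. A sketch with illustrative names, distinguishing the two failure kinds noted above ("cutoff" vs. `None`):

```python
def depth_limited_search(node, goal, successors, limit, path=None):
    # Depth-first search that treats nodes at depth `limit` as leaves.
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"
    cutoff = False
    for child in successors(node):
        result = depth_limited_search(child, goal, successors,
                                      limit - 1, path + [child])
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None  # cutoff vs. standard failure

def iterative_deepening_search(start, goal, successors, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result != "cutoff":
            return result
    return None
```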
Bidirectional Search
• Two simultaneous searches, one forward from the
initial state and another backward from the goal,
stopping when the two searches meet in the
middle.
Heuristic function
• 8-puzzle-horizontally or vertically into empty
space until the configuration matches the goal
configuration.
Admissible heuristics
E.g., for the 8-puzzle:
h(n) = estimated cost from n to goal
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location)
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
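Both heuristics are a few lines of Python. The start state S below (0 marks the blank, read row by row) is assumed to be the textbook's standard figure: 7 2 4 / 5 blank 6 / 8 3 1, with goal blank 1 2 / 3 4 5 / 6 7 8; it reproduces the values h1(S) = 8 and h2(S) = 18 quoted above.

```python
def h1_misplaced(state, goal):
    # Number of tiles (excluding the blank) not on their goal square.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal, width=3):
    # Sum over tiles of horizontal + vertical distance to the goal square.
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += (abs(idx // width - goal_idx // width)
                  + abs(idx % width - goal_idx % width))
    return total

# Assumed start state from the figure (0 = blank), row by row.
S    = (7, 2, 4, 5, 0, 6, 8, 3, 1)
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
```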
Relaxed problems
• A problem with fewer restrictions on the actions is
called a relaxed problem
• The cost of an optimal solution to a relaxed problem
is an admissible heuristic for the original problem
• If the rules of the 8-puzzle are relaxed so that a tile
can move anywhere, then h1(n) gives the shortest
solution
• If the rules are relaxed so that a tile can move to any
adjacent square, then h2(n) gives the shortest
solution
Local search algorithms
• In many problems the path to a goal constitutes the
solution; but in many optimization problems the path
is irrelevant, and the goal state itself is the solution.
• In such cases, we can use local search algorithms.
• They keep a single "current" state, rather than multiple
paths.
• To understand local search, consider the state-space landscape.
• It has both “location” and “elevation”.
• Global minimum: the aim, if elevation corresponds to
cost.
• Global maximum (highest peak): the aim, if elevation
corresponds to an objective function.
Example: n-queens
• Put n queens on an n × n board with no two
queens on the same row, column, or diagonal
Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"
Hill-climbing search
• Problem: depending on the initial state, hill climbing
can get stuck in local maxima
Hill-climbing search: 8-queens problem
• h = number of pairs of queens that are attacking each other, either directly or
indirectly
• h = 17 for the above state
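The attacking-pairs heuristic and a steepest-descent hill climber can be sketched as follows. This assumes the usual encoding where `state[col]` gives the row of the queen in that column (one queen per column); the function names are illustrative:

```python
import random

def num_attacking_pairs(state):
    # state[col] = row of the queen in that column.
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j]               # same row
               or abs(state[i] - state[j]) == j - i)  # same diagonal

def hill_climb(n=8, rng=random):
    # Move one queen within its column per step, always to the move
    # that most reduces h; stop at h = 0 or at a local optimum.
    state = [rng.randrange(n) for _ in range(n)]
    while True:
        h = num_attacking_pairs(state)
        if h == 0:
            return state, h
        moves = [(col, row) for col in range(n) for row in range(n)
                 if row != state[col]]
        col, row = min(moves, key=lambda m: num_attacking_pairs(
            state[:m[0]] + [m[1]] + state[m[0] + 1:]))
        new_state = state[:col] + [row] + state[col + 1:]
        if num_attacking_pairs(new_state) >= h:
            return state, h  # stuck at a local optimum or plateau
        state = new_state
```

Since h strictly decreases on every accepted move, the loop always terminates, but possibly at a local minimum with h > 0, which is exactly the failure mode discussed above.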
Hill-climbing search: 8-queens problem
• A local minimum with h = 1
Simulated annealing search
• Idea: escape local maxima by allowing some "bad"
moves, but gradually decrease their frequency.
Eg: the ping-pong ball analogy (shaking the surface to
jolt the ball out of a shallow crevice).
Properties of simulated annealing search:
• One can prove: If T decreases slowly enough, then
simulated annealing search will find a global
optimum with probability approaching 1.
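A generic minimization sketch: bad moves (positive delta) are accepted with probability exp(-delta/T), and T decays by a geometric cooling schedule so that bad moves become rarer over time. The parameter names and schedule are illustrative choices, not prescribed by the text:

```python
import math
import random

def simulated_annealing(f, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=500, rng=random):
    # Minimize f; track the best state ever visited.
    x, t = x0, t0
    best, best_val = x0, f(x0)
    for _ in range(steps):
        x_new = neighbor(x, rng)
        delta = f(x_new) - f(x)
        # Always accept improvements; accept worsenings with prob exp(-delta/T).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = x_new
            if f(x) < best_val:
                best, best_val = x, f(x)
        t *= cooling  # geometric cooling schedule
    return best, best_val
```

Example use: minimizing f(x) = (x - 3)^2 over the integers with unit steps as the neighbor function.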
Local beam search
• Keep track of k states rather than just one.
• Start with k randomly generated states.
• At each iteration, all the successors of all k states are
generated.
• If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
Genetic algorithms
• A successor state is generated by combining two parent states
• Start with k randomly generated states (population)
• A state is represented as a string over a finite alphabet (often
a string of 0s and 1s)
• Evaluation function (fitness function). Higher values for better
states.
• Produce the next generation of states by selection, crossover,
and mutation
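The selection/crossover/mutation loop above can be sketched for bit-string states. The example below uses fitness-weighted (roulette) selection, single-point crossover, and per-bit mutation; all parameter values and helper names are illustrative choices:

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=60,
                      mutation_rate=0.05, rng=random):
    # States are bit strings; higher fitness values are better.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Selection weights proportional to fitness (small epsilon avoids
        # a zero-weight population).
        weights = [fitness(ind) + 1e-9 for ind in pop]
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, length)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mutation_rate else b  # mutation
                     for b in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)    # keep the best ever seen
    return best
```

With `fitness=sum` this solves the classic OneMax toy problem (maximize the number of 1 bits).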
Genetic algorithms

More Related Content

Similar to Unit 1.ppt

Intelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceIntelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceKuppusamy P
 
Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)mufassirin
 
mosfet3inteliggent ageent preserve2ss.ppt
mosfet3inteliggent ageent preserve2ss.pptmosfet3inteliggent ageent preserve2ss.ppt
mosfet3inteliggent ageent preserve2ss.pptdanymorales34
 
Intelligent agents.ppt
Intelligent agents.pptIntelligent agents.ppt
Intelligent agents.pptShilpaBhatia32
 
artificial Intelligence unit1 ppt (1).ppt
artificial Intelligence unit1 ppt (1).pptartificial Intelligence unit1 ppt (1).ppt
artificial Intelligence unit1 ppt (1).pptRamya Nellutla
 
Lecture 2 agent and environment
Lecture 2   agent and environmentLecture 2   agent and environment
Lecture 2 agent and environmentVajira Thambawita
 
SANG AI 1.pptx
SANG AI 1.pptxSANG AI 1.pptx
SANG AI 1.pptxSanGeet25
 
ARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptxARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptxashudhanraj
 
Chapter 2 intelligent agents
Chapter 2 intelligent agentsChapter 2 intelligent agents
Chapter 2 intelligent agentsLukasJohnny
 
Artificial Intelligence Chapter two agents
Artificial Intelligence Chapter two agentsArtificial Intelligence Chapter two agents
Artificial Intelligence Chapter two agentsEhsan Nowrouzi
 
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsJarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsPalGov
 
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsJarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsPalGov
 

Similar to Unit 1.ppt (20)

Intelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceIntelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial Intelligence
 
Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)
 
mosfet3inteliggent ageent preserve2ss.ppt
mosfet3inteliggent ageent preserve2ss.pptmosfet3inteliggent ageent preserve2ss.ppt
mosfet3inteliggent ageent preserve2ss.ppt
 
m2-agents.pptx
m2-agents.pptxm2-agents.pptx
m2-agents.pptx
 
M2 agents
M2 agentsM2 agents
M2 agents
 
Intelligent agents.ppt
Intelligent agents.pptIntelligent agents.ppt
Intelligent agents.ppt
 
Unit 1.ppt
Unit 1.pptUnit 1.ppt
Unit 1.ppt
 
artificial Intelligence unit1 ppt (1).ppt
artificial Intelligence unit1 ppt (1).pptartificial Intelligence unit1 ppt (1).ppt
artificial Intelligence unit1 ppt (1).ppt
 
Ai u1
Ai u1Ai u1
Ai u1
 
Lecture 2 agent and environment
Lecture 2   agent and environmentLecture 2   agent and environment
Lecture 2 agent and environment
 
SANG AI 1.pptx
SANG AI 1.pptxSANG AI 1.pptx
SANG AI 1.pptx
 
ARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptxARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptx
 
Chapter 2 intelligent agents
Chapter 2 intelligent agentsChapter 2 intelligent agents
Chapter 2 intelligent agents
 
agents.pdf
agents.pdfagents.pdf
agents.pdf
 
Lec 2-agents
Lec 2-agentsLec 2-agents
Lec 2-agents
 
Lecture 02-agents
Lecture 02-agentsLecture 02-agents
Lecture 02-agents
 
Artificial Intelligence Chapter two agents
Artificial Intelligence Chapter two agentsArtificial Intelligence Chapter two agents
Artificial Intelligence Chapter two agents
 
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsJarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
 
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagentsJarrar.lecture notes.aai.2011s.ch2.intelligentagents
Jarrar.lecture notes.aai.2011s.ch2.intelligentagents
 
Lecture 2 Agents.pptx
Lecture 2 Agents.pptxLecture 2 Agents.pptx
Lecture 2 Agents.pptx
 

Recently uploaded

GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .Satyam Kumar
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 

Recently uploaded (20)

GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 

Unit 1.ppt

  • 2. What is AI? Views of AI fall into four categories: Thinking humanly Thinking rationally Acting humanly Acting rationally The textbook advocates "acting rationally"
  • 3. Artificial Intelligence • Definition:  The art of creating machines that perform functions that require intelligence when performed by people.  The study of how to make computers do things at which at moment people are better.  The study of the computation that make it possible to preceive,reason and act.
  • 4. Acting humanly: Turing Test • Turing (1950) "Computing machinery and intelligence": • "Can machines think?"  "Can machines behave intelligently?" • Operational test for intelligent behavior: the Imitation Game • Anticipated all major arguments against AI in following 50 years • Suggested major components of AI: knowledge, reasoning, language understanding, learning
  • 5. Capabilities of computer • Natural language processing: to enable it to communicate successfully in English. • Knowledge representation: to share what it knows or hears. • Automated reasoning: to use the stored information to answer question and to draw new conclusions; • Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
  • 6. Total Turing Test The computer will need * Computer vision: to perceive objects. * Robotics: to manipulate objects and move.
  • 7. Thinking humanly: cognitive modeling • 1960s "cognitive revolution": information-processing psychology • Requires scientific theories of internal activities of the brain • -- How to validate? Requires 1) Predicting and testing behavior of human subjects (top-down) or 2) Direct identification from neurological data (bottom-up) • Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI.
  • 8. • Cognitive science is the interdisciplinary scientific study of how information concerning faculties such as perception (sensing), language, reasoning, and emotion is represented and transformed in a (human or other animal) nervous system or machine (e.g., a computer).
  • 9. Thinking rationally: "laws of thought" Eg: Statement: Socrates is a man. All men are mortal. Conclusion: Socrates is mortal. • These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
  • 10. Acting rationally: rational agent • Rational behavior: doing the right thing • The right thing: that which is expected to maximize goal achievement, given the available information. • An agent is something that acts. • Computer agents are expected to have other attributes: – operating under autonomous control, – perceiving their environment, – adapting to change.
  • 11. Rational agents • An agent is an entity that perceives and acts. • A rational agent is one that acts so as to achieve the best outcome. • Making correct inferences is part of being a rational agent. • To act rationally is to reason logically to the conclusion that a given action will achieve one’s goal.
  • 12. Applications • Autonomous planning & scheduling: NASA’s Remote Agent program controlled the operation of a spacecraft. • Game playing: IBM’s Deep Blue defeated world chess champion Garry Kasparov. • Autonomous control: ALVINN, a computer vision system used to steer a car.
  • 13. • Diagnosis: Medical diagnosis programs based on probabilistic analysis perform at the level of an expert physician. • Logistics planning: U.S. forces deployed the Dynamic Analysis and Replanning Tool (DART) to do automated logistics planning and scheduling for transportation.
  • 14. Logistics planning • Logistics (military definition): the science of planning and carrying out the movement and maintenance of forces; those aspects of military operations that deal with the design, development, storage, movement, and distribution of materiel; the movement, evacuation, and hospitalization of personnel; and the maintenance, operation, and disposition of facilities.
  • 15. • Robotics: Many surgeons now use robot assistants in microsurgery, which create a 3D model of the patient’s internal anatomy. • Language understanding and problem solving: PROVERB, a computer program that solves crossword puzzles using word filters.
  • 16. Agents • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. • Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators. • Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
  • 17. Agents and environments • The agent function maps from percept histories to actions: [f: P* → A] • The agent program runs on the physical architecture to produce f • agent = architecture + program
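The split between agent function and agent program can be sketched in Python (a hypothetical helper, not from any particular library): the function f: P* → A maps entire percept histories to actions, while the program receives one percept at a time and remembers the history itself.

```python
# Sketch of "agent = architecture + program". The table realizes the agent
# *function* f: P* -> A; the returned *program* is what actually runs,
# seeing one percept per step. All names here are illustrative.

def make_table_driven_program(table):
    percept_history = []

    def program(percept):
        percept_history.append(percept)
        # Look up the full percept history; fall back to NoOp if unspecified.
        return table.get(tuple(percept_history), "NoOp")

    return program

# A tiny table for the two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
program = make_table_driven_program(table)
print(program(("A", "Clean")))  # Right
print(program(("B", "Dirty")))  # Suck
```

The table-driven approach is hopeless in practice, since the table grows with every possible history, which is why the later slides turn to reflex, model-based, goal-based, and utility-based programs.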
  • 18. Vacuum-cleaner world • Percepts: location and contents, e.g., [A,Dirty] • Actions: Left, Right, Suck, NoOp
  • 19. A vacuum-cleaner agent Percept sequence → Action: [A, Clean] → Right; [A, Dirty] → Suck; [B, Clean] → Left; [B, Dirty] → Suck
  • 20. Rational agents • An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful • Performance measure: An objective criterion for success of an agent's behavior
  • 21. • E.g., the performance measure of a vacuum-cleaner agent could be: 1. amount of dirt cleaned up, 2. amount of time taken, 3. amount of electricity consumed, 4. amount of noise generated, etc.
  • 22. Rational agents • Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration) • An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)
  • 23. PEAS • PEAS: Performance measure, Environment, Actuators, Sensors • Must first specify the setting for intelligent agent design • e.g., the task of designing an automated taxi driver: – Performance measure – Environment – Actuators – Sensors
  • 24. PEAS • Must first specify the setting for intelligent agent design • e.g., the task of designing an automated taxi driver: – Performance measure: Safe, fast, legal, comfortable trip, maximize profits – Environment: Roads, other traffic. – Actuators: Steering wheel, accelerator, brake, signal, horn. – Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.
  • 25. PEAS • Agent: Medical diagnosis system • Performance measure: Healthy patient, minimize costs, lawsuits • Environment: Patient, hospital, staff • Actuators: Screen display (questions, tests, diagnoses, treatments, referrals) • Sensors: Keyboard (entry of symptoms, findings, patient's answers)
  • 26. Environment types • Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment (the sensors detect all relevant aspects). An environment may be only partially observable because of noisy and inaccurate sensors (e.g., the vacuum agent).
  • 27. • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the agent's action (vacuum world). If the environment is partially observable, it may appear stochastic. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)
  • 28. • Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action). In sequential environments, the current decision can affect all future decisions. Eg: chess and taxi driving.
  • 29. • Static (vs. dynamic): The environment is unchanged while an agent is deliberating (puzzle). Dynamic: the environment is continuously asking the agent what it wants to do next (taxi driving). (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does, e.g., chess with a clock.)
  • 30. • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.(chess) Continuous –Taxi driving(speed and location of the taxi) • Single agent (vs. multiagent): An agent operating by itself in an environment.
  • 31. Eg: Crossword puzzle: single agent. Chess: two-agent environment. Chess: competitive multiagent. Taxi driving: cooperative multiagent.
  • 32. Agent functions and programs • Agent = architecture + program • Agent program: takes the current percept as input. • Agent function: takes the entire percept history.
  • 33. Agent types Four basic types in order of increasing generality: • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents
  • 34. Simple reflex agents • Select actions on the basis of the current percept, ignoring the rest of the percept history. • Eg: Vacuum agent • Its decision is based only on the current location and on whether that location contains dirt.
  • 35. Simple Reflex agent function REFLEX-VACUUM-AGENT([location, status]) returns an action if status = Dirty then return Suck else if location = A then return Right else if location = B then return Left
  • 37. Simple reflex agents function SIMPLE-REFLEX-AGENT(percept) returns an action static: rules, a set of condition-action rules state ← INTERPRET-INPUT(percept) rule ← RULE-MATCH(state, rules) action ← RULE-ACTION[rule] return action
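The pseudocode above can be rendered as runnable Python; the condition-action rules here are just the vacuum example from the earlier slide, hard-coded for brevity.

```python
# Simple reflex agent for the two-square vacuum world: the action depends
# only on the current percept (location, status), never on history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```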
  • 38. Model-based reflex agents • Partial observability: the agent maintains some sort of internal state that depends on the percept history and thereby reflects some of the unobserved aspects. Two kinds of knowledge: 1) How the world evolves independently of the agent. 2) How the agent’s own actions affect the world. “How the world works”: the model of the world.
  • 40. Model-based reflex agents function REFLEX-AGENT-WITH-STATE(percept) returns an action static: state, a description of the current world state; rules, a set of condition-action rules; action, the most recent action, initially none state ← UPDATE-STATE(state, action, percept) rule ← RULE-MATCH(state, rules) action ← RULE-ACTION[rule] return action
  • 41. Goal-based agents • Along with the current state description, the agent needs goal information. Eg: taxi driving: the passenger’s destination. • Goal-based action selection is straightforward. • “What will happen if I do such and such?”
  • 43. Utility-based agents • Goals alone are not enough to generate high-quality behavior. • Goals just provide a crude distinction between “happy” and “unhappy” states. Two kinds of cases where goals are inadequate: • Conflicting goals: only some of them can be achieved. • Several goals: none can be achieved with certainty, so the relative importance of the goals must be weighed.
  • 45. Learning agents • Learning allows the agent to operate in initially unknown environments. Four components: 1) Learning element. 2) Performance element. 3) Critic. 4) Problem generator.
  • 46. • Learning element: Responsible for making improvements. • Performance element: Responsible for selecting external actions (the entire agent otherwise): perceives and decides on actions. • Critic: The learning element uses feedback from the critic on how the agent is doing and how the performance element should be modified to do better in the future.
  • 47. • Problem generator: Responsible for suggesting actions that will lead to new and informative experiences.
  • 49. Eg: Taxi driving. Driving actions: performance element. Observing the world and passing feedback to the learning element: critic. Formulating a new rule (e.g., marking an action as bad): learning element. Modifying behavior by installing the new rule: performance element. Identifying areas of behavior in need of improvement and suggesting experiments: problem generator.
  • 52. Formulating problem • On holiday in Romania; currently in Arad. • Flight leaves tomorrow from Bucharest. • Formulate goal: – be in Bucharest • Formulate problem: – states: various cities – actions: drive between cities • Find solution: – sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  • 53. Vacuum world state space graph
  • 54. • states? integer dirt and robot location • actions? Left, Right, Suck • goal test? no dirt at all locations • path cost? 1 per action
  • 56. The 8-puzzle • states? locations of tiles • actions? move blank left, right, up, down • goal test? = goal state (given) • path cost? 1 per move
  • 57. Real-world problems • Touring problems. • Traveling salesman problem. • VLSI layout. (placing components and connections on a chip to minimize area, circuit delay, and capacitance, and maximize manufacturing yield) • Automatic assembly sequencing. (assembling the parts of some object; protein design: finding a sequence of amino acids) • Internet searching. (looking for answers to questions)
  • 63. Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: – completeness: does it always find a solution if one exists? – time complexity: number of nodes generated – space complexity: maximum number of nodes in memory – optimality: does it always find a least-cost solution? Fringe: Collection of nodes. Each element in the fringe is a leaf node, a node with no successors in the tree.
  • 64. Uninformed search strategies • Uninformed search strategies use only the information available in the problem definition(Blind search) • Breadth-first search. • Uniform-cost search. • Depth-first search. • Depth-limited search. • Iterative deepening search. • Bidirectional search.
  • 65. Breadth-first search • Expand shallowest unexpanded node • Implementation: – fringe is a FIFO queue, i.e., new successors go at end
  • 67. • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step) • Space is the bigger problem (more than time) b: branching factor; d: depth of the shallowest goal; m: maximum length of any path.
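A minimal Python sketch of breadth-first search (the example graph is invented for illustration): the fringe is a FIFO queue of paths, so the shallowest unexpanded node is always expanded first.

```python
# Breadth-first search over an explicit graph: fringe = FIFO queue.
from collections import deque

def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])        # queue of paths, shallowest first
    explored = {start}
    while fringe:
        path = fringe.popleft()      # FIFO: pop the shallowest path
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in explored:
                explored.add(succ)
                fringe.append(path + [succ])
    return None                      # no solution exists

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # ['S', 'B', 'G']
```

Because paths leave the queue in order of depth, the first path that reaches the goal is a shallowest one, matching the completeness and optimality properties above.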
  • 68. Uniform-cost search • Expand least-cost unexpanded node • Implementation: – fringe = queue ordered by path cost. • Expands the node n with the lowest path cost; if all step costs are equal, this is identical to breadth-first search.
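Uniform-cost search can be sketched with a priority queue keyed by path cost (Python's heapq); the road fragment and distances below follow the textbook's Romania example.

```python
# Uniform-cost search: fringe is a priority queue ordered by path cost g(n).
import heapq

def uniform_cost_search(graph, start, goal):
    fringe = [(0, start, [start])]   # (path_cost, node, path)
    best = {start: 0}                # cheapest known cost to each node
    while fringe:
        cost, node, path = heapq.heappop(fringe)  # lowest-cost node first
        if node == goal:
            return cost, path
        for succ, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(succ, float("inf")):
                best[succ] = new_cost
                heapq.heappush(fringe, (new_cost, succ, path + [succ]))
    return None

# A fragment of the Romania map with the textbook's distances.
graph = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}
print(uniform_cost_search(graph, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that the cheaper route through Rimnicu Vilcea and Pitesti (cost 418) beats the shallower route through Fagaras (cost 450), which is exactly where uniform-cost search differs from breadth-first search.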
  • 69. Depth-first search • Expand deepest unexpanded node • Implementation: – fringe = LIFO queue, i.e., put successors at front
  • 70.
  • 71.
  • 72.
  • 73. Properties of depth-first search • Complete? No: fails in infinite-depth spaces, spaces with loops – Modify to avoid repeated states along path → complete in finite spaces • Time? O(b^m): terrible if m is much larger than d – but if solutions are dense, may be much faster than breadth-first • Space? O(bm), i.e., linear space! • Optimal? No
  • 74. Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors
  • 75. • Eg: 20 cities to travel, depth limit l = 19. Two kinds of failure:  Standard failure (no solution exists).  Cutoff (no solution within the depth limit).
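A recursive Python sketch of depth-limited search (the graph is invented for illustration), distinguishing the two failure values just described: None for standard failure and "cutoff" when the depth limit may have hidden a solution.

```python
# Depth-limited search: depth-first search that treats nodes at depth l
# as having no successors.

def depth_limited_search(graph, node, goal, limit, path=None):
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"              # limit reached; a solution may lie deeper
    cutoff = False
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1, path + [succ])
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None   # None = standard failure

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []}
print(depth_limited_search(graph, "S", "G", 1))  # cutoff
print(depth_limited_search(graph, "S", "G", 2))  # ['S', 'B', 'G']
```

Iterative deepening simply calls this routine with limits 0, 1, 2, … until the result is no longer "cutoff".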
  • 81. Bidirectional Search • Two simultaneous search, one forward from initial state and another backward from goal , stopping when two searches meet in the middle.
  • 82. Heuristic function • 8-puzzle: tiles slide horizontally or vertically into the empty space until the configuration matches the goal configuration.
  • 83. Admissible heuristics E.g., for the 8-puzzle: h(n) = estimated cost from n to goal • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) • h1(S) = ? 8 • h2(S) = ? 3+1+2+2+2+3+3+2 = 18
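Both heuristics are easy to compute directly. In this sketch a state is a tuple read row by row with 0 for the blank; the start state is the textbook's example, which yields h1 = 8 and h2 = 18 as on the slide.

```python
# The two 8-puzzle heuristics: misplaced tiles (h1) and Manhattan distance (h2).

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL.index(tile)
        # Row distance + column distance on the 3x3 board.
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the textbook's example start state
print(h1(start), h2(start))  # 8 18
```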
  • 84. Relaxed problems • A problem with fewer restrictions on the actions is called a relaxed problem • The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem • If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution • If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution
  • 85. Local search algorithms • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. • In such cases, we can use local search algorithms. • They keep a single "current" state, rather than multiple paths. • To understand local search, consider the state-space landscape. • It has both “location” and “elevation”.
  • 86. • Global minimum: if elevation corresponds to cost, the aim is to find the lowest valley. • Global maximum: if elevation corresponds to an objective function, the aim is to find the highest peak.
  • 87. Example: n-queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal
  • 88. Hill-climbing search • "Like climbing Everest in thick fog with amnesia"
  • 89. Hill-climbing search • Problem: depending on initial state, can get stuck in local maxima
  • 90. Hill-climbing search: 8-queens problem • h = number of pairs of queens that are attacking each other, either directly or indirectly • h = 17 for the above state
  • 91. Hill-climbing search: 8-queens problem • A local minimum with h = 1
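Steepest-ascent hill climbing for n-queens can be sketched as follows: a state is one queen row per column, h counts attacking pairs as on the slides, and each step moves a single queen within its column to the neighbor with the lowest h. The initial state here is arbitrary.

```python
# Hill climbing on n-queens. h = number of pairs of queens attacking each
# other (same row or same diagonal); we descend until no neighbor improves.

def attacking_pairs(state):
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    while True:
        h = attacking_pairs(state)
        # All states reachable by moving one queen within its column.
        neighbors = [state[:i] + (row,) + state[i + 1:]
                     for i in range(len(state))
                     for row in range(len(state)) if row != state[i]]
        best = min(neighbors, key=attacking_pairs)
        if attacking_pairs(best) >= h:    # no strict improvement: stop
            return state, h
        state = best

state, h = hill_climb((0, 1, 2, 3, 4, 5, 6, 7))  # all queens on one diagonal
print(h)
```

Because each step strictly decreases h, the loop always terminates, but often at a local minimum with h > 0, which is exactly the failure mode the slide illustrates.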
  • 92. Simulated annealing search • Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency. Eg: shaking a ping-pong ball out of a shallow crevice on a bumpy surface into the deepest one: shake hard first, then gradually reduce the intensity. Properties of simulated annealing search: • One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.
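A generic simulated-annealing sketch (the objective, neighbor function, and cooling schedule are all illustrative): downhill moves are accepted with probability exp(ΔE/T), which shrinks as the temperature T decays, so bad moves become rarer over time.

```python
# Simulated annealing on a 1-D objective: accept uphill moves always,
# downhill moves with probability exp(delta / T).
import math
import random

def simulated_annealing(value, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    x, t = x0, t0
    for _ in range(steps):
        x_new = neighbor(x)
        delta = value(x_new) - value(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = x_new            # bad moves allowed, with decaying probability
        t *= cooling             # cooling schedule
    return x

random.seed(1)
value = lambda x: -(x - 3) ** 2                 # global maximum at x = 3
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(round(simulated_annealing(value, neighbor, x0=-10.0), 1))
```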
  • 93. Local beam search • Keep track of k states rather than just one. • Start with k randomly generated states. • At each iteration, all the successors of all k states are generated. • If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
  • 94. Genetic algorithms • A successor state is generated by combining two parent states • Start with k randomly generated states (population) • A state is represented as a string over a finite alphabet (often a string of 0s and 1s) • Evaluation function (fitness function). Higher values for better states. • Produce the next generation of states by selection, crossover, and mutation
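A minimal genetic-algorithm sketch on bit strings (the "one-max" fitness, population size, and mutation rate are illustrative choices): parents are chosen by fitness-weighted selection, children come from single-point crossover, and mutation flips an occasional bit.

```python
# Genetic algorithm over bit strings: selection, crossover, mutation.
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]          # fitness function
        new_pop = []
        for _ in range(pop_size):
            mom, dad = random.choices(pop, weights=weights, k=2)  # selection
            cut = random.randrange(1, length)            # crossover point
            child = mom[:cut] + dad[cut:]
            if random.random() < p_mut:                  # mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(0)
best = genetic_algorithm(fitness=sum)   # "one-max": maximize the number of 1s
print(best)
```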