Lecture 1
Notes on the introductory chapters of Artificial Intelligence.
    Presentation Transcript

    • BEG471CO Introduction to Artificial Intelligence. Er. Roshan Koju, Department of Computer Engineering, Khwopa Engineering College
    • First, a little bureaucracy ... The course textbook: Artificial Intelligence: A Modern Approach (2003), 2nd Edition, Stuart Russell and Peter Norvig.
    • How you will be graded ....
      • Course work:
        • 3 Assignments (some programming, some Q&A): 1 mark each
        • 2 marks for attendance
        • 10 marks for assessment and 5 marks for UT
        • 10 marks for the lab test
        • 5 marks for lab reports
        • 10 marks for presentation
      • Late policy:
        • Assignments: 20% off for each late day. Please start early!!
    • OUTLINE
      • Applications
      • Introduction
      • Ethics of AI
      • Branches of AI
      • History of AI
      • Relation of AI with other domains
      • Intelligent agents and types
      • Assignment 1
    • Why study AI?
      • Search engines
      • Labor
      • Science
      • Medicine / Diagnosis
      • Appliances
      • What else?
    • Honda Humanoid Robot: walk, turn, climb stairs. http://world.honda.com/robot/
    • Sony AIBO http://www.aibo.com
    • Natural Language Question Answering http://www.ai.mit.edu/projects/infolab/ http://aimovie.warnerbros.com
    • Intelligence
      • Intelligence is:
        • the ability to reason
        • the ability to understand
        • the ability to create
        • . . .
      • Can we produce a machine with all these abilities?
      • The answer is no, so then what is AI?
    • Formal Definitions
      • Barr and Feigenbaum
        • “Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior.”
      • Elaine Rich
        • “ AI is the study of how to make computers do things at which, at the moment, people are better”
      • “AI is the science and engineering of making intelligent machines which can perform tasks that require intelligence when performed by humans …”
    • What is AI? Four views:
      • Systems that think like humans: “The exciting new effort to make computers think … machines with minds, in the full and literal sense” (Haugeland, 1985)
      • Systems that act like humans: “The art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990)
      • Systems that think rationally: “The study of mental faculties through the use of computational models” (Charniak and McDermott, 1985)
      • Systems that act rationally: “A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes” (Schalkoff, 1990)
    • Types of tasks
      • Everyday tasks : recognize a friend, recognize who is calling, translate from one language to another, interpret a photograph, talk, cook a dinner
      • Formal tasks : prove a theorem in logic, geometry, or calculus; play chess, checkers, or Go
      • Expert tasks : engineering design, medical diagnosis, financial analysis
    • “ Chinese room” argument
      • A person who knows English but not Chinese sits in a room
      • Receives notes written in Chinese
      • Has a systematic rule book, written in English, for how to compose new Chinese characters from the input Chinese characters, and returns his notes
        • Person = CPU, rule book = AI program; really also need lots of paper (storage)
        • The person has no understanding of what the notes mean
        • But from the outside, the room gives perfectly reasonable answers in Chinese!
      • Searle’s argument: the room has no intelligence in it!
    • Acting Humanly: The Full Turing Test
      • Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent
        • “Can machines think?” → “Can machines behave intelligently?”
        • The Turing test (The Imitation Game): Operational definition of intelligence.
      • The computer needs to possess: natural language processing, knowledge representation, automated reasoning, and machine learning
      • Problems: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator and the environment?
      • Total Turing Test: Requires physical interaction and needs perception and actuation.
    • What would a computer need to pass the Turing test?
      • Natural language processing: to communicate with examiner.
      • Knowledge representation: to store and retrieve information provided before or during interrogation.
      • Automated reasoning: to use the stored information to answer questions and to draw new conclusions.
      • Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
    • What would a computer need to pass the Turing test?
      • Vision (for Total Turing test): to recognize the examiner’s actions and various objects presented by the examiner.
      • Motor control (total test): to act upon objects as requested.
      • Other senses (total test): such as audition, smell, touch, etc.
    • Thinking Humanly: Cognitive Science
      • 1960s “Cognitive Revolution”: information-processing psychology replaced behaviorism
      • Cognitive science brings together theories and experimental evidence to model internal activities of the brain
        • What level of abstraction? “Knowledge” or “Circuits”?
        • How to validate models?
          • Predicting and testing behavior of human subjects (top-down)
          • Direct identification from neurological data (bottom-up)
          • Building computer/machine simulated models and reproducing results (simulation)
    • Thinking Rationally: Laws of Thought
      • Aristotle (384-322 B.C.) attempted to codify “right thinking”: What are correct arguments/thought processes?
      • E.g., “Socrates is a man, all men are mortal; therefore Socrates is mortal”
      • Several Greek schools developed various forms of logic: notation plus rules of derivation for thoughts.
    • Thinking Rationally: Laws of Thought
      • Problems:
        • Uncertainty: Not all facts are certain (e.g., the flight might be delayed).
        • Resource limitations:
          • Not enough time to compute/process
          • Insufficient memory/disk/etc
          • Etc.
    • Acting Rationally: The Rational Agent
      • Rational behavior: Doing the right thing!
      • The right thing: that which is expected to maximize goal achievement, given the available information
      • Provides the most general view of AI because it includes:
        • Correct inference (“Laws of thought”)
        • Uncertainty handling
        • Resource limitation considerations (e.g., reflex vs. deliberation)
        • Cognitive skills (NLP, AR, knowledge representation, ML, etc.)
      • Advantages:
        • More general
        • Its goal of rationality is well defined
    • How to achieve AI?
      • How is AI research done?
      • AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects.
      • There are two main lines of research:
        • One is biological , based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology.
        • The other is phenomenal , based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals.
      • The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking. [ John McCarthy]
    • Modern Turing Test
      • “ On the web, no one knows you’re a….”
      • Problem: ‘bots’
        • Automated agents swamp services
      • Challenge: Prove you’re human
        • Test: Something human can do, ‘bot can’t
      • Solution: CAPTCHAs
        • Distorted images: trivial for human; hard for ‘bot
    • Human intelligence
      • Turing provided some very persuasive arguments that a system passing the Turing test is intelligent.
      • However, the test does not provide much traction on the question of how to actually build an intelligent system.
    • Human intelligence
      • In general there are various reasons why trying to mimic humans might not be the best approach to AI.
      • Computer vs. human brain (order-of-magnitude figures):
        • Computational units: computer 1 CPU, 10^8 gates; human brain 10^11 neurons
        • Storage units: computer 10^10 bits RAM, 10^11 bits disk; human brain 10^11 neurons, 10^14 synapses
        • Cycle time: computer 10^-9 sec; human brain 10^-3 sec
        • Bandwidth: computer 10^10 bits/sec; human brain 10^14 bits/sec
        • Memory updates/sec: computer 10^9; human brain 10^14
        • But more importantly, we know very little about how the human brain performs its higher level processes. Hence, this point of view provides very little information from which a scientific understanding of these processes can be built.
        • Neuroscience has been very influential in some areas of AI, however. For example, in robotic sensing, vision processing, etc.
    • Is AI Ethical?
      • Joseph Weizenbaum (1976) in Computer Power and Human Reason argues:
      • A real AI would indeed be an autonomous, intelligent agent
        • Hence, out of our control
      • It will not share our: motives, constraints, ethics
      • There is no obvious upper bound on intelligence. And perhaps there is no upper bound at all.
        • When our interests and AI's interests conflict, guess who loses
        • Therefore, AI research is unethical.
    • Asimov’s Laws of Robotics
      • A method to insert ethics into AI
      • The three laws of robots are:
        • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        • A robot must obey the orders given it by human beings.
        • A robot must protect its own existence.
        • Meta-law: earlier (lower-numbered) laws take precedence over later ones
    • Objections to Asimov’s Laws
      • Ambiguity in terms: harm, obey, order . . .
      • Unclear how to resolve intra-rule conflicts (e.g., conflicting human interests)
      • They are far too narrow
        • Cover robot-self, robot-human relations but not relations with
          • Other robots
          • Other sentient beings
          • The environment
    • Branches of AI
      • Logical AI
      • Search
      • Natural language processing
      • Pattern recognition
      • Knowledge representation
      • Inference: from some facts, others can be inferred
      • Automated reasoning
      • Learning from experience
      • Planning: to generate a strategy for achieving some goal
      • Epistemology: study of the kinds of knowledge that are required for solving problems in the world
      • Ontology: study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are
      • Genetic programming
      • Emotions???
    • AI Prehistory
    • AI History
    • Relation of AI with other domains
      • Philosophy (400 B.C.–)
      • Socrates->Plato->Aristotle
        • Socrates: “I want to know what is characteristic of piety which makes all actions pious...that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men” (algorithm)
        • Aristotle: Tried to formulate laws governing the rational part of the mind. Believed in another part: intuitive reason
      • Philosophy: Dualism vs. materialism
      • René Descartes (1596-1650): dualism (part of mind that is outside of nature)
      • Materialism: Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out mental operations; it could not produce interesting results
      • Philosophy: Source of knowledge
      • Empiricism (Francis Bacon 1561-1626)
        • John Locke (1632-1704): “Nothing is in the understanding which was not in the senses”
        • David Hume (1711-1776): Principle of induction: general rules are acquired by exposure to repeated associations between their elements
          • Bertrand Russell (1872-1970): Logical positivism : All knowledge can be characterized by logical theories connected, ultimately, to observed sentences that correspond to sensory inputs
      • Mathematics
      • Logic
        • George Boole (1815-1864): formal language for making logical inference
        • Gottlob Frege (1848-1925): First-order logic (FOL)
        • Computability
          • David Hilbert (1862-1943): Problem #23: is there an algorithm for deciding the truth of any logical proposition involving the natural numbers?
          • Kurt Gödel (1906-1978): No: undecidability (yes for FOL)
          • Alan Turing (1912-1954): which functions are computable?
            • Church-Turing thesis: any computable function is computable via a Turing machine
            • No machine can tell in general whether a given program will return an answer on a given input, or run forever
      • Mathematics…
      • Intractability
        • Polynomial vs. exponential (Cobham 1964; Edmonds 1965)
        • Reduction (Dantzig 1960, Edmonds 1962)
        • NP-completeness (Steven Cook 1971, Richard Karp 1972)
        • This contrasts with the early “Electronic Super-Brain” claims in the press
      • Mathematics…
      • Probability
        • Gerolamo Cardano (1501-1576): probability in gambling
        • Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), James Bernoulli (1654-1705), Pierre Laplace (1749-1827): new methods
        • Bernoulli: subjective beliefs->updating
        • Thomas Bayes (1702-1761): updating rule
      • Decision theory = probability theory + utility theory
        • John Von Neumann & Oskar Morgenstern 1944
      • Game theory
      • Psychology (1879-)
      • Scientific methods for studying human vision
        • Hermann von Helmholtz (1821-1894), Wilhelm Wundt (1832-1920)
      • Introspective experimental psychology
        • Wundt
        • Results were biased to follow hypotheses
      • Behaviorism (prevailed 1920-1960)
        • John Watson (1878-1958), Edward Lee Thorndike (1874-1949)
        • Against introspection
        • Stimulus-response studies
        • Rejected knowledge, beliefs, goals, reasoning steps
    • Psychology
      • Cognitive psychology
        • The brain possesses and processes information
        • Kenneth Craik 1943: knowledge-based agent:
          • Stimulus -> representation
          • Representation is manipulated to derive new representations
          • These are translated back into actions
        • Widely accepted now
        • Anderson 1980: “A cognitive theory should be like a computer program”
    • Agents
      • An over-used, over-loaded, and misused term.
      • Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals . Hence, an agent gets percepts one at a time, and maps this percept sequence to actions (one action at a time)
      • Human agent:
        • eyes, ears, and other organs for sensors
        • hands, legs, mouth, and other body parts for actuators
      • Robotic agent:
        • cameras and infrared range finders for sensors
        • various motors for actuators
      • The agent function: an abstract mathematical description
        • The agent function maps from percept histories to actions:
      • f : P* → A
      • The agent program: a concrete implementation
        • The agent program runs on the physical architecture to implement f
      [Diagram: an agent connected to its environment through sensors (percepts in) and effectors (actions out); the box marked “?” inside the agent is what we have to design.]
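      A minimal Python sketch (not from the original slides) of this agent-function abstraction; the type aliases, run_agent helper, and echo_agent below are illustrative assumptions:

```python
# Illustrative sketch (not from the slides) of the agent function f : P* -> A,
# which maps the entire percept history to an action. All names are assumptions.

from typing import Callable, List

Percept = str   # placeholder percept type
Action = str    # placeholder action type

AgentFunction = Callable[[List[Percept]], Action]

def run_agent(agent_fn: AgentFunction, percept_stream: List[Percept]) -> List[Action]:
    """Feed percepts to the agent one at a time; the agent only ever sees the
    history accumulated so far and returns one action per percept."""
    history: List[Percept] = []
    actions: List[Action] = []
    for percept in percept_stream:
        history.append(percept)
        actions.append(agent_fn(history))
    return actions

# A trivial agent that just acts on its latest percept.
echo_agent: AgentFunction = lambda history: f"act-on({history[-1]})"
print(run_agent(echo_agent, ["hot", "cold"]))   # ['act-on(hot)', 'act-on(cold)']
```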
      • Example: Human mind as network of thousands or millions of agents working in parallel. To produce real artificial intelligence, this school holds, we should build computer systems that also contain many agents and systems for arbitrating among the agents' competing results.
      • Distributed decision-making and control
      • Challenges:
        • Action selection: What next action to choose
        • Conflict resolution
    • How is an Agent different from other software?
      • Agents are autonomous , that is, they act on behalf of the user
      • Agents contain some level of intelligence , from fixed rules to learning engines that allow them to adapt to changes in the environment
      • Agents don't only act reactively , but sometimes also proactively
    • How is an Agent different from other software?
      • Agents have social ability , that is, they communicate with the user, the system, and other agents as required
      • Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle
      • Agents may migrate from one system to another to access remote resources or even to meet other agents
    • Examples of agents in different types of applications:
      • Medical diagnosis system. Percepts: symptoms, findings, patient's answers. Actions: questions, tests, treatments. Goals: healthy patient, minimize costs. Environment: patient, hospital.
      • Satellite image analysis system. Percepts: pixels of varying intensity, color. Actions: print a categorization of the scene. Goals: correct categorization. Environment: images from an orbiting satellite.
      • Part-picking robot. Percepts: pixels of varying intensity. Actions: pick up parts and sort into bins. Goals: place parts in correct bins. Environment: conveyor belt with parts.
      • Refinery controller. Percepts: temperature, pressure readings. Actions: open/close valves, adjust temperature. Goals: maximize purity, yield, safety. Environment: refinery.
      • Interactive English tutor. Percepts: typed words. Actions: print exercises, suggestions, corrections. Goals: maximize student's score on test. Environment: set of students.
    • Vacuum-cleaner world
      • Percepts: location and contents, e.g., [A,Dirty]
      • Actions: Left, Right, Suck, NoOp
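      A small sketch of a simple reflex agent for this vacuum world, in Python; the reflex rules are the usual textbook ones, while the toy environment loop is an assumption added for illustration only:

```python
# Small sketch of a simple reflex agent for the two-square vacuum world above.
# The reflex rules are the standard textbook ones; the toy environment loop is
# an added assumption for illustration only.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

# Tiny environment: two squares, both initially dirty.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"

for step in range(4):
    percept = (location, world[location])
    action = reflex_vacuum_agent(percept)
    print(f"percept={percept} -> action={action}")
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
```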
    • The Right Thing = The Rational Action
      • Rational Action: The action that maximizes the expected value of the performance measure given the percept sequence to date
        • Rational = Best?
        • Rational = Optimal?
        • Rational = Omniscient?
        • Rational = Clairvoyant?
        • Rational = Successful?
    • Rational agents: definition
      • Rational Agent : For each possible percept sequence , a rational agent should select an action that is expected to maximize its performance measure , given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
      • Performance measure: An objective criterion for success of an agent's behavior
        • E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
    • PEAS: formalization
      • PEAS: Performance measure, Environment, Actuators, Sensors
      • Must first specify the setting for intelligent agent design
      • Consider, e.g., the task of designing an automated taxi driver:
        • Performance measure
        • Environment
        • Actuators
        • Sensors
      • automated taxi driver:
        • Performance measure : Safe, fast, legal, comfortable trip, maximize profits
        • Environment : Roads, other traffic, pedestrians, customers
        • Actuators : Steering wheel, accelerator, brake, signal, horn
        • Sensors : Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard or microphone
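      As a sketch only, a PEAS description can be recorded as a plain data structure; the PEAS dataclass and its field names below are hypothetical, not part of the slides:

```python
# Hedged sketch: recording a PEAS description as a plain data structure.
# The PEAS dataclass and its field names are hypothetical, not from the slides.

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard or microphone"],
)
print(automated_taxi.performance_measure)
```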
    • Environment types
      • Fully observable (vs. partially observable ): An agent's sensors give it access to the complete state of the environment at each point in time
      • Deterministic (vs. stochastic ): The next state of the environment is completely determined by the current state and the action executed by the agent
        • In partially observable case, it could appear to be stochastic
        • If the environment is deterministic except for the actions of other agents, then the environment is strategic
      • Episodic (vs. sequential ): The agent's experience is divided into atomic "episodes"
        • each episode consists of the agent perceiving and then performing a single action
        • the choice of action in each episode depends only on the episode itself
      • Static (vs. dynamic): The environment is unchanged while an agent is deliberating
        • The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does
      • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions
      • Single agent (vs. multiagent): An agent operating by itself in an environment
        • Is entity B an agent, or merely a stochastically behaving object?
          • Treat B as an agent if its behavior is best described as maximizing a performance measure that depends on agent A’s behavior
        • Multiagent
          • competitive vs. cooperative
          • Communication
      • What is the hardest combination of the six environment properties?
    • Environment types (Accessible / Deterministic / Episodic / Static / Discrete):
      • Chess with a clock: Yes / Yes / No / Semi / Yes
      • Chess without a clock: Yes / Yes / No / Yes / Yes
      • Poker: No / No / No / Yes / Yes
      • Backgammon: Yes / No / No / Yes / Yes
      • Taxi driving: No / No / No / No / No
      • Medical diagnosis system: No / No / No / No / No
      • Image-analysis system: Yes / Yes / Yes / Semi / No
      • Part-picking robot: No / No / Yes / No / No
      • Refinery controller: No / No / No / No / No
      • Interactive English tutor: No / No / No / No / Yes
    • Table-lookup agent
      • Drawbacks:
        • Huge table
        • Take a long time to build the table
        • No autonomy
        • Even with learning, need a long time to learn the table entries
      function TABLE-DRIVEN-AGENT(percept) returns an action
          static: percepts, a sequence, initially empty
                  table, a table of actions, indexed by percept sequences
          append percept to the end of percepts
          action <- LOOKUP(percepts, table)
          return action
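      A hedged, runnable Python transcription of the pseudocode above; representing the percept sequence as a tuple that indexes a dictionary is an implementation choice, and the tiny vacuum-world table is illustrative:

```python
# A direct Python transcription of the TABLE-DRIVEN-AGENT pseudocode above.
# Representing the percept sequence as a tuple so it can index a dict is an
# implementation choice, not part of the original pseudocode.

def make_table_driven_agent(table):
    percepts = []                        # the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))    # LOOKUP(percepts, table)

    return agent

# Tiny illustrative table for the two-square vacuum world (first two steps only).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
```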
    • Look-up table agent with an obstacle (distance) sensor:
      • Distance 10: No action
      • Distance 5: Turn left 30 degrees
      • Distance 2: Stop
    • Agent types
      • Four basic types in order of increasing generality:
        • Simple reflex agents
        • Model-based reflex agents
        • Goal-based agents
        • Utility-based agents
      • How to convert into learning agents
    • Simple reflex agents
    • Simple reflex agents
      • A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
      function SIMPLE-REFLEX-AGENT(percept) returns an action
          static: rules, a set of condition-action rules
          state <- INTERPRET-INPUT(percept)
          rule <- RULE-MATCH(state, rules)
          action <- RULE-ACTION(rule)
          return action
    • Model-based reflex agents
    • Model-based reflex agents
      • To handle partial observability, agent keeps track of the part of the world it cannot see now
        • Internal state
      • Tries to model the world in two ways
        • How the world evolves independently of the agent
        • How the agent’s actions affect the world
      function MODEL-BASED-REFLEX-AGENT(percept) returns an action
          static: state, a description of the current world state
                  rules, a set of condition-action rules
                  action, the most recent action, initially none
          state <- UPDATE-STATE(state, action, percept)
          rule <- RULE-MATCH(state, rules)
          action <- RULE-ACTION(rule)
          return action
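      A sketch of the same idea specialised to the two-square vacuum world; the internal model (which squares are known to be clean) and the rules below are illustrative assumptions, not from the slides:

```python
# Sketch of the MODEL-BASED-REFLEX-AGENT idea, specialised to the two-square
# vacuum world. The internal model (which squares are known to be clean) and
# the rules are illustrative assumptions, not part of the slides.

def make_model_based_vacuum_agent():
    state = {"A": "Unknown", "B": "Unknown"}   # internal model of the world
    last_action = None

    def update_state(percept, action):
        location, status = percept
        state[location] = status          # what the sensors report now
        if action == "Suck":              # how the last action changed the world
            state[location] = "Clean"

    def agent(percept):
        nonlocal last_action
        update_state(percept, last_action)
        location, status = percept
        if status == "Dirty":
            action = "Suck"
        elif all(v == "Clean" for v in state.values()):
            action = "NoOp"               # the model says everything is clean
        else:
            action = "Right" if location == "A" else "Left"
        last_action = action
        return action

    return agent

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Clean")))   # NoOp: both squares now known to be clean
```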
      • Encodes an “internal state” of the world to remember the past as contained in earlier percepts
      • Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. “State” is used to encode different “world states” that generate the same immediate percept
      • Requires ability to represent change in the world with/without the agent
        • one possibility is to represent just the latest state, but then cannot reason about hypothetical courses of action
      • Example: Rodney Brooks's Subsumption Architecture . Main idea: build complex intelligent robots by decomposing behaviors into a hierarchy of skills, each defining a complete percept-action cycle for one very specific task. For example, avoiding contact, wandering, exploring, recognizing doorways, etc. Each behavior is modeled by a finite-state machine with a few states (though each state may correspond to a complex function or module). Behaviors are loosely coupled and interact asynchronously
    • (model-based) Goal-based agents
      • The agent needs goal information that describes desirable situations
      • Consider future
      • Search and planning to find action sequences for goal
      • Less efficient but more flexible
    • Utility-based agents
      • Goals alone are not enough to generate high-quality behavior sometimes
        • Goals often provide only a binary distinction, e.g., happy vs. unhappy
      • A utility function maps a state (or its sequence) onto a real number, e.g. the degree of happiness
        • Can provide a tradeoff between conflicting goals e.g. speed vs. security
      • If multiple goals, the likelihood of success of each goal can be weighed up against the importance of the goals
    • Utility-based agents
      • When there are multiple possible alternatives, how to decide which one is best?
      • A goal specifies only a crude distinction between happy and unhappy states, but we often need a more general performance measure that describes the “degree of happiness”
      • Utility function U: State → Reals, indicating a measure of success or happiness at a given state
      • Allows decisions comparing choice between conflicting goals, and choice between likelihood of success and importance of goal (if achievement is uncertain)
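      A minimal sketch of utility-based action selection: compute the expected utility of each action over its possible outcome states and pick the best. The outcome probabilities and utility values below are made-up numbers:

```python
# Minimal sketch of utility-based action selection: choose the action whose
# expected utility, summed over its possible outcome states, is highest.
# The outcome distributions and utility values below are made-up numbers.

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, state) pairs for one action."""
    return sum(p * utility[state] for p, state in outcomes)

utility = {"arrive_fast": 10.0, "arrive_late": 4.0, "accident": -100.0}

actions = {
    "speed_up":   [(0.70, "arrive_fast"), (0.25, "arrive_late"), (0.05, "accident")],
    "drive_calm": [(0.40, "arrive_fast"), (0.59, "arrive_late"), (0.01, "accident")],
}

best = max(actions, key=lambda a: expected_utility(actions[a], utility))
for name, outcomes in actions.items():
    print(name, expected_utility(outcomes, utility))
print("best action:", best)
```

      In this toy example the calmer action wins, which is exactly the speed vs. safety tradeoff mentioned above.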
    • Learning agents
      • So far, we talked about various methods for selecting actions
        • We have not explained how the agent programs come into being
      • 4 major components
        • Learning element is responsible for making improvements
          • Percepts alone give no indication of how well the agent is doing
          • Uses feedback from critic
        • Critic tells the learning agent how well the agent is doing
          • in terms of performance standard
          • Note that performance standard is fixed
        • Performance element is responsible for selecting external actions
          • This is the agent in the previous slides
          • Takes percept and decides on actions
        • Problem generator is responsible for suggesting actions that will lead to new and informative experiences
          • Can choose suboptimal but exploratory actions
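      A skeletal Python sketch of how these four components might be wired together; every class, function, and parameter name here is hypothetical:

```python
# Skeletal sketch of the four learning-agent components described above.
# All class, function, and parameter names are hypothetical; a real
# implementation would fill in the learning algorithm and the action policy.

class LearningAgent:
    def __init__(self, performance_element, critic, learning_element, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.critic = critic                            # scores behavior against a fixed standard
        self.learning_element = learning_element        # improves the performance element
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                            # how well are we doing?
        self.learning_element(self.performance_element, feedback)  # try to improve the policy
        exploratory = self.problem_generator(percept)              # maybe try something new
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)

# Trivial instantiation, just to show the wiring; the components do nothing useful.
agent = LearningAgent(
    performance_element=lambda percept: "NoOp",
    critic=lambda percept: 0.0,
    learning_element=lambda element, feedback: None,
    problem_generator=lambda percept: None,
)
print(agent.step("some percept"))   # NoOp
```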
    • Learning agents
    • Assignment 1
      • History of AI
      • Relation of AI with other Domains
      • Human intelligence vs. artificial intelligence
      • Performance measure of an automated car
      • Applications of AI
        • Submission date: by May 18, 2010
        • Submit at: ko_rosh@yahoo.com
      • We will study search techniques in the next classes
      • Get ready!!!!!
      Thank you