Introduction to AI - Seventh Lecture

  1. Introduction to AI – 7th Lecture
     1980s: Body over Mind
     Wouter Beek, me@wouterbeek.com
     4 November 2010
  2. Part I: The 1980s, Body over Mind
  3. Nouvelle AI
     - Sensorimotor skills are essential to higher-level skills such as commonsense reasoning.
     - Abstract reasoning is the least interesting or important human skill.
     - So who first came up with this idea (albeit tentatively)?
     - "It […] is best to provide the machine with the best sense organs that money can buy […]. That process could follow the normal teaching of a child." (Alan Turing, 1950)
  4. Embodied AI – the philosophical position
     - Embodiment: the functions of the mind can be described in terms of aspects of the body.
     - Cognitivism: the functions of the mind can be described in terms of information processing.
     - Computationalism: the functions of the mind can be described in computational terms.
     - Cartesian dualism: the functions of the mind are described in immaterial terms.
     - Embodiment hypothesis: conceptual and linguistic structures are shaped by the peculiarities of perceptual structures. [Lakoff & Johnson 1999]
  5. Remember: the Moravec paradox
     - Optimism arose because machines solved things that are difficult for humans:
       - geometrical problems
       - logical proofs
       - games of chess
     - But things that are easy for humans are often difficult for machines:
       - taking the garbage out
       - recognizing that the man walking across the street is Joe
     - Sensorimotor skills and instincts are (arguably) necessary for intelligent behavior, but pose enormous problems for machines.
  6. Moravec paradox – the argument
     - The time that evolution took to produce a skill is proportional to the difficulty of implementing that skill.
     - The oldest human skills are unconscious and effortless.
     - The youngest human skills are conscious and require much effort.
     - Effortless skills are the most difficult to implement.
     - Difficult skills are the easiest to implement, once the effortless skills have been implemented.
     But:
     - Cultural evolution is faster than biological evolution.
     - Temporal progression need not parallel complexity.
     - Temporal progression suggests a quantitative development.
  7. Bottom-up approach
     - Bottom-up approach: follow the evolutionary trail, increasing the complexity of artificial agents.
     Contrast this to:
     - Top-down approach: start off with conscious reasoning and add sensors/actuators later.
     - Only-top approach: only solve conscious reasoning problems, such as planning, conceptual learning and language. Sensors/actuators, although interesting, are not part of AI.
  8. Subsumption architecture
     Characteristics:
     - Task decomposition.
     - Parallel processing: layers have independent goals.
     - Bottom-up design: from unconscious to conscious behavior.
     - No internal representation: layers have implicit goals.
     Advantages:
     - Modularity.
     - Robustness, autonomy.
     - Iterative development and testing.
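
To make these characteristics concrete, here is a minimal sketch of a subsumption-style controller in Python. The layer names, the sensor field and the fixed-priority arbitration loop are illustrative assumptions, not part of the lecture; Brooks's actual architecture wires layers together with suppression and inhibition links (see slide 17 below).

```python
import random

def avoid_layer(sensors):
    """Lowest layer: turn away from nearby obstacles (implicit goal)."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None  # no opinion: defer to other layers

def wander_layer(sensors):
    """Higher layer: wander when nothing more urgent is happening."""
    return random.choice(["forward", "turn_left", "turn_right"])

def arbitrate(sensors):
    """Fixed-priority arbitration: the safety layer's output wins.
    This is a simplification of Brooks's suppression/inhibition wiring."""
    for layer in (avoid_layer, wander_layer):  # priority order, low to high
        action = layer(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle_distance": 0.3}))  # -> turn_left
```

Note how each layer is a self-contained behavior with no shared world model; the Creature's overall behavior emerges from which layer gets to act.
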
  9. Reactive planning
     - Operate in a timely fashion.
     - Work in dynamic and unpredictable environments.
     - Compute only the next action, based on the current stimuli.
     - Cognitive minimalism: behavior modules are FSMs without memory or learning abilities.
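
A toy sketch of such a behavior module, assuming a light-seeking robot; the state names, stimuli and transition table are hypothetical examples, not from the lecture. The module computes only the next action from the current state and stimulus, with no memory beyond the state and no learning:

```python
TRANSITIONS = {
    # (state, stimulus) -> (next_state, action)
    ("seeking", "light_left"):  ("seeking", "turn_left"),
    ("seeking", "light_right"): ("seeking", "turn_right"),
    ("seeking", "light_ahead"): ("homing",  "forward"),
    ("homing",  "at_target"):   ("seeking", "stop"),
}

def step(state, stimulus):
    """Compute only the next action from the current state and stimulus;
    nothing else is remembered, and the table itself never changes."""
    return TRANSITIONS.get((state, stimulus), (state, "forward"))

state = "seeking"
for stimulus in ("light_left", "light_ahead", "at_target"):
    state, action = step(state, stimulus)
    print(stimulus, "->", action)
```

Because the transition table is fixed, the module reacts in constant time, which is what buys the timeliness in dynamic environments.
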
  10. Neats & scruffies
     - What is the best way to conduct AI research?
     - Neats: by producing elegant solutions within a theoretical framework, including a formal notion of operation, e.g. provability.
     - Scruffies: by hacking and tweaking.
     - Mid-1970s: Roger Schank defines the distinction.
     - 1983: Nils Nilsson argues in his AAAI presidential address that both are needed.
     - 1989: Rodney Brooks argues that robots should be fast, cheap and out of control.
     - 2000s: the victory of the neats? [Russell & Norvig, p. 25]
  11. Part II: Reasoning without Representation
  12. Evolutionary decomposition
     - In evolutionary terms, reacting and acting took longer to develop than intelligence and expert knowledge.
     - Critique: the temporal progression of intelligent life forms need not be in line with either the qualitative or the quantitative progression of intelligent functions.
     - Artificial Flight researchers who try to emulate a modern airplane by subdividing tasks based on component segmentation will not get to the gist of the problem, i.e. aerodynamics.
     - In general: decomposition should proceed evolutionarily, from simple to complex (bottom-up), and not by segmenting the complex (top-down).
  13. Brooks: AI as an empirical science
     - AI should pose and verify hypotheses (remember Newell & Simon).
     - But AI never fails, so something must be wrong.
     - AI always succeeds by defining the unsolved parts of a problem as not pertaining to AI.
     - This is done by factoring out all aspects of perception and action.
     - This is a dubious form of abstraction (and it explains the brittleness problem).
     - Critique: AI often fails, and this failure is not always related to perception or action. For instance, machine translation research of the 1950s and 1960s failed because of the open-world problem.
  14. Brittleness problem
     - Brittleness problem: the inability to cope with unexpected changes in the environment.
     - The brittleness problem in AI is caused by the division between perception, action and reasoning. In real organisms there is no such segmentation.
     - The solution is a specific interpretation of the empirical enterprise of AI research: not top-down hypothesis validation, but bottom-up engineering.
  15. AI as bottom-up engineering
     - AI should be the engineering task of building Creatures that:
       - are completely autonomous mobile agents;
       - co-exist with humans in the world;
       - are seen by humans as intelligent beings in their own right.
     - Creatures should follow these engineering principles. They should:
       - operate in a timely fashion;
       - be robust, exhibiting a gradual change in capability under environmental change;
       - maintain multiple goals;
       - do something, i.e. have a purpose in being.
  16. Horizontal vs vertical layers
     - [A] In traditional AI research, the assumptions that independent research fields make are not forced to be realistic. This is a bug in the functional-decomposition approach.
     - The vertical layers: machine learning, vision, knowledge systems, automatic translation.
     - [B] The traditional decomposition separates, among other things, peripheral perception and action modules from central reasoning or processing modules.
     - The fact that [A] assumptions are not enforced does not imply [B] that the underlying decomposition is wrong.
     - It must be shown that under the traditional, functional decomposition of the research field, assumptions cannot possibly be enforced.
     - What really plays a role here: the assumption that reasoning and language are heavily influenced by sensors and actuators, and by being in the world.
     - The horizontal layers: obstacle avoidance, path finding, path planning.
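
A toy contrast in Python may help; all module functions below are hypothetical stand-ins, not part of any real system. The vertical agent chains separate perception, world-model, planning and action modules into one pipeline, while the horizontal agent stacks complete sensor-to-actuator activity layers:

```python
def vertical_agent(sensors):
    """Vertical (functional) decomposition: one pipeline through separate
    perception, world-model, planning and action modules (stubs here)."""
    percept = {"obstacle": sensors["obstacle_distance"] < 0.5}     # vision stub
    model = dict(percept, goal="recharge")                         # world-model stub
    plan = ["turn_left"] if model["obstacle"] else ["forward"]     # planner stub
    return plan[0]                                                 # executor stub

def avoid_obstacles(sensors):
    """Horizontal layer: a complete sensor-to-actuator loop of its own."""
    return "turn_left" if sensors["obstacle_distance"] < 0.5 else None

def follow_path(sensors):
    """Another horizontal layer with its own implicit goal."""
    return "forward"

def horizontal_agent(sensors):
    """Horizontal (activity) decomposition: stacked complete loops,
    with no shared central representation between them."""
    for layer in (avoid_obstacles, follow_path):  # low to high
        action = layer(sensors)
        if action is not None:
            return action

sensors = {"obstacle_distance": 0.3}
print(vertical_agent(sensors), horizontal_agent(sensors))
```

In the vertical agent, an unrealistic assumption in any one module silently propagates down the pipeline; in the horizontal agent, each layer faces the world on its own.
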
  17. Sparseness of representations
     - The world is its own best representation.
     - No world-model maintenance, so more robust.
     - Each layer has an independent and implicit purpose or goal.
     - The purpose of the entire Creature is implicit in the combination of the independent purposes of the individual layers.
     Layer interactions are expressed in non-symbolic terms:
     - Suppression: side-tapping an input wire, replacing the original input message with a message from a higher level.
     - Inhibition: side-tapping an output wire, blocking an output message without replacing it.
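
A minimal sketch of these two wire-tapping interactions, assuming a simple message-passing view; the message strings and module names are illustrative assumptions:

```python
def lower_layer(message):
    """A lower-level behavior module: acts on whatever input it receives."""
    return f"act_on({message})"

def suppress(original_input, higher_message):
    """Suppression: a higher layer side-taps an input wire and substitutes
    its own message for the original one."""
    return higher_message if higher_message is not None else original_input

def inhibit(output_message, inhibited):
    """Inhibition: a higher layer side-taps an output wire and blocks the
    message without putting anything in its place."""
    return None if inhibited else output_message

# Suppression: the lower layer now acts on the higher layer's message.
print(lower_layer(suppress("sensor_ping", "override_turn")))
# Inhibition: the lower layer's output is dropped entirely.
print(inhibit(lower_layer("sensor_ping"), inhibited=True))
```

The point is that coordination happens on the wires between layers, not in a shared symbolic representation.
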
  18. Activity-producing subsystem decomposition
     Combines:
     - [1] bottom-up decomposition (activity-producing);
     - [2] horizontal layering (subsystem).
     Brooks adds to this [3] the sparseness of representations.
  19. Embodied AI's disadvantages
     - Meta-cognition: no reification of tasks, goals or processes.
     - Goal interference: independent goal-directed behaviors can conflict.
     - Task coordination: subsumption across multiple levels is weakly structured.
     - Learning: related to the meta-cognition disadvantage, since there is no medium in which learning can take place, i.e. no reification of thoughts.
