• Sensorimotor skills are essential to higher-level skills like commonsense reasoning.
• Abstract reasoning is the least interesting or important human skill.
• So who came up with this idea (albeit tentatively)?
• "It […] is best to provide the machine with the best sense organs that money can buy […]. That process could follow the normal teaching of a child."
    (Alan Turing, 1950)
Embodied AI – philosophical position
• Embodiment: the functions of the mind can be described in terms of aspects of the body.
• Cognitivism: the functions of the mind can be described in terms of information processing.
• Computationalism: the functions of the mind can be described in computational terms.
• Cartesian dualism: the functions of the mind are described in immaterial terms.
• Embodiment hypothesis: conceptual and linguistic structures are shaped by the peculiarities of perceptual structures. [Lakoff & Johnson 1999]
Remember: Moravec paradox
• Optimism arose because machines solved things that are difficult for humans:
  - Geometrical problems
  - Logical proofs
  - Games of chess
• But things that are easy for humans are often difficult for machines:
  - Taking the garbage out.
  - Recognizing that the man walking across the street is Joe.
• Sensorimotor skills and instincts are (arguably) necessary for intelligent behavior, but pose enormous problems for machines.
Moravec paradox – The argument
• The time that evolution took to produce a certain skill is proportional to the difficulty of implementing that skill.
• The oldest human skills are unconscious and effortless.
• The youngest human skills are conscious and require lots of effort.
• Effortless skills are the most difficult to implement.
• Difficult skills are the easiest to implement, once the effortless skills have been implemented.
• Cultural evolution is faster than biological evolution.
• Temporal progression need not parallel complexity.
• Temporal progression suggests a quantitative development.
Bottom-up approach: follow the evolutionary trail, increasing the complexity of artificial agents.
Contrast this to:
• Top-down approach: start off with conscious reasoning and add sensors/actuators later.
• Only-top approach: only solve conscious-reasoning problems: planning, conceptual learning and language. Sensors/actuators, although interesting, are not part of AI.
• Task decomposition
• Parallel processing: layers having independent goals.
• Bottom-up design: from unconscious to conscious.
• No internal representation: layers having implicit goals.
• Robustness, autonomy
• Iterative development and testing
• Operate in a timely fashion
• Work in dynamic and unpredictable environments.
• Compute only the next action, based on the current sensory input.
• Cognitive minimalism: behavior modules are FSMs without memory or learning abilities.
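The last two principles above can be illustrated with a minimal sketch (function name, action strings and the threshold are hypothetical): a behavior module that maps the current percept directly to the next action, keeping no state between ticks.

```python
# Minimal sketch of a memoryless behavior module (hypothetical names
# and threshold): each call maps the current percept directly to an
# action, with no stored world model, history, or learning.

def avoid_obstacle(sonar_cm):
    """Compute only the next action from the current reading."""
    if sonar_cm < 30:           # obstacle close: turn away from it
        return "turn_left"
    return "go_forward"         # path clear: keep moving

# Each tick uses only the latest reading; nothing carries over.
print(avoid_obstacle(20))   # -> turn_left
print(avoid_obstacle(80))   # -> go_forward
```

Because the module is a fixed stimulus-response mapping, it behaves like a finite-state machine with a single state: exactly the cognitive minimalism the principle describes.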
Neats & scruffies
• What is the best way to conduct AI research?
• Neats: by producing elegant solutions within a formal framework.
  - Including a formal notion of operation, e.g. provability.
• Scruffies: by hacking and tweaking.
• Mid-1970s: Roger Schank defines the distinction.
• 1983: Nils Nilsson at AAAI: both are needed.
• 1989: Rodney Brooks: robots should be fast, cheap and out of control.
• 2000s: the victory of the neats? [Russell & Norvig, p. 25]
• In evolutionary terms, reacting and acting took longer to develop than intelligence and expert knowledge.
• Critique: the temporal progression of intelligent life forms need not be in line with the qualitative nor with the quantitative progression of intelligent functions.
• Artificial-flight researchers who try to emulate a modern airplane by subdividing tasks based on component segmentation will not get to the gist of the problem, i.e. aerodynamics.
• In general: decomposition should proceed evolutionarily, from simple to complex (bottom-up), and not by segmenting the complex (top-down).
Brooks: AI as an empirical science
• AI should pose and verify hypotheses (remember Newell & Simon).
• But AI never fails, so something must be wrong.
  - AI always succeeds by defining the unsolved parts of a problem as not pertaining to AI.
  - This is done by factoring out all aspects of perception and motor control.
  - This is a dubious form of abstraction (and it explains the apparent success of AI).
• Critique: AI often fails, and this failure is not always related to perception or action.
  - For instance, Machine Translation research of the 1950s and 1960s failed because of the open-world problem.
• Brittleness problem: the inability to cope with unexpected changes in the environment.
  - The brittleness problem in AI is caused by the division between perception, action and reasoning. In real organisms there is no such segmentation.
• The solution is a specific interpretation of the empirical enterprise of AI research:
  - not top-down hypothesis validation, but bottom-up engineering.
AI as bottom‐up engineering
• AI should be the engineering task of building Creatures that
  - are completely autonomous mobile agents,
  - co-exist with humans in the world,
  - are seen by humans as intelligent beings in their own right.
• Creatures should follow the following engineering principles. They should
  - operate in a timely fashion,
  - be robust, exhibiting a gradual change in capability under environmental change,
  - maintain multiple goals,
  - do something, have a purpose in being.
Horizontal vs vertical layers
• [A] In traditional AI research, the assumptions that independent research fields make are not forced to be realistic. This is a bug in the functional decomposition approach.
  - The vertical layers: machine learning, vision, knowledge systems, etc.
• [B] The traditional decomposition separates, among other things, peripheral perception and action modules from central reasoning or cognition modules.
• The fact that [A] assumptions are not enforced does not imply [B] that the underlying decomposition is wrong.
• It must be shown that, under the traditional functional decomposition of the research field, assumptions cannot possibly be enforced.
• What really plays a role here: the assumption that reasoning and language are heavily influenced by sensors and actuators, and by being in the world.
• The horizontal layers: obstacle avoidance, path finding, path planning.
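The contrast between the two decompositions can be sketched as follows (all names, thresholds and action strings are hypothetical): in the vertical decomposition each module is one research field in a single pipeline, while in the horizontal decomposition each layer is a complete sense-act loop with an implicit goal of its own.

```python
# Vertical (functional) decomposition: one pipeline, each stage owned
# by a separate research field (perception/modelling, then planning).
def vertical_agent(distance_cm):
    model = {"obstacle": distance_cm < 30}              # perception, modelling
    return "turn" if model["obstacle"] else "forward"   # planning, execution

# Horizontal (activity) decomposition: each layer runs its own full
# sense->act loop and pursues an implicit goal of its own.
def layer_avoid(distance_cm):
    return "turn" if distance_cm < 30 else None   # only cares about obstacles

def layer_wander(distance_cm):
    return "forward"                              # always wants to move

def horizontal_agent(distance_cm):
    # When the avoidance layer produces an action it takes precedence;
    # otherwise the wander layer's action goes through.
    return layer_avoid(distance_cm) or layer_wander(distance_cm)

print(horizontal_agent(20))   # -> turn
print(horizontal_agent(80))   # -> forward
```

Note that removing `layer_avoid` leaves a working (if clumsier) agent, whereas removing a stage of the vertical pipeline breaks it entirely; this is the robustness argument for horizontal layers.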
Sparseness of representations
• The world is its own best representation.
• No world-model maintenance, so more robust.
• Each layer has an independent and implicit purpose or goal.
• The purpose of the entire Creature is implicit in the collation of the independent purposes of the individual layers.
Layer interactions in non‐symbolic terms:
• Suppression: side-tapping, replacing a lower layer's original input message with a message from a higher layer.
• Inhibition: side-tapping, inhibiting a lower layer's output message without replacing it.
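The two mechanisms can be sketched in a few lines (function names are hypothetical): suppression substitutes a message on an input wire, while inhibition merely blocks an output wire.

```python
def suppress(original_input, higher_msg):
    """Suppression: when the higher layer sends a message, it REPLACES
    the original message on the lower layer's input wire."""
    return higher_msg if higher_msg is not None else original_input

def inhibit(output_msg, inhibited):
    """Inhibition: the higher layer BLOCKS the lower layer's output
    message without supplying a replacement."""
    return None if inhibited else output_msg

print(suppress("wander", "avoid"))   # -> avoid   (input replaced)
print(suppress("wander", None))      # -> wander  (wire untouched)
print(inhibit("wander", True))       # -> None    (output blocked)
```

In both cases the interaction happens on the wires between modules, not through any shared symbolic representation, which is why the slide calls them non-symbolic.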
Activity‐Producing Subsystem Decomposition
• Bottom-up decomposition (activity-producing)
• Horizontal layering (subsystem)
• Brooks adds to this the sparseness of representations.
Embodied AI’s Disadvantages
• Meta-cognition: no reification of tasks, goals or thoughts.
• Goal interference: independent goal-directed layers may interfere with each other.
• Task coordination: subsumption in the case of multiple levels is weakly structured.
• Learning: related to the meta-cognition disadvantage, since there is no medium in which learning can take place, i.e. no reification of thoughts.