Academic Course: 11 Systems with Internal Models
 

By Alan Winfield



    Presentation Transcript

    • Self-Awareness in Autonomic Systems: Systems with Internal Models
      Designed by Alan Winfield
    • Outline
      • Why are internal models important?
      • What is an internal model?
      • Examples from robotics
      • A visual abstraction
      • The major challenges of internal models
        – The internal representation
        – The reality gap
        – Connecting the internal model
        – Making it work
      • Towards a generic architecture for internal models
    • Why do self-aware systems need internal models?
      • Because the self-aware system can run the internal model and therefore test what-if hypotheses*
        – what if I carry out action x..?
        – of several possible next actions xi, which should I choose?
      • Because an internal model (of itself) provides the self in self-aware
      *See Dennett’s Tower of ‘generate and test’ in Dennett, D. (1995). Darwin’s Dangerous Idea, Penguin.
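    The ‘generate and test’ idea on this slide can be sketched as a minimal loop: run the internal model over each candidate next action and pick the best-scoring one. This is an illustrative sketch only; the names `simulate` and `choose_action`, the 1-D state, and the safety criterion are assumptions, not part of the lecture.

    ```python
    # Minimal sketch of 'generate and test' action selection using an
    # internal model. All names and the 1-D world are illustrative.

    def simulate(state, action):
        """Hypothetical internal model: predict the next state and a
        safety score for taking `action` from `state` (a 1-D position)."""
        next_state = state + action
        # Treat states outside [-10, 10] as unsafe.
        safety = 1.0 if -10 <= next_state <= 10 else 0.0
        return next_state, safety

    def choose_action(state, candidate_actions):
        """Test each possible next action xi in the internal model and
        choose the one whose predicted outcome scores best."""
        def score(action):
            _, safety = simulate(state, action)
            return safety
        return max(candidate_actions, key=score)

    best = choose_action(9, [-1, 0, 2])
    print(best)  # -1: action 2 would leave the safe region, so it loses
    ```

    The point of the sketch is that the hypotheses are tested in simulation, not in the world: `simulate` is called for every candidate, but only the winning action would ever be executed.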
    • What is an internal model?
      • It is a mechanism for representing both the system itself and its current environment
        – example: a robot with a simulation of itself and its currently perceived environment, inside itself
      • The mechanism might be centralized (as in the example above), distributed, or emergent
    • Examples
      • Examples of conventional internal models, e.g.
        – Analytical or computational models of plant in classical control systems
        – Adaptive connectionist models, such as online-learning Artificial Neural Networks (ANNs), within control systems
        – GOFAI symbolic representation systems
      • Note that internal models are not a new idea
    • Examples 1
      • A robot using self-simulation to plan a safe route with incomplete knowledge
      Vaughan, R. T. and Zuluaga, M. (2006). Use your illusion: Sensorimotor self-simulation allows complex agents to plan with incomplete self-knowledge, in Proceedings of the International Conference on Simulation of Adaptive Behaviour (SAB), pp. 298–309.
    • Examples 2
      • A robot with an internal model that can learn how to control itself
      Bongard, J., Zykov, V. and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314: 1118–1121.
    • Examples 3
      • ECCE-Robot – a robot with a complex body uses an internal model as a ‘functional imagination’
      Marques, H. and Holland, O. (2009). Architectures for functional imagination, Neurocomputing 72, 4–6, pp. 743–759.
      Diamond, A., Knight, R., Devereux, D. and Holland, O. (2012). Anthropomimetic robots: Concept, construction and modelling, International Journal of Advanced Robotic Systems 9, pp. 1–14.
    • Examples 4
      • A distributed system in which each robot has an internal model of itself and of the whole system
        – Robot controllers and the internal simulator are co-evolved
      O’Dowd, P., Winfield, A. and Studley, M. (2011). The Distributed Co-Evolution of an Embodied Simulator and Controller for Swarm Robot Behaviours, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, September 2011.
    • A visual abstraction
      Maturana, H. R. and Varela, F. J. (1987). The Tree of Knowledge: The Biological Roots of Human Understanding. Boston, MA: New Science Library/Shambhala Publications.
    • …for a self-aware artificial system
      • A self-aware machine
      From a lecture by Prof. Roger Moore: Extending Maturana & Varela’s symbols, FECS, Feb. 2012.
    • Major challenges 1
      • To model both the system and its environment with sufficient fidelity, including:
        – The system and its behaviours
        – The world and its physics
        – The system’s sensorium
        – The effect of interactions between the modelled system and the modelled world, on its modelled sensors
      • But what is sufficient fidelity?
    • Major challenges 2
      • Example – imagine placing this Webots simulation inside each NAO robot
      Note the simulated robot’s-eye view of its world
    • Major challenges 3
      • The Reality Gap
        – No model can perfectly represent both a system and its environment; errors in representation are referred to as the reality gap
      • The effect of the reality gap will likely be to reduce the efficacy of the system’s self-awareness, and therefore its ability to accurately model unexpected events or possible actions
        – But this is speculation – it’s an open research question
    • Major challenges 4
      • To connect the internal model with the system’s real sensors and actuators (or equivalent)
        – i.e. so that events in the real world, as sensed by the system, are represented in the model
      • To synchronize updating of the internal model from both changing perceptual data and efferent actuator data
        – i.e. so that the internal model is in step with the real system and its environment
        – except when the model is being used to test what-if hypotheses
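    The synchronization requirement above can be sketched as an update step that fuses incoming sense data with the efferent actuator command, and that is suspended while the model is decoupled to test what-if hypotheses. The class name, the 1-D state, and the 0.8/0.2 blending weights are all assumptions made for illustration.

    ```python
    # Sketch of keeping an internal model in step with the real system:
    # each update fuses what the sensors report with what the model
    # predicts from the last actuator command. Synchronization is
    # suspended while the model is being used for what-if testing.
    # All names and numbers here are illustrative assumptions.

    class SyncedModel:
        def __init__(self, initial_state=0.0):
            self.state = initial_state
            self.testing_what_if = False  # True => decoupled from reality

        def update(self, sensed_state, actuator_command):
            if self.testing_what_if:
                return  # model is exploring hypotheses, not tracking reality
            # Blend the model's own prediction (state + command) with the
            # sensed state, trusting the sensors more than the prediction.
            predicted = self.state + actuator_command
            self.state = 0.8 * sensed_state + 0.2 * predicted

    model = SyncedModel()
    model.update(sensed_state=1.0, actuator_command=1.0)
    print(model.state)  # 0.8*1.0 + 0.2*1.0 = 1.0
    ```

    Using both data flows matters: the actuator command lets the model anticipate its own motion between sensor readings, while the sensed state corrects any drift across the reality gap.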
    • Major challenges 5
      • Making it all work
        – building internal models that represent a system and its environment,
        – hooking the model up to the system’s perception and actuation systems,
        – making use of the model to moderate behaviour (e.g. for safety), and
        – smoothly integrating all data flows
        is immensely challenging and remains a difficult research problem
    • A Generic Architecture
      • The major building blocks and their connections:
        [Block diagram: sense data flows into the Control System and the Internal Model; the Control System issues actuator demands]
        – The IM is initialized to match the current real situation
        – The loop of generate and test: the IM evaluates the consequences of each possible next action
        – The IM moderates action selection in the controller
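    The building blocks on this slide can be sketched as two cooperating components: an internal model (IM) initialized from sense data, and a control system whose action selection the IM moderates. The class and method names, and the 1-D safe-region world, are illustrative assumptions, not the lecture’s design.

    ```python
    # Sketch of the generic architecture: a control system whose action
    # selection is moderated by an internal model (IM). All names and
    # the 1-D world are illustrative assumptions.

    class InternalModel:
        def __init__(self):
            self.state = None

        def initialize(self, sensed_state):
            # The IM is initialized to match the current real situation.
            self.state = sensed_state

        def evaluate(self, action):
            # Generate and test: predict the consequence of one candidate
            # action without executing it. States beyond +/-10 are unsafe.
            predicted = self.state + action
            return 1.0 if -10 <= predicted <= 10 else 0.0

    class ControlSystem:
        def __init__(self, internal_model):
            self.im = internal_model

        def next_action(self, sense_data, candidates):
            # Sense data reaches both controller and IM; the IM moderates
            # action selection by vetoing actions it predicts are unsafe.
            self.im.initialize(sense_data)
            safe = [a for a in candidates if self.im.evaluate(a) > 0.0]
            return max(safe, default=0)  # fall back to 'do nothing'

    controller = ControlSystem(InternalModel())
    print(controller.next_action(9, [-1, 0, 2]))  # 0: action 2 is vetoed
    ```

    The design choice worth noting is that the IM never drives the actuators itself: it only filters the controller’s candidate actions, which is one simple way an internal model can moderate behaviour for safety.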
    • Conclusions
      • Would such a system be self-aware?
        – Yes, but only in a minimal way. It might provide sufficient self-awareness for, e.g., safety in unknown or unpredictable environments
      • But this would have to be demonstrated by the robot behaving in interesting ways, not pre-programmed, in response to novel situations
      • Validating any claims to self-awareness would be very challenging
      http://alanwinfield.blogspot.com/