This is a mixture of philosophy, artificial intelligence, cognitive science, software engineering and theoretical biology, presented at the Workshop on Philosophy and Engineering, London, 10-12 November 2008.
Virtual Machines in Philosophy, Engineering & Biology
This document provides information about upcoming presentations and workshops given by Aaron Sloman on the topic of virtual machines in philosophy, engineering, and biology. It lists details of seminars and talks to be given at the University of Birmingham, Newcastle, Oxford, and the Royal Academy of Engineering in London between October and November 2008. The document acknowledges those who have contributed to Sloman's work and provides links to related materials online.
Virtual Machines and the Metaphysics of Science Aaron Sloman
The document is an abstract for a presentation given by Aaron Sloman at the Metaphysics of Science conference in Nottingham on September 12, 2009. The presentation discusses virtual machines and their importance for philosophy. It notes that philosophers regularly use complex virtual machines composed of interacting non-physical subsystems, like operating systems and web browsers. However, philosophers often ignore or misdescribe these virtual machines in discussions of topics like functionalism and causation. The presentation aims to explain virtual machines and how they are relevant to several philosophical problems regarding issues like supervenience, causation, and the mind-body problem.
Seven Master of Arts students from the Communication Design faculty at the University of Applied Sciences Constance will be working on design research concerning multi-touch interfaces during summer term 2008. Faces and history.
This document provides an overview of an active workshop on functional specifications and use cases. It discusses the purpose of the workshop, which is to introduce a simple, practical, and precise methodology for writing functional specifications for software systems. The workshop agenda is then outlined, which will cover requirements, the use case model, a case study, system and software use cases, and use case realization. Finally, some basic concepts that will be covered in the workshop like stakeholders, actors, use cases, and use case diagrams are introduced at a high level.
Virtuality, causation and the mind-body relationship Aaron Sloman
This document discusses virtual machinery and causation. It defines three types of machines: physical machines, abstract mathematical objects called mathematical machines, and running virtual machines that are instances of mathematical machines controlling events in physical machines. It explores how running virtual machines can have causal powers despite being based on abstract mathematical objects, and how causation occurs both physically and through information processing in virtual machines. The document aims to clarify the nature and causal abilities of virtual machines.
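The three-machine distinction described above can be made concrete with a small sketch. The names and the toy "actuator" below are my own illustrative assumptions, not Sloman's: an abstract transition table plays the role of the mathematical machine, and a running instance of it, driving events in a stand-in for physical machinery, plays the role of the running virtual machine.

```python
# Mathematical machine: a pure, timeless description of a counter that
# emits a signal on every third tick. Maps state -> (next_state, signal).
SPEC = {0: (1, None), 1: (2, None), 2: (0, "pulse")}

class RunningVM:
    """An instance of SPEC: an entity with state, history and causal effects."""
    def __init__(self, actuator):
        self.state = 0
        self.actuator = actuator  # stand-in for physical machinery

    def tick(self):
        self.state, signal = SPEC[self.state]
        if signal is not None:
            self.actuator(signal)  # a VM-level event causes a "physical" event

events = []
vm = RunningVM(actuator=events.append)
for _ in range(6):
    vm.tick()
print(events)  # ['pulse', 'pulse']: two pulses in six ticks
```

The point of the sketch is that SPEC itself causes nothing; only its running instance, coupled to the actuator, produces events.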
Computer science is the systematic study of computation and computational processes. It involves determining what problems can be solved efficiently using algorithms and studying approaches like programming languages and computer systems implementation. While it makes use of computers, computer science is not just about building hardware and software but rather how we use tools to solve problems and further our understanding. It is a broad, multidisciplinary field that fuels its own advancement through computational methods and has transformed nearly all other domains through its applications.
Book-2011 Kopetz Real-time systems Design principles for distributed embedded... AlfredoLaura2
This document provides an overview and preface for the textbook "Real-Time Systems" by Hermann Kopetz. The textbook is intended as a senior/graduate level textbook on real-time embedded systems and covers 14 topics that could map to a 14-week semester course. It assumes a basic background in computer science or engineering. The second edition has been substantially revised with new chapters on simplicity, energy/power awareness, and the Internet of things. It focuses on the design of distributed real-time systems at the architecture level while considering the progression of physical time.
Policies aimed at bringing universities closer together have always been (and still are) sensitive political issues. Ascertaining the position and weight of UTC in a COMUE* alongside two major French universities, Paris 4 (Sorbonne) and Paris 6 (Pierre & Marie Curie, or UPMC), has been no simple matter. Among the issues is the place for technology in a world of traditional ‘pure’ science. Another is the pedagogical contribution of the arts and humanities, which have been an integral factor for UTC, in both teaching and research, since the beginning.
This document discusses the development of new principles for modeling control systems based on bionic models inspired by human intelligence. It argues that traditional artificial intelligence and cognitive science approaches have not achieved human-level intelligence. The document proposes developing a formal model of the psyche using layered abstraction principles inspired by models used in computer engineering. This new bionic model would aim to exceed feasibility limits of current machine intelligence approaches by more closely modeling principles of the human mental apparatus.
Fundamental Questions - The Second Decade of AI: Towards Architectures for Hu... Aaron Sloman
The document summarizes a presentation given at the KI2006 Symposium on the history of artificial intelligence. It discusses:
1) The presenter's early education in AI in the late 1960s and 1970s, being impressed by works by Marvin Minsky and attending lectures by Max Clowes.
2) Interesting early AI work in the 1970s by researchers like Patrick Winston, Terry Winograd, and Gerald Sussman.
3) The presenter's realization in the early 1970s that the best way to do philosophy was through designing and implementing fragments of working minds in AI to test philosophical theories.
4) Some of the major AI centers that existed in the early
This document summarizes David De Roure's work over several decades exploring the integration of physical and digital worlds through social machines. It notes his early work in the 1990s on distributed systems and emergent order. In the 2000s, his focus shifted to data and computational grids. More recently, his research through the SOCIAM project examines social networks and how computation can promote new forms of social processes for a variety of user groups. The document outlines both technological developments and theoretical perspectives on social machines over the past 30 years.
This document discusses social machines, which are processes on the web where people do creative work and machines do administration. Examples mentioned include online forums and image classification by citizen scientists. Key points discussed include building models of social interaction between the physical and virtual world, and trajectories of social machines distinguished by their purpose. Open questions are also raised about the social machines of global systems science and implications of the social machines "sphere".
2005: Natural Computing - Concepts and Applications Leandro de Castro
The document discusses natural computing, which encompasses computing inspired by nature, simulating natural phenomena using computers, and using natural materials for computing. It surveys ideas from neurocomputing, evolutionary computing, swarm intelligence, immunocomputing, and artificial life. These fields take inspiration from neural networks, evolution, collective animal behavior, the immune system, and the synthesis of life-like behaviors to develop new algorithms and applications. The goal is to develop more robust, adaptive, and fault-tolerant computing approaches.
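The evolutionary-computing strand surveyed above can be illustrated with a toy example. The problem, parameters, and operator choices here are my own assumptions, not from the survey: candidate solutions are varied by mutation and filtered by selection, in this case maximizing the number of 1-bits in a string.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def evolve(bits=20, pop_size=30, generations=60):
    """Minimal mutation-plus-selection loop on the 'all ones' toy problem."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of 1-bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # selection: keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(bits)] ^= 1  # mutation: flip one bit
            children.append(child)
        pop = parents + children         # elitism: parents survive unchanged
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # typically reaches 20 (all ones) under these settings
```

The same vary-and-select skeleton underlies the more elaborate algorithms the survey covers; they differ mainly in representation and in how variation and selection are implemented.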
International journal of engineering issues vol 2015 - no 1 - paper 3 sophiabelthome
This document summarizes a paper on modeling evolving complex software systems as cyber-physical systems using principles from physics and mathematics. It discusses how software systems can be viewed as complex automatons with mathematical foundations in areas like complex numbers and Fourier transforms. Cybernetics provides tools to model human behaviors and interactions in these systems. The paper also discusses how analog computers were early models of physical phenomena, and how infinitesimals and differentials from calculus can model continuously changing aspects of cyber-physical systems, within the limits imposed by physical reality.
Scientific information is often hidden or not published properly. The ContentMine is a Social Machine consisting of semantic software and communities of domain expertise; it aims to liberate all scientific facts from the published literature on a daily basis.
The talk, delivered to the Computational Institute, was followed by a hands-on workshop on learning to use the technology and work as a community.
A paradigm is a way of thinking about the world. The document discusses several paradigm shifts in human-computer interaction, including: from batch processing to time-sharing and interactive computing; from command-line interfaces to graphical displays and direct manipulation; and from personal computing to ubiquitous computing. These paradigm shifts represent new ways of conceptualizing the relationship between humans and computers that emerged with technological advances, enabling new forms of usability and interaction.
The document summarizes the discussions and outcomes of a Dagstuhl Perspectives Workshop on applying tensor computing methods to problems in the Internet of Things (IoT). At the workshop, researchers from both industry and academia presented on challenges involving analyzing large, multi-dimensional streaming data from IoT devices and cyber-physical systems. Tensors provide a natural way to represent such data and can enable more efficient information extraction than alternative methods. However, further work is needed to develop benchmark challenges, datasets, and frameworks to make tensor methods more accessible and applicable to industrial IoT problems. The group discussed forming a knowledge hub and collaborating on data challenges to help establish tensor computing as a solution for machine learning on cyber-physical systems.
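The representational point made above can be sketched briefly. Assuming a hypothetical deployment of 4 devices with 3 sensors each (shapes and names are my own illustration, not from the workshop), readings fit naturally into a single tensor indexed by (device, sensor, time), and common analyses become single array operations.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic IoT data: 4 devices x 3 sensors x 100 time steps in one tensor.
readings = rng.normal(size=(4, 3, 100))

# Summaries fall out of axis-wise reductions: one mean per device.
per_device_mean = readings.mean(axis=(1, 2))

# "Unfolding" (matricization) flattens sensor and time into one axis,
# exposing the tensor to matrix methods such as PCA while keeping the
# device index intact.
unfolded = readings.reshape(4, -1)
print(unfolded.shape)  # (4, 300)
```

Dedicated tensor decompositions (CP, Tucker) go further than this unfolding, but the sketch shows why the data layout itself is the natural starting point.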
Research Paper: Quantum Computing
Abstract
Quantum computers usher in a new era of invention, and much of their innovation is still to come. The quantum-computing revolution has produced many challenges for ethical decision-making and prediction at different levels of life, raising new concerns such as invasion of privacy and threats to national security. Quantum computers could easily be used to access and steal private information and data; on the other hand, they can also help to eliminate such unethical intrusions and secure the information.
Quantum computers will be among the most powerful computers in the world, opening the door to encrypting information in much less time. Where supercomputers sometimes take many hours to encrypt, quantum computers can be used for the same purpose in a much shorter time, making the data and information harder to decrypt.
Many years from now, quantum computers will become mainstays throughout the world of computing. They will serve individuals and communities alike, but there is a significant concern that quantum computers could be used to invade people’s privacy (Hirvensalo, 2012).
Literature Review
Quantum computing is the area of study aimed at applying the principles of quantum theory to develop computer technology. The field of quantum mechanics arose from German physicist Max Planck’s attempts to describe the spectrum emitted by hot bodies; specifically, he wondered why the color of a flame shifts from red to yellow to blue as its temperature increases.
https://www.stratfor.com/analysis/approaching-quantum-leap-computing
There has been tremendous development in quantum computing since then, and more research is being done to realize its full potential. Quantum computing depends on the quantum laws of physics. Rather than storing information as 0s or 1s as conventional computers do, a quantum computer uses qubits, which can be a 1, a 0, or both at the same time. Quantum superposition, together with the quantum effects of entanglement and quantum tunneling, enables such computers to consider and manipulate all combinations of bits simultaneously. This effect makes quantum computation powerful and fast (Williams, 2014).
http://www.dwavesys.com/quantum-computing
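The qubit description above can be illustrated with a minimal state-vector sketch (my own toy example, not from the cited sources): a qubit is a pair of amplitudes, and a Hadamard gate takes a definite 0 into an equal superposition of 0 and 1.

```python
import math

def hadamard(amplitudes):
    """Apply a Hadamard gate to a single-qubit state (amp_of_0, amp_of_1)."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

qubit = (1.0, 0.0)        # definitely 0, like a classical bit
qubit = hadamard(qubit)   # now "0 and 1 at the same time"

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = [abs(a) ** 2 for a in qubit]
print(probs)  # each is (approximately) 0.5: equal chance of measuring 0 or 1
```

Simulating n qubits this way needs 2^n amplitudes, which is exactly why classical simulation breaks down and quantum hardware becomes interesting.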
Researchers in quantum computing have enjoyed a growing level of success. The first small 2-qubit quantum computer was developed in 1997, and in 2001 a 5-qubit quantum computer was used to successfully factor the number 15 [85]. Since then, experimental progress on a number of different technologies has been steady but slow, although the practical problems facing physical realizations of quantum computers can be addressed. It is believed that a quant.
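The 2001 factoring of 15 used Shor's algorithm, which reduces factoring to period finding. That reduction can be illustrated classically; the sketch below (my own illustration) brute-forces the period, which is precisely the step a quantum computer is believed to perform exponentially faster.

```python
import math

def factor_via_period(N, a):
    """Find a nontrivial factor of N from the period r of a^x mod N.

    Shor's reduction: if r is even and a^(r/2) is not -1 mod N, then
    gcd(a^(r/2) - 1, N) is a nontrivial factor of N.
    """
    r = 1
    while pow(a, r, N) != 1:  # smallest r with a^r = 1 (mod N)
        r += 1
    if r % 2:
        return None           # odd period: this choice of a fails
    g = math.gcd(pow(a, r // 2, N) - 1, N)
    return g if 1 < g < N else None

print(factor_via_period(15, 7))  # 3, since 7 has period 4 mod 15
```

The `while` loop takes time exponential in the number of digits of N; Shor's quantum Fourier transform finds r without that loop, which is the source of the speedup.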
This document provides an overview of nanotechnology and nanocomputing. It discusses how nanotechnology involves manipulating matter at the nanoscale level between 1-100 nanometers. Nanocomputing uses quantum dots and cellular automata as promising nanoscale computing components. The document also outlines some ethical considerations and risks of nanotechnology, as well as research being done in nanotechnology at the University of Central Florida.
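Cellular automata, one of the nanoscale computing components mentioned above, compute through purely local neighbor rules. A one-dimensional elementary cellular automaton (Rule 110 is my choice of example; quantum-dot cellular automata use different cell physics but rely on the same locality principle) shows the idea: each cell's next state depends only on itself and its two neighbors.

```python
RULE = 110  # the 8-bit rule table, one output bit per 3-cell neighborhood

def step(cells):
    """Advance a ring of cells one generation under the elementary CA rule."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15  # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

No cell "knows" the global pattern, yet Rule 110 is computationally universal, which is what makes such local-rule devices plausible computing substrates.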
Evolution of Computer Technology
The History of Computers
Essay about History of the Computer
Brief History Of Computers Essay
Essay on Computer Innovation
The First Generation of Computers Essay
Generation of Computers
Essay The Advancement of Computers
Technology : History Of Computers
Essay about The History of Computers
Computer Evolution
The Development of Computers Essay
History of the Development of Computers Essay
Personal Computer Research Paper
Chapter 4: Paradigms, from Dix, Finlay, Abowd and Beale (2004), Human-Computer Interaction, third edition, Prentice Hall, ISBN 0-13-239864-8. http://www.hcibook.com/e3/
The document discusses the history and evolution of paradigms in human-computer interaction (HCI). It describes several paradigm shifts in interactive technologies including: batch processing, time-sharing, interactive computing, graphical displays, personal computing, the World Wide Web, ubiquitous computing. Each new paradigm created a new perception of the human-computer relationship.
CSI 5387: Concept Learning Systems / Machine Learning butest
This document provides information about the CSI 5387: Concept Learning Systems / Machine Learning course taught by Dr. Nathalie Japkowicz. The objectives are to introduce machine learning principles, paradigms, and approaches. Students will read and present research papers, and complete a final project proposing and implementing a novel learning scheme. Course requirements include weekly paper critiques, 1-2 presentations, assignments, a project proposal, report, and presentation.
Construction kits for evolving life -- Including evolving minds and mathemati... Aaron Sloman
Darwin's theory of evolution by natural selection does not adequately explain the generative power of biological evolution. For that we need to understand the mechanisms involved in producing new options for natural selection, without which there would always be the same set of possibilities available. This applies also to the construction kits: evolution can produce new construction kits, "Derived" construction kits, based on the Fundamental construction kit provided by the physical universe and its originally lifeless physical and chemical mechanisms. It turns out that life needs both concrete and abstract construction kits, of ever increasing complexity. This paper introduces some basic ideas, though far more empirical and theoretical research is required, combining multiple disciplines. Slideshare no longer allows presentations to be updated, so I no longer use it. For a later version search for: Sloman "Construction kits for evolving life" cogaff. Most of my slideshare presentations have newer versions in the CogAff web site at the University of Birmingham, UK. (Not Alabama)
The Turing Inspired Meta-Morphogenesis Project -- The self-informing universe... Aaron Sloman
This replaces an earlier version. The latest version, with clickable links, is available at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
More Related Content
Similar to Virtual Machines in Philosophy, Engineering & Biology (at WPE 2008)
Book-2011 Kopetz Real-time systems Design principles for distributed embedded...AlfredoLaura2
This document provides an overview and preface for the textbook "Real-Time Systems" by Hermann Kopetz. The textbook is intended as a senior/graduate level textbook on real-time embedded systems and covers 14 topics that could map to a 14-week semester course. It assumes a basic background in computer science or engineering. The second edition has been substantially revised with new chapters on simplicity, energy/power awareness, and the Internet of things. It focuses on the design of distributed real-time systems at the architecture level while considering the progression of physical time.
Policies aimed at bringing universities closer together have always been (and still are) sensitive political issues.
Ascertaining the position and weight of UTC in a COMUE* alongside two major French Universities (Paris 4
(Sorbonne) and University of Paris 6 (Pierre & Marie Curie, or UPMC) has been no simple matter. Among the issues
is the place for technology in a world of traditional ‘pure’ science. Another is the pedagogical contribution of the
arts and humanities that have been an integral factor for UTC, in both teaching and research since the beginning.
This document discusses the development of new principles for modeling control systems based on bionic models inspired by human intelligence. It argues that traditional artificial intelligence and cognitive science approaches have not achieved human-level intelligence. The document proposes developing a formal model of the psyche using layered abstraction principles inspired by models used in computer engineering. This new bionic model would aim to exceed feasibility limits of current machine intelligence approaches by more closely modeling principles of the human mental apparatus.
Fundamental Questions - The Second Decade of AI: Towards Architectures for Hu...Aaron Sloman
The document summarizes a presentation given at the KI2006 Symposium on the history of artificial intelligence. It discusses:
1) The presenter's early education in AI in the late 1960s and 1970s, being impressed by works by Marvin Minsky and attending lectures by Max Clowes.
2) Interesting early AI work in the 1970s by researchers like Patrick Winston, Terry Winograd, and Gerald Sussman.
3) The presenter's realization in the early 1970s that the best way to do philosophy was through designing and implementing fragments of working minds in AI to test philosophical theories.
4) Some of the major AI centers that existed in the early
This document summarizes David De Roure's work over several decades exploring the integration of physical and digital worlds through social machines. It notes his early work in the 1990s on distributed systems and emergent order. In the 2000s, his focus shifted to data and computational grids. More recently, his research through the SOCIAM project examines social networks and how computation can promote new forms of social processes for a variety of user groups. The document outlines both technological developments and theoretical perspectives on social machines over the past 30 years.
This document discusses social machines, which are processes on the web where people do creative work and machines do administration. Examples mentioned include online forums and image classification by citizen scientists. Key points discussed include building models of social interaction between the physical and virtual world, and trajectories of social machines distinguished by their purpose. Open questions are also raised about the social machines of global systems science and implications of the social machines "sphere".
2005: Natural Computing - Concepts and ApplicationsLeandro de Castro
The document discusses natural computing, which encompasses computing inspired by nature, simulating natural phenomena using computers, and using natural materials for computing. It surveys ideas from neurocomputing, evolutionary computing, swarm intelligence, immunocomputing, and artificial life. These fields take inspiration from neural networks, evolution, collective animal behavior, the immune system, and the synthesis of life-like behaviors to develop new algorithms and applications. The goal is to develop more robust, adaptive, and fault-tolerant computing approaches.
International journal of engineering issues vol 2015 - no 1 - paper3sophiabelthome
This document summarizes a paper on modeling evolving complex software systems as cyber-physical systems using principles from physics and mathematics. It discusses how software systems can be viewed as complex automatons with mathematical foundations in areas like complex numbers and Fourier transforms. Cybernetics provides tools to model human behaviors and interactions in these systems. The paper also discusses how analog computers were early models of physical phenomena, and how infinitesimals and differentials from calculus can model continuously changing aspects of cyber-physical systems, within the limits imposed by physical reality.
Scientific information is often hidden or not published properly. The ContentMine is a Social Machine consisting of semantic software and communities of domain expertise; it aims to liberate all scientific facts from the published literature on a daily basis.
The talk , delivered to the Computational Institute, will be /was followed by a hands-on workshop learning how to use the technology and work as a community.
A paradigm is a way of thinking about the world. The document discusses several paradigm shifts in human-computer interaction, including: from batch processing to time-sharing and interactive computing; from command-line interfaces to graphical displays and direct manipulation; and from personal computing to ubiquitous computing. These paradigm shifts represent new ways of conceptualizing the relationship between humans and computers that emerged with technological advances, enabling new forms of usability and interaction.
The document summarizes the discussions and outcomes of a Dagstuhl Perspectives Workshop on applying tensor computing methods to problems in the Internet of Things (IoT). At the workshop, researchers from both industry and academia presented on challenges involving analyzing large, multi-dimensional streaming data from IoT devices and cyber-physical systems. Tensors provide a natural way to represent such data and can enable more efficient information extraction than alternative methods. However, further work is needed to develop benchmark challenges, datasets, and frameworks to make tensor methods more accessible and applicable to industrial IoT problems. The group discussed forming a knowledge hub and collaborating on data challenges to help establish tensor computing as a solution for machine learning on cyber-physical systems.
Running head: QUANTUM COMPUTING
QUANTUM COMPUTING 9
Research Paper: Quantum Computing
(Student’s Name)
(Professor’s Name)
(Course Title)
(Date of Submission)
Abstract
Quantum computers are a new era of invention, and its innovation is still to come. The revolution of the quantum computers produced a lot of challenges for ethical decision-making and predictions at different levels of life; therefore, it raised new concerns such as invasion of privacy and national security. In fact, it can be used easily to access and steal private information and data, while on the other hand, quantum computers can help to eliminate these unethical intrusions and secure the information.
Quantum computers will be the most powerful computer in the world that would open the door to encrypt the information in much less time. On the contrary, the supercomputers sometimes take so many hours to encrypt, whereas quantum computers can be used for the same purpose in a shorter time period making it harder to decrypt the data and information.
Many years from now, quantum computers will become mainstays throughout the world of computing. It will serve the individual and the community, but there is a significant concern that quantum computers could be used to invade people’s privacy (Hirvensalo, 2012).
Literature Review
The study area that is aimed on the implementation of quantum theory principles to develop computer technology is called Quantum computing. The field of quantum mechanics arose from German physicist Max Planck’s attempts to describe the spectrum emitted by hot bodies and specifically he wondered the reason behind the shift in color from red to yellow to blue as the temperature of a flame increased.
https://www.stratfor.com/analysis/approaching-quantum-leap-computing
There has been tremendous development in quantum computing since then and more research is been done to realize its full potential. Generally, quantum computing depends on quantum laws of physics. Rather than store information as 0s or 1s as conventional computers do, a quantum computer uses qubits which can be a 1 or a 0 or both at the same time. The quantum superposition along with the quantum effects of entanglement and quantum tunneling enable computers to consider and manipulate all combinations of bits simultaneously. This effect will make quantum computation powerful and fast (Williams, 2014).
http://www.dwavesys.com/quantum-computing
Researchers in quantum computing have enjoyed a greater level of success. The first small 2-qubit quantum computer was developed in 1997 and in 2001 a 5-qubit quantum computer was used to successfully factor the number 15 [85].Since then, experimental progress on a number of different technologies has been steady but slow, although the practical problems facing physical realizations of quantum computers can be addressed. It is believed that a quant.
Virtual Machines in Philosophy, Engineering & Biology (at WPE 2008)
1. Seminar, University of Birmingham 16 Oct 2008
The Great Debate, Newcastle 21 Oct 2008
Mind as Machine Weekend Course, Oxford 1-2 Nov 2008
Workshop on Philosophy and Engineering, RAE, London 10-12 Nov 2008
(This is work in progress. Comments and criticisms welcome.)
Virtual Machines in Philosophy,
Engineering & Biology
Why virtual machines really matter –
for several disciplines
Aaron Sloman
http://www.cs.bham.ac.uk/~axs
School of Computer Science
The University of Birmingham
The latest version of these slides will be available online at
http://www.cs.bham.ac.uk/research/cogaff/talks/#wpe08
These slides on virtual machines and implementation are closely related:
http://www.cs.bham.ac.uk/research/cogaff/talks/#virt
Last revised January 9, 2009
Information virtual machines Slide 1 Revised: January 9, 2009
2. Presentations based on versions of these slides
Long presentations
Based on: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#virt
1. Thur 16 Oct 2008: School of Computer Science Seminar, Birmingham
Why virtual machines really matter – for several disciplines
http://www.cs.bham.ac.uk/events/seminars/seminar_details.html?seminar_id=560
2. Tues 21 Oct 2008: The Great Debate, Newcastle
What can biologists, roboticists and philosophers learn from one another? (Unnoticed connections)
http://thegreatdebate.org.uk/UnnoticedConnections.html
3. Sat-Sun 1-2 Nov 2008: Weekend course Mind as Machine, Oxford
Why philosophers need to be robot designers
http://www.conted.ox.ac.uk/courses/details.php?id=O08P107PHR
http://oxfordphilsoc.org/
Short presentation (this one!)
10-12 November 2008: Workshop on Philosophy and Engineering
Royal Academy of Engineering, London
Extended abstract: Virtual Machines in Philosophy, Engineering & Biology
http://www.cs.bham.ac.uk/research/projects/cogaff/08.html#803
Slides here http://www.cs.bham.ac.uk/research/projects/cogaff/talks#wpe08
The previous talks used the slides in this (PDF) presentation
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#virt
This presentation uses only a small subset of these slides.
(The presentation at the meeting used an even smaller subset.)
3. ACKNOWLEDGEMENTS
Thanks especially to
Matthias Scheutz, Ron Chrisley and Jackie Chappell
And users of our SimAgent toolkit, from whom I have learnt much.
Thanks also to questioners at previous presentations of these ideas
Many thanks to Linux/Unix developers:
I constantly use excellent virtual machines
that they have designed.
I am interacting with one now
and also several others running on it
Apologies for clutter: read only what I point at.
The slides are meant to be readable without me being available to present them.
4. Main Thesis
Philosophers, neuroscientists, psychologists, social scientists,
among others, have identified puzzles regarding the existence and
causal efficacy of non-physical states and processes
e.g. mental states and processes, socio-economic processes
But they have no good solutions.
• Philosophers have offered metaphysical, epistemological and conceptual theories
about the status of such entities,
for example in talking about dualism, “supervenience”, mind-brain identity, epiphenomenalism, or
simply denying that mental or other non-physical events can be causes.
• To the best of my knowledge the vast majority of such philosophers and scientists
either know nothing about non-physically describable machines (NPDMs) running in
computers, or ignore what they know.
(There are exceptions, e.g. John Pollock, Ron Chrisley, and Peter Simons.)
• Some philosophers, e.g. Dan Dennett, mention virtual machines, but misdescribe
them, e.g. as useful fictions, or wrongly compare them with abstractions such as
centre of mass of a complex object.
5. Three claims
I claim
(a) most philosophers fail to notice major new insights readily available
from the technology they use every day to write their papers, read or
send email, browse the internet, etc.
(b) software engineers and computer scientists know enough about the
technology to design, implement, analyse, debug, extend, and use
such systems, but they lack the philosophical expertise to articulate
what they know, and they have not fully appreciated the magnitude, the
importance and the complexity of this 20th century development.
(c) There are also deep implications for biology, psychology, neuroscience,
among others.
The key idea is “running machine”, of which there are physical examples
and non-physical examples.
6. What is a machine?
A machine is a complex enduring entity with parts
(possibly a changing set of parts)
that interact causally with one another as they change their properties
and relationships.
Most machines are also embedded in a complex environment with which
they interact.
The internal and external interactions may be discrete or continuous, sequential or
concurrent.
Different parts of the machine, e.g. different sensors and effectors, may interact with different parts of the
environment concurrently.
The machine may treat parts of itself as parts of the environment (during self-monitoring), and parts of
the environment as parts of itself (e.g. tools, external memory aids). (See Sloman 1978, chapter 6)
The machine may be fully describable using concepts of the physical
sciences (plus mathematics), in which case it is a physical machine (PM).
Examples include levers, assemblages of gears, clocks, clouds, tornadoes, plate
tectonic systems, and myriad molecular machines in living organisms.
7. Not all machines are physical machines
Some machines have states, processes and interactions
whose descriptions use concepts that cannot be defined in terms of those
of the physical sciences
Examples of such concepts:
“checking spelling”, “playing chess”, “winning”, “threat”, “defence”, “strategy”, “desire”.
“belief”, “plan”, “poverty”, “crime”, “economic recession”, ...
For now I’ll take that indefinability as obvious: it would take too long to explain.
See
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
Those include information-processing machines.
Including socio-economic machines, ecosystems and many biological control systems.
8. Non-Physically-Describable Machines
Non-Physically-Describable Machines (NPDMs) are the subject matter of
common sense, much gossip, novels, plays, legends, history, the social
sciences and economics, psychology, and various aspects of biology.
They are frequently referred to in everyday life, including the press, political debates, etc.
They are also commonplace in computing systems, a fact that is widely ignored by
philosophers trying to understand relations between physical and non-physical states and
processes.
Too many philosophers are taught that if a problem is philosophical it can be addressed
without knowing anything about discoveries or creations in science and technology, since
philosophy (like mathematics) is defined by them to be a non-empirical discipline.
They either forget, or have never heard about, the enormously rich fertilization of mathematics by the
empirical sciences and by problems in technology and engineering, which is, presumably, why, as they
type their papers, or read and write email messages, or interrogate search engines, they do not ask:
“How can all this text-processing, formatting, spelling correction, emailing, web-browsing work,
and how does it relate to what is going on in
the physical machine on my desk or lap or palm, or whatever?”
Even Immanuel Kant recognized that some “synthetic apriori knowledge” (significant non-empirical
knowledge) had to be “awakened” by sensory experience.
9. Computing NPDMs are called “virtual machines”
Among computer scientists and engineers, NPDMs are normally referred
to as “Virtual Machines” (VMs), a fact that can cause much confusion,
since the word “virtual” often suggests something unreal, as in “virtual
reality” systems.
An NPDM (or VM), like a PM, can have parts that interact with one another and with the
environment.
This is commonplace in computing systems –
e.g. spread-sheets, word-processors, internet-browsers.
GIVE A DEMO.
Terminology:
Because the phrase “virtual machine” is so widespread, at least in computing circles,
where a lot is known about them, I shall continue to talk about virtual machines (VMs),
avoiding the mouthful: NPDM.
NB 1: “Virtual” in this context does not imply: “non-existent”, “unreal”, etc.
NB 2: Every VM must be implemented in a PM without which it cannot exist.
This is “causal dualism”, not “substance dualism”: virtual machines do not contain “stuff” that
continues to exist when the physical machine is completely destroyed.
10. Virtual machines are everywhere
At all levels there are objects,
properties, relations, structures,
mechanisms, states, events,
processes and also many
CAUSAL INTERACTIONS.
E.g. poverty can cause crime.
• All levels are ultimately realised
(implemented) in physical systems.
• Different disciplines use different
approaches (not always good ones).
• Nobody knows how many levels of
virtual machines physicists will
eventually discover.
(Uncover?)
• The study of virtual machines in
computers is just a special case of
more general attempts to describe and
explain virtual machines in our world.
See the IJCAI’01 Philosophy of AI tutorial (written with Matthias Scheutz) for more on levels and causation:
http://www.cs.bham.ac.uk/~axs/ijcai01/
11. Physics also deals with different levels of reality
• The “observable” level with which common sense, engineering, and
much of physics has been concerned for thousands of years:
– levers, balls, pulleys, gears, fluids, and many mechanical and hydraulic devices using
forces produced by visible objects.
• Unobservable extensions
– sub-atomic particles and invisible forces and force fields,
e.g. gravity, electrical and magnetic forces.
• Quantum mechanical extensions
– many things which appear to be inconsistent with the previous ontology of physics
Between the first two levels we find the ontology of chemistry, which includes many
varieties of chemical compounds, chemical events, processes, transformations, causal
interactions.
The chemical entities, states, processes, causal interactions are normally assumed to
be “fully implemented” (fully grounded) in physics.
We don’t know how many more levels future physicists will discover.
IS THERE A ‘BOTTOM’ LEVEL?
12. In CS there are two notions of virtual machine
We contrast the notion of a PHYSICAL machine with two other notions:
• a VM which is an abstract mathematical object (e.g. the Prolog VM, the Java VM)
• a VM that is a running instance of such a mathematical object, controlling events
in a physical machine, e.g. a running Prolog or Java VM.
VMs as mathematical objects are much studied in meta-mathematics and theoretical
computer science. They are no more causally efficacious than numbers.
The main theorems of computer science, e.g. about computability, complexity, etc. are
primarily about mathematical entities
They are applicable to non-mathematical entities with the same structure – but no non-mathematical
entity can be proved mathematically to have any particular mathematical properties.
There’s more on varieties of virtual machines in later slides.
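The contrast between the two notions can be sketched in code. This is my own illustration, not from the slides: `SPEC` and `RunningVM` are invented names, and the transition table is made up. The specification is a causally inert mathematical object; only a running instance has state that changes and controls events in the physical machine executing it.

```python
# Sketch of the two notions of VM. The *specification* is a mathematical
# object -- a mere set of transition rules, causally inert like a number.
# A *running instance* holds state and changes it over time.

SPEC = {                      # abstract VM: pure mathematics (made-up rules)
    ("idle", "coin"): "ready",
    ("ready", "push"): "idle",
}

class RunningVM:
    """A running instance of SPEC: it has a current state that changes,
    unlike the specification itself, which never changes."""
    def __init__(self, spec, start):
        self.spec, self.state = spec, start

    def step(self, event):
        self.state = self.spec[(self.state, event)]
        return self.state

vm = RunningVM(SPEC, "idle")
vm.step("coin")   # the instance changes state; SPEC itself never does
```

The same distinction holds for the Prolog or Java VM: theorems are proved about the abstract object, while only running instances do anything.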
13. More on virtual machines (VMs/NPDMs)
Recapitulation:
A virtual machine (VM) is a NPDM, a machine containing causally
interacting components with changing properties and relationships, whose
best description uses concepts that cannot be defined in terms of
concepts of the physical sciences.
This implies (non-obviously) that there are states and processes in VMs that cannot be
measured or detected using the techniques of the physical sciences (e.g. physics,
chemistry), though in order to exist and work, a VM needs to be implemented in a
physical machine.
An example is a collection of running computer programs doing things like checking
spelling, playing chess, sorting email, computing statistics, etc.
“Incorrect spelling” cannot be defined in terms of concepts of physics, and instances of correct
and incorrect spelling cannot be distinguished by physical measuring devices.
However, a second virtual machine that is closely coupled with the first, might be able to detect that the
first is doing those non-physical things.
A socio-economic system is a more abstract and complex form of virtual machine:
“economic inflation” and “recession” cannot be defined in terms of concepts of physics
Mental states and processes in humans and other animals can be regarded as states
and processes in virtual machines, implemented in brains.
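The spelling example can be made concrete with a small sketch (mine, not the author's; the miniature `DICTIONARY` and the `misspelled` helper are invented). The point is that the predicate is defined relative to a dictionary, a VM-level notion, not relative to any physical property of the bit patterns implementing the text.

```python
# Illustrative sketch: "incorrectly spelled" is defined relative to a
# dictionary -- a virtual-machine-level notion -- not to any physical
# property of the implementing hardware states.
DICTIONARY = {"machine", "virtual", "cause"}   # made-up miniature lexicon

def misspelled(text):
    """Return words not in the dictionary: a VM-level predicate with no
    physical-science definition, though every run of it is physically
    implemented."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

misspelled("virtual machnie cause")   # -> ["machnie"]
```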
14. Erroneous philosophy on VMs
Much has been written about virtual machines by philosophers and others,
but much of it is mistaken,
e.g.
Most ignore the variety of types of VM and the complexity of the relations between VMs
and PMs
(e.g. 2-way causation)
15. Oversimplified notions of VM used by many philosophers
Some philosophers who know about Finite State Machines (FSMs), use a simple kind of
“functionalism” (atomic state functionalism) as the basis for the notion of virtual machine,
defined in terms of a set of possible states and transitions between them.
E.g. Ned Block, “What is functionalism?”, 1996. (I think he has changed his views since then.)
On this model, a virtual machine that runs on a physical machine has a finite set of possible states
(a, b, c, etc.) and it can switch between them depending on what inputs it gets. At each switch it may also
produce some output. (The idea of a Turing machine combines this notion with the notion of an infinite
tape.)
Finite, Discrete-State Virtual Machine:
Each possible state (e.g. a, b, c, ....) is defined
by how inputs to that state determine next state
and the outputs produced when that happens.
The machine can be defined by a set of rules
specifying the state-transitions.
(Diagram: the finite-state virtual machine is linked by an implementation relation to the physical computer.)
As demonstrated by Alan Turing and others, this is a surprisingly powerful model of
computation: but it is not general enough for our purposes.
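For concreteness, here is a minimal sketch of such a finite, discrete-state VM (not part of the original slides; the `RULES` table and `run` function are invented illustrations). Each state is individuated purely by how inputs map it to a next state and an output, exactly as in atomic state functionalism.

```python
# A finite, discrete-state VM defined by transition rules alone:
# (state, input) -> (next_state, output). The rules are made up.
RULES = {
    ("a", 0): ("a", "x"),
    ("a", 1): ("b", "y"),
    ("b", 0): ("a", "y"),
    ("b", 1): ("b", "x"),
}

def run(rules, state, inputs):
    """Drive the machine through a sequence of inputs, collecting outputs."""
    outputs = []
    for i in inputs:
        state, out = rules[(state, i)]
        outputs.append(out)
    return state, outputs

run(RULES, "a", [1, 1, 0])   # -> ("a", ["y", "x", "y"])
```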
16. A richer model: Multiple interacting FSMs
This is a more realistic picture of
what goes on in current
computers:
There are multiple input and
output channels, and multiple
interacting finite state machines,
only some of which interact
directly with the environment.
You will not see the virtual machine
components if you open up the
computer, only the hardware
components.
The existence and properties of the
FSMs (e.g. playing chess) cannot be
detected by physical measuring devices.
But even that specification is over-simplified,
as we’ll see.
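The multiple-FSM picture can be sketched as follows (a toy illustration of my own; `Component` and `tick` are invented names). Several components persist concurrently, but only the first is connected to the environment; the rest interact only with their neighbours, internally.

```python
# Toy sketch: several FSM-like components coexist; only the first reads
# the environment, the others see only messages from their neighbour.
# Opening up the computer would reveal none of these components.
class Component:
    def __init__(self, name):
        self.name, self.state, self.inbox = name, 0, []

    def step(self):
        msgs, self.inbox = self.inbox, []
        self.state += len(msgs)        # made-up transition rule
        return msgs                    # forwarded to the next component

def tick(components, env_input):
    components[0].inbox.append(env_input)   # only comp 0 touches the world
    for c, nxt in zip(components, components[1:]):
        for m in c.step():
            nxt.inbox.append(m)             # purely internal interaction
    components[-1].step()

chain = [Component(n) for n in "ABC"]
tick(chain, "sensor-reading")               # propagates through the chain
```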
17. A possible objection: only one CPU?
Some will object that when we think multiple processes run in parallel on a
single-CPU computer, interacting with one another while they run, we are
mistaken because only one process can run on the CPU at a time, so
there is always only one process running.
This ignores the important role of memory mechanisms in computers.
• Different software processes have different regions of memory allocated to them,
which endure in parallel. So the processes implemented in them endure in parallel,
and a passive process can affect an active one that reads some of its memory.
Moreover
• It is possible to implement an operating system on a multi-cpu machine, so that instead of its processes
sharing only one CPU they share two or more.
• In the limiting case there could be as many CPUs as processes that are running.
• The differences between these different implementations imply that
how many CPUs share the burden of running the processes is a contingent feature of the
implementation of the collection of processes and does not alter the fact that there can be multiple
processes running in a single-cpu machine.
A technical point: software interrupt handlers connected to constantly-on physical devices, e.g. keyboard
and mouse interfaces, video cameras, etc., can depend on some processes constantly “watching” the
environment even when they don’t have control of the CPU.
In virtual memory systems, and systems using “garbage collection” things are more complex than
suggested here: the mappings between VM memory and PM memory keep changing.
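The point about enduring memory regions can be sketched as follows (a toy illustration, not the author's; the process names and the `memory` layout are made up). Each "process" keeps its own region, which persists while other processes occupy the single CPU, and an active process can read a passive process's region.

```python
# Each "process" has its own memory region, enduring in parallel even
# though only one process runs on the (single) CPU at a time.
memory = {"p1": {"counter": 0}, "p2": {"status": "waiting"}}

def p1():
    memory["p1"]["counter"] += 1
    # an active process reading a dormant process's memory region:
    return memory["p2"]["status"]

def p2():
    memory["p2"]["status"] = "done"

for proc in [p1, p2, p1]:   # round-robin: one process on the CPU at a time
    proc()
# both regions persisted throughout; on its second run p1 sees p2's update
```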
18. An even more general model
Instead of having a fixed set of
sub-processes, many computing
systems allow new VMs to be
constructed dynamically,
• of varying complexity
• some of them running for a while then
stopping,
• others going on indefinitely.
• some spawning new sub-processes...
The red polygons and stars might be
subsystems where new, short term or long
term, sub-processes (e.g. a new planning or
parsing process) can be constructed within a
supporting framework of virtual machines.
As indicated in the box with smooth curves, if
analog devices are used, there can be
VM processes that change continuously,
instead of only discrete virtual machines.
Some VMs simulate continuous change.
19. More general?
Ron Chrisley has challenged my suggestion that the simplest finite state
machine is less general than the other forms described here, since a
Universal Turing Machine (UTM) is of the first type and all the others can
be implemented in a UTM.
My minimal claim is that even if that is true, there are important differences between
different designs, and I am concerned with the features of some of the designs.
I could argue that insofar as a system includes sub-VMs that are not synchronised and
can vary in speed either randomly or under the influence of the environment and include
continuous variation, the system cannot be modelled on a Turing machine.
But that is not the main point: the main point is that there are different sorts of VM whose
differences are important in the current context.
20. Could such virtual machines run on brains?
We know that it can be very hard to
control directly all the low level physical
processes going on in a complex
machine: so it can often be useful to
introduce a virtual machine that is much
simpler and easier to control.
Perhaps evolution discovered the
importance of using virtual machines to
control very complex systems before we
did?
In that case, virtual machines running on
brains could provide a high level control
interface.
Questions:
How would the genome specify
construction of virtual machines?
Could there be things in DNA, or in
epigenetic control systems, that we
have not yet dreamed of?
21. VMs can have temporarily or partly
‘decoupled’ components
• “Decoupled” subsystems may exist and process information, even though they have
no connection with sensors or motors.
• For instance, a machine playing games of chess with itself, or investigating
mathematical theorems, e.g. in number theory.
• Some complex systems “express” some of what is going on in their VM states and
processes through externally visible behaviours.
However, it is also possible for internal VM processes to have a richness that cannot
be expressed externally using the available bandwidth for effectors.
• Likewise sensor data may merely introduce minor perturbations in what is a rich and
complex ongoing internal process.
This transforms the requirements for rational discussion of some old philosophical
problems about the relationship between mind and body:
E.g. some mental processes need have no behavioural manifestations, though they
might, in principle, be detected using ‘decompiling’ techniques with non-invasive internal
physical monitoring.
(This may be impossible in practice.)
22. Problem: Supervenience and Causation
Mental states and processes are said to supervene on physical ones.
But there are many problems about that relationship:
Can mental processes cause physical processes? (Sometimes called “downward
causation”.)
How could something happening in a mind produce a change in a physical brain?
(Think of time going from left to right)
If previous physical states and processes suffice to explain physical states and
processes that exist at any time, how can mental ones have any effect?
How could your decision to come here make you come here – don’t physical
causes (in your brain and in your environment) suffice to make you come?
If they suffice, how could anything else play a role?
23. Explaining what’s going on in VMs requires a new
analysis of the notion of causation
The relationship between objects, states, events and processes in virtual
machines and in underlying implementation machines is a tangled network
of causal interactions.
Software engineers have an intuitive understanding of it, but are not good at philosophical
analysis.
Philosophers mostly ignore the variety of complex mappings between VMs and PMs
when discussing causation and when discussing supervenience,
even though most of them now use multi-process VMs daily for their work.
Explaining how virtual machines and physical machines are related requires a deep
analysis of causation that shows how the same thing can be caused in two very different
ways, by causes operating at different levels of abstraction.
Explaining what ‘cause’ means is one of the hardest problems in philosophy.
For a summary explanation of two kinds of causation (Humean and Kantian) and the relevance of both
kinds to understanding cognition in humans and other animals see:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#wonac
24. We often assume multiple varieties of causation
A person drives home one night after drinking with friends in a pub.
As he goes round a bend in the road he skids sideways into an oncoming
car and the driver in the other car dies.
In court, the following facts emerge.
• The driver had exceeded the recommended alcohol limit for driving, but had often had
the extra glass and then driven home on that route without anything going wrong.
• There had earlier been some rain followed by a sharp drop in temperature, as a result
of which the road was unusually icy.
• The car was due for its MOT test, and he had been given two dates for the test, one
before the accident and one after. He chose the later date. Had he taken the earlier
date, worn tyres would have been detected and replaced with tyres that could have
gripped ice better.
• There had been complaints that the camber on the road was not steep enough for a
curve so sharp, though in normal weather it was acceptable.
• The driver was going slightly faster than normal because he had been called home to
help a neighbour who had had a bad fall.
• A few minutes after the accident the temperature rose in a warm breeze and the ice on
the road melted.
What caused the death of the other driver?
25. How VM events and processes can be causes
We need an explanation of how VM causes and PM causes can co-exist
and both be causes of other VM and PM events and processes.
The crucial point is that the existence of causal links is equivalent to the existence of
whatever makes certain sets of conditional statements (including counterfactual
conditionals) true or false.
A misleading picture:
It oversimplifies.
Our previous diagrams implicitly supported a prejudice
by showing a single upward pointing arrow from each
physical state to the mental, or virtual machine state,
above it.
This implied a simple one-way dependency relationship,
where complex two-way relationships actually exist.
26. Requirements for complex VMs to work
A complete explanation of how VMs interact with PMs in computers would
require tutorials on:
• Physical components used in computers
• Digital electronic circuits and their mechanisms
• Various kinds of interfaces/transducers linking computers to other devices (hard
drives, displays, keyboards, mice, networks)
• Operating systems
• Device drivers
• File systems
• Memory management systems
• Compilers
• Interpreters
• Interrupt handlers
• Caches
• Programmable firmware stores
and other things I’ve forgotten to mention.
Note: my own understanding of many of those is incomplete.
27. A web of causal relationships and conditionals
The design of the various hardware and software items listed in the
previous slide has a key feature:
the main effect of all the components is to ensure the simultaneous truth of a complex
network of conditional statements relating what would happen if so and so occurred in
physical or virtual machines.
The support for that network of truths, including counterfactual conditional truths, e.g.
about what would have happened if ...., is equivalent to support for a complex web of
causal connections.
As a result of all this, multiple virtual machines of different kinds each with many
coexisting, causally interacting components, can coexist and interact with
one another and with physical components, in a single physical computer,
even a computer with only one CPU (plus a large number of memory locations).
The picture on the next slide crudely represents that web of interactions, involving both virtual and
physical sub-machines.
28. Supervenience of VMs: a complex relation
The two machines (PM and VM) need not
be isomorphic: they can have very different
structures.
There need not be any part of the PM that
is isomorphic with the VM.
Not only static parts and relations but also
processes and causal relations can
supervene on physical phenomena.
The structure of the VM can change
significantly (parts added and removed, and
links between parts being added and
removed) without structural changes
occurring at the physical level –
though the physical states of millions of switches may change as the (much simpler, more
abstract) VM changes and causal interactions occur.
The mappings between PM components and VM components may be complex,
subtle in kind, and constantly changing.
A very large “sparse array” in the VM may contain many more locations than there are switches in the PM
(as long as not all locations are actually occupied).
Distinct objects in the VM can have implementations that share parts of the PM.
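The sparse-array point can be illustrated with a common implementation technique (a sketch under my own assumptions; `SparseArray` is an invented name): only occupied VM locations consume any physical storage, so the VM array can have vastly more locations than the PM has switches.

```python
# A VM array with far more locations than the PM has switches: only
# occupied cells take physical storage (a standard dict-backed technique).
class SparseArray:
    def __init__(self, size, default=0):
        self.size, self.default, self.cells = size, default, {}

    def __getitem__(self, i):
        return self.cells.get(i, self.default)   # unoccupied -> default

    def __setitem__(self, i, v):
        self.cells[i] = v                        # occupy one cell

a = SparseArray(10**18)      # 10^18 VM locations, almost none stored
a[123456789] = 42
# a[123456789] -> 42 ; a[7] -> 0 ; only one cell is physically occupied
```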
29. Notions of Supervenience
We can distinguish at least the following varieties
• property supervenience
(e.g. having a certain temperature supervenes on having molecules with a certain kinetic energy.)
• pattern supervenience
(e.g., supervenience of various horizontal, vertical and diagonal rows of dots on a rectangular array
of dots, or the supervenience of a rotating square on the pixel matrix of a computer screen.)
• mereological, or agglomeration, supervenience
(e.g., possession of some feature by a whole as the result of a summation of features of parts, e.g.
the supervenience of the mass of a stone on the masses of its atoms, or the supervenience of the
centre of mass on the masses and locations of its parts, each with its own mass)
• mechanism supervenience
(supervenience of one machine on another: a collection of interacting objects, states, events and
processes supervenes on some lower level, often more complex, reality, e.g., the supervenience of a
running operating system on the computer hardware – this type is required for intelligent control
systems, as probably discovered by evolution millions of years ago?)
We are talking about mechanism supervenience.
The other kinds are less closely related to implementation of VMs on PMs.
Mechanism supervenience, far from being concerned with how one property relates to others, is concerned
with how a complex ontology (collection of diverse types of entity, types of events, types of process, types of
state, with many properties, relationships and causal interactions) relates to another ontology.
This could be called “ontology supervenience”. Perhaps “ontology instance supervenience” would be better.
30. A more general notion of supervenience
Supervenience is often described as a relation between properties: e.g. a
person’s mental properties supervene on his physical properties (or
“respects”).
‘[...] supervenience might be taken to mean that there cannot be two events alike in all physical respects
but differing in some mental respects, or that an object cannot alter in some mental respect without
altering in some physical respect.’
D. Davidson (1970), ‘Mental Events’, reprinted in: Essays on Action and Events (OUP, 1980).
It’s better described as a relation between ontologies or complex,
interacting, parts of ontology-instances, not just properties.
The cases we discuss involve not just one object with some (complex) property, but a
collection of VM components enduring over time, changing their properties and
relations, and interacting with one another: e.g. data-structures in a VM, or several
interacting VMs, or thoughts, desires, intentions, emotions, or social and political
processes, all interacting causally – the whole system supervenes.
A single object with a property that supervenes on some other property is
just a very simple special case. We can generalise Davidson’s idea:
A functioning/working ontology supervenes on another if there cannot be
a change in the first without a change in the second.
NOTE: the idea of “supervenience” goes back to G.E.Moore’s work on ethics. A useful introduction to some of the philosophical
ideas is: Jaegwon Kim, Supervenience and Mind: Selected philosophical essays, 1993, CUP.
31. Multiple layers of virtual machinery
The discussion so far suggests that there are two layers
• Physical machinery
• Virtual machinery
However, just as some physical machines (e.g. modern computers) have a kind of
generality that enables them to support many different virtual machines
(e.g. the same computer may be able to run different operating systems
– Windows, or Linux, or MacOS, or ....)
so are there some virtual machines that have a kind of generality that enables them to
support many different “higher level” virtual machines running on them
(e.g. the same operating system VM may be able to run many different applications, that do very different
things, – window managers, word processors, mail systems, spelling correctors, spreadsheets,
compilers, games, internet browsers, CAD packages, virtual worlds, chat software, etc. ....)
It is possible for one multi-purpose VM to support another multi-purpose VM,
which supports additional VMs.
So VMs may be layered:
VM1 supports VM2 supports VM3 supports VM4, etc.
The layers can branch, and also be circular, e.g. if VM1 includes a component that
invokes a component in a higher-level VMk, which is implemented in VM1.
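Layering can be sketched with a toy interpreter tower (my illustration, not from the slides; `vm2` is an invented name): a tiny stack-machine VM implemented in Python, which is itself a VM implemented on the operating-system VM, implemented on hardware. Programs for the stack machine are a further layer supported by it.

```python
# VM1 (Python, itself layered on the OS VM and hardware) supports VM2,
# a tiny stack-machine VM, which in turn supports VM3-level "programs".
def vm2(program):
    """VM2: interpret a list of (op, arg) instructions on a stack."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
    return stack[-1]

# A program running on VM2 -- another VM layer, supported by all the
# layers below it, none of which it mentions:
vm2([("push", 2), ("push", 3), ("add", None)])   # -> 5
```

Remove the Python layer and the stack machine collapses, unless an equivalent supporting layer replaces it, which is the asymmetry the slide describes.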
32. Not to be confused with control hierarchies
Layered virtual machines are not the same as
• Hierarchical control systems
• Brooks’ “subsumption architecture”
Here the different layers implement different functions for the whole system, and can be
turned on and off independently (mostly).
In contrast, a higher level VM provides functionality that is implemented in lower levels:
the lower levels don’t provide different competences that could be added or removed,
e.g. damping.
Removing a lower level VM layer makes the whole thing collapse, unless it is replaced by an
equivalent lower level VM (e.g. a different operating system with similar functionality).
No time to explain the difference.
33. ‘Emergence’ need not be a bad word
People who have noticed the need for pluralist ontologies often talk about
‘emergent’ phenomena.
But the word has a bad reputation, associated with mysticism, vitalist
theories, sloppy thinking, wishful thinking, etc.
If we look closely at the kinds of ‘emergence’ found
in virtual machines in computers, where we know a
lot about how they work (because we designed
them and can debug them, etc), then we’ll be better
able to go on to try to understand the more complex
and obscure cases, e.g. mind/brain relations.
Virtual machine emergence adds to our ontology:
the new entities are not definable simply as patterns
or agglomerations in physical objects (they are not
like ocean waves).
My claim is that engineers discussing implementation of VMs in computers and
philosophers discussing supervenience of minds on brains are talking about the same
‘emergence’ relationship – involving VMs implemented (ultimately) in physical machines.
NB. It is not just a metaphor: both are examples of the same type.
34. Why virtual machines are important in engineering
They provide “vertical” separation of concerns
Contrast “horizontal” separation: different kinds of functionality that can be
added or removed independently, e.g email, web browsing, various
games, spreadsheets – or the parts of such subsystems.
• Both horizontal decomposition and vertical decomposition involve modularity that
allows different designers to work on different tasks.
• But vertical decomposition involves layers of necessary support.
• VMs reduce combinatorial complexity for system designers
• They can also reduce the complexity of the task of self-monitoring and self-control in
an intelligent system.
• Evolution seems to have got there first
• That includes development of meta-semantic competences for self-monitoring,
self-debugging, etc.
• It can also lead to both incomplete self knowledge and to errors in self analysis, etc.
See also The Well-Designed Young Mathematician, AI Journal, December 2008
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
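The horizontal/vertical contrast above can be sketched in a few lines (the module and layer names below are invented examples, not part of the slides): horizontal modules are peers that can be dropped independently, whereas each vertical layer presupposes everything below it.

```python
# Horizontal decomposition: peer modules, remove any one freely.
horizontal = {"email", "browser", "spreadsheet"}
horizontal.discard("spreadsheet")     # email and browser still work

# Vertical decomposition: layers of necessary support.
vertical = ["hardware", "operating system", "language runtime", "application"]

def supports(stack, level):
    """A layer is supported only if every lower layer is present."""
    return all(layer is not None for layer in stack[:level])

assert supports(vertical, 3)          # the application has full support
vertical[1] = None                    # knock out the operating system...
assert not supports(vertical, 3)      # ...and the application collapses
```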
35. Concurrent, interacting virtual machines sharing a substrate
In a multi-processing computer the complexity would be totally
unmanageable if software designers had to think about all the possible
sequences of machine instructions.
Instead we use a VM substrate for handling multiple processes, with
mechanisms for
• memory management
• context switching
• scheduling
• handling privileges and access rights, etc.
• filestore management
• various device drivers
• networking
• and in some cases use of multiple CPUs
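Two of the listed mechanisms, scheduling and context switching, can be illustrated with a toy round-robin scheduler (a sketch only, resembling no real operating system's API): generators stand in for processes, and each `yield` marks a context switch, so no process needs to reason about the interleaving of the others' instructions.

```python
from collections import deque

def process(name, steps):
    """A 'process' that performs some steps, yielding the CPU after each."""
    for i in range(steps):
        yield f"{name}:{i}"

def run(processes):
    ready = deque(processes)       # scheduling: a simple FIFO ready queue
    trace = []
    while ready:
        proc = ready.popleft()     # context switch: pick the next process
        try:
            trace.append(next(proc))
            ready.append(proc)     # still runnable, so requeue it
        except StopIteration:
            pass                   # process finished
    return trace

trace = run([process("a", 2), process("b", 2)])
assert trace == ["a:0", "b:0", "a:1", "b:1"]
```

The designer of `process` never mentions the ready queue; the designer of `run` never mentions what any process computes. That is the vertical separation of concerns the substrate buys.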
Some VMs implement a particular kind of functionality (e.g. a chess playing VM)
whereas others provide a platform of resources that can be combined in different
ways to support multiple kinds of functionality (e.g. operating systems, and
various kinds of software development toolkits).
How much of that did evolution discover?
36. Self-monitoring and virtual machines
Systems dealing with complex changing circumstances and needs may
need to monitor themselves, and use the results of such monitoring in
taking high level control decisions.
E.g. which high priority task to select for action.
Using a high level virtual machine as the control interface may make a very complex
system much more controllable: only relatively few high level factors are involved in
running the system, compared with monitoring and driving every little sub-process, e.g. at
the transistor level.
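A small sketch of that idea (all names invented): the controller's decision is taken entirely in terms of a handful of abstract readings summarised from a mass of low-level events, never in terms of the events themselves.

```python
def high_level_view(raw_events):
    """Many low-level events summarised into a few high-level factors."""
    return {
        "load": len(raw_events) / 1000.0,
        "errors": sum(1 for e in raw_events if e.get("level") == "error"),
    }

def choose_task(view, tasks):
    """Control decision made entirely at the level of the abstract view."""
    if view["errors"] > 0:
        return "diagnose"
    if view["load"] > 0.8:
        return "shed_load"
    return max(tasks, key=lambda t: t["priority"])["name"]

events = [{"level": "info"}] * 100 + [{"level": "error"}]
view = high_level_view(events)
assert choose_task(view, [{"name": "index", "priority": 2}]) == "diagnose"
```

`choose_task` works with two numbers, not with 101 events, and certainly not with transistor states; that is the reduction in control complexity the slide describes.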
The history of computer science and software engineering since around 1950 shows
how human engineers introduced more and more abstract and powerful virtual
machines to help them design, implement, test, debug, and run very complex systems.
When this happens the human designers of high level systems need to know less and
less about the details of what happens when their programs run.
Making sure that high level designs produce appropriate low level processes is a separate task, e.g. for
people writing compilers, device drivers, etc. Perhaps evolution produced a similar “division of labour”?
Similarly, biological virtual machines monitoring themselves would be aware of only a tiny
subset of what is really going on and would have over-simplified information.
THAT CAN LEAD TO DISASTERS, BUT MOSTLY DOES NOT.
37. Robot philosophers
The simplifications in self-monitoring VMs could lead robot-philosophers to
produce confused philosophical theories about the mind-body relationship
– e.g. theories about “qualia”.
Intelligent robots will start thinking about these issues.
As science fiction writers have already pointed out, they may become as
muddled as human philosophers.
So to protect our future robots from muddled thinking, we may have to
teach them philosophy!
BUT WE HAD BETTER DEVELOP GOOD PHILOSOPHICAL THEORIES FIRST!
The proposal that a virtual machine is used as part of the control system goes further than the suggestion
that a robot builds a high level model of itself, e.g. as proposed by Owen Holland in
http://cswww.essex.ac.uk/staff/owen/adventure.ppt
For more on robots becoming philosophers of different sorts see
Why Some Machines May Need Qualia and How They Can Have Them:
Including a Demanding New Turing Test for Robot Philosophers
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0705
Paper for AAAI Fall Symposium, Washington, 2007
38. Biological evolution probably “discovered” all this
(and more) first
Even though biological evolution does not need an intelligent designer to
be involved, there are strategies that could be useful for evolution for the
same reason as they are useful for designers.
That includes the use of virtual machines, for example.
• More precisely – it could turn out that a modification of a design for an organism,
giving it a kind of self-understanding its competitors lack, could make it more
successful.
• E.g. it may monitor its own reasoning, planning, and learning processes (at a certain
level of abstraction) and find ways to improve them.
• If those improved procedures can also be taught, the benefits need not be
rediscovered by chance.
39. Why the same considerations are relevant to biology
Conjecture: biological evolution “discovered” long ago that separating a
virtual machine level from the physical level made it possible to use the VM
as a platform on which variants could be explored and good ones chosen,
e.g. different behaviours, or different control mechanisms, different mechanisms for choosing goals or
planning actions, or different mechanisms for learning things.
• Long before that, the usefulness of “horizontal” modularity had already been
discovered, with different neural or other control subsystems coexisting and controlling
different body parts, or producing different behaviours, e.g. eating, walking, breathing,
circulating blood, repairing damaged tissue.
• But developing new parts with specific functions is different from developing new
behaviours for the whole organism.
• If each new behaviour has to be implemented in terms of low level states of muscles
and sensors, that could be very restrictive, making things hard to change.
• But if a VM layer is available on which different control regimes could be implemented,
the different regimes will have much simpler specifications.
• This allows one genome to support multiple possible development trajectories,
depending on environment (as in altricial species).
Conjecture: this allows common functionality to exist following different trajectories (in different
individuals with that genome) e.g. doing mathematics or physics in English or Chinese?
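The point about simpler specifications can be made concrete with a toy sketch (the actions, effector names, and "regimes" below are pure inventions for illustration): once a VM layer maps abstract actions to low-level effector states in one place, a new control regime is a few lines at the VM level rather than a new wiring of muscles and sensors.

```python
ACTION_TO_EFFECTORS = {   # the VM layer: one place encodes the messy detail
    "walk": ["flex_leg_1", "extend_leg_2"],
    "eat":  ["open_jaw", "close_jaw"],
    "flee": ["flex_leg_1", "extend_leg_2", "raise_heart_rate"],
}

def cautious_regime(percept):
    """One behaviour specification, a few lines at the VM level."""
    return "flee" if percept == "predator" else "walk"

def bold_regime(percept):
    """A variant regime: same VM layer, different policy."""
    return "eat" if percept == "food" else "walk"

def execute(action):
    """Compile an abstract action down to effector states."""
    return ACTION_TO_EFFECTORS[action]

assert execute(cautious_regime("predator")) == \
    ["flex_leg_1", "extend_leg_2", "raise_heart_rate"]
assert execute(bold_regime("food")) == ["open_jaw", "close_jaw"]
```

Exploring a variant means editing a regime, not the effector table: the kind of cheap variation on a shared platform the conjecture attributes to evolution.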
40. A first draft ontology for architectural components
THE COGAFF ARCHITECTURE SCHEMA
For now let’s pretend we understand the
labels in the diagram.
On that assumption the diagram defines a space of
possible information-processing architectures for
integrated agents, depending on what is in the
various boxes and how the components are
connected, and what their functions are.
So if we can agree on what the types of layers are,
and on what the divisions between perceptual, central
and motor systems are, we have a language for
specifying functional subdivisions of a large collection
of possible architectures, ....
even if all the divisions are partly blurred or the
categories overlap.
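The "space of possible architectures" idea can be sketched as a grid of boxes (the column names come from the slide; the layer names follow Sloman's published CogAff papers, and the rest of the code is invented illustration): an architecture is a choice of which boxes are occupied and how they are connected.

```python
COLUMNS = ["perceptual", "central", "motor"]
LAYERS = ["reactive", "deliberative", "meta-management"]

def architecture(filled_boxes, links):
    """One point in the space: which (column, layer) boxes are occupied,
    and which occupied boxes are connected."""
    boxes = {(c, l) for c, l in filled_boxes if c in COLUMNS and l in LAYERS}
    return {"boxes": boxes, "links": set(links)}

# A purely reactive, insect-like design occupies only the bottom layer:
insect = architecture(
    [("perceptual", "reactive"), ("central", "reactive"), ("motor", "reactive")],
    [(("perceptual", "reactive"), ("central", "reactive")),
     (("central", "reactive"), ("motor", "reactive"))],
)
assert len(insect["boxes"]) == 3
```

Varying the occupied boxes, their contents, and the links generates the large collection of possible architectures the schema is meant to describe.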
Note: Marvin Minsky’s book The Emotion Machine uses a finer-grained horizontal division
(six layers). That’s largely because he divides some of the CogAff categories into
sub-categories, e.g. different sorts of reactive mechanisms, different sorts of reflective
mechanisms.
41. What I am heading towards: H-Cogaff
The H-Cogaff (Human Cogaff)
architecture is a (conjectured) special
case of the CogAff schema, containing
many different sorts of concurrently
active mutually interacting components.
The papers and presentations on the
Cognition & Affect web site give more
information about the functional
subdivisions in the proposed (but still
very sketchy) H-Cogaff architecture,
and show how many different kinds of
familiar states (e.g. several varieties of
emotions) could arise in such an
architecture.
This is shown here merely as an
indication of the kind of complexity we
can expect to find in some virtual
machine architectures, both naturally
occurring (e.g. in humans and perhaps
some other animals) and artificial (e.g.
in intelligent robots).
The conjectured H-Cogaff (Human-Cogaff) architecture
See the web site: http://www.cs.bham.ac.uk/research/cogaff/
42. MORE TO BE SAID BUT NO MORE TIME
Summary:
The idea of a virtual machine (or NPDM) is deep, full of subtleties and of great
philosophical significance, challenging philosophical theories of mind, of causation, and
of what exists.
The use of virtual machines has been of profound importance in engineering in the last
half century, even though most of the people most closely involved have not noticed the
wider significance of what they were doing –
especially the benefits of vertical separation of concerns, and the complexity of what has to be done to
make all this work.
Biological evolution appears to have “discovered” both the problems and this type of
solution long before we did, even long before humans existed.
Despite the benefits the use of virtual machines can bring problems and some of those
problems may afflict future intelligent machines that are able to think about themselves.
See also http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#virt
There are lots more slides and more on the web
Give to Google: "Aaron Sloman" talks
My talks page has several related presentations.
43. Further reading: Background to these slides
For many years, like many other scientists, engineers and philosophers, I have been
writing and talking about “information-processing” systems, mechanisms, architectures,
models and explanations, e.g.:
My 1978 book The Computer Revolution in Philosophy, now online here:
http://www.cs.bham.ac.uk/research/cogaff/crp/ (especially chapters 6 and 10)
A. Sloman, (1993) ‘The mind as a control system,’ in Philosophy and the Cognitive Sciences, Cambridge
University Press, Eds. C. Hookway & D. Peterson, pp. 69–110.
Online here: http://www.cs.bham.ac.uk/research/cogaff/
Since the word “information” and the phrase “information-processing” are both widely
used in the sense in which I was using them, I presumed that I did not need to explain
what I meant. Alas I was naively mistaken:
• Not everyone agrees with many things now often taken as obvious, for instance that all organisms
process information.
• Some people think that “information-processing” refers to the manipulation of bit patterns in computers.
• Not everyone believes information can cause things to happen.
• Some people think that talk of “information-processing” involves unfounded assumptions about the use
of representations.
• There is much confusion about what “computation” means, what its relation to information is, and
whether organisms in general or brains in particular do it or need to do it.
• Some of the confusion is caused by conceptual unclarity about virtual machines, and blindness to their
ubiquity.
44. Further Reading
A very stimulating and thought-provoking book overlapping with a lot of this presentation is
George B. Dyson, Darwin Among the Machines: The Evolution of Global Intelligence, 1997,
Addison-Wesley.
Papers and presentations on the Cognition and Affect & CoSy web sites expand on these issues, e.g.
• A. Sloman & R.L. Chrisley, (2003),
Virtual machines and consciousness, in Journal of Consciousness Studies, 10, 4-5, pp. 113–172,
http://www.cs.bham.ac.uk/research/cogaff/03.html#200302
• A. Sloman, R.L. Chrisley & M. Scheutz,
The Architectural Basis of Affective States and Processes, in Who Needs Emotions?: The Brain Meets
the Robot, Eds. M. Arbib & J-M. Fellous, Oxford University Press, Oxford, New York, 2005.
http://www.cs.bham.ac.uk/research/cogaff/03.html#200305
• A. Sloman and R. L. Chrisley,
More things than are dreamt of in your biology: Information-processing in biologically-inspired robots,
Cognitive Systems Research, 6, 2, pp 145–174, 2005,
http://www.cs.bham.ac.uk/research/cogaff/04.html#cogsys
• A. Sloman
The well designed young mathematician, in Artificial Intelligence, December 2008.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
• “What’s information?”
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/whats-information.html
• Presentations http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
M. A. Boden, 2006, Mind As Machine: A history of Cognitive Science (2 Vols), Oxford University Press
There are many other books in philosophy of mind and cognitive science.
45. For more on all this
Some computer scientists and AI researchers have appreciated the importance of these
ideas, and are investigating ways of giving machines more self awareness, in order to
make them more intelligent.
John McCarthy, “Making robots conscious of their mental states”.
http://www-formal.stanford.edu/jmc/consciousness.html
John L. Pollock (a computationally informed philosopher)
“What Am I? Virtual machines and the mind/body problem”, Philosophy and Phenomenological
Research, 2008, 76, 2, pp. 237–309,
http://philsci-archive.pitt.edu/archive/00003341
Also work by Dave Clark at MIT on ‘the knowledge plane’ in intelligent self-monitoring networks.
Cognition and Affect Project and CoSy Project papers and talks:
http://www.cs.bham.ac.uk/research/cogaff/
http://www.cs.bham.ac.uk/research/cogaff/talks/
http://www.cs.bham.ac.uk/research/projects/cosy/papers/
The Tutorial presentation by Matthias Scheutz and myself on Philosophy of AI at IJCAI’01.
http://www.cs.bham.ac.uk/research/cogaff/talks/#talk5
Collaborative work with Jackie Chappell on cognitive epigenesis
Jackie Chappell and Aaron Sloman, ‘Natural and artificial meta-configured altricial information-processing
systems,’ in International Journal of Unconventional Computing, 3, 3, pp. 211–239,
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609