Introduction to key ideas of semantic models, implicit definitions and symbol tethering, using ideas from philosophy of science and model-theoretic semantics to explain why symbol grounding theory is misguided: there is no need for all symbols used by an intelligent agent to be 'grounded' in terms of experience, or sensory-motor patterns. Rather, most of the meaning of a symbol may come from its role in a powerful explanatory theory, though the theory should have some connection with experiments and observations in order to be applicable to the world. That is not the same as requiring every symbol to be linked to experiences, experiments or measurements.
Symbol grounding theory is a modern version of the philosophical theory of 'concept empiricism', which was refuted by the philosopher Immanuel Kant in the 18th century.
Reorganised several times since first uploaded: most recently 25 Jan 2016
-------------------------------------------------------------------------------------------------------
Slides include link to video of lecture (158MB) http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ailect2-2015
-------------------------------------------------------------------------------------------------------------
Two questions are shown to have deep connections: What are the functions of vision in animals? and How did human languages evolve? The answer given here is that the functions of vision need to be supported by richly structured internal languages (forms of representation used for acquiring, storing, manipulating, deriving and using information), from which it follows that internal languages must have evolved before languages for communication.
---------------------------------------------------------------------------------------------------------------
The account of the functions of vision mentions early AI vision, the impact of Marr and the even greater impact of Gibson, but argues that they did not recognize all the functions of vision, e.g. the uses of vision in making mathematical discoveries leading to Euclid's Elements.
---------------------------------------------------------------------------------------------------------------
Many questions are left unanswered by this research, which is part of the Meta-Morphogenesis project, introduced here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
---------------------------------------------------------------------------------------------------------------
A slideshare presentation on "origins of language" by Jasmine Wong adds some useful additional evidence, but presents a simpler theory:
http://www.slideshare.net/JasmineWong6/origins-of-language
---------------------------------------------------------------------------------------------------------------
Minor corrections + additions: 30-Mar-2015, 1-Apr-2015, 15-Apr-2015, 12-Nov-2015
A presentation from the Functional Tricity meetup where I present the philosophy underlying the Scheme programming language
http://www.meetup.com/FunctionalTricity/events/231861585/
Complexity and Context-Dependency (version for Bath IOP Seminar) Bruce Edmonds
Extended version of my Complexity and Context-Dependency talk given at the IOP seminar on "Complexity of Complexity" in Bath, 19th Dec 2011.
This version talks more about looking for context in data.
ARTSEDU 2012
Educational Robotics between narration and simulation
Alessandri Giuseppe, Paciaroni Martina
Faculty of Education Sciences, University of Macerata (MC), Italy
Humans and many other intelligent systems (have to) learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning.
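As a minimal, concrete illustration of this learn-a-model-then-predict loop (my example, not the speaker's), Laplace's rule of succession is one of the simplest Bayesian estimators: learn a rate from binary experience, then predict the next outcome.

```python
# Minimal learn-then-predict loop: Laplace's rule of succession.
# After s successes in n binary trials, predict the next trial
# succeeds with probability (s + 1) / (n + 2) (uniform prior).

def laplace_predict(history: list) -> float:
    """Estimate P(next observation = 1) from a list of 0/1 outcomes."""
    s = sum(history)               # successes observed so far
    n = len(history)               # trials observed so far
    return (s + 1) / (n + 2)

history = [1, 1, 0, 1, 1, 1]       # 'experience' acquired so far
print(f"P(next = 1) = {laplace_predict(history):.3f}")   # 0.750
```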
I will first review unsuccessful attempts and unsuitable approaches towards a general theory of induction, including Popper's falsificationism and denial of confirmation, frequentist statistics and much of statistical learning theory, subjective Bayesianism, Carnap's confirmation theory, the data paradigm, eliminative induction, and deductive approaches. I will also debunk some other misguided views, such as the no-free-lunch myth and pluralism.
I will then turn to Solomonoff's formal, general, complete, and essentially unique theory of universal induction and prediction, rooted in algorithmic information theory and based on the philosophical and technical ideas of Ockham, Epicurus, Bayes, Turing, and Kolmogorov.
This theory provably addresses most issues that have plagued other inductive approaches, and essentially constitutes a conceptual solution to the induction problem. Some theoretical guarantees, extensions to (re)active learning, practical approximations, applications, and experimental results are mentioned in passing, but they are not the focus of this talk.
I will conclude with some general advice to philosophers and scientists interested in the foundations of induction.
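For reference, and as my addition rather than part of the abstract, the central object of Solomonoff's theory can be stated compactly; here U is a universal prefix Turing machine and l(p) the length of program p.

```latex
% Universal prior: each binary string x gets the total weight of all
% programs p that make U output a string starting with x, each program
% weighted by its length:
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% Prediction by conditioning: the probability that x continues with bit b:
M(b \mid x) = \frac{M(xb)}{M(x)}
```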
This conference volume paper spells out the fundamental mechanism behind consciousness in lay terms and examines its implications for our cosmology, beliefs, and social institutions.
Captain Power and the Soldiers of the Future, 1987: "Lord Dread's apocalyptic Project New Order, and the slow revelation that Lord Dread himself isn't completely committed to his own plan..." Dread is the ouroboros Saturn-Serpent: luciferian intellect. Captain Power is the Christian resurrection phoenix bird... the Sun/Heart. The fallen, lost son Anti-Christ re-unites with his brother Christ, unleashing incredible energy in the process that enlightens the world.
“If one wants to explain something completely new to "ordinary people", one must be able to express this new knowledge also in the official scientifically recognized language, otherwise one will only be met with rejection. But how can a Windows operating system make it clear to a Basic operating system that it is actually much more powerful when Basic thinking labels every innovation by virtue of its limited syntax as "wrong" and "illogical"? If there is no will for spiritual growth, all efforts are in vain. "Normal science" therefore regards every real innovation always as an impossibility! Your world will change "for the better" only if "all experts" in your world are synthesizing their efforts with a "single goal in mind". This goal should be Heaven on Earth (= the one and only, real "Great Work", and extremely hard work it is!). Do not say rashly "impossible" again!
A scientist will learn so much from me that his dogmas will "crumble." Whether or not he really will allow his "Tower of Babel" to collapse in order to build a different tower with the same building blocks, which really reaches to heaven and thus to me, he will determine with his own behavior. All your decisions are always an "expression" of your spiritual maturity. This maturity, however, has nothing to do with rational intellect, with knowledge in the conventional sense. I would have created an unjust creation if only "sages" were to enter the kingdom of heaven. You do not need to know "how" a plane works if you want to fly into "holy"days. It is enough "to believe" that it works.
You should be master over technology and not its addicted slave. For this I ultimately created your world, so that you try to free yourself out of all captivity. All human fears are based only on your ignorance. With this revelation, unprecedented technological possibilities are opening up to your mankind. With the help of the HOLO-FEELING equation and with a new understanding of the real laws of the "true nature of all things", science will be able to perform real miracles in your world. Only narrow-minded, self-sacred salvific teachings - regardless, whether religious or political - try to force your soul into a cage of undifferentiated categories as good and evil, useful and harmful, or worthy and unworthy of life and death." ― HOLOFEELING (1996)
Why the "hard" problem of consciousness is easy and the "easy" problem hard....Aaron Sloman
The "hard" problem of concsiousness can be shown to be a non-problem because it is formulated using a seriously defective concept (the concept of "phenomenal consciousness" defined so as to rule out cognitive functionality and causal powers).
So the hard problem is an example of a well known type of philosophical problem that needs to be dissolved (fairly easily) rather than solved. For other examples, and a brief introduction to conceptual analysis, see http://www.cs.bham.ac.uk/research/projects/cogaff/misc/varieties-of-atheism.html
In contrast, the so-called "easy" problem requires detailed analysis of very complex and subtle features of perceptual processes, introspective processes and other mental processes, sometimes labelled "access consciousness": these have cognitive functions, but their complexity (especially the way details change as the environment changes or the perceiver moves) is considerable and very hard to characterise.
"Access consciousness" is complex also because it takes many different forms, since what individuals are conscious of and what uses being conscious of things can be put to, can vary hugely, from simple life forms, through many other animals and human infants, to sophisticated adult humans,
Finding ways of modelling these aspects of consciousness, and explaining how they arise out of physical mechanisms, requires major advances in the science of information processing systems -- including computer science and neuroscience.
There are empirical facts about introspection that have generated theories of consciousness but some of the empirical facts go unnoticed by philosophers.
The notion of a virtual machine is introduced briefly and illustrated using Conway's "Game of life" and other examples of virtual machinery that explain how contents of consciousness can have causal powers and can have intentionality (be able to refer to other things).
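To make the Game of Life illustration concrete, here is a minimal sketch (mine, not the author's code): the 'glider' is a virtual-machine-level entity that moves diagonally and supports causal talk, even though no individual cell ever moves.

```python
# The "physics" is a grid of cells obeying Conway's rules; a glider is
# a virtual entity one level up: it "moves" although no cell does.
from collections import Counter

def step(live):
    """One Game of Life update over a set of live-cell (x, y) coordinates."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):         # after 4 steps the same shape reappears,
    glider = step(glider)  # shifted by (+1, +1): a virtual 'object' moved
print(sorted(glider))      # the original cells, each translated by (1, 1)
```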
The beginnings of a research program are presented, showing how more examples can be collected and how notions of virtual machinery may need to be developed to cope with all the phenomena.
Evolution of minds and languages: What evolved first and develops first in ch...Aaron Sloman
SLIDESHARE NOW STUPIDLY DOES NOT ALLOW SLIDES TO BE UPDATED. To find the latest version of these slides go to http://www.cs.bham.ac.uk/research/projects/cogaff//talks/#talk111
The version posted here was last updated on 16 March 2015. There have been several changes since then on the alternative site. Why did Slideshare take such a stupid decision (after being bought by LinkedIn)?
A theory is presented according to which "languages" with structural variability and compositional semantics evolved in several species for *internal* use (e.g. in perception, planning, learning, forming goals, deciding, etc.) before *external* languages evolved for communication. The theory implies that such internal languages develop in young humans before a language for communication.
It is also noted that the standard notion of 'compositional semantics' has to allow the propagation of semantic content from parts to wholes to be potentially context sensitive at every stage: i.e. current context, speaker intentions, user knowledge and shared goals can all affect how the semantics of larger parts are derived from the semantics of smaller parts plus syntactic structure. This applies as much to non-verbal languages as to verbal ones.
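Here is a toy sketch of such context-sensitive composition (my illustration, not the author's formalism; the phrase encoding and threshold values are invented): the meaning function threads a context through every composition step, so the same parts yield different wholes in different contexts.

```python
# Context-sensitive composition: the semantic value of a whole is built
# from the values of its parts PLUS a context, at every step.

def meaning(phrase, context):
    head, *parts = phrase
    if head == "tall":
        # Gradable adjective: its extension depends on a contextual threshold.
        return lambda e: e["height_cm"] > context["tall_threshold"]
    if head == "not":
        inner = meaning(parts[0], context)
        return lambda e: not inner(e)
    raise ValueError(f"unknown head: {head}")

alice = {"height_cm": 185}
among_jockeys = {"tall_threshold": 170}   # comparison class: jockeys
among_centres = {"tall_threshold": 200}   # comparison class: basketball centres

print(meaning(("tall",), among_jockeys)(alice))            # True
print(meaning(("not", ("tall",)), among_centres)(alice))   # True
```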
This theory of how human languages evolved from earlier 'internal languages' (GLs) is inconsistent with the best known published theories of evolution or development of language.
But that does not make it wrong. Moreover, this theory is supported by empirical evidence including the example of deaf children in Nicaragua: http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
Ontologies for baby animals and robots From "baby stuff" to the world of adul...Aaron Sloman
In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on results of interacting with the environment. Future human-like robots will also need this. Since pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates, can interact, often creatively, with complex structures and processes in a 3-D environment, this suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process (some of which are process-fragments composed of bits of stuff changing their properties, structures or relationships), and kinds of causal interaction, and (b) since they don't use a human communicative language, that they must use information encoded in some form that existed prior to human communicative languages, both in our evolutionary history and in individual development. Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension. The research reported here aims mainly to develop requirements for explanatory designs. The attempt to develop forms of representation, mechanisms and architectures that meet those requirements will be a long-term research project.
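As a toy sketch of substantive ontology extension (my illustration; the kind names and predicates are invented), an agent can add a genuinely new kind when no existing kind classifies what it encounters:

```python
# Substantive ontology extension: when interaction produces observations
# that no existing kind explains, introduce a new kind rather than just
# re-weighting old ones. (Toy illustration only.)

class Ontology:
    def __init__(self, kinds):
        self.kinds = dict(kinds)   # name -> predicate on observations

    def classify(self, obs):
        return [k for k, pred in self.kinds.items() if pred(obs)]

    def extend(self, name, predicate):
        """Add a genuinely new kind, enlarging what can be thought."""
        self.kinds[name] = predicate

baby = Ontology({"rigid": lambda o: o["bends"] == 0})
obs = {"bends": 5, "springs_back": True}
if not baby.classify(obs):                      # nothing fits, so extend
    baby.extend("elastic", lambda o: o.get("springs_back", False))
print(baby.classify(obs))                       # ['elastic']
```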
If learning maths requires a teacher, where did the first teachers come from?
or
Why (and how) did biological evolution produce mathematicians?
Presentation at Symposium on Mathematical Cognition AISB2010
Part of the Meta-Morphogenesis Project. See also this discussion of toddler theorems:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Evolution of human mathematics from earlier abilities to perceive, use and reason about affordances, spatial possibilities and constraints.
The necessity of mathematical truth does not imply infallibility of mathematical reasoning. (Lakatos).
Toddlers discover theorems without knowing it. Later they may learn to reflect on and talk about what they have learnt. Compare Annette Karmiloff-Smith on "Representational re-description".
Why is it still so hard to give robots and AI systems the ability to reason spatially as mathematicians do (except for simple special cases, e.g. where space is discretised)?
Theories of induction in psychology and artificial intelligence assume that the process leads from observation and knowledge to the formulation of linguistic conjectures. This paper proposes instead that the process yields mental models of phenomena. It uses this hypothesis to distinguish between deduction, induction, and creative forms of thought. It shows how models could underlie inductions about specific matters. In the domain of linguistic conjectures, there are many possible inductive generalizations of a conjecture. In the domain of models, however, generalization calls for only a single operation: the addition of information to a model. If the information to be added is inconsistent with the model, then it eliminates the model as false: this operation suffices for all generalizations in a Boolean domain. Otherwise, the information that is added may have effects equivalent (a) to the replacement of an existential quantifier by a universal quantifier, or (b) to the promotion of an existential quantifier from inside to outside the scope of a universal quantifier. The latter operation is novel, and does not seem to have been used in any linguistic theory of induction. Finally, the paper describes a set of constraints on human induction, and outlines the evidence in favor of a model theory of induction.
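In symbols, the two generalization operations just described might be rendered as follows (my notation, not the paper's); both moves strengthen the conjecture, i.e. the right-hand form entails the left-hand form:

```latex
% (a) Replacing an existential quantifier by a universal one:
\exists x\, P(x) \;\longrightarrow\; \forall x\, P(x)
% (b) Promoting an existential quantifier from inside to outside the
%     scope of a universal quantifier (the novel operation):
\forall x\, \exists y\, R(x, y) \;\longrightarrow\; \exists y\, \forall x\, R(x, y)
```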
Distinguishes Humean (statistics-based) notions of causation and Kantian (deterministic, structure-based) notions of causation, arguing that intelligent robots and animals need both, but each requires a combination of competences, and various kinds of partial competence of both kinds are possible.
A technology architecture for managing explicit knowledge over the entire lif...William Hall
This slide set summarizes my work at Tenix Defence from around 1992 through 2002 to manage the authoring and delivery of maintenance documentation and engineering technical data to support life-cycle management of the 10 ANZAC frigates Tenix built for the Australian and New Zealand Navies and more than 300 M113 light-armored vehicles rebuilt as-new for the Australian Army. Today (in 2013) this is still a state-of-the-art application of content management technology. So far as I know, the full benefit of this technology (as described in this 2002 presentation) has not yet been realized anywhere in the world.
Arguably, implementation of this technology played a major role in the successful completion of the ANZAC Ship Project 17 years after its stringently fixed-price contract was negotiated in 1989. Finished on time, on budget, with every ship delivered on time to happy customers and a healthy corporate profit. Unfortunately, Tenix Defence management failed to understand how this system worked, and chose to implement new, supposedly less expensive technology they thought they understood for their next major project. As a consequence of this choice and the failure to transfer human knowledge developed in the ANZAC Project, the company's performance on their next large project (still less than 10% the size of the ANZAC Project) was so bad that Tenix Defence was closed and its assets sold to the highest bidder. See Hall, W.P., Nousala, S., Kilpatrick, B. 2009. One company – two outcomes: knowledge integration vs corporate disintegration in the absence of knowledge management. VINE: The Journal of Information and Knowledge Management Systems 39(3), 242-258 - http://tinyurl.com/yzgjew4; and Hall, W.P., Richards, G., Sarelius, C., Kilpatrick, B. 2008. Organisational management of project and technical knowledge over fleet lifecycles. Australian Journal of Mechanical Engineering 5(2):81-95 - http://tinyurl.com/5d2lz7.
What has AI in Common with Philosophy?
John McCarthy
Computer Science Department
Stanford University
Abstract
AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not.
Interaction designers, graphic designers, and anybody else involved in the ongoing production of commercial websites should pay a lot more attention to instructional principles, even if the project at hand is not overtly instructional. What makes material good for learning also makes it good for other conversion goals, such as explaining products and services and strengthening brands. This presentation gives an overview of several key learning theories valued by instructional designers, with real-world examples of how they make a difference in all sorts of consumer applications (such as in banking, how-to, and entertainment). The principles include: learning from visuals, cognitive load theory, situated learning, cognitive flexibility theory, scaffolded vs. discovery learning, and case-based reasoning. The emphasis will be on using the instructional designer's theoretical toolkit to build an evaluation framework, or heuristic, for reviewing non-instructional designs.
Do Intelligent Machines, Natural or Artificial, Really Need Emotions? Aaron Sloman
(Updated on 14 Jan 2014 -- with substantial revisions.)
Many people believe that emotions are required for intelligence. I argue that this is mostly based on (a) wishful thinking and (b) a failure adequately to analyse the variety of types of affective states and processes that can arise in different sorts of architectures produced by biological evolution or required for artificial systems. This work is a development of ideas presented by Herbert Simon in the 1960s in his 'Motivational and emotional controls of cognition'.
Helping Darwin: How to think about evolution of consciousness (Biosciences ta...Aaron Sloman
ABSTRACT
Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes: an old, and still surviving, philosophical problem.
A new answer is now available. Evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution encountered and "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by using systems with increasingly abstract mechanisms based on virtual machines, including most recently self-monitoring virtual machines.
These capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machinery which, in turn, is implemented in lower level physical mechanisms.
This would require far more complex virtual machines than human engineers have so far created. No one knows whether the biological virtual machines could have been implemented in the discrete-switch technology used in current computers.
These ideas were not available to Darwin and his contemporaries: most of the concepts, and the technology, involved in creation and use of sophisticated virtual machines were developed only in the last half century, as a by-product of a large number of design decisions by hardware and software engineers solving different problems.
Dissecting the Imitation Faculty: The Multiple Imitation Mechanisms (MIM) Hyp...Francys Subiaul
Is the imitation faculty one self-contained domain-general mechanism or an amalgamation of multiple content-specific systems? The Multiple Imitation Mechanisms (MIM) Hypothesis posits that the imitation faculty consists of distinct content-specific psychological systems that are dissociable both structurally and functionally. This hypothesis is supported by research in the developmental, cognitive, comparative and neural sciences. This body of work suggests that there are dissociable imitation systems that may be distinguished by unique behavioral and neurobiological profiles. The distribution of these different imitation skills in the animal kingdom further suggests a phylogenetic dissociation, whereby some animals specialized in some (but not all possible) imitation types; a reflection of specific selection pressures favoring certain imitation systems. The MIM hypothesis attempts to bring together these different areas of research into one theoretical framework that defines imitation both functionally and structurally.
Construction kits for evolving life -- Including evolving minds and mathemati...Aaron Sloman
Darwin's theory of evolution by natural selection does not adequately explain the generative power of biological evolution. For that we need to understand the mechanisms involved in producing new options for natural selection, without which there would always be the same set of possibilities available. This applies also to the construction kits: evolution can produce new construction kits, "Derived" construction kits, based on the Fundamental construction kit provided by the physical universe and its originally lifeless physical and chemical mechanisms. It turns out that life needs both concrete and abstract construction kits, of ever increasing complexity. This paper introduces some basic ideas, though far more empirical and theoretical research is required, combining multiple disciplines. Slideshare no longer allows presentations to be updated, so I no longer use it. For a later version search for: Sloman "Construction kits for evolving life" cogaff. Most of my slideshare presentations have newer versions in the CogAff web site at the University of Birmingham, UK. (Not Alabama)
The Turing Inspired Meta-Morphogenesis Project -- The self-informing universe...Aaron Sloman
This replaces an earlier version. The latest version, with clickable links, is available at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Virtuality, causation and the mind-body relationship Aaron Sloman
Extends my previous introductions to virtual machines and their role both in artefacts and products of biological evolution. This attempts to correct various erroneous assumptions about computation, functionalism, supervenience, life, information, and causation. See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
How to Build a Research Roadmap (avoiding tempting dead-ends) Aaron Sloman
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z, then hoping that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic) by examining many long term goals, including describing existing human and animal competences not yet achieved by robots, then working backwards systematically by investigating requirements for those competences, and requirements for meeting those requirements, etc. Instead of generating a single linear roadmap, this should produce a partially ordered network of intermediate targets, leading back to short-term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done in many teams in a fully open manner, with as much collaboration as possible, it should be possible to make faster, deeper progress than can be achieved by brain-storming discussions of where we can get in a few years.
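A minimal sketch of that backward-chaining idea (my illustration; the competence names are invented): represent each target together with the targets it requires, and any topological order of the resulting graph is one admissible route through the partially ordered network.

```python
# Targets and the targets they require, as a dependency graph; a
# topological sort yields one valid ordering of the partially ordered
# network of intermediate goals, from achievable to long-term.
from graphlib import TopologicalSorter   # Python 3.9+

requires = {                             # invented example competences
    "human-level visual control": {"3-D process perception",
                                   "affordance detection"},
    "affordance detection":       {"3-D process perception"},
    "3-D process perception":     set(),
}

print(list(TopologicalSorter(requires).static_order()))
# ['3-D process perception', 'affordance detection', 'human-level visual control']
```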
A multi-picture challenge for theories of vision Aaron Sloman
(Modified 7th June 2013 to include some droodles.)
Some informal experiments are presented whose results help to challenge most theories of vision and proposed mechanisms of vision.
A possible explanatory information-processing architecture is proposed, based on multiple dynamical systems, grown during an individual's lifetime, most of which are dormant most of the time, but which can be very rapidly activated and instantiated so as to build a multi-ontology interpretation of the currently, and recently, available visual information -- e.g. turning a corner into a busy street in an unfamiliar city. As far as I know, there is no working implementation of such a system, though a very early prototype called Popeye (implemented in Pop2) around 1976 is summarised. Many hard unsolved problems remain, though most of them are ignored by research on vision that makes narrow assumptions about the functions of biological vision.
Meta-Morphogenesis, Evolution, Cognitive Robotics and Developmental Cognitive...Aaron Sloman
How could a planet, condensed from a cloud of dust, produce minds -- and products of minds, along with microbes, mice, monkeys, mathematics, music, Marmite, murder, megalomania, and all other forms and products of life on earth (and possibly elsewhere)?
This presentation introduces the ambitious, multi-disciplinary Meta-Morphogenesis project, partly inspired by Turing's 1952 paper on morphogenesis. It may lead to an answer, by identifying the many transitions between different types and mechanisms of biological information processing, including transitions that changed the mechanisms of change, altering forms of evolution, development, learning, culture and ecosystem dynamics. One of the questions raised is whether chemical information-processing is capable of supporting processes that would be infeasible or impossible on a Turing machine or conventional computer.
A 2-hour 30-minute recording of this tutorial was made by Adam Ford, available here: http://www.youtube.com/watch?v=BNul52kFI74 (new version installed on 14 Jun 2013, with titles and the audio problem fixed). Also available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#m-m-tut
"Information" here is used in Jane Austen's sense, not Claude Shannon's sense. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html
More information about the project is available here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Adam Ford interviewed the author about some of these topics at the AGI conference in December 2012 in this video: http://www.youtube.com/watch?v=iuH8dC7Snno
Related PDF presentations can be found here http://www.cs.bham.ac.uk/research/projects/cogaff/talks
What is computational thinking? Who needs it? Why? How can it be learnt? ...Aaron Sloman
What is computational thinking?
Who needs it? Why? How can it be learnt?
Can it be taught? How?
Slides for an invited presentation at the Conference of ALT (Association for Learning Technology), 11th Sept 2012, University of Manchester.
PDF available (easier for printing, selecting text, etc.):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk105
A video of the actual presentation (using no slides because of a projector problem) is now available here
http://www.youtube.com/watch?v=QXAFz3L2Qpo
It has also been made available as "slide 47" after the PDF presentation on this page.
I attempt to generalise Jeannette Wing's notion of "Computational thinking" (ACM 2006) to include attempting to understand much biological information processing, and try to show the necessity for educators to do deep computational thinking if they wish to facilitate processes of learning.
What's vision for, and how does it work? From Marr (and earlier) to Gibson and...Aaron Sloman
ABSTRACT
Very many researchers assume that it is obvious what vision (e.g. in humans) is for, i.e. what functions it has, leaving only the problem of explaining how those functions are fulfilled. So they postulate mechanisms and try to show how those mechanisms can produce the required effects, and also, in some cases, try to show that those postulated mechanisms exist in humans and other animals and perform the postulated functions. The main point of this presentation is that it is far from obvious what vision is for - and J.J. Gibson's main achievement is drawing attention to some of the functions that other researchers had ignored. I'll present some of the other work, show how Gibson extends and improves it, and then point out how much more there is to the functions of vision and other forms of perception than even Gibson had noticed.
In particular, much vision research, unlike Gibson's, ignores vision's function in on-line control and perception of continuous processes; and nearly all, including Gibson's work, ignores meta-cognitive perception, and perception of possibilities and constraints on possibilities, and the associated role of vision in reasoning. If we don't understand that, we cannot understand how biological mechanisms, arising from the requirements of being embodied in a rich, complex and changing 3-D environment, underpin human mathematical capabilities, including the ability to reason about topology and Euclidean geometry.
Last updated: 1st March 2014, 10 June 2015 (additional links)
Slides prepared for a broadcast presentation to members of Computing at School http://www.computingatschool.org.uk/, about why computing education should be about more than the science and technology required for useful or entertaining applications. Instead, learning about forms of information processing systems can give us new, deeper ways of thinking about many old phenomena, e.g. the nature of mind and the evolution of minds of various kinds. This supports the claim that the study of computation is as much a science as physics or psychology, rather than just a branch of engineering -- as famously suggested by Fred Brooks.
Possibilities between form and function (Or between shape and affordances) Aaron Sloman
I discuss the need for an intelligent system, whether it is a robot, or some sort of digital companion equipped with a vision system, to include in its ontology a range of concepts that appear not to have been noticed by most researchers in robotics, vision, and human psychology. These are concepts that lie between (a) concepts of "form", concerned with spatially located objects, object parts, features, and relationships and (b) concepts of affordances and functions, concerned with how things in the environment make possible or constrain actions that are possible for a perceiver and which can support or hinder the goals of the perceiver.
Those intermediate concepts are concerned with processes that *are* occurring and processes that *can* occur, and the causal relationships between physical structures/forms/configurations and the possibilities for and constraints on such processes, independently of whether they are processes involving anyone's actions or goals.
These intermediate concepts relate motions and constraints on motion to both geometric and topological structures in the environment and the kinds of 'stuff' of which things are composed, since, for example, rigid, flexible, and fluid stuffs support and constrain different sorts of motions.
They underlie affordance concepts. Attempts to study affordances without taking account of the intermediate concepts are bound to prove shallow and inadequate.
Notes for an invited talk at the Dagstuhl Seminar "From Form to Function", Oct 18-23, 2009: http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=09431
Virtual Machines and the Metaphysics of Science Aaron Sloman
Philosophers regularly use complex (running) virtual machines (not virtual realities) composed of enduring interacting non-physical subsystems (e.g. operating systems, word-processors, email systems, web browsers, and many more). These VMs can be subdivided into different kinds with different types of functions, e.g. "specific-function VMs" and "platform VMs" (including language VMs, and operating system VMs) that provide support for a variety of different (possibly concurrent) "higher level" VMs, with different functions.
Yet, almost all ignore (or misdescribe) these VMs when discussing functionalism, supervenience, multiple realisation, reductionism, emergence, and causation.
Such VMs depend on many hardware and software designs that interact in very complex ways to maintain a network of causal relationships between physical and virtual entities and processes.
I'll try to explain this, and show how VMs are important for philosophy, in part because evolution long ago developed far more sophisticated systems of virtual machinery (e.g. running on brains and their surroundings) than human engineers have so far produced. Most are still not understood.
This partly accounts for the apparent intractability of several philosophical problems.
E.g. running VM subsystems can be disconnected from input-output interactions for extended periods, and some can have more complexity than the available input/output bandwidth can reveal.
Moreover, despite the advantages of VMs for self-monitoring and self control, they can also lead to self-deception.
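A toy sketch of the layering described above (my illustration, not drawn from the slides): a "platform VM" supplies the scheduling substrate on which several "specific-function VMs" run, each a non-physical process with causal effects:

```python
# Layered virtual machinery: a platform VM schedules several
# specific-function VMs. None of these processes is a physical object,
# yet each has causal effects on the others and on the whole.

class SpecificFunctionVM:
    def __init__(self, name):
        self.name, self.ticks = name, 0
    def step(self):
        self.ticks += 1                       # the VM's own 'process'
        return f"{self.name}: tick {self.ticks}"

class PlatformVM:
    """Supplies the substrate (here, round-robin scheduling) on which
    the guest VMs run, as an operating system does for applications."""
    def __init__(self, guests):
        self.guests = guests
    def run(self, cycles):
        for _ in range(cycles):
            for guest in self.guests:
                print(guest.step())

PlatformVM([SpecificFunctionVM("word-processor"),
            SpecificFunctionVM("mailer")]).run(2)
```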
For a lot of related material see Steve Burbeck's web site http://evolutionofcomputing.org/Multicellular/Emergence.html
(A related presentation debunking the "hard problem" of consciousness is also in this collection.)
What is science? (Can There Be a Science of Mind?) (Updated August 2010) Aaron Sloman
This presentation gives an introduction to philosophy of science, though a rather idiosyncratic one, stressing science as the search for powerful new ontologies rather than merely laws. You can't express a law unless you have an ontology including the items referred to in the law (e.g. pressure, volume, temperature). The talk raises a number of questions about the aims and methods of science, about the differences between the physical sciences and the science of information-processing systems (e.g. organisms, minds, computers), whether there is a unique truth or final answers to be found by science, whether scientists ever prove anything (no -- at most they show that some theory is better than any currently available rival theory), and why science does not require faith (though obstinacy can be useful). The slides end with a section on whether a science of mind is possible, answering yes, and explaining how.
What designers of artificial companions need to understand about biological ones Aaron Sloman
Discusses some of the requirements for artificial companions (ACs), contrasting relatively easy (Type 1) goals and much harder, but more important, (Type 2) goals for designers of ACs. Current techniques will not lead to achieving Type 2 goals, but progress can be made towards those goals, by looking more closely at how the relevant competences develop in humans, and their architectural and representational requirements. A human child develops a lot of knowledge about space, time, causation, and many varieties of 3-D structures and processes. This knowledge provides the background and foundation for many kinds of learning, and will be required for development of ACs that act sensibly and give sensible advice. No AI systems are anywhere near the competences of young children. Trying to go directly to systems with adult competences, especially statistics based systems, will produce systems that are either very restricted, or very brittle and unreliable (or both).
Kantian Philosophy of Mathematics and Young Robots: Could a baby robot grow u...Aaron Sloman
There is a sequel to this, with more emphasis on 'toddler theorems' and kinds of child science here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#toddler
It is not yet stable enough to be uploaded to slideshare.
This is a mixture of philosophy, artificial intelligence, cognitive science, software engineering and theoretical biology, presented at the Workshop on Philosophy and Engineering, London, 10-12 November 2008.
It describes the bony anatomy, including the femoral head, acetabulum and labrum, and also discusses the capsule and ligaments. The muscles that act on the hip joint and its range of motion are outlined, and factors affecting hip joint stability and weight transmission through the joint are summarized.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be both loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
Safalta Digital marketing institute in Noida, provide complete applications that encompass a huge range of virtual advertising and marketing additives, which includes search engine optimization, virtual communication advertising, pay-per-click on marketing, content material advertising, internet analytics, and greater. These university courses are designed for students who possess a comprehensive understanding of virtual marketing strategies and attributes.Safalta Digital Marketing Institute in Noida is a first choice for young individuals or students who are looking to start their careers in the field of digital advertising. The institute gives specialized courses designed and certification.
for beginners, providing thorough training in areas such as SEO, digital communication marketing, and PPC training in Noida. After finishing the program, students receive the certifications recognised by top different universitie, setting a strong foundation for a successful career in digital marketing.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
MATATAG CURRICULUM: ASSESSING THE READINESS OF ELEM. PUBLIC SCHOOL TEACHERS I...NelTorrente
In this research, it concludes that while the readiness of teachers in Caloocan City to implement the MATATAG Curriculum is generally positive, targeted efforts in professional development, resource distribution, support networks, and comprehensive preparation can address the existing gaps and ensure successful curriculum implementation.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Normal Labour/ Stages of Labour/ Mechanism of LabourWasim Ak
Normal labor is also termed spontaneous labor, defined as the natural physiological process through which the fetus, placenta, and membranes are expelled from the uterus through the birth canal at term (37 to 42 weeks
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
Why symbol-grounding is both impossible and unnecessary, and why theory-tethering is more powerful anyway.
1. Seminars at Sussex University, 27 Nov 2007, Birmingham 29 Nov 2007
Why symbol-grounding is both impossible and
unnecessary, and why symbol-tethering based
on theory-tethering is more powerful anyway.
Introduction to key ideas of semantic models,
implicit definitions and symbol tethering*
Aaron Sloman
School of Computer Science,
University of Birmingham, UK
http://www.cs.bham.ac.uk/~axs/
With help from Ron Chrisley, Jackie Chappell and Peter Coxhead
Substantially modified/extended in June 2008
[*] Symbol-tethering was previously described as Symbol-attachment
These slides will be available in my “talks” directory at:
http://www.cs.bham.ac.uk/research/cogaff/talks/#models
This is an expanded version of part of an older set of slides criticising
Concept Empiricism and Symbol Grounding theory
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#grounding
2. Apologies
I apologise
For slides that are too cluttered: I write my slides so that they can be
read by people who did not attend the presentation.
So please ignore what’s on the screen unless I draw attention to
something.
NO APOLOGIES for using linux and latex instead of powerpoint.
Acknowledgement:
This work owes a great deal to Kant, to logicians and mathematicians of the 19th and 20th century, and
to the logical and philosophical work of Tarski, Carnap, and other 20th century philosophers of science.
However I believe all that work is merely preliminary to what still needs to be done, namely showing how
to build a working system that can extend its concepts in the process of learning about the world and
discovering that there is much more to the world than meets the eye or skin.
I suspect that making progress on that task will require us to discover new forms of representation, new
mechanisms for operating on them and new information-processing architectures combining them – new
to science, engineering and philosophy, even if they are very old in biological systems.
Some of the ideas presented here were in chapter 2 of The Computer Revolution in Philosophy: Philosophy, Science and
Models of Mind (1978) http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
What are the aims of science?
First published in Radical Philosophy (1976)
3. Two sorts of empiricism
Concept empiricism is an old, very tempting, and mistaken theory.
It is related to, but different from, knowledge empiricism.
One of the most distinguished proponents of both sorts of empiricism was the
philosopher David Hume, though the ideas are much older.
Immanuel Kant argued against both kinds of empiricism.
Knowledge empiricism states:
All knowledge (about what is and is not true) has to be derived from and testable by
sensory experience, possibly aided by experiments and scientific instruments.
It usually allows for mathematical and logical knowledge as exceptions, though
sometimes describing such knowledge as essentially empty of content
(like definitional truths).
Hume’s knowledge empiricism:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our
hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract
reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning
matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry
and illusion.”
An Enquiry Concerning Human Understanding
4. Concept Empiricism
Concept empiricism is a theory about concepts, the building blocks
required for both true and false judgements, beliefs, theories, hypotheses,
...
It is not always noticed that concepts are also required as building blocks for all sorts of
mental contents, including questions, desires, intentions, plans, preferences, hopes,
fears, dreams, puzzles, explanations, etc.
Concepts can be thought of as the building blocks of all semantic contents.
They are also required as building blocks for non-conscious contents, e.g. subconscious intermediate
states in perceptual processing or motor control, and stored factual and procedural information used in
understanding and generating language, controlling actions, learning, taking decisions, etc.
Some people think there are also non-conceptual contents, though what that means is not clear, and will
not be discussed here.
If an animal or machine makes use, consciously or otherwise, of semantic contents using a form of
representation that allows novel meanings to be expressed by construction of novel meaning-bearers,
then whatever the components are that can be rearranged or recombined to produce such novelty can be
called “conceptual”. Concepts corresponding to properties and relations are a special case.
There are also concepts that are potentially available for use even though no animal or machine has ever
used them - e.g. a concept of a type of animal with a large number N of heads, for some N which I am not
going to specify. Perhaps one of those concepts will occur in a future science fiction story.
5. Concepts in this context are not primarily for
communication
Concept empiricism is not just a theory about conditions for understanding concepts used
in communication, but about concepts needed for thinking, wanting, supposing, intending,
wondering whether, wondering why, etc.
Likewise the symbols (representations) we’ll be talking about are not primarily symbols in
an external language, but the ones needed by an animal or machine for internal use.
It is often thought that concepts, the building blocks of thoughts, questions, goals, plans, etc. must always
be expressed in discrete symbolic structures that occur in a language with a formal syntax and
compositional semantics, that is used for communication.
I have argued elsewhere that we need to generalise that notion to include “generalised” languages (GLs)
used only for internal purposes, some of which may be more pictorial or spatio-temporal than symbolic
formalisms.
See the slide below, entitled “Generalising the notion of ‘language’ or ‘formalism’”, and also
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
What evolved first: Languages for communicating, or languages for thinking (Generalised Languages: GLs)
GLs of some form are needed to explain some of the intelligent behaviours of pre-verbal children and
non-verbal animals.
However, for much of the rest of this presentation I shall deal only with linguistic or logical
formalisms, where some of the issues are clearer – though I regard this as merely preliminary
clarification which must later be extended to GLs including analogical representations.
6. What alternatives are there to Concept Empiricism?
It is often assumed that the only alternative to concepts being derived by
abstraction from experience of instances is that they must be innate.
Compare: J.A. Fodor The Language of Thought (1975)
But the history of science shows that many concepts have been introduced by research
communities that are not explicitly definable in terms of concepts that were used earlier.
Examples include concepts like “gene”, “electron”, “charge”, “valence”, and many more.
Likewise a child of ten uses many concepts that are not definable in terms of the
conceptual apparatus of a child of three months – yet the later concepts must in some
way grow out of the activities of the child, using previously available competences.
There is some sort of boot-strapping process that needs to be explained.
The rest of this presentation aims to show how the introduction of new concepts can go
hand in hand with the introduction of new theories that not only use those concepts, but
also implicitly, and partially, define them.
The process by which a child or scientist (or future robot) develops substantively new
concepts depends on the processes that significantly extend her theories of how the
world works, simultaneously creating and making use of those new concepts:
New theories don’t simply express laws relating old ideas: New ideas are developed also.
Where logic is used, the process can simultaneously involve abduction (adding new premisses to
explain old facts) and adding new symbols, not definable in terms of old ones.
7. If concept empiricism were true....
It is often assumed (ignoring some of the power of biological evolution) that a
new-born animal has access only to concepts expressible in terms of what
it can experience, or more generally concepts definable in terms of
patterns in its sensory and motor signals.
In combination with concept empiricism (or symbol-grounding theory) that
implies that ALL concepts ever used are restricted to describing patterns
and relationships (e.g. conditional probability relationships) in sensory and
motor signals.
Since sensory and motor signals are processes occurring within the body, an ontology
referring to patterns and relationships in those signals could be called a “somatic”
ontology (from the Greek word “soma” meaning “body”).
So concept empiricism implies that no animal or machine can ever think about, refer to,
perceive, have beliefs, or have goals that concern anything that happens or could happen
outside it: i.e. using exo-somatic concepts would be impossible.
If visual sensory inputs form a changing 2-D array, then, according to concept empiricism, a visual
system could not provide information about 3-D structures, and a perceiver with only visual senses could
never even conceive of 3-D structures: the Necker cube ambiguity would be impossible.
Compare Plato’s cave-dwellers perceiving only shadows on the wall of the cave: concept empiricism
would prevent them from thinking of the shadows as shadows of something else, different in kind.
It would also be impossible to think of space or time as extending indefinitely, or of events that occurred
before one’s birth, or of objects composed of unobservable particles with unobservable properties.
8. Two kinds of ontology extension
An important aspect of learning and development is acquiring new
concepts, providing the ability to have new mental contents, referring to
new kinds of entity: Ontology extension.
Ontology extension may be
• definitional
New concepts added that are explicitly defined in terms of old ones: this is essentially just
abbreviation, though often very useful.
• substantive.
New concepts added that cannot be explicitly defined in terms of old ones.
Substantive ontology extension is an important aspect of scientific advance, contrary to
the view that science is merely concerned with discovering new facts, e.g. laws of nature.
See Sloman 1978, Chapter 2. “What are the aims of science?”
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
If all concepts had to be explicitly defined in terms of others, that would produce either
circularity or an infinite regress.
So either some concepts must be innate (or available when a machine first starts up) or
there must be a way of acquiring new concepts that does not rely on definition in terms of
old concepts.
According to concept empiricism the only way to acquire new concepts that are not
definable (i.e. the only method of substantive ontology extension) is:
by abstraction from experience of instances.
9. Exo-somatic concepts can be innate
The notion of substantive, non-definitional, extension of concepts is not
restricted to concepts acquired in a process of learning and development.
It is possible for concepts that are provided by genetic mechanisms to form
different subsets, some of which are substantive extensions of others.
For example, suppose a new-born animal, or robot, has two sorts of concepts from birth,
as a result of evolutionary processes:
• Somatic concepts – referring only to patterns in sensor and motor signals and
relationships between them, including probabilistic and conditional relations, e.g.:
A concept of sensed temperature, i.e. feeling hot or cold; or the concept of being in a shivering state,
defined by fast rhythmic signals being sent to certain effectors; or the dispositional concept of being in
a state of feeling cold that will change to a state of feeling hot if shivering happens.
• Exo-somatic concepts – referring to things, states of affairs, events, and processes
that exist outside the organism, and whose existence at any time has nothing to do
with whether they happen to be sensed or acted on at that or any other time, e.g.:
a concept of some bushes existing nearby; a concept of a dangerous predator being hidden in the
bushes, and a concept of the predator attempting to eat another animal in the future.
A migrating bird might innately have a concept of a distant location to which it should go.
(In some species, young birds first migrate after their parents have gone.)
In such cases, the exo-somatic concepts are substantive extensions of the somatic
concepts, even though both sets are innate.
But exo-somatic concepts can also be developed by an individual, as we’ll see.
10. Concepts and Symbols/Representations
Concepts, and the larger structures built using them, are usable only if
expressed or encoded in something.
This requires a medium and mechanisms to use the medium, e.g. manipulating
information structures.
The words “symbol” and “representation” refer to the contents of the medium.
Many cases can be distinguished
Some media support only fixed structures, e.g. certain switches that can be on or off, or
fixed-size vectors of values.
Others allow new structures to be created
Human percepts and thoughts require a medium in which new structures can be created
of varying complexity.
Compare: seeing one black dot on a large white wall, seeing a busy city-centre street,
seeing ocean waves breaking on rocks.
Examples of representing structures include: vectors, arrays, trees, graphs, maps,
diagrams, dynamic simulations, ...
NOTE: The relationship between symbols (representational structures) and what they express (concepts,
thoughts, ...) can be complex, subtle and highly context-sensitive: there need not be a 1-to-1 mapping.
See Sloman 1971, here (also chapter 8 of Sloman 1978):
http://www.cs.bham.ac.uk/research/cogaff/04.html#200407
11. Symbols can exist in virtual machines
Symbols need not be physical: they can be virtual structures in virtual
machines, and usually are in AI systems.
Newell and Simon (amazingly) got this wrong in their “physical symbol system”
hypothesis.
In almost all AI systems the symbols are not physical but occur in virtual machines,
e.g. Lisp or Prolog atoms, or similar entities in other languages.
NOTE
In what follows I may slide loosely between talking about symbols and talking about the
concepts they express.
E.g. I talk both about somatic and exo-somatic concepts and about somatic and
exo-somatic symbols or representations.
In the latter case it is the semantic content of the symbols or representations that is
exo-somatic, even when the symbols or representations themselves (the
meaning-bearing structures) are within the body of the user.
I hope the context will make clear which I am referring to.
Thanks to Peter Coxhead for pointing out the need to be clear about this.
Note: intelligent agents are not restricted to using internal symbols and representations.
You can reason with diagrams in your mind or on paper.
12. Recap: Concept empiricism states (roughly):
All concepts are ultimately derived from experience of instances
• All simple concepts have to be abstracted directly from experience of instances
• All non-simple (i.e. complex) concepts are constructed from simple concepts using
logical and mathematical methods of composition.
“When we entertain, therefore, any suspicion that a philosophical term is employed without any
meaning or idea (as is but too frequent), we need but enquire, from what impression is that supposed
idea derived? And if it be impossible to assign any, this will serve to confirm our suspicion.”
David Hume, An Enquiry Concerning Human Understanding Section II.
E.g.
if red and line are simple concepts
then logical conjunction can be used to construct from them
the concept of a red line.
Hume used the example of the concept of a golden mountain.
Hume also allowed a mode of invention of a concept by interpolation,
e.g. imagining a colour never before experienced, by noticing a gap in the ordering of previously
experienced colours.
Mathematical logic allows many more ways of defining new concepts from old than I suspect Hume
ever dreamt of.
Programming languages extend that: complex predicates can be defined in terms of complex
procedures for testing instances.
Mutual definition is also possible in mathematics and programming languages (mutual recursion).
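To make these modes of definition concrete, here is a minimal sketch in Python (the predicates and toy feature descriptions are invented for illustration, not taken from any particular system): composition by logical conjunction, definition by a testing procedure, and mutual recursion.

    # Simple 'concepts' as predicates over toy feature descriptions:
    def is_red(x):  return x.get("colour") == "red"
    def is_line(x): return x.get("kind") == "line"

    # Composition by logical conjunction: an explicit definition,
    # essentially just an abbreviation of the conjunction.
    def is_red_line(x):
        return is_red(x) and is_line(x)

    # A predicate defined by a complex testing procedure,
    # e.g. 'prime' in terms of division and remainder:
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Mutual definition (mutual recursion): each defined via the other.
    def is_even(n): return n == 0 or is_odd(n - 1)
    def is_odd(n):  return n != 0 and is_even(n - 1)

    assert is_red_line({"colour": "red", "kind": "line"})
    assert is_prime(7) and not is_prime(9)
    assert is_even(10) and is_odd(3)

All three are cases of definitional, not substantive, extension: each new predicate is fully reducible to the concepts it started from.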
13. Symbol Grounding Theory
Symbol grounding theory is a modern (re-invented) version of concept
empiricism.
The term “symbol grounding” was introduced by Stevan Harnad in 1990.
Stevan Harnad, The Symbol Grounding Problem, Physica D 42, pp. 335–346,
http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html
Symbol grounding theory adds extra requirements to concept empiricism that I shall
ignore e.g.
the experience of instances must use sensors that provide information in a structure
that is close to the structure of the things sensed
The label “symbol grounding” has proved to be a very powerful viral meme, and has
damaged many philosophically uneducated minds, and some of the educated ones also.
E.g. according to this and other versions of concept empiricism, theoretical concepts of science, which
have nothing to do with the sensorimotor signals of scientists, and, more generally, exo-somatic concepts,
are impossible.
Roboticists and AI researchers who accept symbol-grounding theory restrict the forms of learning in
intelligent machines in seriously damaging ways:
• They cannot learn exo-somatic concepts, only somatic concepts.
• They use ontologies restricted to patterns and relationships in their sensor and motor signals.
• Often they ignore the structure of the environment and assume everything to be learnt can be derived
from arbitrary arrays of bit-patterns in sensors. (Evolution would not make such a mistake.)
14. Why is concept empiricism so appealing?
Because there appear to be only two ways of learning new concepts:
• EITHER You are given a definition of a new concept in terms of old ones
(E.g. ‘prime number’ defined in terms of division and remainder)
(Requires prior concepts and some formal apparatus for expressing definitions.)
• OR You have experience of examples, and then you somehow get the idea.
(How you get the idea is usually left unexplained by philosophers, but AI theorists have produced a
number of working mechanisms.)
Two sub-cases of the second case:
• Substantive concept formation What you learn is not definable in terms of prior concepts using any
formalism you can articulate
E.g. learning to recognise a colour as a result of seeing examples, or learning the taste of kiwi fruit
from examples.
(This might be implemented using chunking in self-organising neural nets.)
• Concept formation by explicit definition
An explicit definition of the new concept is extracted from the examples.
E.g. Winston’s 1970 program learning about arches, houses, etc., made of simple blocks:
it amounts to guessing an explicit definition.
Many AI concept-learning programs learn only concepts that are explicitly defined in terms of
pre-existing concepts.
So all the concepts they learn are definable in terms of some initial primitive set of concepts.
They do not do “substantive” concept formation.
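A toy illustration of the explicit-definition sub-case (a minimal stand-in, not Winston's actual program, and all attribute names are invented): the learner guesses a conjunctive definition by intersecting the attribute descriptions of positive examples, so whatever it learns is, by construction, definable in terms of the primitive attributes it started with.

    # Each example is described using pre-existing primitive attribute concepts.
    positive_arches = [
        {"has_two_posts", "has_lintel", "posts_not_touching"},
        {"has_two_posts", "has_lintel", "posts_not_touching", "painted_red"},
        {"has_two_posts", "has_lintel", "posts_not_touching", "tall"},
    ]

    # 'Learning' = intersecting the descriptions: the outcome is an
    # explicit conjunctive definition, not substantive concept formation.
    definition = set.intersection(*positive_arches)

    def is_arch(description):
        return definition <= description   # all defining attributes present

    print("arch(x) iff", " and ".join(sorted(definition)))
    assert is_arch({"has_two_posts", "has_lintel", "posts_not_touching", "tall"})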
15. Concept empiricism seems irrefutable, at first
People are tempted by concept empiricism (or symbol grounding theory)
because they cannot imagine any way of coming to understand certain
notions except by experiencing instances.
E.g.
red, sweet, pain, pleasure, curved, bigger, straight, inside, ...
etc.
They then conclude that this is the only way of acquiring new concepts, apart from explicit
definition of new concepts in terms of old concepts.
16. What sort of theory is concept empiricism?
We have no introspective insight into the processes that produce new
concepts by abstraction from experiences.
People usually assume that we can somehow store something
unexplained that is abstracted in some unexplained way from the
individual experiences.
There are many implausible theories of meaning produced by philosophers and
psychologists who don’t know how to design working systems: e.g.
“We remember all the instances and use a similarity measure”;
“We store a prototype for each concept and compare it with new instances”,
etc., etc.
Trying to go from such a verbal description to a specification of a working system that can
be built and run often shows that the alleged explanation is not capable of explaining
anything, or that there are many different detailed mechanisms that loosely correspond to
the verbal description because it is too vague to select between them.
More people should learn how to use the design stance.
If you have first hand experience of designing (and debugging) systems that have to work, you may
(a) be able to think of a wider variety of possible explanations for observed phenomena,
(b) be better at detecting theories that could not possibly work,
(c) be better at detecting verbal theories (with or without accompanying diagrams) that are so intrinsically
vague that there are many very different working systems that would fit the description given.
17. Kant’s refutation of concept empiricism
Over two centuries ago, the philosopher Immanuel Kant refuted concept
empiricism
(in his Critique of Pure Reason, 1781).
His main argument (as I understand it) can be summarised thus:
It is impossible for all concepts to be derived ultimately from experience, since if you
have no concepts you cannot have experiences with sufficient internal structure to be
the basis of concept formation.
So some concepts must exist prior to experience.
He gives different arguments for some specific concepts that he thinks must be innate,
such as concepts of time and causation.
Let’s consider an example of a spatial concept.
18. An example: straightness
According to concept empiricism there are two ways to acquire a concept
such as straightness
Either
• It is an unanalysable concept learnt from experience of examples.
or
• It is a composite concept explicitly defined in terms of some previously understood
concepts.
Case 1: straightness is unanalysable.
The notion that you can use experiences of straightness to abstract the concept “straight” presupposes
that you can have spatial experiences (e.g. of straight lines, straight edges, straight rows of dots, etc.)
without having concepts — usually that presupposition is not analysed.
What makes something straight is having parts with certain relationships, so detecting what is common to
examples of straightness requires the ability to detect portions of what is seen and relationships between
them. This does not seem to be possible without using concepts.
Case 2: straightness is a definable composite concept
An objection would be that “straight” is a composite concept that has to be explicitly defined in terms of
simpler concepts, so it is only those simpler concepts (including concepts of relationships) that need to
be learnt from experience of instances.
For this to be taken seriously, an explanation is required of what those simpler unanalysable concepts are
and how they can be learnt and used without using concepts.
19. Two things wrong with Concept Empiricism
• The first flaw in concept empiricism was pointed out clearly by Kant as
explained above.
YOU CAN’T HAVE EXPERIENCES UNLESS YOU ALREADY HAVE CONCEPTS,
SO NOT ALL CONCEPTS CAN BE DERIVED FROM EXPERIENCES OF INSTANCES.
• The second flaw was that supporters of concept empiricism ignored
problems about new concepts introduced in the development of science.
These were discussed in the 20th century by philosophers of science.
They failed to think of an alternative way in which concepts can be developed, which is illustrated by
theoretical concepts in science.
20th Century philosophers of science showed that such theoretical concepts (e.g. in
physics) cannot be defined in terms of how instances are experienced, and cannot be
defined operationally (as proposed by P.W. Bridgman).
This included trying to explain how new undefined theoretical concepts could work.
• Concepts of unobserved entities and properties, i.e. theoretical concepts in scientific theories
(e.g. the distant past, the properties of matter that explain how matter behaves)
• Concepts of dispositions that are not manifested
(e.g. solubility, fragility, being poisonous, etc.)
There are also problems about how logical and mathematical concepts that do not refer to specific sorts of
experiences can be understood (e.g. “not”, “all”, “or”, “number”, “multiplication”), and concepts like
“efficient”, “useful”, “uncle”, “better”....
20. Symbol grounding theory and its alternatives
Harnad wrote in his 1990 paper:
“...there is really only one viable route from sense to symbols: from the ground up...”
We are presenting an alternative suggestion, based on 20th Century philosophy of
science, which provided an account of how theoretical concepts in science work (e.g.
‘electron’, ‘charge’, ‘ion’, ‘chemical valence’, ‘gene’, ‘inflation’):
• Many kinds of meaning are constructed in an abstract form, as part of a theory that
uses them:
new symbols may be added to a theory in theoretical statements that use them, and
the meanings are partly determined by the structure of the theory and the forms of
inference that can be made.
• This always leaves meanings partly indeterminate.
• They can be made more definite in two ways
1. by making the theory more complex, adding new constraints to the interpretations of undefined
symbols
2. by linking the theory to methods of observation, measurement and experiment.
I.e. existing, un-grounded meanings are made more definite as the theory using them
becomes “tethered” – but tethering does not define the symbols in the theory.
21. The KEY new idea: implicit definition by a theory
The key new idea in the alternative philosophy of science was developed
independently by Einstein and by philosophers of science
(see http://plato.stanford.edu/entries/einstein-philscience/):
If a new explanatory theory with predictive power introduces some new theoretical
concepts, those concepts can get most of their meaning from their role in the theory:
they are implicitly defined by the theory containing them.
To explain this we need to introduce the idea of a theory (including undefined symbols) as
determining a class of models.
This is not a new suggestion: the idea was developed by 20th Century philosophers of science.
I discussed it in Chapter 2 of Sloman 1978, and in these two papers:
What enables a machine to understand?, Proc 9th IJCAI, 1985
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#4
Reference without causal links Proc ECAI 1986
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#5
22. Why tethering is needed
A theory need not uniquely determine a model – so it may need to be
“tethered” by adding associations with observation and experiment.
(Thanks to Jackie Chappell for suggesting the word “tethering”.)
We shall shortly present an example of a theory, using a logical syntax, including
• conjunction,
• disjunction,
• negation,
• universal and existential quantification,
• application of predicates to arguments, and
• application of functions to arguments.
We show how elaborating the theory reduces the set of models, but something else is
needed to refer to a specific portion of the world.
23. Introducing new theoretical concepts
We need to explain how a formal theory, or set of axioms, can produce the
first, un-tethered set of meanings.
24. A Key Idea: A formal System
KEY IDEA
A formal system with a well defined syntax and rules of inference allows
a set of ‘axioms’, possibly including undefined predicate and function
symbols, to determine a class of possible interpretations (models).
The set of axioms gives the undefined symbols a (partially indeterminate) meaning.
Example on next slide: axioms for lines and points (projective geometry)
• A formal system specifies syntactic forms for constructing new meanings from old, and
forms of inference for deriving new conclusions from old hypotheses.
• We assume that those are familiar ideas in Logic and will use logically formulated
theories as our example.
• But other forms of syntax could also be used, e.g. circuit diagrams, maps, flow-charts,
chemical formulae, etc.
• We’ll define a formal system that includes undefined non-logical symbols that can be
used to specify a type of geometry.
To illustrate this, we shall use familiar words, e.g. ‘line’, ‘point’, ‘intersection’, and others,
but will pretend that they currently have no meaning except the meaning implicitly defined
by their use in a formal theory.
But we will assume the normal meanings of logical operators.
25. A formal system concerning lines and points
The following are typical axioms used to specify a type of geometry:
UNDEFINED SYMBOLS (with mutually defined types).
Line [predicate]
Point [predicate]
Intersection [function of two lines, returns a point]
Line-joining [function of two points, returns a line]
Is-on [binary relation between a point and a line]
Identical [binary relation]
AXIOMS (A partial set)
If L1 and L2 are Lines and not Identical(L1, L2) then
Intersection(L1, L2) is a Point, call it P12
Is-on(P12 , L1)
Is-on(P12 , L2)
If Point(P) and Is-on(P, L1) and Is-on(P, L2) then Identical(P, P12 )
If Point(P1) and Point(P2) and not Identical(P1, P2) then
Line-joining(P1, P2) is a Line, call it L12
Is-on(P1, L12 )
Is-on(P2, L12 )
If Line(L) and Is-on(P1, L) and Is-on(P2, L) then Identical(L, L12 )
This is actually a specification of a class of theories, all of which contain the above axioms but are more
specific because they add additional axioms.
We’ll give an example later.
26. The theory in English
Any two distinct lines have a unique point of intersection, and any two
distinct points have a unique line containing both.
(Standard projective geometry drops the qualification ‘distinct’.)
All of this can be expressed precisely in Predicate Calculus.
Exercise
You may find this theorem easy to prove:
THEOREM
If L1 and L2 are lines, then
Identical( Intersection(L1, L2), Intersection(L2, L1) )
In English: the intersection of L1 with L2 is the same point as the intersection of L2
with L1.
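For readers who want the formal versions: one possible predicate-calculus rendering of the two axioms and the theorem, in LaTeX notation (a sketch: '\exists!' abbreviates the usual "there is exactly one" expansion, and the hyphenated symbol Is-on is written IsOn):

    \forall L_1 \forall L_2\,
      [\mathit{Line}(L_1) \land \mathit{Line}(L_2) \land \lnot\mathit{Identical}(L_1,L_2)
        \rightarrow \exists! P\, (\mathit{Point}(P) \land \mathit{IsOn}(P,L_1) \land \mathit{IsOn}(P,L_2))]

    \forall P_1 \forall P_2\,
      [\mathit{Point}(P_1) \land \mathit{Point}(P_2) \land \lnot\mathit{Identical}(P_1,P_2)
        \rightarrow \exists! L\, (\mathit{Line}(L) \land \mathit{IsOn}(P_1,L) \land \mathit{IsOn}(P_2,L))]

    % The theorem, with a distinctness guard added here, since
    % Intersection is only specified for distinct lines:
    \forall L_1 \forall L_2\,
      [\mathit{Line}(L_1) \land \mathit{Line}(L_2) \land \lnot\mathit{Identical}(L_1,L_2)
        \rightarrow \mathit{Identical}(\mathit{Intersection}(L_1,L_2),\, \mathit{Intersection}(L_2,L_1))]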
27. A specific projective geometry and its models
Here is an axiomatic specification of a structure with six components,
whose axioms are the axioms given previously, plus the following:
SUPPLEMENTARY AXIOMS
P1, P2, P3 are of type Point.
L1, L2, L3 are of type Line.
There are no other lines or points.
Identical(Line-joining(P1, P2), L1)
Identical(Line-joining(P2, P3), L2)
Identical(Line-joining(P3, P1), L3)
Identical(Intersection(L3, L1), P1)
Identical(Intersection(L1, L2), P2)
Identical(Intersection(L2, L3), P3)
It should not be too difficult to produce a drawing containing three blobs and three lines
satisfying those axioms, according to the ‘obvious’ interpretation of the undefined
symbols in the axioms: That drawing will be a model for the extended axiom set.
It is worth noting that you can find another model (called the dual model) by treating what
we call lines in the drawing as being of type Point and what we call blobs as being of type
Line and providing novel interpretations for Intersection, Line-joining, Is-on
Try it!
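Since the axioms involve only finite checks over a finite structure, the exercise can also be done mechanically. Here is a sketch in Python (names invented here) that checks the two incidence axioms in their "exactly one" form, with the functions Intersection and Line-joining represented implicitly by the unique common point or line, and verifies both the triangle model and its dual:

    from itertools import combinations

    def is_model(points, lines, is_on):
        """Check the two incidence axioms over a finite structure.
        is_on is a set of (point, line) pairs."""
        # Any two distinct lines have exactly one common point:
        for l1, l2 in combinations(lines, 2):
            common = [p for p in points
                      if (p, l1) in is_on and (p, l2) in is_on]
            if len(common) != 1:
                return False
        # Any two distinct points have exactly one line joining them:
        for p1, p2 in combinations(points, 2):
            joining = [l for l in lines
                       if (p1, l) in is_on and (p2, l) in is_on]
            if len(joining) != 1:
                return False
        return True

    # The three-blob, three-line drawing, as a finite structure:
    points = ["P1", "P2", "P3"]
    lines  = ["L1", "L2", "L3"]
    is_on  = {("P1", "L1"), ("P2", "L1"),   # L1 joins P1, P2
              ("P2", "L2"), ("P3", "L2"),   # L2 joins P2, P3
              ("P3", "L3"), ("P1", "L3")}   # L3 joins P3, P1
    assert is_model(points, lines, is_on)

    # The dual model: swap the roles of points and lines.
    assert is_model(lines, points, {(l, p) for (p, l) in is_on})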
28. Two slightly more complex models
Here are two slightly more complex models of the original axioms, each of
which contains seven things we can call points and seven things we can
call lines, with obvious interpretations of Intersection and Line-joining, etc.,
that make all the axioms come out true: Check them out!
Consider carefully what you mean by ‘Point’, by ‘Line’, by ‘Intersection’, by ‘Is-on’, etc. in
each case.
In each case if you swap what you mean by ‘Point’ and by ‘Line’, and re-interpret
‘Intersection’, ‘Line-joining’ and ‘Is-on’ you can still make all the axioms come out true.
This is known as the duality of lines and points in projective geometry: anything that is a model for the
axioms can be turned into another model by swapping the two sets taken as interpretations for ‘Line’ and
‘Point’, and changing the interpretations of the functions and relations appropriately.
NB: IF WE ADDED ANOTHER AXIOM, E.G. STATING THAT EVERY POINT IS-ON FOUR DISTINCT LINES, NEITHER
OF THESE WOULD BE A MODEL: ADDING AXIOMS CAN REDUCE THE SET OF MODELS.
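The original pictures are not reproduced in this transcript, but a standard example of a seven-point, seven-line model is the Fano plane. The sketch below lists its incidence structure, checks both axioms, and confirms the NB above: every point here is on exactly three lines, so an added axiom requiring every point to be on four distinct lines would exclude this model.

    from itertools import combinations

    # The Fano plane: points 1..7, each line given as the set of points on it.
    points = range(1, 8)
    fano_lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]

    # Any two distinct points are on exactly one common line:
    assert all(sum(1 for ln in fano_lines if {p, q} <= ln) == 1
               for p, q in combinations(points, 2))
    # Any two distinct lines share exactly one point:
    assert all(len(a & b) == 1 for a, b in combinations(fano_lines, 2))
    # Every point is on exactly 3 lines, so an added axiom demanding 4
    # would eliminate this model: adding axioms reduces the models.
    assert all(sum(1 for ln in fano_lines if p in ln) == 3 for p in points)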
29. Some non-models of the axioms
On the right is a picture which, if interpreted in the obvious way,
does not constitute a model of the axioms presented previously.
For example, it is not true that for every pair of
points there is a line joining them, and it is not true
that for every pair of lines there is a point that is on
both of them.
One point is on no lines, and one line has no points on it.
There are many more non-models of the Axioms.
For example, if you have a set of bowls of fruit on a table, you could try mapping ‘Line’ to the set of bowls,
and ‘Point’ to the set of pieces of fruit, and ‘Is-on’ to being inside.
But the axioms will not be satisfied, since it is not true that any two “lines” contain a common “point” and it
is not true that every pair of “points” determines a “line” they are both “on”.
30. Consistency
If a set of axioms is inconsistent, there will be no models.
If the set is consistent, but the conjunction of axioms does not form a
tautology (like (P OR NOT-P) AND (Q OR NOT-Q)), then there will be some
models and some non-models.
It is possible to invent hypothetical models, e.g. a society in which people are points and
social clubs are lines, and any two social clubs have a unique common member, and any
two people are members of a unique common social club, etc.
Whether a particular social system is or is not a model of the axioms then becomes an empirical question.
Adding extra axioms, e.g. about properties of points and lines and how they are related,
might rule out sociological models,
e.g. by requiring there to be infinitely many points on each line.
31. Additional axioms for geometry and motion
NOTE:
The axioms presented above do not fully define projective geometry as studied in mathematics.
The full specification of projective geometry requires additional axioms, e.g. axioms specifying the
minimum number of points on a line, and the minimum number of lines through each point.
Further axioms can be added, for instance constraining the points on a line to be totally ordered.
Additional axioms referring to distances and angles are needed to produce a set of
axioms for Euclidean geometry.
THE AXIOMS CONSIDERED SO FAR DO NOT MENTION TIME.
• We could add undefined symbols, such as Time-instant, or Time-interval, Particle and At and add
axioms allowing particles to be at different points at different times and to be on different lines in
different time-intervals.
• Then cars or people moving on roads or trains on railway tracks could be models of the axioms.
• If we added undefined symbols expressing Euclidean notions of distance and direction, and added
axioms expressing constraints on how locations, distances, rates of change of distance, etc. can vary,
then we could use the axioms to make predictions about particular models.
THAT WOULD REQUIRE US TO ADD SOME LINKS BETWEEN THE THEORY AND METHODS
OF MEASUREMENT AND OBSERVATION.
(The theory would need to be ‘tethered’.)
32. Model-theoretic semantics
The idea of a set of axioms having models goes back a long way in
mathematics:
Descartes showed that Euclid’s axioms for geometry have an arithmetical/algebraic
model.
Later it was shown that axioms for the set of positive and negative integers could be
satisfied by a model in which integers are represented by pairs of natural numbers (from
the set 0, 1, 2, 3,....).
It was also shown that axioms for rational numbers could be satisfied by a model in
which every rational number (e.g. 5/8, -16/19) is represented by a pair of integers.
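A minimal executable sketch of the integers-as-pairs construction (function names invented for illustration): each integer is modelled by a pair (a, b) of natural numbers, read as "a minus b", with equality and the operations defined on the pairs.

    # An integer is modelled as a pair (a, b) of naturals, read as a - b.
    def eq(x, y):                       # (a,b) = (c,d)  iff  a + d == c + b
        return x[0] + y[1] == y[0] + x[1]

    def add(x, y):                      # (a,b) + (c,d) = (a+c, b+d)
        return (x[0] + y[0], x[1] + y[1])

    def neg(x):                         # -(a,b) = (b,a)
        return (x[1], x[0])

    minus_two = (0, 2)
    five = (7, 2)                       # many pairs model the same integer
    assert eq(add(minus_two, five), (3, 0))    # -2 + 5 = 3
    assert eq(add(five, neg(five)), (0, 0))    # 5 + (-5) = 0

The axioms for integers come out true under this interpretation, even though no single natural number plays the role of an integer: (7, 2) and (5, 0) are different representations of the same integer.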
In the 20th century this notion of model was made mathematically precise, mainly through the
work of Tarski, though others contributed, including Hilbert, Gödel and Montague.
33. Meaning postulates/Bridging rules
Carnap and others contributed the notion of adding a set of “meaning
postulates” or “bridging rules” to a set of axioms: the additions did not
define any of the undefined symbols but specified extensions to the
axioms which linked them to previously understood notions, e.g. to
methods of measurement or experimentation.
This made it possible to draw conclusions like:
• If experiment E occurs then some statements S1, S2, ... expressed using the undefined terms will be true
• If statements S1, S2, ... using undefined terms are true, then some measurement procedure will produce
result M.
This enabled theories using undefined terms to be used in making predictions, forming
explanations, and deriving tests for the theory.
For example, a physical theory referring to the temperature of objects was tethered by bridging rules
specifying ways of measuring temperature.
However those rules do not define the concept “temperature” because later physicists can discover more
reliable ways of measuring it (more reliable ways of tethering the theory).
The bridging rules do not remove all the ambiguity in the semantics of the set of axioms,
but do reduce the ambiguity: the theory does not float quite so free — it is tethered.
Tethering is a way to ensure that the theory’s models include a particular bit of the real
world and exclude other possible worlds or other parts of this world.
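The role of bridging rules can be caricatured in executable form (everything below, names and numbers alike, is invented purely for illustration): inside the theory, "temperature" is an undefined term that the axioms merely constrain; bridging rules tether it to measurement procedures, and a procedure can later be replaced by a more reliable one without redefining the concept.

    # Inside the theory, 'temperature' is an undefined theoretical term:
    # the theory only constrains it, e.g. heat flows from hotter to cooler.
    def heat_flows_from_to(temperature_of, a, b):
        return temperature_of(a) > temperature_of(b)

    # Bridging rules tether the undefined term to measurement procedures:
    def mercury_thermometer(obj):            # an early, rougher tethering
        return obj["mercury_column_mm"] / 2.0

    def platinum_thermometer(obj):           # a later, more reliable tethering
        return obj["resistance_ohms"] * 1.7

    kettle = {"mercury_column_mm": 180, "resistance_ohms": 55}
    ice    = {"mercury_column_mm": 2,   "resistance_ohms": 11}

    # The same theoretical claim can be tested through either tethering:
    assert heat_flows_from_to(mercury_thermometer, kettle, ice)
    assert heat_flows_from_to(platinum_thermometer, kettle, ice)

Swapping one thermometer for the other changes how the theory is tethered, not what "temperature" means: most of the meaning stays with the theory.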
34. Structure-determined semantics
• The previous slides show that a set of axioms, simply in virtue of its logical structure,
if it is internally consistent, allows some structures in the universe to be models and
rules out other structures as non-models.
• To the extent that there is a set of models the set of axioms determines a set of
possible meanings for all the undefined terms in the axioms.
• The multiplicity of models makes the “theoretical terms” ambiguous, but the ambiguity
can be continually reduced by adding more and more axioms, since adding an axiom
that is independent of the others (not derivable from them) reduces the set of possible
models, by eliminating models that do not make that axiom true (a toy demonstration
follows this list).
• Another way of reducing the set of models is by linking one or more of the undefined
terms to some concept that already has a meaning (e.g. when we linked ‘Line’ to our
familiar concept of line) or by using Carnapian meaning postulates or bridging rules.
• A major thesis of 20th Century philosophy of science associated with philosophers like
Hempel, Carnap, Tarski, and Popper, and independently arrived at by Einstein, is:
Theoretical terms of science (e.g. “gene”, “electron”, “valence”, “atomic-weight”, and many more) are
partly defined by their role in theories, and partly defined by “tethering” the theories to particular
modes of observation and experiment – not by “grounding” all of the symbols in experience:
Most of the meaning comes from the theories.
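The toy demonstration promised above: over a three-element domain, an interpretation of one undefined predicate P is just a subset of the domain, so we can count the interpretations that satisfy each axiom set and watch an independent axiom shrink the set of models (the axioms themselves are invented for the demonstration).

    from itertools import combinations

    domain = [0, 1, 2]
    # An interpretation of the undefined predicate P is a subset of the domain.
    interpretations = [set(c) for r in range(len(domain) + 1)
                       for c in combinations(domain, r)]

    axiom1 = lambda P: len(P) >= 1               # "something is P"
    axiom2 = lambda P: len(P) < len(domain)      # "something is not P"

    models1  = [P for P in interpretations if axiom1(P)]
    models12 = [P for P in models1 if axiom2(P)]
    print(len(interpretations), len(models1), len(models12))   # 8, 7, 6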
35. Processes can also be models
So far, most of the models of sets of axioms have been static structures,
but there is no reason why a process, including events and causal
interactions, should not be a model of a theory.
(E.g. if the theory includes spaces and times.)
In a process, things can have different properties and relations at
different times.
A historical narrative is a formal system, which, if true, has some sequence or partially
ordered collection of actual events as a model (and perhaps other sequences like that
one, in different places at different times), whereas stories that are about invented
characters and events are capable of having similar models but may not actually have
one in the world as it is and has been.
Typically stories are told on the assumption, usually not made explicit, that characters
are constrained by some laws of human behaviour that are not made explicit and that
other things are constrained by laws of physics and chemistry, though science-fiction
stories often alter the constraints.
36. Newtonian Mechanics
In Newtonian mechanics there are notions of entities having mass, being
acted on by forces, and having locations, velocities and accelerations,
where those features can change, but the changes are constrained by
“laws of motion”.
The theory of Newtonian mechanics has very many types of models of
varying complexity.
For example, one type of model is a single particle, with no forces acting on it, moving in
a straight line at constant speed through an otherwise empty universe.
Another type of model of Newtonian mechanics is a collection of particles one of which
has a mass that is much greater than all the others, and they all have motions
constrained by mutual gravitational attraction consistent with Newton’s laws: our solar
system is approximately a model of this type.
37. Using a process theory to predict
Adding new constraints to a theory restricts the class of processes that
model it, so that if a particular observed process P is a model of a theory
T, then T can be used for predicting what will happen in P (thereby testing
the theory by using it).
If you have a theory about kinds of stuff out of which
objects can be made that are rigid and impenetrable,
then you can combine that with a theory about shapes in
a Euclidean geometry and possible motions in which
parts of shapes move while others are fixed, like a wheel
pivoted at its centre.
That theory of moving, rigid, impenetrable shapes has
very many models, one of which could be a pair of
meshed gear wheels each centrally pivoted.
In this case, using the constraints of the theory you can work out that if the left wheel rotates clockwise
the right wheel must rotate anti-clockwise – and vice versa. (How?)
The theoretical concepts used in the theory, like Rigid and Impenetrable, are NOT
definable in their full generality in terms of what you can sense (e.g. an invisible object
with a shape could be rigid), but their roles in the theory, and their usability in making
predictions give them meaning, for a user of the theory able to manipulate forms of
representation corresponding to possible models.
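The gear-wheel prediction can itself be compressed into a tiny rule (a sketch: the reversal rule is taken as given here, derived informally from rigidity and impenetrability at the meshing teeth, not from the code): at each meshing contact the sense of rotation is reversed, so direction alternates along a gear train.

    def rotation_after(first_direction, n_meshes):
        """Sense of rotation after n meshing contacts in a gear train:
        each mesh between rigid, impenetrable wheels reverses it.
        +1 = clockwise, -1 = anticlockwise."""
        return first_direction * (-1) ** n_meshes

    left_wheel = +1                              # driven clockwise
    assert rotation_after(left_wheel, 1) == -1   # the meshed right wheel
    assert rotation_after(left_wheel, 2) == +1   # a third wheel in the train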
38. Using a theory to explain
The theory can also be used to explain observed processes.
Suppose part of the configuration is covered, and it is
observed that when the teeth visible on the left are
moved down, the teeth on the right also move down,
and reverse their motion when the teeth on the left are
moved up.
Building a hypothetical model of the theory, including
portions not observed, can allow the observed
phenomena to be explained by being predicted using
the hypothesised explanatory facts and geometrical
reasoning.
Children seem to be able to develop and use such explanatory and predictive theories.
(They use two kinds of causation: Kantian and Humean. See
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#wonac )
Note
This and the previous diagram are intended to draw attention to the fact that humans and possibly some
other animals are capable of using non-verbal, non-logical, non-Fregean forms of representation for
reasoning with, as claimed in Sloman (1971).
39. Elaborations of Model-based Semantics
Note for philosophers and logicians.
Previous slides can be summarised as proposing that semantics can be “model-based
and structure-determined”.
In his “Two dogmas of empiricism”, W.V.O. Quine famously produced a description of a scientific theory as a
sort of cloud whose centre has little contact with reality, and which can only be tested empirically at the
fringes. http://www.ditext.com/quine/quine.html (See Section VI.)
This is a suggestive exaggeration.
F.P. Ramsey pointed out that there is a way of removing undefined symbols by treating them as variables
bound by existential quantifiers (using a higher order logic).
In this way a theory expressed in predicate calculus can be transformed into a single complex, existentially
quantified, sentence: a Ramsey Sentence.
Discussion of Ramsey’s idea is beyond the scope of this presentation.
For more on Ramsey, Carnap and others see the lecture by Michael Friedman ‘Carnap on Theoretical
Terms: Structuralism without Metaphysics’, included as part of this workshop:
http://www.phil-fak.uni-duesseldorf.de/en/fff/workshops/tfeu/
Theoretical Frameworks and Empirical Underdetermination Workshop,
University of Düsseldorf, April 2008.
If I am right in claiming that the substantive ontology extension achieved by pre-verbal children and some
other animals, as part of exploring how the world works, has much in common with the introduction of
theoretical terms by scientists, then, since those children and other animals are not using human languages
or anything like predicate calculus, they must have other formalisms (e.g. formalisms that may be more like
pictures than like sentences?) capable of expressing meanings that are partly implicitly defined by the
structures used. Pictures of meshed gears may be examples. See the slide below: Beyond logical (Fregean) formalisms
40. Symbol grounding vs Symbol tethering pictured
The distinction we are making can be represented crudely using this diagram.
On the left every symbol gets its meaning from below (i.e. built up from sensory, or
sensory-motor, information). On the right much of the meaning of a symbol comes from
its role in a rich theory, expressed in a formalism that allows conclusions to be drawn in a
formal way. Tethering merely adds further constraints that help to pin the meaning down.
Note that as a theory develops over time, new modes of experimentation and measurement may be
developed. But the same concepts may continue to be used, even though they are linked to observation
and experiment in new ways, usually making possible far more precise and reliable measurements and
testing procedures than previously.
The idea that the concept (e.g. ‘charge of an electron’) is preserved across such changes would make no
sense if the concept were defined by its links to observation and measurement, since they keep changing.
See also: L.J. Cohen, 1962, The Diversity of Meaning, Methuen & Co Ltd.
41. Beyond logical (Fregean) formalisms
Formal work on model-based semantics normally considers theories
expressed using logical formalisms, or more generally Fregean formalisms
(i.e. those using syntax based on application of functions to arguments).
A distinction was made between Fregean and analogical representations in 1971:
http://www.cs.bham.ac.uk/research/projects/cogaff/04.html#analogical
arguing that analogical representations can also support valid forms of reasoning, despite
using diagrams, pictures and models instead of logical formalisms.
For example, diagrams are often used to work out the effects of lenses, mirrors and prisms on the
propagation of light, and chemical formulae can be regarded as diagrams of a sort, used in predicting
results of or constraints on chemical reactions.
So the use of theories in constraining meanings of undefined terms is not limited to the
use of symbols in logical and algebraic expressions.
However the theory of how such non-logical formalisms can constrain models is not so
well developed, despite the wide-spread use of circuit diagrams, flow-charts, maps and
many other non-logical notations in science and engineering.
At this stage it is not at all clear what sorts of formalisms are available to infant humans
and other animals that are generating new theories about the contents of the
environment, and extending their ontology, as they play and explore and try to cope with
surprises, by bootstrapping their knowledge about the world: but they probably don’t use
predicate calculus, e.g. in coming to understand gear wheels.
42. Theories about non-physical processes
When the environment contains information-processing systems, e.g.
people, theories about information-processing mechanisms are useful.
Typically, what is important in such theories is what information is available, how it is
acquired, transformed, stored, interpreted, used, etc., as explained in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/whats-information.html
It is often useful to describe those processes independently of the physical mechanisms
that make them happen, i.e. by describing a virtual machine.
We do this when we specify computer software, or describe what’s going on in running software systems
(e.g. operating systems, compilers, spelling checkers, email systems, databases, games, etc.)
We also do this when we describe ourselves, or other people as having beliefs, desires, preferences,
skills, etc., and when we consider processes such as decision making, plan formation, becoming puzzled,
noticing things, etc.
Humans have an innate self-extending mechanism for generating theories about virtual
machines in other agents (evolution ‘invented’ the idea of virtual machines before we did).
At first the theories are very impoverished, but various kinds of observation of self and others, and social
interactions seem to drive a process whereby theories of mind are extended with new theoretical
concepts and new constraints about how things work (sometimes false ones).
Concepts of beliefs, percepts, intentions, attitudes, skills, etc. cannot be “grounded” in
sensory or motor signals: they refer to virtual machine states and processes.
Mental concepts, like physical, chemical, biological concepts, get most of their semantics
from the theories in which they are used, which develop slowly over several years.
43. Reducing semantic indeterminacy
An architecture that builds and manipulates structures may interpret
undefined “primitive” symbols (predicates, relations, functions, etc.) as
somehow referring to unobservable components of reality.
Perhaps Kant’s “things in themselves”?
However, as indicated in previous sections the initial construction may not suffice to
specify with complete precision what is referred to.
The precision can be increased, indeterminacy decreased, in various ways.
Both adding more axioms and adding new attachment points can reduce the semantic
indeterminacy of the “undefined” symbols.
We need much research to investigate the many ways in which this can work, within
different sorts of architectures with different sorts of mechanisms, e.g. logical
mechanisms, self-organising neural mechanisms of various sorts, etc.
The strengths and weaknesses of a wide range of mechanisms and architectures need to
be understood.
We shall then be in a far better position to propose good theories of concept formation in
children instead of assuming infant learners are hamstrung by the need for all their
internal symbols to be “grounded” by being defined in terms of sensory and motor signals.
In particular, it will be interesting to see what difference it makes if the architecture
includes a meta-management (reflective, self-referring) component.
(As in recent work of Minsky and myself.)
Models and tethers Slide 43 June 17, 2008
44. Making these ideas work
So far, the presentation has been purely theoretical, showing how in an
abstract sense axioms can constrain possible models, by excluding some
portions of the universe, or some mathematical structures that do not
satisfy the axioms (under specific interpretations of the undefined
symbols) and including others.
A major task remaining is to show how a developing animal or robot can make use of that
abstract possibility in order to simultaneously extend its ontology (its set of concepts
describing things that can exist) and its theories that make use of that ontology, in
perceiving, describing, explaining, predicting, planning, and producing states of affairs in
the world.
This will require significant extensions of robot designs beyond the current technology, which typically
assumes all concepts that are used in the contents of percepts, thoughts, goals, etc. are tied to sensory
mechanisms, or sensory-motor control subsystems.
This presentation says nothing about how to proceed beyond such designs. I have some ideas but will
expand on them in a separate document.
A robot should not be constrained to do all of its theorising using logic: we need to
explore a host of forms of representations and mechanisms for constructing,
transforming, and deriving representations, with different properties.
An attempt to do this that has not been widely noticed is presented in Arnold Trehub’s book The Cognitive Brain, MIT Press,
1991, online here: http://www.people.umass.edu/trehub/
Models and tethers Slide 44 June 17, 2008
45. An exception: Map-making robots
It is worth noting that an exception in the current state of the art is the
representation of rooms, walls, corridors and doors in SLAM
(simultaneous localisation and mapping) systems.
• The robots involved in those systems typically represent the layout using a map-like
structure (with topological and possibly also metrical properties) whose ontology is
exo-somatic (i.e. refers to things that exist independently of any sensor and motor
signals or other internal states and processes of the agent) and in particular is not
defined in terms of somatic sensory-motor relationships.
• However the map-like structure can be linked by suitable perceptual, learning,
planning, and plan-execution mechanisms to sensory and motor signals – a form of
tethering.
• Such tethering plays a role in growing or modifying the maps and also enables the
maps to be used for predicting consequences of movements and consequences of
change of gaze direction, and for planning routes and controlling actions when
executing plans.
• The sort of representation and ontology typically used in such SLAM systems does not
satisfy the strict requirement for symbol-grounding, since the representation constructed
and used is neutral as to the particular sensory and motor signals of the user.
• For example the same map-structure could in principle be shared by two mobile
robots with different sensory and motor sub-systems, e.g. one with TV cameras and
wheels, and the other with laser sensors and legs.
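A minimal sketch (invented class and method names, not any actual SLAM library) of the
point made in the last bullet: the map structure mentions only things in the world, never
sensors, and each robot supplies its own tether from its particular sensors to the shared
structure.

    # An exo-somatic map: it mentions rooms and doors, never sensors.
    class TopologicalMap:
        def __init__(self):
            self.edges = {}          # room -> set of adjacent rooms

        def add_door(self, room1, room2):
            self.edges.setdefault(room1, set()).add(room2)
            self.edges.setdefault(room2, set()).add(room1)

        def neighbours(self, room):
            return self.edges.get(room, set())

    # Two robots with different bodies tether to the SAME map.
    class CameraWheelRobot:
        def __init__(self, shared_map):
            self.map = shared_map
        def observe(self, image_frame):
            # Vision processing would go here; its results update the
            # map in world terms (rooms, doors), not in pixel terms.
            pass

    class LaserLegRobot:
        def __init__(self, shared_map):
            self.map = shared_map
        def observe(self, laser_scan):
            # Different sensing, same exo-somatic ontology.
            pass

    m = TopologicalMap()
    m.add_door("hall", "kitchen")
    r1, r2 = CameraWheelRobot(m), LaserLegRobot(m)
    print(m.neighbours("hall"))   # {'kitchen'} -- meaningful to both robots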
Models and tethers Slide 45 June 17, 2008
46. What do we need to explain?
Besides demonstrating ways in which “ungrounded”, but suitably “tethered”
forms of representation can be used in intelligent machines we also need
to explain how they could come to be used in biological organisms.
This has at least three distinct aspects:
• Explaining the functional need for mechanisms using such information structures,
E.g. identifying features of a niche that determine requirements that such mechanisms can meet:
– Providing innate conceptual apparatus for perceiving and learning (Kant)
– Making predictions at a high level of abstraction
– Formulating goals and plans
– Recording re-usable information about the environment, including spatially or spatiotemporally
specific information and useful generalisations
• Explaining how such mechanisms and forms of representation could evolve in species
that do not yet have them.
E.g. studying trajectories in niche space and in design space.
• Explaining how such mechanisms and forms of representation could arise during
learning and development in individuals.
Including studying both the features of the environment that provide a need and opportunities for
such changes, and the mechanisms in individuals that make such changes possible.
Giovanni Pezzulo & C. Castelfranchi, “The Symbol Detachment Problem”, Cognitive Processing,
8(2), June 2007, pp. 115-131. http://www.springerlink.com/content/t0346w768380j638/
Clayton T. Morrison, Tim Oates & Gary King, “Grounding the Unobservable in the Observable: The Role
and Representation of Hidden State in Concept Formation and Refinement”, AAAI Spring Symposium
2001. http://eksl.isi.edu/files/papers/clayton 2001 1141169568.pdf
Models and tethers Slide 46 June 17, 2008
47. Kant’s refutation – continued
Kant argued, as indicated above, that there must be innate, a priori
concepts,
– not derived from experience,
– though possibly somehow “awakened by” experience.
He thought the operation of concepts was a deep mystery, and gave no explanation of
where apriori concepts might come from, apart from arguing that they are necessary for
the existence of minds as we know them.
The now obvious option, that apriori concepts are products of evolution, was not available
to Kant.
HAD HE LIVED NOW, HE WOULD HAVE BEEN DOING AI.
Other philosophers tend to treat “having an experience” as a sort of unanalysable,
self-explanatory notion. From our point of view, and Kant’s, it is a very complex state,
more like a collection of processes, along with a large collection of dispositionally
available additional processes, all presupposing a rich information-processing
architecture, about which we as yet know very little.
Models and tethers Slide 47 June 17, 2008
48. Conjecture
Evolution produced “meaning-manipulators”, i.e. information-processing
systems (organisms) with very abstract and general mechanisms for
constructing and manipulating meanings.
These mechanisms are deployed in the development of various specific types of meaning,
some provided innately (e.g. for perceptual and other skills required at birth) and some
constructed by interacting with the environment. (Some species have only the former.)
These mechanisms provide support for the organism’s ontology: the collection of types of
things supposed to exist in its environment and the types of processes that can occur,
etc., including actions.
Specific contents for some of the generic ontologies can be provided by linking to sensors
and motors. But even without that there is a generic grasp of the meaning insofar as the
structure of a family of related concepts is grasped.
Meanings are primarily specified by formalisms used to express them and
mechanisms that manipulate the formalisms: links to reality added through
transducers merely help to refine partially indeterminate meanings.
Note that much creative story-telling referring to non-existent things would be impossible if we were
restricted to using only concepts abstracted from sensed things in the environment.
All this is a refinement of the theory in two old papers available at
http://www.cs.bham.ac.uk/research/projects/cogaff/
• IJCAI-85 “What enables a machine to understand?”
• ECAI-86 “Reference without causal links”.
Models and tethers Slide 48 June 17, 2008
49. Old problems for concept empiricism:
Logic and Mathematics
How could logical concepts be based on abstraction from instances? What would
experiencing instances of “not” and “or” and “if” be like?
(At least one philosopher thought understanding “or” might be based on experiences of hesitancy.)
Concepts of number were more debatable: Mill thought they all came from experience;
Frege thought they could be defined using pure logic.
(His attempted “proof” fell foul of Russell’s paradox.)
Perhaps they are both right and number-concepts are multi-faceted concepts?
See chapter 8 of The Computer Revolution in Philosophy.
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
What about our concept of infinity? Can any instances of infinity be experienced?
Can we define it adequately in terms of a concept of negation and “being bounded”?
Does that suffice to support our thinking about transfinite ordinals (e.g. arranging the odd
integers after the even ones)?
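For readers who have not met the example, it can be stated precisely: take the natural
numbers, but order all the even numbers first (in their usual order) and all the odd
numbers after them:

    0, 2, 4, 6, ...  followed by  1, 3, 5, 7, ...

The same set is now well-ordered with order type ω + ω, strictly greater than ω, the
order type of the usual ordering. Nothing in sense experience corresponds to completing
one infinite sequence and then starting another; the concept gets its content from the
structure of the theory of ordinals.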
What about concepts of euclidean geometry: infinitely thin, infinitely straight, infinitely
long lines? Think of the continuum.
Many mathematical concepts are purely structural concepts, capable of being applied to
wide ranges of portions of reality, e.g. the concept of a group, a permutation, a function,
an ordering.
Could mere experience of instances generate the apparatus required for grasping and
using such generally applicable concepts?
Models and tethers Slide 49 June 17, 2008
50. More problems for concept empiricism:
Concepts of theoretical science
Deep scientific theories refer to things that cannot be experienced, e.g.
gravitational fields, electrons, nuclear forces, genes, ....
Some philosophers tried to argue that the concepts of theoretical science can all be
defined in terms of observables (e.g. things that can be measured, where the measuring
process is experienced). All attempts to define theoretical concepts like this broke down,
e.g. because the modes of measurement seemed not to be definitional – they could be
rejected and replaced without the concepts changing.
Example: Bridgman’s “operationalism” (1927)
There was also the question about the existence of theoretical entities, states, processes
while they were not being measured or observed.
Dispositional properties (solubility, fragility, rigidity, etc.) also raised problems.
How can we understand the notion of objects having dispositions (e.g. solubility) and
capabilities (e.g. strength) while they are not displaying them, so that they are not
experienced?
How can we understand the concept of existence of something unexperienced (the tree in
the quad when nobody’s there)?
(Some philosophers said we don’t.)
Models and tethers Slide 50 June 17, 2008
51. Counterfactual conditionals
Various attempts were made to cope with this via definitions in terms of
large collections of counter-factual conditionals.
“There is an electron with properties P, Q, R, ...”
means
“if you do such and such experiments then you will observe such and such results...”
etc.
But no finite collection of such conditionals seemed capable of exhausting any such
concept, e.g. because new (and better) experimental tests could be discovered, and old
tests rejected, and that process seems able to go on indefinitely.
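The difficulty can be made precise with a standard textbook illustration (the example is
mine, not from the original slide): compare a naive definitional conditional for
solubility with Carnap’s weaker “reduction sentence”:

    \forall x\, (\mathrm{Soluble}(x) \leftrightarrow (\mathrm{InWater}(x) \rightarrow \mathrm{Dissolves}(x)))    [naive definition]
    \forall x\, (\mathrm{InWater}(x) \rightarrow (\mathrm{Soluble}(x) \leftrightarrow \mathrm{Dissolves}(x)))    [reduction sentence]

Read with the material conditional, the first makes anything never placed in water
trivially soluble. The second avoids that, but at a price: it says nothing about objects
never placed in water, so the concept is left partly indeterminate, exactly the situation
of partially determinate meanings described in earlier sections.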
Some philosophers (phenomenalists, e.g. Hume) tried this for all concepts referring to
things that exist independently of us, since they can have properties that are not
manifested in our experience, if we are asleep or not looking at them, etc.
(Objects as permanent possibilities of experience.)
How can you have an experience-grounded concept of something that exists while you
are not experiencing it?
Or of an unexperienced possibility. Can you experience that?
The theory that God experiences all those things continuously, including the tree in the
empty quad, does not help, especially for atheist philosophers.
Also where does the concept of God come from? (Software bugs in minds)
Models and tethers Slide 51 June 17, 2008
52. Meaning postulates
As reported above, Rudolf Carnap introduced the idea of ‘meaning
postulates’.
Search for that phrase in this document:
http://www.utm.edu/research/iep/c/carnap.htm
Or read his paper on meaning postulates in his 1947 book: Meaning and Necessity.
Carnap’s primary motivation for introducing meaning postulates was to explain how
dispositional concepts and theoretical concepts in the advanced sciences can be
understood, but the idea is far more general than that.
What I’ve written above, about how the structure of a set of representations and the
mechanisms for using them partially define the “meaning” of the primitive components, is
inspired by Carnap’s theory of meaning postulates, which can introduce new undefined
primitives into a theory in the form of new postulates (axioms) using the new primitives
alongside the old ones.
The meaning can be gradually refined and made more determinate by adding more
postulates. We can say it’s a new meaning after each change. Or we can treat it like a
river and say it’s the same meaning which gradually changes over time.
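A schematic example in the spirit of Carnap (the predicates are invented for illustration,
not quoted from him): a new primitive Charged can be introduced by postulates linking it
to old vocabulary,

    \forall x\, (\mathrm{Charged}(x) \rightarrow \mathrm{DeflectsNeedle}(x))
    \forall x\, (\mathrm{Rubbed}(x) \wedge \mathrm{Amber}(x) \rightarrow \mathrm{Charged}(x))

Neither postulate defines Charged, but jointly they constrain its admissible
interpretations: its extension must include everything rubbed and amber, and must lie
inside the needle-deflectors. Adding further postulates narrows the interpretations
further.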
I don’t know if Carnap would approve of my use of his ideas.
Models and tethers Slide 52 June 17, 2008
53. Some wild speculations about colour concepts
The previous discussion suggests that grasping colour concepts may in
part be a matter of plugging certain parameters (perhaps procedural
parameters) into a mostly innately determined complex mechanism
encoding an implicit theory about the nature of object surfaces in our 3-D
environment, with alternative parameters required for other concepts of
properties of extended surfaces, e.g. texture, warmth, etc. (A schematic
code sketch follows the list below.)
• So someone blind from birth may be able to guess (not necessarily consciously)
parameters to be supplied to produce a new family of concepts of properties of
extended surfaces.
• It may even be the case that evolution provides some of those parameters ready made
even in blind people who are not able to connect them to sensory input.
• If so, congenitally blind people may be able to share with sighted people much of their
understanding of colour concepts: perhaps that is why blind people are often able to
talk freely about colours.
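A wildly schematic sketch in the spirit of the speculation above (every name is invented):
a generic, possibly innate schema for “property of an extended surface”, with specific
concepts produced by plugging in parameters, some of which a congenitally blind person
may lack.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class SurfacePropertySchema:
        """A generic, possibly innate, concept schema for properties of
        extended surfaces. A specific concept = schema + parameters."""
        name: str
        varies_over_surface: bool        # can differ from patch to patch
        comparative_structure: str       # e.g. "ordered", "circular"
        sensory_channel: Optional[Callable] = None   # the tether; may be absent

    colour = SurfacePropertySchema(
        name="colour",
        varies_over_surface=True,
        comparative_structure="circular",   # hue circle
        sensory_channel=None,               # absent in a congenitally blind person
    )
    warmth = SurfacePropertySchema(
        name="warmth",
        varies_over_surface=True,
        comparative_structure="ordered",
        sensory_channel=None,
    )

    # Even with sensory_channel=None, the structural part of the concept
    # (what kind of property it is, how instances relate) can be shared,
    # which may be why blind people are often able to talk about colours.
    print(colour.comparative_structure != warmth.comparative_structure)  # True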
Models and tethers Slide 53 June 17, 2008
54. POSSIBLE OBJECTION:
their understanding will be only partial
REPLY:
The fact that innate mechanisms suffice for such partial understanding of
a family of concepts shows that not all concepts need to be derived from
experience of instances.
And when experience plays a role it may sometimes be a small role!
Models and tethers Slide 54 June 17, 2008
55. Generalising the notion of “language” or “formalism”
The model-theoretic ideas used above to introduce the claim that a set of
meanings, or an ontology, can be at least partially determined by the
structure of a formal theory draw on research by logicians and philosophers
of science who assumed that all theories were expressed using something like
predicate calculus: an assumption that should be abandoned in the context of
non-human organisms and pre-linguistic children.
The idea of a “generalised language” (GL) that evolved before human communicative
language and develops earlier in children is introduced in this presentation:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
What evolved first: Languages for communicating, or languages for thinking
(Generalised Languages: GLs)?
It is argued there that two key properties of human languages, namely structural variability and
compositional semantics, need to be generalised and extended to other forms of representation, including
spatial representations such as diagrams, maps and flow-charts.
The results of model theoretic semantics will not immediately transfer to these languages,
since their syntactic forms are very different and the modes of composition leave far more
scope for context to determine interpretation.
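To fix ideas, here is a minimal sketch (all details invented) of what compositional
semantics amounts to for a logic-like language: the meaning of a compound is computed
from the meanings of its parts plus the mode of composition.

    # Compositional semantics for a tiny propositional language.
    def meaning(expr, valuation):
        if isinstance(expr, str):            # atomic symbol
            return valuation[expr]
        op, *parts = expr                    # compound: connective + parts
        values = [meaning(p, valuation) for p in parts]
        if op == "not":
            return not values[0]
        if op == "and":
            return all(values)
        if op == "or":
            return any(values)
        raise ValueError("unknown connective: " + op)

    v = {"raining": True, "windy": False}
    print(meaning(("and", "raining", ("not", "windy")), v))   # True

For maps and diagrams the modes of composition are spatial and far more context-sensitive,
so this simple recursion does not transfer directly.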
So an open research project is exploring whether and how non-logical languages can
support the development of new ontologies and of new explanatory and predictive theories
using those ontologies: this will be relevant to learning and development in humans and
some other animals, and in future robots.
Models and tethers Slide 55 June 17, 2008
56. How is ontology-extension controlled?
I have claimed that substantively new concepts can be acquired by a
scientist or a child by developing new theories that make use of new
concepts expressed by undefined symbols, but controlling the search for
good new concepts and axioms is a major problem.
Adding a new hypothesis or axiom to a theory for the purpose of explaining already known
facts is a process known as “abduction”.
Usually abduction is assumed to use pre-existing concepts, whereas my claim is that
there can be ontology-extending abduction too.
Abduction is often incorrectly presented as a third form of reasoning alongside deduction (working out
what must be the case, given some premises) and induction (working out what is likely to be the case,
given some premises that do not suffice to support deductive inferences).
Abduction is not a process of drawing some conclusion, but a process of formulating a conjecture which
can be combined with other assumptions in a process of reasoning, e.g. to explain or predict something.
Abduction involves selecting candidates from a search space, e.g. the space of possible hypotheses (or
geometric models) that could usefully and consistently be added to what is already known.
In general that is a huge space, and if the search is not constrained in some way it can be impossibly
difficult.
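A toy sketch of constrained abductive search (everything here is invented for
illustration): given background rules and an observation, search a small candidate space
for hypotheses that, combined with the background, explain the observation while
remaining consistent with what is already known. The point of the surrounding text is
that without such constraints the space explodes.

    from itertools import combinations

    # Background theory as simple Horn-style rules: body -> head.
    rules = [
        ({"rained"}, "grass_wet"),
        ({"sprinkler_on"}, "grass_wet"),
        ({"rained"}, "street_wet"),
    ]
    known_false = {"street_wet"}    # already-known facts constrain the search
    observation = "grass_wet"

    def consequences(facts):
        """Forward-chain the rules to closure."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    candidates = ["rained", "sprinkler_on"]
    explanations = []
    for n in range(1, len(candidates) + 1):
        for hypo in combinations(candidates, n):
            closure = consequences(hypo)
            if observation in closure and not (closure & known_false):
                explanations.append(set(hypo))

    print(explanations)   # [{'sprinkler_on'}] -- 'rained' is ruled out

Note that the candidates here come from a pre-existing ontology; ontology-extending
abduction, as described above, would also have to generate new primitive symbols.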
The roles of abduction in philosophy and science are compared and contrasted here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/logical-geography.html
For more on abduction and ontology extension see
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#mofm-07
(Some relevant papers on multimodal abduction by Lorenzo Magnani are mentioned at the end of those slides.)
Models and tethers Slide 56 June 17, 2008
57. Acknowledgements
An earlier version of this was supported by a grant from the Leverhulme Trust.
I have had much help and useful criticism from colleagues at Birmingham and elsewhere,
especially Matthias Scheutz and Ron Chrisley.
More recently answering Jackie Chappell’s questions helped to refine the ideas and led
me to switch terminology from “symbol attachment” to “symbol tethering”, and pointed
towards some ideas about possible implementations. (Not soon.)
This work is informal, partly sketchy and conjectural rather than a polished argument,
even though I have been thinking about the problems on and off for about 30 years.
Comments and criticisms welcome.
I hope it will not be too long before the ideas can be implemented in a learner that shows
how the use of model-based semantics with symbol tethering renders symbol-grounding
unnecessary!
Models and tethers Slide 57 June 17, 2008
58. Some references
Some of the ideas are in my 1986 paper on “Reference without causal links”:
http://www.cs.bham.ac.uk/research/cogaff/Sloman.ecai86.pdf
and in my 1978 chapter on the aims of science, now available in this free online Book
(The Computer Revolution in Philosophy):
http://www.cs.bham.ac.uk/research/cogaff/crp/chap2.html
The Birmingham Cognition and Affect Project and the CoSy project
PAPERS (mostly postscript and PDF):
http://www.cs.bham.ac.uk/research/cogaff/
http://www.cs.bham.ac.uk/research/projects/cosy/papers/
Especially the work with Chappell, and criticisms of sensorimotor theories.
(References to other work can be found in papers in those directories)
TOOLS:
http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
(Including the SIM AGENT toolkit)
SLIDES FOR TALKS (Including IJCAI01 philosophy of AI tutorial with Matthias Scheutz):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#presentations
Comments, criticisms and suggestions for improvements are always welcome.
Models and tethers Slide 58 June 17, 2008