Evolution of minds and languages: What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)
SLIDESHARE NOW STUPIDLY DOES NOT ALLOW SLIDES TO BE UPDATED. To find the latest version of these slides go to http://www.cs.bham.ac.uk/research/projects/cogaff//talks/#talk111
The version posted here was last updated on 16 March 2015. There have been several changes since then on the alternative site. Why did Slideshare take such a stupid decision (after being bought by LinkedIn)?
A theory is presented according to which "languages" with structural variability and compositional semantics evolved in several species for *internal* use (e.g. in perception, planning, learning, forming goals, deciding, etc.) before *external* languages evolved for communication. The theory implies that such internal languages develop in young humans before a language for communication.
It is also noted that the standard notion of 'compositional semantics' has to allow the propagation of semantic content from parts to wholes to be potentially context sensitive at every stage: current context, speaker intentions, user knowledge, and shared goals can all affect how the semantics of larger parts are derived from the semantics of smaller parts plus syntactic structure. This applies as much to non-verbal languages as to verbal ones.
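The idea of context-sensitive composition can be illustrated with a minimal sketch (an illustrative toy interpreter, not taken from the slides; all names and the context keys `domain` and `reverse` are assumptions): each part's meaning is a function from context to a value, and the context is consulted again at every combination step, so the meaning of the whole is not fixed by the parts' meanings alone.

```python
# Minimal sketch of context-sensitive compositional semantics.
# All names and context keys here are illustrative assumptions.

def word(sense_by_context, default):
    """A word's meaning: a function from context to a value."""
    return lambda ctx: sense_by_context.get(ctx.get("domain"), default)

def combine(f_part, g_part):
    """Compose two parts. The context is consulted again at this stage,
    so composition itself (here, ordering) can be context sensitive."""
    def whole(ctx):
        f, g = f_part(ctx), g_part(ctx)
        return f"{f} {g}" if not ctx.get("reverse") else f"{g} {f}"
    return whole

bank = word({"finance": "money-institution", "river": "riverside"}, "bank")
steep = word({}, "steep")

phrase = combine(steep, bank)
print(phrase({"domain": "river"}))    # steep riverside
print(phrase({"domain": "finance"}))  # steep money-institution
```

The same pair of parts yields different wholes depending on the context passed in at evaluation time, which is the point being made about compositional semantics above.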
This theory of how human languages evolved from earlier 'internal languages' (GLs) is inconsistent with the best known published theories of evolution or development of language.
But that does not make it wrong. Moreover, this theory is supported by empirical evidence including the example of deaf children in Nicaragua: http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
This document summarizes the internship of Ho Xuan Vinh at Kyoto Institute of Technology aimed at creating a bilingual annotated corpus of Vietnamese-English for machine learning purposes. Vinh experimented with several semantic tagsets, including WordNet, LLOCE, and UCREL, but faced challenges due to the lack of Vietnamese language resources. His goal was to find an effective method for annotating a bilingual corpus to provide training data for natural language processing tasks, but he was unable to validate his annotation approaches due to limitations in the available data and tools.
Lecture 1: Semantic Analysis in Language Technology (Marina Santini)
This document provides an introduction to a course on semantic analysis in language technology taught at Uppsala University in Sweden. It outlines the course website, contact information for the instructor, intended learning outcomes, required readings, assignments and examination. The course focuses on applying semantic analysis methods in natural language processing tasks like sentiment analysis, information extraction, word sense disambiguation and predicate-argument extraction. It will introduce students to representing and modeling meaning in language through formal logics and semantic frameworks.
The document discusses natural language and natural language processing (NLP). It defines natural language as languages used for everyday communication like English, Japanese, and Swahili. NLP is concerned with enabling computers to understand and interpret natural languages. The summary explains that NLP involves morphological, syntactic, semantic, and pragmatic analysis of text to extract meaning and understand context. The goal of NLP is to allow humans to communicate with computers using their own language.
This document provides an introduction and overview of natural language processing (NLP). It discusses what NLP is, how machines can process human language, the history and importance of NLP, and the typical components and processes involved, including morphological/lexical analysis, syntactic analysis, semantic analysis, discourse integration, and pragmatic analysis. The document also compares natural language to computer languages, discusses the future of NLP being linked to advances in artificial intelligence, and summarizes that NLP involves disambiguation at various linguistic levels through statistical learning methods.
Formal and Computational Representations
The Semantics of First-Order Logic
Event Representations
Description Logics & the Web Ontology Language
Compositionality
Lambda calculus
Corpus-based approaches:
Latent Semantic Analysis
Topic models
Distributional Semantics
The document provides an introduction to natural language processing (NLP), discussing key related areas and various NLP tasks involving syntactic, semantic, and pragmatic analysis of language. It notes that NLP systems aim to allow computers to communicate with humans using everyday language and that ambiguity is ubiquitous in natural language, requiring disambiguation. Both manual and automatic learning approaches to developing NLP systems are examined.
NLP is the branch of computer science focused on developing systems that allow computers to communicate with people using everyday language. It is also called Computational Linguistics, a field equally concerned with how computational methods can aid the understanding of human language.
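The corpus-based approaches listed above (Latent Semantic Analysis, topic models, distributional semantics) all start from co-occurrence statistics. A minimal distributional sketch (illustrative toy data and code, not from any of these decks) compares words by the cosine similarity of their co-occurrence vectors:

```python
# Tiny distributional-semantics sketch: words that occur in similar
# contexts get similar co-occurrence vectors (toy corpus, assumed).
import math
from collections import Counter, defaultdict

corpus = [
    "the cat drinks milk", "the dog drinks water",
    "the cat chases the dog", "the dog chases the cat",
]

# Count co-occurrences within each sentence (window = whole sentence).
cooc = defaultdict(Counter)
for sent in corpus:
    toks = sent.split()
    for w in toks:
        for c in toks:
            if c != w:
                cooc[w][c] += 1

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# "cat" and "dog" occur in similar contexts, so their vectors are close.
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["milk"]))
```

Real systems replace raw counts with weighted, dimensionality-reduced vectors (as in LSA) or learned embeddings, but the underlying idea is the same.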
Big Data and Natural Language Processing (Michel Bruley)
Natural Language Processing (NLP) is the branch of computer science focused on developing systems that allow computers to communicate with people using everyday language.
Gadgets pwn us? A pattern language for CALL (Lawrie Hunter)
The document discusses creating a pattern language for computer-assisted language learning (CALL). It explores the concept of a pattern language as defined by Christopher Alexander and proposes a framework for creating a CALL pattern language in the era of web 2.0. The paper seeks to rework concepts from other fields, like "formal learning design expression" and "task arc," and have participants brainstorm elements to include through graphical challenges. The overall goal is to establish foundational patterns for CALL work.
Myassignmenthelp is a premier service provider for NLP-related assignments and projects. The given PPT describes the processes involved in NLP programming, so whenever you need help with any work related to natural language processing, feel free to get in touch with us.
Building an Ontology in Educational Domain Case Study for the University of P... (IJRES Journal)
The current web is based on HTML, which cannot be exploited by information retrieval techniques; processing of information on the web is therefore generally restricted to manual keyword searches, which retrieve unrelated information. The semantic web was founded to resolve this problem, and ontology is used to capture knowledge about any domain of interest, with the goal of integrating machine-understandable data into the current human-readable web. Web Ontology Language (OWL) is a semantic markup language for sharing ontologies on the web. In this paper, the education domain and the development of a University Ontology using the Protégé 4.1 Editor are considered. The University of Palestine was chosen as the example for the ontology development, covering diverse aspects: the superclass and subclass hierarchy, creating a subclass, illustrating instances for classes, and the query retrieval process, using the Unified Process for Building Ontologies (UPON) technique.
Provides a basic introduction to Natural Language Processing (NLP), its properties, and some common techniques such as stemming, tokenization, bag-of-words, stripping, and n-grams
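The techniques named in that introduction can be sketched in a few lines. This is a toy illustration under simplifying assumptions (whitespace tokenization; real systems use trained tokenizers and stemmers), not code from the deck itself:

```python
# Toy versions of common NLP preprocessing techniques:
# tokenization, bag-of-words, and n-grams.
from collections import Counter

def tokenize(text):
    """Naive whitespace tokenization (assumption: no punctuation handling)."""
    return text.lower().split()

def bag_of_words(tokens):
    """Bag-of-words: token counts with word order discarded."""
    return Counter(tokens)

def ngrams(tokens, n):
    """All contiguous n-token sequences."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

toks = tokenize("the cat sat on the mat")
print(bag_of_words(toks))  # Counter({'the': 2, ...})
print(ngrams(toks, 2))     # [('the', 'cat'), ('cat', 'sat'), ...]
```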
In this poster paper we propose a new method for identifying creativity, based on analysing a corpus of chat conversations on the same topic and extracting the new ideas expressed by participants. The application is a first step towards supporting creativity in online group discussions by highlighting the novel concepts present in conversations (new ideas) and by identifying topics that could have become important had they not been forgotten during the debates (lost ideas).
These slides are an introduction to the understanding of the domain NLP and the basic NLP pipeline that are commonly used in the field of Computational Linguistics.
AAC & Literacy: In Partnership to Develop Language (Jane Farrall)
This document provides information on strategies for combining augmentative and alternative communication (AAC) with emergent literacy instruction. It discusses why AAC and literacy should be partnered to develop language, noting the need for meaningful communication and engagement. Shared reading is recommended, using techniques like Comment, Ask, Respond (CAR) and its extension, Putting the CROWD in the CAR, which involves completion, recall, open-ended questions, WH- questions, and distancing. Predictable chart writing is also outlined as an interactive writing activity where students compose text with an adult using a repeated sentence structure.
Steps in children acquiring a language (Emine Özkurt)
This document summarizes the key stages of language development in children. It discusses four main perspectives on how language is acquired: learning, nativist, interactionist, and cognitive. Children progress through prelinguistic, one-word, telegraphic speech, and early grammar stages from ages 0-5. Piaget's theory of cognitive development also explains language acquisition through its sensory-motor, preoperational, concrete operational, and formal operational stages. The critical period hypothesis suggests there is an ideal time window for acquiring language skills.
This chapter discusses thinking and language. It covers topics such as cognition, concepts, problem solving, algorithms, heuristics, and language development. Cognitive psychologists study mental activities like thinking, knowing, remembering, and communicating. The chapter provides examples of classic problems used to study problem solving and examples of how language develops in children from babbling to two-word sentences. It also discusses artificial intelligence, animal communication like bee dancing, and the relationship between thought and language.
1) Language and speech development is a complex process that almost every human child succeeds in learning. It involves the development of language, communication of thoughts and feelings through symbols, and speech, the act of expressing thoughts through words.
2) Children progress through different stages in their first few years, starting with babbling, then their first words around 12 months, word combinations around 2 years, and simple sentences by 3-4 years old. Their ability to produce sounds also develops over time as they learn the phonetic patterns of their native language.
3) The development involves both biological and learned aspects. It provides insights into the human mind as children figure out the rules and structures of their ambient language through social interaction.
Children acquire language in stages from birth through age 6. They progress from babbling to producing single words, then two word sentences and eventually complex multi-word sentences. Children learn language by listening to those around them and practicing. By age 3 children can use descriptive words and opposites, count to 10, and follow simple commands. By age 6 children have mastered most consonant sounds and can tell connected stories about pictures.
The document discusses language development and language disorders in children. It describes the stages of language development from birth to age 5. It also discusses several common language disorders, including aphasia, lisps, and autism. The causes of language disorders can include genetic factors, developmental problems, accidents, or damage to parts of the brain involved in language processing. Early intervention and treatment is important to address language delays or disorders in children.
The document discusses language development in children from infancy through early childhood. It describes the stages of language development including pre-linguistic, holophrase, two-word, telegram, and near-adult grammar stages. Key aspects of language such as semantics, vocabulary, syntax, and speech are also outlined at different ages.
The document discusses the main stages of first language acquisition:
1) The pre-speech stage where infants learn to pay attention to speech before beginning to speak.
2) The babbling stage starting around 4-6 months, characterized by indiscriminate speech sounds and repeated syllables.
3) The one word or "holophrastic" stage starting around 9 months where children utter their first words and develop regular pronunciation by 50 words.
4) The combining words stage starting around 2 years where children speak in sentences of several words but their grammar is still developing.
The innateness theory (Chomsky) presentation (Jess Roebuck)
This document discusses Noam Chomsky's innateness theory of language acquisition. The key points are:
1) According to Chomsky, language is an innate faculty and humans are born with a "universal grammar" consisting of linguistic rules.
2) Chomsky believes that exposure to language is enough for children to acquire it, as they can learn from minimal data due to their innate linguistic knowledge.
3) The theory proposes that children have a "language acquisition device" that allows them to acquire language effortlessly and quickly despite limited teaching.
The document summarizes theories of first language acquisition. It discusses the imitation/behaviorist theory proposed by Skinner, which views language learning as habit formation through reinforcement. It also discusses the innateness/nativist theory of Chomsky, which posits that humans are born with an innate language acquisition device. The document further examines cognitive, input, and connectionist theories and their varying perspectives on how the environment and mental faculties influence language learning.
Children acquire language through a complex interaction between innate cognitive abilities and environmental factors like social interaction and modified input from caregivers. While children have an innate language acquisition device, language development is also shaped by children's cognitive development and their social environment where they learn through interaction.
Stages of Acquisition of First Language (Joel Acosta)
The document discusses language acquisition in children from birth through age 10. It describes the prelinguistic, one-word, two-word, telegraphic, and later language development stages. Key points covered include the difference between learning and acquisition, the roles of nature and nurture, and how children gradually develop more advanced grammar and vocabulary over time through social interaction.
Chomsky's and Skinner's theories of language acquisition (Nur Khalidah)
This document discusses Noam Chomsky and B.F. Skinner's theories of language acquisition. Chomsky believed language is innate and children acquire it through internal biological mechanisms, while Skinner viewed it as learned through environmental conditioning and reinforcement. Their key differences were that Chomsky saw an innate language acquisition device at work, while Skinner saw children as blank slates shaped by external stimuli. Both agreed the environment plays a role, though they disagreed on whether it was primarily or secondarily influential in the language learning process.
The document discusses four main theories of language acquisition: imitation/behaviorism, innateness/nativism, cognition, and motherese/input. The key points covered include:
- Imitation theory views language learning as a process of reinforcement through stimulus-response and feedback.
- Nativism/innateness theory posits that children are born with an innate language acquisition device that allows them to deduce grammar from primary linguistic data.
- Universal Grammar proposes principles and parameters that are common across languages.
- Theories have both similarities and limitations in fully explaining the complex process of language acquisition.
Similar to Evolution of minds and languages: What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)
Reorganised several times since first uploaded: most recently 25 Jan 2016
-------------------------------------------------------------------------------------------------------
Slides include link to video of lecture (158MB) http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ailect2-2015
-------------------------------------------------------------------------------------------------------------
Two questions are shown to have deep connections: What are the functions of vision in animals? and How did human languages evolve? The answer given here is that the functions of vision need to be supported by richly structured internal languages (forms of representation used for acquiring, storing, manipulating, deriving and using information), from which it follows that internal languages must have evolved before languages for communication.
---------------------------------------------------------------------------------------------------------------
The account of the functions of vision mentions early AI vision, the impact of Marr and the even greater impact of Gibson, but argues that they did not recognize all the functions of vision, e.g. the uses of vision in making mathematical discoveries leading to Euclid's Elements.
---------------------------------------------------------------------------------------------------------------
Many questions are left unanswered by this research, which is part of the Meta-Morphogenesis project, introduced here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
---------------------------------------------------------------------------------------------------------------
A SlideShare presentation on "Origins of language" by Jasmine Wong adds some useful additional evidence, but presents a simpler theory:
http://www.slideshare.net/JasmineWong6/origins-of-language
---------------------------------------------------------------------------------------------------------------
Minor corrections + additions: 30-Mar-2015, 1-Apr-2015, 15-Apr-2015, 12-Nov-2015
Why symbol-grounding is both impossible and unnecessary, and why theory-tethe... (Aaron Sloman)
Introduction to the key ideas of semantic models, implicit definitions and symbol tethering, using ideas from philosophy of science and model-theoretic semantics to explain why symbol grounding theory is misguided: there is no need for all symbols used by an intelligent agent to be 'grounded' in terms of experience or sensory-motor patterns. Rather, most of the meaning of a symbol may come from its role in a powerful explanatory theory, though the theory should have some connection with experiments and observations in order to be applicable to the world. That is not the same as requiring every symbol to be linked to experiences, experiments or measurements.
Symbol grounding theory is a modern version of the philosophical theory of 'concept empiricism', which was refuted by the philosopher Immanuel Kant in the 18th century.
Learning and Text Analysis for Ontology Engineering (butest)
This document calls for papers and participation in a workshop on learning and text analysis for ontology engineering to be held in conjunction with the ECAI 2002 conference in Lyon, France. The workshop aims to bring together researchers from linguistics, natural language processing, knowledge representation, and machine learning to discuss issues around building, maintaining, and reusing ontologies and terminological resources. Topics of interest include using texts and linguistic/terminological resources as knowledge sources for building ontologies, applying machine learning and NLP tools to ontology engineering, and learning ontologies from sources like the web. The deadline for paper submissions is March 15th and for motivation abstracts is May 24th. The workshop will include paper presentations, discussions, and
1. The document discusses a design study exploring the relationship between knowledge maturing and social learning, specifically looking at how ontology maturing relates to collaborative learning dialogues.
2. It proposes a "mashup" that combines an ontology development tool called SOBOLEO with a dialogue game platform called InterLoc to allow users to have structured discussions about developing and refining ontologies.
3. A hypothetical example is described where a career advisor could use the mashup to research labor market information with a client, facilitating knowledge acquisition and refinement through collaborative dialogue.
This document discusses using a cognitive grammar approach to user experience (UX) design. It proposes that interfaces can be viewed as languages with underlying grammars and conceptual models. The author describes their experience applying grammatical distinctions like objects and verbs to the information architecture of a banking app. The document then discusses how UX research can be used to develop an ontology conceptualizing a domain and how prototypes can help test and refine the conceptual model through an iterative process.
The document proposes a collaborative ontology building project (COB) that uses a multi-agent approach to facilitate distributed ontology editing and discovery. Key challenges addressed include making ontology editing easy for non-experts, enabling iterative ontology evolution through expert and agent cooperation, and facilitating ontology mining from distributed and dynamic data sources on the web. The proposed system design involves an ontology repository, various human and software agents that contribute to and validate ontologies, and techniques for tasks like ontology alignment and redundancy/conflict checking.
This document provides an overview of the topics that will be covered in the book. It discusses different programming paradigms that will be examined, including imperative, functional, object-oriented, dataflow, concurrent, declarative, and aggregate languages. For each paradigm, examples of relevant languages are given and the chapters where those languages will be discussed are indicated. The goal is to study principles and innovations across a wide range of modern programming languages. Formal semantic models that provide precise definitions of language meaning will also be presented.
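The paradigms listed in that overview can be contrasted even within a single language. A minimal illustrative sketch (not from the book itself): the same computation written in an imperative style, with explicit mutable state, and in a functional style, as an expression over pure functions.

```python
# Summing the squares of a list, in two of the paradigms mentioned above.

def sum_squares_imperative(xs):
    # Imperative style: explicit mutable state updated step by step.
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_squares_functional(xs):
    # Functional style: a single expression built from pure functions,
    # with no assignment or mutation.
    return sum(map(lambda x: x * x, xs))

assert sum_squares_imperative([1, 2, 3]) == 14
assert sum_squares_functional([1, 2, 3]) == 14
```

Formal semantic models of the kind the book mentions would give each of these styles a precise meaning, e.g. operational semantics for the imperative loop and equational reasoning for the functional expression.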
The document discusses how ontologies and social media can support eLearning. It describes how ontologies can be enhanced with social tags to integrate formal and informal knowledge. An experiment used tags from Delicious to identify related tags and map them to concepts in a computing ontology. User evaluations found that beginners prefer tagged documents while advanced learners benefit from structured ontologies. Integrating ontologies, tags and social networks has potential to support knowledge discovery and recommendation across formal and informal learning resources and communities.
Pal gov.tutorial4.session8 2.stepwisemethodologies (Mustafa Jarrar)
This document provides an overview of stepwise methodologies for ontology engineering. It discusses phases such as identifying the purpose and scope, building the ontology through capturing concepts and defining relationships, integrating existing ontologies, evaluating the ontology, and documenting it. The methodology proposes that building the ontology involves capturing concepts through brainstorming, organizing concepts, producing clear definitions, and defining taxonomies and properties. It emphasizes reaching consensus among those involved and reusing existing ontologies where possible. The goal is to develop ontologies that are clear, coherent, extensible, and reusable.
The document discusses knowledge mapping and social software tools that can be used to support sensemaking, knowledge sharing, and collective dialogue. It provides examples of tools such as Compendium that allow users to create and link different knowledge elements, and how such tools have been applied in contexts like capturing scientific collaborations and emergency response planning. The document concludes by suggesting potential applications of knowledge mapping tools and resources for learning more.
Ontologies for baby animals and robots From "baby stuff" to the world of adul... (Aaron Sloman)
In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on results of interacting with the environment. Future human-like robots will also need this.

Since pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates, can interact, often creatively, with complex structures and processes in a 3-D environment, that suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process (some of which are process-fragments composed of bits of stuff changing their properties, structures or relationships), and kinds of causal interaction, and (b) that, since they don't use a human communicative language, they must use information encoded in some form that existed prior to human communicative languages, both in our evolutionary history and in individual development.

Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension. The research reported here aims mainly to develop requirements for explanatory designs. The attempt to develop forms of representation, mechanisms and architectures that meet those requirements will be a long term research project.
Using construction grammar in conversational systems (CJ Jenkins)
This thesis explored using construction grammar and ontologies in conversational systems. The author built two early experimental systems using these techniques. Construction grammar represents language as constructions pairing form and meaning. Ontologies allow for more explicit semantics compared to databases. The author developed a stemmer called UEA-Lite and a system called KIA that incorporated construction grammar, ontologies, and machine learning to understand and respond to natural language.
The document introduces ontologies and discusses their role in the Semantic Web. It defines an ontology as an explicit specification of a conceptualization that is shared between people or software agents. Ontologies allow concepts and relationships between concepts to be formally defined so that software applications can interpret data in the same way. The document outlines different types of ontologies including upper ontologies that define common concepts across domains, and domain ontologies that define the terms and relationships within a specific knowledge domain. Formal ontology languages are also discussed as a way to represent ontologies in a machine-readable format.
USING MACHINE LEARNING TO BUILD A SEMI-INTELLIGENT BOT (ecij)
Real-time systems and intelligent systems increasingly offer control interfaces based on voice recognition or human language recognition. Robots and drones will soon be controlled mainly by voice, and other robots will integrate bots to interact with their users; this can be useful both in industry and in entertainment. At first, researchers explored "ontology reasoning", but given the technical constraints imposed by the treatment of ontologies, an interesting alternative has emerged in recent years: the construction of a machine learning model that connects human language to a knowledge base (based, for example, on RDF). We present in this paper our contribution to building a bot that could be used in real-time systems and drones/robots, using recent machine learning technologies.
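The approach that abstract describes, connecting an utterance to an RDF-style knowledge base, can be sketched very roughly as follows. Everything here is invented for illustration: the triples, the intent keywords, and the `answer` function are hypothetical, and the keyword match stands in for the trained machine learning model the paper refers to.

```python
# Toy RDF-style triple store: (subject, predicate, object). Invented data.
TRIPLES = [
    ("drone1", "hasStatus", "airborne"),
    ("drone1", "hasBattery", "74%"),
    ("robot2", "hasStatus", "charging"),
]

# Each intent selects the predicates to query, e.g. "what is the
# status of drone1?" -> hasStatus. A real system would use a learned
# classifier here rather than keyword spotting.
INTENT_KEYWORDS = {
    "status": ("hasStatus",),
    "battery": ("hasBattery",),
}

def answer(utterance, subject):
    """Map an utterance to an intent, then look the answer up in the store."""
    for intent, predicates in INTENT_KEYWORDS.items():
        if intent in utterance.lower():
            for s, p, o in TRIPLES:
                if s == subject and p in predicates:
                    return o
    return "unknown"

print(answer("What is the battery level?", "drone1"))  # prints 74%
```

The design point the paper makes survives even in this sketch: the heavy lifting moves from ontology reasoning at query time to a (here trivially faked) language-to-intent model, with the knowledge base reduced to simple lookups.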
The document discusses using technology for task-based language teaching, including using synchronous computer-mediated communication (SCMC) and asynchronous computer-mediated communication (ACMC) to design technology-based language learning tasks, and the importance of reflection activities after tasks are completed to help learners improve their language skills. Presentations are also scheduled on using corpora and concordancers like AntConc for language teaching.
I held this presentation at the first PKP Scholarly Publishing Conference in Vancouver, Canada, on July 12th 2007. Check out the general conference blog if you want to know more about the event:
http://scholarlypublishing.blogspot.com/
You may also be interested in things marked with the "open-access" tag in my own blog:
http://corpblawg.ynada.com/
The document summarizes an e-portfolio community of practice (ePCoP) funded by the JISC Lifelong Learning & Workforce Development Programme and led by the University of Wolverhampton. The ePCoP aimed to share learning from JISC projects using e-portfolios and encourage wider discussion of e-portfolio pedagogy. It utilized Cloudworks as a platform, invited expert practitioners to lead activities, and saw over 1500 views but less engagement in discussions. Moving forward, the ePCoP hopes to establish clear aims and continue hosting resources on Cloudworks and JISC infonet.
Similar to Evolution of minds and languages: What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs) (20)
Construction kits for evolving life -- Including evolving minds and mathemati... (Aaron Sloman)
Darwin's theory of evolution by natural selection does not adequately explain the generative power of biological evolution. For that we need to understand the mechanisms involved in producing new options for natural selection, without which there would always be the same set of possibilities available. This applies also to the construction kits: evolution can produce new construction kits, "Derived" construction kits, based on the Fundamental construction kit provided by the physical universe and its originally lifeless physical and chemical mechanisms. It turns out that life needs both concrete and abstract construction kits, of ever increasing complexity. This paper introduces some basic ideas, though far more empirical and theoretical research is required, combining multiple disciplines. Slideshare no longer allows presentations to be updated, so I no longer use it. For a later version search for: Sloman "Construction kits for evolving life" cogaff. Most of my slideshare presentations have newer versions in the CogAff web site at the University of Birmingham, UK. (Not Alabama)
The Turing Inspired Meta-Morphogenesis Project -- The self-informing universe... (Aaron Sloman)
This replaces an earlier version. The latest version, with clickable links, is available at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Virtuality, causation and the mind-body relationship (Aaron Sloman)
This document discusses virtual machinery and causation. It defines three types of machines: physical machines, abstract mathematical objects called mathematical machines, and running virtual machines that are instances of mathematical machines controlling events in physical machines. It explores how running virtual machines can have causal powers despite being based on abstract mathematical objects, and how causation occurs both physically and through information processing in virtual machines. The document aims to clarify the nature and causal abilities of virtual machines.
How to Build a Research Roadmap (avoiding tempting dead-ends) (Aaron Sloman)
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z, then hoping that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic) by examining many long term goals, including describing existing human and animal competences not yet achieved by robots, then working backwards systematically by investigating requirements for those competences, and requirements for meeting those requirements, etc. Instead of generating a single linear roadmap this should produce a partially ordered network of intermediate targets, leading back to short term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done in many teams in a fully open manner with as much collaboration as possible, it should be possible to make faster, deeper, progress than can be achieved by brain-storming discussions of where we can get in a few years.
If learning maths requires a teacher, where did the first teachers come from?
or
Why (and how) did biological evolution produce mathematicians?
Presentation at Symposium on Mathematical Cognition AISB2010
Part of the Meta-Morphogenesis Project. See also this discussion of toddler theorems:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Evolution of human mathematics from earlier abilities to perceive, use and reason about affordances, spatial possibilities and constraints.
The necessity of mathematical truth does not imply infallibility of mathematical reasoning. (Lakatos).
Toddlers discover theorems without knowing it. Later they may learn to reflect on and talk about what they have learnt. Compare Annette Karmiloff-Smith on "Representational re-description".
Why is it still so hard to give robots and AI systems the ability to reason spatially as mathematicians do (except for simple special cases, e.g. where space is discretised)?
A multi-picture challenge for theories of vision (Aaron Sloman)
(Modified 7th June 2013 to include some droodles.)
Some informal experiments are presented whose results help to challenge most theories of vision and proposed mechanisms of vision.
A possible explanatory information-processing architecture is proposed, based on multiple dynamical systems, grown during an individual's life time, most of which are dormant most of the time, but which can be very rapidly activated and instantiated so as to build a multi-ontology interpretation of the currently, and recently, available visual information -- e.g. turning a corner into a busy street in an unfamiliar city. As far as I know, there is no working implementation of such a system, though a very early prototype called Popeye (implemented in Pop2) around 1976 is summarised. Many hard unsolved problems remain, though most of them are ignored by research on vision that makes narrow assumptions about the functions of biological vision.
Meta-Morphogenesis, Evolution, Cognitive Robotics and Developmental Cognitive... (Aaron Sloman)
How could a planet, condensed from a cloud of dust, produce minds -- and products of minds, along with microbes, mice, monkeys, mathematics, music, marmite, murder, megalomania, and all other forms and products of life on earth (and possibly elsewhere)?
This presentation introduces the ambitious, multi-disciplinary Meta-Morphogenesis project, partly inspired by Turing's 1952 paper on morphogenesis. It may lead to an answer, by identifying the many transitions between different types and mechanisms of biological information processing, including transitions that changed the mechanisms of change, altering forms of evolution, development, learning, culture and ecosystem dynamics. One of the questions raised is whether chemical information-processing is capable of supporting processes that would be infeasible or impossible on a Turing machine or conventional computer.
A 2hour 30 min recording of this tutorial was made by Adam Ford, available here: http://www.youtube.com/watch?v=BNul52kFI74 (new version installed on 14 Jun 2013 with titles and audio problem fixed). Also available here
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#m-m-tut
"Information" here is used in Jane Austen's sense, not Claude Shannon's sense. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html
More information about the project is available here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Adam Ford interviewed the author about some of these topics at the AGI conference in December 2012 in this video: http://www.youtube.com/watch?v=iuH8dC7Snno
Related PDF presentations can be found here http://www.cs.bham.ac.uk/research/projects/cogaff/talks
What is computational thinking? Who needs it? Why? How can it be learnt? ... (Aaron Sloman)
What is computational thinking?
Who needs it? Why? How can it be learnt?
Can it be taught? How?
Slides for invited presentation at Conference of ALT (Association for Learning Technology) 11th Sept 2012, University of Manchester.
PDF available (easier for printing, selecting text, etc.):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk105
A video of the actual presentation (using no slides because of a projector problem) is now available here
http://www.youtube.com/watch?v=QXAFz3L2Qpo
It has also been made available as "slide 47" after the PDF presentation on this page.
I attempt to generalise Jeannette Wing's notion of "Computational thinking" (ACM 2006) to include attempting to understand much biological information processing, and try to show the necessity for educators to do deep computational thinking if they wish to facilitate processes of learning.
What's vision for, and how does it work? From Marr (and earlier) to Gibson and... (Aaron Sloman)
ABSTRACT
Very many researchers assume that it is obvious what vision (e.g. in humans) is for, i.e. what functions it has, leaving only the problem of explaining how those functions are fulfilled. So they postulate mechanisms and try to show how those mechanisms can produce the required effects, and also, in some cases, try to show that those postulated mechanisms exist in humans and other animals and perform the postulated functions. The main point of this presentation is that it is far from obvious what vision is for - and J.J. Gibson's main achievement is drawing attention to some of the functions that other researchers had ignored. I'll present some of the other work, show how Gibson extends and improves it, and then point out how much more there is to the functions of vision and other forms of perception than even Gibson had noticed.
In particular, much vision research, unlike Gibson, ignores vision's function in on-line control and perception of continuous processes; and nearly all, including Gibson's work, ignores meta-cognitive perception, and perception of possibilities and constraints on possibilities and the associated role of vision in reasoning. If we don't understand that we cannot understand how biological mechanisms arising from requirements for being embodied in a rich, complex and changing 3-D environment underpin human mathematical capabilities, including the ability to reason about topology and Euclidean geometry.
Last updated: 1st March 2014, 10 June 2015 (additional links)
Slides prepared for a broadcast presentation to members of Computing at School http://www.computingatschool.org.uk/, about why computing education should be about more than the science and technology required for useful or entertaining applications. Instead, learning about forms of information processing systems can give us new, deeper ways of thinking about many old phenomena, e.g. the nature of mind and the evolution of minds of various kinds. This supports the claim that the study of computation is as much a science as physics or psychology, rather than just a branch of engineering -- as famously suggested by Fred Brooks.
Helping Darwin: How to think about evolution of consciousness (Biosciences ta... (Aaron Sloman)
ABSTRACT
Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes: an old, and still surviving, philosophical problem.
A new answer is now available. Evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution encountered and "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by using systems with increasingly abstract mechanisms based on virtual machines, including, most recently, self-monitoring virtual machines.
These capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machinery which, in turn, is implemented in lower level physical mechanisms.
This would require far more complex virtual machines than human engineers have so far created. No one knows whether the biological virtual machines could have been implemented in the discrete-switch technology used in current computers.
These ideas were not available to Darwin and his contemporaries: most of the concepts, and the technology, involved in creation and use of sophisticated virtual machines were developed only in the last half century, as a by-product of a large number of design decisions by hardware and software engineers solving different problems.
Possibilities between form and function (Or between shape and affordances) (Aaron Sloman)
I discuss the need for an intelligent system, whether it is a robot, or some sort of digital companion equipped with a vision system, to include in its ontology a range of concepts that appear not to have been noticed by most researchers in robotics, vision, and human psychology. These are concepts that lie between (a) concepts of "form", concerned with spatially located objects, object parts, features, and relationships and (b) concepts of affordances and functions, concerned with how things in the environment make possible or constrain actions that are possible for a perceiver and which can support or hinder the goals of the perceiver.
Those intermediate concepts are concerned with processes that *are* occurring and processes that *can* occur, and the causal relationships between physical structures/forms/configurations and the possibilities for and constraints on such processes, independently of whether they are processes involving anyone's actions or goals.
These intermediate concepts relate motions and constraints on motion to both geometric and topological structures in the environment and the kinds of 'stuff' of which things are composed, since, for example, rigid, flexible, and fluid stuffs support and constrain different sorts of motions.
They underlie affordance concepts. Attempts to study affordances without taking account of the intermediate concepts are bound to prove shallow and inadequate.
Notes for invited talk at Dagstuhl Seminar: ``From Form to Function'' Oct 18-23, 2009 http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=09431
Virtual Machines and the Metaphysics of Science (Aaron Sloman)
The document is an abstract for a presentation given by Aaron Sloman at the Metaphysics of Science conference in Nottingham on September 12, 2009. The presentation discusses virtual machines and their importance for philosophy. It notes that philosophers regularly use complex virtual machines composed of interacting non-physical subsystems, like operating systems and web browsers. However, philosophers often ignore or misdescribe these virtual machines in discussions of topics like functionalism and causation. The presentation aims to explain virtual machines and how they are relevant to several philosophical problems regarding issues like supervenience, causation, and the mind-body problem.
Why the "hard" problem of consciousness is easy and the "easy" problem hard....Aaron Sloman
The "hard" problem of consciousness can be shown to be a non-problem because it is formulated using a seriously defective concept (the concept of "phenomenal consciousness", defined so as to rule out cognitive functionality and causal powers).
So the hard problem is an example of a well known type of philosophical problem that needs to be dissolved (fairly easily) rather than solved. For other examples, and a brief introduction to conceptual analysis, see http://www.cs.bham.ac.uk/research/projects/cogaff/misc/varieties-of-atheism.html
In contrast, the so-called "easy" problem requires detailed analysis of very complex and subtle features of perceptual processes, introspective processes and other mental processes, sometimes labelled "access consciousness": these have cognitive functions, but their complexity (especially the way details change as the environment changes or the perceiver moves) is considerable and very hard to characterise.
"Access consciousness" is complex also because it takes many different forms, since what individuals are conscious of, and what uses being conscious of things can be put to, can vary hugely, from simple life forms, through many other animals and human infants, to sophisticated adult humans.
Finding ways of modelling these aspects of consciousness, and explaining how they arise out of physical mechanisms, requires major advances in the science of information processing systems -- including computer science and neuroscience.
There are empirical facts about introspection that have generated theories of consciousness but some of the empirical facts go unnoticed by philosophers.
The notion of a virtual machine is introduced briefly and illustrated using Conway's "Game of life" and other examples of virtual machinery that explain how contents of consciousness can have causal powers and can have intentionality (be able to refer to other things).
The beginnings of a research program are presented, showing how more examples can be collected and how notions of virtual machinery may need to be developed to cope with all the phenomena.
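The abstract above uses Conway's "Game of Life" to illustrate virtual machinery, so a minimal sketch may help: the update rule below operates only on individual cells, yet patterns like the "blinker" exist, persist and have causal consequences only at the virtual-machine level, not in the rule itself. (This implementation is my own illustrative sketch, not code from the presentation.)

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 -- a fact describable only at the
# level of the virtual machine, not at the level of single-cell updates.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert step(step(blinker)) == blinker
```

The point of the example in the abstract is that the blinker's "causal powers" are entirely real at its own level of description, while being fully implemented in the lower-level update rule.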
Some thoughts and demos, on ways of using computing for deep education on man... (Aaron Sloman)
1. The document discusses using computing to support deep and liberal education on many topics by stretching young minds through challenging learning opportunities rather than solely teaching skills.
2. It suggests computers can provide powerful learning through playing with AI programs, concepts and graphics-based tools, while also needing simpler textual environments to support abstraction.
3. One way to get this in schools is to make remotely accessible systems like Poplog available through shared Linux machines to introduce simple AI programming and collaborative learning.
Do Intelligent Machines, Natural or Artificial, Really Need Emotions? (Aaron Sloman)
(Updated on 14 Jan 2014 -- with substantial revisions.)
Many people believe that emotions are required for intelligence. I argue that this is mostly based on (a) wishful thinking and (b) a failure adequately to analyse the variety of types of affective states and processes that can arise in different sorts of architectures produced by biological evolution or required for artificial systems. This work is a development of ideas presented by Herbert Simon in the 1960s in his 'Motivational and emotional controls of cognition'.
What is science? (Can There Be a Science of Mind?) (Updated August 2010) (Aaron Sloman)
This presentation gives an introduction to philosophy of science, though a rather idiosyncratic one, stressing science as the search for powerful new ontologies rather than merely laws. You can't express a law unless you have an ontology including the items referred to in the law (e.g. pressure, volume, temperature). The talk raises a number of questions about the aims and methods of science, about the differences between the physical sciences and the science of information-processing systems (e.g. organisms, minds, computers), whether there is a unique truth or final answers to be found by science, whether scientists ever prove anything (no -- at most they show that some theory is better than any currently available rival theory), and why science does not require faith (though obstinacy can be useful). The slides end with a section on whether a science of mind is possible, answering yes, and explaining how.
Distinguishes Humean (statistics-based) notions of causation and Kantian (deterministic, structure-based) notions of causation, arguing that intelligent robots and animals need both, but each requires a combination of competences, and various kinds of partial competence of both kinds are possible.
What designers of artificial companions need to understand about biological ones (Aaron Sloman)
This document summarizes Aaron Sloman's presentation at AISB'08 on what designers of artificial companions need to understand about biological ones. Sloman discusses the difficulty of the task and argues that current AI is not capable of replicating the capabilities of young children. He outlines "Type 1" goals for artificial companions focused on engagement, and "Type 2" goals focused on enabling functions like helping users, which are much harder to achieve. Sloman asserts progress requires understanding how human capabilities like understanding environments, minds, and developing new motives emerge from biological and developmental factors. The key is replicating some of the generic learning abilities of young humans to build more advanced functions layer by layer.
Fundamental Questions - The Second Decade of AI: Towards Architectures for Hu...Aaron Sloman
The document summarizes a presentation given at the KI2006 Symposium on the history of artificial intelligence. It discusses:
1) The presenter's early education in AI in the late 1960s and 1970s, being impressed by works by Marvin Minsky and attending lectures by Max Clowes.
2) Interesting early AI work in the 1970s by researchers like Patrick Winston, Terry Winograd, and Gerald Sussman.
3) The presenter's realization in the early 1970s that the best way to do philosophy was through designing and implementing fragments of working minds in AI to test philosophical theories.
4) Some of the major AI centers that existed in the early
Evolution of minds and languages: What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)
1. First year lecture: Intro to AI, Birmingham 12 Dec 2007 (and later years)
(Based on Seminar in School of Psychology in October 2007, and earlier publications)
Mind as Machine Weekend Course, Oxford 1-2 Nov 2008
What evolved first:
Languages for communicating?
or
Languages for thinking?[∗]
(Generalised Languages: GLs)
Aaron Sloman
http://www.cs.bham.ac.uk/˜axs/
Ideas developed with Jackie Chappell (Biosciences, Birmingham).
http://www.jackiechappell.com/
[∗] Where ‘thinking’ here refers loosely to any kind of internal information processing.
These slides are available here (with many other presentations):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
Also on my slideshare site:
http://www.slideshare.net/asloman/evolution-of-minds-and-languages-presentation
See also: The Turing-inspired Meta-Morphogenesis Project:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
(All Work in progress)
Lang&Cog GLs Slide 1 Last revised: March 16, 2015
2. Abstract
Widely held beliefs about the nature of human language, its relationship to various aspects of human
mental functioning, its evolution in our species, and its development in individuals, ignore or over-simplify
the information-processing requirements for language to evolve or be used, and they ignore some facts
about what pre-linguistic children can do, and facts about what many other animals can do.
I agree with Karl Popper that one should not waste time attacking straw men, i.e. arguing against artificially weakened versions of theories: one
should always try to present any position one argues against in its strongest form before arguing against it. However, doing that would
make this document at least ten times longer. So for now I merely summarise very briefly the positions I think are mistaken.
They are all capable of much more detailed and convincing presentations than I give them here.
References will later be added, though I expect most people reading this will already be familiar with some of the literature.
The key idea: Humans and many other animals need “internal languages” (internal means of representing
information) for tasks that are not normally thought of as linguistic tasks, namely perceiving, experiencing,
having desires, forming intentions, working out what to do, performing actions, learning things about the
environment (including other agents), remembering, imagining, theorising, designing .....
Being able to use such internal languages is a prerequisite for learning an external language.
... so internal languages must have evolved first, and must develop first in individuals
This requires a generalised notion of a language: a GL, i.e. a form of manipulable representation with
structural variability, variable (unbounded?) complexity, and compositional semantics, as explained later.
This includes both external communicative languages and internal languages, and it allows for a wide variety of forms,
including all forms of human language used for communication (spoken, written, signed, maps, diagrams, flow-charts,
mathematical formalisms, programming languages), and also many formalisms used internally in working computer models
and AI robots, and hypothesised forms of representation used internally by animals (some not yet identified by psychologists
or neuroscientists!)
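The abstract characterisation of a GL (structural variability, unbounded complexity, context-sensitive compositional semantics) can be made concrete with a toy sketch. This is my illustration, not part of the original slides; the operators ("near", "sum") and the contexts are invented:

```python
# Toy sketch of a "GL": recursively structured expressions (structural
# variability, unbounded complexity) whose meanings are derived from the
# meanings of their parts -- with the derivation itself sensitive to context.
# The operators and contexts here are invented for illustration.

def evaluate(expr, context):
    """Compositionally derive the meaning of expr, given a context."""
    if isinstance(expr, str):              # atomic symbol:
        return context.get(expr, expr)     # context supplies its referent
    op, *parts = expr                      # structured expression
    meanings = [evaluate(p, context) for p in parts]
    if op == "near":
        # Context-sensitive composition: what counts as "near" depends on
        # the current context, not only on the parts' meanings.
        a, b = meanings
        return abs(a - b) <= context.get("near_threshold", 1.0)
    if op == "sum":
        return sum(meanings)
    raise ValueError(f"unknown operator: {op}")

# The same structure means different things in different contexts:
expr = ("near", "cup", "edge")
print(evaluate(expr, {"cup": 0.5, "edge": 1.2, "near_threshold": 1.0}))  # True
print(evaluate(expr, {"cup": 0.5, "edge": 1.2, "near_threshold": 0.5}))  # False
```

The point of the sketch is that semantic content propagates from parts to wholes, but the propagation rule itself can consult the current context, as the extended notion of compositional semantics requires.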
In order to understand how brain mechanisms make GLs possible we have to understand what virtual
machines are and the complex ways in which they can be related to physical machines: for a short tutorial
see these slides: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#bielefeld
More detail: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
3. A possible context for this Lecture – teaching
One use of these slides is as a contribution to a first year module:
Introduction to Artificial Intelligence
In previous years I gave a lecture on AI and Philosophy, still available here
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#aiandphil
Further background information can be found in this high-level overview of the goals and methods of AI
(AI as science and AI as engineering):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#whatsai
A comparison of the aims, methods and tools of AI and more conventional software engineering languages
and development environments can be found here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#aidev
I am assuming that students will know that AI includes the study of the following, among other things:
• learning and development of various kinds
• control of actions of various kinds
• vision and other forms of perception
• planning and decision making
• reasoning and problem solving of various kinds
• natural language processing
• formation, comparison, selection, rejection, postponement, ... of motives
• emotions and other affective states and processes
• architectures, representations and algorithms for intelligence, including comparison of
symbolic, logic-based, rules-based, neural, diagrammatic, dynamical systems, ...
• abilities to make mathematical discoveries, e.g. in geometry, topology, logic,...
All of these are very complex topics, with complex interrelationships.
4. Original context of these slides
These slides were originally written for presentation at the Language and Cognition
seminar, School of Psychology, University of Birmingham, 19th October 2007
The topic is far too large for a single seminar, so only a subset of the slides were
presented, after some videos of animals and prelinguistic children doing things that
demonstrated perception of structure in the environment, and in two cases interpretation
of the intentions of an adult human, and apparently spontaneous actions to help achieve
the adult’s goals.
Thanks to Felix Warneken for use of his videos, available here
http://email.eva.mpg.de/˜warneken/video.htm
Some of the videos I use in this context are available here (e.g. broom video):
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/sloman
Some of the papers referred to in the talk are listed at the end, with URLs.
I thank various listeners and readers for their patience and for interesting comments and questions which
have led to improvements.
NOTE:
My slides are written so as to be readable online,
so they contain a lot more detail than most presentations.
This makes them less suitable for live presentations.
Some references are provided at the end.
5. Note for philosophers and doubters about internal languages
For philosophers convinced by Wittgenstein’s arguments (or, more precisely, his rhetoric) that private
languages are impossible, it should be made clear that he was attacking a philosophical thesis concerning
the use of “logically private” languages, and he knew little about computational virtual machines: the core
ideas have only been developed since his death.
However, Kenneth Craik, also at Cambridge, had some of the ideas presented here, while Wittgenstein was still alive.
Most philosophers still know nothing about virtual machines, unfortunately, though they use them every day.
These slides present what is primarily a scientific theory, not a philosophical theory, though the ideas have
been developed partly on the basis of philosophical conceptual analysis, and many of its key features were
inspired by Immanuel Kant’s philosophy, including his views about mathematics and causation.
Like all scientific theories (except shallow theories that are merely about observed correlations), this theory
uses theoretical terms that cannot be explicitly defined, but are implicitly defined by their role in the theory,
as explained here: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
The theory as a whole needs to be both enriched internally so as to fill gaps in the explanation, and also
“tethered” by more links to experiment, observation, and working models demonstrating its feasibility and
applicability. There is still a long way to go.
But it should develop faster than Democritus’ atomic theory of matter, propounded over 2400 years ago.
The ideas presented here are not new, though their combination may be. Originality is not claimed, however.
The combination of ideas was developed in collaboration with Jackie Chappell: see our joint papers listed near the end.
She has not checked these slides, and may not agree with everything said here. We both regard the work as still incomplete.
The theoretical claims need to be made more precise, working models need to be developed, and deeper empirical probes are
required, for testing.
6. Overview of objectives
• Present some widely held views about the nature of human language
• Present some divergent views about the evolution of language
• Extract three core ideas about the nature of language and generalise them to define
the concept of a Generalised Language (GL) that includes “internal” languages:
– Extend the notion of compositional semantics to allow for richer context-dependence in GLs
(analogous to Gricean principles for communication).
– Extend the generalisation to include non-verbal languages using diagrams and other spatial
structures to combine information.
• Show that a GL used internally (i.e. not for communication) can be useful for an
intelligent animal or robot. (But there are deep unsolved problems about the details.)
• Show that some competences of both prelinguistic humans and some other animals
seem to require use of internal GLs, representing structures, processes, intentions, ...
See research on cognitive and social competences of infants and toddlers,
e.g. E. J. Gibson & A. D. Pick, An Ecological Approach to Perceptual Learning and Development, OUP, 2000
• Conclude that internal GLs evolved before human external languages, and that in
individual humans they develop before an external language is learnt.
• Point out some implications for theories of evolution of human language.
• Point out some implications for theories of language learning in humans
(Supported by the example of Nicaraguan deaf children, and Down’s syndrome children.)
7. Methodological warning: Beware of unimplemented models
• Much of what I say is still largely descriptive and hypothetical:
I cannot (yet) demonstrate working computer models of the mechanisms discussed.
Although I have a lot of experience of building computer models and can see how parts of what I am
talking about could be implemented, there are still many gaps in the current state of the art
(especially robot perception of 3-D structures and processes.)
• Until a theory can be expressed with sufficient precision to guide the construction of a
working system it should always be regarded as suspect: for example, you cannot
easily tell what assumptions you are making and whether they are seriously flawed.
• Implementation on computers is a “proof of concept”, but still leaves open the question
whether the mechanisms proposed can be implemented on biological information
processing machinery.
• Unfortunately I don’t think we yet understand more than a small subset of the
types of biological information-processing mechanisms, and we cannot yet build
convincing platforms that can be used as a basis for implementing theories of the
kind discussed here.
• Therefore, much of this is still speculative and not subject to immediate testing.
• This could be the start of a progressive or a degenerating research programme.
Deciding which it is can take many years of development of a research programme:
See: Imre Lakatos,
The methodology of scientific research programmes, in Philosophical papers, volume I,
Eds. J. Worrall & G. Currie, CUP, 1980, ( http://www.answers.com/topic/imre-lakatos )
8. Some background assumptions
Things we have learnt from AI research over the last 60 years or so include the following:
• An animal or machine acting in the world needs an information-processing
architecture including different components capable of performing different sorts of
tasks concurrently
• This is a virtual-machine architecture, not a physical architecture
• The various components need information of different sorts, which has to be
represented or encoded in an appropriate way
• There is no one right form of representation: different tasks have different
requirements, e.g.
– for collections of symbolic facts (particular and general)
– for structures representing spatial relationships e.g. maps, 3-D models
– for algorithms that can be executed to produce internal or external behaviours
– for doing statistical processing (e.g. building and using histograms)
– for doing fast pattern recognition (using statistical or symbolic mechanisms)
– for representing control information, including goals, preferences, partially executed plans, future
intentions, etc.
• SOME information processing requirements BUT NOT ALL can be very dependent on
the contents of the environment and on the body of the robot or animal (its structure,
the materials used, etc.).
• Some animal information-processing architectures are mostly genetically determined,
allowing only minor adaptations, whereas others are grown during individual
development and are strongly influenced by interactions with the environment
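The claim that there is no one right form of representation can be illustrated with a small sketch of my own (the objects, relations and coordinates are invented): the same spatial information encoded as symbolic facts and as a map-like grid, each form making a different kind of query easy.

```python
# Sketch: the same spatial information in two forms of representation,
# each suited to a different task. Illustrative example only.

# Symbolic facts make relational queries a direct lookup:
facts = {("left_of", "cup", "plate"), ("on", "plate", "table")}

def holds(rel, a, b):
    return (rel, a, b) in facts

# A map-like grid makes metrical queries simple arithmetic:
grid = {(0, 0): "cup", (1, 0): "plate"}

def manhattan_distance(obj_a, obj_b):
    pos = {name: xy for xy, name in grid.items()}
    (xa, ya), (xb, yb) = pos[obj_a], pos[obj_b]
    return abs(xa - xb) + abs(ya - yb)

print(holds("left_of", "cup", "plate"))    # True: one set-membership test
print(manhattan_distance("cup", "plate"))  # 1: arithmetic on coordinates
```

Answering the relational query from the grid, or the metrical query from the facts, would require extra inference in each case, which is the sense in which different tasks have different representational requirements.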
9. How can information be used?
For an animal or robot to use information there are various requirements
that need to be satisfied
• There must be some way of acquiring the information e.g. through the genes, through
the senses, by reasoning or hypothesis formation, or some combination
NOTE: acquisition through senses is not a simple matter of recording sensory signals: A great deal of
analysis, interpretation, abstraction, and combination with prior knowledge may be involved.
• There must be some way in which the information can be encoded or represented so
that it can be used. This may be
– transient and used once
(e.g. information used in continuous control)
– enduring and reusable
for short or long periods e.g.
∗ percepts of the immediate environment
∗ generalisations,
∗ geographical (extended spatial) information,
∗ motives, preferences, values, ....
∗ intended or predicted future events/actions, ..... etc.
• There must be mechanisms for selecting relevant information from large stores.
• The form of representation must allow information-manipulations that derive new
information or construct new hypotheses or goals
• Some animals (e.g. humans) and some robots need ways of representing novel
information about things never previously encountered.
10. Some videos/demos
• Parrot
Video of parrot scratching back of head with feather
• Crow
Betty, the New Caledonian Crow makes hooks to lift a basket of food.
• Infant helper
Show Warneken video of child spontaneously opening cupboard door to help researcher.
(On web site mentioned above)
The researchers ask: can a child spontaneously decide to help someone?
I ask: if a child can do anything of that sort — what are the representational and architectural
requirements?
Remember the deliberative sheepdog: Demo 5 here:
http://www.cs.bham.ac.uk/research/projects/poplog/figs/simagent
• Broom pusher
Toddler steers broom down corridor and round corner
Non-human animals and pre-verbal children must have information
represented using internal languages.
See Sloman 1979
The primacy of non-communicative language
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43
11. What representations are NOT
It is often stated, e.g. in AI text-books, and in research publications, that
representations “stand for” or “stand in for” or “resemble” the things they
represent: this is a serious confusion.
• Information about X and X itself will generally be used for quite different purposes.
A recipe for a type of cake gives information about ingredients and actions to be performed to make
an instance.
– If you mix and cook the ingredients properly (eggs, flour, sugar, etc.) you may get a cake.
– Compare trying to mix and cook bits of the recipe (e.g. the printed words for the ingredients)
• A 2-D representation of a 3-D object cannot be used as a replacement for the object.
• If X is some type of physical object, then information about X might be used to work
out how to make X, to decide whether to make X, to reason about the cost of making
X, to work out how to destroy X, how to produce a better X, ...
• If X is a type of action, then information about X can be used to decide whether to
perform X, to work out how long X will take, to work out risks in doing X, to decide how
to perform X, to produce a performance of X, to modulate the performance of X, to
evaluate the performance of X, to teach someone else how to perform X.
• If X represents a generalisation, e.g. “All unsupported objects near the surface of the
earth fall”, then there is no object X refers to that can be used, manipulated, modified
etc.
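The recipe/cake distinction can be put in computational terms with a sketch of my own (the recipe format is invented): information about an action is a structure that can be inspected, costed, or modified for purposes quite different from performing the action.

```python
# Sketch of the recipe/cake point: a representation of an action supports
# uses (costing, inspection, modification) quite different from performing
# the action itself. The recipe format here is invented for illustration.

recipe = [("mix", ["eggs", "flour", "sugar"]), ("bake", 30)]

def estimated_minutes(r):
    """Use the information WITHOUT acting on it: reason about time cost."""
    mixing = 10 * sum(1 for step, _ in r if step == "mix")  # 10 min per mix
    baking = sum(arg for step, arg in r if step == "bake")
    return mixing + baking

def perform(r):
    """Use the same information to produce behaviour."""
    for step, arg in r:
        print(f"performing {step} with {arg}")

print(estimated_minutes(recipe))   # costing the action produces no cake
perform(recipe)                    # only this produces the behaviour
```

Estimating the cost manipulates the representation and produces no cake; only `perform` uses it to generate behaviour, which is the sense in which the representation does not "stand in for" the thing represented.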
12. AI and Evolutionary Theory
• The study of biological evolution has many facets.
• Evolution of physical forms has been studied most, partly because that is what is most
directly evident about differences between animals, and partly because much fossil
evidence is about physical form.
• In recent years, study of the evolution of genetic makeup in DNA has accelerated.
• Some people try to understand the evolution of behaviour and intelligence by
attempting to draw conclusions from the physical forms of animals, and from
evidence of the products of behaviour.
• In the case of micro-organisms and some insects it is also possible to observe
behavioural changes across generations, e.g. evolution of resistance to disease in
some plants and resistance to medical treatments in some pathogens, but not for
animals that evolve more slowly, like humans.
• In any case, observing changes in behaviour is different from understanding what
produces those changes.
Observing what animals do, when they do it, and which animals don’t do it leaves open the deeper
questions:
How do they do it, and how did that competence evolve or develop?
13. AI-inspired biological questions
Many aspects of biological evolution look different from the standpoint of
designers of behaving systems:
That standpoint raises the question:
How could such and such information processing capabilities have
evolved?
And a host of subsidiary questions including how behaviours can be programmed through DNA, or
some combination of DNA and the developmental environment.
This is now part of the Turing-inspired Meta-Morphogenesis Project:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
which includes an attempt to analyse the varieties of transitions in information-processing that occur
in biological evolution, in development and in learning, illustrated in this list:
http://tinyurl.com/CogMisc/evolution-info-transitions.html
• People who have tried to design behaving systems can more easily identify the
problems evolution must have solved than people who merely observe, measure, and
experiment on living systems.
• In particular, trying to design a working system teaches us a lot about what sorts of
information the system will need, what forms of representation will not work, etc.
• However, at present, most of the competences of animals are more sophisticated than
anything we can get our robots to do – because AI has only been going since the mid
1950s, whereas evolution had many millions of years, building many layers of
competence that we are nowhere near replicating, except in very special cases.
14. Evolution of language and language learning
Common assumptions about language and its evolution
It is commonly assumed that:
• The primary or sole function of language is communication between individuals
though there are derived mental functions such as planning, reminiscing, theorising, idly imagining....
• Language initially evolved through primitive forms of communication
– Vocal communication according to some theories
– Gestural communication according to other theories;
(E.g. see paper by Fadiga L., Craighero L. 2007)
• Only after a rich external language had evolved did internal uses of language evolve;
E.g. evolution produced short-cuts between brain mechanisms so that people could talk to
themselves silently, instead of having to think aloud. (Dennett?)
The designer stance challenges such popular views of evolution of
language, and replaces them with more subtle and complex questions.
What are the information-processing requirements of tasks performed by pre-verbal
children and by animals that do not use external languages like human languages?
What are the information-processing requirements of language learning and
understanding: Can heard sentences be understood without use of some internal means
of representing information?
Not necessarily Fodor’s fixed, innate “Language of thought”: what alternatives are possible?
15.–21. Rival views about evolution of human language
1. First there were expressive noises which gradually became more
differentiated and elaborate and then were “internalised”.
2. First there were expressive gestures, then noises, then as in 1.
(On views 1 and 2, only after that did thinking, planning, reasoning, hypothesising
and goal formation become possible.)
3. First there were internal representations used for perceiving, thinking,
forming goals, forming questions, planning, controlling actions;
later, external forms developed for communicating meanings.
Two options
• externalisation was first gestural
• externalisation was first vocal
NB: Do not assume such internal representations must be like
Fodor’s LOT (Language of Thought).
(For reasons explained later.)
All the above options allow for the possibility that the existence of external languages and
cultures produced evolutionary and developmental pressures that caused internal
languages to acquire new functions and more complex forms.
Our question is: what evolved first:
• external human languages?
• internal languages with core properties of human language?
A similar question about what comes first can be asked about individual development.
WHAT CORE PROPERTIES?
22. How most(?) people think about human language
• It is essentially a means of communication between separate individuals,
though there are derived mental functions such as planning, reminiscing, theorising, idly imagining.
• It is essentially vocal, though there are secondary means of expression
including sign languages, writing, specialised signalling systems (e.g. morse code, semaphore), ...
• It (mostly) uses a discrete linear medium, though it can encode non-linear
information-structures, e.g. trees, graphs.
• Each language has a syntax with unbounded generative power, and compositional
semantics (plus exceptions and special cases).
• It evolved from primitive to complex communication, and was later “internalised”.
• Individual humans acquire linguistic competence by finding out what languages are
used in their environment and somehow acquiring their rules, vocabulary, and
ontology, in a usable form. The acquisition process
– EITHER uses a specialised innate “language acquisition device” LAD (Chomsky),
– OR uses general learning mechanisms and general intelligence (the current majority view??)
• Only humans have linguistic abilities naturally, though there are some other animals
that can, under very special circumstances, be trained to use a tiny restricted subset.
We introduce a more general concept: Generalised-Language (GL).
Human communicative language is a special subset.
Pre-existing internal GLs are required before human languages can exist.
This challenges most of the bullet points above.
23. Our theory summarised:
Languages evolved for internal purposes first, though not in the form in
which we now know language, but with key features (described later) in
common.
• As a result of this internal use, complex actions, especially actions based on complex
intentions and plans, naturally became means of communication
• Humans evolved a sophisticated sign language capability
• Then spoken language took over
• But the evolutionary heritage of gestural language remains with us.
(E.g. Some Down Syndrome children have difficulty learning to talk, but they learn a sign language
much more easily.)
Note: The above claims do not deny that once language for communication developed, that helped to
accelerate the development of human competences, partly through cultural and social evolution and partly
through development of brain mechanisms well suited to learning from other humans.
What are the main features of a language that give it its power?
Three features will be presented later.
24. Some requirements for human and animal competences
A few reminders:
• Humans and other animals can take in information about objects and situations of
widely varying complexity.
• We can notice and reason about portions of a situation that can change or move in
relation to others, producing new situations.
• We can think about some of these things even when we have never seen them happen
and what we are thinking about is not happening.
• We can use these abilities in coping with dangers and opportunities in the
environment, and in planning and controlling actions so as to use opportunities and
avoid or solve problems in the environment.
• All those competences involve abilities to acquire, manipulate and use information
about things that exist or could exist.
• That means we need mechanisms for creating, manipulating, storing, using, deriving
new internal structures that encode information.
I.e. there are mechanisms for using internal languages.
• How to do that is a major topic in research in AI – there has been some progress, but
there are still many unsolved problems.
25. Some important features of a language
What are the main features of a language (external or internal) that give it
its power?
• Structural variability in what is expressed and in the form of expression.
So that different sorts of things, of varying complexity can be described or represented, e.g.
- a dot on a blank surface
- a collection of dots
- a collection of dots and lines
- a plan of the furniture in a room
- a plan of a house
- a generalisation about houses
• Compositional semantics – allowing more complex meanings to be built up from
simpler meanings.
Linguistic expressions are “combinatorial rather than wholistic”.
Meaningful components can be combined in different ways to express different things, including
totally new things, e.g.: I ate a yellow crocodile with red spots for breakfast yesterday.
• Use for reasoning (predicting, explaining, hypothesising, planning) by manipulating
and recombining parts of complex representing structures.
• Use for expressing motives, preferences, goals, values, etc.
So that you can derive new predictions, plans, summaries, generalisations, explanations, hypotheses,
designs, maps, computer programs, etc. from old information.
Illustrate with SHRDLU demo.
For a non-interactive video of the program running see demo 12 here
http://www.cs.bham.ac.uk/research/projects/poplog/figs/simagent
26. Generalising features of language
We can generalise the three features commonly thought to be core
features of human language, as follows:
A language with structural variability, compositional semantics and
means of making inferences
(a) need not be composed of things we would recognise as words:
e.g. think of musical notations, circuit diagrams, maps, charts, pictures, models of
chemical molecules, computer programs, and interactive graphical design tools
(b) need not be used for communication:
e.g. it may be used entirely inside a perceiver, thinker, planner or problem-solver,
including uses for formulating goals, questions, hypotheses, plans, and percepts
etc.
Let’s use the label “Generalised Language” (GL) to refer to a form of expression or
representation that has
– structural variability,
– compositional semantics,
– means of making inferences,
which is capable of being used for any information-processing purpose at all,
communicative or non-communicative.
27. Reasoning with spatial structures
Will the pen hit the rim of the mug if moved downwards?
In the scenes depicted, you can probably
distinguish cases where the answer is clear
from cases where the answer cannot be
determined.
Where the answer is clear you can find the
answer by imagining the pen moving down
between the rectangular sheets, and working
out whether it will hit the rim or not.
This is a simple illustration of a general point:
we often reason spatially by visualising a view
of some configuration and imagining parts
moving around and seeing what happens.
Where the answer is uncertain, because of
some ambiguity in what you see, you can
probably imagine a way of moving left or right,
or up or down, so as to remove, or reduce the
uncertainty.
I argued in Sloman 1971 that visualisation can provide valid inferences, just as logical reasoning can, and
that AI researchers need to investigate such modes of inference.
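The imagined downward motion can be mimicked by a crude analytical sketch (all coordinates and the rim thickness below are invented; this is a stand-in for visualisation, not a model of it). Uncertainty in what is seen is represented as a set of possible pen positions:

```python
def outcome(pen_x, rim_left, rim_right, rim_thickness=0.5):
    """Result of moving a pen straight down from horizontal position pen_x."""
    if rim_left + rim_thickness < pen_x < rim_right - rim_thickness:
        return "falls inside the mug"
    if pen_x < rim_left or pen_x > rim_right:
        return "misses the mug"
    return "hits the rim"

def judge(possible_pen_positions, rim_left, rim_right):
    """If perceptual ambiguity leaves several possible pen positions, the
    answer is clear only when every position yields the same outcome."""
    results = {outcome(x, rim_left, rim_right) for x in possible_pen_positions}
    return results.pop() if len(results) == 1 else "cannot be determined"

judge([5.0, 5.2], 3.0, 8.0)   # answer clear despite slight uncertainty
judge([2.9, 3.1], 3.0, 8.0)   # ambiguous view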
28. Main Theses
Main Theses
(a) GLs evolved in biological organisms for internal uses, before human
languages developed for external communication
where internal uses included perception of complex situations and formation and execution of
complex plans for action,
(b) GLs develop in pre-verbal human infants before they learn to use a
human language for communication.
For examples of infant competences see
E. Gibson & A. Pick
An Ecological Approach to Perceptual Learning and Development, OUP, 2000
The main evidence for (a) is the fact that many non-human animals that do not
communicate in anything recognisable as a human language, nevertheless have
competences, which, from an AI standpoint, seem to require the use of internal GLs.
SHOW SOME VIDEOS, OF CHILDREN AND ANIMALS.
(See list at end of these slides.)
29. Conjecture: Gestural languages came first
If one of the uses of GLs was the formulation of executable plans for action,
then observing someone’s action could provide a basis for inferring
intentions: so actions could communicate meanings that had previously
been expressed in internal GLs.
• In that case involuntary communication of plans by executing actions came first.
• The usefulness of such communication could have led to voluntary gestural
communication, e.g. during performance of cooperative tasks.
• Since there was already a rich internal GL used for perceiving, thinking, planning,
acting, etc. there could be both motive and opportunity to enrich actions to extend
their voluntary communicative functions.
The fact that there are already rich and complex meanings (including plan-structures) to be
communicated, and benefits to be gained by communicating them (e.g. better cooperation) makes
the evolution of rich forms of communication more likely.
• There are many explanations of the pressure to switch from gestural language (sign
language) to spoken language, but that required complex evolution of the physiology
of breathing, swallowing, and control of vocalisations.
• Empirical evidence of the primacy of sign languages:
The example of Nicaraguan deaf children, and Down’s syndrome children.
(explained below)
30. MORE ON CORE PROPERTIES OF GLs
Human languages (including sign languages) use many formats and have
many features.
Earlier, I described three core properties required for using language in
relation to novel situations, for multiple uses, all found in both external
human languages and internal GLs.
• Structural variability:
Linguistic utterances can include varying numbers of distinct components and are not restricted to flat
vectors but can have deeply nested substructures, with pronouns, other forms of anaphora and
repeated elements providing cross-links.
Familiar labels for this property include: ‘generative’ and ‘productive’.
An implication is that not everything that can be communicated has to be learnt, or previously agreed.
• Compositional semantics:
Novel structures can be given a meaning in a systematic way on the basis of the meanings of the
components and the mode of composition (i.e. structural or syntactic relationships between the
components).
• Manipulability: (a consequence of the previous two)
Meaningful structures can be extended, modified or combined for various purposes, discussed later.
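The three properties can be illustrated with a toy sketch (the predicate and object names are invented): structures of varying depth, built compositionally, are extended, modified and combined to derive new meaningful structures:

```python
# Meaningful structures as nested tuples: structural variability comes from
# arbitrary nesting; manipulability from operations that derive new structures.
belief = ("on", "block_a", "table")
goal = ("on", "block_a", "block_b")

def negate(p):                    # extend a structure
    return ("not", p)

def conjoin(p, q):                # combine structures
    return ("and", p, q)

def substitute(p, old, new):      # modify a structure, at any depth
    if p == old:
        return new
    if isinstance(p, tuple):
        return tuple(substitute(part, old, new) for part in p)
    return p

precondition = conjoin(belief, negate(goal))
generalised = substitute(precondition, "block_a", "X")   # a novel structure
```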
I now explain in more detail how the idea of “compositional semantics” needs to be
generalised to meet all the requirements for internal GLs.
31. Standard compositional semantics
Conventional compositional semantics:
New combinations of words, phrases, pictures, and other components of
meaningful structures, are understood because the meaning of a whole
is determined by two things:
• the meanings of the parts
• the way the parts are assembled to form the whole.
However, that does not account for all uses of linguistic complexity.
The non-linguistic context in which information structures are combined
also often helps to determine what the combination means for the user.
• Often it is not just the internal structure: the components of the representation and
their relationships do not suffice to determine the semantics of a complex whole.
• Aspects of the context both inside the user of the representation and in the external
environment are often important in determining what is expressed.
(This is obvious with indexicals such as “now”, “here”, “this”, “you”.)
• So the standard notion of compositional semantics does not account for all uses of
representational complexity, unless we think of every syntactic construct as having an
extra argument: the current context (which may or may not all be shared between
speaker and hearer). (Compare the notion of a “rogator” in (Sloman, 1962, 1965).)
(Examples follow.)
32. Examples of generalised compositional semantics
Generalised (context-sensitive, situated) compositional semantics:
Meanings of complex wholes are determined by three things:
(a) meanings of parts,
(b) the way the parts are assembled to form the whole, and
(c) the linguistic and non-linguistic context (obviously true for indexicals, e.g. “this”, “here”, etc.) E.g.
• the physical environment
• the goals of the speaker and hearer
• current tasks in progress ... and other things
Examples:
• “Put it roughly there.”
You don’t have to be told exactly where, and there is no semantic rule determining the location.
You have to use your judgement of the situation in selecting a location.
• “If you can’t see over the wall, find a big box to stand on.”
You don’t have to be told exactly how big – use your understanding of what the box is wanted for.
• “The wind will blow the tarpaulin away so let’s put a pile of stones on each corner.”
No semantic rule says how many stones make a pile: you know there must be enough at each corner
to keep the tarpaulin down when the wind blows – how many depends on how strong the wind is.
The role of context in compositional semantics generalises H.P. Grice’s “Cooperative principle” and his “Maxims of
communication”, to include internal languages.
For more on this see:
Spatial prepositions as higher order functions: and implications of Grice’s theory for evolution of language.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0605
33. Illustrating compositional semantics
The notion of ‘compositional semantics’ was proposed by Frege and
Peirce, and is normally summarised something like this
The meaning of a complex expression is determined by the meanings of its parts, and
the way in which those parts are combined.
So a semantic function S, which derives semantic content from the syntactic structure
F(X, Y, Z)
can be expressed as
S(F(X, Y, Z)) = S(F)(S(X), S(Y), S(Z))
For example, to evaluate the arithmetic expression: sum(33, 66, 99)
apply the procedure denoted by ‘sum’, i.e.
S(‘sum’),
to the numbers denoted by the symbols
‘33’, ‘66’ and ‘99’.
In other words: apply
S(‘sum’)
to
S(‘33’), S(‘66’) and S(‘99’)
All this will be familiar to every programmer, and even more familiar to compiler writers.
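A programmer’s rendering of the schema might look like this minimal evaluator (the operator table is an invented illustration): the semantic value of a compound is the semantic value of its operator applied to the semantic values of its parts, with no reference to anything outside the expression:

```python
SEMANTICS = {"sum": lambda *xs: sum(xs), "max": lambda *xs: max(xs)}

def S(expr):
    """Purely compositional semantic function."""
    if isinstance(expr, tuple):              # F(X, Y, Z)
        f, *args = expr
        return S(f)(*(S(a) for a in args))   # S(F)(S(X), S(Y), S(Z))
    if expr in SEMANTICS:                    # a function symbol
        return SEMANTICS[expr]
    return expr                              # a numeral denotes its number

S(("sum", 33, 66, 99))                       # → 198
```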
34. Generalising the formulation of compositional semantics
We generalise the familiar notion of compositional semantics by taking
account of context and current goals (C, G) in the interpretation
We replace this equation
S(F ( X, Y, Z )) =
S(F)( S(X), S(Y), S(Z) )
with an equation in which the interpretation of every component, at every level, is
(potentially) influenced by context and goals:
S(F ( X, Y, Z )) =
S(F,C,G)(S(X,C,G), S(Y,C,G), S(Z,C,G))
Neither C nor G is a component of the representation: rather they are parts of the
“environment” accessed by the processes interpreting and using the representation.
Compare C with the interpretation of non-local variables in computer programs.
In some contexts one or both of C and G may not be needed.
For instance in many mathematical and programming contexts, such as evaluation of arithmetical
expressions, C and G will not be needed.
However in much human communication C and G (Context and Goals) are both needed.
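A minimal sketch of the generalised equation in Python (the indexical, the vague term, and the context/goal values are all invented for illustration): C and G are parameters of the interpretation process, not components of the expression being interpreted:

```python
def S(expr, C, G):
    """Context- and goal-sensitive semantic function: S(expr, C, G)."""
    if isinstance(expr, tuple):
        f, *args = expr
        return S(f, C, G)(*(S(a, C, G) for a in args))
    if expr == "here":                 # indexical: resolved from context C
        return C["speaker_location"]
    if expr == "big-enough-box":       # vague term: resolved using the goal G
        return C["tallest_box"] if G == "see over the wall" else C["any_box"]
    if expr == "fetch":
        return lambda obj: ("fetch", obj)
    return expr

C = {"speaker_location": (3, 4), "tallest_box": "crate", "any_box": "shoebox"}
S(("fetch", "big-enough-box"), C, G="see over the wall")   # → ("fetch", "crate")
```

The same expression interpreted with a different goal picks out a different box: the semantics of the parts, not just the whole, is context sensitive.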
For more on this, see the discussion paper mentioned above:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0605
35. A diagrammatic version
Generalised (context-sensitive, situated) compositional semantics can
be explained diagrammatically:
New combinations of words, phrases
and other components are understood
because the meaning of a whole is
determined by three things
• the meanings of the parts
• the way the parts are assembled to
form the whole
• linguistic and non-linguistic aspects
of the context, including
– the physical environment
– the goals of the speaker and hearer
– current tasks in progress ... and other
things
Formally, we can think of every syntactic construct (every box) as having extra arguments
that enrich the interpretation: the current context and current goals (which may or may not
all be shared between speaker and hearer). The contexts that enrich the semantics
(green and red arrows) may come from inside the symbol user, or from the external
physical or social environment.
36. Manipulation requires mechanisms
The mere fact that a form of representation supports manipulability as
explained above does not in itself explain how actual manipulation occurs
in any machine or animal.
That requires mechanisms to be available that can construct, modify, combine, store,
compare, and derive new instances of representations.
E.g. new phrases, new sentences, new stories, new plans, new diagrams, new working models
If an animal or machine has a large repertoire of information and mechanisms, selecting
the appropriate ones to use can itself require additional mechanisms and additional
information about how to use the resources.
AI systems typically have powerful abilities, but current systems don’t know that they have
them; nor can they choose which ones would be best to use: except by following simple
pre-programmed rules, which they don’t learn, and don’t modify.
That will need to be changed.
At present we still have a lot to learn about how to build mechanisms that grow
themselves in a machine with human-like competences.
37. What does “internalising language” mean?
What does the blue part of this common assumption mean:
External human language evolved from primitive to complex communication,
and was later internalised. (NB: I am not defending this claim: I think it is wrong!)
The reference to being internalised could mean something like this:
• Evolution several times extended brain functions so that mechanisms that originally
evolved for peripheral modules become available for purely internal uses
e.g. visual mechanisms later used for imagining?
• Modules evolved for linguistic communication were later modified for internal use, in
something like this sequence of steps (e.g. proposed in Dennett 1969?):
– After external languages evolved for communication, humans discovered that it could sometimes
be useful to talk to themselves, e.g. when making plans, solving problems, formulating questions ...
– Subsequent evolutionary changes enabled talking silently: i.e. brain mechanisms became able to
provide inputs directly to the speech input portions of the brain, instead of having to route them
externally.
– This made it possible to construct internal meaningful, manipulable linguistic structures that could
be used to think, plan, reason, invent stories, solve problems, construct explanations, remember
what has happened, etc.
(Daniel Dennett, Content and Consciousness, 1969.)
However, such theories of “internalisation” ignore the internal representational (GL)
mechanisms required for external language use in the first place. (Sloman 1979)
38. Biological relevance
THESIS: Some animal competences and some competences of
pre-linguistic children need richly structured internal, manipulable forms of
representation with context-sensitive compositional semantics, which are
constructed and used for perception, reasoning, planning and generation
and achievement of goals related to complex features of the environment.
• I have tried to bring out some of the possible uses of GLs with the three core
properties: structural variability, compositional semantics, manipulability.
(Later generalised to include spatial – e.g. diagrammatic – forms of representation).
• We can point to many competences displayed by prelinguistic children and some other
species that are hard to explain without the use of GLs
Examples include nest-building, hunting, dismembering a carcass in order to eat it, playing with toys,
using tools, making tools, fighting with others, collaborating with others.
In particular both Humean and Kantian causal reasoning require use of GLs, though in different ways.
• An important point I shall not have time to go into is the need for specific forms of GL
that provide meta-semantic competences, e.g. the ability to represent and reason
about one’s own or others’ goals, beliefs, thought processes, preferences, planning
strategies, etc. (So-called “mentalistic” vs “mechanistic” cognition).
For more on that see
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0604
Requirements for “fully deliberative” architectures.
39. Direction of fit of GL structures to the world
Many information structures (but not all!) are used to refer to some portion
of the world and represent that portion as having certain features, possibly
quite complex features:
in principle such things can be true or false, or in some cases more or less accurate or
inaccurate, more or less close to being true, etc. all depending on how the world is.
Various philosophers (e.g. Anscombe, Austin, Searle) have pointed out that two major
kinds of use of such structures can be distinguished:
• where the information-user tends to construct or modify the representation so as to
make it true or keep it true (belief-like uses)
• where the user tends to monitor and alter the world so as to make or keep the
information structure true (desire-like uses).
Sometimes referred to as a difference in “direction of fit” between beliefs and desires.
The distinction also has a clear role from the standpoint of designers of robots or other intelligent systems,
though, as I’ve shown elsewhere, there are more intermediate cases to consider in complex,
multi-functional machines (e.g. animals).
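The two directions of fit can be sketched in a few lines of Python (the thermostat-style example and its numbers are invented): the same kind of information structure plays different roles, distinguished by its causal role rather than its form:

```python
def belief_like(store, sensed_temperature):
    """Belief-like use: change the representation to fit the world."""
    store["temperature"] = sensed_temperature
    return store

def desire_like(target, world):
    """Desire-like use: act on the world to make it fit the representation."""
    world["heater_on"] = world["temperature"] < target["temperature"]
    return world

belief = belief_like({}, sensed_temperature=15)   # representation updated by sensing
target = {"temperature": 20}                      # same form, different causal role
world = desire_like(target, {"temperature": 15, "heater_on": False})
```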
These ideas about belief-like and desire-like states of an organism or machine are developed further in:
A. Sloman, R.L. Chrisley and M. Scheutz,
The architectural basis of affective states and processes, in
Who Needs Emotions?: The Brain Meets the Robot, Eds. M. Arbib & J-M. Fellous, OUP, 2005, pp. 203–244,
http://www.cs.bham.ac.uk/research/cogaff/03.html#200305
40. Desires, beliefs and direction of fit
Content vs function of mental states
Both beliefs and desires can be checked
against current perceptual input, but the
consequences of mismatches are different.
What makes something a desire, or belief, or
fear, or idle thought depends not on the form
of the information structure, nor its medium,
but on its causal role in the whole architecture.
Simple architectures allow for only simple
causal roles, whereas more sophisticated
architectures allow information structures to
have very varied causal roles.
To understand fully the variety of functions
served by GLs in a particular type of animal
(or machine) we would need to have a detailed
specification of the information-processing
architecture.
We are not ready for that yet!
See the presentations on architectures here
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
41. Varieties of uses of internal GLs
Within an organism or robot, a GL structure may have many different kinds
of use: depending on the conditions under which it is created, how it is
used, what sorts of things modify it and when, and what effects it has and
what sorts of things can affect it. For example,
• The use of representations in perceptual subsystems is related to one direction of fit
(produce information structures that represent how things are)
• Their role in motivational subsystems is clearly related to the other direction of fit
(change the world so that an information structure represents how things are.)
• An organism’s or robot’s ability to have very diverse beliefs, desires and competences
is connected with the structural variability and compositional semantics of its GLs.
• GLs can be substantially extended during development: they are not innately given.
• Some representations need to endure and be usable in different contexts (e.g. facts,
values, competences), whereas others are needed only transiently (e.g. feedback).
• The conditions for a GL to be used for planning several steps ahead are different from
the conditions for using information for online control of continuous actions.
The former requires more complex virtual machines that evolved much later and in relatively few
animals, and benefits from an animal’s ability to represent states of affairs and processes
independently of the sensory and motor signals involved in perceiving or producing them, using an
amodal, exosomatic ontology.
I suspect confusion about so-called mirror neurones can arise from a failure to understand that point.
(Should they have been called ‘abstraction neurones’?)
42. Other uses of GL structures in humans
Besides expressing semantic contents for desire-like and belief-like states,
GL structures can have a wide variety of causal roles, depending not only
on their location in the architecture, but also on their form and the
mechanisms available for manipulating them. E.g.
• Comparing and evaluating things, states of affairs, possible actions, goals, policies, ...
• creating more or less complex plans for future actions
• using a plan to control actions (either continuously, as in visual servoing, or step-by-step)
• synchronising concurrent processes, or modulating ongoing processes
• expressing a question,
i.e. constructing a GL structure that directs a search to determine whether it is true or false, or how it
needs to be modified or expanded to make it true.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0502
• considering unobserved possibilities to explain what has been observed,
• predicting things that have not yet happened
(e.g. Humean or Kantian causal reasoning),
• fantasizing, e.g. wondering what would have happened if,
• inventing stories
• day-dreaming
• meta-management functions (making use of meta-semantic competences).
Most animals, and current robots, have much simpler information processing
competences.
43. A consequence of the core features
A consequence of the core features is that it is possible to produce
well-formed linguistic expressions for which the compositional semantics
will produce an impossible (internally inconsistent) interpretation.
E.g. Consider this conjunction
Tom is taller than Mary
and Mary is taller than Jane
and Jane is taller than Dick
and Dick is taller than Tom
If
(a) ‘Taller than’ has its normal meaning
(b) Each repeated occurrence of the same name refers to only one individual
then
That conjunction is inconsistent:
not all conjuncts can be true simultaneously.
We’ll see a similar kind of inconsistency in non-verbal forms, on the next slide.
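The inconsistency in the conjunction is a cycle in a strict (irreflexive, transitive) order. A small sketch, assuming the assertions are represented as a directed graph (function and variable names invented for illustration):

```python
# Detect inconsistency in a set of 'taller than' assertions.
# A strict order admits no cycles, so any cycle makes the
# conjunction jointly unsatisfiable. Illustrative sketch only.

def taller_cycle(assertions):
    """assertions: list of (a, b) meaning 'a is taller than b'.
    Returns True if the assertions are jointly inconsistent."""
    graph = {}
    for a, b in assertions:
        graph.setdefault(a, set()).add(b)

    def reaches(start, target, seen=None):
        seen = seen or set()
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen and
                                 reaches(nxt, target, seen | {nxt})):
                return True
        return False

    # Inconsistent iff someone is (transitively) taller than themselves.
    return any(reaches(a, a) for a in graph)

conjunction = [("Tom", "Mary"), ("Mary", "Jane"),
               ("Jane", "Dick"), ("Dick", "Tom")]
```

With all four conjuncts the cycle is detected; drop the last conjunct and the remaining three are jointly satisfiable.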
44. Non-verbal forms of representation can also be inconsistent
As shown in this picture by Oscar Reutersvärd (1934)
Inconsistency of an information structure implies that
• if it is adopted as a belief it will be a necessarily false belief,
• if adopted as a goal it will be a necessarily unachievable goal, and
• if constructed as a percept it will be a perception of an impossible state of the world.
(illustrated later)
(Compare G. Frege on failure of reference.)
NOTE: Very young children will not see this picture and similar pictures as depicting impossible objects.
Why not? What forms of computation are required that they (and other animals) could lack?
45. Building a configuration of blocks - 1
Given a collection of cubes and rectangular blocks could you arrange them to look like this?
Think of locations to which you could move the ‘loose’ cube on the left.
46. Building a configuration of blocks - 2
Moving one cube, could you re-arrange them to look like this?
Some young children will say ‘yes’.
What has to change for them to be able to detect the impossibility?
47. Another generalisation: Non-verbal forms
The three core features of human languages (structural variability,
generalised compositional semantics and manipulability) are also
features of many non-verbal forms of representation.
Given a map, a flow-chart for an algorithm, a circuit diagram, or a picture of an object to
be constructed, more components can go on being added, sometimes indefinitely.
If we use paper, or some other markable surface, it is possible to
• expand a picture or diagram outwards,
• add more internal details (e.g. extra lines),
but eventually there is a ‘clutter limit’ because the structure is not stretchable.
(Other kinds of limits relate to short-term memory constraints.)
Structural variability of such spatial forms of representation has recently been enhanced by the use of film
or computing techniques that allow zooming in and out to reveal more or less of the ‘nested’ detail.
It is possible that virtual machines evolved in brains allow such ‘zooming’ in and out, though precise
requirements for such a facility to be useful still need to be specified.
The retinoid model of Arnold Trehub’s The Cognitive Brain (MIT Press, 1991) may be an example.
http://www.people.umass.edu/trehub/
Sloman 1971 (ref. at end) describes more precisely a distinction between “Fregean” and “analogical”
forms of representation, claiming that both can be used for reasoning, planning, and proofs.
This was a criticism of the “Logicist” AI approach expounded by McCarthy and Hayes, in 1969.
48. Compositional semantics and structural variability in vision
Your familiarity with the role of low level pictorial cues in representing features like edges,
orientation, curvature of surfaces, joins between two objects or surfaces, etc., allows you
to use compositional semantics to see the 3-D structure, and some causal and functional
relationships, in pictures you have never previously seen.
No AI vision system comes close to being able to do that – yet.
http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane/
49. Different combinations of the same elements
What do you see in these pictures? Only 2-D configurations?
Notice how context can influence interpretation of parts
50. A droodle: Can you tell what this is?
Droodles depend heavily on the fact that interpretation of visual GL
instances can be partly driven by sensory data and partly by
verbal hints (“top down”).
51. Possible answers to droodle question
“Early bird catches very strong worm?”
“Sewer worker attacked by a shark?”
Interpretation of visual scenes can include perception of
causal relationships, as in both the above droodle
interpretations.
There is much to be said about droodles, but no time today.
Perceptual combination of spatial and causal relationships
is also needed in use or construction of tools: e.g. shape of
a spanner’s head.
When objects share a region of space, indefinitely many different kinds of structural and
causal relationships can be perceived and interpreted: in contrast with the constrained,
rule-based, use of syntactic relations in human formal and informal languages.
Show broom video, available here (with others)
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/sloman/vid/
Long before children can talk, they can take in and make use of structural relationships in
the environment in order to produce and control actions.
That’s in addition to their ability to manipulate continuously changing dynamical systems,
e.g. maintaining balance while walking, reaching, etc.
Likewise many other animals.
52. Perceiving spatial structure vs creating images
Information structures in a spatial GL should not be confused with images
An image is a very large collection of small image features,
which may include colour, brightness, texture, edge-features, optical flow, and various kinds of gradient,
and various metrical and qualitative ‘low-level’ relationships such as brighter, same colour, coarser
textured, so many degrees apart, etc.
For pictorial or spatial GLs to be useful in the ways described, they must be composed of
larger structures with more global relationships not restricted to simple metrical
comparisons.
This topic was much discussed by AI vision researchers in the 1960s. See
S. Kaneff (ed) Picture Language Machines, Academic Press 1970.
and http://hopl.murdoch.edu.au/showlanguage.prx?exp=7352&language=Clowes
The larger structures
• may be image components like lines, regions, polygons, with relationships like touching, enclosing,
overlapping, being collinear, approaching, etc., OR
• they may be representations of 3-D or other objects and processes represented by the 2-D structures,
e.g. fingers, pools, planks, rocks, rivers, trees, trains going into tunnels, etc., with static and changing
3-D and causal relationships, e.g. supporting, penetrating, grasping, pushing, going behind, etc.
For the user of the GL to be able to perform manipulations and transformations that are useful for tasks like
predicting, planning, explaining, formulating questions, it is necessary to do something like parsing of the
representations, i.e. segmenting them into relatively large components with relationships, so that either
components or relationships can be changed.
This is quite unlike what is called “image processing”, e.g. enhancing images or applying global
transformations to them, such as convolution.
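The contrast between global image processing and “parsing” a spatial representation into components with relationships can be made concrete. A hypothetical sketch (all component and relation names invented for illustration, not a model of any actual vision system):

```python
# Contrast: an "image" (array of low-level local features) versus a
# parsed spatial structure (large components plus explicit relations).
# Illustrative sketch only; all names are hypothetical.

image = [[0, 0, 1], [0, 1, 1], [1, 1, 1]]   # low-level features only

# A parsed structure: components with explicit relations, so that either
# components or relationships can be changed independently.
scene = {
    "components": {"block_a": "cube", "block_b": "cube", "table": "surface"},
    "relations": [("block_a", "on", "block_b"),
                  ("block_b", "on", "table")],
}

def supports(scene, below, above):
    """Query a structural relation no pixel array exposes directly."""
    return (above, "on", below) in scene["relations"]

def move(scene, obj, new_support):
    """Manipulate one relation while everything else stays intact."""
    scene["relations"] = [(a, r, b) for (a, r, b) in scene["relations"]
                          if not (a == obj and r == "on")]
    scene["relations"].append((obj, "on", new_support))

move(scene, "block_a", "table")   # block_a now rests on the table directly
```

The `move` operation is the kind of segmented manipulation described above; no convolution or other global image transformation could express it.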
53. Making an “H”
Making a capital “H” using an elastic band and pins
Suppose you had an elastic band and a pile of pins:
could you use the pins to hold the stretched rubber band
in the form of a capital “H”?
What sort of GL is needed to make it possible to answer
such a question?
• How many pins would you need?
• Could you do it using only one hand?
• In what order would you insert the pins?
• How many pins would be inside the band and how many outside?
• Could you do it if the pins were replaced with marbles?
You can probably answer the questions in two ways: by trying physically and examining what happens, and
by merely thinking about it and examining what happens.
• A very young child will not be able either to construct the H physically, or to answer the questions.
• You are probably able to answer the questions just by thinking about the construction processes and the
result.
• What is your brain doing while you visualise the process of creating the final configuration?
• Do you first visualise the final configuration, and then make a plan for constructing it, or do you get to
the final configuration by making a plan, or visualising the construction process?
• What is your brain doing while you count the imagined pins, inside or outside the band?
54. Major problems for vision researchers
Relationships between static complex 3-D objects involve many relationships between
parts, some metrical, some topological, and some causal/functional. I.e. relationships
between complex, structured, objects
are multi-strand relationships.
When processes occur involving changing or moving 3-D objects, many relationships
can change at the same time:
they are multi-strand processes.
• The changes are not just geometrical.
They can include changing causal and functional relationships
(e.g. supporting, compressing, obstructing, etc.).
• Perception of processes can include perception of changing affordances.
• I.e. perceived changes can involve several ontological layers.
We can perceive multi-strand processes in which complex 3-D objects change many
relationships at the same time. What forms of representation and what mechanisms
make that possible? As far as I know, neuroscientists have no explanations and AI vision
researchers have no working models.
For more on that see
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0505
A (Possibly) New Theory of Vision (October 2005)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#compmod07
Architectural and representational requirements for seeing processes and affordances.
(31 May 2007, BBSRC Workshop)
55. Partial summary so far
Many familiar kinds of competence involving
• perception of 3-D structures and processes,
• planning and control of actions in a 3-D environment,
• predicting and explaining processes in the environment
require the use of structured, manipulable internal forms of representation with
context-sensitive compositional semantics.
Those forms of representation, GLs, have some (but not all) features of human language,
but use additional mechanisms and are used internally for information processing.
Some of the manipulations that are possible are discrete (e.g. adding or removing an
object, or a contact or containment relation), others continuous (e.g. sliding something,
distorting a shape).
In some forms of GL, the structural and functional relationships in the interpretation arise
from spatial embedding of different parts of the same information structure: rather than
use of arbitrary or totally general syntactic conventions (as in language and logic).
Nevertheless the spatial form of representation is not a structure that is isomorphic with
what it represents.
This can be demonstrated using pictures of impossible objects.
Some of these points were made in Sloman 1971 and in Sloman 1979
56. Implications of pictures of impossible objects
The impossible pictures rule out the assumption that seeing involves building a structure
that is isomorphic with what is seen: for it is impossible to build a structure that is
isomorphic with an impossible structure.
What we (and other animals?) do must be much more subtle, general and powerful, and
connected with manipulability, structural variation, and compositional semantics, all of
which are important in seeing affordances.
The example of logic shows that it is possible to assemble coherent fragments of
information into an incoherent whole: this seems also to be what happens when we see
pictures of impossible objects, though in that case we do not seem to be using a logical
formalism.
Exactly what sort of GL suffices for the purpose requires further research.
We need to analyse requirements for GLs, including both being usable for representing what exists and
being usable for representing and reasoning about changes that are possible.
We seem to use those features of GLs in understanding many examples of causation.
Fortunately we don’t normally need to check for consistency because the 3-D environment cannot be
inconsistent.
See also http://www.cs.bham.ac.uk/research/projects/cogaff/challenge-penrose.pdf
57. Examples: To be expanded
Show Felix Warneken movies showing prelinguistic children and chimps apparently
spontaneously determining and responding to goals of an adult human.
This requires them not only to use GLs without being able to talk but also possessing
some meta-semantic competence.
http://email.eva.mpg.de/~warneken/
Warneken was mainly concerned with evidence for altruism.
I am mainly concerned with the cognitive mechanisms presupposed by the performances, whatever the
motives.
Nest building birds, e.g. corvids.
Could you build a rigid nest using only one hand (or hand and mouth), bringing one twig at a time?
Betty making hooks in different ways and using them for a common task.
Search using google for
betty crow hook
Humans can solve many problems about spatial structures and processes in their heads,
illustrated in previous slides.
58. Implications of the examples
GLs are needed for many capabilities shown by other animals and
capabilities shown by pre-linguistic children.
So they cannot be a by-product of evolution of human language.
Since GLs can express plans that can be used to control actions, and since actions can
reveal intentions, they are already well suited as a basis for generating communicative
language.
Implication: sign languages evolved first, but previous theories about how that happened
must be wrong.
E.g. theories claiming that simple external gestures arose first, then increasing complexity, then
vocalisation and finally internalisation must be back to front.
59. Not Fodor’s Language Of Thought
There is no implication in any of this that a human, or nest-building bird, or
intelligent language user, must start with an ‘innate’ (or genetically
determined) GL that suffices to express everything it will ever need to
express, so that all future meanings are definable in terms of the initial set.
Papers with Jackie Chappell investigate ways in which boot-strapping processes can
substantially extend innate competences through exploration and play in the environment
along with the ability to construct new explanatory theories to account for surprises.
This can include substantial ontology extension: introducing concepts that are not
definable in terms of previous ones, e.g. using model-based semantics and
symbol/theory-tethering.
For more on that see
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#grounding
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702
That option was not open to Fodor because he used a model of understanding based on compiled
programming languages, where all programming constructs are translated into the machine language
before the programs run.
He apparently forgot about interpreted programming languages and perhaps did not know about logic
programming languages (e.g. Prolog).
He should have known about model-theoretic semantics, but failed to see its relevance, as described in
the presentations listed above.
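The contrast with compiled languages can be illustrated: an interpreted system can acquire genuinely new definitions while it runs, so its expressive repertoire need not be fixed before execution begins. A minimal hypothetical sketch (it illustrates only run-time extensibility, not the stronger point about concepts indefinable in terms of earlier ones):

```python
# In an interpreted setting, new "concepts" (here, functions) can be
# introduced at run time, after the system has started: the initial
# vocabulary need not suffice to define everything later acquired.
# Illustrative sketch only; not a model of Fodor's or Sloman's proposals.

vocabulary = {}

def learn(name, definition):
    """Extend the running system with a new concept."""
    vocabulary[name] = definition

# Initial repertoire:
learn("double", lambda x: x * 2)

# Later extension, added while the system is running and defined
# partly in terms of earlier concepts:
learn("quadruple", lambda x: vocabulary["double"](vocabulary["double"](x)))
```

A compiled model of understanding would require every such definition to be translated into the base language before the program runs; the interpreted model does not.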
60. Unanswered questions
Despite the evolutionary continuities between humans and some other
species it is clear that there are many spectacular discontinuities
(e.g. only humans make tools to make tools to make tools .... to build things, and it
seems to be the case that only humans prove mathematical theorems, enjoy thinking
about infinite sets, tell stories to one another, etc.).
What explains these discontinuities?
We need to consider various possibilities:
• Was there some change in degree that went past a threshold whose effects were then
amplified? (E.g. some memory structure increased in size?)
• Was there a particular crucial discontinuous change in architecture, or some
mechanism, or some form of representation, after which effects cascaded?
• Were there several different changes, with independent causes, which combined to
produce new spectacularly different effects?
• other possibilities???
We don’t know enough to answer, but I suspect the first answer (a quantitative change
passed a threshold) is unlikely.
I suspect there were a few key discrete architectural changes, that modified the forms of
learning and development in humans and other altricial species (see below).
61. Language learning vs language development
If the previous observations and speculations are correct, previous
theories about language learning must be wrong!
• Previous theories imply that children do not acquire a way of representing information
that supports structural variability, compositional semantics and useful manipulability
until they have learnt an external human language, which they do by some sort of
data-mining of perceived uses of language by others.
• If our speculations are correct, the process of language learning is primarily one of
creative and collaborative problem solving in which new ways of expressing
pre-existing meanings are devised collaboratively.
• This is a process of development of internal GLs along with their extension to an
external mode of expression.
• The fact that learners are normally in a minority and can have little influence on the
outcome makes it look as if they are absorbing rather than creating.
But the Nicaraguan case shows that must be wrong. Nicaraguan deaf children rapidly developed a new
sophisticated sign language which they used very fluently and which their teacher was unable to learn.
Once humans had acquired the ability to communicate rich and complex meanings,
cultural evolution, including development of new linguistic forms and functions, could
enormously speed up transmission of information from one generation to another and
that might produce evolutionary opportunities to extend the internal GL-engines.
62. Implications for Chomskyan theories
Does all the above imply that humans have anything like the kind of innate
(genetically determined) Language Acquisition Device (LAD) postulated by
Chomsky (e.g. in Aspects of the Theory of Syntax (Chomsky, 1965)),
or is the learning of human language completely explained by general-purpose
learning mechanisms at the basis of all human intelligence?
Our theories imply that the answer is somewhere in between and
back to front.
The discussion of the need for GLs in humans and other animals implies that evolution
produced something used internally with the three core properties, thereby supporting
intelligent perception and manipulation of objects in the environment.
The use of GLs also supports the development of communicative language:
a pre-verbal child has things to communicate about
and has motives that can be served by such communication.
63. A different view of language development
The GL structures were not overtly communicated and did not use the
grammars of later human languages.
Insofar as internal GLs are partly acquired through interaction with the environment,
instead of being wholly innate, it follows that the genome of some species provides one or
more GL acquisition devices (GLADS), though they are better viewed not as completely
innate devices, but as self-extending mechanisms, whose self-extending capabilities are
themselves extended by things derived from the environment.
When communicative uses of GLs began they would have built most naturally on the role
of GLs in controlling behaviour (e.g. executing a plan), since what you do often
communicates your intentions.
That probably involved many evolutionary steps that will be hard to find evidence for.
Only later would new pressures cause vocal information structures to take over.
The additional constraints of that impoverished medium (compared with the earlier gestural GL) may
have driven both human languages and the brain mechanisms down narrow channels, further
constraining the permitted structural variability and modes of composition.
But that’s a topic for another time.
64. Routes from genome to behaviours
Cognitive epigenesis
The diagram shows different stages at
which the environment influences
processes, e.g.:
• during development of seed, egg, or
embryo, and subsequent growth
(i.e. it is not all controlled by DNA)
• triggering meta-competences to produce
new competences or new
meta-competences (e.g. after previous
competences have produced exploratory
and learning processes)
• during the triggering and deployment of the
competences to produce behaviours
Insofar as the behaviours influence the environment there can be complex developmental feedback loops.
Competences and behaviours further to the right may use several ‘simpler’ competences and behaviours
developed on the left. Diagram is from the IJUC paper with Jackie Chappell. Chris Miall helped with the diagram.
The construction of some competences should be construed as an ongoing process, with repeated
activation of the meta-competence over time.
These schematic specifications may have different sorts of instantiations in different parts of a
multi-functional architecture, e.g. in reactive and deliberative components.
In reactive components many (but not all) of the processes will involve continuous control.
In deliberative and some meta-management components much will be discrete.
65. Cascaded development and learning
If learning has to go through partially-ordered competences, where each
competence builds on what has been built in previous stages, and that
involves building new layers of brain mechanism, then that might explain
why each new GL extension can only happen at a certain stage of
development.
A particular GL cannot be added too early because it needs prior resources to provide
• the representing structures,
• the ability to manipulate them, and
• the contents that they represent.
It can’t happen too late because lots of other things are normally delayed until the
appropriate GL has got going; if that doesn’t happen they may develop anyway, but in
inferior forms, and they cannot be disassembled and reassembled later.
There may also be facts related to the sequence in which portions of brains develop.
(e.g. myelinization??)
But the stages may be only partially ordered – allowing different learners to traverse
different developmental trajectories in a network of possible trajectories.
(Compare Waddington’s epigenetic landscape.)
All this still needs to be made a lot more precise – preferably in working models.
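The idea of a network of developmental trajectories through partially ordered competences can be sketched as enumerating the orderings consistent with a prerequisite graph. All competence names below are invented for illustration; this is a toy sketch, not a developmental model:

```python
# Partially ordered competences: each can be acquired only after its
# prerequisites, but different learners may traverse different valid
# orders through the network. Hypothetical names; illustrative only.

prereqs = {
    "grasping": set(),
    "stacking": {"grasping"},
    "gesturing": {"grasping"},
    "tool_use": {"stacking", "gesturing"},
}

def trajectories(prereqs, done=()):
    """Enumerate all acquisition orders consistent with the partial order."""
    remaining = [c for c in prereqs if c not in done]
    if not remaining:
        yield list(done)
        return
    for c in remaining:
        if prereqs[c] <= set(done):            # prerequisites satisfied
            yield from trajectories(prereqs, done + (c,))

paths = list(trajectories(prereqs))
```

Here two distinct trajectories exist (stacking and gesturing can be acquired in either order), while grasping must come first and tool use last: a miniature Waddington-style landscape.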
66. The evolutionary heritage of gestural GLs
It has often been remarked that at least three remarkable facts about
humans suggest that we still retain brain mechanisms that originally
evolved in connection with external gestural GLs.
• It is hard for people to talk without gesturing, often highly creatively, even when they
are talking on the phone to people who cannot possibly see the gestures and who do
not need them – as shown by the usefulness of telephones.
• Some children with Down’s syndrome find it easier to learn a sign language than to
learn to talk normally.
• Nicaraguan deaf children rapidly developed a new sophisticated sign language which
they used very fluently and which their teacher was unable to learn.
Moreover, if all human children develop and use rich internal GLs before they learn to talk
(orally or by signing), then what we used to think of as language learning should be thought
of as language extension, since they already have rich linguistic (GL) capabilities which
they extend for communicative purposes.
Nicaraguan children showed that that should be thought of as a collaborative, creative
process of developing a means to communicate, rather than a process of doing
data-mining or induction on information collected from the environment.
In most cases the child learners are a small minority, and politically weak, so language
creation looks deceptively like language learning.
67. The implementability requirement
Remember the warning earlier about unimplemented models.
• All three of the core properties (structural variability, compositional semantics and
manipulability) have implications for mechanisms, and architectures in which they can
be combined.
• Some mechanisms cannot support structural variability,
e.g. many of those that deal only with vectors of numerical values.
• Some mechanisms have no use for compositional semantics because they do not do
any significant interpretation of the structures they operate on.
• The three core properties should be regarded as properties of virtual machines
implemented in brains not as properties of physical mechanisms:
E.g. your brain does not get rewired when you see a new scene, make an inference, create and
compare a set of plans, compose a poem in your head, ..., but a virtual network might be rewired.
For a short introduction to virtual machines and supervenience see
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#bielefeld
• Current computer-based models support only a small subset of the types of
manipulability discussed here.
Current biologically-inspired mechanisms (e.g. existing neural models) are so far inadequate for these
purposes.
• Perhaps animal brains run virtual machines no modellers have thought of yet?
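The point about vectors versus structural variability can be made concrete: a fixed-length vector has a bounded set of slots, whereas a recursively structured representation can grow new components and new levels of nesting without redesign. A minimal illustrative sketch (names invented):

```python
# Fixed-size vector: structural variability is impossible without
# redesigning the representation itself.
pose_vector = [0.0, 0.0, 0.0]        # exactly three slots, forever

# Recursive structure: components can be added, nested, and rearranged
# indefinitely, which is what structural variability requires.
scene = ("on", ("block", "A"), ("block", "B"))

def extend(structure, relation, part):
    """Embed the existing structure, unchanged, inside a larger one."""
    return (relation, structure, part)

bigger = extend(scene, "beside", ("block", "C"))
deeper = extend(bigger, "inside", ("box", "D"))
```

The vector can only have its numbers changed; the structure supports compositional growth, which is one reason purely vector-based mechanisms cannot by themselves provide the three core properties.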
68. Many unsolved problems
These slides scratch the surface of many very deep and difficult problems.
In particular, I have ignored the fact that very little is understood about
what the varied functions of visual perception are, how they work, and
what forms of representation (GLs) they use.
It does not seem to me that anyone in psychology, neuroscience, or
AI/Robotics is near answering the questions.
In particular, as far as I know there are no models of neural mechanisms
that are capable of supporting the required abilities to manipulate,
interpret, and reason about complex structures and processes that involve
geometry and topology.
See also these presentations:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#compmod07
Architectural and representational requirements for seeing processes and affordances.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#gibson
What’s vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond
69. Background to this presentation
The slides overlap with these two papers, the first of which introduced the term ‘G-language’, now ‘GL’.
Aaron Sloman and Jackie Chappell (2007).
‘Computational Cognitive Epigenetics’, in Behavioral and Brain Sciences Journal, 30(4).
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703
Commentary on: Eva Jablonka, Marion J. Lamb,
Evolution in Four Dimensions:
Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life (MIT Press, 2005)
Precis of book: http://www.bbsonline.org/Preprints/Jablonka-10132006/Referees/
Jackie Chappell and Aaron Sloman, (2007)
‘Natural and artificial meta-configured altricial information-processing systems’,
in International Journal of Unconventional Computing, 3, 3, pp. 211–239,
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609
Much earlier, less developed, versions of some of the ideas were in these three papers, all now online.
Sloman71
http://www.cs.bham.ac.uk/research/cogaff/04.html#200407
Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence,
Proc 2nd IJCAI London, pp. 209–226, Reprinted in Artificial Intelligence Journal 1971.
Describes a distinction between “Fregean” and “analogical” forms of representation, claiming that both can be used for
reasoning, planning, and proofs.
Sloman79
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43
The primacy of non-communicative language,
in The Analysis of Meaning: Informatics 5, Proceedings of the ASLIB/BCS Conference, Oxford, March 1979, Eds. M. MacCafferty
and K. Gray, Aslib, London, pp. 1–15.
Sloman78
http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#713
What About Their Internal Languages? 1978,
Commentary on three articles by Premack, D., Woodruff, G., Griffin, D.R., Savage-Rumbaugh, E.S., Rumbaugh, D.R.,
Boysen, S.
in Behavioral and Brain Sciences, 1978, 1(4), p. 515.
70. There are several other closely related joint papers by Chappell and Sloman (2005 to 2007) on the CoSy project web site:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/
We also have some slide presentations on kinds of causal reasoning in animals and robots prepared for
the Workshop on Natural and Artificial Cognition (WONAC), Oxford, 2007, here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac
71. References
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Sloman, A. (1962). Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary
truth. Unpublished doctoral dissertation, Oxford University. (http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#706)
Sloman, A. (1965). Functions and Rogators. In J. N. Crossley & M. A. E. Dummett (Eds.), Formal Systems and Recursive Functions: Proceedings of the
Eighth Logic Colloquium Oxford, July 1963 (pp. 156–175). Amsterdam: North-Holland Publishing Co. Available from
http://tinyurl.com/BhamCog/07.html#714
NOTE:
For people who are not familiar with the story of the Nicaraguan deaf children, there are various summaries
on the web including
Brief history and some links http://www.signwriting.org/nicaragua/nicaragua.html
PBS documentary including video http://www.pbs.org/wgbh/evolution/library/07/2/l_072_04.html
BBC summary http://news.bbc.co.uk/2/hi/science/nature/3662928.stm
Wikipedia summary http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
Bruce Bridgeman has a theory that overlaps significantly with the one presented here:
On the Evolution of Consciousness and Language. Psycoloquy: 3(15) Consciousness (1) (1992)
http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?consciousness.1
References involving mirror neurons and the gestural theory of evolution of language
Fadiga, L., & Craighero, L., ‘Cues on the origin of language: From electrophysiological data on mirror neurons
and motor representations’, in S. Bråten (Ed.), On Being Moved: From Mirror Neurons to Empathy,
Amsterdam: John Benjamins, 2007.
http://web.unife.it/progetti/neurolab/pdf/2007_1_Fadiga-Craighero_chapter6.pdf
There are strong connections with the work of Annette Karmiloff-Smith on “Representational
Redescription”, outlined in her 1992 book Beyond Modularity, reviewed from our viewpoint here