Theories of induction in psychology and artificial intelligence assume that the process leads from observation and knowledge to the formulation of linguistic conjectures. This paper proposes instead that the process yields mental models of phenomena. It uses this hypothesis to distinguish between deduction, induction, and creative forms of thought. It shows how models could underlie inductions about specific matters. In the domain of linguistic conjectures, there are many possible inductive generalizations of a conjecture. In the domain of models, however, generalization calls for only a single operation: the addition of information to a model. If the information to be added is inconsistent with the model, then it eliminates the model as false: this operation suffices for all generalizations in a Boolean domain. Otherwise, the information that is added may have effects equivalent (a) to the replacement of an existential quantifier by a universal quantifier, or (b) to the promotion of an existential quantifier from inside to outside the scope of a universal quantifier. The latter operation is novel, and does not seem to have been used in any linguistic theory of induction. Finally, the paper describes a set of constraints on human induction, and outlines the evidence in favor of a model theory of induction.
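To make the Boolean-domain claim concrete, here is a small sketch (an illustration of the idea, not the paper's formal apparatus): candidate mental models are represented as truth assignments, and induction adds information by eliminating the assignments that a new observation rules out.

```python
from itertools import product

def all_models(atoms):
    """Every possible truth assignment over the given atomic propositions."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))]

def add_information(models, observation):
    """Add information to the model set: any model inconsistent with the
    observation is eliminated as false -- the single operation the abstract
    says suffices for generalization in a Boolean domain."""
    return [m for m in models if observation(m)]

# Start with all four models over {raining, wet},
# then observe "raining implies wet": one model is eliminated.
models = all_models(["raining", "wet"])
models = add_information(models, lambda m: (not m["raining"]) or m["wet"])
```

The surviving set of assignments plays the role of the generalized model; further observations only ever shrink it.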
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING (cscpconf)
In the last decade, ontologies have played a key role as a technology for information sharing and agent interoperability in different application domains. In the Semantic Web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the current web to its full power and hence achieve its objective. However, using ontologies as common, shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic, and lexical mismatch. In the contribution presented in this paper, we integrate a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion, and Glue. To enhance the performance of our algorithm, the mapping discovery stage combines two sub-modules: the former analyzes the concepts' names and the latter analyzes their properties. Each of these sub-modules is itself based on a combination of lexical and semantic similarity measures.
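A hedged sketch of the name-matching sub-module described above (the synonym table is a toy stand-in for WordNet, and the weights and threshold are invented for illustration, not taken from the paper):

```python
from difflib import SequenceMatcher

# Toy stand-in for a WordNet synonym lookup (hypothetical entries).
SYNONYMS = {
    "car": {"automobile", "auto"},
    "automobile": {"car", "auto"},
    "person": {"human", "individual"},
    "human": {"person", "individual"},
}

def lexical_sim(a: str, b: str) -> float:
    """String-level similarity of two concept names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    """1.0 if the names are equal or synonyms per the lexical resource."""
    a, b = a.lower(), b.lower()
    return 1.0 if a == b or b in SYNONYMS.get(a, set()) else 0.0

def combined_sim(a: str, b: str, w_lex=0.4, w_sem=0.6) -> float:
    """Weighted combination of the two measures (weights are illustrative)."""
    return w_lex * lexical_sim(a, b) + w_sem * semantic_sim(a, b)

def map_concepts(onto1, onto2, threshold=0.5):
    """Propose concept mappings between two ontologies, fully automatically."""
    return [(c1, c2) for c1 in onto1 for c2 in onto2
            if combined_sim(c1, c2) >= threshold]
```

On two toy concept lists, `map_concepts(["Car", "Person"], ["Automobile", "Human"])` pairs Car with Automobile and Person with Human, driven almost entirely by the semantic term; a property-based sub-module would score attribute lists the same way.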
The document summarizes and compares schema matching and ontology mapping. It discusses how schema matching approaches can be applied to ontology mapping given the similarities between schemas and ontologies. The document outlines different categories of schema matching techniques (element-based, structure-based) and provides examples. It also summarizes several ontology mapping tools and approaches that utilize different matching strategies like string, structure, and semantic similarity.
An ontology is a specification of a conceptualization that allows us to represent domain knowledge so that we can share a common understanding, enable reuse, make domain assumptions explicit, and separate domain knowledge from operational knowledge. Ontologies offer reasoning services like consistency checking, subsumption, and query answering that are different from those found in XML and relational databases. OWL ontologies use semantics rather than just syntax to represent knowledge about concepts, individuals, and relationships between them.
The document presents a new ontology matching system based on a multi-agent architecture. The system takes ontologies described in XML, RDF Schema, and OWL as input. It uses multiple matchers and filtering to generate mappings between ontology entities. The mappings are then validated. The system is implemented as a multi-agent system with different agent types responsible for resources, matching, generating mappings, and filtering/validating mappings. The architecture allows for robust, flexible, and scalable ontology matching.
This document summarizes a workshop on data integration using ontologies. It discusses how data integration is challenging due to differences in schemas, semantics, measurements, units and labels across data sources. It proposes that ontologies can help with data integration by providing definitions for schemas and entities referred to in the data. Core challenges discussed include dealing with multiple synonyms for entities and relationships between biological entities that depend on context. The document advocates for shared community ontologies that can be extended and integrated to facilitate flexible and responsive data integration across multiple sources.
Why symbol-grounding is both impossible and unnecessary, and why theory-tethe... (Aaron Sloman)
Introduction to key ideas of semantic models, implicit definitions, and symbol tethering, using ideas from philosophy of science and model-theoretic semantics to explain why symbol grounding theory is misguided: there is no need for all symbols used by an intelligent agent to be 'grounded' in terms of experience or sensory-motor patterns. Rather, most of the meaning of a symbol may come from its role in a powerful explanatory theory, though the theory should have some connection with experiments and observations in order to be applicable to the world. That is not the same as requiring every symbol to be linked to experiences, experiments, or measurements.
Symbol grounding theory is a modern version of the philosophical theory of 'concept empiricism', which was refuted by the philosopher Immanuel Kant in the 18th century.
This document discusses an integrated approach to ontology development methodology and provides a case study using a shopping mall domain. It begins by reviewing existing ontology development methodologies and identifying their pitfalls. An integrated methodology is then proposed which aims to reduce these pitfalls. The key steps in the proposed methodology are: 1) capturing motivating user scenarios or keywords, 2) generating formal/informal questions and answers from the scenarios, 3) extracting terms and constraints, and 4) building the ontology using a top-down approach. The methodology is applied to developing an ontology for a shopping mall domain to provide multilingual information to visitors.
This document summarizes a survey on string similarity matching search techniques. It discusses how string similarity matching is used to find relevant information in text collections. The document reviews different algorithms for string matching, including edit distance, NR-grep, n-grams, and approaches based on hashing and locality-sensitive hashing. It analyzes techniques like pattern matching, threshold-based joins, and vector representations. The goal is to present an overview of the field and compare algorithm performance for similarity searches.
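Of the algorithms the survey covers, edit distance is the easiest to make concrete. A standard dynamic-programming Levenshtein implementation (a textbook version, not code from the survey):

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance: minimum number of single-character insertions,
    deletions, and substitutions turning s into t. O(len(s) * len(t)) time,
    O(len(t)) space via a rolling row."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distance from "" to each prefix of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j] + 1,       # deletion
                         cur[j - 1] + 1,    # insertion
                         prev[j - 1] + cost)  # substitution (or match)
        prev = cur
    return prev[n]
```

Threshold-based similarity joins of the kind the survey analyzes typically keep a pair of strings when this distance falls below a cutoff.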
IRJET- A Survey Paper on Text Summarization Methods (IRJET Journal)
This document summarizes various approaches for automatic text summarization, including extractive and abstractive methods. It discusses early surface-level approaches from the 1950s that identified important sentences based on word frequency. It also reviews corpus-based, cohesion-based, rhetoric-based, and graph-based approaches. The document then examines single document summarization techniques like naive Bayes methods, log-linear models, and deep natural language analysis. It concludes with a discussion of multi-document summarization, including abstraction and information fusion as well as graph spreading activation approaches. The goal of the survey is to provide an overview of the major existing methods that have been used for automatic text summarization.
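The 1950s surface-level idea mentioned above, scoring sentences by the frequency of their words, fits in a few lines. A Luhn-style sketch with a hypothetical stopword list (an illustration, not any surveyed system):

```python
import re
from collections import Counter

# Minimal stopword list, invented for the example.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "for"}

def summarize(text: str, k: int = 1):
    """Score each sentence by the summed corpus frequency of its content
    words and return the k best sentences in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]
```

This is extractive summarization at its simplest; the corpus-based, graph-based, and abstractive methods in the survey replace the frequency score with richer sentence representations.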
This document discusses the cognitive foundations of mathematics and mathematical formalisms. It argues that:
1) Mathematical concepts like numbers and continuous lines have cognitive origins in human spatial and temporal experiences.
2) Formalist approaches that view mathematical reasoning as sequence matching of meaningless symbols fail to capture the complexity of human cognition and thinking.
3) The elementary components of living systems like cells are highly complex, unlike the simple components that make up artificial systems like clocks and computers. Signification arises from the interaction of signals with intentional gestures and actions.
The document summarizes a seminar on ontology mapping presented by Samhati Soor. The seminar covered the need for ontology mapping due to the proliferation of ontologies, and the purpose of mapping ontologies to achieve interoperability and sharing knowledge. It defined ontologies and ontology mapping and discussed categories of mapping including between global and local ontologies, between local ontologies, and for merging ontologies. Tools for ontology mapping discussed included GLUE and SAM. Evaluation criteria and challenges of ontology mapping were also summarized along with conclusions and references.
This document discusses different perspectives on metaphors and challenges in recognizing and representing metaphors computationally. It reviews ongoing research aiming to accomplish these goals. There is no consensus on what constitutes a metaphor. Linguistic metaphors found in examples are distinguished from cognitive metaphors, which are not linguistic phenomena. Metaphors can involve referential substitution or predication. Framing and context are important for interpretation. Metaphors may be inexhaustible or conventionalized into new meanings. Recognizing metaphors computationally is difficult as literal and figurative meanings depend on context and intention.
Concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction object and adopts CCRFs to identify the domain concepts. First the low layer of the CCRFs is used to identify simple domain concepts; then the results are sent to the high layer, in which nested concepts are recognized. Next, hierarchical clustering is used to identify the hyponymy relations between domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
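The hierarchical-clustering step can be illustrated with a toy single-linkage agglomerative procedure over one-dimensional points (the paper clusters domain concepts, presumably via context features; this sketch shows only the merge mechanics):

```python
def agglomerative(points, threshold):
    """Single-linkage agglomerative clustering of 1-D points: repeatedly
    merge adjacent (sorted) clusters whose gap is within the threshold."""
    clusters = [[p] for p in sorted(points)]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters) - 1):
            gap = clusters[i + 1][0] - clusters[i][-1]  # single-linkage distance
            if gap <= threshold:
                clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
                merged = True
                break
    return clusters
```

Recording the order of merges yields the dendrogram from which hyponym/hypernym groupings can be read off.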
Extending the knowledge level of cognitive architectures with Conceptual Spac... (Antonio Lieto)
Extending the knowledge level of cognitive architectures with Conceptual Spaces (plus a case study with Dual-PECCS: a hybrid knowledge representation system for common-sense reasoning). Talk given in Stockholm, September 2016.
Adaptive Emotional Personality Model based on Fuzzy Logic Interpretation of F... (CSCJournals)
In recent years, emotional personality has found important applications in the field of human-machine interaction. Interesting examples in this domain are computer games, interface agents, human-robot interaction, etc. However, few systems in this area include a model of personality, although it plays an important role in differentiating agents and determining the way they experience emotions and the way they behave. Personality simulation has always been a complex issue due to the complexity of the human personality itself and the difficulty of modeling human psychology on an electronic basis. Current efforts at emotion simulation are mostly based on predefined sets of inputs and responses, or on classical models that are simple approximations with proven flaws. In this paper an emotional simulation system is presented. It uses the latest psychological theories to design a complex dynamic system that reacts to any environment without being pre-programmed on sets of inputs. The design relies on fuzzy logic to simulate human emotional reactions, increasing accuracy by more closely emulating the human brain and removing the predefined set of inputs and matched outputs.
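As a hedged illustration of the fuzzy-logic component (the membership functions, labels, and ranges here are invented for the example, not taken from the paper), a stimulus level can be mapped to graded emotion labels via triangular membership functions:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def emotion_intensity(stimulus):
    """Map a stimulus level in [0, 1] to fuzzy degrees of hypothetical
    emotion labels; a real system would defuzzify these into behavior."""
    return {
        "calm":    triangular(stimulus, -0.5, 0.0, 0.5),
        "aroused": triangular(stimulus, 0.0, 0.5, 1.0),
        "excited": triangular(stimulus, 0.5, 1.0, 1.5),
    }
```

Because memberships are graded rather than binary, a mid-level stimulus activates neighboring labels partially, which is what lets a fuzzy controller respond smoothly to inputs it was never explicitly programmed for.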
A little more semantics goes a lot further! Getting more out of Linked Data ... (Michel Dumontier)
This tutorial will provide detailed instruction to create and make use of formalized ontologies from linked open data for advanced knowledge discovery including consistency checking and answering sophisticated questions.
Automated reasoning in OWL offers the tantalizing possibility of advanced knowledge discovery, including verifying the consistency of conceptual schemata in information systems, verifying data integrity, and answering expressive queries over the conceptual schema and the data. Given that a large amount of structured knowledge is now available as linked data, the challenge is to formalize this knowledge so that the intended semantics become explicit and the reasoning is efficient and scalable. While using the full expressiveness of OWL 2 yields ontologies that can be used for consistency verification, classification, and query answering, less expressive OWL profiles enable efficient reasoning and support different application scenarios. In this tutorial:
- we describe how to generate OWL ontologies from linked data
- check consistency of knowledge
- automatically transform ontologies into OWL profiles
- use this knowledge in applications to integrate data and answer sophisticated questions across domains.
- expressive ontologies enable data integration, verifying the consistency of knowledge, and answering questions
- formalization of linked data will create new opportunities for knowledge discovery
- OWL 2 profiles support more efficient reasoning and query answering procedures
- recent technology facilitates the automatic conversion of OWL 2 ontologies into profiles
- OWL ontologies can dramatically extend the functionality of semantically-enabled web sites
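The bullet points above can be illustrated in miniature with a stdlib-only sketch of query answering over a class hierarchy (the class names and triples are hypothetical, and this is RDFS-style subsumption, not full OWL reasoning, which a tutorial like this would delegate to an OWL toolchain):

```python
# Tiny triple store: (subject, predicate, object). Names are hypothetical.
TRIPLES = {
    ("Protein", "subClassOf", "Molecule"),
    ("Enzyme", "subClassOf", "Protein"),
    ("trypsin", "type", "Enzyme"),
}

def superclasses(cls, triples):
    """Transitive closure over subClassOf: the subsumption part of reasoning."""
    out, frontier = set(), {cls}
    while frontier:
        frontier = {o for (s, p, o) in triples
                    if p == "subClassOf" and s in frontier} - out
        out |= frontier
    return out

def instances_of(cls, triples):
    """Query answering with subsumption: individuals whose asserted type is
    cls or any subclass of it."""
    return {s for (s, p, o) in triples
            if p == "type" and (o == cls or cls in superclasses(o, triples))}
```

Asking for instances of Molecule returns trypsin even though it was only asserted to be an Enzyme; this inferred answer is exactly what distinguishes ontology-backed query answering from a plain database lookup.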
Fengjuan Qiu has experience in both computer science and respiratory care. She received a B.S. in Computer Science from the University of Washington Bothell with a 3.64 GPA and a B.S. in Respiratory Care from Stony Brook University with a 3.7 GPA. Her skills include Java, C++, algorithms, and databases. She has worked as a respiratory therapist at Kindred Hospital since 2014. For projects, she led a team to build a food delivery website and implemented a Unix-like file system and CPU schedulers for an operating systems course. She also designed databases, object-oriented programs, and software requirements specifications. She is involved in several honor societies.
The application of new business models based on interconnectivity will be one of the keys to generating competitiveness over the coming years. That is why, from the DBT, we help our clients generate new business models from the cloud, based on connecting their assets to the Internet.
Carole Keegan is an elementary education graduate seeking a teaching position. She earned a Bachelor of Science in Elementary Education from Florida Southern College in 2016 with concentrations in STEM education and Orton Gillingham instruction. She has experience student teaching kindergarten and serving as a field studies intern for first and second grade. Keegan also worked as a camp counselor and has earned her ESOL, elementary education, and professional education certifications. She maintains a 3.8 GPA and was on the Dean's List and National Honors Society. References are provided from her supervising teacher, professor, and intern supervisor.
Nitin Sharma has over 10 years of experience as a Purchase Executive. He currently works for Hanung Toys & Textiles Ltd, where he is responsible for managing purchases, identifying cost-effective vendors, and ensuring timely delivery of materials. He holds an M.B.F from Gurukula Kangri University and a diploma in export management from IIEM. He aims to work in an environment where he can continue enhancing his skills.
This document describes several tools for designing business models, including the business model canvas. The business model canvas is a tool for describing a business model through nine fundamental building blocks: customer segments, value proposition, channels, customer relationships, revenue streams, key resources, key activities, key partnerships, and cost structure. The document explains each of these blocks and how they relate to completely describe…
The document discusses different types of instructional software: drill and practice software, tutorials, simulations, instructional games, problem-solving software, and multimedia programs. It provides definitions and examples for each type. Drill and practice software provides exercises to reinforce skills through repetition. Tutorials present new material through pre- and post-tests and practice activities. Simulations model real or imaginary systems to teach how systems work. Instructional games add game elements to motivate learning. Problem-solving software emphasizes critical thinking through presentation of data or problems. Multimedia programs combine text, audio, images and video to engage different learning styles.
Polymer microspheres for controlled drug release (Duwan Arismendy)
Polymer microspheres can be employed to deliver medication in a rate-controlled and sometimes targeted manner. Medication is released from a microsphere by drug leaching from the polymer or by degradation of the polymer matrix. Since the rate of drug release is controlled by these two factors, it is important to understand the physical and chemical properties of the releasing medium. This review presents the methods used in the preparation of microspheres from monomers or from linear polymers and discusses the physio-chemical properties that affect the formation, structure, and morphology of the spheres. Topics including the effects of molecular weight, blended spheres, crystallinity, drug distribution, porosity, and sphere size are discussed in relation to the characteristics of the release process. Added control over release profiles can be obtained by the employment of core-shell systems and pH-sensitive spheres; the enhancements presented by such systems are discussed through literature examples.
1. There are three major paradigms in communication theory: positivistic, interpretive, and critical. The positivistic paradigm views reality as objective, the interpretive sees it as subjective, and the critical sees reality as constructed by powerful groups.
2. Theories can be approached deductively, from general propositions to specific hypotheses, or inductively, from observations to generalized conclusions. Models are simplified representations of phenomena that show relationships between elements.
3. Linear models depict communication as a one-way transmission process, while interactive models see it as a mutual exchange and transactional models emphasize the interdependence of message formulation, interpretation, and exchange.
NG2S: A Study of Pro-Environmental Tipping Point via ABMs (Kan Yuenyong)
A study of tipping points: much less is known about the most efficient ways to reach such transitions, or how self-reinforcing systemic transformations might be instigated through policy. We employ an agent-based model to study the emergence of social tipping points through various feedback loops that have previously been identified as constituting an ecological approach to human behavior. Our model suggests that even a linear introduction of pro-environmental affordances (action opportunities) into a social system can have non-linear positive effects on the emergence of collective pro-environmental behavior patterns.
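A minimal threshold-style sketch of such a dynamic (my own illustration under simple assumptions, not the authors' model): affordances grow linearly, but adoption can still jump non-linearly once peer influence feeds back.

```python
import random

def run_abm(n_agents=200, steps=50, seed=1):
    """Granovetter-style threshold model: each agent adopts pro-environmental
    behavior once combined pressure (affordances plus peers) exceeds its
    personal threshold. Returns the adoption fraction over time."""
    rng = random.Random(seed)
    thresholds = [rng.gauss(0.45, 0.05) for _ in range(n_agents)]
    fraction = 0.0
    history = []
    for t in range(steps):
        affordance = t / steps                      # linear growth of action opportunities
        pressure = 0.5 * affordance + 0.5 * fraction  # affordances + social feedback
        fraction = sum(th < pressure for th in thresholds) / n_agents
        history.append(fraction)
    return history
```

With thresholds clustered around a common value, adoption stays near zero for most of the run and then cascades to near-universal within a few steps, which is the tipping-point signature the abstract describes.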
Understanding knowledge network, learning and connectivism (Alaa Al Dahdouh)
Behaviorism, Cognitivism, Constructivism, and other growing theories such as Actor-Network Theory and Connectivism circulate in the educational field. For each, there are allies who stand behind research evidence and consistency of observation. Meanwhile, those existing theories dominate the field until the background changes or new concrete evidence proves them insufficient. Connectivists claim that the background, or general climate, has recently changed, and this new generation of researchers proposes a new way of conceiving knowledge. According to them, knowledge is a network and learning is a process of exploring this network. Other researchers find this notion either unclear or not new, and probably without effect in the education field. This paper addresses the foggy understanding of knowledge defined as a network and the lack of resources on this topic. It therefore tries to clarify what it means to define knowledge as a network and how this can affect teaching and learning.
This document provides an introduction to communication theory. It discusses what constitutes a good theory, noting criteria like theoretical scope, appropriateness, heuristic value, validity, parsimony, and openness. A theory should generally explain phenomena broadly, make logical assumptions, suggest avenues for further research, be logically consistent and falsifiable, provide simple explanations, and allow for compatibility with other theories. The document also distinguishes theories from models, noting that models provide simplified representations of communication processes.
This document discusses an integrated approach to ontology development methodology and provides a case study using a shopping mall domain. It begins by reviewing existing ontology development methodologies and identifying their pitfalls. An integrated methodology is then proposed which aims to reduce these pitfalls. The key steps in the proposed methodology are: 1) capturing motivating user scenarios or keywords, 2) generating formal/informal questions and answers from the scenarios, 3) extracting terms and constraints, and 4) building the ontology using a top-down approach. The methodology is applied to developing an ontology for a shopping mall domain to provide multilingual information to visitors.
This document summarizes a survey on string similarity matching search techniques. It discusses how string similarity matching is used to find relevant information in text collections. The document reviews different algorithms for string matching, including edit distance, NR-grep, n-grams, and approaches based on hashing and locality-sensitive hashing. It analyzes techniques like pattern matching, threshold-based joins, and vector representations. The goal is to present an overview of the field and compare algorithm performance for similarity searches.
IRJET- A Survey Paper on Text Summarization MethodsIRJET Journal
This document summarizes various approaches for automatic text summarization, including extractive and abstractive methods. It discusses early surface-level approaches from the 1950s that identified important sentences based on word frequency. It also reviews corpus-based, cohesion-based, rhetoric-based, and graph-based approaches. The document then examines single document summarization techniques like naive Bayes methods, log-linear models, and deep natural language analysis. It concludes with a discussion of multi-document summarization, including abstraction and information fusion as well as graph spreading activation approaches. The goal of the survey is to provide an overview of the major existing methods that have been used for automatic text summarization.
This document discusses the cognitive foundations of mathematics and mathematical formalisms. It argues that:
1) Mathematical concepts like numbers and continuous lines have cognitive origins in human spatial and temporal experiences.
2) Formalist approaches that view mathematical reasoning as sequence matching of meaningless symbols fail to capture the complexity of human cognition and thinking.
3) The elementary components of living systems like cells are highly complex, unlike the simple components that make up artificial systems like clocks and computers. Signification arises from the interaction of signals with intentional gestures and actions.
The document summarizes a seminar on ontology mapping presented by Samhati Soor. The seminar covered the need for ontology mapping due to the proliferation of ontologies, and the purpose of mapping ontologies to achieve interoperability and sharing knowledge. It defined ontologies and ontology mapping and discussed categories of mapping including between global and local ontologies, between local ontologies, and for merging ontologies. Tools for ontology mapping discussed included GLUE and SAM. Evaluation criteria and challenges of ontology mapping were also summarized along with conclusions and references.
This document discusses different perspectives on metaphors and challenges in recognizing and representing metaphors computationally. It reviews ongoing research aiming to accomplish these goals. There is no consensus on what constitutes a metaphor. Linguistic metaphors found in examples are distinguished from cognitive metaphors, which are not linguistic phenomena. Metaphors can involve referential substitution or predication. Framing and context are important for interpretation. Metaphors may be inexhaustible or conventionalized into new meanings. Recognizing metaphors computationally is difficult as literal and figurative meanings depend on context and intention.
Concept hierarchy is the backbone of ontology, and the concept hierarchy acquisition has been a hot topic in the field of ontology learning. this paper proposes a hyponymy extraction method of domain ontology concept based on cascaded conditional random field(CCRFs) and hierarchy clustering. It takes free text as extracting object, adopts CCRFs identifying the domain concepts. First the low layer of CCRFs is used to identify simple domain concept, then the results are sent to the high layer, in which the nesting concepts are recognized. Next we adopt hierarchy clustering to identify the hyponymy relation between domain ontology concepts. The experimental results demonstrate the proposed method is efficient.
Extending the knowledge level of cognitive architectures with Conceptual Spac...Antonio Lieto
Extending the knowledge level of cognitive architectures with Conceptual Spaces (+ a case study with Dual-PECCS: a hybrid knowledge representation system for common sense reasoning). Talk given at Stockholm, September 2016.
Adaptive Emotional Personality Model based on Fuzzy Logic Interpretation of F...CSCJournals
In recent years, emotional personality has found an important application in the field of human machine interaction. Interesting examples of this domain are computer games, interface agents, human-robot interaction, etc. However, few systems in this area include a model of personality, although it plays an important role in differentiating and determining the way they experience emotions and the way they behave. Personality simulation has always been a complex issue due to the complexity of the human personality itself, and the difficulty to model human psychology on electronic basis. Current efforts for emotion simulation are rather based on predefined set or inputs and its responses or on classical models which are simple approximate and have proven flaws. In this paper an emotional simulation system was presented. It utilizes the latest psychological theories to design a complex dynamic system that reacts to any environment, without being pre-programmed on sets of input. The design was relying on fuzzy logic to simulate human emotional reaction, thus increasing the accuracy by further emulating human brain and removing the pre-defined set of input and its matched outputs.
A little more semantics goes a lot further! Getting more out of Linked Data ... - Michel Dumontier
This tutorial will provide detailed instruction to create and make use of formalized ontologies from linked open data for advanced knowledge discovery including consistency checking and answering sophisticated questions.
Automated reasoning in OWL offers the tantalizing possibility of undertaking advanced knowledge discovery, including verifying the consistency of conceptual schemata in information systems, verifying data integrity, and answering expressive queries over the conceptual schema and the data. Given that a large amount of structured knowledge is now available as linked data, the challenge is to formalize this knowledge so that the intended semantics become explicit and the reasoning is efficient and scalable. While using the full expressiveness of OWL 2 yields ontologies that can be used for consistency verification, classification and query answering, the use of less expressive OWL profiles enables efficient reasoning and supports different application scenarios. In this tutorial,
- we describe how to generate OWL ontologies from linked data
- check consistency of knowledge
- automatically transform ontologies into OWL profiles
- use this knowledge in applications to integrate data and answer sophisticated questions across domains.
- expressive ontologies enable data integration, verifying the consistency of knowledge, and answering questions
- formalization of linked data will create new opportunities for knowledge discovery
- OWL 2 profiles support more efficient reasoning and query answering procedures
- recent technology facilitates the automatic conversion of OWL 2 ontologies into profiles
- OWL ontologies can dramatically extend the functionality of semantically-enabled web sites
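The core of consistency verification can be illustrated without a full OWL reasoner: a class is unsatisfiable if it is (transitively) a subclass of two classes declared disjoint. The class names and axioms below are invented for illustration and are not from any real ontology:

```python
# Minimal sketch of schema consistency checking over a toy class hierarchy.
# A class is unsatisfiable if its superclass closure contains both members
# of a declared disjointness axiom. All names here are invented examples.
subclass_of = {"Student": {"Person"}, "Robot": {"Machine"},
               "AndroidTutor": {"Person", "Machine"}}
disjoint = {("Person", "Machine")}

def superclasses(c):
    seen, stack = set(), [c]
    while stack:
        cur = stack.pop()
        for sup in subclass_of.get(cur, ()):
            if sup not in seen:
                seen.add(sup)
                stack.append(sup)
    return seen | {c}

def unsatisfiable(c):
    sups = superclasses(c)
    return any(a in sups and b in sups for a, b in disjoint)

print([c for c in subclass_of if unsatisfiable(c)])  # ['AndroidTutor']
```

Real OWL reasoners handle far richer axioms (restrictions, property chains), but this disjointness check is the simplest instance of the consistency verification the tutorial describes.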
Fengjuan Qiu has experience in both computer science and respiratory care. She received a B.S. in Computer Science from the University of Washington Bothell with a 3.64 GPA and a B.S. in Respiratory Care from Stony Brook University with a 3.7 GPA. Her skills include Java, C++, algorithms, and databases. She has worked as a respiratory therapist at Kindred Hospital since 2014. For projects, she led a team to build a food delivery website and implemented a Unix-like file system and CPU schedulers for an operating systems course. She also designed databases, object-oriented programs, and software requirements specifications. She is involved in several honor societies.
The application of new business models based on interconnectivity will be one of the keys to generating competitiveness over the coming years. That is why, from the DBT, we help our clients generate new cloud-based business models built on connecting their assets to the Internet.
Carole Keegan is an elementary education graduate seeking a teaching position. She earned a Bachelor of Science in Elementary Education from Florida Southern College in 2016 with concentrations in STEM education and Orton Gillingham instruction. She has experience student teaching kindergarten and serving as a field studies intern for first and second grade. Keegan also worked as a camp counselor and has earned her ESOL, elementary education, and professional education certifications. She maintains a 3.8 GPA and was on the Dean's List and National Honors Society. References are provided from her supervising teacher, professor, and intern supervisor.
Nitin Sharma has over 10 years of experience as a Purchase Executive. He currently works for Hanung Toys & Textiles Ltd, where he is responsible for managing purchases, identifying cost-effective vendors, and ensuring timely delivery of materials. He holds an M.B.F from Gurukula Kangri University and a diploma in export management from IIEM. He aims to work in an environment where he can continue enhancing his skills.
This document describes several tools for designing business models, including the Business Model Canvas. The Business Model Canvas is a tool for describing a business model through nine fundamental building blocks: customer segments, value proposition, channels, customer relationships, revenue streams, key resources, key activities, key partnerships, and cost structure. The document explains each of these blocks and how they relate to one another to fully describe a business model.
The document discusses different types of instructional software: drill and practice software, tutorials, simulations, instructional games, problem-solving software, and multimedia programs. It provides definitions and examples for each type. Drill and practice software provides exercises to reinforce skills through repetition. Tutorials present new material through pre- and post-tests and practice activities. Simulations model real or imaginary systems to teach how systems work. Instructional games add game elements to motivate learning. Problem-solving software emphasizes critical thinking through presentation of data or problems. Multimedia programs combine text, audio, images and video to engage different learning styles.
Polymer microspheres for controlled drug release - Duwan Arismendy
Polymer microspheres can be employed to deliver medication in a rate-controlled and sometimes targeted manner. Medication is released from a microsphere by drug leaching from the polymer or by degradation of the polymer matrix. Since the rate of drug release is controlled by these two factors, it is important to understand the physical and chemical properties of the releasing medium. This review presents the methods used in the preparation of microspheres from monomers or from linear polymers and discusses the physio-chemical properties that affect the formation, structure, and morphology of the spheres. Topics including the effects of molecular weight, blended spheres, crystallinity, drug distribution, porosity, and sphere size are discussed in relation to the characteristics of the release process. Added control over release profiles can be obtained by the employment of core-shell systems and pH-sensitive spheres; the enhancements presented by such systems are discussed through literature examples.
1. There are three major paradigms in communication theory: positivistic, interpretive, and critical. The positivistic paradigm views reality as objective, the interpretive sees it as subjective, and the critical sees reality as constructed by powerful groups.
2. Theories can be approached deductively, from general propositions to specific hypotheses, or inductively, from observations to generalized conclusions. Models are simplified representations of phenomena that show relationships between elements.
3. Linear models depict communication as a one-way transmission process, while interactive models see it as a mutual exchange and transactional models emphasize the interdependence of message formulation, interpretation, and exchange.
NG2S: A Study of Pro-Environmental Tipping Point via ABMs - Kan Yuenyong
A study of tipping points: much less is known about the most efficient ways to reach such transitions or about how self-reinforcing systemic transformations might be instigated through policy. We employ an agent-based model to study the emergence of social tipping points through various feedback loops previously identified as constituting an ecological approach to human behavior. Our model suggests that even a linear introduction of pro-environmental affordances (action opportunities) into a social system can have non-linear positive effects on the emergence of collective pro-environmental behavior patterns.
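The linear-input, non-linear-output effect can be reproduced with a minimal threshold model. Everything below (population size, threshold distribution, affordance schedule) is an illustrative sketch, not the paper's actual model:

```python
import random

# Sketch of a threshold model of social tipping: each agent adopts a
# pro-environmental behavior once the adopting share of the population
# plus the current level of affordances exceeds its personal threshold.
random.seed(1)
N = 1000
thresholds = [random.random() for _ in range(N)]
adopted = [False] * N

trajectory = []
for step in range(20):
    affordance = 0.02 * step              # affordances grow linearly
    share = sum(adopted) / N
    for i in range(N):
        if not adopted[i] and share + affordance > thresholds[i]:
            adopted[i] = True
    trajectory.append(sum(adopted))

print(trajectory)  # adoption accelerates non-linearly, then saturates
```

Although the affordance input rises by a constant 0.02 per step, adoption compounds through the social-feedback term `share`, producing the accelerating cascade the abstract describes.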
Understanding knowledge network, learning and connectivism - Alaa Al Dahdouh
Behaviorism, Cognitivism, Constructivism and other growing theories such as Actor-Network Theory and Connectivism are circulating in the educational field. Each has allies who stand behind research evidence and consistency of observation. Meanwhile, the existing theories dominate the field until the background changes or new concrete evidence proves their insufficiency. Connectivists claim that the background, or general climate, has recently changed: a new generation of researchers, the connectivists, proposes a new way of conceiving knowledge. According to them, knowledge is a network and learning is a process of exploring this network. Other researchers find this notion either unclear or not new, and probably of no effect in the education field. This paper addresses the foggy understanding of knowledge defined as a network and the lack of resources on this topic. It therefore tries to clarify what it means to define knowledge as a network and in what way doing so can affect teaching and learning.
This document provides an introduction to communication theory. It discusses what constitutes a good theory, noting criteria like theoretical scope, appropriateness, heuristic value, validity, parsimony, and openness. A theory should generally explain phenomena broadly, make logical assumptions, suggest avenues for further research, be logically consistent and falsifiable, provide simple explanations, and allow for compatibility with other theories. The document also distinguishes theories from models, noting that models provide simplified representations of communication processes.
The role of theory in research, Division for Postgraduate Studies - priyankanema9
This document discusses the role of theory in research. It provides definitions of theory as a model or framework that shapes observation and understanding. Theory condenses and organizes knowledge about the world and explains relationships between variables. The document outlines characteristics of theory such as guiding research, becoming stronger with evidence, and generating new research. It distinguishes theories from hypotheses and discusses evaluating theories. The dynamic relationship between theory and research is also examined, with theory informing research and research testing and revising theory. Different types of theories like deductive and inductive theories are defined. The document concludes by discussing theories relevant to multilingual mathematics education research and theories of second language learning.
The Philosophy of Science and its relation to Machine Learning - butest
The document discusses connections between machine learning and the philosophy of science. It argues that while the two disciplines are distinct, they admit a dynamic interaction where ideas are exchanged mutually beneficially. Examples of fruitful interactions discussed include how automated scientific discovery has implications for debates on inductivism vs falsificationism in philosophy of science, and how philosophical work on Bayesian epistemology and causality has influenced machine learning. The document suggests evidence integration may be a locus of future interaction between the two fields.
The paper holds that such applied theory as the Theory of Consciousness can be constructed only within the limits of a special meta-theory (or general theory) which takes into account the agency of informational factor and has all the necessary tools of study (such as method of study, system of models, system of proofs, conservation laws, etc.) which correspond to the nature of the object of study. The currently dominant meta-theory called the Modern Materialistic (Physical) Picture of the World is not appropriate for constructing the required applied theory of consciousness and many other applied theories within its limits.
This document summarizes several agent-based modeling projects done by students at the University of East London. It describes projects using StarLogo, where students modeled emergent urban forms and traffic patterns. It also discusses modeling the growth of traditional Yemeni cities and experiments deforming NURBS surfaces using agent-based modeling in Microstation. The document provides examples of how agent-based modeling can be used as a design tool to explore emergent patterns and behaviors.
The laboratory and the market (book chapter 10) - JeenaDC
This document discusses two modes of scientific knowledge production:
(1) Modus 1 knowledge is produced within established disciplines through deductive reasoning and empirical testing. It aims to build consistent theoretical systems.
(2) Modus 2 knowledge is produced transdisciplinarily to solve practical problems. It is shaped by social and economic factors beyond any single discipline. Knowledge takes the form of narratives that guide practices rather than deductive systems.
The key difference is that Modus 1 seeks fundamental theoretical understanding while Modus 2 focuses on knowledge applicable to real-world problems across disciplinary boundaries. Both make use of models, analogies, and metaphors to give meaning and applicability to formal theories and findings.
This document summarizes key aspects of experimental design and methods for coding texts from media research. It discusses the components of experimental design including causality, theory, control, and ecological validity. It also outlines different types of intentionalities to analyze in media texts, such as those of the author, text, audience, and interpreter. Finally, it provides guidance on coding texts by establishing the texts of interest, analytical approach, unit of analysis, coding scheme, and analysis.
Objectification Is A Word That Has Many Negative Connotations - Beth Johnson
Social web mining is the process of extracting useful information and knowledge from social media data. With the rise of big data, social media platforms are generating massive amounts of unstructured data every day in the form of posts, comments, shares, likes, etc. This user-generated data holds valuable insights about people's opinions, interests, behaviors and more.
Big data analytics provides tools and techniques to analyze this large, complex social data at scale. Social web mining applies data mining and machine learning algorithms to big social data to discover patterns and relationships. Areas of focus include sentiment analysis to understand public opinions on brands, products or issues; network analysis to map relationships and influence; and
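A minimal instance of the sentiment-analysis idea mentioned above is lexicon-based scoring: count positive and negative words in each post. The word lists here are tiny illustrative stand-ins for a real sentiment lexicon:

```python
# Minimal lexicon-based sentiment scoring over social posts.
# POS/NEG are illustrative stand-ins for a real sentiment lexicon.
POS = {"great", "love", "excellent", "good"}
NEG = {"bad", "terrible", "hate", "awful"}

def score(post):
    words = post.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

posts = ["I love this phone, great battery",
         "terrible service, I hate waiting"]
print([score(p) for p in posts])  # [2, -2]
```

Production systems replace the word counts with trained classifiers, but the pipeline shape (collect posts, extract features, aggregate opinion) is the same.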
Robustness in Machine Learning Explanations: Does It Matter? - Daniel983829
This document discusses the desirability of robustness in machine learning explanations. It argues that robustness is desirable to the extent we want explanations to reflect real patterns in the data and the world. The importance of explanations capturing real patterns depends on the problem context. In some contexts, non-robust explanations can pose moral hazards. Clarifying how much we want explanations to capture real patterns can help determine whether the "Rashomon effect" is beneficial or problematic.
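One concrete way to operationalize robustness is to measure how much an explanation changes under small input perturbations. The model and finite-difference attribution below are invented stand-ins for illustration, not the paper's method:

```python
# Toy robustness metric for explanations: perturb the input slightly and
# measure the worst-case change in the attribution vector. The model and
# the attribution method are illustrative stand-ins.
def model(x):                        # simple smooth nonlinear model
    return x[0] * x[1] + x[0]

def explanation(x, eps=1e-5):        # finite-difference gradient attribution
    base = model(x)
    return [(model([v + eps if i == j else v for j, v in enumerate(x)]) - base) / eps
            for i in range(len(x))]

def robustness(x, delta=0.01):
    e0 = explanation(x)
    worst = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += delta               # small perturbation of one feature
        e1 = explanation(xp)
        worst = max(worst, max(abs(a - b) for a, b in zip(e0, e1)))
    return worst

print(robustness([1.0, 2.0]))  # small for this smooth model
```

A large value of this metric would signal the "non-robust explanation" failure mode the document flags as a potential moral hazard.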
The document discusses various communication models that have been developed over time to conceptualize and simplify the complex process of human communication. It describes several "classical" models, including Aristotle's definition of rhetoric which focused on logos, pathos and ethos. It also summarizes Shannon and Weaver's influential 1949 mathematical model of communication and its key concepts of a sender, receiver, message, channel and noise. The document notes that while models are useful for clarifying complexity, they also have limitations such as potential oversimplification and missing important variables.
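Shannon and Weaver's sender/channel/receiver scheme can be simulated directly: encode a message to bits, flip each bit with some noise probability, and decode what arrives. This is a sketch of the model's structure, with parameters chosen arbitrarily:

```python
import random

# Sketch of the Shannon-Weaver scheme: sender encodes, channel adds noise,
# receiver decodes whatever arrives.
def to_bits(msg):
    return [int(b) for c in msg.encode() for b in format(c, "08b")]

def from_bits(bits):
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, len(bits), 8)]
    return bytes(chars).decode(errors="replace")

def channel(bits, noise, rng):
    # flip each bit independently with probability `noise`
    return [b ^ (rng.random() < noise) for b in bits]

rng = random.Random(0)
print(from_bits(channel(to_bits("hello"), 0.0, rng)))   # noiseless: hello
print(from_bits(channel(to_bits("hello"), 0.05, rng)))  # typically corrupted
```

The noiseless case recovers the message exactly; raising `noise` makes the receiver's reconstruction diverge from the sender's intent, which is precisely the limitation the model was built to analyze.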
OntoSOC: Sociocultural Knowledge Ontology - IJwest
This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. The OntoSOC modeling approach is based on Engeström's Human Activity Theory (HAT). That theory allowed us to identify fundamental concepts and the relationships between them. A top-down process has been used to define the different sub-concepts. The modeled vocabulary permits us to organize data and to facilitate information retrieval by introducing a semantic layer into the architecture of a social web platform we plan to implement. This platform can be considered a «collective memory» and a Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share and co-construct knowledge on permanent organized activities.
This document discusses integrating theory and empiricism in the study of complex human behavior using nonlinear dynamical models and information theory. It provides three examples of research that use nonlinear dynamics: 1) An experimental economics study that documents cultural variation across societies. 2) Modeling the rise and fall of agrarian societies as a nonlinear dynamical system. 3) Research on nonlinear social learning that combines experimental economics with gene-culture coevolutionary theory. It advocates for crossing disciplinary boundaries and drawing on ideas from biology, ecology and evolution to better understand human behavioral complexity.
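A standard toy illustration of the nonlinear dynamics invoked above is the logistic map, where a single deterministic rule yields either convergence or chaos depending on one parameter. This example is a generic illustration, not a model from the document:

```python
# The logistic map x -> r*x*(1-x): a minimal nonlinear dynamical system
# whose behavior (fixed point, oscillation, chaos) depends on r alone.
def logistic_trajectory(r, x0=0.2, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

stable = logistic_trajectory(2.8)    # settles toward the fixed point 1 - 1/r
chaotic = logistic_trajectory(3.9)   # wanders without settling
print(round(stable[-1], 3))
print(round(chaotic[-1], 3))
```

The same qualitative phenomenon, small parameter changes producing qualitatively different regimes, is what makes nonlinear models attractive for describing the rise and fall of societies.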
The document provides an overview of theories from different fields that can provide a refreshed view of the professional context, including:
- Polysystem Theory, which describes social systems as having interdependent subsystems rather than an independent deep structure
- Pierre Bourdieu's concept of habitus, which describes how individual behaviors emerge from social interactions
- Juri Lotman's concept of the semiosphere, which views communication as occurring within overlapping social niches
- Ilya Prigogine's work on dissipative structures and self-organizing systems, which sees entropy as enabling systems to evolve into higher complexity
- Howard Gardner's theory of multiple intelligences, with a focus on emotional intelligence and its importance for feedback loops
The document discusses the differences between machine learning (ML), statistical learning, data mining (DM), and automated learning (AL). It argues that while ML and statistical learning developed similar techniques starting in the 1960s, DM emerged in the 1990s from a merging of database research and automated learning. However, industry was much more enthusiastic about adopting DM techniques compared to AL techniques, even though many DM systems are just friendly interfaces of AL systems. The document aims to explain the key differences between DM and AL that led to DM's greater commercial success.
1) Artificial Intelligence research draws from many disciplines including formal logic, probability theory, linguistics and philosophy. Computational logic combines and improves upon traditional logic and decision theory.
2) The paper argues that the abductive logic programming (ALP) agent model is a powerful model of both descriptive and normative thinking. It includes production systems and is compatible with classical logic and decision theory.
3) The ALP agent model treats beliefs as describing the world and goals as describing how the world should be. Its semantics aim to generate actions and assumptions to make goals and observations true based on beliefs.
A General Principle of Learning and its Application for Reconciling Einstein’... - Jeffrey Huang
This document proposes a general principle of learning based on discovering intrinsic constraint models. It defines intelligence as the ability to understand the world by discovering intrinsic variables and constraints that have minimum entropy. The key goals of learning are to map observations to intrinsic variables and detect constraints to minimize a model entropy objective function. Discovering intrinsic variables is critical for maximizing prediction accuracy, learning efficiency, and generalization power. The principle provides a theoretical foundation for explaining and developing artificial general intelligence.
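The minimum-entropy criterion can be made concrete: among candidate ways of re-describing the same observations, prefer the variable whose value distribution has the lowest Shannon entropy. The data and candidate variables below are invented examples, not from the paper:

```python
import math
from collections import Counter

# Sketch of the minimum-entropy idea: compare candidate descriptions of
# the same observations by the Shannon entropy of their value distribution.
def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

observations = [(x, x % 2) for x in range(16)]
raw = [x for x, _ in observations]     # 16 distinct values: 4 bits
parity = [p for _, p in observations]  # 2 distinct values: 1 bit
print(entropy(raw), entropy(parity))   # 4.0 1.0

best = min([("raw", raw), ("parity", parity)], key=lambda kv: entropy(kv[1]))
print(best[0])  # parity
```

Here the low-entropy "intrinsic variable" (parity) compresses the observations, which is the sense in which the paper ties minimum entropy to generalization power.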
A risk based approach to establish stability testing conditions for tropical ... - Duwan Arismendy
This document analyzes climate data from tropical and subtropical countries to propose appropriate long-term stability testing conditions. Meteorological parameters like temperature, dew point, and relative humidity were measured in locations across Southeast Asia, South America, China, Southern Africa, and the Caribbean. The data was used to identify the hottest and most humid places in each region and calculate safety margins for temperature and humidity compared to different testing conditions. Based on the analysis, the document proposes long-term testing conditions for each country and region that account for the worst climatic conditions and appropriate safety margins.
Bleeding complications in secondary stroke prevention by antiplatelet therapy... - Duwan Arismendy
Abstract. Boysen G (University of Copenhagen, Copenhagen, Denmark). Bleeding complications in secondary stroke prevention by antiplatelet therapy: a benefit–risk analysis (Review). J Intern Med 1999; 246: 239–245.
This review analyses the benefit–risk ratio of antiplatelet drugs in secondary stroke prevention and is based on the published data from eight large stroke prevention trials. In patients with prior transient ischaemic attack (TIA) or stroke, aspirin prevented one to two vascular events (stroke, AMI, or vascular death) per 100 treatment-years with an excess risk of fatal and severe bleeds of 0.4–0.6 per 100 treatment-years. The gastrointestinal bleeding risk was significantly lower with ticlopidine and clopidogrel, which were both somewhat more effective than aspirin in the prevention of vascular events. The combination of dipyridamole and aspirin prevented 2.82 strokes at the expense of an excess risk of 0.61 (95% CI = 0.27–0.95) fatal or severe bleeds per 100 treatment-years.
In the acute phase of stroke, the aspirin-associated risk of haemorrhagic complications was much increased compared with that in the stable phase after stroke, with 0.48 (95% CI = 0.13–0.83) fatal or severe bleeds per 100 treated patients for the first 4 weeks after stroke in the Chinese Acute Stroke Trial and 0.41 (95% CI = 0.05–0.77) in the International Stroke Trial. Still, there was a net benefit with the prevention of about one death or non-fatal ischaemic stroke per 100 treated patients.
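The review's benefit-risk trade-off reduces to simple arithmetic per 100 treatment-years: events prevented minus excess fatal or severe bleeds. The calculation below is a back-of-envelope check using only the figures quoted above:

```python
# Net benefit per 100 treatment-years, from the figures quoted in the
# review: (vascular events prevented) - (excess fatal/severe bleeds).
regimens = {
    "aspirin (low end)":      (1.0, 0.4),
    "aspirin (high end)":     (2.0, 0.6),
    "dipyridamole + aspirin": (2.82, 0.61),
}
for name, (prevented, bleeds) in regimens.items():
    net = prevented - bleeds
    print(f"{name}: net {net:+.2f} events per 100 treatment-years")
```

In every quoted scenario the events prevented outnumber the excess bleeds, which is the review's net-benefit conclusion for the stable phase after stroke.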
Magnesium in disease prevention and overall health - Duwan Arismendy
Magnesium is the fourth most abundant mineral and the second most abundant intracellular divalent cation and has been recognized as a cofactor for >300 metabolic reactions in the body. Some of the processes in which magnesium is a cofactor include, but are not limited to, protein synthesis, cellular energy production and storage, reproduction, DNA and RNA synthesis, and stabilizing mitochondrial membranes. Magnesium also plays a critical role in nerve transmission, cardiac excitability, neuromuscular conduction, muscular contraction, vasomotor tone, blood pressure, and glucose and insulin metabolism. Because of magnesium's many functions within the body, it plays a major role in disease prevention and overall health. Low levels of magnesium have been associated with a number of chronic diseases including migraine headaches, Alzheimer's disease, cerebrovascular accident (stroke), hypertension, cardiovascular disease, and type 2 diabetes mellitus. Good food sources of magnesium include unrefined (whole) grains, spinach, nuts, legumes, and white potatoes (tubers). This review presents recent research in the areas of magnesium and chronic disease, with the goal of emphasizing magnesium's role in disease prevention and overall health.
Iron bis-glycine chelate competes for the nonheme-iron absorption pathway - Duwan Arismendy
BACKGROUND:
The enterocytic absorption pathway of the food fortificant iron bis-glycine chelate has been the subject of controversy because it is not clear whether that substance uses the classic nonheme-iron absorption pathway or a pathway similar to that of heme absorption.
OBJECTIVE:
The objective was to study the absorption pathway of iron bis-glycine chelate in human subjects.
DESIGN:
Eighty-five healthy adult women were selected to participate in 1 of 6 iron-absorption studies. Study A involved the measurement of the dose-response curve of the absorption of ferrous sulfate (through a nonheme-iron absorption pathway); study B involved the competition of iron bis-glycine chelate with ferrous sulfate for the nonheme-iron absorption pathway; study C involved the measurement of the dose-response curve of heme-iron absorption; study D involved the competition of iron bis-glycine chelate with hemoglobin for the heme-iron absorption pathway; and studies E and F were the same as studies A and B, except that the iron bis-glycine chelate was encapsulated in enteric gelatin capsules so that it would not be processed in the stomach.
RESULTS:
Iron from the bis-glycine chelate competed with ferrous sulfate for the nonheme-iron absorption pathway. Iron from the bis-glycine chelate also competed with ferrous sulfate for absorption when liberated directly into the intestinal lumen. Iron from the bis-glycine chelate did not compete with heme iron for the heme-iron absorption pathway.
CONCLUSION:
The iron from iron bis-glycine chelate delivered at the level of the stomach or duodenum becomes part of the nonheme-iron pool and is absorbed as such.
A randomized, multicenter, placebo controlled trial of polyethylene glycol la... - Duwan Arismendy
OBJECTIVES:
Polyethylene glycol (PEG) 3350 (MiraLAX) is currently approved for the short-term treatment of occasional constipation. This study was designed to compare the safety and efficacy of PEG laxative versus placebo over a 6-month treatment period in patients with chronic constipation.
METHODS:
Study subjects who met defined criteria for chronic constipation were randomized in this double-blind, placebo-controlled, parallel, multicenter study to receive PEG laxative as a single daily dose of 17 g or placebo for 6 months. Baseline constipation status was confirmed during a 14-day observation period. As a primary efficacy variable, treatment success was defined as relief of modified ROME criteria for constipation for 50% or more of their treatment weeks. Various secondary measures were assessed. An Interactive Voice Response System (IVRS) recorded daily bowel movement experience and study efficacy and safety information. Laboratory testing at baseline and monthly for the study duration was analyzed for hematology, blood chemistry including amylase, GGT, uric acid, lipids, and urinalysis.
RESULTS:
A total of 304 patients were enrolled and received treatment at one of 50 centers. Successful treatment according to the primary efficacy variable was seen in 52.0% of PEG and 11% of placebo subjects (P < 0.001). Similar efficacy was seen in a subgroup of 75 elderly subjects. According to the primary efficacy definition (based on individual treatment weeks), 61% of PEG treatment weeks versus 22% of the placebo weeks were successful (P < 0.001). There were no significant differences in laboratory findings or adverse events except for the gastrointestinal category where diarrhea, flatulence, and nausea were the most frequent with PEG although they were not individually statistically significant compared with placebo. Similar results were observed when analyzed for differences due to gender, race, or age.
Selecting celebrity endorsers: the practitioner’s perspective - Duwan Arismendy
This document discusses research on celebrity endorsements in advertising. It provides an overview of four models that have been proposed to explain how celebrity endorsements work: the source credibility model, source attractiveness model, product match-up hypothesis, and meaning transfer model. The study described in the document aimed to measure the importance of individual celebrity characteristics according to advertising practitioners and test their importance for different product types. It developed hypotheses predicting that source credibility factors would be more important for technical products while attractiveness factors would be more important for non-technical products. The study used surveys and interviews of advertising practitioners in the UK to understand their perspective on selecting celebrity endorsers.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
A model theory of induction
This article was downloaded by: [Princeton University]
On: 20 April 2010
Publisher Routledge
International Studies in the Philosophy of Science
Publication details, including instructions for authors and subscription information:
http://www.informaworld.com/smpp/title~content=t713427740
A model theory of induction
Philip N. Johnson-Laird a
a
Department of Psychology, Princeton University, New Jersey, USA
To cite this Article: Johnson-Laird, Philip N. (1994) 'A model theory of induction', International Studies in the Philosophy of Science, 8: 1, 5-29.
To link to this Article: DOI: 10.1080/02698599408573474
URL: http://dx.doi.org/10.1080/02698599408573474
INTERNATIONAL STUDIES IN THE PHILOSOPHY OF SCIENCE, VOL. 8, NO. 1, 1994
A model theory of induction
PHILIP N. JOHNSON-LAIRD
Department of Psychology, Princeton University, New Jersey 08544, USA
Abstract  Theories of induction in psychology and artificial intelligence assume that the process leads from observation and knowledge to the formulation of linguistic conjectures. This paper proposes instead that the process yields mental models of phenomena. It uses this hypothesis to distinguish between deduction, induction, and creative forms of thought. It shows how models could underlie inductions about specific matters. In the domain of linguistic conjectures, there are many possible inductive generalizations of a conjecture. In the domain of models, however, generalization calls for only a single operation: the addition of information to a model. If the information to be added is inconsistent with the model, then it eliminates the model as false: this operation suffices for all generalizations in a Boolean domain. Otherwise, the information that is added may have effects equivalent (a) to the replacement of an existential quantifier by a universal quantifier, or (b) to the promotion of an existential quantifier from inside to outside the scope of a universal quantifier. The latter operation is novel, and does not seem to have been used in any linguistic theory of induction. Finally, the paper describes a set of constraints on human induction, and outlines the evidence in favor of a model theory of induction.
Introduction
Induction is part of both everyday and scientific thinking. It enables us to understand
the world and to predict events. It can also mislead us. Many of the cognitive failures
that have led to notable disasters are inductions that turned out to be wrong. For
instance, when the car ferry, Herald of Free Enterprise, sailed from the Belgian port of
Zeebrugge on 6 March 1987, the master made the plausible induction that the bow
doors had been closed. They had always been closed in the past, and there was no
evidence to the contrary. The chief officer made the same induction, as did the bosun.
But, the assistant bosun, whose job it was to close the doors, was asleep in his bunk, and
had not closed the doors. Shortly after leaving the harbor, the vessel capsized and sank,
and 188 people drowned. Induction is indeed an important but risky business. If
psychologists had a better understanding of the strengths and weaknesses of human
inductive competence, then they might be able to help individuals to perform more
skillfully and to introduce more effective measures—especially by way of advisory
computer systems—to prevent inductive disasters.
Induction is also a theoretically confusing business. Some authors restrict the term
to very narrow cases; others outlaw it altogether. Textbooks often define it as leading
from particular premises to a general conclusion, in contrast to deduction, which they
define as leading from general premises to a particular conclusion. In fact, induction can
lead from particular observations to a particular conclusion—as it did in the case of the
Herald of Free Enterprise, and deduction can lead from general premises to a general
conclusion. The first goal of this paper is accordingly to draw a principled distinction
between induction, deduction, and other forms of thought. Its second goal is to
distinguish between varieties of induction. And its third goal is to outline a new theory
of induction. This theory departs from the main tradition in psychology and philosophy,
which treats induction as a process yielding plausible verbal generalizations or hypoth-
eses. It proposes instead that induction generates mental models of domains. As we
shall see, this distinction is not trivial, and it turns out to have some unexpected
consequences.
An outline of the theory of mental models
The central idea in the theory of mental models is that the process of understanding
yields a model (Johnson-Laird, 1983). Unlike other proposed forms of mental represen-
tation, such as propositional representations or semantic networks, models are based on
the fundamental principle that their structure corresponds to the way in which human
beings conceive the structure of the world. This principle has three important corol-
laries:
(1) Entities are represented by corresponding tokens in mental models. Each
entity is accordingly represented only once in a mental model.
(2) The properties of entities are represented by the properties of tokens repre-
senting entities.
(3) Relations among entities are represented by relations among the tokens
representing entities.
Thus, a model of the assertion, "The circle is on the right of the triangle" has the
following structure:

    △     ○
A model may be experienced as a visual image, but what matters is, not the subjective
experience, but the structure of the model: entities are represented by tokens, their
properties are represented by properties of the tokens, and the relations between them
are represented by the relations between the tokens.
As an illustration of the theory and of its implications for the mental representation
of concepts, I will consider its implementation in a program for spatial reasoning that
generates models like the one above (Johnson-Laird & Byrne, 1991). The program
constructs three-dimensional models on the basis of verbal assertions. It has a lexicon
in which each word has an analysis of its meaning into primitive constituents, which I
shall refer to as subconcepts. It has a grammar in which each rule has a corresponding
semantic principle for forming combinations of subconcepts. As the program parses a
sentence, it assembles subconcepts to form a representation of the sentence's meaning.
This propositional representation is then used by other procedures to construct a model of
a particular situation described by the sentence.
Given a noun-phrase such as "the circle", the program uses the subconcept
underlying circle to set up a simple model:

    ○
And given the assertion:
The circle is on the right of the triangle
the parsing process combines the subconcepts underlying the words in the sentence to
yield the following result, which represents the meaning of the assertion:
((1 0 0) ○ △)
The meaning of the relation x on the right of y is a set of subconcepts that consists of
values for incrementing y's Cartesian co-ordinates to find a location for x:
1 0 0
The 1 indicates that x should be located by incrementing y's value on the left-right
dimension whilst holding y's values on the front-back and up-down dimensions con-
stant, i.e. adding 0s to them.
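The increment mechanism just described can be made concrete with a minimal sketch. This is not the paper's actual program; the function name and the use of tuples for co-ordinates are illustrative assumptions.

```python
# Minimal sketch (not the paper's program): a subconcept triple such as
# (1, 0, 0) locates one token relative to another by incrementing the
# reference token's co-ordinates on the left-right, front-back, and
# up-down dimensions.

def place_relative(reference, increments):
    """Return co-ordinates for a new token, given the reference token's
    co-ordinates and the relation's increment on each dimension."""
    return tuple(r + i for r, i in zip(reference, increments))

triangle = (0, 0, 0)                          # arbitrary starting location
circle = place_relative(triangle, (1, 0, 0))  # "on the right of"
print(circle)  # (1, 0, 0)
```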
What the program does with a propositional representation of the meaning of a
sentence depends on context. If the assertion is the first in a discourse, the program uses
the representation to construct a complete model within a minimal array:

    △     ○
The reader will note that an assertion about the relation between two entities with
distinct properties is represented by a model in which there is a relation between two
entities with distinct properties.
Depending on the current state of any existing models, the program can also use the
propositional representation to add an entity to a model, to combine two previously
separate models, to make a valid deduction, or to make a non-monotonic inference. For
example, the program can make a transitive deduction, such as:
The circle is on the right of the triangle.
The cross is on the right of the circle.
The cross is on the right of the triangle.
without relying on any explicit statement of transitivity. It uses the subconcepts for on
the right of to construct the model:
    △     ○     +
It verifies the conclusion in the model, and is unable to find an alternative model of the
premises in which the conclusion is false. In summary, subconcepts combine to form
propositional representations that can be used by many different procedures for con-
structing and manipulating models.
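The transitive deduction above can be sketched as follows. This is a hedged reconstruction, not the program of Johnson-Laird and Byrne (1991): the data structures and function names are assumptions, but the logic follows the text, in that the premises build a single model and the conclusion is then verified in it without any explicit statement of transitivity.

```python
# Hypothetical sketch of model-based transitive deduction: premises place
# tokens in a spatial model; the conclusion is verified against the
# resulting model rather than derived from a transitivity rule.

RIGHT_OF = (1, 0, 0)  # increments on the left-right, front-back, up-down axes

def add_token(model, token, relation, reference):
    """Place `token` by applying `relation`'s increments to the
    co-ordinates of the `reference` token."""
    ref = model[reference]
    model[token] = tuple(r + i for r, i in zip(ref, relation))

def holds(model, x, relation, y):
    """Check whether x stands in the relation to y in the model
    (strictly further along each incremented dimension)."""
    return all(
        (model[x][d] > model[y][d]) if inc > 0 else
        (model[x][d] < model[y][d]) if inc < 0 else True
        for d, inc in enumerate(relation)
    )

model = {"triangle": (0, 0, 0)}
add_token(model, "circle", RIGHT_OF, "triangle")    # premise 1
add_token(model, "cross", RIGHT_OF, "circle")       # premise 2
print(holds(model, "cross", RIGHT_OF, "triangle"))  # True: conclusion verified
```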
The concept of on the right of is part of a system based on the same underlying set
of subconcepts:

    on the right of:    1  0  0
    on the left of:    -1  0  0
    in front of:        0  1  0
    behind:             0 -1  0
    above:              0  0  1
    below:              0  0 -1
The theory postulates that some such system allows human reasoners to set up spatial
models and to manipulate them. It must exist prior to the mastery of any particular
spatial relation, and can be used to acquire new high-level concepts. For example, one
might acquire the relation represented by (1 0 1), roughly diagonally up and to the right,
if it played an important part in spatial thinking and was accordingly dignified by a single
spatial term. The subconceptual system also provides individuals with an idealized
taxonomy. In the real world, objects do not have to be perfectly aligned, and so a
judgement of the relation between them may compare their actual co-ordinates with
alternative possibilities in the taxonomy. Hence, the extension of a relation depends not
just on its subconceptual analysis but also on other concepts in the same taxonomy.
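The idea that a judgement compares actual co-ordinates with the idealized alternatives in the taxonomy can be sketched as a nearest-match comparison. The paper specifies no such algorithm, so the normalization and distance measure below are purely illustrative assumptions.

```python
# Illustrative only: judge a spatial relation by comparing the actual
# co-ordinate difference between two objects with the idealized
# alternatives in the subconceptual taxonomy.

TAXONOMY = {
    "on the right of": (1, 0, 0), "on the left of": (-1, 0, 0),
    "in front of": (0, 1, 0),     "behind": (0, -1, 0),
    "above": (0, 0, 1),           "below": (0, 0, -1),
}

def judge(x, y):
    """Return the relation whose idealized increments best match the
    normalized difference between x's and y's co-ordinates."""
    diff = [a - b for a, b in zip(x, y)]
    norm = max(sum(d * d for d in diff) ** 0.5, 1e-9)
    unit = [d / norm for d in diff]
    return min(TAXONOMY,
               key=lambda rel: sum((u - i) ** 2
                                   for u, i in zip(unit, TAXONOMY[rel])))

# Objects need not be perfectly aligned: slightly off to the right
print(judge((5, 0.4, 0), (0, 0, 0)))  # on the right of
```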
The theory of mental models extends naturally to the representation of sentential
connectives, such as and, if, and or, and quantifiers, such as any and some. The theory
posits that models represent as little as possible explicitly. Hence, the initial representa-
tion of a conditional, such as "if there is an A then there is a 2", is by the following two
models:
    [A]   2
    …
The first line represents an explicit model of the situation in which the antecedent is
true, and the second line represents an implicit model of the alternative situation(s). The
second model is implicit because it has no immediately available content, but it can be
fleshed out to make its implicit content explicit. The square brackets around the A in
the first model are an "annotation" indicating that the A has been exhaustively
represented, i.e. it cannot occur in any other model (for a defense of such annotations,
see Newell, 1990; Johnson-Laird & Byrne, 1991). The implicit model can be, and in
certain circumstances is, fleshed out explicitly. The fleshing out can correspond to a
bi-conditional, "if and only if there is an A then there is a 2":

    A    2
   ¬A   ¬2
where "¬" is an annotation representing negation. Alternatively, the fleshing out takes
the weaker conditional form:
    A    2
   ¬A    2
   ¬A   ¬2
There are similar models that represent the other sentential connectives, such as or, only
if, unless (see Johnson-Laird & Byrne, 1991, for the evidence for the psychological reality
of these models).
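The fully fleshed-out models of a connective amount to the rows of its truth table in which it holds, which a short sketch can enumerate. The function and dictionary here are illustrative assumptions, not part of the theory's implementation, with True standing for "A" (or "2") and False for its negation.

```python
# Enumerate the explicit, fully fleshed-out models of a connective:
# the (A, 2) situations in which the assertion is true.

def models(connective):
    """Return the rows (a, b) in which the connective holds."""
    table = {
        "conditional":   lambda a, b: (not a) or b,  # if A then 2
        "biconditional": lambda a, b: a == b,        # if and only if A then 2
        "disjunction":   lambda a, b: a or b,        # A or 2, or both
    }
    f = table[connective]
    return [(a, b) for a in (True, False) for b in (True, False) if f(a, b)]

print(models("biconditional"))  # [(True, True), (False, False)]
print(models("conditional"))    # [(True, True), (False, True), (False, False)]
```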
The representation of quantifiers is also a natural extension of the theory. An
assertion such as, "Some of the athletes are bakers", has the following single model:
    a    b
    a    b
    a
         b
    …
where, unlike the previous diagrams, each line now represents a separate individual in
the same model of a state of affairs: "a" denotes a representation of an athlete and "b"
denotes a representation of a baker. The number of tokens representing individuals is
arbitrary. The final line represents implicit individuals, who may be of some other sort.
The statement, "All of the athletes are bakers", has the following initial model:
[a] b
[a] b
[a] b
The square brackets represent that the athletes have been exhaustively represented (in
relation to the bakers). Similar interpretations are made for other quantifiers and for
assertions that contain more than one quantifier, such as "None of the Avon letters is
in the same place as any of the Bury letters". Undoubtedly, the best success of the
theory of mental models has been in accounting for the phenomena of the comprehen-
sion of discourse and the phenomena of deductive reasoning. The theory rejects the idea
that discourse is encoded in a semantic network or in any other way that represents
merely the meanings of expressions. What is represented, as experiments have corrobo-
rated (see e.g. Garnham, 1987) are referents, their properties, and the relations among
them. The theory rejects the idea that deduction depends on formal rules of inference.
It proposes instead that reasoners construct models of premises, draw conclusions from
them, and search for alternative models of the premises that might falsify these
conclusions. It makes two principal predictions about deduction: the major cause of
difficulty of making deductions is the need to consider models of alternative possibilities;
the most likely errors are conclusions that overlook such alternatives. These predictions
have been corroborated in all the main domains of deductive reasoning, including
propositional, relational, and quantificational inferences (Johnson-Laird & Byrne,
1991). We now turn to the application of the model theory to induction, and we begin
by using it to help to draw a systematic distinction between induction and deduction.
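The quantified models above (athletes and bakers) can also be given a minimal computational reading: each individual is a set of properties, and a quantifier is checked against the list of individuals. This sketch is illustrative only; the token sets and checking functions are assumptions, not the theory's procedures.

```python
# A quantified model as a list of individuals, each a set of properties,
# following the athlete ("a") and baker ("b") diagrams above.

some_model = [{"a", "b"}, {"a", "b"}, {"a"}, {"b"}]  # some athletes are bakers
all_model = [{"a", "b"}, {"a", "b"}, {"a", "b"}]     # all athletes are bakers

def some(model, p, q):
    """True if some individual with property p also has property q."""
    return any(p in ind and q in ind for ind in model)

def all_(model, p, q):
    """True if every individual with property p also has property q."""
    return all(q in ind for ind in model if p in ind)

print(some(some_model, "a", "b"), all_(some_model, "a", "b"))  # True False
print(all_(all_model, "a", "b"))                               # True
```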
Induction, deduction and semantic information
A simple way in which to distinguish induction, deduction, and other forms of thought,
depends on semantic information, that is, the models of possible states of affairs that a
proposition rules out as false (see Bar-Hillel & Carnap, 1964; Johnson-Laird, 1983). For
example, the proposition, "The battery is dead or the voltmeter is faulty, or both", has
the following three explicit models of alternative possibilities:
    d    f
    d   ¬f
   ¬d    f
For simplicity, I am here using single letters to denote entities with particular properties:
d represents "the battery is dead", and f represents "the voltmeter is faulty", and, as
before, "¬" represents negation. Each line denotes a model of a different situation, and
so the disjunction eliminates only one out of four possibilities: the situation where there
is neither a dead battery nor a faulty voltmeter: ¬d ¬f. The categorical assertion,
"The battery is dead", eliminates two models out of the four possibilities, ¬d f, and
¬d ¬f, and so it has a greater information content. And the conjunction, "The
battery is dead and the voltmeter is faulty", eliminates all but one of the four and so it
has a still higher information content.
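On this account, semantic information can be computed directly: identify each proposition with the set of possible states it eliminates. A minimal sketch, assuming the two Boolean variables d ("the battery is dead") and f ("the voltmeter is faulty") as in the text:

```python
# Semantic information as the set of possibilities a proposition rules out.
from itertools import product

STATES = list(product([True, False], repeat=2))  # all (d, f) possibilities

def eliminated(prop):
    """Return the states of affairs the proposition rules out as false."""
    return {s for s in STATES if not prop(*s)}

disjunction = lambda d, f: d or f   # "dead or faulty, or both"
categorical = lambda d, f: d        # "the battery is dead"
conjunction = lambda d, f: d and f  # "dead and faulty"

# The stronger the proposition, the more possibilities it eliminates:
print(len(eliminated(disjunction)),   # 1
      len(eliminated(categorical)),   # 2
      len(eliminated(conjunction)))   # 3
```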
This notion of semantic information enables us to distinguish between different
sorts of thought process. Given a set of premises and a conclusion, we can ask: what is
the relation between the states of affairs that they respectively eliminate? There are
clearly five possibilities (corresponding to the five possible relations between two sets):
(1) The premises and conclusion rule out exactly the same states of affairs. This is a
case of deduction, as the following example makes clear. You know that the battery is
dead or the voltmeter is faulty, or both. By testing the voltmeter, you observe that it is not
faulty. Your premises are thus:
The battery is dead or the voltmeter is faulty, or both.
The voltmeter is not faulty.
And so you infer:
The voltmeter is not faulty and the battery is dead.
The conclusion follows validly from your premises, i.e. it must be true given that the
premises are true. It does not increase semantic information: the premises eliminate all
but one possibility:

    d   ¬f
and the conclusion holds in this model too. Like any useful deduction, the conclusion
makes explicit what was hitherto only implicit in the premises.
(2) The premises rule out more states of affairs than the conclusion, i.e. the conclusion
is consistent with additional models. Here is an example. There is a single premise:
The battery is dead,
and the conclusion is:
The battery is dead or the bulb is broken, or both.
The conclusion follows validly from the premise, i.e. it must be true given that the
premise is true. Logically-untrained individuals shun such deductions, however, pre-
sumably because they throw semantic information away.
(3) The premises and conclusion rule out disjoint states of affairs. This case can only
occur when the conclusion contradicts the premises. For example, the premise:
The battery is dead
rules out any model containing:
   ¬d
whereas the conclusion:
The battery is not dead
rules out any model containing:
d
Hence, the two assertions rule out disjoint states of affairs. A deduction may lead to the
negation of a hypothetical assumption, but no rational process of thought leads
immediately from a premise to its negation (though, cf. Freud, 1925).
(4) The premises and conclusion rule out overlapping states of affairs. For example, the
premise:
The battery is dead
leads to the conclusion:
There is a short in the circuit.
The two propositions each rule out any situation in which both are false, namely, any
model containing:

   ¬d   ¬s

where "¬s" denotes "there is not a short in the circuit". Each proposition, however,
also rules out independent states of affairs. The premise rules out any situation
containing ¬d, and so it rules out the model:

   ¬d    s

And the conclusion rules out any situation containing ¬s, and so it rules out the
model:

    d   ¬s
The production of a conclusion that rules out situations overlapping those ruled out by
the premises may be the result of a free association where one proposition leads to
another that has no simple relation to it, or it may be the result of a creative thought
process.
(5) The conclusion goes beyond the premises to rule out some additional state of affairs
over and above what they rule out. This case includes all the traditional instances of
induction, and so henceforth I shall use it to define induction: An induction isany process
of thought yielding a conclusion that increasesthe semantic information in its initial observa-
tionsorpremises. Here is an example. Your starting point is the premises:
The battery is dead or the voltmeter is faulty, or both.
The voltmeter is faulty.
And you infer:
∴ The battery is not dead.
The conclusion does not follow validly from the premises. They eliminate all but two
models:
    d    f
   ¬d    f
The conclusion increases information beyond what is in the premises because it
eliminates the first of these two models. Yet, the conclusion is quite plausible and it may
be true. The difference between induction and deduction is accordingly that induction
increases the semantic information in the premises, whereas deduction maintains or
reduces it.
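The five cases can be summarized as relations between the sets of states of affairs that the claims eliminate. The classifier below is a hypothetical sketch of that comparison, not a procedure from the paper; note that for the inductive case the conclusion is taken together with the premises, since it "goes beyond" them.

```python
# Hypothetical sketch of the five-way classification: identify each claim
# with the set of states of affairs it rules out, and compare the sets.

def classify(premise_out, conclusion_out):
    """Classify a thought process by the relation between the states
    eliminated by the premises and by the conclusion."""
    if premise_out == conclusion_out:
        return "deduction: same information"
    if conclusion_out < premise_out:
        return "deduction: information thrown away"
    if premise_out < conclusion_out:
        return "induction: information increased"
    if premise_out.isdisjoint(conclusion_out):
        return "contradiction"
    return "overlap: free association or creative thought"

# The inductive example in the text: the premises "d or f" and "f"
# eliminate {~d~f, d~f}; taken together with the induced conclusion
# "not d", the claim also eliminates {d f}.
premises = {"~d ~f", "d ~f"}
with_conclusion = {"~d ~f", "d ~f", "d f"}
print(classify(premises, with_conclusion))  # induction: information increased
```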
An assertion has semantic information because it eliminates certain models if it is
true. It may not be true, however. And neither deduction nor induction comes with any
guarantee that their conclusions are true. If the conclusion you deduced about the
battery turns out to be false, then you should revise your belief in one or other of the
premises. If the conclusion you induced about the battery turns out to be false, then you
should not necessarily change your mind about the truth of the premises. A valid
deduction yielding a false conclusion must be based on false premises, but an induction
yielding a false conclusion need not be.
Some varieties of induction
Induction occurs in three stages. The first stage is to grasp some propositions—some
verbal assertions or perceptual observations. The second stage is to frame a tentative
hypothesis that reaches a semantically stronger description or understanding of this
information. If this conclusion follows validly from the premises and the background
knowledge, then the inference is not an induction but an enthymeme, i.e. a deduction
that depends on premises that are not stated explicitly (see Osherson, Smith & Shafir,
1986). The third stage, if a reasoner is prudent, is to evaluate the conclusion, and as a
result to maintain, modify, or abandon it.
A common form of induction in daily life concerns a specific event, such as the
induction made by the master of the Herald of Free Enterprise that the bow doors had
been closed. Another form of induction leads to a general conclusion. For instance, after
standing in line to no avail on just one occasion in Italy, you are likely to infer:
In Italian bars with cashiers, you pay the cashier first and then take your receipt
to the bar to make your order.
A special case of an induction is an explanation, though not all explanations are arrived
at inductively. In the preceding case, the induction yields a mere description that makes
no strong theoretical claim. But, the process may be accompanied by a search for an
explanation, e.g.:
The barmen are too busy to write bills, and so it is more efficient for customers
to pay the cashier and then to use their receipts to order.
Scientific laws are general descriptions of phenomena, e.g. Kepler's third law of
planetary motion describes the elliptical orbits of the planets. Scientific theories explain
these regularities on the basis of more fundamental considerations, e.g. Einstein's theory
of gravitation explains planetary orbits in terms of the effects of mass on the curvature
of space-time. Some authors argue that induction plays no significant role in scientific
thinking. Thus, Popper (1972) claims that science is based on explanatory conjectures
that are open to falsification, but he offers no account of their origins. The distinction
between an explanation and a corresponding description is far from clear. One view is
that the explanation is a statement in a theoretical language that logically implies the
description, which is a statement in an observation language. But this claim is disputed
(see e.g. Harman, 1973; Thagard, 1988), and it misses the heart of the matter
psychologically. You can describe a phenomenon without understanding it, but you
cannot explain a phenomenon unless you have some putative understanding of it.
Descriptions allow one to make a mental simulation of a phenomenon, whereas
explanations allow one to take it to pieces: you may know what causes the phenomenon,
what results from it, how to influence, control, initiate, or prevent it, how it relates to
other phenomena or how it resembles them, how to predict its onset and course, what
its internal or underlying structure is, how to diagnose unusual events, and, in science,
how to relate the domain as a whole to others. Scientific explanations characteristically
make use of theoretical notions that are unobservable, or that are at a lower physical
level than descriptions of the phenomena. An explanation accounts for what you do not
understand in terms of what you do understand: you cannot construct a model if the key
explanatory concepts are not available to you. Hence, a critical distinction is whether an
explanation is developed by deduction (without increasing the semantic information in
the premises and background knowledge), by induction (increasing the semantic infor-
mation), or by creation (with an overlap in the semantic information in the explanation
and the original knowledge and premises).
The induction of a generalization could just as well be described as an induction
about a concept. In the earlier example, you acquired knowledge about the concept:
Italian bars with cashiers.
These ad hoc concepts are clearly put together inductively out of more basic concepts,
such as the concepts of cashiers, receipts, and bars (Barsalou, 1987). Adults continue
to learn concepts throughout their lives. Some are acquired from knowledge by
acquaintance, others from knowledge by description. You cannot acquire the full
concept of a color, a wine, or a sculpture without a direct acquaintance with them, but
you can learn about quarks, genes, and the unconscious, from descriptions of them.
In summary, inductions are either specific or general; and either descriptive or
explanatory. Generalizations include the acquisition of ad hoc concepts and the formu-
lation of conjectures to explain sets of observations, even perhaps a set containing just
a single datum, such as Sir Alexander Fleming's observation of the destruction of
bacteria on a culture plate—an observation that led to the discovery of penicillin. All
these results are fallible, but human reasoners are usually aware of the fallibility of their
inductions.
Two hypotheses about induction: common elements vs. prototypes
My goal now is to advance a new theory of induction, which accounts for specific and
general inductions. To set the scene, however, I want to sketch the main lines of the
only two historically important ideas about induction. The first idea is that induction is
a search for what is common to a set of observations. Hence, if they all have an element
in common, then this element may be critical. If the positive and negative instances of
the class of observations differ just in respect of this element, then it is indeed the critical
element. This idea implies that a class of events has a set of necessary conditions that
are jointly sufficient to determine its instances (see Smith & Medin, 1981). It can be
traced back to the British Empiricist philosophers, such as Mill (1843), and it provided
the blueprint for a generation of modern psychological investigations. For example, one
of the founders of Behaviourism, Clark L. Hull (1920), studied the acquisition of
concepts based on common elements, and he extended his results to everyday concepts,
arguing that the meaning of dog is "a characteristic more or less common to all dogs and
not common to cats, dolls, and teddy-bears".
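The common-element idea is readily sketched in code. In the fragment below (the feature names and the set representation are illustrative, not Hull's), each instance is a set of features, and the candidate criterial elements are those common to every positive instance and absent from every negative one:

```python
# A minimal sketch of common-element induction: candidate criterial features
# are those shared by all positive instances and possessed by no negative one.

def common_elements(positives, negatives):
    """Return the features common to all positives and absent from all negatives."""
    shared = set.intersection(*positives)   # common to every positive instance
    for neg in negatives:
        shared -= neg                       # discard anything a negative instance has
    return shared

positives = [{"barks", "four_legs", "fur"},
             {"barks", "four_legs", "tail"}]
negatives = [{"four_legs", "fur", "meows"}]

print(common_elements(positives, negatives))   # {'barks'}
```

On this scheme, a class without any common element defeats the procedure outright, which is just the objection the prototype theorists raised.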
The second idea rejects common elements (e.g. Wittgenstein, 1953; de Saussure,
1960). Hence, dogs have nothing in common with one another. They tend to have four
legs, fur, and the ability to bark, but these are not necessary conditions—a dog could be
three-legged, bald, and mute. The criteria for doghood accordingly characterize a
prototypical dog. Prototypes led a secret life in psychology (see e.g. Fisher, 1916;
Bruner, Goodnow & Austin, 1956) until they emerged in the work of Rosch (e.g. 1973).
She argued that real entities are mentally represented by prototypes. This idea was
Downloaded By: [Princeton University] At: 17:17 20 April 2010
14 PHILIP N. JOHNSON-LAIRD
corroborated by the finding that not all instances of a concept are deemed to be equally
representative—a terrier is a prototypical dog, but a chihuahua is not. Similarly, the time
to make judgements about membership of a concept depends on the distance of the
instance from the prototype (see e.g. Rips, Shoben & Smith, 1973; Hampton, 1979).
The contrast between the two ideas of induction is striking. The first idea
presupposes that a general phenomenon has common elements, and the second idea
rejects this presupposition in favor of prototypes. Not surprisingly, current studies of
induction are in a state of flux. Students of artificial intelligence have turned the first
idea into machines that manipulate explicitly structured symbols in order to produce
inductive generalizations (e.g. Hunt, Marin & Stone, 1966; Winston, 1975; Quinlan,
1983; Michalski, 1984; Langley, Simon, Bradshaw & Zytkow, 1987). Connectionists
have implemented a version of the second idea (e.g. Hinton, 1986; Hanson & Bauer,
1989). Psychologists have examined both ideas experimentally (see Smith & Medin,
1981). And philosophers have argued that neither idea is viable and that induction is
impossible (see e.g. Fodor, 1988, and for a rebuttal, Johnson-Laird, 1983, Ch. 6). The
state of the art in induction can be summarized succinctly: theories and computer
programs alike represent inductive conjectures in an internal language based on a given
set of concepts; they use a variety of linguistic operations for generalizing (and
specializing) these conjectures; there are as yet no procedures that can rapidly and
invariably converge on the correct inductive description in a language as powerful as the
predicate calculus. Certainly, no adequate theory of the human inductive process exists,
and this gap is a serious defect in knowledge.
A model theory of specific inductions
The process of induction, I am going to argue, is the addition of information to a model.
In the case of specific inductions in everyday life, the process is hardly separable as a
distinct mental activity: it is part of the normal business of making sense of the world.
When the starter won't turn over the engine, your immediate thought is:
The battery is flat.
Your conclusion is plausible, but invalid, and so Polya (1957) has suggested that formal,
but invalid, rules are the heuristic basis of such inferences. Because rules do not even
appear to underlie valid inferences (see Johnson-Laird & Byrne, 1991), it is likely that
specific inductions have another basis. You have models, perhaps simplistic, of the car's
electrical circuitry including the battery and starter:

[battery-power   circuit-ok   starter-ok]   starter-turns
              . . .
The three symbols in the top left-hand brackets denote a model of the battery with
power, a model of the circuit conveying power to the starter, and a model of the starter
as working. The symbol on the top right hand denotes a model of the starter turning
over the engine. The second model, consisting of the three dots, is initially implicit: it is
just a place-holder to allow for the fact that there is an alternative to the first model.
When you observe that the starter does not turn over the engine, then this observation
eliminates the first model and fleshes out the second model to yield:
A MODEL THEORY OF INDUCTION
[¬battery-power]   ¬starter-turns
[¬circuit-ok]      ¬starter-turns
[¬starter-ok]      ¬starter-turns
You can now diagnose that the battery is dead, though there are other possible
diagnoses: the circuit is broken, or the starter does not work. The original model might
be triggered by anything in working memory that matches its explicit content, and so it
can be used to make both deductions and inductions.
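The effect of the observation can be sketched as follows. Each model is a set of propositions assumed to hold (the labels are invented, and the alternative model is shown already fleshed out as a flat battery rather than as an implicit place-holder):

```python
# A sketch of how an observation eliminates a model: an observation that
# contradicts a proposition in a model removes that model, and the surviving
# models supply the diagnosis.

def eliminate(models, observation_negates):
    """Drop every model containing the proposition the observation negates."""
    return [m for m in models if observation_negates not in m]

models = [
    frozenset({"battery_has_power", "circuit_ok", "starter_ok", "engine_turns"}),
    frozenset({"battery_flat"}),   # the fleshed-out alternative model
]

# Observation: the starter does not turn over the engine.
survivors = eliminate(models, "engine_turns")
print(survivors)   # only the alternative model remains
```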
People are extraordinarily imaginative in building explanatory models that interre-
late specific events. Tony Anderson and I demonstrated their ability in an experiment
based on randomly paired events (Johnson-Laird & Anderson, 1991). In one condition
the subjects received pairs of sentences taken at random from separate stories:
John made his way to a shop which sold TV sets.
Celia had recently had her ears pierced.
In another condition, the sentences were modified to make them co-referential:
Celia made her way to a shop which sold TV sets.
She had recently had her ears pierced.
The subjects' task was to explain what was going on. They readily went beyond the
information given to them in order to account for what was happening. They proposed,
for example, that Celia was getting reception in her ear-rings and wanted the TV shop
to investigate, that she was wearing new earrings and wanted to see herself on closed
circuit TV, that she had won a bet by having her ears pierced and was going to spend
the money on a TV set, and so on. The subjects were almost equally ingenious with
the sentences that were not co-referential.
A critical factor in the construction of a model is, as Tversky and Kahneman (1973)
have established, the availability of relevant knowledge. We investigated this aspect of
specific inductions in an experiment using such premises as:
The old man was bitten by a poisonous snake.
There was no known antidote.
When we asked the subjects to say what happened, every single one replied that the old
man died. But, when the experimenter responded, "Yes, that's possible but not in fact
true," then the majority of subjects were able to envisage alternative models in which the
old man survived. If the experimenter gave the same response to each of the subjects'
subsequent ideas, then sooner or later they ran out of ideas. Yet, they tended to generate
ideas in approximately the same order as one another, i.e. the sequences were reliably
correlated. Hence, the availability of relevant knowledge has some consistency within
the culture. The conclusions to the snake-bite problem, for instance, tend to be
produced in the following order:
(1) The old man died.
(2) The poison was successfully removed, e.g. by sucking it out.
(3) The old man was immune to the poison.
(4) The poison was weak, and not deadly.
(5) The poison was blocked from entering the circulatory system, e.g. by the man's
thick clothing.
Could the subjects be certain that they had exhausted all possible models of the
Figure 1. Some rules of generalization used in inductive programs, for example:

If p & q then s   becomes   If p & r then s
If p & q then s   becomes   If p then s
If p then s       becomes   If p or q then s

(Winston, 1975; Michalski, 1983; Thagard and Holyoak, 1985)
premises? Of course not. Indeed, by the end of the experiment, their confidence in their
initial conclusion had fallen reliably, even in a second group where the experimenter
merely responded, "Yes, that's possible" to each idea. Specific inductions crop up so
often in everyday life because people rarely have enough information to make valid
deductions. Life is seldom deductively closed. Specific inductions are potentially
unlimited, and so there may always be some other, as yet unforeseen, counterexample
to a putative conclusion. A few of the subjects in the experiment produced still more
baroque possibilities, such as that the old man was kept alive long enough for someone
to invent an antidote. If the sequence of conclusions is to be explained in terms of rules
for common-sense inferences (see Collins & Michalski, 1989), then they will have to
generate a sequence of plausible inferences. However, Bara, Carassa and Geminiani
(1984) have shown in a computer simulation that such sequences can be generated by
the manipulation of models.
Models and generalization
General inductions, according to linguistic conceptions, depend on going beyond the
data in order to make a generalization. A variety of linguistic operations have been
proposed to make such generalizations, and Fig. 1 shows just some of them. It is natural
to wonder how many different linguistic operations of generalization there are. We can
begin to answer this question by considering the following possibilities. A sufficient
condition for a concept can be stated in a conditional assertion, e.g.:
If it is a square then it is an instance of the concept.
A necessary condition for a concept can also be stated in a conditional assertion, e.g.:
If it is an instance of the concept then it is a square.
Sufficient conditions for a concept, C, can be generalized by the following operations:
(1) Dropping a conjunct from the antecedent:
If A & B then C becomes If A then C
(2) Adding an inclusive disjunction to the antecedent:
If A then C becomes If A or B then C
Necessary conditions for a concept, C, can be generalized by the following operations:
(3) Adding a conjunct to the consequent:
If C then A becomes If C then (A & B)
(4) Dropping an inclusive disjunct from the consequent:
If C then (A or B) becomes If C then A
Both conditionals can be generalized by adding the respective converse so as to state
necessary and sufficient conditions:
(5) Adding the converse conditional:
If A then C and If C then A become If and only if A then C
These transformations are applicable to inductive hypotheses in general. Hence, for
example, the step from:
If something is ice, then it is water.
to:
If something is ice, then it is water and it is frozen.
is a generalization (based on operation 3).
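That each of the five operations yields a generalization, i.e. eliminates every state of affairs the original hypothesis eliminates plus at least one more, can be checked by brute force over truth assignments. The following sketch (my encoding, not part of the theory) verifies all five:

```python
# Each operation is a proper generalization: the new conditional's satisfying
# assignments are a proper subset of the old conditional's.

from itertools import product

def sat(h):
    """Assignments over (A, B, C) that satisfy the hypothesis."""
    return {v for v in product([False, True], repeat=3) if h(*v)}

def generalizes(new, old):
    """True if new eliminates everything old eliminates, and more."""
    return sat(new) < sat(old)   # proper-subset test on the surviving models

imp = lambda p, q: (not p) or q   # material conditional

# (1) drop a conjunct from the antecedent
assert generalizes(lambda a, b, c: imp(a, c), lambda a, b, c: imp(a and b, c))
# (2) add an inclusive disjunction to the antecedent
assert generalizes(lambda a, b, c: imp(a or b, c), lambda a, b, c: imp(a, c))
# (3) add a conjunct to the consequent
assert generalizes(lambda a, b, c: imp(c, a and b), lambda a, b, c: imp(c, a))
# (4) drop an inclusive disjunct from the consequent
assert generalizes(lambda a, b, c: imp(c, a), lambda a, b, c: imp(c, a or b))
# (5) add the converse conditional
assert generalizes(lambda a, b, c: imp(a, c) and imp(c, a), lambda a, b, c: imp(a, c))
print("all five operations are proper generalizations")
```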
The five operations above by no means exhaust the set of possible generalizations.
Consider, for example, the set of all four possible models that can be built from the two
propositions: "it's a square", and "it's a positive instance of the concept," and their
negations:

□    +ve
□    −ve
¬□   +ve
¬□   −ve

where "+ve" symbolizes "it's a positive instance of the concept", and "−ve" symbolizes
"it's a negative instance of the concept". The number of relevant propositions, n, is two
in this case, the number of possible models is 2^n, and the number of possible subsets
of them is 2^(2^n), including both the set as a whole and the empty set. With any set
of models based on n propositions, a hypothesis such as:
If it's a square, then it's a positive instance of the concept
eliminates a quarter of them. We can now ask how many logically distinct propositions
are generalizations of this hypothesis, i.e. how many eliminate the same models plus at
least one additional model. The answer equals the number of different sets of models
that exclude the same quarter of possible models as the original hypothesis minus two
cases: the empty set of models (which corresponds to a self-contradiction) and the set
excluded by the original hypothesis itself.
In general, given a hypothesis, H, that rules out a proportion, I(H), of possible models,
the number of possible generalizations of H is equal to:

2^((1 − I(H)) · 2^n) − 2

Unless a hypothesis has a very high information content, which rules out a large
proportion of models, the formula shows that the number of its possible generalizations
increases exponentially with the number, n, of potentially relevant propositions.
Any simple search procedure based on eliminating putative hypotheses will not be
computationally tractable: it will be unable to examine all possible generalizations in a
reasonable time. Many inductive programs have been designed without taking this
problem into account. They are viable only because the domain of generalization has
been kept artificially small. The programmer rather than the machine has determined
the members of the set of relevant propositions.
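The combinatorial point can be made concrete. A hypothesis that rules out proportion i of the 2^n possible models leaves (1 − i) · 2^n models standing; its generalizations correspond to the subsets of those models, minus the whole set (the hypothesis itself) and the empty set (a self-contradiction), giving 2^((1 − i) · 2^n) − 2. A short computation (my arithmetic, following the quarter-eliminating example in the text) shows the explosion:

```python
# Counting the generalizations of a hypothesis that rules out proportion i
# of the 2**n possible models.

def n_generalizations(n, i):
    remaining = round((1 - i) * 2 ** n)   # models the hypothesis still allows
    return 2 ** remaining - 2             # all stricter subsets, minus the
                                          # contradiction and the hypothesis itself

for n in (2, 3, 4, 5):
    print(n, n_generalizations(n, 0.25))
```

Already at five relevant propositions there are over sixteen million candidate generalizations.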
Although there are many possible operations of linguistic generalization, the situ-
ation is very different if induction is based instead on mental models. Only one
operation is needed for the five generalizations above and indeed for all possible
generalizations in a Boolean domain. It is the operation of adding information to a
model with the effect of eliminating it. For example, to revert to the five operations
above, the generalization of dropping a conjunct from an antecedent is equivalent to
eliminating a model. Thus, an assertion of the form:
If A & B then C
corresponds to a set of models that includes those of:
If A then C
and the tautology, B or not-B.
The operation of eliminating a model—by adding information that contradicts
it—suffices for any generalization, because generalization is nothing more than the
elimination of possible states of affairs. The resulting set of models can then be
described by a parsimonious proposition. Although the operation obviates the need to
choose among an indefinite number of different forms of linguistic generalization, it
does not affect the intractability of the search. The problem now is to determine which
models to eliminate. And, as ever, the number of possibilities to be considered increases
exponentially with the number of models representing the initial situation.
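The equivalence between the linguistic operation and the elimination of a model can be exhibited directly. In this sketch (my encoding), the models of "If A & B then C" minus the models of "If A then C" comprise exactly one state of affairs, A & not-B & not-C:

```python
# Generalizing "If A & B then C" to "If A then C" is the same operation as
# eliminating the single model in which A holds without B and without C.

from itertools import product

def sat(h):
    """Assignments over (A, B, C) that satisfy the hypothesis."""
    return {v for v in product([False, True], repeat=3) if h(*v)}

before = sat(lambda a, b, c: (not (a and b)) or c)   # If A & B then C
after  = sat(lambda a, b, c: (not a) or c)           # If A then C

print(before - after)   # the single eliminated model: A, not-B, not-C
```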
Models and the operations of generalization with quantifiers
Some inductive programs, such as INDUCE 1.2 (Michalski, 1983), operate in a domain
that allows quantification over individuals, i.e. with a version of the predicate calculus.
Where quantifiers range over infinitely many individuals, it is impossible to calculate
semantic information on the basis of cardinalities, but it is still possible to maintain a
partial rank order of generalization: one assertion is a generalization of another if it
eliminates certain states of affairs over and above those eliminated by the other
assertion. Once again, we can ask: how many operations of generalization are necessary?
The answer, once again, is that the only operation that we need is the archetypal
one that adds information to models so as to eliminate otherwise possible states of
affairs. Tokens can be added to the model in order to generalize the step from a finite
number of observations to a universal claim. You observe that some entities of a
particular sort have a property in common:
Electrons emitted in radioactive decay damage the body.
Positrons emitted in radioactive decay damage the body.
Photons emitted in radioactive decay damage the body.
These initial observations support the following model:
p   d
p   d
p   d
where "p" symbolizes a particle and "d" damage to the body. Information can be added
to the model to indicate that all such particles have the same property:
[p] d
[p] d
[p] d
where the square brackets indicate that the set of particles is now exhaustively repre-
sented in the model. This model rules out the possibility of any particles emitted in
radioactive decay that do not damage the body. This operation on models corresponds
to a linguistic operation that leads from an initial observation:
Some particles emitted in radioactive decay damage the body,
to the conclusion:
Any particles emitted in radioactive decay damage the body.
Some authors refer to this operation as "instance-based" generalization (Thagard and
Holyoak, 1985) or as "turning constants into variables" (Michalski, 1983, p. 107).
There seems to be little to choose between the operation on models and the linguistic
operation. However, the operation on models turns out to yield other forms of linguistic
generalization.
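The operation of exhaustively representing a set of tokens can be sketched as follows; the Token class and its exhausted flag are my rendering of the square-bracket notation, not the theory's own machinery:

```python
# Marking a set of tokens as exhausted adds the information that no unlisted
# instances exist, licensing the step from "some" to "any".

from dataclasses import dataclass

@dataclass
class Token:
    kind: str
    property: str
    exhausted: bool = False   # the square brackets in the paper's notation

model = [Token("particle", "damages"),
         Token("particle", "damages"),
         Token("particle", "damages")]

def generalize(model, kind):
    """Mark every token of the given kind as exhaustively represented."""
    for t in model:
        if t.kind == kind:
            t.exhausted = True
    return model

generalize(model, "particle")
quantifier = ("any" if all(t.exhausted for t in model if t.kind == "particle")
              else "some")
print(quantifier, "particles emitted in radioactive decay damage the body")
```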
Information can be added to a model to represent a new property of existing
individuals. If you have established that certain individuals have one property, then you
can make a generalization that they satisfy another. You observe, for example, bees with
an unusually potent sting:
[s]
[s]
[s]
where "s" denotes a bee with a potent sting, and the set is exhaustively represented. You
conjecture that the cause of the sting is a certain mutation:
[m] [s]
[m] [s]
[m] [s]
Likewise, new relations can be added to hold between existing entities in a model.
For example, a model might represent the relations among, say, a finite set of viruses
and a finite set of symptoms. The semantically weakest case is as follows:
v   s
v   s
v → s
where there is one definite causal relation, signified by the arrow, but nothing is known
about the relations, positive or negative, between the remaining pairwise combinations
of viruses and symptoms. You can describe this model in the following terms:
At least one virus causes at least one of the symptoms.
By the addition of further causal relations the model may be transformed into the
following one:
v → s
v → s
v → s
You can describe a model of this sort as follows:
Each of the symptoms is caused by at least one of the viruses.
Hence, the effect is still equivalent to the linguistic operation of replacing an existential
quantifier ("at least one") in the previous description by a universal quantifier ("each").
The addition of a further causal link, however, yields a still stronger model.
[diagram: a single virus linked causally to each of the symptoms]
You can describe a model of this sort in the following terms:
At least one of the viruses causes each of the symptoms.
In the predicate calculus, the linguistic effect of the operation is now to promote an
existential quantifier from inside to outside the scope of a universal quantifier:

∀s ∃v (v causes s)  ⇒  ∃v ∀s (v causes s)
No such rule, however, appears to have been proposed by any current inductive
theory. The model theory has therefore led us to the discovery of a new form of
linguistic generalization.
The operation of adding information to models enables us to generalize from the
weakest possible model to the strongest possible one in which each of the viruses causes
each of the symptoms. Hence, the addition of information to models suffices for all
possible generalizations in those everyday domains that can be described by the
predicate calculus. It replaces the need for a battery of various linguistic operations.
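The promotion of the existential quantifier can be checked over a finite causal relation represented as a set of virus-symptom pairs (the names v1, s1, and so on are illustrative):

```python
# Adding causal links to a finite model strengthens the quantified description
# it satisfies: from "for every symptom some virus" to "some virus for every
# symptom".

viruses = {"v1", "v2", "v3"}
symptoms = {"s1", "s2", "s3"}

def every_symptom_has_a_cause(links):          # for all s, there exists v
    return all(any((v, s) in links for v in viruses) for s in symptoms)

def one_virus_causes_every_symptom(links):     # there exists v, for all s
    return any(all((v, s) in links for s in symptoms) for v in viruses)

weak   = {("v1", "s1"), ("v2", "s2"), ("v3", "s3")}
strong = weak | {("v1", "s2"), ("v1", "s3")}   # further causal links added

print(every_symptom_has_a_cause(weak), one_virus_causes_every_symptom(weak))      # True False
print(every_symptom_has_a_cause(strong), one_virus_causes_every_symptom(strong))  # True True
```

The second description entails the first but not conversely, which is why adding the links counts as a generalization.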
How can induction be constrained?
The burden of the argument so far is simple: induction is a search for a model that is
consistent with observation and background knowledge. Generalization calls for only
one operation, but the search is intractable because of the impossibility of examining all
of its possible effects. The way to cut the problem down to a tractable size is, not to
search blindly by trial and error, but to use constraints to guide the search (Newell &
Simon, 1972). Three constraints can be used in any domain and may be built into the
inductive mechanism itself: specificity, availability, and parsimony.
Specificity is a powerful constraint on induction. It is always helpful to frame the
most specific hypothesis consistent with observation and background knowledge, that is,
the hypothesis that admits the fewest possible instances of a concept. This constraint
is essential when you can observe only positive instances of a concept. For example, if
you encounter a patient infected with a new virus, and this individual has a fever, a sore
throat, and a rash, then the most specific hypothesis about the signs of the disease is:
fever ∩ sore throat ∩ rash
where ∩ denotes the intersection of sets. If you now encounter another patient with the
same viral infection, who has a fever and a rash, but no sore throat, you will realize that
your initial hypothesis was too specific. You can generalize it to one consistent with the
evidence:
fever ∩ rash
Suppose, however, that you had started off with the more general inclusive disjunction:
fever ∪ sore throat ∪ rash
where ∪ denotes the union of sets. Although this conjecture is consistent with the data,
it is too general, and so it would remain unaffected by your encounter with the second
patient. If the only genuine sign of the disease is the rash, then you would never discover
it from positive examples alone, because your hypothesis would accommodate all of
them. Hence, when you are trying to induce a concept from positive instances, you must
follow the specificity constraint. Your hypothesis may admit too few instances, but if so,
sooner or later, you will encounter a positive instance that will allow you to correct it.
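The specificity constraint can be sketched as an intersection rule over positive instances (the sign names follow the example above; the representation is mine):

```python
# The most-specific conjunctive hypothesis is the intersection of the signs
# observed so far; each new positive instance can only generalize it.

def update_specific(hypothesis, instance):
    """Intersect the current hypothesis with each positive instance."""
    return instance if hypothesis is None else hypothesis & instance

h = None
for patient in [{"fever", "sore throat", "rash"}, {"fever", "rash"}]:
    h = update_specific(h, patient)

print(h)   # the over-specific conjunct "sore throat" has been dropped
```

A learner who instead started from the disjunction of the signs would match every subsequent patient and so could never be corrected by positive instances alone.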
This principle has been proposed by Berwick (1986) in terms of what he calls the
"subset" principle, which he derives from a theorem in formal learning theory due to
Angluin (1978). In elucidating children's acquisition of syntax, phonology, and con-
cepts—domains in which they are likely to encounter primarily positive instances—
Berwick argues that the instances that are described by a current inductive hypothesis
should be as few as possible. If they are a proper subset of the actual set of instances,
then children can correct their inductive hypothesis from encounters with further
positive instances. But, if the current hypothesis embraces all the members of the actual
set and more, then it will be impossible for positive instances to refute the hypothesis.
What Angluin (1978) proved was that positive instances could be used to identify a
language in the limit, i.e. converge upon its grammar without the need for subsequent
modifications (see Gold, 1967), provided that the candidate hypotheses about the
grammar could be ordered so that each progressively more general hypothesis includes
items that are not included in its predecessor. The inductive system can then start with
the most specific hypothesis, and it will move to a more general one whenever it
encounters a positive instance that falls outside its current hypothesis.
Availability is another general constraint on induction. It arises from the machinery
that underlies the retrieval of pertinent knowledge. Some information comes to mind
more readily than other information, as we saw in the case of the specific induction
about the snake bite. The availability of information, as Tversky and Kahneman (1973)
have shown, can bias judgement of the likelihood of an event. It also underlies the
"mutability" of an event—the ease with which one can envisage a counterfactual
scenario in which the event does not occur (see Tversky & Kahneman, 1982; Kahneman
& Miller, 1986). Availability is a form of bias, but bias is what is needed to deal with
the intractable nature of induction.
Parsimony is a matter of fewer concepts in fewer combinations. It can be defined
only with respect to a given set of concepts and a system in which to combine them.
Hence, it is easily defined for the propositional calculus, and there are programs
guaranteed in principle to deliver maximally parsimonious descriptions of models within
this domain (see Johnson-Laird, 1990). What complicates parsimony is that the pre-
sumption of a conceptual system begs the question. There is unlikely to be any
procedure for determining absolute parsimony. Its role in induction therefore seems to
be limited to comparisons among alternative theories using the same concepts.
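Parsimony relative to a fixed conceptual system can be illustrated by brute force: among candidate propositional descriptions with the same models, prefer the one with the fewest literals. The candidates and the cost measure below are my own choices, not a reconstruction of the programs cited:

```python
# Parsimony defined relative to a given set of concepts: among descriptions
# with identical models, pick the one using the fewest literals.

from itertools import product

assignments = list(product([False, True], repeat=2))

candidates = {   # description -> evaluator over (A, B)
    "A": lambda a, b: a,
    "A & B": lambda a, b: a and b,
    "A | B": lambda a, b: a or b,
    "(A & B) | (A & ~B)": lambda a, b: (a and b) or (a and not b),
}

target = {v for v in assignments if candidates["(A & B) | (A & ~B)"](*v)}

def cost(desc):
    """Crude parsimony measure: count of literal occurrences."""
    return desc.count("A") + desc.count("B")

matching = [d for d, f in candidates.items()
            if {v for v in assignments if f(*v)} == target]
print(min(matching, key=cost))   # "A": the parsimonious equivalent
```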
The most important constraint on induction I have left until last for reasons that
will become clear. It is the use of existing knowledge. A rich theory of the domain will
cut down the number of possible inductions; it may also allow an individual to
generalize on the strength of only a single instance. This idea underlies so-called
"explanation-based learning" in which a program uses its background knowledge of the
domain to deduce why a particular instance is a member of a concept (see e.g. Dejong
& Mooney, 1986; Mitchell, Keller & Kedar-Cabelli, 1986). Another source of knowl-
edge is a helpful teacher. A teacher who cannot describe a concept may still be able to
arrange for a suitable sequence of instances to be presented to pupils. This pedagogical
technique cuts down the search space and enables limited inductive mechanisms to
acquire concepts (Winston, 1975). The constraints of theory are so important that they
often override the pure inductive process: one ignores counterexamples to the theory.
The German scientist and aphorist, Georg Lichtenberg (1742-1799), remarked: "One
should not take note of contradictory experiences until there are enough of them to
make constructing a new system worthwhile". The molecular biologist James Watson
has similarly observed that no good model ever accounts for all the facts because some
data are bound to be misleading if not plain wrong (cited in Crick, 1988, p. 60). This
methodological prescription appears to be observed automatically by young children
seeking to acquire knowledge. Karmiloff-Smith and Inhelder (1974/5) have observed
that children learning how to balance beams ignore counterexamples to their current
hypotheses. Such neglect of evidence implies that induction plays only a limited role in
the development of explanations. An explanation does not increase the semantic
information in the observations, but rather eliminates possibilities that only overlap with
those that the evidence eliminates. According to my earlier analysis, the process is
therefore not inductive, but creative.
The design of the human inductive system
Although studies in psychology and artificial intelligence have been revealing, no-one
has described a feasible program for human induction. What I want to consider finally
are some of the design characteristics that any plausible theory must embody. If nothing
else, these characteristics show why no existing algorithm is adequate for human
induction. The agenda is set by the theory of mental models and the underlying
subconcepts from which they are constructed. This theory implies that there are three
sources of concepts.
The first source is evolution. What must be genetically endowed for induction to be
possible are the following basic components:
(1) A set of subconcepts. These subconcepts include those for entities, properties,
and relations, that apply to the perceptual world, to bodily states and emotions, and to
mental domains including deontic and epistemic states (facts, possibilities, counterfactual
states, impossibilities). They are the ultimate components out of which all inductions
are constructed, and they are used in the construction and manipulation of mental
models. It is of little use to define one concept, such as woman, in terms of other
high-level concepts, such as adult, human, female. The concept must depend on
subconcepts that can be used to construct models of the world. What is needed is an
analysis of the satisfaction conditions of our most basic ideas and their interrelations.
These conditions enable us to envisage the world and in certain circumstances to verify
our imaginings.

TABLE 1. Some examples illustrating the ontological and epistemological
classification of concepts

The ontological            The epistemological dimension
dimension             Analytical concepts   Natural kinds   Artefacts
Entities
  Objects:            Triangle              Dog             Table
  Substances:         Space                 Water           Food
Properties            Straight              Alive           Expensive
Relations             Causes                Sees            Owns
(2) A set of methods for combining concepts. These methods include composition, i.e.
a method that allows one subconcept to call upon another, and recursion, i.e. a method
that allows a set of subconcepts to be used in a loop that is executed for a certain
number of times (see Boolos & Jeffrey, 1989). These combinations interrelate subcon-
cepts to form concepts, and they interrelate concepts to form new high-level concepts
or inductive conjectures.
(3) A set of inductive mechanisms. It is these mechanisms that make possible induction
of concepts and generalizations.
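The two methods of combination can be illustrated with invented spatial subconcepts: composition lets one subconcept call upon another, and recursion executes a subconcept a certain number of times:

```python
# Composition and recursion over primitive spatial subconcepts, building
# the concept "diagonally up and to the right" (names are illustrative).

def right(pos):          # primitive subconcept
    x, y = pos
    return (x + 1, y)

def up(pos):             # primitive subconcept
    x, y = pos
    return (x, y + 1)

def diagonal(pos):       # composition: one subconcept calls upon another
    return up(right(pos))

def repeat(f, n, pos):   # recursion: execute a subconcept n times in a loop
    return pos if n == 0 else repeat(f, n - 1, f(pos))

print(repeat(diagonal, 3, (0, 0)))   # (3, 3)
```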
The second source of concepts is knowledge by compilation. The process depends on
an inductive mechanism that assembles concepts (and their taxonomic interrelations)
out of the set of innate subconcepts and combinations. Verbal instruction alone is no use
here: there is no substitute for the construction of models of the world—its entities,
properties, and relations. Ultimately, the repeated construction of models, as I suggested
in the case of spatial relations, such as diagonally up and to the right, enables the relevant
concept to be compiled into subconcepts.
Concepts constructed from subconcepts are heterogeneous: some are analytic in
that they have necessary and sufficient conditions; others such as natural kinds and
artefacts are open-ended, prototypical, and depend on default values. Table 1 gives
examples of these three main sorts of concepts for entities, properties, and relations.
This heterogeneity has consequences for the mechanism that constructs new concepts.
Those concepts with necessary and sufficient conditions might be induced by a variant
of the method that builds decision trees (Quinlan, 1983), but this method runs into
difficulties with concepts that depend on prototypes. They might be acquired by a
program that constructs hierarchies of clusterings in which instances are grouped
together in ways that are not all or none (e.g. Pearl, 1986; Fisher, 1987; Gennari,
Langley & Fisher, 1990).
Neither decision-tree nor connectionist programs correspond precisely to the
required inductive mechanism. Its input, as I have argued, is a sequence of models, and
its output is a heterogeneous set of concepts. This heterogeneity suggests that the
mechanism builds up a hierarchy of defaults, exceptions, and necessary conditions. The
mechanism must also be able to acquire concepts of objects, properties, relations, and
quantification. Although there are proposals that are syntactically powerful enough to
cope with these demands, no satisfactory program for inducing the corresponding
concepts yet exists.
The third source of both concepts and conjectures is knowledge by composition. The
process depends on a mechanism that comes into play only after some high-level
concepts have been assembled out of subconcepts. Its results take the form of inductive
generalizations and ad hoc concepts, which it composes out of existing high-level
concepts. For example, you can acquire the ad hoc concept "araeostyle" from the
following definition:
An araeostyle building has equi-distant columns with a distance between them
of at least four times the diameter of the columns.
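Such composition can be sketched directly; the predicate below assembles "araeostyle" from simpler concepts, on the simplifying (and invented) assumption that column positions are given as centre coordinates along a line:

```python
# Knowledge by composition: an ad hoc concept built from existing concepts.
# Positions are column centres, so the clear distance between neighbouring
# columns is the centre-to-centre gap minus one diameter (a rough model).

def equidistant(gaps, tolerance=1e-9):
    """True if all gaps are (approximately) equal."""
    return all(abs(g - gaps[0]) <= tolerance for g in gaps)

def araeostyle(column_diameter, column_positions):
    """Equidistant columns at least four diameters apart."""
    gaps = [b - a for a, b in zip(column_positions, column_positions[1:])]
    spacing = [g - column_diameter for g in gaps]   # clear distance between columns
    return equidistant(spacing) and all(s >= 4 * column_diameter for s in spacing)

print(araeostyle(1.0, [0.0, 6.0, 12.0]))   # True: clear gaps of five diameters
```

The point of the sketch is only that the new concept is composed from concepts already available, without any fresh induction from instances.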
Many machine programs are static in that they construct new concepts out of a fixed
basic set that depends on the user's specification of the inductive problem (see e.g.
Hunt, Marin & Stone, 1966; Mitchell, 1977). The human inductive mechanism,
however, is evolutionary. It constructs new concepts from those that it has previously
constructed; and, unlike most existing programs, it can construct novel concepts in
order to frame an inductive hypothesis. It can also deploy compositional principles in
the construction of concepts such as araeostyle. Existing programs can induce some
compositional concepts using explicitly structured representations (see Anderson, 1975;
Power & Longuet-Higgins, 1978; Selfridge, 1986), but they cannot yet induce concepts
that depend on recursion. Although connectionist systems can acquire rudimentary
conceptual systems, and also rudimentary syntactic rules (see e.g. Hanson & Kegl,
1987), they are not presently able to learn sufficiently powerful concepts to emulate
human competence.
Knowledge by composition, as in the case of araeostyle, saves much time and
trouble, but it is superficial. The shift from novice to expert in any conceptual domain
appears to depend on knowledge by compilation. Only then can a concept be immedi-
ately used to construct models or to check that the concept is satisfied in a perceptual
model. The induction of generalizations similarly depends on the use of experience to
add information to models of the relevant domain.
The case for models
Theorists tend to think of induction as yielding linguistic generalizations (see e.g. the
contributions in Kodratoff & Michalski, 1990). Hence, a major question has been to
find the right language in which to represent concepts and conjectures. There has been
much debate amongst the proponents of different mental languages, such as semantic
networks, production systems, and versions of the predicate calculus. Yet, as I have
argued, to think of the results of induction as linguistic representations may be a vast
mistake. It may not do justice to human thinking. The purpose of induction is to make
sense of the world, either by enabling individuals to predict or to categorize more
efficiently or, better still, to understand phenomena. The mind builds models, and the
structure of models is the basis of human conceptions of the structure of the world. The
products of induction may therefore be models, either ones that simulate phenomena
(descriptive inductions) or else ones constructed from more elemental subconcepts
(explanatory inductions). After such models have been induced, they can, if necessary,
be used to formulate verbal generalizations.
One advantage of models is that the inductive mechanism needs, in principle, only
one operation of generalization: the addition of information to models. This operation
is equivalent to quite diverse effects on linguistic hypotheses. When it leads to the
elimination of a model, it is equivalent to adding the negation of the description of that
model to the current verbal hypothesis. It can have the effect of a so-called universal
generalization, which introduces a universal quantifier in place of an existential. And it
can have the effect of promoting an existential quantifier from inside to outside the
scope of a universal quantifier.
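The first of these effects can be sketched in a few lines of code. The sketch below is only an illustration of the idea, under the simplifying assumption that the models of a small Boolean domain can be listed exhaustively as assignments of truth values; the names are hypothetical, and this is not the paper's program.

```python
# Illustrative sketch: models of a Boolean domain as assignments of
# truth values, and a single operation of generalization that adds
# information, eliminating any model inconsistent with it.

def add_information(models, constraint):
    """Add information to a set of models: any model inconsistent with
    the new information is eliminated as false."""
    return [m for m in models if constraint(m)]

# All four models of an entity with two Boolean properties.
models = [{"raven": r, "black": b} for r in (True, False) for b in (True, False)]

# Eliminating the model of a non-black raven is equivalent to adding the
# negation of its description, i.e. the verbal generalization
# "all ravens are black".
models = add_information(models, lambda m: not (m["raven"] and not m["black"]))

print(len(models))                                # 3
print({"raven": True, "black": False} in models)  # False
```

In a finite Boolean domain this one operation suffices for every generalization, because any hypothesis is equivalent to ruling out some subset of the possible models.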
Another advantage of models is that they embody knowledge in a way that naturally
constrains inductive search. They maintain semantic information, they ensure internal
consistency, and they are parsimonious because each entity is represented only once.
They can also focus attention on the critical parts of the phenomena. An instructive
example is provided by Novak's (1977) program for solving textbook problems in
physics. It initially represents problems in a semantic network, but this representation
contains too much information, and so the program extracts from it a model of the
situation that is used to identify the points where forces have to balance.
One final advantage of models is that they elucidate the clues about induction that
have emerged from the psychological laboratory. Because of the limited processing
capacity of working memory, models represent only certain information explicitly and
the rest implicitly. One consequence is that people fall into error, and the evidence
shows they make the same sorts of error in both deduction and induction. Thus, in
deduction, they concentrate on what is explicit in their models, and so, for example,
they often fail to make certain deductions. In induction, they likewise focus on what is
explicit in their models, and so seldom seek anything other than evidence that might
corroborate their inductive conjectures. They eschew negative instances, and encounter
them only when they arise indirectly as a result of following up alternative hypotheses
(see e.g. Bruner et al, 1956; Wason, 1960; Klayman & Ha, 1987). In deduction, they
are markedly influenced by the way in which a problem is framed: what a model
represents explicitly depends on what is explicitly asserted, and so individuals often have
difficulty in grasping that two assertions have the same truth conditions, e.g. "Only the
bakers are athletes" and "All the athletes are bakers" (see Johnson-Laird & Byrne,
1991). In induction, there are equally marked effects of how a problem is framed (see
Hogarth, 1982). In both deduction and induction, disjunctive alternatives cause
difficulties. They call for more than one model to be constructed, whereas reasoners are
much better able to cope with a single model and thus have a natural preference to work
with conjunctions. Disjunctive information even appears to block straightforward decisions. For example, many people who choose a vacation if they pass an exam, or if they
fail it, do not choose it when the outcome of the exam is unknown (see Shafir &
Tversky, 1991; Tversky & Shafir, 1991). The very preference of a "common element"
analysis of concepts is just another manifestation of the same phenomenon. Similarly, a
single model of a process underlies the natural tendency to overgeneralize. Once children
learn, for example, how to form the regular past tense, their tendency to generate
"go-ed" supplants their previous grasp of "went" as a separate lexical item. Finally,
knowledge appears to play exactly the same part in both deduction and induction. It
biases the process to yield more credible conclusions.
Conclusions
The principal contrast in induction is between specific and general inductions. Specific
inductions are part of comprehension: you flesh out your model of a discourse or the
world with additional information that is automatically provided by your general
knowledge. General inductions yield new models, which can also enrich your conceptual
repertoire. The human inductive mechanism that carries out these tasks appears to
embody five design characteristics:
(1) Its ultimate constituents are a set of subconcepts and conceptual combina-
tions that are powerful enough to construct any mental model.
(2) It can induce heterogeneous concepts of objects, properties, and relations
from knowledge by acquaintance. The repeated construction of models leads
to the compilation of concepts into subconcepts, including necessary subcon-
cepts and those that specify default values.
(3) It can construct novel concepts from knowledge by composition, assembling
them according to principles that generate recursively embedded structures.
(4) It can construct a model of a domain either ab initio or by adding information
to an existing set of models in accordance with evidence.
(5) It is guided by constraints. It takes into account available knowledge; it
formulates the most specific generalizations consistent with the data and
background knowledge; and, perhaps, it seeks the simplest possible conjecture
consistent with the evidence.
The price of tractable induction is imperfection. We often concentrate on the
triumphs of induction and the minor imperfections that yield clues to the nature of its
mechanism. We overlook its catastrophes—the fads of pseudo-science, the superstitions
of daily life, and the disastrous generalizations that lead to such events as the sinking of
the Herald of Free Enterprise. The origin of these errors is in the human inductive
mechanism: its heuristics are what students of other cultures refer to as "magical
thinking". And the pressure of working memory capacity often puts too much emphasis
on what is explicitly represented in a model. Theorists, this theory argues, are bound to
focus on what is explicitly represented in their models. The reader is invited to reflect
on the recursive consequences of this claim for the present theory.
Acknowledgements
I am grateful to friends and colleagues who over the years have worked with me on the
theory of mental models: Tony Anderson, Bruno Bara, Monica Bucciarelli, Alan
Garnham, Vittorio Girotto, Mark Keane, Maria Sonino Legrenzi, Paolo Legrenzi, Jane
Oakhill, Mick Power, Patrizia Tabossi, Walter Schaeken. I am especially grateful to
Ruth M. J. Byrne who helped in the formulation of the theory as it applies to deduction
(see Johnson-Laird & Byrne, 1991). Part of the material in this paper is based on part
of one of three MacEachran Memorial Lectures, which I delivered in October, 1990, in
the Department of Psychology at the University of Alberta, Edmonton (see Johnson-
Laird, 1992). Finally, my thanks to the editors of RISESST for soliciting this article.
Note
1. The most specific description of a set is one that admits the fewest possible instances. The most specific
proposition about a phenomenon is the one that rules out as false the fewest possible states of affairs. The
difference reflects the difference between sets (which have conditions of satisfaction) and propositions
(which are true or false).
References
ANDERSON, J.R. (1975) Computer simulation of a language acquisition system—a first report. In SOLSO, R.L. (Ed.) Information Processing and Cognition (Hillsdale, NJ, Lawrence Erlbaum Associates).
ANGLUIN, D. (1978) Inductive inference of formal languages from positive data. Information and Control, 45, pp. 117-135.
BARA, B.G., CARASSA, A.G. & GEMINIANI, G.C. (1984) Inference processes in everyday reasoning. In PLANDER, D. (Ed.) Artificial Intelligence and Information-Control Systems of Robots (Amsterdam, Elsevier).
BAR-HILLEL, Y. & CARNAP, R. (1964) An outline of a theory of semantic information. In BAR-HILLEL, Y.
Language and Information (Reading, MA, Addison-Wesley).
BARSALOU, L.W. (1987) The instability of graded structure: implications for the nature of concepts. In NEISSER, U. (Ed.) Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization (Cambridge, Cambridge University Press).
BERWICK, R.C. (1986) Learning from positive-only examples: The Subset principle and three case studies. In
MICHALSKI, R.S., CARBONELL, J.G. & MITCHELL, T.M. (eds) Machine Learning: An Artificial Intelligence
Approach, Vol. II (Los Altos, CA, Morgan Kaufmann).
BOOLOS, G. & JEFFREY, R. (1989) Computability and Logic. Third Edition (Cambridge, Cambridge University Press).
BRUNER, J.S., GOODNOW, J.J. & AUSTIN, G.A. (1956) A Study of Thinking (New York, Wiley).
COLLINS, A.M. & MICHALSKI, R. (1989) The logic of plausible reasoning: A core theory. Cognitive Science, 13,
pp. 1-49.
CRICK, F. (1988) What Mad Pursuit (New York, Basic Books).
DEJONG, G.F. & MOONEY, R. (1986) Explanation-based learning: An alternative view. Machine Learning, 1,
pp. 145-176.
DE SAUSSURE, F. (1960) Course in General Linguistics (London, Peter Owen).
FISHER, D. (1987) Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2,
pp. 139-172.
FISHER, S.C. (1916) The process of generalizing abstraction; and its product, the general concept. Psychological
Monographs, 21, No. 2, whole number 90.
FODOR, J.A. (1980) Fixation of belief and concept acquisition. In PIATTELLI-PALMARINI, M. (Ed.), Language and Learning: The Debate between Jean Piaget and Noam Chomsky (Cambridge, MA, Harvard University Press).
FREUD, S. (1925) Negation. Complete Psychological Works of Sigmund Freud, Vol. 19: The Ego and the Id and
Other Works, (trans J. Strachey) (London, Hogarth).
GARNHAM, A. (1987) Mental Models as Representations of Discourse and Text (Chichester, Ellis Horwood).
GENNARI, J.H., LANGLEY, P. & FISHER, D. (1990) Models of incremental concept formation. In CARBONELL,
J.G. (Ed.), Machine Learning: Paradigms and Methods. (Cambridge, MA, MIT Press). Reprinted from
Artificial Intelligence, 40, 1989.
GOLD, E.M. (1967) Language identification in the limit. Information and Control, 10, pp. 447-474.
HAMPTON, J.A. (1979) Polymorphous concepts in semantic memory. Journal of Verbal Learning and Verbal
Behavior, 18, pp. 441-461.
HANSON, S.J. & BAUER, M. (1989) Conceptual clustering, categorization, and polymorphy. Machine Learning, 3, pp. 343-372.
HANSON, S.J. & KEGL, J. (1987) PARSNIP: A connectionist network that learns natural language grammar from exposure to natural language sentences. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, pp. 106-119 (Hillsdale, NJ, Lawrence Erlbaum Associates).
HARMAN, G. (1973) Thought (Princeton, NJ, Princeton University Press).
HINTON, G.E. (1986) Learning distributed representations of concepts. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society (Hillsdale, NJ, Lawrence Erlbaum Associates).
HOGARTH, R. (Ed.) (1982) New Directions for Methodology of Social and Behavioral Science, No. 11: Question Framing and Response Consistency (San Francisco, Jossey-Bass).
HULL, C.L. (1920) Quantitative aspects of the evolution of concepts. Psychological Monographs, 28, whole number 123.
HUNT, E.B., MARIN, J. & STONE, P.T. (1966) Experiments in Induction (New York, Academic Press).
JOHNSON-LAIRD, P.N. (1983) Mental Models. (Cambridge, MA, Harvard University Press).
JOHNSON-LAIRD, P.N. (1990) Propositional reasoning: an algorithm for deriving parsimonious conclusions. Unpublished MS, Department of Psychology, Princeton University.
JOHNSON-LAIRD, P.N. (1992) Human and Machine Thinking (Hillsdale, NJ, Lawrence Erlbaum Associates).
JOHNSON-LAIRD, P.N. & ANDERSON, T. (1989) Common-sense inference. Unpublished MS, Princeton University, New Jersey.
JOHNSON-LAIRD, P.N. & BYRNE, R.M.J. (1991) Deduction (Hillsdale, NJ, Lawrence Erlbaum Associates).
KAHNEMAN, D. & MILLER, D.T. (1986) Norm theory: Comparing reality to its alternatives. Psychological Review, 93, pp. 136-153.
KARMILOFF-SMITH, A. & INHELDER, B. (1974/5) 'If you want to get ahead, get a theory'. Cognition, 3,
pp. 195-212.
KLAYMAN, J. & HA, Y.-W. (1987) Confirmation, disconfirmation and information in hypothesis testing. Psychological Review, 94, pp. 211-228.
KODRATOFF, Y. & MICHALSKI, R.S. (1990) Machine Learning: An Artificial Intelligence Approach. Vol. III (San Mateo, CA, Morgan Kaufmann).
LANGLEY, P., SIMON, H.A., BRADSHAW, G.L. & ZYTKOW, J.M. (1987) Scientific Discovery (Cambridge, MA, MIT Press).
MICHALSKI, R.S. (1983) A theory and methodology of inductive learning. In MICHALSKI, R.S., CARBONELL, J.G. & MITCHELL, T.M. (eds) Machine Learning: An Artificial Intelligence Approach (Los Altos, CA, Morgan Kaufmann).
MILL, J.S. (1843/1950) A System of Logic, Ratiocinative and Inductive (Toronto, University of Toronto Press).
MITCHELL, T.M. (1977) Version spaces: A candidate elimination approach to rule learning. Fifth International Joint Conference on Artificial Intelligence (Cambridge, MA), pp. 305-310.
MITCHELL, T.M., KELLER, R. & KEDAR-CABELLI, S. (1986) Explanation-based generalization: A unifying view. Machine Learning, 1, pp. 47-80.
NEWELL, A. (1990) Unified Theoriesof Cognition (Cambridge, MA, Harvard University Press).
NEWELL, A. & SIMON, H.A. (1972) Human Problem Solving (Englewood Cliffs, Prentice-Hall).
NOVAK, G.S. (1977) Representations of knowledge in a program for solving physics problems. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 286-291.
OSHERSON, D.N., SMITH, E.E. & SHAFIR, E. (1986) Some origins of belief. Cognition, 24, pp. 197-224.
PEARL, J. (1986) Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29, pp. 241-288.
POLYA, G. (1957) How to Solve It. Second edition (New York, Doubleday).
POPPER, K.R. (1972) Objective Knowledge (Oxford, Clarendon).
POWER, R.J.D. & LONGUET-HIGGINS, H.C. (1978) Learning to count: A computational model of language acquisition. Proceedings of the Royal Society (London) B, 200, pp. 391-417.
QUINLAN, J.R. (1983) Learning efficient classification procedures and their application to chess endgames. In MICHALSKI, R.S., CARBONELL, J.G. & MITCHELL, T.M. (eds), Machine Learning: An Artificial Intelligence Approach (Los Altos, CA, Morgan Kaufmann).
RIPS, L.J., SHOBEN, E.J. & SMITH, E.E. (1973) Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12, pp. 1-20.
ROSCH, E. (1973) Natural categories. Cognitive Psychology, 4, pp. 328-350.
SELFRIDGE, M. (1986) A computer model of child language learning. Artificial Intelligence, 29, pp. 171-216.
SHAFIR, E. & TVERSKY, A. (1991) Thinking through uncertainty: Nonconsequential reasoning and choice. Unpublished MS, Department of Psychology, Princeton University.
SMITH, E.E. & MEDIN, D.L. (1981) Categories and Concepts (Cambridge, MA, Harvard University Press).
THAGARD, P. (1988) Computational Philosophy of Science (Cambridge, MA, MIT Press/Bradford Books).
THAGARD, P. & HOLYOAK, K.J. (1985) Discovering the wave theory of sound. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (Los Altos, CA, Morgan Kaufmann).
TVERSKY, A. & KAHNEMAN, D. (1973) Availability: a heuristic for judging frequency and probability. Cognitive Psychology, 4, pp. 207-232.
TVERSKY, A. & KAHNEMAN, D. (1982) The simulation heuristic. In KAHNEMAN, D., SLOVIC, P. & TVERSKY, A. (eds), Judgment under Uncertainty: Heuristics and Biases (Cambridge, Cambridge University Press).
TVERSKY, A. & SHAFIR, E. (1991) The disjunction effect in choice under uncertainty. Unpublished MS, Department of Psychology, Stanford University.
WASON, P.C. (1960) On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of
Experimental Psychology, 12, pp. 129-140.
WINSTON, P.H. (1975) Learning structural descriptions from examples. In WINSTON, P.H. (Ed.), The
Psychology of Computer Vision (New York, McGraw-Hill).
WITTGENSTEIN, L. (1953) Philosophical Investigations (New York, Macmillan).