The document discusses connections between machine learning and the philosophy of science. It argues that while the two disciplines are distinct, they admit a dynamic interaction in which ideas are exchanged to mutual benefit. Examples of fruitful interaction include how automated scientific discovery bears on the inductivism versus falsificationism debate in the philosophy of science, and how philosophical work on Bayesian epistemology and causality has influenced machine learning. The document suggests that evidence integration may be a locus of future interaction between the two fields.
Mixed-methods basics: Systematic, integrated mixed methods and textbooks, NVIVO (Wendy Olsen)
I define mixed methods and show that systematic mixed methods can be well organised, with transparent data coding and case-wise data held carefully for hypothesis testing. I list the relevant textbooks. I challenge the schism idea that qualitative methods are intrinsically opposed to what is usually done with quantitative methods. I show how an integrated approach can be begun, giving examples. Suitable for professional researchers, those running focus groups, and those who want quantitative data to provide background for their forthcoming qualitative research.
This document discusses cladistic methodology, which aims to estimate evolutionary history from similarities and differences among units of evolution. It addresses several challenges in doing so, including defining units of comparison, detecting convergence, and measuring evolutionary divergence. The document emphasizes applying scientific methodology rigorously, distinguishing empirical data from hypotheses and deduced implications. While mathematics can clarify hypotheses, observations are still needed. Heuristic techniques may be used provisionally if full logical implications are unclear, but must be justified as reasonable approximations.
MIX methode - Building better theory by bridging the quantitative-qualitative ... (politeknik NSC Surabaya)
This document discusses the benefits of using qualitative research methods to build theory. It begins by distinguishing between quantitative and qualitative research paradigms in terms of their underlying philosophies, goals, and methods. The key difference is that quantitative research aims for replication and theory testing, while qualitative research aims to understand social phenomena from the perspectives of participants to develop new theories.
The document then provides an overview of qualitative research methods for data collection and analysis, focusing on grounded theory building. Grounded theory allows researchers to generate a detailed understanding of a phenomenon and develop a logically compelling analysis that identifies constructs and relationships to advance theory. The document argues that combining qualitative and quantitative methods can fully develop understanding and refine theories of organizational phenomena.
Epistemological Awareness, Instantiation of Methods, and Uninformed Methodol... (Abdullah Saleem)
This article explores issues of epistemological awareness and the instantiation of methods in qualitative research. It argues that researchers should make their epistemological stances, methodological decision points, and logical arguments transparent. The authors discuss conceptualizing epistemological awareness through either a series of decision junctures or a spatial perspective. They illustrate current practices around epistemological awareness and methods instantiation by analyzing education research studies from 2006. The authors adopt a hybrid theoretical perspective using internalism and constructionism to structure their arguments around these issues.
This document analyzes a single student learning episode using two theoretical lenses: the instrumental genesis perspective and the onto-semiotic approach. The instrumental genesis perspective focuses on how students develop techniques for using tools or artifacts to solve mathematical tasks, and the relationships between thinking and gestures. The onto-semiotic approach views mathematical knowledge and learning as involving systems of practices within social and institutional contexts. Analyzing the same episode from both perspectives provides complementary insights and a richer understanding of the phenomena, while also helping to identify the strengths and limitations of each theoretical approach. Networking the two theories in this way contributes to theoretical development in mathematics education.
The document discusses the debate around whether theory or empiricism should come first in cross-cultural analysis research. Some argue for a deductive, theory-first approach where hypotheses are derived from existing theories. Others argue for an inductive, empiricism-first approach where patterns in the data shape theories. The author examines examples of studies taking both approaches and ultimately argues that theory should precede empiricism to provide a framework for properly analyzing and interpreting empirical data and accounting for confounding variables. Starting with empiricism risks bias and manipulation of statistics to support predetermined conclusions.
This document discusses the differences between theoretical frameworks and conceptual frameworks in research. It begins by providing examples of three students conducting research on street children but approaching it from different disciplinary perspectives of sociology, psychology, and education, and thus having different theoretical/conceptual frameworks.
It then defines key terms like theory, concept, theoretical framework, and conceptual framework. It argues that a theoretical framework typically guides deductive research through existing theories and literature, while a conceptual framework may emerge through inductive research to develop a conceptual model from the data. Examples are provided to illustrate the distinction between theoretical and conceptual frameworks.
Applying Machine Learning to Agricultural Data (butest)
This document discusses applying machine learning techniques to agricultural data. It describes a software tool called WEKA that allows experimenting with different machine learning algorithms on real-world datasets. As a case study, the document examines using machine learning to infer rules for culling less productive cows from dairy herd data. Several machine learning methods were tested on the data and produced encouraging results for using machine learning to help solve agricultural problems.
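The culling case study is only summarized above, so the following is a rough, hypothetical sketch of the kind of rule-induction workflow it describes: a shallow decision tree is trained on per-cow records and its branches are printed as human-readable rules. The original work used WEKA; scikit-learn stands in for it here, and the file name and column names are invented.

```python
# Hypothetical sketch of rule induction on dairy-herd records (the original
# study used WEKA; scikit-learn is used here as a stand-in, and the file and
# column names are invented for illustration).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

herd = pd.read_csv("dairy_herd.csv")                 # hypothetical per-cow records
features = ["age_years", "lactation_number", "milk_volume", "fat_percent"]
X, y = herd[features], herd["farmer_decision"]       # e.g. "keep" vs "cull"

# A shallow tree keeps the induced culling rules short and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))
```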
Comprehensive inductive and deductive research approaches are explained. Topics include the research paradigm (ontology, epistemology, and methodology), theory and its characteristics, research design procedures, research objective statements, operationalization and questionnaire design, different research types (quantitative, qualitative, experimental, phenomenological, descriptive, etc.), and research quality (validity, reliability, generalizability, sampling, etc.), with pictorial illustrations that make the study of business research simpler. These slides are suitable for Bachelor's, Master's, and PhD students.
This document summarizes a report about revisiting general theory in historical sociology. It defines general theories as postulates about causal agents and causal mechanisms. It discusses five general theories that have guided or could guide historical sociology: functionalist theory, rational choice theory, power theory, neo-Darwinian theory, and cultural theory. These theories differ in their proposed causal agents and causal mechanisms. The report also discusses how general theories can contribute to empirical research by helping derive hypotheses, integrate findings, and explain outcomes.
This document discusses two notions of scientific justification: deontological justification and alethic justification. Deontological justification assesses whether scientific claims follow proper scientific standards and methods, while alethic justification evaluates claims based on their likelihood of being true independent of scientific standards. The document argues that intrascientific justification employs the deontological notion, assessing claims based on scientific reasons and evidence, while philosophical arguments about scientific realism rely on the alethic notion by considering broader assumptions about truth and our epistemic situation. These two notions of justification underlie the different approaches of scientific and philosophical assessments of science.
This document provides an overview of theoretical perspectives and methodologies used in learning design research. It discusses how researchers come from a variety of disciplines including education, computer science, psychology, and more. Common theoretical perspectives discussed include sociocultural theories like cultural historical activity theory, communities of practice, and actor network theory. Methodologies used include qualitative approaches like ethnography, case studies, and action research as well as quantitative content analysis and evaluation. The relationship between theories, methods, and different epistemological stances is also examined.
The paper holds that an applied theory such as the Theory of Consciousness can be constructed only within the limits of a special meta-theory (or general theory) that takes into account the agency of the informational factor and has all the necessary tools of study (method of study, system of models, system of proofs, conservation laws, etc.) corresponding to the nature of the object of study. The currently dominant meta-theory, the Modern Materialistic (Physical) Picture of the World, is not appropriate for constructing the required applied theory of consciousness, nor many other applied theories, within its limits.
Grounded theory is a systematic qualitative research methodology that uses inductive reasoning to generate new theories about a phenomenon. Rather than starting with a hypothesis, grounded theory involves collecting data through methods like interviews and observations, then coding and analyzing the data to discover concepts and relationships that help explain the process or interaction being studied. The theory is "grounded" in the data. Grounded theory was developed in the 1960s by sociologists Glaser and Strauss and involves open, selective, and theoretical coding to iteratively build theories directly supported by the data. It is useful for exploring new domains and leveraging human tendencies to interpret and theorize.
This document provides an overview of causal modelling. It discusses different types of models, including observational, experimental, and quasi-experimental models. Quantitative and qualitative social models are also examined. The document explores views of models as representations, fictional entities, epistemic objects, and maps. It analyzes associational versus causal models and the hypothetico-deductive methodology. Issues of model validity and establishing causal claims through multiple lines of evidence are also covered. The discussion of causal modelling concludes with an announcement of a follow-up workshop on evidence in the social sciences.
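The contrast between associational and causal models that the document covers can be made concrete with a small simulation (my own illustration, not drawn from the workshop material): a shared cause produces a strong correlation between two variables that have no causal connection to one another.

```python
# Illustration (not from the document): a common cause Z induces an
# association between X and Y even though neither causes the other, which is
# why an associational model alone cannot license a causal claim.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)             # confounder
x = z + rng.normal(size=100_000)         # X depends on Z only
y = z + rng.normal(size=100_000)         # Y depends on Z only

print(np.corrcoef(x, y)[0, 1])           # about 0.5, despite no X -> Y link
print(np.corrcoef(x - z, y - z)[0, 1])   # about 0 once the confounder is removed
```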
In this paper we attempt to correlate text sequences that share common topics, using them as semantic clues. We propose a two-step method for asynchronous text mining. Step one checks for common topics in the sequences and isolates them together with their timestamps. Step two takes a topic and attempts to assign a timestamp to the text document. After multiple repetitions of step two, an optimal result can be obtained.
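The abstract leaves the method underspecified, so the sketch below only illustrates the general shape of the two steps under strong simplifying assumptions: "topics" are approximated by keywords shared across sequences, and a document's timestamp is estimated as the average of the timestamps at which its topic words occur. The example sequences are invented.

```python
# Simplified sketch of the two-step idea; the paper's actual topic model and
# estimator are not specified in the summary, so shared keywords and a plain
# average stand in for them here.
from collections import defaultdict

sequences = [  # hypothetical (timestamp, text) streams
    [(1, "election results announced"), (5, "storm hits coast")],
    [(2, "votes counted after election"), (6, "coastal storm damage")],
]

# Step one: isolate topics common to several sequences, with their timestamps.
topic_times = defaultdict(list)
for seq in sequences:
    for ts, text in seq:
        for word in set(text.split()):
            topic_times[word].append(ts)
common = {w: times for w, times in topic_times.items() if len(times) > 1}

# Step two: estimate a timestamp for a new document from its topic words.
def estimate_timestamp(doc):
    hits = [t for w in set(doc.split()) if w in common for t in common[w]]
    return sum(hits) / len(hits) if hits else None

print(estimate_timestamp("early election projections"))  # 1.5 on this toy data
```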
Case-Based Reasoning for Explaining Probabilistic Machine Learning (ijcsit)
This document summarizes a framework for explaining predictions from probabilistic machine learning models using case-based reasoning. The framework consists of two parts: 1) Defining a similarity metric between cases based on how interchangeable their inclusion is in the probability model. 2) Estimating prediction error for a new case by taking the average error of similar past cases. It then applies this framework to explain predictions from a linear regression model for energy performance of households. The document discusses related work on case-based explanation and introduces statistical concepts like J-divergence used in defining the similarity metric.
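The error-estimation part of the framework is simple enough to sketch. In the hypothetical example below (synthetic data, not the households dataset from the paper), a linear regression is fitted, and the expected error for a new case is taken as the average absolute error the model made on its most similar past cases; plain feature-space distance stands in for the paper's model-based, J-divergence-derived similarity metric.

```python
# Sketch of the case-based error estimate under stated assumptions: Euclidean
# distance replaces the paper's model-based similarity, and the data are
# synthetic rather than the household energy dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                         # stand-in case features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=200)

model = LinearRegression().fit(X, y)
residuals = np.abs(y - model.predict(X))              # past per-case errors

def explain_error(x_new, k=10):
    """Average error over the k most similar past cases."""
    dists = np.linalg.norm(X - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return residuals[nearest].mean()

print(explain_error(rng.normal(size=3)))
```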
This document discusses methodological issues in grounded theory research. It identifies four key issues: 1) sampling, where theoretical sampling is distinguished from purposeful sampling; 2) creativity and reflexivity in developing conceptually dense theory; 3) the use and timing of literature reviews; and 4) precision in avoiding method slurring and ensuring theoretical coding occurs. The author recommends that researchers using grounded theory consider these issues to increase methodological rigor and the credibility of findings.
Grounded theory is a qualitative research method that uses a systematic process of data collection and inductive analysis to develop a theory about a phenomenon. The key aspects of grounded theory are that data collection and analysis occur simultaneously to allow codes, concepts, and categories to emerge from the data rather than testing a predetermined hypothesis. Grounded theory was developed in the 1960s by sociologists Glaser and Strauss and focuses on uncovering social processes through exploring relationships and behaviors. It has since evolved, with differing approaches emerging between Glaser and Strauss.
Grounded theory is a qualitative research methodology that aims to build theory from data. It involves collecting and analyzing data to develop concepts and build relationships between concepts to form theories. The grounded theory process involves initial coding of data, focused coding to synthesize codes into categories, theoretical sampling to refine categories, and memo writing to develop theoretical concepts. Studies using grounded theory are evaluated based on their focus, purpose, methodology, sampling strategy, data analysis process, theoretical findings, and conclusions.
Grounded Theory is a method of developing theory from data where the researcher analyzes text through open, axial, and selective coding to categorize concepts and discover relationships between them. Open coding involves identifying phenomena in the data, axial coding relates codes through categories, and selective coding chooses a core category to relate all other categories to in order to develop a central storyline. Memos are notes written by the researcher to document the analytical process.
Grounded theory is a qualitative research method that aims to develop theories inductively from data. It begins with data collection and analysis to allow concepts and theories to emerge from the data rather than testing a predetermined hypothesis. Grounded theory was developed in the 1960s by sociologists Glaser and Strauss and has since split into different paradigms including Straussian, Glaserian, and Constructivist approaches. The key aspects of grounded theory include coding data through open, axial, and selective coding to develop categories and concepts into a theoretical framework or model.
This document provides an overview of research methods in entrepreneurship, specifically focusing on grounded theory methodology. It discusses how grounded theory is well-suited for areas with little existing empirical validation or conflicting perspectives. The document also reviews how grounded theory has been applied in entrepreneurship research and provides guidance on key aspects of the grounded theory research process such as defining the research problem, data collection and analysis, and assessing quality.
Theory building from case studies provides opportunities to develop new theories from patterns across cases, but also poses challenges for writing publishable manuscripts. Case studies allow exploration of "how" and "why" questions in new areas where no existing theory applies. However, researchers must justify why an inductive case-based approach is necessary rather than testing existing theories. Selection of representative yet diverse cases, collection of rich qualitative data, balancing theory development with evidence, and clearly presenting theoretical arguments can help address common issues in case-based theory building.
Ontologies are being used to organize information in many domains, such as artificial intelligence, information science, the semantic web, and library science. Ontologies of the same entity that carry different information can be merged to create richer knowledge of that entity. Ontologies today power more accurate search and retrieval in websites such as Wikipedia. As we move towards Web 3.0, also termed the semantic web, ontologies will play an even more important role.
Ontologies are represented in various forms such as RDF, RDFS, XML, and OWL. Querying ontologies can yield basic information about an entity. This paper proposes an automated method for ontology creation, using concepts from NLP (Natural Language Processing), Information Retrieval, and Machine Learning. Concepts drawn from these domains help in designing more accurate ontologies, represented here in the XML format. The paper uses document classification (classification algorithms assign labels to documents), document similarity (to cluster documents similar to the input document), and summarization (to shorten the text while keeping the terms essential to building the ontology). The module is constructed using the Python programming language and NLTK (Natural Language Toolkit). The ontologies created in XML convey to a lay person the definitions of the important terms and their lexical relationships.
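Only the final assembly step lends itself to a short sketch: given a label from document classification and key terms from summarization, a small XML ontology can be emitted with the standard library. The element and attribute names below are invented for illustration; the classification, similarity, and summarization stages described in the abstract are assumed to have run upstream.

```python
# Minimal sketch of emitting an XML ontology from precomputed results.  The
# `label` and `key_terms` values are placeholders for the outputs of the
# classification and summarization stages; element names are invented.
import xml.etree.ElementTree as ET

label = "machine_learning"                            # from document classification
key_terms = {                                         # from summarization
    "classifier": "algorithm that assigns labels to documents",
    "ontology": "formal representation of terms and their relations",
}

root = ET.Element("ontology", attrib={"topic": label})
for term, definition in key_terms.items():
    concept = ET.SubElement(root, "concept", attrib={"name": term})
    ET.SubElement(concept, "definition").text = definition

print(ET.tostring(root, encoding="unicode"))
```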
Causality is a central notion in the sciences. It is at the core of a number of epistemic practices such as explanation, prediction, or reasoning. The recognition of a plurality of practices calls, in turn, for a pluralistic approach to causality. In the ‘mosaic’ approach, as developed by Illari and Russo (2014), we need to select the causal account that best fits the practice at hand, and in the specific context. For instance, the concept of (causal) mechanism helps with explanatory practices in fields such as biology or neuroscience. Or, the concept of (causal) process helps with tracing ‘world-line’ trajectories in physics contexts or in social science. While no single notion of causality can simultaneously meet the requirements for a good explanation, prediction, or reasoning across different contexts and practices, a pluralistic approach towards the epistemology of causality seems to be the most plausible and attractive solution.
A reflection on modeling and the nature of knowledge (Jorge Zazueta)
This document discusses modeling as a method for generating scientific knowledge. It defines key criteria for knowledge to be considered scientific, including being falsifiable, generalizable, and the method being replicable and rationally linked to the claim. Modeling is proposed as an explicit method that meets these criteria when assumptions are clearly defined. While not all domains are scientifically knowable, modeling can be applied to study objects across various fields. The framework is presented as pragmatic rather than ideological and open to challenges from different perspectives like feminism.
This document discusses the demarcation of science from pseudoscience and the criterion of falsifiability. It explores how theoretical sciences like cosmology and theoretical physics deal with phenomena that are unobservable and difficult to falsify. While mathematics and theoretical constructs are useful for developing scientific understanding, overreliance on interpretation of data without direct observation can compromise objectivity and falsifiability. Determining what constitutes science versus pseudoscience or non-science is a complex problem with no definitive answers.
This document discusses mixed methods research. It provides an overview of why researchers use mixed methods, addressing criticisms of combining qualitative and quantitative research. It also challenges the distinction between these paradigms by analyzing seven common assumptions. Considerations for mixed methods designs include the timing, weighting, and mixing of qualitative and quantitative data. Key mixed methods designs are triangulation, embedded, explanatory, and exploratory approaches. Practical issues like research politics, costs, skills, and team organization are also covered.
Sujay Inductive, nomothetic approaches and grounded theory FINAL FINAL FINAL ... (Sujay Rao Mandavilli)
This document discusses inductive and deductive approaches to theory building in social sciences research. It recommends using both nomothetic (law-building) and idiographic (case-specific) approaches, as well as grounded theory which generates theories from real-world data. The document aims to promote more inclusive globalized approaches to increase the quality and impact of social sciences research. It relates this topic to the author's previous work on responsibilities of researchers and principles like exceptionism, uncertainty, and cross-cultural research design.
Sujay Inductive, nomothetic approaches and grounded theory FINAL FINAL FINAL ... (Sujay Rao Mandavilli)
This paper evaluates both inductive and deductive methods with respect to theory building, particularly in the social sciences. The former is an approach for drawing conclusions by proceeding from the specific to the general, while in the latter a hypothesis is usually developed and subsequently tested against further evidence. The paper also evaluates nomothetic approaches in opposition to idiographic or stand-alone approaches to theorization or theory-formulation, and recommends that a combination of the two be used. It also discusses the application of grounded theory in social sciences research alongside other approaches to theorization. While choosing an appropriate research method is the prerogative of the researcher, based on the research question involved, the researcher's personal inclinations, and time and cost considerations, this paper hopes to generate awareness of more globalized and inclusive approaches to scientific endeavour. This is, as such, our fifth paper on the philosophy of science, and extends our earlier work, which primarily focused on the importance of the social duties of every researcher and scholar, the principle of exceptionism or the sociological ninety-ten rule, the certainty-uncertainty principle, and the importance of cross-cultural research design. This paper is therefore the logical culmination of all our earlier endeavours, and forms an integral part of our "Globalization of science" movement, with particular emphasis on the social sciences.
Astronomy is primarily an observational science where scientists cannot actively experiment on celestial objects. Scientists use large surveys to collect data on many objects to look for variations. Population studies examine limited groups that share properties to see how features relate. Coordinate systems like altitude-azimuth and right ascension-declination are used to precisely map the positions of stars and other objects in the sky.
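The right ascension-declination and altitude-azimuth systems mentioned above are related by standard spherical-astronomy formulas; the sketch below converts between them for a single object (the site latitude and local sidereal time are arbitrary example values).

```python
# Standard conversion from equatorial (RA, Dec) to horizontal (alt, az)
# coordinates; all angles in degrees, azimuth measured from north through east.
import math

def equatorial_to_horizontal(ra_deg, dec_deg, lat_deg, lst_deg):
    ha = math.radians(lst_deg - ra_deg)               # hour angle
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)

    sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    alt = math.asin(sin_alt)

    cos_az = (math.sin(dec) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:                              # object west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# Example with arbitrary values: a star at RA 120 deg, Dec +20 deg, seen from
# latitude 50 deg N when the local sidereal time corresponds to 100 deg.
print(equatorial_to_horizontal(120.0, 20.0, 50.0, 100.0))
```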
The document discusses the causal interpretation of statistical models in social research. It outlines different perspectives from staunch causalists to moderate skeptics. Interpreting a statistical model causally is described as an epistemic activity to decide if a model is valid, rather than determining a physical causal relation. The causal interpretation depends on the statistical information and machinery used to make inferences from the model. Keeping statistical and causal inferences distinct is important, while acknowledging the role of background knowledge in interpretation.
This document discusses the theory of knowledge and science's claim to be the supreme form of knowledge. It examines the objectivity and empirical nature of the scientific method, particularly verification processes based on inductive and deductive reasoning. The document then analyzes the hypothetico-deductive model, which uses a combination of verification techniques. Specifically, it claims hypotheses can be tested and confirmed through predicting events and seeing if they occur as expected, with confirmation being inductive since the hypothesis holds true multiple times. However, the hypothetico-deductive model is still subject to flaws of inductive reasoning.
Grounded theory is a systematic qualitative research method where researchers gather data to build hypotheses and theories through inductive reasoning. As researchers examine collected data, ideas and concepts emerge from the data. Researchers code these concepts and may eventually categorize them to form new theories. Unlike conventional scientific methods that start with hypotheses, grounded theory starts with a question or data collection, allowing new contextualized ideas to emerge from the data.
The positivist approach relies on quantitative methods like experiments and statistical analysis to test hypotheses and discover generalizable truths. It assumes an objective reality can be observed and measured. The interpretivist approach uses qualitative methods like interviews and case studies to understand phenomena within their specific contexts. It sees reality as socially constructed and allows for multiple perspectives. Both approaches agree research aims to generate new knowledge but diverge on their methods and philosophical assumptions.
Artificial intelligence needs ideas from philosophy to build human-level intelligence. For a robot to have common sense and learn from experience, it needs a general worldview to organize facts, which raises many philosophical problems. Some philosophical approaches are compatible with designing intelligent systems, such as accepting both science and common sense knowledge, treating mind as a set of features rather than an all-or-nothing concept, and using approximate concepts. Philosophers could help AI by clarifying useful concepts like belief, causality, and counterfactuals.
This presentation explains research and theory in detail, covering the meaning of theory, definitions of theory, the contribution of research to theory, criteria for theory, theory and facts, the role of theory in research, and the uses of theory in research.
This document discusses experimental research design. It begins by outlining the scientific approach to research and some key assumptions of experimental design, including controlling variables and manipulating independent variables.
It then defines experimental design and discusses the importance of controlling extraneous variables. It identifies the key steps in conducting an experiment, including selecting variables, specifying treatment levels, controlling the environment, choosing a design, selecting subjects, pilot testing, and analyzing data.
The document also discusses the differences between internal and external validity in experiments. It outlines three main types of experimental designs - true experiments, quasi-experiments, and pre-experiments - and provides examples of designs within each type. It concludes by discussing the advantages and disadvantages of true experiments and quasi-experiments.
This document discusses experimental research design. It begins by outlining the scientific approach to research, which includes assumptions that nature is orderly, phenomena have natural causes, and knowledge is superior to ignorance. It then defines research design and describes different types, focusing on experimental design. Experimental design involves manipulating an independent variable in a controlled setting to observe its impact on a dependent variable. The document outlines the steps to conducting an experiment, including selecting variables, specifying treatment levels, controlling the environment, choosing a design, selecting subjects, implementing treatments, collecting data, and analyzing results. It emphasizes that proper experimental design allows for testing hypotheses and distinguishing between competing theories.
This document discusses experimental research design. It begins by outlining the scientific approach and assumptions of research, including that nature is orderly, phenomena have natural causes, and knowledge is superior to ignorance. It then defines research design and different types of designs, focusing on experimental design. Experimental design involves manipulating an independent variable in a controlled setting to observe its impact on a dependent variable. The document outlines the steps to conducting an experiment, including selecting variables, specifying treatment levels, controlling the environment, choosing a design, selecting subjects, implementing treatments, collecting data, and analyzing results. It emphasizes that proper experimental design allows for testing hypotheses and distinguishing between competing theories.
A diagnosis of tenets of the research process: what is it to know anything (Alexander Decker)
This summary provides an overview of a journal article that discusses philosophical underpinnings of the research process. The article examines different views on the nature of knowledge and how philosophical assumptions guide research approaches. It describes the distinction between a priori and a posteriori knowledge, and how tacit and explicit knowledge contribute to understanding phenomena. Researchers are said to conform to established research traditions and paradigms in systematically approaching problems, though philosophical positions may differ on what constitutes reliable knowledge.
Theories of induction in psychology and artificial intelligence assume that the process leads from observation and knowledge to the formulation of linguistic conjectures. This paper proposes instead that the process yields mental models of phenomena. It uses this hypothesis to distinguish between deduction, induction, and creative forms of thought. It shows how models could underlie inductions about specific matters. In the domain of linguistic conjectures, there are many possible inductive generalizations of a conjecture. In the domain of models, however, generalization calls for only a single operation: the addition of information to a model. If the information to be added is inconsistent with the model, then it eliminates the model as false: this operation suffices for all generalizations in a Boolean domain. Otherwise, the information that is added may have effects equivalent (a) to the replacement of an existential quantifier by a universal quantifier, or (b) to the promotion of an existential quantifier from inside to outside the scope of a universal quantifier. The latter operation is novel, and does not seem to have been used in any linguistic theory of induction. Finally, the paper describes a set of constraints on human induction, and outlines the evidence in favor of a model theory of induction.
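The two quantifier operations mentioned in the abstract can be written out explicitly; the following gloss is my own illustration of them, not notation taken from the paper.

```latex
% Requires amssymb for \leadsto.
% (a) replacing an existential quantifier with a universal one;
% (b) promoting an existential quantifier outside the scope of a universal.
\begin{align*}
\text{(a)}\quad & \exists x\,\bigl(S(x) \land W(x)\bigr)
    \;\leadsto\; \forall x\,\bigl(S(x) \rightarrow W(x)\bigr)
    \quad\text{(``some observed swans are white'' to ``all swans are white'')}\\
\text{(b)}\quad & \forall x\,\exists y\, R(x,y)
    \;\leadsto\; \exists y\,\forall x\, R(x,y)
    \quad\text{(``everyone has some ailment'' to ``one ailment afflicts everyone'')}
\end{align*}
```

In both cases the right-hand formula entails the left-hand one, so each operation strengthens the conjecture, which is the hallmark of inductive generalization.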
This document discusses inductive research as an alternative approach to theory development where theories are derived from analysis of collected data rather than guiding data collection. It provides an overview of inductive research as a qualitative approach that uses exploratory motives to generate broad generalizations through a flexible design focused on natural settings and in-depth understandings. The document also outlines the process of implementing induction including data collection, examination, and identifying themes to ultimately construct typologies, theories, and models. It notes both the limitations and strengths of inductive research.
This document provides a checklist for securing Mac OS X version 10.5, focusing on hardening the operating system, securing user accounts and administrator accounts, enabling file encryption and permissions, implementing intrusion detection, and maintaining password security. It describes the Unix infrastructure and security framework that Mac OS X is built on, leveraging open source software and following the Common Data Security Architecture model. The checklist can be used to audit a system or harden it against security threats.
This document summarizes a course on web design that was piloted in the summer of 2003. The course was a 3 credit course that met 4 times a week for lectures and labs. It covered topics such as XHTML, CSS, JavaScript, Photoshop, and building a basic website. 18 students from various majors enrolled. Student and instructor evaluations found the course to be very successful overall, though some improvements were suggested like ensuring proper software and pairing programming/non-programming students. The document also discusses implications of incorporating web design material into existing computer science curriculums.
The Philosophy of Science and its relation to Machine Learning
Jon Williamson
To appear in Mohamed Medhat Gaber (ed.)
Scientific Data Mining and Knowledge Discovery:
Principles and Foundations, Springer.
Draft of August 4, 2009
Abstract
In this chapter I discuss connections between machine learning and the
philosophy of science. First I consider the relationship between the two
disciplines. There is a clear analogy between hypothesis choice in science
and model selection in machine learning. While this analogy has been
invoked to argue that the two disciplines are essentially doing the same
thing and should merge, I maintain that the disciplines are distinct but
related and that there is a dynamic interaction operating between the two:
a series of mutually beneficial interactions that changes over time. I will
introduce some particularly fruitful interactions, in particular the conse-
quences of automated scientific discovery for the debate on inductivism
versus falsificationism in the philosophy of science, and the importance of
philosophical work on Bayesian epistemology and causality for contempo-
rary machine learning. I will close by suggesting the locus of a possible
future interaction: evidence integration.
Contents
§1 Introduction
§2 What is the Philosophy of Science?
§3 Hypothesis Choice and Model Selection
§4 Inductivism versus Falsificationism
§5 Bayesian Epistemology and Causality
§6 Evidence Integration
§7 Conclusion
§1
Introduction
Since its genesis in the mid 1990s, data mining has been thought of as en-
compassing two tasks: using data to test some pre-determined hypothesis, or
using data to determine the hypothesis in the first place. The full automation
of both these tasks—hypothesising and then testing—leads to what is known
as automated discovery or machine learning. When such methods are applied
to science we have what is called automated scientific discovery or scientific
machine learning. In this chapter we shall consider the relationship between
the philosophy of science and machine learning, keeping automated scientific
discovery particularly in mind.
§2 offers a brief introduction to the philosophy of science. In §3 it is suggested
that the philosophy of science and machine learning admit mutually fruitful in-
teractions because of an analogy between hypothesis choice in the philosophy
of science and model selection in machine learning. An example of the benefit
of machine learning for the philosophy of science is provided by the importance
of work on automated scientific discovery for the debate between inductivism
and falsificationism in the philosophy of science (§4). On the other hand, the
influence of philosophical work on Bayesianism and causality provides an ex-
ample of the benefits of the philosophy of science for machine learning (§5).
§6 hypothesises that evidence integration may become the locus of a further
fruitful interaction between the two fields.
§2
What is the Philosophy of Science?
In the quest to improve our understanding of science, three fields of enquiry
stand out: history of science, sociology of science and philosophy of science.
Historians of science study the development of science, key scientists and key
ideas. Sociologists of science study social constraints on scientific activity—e.g.,
how power struggles impact on the progress of science. Philosophers of science
study the concepts of science and normative constraints on scientific activity.
Questions of interest to philosophers of science include:
Demarcation: What demarcates science from non-science? One view is that
empirical testability is a necessary condition for a theory to count as sci-
entific.
Unity: To what extent is science a unified or unifiable field of enquiry? Some
take physics to be fundamental and the elements of other sciences to be
reducible to those of physics. Others argue that science is a hotch-potch
of rather unrelated theories, or that high-level complexity is not reducible
to low-level entities and their arrangement.
Realism: Are the claims of science true? To what extent are we justified in
believing contemporary scientific theories? Realists hold that scientific
theories aim to describe an independent reality and that science gradually
gets better at describing that reality. On the other hand instrumentalists
hold that science is an instrument for making predictions and technological
advances and that there is no reason to take its claims literally, or—if they
are taken at face value—there are no grounds for believing them.
Explanation: What is it to give a scientific explanation of some phenomena?
One view is that explaining is the act of pointing to the physical mechanism
that is responsible for the phenomena. Another is that explanation is
subsumption under some kind of regularity or law.
Confirmation: How does evidence confirm a scientific theory? Some hold that
evidence confirms a hypothesis just when it raises the probability of the
hypothesis. Others take confirmation to be a more complicated relation,
or not a binary relation at all but rather to do with coherence with other
beliefs.
Scientific Method: How are the goals of science achieved? What is the best
way of discovering causal relationships? Can one justify induction? While
many maintain that in principle one can automate science, others hold
that scientific discovery is an essentially human, intuitive activity.
Concepts of the Sciences: How should one interpret the probabilities of
quantum mechanics? Does natural selection operate at the level of the
individual, the population or the gene? Each science has its particular
conceptual questions; even the interpretation of many general concepts—
such as probability and causality—remains unresolved.
§3
Hypothesis Choice and Model Selection
There is a clear link between the philosophy of science on the one hand and
the area of machine learning and data mining on the other. This link is based
around an analogy between hypothesis choice in science and model selection in
machine learning. The task of determining a scientific hypothesis on the basis
of current evidence is much like the task of determining a model on the basis of
given data. Moreover, the task of evaluating the resulting scientific hypothesis is
much like the task of evaluating the chosen model. Finally the task of deciding
which evidence to collect next (which experiments and observations to perform)
seems to be similar across science and machine learning. Apparently, then,
scientific theorising and computational modelling are but two applications of a
more general form of reasoning.
How is this general form of reasoning best characterised? It is sometimes
called abductive inference or abduction, a notion introduced by C.S. Peirce. But
this nomenclature is a mistake: the form of reasoning alluded to here is more
general than abduction. Abduction is the particular logic of moving from ob-
served phenomena to an explanation of those phenomena. Science and machine
learning are interested in the broader, iterative process of moving from evidence
to theory to new evidence to new theory and so on. (This broader process was
sometimes called ‘induction’ by Peirce, though ‘induction’ is normally used in-
stead to refer to the process of moving from the observation of a feature holding
in each member of a sample to the conclusion that the feature holds of unob-
served members of the population from which the sample is drawn.) Moreover,
explanation is just one use of hypotheses in science and models in machine learn-
ing; hypotheses and models are also used for other forms of inference such as
prediction. In fact, while explanation is often the principal target in science,
machine learning tends to be more interested in prediction. When explanation is
the focus, one is theorising; when prediction is the focus the process is better de-
scribed as modelling. Clearly, then, the general form of reasoning encompasses
both theorising and modelling. This general form of reasoning is sometimes
called discovery. But that is not right either: the form of reasoning under con-
sideration here is narrower than discovery. ‘Discovery’ applies to finding out
new particular facts as well as to new generalities, but we are interested purely
in generalities here. For want of a better name we shall call this general form of
reasoning systematising, and take it to encompass theorising and modelling.
Granting, then, that hypothesis choice in science and model selection in
machine learning are two kinds of systematising, there are a variety of possible
views as to the relationship between the philosophy of science and machine
learning.
One might think that since the philosophy of science and machine learning
are both concerned with systematising, they are essentially the same discipline,
and hence some kind of merger seems sensible (Korb, 2001). This position is
problematic, though. As we saw in §2, the philosophy of science is not only con-
cerned with the study of systematising, but also with a variety of other topics.
Hence the philosophy of science can at best be said to intersect with machine
learning. Moreover, even where they intersect the aims of the two fields are
rather different: e.g., the philosophy of science is primarily interested in expla-
nation and hence theorising, while the area of machine learning and data mining
is primarily interested in prediction and hence modelling. Perhaps automated
scientific discovery is one area where the aims of machine learning and the phi-
losophy of science coincide. In which case automated scientific discovery is the
locus of intersection between the philosophy of science and machine learning.
But this rather narrow intersection falls far short of the claim that the two
disciplines are the same.
More plausibly, then, the philosophy of science and machine learning are not
essentially one, but nevertheless they do admit interesting connections (Tha-
gard, 1988, 189; Bensusan, 2000; Williamson, 2004). In Williamson (2004) I
argue that the two fields admit a dynamic interaction. There is a dynamic in-
teraction between two fields if there is a connection between them which leads
to a mutually beneficial exchange of ideas, the direction of transfer of ideas
between the two fields changes over time, and the fields remain autonomous
(Gillies and Zheng, 2001). Here we shall take a look at two beneficial points
of interaction: the lessons of automated scientific discovery for the study of
scientific method (§4) and the influence of work on Bayesian epistemology and
probabilistic causality on machine learning (§5).
§4
Inductivism versus Falsificationism
Scientific method is an important topic in the philosophy of science. How do
scientists make discoveries? How should they make discoveries?
One view, commonly called inductivism and advocated by Bacon (1620), is
that science should proceed by first making a large number of observations and
then extracting laws via a procedure that is in principle open to automation. An
opposing position, called falsificationism and held by Popper (1963), is that the
scientist first conjectures in a way that cannot be automated and then tests this
conjecture by observing and experimenting to see whether or not the predictions
of the conjecture are borne out, rejecting the conjecture if not.
While examples from the history of science have tended to support falsifica-
tionism over inductivism, the successes of automated scientific discovery suggest
that inductivism remains a plausible position (Gillies, 1996). The approach of
machine learning is to collect large numbers of observations in a dataset and
then to automatically extract a predictive model from this dataset. In auto-
mated scientific discovery this model is usually also meant to be explanatory;
hence the model plays the same role as scientific laws. To the extent that such
procedures are successful, inductivism is successful. Gillies (1996, §2.6) cites the
GOLEM inductive logic programming system as an example of a machine learn-
ing procedure that successfully induced scientific laws concerning protein fold-
ing; this success was achieved with the help of humans who encoded background
knowledge (Gillies, 1996, §3.4). Journals such as Data Mining and Knowledge
Discovery and the Journal of Computer-Aided Molecular Design show that the
inductive approach continues to produce advances. Moreover, the investment
of drug and agrochemical companies suggests that this line of research promises
to pay dividends. While the hope is that one day such companies might ‘close
the inductive loop’—i.e., automate the whole cyclic procedure of data collec-
tion, hypothesis generation, further data collection, hypothesis reformulation
. . .—the present reality is that machine successes are achieved in combination
with human expertise. The use by Dow AgroSciences of neural networks in the
development of the insecticide spinetoram offers a recent example of success-
ful human-machine collaboration (Sparks et al., 2008); spinetoram won the US
Environmental Protection Agency 2008 Designing Greener Chemicals Award.
Perhaps human scientists proceed by applying falsificationism while machine
science is inductivist. Or perhaps falsificationism and inductivism are but dif-
ferent approximations to a third view which better explicates scientific method.
What could this third view be? It is clear that human scientists base their
conjectures on a wide variety of different kinds of evidence, not just on a large
number of homogeneous observations. (This—together with the fact that it can
be hard for scientists to pin-point all the sources of evidence for their claims
and hard for them to say exactly how their evidence informs their hypotheses—
makes it hard to see how hypothesis generation can be automated. It is natural
to infer, with falsificationists, that hypothesis generation can’t be automated,
but such an inference may be too quick.) On the other hand, most machine
learning algorithms do take as input a large number of homogeneous observa-
tions; this supports inductivism, but with the proviso that the successes of au-
tomated scientific discovery tend to be achieved in concert with human scientists
or knowledge engineers. Human input appears to be important to fully utilise
the range of evidence that is available. The third view of scientific method, then,
is that a theory is formulated on the basis of heterogeneous evidence, including
background knowledge, observations, other theory and so on. This evidence is
often qualitative and hard to elucidate. The theory is revised as new evidence is
accrued, and the new evidence tends to be accrued in a targeted way, by testing
the current theory. Can this third way be automated? Contemporary machine
learning methods require well-articulated quantitative evidence in the form of a
dataset, but, as I suggest in §6, there is scope for relaxing this requirement and
taking a fuller spectrum of evidence into account.
In sum, while not everyone is convinced by the renaissance of inductivism
(see, e.g., Allen, 2001), it is clear that automated scientific discovery has yielded
some successes and that these have sparked new life into the debate between
inductivism and falsificationism in the philosophy of science.
§5
Bayesian Epistemology and Causality
Having looked at one way in which machine learning has had a beneficial impact
on the philosophy of science, we now turn to the other direction: the impact of
the philosophy of science on machine learning.
Epistemologists are interested in a variety of questions concerning our at-
titudes towards the truth of propositions. Some of these questions concern
propositions that we already grant or endorse: e.g., do we know that 2 + 2 = 4?
if so, why? how should a committee aggregate the judgements of its individuals?
Other questions concern propositions that are somewhat speculative: should I
accept that all politicians are liars? to what extent should you believe that it
will rain tomorrow?
Philosophers of science have been particularly interested in the latter ques-
tion: to what extent should one believe a proposition that is open to speculation?
This question is clearly relevant to scientific theorising, where we are interested
in the extent to which we should believe current scientific theories. In the 20th
century philosophers of science developed and applied Bayesian epistemology to
scientific theorising. The ideas behind Bayesian epistemology are present in the
writings of some of the pioneers of probability theory—e.g., Jacob Bernoulli,
Thomas Bayes—but only in recent years has it widely caught on in philosophy
and the sciences.
One can characterise contemporary Bayesian epistemology around the norms
that it posits:
Probability: The strengths of an agent’s beliefs should be representable by
probabilities. For example, the strength to which you believe it will rain
tomorrow should be measurable by a number P (r) between 0 and 1 inclu-
sive, and P (r) should equal 1 − P (¬r), where ¬r is the proposition that
it will not rain tomorrow.
Calibration: These degrees of belief should be calibrated with the agent’s
evidence. For example, if the agent knows just that between 60% and
70% of days like today have been followed by rain, she should believe it
will rain tomorrow to degree within the interval [0.6, 0.7].
Equivocation: Degrees of belief should otherwise be as equivocal as possible.
In the above example, the agent should equivocate as far as possible be-
tween r and ¬r, setting P (r) = 0.6, the value in the interval [0.6, 0.7] that
is closest to complete equivocation, $P_=(r) = 0.5$.
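A minimal Python sketch may help make these three norms concrete. It uses only the numbers from the rain example above; the helper name and the clamping strategy are illustrative assumptions, not a standard implementation:

```python
# A sketch of the three norms applied to the rain example (illustrative only).

def equivocal_choice(lower, upper, equivocator=0.5):
    """Return the degree of belief in the calibration interval [lower, upper]
    that lies closest to the completely equivocal value (0.5 for a single
    proposition and its negation)."""
    # Probability norm: the result is a number in [0, 1], with P(not r) = 1 - P(r).
    # Calibration norm: the result must lie within [lower, upper].
    # Equivocation norm: among calibrated values, pick the one nearest 0.5.
    return min(max(equivocator, lower), upper)

# Evidence: between 60% and 70% of days like today have been followed by rain.
p_rain = equivocal_choice(0.6, 0.7)
print(p_rain, 1 - p_rain)  # 0.6 and 0.4
```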
So-called subjective Bayesianism adopts the Probability norm and usually
the Calibration norm too. This yields a relatively weak prescription, where the
extent to which one should believe a proposition is largely left up to subjective
choice. In order to limit the scope for arbitrary shifts in degrees of belief,
subjectivists often invoke a further norm governing the updating of degrees of
belief: the most common such norm is Bayesian conditionalisation, which says
that the agent’s new degree of belief P (a) in proposition a should be set to her
old degree of belief in a conditional on the new evidence e, P (a) = P (a|e).
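As a toy illustration of conditionalisation (the prior degrees of belief below are invented purely for the example):

```python
# Bayesian conditionalisation: on learning evidence e, the new degree of
# belief in a is the old conditional degree of belief P_old(a | e).

def conditionalise(p_a_and_e, p_e):
    """Return P_old(a | e) = P_old(a and e) / P_old(e)."""
    if p_e == 0:
        raise ValueError("cannot conditionalise on evidence of probability zero")
    return p_a_and_e / p_e

# Hypothetical prior degrees of belief:
p_rain_and_cloudy = 0.3   # P_old(rain and cloudy)
p_cloudy = 0.5            # P_old(cloudy)
print(conditionalise(p_rain_and_cloudy, p_cloudy))  # new degree of belief in rain: 0.6
```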
In contrast to subjective Bayesianism, objective Bayesianism adopts all
three of the above norms—Probability, Calibration and Equivocation. If there
are finitely many basic propositions under consideration (propositions that are
not composed out of simpler propositions), these three norms are usually cashed
out using the maximum entropy principle: an agent's degrees of belief should be
representable by a probability function, from all those calibrated with evidence,
that has maximum entropy $H(P) = -\sum_{\omega\in\Omega} P(\omega)\log P(\omega)$.
(The maximum entropy probability function is the function that is closest to the
completely equivocal probability function $P_=$, which gives the same probability
$P_=(\omega) = 1/2^n$ to each conjunction $\omega \in \Omega = \{\pm a_1 \wedge \cdots \wedge \pm a_n\}$
of the basic propositions $a_1, \ldots, a_n$ or their negations, where distance from
one probability function to another is understood in terms of cross entropy
$d(P, Q) = \sum_{\omega\in\Omega} P(\omega)\log \frac{P(\omega)}{Q(\omega)}$.)
Since these three norms impose rather strong constraints on degrees of belief, no
further norm for updating need be invoked (Williamson, 2009c). The justifica-
tion of these norms and the relative merits of subjective and objective Bayesian
epistemology are topics of some debate in philosophy (see, e.g., Williamson,
2007).
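The maximum entropy calculation can be carried out numerically. The sketch below considers two basic propositions (so four conjunctions ω) and a single, invented calibration constraint P(a1) = 0.6; scipy's SLSQP optimiser is used simply as one convenient way to solve the constrained optimisation:

```python
import numpy as np
from scipy.optimize import minimize

# Atomic states over two basic propositions a1, a2:
# omega in { a1&a2, a1&~a2, ~a1&a2, ~a1&~a2 }
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)          # avoid log(0)
    return np.sum(p * np.log(p))        # minimising this maximises entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # Probability norm
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 0.6},  # Calibration: P(a1) = 0.6
]
result = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
                  constraints=constraints, method="SLSQP")
print(result.x)  # approximately [0.3, 0.3, 0.2, 0.2]: equivocal over a2
```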
The development of Bayesian epistemology has had a profound impact on
machine learning (Gillies, 2003). The field of machine learning arose out of
research on expert systems. It was quickly realised that when developing an
expert system it is important to model the various uncertainties that arise on
account of incomplete or inconclusive data. Thus the system MYCIN, which was
developed in the 1970s to diagnose bacterial infections, incorporated numerical
values called ‘certainty factors’. Certainty factors were used to measure the
extent to which one ought to believe certain propositions. Hence one might
think that Bayesian epistemology should be applied here. In fact, the MYCIN
procedure for handling certainty factors was non-Bayesian, and MYCIN was
criticised on account of its failing to follow the norms of Bayesian epistemology.
From the late 1970s it was common to handle uncertainty in expert systems
using Bayesian methods. And, when the knowledge bases of expert systems
began to be learned automatically from data rather than elicited from experts,
Bayesian methods were adopted by the machine learning community.
But Bayesian methods were rather computationally intractable in the late
1970s and early 1980s, and consequently systems such as Prospector, which
was developed in the second half of the 1970s for mineral prospecting, had to
make certain simplifying assumptions that were themselves questionable. Indeed
considerations of computational complexity were probably the single biggest
limiting factor for the application of Bayesian epistemology to expert systems
and machine learning.
[Figure 1: Smoking (S) causes Lung cancer (L) and Bronchitis (B) which in
turn cause Chest pains (C).]
It took a rather different stream of research to unleash the potential of Bayes-
ian epistemology in machine learning. This was research on causality and causal
reasoning. In the early 20th century, largely under the sceptical influence of
Mach, Pearson and Russell, causal talk rather fell out of favour in the sciences.
But it was clear that while scientists were reluctant to talk the talk, they were
still very much walking the walk: associations between variables were being in-
terpreted causally in order to predict the effects of interventions and to inform
policy. Consequently, philosophers of science remained interested in questions
about the nature of causality and in how one might best reason causally.
Under the probabilistic view of causality, causal relationships are analysable
in terms of probabilistic relationships—more specifically in terms of patterns
of probabilistic dependence and independence (Williamson, 2009d). Reichen-
bach, Good, Suppes and Pearl, pioneers of the probabilistic approach, devel-
oped the concept of a causal net. A causal net is a diagrammatic representation
of causes and effects—such as that depicted in Fig. 1—which has probabilistic
consequences via what is now known as the Causal Markov Condition. This
condition says that each variable in the net is probabilistically independent of
its non-effects, conditional on its direct causes. If we complete a causal net
by adding the probability distribution of each variable conditional on its direct
causes, the net suffices to determine the joint probability distribution over all
the variables in the net. Since the probabilities in the net tend to be interpreted
as rational degrees of belief, and since the probabilities are often updated by
Bayesian conditionalisation, a causal net is often called a causal Bayesian net.
If we drop the causal interpretation of the arrows in the graph we have what
is known as a Bayesian net. The advantage of the causal interpretation is that
under this interpretation the Markov Condition appears quite plausible, at least
as a default constraint on degrees of belief (Williamson, 2005a).
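A short Python sketch may make the Causal Markov Condition and the resulting factorisation concrete. The graph structure is that of Fig. 1; all of the conditional probabilities below are invented purely for illustration:

```python
from itertools import product

# Causal net of Fig. 1: S causes L and B, which in turn cause C.
# Under the Causal Markov Condition the joint distribution factorises as
#     P(S, L, B, C) = P(S) * P(L | S) * P(B | S) * P(C | L, B).
# All numbers below are hypothetical.
p_s = {True: 0.3, False: 0.7}                       # P(S)
p_l_given_s = {True: 0.10, False: 0.01}             # P(L = true | S)
p_b_given_s = {True: 0.20, False: 0.05}             # P(B = true | S)
p_c_given_lb = {(True, True): 0.9, (True, False): 0.7,
                (False, True): 0.6, (False, False): 0.1}  # P(C = true | L, B)

def joint(s, l, b, c):
    pl = p_l_given_s[s] if l else 1 - p_l_given_s[s]
    pb = p_b_given_s[s] if b else 1 - p_b_given_s[s]
    pc = p_c_given_lb[(l, b)] if c else 1 - p_c_given_lb[(l, b)]
    return p_s[s] * pl * pb * pc

# The completed net determines any degree of belief of interest, e.g. the
# marginal probability of chest pains, obtained by summing out S, L and B:
p_c = sum(joint(s, l, b, True) for s, l, b in product([True, False], repeat=3))
print(round(p_c, 4))
```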
A causal Bayesian net—and more generally a Bayesian net—can permit
tractable handling of Bayesian probabilities. Depending on the sparsity of the
graph, it can be computationally feasible to represent and reason with Bayes-
ian probabilities even where there are very many variables under consideration.
This fact completed the Bayesian breakthrough in expert systems and machine
learning. By building an expert system around a causal net, efficient represen-
tation and calculation of degrees of belief were typically achievable. From the
machine learning perspective, if the space of models under consideration is the
space of Bayesian nets of sufficiently sparse structure, then learning a model will
permit efficient inference of appropriate degrees of belief. If the induced net is
then interpreted causally, we have what might be considered the holy grail of
science: a method for the machine learning of causal relationships directly from
data.
In sum, Bayesian epistemology offered a principled way of handling uncer-
tainty in expert systems and machine learning, and Bayesian net methods over-
came many of the ensuing computational hurdles. These lines of work had a
huge impact: the dominance of Bayesian methods—and Bayesian net methods
in particular—in the annual conferences on Uncertainty in Artificial Intelligence
(UAI) from the 1980s is testament to the pervasive influence of Bayesian epis-
temology and work on probabilistic causality.
§6
Evidence Integration
We now have some grounds for the claim that there is a dynamic interaction
between machine learning and the philosophy of science: the achievements of au-
tomated scientific discovery have reinvigorated the debate between inductivists
and falsificationists in the philosophy of science; on the other hand work on
Bayesian epistemology and causality has given impetus to the handling of un-
certainty in machine learning. No doubt the mutually supportive relationships
between philosophy of science and machine learning will continue. Here we will
briefly consider one potential point of interaction, namely the task of evidence
integration.
The dominant paradigm in machine learning and data mining views the
machine learning problem thus: given a dataset learn a (predictively accurate)
model that fits (but does not overfit) the data. Clearly this is an important prob-
lem and progress made on this problem has led to enormous practical advances.
However this problem formulation is rather over-simplistic in the increasingly
evidence-rich environment of our information age. Typically our evidence is not
made up of a single dataset. Typically we have a variety of datasets—of vary-
ing size and quality and with perhaps few variables in common—as well as a
range of qualitative evidence concerning measured and unmeasured variables of
interest—evidence of causal, logical, hierarchical, ontological and mereological
relationships for instance. The above problem formulation just doesn’t apply
when our evidence is so multifarious. The Principle of Total Evidence, which
holds that one should base one’s beliefs and judgements on all one’s available
evidence, is a sound epistemological precept, and one that is breached by the
dominant paradigm in machine learning.
The limitations of the above problem formulation are increasingly becoming
recognised by the machine-learning community. This recognition has led to a
spate of research on what might be called forecast aggregation: a variety of
models, each derived from a different dataset, are used to make predictions, and
these separate predictions are somehow aggregated to yield an overall prediction.
The aggregation operation may involve simple averaging or more sophisticated
statistical meta-analysis methods.
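In its simplest form, forecast aggregation amounts to something like the following sketch, in which the individual forecasts are invented and pooled by unweighted averaging:

```python
# Forecast aggregation by simple averaging: several models, each learned from
# a different (hypothetical) dataset, predict the probability of the same event.
forecasts = {
    "model_from_dataset_A": 0.72,
    "model_from_dataset_B": 0.55,
    "model_from_dataset_C": 0.64,
}
aggregate = sum(forecasts.values()) / len(forecasts)
print(aggregate)  # unweighted linear pooling of the three forecasts
```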
But forecast aggregation itself has several limitations. First, it still falls foul
of the Principle of Total Evidence: each model is based on a single dataset but
qualitative evidence still tends to be ignored. (Not always: qualitative evidence
about relationships between the variables is sometimes invoked in the prepro-
cessing of the datasets—if the knowledge engineer sees that a variable in one
dataset is a subcategory of a variable in another dataset, these two variables
might be unified in some way. But this data grooming is typically done by hand.
As datasets involve more and more variables it is increasingly important that
qualitative evidence be respected as a part of the automation process.) Second,
it is unclear how far one should trust aggregated forecasts when they are often
generated by models that not only disagree but are based on mutually inconsis-
tent assumptions. Obviously in such cases at most one of the mutually incon-
sistent models is true; surely it would be better to find and use the true model
(if any) than to dilute its predictions with the forecasts of false models. But
this requires collecting new evidence rather than forecast aggregation. Third,
the general task of judgement aggregation—of which forecast aggregation is but
a special case—is fraught with conceptual problems; indeed the literature on
judgement aggregation is replete with impossibility results, not with solutions
(Williamson, 2009a).
In view of these concerns, a better approach might be to construct a single
model which is based on the entirety of the available evidence—quantitative
and qualitative—and to use that model for prediction. Combining evidence is
often called knowledge integration. However, available evidence may not strictly
qualify as knowledge, since it may, for reasons that are not evident, not all be
true; hence evidence integration is the better terminology.
Bayesian epistemology provides a very good way of creating a single model
on the basis of a wide variety of evidence. As discussed in §5 the model in
this case is (a representation of) the probability function that captures degrees
of belief that are appropriate for an agent with the evidence in question. The
evidence is integrated via the Calibration norm: each item of evidence imposes
constraints that this probability function must satisfy. (Some kind of consis-
tency maintenance procedure must of course be invoked if the evidence itself is
inconsistent.) A dataset imposes the following kind of constraint: the agent’s
probability function, when restricted to the variables of that dataset, should
match (fit but not overfit) the distribution of the dataset, as far as other evi-
dence permits. Qualitative evidence imposes another kind of equality constraint,
as follows. A relation R is an influence relation if learning of a new variable that
does not stand in relation R to (i.e., does not influence) the current variables
does not provide grounds for changing one’s degrees of belief concerning the
current variables. Arguably causal, logical, hierarchical, ontological and mereo-
logical relationships are influence relations. Hence evidence of such relationships
imposes equality constraints of the form: the agent’s probability function, when
restricted to variables that are closed under influence, should match the prob-
ability function that the agent would have adopted were she only to have had
evidence concerning that subset of variables, as far as other evidence permits.
Hence both quantitative and qualitative evidence impose certain equality con-
straints on degrees of belief. See Williamson (2005a) and Williamson (2009b)
for the details and motivation behind this kind of approach.
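To make the idea concrete, the following sketch combines two hypothetical datasets that share only one variable. Dataset 1 supplies the marginal distribution over (A, B) and dataset 2 the marginal over (B, C); for this simple overlapping pattern the maximum entropy joint over (A, B, C) is obtained by combining the two marginals as if A and C were independent conditional on B. All of the numbers are invented:

```python
from itertools import product

# Evidence integration via the Calibration and Equivocation norms:
# dataset 1 constrains the marginal over (A, B), dataset 2 the marginal over
# (B, C). The maximum entropy joint satisfying both is
#     P(a, b, c) = P(a, b) * P(b, c) / P(b).
marg_ab = {(0, 0): 0.40, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.30}  # dataset 1
marg_bc = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.25, (1, 1): 0.25}  # dataset 2

# The two datasets must agree on the shared variable B (a consistency check).
marg_b = {b: sum(v for (a, bb), v in marg_ab.items() if bb == b) for b in (0, 1)}
assert all(abs(marg_b[b] - sum(v for (bb, c), v in marg_bc.items() if bb == b)) < 1e-9
           for b in (0, 1))

belief = {(a, b, c): marg_ab[(a, b)] * marg_bc[(b, c)] / marg_b[b]
          for a, b, c in product([0, 1], repeat=3)}
print(belief)  # degrees of belief over all three variables, although no single
               # dataset measures A, B and C together
```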
In sum, evidence integration has greater potential than forecast aggrega-
tion to circumvent the limited applicability of the current machine learning
paradigm. Bayesian epistemology is strikingly well-suited to the problem of ev-
idence integration. Hence there is scope for another fruitful interaction between
philosophy of science and machine learning.
Example—Cancer Prognosis
As an illustration of the kind of approach to evidence integration that Bayesian
epistemology offers, we shall consider an application of objective Bayesian epis-
temology to integrating evidence for breast cancer prognosis. This application
is described in detail in Nagl et al. (2008).
When a patient has breast cancer and has had surgery to remove the cancer
it is incumbent on the relevant medical practitioners to make an appropriate
onward treatment decision. Broadly speaking, more effective treatments are
more aggressive in the sense that they have harsher side effects. Such treatments
are only warranted to the extent that the cancer is likely to recur without them.
The more strongly the medical practitioner believes the cancer will recur, the
more aggressive the treatment that will be instigated. It is important, then,
that the agent’s degree of belief in recurrence is appropriate given the available
evidence.
Here—as in many realistic applications—evidence is multifarious. There
are clinical datasets detailing the clinical symptoms of past patients, genomic
datasets listing the presence of various molecular markers in past patients, scien-
tific papers supporting particular causal claims or associations, medical experts’
causal knowledge, and information in medical informatics systems, including
medical ontologies and previous decision support systems such as argumenta-
tion systems.
In Nagl et al. (2008) we had available the following sources of evidence. First
we had a clinical dataset, namely the SEER study, which involves 3 million pa-
tients in the US from 1975–2003, including 4731 breast cancer patients. We also
had two molecular datasets, one with 502 cases and another with 119 cases; the
latter dataset also measured some clinical variables. Finally we had a published
study which established a causal relationship between two variables of interest.
These evidence sources impose constraints on an agent’s degrees of belief,
as outlined above. An agent’s degrees of belief should match the dataset dis-
tributions on their respective domains. Moreover, degrees of belief should re-
spect the equality constraints imposed by knowledge of causal influence. While
the Probability norm holds that the strengths of the agent’s beliefs should be
representable by a probability function, the Calibration norm holds that this
probability function should satisfy the constraints imposed by evidence.
These two norms narrow down the choice of belief function to a set of prob-
ability functions. But objective Bayesian epistemology imposes a further norm,
Equivocation. Accordingly, the agent’s degrees of belief should be representable
by a probability function from within this set that is maximally equivocal. On
a finite domain, this turns out to be the (unique) probability function in this set
that has maximum entropy. So objective Bayesian epistemology recommends
that treatment decisions be based on this maximum entropy probability func-
tion.
As discussed in §5, from a computational point of view it is natural to rep-
resent a probability function by a Bayesian net. A Bayesian net that represents
a probability function that is deemed appropriate by objective Bayesian episte-
mology is called an objective Bayesian net (Williamson, 2005b). This Bayesian
net can be used for inference—in our case to calculate the degree to which one
ought to believe that the patient's cancer will recur.
[Figure 2: Graph of an objective Bayesian net for breast cancer prognosis.]
Fig. 2 depicts the graph of the
objective Bayesian net in our cancer application. At the top is the recurrence
node, beneath which are clinical variables. These are connected to 5 molecular
variables at the bottom of the graph. Hence one can use both molecular markers
and clinical symptoms to predict the patient’s survival, even though no dataset
contains information about all these variables together. The objective Bayesian
net model succeeds in integrating rather disparate evidence sources.
§7
Conclusion
Machine learning in general, and automated scientific discovery in particular,
have a close relationship with the philosophy of science. On the one hand,
advances in automated scientific discovery have lent plausibility to inductivist
philosophy of science. On the other hand, advances in probabilistic epistemology
and work on causality have improved the ability of machine learning methods
to handle uncertainty.
I have suggested that inductivism and falsificationism can be reconciled by
viewing these positions as approximations to a third view of scientific method,
one which considers the full range of evidence for a scientific hypothesis. This
third way would be an intriguing avenue of research for philosophy of science. I
have also suggested that the current single-dataset paradigm in machine learning
is becoming increasingly inapplicable, and that research in machine learning
would benefit from serious consideration of the problem of formulating a model
on the basis of a broad range of evidence. Bayesian epistemology may be a
fruitful avenue of research for tackling evidence integration in machine learning
as well as evidence integration in the philosophy of science.
Acknowledgements
This research was supported by the Leverhulme Trust. I am very grateful to
Drake Eggleston and Peter Goodfellow for helpful leads.
References
Allen, J. F. (2001). Bioinformatics and discovery: induction beckons again.
BioEssays, 23:104–107.
Bacon, F. (1620). The New Organon. Cambridge University Press (2000),
Cambridge. Ed. Lisa Jardine and Michael Silverthorne.
Bensusan, H. (2000). Is machine learning experimental philosophy of science? In
Aliseda, A. and Pearce, D., editors, ECAI2000 Workshop notes on Scientific
Reasoning in Artificial Intelligence and the Philosophy of Science, pages 9–14.
Gillies, D. (1996). Artificial intelligence and scientific method. Oxford University
Press, Oxford.
Gillies, D. (2003). Handling uncertainty in artificial intelligence, and the Bayes-
ian controversy. In Stadler, F., editor, Induction and deduction in the sciences.
Kluwer, Dordrecht.
Gillies, D. and Zheng, Y. (2001). Dynamic interactions with the philosophy of
mathematics. Theoria, 16(3):437–459.
Korb, K. B. (2001). Machine learning as philosophy of science. In Korb, K.
and Bensusan, H., editors, Proceedings of the ECML-PKDD-01 workshop on
Machine Learning as Experimental Philosophy of Science.
Nagl, S., Williams, M., and Williamson, J. (2008). Objective Bayesian nets for
systems modelling and prognosis in breast cancer. In Holmes, D. and Jain,
L., editors, Innovations in Bayesian networks: theory and applications, pages
131–167. Springer, Berlin.
Popper, K. R. (1963). Conjectures and refutations. Routledge and Kegan Paul.
Sparks, T. C., Crouse, G. D., Dripps, J. E., Anzeveno, P., Martynow, J., DeAmicis,
C. V., and Gifford, J. (2008). Neural network-based QSAR and insecticide
discovery: spinetoram. Journal of Computer-Aided Molecular Design, 22:393–401.
Thagard, P. (1988). Computational Philosophy of Science. MIT Press / Bradford
Books, Cambridge MA.
Williamson, J. (2004). A dynamic interaction between machine learning and
the philosophy of science. Minds and Machines, 14(4):539–549.
Williamson, J. (2005a). Bayesian nets and causality: philosophical and compu-
tational foundations. Oxford University Press, Oxford.
Williamson, J. (2005b). Objective Bayesian nets. In Artemov, S., Barringer, H.,
d’Avila Garcez, A. S., Lamb, L. C., and Woods, J., editors, We Will Show
Them! Essays in Honour of Dov Gabbay, volume 2, pages 713–730. College
Publications, London.
Williamson, J. (2007). Motivating objective Bayesianism: from empirical con-
straints to objective probabilities. In Harper, W. L. and Wheeler, G. R.,
editors, Probability and Inference: Essays in Honour of Henry E. Kyburg Jr.,
pages 151–179. College Publications, London.
Williamson, J. (2009a). Aggregating judgements by merging evidence. Journal
of Logic and Computation, 19:461–473.
Williamson, J. (2009b). Epistemic complexity from an objective Bayesian per-
spective. In Carsetti, A., editor, Causality, meaningful complexity and knowl-
edge construction. Springer.
Williamson, J. (2009c). Objective Bayesianism, Bayesian conditionalisation and
voluntarism. Synthese, doi 10.1007/s11229-009-9515-y.
Williamson, J. (2009d). Probabilistic theories of causality. In Beebee, H., Hitch-
cock, C., and Menzies, P., editors, The Oxford Handbook of Causation. Oxford
University Press, Oxford.