This document discusses how deep learning relates to causality. It argues that deep learning is not weakening the concept of causality, as some critics claim, but is instead creating a new way to find indirect factor correlations and approximate causation. Deep learning provides situated approximations of causality through humanly designed correlational models, rather than determining causation on its own. The document asserts that deep learning processes data in a context-driven way shaped by human design, not in a purely data-driven manner, and can therefore contribute to causal understanding when guided by human evaluation and interpretation of its results.
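The distinction the summary draws, between the correlations a model extracts on its own and the causal reading that a human-designed context makes possible, can be illustrated with a minimal confounding example (all variables and coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: z drives both x and y, while x has
# no causal effect on y at all.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

# A purely correlational model regressing y on x finds a strong slope...
coef_naive = np.polyfit(x, y, 1)[0]

# ...but conditioning on the human-chosen context variable z (situating
# the model) drives the slope for x toward its true causal value, zero.
X = np.column_stack([x, z, np.ones(n)])
coef_adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope on x:    {coef_naive:.2f}")    # strongly non-zero
print(f"adjusted slope on x: {coef_adjusted:.2f}") # near zero
```

The naive regression reports a strong association between x and y even though, by construction, x does not cause y; only when the analyst deliberately includes the confounder z, a modeling choice the data alone does not dictate, does the estimate approach zero.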
Argumentation in Artificial Intelligence.pdf (Sabrina Baloi)
This document provides a foreword for a book on argumentation in artificial intelligence. It discusses how argumentation is an important part of human interaction and intelligence. It notes that argumentation involves both reasoning within individual minds and interaction between multiple agents. The foreword argues that studying argumentation can help unite different fields like logic, mathematics, computation, and studies of human behavior. It presents the book as bringing these various strands together and establishing argumentation as a new field at the intersection of artificial and natural intelligence.
The document provides an overview of artificial intelligence (AI) including its history, approaches, tools, and applications. It discusses early concepts of artificial beings in myths and how AI has been explored through cybernetics, brain simulation, symbolic approaches, statistical methods, and intelligent agent paradigms. Key problems in AI like search, heuristics, logic, and uncertainty are summarized. The document also reviews successes in games, robotics, and question answering systems to demonstrate progress in AI.
Artificial intelligence in the field of economics.pdf (gtsachtsiris)
This document discusses the history of artificial intelligence (AI) in economics. It describes how economists have engaged with AI since its beginnings, with a focus on different AI methods like machine learning, deep learning, neural networks, and expert systems. The document analyzes how AI has diffused through and within economic subfields using bibliometric data. It presents results mapping the use of AI in economics over time, place, and subfield. Additionally, it finds correlations between engagement with AI and institutional affiliation quality as well as a country's human development index.
The document discusses the evolution of computational design in architecture through four periods: Modularity, Computational Design, Parametricism, and Artificial Intelligence. Modularity introduced modular construction systems, while Computational Design utilized early CAD software to design complex shapes. Parametricism encoded design tasks as parameterized rules that could be varied to generate different designs efficiently. Artificial Intelligence is presented as the latest development, applying machine learning techniques to architectural challenges.
Artificial immune systems and the grand challenge for non classical computation (UltraUploader)
The document discusses the "Grand Challenge of Non-Classical Computation", which examines breaking free from classical assumptions about computation. It proposes this as a "journey" of exploration rather than having a set goal. Artificial Immune Systems are presented as an example of non-classical computation that could help address this challenge by breaking classical assumptions like the Turing paradigm and having emergent properties rather than predefined specifications. The document encourages the AIS community to position their research within this broader area of non-classical computation.
The laboratoryandthemarketinee bookchapter10pdf_merged (JeenaDC)
This document discusses two modes of scientific knowledge production:
(1) Modus 1 knowledge is produced within established disciplines through deductive reasoning and empirical testing. It aims to build consistent theoretical systems.
(2) Modus 2 knowledge is produced transdisciplinarily to solve practical problems. It is shaped by social and economic factors beyond any single discipline. Knowledge takes the form of narratives that guide practices rather than deductive systems.
The key difference is that Modus 1 seeks fundamental theoretical understanding while Modus 2 focuses on knowledge applicable to real-world problems across disciplinary boundaries. Both make use of models, analogies, and metaphors to give meaning and applicability to formal theories and findings.
Notational systems and cognitive evolution (Jeff Long)
October 29, 2005: “Notational Systems and Cognitive Evolution”. Presented at the 2005
Annual Conference of the American Society for Cybernetics. Paper published in conference proceedings.
This document introduces the concept of a "fourth paradigm" of scientific discovery based on data-intensive science. It discusses how scientific breakthroughs will increasingly depend on advanced computing capabilities to analyze massive datasets. The document presents perspectives from pioneering computer scientist Jim Gray and others on how data-intensive science represents a new era that requires new tools and ways of working. It also discusses the need for digital libraries and infrastructure to support the long-term curation, analysis, and sharing of scientific data and knowledge.
The problem of effective computation is discussed. A short historical analysis of this problem and the basic ways it has developed in culture and science, including computer science, is presented. The place of the problem of creating effective computation in modern science is discussed. The necessity of creating a "computation metascience" as a system with variable hierarchy (an open system) is observed. Polymetric analysis is presented and discussed as an example of this metascience.
This document provides a review of artificial intelligence focusing on embodied artificial intelligence, models of artificial consciousness, agent-based artificial intelligence, and philosophical commentary on AI. It summarizes the history and major developments in AI from the 1950s to the present. While many models and theories of consciousness have been proposed, very few have been implemented and there is little consensus in the field. The achievements of AI to date are viewed as meager compared to the initial expectations.
Dr Ahmad_Cognitive Sciences Strategies for Futures Studies (Foresight) (Dr. Ahmad, Futurist)
Accepted to be presented by KogWis 2016: Doctoral Symposium, Bremen Spatial Cognition Research Centre, Universität Bremen, 26-30 Sep 2016.
https://mindmodeling.org/cogsci2016/papers/0704/paper0704.pdf
Abstract. Developing a conceptual model of the origin of future scenarios leads us to explore Cognitive Science (CS) strategies for Futures Studies (FS). This research asks how scenario planning could benefit from CS by reshaping mental models, and, in turn, how the strategies explored could help develop future-oriented machine intelligence. This is a vast amount of work to consider. Modeling via abduction, chance-seeking via intervention on tacit knowledge, acquiring useful information via causality grouping, intelligence increase over time, and idea blending are just the first examples, so we have a long way to go.
This document proposes a framework for applying systems thinking to help humans adapt and meet 21st century challenges in a sustainable way. It suggests that principles from diverse fields like neuroscience, psychology, and history can inform this framework if integrated at both individual and collective levels. The framework is based on recognizing humans' innate capacities for cooperation, creativity, and adapting behaviors/mental models in response to feedback. It aims to empower joint problem-solving through definable structures that leverage these human strengths.
NG2S: A Study of Pro-Environmental Tipping Point via ABMs (Kan Yuenyong)
A study of tipping points: much less is known about the most efficient ways to reach such transitions, or about how self-reinforcing systemic transformations might be instigated through policy. We employ an agent-based model to study the emergence of social tipping points through various feedback loops previously identified as constituting an ecological approach to human behavior. Our model suggests that even a linear introduction of pro-environmental affordances (action opportunities) into a social system can have non-linear positive effects on the emergence of collective pro-environmental behavior patterns.
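The dynamic this abstract describes, a linear ramp of affordances triggering a non-linear cascade of adoption through social feedback, can be sketched as a toy threshold model (the population size, Gaussian threshold distribution, and update rule below are illustrative assumptions, not the NG2S model itself):

```python
import random

random.seed(7)

N = 1000
STEPS = 50
# Each agent holds a personal resistance threshold and adopts the behavior
# once affordance level plus the fraction of peers already adopting exceeds
# that threshold. Gaussian thresholds are an illustrative assumption.
thresholds = [random.gauss(1.0, 0.25) for _ in range(N)]
adopted = [False] * N
history = []

for step in range(STEPS + 1):
    affordance = step / STEPS          # linear ramp of action opportunities
    social = sum(adopted) / N          # reinforcing social feedback loop
    for i in range(N):
        if not adopted[i] and affordance + social > thresholds[i]:
            adopted[i] = True
    history.append(sum(adopted) / N)

# Linear input, non-linear output: adoption stays low, then cascades.
print(f"adoption at 40% ramp: {history[20]:.2f}")
print(f"adoption at 80% ramp: {history[40]:.2f}")
```

Because each new adopter lowers the effective barrier for everyone else, the population stays largely inactive through the early part of the ramp and then tips almost completely within a few steps, which is the qualitative non-linearity the abstract reports.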
By Aranna Hasan Delwar.
*Summary
Despite efforts to integrate research across the various subdisciplines of biology, the scale of integration remains limited. We hypothesize that future generations of Artificial Intelligence (AI) technologies specifically adapted for the biological sciences will help enable the reintegration of biology. AI technologies will allow us not only to collect, connect, and analyze data at unprecedented scales, but also to build comprehensive predictive models that span multiple subdisciplines. They will make possible both targeted discoveries (testing specific hypotheses) and untargeted ones. AI for biology will be the cross-cutting technology that enhances our ability to do biological research at every scale. We expect AI to revolutionize biology in the 21st century much as statistics transformed biology in the 20th century. The difficulties, however, are many, including data curation and assembly, the development of new science in the form of theories that connect the subdisciplines, and new predictive and interpretable AI models better suited to biology than existing machine learning and AI techniques. Development efforts will require strong partnerships between biological and computational scientists. This white paper provides a vision for AI for Biology and highlights some of its challenges.
*Introduction
Artificial Intelligence (AI) as an idea is old. It can be traced all the way back to antiquity, around 700 B.C. in Greek mythology, for instance, with the giant Talos, made of bronze and made, not born, to protect Europa, the mother of King Minos of Crete (Mayor 2018). From then to more modern and scientific times, the main limitation on producing machines capable of reasoning has been technology, as recognized by Alan M. Turing (Turing 1936), who, well ahead of his time, was asking questions about machines, behavior, consciousness, and the use of discrete processes to mimic nervous systems that operate continuously. John von Neumann (von Neumann 1958), likewise ahead of his time, proposed in 1945 a computer design in which both the program instructions and the data reside in random-access memory. This design was the forerunner of the modern computer, yet it was not until the advent of the fast CPU that AI became a practical reality. Since its beginnings in 1956 (McCarthy et al. 2006; Kaplan and Haenlein 2019) as a field of research and development, AI has grown and endured setbacks, until early in the 21st century, when it finally flourished with successful applications in academia and industry. A combination of new techniques and the availability of powerful computers, along with immense collections of data, brought huge s
MY 1st Nobel Prize by 2037 (Work-in-Progress) (my1nobel2037)
The document proposes establishing a $25 million research center to study creativity and genius through the lens of connecting ideas between human imagination and machine intelligence, termed "symbiotic genius". It references recent studies on the neurological basis of imagination and cites examples of solo, group, and symbiotic genius. Funding such a center could provide a Nobel-worthy opportunity to complement existing initiatives focused on intelligence and drive economic growth through cultivating mass innovation, creativity, and flourishing.
The document discusses how scientific knowledge and thinking have advanced in the 20th century due to the end of classical certainties. It describes how Ilya Prigogine addressed the end of certainty in concepts like time and causality. Werner Heisenberg's uncertainty principle showed that there are limits to human knowledge at the quantum level. Kurt Gödel's incompleteness theorems demonstrated the limitations of classical logic and that a system cannot prove itself consistent. The document argues that complex thinking is needed to develop a comprehensive understanding of scientific phenomena by integrating knowledge across disciplines through group work rather than individual specialization.
Abstract: The emerging configuration of educational institutions, technologies, scientific practices, ethics policies and companies can be usefully framed as the emergence of a new “knowledge infrastructure” (Paul Edwards). The idea that we may be transitioning into significantly new ways of knowing – about learning and learners, teaching and teachers – is both exciting and daunting, because new knowledge infrastructures redefine roles and redistribute power, raising many important questions. What should we see when we open the black box powering analytics? How do we empower all stakeholders to engage in the design process? Since digital infrastructure fades quickly into the background, how can researchers, educators and learners engage with it mindfully? This isn’t just interesting to ponder academically: your school or university will be buying products that are being designed now. Or perhaps educational institutions should take control, building and sharing their own open source tools? How are universities accelerating the transition from analytics innovation to infrastructure? Speaking from the perspective of leading an institutional innovation centre in learning analytics, I hope that our experiences designing code, competencies and culture for learning analytics shed helpful light on these questions.
Crowdsourcing and Cognitive Data Analytics for Conflict Transformation - Istv... (Istvan Csakany)
ABSTRACT
The thesis discusses the opportunities of using crowdsourcing and cognitive data analytics to increase the efficiency and accuracy of conflict transformation practices. It builds on Ken Wilber’s integral theory and AQAL model to identify the common ground between the transcend method of Johan Galtung, the elicitive conflict transformation approach of John Paul Lederach, and selected cases of crowdsourcing and cognitive data analytics applications. The theories are then applied to the three case studies of the practical use of crowdsourcing and the perspectives in cognitive computer science. The crowdsourcing examples include the constitutional reform process in Iceland (2011–2013), the UNHCR Ideas program, and Sugata Mitra’s School in the Cloud (SOLE) initiative. The case of cognitive computing is discussed through the analysis of IBM Watson’s utility in medical sciences. The thesis concludes that augmenting human intelligence and exploiting the knowledge of large masses through crowdsourcing and cognitive data analytics are viable options also in the field of conflict transformation and peace research. There are already examples of good practices but there is a significant difference between the utility of various approaches in favour of those that build on a human–computer partnership and are open to redefining existing paradigms.
This document discusses the theory of the knowledge society and the concept of a "super-paradigm". It argues that we are facing a new reality that completely overturns existing knowledge. The super-paradigm represents a mix of old and new scientific paradigms integrated in new ways. It also discusses defining characteristics of the new paradigms, including blurred boundaries between matter and spirit. The document puts forth a view of reality as both existing and not existing ("vannincs"), and explores a multi-level structure of reality from the divine to the human. It argues the new model of globalization supports efforts for an eventual "Age of Unity" in the 21st century characterized not by uniformity but cooperation between
This document outlines several emerging technologies and trends over the next 50 years across multiple domains:
- Nanotechnology will advance materials science, medicine, and serve as a model for transdisciplinary science. It will be shaped by both fundamental research and commercial applications.
- Biotechnology like genetic engineering will allow for manipulating and designing new life forms to go beyond what evolution has produced. It will be inspired by what we learn from nature.
- Advances in various technologies will provide opportunities to enhance and extend human capabilities in both mental and physical aspects, potentially leading to a transhumanist future.
- Computation and data analysis will allow for decoding patterns across biological and social systems and widespread simulation use for decision making and
This document discusses the relationships between logic, mathematics, and science. It provides examples of how philosophers have used logic to explore truths, such as William Paley's argument for the existence of a creator based on biological complexity. The development of symbolic logic allowed mathematics to be treated as a logical system, as in Bertrand Russell's Principia Mathematica. However, Kurt Gödel's incompleteness theorems proved that a fully consistent and complete logical system is impossible. While logic has limitations, it remains an important tool for understanding and acquiring knowledge through the scientific method.
1. Contemporary social theories describe how digital technologies have shifted communication away from traditional face-to-face and print media towards functions like social media that follow an escalating logic and erase humanism.
2. The rise of social software, open access, and just-in-time knowledge enabled by the internet challenges traditional structures of gathering and sharing knowledge by allowing information to be more freely distributed and accessed.
3. However, merely distributing information does not necessarily lead to growth of knowledge, as knowledge requires information to be meaningfully modeled and analyzed rather than just believed. The internet allows "downloadable beliefs" but risks knowledge being reduced to beliefs without justification.
Human level artificial general intelligence agi (Karlos Svoboda)
This document discusses different scenarios for the future development of artificial general intelligence (AGI). It begins by contrasting narrow AI, which focuses on solving specific problems, with AGI, which would involve a system that can understand itself, generalize knowledge across domains, and transfer skills like humans. The document then outlines several possible scenarios using the framework of scenario analysis: 1) steady incremental progress of narrow AI eventually reaching AGI, 2) narrow AI continues successfully but AGI is not achieved, and 3) AGI is developed, rapidly improves itself, and leads to a technological singularity transforming society in unpredictable ways. It discusses these scenarios and their implications in more detail.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document discusses positivism and postpositivism in relation to engineering. It begins by introducing positivism as referring to objectively describing reality that exists and can be observed. It then discusses how positivism has been the philosophical foundation of engineering methodology for undertaking projects. However, it notes that postpositivism emerged as a new paradigm that recognizes the interaction between the observer and observed. The document concludes that while engineering methodology remains positivist, engineers must also be able to link to other disciplines through a transdisciplinary understanding, as problems today require input from different fields of knowledge.
The document discusses civil disobedience and its necessity in a free society. It argues that citizens have an obligation to break unjust laws according to Henry David Thoreau. Civil disobedience is an effective way to criticize the government without violence, as demonstrated by movements led by Gandhi, Martin Luther King Jr., and factory workers. While some argue violence is needed, civil disobedience is remarkably effective and citizens engaging in it are exercising, not overextending, their constitutional rights.
Similar to Approximate and Situated Causality in Deep Learning.pdf
Crowdsourcing and Cognitive Data Analytics for Conflict Transformation - Istv...Istvan Csakany
ABSTRACT
The thesis discusses the opportunities of using crowdsourcing and cognitive data analytics to increase the efficiency and accuracy of conflict transformation practices. It builds on Ken Wilber’s integral theory and AQAL model to identify the common ground between the transcend method of Johan Galtung, the elicitive conflict transformation approach of John Paul Lederach, and selected cases of crowdsourcing and cognitive data analytics applications. The theories are then applied to the three case studies of the practical use of crowdsourcing and the perspectives in cognitive computer science. The crowdsourcing examples include the constitutional reform process in Iceland (2011–2013), the UNHCR Ideas program, and Sugata Mitra’s School in the Cloud (SOLE) initiative. The case of cognitive computing is discussed through the analysis of IBM Watson’s utility in medical sciences. The thesis concludes that augmenting human intelligence and exploiting the knowledge of large masses through crowdsourcing and cognitive data analytics are viable options also in the field of conflict transformation and peace research. There are already examples of good practices but there is a significant difference between the utility of various approaches in favour of those that build on a human–computer partnership and are open to redefining existing paradigms.
This document discusses the theory of the knowledge society and the concept of a "super-paradigm". It argues that we are facing a new reality that completely overturns existing knowledge. The super-paradigm represents a mix of old and new scientific paradigms integrated in new ways. It also discusses defining characteristics of the new paradigms, including blurred boundaries between matter and spirit. The document puts forth a view of reality as both existing and not existing ("vannincs"), and explores a multi-level structure of reality from the divine to the human. It argues the new model of globalization supports efforts for an eventual "Age of Unity" in the 21st century characterized not by uniformity but cooperation between
This document outlines several emerging technologies and trends over the next 50 years across multiple domains:
- Nanotechnology will advance materials science, medicine, and serve as a model for transdisciplinary science. It will be shaped by both fundamental research and commercial applications.
- Biotechnology like genetic engineering will allow for manipulating and designing new life forms to go beyond what evolution has produced. It will be inspired by what we learn from nature.
- Advances in various technologies will provide opportunities to enhance and extend human capabilities in both mental and physical aspects, potentially leading to a transhumanist future.
- Computation and data analysis will allow for decoding patterns across biological and social systems and widespread simulation use for decision making and
This document discusses the relationships between logic, mathematics, and science. It provides examples of how philosophers have used logic to explore truths, such as William Paley's argument for the existence of a creator based on biological complexity. The development of symbolic logic allowed mathematics to be treated as a logical system, as in Bertrand Russell's Principia Mathematica. However, Kurt Gödel's incompleteness theorems proved that a fully consistent and complete logical system is impossible. While logic has limitations, it remains an important tool for understanding and acquiring knowledge through the scientific method.
1. Contemporary social theories describe how digital technologies have shifted communication away from traditional face-to-face and print media towards functions like social media that follow an escalating logic and erase humanism.
2. The rise of social software, open access, and just-in-time knowledge enabled by the internet challenges traditional structures of gathering and sharing knowledge by allowing information to be more freely distributed and accessed.
3. However, merely distributing information does not necessarily lead to growth of knowledge, as knowledge requires information to be meaningfully modeled and analyzed rather than just believed. The internet allows "downloadable beliefs" but risks knowledge being reduced to beliefs without justification.
Human level artificial general intelligence agiKarlos Svoboda
This document discusses different scenarios for the future development of artificial general intelligence (AGI). It begins by contrasting narrow AI, which focuses on solving specific problems, with AGI, which would involve a system that can understand itself, generalize knowledge across domains, and transfer skills like humans. The document then outlines several possible scenarios using the framework of scenario analysis: 1) steady incremental progress of narrow AI eventually reaching AGI, 2) narrow AI continues successfully but AGI is not achieved, and 3) AGI is developed, rapidly improves itself, and leads to a technological singularity transforming society in unpredictable ways. It discusses these scenarios and their implications in more detail.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document discusses positivism and postpositivism in relation to engineering. It begins by introducing positivism as referring to objectively describing reality that exists and can be observed. It then discusses how positivism has been the philosophical foundation of engineering methodology for undertaking projects. However, it notes that postpositivism emerged as a new paradigm that recognizes the interaction between the observer and observed. The document concludes that while engineering methodology remains positivist, engineers must also be able to link to other disciplines through a transdisciplinary understanding, as problems today require input from different fields of knowledge.
great advances produced by machine learning techniques (henceforth, ML), several authors have asked themselves whether ML can contribute to the creation of causal knowledge. We answer this question in the next section.
2. Deep Learning, Counterfactuals, and Causality
It is in this context, where statistical analysis rules the study of causal relationships, that we find the attack on Machine Learning and Deep Learning as unsuitable tools for the advance of causal and scientific knowledge. The best-known and most debated arguments come from the eminent statistician Judea Pearl [11], [12], and have been widely accepted. The main idea is that machine learning cannot create causal knowledge because it lacks the skill of managing counterfactuals; in his exact words ([11], page 7): “Our general conclusion is that human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models. Data science is only as much of a science as it facilitates the interpretation of data – a two-body problem, connecting data to reality. Data alone are hardly a science, regardless how big they get and how skillfully they are manipulated”. What he is describing is the well-known problem of the black-box model: we use machines that process very complex amounts of data and deliver some extraction at the end. This has been called a GIGO (Garbage In, Garbage Out) process [13], [14]. It could be argued that GIGO problems are computational versions of the Chinese room thought experiment [15]: the machine can find patterns, but without real and detailed causal meaning. This is what Pearl means: the blind use of data to establish statistical correlations instead of describing causal mechanisms. But is it true? In a nutshell: not at all. I will explain the reasons.
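The gap Pearl points to, between statistical correlation and causal mechanism, can be made concrete with a toy simulation. In the sketch below (all variable names and coefficients are invented for illustration, not taken from the paper), a hidden confounder drives two variables that correlate strongly despite neither causing the other, and an intervention in the spirit of Pearl's do-operator dissolves the correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder z drives both x and y; x does not cause y.
z = rng.normal(size=n)
x = z + 0.3 * rng.normal(size=n)
y = z + 0.3 * rng.normal(size=n)

r_xy = np.corrcoef(x, y)[0, 1]   # strong observational correlation

# Intervening on x (do(X = noise)) sets x by fiat, independently of z,
# so the correlation with y vanishes: the link was never causal.
x_do = rng.normal(size=n)
r_do = np.corrcoef(x_do, y)[0, 1]

print(round(r_xy, 2), round(r_do, 2))  # strong vs. near zero
```

A purely observational learner sees only `r_xy`; distinguishing the two regimes requires the interventional knowledge that, as the paper argues, humans inject into the modelling pipeline.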
2.1. Deep Learning is not a data-driven but a context-driven technology: made by humans, for humans.
Most epistemic criticisms of AI repeat the same idea: machines are still not able to operate as humans do [16], [17]. The claim is always that computers operate on data from a semantically blind perspective, which makes it impossible for them to understand the causal connections between data; this is the very definition of a black-box model [18], [19]. But here the first problem appears: deep learning (henceforth, DL) is not the result of automated machines creating search algorithms by themselves and then evaluating them and their results. DL is designed by humans, who select the data, evaluate the results, and decide the next step in the chain of possible actions. At the epistemic level, the decision about how to interpret the validity of DL results remains under human evaluation; DL is a complex technique, but still only a technique [20]. Moreover, even the latest trends in AGI design include causal thinking, as the DeepMind team has recently detailed [21], together with explainable properties. The exponential growth of data and their correlations has been affecting several fields, especially epidemiology [22], [23]. At first the agents of a scientific community may experience this as a great challenge, in the same way that astronomical statistics modified the Aristotelian-Newtonian idea of physical cause, but with time the research field accepts new ways of thinking. Consider also the revolution of computer proofs in mathematics and the debates these techniques generated among experts [24], [25].
In that sense, DL just provides a situated approximation to reality, using correlational coherence parameters designed by the communities that use them. It is beyond the nature of any kind of machine learning to solve problems related only to human epistemic envisioning: consider the long, unfinished, and even distasteful debates among the experts of the different statistical schools [8]. And this is true because data do not provide or determine epistemology, in the same sense that groups of data do not provide the syntax and semantics of the possible organizational systems to which they can be assigned. Any connection between the complex dimensions of an event expresses a possible epistemic approach, which is a (necessary) working simplification. We cannot understand the world using the world itself, in the same way that the best map is not a 1:1 scale map, as Borges wrote (1946, On Exactitude in Science): “…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography”.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 8 July 2019 doi:10.20944/preprints201907.0110.v1
DL, then, cannot follow a specific information-processing path completely different from those run by humans. Like any other epistemic activity, DL must include different levels of uncertainty if we want to use it [26]. Uncertainty is a reality for any cognitive system, and consequently DL must be prepared to deal with it. Computer vision is a clear example of this set of problems [27]. Kendall and Gal have even coined new concepts to introduce uncertainty into DL: homoscedastic and heteroscedastic uncertainties (both aleatoric) [28]. The way such uncertainties are integrated can determine the epistemic model (which is a real cognitive-algorithmic extension of ourselves). For example, the Bayesian approach provides an efficient way to avoid overfitting, allows working with multi-modal data, and makes it possible to use them in real-time scenarios (as compared to Monte Carlo approaches) [29]; even better, some authors envision Bayesian Deep Learning [30]. Dimensionality is a related question that also has a computational solution, as Yoshua Bengio has been exploring over the last decades [31]–[33].

In any case, we cannot escape the formal informational paradoxes, well known at the logical and mathematical level since Gödel explained them; they simply re-emerge in this computational scenario, showing that artificial learnability can also be undecidable [34]. Machine learning deals with a rich set of statistical problems, problems that even at the biological level are calculated approximately [35]–[37], heuristics that are also being implemented in machines. This open range of possibilities, and the existence of mechanisms like informational selection procedures (induction, deduction, abduction), makes it possible to use DL at a controlled but creative operational level [38].
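The heteroscedastic aleatoric loss of Kendall and Gal can be sketched numerically. The loss function below follows their published regression form (predicting the log-variance for numerical stability); the toy grid search around it is my own illustration, not code from their paper:

```python
import numpy as np

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Kendall & Gal's regression loss: residuals are down-weighted
    where predicted aleatoric variance is high, while the log-variance
    term stops the model from claiming infinite uncertainty."""
    precision = np.exp(-log_var)  # 1 / sigma^2
    return 0.5 * (precision * (y_true - y_pred) ** 2 + log_var)

# Toy check: for a fixed residual r = 2, the loss is minimised when
# the predicted variance matches the squared error, sigma^2 = r^2.
grid = np.linspace(-3.0, 5.0, 81)
losses = heteroscedastic_nll(2.0, 0.0, grid)
best_log_var = grid[np.argmin(losses)]
print(round(float(best_log_var), 1))  # prints 1.4, close to log(4)
```

In a network this scalar `log_var` would be a second output head trained jointly with the prediction, which is how the uncertainty becomes input-dependent (heteroscedastic) rather than a single fitted constant (homoscedastic).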
2.2. Deep learning already runs counterfactual approaches.
Pearl's second big criticism of DL is its supposed incapacity to integrate counterfactual heuristics. First, we must note that counterfactuals do not guarantee any epistemic model with precision; they just add some value (or not). From a classic epistemic point of view, counterfactuals do not provide more robust scientific knowledge: a quick look at the last two thousand years of both Western and Eastern sciences supports this view [4]. Going even further, I maintain that counterfactuals can block thinking once they are structurally related to a closed domain or paradigm of well-established rules; otherwise they are just fiction or an empty thought experiment. Counterfactuals are a fundamental aspect of human reasoning [39]–[42], and their algorithmic integration is a good idea [43]. But at the same time, due to underdetermination [44]–[46], counterfactual thinking can express completely wrong ideas about reality. DL cannot have an objective ontology that would let it be designed as a perfect epistemological tool, because of the huge complexity of the data involved as well as the necessary situatedness of any cognitive system. Uncertainty would not form part of such counterfactual operability [47], since it would be ascribed to some not-well-known but domesticable aspect of reality; nonetheless, some new ideas fit neither the whole set of known facts and the current paradigm, nor the set of new ones. This would position us in a sterile no man's land, or even block any sound epistemic movement. But humans are able to deal with this, and go even beyond it [48]. Opportunistic blending and creative innovating are among our most valuable cognitive skills [49].
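It is worth recalling that Pearl's own counterfactual machinery is itself algorithmic, and so can in principle be integrated into learning systems. The sketch below works through his three-step abduction-action-prediction recipe on a minimal structural causal model (the variables and coefficients are invented for illustration):

```python
# A linear structural causal model: x := u_x, y := 2*x + u_y.
# Counterfactual query: having observed (x = 0.5, y = 1.5),
# what would y have been under the intervention do(x = 1)?

def abduct(x_obs, y_obs):
    # Step 1 (abduction): recover the exogenous noise values
    # consistent with the actual observation.
    u_x = x_obs
    u_y = y_obs - 2 * x_obs
    return u_x, u_y

def predict(u_y, x_new):
    # Steps 2-3 (action + prediction): replace the mechanism for x
    # with do(x = x_new), keep the abducted noise fixed, recompute y.
    return 2 * x_new + u_y

u_x, u_y = abduct(0.5, 1.5)   # u_y = 0.5
y_cf = predict(u_y, 1.0)
print(y_cf)                   # 2.5
```

The hard part, of course, is not executing these three steps but knowing the structural equations in the first place, which is precisely where human modelling choices enter.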
2.3. DL is not Magic Algorithmic Thinking (MAT).
Our third and last analysis of DL characteristics concerns its explainability. Despite the evidence that the causal debate is beyond any possible resolution provided by DL, because it belongs to ontological perspectives that require a different, holistic analysis, it is clear that the results provided by DL must be not only coherent but also explainable; otherwise we would face a new algorithmic form of magic thinking. For the same reasons that DL cannot be a merely complex form of curve fitting, it cannot become a fuzzy domain beyond human understanding. Some attempts are being made to prevent this, most of them run by DARPA: Big Mechanisms [50] or XAI (eXplainable Artificial Intelligence) [51], [52]. Two images from the DARPA website illustrate this:

Figure 1. Explainability in DL.

Figure 2. XAI from DARPA.
Again, these approaches answer a request: how to adapt new epistemic tools to our cognitive
performative thresholds and characteristics. From a conceptual perspective, this is no bigger a
revolution than the ones that happened during the Renaissance with the use of telescopes or
microscopes. DL systems are not running by themselves, interacting with the world, automatically
selecting the informational events to be studied, or evaluating them in relation to a whole universal
paradigm of semantic values. Neither do humans.
Singularity debates are useful for exploring possible conceptual frameworks and must be held
[53]–[55], but at the same time they cannot become fallacious, fatalist arguments against current
knowledge. Today, DL is a tool used by experts to map new connections between sets of
data. Epistemology is not an automated process, despite minor and naïve attempts to make it one
[56], [57]. Knowledge is a complex set of explanations related to different systems that is integrated
dynamically by networks of epistemic agents (human, still) who work with AI tools.
Machines could postulate their own models, true, but the mechanisms to verify or refine them
would not go beyond those previously used by humans: data do not
express some pure nature by themselves, but offer different system properties that need to be classified in
order to obtain knowledge. And this obtaining is somehow a creation based on the epistemic and
bodily situatedness of the system.
3. Extending bad and/or good human cognitive skills through DL.
It is beyond any doubt that DL is contributing to improving knowledge in several areas, some
of them very difficult to interpret because of the nature of the obtained data, like neuroscience [58].
These advances are expanding the frontiers of verifiable knowledge beyond classic human
standards. But even in that sense, they are still explainable. In any case, it is humans who feed DL
systems with scientific goals, provide data (from which to learn patterns), and define quantitative
metrics (in order to know how close the system is getting to success). At the same time, are we sure that it is
not our biased way of dealing with cognitive processes that allows us to be creative?
For that reason, some attempts to reintroduce human biased reasoning into machine learning are
being explored [59]. This re-biasing [60] even includes replicating emotion-like reasoning mechanisms [61],
[62].
My suggestion is that, after great achievements following classic formal algorithmic approaches,
it is now time for DL practitioners to expand their horizons by looking into the great power of cognitive biases.
For example, machine learning models with human cognitive biases are already capable of
learning from small and biased datasets [63]. This process recalls the role of Student's test in relation
to frequentist ideas, which always demanded large datasets until the creation of the t-test, but now in the
context of machine learning.
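That small-sample logic can be made concrete. Below is a minimal sketch of Student's one-sample t statistic in plain Python; the measurements and the null-hypothesis mean mu0 are invented for illustration:

```python
# One-sample Student's t statistic: distance of a small sample's mean
# from a hypothesized mean mu0, in units of estimated standard error.
import math
import statistics

def t_statistic(sample, mu0):
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # estimated standard error
    return (statistics.mean(sample) - mu0) / se

sample = [5.1, 4.9, 5.3, 5.5, 4.8]  # just five observations
print(round(t_statistic(sample, mu0=4.5), 2))  # → 4.84
```

With only five data points the statistic is still well defined; the price is paid in the heavier-tailed t distribution used to judge it, which is exactly the move that freed small-sample inference from the demand for large datasets.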
In [63] the authors developed a method to reduce the inferential gap between human beings
and machines by utilizing cognitive biases. They implemented a human cognitive model in
machine learning algorithms and compared its performance with the currently most popular
methods: naïve Bayes, support vector machines, neural networks, logistic regression, and random
forests. This could even make one-shot learning systems possible [64]. Approximate computing can
boost the potential of DL, reducing the computational demands of the systems as well as adding
new heuristic approaches to information analysis [35], [65]–[67].
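One family of approximate-computing techniques can be sketched in a few lines: uniform low-bit quantization of network weights, which trades a bounded numerical error for much cheaper storage and arithmetic. The weight values below are invented for illustration:

```python
# Uniform 8-bit quantization: map floats onto an integer grid over
# [min, max]; dequantization recovers each weight up to one grid step.

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = [round((w - lo) / scale) for w in weights]  # integers in [0, levels]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

w = [0.12, -0.53, 0.99, 0.001, -0.27]        # toy "weights"
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(max_err < scale)  # reconstruction error stays below one grid step
```

The bounded, predictable error is what makes such approximations acceptable heuristics rather than mere corruption of the computation.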
Finally, a completely different but also important type of problem is how to reduce the
biased datasets or heuristics we provide to our DL systems [68], as well as how to control the biases
that prevent us from interpreting DL results properly [69]. Obviously, if there is any malicious intent
related to such bias, it must also be controlled [70].
4. Causality in DL: the epidemiological case study
Several attempts have been made to allow causal models in DL, like [71], which builds a
Structural Causal Model (SCM) as an abstraction over a specific aspect of a CNN and formulates a
method to quantitatively rank the filters of a convolution layer according to their counterfactual
importance, or the Temporal Causal Discovery Framework (TCDF), a deep learning
framework that learns a causal graph structure by discovering causal relationships in observational
time series data [72]. But my attempt here will be twofold: (1) first, to consider the value of
"causal data" for epistemic decisions in epidemiology; and (2) second, to look at how DL could fit, or
not, with those causal claims in the epidemiological field.
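The counterfactual machinery such models build on can be shown with a toy structural causal model. The equations below (X := U_X, Y := 2X + U_Y) are invented for illustration and are not the models of [71] or [72]; the sketch runs Pearl's three-step recipe of abduction, action, and prediction:

```python
# Toy SCM: X := U_x, Y := 2*X + U_y.  A counterfactual query keeps the
# exogenous noise fixed while intervening on X.

def abduct(x_obs, y_obs):
    """Abduction: recover the noise term consistent with what we observed."""
    return y_obs - 2 * x_obs          # U_y

def counterfactual_y(x_obs, y_obs, x_new):
    """Action + prediction: set X to x_new, keep the abducted noise."""
    u_y = abduct(x_obs, y_obs)
    return 2 * x_new + u_y

# Observed X = 1, Y = 3.  "Had X been 2, Y would have been..."
print(counterfactual_y(1, 3, 2))  # → 5
```

Note how the answer depends entirely on the assumed structural equations: a different but observationally equivalent model would return a different counterfactual, which is the underdetermination worry raised above.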
4.1. Does causality affect epidemiological debates at all?
According to the field's reference handbook [73], MacMahon and Pugh [74] created one of the most
frequently used definitions of epidemiology: "Epidemiology is the study of the distribution and
determinants of disease frequency in man". Note the absence of the term 'causality' and, instead, the
use of 'determinant'. This is the result of the classic reservations expressed by Hill in his paper of 1965:
"I have no wish, nor the skill, to embark upon philosophical discussion of the meaning of 'causation'. The 'cause'
of illness may be immediate and direct; it may be remote and indirect underlying the observed association. But
with the aims of occupational, and almost synonymous preventive, medicine in mind the decisive question is
whether the frequency of the undesirable event B will be influenced by a change in the environmental feature A.
How such a change exerts that influence may call for a great deal of research. However, before deducing
'causation' and taking action we shall not invariably have to sit around awaiting the results of the research. The
whole chain may have to be unraveled or a few links may suffice. It will depend upon circumstances." After
this philosophical epistemic positioning, Hill enumerated his 9 general qualitative association factors,
commonly called "Hill's criteria" or even, which is frankly sardonic, "Hill's Criteria of
Causation". Because of such epistemic reluctance, epidemiologists abandoned the term "causation" and
embraced other terms like "determinant" [75], "determining conditions" [76], or "active agents of
change" [77]. For that reason, recent research has called for a pluralistic approach to such
complex analysis [78]. As a consequence, we can see that even in a very narrow, specialized field like
epidemiology the meaning of "cause" is somewhat fuzzy. Once medical evidence showed that
causality was not always mono-causality [22], [79] but, instead, the result of the sum of several
causes/factors/determinants, the necessity of clarifying multi-causality emerged as a first-line
epistemic problem. It was explained as a "web of causation" [80]. Debates about the logic of
causation, including some Popperian interpretations, were held over two decades [81], [82]. Pearl himself
provided a graphic way to adapt human cognitive visual skills to this new epidemiological
multi-causal reasoning [83]; do-calculus [84], [85] and directed acyclic graphs (DAGs) are likewise
becoming fundamental tools [86], [87]. DAGs are commonly related to randomized controlled trials
(RCTs) for assessing causality. But RCTs are not a gold standard beyond any criticism [88], [89]:
as [90] affirmed, RCTs are often flawed, mostly useless, yet clearly indispensable (and it is not so
uncommon that the same author argues against the classic p-value threshold, suggesting a new one of 0.005 [91]). Krauss
has even defended the impossibility of running RCTs without biases [92], although some authors
argue that DAGs can reduce RCT biases [93].
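The adjustment logic that makes DAGs useful can be reduced to a few lines. Assuming a toy graph with a confounder Z pointing at both exposure X and outcome Y (all probabilities below are invented, not epidemiological data), the backdoor formula P(Y | do(X)) = Σ_z P(Y | X, Z = z) P(Z = z) yields the interventional, rather than merely observational, effect:

```python
# Backdoor adjustment on a toy DAG  Z -> X,  Z -> Y,  X -> Y.
p_z = {0: 0.5, 1: 0.5}                    # P(Z): hypothetical confounder
p_y1 = {(1, 0): 0.2, (1, 1): 0.8,         # P(Y=1 | X, Z), invented numbers
        (0, 0): 0.1, (0, 1): 0.7}

def p_y1_do_x(x):
    """Interventional probability P(Y=1 | do(X=x)) via the backdoor formula."""
    return sum(p_y1[(x, z)] * p_z[z] for z in p_z)

ace = p_y1_do_x(1) - p_y1_do_x(0)   # average causal effect of X on Y
print(round(ace, 2))  # → 0.1
```

This is the computation an RCT performs physically, by randomizing X and thereby cutting the arrow from Z; the DAG version performs it arithmetically, but only if the assumed graph is right.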
But there is a real case that shows us a good example of the weight of causality in real
scientific debates: the debates about the relation between smoking and lung cancer. As
early as 1950, the causal connections between smoking and lung cancer were explained [94]. But far
from being accepted, these results were contested by the tobacco industry with ostensibly scientific
counterarguments. Perhaps the most famous generator of silly counterarguments was R. A. Fisher, the most
important frequentist researcher of the 20th century. In 1958 he published a paper in Nature [95]
in which he affirmed that all connections between tobacco smoking and lung cancer were due to a
false correlation. Even more: from the same data it could be inferred that "smoking cigarettes was a
cause of considerable prophylactic value in preventing the disease, for the practice of inhaling is rarer
among patients with cancer of the lung than with others" (p. 596). He said similarly silly things in
another piece in the same highly rated journal [96]. He even affirmed that Hill was trying to plant
fear in good citizens through propaganda, misleadingly feeding a thread of
overconfidence. The point is: did Fisher have real epistemic reasons for not accepting the huge
amount of existing causal evidence against tobacco smoking? No. And this is not affirming the
consequent with data that was unavailable during Fisher's lifetime: he had strong causal
evidence, but he did not want to accept it. Still today, there is evidence showing how
causal connections are biased by field interests, again with tobacco or the new e-cigarettes [97]–[99].
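Fisher's "hidden common cause" objection is easy to reproduce in silico. In the hypothetical simulation below (all probabilities invented, no real epidemiological data), a latent factor G raises both the smoking rate and the cancer rate, while smoking itself has no causal effect; the observational association nevertheless comes out strongly positive:

```python
# A latent common cause G produces correlation without causation.
import random

random.seed(0)
n = 100_000
smokers = cancer = both = 0
for _ in range(n):
    g = random.random() < 0.3                   # hidden predisposition
    s = random.random() < (0.8 if g else 0.1)   # G raises smoking
    c = random.random() < (0.5 if g else 0.05)  # G raises cancer; s unused
    smokers += s
    cancer += c
    both += s and c

p_c_given_s = both / smokers                    # risk among smokers
p_c = cancer / n                                # baseline risk
print(p_c_given_s > p_c + 0.15)  # smokers look far more at risk
```

Randomizing the "treatment" s instead of observing it would break the G → s arrow, which is exactly what an RCT buys: observational data alone cannot rule this structure out, although by 1958 converging evidence of other kinds already had.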
As a section conclusion, it can be affirmed that causality has strong specialized meanings and can
be studied with a broad range of conceptual tools. The real case of the tobacco controversies
illustrates this over a long span of time.
4.2. Can DL be of some utility for the epidemiological debates on causality?
The second part of my argumentation will try to elucidate whether DL can be useful for
resolving debates about causality in epidemiological controversies. The answer is easy and clear:
yes. But it is directly tied to a specific idea of causality, as well as of demonstration. For example,
a machine learning approach already exists to enable evidence-based oncology practice [100]. Thus,
digital epidemiology is a robust update of previous epidemiological studies [101], [102]. The
possibility of finding new causal patterns using bigger sets of data is surely the best advantage of
using DL for epidemiological purposes [103]. Besides, such data are the result of integrating
multimodal sources, like visual data combined with classic informational ones [104], and the future, with
more and more data capture devices, could integrate smell, taste, the movements of agents, and so on. Deep
convolutional neural networks can help us, for example, to estimate environmental exposures using
images and other complementary data sources such as cell phone mobility and social media
information. Combining fields such as computer vision and natural language processing, DL can
provide a way to explore interactions still opaque to us [105], [106].
Despite the possible benefits, it is also true that the use of DL in epidemiological analysis has
a dangerous potential for unethical use, as well as formal problems [107], [108]. But again, it is the
involved expert agents who will evaluate such difficulties as either problems to be solved or huge
obstacles to the advancement of the field.
5. Conclusion: causal evidence is not a result, but a process.
301
Author has made an overall reply to main critics to Deep Learning (and machine learning) as a
302
reliable epistemic tool. Basic arguments of Judea Pearl have been criticized using real examples of
303
DL, but also making a more general epistemic and philosophical analysis. The systemic nature of
304
knowledge, also situated and even biased, has been pointed as the fundamental aspect of a new
305
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 8 July 2019 doi:10.20944/preprints201907.0110.v1
8. algorithmic era for the advance of knowledge using DL tools. If formal systems have structural
306
dead-ends like incompleteness, the bioinspired path to machine learning and DL becomes a reliable
307
way [109], [110] to improve, one more time, our algorithmic approach to nature. Finally, thanks to
308
the short case study of epidemiological debates on causality and their use of DL tools, we’ve seen a
309
real implementation case of such epistemic mechanism. The advantages of DL for multi-causal
310
analysis using multi-modal data have been explored as well as some possible critics.
311
Funding: This work has been funded by the Ministry of Science, Innovation and Universities within the State
Subprogram of Knowledge Generation through the research project FFI2017-85711-P, Epistemic innovation: the
case of cognitive sciences. This work is also part of the consolidated research network "Grup d'Estudis
Humanístics de Ciència i Tecnologia" (GEHUCT) ("Humanistic Studies of Science and Technology Research
Group"), recognised and funded by the Generalitat de Catalunya, reference 2017 SGR 568.
Acknowledgments: I thank Mr. Isard Boix for his support throughout this research. Best moments are
those without words, and sometimes this lack of meaningfulness entails unique meanings.
Conflicts of Interest: The author declares no conflict of interest.
References
[1] J. W. Heisig, Philosophers of Nothingness: An Essay on the Kyoto School. University of Hawai'i Press, 2001.
[2] J. Vallverdú, "The Situated Nature of Informational Ontologies," in Philosophy and Methodology of Information, World Scientific, 2019, pp. 353–365.
[3] M. J. Schroeder and J. Vallverdú, "Situated phenomenology and biological systems: Eastern and Western synthesis," Prog. Biophys. Mol. Biol., vol. 119, no. 3, pp. 530–537, Dec. 2015.
[4] J. Vallverdú and M. J. Schroeder, "Lessons from culturally contrasted alternative methods of inquiry and styles of comprehension for the new foundations in the study of life," Prog. Biophys. Mol. Biol., 2017.
[5] A. Carstensen, J. Zhang, G. D. Heyman, G. Fu, K. Lee, and C. M. Walker, "Context shapes early diversity in abstract thought," Proc. Natl. Acad. Sci., pp. 1–6, Jun. 2019.
[6] A. Norenzayan and R. E. Nisbett, "Culture and causal cognition," Curr. Dir. Psychol. Sci., vol. 9, no. 4, pp. 132–135, 2000.
[7] R. E. Nisbett, The Geography of Thought: How Asians and Westerners Think Differently…and Why. New York: Free Press, 2003.
[8] J. Vallverdú, Bayesians versus Frequentists: A Philosophical Debate on Statistical Reasoning. Springer, 2016.
[9] D. Casacuberta and J. Vallverdú, "E-science and the data deluge," Philos. Psychol., vol. 27, no. 1, pp. 126–140, 2014.
[10] J. Vallverdú Segura, "Computational epistemology and e-science: A new way of thinking," Minds Mach., vol. 19, no. 4, pp. 557–567, 2009.
[11] J. Pearl, "Theoretical Impediments to Machine Learning," arXiv preprint, 2018.
[12] J. Pearl and D. Mackenzie, The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.
[13] S. Hillary and S. Joshua, "Garbage in, garbage out (How purportedly great ML models can be screwed up by bad data)," in Proceedings of Blackhat 2017, 2017.
[14] I. Askira Gelman, "GIGO or not GIGO," J. Data Inf. Qual., 2011.
[15] J. Moural, "The Chinese room argument," in John Searle, 2003.
[16] H. L. Dreyfus, What Computers Can't Do: A Critique of Artificial Reason. 1972.
[17] H. L. Dreyfus, S. E. Dreyfus, and L. A. Zadeh, "Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer," IEEE Expert, vol. 2, no. 2, pp. 237–264, 1987.
[18] W. N. Price, "Big data and black-box medical algorithms," Sci. Transl. Med., 2018.
[19] J. Vallverdú, "Patenting logic, mathematics or logarithms? The case of computer-assisted proofs," Recent Patents Comput. Sci., vol. 4, no. 1, pp. 66–70, 2011.
[20] F. Gagliardi, "The necessity of machine learning and epistemology in the development of categorization theories: A case study in prototype-exemplar debate," in Lecture Notes in Computer Science, 2009.
[21] T. Everitt, R. Kumar, V. Krakovna, and S. Legg, "Modeling AGI Safety Frameworks with Causal Influence Diagrams," Jun. 2019.
[22] M. Susser and E. Susser, "Choosing a future for epidemiology: II. From black box to Chinese boxes and eco-epidemiology," Am. J. Public Health, vol. 86, no. 5, pp. 674–677, 1996.
[23] A. Morabia, "Hume, Mill, Hill, and the sui generis epidemiologic approach to causal inference," Am. J. Epidemiol., vol. 178, no. 10, pp. 1526–1532, Nov. 2013.
[24] T. C. Hales, "Historical overview of the Kepler conjecture," Discrete Comput. Geom., vol. 36, no. 1, pp. 5–20, 2006.
[25] N. Robertson, D. P. Sanders, P. D. Seymour, and R. Thomas, "The Four-Colour Theorem," J. Comb. Theory, Ser. B, vol. 70, pp. 2–44, 1997.
[26] Y. Gal, "Uncertainty in Deep Learning," PhD thesis, 2017.
[27] A. G. Kendall, "Geometry and Uncertainty in Deep Learning for Computer Vision," 2017.
[28] A. Kendall and Y. Gal, "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?," Mar. 2017.
[29] J. Piironen and A. Vehtari, "Comparison of Bayesian predictive methods for model selection," Stat. Comput., 2017.
[30] N. G. Polson and V. Sokolov, "Deep learning: A Bayesian perspective," Bayesian Anal., 2017.
[31] Y. Bengio and Y. LeCun, "Scaling Learning Algorithms towards AI," in Large-Scale Kernel Machines, New York, 2007.
[32] J. P. Cunningham and B. M. Yu, "Dimensionality reduction for large-scale neural recordings," Nat. Neurosci., vol. 17, no. 11, pp. 1500–1509, 2014.
[33] S. Bengio and Y. Bengio, "Taking on the curse of dimensionality in joint distributions using neural networks," IEEE Trans. Neural Networks, 2000.
[34] S. Ben-David, P. Hrubeš, S. Moran, A. Shpilka, and A. Yehudayoff, "Learnability can be undecidable," Nat. Mach. Intell., 2019.
[35] C. B. Anagnostopoulos, Y. Ntarladimas, and S. Hadjiefthymiades, "Situational computing: An innovative architecture with imprecise reasoning," J. Syst. Softw., vol. 80, no. 12, pp. 1993–2014, 2007.
[36] K. Friston, "Functional integration and inference in the brain," Prog. Neurobiol., vol. 68, no. 2, pp. 113–143, 2002.
[37] S. Schirra, "Approximate decision algorithms for approximate congruence," Inf. Process. Lett., vol. 43, no. 1, pp. 29–34, 1992.
[38] L. Magnani, "AlphaGo, Locked Strategies, and Eco-Cognitive Openness," Philosophies, 2019.
[39] A. A. Baird and J. A. Fugelsang, "The emergence of consequential thought: Evidence from neuroscience," Philos. Trans. R. Soc. B Biol. Sci., 2004.
[40] T. Gomila, Verbal Minds: Language and the Architecture of Cognition. Elsevier Science, 2011.
[41] J. Y. Halpern, Reasoning About Uncertainty. 2003.
[42] N. Van Hoeck, "Cognitive neuroscience of human counterfactual reasoning," Front. Hum. Neurosci., 2015.
[43] J. Pearl, "The algorithmization of counterfactuals," Ann. Math. Artif. Intell., 2011.
[44] D. Tulodziecki, "Underdetermination," in The Routledge Handbook of Scientific Realism, 2017.
[45] L. Laudan, "Demystifying Underdetermination," in Philosophy of Science: The Central Issues, 1990.
[46] D. Turner, "Local Underdetermination in Historical Science," Philos. Sci., 2005.
[47] D. Lewis, "Counterfactual Dependence and Time's Arrow," Noûs, 2006.
[48] M. Ramachandran, "A counterfactual analysis of causation," Mind, 2004.
[49] J. Vallverdú and V. C. Müller, Blended Cognition: The Robotic Challenge. Springer, 2019.
[50] A. Rzhetsky, "The Big Mechanism program: Changing how science is done," in CEUR Workshop Proceedings, 2016.
[51] A. Wodecki et al., "Explainable Artificial Intelligence (XAI): The Need for Explainable AI," 2017.
[52] T. Ha, S. Lee, and S. Kim, "Designing Explainability of an Artificial Intelligence System," 2018.
[53] R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology. 2005.
[54] R. V. Yampolskiy, "Leakproofing the Singularity: Artificial Intelligence Confinement Problem," J. Conscious. Stud., vol. 19, no. 1–2, pp. 194–214, 2012.
[55] J. Vallverdú, "The Emotional Nature of Post-Cognitive Singularities," V. Callaghan, J. Miller, R. Yampolskiy, and S. Armstrong, Eds. Springer Berlin Heidelberg, 2017, pp. 193–208.
[56] R. D. King et al., "The automation of science," Science, 2009.
[57] A. Sparkes et al., "Towards Robot Scientists for autonomous scientific discovery," Automated Experimentation, 2010.
[58] A. Iqbal, R. Khan, and T. Karayannis, "Developing a brain atlas through deep learning," Nat. Mach. Intell., vol. 1, no. 6, pp. 277–287, Jun. 2019.
[59] D. D. Bourgin, J. C. Peterson, D. Reichman, T. L. Griffiths, and S. J. Russell, "Cognitive Model Priors for Predicting Human Decisions."
[60] J. Vallverdú, "Re-embodying cognition with the same 'biases'?," Int. J. Eng. Futur. Technol., vol. 15, no. 1, pp. 23–31, 2018.
[61] A. Leukhin, M. Talanov, J. Vallverdú, and F. Gafarov, "Bio-plausible simulation of three monoamine systems to replicate emotional phenomena in a machine," Biol. Inspired Cogn. Archit., 2018.
[62] J. Vallverdú, M. Talanov, S. Distefano, M. Mazzara, A. Tchitchigin, and I. Nurgaliev, "A cognitive architecture for the implementation of emotions in computing systems," Biol. Inspired Cogn. Archit., vol. 15, pp. 34–40, 2016.
[63] H. Taniguchi, H. Sato, and T. Shirakawa, "A machine learning model with human cognitive biases capable of learning from small and biased datasets," Sci. Rep., 2018.
[64] B. M. Lake, R. R. Salakhutdinov, and J. B. Tenenbaum, "One-shot learning by inverting a compositional causal process," in Advances in Neural Information Processing Systems 26 (NIPS 2013), 2013.
[65] Q. Xu, T. Mytkowicz, and N. S. Kim, "Approximate Computing: A Survey," IEEE Des. Test, vol. 33, no. 1, pp. 8–22, 2016.
[66] C.-Y. Chen, J. Choi, K. Gopalakrishnan, V. Srinivasan, and S. Venkataramani, "Exploiting approximate computing for deep learning acceleration," in 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2018, pp. 821–826.
[67] J. Choi and S. Venkataramani, "Approximate Computing Techniques for Deep Neural Networks," in Approximate Circuits, Cham: Springer International Publishing, 2019, pp. 307–329.
[68] M. A. Gianfrancesco, S. Tamang, J. Yazdany, and G. Schmajuk, "Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data," JAMA Intern. Med., 2018.
[69] T. Kliegr, Š. Bahník, and J. Fürnkranz, "A review of possible effects of cognitive biases on interpretation of rule-based machine learning models," Apr. 2018.
[70] B. Shneiderman, "Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight," Proc. Natl. Acad. Sci., 2016.
[71] T. Narendra, A. Sankaran, D. Vijaykeerthy, and S. Mani, "Explaining Deep Learning Models using Causal Inference," Nov. 2018.
[72] M. Nauta, D. Bucur, and C. Seifert, "Causal Discovery with Attention-Based Convolutional Neural Networks," Mach. Learn. Knowl. Extr., vol. 1, no. 1, pp. 312–340, Jan. 2019.
[73] W. Ahrens and I. Pigeot, Handbook of Epidemiology, 2nd ed. 2014.
[74] B. MacMahon and T. F. Pugh, Epidemiology: Principles and Methods. Little, Brown, 1970.
[75] R. Lipton and T. Ødegaard, "Causal thinking and causal language in epidemiology: it's in the details," Epidemiol. Perspect. Innov., vol. 2, p. 8, 2005.
[76] P. Vineis and D. Kriebel, "Causal models in epidemiology: Past inheritance and genetic future," Environ. Health, 2006.
[77] J. S. Kaufman and C. Poole, "Looking Back on 'Causal Thinking in the Health Sciences,'" Annu. Rev. Public Health, 2002.
[78] J. P. Vandenbroucke, A. Broadbent, and N. Pearce, "Causality and causal inference in epidemiology: The need for a pluralistic approach," Int. J. Epidemiol., vol. 45, no. 6, pp. 1776–1786, 2016.
[79] M. Susser, Causal Thinking in the Health Sciences: Concepts and Strategies of Epidemiology. 1973.
[80] N. Krieger, "Epidemiology and the web of causation: Has anyone seen the spider?," Soc. Sci. Med., vol. 39, no. 7, pp. 887–903, 1994.
[81] M. Susser, "What is a cause and how do we know one? A grammar for pragmatic epidemiology," Am. J. Epidemiol., 1991.
[82] C. Buck, "Popper's philosophy for epidemiologists," Int. J. Epidemiol., 1975.
[83] S. Greenland, J. Pearl, and J. M. Robins, "Causal diagrams for epidemiologic research," Epidemiology, 1999.
[84] D. Gillies, "Judea Pearl, Causality: Models, Reasoning, and Inference, Cambridge: Cambridge University Press, 2000," Br. J. Philos. Sci., 2001.
[85] R. R. Tucci, "Introduction to Judea Pearl's Do-Calculus," Apr. 2013.
[86] S. Greenland, J. Pearl, and J. M. Robins, "Causal diagrams for epidemiologic research," Epidemiology, vol. 10, no. 1, pp. 37–48, 1999.
[87] T. J. VanderWeele and J. M. Robins, "Directed acyclic graphs, sufficient causes, and the properties of conditioning on a common effect," Am. J. Epidemiol., 2007.
[88] C. Boudreau, E. C. Guinan, K. R. Lakhani, and C. Riedl, "The Novelty Paradox & Bias for Normal Science: Evidence from Randomized Medical Grant Proposal Evaluations," 2016.
[89] R. M. Daniel, B. L. De Stavola, and S. Vansteelandt, "Commentary: The formal approach to quantitative causal inference in epidemiology: Misguided or misrepresented?," Int. J. Epidemiol., vol. 45, no. 6, pp. 1817–1829, 2016.
[90] J. P. A. Ioannidis, "Randomized controlled trials: Often flawed, mostly useless, clearly indispensable: A commentary on Deaton and Cartwright," Soc. Sci. Med., vol. 210, pp. 53–56, Aug. 2018.
[91] J. P. A. Ioannidis, "The proposal to lower P value thresholds to .005," JAMA, 2018.
[92] A. Krauss, "Why all randomised controlled trials produce biased results," Ann. Med., vol. 50, no. 4, pp. 312–322, May 2018.
[93] I. Shrier and R. W. Platt, "Reducing bias through directed acyclic graphs," BMC Med. Res. Methodol., 2008.
[94] R. Doll and A. B. Hill, "Smoking and Carcinoma of the Lung," Br. Med. J., vol. 2, no. 4682, pp. 739–748, Sep. 1950.
[95] R. A. Fisher, "Cancer and Smoking," Nature, vol. 182, no. 4635, p. 596, Aug. 1958.
[96] R. A. Fisher, "Lung Cancer and Cigarettes?," Nature, vol. 182, no. 4628, p. 108, Jul. 1958.
[97] B. Wynne, "When doubt becomes a weapon," Nature, vol. 466, no. 7305, pp. 441–442, 2010.
[98] C. Pisinger, N. Godtfredsen, and A. M. Bender, "A conflict of interest is strongly associated with tobacco industry–favourable results, indicating no harm of e-cigarettes," Prev. Med., vol. 119, pp. 124–131, Feb. 2019.
[99] T. Grüning, A. B. Gilmore, and M. McKee, "Tobacco industry influence on science and scientists in Germany," Am. J. Public Health, vol. 96, no. 1, pp. 20–32, Jan. 2006.
[100] N. Ramarajan, R. A. Badwe, P. Perry, G. Srivastava, N. S. Nair, and S. Gupta, "A machine learning approach to enable evidence based oncology practice: Ranking grade and applicability of RCTs to individual patients," J. Clin. Oncol., vol. 34, no. 15_suppl, pp. e18165–e18165, May 2016.
[101] M. Salathé, "Digital epidemiology: what is it, and where is it going?," Life Sci. Soc. Policy, vol. 14, no. 1, p. 1, Dec. 2018.
[102] E. Velasco, "Disease detection, epidemiology and outbreak response: the digital future of public health practice," Life Sci. Soc. Policy, vol. 14, no. 1, p. 7, Dec. 2018.
[103] C. Bellinger, M. S. Mohomed Jabbar, O. Zaïane, and A. Osornio-Vargas, "A systematic review of data mining and machine learning for air pollution epidemiology," BMC Public Health, 2017.
[104] S. Weichenthal, M. Hatzopoulou, and M. Brauer, "A picture tells a thousand…exposures: Opportunities and challenges of deep learning image analyses in exposure science and environmental epidemiology," Environ. Int., 2019.
[105] G. Eraslan, Ž. Avsec, J. Gagneur, and F. J. Theis, "Deep learning: new computational modelling techniques for genomics," Nat. Rev. Genet., vol. 20, no. 7, pp. 389–403, Jul. 2019.
[106] M. J. Cardoso et al., Eds., Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, vol. 10553. Cham: Springer International Publishing, 2017.
[107] C. Kreatsoulas and S. V. Subramanian, "Machine learning in social epidemiology: Learning from experience," SSM Popul. Health, vol. 4, pp. 347–349, Apr. 2018.
[108] "In the age of machine learning randomized controlled trials are unethical." [Online]. Available: https://towardsdatascience.com/in-the-age-of-machine-learning-randomized-controlled-trials-are-unethical-74acc05724af. [Accessed: 03-Jul-2019].
[109] T. F. Drumond, T. Viéville, and F. Alexandre, "Bio-inspired Analysis of Deep Learning on Not-So-Big Data Using Data-Prototypes," Front. Comput. Neurosci., vol. 12, p. 100, Jan. 2019.
[110] K. Charalampous and A. Gasteratos, "Bio-inspired deep learning model for object recognition," in 2013 IEEE International Conference on Imaging Systems and Techniques (IST), 2013, pp. 51–55.