GridCure: Distributed and Renewable Energy Strategy Panel (Panel Proposal), by Emily Basileo
This presentation provides background information for a panel proposal.
This panel will explore the successes and challenges that energy markets (United States, Brazil, Jordan) have experienced when implementing a distributed energy resource strategy, and the technologies that power them.
Topics for discussion include:
- Factors that led to moving to renewable and distributed energy
- Strategies for the implementation of a comprehensive distributed energy system
- Questions utilities are still trying to answer (post-grid deployment)
"A brave, new business world."
It’s difficult to imagine any landscape that’s changed more than business-to-business. The last five years have seen almost all the rules re-written, re-worked or simply revoked. Social platforms. Mobile connectivity. Niche business media. Content as a sales source. Targeting business people as people. They're just the tip of a moving landscape. In the pages of 'Engaging a business audience of One,' the OgilvyOne thought-leaders examine each of these game-changers.
OgilvyOne London's Digital Labs presents a comprehensive report on this year's Consumer Electronics Show in Las Vegas. For the third year in a row, the London Labs attended the show with an aim to scan, scope out and bring back the latest and most exciting technologies and trends that will have the most impact in the ever-expanding business and consumer technology market. These findings help inform the predictions we make for our clients about potential future commercial applications, and the potential use of those trends within the marketing and communication space.
Live Webcast: Reaching Today's Prospective Students (LinkedIn)
It has become increasingly challenging for higher education marketers to convert prospects into enrolled students. In fact, nearly 60% of admission directors did not hit their 2015 enrollment goals.*
Yet, thanks to the widespread adoption of social media and advances in marketing technology, marketers have more tools than ever before to deliver relevant, targeted messages to the prospects who are most likely to be interested in enrolling.
Join our webinar as we present new research from LinkedIn revealing the keys to influencing prospective students with relevant content marketing. Register today, and you'll learn:
- Who the key influencers are in the higher education decision process
- What types of content prospects are most interested in at each stage of the decision journey
- Best practices for developing an effective always-on content marketing strategy with Sponsored Updates and InMail
D. Mayo: Philosophical Interventions in the Statistics Wars (jemille6)
ABSTRACT: While statistics has a long history of passionate philosophical controversy, the last decade especially cries out for philosophical illumination. Misuses of statistics, Big Data dredging, and P-hacking make it easy to find statistically significant, but spurious, effects. This obstructs a test's ability to control the probability of erroneously inferring effects–i.e., to control error probabilities. Disagreements about statistical reforms reflect philosophical disagreements about the nature of statistical inference–including whether error probability control even matters! I describe my interventions in statistics in relation to three events. (1) In 2016 the American Statistical Association (ASA) met to craft principles for avoiding misinterpreting P-values. (2) In 2017, a "megateam" (including philosophers of science) proposed "redefining statistical significance," replacing the common threshold of P ≤ .05 with P ≤ .005. (3) In 2019, an editorial in the main ASA journal called for abandoning all P-value thresholds, and even the words "significant/significance".
A word on each. (1) Invited to be a "philosophical observer" at their meeting, I found the major issues were conceptual. P-values measure how incompatible data are with what is expected under a hypothesis that there is no genuine effect: the smaller the P-value, the greater the indication of incompatibility. The ASA list of familiar misinterpretations – P-values are not posterior probabilities, statistical significance is not substantive importance, no evidence against a hypothesis need not be evidence for it – should not, I argue, be the basis for replacing tests with methods less able to assess and control erroneous interpretations of data (Mayo 2016, 2019). (2) The "redefine statistical significance" movement appraises P-values from the perspective of a very different quantity: a comparative Bayes factor. Failing to recognize how contrasting approaches measure different things, disputants often talk past each other (Mayo 2018). (3) To ban P-value thresholds, even to distinguish terrible from warranted evidence, I say, is a mistake (2019). It will not eradicate P-hacking, but it will make it harder to hold P-hackers accountable. A 2020 ASA Task Force on significance testing has just been announced. (I would like to think my blog errorstatistics.com helped.)
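As a rough illustration of what a P-value measures (my sketch, not Mayo's own example), here is a Monte Carlo estimate of a one-sided P-value under a fair-coin null hypothesis; the counts and sample sizes are made up for illustration:

```python
import random

def simulated_p_value(observed_heads, n_flips, n_sims=20_000, seed=0):
    """Estimate the one-sided P-value of seeing at least `observed_heads`
    heads in `n_flips` flips of a fair coin, by simulating the null."""
    rng = random.Random(seed)
    at_least_as_extreme = 0
    for _ in range(n_sims):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        if heads >= observed_heads:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_sims

# 60 heads in 100 flips is moderately incompatible with "no genuine effect";
# 50 heads in 100 flips is entirely unsurprising under the null.
print(f"P(>=60 heads): {simulated_p_value(60, 100):.3f}")
print(f"P(>=50 heads): {simulated_p_value(50, 100):.3f}")
```

The smaller the estimated P-value, the greater the indicated incompatibility between the data and the no-effect hypothesis, matching the definition in the abstract.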
To enter the fray between rival statistical approaches, it helps to have a principle applicable to all accounts. There's poor evidence for a claim if little, if anything, has been done to find it flawed even if it is. This forms a basic requirement for evidence I call the severity requirement. A claim passes with severity only if it is subjected to and passes a test that probably would have found it flawed, if it were. It stems from Popper, though he never adequately cashed it out. A variant is the frequentist principle of evidence developed with Sir David Cox (Mayo and Cox 20
D. Mayo: Philosophy of Statistics & the Replication Crisis in Science (jemille6)
D. Mayo discusses various disputes, notably the replication crisis in science, in the context of her just-released book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.
Abstract: Mounting failures of replication in the social and biological sciences give a practical spin to statistical foundations in the form of the question: How can we attain reliability when methods make illicit cherry-picking and significance seeking so easy? Researchers, professional societies, and journals are increasingly getting serious about methodological reforms to restore scientific integrity – some are quite welcome (e.g., pre-registration), while others are quite radical. The American Statistical Association convened members from differing tribes of frequentists, Bayesians, and likelihoodists to codify misuses of P-values. Largely overlooked are the philosophical presuppositions of both criticisms and proposed reforms. Paradoxically, alternative replacement methods may enable rather than reveal illicit inferences due to cherry-picking, multiple testing, and other biasing selection effects. Crowd-sourced reproducibility research in psychology is helping to change the reward structure but has its own shortcomings. Focusing on purely statistical considerations, it tends to overlook problems with artificial experiments. Without a better understanding of the philosophical issues, we can expect the latest reforms to fail.
Severe Testing: The Key to Error Correction (jemille6)
D. G. Mayo's slides for her presentation given March 17, 2017 at the Boston Colloquium for Philosophy of Science, Alfred I. Taub forum: "Understanding Reproducibility & Error Correction in Science"
On Severity, the Weight of Evidence, and the Relationship Between the Two (jemille6)
Margherita Harris
Visiting fellow in the Department of Philosophy, Logic and Scientific Method at the London School of Economics and Political Science.
ABSTRACT: According to the severe tester, one is justified in declaring to have evidence in support of a hypothesis just in case the hypothesis in question has passed a severe test, one that it would be very unlikely to pass so well if the hypothesis were false. Deborah Mayo (2018) calls this the strong severity principle. The Bayesian, however, can declare to have evidence for a hypothesis despite not having done anything to test it severely. The core reason for this has to do with the (infamous) likelihood principle, whose violation is not an option for anyone who subscribes to the Bayesian paradigm. Although the Bayesian is largely unmoved by the incompatibility between the strong severity principle and the likelihood principle, I will argue that the Bayesian's never-ending quest to account for yet another notion, one that is often attributed to Keynes (1921) and that is usually referred to as the weight of evidence, betrays the Bayesian's confidence in the likelihood principle after all. Indeed, I will argue that the weight of evidence and severity may be thought of as two (very different) sides of the same coin: they are two unrelated notions, but what brings them together is the fact that they both make trouble for the likelihood principle, a principle at the core of Bayesian inference. I will relate this conclusion to current debates on how best to conceptualise uncertainty, by the IPCC in particular. I will argue that failure to fully grasp the limitations of an epistemology that envisions the role of probability to be that of quantifying the degree of belief to assign to a hypothesis given the available evidence can be (and has been) detrimental to an adequate communication of uncertainty.
D. G. Mayo: The Replication Crises and its Constructive Role in the Philosoph... (jemille6)
The constructive role of replication crises teaches a lot about: 1) non-fallacious uses of statistical tests, 2) the rationale for the role of probability in tests, and 3) how to reformulate tests.
Today we’ll try to cover a number of things:
1. Learning philosophy/philosophy of statistics
2. Situating the broad issues within philosophy of science
3. Little bit of logic
4. Probability and random variables
The Statistics Wars: Errors and Casualties (jemille6)
ABSTRACT: Mounting failures of replication in the social and biological sciences give a new urgency to critically appraising proposed statistical reforms. While many reforms are welcome (preregistration of experiments, replication, discouraging cookbook uses of statistics), there have been casualties. The philosophical presuppositions behind the meta-research battles remain largely hidden. Too often the statistics wars have become proxy wars between competing tribe leaders, each keen to advance one or another tool or school, rather than build on efforts to do better science. Efforts of replication researchers and open science advocates are diminished when so much attention is centered on repeating hackneyed howlers of statistical significance tests (statistical significance isn’t substantive significance, no evidence against isn’t evidence for), when erroneous understanding of basic statistical terms goes uncorrected, and when bandwagon effects lead to popular reforms that downplay the importance of error probability control. These casualties threaten our ability to hold accountable the “experts,” the agencies, and all the data handlers increasingly exerting power over our lives.
Controversy Over the Significance Test Controversy (jemille6)
Deborah Mayo (Professor of Philosophy, Virginia Tech, Blacksburg, Virginia) in PSA 2016 Symposium: Philosophy of Statistics in the Age of Big Data and Replication Crises
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor... (Levi Shapiro)
Letter from the Congress of the United States regarding antisemitism, sent June 3 to MIT President Sally Kornbluth and MIT Corporation Chair Mark Gorenberg.
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
2024.06.01 Introducing a competency framework for language learning materials ... (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Honest Reviews of Tim Han LMA Course Program.pptx (timhan337)
Personal development courses are widely available today, with each one promising life-changing outcomes. Tim Han’s Life Mastery Achievers (LMA) Course has drawn a lot of interest. In addition to offering my frank assessment of Success Insider’s LMA Course, this piece examines the course’s effects via a variety of Tim Han LMA course reviews and Success Insider comments.
Instructions for Submissions through G-Classroom.pptx (Jheel Barad)
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Unit 8 - Information and Communication Technology (Paper I).pdf (Thiyagu K)
These slides describe the basic concepts of ICT, the basics of email, emerging technologies, and digital initiatives in education. This presentation aligns with the UGC Paper I syllabus.
Biological screening of herbal drugs: introduction and the need for phyto-pharmacological screening; new strategies for evaluating natural products; in vitro evaluation techniques for antioxidant, antimicrobial and anticancer drugs; in vivo evaluation techniques for anti-inflammatory, antiulcer, anticancer, wound-healing, antidiabetic, hepatoprotective, cardioprotective, diuretic and antifertility activity; toxicity studies as per OECD guidelines.
Francesca Gottschalk - How can education support child empowerment.pptx (EduSkills OECD)
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
A Strategic Approach: GenAI in Education (Peter Windle)
Artificial Intelligence (AI) technologies such as Generative AI, image generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to academic integrity, with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, and policies were put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments, leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Why Are Good Theories Good? - Review
1. WHY ARE GOOD THEORIES GOOD?
REFLECTIONS ON EPISTEMIC VALUES, CONFIRMATION, AND FORMAL EPISTEMOLOGY
Sinu G S
Student MICS
Selected Topics in Artificial Intelligence
University of Luxembourg
2. THEME OF PAPER
This paper discusses:
- A comparison of the theory of confirmation and the theory of verisimilitude, or truthlikeness
- The connection between the logic of confirmation and the logic of acceptability
- The connection of confirmation theory with naturalism, intertheoretic reduction and explanation
3. AGENDA OF PRESENTATION
Introduction
Approaches of Confirmation
Problems in Confirmation
Huber's Theory of Confirmation
Confirmation and Truthlikeness
Many Senses of Confirmation
Naturalism and Bayesian Tinkering
Belief Framework for modeling realistic cognitive agents (Research Assistant Systems)
7. APPROACHES OF CONFIRMATION
Inductive Logic
Induction proceeds from the specific case to the general case: "probable inference".
All swans we have seen have been white; therefore all swans are white.
8. INDUCTIVE LOGIC METHOD
An initial observation suggests a hypothesis, and the hypothesis generates a prediction. Experiments and data produce new observations. Do the new observations match the prediction? If no, modify the hypothesis and test again; if yes, the hypothesis is confirmed and becomes "accepted truth".
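The confirm-or-modify cycle on this slide can be sketched as a short loop. All the callables here (predict, run_experiment, revise) are hypothetical stand-ins for real scientific practice, not anything from the original deck:

```python
def inductive_cycle(initial_hypothesis, predict, run_experiment, revise,
                    max_rounds=10):
    """Sketch of the inductive-logic cycle: test a hypothesis against new
    observations, revising it on a mismatch and confirming it once its
    prediction holds."""
    hypothesis = initial_hypothesis
    for _ in range(max_rounds):
        prediction = predict(hypothesis)
        observation = run_experiment(hypothesis)
        if observation == prediction:
            return hypothesis, "accepted truth"   # YES: confirm hypothesis
        hypothesis = revise(hypothesis, observation)  # NO: modify hypothesis
    return hypothesis, "unresolved"

# Toy usage: guess a number, revise toward whatever the 'experiment' reports.
final, status = inductive_cycle(3, lambda h: h, lambda h: 5, lambda h, obs: obs)
print(final, status)
```

The toy run revises the initial guess once and then confirms it, mirroring the NO/YES branches of the flowchart.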
9. APPROACHES OF CONFIRMATION
Deductive Logic
Deduction proceeds from the general case to the specific case: "certain inference".
For every action, there is an opposite and equal reaction; therefore this rifle will recoil when it is fired.
10. HYPOTHETICO-DEDUCTIVE LOGIC METHOD
An initial observation suggests a hypothesis, which generates several predictions (A, B, C, D). New observations are then collected. Do the new observations match the predictions? If no, the hypothesis is falsified; if yes, repeat the attempts to falsify it. Only after multiple failed falsifications does the hypothesis become "accepted truth".
12. PROBLEMS OF CONFIRMATION
What makes an observation count as evidence?
"This piece of copper conducts electricity" confirms (increases the credibility of) the hypothesis "All pieces of copper conduct electricity": a law-like hypothesis.
"This man performs scientific experiments" confirms (increases the plausibility of) "All men perform scientific experiments": an accidental hypothesis.
13. PROBLEMS OF CONFIRMATION
How do observations confirm a scientific theory?
You can know only what you have observed, and you have never observed a "Law of Nature".
Russell's Chicken Story
Moral of the story: you cannot always induce the truth from past experience!
14. PROBLEMS OF CONFIRMATION
Raven's Paradox
P1: All ravens are black.
P2: Everything that is not black is not a raven.
E1: This raven is black.
E2: This red (and thus not black) thing is an apple (and thus not a raven).
P1 and P2 are logically equivalent, so whatever confirms P2 (such as E2) appears to confirm P1 as well.
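A toy Bayesian model makes the paradox concrete. The four-object universe, the type priors, and the independence assumption below are all invented for illustration; under them, both the black raven and the non-black non-raven raise the probability of "All ravens are black" (here by exactly the same amount, because the objects are independent):

```python
from itertools import product

# Each of 4 objects independently gets one of four types (toy priors).
TYPES = {"black_raven": 0.05, "nonblack_raven": 0.05,
         "black_other": 0.20, "nonblack_other": 0.70}
N = 4
WORLDS = list(product(TYPES, repeat=N))

def prior(world):
    p = 1.0
    for t in world:
        p *= TYPES[t]
    return p

def all_ravens_black(world):      # hypothesis H (equivalently P2)
    return "nonblack_raven" not in world

def posterior(drawn_type):
    """P(H | a uniformly drawn object turned out to be `drawn_type`)."""
    num = den = 0.0
    for w in WORLDS:
        like = prior(w) * w.count(drawn_type) / N
        den += like
        if all_ravens_black(w):
            num += like
    return num / den

p_h = sum(prior(w) for w in WORLDS if all_ravens_black(w))
print(round(p_h, 4))                          # prior of H:  0.8145
print(round(posterior("black_raven"), 4))     # after E1:    0.8574
print(round(posterior("nonblack_other"), 4))  # after E2:    0.8574
```

The standard Bayesian reading of the paradox is that E2 really does confirm H, usually only by a tiny amount; getting the intuitive asymmetry between E1 and E2 requires a richer model (e.g. fixed class sizes) than this deliberately symmetric sketch.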
15. PROBLEMS OF CONFIRMATION
Moral of the Raven's Paradox
Theory of Confirmation: "Within certain limits, what is true of the evidence statements is true of the whole 'universe of discourse'."
Evidence may depend on context.
16. PROBLEMS OF CONFIRMATION
A logical consequence of any theory T is "T or S", for any statement S.
"The Earth is the center of the solar system, or I am 23 years old."
I am actually 23 years old. This means "The Earth is the center of the solar system, or I am 23 years old" (a logical consequence of "I am 23 years old") is confirmed by observing "I am 23 years old".
Can the nature of the solar system be confirmed by my age?
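This "tacking by disjunction" worry can be checked in a toy probability model (the priors and the independence of T and S are assumptions chosen for illustration): observing S does confirm the disjunction, but it leaves T itself exactly where it was.

```python
# T = "the Earth is the center of the solar system", S = "I am 23".
p_T, p_S = 0.01, 0.5               # toy priors; T and S independent
WORLDS = [(t, s) for t in (True, False) for s in (True, False)]
PROB = {(t, s): (p_T if t else 1 - p_T) * (p_S if s else 1 - p_S)
        for t, s in WORLDS}

def P(event, given=lambda w: True):
    """Conditional probability by enumerating the four worlds."""
    den = sum(PROB[w] for w in WORLDS if given(w))
    return sum(PROB[w] for w in WORLDS if given(w) and event(w)) / den

T = lambda w: w[0]
S = lambda w: w[1]
T_or_S = lambda w: w[0] or w[1]

print(P(T_or_S))            # ~0.505: prior of the disjunction
print(P(T_or_S, given=S))   # 1.0:    observing S "confirms" it
print(P(T), P(T, given=S))  # ~0.01 ~0.01: T itself is untouched
```

So the confirmation earned by "T or S" does not transmit to T: raising the probability of a disjunction says nothing about its irrelevant disjunct.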
18. HUBER'S THEORY OF CONFIRMATION
The Problem of a Theory of Theory Assessment
How do we compare and evaluate theories in the light of available evidence?
Given a hypothesis or theory H, a set of data (the evidence E), and some background information B:
How good is H given B?
What is the value of H in view of E and B?
19. HUBER'S THEORY OF CONFIRMATION
Qualitative theory of hypothetico-deductivism: (H & B) logically implies E.
Aims at informative theories; confirmation is an increasing function of the logical strength of the theory.
Quantitative theory of probabilistic inductive logic: P(H | E & B) >= r, for some r in (0.5, 1).
Aims at plausible or true theories; confirmation is a decreasing function of the logical strength of the theory.
20. HUBER'S THEORY OF CONFIRMATION
Conflicting Concepts of Confirmation
Informativeness: if E confirms H and H' logically implies H, then E confirms H'. (E |~ H, H' => H, therefore E |~ H'.)
Plausibility: if E confirms H and H logically implies H', then E confirms H'. (E |~ H, H => H', therefore E |~ H'.)
A good theory is true and informative.
21. HUBER'S THEORY OF CONFIRMATION
Two virtues of a good theory:
Truth (or "plausibility")
Strength (or "informativeness")
f(H, E, B): the epistemic value of the hypothesis H.
If E entails H → H', then f(H, E) ≤ f(H', E).
If ¬E entails H' → H, then f(H, E) ≤ f(H', E).
f(H, E) = p(H, E) + p(¬H, ¬E)
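A minimal sketch of how f(H, E) = p(H | E) + p(¬H | ¬E) behaves, assuming a finite space of eight equiprobable worlds and representing H and E as sets of worlds (the space, the uniform measure, and the example hypotheses are all assumptions made for illustration):

```python
WORLDS = frozenset(range(8))

def f(H, E):
    """f(H, E) = p(H | E) + p(not-H | not-E): plausibility plus
    informativeness. H and E are sets of worlds; E must be contingent
    (neither empty nor the whole space)."""
    notE = WORLDS - E
    plausibility = len(H & E) / len(E)
    informativeness = len((WORLDS - H) & notE) / len(notE)
    return plausibility + informativeness

E = {0, 1, 2, 3}           # the evidence rules out worlds 4..7
print(f(WORLDS, E))        # 1.0:  a tautology is plausible but empty
print(f({0}, E))           # 1.25: strong but implausible
print(f({0, 1, 2, 3}, E))  # 2.0:  maximal; H claims exactly what E allows
```

The examples show the trade-off from the slide: logically weakening H raises the plausibility term and lowers the informativeness term, and vice versa; f rewards both at once.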
24. CONFIRMATION AND TRUTHLIKENESS
Karl Popper's view
Belief: what if everyone lost all their beliefs about engineering?
Justified: the circular argument. Why A? Because B. Why B? Because A.
Criticise beliefs, don't justify them.
Knowledge is useful truth.
25. CONFIRMATION AND TRUTHLIKENESS
Acceptable theories need not only a high degree of confirmation, but also the capacity to explain or predict the empirical evidence.
The epistemic value of a theory depends on two factors:
the coherence or similarity between H and E, and
how informative our empirical evidence is.
Vs(H, E) = [p(H & E) / p(H v E)] [1 / p(E)] = p(H, E) / p(H v E)
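The naive verisimilitude measure can be tried out on the same kind of finite toy model (equiprobable worlds, hypotheses and evidence as sets; all modelling choices here are assumptions for illustration):

```python
WORLDS = frozenset(range(8))

def p(A):
    # probability of a set of worlds under a uniform measure
    return len(A) / len(WORLDS)

def Vs(H, E):
    """Naive empirical verisimilitude: the similarity between H and E
    (p(H&E)/p(HvE)) weighted by the informativeness of E (1/p(E)).
    Note: `H & E` and `H | E` are set intersection and union here."""
    return (p(H & E) / p(H | E)) * (1 / p(E))

E = {0, 1, 2, 3}
print(Vs({0, 1, 2, 3}, E))        # 2.0:   H agrees exactly with E
print(Vs({0, 1, 2, 3, 4, 5}, E))  # ~1.33: H claims more than E supports
print(Vs({4, 5}, E))              # 0.0:   H is incompatible with E
```

The 1/p(E) factor captures point (b) of the slide: agreement with rich (low-probability) evidence is worth more than agreement with shallow evidence.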
30. MANY SENSES OF CONFIRMATION
Acceptance of a theory:
H can be "acceptable" in the sense that the community allows individual scientists to accept H (the best-confirmed one).
H can be "acceptable" in the sense that the community commands its members to accept it (the most confirmed one).
What makes a theory so good that it is legitimate to accept it? (Alternative theories.)
What makes a theory so good that it is compulsory to accept it? (Certified knowledge.)
31. MANY SENSES OF CONFIRMATION
X ,the set of all possible mutually exclusive sets of
consequences that the choice of a demarcation level
β will have for scientist i, then,the optimal choice for
that scientist will correspond to:
pi(x,b) is obviously the probability with which i judges
that the choice of b will lead to consequences x,
ui(x) is the utility that i would experiment under x.
31
33. NATURALISM AND BAYESIAN TINKERING
The "problem of theory evaluation" is not a "philosophical" problem, but a problem for communities of flesh-and-bone scientists.
Intertheoretical reduction increases epistemic value: showing that one theory is reducible to another increases the verisimilitude of both theories.
34. EXPLANATORINESS AND CONFIRMATION
A theory H explains the facts F for the scientific community C if and only if F can be derived from H by C, and the members of C understand H, i.e., H is "intelligible" for them.
Ceteris paribus, if X is easier to understand than Y, then p(Y) < p(X).
Epistemology in the sense of knowledge: what are the necessary and sufficient conditions of knowledge; what are its sources, structure and limits? Epistemology in the sense of justified belief: how are we to understand the concept of justification? What makes justified beliefs justified? Is justification internal or external to one's own mind?

Scientific research is complicated, and there are different scientific disciplines. But generally speaking, we can identify four main components in scientific research:

Theories - Here we include all the hypotheses, laws and facts that are about the empirical world.

The world - The objects, processes and properties of the universe.

Predictions - We use our theories to make predictions about the world. They are often predictions about the future, but we can also have predictions about the past. For example, a geological theory about the earth's history might predict that certain rocks contain a high percentage of special metals. A crucial part of scientific research is to test a theory by checking whether its predictions are correct or not.

Data (evidence) - The information that is gathered from empirical observations or experiments. We use data to test our theories. They might also inspire new directions in research.

When we describe a scientific claim as a "theory", we are highlighting the fact that it is a claim about the world that can be true or false. It does not imply that we are unsure of the claim or that we do not have evidence for it.
Statistics is an inductive process: we are trying to draw general conclusions based on a specific, limited sample. Inductive reasoning is the process of reasoning from the specific to the general, and it is supported by inductive logic. For example, from specific propositions such as "This raven is a black bird" we move to general propositions such as "All ravens are black birds". The conclusions may or may not be valid.
For example, if I observe 10,000 dogs, and every dog has fleas, I may conclude "All dogs must have fleas." The conclusion is a conjecture or a prediction. Further evidence may support or deny my conclusion: the 10,001st dog may not have fleas. Therefore, with an inductive argument, anyone can affirm all my premises (10,000 dogs with fleas) yet deny my conclusion (all dogs have fleas) without involving himself in any logical contradiction.

An argument must meet two conditions to justify believing the conclusion (hypothesis): (1) the premises (assumptions, experimental/initial conditions) must themselves be justified; (2) there must be a sufficient connection between the premises and the conclusion.

Giere's three criteria of a good test provide the basis for judging this connection: (1) the prediction is deducible from the hypothesis together with the initial conditions; (2) the prediction is improbable when considered out of context from the hypothesis; (3) the prediction is verifiable.

The experiment determines the truth or falsity of the prediction. If the prediction is successful, the hypothesis is justified. If the prediction fails, the hypothesis is refuted.

Theoretical hypothesis: Insects always have 6 legs. Premise: All creatures with 3 body parts are insects (assumption). Prediction: Each individual captured will have 6 legs. Premise: They all have 6 legs (prediction true). Conclusion: Insects probably always have six legs.
Deductive reasoning is the process of reasoning from the general to the specific, and it is supported by deductive logic. For example, from the general proposition "For every action, there is an opposite and equal reaction" we move to specific propositions such as "This rifle will recoil when it is fired". In contrast to inductive reasoning, the conclusions of deductive reasoning are as valid as the initial assumptions. Deductive reasoning was first described by ancient Greek philosophers such as Aristotle.
Championed by the philosopher of science Karl Popper (1902-1994). The goal of these tests is not to confirm, but to falsify, the hypothesis. The accepted scientific explanation is the hypothesis that successfully withstands repeated attempts to falsify it.

Here is an illustration: Suppose your portable music player fails to switch on. You might then consider the hypothesis that perhaps the batteries are dead. So you decide to test whether this is true. Given this hypothesis, you predict that the music player should work properly if you replace the batteries with new ones. So you proceed to replace the batteries, which is the "experiment" for testing the prediction. If the player works again, then your hypothesis is confirmed, and so you throw away the old batteries. If the player still does not work, then the prediction is false, and the hypothesis is disconfirmed. So you might reject your original hypothesis and come up with an alternative one to test, e.g. the batteries are ok but your music player is broken.

1. A scientific hypothesis must be testable. The HD method tells us how to test a hypothesis, and a scientific hypothesis must be one that is capable of being tested.

2. Confirmation is not truth. In general, confirming the predictions of a theory increases the probability that the theory is correct. But in itself this does not prove conclusively that the theory is correct. If H then P. P. Therefore H. Here H is our hypothesis "the batteries are dead", and P is the prediction "the player will function when the batteries are replaced". This pattern of reasoning is of course not valid, since there might be reasons other than H that also bring about the truth of P. For example, it might be that the original batteries are actually fine, but they were not inserted properly. Replacing the batteries would then restore the loose connection. So the fact that the prediction is true does not prove that the hypothesis is true.
We need to consider alternative hypotheses and see which is more likely to be true and which provides the best explanation of the prediction. (Or we can also do more testing!)

3. Disconfirmation need not be falsity. Very often a hypothesis generates a prediction only when given additional assumptions (auxiliary hypotheses). In such cases, when a prediction fails, the theory might still be correct. If the hypothesis and the initial conditions are true, then the prediction is true; the initial conditions hold but the prediction fails; thus, the hypothesis is not true. If (H and IC), then P. Not P, and IC. Thus, not H. If the premises are true, and the prediction is false, then the hypothesis must be false.

Example of a refuting argument: Theoretical hypothesis: insects always have 6 legs. Premise: Insects beget other insects (assumption). Prediction: Fly larvae will have six legs. Premise: Maggots don't have 6 legs (prediction false). Conclusion: Insects don't always have six legs (reject the hypothesis).
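The contrast in the notes above, between affirming the consequent ("If H then P; P; therefore H") and modus tollens ("If H then P; not P; therefore not H"), can be verified mechanically with a small truth-table check (an illustrative sketch; the helper names are invented):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values
    makes every premise true while the conclusion is false."""
    for H, P in product((True, False), repeat=2):
        if all(prem(H, P) for prem in premises) and not conclusion(H, P):
            return False
    return True

implies = lambda a, b: (not a) or b

# Affirming the consequent: invalid (confirmation is not proof).
print(valid([lambda H, P: implies(H, P), lambda H, P: P],
            lambda H, P: H))          # False
# Modus tollens: valid (a failed prediction refutes, given the premises).
print(valid([lambda H, P: implies(H, P), lambda H, P: not P],
            lambda H, P: not H))      # True
```

The counterexample the checker finds for the first form is H false, P true: the player works again even though the old batteries were fine.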
Similarity in logical structure: "This X is a Y; hence all X's are Y". Hence confirmation is not just a function of the logical structure of science; there has to be something about the hypotheses themselves that makes a difference.
Hume says: "The future is under no obligation to mimic the past."

On a farm, there was a flock of chickens. One chicken started talking with another, remarking "How good our farmer has been to us. I think he is an awfully nice man, because he comes every morning to feed us." The other chicken nodded in agreement, adding "and he has been feeding every one of us here every day like clockwork, every day without fail since we were all just little baby chicks." Indeed, when queried, most of the other chickens clucked in agreement about how benevolent their farmer was.

But there was one chicken, intelligent but eccentric, who countered saying "How do you know he is all that good? I remember, not too long ago, that there were some older chickens who were taken away, and I haven't seen them since. Whatever happened to them?"

Some of the chickens may have slept a little uneasily that night, but in the morning the farmer came as usual, this time scattering even more corn around. The chickens ate this with gusto, and this dispelled any remaining doubts about the benevolence of the farmer. "You see, there is nothing to worry about. Our farmer had a little extra food, so he gave it to us because he likes us! He is a good man," remarked one chicken to the others, and they all nodded in agreement, all of them, that is, except one.

The intelligent but eccentric chicken became even more agitated. "He is just fattening us up! We are going to be slaughtered in a week's time!" he squawked in alarm. But nobody listened. All the other chickens just thought he was a troublemaker.

A week later, all the chickens were placed into cages, loaded onto a truck, and driven to the slaughterhouse.
Qualitative hypothetico-deductivism (HD): H is confirmed by evidence E relative to background knowledge B iff the conjunction of H and B logically implies E in some suitable way.
The idea is that a sentence or proposition is the more informative, the more possibilities it excludes. Hence, the logically stronger a sentence, the more informative it is. On the other hand, a sentence is more plausible the fewer possibilities it excludes, i.e. the more possibilities it includes. Hence, the logically weaker a sentence, the more plausible it is. The qualitative counterparts of these two comparative principles are the defining clauses above. If H is informative relative to E, then so is any logically stronger sentence H'. Similarly, if H is plausible relative to E, then so is any logically weaker sentence H'.
A unified "logic of confirmation" is one that respects both the idea that a hypothesis H is confirmed by empirical evidence E if H can be formally inferred from E (i.e., if the theory derives from the data) and the idea that H is confirmed by E if we can formally infer E from H. According to Huber, the reason for this conflict is that there are (at least) two virtues a good theory must have, truth (or "plausibility") and strength (or "informativeness"), and there is a trade-off between them (i.e., the kinds of things we can do to improve the "truth" of our theories usually go against what we can do to improve their "strength", and vice versa).

So, imagine we try to define some measure f of the epistemic value of a hypothesis H given the evidence E and background knowledge B. Functions f(H, E, B) may have many different mathematical properties, but the following ones are what would allow us to call f, respectively, "truth responsive" and "strength responsive" on the basis of the evidence (see Huber 2008a, pp. 92-93):

(1) (a) If E entails H -> H', then f(H, E) ≤ f(H', E).
    (b) If ¬E entails H' -> H, then f(H, E) ≤ f(H', E).

The condition in (1.a) means that, within the states allowed by E, all the states allowed by H are also allowed by H', i.e., H' covers a bigger portion than H of the states of the world consistent with our evidence; so, if f behaves in the way stated by (1.a), it will give a higher value to theories that would be true in a bigger proportion of the states that we know might be true. The condition in (1.b) means, instead, that in the states that are not allowed by E, H' covers a smaller portion of them than H, and so the more content that we know is false a hypothesis has, the better (recall that the content of a proposition is inversely related to the magnitude of the states it is consistent with). Fig. 1 allows one to understand these two conditions clearly ((1.a) refers to what happens "inside" E, whereas (1.b) refers to what happens "outside" E; the square represents the set of states consistent with background knowledge B).
Traditional epistemology, from ancient Greek philosophers such as Aristotle and Plato.
Theories are modified by new theories, each improvement moving toward the final universal truth.
The main intuition under the notion of truthlikeness is that the value of two false theories can be different, and even that the value of some false (or falsified) theories can be higher than the value of some true (or not yet falsified) theories. If truth and strength were the only relevant virtues of a hypothesis then, if H and H' have both been falsified, either both will have the same value, or the more informative of them will be better; but the latter entails that increasing the epistemic value of a falsified theory is child's play.

Epistemic value will not only depend on how much theories say about the world, but also on something that has to do with what they say, in particular, on the relation between what they say and the truth about the matter. Acceptable theories must have (if at all) a high degree of confirmation, and also other values, like the capacity to explain or predict the empirical evidence.

The naive definition asserts that the epistemic value of a theory depends on two factors:
a) how similar or coherent the view of the world offered by the theory and the view of the world that derives from our empirical evidence are; and
b) how informative our empirical evidence is (for being coherent with a very shallow empirical knowledge is not as indicative of "deep truth" as being coherent with a much richer corpus of empirical information).

The coherence or similarity between H and E can be defined as p(H&E)/p(HvE), whereas the informativeness of a proposition A can be measured by 1/p(A). Hence, the naive definition of empirical verisimilitude would be as follows:

(3) Vs(H, E) = [p(H&E)/p(HvE)][1/p(E)] = p(H, E)/p(HvE)
Once upon a time, positivist philosophers saw intertheoretical reduction as the fundamental road to scientific progress; the anti-positivist movement of the second part of the 20th century led a majority of philosophers to abandon not only the idea that science progresses through the reduction of some theories to others, but in many cases even the belief that intertheoretical reduction is logically possible at all.
Ceteris paribus is a Latin phrase, literally translated as "with other things the same," or "all other things being equal or held constant." A ceteris paribus assumption is often fundamental to the predictive purpose of scientific inquiry. In order to formulate scientific laws, it is usually necessary to rule out factors which interfere with examining a specific causal relationship. Under scientific experiments, the ceteris paribus assumption is realized when a scientist controls for all of the independent variables other than the one under study, so that the effect of a single independent variable on the dependent variable can be isolated. By holding all the other relevant factors constant, a scientist is able to focus on the unique effects of a given factor in a complex causal situation.