Mayo: Day #2 slides

Today we’ll try to cover a number of things:
1. Learning philosophy/philosophy of statistics
2. Situating the broad issues within philosophy of science
3. Little bit of logic
4. Probability and random variables

Transcript

  • 1. Day #2: 6334
    Today we’ll try to cover a number of things:
    1. Learning philosophy/philosophy of statistics
    2. Situating the broad issues within philosophy of science
    3. Little bit of logic
    4. Probability and random variables
    (need a full list to get participants in Scholar!)
  • 2.
    - Learning to do philosophy is very different from learning philosophy as a museum of different views, what so-and-so said, various “isms.”
    - How to teach “doing philosophy,” and how the student is to recognize acquiring the skill, is not cut and dried. (No one can really say, but I can tell when you’re “getting philosophical.”)
    - To teach you to be philosophers, even if it’s only in this class, we’ve got to teach something that is unfamiliar and maybe even painful for some. (I know we have some philosufferers here, who should know what I mean.)
  • 3.
    - We won’t typically come out and “give the answer” in wrestling with a philosophical issue, but may deliberately hold back to encourage you to wrestle with it.
    - On the other hand, Professor Spanos and I have been working in this general area for a long time, developing positions, responses to challenges, etc.
    - You also learn how to do philosophy by witnessing the strongest arguments, and the effort that goes into carefully engaging them.
    - So there’s a balance here wherein we want to both give and hold back, feed but make you struggle. So expect that.
  • 4.
    - Everyone can see our publications, no secret there, and people participating in this course (listed or added to Scholar) will also get bits from my still-being-written book (on which I definitely want your feedback).
    - So you will be getting our strongest arguments, but also the strongest arguments out there, and we’ll want you to work through the arguments on your own.
    - The most important thing, admittedly also the most unusual, is that anyone who wants to make progress in the area of philosophy of statistics must be prepared to question and challenge everything they read (even by the highest of the high priests).
  • 5.
    - We know it feels strange. (This does not typically hold for other subjects.)
    - In that first chapter of How To Tell What’s True About Statistical Inference, I note that “a certain trepidation and groupthink take over when it comes to philosophically tinged notions such as evidence, inductive reasoning, objectivity, rationality, truth.”
    - “The general area of philosophy that deals with knowledge, evidence, inference, and rationality is called epistemology. The epistemological standpoints of leaders, be they philosophers or scientists, are too readily taken on faith as canon by others, including researchers who would otherwise cut through unclarity, ambiguity, and the fuzziness and handwaving that often surround these claims.”
  • 6.
    - Progress in mathematical statistics doesn’t require philosophy, but too often “we don’t need to be philosophical about these concepts or methods” means “I don’t want to examine them very closely.”
    - Writing philosophy (I gave you some pointers already on the blog):
      o Here, too, a slow examination of the question involved is required.
      o Less (much less!) is more.
      o We guarantee you will find this kind of analytical writing of value in your other fields.
      o As a start, the first assignment will be make-believe for next time. (e-mail)
  • 7. PHILOSOPHY: Understand and Justify Human Knowledge About the World
    PHILOSOPHY OF SCIENCE: How Do We Learn About the World In The Face of Uncertainty and Error?
    - Is there a scientific method?
    - How do we obtain good evidence?
    - How do we make reliable inferences from evidence?
    - What makes an inquiry “scientific” or rational?
    - Is there scientific progress?
  • 8. Problems with answers from logical empiricist philosophers of science:
    Is there a scientific method?
    - The “logics” for science are oversimple, open to paradoxes.
    - Standard canons are violated in actual science.
    - Scientific methods change with changing aims, values, technologies, societies.
    How do we obtain good evidence?
    - Empirical data are uncertain, finite, probabilistic.
    - Data are not just “given”; they have to be interpreted, introducing biases, theory-ladenness, value-ladenness (scientific, social, ethical, economic).
  • 9. How do we make reliable inferences from evidence?
    - Inductivist: cannot justify “induction”; logics of induction failed.
    - Falsificationist: deduction won’t teach anything new; cannot pinpoint blame for a failed prediction.
    What makes an inquiry “scientific” or rational?
    - None of the philosophical attempts to erect a demarcation for science seem to work.
    Is there scientific progress?
    - No account of cumulative growth of knowledge.
    - Old “paradigms” are swept away by new ones which are “incommensurable” with the old.
  • 10. [Flowchart slide] PHILOSOPHY: Understand and Justify Human Knowledge About the World. PHILOSOPHY OF SCIENCE: How Do We Learn About the World In The Face of Uncertainty and Error?
    - LOGICAL EMPIRICIST ATTEMPTS, 1930-60 (“Armchair Philosophy of Science”): inductive logics C(h,e) (Carnap); logic of falsification (Popper); problems, paradoxes.
    - “Historicism in Philosophy of Science”: Kuhn (1962); CRISIS IN PHILOSOPHY OF SCIENCE; POST-POSITIVISM (STS/HPS).
    - Pessimistic strands: relativism, postmodernism, anarchy, dadaism, irrationality, social constructivism.
    - Optimistic strands: new models of rationality, the “naturalistic turn”; search for more adequate theories of induction, testing, and decision-making.
    - THEORY CHANGE MOVEMENT, 1970s-80s: Kuhn, Lakatos, Laudan; “rational reconstruction.”
    - BAYESIANS: inference as updating degrees of belief.
    - New Experimentalist Turn, 1980s-1990s; Error Statistical Philosophy: Mayo 1996.
  • 11. Little Bit of Logic
    (Double purpose: both for arguing philosophically and for understanding inductive/deductive methods)
    Argument: a group of statements, one of which (the conclusion) is claimed to follow from one or more others (the premises), which are regarded as supplying evidence for the truth of that one.
    This is written: P1, P2, …, Pn / ∴ C.
    In a two-valued logic, any statement A is regarded as true or false.
  • 12. A deductively valid argument: if the premises are all true, then, necessarily, the conclusion is true.
    To use the “⊨” (double turnstile) symbol: P1, P2, …, Pn ⊨ C.
    Note: deductive validity is a matter of form; any argument with the same form or pattern as a valid argument is also a valid argument.
    (Simple truth tables serve to determine validity.)
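Since the slide notes that simple truth tables settle validity, here is a minimal sketch (not from the slides; the function name is_valid is made up for illustration) of checking an argument form by brute-force enumeration of truth-value assignments, in Python:

    # Minimal sketch: an argument form is valid iff no assignment of truth
    # values makes every premise true while the conclusion is false.
    from itertools import product

    def is_valid(letters, premises, conclusion):
        """Return (True, None) if valid, else (False, counterexample_row)."""
        for values in product([True, False], repeat=len(letters)):
            row = dict(zip(letters, values))      # one row of the truth table
            if all(p(row) for p in premises) and not conclusion(row):
                return False, row                 # premises true, conclusion false
        return True, None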
  • 13. EXAMPLES (listing premises followed by the conclusion)
    Modus Ponens:
      If H then E
      H
      Therefore, E
    Modus Tollens:
      If H then E
      Not-E
      Therefore, Not-H
    If (H) GTR, then (E) deflection effect.
    (not-E) No light deflection observed.
    Therefore, (not-H) GTR is false (falsification).
    These results depend on the English meaning of “if then” and of “not.” In context, sentence meanings aren’t always so clear.
  • 14. Disjunctive syllogism:
    (1) Either the (A) experiment is flawed or (B) GTR is false.
    (2) GTR is true (i.e., not-B).
    Conclusion: therefore, (A) the experiment is flawed.
    If either A or B is true, and not-B, then infer A: if A were not true, you couldn’t also hold the two premises true without contradiction. (e.g., soup or salad)
      Either A or B. (disjunction)
      Not-B
      Therefore, A
    So we have 3 valid forms.
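To connect the three valid forms to the truth-table sketch above (again illustrative only; “if H then E” is rendered as the material conditional, not-H or E):

    implies = lambda a, b: (not a) or b   # material conditional

    # Modus ponens: If H then E, H / therefore E
    print(is_valid(["H", "E"],
                   [lambda r: implies(r["H"], r["E"]), lambda r: r["H"]],
                   lambda r: r["E"]))        # (True, None)

    # Modus tollens: If H then E, not-E / therefore not-H
    print(is_valid(["H", "E"],
                   [lambda r: implies(r["H"], r["E"]), lambda r: not r["E"]],
                   lambda r: not r["H"]))    # (True, None)

    # Disjunctive syllogism: A or B, not-B / therefore A
    print(is_valid(["A", "B"],
                   [lambda r: r["A"] or r["B"], lambda r: not r["B"]],
                   lambda r: r["A"]))        # (True, None)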
  • 15. Deductively Valid Argument (argument form): three equivalent definitions.
    - An argument where, if the premises are all true, then, necessarily, the conclusion is true. (i.e., if the conclusion is false, then necessarily one of the premises is false.)
    - An argument where it’s (logically) impossible for the premises to be all true and the conclusion false. (i.e., to have the conclusion false with the premises true leads to a logical contradiction: A & not-A.)
    - An argument that maps true premises into a true conclusion with 100% reliability. (i.e., if the premises are all true, then 100% of the time the conclusion is true.)
  • 16. True or False? If an argument is deductively valid, then its conclusion must be true.
    To detach the conclusion of a deductively valid argument as true, the premises must be true.
    Here’s an instance of the valid form modus tollens. Let A be: Bayesian methods are acceptable. Let B stand for: Bayesian methods always give the same answer as frequentist methods.
      If A then B
      Not-B
      Therefore, Not-A.
  • 17. So you may criticize a valid argument by showing one of its premises is false.
    (Deductively) Sound argument: deductively valid + premises are true/approximately true.
  • 18. Exercise: showing deductively valid arguments can have false conclusions.
    (1) (I did this above) Give an example of an argument that follows the valid pattern of modus tollens and has a false conclusion.
    (2) Exercise (YOU do this one): show that an argument following the form of a disjunctive syllogism can have a false conclusion.
      Either A or B
      Not-B
      Therefore, A.
  • 19. Invalid Argument. Consider this argument:
      If H then E
      E
      Therefore, H
    (H) If everyone watched the SOTU speech last night, then (E) you did.
    (E) You did.
    So, (H) everyone did.
    Affirming the consequent.
  • 20. Invalid argument: an argument where it’s possible to have all true premises and a false conclusion without contradiction.
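The same brute-force check (using is_valid and implies from the earlier sketches) exhibits the counterexample row for affirming the consequent: H false and E true makes both premises true while the conclusion is false.

    # Affirming the consequent: If H then E, E / therefore H -- invalid
    valid, row = is_valid(["H", "E"],
                          [lambda r: implies(r["H"], r["E"]), lambda r: r["E"]],
                          lambda r: r["H"])
    print(valid, row)   # False {'H': False, 'E': True}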
  • 21. Inductive argument. With an inductive argument, the conclusion goes beyond the premises, so it’s logically possible for all the premises to be true and the conclusion false. So an inductive argument is invalid.
    The premises might be experimental outcomes or data points: E1, E2, …, En (e.g., light deflection observations, drug reactions, radiation levels in fish).
    Even if all observed cases had a property, or followed a law, there’s no logical contradiction in the falsity of the generalization:
      H: All E’s are F
  • 22. This is also true for a statistical generalization:
      60% in this class watched SOTU.
      Thus, H: 60% of all people watched SOTU.
    H agrees with the data, but it’s possible to have such good agreement even if H is false.
    The problem of induction: how to justify such inductive inferences. It’s not clear how to characterize the inference, much less to justify the move (if it should be justified). (Some would say we only use deduction; we’ll talk about this two classes from now.)
  • 23. Inductively “good” argument: how should we define it? Can we try to parallel the definition of deductive validity?
    (i) A deductively valid argument is one where, if the premises are all true, then, necessarily, the conclusion is true.
      E1, E2, …, En ⊨ H.
    (i)* An inductively good argument is one where, if the premises are true, then the conclusion is probable (?)
      E1, E2, …, En ⊨ probably H.
    (i)* probabilism. Does this mean infer that H is probably true: Pr(H) = high?
  • 24. Try a second definition of deductively valid:
    (ii) Deductive: the argument leads from true premises to a true conclusion 100% of the time.
    (ii)* Inductive: the argument leads from true (or approximately true) premises to a true conclusion 100(1 – α)% of the time. (?)
    (ii)* highly reliable method (reliabilism?)
    (This differs from a highly reliable conclusion.)
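A minimal simulation sketch (the Normal example and all numbers are assumed, not from the slides) of what a “highly reliable method” means: a standard 95% confidence interval for a Normal mean covers the true mean in roughly 95% of repeated samples, even though any single interval either covers it or doesn’t.

    import random, statistics

    random.seed(6334)
    true_mu, sigma, n, z = 10.0, 2.0, 25, 1.96   # assumed illustration values
    trials, covered = 10_000, 0
    for _ in range(trials):
        sample = [random.gauss(true_mu, sigma) for _ in range(n)]
        xbar = statistics.mean(sample)
        half_width = z * sigma / n ** 0.5        # known-sigma interval, for simplicity
        if xbar - half_width <= true_mu <= xbar + half_width:
            covered += 1
    print(covered / trials)   # close to 0.95: reliability of the method, not of one conclusion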
  • 25. High “reliability” of the method, low long-run error, isn’t enough.
    Remember the BP Deepwater Horizon oil spill of 2010? The BP representatives claimed to have good (inductive) evidence that
      H: the cement seal is adequate (no gas should leak).
    In fact they kept decreasing the pressure until H passed, rather than performing a more severe test (as they were supposed to) called a “cement bond log” (using acoustics).
  • 26. Oil Exec: Our inference to H (the cement seal is fine) is highly reliable.
    Senator: But didn’t you just say no cement bond log was performed, when it initially failed…?
    Oil Exec: That’s what we did on April 20, but usually we do; I’m giving the average.
    (Imaginary): We use a randomizer that most of the time directs us to run the gold-standard check on pressure, but with small probability tells us to assume the pressure is fine, or keep decreasing the pressure of the test till it passes….
  • 27. Overall, this test might rarely err, but that is irrelevant in appraising the inference from the data on April 20.
    The oil rep gives a highly misleading account of the stringency of the actual test that H managed to “pass.” Passing this overall “test” made it too easy for H to pass, even if false.
    The long-run reliability of the rule is a necessary but not a sufficient condition to infer H (with severity).
  • 28. Severity demands considering, for the context at hand, what error(s)* must be (statistically) ruled out in order to have evidence for some claim H.
    (*construed as any erroneous claim or misunderstanding, empirical or theoretical)
    And it demands being able to evaluate a tool’s probative capacity to have discerned the flaw, were it present.
    An inductive argument is good if its conclusion has passed a severe test (with the premises and background): H agrees with the data, and it would be highly improbable to have such good agreement were H false.
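A minimal numerical sketch of the severity idea (the coin example is assumed, not from the slides): toss a coin n = 10 times, observe 9 heads, and infer H: the coin favors heads (p > 0.5). One way to gauge how severely H passed is the probability the test would have yielded data agreeing less well with H (fewer heads) were H false, evaluated at p = 0.5:

    from math import comb

    n, x_obs, p0 = 10, 9, 0.5
    # P(at least as many heads as observed | p = 0.5)
    p_as_good = sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(x_obs, n + 1))
    severity = 1 - p_as_good        # P(fewer than 9 heads | p = 0.5)
    print(round(severity, 3))       # about 0.989: such good agreement is very improbable were H false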
  • 29. Error (Probability) Statistics
    - Error probabilities may be used to quantify the probativeness or severity of tests (for a given inference).
    - The logic of severe testing (or corroboration) is not probability logic.
    - It may well be that there’s a role for both, but that’s not to unify them, or claim the differences don’t matter…
  • 30. I began with a set of questions, and hope to show how they are tackled in the error statistical philosophy.
  • 31. Problems with answers from logical empiricist philosophers of science:
    Is there a scientific method?
    - The “logics” for science are oversimple, open to paradoxes.
    - Standard canons are violated in actual science.
    - Scientific methods change with changing aims, values, technologies, societies.
    How do we obtain good evidence?
    - Empirical data are uncertain, finite, probabilistic.
    - Data are not just “given”; they have to be interpreted, introducing biases, theory-ladenness, value-ladenness (scientific, social, ethical, economic).
  • 32. How do we make reliable inferences from evidence?
    - Inductivist: cannot justify “induction”; logics of induction failed.
    - Falsificationist: deduction won’t teach anything new; cannot pinpoint blame for a failed prediction.
    What makes an inquiry “scientific” or rational?
    - None of the philosophical attempts to erect a demarcation for science seem to work.
    Is there scientific progress?
    - No account of cumulative growth of knowledge.
    - Old “paradigms” are swept away by new ones which are “incommensurable” with the old.