Lecture 9
Survey Research & Design in Psychology
James Neill, 2016
Creative Commons Attribution 4.0
Power, Effect Sizes, Confidence Intervals, & Academic Integrity
Overview
1. Significance testing
2. Inferential decision making
3. Statistical power
4. Effect size
5. Confidence intervals
6. Publication bias
7. Academic integrity
8. Statistical method guidelines
Readings
1. Ch 34/35: The size of effects in statistical analysis: Do my findings matter?
2. Ch 35/36: Meta-analysis: Combining and exploring statistical findings from previous research
3. Ch 37/38: Confidence intervals
4. Ch 39/40: Statistical power
5. Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
Significance Testing
Significance Testing:
Overview
• Logic
• History
• Criticisms
• Decisions
• Practical significance
Logic of significance testing
How many heads in a row would I need to throw before you'd protest that something “wasn't right”?
Logic of significance testing
Based on the distributional properties of a sample dataset, we can extrapolate (guesstimate) about the probability of the observed differences or relationships in a population. In so doing, we are assuming that the sample data is representative and that the data meets the assumptions associated with the inferential test.
Logic of significance testing
• Null hypothesis (H0) usually reflects no effect in the population (or some other expected effect)
• Obtain p-value from sample data to determine the likelihood of H0 being true
• Researcher tolerates some false positives (critical α) to make a probability-based decision about H0
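To make this logic concrete, here is a minimal sketch in Python (an illustrative addition, not part of the lecture, which uses SPSS and online calculators). It works through the coin-flip example above: how many heads in a row before the fairness assumption (H0) becomes implausible?

```python
# Minimal sketch of the coin-flip logic of significance testing.
# Illustrative only; the choice of 6 tosses is arbitrary.
from scipy import stats

# Under H0 (a fair coin), k straight heads has probability 0.5**k.
for k in range(1, 7):
    p = 0.5 ** k
    flag = " <- below critical alpha = .05" if p < 0.05 else ""
    print(f"{k} heads in a row: p = {p:.4f}{flag}")

# The same one-tailed p-value via a binomial test (5 heads in 5 tosses):
print(stats.binomtest(5, n=5, p=0.5, alternative="greater").pvalue)  # 0.03125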
History of significance testing
• Developed by Ronald Fisher (1920s-1930s)
• To determine which agricultural methods yielded greater output
• Were variations in output between two plots attributable to chance or not?
History of significance testing
• Agricultural research designs couldn’t be fully experimental because variables such as weather and soil quality couldn't be fully controlled, so a method was needed to determine whether variations in the DV were due to chance or to the IV(s).
History of significance testing
• ST spread to other fields, including social sciences.
• Spread aided by the development of computers and training.
• In the latter 20th century, widespread use of ST attracted critique for its over-use and misuse.
Criticisms of significance testing
• Critiqued as early as 1930.
• Cohen's (1980s-1990s) critique helped a critical mass of awareness to develop.
• Led to changes in publication guidelines and teaching about over-reliance on ST and about alternative and adjunct techniques.
Criticisms of significance testing
1. The null hypothesis is rarely true.
2. ST provides:
● a binary decision (yes or no) and
● the direction of the effect,
but mostly we are interested in the size of the effect – i.e., how much of an effect?
3. Statistical vs. practical significance
4. Sig. is a function of ES, N, and critical α
Statistical significance
• Statistical significance simply means that the observed effect (relationship or differences) is unlikely to be due to sampling error
• Statistical significance can be evident for very small (trivial) effects if N and/or critical alpha are large enough
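A quick simulation illustrates the second point. The sketch below is an illustrative addition (Python; the "trivial" true effect of d = .05 and the sample sizes are invented values): with a large enough N, even this tiny effect will typically come out statistically significant.

```python
# Sketch: a trivial true effect (d = .05) reaches significance as N grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
for n in (50, 500, 50_000):
    g1 = rng.normal(0.00, 1, n)   # control group
    g2 = rng.normal(0.05, 1, n)   # true standardised difference of .05
    t, p = stats.ttest_ind(g1, g2)
    print(f"N per group = {n:>6}: p = {p:.4f}")
# With N = 50,000 per group the trivial effect is virtually always sig.
```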
Practical significance
• Practical significance is about whether the difference is large enough to be of value in a practical sense
– Is it an effect worth being concerned about? Are these noticeable or worthwhile effects?
– e.g., a 5% increase in well-being probably starts to have practical value
Criticisms of significance testing
Ziliak, S. T., & McCloskey, D. N. (2008). The cult of statistical significance: How the standard error costs us jobs, justice, and lives. Ann Arbor: University of Michigan Press.
Criticisms of significance testing
Kirk, R. E. (2001). Promoting good statistical practices: Some suggestions. Educational and Psychological Measurement, 61, 213-218. doi:10.1177/00131640121971185
Criticisms of significance testing
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Criticisms of significance testing
Gill, J. (1999). The insignificance of null hypothesis significance testing. Political Research Quarterly, 52, 647-674.
APA Style Guide recommendations about effect sizes, CIs, and power
• APA 5th edition (2001) recommended reporting of ESs, power, etc.
• APA 6th edition (2009) further strengthened the requirements to use NHST as a starting point and to also include ESs, CIs, and power.
NHST and alternatives
“Historically, researchers in psychology have relied heavily on null hypothesis significance testing (NHST) as a starting point for many (but not all) of its analytic approaches. APA stresses that NHST is but a starting point and that additional reporting such as effect sizes, confidence intervals, and extensive description are needed to convey the most complete meaning of the results... complete reporting of all tested hypotheses and estimates of appropriate ESs and CIs are the minimum expectations for all APA journals.” (APA Publication Manual, 6th ed., 2009, p. 33)
Significance testing: Recommendations
• Use traditional Fisherian logic methodology (inferential testing)
• Also use alternative and complementary techniques (ESs and CIs)
• Emphasise practical significance
• Recognise the merits and shortcomings of each approach
Significance testing: Summary
• Logic:
– Examine sample data to determine the probability (p) that it represents a population with no effect or some effect. It's a “bet”: at what point do you reject H0?
• History:
– Developed by Fisher for agricultural experiments in the early 20th century
– During the 1980s and 1990s, ST was increasingly criticised for over-use and mis-application.
Significance testing: Summary
• Criticisms:
– Binary
– Depends on N, ES, and critical alpha
– Need practical significance
• Recommendations:
– Wherever you report a significance test (p-level), also report an ES
– Also consider reporting power and CIs
Inferential Decision Making
Hypotheses in inferential testing
Null Hypothesis (H0): No differences / No relationship
Alternative Hypothesis (H1): Differences / Relationship
Inferential decisions
In inferential testing, a conclusion about a population is made based on the sample data. Either:
Do not reject H0: p is not sig. (i.e., not below the critical α)
Reject H0: p is sig. (i.e., below the critical α)
Inferential Decisions: Correct Decisions
We hope to make a correct inference based on the sample; i.e., either:
Do not reject H0: Correctly retain H0 (i.e., when there is no real difference/effect in the population)
Reject H0 (Power): Correctly reject H0 (i.e., when there is a real difference/effect in the population)
Inferential Decisions: Type I & II Errors
However, we risk making errors:
Type I error: Incorrectly reject H0 (i.e., when there is no difference/effect in the population)
Type II error: Incorrectly fail to reject H0 (i.e., when there is a difference/effect in the population)
Inferential Decision Making Table
Inferential decision making: Summary
• Correct acceptance of H0
• Power (correct rejection of H0) = 1 − β
• Type I error (false rejection of H0) = α
• Type II error (false acceptance of H0) = β
• Traditional emphasis has been too much on limiting Type I errors and not enough on Type II errors – a balance is needed.
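These quantities can be checked by simulation. A minimal sketch (illustrative Python; the n, α, and true-effect values are arbitrary choices, not from the lecture): under a true H0 the rejection rate converges on α, and under a real effect the rejection rate estimates power (1 − β).

```python
# Sketch: estimate the Type I error rate and power by Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 30, 5_000

def rejection_rate(true_d):
    """Proportion of simulated two-group studies in which H0 is rejected."""
    rejections = sum(
        stats.ttest_ind(rng.normal(0, 1, n),
                        rng.normal(true_d, 1, n)).pvalue < alpha
        for _ in range(reps)
    )
    return rejections / reps

print("Type I error rate (H0 true):", rejection_rate(0.0))  # ~ .05 (= alpha)
print("Power (true d = 0.8):       ", rejection_rate(0.8))  # ~ .86 (= 1 - beta)
```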
Statistical Power
Statistical power
Statistical power is the probability of correctly rejecting a false H0 (i.e., getting a sig. result when there is a real difference in the population).
Statistical power
• Desirable power > .80
• Typical power ~ .60
(in the social sciences)
• Power becomes higher when any of these increase:
– Sample size (N)
– Critical alpha (α)
– Effect size (Δ)
Power analysis
• Ideally, calculate expected power before conducting a study (a priori), based on:
– Estimated N,
– Critical α,
– Expected or minimum ES (e.g., from related research)
• Report actual power (post-hoc) in the results.
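For instance, an a priori power analysis for a two-group design can be sketched as below. The use of statsmodels is my assumption for illustration; the lecture itself points to an online calculator (next slide) and G*Power.

```python
# Sketch: a priori power analysis for an independent-groups t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with n = 64 per group, expected d = .5, critical alpha = .05:
print(analysis.solve_power(effect_size=0.5, nobs1=64, alpha=0.05))    # ~ .80

# N per group needed to reach power = .80 for the same expected effect:
print(analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05))  # ~ 64
```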
Power analysis for MLR
• Free, online post-hoc power calculator for MLR:
http://www.danielsoper.com/statcalc3/calc.aspx?id=9
Summary: Statistical power
1. Power = probability of detecting a real effect as statistically significant
2. Increase by:
– ↑ N
– ↑ critical α
– ↑ ES
3. Power:
– >.8 is “desirable”
– ~.6 is more typical
4. Can be calculated prospectively and retrospectively
Effect Sizes
What is an effect size?
• A measure of the strength (or size) of a relationship or effect.
• Where p is reported, also present an effect size.
• "Reporting and interpreting effect sizes in the context of previously reported effects is essential to good research" (Wilkinson & APA Task Force on Statistical Inference, 1999, p. 599)
Why use an effect size?
• An inferential test may be statistically significant (i.e., the result is unlikely to have occurred by chance), but this doesn’t indicate how large the effect is (it might be trivial).
• On the other hand, there may be non-significant but notable effects (esp. in low-powered tests).
• Unlike significance testing, effect sizes are not influenced by N.
Commonly used effect sizes
Correlational
• r, r², sr²
• R, R²
Mean differences
• Standardised mean difference, e.g., Cohen’s d
• Eta-squared (η²), partial eta-squared (ηp²)
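The slide notes (further below) observe that these two families are interconvertible (r ↔ d; η² ↔ R). A minimal sketch of the standard conversions between Cohen's d and the point-biserial r, assuming two groups of equal size (the numeric inputs are illustrative):

```python
# Sketch: converting between Cohen's d and r (equal group sizes assumed).
import math

def d_to_r(d: float) -> float:
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r: float) -> float:
    return 2 * r / math.sqrt(1 - r ** 2)

print(d_to_r(0.8))    # ~ .37
print(r_to_d(0.371))  # ~ .80 (round trip)
```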
Standardised mean difference
The difference between two means in standard deviation units:
−ve = negative difference
0 = no difference
+ve = positive difference
Standardised mean difference
• A standardised measure of the difference between two Ms:
– d = (M2 – M1) / σ
– d = (M2 – M1) / pooled SD
• e.g., Cohen's d, Hedges' g
• Not readily available in SPSS; use a separate calculator, e.g., Cohensd.xls
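Since d is not built into SPSS, it is also simple to compute by hand. A minimal sketch of the pooled-SD version of the formula above (the data values are hypothetical):

```python
# Sketch: Cohen's d using the pooled standard deviation.
import numpy as np

def cohens_d(x1, x2):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    pooled_sd = np.sqrt(((n1 - 1) * x1.var(ddof=1) +
                         (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    return (x2.mean() - x1.mean()) / pooled_sd

print(cohens_d([4, 5, 6, 5], [6, 7, 8, 7]))  # ~ 2.45 (a very large effect)
```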
Example effect sizes
[Figure: four pairs of overlapping normal distributions (Group 1 vs. Group 2), illustrating d = .5, d = 1, d = 2, and d = 4]
Rules of thumb for interpreting standardised mean differences
• Cohen (1977): .2 = small; .5 = moderate; .8 = large
• Wolf (1986): .25 = educationally significant; .50 = practically significant (therapeutic)
• Standardised mean ESs are proportional, e.g., .40 is twice as much change as .20
Interpreting effect size
• No agreed standards for how to interpret an ES
• Interpretation is ultimately subjective
• Best approach is to compare with other similar studies
The meaning of an effect size depends on context
• A small ES can be impressive if, e.g., a variable is:
– difficult to change (e.g., a personality construct) and/or
– very valuable (e.g., an increase in life expectancy)
• A large ES doesn’t necessarily mean that there is any practical value, e.g., if it isn’t related to the aims of the investigation (e.g., religious orientation)
Graphing standardised mean effect size – Example
Standardised mean effect size table – Example
Power & effect sizes in psychology
Ward (2002) examined articles in 3 psych. journals to assess the current status of statistical power and effect size measures:
• Journal of Personality and Social Psychology
• Journal of Consulting and Clinical Psychology
• Journal of Abnormal Psychology
Power & effect sizes in psychology
• 7% of studies estimated or discussed statistical power.
• 30% calculated ES measures.
• The average ES across studies was medium.
• Current research designs typically do not have sufficient power to detect such an ES.
Summary: Effect size
1. Standardised size of difference or strength of relationship
2. Inferential tests should be accompanied by ESs and CIs
3. Common bivariate ESs include Cohen’s d and correlation r
4. Cohen’s d is not in SPSS – use an online effect size calculator
Confidence Intervals
Confidence intervals
• Very useful, underutilised
• Gives a ‘range of certainty’ or ‘area of confidence’, e.g., a population M is 95% likely to lie between −1.96 SD and +1.96 SD of the sample M
• Expressed as a lower limit and an upper limit
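A CI for a sample mean can be computed directly from its standard error. A minimal sketch (hypothetical data; it uses the t distribution rather than the ±1.96 normal approximation, which matters for small N):

```python
# Sketch: 95% confidence interval for a mean, M +/- t_crit * SE.
import numpy as np
from scipy import stats

x = np.array([4.2, 5.1, 6.0, 5.5, 4.8, 5.9, 5.2, 4.6])  # hypothetical scores
m, se = x.mean(), stats.sem(x)
lower, upper = stats.t.interval(0.95, df=len(x) - 1, loc=m, scale=se)
print(f"M = {m:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```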
Confidence intervals
• CIs can be reported for:
– B (unstandardised regression coefficient) in MLR
– Ms
– ESs (e.g., r, R, d)
• CIs can be examined statistically and graphically (e.g., error-bar graphs)
CIs & error bar graphs
● CIs can be presented as error bar graphs
● Shows the mean and the upper and lower CI limits
● A more informative alternative to bar graphs or line graphs
CIs in MLR
● In this example, CIs for Bs indicate that we should not reject the possibility that the population Bs are zero, except for Attentiveness (we are 95% sure that the true B for Attentiveness is between .91 and 2.16).
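In practice, CIs for Bs are reported by regression software. A sketch using statsmodels OLS (my choice for illustration — the lecture's example shows SPSS output; the data below are simulated, with one real and one null predictor):

```python
# Sketch: 95% CIs for unstandardised regression coefficients (Bs) in MLR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))             # two simulated predictors
y = 1.5 * X[:, 0] + rng.normal(size=100)  # only x1 has a real effect

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.conf_int(alpha=0.05))  # rows: const, x1, x2 -> [lower, upper]
# A 95% CI that spans 0 (expected for x2 here) means: do not reject H0 (B = 0).
```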
Confidence intervals: Review question 1
Question: If a MLR predictor has a B = .5, with a 95% CI of .25 to .75, what should be concluded?
A. Do not reject H0 (that B = 0)
B. Reject H0 (that B = 0)
Confidence intervals: Review question 2
Question: If a MLR predictor has a B = .2, with a 95% CI of −.2 to .6, what should be concluded?
A. Do not reject H0 (that B = 0)
B. Reject H0 (that B = 0)
Summary: Confidence interval
1. Gives a ‘range of certainty’
2. Can be used for B, M, ES
3. Can be examined:
– statistically (upper and lower limits)
– graphically (e.g., error-bar graphs)
Publication Bias
Publication bias
• When publication of results depends on their nature and direction.
• Studies that show significant effects are more likely to be published.
• Type I publication errors are underestimated to an extent that is: “frightening, even calling into question the scientific basis for much published literature.” (Greenwald, 1975, p. 15)
File-drawer effect
• The tendency for non-significant results to be ‘filed away’ (hidden) and not published.
• The number of null studies that would have to be ‘filed away’ in order for a body of significant published effects to be considered doubtful.
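The second bullet describes what is usually quantified as Rosenthal's (1979) fail-safe N. A minimal sketch (the study Z values are invented for illustration):

```python
# Sketch: Rosenthal's fail-safe N -- how many hidden null studies (mean Z = 0)
# would drag k significant results below one-tailed p = .05 (Z = 1.645).
def fail_safe_n(z_scores, z_crit=1.645):
    k = len(z_scores)
    return (sum(z_scores) / z_crit) ** 2 - k

print(fail_safe_n([2.1, 1.8, 2.5, 1.7, 2.2]))  # ~ 34 filed-away studies
```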
Two counteracting biases
There are two counteracting biases in social science research:
• Low power → under-estimation of real effects
• Publication bias (file-drawer effect) → over-estimation of real effects
Funnel plots
• A scatterplot of treatment effect against study size.
• Precision in estimating the true treatment effect increases as N increases.
• Small studies scatter more widely at the bottom of the graph.
• In the absence of bias, the plot should resemble a symmetrical inverted funnel.
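A funnel plot is easy to draw given each study's effect and standard error. The sketch below simulates bias-free studies (all values invented), so the points should scatter symmetrically inside the funnel:

```python
# Sketch: funnel plot of simulated, bias-free studies.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
true_effect = 0.4
n_per_study = rng.integers(20, 500, size=60)  # 60 studies of varying size
se = 1 / np.sqrt(n_per_study)                 # precision grows with N
effects = rng.normal(true_effect, se)         # observed study effects

plt.scatter(effects, se)
plt.axvline(true_effect, linestyle="--")      # the true treatment effect
plt.gca().invert_yaxis()                      # most precise studies at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot (no publication bias)")
plt.show()
```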
Funnel plots
[Figure: symmetrical funnel plot of risk ratio (mortality; x-axis 0.025 to 40) against standard error (y-axis 0 to 2) – no evidence of publication bias]
• Standard error decreases as sample size increases.
Publication bias
[Figure: asymmetrical funnel plot with a gap where studies with non-sig. results are missing]
• Publication bias: asymmetrical appearance of the funnel plot, with a gap in a bottom corner of the funnel plot.
• As studies become less precise, results should be more variable, scattered to both sides of the more precise larger studies… unless there is publication bias.
Publication bias
• If there is publication bias, this will cause meta-analyses to overestimate effects.
• The more pronounced the funnel plot asymmetry, the more likely it is that the amount of bias will be substantial.
Countering the bias
http://www.jasnh.com
Countering the bias
http://www.jnr-eeb.org/index.php/jnr
Summary: Publication bias
1. The tendency for statistically significant studies to be published over non-significant studies
2. Indicated by a gap in the funnel plot → file-drawer effect
3. Counteracting biases in scientific publishing; the tendencies are:
– towards low-power studies which underestimate effects
– to publish sig. effects over non-sig. effects
Academic Integrity
Academic integrity: Students (Marsden, Carroll, & Neill, 2005)
• N = 954 students enrolled in 12 faculties of 4 Australian universities
• Self-reported:
– Plagiarism (81%)
– Cheating (41%)
– Falsification (25%)
Academic integrity: Academic staff – Example
http://www.abc.net.au/news/2014-04-17/research-investigations-mounting-for-embattled-professor/5397516
Academic integrity: Academic staff
http://www.abc.net.au/7.30/content/2013/s3823977.htm
Academic integrity: Academic staff – Example
http://www.abc.net.au/news/2014-04-04/uq-research-retraction-barwood-murdoch/5368800
A lot of what is published is incorrect
1. “Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance.”
2. “Scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.”
3. “Our love of ‘significance’ pollutes the literature with many a statistical fairy-tale.”
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60696-1/fulltext
Summary: Academic integrity
1. Violations of academic integrity are evident and prevalent amongst those with incentives to do so:
– students
– researchers
– commercial sponsors
2. Adopt a balanced, critical approach, striving for objectivity and academic integrity
Statistical Methods in Psychology Journals: Guidelines and Explanations (Wilkinson, 1999)
Method – Design (Wilkinson, 1999)
Method – Population (Wilkinson, 1999)
Method – Sample (Wilkinson, 1999)
Method – Nonrandom assignment (Wilkinson, 1999)
Method – Instruments (Wilkinson, 1999)
Method – Variables (Wilkinson, 1999)
Method – Procedure (Wilkinson, 1999)
Method – Power and sample size (Wilkinson, 1999)
Results – Complications (Wilkinson, 1999)
Results – Min. sufficient analysis (Wilkinson, 1999)
Results – Computer programs (Wilkinson, 1999)
Results – Assumptions (Wilkinson, 1999)
Results – Hypothesis tests (Wilkinson, 1999)
Results – Effect sizes (Wilkinson, 1999)
Results – Interval estimates (Wilkinson, 1999)
Results – Causality (Wilkinson, 1999)
Discussion – Interpretation (Wilkinson, 1999)
Discussion – Conclusions (Wilkinson, 1999)
Further resources
• Statistical significance (Wikiversity): http://en.wikiversity.org/wiki/Statistical_significance
• Effect sizes (Wikiversity): http://en.wikiversity.org/wiki/Effect_size
• Statistical power (Wikiversity): http://en.wikiversity.org/wiki/Statistical_power
• Confidence interval (Wikiversity): http://en.wikiversity.org/wiki/Confidence_interval
• Academic integrity (Wikiversity): http://en.wikiversity.org/wiki/Academic_integrity
• Publication bias (Wikiversity): http://en.wikiversity.org/wiki/Publication_bias
References
1. Marsden, H., Carroll, M., & Neill, J. T. (2005). Who cheats at university? A self-report study of dishonest academic behaviours in a sample of Australian university students. Australian Journal of Psychology, 57, 1-10. http://wilderdom.com/abstracts/MarsdenCarrollNeill2005WhoCheatsAtUniversity.htm
2. Ward, R. M. (2002). Highly significant findings in psychology: A power and effect size survey. http://digitalcommons.uri.edu/dissertations/AAI3053127/
3. Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
Open Office Impress
● This presentation was made using Open Office Impress.
● Free and open source software.
● http://www.openoffice.org/product/impress.html
Slide notes
  • 7126/6667 Survey Research & Design in Psychology
    Semester 1, 2016, University of Canberra, ACT, Australia
    James T. Neill
    Home page: http://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology
    Lecture page: http://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology/Lectures/Power_%26_effect_sizes
    Image name: Nuvola filesystems socket
    Image source: http://commons.wikimedia.org/wiki/File:Nuvola_filesystems_socket.png
    Image author: David Vignoni, http://en.wikipedia.org/wiki/David_Vignoni
    License: GNU LGPL, http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License
    Description:
    Explains use of statistical power, inferential decision making, effect sizes, confidence intervals in applied social science research, and addresses the issue of publication bias and academic integrity.
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg
    License: Public domain
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg
    License: Public domain
  • If I flipped a coin multiple times and called out the result each time, how many flips would it take, if I was flipping straight heads, before you would suspect that I wasn’t telling the truth or was using a biased coin?
    This is the logic of ST: How different does a sampling distribution need to be in order for us to conclude that it is different from our expectations?
    See: A Coin-Flipping Exercise to Introduce the P-Value: http://www.amstat.org/publications/jse/v2n1/maxwell.html
    Image: http://www.codefun.com/Genetic_Info1.htm
  • ST evolved and became a 20th-century science phenomenon.
    Karl Pearson laid the foundation for ST as early as 1901 (Glaser, 1999).
    Sir Ronald Fisher (1920s-1930s; 1925) developed ST for agricultural data to help determine agricultural effectiveness, e.g., whether plants grew better using fertilizer A vs. B.
    Image source: https://commons.wikimedia.org/wiki/File:R._A._Fischer.jpg
    Image license: Released under the GNU Free Documentation License; PD-OLD-7
    Image author: USDA Natural Resources Conservation Service, http://photogallery.nrcs.usda.gov/res/sites/photogallery/
    Image source: http://commons.wikimedia.org/wiki/File:NRCSCA06195_-_California_(1257)(NRCS_Photo_Gallery).jpg
    Image license: Public domain
  • These designs couldn’t be fully experimental, e.g., because variables (such as weather, soil quality, etc.) could not be controlled. Therefore they needed a way to determine whether plant growth differences were due to chance or due to agricultural methods.
  • For more info: http://www.aacn.org/AACN/jrnlajcc.nsf/GetArticle/ArticleThree85?OpenDocument
  • Cohen (1980s-1990s) critiqued ST: inherent problems and weaknesses with ST tended to be ignored, even though formal critiques and alternatives had long been available.
  • Whether a result is significant or not is a function of:
    – Effect size (ES)
    – N
    – Critical alpha (α) level
    Sig. can be manipulated by tweaking any of the three; as each of them increases, so does the likelihood of a significant result.
  • Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56(5), 746-759.
  • Image source: http://commons.wikimedia.org/wiki/File:Crystal_Clear_action_apply.png
    Image license: GFDL 2.1, http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License
  • Image source: http://commons.wikimedia.org/wiki/File:Crystal_Clear_action_button_cancel.png
    Image license: GFDL 2.1, http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License
  • Power = “The probability of detecting a given effect size in a population from a sample of size N, using significance criterion α”
    Image source: https://commons.wikimedia.org/wiki/File:Emoji_u1f4aa.svg
    Image author: Google, 2016
    Image license: Apache License, Version 2.0
  • Power = 1 − β, i.e., 1 minus the likelihood that an inferential test won’t return a sig. result when there is a real difference.
    An inferential test is more ‘powerful’ (i.e., more likely to get a significant result) when any of these three increase:
    – ES (i.e., strength of relationship or effect)
    – Sample size (N)
    – Critical alpha
  • Image screen shot from: http://www.danielsoper.com/statcalc3/calc.aspx?id=9
    Or, for more sophisticated use, consider G*Power: http://www.gpower.hhu.de/
  • d is a standardised difference between two means, commonly referred to as Cohen’s d.
    Eta-squared is the proportion of total variance in a DV that can be attributed to the IV(s).
    r can be converted to d, and d can be converted to r.
    Eta-squared can be converted to R, and R can be converted to eta-squared.
  • From slide 9: elderlab.yorku.ca/.../lectures/05%20Power%20and%20Effect%20Size/05%20Power%20and%20Effect%20Size.ppt
  • “Cohen’s (1977) caveat that it is better to obtain comparison standards from the professional literature than to use arbitrary guidelines of small, medium, and large effect sizes.” – Cason & Gills (1994, p. 44)
  • Memory and attention for schizophrenics and normals.
    Image: http://www.cognitivedrugresearch.com/newcdr/imageuploads/medium-untitled.jpg
  • Oral cancer screen leaflets were given to patients in dental waiting rooms, followed by a measure of their oral cancer knowledge and attitudes and intention towards screening tests.
    The immediate effect on knowledge, attitudes and intentions in primary care attenders of a patient information leaflet: a randomized control trial replication and extension
    Image: http://www.nature.com/bdj/journal/v194/n12/fig_tab/4810283t3.html
    Article Abstract: http://www.nature.com/bdj/journal/v194/n12/abs/4810283a.html
  • Ward, R. M. (2002). Highly significant findings in psychology: A power and effect size survey. http://digitalcommons.uri.edu/dissertations/AAI3053127/
  • If the CI around a mean difference does not include 0, then we can conclude that there is a significant difference between the means.
  • Image source: this document
    Image author: James Neill, 2014
    Image license: Public domain
  • Answer: B
  • Answer: A
  • Scargle, J. D. (2000). Publication bias: The “file-drawer problem” in scientific inference. Journal of Scientific Exploration, 14(2), 94-106.
    http://www.scientificexploration.org/jse/abstracts/v14n1a6.php
    http://en.wikipedia.org/wiki/Publication_bias
  • Image source: http://commons.wikimedia.org/wiki/File:Edge-drawer.png
    Image license: GFDL 2, http://en.wikipedia.org/wiki/GNU_General_Public_License
    See also: http://skepdic.com/filedrawer.html
  • Power (on average) is moderate, ~.6, resulting in the published literature under-reporting real effects due to insufficient power.
    On the other hand, as if to ‘compensate’, journal editors and reviewers seem to favour publication of studies which report significant results.
  • Funnel plot slides are from: ssrc.tums.ac.ir/SystematicReview/Assets/publicationbias.ppt
  • Source: Unknown
  • Note that the funnel plot’s X-axis in this example represents a bias towards not publishing large effects (more commonly the bias is towards not publishing non-significant results).
  • Also see: Hubbard, R., & Armstrong, J. S. (1992). Are null results becoming an endangered species in marketing? Marketing Letters, 3(2), 127-136.
  • Marsden, H., Carroll, M., & Neill, J. (2005). Who cheats at university? A self-report study of dishonest academic behaviours in a sample of Australian university students. Australian Journal of Psychology, 57(1), 1-10.
  • http://www.abc.net.au/news/2014-04-17/research-investigations-mounting-for-embattled-professor/5397516
    http://retractionwatch.com/category/levon-khachigian/
  • Show:
    0:00-2:10 – Intro (2:10)
    2:10-4:45 – Prof. David Vaux (2:35)
    12:20-15:49 (3:29)
    17:14-17:48 – Independent review (0:36)
    Total: 8:40 mins
    Skip: 4:45-12:20, 15:49-17:14, 17:48-
  • For more examples of retracted studies, see Retraction Watch: http://retractionwatch.com/