Transcript

  • 1. Essentials of Manuscript Review Gary G. Poehling, MD – Editor-in-Chief Wake Forest University School of Medicine Winston-Salem, North Carolina, USA
  • 2. Essentials of Manuscript Review Arthroscopy The Journal of Arthroscopic and Related Surgery
  • 3. Essentials of Manuscript Review
    • How to optimize scientific communication
    • How to organize a manuscript
    • How to use the essentials of statistics
    • How to review a submitted manuscript
    Learning objectives of this course
  • 4. How to Organize a Manuscript
    • Text (of an Original Article)
    • Introduction
    • Methods
    • Results
    • Discussion
    • Conclusion
  • 5. How to Organize a Manuscript
    • Supporting Structure
    • Title
    • Abstract
    • References
    • Figures
    • Tables
  • 6. Introduction
    • Present succinct referenced review
    • Create reader interest
    • Identify controversy
    • State purpose
    • State hypothesis
  • 7. Introduction
    • Controversy stimulates questions.
    • The purpose of any study is to answer a question.
    • The hypothesis is a tentative theory in which you state what you believed you would find *before* you started the study.
  • 8. Introduction
    • Example
    • Controversy: Is arthroscopy of the knee beneficial to patients with osteoarthritis (OA)?
    • Purpose: To determine whether arthroscopy of the knee benefits patients with OA.
    • Hypothesis: Arthroscopy of the knee is of moderate benefit to patients with OA.
  • 9. Methods
    • Should include:
    • Technical description of the study design
      • Make it complete
      • Make it reproducible
    • Rationale for experimental design
    • Statistical methods
  • 10. Methods
    • Study Design
    • Focus on the purpose.
    • Select the type of study that fits your purpose.
    • Use valid measurement tools.
    • Apply appropriate statistical methods.
  • 11. Methods
    • Study Design
    • Flaws in study design can be fatal.
  • 12. Levels of Evidence
    • Therapeutic Studies
    • Prognostic Studies
    • Diagnostic Studies
    • Economic and Decision Analyses
    Types of Studies—Clinical Only
  • 13. Levels of Evidence (pg 5 of the Journal’s Instructions for Authors)
  • 14. Levels of Evidence
    • Randomized controlled trial = Level I or II
    • Comparative study = Level II or III
    • Case-control study = Level III
    • Case series study = Level IV
    Therapeutic Studies
  • 15. Types of Studies
    • RETROSPECTIVE - Looks Back
    • Easier to do: collect data & see what you have
    • Higher risk of bias
    • PROSPECTIVE - Looks Ahead
    • Better type of study
    • Long time to complete
    • Considerable effort & resources
    • Straightforward conclusions
  • 16. Types of Studies
    • Observational Studies
    • Nature is allowed to take its course.
    • Investigator does *not* intervene.
    • Retrospective design
      • Case report - single subject - no controls
      • Case series - multiple subjects - no controls
    Level of Evidence = IV
  • 17. Types of Studies
    • Observational Studies
    • Retrospective design: case-control study
      • Starts with subjects who have a disease
      • Requires suitable control group without the disease
        • Look for suspected risk factor in both groups
        • May help determine causal relationships
        • Use it to study conditions with low incidence
    Level of Evidence = III
  • 18. Types of Studies
    • Observational Studies
    • Prospectively designed comparative study = patients treated one way compared with patients treated another way at the same institution
    Level of Evidence = II
    • Retrospectively designed comparative study:
    • Level of Evidence = III
  • 19. Types of Studies
    • Experimental Studies - Prospective
    • Investigator has total control of patient allocation.
    • Independent variables (manipulated by the investigator) are systematically changed.
      • Example: Knee Arthroscopy vs. a Placebo in Patients with Osteoarthritis
    • Investigator makes observations: How do these changes affect the dependent variables?
      • Examples: Pain (more or less?); Function (better or worse?)
  • 20. Types of Studies
    • Experimental Studies - Prospective
    • Randomized Controlled Trials
      • Population is randomly allocated.
      • Study group: All receive intervention.
      • Control group: All lack the intervention or receive the standard treatment.
      • Example: Knee arthroscopy vs. placebo in VA patients with osteoarthritis
    Level of Evidence = I or II
  • 21. Levels of Evidence (pg 5 of the Journal’s Instructions for Authors)
  • 22. Data Collection Instruments
    • Requirements
    • Reliable
    • Valid
    • Responsive
    • Universal
    • Unbiased
  • 23. Data Collection Instrument
    • Is it reliable?
    • Will the instrument measure consistently across:
    • Different testing situations?
      • Test-retest reliability
    • Different judges?
      • Inter-rater reliability
  • 24. Data Collection Instrument
    • Is it valid?
    • Is the instrument being used to measure the kind of data for which it was intended?
  • 25. Data Collection Instrument
    • Is it responsive?
    • The instrument should be equally sensitive, whether a characteristic is present or absent.
    • For example, MRI vs. physical examination for isolated tears of the ACL . . .
    • Must measure both as compared to arthroscopy.
    • False-negatives:
    • You thought it was intact, but it was torn.
    • False-positives:
    • You thought it was torn, but it was intact.
  • 26. Data Collection Instrument
    • Is it universal?
    • The investigator should employ a widely used data collection instrument, which helps minimize reporting bias because the data can then be compared with other published literature.
  • 27. Data Collection Instrument
    • There should be no difference between the true value and the value that an investigator actually obtains, other than a difference caused by sampling variability.
    Is it unbiased?
  • 28. Bias in Clinical Trials
    • Areas in which bias can occur
    • Systematic error in . . .
    • Allocation
    • Response
    • Assessment
  • 29. Bias in Clinical Trials
    • Allocation or Susceptibility Bias
    • Can occur when patient assignments to a trial group are influenced by an investigator’s knowledge of the treatment to be received.
    • Can result in treatment groups that have different prognoses.
  • 30. Bias in Clinical Trials
    • Allocation or Susceptibility Bias
    • Treatment groups must have similar prognoses, which is achieved by:
      • Randomization of patients
      • Prospective evaluation of patients
      • Well-defined inclusion and exclusion criteria
  • 31. Randomization in Clinical Trials
    • Occurs when patients are assigned to treatments by means of a mechanism that prevents both the patients and the investigator from knowing which treatment is being assigned.
  • 32. Benefits of Randomization
    • Prevents the systematic introduction of bias.
    • Minimizes the possibility of allocation bias.
    • Balances prognostic factors for treatment groups.
    • Improves the validity of statistical tests used to compare treatments.
  • 33. Bias in Clinical Trials
    • Response & Assessment/Recording Bias
    • Can occur when a patient reports a treatment response or when an investigator assesses that response—either person can be influenced by knowing the treatment.
    • A patient or an investigator may have a preconceived idea of which treatment is better. The patient may also want to please the investigator.
  • 34. Bias in Clinical Trials
    • Blinding
    • To minimize Response & Assessment/Recording Bias
    • Single Blind (patient blinded): protects against response bias.
    • Double Blind (patient and investigator blinded): protects against assessment/recording bias as well as response bias.
  • 35. Bias in Clinical Trials
    • Transfer bias
      • Occurs when patients are lost to follow-up.
      • Must be minimized.
    • Performance bias
      • Can occur with a single surgeon or with multiple surgeons.
  • 36. Rationale for Experimental Design
    • Here the investigator explains how the methods address the purpose of the study.
    • The rationale for experimental design also is used to clarify basic science for lay readers.
  • 37. Statistics: Standard Distribution Curve
  • 38.–41. Statistics: Standard Deviation Curve
  • 42. Statistics
    • Z-Score
    • Another way to view standard deviation:
      • Number of Standard Deviations Needed
  • 43. Statistics
    • Z-score
    • Another way to view standard deviation:
      • Number of Standard Deviations Needed
      • 95% = 1.96 SD (use Z-Score 2)
  • 44. Statistics
    • Z-score
    • Another way to view standard deviation:
      • Number of Standard Deviations Needed
      • 95% = 1.96 SD (use Z-Score 2)
      • 99% = 2.58 SD (use Z-Score 2.5)
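For readers who want to check these values, here is a minimal Python sketch (it assumes scipy is available; only the 95% and 99% levels come from the slides):

```python
# Quick check of the Z-scores quoted on slides 42-44: the number of standard
# deviations that covers 95% or 99% of a normal distribution (two-sided).
from scipy.stats import norm

print(norm.ppf(1 - 0.05 / 2))   # 1.959... -> about 1.96 SD for 95%
print(norm.ppf(1 - 0.01 / 2))   # 2.575... -> about 2.58 SD for 99%
```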
  • 45. Statistics
    • Standard Error
    • Standard Error (SEM, SDM) = Sample Quality
      • Standard deviation of the sample divided by the square root of the sample size
  • 46. Statistics
    • Confidence Interval
    • (Z-score x Standard Error)
  • 47. Statistics
    • Confidence Interval
    • (Z-score x Standard Error)
    • Z-Score: Number of Standard Deviations
  • 48. Statistics
    • Confidence Interval
    • (Z-score x Standard Error)
    • Z-Score: Number of Standard Deviations
    • Standard Error: Sample Quality
  • 49. Hypothesis
    • What the investigator believes—before the study begins—that the study can prove.
  • 50. Null Hypothesis
    • A statement of no effect
    • A “null hypothesis” is the converse of what the investigator believes can be proved.
  • 51. Standard Deviation Curve: Null Hypothesis vs. Hypothesis
  • 52.–59. Null Hypothesis
    • State of the World (the population) vs. Your Decision Based on Data: the 2 × 2 decision table is built up across these slides.
      • Reject the Null Hypothesis when it is True: Type I Error
      • Reject the Null Hypothesis when it is False: Correct Decision
      • Do Not Reject the Null Hypothesis when it is True: Correct Decision
      • Do Not Reject the Null Hypothesis when it is False: Type II Error
  • 60. Null Hypothesis
    • Type I error α (alpha error)
  • 61. Null Hypothesis
    • Type I error α (alpha error)
    • Occurs when rejecting the null hypothesis, although the null hypothesis actually is true (sampling error – bias).
  • 62. Null Hypothesis
    • Type I error α (alpha error)
    • Occurs when rejecting the null hypothesis, although the null hypothesis actually is true (sampling error – bias).
    • Type II error β (beta error)
  • 63. Null Hypothesis
    • Type I error α (alpha error)
    • Occurs when rejecting the null hypothesis, although the null hypothesis actually is true (sampling error – bias).
    • Type II error β (beta error)
    • Occurs when accepting the null hypothesis, although the null hypothesis actually is false (too-small sample).
  • 64. Statistical Power
    • Probability that the null hypothesis will be rejected if it is indeed false
  • 65. Statistical Power
    • Probability that the null hypothesis will be rejected if it is indeed false
    • The capacity to detect a difference, if one exists
  • 66. Statistical Power
    • Probability that the null hypothesis will be rejected if it is indeed false
    • The capacity to detect a difference, if one exists
    • Power = 1 − β (type II error)
  • 67. Statistical Power
    • N = 2σ²(Z₁₋α + Z₁₋β)² / δ²
    • σ = Standard deviation of outcome (variability)
      • Assumed to be known.
      • Estimated from pilot data.
      • Obtained from the literature.
    • Z₁₋α = Allowable type I error
    • Z₁₋β = Allowable type II error
    • δ = Difference the investigator wants to detect
      • Between (or among) groups
    www.mc.vanderbilt.edu/prevmed/ps.htm
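As a rough illustration of the sample-size formula above, the sketch below computes a per-group N in Python. It assumes scipy is available, and the values chosen for σ, δ, α, and β are illustrative assumptions, not numbers taken from the presentation.

```python
# Minimal sketch of N = 2*sigma^2*(Z_(1-alpha) + Z_(1-beta))^2 / delta^2,
# following the form written on the slide.
from scipy.stats import norm
import math

sigma = 10.0    # assumed standard deviation of the outcome (variability)
delta = 5.0     # assumed difference the investigator wants to detect
alpha = 0.05    # allowable Type I error
beta = 0.20     # allowable Type II error (power = 0.80)

z_alpha = norm.ppf(1 - alpha)   # Z(1-alpha), as on the slide
z_beta = norm.ppf(1 - beta)     # Z(1-beta)

n_per_group = 2 * sigma**2 * (z_alpha + z_beta) ** 2 / delta**2
print(math.ceil(n_per_group))   # about 50 patients per group with these inputs
```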
  • 68. Statistical Power
    • Problems with Inadequate Power
    • Chances of false-negative findings increase.
      • Note that failing to show a difference is not the same as showing that no difference exists.
  • 69. Statistical Power
    • Problems with Inadequate Power
    • Chances of false-negative findings increase.
      • Note that failing to show a difference is not the same as showing that no difference exists.
    • Wastes the time of patients & investigators.
  • 70. Statistical Power
    • Problems with Inadequate Power
    • Chances of false-negative findings increase.
      • Note that failing to show a difference is not the same as showing that no difference exists.
    • Wastes the time of patients & investigators.
    • Wastes money.
  • 71. Evidence in Clinical Research
    • A p-value is the probability, if the null hypothesis is true, of obtaining a test statistic as extreme as, or more extreme than, the one derived from this study’s data for this patient population.
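To make that definition concrete, here is a small Python simulation (assuming numpy and scipy are available; the data are made up): with a true null hypothesis, a result extreme enough to give p < .05 appears only about 5% of the time.

```python
# Two groups drawn from the same population, so the null hypothesis is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, size=30)
group_b = rng.normal(100, 15, size=30)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(p_value)   # usually not small; under a true null, p < .05 occurs ~5% of the time
```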
  • 72. Evidence in Clinical Research
    • P-values do not provide simple Yes or No answers.
  • 73. Evidence in Clinical Research
    • P-values do not provide simple Yes or No answers.
    • Instead, p-values provide general ideas about the strength of evidence against null hypotheses.
  • 74. Evidence in Clinical Research
    • P-values do not provide simple Yes or No answers.
    • Instead, p-values provide general ideas about the strength of evidence against null hypotheses.
    • The lower the p-value, the stronger the evidence.
  • 75. Evidence in Clinical Research
    • The Confidence Interval (CI) indicates a range of likely differences.
  • 76. Evidence in Clinical Research
    • The Confidence Interval (CI) indicates a range of likely differences.
    • Less confusion exists in the literature about Confidence Intervals because:
      • The range of possible true values is more clearly stated than with p-values.
  • 77. Evidence in Clinical Research
    • The Confidence Interval (CI) indicates a range of likely differences.
    • Less confusion exists in the literature about Confidence Intervals because:
      • The range of possible true values is more clearly stated than with p-values.
      • Apparently contradictory research can be found to have overlapping Confidence Intervals.
  • 78. Results
    • Clear Presentation of Data
    • Organize the results in the same order as the Methods.
    • The numbers must add up.
    • Every result reported must have been proposed in the Methods.
    • Everything described in the Methods must be reported.
    • Text must be consistent with tables & figures.
  • 79. Results
    • Evidence in Clinical Research
    • Confidence Interval = range of differences between (or among) treatment groups. Confidence Interval data are extremely useful and their use needs to be encouraged.
    • p < .05 = statistical significance
    • 95% sure that the difference is true – anything else assumes that it is not different or that the null hypothesis is true.
  • 80. Misinterpretation of Results
    • Comparison of 2 Groups
    • Failure to show a difference is not the same as showing that there is no difference – lack of power.
  • 81. Discussion
    • Compare Your Results to Previous Studies.
    • Discuss similarities and differences.
    • Clarify the meaning of your results.
  • 82. Discussion
    • Speculate – if reasonable and feasible.
    • Clearly distinguish your theories or opinions from your conclusions, which are based on your results.
    Consider alternative explanations.
  • 83. Discussion: Include Limitations.
    • Point out study weaknesses.
    • Specifically consider bias:
      • Allocation (Susceptibility)
      • Response
      • Assessment (Recording)
      • Transfer
      • Performance
  • 84. Conclusion
    • Here you must address the hypothesis: was it proved?
    • What did your data support?
    • What did your results show?
    • Your conclusion must *not* include statements that lie outside the study’s scope.
    • Expressed another way . . .
    • Make no statements in the conclusion that the results do not support.
  • 85. Abstract
    • Original articles require a structured abstract (a maximum of 300 words).
    • In the structured abstract, present the essential details.
      • Purpose
      • Methods
      • Results
      • Conclusions
      • Level of Evidence (or Clinical Relevance)
      • Keywords (a maximum of 6)
  • 86. Abstract
    • Technical notes and case reports require . . .
    • An unstructured abstract (200-word maximum)
  • 87. Abstract
    • Technical notes and case reports require . . .
    • An unstructured abstract (200-word maximum)
    • Great majority of these articles -> “hybrids”
  • 88. Abstract
    • Technical notes and case reports require . . .
    • An unstructured abstract (200-word maximum)
    • Great majority of these articles -> “hybrids”
    • Hybrid = unstructured abstract & 1 figure/2 parts
  • 89. Abstract
    • Technical notes and case reports require . . .
    • An unstructured abstract (200-word maximum)
    • Great majority of these articles -> “hybrids”
    • Hybrid = unstructured abstract & 1 figure/2 parts
    • Abstract must give core message of article!
  • 90. Abstract
    • Technical notes and case reports require . . .
    • An unstructured abstract (200-word maximum)
    • Great majority of these articles -> “hybrids”
    • Hybrid = unstructured abstract & 1 figure/2 parts
    • Abstract must give core message of article!
    • Online -> entire article including all figures
  • 91. Title
    • Describe the Topic that was Studied.
    • Accurate and representative of the study’s content and scope
    • Clear
    • Informative
    • Brief
  • 92. References
    • Catalog Previously Published Information.
    • Choose references directly related to the study.
    • Read the complete referenced article.
    • Avoid secondhand or abstract-only reference sources.
    • Check and then double-check the final draft. Hint: Beware the word processor shuffle!
  • 93. Figures
    • “ A Picture is Worth a Thousand Words.”
    • Use figures to clarify your essential point.
    • Label arthroscopic views.
    • Include a self-explanatory legend for *each* figure part.
    • Take care not to mislead.
  • 94. Tables
    • Provide a Concise Summary of Data.
    • Do not repeat material found in the text.
    • Label columns clearly.
    • Group data logically.
    • Check that each table can stand on its own.
      • N, Mean, SD
      • Define all abbreviations, table by table.
  • 95. Reviewer Objectives
    • Faulty Grammar, Syntax, Typos?
    • Yes, a reviewer can mention them.
    • Remedying them is the job of the copy editor.
    • Be sure that errors like these do not cause scientific misunderstandings that the copy editor may not know should be corrected.
  • 96. Reviewer Objectives
    • Find the Pearl of Knowledge.
    • Indicate the strengths of the manuscript.
    • Provide constructive comments.
    • Review critically: uncover flaws in thinking.
    • Check for clarity of presentation.
  • 97. Manuscript Assessment
    • Writing a Review
    • Number your comments for the author’s response by referencing . . .
      • Page Numbers
      • Line Numbers
  • 98. Manuscript Assessment
    • Writing a Review
    • Be sure that the author makes statements only once. If it’s in the introduction, it should not be in the discussion. If it’s in a table, it should not be in the text.
    • Sole exception: If it first appears in the abstract, one repetition elsewhere is OK.
  • 99. Manuscript Assessment
    • Designate time to read it *twice*.
    • First – General. Brief. Let it sink in.
    • Second – Comprehensively mark up, then dictate/write/type review (not a linear critique . . . iterative instead).
      • Address the hypothesis first. Then the conclusions.
      • Methods are key – adequate to answer the question?
      • Results – do these data lead to the conclusions?
      • Discussion – are alternative methods considered?
      • Introduction – is the study properly positioned?
      • Abstract – are all key points included?
  • 100. Writing a Review
    • Suggested Order
    • 1. Introduction
    • 2. Methods
    • 3. Results
    • 4. Discussion
    • 5. Conclusion
    • 6. Abstract
    • 7. Title
    • 8. References
    • 9. Figures
    • 10. Tables
  • 101. Manuscript Assessment
    • Does the Introduction include:
    • Purpose?
    • Hypothesis?
    • Methods
    • Are they reproducible?
    • Do they minimize bias?
    • Do they address the purpose?
    • Is there a rationale for the experimental design? Is the basic science clarified for the lay reader?
  • 102. Manuscript Assessment
    • Results
    • Are they clearly presented and unambiguous?
    • Are they relevant to the study or research problem?
    • Do the tables and figures clarify or confuse?
    • Is there duplication among the text, figures, or tables?
  • 103. Manuscript Assessment
    • Results
    • When results are unbelievably good, they probably are unbelievable!
    • However, the subjective beliefs of a reviewer should not override the objective results of a sound study.
  • 104. Manuscript Assessment
    • Discussion
    • Does it assess the relevant published literature?
    • Does it distinguish author opinion from the conclusions?
    • Does it examine the study’s limitations, including bias?
  • 105. Manuscript Assessment
    • Conclusion
    • Is it based on the data described in the results?
    • Does it address the hypothesis?
    • Does it stray beyond the boundaries of the study?
  • 106. Manuscript Assessment
    • Abstract, Title, References, Figures, Tables
    • Do they follow the guidelines discussed earlier in this online presentation?
  • 107. Manuscript Assessment
    • Establishes a baseline threshold for acceptance.
    • It’s biased and imperfect.
    • Erroneous decisions are inevitable.
    • Reviewer agreement does not assure that the study is accurate or valid.
    • However, the process is indispensable – better than any current alternative.
  • 108. Online Submission & Review
    • http://ees.elsevier.com/arth/
  • 109. Online Submission & Review
    • http://ees.elsevier.com/arth/
    • Journal Website (www.arthroscopyjournal.org) home page also has a link to the online system.
    • Many, many benefits with the online system:
    • * Faster turnaround and no lost-in-mail manuscripts
    • * No postage to pay
    • * Manuscript tracking online for corresponding author
    • * Electronic image files instead of photographic prints
  • 110. A Request for New Reviewers
    • http://ees.elsevier.com/arth/
    • Use the online system to register as an Author, being sure to specify personal classifications.
    • Email the Editorial Office that you have registered as an Author, have attended the Journal Review Course, and want to be a Reviewer.
    • The Editorial Office will make you a Reviewer & soon send you an Original Article to review online.
    • Thanks to the online system, it’s just that easy!
  • 111. Additional Statistical Material
    • Confidence Interval
    • Statistical Power
    • Sample Size
    • Data Types
    • Statistical Tests
  • 112. Confidence Interval (CI)
    • Example: 49 women; mean weight, 140 lbs; standard deviation = 3.5 lbs
    • Standard error = 3.5/7 = 0.5
      • (Standard deviation / square root of the sample size)
    • Z-Score for 95% = 2; Z-Score for 99% = 2.5
    • 95% CI = 140 ± (2 × 0.5) = 140 ± 1; 99% CI = 140 ± (2.5 × 0.5) = 140 ± 1.25
  • 113. Confidence Interval (CI)
    • 95% CI = 140 ± (2 × 0.5) = 140 ± 1
      • 95% confidence that the population’s true mean weight is between 139 lbs. and 141 lbs.
    • 99% CI = 140 ± (2.5 × 0.5) = 140 ± 1.25
      • 99% confidence that the population’s true mean weight is between 138.75 lbs. and 141.25 lbs.
    • Two groups can be significantly different and yet have overlapping Confidence Intervals.
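The arithmetic on slides 112 and 113 can be reproduced with a few lines of Python (a minimal sketch; the rounded Z-scores of 2 and 2.5 follow the slides, while the exact values are 1.96 and 2.58):

```python
# Confidence intervals for the slide example: 49 women, mean 140 lbs, SD 3.5 lbs.
import math

n, mean, sd = 49, 140.0, 3.5
se = sd / math.sqrt(n)                  # standard error = 3.5 / 7 = 0.5

for confidence, z in [(95, 2.0), (99, 2.5)]:
    margin = z * se
    print(f"{confidence}% CI: {mean - margin} to {mean + margin} lbs")
    # 95% CI: 139.0 to 141.0 lbs
    # 99% CI: 138.75 to 141.25 lbs
```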
  • 114. Statistical Power
    • The Capacity to Detect a True Difference
    • A Test Has Greater Power When:
    • The sample size is larger.
    • Variability decreases.
    • The effect size is larger.
    • The chance of Type I error is greater . . .
      • Which may lead the investigator to reject the null hypothesis although it is true.
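The sketch below illustrates these four effects with a simple normal-approximation power calculation in Python (the sample sizes, standard deviations, and effect sizes are illustrative assumptions, not values from the slides):

```python
# Approximate power of a two-sided test comparing two group means.
from scipy.stats import norm
import math

def power(n, sigma, delta, alpha=0.05):
    se = sigma * math.sqrt(2.0 / n)     # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / se - z_crit)

print(power(n=20, sigma=10, delta=5))               # small sample      -> lower power (~0.35)
print(power(n=80, sigma=10, delta=5))               # larger sample     -> higher power (~0.89)
print(power(n=80, sigma=20, delta=5))               # more variability  -> lower power (~0.35)
print(power(n=80, sigma=10, delta=8))               # larger effect     -> higher power (~0.999)
print(power(n=20, sigma=10, delta=5, alpha=0.10))   # larger Type I error -> higher power
```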
  • 115. Sample Size
    • N = 2σ²(Z₁₋α + Z₁₋β)² / δ²
    • N increases as:
    • Variability increases.
    • Type I & Type II errors decrease.
    • The difference that the investigator wants to detect decreases.
  • 116. What are the Data Types?
    • The Type of Scale used to express the Outcome is the Key.
    • Discrete
      • Nominal - put in boxes (e.g., Male vs. Female)
      • Ordinal - rank order (intervals unequal)
        • Stages of disease
  • 117. What are the Data Types?
    • The Type of Scale used to express the Outcome is the Key.
    • Continuous
      • Interval - numerical; intervals equal (e.g., temperature: 80° to 40° F, 26.6° to 4.4° C); supports + and −
      • Ratio - has an absolute zero (e.g., height and weight); supports + and − but also × or ÷
  • 118. What Statistical Test To Run?
    • If Interval Data Only
    • Pearson Correlation - is there a linear correlation?
    • Regression - nature of the relationship
  • 119. What Statistical Test To Run?
    • If Nominal Data Only
    • Chi-Square Test – use with samples > 25
    • Fisher’s Exact Test – use with samples ≤ 25
  • 120. What Statistical Test To Run?
    • If Ordinal Data Only
    • Spearman Rank Order Correlation
  • 121. What Statistical Test To Run?
    • If Interval and Nominal Data Are Combined
    • One-Way Analysis of Variance (ANOVA)
      • 1 interval and 1 nominal variable with > 2 groups
    • Two-Way Analysis of Variance (ANOVA)
      • 1 interval and 2 nominal variables
  • 122. What Statistical Test To Run?
    • If Interval and Nominal Data Are Combined
    • t-test: 1 Interval and 1 Nominal Variable with 2 groups
      • If nondirectional: Two-tailed test
      • If directional: One-tailed test
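For reviewers who want to see these tests in practice, the sketch below runs each test named on slides 118 through 122 with scipy.stats (the small data sets are made-up illustrations, not data from the presentation):

```python
import numpy as np
from scipy import stats

x = np.array([1.2, 2.4, 3.1, 4.8, 5.0])          # interval data
y = np.array([2.0, 2.9, 3.9, 5.1, 6.2])          # interval data
print(stats.pearsonr(x, y))                       # interval only: is there a linear correlation?
print(stats.spearmanr(x, y))                      # ordinal only: rank-order correlation

table = [[12, 8], [5, 15]]                        # nominal only: 2 x 2 contingency table
print(stats.chi2_contingency(table))              # chi-square, for larger samples
print(stats.fisher_exact(table))                  # Fisher's exact test, for small samples

group_a = [23, 25, 21, 30, 28]                    # interval outcome with nominal grouping
group_b = [31, 29, 35, 32, 30]
group_c = [27, 26, 29, 31, 30]
print(stats.ttest_ind(group_a, group_b))          # 2 groups: t-test (two-tailed by default)
print(stats.f_oneway(group_a, group_b, group_c))  # >2 groups: one-way ANOVA
```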
  • 123. Thank You