Research Methods in Health
Chapter 7. Statistical Methods 1
Young Moon Chae, Ph.D.
Graduate School of Public Health
Yonsei University, Korea
• What is Biostatistics?
• Biostatistics in Public Health Research
• Descriptive statistics
• Inferential statistics
• Power of a test
• Biostatistics is the development and application of statistical methods to research in the health sciences
• Common perceptions of statistics: numbers, tables, figures, polls, rates, etc.
• These are “descriptions of the world”
• Analysis of data
Biostatistics in Public Health Research
• new statistical techniques
• high speed of computing
• geographical patterns of disease
• clinical trials
• longitudinal analysis
• data analysis in epidemiology studies
Errors in Statistical Methods
• Research design
-Improper control group in case-control design
-Selection bias (sample does not represent study population)
-Too small sample size
• Statistical methods
-Parametric statistics applied to small samples
-Independent-samples t-test applied to related (paired) samples
-T-test or ANOVA applied to samples that do not meet the assumptions
(normality, equal variances, independence)
-Repeated t-tests used for multiple comparisons (instead of ANOVA)
-Linear regression with a nominal dependent variable
-Regression with multicollinearity
-Chi-square test with expected cell counts less than 5
Descriptive vs. Inferential Statistics
• The mean and standard deviation can be used in 2 ways.
-One way is to describe the distribution of data
-The other way is to infer something about a population (is the population
mean 25? 20?). A statistical test!
• Because the sampling distribution of the mean is normally distributed
(Central Limit Theorem), we can use the normal distribution to show how close
the parameter is likely to be to the sample mean and to make decisions about
hypotheses concerning the population mean
• Descriptive Statistics
-Mean, median, mode
-Variance, standard deviation, range, interquartile range, quartiles
• Frequency tables, bar charts and pie charts, histograms, stem-and-leaf plots
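As a quick illustration of these descriptive statistics, here is a minimal sketch using Python's standard `statistics` module (the data are made up for the example):

```python
import statistics as st

# Illustrative sample (hypothetical data, not from the text)
ages = [22, 24, 24, 25, 26, 27, 29, 35]

print(st.mean(ages))          # arithmetic mean -> 26.5
print(st.median(ages))        # halfway between the two middle scores -> 25.5
print(st.mode(ages))          # most frequent score -> 24
print(st.stdev(ages))         # sample standard deviation
print(max(ages) - min(ages))  # range -> 13
```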
Variables have distributions
• A variable is something that changes or has different values (e.g., anger).
• A distribution is a collection of measures, usually across people.
• Distributions of numbers can be summarized with numbers (called statistics)
Central Tendency refers to the Middle of the Distribution
Middle of the Distribution
-Mode: the most common score
-Median: separates the top 50 percent from the bottom 50 percent
-Mean: the arithmetic average
• Mode: the most frequently occurring score. Distributions can be bimodal or
multimodal. (Example: the modal public health student is female.)
• Median: the score that separates the top 50% from the bottom 50%
• Even number of scores, median is half way between two middle scores.
-1 2 3 4 | 5 6 7 8 – Median is 4.5
• Odd number of scores, median is the middle number
-1 2 3 4 5 6 7 – Median is 4
• Mean: the sum of scores divided by the number of people. The population mean
is μ (mu) and the sample mean is x̄ (x-bar).
• We calculate the sample mean by: x̄ = Σx / n
• We calculate the population mean by: μ = Σx / N
Comparison of statistics
• Mode
-Good for nominal variables
-Good if you need to know the most frequent observation
-Quick and easy
• Median
-Good for “bad” (skewed) distributions
-Often used with distributions of money
• Mean
-Used for inference as well as description; best estimator of the parameter
-Based on all data in the distribution
-Generally preferred except for “bad” (skewed) distributions
-Most commonly used statistic for central tendency
• Estimation: This includes point and interval estimation of certain
characteristics in the population(s).
• Testing: hypotheses about population parameter(s) are tested based on the
information contained in the sample(s).
Estimation of Parameters
• Point Estimation
• Interval Estimation (Confidence Intervals)
• Bound on the error of estimation
• The width of a confidence interval is directly related to the bound on the
error of estimation
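The interval estimate and its error bound can be sketched as follows, assuming a known σ and the usual z value for 95% confidence (all numbers are illustrative):

```python
import math

# Hypothetical values: sample mean 25, known sigma 10, n = 100
xbar, sigma, n = 25.0, 10.0, 100
z = 1.96                              # z value for 95% confidence

bound = z * sigma / math.sqrt(n)      # bound on the error of estimation
ci = (xbar - bound, xbar + bound)     # 95% confidence interval
print(round(bound, 2), ci)            # bound = 1.96
```

Doubling n to 400 would halve the bound, which is the sample-size effect described below.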
• Sampling distribution is a distribution of a statistic (not raw data) over all
possible samples. Same as distribution over infinite number of trials.
• Notion of trials, experiments, replications
• Coin toss example (5 flips, # heads)
• Repeated estimation of the mean
Mean of Sampling Distribution
• Statisticians have worked out the properties of sampling distributions.
• Middle and spread of sampling distribution are known.
• If the mean of the sampling distribution equals the parameter, the statistic
is unbiased (otherwise, it is biased). The sample mean is unbiased.
• The best estimate of μ is x̄.
SD of Sampling Distribution
• The standard deviation of the sampling distribution is the standard error.
For the mean, it indicates the average distance of the statistic from the
parameter.
• Standard error of the mean: SE = σ / √N
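A small simulation (hypothetical normal population, stdlib only) shows that the SD of many sample means matches σ/√N:

```python
import math
import random
import statistics

random.seed(0)
mu, sigma, n = 50, 10, 25     # hypothetical population and sample size

# Draw 5000 samples of size n and record each sample mean
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(5000)]

print(sigma / math.sqrt(n))      # theoretical SE = 2.0
print(statistics.stdev(means))   # empirical SD of the means, close to 2.0
```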
Factors influencing the Bound on the Error of Estimation
• Narrow confidence intervals are preferred
• As the sample size increases, the bound on the error of estimation decreases
• As the confidence level increases, the bound on the error of estimation
increases
• You need to plan a sample size to achieve the desired level of error
Decision Making Under Uncertainty
• You have to make decisions even when you are unsure. School, marriage,
therapy, jobs, whatever.
• Statistics provides an approach to decision making under uncertainty: a sort
of decision making by choosing the same way you would bet, maximizing
expected utility (subjective value).
• The approach comes from agronomy, where researchers were trying to decide
which strain of a crop to plant.
• While attempting to make decisions, the necessary assumptions or guesses
about the populations, or statements about the probability distributions of
the populations, are called statistical hypotheses. These assumptions are to
be supported or rejected on the basis of sample evidence.
• A predictive statement is usually put in the form of a null hypothesis and an
alternative hypothesis.
• The researcher bets in advance of the experiment that the results will agree
with the theory and cannot be accounted for by the chance variation involved
in sampling.
• Hypothesis tests are procedures that enable the researcher to decide whether
to accept or reject a hypothesis, or whether observed samples differ
significantly from expected results.
• Statements about characteristics of populations, denoted H:
-Example: H: the population is normally distributed with μ = 28 and σ = 13
• The hypothesis actually tested is called the null hypothesis, H0
• The other hypothesis, assumed true if the null is false, is the alternative
hypothesis, H1 (or Ha)
Testing Statistical Hypotheses - steps
• State the null and alternative hypotheses
• Assume whatever is required to specify the sampling distribution of the
statistic (e.g., SD, normal distribution, etc.)
• Find the rejection region of the sampling distribution: the region that is
unlikely if the null is true
• Collect sample data. Find whether the statistic falls inside or outside the
rejection region. If the statistic falls in the rejection region, the result
is said to be statistically significant.
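The four steps can be sketched as a z-test, reusing the example values that appear later in this chapter (H0: μ = 75, s = 10, N = 25) with a hypothetical sample mean:

```python
import math

# Step 1: H0: mu = 75 vs Ha: mu != 75 (two-tailed), alpha = .05
mu0, s, n = 75, 10, 25
xbar = 79.5                       # hypothetical sample mean

# Step 2: sampling distribution of the mean has SE = s / sqrt(N)
se = s / math.sqrt(n)             # 2.0

# Step 3: rejection region is |z| > 1.96 at alpha = .05
z = (xbar - mu0) / se             # 2.25

# Step 4: does the statistic fall in the rejection region?
print(round(z, 2), abs(z) > 1.96)   # 2.25 True -> statistically significant
```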
The level of significance (α)
• α is known as the nominal level of significance.
• If the p-value < α, then we reject the null hypothesis in favor of the
alternative.
• The p-value is also known as the observed level of significance.
• α needs to be pre-determined (usually 5%).
Type I and Type II errors
• Type I error is committed when a true null hypothesis is rejected.
• α is the probability of committing a type I error.
• Type II error is committed when a false null hypothesis is not rejected.
• β is the probability of committing a type II error.
Power of a test
• The power of a test is the probability that a false null hypothesis is
rejected.
• Power = 1 − β, where β is the probability of committing a type II error.
• More powerful tests are preferred. At the design stage one should
identify the desired level of power in the given situation.
Three named probabilities: α (alpha), β (beta), and power.

              Null true           Null false
Don't reject  Right               β (type II error)
Reject        α (type I error)    Right (power)

Fire-alarm analogy:

              No fire             Fire
Alarm silent  Right               Missed alarm (type II error)
Alarm on      False alarm (α)     Correct rejection
Power of a test (1-β):
• The value (1-β) indicates how well the test is working: a value nearer to 1
means the test is working well (rejecting Ho when it is not true), and a
value nearer to 0 means it is working poorly (not rejecting Ho when it is not
true).
• It indicates how well a given test enables us to minimize the probability of
a type II error (β), i.e., avoid making wrong decisions. Hypothesis testing
cannot be foolproof: sometimes a test does not reject an Ho that is false (a
type II error). We would like β to be as small as possible, or (1-β) to be as
large as possible.
• Operating Characteristic function (L): L = 1 − power; it shows the
conditional probability of accepting Ho for all values of the population
parameter and a given sample size, whether or not the decision happens to be
correct.
• OC curve: a graph showing the probabilities of type II error (β) under
various values of the population parameter.
Factors influencing the Power
• The power of a test is influenced by the magnitude of the difference
between the null hypothesis and the true parameter.
• The power of a test could be improved by increasing the sample size.
• The power of a test could be improved by increasing α (but this also
increases the probability of a type I error, so it is rarely advisable).
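A sketch of how sample size drives power, using a one-tailed z-test on hypothetical values (H0: μ = 75, true μ = 78, σ = 10; all numbers illustrative):

```python
from statistics import NormalDist

def power(n, mu0=75, mu_true=78, sigma=10, alpha=0.05):
    """Power of a one-tailed z-test: P(reject H0 | true mean = mu_true)."""
    se = sigma / n ** 0.5
    crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se   # rejection boundary
    return 1 - NormalDist(mu_true, se).cdf(crit)

print(round(power(25), 2))    # about 0.44
print(round(power(100), 2))   # about 0.91 -- larger N, higher power
```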
One Tail or Two Tails
The rejection region can fit into 1 or 2 tails of the
sampling distribution of means. The RR is determined by
the alternative hypothesis.
• Two tails: Ha: μ ≠ value. If the null is true, the rejection region is split
between both tails of the sampling distribution.
• One tail: Ha: μ > value or Ha: μ < value. The rejection region lies entirely
in one tail.
• Note the critical values: 1.96 (two-tailed) vs. 1.65 (one-tailed) at α = .05.
Example of 2 tails
H0: μ = 75; Ha: μ ≠ 75
s = 10, N = 25
(Sampling distribution of means if the null is true. Note the 5 percent
rejection region is split into two tails.)
Example of 1 tail
H0: μ = 75; Ha: μ > 75
s = 10, N = 25
(Sampling distribution of means: the likely outcomes if the null is true. Note
all 5 percent of the rejection region is in the top tail.)
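Where the 1.96 and 1.65 critical values come from can be checked with the standard normal inverse CDF (stdlib `statistics.NormalDist`), using the example values above (μ0 = 75, s = 10, N = 25):

```python
from statistics import NormalDist

z = NormalDist()
two_tail = z.inv_cdf(1 - 0.05 / 2)   # 2.5% in each tail -> 1.96
one_tail = z.inv_cdf(1 - 0.05)       # all 5% in the top tail -> 1.645

# Rejection boundaries for the two-tailed example above
se = 10 / 25 ** 0.5                  # SE = 2.0
print(round(two_tail, 2), round(one_tail, 3))   # 1.96 1.645
print(75 - two_tail * se, 75 + two_tail * se)   # two-tailed boundaries
```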
Parametric or Standard Tests
• Require measurements equivalent to at least an interval scale
• Assume certain properties of parent population like
-i) observations are from a normal population
-ii) large random sample
-iii) assumptions about population parameters such as the mean and variance
hold
• In situations where the above assumptions do not hold, non-parametric tests
are used; as they assume no model, these tests are also called
distribution-free tests
• z-test
-Based on the normal probability distribution (and the binomial in the case of
large samples)
-For testing a mean, a variance, two individual samples, a median, a mode, a
correlation, etc.
• t-test
-Based on the t-distribution, and used only in the case of small samples
-Used for testing the difference between the means of two samples,
coefficients of simple and partial correlation, etc.
• F-test
-Used in the context of ANOVA, for testing the significance of multiple
correlation coefficients, and for comparing the variances of two independent
samples
• Chi-square test
-Based on the chi-square distribution
-Used for comparing a sample variance to a theoretical population variance
Inferences about Population Means
The t Distribution
• We use t when the population variance is unknown (the usual case)
and sample size is small (N<100, the usual case).
• The t distribution is a short, fat relative of the normal. The shape of t
depends on its df. As N becomes infinitely large, t becomes normal.
• The t-test is based on assumptions of normality
• Two groups are independent
• Homogeneity of variance -> can be tested by using F-test.
• As long as the samples in each group are large and nearly equal, the t-test
is robust, that is, still good, even though assumptions are not met.
• We assume normal distributions to figure sampling distributions and thus
p-values
• Violations of normality have implications for testing means; one may need to
use non-parametric statistics or a data transformation
• Normality can be tested with the Kolmogorov–Smirnov test
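A minimal by-hand sketch of the independent two-sample t-test on made-up data (pooled-variance form, assuming the conditions above):

```python
import statistics as st

# Hypothetical scores for two independent groups
g1 = [23, 25, 28, 30, 32]
g2 = [20, 21, 22, 23, 26]

n1, n2 = len(g1), len(g2)
# Pooled variance combines the two sample variances (homogeneity assumed)
sp2 = ((n1 - 1) * st.variance(g1) + (n2 - 1) * st.variance(g2)) / (n1 + n2 - 2)
t = (st.mean(g1) - st.mean(g2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
df = n1 + n2 - 2

# Compare |t| with the tabled critical value for df = 8 (2.306 at alpha = .05)
print(round(t, 2), df)   # 2.7 8 -> significant at the .05 level
```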
The F Distribution (1)
• The F distribution is the ratio of two variance estimates: F = s₁² / s₂²
• Also the ratio of two chi-squares, each divided by its degrees of freedom:
F = (χ₁² / v₁) / (χ₂² / v₂)
In our applications, v2 will be larger than v1 and v2 will
be larger than 2. In such a case, the mean of the F
distribution (expected value) is
v2 /(v2 -2).
Testing Hypotheses about 2 Variances
• We test H0: σ₁² ≤ σ₂² against H1: σ₁² > σ₂²
• With N₁ = N₂ = 16, df₁ = df₂ = 15, and we compute F = s₁²/s₂² from the two
sample variances.
Going to the F table with 15 and 15 df, we find
that for alpha = .05 (1-tailed), the critical value
is 2.40. Therefore the result is significant.
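The 2.40 critical value can be checked by Monte Carlo, since an F variate is the ratio of two independent chi-squares each divided by its df (stdlib only; the seed and sample count are arbitrary choices for the sketch):

```python
import random

random.seed(1)

def chi2(df):
    """One chi-square variate: sum of df squared standard normals."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(df))

# 50,000 simulated F(15, 15) variates
fs = sorted((chi2(15) / 15) / (chi2(15) / 15) for _ in range(50_000))
print(round(fs[int(0.95 * len(fs))], 2))   # 95th percentile, close to 2.40
```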
Application of F Distribution
• The F distribution is used in many statistical tests
-Test for equality of variances.
-Tests for differences in means in ANOVA.
-Tests for regression models (slopes relating one continuous variable to
another like SAT and GPA).