The document discusses t-tests, which are used to compare means between groups. It describes the assumptions of t-tests, the different types of t-tests including independent samples t-tests and dependent samples t-tests, and the steps to conduct t-tests by hand and using SPSS. It provides examples of conducting one-sample t-tests, independent samples t-tests, and dependent samples t-tests, including interpreting the results. It also discusses how to increase statistical power by increasing the difference between means, decreasing variance, increasing sample size, and increasing the alpha level.
The slides discuss comparing two means to determine whether the difference between them is statistically significant. In these slides we will learn about three research questions in which the t-test can be used to analyze the data: comparing the means from two independent groups, from two paired samples, and from a sample and a population.
OBJECTIVES:
Run the test of hypothesis for mean difference using paired samples. Construct a confidence interval for the difference in population means using paired samples.
The observation of interest will be the difference between the readings
before and after the intervention, called the paired-difference observation.
Paired t test:
A paired t-test is used to compare two means where you have two samples in which observations in one sample can be paired with observations in the other sample.
Examples of where this might occur are:
Before-and-after observations on the same subjects (e.g. students’ test
results before and after a particular module or course).
A comparison of two different methods of measurement or two different treatments where the measurements/treatments are applied to the same subjects (e.g. blood pressure measurements using a sphygmomanometer and a dynamap).
When there is a relationship between the groups, such as identical twins.
This test is concerned with the pair-wise differences
between sets of data.
This means that each data point in one group has a related data point in the other group (groups always have equal numbers).
ASSUMPTIONS:
The sample or samples are randomly selected
The sample data are dependent
The distribution of differences is approximately normally
distributed.
Note: the square root covers the entire quantity beneath it (the whole numerator and denominator), so evaluate that expression completely before taking the root. The resulting statistic is t = D̄ / (SD_D / √n),
where "t" has (n-1) degrees of freedom and "n" is
the total number of pairs.
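The slides work these examples by hand and in SPSS; as an optional cross-check, here is a minimal Python sketch of the same paired-difference test together with a confidence interval for the mean difference. The before/after numbers are made-up illustration data, not values from the slides.

```python
# Minimal sketch of a paired t test and 95% confidence interval.
# The before/after values are hypothetical illustration data.
import numpy as np
from scipy import stats

before = np.array([140, 152, 138, 145, 150, 148, 141, 155])
after  = np.array([132, 147, 136, 139, 145, 143, 138, 150])

d = before - after                      # paired differences
n = len(d)                              # number of pairs
se = d.std(ddof=1) / np.sqrt(n)         # SE_diff = SD_D / sqrt(n)
t = d.mean() / se                       # t with n-1 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n - 1)    # two-tailed p value

# 95% confidence interval for the mean paired difference
crit = stats.t.ppf(0.975, df=n - 1)
ci = (d.mean() - crit * se, d.mean() + crit * se)

print(t, p, ci)
# stats.ttest_rel(before, after) gives the same t and p.
```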
2. Learning Objectives
• Compute by hand and interpret
– Single sample t
– Independent samples t
– Dependent samples t
• Use SPSS to compute the same tests
and interpret the output
3. Review 6 Steps for
Significance Testing
1. Set alpha (p level).
2. State hypotheses, Null and Alternative.
3. Calculate the test statistic (sample value).
4. Find the critical value of the statistic.
5. State the decision rule.
6. State the conclusion.
4. t-test
• The t-test is about means: it evaluates whether group means differ.
• The t distribution is derived from the normal distribution.
• Its shape depends on sample size; as the sample size grows, the t distribution approaches the normal distribution.
• The t distribution is based on sample size and varies according to the degrees of freedom.
5. What is the t -test
• t test is a useful technique for comparing
mean values of two sets of numbers.
• The comparison will provide you with a
statistic for evaluating whether the difference
between two means is statistically significant.
• t test can be used either:
1. to compare two independent groups (independent-samples t test)
2. to compare observations from two measurement occasions for the same group (paired-samples t test).
6. What is the t -test
• The null hypothesis states that any difference between the two means is the result of chance sampling variation, not a real difference.
• Remember, under the null hypothesis both samples are drawn randomly from the same population.
• The test asks how likely a difference this large would be if the two groups differ only because of sampling variation.
• If both samples really came from the same population, the population means have to be equal.
7. What is the t -test
• Then, what we intend: to determine whether the observed difference is larger than what chance alone would produce.
• Logically, the larger the difference in means, the more likely we are to find a significant t test.
• But, recall:
1. Variability: less variability = less overlap between the groups = a given difference is more likely to be significant.
2. Sample size: a larger sample = less sampling variability = a given difference is more likely to be significant.
8. Types
1. The one-sample t test is used to compare a single sample
with a population value. For example, a test could be
conducted to compare the average salary of nurses
within a company with a value that was known to
represent the national average for nurses.
2. The independent-sample t test is used to compare two
groups' scores on the same variable. For example, it
could be used to compare the salaries of nurses and
physicians to evaluate whether there is a difference in
their salaries.
3. The paired-sample t test is used to compare the means
of two variables within a single group. For example, it
could be used to see if there is a statistically significant
difference between starting salaries and current salaries
among the general nurses in an organization.
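For readers who want to try these three variants outside SPSS, an illustrative Python (scipy) sketch follows; the salary-style numbers are invented placeholders, not data from the slides.

```python
# Sketch of the three t-test variants using scipy (illustrative data only).
from scipy import stats

nurse_salaries     = [52, 48, 55, 60, 47, 50, 58, 53]    # hypothetical, in $1000s
physician_salaries = [95, 88, 102, 110, 97, 99, 105, 93]
start_salaries     = [45, 40, 42, 48, 44, 46, 41, 43]
current_salaries   = [55, 49, 50, 60, 52, 58, 47, 51]

# 1. One-sample t test: compare a sample mean with a known population value.
print(stats.ttest_1samp(nurse_salaries, popmean=50))

# 2. Independent-samples t test: compare two separate groups on the same variable.
print(stats.ttest_ind(nurse_salaries, physician_salaries))

# 3. Paired-samples t test: compare two measurements on the same people.
print(stats.ttest_rel(start_salaries, current_salaries))
```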
9. Assumptions of t-Test
• Dependent variables are interval or ratio.
• The population from which samples are
drawn is normally distributed.
• Samples are randomly selected.
• The groups have equal variance
(Homogeneity of variance).
• The t-statistic is robust (it is reasonably reliable even if assumptions are not fully met).
10. Assumption
1. Should be continuous (I/R)
2. the groups should be randomly
drawn from normally distributed
and independent populations
e.g. Male X Female
Nurse X Physician
Manager X Staff
NO OVERLAP
11. Assumption
3. The independent variable is categorical with two levels.
4. The distribution of the dependent variable in each of the two groups is normal.
5. Equal variance (homogeneity of variance).
6. Large variation = less likely to get a significant t test = failing to reject the null hypothesis = Type II error = a threat to power
(like letting a guilty person go free: a real effect goes undetected).
12. Story of power and
sample size
• Power is the probability of rejecting the null hypothesis when it is false.
• The larger the sample size, the more closely the sample resembles the population distribution.
• Therefore, there is less variation between the sample and the population distribution.
• The less the variation, the more likely we are to reject the null hypothesis.
• So, larger sample size = more power = a better chance of a significant t test.
13. One Sample Exercise (1)
Testing whether light bulbs have a life of
1000 hours
1. Set alpha. α = .05
2. State hypotheses.
– Null hypothesis is H0: µ = 1000.
– Alternative hypothesis is H1: µ ≠ 1000.
3. Calculate the test statistic
14. Calculating the Single Sample t
Sample of bulb lives (hours): 800, 750, 940, 970, 790, 980, 820, 760, 1000, 860
What is the mean of our sample?  X̄ = 867
What is the standard deviation for our sample of light bulbs?  SD = 96.73
SE = SD / √N = 96.73 / √10 = 30.59
t = (X̄ − µ) / SE = (867 − 1000) / 30.59 = −4.35
15. Determining Significance
4. Determine the critical value. Look
up in the table (Heiman, p. 708).
Looking for alpha = .05, two tails
with df = 10-1 = 9. Table says
2.262.
5. State decision rule. If absolute
value of sample is greater than
critical value, reject null.
If |-4.35| > |2.262|, reject H0.
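A quick Python check of this hand calculation, using the ten bulb lives from slide 14 (scipy assumed available; the slides themselves use tables and SPSS):

```python
# Reproduce the one-sample t test for the light-bulb example by hand and with scipy.
import numpy as np
from scipy import stats

bulbs = np.array([800, 750, 940, 970, 790, 980, 820, 760, 1000, 860])

mean = bulbs.mean()                      # 867.0
sd = bulbs.std(ddof=1)                   # about 96.73
se = sd / np.sqrt(len(bulbs))            # about 30.59
t = (mean - 1000) / se                   # about -4.35

crit = stats.t.ppf(0.975, df=len(bulbs) - 1)   # 2.262 for alpha = .05, two-tailed
print(t, crit, abs(t) > crit)            # |t| exceeds the critical value, so reject H0

# scipy gives the same statistic plus the p value (about .002)
print(stats.ttest_1samp(bulbs, popmean=1000))
```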
17. t Values
• Critical value decreases if N is increased.
• Critical value decreases if alpha is increased.
• Differences between the means will not have to be as large to find significance if N is large or alpha is increased.
18. Stating the Conclusion
6. State the conclusion. We reject the
null hypothesis that the bulbs were drawn
from a population in which the average life
is 1000 hrs. The difference between our
sample mean (867) and the mean of the
population (1000) is SO different that it is
unlikely that our sample could have been
drawn from a population with an average
life of 1000 hours.
19. SPSS Results
One-Sample Statistics
           N    Mean       Std. Deviation   Std. Error Mean
BULBLIFE   10   867.0000   96.7299          30.5887

One-Sample Test (Test Value = 1000)
           t        df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
BULBLIFE   -4.348   9    .002              -133.0000         -202.1964      -63.8036
Computers print p values rather than critical
values. If p (Sig.) is less than .05, it’s
significant.
22. Independent Samples t-test
• Used when we have two independent
samples, e.g., treatment and control
groups.
• Formula is:  t = (X̄1 − X̄2) / SE_diff
• Terms in the numerator are the sample
means.
• Term in the denominator is the standard
error of the difference between means.
23. Independent samples t-test
The formula for the standard error of the
difference in means is:
SE_diff = √( SD1² / N1 + SD2² / N2 )
Suppose we study the effect of caffeine on a
motor test where the task is to keep the
mouse centered on a moving dot. Everyone
gets a drink; half get caffeine, half get
placebo; nobody knows who got what.
24. Independent Sample Data
(Data are time off task)
Experimental (Caffeine)    Control (No Caffeine)
12 21
14 18
10 14
8 20
16 11
5 19
3 8
9 12
11 13
15
N1=9, M1=9.778, SD1=4.1164 N2=10, M2=15.1, SD2=4.2805
25. Independent Sample Steps (1)
1. Set alpha. Alpha = .05
2. State Hypotheses.
Null is H0: µ1 = µ2.
Alternative is H1: µ1 ≠ µ2.
28. Independent Sample Steps (3)
4. Determine the critical value. Alpha is .05, 2 tails, and df = N1 + N2 − 2 = 10 + 9 − 2 = 17. The value is 2.11.
5. State decision rule. If |−2.758| > 2.11, then reject the null.
6. Conclusion: Reject the null; the population means are different. Caffeine has an effect on the motor pursuit task.
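As an optional cross-check, the caffeine example worked in Python. This is a sketch: the hand value uses the unpooled SE formula from slide 23, while scipy's default pools the variances, so both land near the slide's -2.758 apart from rounding.

```python
# Independent-samples t test for the caffeine data from slide 24.
import numpy as np
from scipy import stats

caffeine = np.array([12, 14, 10, 8, 16, 5, 3, 9, 11])           # n1 = 9
control  = np.array([21, 18, 14, 20, 11, 19, 8, 12, 13, 15])    # n2 = 10

# Standard error of the difference, as on slide 23 (variances not pooled)
se_diff = np.sqrt(caffeine.var(ddof=1) / len(caffeine) +
                  control.var(ddof=1) / len(control))
t_hand = (caffeine.mean() - control.mean()) / se_diff
print(t_hand)            # about -2.76

# scipy's pooled-variance version (equal variances assumed), df = 17
print(stats.ttest_ind(caffeine, control))
```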
29. Using SPSS
• Open SPSS
• Open file “SPSS Examples” for Lab 5
• Go to:
– “Analyze” then “Compare Means”
– Choose “Independent samples t-test”
– Put IV in “grouping variable” and DV in “test
variable” box.
– Define grouping variable numbers.
• E.g., we labeled the experimental group as
“1” in our data set and the control group as
“2”
30. Independent Samples
Exercise
Experimental Control
12 20
14 18
10 14
8 20
16
Work this problem by hand and with SPSS.
You will have to enter the data into SPSS.
31. SPSS Results
Group Statistics
       GROUP                N   Mean      Std. Deviation   Std. Error Mean
TIME   experimental group   5   12.0000   3.1623           1.4142
       control group        4   18.0000   2.8284           1.4142

Independent Samples Test
                                     Levene's Test          t-test for Equality of Means
                                     F      Sig.   t        df      Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
TIME   Equal variances assumed       .130   .729   -2.958   7       .021              -6.0000           2.0284                  -10.7963       -1.2037
       Equal variances not assumed                 -3.000   6.857   .020              -6.0000           2.0000                  -10.7493       -1.2507
33. Dependent Samples t-test
• Used when we have dependent samples –
matched, paired or tied somehow
– Repeated measures
– Brother & sister, husband & wife
– Left hand, right hand, etc.
• Useful to control individual differences. Can result in a more powerful test than the independent samples t-test.
34. Dependent Samples t
Formulas:
t = D̄ / SE_diff
t is the mean difference divided by its standard error.
SE_diff = SD_D / √(n pairs)
The standard error is found by finding the difference between each pair of observations. The standard deviation of these differences is SD_D. Divide SD_D by √(number of pairs) to get SE_diff.
36. Dependent Samples t Example
(time in seconds)
Person   Painfree   Placebo   Difference
1 60 55 5
2 35 20 15
3 70 60 10
4 50 45 5
5 60 60 0
M 55 48 7
SD 13.23 16.81 5.70
37. Dependent Samples t Example (2)
1. Set alpha = .05
2. Null hypothesis: H0: µ1 = µ2.
Alternative is H1: µ1 ≠ µ2.
3. Calculate the test statistic:
SE_diff = SD_D / √(n pairs) = 5.70 / √5 = 2.55
t = D̄ / SE_diff = (55 − 48) / 2.55 = 7 / 2.55 = 2.75
38. Dependent Samples t Example (3)
4. Determine the critical value of t.
Alpha =.05, tails=2
df = N(pairs)-1 =5-1=4.
Critical value is 2.776
5. Decision rule: is absolute value of
sample value larger than critical value?
6. Conclusion. Not (quite) significant.
Painfree does not have an effect.
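A small Python sketch of the Painfree example; it reproduces the t of about 2.75 and the borderline two-tailed p of about .052 shown in the SPSS output that follows.

```python
# Paired-samples t test for the Painfree example (times in seconds).
import numpy as np
from scipy import stats

painfree = np.array([60, 35, 70, 50, 60])
placebo  = np.array([55, 20, 60, 45, 60])

d = painfree - placebo
se_diff = d.std(ddof=1) / np.sqrt(len(d))   # 5.70 / sqrt(5) = 2.55
t = d.mean() / se_diff                      # 7 / 2.55 = 2.75
print(t)

# scipy: t = 2.746, df = 4, p = .052 (not quite significant at alpha = .05)
print(stats.ttest_rel(painfree, placebo))
```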
39. Using SPSS for dependent t-
test
• Open SPSS
• Open file “SPSS Examples” (same as
before)
• Go to:
– “Analyze” then “Compare Means”
– Choose “Paired samples t-test”
– Choose the two IV conditions you are
comparing. Put in “paired variables
box.”
40. Dependent t-test: SPSS output
Paired Samples Statistics
                    Mean      N   Std. Deviation   Std. Error Mean
Pair 1   PAINFREE   55.0000   5   13.2288          5.9161
         PLACEBO    48.0000   5   16.8077          7.5166

Paired Samples Correlations
                              N   Correlation   Sig.
Pair 1   PAINFREE & PLACEBO   5   .956          .011

Paired Samples Test: Paired Differences (PAINFREE - PLACEBO)
Mean     Std. Deviation   Std. Error Mean   95% CI Lower   95% CI Upper   t       df   Sig. (2-tailed)
7.0000   5.7009           2.5495            -.0786         14.0786        2.746   4    .052
41. Relationship between t Statistic and Power
• To increase power:
– Increase the difference
between the means.
– Reduce the variance
– Increase N
– Increase α, e.g., from α = .01 to α = .05
42. To Increase Power
• Increase alpha, Power for α = .10 is
greater than power for α = .05
• Increase the difference between
means.
• Decrease the sd’s of the groups.
• Increase N.
43. Calculation of Power
From Table A.1, the area for Zβ = .54 is 20.5%.
Power is 20.5% + 50% = 70.5%.
In this example, Power (1 - β) = 70.5%.
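The table lookup on this slide can be reproduced with the normal distribution; a brief Python sketch assuming the slide's Zβ of .54:

```python
# Reproduce the power lookup: for z_beta = 0.54, the area between 0 and z is about 20.5%,
# so power = 50% + 20.5% = 70.5%.
from scipy.stats import norm

z_beta = 0.54
area_above_mean = norm.cdf(z_beta) - 0.5    # about 0.205
power = 0.5 + area_above_mean               # about 0.705
print(power)                                # equivalently, norm.cdf(0.54) directly
```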
44. Calculation of Sample Size to Produce a Given Power
Compute Sample Size N for a Power of .80 at p = 0.05
The area of Zβ must be 30% (50% + 30% = 80%) From Table A.1
Zβ = .84
If the Mean Difference is 5 and SD is 6 then 22.6 subjects would
be required to have a power of .80
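The slide does not show the formula it used, but these numbers match the standard normal-approximation sample-size formula for comparing two independent groups, n per group = 2 × ((Zα/2 + Zβ) × SD / Δ)². A sketch under that assumption:

```python
# Sample size per group for 80% power at alpha = .05 (two-tailed),
# assuming the usual normal-approximation formula for two independent groups.
from scipy.stats import norm

alpha, power = 0.05, 0.80
delta, sd = 5.0, 6.0                  # mean difference and SD from the slide

z_alpha = norm.ppf(1 - alpha / 2)     # 1.96
z_beta = norm.ppf(power)              # 0.84

n_per_group = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
print(n_per_group)                    # about 22.6, matching the slide
```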
45. Power
• Research performed with insufficient
power may result in a Type II error,
• Or waste time and money on a study
that has little chance of rejecting the
null.
• In power calculation, the values for
mean and sd are usually not known
beforehand.
• Either do a PILOT study or use prior
research on similar subjects to
estimate the mean and sd.
46. Independent t-Test
For an Independent
t-Test you need a
grouping variable to
define the groups.
In this case the
variable Group is
defined as
1 = Active
2 = Passive
Use value labels in
SPSS
47. Independent t-Test: Defining
Variables
Be sure to enter value labels.
Grouping variable GROUP; the level of measurement is Nominal.
52. Independent t-Test: Output
Group Statistics
           Group     N    Mean     Std. Deviation   Std. Error Mean
Ab_Error   Active    10   2.2820   1.24438          .39351
           Passive   10   1.9660   1.50606          .47626

Independent Samples Test
                                        Levene's Test          t-test for Equality of Means
                                        F      Sig.   t      df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
Ab_Error   Equal variances assumed      .513   .483   .511   18       .615              .31600            .61780                  -.98194        1.61394
           Equal variances not assumed                .511   17.382   .615              .31600            .61780                  -.98526        1.61726

Assumptions: The groups have equal variance [Levene's F = .513, p = .483]. YOU DO NOT WANT THIS TO BE SIGNIFICANT. The groups have equal variance, so you have not violated an assumption of the t-statistic.
Are the groups different? t(18) = .511, p = .615. NO DIFFERENCE: 2.28 is not significantly different from 1.96.
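The t in this output can also be recovered straight from the summary statistics; a short sketch using scipy's ttest_ind_from_stats (equal variances assumed, matching the top row):

```python
# Reproduce t(18) = .511, p = .615 from the group means and SDs in the output.
from scipy import stats

result = stats.ttest_ind_from_stats(
    mean1=2.2820, std1=1.24438, nobs1=10,   # Active group
    mean2=1.9660, std2=1.50606, nobs2=10,   # Passive group
    equal_var=True)                         # pooled variance, df = 18
print(result)
```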
57. Dependent or Paired t-Test: Output
Paired Samples Statistics
                 Mean     N    Std. Deviation   Std. Error Mean
Pair 1   Pre     4.7000   10   2.11082          .66750
         Post    6.2000   10   2.85968          .90431

Paired Samples Correlations
                      N    Correlation   Sig.
Pair 1   Pre & Post   10   .968          .000

Paired Samples Test: Paired Differences (Pre - Post)
Mean       Std. Deviation   Std. Error Mean   95% CI Lower   95% CI Upper   t        df   Sig. (2-tailed)
-1.50000   .97183           .30732            -2.19520       -.80480        -4.881   9    .001
Is there a difference between pre & post?
t(9) = -4.881, p = .001
Yes, 4.7 is significantly different from 6.2
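Likewise, the paired t here follows directly from the Paired Differences summary; a quick arithmetic check:

```python
# Check the paired t from the Paired Differences summary: t = mean_diff / (SD_diff / sqrt(n)).
import math

mean_diff, sd_diff, n = -1.50000, 0.97183, 10
t = mean_diff / (sd_diff / math.sqrt(n))
print(t)      # about -4.881, df = 9, matching the output above
```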
Editor's Notes
1. Set Alpha level, the probability of a Type I error, that is, the probability that we will conclude there is a difference when there really is not. Typically set at .05, or 5 chances in 100 to be wrong in this way. 2. State hypotheses. The null hypothesis represents the position that the treatment has no effect; the alternative hypothesis is that the treatment has an effect. In the light bulb example, Ho: mu = 1000 hours; H1: mu is not equal to 1000 hours. 3. Calculate the test statistic (see next slide for values). 4. Determine the critical value of the statistic. 5. State the decision rule: e.g., if the statistic computed is greater than the critical value, then reject the null hypothesis. 6. Conclusion: the result is significant or it is not significant. Write up the results.
Let’s do the steps: 1. Set alpha = .05. If there is no difference, we will be wrong only 5 times in 100. 2. State hypotheses. (Null) H0: µ = 1000. (Alternative) H1: µ ≠ 1000. We are testing to see if our light bulbs came from a population where average life is 1000 hours. 3. Calculate the test statistic.
Go over the answers to the exercise with them. M = 867, SD = 96.7299, SE = 30.58867, t = -4.35. Reject H0: the bulbs were not drawn from a population with a 1000 hr life. Any questions?
4. Determine the critical value of the statistic. We look this up in a table. We need to know alpha (.05, two-tailed) and the degrees of freedom (df). For this test, the df are N-1, in our case 10-1 = 9. According to the table, the critical value is 2.262. 5. State the decision rule: if the absolute value of the test statistic is greater than the critical value, we reject the null hypothesis. In our case, |-4.35| is greater than 2.262, so we reject the hypothesis that µ = 1000.
State the conclusion . Our results suggest that GE’s claim that their light bulbs last 1,000 hours is FALSE. (because we had a sample of 10 GE light bulbs and our sample mean was so far away from 1,000 hours that it is highly unlikely that these bulbs came from a population of bulbs whose mean is really 1,000.) There is a 5% chance that this conclusion is wrong (I.e., we may have gotten a difference this big just by chance factors alone ).
On Brannick’s website, Research Methods, Labs, Lab Presentations. Click on Lab 5 SPSS Examples, then Open. SPSS should run and the data for this lab should appear. In the middle is the column ltbulb. This has the data for the lightbulb example. In the SPSS data editor, click Analyze, Compare Means, One-Sample T Test. Select ltbulb and put it in the Test Variables box. Type 1000 in the Test Value box. Click OK. You get the output on this slide.
Here we have two different samples, and we want to know if they were drawn from populations with two different means. This is equivalent to saying whether a treatment has an effect given a treatment group and a control group. The formula for this t is on the slide . Here t is the test statistic, and the terms in the numerator are the sample and population means. The term in the denominator is SE diff , which is the standard error of the difference between means . You can see from the subscripts for both t and SE, that we are now dealing with the Sampling Distribution of the DIFFERENCE between the means. This is very similar to the sampling distribution that we created last week. However, what we would do to create a sampling distribution of the differences between the means is rather than selecting 5 scores and computing a mean, we would select 5 pairs of scores, subtract one value from the other, then calculate the mean DIFFERENCE value . If we are doing a study and have two groups, what do we EXPECT that the difference in their mean scores will be ? [ They should say zero ]. Thus, the mean of the sampling distribution of the differences between the means is zero . The subscripts are here to tell you which Sampling Distribution we are dealing with (for the Sampling Distribution of Means last week, we had a subscript X-bar. For the sampling distribution of the differences between the means, we have a notation specifying a difference, specifically, the difference between X-bar1 and X-bar2 .
Suppose we have two samples taken from the same population. Suppose we compute the mean for each sample and subtract the mean for sample 2 from sample 1. We will get a difference between sample means. If we do this a lot, on average, that difference will be zero. Most of the time it won’t be exactly zero, however. The amount that the difference wanders from zero on average is SE_diff, the standard error of the difference.
So let’s say we do the following study. We bring in our volunteers and give each of them a psychomotor test where they use a mouse to keep a dot centered on a computer screen target that keeps moving away (pursuit task). One hour before the test, both groups get an oral dose of a drug. For every other person (1/2 of the people), the drug is caffeine. For the other half, it’s a placebo. Nobody in the study knows who got what. All take the test. The results are in the slide .
1. Set alpha = .05, two-tailed (just a difference, not a prediction of greater than or less than). 2. Null Hypothesis: H0: µ1 = µ2. This is the same as µ1 − µ2 = 0. This says that there is no difference between the drug group and the placebo group in psychomotor performance in the population. The alternative hypothesis is that the drug does have an effect, or H1: µ1 ≠ µ2.
3. Calculate the test statistic (see the slide).
3. Calculate the test statistic (see the slide).
4. Determine the critical value of the statistic. We look this up in a table. Alpha is .05, t is 2-tailed, and the df are n1 + n2 − 2, or in our case, 17. The critical value is 2.110. 5. State the decision rule. If the absolute value of the test statistic is larger than the critical value, reject the null hypothesis. If |-2.758| > 2.110, reject the null. 6. Conclusion: the population means are different. The result is significant at p < .05.
Make sure that they look at the data in SPSS to see how the groups were defined and how that relates to the “define groups” task .
Have them work this one. Assume again that this is time off task for the DV. Here are the answers for the independent samples exercise: M1 = 12, M2 = 18; SD1 = 3.162278, SD2 = 2.8284227; Std Error = 2; t = -6/2 = -3; df = 5 + 4 - 2 = 7; t(.05) = 2.3646; 3 > 2.3646, so reject the null hypothesis. We conclude that caffeine has an effect. Be sure to cover the relevant areas of the SPSS printout. You should show them where everything that they calculate by hand is on the printout. Also cover the Levene’s test. Explain that if the Levene’s test is significant, we need to use the row that says “equal variances NOT assumed”. We do NOT want the Levene’s test to be significant as it violates an assumption of the t-test.
We use this when we have measures on the same people in both conditions (or other dependency in the data). Usually there are individual differences among people that are relatively enduring. For example, suppose we tested the same people on the psychomotor test twice. Some people would be very good at it. Others would be relatively poor at it. The dependent t allows us to take these individual differences into account. The scores on the variable in one treatment will be correlated with the scores on the other treatment . If the observations are positively correlated (most people score either high on both or low on both) and if there is a difference in means, we are more likely to show it with the dependent t-test than with the independent samples t-test. [Emphasize this point, they need to know it for their homework .]
We are still dealing with the Sampling Distribution of the Difference between the means. Our subscript is different here, but says basically the same thing. We are looking at the MEAN DIFFERENCE SCORE. The subscript for the independent samples t said we were looking at the DIFFERENCE BETWEEN THE MEANS .
In this formula, we just put the formula for SE_diff in the denominator instead of having you calculate it separately. [This is the formula that appears on the “Guide to Statistics” sheet they can download.]
Suppose that we are testing Painfree, a drug to replace aspirin. Five people are selected to test the drug. On day one, ½ get painfree, and the other get a placebo. Then all put their hands into icewater until it hurts so bad they have to pull their hands from the water. We record how long it takes. The next day, they come back and take the other treatment. (Counterbalancing & double blind .)
1. Set alpha = .05, two-tailed (just a difference, not a prediction of greater or less than). 2. Null Hypothesis: H0: µ1 = µ2. This is the same as µ1 − µ2 = 0. This says that there is no difference between the pain killer and the placebo in the population. The alternative hypothesis is that the pain killer does have an effect, or H1: µ1 ≠ µ2. 3. Calculate the test statistic (see slide).
4. Determine the critical value of the statistic. We look this up in a table. Alpha is .05, t is 2-tailed, and our df are N-1, where N is the number of pairs. In this case df = 5-1 = 4. The critical value is 2.776. 5. State the decision rule. If the absolute value of the test statistic is larger than the critical value, reject the null hypothesis. If |2.75| > 2.776, reject the null. 6. Conclusion: the population means are not (quite) different. The result is not significant at p < .05.
Point out that the data for an independent t-test and dependent t-test must be entered differently in SPSS . [ They should choose “painfree” and “placebo” to put in the paired variables box .]
Go over output. Have them start on their homework or project .