2. Another important test of differences is the t-test for
paired samples. This is also known as a t-test for
repeated measures or a t-test for matched samples.
4. Whenever two distributions of a dependent variable are highly
correlated, either because they are distributions of pre- and post-tests
from the same people (e.g., a test repeated one month later), or from
two samples that are matched such that there is a one-to-one
correspondence between each subject in group one and its matched pair
in group two, the appropriate test of differences is the paired
samples t-test.
[Figure: brown-haired matched subjects paired one-to-one with
red-haired matched subjects.]
12. If we have a single dependent variable that exists on an
interval or ratio scale, such as scores on a test, and is reasonably
normally distributed; and a single independent variable (e.g., when
you took the test) which has two levels which are repeated or
matched, then we use the paired samples t-test to test for
differences between the two samples of the dependent variable.
27. First we begin with the null hypothesis:
There is no significant difference in pre- and post-test scores
(the dependent variable).
Or: There is no significant difference between group one and its
matched group in terms of the dependent variable.
32. Another way to state the null hypothesis for a paired samples
t-test is by hypothesizing that the difference between the pre- and
post-tests (or matched pairs) is ZERO.
[Figure: two identical quiz worksheets shown side by side, one
subtracted from the other: quiz − quiz = 0.]
35. Conceptually, aggregating the exact differences between each
pair makes the most sense and leads to the correct degrees of
freedom, sampling distribution, and critical value.
Correct degrees of freedom:
– 50 people take a pre-test
– The same 50 take a post-test
– 50 − 1 = 49 degrees of freedom
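The degrees-of-freedom rule above can be sketched as a one-line helper (a minimal illustration of our own; the function name is not from the slides):

```python
# Degrees of freedom for a paired samples t-test: the number of PAIRS
# minus 1 (not the total number of scores), because each person
# contributes a single difference score.
def paired_df(n_pairs: int) -> int:
    """Degrees of freedom for a paired samples t-test."""
    return n_pairs - 1

# The slide's example: 50 people take both a pre-test and a post-test.
print(paired_df(50))  # 49
```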
38. Correct sampling distribution:
(sample mean distribution of post-test scores) − (sample mean
distribution of pre-test scores) = (sample mean distribution of the
differences between the pre- and post-test sample scores)
42. The formula for the paired samples t-test is:
t = Σ(X1 − X2) / SEdiff
This t is the paired t-test value, or the number of standard-error
units that separate the two means.
44. The formula for the paired samples t-test is:
t = Σ(Xpre − Xpost) / SEdiff
Subtract each pre-test score from each post-test score and then sum
them up:
Pre  Post  Difference
1    7     6
2    6     4
1    8     7
Sum of all differences: 6 + 4 + 7 = 17
So: t = 17 / SEdiff
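The numerator above can be sketched in a few lines of Python (our own illustration, using the slide's three pre/post scores and its convention of subtracting pre from post):

```python
# Worked numerator from the slide: subtract each pre-test score from
# each post-test score, then sum the differences.
pre = [1, 2, 1]
post = [7, 6, 8]

differences = [po - pr for pr, po in zip(pre, post)]
sum_of_differences = sum(differences)

print(differences)         # [6, 4, 7]
print(sum_of_differences)  # 17
```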
51. t = Σ(Xpre − Xpost) / SEdiff
Now let's calculate the standard error of the difference. It is this
statistic (the standard error of the difference) that makes it
possible to determine whether this difference occurred only by
chance, or whether there is a strong probability that the scores are
actually different!
52. One thing to note: if the standard error of the difference is
large (say 170), then the t value will be small.
For example:
t = Σ(Xpre − Xpost) / SEdiff
t = 17 / 170
t = 0.1
57. However, if the standard error of the difference is small (say
1.7), then the t value will be larger.
For example:
t = Σ(Xpre − Xpost) / SEdiff
t = 17 / 1.7
t = 10
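The two contrasting slide examples can be reproduced directly, holding the summed difference of 17 fixed and varying only the standard error:

```python
# Effect of the standard error of the difference on t, with the
# summed difference (17) held fixed, as in the two slide examples.
sum_diff = 17

t_large_se = sum_diff / 170   # large standard error -> small t
t_small_se = sum_diff / 1.7   # small standard error -> large t

print(round(t_large_se, 3))  # 0.1
print(round(t_small_se, 3))  # 10.0
```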
62. So what effect will a smaller or larger estimated standard error
have on whether a result is statistically significantly different or
not?
Well, let's say that our null hypothesis (H0) is that there is no
statistically significant difference between the pre- and post-test
scores below:
Students  Pre  Post
1         1    7
2         2    6
3         1    8
mean:     1.3  7.0
65. Of course, the alternative hypothesis would be that there is a
statistically significant difference between the two.
With a t value of 0.1 (t = 17 / 170 = 0.1), a sample of 3, and
therefore 2 degrees of freedom, we would look up the critical t value
at a .05 alpha level. (A .05 alpha level means that we are willing to
consider an outcome significant only if it would happen by chance 5
times out of 100 (.05); in other words, if it were a very rare
occurrence.)
69. So we go to our table of t-distribution critical values to find
the critical value that needs to be exceeded by our result in order
to be considered statistically significantly different.
The critical value we need to exceed in order to reject the null
hypothesis is 2.920. However, our t value is 0.1.
Our t value of 0.1 does not exceed the critical t of 2.920;
therefore, we fail to reject the null hypothesis (which essentially
means retaining the null hypothesis).
76. On the other hand, when our t value is 10 under the same
conditions (t = 17 / 1.7 = 10), our t value of 10 does exceed the
critical t of 2.920; therefore, we reject the null hypothesis (which
essentially means accepting the alternative hypothesis).
79. So when it comes to inferential statistics (drawing inferences
about a larger population from a smaller sample), the size of the
standard error determines everything.
Understanding the standard error theoretically may or
may not be important for your learning purposes. If it
is not, you may want to click quickly through the next
10 slides. If it is important then consider what follows:
81. If we took 1000 samples of pre- and post-tests and subtracted
each pre-test sample from each post-test sample, we would have a
sampling distribution called the sampling distribution of differences
between pre- and post-test samples:
(sample mean distribution of post-test scores) − (sample mean
distribution of pre-test scores) = (sample mean distribution of the
differences between the pre- and post-test sample scores) … and so on.
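The thought experiment above can be simulated (our own sketch, not from the slides; the population means and standard deviation below are hypothetical choices loosely echoing the example data):

```python
# Simulate the sampling distribution of differences: draw many
# pre/post samples, subtract the pre-test sample mean from the
# post-test sample mean each time, and examine the spread.
import random
import statistics

random.seed(42)

N_SAMPLES = 1000   # "1000 samples" in the slide
N_PAIRS = 50       # people per sample (hypothetical)

mean_differences = []
for _ in range(N_SAMPLES):
    pre = [random.gauss(1.5, 1.0) for _ in range(N_PAIRS)]
    post = [random.gauss(7.0, 1.0) for _ in range(N_PAIRS)]
    mean_differences.append(statistics.mean(post) - statistics.mean(pre))

# The standard deviation of this subtracted sampling distribution is
# the "actual" standard error that the slides say must be estimated
# from a single sample in practice.
se_actual = statistics.stdev(mean_differences)
print(statistics.mean(mean_differences))  # roughly 7.0 - 1.5 = 5.5
print(se_actual)                          # roughly sqrt(2/50) = 0.2
```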
94. If you calculated the standard deviation of this new subtracted
sampling distribution, you would have the actual standard error we
are looking for in this equation.
Since this is almost impossible to do, the standard error
will be estimated.
96. Let's see how this is done with a very simple data set. Let's
begin with our null hypothesis: "Post-test scores are not
statistically significantly higher than pre-test scores."
Students  Pre  Post
1         1    7
2         2    6
3         1    8
mean:     1.3  7.0
100. Let's begin with the sum of the differences between the pre-
and post-tests (the numerator of the equation stays the same):
t = Σ(Xpre − Xpost) / SEdiff
Pre  Post  Difference
1    7     6
2    6     4
1    8     7
Sum of all differences: 6 + 4 + 7 = 17
107. Just as a contrast, when the differences are smaller, the sum
of all differences will be smaller as well:
Pre  Post  Difference
1    7     6
2    6     4
1    8     7
Sum of all differences: 6 + 4 + 7 = 17
versus:
Pre  Post  Difference
4    7     3
5    6     1
5    8     3
Sum of all differences: 3 + 1 + 3 = 7
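The contrast can be checked with a tiny helper (our own illustration, using the two data sets from the slide):

```python
# Larger pre/post gaps give a larger summed difference (17) than
# smaller gaps do (7).
def sum_of_differences(pre, post):
    """Sum of (post - pre) over each matched pair."""
    return sum(po - pr for pr, po in zip(pre, post))

print(sum_of_differences([1, 2, 1], [7, 6, 8]))  # 17
print(sum_of_differences([4, 5, 5], [7, 6, 8]))  # 7
```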
113. Back to our original equation. The next step is to
estimate the standard
114. Back to our original equation. The next step is to
estimate the standard
17
115. Back to our original equation. The next step is to
estimate the standard
17
3
We begin with “n” or the size of the sample, which in
this case is 3.
116. Back to our original equation. The next step is to
estimate the standard
17
3
3
We begin with “n” or the size of the sample, which in
this case is 3.
117. Back to our original equation. The next step is to
estimate the standard
17
3
3
We begin with “n” or the size of the sample, which in
this case is 3. Then we compute Σd2.
118. Back to our original equation. The next step is to
estimate the standard
Pre Post
1 7
2 6
1 8
Difference
6
4
7
We begin with “n” or the size of the sample, which in
this case is 3. Then we compute Σd2.
119. Back to our original equation. The next step is to
estimate the standard
Pre Post
1 7
2 6
1 8
Difference
6
4
7
Squared Difference
36
16
49
We begin with “n” or the size of the sample, which in
this case is 3. Then we compute Σd2.
120. Back to our original equation. The next step is to
estimate the standard
Pre Post
1 7
2 6
1 8
Difference
6
4
7
Squared Difference
36
16
49
Sum of all
Differences
Σd2
36 + 16 + 49 = 101
We begin with “n” or the size of the sample, which in
this case is 3. Then we compute Σd2.
121. Back to our original equation. The next step is to
estimate the standard
Let‘s plug in our numbers:
We begin with “n” or the size of the sample, which in this case is 3. Then we compute Σd². Then we compute (Σd)².

122. Back to our original equation. The next step is to estimate the standard error of the difference.

Pre   Post   Difference
1     7      6
2     6      4
1     8      7

Sum of all differences: Σd = 6 + 4 + 7 = 17
Sum of squared differences: Σd² = 6² + 4² + 7² = 101
Squared sum of all differences: (Σd)² = 17² = 289

126. Let’s plug in our numbers and then do the calculations:

t = Σd / √[(nΣd² − (Σd)²) / (n − 1)]
  = 17 / √[(3 × 101 − 289) / (3 − 1)]
  = 17 / √[(303 − 289) / 2]
  = 17 / √(14 / 2)
  = 17 / √7
  = 17 / 2.646
  = 6.425
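The step-by-step arithmetic above can be sketched in a few lines of Python (an illustrative sketch, not part of the original slides; the function name `paired_t` is my own):

```python
from math import sqrt

def paired_t(pre, post):
    """Paired-samples t statistic via the slides' computational formula:
    t = Σd / sqrt((n·Σd² − (Σd)²) / (n − 1)), where d is the difference
    between each matched pair of scores."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    sum_d = sum(d)                    # Σd  = 17 for this data set
    sum_d2 = sum(x * x for x in d)    # Σd² = 101 for this data set
    se = sqrt((n * sum_d2 - sum_d ** 2) / (n - 1))
    return sum_d / se

t = paired_t(pre=[1, 2, 1], post=[7, 6, 8])
print(round(t, 3))  # 6.425
```

Running this on the pre/post scores from the table reproduces the t value of 6.425 computed by hand above.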
133. So what does a t value of 6.425 mean? Well, that depends on whether it is larger or smaller than the critical t value.
Do you remember how we determine the critical t value?
All we need is
• the degrees of freedom (sample size (3) minus 1) and
• the alpha level we are willing to live with (in this case .05. This basically means that we are willing to live with being wrong 5 out of 100 times about our decision.)
138. So with a degrees of freedom of 2 and an alpha level of .05, our critical t value is: 2.920
Since our t value (6.425) is greater than our critical t value (2.920), we will reject the null hypothesis and accept the alternative hypothesis that students’ post-test scores are higher than their pre-test scores.
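The decision rule just described can be written out explicitly (a sketch; the critical value 2.920 is the one-tailed, df = 2, α = .05 entry from a standard t table, hard-coded here rather than computed):

```python
# One-tailed test at alpha = .05 with df = 2.
# 2.920 comes from a standard t table; it is not computed here.
T_CRITICAL = 2.920
t_calculated = 6.425

if t_calculated > T_CRITICAL:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
print(decision)  # reject the null hypothesis
```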
142. To visualize this on a graph we create a t distribution with degrees of freedom of 2. Since the sample size is so small, this will be a much flatter distribution than a normal distribution.
Here is an example of a normal distribution:
And here is an example of a t-distribution with 2 degrees of freedom:
147. You may recall that as the degrees of freedom increase the t-distribution begins to approximate the normal distribution.
Degrees of freedom = 5
Degrees of freedom = 10
152. At about degrees of freedom of 30 the t-distribution looks almost identical to the normal or standard or z-distribution.
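One way to see this convergence numerically is to evaluate the t density at its peak (t = 0) and watch it approach the normal curve's peak height of 1/√(2π) ≈ 0.3989 as the degrees of freedom grow. This sketch uses only the standard library's gamma function (the function name is my own):

```python
from math import gamma, sqrt, pi

def t_pdf_at_zero(df):
    """Height of the t density at t = 0 for the given degrees of freedom:
    Γ((df+1)/2) / (sqrt(df·π) · Γ(df/2))."""
    return gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))

normal_peak = 1 / sqrt(2 * pi)  # about 0.3989
for df in (2, 5, 10, 30):
    # The peak rises toward 0.3989 as df increases (df = 2 gives about 0.3536).
    print(df, round(t_pdf_at_zero(df), 4))
```

By df = 30 the two peak heights are nearly indistinguishable, which matches the slide's claim.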
154. Let’s go back to our example of a t-distribution with 2 degrees of freedom.
Here is the location of the critical t value of 2.920.
2.920 (95% of samples would be on this side of the distribution)
This means that if there really is no statistical difference between the pre- and post-tests, if we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from this part of the distribution. And 5% of the samples would be on this side of the critical t value.
161. And so we would say to ourselves: If the t value lands to the right of 2.920, then we would hypothesize that they are not part of the same distribution but are actually two separate distributions.
pre-tests / post-tests
2.920 (critical t value)
6.425 (calculated t value)
Since our calculated t value is 6.425, we will reject the null hypothesis and accept the alternative hypothesis that they are two different distributions.
165. Let’s see an example where the pre- and post-test scores are closer to one another. Will their difference be statistically significantly different?
Here is the data set:
Students   Pre   Post
1          6     7
2          6     6
3          7     8
mean:      6.3   7.0
Here is the null hypothesis: “Post-test scores are not statistically significantly higher than pre-test scores”.
171. Let’s begin with the sum of the differences between the pre and post tests. The equation is the same:

t = Σ(Xpre − Xpost) / SEdiff

Pre   Post   Difference
6     7      1
6     6      0
7     8      1

Sum of all differences: 1 + 0 + 1 = 2

174. Plug in the sum of all differences: 2
186. Let’s plug in our numbers and then do the calculations:

t = 2 / √[(3 × 2 − 4) / (3 − 1)]
  = 2 / √[(6 − 4) / 2]
  = 2 / √1
  = 2 / 1
  = 2.0
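The same computational formula as in the first example reproduces this result (an illustrative sketch; `paired_t` is my own name for the helper):

```python
from math import sqrt

def paired_t(pre, post):
    """t = Σd / sqrt((n·Σd² − (Σd)²) / (n − 1)), the slides' formula."""
    d = [b - a for a, b in zip(pre, post)]
    n, sum_d = len(d), sum(d)
    se = sqrt((n * sum(x * x for x in d) - sum_d ** 2) / (n - 1))
    return sum_d / se

t = paired_t(pre=[6, 6, 7], post=[7, 6, 8])
print(t)  # 2.0
```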
192. So with a degrees of freedom of 2 and an alpha level of .05, our critical t value is: 2.920.
Since our t value (2.0) is less than our critical t value (2.920), we will fail to reject the null hypothesis: we cannot conclude that students’ post-test scores are higher than their pre-test scores.
196. Here is the location of the critical t value of 2.920:
2.920 (95% of samples would be on this side of the distribution)
This means that if there really is no statistical difference between the pre and posttests, if we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from this part of the distribution. And 5% of the samples would be on this side of the critical t value.
200. If the t value lands to the left of 2.920 then we would say to ourselves, “that is such a common occurrence that I hypothesize that they are the same distribution and not two separate distributions.”
pre-tests / post-tests
2.920 (critical t value)
2.0 (calculated t value)
Since our t value is 2.0, we will fail to reject the null hypothesis.
205. In summary: The paired-samples t-test is used in hypothesis testing to determine whether two matched samples (e.g., pre-/post-tests, or samples matched in some other way) are statistically significantly different from one another.