Paired Samples t-tests
Another important test of differences is the t-test for 
paired samples. This is also known as a t-test for 
repeated measures or a t-test for matched samples.
Whenever two distributions of a dependent variable are highly 
correlated, either because they are distributions of pre and post 
tests from the same people, or from two samples that are matched 
such that there is a one-to-one correspondence between each 
subject in group one and its matched pair in group two, the 
appropriate test of differences is the paired samples t-test.

[Slide graphic: the same people tested one month apart, or brown-haired 
matched subjects paired one-to-one with red-haired matched subjects.]
If we have a single dependent variable that exists on an 
interval or ratio scale, such as scores on a test, 
and is reasonably normally distributed; 
and a single independent variable (e.g., when you took 
the test) which has two levels which are repeated or 
matched, then we use the paired samples t-test to test for 
differences between the two samples of the dependent 
variable.
First we begin with the null hypothesis: 
There is no significant difference in pre and post scores 
(dependent variable). 
Or: there is no significant difference between group one 
and its matched group in terms of the dependent variable.
Another way to state the null-hypothesis for a paired-sample 
t-test is by hypothesizing that the difference 
between the pre-post test or matched pairs is ZERO.
[Slide graphic: two copies of the same silly quiz ("The word of the day 
is ______", "The State bird of Oklahoma is ______", etc.) shown side by 
side, with one subtracted from the other to illustrate a difference of ZERO.]
Conceptually, aggregating the exact differences 
between each pair makes most sense and leads to the 
correct degrees of freedom, sampling distribution and 
critical value. 
Correct Degrees of Freedom 
– 50 people take a pre-test 
– Same 50 take a post test 
– 50 – 1 = 49 degrees of freedom
Correct Sampling Distribution 

Sample mean distribution of post-test scores − sample mean 
distribution of pre-test scores = sample mean distribution of the 
difference between the pre and post sample test scores. 

and Correct Critical Value
The formula for the paired samples t-test is: 

t = Σ(Xpost − Xpre) / SEdiff 

The resulting t value is the number of standard error 
units that separate the two means. 

Subtract each pretest score from each posttest score and 
then sum them up: 

Pre  Post  Difference 
1    7     6 
2    6     4 
1    8     7 

Sum of all differences: 6 + 4 + 7 = 17 

So far we have: 

t = 17 / SEdiff
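The subtract-and-sum step above can be sketched in a few lines of Python; this is just a minimal illustration using the example table's values:

```python
# Numerator of the paired t-test: sum the post-minus-pre differences.
pre = [1, 2, 1]
post = [7, 6, 8]

diffs = [b - a for a, b in zip(pre, post)]  # difference column: [6, 4, 7]
sum_d = sum(diffs)                          # sum of all differences: 17

print(diffs, sum_d)
```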
Now let's calculate the Standard Error of the Difference. 
It is this statistic (the Standard Error of the Difference) that makes 
it possible to determine if this difference occurred only by chance 
or if there is a strong probability that the scores are actually 
different!
One thing to note: if the Standard Error of the 
Difference is large (say 170) then the t value will be 
small. 
For example: 

t = Σ(Xpost − Xpre) / SEdiff = 17 / 170 

t = 0.1
However, if the Standard Error of the Difference is 
small (say 1.7) then the t value will be larger. 
For example: 

t = Σ(Xpost − Xpre) / SEdiff = 17 / 1.7 

t = 10
So what effect will a smaller or larger estimated 
standard error have on whether a result is statistically 
significantly different or not? 

Well, let's say that our null hypothesis (H0) is that there 
is no statistically significant difference between the pre 
and post test scores below: 

Students  Pre  Post 
1         1    7 
2         2    6 
3         1    8 
mean:     1.3  7.0
Of course, the alternative hypothesis would be that 
there is a statistically significant difference between 
the two. 

With a t value of 0.1 (t = 17 / 170 = 0.1), a sample of 3, and therefore 
2 degrees of freedom, we would look up the t critical 
value at a .05 alpha level. (A .05 alpha level means that 
we are willing to consider an outcome to be significant 
only if it would happen 5 times out of 100 (.05) by chance; in 
other words, if it were a very rare occurrence.)
So we go to our table of t-Distribution Critical Values 
to find the critical value that needs to be exceeded by 
our result in order to be considered statistically 
significantly different. 
The critical value we need to exceed in order to reject 
the null hypothesis is 2.920. 
However, our t value is 0.1.
Our t value of 0.1 does not exceed the t critical value of 
2.920, therefore we fail to reject the null hypothesis 
(which, practically speaking, means we retain the null hypothesis).
On the other hand, when our t value is 10 (t = 17 / 1.7 = 10), 
under the same conditions, our t value does exceed the t-critical 
of 2.920, therefore we reject the null hypothesis 
(which essentially means to accept the 
alternative hypothesis).
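The decision rule just described can be sketched as follows. The critical value 2.920 is taken from the text's t table (df = 2, alpha = .05); the function name is just for illustration:

```python
# Decision rule: reject H0 only when the computed t exceeds the
# critical value from the t table (here 2.920, as in the text).
T_CRITICAL = 2.920

def decide(t_value, t_critical=T_CRITICAL):
    """Return the conclusion for a computed t value."""
    return "reject H0" if t_value > t_critical else "fail to reject H0"

print(decide(17 / 170))  # t = 0.1 -> fail to reject H0
print(decide(17 / 1.7))  # t = 10  -> reject H0
```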
So when it comes to inferential statistics (inferring 
meaning to a larger population from a smaller sample), 
the size of the standard error determines everything. 
Understanding the standard error theoretically may or 
may not be important for your learning purposes. If it 
is not, you may want to click quickly through the next 
10 slides. If it is important then consider what follows:
If we took 1000 samples of pre- and post-tests and 
subtracted each pre-test sample from each post-test 
sample we would have a sampling distribution called 
the sampling distribution of differences between pre- and 
post-test samples: 

Sample mean distribution of post-test scores − sample mean 
distribution of pre-test scores = sample mean distribution of the 
difference between the pre and post sample test scores 

… and so on, for each of the 1000 samples.
If you calculated the standard deviation of this new 
subtracted sampling distribution you would have the 
actual standard error we are looking for in this 
equation. 
Since this is almost impossible to do, the standard error 
will be estimated.
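The "take many samples and compute the standard deviation of their mean differences" idea can be simulated. This is only a sketch: the population mean gain, spread, sample size, and number of samples below are made-up illustration values, not anything from the example data:

```python
# Monte Carlo sketch of the sampling distribution of mean differences:
# draw many paired samples, record each sample's mean difference, then
# take the standard deviation of those means -- that IS the standard
# error the text says is "almost impossible" to obtain directly.
import random
import statistics

random.seed(0)
n = 30                      # size of each paired sample (assumed)
mean_gain, sd = 2.0, 1.0    # assumed population post-minus-pre gain and spread

mean_diffs = []
for _ in range(1000):
    # one sample of post-minus-pre differences
    diffs = [random.gauss(mean_gain, sd) for _ in range(n)]
    mean_diffs.append(statistics.mean(diffs))

se_estimate = statistics.stdev(mean_diffs)  # empirical standard error
theoretical = sd / n ** 0.5                 # what theory predicts, roughly
print(round(se_estimate, 3), round(theoretical, 3))
```

The two printed values come out close to each other, which is exactly why the estimated standard error can stand in for the "actual" one.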
Let’s see how this is done with a very simple data set. 
Let’s begin with our null-hypothesis: “Post-test scores 
are not statistically significantly higher than pre-test 
scores”. 
Students Pre Post 
1 1 7 
2 2 6 
3 1 8 
mean: 1.3 7.0
We will calculate each element of the equation below: 

t = Σ(Xpost − Xpre) / SEdiff 

Let's begin with the sum of the differences between 
the pre and post tests: 

Pre  Post  Difference 
1    7     6 
2    6     4 
1    8     7 

Sum of all differences: 6 + 4 + 7 = 17
Just as a contrast, when the differences are smaller, the 
sum of all differences will be smaller as well: 

Pre  Post  Difference 
1    7     6 
2    6     4 
1    8     7 

Sum of all differences: 6 + 4 + 7 = 17 

Pre  Post  Difference 
4    7     3 
5    6     1 
5    8     3 

Sum of all differences: 3 + 1 + 3 = 7
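The contrast between the two data sets can be checked directly; a small helper function (hypothetical, for illustration only) computes the numerator for each:

```python
# Smaller pre/post gaps give a smaller numerator for the t statistic.
def sum_of_differences(pre, post):
    """Sum of post-minus-pre differences (the paired t-test numerator)."""
    return sum(b - a for a, b in zip(pre, post))

big_gap = sum_of_differences([1, 2, 1], [7, 6, 8])    # 6 + 4 + 7 = 17
small_gap = sum_of_differences([4, 5, 5], [7, 6, 8])  # 3 + 1 + 3 = 7

print(big_gap, small_gap)
```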
Back to our original equation. The next step is to 
estimate the standard
Back to our original equation. The next step is to 
estimate the standard 
17
Back to our original equation. The next step is to 
estimate the standard 
17 
3 
We begin with “n” or the size of the sample, which in 
this case is 3.
Back to our original equation. The next step is to 
estimate the standard 
17 
3 
3 
We begin with “n” or the size of the sample, which in 
this case is 3.
Back to our original equation. The next step is to 
estimate the standard 
17 
3 
3 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
Pre Post 
1 7 
2 6 
1 8 
Difference 
6 
4 
7 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
Pre Post 
1 7 
2 6 
1 8 
Difference 
6 
4 
7 
Squared Difference 
36 
16 
49 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
Pre Post 
1 7 
2 6 
1 8 
Difference 
6 
4 
7 
Squared Difference 
36 
16 
49 
Sum of all 
Differences 
Σd2 
36 + 16 + 49 = 101 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
Let‘s plug in our numbers: 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
17 
3 
101 
3 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2.
Back to our original equation. The next step is to 
estimate the standard 
17 
3 
101 
3 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2. Then we compute 
(Σd)2.
Back to our original equation. The next step is to 
estimate the standard 
Pre Post 
1 7 
2 6 
1 8 
Difference 
6 
4 
7 
Sum of all 
Differences 
6 + 4 + 7 = 17 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2. Then we compute 
(Σd)2.
Back to our original equation. The next step is to 
estimate the standard 
Pre Post 
1 7 
2 6 
1 8 
Difference 
6 
4 
7 
Sum of all 
Differences 
6 + 4 + 7 = 17 
Squared Sum of all 
Differences 
172 = 289 
We begin with “n” or the size of the sample, which in 
this case is 3. Then we compute Σd2. Then we compute 
(Σd)2.
Back to our original equation. Let’s plug in our numbers and then do the calculations:

t = 17 / √( ((3 × 101) − 289) / (3 − 1) )
t = 17 / √( (303 − 289) / 2 )
t = 17 / √( 14 / 2 )
t = 17 / √7
t = 17 / 2.646
t = 6.425
So what does a t value of 6.425 mean? Well, that depends on whether it is larger or smaller than the critical t value.

Do you remember how we determine the critical t value? All we need is
• the degrees of freedom (sample size (3) minus 1), and
• the alpha level we are willing to live with (in this case .05, which basically means we are willing to be wrong about our decision 5 out of 100 times).
So with 2 degrees of freedom and an alpha level of .05, our critical t value is: 2.920.

Since our t value (6.425) is greater than our critical t value (2.920), we will reject the null hypothesis and accept the alternative hypothesis that students’ post-test scores are higher than their pre-test scores.
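Where does 2.920 come from? A minimal sketch, not part of the original slides: for 2 degrees of freedom the t CDF happens to have a closed form, F(t) = 1/2 + t / (2√(t² + 2)), so we can solve F(t) = 0.95 (one-tailed, alpha = .05) directly; `critical_t_df2` is a hypothetical helper name. (With SciPy you could instead call `scipy.stats.t.ppf(0.95, df=2)`.)

```python
import math

def critical_t_df2(p=0.95):
    # Solve 1/2 + t / (2*sqrt(t^2 + 2)) = p for t (valid only for df = 2).
    a = p - 0.5                  # t / (2*sqrt(t^2 + 2)) must equal a
    return math.sqrt(8 * a * a / (1 - 4 * a * a))

print(round(critical_t_df2(), 3))  # 2.92
```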
To visualize this on a graph we create a t distribution with 2 degrees of freedom. Since the sample size is so small, this will be a much flatter distribution than a normal distribution.

Here is an example of a normal distribution:

And here is an example of a t-distribution with 2 degrees of freedom:
You may recall that as the degrees of freedom increase, the t-distribution begins to approximate the normal distribution.

Degrees of freedom = 5

Degrees of freedom = 10
At about 30 degrees of freedom, the t-distribution looks almost identical to the normal (or standard, or z-) distribution.
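You can see this convergence in the critical values themselves. The numbers below are one-tailed .05 critical values taken from a standard t table (they are not in the original slides); as df grows, the t critical value approaches the z critical value of 1.645:

```python
# One-tailed critical t values at alpha = .05, from a standard t table.
critical_t = {2: 2.920, 5: 2.015, 10: 1.812, 30: 1.697}
z_crit = 1.645  # one-tailed .05 critical value of the z-distribution

for df, t in critical_t.items():
    # The gap between t and z shrinks as degrees of freedom increase.
    print(df, t, round(t - z_crit, 3))
```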
Let’s go back to our example of a t-distribution with 2 degrees of freedom. Here is the location of the critical t value of 2.920.

2.920

This means that if there really is no statistical difference between the pre- and post-tests, and if we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from the part of the distribution below 2.920. And 5% of the samples would be on the other side of the critical t value.
And so we would say to ourselves: if the calculated t value lands to the right of the critical t value of 2.920, then we would hypothesize that the pre-tests and post-tests are not part of the same distribution but are actually two separate distributions.

pre-tests   post-tests

2.920 (critical t value)   6.425 (calculated t value)

Since our calculated t value is 6.425, we will reject the null hypothesis and accept the alternative hypothesis that they are two different distributions.
Let’s see an example where the pre- and post-test scores are closer to one another. Will their difference be statistically significant?

Here is the data set:

Students   Pre   Post
1          6     7
2          6     6
3          7     8
mean:      6.3   7.0

Here is the null hypothesis: “Post-test scores are not statistically significantly higher than pre-test scores.”
We will again calculate each element of the equation below (the same equation as before):

t = Σd / SEdiff

Let’s begin with the sum of the differences between the pre and post tests:

Pre   Post   Difference
6     7      1
6     6      0
7     8      1

Sum of all differences: Σd = 1 + 0 + 1 = 2
Plug in the sum of all differences (Σd = 2). The “n”, or the sample size, is again 3:

t = 2 / √( ((3 × Σd²) − (Σd)²) / (3 − 1) )
Then we compute Σd²:

Pre   Post   Difference   Squared Difference
6     7      1            1
6     6      0            0
7     8      1            1

Σd² = 1 + 0 + 1 = 2

Let’s plug in our numbers:

t = 2 / √( ((3 × 2) − (Σd)²) / (3 − 1) )
Then we compute (Σd)²:

Sum of all differences: Σd = 1 + 0 + 1 = 2
Squared sum of all differences: (Σd)² = 2² = 4

Let’s plug in our numbers:

t = 2 / √( ((3 × 2) − 4) / (3 − 1) )
Let’s plug in our numbers and then do the calculations:

t = 2 / √( ((3 × 2) − 4) / (3 − 1) )
t = 2 / √( (6 − 4) / 2 )
t = 2 / √( 2 / 2 )
t = 2 / √1
t = 2 / 1
t = 2.0
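The same calculation in Python confirms the result (again, this sketch is not part of the original slides; it just re-applies the direct-differences formula to the second data set):

```python
import math

# Second data set from the slides: pre = [6, 6, 7], post = [7, 6, 8].
pre, post = [6, 6, 7], [7, 6, 8]
d = [b - a for a, b in zip(pre, post)]          # differences: [1, 0, 1]
n, sum_d, sum_d2 = len(d), sum(d), sum(x * x for x in d)
t = sum_d / math.sqrt((n * sum_d2 - sum_d ** 2) / (n - 1))
print(t)  # 2.0
```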
So with 2 degrees of freedom and an alpha level of .05, our critical t value is again: 2.920.

Since our t value (2.0) is less than our critical t value (2.920), we will fail to reject the null hypothesis that students’ post-test scores are not statistically significantly higher than their pre-test scores.
Here is the location of the critical t value of 2.920:

2.920

This means that if there really is no statistical difference between the pre- and post-tests, and if we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from the part of the distribution below 2.920. And 5% of the samples would be on the other side of the critical t value.
If the calculated t value lands to the left of the critical t value of 2.920, then we would say to ourselves, “that is such a common occurrence that I hypothesize that the pre-tests and post-tests are the same distribution and not two separate distributions.”

2.920 (critical t value)   2.0 (calculated t value)

Since our calculated t value is 2.0, we will fail to reject the null hypothesis.
In summary: the paired samples t-test is used in hypothesis testing to determine whether two matched samples (e.g., pre/post tests, or samples matched in some other way) are statistically significantly different from one another.
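The whole procedure from the slides can be wrapped up in one sketch (hypothetical helper name, not from the original deck): compute the paired t from the differences and compare it to the df = 2, alpha = .05 critical value. In practice, a library routine such as SciPy’s `scipy.stats.ttest_rel` would do this for you.

```python
import math

def paired_t_decision(pre, post, critical_t=2.920):
    """Compute the paired-samples t (direct-differences formula) and
    compare it to the critical value (here df = n - 1 = 2, alpha = .05)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    t = sum(d) / math.sqrt((n * sum(x * x for x in d) - sum(d) ** 2) / (n - 1))
    return t, ("reject null" if t > critical_t else "fail to reject null")

# First example: t ≈ 6.425, reject the null.
print(paired_t_decision([1, 2, 1], [7, 6, 8]))
# Second example: t = 2.0, fail to reject the null.
print(paired_t_decision([6, 6, 7], [7, 6, 8]))
```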
End of Presentation

call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitolTechU
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfadityarao40181
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 

Recently uploaded (20)

Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptx
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdf
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 

What is a paired samples t test

  • 2. Another important test of differences is the t-test for paired samples. This is also known as a t-test for repeated measures or a t-test for matched samples.
  • 4. Whenever two distributions of a dependent variable are highly correlated, either because they are distributions of pre and post tests from the same people (e.g., taken one month apart), or because they come from two samples that are matched such that there is a one-to-one correspondence between each subject in group one and its matched pair in group two (e.g., brown-haired subjects matched with red-haired subjects), the appropriate test of differences is the paired samples t-test.
  • 12. If we have a single dependent variable that exists on an interval or ratio scale (such as scores on a test) and is reasonably normally distributed, and a single independent variable (e.g., when you took the test) that has two levels which are repeated or matched, then we use the paired samples t-test to test for differences between the two samples of the dependent variable.
  • 26. First we begin with the null hypothesis: there is no significant difference between pre and post scores (the dependent variable). Or, for matched samples: there is no significant difference between group one and its matched group in terms of the dependent variable.
  • 32. Another way to state the null hypothesis for a paired samples t-test is to hypothesize that the difference between the pre and post tests, or between the matched pairs, is ZERO (pre − post = 0). [The slide illustrates this with two identical quiz worksheets whose scores subtract to zero.]
  • 35. Conceptually, aggregating the exact differences between each pair makes the most sense and leads to the correct degrees of freedom, sampling distribution, and critical value.
  • 36. Correct degrees of freedom: 50 people take a pre-test; the same 50 take a post-test; 50 − 1 = 49 degrees of freedom.
  • 38. Correct sampling distribution: the sample mean distribution of pre-test scores, subtracted from the sample mean distribution of post-test scores, gives the sample mean distribution of differences between the pre and post test scores.
  • 41. The formula for the paired samples t-test is: t = Σ(X1 − X2) / SEdiff, where t is the paired t-test value, the number of standard-error units that separate the two means.
  • 44. Applying the formula t = Σ(Xpre − Xpost) / SEdiff to some sample data: subtract each pre-test score from each post-test score, then sum the differences.
Pre: 1, 2, 1
Post: 7, 6, 8
Differences: 6, 4, 7
Sum of all differences: 6 + 4 + 7 = 17
So the numerator is 17, and t = 17 / SEdiff.
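The steps above can be sketched in plain Python. This is a minimal illustration (not from the slides) using the standard paired-t computation, t = mean(d) / (sd(d) / √n), which is algebraically equivalent to the slides' sum form, t = Σd / SE(Σd):

```python
import math

pre = [1, 2, 1]
post = [7, 6, 8]

# Pairwise differences (post - pre), as in the slides: 6, 4, 7
diffs = [b - a for a, b in zip(pre, post)]
sum_d = sum(diffs)  # 17, the numerator from the slides

# Standard paired-t computation on the differences
n = len(diffs)
mean_d = sum_d / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
se_mean = math.sqrt(var_d / n)  # standard error of the mean difference
t = mean_d / se_mean
df = n - 1  # degrees of freedom = number of pairs minus one
```

For these three pairs, t works out to 17/√7 ≈ 6.43 with 2 degrees of freedom.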
  • 50. Now let's calculate the standard error of the difference (SEdiff). It is this statistic that makes it possible to determine whether the difference occurred only by chance, or whether there is a strong probability that the two sets of scores are actually different.
  • 52. One thing to note: if the standard error of the difference is large (say 170), then the t value will be small. For example: t = 17 / 170 = 0.1.
  • 57. However, if the standard error of the difference is small (say 1.7), then the t value will be larger. For example: t = 17 / 1.7 = 10.
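The two hypothetical cases above (SEdiff of 170 versus 1.7, both invented for illustration in the slides) amount to a one-line division each:

```python
sum_of_differences = 17

# Large standard error of the difference -> small t value
t_large_se = sum_of_differences / 170   # 0.1

# Small standard error of the difference -> large t value
t_small_se = sum_of_differences / 1.7   # 10.0
```

Same numerator, a hundredfold change in the denominator, and t swings from trivially small to clearly large.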
  • 62. So what effect will a smaller or larger estimated standard error have on whether a result is statistically significantly different or not? Let's say our null hypothesis (H0) is that there is no statistically significant difference between the pre and post test scores below:
Students: 1, 2, 3
Pre: 1, 2, 1 (mean: 1.3)
Post: 7, 6, 8 (mean: 7.0)
  • 65. Of course, the alternative hypothesis would be that there is a statistically significant difference between the two. Suppose our t value is 0.1 (t = 17 / 170 = 0.1).
  • 68. With a t value of 0.1, a sample of 3, and therefore 2 degrees of freedom, we would look up the t critical value at a .05 alpha level. (A .05 alpha level means we are willing to consider an outcome significant only if it would happen by chance no more than 5 times out of 100, in other words, if it were a rare occurrence.)
  • 69. So we go to our table of t-distribution critical values to find the value our result must exceed in order to be considered statistically significantly different. The critical value we need to exceed in order to reject the null hypothesis is 2.920. However, our t value is only 0.1.
  • 75. Our t value of 0.1 does not exceed the t critical value of 2.920, so we fail to reject the null hypothesis (which, loosely speaking, means we retain the null hypothesis).
  • 76. On the other hand, when the standard error is 1.7, our t value is 10 (t = 17 / 1.7 = 10). That does exceed the t critical value of 2.920, so we reject the null hypothesis (which essentially means accepting the alternative hypothesis).
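The decision rule in the two cases above reduces to a single comparison against the table value. A small sketch, using the 2.920 critical value the slides read from their table (df = 2, alpha = .05):

```python
T_CRITICAL = 2.920  # critical value from the slides' t table (df = 2, alpha = .05)

def decide(t_value, t_crit=T_CRITICAL):
    """Reject H0 when the magnitude of the observed t exceeds the critical value."""
    return "reject H0" if abs(t_value) > t_crit else "fail to reject H0"
```

`decide(0.1)` fails to reject, while `decide(10)` rejects, matching the two outcomes walked through in the slides.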
  • 79. So when it comes to inferential statistics (inferring meaning about a larger population from a smaller sample), the size of the standard error determines everything.
  • 80. Understanding the standard error theoretically may or may not be important for your learning purposes. If it is not, you may want to click quickly through the next several slides. If it is, consider what follows:
  • 93. If we took 1000 samples of pre- and post-tests and subtracted each pre-test sample from each post-test sample, we would have a sampling distribution called the sampling distribution of differences between pre- and post-test samples. [Figure: sample mean distribution of post-test scores − sample mean distribution of pre-test scores = sample mean distribution of the differences between pre- and post-test scores, and so on for each pair of samples.]
  • 95. If you calculated the standard deviation of this new subtracted sampling distribution, you would have the actual standard error we are looking for in this equation. Since this is almost impossible to do, the standard error will be estimated.
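The idea of building this sampling distribution can be simulated. A minimal sketch in Python: the population means, spreads, and sample sizes below are invented for illustration. It draws 1000 pre- and 1000 post-test samples, subtracts each pre-test sample mean from its post-test sample mean, and takes the standard deviation of the resulting distribution of differences, which is exactly the standard error described above.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def sample_mean(mu, sigma, n):
    """Mean of one random sample of size n drawn from a normal population."""
    return statistics.mean(random.gauss(mu, sigma) for _ in range(n))

# 1000 post-test sample means minus 1000 pre-test sample means
# (hypothetical populations: pre mean 2, post mean 7, sd 1, n = 30)
diffs = [sample_mean(7, 1, 30) - sample_mean(2, 1, 30) for _ in range(1000)]

# The standard deviation of this sampling distribution of differences
# is the standard error we would otherwise have to estimate
empirical_se = statistics.stdev(diffs)
print(statistics.mean(diffs))  # close to the true population difference of 5
print(empirical_se)
```

Because collecting 1000 real samples is impractical, the formulas on the following slides estimate this same quantity from a single sample.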
  • 97. Let’s see how this is done with a very simple data set. Let’s begin with our null hypothesis: “Post-test scores are not statistically significantly higher than pre-test scores.”
  Student   Pre   Post
     1       1     7
     2       2     6
     3       1     8
  mean:     1.3   7.0
  • 99. We will calculate each element of the equation below:
  t = Σ(Xpost − Xpre) / SEdiff
  • 102. Let’s begin with the numerator, the sum of the differences between the pre and post tests, Σ(Xpost − Xpre):
  Pre   Post   Difference
   1     7        6
   2     6        4
   1     8        7
  Sum of all Differences: 6 + 4 + 7 = 17
  • 112. Just as a contrast, when the differences are smaller, the sum of all differences will be smaller as well:
  Pre   Post   Difference          Pre   Post   Difference
   1     7        6                 4     7        3
   2     6        4                 5     6        1
   1     8        7                 5     8        3
  Sum: 6 + 4 + 7 = 17              Sum: 3 + 1 + 3 = 7
  • 114. Back to our original equation. The next step is to estimate the standard error of the difference:
  SEdiff = √[ (n·Σd² − (Σd)²) / (n − 1) ]
  • 117. We begin with “n”, or the size of the sample, which in this case is 3. Then we compute Σd², the sum of the squared differences:
  Pre   Post   Difference   Squared Difference
   1     7        6              36
   2     6        4              16
   1     8        7              49
  Σd² = 36 + 16 + 49 = 101
  • 125. Then we compute (Σd)², the squared sum of all differences: (Σd)² = 17² = 289
  • 128. Let’s plug in our numbers and then do the calculations:
  SEdiff = √[ (3 · 101 − 289) / (3 − 1) ] = √[ (303 − 289) / 2 ] = √(14 / 2) = √7 = 2.646
  • 132. t = 17 / 2.646 = 6.425
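The calculation above can be sketched in a few lines of code. This is a minimal illustration of the direct-differences formula t = Σd / √[(n·Σd² − (Σd)²)/(n − 1)]; the function name `paired_t` is ours, not from any library.

```python
import math

def paired_t(differences):
    """t value from the direct-differences formula used in the slides."""
    n = len(differences)
    sum_d = sum(differences)                     # Σd
    sum_d_sq = sum(d * d for d in differences)   # Σd²
    se_diff = math.sqrt((n * sum_d_sq - sum_d ** 2) / (n - 1))
    return sum_d / se_diff

# Differences from the example: post (7, 6, 8) minus pre (1, 2, 1)
print(round(paired_t([6, 4, 7]), 3))  # 6.425
```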
  • 137. So what does a t value of 6.425 mean? Well, that depends on whether it is larger or smaller than the critical t value. Do you remember how we determine the critical t value? All we need is
  • the degrees of freedom (sample size (3) minus 1) and
  • the alpha level we are willing to live with (in this case .05. This basically means that we are willing to live with being wrong 5 out of 100 times about our decision.)
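If a printed t table is not at hand, the critical value can be looked up programmatically. This sketch assumes SciPy is installed; `scipy.stats.t.ppf` gives the percentile point of the t distribution (for a one-tailed test at alpha = .05 we want the 95th percentile).

```python
from scipy.stats import t

df = 2          # sample size of 3, minus 1
alpha = 0.05    # one-tailed
t_critical = t.ppf(1 - alpha, df)
print(round(t_critical, 3))  # 2.92
```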
  • 141. So with degrees of freedom of 2 and an alpha level of .05, our critical t value is 2.920. Since our t value (6.425) is greater than our critical t value (2.920), we will reject the null hypothesis and accept the alternative hypothesis that students’ post-test scores are higher than their pre-test scores.
  • 146. To visualize this on a graph we create a t distribution with degrees of freedom of 2. Since the sample size is so small, this will be a much flatter distribution than a normal distribution. [Figures: an example of a normal distribution, and an example of a t-distribution with 2 degrees of freedom.]
  • 151. You may recall that as the degrees of freedom increase, the t-distribution begins to approximate the normal distribution. [Figures: t-distributions with degrees of freedom = 5 and degrees of freedom = 10.]
  • 152. At about degrees of freedom of 30 the t-distribution looks almost identical to the normal or standard or z-distribution.
  • 160. Let’s go back to our example of a t-distribution with 2 degrees of freedom. Here is the location of the critical t value of 2.920. This means that if there really is no statistical difference between the pre- and post-tests, and we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from the part of the distribution below the critical t value, and 5% of the samples would be on the other side of it.
  • 164. And so we would say to ourselves: if the t value lands to the right of 2.920 (the critical t value), then we would hypothesize that the pre-tests and post-tests are not part of the same distribution but are actually two separate distributions. Since our calculated t value is 6.425, we will reject the null hypothesis and accept the alternative hypothesis that they are two different distributions.
  • 168. Let’s see an example where the pre- and post-test scores are closer to one another. Will their difference be statistically significantly different? Here is the data set:
  Student   Pre   Post
     1       6     7
     2       6     6
     3       7     8
  mean:     6.3   7.0
  Here is the null hypothesis: “Post-test scores are not statistically significantly higher than pre-test scores.”
  • 170. We will calculate again each element of the equation below:
  t = Σ(Xpost − Xpre) / SEdiff
  • 173. Let’s begin with the sum of the differences between the pre and post tests:
  Pre   Post   Difference
   6     7        1
   6     6        0
   7     8        1
  Sum of all Differences: 1 + 0 + 1 = 2
  • 177. The “n”, or the sample size, is 3.
  • 179. Then we compute Σd², the sum of the squared differences:
  Pre   Post   Difference   Squared Difference
   6     7        1              1
   6     6        0              0
   7     8        1              1
  Σd² = 1 + 0 + 1 = 2
  • 183. Then we compute (Σd)², the squared sum of all differences: (Σd)² = 2² = 4
  • 188. Let’s plug in our numbers and then do the calculations:
  SEdiff = √[ (3 · 2 − 4) / (3 − 1) ] = √[ (6 − 4) / 2 ] = √(2 / 2) = √1 = 1
  • 191. t = 2 / 1 = 2.0
  • 195. So with degrees of freedom of 2 and an alpha level of .05, our critical t value is 2.920. Since our t value (2.0) is less than our critical t value (2.920), we will fail to reject the null hypothesis: we cannot conclude that students’ post-test scores are higher than their pre-test scores.
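The second data set can be checked the same way. A minimal sketch using only the standard library that reproduces t = 2.0 and the fail-to-reject decision; the critical value 2.920 is taken from the t table for df = 2, one-tailed alpha = .05.

```python
import math

# Second data set: pre (6, 6, 7) and post (7, 6, 8); d = post - pre
d = [1, 0, 1]
n = len(d)
se_diff = math.sqrt((n * sum(x * x for x in d) - sum(d) ** 2) / (n - 1))
t_value = sum(d) / se_diff
print(t_value)  # 2.0

t_critical = 2.920  # df = 2, one-tailed alpha = .05, from the t table
print("reject null" if t_value > t_critical else "fail to reject null")
```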
  • 199. Here is the location of the critical t value of 2.920. This means that if there really is no statistical difference between the pre- and post-tests, and we were to hypothetically draw 100 samples, 95% of the time those samples would be drawn from the part of the distribution below the critical t value, and 5% of the samples would be on the other side of it.
  • 204. If the t value lands to the left of 2.920 (the critical t value), then we would say to ourselves, “that is such a common occurrence that I hypothesize that the pre-tests and post-tests are the same distribution and not two separate distributions.” Since our calculated t value is 2.0, we will fail to reject the null hypothesis.
  • 205. In summary: the paired samples t-test is used in hypothesis testing to determine whether two matched samples (e.g., pre-/post-tests, or samples matched in some other way) are statistically significantly different from one another.
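In practice, this entire procedure is a single library call. A sketch assuming SciPy is available: `scipy.stats.ttest_rel` runs the paired samples t-test directly on the raw scores, and it reports the equivalent mean-difference form of t, so the statistic matches the 6.425 computed by hand above.

```python
from scipy.stats import ttest_rel

pre = [1, 2, 1]
post = [7, 6, 8]

result = ttest_rel(post, pre)  # paired samples t-test on the raw scores
print(round(result.statistic, 3))  # 6.425
print(result.pvalue)  # two-sided p-value
```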