
# Emil Pulido on Quantitative Research: Inferential Statistics

What do you need to consider when you will be doing Quantitative Research? You will need to consider your data- statistics.


### Transcript

1. Inferential Statistics
   - "Data analysis techniques for determining how likely it is that results obtained from a sample or samples are the same results that would have been obtained for the entire population" (p. 337)
   - Techniques "used to make inferences about parameters" (p. 338)
   - "Using samples to make inferences about populations produces only probability statements about the population" (p. 338)
   - "Analysis do not prove that the results are true or false" (p. 338)
2. Concepts underlying the application of inferential statistics
   - Standard error:

     SE_x̄ = SD / √(N − 1)
3. More on standard error:
   - Samples can never truly reflect a population
   - Variation among the means of samples drawn from the same population is called sampling error
   - Sampling errors form a bell-shaped curve
   - Most of the sample means obtained will be close to the population mean
   - The standard error (SE_x̄) tells us by how much we would expect our sample mean to differ from the population mean
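The standard-error formula above can be sketched as a short Python function. This follows the slide's form, SE = SD / √(N − 1), where SD is computed with N in the denominator; the scores below are made-up illustrative data:

```python
import math

def standard_error(scores):
    """Standard error of the mean per the slide's formula:
    SE = SD / sqrt(N - 1), where SD divides by N."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return sd / math.sqrt(n - 1)

scores = [10, 12, 9, 11, 13, 10, 12, 11]  # hypothetical sample
print(round(standard_error(scores), 3))   # → 0.463
```

Note that this is algebraically the same as the more familiar s / √N, where s is the sample standard deviation computed with N − 1 in the denominator.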
4. The null hypothesis:
   - A hypothesis stating that there is no relationship (or difference) between variables and that any relationship found will be a chance (not true) relationship, the result of sampling error
   - Testing a null hypothesis requires a test of significance and a selected probability level that indicates how much risk you are willing to take that the decision you make is wrong
5. Tests of significance
   - Statistical tests used to determine whether there is a significant difference between or among two or more means at a selected probability level
   - Frequently used tests of significance are the t test, analysis of variance, and chi square
   - Based on a test of significance, the researcher will either reject or not reject the null hypothesis
6. Back to the null hypothesis
   - Type I error: the researcher rejects a null hypothesis that is really true
   - Type II error: the researcher fails to reject a null hypothesis that is really false
7. The probability level most commonly used:
   - Is the alpha (α) level, where α = .05
   - If you select α = .05 as your probability level, you have a 5% probability of making a Type I error
   - The less chance of being wrong you are willing to take, the greater the difference between means must be
8. Two-tailed and one-tailed tests
   - These refer to the extreme ends of the bell-shaped curve that illustrates a normal distribution
   - A two-tailed test allows for the possibility that a difference may occur in either direction
   - A one-tailed test assumes that a difference can occur in only one direction
   - Tests of significance are almost always two-tailed
9. Degrees of freedom:
   - Dependent on the number of participants and the number of groups
   - Each test of significance has its own formula for determining degrees of freedom
   - For Pearson r, the formula is N − 2
10. Types of tests of significance (choose the correct type)
    - Parametric tests: used with ratio and interval data; more powerful, more often used, and preferred, but based on four major assumptions (p. 348)
    - Nonparametric tests: used when the data are nominal or ordinal, when parametric assumptions are violated, or when the nature of the distribution is unknown
11. The t test:
    - Used to determine whether two means are significantly different at a selected probability level. There are two types: the t test for independent samples and the t test for nonindependent samples
12. More on the t test:
    - Independent samples are two samples that are randomly formed without any type of matching
    - The t test for independent samples is a parametric test of significance used to determine whether, at a selected probability level, a significant difference exists between the means of two independent samples
13. Still more on the t test:
    - The t test for nonindependent samples is used to determine whether, at a selected probability level, a significant difference exists between the means of two matched, nonindependent samples
    - The formulae are:
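The transcript does not reproduce the slide's t-test formulas, but the standard pooled-variance form of the t test for independent samples can be sketched in Python (the two samples below are made-up):

```python
import math

def t_independent(a, b):
    """t test for independent samples, pooled-variance form.
    Returns the t statistic and df = n1 + n2 - 2."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    ss1 = sum((x - m1) ** 2 for x in a)   # sum of squared deviations
    ss2 = sum((x - m2) ** 2 for x in b)
    pooled = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance estimate
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t, df = t_independent([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
print(round(t, 3), df)  # → 2.0 8
```

The resulting t is then compared against the critical value for df degrees of freedom at the selected probability level.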
14. You can also use SPSS 12.0 to calculate t tests for independent and nonindependent samples
    - Simple analysis of variance (ANOVA): a parametric test of significance used to determine whether a significant difference exists between two or more means at a selected probability level
    - For a study involving three groups, ANOVA is the appropriate analysis technique
15. An F ratio is computed; the slide shows the two formulas
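The F ratio itself is the mean square between groups divided by the mean square within groups; here is a minimal Python sketch (the three groups are made-up data, not from the slides):

```python
def one_way_anova(*groups):
    """One-way ANOVA F ratio: F = MS_between / MS_within."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total participants
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

print(one_way_anova([1, 2, 3], [2, 3, 4], [3, 4, 5]))  # → 3.0
```

The F value is then compared against the critical F for (k − 1, N − k) degrees of freedom.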
16. More on ANOVA:
    - When the F ratio is significant and more than two means are involved, procedures called multiple comparisons are used to determine which means are significantly different from which other means
    - The Scheffé test is appropriate for making any and all possible comparisons involving a set of means. It involves calculating an F ratio for each mean comparison of interest
17. The Scheffé formula
    - We can also use SPSS 12.0 to run multiple-comparison tests to determine which means are significantly different from other means
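The slide's Scheffé formula is not reproduced in the transcript; one common textbook form (assumed here) divides the squared difference between two group means by MS within times (1/n_i + 1/n_j):

```python
def scheffe_f(mean_i, n_i, mean_j, n_j, ms_within):
    """F ratio for one Scheffé pairwise comparison (a common form;
    the slide's own formula was not reproduced in the transcript).
    Significant if it exceeds (k - 1) * F_critical(k - 1, N - k)."""
    return (mean_i - mean_j) ** 2 / (ms_within * (1 / n_i + 1 / n_j))

# Hypothetical: comparing group means 2 and 4 (n = 3 each), MS_within = 1
print(round(scheffe_f(2, 3, 4, 3, 1.0), 3))  # → 6.0
```

The stricter criterion, (k − 1) times the critical F, is what makes Scheffé safe for any and all comparisons rather than only preplanned ones.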
18. Factorial analysis of variance is a statistical technique that:
    - Allows the researcher to determine the effect of the independent variable and the control variable on the dependent variable, both separately and in combination
    - Is the appropriate statistical analysis if a study is based on a factorial design and investigates two or more independent variables and the interactions between them; it yields a separate F ratio for each
19. Analysis of covariance (ANCOVA)
    - A statistical method for equating groups on one or more variables and for increasing the power of a statistical test; adjusts scores on a dependent variable for initial differences on other variables
20. Multiple regression equation:
    - A prediction equation using two or more variables that individually predict a criterion in order to make a more accurate prediction
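A two-predictor regression equation can be fitted by solving the normal equations (XᵀX)b = Xᵀy. This minimal sketch (with made-up data; real work would use a statistics package such as SPSS, as the slides do) recovers the intercept b0 and weights b1, b2:

```python
def multiple_regression(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 by solving the
    normal equations with a tiny Gaussian elimination (no pivoting;
    fine for small well-conditioned examples like this one)."""
    rows = [[1.0, a, b] for a, b in zip(x1, x2)]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            f = xtx[j][i] / xtx[i][i]
            xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= f * xty[i]
    coeffs = [0.0] * 3                      # back-substitution
    for i in (2, 1, 0):
        coeffs[i] = (xty[i] - sum(xtx[i][j] * coeffs[j]
                                  for j in range(i + 1, 3))) / xtx[i][i]
    return coeffs

# Hypothetical data generated exactly from y = 1 + 2*x1 + 3*x2:
b = multiple_regression([0, 1, 2, 3], [1, 0, 2, 1], [4, 3, 11, 10])
print([round(c, 3) for c in b])  # → [1.0, 2.0, 3.0]
```

Each additional predictor that carries independent information about the criterion tightens the prediction, which is the point of the slide.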
21. Chi square (χ²): a nonparametric test of significance
    - Appropriate when the data are in the form of frequency counts; compares proportions actually observed in a study with expected proportions to see if they are significantly different
    - There are two kinds of chi square:
22. The two kinds:
    - One-dimensional chi square: used to compare frequencies in different categories
    - Two-dimensional chi square: used when frequencies are categorized along more than one dimension
    - Formulae:
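The one-dimensional chi-square statistic is the sum of (O − E)²/E over the categories; a short sketch with a made-up coin-flip example:

```python
def chi_square(observed, expected):
    """One-dimensional chi square: sum of (O - E)^2 / E per category."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: 60 heads and 40 tails against an expected 50/50 split
print(chi_square([60, 40], [50, 50]))  # → 4.0
```

The same (O − E)²/E sum underlies the two-dimensional case; there the expected frequencies come from the row and column totals of the contingency table.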
23. Of course, you can also use SPSS 12.0 to calculate chi square