
- 1. Inferential Statistics
  - "data analysis techniques for determining how likely it is that results obtained from a sample or samples are the same results that would have been obtained for the entire population" (p. 337)
  - Techniques "used to make inferences about parameters" (p. 338)
  - "using samples to make inferences about populations produces only probability statements about the population" (p. 338)
  - "analyses do not prove that the results are true or false" (p. 338)
- 2. Concepts underlying the application of Inferential Statistics:
  - Standard error: SE_x̄ = SD / √(N − 1)
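The slide's formula can be sketched in pure Python. This is an illustration, not part of the original deck; the scores are hypothetical, and the SD here is computed with N in the denominator so that SD / √(N − 1) matches the slide's formula (it is numerically equivalent to the familiar "sample SD / √N").

```python
import math

def standard_error(scores):
    """Standard error of the mean per the slide: SE = SD / sqrt(N - 1),
    where SD uses N in its denominator."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return sd / math.sqrt(n - 1)

scores = [85, 90, 78, 92, 88]  # hypothetical test scores
print(round(standard_error(scores), 3))  # prints 2.441
```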
- 3.
  - Samples can never truly reflect a population
  - Variation among the means of samples drawn from the same population is called sampling error
  - Sampling errors form a bell-shaped curve
  - Most of the sample means obtained will be close to the population mean
  - The standard error (SE_x̄) tells us by how much we would expect sample means drawn from the same population to differ from one another
- 4. The Null Hypothesis:
  - A hypothesis stating that there is no relationship (or difference) between variables and that any relationship found will be a chance (not true) relationship, the result of sampling error
  - Testing a null hypothesis requires a test of significance and a selected probability level that indicates how much risk you are willing to take that the decision you make is wrong
- 5. Tests of Significance
  - Statistical tests used to determine whether there is a significant difference between or among two or more means at a selected probability level
  - Frequently used tests of significance are the t test, analysis of variance, and chi square
  - Based on the test of significance, the researcher will either reject or fail to reject the null hypothesis
- 6. Back to the Null Hypothesis
  - Type I error: the researcher rejects a null hypothesis that is really true
  - Type II error: the researcher fails to reject a null hypothesis that is really false
- 7. The probability level most commonly used:
  - Is alpha (α), where α = .05
  - If you select α = .05 as your probability level, you have a 5% probability of making a Type I error
  - The smaller the chance of being wrong you are willing to take, the greater the difference between the means must be
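The meaning of the 5% Type I error rate can be checked with a small simulation, not taken from the deck: repeatedly draw two samples from the *same* population (so the null hypothesis is true) and count how often a t test rejects it anyway. The population parameters, sample sizes, and the approximate critical value 2.00 (for df = 58 at α = .05) are all assumptions for illustration.

```python
import math
import random

def t_stat(a, b):
    """Pooled-variance t statistic for two independent samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    ss1 = sum((x - m1) ** 2 for x in a)
    ss2 = sum((x - m2) ** 2 for x in b)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

random.seed(1)
trials, rejections = 2000, 0
for _ in range(trials):
    a = [random.gauss(100, 15) for _ in range(30)]
    b = [random.gauss(100, 15) for _ in range(30)]  # same population: H0 is true
    if abs(t_stat(a, b)) > 2.00:  # ≈ critical t at alpha = .05, df = 58
        rejections += 1

print(rejections / trials)  # expected to be close to .05
```

Every rejection counted here is, by construction, a Type I error, so the printed proportion approximates α.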
- 8. Two-Tailed and One-Tailed Tests
  - These refer to the extreme ends of the bell-shaped curve that illustrates a normal distribution
  - A two-tailed test allows for the possibility that a difference may occur in either direction
  - A one-tailed test assumes that a difference can occur in only one direction
  - Tests of significance are almost always two-tailed
- 9. Degrees of Freedom:
  - Depend on the number of participants and the number of groups
  - Each test of significance has its own formula for determining degrees of freedom
  - For the Pearson r, the formula is N − 2
- 10. Types of Tests of Significance (choose the correct type)
  - Parametric tests: used with ratio and interval data; more powerful, more often used, and preferred, but based on four major assumptions (p. 348)
  - Nonparametric tests: used when the data are nominal or ordinal, when the parametric assumptions are violated, or when the nature of the distribution is unknown
- 11. The t Test:
  - Used to determine whether two means are significantly different at a selected probability level. There are two types: the t test for independent samples and the t test for nonindependent samples
- 12.
  - Independent samples are two samples that are randomly formed without any type of matching
  - The t test for independent samples is a parametric test of significance used to determine whether, at a selected probability level, a significant difference exists between the means of two independent samples
- 13.
  - The t test for nonindependent samples is used to determine whether, at a selected probability level, a significant difference exists between the means of two matched, nonindependent samples
  - The formulae are:
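The formula images on this slide did not survive extraction. As a sketch, the standard pooled-variance t for independent samples (a textbook formula, not necessarily typeset exactly as on the slide) can be computed directly; the two small samples are hypothetical.

```python
import math

def t_independent(sample1, sample2):
    """t test for independent samples:
    t = (mean1 - mean2) / sqrt(sp2 * (1/n1 + 1/n2)),
    where sp2 = (SS1 + SS2) / (n1 + n2 - 2); df = n1 + n2 - 2."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)  # sum of squared deviations
    ss2 = sum((x - m2) ** 2 for x in sample2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t, df = t_independent([10, 12, 9, 11], [14, 15, 13, 16])
print(round(t, 4), df)  # prints -4.3818 6
```

The computed t would then be compared with the critical t for df = 6 at the selected α to decide whether to reject the null hypothesis.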
- 14.
  - You can also use SPSS 12.0 to calculate the t test for independent and nonindependent samples
  - Simple analysis of variance (ANOVA): a parametric test of significance used to determine whether a significant difference exists between two or more means at a selected probability level
  - For a study involving three groups, ANOVA is the appropriate analysis technique
- 15. An F ratio is computed: the ratio of the between-groups variance to the within-groups variance
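As an illustration (not from the slides, whose formula images were lost), the one-way ANOVA F ratio for any number of groups can be computed from sums of squares; the three groups below are hypothetical.

```python
def f_ratio(*groups):
    """One-way ANOVA: F = MS_between / MS_within, where
    MS_between = SS_between / (k - 1) and MS_within = SS_within / (N - k)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1          # k - 1 groups
    df_within = len(all_scores) - len(groups)  # N - k participants
    return (ss_between / df_between) / (ss_within / df_within)

f = f_ratio([1, 2, 3], [2, 3, 4], [5, 6, 7])
print(round(f, 2))  # prints 13.0
```

A large F means the group means vary more than chance (within-group variability) would predict.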
- 16.
  - When the F ratio is significant and more than two means are involved, procedures called multiple comparisons are used to determine which means are significantly different from which other means
  - The Scheffé test is appropriate for making any and all possible comparisons involving a set of means. It involves calculation of an F ratio for each mean comparison of interest
- 17. The Scheffé formula
  - We can also use SPSS 12.0 to run multiple-comparison tests to determine which means are significantly different from other means
- 18. Factorial Analysis of Variance is a statistical technique that:
  - Allows the researcher to determine the effect of the independent variable and the control variable on the dependent variable, both separately and in combination
  - Is the appropriate statistical analysis if a study is based on a factorial design and investigates two or more independent variables and the interactions between them; it yields a separate F ratio for each
- 19. Analysis of Covariance (ANCOVA)
  - A statistical method for equating groups on one or more variables and for increasing the power of a statistical test; adjusts scores on a dependent variable for initial differences on other variables
- 20. Multiple regression equation:
  - A prediction equation using two or more variables that individually predict a criterion in order to make a more accurate prediction
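The idea of a prediction equation can be shown with a short sketch, not from the deck: given an intercept and weights (here entirely hypothetical coefficients for predicting an exam score from GPA and weekly study hours), the predicted criterion score is a weighted sum of the predictors.

```python
def predict(intercept, coefficients, predictors):
    """Multiple regression prediction: Y' = a + b1*X1 + b2*X2 + ..."""
    return intercept + sum(b * x for b, x in zip(coefficients, predictors))

# Hypothetical equation: Y' = 20 + 15*(GPA) + 1.5*(study hours per week)
predicted = predict(20.0, [15.0, 1.5], [3.0, 10.0])
print(predicted)  # prints 80.0
```

In practice the intercept and coefficients are estimated from data (e.g., by least squares); the point here is only the form of the equation.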
- 21. Chi Square (χ²): a nonparametric test of significance
  - Appropriate when the data are in the form of frequency counts; compares the proportions actually observed in a study with the expected proportions to see if they are significantly different
  - There are two kinds of chi square:
- 22.
  - One-dimensional chi square: can be used to compare frequencies in different categories
  - Two-dimensional chi square: used when frequencies are categorized along more than one dimension
  - Formula: χ² = Σ (O − E)² / E
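The one-dimensional chi-square statistic, χ² = Σ (O − E)² / E, is simple enough to compute by hand; here is a sketch with hypothetical counts (60 participants choosing among three categories expected to be equally popular).

```python
def chi_square(observed, expected):
    """One-dimensional chi square: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

x2 = chi_square([30, 20, 10], [20, 20, 20])
print(x2)  # prints 10.0
```

The resulting χ² is compared with the critical value for df = (number of categories − 1) at the selected α.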
- 23.
  - Of course, you can also use SPSS 12.0 to calculate chi square
