Inferential Statistics - "data analysis techniques for determining how likely it is that results obtained from a sample or samples are the same results that would have been obtained for the entire population" (p. 337). Techniques "used to make inferences about parameters" (p. 338). "Using samples to make inferences about populations produces only probability statements about the population" (p. 338). "Analysis do not prove that the results are true or false" (p. 338)
Concepts underlying the application of Inferential Statistics - Standard Error: SE(x̄) = SD / √(N − 1)
Samples can never truly reflect a population. Variation among the means of samples drawn from the same population is called sampling error. Sampling errors form a bell-shaped curve, and most of the sample means obtained will be close to the population mean. The standard error (SE x̄) tells us by how much we would expect our sample mean to differ from the population mean
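The standard error formula above can be sketched in a few lines of Python; the sample of test scores is hypothetical, invented only for illustration.

```python
import math

def standard_error(scores):
    """Standard error of the mean, using the slide's formula SE = SD / sqrt(N - 1)."""
    n = len(scores)
    mean = sum(scores) / n
    # SD here divides by N (population-style), to match the slide's formula
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return sd / math.sqrt(n - 1)

scores = [72, 75, 78, 80, 85]  # hypothetical sample of test scores
print(round(standard_error(scores), 2))  # how far we expect this sample mean to stray
```

Note that SD/√(N − 1) with a population-style SD is algebraically the same as the more common s/√N with the sample (N − 1) standard deviation.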
The Null Hypothesis: a hypothesis stating that there is no relationship (or difference) between variables and that any relationship found will be a chance (not true) relationship, the result of sampling error. Testing a null hypothesis requires a test of significance and a selected probability level that indicates how much risk you are willing to take that the decision you make is wrong
Tests of Significance: statistical tests used to determine whether or not there is a significant difference between or among two or more means at a selected probability level. Frequently used tests of significance are the t test, analysis of variance, and chi square. Based on a test of significance, the researcher will either reject or not reject the null hypothesis
Back to the Null Hypothesis. Type I error: the researcher rejects a null hypothesis that is really true. Type II error: the researcher fails to reject a null hypothesis that is really false
Probability level most commonly used: alpha (α), where α = .05. If you select α = .05 as your probability level, you have a 5% probability of making a Type I error. The less chance of being wrong you are willing to take, the greater the difference between means must be
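The meaning of α = .05 can be checked by simulation: when the null hypothesis is actually true, a test run at α = .05 should reject it in roughly 5% of experiments. This is a sketch under assumed conditions (both groups drawn from the same normal population; group sizes and the number of simulated experiments are arbitrary choices).

```python
import numpy as np
from scipy import stats

# Simulate many t tests when the null hypothesis is TRUE: both groups come
# from the same population, so every rejection is a Type I error.
rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 2000
rejections = 0
for _ in range(n_experiments):
    a = rng.normal(50, 10, size=30)
    b = rng.normal(50, 10, size=30)  # same population: null is true
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1
print(rejections / n_experiments)  # close to 0.05, as alpha predicts
```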
Two-Tailed and One-Tailed Tests. These refer to the extreme ends of the bell-shaped curve that illustrates a normal distribution. A two-tailed test allows for the possibility that a difference may occur in either direction; a one-tailed test assumes that a difference can occur in only one direction. Tests of significance are almost always two-tailed
Degrees of Freedom: dependent upon the number of participants and the number of groups. Each test of significance has its own formula for determining degrees of freedom. For the Pearson r, the formula is df = N − 2
Types of Tests of Significance (choose the correct type). Parametric tests are used with ratio and interval data; they are more powerful, more often used, and preferred, but are based on four major assumptions (p. 348). Nonparametric tests are used when the data are nominal or ordinal, when parametric assumptions are violated, or when the nature of the distribution is unknown
The t test: used to determine whether two means are significantly different at a selected probability level. There are two different types of t tests: the t test for independent samples and the t test for nonindependent samples
Independent samples are two samples that are randomly formed without any type of matching. The t test for independent samples is a parametric test of significance used to determine whether, at a selected probability level, a significant difference exists between the means of two independent samples
The t test for nonindependent samples is used to determine whether, at a selected probability level, a significant difference exists between the means of two matched, nonindependent samples. The formulas (the standard forms; the slide's originals did not survive extraction) are: for independent samples, t = (x̄₁ − x̄₂) / √[s²pooled (1/n₁ + 1/n₂)], and for nonindependent samples, t = D̄ / (s_D / √n), where D̄ is the mean of the paired differences
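Both kinds of t test can be run with scipy, which implements the same formulas; all the scores below are hypothetical, made up only to show the two calls.

```python
from scipy import stats

# Hypothetical scores for two independently formed groups
group1 = [85, 78, 92, 70, 88, 75]
group2 = [80, 72, 78, 69, 74, 71]

# t test for independent samples
t_ind, p_ind = stats.ttest_ind(group1, group2)

# t test for nonindependent (matched) samples:
# the same six people measured before and after, so the pairs are matched
pre  = [60, 65, 70, 58, 72, 66]
post = [68, 70, 74, 63, 80, 69]
t_rel, p_rel = stats.ttest_rel(pre, post)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"matched:     t = {t_rel:.2f}, p = {p_rel:.3f}")
```

Compare each p value against the selected probability level (e.g. α = .05) to decide whether to reject the null hypothesis.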
You can also use SPSS 12.0 to calculate the t test for independent and nonindependent samples. Simple Analysis of Variance (ANOVA): a parametric test of significance used to determine whether a significant difference exists between two or more means at a selected probability level. For a study involving three groups, ANOVA is the appropriate analysis technique
An F ratio is computed from two variance estimates: the between-groups variance (mean square between) and the within-groups variance (mean square within), F = MSB / MSW
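A one-way ANOVA of the kind described above can be run with scipy's `f_oneway`; the three groups of scores below are hypothetical.

```python
from scipy import stats

# Hypothetical scores for three groups: with three groups, ANOVA is the
# appropriate technique rather than multiple t tests
method_a = [82, 85, 88, 79, 90]
method_b = [74, 78, 72, 77, 75]
method_c = [68, 70, 65, 72, 69]

# f_oneway computes the F ratio (MSB / MSW) and its p value
f_ratio, p_value = stats.f_oneway(method_a, method_b, method_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```

A significant F ratio says only that at least one mean differs from another; finding out which pairs differ requires a multiple-comparison procedure such as the Scheffé test.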
When the F ratio is significant and more than two means are involved, procedures called multiple comparisons are used to determine which means are significantly different from which other means. The Scheffé test is appropriate for making any and all possible comparisons involving a set of means; it involves calculation of an F ratio for each mean comparison of interest
The Scheffé formula for a pairwise comparison (the standard form; the slide's original did not survive extraction) is F = (x̄₁ − x̄₂)² / [MSW (1/n₁ + 1/n₂)], which is significant when it exceeds (K − 1) times the critical F for the overall ANOVA. We can also use SPSS 12.0 to run multiple comparison tests to determine which means are significantly different from other means
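The Scheffé test can be sketched directly from that formula; the three hypothetical groups below reuse the kind of data a significant ANOVA would produce, and scipy is used only for the critical F value.

```python
from itertools import combinations
from scipy import stats

# Hypothetical data: three groups whose overall ANOVA F was significant
groups = {
    "A": [82, 85, 88, 79, 90],
    "B": [74, 78, 72, 77, 75],
    "C": [68, 70, 65, 72, 69],
}

k = len(groups)                            # number of groups
N = sum(len(g) for g in groups.values())   # total participants

# Within-groups mean square (MSW) from the one-way ANOVA
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                for g in groups.values())
ms_within = ss_within / (N - k)

# Scheffé: an F ratio for each pairwise comparison, tested against
# (k - 1) times the critical F for (k - 1, N - k) degrees of freedom
alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, k - 1, N - k)
for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)
    f_s = (m1 - m2) ** 2 / (ms_within * (1 / len(g1) + 1 / len(g2)))
    sig = f_s > (k - 1) * f_crit
    print(f"{name1} vs {name2}: F = {f_s:.2f}, significant: {sig}")
```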
Factorial Analysis of Variance is a statistical technique that allows the researcher to determine the effect of the independent variable and the control variable on the dependent variable, both separately and in combination. It is the appropriate statistical analysis if a study is based on a factorial design and investigates two or more independent variables and the interactions between them; it yields a separate F ratio for each
Analysis of Covariance (ANCOVA): a statistical method of equating groups on one or more variables and of increasing the power of a statistical test; it adjusts scores on a dependent variable for initial differences on other variables
Multiple regression equation: a prediction equation using two or more variables that individually predict a criterion in order to make a more accurate prediction
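A multiple regression equation of this kind can be fitted by least squares with numpy; the predictors (GPA and study hours) and the criterion (exam score) below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical data: two predictors (GPA, study hours) and one criterion (score)
gpa   = np.array([2.8, 3.2, 3.5, 2.5, 3.9, 3.0])
hours = np.array([10.0, 12.0, 15.0, 8.0, 18.0, 11.0])
score = np.array([70.0, 78.0, 85.0, 62.0, 93.0, 74.0])

# Design matrix with an intercept column; least squares finds the weights
X = np.column_stack([np.ones_like(gpa), gpa, hours])
coefs, *_ = np.linalg.lstsq(X, score, rcond=None)
b0, b1, b2 = coefs

# Prediction equation: score' = b0 + b1*GPA + b2*hours
predicted = b0 + b1 * 3.4 + b2 * 14.0
print(f"predicted score: {predicted:.1f}")
```

Using both predictors together yields a more accurate prediction than either one alone, which is the point of the multiple regression equation.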
Chi Square (χ²): a nonparametric test of significance appropriate when the data are in the form of frequency counts; it compares proportions actually observed in a study with expected proportions to see if they are significantly different. There are two kinds of chi square:
One-dimensional chi square can be used to compare frequencies in different categories. Two-dimensional chi square is used when frequencies are categorized along more than one dimension. The formula (the standard form; the slide's original did not survive extraction) is χ² = Σ (O − E)² / E, where O is the observed and E the expected frequency in each category
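Both kinds of chi square are available in scipy; the frequency counts below (candidate preferences, split by gender for the two-dimensional case) are hypothetical.

```python
from scipy import stats

# One-dimensional chi square: do 60 voters split evenly across 3 candidates?
observed = [28, 20, 12]
chi2_1d, p_1d = stats.chisquare(observed)  # expected defaults to equal proportions

# Two-dimensional chi square: is candidate preference independent of gender?
# rows = gender, columns = candidate (hypothetical frequency counts)
table = [[15, 10, 5],
         [13, 10, 7]]
chi2_2d, p_2d, dof, expected = stats.chi2_contingency(table)

print(f"one-dimensional: chi2 = {chi2_1d:.2f}, p = {p_1d:.3f}")
print(f"two-dimensional: chi2 = {chi2_2d:.2f}, p = {p_2d:.3f}, df = {dof}")
```

`chi2_contingency` derives the expected frequencies from the table's row and column totals, so only the observed counts need to be supplied.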
Of course, you can also use SPSS 12.0 to calculate chi square.

Emil Pulido on Quantitative Research: Inferential Statistics
