
- 1. Descriptive & Inferential Stats, by Serena Carpenter, Michigan State University
- 2. Parameter | Stats
  - A parameter describes a census; a statistic describes a sample.
  - Nonparametric (categorical) statistics: nominal and ordinal data.
  - Parametric (continuous) statistics: interval and ratio data.
- 3. Descriptive | Inferential
  - Descriptive statistics summarize the sample data; they appear at the beginning of the Results section.
  - Inferential statistics generalize from the sample data to a population, and help researchers draw inferences about the effects of sampling errors on the results.
  - Significance tests help researchers decide whether the differences in descriptive statistics are reliable.
- 4. Looking at my data
  - Let's say that you have a set of data: 5, 6, 4, 7, 3, 3, 7, 2, 1, 5, 3, 6
  - How could you rearrange the data to get a better idea of what the scores are in your data set? Sort them: 1, 2, 3, 3, 3, 4, 5, 5, 6, 6, 7, 7
  - How could you make it even clearer?
- 5. Frequency distribution
      X     f
      7.00  2
      6.00  2
      5.00  2
      4.00  1
      3.00  3
      2.00  1
      1.00  1
  Σf = n = 12
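As a sketch, the frequency distribution above can be built directly from the slide-4 data set with Python's standard library (the variable names are my own):

```python
from collections import Counter

scores = [5, 6, 4, 7, 3, 3, 7, 2, 1, 5, 3, 6]   # the slide-4 data set
freq = Counter(scores)                           # maps score -> frequency
n = sum(freq.values())

for score in sorted(freq, reverse=True):         # list scores from 7.00 down to 1.00
    print(f"{score:.2f}  {freq[score]}")
print(f"n = {n}")
```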
- 6. Graphical Displays of Data
  - Methods of graphing distributions:
  - Histograms: a frequency distribution where frequencies are represented by bars.
  - Stem-and-Leaf Displays: an alternate way to represent a grouped frequency distribution.
- 7. Grouped Frequency Histogram (scores, bin width w = 3; Mean = 15.7, Std. Dev = 3.01, N = 47)
- 8. Shapes/Types of Distributions: Normal Distribution (Mean = 15.0, Std. Dev = 2.00, N = 26)
- 9. Shapes/Types of Distributions: Positively Skewed (Mean = 13.1, Std. Dev = 2.18, N = 29)
- 10. Shapes/Types of Distributions: Negatively Skewed (Mean = 16.9, Std. Dev = 2.18, N = 29)
- 11. Bimodal distribution
  - A distribution that peaks in two different places.
  - This happens when two scores occur with equal frequency, each more frequently than any other score.
- 12. Measures of Central Tendency
  - Measures of central tendency give information about the most likely score in a distribution.
  - We have three ways to describe central tendency: mean, median, and mode.
- 13. Measures of Central Tendency
  - Mean (M or m): requires interval- or ratio-level data.
  - Median: the middle point of the distribution; insensitive to extreme scores. Use it when the mean is inappropriate.
  - Mode: the most frequently occurring score; appropriate for nominal-scale data.
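For illustration, all three measures can be computed on the sorted slide-4 data with Python's `statistics` module:

```python
import statistics

scores = [1, 2, 3, 3, 3, 4, 5, 5, 6, 6, 7, 7]   # sorted slide-4 data

mean_score = statistics.mean(scores)      # needs interval/ratio data
median_score = statistics.median(scores)  # middle point; robust to extremes
mode_score = statistics.mode(scores)      # most frequently occurring score
```

Note that the three need not agree: here the mode (3) sits below both the median (4.5) and the mean (about 4.33).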
- 14. Variability
  - How much scores vary from each other (spread, dispersion).
  - Range: e.g., for 2, 3, 7, 7, 8, 8, 8, 12, 20 the range is 20 - 2 = 18.
  - Standard deviation.
- 15. Standard deviation
  - S, S.D., or sd
  - How much scores vary from the mean score.
  - In a normal distribution, about 2/3 of the cases lie within one sd unit of the mean.
- 16. S.D.
  - 95% rule (precisely 1.96 sd units from the mean) and 99.7% rule.
  - If M = 35.00 and S = 6.00, then:
  - 68% of cases lie between 29.00 and 41.00
  - 95% of cases lie between 23.00 and 47.00
  - 99.7% of cases lie between 17.00 and 53.00
- 17. z-Scores (standard scores)
  - Where an individual stands within a group.
  - How many sd units one person's score is from the mean, and whether his or her score is above or below the mean.
  - Can only be used when the population mean (μ) and the population standard deviation (σ) are known.
  - z-scores are associated with probabilities under the normal curve.
  - Examples: 0.00, -2.00; nearly all z-scores fall between -3.00 and 3.00.
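A minimal sketch of the z-score idea, treating the slide-16 numbers (M = 35.00, S = 6.00) as if they were the known population values:

```python
def z_score(x, mu, sigma):
    """How many sd units x lies from the mean (negative = below the mean)."""
    return (x - mu) / sigma

MU, SIGMA = 35.0, 6.0   # slide-16 M and S, assumed here to be population values

# z = +/-1, +/-1.96, +/-3 bracket about 68%, 95%, and 99.7% of a normal curve,
# which reproduces the three intervals on slide 16.
for z in (1.0, 1.96, 3.0):
    print(f"{MU - z * SIGMA:.2f} to {MU + z * SIGMA:.2f}")
```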
- 18. Transformed Standard Scores
  - z-scores are often transformed to another scale that does not have 0 as its average.
  - Many z-transformations exist.
- 19. Reliabilities
  - Cronbach's alpha
  - Cohen's kappa
  - Scott's pi
  - α = .80
- 20. Concept of Correlation
  - The extent to which two scores are related.
  - Relationship types:
  - Direct or positive: those who score high on one variable also score high on the other.
  - Inverse or negative: those who score high on one variable score low on the other.
      Subject    Depression  Cheerfulness
      Edward     80          50
      John       90          40
      Barbara    100         30
      Cynthia    110         20
      William    120         10
- 21. Causal relationship
  - One variable causes a change in (affects) another variable.
  - Established through a controlled experiment in which one or more treatments are administered.
- 22. Linear regression - Scatterplot
  - A graphic representation showing the relationship between two variables.
- 23. Pearson r
  - The Pearson product-moment correlation coefficient describes the linear relationship between two scores (interval/ratio, e.g. Likert-type).
  - Ranges from -1.00 to 1.00: -1.00 is a perfect negative relationship, 1.00 a perfect positive one.
  - Use no fewer than 25 participants.
  - Rough interpretation bands (strong, moderate, weak):
      +.70 or higher   Very strong positive relationship
      +.40 to +.69     Strong positive relationship
      +.30 to +.39     Moderate positive relationship
      +.20 to +.29     Weak positive relationship
      -.19 to +.19     No or negligible relationship
      -.20 to -.29     Weak negative relationship
      -.30 to -.39     Moderate negative relationship
      -.40 to -.69     Strong negative relationship
      -.70 or lower    Very strong negative relationship
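As a sketch, Pearson r (and its square, the coefficient of determination) can be computed on the slide-20 depression/cheerfulness table with the textbook formula:

```python
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

depression   = [80, 90, 100, 110, 120]   # slide-20 table
cheerfulness = [50, 40, 30, 20, 10]

r = pearson_r(depression, cheerfulness)  # perfect inverse relationship: -1.0
r_squared = r ** 2                       # proportion of shared variance
```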
- 24. Coefficient of Determination
  - To interpret Pearson r, square it (r-squared): it tells to what extent the variance of one variable explains variance in another variable.
  - If Pearson r = -.77: -.77 × -.77 × 100 ≈ 59%
- 25. Spearman rho rank correlation
  - For ordinal (ranked) data.
  - Ranges from -1.00 to 1.00.
                    Alice  Jordan  Dexter  Betty  Ming
      Math class      1      2       3       4     5
      Philosophy      5      4       1       3     2
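A sketch of Spearman rho on the slide-25 ranks, using the usual rank-difference formula (valid when there are no tied ranks):

```python
def spearman_rho(rank_x, rank_y):
    n = len(rank_x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

math_rank = [1, 2, 3, 4, 5]   # Alice, Jordan, Dexter, Betty, Ming
phil_rank = [5, 4, 1, 3, 2]

rho = spearman_rho(math_rank, phil_rank)   # a fairly strong inverse relationship
```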
- 26. Normal distribution
  - These distributions are symmetrical and "bell-shaped".
  - Characterized by high frequencies toward the center of the distribution and low frequencies in the extreme score regions.
- 27. Data steps
  1. Decide what our null hypothesis is.
  2. Decide how much confidence we want.
  3. Set our alpha level (critical values: .05, .01, .001).
  4. Calculate our statistic.
  5. Plot the statistic on the sampling distribution.
  6. Make a decision based on our decision rule.
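The steps above can be sketched end to end. All numbers here (sample mean, n, a known σ) are hypothetical, chosen to match the slide-29/30 null of μ = 75.00; with σ assumed known, a simple z test applies:

```python
from math import sqrt

# Steps 1-3: null hypothesis mu = 75.00, two-tailed alpha = .05 (critical z = 1.96).
mu0 = 75.0
critical_z = 1.96

# Hypothetical sample, with sigma assumed known.
sigma, n, sample_mean = 6.0, 36, 78.0

# Step 4: calculate the statistic.
z = (sample_mean - mu0) / (sigma / sqrt(n))

# Steps 5-6: compare against the critical value and decide.
reject_null = abs(z) > critical_z
```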
- 28. Two types of hypotheses
  - Null Hypothesis (Statistical Hypothesis), H0: the hypothesis that goes with the sampling distribution of NO DIFFERENCES (no significant effect).
  - Significance tests give the probability of obtaining the observed result if the null is true.
  - Research | Scientific | Alternate Hypothesis, H1: the hypothesis that goes with the sampling distribution of DIFFERENCES (a significant effect).
- 29. How do we write these hypotheses?
  - Null Hypothesis H0: μ = 75.00; Alternate Hypothesis H1: μ ≠ 75.00
  - Null Hypothesis H0: μ1 - μ2 = 0; Alternate Hypothesis H1: μ1 - μ2 ≠ 0
- 30. What do these hypotheses look like conceptually?
  - The null (H0) distribution is the one against which we will test our sample; the H1 regions lie to either side of it.
  - We will specify the mean of this population: 75.00.
- 31. Alpha and significance level (probability)
  - Significance level (p), e.g. p < .05: statistically significant.
  - p is the exact probability that the statistic we calculated on our observed sample could occur in the null distribution by chance alone; in practice we need statistical software to calculate it.
  - Alpha (α): the area under the curve left outside the confidence interval.
  - We decide on this level before we conduct the test: .05, .01, or .001.
- 32. Probability
  - Two-tailed probability test: the odds of drawing an individual at either tail of the normal distribution. More flexible; almost always select a two-tailed test.
  - One-tailed probability test: easier to reject the null hypothesis, but in one and only one direction.
- 33. t test
  - Compares the means of two samples for statistical significance.
  - One nominal variable with two categories, and their scores on one dependent interval/ratio variable.
  - Example report: t(4.62) = 2.17, p > .05
  - Degrees of freedom: df = n1 + n2 - 2
  - If n = 30 for one group and n = 32 for another group, what is the df for the t test?
  - (t = 2.12, df = 26, p < .05, two-tailed test)
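A sketch of the pooled two-sample t statistic and its degrees of freedom; the two tiny groups are made up, and the final comment works through the slide's n = 30 / n = 32 question:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(g1, g2):
    n1, n2 = len(g1), len(g2)
    df = n1 + n2 - 2                       # the slide's df formula
    sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / df
    t = (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, df

t, df = pooled_t([1, 2, 3], [2, 3, 4])     # made-up groups; df = 3 + 3 - 2 = 4

# For the slide's question: groups of n = 30 and n = 32 give df = 30 + 32 - 2 = 60.
```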
- 34. One-way (single factor) ANOVA
  - Tests differences among two or more means: a nominal variable (IV) and a ratio/interval variable (DV).
  - Example report: the differences among the means are statistically significant at the .01 level (F = 58.769, df = 2, 36).
  - Follow-up tests for statistically significant differences among pairs of means:
  - Tukey's Honestly Significant Difference (HSD) test: requires the same number of subjects per category.
  - Scheffé's test: more conservative (less likely to lead to rejection of the null hypothesis); categories do not have to have an equal number of subjects.
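A minimal sketch of the one-way ANOVA F ratio (between-groups variance over within-groups variance); the three groups are invented for illustration:

```python
def one_way_f(groups):
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = len(groups) - 1, len(scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f, df_b, df_w = one_way_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])   # invented data
```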
- 35. Two-way ANOVA
  - Subjects are classified in two ways.
  - Two main effects and one interaction.
                       Conventional  New        Row means
      HS diploma       m = $8.88     m = $8.75  m = $8.82
      No HS diploma    m = $4.56     m = $8.80  m = $6.68
      Column means     m = $6.72     m = $8.78
- 36. Chi-Square
  - For nominal-level data; there should be no fewer than 5 cases in every cell.
  - Example report: X2 (df = 4, n = 100) = 22.36, p < .001
  - One-way chi-square:
      Candidate Jones   Candidate Lee
      n = 110 (55.0%)   n = 90 (45.0%)
  - Two-way chi-square:
                 Candidate Jones   Candidate Lee
      Males      n = 80            n = 120
      Females    n = 120           n = 80
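The two-way table above can be tested by hand: compute expected counts from row and column totals, then X², and from it Cramer's V (the statistic on the next slide). A sketch:

```python
from math import sqrt

observed = [[80, 120],   # Males:   Jones, Lee  (slide-36 two-way table)
            [120, 80]]   # Females: Jones, Lee

n = sum(sum(row) for row in observed)
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n   # all 100 here, so >= 5
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
cramers_v = sqrt(chi2 / (n * (min(len(observed), len(observed[0])) - 1)))
```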
- 37. Cramer's Phi or Cramer's V (Φ)
  - Tests whether there is a statistically significant relationship between two variables.
  - 0.00 = no relationship; 1.00 = perfect relationship.
  - Interpretation:
      .25 or higher  Very strong relationship
      .15 to .25     Strong relationship
      .11 to .15     Moderate relationship
      .06 to .10     Weak relationship
      .01 to .05     No or negligible relationship
- 38. Results
  - Hypothesis 1 predicted that reproach types would significantly differ from each other in their degree of perceived threat. To test this hypothesis, mean levels of perceived face threats were compared across groups representing the four reproach categories. ANOVA indicated support for the hypothesis, F(3, 87) = 53.79, p < .001, η2 = .65.
- 39. Agenda
  - Intro to SPSS: SPSS lecture and exercises, held in 245.
  - Following week: no lecture.
  - April 25th: present for 5-10 minutes on your proposal; feedback from the group.
  - May 1st: due by 2:45pm via email -
