3.
Introduction
A paired t-test is used to compare two population means where you have two samples in which observations in one sample can be paired with observations in the other sample.
For example: a diagnostic test was taken before studying a particular module and then again after completing the module. We want to find out if, in general, our teaching leads to improvements in students' knowledge/skills.
4.
First, we see the descriptive statistics for both variables. The post-test mean scores are higher.
5.
Next, we see the correlation between the two variables. There is a strong positive correlation: people who did well on the pre-test also did well on the post-test.
6.
Finally, we see the t-value, degrees of freedom, and significance.
Our significance value is .053.
If the significance value is less than .05, there is a significant difference.
If the significance value is greater than .05, there is no significant difference.
Here, the significance value approaches significance, but the difference is not statistically significant. There is no significant difference between pre- and post-test scores. Our test preparation course did not help!
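The pre-/post-test comparison above can be sketched with `scipy.stats.ttest_rel`. The scores below are hypothetical — the deck's actual data are not shown:

```python
from scipy import stats

# Hypothetical pre- and post-test scores for the same four students
# (the deck's real data are not reproduced here).
pre = [50, 60, 55, 65]
post = [56, 64, 60, 68]

# Paired t-test: tests whether the mean of the paired differences is zero.
t, p = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3f}")

if p < 0.05:
    print("Significant difference between pre- and post-test scores.")
else:
    print("No significant difference.")
```

Note that the test is run on the paired differences, which is why each student must appear in both samples.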
8.
Outline
1. Introduction
2. Hypothesis for the independent t-test
3. What do you need to run an independent t-test?
4. Formula
5. Example (Calculating + Reporting)
9.
Introduction
The independent t-test, also called the two-sample t-test or Student's t-test, is an inferential statistical test that determines whether there is a statistically significant difference between the means of two unrelated groups.
10.
Hypothesis for the independent t-test
The null hypothesis for the independent t-test is that the population means of the two unrelated groups are equal:
H0: M1 = M2 (M = mean)
In most cases, we are looking to see if we can reject the null hypothesis and accept the alternative hypothesis, which is that the population means are not equal:
HA: M1 ≠ M2 (M = mean)
To do this we need to set a significance level (alpha) that allows us to either reject or fail to reject the null hypothesis. Most commonly, this value is set at 0.05.
11.
What do you need to run an independent t-test?
In order to run an independent t-test you need the following:
1. One independent variable (the treatments)
2. One dependent variable (the outcomes)
12.
Formula
t = (M_Exp − M_Con) / √(SD_Exp²/N_Exp + SD_Con²/N_Con)
Exp: experimental group
Con: control group
M: mean (the average score of the group)
SD: standard deviation
N: number of scores in each group
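One common (unpooled) form of this t statistic can be computed directly from the group summaries. The numbers below are hypothetical, for illustration only:

```python
import math

# Hypothetical summary statistics (not from the deck)
m_exp, sd_exp, n_exp = 55.0, 10.0, 20   # experimental group
m_con, sd_con, n_con = 50.0, 10.0, 20   # control group

# Unpooled independent t statistic:
# t = (M_Exp - M_Con) / sqrt(SD_Exp^2/N_Exp + SD_Con^2/N_Con)
t = (m_exp - m_con) / math.sqrt(sd_exp**2 / n_exp + sd_con**2 / n_con)
print(f"t = {t:.3f}")
```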
17.
Reporting the Result of an Independent T-Test
When reporting the result of an independent t-test, you need to include the t-statistic value, the degrees of freedom (df), and the significance value of the test (p-value). The format of the test result is: t(df) = t-statistic, p = significance value.
18.
Example result (APA Style)
An independent-samples t-test is presented the same as the one-sample t-test:
t(75) = 2.11, p = .02 (one-tailed), d = .48
Here 75 is the degrees of freedom, 2.11 is the value of the statistic, .02 is the significance of the statistic, "(one-tailed)" is included if the test is one-tailed, and d = .48 is the effect size, included if available.
Example: Survey respondents who were employed by the federal, state, or local government had significantly higher socioeconomic indices (M = 55.42, SD = 19.25) than survey respondents who were employed by a private employer (M = 47.54, SD = 18.94), t(255) = 2.363, p = .01 (one-tailed).
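A minimal sketch of computing and formatting such a result with `scipy.stats.ttest_ind`, on hypothetical data (with equal variances assumed, df = n1 + n2 − 2):

```python
from scipy import stats

# Hypothetical scores for two unrelated groups
group1 = [1, 2, 3, 4, 5]
group2 = [3, 4, 5, 6, 7]

# Independent-samples t-test (pooled variance is the scipy default)
t, p = stats.ttest_ind(group1, group2)
df = len(group1) + len(group2) - 2

# Report in the t(df) = t-statistic, p = value format
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```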
19.
Analysis of Variance (ANOVA)
PRESENTER: MINH SANG
20.
Introduction
We already learned about the chi-square test for independence, which is useful for data that is measured at the nominal or ordinal level of analysis.
If we have data measured at the interval level, we can compare two or more population groups in terms of their population means using a technique called analysis of variance, or ANOVA.
21.
Completely randomized design
Population 1, Population 2, …, Population k
Mean = μ1, Mean = μ2, …, Mean = μk
Variance = σ1², Variance = σ2², …, Variance = σk²
We want to know something about how the populations compare. Do they have the same mean? We can collect random samples from each population, which gives us the following data.
22.
Completely randomized design
Mean = M1, Mean = M2, …, Mean = Mk
Variance = s1², Variance = s2², …, Variance = sk²
N1 cases, N2 cases, …, Nk cases
Suppose we want to compare 3 college majors in a business school by the average annual income people make 2 years after graduation. We collect the following data (in $1000s) based on random surveys.
24.
Completely randomized design
Can the dean conclude that there are differences among the majors' incomes? In this problem we must take into account:
1) The variance between samples, or the actual differences by major. This is called the sum of squares for treatment (SST).
25.
Completely randomized design
2) The variance within samples, or the variance of incomes within a single major. This is called the sum of squares for error (SSE).
Recall that when we sample, there will always be a chance of getting something different than the population. We account for this through #2, the SSE.
26.
F-Statistic
For this test, we will calculate an F statistic, which is used to compare variances.
F = [SST/(k−1)] / [SSE/(n−k)]
SST = sum of squares for treatment
SSE = sum of squares for error
k = the number of populations (3)
n = total sample size (15)
27.
F-statistic
Intuitively, the F statistic is:
F = explained variance / unexplained variance
Explained variance is the difference between majors.
Unexplained variance is the difference based on random sampling within each group.
28.
Calculating SST
SST = Σ ni(Mi − x̄)²
x̄ = grand mean = ΣMi/k (with equal sample sizes), or the sum of all values for all groups divided by the total sample size
Mi = mean for each sample
ni = the number of cases in sample i
30.
Calculating SST
Note that when M1 = M2 = M3, SST = 0, which would support the null hypothesis.
In this example the samples are of equal size, but we can also run this analysis with samples of varying size.
31.
Calculating SSE
SSE = ΣΣ(Xit − Mi)²
In other words, it is just the sum of squared deviations within each sample, added together across samples.
SSE = Σ(X1t − M1)² + Σ(X2t − M2)² + Σ(X3t − M3)²
SSE = [(27−29)² + (22−29)² + … + (29−29)²] + [(23−33.5)² + (36−33.5)² + …] + [(48−37)² + (35−37)² + … + (29−37)²]
SSE = 819.5
32.
Calculating F for our example
F = (193/2) / (819.5/12) ≈ 1.41
Our calculated F is compared to the critical value using the F-distribution with k−1 and n−k degrees of freedom:
k−1 = 2 (numerator df)
n−k = 12 (denominator df)
33.
The Results
For 95% confidence (α = .05), our critical value is F(2, 12) = 3.89.
In this case, 1.41 < 3.89, so we fail to reject the null hypothesis: there is no significant difference among the majors' incomes.
The dean is puzzled by these results because, just by eyeballing the data, it looks like finance majors make more money.
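The one-way ANOVA recipe above (SST, SSE, F, critical F) can be sketched end-to-end. The three groups below are small hypothetical samples, not the deck's income data:

```python
from scipy import stats

# Hypothetical data: three groups (e.g. majors), equal sample sizes
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
k = len(groups)                            # number of populations
n = sum(len(g) for g in groups)            # total sample size

means = [sum(g) / len(g) for g in groups]
grand = sum(x for g in groups for x in g) / n

# SST: between-group (treatment) sum of squares
sst = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
# SSE: within-group (error) sum of squares
sse = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)

f = (sst / (k - 1)) / (sse / (n - k))
crit = stats.f.ppf(0.95, k - 1, n - k)     # critical F at alpha = .05
print(f"F = {f:.2f}, critical F({k - 1}, {n - k}) = {crit:.2f}")
print("reject H0" if f > crit else "fail to reject H0")
```

With these numbers F = 3.00 falls below the critical value, so, as in the deck's example, we fail to reject the null hypothesis.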
34.
Two-way ANOVA
Now SS(total) = SST + SSB + SSE, where SSB = the variability among blocks, and a block is a matched group of observations from each of the populations.
We can calculate a two-way ANOVA to test our null hypothesis.
35.
Two-way ANOVA
Two-way ANOVA has many of the same ideas as one-way ANOVA, with the main difference being the inclusion of another factor (or explanatory variable) in our model.
In the two-way ANOVA model, there are two factors, each with its own number of levels. When we are interested in the effects of two factors, it is much more advantageous to perform a two-way analysis of variance, as opposed to two separate one-way ANOVAs.
36.
Two-way ANOVA
There are three main advantages of two-way ANOVA:
- It is more efficient to study two factors simultaneously rather than separately.
- We can reduce the residual variation in a model by including a second factor thought to influence the response.
- We can investigate interactions between factors.
37.
Two-way ANOVA
The interaction between two variables is usually the most interesting feature of a two-way analysis of variance. When two factors interact, the effect on the response variable of one explanatory variable depends on the specific value or level of the other explanatory variable.
For example, the statement "being overweight caused greater increases in blood pressure for men than for women" describes an interaction: the effect of weight (factor #1, categorical: overweight or not overweight) on blood pressure (the response) depends on gender (factor #2, categorical: male or female).
38.
Two-way ANOVA
The term main effect is used to describe the overall effect of a single explanatory variable. For our previous example, there would be two main effects: the effect of weight on blood pressure and the effect of gender on blood pressure.
A main effect might not be very informative when an interaction effect exists. For example, it might not be sensible to report the effect of being overweight on blood pressure without also reporting that the effect of being overweight differs for men and women.
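The two-way decomposition described above can be hand-rolled for a balanced design. The 2×2 layout below (two replicates per cell) is hypothetical, standing in for the weight × gender example; in practice a library such as statsmodels would do this:

```python
# Hypothetical balanced 2x2 data: rows = levels of factor A,
# columns = levels of factor B, each cell holds replicate observations.
data = [[[10, 12], [14, 16]],
        [[20, 22], [30, 32]]]

a_levels, b_levels = len(data), len(data[0])
r = len(data[0][0])                      # replicates per cell (balanced)

cell_means = [[sum(c) / r for c in row] for row in data]
grand = sum(m for row in cell_means for m in row) / (a_levels * b_levels)
a_means = [sum(row) / b_levels for row in cell_means]
b_means = [sum(cell_means[i][j] for i in range(a_levels)) / a_levels
           for j in range(b_levels)]

# Main-effect sums of squares for each factor
ss_a = b_levels * r * sum((m - grand) ** 2 for m in a_means)
ss_b = a_levels * r * sum((m - grand) ** 2 for m in b_means)
# Between-cell variation not explained by the two main effects = interaction
ss_cells = r * sum((cell_means[i][j] - grand) ** 2
                   for i in range(a_levels) for j in range(b_levels))
ss_ab = ss_cells - ss_a - ss_b
# Residual (error) variation within cells
sse = sum((x - cell_means[i][j]) ** 2
          for i in range(a_levels) for j in range(b_levels)
          for x in data[i][j])

df_ab = (a_levels - 1) * (b_levels - 1)
df_e = a_levels * b_levels * (r - 1)
f_ab = (ss_ab / df_ab) / (sse / df_e)    # F statistic for the interaction
print(f"SS_A = {ss_a}, SS_B = {ss_b}, SS_AB = {ss_ab}, SSE = {sse}")
print(f"F(interaction) = {f_ab:.2f}")
```

A nonzero SS_AB here reflects exactly the idea in the slides: the effect of factor A on the response differs across the levels of factor B.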