6
ONE-WAY BETWEEN-
SUBJECTS ANALYSIS OF
VARIANCE
6.1 Research Situations Where One-Way Between-Subjects
Analysis of Variance (ANOVA) Is Used
A one-way between-subjects (between-S) analysis of variance (ANOVA) is
used in research situations where the researcher wants to compare means on a
quantitative Y outcome variable across two or more groups. Group
membership is identified by each participant’s score on a categorical X
predictor variable. ANOVA is a generalization of the t test; a t test provides
information about the distance between the means on a quantitative outcome
variable for just two groups, whereas a one-way ANOVA compares means
on a quantitative variable across any number of groups. The categorical
predictor variable in an ANOVA may represent either naturally occurring
groups or groups formed by a researcher and then exposed to different
interventions. When the means of naturally occurring groups are compared
(e.g., a one-way ANOVA to compare mean scores on a self-report measure of
political conservatism across groups based on religious affiliation), the design
is nonexperimental. When the groups are formed by the researcher and the
researcher administers a different type or amount of treatment to each group
while controlling extraneous variables, the design is experimental.
The term between-S (like the term independent samples) tells us that each
participant is a member of one and only one group and that the members of
samples are not matched or paired. When the data for a study consist of
repeated measures or paired or matched samples, a repeated measures
ANOVA is required (see Chapter 22 for an introduction to the analysis of
repeated measures). If there is more than one categorical variable or factor
included in the study, factorial ANOVA is used (see Chapter 13). When there
is just a single factor, textbooks often name this single factor A, and if there
are additional factors, these are usually designated factors B, C, D, and so
forth. If scores on the dependent Y variable are in the form of rank or ordinal
data, or if the data seriously violate assumptions required for ANOVA, a
nonparametric alternative to ANOVA may be preferred.
In ANOVA, the categorical predictor variable is called a factor; the
groups are called the levels of this factor. In the hypothetical research
example introduced in Section 6.2, the factor is called “Types of Stress,” and
the levels of this factor are as follows: 1, no stress; 2, cognitive stress from a
mental arithmetic task; 3, stressful social role play; and 4, mock job
interview.
Comparisons among several group means could be made by calculating t
tests for each pairwise comparison among the means of these four treatment
groups. However, as described in Chapter 3, doing a large number of
significance tests leads to an inflated risk for Type I error. If a study includes
k groups, there are k(k – 1)/2 pairs of means; thus, for a set of four groups, the .
Assessment 4 ContextRecall that null hypothesis tests are of.docxfestockton
Assessment 4 Context
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test again, with the added capability of comparing the means among more than two group at a time. This is the same type of test of difference between group means. In variations on this model, the groups can actually be the same people under different conditions. The main idea is that several group mean values are being compared. The groups each have an average score or mean on some variable. The null hypothesis is that the difference between all the group means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups.
One might ask why we would not use multiple t tests in this situation. For instance, with three groups, why would I not compare groups one and two with a t test, then compare groups one and three, and then compare groups two and three?
The answer can be found in our basic probability review. We are concerned with the probability of a TYPE I error (rejecting a true null hypothesis). We generally set an alpha level of .05, which is the probability of making a TYPE I error. Now consider what happens when we do three t tests. There is .05 probability of making a TYPE I error on the first test, .05 probability of the same error on the second test, and .05 probability on the third test. What happens is that these errors are essentially additive, in that the chances of at least one TYPE I error among the three tests much greater than .05. It is like the increased probability of drawing an ace from a deck of cards when we can make multiple draws.
ANOVA allows us do an "overall" test of multiple groups to determine if there are any differences among groups within the set. Notice that ANOVA does not tell us which groups among the three groups are different from each other. The primary test ...
Assessment 4 ContextRecall that null hypothesis tests are of.docxgalerussel59292
Assessment 4 Context
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test again, with the added capability of comparing the means among more than two group at a time. This is the same type of test of difference between group means. In variations on this model, the groups can actually be the same people under different conditions. The main idea is that several group mean values are being compared. The groups each have an average score or mean on some variable. The null hypothesis is that the difference between all the group means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups.
One might ask why we would not use multiple t tests in this situation. For instance, with three groups, why would I not compare groups one and two with a t test, then compare groups one and three, and then compare groups two and three?
The answer can be found in our basic probability review. We are concerned with the probability of a TYPE I error (rejecting a true null hypothesis). We generally set an alpha level of .05, which is the probability of making a TYPE I error. Now consider what happens when we do three t tests. There is .05 probability of making a TYPE I error on the first test, .05 probability of the same error on the second test, and .05 probability on the third test. What happens is that these errors are essentially additive, in that the chances of at least one TYPE I error among the three tests much greater than .05. It is like the increased probability of drawing an ace from a deck of cards when we can make multiple draws.
ANOVA allows us do an "overall" test of multiple groups to determine if there are any differences among groups within the set. Notice that ANOVA does not tell us which groups among the three groups are different from each other. The primary test.
Assessment 3 ContextYou will review the theory, logic, and a.docxgalerussel59292
Assessment 3 Context
You will review the theory, logic, and application of t-tests. The t-test is a basic inferential statistic often reported in psychological research. You will discover that t-tests, as well as analysis of variance (ANOVA), compare group means on some quantitative outcome variable.
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test. This is the test of difference between group means. In variations on this model, the two groups can actually be the same people under different conditions, or one of the groups may be assigned a fixed theoretical value. The main idea is that two mean values are being compared. The two groups each have an average score or mean on some variable. The null hypothesis is that the difference between the means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups. Means, and difference between them.
Null Hypothesis Significance Test
The most common forms of the Null Hypothesis Significance Test (NHST) are three types of t tests, and the test of significance of a correlation. The NHST also extends to more complex tests, such as ANOVA, which will be discussed separately. Below, the null hypothesis and the alternative hypothesis are given for each of the following tests. It would be a valuable use of your time to commit the information below to memory. Once this is done, then when we refer to the tests later, you will have some structure to make sense of the more detailed explanations.
1. One-sample t test: The question in this test is whether a single sample group mean is significantly different from some stated or fixed theoretical value - the fixed value is called a parameter.
· Null Hypothesis: The difference between the sample group mean and the fixed value is zero in the population.
· Alternative hypothesis: T.
ANOVA Interpretation Set 1 Study this scenario and ANOVA.docxfestockton
ANOVA Interpretation Set 1
Study this scenario and ANOVA table, then answer the questions in the assignment instructions.
A researcher wants to compare the efficacy of three different techniques for memorizing
information. They are repetition, imagery, and mnemonics. The researcher randomly assigns
participants to one of the techniques. Each group is instructed in their assigned memory
technique and given a document to memorize within a set time period. Later, a test about the
document is given to all participants. The scores are collected and analyzed using a one-way
ANOVA. Here is the ANOVA table with the results:
Source SS df MS F p
Between 114.3111 2 57.1556 19.74 <.0001
Within 121.6 42 2.8952
Total 235.9111 44
9/10/2019 Print
https://content.ashford.edu/print/AUPSY325.16.1?sections=ch6,ch6sec1,ch6sec2,ch6sec3,ch6sec4,ch6sec5,ch6sec6,ch6sec7,ch6sec8,ch6summary,ch7,ch7sec1,ch7s… 1/76
Chapter Learning Objectives
After reading this chapter, you should be able to do the following:
1. Explain why it is a mistake to analyze the differences between more than two groups with
multiple t tests.
2. Relate sum of squares to other measures of data variability.
3. Compare and contrast t test with analysis of variance (ANOVA).
4. Demonstrate how to determine significant differences among groups in an ANOVA with more
than two groups.
5. Explain the use of eta squared in ANOVA.
6Analysis of Variance
Peter Ginter/Science Faction/Corbis
9/10/2019 Print
https://content.ashford.edu/print/AUPSY325.16.1?sections=ch6,ch6sec1,ch6sec2,ch6sec3,ch6sec4,ch6sec5,ch6sec6,ch6sec7,ch6sec8,ch6summary,ch7,ch7sec1,ch7s… 2/76
Introduction
From one point of view at least, R. A. Fisher was present at the creation of modern statistical analysis. During
the early part of the 20th century, Fisher worked at an agricultural research station in rural southern England.
Analyzing the effect of pesticides and fertilizers on crop yields, he was stymied by independent t tests that
allowed him to compare only two samples at a time. In the effort to accommodate more comparisons, Fisher
created analysis of variance (ANOVA).
Like William Gosset, Fisher felt that his work was important enough to publish, and like Gosset, he met
opposition. Fisher’s came in the form of a fellow statistician, Karl Pearson. Pearson founded the first department
of statistical analysis in the world at University College, London. He also began publication of what is—for
statisticians at least—perhaps the most influential journal in the field, Biometrika. The crux of the initial conflict
between Fisher and Pearson was the latter’s commitment to making one comparison at a time, with the largest
groups possible.
When Fisher submitted his work to Pearson’s journal, suggesting that samples can be small and many
comparisons can be made in the same analysis, Pearson rejected the manuscript. So began a long and
increasingly acrimonious relationship between two men w ...
Assessment 4 ContextRecall that null hypothesis tests are of.docxfestockton
Assessment 4 Context
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test again, with the added capability of comparing the means among more than two group at a time. This is the same type of test of difference between group means. In variations on this model, the groups can actually be the same people under different conditions. The main idea is that several group mean values are being compared. The groups each have an average score or mean on some variable. The null hypothesis is that the difference between all the group means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups.
One might ask why we would not use multiple t tests in this situation. For instance, with three groups, why would I not compare groups one and two with a t test, then compare groups one and three, and then compare groups two and three?
The answer can be found in our basic probability review. We are concerned with the probability of a TYPE I error (rejecting a true null hypothesis). We generally set an alpha level of .05, which is the probability of making a TYPE I error. Now consider what happens when we do three t tests. There is .05 probability of making a TYPE I error on the first test, .05 probability of the same error on the second test, and .05 probability on the third test. What happens is that these errors are essentially additive, in that the chances of at least one TYPE I error among the three tests much greater than .05. It is like the increased probability of drawing an ace from a deck of cards when we can make multiple draws.
ANOVA allows us do an "overall" test of multiple groups to determine if there are any differences among groups within the set. Notice that ANOVA does not tell us which groups among the three groups are different from each other. The primary test ...
Assessment 4 ContextRecall that null hypothesis tests are of.docxgalerussel59292
Assessment 4 Context
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test again, with the added capability of comparing the means among more than two group at a time. This is the same type of test of difference between group means. In variations on this model, the groups can actually be the same people under different conditions. The main idea is that several group mean values are being compared. The groups each have an average score or mean on some variable. The null hypothesis is that the difference between all the group means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups.
One might ask why we would not use multiple t tests in this situation. For instance, with three groups, why would I not compare groups one and two with a t test, then compare groups one and three, and then compare groups two and three?
The answer can be found in our basic probability review. We are concerned with the probability of a TYPE I error (rejecting a true null hypothesis). We generally set an alpha level of .05, which is the probability of making a TYPE I error. Now consider what happens when we do three t tests. There is .05 probability of making a TYPE I error on the first test, .05 probability of the same error on the second test, and .05 probability on the third test. What happens is that these errors are essentially additive, in that the chances of at least one TYPE I error among the three tests much greater than .05. It is like the increased probability of drawing an ace from a deck of cards when we can make multiple draws.
ANOVA allows us do an "overall" test of multiple groups to determine if there are any differences among groups within the set. Notice that ANOVA does not tell us which groups among the three groups are different from each other. The primary test.
Assessment 3 ContextYou will review the theory, logic, and a.docxgalerussel59292
Assessment 3 Context
You will review the theory, logic, and application of t-tests. The t-test is a basic inferential statistic often reported in psychological research. You will discover that t-tests, as well as analysis of variance (ANOVA), compare group means on some quantitative outcome variable.
Recall that null hypothesis tests are of two types: (1) differences between group means and (2) association between variables. In both cases there is a null hypothesis and an alternative hypothesis. In the group means test, the null hypothesis is that the two groups have equal means, and the alternative hypothesis is that the two groups do not have equal means. In the association between variables type of test, the null hypothesis is that the correlation coefficient between the two variables is zero, and the alternative hypothesis is that the correlation coefficient is not zero.
Notice in each case that the hypotheses are mutually exclusive. If the null is false, the alternative must be true. The purpose of null hypothesis statistical tests is generally to show that the null has a low probability of being true (the p value is less than .05) – low enough that the researcher can legitimately claim it is false. The reason this is done is to support the allegation that the alternative hypothesis is true.
In this context you will be studying the details of the first type of test. This is the test of difference between group means. In variations on this model, the two groups can actually be the same people under different conditions, or one of the groups may be assigned a fixed theoretical value. The main idea is that two mean values are being compared. The two groups each have an average score or mean on some variable. The null hypothesis is that the difference between the means is zero. The alternative hypothesis is that the difference between the means is not zero. Notice that if the null is false, the alternative must be true. It is first instructive to consider some of the details of groups. Means, and difference between them.
Null Hypothesis Significance Test
The most common forms of the Null Hypothesis Significance Test (NHST) are three types of t tests, and the test of significance of a correlation. The NHST also extends to more complex tests, such as ANOVA, which will be discussed separately. Below, the null hypothesis and the alternative hypothesis are given for each of the following tests. It would be a valuable use of your time to commit the information below to memory. Once this is done, then when we refer to the tests later, you will have some structure to make sense of the more detailed explanations.
1. One-sample t test: The question in this test is whether a single sample group mean is significantly different from some stated or fixed theoretical value - the fixed value is called a parameter.
· Null Hypothesis: The difference between the sample group mean and the fixed value is zero in the population.
· Alternative hypothesis: T.
ANOVA Interpretation Set 1 Study this scenario and ANOVA.docxfestockton
ANOVA Interpretation Set 1
Study this scenario and ANOVA table, then answer the questions in the assignment instructions.
A researcher wants to compare the efficacy of three different techniques for memorizing
information. They are repetition, imagery, and mnemonics. The researcher randomly assigns
participants to one of the techniques. Each group is instructed in their assigned memory
technique and given a document to memorize within a set time period. Later, a test about the
document is given to all participants. The scores are collected and analyzed using a one-way
ANOVA. Here is the ANOVA table with the results:
Source SS df MS F p
Between 114.3111 2 57.1556 19.74 <.0001
Within 121.6 42 2.8952
Total 235.9111 44
9/10/2019 Print
https://content.ashford.edu/print/AUPSY325.16.1?sections=ch6,ch6sec1,ch6sec2,ch6sec3,ch6sec4,ch6sec5,ch6sec6,ch6sec7,ch6sec8,ch6summary,ch7,ch7sec1,ch7s… 1/76
Chapter Learning Objectives
After reading this chapter, you should be able to do the following:
1. Explain why it is a mistake to analyze the differences between more than two groups with
multiple t tests.
2. Relate sum of squares to other measures of data variability.
3. Compare and contrast t test with analysis of variance (ANOVA).
4. Demonstrate how to determine significant differences among groups in an ANOVA with more
than two groups.
5. Explain the use of eta squared in ANOVA.
6Analysis of Variance
Peter Ginter/Science Faction/Corbis
9/10/2019 Print
https://content.ashford.edu/print/AUPSY325.16.1?sections=ch6,ch6sec1,ch6sec2,ch6sec3,ch6sec4,ch6sec5,ch6sec6,ch6sec7,ch6sec8,ch6summary,ch7,ch7sec1,ch7s… 2/76
Introduction
From one point of view at least, R. A. Fisher was present at the creation of modern statistical analysis. During
the early part of the 20th century, Fisher worked at an agricultural research station in rural southern England.
Analyzing the effect of pesticides and fertilizers on crop yields, he was stymied by independent t tests that
allowed him to compare only two samples at a time. In the effort to accommodate more comparisons, Fisher
created analysis of variance (ANOVA).
Like William Gosset, Fisher felt that his work was important enough to publish, and like Gosset, he met
opposition. Fisher’s came in the form of a fellow statistician, Karl Pearson. Pearson founded the first department
of statistical analysis in the world at University College, London. He also began publication of what is—for
statisticians at least—perhaps the most influential journal in the field, Biometrika. The crux of the initial conflict
between Fisher and Pearson was the latter’s commitment to making one comparison at a time, with the largest
groups possible.
When Fisher submitted his work to Pearson’s journal, suggesting that samples can be small and many
comparisons can be made in the same analysis, Pearson rejected the manuscript. So began a long and
increasingly acrimonious relationship between two men w ...
Commonly used Statistics in Medical Research HandoutPat Barlow
We found this handout to be incredibly useful as a guide and resource for non-statistical professionals to make quick decisions about statistical methods. The handout accompanies the Commonly Used Statistics in Medical Research Part I Presentation
Respond using one or more of the following approachesAsk a promickietanger
Respond using one or more of the following approaches:
Ask a probing question, substantiated with additional background information, and evidence.
Share an insight from having read your colleagues’ postings, synthesizing the information to provide new perspectives.
Group B
Inferential Statistics- Based on probability; used to draw conclusions or make generalizations about a given population or problem.
Example: “What can I infer about 5-minute Apgar scores of premature babies (the population) after calculating a mean Apgar score of 7.5 in a sample of 300 premature babies?” (McGonigle & Mastrain, p. 376, 2017).
Sampling Distributions- A sampling distribution is the frequency distribution of a statistic over many random samples from a single population.
Sampling Distribution of the Mean - as an example we randomly draw test scores from 25 students out of a total group of 5,000. We then calculate the mean, then draw a new group and repeat; each mean will serve as one datum, or data point.
Hypothesis Testing- is the use of statistics to determine the probability that a given hypothesis is true.
Null Hypothesis- the hypothesis that there is no significant difference between specified populations; or differences can be attributed to sampling or experimental error
Type 1 Error- This error occurs when we reject the null hypothesis when we should have retained it.
Type 2 Error- This error occurs when we fail to reject the null hypothesis. In other words, we believe that there isn’t a genuine effect when actually there is one.
Parametric statistics – A class of statistical tests that involve assumptions about the distribution of the variables and the estimation of a parameter.
Nonparametic statistics – A class of statistical tests that do not involve stringent assumptions about the distribution of variables. Between-subject design – A research design in which separate groups of people are compared (e.g. smokers and nonsmokers; intervention and control group subjects). Within-subject design – A research design in which a single group of participants is compared under different conditions or different points in time (e.g. before and after surgery).
Two classes of Statistical Tests:
Parametric tests - tests involving an estimation of a parameter, the use of interval or ratio-level data, and the assumption of normally distributed variables. Include t-tests and ANOVA.
Nonparametric tests - used when the data are nominal or ordinal or when a normal distribution cannot be assumed. Include the Mann-Whitney U test, Wilcoxon signed - rank test, and Kruskal - Wallis test.
Statistical Tests
T-test parametric procedure identifying mean differences for two independent groups, like experiment versus control or dependent groups, like pretreatment and post-treatment scores.
One - way ANOVA - tests the relationship between one categorical independent variable, such as different interventions, and a continuous dependent variable.
...
Calculating Analysis of Variance (ANOVA) and Post Hoc Analyses Follo.docxaman341480
Calculating Analysis of Variance (ANOVA) and Post Hoc Analyses Following ANOVA
Analysis of variance (ANOVA)
is a statistical procedure that compares data between two or more groups or conditions to investigate the presence of differences between those groups on some continuous dependent variable (see
Exercise 18
). In this exercise, we will focus on the
one-way ANOVA
, which involves testing one independent variable and one dependent variable (as opposed to other types of ANOVAs, such as factorial ANOVAs that incorporate multiple independent variables).
Why ANOVA and not a t-test? Remember that a t-test is formulated to compare two sets of data or two groups at one time (see Exercise 23 for guidance on selecting appropriate statistics). Thus, data generated from a clinical trial that involves four experimental groups (Treatment 1, Treatment 2, Treatments 1 and 2 combined, and a Control) would require 6 t-tests. Consequently, the chance of making a Type I error (alpha error) increases substantially (or is inflated) because so many computations are being performed. As a rough upper bound, the chance of making a Type I error is the number of comparisons multiplied by the alpha level. Thus, ANOVA is the recommended statistical technique for examining differences between more than two groups (Zar, 2010).
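The inflation argument above can be sketched numerically. This is an illustrative calculation only, using the four-group example and a conventional alpha of .05; it shows both the additive approximation stated in the text and the exact familywise rate for independent tests:

```python
# Sketch: how Type I error risk grows with multiple t-tests.
# Assumes k = 4 groups (as in the clinical-trial example) and alpha = 0.05.
from math import comb

k = 4
alpha = 0.05
n_comparisons = comb(k, 2)  # number of pairwise comparisons among k groups

# Additive approximation used in the text (an upper bound):
additive_risk = n_comparisons * alpha

# Exact familywise error rate, assuming the tests were independent:
familywise = 1 - (1 - alpha) ** n_comparisons

print(n_comparisons, round(additive_risk, 2), round(familywise, 3))  # 6 0.3 0.265
```

Either way, the risk of at least one false positive is far above the nominal 5%, which is the motivation for a single omnibus test.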
ANOVA is a procedure that culminates in a statistic called the F statistic. It is this value that is compared against an F distribution (see Appendix C) in order to determine whether the groups significantly differ from one another on the dependent variable. The formulas for ANOVA actually compute two estimates of variance: One estimate represents differences between the groups/conditions, and the other estimate represents differences among (within) the data.
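The two variance estimates can be made concrete with a small worked example. The three groups below are made-up illustrative data, not from the source; the code computes the between-groups and within-groups estimates directly from their definitions and forms their ratio:

```python
# Sketch of the two variance estimates behind the F statistic, using
# three small made-up groups (the data values are illustrative only).
from statistics import mean

groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [1.0, 2.0, 3.0]]

all_scores = [x for g in groups for x in g]
grand_mean = mean(all_scores)
k, n_total = len(groups), len(all_scores)

# Between-groups estimate: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)          # df_between = k - 1

# Within-groups estimate: how far scores sit from their own group mean
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
ms_within = ss_within / (n_total - k)      # df_within = N - k

F = ms_between / ms_within
print(F)  # 27.0
```

Library routines such as `scipy.stats.f_oneway` compute the same ratio; a large F means the group means spread out far more than the scores within each group do.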
Research Designs Appropriate for the One-Way ANOVA
Research designs that may utilize the one-way ANOVA include randomized experimental, quasi-experimental, and comparative designs (Gliner, Morgan, & Leech, 2009). The independent variable (the "grouping" variable for the ANOVA) may be active or attributional. An active independent variable refers to an intervention, treatment, or program. An attributional independent variable refers to a characteristic of the participant, such as gender, diagnosis, or ethnicity. The ANOVA can compare two groups or more. In the case of a two-group design, the researcher can select either an independent-samples t-test or a one-way ANOVA to answer the research question. The results will always yield the same conclusion, regardless of which test is computed; however, when examining differences between more than two groups, the one-way ANOVA is the preferred statistical test.
Example 1: A researcher conducts a randomized experimental study wherein she randomizes participants to receive a high-dosage weight loss pill, a low-dosage weight loss pill, or a placebo. She assesses the number of pounds lost from baseline to post-treatment for the thre ...
In Unit 9, we will study the theory and logic of analysis of variance (ANOVA). Recall that a t test requires a predictor variable that is dichotomous (it has only two levels or groups). The advantage of ANOVA over a t test
is that the categorical predictor variable can have two or more groups. Just like a t test, the outcome variable in
ANOVA is continuous and requires the calculation of group means.
Logic of a "One-Way" ANOVA
The ANOVA, or F test, relies on predictor variables referred to as factors. A factor is a categorical (nominal)
predictor variable. The term "one-way" is applied to an ANOVA with only one factor that is defined by two or
more mutually exclusive groups. Technically, an ANOVA can be calculated with only two groups, but the t test is
usually used in that case. The one-way ANOVA is typically calculated with three or more groups, which are
often referred to as levels of the factor.
If the ANOVA includes multiple factors, it is referred to as a factorial ANOVA. An ANOVA with two factors is
referred to as a "two-way" ANOVA; an ANOVA with three factors is referred to as a "three-way" ANOVA, and
so on. Factorial ANOVA is studied in advanced inferential statistics. In this course, we will focus on the theory
and logic of the one-way ANOVA.
ANOVA is one of the most popular statistics used in social sciences research. In non-experimental designs, the
one-way ANOVA compares group means between naturally existing groups, such as political affiliation
(Democrat, Independent, Republican). In experimental designs, the one-way ANOVA compares group means
for participants randomly assigned to different treatment conditions (for example, high caffeine dose; low
caffeine dose; control group).
Avoiding Inflated Type I Error
You may wonder why a one-way ANOVA is necessary. For example, if a factor has four groups (k = 4), why not
just run independent-samples t tests for all pairwise comparisons (for example, Group A versus Group B, Group
A versus Group C, Group B versus Group C, et cetera)? Warner (2013) points out that a factor with four groups
involves six pairwise comparisons. The issue is that conducting multiple pairwise comparisons with the same
data leads to an inflated risk of a Type I error (incorrectly rejecting a true null hypothesis, that is, a false positive).
The ANOVA protects the researcher from inflated Type I error by calculating a single omnibus test of the null
hypothesis that all k population means are equal.
Although the advantage of the omnibus test is that it helps protect researchers from inflated Type I error, the
limitation is that a significant omnibus test does not specify exactly which group means differ, just that there is a
difference "somewhere" among the group means. A researcher therefore relies on either (a) planned contrasts
of specific pairwise comparisons determined prior to running the F test or (b) follow-up tests of pairwise
comparisons, also referred to as post-hoc tests, to determine exac ...
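One widely used follow-up strategy, shown here purely as an illustration (the excerpt above does not name a specific procedure), is the Bonferroni correction: divide the overall alpha across the pairwise comparisons so the familywise error rate stays controlled.

```python
# Sketch of a Bonferroni-corrected post-hoc strategy for k = 4 groups.
# Group labels A-D are placeholders, not from the source.
from itertools import combinations
from math import comb

k = 4
alpha = 0.05
pairs = list(combinations(["A", "B", "C", "D"], 2))  # all pairwise comparisons
assert len(pairs) == comb(k, 2)                      # 6 comparisons for k = 4

# Each follow-up t test is judged against this stricter threshold:
alpha_per_test = alpha / len(pairs)
print(len(pairs), round(alpha_per_test, 5))  # 6 0.00833
```

Other post-hoc tests (e.g., Tukey's HSD) control the familywise rate less conservatively, but the logic is the same: the omnibus F says "a difference exists somewhere," and the corrected pairwise tests say where.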
Chapter 12
Choosing an Appropriate Statistical Test
Learning Objectives
After reading this chapter, you will be able to. . .
· understand the importance of using the proper statistical analysis.
· identify the type of analysis based on four critical questions.
· use the decision tree to identify the correct statistical test.
Here we are in the final chapter that will pull all prior chapters together. Chapters 1 to 3 discussed descriptive statistics while the latter chapters, 4 to 11, discussed inferential statistics. Each of the inferential chapters presented a statistical concept and then conducted the appropriate analysis to be able to test a hypothesis. The big question for students learning statistics is, "How do I know if I'm using the correct statistical test?" For experienced statisticians this question is easy to answer, as it is based on a few criteria. However, to a student just learning statistics or to the novice researcher, this question is a legitimate one. Many statistical reference texts include a guide that asks specific questions regarding the type of research question, design, number and scales of measurement of variables, and statistical assumptions of the data, and that allows you to use an elegant chart known as a decision tree. Based on the answers to these questions, the decision tree is used to help determine the type of analysis to be used for the research, thereby helping you answer this big question.
12.1 Considerations
To make the correct decisions based on the use of a decision tree, there are four specific questions that must be answered. These questions are as follows:
· What is your overarching research question?
· How many independent, dependent, and covariate variables are used in the study?
· What are the scales of measurement of each of your variables?
· Are there violations of statistical assumptions?
If you are able to answer these specific questions, then you will be able to determine the proper analysis for your study. These questions are critically important, and if they cannot be answered, then not enough thought has gone into the research. That said, let us discuss each of these questions so that they can be considered and answered in the use of the decision tree.
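The decision-tree idea can be sketched as a small function. This is a deliberately simplified illustration of the concept, not the book's actual chart; the category names and the mapping below are assumptions for the sketch:

```python
# A simplified sketch of a statistical-test decision tree.
# The branches and test names here are illustrative assumptions,
# not the book's actual decision chart.
def choose_test(outcome_scale: str, n_groups: int, independent: bool) -> str:
    # Question: scale of measurement of the dependent variable
    if outcome_scale not in ("interval", "ratio"):
        return "nonparametric test (e.g., Mann-Whitney U or Kruskal-Wallis)"
    # Question: number of groups and whether they are independent
    if n_groups == 2:
        return "independent-samples t-test" if independent else "paired t-test"
    return "one-way ANOVA" if independent else "repeated-measures ANOVA"

print(choose_test("ratio", 3, True))    # one-way ANOVA
print(choose_test("interval", 2, False))  # paired t-test
```

A real decision tree also asks about the research question and assumption violations; the point here is only that each answer prunes the set of appropriate tests.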
What Is Your Overarching Research Question?
Try It!
Derive your own research question for your Master's Thesis or Doctoral Dissertation. Have a colleague or professor read it. What are their thoughts or suggestions for improvements?
Answering this question seems simple enough, as all research has an overarching research question that drives the study, especially since this dictates the type of quantitative methodology. There are key words in every research question that help determine the appropriate type of analysis. For instance, if the research question states, "What are the effects of job satisfaction on employee productivity?" the keyword is "effects" as in the cause and effect of job satisfaction (the independent variable) on productivity (th ...
Inferential Analysis
Chapter 20
NUR 6812 Nursing Research
Florida National University
Introduction - Inferential Analysis
We will discuss analysis of variance and regression, which are technically part of the same family of statistics known as the general linear model but are used to achieve different analytical goals.
ANALYSIS OF VARIANCE
Analysis of variance (ANOVA) is used so often that Iversen and Norpoth (1987) said they once had a student who thought this was the name of an Italian statistician.
You can think of analysis of variance as a whole family of procedures beginning with the simple and frequently used t-test and becoming quite complicated with the use of multiple dependent variables (MANOVA, to be explained later in this chapter) and covariates.
Although the simpler varieties of these statistics can actually be calculated by hand, it is assumed that you will use a statistical software package for your calculations.
If you want to see how these calculations are done, you could try to compute a correlation, chi-square, t-test, or ANOVA yourself (see Yuker, 1958; Field, 2009), but in general it is too time consuming and too subject to human error to do these by hand.
IMPORTANT TERMINOLOGY
Several terms are used in these analyses that you need to be familiar with to understand the analyses themselves and the results. Many will already be familiar to you.
Statistical significance: This indicates the probability that the differences found are a result of error, not the treatment. Stated in terms of the P value, the convention is to accept either a 1% (P ≤ 0.01), or 1 out of 100, or 5% (P ≤ 0.05), or 5 out of 100, possibility that any differences seen could have been due to error (Cortina & Dunlap, 2007).
Research hypothesis: A research hypothesis is a declarative statement of the expected relationship between the dependent and independent variable(s).
Null hypothesis: The null hypothesis, based on the research hypothesis, states that the predicted relationships will not be found or that those found could have occurred by chance, meaning the difference will not be statistically significant.
Effect size: This is defined by Cortina and Dunlap as “the amount of variance in one variable accounted for by another in the sample at hand” (2007, p. 231). Effect size estimates are helpful adjuncts to significance testing. An important limitation, however, is that they are heavily influenced by the type of treatment or manipulation that occurred and the measures that are used.
Confidence intervals: Although sometimes suggested as an adjunct or replacement for the significance level, confidence intervals are determined in part by the alpha (significance level) (Cortina & Dunlap, 2007). Likened to a margin of error, the confidence intervals indicate the range within which the true difference between means may lie. A narrow confidence interval implies high precision; we can specify believable values within a narrow range ...
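The "margin of error" idea can be made concrete with a quick calculation. The summary statistics below are made up for illustration, and the sketch uses the normal approximation (z = 1.96) rather than a t critical value, which is reasonable only for large samples:

```python
# Sketch: a 95% confidence interval for a difference between two means.
# Summary statistics are illustrative assumptions; normal approximation
# (z = 1.96) is used, which assumes large samples.
from math import sqrt

m1, s1, n1 = 7.5, 1.2, 300   # group 1: mean, SD, sample size
m2, s2, n2 = 7.1, 1.4, 300   # group 2: mean, SD, sample size

diff = m1 - m2
se = sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)   # standard error of the difference
z = 1.96                                  # two-sided 95% critical value

low, high = diff - z * se, diff + z * se
print(round(low, 3), round(high, 3))  # 0.191 0.609
```

Because the interval excludes zero, the difference would be significant at the .05 level; and because the interval is fairly narrow, the estimate is fairly precise, which is exactly the reading the paragraph above describes.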
Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn. ... The methods of inferential statistics are (1) the estimation of parameter(s) and (2) testing of statistical hypotheses.
5
ANOVA: Analyzing Differences in Multiple Groups
Learning Objectives
After reading this chapter, you should be able to:
• Describe the similarities and differences between t-tests and ANOVA.
• Explain how ANOVA can help address some of the problems and limitations associated with t-tests.
• Use ANOVA to analyze multiple group differences.
• Use post hoc tests to pinpoint group differences.
• Determine the practical importance of statistically significant findings using effect sizes with eta-squared.
tan81004_05_c05_103-134.indd 103 2/22/13 4:28 PM
CHAPTER 5 Section 5.1 From t-Test to ANOVA
Chapter Overview
5.1 From t-Test to ANOVA
The ANOVA Advantage
Repeated Testing and Type I Error
5.2 One-Way ANOVA
Variance Between and Within
The Statistical Hypotheses
Measuring Data Variability in the ANOVA
Calculating Sums of Squares
Interpreting the Sums of Squares
The F Ratio
The ANOVA Table
Interpreting the F Ratio
Locating Significant Differences
Determining Practical Importance
5.3 Requirements for the One-Way ANOVA
Comparing ANOVA and the Independent t
One-Way ANOVA on Excel
5.4 Another One-Way ANOVA
Chapter Summary
Introduction
During the early part of the 20th century, R. A. Fisher worked at an agricultural research station in rural southern England. In his work analyzing the effect of pesticides and fertilizers on results like crop yield, he was stymied by the limitations in Gosset's independent samples t-test, which allowed him to compare just two samples at a time. In the effort to develop a more comprehensive approach, Fisher created a statistical method he called analysis of variance, often referred to by its acronym, ANOVA, which allows for making multiple comparisons at the same time using relatively small samples.
5.1 From t-Test to ANOVA
The process for completing an independent samples t-test in Chapter 4 illustrated a number of things. The calculated t value, for example, is a score based on a ratio, one determined by dividing the variability between the two groups (M1 − M2) by the variability within the two groups, which is what the standard error of the difference (SEd) measures. So both the numerator and the denominator of the t-ratio are measures of data variability, albeit from different sources. The difference between the means is variability attributed primarily to the independent variable, which is the group to which individual subjects belong. The variability in the denominator is variability for reasons that are unexplained—error variance in the language of statistics.
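The t-ratio just described can be computed directly. The two groups below are made-up illustrative data; the code forms the pooled variance, the standard error of the difference (SEd), and then the ratio of between-groups to within-groups variability:

```python
# Sketch of the t-ratio described above: between-groups variability
# (M1 - M2) divided by within-groups variability (SEd). Data are made up.
from statistics import mean, variance
from math import sqrt

g1 = [10.0, 12.0, 11.0, 13.0]
g2 = [8.0, 9.0, 7.0, 10.0]

m1, m2 = mean(g1), mean(g2)
n1, n2 = len(g1), len(g2)

# Pooled variance across the two groups, then the standard error
# of the difference between means (SEd)
sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
sed = sqrt(sp2 * (1 / n1 + 1 / n2))

t = (m1 - m2) / sed   # between-groups variability / within-groups variability
print(round(t, 3))    # 3.286
```

Fisher's F follows the same between-versus-within pattern; for the two-group case, F is simply t squared.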
In his method, ANOVA, Fisher also embraced this pattern of comparing between-groups variance to within-groups variance. He calculated the variance statistics differently, as we shall see, but he followed Gosset's pattern of a ratio of between-groups variance compared to within.
The ANOVA ...
Inferential Statistics- Based on probability; used to draw conclusions or make generalizations about a given population or problem.
Example: “What can I infer about 5-minute Apgar scores of premature babies (the population) after calculating a mean Apgar score of 7.5 in a sample of 300 premature babies?” (McGonigle & Mastrain, p. 376, 2017).
Sampling Distributions- A sampling distribution is the frequency distribution of a statistic over many random samples from a single population.
Sampling Distribution of the Mean - as an example we randomly draw test scores from 25 students out of a total group of 5,000. We then calculate the mean, then draw a new group and repeat; each mean will serve as one datum, or data point.
Hypothesis Testing- is the use of statistics to determine the probability that a given hypothesis is true.
Null Hypothesis- the hypothesis that there is no significant difference between specified populations; or differences can be attributed to sampling or experimental error
Type 1 Error- This error occurs when we reject the null hypothesis when we should have retained it.
Type 2 Error- This error occurs when we fail to reject a false null hypothesis. In other words, we believe that there isn't a genuine effect when actually there is one.
Parametric statistics – A class of statistical tests that involve assumptions about the distribution of the variables and the estimation of a parameter.
Nonparametric statistics – A class of statistical tests that do not involve stringent assumptions about the distribution of variables.
Between-subject design – A research design in which separate groups of people are compared (e.g., smokers and nonsmokers; intervention and control group subjects).
Within-subject design – A research design in which a single group of participants is compared under different conditions or at different points in time (e.g., before and after surgery).
Two classes of Statistical Tests:
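The sampling-distribution idea described above (repeatedly drawing samples of 25 scores from a group of 5,000 and recording each mean) can be simulated directly. The population below is generated with made-up parameters purely for illustration:

```python
# Sketch of a sampling distribution of the mean: repeatedly draw samples
# of 25 scores from a simulated population of 5,000 and keep each mean.
# Population parameters (mean 70, SD 10) are illustrative assumptions.
import random
from statistics import mean, stdev

random.seed(1)
population = [random.gauss(70, 10) for _ in range(5000)]  # 5,000 "test scores"

# Each sample mean is one datum in the sampling distribution
sample_means = [mean(random.sample(population, 25)) for _ in range(1000)]

# The sample means cluster around the population mean,
# with far less spread than the raw scores (roughly SD / sqrt(25))
print(round(mean(population), 1), round(mean(sample_means), 1))
print(round(stdev(population), 1), round(stdev(sample_means), 1))
```

Running this shows the means piling up near the population mean with roughly one-fifth of the population's spread, which is the intuition behind the standard error of the mean.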
Parametric tests - tests involving an estimation of a parameter, the use of interval or ratio-level data, and the assumption of normally distributed variables. Include t-tests and ANOVA.
Nonparametric tests - used when the data are nominal or ordinal or when a normal distribution cannot be assumed. Include the Mann-Whitney U test, Wilcoxon signed - rank test, and Kruskal - Wallis test.
Statistical Tests
T-test parametric procedure identifying mean differences for two independent groups, like experiment versus control or dependent groups, like pretreatment and post-treatment scores.
One - way ANOVA - tests the relationship between one categorical independent variable, such as different interventions, and a continuous dependent variable.
...
Calculating Analysis of Variance (ANOVA) and Post Hoc Analyses Follo.docxaman341480
Calculating Analysis of Variance (ANOVA) and Post Hoc Analyses Following ANOVA
Analysis of variance (ANOVA)
is a statistical procedure that compares data between two or more groups or conditions to investigate the presence of differences between those groups on some continuous dependent variable (see
Exercise 18
). In this exercise, we will focus on the
one-way ANOVA
, which involves testing one independent variable and one dependent variable (as opposed to other types of ANOVAs, such as factorial ANOVAs that incorporate multiple independent variables).
Why ANOVA and not a
t
-test? Remember that a
t
-test is formulated to compare two sets of data or two groups at one time (see
Exercise 23
for guidance on selecting appropriate statistics). Thus, data generated from a clinical trial that involves four experimental groups, Treatment 1, Treatment 2, Treatments 1 and 2 combined, and a Control, would require 6
t
-tests. Consequently, the chance of making a Type I error (alpha error) increases substantially (or is inflated) because so many computations are being performed. Specifically, the chance of making a Type I error is the number of comparisons multiplied by the alpha level. Thus, ANOVA is the recommended statistical technique for examining differences between more than two groups (
Zar, 2010
).
ANOVA is a procedure that culminates in a statistic called the
F
statistic. It is this value that is compared against an
F
distribution (see
Appendix C
) in order to determine whether the groups significantly differ from one another on the dependent variable. The formulas for ANOVA actually compute two estimates of variance: One estimate represents differences between the groups/conditions, and the other estimate represents differences among (within) the data.
Research Designs Appropriate for the One-Way ANOVA
Research designs that may utilize the one-way ANOVA include the randomized experimental, quasi-experimental, and comparative designs (
Gliner, Morgan, & Leech, 2009
). The independent variable (the “grouping” variable for the ANOVA) may be active or attributional. An active independent variable refers to an intervention, treatment, or program. An attributional independent variable refers to a characteristic of the participant, such as gender, diagnosis, or ethnicity. The ANOVA can compare two groups or more. In the case of a two-group design, the researcher can either select an independent samples
t
-test or a one-way ANOVA to answer the research question. The results will always yield the same conclusion, regardless of which test is computed; however, when examining differences between more than two groups, the one-way ANOVA is the preferred statistical test.
Example 1: A researcher conducts a randomized experimental study wherein she randomizes participants to receive a high-dosage weight loss pill, a low-dosage weight loss pill, or a placebo. She assesses the number of pounds lost from baseline to post-treatment
378
for the thre ...
In Unit 9, we will study the theory and logic of analysis of varianc.docxlanagore871
In Unit 9, we will study the theory and logic of analysis of variance (ANOVA). Recall that a t test requires a predictor variable that is dichotomous (it has only two levels or groups). The advantage of ANOVA over a t test
is that the categorical predictor variable can have two or more groups. Just like a t test, the outcome variable in
ANOVA is continuous and requires the calculation of group means.
Logic of a "One-Way" ANOVA
The ANOVA, or F test, relies on predictor variables referred to as factors. A factor is a categorical (nominal)
predictor variable. The term "one-way" is applied to an ANOVA with only one factor that is defined by two or
more mutually exclusive groups. Technically, an ANOVA can be calculated with only two groups, but the t test is
usually used instead. Instead, the one-way ANOVA is usually calculated with three or more groups, which are
often referred to as levels of the factor.
If the ANOVA includes multiple factors, it is referred to as a factorial ANOVA. An ANOVA with two factors is
referred to as a "two-way" ANOVA; an ANOVA with three factors is referred to as a "three-way" ANOVA, and
so on. Factorial ANOVA is studied in advanced inferential statistics. In this course, we will focus on the theory
and logic of the one-way ANOVA.
ANOVA is one of the most popular statistics used in social sciences research. In non-experimental designs, the
one-way ANOVA compares group means between naturally existing groups, such as political affiliation
(Democrat, Independent, Republican). In experimental designs, the one-way ANOVA compares group means
for participants randomly assigned to different treatment conditions (for example, high caffeine dose; low
caffeine dose; control group).
Avoiding Inflated Type I Error
You may wonder why a one-way ANOVA is necessary. For example, if a factor has four groups ( k = 4), why not
just run independent sample t tests for all pairwise comparisons (for example, Group A versus Group B, Group
A versus Group C, Group B versus Group C, et cetera)? Warner (2013) points out that a factor with four groups
involves six pairwise comparisons. The issue is that conducting multiple pairwise comparisons with the same
data leads to inflated risk of a Type I error (incorrectly rejecting a true null hypothesis—getting a false positive).
The ANOVA protects the researcher from inflated Type I error by calculating a single omnibus test that
assumes all k population means are equal.
Although the advantage of the omnibus test is that it helps protect researchers from inflated Type I error, the
limitation is that a significant omnibus test does not specify exactly which group means differ, just that there is a
difference "somewhere" among the group means. A researcher therefore relies on either (a) planned contrasts
of specific pairwise comparisons determined prior to running the F test or (b) follow-up tests of pairwise
comparisons, also referred to as post-hoc tests, to determine exac ...
Chapter 12Choosing an Appropriate Statistical TestiStockph.docxmccormicknadine86
Chapter 12
Choosing an Appropriate Statistical Test
iStockphoto/ThinkstockLearning Objectives
After reading this chapter, you will be able to. . .
· understand the importance of using the proper statistical analysis.
· identify the type of analysis based on four critical questions.
· use the decision tree to identify the correct statistical test.
Here we are in the final chapter that will pull all prior chapters together. Chapters 1 to 3 discussed descriptive statistics while the latterchapters, 4 to 11, discussed inferential statistics. Each of the inferential chapters presented a statistical concept then conducted the appropriateanalysis to be able to test a hypothesis. The big question for students learning statistics is, "How do I know if I'm using the correct statisticaltest?" For experienced statisticians this question is easy to answer as it is based on a few criteria. However, to a student just learning statisticsor to the novice researcher, this question is a legitimate one. Many statistical reference texts include a guide that asks specific questionsregarding the type of research question, design, number and scales of measurement of variables, and statistical assumption of the data thatallows you to use an elegant chart known as a decision tree. Based on the answers to these questions, the decision tree is used to helpdetermine the type of analysis to be used for the research, thereby helping you answer this big question.
12.1 Considerations
To make the correct decisions based on the use of a decision tree, there are four specific questions that must be answered. These questions areas follows:
· What is your overarching research question?
· How many independent, dependent, and covariate variables are used in the study?
· What are the scales of measurement of each of your variables?
· Are there violations of statistical assumptions?
If you are able to answer these specific questions, then you will be able to determine the proper analysis for your study. These questions arecritically important, and if they cannot be answered, then not enough thought has gone into the research. That said, let us discuss each ofthese questions so that they can be considered and answered in the use of the decision tree.
What Is Your Overarching Research Question?Try It!
Derive your ownresearch question foryour Master's Thesisor DoctoralDissertation. Have a colleague orprofessor read it. What are theirthoughts or suggestions forimprovements?
Answering this question seems simple enough as all research has an overarching research questionthat drives the study, especially since this dictates the type of quantitative methodology. There arekey words in every research question that help determine the appropriate type of analysis. Forinstance, if the research question states, "What are the effects of job satisfaction on employeeproductivity?" the keyword is "effects" as in the cause and effect of job satisfaction (theindependent variable) on productivity (th ...
Inferential Analysis
Chapter 20
NUR 6812Nursing Research
Florida National University
Introduction - Inferential Analysis
We will discuss analysis of variance and regression, which are technically part of the same family of statistics known as the general linear method but are used to achieve different analytical goals
ANALYSIS OF VARIANCE
Analysis of variance (ANOVA) is used so often that Iversen and Norpoth (1987) said they once had a student who thought this was the name of an Italian statistician.
You can think of analysis of variance as a whole family of procedures beginning with the simple and frequently used t-test and becoming quite complicated with the use of multiple dependent variables (MANOVA, to be explained later in this chapter) and covariates.
Although the simpler varieties of these statistics can actually be calculated by hand, it is assumed that you will use a statistical software package for your calculations.
If you want to see how these calculations are done, you could try to compute a correlation, chi-square, t-test, or ANOVA yourself (see Yuker, 1958; Field, 2009), but in general it is too time consuming and too subject to human error to do these by hand.
IMPORTANT TERMINOLOGY
Several terms are used in these analyses that you need to be familiar with to understand the analyses themselves and the results. Many will already be familiar to you.
Statistical significance: This indicates the probability that the differences found are a result of error, not the treatment. Stated in terms of the P value, the convention is to accept either a 1% (P ≤ 0.01), or 1 out of 100, or 5% (P ≤ 0.05), or 5 out of 100, possibility that any differences seen could have been due to error (Cortina & Dunlap, 2007).
Research hypothesis: A research hypothesis is a declarative statement of the expected relationship between the dependent and independent variable(s).
Null hypothesis: The null hypothesis, based on the research hypothesis, states that the predicted relationships will not be found or that those found could have occurred by chance, meaning the difference will not be statistically significant.
Effect size: This is defined by Cortina and Dunlap as “the amount of variance in one variable accounted for by another in the sample at hand” (2007, p. 231). Effect size estimates are helpful adjuncts to significance testing. An important limitation, however, is that they are heavily influenced by the type of treatment or manipulation that occurred and the measures that are used.
Confidence intervals: Although sometimes suggested as an adjunct or replacement for the significance level, confidence intervals are determined in part by the alpha (significance level) (Cortina & Dunlap, 2007). Likened to a margin of error, the confidence intervals indicate the range within which the true difference between means may lie. A narrow confidence interval implies high precision; we can specify believable values within a narrow range ...
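The confidence-interval idea can be made concrete with a short sketch. Assuming NumPy and SciPy are available, and using hypothetical scores, a 95% confidence interval for the difference between two group means is built from the pooled variance, the standard error of the difference, and the critical t value set by alpha:

```python
import numpy as np
from scipy import stats

a = np.array([4.0, 5.5, 6.1, 5.2, 4.8])
b = np.array([6.3, 7.1, 6.8, 7.4, 6.0])

diff = a.mean() - b.mean()
n1, n2 = len(a), len(b)

# Pooled variance and the standard error of the difference
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))

# Critical t for alpha = .05, two-tailed; the "margin of error" is t_crit * se
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (diff - t_crit * se, diff + t_crit * se)
```

Note how the interval depends directly on alpha: a smaller alpha gives a larger critical t and therefore a wider, less precise interval.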
Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn. ... The methods of inferential statistics are (1) the estimation of parameter(s) and (2) testing of statistical hypotheses.
5 ANOVA: Analyzing Differences in Multiple Groups
Learning Objectives
After reading this chapter, you should be able to:
• Describe the similarities and differences between t-tests and ANOVA.
• Explain how ANOVA can help address some of the problems and limitations associated with t-tests.
• Use ANOVA to analyze multiple group differences.
• Use post hoc tests to pinpoint group differences.
• Determine the practical importance of statistically significant findings using effect sizes with eta-squared.
CHAPTER 5, Section 5.1: From t-Test to ANOVA
Chapter Overview
5.1 From t-Test to ANOVA
The ANOVA Advantage
Repeated Testing and Type I Error
5.2 One-Way ANOVA
Variance Between and Within
The Statistical Hypotheses
Measuring Data Variability in the ANOVA
Calculating Sums of Squares
Interpreting the Sums of Squares
The F Ratio
The ANOVA Table
Interpreting the F Ratio
Locating Significant Differences
Determining Practical Importance
5.3 Requirements for the One-Way ANOVA
Comparing ANOVA and the Independent t
One-Way ANOVA on Excel
5.4 Another One-Way ANOVA
Chapter Summary
Introduction
During the early part of the 20th century, R. A. Fisher worked at an agricultural research station in rural southern England. In his work analyzing the effect of pesticides and fertilizers on results like crop yield, he was stymied by the limitations of Gosset's independent samples t-test, which allowed him to compare just two samples at a time. In the effort to develop a more comprehensive approach, Fisher created a statistical method he called analysis of variance, often referred to by its acronym, ANOVA, which allows for making multiple comparisons at the same time using relatively small samples.
5.1 From t-Test to ANOVA
The process for completing an independent samples t-test in Chapter 4 illustrated a number of things. The calculated t value, for example, is a score based on a ratio, one determined by dividing the variability between the two groups (M1 − M2) by the variability within the two groups, which is what the standard error of the difference (SEd) measures. So both the numerator and the denominator of the t-ratio are measures of data variability, albeit from different sources. The difference between the means is variability attributed primarily to the independent variable, which is the group to which individual subjects belong. The variability in the denominator is variability for reasons that are unexplained, known as error variance in the language of statistics.
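The ratio described above can be sketched directly. Assuming NumPy and SciPy are available, and using hypothetical scores, the t value is the between-groups variability (M1 − M2) divided by the within-groups variability (SEd), and the hand computation matches the software's result:

```python
import numpy as np
from scipy import stats

g1 = np.array([12.0, 14.0, 11.0, 15.0, 13.0])
g2 = np.array([16.0, 18.0, 17.0, 15.0, 19.0])

between = g1.mean() - g2.mean()            # M1 - M2, the numerator
n1, n2 = len(g1), len(g2)

# Pooled variance, then SEd, the denominator
sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
sed = np.sqrt(sp2 * (1 / n1 + 1 / n2))

t_manual = between / sed

# The same test done by SciPy gives an identical t value
t_scipy, p = stats.ttest_ind(g1, g2)
```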
In his method, ANOVA, Fisher also embraced this pattern of comparing between-groups variance to within-groups variance. He calculated the variance statistics differently, as we shall see, but he followed Gosset's pattern of a ratio of between-groups variance compared to within.
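Fisher's ratio can be sketched the same way. Assuming NumPy and SciPy are available, and using hypothetical scores for three groups, the F ratio is the mean square between groups divided by the mean square within groups, and the hand computation matches SciPy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

groups = [np.array([12.0, 14.0, 11.0, 15.0, 13.0]),
          np.array([16.0, 18.0, 17.0, 15.0, 19.0]),
          np.array([13.0, 12.0, 15.0, 14.0, 16.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k = len(groups)                             # number of groups
N = len(all_scores)                         # total number of scores

# Sum of squares between: each group mean's deviation from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Sum of squares within: each score's deviation from its own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)           # df between = k - 1
ms_within = ss_within / (N - k)             # df within = N - k
f_manual = ms_between / ms_within

# The same analysis done by SciPy gives an identical F value
f_scipy, p = stats.f_oneway(*groups)
```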
· Explore and assess different remote access solutions.
Assignment Requirements
Discuss with your peers which of the two remote access solutions, virtual private networks (VPNs) or hypertext transport protocol secure (HTTPS), you will rate as the best. You need to make a choice between the two remote access solutions based on the following features:
· Identification, authentication, and authorization
· Cost, scalability, reliability, and interoperability
.
· FASB ASC & GARS Login credentials LinkUser ID AAA51628Pas.docxalinainglis
· FASB ASC & GARS Login credentials
Link
User ID: AAA51628
Password: qc3A9WS
· FASB Codification Learning Guide
· COSO Login
User ID: aaa72751
Password: JhF3a2G
Copyright 2018 Governmental Accounting Standards Board
Foreword
This content collection contains all the original pronouncements that currently constitute the body of state and local governmental accounting and financial reporting standards and guidelines. Specifically, the content collection incorporates these pronouncements:
• Governmental Accounting Standards Board (GASB) Statements, Interpretations, Concepts Statements, Technical Bulletins, and Implementation Guides issued through December 31, 2018
• National Council on Governmental Accounting (NCGA) Statements and Interpretations currently in force and NCGA Concepts Statement 1
• American Institute of Certified Public Accountants (AICPA) 1974 Industry Audit Guide and related Statements of Position continued in force when the GASB began operations
• GASB Suggested Guidelines for Voluntary Reporting issued through December 31, 2018.
Unless otherwise noted, original pronouncements in this infobase are presented in their entirety, with the exception of appendices containing codification instructions, which have been omitted. Pronouncements may include one or more nonauthoritative sections. Authoritative guidance is presented in the main body of each pronouncement. Glossaries also are considered to be authoritative. All other appendices (for example, bases for conclusions and illustrations) and summaries are nonauthoritative. In addition, the entire Suggested Guidelines for Voluntary Reporting, SEA Performance Information, is nonauthoritative.
A status page at the beginning of each pronouncement identifies subsequent changes (amendments and supersessions) to the pronouncement as well as the source of those changes. The status page also identifies (a) other pronouncements affected by that pronouncement, (b) interpretive pronouncements clarifying that pronouncement, (c) the effective date, and (d) the principal sections of the GASB Codification of Governmental Accounting and Financial Reporting Standards in which the pronouncement is incorporated.
Within each pronouncement, a shading technique is used to identify amended or superseded standards. All terms, sentences, and paragraphs that have been deleted or superseded by subsequent pronouncements are shaded. Sentences or paragraphs that have been amended by the addition of terms, sentences, or new footnotes are marked with a vertical solid bar ( | ) in the left margin alongside the amended material. When standards are amended or superseded, relevant nonauthoritative appendices are also modified to reflect those changes.
Appendix A is a reproduction of GASB Codification Appendix F, "Finding List of Original Pronouncements." It shows where each paragraph of each original pronouncement may be found in the Codification, or whether the paragraph contains.
· Due Sat. Sep. · Format Typed, double-spaced, sub.docxalinainglis
·
Due:
Sat. Sep.
·
Format
: Typed, double-spaced, submitted as a word-processing document.
12 point, text-weight font, 1-inch margins.
·
·
Length
: 850 - 1000 words (approx. 3-4 pages)
·
·
Overview
: In Unit 1 and Unit 2, we focused on ways that writers build ideas from personal memories and experiences into interesting narratives that convey significance and meaning to new audiences. In Unit 3, we have been discussing how writers invent ideas by interacting with other communities through firsthand observation and description. These relationships and discoveries can give writers insight into larger concepts or ideas that are valuable to specific communities. For this writing project, you will use firsthand observations and discoveries to write about people and the issues that are important to them. Your evidence will come from the details you observe as you investigate other people, places, and events.
Assignment
Write an ethnography essay focused on a particular group of people and the routines or practices that best reveal their unique significance as a group.
An ethnography is a written description of a particular cultural group or community. For the ethnography essay, you can follow the guidelines in the CEL, p. 110-112. Your ethnography should:
· Begin with your observations of a particular group. Plan to observe this group 2-3 times, so that you can get a better sense of their routines, habits, and practices.
o
Note: if you cannot travel to observe a group or community, plan to observe that community digitally through website documents, social media, and/or emails exchanged with group members.
· Convey insight into the characteristics that give the group unique significance.
· Provide context and background, including location, values, beliefs, histories, rituals, dialogue, and any other details that help convey the group's significance.
· Follow a deliberate organizational pattern that focuses on one or more insights about the group while also providing details and information about the group's culture and routine
As you look back over your observations and notes, remember that your essay should do more than simply relate details without any larger significance. Ethnographies also draw out the unique, interesting, and special qualities of a group or culture that help readers connect to their values or motivations. Note: Please keep in mind that writing in this class is public, and anything you write about may be shared with other students and instructors. Please only write about details that you are comfortable making public within our classroom community.
Assignment Components
In order to finish this project, we will work on the following parts together over the next few weeks:
Draft
: Include at least one pre-revised draft of your essay. The draft needs to meet the word count of 850 words and must also apply formatting requirements for the project—in other words it must be complete. Make sure that your.
· Expectations for Power Point Presentations in Units IV and V I.docxalinainglis
· Expectations for Power Point Presentations in Units IV and V
I would like to provide information about what needs to be included in presentations. Please review the rubric prior to submitting any assignment. If you don't know where to find this, please contact me.
1. You need a title slide.
2. You need an overview of the presentation slide (slide after the title slide). This is how you would organize a presentation if you were presenting it at work.
3. You need a summary slide (before the reference slide); same reason as above.
4. Please do not forget to cite on slides where you are writing about something related to what you have read. Please consider each slide a paragraph. You can cite on the slides or in the notes. If you do not cite, you will not get credit for the slide.
- Direct quotes should not be used in this presentation as they are not analysis.
5. Remember, all I can evaluate is what you submit, so please consider using notes to explain what you are writing in further detail. Bullets are great and you can use these but then provide more detail in the notes.
6. Graphics - Please include graphics/charts/graphs as this is evaluated in the rubric (quality of the presentation).
7. References - For all references, you need citations. For all citations, you need references. They must match. All must be formatted using APA requirements. Please review the Quick Reference Guide that was posted in the announcements.
Please never hesitate to email me with any questions. If you need further clarification about feedback or if you do not agree with any of the feedback, please contact me. My door is always open.
Learning Preferences of Millennials in a Knowledge-Based
Environment
Giora Hadar
University of Groningen (RuG), The Netherlands
[email protected]
Abstract: This paper discusses how understanding intergenerational knowledge transfer can improve knowledge transfer in
large organizations. The U.S. Federal Aviation Administration (FAA) risks significant loss of institutional human capital as huge
numbers of senior controllers retire. To perform their job, air traffic controllers must develop in-depth knowledge, including
tacit knowledge typically acquired over many years, so they can quickly make accurate decisions while dealing with the many
air traffic control (ATC) situations that arise. The only pool available to replace the retiring controllers is the Millennials. This
group, the best educated ever, has its own attitudes toward life, work, and training as well as technology use. Because
knowledge transfer and training involve both technology and human interaction, this paper explores not only the role of
technology but also that of intergenerational communications in both the training and operational environments of a highly
technical workplace.
Keywords: knowledge transfer, training, tacit knowledge, mentoring, mobile smart devices, communications
1. Introduction
Intergenerational knowledge transfe.
· Due Friday by 1159pmResearch Paper--IssueTopic Ce.docxalinainglis
·
Due
Friday by 11:59pm
Research Paper--
Issue/Topic:
Celebrity, Celebrity Culture and the effects on society
1500 or more words
MLA format
Must include research from
at least 4
scholarly sources (use HCC Library and GoogleScholar) I have attached 20 pdf with scholarly sources to choose from. 2 were provided from teacher Celebrity Culture Beneficial and The Culture of Celebrity. I have also attached a Word Document Research Paper Guide. Please read all the way to bottom more instructions at the bottom. Disregards Links and external cites those are the PDFs.
Celebrity
is a
popular cultural Links to an external site.
phenomenon surrounding a well-known person. Though many
celebritiesLinks to an external site.
became famous as a result of their achievements or experiences, a person who obtains celebrity status does not necessarily need to have accomplished anything significant beyond being widely recognized by the public. Some celebrities use their
fameLinks to an external site.
to reach the upper levels of social status. Popular celebrities can wield significant influence over their fans and followers. Cultural historian and film critic Neal Gabler has described the phenomenon of celebrity as a process similar to performance art in which the celebrity builds intrigue and allure by presenting a manufactured image to the public. This image is reinforced through
advertisingLinks to an external site.
endorsements, appearances at high-profile events, tabloid gossip, and
social mediaLinks to an external site.
presence.
In previous decades, celebrity status was mainly reserved for film stars,
televisionLinks to an external site.
personalities,
entertainersLinks to an external site.
, politicians, and
athletesLinks to an external site.
. Contemporary celebrities come from diverse fields ranging from astrophysics to auto mechanics, or they may simply be famous for their lifestyle or
InternetLinks to an external site.
antics. Social media platforms such as YouTube, Twitter, and Instagram provide the means for previously unknown individuals to cultivate a significant following.
Celebrification
is the process by which someone or something previously considered ordinary obtains stardom. Previously commonplace activities, such as practicing
vegetarianismLinks to an external site.
or wearing white t-shirts, can undergo celebrification when associated with a famous person or major event.
Celebrity culture
exists when stardom becomes a pervasive part of the social order,
commodified
as a commercial brand. Celebrities’ personal lives are recast as products for consumption, with a dedicated fan base demanding information and unlimited access to the celebrity’s thoughts and activities. A niche community such as a fan base can be monetized through effective marketing that links brand loyalty to the consumer’s identity. Fans may be more likely to purchase a product or attend an event if they feel that doing so strengthens their.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
6
ONE-WAY BETWEEN-SUBJECTS ANALYSIS OF VARIANCE
6.1 Research Situations Where One-Way Between-Subjects Analysis of Variance (ANOVA) Is Used

A one-way between-subjects (between-S) analysis of variance (ANOVA) is used in research situations where the researcher wants to compare means on a quantitative Y outcome variable across two or more groups. Group membership is identified by each participant's score on a categorical X predictor variable. ANOVA is a generalization of the t test; a t test provides information about the distance between the means on a quantitative outcome variable for just two groups, whereas a one-way ANOVA compares means on a quantitative variable across any number of groups. The categorical predictor variable in an ANOVA may represent either naturally occurring groups or groups formed by a researcher and then exposed to different interventions. When the means of naturally occurring groups are compared (e.g., a one-way ANOVA to compare mean scores on a self-report measure of political conservatism across groups based on religious affiliation), the design is nonexperimental. When the groups are formed by the researcher and the researcher administers a different type or amount of treatment to each group while controlling extraneous variables, the design is experimental.

The term between-S (like the term independent samples) tells us that each participant is a member of one and only one group and that the members of samples are not matched or paired. When the data for a study consist of repeated measures or paired or matched samples, a repeated measures ANOVA is required (see Chapter 22 for an introduction to the analysis of repeated measures). If there is more than one categorical variable or factor included in the study, factorial ANOVA is used (see Chapter 13). When there is just a single factor, textbooks often name this single factor A, and if there are additional factors, these are usually designated factors B, C, D, and so forth. If scores on the dependent Y variable are in the form of rank or ordinal data, or if the data seriously violate assumptions required for ANOVA, a nonparametric alternative to ANOVA may be preferred.
In ANOVA, the categorical predictor variable is called a factor; the groups are called the levels of this factor. In the hypothetical research example introduced in Section 6.2, the factor is called "Types of Stress," and the levels of this factor are as follows: 1, no stress; 2, cognitive stress from a mental arithmetic task; 3, stressful social role play; and 4, mock job interview.

Comparisons among several group means could be made by calculating t tests for each pairwise comparison among the means of these four treatment groups. However, as described in Chapter 3, doing a large number of significance tests leads to an inflated risk for Type I error. If a study includes k groups, there are k(k - 1)/2 pairs of means; thus, for a set of four groups, the researcher would need to do (4 × 3)/2 = 6 t tests to make all possible pairwise comparisons. If α = .05 is used as the criterion for significance for each test, and the researcher conducts six significance tests, the probability that this set of six decisions contains at least one instance of Type I error is greater than .05. Most researchers assume that it is permissible to do a few significance tests (perhaps three or four) without taking special precautions to limit the risk of Type I error. However, if more than a few comparisons are examined, researchers usually want to use one of several methods that offer some protection against inflated risk of Type I error.
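To see how quickly this risk inflates, note that for m independent tests each conducted at level α, the probability of at least one Type I error is approximately 1 - (1 - α)^m. The following minimal sketch applies that approximation to the k = 4 case discussed above:

```python
# Familywise Type I error risk for all pairwise comparisons among k groups,
# assuming (as an approximation) that the tests are independent.

def n_pairwise(k):
    """Number of pairwise comparisons among k group means: k(k - 1)/2."""
    return k * (k - 1) // 2

def familywise_alpha(k, alpha=0.05):
    """Probability of at least one Type I error across all pairwise tests."""
    m = n_pairwise(k)
    return 1 - (1 - alpha) ** m

print(n_pairwise(4))                   # 6 pairwise t tests for k = 4 groups
print(round(familywise_alpha(4), 3))   # 0.265, well above the per-test .05
```

This is why researchers turn to a single omnibus test, or to corrected follow-up procedures, rather than running many uncorrected t tests.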
One way to limit the risk of Type I error is to perform a single omnibus test that examines all the comparisons in the study as a set. The F test in a one-way ANOVA provides a single omnibus test of the hypothesis that the means of all k populations are equal, in place of many t tests for all possible pairs of groups. As a follow-up to a significant overall F test, researchers often still want to examine selected pairwise comparisons of group means to obtain more information about the pattern of differences among groups. This chapter reviews the F test, which is used to assess differences for a set of more than two group means, and some of the more popular procedures for follow-up comparisons among group means.

The overall null hypothesis for one-way ANOVA is that the means of the k populations that correspond to the groups in the study are all equal:

H0: μ1 = μ2 = … = μk.
When each group has been exposed to different types or dosages of a treatment, as in a typical experiment, this null hypothesis corresponds to an assumption that the treatment has no effect on the outcome variable. The alternative hypothesis in this situation is not that all population means are unequal, but that there is at least one inequality for one pair of population means in the set or one significant contrast involving group means.

This chapter reviews the computation and interpretation of the F ratio that is used in between-S one-way ANOVA. This chapter also introduces an extremely important concept: the partition of each score into two components, namely, a first component that is predictable from the independent variable that corresponds to group membership and a second component that is due to the collective effects of all other uncontrolled or extraneous variables. Similarly, the total sum of squares can also be partitioned into a sum of squares (between groups) that reflects variability associated with treatment and a sum of squares (within groups) that is due to the influence of extraneous uncontrolled variables. From these sums of squares, we can estimate the proportion of variance in scores that is explained by, accounted for by, or predictable from group membership (or from the different treatments administered to groups). The concept of explained or predicted variance is crucial for understanding ANOVA; it is also important for the comprehension of multivariate analyses described in later chapters.
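The partition just described can be made concrete in a few lines of code. The sketch below uses small made-up scores (not the chapter's data set); it splits variability into a between-groups sum of squares and a within-groups sum of squares, which add up to the total sum of squares, and computes the proportion of variance explained by group membership:

```python
# Partition of sums of squares for a one-way between-S design.
# The scores below are invented purely for illustration.
groups = {
    "g1": [3, 4, 5],
    "g2": [6, 7, 8],
    "g3": [9, 10, 11],
}

all_scores = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_scores) / len(all_scores)

ss_between = 0.0  # variability of group means around the grand mean
ss_within = 0.0   # variability of scores around their own group mean
for ys in groups.values():
    group_mean = sum(ys) / len(ys)
    ss_between += len(ys) * (group_mean - grand_mean) ** 2
    ss_within += sum((y - group_mean) ** 2 for y in ys)

ss_total = sum((y - grand_mean) ** 2 for y in all_scores)
eta_squared = ss_between / ss_total  # proportion of variance explained

print(ss_between, ss_within, ss_total)  # 54.0 6.0 60.0; between + within = total
print(round(eta_squared, 3))            # 0.9
```

Here 90% of the variance in these artificial scores is associated with group membership; in real data the split is rarely this clean.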
The best question ever asked by a student in any of my statistics classes was a deceptively simple "Why is there variance?" In the hypothetical research situation described in the following section, the outcome variable is a self-report measure of anxiety, and the research question is, Why is there variance in anxiety? Why do some persons report more anxiety than other persons? To what extent are the differences in amount of anxiety systematically associated with the independent variable (type of stress), and to what extent are differences in the amount of self-reported anxiety due to other factors (such as trait levels of anxiety, physiological arousal, drug use, gender, other anxiety-arousing events that each participant has experienced on the day of the study, etc.)? When we do an ANOVA, we identify the part of each individual score that is associated with group membership (and, therefore, with the treatment or intervention, if the study is an experiment) and the part of each individual score that is not associated with group membership and that is, therefore, attributable to a variety of other variables that are not controlled in the study, or "error." In most research situations, researchers usually hope to show that a large part of the scores is associated with, or predictable from, group membership or treatment.
6.2 Hypothetical Research Example

As an illustration of the application of one-way ANOVA, consider the following imaginary study. Suppose that an experiment is done to compare the effects of four situations: Group 1 is tested in a "no-stress," baseline situation; Group 2 does a mental arithmetic task; Group 3 does a stressful social role play; and Group 4 participants do a mock job interview. For this study, the X variable is a categorical variable with codes 1, 2, 3, and 4 that represent which of these four types of stress each participant received. This categorical X predictor variable is called a factor; in this case, the factor is called "Types of Stress"; the four levels of this factor correspond to no stress, mental arithmetic, stressful role play, and mock job interview. At the end of each session, the participants self-report their anxiety on a scale that ranges from 0 = no anxiety to 20 = extremely high anxiety. Scores on anxiety are, therefore, scores on a quantitative Y outcome variable. The researcher wants to know whether, overall, anxiety levels differed across these four situations and, if so, which situations elicited the highest anxiety and which treatment conditions differed significantly from baseline and from other types of stress. A convenience sample of 28 participants was obtained; each participant was randomly assigned to one of the four levels of stress. This results in k = 4 groups with n = 7 participants in each group, for a total of N = 28 participants in the entire study. The SPSS Data View worksheet that contains data for this imaginary study appears in Figure 6.1.
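A design like this can be analyzed by computing the one-way ANOVA F ratio directly: F = MS between / MS within, with df = (k - 1, N - k). The sketch below uses invented anxiety-like scores (the actual stress_anxiety.sav data are not reproduced here) and a smaller n per group for brevity:

```python
# One-way between-S ANOVA F ratio computed from first principles.
# Scores are invented for illustration; they are NOT the chapter's data.
groups = [
    [10, 9, 11, 10],   # Group 1: no stress (baseline)
    [12, 13, 11, 12],  # Group 2: mental arithmetic
    [14, 15, 13, 14],  # Group 3: stressful role play
    [16, 17, 15, 16],  # Group 4: mock job interview
]

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / N

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)

df_between = k - 1   # numerator degrees of freedom
df_within = N - k    # denominator degrees of freedom
ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within

print(df_between, df_within)  # 3 12
print(round(F, 2))            # 40.0 for these artificial, well-separated groups
```

An F this large would be compared against the critical value of the F distribution with (3, 12) df; in practice, software such as SPSS reports the exact p value.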
6.3 Assumptions About Scores on the Dependent Variable for One-Way Between-S ANOVA

The assumptions for one-way ANOVA are similar to those described for the independent samples t test (see Section 3 of Chapter 5). The scores on the dependent variable should be quantitative and, at least approximately, at the interval/ratio level of measurement. The scores should be approximately normally distributed in the entire sample and within each group, with no extreme outliers. The variances of scores should be approximately equal across groups. Observations should be independent of each other, both within groups and between groups. Preliminary screening involves the same procedures as for the t test: Histograms of scores should be examined to assess normality of distribution shape, a box and whiskers plot of the scores within each group could be used to identify outliers, and the Levene test (or some other test of homogeneity of variance) could be used to assess whether the homogeneity of variance assumption is violated. If major departures from the normal distribution shape and/or substantial differences among variances are detected, it may be possible to remedy these problems by applying data transformations, such as the base 10 logarithm, or by removing or modifying scores that are outliers, as described in Chapter 4.
Figure 6.1 Data View Worksheet for Stress/Anxiety Study in stress_anxiety.sav
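Before running a formal Levene test, a quick informal screen is to compute the sample variance within each group and compare the largest to the smallest. The sketch below uses invented scores, and the max/min ratio cutoff of 4 is a common rule of thumb, not a fixed standard:

```python
# Rough screening for homogeneity of variance across groups.
# Scores are made up for illustration; the ratio cutoff is a rule of thumb.
from statistics import variance  # sample variance, n - 1 in the denominator

groups = {
    "no stress":         [10, 9, 11, 10, 12],
    "mental arithmetic": [12, 14, 11, 13, 12],
    "role play":         [14, 18, 10, 15, 13],
}

variances = {name: variance(ys) for name, ys in groups.items()}
ratio = max(variances.values()) / min(variances.values())

for name, v in variances.items():
    print(f"{name}: s^2 = {v:.2f}")
print(f"max/min variance ratio = {ratio:.2f}")
if ratio > 4:  # rule-of-thumb flag; follow up with a formal Levene test
    print("possible violation of homogeneity of variance")
```

A flagged ratio is a prompt for a formal test and possible remedies (transformation, outlier handling), not a verdict by itself.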
6.4 Issues in Planning a Study

When an experiment is designed to compare treatment groups, the researcher needs to decide how many groups to include, how many participants to include in each group, how to assign participants to groups, and what types or dosages of treatments to administer to each group. In an experiment, the levels of the factor may correspond to different dosage levels of the same treatment variable (such as 0 mg caffeine, 100 mg caffeine, 200 mg caffeine) or to qualitatively different types of interventions (as in the stress study, where the groups received, respectively, no stress, mental arithmetic, role play, or mock job interview stress interventions). In addition, it is necessary to think about issues of experimental control, as described in research methods textbooks; for example, in experimental designs, researchers need to avoid confounds of other variables with the treatment variable. Research methods textbooks (e.g., Cozby, 2004) provide a more detailed discussion of design issues; a few guidelines are listed here.
1. If the treatment variable has a curvilinear relation to the outcome variable, it is necessary to have a sufficient number of groups to describe this relation accurately, usually at least 3 groups. On the other hand, it may not be practical or affordable to have a very large number of groups; if an absolute minimum n of 10 (or, better, 20) participants per group is included, then a study that included 15 groups would require a minimum of 150 (or 300) participants.

2. In studies of interventions that may create expectancy effects, it is necessary to include one or several kinds of placebo and no treatment/control groups for comparison. For instance, studies of the effect of biofeedback on heart rate (HR) sometimes include a group that gets real biofeedback (a tone is turned on when HR increases), a group that gets noncontingent feedback (a tone is turned on and off at random in a way that is unrelated to HR), and a group that receives instructions for relaxation and sits quietly in the lab without any feedback (Burish, 1981).

3. Random assignment of participants to conditions is desirable to try to ensure equivalence of the groups prior to treatment. For example, in a study where HR is the dependent variable, it would be desirable to have participants whose HRs were equal across all groups prior to the administration of any treatments. However, it cannot be assumed that random assignment will always succeed in creating equivalent groups. In studies where equivalence among groups prior to treatment is important, the researcher should collect data on behavior prior to treatment to verify whether the groups are equivalent.1
The same factors that affect the size of the t ratio (see Chapter 5) also affect the size of F ratios: the distances among group means, the amount of variability of scores within each group, and the number, n, of participants per group. Other things being equal, an F ratio tends to be larger when there are large between-group differences among dosage levels (or participant characteristics). For example, a study that used three widely spaced noise levels in decibels (dB) as the treatment variable, such as 35, 65, and 95 dB, would produce larger differences in group means on arousal, and a larger F ratio, than a study that used three noise levels that are closer together, such as 60, 65, and 70 dB. Similar to the t test, the F ratio involves a comparison of between-group and within-group variability; the selection of homogeneous participants, standardization of testing conditions, and control over extraneous variables will tend to reduce the magnitude of within-group variability of scores, which in turn tends to produce a larger F (or t) ratio.
Finally, the F test involves degrees of freedom (df) terms based
on
number of participants (and also number of groups); the larger
the number of
participants, other things being equal, the larger the F ratio will
tend to be. A
researcher can usually increase the size of an F (or t) ratio by
increasing the
differences in the dosage levels of treatments given to groups
and/or reducing
the effects of extraneous variables through experimental
control, and/or
increasing the number of participants. In studies that involve
comparisons of
naturally occurring groups, such as age groups, a similar
principle applies: A
researcher is more likely to see age-related changes in mental
processing
speed in a study that compares ages 20, 50, and 80 than in a
study that
compares ages 20, 25, and 30.
6.5 Data Screening
A box and whiskers plot was set up to examine the distribution
of anxiety
scores within each of the four stress intervention groups; this
plot appears in
Figure 6.2. Examination of this box and whiskers plot indicates
that there are
no outlier scores in any of the four groups. Other preliminary
data screening
that could be done includes a histogram for all the anxiety
scores in the entire
sample. The Levene test to assess possible violations of the
homogeneity of
variance assumption is reported in Section 6.11.
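These screening steps can be sketched in Python; the scores below are invented placeholders (the chapter's anxiety data are not reproduced here), and scipy.stats.levene supplies the homogeneity-of-variance check:

```python
# Sketch of preliminary screening for a four-group design.
# NOTE: these scores are made-up placeholders, not the chapter's data.
from scipy import stats

groups = [
    [10, 10, 12, 13, 9, 11, 14],
    [13, 15, 14, 16, 12, 15, 13],
    [12, 14, 13, 15, 16, 11, 14],
    [16, 18, 17, 15, 19, 16, 18],
]

# Levene test: H0 is that the group variances are equal
stat, p = stats.levene(*groups)
print(f"Levene statistic = {stat:.2f}, p = {p:.3f}")
if p < .05:
    print("Possible violation of the homogeneity of variance assumption")
```

A boxplot of each group (e.g., with matplotlib) serves the same purpose as Figure 6.2 for spotting outliers.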
Figure 6.2 Boxplot Showing Distribution of Anxiety Scores
Within
Each of the Four Treatment Groups in the Stress/Anxiety
Experiment
Table 6.1 Data for the Partition of Heart Rate (HR) Scores
NOTES: DevGrand = HR – Grand Mean. DevGroup or residual
= HR – Group Mean. Effect = Group
Mean – Grand Mean. Reproduced Score = Grand Mean + Effect
+ DevGroup.
6.6 Partition of Scores Into Components
When we do a one-way ANOVA, the analysis involves partition
of each
score into two components: a component of the score that is
associated with
group membership and a component of the score that is not
associated with
group membership. The sums of squares (SS) summarize how
much of the
variance in scores is associated with group membership and how
much is not
related to group membership. Examining the proportion of
variance that is
associated with treatment group membership is one way of
approaching the
general research question, Why is there variance in the scores
on the outcome
variable?
To illustrate the partition of scores into components, consider
the
hypothetical data in Table 6.1. Suppose that we measure HR for
each of 6
persons in a small sample and obtain the following set of
scores: HR scores
of 81, 85, and 92 for the 3 female participants and HR scores of
70, 63, and
68 for the 3 male participants. In this example, the X group
membership
variable is gender, coded 1 = female and 2 = male, and the Y
quantitative
outcome variable is HR. We can partition each HR score into a
component
that is related to group membership (i.e., related to gender) and
a component
that is not related to group membership. In a textbook
discussion of ANOVA
using just two groups, the factor that represents gender might be
called Factor
A.
We will denote the HR score for person j in group i as Yij. For
example,
Cathy’s HR of 92 corresponds to Y13, the score for the third
person in Group
1. We will denote the grand mean (i.e., the mean HR for all N =
6 persons in
this dataset) as MY; for this set of scores, the value of MY is
76.5. We will
denote the mean for group i as Mi. For example, the mean HR
for the female
group (Group 1) is M1 = 86; the mean HR for the male group
(Group 2) is M2
= 67. Once we know the individual scores, the grand mean, and
the group
means, we can work out a partition of each individual HR score
into two
components.
The question we are trying to answer is, Why is there variance
in HR? In
other words, why do some people in this sample have HRs that
are higher
than the sample mean of 76.5, while others have HRs that are
lower than
76.5? Is gender one of the variables that predicts whether a
person’s HR will
be relatively high or low? For each person, we can compute a
deviation of
that person’s individual Yij score from the grand mean; this
tells us how far
above (or below) the grand mean of HR each person’s score
was. Note that in
Table 6.1, the value of the grand mean is the same for all 6
participants. The
value of the group mean was different for members of Group i =
1 (females)
than for members of Group i = 2 (males). We can use these
means to
compute the following three deviations from means for each
person:

DevGrand = Yij − MY,
DevGroup = Yij − Mi,
Effect = Mi − MY.
The total deviation of the individual score from the grand mean,
DevGrand, can be divided into two components: the deviation of
the
individual score from the group mean, DevGroup, and the
deviation of the
group mean from the grand mean, Effect:

(Yij − MY) = (Yij − Mi) + (Mi − MY). (6.4)
For example, look at the line of data for Cathy in Table 6.1.
Cathy’s
observed HR was 92. Cathy’s HR has a total deviation from the
grand mean
of (92 − 76.5) = +15.5; that is, Cathy’s HR was 15.5 beats per
minute (bpm)
higher than the grand mean for this set of data. Compared with
the group
mean for the female group, Cathy’s HR had a deviation of (92 −
86) = +6;
that is, Cathy’s HR was 6 bpm higher than the mean HR for the
female
group. Finally, the “effect” component of Cathy’s score is found
by
subtracting the grand mean (MY) from the mean of the group to
which Cathy
belongs (Mfemale or M1): (86 − 76.5) = +9.5. This value of 9.5
tells us that the
mean HR for the female group (86) was 9.5 bpm higher than the
mean HR
for the entire sample (and this effect is the same for all
members within each
group). We could, therefore, say that the “effect” of
membership in the
female group is to increase the predicted HR score by 9.5.
(Note, however,
that this example involves comparison of naturally occurring
groups, males
vs. females, and so we should not interpret a group difference as
evidence of
causality but merely as a description of group differences.)
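The arithmetic just described can be sketched in Python for the Table 6.1 data; each HR score is reproduced as grand mean + effect + residual:

```python
# Partition of each HR score (Table 6.1): DevGrand = Effect + DevGroup.
data = {"female": [81, 85, 92], "male": [70, 63, 68]}

all_scores = [y for ys in data.values() for y in ys]
grand_mean = sum(all_scores) / len(all_scores)      # 76.5

for group, scores in data.items():
    group_mean = sum(scores) / len(scores)          # 86 (female), 67 (male)
    effect = group_mean - grand_mean                # +9.5 or -9.5
    for y in scores:
        dev_group = y - group_mean                  # residual component
        # reproduce the score from its components
        assert y == grand_mean + effect + dev_group
        print(f"{group}: {y} = {grand_mean} + ({effect:+}) + ({dev_group:+})")
```

For Cathy's score this prints 92 = 76.5 + (+9.5) + (+6.0), matching the deviations worked out above.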
Another way to look at these numbers is to set up a predictive
equation
based on a theoretical model. When we do a one-way ANOVA,
we seek to
predict each individual score (Yij) on the quantitative outcome
variable from
the following theoretical components: the population mean
(denoted by μ),
the “effect” for group i (often denoted by αi, because α is the
Greek letter that
corresponds to A, which is the name usually given to a single
factor), and the
residual associated with the score for person j in group i
(denoted by εij).
Estimates for μ, αi, and εij can be obtained from the sample data
as follows:

μ is estimated by MY, (6.5)
αi is estimated by Mi − MY, (6.6)
εij is estimated by Yij − Mi. (6.7)
Note that the deviations that are calculated as estimates for αi
and εij (in
Equations 6.6 and 6.7) are the same components that appeared
in Equation
6.4. An individual observed HR score Yij can be represented as
a sum of these
theoretical components, as follows:

Yij = μ + αi + εij. (6.8)
In words, Equation 6.8 says that we can predict (or reconstruct)
the observed
score for person j in group i by taking the grand mean μ, adding
the “effect”
αi that is associated with membership in group i, and finally,
adding the
residual εij that tells us how much individual j’s score differed
from the mean
of the group that person j belonged to.
We can now label the terms in Equation 6.4 using this more formal
notation:

(Yij − MY) = (Mi − MY) + (Yij − Mi) = αi + εij. (6.9)
In other words, we can divide the total deviation of each
person’s score from
the grand mean into two components: αi, the part of the score
that is
associated with or predictable from group membership (in this
case, gender),
and εij, the part of the score that is not associated with or
predictable from
group membership (the part of the HR score that is due to all
other variables
that influence HR, such as smoking, anxiety, drug use, level of
fitness, health,
etc.). In words, then, Equation 6.9 says that we can predict each
person’s HR
from the following information: Person j’s HR = grand mean +
effect of
person j’s gender on HR + effects of all other variables that
influence person
j’s HR, such as person j’s anxiety, health, drug use, fitness, and
anxiety.
The collective influence of “all other variables” on scores on
the outcome
variable, HR in this example, is called the residual, or “error.”
In most
research situations, researchers hope that the components of
scores that
represent group differences (in this case, gender differences in
HR) will be
relatively large and that the components of scores that represent
within-group
variability in HR (in this case, differences in HR among the females and
among the males and thus, differences due to all variables other than
gender) will be relatively small.
When we do an ANOVA, we summarize the information about
the sizes
of these two deviations or components (Yij − Mi) and (Mi −
MY) across all the
scores in the sample. We cannot summarize information just by
summing
these deviations; recall from Chapter 2 that the sum of
deviations of scores
from a sample mean always equals 0, so ∑(Yij − Mi) = 0 and
∑(Mi − MY) = 0.
When we summarized information about distances of scores
from a sample
mean by computing a sample variance, we avoided this problem
by squaring
the deviations from the mean prior to summing them. The same
strategy is
applied in ANOVA. ANOVA begins by computing the following
deviations
for each score in the data-set (see Table 6.1 for an empirical
example; these
terms are denoted DevGrand, DevGroup, and Effect in Table
6.1).
(Yij − MY) = (Yij − Mi) + (Mi − MY).
To summarize information about the magnitudes of these score
components across all the participants in the dataset, we square
each term and
then sum the squared deviations across all the scores in the
entire dataset:

Σ(Yij − MY)² = Σ(Yij − Mi)² + Σ(Mi − MY)². (6.10)
These sums of squared deviations are given the following
names: SStotal,
names: SStotal,
sum of squares total, for the sum of the squared deviations of
each score from
the grand mean; SSwithin, or residual, for the sum of squared
deviations of
each score from its group mean, which summarizes information
about within-
group variability of scores; and SSbetween, or treatment, which
summarizes
information about the distances among (variability among)
group means.
Note that, usually, when we square an equation of the form a = (b + c)
(as in [Yij − MY]² = ([Yij − Mi] + [Mi − MY])²), the result would be of
the form a² = b² + c² + 2bc. However, the cross-product term (which would
correspond
to 2bc) is missing in Equation 6.10. This cross-product term has
an expected
value of 0 because the (Yij − Mi) and (Mi − MY) deviations are
independent of
(i.e., uncorrelated with) each other, and therefore, the expected
value of the
cross product between these terms is 0.
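For the Table 6.1 HR data, the partition of SStotal can be verified numerically; this sketch confirms that the squared components add up with no cross-product term left over:

```python
# SStotal = SSbetween + SSwithin for the Table 6.1 HR data.
groups = [[81, 85, 92], [70, 63, 68]]

scores = [y for g in groups for y in g]
grand = sum(scores) / len(scores)                    # grand mean = 76.5

ss_total = sum((y - grand) ** 2 for y in scores)
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)

print(ss_total, ss_between, ss_within)               # 629.5 541.5 88.0
assert ss_total == ss_between + ss_within            # no cross-product term
```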
This example of the partition of scores into components and the
partition
of SStotal into SSbetween and SSwithin had just two groups,
but this type of
partition can be done for any number of groups. Researchers
usually hope
that SSbetween will be relatively large because this would be
evidence that the
group means are far apart and that the different participant
characteristics or
different types or amounts of treatments for each group are,
therefore,
predictively related to scores on the Y outcome variable.
This concept of variance partitioning is a fundamental and
important one.
In other analyses later on (such as Pearson correlation and
bivariate
regression), we will come across a similar type of partition;
each Y score is
divided into a component that is predictable from the X variable
and a
component that is not predictable from the X variable.
It may have occurred to you that if you knew each person’s
scores on
physical fitness, drug use, body weight, and anxiety level, and
if these
variables are predictively related to HR, then you could use
information
about each person’s scores on these variables to make a more
accurate
prediction of HR. This is exactly the reasoning that is used in
analyses such
as multiple regression, where we predict each person’s score on
an outcome
variable, such as anxiety, by additively combining the estimated
effects of
numerous predictor variables. It may have also occurred to you
that, even if
we know several factors that influence HR, we might be missing
some
important information; we might not know, for example, that
the individual
person has a family history of elevated HR. Because some
important
variables are almost always left out, the predicted score that we
generate from
a simple additive model (such as grand mean + gender effect +
effects of
smoking = predicted HR) rarely corresponds exactly to the
person’s actual
HR. The difference between the actual HR and the HR predicted
from the
model is called a residual or error. In ANOVA, we predict
people’s scores by
adding (or subtracting) the number of points that correspond to
the effect for
their treatment group to the grand mean.
Recall that for each subject in the caffeine group in the t test
example in
the preceding chapter, the best prediction for that person’s HR
was the mean
HR for all people in the caffeine group. The error or residual
corresponded to
the difference between each participant’s actual HR and the
group mean, and
this residual was presumably due to factors that uniquely
affected that
individual over and above responses to the effects of caffeine.
6.7 Computations for the One-Way Between-S ANOVA
The formulas for computation of the sums of squares, mean
squares, and F
ratio for the one-way between-S ANOVA are presented in this
section; they
are applied to the data from the hypothetical stress/anxiety
study that appear
in Figure 6.1.
6.7.1 Comparison Between the Independent Samples t Test and
One-Way Between-S ANOVA
The one-way between-S ANOVA is a generalization of the
independent
samples t test. The t ratio provides a test of the null hypothesis
that the two population means are equal. For the independent samples
t test, the null hypothesis has the following form:

H0: μ1 = μ2.
For a one-way ANOVA with k groups, the null hypothesis is as
follows:

H0: μ1 = μ2 = … = μk.
The computation of the independent samples t test requires that
we find
the following for each group: M, the sample mean; s, the sample
standard
deviation; and n, the number of scores in each group (see
Section 6 of
Chapter 5). For a one-way ANOVA, the same computations are
performed;
the only difference is that we have to obtain (and summarize)
this information
for k groups (instead of only two groups as in the t test).
Table 6.2 summarizes the information included in the
computation of the
independent samples t test and the one-way ANOVA to make it
easier to see
how similar these analyses are. For each analysis, we need to
obtain
information about differences between group means (for the t
test, we
compute M1 − M2; for the ANOVA, because we have more than
two groups,
we need to find the variance of the group means M1, M2, …,
Mk). The
variance among the k group means is called MSbetween; the
formulas to
compute this and other intermediate terms in ANOVA appear
below. We
need to obtain information about the amount of variability of
scores within
each group; for both the independent samples t test and
ANOVA, we can
begin by computing an SS term that summarizes the squared
distances of all
the scores in each group from their group mean. We then
convert that
information into a summary about the amount of variability of
scores within
groups, summarized across all the groups (for the independent
samples t test,
a summary of within-group score variability is provided by the pooled
variance, s²p; for ANOVA, a
summary of within-group score variability is called MSwithin).
Table 6.2 Comparison Between the Independent Samples t Test and
and
One-Way Between-Subjects (Between-S) ANOVA
NOTES: The F ratio MSbetween/MSwithin can be thought of as a ratio of
between-group variance to within-group variance; the F ratio can also be
interpreted as information about the relative magnitude of treatment
effects compared with the effects of error variance.
For two groups, t² is equivalent to F.
Any time that we compute a variance (or a mean square), we
sum squared
deviations of scores from group means. As discussed in Chapter
2, when we
look at a set of deviations of n scores from their sample mean,
only the first n
− 1 of these deviations are free to vary. Because the sum of all
the n
deviations from a sample mean must equal 0, once we have the
first n − 1
deviations, the value of the last deviation is not “free to vary”;
it must have
whatever value is needed to make the sum of the deviations
equal to 0. For
the t ratio, we computed a variance for the denominator, and we
need a single
df term that tells us how many independent deviations from the
mean the
divisor of the t test was based on (for the independent samples t
test, df = n1 +
n2 − 2, where n1 and n2 are the number of scores in Groups 1
and 2. The total
number of scores in the entire study is equal to n1 + n2, so we
can also say
that the df for the independent samples t test = N − 2).
For an F ratio, we compare MSbetween with MSwithin; we need
to have a
separate df for each of these mean square terms to indicate how
many
independent deviations from the mean each MS term was based
on.
The test statistic for the independent samples t test is

t = (M1 − M2)/SE(M1 − M2),

where SE(M1 − M2) is the estimated standard error of the difference
between the sample means.
The test statistic for the one-way between-S ANOVA is F =
MSbetween
/MSwithin. In each case, we have information about differences
between or
among group means in the numerator and information about
variability of
scores within groups in the denominator. In a well-designed
experiment,
differences between group means are attributed primarily to the
effects of the
manipulated independent variable X, whereas variability of
scores within
groups is attributed to the effects of all other variables,
collectively called
error. In most research situations, researchers hope to obtain
large enough
values of t and F to be able to conclude that there are
statistically significant
differences among group means. The method for the
computation of these
between- and within-group mean squares is given in detail in
the following
few sections.
The by-hand computation for one-way ANOVA (with k groups
and a
total of N observations) involves the following steps. Complete
formulas for
the SS terms are provided in the following sections.
1. Compute SSbetween groups, SSwithin groups, and SStotal.
2. Compute MSbetween by dividing SSbetween by its df, k − 1.
3. Compute MSwithin by dividing SSwithin by its df, N − k.
4. Compute an F ratio: MSbetween/MSwithin.
5. Compare this F value obtained with the critical value of F
from a table
of the F distribution with (k − 1) and (N − k) df (using the table
in
Appendix C that corresponds to the desired alpha level; for
example, the
first table provides critical values for α = .05). If the F value
obtained
exceeds the tabled critical value of F for the predetermined
alpha level
and the available degrees of freedom, reject the null hypothesis
that all
the population means are equal.
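The five steps above can be carried out in a few lines of Python; here the SS values reported later in the chapter for the stress/anxiety data (SSbetween ≈ 182, SSwithin = 122) are taken as given:

```python
# One-way between-S ANOVA from summary SS values (stress/anxiety data).
ss_between, ss_within = 182.0, 122.0
k, N = 4, 28                              # 4 groups, 28 scores in all

df_between, df_within = k - 1, N - k      # 3 and 24
ms_between = ss_between / df_between      # about 60.7
ms_within = ss_within / df_within         # about 5.08
F = ms_between / ms_within                # about 11.93 (11.94 in the text,
                                          # which uses less-rounded SS values)
print(f"F({df_between}, {df_within}) = {F:.2f}")
```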
6.7.2 Summarizing Information About Distances Between
Group
Means: Computing MSbetween
The following notation will be used:
Let k be the number of groups in the study.
Let n1, n2, …, nk be the number of scores in Groups 1, 2, …, k.
Let Yij be the score of subject number j in group i; i = 1, 2, …,
k.
Let M1, M2, …, Mk be the means of scores in Groups 1, 2, …,
k.
Let N be the total N in the entire study; N = n1 + n2 + … + nk.
Let MY be the grand mean of all scores in the study (i.e., the
total of
all the individual scores, divided by N, the total number of
scores).
Once we have calculated the means of each individual group
(M1, M2, …,
Mk) and the grand mean MY, we can summarize information
about the
distances of the group means, Mi, from the grand mean, MY, by computing
SSbetween as follows:

SSbetween = n1 × (M1 − MY)² + n2 × (M2 − MY)² + … + nk × (Mk − MY)².
For the hypothetical data in Figure 6.1, the mean anxiety scores
for
Groups 1 through 4 were as follows: M1 = 9.86, M2 = 14.29,
M3 = 13.57, and
M4 = 17.00. The grand mean on anxiety, MY, is 13.68. Each
group had n = 7
scores. Therefore, for this study,

SSbetween = 7 × [(9.86 − 13.68)² + (14.29 − 13.68)² + (13.57 − 13.68)² + (17.00 − 13.68)²] ≈ 182.

SSbetween ≈ 182 (this agrees with the value of SSbetween in the SPSS output
that is presented in Section 6.11 except for a small amount of
rounding error).
6.7.3 Summarizing Information About Variability of Scores
Within Groups: Computing MSwithin
To summarize information about the variability of scores within
each
group, we compute MSwithin. For each group, for groups
numbered from i = 1,
2, …, k, we first find the sum of squared deviations of scores
relative to each
group mean, SSi. The SS for scores within group i is found by
taking this
sum:

SSi = Σj (Yij − Mi)².
That is, for each of the k groups, find the deviation of each
individual score
from the group mean; square and sum these deviations for all
the scores in
the group. These within-group SS terms for Groups 1, 2, …, k
are summed
across the k groups to obtain the total SSwithin:

SSwithin = SS1 + SS2 + … + SSk.
For this dataset, we can find the SS term for Group 1 (for
example) by
taking the sum of the squared deviations of each individual
score in Group 1
from the mean of Group 1, M1. The values are shown for by-
hand
computations; it can be instructive to do this as a spreadsheet,
entering the
value of the group mean for each participant as a new variable
and computing
the deviation of each score from its group mean and the squared
deviation for
each participant.
For the four groups of scores in the dataset in Figure 6.1, these
are the values
of SS for each group: SS1 = 26.86, SS2 = 27.43, SS3 = 41.71,
and SS4 = 26.00.
Thus, the total value of SSwithin for this set of data is SSwithin
= SS1 + SS2 +
SS3 + SS4 = 26.86 + 27.43 + 41.71 + 26.00 = 122.00.
We can also find SStotal; this involves taking the deviation of
every
individual score from the grand mean, squaring each deviation,
and summing
the squared deviations across all scores and all groups:

SStotal = Σi Σj (Yij − MY)².
The grand mean MY = 13.68. This SS term includes 28 squared
deviations,
one for each participant in the dataset, as follows:
SStotal = (10 − 13.68)² + (10 − 13.68)² + (12 − 13.68)² + … + (18 − 13.68)² =
304.
Recall that Equation 6.10 showed that SStotal can be partitioned
into SS
terms that represent between- and within-group differences
among scores;
therefore, the SSbetween and SSwithin terms just calculated
will sum to SStotal:

SStotal = SSbetween + SSwithin.
For these data, SStotal = 304, SSbetween = 182, and SSwithin =
122, so the sum of
SSbetween and SSwithin equals SStotal (due to rounding error,
these values differ
slightly from the values that appear on the SPSS output in
Section 6.11).
6.7.4 The F Ratio: Comparing MSbetween With MSwithin
An F ratio is a ratio of two mean squares. A mean square is the
ratio of a
sum of squares to its degrees of freedom. Note that the formula
for a sample
variance back in Chapter 2 was just the sum of squares for a
sample divided
by the degrees of freedom that correspond to the number of
independent
deviations that were used to compute the SS. From Chapter 2,
Var(X) = SS/df.
The df terms for the two MS terms in a one-way between-S
ANOVA are
based on k, the number of groups, and N, the total number of
scores in the
entire study (where N = n1 + n2 + … + nk). The between-group
SS was
obtained by summing the deviations of each of the k group
means from the
grand mean; only the first k − 1 of these deviations are free to
vary, so the
between-groups df = k − 1, where k is the number of groups.
In ANOVA, the mean square between groups is calculated by dividing
SSbetween by its degrees of freedom:

MSbetween = SSbetween/(k − 1).
For the data in the hypothetical stress/anxiety study, SSbetween
= 182, dfbetween
= 4 − 1 = 3, and MSbetween = 182/3 = 60.7.
The df for each SS within-group term is given by n − 1, where n
is the
number of participants in each group. Thus, in this example,
SS1 had n − 1 = 6 df. When we form SSwithin, we add up SS1 + SS2 + … +
SSk. There are (n
− 1) df associated with each SS term, and there are k groups, so
the total
dfwithin = k × (n − 1). This can also be written as

dfwithin = N − k,
where N = the total number of scores = n1 + n2 + … + nk and k
is the number
of groups.
We obtain MSwithin by dividing SSwithin by its corresponding
df:

MSwithin = SSwithin/(N − k).
For the hypothetical stress/anxiety data in Figure 6.1, MSwithin
= 122/24 =
5.083.
Finally, we can set up a test statistic for the null hypothesis H0:
μ1 = μ2 =
… = μk by taking the ratio of MSbetween to MSwithin:

F = MSbetween/MSwithin.
For the stress/anxiety data, F = 60.702/5.083 = 11.94. This F
ratio is
evaluated using the F distribution with (k − 1) and (N − k) df.
For this dataset,
k = 4 and N = 28, so df for the F ratio are 3 and 24.
An F distribution has a shape that differs from the normal or t
distribution. Because an F is a ratio of two mean squares and
MS cannot be
less than 0, the minimum possible value of F is 0. On the other
hand, there is
no fixed upper limit for the values of F. Therefore, the
distribution of F tends
to be positively skewed, with a lower limit of 0, as in Figure
6.3. The reject
region for significance tests with F ratios consists of only one
tail (at the
upper end of the distribution). The first table in Appendix C
shows the
critical values of F for α = .05. The second and third tables in
Appendix C
provide critical values of F for α = .01 and α = .001. In the
hypothetical study
of stress and anxiety, the F ratio has df equal to 3 and 24. Using
α = .05, the
critical value of F from the first table in Appendix C with df = 3
in the
numerator (across the top of the table) and df = 24 in the
denominator (along
the left-hand side of the table) is 3.01. Thus, in this situation,
the α = .05
decision rule for evaluating statistical significance is to reject
H0 when values
of F > +3.01 are obtained. A value of 3.01 cuts off the top 5%
of the area in
the right-hand tail of the F distribution with df equal to 3 and
24, as shown in
Figure 6.3. The obtained F = 11.94 would therefore be judged
statistically
significant.
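The table lookup can be replicated with SciPy rather than Appendix C (a sketch; scipy.stats.f provides the F distribution):

```python
# Critical F and exact p value for df = 3 and 24, alpha = .05.
from scipy.stats import f

f_crit = f.ppf(0.95, dfn=3, dfd=24)   # upper 5% cutoff, about 3.01
p_value = f.sf(11.94, dfn=3, dfd=24)  # area beyond the obtained F
print(f"critical F = {f_crit:.2f}, p = {p_value:.5f}")
```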
Figure 6.3 F Distribution With 3 and 24 df
6.7.5 Patterns of Scores Related to the Magnitudes of
MSbetween and
MSwithin
It is important to understand what information about pattern in
the data is
contained in these SS and MS terms. SSbetween is a function of
the distances
among the group means (M1, M2, …, Mk); the farther apart
these group means
are, the larger SSbetween tends to be. Most researchers hope to
find significant
differences among groups, and therefore, they want SSbetween
(and F) to be
relatively large. SSwithin is the total of squared within-group
deviations of
scores from group means. SSwithin would be 0 in the unlikely
event that all
scores within each group were equal to each other. The greater
the variability
of scores within each group, the larger the value of SSwithin.
Consider the example shown in Table 6.3, which shows
hypothetical data
for which SSbetween would be 0 (because all the group means
are equal);
however, SSwithin is not 0 (because the scores vary within
groups). Table 6.4
shows data for which SSbetween is not 0 (group means differ)
but SSwithin is 0
(scores do not vary within groups). Table 6.5 shows data for
which both
SSbetween and SSwithin are nonzero. Finally, Table 6.6 shows
a pattern of
scores for which both SSbetween and SSwithin are 0.
Table 6.3 Data for Which SSbetween Is 0 (Because All the
Group Means
Are Equal), but SSwithin Is Not 0 (Because Scores Vary Within
Groups)
Table 6.4 Data for Which SSbetween Is Not 0 (Because Group
Means
Differ), but SSwithin Is 0 (Because Scores Do Not Vary Within
Groups)
Table 6.5 Data for Which SSbetween and SSwithin Are Both
Nonzero
Table 6.6 Data for Which Both SSwithin and SSbetween Equal
0
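The four patterns can be reproduced with small invented datasets (these scores are illustrative, not the values in Tables 6.3 through 6.6):

```python
# SSbetween and SSwithin for four toy patterns of scores.
def ss_parts(groups):
    scores = [y for g in groups for y in g]
    grand = sum(scores) / len(scores)
    ss_b = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_w = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    return ss_b, ss_w

print(ss_parts([[4, 5, 6], [4, 5, 6]]))    # equal means: SSb = 0, SSw > 0
print(ss_parts([[5, 5, 5], [9, 9, 9]]))    # no spread within: SSw = 0
print(ss_parts([[4, 5, 6], [8, 9, 10]]))   # both nonzero
print(ss_parts([[5, 5, 5], [5, 5, 5]]))    # both 0
```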
6.7.6 Expected Value of F When H0 Is True
The population variances that are estimated by this ratio of
sample mean
squares (based on the algebra of expected mean squares2) are as
follows:

F estimates (σ²ε + n × σ²α)/σ²ε,

where σ²α is the population variance of the alpha group effects, that is, the
amount of variance in the Y scores that is associated with or predictable from
the group membership variable, and σ²ε is the population error variance, that
is, the variance in scores that is due to all variables other than the group
membership variable or manipulated treatment variable. Earlier, the null
hypothesis for a one-way ANOVA was given as
H0: μ1 = μ2 = … = μk.
An alternative way to state this null hypothesis is that all the αi
effects are equal to 0 (and therefore equal to each other): α1 = α2 = … = αk = 0.
Therefore, another form of the null hypothesis for ANOVA is as follows:

H0: σ²α = 0.
It follows that if H0 is true, then the expected value of the F
ratio is close to 1.
If F is much greater than 1, we have evidence that σ²α may be
larger than 0.
How is F distributed across thousands of samples? First of all,
note that
the MS terms in the F ratio must be positive (MSbetween can be
0 in rare cases
where all the group means are exactly equal; MSwithin can be 0
in even rarer
cases where all the scores within each group are equal; but
because these are
sums of squared terms, neither MS can be negative).
The sums of squared independent normal variables have a chi-
square
distribution; thus, the distributions of each of the mean squares
are chi-square
variates. An F distribution describes a ratio of two independent chi-square variates, each divided by its df.
Like chi-
square, the graph of an F distribution has a lower tail that ends
at 0, and it
tends to be skewed with a long tail off to the right. (See Figure
6.3 for a graph
of the distribution of F with df = 3 and 24.) We reject H0 for
large F values
(which cut off the upper 5% in the right-hand tail). Thus, F is
almost always
treated as a one-tailed test. (It is possible to look at the lower
tail of the F
distribution to evaluate whether the obtained sample F is too
small for it to be
likely to have arisen by chance, but this is rarely done.)
Note that for the two-group situation, F is equivalent to t2. Both
F and t
are ratios of an estimate of between-group differences to within-
group
differences; between-group differences are interpreted as being
primarily due
to the manipulated independent variable, while within-group
variability is
due to the effects of extraneous variables. Both t and F are
interpretable as
signal-to-noise ratios, where the “signal” is the effect of the
manipulated
independent variable; in most research situations, we hope that
this term will
be relatively large. The size of the signal is evaluated relative to
the
magnitude of “noise,” the variability due to all other extraneous
variables; in
most research situations, we hope that this will be relatively
small. Thus, in
most research situations, we hope for values of t or F that are
large enough
for the null hypothesis to be rejected. A significant F can be
interpreted as
evidence that the between-groups independent variable had a
detectable
effect on the outcome variable. In rare circumstances,
researchers hope to
affirm the null hypothesis, that is, to demonstrate that the
independent
variable has no detectable effect, but this claim is actually quite
difficult to
prove.
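The equivalence t² = F for two groups can be checked directly, here with SciPy and the Table 6.1 HR data:

```python
# For two groups, the pooled-variance t and the one-way F satisfy F = t**2.
from scipy import stats

females = [81, 85, 92]
males = [70, 63, 68]

t, _ = stats.ttest_ind(females, males)   # equal_var=True by default
F, _ = stats.f_oneway(females, males)
print(f"t = {t:.3f}, t**2 = {t**2:.3f}, F = {F:.3f}")
assert abs(t**2 - F) < 1e-9
```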
If we obtain an F ratio large enough to reject H0, what can we
conclude?
The alternative hypothesis is not that all group means differ
from each other
significantly but that there is at least one (and possibly more
than one)
significant difference between group means:

H1: μi ≠ μj for at least one pair of groups i and j.
A significant F tells us that there is probably at least one
significant
difference among group means; by itself, it does not tell us
where that
difference lies. It is necessary to do additional tests to identify
the one or
more significant differences (see Section 6.10).
6.7.7 Confidence Intervals (CIs) for Group Means
Once we have information about means and variances for each
group, we
can set up a CI around the mean for each group (using the
formulas from
Chapter 2) or a CI around any difference between a pair of
group means (as
described in Chapter 5). SPSS provides a plot of group means
with error bars
that represent 95% CIs (the SPSS menu selections to produce
this graph
appear in Figures 6.12 and 6.13, and the resulting graph appears
in Figure
6.14; these appear at the end of the chapter along with the
output from the
Results section for the one-way ANOVA).
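The CI computation for a single group mean can be sketched as follows. This is a minimal illustration (not SPSS output); the scores are made up, and the critical t value is an input that must be looked up from a t table for the chosen df and alpha level:

```python
import math

def group_mean_ci(scores, t_crit):
    """CI for one group mean: M +/- t_crit * (s / sqrt(n)).
    t_crit is the two-tailed critical t for the chosen df and
    alpha; it is supplied from a table, not computed here."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
    half_width = t_crit * s / math.sqrt(n)
    return m - half_width, m + half_width

# Hypothetical group of n = 7 scores; t_crit = 2.447 is the
# two-tailed .05 critical t for df = n - 1 = 6.
lo, hi = group_mean_ci([10, 12, 9, 11, 10, 8, 10], 2.447)
```

For these scores (M = 10), the 95% CI runs from about 8.81 to about 11.19.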
6.8 Effect-Size Index for One-Way Between-S ANOVA
By comparing the sizes of these SS terms that represent
variability of scores
between and within groups, we can make a summary statement
about the
comparative size of the effects of the independent and
extraneous variables.
The proportion of the total variability (SStotal) that is due to between-group differences is given by

η2 = SSbetween/SStotal.
In the context of a well-controlled experiment, these between-
group
differences in scores are, presumably, primarily due to the
manipulated
independent variable; in a nonexperimental study that compares
naturally
occurring groups, this proportion of variance is reported only to
describe the
magnitudes of differences between groups, and it is not
interpreted as
evidence of causality. An eta squared (η2) is an effect-size
index given as a
proportion of variance; if η2 = .50, then 50% of the variance in
the Yij scores
is due to between-group differences. This is the same eta
squared that was
introduced in the previous chapter as an effect-size index for
the independent
samples t test; verbal labels that can be used to describe effect
sizes are
provided in Table 5.2. If the scores in a two-group t test are
partitioned into
components using the logic just described here and then
summarized by
creating sums of squares, the η2 value obtained will be identical
to the η2 that
was calculated from the t and df terms.
It is also possible to calculate eta squared from the F ratio and
its df; this
is useful when reading journal articles that report F tests
without providing
effect-size information:

η2 = (dfbetween × F)/(dfbetween × F + dfwithin).
An eta squared is interpreted as the proportion of variance in
scores on the Y
outcome variable that is predictable from group membership
(i.e., from the
score on X, the predictor variable). Suggested verbal labels for
eta-squared
effect sizes are given in Table 5.2.
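The conversion from a reported F and its df to eta squared, η2 = (dfbetween × F)/(dfbetween × F + dfwithin), is simple enough to sketch directly:

```python
def eta_squared_from_f(f, df_between, df_within):
    """Recover eta squared from a reported F test:
    eta^2 = (df_between * F) / (df_between * F + df_within)."""
    return (df_between * f) / (df_between * f + df_within)

# The chapter's Results section reports F(3, 24) = 11.94:
eta_sq = eta_squared_from_f(11.94, 3, 24)
```

This reproduces the chapter's effect size of about η2 = .60.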
An alternative effect-size measure sometimes used in ANOVA
is called
omega squared (ω2) (see W. Hays, 1994). The eta-squared index
describes
the proportion of variance due to between-group differences in
the sample,
but it is a biased estimate of the proportion of variance that is
theoretically
due to differences among the populations. The ω2 index is
essentially a
(downwardly) adjusted version of eta squared that provides a
more
conservative estimate of variance among population means;
however, eta
squared is more widely used in statistical power analysis and as
an effect-size
measure in the literature.
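Hays's adjustment can be sketched as follows; the SS values in the example call are made up for illustration, and the key point is that ω2 comes out smaller than the corresponding η2:

```python
def omega_squared(ss_between, ss_total, df_between, ms_within):
    """Hays's omega squared, a downwardly adjusted eta squared:
    omega^2 = (SS_between - df_between * MS_within)
              / (SS_total + MS_within)."""
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

# Hypothetical source-table values for a two-group design:
w2 = omega_squared(ss_between=32.0, ss_total=42.0,
                   df_between=1, ms_within=10 / 6)
eta_sq = 32.0 / 42.0  # eta squared from the same SS terms
```

Here η2 is about .76 while ω2 is about .69, illustrating the downward adjustment.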
6.9 Statistical Power Analysis for One-Way Between-S
ANOVA
Table 6.7 is an example of a statistical power table that can be
used to make
decisions about sample size when planning a one-way between-
S ANOVA
with k = 3 groups and α = .05. Using Table 6.7, given the
number of groups,
the number of participants, the predetermined alpha level, and
the anticipated
population effect size estimated by eta squared, the researcher
can look up the
minimum n of participants per group that is required to obtain
various levels
of statistical power. The researcher needs to make an educated
guess: How
large an effect is expected in the planned study? If similar
studies have been
conducted in the past, the eta-squared values from past research
can be used
to estimate effect size; if not, the researcher may have to make a
guess based
on less exact information. The researcher chooses the alpha
level (usually
.05), calculates dfbetween (which equals k − 1, where k is the
number of groups
in the study), and decides on the desired level of statistical
power (usually
.80, or 80%). Using this information, the researcher can use the
tables in
Cohen (1988) or in Jaccard and Becker (2002) to look up the
minimum
sample size per group that is needed to achieve the power of
80%. For
example, using Table 6.7, for an alpha level of .05, a study with
three groups
and dfbetween = 2, a population eta-squared value of .15, and a
desired level of
power of .80, the minimum number of participants required per
group would
be 19.
Java applets are available on the Web for statistical power
analysis;
typically, if the user identifies a Java applet that is appropriate
for the specific
analysis (such as between-S one-way ANOVA) and enters
information about
alpha, the number of groups, population effect size, and desired
level of
power, the applet provides the minimum per-group sample size
required to
achieve the user-specified level of statistical power.
Table 6.7 Statistical Power for One-Way Between-S ANOVA
With k =
3 Groups Using α = .05
SOURCE: Adapted from Jaccard and Becker (2002).
NOTE: Each table entry corresponds to the minimum n required
in each group to obtain the level of
statistical power shown.
6.10 Nature of Differences Among Group Means
6.10.1 Planned Contrasts
The idea behind planned contrasts is that the researcher
identifies a
limited number of comparisons between group means before
looking at the
data. The test statistic that is used for each comparison is
essentially identical
to a t ratio except that the denominator is usually based on the
MSwithin for the
entire ANOVA, rather than just the variances for the two groups
involved in
the comparison. Sometimes an F is reported for the significance
of each
contrast, but F is equivalent to t2 in situations where only two
group means
are compared or where a contrast has only 1 df.
For the means of Groups a and b, the null hypothesis for a
simple contrast
between Ma and Mb is as follows:
H0: μa = μb
or
H0: μa − μb = 0.
The test statistic can be in the form of a t test:

t = (Ma − Mb)/√[MSwithin × (2/n)],

where n is the number of cases within each group in the ANOVA. (If the ns are unequal across groups, then an average value of n is used; usually, this is the harmonic mean of the ns; see Note 3.)
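A contrast t ratio of this kind can be computed for any set of contrast coefficients by dividing the weighted combination of means by its standard error. In the sketch below, the group means and n = 7 per group come from the chapter's example, but the MSwithin value is hypothetical (the chapter does not report it), so the resulting t is illustrative only:

```python
import math

def contrast_t(means, coeffs, ms_within, n_per_group):
    """t ratio for a planned contrast among group means.
    Numerator: the weighted combination sum(c_i * M_i).
    Denominator: sqrt(MS_within * sum(c_i**2) / n), using the
    pooled MS_within from the overall ANOVA (equal ns assumed;
    with unequal ns, n is replaced by the harmonic mean)."""
    numerator = sum(c * m for c, m in zip(coeffs, means))
    denominator = math.sqrt(
        ms_within * sum(c ** 2 for c in coeffs) / n_per_group)
    return numerator / denominator

# Chapter's planned contrast: Group 1 (no stress) vs. the three
# stress groups combined, coefficients (+3, -1, -1, -1).
# MS_within = 6.0 is a made-up value for illustration.
t = contrast_t([9.86, 14.29, 13.57, 17.00], [3, -1, -1, -1],
               ms_within=6.0, n_per_group=7)
```

The negative sign of t indicates that the no-stress mean is below the combined stress-group mean, consistent with the chapter's result.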
Note that this is essentially equivalent to an ordinary t test. In a t test, the measure of within-group variability is the pooled variance s2p; in a one-way ANOVA, information
information
about within-group variability is contained in the term
MSwithin. In cases
where an F is reported as a significance test for a contrast
between a pair of
group means, F is equivalent to t2. The df for this t test equal N
− k, where N
is the total number of cases in the entire study and k is the
number of groups.
When a researcher uses planned contrasts, it is possible to make
other
kinds of comparisons that may be more complex in form than a
simple
pairwise comparison of means. For instance, suppose that the
researcher has a
study in which there are four groups; Group 1 receives a
placebo and Groups
2 to 4 all receive different antidepressant drugs. One hypothesis
that may be
of interest is whether the average depression score combined
across the three
drug groups is significantly lower than the mean depression
score in Group 1,
the group that received only a placebo.
The null hypothesis that corresponds to this comparison can be written in any of the following ways:

H0: μ1 = (μ2 + μ3 + μ4)/3

or

H0: μ1 − (1/3)μ2 − (1/3)μ3 − (1/3)μ4 = 0.
In words, this null hypothesis says that when we combine the
means using
certain weights (such as +1, −1/3, −1/3, and −1/3), the resulting
composite is
predicted to have a value of 0. This is equivalent to saying that
the mean
outcome averaged or combined across Groups 2 to 4 (which
received three
different types of medication) is equal to the mean outcome in
Group 1
(which received no medication). Weights that define a contrast
among group
means are called contrast coefficients. Usually, contrast
coefficients are
constrained to sum to 0, and the coefficients themselves are
usually given as
integers for reasons of simplicity. If we multiply this set of
contrast
coefficients by 3 (to get rid of the fractions), we obtain the
following set of
contrast coefficients that can be used to see if the combined
mean of Groups
2 to 4 differs from the mean of Group 1 (+3, −1, −1, −1). If we
reverse the
signs, we obtain the following set (−3, +1, +1, +1), which still
corresponds to
the same contrast. The F test for a contrast detects the
magnitude, and not the
direction, of differences among group means; therefore, it does
not matter if
the signs on a set of contrast coefficients are reversed.
In SPSS, users can select an option that allows them to enter a
set of
contrast coefficients to make many different types of
comparisons among
group means. To see some possible contrasts, imagine a
situation in which
there are k = 5 groups.
This set of contrast coefficients simply compares the means of
Groups
1 and 5 (ignoring all the other groups): (+1, 0, 0, 0, −1).
This set of contrast coefficients compares the combined mean of
Groups 1 to 4 with the mean of Group 5: (+1, +1, +1, +1, −4).
Contrast coefficients can be used to test for specific patterns,
such as a
linear trend (scores on the outcome variable might tend to
increase linearly if
Groups 1 through 5 correspond to equally spaced dosage levels
of a drug):
(−2, −1, 0, +1, +2).
A curvilinear trend can also be tested; for instance, the
researcher might
expect to find that the highest scores on the outcome variable
occur at
moderate dosage levels of the independent variable. If the five
groups
received five equally spaced different levels of background
noise and the
researcher predicts the best task performance at a moderate
level of noise, an
appropriate set of contrast coefficients would be (−1, 0, +2, 0,
−1).
When a user specifies contrast coefficients, it is necessary to
have one
coefficient for each level or group in the ANOVA; if there are k
groups, each
contrast that is specified must include k coefficients. A user
may specify
more than one set of contrasts, although usually the number of
contrasts does
not exceed k − 1 (where k is the number of groups). The
following simple
guidelines are usually sufficient to understand what
comparisons a given set
of coefficients makes:
1. Groups with positive coefficients are compared with groups
with
negative coefficients; groups that have coefficients of 0 are
omitted from
such comparisons.
2. It does not matter which groups have positive versus negative
coefficients; a difference can be detected by the contrast
analysis
whether or not the coefficients code for it in the direction of the
difference.
3. For contrast coefficients that represent trends, if you draw a
graph that
shows how the contrast coefficients change as a function of
group
number (X), the line shows pictorially what type of trend the
contrast
coefficients will detect. Thus, if you plot the coefficients (−2,
−1, 0, +1,
+2) as a function of the group numbers 1, 2, 3, 4, 5, you can see
that
these coefficients test for a linear trend. The test will detect a
linear trend
whether it takes the form of an increase or a decrease in mean Y
values
across groups.
When a researcher uses more than one set of contrasts, he or she
may
want to know whether those contrasts are logically independent,
uncorrelated,
or orthogonal. There is an easy way to check whether the
contrasts implied
by two sets of contrast coefficients are orthogonal or
independent.
Essentially, to check for orthogonality, you just compute a
(shortcut) version
of a correlation between the two lists of coefficients. First, you
list the
coefficients for Contrasts 1 and 2 (make sure that each set of
coefficients
sums to 0, or this shortcut will not produce valid results).
You cross-multiply each pair of corresponding coefficients (i.e.,
the
coefficients that are applied to the same group) and then sum
these cross
products. In this example, the sum of cross products is −1. This means that the two contrasts are not
independent or orthogonal; some of the information that they
contain about
differences among means is redundant.
Consider a second example, using the linear trend coefficients (−2, −1, 0, +1, +2) and the curvilinear coefficients (−1, 0, +2, 0, −1) shown earlier; here the sum of cross products is (−2)(−1) + (−1)(0) + (0)(+2) + (+1)(0) + (+2)(−1) = 2 + 0 + 0 + 0 − 2 = 0.
In this second example, the curvilinear contrast is orthogonal to
the linear
trend contrast. In a one-way ANOVA with k groups, it is
possible to have up
to (k − 1) orthogonal contrasts. The preceding discussion of
contrast
coefficients assumed that the groups in the one-way ANOVA
had equal ns.
When the ns in the groups are unequal, it is necessary to adjust
the values of
the contrast coefficients so that they take unequal group size
into account;
this is done automatically in programs such as SPSS.
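The orthogonality check just described (for equal group ns) reduces to a sum of cross products of corresponding coefficients, which can be sketched as:

```python
def contrasts_orthogonal(c1, c2):
    """Two contrasts (with equal group ns) are orthogonal when
    the sum of cross products of corresponding coefficients is
    zero. Each coefficient set should itself sum to zero."""
    assert abs(sum(c1)) < 1e-9 and abs(sum(c2)) < 1e-9
    return sum(a * b for a, b in zip(c1, c2)) == 0

linear = [-2, -1, 0, 1, 2]       # linear trend across 5 groups
curvilinear = [-1, 0, 2, 0, -1]  # inverted-U pattern
paired = [1, 0, 0, 0, -1]        # Group 1 vs. Group 5

orthogonal_1 = contrasts_orthogonal(linear, curvilinear)
orthogonal_2 = contrasts_orthogonal(linear, paired)
```

The linear and curvilinear contrasts are orthogonal (sum of cross products 0); the linear contrast and the Group 1 versus Group 5 contrast are not (sum of cross products −4).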
6.10.2 Post Hoc or “Protected” Tests
If the researcher wants to make all possible comparisons among
groups or
does not have a theoretical basis for choosing a limited number
of
comparisons before looking at the data, it is possible to use test
procedures
that limit the risk of Type I error by using “protected” tests.
Protected tests
use a more stringent criterion than would be used for planned
contrasts in
judging whether any given pair of means differs significantly.
One method
for setting a more stringent test criterion is the Bonferroni
procedure,
described in Chapter 3. The Bonferroni procedure requires that
the data
analyst use a more conservative (smaller) alpha level to judge
whether each
individual comparison between group means is statistically
significant. For
instance, in a one-way ANOVA with k = 5 groups, there are k ×
(k − 1)/2 =
10 possible pairwise comparisons of group means. If the researcher wants to limit the overall experiment-wise risk of Type I error (EWα) for the entire set of 10 comparisons to .05, one possible way to achieve this is to set the per-comparison alpha (PCα) level for each individual significance test between means at EWα/(number of post hoc tests to be performed). For example, if the experimenter wants an experiment-wise α of .05 when doing 10 post hoc comparisons between groups, the alpha level for each individual test would be set at EWα/10, or .05/10, or .005. The t test could be
calculated using the same formula as for an ordinary t test, but
it would be
judged significant only if its obtained p value were less than
.005. The
Bonferroni procedure is extremely conservative, and many
researchers prefer
less conservative methods of limiting the risk of Type I error.
(One way to
make the Bonferroni procedure less conservative is to set the
experiment-
wise alpha to some higher value such as .10.)
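The Bonferroni arithmetic for all pairwise comparisons can be sketched as:

```python
def bonferroni_alpha(k_groups, experiment_wise_alpha=0.05):
    """Per-comparison alpha for all k*(k-1)/2 pairwise
    comparisons among k groups, Bonferroni procedure."""
    n_comparisons = k_groups * (k_groups - 1) // 2
    return n_comparisons, experiment_wise_alpha / n_comparisons

# The chapter's example: k = 5 groups.
n_comp, pc_alpha = bonferroni_alpha(5)
```

With k = 5 groups this yields 10 comparisons and a per-comparison alpha of .005, as in the text.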
Dozens of post hoc or protected tests have been developed to
make
comparisons among means in ANOVA that were not predicted
in advance.
Some of these procedures are intended for use with a limited
number of
comparisons
comparisons
among group means. Some of the better-known post hoc tests
include the
Scheffé, the Newman-Keuls, and the Tukey honestly significant
difference
(HSD). The Tukey HSD has become popular because it is
moderately
conservative and easy to apply; it can be used to perform all
possible pairwise
comparisons of means and is available as an option in widely
used computer
programs such as SPSS. The menu for the SPSS one-way
ANOVA procedure
includes the Tukey HSD test as one of many options for post
hoc tests; SPSS
calls it the Tukey procedure.
The Tukey HSD test (and several similar post hoc tests) uses a
different
method of limiting the risk of Type I error. Essentially, the
Tukey HSD test
uses the same formula as a t ratio, but the resulting test ratio is
labeled “q”
rather than “t,” to remind the user that it should be evaluated
using a different
sampling distribution. The Tukey HSD test and several related
post hoc tests
use critical values from a distribution called the “Studentized
range statistic,”
and the test ratio is often denoted by the letter q:

q = (Ma − Mb)/√(MSwithin/n).
Values of the q ratio are compared with critical values from
tables of the
Studentized range statistic (see the table in Appendix F). The
Studentized
range statistic is essentially a modified version of the t
distribution. Like t, its
distribution depends on the numbers of subjects within groups,
but the shape
of this distribution also depends on k, the number of groups. As
the number
of groups (k) increases, the number of pairwise comparisons
also increases.
To protect against inflated risk of Type I error, larger
differences between
group means are required for rejection of the null hypothesis as
k increases.
The distribution of the Studentized range statistic is broader and
flatter than
the t distribution and has thicker tails; thus, when it is used to
look up critical
values of q that cut off the most extreme 5% of the area in the
upper and
lower tails, the critical values of q are larger than the
corresponding critical
values of t.
This formula for the Tukey HSD test could be applied by
computing a q
ratio for each pair of sample means and then checking to see if
the obtained q
for each comparison exceeded the critical value of q from the
table of the
Studentized range statistic. However, in practice, a
computational shortcut is
often preferred. The formula is rearranged so that the cutoff for judging a difference between groups to be statistically significant is given in terms of differences between means rather than in terms of values of a q ratio:

HSD = qcritical × √(MSwithin/n).
Then, if the obtained difference between any pair of means
(such as Ma −
Mb) is greater in absolute value than this HSD, this difference
between means
is judged statistically significant.
An HSD criterion is computed by looking up the appropriate
critical
value of q, the Studentized range statistic, from a table of this
distribution
(see the table in Appendix F). The critical q value is a function
of both n, the
average number of subjects per group, and k, the number of
groups in the
overall one-way ANOVA. As in other test situations, most
researchers use
the critical value of q that corresponds to α = .05, two-tailed.
This critical q
value obtained from the table is multiplied by the error term to
yield HSD.
This HSD is used as the criterion to judge each obtained
difference between
sample means. The researcher then computes the absolute value
of the
difference between each pair of group means (M1 − M2), (M1 −
M3), and so
forth. If the absolute value of a difference between group means
exceeds the
HSD value just calculated, then that pair of group means is
judged to be
significantly different.
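The HSD procedure can be sketched as follows. The critical q is supplied from a table of the Studentized range statistic, not computed; in the example call, q_crit = 3.90 is roughly the tabled value for k = 4 groups with df = 24 at α = .05, and MS_within = 5.0 is a made-up value, so the flagged pairs are illustrative only:

```python
import math

def tukey_hsd_criterion(q_crit, ms_within, n_per_group):
    """HSD = q_crit * sqrt(MS_within / n). q_crit comes from a
    table of the Studentized range statistic (a function of k
    and the within-group df); it is an input here."""
    return q_crit * math.sqrt(ms_within / n_per_group)

def significant_pairs(means, hsd):
    """All pairs of group means (1-based indices) whose absolute
    difference exceeds the HSD criterion."""
    return [(i + 1, j + 1)
            for i in range(len(means))
            for j in range(i + 1, len(means))
            if abs(means[i] - means[j]) > hsd]

hsd = tukey_hsd_criterion(q_crit=3.90, ms_within=5.0, n_per_group=7)
pairs = significant_pairs([9.86, 14.29, 13.57, 17.00], hsd)
```

Each group mean is compared with every other; only differences exceeding the single HSD criterion are judged significant.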
When a Tukey HSD test is requested from SPSS, SPSS provides
a
summary table that shows all possible pairwise comparisons of
group means
and reports whether each of these comparisons is significant. If
the overall F
for the one-way ANOVA is statistically significant, it implies
that there
should be at least one significant contrast among group means.
However, it is
possible to have situations in which a significant overall F is
followed by a
set of post hoc tests that do not reveal any significant
differences among
means. This can happen because protected post hoc tests are
somewhat more
conservative, and thus require slightly larger between-group
differences as a
basis for a decision that differences are statistically significant,
than the
overall one-way ANOVA.
6.11 SPSS Output and Model Results
To run the one-way between-S ANOVA procedure in SPSS,
make the
following menu selections from the menu at the top of the data
view
worksheet, as shown in Figure 6.4: <Analyze> <Compare
Means> <One-
Way ANOVA>. This opens up the dialog window in Figure 6.5.
Enter the
name of one (or several) dependent variables into the window
called
Dependent List; enter the name of the categorical variable that
provides
group membership information into the window named Factor.
For this
example, additional windows were accessed by clicking on the
buttons
marked Post Hoc, Contrasts, and Options. The SPSS screens
that correspond
to this series of menu selections are shown in Figures 6.6
through 6.8.
From the menu of post hoc tests, this example used the one
SPSS calls
Tukey (this corresponds to the Tukey HSD test). To define a
contrast that
compared the mean of Group 1 (no stress) with the mean of the
three stress
treatment groups combined, these contrast coefficients were
entered one at a
time: +3, −1, −1, −1. From the list of options, “Descriptive”
statistics and
“Homogeneity of variance test” were selected by placing checks
in the boxes
next to the names of these tests.
The output for this one-way ANOVA is reported in Figure 6.9.
The first
panel provides descriptive information about each of the
groups: mean,
standard deviation, n, a 95% CI for the mean, and so forth. The
second panel
shows the results for the Levene test of the homogeneity of
variance
assumption; this is an F ratio with (k − 1) and (N − k) df. The
obtained F was
not significant for this example; there was no evidence that the
homogeneity
of variance assumption had been violated. The third panel
shows the
ANOVA source table with the overall F; this was statistically
significant, and
this implies that there was at least one significant contrast
between group
means. In practice, a researcher would not report both planned
contrasts and
post hoc tests; however, both were presented for this example to
show how
they are obtained and reported.
Figure 6.4 SPSS Menu Selections for One-Way Between-S
ANOVA
Figure 6.5 One-Way Between-S ANOVA Dialog Window
Figure 6.10 shows the output for the planned contrast that was
specified
by entering these contrast coefficients: (+3, −1, −1, −1). These
contrast
coefficients correspond to a test of the null hypothesis that the
mean anxiety
of the no-stress group (Group 1) was not significantly different
from the
mean anxiety of the three stress intervention groups (Groups 2–
4) combined.
SPSS reported a t test for this contrast (some textbooks and
programs use an
F test). This t was statistically significant, and examination of
the group
means indicated that the mean anxiety level was significantly
higher for the
three stress intervention groups combined, compared with the
control group.
Figure 6.6 SPSS Dialog Window, One-Way ANOVA, Post Hoc
Test
Menu
Figure 6.7 Specification of a Planned Contrast
NOTES:
Group Contrast Coefficient
1 No stress +3
2 Mental arithmetic −1
3 Stress role play −1
4 Mock job interview −1
Null hypothesis about weighted linear composite of means that is represented by this set of contrast coefficients:

H0: 3μ1 − μ2 − μ3 − μ4 = 0
Figure 6.8 SPSS Dialog Window for One-Way ANOVA:
Options
Figure 6.11 shows the results for the Tukey HSD tests that
compared all
possible pairs of group means. The table “Multiple
Comparisons” gives the
difference between means for all possible pairs of means (note
that each
comparison appears twice; that is, Group a is compared with
Group b, and in
another line, Group b is compared with Group a). Examination
of the “sig” or
p values indicates that several of the pairwise comparisons were
significant at
the .05 level. The results are displayed in a more easily readable
form in the
last panel under the heading “Homogeneous Subsets.” Each
subset consists of
group means that were not significantly different from each
other using the
Tukey test. The no-stress group was in a subset by itself; in
other words, it
had significantly lower mean anxiety than any of the three
stress intervention
groups. The second subset consisted of the stress role play and
mental
arithmetic groups, which did not differ significantly in anxiety.
The third
subset consisted of the mental arithmetic and mock job
interview groups.
Note that it is possible for a group to belong to more than one
subset; the
anxiety score for the mental arithmetic group was not
significantly different
from the stress role play or the mock job interview groups.
However, because
the stress role play group differed significantly from the mock
job interview
group, these three groups did not form one subset.
Figure 6.9 SPSS Output for One-Way ANOVA
Figure 6.10 SPSS Output for Planned Contrasts
Figure 6.11 SPSS Output for Post Hoc Test (Tukey HSD)
Note also that it is possible for all the Tukey HSD comparisons
to be
nonsignificant even when the overall F for the one-way ANOVA
is
statistically significant. This can happen because the Tukey
HSD test requires
a slightly larger difference between means to achieve
significance.
In this imaginary example, as in some research studies, the
outcome
measure (anxiety) is not a standardized test for which we have
norms. The
numbers by themselves do not tell us whether the mock job
interview
participants were moderately anxious or twitching, stuttering
wrecks. Studies
that use standardized measures can make comparisons to test
norms to help
readers understand whether the group differences were large
enough to be of
clinical or practical importance. Alternatively, qualitative data
about the
behavior of participants can also help readers understand how
substantial the
group differences were.
SPSS one-way ANOVA does not provide an effect-size measure,
but this
can easily be calculated by hand. In this case, eta squared is
found by taking
the ratio SSbetween/SStotal from the ANOVA source table: η2 = .60.
To obtain a graphic summary of the group means that shows the
95% CI
for each mean, use the SPSS menu selections and commands in
Figures 6.12,
6.13, and 6.14; this graph appears in Figure 6.15. Generally, the
group means
are reported either in a summary table (as in Table 6.8) or as a
graph (as in
Figure 6.15); it is not necessary to include both.
Figure 6.12 SPSS Menu Selections for Graph of Cell Means
With Error
Bars or Confidence Intervals
Figure 6.13 Initial SPSS Dialog Window for Error Bar Graph
Figure 6.14 Dialog Window for Graph of Means With Error
Bars
Figure 6.15 Means for Stress/Anxiety Data With 95%
Confidence
Intervals (CIs) for Each Group Mean
6.12 Summary
One-way between-S ANOVA provides a method for comparison
of more
than two group means. However, the overall F test for the
ANOVA does not
provide enough information to completely describe the pattern
in the data. It
is often necessary to perform additional comparisons among
specific group
means to provide a complete description of the pattern of
differences among
group means. These can be a priori or planned contrast
comparisons (if a
limited number of differences were predicted in advance). If the
researcher
did not make a limited number of predictions in advance about
differences
between group means, then he or she may use protected or post
hoc tests to
do any follow-up comparisons; the Bonferroni procedure and
the Tukey HSD
test were described here, but many other post hoc procedures
are available.
Table 6.8 Mean Anxiety Scores Across Types of Stress
Results
A one-way between-S ANOVA was done to compare the mean
scores on an anxiety scale (0 = not at
all anxious, 20 = extremely anxious) for participants who were
randomly assigned to one of four
groups: Group 1 = control group/no stress, Group 2 = mental
arithmetic, Group 3 = stressful role play,
and Group 4 = mock job interview. Examination of a histogram
of anxiety scores indicated that the
scores were approximately normally distributed with no extreme
outliers. Prior to the analysis, the
Levene test for homogeneity of variance was used to examine
whether there were serious violations of
the homogeneity of variance assumption across groups, but no
significant violation was found: F(3, 24)
= .718, p = .72.
The overall F for the one-way ANOVA was statistically
significant, F(3, 24) = 11.94, p < .001.
This corresponded to an effect size of η2 = .60; that is, about
60% of the variance in anxiety scores was
predictable from the type of stress intervention. This is a large
effect. The means and standard
deviations for the four groups are shown in Table 6.8.
One planned contrast (comparing the mean of Group 1, no
stress, to the combined means of Groups
2 to 4, the stress intervention groups) was performed. This
contrast was tested using α = .05, two-tailed;
the t test that assumed equal variances was used because the
homogeneity of variance assumption was
not violated. For this contrast, t(24) = −5.18, p < .001. The
mean anxiety score for the no-stress group
(M = 9.86) was significantly lower than the mean anxiety score
for the three combined stress
intervention groups (M = 14.95).
In addition, all possible pairwise comparisons were made using
the Tukey HSD test. Based on this
test (using α = .05), it was found that the no-stress group scored
significantly lower on anxiety than all
three stress intervention groups. The stressful role play (M =
13.57) was significantly less anxiety
producing than the mock job interview (M = 17.00). The mental
arithmetic task produced a mean level
of anxiety (M = 14.29) that was intermediate between the other
stress conditions, and it did not differ
significantly from either the stress role play or the mock job
interview. Overall, the mock job interview
produced the highest levels of anxiety. Figure 6.15 shows a 95%
CI around each group mean.
The most important concept from this chapter is the idea that a
score can
be divided into components (one part that is related to group
membership or
treatment effects and a second part that is due to the effects of
all other
“extraneous” variables that uniquely influence individual
participants).
Information about the relative sizes of these components can be
summarized
across all the participants in a study by computing the sum of
squared
deviations (SS) for the between-group and within-group
deviations. Based on
the SS values, it is possible to compute an effect-size estimate
(η2) that
describes the proportion of variance predictable from group
membership (or
treatment variables) in the study. Researchers usually hope to
design their
studies in a manner that makes the proportion of explained
variance
reasonably high and that produces statistically significant
differences among
group means. However, researchers have to keep in mind that
the proportion
of variance due to group differences in the artificial world of
research may
not correspond to the “true” strength of the influence of the
variable out in
the “real world.” In experiments, we create an artificial world
by holding
some variables constant and by manipulating the treatment
variable; in
nonexperimental research, we create an artificial world through
our selection
of participants and measures. Thus, results of our research
should be
interpreted with caution.
Notes
1. If the groups are not equivalent—that is, if the groups have
different means on participant
characteristics that may be predictive of the outcome variable—
ANCOVA (see Chapter 17) may be
used to correct for the nonequivalence by statistical control.
However, it is preferable to identify and
correct any differences in the composition of groups prior to
data collection, if possible.
2. When the formal algebra of expected mean squares is applied
to work out the population
variances that are estimated by the sample values of MSwithin
and MSbetween, the following results are
obtained if the group ns are equal (Winer, Brown, & Michels,
1991):
E(MSwithin) = σ²ε, the population variance due to error (all other extraneous variables);
E(MSbetween) = σ²ε + nσ²α, where σ²α is the population variance due to the effects of membership in
the naturally occurring group or to the manipulated treatment variable. Thus, when we look at what the F ratio
estimates, we obtain
F = MSbetween/MSwithin, which estimates (σ²ε + nσ²α)/σ²ε.
The null hypothesis (that all the population means are equal) can be stated in a different form,
which says the variance of the population means is zero:
H0: σ²α = 0.
If H0 is true and σ²α = 0, then the expected value of the F ratio is approximately 1. Values of F that are
Values of F that are
substantially larger than 1, and that exceed the critical value of
F from tables of the F distribution, are
taken as evidence that this null hypothesis may be false.
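The claim in Note 2, that the F ratio averages close to 1 when H0 is true, can be checked with a small Monte Carlo sketch (illustrative code, not from the original text; the group configuration matches the k = 4, n = 7 design used in this chapter):

```python
import random

def one_way_F(groups):
    """F ratio for a one-way between-S ANOVA on a list of groups of scores."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / N
    # Between-group SS: weighted squared distances of group means from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group SS: squared distances of scores from their own group mean
    ss_within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

random.seed(1)
# 2,000 simulated experiments in which all four population means are equal (H0 true)
f_ratios = [one_way_F([[random.gauss(0, 1) for _ in range(7)] for _ in range(4)])
            for _ in range(2000)]
print(sum(f_ratios) / len(f_ratios))  # averages close to 1 under H0
```

Only the occasional simulated experiment produces an F larger than the tabled critical value, which is exactly the Type I error rate the test is designed to control.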
3. Calculating the harmonic mean of cell or group ns: When the
groups have different ns, the
harmonic mean H of a set of unequal ns is obtained by using the
following equation:
H = k / (1/n1 + 1/n2 + … + 1/nk)
That is, sum the inverses of the ns of the groups, divide this
sum by the number of groups, and then
take the inverse of that result to obtain the harmonic mean (H)
of the ns.
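The recipe in Note 3 translates directly into code; a minimal sketch (using the group ns from Question 1 below as an illustration):

```python
def harmonic_mean(ns):
    """Harmonic mean H of a set of group sizes: H = k / sum of (1/n)."""
    k = len(ns)
    return k / sum(1.0 / n for n in ns)

print(harmonic_mean([38, 27, 24, 32]))  # about 29.35 for these unequal ns
print(harmonic_mean([7, 7, 7, 7]))      # equals 7 when all ns are equal
```

Note that the harmonic mean is pulled toward the smallest group size, which is why it is the appropriate "equivalent n" for procedures derived under the equal-n assumption.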
Comprehension Questions
1.
A nonexperimental study was done to assess the impact of the
accident at
the Three Mile Island (TMI) nuclear power plant on nearby
residents
(Baum, Gatchel, & Schaeffer, 1983). Data were collected from
residents
of the following four areas:
Group 1: Three Mile Island, where a nuclear accident occurred
(n = 38)
Group 2: Frederick, with no nuclear power plant nearby (n = 27)
Group 3: Dickerson, with an undamaged coal power plant nearby (n = 24)
Group 4: Oyster Creek, with an undamaged nuclear power plant
nearby
(n = 32)
Several different measures of stress were taken for people in
these four
groups. The researchers hypothesized that residents who lived
near TMI
(Group 1) would score higher on a wide variety of stress
measures than
people who lived in the other three areas included as
comparisons. One-
way ANOVA was performed to assess differences among these
four
groups on each outcome. Selected results are reported below for
you to
discuss and interpret. Here are results for two of their outcome
measures:
Stress (total reported stress symptoms) and Depression (score
on the Beck
Depression Inventory). Each cell lists the mean, followed by the
standard
deviation in parentheses.
[Table of group means and standard deviations for Stress and Depression not reproduced here.]
a.
Write a Results section in which you report whether these
overall
differences were statistically significant for each of these two
outcome
variables (using α = .05). You will need to look up the critical
value for
F, and in this instance, you will not be able to include an exact
p value.
Include an eta-squared effect-size index for each of the F ratios
(you can
calculate this by hand from the information given in the table).
Be sure
to state the nature of the differences: Did the TMI group score
higher or
lower on these stress measures relative to the other groups?
b. Would your conclusions change if you used α = .01 instead of
α = .05 as
your criterion for statistical significance?
c. Name a follow-up test that could be done to assess whether
all possible
pairwise comparisons of group means were significant.
d.
Write out the contrast coefficients to test whether the mean for
Group 1
(people who lived near TMI) differed from the average for the
other
three comparison groups.
e.
Here is some additional information about scores on the Beck
Depression Inventory. For purposes of clinical diagnosis, Beck,
Steer,
and Brown (1996) suggested the following cutoffs:
0–13: Minimal depression
14–19: Mild depression
20–28: Moderate depression
29–63: Severe depression
In light of this additional information, what would you add to
your
discussion of the outcomes for depression in the TMI group
versus
groups from other regions? (Did the TMI accident make people
severely
depressed?)
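For parts (1a) and (1b), the critical values of F can be read from a printed table or, as a sketch, computed in software. The snippet below assumes SciPy is available (an assumption, since the exercise itself expects a table) and uses df = k − 1 = 3 and N − k = 121 − 4 = 117, from the group ns listed above:

```python
from scipy.stats import f

df_between, df_within = 3, 117   # k - 1 and N - k for the four groups in Question 1
print(f.ppf(0.95, df_between, df_within))  # critical F at alpha = .05, roughly 2.68
print(f.ppf(0.99, df_between, df_within))  # critical F at alpha = .01, roughly 3.95
```

An obtained F larger than the first value is significant at α = .05; comparing it with the second value answers part (1b).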
2.
Sigall and Ostrove (1975) did an experiment to assess whether
the
physical attractiveness of a defendant on trial for a crime had an
effect on
the severity of the sentence given in mock jury trials. Each of
the
participants in this study was randomly assigned to one of the
following
three treatment groups; every participant received a packet that
described
a burglary and gave background information about the accused
person.
The three treatment groups differed in the type of information
they were
given about the accused person’s appearance. Members of
Group 1 were
shown a photograph of an attractive person; members of Group
2 were
shown a photograph of an unattractive person; members of
Group 3 saw
no photograph. Some of their results are described here. Each
participant
was asked to assign a sentence (in years) to the accused person;
the
researchers predicted that more attractive persons would receive
shorter
sentences.
a.
Prior to assessment of the outcome, the researchers did a
manipulation
check. Members of Groups 1 and 2 rated the attractiveness (on a
1 to 9
scale, with 9 being the most attractive) of the person in the
photo. They
reported that for the attractive photo, M = 7.53; for the
unattractive
photo, M = 3.20, F(1, 108) = 184.29. Was this difference
statistically
significant (using α = .05)?
b. What was the effect size for the difference in (2a)?
c. Was their attempt to manipulate perceived attractiveness
successful?
d. Why does the F ratio in (2a) have just df = 1 in the
numerator?
e. The mean length of sentence given in the three groups was as
follows:
Group 1: Attractive photo, M = 2.80
Group 2: Unattractive photo, M = 5.20
Group 3: No photo, M = 5.10
They did not report a single overall F comparing all three
groups;
instead, they reported selected pairwise comparisons. For Group
1
versus Group 2, F(1, 108) = 6.60, p < .025.
Was this difference statistically significant? If they had done
an
overall F to assess the significance of differences of means
among all
three groups, do you think this overall F would have been
statistically
significant?
f. Was the difference in mean length of sentence in part (2e) in the predicted direction?
g. Calculate and interpret an effect-size estimate for this
obtained F.
h.
What additional information would you need about these data to
do a
Tukey honestly significant difference test to see whether
Groups 2 and
3, as well as 1 and 3, differed significantly?
3.
Suppose that a researcher has conducted a simple experiment to
assess the
effect of background noise level on verbal learning. The
manipulated
independent variable is the level of white noise in the room (Group
1 = low
level = 65 dB; Group 2 = high level = 70 dB). (Here are some
approximate reference values for decibel noise levels: 45 dB,
whispered
conversation; 65 dB, normal conversation; 80 dB, vacuum
cleaner; 90 dB,
chain saw or jack hammer; 120 dB, rock music played very
loudly.) The
outcome measure is number of syllables correctly recalled from
a 20-item
list of nonsense syllables. Participants ranged in age from 17 to
70 and had
widely varying levels of hearing acuity; some of them
habitually studied
in quiet places and others preferred to study with the television
or radio
turned on. There were 5 participants in each of the two groups.
The
researcher found no significant difference in mean recall scores
between
these groups.
a.
Describe three specific changes to the design of this
noise/learning
study that would be likely to increase the size of the t ratio
(and,
therefore, make it more likely that the researcher would find a
significant effect).
b.
Also, suppose that the researcher has reason to suspect that
there is a
curvilinear relation between noise level and task performance.
What
change would this require in the research design?
4.
Suppose that Kim is a participant in a study that compares
several
coaching methods to see how they affect math Scholastic
Aptitude Test
(SAT) scores. The grand mean of math SAT scores for all
participants
(MY) in the study is 550. The group that Kim participated in
had a mean
math SAT score of 565. Kim’s individual score on the math
SAT was 610.
a.
What was the estimated residual component (εij) of Kim’s
score, that is,
the part of Kim’s score that was not related to the coaching
method?
(Both parts of this question call for specific numerical values as
answers.)
****************
b. What was the “effect” (αi) component of Kim’s score?
5.
What pattern in grouped data would make SSwithin = 0? What
pattern
within data would make SSbetween = 0?
6.
Assuming that a researcher hopes to demonstrate that a
treatment or group
membership variable makes a significant difference in
outcomes, which
term does the researcher hope will be larger, MSbetween or
MSwithin? Why?
7. Explain the following equation:
(Yij − MY) = (Yij − Mj) + (Mj − MY).
What do we gain by breaking the (Yij − MY) deviation into two
separate
components, and what do each of these components represent?
Which of
the terms on the right-hand side of the equation do researchers
typically
hope will be large, and why?
8.
What is H0 for a one-way ANOVA? If H0 is rejected, does that
imply
that each mean is significantly different from every other mean?
9. What information do you need to decide on a sample size that
will
provide adequate statistical power?
10. In the equation αj = (Mj − MY), what do we call the αj
term?
11.
If there is an overall significant F in a one-way ANOVA, can
we
conclude that the group membership or treatment variable
caused the
observed differences in the group means? Why or why not?
12. How can eta squared be calculated from the SS values in an
ANOVA
summary table?
13. Which of these types of tests is more conservative: planned
contrasts or
post hoc/protected tests?
14. Name two common post hoc procedures for the comparison
of means in
an ANOVA.