LAHORE COLLEGE FOR WOMEN UNIVERSITY
SUBMITTED TO: Dr.TAHIRA KALSOOM
SUBMITTED BY: ROOHA SHAHID (1925213023)
KAINAT NAYYAR (1925213017)
HAFIZA AFIA NAZEER (1925213013)
AYESHA TABASSUM (1925213008)
ADEEBA ASHIQ (1925213003)
HAMNA SHAHZAD (1925213033)
CLASS: MS-EDUCATION
STATISTICS IN EDUCATION
NON PARAMETRIC TESTS
Nonparametric tests are also called distribution-free tests because they do not assume that your data follow a specific distribution.
A common rule of thumb is to use nonparametric tests when your data do not meet the assumptions of the corresponding parametric test, especially
the assumption of normally distributed data. Nonparametric tests also compare group medians rather than means. In this sense, nonparametric
tests are like a parallel universe to parametric tests.
Parametric tests (means)                                    Nonparametric tests (medians)
1-sample t test                                             1-sample Sign, 1-sample Wilcoxon
2-sample t test                                             Mann-Whitney test
One-Way ANOVA                                               Kruskal-Wallis, Mood’s median test
Factorial DOE with one factor and one blocking variable     Friedman test
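The parametric/nonparametric pairs in the table above correspond directly to functions in Python's scipy.stats, which can serve as a quick reference alongside SPSS. This is only a sketch under the assumption that SciPy is available; the sample values are invented for illustration.

```python
# The parametric / nonparametric pairs from the table above, expressed
# as scipy.stats calls (SciPy is assumed available; the sample values
# are invented for illustration).
from scipy import stats

a = [12, 15, 11, 19, 14, 17, 13]
b = [22, 18, 25, 16, 21, 24, 20]
c = [30, 28, 33, 27, 31, 29, 32]

# 2-sample t test (compares means) vs Mann-Whitney test (compares ranks)
t_stat, t_p = stats.ttest_ind(a, b)
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

# One-Way ANOVA vs Kruskal-Wallis for three or more groups
f_stat, f_p = stats.f_oneway(a, b, c)
h_stat, h_p = stats.kruskal(a, b, c)

print(round(t_p, 4), round(u_p, 4), round(f_p, 6), round(h_p, 6))
```

Each pair answers a similar question, but the nonparametric version uses ranks and so tolerates skewed data, ordinal data, and outliers.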
REASONS TO USE NONPARAMETRIC TESTS
Reason 1: Your area of study is better represented by the median
Reason 2: You have a very small sample size
Reason 3: You have ordinal data, ranked data, or outliers that you can’t remove
CHI SQUARE
INTRODUCTION
The Chi-square test of independence (also known as the Pearson Chi-square test, or simply the Chi-square) is one of the most useful
statistics for testing hypotheses when the variables are nominal, as often happens in clinical research. Unlike most statistics, the Chi-square (χ2)
can provide information not only on the significance of any observed differences, but also provides detailed information on exactly which
categories account for any differences found. Thus, the amount and detail of information this statistic can provide renders it one of the most
useful tools in the researcher’s array of available analysis tools. As with any statistic, there are requirements for its appropriate use, which are
called “assumptions” of the statistic. Additionally, the χ2 is a significance test, and should always be coupled with an appropriate test of strength.
CONDITIONS OF CHI-SQUARE TEST
The Chi-square test is a non-parametric statistic, also called a distribution free test. Non-parametric tests should be used when any one of the
following conditions pertains to the data:
1. The level of measurement of all the variables is nominal or ordinal.
2. The sample sizes of the study groups are unequal; for the χ2 the groups may be of equal size or unequal size whereas some parametric
tests require groups of equal or approximately equal size.
3. The original data were measured at an interval or ratio level, but violate one of the following assumptions of a parametric test:
a. The distribution of the data was seriously skewed or kurtotic (parametric tests assume approximately normal distribution of the
dependent variable), and thus the researcher must use a distribution free statistic rather than a parametric statistic.
b. The data violate the assumptions of equal variance or homoscedasticity.
c. For any of a number of reasons, the continuous data were collapsed into a small number of categories, and thus the data are no
longer interval or ratio.
ASSUMPTIONS OF THE CHI-SQUARE
As with parametric tests, the non-parametric tests, including the χ2 assume the data were obtained through random selection. However, it
is not uncommon to find inferential statistics used when data are from convenience samples rather than random samples. Each non-parametric
test has its own specific assumptions as well. The assumptions of the Chi-square include:
1. The data in the cells should be frequencies, or counts of cases rather than percentages or some other transformation of the data.
2. The levels (or categories) of the variables are mutually exclusive. That is, a particular subject fits into one and only one level of each of
the variables.
3. Each subject may contribute data to one and only one cell in the χ2. If, for example, the same subjects are tested over time such that the
comparisons are of the same subjects at Time 1, Time 2, Time 3, etc., then χ2 may not be used.
4. The study groups must be independent. This means that a different test must be used if the two groups are related. For example, a
different test must be used if the researcher’s data consists of paired samples, such as in studies in which a parent is paired with his or her
child.
5. There are 2 variables, and both are measured as categories, usually at the nominal level. However, data may be ordinal data. Interval or
ratio data that have been collapsed into ordinal categories may also be used. While Chi-square has no rule about limiting the number of
cells (by limiting the number of categories for each variable), a very large number of cells (over 20) can make it difficult to meet
assumption #6 below, and to interpret the meaning of the results.
6. The expected count in each cell should be 5 or more in at least 80% of the cells, and no cell should have an expected count of less than
one. This assumption is most likely to be met if the sample size equals at least the number of cells multiplied by 5. Essentially, this
assumption specifies the number of cases (sample size) needed to use the χ2 for any number of cells. This requirement is fully
explained in the calculation of the statistic in the case study example.
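The expected-count rule in assumption #6 can be checked directly. The sketch below, which assumes NumPy is available, computes expected counts for a 2x2 table (the counts match the Locale * Qualification case study worked later in this document) and tests each part of the rule:

```python
# Checking assumption #6 (expected cell counts) directly; NumPy is
# assumed available, and the counts below match the 2x2 Locale *
# Qualification example worked later in this document.
import numpy as np

observed = np.array([[7, 14],
                     [5, 8]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
n = observed.sum()

# Expected count for each cell: (row total x column total) / N
expected = row_totals * col_totals / n

ok_80pct = (expected >= 5).mean() >= 0.80   # expected >= 5 in >= 80% of cells
ok_min1  = (expected >= 1).all()            # no expected count below 1
ok_n     = n >= 5 * observed.size           # rule of thumb: N >= 5 x number of cells
print(np.round(expected, 2), ok_80pct, ok_min1, ok_n)
```

Here ok_80pct comes out False: one of the four cells (25%) has an expected count of 4.59, which is exactly what the footnote of the Chi-Square Tests table in the case study reports.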
PROCEDURE TO RUN CHI-SQUARE TEST IN SPSS
To perform a Pearson’s chi-square test in SPSS, you need to have two categorical variables, with categories typically coded as numbers (1, 2, 3, etc.)
The null hypothesis would be:
“There is no difference in male and female proportions between the control and treated group.”
The alternative hypothesis would be:
“There is a difference in male and female proportions between the control and treated group.”
CHI-SQUARE TEST
In SPSS, the Chi-Square Test of Independence is an option within the Crosstabs procedure.
Quick Steps
1. Click on Analyze > Descriptive Statistics > Crosstabs
2. Drag and drop (at least) one variable into the Row(s) box, and (at least) one into the Column(s) box
3. Click on Statistics, and select Chi-square
4. Press Continue, and then OK to do the chi square test
5. The result will appear in the SPSS output viewer.
PERFORMING THE TEST ON SPSS
To perform this test in SPSS, I selected two categorical variables (Qualification and Locality) from the given data and then applied the Chi-
square test to check the association between the two variables.
Null hypothesis: There is no association between qualification and locality (independent)
Alternative hypothesis: There is an association between qualification and locality (dependent)
To determine whether these two variables are dependent on each other or independent of each other, we apply the Chi-square test.
QUICK STEPS
1. Click on Analyze -> Descriptive Statistics -> Crosstabs
2. Drag and drop Locale variable into the Row(s) box, and Qualification into the Column(s) box.
3. Click on Statistics, and select Chi-square. If you also want a measure of effect size, select Phi and Cramer’s V in the same dialog box,
and then press Continue, otherwise just press Continue.
4. Then click on Cells, select the counts you want displayed (Observed and, optionally, Expected), and then select the Row, Column and
Total options under Percentages.
5. Press Continue, and then select the Display clustered bar charts checkbox.
6. Then press OK to run the chi-square test.
7. The result will appear in the SPSS output viewer
The output of chi-square consists of four tables and one bar chart:
Case Processing Summary
                         Cases
                         Valid           Missing         Total
                         N    Percent    N    Percent    N    Percent
Locale * Qualification   34   100.0%     0    0.0%       34   100.0%
Locale * Qualification Crosstabulation
                                          Qualification
                                          Highly Qualified   Low Qualified   Total
Locale   Urban   Count                    7                  14              21
                 % within Locale          33.3%              66.7%           100.0%
                 % within Qualification   58.3%              63.6%           61.8%
                 % of Total               20.6%              41.2%           61.8%
         Rural   Count                    5                  8               13
                 % within Locale          38.5%              61.5%           100.0%
                 % within Qualification   41.7%              36.4%           38.2%
                 % of Total               14.7%              23.5%           38.2%
Total            Count                    12                 22              34
                 % within Locale          35.3%              64.7%           100.0%
                 % within Qualification   100.0%             100.0%          100.0%
                 % of Total               35.3%              64.7%           100.0%
Chi-Square Tests
                               Value    df   Asymptotic        Exact Sig.   Exact Sig.
                                             Significance      (2-sided)    (1-sided)
                                             (2-sided)
Pearson Chi-Square             .092a    1    .761
Continuity Correction(b)       .000     1    1.000
Likelihood Ratio               .092     1    .762
Fisher's Exact Test                                            1.000        .522
Linear-by-Linear Association   .090     1    .765
N of Valid Cases               34
a. 1 cell (25.0%) has an expected count less than 5. The minimum expected count is 4.59.
b. Computed only for a 2x2 table.
Symmetric Measures
                                  Value    Approximate Significance
Nominal by Nominal   Phi          -.052    .761
                     Cramer's V    .052    .761
N of Valid Cases                  34
The chi-square statistic appears in the Value column of the Chi-Square Tests table, in the “Pearson Chi-Square” row. In this
example, the value of the chi-square statistic is .092 (the superscript a refers to the footnote about expected counts). The p-value appears in the
same row, in the “Asymptotic Significance (2-sided)” column (.761). The result is significant if this value is less than or equal to the designated
alpha level (normally .05).
In this case, the p-value is greater than the standard alpha level, so we fail to reject the null hypothesis that the two variables are
independent of each other. To put it simply, the result is not significant: the data suggest that the variables Locale and Qualification are not
associated with each other.
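The SPSS figures above can be cross-checked outside SPSS. Assuming SciPy is available, scipy.stats.chi2_contingency with the Yates continuity correction disabled reproduces the Pearson Chi-Square row:

```python
# Reproducing the Pearson Chi-Square row of the SPSS output with
# scipy.stats.chi2_contingency (SciPy is assumed available); the counts
# come from the Locale * Qualification crosstabulation above, and the
# Yates continuity correction is disabled to match the Pearson row.
from scipy.stats import chi2_contingency

observed = [[7, 14],   # Urban: Highly Qualified, Low Qualified
            [5, 8]]    # Rural: Highly Qualified, Low Qualified

chi2, p, df, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 3), round(p, 3), df)   # matches .092, .761, df = 1
```

The same call with correction=True (the default for 2x2 tables) gives the Continuity Correction row instead.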
MANN-WHITNEY U TEST
INTRODUCTION
The Mann-Whitney U test is the nonparametric equivalent of the two-sample t-test. While the t-test makes an assumption about the
distribution of the population (i.e. that the samples come from normally distributed populations), the Mann-Whitney U test makes no such
assumption. This test is used to compare differences between two independent groups when the dependent variable is either ordinal or
continuous, but not normally distributed.
The Mann-Whitney U test is sometimes also called the Mann-Whitney-Wilcoxon test or the Wilcoxon rank-sum test.
EXAMPLE
The Mann-Whitney U test can be used to understand whether salaries, measured on a continuous scale, differed based on educational
level (i.e., your dependent variable would be "salary" and your independent variable would be "educational level", which has two groups: "high
school" and "university").
NULL HYPOTHESIS FOR THE TEST
The null hypothesis for the test is H0: the population medians are equal. The non-directional alternative hypothesis is H1: the population
medians are not equal. In other words, the test compares two populations; one formulation of the null hypothesis is that there is a 50%
probability that a randomly drawn member of the first population exceeds a randomly drawn member of the second. An equivalent formulation
is that the two samples come from the same population (i.e., that they have the same median).
For larger samples, the following formula is used for each group (or the test can be run in SPSS):
U = R − n(n + 1)/2
where R is the sum of ranks in the sample and n is the number of items in the sample.
For a smaller sample, the DIRECT METHOD is used. The steps are as follows:
 Name the sample with the smaller ranks “sample 1” and the sample with the larger ranks “sample 2”. Choosing the sample with the
smaller ranks to be “sample 1” is optional, but it makes the computation easier.
 Take the first observation in sample 1 and count how many observations in sample 2 are smaller than it. If two observations are equal,
count the pair as one half. For example, if there are ten that are less and two that are equal: 10 + 2(1/2) = 11.
 Repeat the previous step for every observation in sample 1, and add up all of the resulting counts; the total is the U statistic.
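The direct method above can be sketched in a few lines of Python; the sample values are invented for illustration.

```python
# The direct method described above, as a small function; the sample
# values are invented for illustration.
def mann_whitney_u_direct(sample1, sample2):
    """Sum, over sample1, of the sample-2 values below each observation,
    counting ties as one half."""
    u = 0.0
    for x in sample1:
        u += sum(1 for y in sample2 if y < x)          # strictly smaller
        u += 0.5 * sum(1 for y in sample2 if y == x)   # ties count as 1/2
    return u

s1 = [3, 4, 2, 6, 2, 5]   # sample with the smaller ranks
s2 = [9, 7, 5, 10, 6, 8]  # sample with the larger ranks
print(mann_whitney_u_direct(s1, s2))   # 2.0
```

A handy check: the U statistics computed from each side always sum to n1 x n2 (here 2.0 + 34.0 = 36 = 6 x 6).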
ASSUMPTIONS
When you choose to analyze your data using a Mann-Whitney U test, part of the process involves checking to make sure that the data you want
to analyze can actually be analyzed using a Mann-Whitney U test. You need to do this because it is only appropriate to use a Mann-Whitney U
test if your data "passes" four assumptions that are required for a Mann-Whitney U test to give you a valid result. Assumptions are as follows
ASSUMPTION#1:
Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal variables include Likert items (e.g.,
a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 5-point scale explaining
how much a customer liked a product, ranging from "Not very much" to "Yes, a lot"). Examples of continuous variables include revision time
(measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so
forth.
ASSUMPTION#2:
Your independent variable should consist of two categorical, independent groups. Example independent variables that meet this criterion
include gender (2 groups: male or female), employment status (2 groups: employed or unemployed), smoker (2 groups: yes or no), and so forth.
ASSUMPTION#3:
You should have independence of observations, which means that there is no relationship between the observations in each group or
between the groups themselves. For example, there must be different participants in each group with no participant being in more than one
group. This is more of a study design issue than something you can test for, but it is an important assumption of the Mann-Whitney U test. If
your study fails this assumption, you will need to use another statistical test instead of the Mann-Whitney U test (e.g., a Wilcoxon signed-rank
test).
ASSUMPTION#4:
A Mann-Whitney U test can be used when your two variables are not normally distributed. However, in order to know how to interpret
the results from a Mann-Whitney U test, you have to determine whether your two distributions (i.e., the distribution of scores for both groups of
the independent variable; for example, 'males' and 'females' for the independent variable, 'gender') have the same shape.
MANN-WHITNEY U TEST PROCEDURE IN SPSS
TOPIC STATEMENT
The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire is related to teachers’ perception and
practices in classroom regarding following 3 learning difficulties
 Dyslexia
 Dyspraxia
 Autism
In the data, Totalq1 represents teachers’ perception while Total q2 represents teachers’ practices. A total of 34 participants completed the questionnaire.
Before applying the Mann-Whitney U test, first check the NORMALITY of the variable of interest using a simple procedure. Total
q1, which represents teachers’ perception, is the dependent variable in each of the two groups indicated by the grouping variable GENDER.
The steps are as follows:
STEPS
 Select Descriptive Statistics from the Analyze menu.
 Select Explore from the Descriptive Statistics sub-menu.
 Click on Reset button.
 Copy the Total q1 variable into Dependent List: box.
 Copy the Gender variable into the Factor List: box.
 Click on the Plots… button.
 On the screen that appears select the Histogram tick box.
 Unselect the Stem and leaf button.
 Click on the Continue button.
 Click on OK button.
Ideally, for a normal distribution, this histogram appears to be reasonably symmetric. Now move on to perform the Mann-Whitney U test by
following the steps below.
STEPS
After running the test, the first SPSS output table contains a summary of the rankings for the two groups; it is shown after the steps below.
 Select Non Parametric Tests from the Analyze menu.
 Select Legacy Dialogs from the Non Parametric Tests sub-menu.
 Select 2 Independent Samples from the Legacy Dialogs sub-menu.
 Click on the Reset button.
 Copy the Total q1 variable into the Test Variable List: box.
 Copy the Gender variable into the Grouping Variable List: box.
 Click on the Define Groups… button.
 Type 1 into the Group 1 box as MALE.
 Type 2 into the Group 2 box as FEMALE.
 Click on the Continue button.
 Click on the Exact… button.
 On the screen that appears select the Exact button.
 Click on the Continue button.
 Click on OK button.
Ranks
           Student gender   N    Mean Rank   Sum of Ranks
total q1   Male             17   16.97       288.50
           Female           17   18.03       306.50
           Total            34
The Mann-Whitney test works by first constructing a ranked list of the observations, labelled by their two groups. It then works upward from
the lowest observation, giving it rank 1, the next rank 2, and so on, right up to the largest observation, which in this case receives rank 34. If
several observations share the same value, they are each given the average of the ranks available: for example, if three observations tie for the
9th smallest value, then rather than receiving ranks 9, 10 and 11 they are each given rank (9 + 10 + 11)/3 = 10. The test works by comparing
the sums of the ranks in the two groups.
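The tie-handling rule just described, averaging the ranks available, is what scipy.stats.rankdata does by default (SciPy is assumed available; the values are invented for illustration):

```python
# Tied observations receive the average of the ranks available, as in
# the (9 + 10 + 11)/3 = 10 example above; scipy.stats.rankdata applies
# this rule by default (SciPy is assumed available).
from scipy.stats import rankdata

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 12]
ranks = rankdata(values)      # method="average" is the default
print(ranks[8:11])            # the three tied 9s each get rank 10.0
```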
The statistics required for the test are constructed from the ranks and shown in the table. Here we see that for GENDER category Male
we have 17 observations whose total sum of ranks is 288.50. This results in a mean rank of 16.97. By contrast for GENDER category Female we
have 17 observations whose total sum of ranks is 306.50. This results in a mean rank of 18.03. So GENDER category Female has a larger mean
rank than GENDER category Male and thus tends to take larger values.
The Mann Whitney test will now decide on whether this difference in mean ranks is significant or not as is illustrated in the second table.
The second SPSS output table contains details of the test itself and can be seen below.
Test Statistics(a)
                                 Total q1
Mann-Whitney U                   135.500
Wilcoxon W                       288.500
Z                                -.311
Asymp. Sig. (2-tailed)           .756
Exact Sig. [2*(1-tailed Sig.)]   .760b
a. Grouping Variable: Gender
b. Not corrected for ties.
The Mann-Whitney U statistic is calculated from the sums of the rankings, which are compared with what would be expected if the two groups
came from the same distribution. Consider each group in turn and work out a U statistic for it. The formula is the sum of the ranks minus
N x (N + 1)/2 for each group. For GENDER category Male the value is U1 = 288.50 - 17 x (17 + 1)/2 = 135.5, and for GENDER category
Female the value is U2 = 306.50 - 17 x (17 + 1)/2 = 153.5. U1 is less than U2, and it is the lower of the two U statistics that is reported when
giving the results. So here the value 135.5 is the U statistic, as shown.
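The arithmetic above can be verified in a few lines; a useful sanity check is that the two U statistics always sum to n1 x n2.

```python
# Recomputing the U statistics from the rank sums in the Ranks table:
# U = R - n(n + 1)/2 for each group, and the smaller value is reported.
n_male, r_male = 17, 288.50
n_female, r_female = 17, 306.50

u1 = r_male - n_male * (n_male + 1) / 2         # 288.50 - 153 = 135.5
u2 = r_female - n_female * (n_female + 1) / 2   # 306.50 - 153 = 153.5

# Sanity check: the two U statistics always sum to n1 x n2.
assert u1 + u2 == n_male * n_female
print(min(u1, u2))   # 135.5, the Mann-Whitney U in the SPSS output
```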
One way to interpret the Mann-Whitney U statistic is to convert it to a normal score by subtracting its mean and dividing by its standard
error and that is done in the Z row. Here the value of Z = -.311 and this can be compared with a standard normal distribution to get a sense of the
magnitude by which the groups differ.
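The conversion to a normal score can be sketched as follows. Note that SPSS also applies a correction for ties, which is why its Z of -.311 differs slightly from the uncorrected value computed here.

```python
# Converting U to a normal score as described above: subtract the mean
# n1*n2/2 and divide by the standard error sqrt(n1*n2*(n1+n2+1)/12).
# SPSS additionally corrects for ties, so its Z of -.311 differs
# slightly from this uncorrected value.
import math

n1 = n2 = 17
u = 135.5
mean_u = n1 * n2 / 2                               # 144.5
se_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)     # about 29.03
z = (u - mean_u) / se_u
print(round(z, 3))   # -0.31, close to the -.311 in the output
```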
The p-value, quoted next to Asymp. Sig. (2-tailed), is .756. Since this is more than 0.05, there is no significant evidence to reject the
null hypothesis.
WILCOXON SIGNED-RANK TEST USING SPSS STATISTICS
INTRODUCTION
The Wilcoxon signed-rank test is the nonparametric test equivalent to the dependent t-test. As the Wilcoxon signed-rank test does not
assume normality in the data, it can be used when this assumption has been violated and the use of the dependent t-test is inappropriate. It is used
to compare two sets of scores that come from the same participants. This can occur when we wish to investigate any change in scores from one
time point to another, or when individuals are subjected to more than one condition.
Example
You could use a Wilcoxon signed-rank test to understand whether there was a difference in smokers' daily cigarette consumption before and
after a 6 week hypnotherapy programme (i.e., your dependent variable would be "daily cigarette consumption", and your two related groups
would be the cigarette consumption values "before" and "after" the hypnotherapy programme).
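Assuming SciPy is available, the before/after comparison in this example can be sketched with scipy.stats.wilcoxon; the daily cigarette counts below are invented for illustration.

```python
# A sketch of the before/after comparison with scipy.stats.wilcoxon
# (SciPy is assumed available); the daily cigarette counts below are
# invented for illustration.
from scipy.stats import wilcoxon

before = [20, 25, 18, 30, 22, 15, 28, 24, 19, 26]
after  = [15, 20, 17, 22, 18, 15, 21, 20, 16, 19]

# Pairs are compared within subjects; zero differences are excluded by
# the default zero_method="wilcox", much as SPSS lists Ties separately.
stat, p = wilcoxon(before, after)
print(stat, round(p, 4))
```

The statistic is the smaller of the positive- and negative-rank sums; a small p-value here would indicate a significant change in consumption.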
This "quick start" guide shows you how to carry out a Wilcoxon signed-rank test using SPSS Statistics, as well as interpret and report the results
from this test. However, before we introduce you to this procedure, you need to understand the different assumptions that your data must meet in
order for a Wilcoxon signed-rank test to give you a valid result. We discuss these assumptions next.
ASSUMPTIONS
When you choose to analyse your data using a Wilcoxon signed-rank test, part of the process involves checking to make sure that the data you
want to analyse can actually be analysed using a Wilcoxon signed-rank test. You need to do this because it is only appropriate to use a Wilcoxon
signed-rank test if your data "passes" three assumptions that are required for a Wilcoxon signed-rank test to give you a valid result.
Assumption #1:
Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal variables include Likert items
(e.g., a 7-point item from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 5-point item
explaining how much a customer liked a product, ranging from "Not very much" to "Yes, a lot"). Examples of continuous
variables (i.e., interval or ratio variables) include revision time (measured in hours), intelligence (measured using IQ score), exam performance
(measured from 0 to 100), weight (measured in kg), and so forth.
Assumption #2:
Your independent variable should consist of two categorical, "related groups" or "matched pairs". "Related groups" indicates that
the same subjects are present in both groups. The reason that it is possible to have the same subjects in each group is because each subject has
been measured on two occasions on the same dependent variable. For example, you might have measured 10 individuals' performance in a
spelling test (the dependent variable) before and after they underwent a new form of computerized teaching method to improve spelling. You
would like to know if the computer training improved their spelling performance. The Wilcoxon signed-rank test can also be used to compare
different subjects within a "matched-pairs" study design, but this does not happen very often. Nonetheless, to learn more about the different
study designs you use with a Wilcoxon signed-rank test, see our enhanced Wilcoxon signed-rank test guide.
Assumption #3:
The distribution of the differences between the two related groups (i.e., the distribution of differences between the scores of both
groups of the independent variable; for example, the reaction time in a room with "blue lighting" and a room with "red lighting") needs to
be symmetrical in shape. If the distribution of differences is symmetrically shaped, you can analyse your study using the Wilcoxon signed-rank
test. In practice, checking for this assumption just adds a little bit more time to your analysis, requiring you to click a few more buttons in SPSS
Statistics when performing your analysis, as well as think a little bit more about your data, but it is not a difficult task. However, do not be
surprised if, when analysing your own data using SPSS Statistics, this assumption is violated (i.e., is not met). This is not uncommon when
working with real-world data rather than textbook examples, which often only show you how to carry out a Wilcoxon signed-rank test when
everything goes well! However, even when your data fails this assumption, there is often a solution to overcome this, such as transforming your
data to achieve a symmetrically-shaped distribution of differences (not a preferred option) or running a sign test instead of the Wilcoxon signed-
rank test.
TEST PROCEDURE IN SPSS STATISTICS
TOPIC STATEMENT
The relationship between teachers’ perceptions, practices and students’ performance. The statements given below are related to teachers’
perception regarding following three learning difficulties:
Dyslexia
Dyspraxia
Autism
In the SPSS data file, Total q1 represents teachers’ perceptions and Total q2 represents teachers’ practices; data from a total of 34 questionnaires were entered into SPSS.
STEP#1
Click Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... on the top menu. You will be presented with the Two-
Related-Samples Tests dialogue box
STEP#2
Transfer the variables you are interested in analyzing into the Test Pairs: box. For our data, we need to transfer the variables Total
q1 and Total q2, which represent teachers’ perception and teachers’ practices related to learning difficulties, respectively. There are two
ways to do this. You can either: (1) highlight both variables (use the cursor while holding down the shift key) and then press the arrow
button; or (2) drag and drop each variable into the boxes.
STEP#3
Make sure that the Wilcoxon checkbox is ticked in the –Test Type– area.
STEP#4
Generate descriptives or quartiles for your variables by clicking on the Options… button and ticking
the Descriptive and Quartiles checkboxes in the –Statistics– area.
1. Click on the Continue button. You will be returned to the Two-Related-Samples Tests dialogue box.
2. Click on the OK button.
STEP#5
In output file we get the following tables
Descriptive Statistics
           N    Mean     Std. Deviation   Minimum   Maximum
totalq1    34   110.00   12.52634         80.00     141.00
totalq2    34   110.85   12.13842         81.00     141.00

Percentiles
           25th     50th (Median)   75th
Totalq1    104.00   109.00          115.00
Totalq2    106.00   110.50          115.00
Wilcoxon Signed Ranks Test

Test Statistics(b)
                          totalq2 - totalq1
Z                         -1.826a
Asymp. Sig. (2-tailed)    .068
a. Based on negative ranks.
b. Wilcoxon Signed Ranks Test

Ranks
                                      N     Mean Rank   Sum of Ranks
totalq2 - totalq1    Negative Ranks   0a    .00         .00
                     Positive Ranks   4b    2.50        10.00
                     Ties             30c
                     Total            34
a. totalq2 < totalq1
b. totalq2 > totalq1
c. totalq2 = totalq1
The Wilcoxon signed-rank test statistic is the z score, -1.826, and the p-value we are looking for appears in the “Asymp. Sig. (2-tailed)”
row: .068. As this p-value is greater than 0.05, we fail to reject the null hypothesis: there is no statistically significant difference in the
median scores between our two related groups. We report the Wilcoxon signed-rank test using the Z statistic.
KRUSKAL WALLIS TEST
INTRODUCTION:
The Kruskal-Wallis H test (sometimes also called the "one-way ANOVA on ranks") is a rank-based nonparametric test that can be used to
determine if there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal
dependent variable. It is considered the nonparametric alternative to the one-way ANOVA, and an extension of the Mann-Whitney U test to
allow the comparison of more than two independent groups.
Example:
For example, you could use a Kruskal-Wallis H test to understand whether exam performance, measured on a continuous scale from 0-100,
differed based on test anxiety levels (i.e., your dependent variable would be "exam performance" and your independent variable would be "test
anxiety level", which has three independent groups: students with "low", "medium" and "high" test anxiety levels). Alternately, you could use
the Kruskal-Wallis H test to understand whether attitudes towards pay discrimination, where attitudes are measured on an ordinal scale, differed
based on job position (i.e., your dependent variable would be "attitudes towards pay discrimination", measured on a 5-point scale from "strongly
agree" to "strongly disagree", and your independent variable would be "job description", which has three independent groups: "shop floor",
"middle management" and "boardroom").
ASSUMPTIONS:
When you choose to analyse your data using a Kruskal-Wallis H test, part of the process involves checking to make sure that the data you want
to analyse can actually be analysed using a Kruskal-Wallis H test. You need to do this because it is only appropriate to use a Kruskal-Wallis H
test if your data "passes" four assumptions that are required for a Kruskal-Wallis H test to give you a valid result. In practice, checking for these
four assumptions just adds a little bit more time to your analysis, requiring you to click a few more buttons in SPSS Statistics when performing
your analysis, as well as think a little bit more about your data, but it is not a difficult task.
Before we introduce you to these four assumptions, do not be surprised if, when analysing your own data using SPSS Statistics, one or more of
these assumptions is violated (i.e., is not met). This is not uncommon when working with real-world data rather than textbook examples, which
often only show you how to carry out a Kruskal-Wallis H test when everything goes well! However, don’t worry. Even when your data fails
certain assumptions, there is often a solution to overcome this. First, let’s take a look at these four assumptions:
 Assumption #1: Your dependent variable should be measured at the ordinal or continuous level (i.e., interval or ratio). Examples
of ordinal variables include Likert scales (e.g., a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways
of ranking categories (e.g., a 3-point scale explaining how much a customer liked a product, ranging from "Not very much", to "It is OK",
to "Yes, a lot"). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score),
exam performance (measured from 0 to 100), weight (measured in kg), and so forth.
 Assumption #2: Your independent variable should consist of two or more categorical, independent groups. Typically, a Kruskal-Wallis
H test is used when you have three or more categorical, independent groups, but it can be used for just two groups (i.e., a Mann-Whitney
U test is more commonly used for two groups). Example independent variables that meet this criterion include ethnicity (e.g., three
groups: Caucasian, African American and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high),
profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth.
 Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in
each group or between the groups themselves. For example, there must be different participants in each group with no participant being
in more than one group. This is more of a study design issue than something you can test for, but it is an important assumption of the
Kruskal-Wallis H test. If your study fails this assumption, you will need to use another statistical test instead of the Kruskal-Wallis H test
(e.g., a Friedman test). If you are unsure whether your study meets this assumption, you can use our Statistical Test Selector, which is
part of our enhanced content.
As the Kruskal-Wallis H test does not assume normality in the data and is much less sensitive to outliers, it can be used when these
assumptions have been violated and the use of a one-way ANOVA is inappropriate. In addition, if your data is ordinal, a one-way
ANOVA is inappropriate, but the Kruskal-Wallis H test is not. However, the Kruskal-Wallis H test does come with an additional data
consideration, Assumption #4, which is discussed below:
 Assumption #4: In order to know how to interpret the results from a Kruskal-Wallis H test, you have to determine whether
the distributions in each group (i.e., the distribution of scores for each group of the independent variable) have the same shape (which
also means the same variability). To understand what this means, take a look at the diagram below:
In the diagram on the left above, the distribution of scores for the "Caucasian", "African American" and "Hispanic" groups have
the same shape. On the other hand, in the diagram on the right above, the distributions of scores for each group are not identical (i.e., they
have different shapes and variabilities). If your distributions have the same shape, you can use SPSS Statistics to carry out a Kruskal-Wallis H
test to compare the medians of your dependent variable (e.g., "engagement score") for the different groups of the independent variable you are
interested in (e.g., the groups, Caucasian, African American and Hispanic, for the independent variable, "ethnicity"). However, if your
distributions have a different shape, you can only use the Kruskal-Wallis H test to compare mean ranks. Having similar distributions simply
allows you to use medians to represent a shift in location between the groups (as illustrated in the diagram on the left above). As such, it is very
important to check this assumption or you can end up interpreting your results incorrectly.
TEST PROCEDURES IN SPSS STATISTICS:
TOPIC STATEMENT
The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire covers teachers’ perceptions and
practices in the classroom regarding the following three learning difficulties:
Dyslexia
Dyspraxia
Autism
In the data, Totalq1 represents teachers’ perceptions while Totalq2 represents teachers’ practices.
A total of 34 participants completed the questionnaire.
STEPS
The steps below show you how to analyse your data using the Kruskal-Wallis H test in SPSS Statistics. At the end of these steps, we
show you how to interpret the results from your Kruskal-Wallis H test.
1. Click Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... on the top menu
2. Transfer the dependent variable, totalq2, into the Test Variable List: box and the independent variable, Qualification, into
the Grouping Variable: box. You can transfer these variables either by dragging-and-dropping each variable into the appropriate box or by
highlighting (i.e., clicking on) each variable and using the appropriate button.
3. Click on the Define Range... button. You will be presented with the "Several Independent Samples: Define Range" dialogue box.
4. Enter "1" into the Minimum: box and "5" into the Maximum box. These values represent the range of codes you gave the groups of the
independent variable.
5. Click on the Continue button and you will be returned to the "Tests for Several Independent Samples" dialogue box, but now with a
completed Grouping Variable: box.
6. Click on the Options... button. You will be presented with the "Several Independent Samples: Options" dialogue box.
7. Select the Descriptive checkbox if you want descriptives and/or the Quartiles checkbox if you want medians and quartiles.
8. Click on the Continue button. You will be returned to the "Tests for Several Independent Samples" dialogue box.
9. Click on the OK button. This will generate the results.
SPSS STATISTICS OUTPUT FOR THE KRUSKAL-WALLIS H TEST:
You will be presented with the following output (the Descriptive Statistics table appears because the Descriptive checkbox was selected in
the "Several Independent Samples: Options" dialogue box):
Descriptive Statistics
               N    Mean       Std. Deviation   Minimum   Maximum
totalq2        34   110.8529   12.13842         81.00     141.00
Qualification  34   2.71       1.268            1         5
Kruskal-Wallis Test
Ranks
Qualification N Mean Rank
totalq2 BA 6 18.25
BS honors 11 13.27
MA/ MSc 8 14.50
M.Phil 5 21.40
Any other 4 29.13
Total 34
Test Statistics(a,b)
                                                  totalq2
Chi-Square                                        8.995
df                                                4
Asymp. Sig.                                       .061
Monte Carlo Sig.   Sig.                           .052(c)
                   99% Confidence   Lower Bound   .047
                   Interval         Upper Bound   .058
a. Kruskal Wallis Test
b. Grouping Variable: Qualification
c. Based on 10000 sampled tables with starting seed 2000000.
The mean rank (i.e., the "Mean Rank" column in the Ranks table) of totalq2 (teachers’ practices) for each qualification group can
be used to compare the effect of the different groups. Whether these groups have different scores can be assessed using the Test Statistics table,
which presents the result of the Kruskal-Wallis H test: the chi-squared statistic (the "Chi-Square" row), the degrees of freedom (the "df"
row) and the statistical significance of the test (the "Asymp. Sig." row). Here, χ2(4) = 8.995, p = .061, so at the conventional .05 level the
difference in mean ranks across the qualification groups is not statistically significant.
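For readers working outside SPSS, the same Kruskal-Wallis H test can be cross-checked in Python with scipy (an addition here, not part of the original SPSS workflow). The group scores below are hypothetical stand-ins, since the raw totalq2 data are not reproduced in this handout:

```python
from scipy.stats import kruskal

# Hypothetical totalq2 (practice) scores for three qualification groups;
# the actual study compared five groups with 34 participants in total.
ba        = [105, 118, 112, 99, 121, 108]
bs_honors = [101, 110, 97, 115, 104, 109, 112]
mphil     = [120, 126, 117, 131, 123]

# Kruskal-Wallis H test: compares the mean ranks of independent groups.
h_stat, p_value = kruskal(ba, bs_honors, mphil)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")

# Interpret against alpha = .05, as in the SPSS output above.
if p_value < 0.05:
    print("Reject H0: at least one group differs.")
else:
    print("Fail to reject H0: no evidence of a group difference.")
```

scipy reports the same chi-square-distributed H statistic that SPSS labels "Chi-Square" in its Test Statistics table.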
FRIEDMAN TEST
INTRODUCTION
The Friedman test is the non-parametric alternative to the one-way ANOVA with repeated measures. It is used to test for differences
between groups when the dependent variable being measured is ordinal. It can also be used for continuous data that has violated the assumptions
necessary to run the one-way ANOVA with repeated measures (e.g., data that has marked deviations from normality).
Example
A researcher wants to examine whether music has an effect on the perceived psychological effort required to perform an exercise session.
The dependent variable is "perceived effort to perform exercise" and the independent variable is "music type", which consists of three groups:
"no music", "classical music" and "dance music". To test whether music has an effect on the perceived psychological effort required to perform
an exercise session, the researcher recruited 12 runners who each ran three times on a treadmill for 30 minutes. For consistency, the treadmill
speed was the same for all three runs. In a random order, each subject ran: (a) listening to no music at all; (b) listening to classical music; and (c)
listening to dance music. At the end of each run, subjects were asked to record how hard the running session felt on a scale of 1 to 10, with 1
being easy and 10 extremely hard. A Friedman test was then carried out to see if there were differences in perceived effort based on music type.
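The analysis of this design can be sketched outside SPSS as well. The snippet below uses Python's scipy (an assumption of this sketch, not part of the original text), with invented effort ratings for the 12 runners:

```python
from scipy.stats import friedmanchisquare

# Invented perceived-effort ratings (1 = easy, 10 = extremely hard) for
# the same 12 runners under each of the three music conditions.
no_music  = [8, 7, 9, 6, 8, 7, 9, 8, 7, 8, 9, 7]
classical = [6, 6, 7, 5, 7, 6, 8, 6, 5, 7, 7, 6]
dance     = [5, 6, 6, 4, 6, 5, 7, 5, 5, 6, 6, 5]

# Friedman test: nonparametric repeated-measures comparison of k >= 3
# related conditions, based on within-subject ranks.
chi2, p = friedmanchisquare(no_music, classical, dance)
print(f"chi-square = {chi2:.3f}, df = 2, p = {p:.4f}")
```

With these invented ratings, perceived effort falls steadily from no music to dance music, so the test comes out significant.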
ASSUMPTIONS
When you choose to analyse your data using a Friedman test, part of the process involves checking to make sure that the data you want to
analyse can actually be analysed using a Friedman test. You need to do this because it is only appropriate to use a Friedman test if your data
"passes" the following four assumptions:
 Assumption #1: One group that is measured on three or more different occasions.
 Assumption #2: Group is a random sample from the population.
 Assumption #3: Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal
variables include Likert scales (e.g., a 7-point scale from strongly agree through to strongly disagree), amongst other ways of ranking
categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from "Not very much" to "Yes, a lot").
Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance
(measured from 0 to 100), weight (measured in kg), and so forth.
 Assumption #4: Samples do NOT need to be normally distributed.
The Friedman test procedure in SPSS Statistics will not test any of the assumptions that are required for this test. In most cases, this is
because the assumptions are a methodological or study design issue, and not what SPSS Statistics is designed for. In the case of assessing the
types of variable you are using, SPSS Statistics will not give you any errors if you incorrectly label your variables as nominal.
SETUP IN SPSS STATISTICS
SPSS Statistics puts all repeated measures data on the same row in its Data View. Therefore, you will need as many variables as you have related
groups.
FRIEDMAN TEST PROCEDURE IN SPSS STATISTICS
Topic statement
The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire covers teachers’ perceptions and
practices in the classroom regarding the following three learning difficulties:
 Dyslexia
 Dyspraxia
 Autism
In the data, Totalq1 represents teachers’ perceptions while Totalq2 represents teachers’ practices.
A total of 34 participants completed the questionnaire.
Steps
1. Click Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... on the top menu.
2. You will be presented with the Tests for Several Related Samples dialogue box.
3. Transfer the related variables (here, totalq1 and totalq2) to the Test Variables: box by dragging-and-dropping the variables into the box
or by using the arrow button.
4. Make sure that Friedman is selected in the –Test Type– area.
5. Click on the Statistics... button. You will be presented with the Several Related Samples: Statistics dialogue box.
6. Tick the Quartiles option.
7. Click on the Continue button. This will return you to the Tests for Several Related Samples dialogue box.
8. Click on the OK button to run the Friedman test.
SPSS Statistics Output for the Friedman Test:
SPSS Statistics will generate either two or three tables, depending on whether you selected to have descriptives and/or quartiles generated in
addition to running the Friedman test.
Descriptive Statistics Table
The Descriptive Statistics table will be produced if you selected the Quartiles option:

Descriptive Statistics
           N     Percentiles
                 25th       50th (Median)   75th
totalq2    34    106.0000   110.5000        115.0000
totalq1    34    104.0000   109.0000        115.0000
 Ranks Table
The Ranks table shows the mean rank for each of the related groups, as shown below:
Ranks
Mean Rank
totalq2 1.56
totalq1 1.44
The Ranks table is included because the Friedman test compares the mean ranks between the related groups and indicates how the groups
differed. However, you are not very likely to report these mean rank values in your results section; you will most likely report the median value
for each related group instead.
 Test Statistics Table
The Test Statistics table informs you of the actual result of the Friedman test, and whether there was an overall statistically significant
difference between the mean ranks of your related groups. The table looks as follows:
Test Statisticsa
N 34
Chi-Square 4.000
df 1
Asymp. Sig. .046
a. Friedman Test
The table above provides the test statistic (χ2) value ("Chi-Square"), degrees of freedom ("df") and the significance level ("Asymp. Sig."), which
is all we need to report the result of the Friedman test. Since χ2(1) = 4.000, p = .046 < .05, there is an overall statistically significant difference
between the mean ranks of the related groups.
SPEARMAN’S RANK ORDER CORRELATION
INTRODUCTION
The Spearman rank-order correlation coefficient (Spearman’s correlation, for short) is a nonparametric measure of the strength and
direction of association that exists between two variables measured on at least an ordinal scale. It is denoted by the symbol rs (or the Greek letter
ρ, pronounced rho). The rs value ranges from -1 to +1. The test is used for either ordinal variables or for continuous data that has failed the assumptions
necessary for conducting the Pearson's product-moment correlation. For example, you could use a Spearman’s correlation to understand whether
there is an association between exam performance and time spent revising.
TYPE OF VARIABLE USE
You need two variables that are ordinal, interval or ratio. Although you would normally hope to use a Pearson product-moment
correlation on interval or ratio data, the Spearman correlation can be used when the assumptions of the Pearson correlation are markedly
violated. However, Spearman's correlation determines the strength and direction of the monotonic relationship between your two variables
rather than the strength and direction of the linear relationship between your two variables.
MONOTONIC RELATIONSHIP
A monotonic relationship is a relationship that does one of the following:
(1) As the value of one variable increases, so does the value of the other variable; or (2) As the value of one variable increases, the other variable
value decreases. Examples of monotonic and non-monotonic relationships are presented in the diagram below:
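The distinction also shows up numerically. As a minimal sketch (using Python's scipy, an assumption of this example rather than part of the original material), data that rise monotonically but not linearly give a perfect Spearman coefficient while the Pearson coefficient stays below 1:

```python
from scipy.stats import pearsonr, spearmanr

# y = x**3 increases monotonically with x, but the trend is curved,
# not linear.
x = list(range(1, 11))
y = [v ** 3 for v in x]

r_pearson, _ = pearsonr(x, y)    # strength of the LINEAR relationship
r_spearman, _ = spearmanr(x, y)  # strength of the MONOTONIC relationship

# Spearman works on ranks, so any strictly increasing relationship
# gives rs = 1.0; Pearson is penalised by the curvature.
print(f"Pearson r = {r_pearson:.3f}, Spearman rs = {r_spearman:.3f}")
```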
ASSUMPTIONS
When you choose to analyze your data using Spearman’s correlation, part of the process involves checking to make sure that the data you
want to analyze can actually be analyzed using a Spearman’s correlation. You need to do this because it is only appropriate to use a Spearman’s
correlation if your data "passes" three assumptions that are required for Spearman’s correlation to give you a valid result.
These three assumptions are:
Assumption #1:
Your two variables should be measured on an ordinal, interval or ratio scale. Examples of ordinal variables include Likert scales (e.g.,
a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 3-point scale
explaining how much a customer liked a product, ranging from "Not very much", to "It is OK", to "Yes, a lot").
Assumption #2:
Your two variables represent paired observations. For example, imagine that you were interested in the relationship between daily
cigarette consumption and amount of exercise performed each week. A single paired observation reflects the score on each variable for a single
participant (e.g., the daily cigarette consumption of "Participant 1" and the amount of exercise performed each week by "Participant 1"). With 30
participants in the study, this means that there would be 30 paired observations.
Assumption #3:
There is a monotonic relationship between the two variables. There are a number of ways to check whether a monotonic relationship
exists between your two variables; we suggest creating a scatterplot using SPSS Statistics, where you can plot one variable against the other, and
then visually inspect the scatterplot to check for monotonicity.
In terms of assumption #3 above, you can check this using SPSS Statistics. If your two variables do not appear to have a monotonic
relationship, you might consider using a different statistical test.
TEST PROCEDURE IN SPSS STATISTICS
TOPIC STATEMENT
The relationship between teachers’ perceptions, practices and students’ performance. The statements given below are related to teachers’
perceptions regarding the following three learning difficulties:
Dyslexia
Dyspraxia
Autism
In the SPSS data file, Totalq1 shows teachers’ perceptions and Totalq2 shows teachers’ practices, and data from all 34 questionnaires were
entered into SPSS.
STEP#1
To apply Spearman’s rank-order correlation to the data, first check the monotonicity of the data using a simple process:
Click Graphs > Legacy Dialogs > Scatter/Dot > Simple Scatter > Define, place totalq2 on the Y-axis and totalq1 on the X-axis, then click OK
to get the output. The graph shows a monotonic relationship.
STEP#2
Click Analyze > Correlate > Bivariate... on the main menu. You will be presented with the Bivariate Correlations dialogue box.
Transfer the variables totalq1 and totalq2 into the Variables: box by dragging-and-dropping the variables or by clicking each variable and then
clicking on the arrow button. Select the Spearman checkbox in the –Correlation Coefficients– area. Click on the OK button. This will
generate the results.
Nonparametric Correlations
Correlations
                                                   totalq1   totalq2
Spearman's rho  totalq1   Correlation Coefficient  1.000     .921**
                          Sig. (2-tailed)          .         .000
                          N                        34        34
                totalq2   Correlation Coefficient  .921**    1.000
                          Sig. (2-tailed)          .000      .
                          N                        34        34
**. Correlation is significant at the 0.01 level (2-tailed).
NONPAR CORR
/VARIABLES=Totalq1 totalq2
/PRINT=SPEARMAN TWOTAIL NOSIG
/MISSING=PAIRWISE.
Since SPSS reports the p-value for this test as .000, we have very strong evidence that the values are monotonically correlated in
the population. "A Spearman's correlation was run to determine the relationship between the scores of the 34 participants. There was a strong,
positive monotonic correlation between teacher perception (totalq1) and teacher practice (totalq2) (rs = .921, n = 34, p < .001)."
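As a cross-check on the SPSS procedure above, the same kind of result can be computed in Python with scipy (an assumption of this sketch; hypothetical paired scores are used below, since the raw totalq1/totalq2 data are not reproduced here):

```python
from scipy.stats import spearmanr

# Hypothetical paired observations: one perception score (totalq1) and
# one practice score (totalq2) per participant.
perception = [88, 92, 75, 81, 95, 70, 84, 90, 78, 86]
practice   = [90, 95, 78, 80, 97, 72, 85, 93, 80, 88]

rs, p = spearmanr(perception, practice)
n = len(perception)

# Report in the same style as the write-up above.
print(f"rs = {rs:.3f}, n = {n}, p = {p:.4f}")
```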
STEP#3
Now rank the data: click Transform > Rank Cases..., transfer the variables (totalq1 and totalq2) into the Variable(s): box by
dragging-and-dropping, then click OK. The ranks appear as new columns in Data View.
In output file we get;
RANK
Created Variablesb
Source Variable Function New Variable Label
Totalq1a Rank RTotalq1 Rank of Totalq1
totalq2a Rank Rtotalq2 Rank of totalq2
a. Ranks are in ascending order.
b. Mean rank of tied values is used for ties.
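Steps 3-4 work because Spearman's rho is, by definition, Pearson's r computed on the ranked data. A short sketch (Python/scipy assumed, with invented scores) mirrors the options reported in the Created Variables table above: ranks in ascending order, mean rank for tied values:

```python
from scipy.stats import pearsonr, rankdata, spearmanr

x = [12, 15, 15, 20, 22, 22, 22, 30]   # invented scores with ties
y = [5, 9, 7, 14, 18, 15, 16, 25]

# Rank cases in ascending order, using the mean rank for tied values --
# the same settings shown in the Created Variables table above.
rx = rankdata(x, method="average")
ry = rankdata(y, method="average")

# Pearson's r on the ranks equals Spearman's rho on the raw data,
# which is why Step 4 reproduces the correlation from Step 2.
r_on_ranks, _ = pearsonr(rx, ry)
rs, _ = spearmanr(x, y)
print(f"Pearson on ranks = {r_on_ranks:.6f}, Spearman = {rs:.6f}")
```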
STEP#4
Click Analyze > Correlate > Bivariate... on the main menu. You will be presented with the Bivariate Correlations dialogue box.
Transfer the variables Rank of Totalq1 and Rank of totalq2 into the Variables: box by dragging-and-dropping the variables or by clicking each
variable and then clicking on the arrow button. Select the Spearman checkbox in the –Correlation Coefficients– area. Click on the OK
button. This will generate the results.
Nonparametric Correlations
Correlations
                                                           Rank of    Rank of
                                                           Totalq1    totalq2
Spearman's rho  Rank of Totalq1   Correlation Coefficient  1.000      .921**
                                  Sig. (2-tailed)          .          .000
                                  N                        34         34
                Rank of totalq2   Correlation Coefficient  .921**     1.000
                                  Sig. (2-tailed)          .000       .
                                  N                        34         34
**. Correlation is significant at the 0.01 level (2-tailed).
Since SPSS reports the p-value for this test as .000, we have very strong evidence that the values are monotonically correlated in
the population. "A Spearman's correlation was run to determine the relationship between the scores of the 34 participants. There was a strong,
positive monotonic correlation between teacher perception (Rank of Totalq1) and teacher practice (Rank of totalq2) (rs = .921, n = 34, p < .001)."
More Related Content

What's hot

T test, independant sample, paired sample and anova
T test, independant sample, paired sample and anovaT test, independant sample, paired sample and anova
T test, independant sample, paired sample and anova
Qasim Raza
 
Correlational research
Correlational researchCorrelational research
Correlational research
Azura Zaki
 

What's hot (20)

Basic research
Basic researchBasic research
Basic research
 
Analysis of data in research
Analysis of data in researchAnalysis of data in research
Analysis of data in research
 
Action Research Proposal: Research Procedures
Action Research Proposal: Research ProceduresAction Research Proposal: Research Procedures
Action Research Proposal: Research Procedures
 
01 validity and its type
01 validity and its type01 validity and its type
01 validity and its type
 
Reliability and validity ppt
Reliability and validity pptReliability and validity ppt
Reliability and validity ppt
 
What is Reliability and its Types?
What is Reliability and its Types? What is Reliability and its Types?
What is Reliability and its Types?
 
T test, independant sample, paired sample and anova
T test, independant sample, paired sample and anovaT test, independant sample, paired sample and anova
T test, independant sample, paired sample and anova
 
COSMIN 尺度開発研究の質の評価(2018)井上和哉 発表(各指標特性の基準等)
COSMIN 尺度開発研究の質の評価(2018)井上和哉 発表(各指標特性の基準等)COSMIN 尺度開発研究の質の評価(2018)井上和哉 発表(各指標特性の基準等)
COSMIN 尺度開発研究の質の評価(2018)井上和哉 発表(各指標特性の基準等)
 
Reporting Phi Coefficient test in APA
Reporting Phi Coefficient test in APAReporting Phi Coefficient test in APA
Reporting Phi Coefficient test in APA
 
Confirmatory factor analysis (cfa)
Confirmatory factor analysis (cfa)Confirmatory factor analysis (cfa)
Confirmatory factor analysis (cfa)
 
Quantitative data analysis - John Richardson
Quantitative data analysis - John RichardsonQuantitative data analysis - John Richardson
Quantitative data analysis - John Richardson
 
Validity.pptx
Validity.pptxValidity.pptx
Validity.pptx
 
Reliability and validity
Reliability and validityReliability and validity
Reliability and validity
 
Exploratory research
Exploratory researchExploratory research
Exploratory research
 
MANOVA SPSS
MANOVA SPSSMANOVA SPSS
MANOVA SPSS
 
Point biserial correlation
Point biserial correlationPoint biserial correlation
Point biserial correlation
 
Correlational research
Correlational researchCorrelational research
Correlational research
 
Quantitative Research: Surveys and Experiments
Quantitative Research: Surveys and ExperimentsQuantitative Research: Surveys and Experiments
Quantitative Research: Surveys and Experiments
 
Non parametric test
Non parametric testNon parametric test
Non parametric test
 
Validity, Reliability and Feasibility
Validity, Reliability and FeasibilityValidity, Reliability and Feasibility
Validity, Reliability and Feasibility
 

Similar to Nonparametric tests assignment

Parametric vs non parametric test
Parametric vs non parametric testParametric vs non parametric test
Parametric vs non parametric test
ar9530
 
Answer all questions individually and cite all work!!1. Provid.docx
Answer all questions individually and cite all work!!1. Provid.docxAnswer all questions individually and cite all work!!1. Provid.docx
Answer all questions individually and cite all work!!1. Provid.docx
festockton
 

Similar to Nonparametric tests assignment (20)

Non Parametric Test by Vikramjit Singh
Non Parametric Test by  Vikramjit SinghNon Parametric Test by  Vikramjit Singh
Non Parametric Test by Vikramjit Singh
 
Stat topics
Stat topicsStat topics
Stat topics
 
Chi square and t tests, Neelam zafar & group
Chi square and t tests, Neelam zafar & groupChi square and t tests, Neelam zafar & group
Chi square and t tests, Neelam zafar & group
 
UNIT 5.pptx
UNIT 5.pptxUNIT 5.pptx
UNIT 5.pptx
 
Non parametric study; Statistical approach for med student
Non parametric study; Statistical approach for med student Non parametric study; Statistical approach for med student
Non parametric study; Statistical approach for med student
 
Parametric vs non parametric test
Parametric vs non parametric testParametric vs non parametric test
Parametric vs non parametric test
 
Chisquare Test of Association.pdf in biostatistics
Chisquare Test of Association.pdf in biostatisticsChisquare Test of Association.pdf in biostatistics
Chisquare Test of Association.pdf in biostatistics
 
Chi squared test
Chi squared testChi squared test
Chi squared test
 
NON-PARAMETRIC TESTS by Prajakta Sawant
NON-PARAMETRIC TESTS by Prajakta SawantNON-PARAMETRIC TESTS by Prajakta Sawant
NON-PARAMETRIC TESTS by Prajakta Sawant
 
Research Procedure
Research ProcedureResearch Procedure
Research Procedure
 
Answer all questions individually and cite all work!!1. Provid.docx
Answer all questions individually and cite all work!!1. Provid.docxAnswer all questions individually and cite all work!!1. Provid.docx
Answer all questions individually and cite all work!!1. Provid.docx
 
Presentation chi-square test & Anova
Presentation   chi-square test & AnovaPresentation   chi-square test & Anova
Presentation chi-square test & Anova
 
Statistical test
Statistical test Statistical test
Statistical test
 
Non parametric test
Non parametric testNon parametric test
Non parametric test
 
9618821.ppt
9618821.ppt9618821.ppt
9618821.ppt
 
9618821.pdf
9618821.pdf9618821.pdf
9618821.pdf
 
Chi – square test
Chi – square testChi – square test
Chi – square test
 
non parametric test.pptx
non parametric test.pptxnon parametric test.pptx
non parametric test.pptx
 
Statistic and orthodontic by almuzian
Statistic and orthodontic by almuzianStatistic and orthodontic by almuzian
Statistic and orthodontic by almuzian
 
Inferential statistics quantitative data - single sample and 2 groups
Inferential statistics   quantitative data - single sample and 2 groupsInferential statistics   quantitative data - single sample and 2 groups
Inferential statistics quantitative data - single sample and 2 groups
 

More from ROOHASHAHID1 (11)

GENDER DESPERITY IN EDUCATION
GENDER DESPERITY IN EDUCATIONGENDER DESPERITY IN EDUCATION
GENDER DESPERITY IN EDUCATION
 
Case Study in Qualitative Research
 Case Study in Qualitative Research Case Study in Qualitative Research
Case Study in Qualitative Research
 
Recent trends and issues in assessment and evaluation
Recent trends and issues in assessment and evaluationRecent trends and issues in assessment and evaluation
Recent trends and issues in assessment and evaluation
 
Laboratory management
Laboratory managementLaboratory management
Laboratory management
 
Presentation of rooha shahid
Presentation of  rooha shahidPresentation of  rooha shahid
Presentation of rooha shahid
 
EDUCATION SYSTEM IN PAKISTAN
EDUCATION SYSTEM IN PAKISTANEDUCATION SYSTEM IN PAKISTAN
EDUCATION SYSTEM IN PAKISTAN
 
Measurement &; assessment in teaching
Measurement &; assessment in teachingMeasurement &; assessment in teaching
Measurement &; assessment in teaching
 
Education system
Education systemEducation system
Education system
 
Critical thinking
Critical thinkingCritical thinking
Critical thinking
 
Aristotle
AristotleAristotle
Aristotle
 
BANDURA SOCIAL LEARNING THEORY
BANDURA SOCIAL LEARNING THEORYBANDURA SOCIAL LEARNING THEORY
BANDURA SOCIAL LEARNING THEORY
 

Recently uploaded

sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444
saurabvyas476
 
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
yulianti213969
 
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Bertram Ludäscher
 
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get CytotecAbortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
pwgnohujw
 
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get CytotecAbortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi ArabiaIn Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
ahmedjiabur940
 
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
wsppdmt
 
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
jk0tkvfv
 

Recently uploaded (20)

sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444sourabh vyas1222222222222222222244444444
sourabh vyas1222222222222222222244444444
 
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
obat aborsi Tarakan wa 081336238223 jual obat aborsi cytotec asli di Tarakan9...
 
Northern New England Tableau User Group (TUG) May 2024
Northern New England Tableau User Group (TUG) May 2024Northern New England Tableau User Group (TUG) May 2024
Northern New England Tableau User Group (TUG) May 2024
 
DS Lecture-1 about discrete structure .ppt
DS Lecture-1 about discrete structure .pptDS Lecture-1 about discrete structure .ppt
DS Lecture-1 about discrete structure .ppt
 
Credit Card Fraud Detection: Safeguarding Transactions in the Digital Age
Credit Card Fraud Detection: Safeguarding Transactions in the Digital AgeCredit Card Fraud Detection: Safeguarding Transactions in the Digital Age
Credit Card Fraud Detection: Safeguarding Transactions in the Digital Age
 
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATIONCapstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
 
Identify Customer Segments to Create Customer Offers for Each Segment - Appli...
Identify Customer Segments to Create Customer Offers for Each Segment - Appli...Identify Customer Segments to Create Customer Offers for Each Segment - Appli...
Identify Customer Segments to Create Customer Offers for Each Segment - Appli...
 
Predictive Precipitation: Advanced Rain Forecasting Techniques
Predictive Precipitation: Advanced Rain Forecasting TechniquesPredictive Precipitation: Advanced Rain Forecasting Techniques
Predictive Precipitation: Advanced Rain Forecasting Techniques
 
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptxRESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
 
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
 
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get CytotecAbortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
Abortion pills in Doha {{ QATAR }} +966572737505) Get Cytotec
 
Identify Rules that Predict Patient’s Heart Disease - An Application of Decis...
Identify Rules that Predict Patient’s Heart Disease - An Application of Decis...Identify Rules that Predict Patient’s Heart Disease - An Application of Decis...
Identify Rules that Predict Patient’s Heart Disease - An Application of Decis...
 
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24  Building Real-Time Pipelines With FLaNKDATA SUMMIT 24  Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
 
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
原件一样(UWO毕业证书)西安大略大学毕业证成绩单留信学历认证
 
Abortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get CytotecAbortion pills in Jeddah | +966572737505 | Get Cytotec
Abortion pills in Jeddah | +966572737505 | Get Cytotec
 
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi ArabiaIn Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
 
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
如何办理澳洲拉筹伯大学毕业证(LaTrobe毕业证书)成绩单原件一模一样
 
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarjSCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
SCI8-Q4-MOD11.pdfwrwujrrjfaajerjrajrrarj
 
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
如何办理(UCLA毕业证书)加州大学洛杉矶分校毕业证成绩单学位证留信学历认证原件一样
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
 

Nonparametric tests assignment

  • 1. LAHORE COLLEGE FOR WOMEN UNIVERSITY SUBMITTED TO: Dr.TAHIRA KALSOOM SUBMITTED BY: ROOHA SHAHID (1925213023) KAINAT NAYYAR (1925213017) HAFIZA AFIA NAZEER (1925213013) AYESHA TABASSUM (1925213008) ADEEBA ASHIQ (1925213003) HAMNA SHAHZAD (1925213033) CLASS: MS-EDUCATION STATISTIC IN EDUCATION
  • 2. NON PARAMETRIC TESTS Nonparametric tests are also called distribution-free tests because they don’t assume that your data follow a specific distribution. You may have heard that you should use nonparametric tests when your data don’t meet the assumptions of the parametric test, especially the assumption about normally distributed data. That sounds like a nice and straightforward way to choose. They also test groups median instead of mean. Nonparametric tests are like a parallel universe to parametric tests. Parametric tests (means) Nonparametric tests (medians) 1-sample t test 1-sample Sign, 1-sample Wilcoxon 2-sample t test Mann-Whitney test One-Way ANOVA Kruskal-Wallis, Mood’s median test Factorial DOE with one factor and one blocking variable Friedman test REASONS TO USE NONPARAMETRIC TESTS
  • 3. Reason 1: Your area of study is better represented by the median Reason 2: You have a very small sample size Reason 3: You have ordinal data, ranked data, or outliers that you can’t remove CHI SQUARE INTRODUCTION The Chi-square test of independence (also known as the Pearson Chi-square test, or simply the Chi-square) is one of the most useful statistics for testing hypotheses when the variables are nominal, as often happens in clinical research. Unlike most statistics, the Chi-square (χ2) can provide information not only on the significance of any observed differences, but also detailed information on exactly which categories account for any differences found. Thus, the amount and detail of information this statistic can provide renders it one of the most useful tools in the researcher’s array of available analysis techniques. As with any statistic, there are requirements for its appropriate use, which are called “assumptions” of the statistic. Additionally, the χ2 is a significance test, and should always be coupled with an appropriate test of strength.
  • 4. CONDITIONS OF CHI-SQUARE TEST The Chi-square test is a non-parametric statistic, also called a distribution free test. Non-parametric tests should be used when any one of the following conditions pertains to the data: 1. The level of measurement of all the variables is nominal or ordinal. 2. The sample sizes of the study groups are unequal; for the χ2 the groups may be of equal size or unequal size whereas some parametric tests require groups of equal or approximately equal size. 3. The original data were measured at an interval or ratio level, but violate one of the following assumptions of a parametric test: a. The distribution of the data was seriously skewed or kurtotic (parametric tests assume approximately normal distribution of the dependent variable), and thus the researcher must use a distribution free statistic rather than a parametric statistic. b. The data violate the assumptions of equal variance or homoscedasticity. c. For any of a number of reasons, the continuous data were collapsed into a small number of categories, and thus the data are no longer interval or ratio.
  • 5. ASSUMPTIONS OF THE CHI-SQUARE As with parametric tests, the non-parametric tests, including the χ2 assume the data were obtained through random selection. However, it is not uncommon to find inferential statistics used when data are from convenience samples rather than random samples. Each non-parametric test has its own specific assumptions as well. The assumptions of the Chi-square include: 1. The data in the cells should be frequencies, or counts of cases rather than percentages or some other transformation of the data. 2. The levels (or categories) of the variables are mutually exclusive. That is, a particular subject fits into one and only one level of each of the variables. 3. Each subject may contribute data to one and only one cell in the χ2. If, for example, the same subjects are tested over time such that the comparisons are of the same subjects at Time 1, Time 2, Time 3, etc., then χ2 may not be used. 4. The study groups must be independent. This means that a different test must be used if the two groups are related. For example, a different test must be used if the researcher’s data consists of paired samples, such as in studies in which a parent is paired with his or her child. 5. There are 2 variables, and both are measured as categories, usually at the nominal level. However, data may be ordinal data. Interval or ratio data that have been collapsed into ordinal categories may also be used. While Chi-square has no rule about limiting the number of
  • 6. cells (by limiting the number of categories for each variable), a very large number of cells (over 20) can make it difficult to meet assumption #6 below, and to interpret the meaning of the results. 6. The expected count in each cell should be 5 or more in at least 80% of the cells, and no cell should have an expected count of less than one. This assumption is most likely to be met if the sample size equals at least the number of cells multiplied by 5. Essentially, this assumption specifies the number of cases (sample size) needed to use the χ2 for any number of cells in that χ2. This requirement will be fully explained in the example of the calculation of the statistic in the case study example. PROCEDURE TO RUN CHI-SQUARE TEST IN SPSS To perform a Pearson’s chi-square test in SPSS, you need to have two categorical variables, with group membership coded numerically (1, 2, 3, etc.). The null hypothesis would be: “There is no difference in male and female proportions between the control and treated group.” The alternative hypothesis would be: “There is a difference in male and female proportions between the control and treated group.” CHI-SQUARE TEST In SPSS, the Chi-Square Test of Independence is an option within the Crosstabs procedure. Quick Steps
  • 7. 1. Click on Analyze > Descriptive Statistics > Crosstabs 2. Drag and drop (at least) one variable into the Row(s) box, and (at least) one into the Column(s) box 3. Click on Statistics, and select Chi-square 4. Press Continue, and then OK to do the chi square test 5. The result will appear in the SPSS output viewer. PERFORMING THE TEST ON SPSS To perform this test in SPSS, I selected two categorical variables (Qualification and Locality) from the given data and then applied the Chi-square test to check the association between the two variables. Null hypothesis: There is no association between qualification and locality (Independent) Alternative hypothesis: There is an association between qualification and locality (Dependent) To determine whether these two variables are dependent on each other or independent of each other, we apply the Chi-square test. QUICK STEPS 1. Click on Analyze -> Descriptive Statistics -> Crosstabs
  • 8. 2. Drag and drop Locale variable into the Row(s) box, and Qualification into the Column(s) box. 3. Click on Statistics, and select Chi-square. If you also want a measure of effect size, select Phi and Cramer’s V in the same dialog box, and then press Continue, otherwise just press Continue. 4. Then click on Cells, select the Observed and Expected options under Counts, and then select the Row, Column and Total options under Percentages. 5. Press Continue, and then select the Display clustered Bar chart. 6. Then click OK to run the chi square test. 7. The result will appear in the SPSS output viewer The output of chi-square consists of four tables and one bar chart:
  • 9. Case Processing Summary Cases Valid Missing Total N Percent N Percent N Percent Locale * Qualification 34 100.0% 0 0.0% 34 100.0% Locale * Qualification Cross tabulation Qualification Highly Qualified Low Qualified Total Locale Urban Count 7 14 21 % within Locale 33.3% 66.7% 100.0%
  • 10. % within Qualification 58.3% 63.6% 61.8% % of Total 20.6% 41.2% 61.8% Rural Count 5 8 13 % within Locale 38.5% 61.5% 100.0% % within Qualification 41.7% 36.4% 38.2% % of Total 14.7% 23.5% 38.2% Total Count 12 22 34 % within Locale 35.3% 64.7% 100.0% % within Qualification 100.0% 100.0% 100.0% % of Total 35.3% 64.7% 100.0%
  • 11. Chi-Square Tests Value df Asymptotic Significance (2-sided) Exact Sig. (2-sided) Exact Sig. (1-sided) Pearson Chi-Square .092a 1 .761 Continuity Correctionb .000 1 1.000 Likelihood Ratio .092 1 .762 Fisher's Exact Test 1.000 .522 Linear-by-Linear Association .090 1 .765 N of Valid Cases 34 a. 1 cells (25.0%) have expected count less than 5. The minimum expected count is 4.59. b. Computed only for a 2x2 table
  • 12. Symmetric Measures Value Approximate Significance Nominal by Nominal Phi -.052 .761 Cramer's V .052 .761 N of Valid Cases 34 The chi square statistic appears in the Value column of the Chi-Square Tests table immediately to the right of “Pearson Chi-Square”. In this example, the value of the chi square statistic is .092 (the superscript a refers to the table footnote). The p-value appears in the same row in the “Asymptotic Significance (2-sided)” column (.761). The result is significant if this value is equal to or less than the designated alpha level (normally .05). In this case, the p-value is greater than the standard alpha value, so we fail to reject the null hypothesis that the two variables are independent of each other. To put it simply, the result is not significant: the data suggest that the variables Locale and Qualification are not associated with each other.
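The crosstabulation above can be checked outside SPSS. The following is a minimal sketch using Python's scipy library (an assumption of this sketch; the original walkthrough uses SPSS only), with the observed counts taken from the Locale * Qualification table:

```python
from scipy.stats import chi2_contingency

# Observed counts from the crosstabulation:
# rows = Urban / Rural, columns = Highly Qualified / Low Qualified.
observed = [[7, 14],
            [5, 8]]

# correction=False reproduces SPSS's "Pearson Chi-Square" row;
# the default (True) corresponds to the "Continuity Correction" row.
chi2, p, df, expected = chi2_contingency(observed, correction=False)

print(round(chi2, 3))           # matches the SPSS Value column (.092)
print(round(p, 3))              # matches Asymptotic Significance (.761)
print(round(expected[1][0], 2)) # minimum expected count (4.59, see footnote a)
```

Since p > .05, the same conclusion follows as in the SPSS output: no association between Locale and Qualification.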
  • 13. MANN-WHITNEY U TEST INTRODUCTION The Mann-Whitney U test is the nonparametric equivalent of the two sample t-test. While the t-test makes an assumption about the distribution of the population (i.e. that the samples come from normally distributed populations), the Mann-Whitney U test makes no such assumption. This test is used to compare differences between two independent groups when the dependent variable is either ordinal or continuous, but not normally distributed. The Mann-Whitney U test is sometimes also called the Mann-Whitney-Wilcoxon test or the Wilcoxon rank-sum test. EXAMPLE The Mann-Whitney U test can be used to understand whether salaries, measured on a continuous scale, differed based on educational level (i.e., your dependent variable would be "salary" and your independent variable would be "educational level", which has two groups: "high school" and "university").
  • 14. NULL HYPOTHESIS FOR THE TEST The null hypothesis for the test is H0: The population medians are equal. The non-directional alternative hypothesis is H1: The population medians are not equal. In other words, the test compares two populations. The null hypothesis for the test is that the probability is 50% that a randomly drawn member of the first population will exceed a member of the second population. An alternate null hypothesis is that the two samples come from the same population (i.e. that they both have the same median). For larger samples, the formula U = R - n(n+1)/2 is applied to each sample (or the test can be run in SPSS), where R is the sum of ranks in the sample and n is the number of items in the sample. For smaller samples, the DIRECT METHOD is used. The steps are as follows
  • 15.  Name the sample with the smaller ranks “sample 1” and the sample with the larger ranks “sample 2”. Choosing the sample with the smaller ranks to be “sample 1” is optional, but it makes the computation easier.  Take the first observation in sample 1. Count how many observations in sample 2 are smaller than it. If the observations are equal, count it as one half. For example, if you have ten that are less and two that are equal: 10 + 2(1/2) = 11.  Repeat Step 2 for all observations in sample 1, then add up all of the totals; the sum is the U statistic. ASSUMPTIONS When you choose to analyze your data using a Mann-Whitney U test, part of the process involves checking to make sure that the data you want to analyze can actually be analyzed using a Mann-Whitney U test. You need to do this because it is only appropriate to use a Mann-Whitney U test if your data "passes" four assumptions that are required for a Mann-Whitney U test to give you a valid result. Assumptions are as follows ASSUMPTION#1: Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal variables include Likert items (e.g., a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from "Not very much" to "Yes, a lot"). Examples of continuous variables include revision time
  • 16. (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. ASSUMPTION#2: Your independent variable should consist of two categorical, independent groups. Example independent variables that meet this criterion include gender (2 groups: male or female), employment status (2 groups: employed or unemployed), smoker (2 groups: yes or no), and so forth. ASSUMPTION#3: You should have independence of observations, which means that there is no relationship between the observations in each group or between the groups themselves. For example, there must be different participants in each group with no participant being in more than one group. This is more of a study design issue than something you can test for, but it is an important assumption of the Mann-Whitney U test. If your study fails this assumption, you will need to use another statistical test instead of the Mann-Whitney U test (e.g., a Wilcoxon signed-rank test).
  • 17. ASSUMPTION#4: A Mann-Whitney U test can be used when your two variables are not normally distributed. However, in order to know how to interpret the results from a Mann-Whitney U test, you have to determine whether your two distributions (i.e., the distribution of scores for both groups of the independent variable; for example, 'males' and 'females' for the independent variable, 'gender') have the same shape. MANN-WHITNEY U TEST PROCEDURE IN SPSS TOPIC STATEMENT The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire is related to teachers’ perception and practices in the classroom regarding the following 3 learning difficulties  Dyslexia  Dyspraxia  Autism In the data, Totalq1 represents teachers’ perception while Totalq2 represents teachers’ practices. In total, 34 participants completed the questionnaire.
  • 18. Before applying the Mann-Whitney U Test, first check the NORMALITY of the variable of interest using a simple procedure. Total q1, representing teachers’ perception, is the dependent variable in each of the two groups indicated by the grouping variable GENDER. The steps are as follows STEPS  Select Descriptive Statistics from the Analyze menu.  Select Explore from the Descriptive Statistics sub-menu.  Click on Reset button.  Copy the Total q1 variable into Dependent List: box.  Copy the Gender variable into the Factor List: box.  Click on the Plots… button.  On the screen that appears select the Histogram tick box.  Unselect the Stem and leaf button.  Click on the Continue button.  Click on OK button.
  • 19. Ideally for normal distribution, this histogram appears to be reasonably symmetric.
  • 20. Ideally for a normal distribution this histogram appears to be reasonably symmetric. Now move on to perform Mann-Whitney U Test by following the steps below.
  • 21. STEPS  Select Non Parametric Tests from the Analyze menu.  Select Legacy Dialogs from the Non Parametric Tests sub-menu.  Select 2 Independent Samples from the Legacy Dialogs sub-menu.  Click on the Reset button.  Copy the Total q1 variable into the Test Variable List: box.  Copy the Gender variable into the Grouping Variable List: box.  Click on the Define Groups… button.  Type 1 into the Group 1 box as MALE.  Type 2 into the Group 2 box as FEMALE.  Click on the Continue button.  Click on the Exact… button.  On the screen that appears select the Exact button.  Click on the Continue button.  Click on OK button. The first SPSS output table contains a summary of the rankings for the 2 groups and can be seen below:
  • 22. Ranks Student gender N Mean Rank Sum of Ranks total q1 Male 17 16.97 288.50 Female 17 18.03 306.50 Total 34 The Mann Whitney test works by firstly constructing a ranked list of the observations labelled in their two groups. It will then work from the lowest observation and give that observation rank 1 and the next rank 2 and so on right up to the largest observation which in this case will have rank 34. If there are observations with the same value then they are given the same rank that is an average of the ranks available (for example if three observations have the 9th smallest rank then rather than giving them ranks 9, 10 and 11 respectively they will each be given rank 10 (9+10+11)/3 = 10). The test works by comparing the sum of the ranks in the two groups. The statistics required for the test are constructed from the ranks and shown in the table. Here we see that for GENDER category Male we have 17 observations whose total sum of ranks is 288.50. This results in a mean rank of 16.97. By contrast for GENDER category Female we have 17 observations whose total sum of ranks is 306.50. This results in a mean rank of 18.03. So GENDER category Female has a larger mean rank than GENDER category Male and thus tends to take larger values.
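The tied-rank averaging described above can be verified directly. The following is a small sketch using scipy's rankdata function (an assumption of this sketch; the document itself works in SPSS), with hypothetical scores where three observations share the same value:

```python
from scipy.stats import rankdata

# Five observations where the three middle values are tied:
# they occupy positions 2, 3 and 4, so each gets (2 + 3 + 4)/3 = 3.
scores = [10, 25, 25, 25, 40]
ranks = rankdata(scores)
print(ranks)  # [1. 3. 3. 3. 5.]
```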
  • 23. The Mann Whitney test will now decide on whether this difference in mean ranks is significant or not as is illustrated in the second table. The second SPSS output table contains details of the test itself and can be seen below. Test Statisticsa Total q1 Mann-Whitney U 135.500 Wilcoxon W 288.500 Z -.311 Asymp. Sig. (2-tailed) .756 Exact Sig. [2*(1-tailed Sig.)] .760b a. Grouping Variable: Gender b. Not corrected for ties.
  • 24. The Mann-Whitney U statistic is calculated from the sums of the rankings, which are compared with what would be expected if the two groups came from the same distribution. Consider each group in turn and work out a U statistic for each group. The formula here is the sum of the ranks - N x (N+1)/2 for each group. For GENDER category Male the value is U1= 288.50-17x(17+1)/2 = 135.5 and for GENDER category Female the value is U2= 306.50-17x(17+1)/2= 153.5. So U1 is less than U2, and it is the lower of the two U statistics that is reported when giving the results. So here the value 135.5 is the U statistic as shown. One way to interpret the Mann-Whitney U statistic is to convert it to a normal score by subtracting its mean and dividing by its standard error, and that is done in the Z row. Here the value of Z = -.311 and this can be compared with a standard normal distribution to get a sense of the magnitude by which the groups differ. The p-value, quoted next to Asymp. Sig. (2-tailed), is .756, which is more than 0.05 and indicates that there is no significant evidence to reject the null hypothesis.
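The U and Z calculations above can be sketched in a few lines of Python (an assumption of this sketch; the document works in SPSS), starting from the rank sums reported in the Ranks table:

```python
import math

# Group sizes and rank sums from the SPSS Ranks table.
n1, n2 = 17, 17
R1, R2 = 288.50, 306.50

# U statistic for each group, using the document's formula U = R - n(n+1)/2.
U1 = R1 - n1 * (n1 + 1) / 2   # Male:   135.5
U2 = R2 - n2 * (n2 + 1) / 2   # Female: 153.5
U = min(U1, U2)               # the lower U is reported: 135.5

# Normal approximation: subtract the mean of U and divide by its
# standard error (without the tie correction SPSS also applies).
mean_U = n1 * n2 / 2
se_U = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (U - mean_U) / se_U
print(round(z, 2))  # close to SPSS's -.311; ties explain the small gap
```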
  • 25. WILCOXON SIGNED-RANK TEST USING SPSS STATISTICS INTRODUCTION The Wilcoxon signed-rank test is the nonparametric test equivalent to the dependent t-test. As the Wilcoxon signed-rank test does not assume normality in the data, it can be used when this assumption has been violated and the use of the dependent t-test is inappropriate. It is used to compare two sets of scores that come from the same participants. This can occur when we wish to investigate any change in scores from one time point to another, or when individuals are subjected to more than one condition. Example You could use a Wilcoxon signed-rank test to understand whether there was a difference in smokers' daily cigarette consumption before and after a 6 week hypnotherapy programme (i.e., your dependent variable would be "daily cigarette consumption", and your two related groups would be the cigarette consumption values "before" and "after" the hypnotherapy programme). This "quick start" guide shows you how to carry out a Wilcoxon signed-rank test using SPSS Statistics, as well as interpret and report the results from this test. However, before we introduce you to this procedure, you need to understand the different assumptions that your data must meet in order for a Wilcoxon signed-rank test to give you a valid result. We discuss these assumptions next.
  • 26. ASSUMPTIONS When you choose to analyse your data using a Wilcoxon signed-rank test, part of the process involves checking to make sure that the data you want to analyse can actually be analysed using a Wilcoxon signed-rank test. You need to do this because it is only appropriate to use a Wilcoxon signed-rank test if your data "passes" three assumptions that are required for a Wilcoxon signed-rank test to give you a valid result. Assumption #1: Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal variables include Likert items (e.g., a 7-point item from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 5-point item explaining how much a customer liked a product, ranging from "Not very much" to "Yes, a lot"). Examples of continuous variables (i.e., interval or ratio variables) include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. Assumption #2:
  • 27. Your independent variable should consist of two categorical, "related groups" or "matched pairs". "Related groups" indicates that the same subjects are present in both groups. The reason that it is possible to have the same subjects in each group is because each subject has been measured on two occasions on the same dependent variable. For example, you might have measured 10 individuals' performance in a spelling test (the dependent variable) before and after they underwent a new form of computerized teaching method to improve spelling. You would like to know if the computer training improved their spelling performance. The Wilcoxon signed-rank test can also be used to compare different subjects within a "matched-pairs" study design, but this does not happen very often. Nonetheless, to learn more about the different study designs you use with a Wilcoxon signed-rank test, see our enhanced Wilcoxon signed-rank test guide. Assumption #3: The distribution of the differences between the two related groups (i.e., the distribution of differences between the scores of both groups of the independent variable; for example, the reaction time in a room with "blue lighting" and a room with "red lighting") needs to be symmetrical in shape. If the distribution of differences is symmetrically shaped, you can analyse your study using the Wilcoxon signed-rank test. In practice, checking for this assumption just adds a little bit more time to your analysis, requiring you to click a few more buttons in SPSS Statistics when performing your analysis, as well as think a little bit more about your data, but it is not a difficult task. However, do not be surprised if, when analysing your own data using SPSS Statistics, this assumption is violated (i.e., is not met). This is not uncommon when working with real-world data rather than textbook examples, which often only show you how to carry out a Wilcoxon signed-rank test when
  • 28. everything goes well! However, even when your data fails this assumption, there is often a solution to overcome this, such as transforming your data to achieve a symmetrically-shaped distribution of differences (not a preferred option) or running a sign test instead of the Wilcoxon signed-rank test. TEST PROCEDURE IN SPSS STATISTICS TOPIC STATEMENT The relationship between teachers’ perceptions, practices and students’ performance. The statements given below are related to teachers’ perception regarding the following three learning difficulties: Dyslexia Dyspraxia Autism In the SPSS data file, Total q1 shows teachers’ perceptions and Total q2 shows teachers’ practices, and data from all 34 questionnaires were entered into SPSS.
  • 29. STEP#1 Click Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... on the top menu. You will be presented with the Two-Related-Samples Tests dialogue box STEP#2 Transfer the variables you are interested in analyzing into the Test Pairs: box. According to our data we need to transfer the variables Total q1 and Totalq2, which represent teachers’ perception and teachers’ practices regarding learning difficulties, respectively. There are two ways to do this. You can either: (1) highlight both variables (use the cursor and hold down the shift key), and then press the button; or (2) drag-and-drop each variable into the boxes. STEP#3 Make sure that the Wilcoxon checkbox is ticked in the –Test Type– area.
  • 30. STEP#4 Generate descriptives or quartiles for your variables; select them by clicking on the button and ticking the Descriptive and Quartiles checkboxes in the –Statistics– area. 1. Click on the button. You will be returned to the Two-Related-Samples Tests dialogue box. 2. Click on the button. STEP#5 In the output file we get the following tables:
  • 31. Descriptive Statistics N Mean Std. Deviation Minimum Maximum totalq1 34 1.1000E2 12.52634 80.00 141.00 totalq2 34 1.1085E2 12.13842 81.00 141.00 Percentiles Percentiles 25th 50th (Median) 75th Totalq1 1.0400E2 109.0000 1.1500E2 Totalq2 1.0600E2 110.5000 1.1500E2
  • 32. Wilcoxon Signed Ranks Test Test Statisticsb totalq2 - totalq1 Z -1.826a Asymp. Sig. (2-tailed) .068 a. Based on negative ranks. b. Wilcoxon Signed Ranks Test Ranks N Mean Rank Sum of Ranks totalq2 - totalq1 Negative Ranks 0a .00 .00 Positive Ranks 4b 2.50 10.00 Ties 30c Total 34 a. totalq2 < totalq1 b. totalq2 > totalq1 c. totalq2 = totalq1
  • 33. The Wilcoxon signed-rank test statistic is the z score, which here is -1.826, and the p-value is the “Asymp. Sig. (2-tailed)” value, which in this case is .068. As the p-value is greater than 0.05 (i.e., p > .05), we fail to reject the null hypothesis: there is no statistically significant difference in the median scores between our two related groups. We report the Wilcoxon signed-rank test using the Z statistic.
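The raw paired scores are not reproduced in the document, but the pattern in the Ranks table (30 tied pairs, 4 positive differences) can be mimicked with hypothetical data to reproduce the reported p-value. This sketch uses scipy (an assumption; the original uses SPSS) and assumes the four nonzero differences are of distinct sizes, which matches the uncorrected Z of -1.826:

```python
from scipy.stats import wilcoxon

# Hypothetical paired data mirroring the Ranks table: 30 tied pairs and
# 4 pairs where totalq2 exceeds totalq1 by distinct amounts.
before = [100] * 30 + [100, 100, 100, 100]
after  = [100] * 30 + [101, 102, 103, 104]

# zero_method='wilcox' discards the 30 zero differences (as SPSS does);
# method='approx' requests the normal approximation SPSS reports as Z.
res = wilcoxon(after, before, zero_method='wilcox',
               correction=False, method='approx')
print(round(res.pvalue, 3))  # matches Asymp. Sig. (2-tailed) = .068
```

With only 4 informative pairs scipy will warn that the sample is small for the normal approximation, which is itself a useful reminder of how little data drives this result.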
  • 34. the Kruskal-Wallis H test to understand whether attitudes towards pay discrimination, where attitudes are measured on an ordinal scale, differed based on job position (i.e., your dependent variable would be "attitudes towards pay discrimination", measured on a 5-point scale from "strongly agree" to "strongly disagree", and your independent variable would be "job description", which has three independent groups: "shop floor", "middle management" and "boardroom"). ASSUMPTIONS: When you choose to analyse your data using a Kruskal-Wallis H test, part of the process involves checking to make sure that the data you want to analyse can actually be analysed using a Kruskal-Wallis H test. You need to do this because it is only appropriate to use a Kruskal-Wallis H test if your data "passes" four assumptions that are required for a Kruskal-Wallis H test to give you a valid result. In practice, checking for these four assumptions just adds a little bit more time to your analysis, requiring you to click a few more buttons in SPSS Statistics when performing your analysis, as well as think a little bit more about your data, but it is not a difficult task. Before we introduce you to these four assumptions, do not be surprised if, when analysing your own data using SPSS Statistics, one or more of these assumptions is violated (i.e., is not met). This is not uncommon when working with real-world data rather than textbook examples, which often only show you how to carry out a Kruskal-Wallis H test when everything goes well! However, don’t worry. Even when your data fails certain assumptions, there is often a solution to overcome this. First, let’s take a look at these four assumptions:
  • 35.  Assumption #1: Your dependent variable should be measured at the ordinal or continuous level (i.e., interval or ratio). Examples of ordinal variables include Likert scales (e.g., a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 3-point scale explaining how much a customer liked a product, ranging from "Not very much", to "It is OK", to "Yes, a lot"). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth.  Assumption #2: Your independent variable should consist of two or more categorical, independent groups. Typically, a Kruskal-Wallis H test is used when you have three or more categorical, independent groups, but it can be used for just two groups (i.e., a Mann-Whitney U test is more commonly used for two groups). Example independent variables that meet this criterion include ethnicity (e.g., three groups: Caucasian, African American and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth.  Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in each group or between the groups themselves. For example, there must be different participants in each group with no participant being in more than one group. This is more of a study design issue than something you can test for, but it is an important assumption of the Kruskal-Wallis H test. If your study fails this assumption, you will need to use another statistical test instead of the Kruskal-Wallis H test
  • 36. (e.g., a Friedman test). If you are unsure whether your study meets this assumption, you can use our Statistical Test Selector, which is part of our enhanced content. As the Kruskal-Wallis H test does not assume normality in the data and is much less sensitive to outliers, it can be used when these assumptions have been violated and the use of a one-way ANOVA is inappropriate. In addition, if your data is ordinal, a one-way ANOVA is inappropriate, but the Kruskal-Wallis H test is not. However, the Kruskal-Wallis H test does come with an additional data consideration, Assumption #4, which is discussed below:  Assumption #4: In order to know how to interpret the results from a Kruskal-Wallis H test, you have to determine whether the distributions in each group (i.e., the distribution of scores for each group of the independent variable) have the same shape (which also means the same variability). To understand what this means, take a look at the diagram below:
  • 37. In the diagram on the left above, the distributions of scores for the "Caucasian", "African American" and "Hispanic" groups have the same shape. On the other hand, in the diagram on the right above, the distributions of scores for each group are not identical (i.e., they have different shapes and variabilities). If your distributions have the same shape, you can use SPSS Statistics to carry out a Kruskal-Wallis H test to compare the medians of your dependent variable (e.g., "engagement score") for the different groups of the independent variable you are interested in (e.g., the groups, Caucasian, African American and Hispanic, for the independent variable, "ethnicity"). However, if your distributions have a different shape, you can only use the Kruskal-Wallis H test to compare mean ranks. Having similar distributions simply allows you to use medians to represent a shift in location between the groups (as illustrated in the diagram on the left above). As such, it is very important to check this assumption or you can end up interpreting your results incorrectly.
  • 38. TEST PROCEDURES IN SPSS STATISTICS: TOPIC STATEMENT The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire is related to teachers’ perception and practices in the classroom regarding the following 3 learning difficulties Dyslexia Dyspraxia Autism In the data, Totalq1 represents teachers’ perception while Totalq2 represents teachers’ practices. In total, 34 participants completed the questionnaire. STEPS The steps below show you how to analyse your data using the Kruskal-Wallis H test in SPSS Statistics. At the end of these steps, we show you how to interpret the results from your Kruskal-Wallis H test. 1. Click Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... on the top menu
  • 39. 2. Transfer the dependent variable, totalq2, into the Test Variable List: box and the independent variable, Qualification, into the Grouping Variable: box. You can transfer these variables either by dragging-and-dropping each variable into the appropriate box or by highlighting (i.e., clicking on) each variable and using the appropriate button. 3. Click on the button. You will be presented with the "Several Independent Samples: Define Range" dialogue box. 4. Enter "1" into the Minimum: box and "5" into the Maximum: box. These values represent the range of codes you gave the groups of the independent variable. 5. Click on the button and you will be returned to the "Tests for Several Independent Samples" dialogue box, but now with a completed Grouping Variable: box. 6. Click on the button. You will be presented with the "Several Independent Samples: Options" dialogue box. 7. Select the Descriptive checkbox if you want descriptives and/or the Quartiles checkbox if you want medians and quartiles. If you selected the Descriptive option, you will be presented with the following screen. 8. Click on the button. You will be returned to the "Tests for Several Independent Samples" dialogue box. 9. Click on the button. This will generate the results.
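As a cross-check outside SPSS, the same Kruskal-Wallis H test can be run in a few lines. The sketch below uses Python's scipy with hypothetical totalq2 scores for three qualification groups; the numbers are illustrative, not the actual study data.

```python
# Kruskal-Wallis H test sketch (hypothetical scores, NOT the study data)
from scipy import stats

# Hypothetical totalq2 (teachers' practices) scores for three qualification groups
ba = [105, 98, 112, 120, 101, 108]
ma = [110, 115, 99, 118, 107, 111, 104]
mphil = [122, 130, 117, 125, 119]

# stats.kruskal takes one sequence per independent group
h_stat, p_value = stats.kruskal(ba, ma, mphil)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
```

If p < .05, you would conclude that at least one group's distribution of scores differs from the others, mirroring the "Asymp. Sig." value SPSS reports.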
  • 40. SPSS STATISTICS OUTPUT FOR THE KRUSKAL-WALLIS H TEST: You will be presented with the following output (the Descriptive Statistics table appears because the Descriptive checkbox was selected in the "Several Independent Samples: Options" dialogue box):

Descriptive Statistics
                N    Mean       Std. Deviation   Minimum   Maximum
totalq2         34   110.8529   12.13842         81.00     141.00
Qualification   34   2.71       1.268            1         5
  • 41. Kruskal-Wallis Test

Ranks
totalq2   Qualification   N    Mean Rank
          BA              6    18.25
          BS honors       11   13.27
          MA/MSc          8    14.50
          M.Phil          5    21.40
          Any other       4    29.13
          Total           34
  • 42. Test Statistics(a,b)
                                        totalq2
Chi-Square                              8.995
df                                      4
Asymp. Sig.                             .061
Monte Carlo Sig.   Sig.                 .052(c)
                   99% Confidence Interval
                      Lower Bound       .047
                      Upper Bound       .058
a. Kruskal Wallis Test
b. Grouping Variable: Qualification
c. Based on 10000 sampled tables with starting seed 2000000.

The mean rank (i.e., the "Mean Rank" column in the Ranks table) of totalq2 (teachers’ practices) for each qualification group can be used to compare the effect of the different groups. Whether these groups have different scores can be assessed using the Test Statistics table
  • 43. which presents the result of the Kruskal-Wallis H test: the chi-squared statistic (the "Chi-Square" row), the degrees of freedom (the "df" row) and the statistical significance of the test (the "Asymp. Sig." row). Here, χ2(4) = 8.995, p = .061, so the differences between the qualification groups are not statistically significant at the .05 level. FRIEDMAN TEST INTRODUCTION The Friedman test is the non-parametric alternative to the one-way ANOVA with repeated measures. It is used to test for differences between groups when the dependent variable being measured is ordinal. It can also be used for continuous data that has violated the assumptions necessary to run the one-way ANOVA with repeated measures (e.g., data that has marked deviations from normality). Example A researcher wants to examine whether music has an effect on the perceived psychological effort required to perform an exercise session. The dependent variable is "perceived effort to perform exercise" and the independent variable is "music type", which consists of three groups: "no music", "classical music" and "dance music". To test whether music has an effect on the perceived psychological effort required to perform an exercise session, the researcher recruited 12 runners who each ran three times on a treadmill for 30 minutes. For consistency, the treadmill speed was the same for all three runs. In a random order, each subject ran: (a) listening to no music at all; (b) listening to classical music; and (c)
  • 44. listening to dance music. At the end of each run, subjects were asked to record how hard the running session felt on a scale of 1 to 10, with 1 being easy and 10 being extremely hard. A Friedman test was then carried out to see if there were differences in perceived effort based on music type. ASSUMPTIONS When you choose to analyse your data using a Friedman test, part of the process involves checking to make sure that the data you want to analyse can actually be analysed using a Friedman test. You need to do this because it is only appropriate to use a Friedman test if your data "passes" the following four assumptions: • Assumption #1: One group that is measured on three or more different occasions. • Assumption #2: The group is a random sample from the population. • Assumption #3: Your dependent variable should be measured at the ordinal or continuous level. Examples of ordinal variables include Likert scales (e.g., a 7-point scale from strongly agree through to strongly disagree), amongst other ways of ranking categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from "Not very much" to "Yes, a lot"). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. • Assumption #4: Samples do NOT need to be normally distributed.
  • 45. The Friedman test procedure in SPSS Statistics will not test any of the assumptions that are required for this test. In most cases, this is because the assumptions are a methodological or study design issue, and not what SPSS Statistics is designed for. In the case of assessing the types of variable you are using, SPSS Statistics will not provide you with any errors if you incorrectly label your variables as nominal. SETUP IN SPSS STATISTICS SPSS Statistics puts all repeated measures data on the same row in its Data View. Therefore, you will need as many variables as you have related groups. FRIEDMAN TEST PROCEDURE IN SPSS STATISTICS Topic statement The relationship between teachers’ perceptions, practices and students’ performance. The questionnaire relates to teachers’ perceptions and practices in the classroom regarding the following three learning difficulties: • Dyslexia • Dyspraxia • Autism
  • 46. In the data, Totalq1 represents teachers’ perceptions, while Totalq2 represents teachers’ practices. A total of 34 participants completed the questionnaire. Steps 1. Click Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... on the top menu. 2. You will be presented with the Tests for Several Related Samples dialogue box. 3. Transfer the variables totalq1 and totalq2 to the Test Variables: box by using the button or by dragging-and-dropping the variables into the box. 4. Make sure that Friedman is selected in the –Test Type– area. 5. Click on the button. You will be presented with the following Several Related Samples: Statistics dialogue box. 6. Tick the Quartiles option. 7. Click on the button. This will return you to the Tests for Several Related Samples dialogue box. 8. Click on the button to run the Friedman test.
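Outside SPSS, a Friedman test can be sketched in Python with scipy. Note that scipy's friedmanchisquare requires at least three related samples, so the sketch below uses the three music conditions from the runners example rather than the two questionnaire totals; the ratings are made-up illustrative values.

```python
# Friedman test sketch (hypothetical perceived-effort ratings, 1 = easy, 10 = very hard)
from scipy import stats

# Ratings for the same 12 runners under each of three music conditions
no_music  = [7, 8, 6, 9, 7, 8, 8, 6, 7, 9, 8, 7]
classical = [6, 7, 6, 8, 6, 7, 7, 5, 6, 8, 7, 6]
dance     = [5, 6, 5, 7, 5, 6, 6, 4, 5, 7, 6, 5]

# Each argument is one repeated measurement of the same subjects
chi2, p_value = stats.friedmanchisquare(no_music, classical, dance)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}")
```

Because every runner in this made-up data rated dance music as the easiest condition, the test comes out clearly significant, matching the kind of "Asymp. Sig." result SPSS would report.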
  • 47. SPSS Statistics Output for the Friedman Test: SPSS Statistics will generate either two or three tables, depending on whether you selected to have descriptives and/or quartiles generated in addition to running the Friedman test. Descriptive Statistics Table The Descriptive Statistics table will be produced if you selected the Quartiles option:

Descriptive Statistics
                Percentiles
          N     25th       50th (Median)   75th
totalq2   34    106.0000   110.5000        115.0000
totalq1   34    104.0000   109.0000        115.0000
  • 48. Ranks Table The Ranks table shows the mean rank for each of the related groups, as shown below:

Ranks
          Mean Rank
totalq2   1.56
totalq1   1.44

The Friedman test compares the mean ranks between the related groups and indicates how the groups differed, which is why they are included here. However, you are not very likely to report these values in your results section; you will most likely report the median value for each related group instead. Test Statistics Table
  • 49. The Test Statistics table informs you of the actual result of the Friedman test, and whether there was an overall statistically significant difference between the mean ranks of your related groups. The table looks as follows:

Test Statistics(a)
N             34
Chi-Square    4.000
df            1
Asymp. Sig.   .046
a. Friedman Test
  • 50. The table above provides the test statistic (χ2) value ("Chi-square"), degrees of freedom ("df") and the significance level ("Asymp. Sig."), which is all we need to report the result of the Friedman test. Since p = .046 < .05, we can see that there is an overall statistically significant difference between the mean ranks of the related groups. SPEARMAN’S RANK ORDER CORRELATION INTRODUCTION The Spearman rank-order correlation coefficient (Spearman’s correlation, for short) is a nonparametric measure of the strength and direction of association that exists between two variables measured on at least an ordinal scale. It is denoted by the symbol rs (or the Greek letter ρ, pronounced rho), and it takes values in the range −1 ≤ rs ≤ 1. The test is used for either ordinal variables or for continuous data that has failed the assumptions necessary for conducting the Pearson's product-moment correlation. For example, you could use a Spearman’s correlation to understand whether there is an association between exam performance and time spent revising. TYPE OF VARIABLE USED You need two variables that are ordinal, interval or ratio. Although you would normally hope to use a Pearson product-moment correlation on interval or ratio data, the Spearman correlation can be used when the assumptions of the Pearson correlation are markedly
  • 51. violated. However, Spearman's correlation determines the strength and direction of the monotonic relationship between your two variables rather than the strength and direction of the linear relationship between your two variables. MONOTONIC RELATIONSHIP A monotonic relationship is a relationship that does one of the following: (1) As the value of one variable increases, so does the value of the other variable; or (2) As the value of one variable increases, the other variable value decreases. Examples of monotonic and non-monotonic relationships are presented in the diagram below: ASSUMPTIONS When you choose to analyze your data using Spearman’s correlation, part of the process involves checking to make sure that the data you want to analyze can actually be analyzed using a Spearman’s correlation. You need to do this because it is only appropriate to use a Spearman’s correlation if your data "passes" three assumptions that are required for Spearman’s correlation to give you a valid result.
  • 52. These three assumptions are: Assumption #1: Your two variables should be measured on an ordinal, interval or ratio scale. Examples of ordinal variables include Likert scales (e.g., a 7-point scale from "strongly agree" through to "strongly disagree"), amongst other ways of ranking categories (e.g., a 3-point scale explaining how much a customer liked a product, ranging from "Not very much", to "It is OK", to "Yes, a lot"). Assumption #2: Your two variables represent paired observations. For example, imagine that you were interested in the relationship between daily cigarette consumption and amount of exercise performed each week. A single paired observation reflects the score on each variable for a single participant (e.g., the daily cigarette consumption of "Participant 1" and the amount of exercise performed each week by "Participant 1"). With 30 participants in the study, this means that there would be 30 paired observations. Assumption #3: There is a monotonic relationship between the two variables. There are a number of ways to check whether a monotonic relationship exists between your two variables; we suggest creating a scatter plot using SPSS Statistics, where you can plot one variable against the other, and then visually inspect the scatter plot to check for monotonicity.
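To see why monotonicity, rather than linearity, is the relevant assumption, the sketch below compares Spearman's and Pearson's coefficients in Python with scipy for a strictly increasing but nonlinear relationship (made-up numbers):

```python
# A monotonic but nonlinear relationship: Spearman captures it fully, Pearson does not
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [v ** 3 for v in x]   # strictly increasing, so perfectly monotonic

rho, _ = stats.spearmanr(x, y)   # correlation of the ranks
r, _ = stats.pearsonr(x, y)      # correlation of the raw values
print(f"Spearman rs = {rho:.3f}, Pearson r = {r:.3f}")
# rs = 1.000 because the ranks agree exactly; r < 1 because the curve is not a straight line
```

This is exactly the situation in which Spearman's correlation is preferred over Pearson's.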
  • 53. In terms of assumption #3 above, you can check this using SPSS Statistics. If your two variables do not appear to have a monotonic relationship, you might consider using a different statistical test. TEST PROCEDURE IN SPSS STATISTICS TOPIC STATEMENT The relationship between teachers’ perceptions, practices and students’ performance. The statements given below are related to teachers’ perceptions regarding the following three learning difficulties: Dyslexia, Dyspraxia, Autism. In the SPSS data file, Totalq1 shows teachers’ perceptions and Totalq2 shows teachers’ practices, and data from a total of 34 questionnaires were entered into SPSS. STEP#1 To apply Spearman’s rank-order correlation to the data, first check the monotonicity of the data using a simple procedure: click Graphs > Legacy Dialogs > Scatter/Dot > Simple Scatter > Define, put totalq2 on the Y-axis and totalq1 on the X-axis, then click OK to get the output.
  • 54. The graph shows a monotonic relationship. STEP#2 Click Analyze > Correlate > Bivariate... on the main menu. You will be presented with the following Bivariate Correlations dialogue box. Transfer the variables Totalq1 and Totalq2 into the Variables: box by dragging-and-dropping the variables or by clicking each variable and then clicking on the button. Select the Spearman checkbox in the –Correlation Coefficients– area. Click on the button. This will generate the results.
  • 55. Nonparametric Correlations

Correlations
                                                     totalq1   totalq2
Spearman's rho   totalq1   Correlation Coefficient   1.000     .921**
                           Sig. (2-tailed)           .         .000
                           N                         34        34
                 totalq2   Correlation Coefficient   .921**    1.000
                           Sig. (2-tailed)           .000      .
                           N                         34        34
**. Correlation is significant at the 0.01 level (2-tailed).

NONPAR CORR /VARIABLES=Totalq1 totalq2 /PRINT=SPEARMAN TWOTAIL NOSIG /MISSING=PAIRWISE.

Since SPSS reports the p-value for this test as .000 (i.e., p < .001), we have very strong evidence that the two variables are monotonically correlated in the population. "A Spearman's correlation was run to determine the relationship between teachers’ perception (totalq1) and teachers’ practice (totalq2) scores for 34 participants. There was a strong, positive monotonic correlation between the two variables (rs = .921, n = 34, p < .001)."
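The same Spearman's calculation can be reproduced outside SPSS. The sketch below uses Python's scipy with ten hypothetical totalq1/totalq2 pairs rather than the 34 real cases, so its coefficient differs from the .921 reported above:

```python
# Spearman's rank-order correlation sketch (hypothetical paired scores, NOT the study data)
from scipy import stats

# Hypothetical totalq1 (perception) and totalq2 (practice) scores for ten teachers
totalq1 = [100, 104, 108, 109, 111, 113, 115, 118, 120, 125]
totalq2 = [ 98, 105, 107, 112, 110, 115, 116, 120, 119, 128]

rho, p_value = stats.spearmanr(totalq1, totalq2)
print(f"rs = {rho:.3f}, p = {p_value:.4f}")
```

A strong positive rho with a small p-value would be reported in the same form as the quoted write-up above.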
  • 56. STEP#3 Now find the ranks of the data: click Transform > Rank Cases, enter the variables (Totalq1 and totalq2) by dragging-and-dropping, then click OK. The ranks are shown in the Data View.
  • 57. In the output file we get: RANK
  • 58. Created Variables(b)
Source Variable   Function   New Variable   Label
Totalq1(a)        Rank       RTotalq1       Rank of Totalq1
totalq2(a)        Rank       Rtotalq2       Rank of totalq2
a. Ranks are in ascending order.
b. Mean rank of tied values is used for ties.

STEP#4 Click Analyze > Correlate > Bivariate... on the main menu. You will be presented with the following Bivariate Correlations dialogue box. Transfer the variables Rank of Totalq1 and Rank of totalq2 into the Variables: box by dragging-and-dropping the variables or by clicking each variable and then clicking on the button. Select the Spearman checkbox in the –Correlation Coefficients– area. Click on the button. This will generate the results.
  • 59. Nonparametric Correlations

Correlations
                                                             Rank of Totalq1   Rank of totalq2
Spearman's rho   Rank of Totalq1   Correlation Coefficient   1.000             .921**
                                   Sig. (2-tailed)           .                 .000
                                   N                         34                34
                 Rank of totalq2   Correlation Coefficient   .921**            1.000
                                   Sig. (2-tailed)           .000              .
                                   N                         34                34
**. Correlation is significant at the 0.01 level (2-tailed).

Since SPSS reports the p-value for this test as .000 (i.e., p < .001), we have very strong evidence that the ranked values are monotonically correlated in the population. "A Spearman's correlation was run to determine the relationship between teachers’ perception (Rank of Totalq1) and teachers’ practice (Rank of totalq2) scores for 34 participants. There was a strong, positive monotonic correlation between the two variables (rs = .921, n = 34, p < .001)."
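Steps 3 and 4 illustrate a useful fact: Spearman's rs is simply the Pearson correlation computed on the ranked data, which is why the coefficient in the table above matches the one from Step 2. The sketch below verifies that equivalence in Python with scipy on hypothetical scores:

```python
# Spearman's rs equals the Pearson correlation of the ranks
from scipy import stats

# Hypothetical paired scores (NOT the study data)
totalq1 = [100, 104, 108, 109, 111, 113, 115, 118, 120, 125]
totalq2 = [ 98, 105, 107, 112, 110, 115, 116, 120, 119, 128]

rs, _ = stats.spearmanr(totalq1, totalq2)

# Rank the raw scores (ties get their mean rank), then correlate the ranks with Pearson
r_on_ranks, _ = stats.pearsonr(stats.rankdata(totalq1), stats.rankdata(totalq2))

print(f"rs = {rs:.6f}, Pearson on ranks = {r_on_ranks:.6f}")
# The two values agree, which is what the SPSS ranking exercise demonstrates
```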