The document provides information on bivariate analysis and cross-tabulation. It discusses how cross-tabulation allows examination of relationships between two variables and calculation of percentages to compare groups. Chi-square is introduced as a test of hypotheses about relationships between nominal or ordinal variables, requiring calculation of expected frequencies. Examples are provided to demonstrate cross-tabulation tables and chi-square calculations.
This presentation includes an introduction to statistics, introduction to sampling methods, collection of data, classification and tabulation, frequency distribution, graphs and measures of central tendency.
UNIVARIATE & BIVARIATE ANALYSIS
UNIVARIATE BIVARIATE & MULTIVARIATE
UNIVARIATE ANALYSIS
-One variable analysed at a time
BIVARIATE ANALYSIS
-Two variables analysed at a time
MULTIVARIATE ANALYSIS
-More than two variables analysed at a time
TYPES OF ANALYSIS
DESCRIPTIVE ANALYSIS
INFERENTIAL ANALYSIS
DESCRIPTIVE ANALYSIS
Transformation of raw data
Facilitate easy understanding and interpretation
Deals with summary measures relating to sample data
E.g., what is the average age of the sample?
INFERENTIAL ANALYSIS
Carried out after descriptive analysis
Inferences drawn on population parameters based on sample results
Generalizes results to the population based on sample results
E.g., is the average age of the population different from 35?
DESCRIPTIVE ANALYSIS OF UNIVARIATE DATA
1. Prepare frequency distribution of each variable
Missing Data
Situation where certain questions are left unanswered
Analysis of multiple responses
Measures of central tendency
3 measures of central tendency
1.Mean
2.Median
3.Mode
MEAN
Arithmetic average of a variable
Appropriate for interval and ratio scale data
x̄ = Σx / n (the sum of the values divided by the number of observations)
MEDIAN
Calculates the middle value of the data
Computed for ratio, interval or ordinal scale.
Data needs to be arranged in ascending or descending order
MODE
Point of maximum frequency
Should not be computed for ordinal or interval data unless grouped.
Widely used in business
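The three measures above can be computed directly with Python's standard statistics module; the sample values here are invented purely for illustration:

```python
import statistics

# Hypothetical sample of ages, for illustration only
ages = [22, 25, 25, 30, 41]

mean_age = statistics.mean(ages)      # arithmetic average (interval/ratio data)
median_age = statistics.median(ages)  # middle value once the data are ordered
mode_age = statistics.mode(ages)      # the point of maximum frequency
```

With this sample the mean is 28.6, while the median and mode are both 25, showing how the three measures can disagree on the same data.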
MEASURE OF DISPERSION
Measures of central tendency do not explain distribution of variables
4 measures of dispersion
1.Range
2.Variance and standard deviation
3.Coefficient of variation
4.Relative and absolute frequencies
DESCRIPTIVE ANALYSIS OF BIVARIATE DATA
Three measures are used:
1.Cross tabulation
2.Spearman's rank correlation coefficient
3.Pearson's linear correlation coefficient
Cross Tabulation
Responses of two questions are combined
Spearman’s rank order correlation coefficient.
Used in case of ordinal data
This session differentiates between univariate, bivariate, and multivariate analysis. It covers practical assessment of the table of critical values and understanding of degrees of freedom.
Fundamentals of Samples, and Nominal and Ordinal Statistics
Introduction
This module explores the concept of normal distribution and the role it plays in facilitating the ability to generalize and apply research results from samples to populations. We also learn the basics of measures of central tendency and of dispersion as techniques to describe a sample and how to use them in reviewing a research study. Finally, we look at two popular non-parametric statistics analyses that can be used by health care administrators to examine relationships between variables: the Chi-Square Analysis and the Spearman Rank Order Correlation Coefficient.
Concept of the Normal Distribution
The normal distribution is a fundamental concept in statistics. It helps in understanding samples and their relationship to the larger population. The normal distribution is also known as the Bell Curve. It is based on the premise that the bulk of a sample's data set will cluster around the midpoint or center and will drop down to smaller levels as one moves further towards the left and right ends of the curve (see below). The normal distribution requires a larger sample size, since the larger the sample, the more closely the sample's distribution approaches a true normal distribution.
It is important to recognize the role of the normal distribution as a mathematical model for errors occurring by chance. It also provides a way to describe a sample variable based on measurements of the sample, and to compare it to other samples.
The Bell Curve's Significance
One of the goals of good evidence-based practice is to implement practices that bring better outcomes to patients as a whole (the population). To do this, you need to know how the sample matches the population. In most cases, the larger population will not be available to measure, but we know that the larger a data set is, the more likely it is to be in the form of a normal distribution. This is the basic argument for using the largest feasible number of subjects in the sample.
Characteristics of a Sample
The characteristics of a sample are described by measuring i ...
Week 5 Lecture 14
The Chi Square Test
Quite often, patterns of responses or measures give us a lot of information. Patterns are
generally the result of counting how many things fit into a particular category. Whenever we
make a histogram, bar, or pie chart we are looking at the pattern of the data. Frequently, changes
in these visual patterns will be our first clues that things have changed, and the first clue that we
need to initiate a research study (Lind, Marchel, & Wathen, 2008).
One of the most useful tests in examining patterns and relationships in data involving
counts (how many fit into this category, how many into that, etc.) is the chi-square. It is
extremely easy to calculate and has many more uses than we will cover. Examining patterns
involves two uses of the Chi-square - the goodness of fit and the contingency table. Both of
these uses have a common trait: they involve counts per group. In fact, the chi-square is the only
statistic we will look at that we use when we have counts per multiple groups (Tanner &
Youssef-Morgan, 2013).
Chi Square Goodness of Fit Test
The goodness of fit test checks to see if the data distribution (counts per group) matches
some pattern we are interested in. Example: Are the employees in our example company
distributed equally across the grades? Or, a more reasonable expectation for a company might be
are the employees distributed in a pyramid fashion – most on the bottom and few at the top?
The Chi Square test compares the actual versus a proposed distribution of counts by
generating a measure for each cell or count: (actual − expected)² / expected. Summing these for all
of the cells or groups provides us with the Chi Square Statistic. As with our other tests, we
determine the p-value of getting a result as large or larger to determine if we reject or not reject
our null hypothesis. An example will show the approach using Excel.
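The goodness-of-fit arithmetic can also be sketched in plain Python; the employee counts per grade below are hypothetical, and the statistic uses the standard (observed − expected)² / expected form:

```python
# Hypothetical employee counts per grade, tested against an equal distribution
observed = [30, 25, 20, 25]
groups = len(observed)
expected = [sum(observed) / groups] * groups   # 25 per grade if equally spread

# Goodness-of-fit statistic: sum of (observed - expected)^2 / expected
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = groups - 1   # degrees of freedom: number of groups minus one
```

For these counts the statistic is 2.0 with 3 degrees of freedom, which would then be compared against a chi-square table or p-value as described below.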
Regardless of the Chi Square test, the chi square related functions are found in the fx
Statistics window rather than the Data Analysis where we found the t and ANOVA test
functions. The most important for us are:
• CHISQ.TEST (actual range, expected range) – returns the p-value for the test
• CHISQ.INV.RT(p-value, df) – returns the actual Chi Square value for the p-value
or probability value used.
• CHISQ.DIST.RT(X, df) – returns the p-value for a given value.
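As a rough cross-check on these spreadsheet functions, the right-tail p-value has a closed form for one degree of freedom that plain Python can compute (this shortcut holds only for df = 1; the 3.84 input is the familiar .05 critical value):

```python
import math

def chisq_rt_pvalue_df1(x):
    """Right-tail chi-square p-value for df = 1 only.

    For one degree of freedom, P(X > x) = erfc(sqrt(x / 2)); this mirrors
    what CHISQ.DIST.RT(x, 1) returns for the df = 1 case.
    """
    return math.erfc(math.sqrt(x / 2))

p = chisq_rt_pvalue_df1(3.84)  # close to the usual .05 cutoff
```

For other degrees of freedom there is no such elementary shortcut, which is one reason the spreadsheet functions (or SPSS) are the practical tools here.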
When we have a table of actual and expected results, using the =CHISQ.TEST(actual
range, expected range) will provide us with the p-value of the calculated chi square value (but
does not give us the actual calculated chi square value for the test). We can compare this value
against our alpha criteria (generally 0.05) to make our decision about rejecting or not rejecting
the null hypothesis.
If, after finding the p-value for our chi square test, we want to determine the calculated
value of the chi square statistic, we can use the =CHISQ.INV.RT(probability, df).
Univariate and Bivariate Analysis in SPSS (Subodh Khanal)
These slides show how to perform various univariate and bivariate tests in SPSS, along with entering and analysing multiple responses.
2. So far the statistical methods we
have used only permit us to:
• Look at the frequency in which certain
numbers or categories occur.
• Look at measures of central tendency such
as means, modes, and medians for one
variable.
• Look at measures of dispersion such as
standard deviation and z scores for one
interval or ratio level variable.
3. Bivariate analysis allows us to:
• Look at associations/relationships among
two variables.
• Look at measures of the strength of the
relationship between two variables.
• Test hypotheses about relationships
between two nominal or ordinal level
variables.
4. For example, what does this table tell us about
opinions on welfare by gender?
Support cutting welfare benefits for immigrants?
          Male    Female
Yes        15       5
No         10      20
Total      25      25
5. Are frequencies sufficient to
allow us to make comparisons
about groups?
What other information do we
need?
6. Is this table more helpful?
Benefits for Immigrants    Male          Female
Yes                        15 (60%)       5 (20%)
No                         10 (40%)      20 (80%)
Total                      25 (100%)     25 (100%)
7. How would you write a sentence
or two to describe what is in this
table?
8. Rules for cross-tabulation
• Calculate either column or row percents.
• Calculations are the number of frequencies
in a cell of a table divided by the total
number of frequencies in that column or
row, for example 20/25 = 80.0%
• All percentages in a column or row should
total 100%.
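These rules can be sketched in Python with the counts from the welfare-benefits table above:

```python
# Counts from the welfare-benefits table: rows Yes/No, columns Male/Female
table = {
    "Male":   {"Yes": 15, "No": 10},
    "Female": {"Yes": 5,  "No": 20},
}

column_percents = {}
for column, cells in table.items():
    total = sum(cells.values())  # column marginal, e.g. 25 males
    # each cell divided by its column total, expressed as a percent
    column_percents[column] = {row: 100 * n / total for row, n in cells.items()}
```

Each column's percentages sum to 100, and the Female/No cell works out to 20/25 = 80.0%, matching the rule's worked example.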
9. Let’s look at another example –
social work degrees by gender
Social Work Degree    Male           Female
BA                    20 (33.3%)     20 (    %)
MSW                   30 (      )    70 (70.0%)
Ph.D.                 10 (16.7%)     10 (10.0%)
Total                 60 (100.0%)   100 (100.0%)
10. Questions:
What group had the largest percentage of
Ph.Ds?
What are the ways in which you could
find the missing numbers?
Is it obvious why you would use
percentages to make comparisons among
two or more groups?
11. In the following table, were people with drug,
alcohol, or a combination of both most likely
to be referred for individual treatment?
Services                Alcohol      Drugs        Both
Individual Treatment    10 (25%)     30 (60%)     5 (50%)
Group Treatment         10 (25%)     10 (20%)     2 (20%)
AA                      20 (50%)     10 (20%)     3 (30%)
Total                   40 (100%)    50 (100%)   10 (100%)
12. Use the same table to answer the
following question:
How much more likely are
people with alcohol problems
alone to be referred to AA than
people with drug problems or a
combination of drug and alcohol
problems?
13. We use cross-tabulation when:
• We want to look at relationships among two
or three variables.
• We want a descriptive statistical measure to
tell us whether differences among groups
are large enough to indicate some sort of
relationship among variables.
14. Cross-tabs are not sufficient to:
• Tell us the strength or actual size of the relationships
among two or three variables.
• Test a hypothesis about the relationship between two or
three variables.
• Tell us the direction of the relationship among two or more
variables.
• Look at relationships between one nominal or ordinal
variable and one ratio or interval variable unless the range
of possible values for the ratio or interval variable is small.
What do you think a table with a large number of ratio
values would look like?
15. We can use cross-tabs to visually
assess whether independent and
dependent variables might be
related. In addition, we also use
cross-tabs to find out if
demographic variables such as
gender and ethnicity are related
to the second variable.
16. For example, gender may
determine if someone votes
Democratic or Republican or if
income is high, medium, or low.
Ethnicity might be related to
where someone lives or attitudes
about whether undocumented
workers should receive driver’s
licenses.
17. Because we use tables in these ways, we can
set up some decision rules about how to use
tables.
• Independent variables should be column variables.
• If you are not looking at independent and
dependent variable relationships, use the variable
that can logically be said to influence the other as
your column variable.
• Using this rule, always calculate column
percentages rather than row percentages.
• Use the column percentages to interpret your
results.
18. For example,
• If we were looking at the relationship between
gender and income, gender would be the column
variable and income would be the row variable.
Logically gender can determine income. Income
does not determine your gender.
• If we were looking at the relationship between
ethnicity and location of a person’s home,
ethnicity would be the column variable.
• However, if we were looking at the relationship
between gender and ethnicity, one does not
influence the other. Either variable could be the
column variable.
19. SPSS will allow you to choose a
column variable and row variable
and whether or not your table
will include column or row
percents.
20. You must use an additional statistic, chi-
square, if you want to:
• Test a hypothesis about two variables.
• Look at the strength of the relationship between an
independent and dependent variable.
• Determine whether the relationship between the
two variables is large enough to rule out random
chance or sampling error as reasons that there
appears to be a relationship between the two
variables.
21. Chi-square is simply an extension of a
cross-tabulation that gives you more
information about the relationship.
However, it provides no information
about the direction of the relationship
(positive or negative) between the two
variables.
22. Let’s use the following table to
test a hypothesis:
                       Education
Income                 High    Low    Total
High (Above $40,000)    40             50
Low ($39,999 or less)                  50
Total                   50      50     100
23. I have not filled in all of the information
because we need to talk about two concepts
before we start calculations:
• Degrees of Freedom: In any table, there are
a limited number of choices for the values
in each cell.
• Marginals: Total frequencies in columns
and rows.
24. Let’s look at the number of choices
we have in the previous table:
                       Education
Income                 High    Low    Total
High (Above $40,000)    40             50
Low ($39,999 or less)                  50
Total                   50      50     100
25. So the table becomes:
                       Education
Income                 High    Low    Total
High (Above $40,000)    40      10     50
Low ($39,999 or less)   10      40     50
Total                   50      50     100
26. The rules for determining degrees of freedom
in cross-tabulations or contingency tables:
• In any two by two tables (two columns, two
rows, excluding marginals) DF = 1.
• For all other tables, calculate DF as:
(c -1 ) * (r-1) where c = columns and r =
rows.
( So for a table with 3 columns and 4 rows,
DF = ____. )
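The (c − 1)(r − 1) rule is a one-line function; the example sizes below are illustrative (the quiz table above is left for the reader):

```python
def degrees_of_freedom(columns, rows):
    # df for a contingency table: (c - 1) * (r - 1), marginals excluded
    return (columns - 1) * (rows - 1)

df_two_by_two = degrees_of_freedom(2, 2)    # always 1 for a 2x2 table
df_two_by_three = degrees_of_freedom(2, 3)  # e.g. 2 columns, 3 rows
```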
27. Importance of Degrees of Freedom
• You will see degrees of freedom on your SPSS
print out.
• Most types of inferential statistics use DF in
calculations.
• In chi-square, we need to know DF if we are
calculating chi-square by hand. You must use the
value of the chi-square and DF to determine if the
chi-square value is large enough to be statistically
significant (consult chi-square table in most
statistics books).
28. Steps in testing a hypothesis:
• State the research hypothesis
• State the null hypothesis
• Choose a level of statistical significance
(alpha level)
• Select and compute the test statistic
• Make a decision regarding whether to
accept or reject the null hypothesis.
29. Calculating Chi-Square
• Formula is χ² = Σ (O − E)² / E
where O is the observed value in a cell and E is the expected value we
would see in the same cell if there were no association
30. First steps
Alternative hypothesis is: There is a relationship
between income level and education for
respondents in a survey of BA students.
Null hypothesis is: There is no relationship between
income level and education for respondents in a
survey of BA students
Significance level (alpha) set at .05
31. Rules for determining whether the chi-square
statistic and probability are large enough to verify a
relationship.
• For hand calculations, use the degree(s) of
freedom and the confidence level you set to check
the Chi-square table found in most statistics
books. For the chi-square to be statistically
significant, it must be the same size or larger than
the number in the table.
• On an SPSS print out, the p. or significance value
must be the same size or smaller than your
significance level.
32. The formula for expected values is E = (R × C) / N, where R is the row
total, C is the column total, and N is the overall sample size.

                       Education
Income                 High    Low    Total
High (Above $40,000)    25      25     50
Low ($39,999 or less)   25      25     50
Total                   50      50     100
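A short sketch using the marginals from the table confirms that every expected count here is 25 (row total times column total, divided by N):

```python
row_totals = [50, 50]     # income marginals: high, low
column_totals = [50, 50]  # education marginals: high, low
n = 100                   # grand total

# E = (row total * column total) / N for every cell
expected = [[r * c / n for c in column_totals] for r in row_totals]
```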
33. Go back to our first table
                       Education
Income                 High    Low    Total
High (Above $40,000)    40      10     50
Low ($39,999 or less)   10      40     50
Total                   50      50     100
34. Chi-square calculation is

        Expected value       Chi-square contribution
Cell 1  50 × 50/100 = 25     (40 − 25)²/25 = 9
Cell 2  50 × 50/100 = 25     (10 − 25)²/25 = 9
Cell 3  50 × 50/100 = 25     (10 − 25)²/25 = 9
Cell 4  50 × 50/100 = 25     (40 − 25)²/25 = 9
Total chi-square                             36

At .05 with df = 1, chi-square must be larger than 3.84 to be statistically
significant.
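A minimal Python sketch reproduces this slide's numbers from the observed income-by-education counts, using the standard expected-count formula:

```python
# Observed counts: rows are high/low income, columns high/low education
observed = [[40, 10], [10, 40]]
row_totals = [sum(row) for row in observed]
column_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * column_totals[j] / n  # expected count, 25 here
        chi_square += (o - e) ** 2 / e            # each cell contributes 9
```

The statistic comes out to 36, far above the 3.84 critical value, so the null hypothesis of no relationship would be rejected.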
35. Let’s calculate another chi-square- service
receipt by location of residence
Service    Urban    Rural    Total
Yes         20       40       60
No          30       10       40
Total       50       50      100
36. For this table,
• DF = 1
• Alternative hypothesis:
Receiving service is associated with
location of residence.
Null hypothesis:
There is no association between receiving
service and location of residence.
37. Calculations for chi-square are

        Expected value       Chi-square contribution
Cell 1  50 × 60/100 = 30     (20 − 30)²/30 = 3.33
Cell 2  50 × 40/100 = 20     (30 − 20)²/20 = 5.00
Cell 3  50 × 60/100 = 30     (40 − 30)²/30 = 3.33
Cell 4  50 × 40/100 = 20     (10 − 20)²/20 = 5.00
Total chi-square                            16.67

At df = 1 and the .01 level, chi-square must be greater than 6.64. Do we
accept or reject the null hypothesis?
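This calculation can also be checked with a short Python sketch using the observed counts from the service-by-residence table:

```python
# Observed counts: rows are service Yes/No, columns Urban/Rural
observed = [[20, 40], [30, 10]]
row_totals = [sum(row) for row in observed]           # 60 and 40
column_totals = [sum(col) for col in zip(*observed)]  # 50 and 50
n = sum(row_totals)                                   # 100

chi_square = sum(
    (o - row_totals[i] * column_totals[j] / n) ** 2
    / (row_totals[i] * column_totals[j] / n)
    for i, row in enumerate(observed)
    for j, o in enumerate(row)
)
# roughly 16.67, above the 6.64 critical value at df = 1 and the .01 level
```

Since 16.67 exceeds 6.64, the null hypothesis of no association is rejected.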
38. Running chi-square in SPSS
• Select descriptive statistics
• Select cross-tabulation
• Highlight your independent variable and click on the arrow.
• Highlight your dependent variable and click on the arrow.
• Select Cells
• Choose column percents
• Click continue
• Select statistics
• Select chi-square
• Click continue
• Click ok
39. SPSS print out

Chi-Square Tests
                               Value     df    Asymp. Sig. (2-sided)
Pearson Chi-Square             2.569a     5    .766
Likelihood Ratio               2.590      5    .763
Linear-by-Linear Association    .087      1    .768
N of Valid Cases                336

a. 2 cells (16.7%) have expected count less than 5. The minimum expected
count is 1.57.
40. Recode
• To use ratio or interval level variables in an SPSS cross-tabulation,
you need to recode, or change, the variable into a categorical
(nominal or ordinal) variable.
You first need to decide how you will set up
categories and assign a number to them.
For example if your ratio variables for Age are: 25,
37, 42, 50, and 64, you might decide on two
categories: 1 = under 50
2 = 50 and over
41. Recode Instructions
• Go to Transform menu
• Go to Recode
• Select different variable
• Type in new variable name
• Click continue
• Enter range of ratio numbers for first category (25 to 49)
• Enter number for first category (1) in right hand screen.
• Click Add
• Enter range of ratio numbers (50 to 64) for category two
• Enter number for second category (2)
• Click Add
• Click Continue
• Click Change
• Click o.k.
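The same recode logic can be sketched in Python; the recode_age helper is a hypothetical name, but the two categories mirror the slide's example:

```python
def recode_age(age):
    # 1 = under 50, 2 = 50 and over, matching the slide's two categories
    return 1 if age < 50 else 2

ages = [25, 37, 42, 50, 64]
recoded = [recode_age(a) for a in ages]
```

The five ages from the example recode to [1, 1, 1, 2, 2], the categorical values you would then feed into a cross-tabulation.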