Glossary of Research

Very important for new researchers: this glossary helps you learn the terminology needed for research and develops a clear understanding of key terms.

Published in: Education

Glossary of Research

  1. 1. GLASSORY OF RESEARCHResearch MethodologyAdmin[Type the document title][Type the abstract of the document here. The abstract is typically a short summary of the contents of the document. Type the abstract of the document here. The abstract is typically a short summary of the contents of the document.]<br />Glossary<br />This Glossary provides definitions for key terms used in the previous chapters. Most definitions include other key terms. Key terms used in the definition of other key terms are in bold type. This lets you go to any term you encounter and find its meaning.<br />In addition, other glossaries of social research and statistical terms are available on Web sites. The names and Web addresses of some of these Glossaries are listed under "Aids - Internet Resources" at the end of Chapter 17<br />A<br />Abstract (abstraction) - a mental image of something that people experience and agree to describe in a certain way; concepts for example, are abstractions derived from observations and defined in scientific terms; abstract is the opposite of concrete, which refers to the specific things we experience and can observe<br />(An) abstract: a short summary of a publication, usually about 250 words<br />Alternative hypothesis: the original hypothesis formulated at the beginning of a research project<br />Analysis: the process of summarizing and organizing   data to establish the results of an investigation<br />Analysis of variance: a statistical test used to determine if differences among three or more means are statistically significant<br />Anonymity: the assurance given to respondents that no one, not even the investigator will be able to identify the respondent or any data supplied by or about the respondent<br />Area sample: see cluster sample<br />Assessment research: research undertaken to see if a program is achieving the objectives set for it; also referred to as evaluation research<br />Association (or associated): refers to the extent to which one 
variable is related to another variable; a measure of how changes in one variable influence changes in another variable<br />Attribute: the elements that make up a variable; may be expressed either in words (male or female) or in numbers<br />Available data: data that already exists in the form of responses to previoussurveys, as mass media material, or as other written, audio, video, or cultural artifacts<br />Average: a loose term used in everyday language to describe one form of the central tendency of a distribution; statisticians use mean in place of average; two other averages or "typical" scores for a distribution are the median and the mode<br />right0<br />B<br />Back translation: the translation of a document that was translated into a new language and then back to the   original language<br />Bar chart: a graphic way of presenting data in which bars representing the attributesof a variable are arranged along the X axis of a graph and the height of the bars, as measured on the Y axis, show the frequency for each attribute; also known as ahistogram<br />Bias: any tendency to see events in a certain way that causes distortions in the collection or analysis of data or in drawing conclusions from findings<br />Bimodal distribution: a distribution with two modes<br />Bivariate analysis: the simultaneous analysis of two variables; bivatiate analysis is generally done to find the extent of association between two variables<br />Bogardus social distance scale: a measurement technique for finding how closelyrespondents say they are willing to associate with members of some designated group; social distance scales are used to measure attitudes toward some group of persons<br />Browser: an Internet -based service that allows a computer to connect with the Internet<br />Browsing: casual examination of books or other materials in search of relevant materials; one can also browse among Web sites, using links on sites to move from one site to another; this form of 
browsing is called surfing<br />right0<br />C<br />Call back - the act of making a second or third visit to a respondent to obtain aninterview<br />Case study: a detailed investigation of a person, organization, village or other entity for the purpose of understanding the entity in all its complexity as fully as possible<br />Casual observation: observation of behavior in which actions are recorded in narrative form; stands in contrast to structured observation where observations are noted in terms of pre-defined categories<br />Categorical variable: a variable whose attributes form some kind of a classification; the categories used form the elements of the classification; male and female, for example, would be categories of the classification of persons based on gender; categorical variables are also referred to as qualitative variables<br />Causal hypothesis testing: testing a hypothesis under carefully controlled conditions, as in a true experiment, to exclude the influence of any variable other than the independent or experimenta l variable upon the dependent variable; under these conditions, changes in the dependent variable are assumed to be caused by the independent or experimental variable<br />Cause and effect (or causal relationship): refers to a relationship where onevariable is thought to be solely or substantially responsible for changes in another variable; see the definition of causal hypothesis testing<br />CD-ROM: stands for "compact disk read only memory," a form of electronic storage for music, data files and other information; is "read" or played with the help of a computer<br />Cell: a part of a table identified by the intersection of a column and a row of the table<br />right0<br />Census: collection of data from all the members of some population; also calledenumeration<br />Central tendency: measures of the degree to which scores are clustered around themean of a distribution<br />Chart format: used when the same question is repeated with 
the same response categories; example, when asking for the ages of all members of a household<br />Chain sample: see network sampling<br />Chance selection: see random selection<br />Chi square: a statistical test for determining whether two variables are independentof one another; chi square is based on comparing differences between observed and expected frequencies for various cells in a table<br />Chronbach's alpha:   used in item analysis to select items that are highly associated with the other items in a composite measure; items whose scorescorrelate moderately with other items are assumed to be measuring the same thing and, therefore, the scores can be safely combined to provide a composite measure<br />Class interval width: closely related to class limits;   any whole number (22, 51, 175, etc) is really the midpoint of range that extends 0.5 below and 0.5 above the number, thus the interval of 20-29 has a class width of 19.49 to 29.49 with 25.0 as its midpoint<br />Class limits: the range of numbers that are created when continuous data are combined to form broader categories or intervals; for example, exact ages can be combined into intervals, such as 20-29, 30-39, etc.; the ten year categories are the class limits for the age intervals<br />Classical experiment: a technique for testing hypotheses under carefully controlled conditions, where the experimental or independent variable is administered to theexperimental group but not to an equivalent control group and measurements of the dependent variable are compared between the two groups following the experiment; also called the true experiment<br />right0<br />Closed item: a question or item with a fixed set of responses; respondents are asked to select the response that most closely matches their views<br />Cluster sampling: a probability sampling design based on random selection of successive clusters or units with a simple random sample used in the final cluster to form the final sample; also referred 
to as an area or multistage sample<br />Codebook: a record book used to provide information about variables, theirattributes, and their locations in a data file; a codebook is used to plan analyses of variables and to interpret the results of analyses<br />Code transfer sheet: a sheet of paper with columns for recording the attributes ofvariables and with rows for each respondent or case; the code for each response or observation is placed at the intersection of the column for the attribute and the row for a particular respondent<br />Coding: the process of assigning numbers to represent the attributes of indicators;coding is a necessary step before data can be entered into a computer data filebecause computers can only "read" numbers<br />Coefficient of correlation: a statistical measure of association between twoquantitative variables; a coefficient of correlation can vary between ±1.00<br />Coefficient of determination: the squared value of the coefficient of correlation; it indicates the percentage of the variation in dependent variable accounted for by the effect of the independent variable<br />Coefficient of reproducibility: the measure of the extent to which responses to a set of items form a Guttman scale; a coefficient of.90 or higher is the generally accepted coefficient<br />Comparison analysis: a research design based on two or more independentsamples, used to estimate how much difference there is among the samples in terms of variables being measured right0<br />Composite measure (or score): scores or other measures based on two or moreindicators; examples are scales and indexes, each of which consist of at least twoitems<br />Computer analysis: analysis of data using a statistical software analysis packagestored in a computer<br />Concept - an abstract description we use to describe things that are real to us but that we cannot experience directly; mental images we share and use to describe things we talk about<br />Conceptualization: the process of 
defining concepts central to an investigation; also includes specifying and defining dimensions of a concept for which measurementswill be developed<br />Conclusion: the most general statement derived from the results of an investigation; the investigator draws conclusions from the analysis of data collected for the investigation<br />Concrete: refers to specific things we can experience directly; the specific, identifiable chair you are sitting on is considered concrete; in contrast, the idea of a chair isabstract<br />Concurrent validity: a method for estimating the validity of a measuring instrument, such as a scale, based on showing that scores on the instrument differentiate between persons known to differ in the variable being measured; example, in developing a scale to measure attitudes toward some group, the scores of persons known to hold strong positive and negative views toward that group would be compared; if the mean scores of the two groups were substantially different, the scale would be assumed to have demonstrated concurrent validity<br />Confidence interval: the range of values that contain the population parameter at a specified level of confidence; if a mean is estimated to lie between 5.05 and 8.15, the confidence interval is 5.05 to 8.15<br />Confidence level: see level of confidence<br />Confidentiality: the assurance given a respondent that even though the investigator can identify the respondent his or her responses, the investigator will protect the respondent's identity respondent<br />Construct: another term used for concept; a construct is a definition of a variable we intend to investigate; the term is used because, as social scientists, we construct a definition of a variable for the purposes of measurement<br />right0<br />Contamination: occurs when members of a control group are accidentally or otherwise exposed to the experimental variable<br />Content analysis: a method for analyzing the content of written or verbal material; most 
often used in the analysis of mass media materials; based on development of a set of categories for the coding the content of the material<br />Content validity: a form of validity based on how well the content of an indicato r reflects the concept it is intended to measure<br />Contingency question: a question or item used to select respondents for further questions, depending on how they answer a preceding question; for example, before asking persons which political party they belong to, they could be asked if they now belong to a party, only those who answered "yes" would then be asked for the name of the party; also called a filter question<br />Contingency table: see cross classification table<br />Continuous variable: a variable whose attributes can assume increasingly smaller or larger values; examples are age or income, each of which can be measured in smaller and smaller amounts<br />Control group: the group in an experiment that is not exposed to the experimentalor independent variable but is selected to match the experimental group, which is exposed to the experimental or independent variable, in all other ways<br />Control variable: a variable that is held constant to remove its influence on other variables<br />Controlled comparison: a multivariable analysis in which a control variable is introduced to see if it causes changes in a relationship between other variables<br />Controlled setting: any situation created by an investigator for the purpose of  hypothesis testing in which selected variables are controlled to minimize their influence on the outcome of the research<br />Convenience sample: a nonprobability form of sampling based on collecting datafrom who ever is available or encountered; also called a haphazard sample<br />Copy: the process of making a copy of material from a data file or Web site by using the copy function of a computer program<br />right0<br />Correlation (coefficient of): a statistical measure of the empirical 
associationbetween two indicators; also referred to as the coefficient of correlation; values for correlation coefficients vary between ±1.00<br />Criterion validity: the extent to which an indicator for a concep t is associated with an eternal criterion;   for example, the validity of a test given in secondary school for predicting success in the university is shown by its ability to predict grade point averages at the end of the freshman year at the university; also referred to aspredictive validity<br />Cross classification: analysis based on showing the relationship between twovariables in categorical form; done in the form of bivariate or multivariate tables<br />Cross classification table (cross-tabulation table):<br />a table showing the relationship between two variables; the data for one variable is displayed in columns and data for the other variable in rows of the table; also referred to as a contingency table<br />Cross products: the products of the scores of two variables, required for the calculation of the coefficient of correlation and other statistics<br />Cross-sectional design: a design used for surveys; based on use of a probability sample so that the sample represents a cross-section of a population<br />Cumulative frequency distribution: a distribution in which the frequency for eachattribute is added to the next higher or lower attribute in the distribution, beginning with the lowest attribute and adding down the distribution or with the highest attribute and adding up the distribution; cumulative frequency distributions are useful for saying how many respondents answered above or below a certain attribute<br />Cumulative percentage distribution: a distribution in which the percentage for each attribute is added to the next higher or lower attribute in the distribution, beginning with the lowest attribute and adding down the distribution or with the highest attribute and adding up the distribution; cumulative percentage distributions are 
useful for saying what percentage of respondents answered above or below a certain attribute<br />Curvilinear relationship: a relationship between two variables in which the direction of the relation moves in one direction and then reverses;   for example, infant mortality rates are high for the youngest mothers, then decline as mothers are older, only to rise again for the oldest mothers;   also called a nonlinear relationship<br />right0<br />D<br />Data: the specific bits of information collected by a scientifically valid method of collection; can be in the form of observation, by means of an experiment, or by asking persons questions as part of a survey<br />Data collection: the planned, systematic process of obtaining data to answer a research question<br />Data cleaning: the process of reviewing codes for attribute s entered into a computer data file to find and correct errors<br />Database: a searchable computer-based compilation of information on a topic or covering a discipline<br />Data entry: the process of entering codes into a data file stored in a computer; data entry must be done according to the rules of the software program being used<br />Data file: coded data stored in a computer according to locations specified in acodebook<br />Date modification: changing or adding data after the initial data were coded;examples include developing composite scores or recoding open-ended responses to form new categories<br />Deductive logic: a form of reasoning from a general principle or statement, often based on a theoretical framework; for example, derivation of a hypothesis from atheoretical framework<br />Degrees of freedom: a value used in interpreting tests of statistical significance;degrees of freedom are calculated in different ways for different tests of significance<br />right0<br />Dependent variable: the variable that depends on or is influenced by another variable; dependent variables are what researchers seek to understand and explain<br 
/>Descriptive research: investigations whose purpose is to provide precise descriptions of variables and their relationships; surveys are frequently used as designs for descriptive research<br />Descriptive statistical analysis: analysis of data   to describe the characteristics ofsample or for measuring relationships between variables; examples include measures of central tendency (mean, median and mode), measures of variability (variance and standard deviation) or measures of association (correlation, chi square)<br />Descriptive statistics: statistics used to describe features of distributions of scores,such as means and standard deviations<br />Design: a plan for the collection and analysis of data; includes selection of a method of collecting data, ways of measuring variables, a sampling plan, and plans for the analysis of the data to be collected<br />Dimension: a specified and defined aspect or component of a concept selected formeasurement; dimensions of a concept are identified by the process of conceptualization<br />Direct relationship: see positive relationship<br />Discrete variable: a variable whose attributes cannot be separated into smaller units; for example, gender exists in only two forms - male or female<br />Distribution: an ordered set of numbers showing how many times each occurred, from the lowest to the highest number or the reverse<br />Download: the act of copying information from a computer-based file, such as those found on Web sites, to the hard or floppy drive of a computer<br />Draft questionnaire: the form of a questionnaire ready for pretesting; a draft questionnaire is usually revised based on information obtained during one or morepretests<br />right0<br />E<br />Ecological fallacy: an error in drawing a conclusion about the behavior or attitudes of individuals when data are collected at the level of groups to which individuals may belong<br />Edge coding: a way of showing codes for responses in which the codes assigned to 
responses are written in the margin of the questionnaire opposite the item to which they refer<br />Email: stands for "electronic mail," a form of communication using the Internet as a way of connecting to persons you wish to communicate with<br />Email survey: a survey conducted by sending a questionnaire by email to a list orsample of email addresses; respondents are asked to complete and return the questionnaire by email<br />Empirical: refers to using one's senses (sight, hearing, touch, smelling, and tasting) to learn about events; empirical research is based on measurement of observable events<br />Empirical generalization: a statement or conclusion based on empirical results;basing a conclusion on a relationship between two indicators is an example of an empirical generalization<br />Empirical relationship: a measured or observed relationship based on data for twovariables<br />Empiricism: the use of one's senses to observe and record events external to ourselves; scientific inquiry is based on knowledge derived from observation<br />Enumeration: the process of collecting data from all the members of a population;also called taking a census<br />Equivalent forms measure of reliability: a technique for estimating reliabilitybased on the degree to which results from two equivalent scales or sets of observations are associated; a high level of association indicates high reliability<br />Evaluation research: research undertaken to see whether a program or activity is meeting or has met the objectives set for it<br />Executive summary: a summary of a report prepared to give a brief but complete description of the purpose, methods used, results, and conclusions of an investigation; executive summaries are often written to be understood by persons in administrative positions and those without research training<br />Experiment (experimental design):a research method used to test hypothesesunder carefully controlled conditions designed to rule out the effects of any 
variablesother than the experimental treatment; elements of an experiment include random assignment of subjects to either the experimental or the control group,measurement of the dependent variable in both groups at the beginning of the experiment; application of the experimental or independent variable to the experimental but not the control group; measurement of the independent variable at the end of the experiment, and comparison of measures on the dependent variable for the pretestand posttest measurements for both groups;   due to the effect of the experimental treatment, larger differences between pretest and posttest measurements are expected in the experimental as opposed to the control group<br />right0<br />Experimental effect: in an experiment, the measure of the impact of theexperimental treatment upon members of the experimental group; the experimental effect is measured as the difference in pretest and posttest scores in theexperimental as opposed to the control group<br />Experimental group: the group of subjects in an experiment who receive theexperimental treatment as contrasted to the control group whose members are not subjected to the experimental treatment<br />Experimental mortality: refers to the loss of subjects during the course of anexperiment; high experimental mortality undermines the validity of an experiment<br />Experimental treatment: in an experiment, this is the variable that is changed by the experimenter to see its effect on the dependent variable; also called theindependent variable or experimental variable<br />Experimental variable: this is another name for experimental treatment<br />Experimenter bias: any potential source of error introduced in an experiment in the way the experiment is designed, the way data are collected and analyzed, or howconclusions are drawn<br />Explanatory research: research undertaken to explain why certain behavior occurs; seeks to provide an explanation for why a relationship exists<br 
/>Exploratory research: research carried out to learn more about a problem or topic; usually undertaken to collect data for designing a descriptive or explanatoryinvestigation<br />External validity: refers to the degree that the results of an experiment can be extended or generalized beyond the conditions of the experiment to conditions in the real world<br />right0<br />F<br />Face validity: the characteristics of indicators that suggest they are a reasonable measure of a variable; example, questions about whether girls have the same right to education as boys would be reasonably valid indicators of attitudes toward gender equity<br />Field jottings: brief notes taken during an observation session to provide a basis for preparing more extensive field notes<br />Field notes: the full, detailed descriptions, sometimes based on field jottings, used to describe what occurred during an observation period; may also contain hypothesesand tentative explanations for what was observed<br />Field research: generally refers to qualitative research conducted in natural setting, as in a village or other public area<br />Filter question: see contingency question<br />Findings: see results<br />Focus group: a group of persons organized by an investigator to obtain detailed information about a topic or issue through unstructured but guided discussion<br />Formative evaluation: an evaluation carried out during the development of a program; used to produce data for guiding the future development of the program<br />Frequency: the number or count for the occurrence of an attribute of an indicator orvariable<br />Frequency distribution: an ordered list of the frequencies or counts for all theattributes of an indicator<br />Frequency matching: a technique for creating equivalent experimental andcontrol groups based on randomly assigning the same number of subjects with similar specific characteristics (so many of one gender, age, ethnic group, etc.) 
to each group<br />Frequency polygon:   see line graph<br />right0<br />G<br />Generalization: a statement based on the conclusions of a study that extends the conclusions to a broader or more general level<br />Generalizing: is the process, based on logic, for extending conclusions to a broader or more general level; generalizing may be done empirically, as when a statistic,based on a sample, is generalized to the population from which the sample was drawn or may be done theoretically by generalizing from results based on indicators to theoretical relationships among concepts represented by the indicators<br />Grounded theory: development of a theoretical explanation for behavior based on the analysis of data; this approach differs from the traditional deductive derivation of a hypothesis; grounded theory is used most often to generate explanations for behavior observed in qualitative investigations<br />Grouped data: continuous data that are combined into larger intervals or groups; example, instead of analyzing data for the exact ages of respondents, ages could be combined into five or ten-year intervals<br />Guttman scaling: a composite measure in which the scores for items indicate the expected pattern of responses<br />right0<br />H<br />Halo effect: in interviewing, the tendency to expect to receive a response in a certain (biased) way based on how previous respondents had responded; represents asystematic error in data collection<br />Hand analysis: analysis of data by hand counting; also referred to tallying responses<br />Haphazard sample: see convenience sample<br />Histogram: see bar chart<br />History effect: the influence of events on subjects during the course of anexperiment; example, an experiment to change attitudes toward some group could be invalidated by a major public event concerning the group in question<br />Home page: the initial screen or page shown when you visit a Web site; the home page generally has links to other pages on the site 
and to other related sites<br />Hypothesis: a tentative statement of an expected relationship between variables,usually derived deductively from a theoretical framework; hypotheses may also be based on an empirical findings or conclusions; hypotheses are confirmed (accepted) or disconfirmed (rejected), based on empirical data<br />Hypothesis testing: the process of obtaining empirical data to judge whether ahypothesis is confirmed (accepted) or disconfirmed (rejected); statistical tests are used in making this judgment<br />Hypothetical-inductive process: based on the combined use of deductive logic to derive a hypothesis followed by use of inductive logic to test whether the hypothesis is confirmed (accepted) or disconfirmed (rejected)<br />right0<br />I<br />Independent (independence): the lack of a relationship between two variables;when no relationship is observed, the variables are said to be independent<br />Independent variable: the variable that influences the value of another variable (thedependent variable); in an experiment, the independent variable is the one that is manipulated by the experimenter; in an experiment, the independent variable is also called the experimental or treatment variable<br />Index: a composite measure consisting of two or more indicators assumed to be of the same level of intensity; the indicators may be selected because they represent different dimensions of the concept the index is intended to measure<br />Index score: the interim composite score assigned to mixed type responses as a step in deriving a final Guttman score for a set of items: see Guttman scaling<br />Indicator: a variable used to measure a concept or one of its dimensions<br />Indirect relationship: see negative relationship<br />Inductive logic: a form of reasoning used in deriving conclusions from the results of an investigation; reasoning from the bits or separate pieces of data to a conclusion<br />Inequality signs (< and >): are used in reporting the 
results of statistical tests of significance to show whether the result produced a probability level of "greater than," shown as >, or "less than," shown as <, the .05 or .01 level of significance<br />Inferential statistical analysis: analysis used in conducting statistical tests of significance and for estimating parameters in a population from results obtained from a sampl e drawn form the population<br />Informed consent: the ethical practice of providing respondents or subjectsinformation about a study, particularly any risks involved, so they can make an informed decision about participating in the study<br />Instrumentation effect: any effect the process of measuring has on the dataobtained in an investigation; in an experiment, administration of the pretest could affect scores on the posttest, thus posing a threat to the validity of the experiment<br />Inter-analyst reliability: the degree to which the observations or ratings of the main investigator and one or more independent observers or analysts agree with one another; a high level of agreement indicates that the rating or coding categories have a high level of reliability<br />right0<br />Internal validity: the degree to which the results of an experiment can be attributed to the effects of the experimental (independent) variable and to no outside variables<br />Internet: the set of telecommunication connections and standards for transmitting information for exchanging information and accessing Web sites from one computer to another throughout the world<br />Internet survey: a form of survey in which questions are posted on a Web site or sent by email to r espondents who reply by completing the questionnaire on the Web site or sending responses by email<br />Interpretation (of results): the process of saying what the results mean; the purpose of interpretation is to develop the conclusions of an investigation or to explain what was found<br />Interrupted time series design: a form of a quasi experiment 
based on one group, with no control group; the occurrence of some variable is compared over time before and after some event that is thought to have an influence on the variable; example, does a large increase in the tax on cigarettes cause a decline in sales? data for sales before and after the imposition of the tax would be compared to answer this question<br />Interval: the range of numbers used for grouping continuous data<br />Interval measurement: based on an ordered set of categories where the intervals between the categories are assumed to be equal; the numerical values assigned, however, are not based on an absolute zero (examples, intelligence scores, scores on an attitude scale)<br />Interval sample: see systematic random sample<br />Interview schedule: the set of questions used to interview respondents; today, the term questionnaire is used in place of interview guide or schedule<br />Interviewing: the process of collecting data from respondents by asking questions and recording their responses; in structured interviewing, a questionnaire with a fixed set of questions is used; in unstructured interviewing, questions are asked informally and in any order, more in a conversational style with respondents<br />Intra-analyst reliability: refers to the consistency in recording observations or in coding data by a single investigator<br />Inverse relationship: see negative relationship<br />Item: a question or statement used in a questionnaire to obtain data about a variable<br />Item analysis: the process of determining the extent to which items used in a composite measure are related to one another and how well each item contributes to the composite score; item analysis is used to assess the unidimensionality or internal consistency of the items making up a tentative composite measure<br />K<br />Key informant: a well-informed person who provides crucial information in a qualitative investigation; may also review an investigator's 
description and explanation of events for accuracy and validity; information obtained from key informants is often vital to the success of field research<br />Key terms: words or phrases used in conducting a search of a database or for identifying relevant Web sites; key terms are selected to represent all the ways a concept may be expressed<br />L<br />Level of confidence: an estimate of the probability that a parameter lies within a specified range of values; example, a researcher might report a 95% level of confidence that the mean for the size of households in a population lies between 8.25 and 10.13 persons<br />Level of measurement: refers to the characteristics of measurements used to collect data; there are four levels of measurement - nominal, ordinal, interval, and ratio<br />Level of significance: the probability that the result of a statistical test could be due to sampling error; for example, a result said to be significant at the .05 level indicates that the result could have occurred due to chance variations among samples fewer than 5 times out of 100 random samples of the same size from the target population; at the .01 level of significance, the result would be considered as occurring due to sampling error less than 1 time out of every 100 samples<br />Likert scale: a composite measure based on a set of responses that range from one extreme to another; example, a scale may have a number of items with responses ranging across strongly agree, agree, uncertain, disagree, and strongly disagree<br />Line graph: a graphic way to present data in which the frequencies for attributes of a variable are represented by dots at the intersection of the attribute, as arranged along the X axis of the graph, and the values for frequencies, listed along the Y axis; the dots are then connected by a line, which creates the line graph; also known as a frequency polygon<br />Line of best fit: in a graph, shows the relationship between two variables; the line of best fit is 
the line that comes the closest to the largest number of dots representing the values for each pair of attributes for each respondent<br />Link: a connection provided on a Web site to other pages on the site or to a related Web site<br />List of references: the list of the publications, Web sites, or other sources of information cited in a report; references are prepared according to rules and listed alphabetically by the last name of the author; the list of references is placed at the end of the report<br />Longitudinal design: a research design used to measure changes in variables as they occur; data are obtained through successive waves of data collection from the same sample over a period of time<br />M<br />Matrix format: a table format for presenting items that vary in content but all have the same response categories; used frequently in presenting items asking about attitudes or views about some topic or issue<br />Maturation effect: any naturally occurring processes over time that may produce changes in subjects in an experiment; as people grow older, they change in many ways; thus, maturation is a threat to the validity of experiments conducted over long periods of time<br />Mean: one of the three measures of central tendency; the value of the sum of a set of scores divided by the number of scores; in everyday communication, the term average is used to indicate the mean<br />Measure: an indicator or set of indicators used to obtain data for a variable; also referred to as a measuring instrument<br />Measurement: the process of assigning numerical values or qualitative descriptions to attributes of an indicator or variable<br />Measurement error: the difference between the true value for an indicator and its observed value; the observed value is almost always different from the true value because of systematic and random errors that occur during data collection and analysis<br />Measuring instrument: see measure<br />Median: one of three measures 
of central tendency; the median is the middle score in a distribution<br />Mixed types: in Guttman scaling, mixed types are response patterns that do not match the expected pattern of responses; mixed types represent errors and reduce the coefficient of reproducibility, which is the measure of success in creating a Guttman scale<br />Mode: one of the three measures of central tendency; the mode is the most frequent score in a distribution<br />Mortality effect: refers to the loss of subjects during the course of an experiment; high mortality is a threat to the validity of an experiment<br />Multimethod research: an investigation using more than one method of collecting data; for example, an investigator may collect data on the same variables by means of observation, use of a survey, and analysis of available data<br />Multiple measures, before and after design: a quasi-experimental design in which data are obtained for a dependent variable from an experimental group and a nonequivalent control group at several times before and after an event; pre- and post-event data for the two groups are compared to see if the event had any effect on the dependent variable<br />Multistage sampling: see cluster sampling<br />Multivariate analysis: the simultaneous analysis of data for three or more variables; may be done in the form of tabular analysis or using statistical tests<br />N<br />Natural setting: any setting where people carry out normal, everyday activities; examples, life in the home, village, office, or other public places<br />Navigating: using navigation buttons and other aids to easily move among pages of a Web site<br />Navigation buttons: buttons or aids on a Web site one can click on to move quickly from one page of the site to another<br />Negative relationship: a relationship between two variables in which changes in one variable are associated with changes in the opposite direction for the other variable; example, years of schooling and 
fertility are negatively related; as schooling increases, fertility tends to decline<br />Negatively skewed distribution: a distribution in which most scores are located near the high end of the distribution, with a longer tail extending toward the low end<br />Network sample: a nonprobability sampling technique in which respondents who are initially contacted are asked to identify other members of the target population for inclusion in the investigation; example, in a study of female entrepreneurs, the first entrepreneurs who were interviewed would be asked to name other female entrepreneurs they know, who would then be contacted, interviewed, and asked to identify additional female entrepreneurs to be included in the sample, and so on; also called chain or snowball sampling<br />Nominal measurement: the lowest level of measurement; consists of giving names to categories or the attributes making up an indicator; nominal measurement simply indicates that the categories differ; for example, male and female are the categories or attributes of the variable of gender<br />Nonequivalent control group design: a form of quasi-experimental design based on use of a control group that is thought to be similar to the experimental group but whose members were not selected by random assignment<br />Nonlinear relationship: see curvilinear relationship<br />Nonprobability sampling: any form of sampling not based on random or chance selection of the members of the sample<br />Nonreactive measure: see unobtrusive measurement<br />Nonsignificant: any result judged to be within the range of chance variation that occurs from random sampling<br />Normal distribution: a distribution with a distinctive bell shape and certain specific properties; the most important for researchers is that approximately 68% of the scores in a normal distribution lie within ±1 standard deviation of the mean of the distribution, approximately 95% lie within ±2 standard deviations, and over 99% lie within ±3 standard deviations<br />Null 
hypothesis: a hypothesis established as a basis for conducting a statistical test of significance; the null hypothesis states that no relationship exists between two variables in the population from which a random sample was drawn; the null hypothesis is accepted or rejected, depending on the level of significance of the result of the test<br />Number: refers to the size of a sample or the frequency for the number of cases in an analysis<br />O<br />Objectivity: the ability to observe or reason without personal bias; while objectivity is virtually impossible to attain in all aspects of research, it is an ideal scientists strive to achieve<br />Observation: the process of using one's senses to perceive and record information about some aspect of the natural world; social scientists observe human interaction and behavior<br />Observational design: a flexible plan for conducting observations; usually the basis of field research<br />Observed value: the value for an indicator obtained as a result of measurement or observation; this is the value we know, and it almost always differs from the true value of the indicator because of random or systematic errors in data collection<br />One group, pre- and posttest experimental design: a quasi-experimental design based on a single group, with a pretest measurement of a dependent variable, followed by an experimental treatment and then a posttest of the dependent variable; this design is subject to all the threats to internal validity<br />Online: refers to connecting to the Internet, databases, or other computer-based sources of information by means of a computer<br />Open-ended items: questions where the respondent answers in his or her own words; a question is followed by blank space where the response is recorded or written; there are no response categories as there are with closed items<br />Operational definition: the definition of a concept as expressed by the way it is measured; the operational definition of social 
status, for example, is given by the item or items used to measure it<br />Operationalization: the process of developing measurements for indicators<br />Ordinal measurement: a measurement based on ranking or ordering of the attributes of a variable according to some criterion; level of schooling is an example of an ordinal measure - no schooling, primary level, secondary level, post-secondary level<br />Over-generalization: a statement or conclusion that goes beyond any supporting findings or results<br />Over-generalizing: the act of drawing a conclusion that is not supported by data; example, claiming that most men in a town prefer a certain political candidate when data were collected only from men who had attended a college or university and who represent a minority of men in the town<br />P<br />Page: a section of a Web site containing information on one of the topics or issues covered by the site<br />Panel design: a research design based on successive data collection from the same sample to measure changes in variables as they occur; panels are used in longitudinal research<br />Parameter: the value of any indicator in the target population; an enumeration or census produces parameters; generally we can only estimate parameters from statistics that summarize data from a probability sample taken from the target population<br />Participant observation: a qualitative research technique in which the investigator participates substantially in the activities of a group; used to develop an in-depth understanding of the behavior of the group and to see things as members of the group do<br />Participatory rural appraisal (PRA): an approach to data collection in which respondents are encouraged to participate fully in all phases of the research; is similar to and employs many of the features of rapid rural appraisal<br />Paste: the process of adding material taken from a database, Web site, or other source to the document you are writing<br />Percent: a 
proportion multiplied by 100; literally means per 100; example, if 13 workers out of a workforce of 170 were absent on a given day, the proportion absent is 13/170 or .076 and the percent is .076(100) or 7.6%<br />Perfect relationship: a perfect relationship occurs when a coefficient of correlation equals -1.00 or +1.00; this means that a certain amount of change in one variable is associated with a specific amount of change in the other variable; in physics, pressure and volume are perfectly related; an increase in pressure is always associated with a decrease in volume; in the social sciences, perfect relationships are seldom found<br />Personal interviewing: refers to the process of collecting data in face-to-face contact with respondents as opposed to conducting telephone interviews<br />Pie chart: a graphic presentation of results in which the slices of a circle (the pie) represent the proportions of each attribute of a variable<br />Population: the entire group of persons or other cases of interest to an investigator; the group to which an investigator may want to generalize from the sample used in an investigation<br />Positive relationship: a relationship between two variables in which they change in the same direction; as one increases or decreases in value so does the other; also called a direct relationship<br />Positively skewed distribution: a distribution in which most scores are located toward the low end of the distribution, with a longer tail extending toward the high end<br />Posttest measurement: in an experiment, measurement of the dependent variable taken at the end of the experiment<br />Precision matching: a technique for establishing equivalent experimental and control groups by randomly assigning subjects with exactly matching characteristics to one group or the other; example, for two persons of the same gender and age, one is randomly selected for the experimental group and the other assigned to the control group; this process would be repeated for each set of persons with matching characteristics; 
precision matching is the strongest basis for creating equivalent experimental and control groups<br />Precoding responses: the process of assigning numbers to represent the attributes of an indicator at the time the response categories are created; precoding is frequently used with responses for closed items<br />Predictive validity: a way of estimating the validity of a measuring instrument, such as a scale, based on the association of scores on the instrument with scores for some variable taken at a later time; example, the accumulated grade point average of university students could be used to validate a test for university success given while the students were in secondary school<br />Premature closure: occurs when one draws a conclusion based on insufficient data<br />Pretest: a test to see if a questionnaire is ready for use in a survey; generally based on selecting a small sample similar to the one to be used in the actual survey; all the elements of the questionnaire are tested, from the introduction to analysis of the responses obtained<br />Pretest measurement: in an experiment, measurement of the dependent variable at the beginning of the experiment, before the administration of the experimental variable<br />Pretesting: see pretest<br />Printout: the copy of materials printed from a data file, Web site, or a file stored in a computer<br />Probability level: refers to the extent to which the results of a statistical test of significance could be due to random variation that always occurs in sampling (called sampling error); two probability or "p" levels are typically used in reporting results - the .05 and the .01 levels; the .05 level indicates that the result could have occurred due to chance fewer than 5 times out of every 100 samples; the .01 level indicates that the result could be due to chance not more than once in every 100 samples<br />Probability sampling: any method of sampling based on random or chance selection, where each sampling 
element or unit has a known, nonzero chance of being selected<br />Probe: a technique used in interviewing to encourage a respondent to provide a clearer or more complete response<br />Proportion: a fraction or part of something, expressed as a decimal between 0 and 1.0; example, the proportion of females at a university with 750 females and 4,500 males is 750/5,250 or .143 or .14<br />Proxies: easily developed substitutes for more precise forms of measurement; proxies are frequently used in rapid rural appraisal to provide data quickly and inexpensively in place of indicators that take longer to develop and test for validity and reliability; example, the size or construction materials used in a house could be used as a proxy for family wealth<br />Purposive sampling: a nonprobability method of selecting a sample in which respondents are chosen because they are uniquely able to provide needed information; example, to learn how decisions in villages are made, an investigator might select samples of village leaders and elders<br />Q<br />Qualitative analysis: examination of data in the form of verbal descriptions rather than numbers; the purpose of qualitative analysis is to describe behavior and provide an explanation for what was observed<br />Qualitative interviewing: a loose, flexible approach to interviewing based on exploration of topics that are discussed in depth with respondents; respondents are encouraged to talk at length about issues presented by the interviewer<br />Qualitative research: a flexible approach to data collection, based mainly on written descriptions of observed behavior; casual and participant observation and unstructured interviewing are the main ways of conducting qualitative research<br />Qualitative survey: a survey based on use of an unstructured questionnaire; the interviewer uses a conversational style of interaction with respondents to get responses in the respondents' own words and with emotional content<br />Qualitative 
variable: a variable described in words or by the names of the categories of which it is composed, as opposed to a quantitative variable, which is measured in numbers; gender is an example of a qualitative variable<br />Quantitative analysis: analysis of data in the form of numbers; begins with the analysis of each variable, one at a time (univariate analysis), and may proceed to bivariate and multivariate analyses<br />Quantitative research: research based on numerical measurement of indicators; used to establish quantitative relationships among variables<br />Quantitative variable: a variable that is measured in numbers as opposed to a qualitative variable, which is not; the number of faculty of a university is a quantitative variable<br />Quasi-experimental designs: designs based on some but not all the features of the classical experiment; most quasi-experiments lack complete control over the independent variable, but they have the advantage of estimating the effects of variables under real social conditions; quasi-experiments may be low on internal validity, but are often high on external validity<br />Questionnaire: a set of carefully phrased and tested questions or items prepared for the collection of data; surveys are based on use of questionnaires<br />Quota sampling: a nonprobability method of sample selection based on setting quotas for cases from defined components of the target population; once the criteria and quotas are set, convenience or other nonprobability methods are used to select the sample; quota sampling has the advantage of at least including sample elements from various segments or components of the target population<br />R<br />Random error: any form of error that may occur in a particular instance during data collection, coding, transfer, or analysis; examples, a poorly asked question, a misunderstanding in recording a specific response, or an error made in coding data<br />Random selection: selection based on chance and chance alone, with no human 
judgment or preference involved; can be accomplished using a table of random numbers or by selecting sampling elements by chance from a box<br />Randomization: a process of assigning subjects to either the experimental or control group by chance<br />Range: a measure of the dispersion or variation among scores; measured as the difference between the lowest and highest score plus 1<br />Rapid rural appraisal: an approach to data collection using approximations, called proxies, for measurement of indicators, which permits collection of data quickly and inexpensively; often used to help make decisions about the development or future directions of programs; includes an emphasis on the participation of local persons to the maximum extent possible in the conduct of the investigation; also known as participatory rural appraisal<br />Rank order: the result of arranging scores in descending order from the highest to lowest; the highest score is given a rank of 1, the next lower score is given a rank of 2, and so on; rules are followed for assigning tied scores<br />Rapport: the feeling of trust and confidence an interviewer seeks to establish and maintain with respondents<br />Rate: a measure of how frequently something occurs within a larger population; example, the birth rate is the number of babies born within the population of an area; in social research, rates are expressed in terms of a standardizing base to eliminate differences in the sizes of the populations being examined; a base of 1,000 is used for calculating birth rates; thus, if 55 babies were born in a region with 2,400 persons, the birth rate would be 55/2,400(1000) or 22.9<br />Ratio: the relation between two frequencies; a ratio is found by dividing one frequency by another; in social research, ratios, like rates, are generally expressed in terms of a standardizing base of 100, 1,000, or some other base; example, using a standardizing base of 100, the ratio of females to males in a university 
with 750 females and 4,500 males is 750/4,500(100) or 17; this says that there are 17 females for every 100 males at the university<br />Ratio measurement: the highest level of measurement, based on a real zero point; thus, any number is a ratio of any other number; for example, the age of 40 is twice as large as the age of 20 by a ratio of 2<br />Raw data: the original data obtained by some form of data collection before data are coded or modified in any way<br />Reactivity: occurs when the process of measurement influences the results obtained; knowing they are being observed, persons, for example, may act differently than they would in normal situations; in that situation, measurement would be reactive<br />Record: has two meanings: (1) the written description of observations made during a session in the course of field research; and (2) a part of a data file, such as a set of data for a single respondent or the full description of a document retrieved from a database<br />Reference: a description of a source cited in a report, such as a book or journal article, prepared in a specified fashion or format<br />Reliability: the degree to which an indicator produces essentially the same result with repeated measurements<br />Respondents: the individuals from whom data are obtained, usually by means of interviewing or by completing a questionnaire<br />Response rate: the percentage of successfully completed interviews or self-administered questionnaires out of the number expected to be completed; the latter usually is the size of the selected sample<br />Response set: the tendency of respondents to answer questions or items in the way they answered previous questions; to avoid response set, positive and negative items are mixed in any set of items, making the respondent think about each item before answering<br />Results: what is discovered when the data are analyzed; results represent the answer to the question being investigated; also called 
findings<br />Review of the literature: the process of reading research reports on a topic of interest; learning about the results of research on a particular problem or topic<br />Rounding: the process of establishing the last digit in a number derived from a calculation; rules for rounding are given in Chapter 17, Box 17.1<br />S<br />Sample: a part of a target population; samples are selected by either probability or nonprobability methods; with probability samples we can generalize results from a sample to the target population; this cannot be done with nonprobability samples<br />Sample design: the plan prepared for the selection of a sample from a target population; the simple random sample is one kind of sample design<br />Sample frame: a list of the sampling elements or units comprising a target population<br />Sampling element: a single member or unit of the target population; example, a single member of the full-time teaching faculty of a university in the spring of 2003; also called a sampling unit<br />Sampling distribution (of the mean): a distribution of the means that could be calculated for all possible samples of a given size that could be drawn from a population<br />Sampling error: the error in measuring a variable that occurs because of variations due to random selection of samples; when random samples are used, the amount of sampling error can be calculated and used in estimating population parameters and in conducting tests of statistical significance<br />Sampling interval: the ratio of the size of the target population to the size of the sample; used as the basis for selecting a systematic or interval sample<br />Sampling unit: see sampling element<br />Scale: a composite measure based on multiple items of varying intensity; used for measuring beliefs and attitudes<br />Scale types: in Guttman scaling, scale types are response patterns that match the expected set of responses<br />Scatter plot: a form of graphic presentation of 
relationships between two variables; each pair is represented by a dot at the intersection of the value for the attribute of one variable, as displayed on the X axis, and the value for the attribute of the other variable, displayed on the Y axis<br />Scientific inquiry: a way of examining the world around us based on logical analysis of what we learn through use of our senses<br />Scientific method: the approach used in scientific inquiry to establish knowledge about the natural world, based on principles for identifying concepts, developing hypotheses, collecting and analyzing data to test hypotheses, and generating findings that are incorporated into theories for explaining natural processes<br />Score: any numerical value used to represent an attribute of an indicator or some dimension of a variable<br />Scoring: the process of assigning numbers to the attributes of a variable<br />Scroll: to move up or down the content of a page of a Web site<br />Search engine: a software program specially designed to allow persons to search the Internet to find Web sites of interest<br />Search service: see search engine<br />Search strategy: the plan developed for selecting relevant records from a database or to guide a search for Web sites<br />Secondary analysis: an investigation based on analysis of previously collected data; example, reanalysis of survey data collected by another researcher or further analysis of data made available by a government ministry<br />Selective observation: the tendency to give extra emphasis to certain observations that agree with a preconceived position and to ignore observations that do not agree with the preconception<br />Self-administered questionnaire: a questionnaire designed for completion by respondents without the assistance of an interviewer<br />Self-weighted sample: a sample selected so that each segment represents its proportion of the population; self-weighting simplifies analysis of data from samples selected through 
successive stages, such as area, cluster, or multistage samples<br />Session: a period of observation as part of a field or observational study; also used to describe a period of time for the operation of a focus group<br />Significance level: see level of significance<br />Simple observation: observation of behavior, generally in a natural setting, in which actions are recorded in narrative form and later analyzed; also known as casual observation<br />Simple random sample: a probability sample in which each sample element has an equal chance of being selected<br />Skewed: describes a distribution that differs greatly from a normal distribution; instead of most scores occurring near the mean of the distribution, most scores occur at the high or low end of the distribution<br />Snowball sample: see network sample<br />Social distance scale: see Bogardus social distance scale<br />Social indicators: broad, standardized measures of the quality of life or other socio-economic conditions of geographic areas such as nations, metropolitan areas, or other areas; used to assess health conditions, educational levels, food availability, violence, and other conditions<br />Software package: see statistical analysis package<br />Spearman rank order coefficient of correlation: a measure of association between scores for two indicators based on their rank order instead of the original values of the scores<br />Split half reliability: a measure of the reliability of a scale or other measuring instrument based on the degree of association between two equivalent forms or halves of the scale; data for both forms are collected at the same time<br />Spurious relationship: a false relationship; a spurious relationship becomes apparent when the initial relationship between two indicators disappears after the effect of a third variable is taken into account<br />Stakeholders: individuals who have a strong interest in the outcome of an evaluation; in the evaluation of an educational program, 
stakeholders could include teachers, administrators, and parents, each of whom might have different expectations for the results of the evaluation<br />Standard deviation: a measure of variability among a set of scores; it is based on the sizes of the deviations of each score from the mean of the scores; in a normal distribution, approximately 68% of the scores lie within ±1 standard deviation, approximately 95% within ±2 standard deviations, and over 99% lie within ±3 standard deviations<br />Standard error: see standard error of the mean<br />Standard error of the mean: the standard deviation of a sampling distribution; it shows how much a sample statistic, such as a mean, will vary from one random sample to the next<br />Statistic: any finding or result based on a sample; when probability samples are used, statistics based on the analysis of data from the sample can be used to estimate the corresponding parameters of the target population from which the sample was drawn<br />Statistical analysis package (or program): a software program designed to analyze data stored in a computer<br />Statistical inference: using the results of a statistical test of significance from a sample to make an estimate about relationships among variables in a population; estimates are based on probability levels<br />Statistical tests of significance: calculations conducted to determine whether differences between means or relationships between variables, for example, are within the range that could be expected due to chance variations that occur from one random sample to the next; statistical tests of significance are based on testing the null hypothesis<br />Stratified random sample: probability samples selected from two or more sub-groups or strata of a target population; example, random samples of males and females drawn separately from the population of university students<br />Structured interviewing: interviewing based on the use of a questionnaire, which, in this kind of 
use is sometimes called an interview schedule; questions are asked in exactly the same way in all interviews; responses are recorded as given<br />Structured observation: a quantitative observation technique in which the observed behavior is recorded in terms of pre-established categories; tally marks are recorded each time the defined behavior is observed<br />Subject reactivity: in an experiment, the tendency of subjects to act differently than they normally would because they know they are being observed; a threat to the internal validity of the experiment<br />Subjectivity: the tendency to form opinions or draw conclusions on personal grounds without sufficient regard for empirical evidence; the opposite of objectivity<br />Subjects: in an experiment, persons included in either the control or experimental groups<br />Summative evaluation: an evaluation carried out after a program has been fully developed; the purpose of a summative evaluation is to see whether the program has achieved the objectives set for it<br />Surfing: the process of moving from one Web site to another using addresses supplied by search engines or links on sites that are visited<br />Survey: a method of gathering data from persons, usually by getting them to respond to items comprising a questionnaire; in developing countries most surveys are carried out by interviewing persons<br />Systematic error: any kind of error that affects every case or a substantial number of cases in an investigation; for example, a poorly worded question that gives unreliable responses, or a mistake in coding that affects all responses for that item<br />Systematic random sample: a probability sample based on selection of sample elements at a specified interval beginning with a randomly selected starting point; using a sampling interval of 10, for example, one would select the first sampling element randomly between the first and tenth element on a list and then every tenth element thereafter; 
also called an interval sample<br />T<br />Table: a way of presenting a large amount of data in very little space; tables can be used to display frequency distributions for one variable or to show bivariate or multivariate relationships among variables<br />Tally sheet: a sheet used in recording the counts or tallies for the frequencies of attributes of variables; for example, male and female, as the attributes of the variable gender, could be listed as rows in a tally sheet and tally marks, such as ///, could be recorded each time either attribute occurred<br />Tallying: the process of counting responses or other data by hand to develop frequency distributions<br />Target population: the specific, concrete population defined in terms of its sampling elements; abstract or general populations are converted to target populations by defining them as precisely as possible, such as the population of full-time employees of a company on the first work day of a given month<br />Telephone survey (interviewing): the process of conducting a survey by means of telephone interviews<br />Test-retest reliability: a technique for estimating the reliability of a measuring instrument based on the degree of association between scores obtained at one time and those obtained at a later time; a high degree of association would indicate high reliability of the instrument<br />Testing effect: effects on the measurement of an indicator caused by the process of measuring the indicator; in an experiment, obtaining the pretest measurement can change how subjects respond to the posttest measurement of the same variable; a threat to the internal validity of an experiment<br />Theoretical framework: a set of theoretical statements used for deriving a hypothesis or for supporting an explanation for some behavior<br />Theory: the logical expression of relationships among abstract concepts; generally developed to explain a set of related behaviors or events<br />Time series analysis: analysis 
using data available for a number of points in time for the same indicator; can be used to establish trends or changes in social indicators or other variables<br />Time series design: a plan for data collection and analysis based on repeated measurement of a variable at two or more times; such analyses are used to measure changes or trends in variables over time<br />Trend studies (designs): investigations undertaken to measure changes that have occurred in variables; data are collected for variables at two or more points and compared to see what changes or trends are found; trend studies may involve two or more points for data collection in the past or past data collection supplemented with data for the variable at the present time<br />Triangulation: collection and comparison of data from two or more sources or using two or more methods of data collection; triangulation is important in qualitative investigations to ensure that observations are accurately recorded and interpreted; an example would include collecting data for some indicators by means of observation, from interviewing several key informants, and by checking observations against available data<br />True experiment: a technique for testing hypotheses under carefully controlled conditions, in which the experimental or independent variable is administered to the experimental group but not to an equivalent control group and measurements of the dependent variable are compared between the two groups following the experiment; also called a classical experiment<br />True value: the actual or real value of a score or other measurement; because of random and systematic errors that can and do occur in research, we seldom know the true value of anything we measure<br />(The) t test: a statistical test to determine if the differences between two means exceed the difference that could be due to sampling error<br />Two group, posttest only design: a form of quasi-experiment based on an experimental group and a 
control group (often a nonequivalent control group) in which data are obtained only after the experimental variable has occurred; only posttest data are obtained and compared for the two groups<br />Type: any group or category of persons sharing a common set of characteristics that distinguish them from others; in social research, types are constructed by an investigator from data describing the special characteristics of respondents; for example, an official may be classified as the "bureaucratic type" based on his or her obsessive attention to detailed rules and regulations and desire to please his or her superiors<br />Typology: a classification of persons or groups based on distinctive types created by the investigator for the purposes of analysis; typologies are useful as measures for dependent variables but are hard to interpret as independent variables<br />U<br />Unidimensional: defining a concept so that it has only one dimension or measurable set of characteristics<br />Unit of analysis: the entity used as the basis for combining data for analysis; may be individuals, families, other groups, organizations, geographic areas, or other entities<br />Univariate analysis: analysis of a single indicator; univariate analysis is generally the first step in the analysis of a body of data; it is undertaken to describe each variable in terms of measures of central tendency (mean, median, or mode) and variability (range, variance, or standard deviation)<br />Universal Resource Locator (URL): the unique address of each Web site<br />Unobtrusive measurement: any technique of data collection that does not influence the results obtained; for example, observing how persons are dressed and using this as an indicator of social status, or analyzing data already collected; also called nonreactive measures<br />Unstructured interviewing: a flexible form of interviewing, more in the style of a conversation; the interviewer adjusts the timing and content of questions to be asked 
and seeks to obtain full answers in the respondent's own words<br />Unstructured observation: observation of behavior or events as they occur, generally in a natural setting; the action being observed is described in narrative form; participant observation is a form of unstructured observation<br />Unweighted index: an index in which the indicators making up the index are assigned equal value<br />Unweighted score: a composite score in which responses to items are simply added; as distinguished from a weighted score, in which responses to some items are given more importance by assigning a greater value to them<br />V<br />Valid frequency distribution: a frequency distribution based on the number of usable responses obtained for an indicator; for example, if 68 respondents out of a sample of 75 provided usable responses to an item, a valid frequency distribution would be based on an N of 68 rather than the N of 75 for the sample<br />Valid percentage distribution: a set of percentages based on the number of usable responses obtained for a variable; for example, if 68 respondents out of a sample of 75 provided usable responses to an item, a valid percentage distribution would be based on an N of 68 rather than the N of 75 for the sample<br />Validity: the extent to which an indicator measures a given concept or one of its dimensions<br />Value judgment: a statement or opinion based on one's beliefs or values and not on empirical evidence<br />Variable: any characteristic that varies; one that can take on two or more numerical values or has two or more qualities; the various values or qualities of a variable are its attributes<br />Variance: a measure of dispersion or variability among scores in a distribution; variance is the mean of the squared deviations of each score from the mean of the distribution<br />W<br />Web address: see Universal Resource Locator<br />Web site: an electronic source of information accessible through the Internet, the worldwide 
telecommunication network and software that links computers to Web sites<br />Web survey: a survey conducted by posting a questionnaire on a Web site and inviting viewers to complete the questionnaire; also referred to as an Internet survey<br />Weighted index: an index in which the indicators making up the index are assigned different values to reflect the greater importance of some of them to the composite score<br />Weighted sample: in analyzing data from samples, the assignment of different weights or values to cases in proportion to their probability of selection<br />Weighted score: in constructing a composite score, the process of giving greater value to some indicators over others<br />Weighting indicators: the process of assigning greater importance to certain indicators over others in the construction of a composite measure<br />World Wide Web: the original name for the connections among Web sites, now preserved in the "www" in many Web addresses<br />Chapter 2. The Sudan Fertility Survey: An Introduction to Research<br />Introduction<br />In Chapter 1 you learned about the scientific approach to conducting research and the typical stages in the research process. In this chapter, we show how the research process was used in conducting the Sudan Fertility Survey, a large-scale research project designed to provide information on an important condition affecting the future of the Sudan — its birth rate. As with all research, the Sudan Fertility Survey began with the definition of the problem to be investigated.<br />Specifying the research question<br />In most research, the researcher decides what to investigate, as you will have to do in your initial study. Sometimes, however, researchers are asked to investigate a question for some organization, such as a government ministry. This is how the Sudan Fertility Survey came about. 
The Department of Statistics of the government of the Sudan wanted accurate, detailed information on the current and estimated future fertility rate in Sudan. In population research, the fertility rate is defined as the number of live births per 1,000 women of childbearing age.<br />The resulting investigation became known as the Sudan Fertility Survey (Department of Statistics, 1982). We describe this study for four reasons:<br />To show the value of social research - the survey was requested by the government of the Sudan to provide information for developing family planning programs;<br />To illustrate the application of social research methods to an important social problem - that of high population growth;<br />To show how research is planned and carried out in practice; and<br />To give you an idea of how the results of research can be used to understand social conditions in a country.<br />We begin by examining how the study was carried out because the value of the results depends on how information is collected and analyzed, and this depends on how well the study was planned in the first place.<br />Designing the study<br />All research projects require a design or plan for the collection and analysis of the data. In preparing a design for the Sudan Fertility Survey, a number of important decisions had to be made, one of which was who to study.<br />References begin with the name(s) of the author(s). In the case of the Sudan Fertility Survey, the author is a government organization. The full references to this report and others we cite later are provided in the List of References<br />Who to study?<br />Given the objective of the study, it was obvious that married women would have to be the source of the desired information. In research terms, the women became the respondents in the study. Their responses to the questions they were asked became the data of the investigation. Incidentally, data are plural. 
No one would base a study on the answer of a single respondent to a single question, which would produce a datum, or just one bit of information. In contrast, research is based on the collection and analysis of a body of data. The Sudan Fertility Survey, for example, was based on responses by more than 3,000 women to over 200 questions. That's a lot of data.<br />With the decision made to collect data from married women, the researchers faced a new decision. This was whether to collect data from all eligible women in northern Sudan or to limit data collection to some smaller number of women. All eligible women, that is, those who had ever been married and were living in northern Sudan, constituted the population being studied. For the Sudan study, the population included over three million women, far too many to try to collect data from: doing so would take too long and cost too much money. Knowing this, the researchers chose the alternative used in most social research. They selected only part of the population as the respondents for the study. This smaller set of women, called a sample, was selected so that the women in the sample were like the population in all important ways, such as being about the same ages, having the same levels of education, and having the same number of children. In Chapter 8 you will learn how samples are selected.<br />How to collect the data?<br />Next, the researchers had to decide how to collect the data from the sample of women. The method chosen was to conduct a survey based on personal interviews with each woman in the sample. With this decided, the investigators turned to developing the questions to be asked. Stating the questions to be asked is a critical step in planning a research project because, as in everyday life, the answer you get to any question often depends on how the question was asked. Considerable care, therefore, was taken in framing each question. 
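The basic idea of a simple random sample, in which every element of the population has an equal chance of selection, can be sketched in a few lines of Python. This is a hypothetical illustration only: the population identifiers and fixed seed are made up, and the actual Sudan Fertility Survey used a more elaborate multistage design rather than a flat list.

```python
import random

# Hypothetical sketch: draw a simple random sample of 3,115 respondents
# from a numbered population of 3 million women. Real national surveys
# typically use multistage cluster designs, not a single flat list.

random.seed(42)  # fixed seed so the sketch is reproducible

population = range(1, 3_000_001)             # identifiers 1..3,000,000
sample = random.sample(population, k=3115)   # sampling without replacement

print(len(sample))       # 3,115 respondents selected
print(len(set(sample)))  # all distinct: each woman chosen at most once
```

Because every element has the same probability of selection, statistics computed from such a sample can be used to estimate the corresponding parameters of the population, as described in the glossary entries for statistic and statistical inference.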
This task was made easier in the Sudan study because many of the questions had been used in previous studies of fertility in other countries.<br />Studies frequently require translation of questions into the language of the respondents. This was the case with the Sudan survey. The questions, originally in English, were translated into Arabic, the language of the women who would be interviewed. This translation was checked to make sure that the meaning of each question was not changed as a result of being translated. Checking was done by translating each question back from Arabic into English, and then by comparing the two English forms of each question. When the back translation agrees with the original language, the translation is considered safe to use. If the two forms differ, the process is checked to find the cause of the difference. In this case, English and Arabic versions of the questions were compared. Back translation, however, can be used with any pair of languages.<br />After the researchers were certain that the translated questions asked what was intended, a small sample of women was interviewed to make sure that the women who would be interviewed in the main study would understand the questions and be able to answer them accurately. Following this step, called a pretest, the questions were organized into a questionnaire. As the name implies, a questionnaire is the final set of questions used to collect data from a sample.<br />Persons called interviewers were then trained to use the questionnaire to interview each woman included in the sample. In conducting interviews, each respondent was asked each question on the questionnaire and her answers were recorded by the interviewer.<br />So far we have discussed the typical elements of a social survey. A survey is one form of social research. 
Generally, surveys are based on data collection from a sample using a questionnaire.<br />Collecting the data<br />The collection of data, in this case the process of interviewing the respondents, lasted from December 1978 to April 1979 and resulted in the completion of 3,115 questionnaires from eligible women. Reporting the time period for data collection is expected in research reports because reports are frequently published years after data are collected. Therefore, it is important to state when the data were collected; this is the only way readers can know how old the data are.<br />Analyzing the data<br />Current fertility<br />The purpose of analysis is to organize the data and see what was found. Analysis generally occurs in two phases. First, investigators summarize responses to each question. The central question of the Sudan Fertility Survey was how many children each woman had. Each woman was represented by a number, from zero for those who had not yet given birth to a child, to the maximum number born to any woman. These numbers represent the raw data for establishing the fertility rates in northern Sudan in 1978/79. The raw data were analyzed to find the average number of babies born to the women. Two averages, in fact, were calculated. One average was based on all the women in the sample from whom data were obtained. This average was 4.2 children. It summarized the number of children born to all women, regardless of their ages or how long they had been married.<br />Another average was calculated to find out how many children had been born to women who presumably would not have any more babies. For this average, only data for women who were 45 to 49 years of age were used. This average described the completed fertility of married women in northern Sudan. As you might expect, the average for completed fertility (6.2 babies) was higher than that for all married women. 
This result would be expected because the first average included data for younger women, some of whom had been married for only a short time, whereas the average for completed fertility included only women who had had many years to produce children.<br />We cite these two averages to illustrate that a single research project can be used to answer more than one question. Chapters 18 and 19 will give you some ideas of the various ways you can analyze the data you will collect.<br />Estimates of future fertility<br />As researchers we often want to suggest how we think certain things may change in the future. The following examples show how data from the Sudan Fertility Survey were analyzed to get an idea of possible changes in fertility in northern Sudan.<br />First, the research team compared the number of babies born to younger women with the number older women had given birth to when they were the same ages as the younger women. The analysis showed that younger women were continuing to have about the same number of babies as their older relatives had at the same ages.<br />In addition, the researchers examined the number of children the women said they would like to have if they could choose the exact number they wanted. For all women, the preferred number was an average of 6.4 children, which was higher than the actual completed fertility of the older women (6.2 children). Younger women between the ages of 15 and 24, however, indicated they wanted an average of 5.4 children, less than the 6.4 reported by all women. These findings also point to continued high fertility in northern Sudan.<br />The researchers also looked at the extent to which family planning was being practiced. The women were asked a number of questions about their knowledge and use of contraceptive methods. 
Here are some of the results:<br />Only 12% of the women had used contraceptive methods at some time in their lives;<br />Of those who had tried some method, only 9% stated they intended to use one again in the future;<br />And only 16% of the women wanting no more children said they were using a reliable means of contraception.<br />Seeking an explanation for fertility rates<br />So far, the results suggest that fertility will remain unchanged in northern Sudan. Women wanted and were still producing large families, and few couples were using reliable means to limit family size. Still, before stating a conclusion based on these findings, we need to examine fertility in light of other broad social trends in Sudan. Chief among these is the recent increase in years of schooling among girls.<br />How might increased schooling be linked to fertility? To answer this question, the researchers analyzed the relationship between schooling and fertility. Here are some of the things they discovered:<br />Women with no schooling had an average of 4.2 children;<br />Women with 1 to 5 years of schooling had an average of 4.4 children;<br />Women with 6 or more years of schooling had only 3.0 children on average.<br />These findings indicate that completion of primary school was associated with lower fertility. The education of women was also related to use of contraceptives. 
As their schooling increased, so did the use of contraceptives:<br />Only 2.5% of the women with no schooling reported use of contraceptives;<br />While 15.0% of those with 1 to 5 years of schooling did so;<br />And an even larger percentage, 42.0%, of women with at least 6 years of schooling indicated use of contraceptive methods.<br />Among women who wanted no more children, schooling was even more strongly associated with contraceptive use:<br />Only 8.5% of the women with no schooling and who wanted no more children reported use of contraceptives.<br />This was true for 29.5% of those with 1 to 5 years of schooling.<br />A much larger percentage, 63.0%, of the women with 6 or more years of school and who wanted no more children reported use of contraceptives.<br />Interpreting the results<br />From these findings, we could draw the conclusion that fertility in northern Sudan will not change much in the immediate future. In the long run, however, as schooling for girls continues to increase, fertility rates will probably decline. These conclusions would represent our interpretation of the findings. In a sentence or two, we say what we think the findings mean. To summarize: results are based on data; results are facts. Statements that give meaning to the facts or results represent the researcher's interpretation of the results.<br />Generalizing the results<br />When a proper sample is used, researchers can extend a conclusion by saying what they think is true for the population based on what was learned from a sample. Thus, the results from the sample of 3,115 married women who supplied data for the Sudan Fertility Survey could be extended to describe fertility and conditions affecting fertility among the 3 million married women living in northern Sudan at the time the data were collected. When conclusions are extended in this way they are referred to as empirical generalizations. Empirical is used because the generalizations are based on data. 
The process of creating a generalization is called generalizing.<br />Some empirical generalizations that can be drawn from the results of the Sudan Fertility Survey are:<br />Fertility in northern Sudan is high, averaging slightly over 6 children per married woman.<br />Fertility in northern Sudan will probably remain high in the coming years.<br />However, in the long run, fertility in northern Sudan will probably decline as females obtain more schooling.<br />Notice that these generalizations sound like conclusions. Often generalizations do, but remember, generalizations are offered as the broadest or most general statements one can make based on the findings of a study. Researchers are careful in drawing either conclusions or generalizations. Sometimes, because of limited data, we have to limit conclusions and corresponding generalizations. The important thing is to be honest in what you say: be careful not to overgeneralize or go beyond what your data indicate. For example, the three generalizations we stated earlier were limited to "northern Sudan." We did not try to generalize to all of Sudan because data were not available for other parts of the country.<br />Aids<br />Internet resources<br />In this chapter, we have presented an analysis of only one social research report. Thousands of additional research reports on all kinds of topics are available on Web sites or through other information services. Since this chapter dealt with a report on fertility, we did an Internet search using Google, a popular search service. Google reported about 203,000 Web sites dealing with "fertility rates." We also looked for reports of studies of fertility rates in POPLINE, an information service that covers population-related topics and issues. On February 9, 2005, POPLINE listed 3,899 items concerned with fertility rates. Some of these were Web sites with the complete text of reports. 
For example, one report, Transitions in World Population, provides a comprehensive description of population changes, examines bases for future changes, and discusses other issues related to the changing characteristics of the world's population. Others provided summaries of journal articles, books, and other reports related to fertility. Chapter 4 explains how to construct and carry out a search of POPLINE.<br />Google, POPLINE, and many other information services (see Chapter 4) provide access to thousands of social research reports on all kinds of topics.<br />Key terms<br />Analysis<br />Back translation<br />Data<br />Design<br />Empirical generalization<br />Generalizing<br />Interpretation<br />Over-generalizing<br />Population<br />Pretest<br />Questionnaire<br />Raw data<br />Respondents<br />Sample<br />Survey<br />Main points<br />Information collected in an investigation is referred to as data. Data are plural; the singular of data is datum.<br />Data are analyzed to produce the results of a study.<br />Social scientists use data to establish relationships between variables. Clearly established relationships between variables provide the basis for explaining why behavior occurs as it does.<br />Findings or results are interpreted to produce the conclusions of an investigation; to interpret findings is to say what we think they mean.<br />Conclusions are statements based on findings.<br />An empirical generalization extends findings from a sample to a population.<br />Chapter 4: Selecting a Question to Investigate<br />Introduction<br />Selecting a question to investigate may be the hardest part of your initial research project. It is also a very important decision. Every other decision you will make in planning a research project will be based on what you decide to study. In this chapter, we offer some suggestions for working through this important process. Part of this process involves learning about what is already known about the topic you choose to investigate. 
In research, becoming informed about previous research findings is referred to as conducting a review of the literature. Therefore, we combine the process of selecting a question to investigate with the process of learning about previous research. Today, with the increasing importance of the Internet as a source of information, literature reviews include Internet searches in addition to looking for information in libraries.<br />The usual processes involved in selecting a question to investigate are outlined in Box 4.1.<br />Box 4.1. Processes in deciding on a research question<br />Getting an initial idea: may be expressed as a "topic," "interest," or "problem"<br />Evaluating the idea, topic, interest, or problem<br />Conducting a comprehensive review of the topic or problem<br />Making a final decision on the research question<br />Your initial research question<br />Getting an initial idea<br />Research starts with getting an initial idea about something to investigate. We use the word "idea" at this stage of the process to cover the different ways you might start. You may begin with a "topic" or a "problem" that interests you.<br />Topics, problems, or questions for research can come to you at any time and from a variety of sources. Course work is an obvious and frequent source of research questions. You may be stimulated by something an instructor says or by something you have read. Some research reports end with a section entitled "Recommendations for future research." One of these recommendations may excite you and lead to a problem you want to investigate.<br />Frequently, things mentioned as "problems" by friends or relatives or something you read about in a newspaper or magazine can be rephrased as a question for study. In addition, your own personal experience or interests may lead you to do research on a certain problem. A student from a religious or ethnic minority group may be motivated to investigate attitudes or behavior of the majority group toward the student's group. 
A student from a rural area may want to investigate ways of improving social services in his or her village.<br />You may find that recording ideas in a notebook as they occur to you is helpful. You can review these ideas periodically, cross out ones that no longer appeal to you, and keep others for cons
