Handbook of survey methodology for the social sciences



2 Classification of Surveys

Ineke Stoop and Eric Harrison

2.1 Introduction

In ‘The Analytical Language of John Wilkins’, Borges describes ‘a certain Chinese Encyclopaedia, the Celestial Emporium of Benevolent Knowledge’, in which it is written that animals are divided into:
• those that belong to the Emperor,
• embalmed ones,
• those that are trained,
• suckling pigs,
• mermaids,
• fabulous ones,
• stray dogs,
• those included in the present classification,
• those that tremble as if they were mad,
• innumerable ones,
• those drawn with a very fine camel hair brush,
• others,
• those that have just broken a flower vase,
• those that from a long way off look like flies.

To modern readers this classification may seem somewhat haphazard, hardly systematic and certainly not exhaustive (although the category ‘others’ makes up for quite a lot of gaps). Actually, Borges did not find this classification in a Chinese encyclopaedia: he made it up.

Making up a classification of surveys at times seems as challenging as making up a classification of animals. A short enquiry into types of surveys yields random samples, telephone surveys, exit polls, multi-actor surveys, business surveys, longitudinal surveys, opinion polls (although some would argue that opinion polls are not surveys), omnibus surveys and so forth. It will be clear that the types of surveys mentioned in this list are neither exhaustive nor mutually exclusive. The ‘type’ of survey can refer to the survey mode, the target population, the kind of information to be collected and a number of other characteristics. Sometimes these different characteristics interact, but some combinations are rarely found together. Surveys of older persons are rarely web surveys, for instance, and exit polls are never longitudinal surveys.

This chapter presents a brief overview of the different ways in which surveys can be classified. First, however, we need to consider what a survey is.
Below is an abridged version of the section ‘What is a survey’ from the booklet drafted by Fritz Scheuren from NORC.1

I. Stoop, The Netherlands Institute for Social Research/SCP, P.O. Box 16164, 2500 BD, The Hague, The Netherlands. e-mail: i.stoop@scp.nl
E. Harrison, Centre for Comparative Social Surveys, City University, Northampton Square, London, EC1V 0HB, UK. e-mail: eric.harrison.1@city.ac.uk
1 www.whatisasurvey.info/overview.htm

L. Gideon (ed.), Handbook of Survey Methodology for the Social Sciences, DOI: 10.1007/978-1-4614-3876-2_2, © Springer Science+Business Media New York 2012
Today the word ‘survey’ is used most often to describe a method of gathering information from a sample of individuals. This ‘sample’ is usually just a fraction of the population being studied…. Not only do surveys have a wide variety of purposes, they also can be conducted in many ways, including over the telephone, by mail, or in person. Nonetheless, all surveys do have certain characteristics in common. Unlike a census, where all members of the population are studied, surveys gather information from only a portion of a population of interest, the size of the sample depending on the purpose of the study. In a bona fide survey, the sample is not selected haphazardly or only from persons who volunteer to participate… Information is collected by means of standardized procedures so that every individual is asked the same questions in more or less the same way. The survey’s intent is not to describe the particular individuals who, by chance, are part of the sample but to obtain a composite profile of the population.

In a good survey, the sample that has been studied represents the target population, and the information that has been collected represents the concepts of interest. The standardised procedures with which data are collected are mostly, but not always, questionnaires, which are either presented to the sample persons by an interviewer or completed by the sample persons themselves.

In the next section, surveys are classified according to a number of criteria. Underlying this classification is the following poem by Rudyard Kipling:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

2.2 Classification Criteria

2.2.1 Who: The Target Population

Groves (1989, Chap. 3) starts his theoretical overview of populations (of persons) with the population of inference, for instance American citizens in 2011. The target population is the finite set of elements (usually persons) that will be studied in a survey.
Generally excluded from the target population are those persons who cannot be contacted or will not be able to participate, such as persons living abroad and those living in institutions (residential care and nursing homes, prisons). The frame population is the set of persons for whom some enumeration can be made prior to the selection of the survey sample, i.e. who can be listed in the sampling frame. After the sample has been drawn, ineligible units have to be removed, such as incorrect addresses or persons who are not American citizens. The survey population is the set of people who, if selected for the survey, could become respondents. Unit non-response is the failure to collect data from units belonging to the frame population and selected to be in the sample. The response rate is the percentage of selected units who participate in the survey.

The population of inference may comprise businesses, households, individuals, days, journeys, etc. In a business survey, information is collected on establishments or branches. An informant, or several informants (see Box 2.1), provide(s) information on behalf of a business establishment. A survey among business owners can also be seen as a survey among individuals.

Box 2.1: Examples of business surveys

In two well-known surveys of workplaces, multiple instruments are fielded to different, specifically targeted interest groups. The 2009 European Companies Survey was conducted using computer-assisted telephone interviews (CATI). The companies to be interviewed were selected at random among those with ten or more employees in each country. A management representative and, where possible, an employee representative were interviewed in each company.

The UK’s Workplace Employee Relations Survey (WERS) is one of the longest running of its type (since 1980).
The most recent wave comprised five separate instruments (some face-to-face and others by self-completion), and the overall design was organised thus:
• An overall sample of 2,500 workplaces, combining 1,700 workplaces that are new to the study and repeat interviews at 800 workplaces which were first surveyed in 2004.
• At each workplace, an interview with the most senior manager responsible for employment relations and personnel issues was conducted. A self-completion survey on financial performance was distributed to all trading sector workplaces.
• An interview with one trade union employee representative and one non-trade union representative where present (approximately 900 interviews).
• A self-completion survey with a representative group of up to 25 employees, randomly selected from each workplace participating in the study (approximately 23,000 completed surveys).

In a household survey a responsible adult can function as a household informant. In a survey among individuals the respondents usually provide information about themselves, but often also about their households. A respondent can also provide information about other household members, e.g. when providing information on the occupations and education of family members. In some cases the use of proxies is allowed, which means that the target respondent has someone else answer the questions for them. A special case of this would be a survey that includes (small) children; in such a case parents can answer questions instead of their children. It is also possible that all members of the household have to answer a questionnaire, as for instance in the European Labour Force Survey. In these cases proxies are often allowed. Finally, in multi-actor surveys several members of the same family are interviewed, but they will not necessarily be members of the same household. The UK’s WERS (see Box 2.1) is also an example of a multi-actor survey. Another example is a Dutch survey among persons with learning disabilities (Stoop et al. 2002, see Box 2.2). A final example of a multi-actor survey is the multi-country survey described in Box 2.7 in Sect. 2.2.6.

Box 2.2: A survey among persons with learning disabilities (see Stoop et al. 2002)

Multiple sampling frames

The frame population consisted entirely of adults aged 18 years and older who had learning disabilities and who were living in an institution or institutionally supported housing arrangement (long-term care home, socio-home, surrogate family unit, supported independent living arrangement) and/or made use of a daycare facility or sheltered workshop. Preceding the fieldwork, the frame population was constructed by completing and joining available address lists of the relevant institutions. A complication when using the available sampling frames was the instability of the field: institutions change character, new residential arrangements appear, and different residential facilities are hard to distinguish from each other. Additionally, institutions sometimes consist of main locations and satellite sites, which further complicates the sampling procedure.

The selected sampling frames showed overlap and also contained persons who did not belong to the target population (see the figure below). Two-thirds of the clients of sheltered workshops, for instance, had physical rather than learning disabilities (C in the figure) and were not included in the frame population. Secondly, an unknown number of persons used more than one facility, for instance daycare facilities and residential facilities or services (B in the figure). To overcome overcoverage, the sampling frame of daycare centres and sheltered workshops was purged of those persons who also used some kind of institutional residential arrangement.
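The purging step just described (removing from the daycare and sheltered-workshop frame anyone already covered by the residential frame) is in essence a set operation. A minimal sketch in Python; the client identifiers are invented for illustration, whereas the real frames were address lists of institutions:

```python
# Hypothetical client IDs; in the survey these came from institutional
# address lists (residential care, daycare centres, sheltered workshops).
residential_frame = {"c01", "c02", "c03", "c04"}
daycare_frame = {"c03", "c04", "c05", "c06"}  # c03 and c04 overlap

# Purge the daycare/workshop frame of clients already covered by the
# residential frame, so nobody has two chances of selection
# (overcoverage).
purged_daycare = daycare_frame - residential_frame

# The combined frame now lists each client exactly once.
combined_frame = residential_frame | purged_daycare
assert combined_frame == {"c01", "c02", "c03", "c04", "c05", "c06"}
assert len(combined_frame) == len(residential_frame) + len(purged_daycare)
```

The same logic extends to more than two frames: purge each frame of units already covered by the frames processed before it, then take the union.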
The sampling procedure was complicated by the fact that different types of institutions were selected and that the final sample would have to be representative according to type of institution and the extent of the learning disability. Firstly, institutions were selected (acknowledging type, size and geographical region) and subsequently clients within institutions, taking into account size and possible overlap between frame populations. The interviewer had to select potential sample persons from a list provided by the local management of the institution, in accordance with a strictly random procedure. In reality, however, this selection was often performed by the local management.

Multiple sources and instruments

Some persons with a learning disability can be interviewed in a survey, whereas others cannot. If possible, the selected sample persons were interviewed personally. They provided information on their daily activities and preferences, autonomy, social networks and leisure activities. Parents or legal representatives were asked about the family background and also, in greater detail, about the issues in the sample person questionnaire. Support workers or supervisors answered questions on the type and duration of care received, coping abilities and daily activities. Finally, questions on the services and facilities provided had to be answered by the local management of institutions providing residential facilities or support, daycare centres and sheltered workshops. The combination of sources was deemed necessary to obtain a complete picture of the quality of life and use of facilities of the sample person.
It made the survey particularly complicated, however, because seven different questionnaires had to be used and everybody involved had to cooperate in order to obtain a complete picture.

The population of inference may be the general population of a country (citizens, or residents, which is by no means the same thing). A survey may also aim at representing a special group, such as older persons, members of a minority ethnic group, students, users of a particular product or public service, persons with a learning disability, drug users, inhabitants of a particular neighbourhood, or gay and lesbian people. In some cases a sampling frame is easy to construct (inhabitants of a particular neighbourhood); in other cases the survey will have to be preceded by a screening phase to identify the frame population (lesbian and gay people).

Sometimes sampling is complicated still further when the ‘population’ under investigation is not a set of individuals but a set of activities or events. In a time use survey, for example, a sample is drawn of households/persons and days (Eurostat 2009), and in passenger surveys the units are journeys (see Box 2.3).

Box 2.3: Passenger surveys

Passenger surveys attempt to establish the perceived quality of a journey. In the UK, this is complicated by the existence of train operating companies with regionally based but overlapping franchises.

The UK’s National (Rail) Passenger Survey (NPS) uses a two-stage cluster sample design for each Train Operating Company (TOC). The first-stage sampling unit is a train station, and questionnaires are then distributed to passengers using that station on a particular day during a specified time period. Stations are selected for each TOC with a probability proportionate to size, using the estimated number of passengers as the size measure. A large station may be selected several times. Days of the week and times of day are then assigned to each selected station, using profiles for different types of station. Finally, the sampling points are assigned to weeks at random during the survey period. A completely new sampling plan is generated every two years, utilising data on passenger volumes provided by the Office for Rail Regulation (Passenger Focus 2010).

As mentioned in Sect. 2.1, good survey practice prescribes that a survey sample be selected at random from the frame population. Sampling frames can comprise individuals (a population register, a list of students or census records), households, addresses, businesses or institutions. In many cases a two-stage sampling procedure is required, for instance first households, then individuals, or first institutions, then individuals.

There are many ways to draw a probability sample, and according to Kish (1997, see also Häder and Lynn 2007) they all suffice as long as the probability mechanism is clear, which means that every member of the target population has to have a known probability (larger than zero) of being selected for the sample. There are even more ways of selecting a non-probability sample; we will only give some examples here. In many countries, quota sampling is quite popular. In this case, a population is first segmented into mutually exclusive sub-groups.
Interviewers then have to interview a specified number of people within each subgroup (for a further and more in-depth discussion of survey sampling techniques and non-probability samples in surveys, see Hibbert Johnson and Hudson, Chap. 5). How these people are selected is untraceable.

Nowadays online panels, discussed at greater length by Toepoel in Chap. 20, are becoming quite popular (see also Sect. 2.2.4 and Box 2.5). In rare cases these are based on probability samples, as is the Dutch LISS panel (www.lissdata.nl), but the vast majority are not constructed using probability-based recruitment (The American Association for Public Opinion Research 2011). Online access panels offer prospective panel members the opportunity to earn money, make their opinion heard or take part in surveys for fun. In river sampling ‘… respondents are recruited directly to specific surveys using methods similar to the way in which non-probability panels are built. Once a respondent agrees to do a survey, he or she answers a few qualification questions and then is routed to a waiting survey. Sometimes, but not always, these respondents are offered the opportunity to join an online panel’ (The American Association for Public Opinion Research 2011).

Rare populations are hard to identify, approach and survey. Snowball sampling relies on referrals from initial subjects to generate additional subjects. Respondent-driven sampling (RDS) combines snowball sampling with a mathematical model that weights the sample to compensate for the fact that the sample was collected in a non-random way.

2.2.2 What: The Topic

In addition to representing the target population, a survey should represent the concepts of interest. Or, on a more practical note, the second main distinguishing feature of a survey is the topic. Survey topics can be anything: from victimisation to health, from bird-watching to shopping, from political interest to life-long learning, and from alcohol and tobacco use to belief in God.
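Before turning to topics in detail, the probability-sampling criterion discussed above (every member of the target population has a known, non-zero chance of selection) can be made concrete with the probability-proportionate-to-size design of Box 2.3. A minimal sketch in Python; the station names and passenger counts are invented for illustration:

```python
import random

# Hypothetical first-stage frame: stations with estimated passenger
# volumes as the size measure (all figures invented for illustration).
frame = {"Kings Cross": 50_000, "Reading": 12_000,
         "St Albans": 6_000, "Hitchin": 2_000}

def expected_selections(sizes, n_draws):
    """Expected number of selections per station under PPS sampling
    with n_draws independent draws: known and non-zero for every unit,
    which is what makes this a probability sample in Kish's sense."""
    total = sum(sizes.values())
    return {unit: n_draws * size / total for unit, size in sizes.items()}

def pps_sample(sizes, n_draws, seed=42):
    """Draw n_draws sampling points with probability proportionate to
    size (with replacement, so a large station may be selected
    several times, as in the NPS design)."""
    rng = random.Random(seed)
    units = list(sizes)
    return rng.choices(units, weights=[sizes[u] for u in units], k=n_draws)

sample = pps_sample(frame, n_draws=10)
probs = expected_selections(frame, n_draws=10)
assert all(p > 0 for p in probs.values())  # every station can be drawn
```

In a quota or river sample, by contrast, no such selection probabilities can be written down, which is why inference from online panels rests on additional, untestable assumptions.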
There is ample evidence that the topic of a survey is a determinant of the response rate (see Chap. 9 by Stoop).

An omnibus survey has no specific topic at all: data on a wide variety of subjects are collected during the same interview, usually paid for by multiple clients. Nowadays, omnibus surveys are increasingly being replaced by online access panels where clients pay for a particular survey while sharing background characteristics.

Often a distinction is made between objective questions and subjective questions. Objective questions are the home turf of official statistics and cover issues like labour situation, education, living conditions, health, etc. Subjective questions collect information on values, attitudes, and the like. In practice, this distinction cannot be sustained. Assessments of health and job preferences have a clear subjective aspect, for example. In addition, official statistics focus increasingly on issues such as satisfaction and even happiness. The UK Office for National Statistics (ONS), for instance, regularly collects data and publishes statistics on ‘Measuring Subjective Wellbeing in the UK’. Finally, even objective, hard statistics have a subjective component (e.g. how many rooms are in your house, how much time do you spend on gardening?).

Many different types of organisations collect data on attitudes, values, preferences and opinions, but from different perspectives. For example, there is a big difference between opinion polls and surveys of attitudes and values (and opinions). Although opinion polls could be conducted according to the same quality criteria as academic surveys of values and attitudes, in practice they are often commercial, non-probability surveys focusing on one or a few questions and providing results in just a day or so, whereas academic surveys can take a year from data collection to first availability of results.

Appendix 1a presents an overview of comparative attitude surveys organised by different types of sponsors. Other well-known survey topics are behavioural patterns, lifestyles, well-being and social belonging and affiliation (see Appendix 1b). Also common are surveys on literacy and skills (Appendix 1c) and on voting behaviour (Appendix 1d).
Market researchers study brand and media tracking, consumer satisfaction and advertisement effects. As mentioned above, governments too are interested in consumer satisfaction and use surveys to assess the need for public services. Both, like academics, are interested in the factors that determine decision-making.

Some surveys require respondents to keep a diary, for instance time use surveys, travel surveys or expenditure surveys. Other surveys are increasingly supplemented (or even partly replaced) by data from other sources, such as GIS data or data from public registers and administrative records. As part of some surveys, data on biomarkers are collected, such as grip strength, body-mass index and peak flow in SHARE (see Appendix 1) or blood cholesterol and saliva cortisol in the LISS panel (Avendabo et al. 2010). Election polls predict the outcome of elections, as do exit polls, where voters are asked questions about their voting.

From this overview it will be clear that almost any topic can be part of a survey, but also that there are relationships between the target population and the topic, and between the survey agency and sponsor and the topic.

2.2.3 By Whom: Survey Agency and Sponsor

Surveys are commissioned by a wide range of organisations: governments, the media, local communities, labour unions, universities, institutions, NGOs and many other diverse organisations. Survey agencies can be roughly subdivided into four groups: national statistical institutes, universities, market research agencies and not-for-profit organisations. As with the topic, there is ample evidence that the type of sponsor has an impact on the response rate (see Chap. 9 by Stoop). Most studies in this area suggest that people are more likely to participate in an academic or government survey than in a commercial survey.
In addition, the topic of a survey is clearly related to the type of sponsor: national statistical institutes do not run exit polls, and market research organisations conduct a lot of consumer research.

In practice, all kinds of combinations of sponsors and data collectors can occur. For instance, television networks can start their own online panels, and market research agencies collect data for national statistical institutes or universities. In the European Social Survey (ESS), an academic cross-national survey (see Chap. 15 on Repeated Cross-Sectional Surveys by Stoop and Harrison), each country selects a survey agency to collect data in that country. ESS data are therefore collected by each of the four types of survey agencies mentioned above (see www.europeansocialsurvey.org: ‘Project information’, participating countries). It could however be argued that in the world of surveys, statistics, academia and market research are three different continents (and not-for-profit organisations a kind of island in between). In the world of (official) statistics, sampling is the key element of surveys (see for instance the history of the International Association of Survey Statisticians, http://isi.cbs.nl/iass/allUK.htm). Surveys run by national statistical institutes are almost always based on probability samples, whereas market research organisations increasingly use non-probability samples from online panels (see e.g. Yeager et al. 2011).
An instructive overview of the differences between academia and survey research agencies is given by Smith (2009, 2011), summarised in Box 2.4. In the Netherlands and Flanders, a recent initiative is trying to bring together the different approaches to survey research in the Dutch Language Platform for Survey Research (www.npso.net).

Box 2.4: Survey research, academia and research agencies (based on Smith 2009, 2011)

Smith (2009) sees a major divide in the UK between two kinds of knowledge held by survey experts in research agencies and in academia, and feels that this is to the detriment of survey research. He contends that agency practitioners are strong on knowing how, while academics are strong on knowing that. Market researchers have practical skills but lack theoretical knowledge, whereas academics know the theory but lack practical skills, and may therefore have unrealistic expectations about the sorts of data a survey can reasonably be expected to collect. Smith (2009, p. 720) points out three significant problems:
1. Practitioners make needless mistakes because they lack depth in their understanding of how survey errors work.
2. The bulk of surveys in the UK (those not using random probability samples for a start) receive almost no serious academic attention, and suffer as a result.
3. Academic commentary and expectations can be very unrealistic.

He also comes up with a number of possible solutions, although he is rather pessimistic about whether they will be picked up:
• Having academics take secondments in agencies and agency staff take academic secondments.
• Establishing formal links between agencies and academic departments with resource sharing.
• Encouraging academics and agency practitioners to coauthor papers.
• Improving the quality of formal survey training for both academics and practitioners.

In a subsequent paper, Smith (2011) discusses how academics’ knowledge might be transferred more effectively, and how it might translate into better survey practice in research agencies. One conclusion he draws from attending an academic seminar on survey non-response and attrition is that he had to try to translate the research findings into possible practical recommendations himself, and is not sure whether he drew the right conclusions. The second example he gives is a questionnaire training course taught by Jon Krosnick. This course presented the relevant evidence, but also highlighted some practical implications. Smith (2011) sadly realises that despite the vast question design literature, survey practitioners still write questions in the way they were taught long ago, resulting in questions that are simply bad. So, to improve survey quality, effective ways have to be found to translate academic knowledge into survey questions.
Academics should focus on spelling out the practical implications of their findings, and survey agencies should change their practice in line with the results of the academic research.

2.2.4 How: Survey Mode

The best-known distinction between different types of surveys is the survey mode. Section 15.1.3 in Chap. 15 on Repeated Cross-Sectional Surveys describes the main types based on the distinction between interview surveys (face-to-face and telephone) and self-completion surveys (mail and online). Face-to-face surveys are usually rather expensive and thus most often used by academics and statisticians. Interviewers are especially helpful when the survey is long, when more than one person in the household has to be interviewed, or when additional information has to be collected. Recently, however, interesting experiments have been run in web surveys in which respondents themselves collected blood and saliva samples and used online weighing scales (Avendabo et al. 2010).

In many surveys today, multiple modes are used. This might involve a drop-off self-completion questionnaire following a face-to-face survey, or a mixed-mode approach where web, telephone and face-to-face are deployed sequentially to maximise coverage and minimise costs. De Leeuw (2005) gives a useful overview of the different modalities of mixing modes.

Commercial organisations make increasing use of online access panels. We use the term ‘panel’ here not to mean a single sample of people who are monitored over time, as in a longitudinal survey, but in the sense of a permanent pool of respondents from whom repeated representative (quota) samples can be drawn. The UK organisation YouGov was a pioneer in this field (see Box 2.5).
Box 2.5: Example of an online access panel: YouGov (based on information from http://www.yougov.co.uk/about/about-methodology.asp, accessed on 24 January 2012)

Registration
In order to register with YouGov, each panel member completes a detailed profiling questionnaire and sets up an account name and password. This questionnaire enables YouGov to select a sample that reflects the population. For example, the pool divides into 56% men and 44% women; but in a sample for national political surveys, 52% women and 48% men are selected.

Recruitment and Incentives
The pool is recruited from a wide variety of sources: through targeted campaigns via non-political websites and by specialist recruitment agencies; people can also join the panel through the open site, although the self-recruited sample is identified as such and is not generally used for published political polls.

Respondents receive a small incentive for completing YouGov surveys, to ensure that responses are not tilted towards those passionately interested in the subject of the particular survey. Incentives typically range from 50p to £1 per survey, paid through cash payments to an online account which pays out when the panel member reaches £50, as well as occasional cash prizes.

Conducting Surveys
When YouGov conducts a survey, selected panel members are invited by email to complete the survey by clicking on an Internet link. In order to complete the survey they must log in and provide their password. This ensures that the right people complete the survey, and enables their answers to be matched to the demographics they provided when registering with YouGov. Response rates of at least 40% are normally achieved within 24 h and 60% within 72 h.
Little difference has been detected between early and later responses, once the data have been weighted to demographic and attitudinal variables, including past votes and newspaper readership.

Although online access panels are rather popular among commercial agencies (and are inexpensive compared to surveys based on probability sampling), concerns about the non-probability sampling procedures are growing (see Sect. 2.2; Yeager et al. 2011; The American Association for Public Opinion Research 2011). As long as there is no evidence that ‘… the factors that determine a population member’s presence or absence in the sample are all uncorrelated with the variables of interest in a study, or if they can be fully accounted for by making adjustments before or after data collection…’ (Yeager et al. 2011, p. 711), the assumption that a sample from an online panel represents the target population cannot be sustained.

Probability-based online samples, on the other hand, such as the Dutch LISS panel (www.lissdata.nl), are a useful addition to scientific research. The LISS panel consists of 5,000 households, comprising 8,000 individuals. The panel is based on a true probability sample of households drawn from the population register by Statistics Netherlands. Households that would not otherwise be able to participate are provided with a computer and Internet connection. A special immigrant panel is available in addition to the LISS panel. This immigrant panel comprises around 1,600 households (2,400 individuals), of which 1,100 households (1,700 individuals) are of non-Dutch origin.

2.2.5 When: Cross-Sections and Panels

For some surveys, data are collected only once. These are usually called cross-sections. In many cases, however, changes over time are an important part of the research question. In these cases the survey is repeated at regular intervals; this may be every month, every year or every few years. This is usually called a repeated cross-section, meaning that a different sample is approached each time. Sometimes this is also called a longitudinal survey, highlighting the fact that the focus is on longitudinal comparison. Technically, however, the term ‘longitudinal’ is best reserved for a longitudinal panel. Here, the same group of respondents is approached at regular time intervals. This makes it possible to measure change at an individual level. Since panel members tend to drop out, and because a panel no longer represents the target population after a few years, a rotating panel design can be used (see Box 2.6). New panel members participate in a fixed number of waves. A group of new panel members is recruited for each wave, making it possible to draw conclusions on individual changes and on the population at a given moment.

Box 2.6: Rotating panel design, Labour Force Survey (Eurostat 2011, p. 7)

All the participating countries except Belgium and Luxembourg use a rotating panel design for the samples. The number of panels (waves) ranges from two to eight. All panel designs provide for an overlap between successive quarters, except for Germany and Switzerland, which only have a year-to-year overlap. The most common panel design with a quarterly overlap in 2009, adopted by 12 participating countries, is 2-(2)-2, where sampled units are interviewed for two consecutive quarters, are then left out of the sample for the next two waves, and are included again two more times.
Other common rotation patterns, each used by six countries, are 'in for 5' and 'in for 6' waves, where each panel is interviewed consecutively for five or six quarters before permanently leaving the sample. Three other rotation schemes are used by one or at most two countries.

2.2.6 Where: Regional, National, Cross-National and International Surveys

A survey among the inhabitants of a specific community can be called a regional survey. When all inhabitants of a country belong to the target population, this can be called a national survey. By stratifying regions, it is possible to make sure that study outcomes for both regions and the entire nation can be published.

In theory, international surveys sample from multiple countries, and the target population is the combined population of the countries under study. In practice, however, international surveys are rare, because sampling frames seldom cover more than one country and because countries are obvious units of comparison. Consequently, most international surveys are really cross-national surveys: an independent sample is drawn in each participating country, and the results of the national data files are combined afterwards into a harmonised cross-national data file.

Two strategies for harmonisation are used in cross-national studies, namely input harmonisation and output harmonisation (Körner and Meyer 2005). Input harmonisation means that the instrument is as identical as possible in each participating country: the same fieldwork approach, the same survey mode, the same questions (but translated), and so forth. Output harmonisation allows countries to use their preferred survey mode.
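Output harmonisation can be illustrated with a small, purely hypothetical sketch: the variable names, category codes and mappings below are invented for illustration and are not taken from any actual survey. Two countries field different national instruments, and each national dataset is mapped onto one shared classification afterwards:

```python
# Hypothetical sketch of output harmonisation: each country keeps its own
# instrument, and the national data are mapped onto one shared target
# classification afterwards. All codes and names here are invented.

# Shared (harmonised) target classification
HARMONISED = {"employed", "unemployed", "inactive"}

def harmonise_country_a(code: int) -> str:
    """Country A asks a single labour-status question with codes 1-4."""
    mapping = {1: "employed",    # working full-time
               2: "employed",    # working part-time
               3: "unemployed",
               4: "inactive"}
    return mapping[code]

def harmonise_country_b(register_employed: bool, seeking_work: bool) -> str:
    """Country B derives the status from a register flag plus one question."""
    if register_employed:
        return "employed"
    return "unemployed" if seeking_work else "inactive"

# Combining both national files yields one harmonised variable
records = ([harmonise_country_a(c) for c in (1, 3, 4, 2)]
           + [harmonise_country_b(e, s) for e, s in ((True, False), (False, True))])
assert set(records) <= HARMONISED
print(records)  # ['employed', 'unemployed', 'inactive', 'employed', 'employed', 'unemployed']
```

The point of the sketch is only that the definitions of the target categories are fixed centrally, while each country is free to reach them from its preferred mode or source, which is what distinguishes output from input harmonisation.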
Ex-post output harmonisation means that different questions can be used, or that some countries can derive variables from questionnaires and others from registers, as long as the same definitions are used. Ex-ante output harmonisation means that the questionnaire has to be identical in each country, but that the data collection method may differ.

Appendix 1 gives an overview of cross-national or comparative surveys. One of the advantages of these surveys is that in most cases the data are available for external parties, or at least for academic use.

A special example of a multi-national (and also multi-actor) survey is a survey in which migrant families are interviewed in both the sending and receiving countries (see Box 2.7).
Box 2.7: 500 Families: Migration Histories of Turks in Europe (taken from http://www.norface.org/migration11.html, 18 January 2012)

Social, economic and cultural integration of first generation immigrants and their children has been the focus of extensive research in Europe and elsewhere. However, much remains unknown about the multi-generational transmission of social, cultural, religious and economic resources and behaviours. Furthermore, while transnational studies on intra- and international migration processes are well established in the US, they are scarce in Europe. Finally, immigrants are mainly compared to other immigrants and/or natives, whereas studies comparing immigrants to those left behind or those who have re-emigrated to the origin country are an exception. This project will treat these research lacunae and will extend existing research on international migration processes and intergenerational mobility by implementing a unique research design based on 500 Turkish families, their immigrant descendants in Europe and those who remained in Turkey. It reconstructs basic migration, family and socio-economic histories of complete lineages generated by a sample of 500 ancestors, born in Turkey between 1930 and 1940 in selected high-migration sending areas, and personally interviews approximately 6,000 family members over up to four generations in Turkey and Europe, investigating patterns of intergenerational transmission of resources and values and their intersection with family migration trajectories.

2.2.7 Why: Fit for Purpose

Cross-classifying the different classification criteria of surveys does not make sense, because the different classifications are related, as has been mentioned before. In addition, in many cases a trade-off has to be made between accuracy, speed and costs.
Exit polls require speed, and cannot be mail surveys; statisticians need to produce exact figures and cover the general population, so non-probability-based online surveys do not suffice.

Even so, no two surveys are ever exactly the same (even when they are intended to be). Like any other product, the form of a survey will be affected by any number of design considerations. Like many products, it will be created with a clear purpose in mind. To this extent, to borrow Louis Sullivan's famous dictum, 'form [at least partially] follows function'. This is related to one of the most important survey quality criteria: fit for purpose. A survey is good when it serves the purpose for which it has been designed. Sometimes speed is predominant, sometimes precision of the outcomes and sometimes the comparability of countries. Each purpose has an impact on the final form of the survey.

However, there are other factors which also influence the shape and form of surveys. The aspirations and whims of the survey architects or designers will be visible in its appearance. The survey may seek to resolve well-known problems or weaknesses by using different materials or a new production process. It might build on the success of similar products, or it might be experimental in some way, testing the feasibility of a new process, product or price point. And, in common with most production processes involving human beings, things go wrong with surveys.

Like any product or object, then, survey design is a compromise between the sponsor or client and the designers. Along the way, its form may be affected by production problems and cost constraints. Early feedback from market testing may reveal that there are aspects of the product that users find difficult or off-putting, so it has to be re-engineered to make it more acceptable or to improve its quality. Many of these design aspects will be covered in the other chapters of this book.
2.3 Appendix 1: Comparative Surveys

(a) Comparative surveys on attitudes, values, beliefs and opinions

European Social Survey (ESS)
www.europeansocialsurvey.org
Academically driven social survey designed to chart and explain the interaction between Europe's changing institutions and the attitudes, beliefs and behaviour patterns of its diverse populations. Biennial, first round in 2002; covers more than 30 European countries.

International Social Survey Programme (ISSP)
www.gesis.org/en/issp/overview/
Continuing annual programme of cross-national collaboration on surveys covering topics important for social science research. Brings together pre-existing social science projects and coordinates research goals, thereby adding a cross-national, cross-cultural perspective to the individual national studies.

European Values Study (EVS)
www.europeanvaluesstudy.eu
Large-scale, cross-national and longitudinal survey research programme focusing on basic human values. Provides insights into the ideas, beliefs, preferences, attitudes, values and opinions of citizens all over Europe. Data are collected every ten years on how Europeans think about life, family, work, religion, politics and society.

World Values Survey (WVS)
www.worldvaluessurvey.org
Explores people's values and beliefs, how they change over time and what social and political impact they have; carried out in almost 100 countries. Data are collected every five years on support for democracy, tolerance of foreigners and ethnic minorities, support for gender equality, the role of religion and changing levels of religiosity, the impact of globalisation, attitudes towards the environment, work, family, politics, national identity, culture, diversity, insecurity and subjective wellbeing.

Eurobarometer
http://ec.europa.eu/public_opinion/index_en.htm
www.gesis.org/en/eurobarometer-data-service
The Eurobarometer programme monitors public opinion in the European Union.
It consists of four survey instruments/series: the Standard Eurobarometer, the Special Eurobarometer, the Flash Eurobarometer, and the Central and Eastern and Candidate Countries Eurobarometer.

Afrobarometer
www.afrobarometer.org
Research project that measures the social, political and economic atmosphere in a dozen African countries. Trends in public attitudes are tracked over time. Results are shared with decision makers, policy advocates, civic educators, journalists, researchers, donors and investors, as well as average Africans who wish to become more informed and active citizens.

Latinobarómetro
www.latinobarometro.org
Annual public opinion survey that involves some 19,000 interviews in 18 Latin American countries, representing more than 400 million inhabitants. Latinobarómetro Corporation researches the development of democracy and economies, as well as societies, using indicators of opinion, attitudes, behaviour and values. Its results are used by social and political actors, international organizations, governments and the media.
AsiaBarometer
www.asiabarometer.org
Comparative survey in Asia, covering East, Southeast, South and Central Asia. It focuses on the daily lives of ordinary people (bumi putra) and their relationships to family, neighbourhood, workplace, social and political institutions and the marketplace.

(b) Comparative surveys on living conditions

Survey of Health, Ageing and Retirement in Europe (SHARE)
www.share-project.org
Multidisciplinary and cross-national panel database of microdata on health, socioeconomic status and social and family networks of more than 45,000 individuals aged 50 years or over. Started in 2004, now covering 13 countries.

European Foundation for the Improvement of Living and Working Conditions (EUROFOUND)
www.eurofound.europa.eu/surveys
Eurofound has developed three regularly repeated surveys to contribute to the planning and establishment of better living and working conditions. The surveys offer a unique source of comparative information on the quality of living and working conditions across the EU.
• European Working Conditions Survey (EWCS)
• European Quality of Life Survey (EQLS)
• European Company Survey (ECS)

Household Finance and Consumption Survey (HFCS)
www.ecb.int/home/html/researcher_hfcn.en.html
The HFCS collects household-level data on household finances and consumption. It covers the following household characteristics at micro level: demographics, real and financial assets, liabilities, consumption and saving, income and employment, future pension entitlements, intergenerational transfers and gifts, and risk attitudes. Data available in 2013.

Eurostat microdata
http://epp.eurostat.ec.europa.eu/portal/page/portal/microdata/introduction
Access to anonymised microdata available at Eurostat only for scientific purposes.
The following microdata are disseminated free of charge:
• European Community Household Panel (ECHP)
• Labour Force Survey (LFS)
• Community Innovation Survey (CIS)
• Adult Education Survey (AES)
• European Union Statistics on Income and Living Conditions (EU-SILC)
• Structure of Earnings Survey (SES)
• Farm Structure Survey (FSS)

European Community Household Panel (ECHP)
http://epunet.essex.ac.uk/echp.php
http://epp.eurostat.ec.europa.eu/portal/page/portal/microdata/echp
Harmonised cross-national longitudinal survey focusing on household income and living conditions. It also includes items on health, education, housing, migration, demographics and employment characteristics. The ECHP is now finished.

EU Labour Force Survey (EU LFS)
http://epp.eurostat.ec.europa.eu/portal/page/portal/microdata/lfs
Conducted in the 27 Member States of the European Union, three candidate countries and three countries of the European Free Trade Association (EFTA). Large household sample survey providing quarterly results on the labour participation of people aged 15 years and over, as well as on persons outside the labour force.

EU Statistics on Income and Living Conditions (EU-SILC)
http://epp.eurostat.ec.europa.eu/portal/page/portal/microdata/eu_silc
Instrument aimed at collecting timely and comparable cross-sectional and longitudinal multidimensional microdata on income, poverty, social exclusion and living conditions in the European Union.
(c) Surveys on literacy and skills

Adult Literacy and Lifeskills Survey (ALL)
http://nces.ed.gov/surveys/all/
International comparative study designed to provide participating countries, including the United States, with information about the skills of their adult populations. ALL measures the literacy and numeracy skills of a nationally representative sample from each participating country.

Programme for the International Assessment of Adult Competencies (PIAAC)
www.oecd.org/piaac
International survey of adult skills; a collaboration between governments, an international consortium of organisations and the OECD; results to be published in 2013. Measures skills and competencies needed for individuals (15–65 years) to participate in society and for economies to prosper.

Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS)
http://timss.bc.edu/
The TIMSS and PIRLS International Study Centre is dedicated to conducting comparative studies in educational achievement. It serves as the international hub for the IEA's mathematics, science and reading assessments.
• TIMSS: every four years, 4th and 8th grade
• PIRLS: every five years, 4th grade

(d) Information on elections

Comparative Study of Electoral Systems (CSES)
www.cses.org
Collaborative programme of research among election study teams from around the world. Participating countries include a common module of survey questions in their post-election studies. The resulting data are deposited along with voting, demographic, district and macro variables.

Infrastructure for Research on Electoral Democracy (PIREDEU)
www.gesis.org/en/institute/competence-centers/rdc-international-survey-programmes/piredeu
Collaborative project on 'Providing an Infrastructure for Research on Electoral Democracy in the European Union', coordinated by the European University Institute (EUI) and its Robert Schuman Centre for Advanced Studies (RSCAS).

References

AAPOR, The American Association for Public Opinion Research. (2011). AAPOR report on online panels. Prepared for the AAPOR Executive Council by a task force operating under the auspices of the AAPOR Standards Committee. Retrieved from www.aapor.org/Content/aapor/Resources/ForMedia/PressReleases/AAPORReleasesReportonOnlineSurveyPanels/default.htm

Avendano, M., Mackenbach, J. P., & Scherpenzeel, A. (2010). Can biomarkers be collected in an internet survey? A pilot study in the LISS panel. In M. Das, P. Ester & L. Kaczmirek (Eds.), Social and behavioral research and the internet: Advances in applied methods and research strategies. New York: Routledge, Taylor and Francis Group.

De Leeuw, E. D. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21(2), 233–255.

Eurostat. (2009). Harmonised European time use surveys: 2008 guidelines. Luxembourg: Publications Office of the European Union.

Eurostat. (2011). Quality report of the European Union Labour Force Survey 2009. Luxembourg: Publications Office of the European Union.

Groves, R. M. (1989). Survey errors and survey costs. New York: Wiley.

Häder, S., & Lynn, P. (2007). How representative can a multi-nation survey be? In R. Jowell, C. Roberts, R. Fitzgerald & G. Eva (Eds.), Measuring attitudes cross-nationally: Lessons from the European Social Survey (pp. 33–52). London: Sage.

Kish, L. (1997). Foreword. In T. Lê & V. Verma, DHS analytical reports no. 3: An analysis of sample designs and sampling errors of the Demographic and Health Surveys. Calverton, MD: Macro International Inc.

Körner, T., & Meyer, I. (2005). Harmonising socio-demographic information in household surveys of official statistics: Experiences from the Federal Statistical Office Germany. In J. H. P. Hoffmeyer-Zlotnik & J. A. Harkness (Eds.), Methodological aspects in cross-national research (ZUMA-Nachrichten Spezial Band 11, pp. 149–162). Mannheim.

Passenger Focus. (2010). National passenger survey: Detailed technical survey overview. Autumn 2010 (Wave 23). London.

Smith, P. (2009). Survey research: Two types of knowledge. Viewpoint. International Journal of Market Research, 51(6), 719–721.

Smith, P. (2011). Lessons from academia. Viewpoint. International Journal of Market Research, 53(4), 451–453.

Stoop, I., Spit, J., de Klerk, M., & Woittiez, I. (2002). In pursuit of an elusive population: A survey among persons with a mental handicap. Paper presented at the International Conference on Improving Surveys, Copenhagen, August 2002.

Yeager, D. S., Krosnick, J. A., Chang, L., Javitz, H. S., Levendusky, M. S., Simpser, A., et al. (2011). Comparing the accuracy of RDD telephone surveys and internet surveys conducted with probability and non-probability samples. Public Opinion Quarterly, 75(4), 709–747.
http://www.springer.com/978-1-4614-3875-5