SlideShare a Scribd company logo
Attrition on Longitudinal Surveys – Literature Review
Social Survey Division, ONS November 2009
Introduction
This paper presents a review of literature related to attrition on longitudinal surveys.
Research into attrition encompasses a very wide range of topics, including methods to
measure attrition, attrition bias measures, methods to reduce attrition and methods to
correct for attrition. In this paper we have focussed only on a few of these topics. In
particular, this paper does not examine attrition bias on survey estimates and
methodologies to correct for it, i.e. weighting or imputation.
The paper is organised into three sections:
Section 1 provides an overview of the attrition problem, how to measure attrition and the
theoretical framework of non-response in longitudinal surveys (pages 2-5).
Section 2 focuses on respondents’ and survey’s characteristics that have been found
associated with attrition (pages 6-10).
Section 3 provides a review of the most commonly used methods to reduce attrition
adopted by survey organisations before or during fieldwork (pages 10-16).
A summary of the main findings is also included.
Summary
1. Attrition due to non-response is a major issue of concern to researchers not only
because it may decrease the power of longitudinal analysis but also, and mainly
because, it may be selective, thus impacting on the generalisability of results to the
target population (attrition bias).
2. There is limited methodological research which examines standard definitions of
attrition/longitudinal response rates, in particular for households. Significant
exceptions are Lynn (2005) and Ribisl et al (1996) who recommend a set of standard
response rates to be published for longitudinal surveys. Detailed guidelines to
calculate attrition measures are also published by Eurostat (2004).
3. Lepkowski and Couper (2002) provide a framework to explain the longitudinal
response process. Longitudinal response can be seen as the result of three
conditional processes: locating a respondent; contacting the respondent at a given
location; and then obtaining a respondent’s cooperation. Although these three
processes have got parallels in cross-sectional surveys, they do present longitudinal-
specific issues.
4. A vast body of empirical research has looked into which socio-demographic
characteristics are more likely to predict or to be associated with attrition. Attrition is
more likely among younger and older respondents, men, single (i.e. never married)
people and minority ethnic groups. More mixed evidence exists on the relationship
between attrition and employment, education and income.
5. Respondents’ prior experience of the survey plays a key role in predicting attrition.
Respondents who have little interest or knowledge of a survey topic are more likely to
refuse at later waves than other sample members. Non-response at potentially
sensitive questions, such as income, is also a good predictor of attrition at later
waves. More experimental research is needed to assess the impact of interview
length on attrition.
6. A large and constantly growing array of methods is available in longitudinal surveys to
locate respondents. The majority of tracking methods are potentially time-consuming
and costly, in particular reactive and interviewer-led tracking techniques. Very little
research has looked into the cost-effectiveness of the different tracking methods.
7. Various methods are incorporated into longitudinal survey design with an aim to
minimise refusals. These include incentives, refusal conversion techniques and extra
1
interviewer efforts. Incentives are one of the most popular methods employed and
reduce refusals both in cross-sectional and longitudinal surveys. In longitudinal
surveys, more evidence is needed to assess the impact of changes in incentives over
time (including introducing and ceasing incentives) and of incentive tailoring
strategies.
Section 1
Attrition: Definition, Measures and Theory
1.1. Attrition and sample attrition
In the context of longitudinal surveys, the term attrition is normally used to refer to the loss
of survey participants over time.
Attrition may occur for a number of different reasons and Watson and Wooden (2004)
classify these in two types. The first type includes reasons related to change in the
underlying population, such as deaths, and is often referred to as ‘natural attrition’. This
type of attrition is inevitable but from a statistical perspective is less problematic in
practice, as it reflects phenomena which occur not only in the study cohort but also in the
overall target population. The second type of attrition arises because sample members
cannot be contacted or they refuse to continue participation. Attrition due to non-response
is usually referred to as "sample attrition" or "panel attrition" (Lynn, 2006). This type is far
more problematic and it will be the focus of this literature review. From now on, we will
refer to this type of attrition as "sample attrition" or simply "attrition".
Lynn (2006) defines sample attrition as the "cumulative effect of non-response over
repeated waves or data collection efforts", not including non-response at Wave 1 of a
survey as this is before attrition has occurred. This definition implies a monotone process,
where sample members change their status from respondent to non-respondent, but not
vice versa. In many longitudinal surveys however, attempts are made to contact non-
respondents at previous waves. Therefore sample members may return to be
respondents at a subsequent wave. Some authors distinguish explicitly between "wave
non-respondents" and "attrition cases" (Plewis et al, 2008; Hawkes D and Plewis, 2006).
Wave non-respondents are those cases who are interviewed on some occasions in a
longitudinal survey, but not on others; attrition cases refers to units who are initially part of
the sample but are, sooner or later, lost permanently at follow-up. Some other authors
instead use the term sample attrition indistinctly to denote loss of study participants at
follow up either permanently or temporarily (Lynn, 2006).
In any case, either temporary or permanent, sample attrition is an issue of concern to
longitudinal survey researchers for at least two reasons. Firstly, similarly to attrition due
to demographic losses, sample attrition reduces the size of the sample available for
longitudinal analysis where data from the same respondent for one or more waves is
needed. This causes loss of statistical power with longitudinal samples becoming too
small to produce robust statistical analysis and panel data estimates losing significance.
At high levels, sample attrition may even threaten the viability of continuing a panel
(Watson and Wooden, 2009). Secondly, non-response attrition may be selective, in that
those who are lost at follow up may be different from those who remain in the sample.
Non-random attrition causes great concern as it impacts on the generalisability of results
to the entire target population. This problem is often referred to as "attrition bias". Many
studies have been carried out to investigate the extent of attrition bias in specific surveys
by looking at how characteristics of attriters differ from those of respondents' (Hawkes
and Plewis, 2006). We report some of their main findings in Section 2.
1.2 Measures of attrition
1.2.1 Attrition and response rates
2
Attrition rates are a typical measure used to report levels of attrition. These are defined as
the proportion of respondents who are lost at follow-up. Many surveys however do not
report attrition rates directly, but these can be derived from their published response
rates.
Response rates can be computed and reported in many different ways. As one of the key
indicators of survey quality, the importance of developing adequate standards to allow
comparison of response level across survey organisations has been long acknowledged
(Smith, 2002). The American Association of Public Opinion Research (AAPOR)
published, in the late 1990s, recommended standards to define and calculate response
rates, currently in its 5th edition (AAPOR, 2008). In the UK, Lynn et al (2001)
recommended standards for face to face surveys of households and individuals.
Examples of work on development of response standards in other countries can be found
in literature by Kasse (1999), Hidiroglou et al (1993) and Allen et al (1997).
More limited methodological research has looked specifically at developing standards for
calculating longitudinal response rates. Even the AAPOR Standard Definitions manual
(AAPOR, 2008) provides only generic guidelines on the calculation of response rates for
multi-wave surveys, stating that response rates should be calculated and reported "for
each separate component and cumulatively".
The most extensive work on defining response rates for longitudinal surveys that was
found in the literature is an unpublished paper by Lynn (2005), in which he extends his
previous work on response rates standards for cross sectional surveys to longitudinal
surveys. Lynn (2005) also argues that no single rate can summarise the overall level of
response to a longitudinal survey and recommends instead a number of different
response measures to be calculated and published. Lynn (2005) refers to longitudinal
surveys as surveys with multiple Data Collection Events (DCEs). In his framework, rates
are explicitly defined according to a particular set of DCEs. For each set of DCEs, rates
can be defined either unconditionally or conditionally. Unconditional response rates are
based on all sample units who were eligible for all of the relevant DCEs while conditional
response rates depend upon response to some other set of DCEs, typically one or more
prior DCEs. This results in up to ∑=
−
m
i
i
1
)12( different response rates that could be
reported for a survey with m DCEs. For a survey of 5 waves, that would mean 57 different
response rates.
Out of all possible response rates, Lynn (2005) recommends that the following are always
published:
• Complete response rate: Response to every wave/ Eligible at every wave
• Wave-specific response rates: Responses to wave k/Eligible at wave k
• Wave specific response rates conditional upon response at the previous wave:
Response at wave k/Eligible at wave k and respondent at wave k-1
Additionally, if the survey design is such that a new sample enters at each wave, then the
sample-specific wave response rates should also be published.
Finally, if there are certain combinations of DCEs important for analysis purposes, survey
organisations or data providers should also identify these key combinations and publish
the relevant response rates. Lynn's framework has been recently adopted in the UK by
the National Centre for Social Research for the reporting of response rates for the English
Longitudinal Survey of Ageing (Scholes et al, 2009).
3
Ribisl et al (1996) also suggested that five types of response rates should be published in
panel studies, making an explicit distinction between cooperation and location rates for
later waves of a survey. Their recommended rates include:
• Baseline response rate: Number of completed baseline interviews/ Number of
eligible individuals
• Gross follow-up location rate: Number of participants located at follow up/Number
of completed baseline interviews
• Gross follow-up completion rate: Number of completed follow-up
interviews/Number of completed baseline interviews
• Eligible participant follow-up completion rate: Number of completed follow-up
interview/Number of completed baseline interviews still eligible at follow-up
• Cumulative follow-up completion rate: Number of individuals with completed
interviews at all follow-up time periods/Number of completed baseline interviews
Eurostat (2004) have also devised standard longitudinal response measures for the
surveys feeding into its Statistics on Income and Living Condition (EU-SILC) longitudinal
component. The following response measures are required by each member state for the
second and following waves of the EU-SILC:
• Wave response rate: the proportion of eligible sample units at that wave which
responded to the survey
• Longitudinal follow-up rate: the percentage of units which are passed on to wave
k+1 for follow up within units received into wave k from wave k-1, excluding those
out of scope or non existent.
• Follow-up ratio: number of units passed on from wave k to wave k+1 in
comparison to the number of units received for follow-up at wave k from wave k-1
• Achieved sample size ratio: ratio of the number of responding units in wave k to
the number of responding units in wave k-1
1.2.2. Household response rates
One of the difficulties in calculating longitudinal response rates stems from the difficulties
in dealing with a dynamic picture. Survey units change over time, with some units ceasing
to exist while new ones are created. This is a particular issue for longitudinal household
surveys. Households are more transient units than individuals, as they may change
composition over time with original members leaving and/or new individuals joining the
original household.
The conceptual difficulties surrounding the definition of a longitudinal household have led
some surveys to publish only personal longitudinal response rates. This is the case for
the Survey of Labour and Income Dynamics in Canada (SILD) (Michaud and Webber,
1994). Other surveys do calculate and publish household and individual level longitudinal
rates. That is the case for example the EU-SILC, which provides detailed guidelines on
how to produce its standard response measures both at household and person level
(Eurostat, 2004).
With the exception of Eurostat’s 2004 paper, we could not find any methodological
literature looking into the specific issues surrounding the calculation of longitudinal
household response rates, including how to link households over time and how split
households (i.e. households that are formed when one or more individuals leave their
original households) should be taken into account in the calculation of response rates.
1.3. Attrition theory
4
The factors which cause non-response in longitudinal surveys are, in many ways, similar
to those that operate on standard cross-sectional surveys (Lynn et al, 2005). However,
there are also some mechanisms which are specific to longitudinal surveys.
In order to illustrate the attrition process, Lepkowski and Couper (2002) extend Groves
and Couper’s (1998) general theory of non response to longitudinal surveys. In Lepkowski
and Couper’s framework, the process that leads to non-response attrition at a second (or
later) wave of a panel survey can be divided into three conditional processes:
1. Location: locating a sample member
2. Contact: contacting the sample member given location;
3. Co-operation: obtaining an interview from the sample member given contact.
1.3.1. Location
The failure to locate respondents over waves of a longitudinal survey is often one of the
major causes of attrition (Ribisl et al, 1996). The propensity of locating a respondent can
be seen as the combination of the propensity of the respondent to move and the
propensity to locate a person who has moved (Couper and Ofstedal, 2009).
Geographical mobility is an area of research in its own right and is a phenomenon that
cannot be manipulated by survey practitioners; therefore this paper will not go into it in
much detail. It is worthy to note that longitudinal panel surveys can provide a variety of
information to help and predict the likelihood that a respondent will move. For example,
Uhrig (2008) found that respondents, who expressed a desire to move, or a lack of
attachment to their area, are more likely to become non-contacts at a later wave, as well
as respondents who rent their home. Uhrig (2008) suggests that this may be presumably
due to re-location and these measures could be used to predict subsequent non-
response in panel studies.
The process of locating (tracking) respondents who have moved is more of interest to this
literature review as survey practitioners can have some control upon it. In order to locate
respondents, survey organisations need to ensure that all contact details exist and are
up-to-date for each respondent. For example, name, address, telephone number, email
addresses (Uhrig, 2008). Laurie et al (1999), in their study of tracking procedures, found
that approximately 10 per cent of respondents move within a given year. However,
McAllister et al (1973) state that respondents do not disappear unless they are
deliberately trying to do so. This highlights that there is considerable promise in tracking
respondents if the right methods are used. Groves and Hansen (1996) suggest that ‘with
adequate planning, multiple methods and enough information, time, money, skilled staff
and perseverance, and so on, 90 per cent to 100 per cent location rates are possible’. We
will look at survey tracking efforts more in detail in Section 3.
1.3.2 Contact
Once a panel member has been located, the contact process is not very different from
cross-sectional surveys (Watson and Wooden, 2009). Contactability will depend on the
respondent's patterns of being physically present at the place of contact (normally home),
or any physical impediments (i.e. locked or shared entrances to dwelling) and finally on
the survey organisation's effort in making contacts (Uhrig, 2008). Additionally, in a
longitudinal survey, interviewers have prior knowledge of at home-patterns, information
on the best time to call (Lepkowski and Couper, 2002) and awareness of physical
barriers, if the respondent has not moved. Therefore the impact of these factors should be
smaller than in a cross-sectional survey (Uhrig, 2008), suggesting that non-contact should
be a relatively small phenomenon at later waves, given successful location (Lepkowski
and Couper, 2002). Non-contact attrition tends to be more a result of failure to locate
respondents than of failure to contact them.
1.3.4 Co-operation
5
Refusals in longitudinal surveys differ significantly from cross-sectional surveys. After the
first wave of a longitudinal survey the sample has already experienced the interview
process and is aware of the survey’s topics, its cognitive requirements and its time
commitment. Respondents will use this experience as a guide whether to participate or
not in future waves (Lynn et al, 2005).
By its very nature, a longitudinal survey places a greater burden on respondents and this
factor alone can induce sample members to refuse cooperation at the outset. Indeed
Apodaca et al (1998) report that the presence of a ‘perceived longitudinal burden’
resulted in a 5 per cent drop in response rates. 'Panel fatigue' (Laurie, Smith and Scott,
1999) is also often present, and over time respondents may feel like they have 'done
enough'.
Laurie et al (1999) identify two types of refusers on longitudinal surveys:
• Wave specific refusers are individuals who refuse to take part for one wave
because of circumstantial situations, for example illness or bereavement, but
may participate at a successive wave.
• Definite withdrawals are refusers who are adamant that they don't wish to
take part in the study (any more).
Various methods are incorporated into longitudinal survey practice with an aim to
minimise refusals. We will look at survey organisation’s efforts to improve co-operation
more in detail in Section 3.
Section 2
Factors associated with attrition
Using Lepkowski and Couper (2002)'s framework, overall survey non-response can be
seen as the cumulative effect of failure to locate, failure to contact and refusal to
cooperate. These processes may operate independent of one another, but they all
contribute to the overall attrition (Nicoletti and Peracchi, 2005). Uhrig (2008) notes how in
the literature there is often little differentiation between these processes as a greater
focus is put on the general absence of data regardless of the processes generating it.
The likelihood of a sample member to be located, contacted and to cooperate in a later
wave of a longitudinal survey depends on respondents' personal characteristics, but also
on their previous survey experience and on the survey organisation operational efforts. In
this section we will look at the first two aspects, while the survey organisation processes
will be discussed in Section 3.
2.1. Individual characteristics related to non-response
There is a large amount of literature on non-respondents characteristics, often looking
separately at characteristics of refusals and non-contacts. This literature has been
reviewed extensively by Watson and Wooden (2009), Uhrig (2008) and Lynn et al (2005).
We present here their main findings relative to a number of key socio-economic
characteristics.
2.1.1 Age
Both Lynn et al (2005) and Uhrig (2008) report that a wide body of empirical research has
found that elderly and young people are more likely not to be contacted
(Cheesbrough,1993; Lillard and Panis, 1998, Foster, 1998; Groves and Couper, 1998;
Lynn and Clarke, 2002; Stoop, 2005; Watson, 2003). The elderly also appear more likely
to refuse survey cooperation (Hawkins, 1975; Foster and Bushnell, 1994; Groves and
Couper, 1998; Lepkowski and Couper, 2002). Some authors suggest that a greater
likelihood of situation refusal among the elderly could be due to the increasing chance of
6
finding older sample members with health problems (Groves et al, 2000). Indeed,
research by Jones et al (2006) found age has no effect on refusals if health is good.
Focussing on evidence from longitudinal studies, Watson and Wooden (2009) confirm
that attrition is higher amongst the youngest, but that response patterns are more mixed
for the elderly, with some studies finding rising attrition propensities in old age (e.g.
Fitzgerald et al, 1998), others reporting the reverse (e.g. Hill and Willis, 2001), and others
again reporting no clear evidence in either direction (Behr et al, 2005; Nicoletti and
Peracchi, 2005).
2.1.2 Household structure
A large body of research highlights that single people (never married) are more likely to
not be contacted (e.g. Gray et al, 1996) and refuse participation in surveys (Goyder,
1987; Lillard and Panis, 1998; Nicoletti and Peracchi, 2002). Households with children
and married couples appear less likely to be lost at follow-up (Fitzgerald et al, 1998;
Lillard and Panis, 1998; Nicoletti and Peracchi, 2005; Zabel, 1998). Jones et al (2006),
however, finds no effect of marital status on non-response
2.1.3 Gender
Within panel surveys, men appear to attrit more frequently than women (Lepkowski and
Couper, 2002; Hawkes and Plewis, 2006; Behr et al, 2005). Lynn reports that men are
less likely to be contacted than women (Goyder, 1987; Foster, 1998; Lepkowski and
Couper, 2002). Research by Watson (2003) on the European Community Household
Panel found that once education, employment, child care responsibilities and other
factors are controlled, the gender effect disappears.
2.1.4 Labour market activity, income and education
Mixed evidence is found in the literature regarding the relationship between attrition and
employment, income and education.
Employment outside the home, either as an employee or self-employed, has been found
to be associated with no-contact generally and in longitudinal surveys in particular (Foster
and Bushnell, 1994; Goyder 1987; Groves and Couper 1998; Lynn and Clarke, 2002;
Nicoletti and Peracchi, 2005). Hawkes and Plewis (2006) and Branden et al (1995) also
found that job instability appears related to non-contactability. However, Fitzgerald et al
(1998), Zabel (1998) and Jones et al (2006) all found no significant relationship between
employment status and attrition. Gray et al (1996) actually found attrition rates to be
lowest among the employed. Using data from the British Household Panel Survey (BHPS)
and the German Socio-Economic Panel, Nicoletti and Buck (2004) found that the
economically inactive had higher cooperation rates in one sample, but significantly lower
contact probabilities in the other. Uhrig (2008) reports that respondents who are
unemployed are more likely to be non-contacts and speculates that this may be due to
individuals moving in order to find employment.
Lynn et al (2005) reports that survey refusal appears more likely amongst those with low
incomes (Fitgerald et al, 1998; Nathan, 1999), while households with higher incomes
appear more difficult to be contacted (Foster and Bushnell, 1994; Lynn and Clarke, 2002).
Uhrig (2008) also finds that low-income has a slight positive effect on contactability and a
negative effect on cooperation. A study from Brendan et al (1995, reported by Uhrig,
2008) finds that household financial instability of any type, either positive or negative, is
associated with non-response. Uhrig (2008) speculates that large shifts in earnings may
signal some other important structural change in the household (e.g. geographical move,
change in employment). Analysis of the European Community Household Panel by
Watson (2003) also found that a relationship between income and attrition exists but it
differs across countries, with higher attrition being associated with lower income in
northern European countries but with higher income in southern European countries.
Finally, Watson and Wooden (2009) reports a number of studies that have found no
7
evidence of any significant relationship between income and attrition (Gray et al, 1996;
Zabel, 1998; Lepkowski and Couper, 2002; Nicoletti and Peracchi, 2005) and concludes
that income is probably relatively unimportant for attrition.
As for education, less educated individuals appear more likely to attrit in panel studies
(Jones et al, 2006; Watson, 2003; Behr et al, 2005; Lillard and Panis, 1998), but the
magnitude of the relationship is arguably small (Watson and Wooden, 2009). Watson
(2003) also finds the reverse relationship in Southern Europe, where less education is
associated with lower rates of attrition.
2.1.5 Ethnicity and language
Studies have shown that ethnic minority groups are more likely to be non-respondents
(Zabel, 1998; Burkam and Lee, 1998). While Lynn et all (2005) reports that people from
ethnic minorities are more likely to be refusals (Foster, 1998; Fitzgerald et al, 1998; Lyer,
1984; Lynn and Clarke, 2002; Nathan, 1999), Watson (2003) reports two studies from
Gray et al (1996) and Lepkowski and Couper (2002) which find that non-response among
ethnic groups was mainly due to lower rates of contact and not higher rates of refusals.
Lower contact rates for non white respondents are also reported by Uhrig (2008) and
Calderwood (2009).
Watson and Wooden (2009) point out that limited research has looked at the relationship
between language-speaking ability and attrition, although cross-sectional surveys in
English-speaking countries have almost always reported lower response rates for non
English speakers. Uhrig (2008) found that respondents experiencing difficulty with the
English language were associated with future non-contact. De Graaf et al (2000), in the
Netherlands Mental Health Survey and Incidence Study also found that respondents not
born in the Netherlands were more likely not to be located.
2.1.6 Other findings
People living in urban areas, for example London, are not only more likely to be non-
contacts, but also to be refusals (Goyder, 1987; Couper, 1991; Foster, 1998). Watson and
Wooden (2009) also reports a number of studies which have confirmed the expectations
that people living in urban areas are both less available and harder to reach (Gray et al,
1996; Burkam and Lee, 1998; Zabel, 1998), with only Lepkowski and Couper (2002)
reporting contrary evidence.
People's attachment to their housing unit, as well as to their surrounding neighbourhood
can be indicative of likely future geographical mobility, which in itself is a strong predictor
of contactability (Uhrig, 2008). Research has showed how renters are more likely to attrite
than home owners (Zabel, 1998; Lepkowski and Couper, 2002; Watson, 2003).
Lepkowski and Couper (2002) found that indicators of community attachment and social
integration, including frequency of visits to friends, engagement in community affairs and
interest in politics, appear to be positively associated with survey cooperation. Couper
and Ofstedal (2009) also note that a sample consisting of a higher rate of socially isolated
individuals may be more difficult to locate.
Few studies of attrition have taken into account a measure of respondent's health when
studying attrition (Uhrig 2008). Exceptions are Lepkowski and Couper (2002),Jones et al
(2006) and Couper and Ofstedal (2009) who found that those who reported worse health
or who were less satisfied with their health were less likely to respond at a later wave.
Uhrig himself, however, does not find significant evidence in the BHPS to confirm a
relationship between attrition and health.
2.2. Survey experience
8
Even more than in cross-sectional surveys, in longitudinal studies it is essential to ensure
the survey experience is as pleasant as possible, as this experience will have an impact
not only on cooperation at a particular point in time, but also at later waves (Laurie and
Lynn 2009; Rodgers 2002). Indeed, Watson and Wooden (2009) suggests that "the
respondent's perception of the interview experience is possibly the most important
influence on cooperation in future survey waves". Research into this area by Hill & Willis
(2001, cited by Lynn et al. 2005) found that around 75 per cent of respondents who didn’t
enjoy their experience were still participating at wave 3, compared to 90 per cent who did.
2.2.1 Salience and sensitivity
Respondents who have little interest or knowledge of a survey’s topic are more likely to
refuse at later waves than other sample members. Questions that are considered
sensitive by respondents may also promote attrition at later waves (Lepkowski and
Couper, 2002; Brenden et al, 1995).
Both salience and sensitivity of a questionnaire can be reflected in the number of item
non-response. Watson and Wooden (2009) found that the number of questions not-
answered at previous waves is a good indication of attrition for future waves. This is
particularly true for non-response at potentially sensitive questions. Missing data on
sensitive questions may be indicative of a negative interview experience, but it also
shows how committed a respondent is to participate in the study. Research on sensitive
questions has focussed particularly on income. Non response at income questions has
been proved to be an important predictor of non response at subsequent-waves (Brenden
et al, 1995; Uhrig, 2008). Uhrig (2008) also finds that item-non response at political
preference questions also suggest that politics seems to be a sensitive topic that
promotes subsequent survey refusal.
2.2.1 Interview length
The number of questions and the length of the questionnaire can have an impact on
individuals' propensity of cooperating at further waves of a longitudinal survey (Uhrig,
2008). It is expected that a longer interview places a greater burden on respondents, thus
reducing their willingness to cooperate at later waves. This illustrates the argument of
opportunity cost vs. perceived reward.
Research into the relation between interview length and attrition, as reported by Uhrig
(2008), shows instead that respondents who had short interviews at previous waves were
more likely to be non-responders at the subsequent waves (Branden et al, 1995; Zabel,
1998). Although these results may appear to be counterintuitive, Watson and Wooden
(2009) explain that interview length is actually a product of how willing respondents are to
talk to the interviewers. Thus, the respondents most interested in the survey and who find
it a more enjoyable experience, will have longer interviews. Uhrig (2008) also notes that
the running time of the interview can signal greater respondent burden but it can also
signal a greater commitment by the respondent to give more information. Branden et al
(1995) suggests that the association between longer interview lengths and sample
retention can be explained by:
- Interest in the outcome of the survey
- Interest/salience of the survey topic
- Sense of civic duty on government surveys
- Good rapport between interviewer and respondent.
The association between the length of an interview and attrition therefore does not
necessarily reflect the association between the length of a questionnaire and attrition.
Experimental evidence would be needed to assess the effect of different interviews'
length on attrition. Nevertheless, Zabel (1998), reports that attrition rates on the Panel
9
Study of Income Dynamics (PSID) were reduced after an explicit attempt to decrease the
survey length.
2.2.3. Respondent co-operation at previous waves
Respondent's co-operation at prior interviews appears to be a good predictor for further
participation in the survey (Branden et al, 1995; Laurie et al, 1999; Lepkowski and
Couper, 2002; Uhrig, 2008). Co-operation can be measured directly in the survey by
asking the interviewer to rate how cooperative the respondent is or through the use of
paradata. Respondents that are rated by interviewers anything less than ‘excellent’ in
terms of cooperativeness were more likely to subsequently refuse in Uhrig's (2008) study
of the BHPS. A study by Cheshire and Hussey (2009) using paradata from the English
Longitudinal Study of Ageing (ELSA) found that those respondents who consulted
documents during the interview and who provided consent for data linkage were less
likely to become refusals at a later stage. As mentioned earlier, the amount of item non-
response and non-response at potentially sensitive questions such as income has also
proved to be an important predictor of non-response at subsequent-waves (Brenden et al,
1995; Cheshire and Hussey, 2009).
Willingness to be re-contacted is another important indicator of attrition. Respondents
who did not provide any tracking information or who failed to provide complete tracking
information were more likely to be refusals at later stages. Uhrig (2008) observes that
supplying partial contact details may be interpreted as an "advance soft refusal".
Section 3
Survey organisation processes
Survey organisations have devised and implemented a number of systems to locate,
contact and ensure continued cooperation from panel members in an effort to reduce
attrition over the life of their longitudinal studies. We present here an overview of these
methods, with a particular focus on methods to locate respondents and to reduce
refusals.
3.1 Locating Respondents
Methods to locate respondents are often referred to in the literature as tracking
techniques. There is a large and constantly growing array of tracking methods and
resources available for use on longitudinal surveys. Couper and Ofstedal (2009) classify
tracking procedures into two groups: proactive and reactive techniques.
3.1.1 Proactive techniques
Forward-tracing or prospective techniques are methods of tracking respondents that try to
ensure that up-to-date contact details are available at the start of the wave fieldwork.
Information is gained from the respondent themselves, by ensuring that the most
accurate contact details are recorded at the latest interview and/or by updating the
contact details before the next wave occurs (Burgess, 1989; Couper and Ofstedal, 2009).
These methods are often relatively inexpensive and have proved to be successful, as the
most useful source of information for tracking is often the participants themselves (Ribisl
et al, 1996).
Obviously, all biographical information needs to be recorded accurately at each wave
(McAllister et al, 1973; Ribisl et al, 1996). Investigation has found that ensuring the
correct spelling of an individual’s name by asking the respondent to spell letter-by-letter,
especially for unique names, makes it easier to contact respondents in the future.
Nicknames, maiden or birth names and aliases should also all be recorded. Recording
some vital statistics such as date and place of birth can also be useful to track
respondents (Gunderson and McGovern, 1989).
10
In order to reduce non-contact attrition, it is also useful to record certain types of
information at previous interviews (Ribisl et al, 1996). For example asking the respondent
whether they have any plans to move within the next 6 months, and collect details of their
new address, if known, as well as asking the respondent when would be the best time to
call at future waves (Ribisl et al, 1996). Many longitudinal surveys ask for additional
contact details of friends or relatives of the respondent when interviewed at each wave
(McAllister et al, 1973). Craig (1979) and Bale et al (1984) note that participant’s mothers
are the most helpful contacts as they are more likely to maintain contact with the
participant. Couper and Ofstedal (2009) point out how individuals with large extended
families and strong family ties have many potential sources through which their current
location can be ascertained. But success of this method must not be overstressed as the
contact person may be just as mobile or elusive as the respondent themselves. There is
no guarantee that the additional contacts will be traceable at the next wave.
Keeping in contact with respondents between waves of a longitudinal survey helps to
maintain rapport by reducing non-contact attrition and also encourages a sense of
belonging to the survey (Laurie et al, 1999). There is evidence that obtaining contact
updates between waves has a positive effect not only on tracking respondents over the
waves of a longitudinal survey but also in terms of cooperation at later dates (Couper and
Ofstedal, 2009).
Respondents may be asked to provide address or other contact details updates between
waves. For example, when last interviewed, they can be given a change of address post
card and/or a telephone number or email address to get in touch with the survey
organisation if their details are to change. Alternatively or additionally, change of address
cards and/or confirmation of address cards can be sent to respondents between waves,
with a request to return them to the survey organisation. Laurie et al (1999) reported that
the BHPS receive approximately 500 change of address cards each year and one third of
respondents return the confirmation of address card. This method is relatively
inexpensive and is easy to administer, although it does increase the burden placed on the
respondent. To address the increased burden on respondents, Ribisl et al (1996) suggest
offering a small incentive to increase compliance. For example, the BHPS sends £5 as a
conditional incentive to those who return the change of address card between interview
points.
Other keep-in-touch exercises include short telephone interviews between waves, which
allow respondents to update any of their contact details and also highlight if any
respondents need to be tracked (Ribisl et al, 1996). This procedure can also be used as a
way of ensuring that respondents are free for interviewing during the fieldwork period.
Many survey organisations also keep-in-touch with respondents between waves by
mailing a newsletter or report containing a summary of the survey's results to date (e.g.
ELSA). This method is meant to encourage respondents to feel that their opinions and
experiences are contributing to a worthwhile project, thus encouraging participation at the
following interview(s). Additionally, it allows the survey organisation to check respondents'
contact details. For example if the mail or newsletters are returned-to-sender it will
highlight that the respondent will need to be tracked before the field period begins.
3.1.2 Reactive techniques
Reactive or retrospective techniques are tracking procedures which occur once the
interviewer finds the respondent can no longer be contacted using the contact details held
by the survey organisation (Laurie et al, 1999). A number of participants are still able to
be contacted retrospectively, but these tracking methods tend to be less cost-effective
when compared to proactive methods.
Reactive tracking is normally attempted by interviewers in the field or by a centralised
tracking team (Couper and Ofstedal, 2009).
11
Training can be provided to interviewers to emphasise the importance of tracking and the
impact that non-contact attrition can have on a longitudinal survey. Interviewers often
have a high amount of local knowledge and tracking skills (Laurie at al, 1999) and these
skills can be used to their full capacity in order to help reduce attrition. For example
interviewers can be encouraged to contact neighbours and other members of the
community if a respondent has moved, leave letters for present occupiers to forward to
respondents if the new address is known, etc. However, interviewers will necessarily work
on a case-by-case basis and therefore tracking will be expensive (Couper and Ofstedal,
2009), and it is important that issues of privacy and ethics are taken into account.
Tracking participants is often the toughest and most frustrating job in any longitudinal
survey so it is important to motivate interviewers to locate respondents. Ribisl et al (1996)
suggest that rewards can be offered to interviewers with high rates of success from
tracking individuals and interviewers with certain experience or skills for tracking
respondents could be dedicated to tracking work.
Some longitudinal surveys employ a dedicated tracking team, whose responsibility is
uniquely to track respondents who cannot be located. Many respondents’ contact details
are accessible through existing databases which are updated on a regular basis, for
example, telephone directories and electoral registers. Although such databases could be
used also in a proactive way, they are more often queried by survey organisations after
learning that one or more respondents can't be located. A centralised tracking team can
make a cost-effective use of these resources as it is possible to search for a high number
of respondents' details at one time.
Available databases for tracking are dependent upon the country in which the survey is
taking place (Couper and Ofstedal, 2009). For example, some countries maintain
population registers that are updated every time an individual within the population
moves. These are often available to survey organisations to freely access. In the UK,
Royal Mail maintains a National Change of Address register; however providing change
of address details is entirely voluntary. The Electoral Register can also be accessed for
use with permission, and birth, marriage, death and divorce registers are also a good
source of information for details. In the USA, there are also commercial vendors who
provide contact information, in return for a small fee by consulting, for example, credit
card bills and tax rolls. In the UK, some companies provide access to telephone
directories. Access to some databases may be restricted due to the privacy legislation
and so a limited amount of information may be available to survey organisations.
Centralised tracking teams could also search for email addresses if they are listed
publicly. The issue with this method is that email addresses tend to change more often
than a respondent's home address. Another option may be to search on the internet for a
person's name and last known address. This is a successful method particularly if the
respondent has an unusual name (Couper and Ofstedal, 2009).
3.1.4 Issues to consider
The majority of tracking methods are potentially time-consuming and costly. Reactive
methods have proven more costly than proactive methods. Centralised tracking methods
are the most cost-efficient, whereas tracking done by the interviewers themselves are the
most costly (Couper and Ofstedal, 2009). The tracking process has long been
considered only an operational issue, with very little research looking at the relative
effectiveness of the different methods. Two recent papers have started to fill this
knowledge gap.
Fumagalli et al (2009) conducted an experiment on the BHPS looking at the
effectiveness, in terms of tracking success, of the following methods:
1. Use of address confirmation card vs change of address card vs neither
12
2. Use of a pre-paid (unconditional) vs post-paid (conditional) incentive for address
confirmation/change of address card
3. Use of a standard findings report to be sent to respondent before fieldwork vs a tailored
report for young and busy people.
The research found that the use of a conditional incentive on return of a change of
address card was more effective in tracing respondents than the use of an unconditional
incentive and of an address confirmation card. It also found a limited effect of the tailored
version compared to the standard report.
McGonagle et al (2009) carried out a similar experimental test on the PSID, looking in
particular at the design of the change of address/confirmation card sent between waves,
the timing and frequency of the mailing and the use of a pre-paid or post-paid incentive.
The study found that the old card design performed better than the new design and that
there was no difference in response to the card mailing for the pre and post-paid incentive
groups. The study also found that families who received a second mailing had
significantly higher response rates than those in the one-time mailing condition.
It is important to note the success of any method of locating respondents in a longitudinal
survey is partly dependent on the design of the survey itself. For example, the length
between waves and the mode of data collection can have an important impact on the
probability of locating respondents. The longer the time left between each wave, the
greater the likelihood that sample members will have moved. Face-to-face surveys have
more opportunity for the interviewers to track respondents in the local area by talking to
neighbours, whereas telephone, email and postal survey are less informative (Couper
and Ofstedal, 2009).
3.2. Contacting respondents for interview
Non-contact attrition may still persist even after the respondent is located at the correct
address. The respondent's patterns of being physically present at the address, physical
impediments to getting an interview and the survey organisation's effort all contribute to
whether contact is achieved or not (Uhrig, 2008).
Evidence in the literature shows that the use of paradata to concentrate interviewer effort
can help to contact respondents and improve response in a longitudinal survey (Baribeau
et al, 2007; Couper and Ofstedal, 2009). One of the benefits of a longitudinal survey is
that there is information available from the previous waves, for example the number of
calls taken to contact each respondent, the outcome of the calls, and the time of day of
calls. This data can be used by longitudinal surveys to vary interviewer efforts for
minimising non-contacts in the coming waves. The National Longitudinal Survey of
Children and Youth (NLSCY), for example, uses detailed call record data from previous
waves to minimise non-contact at the following waves.
3.3. Avoiding refusals
Various methods are incorporated into longitudinal survey design with an aim to minimise
refusals (Moon et al, 2005), including incentives (Singer, 2002; Singer et al, 1999), refusal
conversion techniques (Burton et al, 2006; Lynn et al, 2002) and extra interviewer efforts
(Laurie, Smith and Scott, 1998; Lynn et al, 2002). This section outlines some of the most
common methods longitudinal surveys use to reduce refusals. Most of the information in
the section on incentives is based upon a recently published paper by Laurie and Lynn
(2009) which presents a detailed overview of the literature on incentives on longitudinal
surveys.
3.3.1 Incentives
13
Incentives are a common method used on both cross-sectional and longitudinal surveys
to try to minimise refusals. Laurie and Lynn (2009) explain that incentives lead to a
decrease in refusals as an effect of social reciprocity. According to the social reciprocity
model, small gestures on the part of the survey organisation (including incentives)
promote trust and encourage respondents to feel they should give something in return, in
this case cooperation to the survey. Incentives are also a method to show appreciation
for the respondent's time.
Incentives can be particularly beneficial for the long-term viability of longitudinal surveys,
as they can play an important role in securing cooperation into the study not only at a
particular point in time, but also throughout the life of a study. As already mentioned, in
longitudinal surveys, a greater burden is placed on respondents. This is because
cooperation is required over time but also because many longitudinal surveys are often
long and complex, cover sensitive subject matters and may require interviews from each
member of the household. The greater the burden on respondents is, the more
appropriate the use of incentives is generally felt (Laurie and Lynn, 2009).
Findings in the literature report that incentives contribute in improving response rates and
are effective in reducing attrition over multiple waves of a survey (Singer et al, 1999;
Shettle and Mooney, 1999; Rodgers, 2002; Singer, 2002, Lengacher et al, 1995). Some
authors have also noted how incentives may lead to reductions in the overall field costs
through a reduction of the number of calls that interviewers need to make (James, 1997).
However, James and Bolstein (1990) hint towards a backfire effect for very large
incentives which may even cause a reduction in cooperation.
In spite of the recognised role played by incentives, there is sparse guidance on how
incentives should be used in longitudinal studies. A review by Singer (2002) has
highlighted that little empirical research has been done on the usefulness of incentives for
maintaining response rates throughout waves of a survey. The range of incentives used
to maximise response on longitudinal surveys varies greatly between surveys. Decisions
for incentives on particular surveys are often based on the survey's own experience in the
field, feedback from the interviewers and on advice of survey practitioners as opposed to
being based on experimental evidence (Laurie and Lynn, 2009).
Monetary incentives can be offered in the form of cash, cheque, gift voucher or a gift such
as a book of stamps. They can be conditional, that is, they are paid after the interview has
been completed or they may be offered as an unconditional incentive prior to the
interview. For example, the BHPS gives the entire sample an unconditional £10 gift
voucher, and offers a small gift at the interview, whereas ELSA posts a £10 gift voucher
after the interview has been completed. Past evidence has illustrated that monetary
incentives given as an unconditional incentive prior to the interview have the greatest
impact on response (Laurie and Lynn, 2009; Lengacher et al, 1995; Singer, 2002). Trust
is thought to be gained immediately from the respondent, and so refusal rates are shown
to decrease.
Some surveys, like the Canadian Community Health Survey, offer small gifts (e.g. a first
aid mini-kit) as incentives for participation. Research however shows that monetary
incentives such as cash or cheques are more effective than gifts and reinforces that pre-
paid incentives have more influence on response than conditional incentives (Church,
1993; Warrimer et al, 1996).
Providing respondents with feedback of the results of the survey they were involved with
may also act as an incentive to encourage panel members to continue their cooperation
in the study. For example, between data collection waves, ELSA provides each of their
respondents with a newsletter to keep them updated with the main findings of the study
and to reiterate the importance of each response to the validity of the study as a whole.
As a longitudinal survey occurs over a number of waves, it is possible to introduce,
change or cease incentives, although there is little evidence about the likely effects of
14
doing so. Lynn and Laurie (2009) suggest that as the majority of attrition due to refusals
in a longitudinal survey occurs at the first couple of waves, introducing an incentive on an
already existing survey may have little effect on reducing the refusal rates. At the same
time however, it could increase sample members' loyalty for later waves (Laurie, 2007).
The effects of ceasing an incentive are largely unknown. Payments of any kind may
induce respondents to expect some other payment at the next interview (Singer et al,
1998, 2000) although some research suggests that the withdrawal of incentives may not
have a significant impact on response (Lengacher et al, 1995).
Incentives could be tailored to the sample members’ individual circumstances. Detailed
information is known about the response history in longitudinal surveys, so it is possible to
target resources at respondents who are thought to have a higher risk of dropping out
(Laurie and Lynn, 2009). Incentives could vary in amount, nature or the timing of when
they are administered. Laurie and Lynn (2009) recognise that tailoring may not be
practical in some circumstances, for example in household surveys, where the same
incentive should be offered to each member. They also explain that evidence of the
effectiveness of tailoring strategies is extremely thin, as most longitudinal surveys are not
willing to experiment with targeted treatments.
The ethical implications associated with the use of incentives should always be
considered. Kulka (1994) carried out a review of incentives for reluctant respondents and
found the use of incentives may restrict the freedom to refuse to participate in the survey.
In the UK, it is now common practice for incentives to be given as "a token of
appreciation", and testing has shown that such payments are rarely perceived as coercive
(Lessof, 2009).
Mixed evidence exists on the effect of incentives on sample composition and attrition
bias. Couper et al (2006) found that cash incentives are more likely to increase response
than a gift to those of lower education levels, single people and unemployed. This would
suggest that certain sub-sets of the population with low retention propensities react better
than others to offered incentives and incentives therefore could play an important role in
reducing non-response bias. Other studies however have failed to show any change in
the composition of the sample as a result of incentives (Singer et al, 1999).
Finally, another aspect that has been researched extensively is whether incentives may
lead to lower data quality. Research by Couper et al (2006) and Singer et al (1999)
showed that the use of incentives did not appear to have any adverse effect on data
quality as measured by differential measurement errors, levels of item non-response and
effort expended in the interview.
3.3.2 Refusal conversion
In order to reduce attrition, longitudinal surveys often use refusal conversion procedures.
These often involve interviewers re-approaching individuals who initially refused
participation in the survey and try and persuade them to complete an interview by
explaining the purpose of the study more fully and re-emphasising the importance of each
respondent to the survey (Burton et al, 2006; Stoop, 2004; Moon et al, 2005; Laurie et al,
1999). In some cases, larger incentives may be offered to refusals when attempting
conversion (Lengacher et al, 2005; Abreu and Winters, 1999).
Refusal conversion techniques are expensive methods, but they may prove particularly
useful in longitudinal surveys to retain individuals over time (Burton, Laurie and Lynn,
2006). Research has looked at whether refusal conversion procedures at one wave
impacts upon response on later waves. Lengacher et al (1995) report the results of a
refusal-conversion experiment at Wave 1 of the Health and Retirement Study (HRS),
when interviews were sought from a sub-sample of non-respondents by either using
interviewing persuasive techniques or larger incentives. Although they found that the
group who required refusal conversion had significantly lower response rates than the
15
group who did not need conversion, only 11% of Wave 1 converted refusals were refusals
at Wave 2. They also found no difference in Wave 2 response rates between the
persuaded and the large incentive converted-refusals groups. Burton et al (2006) in their
study of the BHPS also concluded that refusal conversion procedures appear to be
effective in minimising attrition from the sample not only at each wave, but over a longer
term.
3.3.3 Interviewers effect
In face-to-face surveys, interviewers play a key role to obtain cooperation from sample
members. Interviewer effects in longitudinal surveys are to some extent similar to those in
cross-sectional surveys. In both cases, for example, interviewers can persuade
respondents of their importance to the survey as a whole reassure respondents on
confidentiality issues and more generally provide more information on the survey at the
doorstep. Some interviewer effects however are specific to longitudinal surveys.
Some evidence suggests that using the same interviewer is preferred by both
respondents and interviewers (Laurie et al, 1999). Hill and Willis (2001, cited by Lynn et
al, 2005) found that in a health study the largest and most significant factor which
predicted response at a future wave was having the same interviewer at each wave.
Interviewer continuity was associated with around 6 per cent increase in response rates.
Some surveys, such as the BHPS, assign, where possible, the same interviewer to the
same household at each wave of the survey in an attempt to reduce attrition. Lynn et al
(2005) however point out how most studies that have looked into interviewers’ continuity
effects are non-experimental and in consequence confound interviewer stability with area
effects. Campanelli and O’Muircheartaigh (1999 and 2002) found that interviewer effects
disappear once area effects are controlled. Lynn et al (2005) conclude that actually little
evidence exists that interviewers’ stability affects response rates and that further research
is needed on this issue. They also point out that although interviewer continuity may
improve attrition, it may also have a negative impact on data quality. For example, Uhrig
and Lynn (2009) found that interviewer familiarity may increase social desirability bias.
Interviewer’s experience is also known to have an important impact on response. Watson
and Wooden (2009) found the age and/or experience of the interviewers had an effect on
attrition. Longitudinal surveys can make use of experienced interviewers to target
households who were refusals in previous waves or that they have higher probability to
drop out of the survey. Indeed this method proved successful in the NLSCY (Baribeau at
al, 2007), resulting in higher response rates.
3.3.4 Responsive design
Responsive designs are being considered by some survey organisations to reduce
attrition bias in longitudinal surveys. Responsive design refers to the ability to monitor
continually the streams of process data and survey data thus creating the opportunity to
alter the design during the course of data collection to improve survey cost efficiency and
to achieve more precise, less biased estimates (Groves & Heeringa, 2006). By
continuously monitoring the composition of the respondent group during fieldwork, under-
represented population groups may be targeted to improve response. This may
eventually lead to improvements in data quality by ensuring sample representativity. In
Canada, the SLID is being currently redesigned, with an aim to introduce a responsive
design element for the 2010 data collection.
16
References
Abreu, D.A and Winters, F (1999) Using Monetary Incentives to Reduce Attrition in the
Survey of Income and Program Participation. US Census Bureau.
Allen, M., Ambrose D., and Atkinson, p. (1997) Measuring Refusal Rates. Canadian Journal
of Marketing Research, 16, pp 31-42.
The American Association for Public Opinion Research (AAPOR) (2008), Standard
Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 5th
edition.
Lenexa, Kansas: AAPOR.
Apodaca, R., Lea, S. and Edwards, B. (1998) The Effect of Longitudinal Burden on
Survey Participation. 1998 Proceedings of Survey Research Methods Section of the
American Statistical Association, pp 906-910.
Bale, R.N., Arnouldussen, B.H, Quittner, A.M. (1984) Follow-up Difficulty with Substance
Abusers: Predictions of Time to Locate and Relationship to Outcome. The International
Journal of Addictions, 19, pp 885-902.
Baribeau, B, Wedselsoft, C, and Franklin, S (2007) Battling Attrition in the National
Longitudinal Survey of Children and Youth. SSC Annual Meeting, June 2007
Behr, A., Bellgardt, E., and Rendtel, U. (2005). Extent and Determinants of Panel Attrition in
the European Community Household Panel. European Sociological Review, 21, pp 489-512
Branden, L., Gritz, R.M. and Pergamit, M.R. (1995) The Effect of Interview Length on Attrition
in the National Longitudinal Survey of Youth. No. NLS 95-28, US Department of Labour
Burgess, R.D (1989) Major Issues and Implications of Tracing Survey Respondents. In
Kasprzyk, D, Duncan G, Kalton G and Singh M.P (Eds) Panel Surveys (pp.52-73). New
York:Wiley
Burkham, D.T. and Lee, V.E. (1998). Effects of Monotone and Non-Monotone Attrition on
Parameter Estimates in Regression Models with Educational Data. Journal of Human
Resources, 33, pp 555-574
Burton, J, Laurie, H and Lynn, P (2006) The Long-term Effectiveness of Refusal Conversion
Procedures on Longitudinal Surveys. J. R. Statistical Society Association 169, Part 2, pp 459-
478.
Calderwood, L. (2009) Keeping in Touch with Mobile Families in the UK Millenium Cohort
Study. Statistics Canada 25th
International Symposium on Methodological Issues.
Longitudinal Surveys: from Design to Analysis, Ottawa, 2009.
Campanelli, P. and O’Muircheartaigh, C. (1999) Interviewers, Interviewer Continuity,
and Panel Survey Non-Response. Quality & Quantity 33(1), pp 59-76.
Campanelli, P. and O’Muircheartaigh, C. (2002) The Importance of Experimental
Control in Testing the Impact of Interviewer Continuity on Panel Survey
Non-Response. Quality & Quantity 36(2), pp 129-144.
17
Cheesbrough, S. (1993) Characteristics of Non-Responding Households in the Family
Expenditure Survey. Survey Methodology Bulletin, 33, pp 12-18
Cheshire H. and Hussey, D. (2009) Factors associated with refusals in the English
Longitudinal Study of Ageing. Statistics Canada 25th
International Symposium on
Methodological Issues. Longitudinal Surveys: from Design to Analysis, Ottawa, 2009
Church, A.H (1993). Estimating the Effects of Incentives on Mail Survey Response Rates: A
Meta-Analysis. Public Opinions Quarterly, 57, pp 62-79
Couper, M.P. (1991) Modelling Survey Participation at the Interviewer Level in 1991.
Proceedings of the Survey Research Methods Section of the American
Statistical Association, pp 98-107.
Couper, M.P, Ryu, E and Marans, R.W (2006). Survey Incentives: Cash vs In-kind; Face-to-
Face vs Mail; Response Rate vs Non-Response Error. International Journal of Public
Opinions Research, 18, pp 89-106.
Couper, M and Ofstedal, M. (2009). Keeping in Contact with Mobile Sample Members. In
Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley.
Craig, R.J (1979) Locating Drug Addicts Who Have Dropped out of Treatment. Hospital and
Community Psychiatry, 30, pp 402-404.
De Graff R., V.Bijl R., Smit F., Ravelli A. and Vollebergh, W. A.M (2000). Psychiatric and
Siciodemographic Predictors of Attrition in a Longitudinal Study: The Netherlands Mental
Health Survey and Incidence Study (NEMESIS), 2000, pp 1039-1045.
Eurostat (2004) Technical document on intermediate and final quality reports. Working Group
on Statistics on Income and Living Conditions (EU-SILC), 29-30 March 2004. Eurostat.
Luxembourg.
Fitzgerald, J., Gottschalk, P. and Moffitt, R. (1998). An Analysis of Sample Attrition in Panel
Data: the Michigan Panel Study of Income Dynamics. Journal of Human Resources, 33, pp
251-299.
Foster, K. (1998). Evaluating Non-Response on Household Surveys. GSS Methodology
Series No. 8, London: Government Statistics Service.
Foster, K. and Bushnell, D. (1994) Non-Response Bias on Government Surveys in Great
Britain. The 5th
International Workshop on Household Non-Response, Ottawa, 1994.
Fumagalli, L., Laurie, H. Lynn, P (2009). Methods to Reduce Attrition in Longitudinal Surveys:
An Experiment. European Survey Research Association Conference. Warsaw, 2009.
Goyder, J. (1987) The Silent Minority – Non-Respondents on Sample Surveys. Cambridge:
Polity Press.
Gray, R., Campanelli, P., Deepchand, K. and Prescott-Clarke P. (1996) Exploring Survey
Non-Response: The Effect of Attrition on a Follow-up of the 1984-85 Health and Life Style
Survey. The Statistician, 45, pp 163-183.
Groves, R.M and Couper, M.P (1998) Non-Response in Household Interview Surveys. New
York: John Wiley and Sons Ltd.
Groves, R.M and Hansen, S.E (1996). Survey Design Features to Maximise Respondent
Retention in Longitudinal Surveys. Unpublished report to the National Centre for Health
Statistics, University of Michigan, Ann Arbour, MI.
Groves R. M.and Heeringa, S. (2006) Responsive Design for Household Surveys: Tools for
Actively Controlling Survey Errors and Costs. Journal of the Royal Statistics Society Series A:
Statistics in Society, 169, pp 439-257 Part 3.
18
Groves, R.M., Singer, E. and Corning A. (2000). Leverage-Saliency Theory of Survey
Participation. Public Opinion Quarterly, 64, pp 299-308.
Hawkes D, Plewis, I (2006). Modelling Non-Response in the National Child Development
Study. Journal of Royal Statistics Society A, 169, Part 3, pp 479-491.
Hawkins, D.F. (1975). Estimation of Non-Response Bias. Sociological Methods and
Research, 3, pp 461-488.
Hidiroglou, M.A., Drew, J.D., and Gray, G.B. (1993). A Framework for Measuring and
Reducing Non-Response in Surveys. Survey Methodology, 19, pp 81-94.
Hill, D.H and Willis, R.J. (2001) Reducing Panel Attrition: A Search for Effective Policy
Instruments. Journal of Human Resources, 36, pp 416-438.
Iyer, R. (1984) NCDS Fourth follow-up 1981: Analysis of Response. NCDS4 Working Paper,
no. 25, London: National Children’s Bureau.
Kasse, M., (1999) Quality Criteria for Survey Research Berlin: Akademie Verlag.
Kulka, R.A. (1994). The Use of Incentives to Survey ‘Hard-to-Reach’ Respondents: A Brief
Overview of Empirical Research and current practice. Paper presented at the COPAFS
seminar on New Directions in Statistical Methodology, Bethesda, MD.
James J.M, Bolstein R. (1990) The effect of monetary incentives and follow-up mailings on
the response rate and response quality in mail surveys. Public Opinion Quarterly, 54, pp 346-
361.
James, T.L. (1997). Results of the Wave 1 Incentive Experiment in the 1996 Survey of
Income and Program participation. 1997 Proceedings of the Survey Research Methods
Section of the American Statistical Association (pp 834-839). Washington, DC: American
Statistical Association.
Jones, A.M., Koolman, X. and Rice, N. (2006) Health-Related Non-Response in the British
Household Panel Survey and European Community Household Panel: Using Inverse-
Probability-Weighted Estimators in Non-Linear Models. Journal of the Royal Statistical
Society Series A, 179(3), pp 543-569.
Laurie, H and Lynn, P (2009). The Use of Respondent Incentives on Longitudinal Surveys. In
Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley.
Laurie, H, Smith R and Scott, L (1999) Strategies for Reducing Non-Response in a
Longitudinal Panel Survey. Journal of Official Statistics, 15:2, pp269-2
Lengacher, J.E, Sullivan, C.M, Couper M.P and Groves, R.M (1995) Once Reluctant, Always
Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal
Survey. Survey Research Centre, University of Michigan.
Lepkowski ,J and Couper, M (2002) Non-Response in the Second Wave of Longitudinal
Household Surveys. In Groves R. Dillman D. Eltinge J. Little R.(2002). Survey Non Response.
Wiley Series in Survey Methodology.
Lessof, C.(2009) Ethical issues in Longitudinal Surveys. In Lynn, P (eds) (2009) Methodology
of Longitudinal Surveys Methodology of Longitudinal Surveys. West Sussex: Wiley.
Lillard, L.A. and Panis, C.W.A. (1998) Panel Attrition from the Panel Study of Income
Dynamics. Journal of Political Economy, 94(3), pp 489-506.
Lynn, P (2005) Outcome Categories and Definitions of response Rates for Panel Surveys and
Other Surveys involving Multiple Data Collection Events from the Same Units. Unpublished
manuscript. Colchester: University of Essex.
19
Lynn, P. (2006) Editorial: Attrition and Non-Response. Journal of the Royal Statistics Society
A 169, Part 3, pp 393-394.
Lynn, P., Beerten R., Laiho J., Martin J. (2003). Towards Standardisation of Survey Outcome
Categories and Response Rate Calculations. Research in Official Statistics, edition 1:
vol:2002, pp 61-84.
Lynn P, Buck N, Burton J, Jackle A, Laurie H (2005). A Review of Methodological Research
Pertinent to Longitudinal Survey Design and Data Collection. ISER. Working Paper 2005-29.
Colchester: University of Essex.
Lynn P., and Clarke, P. (2002) Separating Refusal Bias and Non-Contact Bias: Evidence
from UK National Surveys. Journal of the Royal Statistical Society Series D. The Statistician.
51(3), pp 319-333.
Lynn, P, Clarke P, Martin J and Sturgis P (2002). The Effects of Extended Interviewer Efforts
on Non-Response Bias. In R.M. Groves, D.A Dilman, J.L Elting and R.J.A Little (eds) (2002).
Survey Non-Response. Chichester :Wiley
McAllister, R, Goe, S and Edgar, B. (1973) Tracking Respondents in Longitudinal Surveys:
Some Prelimary Considerations. The Public Opinion Quarterly, 47:3, pp 413-416
McGonagle, K., Couper, M. Schoeni, R. (2009). Maintaining Contact with PSID Families
between Waves: An Experimental Test of a New Strategy, Statistics Canada 25th
International
Symposium on Methodological Issues. Longitudinal Surveys: from Design to Analysis,
Ottawa, 2009.
Michaud, S., Webber, M. (1994) Measuring Non-Response in a Longitudinal Survey: The
Experience of the Survey of Labour and Income Dynamics. Fifth International Workshop on
Household Survey Non-Response, Ottawa, 1994.
Moon, N, Rose, N and Steel, N (2005) How Could They Ever, Ever Persuade You? Are Some
Refusals Easier to Convert Than Others? AAPOR, ASA Section on Survey Research
Methods.
Nathan, G. (1999) A Review of Sample Attrition and Representativeness in Three
Longitudinal Surveys (The British Household Panel Survey, the 1970 British Cohort
Study and The National Child Development Study) Government Statistical Service,
Methodology Series, No. 13., London: GSS.
Nicoletti, C. and Buck, N. (2004) Explaining Interviewee Contact and Co-operation in the
Birtish and German Household Panels. In M. Ehling and U. Rendtel (eds) (2004),
Harmonisation of Panel Surveys and Data Quality. (pp 143-166). Wiesbaden: Statistisches
Bundesamt.
Nicoletti, C. and Peracchi, F. (2002) A Cross-Country Comparison of Survey Non-
Participation in the ECHP. ISER Working Papers, No. 2002-32, Colchester: University of
Essex.
Nicoletti, C. and Peracchi, F. (2005) Survey Response and Survey Characteristics: Microlevel
Evidence from the European Community Household Panel. Journal of the Royal Statistical
Society Series A, 168(4), pp 763:781.
Plewis, I. Ketende, S. Joshi, H. Hughes, G. (2008) The Contribution of Residential Mobility to
Sample Loss in a Birth Cohort Study: Evidence from the First Two Waves of the UK
Millennium Cohort Study. Journal of Official Statistics, Vol. 24, No3, 2008, pp 365-385
Ribisl, Walton, Mowbray, Luke and Davidson (1996). Minimising Participant Attrition.
Evaluation and Program Planning, 19:1, pp.1-25
20
Rodgers, W. (2002). Size of Incentive Effects in a Longitudinal Study. Presented at the 2002
American Association for Public Research conference, mimeo, Survey Research Centre,
University of Michigan, Ann Arbor.
Scholes S., Medina, J., Cheshire, H., Cox, K., Hacker, E., Lessof, C. (2009). Living in the 21st
Century: Older People in England: The 2006 English Longitudinal Study of Ageing. Technical
report, Natcen, 2009.
Shettle, C and Mooney, G (1999). Monetary Incentives in Government Surveys. Journal of
Official Statistics 15, pp 231-250.
Singer, E (2002). The Use of Incentives to Reduce Non-Response in Household Surveys. In
R.M. Groves, D.A Dilman, J.L Elting and R.J.A Little (eds) (2002). Survey Non-Response.
Chichester:Wiley.
Singer, E, Van Hoewyk, J and Gebler, N (1999). The Effect of Incentives on Response Rates
in Interviewer Mediated Surveys. Journal of Official Statistics, 15, pp 217-230.
Singer, E., Van Hoewyk, J. and Maher, P. (1998) Does the Payment of Incentives Create
Expectation Effects? Public Opinion Quarterly, 62, pp 152-164.
Singer, E., Van Hoewyk, J. and Maher, P. (2000). Experiments with Incentives in Telephone
Surveys. Public Opinion Quarterly, 64, pp 171-188
Smith, T (2002), Developing Non-Response Standards. In Groves R., Dillman, D., Eltinge J.,
Little. R. (eds) (2002) Survey Non Response. Chichester:Wiley.
Stoop, I (2004) Surveying Non-Respondents. Field Methods, 16, pp 23-54.
Stoop, I.A. L. (2005) The Hunt for the Last Respondent, The Hague, Netherlands: Social and
Cultural Planning Office.
Uhrig, S (2008). The Nature and Causes of Attrition in the British Household Panel Survey.
ISER Working Paper Series. No.2008-5.
Warriner, K.; Goyder, J.; Gjertsen, H.; Hohner, P.; and McSpurren, K. (1996). Charities, No;
Lotteries, No; Cash, Yes. Public Opinion Quarterly, 60, pp 542-562.
Watson, D. (2003) Sample Attrition between Waves 1 and 5 in the European Community
Household Panel. European Sociological Review, 19(4) pp 361-378.
Watson, N and Wooden, M. (2004) Sample Attrition in the HILDA Survey. Australian Journal
of Labour Economics, Vol. 7, No 2, pp 293-308.
Watson, N and Wooden, M (2004) Wave 2 Survey Methodology. HILDA Project Technical
Paper Series, No. 1/04.
Watson, N and Wooden, M (2009) Identifying Factors Affecting Longitudinal Survey
Response. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex:
Wiley.
Zabel, J. E. (1998). An Analysis of Attrition in the Panel Study of Income Dynamics and the
Survey of Income and Program Participation with an Application to a Model of Labour Market
Behaviour. Journal of Human Resource, 33, pp 479-506.
21

More Related Content

What's hot

PhilipGuillet.Critique1.PC673
PhilipGuillet.Critique1.PC673PhilipGuillet.Critique1.PC673
PhilipGuillet.Critique1.PC673
Philip Guillet
 
Article Review-Writing Sample
Article Review-Writing SampleArticle Review-Writing Sample
Article Review-Writing Sample
Dawn Drake, Ph.D.
 
Cross sectional research desighn
Cross  sectional research desighnCross  sectional research desighn
Cross sectional research desighn
Ahmad Mashhood
 
Judgment of causality2 kaleab
Judgment of causality2  kaleabJudgment of causality2  kaleab
Judgment of causality2 kaleab
kaleabtegegne
 
09 selection bias
09 selection bias09 selection bias
09 selection bias
Abdiwali Abdullahi Abdiwali
 
Mistakes in comparative research method
Mistakes in comparative research methodMistakes in comparative research method
Mistakes in comparative research method
iouellet
 
Research Part II
Research Part IIResearch Part II
Nested case control,
Nested case control,Nested case control,
Nested case control,
shefali jain
 
Vranceanu2013
Vranceanu2013Vranceanu2013
Vranceanu2013
MihaelaIftode1
 
Observational study design
Observational study designObservational study design
Observational study design
swati864887
 
04 chapter4
04 chapter404 chapter4
04 chapter4
Ubah Esse
 
Sources of bias and error
Sources of bias and error Sources of bias and error
Sources of bias and error
IAU Dent
 
Bias in health research
Bias in health researchBias in health research
Bias in health research
Venkitachalam R
 
Group twob ppt_presentation_1
Group twob ppt_presentation_1Group twob ppt_presentation_1
Group twob ppt_presentation_1
grouptwob
 
How to assess evidence
How to assess evidenceHow to assess evidence
How to assess evidence
Ramy Ishaq
 
Patient Centered Care | Unit 7a Lecture
Patient Centered Care | Unit 7a LecturePatient Centered Care | Unit 7a Lecture
Patient Centered Care | Unit 7a Lecture
CMDLMS
 
Error, bias and confounding
Error, bias and confoundingError, bias and confounding
Error, bias and confounding
Mitasha Singh
 
Bias and validity
Bias and validityBias and validity
Bias and validity
mshihatasite
 
pengajian malaysia
pengajian malaysiapengajian malaysia
pengajian malaysia
Gowri Owshika
 
Basics of Research and Bias
Basics of Research and BiasBasics of Research and Bias
Basics of Research and Bias
Brian Wells, MD, MS, MPH
 

What's hot (20)

PhilipGuillet.Critique1.PC673
PhilipGuillet.Critique1.PC673PhilipGuillet.Critique1.PC673
PhilipGuillet.Critique1.PC673
 
Article Review-Writing Sample
Article Review-Writing SampleArticle Review-Writing Sample
Article Review-Writing Sample
 
Cross sectional research desighn
Cross  sectional research desighnCross  sectional research desighn
Cross sectional research desighn
 
Judgment of causality2 kaleab
Judgment of causality2  kaleabJudgment of causality2  kaleab
Judgment of causality2 kaleab
 
09 selection bias
09 selection bias09 selection bias
09 selection bias
 
Mistakes in comparative research method
Mistakes in comparative research methodMistakes in comparative research method
Mistakes in comparative research method
 
Research Part II
Research Part IIResearch Part II
Research Part II
 
Nested case control,
Nested case control,Nested case control,
Nested case control,
 
Vranceanu2013
Vranceanu2013Vranceanu2013
Vranceanu2013
 
Observational study design
Observational study designObservational study design
Observational study design
 
04 chapter4
04 chapter404 chapter4
04 chapter4
 
Sources of bias and error
Sources of bias and error Sources of bias and error
Sources of bias and error
 
Bias in health research
Bias in health researchBias in health research
Bias in health research
 
Group twob ppt_presentation_1
Group twob ppt_presentation_1Group twob ppt_presentation_1
Group twob ppt_presentation_1
 
How to assess evidence
How to assess evidenceHow to assess evidence
How to assess evidence
 
Patient Centered Care | Unit 7a Lecture
Patient Centered Care | Unit 7a LecturePatient Centered Care | Unit 7a Lecture
Patient Centered Care | Unit 7a Lecture
 
Error, bias and confounding
Error, bias and confoundingError, bias and confounding
Error, bias and confounding
 
Bias and validity
Bias and validityBias and validity
Bias and validity
 
pengajian malaysia
pengajian malaysiapengajian malaysia
pengajian malaysia
 
Basics of Research and Bias
Basics of Research and BiasBasics of Research and Bias
Basics of Research and Bias
 

Viewers also liked

tus-poster
tus-postertus-poster
tus-poster
Kathryn Ashton
 
CSP_Going_Private_F18_1010
CSP_Going_Private_F18_1010CSP_Going_Private_F18_1010
CSP_Going_Private_F18_1010
Gail Fleenor
 
RESUME_RAGHUNATH
RESUME_RAGHUNATH RESUME_RAGHUNATH
RESUME_RAGHUNATH
Raghunath Jana
 
Nandan CV (1)
Nandan CV (1)Nandan CV (1)
Nandan CV (1)
Nandan Sharma
 
CV of Vusi J Jiyane
CV of Vusi J JiyaneCV of Vusi J Jiyane
CV of Vusi J JiyaneVusi Jiyane
 
Evaluation of final media product
Evaluation of final media productEvaluation of final media product
Evaluation of final media product
shbano2630
 
Cloudy with a chance of devops (devopsdays Philadelphia)
Cloudy with a chance of devops (devopsdays Philadelphia)Cloudy with a chance of devops (devopsdays Philadelphia)
Cloudy with a chance of devops (devopsdays Philadelphia)
bridgetkromhout
 
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket EliminationKaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
Kaseya
 
Can chinh gclk
Can chinh gclkCan chinh gclk
Can chinh gclk
justfortrade
 

Viewers also liked (9)

tus-poster
tus-postertus-poster
tus-poster
 
CSP_Going_Private_F18_1010
CSP_Going_Private_F18_1010CSP_Going_Private_F18_1010
CSP_Going_Private_F18_1010
 
RESUME_RAGHUNATH
RESUME_RAGHUNATH RESUME_RAGHUNATH
RESUME_RAGHUNATH
 
Nandan CV (1)
Nandan CV (1)Nandan CV (1)
Nandan CV (1)
 
CV of Vusi J Jiyane
CV of Vusi J JiyaneCV of Vusi J Jiyane
CV of Vusi J Jiyane
 
Evaluation of final media product
Evaluation of final media productEvaluation of final media product
Evaluation of final media product
 
Cloudy with a chance of devops (devopsdays Philadelphia)
Cloudy with a chance of devops (devopsdays Philadelphia)Cloudy with a chance of devops (devopsdays Philadelphia)
Cloudy with a chance of devops (devopsdays Philadelphia)
 
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket EliminationKaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
Kaseya Connect 2013: Automatic Remediation & Superfluous Ticket Elimination
 
Can chinh gclk
Can chinh gclkCan chinh gclk
Can chinh gclk
 

Similar to LongitudinalAttrtionLitReviewNov09

719 747
719 747719 747
719 747
yasserflaifal
 
Research methodology
 Research methodology Research methodology
Research methodology
phaniyasaswinikanaka
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
emmoss21
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
emmoss21
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
emmoss21
 
Cross sectional study
Cross sectional studyCross sectional study
Cross sectional study
HarshithaShetty24
 
48  january 2  vol 27 no 18  2013  © NURSING STANDARD RC.docx
48  january 2  vol 27 no 18  2013  © NURSING STANDARD  RC.docx48  january 2  vol 27 no 18  2013  © NURSING STANDARD  RC.docx
48  january 2  vol 27 no 18  2013  © NURSING STANDARD RC.docx
blondellchancy
 
Slide presentation
Slide presentationSlide presentation
Slide presentation
zuraiberahim
 
Slide presentation
Slide presentationSlide presentation
Slide presentation
Neza Mohd
 
Correlational n survey research
Correlational n survey researchCorrelational n survey research
Correlational n survey research
Noor Hasmida
 
6..Study designs in descritive epidemiology DR.SOMANATH.ppt
6..Study designs in descritive  epidemiology DR.SOMANATH.ppt6..Study designs in descritive  epidemiology DR.SOMANATH.ppt
6..Study designs in descritive epidemiology DR.SOMANATH.ppt
DentalYoutube
 
PR 2, WEEK 2.pptx
PR 2, WEEK 2.pptxPR 2, WEEK 2.pptx
PR 2, WEEK 2.pptx
allandavecastrosibba
 
Threats to Internal and External Validity
Threats to Internal and External ValidityThreats to Internal and External Validity
Threats to Internal and External Validity
Muhammad Salman Rao
 
Advanced Regression Methods For Single-Case Designs Studying Propranolol In ...
Advanced Regression Methods For Single-Case Designs  Studying Propranolol In ...Advanced Regression Methods For Single-Case Designs  Studying Propranolol In ...
Advanced Regression Methods For Single-Case Designs Studying Propranolol In ...
Stephen Faucher
 
Epidemiological study designs
Epidemiological study designsEpidemiological study designs
Epidemiological study designs
Ismail Qamar
 
The case study approach
The case study approachThe case study approach
The case study approach
BAPPENAS
 
PR 2, WEEK 2.pptx
PR 2, WEEK 2.pptxPR 2, WEEK 2.pptx
PR 2, WEEK 2.pptx
allandavecastrosibba
 
research design
research designresearch design
research design
Rajeshwori
 
Research design
Research designResearch design
Research design
Mayang Colcol
 
Excelsior College PBH 321 Page 1 EXPERI MENTAL E.docx
Excelsior College PBH 321     Page 1 EXPERI MENTAL E.docxExcelsior College PBH 321     Page 1 EXPERI MENTAL E.docx
Excelsior College PBH 321 Page 1 EXPERI MENTAL E.docx
gitagrimston
 

Similar to LongitudinalAttrtionLitReviewNov09 (20)

719 747
719 747719 747
719 747
 
Research methodology
 Research methodology Research methodology
Research methodology
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
 
Epi chapter 4
Epi chapter 4Epi chapter 4
Epi chapter 4
 
Cross sectional study
Cross sectional studyCross sectional study
Cross sectional study
 
48  january 2  vol 27 no 18  2013  © NURSING STANDARD RC.docx
48  january 2  vol 27 no 18  2013  © NURSING STANDARD  RC.docx48  january 2  vol 27 no 18  2013  © NURSING STANDARD  RC.docx
48  january 2  vol 27 no 18  2013  © NURSING STANDARD RC.docx
 
Slide presentation
Slide presentationSlide presentation
Slide presentation
 
Slide presentation
Slide presentationSlide presentation
Slide presentation
 
Correlational n survey research
Correlational n survey researchCorrelational n survey research
Correlational n survey research
 
6..Study designs in descritive epidemiology DR.SOMANATH.ppt
6..Study designs in descritive  epidemiology DR.SOMANATH.ppt6..Study designs in descritive  epidemiology DR.SOMANATH.ppt
6..Study designs in descritive epidemiology DR.SOMANATH.ppt
 
PR 2, WEEK 2.pptx
PR 2, WEEK 2.pptxPR 2, WEEK 2.pptx
PR 2, WEEK 2.pptx
 
Threats to Internal and External Validity
Threats to Internal and External ValidityThreats to Internal and External Validity
Threats to Internal and External Validity
 
Advanced Regression Methods For Single-Case Designs Studying Propranolol In ...
Advanced Regression Methods For Single-Case Designs  Studying Propranolol In ...Advanced Regression Methods For Single-Case Designs  Studying Propranolol In ...
Advanced Regression Methods For Single-Case Designs Studying Propranolol In ...
 
Epidemiological study designs
Epidemiological study designsEpidemiological study designs
Epidemiological study designs
 
The case study approach
The case study approachThe case study approach
The case study approach
 
PR 2, WEEK 2.pptx
PR 2, WEEK 2.pptxPR 2, WEEK 2.pptx
PR 2, WEEK 2.pptx
 
research design
research designresearch design
research design
 
Research design
Research designResearch design
Research design
 
Excelsior College PBH 321 Page 1 EXPERI MENTAL E.docx
Excelsior College PBH 321     Page 1 EXPERI MENTAL E.docxExcelsior College PBH 321     Page 1 EXPERI MENTAL E.docx
Excelsior College PBH 321 Page 1 EXPERI MENTAL E.docx
 

LongitudinalAttrtionLitReviewNov09

  • 1. Attrition on Longitudinal Surveys – Literature Review Social Survey Division, ONS November 2009 Introduction This paper presents a review of literature related to attrition on longitudinal surveys. Research into attrition encompasses a very wide range of topics, including methods to measure attrition, attrition bias measures, methods to reduce attrition and methods to correct for attrition. In this paper we have focussed only on a few of these topics. In particular, this paper does not examine attrition bias on survey estimates and methodologies to correct for it, i.e. weighting or imputation. The paper is organised into three sections: Section 1 provides an overview of the attrition problem, how to measure attrition and the theoretical framework of non-response in longitudinal surveys (pages 2-5). Section 2 focuses on respondents’ and survey’s characteristics that have been found associated with attrition (pages 6-10). Section 3 provides a review of the most commonly used methods to reduce attrition adopted by survey organisations before or during fieldwork (pages 10-16). A summary of the main findings is also included. Summary 1. Attrition due to non-response is a major issue of concern to researchers not only because it may decrease the power of longitudinal analysis but also, and mainly because, it may be selective, thus impacting on the generalisability of results to the target population (attrition bias). 2. There is limited methodological research which examines standard definitions of attrition/longitudinal response rates, in particular for households. Significant exceptions are Lynn (2005) and Ribisl et al (1996) who recommend a set of standard response rates to be published for longitudinal surveys. Detailed guidelines to calculate attrition measures are also published by Eurostat (2004). 3. Lepkowski and Couper (2002) provide a framework to explain the longitudinal response process. Longitudinal response can be seen as the result of three conditional processes: locating a respondent; contacting the respondent at a given location; and then obtaining a respondent’s cooperation. Although these three processes have got parallels in cross-sectional surveys, they do present longitudinal- specific issues. 4. A vast body of empirical research has looked into which socio-demographic characteristics are more likely to predict or to be associated with attrition. Attrition is more likely among younger and older respondents, men, single (i.e. never married) people and minority ethnic groups. More mixed evidence exists on the relationship between attrition and employment, education and income. 5. Respondents’ prior experience of the survey plays a key role in predicting attrition. Respondents who have little interest or knowledge of a survey topic are more likely to refuse at later waves than other sample members. Non-response at potentially sensitive questions, such as income, is also a good predictor of attrition at later waves. More experimental research is needed to assess the impact of interview length on attrition. 6. A large and constantly growing array of methods is available in longitudinal surveys to locate respondents. The majority of tracking methods are potentially time-consuming and costly, in particular reactive and interviewer-led tracking techniques. Very little research has looked into the cost-effectiveness of the different tracking methods. 7. Various methods are incorporated into longitudinal survey design with an aim to minimise refusals. These include incentives, refusal conversion techniques and extra 1
  • 2. interviewer efforts. Incentives are one of the most popular methods employed and reduce refusals both in cross-sectional and longitudinal surveys. In longitudinal surveys, more evidence is needed to assess the impact of changes in incentives over time (including introducing and ceasing incentives) and of incentive tailoring strategies. Section 1 Attrition: Definition, Measures and Theory 1.1. Attrition and sample attrition In the context of longitudinal surveys, the term attrition is normally used to refer to the loss of survey participants over time. Attrition may occur for a number of different reasons and Watson and Wooden (2004) classify these in two types. The first type includes reasons related to change in the underlying population, such as deaths, and is often referred to as ‘natural attrition’. This type of attrition is inevitable but from a statistical perspective is less problematic in practice, as it reflects phenomena which occur not only in the study cohort but also in the overall target population. The second type of attrition arises because sample members cannot be contacted or they refuse to continue participation. Attrition due to non-response is usually referred to as "sample attrition" or "panel attrition" (Lynn, 2006). This type is far more problematic and it will be the focus of this literature review. From now on, we will refer to this type of attrition as "sample attrition" or simply "attrition". Lynn (2006) defines sample attrition as the "cumulative effect of non-response over repeated waves or data collection efforts", not including non-response at Wave 1 of a survey as this is before attrition has occurred. This definition implies a monotone process, where sample members change their status from respondent to non-respondent, but not vice versa. In many longitudinal surveys however, attempts are made to contact non- respondents at previous waves. Therefore sample members may return to be respondents at a subsequent wave. Some authors distinguish explicitly between "wave non-respondents" and "attrition cases" (Plewis et al, 2008; Hawkes D and Plewis, 2006). Wave non-respondents are those cases who are interviewed on some occasions in a longitudinal survey, but not on others; attrition cases refers to units who are initially part of the sample but are, sooner or later, lost permanently at follow-up. Some other authors instead use the term sample attrition indistinctly to denote loss of study participants at follow up either permanently or temporarily (Lynn, 2006). In any case, either temporary or permanent, sample attrition is an issue of concern to longitudinal survey researchers for at least two reasons. Firstly, similarly to attrition due to demographic losses, sample attrition reduces the size of the sample available for longitudinal analysis where data from the same respondent for one or more waves is needed. This causes loss of statistical power with longitudinal samples becoming too small to produce robust statistical analysis and panel data estimates losing significance. At high levels, sample attrition may even threaten the viability of continuing a panel (Watson and Wooden, 2009). Secondly, non-response attrition may be selective, in that those who are lost at follow up may be different from those who remain in the sample. Non-random attrition causes great concern as it impacts on the generalisability of results to the entire target population. This problem is often referred to as "attrition bias". Many studies have been carried out to investigate the extent of attrition bias in specific surveys by looking at how characteristics of attriters differ from those of respondents' (Hawkes and Plewis, 2006). We report some of their main findings in Section 2. 1.2 Measures of attrition 1.2.1 Attrition and response rates 2
  • 3. Attrition rates are a typical measure used to report levels of attrition. These are defined as the proportion of respondents who are lost at follow-up. Many surveys however do not report attrition rates directly, but these can be derived from their published response rates. Response rates can be computed and reported in many different ways. As one of the key indicators of survey quality, the importance of developing adequate standards to allow comparison of response level across survey organisations has been long acknowledged (Smith, 2002). The American Association of Public Opinion Research (AAPOR) published, in the late 1990s, recommended standards to define and calculate response rates, currently in its 5th edition (AAPOR, 2008). In the UK, Lynn et al (2001) recommended standards for face to face surveys of households and individuals. Examples of work on development of response standards in other countries can be found in literature by Kasse (1999), Hidiroglou et al (1993) and Allen et al (1997). More limited methodological research has looked specifically at developing standards for calculating longitudinal response rates. Even the AAPOR Standard Definitions manual (AAPOR, 2008) provides only generic guidelines on the calculation of response rates for multi-wave surveys, stating that response rates should be calculated and reported "for each separate component and cumulatively". The most extensive work on defining response rates for longitudinal surveys that was found in the literature is an unpublished paper by Lynn (2005), in which he extends his previous work on response rates standards for cross sectional surveys to longitudinal surveys. Lynn (2005) also argues that no single rate can summarise the overall level of response to a longitudinal survey and recommends instead a number of different response measures to be calculated and published. Lynn (2005) refers to longitudinal surveys as surveys with multiple Data Collection Events (DCEs). In his framework, rates are explicitly defined according to a particular set of DCEs. For each set of DCEs, rates can be defined either unconditionally or conditionally. Unconditional response rates are based on all sample units who were eligible for all of the relevant DCEs while conditional response rates depend upon response to some other set of DCEs, typically one or more prior DCEs. This results in up to ∑= − m i i 1 )12( different response rates that could be reported for a survey with m DCEs. For a survey of 5 waves, that would mean 57 different response rates. Out of all possible response rates, Lynn (2005) recommends that the following are always published: • Complete response rate: Response to every wave/ Eligible at every wave • Wave-specific response rates: Responses to wave k/Eligible at wave k • Wave specific response rates conditional upon response at the previous wave: Response at wave k/Eligible at wave k and respondent at wave k-1 Additionally, if the survey design is such that a new sample enters at each wave, then the sample-specific wave response rates should also be published. Finally, if there are certain combinations of DCEs important for analysis purposes, survey organisations or data providers should also identify these key combinations and publish the relevant response rates. Lynn's framework has been recently adopted in the UK by the National Centre for Social Research for the reporting of response rates for the English Longitudinal Survey of Ageing (Scholes et al, 2009). 3
  • 4. Ribisl et al (1996) also suggested that five types of response rates should be published in panel studies, making an explicit distinction between cooperation and location rates for later waves of a survey. Their recommended rates include: • Baseline response rate: Number of completed baseline interviews/ Number of eligible individuals • Gross follow-up location rate: Number of participants located at follow up/Number of completed baseline interviews • Gross follow-up completion rate: Number of completed follow-up interviews/Number of completed baseline interviews • Eligible participant follow-up completion rate: Number of completed follow-up interview/Number of completed baseline interviews still eligible at follow-up • Cumulative follow-up completion rate: Number of individuals with completed interviews at all follow-up time periods/Number of completed baseline interviews Eurostat (2004) have also devised standard longitudinal response measures for the surveys feeding into its Statistics on Income and Living Condition (EU-SILC) longitudinal component. The following response measures are required by each member state for the second and following waves of the EU-SILC: • Wave response rate: the proportion of eligible sample units at that wave which responded to the survey • Longitudinal follow-up rate: the percentage of units which are passed on to wave k+1 for follow up within units received into wave k from wave k-1, excluding those out of scope or non existent. • Follow-up ratio: number of units passed on from wave k to wave k+1 in comparison to the number of units received for follow-up at wave k from wave k-1 • Achieved sample size ratio: ratio of the number of responding units in wave k to the number of responding units in wave k-1 1.2.2. Household response rates One of the difficulties in calculating longitudinal response rates stems from the difficulties in dealing with a dynamic picture. Survey units change over time, with some units ceasing to exist while new ones are created. This is a particular issue for longitudinal household surveys. Households are more transient units than individuals, as they may change composition over time with original members leaving and/or new individuals joining the original household. The conceptual difficulties surrounding the definition of a longitudinal household have led some surveys to publish only personal longitudinal response rates. This is the case for the Survey of Labour and Income Dynamics in Canada (SILD) (Michaud and Webber, 1994). Other surveys do calculate and publish household and individual level longitudinal rates. That is the case for example the EU-SILC, which provides detailed guidelines on how to produce its standard response measures both at household and person level (Eurostat, 2004). With the exception of Eurostat’s 2004 paper, we could not find any methodological literature looking into the specific issues surrounding the calculation of longitudinal household response rates, including how to link households over time and how split households (i.e. households that are formed when one or more individuals leave their original households) should be taken into account in the calculation of response rates. 1.3. Attrition theory 4
  • 5. The factors which cause non-response in longitudinal surveys are, in many ways, similar to those that operate on standard cross-sectional surveys (Lynn et al, 2005). However, there are also some mechanisms which are specific to longitudinal surveys. In order to illustrate the attrition process, Lepkowski and Couper (2002) extend Groves and Couper’s (1998) general theory of non response to longitudinal surveys. In Lepkowski and Couper’s framework, the process that leads to non-response attrition at a second (or later) wave of a panel survey can be divided into three conditional processes: 1. Location: locating a sample member 2. Contact: contacting the sample member given location; 3. Co-operation: obtaining an interview from the sample member given contact. 1.3.1. Location The failure to locate respondents over waves of a longitudinal survey is often one of the major causes of attrition (Ribisl et al, 1996). The propensity of locating a respondent can be seen as the combination of the propensity of the respondent to move and the propensity to locate a person who has moved (Couper and Ofstedal, 2009). Geographical mobility is an area of research in its own right and is a phenomenon that cannot be manipulated by survey practitioners; therefore this paper will not go into it in much detail. It is worthy to note that longitudinal panel surveys can provide a variety of information to help and predict the likelihood that a respondent will move. For example, Uhrig (2008) found that respondents, who expressed a desire to move, or a lack of attachment to their area, are more likely to become non-contacts at a later wave, as well as respondents who rent their home. Uhrig (2008) suggests that this may be presumably due to re-location and these measures could be used to predict subsequent non- response in panel studies. The process of locating (tracking) respondents who have moved is more of interest to this literature review as survey practitioners can have some control upon it. In order to locate respondents, survey organisations need to ensure that all contact details exist and are up-to-date for each respondent. For example, name, address, telephone number, email addresses (Uhrig, 2008). Laurie et al (1999), in their study of tracking procedures, found that approximately 10 per cent of respondents move within a given year. However, McAllister et al (1973) state that respondents do not disappear unless they are deliberately trying to do so. This highlights that there is considerable promise in tracking respondents if the right methods are used. Groves and Hansen (1996) suggest that ‘with adequate planning, multiple methods and enough information, time, money, skilled staff and perseverance, and so on, 90 per cent to 100 per cent location rates are possible’. We will look at survey tracking efforts more in detail in Section 3. 1.3.2 Contact Once a panel member has been located, the contact process is not very different from cross-sectional surveys (Watson and Wooden, 2009). Contactability will depend on the respondent's patterns of being physically present at the place of contact (normally home), or any physical impediments (i.e. locked or shared entrances to dwelling) and finally on the survey organisation's effort in making contacts (Uhrig, 2008). Additionally, in a longitudinal survey, interviewers have prior knowledge of at home-patterns, information on the best time to call (Lepkowski and Couper, 2002) and awareness of physical barriers, if the respondent has not moved. Therefore the impact of these factors should be smaller than in a cross-sectional survey (Uhrig, 2008), suggesting that non-contact should be a relatively small phenomenon at later waves, given successful location (Lepkowski and Couper, 2002). Non-contact attrition tends to be more a result of failure to locate respondents than of failure to contact them. 1.3.4 Co-operation 5
  • 6. Refusals in longitudinal surveys differ significantly from cross-sectional surveys. After the first wave of a longitudinal survey the sample has already experienced the interview process and is aware of the survey’s topics, its cognitive requirements and its time commitment. Respondents will use this experience as a guide whether to participate or not in future waves (Lynn et al, 2005). By its very nature, a longitudinal survey places a greater burden on respondents and this factor alone can induce sample members to refuse cooperation at the outset. Indeed Apodaca et al (1998) report that the presence of a ‘perceived longitudinal burden’ resulted in a 5 per cent drop in response rates. 'Panel fatigue' (Laurie, Smith and Scott, 1999) is also often present, and over time respondents may feel like they have 'done enough'. Laurie et al (1999) identify two types of refusers on longitudinal surveys: • Wave specific refusers are individuals who refuse to take part for one wave because of circumstantial situations, for example illness or bereavement, but may participate at a successive wave. • Definite withdrawals are refusers who are adamant that they don't wish to take part in the study (any more). Various methods are incorporated into longitudinal survey practice with an aim to minimise refusals. We will look at survey organisation’s efforts to improve co-operation more in detail in Section 3. Section 2 Factors associated with attrition Using Lepkowski and Couper (2002)'s framework, overall survey non-response can be seen as the cumulative effect of failure to locate, failure to contact and refusal to cooperate. These processes may operate independent of one another, but they all contribute to the overall attrition (Nicoletti and Peracchi, 2005). Uhrig (2008) notes how in the literature there is often little differentiation between these processes as a greater focus is put on the general absence of data regardless of the processes generating it. The likelihood of a sample member to be located, contacted and to cooperate in a later wave of a longitudinal survey depends on respondents' personal characteristics, but also on their previous survey experience and on the survey organisation operational efforts. In this section we will look at the first two aspects, while the survey organisation processes will be discussed in Section 3. 2.1. Individual characteristics related to non-response There is a large amount of literature on non-respondents characteristics, often looking separately at characteristics of refusals and non-contacts. This literature has been reviewed extensively by Watson and Wooden (2009), Uhrig (2008) and Lynn et al (2005). We present here their main findings relative to a number of key socio-economic characteristics. 2.1.1 Age Both Lynn et al (2005) and Uhrig (2008) report that a wide body of empirical research has found that elderly and young people are more likely not to be contacted (Cheesbrough,1993; Lillard and Panis, 1998, Foster, 1998; Groves and Couper, 1998; Lynn and Clarke, 2002; Stoop, 2005; Watson, 2003). The elderly also appear more likely to refuse survey cooperation (Hawkins, 1975; Foster and Bushnell, 1994; Groves and Couper, 1998; Lepkowski and Couper, 2002). Some authors suggest that a greater likelihood of situation refusal among the elderly could be due to the increasing chance of 6
  • 7. finding older sample members with health problems (Groves et al, 2000). Indeed, research by Jones et al (2006) found age has no effect on refusals if health is good. Focussing on evidence from longitudinal studies, Watson and Wooden (2009) confirm that attrition is higher amongst the youngest, but that response patterns are more mixed for the elderly, with some studies finding rising attrition propensities in old age (e.g. Fitzgerald et al, 1998), others reporting the reverse (e.g. Hill and Willis, 2001), and others again reporting no clear evidence in either direction (Behr et al, 2005; Nicoletti and Peracchi, 2005). 2.1.2 Household structure A large body of research highlights that single people (never married) are more likely to not be contacted (e.g. Gray et al, 1996) and refuse participation in surveys (Goyder, 1987; Lillard and Panis, 1998; Nicoletti and Peracchi, 2002). Households with children and married couples appear less likely to be lost at follow-up (Fitzgerald et al, 1998; Lillard and Panis, 1998; Nicoletti and Peracchi, 2005; Zabel, 1998). Jones et al (2006), however, finds no effect of marital status on non-response 2.1.3 Gender Within panel surveys, men appear to attrit more frequently than women (Lepkowski and Couper, 2002; Hawkes and Plewis, 2006; Behr et al, 2005). Lynn reports that men are less likely to be contacted than women (Goyder, 1987; Foster, 1998; Lepkowski and Couper, 2002). Research by Watson (2003) on the European Community Household Panel found that once education, employment, child care responsibilities and other factors are controlled, the gender effect disappears. 2.1.4 Labour market activity, income and education Mixed evidence is found in the literature regarding the relationship between attrition and employment, income and education. Employment outside the home, either as an employee or self-employed, has been found to be associated with no-contact generally and in longitudinal surveys in particular (Foster and Bushnell, 1994; Goyder 1987; Groves and Couper 1998; Lynn and Clarke, 2002; Nicoletti and Peracchi, 2005). Hawkes and Plewis (2006) and Branden et al (1995) also found that job instability appears related to non-contactability. However, Fitzgerald et al (1998), Zabel (1998) and Jones et al (2006) all found no significant relationship between employment status and attrition. Gray et al (1996) actually found attrition rates to be lowest among the employed. Using data from the British Household Panel Survey (BHPS) and the German Socio-Economic Panel, Nicoletti and Buck (2004) found that the economically inactive had higher cooperation rates in one sample, but significantly lower contact probabilities in the other. Uhrig (2008) reports that respondents who are unemployed are more likely to be non-contacts and speculates that this may be due to individuals moving in order to find employment. Lynn et al (2005) reports that survey refusal appears more likely amongst those with low incomes (Fitgerald et al, 1998; Nathan, 1999), while households with higher incomes appear more difficult to be contacted (Foster and Bushnell, 1994; Lynn and Clarke, 2002). Uhrig (2008) also finds that low-income has a slight positive effect on contactability and a negative effect on cooperation. A study from Brendan et al (1995, reported by Uhrig, 2008) finds that household financial instability of any type, either positive or negative, is associated with non-response. Uhrig (2008) speculates that large shifts in earnings may signal some other important structural change in the household (e.g. geographical move, change in employment). Analysis of the European Community Household Panel by Watson (2003) also found that a relationship between income and attrition exists but it differs across countries, with higher attrition being associated with lower income in northern European countries but with higher income in southern European countries. Finally, Watson and Wooden (2009) reports a number of studies that have found no 7
  • 8. evidence of any significant relationship between income and attrition (Gray et al, 1996; Zabel, 1998; Lepkowski and Couper, 2002; Nicoletti and Peracchi, 2005) and concludes that income is probably relatively unimportant for attrition. As for education, less educated individuals appear more likely to attrit in panel studies (Jones et al, 2006; Watson, 2003; Behr et al, 2005; Lillard and Panis, 1998), but the magnitude of the relationship is arguably small (Watson and Wooden, 2009). Watson (2003) also finds the reverse relationship in Southern Europe, where less education is associated with lower rates of attrition. 2.1.5 Ethnicity and language Studies have shown that ethnic minority groups are more likely to be non-respondents (Zabel, 1998; Burkam and Lee, 1998). While Lynn et all (2005) reports that people from ethnic minorities are more likely to be refusals (Foster, 1998; Fitzgerald et al, 1998; Lyer, 1984; Lynn and Clarke, 2002; Nathan, 1999), Watson (2003) reports two studies from Gray et al (1996) and Lepkowski and Couper (2002) which find that non-response among ethnic groups was mainly due to lower rates of contact and not higher rates of refusals. Lower contact rates for non white respondents are also reported by Uhrig (2008) and Calderwood (2009). Watson and Wooden (2009) point out that limited research has looked at the relationship between language-speaking ability and attrition, although cross-sectional surveys in English-speaking countries have almost always reported lower response rates for non English speakers. Uhrig (2008) found that respondents experiencing difficulty with the English language were associated with future non-contact. De Graaf et al (2000), in the Netherlands Mental Health Survey and Incidence Study also found that respondents not born in the Netherlands were more likely not to be located. 2.1.6 Other findings People living in urban areas, for example London, are not only more likely to be non- contacts, but also to be refusals (Goyder, 1987; Couper, 1991; Foster, 1998). Watson and Wooden (2009) also reports a number of studies which have confirmed the expectations that people living in urban areas are both less available and harder to reach (Gray et al, 1996; Burkam and Lee, 1998; Zabel, 1998), with only Lepkowski and Couper (2002) reporting contrary evidence. People's attachment to their housing unit, as well as to their surrounding neighbourhood can be indicative of likely future geographical mobility, which in itself is a strong predictor of contactability (Uhrig, 2008). Research has showed how renters are more likely to attrite than home owners (Zabel, 1998; Lepkowski and Couper, 2002; Watson, 2003). Lepkowski and Couper (2002) found that indicators of community attachment and social integration, including frequency of visits to friends, engagement in community affairs and interest in politics, appear to be positively associated with survey cooperation. Couper and Ofstedal (2009) also note that a sample consisting of a higher rate of socially isolated individuals may be more difficult to locate. Few studies of attrition have taken into account a measure of respondent's health when studying attrition (Uhrig 2008). Exceptions are Lepkowski and Couper (2002),Jones et al (2006) and Couper and Ofstedal (2009) who found that those who reported worse health or who were less satisfied with their health were less likely to respond at a later wave. Uhrig himself, however, does not find significant evidence in the BHPS to confirm a relationship between attrition and health. 2.2. Survey experience 8
  • 9. Even more than in cross-sectional surveys, in longitudinal studies it is essential to ensure the survey experience is as pleasant as possible, as this experience will have an impact not only on cooperation at a particular point in time, but also at later waves (Laurie and Lynn 2009; Rodgers 2002). Indeed, Watson and Wooden (2009) suggests that "the respondent's perception of the interview experience is possibly the most important influence on cooperation in future survey waves". Research into this area by Hill & Willis (2001, cited by Lynn et al. 2005) found that around 75 per cent of respondents who didn’t enjoy their experience were still participating at wave 3, compared to 90 per cent who did. 2.2.1 Salience and sensitivity Respondents who have little interest or knowledge of a survey’s topic are more likely to refuse at later waves than other sample members. Questions that are considered sensitive by respondents may also promote attrition at later waves (Lepkowski and Couper, 2002; Brenden et al, 1995). Both salience and sensitivity of a questionnaire can be reflected in the number of item non-response. Watson and Wooden (2009) found that the number of questions not- answered at previous waves is a good indication of attrition for future waves. This is particularly true for non-response at potentially sensitive questions. Missing data on sensitive questions may be indicative of a negative interview experience, but it also shows how committed a respondent is to participate in the study. Research on sensitive questions has focussed particularly on income. Non response at income questions has been proved to be an important predictor of non response at subsequent-waves (Brenden et al, 1995; Uhrig, 2008). Uhrig (2008) also finds that item-non response at political preference questions also suggest that politics seems to be a sensitive topic that promotes subsequent survey refusal. 2.2.1 Interview length The number of questions and the length of the questionnaire can have an impact on individuals' propensity of cooperating at further waves of a longitudinal survey (Uhrig, 2008). It is expected that a longer interview places a greater burden on respondents, thus reducing their willingness to cooperate at later waves. This illustrates the argument of opportunity cost vs. perceived reward. Research into the relation between interview length and attrition, as reported by Uhrig (2008), shows instead that respondents who had short interviews at previous waves were more likely to be non-responders at the subsequent waves (Branden et al, 1995; Zabel, 1998). Although these results may appear to be counterintuitive, Watson and Wooden (2009) explain that interview length is actually a product of how willing respondents are to talk to the interviewers. Thus, the respondents most interested in the survey and who find it a more enjoyable experience, will have longer interviews. Uhrig (2008) also notes that the running time of the interview can signal greater respondent burden but it can also signal a greater commitment by the respondent to give more information. Branden et al (1995) suggests that the association between longer interview lengths and sample retention can be explained by: - Interest in the outcome of the survey - Interest/salience of the survey topic - Sense of civic duty on government surveys - Good rapport between interviewer and respondent. The association between the length of an interview and attrition therefore does not necessarily reflect the association between the length of a questionnaire and attrition. Experimental evidence would be needed to assess the effect of different interviews' length on attrition. Nevertheless, Zabel (1998), reports that attrition rates on the Panel 9
  • 10. Study of Income Dynamics (PSID) were reduced after an explicit attempt to decrease the survey length. 2.2.3. Respondent co-operation at previous waves Respondent's co-operation at prior interviews appears to be a good predictor for further participation in the survey (Branden et al, 1995; Laurie et al, 1999; Lepkowski and Couper, 2002; Uhrig, 2008). Co-operation can be measured directly in the survey by asking the interviewer to rate how cooperative the respondent is or through the use of paradata. Respondents that are rated by interviewers anything less than ‘excellent’ in terms of cooperativeness were more likely to subsequently refuse in Uhrig's (2008) study of the BHPS. A study by Cheshire and Hussey (2009) using paradata from the English Longitudinal Study of Ageing (ELSA) found that those respondents who consulted documents during the interview and who provided consent for data linkage were less likely to become refusals at a later stage. As mentioned earlier, the amount of item non- response and non-response at potentially sensitive questions such as income has also proved to be an important predictor of non-response at subsequent-waves (Brenden et al, 1995; Cheshire and Hussey, 2009). Willingness to be re-contacted is another important indicator of attrition. Respondents who did not provide any tracking information or who failed to provide complete tracking information were more likely to be refusals at later stages. Uhrig (2008) observes that supplying partial contact details may be interpreted as an "advance soft refusal". Section 3 Survey organisation processes Survey organisations have devised and implemented a number of systems to locate, contact and ensure continued cooperation from panel members in an effort to reduce attrition over the life of their longitudinal studies. We present here an overview of these methods, with a particular focus on methods to locate respondents and to reduce refusals. 3.1 Locating Respondents Methods to locate respondents are often referred to in the literature as tracking techniques. There is a large and constantly growing array of tracking methods and resources available for use on longitudinal surveys. Couper and Ofstedal (2009) classify tracking procedures into two groups: proactive and reactive techniques. 3.1.1 Proactive techniques Forward-tracing or prospective techniques are methods of tracking respondents that try to ensure that up-to-date contact details are available at the start of the wave fieldwork. Information is gained from the respondent themselves, by ensuring that the most accurate contact details are recorded at the latest interview and/or by updating the contact details before the next wave occurs (Burgess, 1989; Couper and Ofstedal, 2009). These methods are often relatively inexpensive and have proved to be successful, as the most useful source of information for tracking is often the participants themselves (Ribisl et al, 1996). Obviously, all biographical information needs to be recorded accurately at each wave (McAllister et al, 1973; Ribisl et al, 1996). Investigation has found that ensuring the correct spelling of an individual’s name by asking the respondent to spell letter-by-letter, especially for unique names, makes it easier to contact respondents in the future. Nicknames, maiden or birth names and aliases should also all be recorded. Recording some vital statistics such as date and place of birth can also be useful to track respondents (Gunderson and McGovern, 1989). 10
  • 11. In order to reduce non-contact attrition, it is also useful to record certain types of information at previous interviews (Ribisl et al, 1996). For example asking the respondent whether they have any plans to move within the next 6 months, and collect details of their new address, if known, as well as asking the respondent when would be the best time to call at future waves (Ribisl et al, 1996). Many longitudinal surveys ask for additional contact details of friends or relatives of the respondent when interviewed at each wave (McAllister et al, 1973). Craig (1979) and Bale et al (1984) note that participant’s mothers are the most helpful contacts as they are more likely to maintain contact with the participant. Couper and Ofstedal (2009) point out how individuals with large extended families and strong family ties have many potential sources through which their current location can be ascertained. But success of this method must not be overstressed as the contact person may be just as mobile or elusive as the respondent themselves. There is no guarantee that the additional contacts will be traceable at the next wave. Keeping in contact with respondents between waves of a longitudinal survey helps to maintain rapport by reducing non-contact attrition and also encourages a sense of belonging to the survey (Laurie et al, 1999). There is evidence that obtaining contact updates between waves has a positive effect not only on tracking respondents over the waves of a longitudinal survey but also in terms of cooperation at later dates (Couper and Ofstedal, 2009). Respondents may be asked to provide address or other contact details updates between waves. For example, when last interviewed, they can be given a change of address post card and/or a telephone number or email address to get in touch with the survey organisation if their details are to change. Alternatively or additionally, change of address cards and/or confirmation of address cards can be sent to respondents between waves, with a request to return them to the survey organisation. Laurie et al (1999) reported that the BHPS receive approximately 500 change of address cards each year and one third of respondents return the confirmation of address card. This method is relatively inexpensive and is easy to administer, although it does increase the burden placed on the respondent. To address the increased burden on respondents, Ribisl et al (1996) suggest offering a small incentive to increase compliance. For example, the BHPS sends £5 as a conditional incentive to those who return the change of address card between interview points. Other keep-in-touch exercises include short telephone interviews between waves, which allow respondents to update any of their contact details and also highlight if any respondents need to be tracked (Ribisl et al, 1996). This procedure can also be used as a way of ensuring that respondents are free for interviewing during the fieldwork period. Many survey organisations also keep-in-touch with respondents between waves by mailing a newsletter or report containing a summary of the survey's results to date (e.g. ELSA). This method is meant to encourage respondents to feel that their opinions and experiences are contributing to a worthwhile project, thus encouraging participation at the following interview(s). Additionally, it allows the survey organisation to check respondents' contact details. For example if the mail or newsletters are returned-to-sender it will highlight that the respondent will need to be tracked before the field period begins. 3.1.2 Reactive techniques Reactive or retrospective techniques are tracking procedures which occur once the interviewer finds the respondent can no longer be contacted using the contact details held by the survey organisation (Laurie et al, 1999). A number of participants are still able to be contacted retrospectively, but these tracking methods tend to be less cost-effective when compared to proactive methods. Reactive tracking is normally attempted by interviewers in the field or by a centralised tracking team (Couper and Ofstedal, 2009). 11
  • 12. Training can be provided to interviewers to emphasise the importance of tracking and the impact that non-contact attrition can have on a longitudinal survey. Interviewers often have a high amount of local knowledge and tracking skills (Laurie at al, 1999) and these skills can be used to their full capacity in order to help reduce attrition. For example interviewers can be encouraged to contact neighbours and other members of the community if a respondent has moved, leave letters for present occupiers to forward to respondents if the new address is known, etc. However, interviewers will necessarily work on a case-by-case basis and therefore tracking will be expensive (Couper and Ofstedal, 2009), and it is important that issues of privacy and ethics are taken into account. Tracking participants is often the toughest and most frustrating job in any longitudinal survey so it is important to motivate interviewers to locate respondents. Ribisl et al (1996) suggest that rewards can be offered to interviewers with high rates of success from tracking individuals and interviewers with certain experience or skills for tracking respondents could be dedicated to tracking work. Some longitudinal surveys employ a dedicated tracking team, whose responsibility is uniquely to track respondents who cannot be located. Many respondents’ contact details are accessible through existing databases which are updated on a regular basis, for example, telephone directories and electoral registers. Although such databases could be used also in a proactive way, they are more often queried by survey organisations after learning that one or more respondents can't be located. A centralised tracking team can make a cost-effective use of these resources as it is possible to search for a high number of respondents' details at one time. Available databases for tracking are dependent upon the country in which the survey is taking place (Couper and Ofstedal, 2009). For example, some countries maintain population registers that are updated every time an individual within the population moves. These are often available to survey organisations to freely access. In the UK, Royal Mail maintains a National Change of Address register; however providing change of address details is entirely voluntary. The Electoral Register can also be accessed for use with permission, and birth, marriage, death and divorce registers are also a good source of information for details. In the USA, there are also commercial vendors who provide contact information, in return for a small fee by consulting, for example, credit card bills and tax rolls. In the UK, some companies provide access to telephone directories. Access to some databases may be restricted due to the privacy legislation and so a limited amount of information may be available to survey organisations. Centralised tracking teams could also search for email addresses if they are listed publicly. The issue with this method is that email addresses tend to change more often than a respondent's home address. Another option may be to search on the internet for a person's name and last known address. This is a successful method particularly if the respondent has an unusual name (Couper and Ofstedal, 2009). 3.1.4 Issues to consider The majority of tracking methods are potentially time-consuming and costly. Reactive methods have proven more costly than proactive methods. Centralised tracking methods are the most cost-efficient, whereas tracking done by the interviewers themselves are the most costly (Couper and Ofstedal, 2009). The tracking process has long been considered only an operational issue, with very little research looking at the relative effectiveness of the different methods. Two recent papers have started to fill this knowledge gap. Fumagalli et al (2009) conducted an experiment on the BHPS looking at the effectiveness, in terms of tracking success, of the following methods: 1. Use of address confirmation card vs change of address card vs neither 12
  • 13. 2. Use of a pre-paid (unconditional) vs post-paid (conditional) incentive for address confirmation/change of address card 3. Use of a standard findings report to be sent to respondent before fieldwork vs a tailored report for young and busy people. The research found that the use of a conditional incentive on return of a change of address card was more effective in tracing respondents than the use of an unconditional incentive and of an address confirmation card. It also found a limited effect of the tailored version compared to the standard report. McGonagle et al (2009) carried out a similar experimental test on the PSID, looking in particular at the design of the change of address/confirmation card sent between waves, the timing and frequency of the mailing and the use of a pre-paid or post-paid incentive. The study found that the old card design performed better than the new design and that there was no difference in response to the card mailing for the pre and post-paid incentive groups. The study also found that families who received a second mailing had significantly higher response rates than those in the one-time mailing condition. It is important to note the success of any method of locating respondents in a longitudinal survey is partly dependent on the design of the survey itself. For example, the length between waves and the mode of data collection can have an important impact on the probability of locating respondents. The longer the time left between each wave, the greater the likelihood that sample members will have moved. Face-to-face surveys have more opportunity for the interviewers to track respondents in the local area by talking to neighbours, whereas telephone, email and postal survey are less informative (Couper and Ofstedal, 2009). 3.2. Contacting respondents for interview Non-contact attrition may still persist even after the respondent is located at the correct address. The respondent's patterns of being physically present at the address, physical impediments to getting an interview and the survey organisation's effort all contribute to whether contact is achieved or not (Uhrig, 2008). Evidence in the literature shows that the use of paradata to concentrate interviewer effort can help to contact respondents and improve response in a longitudinal survey (Baribeau et al, 2007; Couper and Ofstedal, 2009). One of the benefits of a longitudinal survey is that there is information available from the previous waves, for example the number of calls taken to contact each respondent, the outcome of the calls, and the time of day of calls. This data can be used by longitudinal surveys to vary interviewer efforts for minimising non-contacts in the coming waves. The National Longitudinal Survey of Children and Youth (NLSCY), for example, uses detailed call record data from previous waves to minimise non-contact at the following waves. 3.3. Avoiding refusals Various methods are incorporated into longitudinal survey design with an aim to minimise refusals (Moon et al, 2005), including incentives (Singer, 2002; Singer et al, 1999), refusal conversion techniques (Burton et al, 2006; Lynn et al, 2002) and extra interviewer efforts (Laurie, Smith and Scott, 1998; Lynn et al, 2002). This section outlines some of the most common methods longitudinal surveys use to reduce refusals. Most of the information in the section on incentives is based upon a recently published paper by Laurie and Lynn (2009) which presents a detailed overview of the literature on incentives on longitudinal surveys. 3.3.1 Incentives 13
  • 14. Incentives are a common method used on both cross-sectional and longitudinal surveys to try to minimise refusals. Laurie and Lynn (2009) explain that incentives lead to a decrease in refusals as an effect of social reciprocity. According to the social reciprocity model, small gestures on the part of the survey organisation (including incentives) promote trust and encourage respondents to feel they should give something in return, in this case cooperation to the survey. Incentives are also a method to show appreciation for the respondent's time. Incentives can be particularly beneficial for the long-term viability of longitudinal surveys, as they can play an important role in securing cooperation into the study not only at a particular point in time, but also throughout the life of a study. As already mentioned, in longitudinal surveys, a greater burden is placed on respondents. This is because cooperation is required over time but also because many longitudinal surveys are often long and complex, cover sensitive subject matters and may require interviews from each member of the household. The greater the burden on respondents is, the more appropriate the use of incentives is generally felt (Laurie and Lynn, 2009). Findings in the literature report that incentives contribute in improving response rates and are effective in reducing attrition over multiple waves of a survey (Singer et al, 1999; Shettle and Mooney, 1999; Rodgers, 2002; Singer, 2002, Lengacher et al, 1995). Some authors have also noted how incentives may lead to reductions in the overall field costs through a reduction of the number of calls that interviewers need to make (James, 1997). However, James and Bolstein (1990) hint towards a backfire effect for very large incentives which may even cause a reduction in cooperation. In spite of the recognised role played by incentives, there is sparse guidance on how incentives should be used in longitudinal studies. A review by Singer (2002) has highlighted that little empirical research has been done on the usefulness of incentives for maintaining response rates throughout waves of a survey. The range of incentives used to maximise response on longitudinal surveys varies greatly between surveys. Decisions for incentives on particular surveys are often based on the survey's own experience in the field, feedback from the interviewers and on advice of survey practitioners as opposed to being based on experimental evidence (Laurie and Lynn, 2009). Monetary incentives can be offered in the form of cash, cheque, gift voucher or a gift such as a book of stamps. They can be conditional, that is, they are paid after the interview has been completed or they may be offered as an unconditional incentive prior to the interview. For example, the BHPS gives the entire sample an unconditional £10 gift voucher, and offers a small gift at the interview, whereas ELSA posts a £10 gift voucher after the interview has been completed. Past evidence has illustrated that monetary incentives given as an unconditional incentive prior to the interview have the greatest impact on response (Laurie and Lynn, 2009; Lengacher et al, 1995; Singer, 2002). Trust is thought to be gained immediately from the respondent, and so refusal rates are shown to decrease. Some surveys, like the Canadian Community Health Survey, offer small gifts (e.g. a first aid mini-kit) as incentives for participation. Research however shows that monetary incentives such as cash or cheques are more effective than gifts and reinforces that pre- paid incentives have more influence on response than conditional incentives (Church, 1993; Warrimer et al, 1996). Providing respondents with feedback of the results of the survey they were involved with may also act as an incentive to encourage panel members to continue their cooperation in the study. For example, between data collection waves, ELSA provides each of their respondents with a newsletter to keep them updated with the main findings of the study and to reiterate the importance of each response to the validity of the study as a whole. As a longitudinal survey occurs over a number of waves, it is possible to introduce, change or cease incentives, although there is little evidence about the likely effects of 14
  • 15. doing so. Lynn and Laurie (2009) suggest that as the majority of attrition due to refusals in a longitudinal survey occurs at the first couple of waves, introducing an incentive on an already existing survey may have little effect on reducing the refusal rates. At the same time however, it could increase sample members' loyalty for later waves (Laurie, 2007). The effects of ceasing an incentive are largely unknown. Payments of any kind may induce respondents to expect some other payment at the next interview (Singer et al, 1998, 2000) although some research suggests that the withdrawal of incentives may not have a significant impact on response (Lengacher et al, 1995). Incentives could be tailored to the sample members’ individual circumstances. Detailed information is known about the response history in longitudinal surveys, so it is possible to target resources at respondents who are thought to have a higher risk of dropping out (Laurie and Lynn, 2009). Incentives could vary in amount, nature or the timing of when they are administered. Laurie and Lynn (2009) recognise that tailoring may not be practical in some circumstances, for example in household surveys, where the same incentive should be offered to each member. They also explain that evidence of the effectiveness of tailoring strategies is extremely thin, as most longitudinal surveys are not willing to experiment with targeted treatments. The ethical implications associated with the use of incentives should always be considered. Kulka (1994) carried out a review of incentives for reluctant respondents and found the use of incentives may restrict the freedom to refuse to participate in the survey. In the UK, it is now common practice for incentives to be given as "a token of appreciation", and testing has shown that such payments are rarely perceived as coercive (Lessof, 2009). Mixed evidence exists on the effect of incentives on sample composition and attrition bias. Couper et al (2006) found that cash incentives are more likely to increase response than a gift to those of lower education levels, single people and unemployed. This would suggest that certain sub-sets of the population with low retention propensities react better than others to offered incentives and incentives therefore could play an important role in reducing non-response bias. Other studies however have failed to show any change in the composition of the sample as a result of incentives (Singer et al, 1999). Finally, another aspect that has been researched extensively is whether incentives may lead to lower data quality. Research by Couper et al (2006) and Singer et al (1999) showed that the use of incentives did not appear to have any adverse effect on data quality as measured by differential measurement errors, levels of item non-response and effort expended in the interview. 3.3.2 Refusal conversion In order to reduce attrition, longitudinal surveys often use refusal conversion procedures. These often involve interviewers re-approaching individuals who initially refused participation in the survey and try and persuade them to complete an interview by explaining the purpose of the study more fully and re-emphasising the importance of each respondent to the survey (Burton et al, 2006; Stoop, 2004; Moon et al, 2005; Laurie et al, 1999). In some cases, larger incentives may be offered to refusals when attempting conversion (Lengacher et al, 2005; Abreu and Winters, 1999). Refusal conversion techniques are expensive methods, but they may prove particularly useful in longitudinal surveys to retain individuals over time (Burton, Laurie and Lynn, 2006). Research has looked at whether refusal conversion procedures at one wave impacts upon response on later waves. Lengacher et al (1995) report the results of a refusal-conversion experiment at Wave 1 of the Health and Retirement Study (HRS), when interviews were sought from a sub-sample of non-respondents by either using interviewing persuasive techniques or larger incentives. Although they found that the group who required refusal conversion had significantly lower response rates than the 15
  • 16. group who did not need conversion, only 11% of Wave 1 converted refusals were refusals at Wave 2. They also found no difference in Wave 2 response rates between the persuaded and the large incentive converted-refusals groups. Burton et al (2006) in their study of the BHPS also concluded that refusal conversion procedures appear to be effective in minimising attrition from the sample not only at each wave, but over a longer term. 3.3.3 Interviewers effect In face-to-face surveys, interviewers play a key role to obtain cooperation from sample members. Interviewer effects in longitudinal surveys are to some extent similar to those in cross-sectional surveys. In both cases, for example, interviewers can persuade respondents of their importance to the survey as a whole reassure respondents on confidentiality issues and more generally provide more information on the survey at the doorstep. Some interviewer effects however are specific to longitudinal surveys. Some evidence suggests that using the same interviewer is preferred by both respondents and interviewers (Laurie et al, 1999). Hill and Willis (2001, cited by Lynn et al, 2005) found that in a health study the largest and most significant factor which predicted response at a future wave was having the same interviewer at each wave. Interviewer continuity was associated with around 6 per cent increase in response rates. Some surveys, such as the BHPS, assign, where possible, the same interviewer to the same household at each wave of the survey in an attempt to reduce attrition. Lynn et al (2005) however point out how most studies that have looked into interviewers’ continuity effects are non-experimental and in consequence confound interviewer stability with area effects. Campanelli and O’Muircheartaigh (1999 and 2002) found that interviewer effects disappear once area effects are controlled. Lynn et al (2005) conclude that actually little evidence exists that interviewers’ stability affects response rates and that further research is needed on this issue. They also point out that although interviewer continuity may improve attrition, it may also have a negative impact on data quality. For example, Uhrig and Lynn (2009) found that interviewer familiarity may increase social desirability bias. Interviewer’s experience is also known to have an important impact on response. Watson and Wooden (2009) found the age and/or experience of the interviewers had an effect on attrition. Longitudinal surveys can make use of experienced interviewers to target households who were refusals in previous waves or that they have higher probability to drop out of the survey. Indeed this method proved successful in the NLSCY (Baribeau at al, 2007), resulting in higher response rates. 3.3.4 Responsive design Responsive designs are being considered by some survey organisations to reduce attrition bias in longitudinal surveys. Responsive design refers to the ability to monitor continually the streams of process data and survey data thus creating the opportunity to alter the design during the course of data collection to improve survey cost efficiency and to achieve more precise, less biased estimates (Groves & Heeringa, 2006). By continuously monitoring the composition of the respondent group during fieldwork, under- represented population groups may be targeted to improve response. This may eventually lead to improvements in data quality by ensuring sample representativity. In Canada, the SLID is being currently redesigned, with an aim to introduce a responsive design element for the 2010 data collection. 16
  • 17. References Abreu, D.A and Winters, F (1999) Using Monetary Incentives to Reduce Attrition in the Survey of Income and Program Participation. US Census Bureau. Allen, M., Ambrose D., and Atkinson, p. (1997) Measuring Refusal Rates. Canadian Journal of Marketing Research, 16, pp 31-42. The American Association for Public Opinion Research (AAPOR) (2008), Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 5th edition. Lenexa, Kansas: AAPOR. Apodaca, R., Lea, S. and Edwards, B. (1998) The Effect of Longitudinal Burden on Survey Participation. 1998 Proceedings of Survey Research Methods Section of the American Statistical Association, pp 906-910. Bale, R.N., Arnouldussen, B.H, Quittner, A.M. (1984) Follow-up Difficulty with Substance Abusers: Predictions of Time to Locate and Relationship to Outcome. The International Journal of Addictions, 19, pp 885-902. Baribeau, B, Wedselsoft, C, and Franklin, S (2007) Battling Attrition in the National Longitudinal Survey of Children and Youth. SSC Annual Meeting, June 2007 Behr, A., Bellgardt, E., and Rendtel, U. (2005). Extent and Determinants of Panel Attrition in the European Community Household Panel. European Sociological Review, 21, pp 489-512 Branden, L., Gritz, R.M. and Pergamit, M.R. (1995) The Effect of Interview Length on Attrition in the National Longitudinal Survey of Youth. No. NLS 95-28, US Department of Labour Burgess, R.D (1989) Major Issues and Implications of Tracing Survey Respondents. In Kasprzyk, D, Duncan G, Kalton G and Singh M.P (Eds) Panel Surveys (pp.52-73). New York:Wiley Burkham, D.T. and Lee, V.E. (1998). Effects of Monotone and Non-Monotone Attrition on Parameter Estimates in Regression Models with Educational Data. Journal of Human Resources, 33, pp 555-574 Burton, J, Laurie, H and Lynn, P (2006) The Long-term Effectiveness of Refusal Conversion Procedures on Longitudinal Surveys. J. R. Statistical Society Association 169, Part 2, pp 459- 478. Calderwood, L. (2009) Keeping in Touch with Mobile Families in the UK Millenium Cohort Study. Statistics Canada 25th International Symposium on Methodological Issues. Longitudinal Surveys: from Design to Analysis, Ottawa, 2009. Campanelli, P. and O’Muircheartaigh, C. (1999) Interviewers, Interviewer Continuity, and Panel Survey Non-Response. Quality & Quantity 33(1), pp 59-76. Campanelli, P. and O’Muircheartaigh, C. (2002) The Importance of Experimental Control in Testing the Impact of Interviewer Continuity on Panel Survey Non-Response. Quality & Quantity 36(2), pp 129-144. 17
  • 18. Cheesbrough, S. (1993) Characteristics of Non-Responding Households in the Family Expenditure Survey. Survey Methodology Bulletin, 33, pp 12-18 Cheshire H. and Hussey, D. (2009) Factors associated with refusals in the English Longitudinal Study of Ageing. Statistics Canada 25th International Symposium on Methodological Issues. Longitudinal Surveys: from Design to Analysis, Ottawa, 2009 Church, A.H (1993). Estimating the Effects of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinions Quarterly, 57, pp 62-79 Couper, M.P. (1991) Modelling Survey Participation at the Interviewer Level in 1991. Proceedings of the Survey Research Methods Section of the American Statistical Association, pp 98-107. Couper, M.P, Ryu, E and Marans, R.W (2006). Survey Incentives: Cash vs In-kind; Face-to- Face vs Mail; Response Rate vs Non-Response Error. International Journal of Public Opinions Research, 18, pp 89-106. Couper, M and Ofstedal, M. (2009). Keeping in Contact with Mobile Sample Members. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley. Craig, R.J (1979) Locating Drug Addicts Who Have Dropped out of Treatment. Hospital and Community Psychiatry, 30, pp 402-404. De Graff R., V.Bijl R., Smit F., Ravelli A. and Vollebergh, W. A.M (2000). Psychiatric and Siciodemographic Predictors of Attrition in a Longitudinal Study: The Netherlands Mental Health Survey and Incidence Study (NEMESIS), 2000, pp 1039-1045. Eurostat (2004) Technical document on intermediate and final quality reports. Working Group on Statistics on Income and Living Conditions (EU-SILC), 29-30 March 2004. Eurostat. Luxembourg. Fitzgerald, J., Gottschalk, P. and Moffitt, R. (1998). An Analysis of Sample Attrition in Panel Data: the Michigan Panel Study of Income Dynamics. Journal of Human Resources, 33, pp 251-299. Foster, K. (1998). Evaluating Non-Response on Household Surveys. GSS Methodology Series No. 8, London: Government Statistics Service. Foster, K. and Bushnell, D. (1994) Non-Response Bias on Government Surveys in Great Britain. The 5th International Workshop on Household Non-Response, Ottawa, 1994. Fumagalli, L., Laurie, H. Lynn, P (2009). Methods to Reduce Attrition in Longitudinal Surveys: An Experiment. European Survey Research Association Conference. Warsaw, 2009. Goyder, J. (1987) The Silent Minority – Non-Respondents on Sample Surveys. Cambridge: Polity Press. Gray, R., Campanelli, P., Deepchand, K. and Prescott-Clarke P. (1996) Exploring Survey Non-Response: The Effect of Attrition on a Follow-up of the 1984-85 Health and Life Style Survey. The Statistician, 45, pp 163-183. Groves, R.M and Couper, M.P (1998) Non-Response in Household Interview Surveys. New York: John Wiley and Sons Ltd. Groves, R.M and Hansen, S.E (1996). Survey Design Features to Maximise Respondent Retention in Longitudinal Surveys. Unpublished report to the National Centre for Health Statistics, University of Michigan, Ann Arbour, MI. Groves R. M.and Heeringa, S. (2006) Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistics Society Series A: Statistics in Society, 169, pp 439-257 Part 3. 18
  • 19. Groves, R.M., Singer, E. and Corning A. (2000). Leverage-Saliency Theory of Survey Participation. Public Opinion Quarterly, 64, pp 299-308. Hawkes D, Plewis, I (2006). Modelling Non-Response in the National Child Development Study. Journal of Royal Statistics Society A, 169, Part 3, pp 479-491. Hawkins, D.F. (1975). Estimation of Non-Response Bias. Sociological Methods and Research, 3, pp 461-488. Hidiroglou, M.A., Drew, J.D., and Gray, G.B. (1993). A Framework for Measuring and Reducing Non-Response in Surveys. Survey Methodology, 19, pp 81-94. Hill, D.H and Willis, R.J. (2001) Reducing Panel Attrition: A Search for Effective Policy Instruments. Journal of Human Resources, 36, pp 416-438. Iyer, R. (1984) NCDS Fourth follow-up 1981: Analysis of Response. NCDS4 Working Paper, no. 25, London: National Children’s Bureau. Kasse, M., (1999) Quality Criteria for Survey Research Berlin: Akademie Verlag. Kulka, R.A. (1994). The Use of Incentives to Survey ‘Hard-to-Reach’ Respondents: A Brief Overview of Empirical Research and current practice. Paper presented at the COPAFS seminar on New Directions in Statistical Methodology, Bethesda, MD. James J.M, Bolstein R. (1990) The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly, 54, pp 346- 361. James, T.L. (1997). Results of the Wave 1 Incentive Experiment in the 1996 Survey of Income and Program participation. 1997 Proceedings of the Survey Research Methods Section of the American Statistical Association (pp 834-839). Washington, DC: American Statistical Association. Jones, A.M., Koolman, X. and Rice, N. (2006) Health-Related Non-Response in the British Household Panel Survey and European Community Household Panel: Using Inverse- Probability-Weighted Estimators in Non-Linear Models. Journal of the Royal Statistical Society Series A, 179(3), pp 543-569. Laurie, H and Lynn, P (2009). The Use of Respondent Incentives on Longitudinal Surveys. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley. Laurie, H, Smith R and Scott, L (1999) Strategies for Reducing Non-Response in a Longitudinal Panel Survey. Journal of Official Statistics, 15:2, pp269-2 Lengacher, J.E, Sullivan, C.M, Couper M.P and Groves, R.M (1995) Once Reluctant, Always Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal Survey. Survey Research Centre, University of Michigan. Lepkowski ,J and Couper, M (2002) Non-Response in the Second Wave of Longitudinal Household Surveys. In Groves R. Dillman D. Eltinge J. Little R.(2002). Survey Non Response. Wiley Series in Survey Methodology. Lessof, C.(2009) Ethical issues in Longitudinal Surveys. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys Methodology of Longitudinal Surveys. West Sussex: Wiley. Lillard, L.A. and Panis, C.W.A. (1998) Panel Attrition from the Panel Study of Income Dynamics. Journal of Political Economy, 94(3), pp 489-506. Lynn, P (2005) Outcome Categories and Definitions of response Rates for Panel Surveys and Other Surveys involving Multiple Data Collection Events from the Same Units. Unpublished manuscript. Colchester: University of Essex. 19
  • 20. Lynn, P. (2006) Editorial: Attrition and Non-Response. Journal of the Royal Statistics Society A 169, Part 3, pp 393-394. Lynn, P., Beerten R., Laiho J., Martin J. (2003). Towards Standardisation of Survey Outcome Categories and Response Rate Calculations. Research in Official Statistics, edition 1: vol:2002, pp 61-84. Lynn P, Buck N, Burton J, Jackle A, Laurie H (2005). A Review of Methodological Research Pertinent to Longitudinal Survey Design and Data Collection. ISER. Working Paper 2005-29. Colchester: University of Essex. Lynn P., and Clarke, P. (2002) Separating Refusal Bias and Non-Contact Bias: Evidence from UK National Surveys. Journal of the Royal Statistical Society Series D. The Statistician. 51(3), pp 319-333. Lynn, P, Clarke P, Martin J and Sturgis P (2002). The Effects of Extended Interviewer Efforts on Non-Response Bias. In R.M. Groves, D.A Dilman, J.L Elting and R.J.A Little (eds) (2002). Survey Non-Response. Chichester :Wiley McAllister, R, Goe, S and Edgar, B. (1973) Tracking Respondents in Longitudinal Surveys: Some Prelimary Considerations. The Public Opinion Quarterly, 47:3, pp 413-416 McGonagle, K., Couper, M. Schoeni, R. (2009). Maintaining Contact with PSID Families between Waves: An Experimental Test of a New Strategy, Statistics Canada 25th International Symposium on Methodological Issues. Longitudinal Surveys: from Design to Analysis, Ottawa, 2009. Michaud, S., Webber, M. (1994) Measuring Non-Response in a Longitudinal Survey: The Experience of the Survey of Labour and Income Dynamics. Fifth International Workshop on Household Survey Non-Response, Ottawa, 1994. Moon, N, Rose, N and Steel, N (2005) How Could They Ever, Ever Persuade You? Are Some Refusals Easier to Convert Than Others? AAPOR, ASA Section on Survey Research Methods. Nathan, G. (1999) A Review of Sample Attrition and Representativeness in Three Longitudinal Surveys (The British Household Panel Survey, the 1970 British Cohort Study and The National Child Development Study) Government Statistical Service, Methodology Series, No. 13., London: GSS. Nicoletti, C. and Buck, N. (2004) Explaining Interviewee Contact and Co-operation in the Birtish and German Household Panels. In M. Ehling and U. Rendtel (eds) (2004), Harmonisation of Panel Surveys and Data Quality. (pp 143-166). Wiesbaden: Statistisches Bundesamt. Nicoletti, C. and Peracchi, F. (2002) A Cross-Country Comparison of Survey Non- Participation in the ECHP. ISER Working Papers, No. 2002-32, Colchester: University of Essex. Nicoletti, C. and Peracchi, F. (2005) Survey Response and Survey Characteristics: Microlevel Evidence from the European Community Household Panel. Journal of the Royal Statistical Society Series A, 168(4), pp 763:781. Plewis, I. Ketende, S. Joshi, H. Hughes, G. (2008) The Contribution of Residential Mobility to Sample Loss in a Birth Cohort Study: Evidence from the First Two Waves of the UK Millennium Cohort Study. Journal of Official Statistics, Vol. 24, No3, 2008, pp 365-385 Ribisl, Walton, Mowbray, Luke and Davidson (1996). Minimising Participant Attrition. Evaluation and Program Planning, 19:1, pp.1-25 20
  • 21. Rodgers, W. (2002). Size of Incentive Effects in a Longitudinal Study. Presented at the 2002 American Association for Public Research conference, mimeo, Survey Research Centre, University of Michigan, Ann Arbor. Scholes S., Medina, J., Cheshire, H., Cox, K., Hacker, E., Lessof, C. (2009). Living in the 21st Century: Older People in England: The 2006 English Longitudinal Study of Ageing. Technical report, Natcen, 2009. Shettle, C and Mooney, G (1999). Monetary Incentives in Government Surveys. Journal of Official Statistics 15, pp 231-250. Singer, E (2002). The Use of Incentives to Reduce Non-Response in Household Surveys. In R.M. Groves, D.A Dilman, J.L Elting and R.J.A Little (eds) (2002). Survey Non-Response. Chichester:Wiley. Singer, E, Van Hoewyk, J and Gebler, N (1999). The Effect of Incentives on Response Rates in Interviewer Mediated Surveys. Journal of Official Statistics, 15, pp 217-230. Singer, E., Van Hoewyk, J. and Maher, P. (1998) Does the Payment of Incentives Create Expectation Effects? Public Opinion Quarterly, 62, pp 152-164. Singer, E., Van Hoewyk, J. and Maher, P. (2000). Experiments with Incentives in Telephone Surveys. Public Opinion Quarterly, 64, pp 171-188 Smith, T (2002), Developing Non-Response Standards. In Groves R., Dillman, D., Eltinge J., Little. R. (eds) (2002) Survey Non Response. Chichester:Wiley. Stoop, I (2004) Surveying Non-Respondents. Field Methods, 16, pp 23-54. Stoop, I.A. L. (2005) The Hunt for the Last Respondent, The Hague, Netherlands: Social and Cultural Planning Office. Uhrig, S (2008). The Nature and Causes of Attrition in the British Household Panel Survey. ISER Working Paper Series. No.2008-5. Warriner, K.; Goyder, J.; Gjertsen, H.; Hohner, P.; and McSpurren, K. (1996). Charities, No; Lotteries, No; Cash, Yes. Public Opinion Quarterly, 60, pp 542-562. Watson, D. (2003) Sample Attrition between Waves 1 and 5 in the European Community Household Panel. European Sociological Review, 19(4) pp 361-378. Watson, N and Wooden, M. (2004) Sample Attrition in the HILDA Survey. Australian Journal of Labour Economics, Vol. 7, No 2, pp 293-308. Watson, N and Wooden, M (2004) Wave 2 Survey Methodology. HILDA Project Technical Paper Series, No. 1/04. Watson, N and Wooden, M (2009) Identifying Factors Affecting Longitudinal Survey Response. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley. Zabel, J. E. (1998). An Analysis of Attrition in the Panel Study of Income Dynamics and the Survey of Income and Program Participation with an Application to a Model of Labour Market Behaviour. Journal of Human Resource, 33, pp 479-506. 21