1. Attrition due to non-response is a major issue for longitudinal surveys as it can decrease statistical power and introduce bias. This literature review focuses on defining and measuring attrition, factors associated with attrition, and methods to reduce attrition.
2. There is no standard way to calculate longitudinal response rates, but frameworks have been proposed to measure attrition and response at each wave and cumulatively. Response rates can help assess attrition levels.
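The frameworks mentioned above distinguish wave-on-wave response from cumulative response. A minimal sketch of that distinction (the wave counts below are hypothetical, not from the review):

```python
# Illustrative only: conditional vs. cumulative response rates for a
# longitudinal survey. All counts are invented for the example.

def wave_response_rates(completed_per_wave, baseline_sample):
    """Return (conditional, cumulative) response rates per wave.

    Conditional rate: respondents at wave t / respondents at wave t-1.
    Cumulative rate:  respondents at wave t / baseline sample.
    """
    conditional, cumulative = [], []
    previous = baseline_sample
    for completed in completed_per_wave:
        conditional.append(completed / previous)
        cumulative.append(completed / baseline_sample)
        previous = completed
    return conditional, cumulative

cond, cum = wave_response_rates([800, 720, 650], baseline_sample=1000)
print(cond)  # conditional rates: 0.8, 0.9, ~0.90
print(cum)   # cumulative rates: 0.8, 0.72, 0.65
```

The gap between the two series is one way attrition shows up: conditional rates can look healthy at every wave while the cumulative rate steadily erodes.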
3. Attrition is modeled as a three-step process - locating respondents, making contact, and gaining cooperation. Characteristics like age, gender, and prior survey experience can predict attrition. Methods to reduce attrition include incentives, refusal conversion, and improved
1) This abbreviated quantitative research plan examines the risk factors that contribute to health disparities in the US, specifically those related to lack of health insurance.
2) The study aims to identify the risk factors (e.g. racial, ethnic) that contribute to lack of health insurance and determine which population groups are most affected. It will also analyze what percentage of the population experiences these issues.
3) Two hypotheses will be tested: 1) there is a significant relationship between risk factors (independent variables) and lack of health insurance (dependent variable), and 2) poverty remains the greatest barrier preventing people from accessing health care due to lack of insurance. Quantitative research methods will be used to test these hypotheses and answer the
1. The document discusses correlational and survey research methods. It provides definitions and purposes of correlational research, including describing relationships between variables and using relationships to predict outcomes.
2. The basic steps of correlational research are outlined, including problem selection, sampling, instrumentation, design and procedures, data collection, and data analysis. Threats to internal validity like subject characteristics and mortality are also discussed.
3. Survey research is defined as collecting data using questionnaires to answer questions about populations. Different types of surveys like cross-sectional, longitudinal, trend, cohort and panel studies are explained. The key steps in conducting survey research are identified.
1. The document discusses correlational and survey research methods. It defines correlational research as studying relationships between two or more variables without influencing them.
2. The basic steps in correlational research are outlined as problem selection, sampling, instrumentation, design and procedures, data collection, and data analysis and interpretation.
3. Survey research is defined as collecting data using questionnaires or interviews to answer questions about populations. Cross-sectional and longitudinal survey designs are described.
This document analyzes and synthesizes research on treatment options for multiple sclerosis (MS). It summarizes three studies that examined switching treatments for relapsing-remitting MS. One study found switching from Natalizumab (NAT) to Fingolimod (FTY) increased relapse rates. Two other studies found switching to NAT from other treatments reduced relapse rates. The document concludes that some secondary treatments may help prevent MS symptoms, while more research is needed on others. Nurses should apply this research by recommending NAT as a secondary treatment and individualizing patient education and care.
Five steps to conducting a systematic review (Dinesh Rokaya)
"Five steps to conducting a systematic review" outlines a five-step process for conducting systematic reviews: 1) framing questions, 2) identifying relevant publications, 3) assessing study quality, 4) summarizing evidence, and 5) interpreting findings. The document uses the example of a review on water fluoridation safety to illustrate these steps. It describes framing a clear, structured question; searching extensively for studies; selecting 254 studies that compared fluoridated to non-fluoridated areas; assessing study quality with attention to bias; and summarizing evidence on cancer outcomes from 26 studies to determine the safety of water fluoridation.
This document describes different types of epidemiological study designs, including observational studies like cross-sectional, case-control, cohort, and experimental studies like randomized controlled trials. It provides details on descriptive versus analytical epidemiology and cross-sectional studies specifically. Cross-sectional studies measure prevalence at a single point in time by surveying exposures and disease status simultaneously in a population cross-section. They are useful for assessing disease burden, comparing prevalence between populations, and examining trends over time.
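The prevalence measure described above is a simple proportion at one point in time. A minimal sketch of the calculation and a between-population comparison (all numbers here are invented for illustration):

```python
# Illustrative only: point prevalence = cases present / population
# surveyed at a single time point. Counts below are hypothetical.

def prevalence(cases, population):
    """Point prevalence as a proportion."""
    return cases / population

# Compare prevalence between two hypothetical populations.
p_a = prevalence(150, 5000)   # population A: 150 cases among 5000 surveyed
p_b = prevalence(90, 4500)    # population B: 90 cases among 4500 surveyed
print(f"A: {p_a:.1%}, B: {p_b:.1%}, prevalence ratio: {p_a / p_b:.2f}")
```

Because exposure and disease status are measured simultaneously, a ratio like this describes burden but says nothing about which came first.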
Poster: Test-Retest Reliability and Equivalence of PRO Measures (CRF Health)
This literature review examined administration intervals used in test-retest reliability and equivalence studies for patient-reported outcome measures. The review found a large variance in intervals, ranging from immediate to 7 years for test-retest studies and from immediate to 1 month for equivalence studies. The most common intervals were 2 weeks for test-retest studies and 1 hour or less for equivalence studies. Intervals varied depending on the medical condition and type of study, with shorter intervals used for equivalence studies compared to test-retest studies for the same conditions.
This document discusses various types of bias that can occur in research studies. It defines bias as an unknown or unacknowledged error created during the research process. Some key biases discussed include selection bias, measurement bias, confounding, and publication bias. The document emphasizes the importance of research design features like randomization and blinding to help reduce bias.
This document provides a critique of the article "Effects of Spirituality on Professionals at Risk of Developing Secondary Traumatic Stress Disorder". The critique examines several points, including the practical application of the research findings, the view of spirituality as a trait versus process, additional variables that could influence results, and whether resilience truly requires spiritual processes. In summarizing, the critique states that while the research on spirituality and resilience among therapists is intriguing, it does not have enough data to stand alone and further research is needed to better illustrate or falsify the hypothesis.
Cross-sectional studies collect data from subjects at a single point in time to measure prevalence of characteristics. They provide a snapshot of variables like behaviors, attitudes, and beliefs in a population but cannot determine causation or change over time. One example documented increasing acceptance of racial equality over decades, while another examined relationships between beer consumption and obesity measures. Cross-sectional designs are useful for descriptive analyses but have limitations like inability to establish causality.
This document discusses factors that can result in false associations in epidemiological studies, including chance, bias, and confounding. It describes ways to assess the validity of studies and avoid false associations, such as ensuring internal and external validity. The document outlines criteria for judging causality, including assessing the role of chance, bias, and confounding in individual studies, and considering the totality of evidence from multiple sources. It discusses the Bradford Hill criteria for evaluating the strength of evidence for a causal relationship.
This document discusses sources of error and bias in epidemiological studies. It describes how selection bias can occur when the study population is not representative of the target population, due to factors like differential participation rates or loss to follow up. Selection bias can lead the study to produce either overestimates or underestimates of exposure-disease relationships. The document provides examples to illustrate how selection bias may influence both cohort and case-control study designs.
This document describes different types of experimental and non-experimental research designs used in pre-experimental studies. It discusses one-shot case studies, one group pretest-posttest designs, and static group comparison studies as types of pre-experimental designs. For non-experimental designs it covers descriptive, correlational, and comparative designs including surveys, simple descriptive studies, and ex-post facto correlational causal comparative studies. The advantages and disadvantages of these various designs are also outlined.
This document discusses nested case-control studies, case-cohort studies, and case-crossover studies. It provides examples and discusses the advantages and disadvantages of each study design. Nested case-control studies select controls from within a prospective cohort study. Case-cohort studies select a random subcohort of controls from the entire cohort. Case-crossover studies use individuals as their own controls by comparing exposure during case periods to control periods.
Three studies found that patients with neurofibromatosis (NF), including NF1 and NF2, reported lower quality of life scores compared to the general population when assessed using the generic SF-36 quality of life measure. Visibility and severity of NF1 symptoms significantly predicted lower skin-specific and general quality of life scores. However, the evidence for specific predictors of quality of life in NF patients was otherwise weak or inconclusive. Given the documented lower quality of life in NF patients, future research should comprehensively examine psychosocial factors and potential mind-body interventions.
This document provides an overview of observational study designs, including definitions, types, and examples. It discusses cohort studies, case-control studies, and cross-sectional studies. Cohort studies follow groups over time to determine causes and prognosis. Case-control studies identify risk factors by comparing the exposure histories of people with a condition (cases) and people without it (controls). Cross-sectional studies analyze a population at a single time point to determine prevalence. Observational studies are useful when randomized controlled trials would be unethical, or for studying rare conditions and adverse events.
This chapter describes the analysis and findings of the study. Data from 93 nurse questionnaires were analyzed to examine the relationship between death anxiety and death attitudes. Descriptive statistical analysis was used to identify frequencies and percentages. Key findings included a significant relationship between older age and less death anxiety. There were no significant gender differences. Those with more nursing experience tended to have less death anxiety, though this was only marginally significant.
This document discusses sources of bias and error that can occur in research studies. It defines validity as the degree to which a measurement measures what it intends to measure. Reliability is defined as the degree to which repeated measurements produce similar results. There are two types of errors - random errors which are due to chance, and systematic errors which have a recognizable source or pattern. Bias is a deviation from the truth that can lead to conclusions that differ from reality. There are three main types of biases: selection bias due to systematic differences between study groups, measurement/misclassification bias from inaccurate measurements, and confounding bias when an extraneous factor is associated with both an exposure and outcome. Confounding can distort the apparent
Selection bias, information bias, and confounding are the three main types of bias that can occur in epidemiological studies. Selection bias results from the inappropriate selection of study participants and can be reduced through randomization and clearly defining eligibility criteria. Information bias occurs due to errors in measuring or classifying exposure, disease status, or other variables and can be reduced through blinding of outcome assessors. Confounding happens when a variable is associated with both the exposure and the outcome but is not on the causal pathway, distorting the exposure-outcome relationship.
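The distortion that confounding produces can be shown numerically: with counts constructed so that a confounder (say, age) is linked to both exposure and outcome, the crude analysis suggests an effect while the age-stratified analysis shows none. All counts below are invented for illustration:

```python
# Illustrative only: a confounder inflating a crude risk ratio.
# Counts are constructed so each age stratum has RR = 1.0.

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: risk in exposed / risk in unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Older stratum: mostly exposed, higher baseline risk (0.2 in both groups)
rr_old   = risk_ratio(200, 1000, 40, 200)
# Younger stratum: mostly unexposed, lower baseline risk (0.05 in both groups)
rr_young = risk_ratio(10, 200, 50, 1000)
# Crude analysis, collapsed over age: appears to show a harmful exposure
rr_crude = risk_ratio(210, 1200, 90, 1200)

print(rr_old, rr_young, rr_crude)  # 1.0, 1.0, then ~2.33 crude
```

Stratification (or regression adjustment) on the confounder recovers the true null association that the crude comparison hides.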
The document summarizes two small studies conducted by students to examine the relationship between inadequate sleep and unintentional injuries. A qualitative study using an online focus group of 4 students explored perceptions of sleep and injuries. It found inadequate sleep negatively impacts health and can increase risks. A quantitative survey of 18 students further examined the relationship, finding agreement that inadequate sleep impacts judgment and awareness and may increase injury risks. Both studies had limitations as student exercises but provided insight into how policies could help address the issue.
A very useful article that briefly and clearly describes how evidence should be handled in order to evaluate it and make use of the information it provides.
This document provides an introduction to shared decision-making (SDM). It defines SDM as a collaborative process where patients and providers make healthcare decisions together based on scientific evidence and patient values/preferences. SDM is most appropriate when there is clinical uncertainty or balanced risks/benefits. While SDM is important for quality care, it has been slow to be adopted in practice. The document outlines the steps in SDM and common misconceptions, and provides learning objectives and references for further information.
Systematic and random errors can affect epidemiological studies. Random errors are due to chance and include individual biological variation, measurement error, and sampling error. Systematic errors, also called biases, are non-random and can distort study results. Selection bias occurs if study groups differ in characteristics unrelated to exposure that influence outcomes. Measurement bias happens if exposures or diseases are inaccurately classified. Confounding is present when a third factor is associated with both the exposure and outcome under investigation. Careful study design and analysis techniques can help reduce biases and errors to obtain more accurate results.
This document discusses bias and validity in clinical research. It defines clinical epidemiology as the study of health-related states and events in populations with the aim of controlling health problems. It describes how epidemiologic studies compare outcomes, such as disease rates, between exposed and unexposed groups. Validity is important: internal validity indicates that a study's findings are free from bias and error, while external validity indicates generalizability. Bias and confounding can threaten validity and lead to erroneous associations if not avoided or controlled for.
This document discusses research design and different types of research methods. It begins by defining a research design as a systematic plan for studying a scientific problem that defines key aspects of a study such as the type of design, research questions, variables, and statistical analysis plan. It then describes different types of non-experimental designs including relational, comparative, and longitudinal designs. Within non-experimental designs, it distinguishes between exploratory and descriptive research. It also discusses experimental designs including causal and quasi-experimental designs. Finally, it contrasts cross-sectional and longitudinal study designs. In summary, the document provides an overview of key research design concepts and differentiates between experimental and non-experimental designs as well as specific types of designs within those two
This document discusses several key aspects of research and statistics:
1. It emphasizes that reliable evidence generally comes from multiple studies and research teams, and the totality of evidence matters most.
2. Statistics are useful for determining whether differences or associations are likely due to chance or represent real effects, but numbers can be easily manipulated.
3. Several types of biases and errors can influence research, decision making, memory, and social judgments. Rigorous methodology is important to produce high quality, accurate research and publications.
The document summarizes findings from New Zealand's 2009/10 Time Use Survey on how Kiwis spend their time and who they spend it with. The survey found that on average:
- Kiwis spend most of their time (13 hours 26 minutes) with family in their own household and less time with other known people (5 hours 24 minutes) or family outside their household (1 hour 23 minutes).
- Unemployed people and those not in the labor force spend more time alone than employed people. Employed people spend more time with unknown people, such as 1 hour 50 minutes for full-time employed and 2 hours for part-time employed.
- Time spent with non-family
This document discusses the growth of private label candy and snacks in convenience stores. Some key points:
- Over 46% of shoppers buy private label products in c-stores, and sales have grown due to the recession and brands like 7-Eleven introducing more options.
- Private label candy and snacks provide higher margins for retailers compared to national brands, especially if priced at least 30% less.
- Several convenience store chains discuss the success they have seen from introducing private label bagged candy, chips, nuts and other snacks - representing 2-5% of total category sales in some cases.
- Wholesalers are also offering private label options to help retailers expand offerings and margins compared to major brands
Raghunath Jana is an instrumentation engineer with over 4 years of experience in power plant maintenance, commissioning, and operations. He has successfully commissioned instrumentation and electrical equipment for multiple projects including a 12 MW cogeneration plant and 39TPH boiler. His expertise includes preventative maintenance, programming of DCS and PLC systems, instrumentation calibration, and preparation of piping and instrumentation diagrams. He holds a diploma in electronics and instrumentation engineering and is proficient in software like AutoCAD, MS Office, and various DCS platforms including Siemens and Yokogawa.
Nandan Sharma has over 10 years of experience as a Chemical Engineer working in hydrogen production plants using steam methane reforming (SMR) technology. He is currently a Production Engineer at Praxair India Limited managing their SMR and electrolysis hydrogen production operations and maintenance. Prior to this, he worked as a Shift In Charge at Linde India Limited overseeing operations of their SMR hydrogen plant and captive power facilities.
The document evaluates the student's final media product, a women's health magazine. It discusses how the magazine conforms to magazine conventions through its size and fonts but challenges conventions through its unique name and target audience. The magazine aims to increase awareness of health issues among women in Pakistan and motivate them to live healthier lifestyles. It would be distributed in medical stores, clinics, and surgery centers and advertised by a publishing company. Throughout the project, the student improved their production skills in areas like camera work, lighting, and editing as well as developing creative skills in vision, research, and planning. The magazine was created using Photoshop and a Nikon DSLR camera, and online research was conducted to inform the design of
Cloudy with a chance of devops (devopsdays Philadelphia) – bridgetkromhout
This document appears to be a series of tweets by Bridget Kromhout discussing her background working in DevOps, organizing DevOpsDays conferences, and sharing thoughts on topics like effective communication, tools, and the importance of people over processes and tools in DevOps. She references living in Minneapolis, Minnesota and working at Pivotal, and hosts the Arrested DevOps podcast.
This session will show a “case study” of how CWPS uses built-in Kaseya functionality to eliminate over a hundred and fifty superfluous tickets per day, while enlisting utilities to produce “actionable intelligence” for those tickets that need human intervention. The “building blocks” present in Kaseya will be detailed, and content takeaways will be provided for attendees’ forays into the world of “automatic remediation.” Special emphasis will be placed on auditing exceptions when executing automatic remediation, as well as “WWYAFLSDEDWTT?”— the meaning of which will be revealed during the session.
This document discusses different study designs used in clinical research. It begins by describing descriptive study designs like case reports, case series, and cross-sectional studies which are used to gather general information about a disease but cannot prove causality. It then discusses analytic study designs like case-control and cohort studies which can be used to test hypotheses about associations between exposures and outcomes. Case-control studies identify cases and controls and compare their exposures to determine if exposures are associated with the outcome. Cohort studies follow groups over time to assess if exposures affect outcomes. The document emphasizes the importance of defining outcomes, exposures, and confounders and choosing the appropriate design based on the research question and feasibility factors.
The document provides an overview of research methodology. It defines key terminology related to research such as population, sample, variables, and statistics. It discusses different types of research designs including observational studies like cross-sectional and case-control studies as well as experimental designs like randomized clinical trials. The document also covers topics like formulating research questions and hypotheses, sampling methods, levels of evidence in clinical research, and the various steps involved in the research process from data collection to interpretation and reporting of findings.
This document discusses various epidemiologic study designs including descriptive and analytic designs. Descriptive designs like case studies, cross-sectional studies, and ecological studies focus on assessing samples without making causal inferences, while analytic designs like cohort studies, case-control studies, and experimental studies utilize comparisons to evaluate relationships between exposures and outcomes. Meta-analysis involves statistically combining results from multiple separate but related studies to obtain an overall effect or relationship.
This document discusses various epidemiologic study designs including descriptive and analytic designs. Descriptive designs focus on assessing samples without causal inferences, and include case studies, cross-sectional studies, and ecological studies. Analytic designs utilize comparison groups and include experimental, cohort, and case-control studies. The strengths and weaknesses of each design are described.
Cross-sectional studies examine the relationship between a disease and exposure in a population at a single point in time. They provide a snapshot of disease prevalence and exposure prevalence simultaneously. While they can describe disease burden and identify potential risk factors, the temporal relationship between exposure and disease is unclear since they involve simultaneous rather than longitudinal measurement.
Three key points about the document:
1. The document discusses correlational research and survey research. It defines correlational research as studying relationships between two or more variables without influencing them. Survey research involves collecting data through questionnaires or interviews to answer questions about populations.
2. The basic steps of correlational research are discussed, including problem selection, sampling, instrumentation, design/procedures, data collection/analysis. Threats to internal validity like subject characteristics and mortality are also covered.
3. The different types of surveys - cross-sectional, longitudinal (trend, cohort, panel), are defined. The key steps in conducting survey research are outlined, such as defining the problem, identifying the population,
Three key points about the document:
1. The document discusses correlational research and survey research methods. It defines correlational research as studying relationships between two or more variables without influencing them. Survey research involves collecting data through questionnaires and interviews to answer hypotheses or questions about populations.
2. The basic steps of correlational research are outlined, including problem selection, sampling, instrumentation, design and procedures, data collection, and data analysis. Threats to internal validity like subject characteristics, location, instrumentation, and mortality are also discussed.
3. The document provides details on longitudinal and cross-sectional survey designs. The key types of longitudinal surveys - trend studies, cohort studies, and panel studies - are explained.
Study designs in descriptive epidemiology DR.SOMANATH.ppt – DentalYoutube
This document provides an overview of various epidemiological study designs used in public health research. It begins with descriptive studies, which observe disease distribution by time, place and person without attempting to draw conclusions about causes. It then covers analytical studies, including ecological, cross-sectional, case-control and cohort studies. Ecological studies examine population-level associations, while cross-sectional studies measure prevalence. Case-control studies test hypotheses by comparing exposures in cases vs controls, and cohort studies prospectively follow groups to measure disease incidence and relative risks. The document discusses key aspects of study design, biases, strengths and limitations for each type.
This document discusses various quantitative research methods including surveys, correlational research, experimental research, causal-comparative research, and sampling methods. It provides details on how each method works, including how variables are studied and the advantages and limitations of each approach. It also discusses ethical considerations and guidelines for writing the methodology section of a research study.
The document discusses factors that threaten the validity of research findings, including internal and external validity. It examines 10 threats to internal validity related to history, maturation, testing, instrumentation, regression, selection bias, attrition, and their interactions. It also discusses 4 threats to external validity regarding reactive effects of testing, selection bias and treatments, experimental arrangements, and multiple treatments. The document then summarizes 12 research designs and their strengths and weaknesses in controlling for threats to internal and external validity.
Advanced Regression Methods For Single-Case Designs Studying Propranolol In ... – Stephen Faucher
This document discusses a study that used advanced regression methods to analyze data from a single-case design clinical trial of propranolol for treating agitation in patients with traumatic brain injury. The study was a double-blind, randomized clinical trial of 13 patients (9 men and 4 women) with traumatic brain injury. Logistic regression models found that propranolol was not associated with less agitation for most participants, though 4 participants did show a significant response. The study demonstrates how single-case design data can be analyzed using regression methods to obtain clinically and statistically significant information about psychological and medical treatments.
This document discusses various epidemiological study designs used to assess health outcomes and answer clinical questions. It begins by outlining the 6 D's of health outcomes - death, disease, discomfort, disability, dissatisfaction, and destitution. It then describes key clinical questions and types of epidemiological studies including descriptive studies, analytical observational studies, and experimental/interventional studies. Descriptive studies involve systematically collecting and presenting data to describe a situation, while analytical studies aim to establish causes or risk factors by comparing groups. Specific analytical study designs covered include case-control studies, cohort studies, and randomized controlled trials.
This document discusses the case study approach to research. It begins by defining a case study as an in-depth exploration of a complex issue within its real-world context. The document then discusses different types of case studies, how they are conducted, and common challenges. Key points include: 1) Case studies can explore issues, events, or phenomena, 2) They use multiple data sources to provide a nuanced understanding, 3) Challenges include maintaining objectivity and generalizing from a single case.
The document discusses various methods used in quantitative research, including survey research, correlational research, experimental research, causal-comparative research, and sampling methods. It provides details on each method/technique, such as how surveys involve using scientific sampling and questionnaires to gather information from a population. It also discusses the different types of experimental, causal-comparative, and correlational research designs. Additionally, it outlines the various steps involved in sampling, including defining the population, selecting a sampling frame, choosing a sampling technique, determining sample size, collecting data, and assessing response rates.
Research design involves decisions about how to collect and analyze data to answer research questions or solve problems. There are two main types of research design: observational studies and experimental studies. Observational studies observe naturally occurring events without intervention, while experimental studies involve deliberate human intervention to change the course of events. Common research designs include descriptive studies, analytical studies, case-control studies, cohort studies, cross-sectional studies, and randomized controlled trials. Research design aims to ensure valid, unbiased conclusions through careful planning of study type, variables, data collection, and statistical analysis.
1. The document outlines different types of research designs - descriptive studies that observe phenomena without manipulation, and experimental studies that intentionally introduce a treatment and observe the results.
2. Descriptive studies collect information to demonstrate relationships, while experimental studies test hypotheses by manipulating variables and using control groups.
3. Research design provides a framework and plan to address research questions while maintaining integrity, protecting subjects, and minimizing bias. The chosen design depends on the question, resources, and feasibility.
Excelsior College PBH 321 Page 1 EXPERIMENTAL E.docx – gitagrimston
Excelsior College PBH 321
Page 1
EXPERIMENTAL EPIDEMIOLOGICAL STUDIES
Epidemiologic studies are either observational or experimental. Observational studies, including ecologic,
cross-sectional, cohort, and case-control designs, are considered “natural” experiments, but experimental
studies are considered true experiments. We will spend the next 2 modules discussing these designs.
Before we begin to discuss study designs, we need a brief introduction to a concept that we will spend more
time discussing in later modules -- bias. The definition of bias is:
“Deviation of results or inferences from the truth, or processes leading to such deviation. Any trend in the
collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are
systematically different from the truth.” (Last, J.M., A Dictionary of Epidemiology, 4th ed.)
Epidemiologists are naturally concerned whether the results of an epidemiologic study are biased, since many
important public health decisions are often drawn from epidemiologic research. The severity of the bias, that
is - how much it influences or distorts the results, is related to the study design as well as how information is
analyzed.
Experimental Studies
The defining feature of experimental studies is that the investigator assigns exposure to the study subjects.
Experimental studies most closely resemble controlled laboratory experiments and serve as models for the
conduct of observational studies, thus they are the “gold standard” of epidemiologic research. Experimental
studies have high validity (i.e., less bias), and can identify even very small effects. The most well known type of
experimental study is a randomized trial (sometimes referred to as a randomized controlled trial), where the
investigator randomly assigns exposure to the study subjects. In this type of study, the only expected
difference between the experimental and control groups is the outcome variable being studied.
Experimental designs like the randomized trial can assess both preventive interventions, where a prophylactic
agent is given to healthy or high-risk individuals to prevent disease, or can assess the effects of therapeutic
treatment, such as those given to diseased individuals to reduce their risk of disease recurrence, or to improve
their survival or quality of life.
Preventive intervention: Does tamoxifen lower the incidence of breast cancer in women with high risk profile
compared to high risk women not given tamoxifen?
Therapeutic intervention: Do combinations of two or three antiretroviral drugs prolong survival of AIDS
patients as well as regimens of single drugs?
The investigator can assign exposures (or allocate interventions) to either individuals or to an entire
community.
Individual-level assignment: Do women with stage I breast cancer given a lumpectomy alone survive as long
without recurrence of disease as women given a lumpec ...
LongitudinalAttrtionLitReviewNov09
Attrition on Longitudinal Surveys – Literature Review
Social Survey Division, ONS November 2009
Introduction
This paper presents a review of literature related to attrition on longitudinal surveys.
Research into attrition encompasses a very wide range of topics, including methods to
measure attrition, attrition bias measures, methods to reduce attrition and methods to
correct for attrition. In this paper we have focussed only on a few of these topics. In
particular, this paper does not examine attrition bias on survey estimates and
methodologies to correct for it, i.e. weighting or imputation.
The paper is organised into three sections:
Section 1 provides an overview of the attrition problem, how to measure attrition and the
theoretical framework of non-response in longitudinal surveys (pages 2-5).
Section 2 focuses on respondent and survey characteristics that have been found to be
associated with attrition (pages 6-10).
Section 3 provides a review of the most commonly used methods to reduce attrition
adopted by survey organisations before or during fieldwork (pages 10-16).
A summary of the main findings is also included.
Summary
1. Attrition due to non-response is a major issue of concern to researchers not only
because it may decrease the power of longitudinal analysis but also, and mainly
because, it may be selective, thus impacting on the generalisability of results to the
target population (attrition bias).
2. There is limited methodological research which examines standard definitions of
attrition/longitudinal response rates, in particular for households. Significant
exceptions are Lynn (2005) and Ribisl et al (1996) who recommend a set of standard
response rates to be published for longitudinal surveys. Detailed guidelines to
calculate attrition measures are also published by Eurostat (2004).
3. Lepkowski and Couper (2002) provide a framework to explain the longitudinal
response process. Longitudinal response can be seen as the result of three
conditional processes: locating a respondent; contacting the respondent at a given
location; and then obtaining the respondent's cooperation. Although these three
processes have parallels in cross-sectional surveys, they present longitudinal-specific
issues.
4. A vast body of empirical research has looked into which socio-demographic
characteristics are more likely to predict or to be associated with attrition. Attrition is
more likely among younger and older respondents, men, single (i.e. never married)
people and minority ethnic groups. More mixed evidence exists on the relationship
between attrition and employment, education and income.
5. Respondents’ prior experience of the survey plays a key role in predicting attrition.
Respondents who have little interest or knowledge of a survey topic are more likely to
refuse at later waves than other sample members. Non-response to potentially
sensitive questions, such as income, is also a good predictor of attrition at later
waves. More experimental research is needed to assess the impact of interview
length on attrition.
6. A large and constantly growing array of methods is available in longitudinal surveys to
locate respondents. The majority of tracking methods are potentially time-consuming
and costly, in particular reactive and interviewer-led tracking techniques. Very little
research has looked into the cost-effectiveness of the different tracking methods.
7. Various methods are incorporated into longitudinal survey design with an aim to
minimise refusals. These include incentives, refusal conversion techniques and extra
interviewer efforts. Incentives are one of the most popular methods employed and
reduce refusals both in cross-sectional and longitudinal surveys. In longitudinal
surveys, more evidence is needed to assess the impact of changes in incentives over
time (including introducing and ceasing incentives) and of incentive tailoring
strategies.
Section 1
Attrition: Definition, Measures and Theory
1.1. Attrition and sample attrition
In the context of longitudinal surveys, the term attrition is normally used to refer to the loss
of survey participants over time.
Attrition may occur for a number of different reasons and Watson and Wooden (2004)
classify these in two types. The first type includes reasons related to change in the
underlying population, such as deaths, and is often referred to as ‘natural attrition’. This
type of attrition is inevitable but from a statistical perspective is less problematic in
practice, as it reflects phenomena which occur not only in the study cohort but also in the
overall target population. The second type of attrition arises because sample members
cannot be contacted or they refuse to continue participation. Attrition due to non-response
is usually referred to as "sample attrition" or "panel attrition" (Lynn, 2006). This type is far
more problematic and it will be the focus of this literature review. From now on, we will
refer to this type of attrition as "sample attrition" or simply "attrition".
Lynn (2006) defines sample attrition as the "cumulative effect of non-response over
repeated waves or data collection efforts", not including non-response at Wave 1 of a
survey as this is before attrition has occurred. This definition implies a monotone process,
where sample members change their status from respondent to non-respondent, but not
vice versa. In many longitudinal surveys however, attempts are made to contact non-
respondents at previous waves. Therefore sample members may return to be
respondents at a subsequent wave. Some authors distinguish explicitly between "wave
non-respondents" and "attrition cases" (Plewis et al, 2008; Hawkes and Plewis, 2006).
Wave non-respondents are those cases who are interviewed on some occasions in a
longitudinal survey, but not on others; attrition cases refers to units who are initially part of
the sample but are, sooner or later, lost permanently at follow-up. Some other authors
instead use the term sample attrition indiscriminately to denote loss of study participants
at follow-up, whether permanent or temporary (Lynn, 2006).
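The distinction between wave non-respondents and attrition cases can be sketched in a few lines. This is an illustrative helper, not drawn from any of the cited papers: each sample member's history is coded as 1 where they responded and 0 where they did not, starting from a Wave 1 respondent.

```python
def classify(history):
    """Classify a per-wave response history (1 = responded, 0 = did not).

    Histories are assumed to start with a Wave 1 respondent, since
    Wave 1 non-response is excluded from attrition by definition.
    """
    if 0 not in history:
        return "complete respondent"
    first_miss = history.index(0)
    if 1 in history[first_miss:]:
        return "wave non-respondent"  # missed one or more waves, then returned
    return "attrition case"           # permanently lost at follow-up

print(classify([1, 1, 1, 1]))  # complete respondent
print(classify([1, 0, 1, 1]))  # wave non-respondent
print(classify([1, 1, 0, 0]))  # attrition case
```

Note that an "attrition case" can only be identified retrospectively: a trailing run of non-response looks the same whether the member is permanently lost or will return at a future wave.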
In any case, either temporary or permanent, sample attrition is an issue of concern to
longitudinal survey researchers for at least two reasons. Firstly, similarly to attrition due
to demographic losses, sample attrition reduces the size of the sample available for
longitudinal analysis where data from the same respondent for one or more waves is
needed. This causes loss of statistical power with longitudinal samples becoming too
small to produce robust statistical analysis and panel data estimates losing significance.
At high levels, sample attrition may even threaten the viability of continuing a panel
(Watson and Wooden, 2009). Secondly, non-response attrition may be selective, in that
those who are lost at follow up may be different from those who remain in the sample.
Non-random attrition causes great concern as it impacts on the generalisability of results
to the entire target population. This problem is often referred to as "attrition bias". Many
studies have been carried out to investigate the extent of attrition bias in specific surveys
by looking at how the characteristics of attriters differ from those of respondents (Hawkes
and Plewis, 2006). We report some of their main findings in Section 2.
1.2 Measures of attrition
1.2.1 Attrition and response rates
Attrition rates are a typical measure used to report levels of attrition. These are defined as
the proportion of respondents who are lost at follow-up. Many surveys however do not
report attrition rates directly, but these can be derived from their published response
rates.
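As a minimal arithmetic sketch (the counts are invented for illustration), an attrition rate between consecutive waves is simply the complement of the conditional wave response rate:

```python
# Hypothetical counts: wave 1 respondents still eligible at wave 2,
# and how many of them responded again at wave 2
wave1_respondents_eligible = 1000
wave2_respondents = 820

conditional_response_rate = wave2_respondents / wave1_respondents_eligible
attrition_rate = 1 - conditional_response_rate  # proportion lost at follow-up

print(f"{conditional_response_rate:.0%} responded, {attrition_rate:.0%} attrited")
```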
Response rates can be computed and reported in many different ways. As one of the key
indicators of survey quality, the importance of developing adequate standards to allow
comparison of response level across survey organisations has been long acknowledged
(Smith, 2002). The American Association of Public Opinion Research (AAPOR)
published, in the late 1990s, recommended standards to define and calculate response
rates, currently in its 5th edition (AAPOR, 2008). In the UK, Lynn et al (2001)
recommended standards for face to face surveys of households and individuals.
Examples of work on the development of response standards in other countries can be found
in Kasse (1999), Hidiroglou et al (1993) and Allen et al (1997).
More limited methodological research has looked specifically at developing standards for
calculating longitudinal response rates. Even the AAPOR Standard Definitions manual
(AAPOR, 2008) provides only generic guidelines on the calculation of response rates for
multi-wave surveys, stating that response rates should be calculated and reported "for
each separate component and cumulatively".
The most extensive work on defining response rates for longitudinal surveys that was
found in the literature is an unpublished paper by Lynn (2005), in which he extends his
previous work on response rates standards for cross sectional surveys to longitudinal
surveys. Lynn (2005) also argues that no single rate can summarise the overall level of
response to a longitudinal survey and recommends instead a number of different
response measures to be calculated and published. Lynn (2005) refers to longitudinal
surveys as surveys with multiple Data Collection Events (DCEs). In his framework, rates
are explicitly defined according to a particular set of DCEs. For each set of DCEs, rates
can be defined either unconditionally or conditionally. Unconditional response rates are
based on all sample units who were eligible for all of the relevant DCEs while conditional
response rates depend upon response to some other set of DCEs, typically one or more
prior DCEs. This results in up to ∑=
−
m
i
i
1
)12( different response rates that could be
reported for a survey with m DCEs. For a survey of 5 waves, that would mean 57 different
response rates.
Out of all possible response rates, Lynn (2005) recommends that the following are always
published:
• Complete response rate: Response to every wave/ Eligible at every wave
• Wave-specific response rates: Responses to wave k/Eligible at wave k
• Wave specific response rates conditional upon response at the previous wave:
Response at wave k/Eligible at wave k and respondent at wave k-1
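The count of possible rates and the three recommended measures can be sketched on hypothetical response histories (an assumption made here to keep the denominators simple is that every unit is eligible at every wave):

```python
# Number of distinct response rates definable over m DCEs: sum_{i=1..m} (2^i - 1)
def n_possible_rates(m):
    return sum(2**i - 1 for i in range(1, m + 1))

print(n_possible_rates(5))  # 57, as noted above for a 5-wave survey

# Hypothetical unit-level response histories (True = responded at that wave)
histories = {
    "A": [True, True, True],
    "B": [True, False, True],
    "C": [True, True, False],
    "D": [False, False, False],
}

# Complete response rate: response to every wave / eligible at every wave
complete = sum(all(h) for h in histories.values()) / len(histories)

# Wave-specific rate for wave 2: response at wave 2 / eligible at wave 2
wave2 = sum(h[1] for h in histories.values()) / len(histories)

# Conditional on wave 1: wave 2 respondents among wave 1 respondents
wave1_resp = [h for h in histories.values() if h[0]]
conditional = sum(h[1] for h in wave1_resp) / len(wave1_resp)

print(complete, wave2, round(conditional, 3))  # 0.25 0.5 0.667
```

The conditional rate exceeding the unconditional wave rate here illustrates why Lynn argues that no single figure can summarise longitudinal response.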
Additionally, if the survey design is such that a new sample enters at each wave, then the
sample-specific wave response rates should also be published.
Finally, if there are certain combinations of DCEs important for analysis purposes, survey
organisations or data providers should also identify these key combinations and publish
the relevant response rates. Lynn's framework has been recently adopted in the UK by
the National Centre for Social Research for the reporting of response rates for the English
Longitudinal Survey of Ageing (Scholes et al, 2009).
Ribisl et al (1996) also suggested that five types of response rates should be published in
panel studies, making an explicit distinction between cooperation and location rates for
later waves of a survey. Their recommended rates include:
• Baseline response rate: Number of completed baseline interviews/ Number of
eligible individuals
• Gross follow-up location rate: Number of participants located at follow up/Number
of completed baseline interviews
• Gross follow-up completion rate: Number of completed follow-up
interviews/Number of completed baseline interviews
• Eligible participant follow-up completion rate: Number of completed follow-up
interview/Number of completed baseline interviews still eligible at follow-up
• Cumulative follow-up completion rate: Number of individuals with completed
interviews at all follow-up time periods/Number of completed baseline interviews
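On hypothetical counts for a panel with follow-ups (all figures invented for illustration), the five rates work out as:

```python
# Hypothetical panel counts
eligible_at_baseline = 1200
completed_baseline = 900          # completed baseline interviews
located_followup = 810            # located at follow-up
completed_followup = 720          # completed follow-up interviews
still_eligible_followup = 880     # baseline completers still eligible at follow-up
completed_all_followups = 650     # completed every follow-up interview

baseline_rr = completed_baseline / eligible_at_baseline             # 0.75
location_rate = located_followup / completed_baseline               # 0.90
gross_completion = completed_followup / completed_baseline          # 0.80
eligible_completion = completed_followup / still_eligible_followup  # ~0.82
cumulative = completed_all_followups / completed_baseline           # ~0.72

print(baseline_rr, location_rate, gross_completion)
```

Separating the location rate from the completion rates is the useful feature of this scheme: it shows whether losses stem from failure to find sample members or from refusals among those found.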
Eurostat (2004) have also devised standard longitudinal response measures for the
surveys feeding into its Statistics on Income and Living Condition (EU-SILC) longitudinal
component. The following response measures are required by each member state for the
second and following waves of the EU-SILC:
• Wave response rate: the proportion of eligible sample units at that wave which
responded to the survey
• Longitudinal follow-up rate: the percentage of units which are passed on to wave
k+1 for follow-up within units received into wave k from wave k-1, excluding those
out of scope or non-existent.
• Follow-up ratio: number of units passed on from wave k to wave k+1 in
comparison to the number of units received for follow-up at wave k from wave k-1
• Achieved sample size ratio: ratio of the number of responding units in wave k to
the number of responding units in wave k-1
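As a rough sketch, assuming hypothetical counts for wave k (the parameter names are our own, not Eurostat's), the four measures are again simple ratios:

```python
def eu_silc_measures(eligible_k, responding_k, responding_k_minus_1,
                     received_k, in_scope_received_k, passed_on_k):
    """Eurostat's (2004) standard EU-SILC longitudinal response measures."""
    return {
        "wave_response_rate": responding_k / eligible_k,
        # Units passed to wave k+1 among in-scope units received into wave k.
        "longitudinal_followup_rate": passed_on_k / in_scope_received_k,
        # Units passed on vs units received for follow-up at wave k.
        "followup_ratio": passed_on_k / received_k,
        "achieved_sample_size_ratio": responding_k / responding_k_minus_1,
    }

m = eu_silc_measures(eligible_k=5000, responding_k=4200,
                     responding_k_minus_1=4500, received_k=4800,
                     in_scope_received_k=4700, passed_on_k=4400)
print(m["wave_response_rate"])  # 0.84
```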
1.2.2. Household response rates
One of the difficulties in calculating longitudinal response rates stems from the difficulty
of dealing with a dynamic picture. Survey units change over time, with some units ceasing
to exist while new ones are created. This is a particular issue for longitudinal household
surveys. Households are more transient units than individuals, as they may change
composition over time with original members leaving and/or new individuals joining the
original household.
The conceptual difficulties surrounding the definition of a longitudinal household have led
some surveys to publish only personal longitudinal response rates. This is the case for
the Survey of Labour and Income Dynamics in Canada (SLID) (Michaud and Webber,
1994). Other surveys do calculate and publish household and individual level longitudinal
rates. That is the case, for example, for the EU-SILC, which provides detailed guidelines on
how to produce its standard response measures both at household and person level
(Eurostat, 2004).
With the exception of Eurostat’s 2004 paper, we could not find any methodological
literature looking into the specific issues surrounding the calculation of longitudinal
household response rates, including how to link households over time and how split
households (i.e. households that are formed when one or more individuals leave their
original households) should be taken into account in the calculation of response rates.
1.3. Attrition theory
The factors which cause non-response in longitudinal surveys are, in many ways, similar
to those that operate on standard cross-sectional surveys (Lynn et al, 2005). However,
there are also some mechanisms which are specific to longitudinal surveys.
In order to illustrate the attrition process, Lepkowski and Couper (2002) extend Groves
and Couper’s (1998) general theory of non-response to longitudinal surveys. In Lepkowski
and Couper’s framework, the process that leads to non-response attrition at a second (or
later) wave of a panel survey can be divided into three conditional processes:
1. Location: locating a sample member;
2. Contact: contacting the sample member, given location;
3. Co-operation: obtaining an interview from the sample member, given contact.
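Because each stage is conditional on the previous one, the overall probability of obtaining a later-wave interview is the product of the three conditional probabilities. A minimal numerical sketch, with purely hypothetical figures:

```python
# All probabilities below are hypothetical, chosen only to illustrate
# Lepkowski and Couper's decomposition; they are not survey estimates.
p_locate = 0.95      # P(sample member is located)
p_contact = 0.97     # P(contacted | located)
p_cooperate = 0.90   # P(interview obtained | contacted)

# Overall probability of a completed interview at the later wave.
p_interview = p_locate * p_contact * p_cooperate
print(round(p_interview, 3))  # 0.829
```

Even modest per-stage losses compound: three stages retaining 90 to 97 per cent each yield an overall response probability below 83 per cent.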
1.3.1. Location
The failure to locate respondents over waves of a longitudinal survey is often one of the
major causes of attrition (Ribisl et al, 1996). The propensity of locating a respondent can
be seen as the combination of the propensity of the respondent to move and the
propensity to locate a person who has moved (Couper and Ofstedal, 2009).
Geographical mobility is an area of research in its own right and is a phenomenon that
cannot be manipulated by survey practitioners; therefore this paper will not go into it in
much detail. It is worth noting that longitudinal panel surveys can provide a variety of
information to help predict the likelihood that a respondent will move. For example,
Uhrig (2008) found that respondents who expressed a desire to move or a lack of
attachment to their area, as well as respondents who rent their home, are more likely
to become non-contacts at a later wave. Uhrig (2008) suggests that this is presumably
due to relocation, and that these measures could be used to predict subsequent non-
response in panel studies.
The process of locating (tracking) respondents who have moved is of more interest to this
literature review, as survey practitioners can have some control over it. In order to locate
respondents, survey organisations need to ensure that up-to-date contact details, such as
name, address, telephone number and email address, are held for each respondent
(Uhrig, 2008). Laurie et al (1999), in their study of tracking procedures, found
that approximately 10 per cent of respondents move within a given year. However,
McAllister et al (1973) state that respondents do not disappear unless they are
deliberately trying to do so. This highlights that there is considerable promise in tracking
respondents if the right methods are used. Groves and Hansen (1996) suggest that ‘with
adequate planning, multiple methods and enough information, time, money, skilled staff
and perseverance, and so on, 90 per cent to 100 per cent location rates are possible’. We
will look at survey tracking efforts in more detail in Section 3.
1.3.2 Contact
Once a panel member has been located, the contact process is not very different from
cross-sectional surveys (Watson and Wooden, 2009). Contactability will depend on the
respondent's patterns of being physically present at the place of contact (normally home),
or any physical impediments (e.g. locked or shared entrances to the dwelling), and finally on
the survey organisation's effort in making contact (Uhrig, 2008). Additionally, in a
longitudinal survey, interviewers have prior knowledge of at-home patterns, information
on the best time to call (Lepkowski and Couper, 2002) and awareness of physical
barriers, if the respondent has not moved. Therefore the impact of these factors should be
smaller than in a cross-sectional survey (Uhrig, 2008), suggesting that non-contact should
be a relatively small phenomenon at later waves, given successful location (Lepkowski
and Couper, 2002). Non-contact attrition tends to be more a result of failure to locate
respondents than of failure to contact them.
1.3.3 Co-operation
Refusals in longitudinal surveys differ significantly from those in cross-sectional surveys. After the
first wave of a longitudinal survey the sample has already experienced the interview
process and is aware of the survey’s topics, its cognitive requirements and its time
commitment. Respondents will use this experience as a guide as to whether or not to
participate in future waves (Lynn et al, 2005).
By its very nature, a longitudinal survey places a greater burden on respondents and this
factor alone can induce sample members to refuse cooperation at the outset. Indeed
Apodaca et al (1998) report that the presence of a ‘perceived longitudinal burden’
resulted in a 5 per cent drop in response rates. 'Panel fatigue' (Laurie, Smith and Scott,
1999) is also often present, and over time respondents may feel like they have 'done
enough'.
Laurie et al (1999) identify two types of refusers on longitudinal surveys:
• Wave specific refusers are individuals who refuse to take part for one wave
because of circumstantial situations, for example illness or bereavement, but
may participate at a successive wave.
• Definite withdrawals are refusers who are adamant that they don't wish to
take part in the study (any more).
Various methods are incorporated into longitudinal survey practice with the aim of
minimising refusals. We will look at survey organisations' efforts to improve co-operation
in more detail in Section 3.
Section 2
Factors associated with attrition
Using Lepkowski and Couper's (2002) framework, overall survey non-response can be
seen as the cumulative effect of failure to locate, failure to contact and refusal to
cooperate. These processes may operate independently of one another, but they all
contribute to the overall attrition (Nicoletti and Peracchi, 2005). Uhrig (2008) notes that
the literature often makes little differentiation between these processes, as a greater
focus is put on the general absence of data regardless of the process generating it.
The likelihood that a sample member will be located, contacted and will cooperate at a
later wave of a longitudinal survey depends on respondents' personal characteristics, but
also on their previous survey experience and on the survey organisation's operational
efforts. In
this section we will look at the first two aspects, while the survey organisation processes
will be discussed in Section 3.
2.1. Individual characteristics related to non-response
There is a large amount of literature on non-respondents' characteristics, often looking
separately at characteristics of refusals and non-contacts. This literature has been
reviewed extensively by Watson and Wooden (2009), Uhrig (2008) and Lynn et al (2005).
We present here their main findings relative to a number of key socio-economic
characteristics.
2.1.1 Age
Both Lynn et al (2005) and Uhrig (2008) report that a wide body of empirical research has
found that elderly and young people are more likely not to be contacted
(Cheesbrough, 1993; Lillard and Panis, 1998; Foster, 1998; Groves and Couper, 1998;
Lynn and Clarke, 2002; Stoop, 2005; Watson, 2003). The elderly also appear more likely
to refuse survey cooperation (Hawkins, 1975; Foster and Bushnell, 1994; Groves and
Couper, 1998; Lepkowski and Couper, 2002). Some authors suggest that a greater
likelihood of situational refusal among the elderly could be due to the increasing chance of
finding older sample members with health problems (Groves et al, 2000). Indeed,
research by Jones et al (2006) found that age has no effect on refusals if health is good.
Focussing on evidence from longitudinal studies, Watson and Wooden (2009) confirm
that attrition is higher amongst the youngest, but that response patterns are more mixed
for the elderly, with some studies finding rising attrition propensities in old age (e.g.
Fitzgerald et al, 1998), others reporting the reverse (e.g. Hill and Willis, 2001), and others
again reporting no clear evidence in either direction (Behr et al, 2005; Nicoletti and
Peracchi, 2005).
2.1.2 Household structure
A large body of research highlights that single people (never married) are more likely to
not be contacted (e.g. Gray et al, 1996) and refuse participation in surveys (Goyder,
1987; Lillard and Panis, 1998; Nicoletti and Peracchi, 2002). Households with children
and married couples appear less likely to be lost at follow-up (Fitzgerald et al, 1998;
Lillard and Panis, 1998; Nicoletti and Peracchi, 2005; Zabel, 1998). Jones et al (2006),
however, find no effect of marital status on non-response.
2.1.3 Gender
Within panel surveys, men appear to attrit more frequently than women (Lepkowski and
Couper, 2002; Hawkes and Plewis, 2006; Behr et al, 2005). Lynn et al (2005) report that men are
less likely to be contacted than women (Goyder, 1987; Foster, 1998; Lepkowski and
Couper, 2002). Research by Watson (2003) on the European Community Household
Panel found that once education, employment, child care responsibilities and other
factors are controlled, the gender effect disappears.
2.1.4 Labour market activity, income and education
Mixed evidence is found in the literature regarding the relationship between attrition and
employment, income and education.
Employment outside the home, either as an employee or self-employed, has been found
to be associated with non-contact generally, and in longitudinal surveys in particular (Foster
and Bushnell, 1994; Goyder 1987; Groves and Couper 1998; Lynn and Clarke, 2002;
Nicoletti and Peracchi, 2005). Hawkes and Plewis (2006) and Branden et al (1995) also
found that job instability appears related to non-contactability. However, Fitzgerald et al
(1998), Zabel (1998) and Jones et al (2006) all found no significant relationship between
employment status and attrition. Gray et al (1996) actually found attrition rates to be
lowest among the employed. Using data from the British Household Panel Survey (BHPS)
and the German Socio-Economic Panel, Nicoletti and Buck (2004) found that the
economically inactive had higher cooperation rates in one sample, but significantly lower
contact probabilities in the other. Uhrig (2008) reports that respondents who are
unemployed are more likely to be non-contacts and speculates that this may be due to
individuals moving in order to find employment.
Lynn et al (2005) report that survey refusal appears more likely amongst those with low
incomes (Fitzgerald et al, 1998; Nathan, 1999), while households with higher incomes
appear more difficult to contact (Foster and Bushnell, 1994; Lynn and Clarke, 2002).
Uhrig (2008) also finds that low-income has a slight positive effect on contactability and a
negative effect on cooperation. A study by Branden et al (1995, reported by Uhrig,
2008) finds that household financial instability of any type, either positive or negative, is
associated with non-response. Uhrig (2008) speculates that large shifts in earnings may
signal some other important structural change in the household (e.g. geographical move,
change in employment). Analysis of the European Community Household Panel by
Watson (2003) also found that a relationship between income and attrition exists but it
differs across countries, with higher attrition being associated with lower income in
northern European countries but with higher income in southern European countries.
Finally, Watson and Wooden (2009) report a number of studies that have found no
evidence of any significant relationship between income and attrition (Gray et al, 1996;
Zabel, 1998; Lepkowski and Couper, 2002; Nicoletti and Peracchi, 2005) and concludes
that income is probably relatively unimportant for attrition.
As for education, less educated individuals appear more likely to attrit in panel studies
(Jones et al, 2006; Watson, 2003; Behr et al, 2005; Lillard and Panis, 1998), but the
magnitude of the relationship is arguably small (Watson and Wooden, 2009). Watson
(2003) also finds the reverse relationship in Southern Europe, where less education is
associated with lower rates of attrition.
2.1.5 Ethnicity and language
Studies have shown that ethnic minority groups are more likely to be non-respondents
(Zabel, 1998; Burkam and Lee, 1998). While Lynn et al (2005) report that people from
ethnic minorities are more likely to be refusals (Foster, 1998; Fitzgerald et al, 1998; Lyer,
1984; Lynn and Clarke, 2002; Nathan, 1999), Watson (2003) reports two studies from
Gray et al (1996) and Lepkowski and Couper (2002) which find that non-response among
ethnic groups was mainly due to lower rates of contact and not higher rates of refusals.
Lower contact rates for non-white respondents are also reported by Uhrig (2008) and
Calderwood (2009).
Watson and Wooden (2009) point out that limited research has looked at the relationship
between language-speaking ability and attrition, although cross-sectional surveys in
English-speaking countries have almost always reported lower response rates for non-
English speakers. Uhrig (2008) found that difficulty with the English language was
associated with future non-contact. De Graaf et al (2000), in the
Netherlands Mental Health Survey and Incidence Study also found that respondents not
born in the Netherlands were more likely not to be located.
2.1.6 Other findings
People living in urban areas, for example London, are not only more likely to be non-
contacts, but also to be refusals (Goyder, 1987; Couper, 1991; Foster, 1998). Watson and
Wooden (2009) also report a number of studies which have confirmed the expectation
that people living in urban areas are both less available and harder to reach (Gray et al,
1996; Burkam and Lee, 1998; Zabel, 1998), with only Lepkowski and Couper (2002)
reporting contrary evidence.
People's attachment to their housing unit, as well as to their surrounding neighbourhood
can be indicative of likely future geographical mobility, which in itself is a strong predictor
of contactability (Uhrig, 2008). Research has shown that renters are more likely to attrit
than home owners (Zabel, 1998; Lepkowski and Couper, 2002; Watson, 2003).
Lepkowski and Couper (2002) found that indicators of community attachment and social
integration, including frequency of visits to friends, engagement in community affairs and
interest in politics, appear to be positively associated with survey cooperation. Couper
and Ofstedal (2009) also note that a sample consisting of a higher rate of socially isolated
individuals may be more difficult to locate.
Few studies of attrition have taken into account a measure of respondents' health when
studying attrition (Uhrig, 2008). Exceptions are Lepkowski and Couper (2002), Jones et al
(2006) and Couper and Ofstedal (2009) who found that those who reported worse health
or who were less satisfied with their health were less likely to respond at a later wave.
Uhrig himself, however, does not find significant evidence in the BHPS to confirm a
relationship between attrition and health.
2.2. Survey experience
Even more than in cross-sectional surveys, in longitudinal studies it is essential to ensure
the survey experience is as pleasant as possible, as this experience will have an impact
not only on cooperation at a particular point in time, but also at later waves (Laurie and
Lynn 2009; Rodgers 2002). Indeed, Watson and Wooden (2009) suggest that "the
respondent's perception of the interview experience is possibly the most important
influence on cooperation in future survey waves". Research into this area by Hill & Willis
(2001, cited by Lynn et al, 2005) found that around 75 per cent of respondents who didn't
enjoy their experience were still participating at wave 3, compared to 90 per cent of those who did.
2.2.1 Salience and sensitivity
Respondents who have little interest in or knowledge of a survey's topic are more likely to
refuse at later waves than other sample members. Questions that are considered
sensitive by respondents may also promote attrition at later waves (Lepkowski and
Couper, 2002; Branden et al, 1995).
Both the salience and the sensitivity of a questionnaire can be reflected in the amount of item
non-response. Watson and Wooden (2009) found that the number of questions not
answered at previous waves is a good indicator of attrition at future waves. This is
particularly true for non-response to potentially sensitive questions. Missing data on
sensitive questions may be indicative of a negative interview experience, but it also
shows how committed a respondent is to participating in the study. Research on sensitive
questions has focussed particularly on income. Non-response to income questions has
proved to be an important predictor of non-response at subsequent waves (Branden
et al, 1995; Uhrig, 2008). Uhrig (2008) also finds that item non-response to political
preference questions predicts subsequent survey refusal, suggesting that politics is a
sensitive topic.
2.2.2 Interview length
The number of questions and the length of the questionnaire can have an impact on
individuals' propensity to cooperate at further waves of a longitudinal survey (Uhrig,
2008). It is expected that a longer interview places a greater burden on respondents, thus
reducing their willingness to cooperate at later waves. This illustrates the argument of
opportunity cost vs. perceived reward.
Research into the relation between interview length and attrition, as reported by Uhrig
(2008), shows instead that respondents who had short interviews at previous waves were
more likely to be non-respondents at subsequent waves (Branden et al, 1995; Zabel,
1998). Although these results may appear to be counterintuitive, Watson and Wooden
(2009) explain that interview length is actually a product of how willing respondents are to
talk to the interviewers. Thus, the respondents most interested in the survey, and who find
it a more enjoyable experience, will have longer interviews. Uhrig (2008) also notes that
the running time of the interview can signal greater respondent burden but it can also
signal a greater commitment by the respondent to give more information. Branden et al
(1995) suggest that the association between longer interview lengths and sample
retention can be explained by:
- Interest in the outcome of the survey
- Interest/salience of the survey topic
- Sense of civic duty on government surveys
- Good rapport between interviewer and respondent.
The association between the length of an interview and attrition therefore does not
necessarily reflect the association between the length of a questionnaire and attrition.
Experimental evidence would be needed to assess the effect of different interview
lengths on attrition. Nevertheless, Zabel (1998) reports that attrition rates on the Panel
Study of Income Dynamics (PSID) were reduced after an explicit attempt to decrease the
survey length.
2.2.3. Respondent co-operation at previous waves
Respondents' co-operation at prior interviews appears to be a good predictor of further
participation in the survey (Branden et al, 1995; Laurie et al, 1999; Lepkowski and
Couper, 2002; Uhrig, 2008). Co-operation can be measured directly in the survey by
asking the interviewer to rate how cooperative the respondent is or through the use of
paradata. Respondents rated by interviewers as anything less than 'excellent' in terms
of cooperativeness were more likely to subsequently refuse in Uhrig's (2008) study
of the BHPS. A study by Cheshire and Hussey (2009) using paradata from the English
Longitudinal Study of Ageing (ELSA) found that those respondents who consulted
documents during the interview and who provided consent for data linkage were less
likely to become refusals at a later stage. As mentioned earlier, the amount of item non-
response, and non-response to potentially sensitive questions such as income, has also
proved to be an important predictor of non-response at subsequent waves (Branden et al,
1995; Cheshire and Hussey, 2009).
Willingness to be re-contacted is another important indicator of attrition. Respondents
who did not provide any tracking information or who failed to provide complete tracking
information were more likely to be refusals at later stages. Uhrig (2008) observes that
supplying partial contact details may be interpreted as an "advance soft refusal".
Section 3
Survey organisation processes
Survey organisations have devised and implemented a number of systems to locate,
contact and ensure continued cooperation from panel members in an effort to reduce
attrition over the life of their longitudinal studies. We present here an overview of these
methods, with a particular focus on methods to locate respondents and to reduce
refusals.
3.1 Locating Respondents
Methods to locate respondents are often referred to in the literature as tracking
techniques. There is a large and constantly growing array of tracking methods and
resources available for use on longitudinal surveys. Couper and Ofstedal (2009) classify
tracking procedures into two groups: proactive and reactive techniques.
3.1.1 Proactive techniques
Forward-tracing or prospective techniques are methods of tracking respondents that try to
ensure that up-to-date contact details are available at the start of the wave fieldwork.
Information is gained from the respondents themselves, by ensuring that the most
accurate contact details are recorded at the latest interview and/or by updating the
contact details before the next wave occurs (Burgess, 1989; Couper and Ofstedal, 2009).
These methods are often relatively inexpensive and have proved to be successful, as the
most useful source of information for tracking is often the participants themselves (Ribisl
et al, 1996).
Obviously, all biographical information needs to be recorded accurately at each wave
(McAllister et al, 1973; Ribisl et al, 1996). Research has found that ensuring the
correct spelling of an individual's name, by asking the respondent to spell it letter by letter
(especially for unique names), makes it easier to contact respondents in the future.
Nicknames, maiden or birth names and aliases should also all be recorded. Recording
some vital statistics such as date and place of birth can also be useful to track
respondents (Gunderson and McGovern, 1989).
In order to reduce non-contact attrition, it is also useful to record certain types of
information at previous interviews (Ribisl et al, 1996). For example, asking respondents
whether they have any plans to move within the next six months, collecting details of their
new address, if known, and asking when would be the best time to
call at future waves (Ribisl et al, 1996). Many longitudinal surveys ask for additional
contact details of friends or relatives of the respondent when interviewed at each wave
(McAllister et al, 1973). Craig (1979) and Bale et al (1984) note that participants' mothers
are the most helpful contacts, as they are more likely to maintain contact with the
participant. Couper and Ofstedal (2009) point out that individuals with large extended
families and strong family ties have many potential sources through which their current
location can be ascertained. But the success of this method should not be overstated, as
the contact person may be just as mobile or elusive as the respondent themselves: there is
no guarantee that the additional contacts will be traceable at the next wave.
Keeping in contact with respondents between waves of a longitudinal survey helps to
reduce non-contact attrition, maintains rapport and encourages a sense of
belonging to the survey (Laurie et al, 1999). There is evidence that obtaining contact
updates between waves has a positive effect not only on tracking respondents over the
waves of a longitudinal survey but also in terms of cooperation at later dates (Couper and
Ofstedal, 2009).
Respondents may be asked to provide address or other contact details updates between
waves. For example, when last interviewed, they can be given a change of address post
card and/or a telephone number or email address to get in touch with the survey
organisation if their details change. Alternatively or additionally, change of address
cards and/or confirmation of address cards can be sent to respondents between waves,
with a request to return them to the survey organisation. Laurie et al (1999) reported that
the BHPS receives approximately 500 change of address cards each year and one third of
respondents return the confirmation of address card. This method is relatively
inexpensive and is easy to administer, although it does increase the burden placed on the
respondent. To address the increased burden on respondents, Ribisl et al (1996) suggest
offering a small incentive to increase compliance. For example, the BHPS sends £5 as a
conditional incentive to those who return the change of address card between interview
points.
Other keep-in-touch exercises include short telephone interviews between waves, which
allow respondents to update any of their contact details and also highlight if any
respondents need to be tracked (Ribisl et al, 1996). This procedure can also be used as a
way of ensuring that respondents are free for interviewing during the fieldwork period.
Many survey organisations also keep in touch with respondents between waves by
mailing a newsletter or report containing a summary of the survey's results to date (e.g.
ELSA). This method is meant to encourage respondents to feel that their opinions and
experiences are contributing to a worthwhile project, thus encouraging participation at the
following interview(s). Additionally, it allows the survey organisation to check respondents'
contact details. For example, if mail or newsletters are returned to sender, this highlights
that the respondent will need to be tracked before the field period begins.
3.1.2 Reactive techniques
Reactive or retrospective techniques are tracking procedures which occur once the
interviewer finds that the respondent can no longer be contacted using the contact details held
by the survey organisation (Laurie et al, 1999). A number of participants are still able to
be contacted retrospectively, but these tracking methods tend to be less cost-effective
when compared to proactive methods.
Reactive tracking is normally attempted by interviewers in the field or by a centralised
tracking team (Couper and Ofstedal, 2009).
Training can be provided to interviewers to emphasise the importance of tracking and the
impact that non-contact attrition can have on a longitudinal survey. Interviewers often
have a great deal of local knowledge and tracking skills (Laurie et al, 1999) and these
skills can be used to their full capacity in order to help reduce attrition. For example
interviewers can be encouraged to contact neighbours and other members of the
community if a respondent has moved, leave letters for present occupiers to forward to
respondents if the new address is known, etc. However, interviewers will necessarily work
on a case-by-case basis and therefore tracking will be expensive (Couper and Ofstedal,
2009), and it is important that issues of privacy and ethics are taken into account.
Tracking participants is often the toughest and most frustrating job in any longitudinal
survey so it is important to motivate interviewers to locate respondents. Ribisl et al (1996)
suggest that rewards can be offered to interviewers with high rates of success in
tracking individuals, and that interviewers with particular experience or skills in tracking
respondents could be dedicated to tracking work.
Some longitudinal surveys employ a dedicated tracking team, whose responsibility is
uniquely to track respondents who cannot be located. Many respondents’ contact details
are accessible through existing databases which are updated on a regular basis, for
example, telephone directories and electoral registers. Although such databases could
also be used proactively, they are more often queried by survey organisations after
learning that one or more respondents can't be located. A centralised tracking team can
make a cost-effective use of these resources as it is possible to search for a high number
of respondents' details at one time.
Available databases for tracking are dependent upon the country in which the survey is
taking place (Couper and Ofstedal, 2009). For example, some countries maintain
population registers that are updated every time an individual within the population
moves. These are often freely available to survey organisations. In the UK,
Royal Mail maintains a National Change of Address register; however, providing change
of address details is entirely voluntary. The Electoral Register can also be accessed for
use with permission, and birth, marriage, death and divorce registers are also a good
source of information. In the USA, there are also commercial vendors who provide
contact information in return for a small fee, consulting, for example, credit
card bills and tax rolls. In the UK, some companies provide access to telephone
directories. Access to some databases may be restricted due to the privacy legislation
and so a limited amount of information may be available to survey organisations.
Centralised tracking teams could also search for email addresses if they are listed
publicly. The issue with this method is that email addresses tend to change more often
than a respondent's home address. Another option may be to search the internet for a
person's name and last known address. This method is particularly successful if the
respondent has an unusual name (Couper and Ofstedal, 2009).
3.1.3 Issues to consider
The majority of tracking methods are potentially time-consuming and costly. Reactive
methods have proven more costly than proactive methods. Centralised tracking methods
are the most cost-efficient, whereas tracking done by the interviewers themselves are the
most costly (Couper and Ofstedal, 2009). The tracking process has long been
considered only an operational issue, with very little research looking at the relative
effectiveness of the different methods. Two recent papers have started to fill this
knowledge gap.
Fumagalli et al (2009) conducted an experiment on the BHPS looking at the
effectiveness, in terms of tracking success, of the following methods:
1. Use of address confirmation card vs change of address card vs neither
2. Use of a pre-paid (unconditional) vs post-paid (conditional) incentive for address
confirmation/change of address card
3. Use of a standard findings report to be sent to respondent before fieldwork vs a tailored
report for young and busy people.
The research found that a conditional incentive on return of a change of address
card was more effective at tracing respondents than either an unconditional
incentive or an address confirmation card. It also found only a limited effect of the
tailored report compared to the standard version.
McGonagle et al (2009) carried out a similar experimental test on the PSID, looking in
particular at the design of the change of address/confirmation card sent between waves,
the timing and frequency of the mailing and the use of a pre-paid or post-paid incentive.
The study found that the old card design performed better than the new design and that
there was no difference in response to the card mailing for the pre and post-paid incentive
groups. The study also found that families who received a second mailing had
significantly higher response rates than those in the one-time mailing condition.
It is important to note that the success of any method of locating respondents in a
longitudinal survey is partly dependent on the design of the survey itself. For example, the length
between waves and the mode of data collection can have an important impact on the
probability of locating respondents. The longer the time left between each wave, the
greater the likelihood that sample members will have moved. Face-to-face surveys
give interviewers more opportunity to track respondents in the local area by talking
to neighbours, whereas telephone, email and postal surveys offer fewer such
opportunities (Couper and Ofstedal, 2009).
3.2. Contacting respondents for interview
Non-contact attrition may still persist even after the respondent is located at the correct
address. The respondent's patterns of being physically present at the address, physical
impediments to getting an interview and the survey organisation's effort all contribute to
whether contact is achieved or not (Uhrig, 2008).
Evidence in the literature shows that the use of paradata to concentrate interviewer effort
can help to contact respondents and improve response in a longitudinal survey (Baribeau
et al, 2007; Couper and Ofstedal, 2009). One of the benefits of a longitudinal survey is
that there is information available from the previous waves, for example the number of
calls taken to contact each respondent, the outcome of the calls, and the time of day of
calls. These data can be used to vary interviewer effort so as to minimise
non-contacts at subsequent waves. The National Longitudinal Survey of
Children and Youth (NLSCY), for example, uses detailed call record data from previous
waves to minimise non-contact at the following waves.
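The use of prior-wave call records to steer interviewer effort can be sketched as follows. This is an illustrative example only, not the NLSCY's actual procedure, and the record fields (`household`, `slot`, `contacted`) are hypothetical:

```python
from collections import Counter

def best_call_slots(call_records, top_n=2):
    """Rank day-part slots by past contact success for each household.

    call_records: list of dicts with hypothetical keys 'household',
    'slot' (e.g. 'weekday_evening') and 'contacted' (bool).
    Returns {household: [most successful slots]} to guide call scheduling.
    """
    successes = {}
    for rec in call_records:
        if rec["contacted"]:
            # Count one success for this household in this time slot.
            successes.setdefault(rec["household"], Counter())[rec["slot"]] += 1
    return {hh: [slot for slot, _ in counts.most_common(top_n)]
            for hh, counts in successes.items()}

# Toy prior-wave call history for two households.
records = [
    {"household": "H1", "slot": "weekday_evening", "contacted": True},
    {"household": "H1", "slot": "weekday_morning", "contacted": False},
    {"household": "H1", "slot": "weekday_evening", "contacted": True},
    {"household": "H2", "slot": "weekend_afternoon", "contacted": True},
]
print(best_call_slots(records))
```

In practice a survey organisation would draw on far richer paradata (call outcomes, interviewer observations, timing), but the principle of concentrating effort where contact has previously succeeded is the same.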
3.3. Avoiding refusals
Various methods are incorporated into longitudinal survey design with an aim to minimise
refusals (Moon et al, 2005), including incentives (Singer, 2002; Singer et al, 1999), refusal
conversion techniques (Burton et al, 2006; Lynn et al, 2002) and extra interviewer efforts
(Laurie, Smith and Scott, 1998; Lynn et al, 2002). This section outlines some of the most
common methods longitudinal surveys use to reduce refusals. Most of the information in
the section on incentives is based upon a recently published paper by Laurie and Lynn
(2009) which presents a detailed overview of the literature on incentives on longitudinal
surveys.
3.3.1 Incentives
Incentives are a common method used on both cross-sectional and longitudinal surveys
to try to minimise refusals. Laurie and Lynn (2009) explain that incentives lead to a
decrease in refusals as an effect of social reciprocity. According to the social reciprocity
model, small gestures on the part of the survey organisation (including incentives)
promote trust and encourage respondents to feel they should give something in return, in
this case cooperation with the survey. Incentives are also a way of showing appreciation
for the respondent's time.
Incentives can be particularly beneficial for the long-term viability of longitudinal surveys,
as they can play an important role in securing cooperation into the study not only at a
particular point in time, but also throughout the life of a study. As already mentioned, in
longitudinal surveys, a greater burden is placed on respondents. This is because
cooperation is required over time but also because many longitudinal surveys are often
long and complex, cover sensitive subject matters and may require interviews from each
member of the household. The greater the burden on respondents, the more
appropriate the use of incentives is generally felt to be (Laurie and Lynn, 2009).
Findings in the literature report that incentives contribute to improving response rates and
are effective in reducing attrition over multiple waves of a survey (Singer et al, 1999;
Shettle and Mooney, 1999; Rodgers, 2002; Singer, 2002; Lengacher et al, 1995). Some
authors have also noted how incentives may lead to reductions in the overall field costs
through a reduction of the number of calls that interviewers need to make (James, 1997).
However, James and Bolstein (1990) hint towards a backfire effect for very large
incentives which may even cause a reduction in cooperation.
In spite of the recognised role played by incentives, there is sparse guidance on how
incentives should be used in longitudinal studies. A review by Singer (2002) has
highlighted that little empirical research has been done on the usefulness of incentives for
maintaining response rates across waves of a survey. The range of incentives used
to maximise response on longitudinal surveys varies greatly between studies. Decisions
about incentives for a particular survey are often based on the survey's own experience in
the field, feedback from interviewers and the advice of survey practitioners rather than
on experimental evidence (Laurie and Lynn, 2009).
Incentives can be offered in the form of cash, a cheque, a gift voucher or a gift such
as a book of stamps. They can be conditional, that is, paid after the interview has
been completed, or they can be offered unconditionally prior to the
interview. For example, the BHPS gives the entire sample an unconditional £10 gift
voucher, and offers a small gift at the interview, whereas ELSA posts a £10 gift voucher
after the interview has been completed. Past evidence has illustrated that monetary
incentives given as an unconditional incentive prior to the interview have the greatest
impact on response (Laurie and Lynn, 2009; Lengacher et al, 1995; Singer, 2002). Trust
is thought to be gained immediately from the respondent, and so refusal rates are shown
to decrease.
Some surveys, like the Canadian Community Health Survey, offer small gifts (e.g. a first
aid mini-kit) as incentives for participation. Research however shows that monetary
incentives such as cash or cheques are more effective than gifts, and reinforces that pre-
paid incentives have more influence on response than conditional incentives (Church,
1993; Warriner et al, 1996).
Providing respondents with feedback of the results of the survey they were involved with
may also act as an incentive to encourage panel members to continue their cooperation
in the study. For example, between data collection waves, ELSA provides each of their
respondents with a newsletter to keep them updated with the main findings of the study
and to reiterate the importance of each response to the validity of the study as a whole.
As a longitudinal survey occurs over a number of waves, it is possible to introduce,
change or cease incentives, although there is little evidence about the likely effects of
doing so. Laurie and Lynn (2009) suggest that, as the majority of attrition due to refusals
in a longitudinal survey occurs at the first couple of waves, introducing an incentive on an
already existing survey may have little effect on reducing the refusal rates. At the same
time however, it could increase sample members' loyalty for later waves (Laurie, 2007).
The effects of ceasing an incentive are largely unknown. Payments of any kind may
induce respondents to expect some other payment at the next interview (Singer et al,
1998, 2000) although some research suggests that the withdrawal of incentives may not
have a significant impact on response (Lengacher et al, 1995).
Incentives can also be tailored to sample members’ individual circumstances. Because
longitudinal surveys hold detailed information about each respondent's response history, it
is possible to target resources at respondents who are thought to have a higher risk of dropping out
(Laurie and Lynn, 2009). Incentives could vary in amount, nature or the timing of when
they are administered. Laurie and Lynn (2009) recognise that tailoring may not be
practical in some circumstances, for example in household surveys, where the same
incentive should be offered to each member. They also explain that evidence of the
effectiveness of tailoring strategies is extremely thin, as most longitudinal surveys are not
willing to experiment with targeted treatments.
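A targeting strategy of this kind might, for example, score each sample member's dropout risk from their response history and offer a larger incentive above some threshold. The sketch below is purely illustrative: the risk factors, weights, incentive amounts and threshold are hypothetical, not taken from any survey discussed here:

```python
def dropout_risk(history):
    """Crude additive risk score from a respondent's response history.

    history: dict with hypothetical keys 'ever_refused' (bool),
    'moves' (count) and 'missed_waves' (count). The weights are
    illustrative, not estimated from data.
    """
    return (2.0 * history.get("ever_refused", False)
            + 1.0 * history.get("moves", 0)
            + 1.5 * history.get("missed_waves", 0))

def assign_incentive(history, base=10, boosted=20, threshold=2.0):
    """Offer the boosted amount only to cases at or above the risk threshold."""
    return boosted if dropout_risk(history) >= threshold else base

print(assign_incentive({"ever_refused": True, "moves": 1}))  # higher-risk case
print(assign_incentive({"moves": 1}))                        # lower-risk case
```

A real implementation would estimate the weights from attrition models rather than assert them, which is precisely the experimental evidence Laurie and Lynn (2009) note is lacking.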
The ethical implications associated with the use of incentives should always be
considered. Kulka (1994) carried out a review of incentives for reluctant respondents and
found that their use may restrict the freedom to refuse to participate in the survey.
In the UK, it is now common practice for incentives to be given as "a token of
appreciation", and testing has shown that such payments are rarely perceived as coercive
(Lessof, 2009).
Mixed evidence exists on the effect of incentives on sample composition and attrition
bias. Couper et al (2006) found that cash incentives are more likely than a gift to increase
response among those with lower education levels, single people and the unemployed. This would
suggest that certain sub-sets of the population with low retention propensities react better
than others to offered incentives and incentives therefore could play an important role in
reducing non-response bias. Other studies however have failed to show any change in
the composition of the sample as a result of incentives (Singer et al, 1999).
Finally, another aspect that has been researched extensively is whether incentives may
lead to lower data quality. Research by Couper et al (2006) and Singer et al (1999)
showed that the use of incentives did not appear to have any adverse effect on data
quality as measured by differential measurement errors, levels of item non-response and
effort expended in the interview.
3.3.2 Refusal conversion
In order to reduce attrition, longitudinal surveys often use refusal conversion procedures.
These often involve interviewers re-approaching individuals who initially refused to
participate in the survey and trying to persuade them to complete an interview by
explaining the purpose of the study more fully and re-emphasising the importance of each
respondent to the survey (Burton et al, 2006; Stoop, 2004; Moon et al, 2005; Laurie et al,
1999). In some cases, larger incentives may be offered to initial refusals when attempting
conversion (Lengacher et al, 1995; Abreu and Winters, 1999).
Refusal conversion techniques are expensive, but they may prove particularly
useful in longitudinal surveys for retaining individuals over time (Burton, Laurie and Lynn,
2006). Research has examined whether refusal conversion procedures at one wave
affect response at later waves. Lengacher et al (1995) report the results of a
refusal-conversion experiment at Wave 1 of the Health and Retirement Study (HRS),
when interviews were sought from a sub-sample of non-respondents using either
persuasive interviewing techniques or larger incentives. Although they found that the
group who required refusal conversion had significantly lower response rates than the
group who did not need conversion, only 11% of Wave 1 converted refusals were refusals
at Wave 2. They also found no difference in Wave 2 response rates between the
persuaded and the large incentive converted-refusals groups. Burton et al (2006) in their
study of the BHPS also concluded that refusal conversion procedures appear to be
effective in minimising attrition from the sample not only at each wave, but over a longer
term.
3.3.3 Interviewer effects
In face-to-face surveys, interviewers play a key role in obtaining cooperation from sample
members. Interviewer effects in longitudinal surveys are to some extent similar to those in
cross-sectional surveys. In both cases, for example, interviewers can persuade
respondents of their importance to the survey as a whole, reassure respondents on
confidentiality issues and, more generally, provide further information on the survey at the
doorstep. Some interviewer effects, however, are specific to longitudinal surveys.
Some evidence suggests that using the same interviewer is preferred by both
respondents and interviewers (Laurie et al, 1999). Hill and Willis (2001, cited by Lynn et
al, 2005) found that in a health study the largest and most significant factor which
predicted response at a future wave was having the same interviewer at each wave.
Interviewer continuity was associated with around a 6 per cent increase in response rates.
Some surveys, such as the BHPS, assign, where possible, the same interviewer to the
same household at each wave of the survey in an attempt to reduce attrition. Lynn et al
(2005), however, point out that most studies of interviewer continuity effects are
non-experimental and consequently confound interviewer stability with area
effects. Campanelli and O’Muircheartaigh (1999 and 2002) found that interviewer effects
disappear once area effects are controlled. Lynn et al (2005) conclude that little
evidence actually exists that interviewer stability affects response rates and that further
research is needed on this issue. They also point out that although interviewer continuity
may reduce attrition, it may also have a negative impact on data quality. For example, Uhrig
and Lynn (2009) found that interviewer familiarity may increase social desirability bias.
Interviewer experience is also known to have an important impact on response. Watson
and Wooden (2009) found that the age and/or experience of interviewers had an effect on
attrition. Longitudinal surveys can assign experienced interviewers to households
that refused in previous waves or that have a higher probability of dropping
out of the survey. This method proved successful in the NLSCY (Baribeau et
al, 2007), resulting in higher response rates.
3.3.4 Responsive design
Responsive designs are being considered by some survey organisations to reduce
attrition bias in longitudinal surveys. Responsive design refers to the continual monitoring
of streams of process data and survey data, creating the opportunity to alter the design
during the course of data collection so as to improve survey cost efficiency and
achieve more precise, less biased estimates (Groves and Heeringa, 2006). By
continuously monitoring the composition of the respondent group during fieldwork, under-
represented population groups can be targeted to improve response. This may
ultimately improve data quality by helping to ensure sample representativeness. In
Canada, the SLID is currently being redesigned with the aim of introducing a responsive
design element for the 2010 data collection.
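The monitoring step of a responsive design can be sketched as a comparison of achieved sample shares against population benchmarks; the groups, counts and tolerance below are hypothetical, chosen only to illustrate the mechanism:

```python
def under_represented(sample_counts, benchmarks, tolerance=0.05):
    """Flag groups whose achieved share falls short of their benchmark.

    sample_counts: {group: respondents so far}; benchmarks: {group:
    target population share}. Returns {group: shortfall} for groups
    whose shortfall exceeds `tolerance`, as candidates for extra
    fieldwork effort during data collection.
    """
    total = sum(sample_counts.values())
    flagged = {}
    for group, target_share in benchmarks.items():
        share = sample_counts.get(group, 0) / total if total else 0.0
        if target_share - share > tolerance:
            flagged[group] = round(target_share - share, 3)
    return flagged

# Hypothetical mid-fieldwork counts vs population target shares.
counts = {"under_35": 120, "35_to_64": 500, "65_plus": 180}
targets = {"under_35": 0.30, "35_to_64": 0.50, "65_plus": 0.20}
print(under_represented(counts, targets))
```

Here the younger group would be flagged for targeted follow-up; a production system would of course use weighted estimates and design information rather than raw counts.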
References
Abreu, D.A and Winters, F (1999) Using Monetary Incentives to Reduce Attrition in the
Survey of Income and Program Participation. US Census Bureau.
Allen, M., Ambrose, D. and Atkinson, P. (1997) Measuring Refusal Rates. Canadian Journal
of Marketing Research, 16, pp 31-42.
The American Association for Public Opinion Research (AAPOR) (2008), Standard
Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 5th
edition. Lenexa, Kansas: AAPOR.
Apodaca, R., Lea, S. and Edwards, B. (1998) The Effect of Longitudinal Burden on
Survey Participation. 1998 Proceedings of Survey Research Methods Section of the
American Statistical Association, pp 906-910.
Bale, R.N., Arnouldussen, B.H, Quittner, A.M. (1984) Follow-up Difficulty with Substance
Abusers: Predictions of Time to Locate and Relationship to Outcome. The International
Journal of Addictions, 19, pp 885-902.
Baribeau, B, Wedselsoft, C, and Franklin, S (2007) Battling Attrition in the National
Longitudinal Survey of Children and Youth. SSC Annual Meeting, June 2007
Behr, A., Bellgardt, E., and Rendtel, U. (2005). Extent and Determinants of Panel Attrition in
the European Community Household Panel. European Sociological Review, 21, pp 489-512
Branden, L., Gritz, R.M. and Pergamit, M.R. (1995) The Effect of Interview Length on Attrition
in the National Longitudinal Survey of Youth. No. NLS 95-28, US Department of Labour
Burgess, R.D (1989) Major Issues and Implications of Tracing Survey Respondents. In
Kasprzyk, D, Duncan G, Kalton G and Singh M.P (Eds) Panel Surveys (pp.52-73). New
York:Wiley
Burkham, D.T. and Lee, V.E. (1998). Effects of Monotone and Non-Monotone Attrition on
Parameter Estimates in Regression Models with Educational Data. Journal of Human
Resources, 33, pp 555-574
Burton, J, Laurie, H and Lynn, P (2006) The Long-term Effectiveness of Refusal Conversion
Procedures on Longitudinal Surveys. Journal of the Royal Statistical Society Series A, 169,
Part 2, pp 459-478.
Calderwood, L. (2009) Keeping in Touch with Mobile Families in the UK Millennium Cohort
Study. Statistics Canada 25th International Symposium on Methodological Issues.
Longitudinal Surveys: from Design to Analysis, Ottawa, 2009.
Campanelli, P. and O’Muircheartaigh, C. (1999) Interviewers, Interviewer Continuity,
and Panel Survey Non-Response. Quality & Quantity 33(1), pp 59-76.
Campanelli, P. and O’Muircheartaigh, C. (2002) The Importance of Experimental
Control in Testing the Impact of Interviewer Continuity on Panel Survey
Non-Response. Quality & Quantity 36(2), pp 129-144.
Cheesbrough, S. (1993) Characteristics of Non-Responding Households in the Family
Expenditure Survey. Survey Methodology Bulletin, 33, pp 12-18
Cheshire H. and Hussey, D. (2009) Factors associated with refusals in the English
Longitudinal Study of Ageing. Statistics Canada 25th International Symposium on
Methodological Issues. Longitudinal Surveys: from Design to Analysis, Ottawa, 2009.
Church, A.H (1993). Estimating the Effects of Incentives on Mail Survey Response Rates: A
Meta-Analysis. Public Opinion Quarterly, 57, pp 62-79.
Couper, M.P. (1991) Modelling Survey Participation at the Interviewer Level. 1991
Proceedings of the Survey Research Methods Section of the American
Statistical Association, pp 98-107.
Couper, M.P., Ryu, E. and Marans, R.W. (2006). Survey Incentives: Cash vs In-kind; Face-to-
Face vs Mail; Response Rate vs Non-Response Error. International Journal of Public
Opinion Research, 18, pp 89-106.
Couper, M and Ofstedal, M. (2009). Keeping in Contact with Mobile Sample Members. In
Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley.
Craig, R.J (1979) Locating Drug Addicts Who Have Dropped out of Treatment. Hospital and
Community Psychiatry, 30, pp 402-404.
De Graaf, R., Bijl, R.V., Smit, F., Ravelli, A. and Vollebergh, W.A.M. (2000). Psychiatric and
Sociodemographic Predictors of Attrition in a Longitudinal Study: The Netherlands Mental
Health Survey and Incidence Study (NEMESIS), 2000, pp 1039-1045.
Eurostat (2004) Technical document on intermediate and final quality reports. Working Group
on Statistics on Income and Living Conditions (EU-SILC), 29-30 March 2004. Eurostat.
Luxembourg.
Fitzgerald, J., Gottschalk, P. and Moffitt, R. (1998). An Analysis of Sample Attrition in Panel
Data: the Michigan Panel Study of Income Dynamics. Journal of Human Resources, 33, pp
251-299.
Foster, K. (1998). Evaluating Non-Response on Household Surveys. GSS Methodology
Series No. 8, London: Government Statistics Service.
Foster, K. and Bushnell, D. (1994) Non-Response Bias on Government Surveys in Great
Britain. The 5th International Workshop on Household Non-Response, Ottawa, 1994.
Fumagalli, L., Laurie, H. Lynn, P (2009). Methods to Reduce Attrition in Longitudinal Surveys:
An Experiment. European Survey Research Association Conference. Warsaw, 2009.
Goyder, J. (1987) The Silent Minority – Non-Respondents on Sample Surveys. Cambridge:
Polity Press.
Gray, R., Campanelli, P., Deepchand, K. and Prescott-Clarke P. (1996) Exploring Survey
Non-Response: The Effect of Attrition on a Follow-up of the 1984-85 Health and Life Style
Survey. The Statistician, 45, pp 163-183.
Groves, R.M and Couper, M.P (1998) Non-Response in Household Interview Surveys. New
York: John Wiley and Sons Ltd.
Groves, R.M and Hansen, S.E (1996). Survey Design Features to Maximise Respondent
Retention in Longitudinal Surveys. Unpublished report to the National Centre for Health
Statistics, University of Michigan, Ann Arbor, MI.
Groves, R.M. and Heeringa, S. (2006) Responsive Design for Household Surveys: Tools for
Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society Series A:
Statistics in Society, 169, Part 3, pp 439-457.
Groves, R.M., Singer, E. and Corning, A. (2000). Leverage-Saliency Theory of Survey
Participation. Public Opinion Quarterly, 64, pp 299-308.
Hawkes, D. and Plewis, I. (2006). Modelling Non-Response in the National Child Development
Study. Journal of the Royal Statistical Society Series A, 169, Part 3, pp 479-491.
Hawkins, D.F. (1975). Estimation of Non-Response Bias. Sociological Methods and
Research, 3, pp 461-488.
Hidiroglou, M.A., Drew, J.D., and Gray, G.B. (1993). A Framework for Measuring and
Reducing Non-Response in Surveys. Survey Methodology, 19, pp 81-94.
Hill, D.H and Willis, R.J. (2001) Reducing Panel Attrition: A Search for Effective Policy
Instruments. Journal of Human Resources, 36, pp 416-438.
Iyer, R. (1984) NCDS Fourth follow-up 1981: Analysis of Response. NCDS4 Working Paper,
no. 25, London: National Children’s Bureau.
Kaase, M. (1999) Quality Criteria for Survey Research. Berlin: Akademie Verlag.
Kulka, R.A. (1994). The Use of Incentives to Survey ‘Hard-to-Reach’ Respondents: A Brief
Overview of Empirical Research and current practice. Paper presented at the COPAFS
seminar on New Directions in Statistical Methodology, Bethesda, MD.
James J.M, Bolstein R. (1990) The effect of monetary incentives and follow-up mailings on
the response rate and response quality in mail surveys. Public Opinion Quarterly, 54, pp 346-
361.
James, T.L. (1997). Results of the Wave 1 Incentive Experiment in the 1996 Survey of
Income and Program participation. 1997 Proceedings of the Survey Research Methods
Section of the American Statistical Association (pp 834-839). Washington, DC: American
Statistical Association.
Jones, A.M., Koolman, X. and Rice, N. (2006) Health-Related Non-Response in the British
Household Panel Survey and European Community Household Panel: Using Inverse-
Probability-Weighted Estimators in Non-Linear Models. Journal of the Royal Statistical
Society Series A, 169(3), pp 543-569.
Laurie, H and Lynn, P (2009). The Use of Respondent Incentives on Longitudinal Surveys. In
Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex: Wiley.
Laurie, H, Smith R and Scott, L (1999) Strategies for Reducing Non-Response in a
Longitudinal Panel Survey. Journal of Official Statistics, 15:2, pp269-2
Lengacher, J.E, Sullivan, C.M, Couper M.P and Groves, R.M (1995) Once Reluctant, Always
Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal
Survey. Survey Research Centre, University of Michigan.
Lepkowski, J. and Couper, M. (2002) Non-Response in the Second Wave of Longitudinal
Household Surveys. In Groves, R., Dillman, D., Eltinge, J. and Little, R. (eds) (2002). Survey
Non-Response. Wiley Series in Survey Methodology.
Lessof, C. (2009) Ethical Issues in Longitudinal Surveys. In Lynn, P (eds) (2009) Methodology
of Longitudinal Surveys. West Sussex: Wiley.
Lillard, L.A. and Panis, C.W.A. (1998) Panel Attrition from the Panel Study of Income
Dynamics. Journal of Political Economy, 94(3), pp 489-506.
Lynn, P (2005) Outcome Categories and Definitions of response Rates for Panel Surveys and
Other Surveys involving Multiple Data Collection Events from the Same Units. Unpublished
manuscript. Colchester: University of Essex.
Lynn, P. (2006) Editorial: Attrition and Non-Response. Journal of the Royal Statistical Society
A, 169, Part 3, pp 393-394.
Lynn, P., Beerten R., Laiho J., Martin J. (2003). Towards Standardisation of Survey Outcome
Categories and Response Rate Calculations. Research in Official Statistics, edition 1:
vol:2002, pp 61-84.
Lynn P, Buck N, Burton J, Jackle A, Laurie H (2005). A Review of Methodological Research
Pertinent to Longitudinal Survey Design and Data Collection. ISER. Working Paper 2005-29.
Colchester: University of Essex.
Lynn P., and Clarke, P. (2002) Separating Refusal Bias and Non-Contact Bias: Evidence
from UK National Surveys. Journal of the Royal Statistical Society Series D. The Statistician.
51(3), pp 319-333.
Lynn, P, Clarke P, Martin J and Sturgis P (2002). The Effects of Extended Interviewer Efforts
on Non-Response Bias. In R.M. Groves, D.A. Dillman, J.L. Eltinge and R.J.A. Little (eds) (2002).
Survey Non-Response. Chichester :Wiley
McAllister, R, Goe, S and Edgar, B. (1973) Tracking Respondents in Longitudinal Surveys:
Some Preliminary Considerations. The Public Opinion Quarterly, 47:3, pp 413-416.
McGonagle, K., Couper, M. Schoeni, R. (2009). Maintaining Contact with PSID Families
between Waves: An Experimental Test of a New Strategy. Statistics Canada 25th
International Symposium on Methodological Issues. Longitudinal Surveys: from Design to
Analysis, Ottawa, 2009.
Michaud, S., Webber, M. (1994) Measuring Non-Response in a Longitudinal Survey: The
Experience of the Survey of Labour and Income Dynamics. Fifth International Workshop on
Household Survey Non-Response, Ottawa, 1994.
Moon, N, Rose, N and Steel, N (2005) How Could They Ever, Ever Persuade You? Are Some
Refusals Easier to Convert Than Others? AAPOR, ASA Section on Survey Research
Methods.
Nathan, G. (1999) A Review of Sample Attrition and Representativeness in Three
Longitudinal Surveys (The British Household Panel Survey, the 1970 British Cohort
Study and The National Child Development Study) Government Statistical Service,
Methodology Series, No. 13., London: GSS.
Nicoletti, C. and Buck, N. (2004) Explaining Interviewee Contact and Co-operation in the
British and German Household Panels. In M. Ehling and U. Rendtel (eds) (2004),
Harmonisation of Panel Surveys and Data Quality. (pp 143-166). Wiesbaden: Statistisches
Bundesamt.
Nicoletti, C. and Peracchi, F. (2002) A Cross-Country Comparison of Survey Non-
Participation in the ECHP. ISER Working Papers, No. 2002-32, Colchester: University of
Essex.
Nicoletti, C. and Peracchi, F. (2005) Survey Response and Survey Characteristics: Microlevel
Evidence from the European Community Household Panel. Journal of the Royal Statistical
Society Series A, 168(4), pp 763-781.
Plewis, I. Ketende, S. Joshi, H. Hughes, G. (2008) The Contribution of Residential Mobility to
Sample Loss in a Birth Cohort Study: Evidence from the First Two Waves of the UK
Millennium Cohort Study. Journal of Official Statistics, Vol. 24, No. 3, pp 365-385.
Ribisl, Walton, Mowbray, Luke and Davidson (1996). Minimising Participant Attrition.
Evaluation and Program Planning, 19:1, pp.1-25
Rodgers, W. (2002). Size of Incentive Effects in a Longitudinal Study. Presented at the 2002
American Association for Public Opinion Research conference, mimeo, Survey Research Centre,
University of Michigan, Ann Arbor.
Scholes S., Medina, J., Cheshire, H., Cox, K., Hacker, E., Lessof, C. (2009). Living in the 21st
Century: Older People in England: The 2006 English Longitudinal Study of Ageing. Technical
report, NatCen, 2009.
Shettle, C and Mooney, G (1999). Monetary Incentives in Government Surveys. Journal of
Official Statistics 15, pp 231-250.
Singer, E (2002). The Use of Incentives to Reduce Non-Response in Household Surveys. In
R.M. Groves, D.A. Dillman, J.L. Eltinge and R.J.A. Little (eds) (2002). Survey Non-Response.
Chichester:Wiley.
Singer, E, Van Hoewyk, J and Gebler, N (1999). The Effect of Incentives on Response Rates
in Interviewer Mediated Surveys. Journal of Official Statistics, 15, pp 217-230.
Singer, E., Van Hoewyk, J. and Maher, P. (1998) Does the Payment of Incentives Create
Expectation Effects? Public Opinion Quarterly, 62, pp 152-164.
Singer, E., Van Hoewyk, J. and Maher, P. (2000). Experiments with Incentives in Telephone
Surveys. Public Opinion Quarterly, 64, pp 171-188
Smith, T (2002), Developing Non-Response Standards. In Groves R., Dillman, D., Eltinge J.,
Little. R. (eds) (2002) Survey Non Response. Chichester:Wiley.
Stoop, I (2004) Surveying Non-Respondents. Field Methods, 16, pp 23-54.
Stoop, I.A. L. (2005) The Hunt for the Last Respondent, The Hague, Netherlands: Social and
Cultural Planning Office.
Uhrig, S (2008). The Nature and Causes of Attrition in the British Household Panel Survey.
ISER Working Paper Series. No.2008-5.
Warriner, K.; Goyder, J.; Gjertsen, H.; Hohner, P.; and McSpurren, K. (1996). Charities, No;
Lotteries, No; Cash, Yes. Public Opinion Quarterly, 60, pp 542-562.
Watson, D. (2003) Sample Attrition between Waves 1 and 5 in the European Community
Household Panel. European Sociological Review, 19(4) pp 361-378.
Watson, N and Wooden, M. (2004) Sample Attrition in the HILDA Survey. Australian Journal
of Labour Economics, Vol. 7, No 2, pp 293-308.
Watson, N and Wooden, M (2004) Wave 2 Survey Methodology. HILDA Project Technical
Paper Series, No. 1/04.
Watson, N and Wooden, M (2009) Identifying Factors Affecting Longitudinal Survey
Response. In Lynn, P (eds) (2009) Methodology of Longitudinal Surveys. West Sussex:
Wiley.
Zabel, J. E. (1998). An Analysis of Attrition in the Panel Study of Income Dynamics and the
Survey of Income and Program Participation with an Application to a Model of Labour Market
Behaviour. Journal of Human Resources, 33, pp 479-506.