Survey Research Papers

(These are papers that I have contributed to, co-authored, or authored)


Editing and Crafting Good Survey Questions and Questionnaires (version 1.9) - plus: Survey Design Bibliography (September 2025)
(April 2025) (Author: Timothy Triplett)
Abstract: Perhaps the biggest challenge in designing questions is that the researchers responsible for developing a survey questionnaire may have had some training in questionnaire design but are not specialists. Increasingly, however, researchers are working with survey practitioners, which has led to more emphasis on improving question wording. As a survey practitioner, I often find myself helping on projects that require drafting a survey questionnaire. Much of the discussion of questionnaire design in this document comes from my own experiences combined with what I have learned from other researchers who have published work on the methodology of writing good survey questions.

Methodology for the VoicesDMV Survey
(December 2017) (Author: Timothy Triplett)
Abstract: Voices of the Community: DC, Maryland, Virginia (VoicesDMV), is a community engagement initiative from the Greater Washington Community Foundation, in partnership with the Urban Institute, designed to lift up residents’ stories and perceptions of the quality of life in the Greater Washington region. This brief discusses the methodology for the 2017 VoicesDMV survey.

Who has trouble reporting prior day events?
(December 2016) (Authors: Timothy Triplett, Brian Tefft, Rob Santos)
Abstract: Research has shown that surveys requiring respondents to recall events can be subject to fairly high measurement error. Recall error tends to be less problematic for highly salient events and for events that occurred recently. However, there is less information on whether some respondents are more or less prone to problems answering questions that involve recalling an event. This research uses data from the American Driving Study, in which people are asked to report the length of driving trips they made yesterday. We analyze over 16,000 driving trips reported by 7,913 respondents who said they had been the driver for at least one trip on the day before they were interviewed. For this analysis, we are concerned with two types of recall problems: 1) the respondent's inability to provide an estimate of either the length or the duration of a driving trip, and 2) an estimate of miles driven that is inconsistent with the duration and purpose of the trip. This paper finds differences in the characteristics of respondents who are more likely to have problems reporting their prior-day driving behavior, but the main finding is that longer trips were harder to report on and had a bigger impact on key survey estimates. We conclude with a discussion of important considerations that survey practitioners should keep in mind when designing surveys that include recall questions.
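
As a rough illustration of the second type of recall problem, a consistency check along the lines of the sketch below can flag trips whose reported mileage is implausible given their reported duration; the function name and speed thresholds are hypothetical and are not the edit rules used in the paper.

    # Hypothetical consistency check for prior-day trip reports (Python).
    # Thresholds are illustrative only; the paper's actual edit rules may differ.
    def flag_trip(miles, minutes):
        """Return a list of reasons a reported trip looks problematic."""
        problems = []
        if miles is None or minutes is None:
            problems.append("missing length or duration")   # recall problem type 1
            return problems
        if minutes <= 0:
            problems.append("non-positive duration")
            return problems
        mph = miles / (minutes / 60.0)                       # implied average speed
        if mph > 80:                                         # faster than plausible driving
            problems.append("implausibly fast trip")         # recall problem type 2
        elif mph < 2:                                        # slower than walking pace
            problems.append("implausibly slow trip")
        return problems

    print(flag_trip(5, 2))      # ['implausibly fast trip'] -- 5 miles in 2 minutes
    print(flag_trip(None, 20))  # ['missing length or duration']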

Current Knowledge and Considerations Regarding Survey Refusals
(Summer 2015) (Authors: David Dutwin; John D. Loft; Jill E. Darling; Allyson L. Holbrook; Timothy P. Johnson; Ronald E. Langley; Paul J. Lavrakas; Kristen Olson; Emilia Peytcheva; Jeffery A. Stec; Timothy Triplett; Andrew Zukerberg)
Abstract: The landscape of survey research has arguably changed more significantly in the past decade than at any other time in its relatively brief history. In that short time, landline telephone ownership has dropped from some 98 percent of all households to less than 60 percent; cell-phone interviewing went from a novelty to a mainstay; address-based designs quickly became an accepted method of sampling the general population; and surveys via Internet panels became ubiquitous in many sectors of social and market research, even as they continue to raise concerns given their lack of random selection.

Massachusetts Health Insurance Survey Methodology Report Survey Years: 2008, 2009, and 2010
(December 2010) (Authors: Timothy Triplett, Sharon Long, David Dutwin, Susan Scher)
Abstract: The Massachusetts Health Insurance Survey (MHIS) collects information on health insurance coverage and access to and use of health care for the non-institutionalized population in Massachusetts. It is funded by the Massachusetts Division of Health Care Finance and Policy (DHCFP) and is conducted by the Urban Institute, along with its subcontractor, Social Science Research Solutions (SSRS). This report provides information about the methods used to collect and analyze the MHIS.

Can Your Spouse Accurately Report Your Activities? An Examination of Proxy Reporting
(January 2010) (Author: Timothy Triplett)
Abstract: The 2008 Survey of Public Participation in the Arts (SPPA) is sponsored by the National Endowment for the Arts and was conducted as a supplement to the May 2008 Current Population Survey (CPS). A major challenge of being a CPS supplement is that the respondent selection procedures for the SPPA differ from the CPS procedures. The CPS is administered to any person 16 or older who is able to report employment information for all persons 16 or older in the household, whereas the SPPA collects information on all adults 18 or older, and many of its questions are thought to require self-reports rather than proxy reports. In 2002, Census Bureau interviewers attempted to complete the SPPA supplement with all persons 18 or older but accepted proxy reports after four call attempts. To make the SPPA a better fit for the CPS protocol, rather than attempting to interview all adults in the household, the 2008 SPPA accepted proxy responses for spouses or partners (for many of the questions). This change in design makes it much easier to measure the impact of proxy reports, given that they were collected by design rather than out of necessity. This research explores the extent to which proxy reporting may have resulted in over- or under-reporting of participation, and whether estimates should be adjusted when differences are found. Of particular interest are comparisons between husbands reporting on their wives' activities and vice versa. In addition, this research explores whether the quality of proxy reporting varies across key population subgroups.

2002 NSAF Nonresponse Analysis
(June 2006) (Author: Timothy Triplett)
Abstract: This report focuses on the characteristics of nonrespondents to the 2002 NSAF and assesses the impact of nonresponse on the NSAF statistics. It includes analysis of the effectiveness of the call attempt and refusal conversion strategies across all three rounds of NSAF data collection, providing some insight into how the level of effort affects the quality of the data by reducing nonresponse. The report also includes a sociodemographic comparison of respondents and nonrespondents based on census block information obtained for both groups in 2002.

Comparing Incentives at Initial and Refusal Conversion Stages On a Screening Interview For a Random Digit Dial Survey
(March, 2006) (Authors: David Cantor, Patricia Cunningham, Timothy Triplett, Rebecca Steinbach)
Abstract:  This paper discusses the results of an experiment that explored the use of incentives at different stages of an RDD survey.  A primary question addressed was the relative effectiveness of using an incentive at the initial call or during refusal conversion.  The results show that providing $2 at the initial attempt to complete the screener works about as well with respect to response rates as a $5 treatment at refusal conversion.

Using an E-Mail Invitation to Screen Survey Respondents
(September, 2005) (Authors: Timothy Triplett, Adam Safir, Natalie Abi-Habib)
Abstract: Internet surveys can be designed so that respondents can simply click a link indicating that they do not want to fill out the survey. The link can be embedded in the e-mail invitation or, for surveys with an online invitation, included on the invitation page. The decline option is appropriate for respondents who are not actually the end user and thus cannot answer most of the questions. This option can potentially improve the response rate estimate as well as provide additional information about respondents. However, there is a concern that the decline option provides an easy out for legitimate respondents. This paper analyzes the effect the decline option had on the response rate and on survey responses.
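
As a rough illustration of how a decline option can affect a response-rate estimate, the sketch below treats decliners who are not the end user as ineligible and removes them from the denominator; the categories and formula are deliberately simplified and do not correspond to any specific AAPOR rate.

    # Illustrative effect of a "decline" link on a response-rate estimate (Python).
    # Assumes non-end-user decliners can be classified as ineligible; simplified on purpose.
    def response_rates(completes, ineligible_declines, other_nonrespondents):
        invited = completes + ineligible_declines + other_nonrespondents
        naive = completes / invited                             # everyone counted as eligible
        adjusted = completes / (invited - ineligible_declines)  # decliners ruled ineligible drop out
        return naive, adjusted

    # 400 completes, 100 non-end-users who clicked decline, 500 invitees never heard from
    print(response_rates(400, 100, 500))  # (0.4, 0.444...)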

Determining the Probability of Selection for a Telephone Household in a Random Digit Dial Sample Design Is Becoming More Difficult
(September, 2004) (Authors: Timothy Triplett, Natalie Abi-Habib)
Abstract: For many years, researchers using an RDD sample design could estimate the total number of residential telephone numbers in a household by asking one, sometimes two, and at most three questions. The 2002 National Survey of America's Families (NSAF) is a telephone survey that relies primarily on a large RDD sample design using over 400,000 telephone numbers. In previous rounds of the NSAF (1997 and 1999), a simple two-question approach was used to estimate the total number of residential phone numbers that could be sampled. For the 2002 study, a more complex set of questions was asked of each household, including questions about what the additional phone numbers were being used for. This paper compares the results of these questions with those from other large RDD studies and from previous rounds of the NSAF, and discusses the impact these questions have on the probability-of-selection adjustments.
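
For context, a household reachable on k residential lines has roughly k chances of selection in an RDD frame, so a standard adjustment is to divide its base weight by k, often with k capped at a small value. The sketch below shows that generic adjustment; the cap of three lines is a common convention and is not necessarily the rule used for the NSAF.

    # Generic base-weight adjustment for multiple residential phone lines (Python).
    # Illustrative only; the NSAF's actual probability-of-selection adjustments may differ.
    def line_adjusted_weight(base_weight, residential_lines, cap=3):
        lines = max(1, min(residential_lines, cap))  # guard against zero and cap extreme reports
        return base_weight / lines                   # selection probability rises with the number of lines

    print(line_adjusted_weight(100.0, 2))  # 50.0 -- a two-line household's weight is halved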

Sampling Refusals: Why, When, and How Much?
(September, 2004) (Authors: Timothy Triplett, Adam Safir, Kevin Wang, and Natalie Abi-Habib)
Abstract: The National Survey of America's Families (NSAF) is a dual frame survey that relies primarily on a large RDD sample.   The survey consists of a short three-minute screener interview used to determine eligibility, followed by a forty-five minute extended interview.   Almost half of all potential respondents initially refuse to participate.   Although interviews are completed with close to forty percent of all initial refusals, the per-interview cost of converted refusals far exceeds that of initial cooperators.   In addition, refusal conversion extends the data collection period.  Therefore, for the last round of NSAF data collection, refusal conversion was limited to a sub-sample of refusals.  The completed interviews from this effort were given a weighting factor to adjust for the refusals not attempted.   This paper analyzes the effect of using the refusal sampling approach and weighting adjustment on the survey estimates and associated standard errors.
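
As a rough illustration of the weighting factor mentioned above, when only a fraction of initial refusals is released for conversion, the completed conversions are typically weighted up by the inverse of that subsampling rate so that they also represent the refusals never re-attempted. The sketch below shows that generic adjustment, not the NSAF's actual weighting procedure.

    # Generic subsampling adjustment for converted refusals (Python); illustrative only.
    def refusal_conversion_weight(base_weight, subsample_rate):
        """Inflate the weight of a converted refusal drawn from a subsample
        of refusals released for conversion at rate `subsample_rate`."""
        if not 0 < subsample_rate <= 1:
            raise ValueError("subsample rate must be in (0, 1]")
        return base_weight / subsample_rate

    # If only half of the refusals were re-attempted, each converted case also
    # stands in for one refusal that was never re-contacted.
    print(refusal_conversion_weight(100.0, 0.5))  # 200.0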

Effects On Survey Estimates From Reducing Nonresponse
(December, 2002) (Authors: Adam Safir, Rebecca Steinbach, Timothy Triplett, Kevin Wang)
Abstract:  This paper presents the results of research conducted to analyze the effects of efforts to minimize the potential for nonresponse bias in the National Survey of America's Families (NSAF) survey estimates.  We compare the characteristics of those easily interviewed with the characteristics of those who were difficult to contact or who had temporarily refused to do the survey.   We also compare information from the sampling frame for possible differences between respondents and non-respondents.  Finally, we provide a preliminary analysis of data from an independently conducted follow-up survey of 2000 randomly selected NSAF respondents and nonrespondents.

What is Gained from Additional Call Attempts & Refusal Conversion and What are the Cost Implications?
(November, 2002)  (Author: Timothy Triplett)
Abstract: This research uses data from more than 20 random digit dialing (RDD) studies conducted at the University of Maryland's Survey Research Center over the past 12 years. The research looks closely at call attempt patterns, the outcomes of call attempts, and response rates. It is useful for making informed decisions about the best times to call certain types of households, how useful refusal conversion is, and how best to design an automated call scheduler. Future plans are to include call attempt data from the National Survey of America's Families in the analysis.

Using a Short Follow-up Survey to Compare Respondents and Nonrespondents
(October, 2002)   (Authors: Timothy Triplett, Adam Safir, Kevin Wang, Rebecca Steinbach)
Abstract:  The research analyzes the potential for nonresponse bias in the 1999 National Survey of America's Families (NSAF) survey.   The NSAF is primarily a random digit dial (RDD) telephone survey, consisting of a short screener interview to determine household eligibility and a longer extended interview during which survey items of interest are gathered for sampled household members.   In order to examine the potential for nonresponse bias, a follow-up survey of a sample of respondents and refusals from the NSAF screener interview was conducted by a different survey organization than the one which conducted the main survey.   The follow-up survey contained key items from the main survey, which were used to examine differences between respondents and nonrespondents on these measures.   In addition, the follow-up survey also contained questions on subjects thought to be correlated with willingness to participate in a survey, such as attitudes towards surveys and government, and perceptions of being busy.

How Long Should you Wait Before Attempting to Convert a Refusal?
(October, 2001)  (Authors: Timothy Triplett, Julie Scheib, Johnny Blair)
Abstract: With the increasing difficulty of obtaining high response rates in random digit dial (RDD) telephone studies, refusal conversion is becoming a more important and more expensive component of the data collection process. One issue that is not clear in the literature is how long you should wait before calling back a household in which someone has refused the interview. One often hears the phrase "cooling off period," but how long is that period, and does a cooling-off period really have the intended effect of increasing the likelihood of converting the refusal? This paper analyzes more than 5,000 nationwide RDD refusal conversion attempts from surveys conducted at the University of Maryland's Survey Research Center over the past five years. By combining studies, we have enough data to analyze refusal conversion rates by the number of days elapsed between the initial refusal and the conversion attempt. In addition, the data set is large enough to look separately at refusals by selected respondents versus refusals that occurred before respondent selection was completed. Separate analyses can be done by the gender of the person who refused, as well as by region. Finally, the range of survey topics and interview lengths helps ensure that the findings may apply to other RDD surveys.
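
The core tabulation described above can be illustrated with a short sketch that groups conversion attempts by the number of days elapsed since the initial refusal and computes the conversion rate for each gap; the data layout and values here are hypothetical.

    # Conversion rate by days elapsed since the initial refusal (Python); illustrative data layout.
    from collections import defaultdict

    def conversion_rate_by_gap(attempts):
        """attempts: iterable of (days_since_refusal, converted_bool) pairs."""
        totals, converted = defaultdict(int), defaultdict(int)
        for days, success in attempts:
            totals[days] += 1
            converted[days] += int(success)
        return {d: converted[d] / totals[d] for d in sorted(totals)}

    sample = [(1, False), (1, False), (3, True), (3, False), (7, True), (7, True)]
    print(conversion_rate_by_gap(sample))  # {1: 0.0, 3: 0.5, 7: 1.0}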

The Effects of Telephone Introductions on Cooperation: An Experimental Comparison
(November, 1999)  (Authors: Nileeni Meegama, Johnny Blair)
Abstract: This paper reports a field experiment in which two alternative introductions were used to systematically vary the components of a survey introduction and determine which variation produces the higher cooperation rate.

The Effect of Alternative Incentives on Cooperation and Refusal Conversion in a Telephone Survey
(November, 1999)  (Authors: Martha Kropf, Julie Scheib, Johnny Blair)
Abstract: As the cost and effort required to gain cooperation in telephone surveys increase, many researchers are exploring the use of incentives both to increase initial cooperation rates and as an inducement in refusal conversion. Seldom, however, have incentives been used for multiple purposes within a single telephone survey. This paper reports on an experiment in which incentives were used either to try to increase initial cooperation rates or as an inducement in refusal conversion.

Verbal Reports Are Data!   A Theoretical Approach to Cognitive Interviews
(August, 1999)  (Authors: Frederick Conrad, Johnny Blair, Elena Tracy)
Abstract:  Respondents are, in effect, instructed to carry out a task – that is to do whatever mental work is necessary to provide an answer.  By this view, they can get into trouble in several ways.  For example, respondents can misunderstand what they have been asked to do and, as a result, carry out the wrong task.  Or they can understand the instruction just fine but find themselves unable to carry out the task they have been assigned.  Or they can successfully carry out the assigned task but find themselves unable to fit their answer into the options provided.  Each of these difficulties demands a different sort of solution, for example, making question wording clearer, simplifying the task, or better matching the response options to the way people think about the topic of the question.

A Comparison of Mail and E-Mail for a Survey of Employees in Federal Statistical Agencies
(January, 1999)  (Authors: Mick Couper, Johnny Blair, Timothy Triplett)
Abstract: This paper reports on the results of a study comparing e-mail and mail for a survey of employees in several government statistical agencies in the U.S. As part of a larger study of organization climate, employees in five agencies were randomly assigned to a mail or e-mail mode of data collection.  Similar procedures were used for advance contact and follow-up of subjects across modes.  This paper describes the procedures used to implement the e-mail survey, and discusses the results of the mode experiment.

A Probability Sample of Gay Urban Males:  The Use of Two-Phase Adaptive Sampling
(November, 1998)  (Author: Johnny Blair)
Abstract: The major goal of the Gay Urban Males Study (GUMS) was to provide reliable (i.e., replicable) population estimates for the gay male population of four major cities.  In order to implement the probability sample necessary to achieve this objective, a survey sample design was selected that took into account the likelihood of flaws in the data used for planning.

Initial Cooperators vs. Converted Refusers: Are There Response Behavior Differences?
(May, 1997)   (Authors: Timothy Triplett, Johnny Blair, Teresa Hamilton, Yun Chiao Kang)

A Paper Describing the SRC "Fix-it Program"
(November, 1996)   (Authors: Timothy Triplett, Beth Webb)
Abstract: The Survey Research Center's "fix-it" program was written using the Berkeley CASES (Computer Assisted Survey Execution System) software. The fix-it instrument is used to edit data for CATI (Computer Assisted Telephone Interviewing) studies. The three main functions of the fix-it program are editing incorrect sample disposition information, editing incorrect interviewer information, and removing survey data from interviews conducted with ineligible respondents.

From Impressions to Data: Increasing the Objectivity of Cognitive Interviews
(January, 1996)  (Authors: Frederick Conrad, Johnny Blair)
Abstract: This paper reports a method for analyzing think-aloud data from cognitive interviews that requires coders to systematically consider a broad set of criteria in evaluating the verbal report for each question in a questionnaire. The crux of the method is a taxonomy of respondent problems which the analyst uses to classify verbal reports that seem to indicate trouble with a question. The problem categories are derived, in part, from a theory of survey responding to which many practitioners subscribe. By identifying the response stage at which a problem is likely to have occurred, certain solutions to the problem become more promising while others become less plausible.

Conducting Cognitive Interviews to Test Self-Administered and Telephone Surveys: Which Methods Should We Use?
(May, 1995)   (Authors: Susan Schechter, Johnny Blair, Janet Vande Hey)
Abstract: For more than 10 years now, the interdisciplinary efforts of survey methodologists and cognitive scientists have stimulated interest in establishing cognitive pretesting of questionnaires as a standard component of survey research. By applying cognitive psychology techniques to develop and test data collection instruments, survey researchers continue to improve and expand the methods used to interview small numbers of subjects in a laboratory environment in order to identify questionnaire problems.

Survey Procedures for Conducting Cognitive Interviews to Pretest Questionnaires: A Review of Theory and Practice
(Authors: Johnny Blair, Stanley Presser)
Abstract: After many years of relative neglect, the pretest phase of survey design has become the focus of considerable methodological activity. Applications of cognitive psychology to survey design have helped both to increase interest in pretesting, and to provide methods for its study.


All research papers on this site are Adobe PDF documents.

If you cannot read PDF files, click HERE to download the free Adobe Acrobat Reader.

If you could not find what you were looking for, send me an EMAIL and I'll get back to you.


Return to Timothy Triplett's Home Page