Screener Bias in Panel and River Samples

Most online surveys start with screener questions designed to target subsets of the population for specific research. Screeners can be used to qualify particular people, to enforce quotas, or to verify the accuracy of the list of survey participants. For instance, for one survey on cosmetics, we screened out men. For a campaign measurement study, we targeted respondents in four metropolitan areas and screened out everyone else; once we had more than 150 respondents in a metropolitan area, we also screened out additional respondents from that area for being over quota.
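
To illustrate the mechanics, here is a minimal sketch of how an over-quota screener check might work; the metro names, counts, and the screener_decision helper are hypothetical illustrations, not details from the study.

```python
# Hypothetical sketch of an over-quota check, assuming a per-metro cap of 150
# completed responses as in the campaign measurement example above.
METRO_QUOTA = 150

def screener_decision(metro, completed_counts, target_metros, quota=METRO_QUOTA):
    """Return 'qualify', 'screen out', or 'over quota' for a respondent's metro area."""
    if metro not in target_metros:
        return "screen out"           # outside the targeted metros
    if completed_counts.get(metro, 0) >= quota:
        return "over quota"           # metro already has enough completes
    return "qualify"

# Illustrative usage with made-up counts
targets = {"Boston", "Chicago", "Denver", "Atlanta"}
counts = {"Boston": 150, "Chicago": 97}
print(screener_decision("Boston", counts, targets))   # over quota
print(screener_decision("Chicago", counts, targets))  # qualify
print(screener_decision("Omaha", counts, targets))    # screen out
```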

Members of a panel quickly learn that how they answer the first few questions of a survey determines whether they qualify for that study and its reward. As a result, survey researchers often fear that people will lie on screener questions in order to qualify for the study.

To test this, we surveyed a panel and a river sample several years ago and asked respondents from each the following screener question:

Which, if any, of the following prescription medicines do you currently take?
Aglatimagene Beradenovac
Domeglicant
Pegmodglutide
Susquetide
Tofeglicant
None of the above

(The choices were shown in random order to participants in both studies.)

Each of the choices listed was a drug that was not yet on the market. These were names published as "under consideration" in June 2014 by the United States Adopted Names (USAN) Council of the American Medical Association. In other words, none of these drugs was in use at the time the survey was fielded, and the proper response for 100% of participants should have been "None of the above."

For the river sample, 94% of the 406 respondents answered "none of the above," and each of the 6% who didn't selected only one medicine. For the panel sample, 88% of the 408 respondents answered "none of the above"; however, 3% selected multiple medicines, and of those 14 respondents, 5 selected all five medicines.
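
For concreteness, here is a minimal sketch of how such responses could be tallied into "none," "one medicine," and "multiple medicines" buckets; the respondent records and the classify helper below are hypothetical, not the study's actual data.

```python
# Hypothetical sketch: bucket screener answers by how many of the fake drugs
# each respondent selected. The sample responses are illustrative only.
from collections import Counter

FAKE_DRUGS = {
    "Aglatimagene Beradenovac", "Domeglicant", "Pegmodglutide",
    "Susquetide", "Tofeglicant",
}

def classify(selections):
    """Return 'none', 'one', or 'multiple' based on how many fake drugs were chosen."""
    chosen = FAKE_DRUGS & set(selections)
    if not chosen:
        return "none"
    return "one" if len(chosen) == 1 else "multiple"

# Illustrative respondent records (each is the list of choices they selected)
responses = [
    ["None of the above"],
    ["Domeglicant"],
    ["Tofeglicant", "Susquetide", "Pegmodglutide"],
    ["None of the above"],
]

tally = Counter(classify(r) for r in responses)
for bucket in ("none", "one", "multiple"):
    n = tally.get(bucket, 0)
    print(f"{bucket}: {n} ({n / len(responses):.0%})")
```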

A fair hypothesis is that the 3% who selected multiple medicines were cheaters, fully aware they didn't take any of these medicines but wanting to qualify for the study. Some of those who selected one medicine might be cheating; some might be confusing the choice they selected with the name of a medicine they actually take. In the river study, for example, Domeglicant and Tofeglicant were twice as likely to be selected as Aglatimagene Beradenovac and Susquetide, indicating such confusion might be at play (no one in the river sample selected Pegmodglutide).

Imagine that this screener was meant to find people taking a low-incidence drug that only 1% of the population is administered. If cheaters selected that drug 4% of the time, fully 80% of the qualified responses might be from people not taking the medicine (4% lying out of the 5% who qualify in total).
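
As a quick check on that arithmetic, here is a minimal sketch of the false-positive share calculation, using the hypothetical 1% incidence and 4% lie rate from the example above.

```python
def false_positive_share(true_incidence, lie_rate):
    """Fraction of screener 'qualifiers' who don't actually take the drug.

    true_incidence: share of the population genuinely taking the drug
    lie_rate: share of respondents falsely claiming to take it
    """
    qualified = true_incidence + lie_rate   # everyone who passes the screener
    return lie_rate / qualified

# 1% genuine incidence, 4% cheaters -> 80% of qualifiers are false positives
print(f"{false_positive_share(0.01, 0.04):.0%}")
```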

False positives on screeners can significantly skew the results of surveys for which few participants qualify. Make sure to plan accordingly.

For more tips on addressing screener bias, see my post, 7 Best Practices for Writing Better Screeners.

 
