Panel companies do a great job of profiling their members for common attributes. Need to survey Hispanics under 30 years old? No problem. Need to survey divorced college graduates? No problem. Need upper-income parents? Again, no problem!

But what if you need to survey moms who took their child to the museum in the past 6 months? Or, you need upper income households that listen to Sirius XM satellite radio? With target groups like these, things start to get a little more complicated.

Now, none of those groups are particularly hard to reach, but reaching them requires writing a screener at the start of your questionnaire. You will need to screen out people who don’t qualify for your survey so that those who do respond to the body of the questionnaire are your target market.

While panel companies are diligent about eliminating the small percentage of panelists who cheat on surveys, screeners are tempting to cheat on. Imagine it from a panelist’s perspective. They are invited to take a survey, answer one to five questions and then are disqualified. Eventually, some panelists are going to feel emboldened to lie on the screener in order to qualify for the survey.

The basic way to eliminate this small group of undesirables is to take a page from what questionnaire design teaches about leading questions. For instance, the leading question, “Should people be allowed to protect themselves from harm by using Mace as self-defense?” tips your hand as to the answer you want. In the same way, a screener question like, “Do you care for an Alzheimer’s patient who takes Memantine?” tips your hand as to who qualifies for your survey.

When writing screeners, here are some best practices to follow to further screen your respondents and increase the validity of their answers:

  1. Replace yes/no questions with select-all-that-apply questions. For instance, for the above question on Alzheimer’s, instead first ask, “Do you care for a friend or family member who suffers from any of the following conditions?” Provide a long list of ailments.
  2. Triangulate qualification by asking related questions. For example, for pharmaceutical research, a subsequent question to the question on ailments might provide a long list of possible medications.
  3. Screen out respondents who select very rare attributes or multiple, unrelated low-incidence choices. Sticking with the same example, I typically include ALAD deficiency in the list of ailments I present. Since only 10 cases of it have ever been reported, I eliminate respondents who select it.
  4. Screen out respondents who select every choice. For a survey on Internet radio, I asked respondents whether they subscribed to five different services and screened out the 1 percent who selected all of them.
  5. Use red herrings. In your choice lists, use red herrings such as invented drug names or television shows or website names to catch cheaters in the act. The United States Adopted Names (USAN) Council of the American Medical Association is a good source for medication names that have been approved but aren’t yet on the market.
  6. Provide long choice lists. We recently conducted a control-market/test-market survey in four metropolitan areas. Since 12 percent of Americans move every year, profile data could be out-of-date. Accordingly, we asked respondents to identify the closest major metropolitan area from a dropdown list of 200.
  7. End the interview with a reworded screener question. Re-ask the screener question in different words as the final question of the survey. If the answers don’t match, then you have a cheater.
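Several of these rules can be enforced automatically once responses come back. Below is a minimal sketch, assuming the survey platform exports each respondent’s answers as a Python dict of sets and strings; every ailment, drug, and field name is invented for illustration (“Morbrexia” and “Brexanex” stand in for red-herring options), not drawn from any real study.

```python
# Illustrative choice lists -- all names below are hypothetical.
AILMENTS = {"Alzheimer's", "Parkinson's", "Type 2 diabetes",
            "ALAD deficiency", "Morbrexia"}
RED_HERRING_AILMENTS = {"Morbrexia"}          # rule 5: invented condition
ULTRA_RARE_AILMENTS = {"ALAD deficiency"}     # rule 3: ~10 cases ever reported
ALZHEIMERS_MEDS = {"Memantine", "Donepezil"}  # rule 2: plausible medications
RED_HERRING_MEDS = {"Brexanex"}               # rule 5: invented drug name


def passes_screener(resp):
    """Return True only if the respondent clears every check."""
    ailments = set(resp["ailments"])
    meds = set(resp["medications"])

    # Rule 5: selecting any invented option is an automatic screen-out.
    if (ailments & RED_HERRING_AILMENTS) or (meds & RED_HERRING_MEDS):
        return False
    # Rule 3: an ultra-rare condition is almost certainly a false claim.
    if ailments & ULTRA_RARE_AILMENTS:
        return False
    # Rule 4: selecting every single choice suggests straight-lining.
    if ailments == AILMENTS:
        return False
    # Rule 1 restated as select-all-that-apply: the target condition
    # must appear among the selections.
    if "Alzheimer's" not in ailments:
        return False
    # Rule 2: triangulate -- a real caregiver should recognize a medication.
    if not (meds & ALZHEIMERS_MEDS):
        return False
    # Rule 7: the reworded closing question must agree with the screener
    # (answers assumed already normalized to "yes"/"no").
    if resp["screener_answer"] != resp["exit_answer"]:
        return False
    return True
```

In practice the red-herring and rare-attribute sets would come from your own choice lists, and the triangulation pairs from your study design; the point is simply that each rule reduces to a cheap set check at tabulation time.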

The best screener question is the one that you don’t have to ask because your panel provider has already profiled respondents using it. Panelist attribute lists are constantly being updated. Talk to your panel provider to see if you can target respondents more precisely than anticipated. If not, make sure to follow the best practices above.


Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association of Public Opinion Researchers. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.