Order Bias Is a Larger Source of Error Than You Think

Last week I was writing a questionnaire for a client using their survey software account, and I was chagrined to discover that it lacked the ability to randomize the display of items in a choice list. This is a standard capability of modern survey software applications, including QuestionPro, Survey Analytics, Google Consumer Surveys, and more. Not randomizing your choice lists can introduce significant error into your results: the impact of order bias can be greater than the margin of sampling error itself.

As an example, the General Social Survey (GSS), way back in 1984, showed respondents in a face-to-face interview a card, then asked them, “The qualities listed on this card may all be important, but which three would you say are the most desirable for a child to have?”

  1. … has good manners (MANNER)
  2. … tries hard to succeed (SUCCESS)
  3. … is honest (HONEST)
  4. … is neat and clean (CLEAN)
  5. … has good sense and sound judgment (JUDGMENT)
  6. … has self-control (CONTROL)
  7. … he acts like a boy or she acts like a girl (ROLE)
  8. … gets along well with other children (AMICABLE)
  9. … obeys his parents well (OBEY)
  10. … is responsible (RESPONSIBLE)
  11. … is considerate of others (CONSIDERATE)
  12. … is interested in how and why things happen (INTERESTED)
  13. … is a good student (STUDIOUS)

The three qualities chosen most often were Honest (selected by 66% of respondents), Judgment (39%), and Responsible (34%).

Unless you reversed the order of the choices on the card.

In which case, the top three choices were Honest (48%, a 17-point decrease), Judgment (41%, a 2-point increase), and Considerate (40%, a 15-point increase).

In fact, response order bias produced an average difference of ±6.5% across the 13 items, as large as the margin of sampling error at the 95% confidence level for a probability survey of U.S. adults with 230 respondents. And this may even understate the situation: the average difference was ±11.6% for the six items that ended up in the top 3 or bottom 3 of the list (the other items landed in the middle on both versions of the card).
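To see why ±6.5% matches the sampling error for 230 respondents, recall the standard worst-case formula for the margin of error of a proportion, MOE = z·√(p(1−p)/n) with p = 0.5 and z = 1.96 at 95% confidence. A minimal sketch of that arithmetic (the function name is mine, not from any survey package):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of sampling error for a proportion.

    p = 0.5 maximizes p * (1 - p), giving the conservative estimate
    typically quoted for survey results; z = 1.96 corresponds to the
    95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For n = 230 respondents, the margin is about 6.5 percentage points.
print(round(margin_of_error(230) * 100, 1))  # ~6.5
```

So the order in which the card presented the choices shifted answers about as much as random sampling noise would for a survey of this size.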

In analyzing this data, Jon Krosnick and Duane Alwin, in their paper “An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement,” found that choices presented earlier in the list were disproportionately likely to be selected. Summarizing past research with similar findings, they offer two reasons for this primacy effect:

  1. “Items presented early may establish a cognitive framework or standard of comparison that guides interpretation of later items. Because of their role in establishing the framework, early items may be accorded special significance in subsequent judgments.
  2. “Items presented early in a list are likely to be subjected to deeper cognitive processing; by the time a respondent considers the final alternative, his or her mind is likely to be cluttered with thoughts about previous alternatives that inhibit extensive consideration of it. Research on problem-solving suggests that the deeper processing accorded to early items is likely to be dominated by generation of cognitions that justify selection of these early items. Later items are less likely to stimulate generation of such justifications (because they are less carefully considered) and may therefore be selected less frequently.”

And, yes, this has been replicated when doing online surveys.

Accordingly, when fielding web surveys, always randomize when appropriate: for multiple-choice questions (as opposed to scale questions), randomize the order of the choices whenever they have no logical or inherent order. Most modern survey software applications also let you anchor a “None of the above” option to the bottom of the list for select-all-that-apply questions.

(For dropdowns with long lists that respondents will skim, such as alphabetical lists of states, provinces, and countries, there is no need to randomize the order. Doing so would only confuse respondents.)
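The shuffle-with-anchor behavior described above can be sketched in a few lines. This is an illustrative helper of my own, not the API of any particular survey platform; it shuffles the movable choices and keeps anchored options (like “None of the above”) fixed at the bottom:

```python
import random

def randomized_choices(choices, anchored=("None of the above",)):
    """Return choices in random order, with anchored items kept last.

    Items listed in `anchored` are excluded from the shuffle and
    appended to the bottom of the list, preserving their order.
    """
    movable = [c for c in choices if c not in anchored]
    fixed = [c for c in choices if c in anchored]
    random.shuffle(movable)
    return movable + fixed

# Example: the qualities rotate per respondent, the anchor stays put.
items = ["Is honest", "Has good manners", "Is responsible",
         "None of the above"]
print(randomized_choices(items))
```

Generating a fresh ordering per respondent spreads any primacy effect evenly across all choices, so no single item benefits from always appearing first.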

Randomizing choice lists is one of the easiest ways at your disposal to greatly improve the quality of your survey data.
