As a B2B/B2C survey researcher rather than a pollster, the quadrennial spectator sport that consumes me is not the Olympics, but Presidential elections. The sheer volume of polls and news stories about polls is amazing to me, and I often begin each morning during the campaign season reviewing the latest poll results. Of course, while I enjoy poll watching, I can’t help but learn something about corporate survey research as well. What are some of the things political polling has taught me about surveying?

First, some background. During the 2004 race, I turned regularly to the site Electoral-vote.com, which attempted to compensate for the mass media obsession with the popular vote. As those who had forgotten high school civics relearned in 2000, the winner of the popular vote is not guaranteed to become President. With this in mind, Electoral-vote.com used a state-by-state model to predict the results of the Electoral College. The model factored in the past reliability of polls by source as well as the recency of polls; despite the sophistication of this approach, the site mistakenly called the election for Senator Kerry.

During the 2008 race, I instead turned to FiveThirtyEight.com (named after the 538 electors in the Electoral College: 435 Representatives, 100 Senators, and 3 electors for the District of Columbia). FiveThirtyEight.com developed an even more sophisticated model. As researchers, we often design sampling to give us a 95% confidence level. That done, we then overlook the fact that, 1 time out of 20, the answer we got to a survey question falls outside the margin of error for the target population. To take this into account, Nate Silver, publisher of FiveThirtyEight.com, runs 10,000 simulations of the election, determining the results for each state randomly but informed by the model. Rarely, a state might have a result completely unpredicted by current polling, just as might happen in real life. Running 10,000 simulations then yields the odds that a candidate wins the Electoral College, along with a range of electoral margins of victory.
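
To make that concrete, here is a minimal sketch of the Monte Carlo idea, not Silver's actual model: the states, win probabilities, and electoral-vote counts are hypothetical, and each state is decided by an independent weighted coin flip, whereas a real model would cover all 51 contests and allow for correlated errors across states.

```python
import random

# A minimal sketch of Monte Carlo election simulation, not FiveThirtyEight's
# actual model. The states, win probabilities, and electoral votes below are
# hypothetical, purely for illustration.
STATES = {
    "State A": {"p_win": 0.55, "ev": 29},
    "State B": {"p_win": 0.48, "ev": 16},
    "State C": {"p_win": 0.62, "ev": 10},
    "State D": {"p_win": 0.35, "ev": 38},
    "State E": {"p_win": 0.51, "ev": 20},
}
TOTAL_EV = sum(s["ev"] for s in STATES.values())
SIMULATIONS = 10_000


def simulate_once() -> int:
    """Decide each state with a weighted coin flip; return the candidate's electoral votes."""
    return sum(s["ev"] for s in STATES.values() if random.random() < s["p_win"])


results = [simulate_once() for _ in range(SIMULATIONS)]
wins = sum(1 for ev in results if ev > TOTAL_EV / 2)

print(f"Candidate wins {wins / SIMULATIONS:.1%} of {SIMULATIONS:,} simulated elections")
print(f"Electoral votes range from {min(results)} to {max(results)}")
```

The share of simulated wins stands in for the odds of taking the Electoral College, and the spread of simulated electoral votes gives the range of possible margins.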

So what has polling taught me, as a survey researcher?

Report what matters. Reporting the Net Promoter Score for B2B research, for instance, is as bad as reporting the popular vote when it is the electoral vote that wins elections. Determine the key drivers for your business, the measures with predictive validity that tie to business outcomes, and report those.
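
As a rough illustration, the sketch below, built on invented account-level data and made-up measure names, computes an NPS but then checks which measures actually correlate with a real business outcome (renewal) before deciding what to report.

```python
# A hypothetical sketch of checking which survey measures predict a business
# outcome before deciding what to report. All data and names are invented.

def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def correlation(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical account-level data: candidate driver scores and whether the
# account actually renewed (the business outcome that matters).
ease_of_doing_business = [8, 6, 9, 5, 7, 9, 4, 8]
likelihood_to_recommend = [9, 7, 10, 6, 8, 9, 5, 9]
renewed = [1, 0, 1, 0, 1, 1, 0, 1]

print(f"NPS: {nps(likelihood_to_recommend):.0f}")
print(f"Recommend vs. renewal: r = {correlation(likelihood_to_recommend, renewed):.2f}")
print(f"Ease vs. renewal:      r = {correlation(ease_of_doing_business, renewed):.2f}")
```

If ease of doing business tracks renewal more closely than likelihood to recommend does, that is the number worth putting in front of executives.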

Sampling methodologies matter. The famous “Dewey Defeats Truman” headline is a reminder of how wrong a flawed sample can be (quota sampling and polls that stopped weeks before Election Day, in that case), just as the Literary Digest’s panel of its own readers was for the 1936 election. Access panels, RDD phone surveys, IVR surveys, cell phone surveys, online surveys, etc., all have their issues. Understand and report the strengths and weaknesses of the mode and sampling approach used.

Recognize that some data is wrong. Fortunately for us, B2B and B2C surveys never fail as spectacularly as “Dewey Defeats Truman.” We focus on sampling error because we can measure it, clinging to it even in cases where it shouldn’t be reported, yet many other types of error can cause surveys to be wrong. Identify where data may be suspect and factor that into your conclusions.
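
For perspective, the sampling error we love to cite reduces to a one-line formula; here is a quick sketch, assuming a simple random sample and illustrative numbers.

```python
import math

# 95% margin of error for a proportion, assuming a simple random sample.
# The sample size and proportion are illustrative.
n = 400          # completed responses
p = 0.5          # observed proportion (0.5 gives the widest interval)
z = 1.96         # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: ±{moe:.1%}")   # about ±4.9 points
```

That ±5 points covers sampling error only; it says nothing about coverage, nonresponse, or measurement problems, which is exactly why it shouldn’t be the only caveat we report.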

Differential turnout. Just as voting blocs sometimes don’t go to the polls as predicted, sometimes customer segments don’t make it to market in the proportions you predicted. Make sure the targeted segments align with your distribution channels.

Late swing. Sometimes market conditions change. If a major event has occurred since your survey was fielded (for instance, a competitor dramatically lowered prices or released a heralded new product), update your research before drawing any conclusions from it.

Fortunately for those of us who are survey researchers, political polling is a spectator sport. Few of us are surveying to predict one-day outcomes. We’re researching product and service launches and ongoing improvements that will play out over months or years. We get to be wrong slowly. So, sit back, relax, and enjoy the upcoming Congressional polling. It’s not as fun as the Olympics, but until survey marathons are added to the Games, it will have to do.

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC, is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.