Apr 15, 2019

Survey Says...

Just a warning that we’re going to spend the next thousand words or so talking about online survey research. There will be math involved. Which makes me at least a little bit happy.

When I first started in the survey research industry *cough cough* years ago, beyond consulting-level questions about the best ways to address business issues, we used to get operational questions around four main overlapping topics: panel size, response rates, survey length, and data validity. I wanted to revisit those inquiries through the lens of survey research today, as doing so leads me to a specific conclusion: a new best-practice approach to surveys should be emerging.

One Person, One Vote

Companies used to go to great lengths to highlight the differences between a research panel and a database of people, and to tout both panel size and response rates. A cursory online search suggests that’s no longer the case; I found minimal information on any of those topics from the major survey providers.

My hypothesis as to why? The way most companies approach sampling makes those terms almost meaningless. Their commitment to clients is to deliver a set number of completed surveys, and they accomplish it by sending invitations to increasingly large numbers of potential respondents until the target is reached and the survey can be closed. Then, if they're applying the requisite rigor, they weight the results so they appear representative (ask your current survey vendor to explain their approach to survey weighting to you – go ahead, I'll wait).

Maybe the time has come to ask whether we are better off reverting to a research panel approach:

  • A research panel of 100,000 representative potential respondents that delivers a 50% response rate has a larger effective size (roughly 50,000 completes) than a database of 1,500,000 people that delivers a 3% response rate (roughly 45,000 completes).

  • The industry often applies sample weighting as if the size of the weights were irrelevant, but it has two huge implications. First, weighting often makes a small group of people seem much larger. For example, if a representative sample is supposed to have 50 respondents aged 18-24 and there are only 15 actual respondents, each of those respondents is treated as if they were 3.33 people; if even a couple of those people are outliers in some way, the survey results are no longer representative. Second, there's an overall penalty to be paid for applying weights to a sample. For example, if you have an unweighted sample of 1,000 respondents and applying weights leaves you with a weighting efficiency score of .5, the statistical precision of the weighted sample is that of only 500 respondents (1,000 x .5 = 500). A sketch of both calculations follows this list.
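
For anyone who wants to see those numbers worked through, here is a minimal sketch in Python. It assumes Kish's effective-sample-size approximation as the definition of weighting efficiency (one common convention, not necessarily the one your vendor uses), and the weights it uses are purely illustrative.

```python
# Minimal sketch of the effective-size and weighting-penalty math above.
# Assumes Kish's approximation for weighting efficiency; the weights are illustrative.

def expected_completes(pool_size, response_rate):
    """Expected number of completed surveys from a panel or database."""
    return pool_size * response_rate

def kish_effective_n(weights):
    """Kish's effective sample size: (sum of weights)^2 / sum of squared weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Panel vs. database (first bullet): 50,000 vs. 45,000 expected completes.
print(expected_completes(100_000, 0.50))
print(expected_completes(1_500_000, 0.03))

# Weighting penalty (second bullet): 1,000 respondents, half weighted down to 0.5 and
# half up to 1.5, yields an effective n of 800 and a weighting efficiency of 0.8.
# More extreme weights push the efficiency down toward the 0.5 figure cited above.
weights = [0.5] * 500 + [1.5] * 500
n_eff = kish_effective_n(weights)
print(n_eff, n_eff / len(weights))
```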


But wait, there’s more. As an added benefit, having a true research panel helps minimize the risk of receiving inaccurate results via survey farms (groups of people who speed through surveys for compensation) or bot-based questionnaire completion, both of which the survey industry is actively combating.

If You Have Five Seconds to Spare

Why, when everyone claims to be so busy, do we think long surveys are a good idea? Oh, that’s right, because we have a lot of questions we want to ask of people. So, good for us, bad for them! I also realize that many of my former colleagues are laughing at my hypocrisy, as I was a strong proponent of longer surveys earlier in my career (in fairness, I was also one of the biggest advocates for compensating respondents appropriately).

What’s wrong with longer surveys? I think most of us intuitively know these things, but longer surveys come with inherent disadvantages:

  • Respondent fatigue leads to inaccurate survey responses. The research industry has done an admirable job of putting checks in place for things like speeders (people answering surveys in far less time than would be expected), straightliners (people offering the exact same response to a series of questions), and other easy-to-spot data issues; a sketch of what those checks can look like follows this list. But there's really no way to gauge when a participant's attention begins to drift and what was it I was saying, I can't even remember.

  • I firmly believe that if we ask people a question, they will answer to the best of their ability. But the longer the survey, the more likely we are to ask questions they can't accurately answer. I may be able to recount in detail my thought process when I bought my last car, but the magic eight ball says my recollection of the last time I bought a particular condiment is hazy. Within CPG we have a rule of thumb that if you ask someone about their behaviors in the past three months, you are really getting their past six months of results – because for lower-involvement products, people simply can't remember accurately.
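
For the curious, here is a minimal sketch of what speeder and straightliner flags can look like in practice. The column names, data, and thresholds below are illustrative assumptions, not any vendor's actual rules.

```python
# Illustrative speeder / straightliner flags; column names, data, and thresholds are assumed.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "duration_seconds": [620, 95, 540],                        # total time spent on the survey
    "q1_a": [4, 3, 5], "q1_b": [2, 3, 1], "q1_c": [5, 3, 4],   # items from one rating grid
})

# Speeder: finished in far less time than typical (here, under 30% of the median duration).
median_duration = responses["duration_seconds"].median()
responses["speeder"] = responses["duration_seconds"] < 0.3 * median_duration

# Straightliner: gave the exact same response to every item in the grid.
grid_cols = ["q1_a", "q1_b", "q1_c"]
responses["straightliner"] = responses[grid_cols].nunique(axis=1) == 1

print(responses[["respondent_id", "speeder", "straightliner"]])
```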


But wait, there’s more. As an added benefit, conducting survey research via a purchase panel allows behavioral screening questions to be eliminated. When households photograph their purchase receipts (as Numerator participants do), we don’t have to ask whether or when they recently bought a product; we already know definitively when that purchase took place.

Oh, and kudos to anyone who picked up on the reference to The Smiths in the title of this section.

Asked and Answered

Survey research remains a key component of the consumer insights toolbox. But with mid-sized and large CPG companies losing market share and small companies increasing theirs, the need to generate high quality consumer insights is greater than ever.

Survey says…it’s time to reconsider your approach to conducting surveys. Want to know more about how to use Numerator’s OmniPanel to improve your survey insights? Contact us with questions or download an overview of our survey research capabilities today.