In Lewis Carroll’s Alice in Wonderland, there’s a great exchange between Alice and the Cheshire Cat.
‘Would you tell me, please, which way I ought to go from here?’ asks Alice.
‘That depends a good deal on where you want to get to,’ says the Cat.
‘I don’t much care where,’ says Alice.
‘Then it doesn’t matter which way you go,’ says the Cat.
Applying that same principle to market research, you might say that if you don’t care about getting reliable, repeatable responses to your survey questions, then it doesn’t much matter where you source the panel members. Conversely, if you do, it does.
The importance of sample quality was objectively confirmed for us by data recently released by Nate Silver’s FiveThirtyEight. In its ranking of the accuracy of political pollsters, Maru/Matchbox’s Springboard America finished #1 among those using online samples (this was for work done on behalf of Angus Reid Global). It had the highest “Elections Correctly Called” score and the lowest “Difference from Actual” among online sources that had covered ten or more elections.
Every pollster has its own preferred methodology and its own way of defining sample, but, as Silver’s work confirms, not all are equal. As an industry, we know that the advent of social media and the introduction of publisher sources have radically changed the sample we use in recent years. We have gone from relying on panels of known respondents – people vetted and profiled – to streams of unknown respondents whose motivations for answering questions are variable and not always aligned with the goal of collecting reliable, useful data.
In a recently published paper (“Art or Science? The Perils and Possibilities of Survey Sampling in the Evolving Online World”) we flagged many of the troubling issues surrounding current sample construction practices. To simplify a bit, the challenge essentially comes down to repeatability. As part of the paper, we reviewed the reliability of seven sources, ranging from our own Springboard America research community to multi-reward and river sources, as well as a large, well-known U.S. panel (called Panel A) that has recently been taking much of its sample from non-panel sources. Using a two-wave study, we found statistically significant differences in three of the seven sources, including Panel A. Of note, the sample from Panel A tended to be less motivated by intrinsic rewards (“doing my part as a good consumer and citizen”) and more by a simple financial incentive.
This motivation is critical. If sample members just want access to content, or a new piglet for a game of FarmVille, they may not be the most reliable source of survey data, and that shows up as a lack of repeatability. Unsurprisingly, Springboard America showed the smallest statistical difference from one wave to the next (2 percent). That is, we believe, a function of the motivation of the panel participants, itself a result of the effort that has gone into developing a detailed understanding of panel members and encouraging a true feeling of community among them. This applies whether you’re building a specific insight community or tapping into an existing market community: known respondents are critical to making decisions that are accurately informed.
So do communities more accurately predict elections? Nate Silver’s work would seem to suggest so. More broadly, for clients building business ideas and marketing plans around research, just “getting somewhere” is not enough. They want to have real insights and actionable data. For that, you need to know who you’re sampling.
Download the entire paper, authored by Rob Berger and Andrew Grenville: Art or Science? The Perils and Possibilities of Survey Sampling in the Evolving Online World.