If you ask a question twice and get the same answer, you have evidence your sample is reliable. If you ask a question and the answer matches gold-standard benchmarks like the census, you have evidence your sample is valid.
In the world of data samples, reliability and validity are the difference between research that informs and panel surveys that mislead. In collaboration with Maru/Blue we measured the test-retest reliability and criterion validity of 28 panels in 14 countries: Australia, Brazil, China, France, Germany, India, Italy, Japan, Mexico, Russia, Singapore, South Africa, Spain and Thailand.
The result is a 14-nation whirlwind tour of how market research can go horribly awry and how it can be refreshingly right. The lessons learned reveal a great deal about the nature of data samples in a wide range of markets around the world.
Maru/Blue, in collaboration with Maru/Matchbox, undertook this research as part of their Sample Certified program. Maru/Blue panels in the US, UK and Canada are constantly monitored for reliability and validity. Outside of these countries, Maru/Blue relies on other sample providers, and those providers must be reliable and representative too. To ensure these suppliers meet these criteria, Maru/Blue built a global quality control program: Sample Certified.
Sample Certified tests and certifies sample vendors in countries where there is a high demand for multi-market insights. Vendors who are Sample Certified become preferred suppliers.
The objective was to identify quality market research panels that we can count on to deliver accurate information. We measured test-retest reliability and criterion validity. In terms of reliability, we wanted to answer the question: “If you measure the same thing twice do you get the same results?” Regarding validity, we wanted to know: “Do these results match what is known to be true?”
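To make the two questions concrete, here is a minimal sketch of one way such comparisons could be scored. This is purely illustrative and is not Maru/Blue's actual methodology: it scores agreement between two answer distributions as 100% minus the total variation distance, applied once to wave 1 vs. wave 2 (reliability) and once to the panel vs. a census-style benchmark (validity). All distributions below are hypothetical.

```python
def agreement_score(dist_a, dist_b):
    """Return % agreement between two answer distributions
    (dicts mapping category -> proportion).
    100 means identical distributions; 0 means fully disjoint."""
    categories = set(dist_a) | set(dist_b)
    # Total variation distance: half the sum of absolute differences.
    tvd = 0.5 * sum(abs(dist_a.get(c, 0.0) - dist_b.get(c, 0.0))
                    for c in categories)
    return 100.0 * (1.0 - tvd)

# Hypothetical example: share of respondents by age bracket.
wave1 = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
wave2 = {"18-34": 0.32, "35-54": 0.39, "55+": 0.29}
census = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}

reliability = agreement_score(wave1, wave2)   # same question, two waves
validity = agreement_score(wave1, census)     # panel vs. benchmark
```

Under this toy scoring, a panel that reproduces its own wave-1 answers almost exactly scores high on reliability, while a panel whose demographics drift from the census loses validity points, which is exactly the pattern described in the results below.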
This systematic approach to assessing panel quality identified some great sources, some that were slightly off the mark, and some that should never be used. Overall, validity was harder to achieve than reliability. Some panels returned the same results in both waves but were skewed in how well they represented the population. And then there were a few that, frighteningly, were neither reliable nor representative.
There were, of course, also quite a few panels that were both valid and reliable. In total, 16 suppliers in 9 countries scored between 85% and 100% on both reliability and validity. We call this boringly reliable. It’s not exciting, but it does allow us to be confident that information gathered from these sources will illuminate rather than mislead. So, they are boring, but in a good way.
In addition to our communities in the US, UK and Canada, this first round of research allows us to certify panels in Australia, Brazil, China, France, Germany, Italy, Japan, Spain, and Thailand.
To learn more about how sample can go wonderfully right and horribly wrong, download Maru/Blue’s free whitepaper “Can we count on you? Assessing the reliability and validity of panels around the world” or contact a Maru/Matchbox representative.