One of my colleagues is a competitive distance runner: 5k runs; half-marathons; marathons—she does them all.
Continually striving to improve, she knows her fastest and slowest times, and is constantly learning what affects them, so that she can become a champion. When she looks at her time in a race, she compares it to her best, worst, and average times, because that is the most useful context for her.
She could compare herself to the pros who typically win, or to the runners at the back of the pack, but neither comparison would push her forward. Her best comparison is her own prior times.
When you test new product ideas, the question inevitably arises: “Is that a good number?” We need context and comparison to bring meaning to the numbers we are looking at. As with my colleague the runner, the most meaningful comparisons are to ideas we know well: our own successes and failures.
That’s why, when we set up an idea or concept testing system for clients, we recommend starting by testing a number of ideas they know performed really well, some they know tanked, and some that were average performers. These serve as a framework upon which to build your uniquely relevant points of comparison.
We believe in this benchmarking approach because norms databases are too often filled with irrelevant comparisons: loads of ideas that were rightfully abandoned, and epic brands that could put their name on anything and have it sell. They are all there for comparison, but they aren’t helpful.
The best comparisons are benchmarks you know. Benchmarks that push you to do better. Just ask my colleague, the runner. She knows. Our research and insights capabilities are all predicated on this philosophy.
To learn more about Idea Filter, our fast, sensitive approach to screening innovation ideas, see our article “Idea Filter: Evidence of a Sensitive and Easy to Answer Approach to Idea and Concept Testing,” or contact us.