3 Lessons In How Not To Write A Questionnaire


This morning I opened my computer to find I had been invited to complete a survey for a community I recently joined. Curious, I opened the survey and started in. It turned out to be about cars. And it was not just a bad survey; it was a terrible survey. It contained a number of fundamental mistakes that reminded me of three principles of good questionnaire design:

  1. Ask, don’t assume;
  2. Don’t ask questions that people can’t accurately answer;
  3. Give people choices—and don’t force answers.

Ask, don’t assume

The survey started with cars in general, before quickly focusing on the car buying journey. But they never asked if I have a car, or if I drive, let alone whether I am in the market for a car. The thing is, I don’t drive. Never have. Our family did not even have a car until we inherited a 16-year-old Honda CRV about a year ago. Would I be involved in searching for information about buying a car? Not bloody likely.

Some basic screening questions would have screened me out and ensured the sample was fit for purpose. Do you really want information on the car buying journey from people who are neither recent buyers nor intending to purchase in the future?

The problem with the survey is that they assumed I was suitable instead of asking. That assumption might increase the number of completes they get, but it will certainly dilute the value of the results. Because they assumed, I had very little choice: I could either quit the survey or persist and hope the topic would change to something I could answer. Curious about how bad things could get, I soldiered on.

Don’t ask questions that people can’t accurately answer

What followed were a series of questions about the car buying journey that were impossible to answer accurately. Here is one sample question:

“WHEN PURCHASING OR LEASING A NEW VEHICLE, HOW LONG DOES IT TYPICALLY TAKE BETWEEN MAKING THE DECISION TO BUY A NEW CAR AND ACTUALLY MAKING THE PURCHASE?” (Yes, it was all in CAPS. Not only were they asking me a question I could not answer, but they also yelled it at me.)

The answer options were:

1 week or less
1-2 weeks
2-4 weeks
4-6 weeks
6-8 weeks
8-10 weeks
10 or more weeks

Let’s start with the question itself. Firstly, car buying is something most people do every few years. Will you accurately recall the amount of time that elapsed between the decision to buy a new car and the last time you did it? In fact, will you even be aware of the moment you decided to buy a new car? Do people have eureka moments where they sit up in bed and declare “Honey, I just decided to buy a new car! Mark the date.” Or do people gradually start to look enviously at new models and start to notice how their current car is starting to look a little worn and does not have the latest technology? When does that “decision” really happen?

Secondly, what is my “typical” behavior? Let’s assume for the sake of argument that I have bought three cars. I bought the first one on the spur of the moment after I got a great bonus at work. The second one was long researched because it was going to be a minivan for a growing family and I wanted to find a vehicle that was resistant to that slimy mixture of drool, soggy Cheerios and bits of Chicken McNuggets that children seem to effortlessly excrete from their fingertips. And then my third vehicle was a quick decision to buy a sports car in the midst of a midlife crisis. How do I get to “typical”? Do I average them? How well could I remember? I can’t, so I make up an answer.

Thirdly, what do I do if I have only ever bought used cars? Or, in my case, have never bought a car at all? Again, the problem of assuming, not asking.

Fourthly, let’s just assume that every time I bought a car I firmly decided to buy on a full moon and then sealed the deal exactly 4 weeks later. Which answer option do I use: 2-4 weeks or 4-6 weeks? The bands overlap, so there is no correct choice.
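The overlap is easy to avoid. A quick sketch shows what mutually exclusive, exhaustive bands look like; the labels and day ranges here are mine, not the survey's:

```python
# A sketch of non-overlapping answer bands. Each band is [low, high) in
# days, so a 28-day (exactly 4-week) purchase falls into exactly one
# bucket, and every possible value is covered.

BANDS = [
    ("1 week or less",   0, 8),
    ("8-14 days",        8, 15),
    ("15-28 days",      15, 29),
    ("29-42 days",      29, 43),
    ("43 or more days", 43, None),  # open-ended top band
]

def band_for(days: int) -> str:
    for label, low, high in BANDS:
        if days >= low and (high is None or days < high):
            return label
    raise ValueError("bands are not exhaustive")

print(band_for(28))  # exactly 4 weeks -> one unambiguous answer
```

Half-open ranges guarantee that boundary values like 28 days map to a single option, which is exactly the property the survey's "2-4 weeks / 4-6 weeks" list lacked.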

Give people choices—and don’t force answers

I finally powered past the questions about the customer journey and idled into a section on brand image. Finally, something I could answer! The first question was a grid with a list of attributes like “trustworthy,” “powerful,” “innovative” and “passionate” across the top. In rows down the side were a half dozen car brands. My task was to pick the attributes I associated with each brand. No problem, at least until I came to Infiniti. I don’t know much about Infiniti, so I didn’t pick any attributes for it. That was fine until I tried to go to the next question. That’s when I got an error message and noticed the instruction “You must select at least 1 answer for each row.” I had no choice. I was forced to pick an answer for Infiniti. So I held my nose, picked “passionate” and moved on.
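The fix is a row-level escape option. Here is a sketch of how the grid's validation could allow an honest "don't know" without letting respondents skip rows entirely; the attribute list matches the survey's examples, but the validation rules are my own assumption about how such a grid should work:

```python
# Sketch of grid-row validation with an explicit escape option, so a
# respondent who doesn't know a brand isn't forced to invent an
# association just to pass validation.

ATTRIBUTES = {"trustworthy", "powerful", "innovative", "passionate"}
ESCAPE = "don't know this brand"
VALID_CHOICES = ATTRIBUTES | {ESCAPE}

def validate_row(selections: set[str]) -> bool:
    """A row is valid if it has at least one choice, every choice is a
    known option, and the escape option isn't mixed with attributes."""
    if not selections or not selections <= VALID_CHOICES:
        return False
    if ESCAPE in selections and len(selections) > 1:
        return False  # "don't know" contradicts picking attributes
    return True

print(validate_row({"innovative", "powerful"}))  # True: real associations
print(validate_row({ESCAPE}))                    # True: honest "don't know"
print(validate_row(set()))                       # False: still must answer
```

The respondent still has to answer every row, but "don't know" becomes a legitimate answer rather than an error message.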

“People say, ‘Consumers lie all the time.’ They don’t lie. They’re just answering the question you asked them,” Coca-Cola’s Pam Mittoo told me for my book The Insights Revolution: Question Everything. That’s certainly what happened to me. I didn’t want to lie about Infiniti, but I was forced to pick an answer.

What effect does forcing an answer have on data quality? It adds error to the data, frustrates respondents and leads them to take the whole exercise less seriously. And we can’t afford to let that happen. We need to keep the people who do our surveys happy, because without them our industry will dry up and blow away.

This experience reminded me that designing a bad survey is all too easy. But it also reminded me of three principles of good questionnaire design:

  1. Ask, don’t assume;
  2. Don’t ask questions that people can’t accurately answer;
  3. Give people choices—and don’t force answers.

To learn more about good survey design, check out my review of Annie Pettit’s People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design.