Are surveys useful for product development?

When it comes to customer research, there are two very common ways to do it. Two ways that are common but don't bring good results. The first is working with your own assumptions instead of talking to your customers. Technically, that's not customer research at all, but sometimes empathy gets confused with knowledge. The other is doing surveys. The appeal of a survey is how easy it is. We just write down our assumptions and then ask people what they think about them. We can start with something like Google Forms and we are good to go. As it is a quantitative method, we can send our survey to anyone remotely relevant and hope for the best. It's easy and cheap.

Yet, surveys don’t give us good results. They give us results, but results and good results are not the same. Here are the things I think are wrong with surveys.

What’s wrong with surveys?

Before we start, there is one important thing to say. I have done a lot of surveys, for product development but also for other things. I have even been the project manager for a corporate survey program at Deutsche Telekom. I totally get why someone would do them. And there are situations where a survey is the right choice. The question isn't whether surveys are a good tool in general. The question is whether they can help us build better products. And to this my answer is a clear no.

The main reason for this is that surveys give too shallow results and no option to course-correct. To better illustrate what I mean, let’s dissect them into three parts:


  • Who will answer our survey?
  • What do we ask on our survey?
  • How will we work with the results?

Who will answer our survey?

Let's say we want to build a new payment app. We want to know what features our future customers will want. So we have to come up with a potential customer profile first. Let's assume we do everything right here. We look at the extended competitive landscape. We look at the jobs we assume our customers want to get done. And we select accordingly. We skip the whole personas and demographics nonsense. We select based on well-thought-through assumptions. Just like we would for proper customer research. We might come up with something like this:

  • People who have used another payment app for the first time in the last 6 months
  • People who quit using a payment method within the last 6 months

This would already be way better than what usually happens. But hey, we are professionals. Our assumption here is that we will mainly compete with other payment apps: PayPal, Google Pay, Apple Pay, virtual credit cards, and so on.

The issue is that not everyone who receives the survey will answer it. Usually only two groups answer a survey. One group is very happy customers and friends who want to do us a favor. They will answer mostly positively, with some elements of honesty. The other group is haters. They don't want to help us; they only want to tell us how bad our product or service or idea is. But the important group is the middle ground: those people who don't care much about payment apps. Yet, as they don't care much, they usually don't have time for surveys. So our baseline is already biased. And we miss out on the biggest customer group.

What do we ask on surveys?

There are endless things you can do wrong in surveys. Let's assume again that we know what to do. So we ask questions that target past behavior. How people behaved instead of opinions. Only about timeframes participants can properly remember. Questions that are open and non-leading. Problem-related questions instead of solution-targeted ones. All the things you have to do when you want to ask better questions.

However, there is one thing you can't circumvent: you can ask only a limited number of questions. There is very little emotional energy in surveys. Usually, when we ask more than seven questions, every question after that gets average values. Participants just don't care enough to answer longer surveys. This is called survey fatigue. When we did employee surveys at Deutsche Telekom, after question 7 most values were a 6 out of 10. And this is for closed questions where you had to give a number. With free-text questions, you get around two usable answers. If you are lucky. This severely limits the depth of data you can get. We can only get around seven good answers from a survey. Compare that to a 60-90 minute customer interview.

So by now, we have done everything right. Still, we only got seven small data points from the least interesting customer base. Let's try to work with the results.

How will we work with the survey results?

Let’s take some questions we could have asked. You may replace them with other questions to your liking, but I think this makes it very tangible. For our payment app, our survey consists of the following seven questions:

  1. How frequently have you used payment apps for transactions in the past month? (multiple choice)
  2. On a scale of 1 to 10, how important is the speed of transactions when you choose a payment app? (scale)
  3. What features have prompted you to choose a specific payment app in the past? (open text answer)
  4. What are the top three issues you’ve encountered with payment apps previously? (open text answer)
  5. On a scale of 1 to 10, how significant is the availability of customer support in your decision to use a payment app? (scale)
  6. How do you rate the importance of user-friendliness in a payment app on a scale from 1 to 10? (scale)
  7. Describe a situation where a payment app failed to meet your expectations. (open text answer)

Now go ahead and make up some answers we could have received. Assume maybe 2-3 sentences for the open text answers. And then ask yourself: Does this answer give me everything I need to know? Because I always would have loved to ask people more questions. To dig deeper into their answers.

Here is a list of follow-up questions that I instantly came up with. Questions I would love to ask but can't in a survey:

  • Why didn’t you use the payment app more or less often? 
  • When did you use it? 
  • What triggered you to prefer the app to the alternative? 
  • What are the alternatives? 
  • How did you make this decision? 
  • Why does the speed matter to you? 
  • How do you measure if the transaction speed is fast or slow? 
  • Why these features? 
  • When did you first notice that this is an important feature to you?
  • Why are these the top three? 
  • What was the effect of the issue you had with this payment app? 
  • What did you do, when the app had the issue? 

And so on and so on.

And this means that even if we do everything right, surveys are quantitative research. And quantitative research gives a few data points from many individuals. But that doesn't help in understanding people and their decisions. Thus, you can't really build on them.

Want to learn about an alternative to surveys?

Have a look at our Jobs to be done online masterclass
