UX Survey Design: Why and When to Run a User Survey


User surveys can be fairly easy to do, but also easy to do poorly if you’re not careful. Poorly done user surveys are so prevalent in UX that it can be difficult to discuss them without the conversation turning into a debate about whether or not “surveys suck.” But no research method is always right (nor always wrong).

Like any research method, surveys have strengths and weaknesses. Surveys are one of the more time-consuming and least agile research methods in the UX toolkit. You might reach out to hundreds of people, or thousands, only to realize you’re asking the wrong questions. So if you’re going to do a survey, make sure you do it well.

In general, surveys are best suited to situations when:

  • You have specific questions and hypotheses rather than general interest in “getting some feedback” or a desire to “learn more about our users.”
  • You require precision that can only come from a sizeable sample, such as when trying to compare multiple segments or user groups.
  • You have access to a sufficiently large sample of qualified respondents (150 respondents for each data point is a good rule of thumb for keeping your margin of error within single digits, but more is better; see the quick calculation after this list).
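
To see where that rule of thumb comes from, here is a minimal sketch of the standard margin-of-error calculation for a proportion. It assumes a 95% confidence level (z ≈ 1.96) and the worst-case proportion of 0.5; those are conventional survey-sampling assumptions, not specifics from this article.

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Margin of error for a proportion at ~95% confidence.

        Uses the worst-case proportion p = 0.5, which maximizes variance,
        so this is a conservative (largest) estimate.
        """
        return z * math.sqrt(p * (1 - p) / n)

    for n in (50, 100, 150, 300, 1000):
        print(f"n={n:>4}: +/- {margin_of_error(n):.1%}")

    # n=  50: +/- 13.9%
    # n= 100: +/- 9.8%
    # n= 150: +/- 8.0%
    # n= 300: +/- 5.7%
    # n=1000: +/- 3.1%

At 150 responses the margin of error just dips to about 8%, which is why that figure works as a floor for each group you want to report on separately; note that shrinking the error further gets expensive fast, since halving it takes roughly four times the sample.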

Define High-Level Research Questions Before Survey Questions

Don’t treat surveys as the default method for gathering customer feedback. Don’t start by saying, “we need to do a survey,” and only then think about what to ask.

Generic surveys rarely help make decisions. At worst, they produce ambiguous data that either reinforces existing biases or increases uncertainty (or both). At best, you might get some graphs showing that people are “somewhat satisfied” with an existing experience. But with no way to ask follow-up questions, survey results can leave you with more questions than answers. It’s not uncommon to see high satisfaction ratings from respondents who list multiple complaints, or say they’ve never actually used the product or service.

In general, the choice of research method should be based on the questions we need to answer, and those questions should be tied to decisions we need to make. For a product team, those research questions often look like:

  • What types of people use our product or service?
  • What proportion of users fit various types/segments, groups/audiences, or personas?
  • What are people’s goals and “top tasks”?
  • How do expectations and needs vary between user groups?
  • How do those expectations and needs align with adoption and satisfaction?
  • Where are the best opportunities to improve adoption and satisfaction?
  • Which changes or features should we prioritize in order to make those improvements?

There are dozens of different methods we can use to answer questions — interviews, usability tests, contextual/field observation, diary studies, participatory workshops, data analysis, secondary research — each suited to different situations, and with countless ways to tailor each to your needs.

Combine Surveys with Other User Research Methods

Any research method on its own risks giving you more questions, or bigger questions, than you started with. A survey can easily leave you wondering things like, “What exactly do people not like? What exactly should we change?”

That’s why surveys should be treated as just part of your research, not all of it.

A survey should either build on previous findings (i.e. to add confidence and precision) or include a plan for follow-up interviews to help make sense of the data. You can also treat a short survey as a way to recruit and screen participants for more qualitative studies like interviews and usability tests.

Our usual preference is to start by talking to people, either in individual interviews or focus group-style sessions. Interviews give us a better understanding of which questions to ask in a survey and how to ask them. Interviews and observations reveal issues and opportunities we might otherwise not have thought to probe in a survey. And we learn how to speak the users’ language, which makes us more confident that people understand what we’re asking.

That’s not to say that the purpose of interviews and other types of more qualitative research is just to help ask better survey questions. Nor do you always need to do a survey to validate qualitative results. (Corroboration might come from other research methods, like usage analytics and secondary research.)

What’s most important is to look at surveys as just one set of tools in the user research toolbox.
