
The simplest way to improve user surveys


We get asked for user survey advice a lot, and we find ourselves offering the same simple piece of advice: ask more open-ended text (or numeric) questions instead of relying so heavily on multiple choice.

This applies especially to surveys done near the start of a research process. We don’t necessarily recommend starting research with a survey (see our post on why and when to run a user survey), but sometimes it’s necessary, like when using a questionnaire to recruit and screen interview participants.

Multiple choice feels normal because we usually see survey results presented in bar graphs, pie charts, or other formats that require data to be categorized or segmented. What we don’t see is that those categories are often created after the data is collected, during analysis.

If you start with open-ended responses, you can usually manage to clean, categorize, and simplify your findings later. But you can never extract more precision from ambiguous multiple-choice responses.
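To make that concrete, here’s a minimal sketch (in Python, with made-up numbers) of how exact numeric answers can be rolled up into whatever buckets the analysis ends up needing, while also supporting summaries that pre-bucketed answers never could:

```python
import pandas as pd

# Hypothetical raw answers to "How many people are on your design team?"
# (made-up numbers, collected as plain integers rather than ranges).
team_sizes = pd.Series([1, 2, 2, 3, 4, 5, 8, 12, 15, 40, 120])

# Exact answers can be rolled up into whatever buckets the analysis
# ends up needing, chosen after seeing the data...
buckets = pd.cut(team_sizes, bins=[0, 5, 10, 50, 500],
                 labels=["1-5", "6-10", "11-50", "51+"])
print(buckets.value_counts().sort_index())

# ...and they also support summaries that bucketed responses can't,
# like the median and quartiles.
print(team_sizes.describe())
```

Going the other direction, from a response like “11-100” back to an exact count, is impossible.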

The graph below is from the 2017 UX Tools Survey. One question asked respondents the size of their design team: 66% of respondents identified with the smallest team size (1-10), while only 3% identified with the three largest options combined.

[Graph: 2017 UX Tools Survey results showing respondents’ team sizes]

This data leaves a lot of uncertainty about the actual distribution of team sizes.

A 2018 version of this survey was slightly improved (to the designer’s credit). The 11-100 option is split into two smaller ranges, but the majority of responses (59%) are still lumped into a single segment that begs to be broken up.

[Graph: 2018 UX Tools Survey results showing respondents’ team sizes]

This data would help answer more questions if respondents had simply been asked to enter the number of people on their team.

A bigger concern with the multiple-choice format is the risk of bias. The options you include (or exclude) and how you word them can easily reinforce blind spots and lead people to answer in ways that don’t accurately represent reality. You’re imposing a mental model and vocabulary on respondents rather than learning how to improve your current understanding.

Try to start with basic text (or numeric) responses as a default. Send the survey to a small, representative sample. If it’s important enough, talk to people as they’re taking the survey, like a usability test. Then use the responses and feedback to design multiple-choice questions — after you become more confident in the options and language you’re using.

When to use multiple choice

There are valid reasons to use multiple choice. Thousands of open-ended responses can be time-consuming to analyze, so multiple-choice questions are usually preferable for large-scale surveys. However, if the questions are poorly designed and the data isn’t useful, then your time (and your respondents’ time) was wasted anyway.

If participation or completion rate is a major concern, well-designed multiple-choice questions can help reduce participant effort, especially for people on mobile or touchscreen devices. On the other hand, poorly designed multiple-choice questions actually make the survey more difficult: at minimum, people are forced to read and understand a list of options before they can respond.

Rather than treating multiple-choice questions as the default to be used whenever possible, try to use them only when necessary:

  • When the responses (or the question itself) would otherwise be ambiguous, like when asking respondents to describe their current situation or status, or when asking about degree of satisfaction or agreement/disagreement on a (balanced) rating scale.
  • When you need to make sure respondents consider specific options before responding, like when you want to learn about competing preferences or priorities.
  • When the options/categories are known with a high degree of certainty and you want to minimize needless variation, like when asking people which country they live in, or when using logic and branching to tailor the survey to different groups of respondents.

If you’re really struggling, that may be a sign that a survey isn’t the right method. Surveys are useful for testing well-developed hypotheses, but not as useful for exploring uncertain territory or generating new, rich insights.

Don’t be afraid of a little extra effort

If the goal of the survey is to produce some charts as quickly as possible, then analyzing open-ended responses can be a pain. But if your goal is to deepen your understanding and make better decisions, then open-ended responses can produce some of the most insightful data you get from a survey.

Basic spreadsheet and qualitative analysis skills will help you tag and analyze responses, and some survey tools include text analysis features for categorizing open-ended responses. This process can be time-consuming for hundreds of responses, but for smaller samples (or for short screener questionnaires) it might actually be faster and easier than debating multiple versions of the survey before launching it.
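As a rough illustration (not any particular tool’s feature), here’s a minimal Python sketch of the kind of lightweight keyword-tagging pass you might run over open-ended responses. The responses and the keyword-to-tag mapping here are made up; in practice the tags would emerge from actually reading the data:

```python
from collections import Counter

# Hypothetical open-ended responses to "What's the hardest part of
# your design process?" (made up for illustration).
responses = [
    "Getting stakeholder buy-in for research",
    "Handoff to developers is always messy",
    "Not enough time for user research",
    "Design-to-dev handoff and documentation",
]

# Keyword-to-tag mapping, built up iteratively as you read responses.
tag_keywords = {
    "stakeholders": ["stakeholder", "buy-in"],
    "handoff": ["handoff", "developer", "dev"],
    "time": ["time", "deadline"],
    "research": ["research"],
}

def tag_response(text):
    """Return every tag whose keywords appear in the response."""
    lowered = text.lower()
    return [tag for tag, words in tag_keywords.items()
            if any(word in lowered for word in words)]

# Count how often each theme shows up across all responses.
tag_counts = Counter(tag for r in responses for tag in tag_response(r))
print(tag_counts.most_common())
```

A first pass like this won’t catch everything, but it gives you a quick read on recurring themes that you can then refine by hand.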

Remember that participants will experience the survey as part of the overall experience of your organization, product, or brand. When starting with a high level of uncertainty, a little bit of preliminary research helps everyone involved.



