How and why to test your usability interviews
“We know our site needs work, but we don’t know where to start.” For such a short statement, it’s loaded with complexity. It’s also the statement that kicked off our recent project to understand how to improve the online shopping experience for customers of a large graphics supply distributor.
The client is new to us, and I’m new to Res.im. This got us thinking about two important elements of UX research: alignment and trust. With any new client or new project, it’s important to spend some time ensuring you align your research objectives with your client/stakeholder and project needs; align your tactics and processes to reach those objectives effectively; and establish healthy working relationships (both internal and external) based on trust.
Okay, so alignment and trust are great, but how do they apply specifically to testing your usability tests?
Plenty of resources, including Nielsen Norman Group, recommend running a pilot test (or dress rehearsal) of your usability study to uncover any issues with logistics, technology, task flow, and clarity prior to engaging with real participants.
We used an iterative approach to our usability interviews as we assessed our graphics supply client’s existing site. This approach helped us focus on gaining alignment and building trust. Besides running a dress rehearsal, we also included a review of our script prior to the dress rehearsal; a pause to assess our initial data; and an internal review of the way we captured that data for analysis.
The next time you launch a new project, consider trying this approach. Here are some of the details and what we noted along the way:
Review your script
You may be tempted to jump right to the fun bit—the interviews with live participants—but your patience and diligence will be rewarded if you take a little extra time to have other project stakeholders review your script with you. Consider inviting product managers, product designers, business leaders, or other UX researchers who will be using the data and insights from your study.
Be open to new ideas and new perspectives. Another set of eyes is always helpful for spotting biases or leading follow-up questions, providing clarification for tasks, or identifying any missing pieces based on your stated objectives. It can also uncover opportunities.
Remember to keep these things in mind when receiving feedback from reviewers who are in non-research roles:
- Reviewers may not be aware of the possibilities, limitations, or other research options available. Listen for cues to uncover hidden expectations and prevent hearing something like “I really wish you would have answered…” when reviewing the results.
- Listen carefully to hear the problems or issues the stakeholder is trying to tie script questions to. Is there a theme emerging or insights that can inform your research?
- Use your client or stakeholder’s knowledge of the product to weed out any tasks or elements of the test that won’t yield usable results.
- Pay close attention to anything that could bring to light potential roadblocks or challenges you may encounter so you can be prepared to address them.
In our graphics supply client example, our script review uncovered two pain points while discussing task flow from an end user’s perspective: understanding stock levels by warehouse location, and a specific element related to shipping information. It also brought to light challenges with customer account setup and login processes.
The script didn’t drastically change, but the updates did serve an important purpose: they helped us ensure we were meeting the client’s expectations. A couple of simple tweaks prevented what could have become an erosion of trust, along with the extra time and work needed to address unanswered questions.
Pause to assess your initial data
After you finish one or two interviews, pause and review your results. What went well? What didn’t go as expected? How does the quality of the information collected stand up to expectations? Are you learning what you thought you would?
Taking a beat to assess the quality of data at this point is like dipping a toe into a hot bath. Double check and adjust the temperature of the water (iterate on your usability test) before you jump all the way in to avoid dealing with potentially painful results. You don’t want to waste valuable time and resources assessing a tub full of weak or flawed data.
Consider sharing, as we did with our graphics supply client, a copy or two of your early recording sessions. It can go a long way to helping you build and maintain trust and credibility (even if you cringe at the idea of your client or colleague hearing your performance on the recording):
- Giving stakeholders a look behind the curtain helps them understand what kinds of data will result from the study — and perhaps what not to expect.
- It demonstrates the quality of information collected and allows them to experience first-hand how in-depth answers from an engaged participant can be.
- Show, don’t tell. Give stakeholders an opportunity to experience what it’s like to see and hear a user’s frustration at navigating a difficult workflow. Give them an opportunity to go beyond the highlight reel you may be planning on presenting with your full results.
At this stage, it’s just as important to ensure the quality of the data being collected as it is to help your stakeholders engage in the process of uncovering user issues and needs.
It’s far better to risk a stakeholder latching on to an early data point from an interview than it is to be viewed as a black box of information. Remember that hearing insights presented and internalizing insights are not the same. The latter takes more time and care to achieve.
Watching “user movies” became a fascinating and exciting activity for our graphics client to review over a morning coffee. You can’t beat that level of client engagement.
Review how your data is captured
Okay, now your data is pouring in and it’s time to organize, code, and analyze your results. Like us, you probably have established processes and tools (we use Airtable for most of our qualitative analysis). Maybe right now is not the time to change a process, but it’s always the right time to take note of opportunities for improvement.
When I joined Res.im as a UX Researcher and began working on usability testing for our graphics supplier client, I embraced the importance of asking as a team, “Are we doing too much?” This doesn’t mean we aim to do poor quality work. It simply means that our focus should always be on providing relevant, actionable, and timely insights in a way that makes the best use of our time and resources.
There is no perfect technique, tool, or recipe for research success, and there are always opportunities to improve. Here are a few things to keep in mind as you move through your usability testing:
- Be mindful of your processes as you work. Simply taking note of each step along the way can open your eyes to opportunities that are easy to overlook when you are moving quickly from project to project, and focusing your energy on deliverables.
- Look for easy wins to tweak the tools and processes you’ve accumulated over the course of many research projects. Are there any new updates to the tools you use that will help you work more efficiently?
- Look for new and novel ways to conduct your research—especially when your one-on-one time with users is limited. This one comes with a caveat: do so thoughtfully, think critically, and proceed with caution. It’s crucial to assess whether your new (or new-to-you) approach is helping to achieve your research objectives.
Putting it all together
Testing and iterating on your moderated usability studies can take you from uncovering merely acceptable results to truly insightful, more valuable ones. It can also help you avoid costly rework by catching problems before you devote valuable time, effort, and resources to your testing.
Remember, even the most spectacular UX research insights are useless if they’re not used to inform great product designs. Usability testing is an opportunity to connect, align, and build trust with your team, your clients—and ultimately the users who will benefit from your work.