What is Usability Testing? Critical Methods & Helpful Tools

Usability testing is an integral part of the user-centered design process that typically involves observing a group of users as they attempt to complete a set of tasks or task-based scenarios as a means for evaluating a product’s efficiency and ease of use. It is the aspect of user research that focuses on the practical application and real-world usage of both physical and digital products.

Usability testing is an indispensable part of the product design process because it has the potential to illuminate what is ‘right’ and ‘wrong’ with a product, uncovering usability issues that only become more expensive and time-intensive to fix as the product life cycle advances.

The #1 Benefit of Usability Testing

There is usually a gap between how stakeholders think users will behave and how they actually behave and interact with a product. That gap is commonly filled with hype, wishful thinking, and risk — all of which can lead to failure down the road.

One of the key benefits of usability testing is that it provides an opportunity to close this gap by giving stakeholders the power to address usability problems within their product early on, before the design is implemented or shipped en masse.

The best piece of advice for anyone who's thinking of conducting usability testing is that you can—and should—start early. Conduct usability tests on early, low-fidelity prototypes rather than only testing late-stage, high-fidelity prototypes. Doing these tests early in the design process allows you to gather important feedback and make changes early. This way, the costs of production can be much lower too.

–Mads Soegaard, Founder of the Interaction Design Foundation

The process of usability testing can then be repeated throughout the product lifecycle to ensure that the product remains aligned with user needs.

There are many different usability testing methods, each with its own benefits and limitations:

  • In-Person or Remote
  • Moderated or Unmoderated
  • Eye-tracking
  • Synchronous or Asynchronous
  • Hallway usability testing

The method(s) you select for a given project should be based on factors like cost, convenience, stage in the product life cycle, and the type of information you’re looking to collect and validate.

Moderated Usability Testing

  • Ideal Use Cases: When you need a high level of interaction with a user or it’s necessary for the user to work with a complicated process or concept
  • Relevant Scenarios: Limited functionality prototype testing, qualitative research-centric customer journey investigation, information gathering on overall usability perception

As the name implies, these types of tests are set in a scenario where there is a moderator present who is able to guide the course of the study. The moderator needs to be a person with the proper understanding and skills to guide the conversation, as it’s sometimes necessary to answer questions posed by the user.

Challenges of moderated usability tests are that they require more effort to schedule and, depending on whether the study is in-person, can also demand more resources to orchestrate successfully. Note-taking and moderating simultaneously is a nearly impossible task, so if possible it’s helpful to record the session. An additional drawback is the risk of skewed results introduced by moderator bias or leading questions.

Unmoderated Usability Testing

  • Ideal Use Cases: When you need quick insight regarding a specific topic area or feature from a large and diverse sample and have a limited budget
  • Relevant Scenarios: Big data collection on how large audiences interact with an interface in a very specific way or complete a simple, straightforward task (e.g. completion rates, time to complete a task, ease of task completion, etc.)

Unmoderated usability studies involve setting up a test and letting participants complete it without any guidance or input from a moderator. Because of this, unmoderated tests typically allow for quicker turnaround. An additional benefit of unmoderated tests is the safeguard against the introduction of bias and leading questions. This method also lends itself nicely to remote user testing, as it doesn’t require a moderator or a dedicated location to conduct the test.

One of the primary challenges of unmoderated testing is that the lack of moderator guidance means you must rely on a vetting process for questions/tasks to try to ensure that user thoughts and feedback are clear and useful.
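To make those metrics concrete, the raw results of an unmoderated study can be rolled up into figures like completion rate and time on task with only a few lines of analysis. Here is a minimal sketch in Python; the session records, field names, and rating scale are assumptions for illustration rather than any particular tool’s export format.

```python
# Roll raw unmoderated-test sessions up into task metrics: completion rate,
# median time on task, and average ease rating. The record layout below is a
# hypothetical example, not a real tool's export schema.
from statistics import median

sessions = [
    {"participant": "P1", "completed": True,  "seconds": 48,  "ease_rating": 6},
    {"participant": "P2", "completed": True,  "seconds": 95,  "ease_rating": 4},
    {"participant": "P3", "completed": False, "seconds": 210, "ease_rating": 2},
    {"participant": "P4", "completed": True,  "seconds": 61,  "ease_rating": 5},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
median_time = median(s["seconds"] for s in sessions if s["completed"])
average_ease = sum(s["ease_rating"] for s in sessions) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")                # 75% for this toy data
print(f"Median time on task (completed only): {median_time}s")  # 61s
print(f"Average ease rating (1-7 scale): {average_ease:.1f}")
```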

In-Person Usability Testing

  • Ideal Use Cases: When you have a long lead time, ample budget, access to a suitable location, and want to gather observational data (like a user’s body language and facial expressions) in a more controlled environment
  • Relevant Scenarios: When evaluating a crucial product component, when gathering data on a product element or application that’s strictly confidential, when conducting more holistic qualitative research on efficacy or clarity

This method of usability testing places a moderator and participant in the same space, which allows the moderator to directly observe participants as they share their thoughts during the test. More specifically, it’s excellent for giving the moderator contextual and visual cues about user behavior and an opportunity to analyze participants’ body language, gestures, and facial expressions as they occur in real time.

Some of the limitations of in-person usability testing are often linked to timing and cost. In-person studies can take longer to run than some of the other methods, and a moderator must also contend with logistical factors, like the setting of the study and participant no-shows, which will happen from time to time.

Remote Usability Testing

  • Ideal Use Cases: When you have a limited budget, a tight deadline, an audience that is geographically dispersed, or a situation when the user’s environment may greatly affect their decision-making
  • Relevant Scenarios: Measuring the impact of a minor feature change, gathering data on a widget, investigating the ease of completing a specific task

This type of usability testing can be done independent of your company’s or the user’s location. Compared to an in-person usability test, remote testing is generally more cost-effective and easier to schedule, since you don’t have to rent out a physical space to conduct the tests. In addition, there are many services that offer remote unmoderated usability tests and take care of user recruiting and test administration.

A limitation of remote usability testing is that it can be more difficult to test complex tasks and prototypes. In addition, remote usability testing relies on the users’ hardware to complete the test successfully.

Hallway Usability Testing

  • Ideal Use Cases: When you need extremely quick feedback to validate or invalidate perceived assumptions
  • Relevant Scenarios: Evaluating a design decision in an early lifecycle stage or identifying any issues around completing a task or series of tasks (e.g. locating a sign-in button or changing a delivery address)

As the name implies, hallway usability testing is a technique that calls on researchers to recruit random bystanders (often colleagues in your office) from a busy area, like a hallway, for a quick and informal 5 to 10-minute test. While both cheap and convenient, this methodology won’t allow you to deep dive into the nuances of the product, but it can highlight glaring roadblocks.

Eye-tracking Usability Testing

  • Ideal Use Cases: After you’ve exhausted all other usability testing methods and you need more detailed evaluation of how users behave while interacting with an interface - and what UI elements draw their attention
  • Relevant Scenarios: Identifying miscues (distracting or problematic UI elements), evaluating efficacy of core UI elements (e.g. key call-to-action buttons), gaining insight into the order users complete tasks on a webpage

Eye-tracking usability testing leverages advanced software to evaluate how users digest information on a page or interface. It follows the motion of the pupil to record where users focus their attention, how this focus moves as a user navigates a page or interface, how much time a user spends focusing on particular elements, etc.
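To make the mechanics concrete: eye-tracking tools generally export a stream of timestamped gaze points, and a user’s dwell time on each element can be estimated by bucketing those points into areas of interest (AOIs). Below is a rough Python sketch of that bucketing step; the sample format, sampling interval, and AOI coordinates are invented for illustration and will differ from any vendor’s actual export.

```python
# Estimate dwell time per UI element by assigning gaze samples to areas of
# interest (AOIs). Coordinates, sampling rate, and element names are made up.
from collections import defaultdict

# AOIs as (left, top, right, bottom) screen rectangles -- hypothetical layout
aois = {
    "hero_cta":   (400, 300, 700, 380),
    "nav_search": (900, 20, 1100, 60),
}

# Gaze samples: (timestamp_ms, x, y), captured at a fixed 20 ms interval
gaze_samples = [(0, 410, 320), (20, 430, 335), (40, 950, 40), (60, 960, 45)]
SAMPLE_INTERVAL_MS = 20

dwell_ms = defaultdict(int)
for _, x, y in gaze_samples:
    for name, (left, top, right, bottom) in aois.items():
        if left <= x <= right and top <= y <= bottom:
            dwell_ms[name] += SAMPLE_INTERVAL_MS

for name, ms in dwell_ms.items():
    print(f"{name}: {ms} ms of attention")  # 40 ms each in this toy data
```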

Good design utilizes predictability and consistency to make it easy for users to scan a page or interface, and eye-tracking studies can help reveal whether layouts meet this goal relative to the task at hand—illuminating whether a design is usable or unusable.

On the other hand, these kinds of tests are expensive and time-consuming; not to mention, they leave plenty of room for the researcher to introduce bias and erroneous analysis into the study, as it’s easy to make assumptions about why a user behaved in a certain way.

How to Conduct a Usability Test in 6 Simple Steps

Usability testing is research, and becoming an effective researcher begins with asking questions. A good mindset to adopt as a researcher is to assume you know little to nothing about the ‘whys’ of user behavior — treat each usability test as an investigation. To start, you’ll want to lay the foundation for a successful test.

While the approach to conducting a usability test will vary depending on which combination of the aforementioned methods you’re implementing (i.e. in-person, moderated usability testing vs. remote, unmoderated usability testing), there are some general best practices that apply in almost all cases.

Here are the basic steps:

  1. Set goals
  2. Create a script
  3. Recruit participants
  4. Run a pilot
  5. Test
  6. Report findings

Step 1: Set Goals

The first step in any usability evaluation is to start with a discussion of goals: you should come to an understanding with stakeholders about the nature of the test you want to run and why.

Some useful discussion points that can help with goal-setting:

  • What are the most important tasks for users to complete?
  • What areas of concern exist?
  • Do stakeholders have specifics they want to learn about how users engage with a product or interface?
  • What kind of data do stakeholders need to make a clear decision?
  • Are there pressing questions?

This conversation will also help you identify which devices and environments to consider so that the test best aligns with your users and the goals you are hoping to achieve.

Do keep in mind that goals should be specific and measurable. Also, one usability testing session should not have too many goals: work with stakeholders to prioritize goals, focus on the objectives that add the most business value, and ensure goals are simple and stem from solving a specific problem.

Examples of good goals:

  1. Can users find the information they need to return a product?
  2. What is the best location for a ‘checkout’ button on a specific page?
  3. How many clicks does it take users to make a booking?

Examples of bad goals:

  1. Why aren’t users buying a product?
  2. What do users think of a particular product feature?
  3. Discover why users are bouncing from our website

For usability testing a digital product or service, sometimes it can be helpful to think about what the user’s ‘happy path’ through the product or service would be.

From there, you can then assess the following:

  • What method of testing works best for the stated goals, timeline, and budget?
  • How much will the participant be compensated?
  • How many participants do we want to test? (Nielsen Norman Group recommends at least 5; see the note after this list)
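The ‘at least 5’ guideline traces back to Nielsen and Landauer’s observation that the share of usability problems found by n participants grows roughly as 1 - (1 - L)^n, where L is the chance that a single participant uncovers a given problem (around 31% in their data, though your own product may differ). A quick Python sketch shows how steeply that curve flattens:

```python
# Expected share of usability problems uncovered by n participants, using the
# classic Nielsen/Landauer curve: found(n) = 1 - (1 - L)^n.
# L = 0.31 is the average from Nielsen's data; treat it as an assumption.
L = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} participants -> ~{found:.0%} of problems found")
```

With these numbers, five participants already surface the bulk of the problems, which is why several small rounds of testing with fixes in between usually beat one large round.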

Lastly, you’ll need to decide on the specific design and fidelity of the product that participants will be testing. Often, prototypes are used in usability testing, as opposed to a finished product. Testing with prototypes helps minimize design/build time while still effectively leveraging user insights and feedback.

Prototypes can have differing levels of fidelity (i.e. details, functionality). In the early stages of the product development process, prototypes for digital products can commonly be presented in a lower fidelity, such as paper prototypes. As the product advances towards the release date, the fidelity of the prototype typically increases, often in the form of interactive, digital prototypes. Once you’ve familiarized yourself with what is going to be tested, you’re ready to move on to the next step.

Step 2: Create a Script

No matter what method of usability testing you run, you’ll want to create the framework for the test as you’d like it to take place. Doing this is essential. It helps you ensure that the test runs smoothly and, even more importantly, it builds consistency across participants. Consistency is key because it’s your job to look for trends in the user behavior that you observe, which is impossible to do if the usability test is not standardized in some way across participants.

The next part of the script involves creating the set of tasks or scenarios that you’d like users to interact with - and that align with the appointed goals. Doing this ensures that the test will be standardized and repeatable, and will help ensure that the tests yield measurable results. If you are moderating a study, you’ll want to structure the task list along with any contextual or follow-up questions you may have.

How would you....? Why did you...? One effective technique is to let your voice trail off and have the user complete the question...[this] gives good insights into the mental models at work in their minds.

- David Hutchful, Co-founder at Bloom Impact, on good examples of usability testing questions

You’ll also want to take time to set user expectations at the beginning of each test. Commonly, this is accomplished by an introduction to the test where you explain to the user what they can expect from the session.

Here is a usability testing sample script intro for a moderated study:

Hi _____, thank you again for taking the time to participate in this study. There are just a couple of things you’ll need to know before we get started.

The first thing is that all of the data from this test will be kept confidential, and your name will not be associated with the data that we collect.

The second thing is we are not evaluating you, at any point today. There are no wrong moves or wrong answers. Our goal is to evaluate the product and make it easier for you and people like you to use it.

The third thing is that you are free to take a break at any time, just say that you would like to do so. You are also free to end this session at any time, for any reason.

Here’s how this will work: I’m going to give you a set of task-based activities to complete during today’s evaluation. Over the course of the study, you will interact with the product as it is configured today and answer a few questions about your experience. Please work with the system as you would if you were at your desk, at home, or at work. If you need help, please try to get help just as you would during a regular day.

You may ask me for help; however, I may not be able to answer immediately, and that’s mainly because I want to see how you interact with the system, on your own.

At my signal, you can begin the first task. During each task, I want you to describe out loud your thoughts and actions. Your verbal description will assist me in understanding how you completed each step. So please, as much as you can, comment and think out loud - even though that’s probably not how you’re used to working on the day-to-day. It’ll be very helpful.

You want to do your best to make users feel comfortable and not as though they are being scrutinized. It’s important that users know that it’s the product that’s being tested, and not them.

For moderated tests, resist the urge to help or explain anything. When they flounder, show you're right there with them (a confused look at the screen and a are quite sufficient)

- Marti Gukeisen, UX Director at Perficient Digital

Step 3: Recruit Participants

Once you’ve established your key goals, mapped out corresponding scenarios, and created a script or plan, it’s time to recruit participants. Again, the manner in which you complete this task will depend on the usability testing method you’re putting into play and the goals you’ve outlined.

In every situation though, you should consider the following when recruiting:

  • How many participants do you need to give a representative sample?
  • What qualities must a participant have to best align with your user base in this scenario?
  • How will you screen participants to ensure they represent your user base?
  • Will you incentivize participation - and if so, how?
  • Where will you find participants - and will you recruit yourself or leverage the help of an outside source?
  • Where will the study take place - and how will this affect recruitment and no-show rate?

If you’re new to recruitment and have adequate budget, using a recruitment agency can be a quick and easy way to garner the right number of users for testing and the right user demographics.

Alternatively, if you have limited budget and need to recruit yourself, remember that the most important recruitment rule of thumb is making sure your participants accurately reflect the user groups that will be engaging with your site as per your outlined scenario.

Step 4: Run a Pilot

After you’ve gotten your script or task list together, put it to the test. If you work on a team, you can ask one of your teammates to participate in your pilot session. The purpose of a pilot is to uncover any points in the structure of your script or the test itself that might need to be adjusted for time or clarity.

You are not your users. The questions or tasks you go in thinking are important are not usually the important questions you should be asking. I suggest doing a pre-test of your test or an open-ended usability minded think aloud with guided tasks to see what are the actual problems your users are running into and then forming a usability test based on what you find.

- Laurian C Vega, PhD in User Experience

If you’re moderating, running a pilot is also a great way to familiarize yourself with how the test will flow before you’re actually facing a user. If you’re running a remote study, or recording or webcasting your usability tests, a pilot also gives you an opportunity to identify and solve any technical problems or snafus before showtime.

Step 5: Test

Okay. By now, you have alignment on what you’re testing, who you’re testing, and how. Now it’s time to test. In every case possible, it’s immensely helpful to record each session, which is something that you’ll want to have participants agree to beforehand.

For unmoderated studies, it’s a good idea to take notes. On the other hand, when you are acting as a moderator, you want to avoid trying to divide your attention between the participant and the near-impossible task of simultaneous note-taking. Recording each session is ideal because it gives anyone the ability to go back through the sessions and make notes. If possible, having a team member present for the usability tests who is able to make notes can also be beneficial.

You might feel compelled to fill the void when users pause and think about something they haven’t thought about. However, that’s where your best answers are coming from. Try to wait it out until they start talking again.

- Min Hu (@imMin), Lead Product Designer at Salesforce

Step 6: Report Findings

After the study is completed, you’ll want to sit and analyze the sessions. The goal is to identify repeated themes or trends in user behavior. It’s always a feel-good moment if you are able to report on users experiencing moments of delight during the study, but your ultimate goal is to target and call out areas that can be improved and, if possible, suggest ways to improve them.

Make sure to account for their [users'] constraints and biases when doing the analyses.

- Jude Yew, Assistant Professor in the Department of Communications and New Media, National University of Singapore

An effective means for reporting on usability issues is by ranking them in terms of severity. This can help guide the conversation with stakeholders and development teams as to what issues to address first.

The severity of a usability problem depends on two dimensions: the proportion of users experiencing the problem and the degree to which the problem impacts the users who experience it.

  • Many users, big impact: High severity
  • Few users, big impact: Medium severity
  • Many users, small impact: Medium severity
  • Few users, small impact: Low severity

To estimate the severity of a usability problem, explain how frequently users encounter the problem and how much impact it has on the users who do.

There can be any number of severity levels assigned to issues, but here are the core three:

S1
A Severity 1 issue usually falls into two categories:
  • An issue that prevents a significant number of users from starting or completing an interaction
  • An issue that is critical to the core of the product
S2
A Severity 2 issue usually falls under the following:
  • Prevents or prohibits a significant number of users from engaging with important features or tasks
S3
A Severity 3 issue usually falls under the following:
  • Impedes some users from task or goal completion

Map the usability problems you uncover in your usability test to these levels within your findings document. Doing this also provides a succinct executive summary of some of the key learnings.
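If you log findings in a spreadsheet or a small script, the frequency-by-impact matrix above can be encoded directly so that every issue receives a consistent label. The sketch below shows one way to do it in Python; the example findings, the 50% ‘many users’ threshold, and the field names are illustrative assumptions rather than a fixed standard.

```python
# Assign a severity label to each usability finding based on how many
# participants hit it (frequency) and how badly it affects them (impact),
# mirroring the frequency-by-impact matrix above. All data here is made up.

def severity(many_users: bool, big_impact: bool) -> str:
    if many_users and big_impact:
        return "High severity"
    if not many_users and not big_impact:
        return "Low severity"
    return "Medium severity"

TOTAL_PARTICIPANTS = 5  # assumed study size

findings = [
    {"issue": "Sign-in button hidden below the fold",  "affected": 4, "big_impact": True},
    {"issue": "Delivery-address form rejects spaces",  "affected": 2, "big_impact": True},
    {"issue": "Confusing icon label on the dashboard", "affected": 1, "big_impact": False},
]

for f in findings:
    many = f["affected"] / TOTAL_PARTICIPANTS >= 0.5  # "many users" cutoff -- an assumption
    print(f'{severity(many, f["big_impact"])}: {f["issue"]}')
```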
