Usability testing is a systematic approach to evaluating whether users can carry out scripted tasks successfully, learning how they prefer to complete those tasks, and uncovering opportunities to improve the product.
At the heart of a usability study stands the script, which the moderator follows during test sessions.
It is virtually impossible to practice solid usability research without having a script. Don’t go in without one.
Use this template as your starting point for writing the first draft of your usability study script:
A usability script typically consists of 4 main parts: the introduction, warm-up questions, the tasks, and the wrap-up.
Here is a bit about each part (you may want to read this while going over the template).
Introduction

This is where you introduce yourself and any other attending team members. You also tell the participant how the session is going to go, and double-check that you have their consent to record and share the session.
Use this part to build rapport with the participant. People are often hesitant, nervous, or even a little standoffish at the beginning of a research session. Simple comments such as "Oh, I've been to where you live and I loved it" can go a long way toward making people feel comfortable and helping the study run more smoothly.
Warm-up questions

Warm-up questions are meant to further break the ice, as well as to gather relevant background information about the participant.
Here are some standard warm-up questions to consider:
Tip 1: Even if you don’t have anything you want to ask, have at least 2 quick questions here, as it will help to break the ice and make the participant feel more at ease.
Tip 2: Consider asking questions that will help you understand the participant’s mental model and expectations prior to interacting with your designs (e.g. "We have a page called Security Dashboard. What tasks would you expect to be able to accomplish with the help of that page?").
Tip 3: If needed, and you have the time for it, you might want to include some more interview-style questions here that would benefit either this study or a different one (if it’s a different study, make sure it shares the same participant profile as this one!). In particular, consider asking questions that might be used later as part of a persona or a JTBD study (e.g. "What would you say are your top 3 tasks?").
Tasks

This is the heart of the script, and the part that takes the longest to write. Usability studies usually consist of 3-4 tasks (though your mileage may vary).
How to define good tasks
By the time you sit down to write your tasks, you should already have defined your research goals, objectives, and hypotheses. To write good tasks, start by going over your research objectives (which detail what you and your stakeholders want to get out of the study) and consider how best to translate them into user tasks.
The key thing to remember when moving from objectives to tasks is that your tasks should reflect realistic user goals.
For example, perhaps one of your objectives entails finding out whether users can quickly locate a CTA. But since no user ever visits a website with the goal of locating a button, your task should never be “Can you find and click the button?”. Rather, it should be about why the participant might want to click the button in the first place (e.g. “You want to enable feature X for your project. Can you do that, please?”).
Another objective of yours might be identifying potential improvements to a flow. Real users don't generally visit websites with the sole purpose of finding faults in them, so instead of asking the participant “Can you please complete the following steps and tell me what we should improve?”, give them a task that requires them to go through that flow, and observe them carefully to see where they fail, struggle, or hesitate.
Finally, pay close attention to how you phrase your tasks to avoid bias, leading the participant, and other common pitfalls. To learn more about writing good tasks, we highly recommend this helpful NN/g article: Write Better Qualitative Usability Tasks: Top 10 Mistakes to Avoid.
How to structure each task
For each task, consider whether some setup is required to provide context and appropriate motivation for the participant. If so, describe a relevant scenario before giving the task. For example:
Scenario: “Let’s say this is a project you’re working on, and you just committed some new code”.
Task: “Please test to see whether that code contains any security vulnerabilities”.
Then, consider adding some more specific questions and prompts that the moderator can use as the task unfolds, in case the participant doesn't bring these topics up on their own. Examples:
Tip 1: For each task, add a link in your script to the prototype/webpage that’s relevant for that task. Not only will it help teammates reviewing the script to understand what the task is about, but it will also allow you to quickly resend the relevant link should the participant need it again.
Tip 2: Consider noting under each task, in light gray, ‘What we expect them to do’ (e.g. "Follow the CI pipeline and go into the SAST job output"), to remind the moderator of the possible paths for completing the task. This will help the moderator get participants back on track if they fail a task that is a prerequisite for the next one.
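Putting the scenario, link, and expected-path tips together, a single task entry in your script might look like the following sketch (the wording is illustrative, borrowed from the examples above; the link is a placeholder):

```
Scenario: "Let's say this is a project you're working on, and you just
committed some new code."
Task: "Please test to see whether that code contains any security
vulnerabilities."
Prototype: [link to the relevant prototype/webpage]
What we expect them to do (in light gray): Follow the CI pipeline and go
into the SAST job output.
```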
How to order your tasks
Wrap-up

This is where you gather the participant’s broad impressions of what they saw and experienced. Here are some standard questions to consider:
Conclude the script by thanking your participant and mentioning when they can expect to receive their compensation.
Once your draft is more or less done, give it another read and ask yourself:
Edit as needed based on feedback received from your stakeholders/teammates.
Run a dummy test with a colleague or an internal participant to make sure your task instructions are clear and that you’re staying within the allotted time. Edit as needed, and notify your stakeholders of any big changes.