Comparative, qualitative usability testing enables you to get feedback on 2-3 designs early in the design process to assess the pros and cons of different design directions. This is different from quantitative comparative studies that focus on benchmarking and measurement of the design.
The focus at this stage in the design process is to identify what is and isn't working well with each design and provide insight on which one to move forward with. This is a within-subjects methodology, meaning that each participant sees all designs. By experiencing multiple designs, participants can provide useful feedback because they can compare and contrast the different designs they've seen.
For example, if you have 2-3 navigation designs with distinct differences and want to see which one resonates with participants before investing further, you can use low-fidelity prototypes that have just enough interaction to test the tasks in the study.
Goal: Compare multiple early prototypes to see which performs better: assess whether participants can navigate to and complete relevant tasks, assess overall usability, and obtain additional insight such as initial impressions and verbal qualitative feedback.
Type of facilitation: This kind of testing may be moderated or unmoderated depending on the specific details and goals of the study (e.g., how much qualitative insight you hope to gather, whether participants can get through the prototype on their own), but it will likely be moderated in order to elicit enough qualitative feedback to compare the designs.
Fidelity required: A prototype with enough fidelity to test the basic interactions and be able to compare the differences between the designs. For example, if you want to assess whether participants can navigate to an area, you would need an interactive prototype that allows participants to complete that task, and ideally be able to click a couple of “wrong” areas or paths.
Recommended sample size: 5, because the goal is to obtain qualitative insights. If you are still uncovering new, significant usability issues after 5 participants, add 1-2 more.
Number of designs: Limit the study to 3 designs to ensure participants have enough time to get through each one and to reduce participant fatigue.
What to capture:
| Metric | Details | What it measures |
|--------|---------|------------------|
| Task completion | Assess whether participants completed the task. Interpret this as directional only, given the small sample size. For example, it can indicate where to focus improvements and help prioritize findings. | Effectiveness |
| UMUX Lite | Two questions on a 7-point scale (1 = strongly disagree, 7 = strongly agree):<br>1. [This system's] capabilities meet my requirements.<br>2. [This system] is easy to use.<br>Interpret this as directional only, given the small sample size, and focus on why participants provided the rating they did: collect qualitative feedback by asking them to explain their ratings. | Perceived usability |
| Usability severity rating | Rate each usability issue as High, Medium, or Low:<br>- High: Participants could not complete a task or experienced significant frustration.<br>- Medium: Participants had some difficulty completing a task, but could complete it.<br>- Low: Participants experienced minor frustration, confusion, or other issues. | How severe the issues are; informs the priority to address them |
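Even when interpreted directionally, it can help to summarize the two UMUX Lite items as a single number. A common approach is the standard 0-100 rescaling of the two 7-point items (shown below without the optional regression adjustment to the SUS scale); this sketch assumes raw 1-7 ratings as input.

```python
def umux_lite_score(requirements: int, ease_of_use: int) -> float:
    """Rescale the two 7-point UMUX Lite items to a 0-100 score.

    Uses the standard rescaling: ((q1 + q2 - 2) / 12) * 100, so that
    two ratings of 1 map to 0 and two ratings of 7 map to 100.
    """
    for rating in (requirements, ease_of_use):
        if not 1 <= rating <= 7:
            raise ValueError("UMUX Lite ratings must be between 1 and 7")
    return (requirements + ease_of_use - 2) / 12 * 100

# Example: a participant answers 6 ("capabilities meet my requirements")
# and 7 ("easy to use").
print(round(umux_lite_score(6, 7), 1))  # → 91.7
```

With only 5 participants, report the individual scores per design rather than an average, and pair each score with the participant's explanation of their rating.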
Recommended testing platform: UserTesting for unmoderated, Zoom for moderated
Test 2-3 designs max. Testing more than 3 designs may be overwhelming for participants and you may not have enough time in the sessions to cover them all with discussion.
Test designs that have obvious differences to ensure that participants can distinguish them from each other.
All participants should see all designs and be given the same tasks for each design. Randomize the order of the designs across participants to avoid order effects (tasks becoming easier with each successive design due to learned behavior).
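One way to randomize design order is a simple counterbalancing scheme: cycle through the possible orderings so each appears roughly equally often across participants. The sketch below assumes 3 designs and 5 participants; the names are placeholders.

```python
import itertools
import random

designs = ["Design A", "Design B", "Design C"]
participants = ["P1", "P2", "P3", "P4", "P5"]

# All 6 possible orders of 3 designs, shuffled so the unused orders
# differ from study to study.
orders = list(itertools.permutations(designs))
random.shuffle(orders)

# Assign each participant the next order in the shuffled cycle.
schedule = {p: list(orders[i % len(orders)]) for i, p in enumerate(participants)}
for participant, order in schedule.items():
    print(participant, "->", ", ".join(order))
```

A fully random shuffle per participant also works; cycling through permutations just guarantees the orders are spread more evenly at small sample sizes.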
To report brief/initial findings in Slack or in an Issue, please use the following format: