The UX Scorecard is a way for us to identify and score the usability of an experience in our product based on a set of heuristics. We use UX Scorecards to gain an understanding of how a user interacts with our product and to quickly spot opportunities for improvement.
UX Scorecards should be done on every important workflow and should be repeated every 6 months from the last scorecard run, allowing you and your team to continuously monitor progress in making experiences better for our users. If it has been a while since a scorecard has been run in your group, you should plan on doing one soon to reestablish the cadence.
As UX practitioners, we must think strategically about fixing usability challenges within the GitLab product in order to give our users a quality experience. Creating a UX Scorecard with associated recommendations enables us to identify, scope, and track the effort of addressing usability concerns within a specific workflow. When it's complete, we have the information required to collaborate with Product Managers on grouping fixes into meaningful iterations and prioritizing UX-related issues.
All of the UX Scorecards can be found in this epic.
The UX Scorecard process is meant to balance flexibility and consistency. There are several ways you can create your Scorecard, listed from lightest to heaviest. Select an appropriate approach based on the time you have, the priority of the workflow to users, or whether or not this is the first Scorecard for a JTBD.
When doing a Scorecard, you have options:
Review the current experience by doing a heuristic evaluation. This can be done in half a day and is a fast approach, especially if you are working on a Scorecard for a task that has previously been scored.
Using an experience map, such as the one found in the Scorecard Experience Template, capture the screens and jot down observations. During the evaluation, strive to wear the hat of the persona(s) relevant to the JTBD and try to see the UI from their perspective, as if they were a new user. As you progress through your evaluation, this will be easy to forget, so it's recommended to put a reminder somewhere in your view, such as a post-it stuck on your monitor that says "You're a new user!"
Bear in mind that a Heuristic Evaluation is considered an expert evaluation, discount usability method, where the "expert" in this context is a UX expert, not a user expert. It can therefore be thought of as a starting point for finding potential problems, not necessarily an endpoint. If you do find some areas of improvement, then you might want your next step to be the Option B UX Scorecard outlined below as a way to validate things with real users.
You can do a formative evaluation by having internal or external users try to accomplish the JTBD. The goal is to provide the participant context (the scenario of the JTBD) and listen and watch how they attempt to complete the job. What we learn may differ from participant to participant.
When doing a formative evaluation, do a "light" usability test to observe 3-5 internal or external users, as this provides valuable insights and removes subjectivity. This can be done in about a day, if the scenario you are evaluating is simple to set up in a test project. If your scenario is highly technical and requires complex customizations, plan ahead as it can take a couple of days just to set up an environment for the evaluation.
In every case, this is a process intended to help inform the design process and maintain a high bar of quality.
When to create a UX Scorecard:
When to create a Category Maturity Scorecard:
Below is a recommended step-by-step process for completing a UX Scorecard. Note that not every scorecard is the same. Product Designers are welcome to adapt the steps to their needs, as long as they remain as objective as possible and the spirit and outcome stay the same.
Example: “UX Scorecard - Create:Source Code”
Create an experience scoring issue, using the template “WIP: UX Scorecard Part 1”, and add it to the stage group epic.
This issue should have the UX Scorecard label. If it's related to an OKR, also apply the OKR label for easier tracking.
If you'd like to view or edit the templates, you can find them here:
| Grade | Description |
| --- | --- |
| Exceeds Expectations | Experience exceeds expectations and the user feels the experience is delightful.<br>Ease: Extremely easy<br>Experience: Extremely good |
| Meets Expectations | Meets expectations but does not exceed user needs. The user is able to reach the goal and complete the job.<br>Ease: Successful<br>Experience: Good |
| Average | The user can complete the job, but it does not exceed their needs and requires unnecessary steps.<br>Ease: Successful with unnecessary steps<br>Experience: Okay |
| Poor | The experience is poor and the job is difficult to complete.<br>Ease: Difficult<br>Experience: Bad |
| Terrible | Too many users are unable to complete the job. The experience is extremely bad and extremely difficult to complete.<br>Ease: Extremely difficult<br>Experience: Extremely bad |
| Unknown | This job has yet to be graded.<br>Ease: Unknown<br>Experience: Unknown |
The onboarding experience of your product category can make a big difference in the adoption of GitLab stages. You can use a UX Scorecard to assess the UX of your onboarding experience and identify areas for improvement.
Onboarding can refer to many different scenarios, and this can impact the experience:
For example, an existing user of GitLab joining a new team might need help with unfamiliar features or with getting oriented to the team's groups and projects, while a brand new user would need more help getting oriented to the application itself.
If you have a recent UX Scorecard or a recent usability test with recordings, you can update these rather than starting over. For example, if you recently ran a usability test on creating merge requests, you can re-watch the sessions with the onboarding heuristics in mind to infer an initial score for the onboarding experience. However, it is highly recommended that at some point you intentionally evaluate the onboarding experience.