
Experience Baselines and Recommendations

As UX practitioners, we must think strategically about fixing usability challenges within the GitLab product.

Creating an Experience Baseline with associated Recommendations enables us to identify, scope, and track the effort of addressing usability concerns within a specific workflow. Once the baseline is complete, we have the information required to collaborate with Product Managers on grouping fixes into meaningful iterations and prioritizing UX-related issues.

Initial Setup

  1. Create a main stage group Epic (e.g. "Experience Baselines and Recommendations: {{Stage Group}} OKR {{YYYY}}{{Quarter}}")
  2. Create two related sub-epics for Part 1: Experience Baseline and Part 2: Experience Recommendations. Append “{{Stage Group}} OKR {{YYYY}}{{Quarter}}” to the sub-epic's title.
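Filled in, the naming convention might look like this (the stage group and quarter below are hypothetical):

```
Epic:     Experience Baselines and Recommendations: Verify OKR 2024Q2
Sub-epic: Part 1: Experience Baseline Verify OKR 2024Q2
Sub-epic: Part 2: Experience Recommendations Verify OKR 2024Q2
```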

Part 1: Experience Baseline

  1. Work with your Product Manager to identify the top 3-5 tasks (by frequency or importance) for users of your stage group. Ideally, base this task list on user research (analytics or qualitative findings).
  2. Write each task as a "Job to Be Done" (JTBD) using the standard format: When (situation), I want to (motivation), so I can (expected outcome).
  3. Create a “{{YYYY}}{{Quarter}} Baseline for…” issue for each JTBD and include them in the Part 1: Experience Baseline sub-epic.
  4. If your JTBD spans more than one stage group, that’s great! Review your JTBD with a designer from that stage group for accuracy.
  5. Document the current experience of the JTBD, as if you are the user. Capture the screens and jot down observations. Also, apply the following Emotional Grading Scale to document how a user likely feels at each step of the workflow. Add this documentation to each JTBD issue's description.

    • Positive: The user’s experience included a pleasant surprise—something they were not expecting to see. The user enjoyed the experience on the screen and could complete the task, effortlessly moving forward without having to stop and reassess their workflow. Emotion(s): Happy, Motivated, Possibly Surprised
    • Neutral: The user’s expectations were met. Each action provided the basic expected response from the UI, so that the user could complete the task and move forward. Emotion(s): Indifferent
    • Negative: The user did not receive the results they were expecting. There may be bugs, roadblocks, or confusion about what to click on that prevents the user from completing the task. Maybe they even needed to find an alternative method to achieve their goal. Emotion(s): Angry, Frustrated, Confused, Annoyed
  6. Use the Grading Rubric below to provide an overall measurement that becomes the Benchmark Score for the experience (one grade per JTBD), and add it to each JTBD issue's description.
  7. Once you’re clear about the user’s path, create a clickthrough video that documents the existing experience. Begin the video with a contextual introduction including: your role, stage group, and a short summary of the baseline initiative. This is not a "how to" video, but instead should help build empathy for users by clearly showing areas of potential frustration and confusion. (You can point out where the experience is positive, too.) The Emotional Grading Scale you documented earlier will help identify areas to call out. At the end of the video, make sure to include narration of the Benchmark Score.
  8. Post your video to the GitLab Unfiltered YouTube channel, and link to it from each JTBD issue's description.
  9. Create an issue to revisit the same JTBD the following quarter to see if we have made improvements. We will use the grades to monitor progress toward improving the overall quality of our user experience. Add that issue as related to each JTBD issue.
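While documenting a walkthrough, it can help to tally the step-by-step emotional grades to see where the experience most needs attention. The sketch below is an illustrative Python aid, not part of the process itself; the step names and grades are hypothetical, and the Benchmark Score remains a judgment call against the Grading Rubric.

```python
from collections import Counter

# Hypothetical step-by-step emotional grades for one JTBD walkthrough.
# Values mirror the Emotional Grading Scale: "positive", "neutral", "negative".
steps = [
    ("Open the pipelines page", "neutral"),
    ("Locate the failed job", "negative"),
    ("Read the job log", "negative"),
    ("Retry the job", "positive"),
]

def summarize(steps):
    """Tally emotional grades and flag the steps graded Negative.

    The Negative steps are candidates to call out in the clickthrough
    video and to address first in the Recommendations phase.
    """
    counts = Counter(grade for _, grade in steps)
    focus = [task for task, grade in steps if grade == "negative"]
    return counts, focus

counts, focus = summarize(steps)
```

Running this over the hypothetical walkthrough surfaces "Locate the failed job" and "Read the job log" as the areas of immediate focus.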

Part 2: Experience Recommendations

  1. After completing the Experience Baseline for a JTBD, create a “{{YYYY}}{{Quarter}} Recommendations for…” issue for each JTBD and include them in the Part 2: Experience Recommendations sub-epic.
  2. Brainstorm opportunities to fix or improve areas of the experience.

    Use the findings from the Emotional Grading Scale to determine areas of immediate focus. For example, if parts of the experience received a “Negative” Emotional Grade, consider addressing those first.

  3. Create an issue for each recommendation and link them to the corresponding JTBD recommendations issue.
  4. Think iteratively, and create dependencies where appropriate, remembering that sometimes the order of what we release is just as important as what we release.

    If you need to break recommendations into phases or over multiple milestones, create multiple epics and use the Category Maturity Definitions in the title of each epic: Minimal, Viable, Complete, or Lovable.
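For example, a phased set of recommendation epics might be titled like this (the workflow name is hypothetical):

```
Recommendations for "Retry a failed pipeline": Viable
Recommendations for "Retry a failed pipeline": Complete
Recommendations for "Retry a failed pipeline": Lovable
```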

Grading Rubric

A (High Quality/Exceeds): Workflow is smooth and painless, with a clear path to the goal. Creates “Wow” moments because the process is so easy. User would not hesitate to go through the process again.

B (Meets Expectations): Workflow meets expectations but does not exceed user needs. User is able to reach the goal and complete the task, and is less likely to abandon the process.

C (Average): Workflow needs improvement, but the user can still complete the task, though it usually takes longer than it should. User may abandon the process or try again later.

D (Presentable): Workflow has clear issues and should not have gone into production without more thought and testing. User may or may not be able to complete the task. High risk of abandonment.

F (Poor): Workflow leaves the user confused, with no sense of where to go next. It can send the user in circles or to a dead end. Very high risk of abandonment; the user will most likely seek other methods to complete the task.