
Experience Baselines and Recommendations

As UX practitioners, we must think strategically about fixing usability challenges within the GitLab product.

Creating an Experience Baseline with associated Recommendations enables us to identify, scope, and track the effort of addressing usability concerns within a specific workflow. When it's complete, we have the information required to collaborate with Product Managers on grouping fixes into meaningful iterations and prioritizing UX-related issues.

Below is a recommended step-by-step process for completing an Experience Baseline. Note that not every baseline is the same; Product Designers are welcome to adapt the steps to their needs, as long as the process remains as objective as possible and the spirit and outcome remain the same.


  1. Create a main stage group Epic (e.g. "Experience Baselines and Recommendations: {{Stage Group}} OKR {{YYYY}}{{Quarter}}")
  2. Work with your Product Manager to identify the top 3-5 tasks (in frequency or importance) for users of your stage group. Ideally, you will base this task list on user research (analytics or qualitative findings).
  3. Create a sub-epic named Part 1: Experience Baseline and append “{{Stage Group}} OKR {{YYYY}}{{Quarter}}” to the epic's title.
  4. Create another sub-epic named Part 2: Experience Recommendations and append “{{Stage Group}} OKR {{YYYY}}{{Quarter}}” to the epic's title.
  5. Create a “{{YYYY}}{{Quarter}} Baseline for…” issue for each job to be done (JTBD) using the Experience Baseline Part 1 template. Name each issue for its JTBD and include the issues in the Part 1: Experience Baseline sub-epic.
  6. Follow the instructions in the templates to complete the baseline, and use the Grading Rubric below.
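The naming conventions in the steps above can be sketched as small helper functions. This is an illustrative sketch only: the function and parameter names are assumptions, and `{{YYYY}}{{Quarter}}` is assumed to render as, for example, `2024Q3`.

```python
# Hypothetical helpers illustrating the epic/issue naming convention above.
# The "YYYYQn" rendering of {{YYYY}}{{Quarter}} is an assumption.

def epic_title(stage_group: str, year: int, quarter: int) -> str:
    """Title for the main stage group epic (step 1)."""
    return f"Experience Baselines and Recommendations: {stage_group} OKR {year}Q{quarter}"

def sub_epic_title(part: int, name: str, stage_group: str, year: int, quarter: int) -> str:
    """Title for the Part 1 / Part 2 sub-epics (steps 3-4)."""
    return f"Part {part}: {name} {stage_group} OKR {year}Q{quarter}"

def baseline_issue_title(jtbd: str, year: int, quarter: int) -> str:
    """Title for a per-JTBD baseline issue (step 5)."""
    return f"{year}Q{quarter} Baseline for {jtbd}"

print(epic_title("Source Code", 2024, 3))
# → Experience Baselines and Recommendations: Source Code OKR 2024Q3
```

Keeping the stage group, year, and quarter in every title makes the epics and issues easy to find with GitLab's search and keeps sibling baselines grouped when sorted alphabetically.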

Note that you might do Experience Baselines that are unrelated to an OKR. That's OK: leave the word 'OKR' out of the issue titles, but it's still helpful to note the quarter in which the baseline was done.

If you'd like to view or edit the templates, they can be found here: Part 1, Part 2

Grading Rubric

A (High Quality/Exceeds): Workflow is smooth and painless, with a clear path to the goal. Creates “wow” moments because the process is so easy. User would not hesitate to go through the process again.

B (Meets Expectations): Workflow meets expectations but does not exceed user needs. User is able to reach the goal and complete the task. Less likely to abandon.

C (Average): Workflow needs improvement, but the user can still complete the task. It usually takes longer to complete the task than it should. User may abandon the process or try again later.

D (Presentable): Workflow has clear issues and should not have gone into production without more thought and testing. User may or may not be able to complete the task. High risk of abandonment.

F (Poor): Workflow leaves the user confused, with no direction of where to go next. Can cause the user to go in circles or reach a dead end. Very high risk of abandonment; the user will most likely seek other methods to complete the task.