UX Department Performance Indicators

Performance indicators for the UX department at GitLab

Executive Summary

KPI Health Status
Total open UX bug issues by severity: Attention
  • We are refining this KPI in FY25-Q1 to target UX bugs by severity and their open/close rates. This will reduce the noise created in issues by bot automation that required manual intervention, bring focus to the resolution of UX bugs, and help us monitor the volume of UX bugs continuing to enter the product.
Technical Writer MR Rate: Attention
  • This target rate remains somewhat aspirational.
Average research projects per Product Designer: Attention
  • Below target for most of FY24.
Product Design MR review volume: Attention
  • MR review volume has been consistently below the target since we reduced the coverage area of the product. In FY25, we will move from a per-month target to a per-Product-Designer measurement to better understand volume in relation to team size.
UX Team Member Retention: Attention
  • Below target with indicators that this trend will continue in the short term.
UX Average Age of Open Positions: Attention
  • Consistently higher than the target. We are adjusting our approach in FY25.

Key Performance Indicators

Total open UX bug issues by severity

The purpose of this chart is to show the total volume of existing UX bug issues that impact our SUS score. We are tracking against the label “bug::ux.”

Target: 0 issues with no severity assigned, and 0 S1/S2 issues past the SLA due date Health:Attention

  • We are refining this KPI in FY25-Q1 to target UX bugs by severity and their open/close rates. This will reduce the noise created in issues by bot automation that required manual intervention, bring focus to the resolution of UX bugs, and help us monitor the volume of UX bugs continuing to enter the product.

Chart (Sisense↗)

Technical Writer MR Rate

This PI tracks the number of MRs merged every month using the Technical Writing and UI text labels across all GitLab projects where the team works. The December rate was impacted by team PTO. Team performance has understandably changed over the lifetime of this PI, based on changes to team organization and role requirements. We are revisiting this PI.

Target: 55 MRs per technical writer per month Health:Attention

  • This target rate remains somewhat aspirational.

Chart (Tableau↗)

Average research projects per Product Designer

Our goal is to use customer research to validate problems and solutions to ensure we are building the right things in the right way. We use many research methods, including interviews, surveys, usability studies, findability/navigation studies, and analytics. We hypothesize that there is a connection between this KPI and the SUS KPI.

Target: At or greater than 2 validation issues per Product Designer per quarter Health:Attention

  • Below target for most of FY24.

Chart (Sisense↗)

Product Design MR review volume

Our goal is to provide UX reviews for Merge Requests (MRs) that involve user-facing changes, including those impacting screen readers, to help improve the quality of our product and reduce the amount of Deferred UX. Product Designers follow our MR Review guidelines to conduct these reviews.

Target: At or greater than 200 MR Reviews per month Health:Attention

  • MR review volume has been consistently below the target since we reduced the coverage area of the product. In FY25, we will move from a per-month target to a per-Product-Designer measurement to better understand volume in relation to team size.

Chart (Tableau↗)

UX Team Member Retention

We need to be able to retain talented team members. Retention measures our ability to keep team members at GitLab. Team Member Retention = (1 - (Number of Team Members leaving GitLab / Average of the 12-month Total Team Member Headcount)) x 100. GitLab measures team member retention over a rolling 12-month period.
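A hypothetical worked example (numbers invented for illustration): if 18 team members left GitLab over the trailing 12 months and the average total headcount for that period was 120, then

  Team Member Retention = (1 - (18 / 120)) x 100 = 85%

which would sit just above the 84% target.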

Target: at or above 84% (this KPI cannot be public) Health:Attention

  • Below target with indicators that this trend will continue in the short term.

URL(s):


UX Average Age of Open Positions

Measures the average time job openings take from open to close. Unlike Time to Hire or Time to Offer Accept, which only measure the time from when a candidate applies to when they accept an offer, this metric also includes the time spent sourcing candidates.

Target: at or below 50 days Health:Attention

  • Consistently higher than the target. We are adjusting our approach in FY25.

Chart (Tableau↗)

Regular Performance Indicators

System Usability Scale (SUS) score

The System Usability Scale (SUS) is an industry-standard survey that measures overall system usability based on 10 questions. Moving a SUS score upward even a couple of points on a large system is a significant change. The goal of this KPI is to understand how the usability of the GitLab product rates against industry standards and then track trends over time. Although UX is responsible for this metric, it needs other departments, such as PM and Development, to effect positive change. See our grading scale for details on interpreting scores. SUS data is collected every other quarter.
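For reference, the standard SUS scoring method (Brooke's original formulation, shown here for context; GitLab's survey tooling may differ) maps the 10 item responses, each on a 1-5 scale, to a 0-100 score:

  SUS = 2.5 x ( sum of (score - 1) over the five odd items + sum of (5 - score) over the five even items )

For example, a respondent answering 4 on every odd item and 2 on every even item would score 2.5 x (15 + 15) = 75.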

Target: 73 by Q4-FY24, 77 by Q4-FY25, 82 by Q4-FY26 Health:Problem

  • Perceived usability rates as a C+. Overall, we have seen a declining trend in SUS with periods of stabilization.
  • FY22-Q1 focused on performance and visibility of system status.
  • FY22-Q2 focused on Merge Request improvements.
  • FY22-Q3 included 104 Pajamas migrations.
  • FY22-Q4 included 209 Pajamas migrations.
  • FY23-Q1 included 331 Pajamas migrations and burn down of 9 S1 SUS-Impacting issues.

Chart (Tableau↗)

Experience baselines

This PI measures the total number of categories with and without baseline experience scores from formative-evaluation UX Scorecards of a main JTBD (Job to Be Done).

Target: 100% of supported categories have determined their experience baseline. Health:Okay

  • In FY25-Q1, we have a KR to create a main JTBD for each supported category. Additionally, we have a KR to perform a formative scorecard for three categories.


Experience baseline scores

This PI tracks the experience baseline scores of each category.

Target: 100% of supported categories have an experience baseline score of B or higher. Health: Unknown

  • We first need to generate scores for all categories in order to understand which categories we need to prioritize to meet the goal of a B or greater.


UX bug issues opened/closed each month

With SUS as a KPI, it’s important to ensure that we are closing UX bug issues at an appropriate velocity. UX is responsible for ensuring that issues are opened when appropriate and advocating for their prioritization, while Product Management is the ultimate DRI for prioritization. We are tracking against the label “bug::ux.”

Target: TBD Health:Unknown

  • We will be revising this PI in FY25 to give focus to a smaller number of labels.

Chart (Tableau↗)

Pajamas component migrations

Integrating Pajamas components into GitLab contributes to a cohesive and consistent user experience, visually and functionally. This allows users to seamlessly transition throughout different stages of the DevOps lifecycle. With our adoption scanner, we are able to track percent adoption of existing Pajamas components per stage group. This PI does not yet track use of components within the product that exist outside of Pajamas. The scale is "On track," "Need attention," and "At risk."

Target: 100% of groups are “On track” Health:Attention

  • 63% (27/43) of groups are "On track"
  • 28% (12/43) of groups "Need attention"
  • 0% (0/43) of groups are "At risk"
  • 9% (4/43) of groups don't have enough data to measure

URL(s):


Usability benchmarking overall score by stage

This PI tracks the overall stage score for usability benchmarking studies performed across stage groups as they change over time. The tasks and workflows that comprise each benchmarking study are derived from JTBD for one or more target personas typical for the stage running the study. The overall score for each study takes into account the performance of each task that was tested, through metrics like completion rate, severity, and customer effort score (CES). The scale is 0-100, where 90-100 is ‘Great’, 80-89 is ‘Good’, 70-79 is ‘Fair’, 69 and below is ‘Poor’.

Target: 5% increase in overall score from previous benchmarking, maintaining an overall score above 84/100. Health:Unknown

  • Not enough data. Only one benchmarking study has been performed so far.

Chart (Tableau↗)

UX Department MR Rate

UX Department MR Rate is a performance indicator showing how many changes the UX team implements directly in the GitLab product. We currently count all members of the UX Department (Directors, Managers, ICs) in the denominator, because this is a team effort. The full definition of MR Rate is linked in the URL section.
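As a hypothetical illustration of the calculation (numbers invented): if the department merged 90 MRs in a month with 60 team members in the denominator, the MR rate would be

  90 / 60 = 1.5 MRs per team member per month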

Target: Greater than TBD MRs per month Health:Attention

  • We don't yet know what a good MR rate looks like for UX. We need accurate data to determine one.
  • UX MR rate doesn't accurately reflect all MRs to which UX contributes, because we often collaborate on MRs rather than opening them ourselves.

Chart (Tableau↗)

URL(s):


UX Department Discretionary Bonus Rate

The number of discretionary bonuses given divided by the total number of team members in a given period. This metric definition is taken from the People Success Discretionary Bonuses KPI.
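A hypothetical worked example (numbers invented): 6 discretionary bonuses given across 55 team members in the period yields

  6 / 55 ≈ 10.9%

which would meet the at-or-above-10% target.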

Target: at or above 10% Health:Attention

  • The metric is new and is being monitored.

Chart (Tableau↗)

Actionable insights

Actionable insights originate from user research. They always have the ‘Actionable Insight’ label applied to the resulting issue and a clear follow-up that needs to take place as a result of the research observation or data. An actionable insight both defines the insight and clearly calls out the next step as a recommendation. The goal of this KPI is to ensure we’re documenting research insights that are actionable and tracking their closure rate.

Target: TBD Health:Okay

  • Q3 FY21 was spent establishing a baseline. Now that there's ample data available, we'll take two steps. Step 1 - investigate the oldest open actionable insights to understand why they have not been closed. Step 2 - track the average time for actionable insights to be closed.

Chart (Tableau↗)

Deferred UX

Deferred UX means that for a given issue, we failed to meet defined standards for our Design system, or for usability and feature viability standards as defined in agreed-upon design assets. When we fail to ship something according to defined standards, we track the resulting issues with a “Deferred UX” label. Although UX is responsible for this metric, it needs other departments, such as PM and Development, to effect positive change.

Target: Below 50 open “deferred UX” issues Health:Problem

  • The total number of Deferred UX issues has increased in the last several months and is well above the target.
  • We are actively working with PMs to prioritize Deferred UX. Some stage groups are committing to resolving a minimum number of Deferred UX issues per milestone (generally, a commitment of no less than one issue). We will track this effort and make adjustments as we see the results.
  • The Deferred UX label has been applied inconsistently, likely because it deviates from the industry-standard term. We are exploring alternative ways to track Deferred UX in FY25 to increase accuracy.

Chart (Tableau↗)

Open Deferred UX Age

Age of outstanding Deferred UX issues. Deferred UX means that for a given issue, we failed to meet defined standards. Age is represented as the median number of days open.

Target: At or below 150 days Health:Attention

  • Average days to close a "Deferred UX" issue is beginning to trend upward.
  • We will monitor to see if the trend continues.

Chart (Tableau↗)

Technical Writing collaboration on UI text

Historically, Technical Writers were not consistently included in the creation of UI text. Since UI text is critical to product usability, Technical Writing involvement can help improve the quality of our UI. This chart includes issues and MRs with the Technical Writing and UI text labels, because Technical Writing contributions happen in both places.

Target: TBD Health:Unknown

  • We are watching this metric to determine a target, because historical data is inconsistent.

Chart (Sisense↗)

Product Designer Gearing Ratio

Number of Product Designers against the targeted gearing ratio.

Target: At 57 product designers Health:Attention

  • At 47% of targeted gearing ratio

Chart (Tableau↗)

Technical Writer Gearing Ratio

Number of Technical Writers against the targeted gearing ratio.

Target: At 19 technical writers Health:Attention

  • At 63% of targeted gearing ratio

Chart (Tableau↗)

UX Researcher Gearing Ratio

Number of UX Researchers against the targeted gearing ratio.

Target: At 11 researchers Health:Attention

  • At 72% of targeted gearing ratio

Chart (Tableau↗)

UX Department Promotion Rate

The total number of promotions over a rolling 12-month period divided by the month-end headcount. The target promotion rate is 12% of the population. This metric definition is taken from the People Success Team Member Promotion Rate PI.
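A hypothetical worked example (numbers invented): 7 promotions over the rolling 12-month period against a month-end headcount of 60 yields

  7 / 60 ≈ 11.7%

slightly below the 12% target.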

Target: 12% Health:Okay

  • The metric is new and is being monitored.

Chart (Tableau↗)

Legends

Health

Value Level Meaning
3 Okay The KPI is at an acceptable level compared to the threshold
2 Attention This is a blip, or we’re going to watch it, or we just need to enact a proven intervention
1 Problem We'll prioritize our efforts here
-1 Confidential Metric & metric health are confidential
0 Unknown Unknown

How pages like this work

Data

The heart of pages like this is the Performance Indicators data files, which are YAML files. Each "-" denotes a dictionary of values for a new (K)PI. The current elements (or data properties) are:

Property Type Description
name Required String value of the name of the (K)PI. For Product PIs, the product hierarchy should be separated from the name by " - " (Ex: {Stage Name}:{Group Name} - {PI Type} - {PI Name})
base_path Required Relative path to the performance indicator page that this (K)PI should live on
definition Required refer to Parts of a KPI
parent Optional should be used when a (K)PI is a subset of another PI. For example, we might care about Hiring vs Plan at the company level. The child would be the division and department levels, which would have the parent flag.
target Required The target or cap for the (K)PI. Please use Unknown until we reach maturity level 2 if this is not yet defined. For GMAU, the target should be quarterly.
org Required the organizational grouping (Ex: Engineering Function or Development Department). For Product Sections, ensure you have the word section (Ex: Dev Section)
section Optional the product section (Ex: dev) as defined in sections.yml
stage Optional the product stage (Ex: release) as defined in stages.yml
group Optional the product group (Ex: progressive_delivery) as defined in stages.yml
category Optional the product category (Ex: feature_flags) as defined in categories.yml
is_key Required boolean value (true/false) that indicates if it is a (key) performance indicator
health Required indicates the (K)PI health and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI.
health.level Optional indicates a value between 0 and 3 (inclusive) to represent the health of the (K)PI. This should be updated monthly before Key Reviews by the DRI.
health.reasons Optional indicates the reasons behind the health level. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason.
urls Optional list of urls associated with the (K)PI. Should be an array (indented lines starting with dashes) even if you only have one url
funnel Optional indicates there is a handbook link for a description of the funnel for this PI. Should be a URL
sisense_data Optional allows a Sisense dashboard to be embedded as part of the (K)PI using chart, dashboard, and embed as nested attributes.
sisense_data.chart Optional indicates the numeric Sisense chart/widget ID. For example: 9090628
sisense_data.dashboard Optional indicates the numeric Sisense dashboard ID. For example: 634200
sisense_data.shared_dashboard Optional indicates the Sisense shared_dashboard ID. For example: 185b8e19-a99e-4718-9aba-96cc5d3ea88b
sisense_data.embed Optional indicates the Sisense embed version. For example: v2
sisense_data_secondary Optional allows a second Sisense dashboard to be embedded. Same as sisense_data
sisense_data_secondary.chart Optional Same as sisense_data.chart
sisense_data_secondary.dashboard Optional Same as sisense_data.dashboard
sisense_data_secondary.shared_dashboard Optional Same as sisense_data.shared_dashboard
sisense_data_secondary.embed Optional Same as sisense_data.embed
public Optional boolean flag that can be set to false where a (K)PI does not meet the public guidelines.
pi_type Optional indicates the Product PI type (Ex: AMAU, GMAU, SMAU, Group PPI)
product_analytics_type Optional indicates if the metric is available on SaaS, SM (self-managed), or Both.
is_primary Optional boolean flag that indicates if this is the Primary PI for the Product Group.
implementation Optional indicates the implementation status and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI.
implementation.status Optional indicates the Implementation Status. This should be updated monthly before Key Reviews by the DRI.
implementation.reasons Optional indicates the reasons behind the implementation status. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason.
lessons Optional indicates lessons learned from a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI.
lessons.learned Optional learned is an attribute that can be nested under lessons and indicates lessons learned from a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one lesson learned
monthly_focus Optional indicates monthly focus goals from a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI.
monthly_focus.goals Optional indicates monthly focus goals from a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one goal
metric_name Optional indicates the name of the metric in the Self-Managed implementation. The SaaS representation of the Self-Managed implementation should use the same name.
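For illustration, a minimal hypothetical entry in the UX department's data file might look like the following. The name, definition, and health reason echo the UX Department MR Rate PI above; the base_path and URL are placeholder values, and the Sisense IDs are the example values from the table:

  - name: UX Department MR Rate
    base_path: /handbook/engineering/ux/performance-indicators/  # placeholder path
    definition: Shows how many changes the UX team implements directly in the GitLab product.
    target: Unknown
    org: UX Department
    is_key: false
    public: true
    health:
      level: 2
      reasons:
        - We don't yet know what a good MR rate looks like for UX.
    sisense_data:
      chart: 9090628
      dashboard: 634200
      embed: v2
    urls:
      - https://example.com/mr-rate-definition  # placeholder URL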


