Development Department Performance Indicators

Executive Summary

KPI Health Status
Past Due InfraDev Issues Okay
  • We are now below 5. We did have a small spike earlier this quarter which was addressed.
  • We will continue to work to reduce this to 0.
Past Due Security Issues Attention
  • Security issues continue to be an area of focus as requirements have become more stringent.
  • The large increase is due to a new evaluation plus some automation that wasn't addressing issues properly (composition analysis). We are in the process of remediating that.
  • The due date of an issue is set to the latest date that can be included in a release that will go out before the SLA expires.
Largest Contentful Paint (LCP) Okay
  • We are consistently below target, aside from a small number of single-day outliers.
Open MR Review Time (OMRT) Attention
  • We have broken out review times by community vs. company contributions. Company review times remain stable.
  • Summer months have seen a slight increase, which should be monitored going forward.
  • Reducing the tail (stale MRs) is still an option to be explored.
Development Team Member Retention Okay
  • We are above target and continue to monitor, as retention remains a concern.
Development Average Age of Open Positions Okay
  • Open age has been stable the past quarter.

Key Performance Indicators

Past Due InfraDev Issues

Measures the number of past due infradev issues by severity.

Target: At or below 5 issues Health:Okay

  • We are now below 5. We did have a small spike earlier this quarter which was addressed.
  • We will continue to work to reduce this to 0.

Chart (Tableau↗)

Past Due Security Issues

Measures the number of past due security issues by severity. This is filtered down to issues with either a stage or group label.

Target: At or below 20 issues Health:Attention

  • Security issues continue to be an area of focus as requirements have become more stringent.
  • The large increase is due to a new evaluation plus some automation that wasn't addressing issues properly (composition analysis). We are in the process of remediating that.
  • The due date of an issue is set to the latest date that can be included in a release that will go out before the SLA expires.

Chart (Tableau↗)

Largest Contentful Paint (LCP)

Largest Contentful Paint (LCP) is an important, user-centric metric that marks the point at which the largest content element becomes visible on the web page, approximating perceived load speed. To provide a good user experience on GitLab.com, we strive to have the LCP occur within the first few seconds of the page starting to load. This LCP metric reports on our Projects Home Page. LCP data comes from the Graphite database. A Grafana dashboard is available to compare the LCP of GitLab.com versus GitHub.com on key pages, in addition to a third-party site with a broader comparison. LCP p90 data is captured every 4 hours and we report the latest value each day.

Target: Below 2500ms at the 90th percentile Health:Okay

  • We are consistently below target, aside from a small number of single-day outliers.

Chart (Tableau↗)

Open MR Review Time (OMRT)

We want a more intuitive way of calculating how long an MR spends in the review state. Open MR Review Time (OMRT) measures the median time in review of all open MRs as of a specific date. In other words, on any given day, we calculate the number of open MRs in review and the median time in the review state for those MRs at that point in time. MRs are considered in review from the point when a review is requested on the MR.
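As a rough sketch of this calculation (the input shape and sample dates are hypothetical, not the production query):

```python
from datetime import datetime, timedelta
from statistics import median

def omrt_days(review_request_times, as_of):
    # Median days in review across MRs that are open and in review as of `as_of`.
    # `review_request_times` holds the timestamp each MR entered review.
    return median((as_of - t).days for t in review_request_times)

# Hypothetical snapshot: three open MRs entered review 3, 10, and 45 days ago.
as_of = datetime(2023, 8, 1)
snapshot = [as_of - timedelta(days=d) for d in (3, 10, 45)]
print(omrt_days(snapshot, as_of))  # median of 3, 10, 45 -> 10
```

Because the metric is a median over the currently open population, a long tail of stale MRs pulls it up even when typical reviews are fast.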

Target: At or below 21 Health:Attention

  • We have broken out review times by community vs. company contributions. Company review times remain stable.
  • Summer months have seen a slight increase, which should be monitored going forward.
  • Reducing the tail (stale MRs) is still an option to be explored.

Chart (Tableau↗)

Development Team Member Retention

We need to be able to retain talented team members. Retention measures our ability to keep them at GitLab. Team Member Retention = (1 - (Number of Team Members leaving GitLab / Average of the 12 month Total Team Member Headcount)) x 100. GitLab measures team member retention over a rolling 12 month period.
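The formula above can be checked with a quick sketch (the leaver and headcount numbers are made up for illustration):

```python
def retention_rate(leavers_12mo, avg_headcount_12mo):
    # Team Member Retention = (1 - leavers / average 12-month headcount) * 100
    return (1 - leavers_12mo / avg_headcount_12mo) * 100

# Hypothetical: 30 departures against an average headcount of 250
print(round(retention_rate(30, 250), 1))  # -> 88.0, above the 84% target
```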

Target: At or above 84% (this KPI cannot be public) Health:Okay

  • We are above target and continue to monitor, as retention remains a concern.

URL(s):


Development Average Age of Open Positions

Measures the average time job openings take from open to close. This metric includes the sourcing time of candidates, unlike Time to Hire or Time to Offer Accept, which only measure the time from when a candidate applies to when they accept.

Target: at or below 50 days Health:Okay

  • Open age has been stable the past quarter.

Chart (Tableau↗)

Regular Performance Indicators

MR Rate

Development Department MR Rate is a performance indicator showing how many changes the Development Department implements directly in the GitLab product. It is the ratio of product MRs to the number of team members in the Development Department. It's important because it shows us how the productivity of our projects has changed over time. The full definition of MR Rate is linked in the URL section.
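As a sketch of the ratio (the counts are hypothetical; see the linked definition for what qualifies as a product MR):

```python
def mr_rate(product_mrs, team_members):
    # Product MRs merged per Development team member in a given month
    return product_mrs / team_members

# Hypothetical month: 3900 product MRs across 300 team members
print(mr_rate(3900, 300))  # -> 13.0, above the target of 12
```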

Target: Above 12 MRs per month Health:Attention

  • We are seeing positive trends with MR Rate (April) and expect similar improvements in the coming months.

Chart (Tableau↗)

URL(s):


UX Debt

See UX Debt for the definition. We include this as part of the Development PIs since we need to help positively affect this metric.

Target: Below 50 open “ux debt” issues Health:Attention

  • We are following UX guidance for health status; we are currently above target.
  • This metric may change as our usability strategy is being reviewed.

Chart (Tableau↗)

URL(s):


Development Handbook MR Rate

The handbook is essential to working remotely successfully, to keeping up our transparency, and to recruiting successfully. Our processes are constantly evolving and we need a way to make sure the handbook is being updated at a regular cadence. This data is retrieved by querying the API with a python script for merge requests that have files matching /source/handbook/engineering/development/** over time. The calculation for the monthly handbook MR rate is the number of handbook updates divided by the number of team members in the Development Department for a given month.
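A minimal sketch of that retrieval, assuming the public GitLab REST API and the gitlab-com/www-gitlab-com project (the production script may differ; the MR list endpoint cannot filter by file path, so each MR's changes must be inspected):

```python
import json
import urllib.parse
import urllib.request

# Assumption: the handbook source lives in the gitlab-com/www-gitlab-com project
API = "https://gitlab.com/api/v4/projects/gitlab-com%2Fwww-gitlab-com"
PATH_PREFIX = "source/handbook/engineering/development/"

def _get(path, params, token):
    url = f"{API}/{path}?{urllib.parse.urlencode(params)}"
    req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def count_handbook_mrs(created_after, created_before, token):
    # Count merged MRs in the window that touched the Development handbook.
    count, page = 0, 1
    while True:
        mrs = _get("merge_requests",
                   {"state": "merged", "created_after": created_after,
                    "created_before": created_before,
                    "per_page": 100, "page": page}, token)
        if not mrs:
            return count
        for mr in mrs:
            changes = _get(f"merge_requests/{mr['iid']}/changes", {}, token)
            if any(c["new_path"].startswith(PATH_PREFIX)
                   for c in changes.get("changes", [])):
                count += 1
        page += 1

def handbook_mr_rate(handbook_mrs, team_members):
    # Monthly rate: handbook updates per Development team member
    return handbook_mrs / team_members

# e.g. 150 handbook MRs across 300 team members gives a rate of 0.5, at target
```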

Target: At or above 0.5 Health:Attention

  • We need to look into new ways to enhance our handbook experience if we want to hit target.

Chart (Tableau↗)

Development Department Discretionary Bonus Rate

The number of discretionary bonuses given divided by the total number of team members in a given period. This metric definition is taken from the People Success Discretionary Bonuses KPI.

Target: at or above 10% Health:Attention

  • We have been consistently below 10% goal for the last 6 months.
  • We have given more discretionary bonuses YoY.
  • We will discuss having another review to encourage more recognition.

Chart (Tableau↗)

Average PTO per Development Team Member

This shows the average number of PTO days taken per Development Team Member. It is the ratio of PTO days taken (vacation, sick leave, public holidays, Family & Friends days, etc.) to the number of team members in the Development Department each month. Looking at the average number of PTO days over time helps us understand increases or decreases in efficiency and ensure that team members are taking time off to keep a healthy work/life balance.

Target: TBD Health:Okay

  • We need to monitor this on a monthly basis.
  • PTO has stabilized in the past few months.

Chart (Tableau↗)

Escape Rate Over Time

This shows the rate that bugs are created. It is the ratio of opened bugs to the number of MRs merged. As an example, an escape rate of 10% indicates that, on average, for every 10 MRs merged we will see 1 bug opened. Looking at the escape rate helps us understand the quality of the MRs we are merging.
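The calculation itself is straightforward (the counts are hypothetical):

```python
def escape_rate(bugs_opened, mrs_merged):
    # Ratio of bugs opened to MRs merged over the same period
    return bugs_opened / mrs_merged

# Hypothetical month: 40 bugs opened against 400 merged MRs
print(f"{escape_rate(40, 400):.0%}")  # -> 10%: 1 bug per 10 merged MRs
```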

Target: Currently no target is set for this metric; we need to establish a baseline and consider the right balance between velocity and quality. Health:Unknown

  • The spike in October is due to increased scrutiny as part of the FedRAMP project.

Chart (Tableau↗)

Backend Unit Test Coverage

BE Unit Test coverage shows the unit test coverage of our code base. As an example 95% represents that 95% of the LOC in our BE software is unit tested. It’s important as it shows how much code is tested early in the development process.

Target: Above 95% Health:Okay

  • This metric's threshold for action is around 95%.

Chart (Tableau↗)

URL(s):


Frontend Unit Test Coverage

FE Unit Test coverage shows the unit test coverage of our code base. As an example 95% represents that 95% of the LOC in our FE software is unit tested. It’s important as it shows how much code is tested early in the development process.

Target: Above 75% Health:Attention

  • Custom VMs were cleaned up a couple of months ago, and it appears this metric's data source may have been among them. This is currently being investigated, as our CI doesn't output easy-to-read percentages.

Chart (Tableau↗)

URL(s):


Project/Area Maintainership Health

A project’s maintainership is considered unhealthy if it has fewer maintainers than the target maintainer count. Each project’s target maintainer count is based on the number of incoming MRs and maintainer availability.
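A sketch of the health check (the project names and counts are invented for illustration):

```python
def unhealthy_share(projects):
    # A project is unhealthy when it has fewer maintainers than its target.
    unhealthy = sum(1 for p in projects if p["maintainers"] < p["target"])
    return unhealthy / len(projects)

# Hypothetical portfolio of five projects
projects = [
    {"name": "project-a", "maintainers": 40, "target": 45},  # unhealthy
    {"name": "project-b", "maintainers": 6, "target": 5},
    {"name": "project-c", "maintainers": 30, "target": 25},
    {"name": "project-d", "maintainers": 4, "target": 3},
    {"name": "project-e", "maintainers": 3, "target": 2},
]
print(f"{unhealthy_share(projects):.0%}")  # 1 of 5 -> 20%
```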

Target: Below 20% Health:Attention

  • The chart shows a decrease in overall unhealthy projects or areas in the last quarter.
  • Some areas are not being properly tracked, which should lead to a further decrease once resolved.

Chart (Tableau↗)

Unhealthy Core Areas of Maintainership Health

Within a given project (for example, gitlab-org/gitlab), maintainers cover different areas within the project - backend, database, frontend, and more. A core area of maintainership is one that receives more than 100 merge requests per month; it is considered unhealthy if it has fewer maintainers than the target maintainer count. This indicator is a subset of Project Maintainership Health.

Target: 0 Health:Attention

  • Our unhealthy areas have increased within the past 2 months. The unhealthy areas are GitLab backend and database (consistent), but GitLab frontend and Gitaly have now shown up too.
  • Availability also trended down dramatically this last quarter, which explains the increase here.
  • Already this month we've been trending back in the right direction.

Chart (Tableau↗)

Open MR Age (OMA)

We want a more intuitive way of calculating how long it takes an MR to merge or close. Open MR Age (OMA) measures the median age of all open MRs as of a specific date. In other words, on any given day, we calculate the number of open MRs and the median time in the open state for those MRs at that point in time.

Target: At or below 30 Health:Attention

  • We are seeing an overall downward trend toward the target in the past month.

Chart (Tableau↗)

Review Time to Merge (RTTM)

Review Time to Merge (RTTM) tells us on average how long it takes from submitting code for review to being merged. The VP of Development is the DRI on what projects are included.

Target: At or below 3 Health:Okay

  • This is a revived metric and we are currently monitoring the trends.

Chart (Tableau↗)

Overall MRs by Type

We want to measure the breakdown of our development investment by MR type/label. We only consider MRs that contribute to our product. If an MR has more than one of these labels, the highest one in the list takes precedence.

Target: < 5% change in proportion of MRs with undefined label Health:Okay

  • We have worked to remove undefined MRs.
  • The Engineering Manager for each team is ultimately responsible for ensuring that these labels are set correctly.

Chart (Tableau↗)

Development Department Promotion Rate

The total number of promotions over a rolling 12 month period divided by the month end headcount. The target promotion rate is 12% of the population. This metric definition is taken from the People Success Team Member Promotion Rate PI.

Target: 12% Health:Okay

  • We are coming in line with promotion level goals.

Chart (Tableau↗)

Legends

Health

Value Level Meaning
3 Okay The KPI is at an acceptable level compared to the threshold
2 Attention This is a blip, or we’re going to watch it, or we just need to enact a proven intervention
1 Problem We'll prioritize our efforts here
-1 Confidential Metric & metric health are confidential
0 Unknown Unknown

How pages like this work

Data

The heart of pages like this is the Performance Indicators data files, which are YAML files. Each - denotes a dictionary of values for a new (K)PI. The current elements (or data properties) are:

Property Type Description
name Required String value of the name of the (K)PI. For Product PIs, the product hierarchy should be separated from the name by " - " (Ex. {Stage Name}:{Group Name} - {PI Type} - {PI Name})
base_path Required Relative path to the performance indicator page that this (K)PI should live on
definition Required refer to Parts of a KPI
parent Optional should be used when a (K)PI is a subset of another PI. For example, we might care about Hiring vs Plan at the company level. The child would be the division and department levels, which would have the parent flag.
target Required The target or cap for the (K)PI. Please use Unknown until we reach maturity level 2 if this is not yet defined. For GMAU, the target should be quarterly.
org Required the organizational grouping (Ex: Engineering Function or Development Department). For Product Sections, ensure you have the word section (Ex : Dev Section)
section Optional the product section (Ex: dev) as defined in sections.yml
stage Optional the product stage (Ex: release) as defined in stages.yml
group Optional the product group (Ex: progressive_delivery) as defined in stages.yml
category Optional the product category (Ex: feature_flags) as defined in categories.yml
is_key Required boolean value (true/false) that indicates if it is a (key) performance indicator
health Required indicates the (K)PI health and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI.
health.level Optional indicates a value between 0 and 3 (inclusive) to represent the health of the (K)PI. This should be updated monthly before Key Reviews by the DRI.
health.reasons Optional indicates the reasons behind the health level. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason.
urls Optional list of urls associated with the (K)PI. Should be an array (indented lines starting with dashes) even if you only have one url
funnel Optional indicates there is a handbook link for a description of the funnel for this PI. Should be a URL
sisense_data Optional allows a Sisense dashboard to be embedded as part of the (K)PI using chart, dashboard, and embed as nested attributes.
sisense_data.chart Optional indicates the numeric Sisense chart/widget ID. For example: 9090628
sisense_data.dashboard Optional indicates the numeric Sisense dashboard ID. For example: 634200
sisense_data.shared_dashboard Optional indicates the numeric Sisense shared_dashboard ID. For example: 185b8e19-a99e-4718-9aba-96cc5d3ea88b
sisense_data.embed Optional indicates the Sisense embed version. For example: v2
sisense_data_secondary Optional allows a second Sisense dashboard to be embedded. Same as sisense_data.
sisense_data_secondary.chart Optional Same as sisense_data.chart
sisense_data_secondary.dashboard Optional Same as sisense_data.dashboard
sisense_data_secondary.shared_dashboard Optional Same as sisense_data.shared_dashboard
sisense_data_secondary.embed Optional Same as sisense_data.embed
public Optional boolean flag that can be set to false where a (K)PI does not meet the public guidelines.
pi_type Optional indicates the Product PI type (Ex: AMAU, GMAU, SMAU, Group PPI)
product_analytics_type Optional indicates if the metric is available on SaaS, SM (self-managed), or Both.
is_primary Optional boolean flag that indicates if this is the Primary PI for the Product Group.
implementation Optional indicates the implementation status and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI.
implementation.status Optional indicates the Implementation Status status. This should be updated monthly before Key Reviews by the DRI.
implementation.reasons Optional indicates the reasons behind the implementation status. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason.
lessons Optional indicates lessons learned from a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI.
lessons.learned Optional learned is an attribute that can be nested under lessons and indicates lessons learned from a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one lesson learned.
monthly_focus Optional indicates monthly focus goals from a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI.
monthly_focus.goals Optional indicates monthly focus goals from a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one goal.
metric_name Optional indicates the name of the metric in the Self-Managed implementation. The SaaS representation of the Self-Managed implementation should use the same name.
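Putting the required properties together, a minimal hypothetical entry in a data file might look like this (the values are illustrative, not taken from the real file):

```yaml
- name: Past Due InfraDev Issues
  base_path: /handbook/engineering/development/performance-indicators/
  definition: Measures the number of past due infradev issues by severity.
  target: At or below 5 issues
  org: Development Department
  is_key: true
  health:
    level: 3
    reasons:
    - We are now below 5 and continue to work to reduce this to 0.
```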