KPI | Health |
---|---|
Unique Wider Community Contributors per Month | Attention |
MRARR | Confidential |
Master Pipeline Stability | Attention |
Review App deployment success rate | Attention |
Time to First Failure | Attention |
S1 OBA | Okay |
S2 OBA | Problem |
Quality Team Member Retention | Okay |
Quality Average Age of Open Positions | Attention |
Software Engineer in Test Gearing Ratio | Attention |
Unique Wider Community Contributors based on merged contributions by month. Contributors are counted only once per month, even when they have multiple merged MRs.
Target: Above 120 contributors per month
Health: Attention
DRI: Nick Veenhof
URL(s)
MRARR (pronounced "mer-arr," like a pirate) is the measurement of Merge Requests from customers, multiplied by the revenue (ARR) of that customer's account. This measures how actively our biggest customers are contributing to GitLab. We believe that the higher this number is, the better we will retain these customers and improve product fit for large enterprises. The unit of MRARR is MR Dollars (MR$). MR Dollars are different from the normal Dollars used for ARR. We are tracking current initiatives to improve this in this epic.
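As an illustration only, here is a minimal sketch of the MRARR arithmetic; the customer names, MR counts, and ARR figures are made up for the example.

```python
# Hypothetical example: MRARR = sum over customers of (merged MRs x account ARR).
customers = [
    {"name": "Customer A", "merged_mrs": 12, "arr_usd": 500_000},
    {"name": "Customer B", "merged_mrs": 3,  "arr_usd": 1_200_000},
]

mrarr = sum(c["merged_mrs"] * c["arr_usd"] for c in customers)
print(f"MRARR: {mrarr:,} MR$")  # 12*500,000 + 3*1,200,000 = 9,600,000 MR$
```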
Target: Identified in Sisense Chart
Health: Confidential
DRI: Nick Veenhof
URL(s)
Measures the success rate of the GitLab project's master branch pipelines, a key indicator of the stability of our releases. We will continue to leverage Merge Trains in this effort.
Target: Above 95%
Health: Attention
DRI: Kyle Wiebers
URL(s)
Measures the stability of our test tooling that enables end-to-end and exploratory testing feedback.
Target: Above 99%
Health: Attention
DRI: Kyle Wiebers
URL(s)
TtFF (pronounced "teuf") measures the average time from pipeline creation until the first actionable failed build is completed for the GitLab monorepo project. The Quality Department recently discussed the desire to measure the average time to first failure. We want to run the tests that are most likely to fail first and shorten the feedback cycle to R&D teams.
Target: Below 15 minutes
Health: Attention
DRI: Kyle Wiebers
URL(s)
S1 Open Bug Age (OBA) measures the total number of days that all S1 bugs are open within a month divided by the number of S1 bugs within that month.
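A minimal sketch of the OBA arithmetic; the bug count and per-bug open days below are made-up values purely for illustration.

```python
# Hypothetical example: three S1 bugs open during the month,
# accruing 40, 75, and 110 open days respectively within that month.
open_days_per_s1_bug = [40, 75, 110]

s1_oba = sum(open_days_per_s1_bug) / len(open_days_per_s1_bug)
print(f"S1 OBA: {s1_oba:.1f} days")  # (40 + 75 + 110) / 3 = 75.0 days, below the 100-day target
```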
Target: Below 100 days
Health: Okay
DRI: Tanya Pazitny
S2 Open Bug Age (OBA) measures the total number of days that all S2 bugs are open within a month divided by the number of S2 bugs within that month.
Target: Below 300 days
Health: Problem
DRI: Tanya Pazitny
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We need to be able to retain talented team members. Retention measures our ability to keep team members at GitLab. Team Member Retention = (1 - (Number of Team Members leaving GitLab / Average of the 12-month Total Team Member Headcount)) x 100. GitLab measures team member retention over a rolling 12-month period.
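A minimal sketch of that formula, using made-up headcount figures purely for illustration.

```python
# Hypothetical example of the retention formula above.
leavers_last_12_months = 4           # team members who left GitLab in the period
average_headcount_12_months = 50.0   # average of the 12 month-end headcounts

retention_pct = (1 - (leavers_last_12_months / average_headcount_12_months)) * 100
print(f"Team Member Retention: {retention_pct:.1f}%")  # (1 - 4/50) * 100 = 92.0%
```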
Target: at or above 84%
This KPI cannot be public.
Health: Okay
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
Measures the average time job openings take from open to close. Positions are closed when an offer is accepted. Unlike Time to Hire or Time to Offer Accept, which only measure the time from when a candidate applies to when they accept, this metric also includes the time spent sourcing candidates.
Target: at or below 50 days
Health: Attention
Number of Software Engineers in Test against the targeted gearing ratio. We have a detailed gearing prioritization model that informs which group we will hire an SET for first.
Target: At 46 Software Engineers in Test
Health: Attention
DRI: Tanya Pazitny
Measures the average duration of successful GitLab project merge request pipelines. This is a key building block for improving our cycle time and efficiency. More pipeline improvements and critical code path work are planned.
Target: Below 45 minutes
Health: Attention
URL(s)
Measures the average GitLab project merge request pipeline cost as an indicator of engineering efficiency. This is calculated by taking the total compute cost for the runner machines for merge request pipelines divided by the number of merge requests.
Target: Below 7.50
Health: Okay
OCMA (pronounced "ock-mah") measures the median age of all open MRs as of a specific date. In other words, on any given day, this calculates the number of open MRs and the median time those MRs have been in an open state at that point in time.
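A minimal sketch of that calculation, assuming a hypothetical list of opened-at dates for the MRs that are still open on the reference date.

```python
# Hypothetical example: compute the median age of open MRs as of a given date.
from datetime import date
from statistics import median

as_of = date(2021, 6, 1)
# Made-up opened-at dates for MRs that are still open on the as_of date.
open_mr_opened_dates = [date(2021, 5, 20), date(2021, 3, 1), date(2020, 12, 15)]

ages_in_days = [(as_of - opened).days for opened in open_mr_opened_dates]
print(f"Open MRs: {len(ages_in_days)}, OCMA: {median(ages_in_days)} days")
```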
Target: Below 100 days
Health: Attention
URL(s)
The number of MR Coaches, as defined by the team.yml role.
Target: Above 50 coaches per month
Health: Attention
URL(s)
Percentage of Community Contributions that are related to feature development
Target: Above 30%
Health: Unknown
This is the ratio of community contributions to the number of merged product MRs. As we grow as a company, we want to make sure the community scales with the company.
Target: Above 8% of all MRs
Health: Attention
Measures the speed of our full QA/end-to-end test suite in the master branch. A Software Engineer in Test job-family performance indicator.
Target: at 90 mins
Health: Okay
DRI: Tanya Pazitny
URL(s)
Measures the stability and effectiveness of our QA/end-to-end tests running in the master branch. A Software Engineer in Test job-family performance indicator.
Target: TBD
Health: Attention
DRI: Tanya Pazitny
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We need to spend our investors' money wisely. We also need to run a responsible business to be successful, and to one day go to the public markets. The latest data is in Adaptive; the Data team is importing it into Sisense in FY22Q2.
Target: See Sisense for target
Health: Unknown
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
The handbook is essential to working remotely successfully, to keeping up our transparency, and to recruiting successfully. Our processes are constantly evolving, and we need a way to make sure the handbook is being updated at a regular cadence. This data is retrieved by querying the API with a Python script for merge requests that change files matching /source/handbook/engineering/quality/** over time.
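A minimal sketch of such a query, assuming the requests package, a GITLAB_TOKEN environment variable, and the gitlab-com/www-gitlab-com project as the handbook repository; the actual script GitLab uses may differ.

```python
# Hypothetical sketch: count merged MRs that touch files under the Quality handbook path.
import os
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-com%2Fwww-gitlab-com"   # URL-encoded project path (assumed handbook repo)
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}
PATH_PREFIX = "source/handbook/engineering/quality/"

def quality_handbook_mrs(created_after):
    """Return the IIDs of merged MRs that change Quality handbook files."""
    matching, page = [], 1
    while True:
        resp = requests.get(
            f"{GITLAB_API}/projects/{PROJECT}/merge_requests",
            headers=HEADERS,
            params={"state": "merged", "created_after": created_after,
                    "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        mrs = resp.json()
        if not mrs:
            break
        for mr in mrs:
            # Fetch the list of changed files for each merge request.
            changes = requests.get(
                f"{GITLAB_API}/projects/{PROJECT}/merge_requests/{mr['iid']}/changes",
                headers=HEADERS,
            ).json()
            if any(c["new_path"].startswith(PATH_PREFIX) for c in changes.get("changes", [])):
                matching.append(mr["iid"])
        page += 1
    return matching
```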
Target: Above 1 MR per person per month
Health: Attention
This is a subset of an existing KPI. Please see the definition for the parent KPI.
Quality Department MR Rate is a key indicator showing how many changes the Quality Department implements directly in the GitLab product. It is important because it shows our iterative productivity based on the average MRs merged per team member. We currently count all members of the Quality Department (Director, EMs, ICs) in the denominator because this is a team effort. The full definition of MR Rate is linked in the URL section.
Target: Above 10 MRs per Month
Health: Attention
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We remain efficient financially if we are hiring globally, working asynchronously, and hiring great people in low-cost regions where we pay market rates. We track an average location factor by function and department so managers can make tradeoffs and hire in an expensive region when they really need specific talent unavailable elsewhere, and offset it with great people who happen to be in low cost areas.
Target: Below 0.58
Health: Attention
URL(s)
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We remain efficient financially if we are hiring globally, working asynchronously, and hiring great people in low-cost regions where we pay market rates. We track an average location factor for team members hired within the past 3 months so hiring managers can make tradeoffs and hire in an expensive region when they really need specific talent unavailable elsewhere, and offset it with another hire of a great person who happens to be in a more efficient location factor area. The historical average location factor represents the average location factor for only new hires in the last three months, excluding internal hires and promotions. The three-month rolling average location factor is calculated, for a given hire month, as the sum of the location factors of all new hires in the last three months divided by the number of new hires in the last three months. The data source is Workday data.
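A minimal sketch of that rolling average, using made-up location factors purely for illustration.

```python
# Hypothetical example: three-month rolling average location factor for one hire month.
# Each value is the location factor of one external new hire in the trailing three months.
new_hire_location_factors = [0.45, 0.70, 0.55, 0.50]  # made-up values

rolling_avg = sum(new_hire_location_factors) / len(new_hire_location_factors)
print(f"Rolling average location factor: {rolling_avg:.2f}")  # 2.20 / 4 = 0.55, below the 0.58 target
```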
Target: Below 0.58
Health: Okay
The total number of promotions over a rolling 12 month period divided by the month end headcount. The target promotion rate is 12% of the population. This metric definition is taken from the People Success Team Member Promotion Rate PI.
Target: 12%
Health: Okay
This is a subset of an existing KPI. Please see the definition for the parent KPI.
The number of discretionary bonuses given divided by the total number of team members, in a given period as defined. This metric definition is taken from the People Success Discretionary Bonuses KPI.
Target: at or above 10%
Health: Attention
Number of Engineering Productivity Engineers against the targeted gearing ratio.
Target: At 18 Engineering Productivity Engineers
Health: Problem
Value | Level | Meaning |
---|---|---|
3 | Okay | The KPI is at an acceptable level compared to the threshold |
2 | Attention | This is a blip, or we’re going to watch it, or we just need to enact a proven intervention |
1 | Problem | We'll prioritize our efforts here |
-1 | Confidential | Metric & metric health are confidential |
0 | Unknown | Unknown |
Pages, such as the Engineering Function Performance Indicators page, are rendered by an ERB template that contains HTML code.
These ERB templates call custom helper functions that extract and transform data from the Performance Indicators data file:

- The kpi_list_by_org(org) helper function takes a required string argument named org (department or division level) and returns all the KPIs (pi.is_key == true) for a specific organization grouping (pi.org == org) from the Performance Indicators data file.
- The pi_maturity_level(performance_indicator) helper function automatically assigns a maturity level based on the availability of certain data properties for a particular PI.
- The pi_maturity_reasons(performance_indicator) helper function returns a reason for a PI maturity based on other data properties.
- The performance_indicators(org) helper function takes a required string argument named org (department or division level) and returns two lists: a list of all KPIs and a list of all PIs for a specific organization grouping (department/division).
- The signed_periscope_url(data) helper function takes in the sisense_data property information from Performance Indicators data files and returns a signed chart URL for embedding a Sisense chart into the handbook.

The heart of pages like this are the Performance Indicators data files, which are YAML files. Each - denotes a dictionary of values for a new (K)PI. The current elements (or data properties) are:
Property | Type | Description |
---|---|---|
name | Required | String value of the name of the (K)PI. For Product PIs, the product hierarchy should be separated from the name by " - " (Ex: {Stage Name}:{Group Name} - {PI Type} - {PI Name}) |
base_path | Required | Relative path to the performance indicator page that this (K)PI should live on |
definition | Required | Refer to Parts of a KPI |
parent | Optional | Should be used when a (K)PI is a subset of another PI. For example, we might care about Hiring vs Plan at the company level. The child would be the division and department levels, which would have the parent flag. |
target | Required | The target or cap for the (K)PI. Please use Unknown until we reach maturity level 2 if this is not yet defined. For GMAU, the target should be quarterly. |
org | Required | The organizational grouping (Ex: Engineering Function or Development Department). For Product Sections, ensure you have the word section (Ex: Dev Section) |
section | Optional | The product section (Ex: dev) as defined in sections.yml |
stage | Optional | The product stage (Ex: release) as defined in stages.yml |
group | Optional | The product group (Ex: progressive_delivery) as defined in stages.yml |
category | Optional | The product category (Ex: feature_flags) as defined in categories.yml |
is_key | Required | Boolean value (true/false) that indicates if it is a (key) performance indicator |
health | Required | Indicates the (K)PI health and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI. |
health.level | Optional | Indicates a value between 0 and 3 (inclusive) to represent the health of the (K)PI. This should be updated monthly before Key Reviews by the DRI. |
health.reasons | Optional | Indicates the reasons behind the health level. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason. |
urls | Optional | List of URLs associated with the (K)PI. Should be an array (indented lines starting with dashes) even if you only have one URL. |
funnel | Optional | Indicates there is a handbook link for a description of the funnel for this PI. Should be a URL. |
sisense_data | Optional | Allows a Sisense dashboard to be embedded as part of the (K)PI using chart, dashboard, and embed as nested attributes. |
sisense_data.chart | Optional | Indicates the numeric Sisense chart/widget ID. For example: 9090628 |
sisense_data.dashboard | Optional | Indicates the numeric Sisense dashboard ID. For example: 634200 |
sisense_data.shared_dashboard | Optional | Indicates the Sisense shared_dashboard ID. For example: 185b8e19-a99e-4718-9aba-96cc5d3ea88b |
sisense_data.embed | Optional | Indicates the Sisense embed version. For example: v2 |
sisense_data_secondary | Optional | Allows a second Sisense dashboard to be embedded. Same as sisense_data. |
sisense_data_secondary.chart | Optional | Same as sisense_data.chart |
sisense_data_secondary.dashboard | Optional | Same as sisense_data.dashboard |
sisense_data_secondary.shared_dashboard | Optional | Same as sisense_data.shared_dashboard |
sisense_data_secondary.embed | Optional | Same as sisense_data.embed |
public | Optional | Boolean flag that can be set to false where a (K)PI does not meet the public guidelines. |
pi_type | Optional | Indicates the Product PI type (Ex: AMAU, GMAU, SMAU, Group PPI) |
product_analytics_type | Optional | Indicates if the metric is available on SaaS, SM (self-managed), or Both. |
is_primary | Optional | Boolean flag that indicates if this is the Primary PI for the Product Group. |
implementation | Optional | Indicates the implementation status and reasons as nested attributes. This should be updated monthly before Key Reviews by the DRI. |
implementation.status | Optional | Indicates the implementation status. This should be updated monthly before Key Reviews by the DRI. |
implementation.reasons | Optional | Indicates the reasons behind the implementation status. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason. |
lessons | Optional | Indicates lessons learned from a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI. |
lessons.learned | Optional | An attribute nested under lessons that indicates lessons learned from a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one lesson learned. |
monthly_focus | Optional | Indicates monthly focus goals for a (K)PI as a nested attribute. This should be updated monthly before Key Reviews by the DRI. |
monthly_focus.goals | Optional | Indicates monthly focus goals for a (K)PI. This should be updated monthly before Key Reviews by the DRI. Should be an array (indented lines starting with dashes) even if you only have one goal. |
metric_name | Optional | Indicates the name of the metric in the Self-Managed implementation. The SaaS representation of the Self-Managed implementation should use the same name. |
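As an illustration of how these properties fit together, here is a minimal sketch that loads a Performance Indicators data file and checks the required properties listed above. The file name performance_indicators.yml and the PyYAML dependency are assumptions for the example, not the handbook's actual tooling.

```python
# Hypothetical validation sketch for a Performance Indicators YAML data file.
import yaml

# Required properties from the table above.
REQUIRED_PROPERTIES = ["name", "base_path", "definition", "target", "org", "is_key", "health"]

with open("performance_indicators.yml") as f:
    indicators = yaml.safe_load(f)  # each list item is one (K)PI dictionary

for pi in indicators:
    missing = [prop for prop in REQUIRED_PROPERTIES if prop not in pi]
    if missing:
        print(f"{pi.get('name', '<unnamed>')}: missing required properties {missing}")
```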
Please reference the Engineering Metrics Page for guidelines on chart visualization formatting.