KPI | Health | Status |
---|---|---|
Sales Renewal CSAT | Okay | |
Development Non-Headcount Plan vs Actuals | Okay | |
Development Overall Handbook Update Frequency Rate | Attention | |
Review To Merge Time (RTMT) | Attention | |
Development Department Narrow MR Rate | Okay | |
Largest Contentful Paint (LCP) | Attention | |
Can we improve the sales renewal process to meet a 90% satisfaction rating from internal sales teams?
Target: Above 90%
Chart (Sisense↗)
Health: Okay
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We need to spend our investors' money wisely. We also need to run a responsible business to be successful, and to one day go on the public market.
Target: Unknown until FY21 planning process
URL(s)
Health: Okay
This is a subset of an existing KPI. Please see the definition for the parent KPI.
The handbook is essential to working remotely, to keeping up our transparency, and to recruiting successfully. Our processes are constantly evolving, and we need a way to make sure the handbook is being updated at a regular cadence. This data is retrieved by querying the API with a Python script for merge requests that have files matching `/source/handbook/engineering/development/**` over time; a sketch of this calculation appears after this entry. The calculation for the monthly overall handbook update frequency rate is the number of handbook updates divided by the number of team members in the Development Department for a given month.
Target: At or above 0.5
Chart (Sisense↗)
Health: Attention
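A minimal sketch of how such a script might compute this rate for one month, assuming the handbook content lives in the `gitlab-com/www-gitlab-com` project and that the Development Department headcount is supplied separately; the project ID, date window, and omission of pagination are illustrative choices, not the actual script:

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "gitlab-com/www-gitlab-com"  # assumed handbook repository
HANDBOOK_PREFIX = "source/handbook/engineering/development/"

def handbook_mrs_for_month(token, month_start, month_end):
    """Count merged MRs that touched the Development handbook in a month."""
    headers = {"PRIVATE-TOKEN": token}
    params = {
        "state": "merged",
        "updated_after": month_start,
        "updated_before": month_end,
        "per_page": 100,  # pagination of further pages omitted in this sketch
    }
    project = requests.utils.quote(PROJECT_ID, safe="")
    url = f"{GITLAB_API}/projects/{project}/merge_requests"
    count = 0
    for mr in requests.get(url, headers=headers, params=params).json():
        # Inspect the files changed by each MR and keep those touching the handbook path.
        changes = requests.get(f"{url}/{mr['iid']}/changes", headers=headers).json()
        if any(c["new_path"].startswith(HANDBOOK_PREFIX) for c in changes.get("changes", [])):
            count += 1
    return count

def update_frequency_rate(handbook_updates, team_member_count):
    """Monthly rate = handbook updates / Development Department headcount."""
    return handbook_updates / team_member_count
```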
To be aligned with review time from Development. Monthly product MR review to merge time (RTMT) tells us, on average, how long it takes from requesting the first MR review to the MR being merged. This metric includes only MRs authored by team members in the Development Department; no community contributions are included. The VP of Development is the DRI on what projects are included.
Target: At or below 4.5 days
Chart (Sisense↗)
Health: Attention
Development Department Narrow MR Rate is a performance indicator showing how many changes the Development Department implements directly in the GitLab product. The projects that are part of the product contribute to the overall product development efforts. This is the ratio of product MRs authored by team members in the Development Department to the number of team members in the Development Department. It's important because it shows us how productivity within the Development Department has changed over time.
Target: Above 10 MRs per month
Chart (Sisense↗)
Health: Okay
Largest Contentful Paint (LCP) is an important, user-centric metric for measuring perceived load speed: it marks the point at which the largest content element becomes visible on the web page. To provide a good user experience on GitLab.com, we strive to have the LCP occur within the first few seconds of the page starting to load. This LCP metric reports on our Projects Home Page. LCP data comes from the Graphite database. We reference this webpage to compare performance information across various hosted software development services.
Target: Below 2.5s at the 90th percentile
Chart (Sisense↗)
Health: Attention
This is a subset of an existing KPI. Please see the definition for the parent KPI.
Employees whose division is "Engineering" and whose department is "Development".
Target: 283 by end of FY21
Chart (Sisense↗)
Health: Okay
We remain efficient financially if we are hiring globally, working asynchronously, and hiring great people in low-cost regions where we pay market rates. We track an average location factor by function and department so managers can make tradeoffs and hire in an expensive region when they really need specific talent unavailable elsewhere, and offset it with great people who happen to be in low-cost areas.
Target: Below 0.54
Chart (Sisense↗)
Health: Attention
Discretionary bonuses offer a highly motivating way to reward individual GitLab team members who really shine as they live our values. Our goal is to award discretionary bonuses to 10% of GitLab team members in the Development department every month.
Target: At 10%
URL(s)
Health: Unknown
This shows the number of Merge Requests (MRs) on a month-by-month basis. We only consider MRs that contribute to our product. It's important because it shows the overall development team's velocity and helps ensure that we are continually making our product better. The VP of Development is the DRI on what projects are included. Please note that not all MRs in GitLab will count towards the monthly total.
Target: 20% increase quarter over quarter
URL(s)
Chart (Sisense↗)
Health: Okay
Measurement of the time from a CVE being issued to our product being updated.
Target: 7 days (until further data is provided)
URL(s)
Chart (Sisense↗)
Health: Okay
Measurement of the time from a community member proposing an MR until GitLab responds. It's important because it shows our commitment to, and engagement with, the community.
Target: No Target Set
URL(s)
Health: Unknown
BE Unit Test coverage shows the unit test coverage of our code base. For example, 95% means that 95% of the lines of code (LOC) in our BE software are covered by unit tests. It's important as it shows how much code is tested early in the development process.
Target: Above 95%
URL(s)
Health: Okay
FE Unit Test coverage shows the unit test coverage of our code base. For example, 95% means that 95% of the lines of code (LOC) in our FE software are covered by unit tests. It's important as it shows how much code is tested early in the development process.
Target: Above 75%
URL(s)
Chart (Sisense↗)
Health: Attention
To be aligned with CycleTime from Development. Monthly product MR mean time to merge tells us, on average, how long it takes from initiating an MR to the MR being merged. This metric includes only MRs authored by team members in the Development Department; no community contributions are included. The VP of Development is the DRI on what projects are included.
Target: At or below 11 days
Chart (Sisense↗)
Health: Okay
We want to measure the lifecycle of MRs and reduce the tail of long-running MRs. We don't expect to ever eliminate it because there can be unique cases, but we don't want the tail trending up.
Target: < 10% of MRs in the past 3 months are merged more than 14 days after they are opened
Chart (Sisense↗)
Health: Attention
We want to measure the breakdown of our development investment by MR type/label. We only consider MRs that contribute to our product. If an MR has more than one of these labels, the highest one in the list takes precedence.
Target: < 5% change in proportion of MRs with undefined label
Chart (Sisense↗)
Health: Unknown
Say/do ratios measure the number of product issues that were committed to a development phase vs. the number of product issues that were closed at the end of the release cycle. We don't want to put pressure on teams to make commitments and push to deliver on those commitments. Rather, we want to optimize for velocity over predictability to deliver more value per release. It's important to note that this is also not used as a comparative metric across teams, but rather to help each team understand what they are consistently able to deliver during each release. Additionally, the say/do ratios help us clean up issues and encourage conversations between engineering managers and product managers.
Target: Unknown
URL(s)
Health: Unknown
We want to measure the number of contributors and contributions made outside of the Development Department. We do this by looking at the number of product MRs merged by members in the Development Department (Directors, EMs, ICs) vs the number of product MRs merged by members outside of the Development Department. It's important because it shows how much of what gets shipped is being done by members outside of the Development Department.
Target: Unknown
Chart (Sisense↗)
Health: Okay
We aim to keep the number of merge requests per maintainer at a reasonable level.
Target: Below 20
Chart (Sisense↗)
Health: Okay
This tracks the number of maintainers and trainees over time.
Target: Unknown
Chart (Sisense↗)
Health: Okay
The percentage of engineers who worked on fewer than X merge requests. Observing the MR rate distribution across individuals helps us understand how the productivity distribution is changing over time.
Target: Unknown
Chart (Sisense↗)
Health: Unknown
This is a subset of an existing KPI. Please see the definition for the parent KPI.
We remain efficient financially if we are hiring globally, working asynchronously, and hiring great people in low-cost regions where we pay market rates. We track an average location factor for team members hired within the past 3 months so hiring managers can make tradeoffs and hire in an expensive region when they really need specific talent unavailable elsewhere, and offset it with great people who happen to be in more efficient location factor areas with another hire. The historical average location factor represents the average location factor for only new hires in the last three months, excluding internal hires and promotions. The three-month rolling average location factor is calculated as the sum of the location factors of all new hires in the last three months divided by the number of new hires in the last three months for a given hire month; a sketch of this calculation appears after this entry. The data source is BambooHR data.
Target: Below 0.54
Chart (Sisense↗)
Health: Problem
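A minimal sketch of that rolling average, assuming a simple list of `(hire_date, location_factor)` records exported from BambooHR with internal hires and promotions already excluded; the record format and values are illustrative, not the actual report:

```python
from datetime import date

def rolling_avg_location_factor(new_hires, hire_month):
    """Average location factor of new hires in the 3 months ending at hire_month.

    new_hires: list of (hire_date, location_factor) tuples.
    """
    # Build the three-month window ending at the given hire month.
    months, year, month = [], hire_month.year, hire_month.month
    for _ in range(3):
        months.append((year, month))
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    window = [lf for (d, lf) in new_hires if (d.year, d.month) in months]
    if not window:
        return None
    # Sum of location factors divided by the number of new hires in the window.
    return sum(window) / len(window)

# Example: location factors 0.45, 0.62, and 0.50 over the last three months
# give a rolling average of (0.45 + 0.62 + 0.50) / 3 ≈ 0.52.
hires = [(date(2020, 4, 10), 0.45), (date(2020, 5, 3), 0.62), (date(2020, 6, 21), 0.50)]
print(rolling_avg_location_factor(hires, date(2020, 6, 1)))
```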
This shows the average number of PTO days taken per Development Team Member. It is the ratio of PTO days taken to the number of team members in the Development Department each month. Looking at the average number of PTO days over time helps us understand increases or decreases in efficiency and ensure that team members are taking time off to keep a healthy work/life balance.
Target: TBD
Chart (Sisense↗)
Health: Attention
Value | Level | Meaning |
---|---|---|
3 | Okay | The KPI is at an acceptable level compared to the threshold |
2 | Attention | This is a blip, or we’re going to watch it, or we just need to enact a proven intervention |
1 | Problem | We'll prioritize our efforts here |
0 | Unknown | Unknown |
Pages, such as the Engineering Function Performance Indicators page, are rendered by an ERB template that contains HTML code.
Other PI Pages
These ERB templates call custom helper functions that extract and transform data from the Performance Indicators data file:

- `kpi_list_by_org(org)` takes a required string argument named `org` (department or division level) and returns all the KPIs (`pi.is_key == true`) for a specific organization grouping (`pi.org == org`) from the Performance Indicators data file.
- `pi_maturity_level(performance_indicator)` automatically assigns a maturity level based on the availability of certain data properties for a particular PI.
- `pi_maturity_reasons(performance_indicator)` returns a reason for a PI's maturity based on other data properties.
- `performance_indicators(org)` takes a required string argument named `org` (department or division level) and returns two lists: a list of all KPIs and a list of all PIs for a specific organization grouping (department/division).
- `signed_periscope_url(data)` takes the `sisense_data` property information from a Performance Indicators data file and returns a signed chart URL for embedding a Sisense chart into the handbook.

The heart of pages like this are the Performance Indicators data files, which are YAML files. Each `-` denotes a dictionary of values for a new (K)PI (see the example after the table below). The current elements (or data properties) are:
Property | Type | Description |
---|---|---|
name | Required | String value of the name of the (K)PI. For Product PIs, the product hierarchy should be separated from the name by " - " (Ex. {Stage Name}:{Group Name} - {PI Type} - {PI Name}) |
base_path | Required | Relative path to the performance indicator page that this (K)PI should live on |
definition | Required | Refer to Parts of a KPI |
parent | Optional | Should be used when a (K)PI is a subset of another PI. For example, we might care about Hiring vs Plan at the company level. The child would be the division and department levels, which would have the parent flag. |
target | Required | The target or cap for the (K)PI. Please use Unknown until we reach maturity level 2 if this is not yet defined. For GMAU, the target should be quarterly. |
org | Required | The organizational grouping (Ex: Engineering Function or Development Department). For Product Sections, ensure you have the word section (Ex: Dev Section) |
section | Optional | The product section (Ex: dev) as defined in sections.yml |
stage | Optional | The product stage (Ex: release) as defined in stages.yml |
group | Optional | The product group (Ex: progressive_delivery) as defined in stages.yml |
category | Optional | The product category (Ex: feature_flags) as defined in categories.yml |
is_key | Required | Boolean value (true/false) that indicates if it is a (key) performance indicator |
health | Required | Indicates the (K)PI health and reasons as nested attributes. This should be updated monthly before Key Meetings by the DRI. |
health.level | Optional | Indicates a value between 0 and 3 (inclusive) to represent the health of the (K)PI. This should be updated monthly before Key Meetings by the DRI. |
health.reasons | Optional | Indicates the reasons behind the health level. This should be updated monthly before Key Meetings by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason. |
urls | Optional | List of URLs associated with the (K)PI. Should be an array (indented lines starting with dashes) even if you only have one URL. |
funnel | Optional | Indicates there is a handbook link for a description of the funnel for this PI. Should be a URL. |
sisense_data | Optional | Allows a Sisense dashboard to be embedded as part of the (K)PI using chart, dashboard, and embed as nested attributes. |
sisense_data.chart | Optional | Indicates the numeric Sisense chart/widget ID. For example: 9090628 |
sisense_data.dashboard | Optional | Indicates the numeric Sisense dashboard ID. For example: 634200 |
sisense_data.shared_dashboard | Optional | Indicates the Sisense shared_dashboard ID. For example: 185b8e19-a99e-4718-9aba-96cc5d3ea88b |
sisense_data.embed | Optional | Indicates the Sisense embed version. For example: v2 |
sisense_data_secondary | Optional | Allows a second Sisense dashboard to be embedded. Same as sisense_data. |
sisense_data_secondary.chart | Optional | Same as sisense_data.chart |
sisense_data_secondary.dashboard | Optional | Same as sisense_data.dashboard |
sisense_data_secondary.shared_dashboard | Optional | Same as sisense_data.shared_dashboard |
sisense_data_secondary.embed | Optional | Same as sisense_data.embed |
public | Optional | Boolean flag that can be set to false where a (K)PI does not meet the public guidelines. |
pi_type | Optional | Indicates the Product PI type (Ex: AMAU, GMAU, SMAU, Group PPI) |
product_analytics_type | Optional | Indicates if the metric is available on SaaS, SM (self-managed), or Both. |
is_primary | Optional | Boolean flag that indicates if this is the Primary PI for the Product Group. |
implementation | Optional | Indicates the implementation status and reasons as nested attributes. This should be updated monthly before Key Meetings by the DRI. |
implementation.status | Optional | Indicates the Implementation Status. This should be updated monthly before Key Meetings by the DRI. |
implementation.reasons | Optional | Indicates the reasons behind the implementation status. This should be updated monthly before Key Meetings by the DRI. Should be an array (indented lines starting with dashes) even if you only have one reason. |
lessons | Optional | Indicates lessons learned from a (K)PI as a nested attribute. This should be updated monthly before Key Meetings by the DRI. |
lessons.learned | Optional | An attribute nested under lessons that lists lessons learned from a (K)PI. This should be updated monthly before Key Meetings by the DRI. Should be an array (indented lines starting with dashes) even if you only have one lesson learned. |
monthly_focus | Optional | Indicates monthly focus goals from a (K)PI as a nested attribute. This should be updated monthly before Key Meetings by the DRI. |
monthly_focus.goals | Optional | Indicates monthly focus goals from a (K)PI. This should be updated monthly before Key Meetings by the DRI. Should be an array (indented lines starting with dashes) even if you only have one goal. |
metric_name | Optional | Indicates the name of the metric in the Self-Managed implementation. The SaaS representation of the Self-Managed implementation should use the same name. |
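A minimal sketch of one entry in a Performance Indicators data file, using the properties above; the name and values are illustrative, not a real (K)PI:

```yaml
- name: Development Department Example Rate
  base_path: /handbook/engineering/development/performance-indicators/
  definition: Example definition; refer to Parts of a KPI.
  target: At or above 0.5
  org: Development Department
  is_key: true
  public: true
  health:
    level: 2
    reasons:
      - The rate dipped below target last month; an intervention is in progress.
  urls:
    - https://example.com/related-issue
```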
Targets should be expressed in one of the following forms:

- Above ...
- Below ...
- At ...
- At or above ...
- At or below ...
Add the `shared_dashboard`, `chart`, and `dashboard` key-value pairs to the corresponding Performance Indicators data file under the `sisense_data` property:
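A minimal sketch, reusing the illustrative IDs from the property table above:

```yaml
sisense_data:
  shared_dashboard: 185b8e19-a99e-4718-9aba-96cc5d3ea88b
  chart: 9090628
  dashboard: 634200
```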
Avoid using `:` in strings, as it is an important character in YAML and will confuse the data parsing process. Put the string in "quotes" if you really need to use a `:`.