Please note: Product KPIs are mapped 1:1 to our Growth teams in order to focus those teams on experiments that improve our KPIs. Additional performance indicators will be tracked and may add value, but should ultimately drive one or more KPIs.
We want every group to have a single North Star Metric that aligns with the development activities. The North Star Metric can be SMAU or AMAU, but it can be a different metric as well, if that's more suitable. We expect to have quarterly goals around the North Star Metrics. This data will be used to inform future investment decisions.
It's important to understand what a North Star Metric is, and how it can be used. You can read more about it in Amplitude's North Star handbook. A North Star Metric is always a single metric. Your group might have more than one metric, but all the other metrics are expected to be used as inputs for the North Star Metric. The ideal North Star Metric is a leading indicator, and its input metrics break down the North Star into its dynamics. A few examples are provided below.
The North Star Metric is an important communication device, especially when used as a framework. Preferably, every engineer in the group should be aware of the group's North Star Metric and there is an active discussion around the input metrics that might drive the North Star Metric. Likely, engineers will have the best ideas to move the metrics and come up with a useful breakdown. Thus their understanding is crucial in using the North Star Metric framework.
A typical North Star Metric could be the total usage of a given feature. Any feature's usage can be broken down at least in the following way:

total feature usage = number of users using the feature x number of times each user uses the feature

Using number of users using the feature and number of times each user uses the feature as input metrics, we can target each with development efforts, and together they combine into the North Star Metric.
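This breakdown can be sketched in code. A minimal example, assuming a hypothetical event log that records one user ID per feature use (the function name and input shape are illustrative, not an actual GitLab implementation):

```python
from collections import Counter

def feature_usage_breakdown(events):
    """Break total feature usage into its two input metrics.

    `events` is a list of user IDs, one entry per feature use
    (hypothetical shape; any event log with a user column works).
    """
    uses_per_user = Counter(events)
    num_users = len(uses_per_user)                      # users using the feature
    avg_uses = sum(uses_per_user.values()) / num_users  # times each user uses it
    total = num_users * avg_uses                        # equals len(events)
    return num_users, avg_uses, total

users, avg, total = feature_usage_breakdown(["a", "b", "a", "c", "a", "b"])
# 3 users x 2.0 average uses = 6.0 total
```

Moving either input metric (acquiring more users, or driving more uses per user) moves the North Star Metric, which is what makes the breakdown useful for planning experiments.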
Another typical approach is to think of the North Star Metric as an important point in a user's journey, such as a signup event. This can be broken down using the funnel that leads to signup, and we can have separate development efforts around the funnel stages, while still knowing that our aim is to move the North Star Metric.
Stages per User (SpU) is calculated by dividing Stage Monthly Active Users (SMAU) by Monthly Active Users (MAU). SpU is meant to capture the number of DevOps stages the average user is using on a monthly basis. We hope to add this metric to the stage maturity page, alongside the number of contributions.
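As a sketch of the calculation, assuming per-stage SMAU figures are summed before dividing by MAU (that aggregation, the function name, and the numbers below are assumptions for illustration):

```python
def stages_per_user(smau_by_stage, mau):
    """Stages per User: SMAU, summed across stages, divided by MAU.

    smau_by_stage: dict of stage name -> SMAU count (hypothetical shape).
    """
    return sum(smau_by_stage.values()) / mau

spu = stages_per_user({"create": 900, "verify": 600, "package": 300}, mau=1000)
# 1.8 stages used per active user, on average
```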
An action describes an interaction the user has within a stage. The actions that need to be tracked have to be pre-defined.
AMAU is defined as the number of unique users for a specific action in a 28 day rolling period. AMAU helps to measure the success of features.
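A minimal sketch of the AMAU calculation, assuming a hypothetical event log of (user, action, date) tuples rather than the actual usage ping implementation:

```python
from datetime import date, timedelta

def amau(events, action, as_of):
    """AMAU: unique users who performed `action` in the 28-day
    rolling window ending on `as_of` (inclusive).

    `events` is an iterable of (user_id, action_name, event_date)
    tuples -- a hypothetical event-log shape for illustration.
    """
    window_start = as_of - timedelta(days=27)  # 28 days, inclusive of as_of
    return len({
        user for user, name, day in events
        if name == action and window_start <= day <= as_of
    })

events = [
    ("u1", "push", date(2024, 3, 1)),
    ("u2", "push", date(2024, 3, 10)),
    ("u1", "push", date(2024, 3, 20)),  # same user again, counted once
    ("u3", "push", date(2024, 1, 15)),  # outside the 28-day window
]
print(amau(events, "push", as_of=date(2024, 3, 25)))  # 2
```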
Note: there are metrics in usage ping under usage activity by stage that aren't user actions, and these should not be used for AMAU. For example:
- `GroupMember.distinct_count_by(:user_id)`: the number of distinct users added to groups, regardless of activity
Other counter examples:
- a count of LDAP group links, not a user-initiated action
- a setting, not an action
Stage Monthly Active Users is a KPI that is required for all product stages. SMAU is defined as the specified AMAU within a stage in a 28 day rolling period.
| Stage | SMAU Candidate based on usage ping | Event details | Confirmed by |
| ----- | ---------------------------------- | ------------- | ------------ |
| package | For instances that have | | |
For Secure, SMAU is defined as the highest AMAU within its stage in a 28 day rolling period.
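The highest-AMAU definition reduces to a simple maximum over the stage's tracked actions. A sketch, where the action names and counts are illustrative rather than actual usage ping metrics:

```python
def smau(amau_by_action):
    """SMAU for a stage, defined here as the highest AMAU among the
    stage's tracked actions (the Secure-style definition above).

    amau_by_action: dict mapping action name -> AMAU (assumed shape).
    """
    return max(amau_by_action.values())

print(smau({"sast_scan": 420, "dependency_scan": 310, "dast_scan": 95}))  # 420
```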
We are working to define a feature to track for SMAU purposes for the following stages:
While an ideal definition for SMAU is the count of unique users who perform any action in a given stage, the current approach was chosen for technical reasons. First, we need to consider the query performance of the usage ping (e.g. time-outs). Second, it lets us compare SMAU metrics without worrying about an instance's version, since definitions change between versions.

Dashboards for SaaS
AMAN is defined as the number of unique namespaces in which a specific action was performed in a 28 day rolling period.
Stage Monthly Active Namespaces is a KPI that is required for all product stages. SMAN is defined as the highest AMAN within a stage in a 28 day rolling period.
As IACV is the most important metric in the company, Product has defined an Investment Thesis to measure a feature's impact on IACV.
Percentage of category maturity plan achieved per quarter
Abbreviated as PNPS; please do not refer to it as NPS, to prevent confusion. Measured as the percentage of paid customer "promoters" minus the percentage of paid customer "detractors" from a Net Promoter Score survey. Note that while other teams at GitLab use a satisfaction score, we have chosen to use Net Promoter Score in this case so it is easier to benchmark against similar companies. Also note that the score will likely reflect customer satisfaction beyond the product itself, as customers will grade us on the total customer experience, including support, documentation, billing, etc.
The number of active accounts on all self-managed instances that we receive usage ping from.
An active account in this context is defined as `Total accounts - Blocked users`, so it is not truly measuring "activity", only non-blocked accounts on instances.
To get a more accurate measure of MAU on self-managed, we will add new counters to usage ping (Issue).
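The current definition is simple arithmetic; a sketch with made-up numbers (the function name and figures are illustrative):

```python
def self_managed_active_accounts(total_accounts, blocked_users):
    """Active accounts as currently reported: total minus blocked.

    Note this counts every non-blocked account, not actual activity.
    """
    return total_accounts - blocked_users

print(self_managed_active_accounts(total_accounts=5000, blocked_users=250))  # 4750
```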
The number of unique users that performed an event on GitLab.com within the previous 28 days.
Number of new users who signed up for a GitLab account (GitLab.com or Self-Managed) in a given month.
Number of paid groups that added users to the namespace in a given month.
Total number of CI Runner Minutes consumed in a given month.
Percent of users or groups that are still active between the current month and the prior month.
The opposite of User Return Rate. The percentage of users or groups that are no longer active in the current month, but were active in the prior month.
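Both rates can be computed from two sets of active IDs, one per month. A sketch with hypothetical user sets:

```python
def return_and_churn_rates(active_prior, active_current):
    """User Return Rate and its complement, the churn rate.

    Inputs are sets of user (or group) IDs active in the prior and
    current month -- hypothetical shapes for illustration.
    """
    returned = active_prior & active_current
    return_rate = len(returned) / len(active_prior)
    churn_rate = 1 - return_rate  # churn is the opposite of return
    return return_rate, churn_rate

prior = {"u1", "u2", "u3", "u4"}
current = {"u2", "u3", "u5"}
print(return_and_churn_rates(prior, current))  # (0.5, 0.5)
```

Note that `u5`, active only in the current month, affects neither rate; both are defined relative to the prior month's active population.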
Number of new Projects created in a calendar month.
Number of Merge Requests created in a calendar month.
Number of Issues created in a calendar month.
This metric reports on the percentage of Direction items that have met or exceeded their respective success performance indicators. For each feature labeled ~Direction, there should be a defined success metric, and telemetry configured to report on that success metric to determine if it was provably successful.
Percent of open GitLab issues that have comments from customers and wider community members. This dashboard also measures relative engagement over time.
Number of users who moved from a free tier to a paid tier in a given month.
A GitLab.com user, who is not a MAU in month T, but was a MAU in month T-1.
A GitLab.com user, who is a MAU both in months T and T-1.
A newly registered GitLab.com user - no requirements on activity.
A GitLab.com user, who is not a new user and who was not a MAU in month T-1, but is a MAU in month T.
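The four user states above can be captured in one classification function. The helper below is a hypothetical sketch, taking monthly activity flags as inputs:

```python
def classify_user(active_t, active_t_minus_1, is_new):
    """Classify a GitLab.com user per the definitions above.

    All inputs are booleans; `is_new` means registered in month T.
    """
    if is_new:
        return "new"          # no requirements on activity
    if active_t and active_t_minus_1:
        return "retained"     # MAU in both T and T-1
    if active_t:
        return "resurrected"  # not new, MAU in T but not T-1
    if active_t_minus_1:
        return "churned"      # MAU in T-1 but not T
    return "inactive"         # not covered by the definitions above

print(classify_user(active_t=True, active_t_minus_1=False, is_new=False))  # resurrected
```

The same logic applies to groups, swapping MAU for MAG and "newly registered" for "newly created top-level group".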
A GitLab.com Licensed User
A GitLab.com group, which is not a MAG in month T, but was a MAG in month T-1.
A GitLab.com group, which is a MAG both in months T and T-1.
A newly created top-level GitLab.com group - no requirements on activity.
A GitLab.com group, which is part of a paid plan, i.e. Bronze, Silver or Gold. Free licenses for Ultimate and Gold are currently included.
A GitLab.com user, who is a member of a Paid Group.
The percent of users or groups that pay for additional CI pipeline minutes.
The count of active self-managed hosts, both Core and Paid, plus GitLab.com.
This is measured by counting the number of unique GitLab instances that send us usage ping.
We know from a previous analysis that only ~30% of licensed instances send us usage ping at least once a month.
This is the conversion rate of customers moving from tier to tier.
A lost instance of self-managed GitLab is one that didn't send a usage ping in the given month but was active in the previous month.