Throughput is a measure of the total number of MRs completed and in production in a given period of time. Unlike velocity, throughput does not require story points or weights; instead, we count the number of MRs a team completes in the span of a week or a release. Each MR counts as 1 unit/point. The calculation happens after the time period is complete, so no pre-planning is required to capture this metric. The total count should not be limited to MRs that deliver features; it's important to include engineering-proposed MRs in this count as well. This ensures that we reflect the team's capacity in a consistent way and focus on delivering at a predictable rate.
We also refer to throughput as productivity on occasion. In both cases, we measure it at a team level (or higher), not at an individual level.
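As a minimal sketch of the calculation, assuming a hypothetical list of merged MRs with their merge dates (each MR counting as one unit regardless of size or labels), weekly throughput is simply a count per ISO week:

```python
from collections import Counter
from datetime import date

# Hypothetical merged MRs for a team: (merge date, labels).
merged_mrs = [
    (date(2024, 5, 6), ["feature"]),
    (date(2024, 5, 7), ["bug"]),
    (date(2024, 5, 8), ["backstage"]),
    (date(2024, 5, 13), ["feature"]),
]

# Throughput per ISO (year, week): the count of MRs merged that week.
# Every MR contributes exactly one unit; no weighting is applied.
throughput = Counter(d.isocalendar()[:2] for d, _labels in merged_mrs)
print(throughput)  # e.g. Counter({(2024, 19): 3, (2024, 20): 1})
```

In practice this data would come from the MR records of all of a team's projects after the period closes, which is why no pre-planning is needed.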
Each merge request must have one of the following labels. These labels assign a category in the charts. If an MR has more than one of these labels, the highest one in the list takes precedence.
~"Community contribution": A community contribution label takes precedence over other labels. Therefore, while the work may introduce a new feature or resolve a bug, we prioritize this label over others due to the importance of this particular category. You may use a second label such as ~bug or ~feature if you would like to add an additional identifier.
~security: Security-related MRs.
~bug: Defects in shipped code.
~feature: Any MR that contains work to support the implementation of a feature and/or results in an improvement to the user experience. Whether or not the code results in user-facing updates, if it is part of building the feature it should be labelled as such. Additionally, performance improvements and user interface enhancements improve the experience for end users and should be labelled ~"feature".
~backstage: This is a hard category to define, but you can consider it the NOT of all the other labels. Better yet, it is the work we do to keep the product, or our own development, running smoothly. Technical debt falls under this category, though to keep the categories simple we currently do not use the ~"technical debt" label; please use ~"backstage" instead. Some examples of work that fit under this label:
If an MR does not have any of these labels, it is tracked in the 'undefined' bucket instead. The Engineering Manager for each team is ultimately responsible for ensuring that these labels are set correctly, and should do this as a manual process on a schedule that is appropriate for their team.
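The precedence rules above can be sketched as a simple ordered lookup; the `PRECEDENCE` list and `categorize` helper here are illustrative, not an actual tool we run:

```python
# Category precedence for throughput charts, highest first. An MR with
# more than one of these labels is counted under the highest match; an
# MR with none of them falls into the "undefined" bucket.
PRECEDENCE = ["Community contribution", "security", "bug", "feature", "backstage"]

def categorize(mr_labels):
    """Return the throughput chart category for an MR's label list."""
    for label in PRECEDENCE:
        if label in mr_labels:
            return label
    return "undefined"

# Community contribution wins even when paired with ~bug:
print(categorize(["Community contribution", "bug"]))  # Community contribution
print(categorize(["feature", "backstage"]))           # feature
print(categorize(["docs"]))                           # undefined
```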
Throughput charts are available on the quality dashboard for each team.
If you see a large number of 'undefined' MRs represented, spend some time reviewing your team's MRs and add labels so you get a more accurate reflection of your investment. It can take up to a day for these updates to show up on the quality dashboard. Also keep in mind that the data here represents contributions across multiple projects, and label hygiene is not enforced in all of them.
When combined with cycle time, throughput is a great metric to help you identify areas of improvement and possible bottlenecks that the team can work to address.
In the spirit of "Everyone can Contribute", it's natural that members of one group will contribute to another group.
Our guideline aims to cover the 20/80 (default accounting method) case. By default, an author's MR should carry their ~"group::xxx" label and its direct parent ~"devops::xxx" label. Optimizing for all edge cases will lead to complexity, since there will always be edge cases.
We allow flexibility where the parent ~"devops::xxx" and child ~"group::xxx" labels may not match. For example: ~backstage work that spans multiple groups.
If a contribution happens across groups, we leave it to the discretion of the engineering manager and product manager to change the ~"group::xxx" label to reflect which group worked on it. They can also decide whether to move the ~"devops::xxx" label over as well, or keep it to reflect the product area.
The triage bot's automatic labelling will not override existing labels.