Throughput is a measure of the total number of MRs that are completed and in production in a given period of time. Unlike velocity, throughput does not require the use of story points or weights; instead, we count the number of MRs a team completes in the span of a week or a release. Each MR is represented by one unit/point. This calculation happens after the time period is complete, so no pre-planning is required to capture this metric. The total count should not be limited to MRs that deliver features; it is important to include engineering-proposed MRs in this count as well. This ensures that we properly reflect the team's capacity in a consistent way and focus on delivering at a predictable rate.
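The counting rule above can be sketched in a few lines. This is a minimal illustration, assuming merged MRs are available as records with a merge date (in practice they would come from your source control system's API); the IDs and dates are hypothetical.

```python
from datetime import date

# Hypothetical merged MRs -- IDs and dates are illustrative only.
merged_mrs = [
    {"id": 101, "merged_on": date(2023, 5, 1)},
    {"id": 102, "merged_on": date(2023, 5, 3)},
    {"id": 103, "merged_on": date(2023, 5, 9)},
]

def throughput(mrs, start, end):
    """Count MRs merged within [start, end] inclusive.

    Each MR counts as exactly one unit -- no weights or story points.
    """
    return sum(1 for mr in mrs if start <= mr["merged_on"] <= end)

# Throughput for the week of May 1-7.
print(throughput(merged_mrs, date(2023, 5, 1), date(2023, 5, 7)))  # -> 2
```

Because the count is taken after the period closes, the function only ever looks backward over completed work; nothing needs to be estimated up front.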
Each merge request must have one of the following labels. These labels assign a category in the charts. If an MR has more than one of these labels, the one highest in the list takes precedence.
If an MR does not have any of these labels, it is tracked in the 'undefined' bucket instead. The Engineering Manager for each team is ultimately responsible for ensuring that these labels are set correctly, and should do this as a manual process on a schedule that works for their team.
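The precedence rule described above amounts to a first-match scan over an ordered label list. A minimal sketch, assuming a hypothetical label order (the actual category labels and their order are defined by the team's charts, not by this example):

```python
# Hypothetical ordered category labels -- highest precedence first.
CATEGORY_LABELS = ["security", "bug", "feature", "backstage"]  # illustrative

def categorize(mr_labels):
    """Return the chart category for an MR.

    The first label in CATEGORY_LABELS that the MR carries wins;
    an MR with none of them falls into the 'undefined' bucket.
    """
    for label in CATEGORY_LABELS:
        if label in mr_labels:
            return label
    return "undefined"

print(categorize(["feature", "backstage"]))  # -> feature
print(categorize(["docs"]))                  # -> undefined
```

Keeping the precedence in a single ordered list makes the manual audit the Engineering Manager performs easy to reproduce programmatically.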
The goal of using this measure is to incentivize teams to break work into the smallest deliverable MRs, which leads to smaller sets of changes and the many benefits that come along with that.
This practice aligns with one of our core values, Iteration: do the smallest thing possible and get it out as quickly as possible.
Instead of spending time sizing and figuring out the weight of an issue, we should put this effort toward breaking issues down into the smallest deliverables.
Since throughput is a measure of actual work completed, it is far more accurate than using weights.
Throughput is a simpler model for new teams to adopt, since the measure is just the count of small, well-defined MRs.
Unlike weights, which may be estimated differently from one team to another, throughput can be normalized across every engineering team.
A few notes to consider when using this model:
There are many activities, such as code reviews, meetings, and planning, that we do not count as independent units of work; they are, however, accounted for as part of the delivery of an issue, whether it is feature work or technical debt. The team's rate of delivering code to production is what we are trying to measure.
While there is no scoring required with this model, there is still value in an Engineering Manager looking through each issue they are committing to in any given release and making sure each is a well-defined, small deliverable.
When combined with cycle time, throughput is a great metric to help you identify areas of improvement and possible bottlenecks that the team can work to address.