For more details about the vision for this area of the product, see the Plan stage page.
This team is currently shared between Plan:Portfolio Management and Plan:Certify. See that page for details.
| Person | Role |
|--------|------|
| Felipe Artur | Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Donald Cook | Frontend Engineering Manager, Plan |
| Kushal Pandya | Senior Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Eulyeon K. | Frontend Engineer (Intern), Plan |
| Jarka Košanová | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Mike Long | Product Design Manager, Plan & Manage |
| Jan Provaznik | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Justin Farris | Group Manager, Product Management, Plan |
| Rajat Jain | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Florie Guibert | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Axel García | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Charlie Ablett | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Eugenia Grieff | Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Nick Brandt | Product Designer, Plan:Certify |
| Mark Wood | Senior Product Manager, Plan:Certify |
| John Hope | Backend Engineering Manager, Plan:Portfolio Management & Plan:Certify |
| Marcin Sędłak-Jakubowski | Technical Writer, Plan |
Since we share a backend team between the Plan:Portfolio Management and Certify groups, we have a combined metrics dashboard. This is intended to track against some of the Development Department KPIs, particularly those around merge request creation and acceptance. From that dashboard, the following charts show MR Rate and Mean time to merge (MTTM) respectively.
The following chart shows a breakdown of MRs by category (omitting Security, for now). Totals may vary slightly from overall throughput as some MRs may have more than one throughput label.
We have an application performance dashboard (internal) that tracks the performance of the parts of GitLab for which we are responsible. This dashboard is shared between the Portfolio Management and Certify groups for now.
We use a lightweight system of issue weighting to help with capacity planning, with the knowledge that things take longer than you think. The main focus is on making sure the overall sum of the weights for a milestone is reasonable.
It's OK if an issue takes longer than the weight indicates. The weights are intended to be used in aggregate, and what takes one person a day might take another person a week, depending on their level of background knowledge about the issue. That's explicitly OK and expected.
The weights we use are:
| Weight | Description |
|--------|-------------|
| 1 | Trivial, does not need any testing |
| 2 | Small, needs some testing but nothing involved |
| 3 | Medium, will take some time and collaboration |
| 4 | Substantial, will take significant time and collaboration to finish |
| 5 | Large, will take a major portion of the milestone to finish |
Anything larger than 5 should be broken down if possible.
We're discussing a possible change to the weight scale we use.
We look at recent releases and upcoming availability to determine the weight available for a release.
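As a rough illustration of this process, the sketch below checks a milestone plan against available capacity. The function name, issue names, and numbers are all hypothetical, not real team data or an actual tool we use:

```python
# Hypothetical sketch: compare the sum of planned issue weights against
# the weight available for a release. All figures are illustrative.

def available_weight(recent_velocity: int, availability: float) -> int:
    """Estimate weight available for the next release from recent
    delivery and the team's upcoming availability (0.0 to 1.0)."""
    return round(recent_velocity * availability)

# Planned issues for the milestone, mapped to their weights (1-5).
planned_issues = {"issue-a": 3, "issue-b": 5, "issue-c": 2, "issue-d": 1}

capacity = available_weight(recent_velocity=12, availability=0.75)
planned = sum(planned_issues.values())

print(f"capacity={capacity}, planned={planned}, over={planned > capacity}")
```

The point of the exercise is the aggregate comparison, not the precision of any individual weight.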
Estimating bugs is inherently difficult. The majority of the effort in fixing a bug is finding the cause; only once the cause is known can the fix be accurately estimated. Additionally, velocity is used to measure the amount of new product output, and bug fixes are typically fixes to a feature that was already tracked and weighted previously.
Because of this, we do not weigh bugs during ~"workflow::planning breakdown". If an engineer picks up a bug and determines that there will be a significant level of effort in fixing it (for example, a large migration is needed, or we need to switch state management to Vuex on the frontend), we then will want to prioritize it against feature deliverables. Ping the product manager with this information so they can determine when the work should be scheduled.
To assign weights to issues in a future milestone, we ask team members to continually weight and break down issues in ~"workflow::planning breakdown" that don't have a ~"Breakdown Sufficient" label, especially pieces of work in which they have experience or which belong to their group.
Contributions that add new information or insight are welcome, even if they don't constitute a complete breakdown. When a discussion fails to reach a conclusion in a timely manner, include the PM immediately so they can clarify requirements or cut scope.
Often new complexity is revealed when development starts or as it progresses. This is normal. Team members should re-assess weights when new information becomes clear and alert the PM or EM when delivery within the milestone is at risk.
To weight issues, team members should:
Points of weight delivered by the team over the last three milestones, including a rolling average. This allows for more accurate estimation of what we can deliver in future milestones. Full chart here.
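The rolling average described above can be sketched as follows; the milestone figures are made up for illustration, not real delivery data:

```python
# Illustrative sketch: rolling average of weight delivered over the
# last three milestones. Numbers are hypothetical, not real team data.

def rolling_average(delivered: list[int], window: int = 3) -> float:
    """Average delivered weight over the most recent `window` milestones."""
    recent = delivered[-window:]
    return sum(recent) / len(recent)

delivered_per_milestone = [14, 11, 16, 12]  # oldest to newest
print(rolling_average(delivered_per_milestone))  # averages 11, 16, 12
```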
The Plan:Portfolio Management & Certify Build Board always shows work in the current release, with workflow columns relevant to implementation. Filtering it by ~backend shows issues for backend engineers to work on.
It's OK to not take the top item if you are not confident you can solve it, but please post in #s_plan if that's the case, as this probably means the issue should be better specified.
Everyone at GitLab has the freedom to manage their work as they see fit, because we measure results, not hours. Part of this is the opportunity to work on items that aren't scheduled as part of the regular monthly release. This is mostly a reiteration of items found elsewhere in the handbook, made explicit here:
When you pick something to work on, please:
It is often necessary to specify behaviors for a system or application. Requirements Management is the process by which these behaviors are captured so that there is a clearly defined scope of work.