Our milestone planning is handled asynchronously as much as possible. Planning discussions are fluid and ongoing; in general, they do not follow a predefined monthly schedule.
Top priority features from upcoming release milestones go through Planning Breakdown with Product Managers (PMs), Product Designers (UX), Engineering Managers (EMs) & Engineers from the respective groups. Weekly group-level synchronous meetings facilitate this discussion. The list of issues to be discussed is provided by the PM at least 1 day prior to the meeting. The expectation is that all attendees have reviewed the issues prior to the start of the meeting. Attendees should add the carrot 🥕 emoji to signify that an issue has been reviewed in advance.
Questions to be answered:
If the answer is “No” to either of these questions, discussion continues with the PM to improve the team's understanding of the request. If necessary, the discussion continues asynchronously in the Epic or Issue and is brought back to a future weekly meeting.
If the answer is "Yes" to these questions, the team estimates whether or not the issue can be delivered in a single iteration (ignoring any other work that may be in that same iteration). If it's determined that the issue under discussion cannot be delivered within a single iteration, the team works with the PM to break it into multiple MVC Issues that:

- can each be delivered within a single iteration,
- are independent "slices" of value that can be used by a customer (so no mocked UIs or backend-only work that is inaccessible), and
- when all delivered, completely fulfill the original issue's requirements.
EM output: Once all of the above requirements have been satisfied, the EMs assign a frontend and a backend engineer as respective DRIs to create Implementation Issues under the MVC epic(s). The Design issue created by UX is also closed at this point by the EM.
Engineering output: Frontend and backend DRIs create implementation issues following the Implementation template available in the gitlab-org project. Once they are done, they unassign themselves and move the issues to the ~"workflow::refinement" state.

Issues in the ~"workflow::refinement" state are either assigned by EMs to individual engineers for refinement, or are assigned randomly by the triage bot based on the assign-refinement policy (Threat Insights policy). Note that Epics are slightly different.
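For illustration only, the random-assignment step can be sketched as follows. This is a hypothetical model, not the triage bot's actual implementation; the engineer names and issue fields are placeholders, and the real pool and rules live in the assign-refinement policy configuration.

```python
import random

# Hypothetical pool of engineers eligible for refinement assignment
# (placeholder names; the real pool comes from the triage policy).
ENGINEERS = ["alice", "bob", "carol"]

def assign_refinement(issue, engineers=ENGINEERS, rng=random):
    """Assign an unassigned refinement issue to a random engineer,
    loosely mimicking the triage bot's assign-refinement behavior."""
    if issue.get("assignee") is None:
        issue["assignee"] = rng.choice(engineers)
    return issue

issue = {"iid": 123, "labels": ["workflow::refinement"], "assignee": None}
assigned = assign_refinement(issue)
```

An issue that already has an assignee is left untouched, which mirrors the intent that EM-assigned issues are not reshuffled by the bot.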
Engineers assigned to refine issues are encouraged to ask questions and push back on PM if issues lack the information and/or designs required for successful refinement and execution.
We assign issues for refinement to ensure we have focus on the highest-priority items, as determined by Product Management. This is not an assignment to work on the issue.
Engineers assigned to refine an issue should move it to the ~"workflow::ready for dev" state and unassign themselves once refinement is complete, or leave it in ~"workflow::refinement" and assign it to their EM if for any reason refinement could not be completed. In either case, confirm the issue has the appropriate work type classification.
By the week prior to the completion of the current milestone, the scope of the next release is finalized by EMs and PMs.
~Deliverable labels are applied to issues we are committing to deliver. It's up to the EM's discretion which issues receive this label, which is used in the calculation of our Say/Do Ratio. Factors include: confidence that the issue will be completed in the milestone, completion of issues rolled over from previous milestones, and commitments to other groups or stakeholders.
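As a minimal sketch of the Say/Do Ratio mentioned above: it is the share of committed (~Deliverable) issues that actually shipped in the milestone. The dictionary field names here are assumptions for illustration, not an API.

```python
def say_do_ratio(issues):
    """Ratio of closed ~Deliverable issues to all ~Deliverable issues
    in a milestone. Returns None when nothing was committed."""
    committed = [i for i in issues if "Deliverable" in i["labels"]]
    if not committed:
        return None
    done = [i for i in committed if i["state"] == "closed"]
    return len(done) / len(committed)

milestone = [
    {"labels": ["Deliverable"], "state": "closed"},
    {"labels": ["Deliverable"], "state": "opened"},
    {"labels": ["Deliverable"], "state": "closed"},
    {"labels": [], "state": "closed"},  # not committed, so ignored
]
ratio = say_do_ratio(milestone)  # 2 of 3 committed issues closed
```

Issues without the label are excluded entirely, which is why applying ~Deliverable selectively matters: only labeled issues count for or against the ratio.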
Issues with the ~Deliverable label in the ~"workflow::ready for dev" state have been confirmed to be in the correct priority order.
Backlog refinement is the most important step to ensure an issue is ready to move into development and that the issue will match everyone's expectations when the work is delivered.
The goal of the refinement process is to ensure an issue is ready to be worked on by doing the following:
~backend label. Otherwise, remove any ~backend label, assign any relevant labels, and you are done.
~frontend label. Otherwise, remove any ~frontend label, assign any relevant labels, and you are done.
~"workflow::ready for development".
~"workflow::ready for development".
Anyone should be able to read a refined issue's description and understand what is being solved, how it is solving the problem, and the technical plan for implementing the issue.
In order for someone to understand the issue and its implementation, they should not have to read through all the comments. The important bits should be captured in the description as the single source of truth.
Note the following differences when refining bugs:
The Security Release Process for Developers can be daunting for first-timers. As part of refinement, ask for a volunteer to act as a "Security Issue Release Buddy".
An issue should fail refinement if it cannot be worked on without additional information or decisions to be made. To fail an issue:
Weights are used as a rough order of magnitude to help signal to the rest of the team how much work is involved. Weights should be considered an artifact of the refinement process, not the purpose of the refinement process.
It is perfectly acceptable if items take longer than the initial weight. We do not want to inflate weights, as velocity is more important than predictability and weight inflation over-emphasizes predictability.
We do not add weights to bugs as this would be double-counting points. When our delivery contains bugs, the velocity should go down so we have time to address any systemic quality problems.
We are using the Fibonacci sequence for issue weights. Definitions of each numeric value are associated with the frontend-weight & backend-weight labels. Anything larger than 5 should be broken down whenever possible.
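The weighting rule above can be expressed as a small sketch (the hint strings are illustrative, not prescribed wording):

```python
# Valid issue weights drawn from the Fibonacci sequence; values
# above 5 signal that the issue should be broken down if possible.
FIBONACCI_WEIGHTS = {1, 2, 3, 5, 8, 13}

def check_weight(weight):
    """Return a refinement hint for a proposed issue weight."""
    if weight not in FIBONACCI_WEIGHTS:
        return "invalid: use a Fibonacci value"
    if weight > 5:
        return "consider breaking this issue down"
    return "ok"
```

For example, a weight of 5 passes as-is, 8 is valid but flags the issue for breakdown, and 4 is rejected because it is not a Fibonacci value.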
Adding a backend-weight label to an issue is optional, but ensure you set the Weight property on the issue during refinement.
Examples of when it may be appropriate to set a weight label instead of / as well as setting the issue weight include:
A list of the steps and the parts of the code that will need to get updated to implement this feature. The implementation plan should also call out any responsibilities for other team members or teams. Example.
The goal of the implementation plan is to spur critical analysis of the issue and have the engineer refining the issue think through which parts of the application will get touched. The implementation plan also lets other engineers review the issue and call out any areas of the application that might have dependencies or might have been overlooked.
A list of the steps that will need to be followed to verify this feature. The verification steps should also include additional test cases that should be covered. Example.
The purpose of the issue verification procedures is to aid in better understanding the expected change to the application once the issue is implemented. The verification steps also let other engineers evaluate the issue and identify any application components that may have dependencies, or may have been overlooked and require further testing.
The issue verification should be done by someone other than the MR author².
Engineers apply the ~"workflow::verification" label and wait until they receive notification that their work has been deployed to staging via the release issue email.
~"workflow::verification" label, so it's available with GitLab Next turned off), the engineer should verify again, leave a comment summarizing the testing that was completed, and unassign themselves from the issue. Also provide a link to a project or page, if applicable.
Issues in the ~"workflow::verification" state are assigned randomly by the triage bot, based on the verification policy, to an applicable team engineer. This engineer should then additionally verify the issue.
Apply the ~"workflow::complete" label and close the issue.
When a team-member takes some time off, it is important that their work is still being followed up on if needed. We want to make sure that any MR that lands in staging and production environments while we are out gets proper attention and is verified by a counterpart. Therefore, when getting close to our time-off period, we should do the following:
Draft status. This ensures that the MR won't be merged accidentally without a clear DRI to follow up on it.
Keep in mind that, while we strongly recommend following this process when taking some time off, it might not be relevant all the time. For example, if our time-off period is going to be short and/or our active MRs are minor enough, it might make sense to ignore these recommendations and follow up when we're back.
In any case, it's always a good idea to give the company-wide PTO policy another read before going on leave: A GitLab Team Member's Guide to Time Off.
As an Epic is ready to move to the refinement stage, the EMs assign someone as the DRI for each required tech stack. This may happen sooner, during planning breakdown.
As the DRI for an Epic, the engineer is not responsible for executing all the work but they are responsible for:
The DRI may choose to refine and work on the issues they created but they're not expected to deliver the whole Epic on their own.
Q: Should discovery issues be refined?
A: Yes. Discovery issues should be refined, but some of the steps above may not be relevant; use good judgement to apply the process. The purpose of refining a discovery issue is to make sure that the scope of the discovery is clear, that the expected output is defined, and that the prerequisites for the discovery are known and completed. Discovery issues can have a habit of dragging out or not producing actionable steps; the refinement process should lock down what needs to be answered in the discovery process.
Q: If an issue has both frontend and backend work how should I weigh it?
A: Issues that require both frontend and backend work are usually broken into multiple implementation issues. An exception is when a single engineer agrees to work on both tech stacks.
Q: What's the meaning of the emoji in issues?
A: We use them to communicate certain steps in our process.
A spike doesn't directly add value to users, so it shouldn't contribute to our velocity. The information delivered by a spike is what will be used to deliver direct value to users. ↩
When the engineer who wrote the code is the only one verifying it, the chance of defects reaching production increases: when that engineer tests in a new environment, they are likely to try the same attempts to break it as they did while writing the code, which does not add value. If a person who did not write the code verifies the resolution in a deployed environment, they come in with a different perspective and are more likely to cover more test cases. ↩