This document explains the workflow for anyone working with issues at GitLab Inc. For the workflow that applies to everyone, please see PROCESS.md.
Products at GitLab are built using the GitLab Flow.
We have specific rules around code review.
In line with our values of short toes, making two-way-door decisions, and bias for action, anyone can propose to revert a merge request. When deciding whether an MR should be reverted, the following should be true:
Reverting merge requests that add non-functional changes and don't remove any existing capabilities should be avoided in order to prevent designing by committee.
The intent of a revert is never to place blame on the original author. Additionally, it is helpful to inform the original author so they can participate as a DRI on any necessary follow-up actions.
If you notice that pipelines for the `master` branch of GitLab or GitLab FOSS are failing (red) or broken (green as a false positive), returning the build to a passing state takes priority over everything else development-related, since everything we do while tests are broken may break existing functionality or introduce new bugs and security issues.
All tests (unit, integration, and E2E QA) that fail on master are treated as ~"master:broken".
Any test failures or flakiness (either false positive or false negative) causes productivity impediments for all of engineering and our release processes.
If a change causes new test failures, the fix to the test should be made in the same Merge Request.
If the change causes new QA test failures, in addition to fixing the QA tests, the `review-qa-all` job must be run to validate the fix before the Merge Request can be merged.
The cost to fix test failures increases exponentially as time passes, due to the use of merged results pipelines. Auto-deploys, as well as monthly releases and security releases, depend on `gitlab-org/gitlab` `master` being green for tagging and merging of backports.
Our aim should be to keep `master` free from failures, not to fix `master` only after it breaks.
`master` service level objectives
There are two phases for fixing a ~"master:broken" issue, each with a target SLO to clarify the urgency. The resolution phase depends on the completion of the triage phase.
| Phase | Service level objective | DRI |
|-------|-------------------------|-----|
| Triage | 4 hours from the initial master pipeline failure until a DRI is assigned | Engineering Productivity team |
| Resolution | 4 hours from assignment to DRI until issue is closed | Merge request author or team of merge request author |
Additional details about the phases are listed below.
If a ~"master:broken" issue is blocking your team (such as creating a security release), then you should:
Check for an existing ~"master-broken" issue with a DRI.
Ask in `#master-broken` if there's anyone investigating the issue you are looking at.
The Engineering Productivity team is the triage DRI for monitoring `master` pipeline failures, and for the identification and communication of failures. Pipeline failure notifications are posted in `#master-broken` and will be reviewed by the team. These reactions will be applied by the triage DRI to signal current status:
`:eyes:` - signals the triage DRI is investigating a failing pipeline
`:boom:` - signals the pipeline contains a new failure. The triage DRI will create a new ~"master:broken" issue and reply in the thread with a link to the issue.
`:fire_engine:` - signals the pipeline is failing due to a known issue. The triage DRI will reply in the thread with a link to the existing issue(s).
`:retry:` - signals a system failure (e.g., Docker failure) is responsible and a retry has been triggered.
If `master` is failing for a non-flaky reason, create an issue with the following labels:
If `master` is failing for a flaky reason that cannot be reliably reproduced, create an issue with the following labels:
Assign the ~"master:broken" issue to the merge request author if they are available at the moment. If the author is not available, mention the team's Engineering Manager, and seek assistance in the `#development` Slack channel if there is no merge request that caused the failure.
Notify `#frontend` using the Slack Workflow: click the Shortcut lightning bolt icon in the `#master-broken` channel and select "Broadcast Master Broken". Continue the broadcast after the automated message with an `@username FYI` message. Additionally, a message can be posted in `#frontend_maintainers` to get a maintainer to take a look at the fix ASAP.
An example of the reactions for a failure is:
The merge request author of the change that broke master is the resolution DRI. In the event the merge request author is not available, the team of the merge request author will assume the resolution DRI responsibilities. If a DRI has not acknowledged or signaled working on a fix, any developer can take ownership using the reaction guidance below and assume the resolution DRI responsibilities.
Prioritize resolution of ~"master:broken" over new bug/feature work. Resolution options include:
Treat the fix as a ~severity::1 issue. To ensure efficient review of the fix, the merge request should contain only the minimum change needed to fix the failure. Additional refactoring or improvement of the code should be done as a follow-up.
Remove the ~"master:broken" label from the issue and apply the ~"Pick into auto-deploy" label (along with the needed ~"priority::1") to make sure deployments are unblocked.
If the failure on `master` affects any stable branches (e.g., https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25274), open new merge requests directly against the stable branches which are broken, and ping the current release manager in the merge requests to avoid delays in releases / security releases. See the How to fix a broken stable branch guide for more details.
Status updates in `#development` should follow this guidance:
`:eyes:` - applied by the resolution DRI (or backup) to signal acknowledgment
`:construction:` - applied by the resolution DRI to signal that work is in progress on a fix
`:white_check_mark:` - applied by the resolution DRI to signal the fix is complete.
Announce in `#frontend` when the fix is in `master`.
If the `master` build was failing and the underlying problem was quarantined / reverted / a temporary workaround was created, but the root cause still needs to be discovered: create a new issue with the
Evaluate whether the ~"master:broken" failure could have been prevented in the Merge Request pipeline.
Once the resolution DRI announces that `master` is fixed:
Merge requests cannot be merged to `master` until the broken pipeline is fixed and passing again.
This is because we need to try hard to avoid introducing new failures, since it's easy to lose confidence in `master` if it stays red for a long time.
In the rare case where a merge request is urgent and must be merged immediately, team members can follow the process below to have a merge request merged while `master` is broken.
Merging while `master` is broken can only be done for:
Fixing broken `master` issues (we can have multiple broken `master` issues ongoing).
First, ensure the latest pipeline has completed less than 2 hours ago (although it is likely to have failed, due to merged results pipelines).
Next, make a request on Slack:
Post in the `#frontend_maintainers` or `#backend_maintainers` Slack channels (whichever one is more relevant).
When requesting a merge during a broken `master`, optionally add a link to this page in your request.
A maintainer who sees a request to merge during a broken `master` must follow this process. Note: if any part of the process below disqualifies a merge request from being merged during a broken `master`, then the maintainer must inform the requestor why in the merge request (and optionally in the Slack thread of the request).
First, assess the request:
Add the `:eyes:` emoji to the Slack post so other maintainers know it is being assessed. We do not want multiple maintainers to work on fulfilling the request.
Next, ensure that all the following conditions are met:
`gitlab-org/gitlab` (using merged results pipelines).
There is a ~"master:broken" issue for every failure; see the "Triage DRI Responsibilities" steps above for more details.
Next, add a comment to the merge request mentioning that it will be merged during a broken `master`, and link to the ~"master:broken" issue(s). For example:
Merge request will be merged while `master` is broken. Failure in <JOB_URL> happens in `master` and is being worked on in <ISSUE_URL>.
Next, merge the merge request:
The `#master-broken-mirrors` channel was created to remove duplicative notifications from the `#master-broken` channel. It provides a space for Release Managers and the Engineering Productivity team to monitor failures for the following projects:
The `#master-broken-mirrors` channel is to be used to identify unique failures for those projects; flaky failures are not expected to be retried/reacted to in the same way as in `#master-broken`.
Security issues are managed and prioritized by the security team. If you are assigned to work on a security issue in a milestone, you need to follow the Security Release process.
If you find a security issue in GitLab, create a confidential issue mentioning the relevant security and engineering managers, and post about it in `#security`.
If you accidentally push security commits to
gitlab-org/gitlab, we recommend that you:
Notify the release managers in `#releases`. It may be possible to execute a garbage collection (via the Housekeeping task in the repository settings) to remove the commits.
For more information on how the entire process works for security releases, see the documentation on security releases.
When development starts, apply the `workflow::in dev` label to the issue.
When the issue moves to review, apply `workflow::in review`. If multiple people are working on the issue or multiple workflow labels might apply, consider breaking the issue up. Otherwise, default to the workflow label farthest away from completion.
Apply `workflow::verification` to indicate all the development work for the issue has been done and it is waiting to be deployed and verified. We will use this label in cases where the work was requested to be verified by product, OR we determined we need to perform this verification in production.
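The "default to the workflow label farthest away from completion" rule above can be expressed as a tiny helper. This is an illustrative sketch only: the stage ordering is taken from the stages described in this section, and the function is not part of any GitLab tooling.

```python
# Workflow stages, ordered from least complete to most complete
# (ordering assumed from the stages described above).
WORKFLOW_ORDER = [
    "workflow::in dev",
    "workflow::in review",
    "workflow::verification",
]

def default_workflow_label(candidates):
    """Return the candidate workflow label farthest from completion.

    Returns None when no candidate is a known workflow label.
    """
    known = [label for label in candidates if label in WORKFLOW_ORDER]
    if not known:
        return None
    # The lowest index is the earliest stage, i.e. farthest from completion.
    return min(known, key=WORKFLOW_ORDER.index)
```

For example, if both `workflow::in dev` and `workflow::in review` seem to apply, the helper picks `workflow::in dev`.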
For larger issues or issues that contain many different moving parts, you'll likely be working in a team. This team will typically consist of a backend engineer, a frontend engineer, a Product Designer, and a Product Manager.
Avoid adding configuration values in the application settings or in
gitlab.yml. Only add configuration if it is absolutely necessary. If you
find yourself adding parameters to tune specific features, stop and consider
how this can be avoided. Are the values really necessary? Could constants be
used that work across the board? Could values be determined automatically?
See Convention over Configuration
for more discussion.
Start working on things with the highest priority in the current milestone. The priority of items is defined under labels in the repository, but you are able to sort by priority.
After sorting by priority, choose something that you're able to tackle and that falls under your responsibility. That means that if you're a frontend developer, you work on something with the ~frontend label.
To filter very precisely, you could filter all issues for:
Use this link to quickly set the above parameters. You'll still need to filter by the label for your own team.
If you’re in doubt about what to work on, ask your lead. They will be able to tell you.
It's every developer's responsibility to triage and review code contributed by the rest of the community, and to work with contributors to get it ready for production.
Merge requests from the rest of the community should be labeled with the Community contribution label.
When evaluating a merge request from the community, please ensure that a relevant PM is aware of the pending MR by mentioning them.
This should be part of your daily routine. For instance, every morning you could triage new merge requests from the rest of the community that are not yet labeled Community contribution and either review them or ask a relevant person to review them.
Make sure to follow our Code Review Guidelines.
Labels are described in our Contribution guide.
GitLab.com is a very large instance of GitLab Enterprise Edition. It runs release candidates for new releases, and sees a lot of issues because of the amount of traffic it gets. There are several internal tools available for developers at GitLab to get data about what's happening in the production system:
GitLab Inc has to be selective in working on particular issues. We have a limited capacity to work on new things. Therefore, we have to schedule issues carefully.
Product Managers are responsible for scheduling all issues in their respective product areas, including features, bugs, and tech debt. Product Managers alone determine the prioritization, but others are encouraged to influence the PMs' decisions. The UX Lead and Engineering Leads are responsible for allocating people and making sure things are done on time. Product Managers are not responsible for these activities; they are not project managers.
Direction issues are the big, prioritized new features for each release. They are limited to a small number per release so that we have plenty of capacity to work on other important issues, bug fixes, etc.
If you want to schedule an issue with the Seeking community contributions label, please remove the label first.
Any scheduled issue should have a team label assigned, and at least one type label.
To request scheduling an issue, ask the responsible product manager.
We have many more requests for great features than we have capacity to work on.
There is a good chance we’ll not be able to work on something.
Make sure the appropriate labels (such as customer) are applied so every issue is given the priority it deserves.
Teams (Product, UX, Development, Quality) continually work on issues according to their respective workflows.
There is no specified process whereby a particular person should be working on a set of issues in a given time period.
However, there are specific deadlines that should inform team workflows and prioritization.
Suppose we are talking about milestone m that will be shipped in month M (on the 22nd). We have the following deadlines:
M-1, 4th (at least 14 days before milestone m begins)
M-1, 13th (at least 5 days before milestone m begins)
M-1, 16th (at least 1 day before milestone m begins)
M-1, 18th (or next business day, milestone m begins): Kick off! 📣
M-1, 26th: GitLab Bot opens the Group Retrospective issue for the current milestone.
m issues with docs have been merged into master.
m release. See feature flags.
m release. See release timelines.
M, 19th, or
M, 20th, or
M, 22nd: Release Day 🚀
M, 23rd (the day after the release):
m starts. This includes regular and security patch releases.
m issues and merge requests are automatically moved to milestone m+1, with the exception of
M, 24th: Moderator opens the Retrospective planning and execution issue.
M+1, 3rd: Assignees of Group Retrospective issues summarize the discussion, ensure corrective actions are taken and a DRI is assigned to each. Actions related to participation in section-based Retrospective Summaries are taken.
Refer to release post content reviews for additional deadlines.
Note that deployments to GitLab.com are more frequent than monthly major/minor releases on the 22nd. See auto deploy transition guidance for details.
Team members use labels to track issues throughout development. This gives visibility to other developers, product managers, and designers, so that they can adjust their plans during a monthly iteration. An issue should follow these stages:
workflow::in dev: A developer indicates they are developing an issue by applying the `workflow::in dev` label.
workflow::in review: A developer indicates the issue is in code review and UX review by removing the `workflow::in dev` label and applying the `workflow::in review` label.
workflow::verification: A developer indicates that all the development work for the issue has been done and is waiting to be deployed and verified.
When the issue has been verified and everything is working, it can be closed.
At the beginning of each release, we have a kickoff meeting, publicly livestreamed to YouTube. In the call, the Product Development team (PMs, Product Designers, and Engineers) communicates to the rest of the organization which issues are in scope for the upcoming release. The call is structured by product area, with each PM leading their part of the call.
The notes are available in a publicly-accessible Google doc. Refer to the doc for details on viewing the livestream.
The purpose of our retrospective is to help each Product Group, and the entire R&D cost center at GitLab, learn and improve as much as possible from every monthly release.
Each retrospective consists of three parts:
M-1, 26th: GitLab Bot opens the Group Retrospective issue for the current milestone.
M, 21st: Group Retrospectives should be held.
M, 24th: Moderator opens the Retrospective planning and execution issue and communicates a reminder in R&D quad Slack channels.
M+1, 3rd: Participants complete the Retrospective planning and execution issue, add their notes to the retro doc, and suggest and vote on discussion topics.
M+1, 4th: Moderator records the Retrospective Summary video and announces the video and discussion topics.
M+1, 6th: Retrospective Discussion is held (adjusted if M+1, 6th falls on a weekend or is a holiday).
The moderator of each retrospective is responsible for:
The job of a moderator is to remain objective and to focus on guiding conversations forward. The moderator for each retrospective is assigned by the VP of Development in each milestone.
Retrospective planning and execution issue
For each monthly release, a Retrospective planning and execution issue (example) is opened by the moderator to help us coordinate this work.
Create the Retrospective planning and execution issue by selecting the 'product-development-retro' issue template in the 'www-gitlab-com' project.
Title the issue <MILESTONE VERSION #> Team Retrospectives.
Set the due date to 2 days before the Retrospective Discussion to encourage team members to contribute prior to recording of the Retrospective Summary video.
The retro doc is a Google Doc we use to collaborate on for our Retrospective Summary and Retrospective Discussion.
At the end of every release, each team should host their own retrospective. For details on how this is done, see Group Retrospectives.
Note - we are currently conducting an experiment with section-based Retrospective Summaries. During such time we will not be conducting an R&D-wide Retrospective Summary.
Once all Group Retrospectives are completed, each team inputs their learnings into a single publicly-accessible retro doc. The moderator then pre-records a video of the highlights. This video is then announced in the Retrospective planning and execution issue along with the #whats-happening-at-gitlab Slack channel. In line with our value of transparency, we also post this video to our public GitLab Unfiltered channel.
Steps for participants
Steps for the moderator
The Retrospective Discussion is a 25 minute live discussion among participants where we deep dive into discussion topics from our Group Retrospectives (example). In line with our value of transparency, we livestream this meeting to YouTube and monitor chat for questions from viewers. Please check the retro doc for details on joining the livestream.
For each retrospective discussion, we aim to host an interactive discussion covering two discussion topics. We limit this to two topics due to the length of the meeting.
The discussion topics stem from our Group Retrospective learnings and should be applicable to the majority of participants.
Discussion topics are suggested by participants by commenting on the Retrospective planning and execution issue. Participants can vote on these topics by adding a :thumbsup: reaction. The two topics with the most :thumbsup: votes will be used as the discussion topics. If there are not enough votes or if the discussion topics are not relevant to the majority of participants, the moderator can choose other discussion topics.
Steps for participants
On M+1, 4th, begin adding your comments to the retro doc.
Steps for the moderator
M+1, 3rd: take note of which discussion topics have the most votes at this time. If there are not enough votes, or if you deem the discussion topics not relevant to the majority of participants, please choose other discussion topics.
At the end of each retrospective the Engineering Productivity team is responsible for triaging improvement items identified from the retrospective. This is needed for a single owner to be aware of the bigger picture technical debt and backstage work. The actual work can be assigned out to other teams or engineers to execute.
The Moderator for the Retrospective Summary is chosen on a quarterly basis. For FY22 we have selected 4 moderators from across Engineering and Product. The moderators are:
During FY22 Q4 (the 14.4, 14.5, 14.6 Retrospectives) we will conduct an experiment where we perform retrospective summaries at the Section level instead of an R&D-wide retrospective summary. Section level leaders in Product and Development are the DRIs for retrofitting the current retrospective summary process for their section and documenting their process for doing so.
As GitLab has grown, there are now too many layers between a group retrospective and the company-wide retrospective. Performing retrospective summaries at the Section level will increase our rate of learning and encourage broader collaboration between stable counterparts across the R&D organization.
We'll consider this experiment a success if:
While leaders are available in the categories page (and subject to change) - we explicitly call out the DRIs for each section in this experiment.
Discretion is provided to Section leaders on how to conduct a section retrospective discussion. A good starting point would be to follow the current handbook and issue template recommendations for our R&D wide retrospective. Consider creating section versions of the issue template and discussion doc.
Engineering Managers are responsible for capacity planning and scheduling for their respective teams with guidance from their counterpart Product Managers.
To ensure hygiene across Engineering, we run scheduled pipelines to move unfinished work (open issues and merge requests) with the expired milestone to the next milestone, and to apply the ~"missed:x.y" label for the expired milestone.
This is currently implemented as part of our automated triage operations. Additionally, issues with the
~Deliverable label which have a milestone beyond current +1, will have the
~Deliverable label removed.
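The cleanup policy above can be sketched as a pure function: open items on the expired milestone move to the next milestone and gain the ~"missed:x.y" label. This is a simplified illustration of the policy only, not the actual triage-ops implementation, and the dictionary shape used for a work item is hypothetical.

```python
def roll_over(issue, expired_milestone, next_milestone):
    """Apply the milestone-cleanup policy to one open work item (sketch).

    `issue` is a dict with "state", "milestone", and "labels" keys;
    this shape is illustrative, not the GitLab API schema.
    """
    if issue["state"] == "opened" and issue["milestone"] == expired_milestone:
        issue["milestone"] = next_milestone
        # Record the miss against the expired milestone, e.g. "missed:16.9".
        issue["labels"].append(f"missed:{expired_milestone}")
    return issue
```

Closed items, and items already on a later milestone, are left untouched; only unfinished work on the expired milestone is rolled forward and labeled.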
We keep the milestone open for 3 months after it's expired, based on the release and maintenance policy.
The milestone cleanup is currently applied to the following groups and projects:
Milestone closure is in the remit of the Delivery team. At any point in time a release might need to be created for an active milestone, and once that is no longer the case, the Delivery team closes the milestone.
The milestone cleanup will happen one weekday before the 22nd (release day).
The following is observed to account for the weekends:
These actions will be applied to open issues:
~"missed-deliverable" will also be added whenever
Milestones are closed when the Delivery team no longer needs to create a backport release for a specific milestone.
Both the monthly kickoff and retrospective meetings are publicly streamed to the GitLab Unfiltered YouTube Channel. The EBA for Engineering is the moderator and responsible for initiating the Public Stream, or for designating another moderator if the EBA is unable to attend.
When working in GitLab (and in particular, the GitLab.org group), use group labels and group milestones as much as you can. It is easier to plan issues and merge requests at the group level, and it exposes ideas across projects more naturally. If you have a project label, you can promote it to a group label. This will merge all project labels with the same name into the one group label. The same is true for promoting project milestones to group milestones.
We definitely don't want our technical debt to grow faster than our code base. To prevent this from happening, we should consider not only the impact of the technical debt but also its contagion. How big and how fast is this problem going to grow over time? Is it likely that a bad piece of code will be copy-pasted for a future feature? In the end, the amount of resources available is always less than the amount of technical debt to address.
To help with prioritization and decision-making process here, we recommend thinking about contagion as an interest rate of the technical debt. There is a great comment from the internet about it:
You wouldn't pay off your $50k student loan before first paying off your $5k credit card and it's because of the high interest rate. The best debt to pay off first is one that has the highest loan payment to recurring payment reduction ratio, i.e. the one that reduces your overall debt payments the most, and that is usually the loan with the highest interest rate.
Technical debt which might span groups, or fall in the gaps between them, should be brought up for a globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team. Additional avenues for addressing technical debt outside of product groups are Rapid Action issues and working groups.
Sometimes there is an intentional decision to deviate from the agreed-upon MVC, which sacrifices the user experience. When this occurs, the Product Designer creates a follow-up issue and labels it UX debt to address the UX gap in subsequent releases.
For the same reasons as technical debt, we don't want UX debt to grow faster than our code base.
UI polish issues are visual improvements to the existing user interface, touching mainly aesthetic aspects of the UI that are guided by Pajamas foundations. UI polish issues generally capture improvements related to color, typography, iconography, and spacing. We apply the
UI polish label to these issues. UI polish issues don't introduce functionality or behavior changes to a feature.
Open merge requests sometimes become idle (not updated by a human in more than a month). Once a month, engineering managers will receive a "Merge requests requiring attention" triage issue that includes all (non-WIP/Draft) MRs for their group, and use it to determine if any action should be taken (such as nudging the author/reviewer/maintainer). This assists in getting merge requests merged in a reasonable amount of time, which we track with the Open MR Review Time (OMRT) and Open MR Age (OMA) performance indicators.
Open merge requests may also have other properties that indicate that the engineering manager should research them and potentially take action to improve efficiency. One key property is the number of threads, which, when high, may indicate a need to update the plan for the MR or to consider a synchronous discussion. Another property is the number of pipelines, which, when high, may indicate a need to revisit the plan for the MR. These metrics are not yet included in an automatically created triage issue. However, they are available in a Sisense dashboard. Engineering managers are encouraged to check this dashboard for their group periodically (once or twice a month) in the interim.
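The idle-MR rule above ("not updated by a human in more than a month", excluding WIP/Draft MRs) can be sketched as a simple filter. The merge-request shape and the 31-day threshold here are assumptions for illustration, not the actual triage-ops query.

```python
from datetime import datetime, timedelta, timezone

# Assumed reading of "more than a month".
IDLE_AFTER = timedelta(days=31)

def needs_attention(mr, now):
    """Return True if a non-draft MR has been idle long enough to triage.

    `mr` is a dict with "draft" (bool) and "last_human_update"
    (timezone-aware datetime); this shape is illustrative only.
    """
    return (not mr["draft"]) and (now - mr["last_human_update"] > IDLE_AFTER)
```

A monthly job could apply this filter per group and collect the matches into the triage issue described above.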
Security is our top priority. Our Security Team is raising the bar on security every day to protect users' data and make GitLab a safe place for everyone to contribute. There are many lines of code, and Security Teams need to scale. That means shifting security left in the Software Development LifeCycle (SDLC). Each team has an Application Security Stable Counterpart who can help you, and you can find more secure development help in the
#sec-appsec Slack channel.
Being able to start the security review process earlier in the software development lifecycle means we will catch vulnerabilities earlier, and mitigate identified vulnerabilities before the code is merged. You should know when and how to proactively seek an Application Security Review. You should also be familiar with our Secure Coding Guidelines.
We fix the obvious security issues before every merge, thereby scaling the security review process. Our workflow includes a check and validation by the reviewers of every merge request, enabling developers to act on identified vulnerabilities before merging. As part of that process, developers are also encouraged to reach out to the Security Team to discuss the issue at that stage, rather than later on, when mitigating vulnerabilities becomes more expensive. After all, security is everyone's job. See also our Security Paradigm.
From time to time, there are occasions when the engineering team must act quickly in response to urgent issues. This section describes how the engineering team handles certain kinds of such issues.
Not everything is urgent. See below for a non-exclusive list of things that are in-scope and not in-scope. As always, use your experience and judgment, and communicate with others.
A bi-weekly performance refinement session is held by the Development and QE teams jointly to raise awareness and foster wider collaboration about high-impact performance issues. A high impact issue has a direct measurable impact on GitLab.com service levels or error budgets.
The Performance Refinement issue board is reviewed in this refinement exercise.
Milestone or the label workflow::ready for development is missing.
Milestone and the label workflow::ready for development.
The infradev process is established to identify issues requiring priority attention in support of SaaS availability and reliability. These escalations are intended to be primarily asynchronous, as timely triage and attention is required. In addition to primary management through the issues, any gaps, concerns, or critical triage is handled in the weekly GitLab SaaS Infrastructure meeting.
The infradev issue board is the primary focus of this process.
Apply the Infradev label. Assess Severity and Priority and apply the corresponding labels as appropriate.
Apply the Infradev label to the new issues.
Apply Priority labels to the new issues. The labels should correspond to the importance of the follow-on work.
(To be completed primarily by Development Engineering Management)
Issues are nominated to the board through the inclusion of the label infradev, and will appear on the infradev board.
Milestone or the label workflow::ready for development is missing.
Milestone and the label workflow::ready for development.
Issues with the ~infradev, ~severity::1, ~priority::1, and ~"production request" labels applied require immediate resolution.
~infradev issues requiring a ~"breaking change" should not exist. If a current ~infradev issue requires a breaking change, then it should be split into two issues. The first issue should be the immediate ~infradev work that can be done under current SLOs. The second issue should be the ~"breaking change" work that needs to be completed at the next major release in accordance with handbook guidance. Agreement from the development DRI as well as the infrastructure DRI should be documented on the issue.
Additionally, an automated status report is generated in the gitlab-org/infradev-reports issue tracker. A new report is opened weekly and updated regularly. The report categorizes each infradev issue according to several criteria, and can help with the triage and prioritization process.
Triage of infradev Issues is desired to occur asynchronously. There is also a section of the Weekly GitLab SaaS meeting which aims to address anything requiring synchronous discussion or which hasn't been triaged. This meeting has time constraints and many of the participants may not have a detailed understanding of the problems being presented. For maximum efficiency, please ensure the following, so that your infradev issues can gain maximum traction.
Avoid applying the infradev label to architectural problems, vague solutions, or requests to investigate an unknown root cause.