
Engineering Workflow

This document explains the workflow for anyone working with issues at GitLab Inc. For the workflow that applies to everyone, please see GitLab Flow below.

GitLab Flow

Products at GitLab are built using the GitLab Flow.

We have specific rules around code review.

Broken master

If you notice that pipelines for the master branch of GitLab or GitLab FOSS are failing (red) or broken (green as a false positive), returning the build to a passing state takes priority over all other development work, since anything we do while tests are broken may break existing functionality or introduce new bugs and security issues.

What is a broken master?

All tests (unit, integration, and E2E QA) that fail on master are treated as ~"master:broken".

Any test failure or flakiness (either a false positive or a false negative) impedes productivity for all of engineering and our release processes. If a change causes new test failures, the fix to the test should be made in the same Merge Request. If the change causes new QA test failures, then in addition to fixing the QA tests, the package-and-qa or review-qa-all job must be run to validate the fix before the Merge Request can be merged.

The cost to fix test failures increases exponentially as time passes. Our aim should be to keep master free from failures, not to fix master only after it breaks.

Broken master service level objectives

There are two phases for fixing a ~"master:broken" issue, each with a target SLO to clarify the urgency. The resolution phase depends on the completion of the triage phase.

Phase      | Service level objective                                                                    | DRI
Triage     | 12 hours from the initial master pipeline failure until an assigned ~"master:broken" issue | Engineering Productivity team
Resolution | 12 hours from assignment to the DRI until the issue is closed                              | Merge request author or the author's team

Additional details about the phases are listed below.

Triage broken master

The Engineering Productivity team is the triage DRI, responsible for monitoring master pipeline failures and for identifying and communicating ~"master:broken" issues.

Triage DRI Responsibilities

  1. Monitor
    • Pipeline failures are sent to #master-broken and will be reviewed by the team. These reactions will be applied by the triage DRI to signal current status:
      • :eyes: - signals the triage DRI is investigating a failing pipeline
      • :boom: - signals the pipeline contains a new failure. The triage DRI will create a new ~"master:broken" issue and reply in the thread with a link to the issue.
      • :fire_engine: - signals the pipeline is failing due to a known issue. The triage DRI will reply in the thread with a link to the existing issue(s).
      • :retry: - signals a system failure (e.g., Docker failure) is responsible and a retry has been triggered.
  2. Identification
    • Create an issue based on:
      • Master failing for a non-flaky reason - create an issue labelled as ~"master:broken" with the highest priority and severity ~P1 ~S1.
      • When the master build had a flaky failure that cannot be reliably reproduced: apply the ~"master:flaky" label
    • Identify the merge request that introduced the failures
    • Assign the issue to the author of the merge request that caused the ~"master:broken" failure, if they are available. If the author is not available, mention the team's Engineering Manager and seek assistance in the #development Slack channel.
      • Ask for assistance in the #development Slack channel if there is not a Merge Request that caused the ~"master:broken"
  3. Communication
    • Communicate ~"master:broken" in #development, #backend and #frontend channels.
  4. (Optional) Pre-resolution
    • If the triage DRI believes that there's an easy resolution, either by:
      • Reverting the particular merge request, or
      • Making a quick fix (for example, a one-line change or a few similar simple changes across a few lines).
      In either case, the triage DRI can create a merge request and assign it to the resolution DRI. The resolution DRI will determine the steps to resolve and may close the merge request.

An example of the reactions for a failure is:

[Screenshot: pipeline failures example]

Resolution of broken master

The merge request author of the change that broke master is the resolution DRI. In the event the merge request author is not available, the team of the merge request author will assume the resolution DRI responsibilities. If a DRI has not acknowledged or signaled working on a fix, any developer can take ownership using the reaction guidance below and assume the resolution DRI responsibilities.

Resolution DRI Responsibilities

  1. Prioritize resolving ~"master:broken" over new bug/feature work. Resolution options include:
    • Revert the merge request which caused the broken master
    • Create a new merge request to fix the failure if revert is not possible or would introduce additional risk. This should be treated as a ~P1 ~S1 issue.
    • Quarantine the failing test if you can confirm that it is flaky (e.g. it wasn't touched recently and passed after retrying the failed job).
      • Remove the ~"master:broken" label from the issue and apply ~"master:flaky"
  2. Reactions by the resolution DRI in #development should follow this guidance:
    • :eyes: - applied by the resolution DRI (or backup) to signal acknowledgment
    • :construction: - applied by the resolution DRI to signal that work is in progress on a fix
    • :white_check_mark: - applied by the resolution DRI to signal the fix is complete.
  3. Communicate in #development, #backend and/or #frontend when fix is in master to rebase any open branches.
  4. When the master build was failing and the underlying problem was quarantined, reverted, or temporarily worked around, but the root cause still needs to be discovered: create a new issue with the ~"master:needs-investigation" label
  5. Create an issue for the Engineering Productivity team describing how the ~"master:broken" could have been prevented in the Merge Request pipeline.
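The quarantine option above can be sketched in code. This is a simplified, hypothetical stand-in for the test metadata GitLab actually uses (the real setup relies on RSpec metadata); the spec names and placeholder issue URL are illustrative only:

```ruby
# Simplified sketch of quarantining: each spec carries optional metadata
# linking it to its flaky-test issue, and the runner skips quarantined
# specs unless explicitly asked to run them.
SPECS = [
  { name: "renders the widget",   quarantine: nil },
  { name: "uploads the artifact", quarantine: "<FLAKY_TEST_ISSUE_URL>" },
].freeze

def runnable_specs(specs, run_quarantined: false)
  specs.reject { |spec| spec[:quarantine] && !run_quarantined }
end
```

By default only the non-quarantined spec runs; a dedicated CI job can pass `run_quarantined: true` to keep exercising quarantined specs without blocking master.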

Maintaining throughput during broken master

When master is red, we need to try hard to avoid introducing new failures, since it's easy to lose confidence if it stays red for a long time. However, it's wasteful and impractical to completely stop development. We need to compromise between the two priorities.

It's ok to merge a merge request with a failing pipeline if the following conditions are met:

  1. The failures also happen on master
  2. Only a small number of the tests have failed
  3. They are not directly related to functionality touched by the merge request
  4. There is:
    1. An issue labelled ~"master:broken" for it, see the "As a developer" steps above for more details
    2. That issue is already assigned to someone
    3. The assignee is actively working on it (i.e. they are not on PTO, at a conference, etc.)

Before merging, add a comment mentioning that the failure happens in master, and post a reference to the issue. For instance:

Failure in <JOB_URL> happens in `master` and is being worked on in #XYZ, merging.

Whether the pipeline is failing or not, if master is red, check how far behind the source branch is. If it's more than 100 commits behind, ask for it to be brought up to date before merging. This reduces the chance of introducing new failures, and also acts to slow (but not stop) the rate of change in master, helping us to make it green again.

Security Issues

Security issues are managed and prioritized by the security team. If you are assigned to work on a security issue in a milestone, you need to follow the Security Release process.

If you find a security issue in GitLab, create a confidential issue mentioning the relevant security and engineering managers, and post about it in #security.

If you accidentally push security commits to gitlab-org/gitlab, we recommend that you:

  1. Delete the relevant branch ASAP
  2. Inform a release manager in #releases. It may be possible to execute a garbage collection (via the Housekeeping task in the repository settings) to remove the commits.

For more information on how the entire process works for security releases, see the documentation on security releases.

Basics
  1. Start working on an issue you’re assigned to. If you’re not assigned to any issue, find the issue with the highest priority and relevant label you can work on, and assign it to yourself. You can use this query, which sorts by priority for the started milestones, and filter by the label for your team.
  2. If you need to schedule something or prioritize it, apply the appropriate labels (see Scheduling issues).
  3. If you are working on an issue that touches on areas outside of your expertise, be sure to mention someone in the other group(s) as soon as you start working on it. This allows others to give you early feedback, which should save you time in the long run.
  4. When you start working on an issue:
    • Add the workflow::In dev label to the issue.
    • Create a merge request (MR) by clicking the Create merge request button in the issue. This creates an MR with the labels, milestone, and title of the issue, and relates the newly created MR to the issue.
    • Remove the Closes #issue_id from the MR description, to prevent auto closing of the issue after merging.
    • Assign the MR to yourself.
    • Work on the MR until it is ready, it meets GitLab's definition of done, and the pipeline succeeds.
    • Edit the description and click on the Remove the WIP: prefix from the title button.
    • Assign it to the suggested reviewer(s) from Reviewer Roulette. If there are reviewers for multiple categories, for example: frontend, backend and database, assign all of them. Alternatively, assign someone who specifically needs to review. When assigning, also @mention them in the comments, requesting a review.
    • Unassign yourself.
    • Change the workflow label of the issue to workflow::In review. If multiple people are working on the issue or multiple workflow labels might apply, consider breaking the issue up. Otherwise, default to the workflow label farthest away from completion.
    • Potentially, a reviewer offers feedback and assigns back to the author.
    • The author addresses the feedback and this goes back and forth until all reviewers approve the MR.
    • After approving, the reviewer in each category unassigns themselves and assigns the suggested maintainer in their category.
    • The maintainer review takes place, with any back and forth as necessary, and the maintainer attempts to resolve any open threads.
    • The last maintainer to approve the MR, merges it.
    • (Optionally) Change the workflow label of the issue to workflow::verification, to indicate all the development work for the issue has been done and it is waiting to be deployed and verified. We will use this label in cases where the work was requested to be verified by product OR we determined we need to perform this verification in production.
  5. You are responsible for the issues assigned to you. This means they have to ship with the milestone they're associated with. If you are not able to do this, you have to communicate it early to your manager and other stakeholders (e.g. the product manager, other engineers working on dependent issues). In teams, the team is responsible for this (see Working in Teams). If you are uncertain, err on the side of overcommunication. It's always better to communicate doubts than to wait.
  6. You (and your team, if applicable) are responsible for:
  7. Once a release candidate has been deployed to the staging environment, please verify that your changes work as intended. We have seen issues where bugs did not appear in development but showed in production (e.g. due to CE-EE merge issues).

Be sure to read general guidelines about issues and merge requests.

Working in Teams

For larger issues or issues that contain many different moving parts, you'll likely be working in a team. This team will typically consist of a backend engineer, a frontend engineer, a Product Designer, and a product manager.

  1. Teams have a shared responsibility to ship the issue in the planned release.
    1. If the team suspects that they might not be able to ship something in time, the team should escalate / inform others as soon as possible. A good start is informing your manager.
    2. It's generally preferable to ship a smaller iteration of an issue than to ship something a release later.
  2. Consider starting a Slack channel for a new team, but remember to write all relevant information in the related issue(s). You don't want people to have to read two threads instead of one, and Slack channels are not open to the greater GitLab community.
  3. If an issue entails frontend and backend work, consider separating the frontend and backend code into separate MRs and merge them independently under feature flags. This will ensure frontend/backend engineers can work and deliver independently.
    1. It's important to note that even though the code is merged behind a feature flag, it should still be production ready and continue to meet our definition of done.
    2. A separate MR containing the integration, documentation (if applicable) and removal of the feature flags should be completed in parallel with the backend and frontend MRs, but should only be merged when both the frontend and backend MRs are on the master branch.
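A minimal sketch of the feature-flag pattern described above, assuming a simplified stand-in for GitLab's `Feature.enabled?` helper (the real helper is backed by a database-backed flag store, and the flag name here is hypothetical):

```ruby
# Toy feature-flag registry; flags default to off, so code merged behind
# a flag stays dormant in production until the flag is flipped on.
module Feature
  FLAGS = { my_new_widget: false }

  def self.enable(name)
    FLAGS[name] = true
  end

  def self.enabled?(name)
    FLAGS.fetch(name, false)
  end
end

# Backend and frontend changes can each merge independently behind the
# same flag; the integration MR removes the flag once both are in master.
def render_widget
  Feature.enabled?(:my_new_widget) ? "new widget" : "legacy widget"
end
```

Until the flag is enabled, `render_widget` keeps serving the legacy path, so partially merged work cannot affect users.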

In the spirit of collaboration and efficiency, members of teams should feel free to discuss issues directly with one another while being respectful of others' time.

Convention over Configuration

Avoid adding configuration values in the application settings or in gitlab.yml. Only add configuration if it is absolutely necessary. If you find yourself adding parameters to tune specific features, stop and consider how this can be avoided. Are the values really necessary? Could constants be used that work across the board? Could values be determined automatically? See Convention over Configuration for more discussion.

Choosing Something to Work On

Start working on things with the highest priority in the current milestone. The priority of items is defined under labels in the repository, but you are also able to sort by priority.

After sorting by priority, choose something that you’re able to tackle and falls under your responsibility. That means that if you’re a frontend developer, you work on something with the label frontend.

To filter very precisely, you could filter all issues for:

Use this link to quickly set the above parameters. You'll still need to filter by the label for your own team.

If you’re in doubt about what to work on, ask your lead. They will be able to tell you.

Triaging and Reviewing Code from the rest of the Community

It's every developer's responsibility to triage and review code contributed by the rest of the community, and to work with contributors to get it ready for production.

Merge requests from the rest of the community should be labeled with the Community Contribution label.

When evaluating a merge request from the community, please ensure that a relevant PM is aware of the pending MR by mentioning them.

This should be part of your daily routine. For instance, every morning you could triage new merge requests from the rest of the community that are not yet labeled Community Contribution and either review them or ask a relevant person to do so.

Make sure to follow our Code Review Guidelines.

Workflow Labels

Labels are described in our Contribution guide.

Working with GitLab.com

GitLab.com is a very large instance of GitLab Enterprise Edition. It runs release candidates for new releases, and sees a lot of issues because of the amount of traffic it gets. There are several internal tools available for developers at GitLab to get data about what's happening in the production system:

Performance Data

There is extensive monitoring publicly available for GitLab.com. For more on this and related tools, see the monitoring handbook.

More details on GitLab Profiler are also found in the monitoring performance handbook.

Error Reporting

Feature Flags

If you've built feature flags into your code, be sure to read about how to use the feature flag to test a feature on GitLab.com.

Scheduling Issues

GitLab Inc has to be selective in working on particular issues. We have a limited capacity to work on new things. Therefore, we have to schedule issues carefully.

Product Managers are responsible for scheduling all issues in their respective product areas, including features, bugs, and tech debt. Product managers alone determine the prioritization, but others are encouraged to influence the PM's decisions. The UX Lead and Engineering Leads are responsible for allocating people and making sure things are done on time. Product Managers are not responsible for these activities; they are not project managers.

Direction issues are the big, prioritized new features for each release. They are limited to a small number per release so that we have plenty of capacity to work on other important issues, bug fixes, etc.

If you want to schedule an Accepting merge requests issue, please remove the label first.

Any scheduled issue should have a team label assigned, and at least one type label.

Requesting Something to be Scheduled

To request scheduling an issue, ask the responsible product manager.

We have many more requests for great features than we have capacity to work on. There is a good chance we’ll not be able to work on something. Make sure the appropriate labels (such as customer) are applied so every issue is given the priority it deserves.

Product Development Timeline

Teams (Product, UX, Engineering) continually work on issues according to their respective workflows. There is no specified process whereby a particular person should be working on a set of issues in a given time period. However, there are specific deadlines that should inform team workflows and prioritization. Suppose we are talking about milestone m that will be shipped in month M (on the 22nd). We have the following deadlines:

Refer to release post due dates for additional deadlines.

Note that deployments to GitLab.com are more frequent than the monthly major/minor releases on the 22nd. See auto deploy transition guidance for details.

Updating Issues Throughout Development

Team members use labels to track issues throughout development. This gives visibility to other developers, product managers, and designers, so that they can adjust their plans during a monthly iteration. An issue should follow these stages:

When the issue has been verified and everything is working, it can be closed.

Kickoff
At the beginning of each release, we have a kickoff meeting, publicly livestreamed to YouTube. In the call, the Product Development team (PMs, Product Designers, and Engineers) communicate with the rest of the organization which issues are in scope for the upcoming release. The call is structured by product area with each PM leading their part of the call.

The notes are available in a publicly-accessible Google doc. Refer to the doc for details on viewing the livestream.

Retrospective
After each release, we have a retrospective meeting, publicly livestreamed to YouTube. We discuss what went well, what went wrong, and what we can improve for the next release.

The format for the retrospective is as follows. The notes for the retrospective are kept in a publicly-accessible Google doc. In order to keep the call on time and to make sure we leave ample room to discuss how we can improve, the moderator may move the meeting forward with the timing indicated:

  1. How we improved since last month. 2 minutes. The moderator will review the improvements we identified in the last retrospective and discuss progress on those items.
  2. What went well this month. 5 minutes. Teams are encouraged to celebrate the ways in which we exceeded expectations either individually or as a team.
  3. What went wrong this month. 5 minutes. Teams are encouraged to call out areas where we made mistakes or otherwise didn't meet our expectations as a team.
  4. How can we improve? 18 minutes. Teams are encouraged to discuss the lessons we learned in this release and how we can use those learnings to improve. Any action items should be captured in a GitLab issue so they can receive adequate attention before the next release.

The purpose of the retrospective is to help Engineering at GitLab learn and improve as much as possible from every monthly release. In line with our value of transparency, we livestream the meeting to YouTube and monitor chat for questions from viewers. Please check the retrospective notes for details on joining the livestream.

Triaging retrospective improvements

At the end of each retrospective, the Engineering Productivity team is responsible for triaging improvement items identified from the retrospective. This ensures a single owner is aware of the bigger-picture technical debt and backstage work. The actual work can be assigned to other teams or engineers to execute.

Milestone Cleanup

Engineering Managers are responsible for capacity planning and scheduling for their respective teams with guidance from their counterpart Product Managers.

To ensure hygiene across Engineering, we run scheduled pipelines to move unfinished work (open issues and merge requests) with an expired milestone to the next milestone, and to apply the ~"missed:x.y" label for the expired milestone. Additionally, the ~"missed-deliverable" label is applied whenever ~"Deliverable" is present.

This is currently implemented as part of our automated triage operations. Additionally, issues with the ~Deliverable label which have a milestone beyond current +1, will have the ~Deliverable label removed.
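The cleanup rule can be sketched as a pure function over issue data. This is an illustrative sketch, not the actual triage-ops implementation; the field names and milestone numbers are hypothetical:

```ruby
# Move an unfinished item off an expired milestone: retarget it to the
# next milestone, label it ~"missed:x.y", and add ~"missed-deliverable"
# when it carried the ~"Deliverable" label. Closed items are untouched.
def clean_up(item, expired:, next_milestone:)
  return item unless item[:state] == "opened" && item[:milestone] == expired

  item[:milestone] = next_milestone
  item[:labels] << "missed:#{expired}"
  item[:labels] << "missed-deliverable" if item[:labels].include?("Deliverable")
  item
end
```

In the real setup this logic runs from scheduled pipelines against the GitLab API rather than over in-memory hashes.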

We keep the milestone open for 3 months after it's expired, based on the release and maintenance policy.

The milestone cleanup is currently applied to the following groups and projects:

Milestone cleanup schedule

Kickoff and Retrospective Livestream Instructions

Before the meeting starts, remind people who plan to speak to join the Google Hangout earlier, since there is a 50 user limit.

Several minutes before the scheduled meeting time, follow the livestreaming instructions to start a Google Hangout using the Now setting. Paste the Google Hangout invite link in the Google doc.

At the scheduled meeting time, start broadcasting live to YouTube. Begin the meeting.

Use Group Labels and Group Milestones

When working in GitLab (and in particular, the group), use group labels and group milestones as much as you can. It is easier to plan issues and merge requests at the group level, and it exposes ideas across projects more naturally. If you have a project label, you can promote it to a group label. This will merge all project labels with the same name into the one group label. The same is true for promoting project milestones to group milestones.

Technical debt

We definitely don't want our technical debt to grow faster than our code base. To prevent this from happening, we should consider not only the impact of the technical debt but also its contagion. How big is this problem going to be, and how fast will it grow over time? Is it likely that a bad piece of code will be copy-pasted for a future feature? In the end, the amount of resources available is always less than the amount of technical debt to address.

To help with prioritization and decision-making process here, we recommend thinking about contagion as an interest rate of the technical debt. There is a great comment from the internet about it:

You wouldn't pay off your $50k student loan before first paying off your $5k credit card and it's because of the high interest rate. The best debt to pay off first is one that has the highest loan payment to recurring payment reduction ratio, i.e. the one that reduces your overall debt payments the most, and that is usually the loan with the highest interest rate.

Security is everyone's responsibility

Security is our top priority. Our Security Team is raising the bar on security every day to protect users' data and make GitLab a safe place for everyone to contribute. There are many lines of code, and Security Teams need to scale. That means shifting security left in the Software Development Life Cycle (SDLC). Starting the security review process earlier in the software development life cycle means we catch vulnerabilities earlier and mitigate them before the code is merged. We fix the obvious security issues before every merge, and thereby scale the security review process. Our workflow includes a check and validation by the reviewers of every merge request, enabling developers to act on identified vulnerabilities before merging. As part of that process, developers are also empowered to reach out to the Security Team to discuss the issue at that stage, rather than later on, when mitigating vulnerabilities becomes more expensive. After all, security is everyone's job. See also our Security Paradigm.

Rapid Engineering Response

From time to time, the engineering team must act quickly in response to urgent issues. This section describes how the engineering team handles certain kinds of such issues.


Not everything is urgent. See below for a non-exclusive list of things that are in-scope and not in-scope. As always, use your experience and judgment, and communicate with others.


  1. Person requesting Rapid Engineering Response creates an issue supplying all known information and applies priority and severity (or security severity and priority) to the best of their ability.
  2. Person requesting Rapid Engineering Response raises the issue to their own manager and the subject matter domain engineering manager (or their delegate if OOO).
    1. In case a specific group cannot be determined, raise the issue to the Director of Engineering (or their delegate if OOO) of the section.
    2. In case a specific section cannot be determined, raise the issue to the Sr. Director of Development (or their delegate if OOO).
  3. The engineering sponsor (subject matter Manager, Director, and/or Sr. Director) invokes all stakeholders of the subject matter as a rapid response task force to determine the best route of resolution:
    1. Engineering manager(s)
    2. Product Management
    3. QE
    4. UX
    5. Docs
    6. Security
    7. Support
    8. Distribution engineering manager
    9. Delivery engineering manager (Release Management)
  4. Adjust priority and severity or security severity and priority if necessary, and work collaboratively on the determined resolution.

Availability and Performance Grooming

To address high-impact availability and performance issues in a timely manner, the Infrastructure, Development, and QE teams jointly hold a weekly grooming session to triage issues for prioritization and planning with the Product team.


There are two issue boards being reviewed in this grooming exercise.

  1. Infra/Dev Triage.
  2. Performance Grooming.


  1. To participate in the weekly grooming, ask your engineering director to forward the invite of Availability & Performance Grooming meeting which is at 16:30 UTC (summer) or 17:30 UTC (winter) every Tuesday. Here is the meeting agenda.
  2. To nominate issues to either of the boards above:
    1. Infra/Dev Triage: use the label infradev.
    2. Performance Grooming: use the label performance-grooming.
  3. For the issues under the Open column:
    1. An engineering manager will be assigned if either the Milestone or the label workflow::ready for development is missing.
    2. Engineering manager brings assigned issue(s) to the Product Manager for prioritization and planning.
    3. Engineering manager unassigns themselves once the issue is planned for an iteration, i.e. associated with a Milestone and the label workflow::ready for development.