This document explains the workflow for anyone working with issues in GitLab Inc. For the workflow that applies to everyone, please see PROCESS.md.
Products at GitLab are built using the GitLab Flow.
We have specific rules around code review.
In line with our values of short toes, making two-way-door decisions, and bias for action, anyone can propose to revert a merge request. When deciding whether an MR should be reverted, the following should be true:
- The change introduced a ~severity::1 or ~severity::2 issue. See severity labels.

Reverting merge requests that add non-functional changes and don't remove any existing capabilities should be avoided in order to prevent designing by committee.
The intent of a revert is never to place blame on the original author. Additionally, it is helpful to inform the original author so they can participate as a DRI on any necessary follow-up actions.
Broken `master`
If you notice that pipelines for the `master` branch of GitLab or GitLab FOSS are failing (red) or broken (green as a false positive), returning the build to a passing state takes priority over everything else development-related, since everything we do while tests are broken may break existing functionality, or introduce new bugs and security issues.
What is a broken `master`?
All tests (unit, integration, and E2E QA) that fail on `master` are treated as ~"master:broken".
Any test failures or flakiness (either false positive or false negative) causes productivity impediments for all of engineering and our release processes.
If a change causes new test failures, the fix to the test should be made in the same Merge Request.
If the change causes new QA test failures, in addition to fixing the QA tests, the `package-and-qa` or `review-qa-all` job must be run to validate the fix before the Merge Request can be merged.
The cost to fix test failures increases exponentially as time passes. Our aim should be to keep `master` free from failures, not to fix `master` only after it breaks.
Broken `master` service level objectives
There are two phases for fixing a ~"master:broken" issue, each with a target SLO to clarify the urgency. The resolution phase is dependent on the completion of the triage phase.
Phase | Service level objective | DRI |
---|---|---|
Triage | 4 hours from the initial `master` pipeline failure until a ~"master:broken" issue is assigned | Engineering Productivity team |
Resolution | 4 hours from assignment to DRI until issue is closed | Merge request author or team of merge request author |
Additional details about the phases are listed below.

Triage DRI Responsibilities
The Engineering Productivity team is the triage DRI for monitoring `master` pipeline failures, and for the identification and communication of ~"master:broken" issues.
Failing `master` pipelines are reported in the #master-broken Slack channel and will be reviewed by the team. These reactions will be applied by the triage DRI to signal current status:
- :eyes: - signals the triage DRI is investigating a failing pipeline.
- :boom: - signals the pipeline contains a new failure. The triage DRI will create a new ~"master:broken" issue and reply in the thread with a link to the issue.
- :fire_engine: - signals the pipeline is failing due to a known issue. The triage DRI will reply in the thread with a link to the existing issue(s).
- :retry: - signals a system failure (e.g., Docker failure) is responsible and a retry has been triggered.

When triaging a failure, the triage DRI should:
- If `master` is failing for a non-flaky reason, create an issue with the following labels: ~"master:broken", ~"Engineering Productivity", ~priority::1, ~severity::1 (see the API sketch after this list).
- If `master` is failing for a flaky reason that cannot be reliably reproduced, create an issue with the following labels: ~"failure::flaky-test", ~"Engineering Productivity", ~priority::2, ~severity::2.
- Assign the ~"master:broken" issue to the merge request author if they are available at the moment. If the author is not available, mention the team's Engineering Manager and seek assistance in the #development Slack channel.
- Seek assistance in the #development Slack channel if there is no merge request that caused the ~"master:broken".
- Announce the ~"master:broken" in #development, #backend, and #frontend, mentioning the merge request author with an @username FYI message. Additionally, a message can be posted in #backend_maintainers or #frontend_maintainers to get a maintainer to take a look at the fix ASAP.
Resolution DRI Responsibilities
The merge request author of the change that broke `master` is the resolution DRI. In the event the merge request author is not available, the team of the merge request author will assume the resolution DRI responsibilities. If a DRI has not acknowledged or signaled working on a fix, any developer can take ownership using the reaction guidance below and assume the resolution DRI responsibilities.

The resolution DRI should:
- Prioritize resolving the ~"master:broken" issue over new bug/feature work. Resolution options include fixing the failure, reverting the offending change, or quarantining the failing test and tracking the follow-up work in a ~priority::1 ~severity::1 issue.
- If the failure turns out to be flaky, remove the ~"master:broken" label from the issue and apply ~"failure::flaky-test".
- If the broken `master` affects any auto-deploy, add the relevant ~"Pick into auto-deploy" label.
- If the broken `master` affects any stable branches (e.g. https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25274), open new merge requests directly against the stable branches which are broken and ping the current release manager in the merge requests to avoid delays in releases / security releases. Optionally, post a message in #releases.

Reactions to the broken `master` announcement in #development should follow this guidance:
- :eyes: - applied by the resolution DRI (or backup) to signal acknowledgment.
- :construction: - applied by the resolution DRI to signal that work is in progress on a fix.
- :white_check_mark: - applied by the resolution DRI to signal the fix is complete.

Additionally, the resolution DRI should:
- Announce in #development, #backend, and #frontend when the fix is in `master`.
- If the `master` build was failing and the underlying problem was quarantined / reverted / a temporary workaround was created but the root cause still needs to be discovered, create a new issue with the ~"master:needs-investigation" label.
- Evaluate whether the ~"master:broken" failure could have been prevented in the Merge Request pipeline.

Once the resolution DRI announces that `master` is fixed:
When master
is red, we need to try hard to avoid introducing new failures,
since it's easy to lose confidence if it stays red for a long time.
Most merge requests don't need to be merged immediately, so the priority should
always be maintaining the stability of master
over keeping a high throughput.
In the rare cases where `master` is broken for more than 4 hours (from the notification in #master-broken), maintainers are allowed to merge with a broken pipeline if all the following conditions are met:
- The failure has been present on `master` for at least 4 hours.
- There is an existing ~"master:broken" issue for it; see the "Triage DRI Responsibilities" steps above for more details.
- Before asking a maintainer to review, or before merging the merge request, add a comment mentioning that the failure happens in `master`, and post a reference to the issue. For instance:
Failure in <JOB_URL> happens in `master` and is being worked on in #XYZ.
Whether the merge request's pipeline is failing or not, if `master` is red:
- Verify that the failures also happen on `master`, and post a link to the ~"master:broken" issue before clicking the red "Merge" button.
- Check how far behind `master` the source branch is. If it's more than 100 commits behind `master`, ask the author to rebase it before merging (a quick way to check this is sketched below).

This reduces the chance of introducing new failures, and also acts to slow (but not stop) the rate of change in `master`, helping us to make it green again.
Security issues are managed and prioritized by the security team. If you are assigned to work on a security issue in a milestone, you need to follow the Security Release process.
If you find a security issue in GitLab, create a confidential issue mentioning the relevant security and engineering managers, and post about it in #security.
If you accidentally push security commits to gitlab-org/gitlab, we recommend that you:
- Contact the release managers in #releases. It may be possible to execute a garbage collection (via the Housekeeping task in the repository settings) to remove the commits.

For more information on how the entire process works for security releases, see the documentation on security releases.
When working on an issue:
- Add the workflow::in dev label to the issue.
- Remove `Closes #issue_id` from the MR description to prevent automatic closing of the issue after merging. Be careful not to use other keywords in the MR description, given the default closing pattern can also automatically close issues (see the sketch after this list).
- When the MR is ready for review, change the label to workflow::in review. If multiple people are working on the issue or multiple workflow labels might apply, consider breaking the issue up. Otherwise, default to the workflow label farthest away from completion.
- Potentially, a reviewer offers feedback and assigns back to the author.
- The author addresses the feedback and this goes back and forth until all reviewers approve the MR.
- After approving, the reviewer in each category unassigns themselves and assigns the suggested maintainer in their category.
- Maintainer reviews take place, with any back and forth as necessary, and attempt to resolve any open threads.
- The last maintainer to approve the MR follows the Merging a merge request guidelines.
- (Optionally) Change the workflow label of the issue to workflow::verification, to indicate all the development work for the issue has been done and it is waiting to be deployed and verified. We will use this label in cases where the work was requested to be verified by product OR we determined we need to perform this verification in production.

Be sure to read general guidelines about issues and merge requests.
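To help catch an unintended auto-close before submitting, an MR description can be scanned for closing keywords. This is a minimal sketch using a simplified approximation of the closing pattern (not GitLab's exact default regex); the sample description is hypothetical.

```python
import re

# Simplified approximation of issue-closing keywords followed by an issue reference.
CLOSING_KEYWORDS = re.compile(
    r"\b(close[sd]?|closing|fix(e[sd]|ing)?|resolv(e[sd]?|ing)|implement(s|ed|ing)?)\s+#\d+",
    re.IGNORECASE,
)


def closing_references(description: str) -> list[str]:
    """Return any 'Closes #123'-style references found in an MR description."""
    return [match.group(0) for match in CLOSING_KEYWORDS.finditer(description)]


print(closing_references("Related to #1234. Fixes #5678."))  # ['Fixes #5678']
```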
For larger issues or issues that contain many different moving parts, you'll likely be working in a team. This team will typically consist of a backend engineer, a frontend engineer, a Product Designer, and a Product Manager.
In the spirit of collaboration and efficiency, members of teams should feel free to discuss issues directly with one another while being respectful of others' time.
Avoid adding configuration values in the application settings or in
`gitlab.yml`. Only add configuration if it is absolutely necessary. If you
find yourself adding parameters to tune specific features, stop and consider
how this can be avoided. Are the values really necessary? Could constants be
used that work across the board? Could values be determined automatically?
See Convention over Configuration
for more discussion.
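As a generic illustration of this principle (not GitLab code; the names below are hypothetical), a value can often be derived from the environment instead of introducing a new setting:

```python
import os


def default_worker_count() -> int:
    """Derive concurrency from available CPUs rather than adding a config knob."""
    return max(2, os.cpu_count() or 1)


print(default_worker_count())
```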
Start working on things with the highest priority in the current milestone. The priority of items is defined under labels in the repository, but you are able to sort by priority.
After sorting by priority, choose something that you’re able to tackle and that falls under your responsibility. That means that if you’re a frontend developer, you work on something with the label frontend.
To filter very precisely, you could filter all issues by the label for your area, for example: CI/CD, Discussion, Quality, frontend, or Platform.
Use this link to quickly set the above parameters. You'll still need to filter by the label for your own team.
If you’re in doubt about what to work on, ask your lead. They will be able to tell you.
It's every developer's responsibility to triage and review code contributed by the rest of the community, and to work with contributors to get it ready for production.
Merge requests from the rest of the community should be labeled with the Community Contribution label.
When evaluating a merge request from the community, please ensure that a relevant PM is aware of the pending MR by mentioning them.
This should be part of your daily routine. For instance, every morning you could triage new merge requests from the rest of the community that are not yet labeled Community Contribution and either review them or ask a relevant person to review them.
Make sure to follow our Code Review Guidelines.
Labels are described in our Contribution guide.
GitLab.com is a very large instance of GitLab Enterprise Edition. It runs release candidates for new releases, and sees a lot of issues because of the amount of traffic it gets. There are several internal tools available for developers at GitLab to get data about what's happening in the production system.
There is extensive monitoring publicly available for GitLab.com. For more on this and related tools, see the monitoring handbook.
More details on GitLab Profiler are also found in the monitoring performance handbook.
GitLab Inc has to be selective in working on particular issues. We have a limited capacity to work on new things. Therefore, we have to schedule issues carefully.
Product Managers are responsible for scheduling all issues in their respective product areas, including features, bugs, and tech debt. Product managers alone determine the prioritization, but others are encouraged to influence the PM's decisions. The UX Lead and Engineering Leads are responsible for allocating people and making sure things are done on time. Product Managers are not responsible for these activities; they are not project managers.
Direction issues are the big, prioritized new features for each release. They are limited to a small number per release so that we have plenty of capacity to work on other important issues, bug fixes, etc.
If you want to schedule an Accepting merge requests issue, please remove the label first.
Any scheduled issue should have a team label assigned, and at least one type label.
To request scheduling an issue, ask the responsible product manager.
We have many more requests for great features than we have capacity to work on.
There is a good chance we’ll not be able to work on something.
Make sure the appropriate labels (such as customer) are applied so every issue is given the priority it deserves.
Teams (Product, UX, Engineering) continually work on issues according to their respective workflows.
There is no specified process whereby a particular person should be working on a set of issues in a given time period.
However, there are specific deadlines that should inform team workflows and prioritization.
Suppose we are talking about milestone m that will be shipped in month M (on the 22nd). We have the following deadlines:
- M-1, 4th (at least 14 days before milestone m begins):
- M-1, 13th (at least 5 days before milestone m begins): issues are assigned to milestone m; label deliverable applied.
- M-1, 16th (at least 1 day before milestone m begins):
- M-1, 18th (or next business day, milestone m begins): Kick off! 📣 Work on milestone m begins.
- M-1, 26th: GitLab Bot opens Team Retrospective issue for the current milestone.
- M, 17th:
  - All milestone m issues with docs have been merged into master.
  - Feature flag changes intended for the m release are completed. See feature flags.
  - Final work for the m release follows the release timelines. See release timelines.
  - Milestone m is expired.
- M, 19th:
- M, 19th, or M, 20th, or M, 21st:
- M, 22nd: Release Day 🚀
- M, 23rd (the day after the release):
  - Patch release work for milestone m starts. This includes regular and security patch releases.
  - Open milestone m issues and merge requests are automatically moved to milestone m+1, with the exception of ~security issues.
- M, 24th: Moderator opens the Retrospective planning and execution issue.
- M, 24th to M+1, 3rd: Participants complete the Retrospective planning and execution issue, add their notes to the retro doc, and suggest and vote on discussion topics.
- M, 26th:
- M+1, 4th: Moderator records the Retrospective Summary video and announces the video and discussion topics.
- M+1, 6th: Retrospective Discussion is held.

Refer to release post content reviews for additional deadlines.
Note that deployments to GitLab.com are more frequent than monthly major/minor releases on the 22nd. See auto deploy transition guidance for details.
Team members use labels to track issues throughout development. This gives visibility to other developers, product managers, and designers, so that they can adjust their plans during a monthly iteration. An issue should follow these stages:
- workflow::in dev: A developer indicates they are developing an issue by applying the in dev label.
- workflow::in review: A developer indicates the issue is in code review and UX review by removing the in dev label, and applying the in review label.
- workflow::verification: A developer indicates that all the development work for the issue has been done and it is waiting to be deployed and verified.

When the issue has been verified and everything is working, it can be closed.
At the beginning of each release, we have a kickoff meeting, publicly livestreamed to YouTube. In the call, the Product Development team (PMs, Product Designers, and Engineers) communicate with the rest of the organization which issues are in scope for the upcoming release. The call is structured by product area with each PM leading their part of the call.
The notes are available in a publicly-accessible Google doc. Refer to the doc for details on viewing the livestream.
The purpose of our retrospective is to help our team at GitLab learn and improve as much as possible from every monthly release.
Each retrospective consists of three parts: Team Retrospectives, a Retrospective Summary, and a Retrospective Discussion.
Timeline
- M-1, 26th: GitLab Bot opens Team Retrospective issue for the current milestone.
- M, 19th: Team Retrospectives should be held.
- M, 24th: Moderator opens the Retrospective planning and execution issue.
- M, 24th to M+1, 3rd: Participants complete the Retrospective planning and execution issue, add their notes to the retro doc, and suggest and vote on discussion topics.
- M+1, 4th: Moderator records the Retrospective Summary video and announces the video and discussion topics.
- M+1, 6th: Retrospective Discussion is held. If M+1, 6th falls on a weekend or is a holiday, the discussion is rescheduled accordingly.

Moderator
The moderator of each retrospective is responsible for remaining objective and guiding conversations forward. The moderator for each retrospective is assigned by the VP of Development in each milestone.
Retrospective planning and execution issue
For each monthly release, a Retrospective planning and execution issue (example) is opened by the moderator to help us coordinate this work.
Retro doc
The retro doc is a Google Doc we use to collaborate on our Retrospective Summary and Retrospective Discussion.
At the end of every release, each team should host their own retrospective. For details on how this is done, see Team Retrospectives.
The Retrospective Summary is a short pre-recorded video which summarizes the learnings across all Team Retrospectives (example video, example presentation).
Once all Team Retrospectives are completed, each team inputs their learnings into a single publicly-accessible retro doc. The moderator then pre-records a video of the highlights. This video is then announced in the Retrospective planning and execution issue along with the #whats-happening-at-gitlab Slack channel. In line with our value of transparency, we also post this video to our public GitLab Unfiltered channel.
Steps for participants
Steps for the moderator
The Retrospective Discussion is a 25-minute live discussion among participants where we deep dive into discussion topics from our Team Retrospectives (example). In line with our value of transparency, we livestream this meeting to YouTube and monitor chat for questions from viewers. Please check the retro doc for details on joining the livestream.
Discussion Topics
For each retrospective discussion, we aim to host an interactive discussion covering two discussion topics. We limit this to two topics due to the length of the meeting.
The discussion topics stem from our Team Retrospective learnings and should be applicable to the majority of participants.
Discussion topics are suggested by participants by commenting on the Retrospective planning and execution issue. Participants can vote on these topics by adding a :thumbsup: reaction. The two topics with the most :thumbsup: votes will be used as the discussion topics. If there are not enough votes or if the discussion topics are not relevant to the majority of participants, the moderator can choose other discussion topics.
Meeting Agenda
Steps for participants
- Suggest and vote on discussion topics in the Retrospective planning and execution issue by M+1, 3rd.
- From M+1, 4th, begin adding your comments to the retro doc.

Steps for the moderator
- Review the suggested discussion topics on M+1, 3rd. Take note of which discussion topics have the most votes at this time. If there are not enough votes or if you deem the discussion topics not relevant to the majority of participants, please choose other discussion topics.

At the end of each retrospective, the Engineering Productivity team is responsible for triaging improvement items identified from the retrospective. This is needed for a single owner to be aware of the bigger-picture technical debt and backstage work. The actual work can be assigned out to other teams or engineers to execute.
Engineering Managers are responsible for capacity planning and scheduling for their respective teams with guidance from their counterpart Product Managers.
To ensure hygiene across Engineering, we run scheduled pipelines to move
unfinished work (open issues and merge requests) with the expired milestone to
the next milestone, and apply the ~"missed:x.y" label for the expired milestone.
Additionally, apply the ~"missed-deliverable" label whenever ~"Deliverable" is
present.
This is currently implemented as part of our automated triage operations. Additionally, issues with the ~Deliverable label which have a milestone beyond current +1 will have the ~Deliverable label removed.
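For illustration, the rollover could look roughly like the sketch below against the GitLab REST API. This is a minimal sketch only, assuming a token in `GITLAB_TOKEN` and the `requests` library; the group, milestone title, and milestone ID are placeholders, and the real implementation lives in our automated triage operations.

```python
import os

import requests

GITLAB_API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def roll_over_milestone(group: str, expired_title: str, next_milestone_id: int) -> None:
    """Move open issues off an expired milestone and label them missed:x.y (first page only)."""
    issues = requests.get(
        f"{GITLAB_API}/groups/{group}/issues",
        headers=HEADERS,
        params={"milestone": expired_title, "state": "opened", "per_page": 100},
        timeout=30,
    )
    issues.raise_for_status()
    for issue in issues.json():
        requests.put(
            f"{GITLAB_API}/projects/{issue['project_id']}/issues/{issue['iid']}",
            headers=HEADERS,
            data={
                "milestone_id": next_milestone_id,
                "add_labels": f"missed:{expired_title}",
            },
            timeout=30,
        ).raise_for_status()


# Example (placeholder values): roll_over_milestone("gitlab-org", "16.5", 1234567)
```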
We keep the milestone open for 3 months after it's expired, based on the release and maintenance policy.
The milestone cleanup is currently applied to the following groups and projects:
Milestone closure is in the remit of the Delivery team. At any point in time a release might need to be created for an active milestone, and once that is no longer the case, the Delivery team closes the milestone.
The milestone cleanup is run on:
- M, 19th if M, 22nd is a Monday.
- M, 20th or M, 21st, whichever is a Friday, if M, 22nd is on a weekend.
- M, 21st if M, 22nd is any other day.

These actions will be applied to open issues:
- Label ~"missed:x.y".
- ~"missed-deliverable" will also be added whenever ~"Deliverable" is present.

Milestones are closed when the Delivery team no longer needs to create a backport release for a specific milestone.
Both the monthly kickoff and retrospective meetings are publicly streamed to the GitLab Unfiltered YouTube Channel. The EBA for Engineering is the moderator and is responsible for initiating the Public Stream or designating another moderator if the EBA is unable to attend.
When working in GitLab (and in particular, the GitLab.org group), use group labels and group milestones as much as you can. It is easier to plan issues and merge requests at the group level, and exposes ideas across projects more naturally. If you have a project label, you can promote it to a group label. This will merge all project labels with the same name into the one group label. The same is true for promoting group milestones.
We definitely don't want our technical debt to grow faster than our code base. To prevent this from happening, we should consider not only the impact of the technical debt but also its contagion. How big and how fast is this problem going to be over time? Is it likely a bad piece of code will be copy-pasted for a future feature? In the end, the amount of resources available is always less than the amount of technical debt to address.
To help with prioritization and decision-making process here, we recommend thinking about contagion as an interest rate of the technical debt. There is a great comment from the internet about it:
You wouldn't pay off your $50k student loan before first paying off your $5k credit card and it's because of the high interest rate. The best debt to pay off first is one that has the highest loan payment to recurring payment reduction ratio, i.e. the one that reduces your overall debt payments the most, and that is usually the loan with the highest interest rate.
Technical debt is prioritized like other technical decisions in product groups by product management.
Technical debt which might span groups, or fall in the gaps between them, should be brought up for a globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team. Additional avenues for addressing technical debt outside of product groups are Rapid Action issues and working groups.
Sometimes features release to production without meeting design and Pajamas specifications. We create UX debt issues to capture these discrepancies. For the same reasons as technical debt, we don't want UX debt to grow faster than our code base.
We apply the UX debt label to these issues, and it is prioritized like other technical decisions in product groups by product management. You can see the number of UX debt issues on the UX Debt dashboard.
As with technical debt, UX debt should be brought up for globally optimized prioritization in retrospectives or directly with the appropriate member of the Product Leadership team.
Open merge requests sometimes become idle (not updated by a human in more than a month). Once a month, engineering managers will receive an idle MR triage issue that includes all (non-WIP/Draft) MRs for their group and use it to determine if any action should be taken (such as nudging the author/reviewer/maintainer). This assists in getting merge requests merged in a reasonable amount of time (which we track as the metric MTTR: Mean Time to Merge).
Open merge requests may also have other properties that indicate that the engineering manager should research them and potentially take action to improve efficiency. One key property is the number of threads, which, when high, may indicate a need to update the plan for the MR or that a synchronous discussion should be considered. Another property is the number of pipelines, which, when high, may indicate a need to revisit the plan for the MR. These metrics are not yet included in an automatically created triage issue. However, they are available in a Sisense dashboard. Engineering managers are encouraged to check this dashboard for their group periodically (once or twice a month) in the interim.
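For engineering managers who want a quick ad-hoc check between triage issues, idle MRs can also be listed via the GitLab API. A minimal sketch, assuming a token in `GITLAB_TOKEN` and the `requests` library; the group path is a placeholder and pagination is ignored.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

GITLAB_API = "https://gitlab.com/api/v4"
cutoff = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()

# Open, non-draft merge requests in the group that have not been updated in ~a month.
response = requests.get(
    f"{GITLAB_API}/groups/gitlab-org/merge_requests",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    params={"state": "opened", "wip": "no", "updated_before": cutoff, "per_page": 100},
    timeout=30,
)
response.raise_for_status()
for mr in response.json():
    print(mr["updated_at"], mr["web_url"])
```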
Security is our top priority. Our Security Team is raising the bar on security every day to protect users' data and make GitLab a safe place for everyone to contribute. There are many lines of code, and Security Teams need to scale. That means shifting security left in the Software Development Lifecycle (SDLC). Being able to start the security review process earlier in the software development lifecycle means we will catch vulnerabilities earlier, and mitigate identified vulnerabilities before the code is merged. We are fixing the obvious security issues before every merge, and therefore scaling the security review process. Our workflow includes a check and validation by the reviewers of every merge request, thereby enabling developers to act on identified vulnerabilities before merging. As part of that process, developers are also empowered to reach out to the Security Team to discuss the issue at that stage, rather than later on, when mitigating vulnerabilities becomes more expensive. After all, security is everyone's job. See also our Security Paradigm.
From time to time, there are occasions when the engineering team must act quickly in response to urgent issues. This section describes how the engineering team handles certain kinds of such issues.
Not everything is urgent. See below for a non-exclusive list of things that are in-scope and not in-scope. As always, use your experience and judgment, and communicate with others.
A bi-weekly performance refinement session is held by the Development and QE teams jointly to raise awareness and foster wider collaboration about high-impact performance issues. A high-impact issue has a direct, measurable impact on GitLab.com service levels or error budgets.
The Performance Refinement issue board is reviewed in this refinement exercise.
- Issues are nominated for refinement by adding the label performance-refinement.
- Issues need attention when the Milestone or the label workflow::ready for development is missing.
- Refinement involves applying a Milestone and the label workflow::ready for development.
The infradev process is established to identify Issues requiring priority attention in support of SaaS availability and reliability. These escalations are intended to primarily be asynchronous, as timely triage and attention are required. In addition to primary management through the Issues, any gaps, concerns, or critical triage is handled in the weekly GitLab SaaS Infrastructure meeting.
The infradev issue board is the primary focus of this process.
(To be completed primarily by Development Engineering Management)
Issues are nominated to the board through the inclusion of the label infradev and will appear on the infradev board.
- Issues need triage when the Milestone or the label workflow::ready for development is missing.
- Triage an issue by applying a Milestone and the label workflow::ready for development.

Issues with ~infradev ~severity::1 ~priority::1 ~production request labels applied require immediate resolution.
Triage of infradev Issues is desired to occur asynchronously. There is also a section of the Weekly GitLab SaaS meeting which aims to address anything requiring synchronous discussion or which hasn't been triaged. This meeting has time constraints and many of the participants may not have a detailed understanding of the problems being presented. For maximum efficiency, please ensure the following, so that your infradev issues can gain maximum traction:
- Do not apply the infradev label to architectural problems, vague solutions, or requests to investigate an unknown root-cause.
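As a convenience for asynchronous triage, issues still needing a Milestone or the workflow label can be surfaced with the GitLab API. A minimal sketch, assuming a token in `GITLAB_TOKEN` and the `requests` library; the group path is a placeholder and pagination is ignored.

```python
import os

import requests

GITLAB_API = "https://gitlab.com/api/v4"

# Open ~infradev issues in the group.
response = requests.get(
    f"{GITLAB_API}/groups/gitlab-org/issues",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    params={"labels": "infradev", "state": "opened", "per_page": 100},
    timeout=30,
)
response.raise_for_status()
for issue in response.json():
    needs_milestone = issue.get("milestone") is None
    needs_workflow = "workflow::ready for development" not in issue.get("labels", [])
    if needs_milestone or needs_workflow:
        print(issue["web_url"])
```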