
Issue Triage

GitLab believes in Open Development, and we encourage the community to file issues and open merge requests for our projects on GitLab.com. Their contributions are valuable, and we should handle them as effectively as possible. A central part of this is triage - the process of categorizing issues according to type and severity.

Any GitLab team-member can triage issues. Keeping the number of un-triaged issues low is essential for maintainability, and is our collective responsibility. Consider triaging a few issues around your other responsibilities, or scheduling some time for it on a regular basis.

Currently, the Quality Department triages all new issues in the main GitLab project via the newly created unlabelled issues triage report.

Triage levels

We define two levels of triage.

Partial Triage

An issue is considered partially triaged when:

Complete Triage

An issue is considered completely triaged when:


Priority

Priority labels help us define when a ~bug fix should be completed. Priority determines how quickly a defect must be addressed; if there are multiple defects, priority decides which has to be fixed immediately and which can wait. This label documents the planned timeline and urgency, which is measured against our target SLO for delivering ~bug fixes.

| Label | Meaning | Target SLO (currently only applies to ~bug and ~security defects) |
| ----- | ------- | ------------------------------------------------------------------ |
| ~P1 | Urgent Priority | The current release + potentially immediate hotfix (30 days) |
| ~P2 | High Priority | The next release (60 days) |
| ~P3 | Medium Priority | Within the next 3 releases (approx. one quarter or 90 days) |
| ~P4 | Low Priority | Anything outside the next 3 releases (more than one quarter or 120 days) |
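As an illustration, the SLO windows above can be encoded as a small lookup that turns a priority label into a target fix-by date. The day counts come from the table; the helper function itself is hypothetical, not part of any GitLab tooling.

```python
from datetime import date, timedelta

# Target SLO windows, in days, per priority label (from the table above).
SLO_DAYS = {"P1": 30, "P2": 60, "P3": 90, "P4": 120}

def target_fix_date(priority_label: str, triaged_on: date) -> date:
    """Return the date by which a ~bug with this priority should be fixed."""
    return triaged_on + timedelta(days=SLO_DAYS[priority_label])
```

For example, a ~P1 bug triaged on 2024-01-01 would have a target fix date of 2024-01-31.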


Severity

Severity labels help us clearly communicate the impact of a ~bug on users. The impact can have multiple facets; the table below is a guideline.

For Availability, please refer to the Availability Prioritization section in the handbook.

| Type of ~bug | ~S1 - Blocker | ~S2 - Critical | ~S3 - Major | ~S4 - Low |
| ------------ | ------------- | -------------- | ----------- | --------- |
| Functionality | Unusable feature with no workaround, user is blocked | Broken feature, workaround too complex & unacceptable | Broken feature with an acceptable workaround | Functionality inconvenience or cosmetic issue |
| ~performance Response time | Above 9000ms to timing out | Between 2000ms and 9000ms | Between 1000ms and 2000ms | Between 500ms and 1000ms |
| ~performance Degradation (to be reviewed for deprecation) | | Degradation is guaranteed to occur in the near future | Degradation is likely to occur in the near future | Degradation may occur but it's not likely |
| Affected Users (to be reviewed for deprecation) | Impacts 50% or more of users | Impacts between 25%-50% of users | Impacts up to 25% of users | Impacts less than 5% of users |
| ~availability Availability | See Availability Prioritization | See Availability Prioritization | See Availability Prioritization | See Availability Prioritization |
| ~security Security Vulnerability | See Security Prioritization | See Security Prioritization | See Security Prioritization | See Security Prioritization |

Examples of severity levels

If an issue seems to fall between two severity labels, assign it the higher severity label.

Improving performance: It may not be possible to reach the intended response time in one iteration. We encourage performance improvements to be broken down: improve where we can, then re-evaluate the appropriate severity and priority based on the new response time.

This run happens nightly, and results are output to the wiki of the GPT project.

Triaging Issues

Initial triage involves (at a minimum) labelling an issue appropriately, so un-triaged issues can be discovered by searching for issues without any labels.
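For instance, un-triaged issues can be discovered through the GitLab REST API, which accepts `labels=None` to filter for issues carrying no labels at all. The sketch below uses only the standard library; the project path is a placeholder and the helper names are hypothetical.

```python
import json
import urllib.parse
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"

def untriaged_issues_url(project_path: str) -> str:
    """Build the API URL listing open, label-less issues, oldest first."""
    project_id = urllib.parse.quote(project_path, safe="")  # URL-encode "group/project"
    params = urllib.parse.urlencode(
        {"labels": "None", "state": "opened", "sort": "asc", "order_by": "created_at"}
    )
    return f"{GITLAB_API}/projects/{project_id}/issues?{params}"

def fetch_untriaged(project_path: str, token: str) -> list:
    """Fetch un-triaged issues; token is a placeholder personal access token."""
    req = urllib.request.Request(
        untriaged_issues_url(project_path), headers={"PRIVATE-TOKEN": token}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Sorting ascending by creation date matches the preference below for picking the oldest issue first.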

Follow one of these links:

Pick an issue, with preference given to the oldest in the list, and evaluate it with a critical eye, bearing the issue triage practices below in mind. Some questions to ask yourself:

Apply each label that seems appropriate. Issues with a security impact should be treated specially - see the security disclosure process.

If the issue seems unclear - you aren't sure which labels to apply - ask the requester to clarify matters for you. Keep our user communication guidelines in mind at all times, and commit to keeping up the conversation until you have enough information to complete triage.

Check for duplicates! Searching for some keywords in the issue should give you a short list of possibilities to scan through. Check both open and closed issues, as it may be a duplicate of a solved problem.

Consider whether the issue is still valid. Especially for older issues, a ~bug may have been fixed since it was reported, or a ~feature may have already been implemented.

Be sure to check cross-reference notes from other issues or merge requests - they are a great source of information! For instance, by looking at a cross-referenced merge request, you could see a comment like "Picked into 8-13-stable, will go into 8.13.6.", which would mean the issue is fixed as of version 8.13.6.

If the issue meets the requirements, it may be appropriate to make a scheduling request - use your judgement!

You're done! The issue has all appropriate labels, and may now be in the backlog, closed, awaiting scheduling, or awaiting feedback from the requester. Pick another if you've got the time.

Issue Triage Practices

We're enforcing some of these policies automatically in triage-ops, using the @gitlab-bot user. For more information about automated triage, please read the Triage Operations page.

That said, we can't automate everything. This section describes some of the practices we perform manually.

Outdated issues

For issues that haven't been updated in the last 3 months the "Awaiting Feedback" label should be added to the issue. After 14 days, if no response has been made by anyone on the issue, the issue should be closed. This is a slightly modified version of the Rails Core policy on outdated issues.
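The timeline above (3 months of inactivity, then a 14-day grace period) can be sketched as a small decision helper. The label name comes from this section; the function itself is hypothetical and is not the actual triage-ops implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

STALE_AFTER = timedelta(days=90)   # roughly 3 months without activity
CLOSE_AFTER = timedelta(days=14)   # grace period once feedback is requested

def next_action(last_activity: datetime,
                awaiting_since: Optional[datetime],
                now: datetime) -> str:
    """Decide what the outdated-issue policy calls for.

    awaiting_since is when the "Awaiting Feedback" label was applied,
    or None if it hasn't been applied yet.
    """
    if awaiting_since is not None:
        if now - awaiting_since >= CLOSE_AFTER:
            return "close"
        return "wait"
    if now - last_activity >= STALE_AFTER:
        return "label awaiting feedback"
    return "none"
```

Any response from the reporter would reset the clock: the label is removed and the issue is no longer a candidate for closing.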

If they respond at any point in the future, the issue can be considered for reopening. If we can't confirm an issue still exists in recent versions of GitLab, we're just adding noise to the issue tracker.


Duplicates

Before opening a new issue, make sure to search for keywords and verify your issue isn't a duplicate.

Check for and/or report duplicates when you notice them.

All things being equal, the earliest issue should be considered the canonical version. However, if a later issue has a better title, description, and/or more comments and positive reactions, it should be prioritized over earlier issues even though it's a duplicate.

Lean toward closing

We simply can't satisfy everyone. We need to balance pleasing users as much as possible with keeping the project maintainable.

Label issues as they come in

When an issue comes in, it should be triaged and labeled. Issues without labels are harder to find and often get lost.

Take ownership of issues you've opened

Sort by "Author: your username" and close any issues which you know have been fixed or have become irrelevant for other reasons. Label them if they're not labeled already.

Questions/support issues

If it's a question, or something vague that can't be addressed by the development team for whatever reason, close it and direct them to the relevant support resources we have (e.g., our Discourse forum or emailing Support).

New labels

If you notice a common pattern amongst various issues (e.g. a new feature that doesn't have a dedicated label yet), suggest adding a new label in Slack or a new issue.

Reproducing issues

If possible, ask the reporter to reproduce the issue in a public project on GitLab.com. You can also try to do so yourself in the issue-reproduce group. You can ask any owner of that group for access.


We also hold regular, quarterly events where the Community, Core Team Members and Team Members can contribute to tackling some of our open issues. Please see the dedicated page for further information and upcoming event dates.


The original issue about these policies is #17693. We'll be working to improve the situation from within GitLab itself as time goes on.

The following projects, resources, and blog posts were very helpful in crafting these policies:

  1. Our current response time standard is based on the TTFB P90 results of the GitLab Performance Tool (GPT) being run against the 10k-user reference environment.