On this page

Security Vision

We enhance the security posture of our company, products, and client-facing services. The security team works cross-functionally inside and outside GitLab to meet these goals. The security team does not work directly on security-centric features of our platform—these are handled by the development teams. The specialty areas of our team reflect the major areas of information security.

Security Department

Security Automation

Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
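As a sketch of what automated triage might look like, the snippet below maps a CVSS base score to an internal severity label. The thresholds follow the standard CVSS v3.x qualitative rating scale and are an assumption for illustration, not the team's actual triage rules:

```python
# NOTE: hypothetical thresholds following the CVSS v3.x qualitative
# rating scale; the security team's actual triage rules may differ.
def severity_label(cvss_score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to an internal ~S label."""
    if cvss_score >= 9.0:
        return "~S1"  # critical
    if cvss_score >= 7.0:
        return "~S2"  # high
    if cvss_score >= 4.0:
        return "~S3"  # medium
    return "~S4"      # low
```

A bot built on this kind of mapping could label incoming issues automatically and leave the edge cases for a human reviewer.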

Application Security

Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:

Security Operations

Security Operations specialists respond to incidents. This is often a fast-paced and stressful environment, where responding quickly and maintaining one's composure is critical. Initiatives for this specialty also include:

Abuse Operations

Abuse Operations specialists investigate malicious use of our systems. Initiatives for this specialty include:


Compliance

Compliance specialists enable Sales by achieving the security standards our customers require. This includes SaaS, on-prem, and open source instances. Initiatives for this specialty also include:

Threat Intelligence

Threat intelligence specialists research and provide information about specific threats to help us protect from the types of attacks that could cause the most damage. Initiatives for this specialty also include:

Strategic Security

Strategic security specialists focus on holistic changes to policy, architecture, and processes to reduce entire categories of future security issues. Initiatives for this specialty also include:

Security Research

Security research specialists conduct internal testing against GitLab assets, and against FOSS that is critical to GitLab products and operations. Initiatives for this specialty also include:

Security Team Collaborators

Secure Team

The Secure team collaborates with development and product management on security-related features in GitLab. The Secure team must not be mistaken for the Security Team.

External Security Firms

We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.

Security Topics

At a high level, the topic of security encompasses keeping safe, from the perspective of the application, the infrastructure, and the organization.

We track progress on tackling security-related issues across the spectrum through an "always WIP Risk Assessment" and its associated (confidential) meta issue, which is kept up to date with the top 10 actions the team is working on.

Security Releases

The definitions, processes and checklists for security releases are described in the release/docs project.

The policies for backporting changes follow Security Releases for GitLab EE.

For critical security releases, refer to Critical Security Releases in the handbook for a high level description of communication, and the critical release checklist in release/docs.

Issue Triage

The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).

Creating New Security Issues

New security issues should follow these guidelines when created.

If necessary, a sanitized issue with more general discussion and examples appropriate for public disclosure may need to be created prior to release.

For more immediate attention, mention @gl-security with a request in the issue and use @sec-team in Slack.

Severity and Priority Labels on ~security Issues

The presence of ~security modifies the standard severity labels (~S1, ~S2, ~S3, ~S4) by additionally taking into account likelihood, as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.

The intent of tying ~S/~P labels to release schedules is to measure and improve GitLab's response time to security issues and to consistently meet or exceed industry-standard timelines for responsible disclosure. For non-release resources like infrastructure changes or other artifacts that do not follow the monthly release schedule, the "time to remediate" windows in the table below apply.

Severity | Priority | Target Release                | Time to remediate (non-release resources)
~S1      | ~P1      | Critical Release              | Immediate
~S2      | ~P2      | Next Monthly Security Release | 30 days
~S3      | ~P3      | Next 1-3 Releases             | 90 days

Security issues that do not rate at least ~S3 are most likely ~feature proposals and should be triaged and prioritized by a product manager. This includes suggestions without a well-defined path to implementation, or those requiring complex changes to the application or architecture. ~S4/~P4 may be used for ~feature proposals as assigned by a product manager and do not need to be assigned to a release.
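The mapping in the table above can be expressed as a simple lookup. This is an illustrative sketch only; the label names come from the table, but the dictionary structure is an assumption:

```python
# Lookup derived from the remediation table above.
REMEDIATION = {
    "~S1": {"priority": "~P1", "target": "Critical Release", "window": "Immediate"},
    "~S2": {"priority": "~P2", "target": "Next Monthly Security Release", "window": "30 days"},
    "~S3": {"priority": "~P3", "target": "Next 1-3 Releases", "window": "90 days"},
}

def priority_for(severity: str) -> str:
    """By default the priority label matches the severity label."""
    return REMEDIATION[severity]["priority"]
```

Encoding the table this way keeps triage tooling and the handbook in sync: a bot can apply the matching ~P label as soon as a severity is chosen.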

Transferring from Security to Engineering

The security engineer will add team labels (~Create, ~Deploy, etc.) and any additional labels as appropriate to the issue. The engineering team lead should be @ mentioned and followed up as necessary as noted below for different severity levels.

Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.

Issues with an S1 or S2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if they are unresponsive.

Issues with an S1 rating have priority over all other issues and should be considered for a critical security release.

Issues with an S2 rating should be scheduled for the next scheduled security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. An S2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.

Issues with an S3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated S3, but the reporter has provided a 30-day (or shorter) window for disclosure, the issue may be escalated to ensure it is patched before disclosure.

Security Severity Labels

Many factors affect the severity ratings of security issues, but the following guideline can be used as a starting point. Consider severity as a combination of the likelihood and impact of a security incident that could result from the issue going unresolved.

Likelihood \ Impact | I1 - High | I2 - Medium | I3 - Low
L1 - High           | S1        | S1          | S2
L2 - Medium         | S1        | S2          | S3
L3 - Low            | S2        | S3          | S3

S4 may be used for issues not requiring mitigation; these may need to be triaged as ~feature proposals as described under labels.
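The matrix above translates directly into a nested lookup. The labels are taken from the table; the code structure itself is illustrative:

```python
# The severity matrix above as a nested lookup: SEVERITY[likelihood][impact].
SEVERITY = {
    "L1": {"I1": "S1", "I2": "S1", "I3": "S2"},
    "L2": {"I1": "S1", "I2": "S2", "I3": "S3"},
    "L3": {"I1": "S2", "I2": "S3", "I3": "S3"},
}

def severity(likelihood: str, impact: str) -> str:
    """Look up the severity label for a likelihood/impact pair."""
    return SEVERITY[likelihood][impact]
```

For example, a medium-likelihood, medium-impact issue rates S2, while only the high/high and high/medium corners reach S1.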

More Risk Rating Examples

Internal Application Security Reviews

For systems built (or significantly modified) by functional groups that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened.


The current process for requesting an internal application security review is:

  1. Requestor creates an issue in the security tracker and adds the app sec review label.
  2. Requestor submits a triage questionnaire form for the app.
  3. Security team reviews the triage form and determines its next steps (design, testing, no test, etc.).
  4. If sent to design phase, requestor submits design questionnaire. An optional call can be arranged to discuss the architecture further. At the end of this phase, the security team will scope it for X amount of days of testing. It will now be sent to the testing phase.
  5. The requestor submits a testing questionnaire with the test environment information. The scheduled testing window will also be determined.
  6. The security team performs the assessment during the testing window and documents any findings in the app's associated component in GitLab. Once testing is finished, it will be sent to the remediation phase.
  7. All findings will be remediated by the appropriate teams and validated by the security engineer who tested it.
  8. Once all findings have been closed, the review can be considered complete.

As part of the app sec review process, a security issue is created to track the progress of the review. In those issues, labels and milestones can be added. It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
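The review lifecycle in the steps above can be sketched as a small state machine. The phase names mirror the process described, but the code itself is a hypothetical illustration, not the team's actual tooling:

```python
# Hypothetical sketch of the review lifecycle described above; phase
# names mirror the numbered steps but the state machine is illustrative.
PHASES = ("triage", "design", "testing", "remediation", "complete")

def next_phase(current: str, skip_design: bool = False) -> str:
    """Advance a review one phase; triage may route straight to testing."""
    if current == "triage" and skip_design:
        return "testing"
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]
```

Modeling the phases explicitly makes it easy to track many concurrent reviews and to re-enter the cycle when an evolving application needs another pass.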

Fighting Spam

The security team plays a large role in defining procedures for defending against and dealing with spam. Common targets for spam are public snippets, projects, issues, merge requests, and comments. Advanced techniques for dealing with these types of spam are detailed in the Spam Fighting runbook.

Always "WIP" Risk Assessment

GitLab has an "always WIP" Risk Assessment and all team members are encouraged to participate. The Risk Assessment consists of a list of all risks or threats to the GitLab infrastructure and GitLab as a company, their likelihood of occurring, the impact should they occur, and what actions can be taken to prevent these risks from damaging the company or mitigate the damage should they be realized.

The risk assessment is stored as a Google Sheet (search Google Drive for "Risk Assessment"; make sure you are searching for spreadsheets) and is available to all team members. It should not be shared with people outside of the company without permission.

The format of the Risk Assessment may seem intimidating at first. If you do not know what values to use for risk ratings, impact ratings, likelihoods or any other value leave them blank and reach out to the Security Team to help you determine appropriate values. It is more important to have all risks documented than it is to have all values completed when adding new risks. Guidelines and instructions for how to add a risk and how to calculate each rating or score are included on the "Instructions" tab.
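If it helps to see how a risk rating might be computed, a common convention is the product of likelihood and impact ratings. This is a hypothetical sketch assuming 1-5 scales; the actual formulas live in the spreadsheet's "Instructions" tab:

```python
# Hypothetical scoring sketch: the spreadsheet's "Instructions" tab
# defines the real formulas; this assumes 1-5 rating scales.
def risk_score(likelihood: int, impact: int) -> int:
    """Risk as likelihood x impact, each rated 1 (low) to 5 (high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact
```

Under this scheme a rare but catastrophic risk (1 x 5 = 5) scores the same as a frequent nuisance (5 x 1 = 5), which is why documenting the risk matters more than getting the numbers exactly right on the first pass.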

Vulnerability Reports and HackerOne

GitLab receives vulnerability reports by various pathways, including:

For any reported vulnerability:

HackerOne Process

GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here.

If a Report is Unclear

If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.

Once the report has been clarified, follow the "regular flow" described above.

If a Report Violates the Rules

If a report violates the rules of GitLab's bug bounty program use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.

If the Report is Invalid

If the report is invalid (in your determination) or does not pose a security risk to GitLab or GitLab users, it can be closed without opening an issue. When this happens, inform the researcher why it is not a vulnerability and close the report as "Informational". HackerOne also offers the option to close a report as "Not Applicable" or "Spam"; both of these categories damage the researcher's reputation and should only be used in obvious cases of abuse.

When a Patch is Ready

When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.

This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.


CVE IDs

We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier it is obtained the better; request it either during or immediately after a fix is prepared.

We currently request CVEs either through the HackerOne team or directly through MITRE's web form. Keep in mind that some of our security releases contain security-related enhancements that have no associated CWE or vulnerability; these do not require a CVE.
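The 30-day disclosure window above is simple date arithmetic, sketched here for illustration:

```python
from datetime import date, timedelta

# Public disclosure happens 30 days after the patch is released.
DISCLOSURE_WINDOW = timedelta(days=30)

def disclosure_date(patch_release: date) -> date:
    """Earliest public-disclosure date for a patched vulnerability."""
    return patch_release + DISCLOSURE_WINDOW

print(disclosure_date(date(2019, 1, 15)))  # → 2019-02-14
```

Tracking this date per vulnerability makes it clear when a CVE entry and the HackerOne report must be ready for publication.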

On Release Day

On the day of the security release several things happen in order:

Once all of these things have happened notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Public disclosure should be requested if they have not objected to doing so. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.

Swag for Reports

GitLab awards swag codes for free GitLab swag to any reports that result in a security patch. Limit: 1 per reporter. When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.

Security Questionnaires for Customers

Some customers, to keep up with regulations that impact their business, need to understand the security implications of installing any software - including software like GitLab.


The current process for responding to customer requests is:

  1. Refer a customer to our public statements on security here: /security/
  2. If a customer still has questions that need to be discussed, you can engage a Solutions Architect in that discussion.
  3. If the customer still needs a specific questionnaire filled out, create a confidential issue with the label Security and SA Backlog for the completion of that document
  4. The SA team will take the first pass at the questionnaire using /security/ and this folder as a reference.
  5. Once the SA team has completed what they can, the questionnaire will go to the security team for additional answers.
  6. Once the questionnaire is complete, it will need to be approved by the Director of Security for release to the customer.
  7. File the completed questionnaire in the example folder for future reference.

Vulnerability Scanning

GitLab maintains a custom vulnerability scanner that regularly scans all GitLab assets for common vulnerabilities and previously patched GitLab vulnerabilities, and ensures that no GitLab security-sensitive services are accidentally exposed.

Details on this scanner and how it is configured are available to all team members in a Google Doc entitled "Vulnerability Scanner Config".
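The exposure check at the heart of such a scanner can be as simple as a TCP connect probe. This is a minimal sketch, not the actual scanner; only scan hosts you are authorized to test:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A successful connect means the service is reachable and may be
    accidentally exposed; a refusal or timeout means it is not.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A scanner would run checks like this against an asset inventory and alert when a port that should be firewalled answers.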

Package Signing

The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. Those runbooks also point out that the management of the keys is handled by the Security team and not the Build team. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.