

Engaging the Security On-Call

If you have identified an urgent security issue, if something feels wrong, or if you need immediate assistance from the Security Department, you have two options available:

Please be aware that the Security Department can only be paged internally. If you are an external party, please proceed to the Vulnerability Reports and HackerOne section of this page.

Both mechanisms trigger the same process, and as a result the Security responder on-call will engage on the relevant issue within the appropriate SLA. If the SLA is breached, the Security manager on-call will be paged. When paging security, a new issue will be created to track the incident being reported. Please provide as much detail as possible in this issue to aid the Security Engineer On-Call in their investigation. The Security Engineer On-Call will typically respond to the page within 15 minutes and may have questions that require synchronous communication with the incident reporter, so when paging security, please be prepared to be available for this synchronous communication during the initial stage of the incident response.

For lower severity requests or general Q&A, GitLab Security is available in the #Security channel in GitLab Slack and the Security Operations team can be alerted by mentioning @sec-ops-team. If you suspect you've received a phishing email, and have not engaged with the sender, please see: What to do if you suspect an email is a phishing attack. If you have engaged a phisher by replying to an email, clicking on a link, have sent and received text messages, or have purchased goods requested by the phisher, page security as described above.

Note: The Security Department will never be upset if you page us for something that turns out to be nothing, but we will be sad if you don't page us and it turns out to be something. IF UNSURE - PAGE US.

Further information on GitLab's security response program is described in our Incident Response guide.

External Contact Information

The security department can be contacted by email. External researchers or other interested parties should refer to our Responsible Disclosure Policy for more information about reporting vulnerabilities. The email address also forwards to a ZenDesk queue that is monitored by the Field Security team.

For security team members, the private PGP key is available in the Security 1Password vault. Refer to PGP process for usage.

Security Mission and Vision Statements

Our Mission is to be the most transparent Security Group in the world, with a results-oriented approach.

By embracing GitLab values and being active in engaging with our customers, our staff and our product, we enhance the security posture of our company, products, and client-facing services. The security department works cross-functionally inside and outside GitLab to meet these goals.

Our Vision is as follows:

  1. GitLab Security as a business enabler, with a focus on
    • Improving time to market (getting new features into releases faster and preventing slippage)
    • Creating product and competitive differentiation
    • Reducing internal roadblocks and delays
    • Improving security's risk-based approach to decision making through decentralization and managing appropriate levels of acceptable risk
  2. Strengthen GitLab's enterprise grade security with a focus on
    • Achieving industry recognized security certifications
    • Reducing the time required to pass customer security reviews
    • Directly increasing contract size and volume
    • Closing the gap between customer security requirements and GL’s Security posture
  3. Reduce GitLab's threat landscape with a focus on
    • Reducing the likelihood of breach
    • Reducing the exposure and severity of vulnerabilities
    • Reducing the cost associated with service vulnerabilities

Security Hiring

The company-wide mandate is the justification for mapping Security headcount to around 5% of total company headcount. Tying Security Department headcount growth to 5% of total company headcount ensures adequate staffing support for the following (the items below are highlights, not the entire list of the Security Department's responsibilities):

Security Department

As the Security Department has evolved, so have the areas where Security has been required to provide input. Today, the Security Department provides an essential security service to the Engineering and Development Departments, and is directly engaged in Security Releases. However, the functions that the Security Department provides to the rest of the business are more consultative and advisory in nature: while the Security Department is a key player in the security questions raised throughout the business, it is not directly responsible for all of the administration or execution of the functions required for the business to operate while minimising risk.

This is expected: the security of the business should be a concern of everyone within the company, not just the domain of specialists. In addition, the role of the Security Department has expanded to address customer, investor, and regulator concerns about the security and safety of using GitLab as an enterprise-ready product, both through compliance certifications and through assurance responses provided directly to customers.

To reflect this, we have structured the Security Department around three key tenets, which drive the structure and the activities of our group. These are:

Secure the Product

This reflects the Security Department’s current efforts to be involved in the Application development and Release cycle for Security Releases, Security Research, our HackerOne bug bounty program, Security Automation and Vulnerability Management.

The term “Product” is interpreted broadly and includes the GitLab application itself and all other integrations and code that is developed internally to support the GitLab application for the multi-tenant SaaS. Our responsibility is to ensure all aspects of GitLab that are exposed to customers or that host customer data are held to the highest security standards, and to be proactive and responsive to ensure world-class security in anything GitLab offers.

Protect the Company

This encompasses protecting company properties as well as preventing, detecting, and responding to risks and events targeting the business. It includes the Security Operations and Anti-Abuse functions, along with the Security Strategy, Threat Intelligence, and Identity and Access Management teams.

These functions have the responsibility of establishing and maintaining the security posture required to ensure enterprise-level security is in place to protect our new and existing customers.

Assure the Customer

This reflects the need for us to provide resources to our customers to assure them of the security and safety of GitLab as an application to use within their organisation and as an enterprise-level SaaS. It also involves providing appropriate consulting, services, and resources to customers so that they trust GitLab as a secure company, a secure product, and a secure SaaS.

Tenets Overlap between all Teams

It's important to note that the three tenets do not operate independently of each other, and every team within the Security Department performs an important function in progressing these tenets. For example, Application Security may be strongly focused on Securing the Product, but it still has a strong focus on customer assurance and protecting the company in performing its functions. Similarly, Security Operations may be engaged on issues related to product vulnerabilities, and the resolution path deeply involves improving the security of product features, as well as scoping customer impact and assisting in messaging to customers.

Department Structure

Broadly speaking, teams are aligned under three Sections (Product, Operations, and Customer) which reflect the tenets that are their primary focus. The graph below illustrates how the teams within the Department are managed.

graph TD;
  VP[VP/Senior Director of Security]---A1[Product Security];
  VP---A2[Operations Security];
  VP---A3[Customer Security];
  A1---A11[Application Security];
  A11---A12[Security Automation];
  A12---A13[Security Development];
  A13---A14[Security Research];
  A2---A21[Security Operations];
  A21---A22[Anti-Abuse Operations];
  A22---A23[Red Team];
  A23---A24[Threat Intelligence];
  A24---A25[Strategic Security];
  A25---A26[IAM];
  A3---A31[Security Compliance];
  A31---A32[Field Security];
  A32---A33[Security Communications];

"Secure the Product" Teams

The teams below are primarily focused on Application Security or other functions related to Securing the Product.

Application Security

Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:

Security Automation

Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:

Security Development

The Security Development team provides engineering and development capabilities for security teams. The Security Development team implements and deploys changes when security teams need changes or additional features to any of the products' codebases.

Furthermore, Security Development can design, plan, and build new products or services to aid and improve security of the product and company.

The Security Development team works closely with all of the other development teams (product teams, Secure, Defend) and is knowledgeable about the company's development standards, processes, best practices, and codebases.

Security Research

Security research specialists conduct internal testing against GitLab assets, against FOSS that is critical to GitLab products and operations, and against vendor products being considered for purchase and integration with GitLab. Initiatives for this specialty also include:

Security research specialists are subject matter experts (SMEs) with highly specialized security knowledge in specific areas, including reverse engineering, incident response, malware analysis, network protocol analysis, cryptography, and so on. They are often called upon to take on security tasks for other security team members as well as other departments when highly specialized security knowledge is needed. Initiatives for SMEs may include:

Security research specialists are often used to promote GitLab thought leadership by engaging as all-around security experts, to let the public know that GitLab doesn’t just understand DevSecOps or application security, but has a deep knowledge of the security landscape. This can include the following:

"Protect the Company" Teams

These teams are primarily focused on protecting GitLab the business.

Security Operations

The Security Operations team is here to manage security incidents across GitLab, which includes events that originate from outside of our infrastructure. This is often a fast-paced and stressful environment where responding quickly and maintaining one's composure is critical.

More than just being the first to acknowledge issues as they arise, Security Operations is responsible for leading, designing, and implementing the strategic initiatives to grow the Detection and Response practices at GitLab. These initiatives include:

Security Operations can be contacted on Slack via our handle @sec-ops-team or in a GitLab issue using @gitlab-com/gl-security/secops. If your request requires immediate attention, please review the steps for engaging the security on-call.

Abuse Operations

Abuse Operations specialists investigate and mitigate the malicious use of our systems, which is defined under Section 3 of the GitLab Website Terms of Use. This activity primarily originates from inside our infrastructure. Initiatives for this specialty include:

For more information, please see our Resources section.

Code of Conduct Violations are handled by the Community Advocates in the Community Relations Team. For more information on reporting these violations please see the GitLab Community Code of Conduct page.

Red Team

GitLab's internal Red Team emulates adversary activity to improve GitLab's enterprise and product security. This includes activities such as:

Threat Intelligence

Threat intelligence specialists research and provide information about specific threats to help us protect from the types of attacks that could cause the most damage. Initiatives for this specialty also include:

Strategic Security

Strategic security specialists focus on holistic changes to policy, architecture, and processes to reduce entire categories of future security issues. Initiatives for this specialty also include:

Identity and Access Management (IAM)

The Identity and Access Management (IAM) team manages policy and implementation around the tooling used to identify and manage access rights for GitLab team-members. This includes managing access requests, provisioning and deprovisioning processes for GitLab team-members, and managing the governance and auditing of access rights to applications and services used by GitLab.

Areas in scope for this team would include:

"Assure the Customer" Teams

The teams below target Customer Assurance projects among their responsibilities.

Security Compliance

Security Compliance enables Sales by achieving the standards required by our customers, and helps to verify that the outcomes the security department is trying to achieve are actually being met. This includes SaaS, on-prem, and open source instances. Initiatives for this specialty also include:

For additional information about the compliance program see the Security Compliance team handbook page or refer to GitLab's security controls for a detailed list of all compliance controls organized by control family.

Field Security

The Field Security team serves as the public face for GitLab's Security Department. The Field Security team works closely with multiple teams within GitLab including:

Areas of responsibility for the Field Security team include:

Security External Communications

The External Communications Team leads customer advocacy, engagement and communications in support of GitLab Security Team programs. Initiatives for this specialty include:

Security Department Collaborators

Secure Team

The Security Department collaborates with development and product management on security-related features in GitLab. The Secure team must not be confused with the Security teams.

External Security Firms

We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.


Information Security Policies

Information Security Policies are reviewed annually. Policy changes are approved by the Senior Director of Security and Legal. Changes to the Data Protection Impact Assessment Policy are approved by GitLab's Privacy Officer.

Information Security Policy Exception Management Process

Information security considerations such as regulatory, compliance, confidentiality, integrity, and availability requirements are most easily met when companies employ centrally supported or recommended industry standards. While GitLab operates under the principle of least privilege, we understand that centrally supported or recommended industry technologies are not always feasible for a specific job function or company need. Deviations from the aforementioned standard or recommended technologies are discouraged. However, a deviation may be considered provided that there is a reasonable, justifiable business and/or research case for an information security policy exception; resources are sufficient to properly implement and maintain the alternative technology; the process outlined in this and other related documents is followed; and other policies and standards are upheld.

In the event an employee requires a deviation from the standard course of business or otherwise allowed by policy, the Requestor must submit a Policy Exception Request to IT Security, which contains, at a minimum, the following elements:

The Policy Exception Request should be used to request exceptions to information security policies, such as the password policy, or when requesting the use of a non-standard device (laptop).

Exception request approval requirements are documented within the issue template. The requester should tag the appropriate individuals who are required to provide an approval per the approval matrix.

If the business wants to appeal an approval decision, the appeal should be sent to Legal. Legal will draft an opinion as to the proposed risks to the company if the deviation were to be granted. Legal's opinion will be forwarded to the CEO and CFO for final disposition.

Any deviation approval must:

Groups and Projects

Slack Channels

Other Frequently Used Projects

Security crosses many teams in the company, so you will find ~security labelled issues across all GitLab projects, especially:

When opening issues, please follow the Creating New Security Issues process for using labels and the confidential flag.

Other Resources for GitLab Team Members

The Security Department tracks their OKRs using the boards in the table below. These boards aggregate work from the many subgroups and projects under gitlab-com/gl-security used by the various subteams to track their work and projects. High level issues and open issues that map directly to an OKR should be added to the board for the quarter to provide visibility into the current work of the security department.

When creating an issue that maps to an OKR, apply the following group level labels so that it appears in the board:

These labels can also be applied to MRs related to the work.

Quarter Board
~FY20Q4 Board link
~FY20Q3 Board link
~FY20Q2 Board link

How We Plan, Assign, and Execute Work

Any work that is related to an OKR is tracked with an issue in the appropriate team or project tracker and linked to the OKR issue as related.

Larger initiatives that span the scope of multiple teams or projects may require a Planning handbook page to further develop requirements.

Security Releases

The definitions, processes and checklists for security releases are described in the release/docs project.

The policies for backporting changes follow Security Releases for GitLab EE.

For critical security releases, refer to Critical Security Releases in release/docs.

Issue Triage

The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).

Creating New Security Issues

New security issues should follow these guidelines when they are created.

Occasionally, data that should remain confidential, such as the private project contents of a user that reported an issue, may get included in an issue. If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.

For review by the Application Security team, @ mention @gitlab-com/gl-security/appsec.

For more immediate attention, refer to Engaging security on-call.

Severity and Priority Labels on ~security Issues

If an issue is determined to be a vulnerability, the security engineer will ensure that the issue is labelled as a ~bug and not a ~feature.

The presence of the ~security and ~bug labels modifies the standard severity labels (~S1, ~S2, ~S3, ~S4) by additionally taking likelihood into account as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.

The intent of tying ~S/~P labels to remediation times is to measure and improve GitLab's response time to security issues, so that we consistently meet or exceed industry-standard timelines for responsible disclosure. Mean time to remediation (MTTR) is an external metric that users may evaluate as an indication of GitLab's commitment to protecting our users and customers. It is also an important measurement for security researchers when choosing to engage with the security team, either directly or through our HackerOne Bug Bounty Program.

Severity Priority Time to mitigate Time to remediate
~S1 ~P1 Within 24 hours On or before the next security release
~S2 ~P2 N/A Within 60 days
~S3 ~P3 N/A Within 90 days
~S4 ~P4 N/A Best effort unless risk accepted
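The table above can be sketched as a small lookup. This is an illustrative helper, not official GitLab tooling; the label names and windows simply mirror the table.

```python
# Illustrative lookup of the severity/remediation table above; not official tooling.
REMEDIATION = {
    "S1": "mitigate within 24 hours; fix on or before the next security release",
    "S2": "within 60 days",
    "S3": "within 90 days",
    "S4": "best effort unless risk accepted",
}

def remediation_window(severity: str) -> str:
    """Return the remediation target for a severity label such as 'S2' or '~S2'."""
    return REMEDIATION[severity.lstrip("~")]
```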

Due date on ~security Issues

For ~S2, ~S3, and ~S4 ~security ~bugs, the security engineer assigns the Due date, which is the target date of when fixes should be ready for release. This due date should account for the Time to remediate values above, as well as the monthly security releases on the 28th of each month. For example, suppose today is October 1st and a new S2 ~security issue is opened. It must be addressed in a security release within 60 days, that is, by November 30th, so it must catch the November 28th security release. Furthermore, the Security Release Process deadlines say that the code fix should be ready by November 23rd, so the due date in this example should be November 23rd.
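The due-date arithmetic described above can be sketched as follows. The helper is illustrative only; it assumes a configurable remediation window, a security release on the 28th of each month, and a five-day lead time between the code-fix deadline and the release (matching the November 23rd example).

```python
from datetime import date, timedelta

def security_due_date(opened: date, window_days: int = 60,
                      fix_lead_days: int = 5) -> date:
    """Target date for the code fix: the security release on the 28th that
    falls on or before the remediation deadline, minus the lead time needed
    to have the fix ready. Illustrative sketch, not official tooling."""
    deadline = opened + timedelta(days=window_days)
    release = date(deadline.year, deadline.month, 28)
    if release > deadline:
        # The 28th of the deadline's month is too late; use the previous month's release.
        release = (release.replace(day=1) - timedelta(days=1)).replace(day=28)
    return release - timedelta(days=fix_lead_days)

# Example from the text: an S2 opened October 1st is due November 23rd.
print(security_due_date(date(2019, 10, 1)))  # 2019-11-23
```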

~S1 ~security issues do not have a due date since they should be fixed as soon as possible, with a security release made available as soon as possible to accommodate them. These issues are worked by a security engineer on-call to mitigate the risk within 24 hours, meaning the risks relevant to our customers are removed or reduced as much as possible within that time. The remediation timeline target is never greater than our next security release, but because the risk has already been mitigated, remediation at that point does not impact customer security risk.

Note that some ~security issues may not need to be part of a code release, such as an infrastructure change. In that case, the due date will not need to account for monthly security release dates.

On occasion, the due date of such an issue may need to be changed if the security team needs to move up or delay a monthly security release date to accommodate for urgent problems that arise.


Issues labelled with the security and feature labels are not considered vulnerabilities, but rather security enhancements or defense-in-depth mechanisms. This means the security team is not required to set the S and P labels or follow the vulnerability triage process as these issues will be triaged by product or other appropriate team owning the component.

In contrast, note that issues with the security, bug, and S4 labels are considered Low severity vulnerabilities and will be handled according to the standard vulnerability triage process.
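A minimal sketch of the label rules above, purely for illustration: issues carrying both security and feature are enhancements that skip the vulnerability triage process, while security plus bug (any severity, including S4) follows it.

```python
def follows_vulnerability_triage(labels: set) -> bool:
    """Apply the labeling rules described above (illustrative only)."""
    if "security" not in labels or "feature" in labels:
        # Security enhancements / defense-in-depth: triaged by Product
        # or the team owning the component, not the security team.
        return False
    return "bug" in labels
```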

~"security request"

The security team may also apply ~"internal customer" and ~"security request" to an issue as an indication that the feature is being requested by the security team to meet additional customer requirements, or compliance or operational needs.

Transferring from Security to Engineering

The security engineer must:

The product manager will assign a Milestone that has been assigned a due date to communicate when work will be assigned to engineers. The Due date field, severity label, and priority label on the issue should not be changed by PMs, as these labels are intended to provide accurate metrics on ~security issues, and are assigned by the security team. Any blockers, technical or organizational, that prevents ~security issues from being addressed as our top priority should be escalated up the appropriate management chains.

Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.

Issues with an S1 or S2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if they are unresponsive.

Issues with an S1 rating have priority over all other issues and should be considered for a critical security release.

Issues with an S2 rating should be scheduled for the next scheduled security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. An S2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.

Issues with an S3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated S3 but the reporter has provided a 30 day time window (or less) for disclosure the issue may be escalated to ensure that it is patched before disclosure.

Security issue becoming irrelevant due to unrelated code changes

It is possible that a ~security issue becomes irrelevant after it was initially triaged, but before a patch was implemented. For example, the vulnerable functionality was removed or significantly changed resulting in the vulnerability not being present anymore.

If an engineer notices that an issue has become irrelevant, they should @-mention the person that triaged the issue to confirm that the vulnerability is no longer present. Note that it might still be necessary to backport a patch to previous releases according to our maintenance policy. If no backports are necessary, the issue can be closed.

Secure Coding Training

For information on secure coding initiatives, please see the Secure Coding Training page.

Access Management Process

Centralized access management is key to ensuring that the correct GitLab team-members have access to the correct data and systems and at the correct level. GitLab access controls are guided by the principle of least privilege and need-to-know. These controls apply to information and information processing systems at the application and operating system layers, including networks and network services.

The access request project is used to request and track the following access-related activities:

  1. New Access Requests
  2. Access Removal Requests
  3. Access Reviews
  4. New Service Account Requests

Usage guidelines for each of the access templates are outlined on the IT Operations handbook page.

These templates should be used during the onboarding process and throughout the employment tenure of a GitLabber. Access required as part of the team member's onboarding should be requested using the New Access Requests or if applicable, one of the available Role-based entitlements templates.

Access Control Policy and Procedures

Access Control Process Exceptions

Bulk Access Requests

Access Requests and Onboarding

During the onboarding process, the manager should determine which email and Slack groups the new team member should be added to, and whether the new team member will need access to the dev server, which is used by engineers to prepare fixes for security issues. If so, request the creation of a new account with the same username the team member already has, and an invitation to the gitlab group as a Developer. Fill out one access request covering both the groups and the dev account if needed.

Principle of Least Privilege

GitLab operates its access management under the principle of least privilege. Under least privilege, a team member should only be granted the minimum access necessary to perform their function. Access is considered necessary only when a GitLabber cannot perform a function without it; if an action can be performed without the requested access, the access is not considered necessary. Least privilege is important because it protects GitLab and its customers from unauthorized access and configuration changes, and limits exposure in the event of an account compromise.

Least Privilege Reviews for Access Requests


Job Transfers

Access Reviews

Please refer to the Access reviews page for additional information.

Baseline Role-Based Entitlements Access Runbooks & Issue Templates

The goal of baseline and role-based entitlements is to increase security while reducing access management complexity by moving towards role-based access control. The basic idea is that if we configure all of our systems for access based on the specific job families that require access to each system, then as we scale we can simply add new GitLab team-members to these pre-defined groups and system-level access will be granted automatically.

The difficult part in this implementation is accurately defining the access each role should have and collecting/maintaining all related approvals. The GitLab solution to this challenge is to use baseline and role-based entitlements. These entitlements define what systems each role should have access to and pre-approve access to those systems so provisioning can be sped up. Okta will be a huge help in this process as we continue to build out that tool, but these baseline entitlements can still define pre-approved access to systems not managed by Okta.

Baseline and role-based entitlements can also help automate access reviews, since we will have a solid source of truth for what access should exist for each role and which GitLab team-members should be a part of each role.

The basic workflow for using a baseline or role-based entitlement is:

graph TD;
  Q1[Does a baseline or role-based entitlement exist for a role you are provisioning?]-->A1[If Yes: Submit an AR with the template];
  Q1-->A2[If No: Create an MR with the systems that role requires];
  A1-->A1A[Assign the Access Request to the system owner];
  A2-->A2A["Assign the department manager and director to approve the new MR"];

The hope with the above workflow is that everyone will contribute to the creation of the baseline and role-based entitlements and they will be prioritized based on how frequently the roles have access provisioned for them. The other benefit of this approach to access is that each access request template for the specific baseline or role-based entitlement role can have very specific instructions and links.
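The pre-approved entitlement idea can be sketched as a simple lookup. This is purely illustrative: the baseline systems are drawn from the baseline entitlements table, but the "support-engineer" role and its system set are invented for this example.

```python
# Illustrative role-to-entitlement lookup; role names other than
# "baseline" are invented for this example, not real GitLab roles.
ROLE_ENTITLEMENTS = {
    "baseline": {"1Password", "BambooHR", "Gsuite", "Slack", "Zoom"},
    "support-engineer": {"ZenDesk"},  # hypothetical role-specific addition
}

def entitlements_for(role: str) -> set:
    """Every team member gets the baseline; a role adds its own systems."""
    return ROLE_ENTITLEMENTS["baseline"] | ROLE_ENTITLEMENTS.get(role, set())
```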

Runbooks, Baseline, and Role-based Access Request Templates have been established for the following roles:

Baseline Entitlements (All GitLab team-members):
System Name Business Purpose System Role (What level of access) Data Classification
1Password User Password Management Team Member RED
BambooHR Human Resource Platform Employee RED
Calendly Add-in for meeting Scheduling Employee YELLOW
Carta Shares Management Employee RED
CultureAmp 360 Feedback Management User YELLOW
Expensify Expense Claims and Management Employee ORANGE
Gitlab Application for Staff Employee RED
Greenhouse Recruiting Portal Interviewer RED
Gsuite Email, Calendar, and Document sharing/collaboration Org Unit RED
Moo Business Cards User YELLOW
NexTravel Travel booking Employee ORANGE
Sertifi Digital signatures, payments, and authorizations User YELLOW
Slack GitLab async communications Member RED
Sisense (Periscope) Data Analysis and Visualisation User RED
Will Learning Staff Training and Awareness Portal User YELLOW
ZenDesk (non US Federal instance Customer Support - Incident Management Light Agent RED
Zoom For video conferencing / meetings Pro RED

Access Control Procedure Activities

GitLab's access controls include the following control activities:

  1. user registration and de-registration
  2. user access provisioning
  3. removal or adjustment of user access rights
  4. management of privileged access rights
  5. management and use of secret authentication information
  6. review and recertification of user access rights
  7. secure log-on procedures
  8. management of passwords and tokens
  9. access to privileged utility programs
  10. access to program source code

Account Naming Conventions

Automated Group Membership Reports for Managers

If you would like to check whether or not a team-member is a member of a Slack or a G-Suite group, you can view the following automated group membership reports:

G-Suite Group Membership Reports

Slack Group Membership Reports

Unique Account Identifiers

Every service and application must use unique identifiers for user accounts and prevent the re-use of those identifiers.

For example, if a user account is identified with a username, there can only be one account with that username. Accounts may eventually be deleted and that username (or other unique identifier) intentionally released for re-use, but that new account may not have the same permissions or access as the first account that was deleted. This doesn't preclude the use of shared accounts (except where it is strictly forbidden, like in-scope PCI systems) and applies to both individual and shared accounts. If a shared account is used, that account must have a unique identifier in the same way an individual, non-shared account does.

This is required to allow the actions of any given account to be associated back with that particular account. If two accounts share an identifier, if a malicious action were taken, we'd have no way of identifying which of the two accounts performed that malicious action. It's also important to preserve the confidentiality of information; if access or permission are given to an account, they should only be given to the specific account for which they were intended.
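A minimal sketch of the uniqueness rule, assuming a simple in-memory registry (not any actual GitLab system): each live identifier maps to exactly one account, and a re-created account inherits nothing from its predecessor.

```python
# Minimal sketch of unique, accountable identifiers (an assumption, not a
# real GitLab system): each live identifier maps to exactly one account, and
# deleting an account discards its permissions, so a later account created
# under the same identifier inherits nothing.

class AccountRegistry:
    def __init__(self):
        self.accounts = {}  # identifier -> set of granted permissions

    def create(self, identifier: str) -> None:
        if identifier in self.accounts:
            raise ValueError(f"identifier {identifier!r} is already in use")
        self.accounts[identifier] = set()  # a new account always starts empty

    def grant(self, identifier: str, permission: str) -> None:
        self.accounts[identifier].add(permission)

    def delete(self, identifier: str) -> None:
        # Releasing the identifier discards its permissions entirely.
        del self.accounts[identifier]

registry = AccountRegistry()
registry.create("alice")
registry.grant("alice", "admin")
registry.delete("alice")
registry.create("alice")           # identifier released for re-use...
print(registry.accounts["alice"])  # set() -- ...but with none of the old access
```

The same pattern applies to shared accounts: the shared account still gets exactly one identifier, so its actions remain attributable to that account.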

Internal Application Security Reviews

For systems built (or significantly modified) by functional groups that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened. Security reviews aim to help reduce vulnerabilities and to create a more secure product.

When to request a security review?

  1. If your changes are processing, storing, or transferring any kind of RED or ORANGE data, it should be reviewed by the application security team.
  2. If your changes involve implementing, utilizing, or are otherwise related to any type of authentication, authorization, or session handling mechanism, it should be reviewed by the application security team.
  3. If your changes have a goal which requires a cryptographic function such as: confidentiality, integrity, authentication, or non-repudiation, it should be reviewed by the application security team.

How to request a security review?

There are two ways to request a security review depending on how significant the changes are. It is divided between individual merge requests and larger scale initiatives.

Individual merge requests or issues

Loop in the application security team by /cc @gitlab-com/gl-security/appsec in your merge request or issue.

These reviews are intended to be faster, more lightweight, and have a lower barrier of entry.

Larger scale initiatives

To get started, create an issue in the security tracker, add the app sec review label, and submit a triage questionnaire form. The complete process can be found here.

Some use cases of this are for epics, milestones, reviewing for a common security weakness in the entire codebase, or larger features.

Is security approval required to progress?

No, code changes do not require security approval to progress. Non-blocking reviews give our code the freedom to keep shipping fast, and they align more closely with our values of iteration and efficiency. They operate as guardrails rather than a gate.

What should I provide when requesting a security review?

To help speed up a review, it's recommended to provide any or all of the following:

What does the security process look like?

The current process for larger scale internal application security reviews can be found here.

My changes have been reviewed by security, so is my project now secure?

Security reviews are not proof or certification that the code changes are secure. They are best effort, and additional vulnerabilities may exist after a review.

It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.

Abuse Operations Resources

Common forms of Abuse

1. Malware: Defined as software that is designed and distributed with the intention of causing damage to a computer, server, client, or computer network.

Making use of services to deliver malicious executables or as attack infrastructure is prohibited under the GitLab Website Terms of Use (Section 3, “Acceptable Use of Your Account and the Website”). We do, however, understand that making such technical details available for research purposes can benefit the wider community, and as such it will be allowed if the content meets the following criteria:

2. Commercial Spam: An account that's been created for the purpose of advertising a product or service.

3. Malicious Spam: An account that’s been created for the purpose of distribution of fraudulent, illegal, pirated or deceptive content.

4. CI Abuse: Making use of CI Runners for any other purpose than what it is intended for. Examples include, but are not limited to:

5. Prohibited Content: Distributing harmful or offensive content that is defamatory, obscene, abusive, an invasion of privacy or harassing.

6. GitLab Pages Abuse: Examples include, but are not limited to:

Fighting Spam

The security team plays a large role in defining procedures for defending against and dealing with spam. Common targets for spam are public snippets, projects, issues, merge requests, and comments. Advanced techniques for dealing with these types of spam are detailed in the Spam Fighting runbook.

For any actions taken on an account:

The purpose of adding Admin Notes is to better assist the Support Team and Production if there are any questions about changes made to an account by the Security Team.

DMCA Requests

The Security Team plays a big role in defining the procedures and reviewing Digital Millennium Copyright Act (DMCA) requests. All DMCA requests need to be vetted by Legal first before any further steps are taken to proceed with the take down of reported content. Reported content that has been successfully vetted by Legal must be referred to the Abuse Team before any action is taken.

For DMCA requests, the Abuse Team will follow the process below.

Abuse works in conjunction with Legal referencing the DMCA Removal Workflow

Vulnerability Reports and HackerOne

GitLab receives vulnerability reports by various pathways, including:

For any reported vulnerability:

Triage Rotation

Application Security team members may assign themselves as the directly responsible individual (DRI) for incoming requests to the Application Security team for a given calendar week in the Triage Rotation Google Sheet in the Security Team Drive.

The following rotations are defined:

Team members should not assign themselves on weeks they are responsible for the scheduled security release.

Team members not assigned as the DRI for the week should continue to triage reports when possible, especially to close duplicates or handle related reports to those they have already triaged.

Team members remain responsible for their own assigned reports.

HackerOne Process

GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here. The #hackerone-feed Slack channel receives notifications of report status changes and comments via HackerOne's Slack integration.

For information or questions about the GitLab HackerOne program, please contact

Working the Queue

Application Security Engineer Procedures for S1/P1 Issues

Please see Handling S1/P1 Issues

If a Report is Unclear

If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.

Once the report has been clarified, follow the "regular flow" described above.

If a Report Violates the Rules of Engagement

If a report violates the rules of GitLab's bug bounty program, use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but that you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.

Closing reports as Informative, Not Applicable, or Spam

If the report does not pose a security risk to GitLab or GitLab users it can be closed without opening an issue on

When this happens, inform the researcher why it is not a vulnerability. It is up to the discretion of the Security Engineer whether to close the report as "Informative", "Not Applicable", or "Spam".

Reports potentially affecting third parties

When GitLab receives reports, via HackerOne or other means, which might affect third parties, the reporter will be encouraged to report the vulnerabilities upstream. On a case-by-case basis, e.g. for urgent or critical issues, GitLab might proactively report security issues upstream while being transparent with the reporter and making sure the original reporter is credited. GitLab team members, however, will not attempt to re-apply unique techniques observed in bug bounty submissions against unrelated third parties which might be affected.

Awarding Ultimate Licenses

GitLab reporters with 3 or more valid reports are eligible for a 1-year Ultimate license for up to 5 users. As per the H1 policy, reporters will request the license through a comment on one of their issues. When a reporter has requested a license, the following steps should be taken:

  1. Validate that the three reports were valid. That means they are Triaged or Resolved.
  2. Validate that the three reports have not been used to obtain a previous license.
  3. If the reports are not valid, respond to the reporter on H1 explaining the reason the license is not being issued.
  4. If the reports are valid, create the license on,
    • For Name use the reporter's full name if available, otherwise their H1 handle
    • For Company use H1 Reporter Award
    • For Email use the reporter's [username] email address
    • User Count is up to 5
    • GitLab Plan is Ultimate
    • The license should start the day of issue and expire in 1 year
  5. Enter the associated license information in the H1 License Award sheet
  6. Reply to the report on H1 using the 20 - Ultimate License Creation template.

The license will be sent to the reporter by the License app. If the reporter claims that the license has not arrived, the app can be used to resend the license. When that happens, the creation of a new license should be avoided.
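Steps 1 and 2 above amount to a simple eligibility check. A hypothetical sketch follows; the report fields (`state`, `used_for_award`) are assumptions for illustration, not the HackerOne API.

```python
# Hypothetical eligibility check mirroring steps 1-2; the report fields
# ("state", "used_for_award") are assumptions, not the HackerOne API.

VALID_STATES = {"Triaged", "Resolved"}

def eligible_for_license(reports: list) -> bool:
    """At least three valid reports that have not backed a previous award."""
    unused_valid = [
        r for r in reports
        if r["state"] in VALID_STATES and not r.get("used_for_award", False)
    ]
    return len(unused_valid) >= 3

print(eligible_for_license([
    {"state": "Triaged"}, {"state": "Resolved"}, {"state": "Triaged"},
]))  # True
```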

Security Dashboard Review

Frequency: Daily

AppSec engineers are responsible for triaging the findings of the GitLab security tools. This role has two primary functions:

  1. Validate the findings and handoff to engineering for correction
  2. Provide feedback to the Secure team

For the dashboards to review, please see triage rotation above.


For each finding:

Dependency Updates

If a vulnerability is identified in a product dependency, the appsec engineer should follow the security development workflow to create a merge request to update the dependency in all supported versions. The merge request should be opened in the GitLab Security repo so that the dependency gets updated in supported backports as well. Vulnerabilities determined to be Critical or High should have merge requests created when identified. Medium and Low vulnerabilities will be addressed by best effort, but always within the 90-day SLA.

The goal of this process is to update dependencies as quickly as possible, and reduce the impact on development teams for minor updates. If an upgrade to a new major version is required, it might be necessary for the update to be handled directly by the responsible development team. In the future, this step could be replaced by auto remediation.
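The remediation timelines above can be expressed as a small severity-to-deadline mapping. This is an illustrative sketch: only the 90-day SLA comes from the text, and treating Critical/High findings as due the day they are identified is an assumption.

```python
from datetime import date, timedelta

# Sketch of the remediation timelines above. Only the 90-day SLA is stated in
# the text; treating Critical/High as due the day they are identified is an
# assumption for illustration.

SLA_DAYS = {"critical": 0, "high": 0, "medium": 90, "low": 90}

def remediation_due(severity: str, identified: date) -> date:
    """Latest date by which a dependency vulnerability should be remediated."""
    return identified + timedelta(days=SLA_DAYS[severity.lower()])

print(remediation_due("medium", date(2020, 1, 1)))  # 2020-03-31
```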

When a Patch is Ready

When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.

This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.


We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier it is obtained the better; it should be requested either during or immediately after a fix is prepared.

We currently request CVEs either through the HackerOne team or directly through MITRE's webform. Keep in mind that some of our security releases contain security-related enhancements which may not have an associated CWE or vulnerability. These issues do not require a CVE since there's no associated vulnerability.
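The 30-day disclosure window is simple date arithmetic; a minimal sketch:

```python
from datetime import date, timedelta

def disclosure_date(patch_release: date, embargo_days: int = 30) -> date:
    """Public disclosure happens 30 days after the patch is released."""
    return patch_release + timedelta(days=embargo_days)

print(disclosure_date(date(2020, 3, 4)))  # 2020-04-03
```

Since the disclosure date is fixed by the release date, requesting the CVE during or immediately after fix preparation leaves the most slack before the window closes.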

On Release Day

On the day of the security release several things happen in order:

Once all of these things have happened notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Public disclosure should be requested if they have not objected to doing so. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.

Swag for Reports

GitLab awards swag codes for free GitLab swag for any report that results in a security patch. Limit: 1 per reporter. When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.

Handling Disruptive Researcher Activity

Even though many of our 3rd-party dependencies, hosted services, and the static site are listed explicitly as out of scope, they are sometimes targeted by researchers. This results in disruption to normal GitLab operations. In these cases, if a valid email can be associated with the activity, a warning such as the following should be sent to the researcher using an official channel of communication such as ZenDesk.

Dear Security Researcher,

The system that you are accessing is currently out-of-scope for our bounty
program or has resulted in activity that is disruptive to normal GitLab
operations. Reports resulting from this activity may be disqualified from
receiving a paid bounty. Continued access to this system causing disruption to
GitLab operations, as described in policy under "Rules of Engagement,
Testing, and Proof-of-concepts", may result in additional restrictions on
participation in our program:

  Activity that is disruptive to GitLab operations will result in account bans and disqualification of the report.

Further details and some examples are available in the full policy available at:

Please contact us at with any questions.

Best Regards,

Security Department | GitLab

External Code Contributions

We have a process in place to conduct security reviews for externally contributed code, especially if the code functionality includes any of the following:

The Security Team works with our Community Outreach Team to ensure that security reviews are conducted where relevant. For more information about contributing, please reference the Contribute to GitLab page.

Security Questionnaires for Customers

Some customers, to keep up with regulations that impact their business, need to understand the security implications of installing any software - including software like GitLab.


GitLab believes in Transparency, so we publish the majority of our processes and policies online in our Handbook. One of the reasons we do this is so that customers can serve themselves to get access to the information they need to properly assess how we manage risk and align with the security postures of our customers.

We recommend that customers and prospects review our GitLab Security Compliance Controls handbook page and our Security Trust Center before submitting a questionnaire to Field Security, and that our Sales and Solutions Architects refer customers and prospects to these as an initial step.

Even with this, we frequently receive requests to fill out security questionnaires from customers and prospects. In order to be efficient and provide the highest level of service possible for our customers and prospects, we require that questionnaires that need a tailored response meet certain thresholds:

For customers who do not meet the criteria above, we can provide a completed SIG and CAIQ Questionnaire, and refer them to the GitLab Security Compliance Controls handbook page so that they can self-serve.

An overview of the process for responding to customer requests is:

graph TD;
  id1[Public statements]-->id2[Solution_Architect];
  id2-->id3[SA Triage issue];

The detailed process for responding to customer requests is:

  1. Refer a customer to our public statements on security here and here
  2. If a customer still has questions that need to be discussed, you can engage a Solutions Architect in that discussion.
  3. If the customer still needs a specific questionnaire filled out, or requests a copy of GitLab's penetration test report without a questionnaire, create a confidential issue on the appropriate SA Triage board using the Vendor Security Assessment template with the label Security Audit, and request the completion of that document
  4. The SA team will take the first pass at the questionnaire using /security/ and this folder as a reference.
  5. Once the SA team has completed what they can, the questionnaire will go to the security team for additional answers.
  6. We always want to respond immediately to customer questions, but when everything is urgent, nothing is. In order to maintain the ability to respond to truly urgent requests, the security team requests ten (10) business days to complete the review, from the time it is labelled for Field Security review. In many cases we can turn these around more quickly, so every effort will be made to meet the requested deadline.
  7. Once the questionnaire is complete, it will be peer reviewed and approved for release to the customer.
  8. File the completed questionnaire in the example folder for future reference.
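Counting ten business days can be ambiguous, so here is one way to compute the review deadline, assuming business days are Monday through Friday and ignoring holidays:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Count forward the given number of business days (Mon-Fri), skipping weekends."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0 = Monday .. 4 = Friday
            remaining -= 1
    return current

# A review labelled on Monday 2020-03-02 would be due two calendar weeks later.
print(add_business_days(date(2020, 3, 2), 10))  # 2020-03-16
```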

Vulnerability Management

Vulnerability Management is the recurring process of identifying, classifying, prioritizing, mitigating, and remediating vulnerabilities. This overview will focus on infrastructure vulnerabilities and the operational vulnerability management process. This process is designed to provide insight into our environments, leverage GitLab for vulnerability workflows, promote healthy patch management among other preventative best-practices, and remediate risk; all with the end goal to better secure our environments.

To achieve these goals, we’ve partnered with Tenable and have deployed their software-as-a-service (SaaS) solution as our vulnerability scanner. It allows us to focus on what is important: scanning for vulnerabilities, analyzing, and ingesting vulnerability data into GitLab as the starting point for our vulnerability management process. For more information, please visit the vulnerability management overview.

Third Party Vendor Security Review

The Security Compliance team performs annual vendor security reviews of services/tools that GitLab uses as a company and that potentially process GitLab sensitive data. The security review is triggered in the Finance Procure to Pay process for new vendors based on the data classification identified by the business owner. Any data confirmed as RED, ORANGE, or YELLOW requires a security review for a recommendation, or to record any potential risks through the risk exception process. The vendor security review process happens in tandem with the Finance Vendor and Contract Approval Workflow.

Package Signing

The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. Those runbooks also point out that the management of the keys is handled by the Security team and not the Build team. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.

Risk Management

GitLab's risk management handbook page details our approach to organizational risk management, risk assessments, and risk acceptance. On this page you'll find procedures for documenting risk in a way that best informs business decisions at GitLab.

Annual 3rd-Party Security Testing

Along with the internal security testing done by the Application Security, Security Research, and Red teams, GitLab annually contracts a 3rd-party penetration test of our infrastructure. For more information on the goals of these exercises, please see our Penetration Testing Policy.

The following process is followed for these annual tests:

  1. The Application Security team will partner with the Security Operations and other relevant teams to define the scope of the test.
  2. The Infrastructure team will be notified in accordance with their procedures.
  3. The Application Security team will manage the relationship with the 3rd-party vendor. Included in this role will be communicating the chosen scope and soliciting feedback.
  4. Based on feedback from all parties, testing dates will be defined and communicated to teams for appropriate actions.
  5. Testing will be done by the 3rd-party vendor and the results communicated to GitLab.
  6. The Application Security team will triage the findings and create issues in accordance with the Issue Triage process.

Obtaining the Report

GitLab customers can request a redacted copy of the report. For steps on how to do so, please see our External Testing page.