If you have identified an urgent security issue, if something feels wrong, or you need immediate assistance from the Security Department, you have two options available:
Use the /security Slack command, for example: /security Hi security, I have a concern! Please see the following URL ...
Please be aware that the Security Department can only be paged internally. If you are an external party, please proceed to the Vulnerability Reports and HackerOne section of this page.
Both mechanisms trigger the same process: the Security responder on-call will engage with the relevant issue within the appropriate SLA, and if the SLA is breached, the Security manager on-call will be paged. When paging security, a new issue will be created to track the incident being reported. Please provide as much detail as possible in this issue to aid the Security Engineer On-Call in their investigation. The Security Engineer On-Call will typically respond to the page within 15 minutes and may have questions that require synchronous communication from the incident reporter, so when paging security, please be prepared to be available for this synchronous communication in the initial stage of the incident response.
For lower severity requests or general Q&A, GitLab Security is available in the #security channel in GitLab Slack, and the Security Incident Response Team can be alerted by mentioning @sirt-team. If you suspect you've received a phishing email and have not engaged with the sender, please see: What to do if you suspect an email is a phishing attack. If you have engaged a phisher by replying to an email, clicking on a link, sending and receiving text messages, or purchasing goods requested by the phisher, page security as described above.
Note: The Security Department will never be upset if you page us for something that turns out to be nothing, but we will be sad if you don't page us and it turns out to be something. IF UNSURE - PAGE US.
Further information on GitLab's security response program is described in our Incident Response guide.
The security department can be contacted at
email@example.com. External researchers or other interested parties should
refer to our Responsible Disclosure Policy for more information about reporting vulnerabilities. The
email address also forwards to a ZenDesk queue that is monitored by the Field Security team.
For security team members, the private PGP key is available in the Security 1Password vault. Refer to PGP process for usage.
Our Mission is to be the most transparent Security Group in the world, with a results-oriented approach.
By embracing GitLab values and being active in engaging with our customers, our staff and our product, we enhance the security posture of our company, products, and client-facing services. The security department works cross-functionally inside and outside GitLab to meet these goals.
Our Vision is as follows:
The company-wide mandate is the justification for mapping Security headcount to around 5% of total company headcount. Tying Security Department headcount growth to 5% of total company headcount ensures adequate staffing support for the following (below are highlights, not the entire list of the Security Department's responsibilities):
As the Security Department has evolved, so have the areas of engagement where Security is required to provide input. Today, the Security Department provides an essential security service to the Engineering and Development Departments and is directly engaged in Security Releases. However, the functions that the Security Department provides to the rest of the business are more consultative and advisory in nature - while the Security Department is a key player in the security questions raised throughout the business, it is not directly responsible for all of the administration or execution of the functions required for the business to operate while minimising risk.
This is expected - the security of the business should be a concern of everyone within the company and not just the domain of specialists. In addition, the role of the Security Department has expanded to address customer, investor and regulator concerns about the security and safety of using GitLab as an enterprise-ready product, both through compliance certifications and through assurance responses directly to customers, as we continue to encourage enterprise customers to use GitLab.com.
To reflect this, we have structured the Security Department around three key tenets, which drive the structure and the activities of our group. These are:
This reflects the Security Department’s current efforts to be involved in the Application development and Release cycle for Security Releases, Security Research, our HackerOne bug bounty program, Security Automation and Vulnerability Management.
The term “Product” is interpreted broadly and includes the GitLab application itself and all other integrations and code that is developed internally to support the GitLab application for the multi-tenant SaaS. Our responsibility is to ensure all aspects of GitLab that are exposed to customers or that host customer data are held to the highest security standards, and to be proactive and responsive to ensure world-class security in anything GitLab offers.
This encompasses protecting company property as well as preventing, detecting and responding to risks and events targeting the business and GitLab.com. This sub-department includes the Security Incident Response Team (SIRT), the Trust and Safety team (formerly Abuse Operations) and the Red Team.
These functions have the responsibility of shoring up and maintaining the security posture of GitLab.com to ensure enterprise-level security is in place to protect our new and existing customers.
This reflects the need for us to provide resources to our customers to assure them of the security and safety of GitLab as an application to use within their organisation and as an enterprise-level SaaS. This also involves providing appropriate support, services and resources to customers so that they trust GitLab as a Secure Company, a Secure Product, and a Secure SaaS.
It’s important to note that the three tenets do not operate independently of each other, and every team within the Security Department performs an important function in progressing these tenets. For example, Application Security may be strongly focused on Securing the Product, but it still has a strong focus on customer assurance and protecting the company in performing its functions. Similarly, Security Operations functions may be engaged on issues related to Product vulnerabilities, and the resolution path for these deeply involves improving the security of product features, as well as scoping customer impact and assisting in messaging to customers.
Broadly speaking, teams are aligned under three Sections (Product, Operations and Customer) which reflect the tenets that are their primary focus. The graph below illustrates how the teams within the Department are managed.
The teams below are primarily focused on Application Security or other functions related to Securing the Product.
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre and post deployment assessments are completed. Initiatives for this specialty also include:
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
Security research specialists conduct internal testing against GitLab assets, against FOSS that is critical to GitLab products and operations, and against vendor products being considered for purchase and integration with GitLab. Initiatives for this specialty also include:
Security research specialists are subject matter experts (SMEs) with highly specialized security knowledge in specific areas, including reverse engineering, incident response, malware analysis, network protocol analysis, cryptography, and so on. They are often called upon to take on security tasks for other security team members as well as other departments when highly specialized security knowledge is needed. Initiatives for SMEs may include:
Security research specialists are often used to promote GitLab thought leadership by engaging as all-around security experts, to let the public know that GitLab doesn’t just understand DevSecOps or application security, but has a deep knowledge of the security landscape. This can include the following:
The External Communications Team leads customer advocacy, engagement and communications in support of GitLab Security Team programs. Initiatives for this specialty include:
Operations Sub-department teams are primarily focused on protecting GitLab the business and GitLab.com.
The SIRT team is here to manage security incidents across GitLab. These stem from events that originate outside our infrastructure, as well as those internal to GitLab. This is often a fast-paced and stressful environment where responding quickly and maintaining one's composure is critical.
More than just being the first to acknowledge issues as they arise, SIRT is responsible for leading, designing, and implementing the strategic initiatives to grow the Detection and Response practices at GitLab. These initiatives include:
SIRT can be contacted on Slack via our handle @sec-ops-team or in a GitLab issue using @gitlab-com/gl-security/secops. If your request requires immediate attention, please review the steps for engaging the security on-call.
Initiatives for this specialty include:
For more information please see our Resources Section
Code of Conduct Violations are handled by the Community Advocates in the Community Relations Team. For more information on reporting these violations please see the GitLab Community Code of Conduct page.
GitLab's internal Red Team emulates adversary activity to better GitLab’s enterprise and product security. This includes activities such as:
The Security Assurance sub-department comprises the teams below. Customer Assurance projects are among their responsibilities.
Operating as a second line of defense, Security Compliance's core mission is to implement a best-in-class governance, risk and compliance program that encompasses SaaS, on-prem, and open source instances. Initiatives for this specialty include:
For additional information about the Security Compliance program see the Security Compliance team handbook page or refer to GitLab's security controls for a detailed list of all compliance controls organized by control family.
The Field Security team serves as the public face for GitLab's Security Department. The Field Security team works closely with multiple teams at GitLab including:
Initiatives for this specialty include:
For additional information about the Field Security Team see the Field Security Team handbook page.
For information on the security internship, see the Internship page.
The Security department will collaborate with development and product management on security-related features in GitLab. The Secure team must not be confused with the Security teams.
We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.
Information Security Policies are reviewed annually. Policy changes are approved by the Senior Director of Security and Legal. Changes to the Data Protection Impact Assessment Policy are approved by GitLab's Privacy Officer.
Information security considerations such as regulatory, compliance, confidentiality, integrity and availability requirements are most easily met when companies employ centrally supported or recommended industry standards. While GitLab operates under the principle of least privilege, we understand that centrally supported or recommended industry technologies are not always feasible for a specific job function or company need. Deviations from the aforementioned standard or recommended technologies are discouraged. However, a deviation may be considered provided that there is a reasonable, justifiable business and/or research case for an information security policy exception; resources are sufficient to properly implement and maintain the alternative technology; the process outlined in this and other related documents is followed; and other policies and standards are upheld.
In the event an employee requires a deviation from the standard course of business or otherwise allowed by policy, the Requestor must submit a Policy Exception Request to IT Security, which contains, at a minimum, the following elements:
The Policy Exception Request should be used to request exceptions to information security policies, such as the password policy, or when requesting the use of a non-standard device (laptop).
Exception request approval requirements are documented within the issue template. The requester should tag the appropriate individuals who are required to provide an approval per the approval matrix.
If the business wants to appeal an approval decision, the appeal should be sent to Legal at firstname.lastname@example.org. Legal will draft an opinion as to the proposed risks to the company if the deviation were granted. Legal's opinion will be forwarded to the CEO and CFO for final disposition.
Any deviation approval must:
~meta and backend tasks, and a catch-all for anything not covered by other projects.
gl-security/runbooks should only be used for documenting specifics that would increase risk and/or have customer impact if publicly disclosed.
GitLab.com environment, consider if it's possible to release when the ~security issue becomes non-confidential. This group can also be used for private demonstration projects for security issues.
@trust-and-safety in the channel to alert the team to anything urgent.
#security-department-standup - Private channel for daily standups.
#incident-management and other infrastructure department channels
#security-alert-manual - New reports for the security department from various intake sources, including ZenDesk and new HackerOne reports.
#hackerone-feed - Feed of most activity from our HackerOne program.
#abuse* - Multiple channels for different notifications handled by the Security Department.
Security crosses many teams in the company, so you will find
issues across all GitLab projects, especially:
When opening issues, please follow the Creating New Security Issues process for using labels and the confidential flag.
The Security Department tracks their OKRs using the boards in the table below. These boards aggregate work from the many subgroups and projects used by the various subteams to track their work and projects. High level issues and open issues that map directly to an OKR should be added to the board for the quarter to provide visibility into the current work of the security department.
When creating an issue that maps to an OKR, apply the following group level labels so that it appears in the board:
The Security Management:: label for the appropriate team. For example:
~Security Management::Security Automation Team
These labels can also be applied to MRs related to the work.
Any work that is related to an OKR is tracked with an issue in the appropriate team or project tracker and linked to the OKR issue as related.
Larger initiatives that span the scope of multiple teams or projects may require a Planning handbook page to further develop requirements.
The definitions, processes and checklists for security releases are described in the release/docs project.
The policies for backporting changes follow Security Releases for GitLab EE.
For critical security releases, refer to Critical Security Releases in the release/docs project.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).
New security issues should follow these guidelines when being created on GitLab.com: Mark the issue confidential if unsure whether the issue is a potential vulnerability or not. It is easier to open up an issue that should have been public than to remediate an issue that should have been confidential. Consider adding the /confidential quick action to a project issue template.
Label with ~security at a minimum.
Add ~customer if the issue is a result of a customer report.
~internal customer should be added by team members when the issue impacts GitLab operations.
Add ~keep confidential if needed. If possible, avoid this by linking resources only available to GitLab employees, for example, the originating ZenDesk ticket. Label the link with (GitLab internal) for clarity.
Occasionally, data that should remain confidential, such as the private project contents of a user that reported an issue, may get included in an issue. If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.
For review by the Application Security team, @ mention @gitlab-com/gl-security/appsec.
For more immediate attention, refer to Engaging security on-call.
If an issue is determined to be a vulnerability, the security engineer will ensure that the issue is labelled as a ~bug and not a ~feature.
The presence of the ~security and ~bug labels modifies the standard severity labels (~S1 through ~S4) by additionally taking into account likelihood as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.
The intent of tying ~S/~P labels to remediation times is to measure and improve GitLab's response time to security issues, to consistently meet or exceed industry standard timelines for responsible disclosure. Mean time to remediation (MTTR) is a metric that may be evaluated by users as an indication of GitLab's commitment to protecting our users and customers. It is also an important measurement that security researchers use when choosing to engage with the security team, either directly or through our HackerOne Bug Bounty Program.
| Severity | Priority | Time to mitigate | Time to remediate |
|----------|----------|------------------|-------------------|
| S1 | P1 | Within 24 hours | On or before the next security release |
| S2 | P2 | N/A | Within 60 days |
| S3 | P3 | N/A | Within 90 days |
| S4 | P4 | N/A | Best effort unless risk accepted |
For ~security issues that are ~bugs, the security engineer assigns the due date, which is the target date of when fixes should be ready for release. This due date should account for the Time to remediate times above, as well as the monthly security releases on the 28th of each month. For example, suppose today is October 1st, and a ~security issue that must be addressed within 60 days is opened. The 60-day deadline is November 30th; therefore, it must catch the November 28th security release. Furthermore, the Security Release Process deadlines say that the code fix should be ready by November 23rd, so the due date in this example should be November 23rd.
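The due-date arithmetic above can be sketched in a few lines. The monthly release on the 28th and the 60/90-day remediation windows come from the text; the 5-day "code ready" buffer is an illustrative assumption chosen to reproduce the November 23rd example, not an official process value.

```python
from datetime import date, timedelta

# Remediation windows from the severity table (S1 has no due date and
# S4 is best effort, so neither appears here).
REMEDIATION_DAYS = {"S2": 60, "S3": 90}
# Assumed gap between "code ready" and the release on the 28th.
CODE_READY_BUFFER = timedelta(days=5)

def next_release_on_or_before(deadline: date) -> date:
    """Latest monthly security release (the 28th) on or before the deadline."""
    candidate = date(deadline.year, deadline.month, 28)
    while candidate > deadline:
        # Step back to the 28th of the previous month.
        last_of_prev = candidate.replace(day=1) - timedelta(days=1)
        candidate = date(last_of_prev.year, last_of_prev.month, 28)
    return candidate

def due_date(opened: date, severity: str) -> date:
    """Target date by which the fix should be ready for release."""
    deadline = opened + timedelta(days=REMEDIATION_DAYS[severity])
    release = next_release_on_or_before(deadline)
    return release - CODE_READY_BUFFER

# Worked example from the text: issue opened October 1st, 60-day window.
print(due_date(date(2023, 10, 1), "S2"))  # 2023-11-23
```

The same helper covers the 90-day window by passing "S3"; the target release simply shifts to the December 28th security release.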
S1 ~security issues do not have a due date since they should be fixed as soon as possible, with a security release made available as soon as possible to accommodate them. These issues are worked by a security engineer on-call to mitigate the risk within 24 hours, meaning the risks relevant to our customers have been removed or reduced as much as possible within 24 hours. The remediation timeline target is never greater than our next security release, but remediation in this case would not impact customer security risk.
Note that some
~security issues may not need to be part of a code release, such as
an infrastructure change. In that case, the due date will not need to account for
monthly security release dates.
On occasion, the due date of such an issue may need to be changed if the security team needs to move up or delay a monthly security release date to accommodate for urgent problems that arise.
Issues labelled with ~feature labels are not considered vulnerabilities, but rather security enhancements or defense-in-depth mechanisms. This means the security team is not required to set the ~S/~P labels or follow the vulnerability triage process, as these issues will be triaged by product or another appropriate team owning the component. By contrast, note that issues with S4 labels are considered Low severity vulnerabilities and will be handled according to the standard vulnerability triage process.
The security team may also apply ~internal customer and ~security request to an issue as an indication that the feature is being requested by the security team to meet additional customer requirements, compliance or operational needs in support of GitLab.com.
The security engineer must @ mention the relevant product manager (@pm) for scheduling.
The product manager will assign a Milestone that has been assigned a due date to communicate when work will be assigned to engineers. The due date field, severity label, and priority label on the issue should not be changed by PMs, as these labels are intended to provide accurate metrics on ~security issues and are assigned by the security team. Any blockers, technical or organizational, that prevent ~security issues from being addressed as our top priority should be escalated up the appropriate management chains.
Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.
Issues with an S2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if needed.
Issues with an S1 rating have priority over all other issues and should be considered for a critical security release.
Issues with an S2 rating should be scheduled for the next security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. An S2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.
Issues with an S3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated S3, but the reporter has provided a 30-day time window (or less) for disclosure, the issue may be escalated to ensure that it is patched before disclosure.
It is possible that a ~security issue becomes irrelevant after it was initially triaged, but before a patch was implemented. For example, the vulnerable functionality was removed or significantly changed resulting in the vulnerability not being present anymore.
If an engineer notices that an issue has become irrelevant, they should @-mention the person who triaged the issue to confirm that the vulnerability is no longer present. Note that it might still be necessary to backport a patch to previous releases according to our maintenance policy. If no backports are necessary, the issue can be closed.
For information on secure coding initiatives, please see the Secure Coding Training page.
Gearing ratios are used as Business Drivers to forecast long term financial goals by function. The primary gearing ratio for Security is Bug Bounty expenditure:
An illustration: GitLab is worth $2.5 billion and a significant compromise could cost GitLab $250 million. A 1% ratio = $2.5 million budget. Likewise, 1% of budget = $25,000 top reward.
Approximate monthly budget should be set at total budget divided by 12. It should be understood that our bug bounty payouts are largely unpredictable and fluctuate based on the following:
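The gearing-ratio arithmetic above can be sketched as a small helper. The figures mirror the illustration (a $250 million compromise cost, a 1% ratio for the annual budget, and 1% of that budget as the top reward); all numbers are illustrative examples, not actual GitLab budget policy.

```python
def bounty_figures(compromise_cost: float, ratio: float = 0.01) -> dict:
    """Derive illustrative bug bounty figures from an estimated compromise cost."""
    annual_budget = compromise_cost * ratio   # 1% of $250M -> $2.5M
    top_reward = annual_budget * ratio        # 1% of budget -> $25,000
    monthly_budget = annual_budget / 12       # approximate monthly spend
    return {
        "annual_budget": annual_budget,
        "top_reward": top_reward,
        "monthly_budget": round(monthly_budget, 2),
    }

figures = bounty_figures(250_000_000)
print(figures["annual_budget"])   # 2500000.0
print(figures["top_reward"])      # 25000.0
```

The monthly figure is only a planning baseline, since actual payouts fluctuate for the reasons listed below.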
For systems built (or significantly modified) by Departments that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened. Security reviews aim to help reduce vulnerabilities and to create a more secure product.
There are two ways to request a security review depending on how significant the changes are. It is divided between individual merge requests and larger scale initiatives.
Loop in the application security team by adding /cc @gitlab-com/gl-security/appsec in your merge request or issue.
These reviews are intended to be faster, more lightweight, and have a lower barrier of entry.
Some use cases of this are for epics, milestones, reviewing for a common security weakness in the entire codebase, or larger features.
No, code changes do not require security approval to progress. Non-blocking reviews enable our code to keep shipping fast, which aligns more closely with our values of iteration and efficiency. They operate more as guardrails than as a gate.
To help speed up a review, it's recommended to provide any or all of the following:
The current process for larger scale internal application security reviews can be found here.
Security reviews are not proof or certification that the code changes are secure. They are best effort, and additional vulnerabilities may exist after a review.
It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
GitLab receives vulnerability reports by various pathways, including:
For any reported vulnerability:
dev or in other non-public ways, even if there is reason to believe that the vulnerability is already out in the public domain (e.g. the original report was made in a public issue that was later made confidential).
Application Security team members may assign themselves as the directly responsible individual (DRI) for incoming requests to the Application Security team for a given calendar week in the Triage Rotation Google Sheet in the Security Team Drive.
The following rotations are defined:
email@example.com, which creates tickets in ZenDesk, following the AppSec ZenDesk process
Team members should not assign themselves on weeks they are responsible for the scheduled security release.
Team members not assigned as the DRI for the week should continue to triage reports when possible, especially to close duplicates or handle related reports to those they have already triaged.
Team members remain responsible for their own assigned reports.
GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here.
The #hackerone-feed Slack channel receives notifications of report status changes and comments via HackerOne's Slack integration.
For information or questions about the GitLab HackerOne program, please contact
If a reporter has questions about the order of report responses, the 06 - Duplicates/Invalid Reports Triaged First common response may be used.
~feature as defined above and would not need to be made confidential or scheduled for remediation. An issue can be created, or the reporter can be asked to create one if desired, but the report can be closed as "Informative".
Use the 01 - Duplicate common response. Include the link to the GitLab issue.
The 05 - Duplicate Follow Up, No Adding Contributors/Future Public Release common response can be used when denying the request to add as a contributor.
/h1import <report> [project] in Slack
Use the 00 - Triaged common response as a template.
04 - Bounty Award / Reviewed and Awarded Prior to Fix can be used as the template for the bounty award if approved.
Please see Handling S1/P1 Issues
If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.
Once the report has been clarified, follow the "regular flow" described above.
If a report violates the rules of GitLab's bug bounty program use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.
If the report does not pose a security risk to GitLab or GitLab users it can be closed without opening an issue on GitLab.com.
When this happens inform the researcher why it is not a vulnerability. It is up to the discretion of the Security Engineer whether to close the report as "Informative", "Not Applicable", or "Spam". "Spam" results in the maximum penalty to the researcher's reputation and should only be used in obvious cases of abuse.
Sometimes there will be a breakdown in effective communication with a reporter. While this could happen for multiple reasons, it is important that further communication follows GitLab's Guidelines for Effective and Responsible Communication. If communication with a reporter has gotten to this point, the following steps should be taken to help meet this goal.
When GitLab receives reports, via HackerOne or other means, which might affect third parties, the reporter will be encouraged to report the vulnerabilities upstream. On a case-by-case basis, e.g. for urgent or critical issues, GitLab might proactively report security issues upstream while being transparent with the reporter and making sure the original reporter is credited. GitLab team members, however, will not attempt to re-apply unique techniques observed from bug bounty submissions to unrelated third parties which might be affected.
GitLab reporters with 3 or more valid reports are eligible for a 1-year Ultimate license for up to 5 users. As per the H1 policy, reporters will request the license through a comment on one of their issues. When a reporter has requested a license, the following steps should be taken:
Name: use the reporter's full name if available, otherwise their H1 handle
H1 Reporter Award
User Count is up to 5
Use the 20 - Ultimate License Creation template.
The license will be sent to the reporter by the License app. If the reporter claims that the license has not arrived, the app can be used to resend the license. When that happens, the creation of a new license should be avoided.
AppSec engineers are responsible for triaging the findings of the GitLab security tools. This role has two primary functions.
For the dashboards to review, please see triage rotation above.
For each finding:
(using the /label ~ quick action command) corresponding to the DevOps stage and source group (consult the Hierarchy for an overview of the categories forming the hierarchy)
If a vulnerability is identified in a product dependency, the appsec engineer should follow the security development workflow to create a merge request to update the dependency in all supported versions. The merge request should be opened in the GitLab Security repo so that the dependency gets updated in supported backports as well. Vulnerabilities determined to be High should have merge requests created when identified. Low vulnerabilities will be addressed on a best-effort basis, but always within the 90-day SLA.
The goal of this process is to update dependencies as quickly as possible, and reduce the impact on development teams for minor updates. In the future, this step could be replaced by auto remediation.
If an upgrade to a new major version is required, it might be necessary for the update to be handled directly by the responsible development team.
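The remediation targets above can be sketched as a simple due-date calculation. The mapping here is an illustrative reading of this section (High findings get a merge request when identified, i.e. zero days of slack; Low findings must close within the 90-day SLA), not an official severity matrix:

```python
from datetime import date, timedelta

# Assumed mapping of the remediation targets described above.
SLA_DAYS = {"high": 0, "low": 90}

def remediation_due(severity: str, identified: date) -> date:
    """Return the latest acceptable remediation date for a dependency
    vulnerability identified on the given date."""
    return identified + timedelta(days=SLA_DAYS[severity.lower()])

print(remediation_due("low", date(2024, 1, 1)))  # -> 2024-03-31
```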
Use the Security developer workflow template.
When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.
This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability being fixed. The earlier an ID is obtained, the better; request it either during or immediately after a fix is prepared.
We currently request CVEs either through the HackerOne team or directly through MITRE's webform. Keep in mind that some of our security releases contain security-related enhancements that may not have an associated CWE or vulnerability. These issues do not require a CVE since there is no associated vulnerability.
On the day of the security release several things happen in order:
Once all of these things have happened, notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report closed as "Resolved". Request public disclosure if the researcher has not objected to it. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.
GitLab awards swag codes for free GitLab swag for any report that results in a security patch (limit: one per reporter). When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.
Even though many of our 3rd-party dependencies, hosted services, and the static
about.gitlab.com site are listed explicitly as out of scope, they are sometimes
targeted by researchers. This results in disruption to normal GitLab operations.
In these cases, if a valid email can be associated with the activity, a warning
such as the following should be sent to the researcher using an official channel
of communication such as ZenDesk.
Dear Security Researcher, The system that you are accessing is currently out-of-scope for our bounty program or has resulted in activity that is disruptive to normal GitLab operations. Reports resulting from this activity may be disqualified from receiving a paid bounty. Continued access to this system causing disruption to GitLab operations, as described in policy under "Rules of Engagement, Testing, and Proof-of-concepts", may result in additional restrictions on participation in our program: Activity that is disruptive to GitLab operations will result in account bans and disqualification of the report. Further details and some examples are available in the full policy available at: https://hackerone.com/gitlab Please contact us at firstname.lastname@example.org with any questions. Best Regards, Security Department | GitLab email@example.com https://about.gitlab.com/handbook/engineering/security/
We have a process in place to conduct security reviews for externally contributed code, especially if the code functionality includes any of the following:
The Security Team works with our Community Outreach Team to ensure that security reviews are conducted where relevant. For more information about contributing, please reference the Contribute to GitLab page.
Vulnerability Management is the recurring process of identifying, classifying, prioritizing, mitigating, and remediating vulnerabilities. This overview focuses on infrastructure vulnerabilities and the operational vulnerability management process. The process is designed to provide insight into our environments, leverage GitLab for vulnerability workflows, promote healthy patch management and other preventative best practices, and remediate risk, all with the end goal of better securing our environments.
To achieve these goals, we've partnered with Tenable and have deployed their software-as-a-service (SaaS) solution, Tenable.io, as our vulnerability scanner. Tenable.io allows us to focus on what is important: scanning for vulnerabilities, analyzing, and ingesting vulnerability data into GitLab as the starting point for our vulnerability management process. For more information, please visit the vulnerability management overview.
The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. The Distribution team is responsible for updating the package signing key. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.
Along with the internal security testing done by the Application Security, Security Research, and Red teams, GitLab annually contracts a 3rd-party penetration test of our infrastructure. For more information on the goals of these exercises, please see our Penetration Testing Policy.
The following process is followed for these annual tests:
GitLab customers can request a redacted copy of the report. For steps on how to do so, please see our External Testing page.