We enhance the security posture of our company, products, and client-facing services. The security team works cross-functionally inside and outside GitLab to meet these goals. The security team does not work directly on security-centric features of our platform—these are handled by the development teams. The specialty areas of our team reflect the major areas of information security.
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
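A minimal sketch of what such triage automation might look like. The keyword patterns and label names below are purely illustrative assumptions, not GitLab's actual triage rules:

```python
import re

# Hypothetical severity keywords -- illustrative only, not GitLab's real rules.
SEVERITY_PATTERNS = [
    ("S1", re.compile(r"remote code execution|\brce\b|account takeover", re.I)),
    ("S2", re.compile(r"sql injection|stored xss|privilege escalation", re.I)),
    ("S3", re.compile(r"reflected xss|\bcsrf\b|open redirect", re.I)),
]

def triage_labels(title, description):
    """Suggest labels for a new security issue based on keyword matching."""
    labels = ["security"]
    text = f"{title} {description}"
    for severity, pattern in SEVERITY_PATTERNS:
        if pattern.search(text):
            labels.append(severity)
            break
    else:
        labels.append("needs-triage")  # no match: route to a human reviewer
    return labels
```

A real implementation would feed these suggestions into the issue tracker rather than applying them blindly; keyword matching only narrows the queue for human review.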
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:
Security Operations specialists respond to incidents. This is often a fast-paced and stressful environment, where responding quickly and maintaining one's composure is critical. Initiatives for this specialty also include:
Abuse Operations specialists investigate malicious use of our systems. Initiatives for this specialty include:
Compliance specialists enable Sales by achieving the standards and certifications our customers require. This includes SaaS, on-prem, and open source instances. Initiatives for this specialty also include:
Threat Intelligence specialists research and provide information about specific threats to help us protect against the types of attacks that could cause the most damage. Initiatives for this specialty also include:
Strategic security specialists focus on holistic changes to policy, architecture, and processes to reduce entire categories of future security issues. Initiatives for this specialty also include:
Security research specialists conduct internal testing against GitLab assets, and against FOSS that is critical to GitLab products and operations. Initiatives for this specialty also include:
The Security team collaborates with development and product management on security-related features in GitLab. The Secure team must not be confused with the Security team.
We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.
The ~security label. Please use confidential issues for topics that should only be visible to team members at GitLab.
The #security chat channel, for questions that are not appropriate for the issue tracker or the internal email address.
At a high level, the topic of security encompasses keeping GitLab.com safe, from the perspective of the application, the infrastructure, and the organization.
We track progress on tackling security-related issues across the spectrum through an "always WIP Risk Assessment" and its associated (confidential) meta issue, which is kept up to date with the top 10 actions the team is working on.
The definitions, processes and checklists for security releases are described in the release/docs project.
The policies for backporting changes follow Security Releases for GitLab EE.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).
New security issues should follow these guidelines when being created:

- Mark the issue confidential if you are unsure whether it is a potential vulnerability or not. It is easier to make public an issue that should have been public than to remediate an issue that should have been confidential. Consider adding the /confidential quick action to a project issue template.
- Label with ~security at a minimum.
- Add ~feature proposal if appropriate.
- Add ~customer if the issue is the result of a customer report.
- Add ~keep confidential if needed. If possible, avoid this by instead linking to resources only available to GitLab employees, for example, the originating Zendesk ticket. Label the link with (GitLab internal) for clarity.
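As a sketch, these label guidelines can also be applied programmatically when opening issues via the API. The payload fields below match python-gitlab's `project.issues.create()` call; the helper function itself and its defaults are hypothetical:

```python
def build_security_issue(title, description, from_customer=False,
                         keep_confidential=False):
    """Build an issue payload following the security label guidelines.

    Sketch only: the field names match python-gitlab's
    project.issues.create() payload, but this helper is illustrative.
    """
    labels = ["security"]
    if from_customer:
        labels.append("customer")
    if keep_confidential:
        labels.append("keep confidential")
    return {
        "title": title,
        "description": description,
        "confidential": True,  # default to confidential when in doubt
        "labels": labels,
    }
```

The issue defaults to confidential, mirroring the guidance above: it is easier to open an issue up later than to contain one that should have been confidential.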
If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.
For more immediate attention, mention @gl-security with a request in the issue and use @sec-team in Slack.
The presence of ~security modifies the standard severity labels (~S1 to ~S4) by additionally taking into account likelihood, as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.
The intent of tying ~S/~P labels to release schedules is to measure and improve GitLab's response time to security issues, to consistently meet or exceed industry-standard timelines for responsible disclosure. For non-release resources, such as infrastructure changes or other artifacts that do not have to follow the monthly release schedule, the "Time to remediate" targets in the table below apply.
| Severity | Priority | Target Release | Time to remediate (non-release resources) |
|----------|----------|----------------|-------------------------------------------|
| ~S1 | ~P1 | Critical Release | Immediate |
| ~S2 | ~P2 | Next Monthly Security Release | 30 days |
| ~S3 | ~P3 | Next 1-3 Releases | 90 days |
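For non-release resources, the remediation windows in the table can be encoded directly. A small sketch, assuming the three rows correspond to ~S1 through ~S3:

```python
from datetime import date, timedelta

# Remediation targets per the table above; the severity-to-row mapping
# (S1/S2/S3) is an assumption for illustration.
REMEDIATION_TARGETS = {
    "S1": ("Critical Release", 0),                 # immediate
    "S2": ("Next Monthly Security Release", 30),   # 30 days
    "S3": ("Next 1-3 Releases", 90),               # 90 days
}

def remediation_deadline(severity, reported_on):
    """Return (target release, deadline date) for a non-release resource."""
    target, days = REMEDIATION_TARGETS[severity]
    return target, reported_on + timedelta(days=days)
```

For example, an ~S2 infrastructure finding reported on January 1 would carry a remediation deadline of January 31.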
Security issues which are not at least ~S3 are most likely ~feature proposals that should be triaged and prioritized by a product manager. This includes suggestions without a well-defined path for implementation, or suggestions requiring complex changes to the application or architecture to address. ~S4/~P4 may be used for ~feature proposals as assigned by a product manager and do not need assignment to a release.
The security engineer will add team labels (~Deploy, etc.) and any additional labels as appropriate to the issue. The engineering team lead should be @-mentioned and followed up with as necessary, as noted below for different severity levels.
Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.
Issues with an ~S2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if they are unresponsive.
Issues with an ~S1 rating have priority over all other issues and should be considered for a critical security release.
Issues with an ~S2 rating should be scheduled for the next security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. An ~S2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.
Issues with an ~S3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated ~S3, but the reporter has provided a 30-day (or shorter) window for disclosure, the issue may be escalated to ensure that it is patched before disclosure.
Many factors affect the severity ratings of security issues, but the following guideline can be used as a starting point. For this, consider severity as a combination of the likelihood and impact of a security incident that could result from the issue not being resolved.
| Likelihood \ Impact | I1 - High | I2 - Medium | I3 - Low |
|---------------------|-----------|-------------|----------|
| L1 - High   | ~S1 | ~S2 | ~S3 |
| L2 - Medium | ~S2 | ~S3 | ~S4 |
| L3 - Low    | ~S3 | ~S4 | ~S4 |
~S4 may be used for issues not requiring mitigation, but which may need to be triaged as ~feature proposals as described under labels.
- ~S1: the final determination is determined by the impact to our users (more than 50% impacted).
- ~S2: the final determination is determined by the impact to our users (between 25% and 50% impacted).
- ~S3: the final determination is determined by the impact to our users (up to 25% impacted).
- ~S4: the final determination is determined by the impact to our users (zero impact to users).
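The user-impact thresholds above map mechanically to severity labels; a minimal sketch:

```python
def severity_from_impact(pct_users_impacted):
    """Map the percentage of impacted users to a severity label, per the
    guideline above: >50% -> S1, 25-50% -> S2, up to 25% -> S3, 0% -> S4.
    Boundary handling (exactly 25% or 50%) is an assumption, since the
    source ranges overlap slightly.
    """
    if pct_users_impacted > 50:
        return "S1"
    if pct_users_impacted > 25:
        return "S2"
    if pct_users_impacted > 0:
        return "S3"
    return "S4"
```

Remember that this is only the impact axis; the final label also folds in likelihood and any mitigating or exacerbating factors.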
For systems built (or significantly modified) by functional groups that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened.
The current process for requesting an internal application security review is:
As part of the app sec review process, a security issue is created to track the progress of the review. In those issues, labels and milestones can be added. It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
The security team plays a large role in defining procedures for defending against and dealing with spam. Common targets for spam are public snippets, projects, issues, merge requests, and comments. Advanced techniques for dealing with these types of spam are detailed in the Spam Fighting runbook.
GitLab has an "always WIP" Risk Assessment and all team members are encouraged to participate. The Risk Assessment consists of a list of all risks or threats to the GitLab infrastructure and GitLab as a company, their likelihood of occurring, the impact should they occur, and what actions can be taken to prevent these risks from damaging the company or mitigate the damage should they be realized.
The risk assessment is stored as a Google Sheet (search Google Drive for "Risk Assessment"; make sure you are searching for spreadsheets shared with GitLab.com) and is available to all team members. It should not be shared with people outside of the company without permission.
The format of the Risk Assessment may seem intimidating at first. If you do not know what values to use for risk ratings, impact ratings, likelihoods or any other value leave them blank and reach out to the Security Team to help you determine appropriate values. It is more important to have all risks documented than it is to have all values completed when adding new risks. Guidelines and instructions for how to add a risk and how to calculate each rating or score are included on the "Instructions" tab.
GitLab receives vulnerability reports by various pathways, including:
For any reported vulnerability:
Keep all discussion of the vulnerability on dev or in other non-public channels, even if there is reason to believe that the vulnerability is already out in the public domain (e.g. the original report was made in a public issue that was later made confidential).
GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here.
Add a comment using the "Week to respond" template.
If the report is a ~feature proposal as defined above, it would not need to be made confidential or scheduled for remediation. An issue can be created, or the reporter can be asked to create one if desired, but the report can be closed as "Informational".
Add the ~HackerOne label to these issues, for later reporting and tracking.
Use the "Triaged - Escalated to engineering" common response as a template.
If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.
Once the report has been clarified, follow the "regular flow" described above.
If a report violates the rules of GitLab's bug bounty program use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.
If the report is invalid (in your determination) or does not pose a security risk to GitLab or GitLab users it can be closed without opening an issue on GitLab.com. When this happens inform the researcher why it is not a vulnerability and close the issue as "Informational". HackerOne offers the option to close an issue as "Not Applicable" or "Spam". Both of these categories result in damage to the researcher's reputation and should only be used in obvious cases of abuse.
When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.
This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier it is obtained the better, and it should be requested either during or immediately after a fix is prepared.
We currently request CVEs either through the HackerOne team or directly through MITRE's webform. Keep in mind that some of our security releases contain security related enhancements which may not have an associated CWE or vulnerability. These particular issues are not required to obtain a CVE since there's no associated vulnerability.
On the day of the security release several things happen in order:
Once all of these things have happened notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Public disclosure should be requested if they have not objected to doing so. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.
GitLab awards swag codes for free GitLab swag to any reports that result in a security patch. Limit: 1 per reporter. When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.
Some customers, to keep up with regulations that impact their business, need to understand the security implications of installing any software - including software like GitLab.
The current process for responding to customer requests is:
SA Backlog for the completion of that document
GitLab maintains a custom vulnerability scanner that is used to regularly scan all GitLab assets for common vulnerabilities as well as previously patched GitLab vulnerabilities and to ensure that no GitLab security-sensitive services are accidentally exposed.
Details on this scanner and how it is configured are available to all team members in a Google Doc entitled "Vulnerability Scanner Config".
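GitLab's scanner itself is internal, but the core exposure check it performs can be illustrated with a short sketch. The hosts and ports here would come from whatever asset inventory you maintain; this is not the actual scanner:

```python
import socket

def exposed_services(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept TCP connections on `host`.

    Minimal sketch of an exposure check: it only illustrates the idea of
    verifying that security-sensitive services are not reachable from a
    given vantage point.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real scanner would layer service fingerprinting and known-vulnerability checks on top of reachability, and alert when a sensitive service unexpectedly appears in the open set.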
The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. Those runbooks also point out that the management of the keys is handled by the Security team and not the Build team. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.