We enhance the security posture of our company, products, and client-facing services. The security team works cross-functionally inside and outside GitLab to meet these goals. The security team does not work directly on security-centric features of our platform—these are handled by the development teams. The specialty areas of our team reflect the major areas of information security.
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
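The automated issue triage mentioned above can start very small. A minimal sketch, purely illustrative and not GitLab's actual tooling (the data shapes and label names are assumptions), that flags security issues still missing a priority label:

```python
# Illustrative automated triage: flag any issue that carries the
# ~security label but no Security Level (~SL1-~SL4) priority label.
# Issue dicts here are a hypothetical shape, not a real API payload.

SL_LABELS = {"SL1", "SL2", "SL3", "SL4"}

def needs_triage(issue):
    """Return True if a security issue has no Security Level label yet."""
    labels = set(issue.get("labels", []))
    return "security" in labels and not (labels & SL_LABELS)

def triage_queue(issues):
    """Return the issues that still need an ~SL label, oldest first."""
    flagged = [i for i in issues if needs_triage(i)]
    return sorted(flagged, key=lambda i: i.get("created_at", ""))
```

In practice the issue dicts would come from an issue-tracker API, and the queue would feed a report or a chat notification.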
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:
Security Operations specialists respond to incidents. This is often a fast-paced and stressful environment, where responding quickly and maintaining one's composure is critical. Initiatives for this specialty also include:
Abuse Operations specialists investigate malicious use of our systems. Initiatives for this specialty include:
Compliance specialists enable Sales by achieving the standards required by our customers. This includes SaaS, on-prem, and open source instances. Initiatives for this specialty also include:
Threat Intelligence specialists research and provide information about specific threats, helping us protect against the types of attacks that could cause the most damage. Initiatives for this specialty also include:
Strategic Security specialists focus on holistic changes to policy, architecture, and processes to reduce entire categories of future security issues. Initiatives for this specialty also include:
Security Research specialists conduct internal testing against GitLab assets, and against FOSS that is critical to GitLab products and operations. Initiatives for this specialty also include:
Here are the planned priorities for 2018 hires:
The security team will collaborate with development and product management for security-related features in GitLab.
We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.
Apply the ~security label. Please use confidential issues for topics that should only be visible to team members at GitLab.
Use the #security chat channel for questions that don't seem appropriate for the issue tracker or the internal email address.
At a high level, the topic of security encompasses keeping GitLab.com safe, from the perspective of the application, the infrastructure, and the organization.
We track progress on security-related issues across the spectrum through an "always WIP" Risk Assessment and its associated (confidential) meta issue, which is kept up to date with the top 10 actions the team is working on.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).
Label these issues with ~security and, as appropriate, also ~customer. Add Security Level priority labels (e.g. ~SL3) to indicate perceived priority inside the Security Team.
The reasoning behind adding an ~SL label to each of these issues is that someone should have considered the urgency and impact of every issue, and this is best done at the time the issue is created, when the information and context are "fresh" in your mind. It is OK to change the assessment and label at a later date upon reflection. When issues are filed without an ~SL label, it is unclear whether the label is missing due to lack of urgency / impact, or due to a missed step in the process.
Use the following as a guideline to determine which Security Level Priority label to use for bugs and feature proposals. For this, consider the likelihood and impact of a security incident that could result from this issue not being resolved.
| Likelihood \ Impact | I1 - High | I2 - Medium | I3 - Low |
|---------------------|-----------|-------------|----------|
| L1 - High           |           |             |          |
| L2 - Medium         |           |             |          |
| L3 - Low            |           |             |          |
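The matrix pairs each likelihood/impact combination with an SL label. As a purely illustrative sketch, assuming one plausible mapping (the actual cell values are maintained by the Security Team and are not reproduced here):

```python
# Illustrative only: look up a Security Level label from a
# likelihood/impact pair. ASSUMED_MATRIX is an example mapping,
# not the Security Team's actual matrix values.

ASSUMED_MATRIX = {
    ("L1", "I1"): "SL1", ("L1", "I2"): "SL1", ("L1", "I3"): "SL2",
    ("L2", "I1"): "SL1", ("L2", "I2"): "SL2", ("L2", "I3"): "SL3",
    ("L3", "I1"): "SL2", ("L3", "I2"): "SL3", ("L3", "I3"): "SL3",
}

def security_level(likelihood, impact):
    """Return the ~SL label for a likelihood/impact pair."""
    return ASSUMED_MATRIX[(likelihood, impact)]
```

The point of the table form is that assigning a label reduces to two questions: how likely is an incident, and how bad would it be.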
Issues with an ~SL2 rating should be immediately brought to the attention of the relevant product managers and team leads by pinging them in the issue, and/or escalating via email and chat if they are unresponsive.
Issues with an ~SL1 rating are given priority over all other releases and should be patched immediately.
Issues with an ~SL2 rating will be scheduled for the next security release, which may be days or weeks ahead depending on severity and on other issues that are waiting for patches. An ~SL2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.
Issues with an ~SL3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated ~SL3, but the researcher has provided a disclosure window of 30 days or less, the issue may receive an ~SL2 rating to ensure that it is patched before disclosure. Any ~SL3 issues that are approaching their public disclosure window can be re-assigned an ~SL2 rating for the same reason.
An ~SL4 rating is also available. ~SL4 is used for issues that are suggestions to improve security without a well-defined path for implementation, vulnerabilities that are low risk but would require complex changes to the application or architecture to fix, or other long-term issues. ~SL4 issues have no defined schedule for closure.
For systems built (or significantly modified) by functional groups that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened.
The current process for requesting an internal application security review is:
As part of the app sec review process, a security issue is created to track the progress of the review. In those issues, labels and milestones can be added. It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
The security team plays a large role in defining procedures for defending against and dealing with spam. Common targets for spam are public snippets, projects, issues, merge requests, and comments. Advanced techniques for dealing with these types of spam are detailed in the Spam Fighting runbook.
GitLab has an "always WIP" Risk Assessment and all team members are encouraged to participate. The Risk Assessment consists of a list of all risks or threats to the GitLab infrastructure and GitLab as a company, their likelihood of occurring, the impact should they occur, and what actions can be taken to prevent these risks from damaging the company or mitigate the damage should they be realized.
The risk assessment is stored as a Google Sheet (search Google Drive for "Risk Assessment"; make sure you are searching for spreadsheets shared with GitLab.com) and is available to all team members. It should not be shared with people outside of the company without permission.
The format of the Risk Assessment may seem intimidating at first. If you do not know what values to use for risk ratings, impact ratings, likelihoods or any other value leave them blank and reach out to the Security Team to help you determine appropriate values. It is more important to have all risks documented than it is to have all values completed when adding new risks. Guidelines and instructions for how to add a risk and how to calculate each rating or score are included on the "Instructions" tab.
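A common way such spreadsheets compute a risk rating is likelihood multiplied by impact. A minimal sketch under that assumption (the authoritative formulas live on the spreadsheet's "Instructions" tab), which also tolerates the blank values the guidance above allows:

```python
# Illustrative risk scoring: rating = likelihood x impact, with None
# for incomplete entries (blanks are fine; document the risk first,
# score it later). Not the actual Risk Assessment formulas.

def risk_score(likelihood, impact):
    """Multiply numeric likelihood and impact ratings; None if either is blank."""
    if likelihood is None or impact is None:
        return None
    return likelihood * impact

def top_risks(risks, n=10):
    """Return the n highest-scored risks, ignoring unscored entries."""
    scored = [r for r in risks if risk_score(r["likelihood"], r["impact"]) is not None]
    return sorted(scored,
                  key=lambda r: risk_score(r["likelihood"], r["impact"]),
                  reverse=True)[:n]
```

A `top_risks(risks)` view like this is essentially what the "top 10 actions" meta issue summarizes.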
GitLab receives vulnerability reports by various pathways, including:
For any reported vulnerability:
Communicate about the vulnerability only in `#dev` or in other non-public ways, even if there is reason to believe that the vulnerability is already out in the public domain (e.g. the original report was made in a public issue that was later made confidential).
GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here.
Add the ~HackerOne label to these issues, for later reporting and tracking.
If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.
Once the report has been clarified, follow the "regular flow" described above.
If a report violates the rules of GitLab's bug bounty program use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.
If the report is invalid (in your determination) or does not pose a security risk to GitLab or GitLab users it can be closed without opening an issue on GitLab.com. When this happens inform the researcher why it is not a vulnerability and close the issue as "Informational". HackerOne offers the option to close an issue as "Not Applicable" or "Spam". Both of these categories result in damage to the researcher's reputation and should only be used in obvious cases of abuse.
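The handling rules above amount to a small decision procedure. A hypothetical sketch (the state names follow the HackerOne states mentioned above; the function and parameter names are illustrative):

```python
def hackerone_disposition(clear, abusive, valid_risk):
    """Suggest the next step for a HackerOne report, per the flow above.

    clear:      the report contains enough information to assess it
    abusive:    the researcher acted in a dangerous or malicious way
    valid_risk: the finding is valid and poses a real security risk
    """
    if not clear:
        return "move to Needs More Info and ask the researcher"
    if abusive:
        return "inform researcher of the violation; no credit"
    if not valid_risk:
        return "explain why, then close as Informational"
    return "open a confidential GitLab issue and follow the regular flow"
```

Note that "Not Applicable" and "Spam" are deliberately absent from the normal path: they damage the researcher's reputation and are reserved for obvious abuse.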
When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.
This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier it is obtained the better; request it during or immediately after preparing a fix.
We currently request CVEs either through the HackerOne team or directly through MITRE's webform. Keep in mind that some of our security releases contain security-related enhancements which may not have an associated CWE or vulnerability. These issues do not require a CVE since there is no associated vulnerability.
On the day of the security release several things happen in order:
Once all of these things have happened notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Public disclosure should be requested if they have not objected to doing so. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.
GitLab awards swag codes for free GitLab swag for any report that results in a security patch (limit: one per reporter). When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.
Some customers, to keep up with regulations that impact their business, need to understand the security implications of installing any software - including software like GitLab.
The current process for responding to customer requests is:
Assign the issue to a member of Security to own the completion of that document.
GitLab maintains a custom vulnerability scanner that is used to regularly scan all GitLab assets for common vulnerabilities as well as previously patched GitLab vulnerabilities and to ensure that no GitLab security-sensitive services are accidentally exposed.
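One part of such scanning is checking that security-sensitive services are not reachable from where they shouldn't be. A minimal sketch of an exposure check, assuming a plain TCP-connect probe (hosts and port lists are placeholders, and this is not the actual scanner):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_services(host, sensitive_ports):
    """Return the subset of sensitive ports that are reachable on host."""
    return [p for p in sensitive_ports if is_port_open(host, p)]
```

Run from outside the network perimeter, a call like `exposed_services(host, [5432, 6379])` should come back empty; anything it returns is a service that is accidentally exposed.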
Details on this scanner and how it is configured are available to all team members in a Google Doc entitled "Vulnerability Scanner Config".
The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. Those runbooks also point out that the management of the keys is handled by the Security team and not the Build team. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.