Security

Security Vision

We enhance the security posture of our company, products, and client-facing services. The security team works cross-functionally inside and outside GitLab to meet these goals. The security team does not work directly on security-centric features of our platform—these are handled by the development teams. The specialty areas of our team reflect the major areas of information security.

Security Department

Security Automation

Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:

Application Security

Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:

Security Operations

Security Operations specialists respond to incidents. This is often a fast-paced and stressful environment, where responding quickly and maintaining one's composure is critical. Initiatives for this specialty also include:

Abuse Operations

Abuse Operations specialists investigate malicious use of our systems. Initiatives for this specialty include:

Compliance

Compliance specialists enable Sales by achieving the compliance standards required by our customers. This includes SaaS, on-prem, and open source instances. Initiatives for this specialty also include:

Threat Intelligence - Future

Threat intelligence specialists research and provide information about specific threats to help us protect against the types of attacks that could cause the most damage. Initiatives for this specialty also include:

Strategic Security - Future

Strategic security specialists focus on holistic changes to policy, architecture, and processes to reduce entire categories of future security issues. Initiatives for this specialty also include:

Security Research - Future

Security research specialists conduct internal testing against GitLab assets, and against FOSS that is critical to GitLab products and operations. Initiatives for this specialty also include:

Hiring Order

Here are the planned priorities for 2018 hires:

  1. Security Automation Engineer
  2. Application Security Engineer
  3. Security Operations Engineer
  4. Abuse Operations Engineer
  5. Security Operations Engineer
  6. Application Security Engineer
  7. Security Automation Engineer
  8. Security Operations Engineer
  9. Compliance Analyst
  10. Compliance Analyst, Infrastructure specialist
  11. Compliance Analyst, Products specialist
  12. Security Operations Engineer
  13. Threat Intelligence Analyst
  14. Strategic Security Engineer
  15. Security Research Engineer

Security Team Collaborators

Security Products

The security team will collaborate with development and product management for security-related features in GitLab.

External Security Firms

We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.

Security Topics

At a high level, the topic of security encompasses keeping GitLab.com safe, from the perspective of the application, the infrastructure, and the organization.

We track progress on tackling security-related issues across the spectrum through an "always WIP Risk Assessment" and its associated (confidential) meta issue, which is kept up to date with the top 10 actions the team is working on.

Issue Triage

The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).

Use Labels

Always. Use ~security and, as appropriate, also ~bug, ~feature proposal, or ~customer. Add Security Level priority labels (~SL1, ~SL2, ~SL3) to indicate the perceived priority inside the Security Team.

The reasoning behind adding an ~SL label to each of these issues is that every issue should have had someone consider its urgency and impact, and this is best done at the time the issue is created, when the information and context are "fresh" in your mind. It is OK to change the assessment and label at a later date upon reflection. When issues are filed without an ~SL label, it is unclear whether the label is missing because the issue lacks urgency or impact, or because a step in the process was skipped.
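As a rough illustration only (not a prescribed workflow), the triage labels can also be applied programmatically through the GitLab issues REST API. The project path, issue number, and token below are hypothetical placeholders.

```python
# A minimal sketch of applying triage labels via the GitLab REST API.
# The project path, issue number, and token are hypothetical placeholders.
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fexample-project"  # URL-encoded project path (placeholder)
ISSUE_IID = 12345                         # placeholder issue number
TOKEN = "<personal-access-token>"

# Add ~security plus a perceived-priority label at triage time.
# Note: the `labels` parameter replaces the issue's existing label set,
# so fetch and merge the current labels first if they must be preserved.
resp = requests.put(
    f"{GITLAB_API}/projects/{PROJECT}/issues/{ISSUE_IID}",
    headers={"PRIVATE-TOKEN": TOKEN},
    data={"labels": "security,SL2"},
)
resp.raise_for_status()
print(resp.json()["labels"])
```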

Security Priority Labels

Use the following as a guideline to determine which Security Level Priority label to use for bugs and feature proposals. For this, consider the likelihood and impact of a security incident that could result from this issue not being resolved.

| Likelihood \ Impact | I1 - High | I2 - Medium | I3 - Low |
|---------------------|-----------|-------------|----------|
| L1 - High           | SL1       | SL1         | SL2      |
| L2 - Medium         | SL1       | SL2         | SL3      |
| L3 - Low             | SL2       | SL3         | SL3      |
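Read as a lookup, the matrix above maps a likelihood/impact pair to a label. A minimal sketch of that mapping, using the row and column names from the table:

```python
# The likelihood x impact -> Security Level mapping from the table above.
SL_MATRIX = {
    ("L1", "I1"): "SL1", ("L1", "I2"): "SL1", ("L1", "I3"): "SL2",
    ("L2", "I1"): "SL1", ("L2", "I2"): "SL2", ("L2", "I3"): "SL3",
    ("L3", "I1"): "SL2", ("L3", "I2"): "SL3", ("L3", "I3"): "SL3",
}

def security_level(likelihood: str, impact: str) -> str:
    """Return the Security Level label for a likelihood/impact pair."""
    return SL_MATRIX[(likelihood, impact)]

# Example: a medium-likelihood, high-impact issue is rated SL1.
assert security_level("L2", "I1") == "SL1"
```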

Escalating from Security to Development

Note

Issues with an SL1 or SL2 rating should be immediately brought to the attention of the relevant product managers and team leads by pinging them in the issue and/or escalating via email and chat if they are unresponsive.

Issues with an SL1 rating take priority over all other work and should be patched immediately.

Issues with an SL2 rating will be scheduled for the next security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. An SL2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.

Issues with an SL3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated SL3, but the researcher has provided a 30-day (or shorter) window for disclosure, the issue may receive an SL2 rating to ensure that it is patched before disclosure. Any SL3 issue that is approaching its public disclosure window can be re-assigned an SL2 rating.

An SL4 rating is also available. SL4 is used for issues that are suggestions to improve security without a well-defined path for implementation, vulnerabilities that are low risk but would require complex changes to the application or architecture to fix, or other long-term issues. SL4 issues have no defined schedule for closure.

More Risk Rating Examples

SL1:

SL2:

SL3:

SL4:

Security Releases

The process for security releases is described with a checklist of events on the critical release process page, as well as in the release-tools project.

Internal Application Security Reviews

For systems built (or significantly modified) by functional groups that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened.

Process

The current process for requesting an internal application security review is:

  1. Requestor creates an issue in the security tracker and adds the app sec review label.
  2. Requestor submits a triage questionnaire form for the app.
  3. Security team reviews the triage form and determines the next steps (design, testing, no test, etc.).
  4. If sent to the design phase, the requestor submits a design questionnaire. An optional call can be arranged to discuss the architecture further. At the end of this phase, the security team will scope the number of days of testing required, and the review moves to the testing phase.
  5. The requestor submits a testing questionnaire with the test environment information. The testing window is also scheduled at this point.
  6. The security team performs the assessment during the testing window and documents any findings in the app's associated component in GitLab. Once testing is finished, the review moves to the remediation phase.
  7. All findings are remediated by the appropriate teams and validated by the security engineer who performed the testing.
  8. Once all findings have been closed, the review can be considered complete.

As part of the app sec review process, a security issue is created to track the progress of the review. In those issues, labels and milestones can be added. It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
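As an illustration of step 1 only, the tracking issue could be opened through the GitLab issues API. The tracker path and token below are hypothetical placeholders; the label name is taken from the process above.

```python
# A sketch of step 1: open a confidential tracking issue in the security
# tracker with the "app sec review" label. Project path and token are
# hypothetical placeholders.
import requests

GITLAB_API = "https://gitlab.com/api/v4"
SECURITY_TRACKER = "gitlab-com%2Fsecurity"  # placeholder URL-encoded project path
TOKEN = "<personal-access-token>"

resp = requests.post(
    f"{GITLAB_API}/projects/{SECURITY_TRACKER}/issues",
    headers={"PRIVATE-TOKEN": TOKEN},
    json={
        "title": "App sec review: example-service",  # placeholder app name
        "labels": "app sec review",
        "confidential": True,
    },
)
resp.raise_for_status()
print(resp.json()["web_url"])
```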

Fighting Spam

The security team plays a large role in defining procedures for defending against and dealing with spam. Common targets for spam are public snippets, projects, issues, merge requests, and comments. Advanced techniques for dealing with these types of spam are detailed in the Spam Fighting runbook.

Always "WIP" Risk Assessment

GitLab has an "always WIP" Risk Assessment and all team members are encouraged to participate. The Risk Assessment consists of a list of all risks or threats to the GitLab infrastructure and GitLab as a company, their likelihood of occurring, the impact should they occur, and what actions can be taken to prevent these risks from damaging the company or mitigate the damage should they be realized.

The risk assessment is stored as a Google Sheet (search Google Drive for "Risk Assessment"; make sure you are searching for spreadsheets shared with GitLab.com) and is available to all team members. It should not be shared with people outside of the company without permission.

The format of the Risk Assessment may seem intimidating at first. If you do not know what values to use for risk ratings, impact ratings, likelihoods, or any other field, leave them blank and reach out to the Security Team for help determining appropriate values. It is more important to have all risks documented than it is to have all values completed when adding new risks. Guidelines and instructions for how to add a risk and how to calculate each rating or score are included on the "Instructions" tab.

Vulnerability Reports and HackerOne

GitLab receives vulnerability reports by various pathways, including:

For any reported vulnerability:

HackerOne Flow

GitLab utilizes HackerOne for its bug bounty program. Security researchers can report vulnerabilities in GitLab applications or the GitLab infrastructure via the HackerOne website. Team members authorized to respond to HackerOne reports use procedures outlined here.

If a Report is Unclear

If a report is unclear, or the reviewer has any questions about the validity of the finding or how it can be exploited, now is the time to ask. Move the report to the "Needs More Info" state until the researcher has provided all the information necessary to determine the validity and impact of the finding. Use your best judgement to determine whether it makes sense to open a confidential issue anyway, noting in it that you are seeking more information from the reporter. When in doubt, err on the side of opening the issue.

Once the report has been clarified, follow the "regular flow" described above.

If a Report Violates the Rules

If a report violates the rules of GitLab's bug bounty program use good judgement in deciding how to proceed. For instance, if a researcher has tested a vulnerability against GitLab production systems (a violation), but the vulnerability has not placed GitLab user data at risk, notify them that they have violated the terms of the bounty program but you are still taking the report seriously and will treat it normally. If the researcher has acted in a dangerous or malicious way, inform them that they have violated the terms of the bug bounty program and will not receive credit. Then continue with the "regular flow" as you normally would.

If the Report is Invalid

If the report is invalid (in your determination) or does not pose a security risk to GitLab or GitLab users, it can be closed without opening an issue on GitLab.com. When this happens, inform the researcher why it is not a vulnerability and close the report as "Informational". HackerOne offers the option to close an issue as "Not Applicable" or "Spam". Both of these categories damage the researcher's reputation and should only be used in obvious cases of abuse.

When a Patch is Ready

When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.

This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.

CVE IDs

We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier it is obtained the better; it should be requested either while the fix is being prepared or immediately afterward.

We currently request CVEs either through the HackerOne team or directly through MITRE's webform. Keep in mind that some of our security releases contain security-related enhancements which may not have an associated CWE or vulnerability. These issues do not require a CVE, since there is no associated vulnerability.

On Release Day

On the day of the security release several things happen in order:

Once all of these things have happened, notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Public disclosure should be requested if the researcher has not objected to it. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.

Swag for Reports

GitLab awards swag codes for free GitLab swag for any report that results in a security patch. Limit: one per reporter. When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.

Security Questionnaires for Customers

Some customers, to keep up with regulations that impact their business, need to understand the security implications of installing any software - including software like GitLab.

Process

The current process for responding to customer requests is:

  1. Refer a customer to our public statements on security here: /security/
  2. If a customer still has questions that need to be discussed, you can engage a Solutions Architect in that discussion.
  3. If the customer still needs a specific questionnaire filled out, engage a Solutions Architect with the label Security to own the completion of that document.
  4. The SA team will take the first pass at the questionnaire using /security/ and this folder as a reference.
  5. Once the SA team has completed what they can, the questionnaire will go to the security team for additional answers.
  6. Once the questionnaire is complete, it will need to be approved by the Director of Security for release to the customer.
  7. File the completed questionnaire in the example folder for future reference.

Vulnerability Scanning

GitLab maintains a custom vulnerability scanner that is used to regularly scan all GitLab assets for common vulnerabilities as well as previously patched GitLab vulnerabilities and to ensure that no GitLab security-sensitive services are accidentally exposed.

Details on this scanner and how it is configured are available to all team members in a Google Doc entitled "Vulnerability Scanner Config".
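The scanner itself is internal, but the "accidentally exposed service" portion of the check can be illustrated with a simple reachability probe. This is a simplified sketch, not the actual scanner; the hostnames and ports are hypothetical.

```python
# A simplified illustration of an "accidental exposure" check, not the actual
# scanner: probe hosts that should NOT answer on the public internet and flag
# anything that responds. Hostnames and ports here are hypothetical.
import socket

INTERNAL_ONLY = [
    ("admin.example.internal", 443),
    ("db.example.internal", 5432),
]

def is_exposed(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection from the outside succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in INTERNAL_ONLY:
    if is_exposed(host, port):
        print(f"ALERT: {host}:{port} is reachable and should not be")
```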

Package Signing

The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. Those runbooks also point out that the management of the keys is handled by the Security team and not the Build team. For more details that are specific to key locations and access at GitLab, find the internal Google Doc titled "Package Signing Keys at GitLab" on Google Drive.
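As a generic illustration of how a detached GPG signature can be checked (not the exact verification path used by the package repositories), assuming the public signing key is already in the local keyring and the file names are placeholders:

```python
# A minimal sketch of verifying a downloaded package against its detached GPG
# signature. Assumes the public signing key is already imported into the local
# keyring; file names are placeholders.
import subprocess

PACKAGE = "gitlab-ee.deb"        # placeholder downloaded package
SIGNATURE = "gitlab-ee.deb.asc"  # placeholder detached signature

result = subprocess.run(
    ["gpg", "--verify", SIGNATURE, PACKAGE],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("Signature OK")
else:
    print("Signature verification FAILED")
    print(result.stderr)
```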