If you have identified an urgent security issue, if something feels wrong, or if you need immediate assistance from the Security Department, you have two options available:
/security Hi security, I have a concern! Please see the following URL ...
command. Please be aware that the Security Department can only be paged internally. If you are an external party, please proceed to the Vulnerability Reports and HackerOne section of this page.
Both mechanisms trigger the same process, and as a result, the Security Engineer On-Call will engage on the relevant issue within the appropriate SLA. If the SLA is breached, the Security manager on-call will be paged. When paging security, a new issue will be created to track the incident being reported. Please provide as much detail as possible in this issue to aid the Security Engineer On-Call in their investigation of the incident. The Security Engineer On-Call will typically respond to the page within 15 minutes and may have questions that require synchronous communication from the incident reporter. When paging security, please be prepared to be available for this synchronous communication during the initial stage of the incident response.
For lower severity requests or general Q&A, GitLab Security is available in the #security channel in GitLab Slack, and the Security Incident Response Team can be alerted by mentioning @sirt-team. Please be advised that the SLA for Slack mentions is 6 hours on business days; if you are unsure whether the issue you are contacting SIRT about is urgent, please page. If you suspect you've received a phishing email and have not engaged with the sender, please see: What to do if you suspect an email is a phishing attack. If you have engaged a phisher by replying to an email, clicking on a link, sending and receiving text messages, or purchasing goods requested by the phisher, page security as described above.
Note: The Security Department will never be upset if you page us for something that turns out to be nothing, but we will be sad if you don't page us and it turns out to be something. IF UNSURE - PAGE US.
Further information on GitLab's security response program is described in our Incident Response guide.
The security department can be contacted at security@gitlab.com. External researchers or other interested parties should refer to our Responsible Disclosure Policy for more information about reporting vulnerabilities. The security@gitlab.com email address also forwards to a ZenDesk queue that is monitored by the Risk & Field Security team.
For security team members, the private PGP key is available in the Security 1Password vault. Refer to PGP process for usage.
Our Mission is to be the most Transparent Security Group in the world, with a results-oriented approach.
By embracing GitLab values and being active in engaging with our customers, our staff and our product, we enhance the security posture of our company, products, and client-facing services. The security department works cross-functionally inside and outside GitLab to meet these goals.
Our Vision is as follows:
To help achieve the mission of being the most Transparent Security Group in the world, the Security Department has nominated a Security Culture Committee.
The company-wide mandate justifies mapping Security headcount to around 5% of total company headcount. Tying Security Department headcount growth to 5% of total company headcount ensures adequate staffing support for the following (the items below are highlights, not the entire list of responsibilities of the Security Department):
Performance reviews are a formal assessment in which managers evaluate a team member's performance, identify strengths and weaknesses, offer feedback, and set goals for future performance and career development. A good performance review is designed to facilitate conversations between team members and their managers. The benefits of performance reviews include:
There is a company-wide 360 performance review process run by the People Group which focuses on values and on gathering feedback from peers, managers, and direct reports. In addition, the People Group is developing a 9-box/talent assessment and facilitating Career Development Conversations.
The Security Department aims to add to those processes, bring them together, and deliver additional value to team members and the company. This performance review cycle will run every 6 months, starting with a pilot. The specific goals for the pilot are:
We will assess whether these goals were achieved at three points: after the conversations with Security Leadership have taken place, after 3 months, and at the end of the 6-month period via a Security Department member survey. The results will be reviewed and discussed with the People Business Partner. Upon success, parts or all of this pilot will be integrated into the company-wide review process. We may also decide to continue the pilot, or run a new one.
The framework of the performance review is as follows:
The performance review process is as follows:
As the Security Department has evolved, so has the area of engagement where Security has been required to provide input. Today, the Security Department provides an essential security service to the Engineering and Development Departments, and is directly engaged in Security Releases. However, the functions that the Security Department provides to the rest of the business are more consultative and advisory in nature - while the Security Department is a key player in the security questions raised throughout the business, it is not directly responsible for all of the administration or execution of functions that are required for the business to function while minimising risk.
This is expected - the security of the business should be a concern of everyone within the company and not just the domain of specialists. In addition, the role of the Security Department has expanded to address customer, investor, and regulator concerns about the security and safety of using GitLab as an enterprise-ready product, both through compliance certifications and through assurance responses directly to customers, as we continue to encourage enterprise customers to use GitLab.com.
To reflect this, we have structured the Security Department around three key tenets, which drive the structure and the activities of our group. These are:
This reflects the Security Department’s current efforts to be involved in the Application development and Release cycle for Security Releases, Security Research, our HackerOne bug bounty program, Security Automation, External Security Communications, and Vulnerability Management.
The term “Product” is interpreted broadly and includes the GitLab application itself and all other integrations and code that is developed internally to support the GitLab application for the multi-tenant SaaS. Our responsibility is to ensure all aspects of GitLab that are exposed to customers or that host customer data are held to the highest security standards, and to be proactive and responsive to ensure world-class security in anything GitLab offers.
This encompasses protecting company property as well as preventing, detecting, and responding to risks and events targeting the business and GitLab.com. This sub-department includes the Security Incident Response Team (SIRT), the Trust and Safety team, and the Red Team.
These functions have the responsibility of shoring up and maintaining the security posture of GitLab.com to ensure enterprise-level security is in place to protect our new and existing customers.
This reflects the need for us to provide resources to our customers to assure them of the security and safety of GitLab as an application to use within their organisation and as an enterprise-level SaaS. This also involves providing appropriate support, services, and resources to customers so that they trust GitLab as a Secure Company, a Secure Product, and a Secure SaaS.
It’s important to note that the three tenets do not operate independently of each other, and every team within the Security Department performs an important function in progressing these tenets. For example, Application Security may be strongly focused on Securing the Product, but it still has a strong focus on customer assurance and protecting the company in performing its functions. Similarly, Security Operations functions may be engaged on issues related to Product vulnerabilities, and the resolution path for these deeply involves improving the security of product features, as well as scoping customer impact and assisting in messaging to customers.
Broadly speaking, teams are aligned under three Sections (Product, Operations, and Customer) which reflect the tenets that are their primary focus. The graph below illustrates how the teams within the Department are managed.
The Security Engineering & Research teams below are primarily focused on Security Engineering or other functions related to Securing the Product.
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
Security Research specialists conduct internal testing against GitLab assets, against FOSS that is critical to GitLab products and operations, and against vendor products being considered for purchase and integration with GitLab. Initiatives for this specialty also include:
Security research specialists are subject matter experts (SMEs) with highly specialized security knowledge in specific areas, including reverse engineering, incident response, malware analysis, network protocol analysis, cryptography, and so on. They are often called upon to take on security tasks for other security team members as well as other departments when highly specialized security knowledge is needed. Initiatives for SMEs may include:
Security research specialists are often used to promote GitLab thought leadership by engaging as all-around security experts, to let the public know that GitLab doesn’t just understand DevSecOps or application security, but has a deep knowledge of the security landscape. This can include the following:
The External Communications Team leads customer advocacy, engagement and communications in support of GitLab Security Team programs. Initiatives for this specialty include:
Security Operations Sub-department teams are primarily focused on protecting GitLab the business and GitLab.com.
The SIRT team is here to manage security incidents across GitLab. These stem from events that originate outside of our infrastructure, as well as those internal to GitLab. This is often a fast-paced and stressful environment where responding quickly and maintaining one's composure is critical.
More than just being the first to acknowledge issues as they arise, SIRT is responsible for leading, designing, and implementing the strategic initiatives to grow the Detection and Response practices at GitLab. These initiatives include:
SIRT can be contacted on Slack via our handle @sirt-team or in a GitLab issue using @gitlab-com/gl-security/security-operations/sirt. If your request requires immediate attention, please review the steps for engaging the security on-call.
Trust & Safety specialists investigate and mitigate the malicious use of our systems, which is defined under Section 3 of the GitLab Website Terms of Use. This activity primarily originates from inside our infrastructure.
Initiatives for this specialty include:
For more information, please see our Resources section.
Code of Conduct Violations are handled by the Community Advocates in the Community Relations Team. For more information on reporting these violations please see the GitLab Community Code of Conduct page.
GitLab's internal Red Team emulates adversary activity to better GitLab’s enterprise and product security. This includes activities such as:
The Security Assurance sub-department comprises the teams below. Customer Assurance projects are among their responsibilities.
Operating as a second line of defense, Security Compliance's core mission is to implement a best-in-class governance, risk, and compliance program that encompasses SaaS, on-prem, and open source instances. Initiatives for this specialty include:
For additional information about the Security Compliance program see the Security Compliance team handbook page or refer to GitLab's security controls for a detailed list of all compliance controls organized by control family.
The Risk and Field Security team serves as the public representation of GitLab's internal Security function. We are tasked with providing high levels of security assurance to internal and external customers. This means working with multiple departments to document requests, analyze the risk associated with those requests, and provide value-added remediation recommendations. As a member of the Risk and Field Security team, you will support GitLab's growth by effectively and appropriately identifying, tracking, and treating Operational, Third Party and Customer related risks.
Initiatives for this specialty include:
For additional information about the Field Security Team see the Field Security Team handbook page.
For information on the security internship, see the Internship page.
The Security Organization is piloting a fully immersive on-the-job cross-training program among our various sub-organizations and teams. Participants will get a true behind-the-scenes look at how the Security Organization protects, defends, and assures our customers and team members day in and day out.
For more information, see the Security Shadow Program page.
The Security department will collaborate with development and product management on security-related features in GitLab. The Secure team must not be confused with the Security teams.
We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.
Information Security Policies are reviewed annually. Policy changes are approved by the Senior Director of Security and Legal. Changes to the Data Protection Impact Assessment Policy are approved by GitLab's Privacy Officer.
Information security considerations such as regulatory, compliance, confidentiality, integrity, and availability requirements are most easily met when companies employ centrally supported or recommended industry standards. While GitLab operates under the principle of least privilege, we understand that centrally supported or recommended industry technologies are not always feasible for a specific job function or company need. Deviations from the aforementioned standard or recommended technologies are discouraged. However, a deviation may be considered provided that: there is a reasonable, justifiable business and/or research case for an information security policy exception; resources are sufficient to properly implement and maintain the alternative technology; the process outlined in this and other related documents is followed; and other policies and standards are upheld.
In the event a team member requires a deviation from the standard course of business or otherwise allowed by policy, the Requestor must submit a Policy Exception Request to IT Security, which contains, at a minimum, the following elements:
The Policy Exception Request should be used to request exceptions to information security policies, such as the password policy, or when requesting the use of a non-standard device (laptop).
Exception request approval requirements are documented within the issue template. The requester should tag the appropriate individuals who are required to provide an approval per the approval matrix.
If the business wants to appeal an approval decision, such appeal will be sent to Legal at legal@gitlab.com. Legal will draft an opinion as to the proposed risks to the company if the deviation were to be granted. Legal’s opinion will be forwarded to the CEO and CFO for final disposition.
Any deviation approval must:
Many teams follow a convention of having a GitLab group team-name-team with a primary project used for issue tracking underneath team-name or similar. For example: gitlab-com/gl-security/security-assurance/field-security-team/field-security.
- ~meta and backend tasks, and a catch-all for anything not covered by other projects. For non-Security department team members, use this if unsure which team to contact. For Security Team members, use of this project as a catch-all is deprecated.
- gl-security/runbooks should only be used for documenting specifics that would increase risk and/or have customer impact if publicly disclosed.
- If a project requires the GitLab.com environment, consider if it's possible to release it when the ~security issue becomes non-confidential. This group can also be used for private demonstration projects for security issues.

- Mention @trust-and-safety in the channel to alert the team to anything urgent.
- #security-department-standup - Private channel for daily standups.
- #incident-management and other infrastructure department channels.
- #security-alert-manual - New reports for the security department from various intake sources, including ZenDesk and new HackerOne reports.
- #hackerone-feed - Feed of most activity from our HackerOne program.
- #security-alert-* and #abuse* - Multiple channels for different notifications handled by the Security Department.

Security crosses many teams in the company, so you will find ~security labelled issues across all GitLab projects, especially:
When opening issues, please follow the Creating New Security Issues process for using labels and the confidential flag.
The Security Department tracks their OKRs using the boards in the table below.
These boards aggregate work from the many subgroups and projects under gitlab-com/gl-security
used by the various subteams to track their work and projects. High level issues
and open issues that map directly to an OKR should be added to the board for the
quarter to provide visibility into the current work of the security department.
When creating an issue that maps to an OKR, apply the following group level labels so that it appears in the board:
- ~FY20Q3 (or the label for the appropriate quarter)
- The Security Management:: label for the appropriate team. For example: ~Security Management::Security Automation Team
These labels can also be applied to MRs related to the work.
Quarter | Board |
---|---|
~FY20Q4 | Board link |
~FY20Q3 | Board link |
~FY20Q2 | Board link |
Any work that is related to an OKR is tracked with an issue in the appropriate team or project tracker and linked to the OKR issue as related.
Larger initiatives that span the scope of multiple teams or projects may require a Planning handbook page to further develop requirements.
The definitions, processes and checklists for security releases are described in the release/docs project.
The policies for backporting changes follow Security Releases for GitLab EE.
For critical security releases, refer to Critical Security Releases in the release/docs project.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).
New security issues should follow these guidelines when being created on GitLab.com:
- Mark the issue confidential if unsure whether the issue is a potential vulnerability or not. It is easier to make an issue that should have been public open than to remediate an issue that should have been confidential. Consider adding the /confidential quick action to a project issue template.
- Label with ~security at a minimum.
- Add ~bug or ~feature if appropriate.
- Add ~customer if the issue is a result of a customer report.
- ~internal customer should be added by team members when the issue impacts GitLab operations.
- Add ~dependency update if the issue is related to updating to newer versions of the dependencies GitLab requires.
- Add ~group::not_owned if the issue doesn't have a clear group owner.
- ~keep confidential. If possible, avoid this by linking resources only available to GitLab team members, for example, the originating ZenDesk ticket. Label the link with (GitLab internal) for clarity.

Occasionally, data that should remain confidential, such as the private project contents of a user that reported an issue, may get included in an issue. If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.
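As a minimal sketch, the labeling guidance above can be applied in one step using GitLab quick actions in the issue description (the exact labels will vary per report; /confidential and /label are standard quick actions, which are executed and removed from the description when the issue is saved):

```
/confidential
/label ~security ~bug ~customer
```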
For review by the Application Security team, @-mention @gitlab-com/gl-security/appsec.
For more immediate attention, refer to Engaging security on-call.
Severity and priority labels on ~security Issues

Severity and priority labels are set by an application security engineer at the time of triage. If another team member feels that the chosen ~severity / ~priority labels need to be reconsidered, they are encouraged to begin a discussion on the relevant issue.
If an issue is determined to be a vulnerability, the security engineer will ensure that the issue is labelled as a ~bug and not a ~feature.
The presence of ~security and ~bug labels modifies the standard severity labels (~severity::1, ~severity::2, ~severity::3, ~severity::4) by additionally taking into account likelihood as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases, the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.
The intent of tying ~severity/~priority labels to remediation times is to measure and improve GitLab's response time to security issues, to consistently meet or exceed industry-standard timelines for responsible disclosure. Mean time to remediation (MTTR) is an external metric that may be evaluated by users as an indication of GitLab's commitment to protecting our users and customers. It is also an important measurement that security researchers use when choosing to engage with the security team, either directly or through our HackerOne Bug Bounty Program.
Severity | Priority | Time to mitigate | Time to remediate |
---|---|---|---|
~severity::1 (Critical) | ~priority::1 | Within 24 hours | On or before the next security release |
~severity::2 (High) | ~priority::2 | N/A | Within 60 days |
~severity::3 (Medium) | ~priority::3 | N/A | Within 90 days |
~severity::4 (Low) | ~priority::4 | N/A | Best effort unless risk accepted |
Due dates on ~security Issues

For ~severity::2, ~severity::3, and ~severity::4 ~security ~bugs, the security engineer assigns the Due date, which is the target date of when fixes should be ready for release. This due date should account for the Time to remediate times above, as well as monthly security releases on the 28th of each month. For example, suppose today is October 1st, and a new severity::2 ~security issue is opened. It must be addressed in a security release within 60 days, which is November 30th; therefore, it must catch the November 28th security release. Furthermore, the Security Release Process deadlines say that the code fix should be ready by November 23rd, so the due date in this example should be November 23rd.
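The due-date arithmetic in the example above can be sketched in a few lines of Python. This is an illustrative sketch, not an official tool: REMEDIATION_DAYS and fix_due_date are hypothetical names, and the five-day fix-ready buffer is inferred from the November 28th to November 23rd example rather than taken from the actual Security Release Process deadlines.

```python
from datetime import date, timedelta

# Remediation windows, in days, from the severity/priority table above.
REMEDIATION_DAYS = {"severity::2": 60, "severity::3": 90}

def fix_due_date(opened: date, severity: str, fix_buffer_days: int = 5) -> date:
    """Pick the last monthly security release (the 28th) that falls within
    the remediation window, then subtract a buffer so the fix is ready
    before the release. The 5-day buffer mirrors the Nov 28 -> Nov 23
    example; the real Security Release Process deadlines are authoritative.
    """
    deadline = opened + timedelta(days=REMEDIATION_DAYS[severity])
    # Latest monthly release (28th) on or before the deadline.
    release = date(deadline.year, deadline.month, 28)
    if release > deadline:
        # Deadline falls before the 28th of its month: use the previous month.
        last_of_prev_month = release.replace(day=1) - timedelta(days=1)
        release = last_of_prev_month.replace(day=28)
    return release - timedelta(days=fix_buffer_days)

# The example from the text: a severity::2 issue opened October 1st.
print(fix_due_date(date(2019, 10, 1), "severity::2"))  # 2019-11-23
```

Running the example reproduces the due date from the text: October 1st plus 60 days is November 30th, the latest release before that is November 28th, and the fix should be ready by November 23rd.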
~severity::1 ~security issues do not have a due date since they should be fixed as soon as possible, and a security release should be made available as soon as possible to accommodate the fix. These issues are worked by a security engineer on-call to mitigate the risk within 24 hours. This means the risks relevant to our customers have been removed or reduced as much as possible within 24 hours. The remediation timeline target is never greater than our next security release, but remediation in this case would not impact customer security risk.

Note that some ~security issues may not need to be part of a code release, such as an infrastructure change. In that case, the due date will not need to account for monthly security release dates.
On occasion, the due date of such an issue may need to be changed if the security team needs to move up or delay a monthly security release date to accommodate for urgent problems that arise.
Reproducibility on ~security Issues

The issue description should have a How to reproduce section to ensure clear replication details are in the description. Add additional details, as needed:

- A curl command that triggers the issue

Issues labelled with the security and feature labels are not considered vulnerabilities, but rather security enhancements or defense-in-depth mechanisms. This means the security team is not required to set the severity and priority labels or follow the vulnerability triage process, as these issues will be triaged by product or another appropriate team owning the component.
Implementation of security feature issues should be done publicly in line with our Transparency value, i.e. not following the security developer workflow.
In contrast, note that issues with the security, bug, and severity::4 labels are considered Low severity vulnerabilities and will be handled according to the standard vulnerability triage process.
The security team may also apply ~internal customer and ~security request to issues as an indication that the feature is being requested by the security team to meet additional customer requirements, compliance, or operational needs in support of GitLab.com.
The security engineer must:
- Apply the appropriate group label (~group::editor, ~group::package, etc.)
- Apply ~merge request.
- Mention @pm for scheduling.

The product manager will assign a Milestone that has been assigned a due date to communicate when work will be assigned to engineers. The Due date field, severity label, and priority label on the issue should not be changed by PMs, as these labels are intended to provide accurate metrics on ~security issues, and are assigned by the security team. Any blockers, technical or organizational, that prevent ~security issues from being addressed as our top priority should be escalated up the appropriate management chains.
Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.
Issues with a severity::1 or severity::2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if they are unresponsive.

Issues with a severity::1 rating have priority over all other issues and should be considered for a critical security release.

Issues with a severity::2 rating should be scheduled for the next scheduled security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. A severity::2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.

Issues with a severity::3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated severity::3, but the reporter has provided a 30-day time window (or less) for disclosure, the issue may be escalated to ensure that it is patched before disclosure.
It is possible that a ~security issue becomes irrelevant after it was initially triaged, but before a patch was implemented. For example, the vulnerable functionality was removed or significantly changed resulting in the vulnerability not being present anymore.
If an engineer notices that an issue has become irrelevant, they should @-mention the person who triaged the issue to confirm that the vulnerability is no longer present. Note that it might still be necessary to backport a patch to previous releases according to our maintenance policy. If no backports are necessary, the issue can be closed.
For information on secure coding initiatives, please see the Secure Coding Training page.
Gearing ratios related to the Security Department have been moved to a separate page.
For systems built (or significantly modified) by Departments that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened. Security reviews aim to help reduce vulnerabilities and to create a more secure product.
There are two ways to request a security review depending on how significant the changes are. It is divided between individual merge requests and larger scale initiatives.
Loop in the application security team by adding /cc @gitlab-com/gl-security/appsec in your merge request or issue.
These reviews are intended to be faster, more lightweight, and have a lower barrier of entry.
To get started, create an issue in the security tracker, add the app sec review label, and submit a triage questionnaire form. The complete process can be found here.
Some use cases of this are for epics, milestones, reviewing for a common security weakness in the entire codebase, or larger features.
No, code changes do not require security approval to progress. Non-blocking reviews give our code the freedom to keep shipping fast, and this aligns more closely with our values of iteration and efficiency. Reviews operate as guardrails rather than a gate.
To help speed up a review, it's recommended to provide any or all of the following:
The current process for larger scale internal application security reviews can be found here.
Security reviews are not proof or certification that the code changes are secure. They are best effort, and additional vulnerabilities may exist after a review.
It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
GitLab receives vulnerability reports by various pathways, including:

- security@gitlab.com
- Label ~security and @-mention @gitlab-com/gl-security/appsec on issues.

For any reported vulnerability:

- Fixes are developed on dev or in other non-public ways, even if there is a reason to believe that the vulnerability is already out in the public domain (e.g. the original report was made in a public issue that was later made confidential).

See the dedicated page to read about our Triage Rotation process.
See the dedicated page to read about our HackerOne process.
See the dedicated page to read about our dashboard review process.
When a patch has been developed, tested, approved, merged into the security branch, and a new security release is being prepared it is time to inform the researcher via HackerOne. Post a comment on the HackerOne issue to all parties informing them that a patch is ready and will be included with the next security release. Provide release dates, if available, but try not to promise a release on a specific date if you are unsure.
This is also a good time to ask if they would like public credit in our release blog post and on our vulnerability acknowledgements page for the finding. We will link their name or alias to their HackerOne profile, Twitter handle, Facebook profile, company website, or URL of their choosing. Also ask if they would like the HackerOne report to be made public upon release. It is always preferable to publicly disclose reports unless the researcher has an objection.
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, CVE IDs must be obtained for each vulnerability to be fixed. The earlier they are obtained, the better; request one either during or immediately after a fix is prepared.
We currently request CVEs either through the HackerOne team or directly through MITRE's web form. Keep in mind that some of our security releases contain security-related enhancements which may not have an associated CWE or vulnerability. These issues do not require a CVE, since there is no associated vulnerability.
On the day of the security release several things happen in order:
Once all of these things have happened, notify the HackerOne researcher that the vulnerability and patch are now public. The GitLab issue should be closed and the HackerOne report should be closed as "Resolved". Request public disclosure if the researcher has not objected to it. Any sensitive information contained in the HackerOne report should be sanitized before disclosure.
GitLab awards swag codes for free GitLab swag for any report that results in a security patch, limited to one per reporter. When a report is closed, ask the reporter if they would like a swag code for free GitLab clothing or accessories. Swag codes are available by request from the marketing team.
Even though many of our 3rd-party dependencies, hosted services, and the static about.gitlab.com site are explicitly listed as out of scope, they are sometimes targeted by researchers, resulting in disruption to normal GitLab operations. In these cases, if a valid email address can be associated with the activity, a warning such as the following should be sent to the researcher using an official channel of communication, such as Zendesk.
Dear Security Researcher,
The system that you are accessing is currently out-of-scope for our bounty
program or has resulted in activity that is disruptive to normal GitLab
operations. Reports resulting from this activity may be disqualified from
receiving a paid bounty. Continued access to this system causing disruption to
GitLab operations, as described in policy under "Rules of Engagement,
Testing, and Proof-of-concepts", may result in additional restrictions on
participation in our program:
Activity that is disruptive to GitLab operations will result in account bans and disqualification of the report.
Further details and some examples are available in the full policy available at:
https://hackerone.com/gitlab
Please contact us at security@gitlab.com with any questions.
Best Regards,
Security Department | GitLab
security@gitlab.com
/handbook/engineering/security/
We have a process in place to conduct security reviews for externally contributed code, especially if the code functionality includes any of the following:
The Security Team works with our Community Outreach Team to ensure that security reviews are conducted where relevant. For more information about contributing, please reference the Contribute to GitLab page.
Vulnerability Management is the recurring process of identifying, classifying, prioritizing, mitigating, and remediating vulnerabilities. This overview focuses on infrastructure vulnerabilities and the operational vulnerability management process. The process is designed to provide insight into our environments, leverage GitLab for vulnerability workflows, promote healthy patch management among other preventative best practices, and remediate risk, all with the end goal of better securing our environments.
To achieve these goals, we've partnered with Tenable and deployed their software-as-a-service (SaaS) solution, Tenable.io, as our vulnerability scanner. Tenable.io allows us to focus on what is important: scanning for vulnerabilities, analyzing the results, and ingesting vulnerability data into GitLab as the starting point for our vulnerability management process. For more information, please visit the vulnerability management overview.
The packages we ship are signed with GPG keys, as described in the omnibus documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. The Distribution team is responsible for updating the package signing key. For more details that are specific to key locations and access at GitLab, find the internal Google doc titled "Package Signing Keys at GitLab" on Google Drive.
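As a rough illustration of the detached-signature workflow behind package signing, the sketch below generates a throwaway key, signs a stand-in file, and verifies the result. The key identity and file names are hypothetical; the real key handling process is the one described in the omnibus documentation and runbooks, not this script.

```shell
set -e
# Use an isolated, temporary keyring so nothing touches the real one
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a throwaway signing key (never do this for real release keys)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo Packager <demo@example.com>' default default never

# Create a stand-in "package" and a detached ASCII-armored signature for it
echo 'package contents' > package.tar
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --detach-sign package.tar

# Verification succeeds only if the signature matches both the file and the key
gpg --verify package.tar.asc package.tar && echo VERIFIED
```

Consumers of a signed package perform only the final `gpg --verify` step, after importing the publisher's public key from a trusted location.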
Along with the internal security testing done by the Application Security, Security Research, and Red teams, GitLab annually contracts a 3rd-party penetration test of our infrastructure. For more information on the goals of these exercises, please see our Penetration Testing Policy.
The following process is followed for these annual tests:
GitLab customers can request a redacted copy of the report. For steps on how to do so, please see our External Testing page.