This is a Controlled Document
In line with GitLab's regulatory obligations, changes to controlled documents must be approved or merged by a code owner. All contributions are welcome and encouraged.
The Security Incident Response Team (SIRT) is on-call 24/7/365 to assist with any security incidents. If an urgent security incident has been identified or you suspect an incident may have occurred, please refer to Engaging the Security Engineer On-Call.
Information about SIRT responsibilities and incident ownership is available in the SIRT On-Call Guide.
Security incident investigations are initiated when a security event has been detected on GitLab.com or within the GitLab company. These investigations are handled with the same level of urgency and priority regardless of whether a single user or multiple projects are affected.
Incident indicators can be reported to SIRT either internally, by a GitLab team member, or externally. It is the Security team's responsibility to determine when to investigate, based on the identification and verification of a security incident.
The GitLab Security team identifies security incidents as any violation, or threat of violation, of GitLab security, acceptable use or other relevant policies.
Role | Responsibilities |
---|---|
GitLab Team Members | Responsible for following the requirements in this procedure |
SIRT | Responsible for implementing and executing this procedure |
SIRT Management (Code Owners) | Responsible for approving significant changes and exceptions to this procedure |
When secrets are confirmed to be leaked, it is important to minimize the exposure time by immediately revoking the secrets. This can be done through automation or by manual revocation by the Security team. Security will immediately revoke the secrets to prevent further abuse, even if the potential impact of that action isn't clearly understood at that time. In some cases this may cause disruption when the secrets are being used for legitimate processes. Because of this potential for impact to services dependent on the revoked secrets, Security will post a notification to the #security-revocation-self-service Slack channel, where secrets owners can use the channel for manual or automated self-service. Because the secret has already been exposed and revoked, and because it makes it easier for secrets owners to find their secrets in the channel, the clear-text version of the revoked secret will be part of the notification.
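Automated revocation tooling varies by secret type, but the notification step might look like the following minimal sketch (assuming the slack_sdk Python package and a bot token; revocation itself is provider-specific and out of scope here):

```python
# Minimal sketch: post a notification after a leaked secret has been revoked.
# Assumptions: slack_sdk is installed and SLACK_BOT_TOKEN is set in the environment.
import os
from slack_sdk import WebClient

REVOCATION_CHANNEL = "#security-revocation-self-service"

def notify_revocation(secret_plaintext: str, secret_type: str) -> None:
    """Post the already-revoked secret to the self-service channel so its
    owner can find it and rotate any dependent services."""
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    client.chat_postMessage(
        channel=REVOCATION_CHANNEL,
        text=(
            f"A leaked {secret_type} was revoked by Security.\n"
            f"Revoked value (already exposed, no longer valid): `{secret_plaintext}`\n"
            "If your service depends on this secret, rotate it via self-service."
        ),
    )
```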
Security incidents may (and usually do) involve sensitive information related to GitLab, GitLab's customers or employees, or users who (in one way or another) have engaged with GitLab. GitLab, while codifying the Transparency value, also strongly believes in and strives to maintain the privacy and confidentiality of the data its employees, customers, and users have entrusted to it.
A confidential issue means any data within the issue and any discussions about the issue or investigation are to be kept to GitLab employees only unless permission is explicitly granted by GitLab Legal, a GitLab Security Director, the VP of Security, or the GitLab Executive Team.
Security incident investigations must begin by opening a tracking issue in the SIRT project and using the Incident Response template. This tracking issue will be the primary location where all work and resulting data collection will reside throughout the investigation.
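For illustration only, a minimal sketch of opening such a tracking issue with python-gitlab. The project path, title, and labels below are assumptions; in practice the Incident Response template in the SIRT project should be applied when creating the issue.

```python
# Minimal sketch: open a confidential tracking issue for a new investigation.
# Assumptions: python-gitlab is installed, GITLAB_TOKEN is set, and the SIRT
# project path below is a hypothetical placeholder.
import os
import gitlab

SIRT_PROJECT_PATH = "example-group/sirt"  # hypothetical path

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get(SIRT_PROJECT_PATH)

issue = project.issues.create({
    "title": "Security incident: <short summary>",
    "labels": ["Incident", "Incident::Phase::Identification"],
    "confidential": True,
})
print(f"Tracking issue created: {issue.web_url}")
```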
All artifacts from an investigation must be handled per the Artifact Handling and Sharing internal only runbook.
NOTE: The tracking issue, any collected data, and all other engagements involved in a Security Incident must be kept strictly confidential.
Assigning severity to an incident isn't an exact science; it takes rational criteria mixed with past experience and gut feeling to decide how bad a situation may be. To help place the correct severity rating on the incident you are about to submit, refer to the following examples:
Severity | Description | Examples | Resolution |
---|---|---|---|
High | A critical incident with a high impact | 1. GitLab.com is down for all customers 2. Confidentiality or privacy is breached 3. Customer data is lost 4. Exposed key | Activate PagerDuty immediately |
Low | A minor incident with a very low impact | Suspicious activity on a team member laptop, third-party vendor vulnerability, Example_1, Example_2, Example_3 | Resolution will be provided during business hours by the engineer on call |
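As a rough illustration of how the table above translates into a first response action, a minimal sketch (the helper below is hypothetical and not part of any GitLab tooling):

```python
# Minimal sketch: map an incident severity to its first response action,
# following the examples in the severity table above.
def initial_response(severity: str) -> str:
    actions = {
        "High": "Activate PagerDuty immediately",
        "Low": "Handle during business hours by the engineer on call",
    }
    return actions.get(severity, "Triage with SIRT to assign a severity first")

print(initial_response("High"))  # -> Activate PagerDuty immediately
```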
Coordinate with internal teams and prepare for the incident investigation in a dedicated Slack channel named #sirt_####, where #### is the GitLab issue number in the SIRT project.

In the event that an incident needs to be escalated within GitLab, the Security Engineer On Call (SEOC) will page the Security Incident Manager On Call (SIMOC). It is the responsibility of the SIMOC to direct response activities, gather technical resources from required teams, coordinate communication efforts with the Communications Manager On Call, and further escalate the incident as necessary.
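As an illustration of the channel naming convention above, a minimal sketch of creating the dedicated coordination channel (assuming the slack_sdk Python package and a bot with permission to create private channels):

```python
# Minimal sketch: create the #sirt_<issue number> channel used for ad hoc
# incident coordination. Assumptions: slack_sdk is installed and the bot
# token in SLACK_BOT_TOKEN has the scope required to create channels.
import os
from slack_sdk import WebClient

def open_incident_channel(issue_number: int) -> str:
    """Create #sirt_<issue number> and return the new channel's ID."""
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    channel_name = f"sirt_{issue_number}"
    response = client.conversations_create(name=channel_name, is_private=True)
    return response["channel"]["id"]
```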
Characteristics of an incident requiring escalation include but are not limited to the following:
If applicable, coordinate the incident response with business contingency activities.
Once an incident has been identified and the severity has been set, the incident responder must attempt to limit the damage that has already occurred and prevent any further damage from occurring. When an incident issue is opened, it will automatically contain the ~Incident::Phase::Identification label. At the start of the containment phase this label will be updated to ~Incident::Phase::Containment.
The first step in this process is to identify impacted resources and determine a course of action to contain the incident while potentially also preserving evidence. Containment strategies will vary based on the type of incident but can be as simple as marking an issue confidential to prevent information disclosure or to block access to a network segment.
It's important to remember that the containment phase is typically a stop-gap measure to limit damage, not a long-term fix for the underlying problem. Additionally, the impact of the mitigation on the service must be weighed against the severity of the incident.
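For illustration, a minimal sketch of a simple containment step using python-gitlab (project paths and issue numbers are placeholders): mark an issue confidential to stop further disclosure, then move the tracking issue's phase label to Containment.

```python
# Minimal sketch: a containment action plus the phase label transition.
# Assumptions: python-gitlab is installed, GITLAB_TOKEN is set, and the
# project paths and issue numbers below are hypothetical placeholders.
import os
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])

# Containment: stop further information disclosure on the affected issue.
affected_project = gl.projects.get("example-group/affected-project")
leaky_issue = affected_project.issues.get(42)
leaky_issue.confidential = True
leaky_issue.save()

# Record the phase transition on the SIRT tracking issue.
sirt_project = gl.projects.get("example-group/sirt")
tracking = sirt_project.issues.get(1234)
new_labels = [l for l in tracking.labels if not l.startswith("Incident::Phase::")]
new_labels.append("Incident::Phase::Containment")
tracking.labels = new_labels
tracking.save()
```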
When triaging priority::1/severity::1 incidents there may be times that SIRT or Infrastructure are unable to mitigate an issue, or identify the full impact of a potential mitigation. In these cases the Development Escalation Process can be used to engage with the development team on-call. It is important that this process is followed as documented and only for priority::1/severity::1 issues.
During the remediation and recovery phase the incident responder will work to ensure impacted resources are secured and prepared to return the service to the production environment. This process may involve removing malicious or illicit content, updating access controls, deploying patches and hardening systems, redeploying systems completely, or a variety of other tasks depending on the type of incident. When transitioning from the containment phase into the remediation phase the SEOC will update the phase label to ~Incident::Phase::Eradication, and when the remediation is complete the label will be updated to ~Incident::Phase::Recovery.
An Incident Review will be completed for all severity::1 incidents to guide the remediation and recovery process. Careful planning is required to ensure successful recovery and prevention of repeat incidents. The incident responder coordinates impacted teams to test and validate all remediations prior to deployment.
This phase should prioritize short-term changes that improve the overall security of impacted systems, while the full recovery process may take several months as longer-term improvements are developed. During the post-remediation Incident Review process the incident phase label will be updated to ~Incident::Phase::Incident Review.
Upon completing the containment, remediation, communication, and verification of impacted services, the incident will be considered resolved, the incident issues may be closed, and the incident phase label will be changed to ~Incident::Phase::Closed.
The incident response process will move on to a post-mortem and lessons learned phase through which the process improvements and overall security of the organization can be analyzed and strengthened.
Our security incident communication plan defines the who, what, when, and how of GitLab in notifying internal stakeholders and external customers of security incidents.
If during the course of investigating a security event the incident itself, materials involved in the incident (stored data, traffic/connections, etc), or actions surrounding the incident are deemed illegal in the United States, it may be necessary (and advisable) to engage U.S. law enforcement.
In the event of a perceived major security incident (which may prove not to be one at a later point), ad hoc communication is sometimes required for coordination. This is outlined in the sections above. If you are identified as someone who could assist during the perceived security incident with either the identification, confirmation, or mitigation of the incident, you will be added to a dedicated Zoom call or Slack channel. Upon joining that call/channel, please take note of the following:
Incidents are tracked in the Operations tracker through the use of the incident template.
The correct use of dedicated scoped incident labels is critical to the integrity of the data in the incident tracker and to the metrics subsequently gathered from it.
The incident delineator label Incident denotes that an issue should be considered an incident and tracked as such.
Incident::Phase | What stage is the incident at? |
---|---|
Incident::Phase::Identification | Incident is currently being triaged (log dives, analysis, and verification) |
Incident::Phase::Containment | Limiting the damage (mitigations being put in place) |
Incident::Phase::Eradication | Cleaning, restoring, removing affected systems, or otherwise remediating findings |
Incident::Phase::Recovery | Testing fixes, restoring services, transitioning back to normal operations |
Incident::Phase::Incident Review | The incident review process has begun (required for all S1/P1 incidents) |
Incident::Phase::Closed | Incident is completely resolved |
Incident::Category | What is the nature of the incident? |
---|---|
Incident::Category::Abuse | Abusive activity impacted GitLab.com |
Incident::Category::CustomerRequest | Customer-related request |
Incident::Category::DataLoss | Loss of data |
Incident::Category::InformationDisclosure | Confidential information might have been disclosed to untrusted parties |
Incident::Category::LostStolenDevice | Laptop or mobile device was lost or stolen |
Incident::Category::Malware | Malware |
Incident::Category::Misconfiguration | A service misconfiguration |
Incident::Category::NetworkAttack | Incident due to malicious network activity (DDoS, credential stuffing) |
Incident::Category::NotApplicable | Used to denote a false positive incident (such as an accidental page) |
Incident::Category::Phishing | Phishing |
Incident::Category::UnauthorizedAccess | Data or systems were accessed without authorization |
Incident::Category::Vulnerability | A vulnerability in GitLab and/or a service used by the organization has led to a security incident |
Incident::Organization | What is impacted? |
---|---|
Incident::Organization::AWS | One of GitLab's AWS environments |
Incident::Organization::Azure | GitLab's Azure environment |
Incident::Organization::DO | Digital Ocean environment |
Incident::Organization::GCP | GitLab's GCP environment |
Incident::Organization::GCP-Enclave | GitLab Security's GCP environment |
Incident::Organization::GSuite | Google Workspace (GSuite, GDrive) |
Incident::Organization::GitLab | GitLab the organization and GitLab the product |
Incident::Organization::GitLabPages | GitLab.com Pages |
Incident::Organization::SaaS | Incident in a vendor-operated SaaS platform |
Incident::Organization::end-user-devices | Team member devices |
Incident::Organization::EnterpriseApps | Other enterprise apps not defined here (Zoom, Slack, etc.) |
Incident::Source | How did SIRT learn of the incident? |
---|---|
Incident::Source::External | An external source (such as a GitLab.com customer) |
Incident::Source::Internal | An internal source (such as a finding by a team member) |
Incident::Origin | How did GitLab learn of the incident? |
---|---|
Incident::Origin::Email | Reported via email |
Incident::Origin::EDR | Endpoint Detection and Response (EDR) alert |
Incident::Origin::GoogleSecurityAlert | Google Security Alert |
Incident::Origin::H1 | HackerOne report |
Incident::Origin::HIPB | Have I Been Pwned email |
Incident::Origin::Issue | GitLab issue |
Incident::Origin::SIEM | SIEM alert |
Incident::Origin::Slack | Slack |
Incident::Origin::Zendesk | Zendesk ticket |
Incident::Classification | How accurate was the finding? |
---|---|
Incident::Classification::TruePositive | True positive |
Incident::Classification::FalsePositive | False positive |
Incident::Classification::TrueNegative | True negative |
Incident::Classification::FalseNegative | False negative |
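As an illustration of why consistent scoped labels matter for metrics, a minimal sketch using python-gitlab to count closed incidents by category (the tracker project path is a hypothetical placeholder):

```python
# Minimal sketch: count closed incidents by Incident::Category scoped label.
# Assumptions: python-gitlab is installed, GITLAB_TOKEN is set, and the
# incident tracker project path below is a hypothetical placeholder.
import os
from collections import Counter
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
tracker = gl.projects.get("example-group/operations-tracker")

categories = Counter()
for issue in tracker.issues.list(labels=["Incident"], state="closed", iterator=True):
    for label in issue.labels:
        if label.startswith("Incident::Category::"):
            categories[label] += 1

for label, count in categories.most_common():
    print(f"{label}: {count}")
```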
Exceptions to this procedure will be tracked as per the Information Security Policy Exception Management Process.