GitLab takes the security of our clients’ information extremely seriously, regardless of whether it’s on GitLab.com or in a self-managed instance. In keeping with GitLab’s value of transparency we believe in communicating about security incidents clearly and promptly.
This communication response plan maps out who, what, when, and how GitLab notifies and engages internal stakeholders and external customers during security incidents. It covers the strategy and approach for security events that have a ‘high’ or greater impact, as outlined in GitLab’s risk scoring matrix.
For Support- or Infrastructure-managed incidents where external communication guidance is needed, please see the corporate communications incident response plan and engage that team via #corp-comms in Slack.
For Infrastructure incidents, please follow the infrastructure incident management and communication process.
The GitLab Security team identifies security incidents as any violation, or threat of violation, of GitLab security, acceptable use or other relevant policies. You can learn more about how we identify incidents in the GitLab security incident response guide.
The Security Engineer On-Call will determine the scope, severity and potential impact of the security incident. Once the potential impact has been determined, implementation of the appropriate internal and external communications strategy should begin.
Security Engineer on Call (SEOC): This is the on-call Security Operations Engineer. The individual is the first to act, validate, and begin the process of determining severity and scope.
Security Incident Manager on Call (SIMOC): This is a Security Engineering Manager who is engaged when incident resolution requires coordination across multiple parties. The SIMOC is the tactical leader of the incident response team, typically not engaged to perform technical work. The SIMOC assembles the incident team by engaging individuals with the skills, access, and information required to resolve the incident. The focus of the SIMOC is to keep the incident moving towards resolution, keeping stakeholders informed and performing CMOC duties.
Communications Manager on Call (CMOC): This is the Security Incident Manager On-Call (SIMOC) or Security Assurance Engineer who will coordinate external communications efforts according to this security incident response plan and liaise across the extended GitLab teams to ensure all parties are activated, updated and aligned.
Security External Communications: This function partners with and advises incident response teams in reviewing and improving messaging for external audiences (including customers, media, and the broader industry). This role liaises with marketing teams for any necessary reviews or messaging deployment. This function should be engaged once first-draft content has been developed using the Security incident external response issue template.
As security practitioners and incident response engineers, our security assurance and security operations teams and engineers are best positioned to develop initial messaging and serve in the CMOC/Communications manager on call role.
Each team-appointed CMOC is the DRI for:
The Security External Communications function is the DRI for:
#customer-success for awareness and use.
incident_communications template and posting the issue in the #mktgops channel in Slack.
/pd trigger command in any Slack channel and select Marketing Ops Ext. Comms - Emergency.
Support Team: Using background information and prepared responses provided by the Security Engineer On-Call and Communications Manager On-Call, our Support Team will triage and respond to customer communications stemming from the security incident. Contact the on-call manager via #support_escalations in Slack. If it's urgent, page the Support Manager On-Call using the /pd-support-manager command in Slack.
#community-relations or any Slack channel by pinging
Security incidents can be high-pressure, high-stress situations. Everyone is anxious to understand the details around the investigation, scope, mitigation and more. Ensuring that stakeholders across security, infrastructure, engineering and operations teams are informed and engaged is one of the chief responsibilities of the Security Incident Manager On Call. The Security Incident Manager should focus on providing high-level status updates without delving too deeply into the technical details of the incident, including:
Any time there is a service disruption for team members, the CMOC should post details regarding the disruption and related issue(s) in #whats-happening-at-gitlab and cross-post in any related channels. It is important to identify whether this is a production incident affecting GitLab.com or a service used by the organization.
In cases of high-priority security notifications appropriate for the entire organization, the Internal Security Notification Dashboard should be used. When an update is made to this dashboard, notifications will be sent via Slack and email to all GitLab team members.
For incidents that are ongoing and require constant communication, the Security Engineer On-Call will set up an incident response Slack channel. All security incident team members and extended POCs should be invited. If the nature of the incident allows, the Slack channel will be public to GitLab, and a link to the channel will also be shared in the #security-department Slack channel to increase visibility.
| Group & Contacts | When to Engage | DRI to Engage | At What Cadence | In What Channel |
| --- | --- | --- | --- | --- |
| Director of Security Operations | For S1 incidents, immediately upon determination of the S1 severity rating | SIMOC/CMOC | 30-minute intervals (unless otherwise requested) | Incident response Slack channel |
| VP of Security | For S1 incidents, immediately upon determination of the S1 severity rating | Director of Security Operations | 30-minute intervals (unless otherwise requested) | Slack direct message |
| Broader e-group | Immediately in cases of a data breach or an RCE with evidence of exploitation | VP of Security | 30-minute intervals (unless otherwise requested) | |
| Sr. Director of Corporate Marketing and Director of Corporate Communications | Immediately, if the incident has been publicly reported or if there is a regulatory requirement to make an announcement; in other cases, once the full impact and associated risk has been determined | SIMOC/CMOC | Continuous | Incident response Slack channel |
| Legal | If GitLab EE customers are impacted, or if the security incident includes a data breach, including but not limited to exposure of PII / Personal Data | VP of Security | Continuous | Incident response Slack channel |
External communications should happen as soon as possible after the scope and impact of the security incident are determined, using concise and clear language. The first external communications are directed to affected parties. Examples include affected customers and third parties, and providers of products, services or organizations involved in the supply chain for system components related to the incident. Regulatory authorities are contacted based on incident scope and regulatory and legal requirements.
Once it has been determined that an external response is needed, the SIRT team should develop a final customer communication, gain approval on it, and distribute and/or publish it within 24 hours.
The communications channels and forms that should be used in an incident or event can vary but should align with our need to be responsive to our customers and our transparency value, and be balanced with the potential risk and exposure to customers.
Commonly used forms and channels of communication:
The security external incident or event response template (an internal template) links to templates that can be copied to start various communications.
📝 Security incident communications runbooks are located here (internal only).
It is important to keep in mind that any time we communicate externally, we need to advise our support, customer, social and community relations teams that we’ll be making an external communication about an issue that affects customers and/or the community.
For this reason, each incident response (direct email, media statement, blog post, etc) should have accompanying:
Depending on scope, impact or risk associated with the incident, our Corporate Communications and Marketing team may determine that additional outreach is necessary. Any official statements about the security incident would be made by GitLab’s Director of Corporate Communications, VP of Corporate Marketing, CMO or VP of Security.
@heather into the issue for first review and consult on communications forms/channels (more details below).
@heather in the related security incident external response issue for first-round review and edits.
| Communications Channels | Purpose/Message | Additional Details |
| --- | --- | --- |
| Incident Response Customer Email | Provides incident background, response, potential impact, follow-up actions, and who to contact with questions. | Drafted by SIMOC/CMOC and reviewed by DRIs from Support, Legal, External Comms and Security. Sent from firstname.lastname@example.org with reply-to email@example.com. Should be in plain text with no link tracking. |
| Mitigation and response blog post | Details the background, the GitLab response, and any action required by our customers. | If it is determined that a longer, more in-depth response is needed (i.e. a blog post), the team will follow the Marketing rapid response process, in which the VP of Corporate Marketing is engaged immediately (via Slack or text) and proposes the path forward. This includes determining the appropriate channel, response and timing, and engaging the appropriate resources across marketing and corporate to collaborate, review and/or be advised of the response (corporate communications, content, community and legal teams). Content for the blog post is provided by the SIMOC/CMOC, the content team performs copyedits, and the corporate communications and legal teams review and approve the message. Once the message is approved, the Content team will merge the blog post. Note: collaboration and work on the response blog post should happen in the related incident response channel on Slack. |
| GitLab Security Release Alert/Email | Indicates required action for customers and links to the related mitigation and response blog. | Email sent to the opt-in security notices distribution list. Prepared and sent by Security External Communications or Marketing Ops; sent to the Security Notices distro through Marketo. Users can sign up for this distribution list through our Communication Preference Center. |
| Customer Frequently Asked Questions (FAQs) | List of early customer questions and responses, or probable questions and responses. | Created by SIMOC/CMOC and Support DRI. Provided to the appropriate Support group. |
| Social media post | For distribution of the related blog post; details our response to X issue. | Security External Communications engages |
When appropriate, key stakeholders for contribution, review and approval should meet synchronously in a Zoom session to create and fine-tune customer communications (emails, FAQs, blog posts, etc.). Meeting synchronously in this case allows us to expedite the development of communications with key inputs from stakeholders in security, customer support and beyond, and to move quickly into the review stage. These Zoom sessions are recorded and will be linked in the related security incident external response issue.
The chart below illustrates the process flow between incident and impact investigation and the communications decisions and actions needed: