If you're a GitLab team member and are looking to alert Reliability Engineering about an availability issue with GitLab.com, please find quick instructions to report an incident here: Reporting an Incident.
If you're a GitLab team member looking for who is currently the Engineer On Call (EOC), please see the Who is the Current EOC? section.
If you're a GitLab team member looking for help with a security problem, please see the Engaging the Security On-Call section.
Incidents are anomalous conditions that result in—or may lead to—service degradation or outages. These events require human intervention to avert disruptions or restore service to operational status. Incidents are always given immediate attention.
The goal of incident management is to organize chaos into swift incident resolution. To that end, incident management provides:
When an incident starts, we use the `#incident-management` Slack channel for chat-based communication. There is a Situation Room Zoom link in the channel description for incident team members to join for synchronous communication. There should be a link to an incident issue in the `#incident-management` channel. We prefer to keep status updates and main comments in a thread from that issue announcement. This makes it easier for the incoming on-call EOC and CMOC to find status during handoffs.
There is only ever one owner of an incident, and only the owner of the incident can declare an incident resolved. At any time the incident owner can engage the next role in the hierarchy for support. With the exception of when GitLab.com is not functioning correctly, the incident issue should be assigned to the current owner.
It's important to clearly delineate responsibilities during an incident. Quick resolution requires focus and a clear hierarchy for delegation of tasks. Preventing overlaps and ensuring a proper order of operations is vital to mitigation. The responsibilities outlined in the roles below are cascading, and ownership of the incident passes from one role to the next as those roles are engaged. Until the next role in the hierarchy engages, the previous role assumes all of the subsequent roles' responsibilities and retains ownership of the incident.
| Role | Description | Who |
| --- | --- | --- |
| EOC | The EOC is usually the first person alerted; expectations for the role are in the Handbook for on-call. The checklist for the EOC is in our runbooks. If another party has declared an incident, the EOC owns the incident once engaged. The EOC can escalate a page in PagerDuty to engage the IMOC and CMOC. | The Reliability Team Engineer On Call is generally an SRE and can declare an incident. They are part of the "SRE 8 Hour" on-call shift in PagerDuty. |
| IMOC | The IMOC is engaged when incident resolution requires coordination from multiple parties. The IMOC is the tactical leader of the incident response team, not a person performing technical work. The IMOC assembles the incident team by engaging individuals with the skills and information required to resolve the incident. | The Incident Manager is an Engineering Manager, Staff Engineer, or Director from the Reliability team. The IMOC rotation is currently in the "SRE Managers" PagerDuty schedule. |
| CMOC | The CMOC disseminates information internally to stakeholders and externally to customers across multiple media (e.g. GitLab issues, Twitter, status.gitlab.com, etc.). | The Communications Manager is generally a member of the Support team at GitLab. |
These definitions imply several on-call rotations for the different roles.
Alerts in `#alerts-general` are an important source of information about the health of the environment and should be monitored during working hours.
production tracker. See production queue usage for more details.
The Situation Room Permanent Zoom. The Zoom link is in the `#incident-management` channel description.
The Situation Room Permanent Zoom as soon as possible.
`#production`. If the alert is flappy, create an issue and post a link in the thread. This issue might end up being part of an RCA or requiring a change in the alert rule.
At times, we have a security incident where we may need to take actions to block a certain URL path or part of the application. This list is meant to help the Security Engineer On-Call and EOC decide when to engage help and post to status.io.
If any of the following are true, it would be best to engage an Incident Manager:
In some cases, we may choose not to post to status.io, the following are examples where we may skip a post/tweet. In some cases, this helps protect the security of self managed instances until we have released the security update.
To page the Incident Manager on call, see Reporting an Incident, or use `/pd trigger` in Slack.
For serious incidents that require coordinated communications across multiple channels, the IMOC will select a CMOC for the duration of the incident during the incident declaration process.
The GitLab support team staffs an on-call rotation that can be paged via the Incident Management - CMOC service in PagerDuty. They have a section in the Support handbook for getting new CMOC people up to speed.
During an incident, the CMOC will:
`@advocates` handle at the start of an incident.
If, during an incident, the EOC or IMOC decides to engage the CMOC, they should page the on-call person using the `/pd-cmoc` command in Slack.
Runbooks are available for engineers on call. The project README contains links to checklists for each of the above roles.
In the event of a GitLab.com outage, a mirror of the runbooks repository is available at https://ops.gitlab.net/gitlab-com/runbooks.
The chatops bot in the `#production` Slack channel will tell you this with `/chatops run oncall prod`.
The current EOC can be contacted via the `@sre-oncall` handle in Slack, but please only use this handle in the following scenarios.
The EOC will respond to the `@sre-oncall` handle as soon as they can but, depending on circumstances, may not be immediately available. If it is an emergency and you need an immediate response, please see the Reporting an Incident section.
If you are a GitLab team member and would like to report a possible incident related to GitLab.com and have the EOC paged in to respond, choose one of the reporting methods below. Regardless of the method chosen, please stay online until the EOC has had a chance to come online and engage with you regarding the incident. Thanks for your help!
`/incident declare` in the `#production` channel in GitLab's Slack and follow the prompts (detailed description and screenshot below). This will open an incident issue and notify the engineer on call (EOC). You do not need to decide if the problem is an incident; we have triage steps below to make sure we respond appropriately. Reporting high-severity bugs via this process is the preferred path so that we can engage the appropriate engineering teams as needed.
Incident Declaration Slack window
| Field | Description |
| --- | --- |
| Title | The title of the incident issue; give a brief description of what you are witnessing on GitLab.com. |
| Detailed Description | Add any additional details here to assist the EOC responding to the incident. |
| Severity | If unsure which severity to choose, but you are seeing a large amount of customer impact, please select S1. More details here: Incident Severity. |
| Tasks | You can safely ignore the first two options. If you'd like to ensure that the incident manager on-call (IMOC) and/or the communications manager on-call (CMOC) is notified via a page to their mobile device, select one or both of those options. Note: the engineer on-call (EOC) is notified via this workflow. |
Email firstname.lastname@example.org. This will immediately page the Engineer On Call.
This is a first revision of the definitions of Service Disruption (Outage), Partial Service Disruption, and Degraded Performance per the terms on Status.io. Data is based on the graphs from the Key Service Metrics Dashboard.
Outage and Degraded Performance incidents occur when:

- We define Degraded as any sustained 5-minute period in which a service is below its documented Apdex SLO or above its documented error-ratio SLO.
- We define an Outage (Status = Disruption) as a 5-minute sustained error rate above the outage line on the error-ratio graph.
SLOs are documented in the runbooks/rules
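As an illustration, the Degraded and Outage definitions above can be sketched as a simple classifier over one sustained 5-minute metric window. The function name and the threshold values below are hypothetical; the real SLO thresholds live in the runbooks/rules.

```python
def classify(apdex: float, error_ratio: float,
             apdex_slo: float, error_slo: float, outage_line: float) -> str:
    """Classify one sustained 5-minute window of service metrics.

    apdex_slo, error_slo, and outage_line stand in for the documented
    SLO thresholds of a service; the values used below are illustrative.
    """
    if error_ratio > outage_line:
        return "Disruption"   # Outage: error rate above the outage line
    if apdex < apdex_slo or error_ratio > error_slo:
        return "Degraded"     # Below Apdex SLO or above error-ratio SLO
    return "Operational"

# Example: a service with a 0.995 Apdex SLO, a 0.5% error-ratio SLO,
# and a 2% outage line.
print(classify(apdex=0.97, error_ratio=0.002,
               apdex_slo=0.995, error_slo=0.005, outage_line=0.02))
# -> Degraded
```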
To check if we are Degraded or Disrupted for GitLab.com, we look at these graphs:
A Partial Service Disruption is when only part of the GitLab.com services or infrastructure is experiencing an incident. Examples of partial service disruptions are instances where GitLab.com is operating normally except there are:
In the case of high severity bugs, we prefer that an incident issue is still created via Reporting an Incident. This will give us an incident issue on which to track the events and response.
In the case of a high severity bug that is in an ongoing, or upcoming deployment please follow the steps to Block a Deployment.
If an incident may be security related, engage the Security Operations on-call by using
/security in Slack. More detail can be found in Engaging the Security On-Call.
Information is an asset to everyone impacted by an incident. Properly managing the flow of information is critical to minimizing surprise and setting expectations. We aim to keep interested stakeholders apprised of developments in a timely fashion so they can plan appropriately.
This flow is determined by:
Furthermore, avoiding information overload is necessary to keep every stakeholder’s focus.
To that end, we will have:
#incident-managementroom in Slack.
#incident-managementchannel for internal updates
Definitions and rules for transitioning state and status are as follows.
| State | Definition |
| --- | --- |
| Investigating | The incident has just been discovered and there is not yet a clear understanding of the impact or cause. If an incident remains in this state for longer than 30 minutes after the EOC has engaged, the incident should be escalated to the IMOC. |
| Identified | The cause of the incident is believed to have been identified and a step to mitigate has been planned and agreed upon. |
| Monitoring | The mitigation step has been executed and metrics are being watched to ensure that we're operating at baseline. |
| Resolved | The incident is closed and status is again Operational. |
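The states above, and the 30-minute escalation rule for the Investigating state, can be sketched as follows (the function name is hypothetical, used here only to illustrate the rule):

```python
from datetime import timedelta

# The incident states described above, in order.
STATES = ["Investigating", "Identified", "Monitoring", "Resolved"]

def should_escalate_to_imoc(state: str, time_in_state: timedelta) -> bool:
    """An incident still Investigating more than 30 minutes after the
    EOC has engaged should be escalated to the IMOC."""
    return state == "Investigating" and time_in_state > timedelta(minutes=30)

print(should_escalate_to_imoc("Investigating", timedelta(minutes=45)))  # True
print(should_escalate_to_imoc("Identified", timedelta(minutes=45)))     # False
```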
Status can be set independent of state. The only time these must align is when an issue is resolved.
| Status | Definition |
| --- | --- |
| Operational | The default status before an incident is opened and after an incident has been resolved. All systems are operating normally. |
| Degraded Performance | Users are impacted intermittently, but the impact is not observed in metrics, nor reported to be widespread or systemic. |
| Partial Service Disruption | Users are impacted at a rate that violates our SLO. The IMOC must be engaged, and the incident must be monitored to resolution if it lasts longer than 30 minutes. |
| Service Disruption | This is an outage. The IMOC must be engaged. |
| Security Issue | A security vulnerability has been declared public and the security team has asked to publish it. |
Incident severities encapsulate the impact of an incident and scope the resources allocated to handle it. Detailed definitions are provided for each severity, and these definitions are reevaluated as new circumstances become known. Incident management uses our standardized severity definitions, which can be found under availability severities.
In order to effectively track specific metrics and have a single pane of glass for incidents and their reviews, specific labels are used. The workflow diagram below describes the two paths an incident can take from active to closed. The two paths are dictated by the severity of the issue: S1/S2 incidents require a review, and in certain cases an S3/S4 incident can take the review workflow path. Details here.
The EOC and the IMOC, at the time of the incident, are the default assignees for an incident issue. They are the assignees for the entire workflow of the incident issue.
The following labels are used to track the incident lifecycle from active incident to completed incident review. Label Source
In order to help with attribution, we also label each incident with a scoped label for the Infrastructure Service (Service::) and Group (group::) scoped labels.
| Label | Description |
| --- | --- |
| | Indicates that the labeled incident is active and ongoing. Initial severity is assigned. |
| | Indicates that the incident has been mitigated, but immediate post-incident activity may be ongoing (monitoring, messaging, etc.). |
| | Indicates that SRE engagement with the incident has ended and GitLab.com is fully operational. Incident severity is re-assessed to determine whether the initial severity is still correct; if not, it is changed to the correct severity. |
| | Indicates that an incident met the threshold for requiring a review (S1/S2), or the IMOC or EOC for the incident chose to include a review (S3/S4), for the purposes of deriving and creating needed corrective actions. |
| | Indicates that the incident review has been added to the agenda for an upcoming review meeting. |
| | Indicates that an incident review has been completed, but there are notes to incorporate from the review writeup prior to closing the issue. |
These labels are always required on incident issues.
| Label | Description |
| --- | --- |
| | Label used for metrics tracking and immediate identification of incident issues. |
| | Scoped label for service attribution. Used in metrics and error budgeting. |
| | Scoped label for severity assignment. Details on severity selection can be found in the availability severities section. |
In certain cases, additional labels will be added as a mechanism to add metadata to an incident issue for the purposes of metrics and tracking.
| Label | Description |
| --- | --- |
| | Indicates that an incident is exclusively an incident for self-managed GitLab. Example self-managed incident issue |
The board which tracks all GitLab.com incidents from active to reviewed is located here.
A near miss, "near hit", or "close call" is an unplanned event that has the potential to cause, but does not actually result in an incident.
In the United States, the Aviation Safety Reporting System has been collecting reports of close calls since 1976. Due to near miss observations and other technological improvements, the rate of fatal accidents has dropped about 65 percent. source
Near misses are like a vaccine. They help the company better defend against more serious errors in the future, without harming anyone or anything in the process.
When a near miss occurs, we should treat it in a similar manner to a normal incident.