Incidents are anomalous conditions that result in—or may lead to—service degradation or outages. These events require human intervention to avert disruptions or restore service to operational status. Incidents are always given immediate attention.
If you're observing issues on GitLab.com or working with users who are reporting issues, please follow the instructions found on the On-Call page and alert the Engineer On Call (EOC).
If any of the dashboards below are showing major error rates or deviations, it's best to alert the Engineer On Call.
The goal of incident management is to organize chaos into swift incident resolution. To that end, incident management provides clearly defined roles and responsibilities for members of the incident team, and control points for managing the flow of information.
There is only ever one owner of an incident, and only the owner of the incident can declare an incident resolved. At any time the incident owner can engage the next role in the hierarchy for support. Except when GitLab.com is not functioning correctly, the incident issue should be assigned to the current owner.
It's important to clearly delineate responsibilities during an incident. Quick resolution requires focus and a clear hierarchy for delegation of tasks. Preventing overlaps and ensuring a proper order of operations are vital to mitigation. The responsibilities outlined in the roles below are cascading, and ownership of the incident passes from one role to the next as those roles are engaged. Until the next role in the hierarchy engages, the previous role assumes all of the subsequent roles' responsibilities and retains ownership of the incident.
| Role | Description |
| ---- | ----------- |
| Engineer On Call (EOC) | The Production Engineer On Call is generally an SRE and can declare an incident. If another party has declared an incident, once the EOC is engaged they own the incident. The EOC gathers information, performs an initial assessment, and determines the incident severity level. |
| Incident Manager On Call (IMOC) | The Incident Manager is generally a Reliability Engineering manager and is engaged when incident resolution requires coordination from multiple parties. The IMOC is the tactical leader of the incident response team, not a person performing technical work. The IMOC assembles the incident team by engaging individuals with the skills and information required to resolve the incident. |
| Communications Manager On Call (CMOC) | The Communications Manager is generally a Reliability Engineering manager. The CMOC disseminates information internally to stakeholders and externally to customers across multiple media (e.g. GitLab issues, Twitter, status.gitlab.com, etc.). |
These definitions imply several on-call rotations for the different roles.
- To find the current EOC, the `#production` Slack channel will tell you this with `/chatops run oncall prod`.
- Alerts in `#alerts-general` are an important source of information about the health of the environment and should be monitored during working hours.
- Incidents are tracked as issues in the `production` tracker. See production queue usage for more details.
- Incident response is coordinated in The Situation Room Permanent Zoom. The Zoom link is in the `#incident-management` channel topic.
- When an incident is declared, join The Situation Room Permanent Zoom as soon as possible.
- Discuss alerts in a thread in `#production`. If the alert is flappy, create an issue and post a link in the thread. This issue might end up being part of an RCA or require a change in the alert rule.
Runbooks are available for engineers on call. The project README contains links to checklists for each of the above roles.
In the event of a GitLab.com outage, a mirror of the runbooks repository is available at https://ops.gitlab.net/gitlab-com/runbooks.
The following steps can be automated in Slack by typing `/start-incident`. If the command fails, manually do the following:
Create an issue, labeled `Incident`, on the `production` queue using the Incident template. If it is not possible to generate the issue, start with the tracking document and create the incident issue later (one way to open the issue outside the web UI is sketched after these steps).
Optional (not required for post-deployment patches; do as needed for the incident):
Create an RCA issue on the `infrastructure` queue using the RCA template. If it is not possible to generate the issue, start with the tracking document and create the RCA issue later.
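For reference, the incident issue can also be opened through the GitLab Issues API rather than the web UI. The snippet below is a hypothetical sketch, not the documented procedure: it assumes a personal access token in `GITLAB_TOKEN` and uses `gitlab-com/gl-infra/production` as the production queue; adjust the project path, title, labels, and description (which should come from the Incident template) as needed.

```shell
# Hypothetical sketch: project path, title, labels, and description are
# placeholders; the real description should come from the Incident template.
curl --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --data-urlencode "title=Incident: <short summary>" \
  --data-urlencode "labels=Incident" \
  --data-urlencode "description=<contents of the Incident template>" \
  "https://gitlab.com/api/v4/projects/gitlab-com%2Fgl-infra%2Fproduction/issues"
```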
infra/5543 tracks automation for incident management.
This is a first revision of the definition of Service Disruption (Outage), Partial Service Disruption, and Degraded Performance per the terms on Status.io. Data is based on the graphs from the Key Service Metrics Dashboard.
Outage and Degraded Performance incidents occur when:
- `Degraded` is defined as any sustained 5 minute time period where a service is below its documented Apdex SLO or above its documented error ratio SLO.
- `Outage` (Status = Disruption) is defined as a 5 minute sustained error rate above the Outage line on the error ratio graph.

SLOs are documented in the runbooks/rules; a sketch of what such a rule can look like follows.
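As a rough illustration of how the "sustained 5 minute" condition maps onto an alerting rule, a Prometheus-style rule might look like the sketch below. The metric and recording-rule names (`service:error_ratio:rate5m`, `slo:error_ratio:max`) are assumptions for illustration only; the authoritative definitions live in the runbooks. The `for: 5m` clause is what encodes the sustained 5 minute requirement.

```yaml
# Illustrative sketch only: the metric and recording-rule names below are
# assumptions, not the actual rules from the runbooks.
groups:
  - name: slo-violations
    rules:
      - alert: ServiceErrorRatioSLOViolation
        # Fires when a service's error ratio stays above its documented SLO
        # for a sustained 5 minute period (the "Degraded" definition above).
        expr: service:error_ratio:rate5m > on(service) slo:error_ratio:max
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.service }} error ratio has been above its SLO for 5 minutes"
```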
To check if we are Degraded or Disrupted for GitLab.com, we look at the Apdex and error ratio graphs on the Key Service Metrics Dashboard.
A Partial Service Disruption is when only part of the GitLab.com services or infrastructure is experiencing an incident. Examples of partial service disruptions are instances where GitLab.com is operating normally except there are:
If an incident may be security related, engage the Security Operations on-call following the Security Incident Response Guide.
Information is an asset to everyone impacted by an incident. Properly managing the flow of information is critical to minimizing surprise and setting expectations. We aim to keep interested stakeholders apprised of developments in a timely fashion so they can plan appropriately.
This flow is determined by:
Furthermore, avoiding information overload is necessary to keep every stakeholder’s focus.
To that end, we will have:
- a dedicated Zoom call for incidents (The Situation Room); the link is in the topic of the `#incident-management` room in Slack
- the `#incident-management` channel for internal updates
Definitions and rules for transitioning state and status are as follows.
| State | Description |
| ----- | ----------- |
| Investigating | The incident has just been discovered and there is not yet a clear understanding of the impact or cause. If an incident remains in this state for longer than 30 minutes after the EOC has engaged, the incident should be escalated to the IMOC. |
| Identified | The cause of the incident is believed to have been identified and a step to mitigate has been planned and agreed upon. |
| Monitoring | The step has been executed and metrics are being watched to ensure that we're operating at a baseline. |
| Resolved | The incident is closed and the status is again Operational. |
Status can be set independent of state. The only time these must align is when an issue is Resolved, at which point the status must return to Operational.
| Status | Description |
| ------ | ----------- |
| Operational | The default status before an incident is opened and after an incident has been resolved. All systems are operating normally. |
| Degraded Performance | Users are impacted intermittently, but the impact is not observed in metrics, nor reported, to be widespread or systemic. |
| Partial Service Disruption | Users are impacted at a rate that violates our SLO. The IMOC must be engaged and monitoring to resolution is required to last longer than 30 minutes. |
| Service Disruption | This is an outage. The IMOC must be engaged. |
| Security Issue | A security vulnerability has been declared public and the security team has asked to publish it. |
The primary goals of writing an Incident Review are to ensure that the incident is documented, that all contributing root cause(s) are well understood, and, especially, that effective preventive actions are put in place to reduce the likelihood and/or impact of recurrence.[1]
Not every incident requires a review, but if an incident matches any of the following criteria, an incident review must be completed:
Incident Reviews are conducted in production issues, except in the case of extenuating circumstances when Infrastructure or Engineering management determines a synchronous video call should be held. The issues should have the `~IncidentReview` label attached.
Follow-up issues arising from the review should be labeled `~Corrective Action` and linked to the Incident Review issue. Once the review is complete and all `~Corrective Action` issues have been linked, the Incident Review issue can be closed.
The infrastructure team keeps track of Corrective Actions on a dedicated board. The prioritization and assignment of issues are handled collectively by the Reliability Engineering managers.
Incident severities encapsulate the impact of an incident and scope the resources allocated to handle it. Detailed definitions are provided for each severity, and these definitions are reevaluated as new circumstances become known. Incident management uses our standardized severity definitions, which can be found under our issue workflow documentation.
[1]: Google SRE, Chapter 15 - Postmortem Culture: Learning from Failure