| Topic | Links |
|-------|-------|
| GitLab.com Status | Status Page |
| Getting help | How to get help |
| Incidents | Incident response |
| Changes | Managing changes |
| State | Production Board, On-Call Board |
| Issue Trackers | Incidents, Hotspots, Changes, Deltas |
| Operations | Runbooks |
| On-call | Handover Document, Reports |
The Site Reliability teams are responsible for all of GitLab's user-facing services, most notably GitLab.com. Site Reliability Engineers ensure that these services are available, reliable, scalable, performant and, with the help of GitLab's Security Department, secure. This infrastructure spans a multitude of environments, including staging, GitLab.com (production) and dev.GitLab.org, among others (see the list of environments).
SREs are primarily focused on GitLab.com's availability, and have a strong focus on building the right toolsets and automations to enable development to ship features as quickly and bug-free as possible, leveraging the tools provided by GitLab (we must dogfood).
Another part of the job is building monitoring tools that allow quick troubleshooting as a first step, then turning those insights into symptom-based alerts, and finally fixing the problem or automating the remediation. We can only scale GitLab.com by being smart and using resources effectively, starting with our own time as the main scarce resource.
The Production Board keeps track of the state of Production, showing, at a glance, incidents, hotspots, changes and deltas related to production, and it also includes on-call reports.
For a detailed description of the board, see production/board.
We want to make GitLab.com ready for mission critical workloads. That readiness means:
To see the current SRE on call for urgent requests, issue the GitLab ChatOps command `/chatops run oncall prod`. If an issue already exists, please be ready to provide it to the SRE on call.
For high severity issues that require immediate attention, the best way to get help is to use `/pd <msg>` in the #production channel on Slack.
For non-urgent requests, open an issue in the infrastructure tracker and mention @gitlab-com/gl-infra/managers for scheduling.
There are now three infrastructure teams, and they share the on-call rotations for GitLab.com. The two SREs in the weekly rotation (EMEA and Americas) share responsibility for triaging issues and managing tasks on the SRE On-call board. The board uses the group SRE:On-call label to identify issues across subgroups in gitlab-com and is not aligned with any single milestone.
Engineers not on-call should focus on their team's board(s) (e.g. AS TEAM) so they remain focused on the current milestone.
Incoming requests to the infrastructure team can start in the Current milestone, but can be triaged out to the correct teams.
Add issues at any time to the infrastructure issue tracker. Let one of the managers for the production team know of the request. It would be helpful for our prioritization to know the timeline for the issue if your team has commitments related to it. We do reserve part of our time for interrupt requests, but that does not always mean we can fit in everything that comes to us.
Each team's manager will triage incoming requests for the services their team owns. In some cases, we may decide to pull that work immediately; in other cases, we may defer the work to a later milestone if we have higher-priority work currently in progress. The three managers meet twice a week to share efforts and rebalance work if needed. Work that is ready to pull will be added to the team milestone(s) and appear on their boards.
The infrastructure issue tracker is the backlog for the infrastructure team and tracks all work that SRE teams are doing that is not related to an ongoing change or incident.
We have a production issue tracker. Issues in this tracker are meant to track incidents and changes to production that need approval. We can host discussion of proposed changes in linked infrastructure issues. These issues should have ~incident or ~change and notes describing what happened or what is changing, with relevant infrastructure team issues linked for supporting information.
Standups: We do standups with a bot that asks each team member for updates at 11 AM in their timezone. Updates go into our Slack channel.
Retros: We are testing async retros with another bot that runs on the second Wednesday of our milestone. Updates from that retro will also go to our Slack channel. A summary will also be made so that we can vote on important issues to discuss in more depth. These can then help us update our themes for milestones.
Long term, additional teams will perform work on the production environment:
We cannot keep track of events in production across a growing number of functional queues.
Furthermore, said teams will start to have on-call rotations for both their function (e.g., security) and their services. For people on-call, having a centralized tracking point for these events is more effective than perusing various queues. Timely information about the production environment (in terms of when an event is happening and how long it takes an on-call person to understand what is happening) is critical. The production queue centralizes production event information.
Functional queues track team workloads (security, etc.) and are the source of the work that has to get done. Some of this work clearly impacts production (build and deploy new storage nodes); some of it will not (develop a tool to do x, y, z) until it is deployed to production.
The production queue tracks events in production, namely:
Over time, we will implement hooks into our automation to automagically inject change audit data into the production queue.
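As a hedged sketch of what such a hook could look like, the snippet below files a ~change issue from a deploy pipeline using the python-gitlab library. The tracker path, token handling, and deploy metadata are illustrative assumptions, not the actual integration:

```python
# Hypothetical deploy-pipeline hook that records a change audit issue
# in the production queue. Assumes python-gitlab is installed and a
# valid API token is available; paths and fields are illustrative.
import os

import gitlab

def record_change(summary: str, deploy_version: str) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get("gitlab-com/gl-infra/production")  # assumed tracker path
    project.issues.create({
        "title": f"Change: {summary}",
        "description": f"Automated change audit record for deploy {deploy_version}.",
        "labels": ["change"],  # the ~change label described above
    })

record_change("Deploy web fleet", "12.3.456-ee")
```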
This also leads to a single source of data. Today, for instance, incident reports for the week get transcribed to both the On-call Handoff and Infra Call documents (we also show exceptions in the latter). These meetings serve different purposes but have overlapping data. The input for this data should be queries against the production queue rather than manual builds in documents.
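A minimal sketch of that idea, assuming the python-gitlab library and the tracker path shown: pull the week's ~incident issues by query instead of transcribing them into documents by hand.

```python
# Sketch: build the weekly incident list for the on-call handoff by
# querying the production queue rather than copying data into documents.
# The tracker path and label name follow the conventions described above.
import datetime

import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="<token>")
project = gl.projects.get("gitlab-com/gl-infra/production")  # assumed tracker path

since = (datetime.datetime.utcnow() - datetime.timedelta(days=7)).isoformat()
incidents = project.issues.list(labels=["incident"], created_after=since, all=True)

for issue in incidents:
    print(f"{issue.created_at}  {issue.title}  {issue.web_url}")
```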
Additionally, we need to keep track of error budgets, which should also be derived from the production queue.
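To make the arithmetic concrete, here is a toy example; the 99.95% SLO and the incident durations are made-up numbers, not stated commitments:

```python
# Error budget arithmetic: the budget is the downtime an SLO allows over
# a window, and incidents recorded in the production queue consume it.
SLO = 0.9995                            # example target, not a commitment
WINDOW_MINUTES = 30 * 24 * 60           # a 30-day window

budget = (1 - SLO) * WINDOW_MINUTES     # 21.6 minutes per 30 days
incident_minutes = [5, 12]              # example durations from ~incident issues

spent = sum(incident_minutes)
print(f"budget {budget:.1f} min, spent {spent} min, remaining {budget - spent:.1f} min")
```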
We will also be collapsing the database queue into the infrastructure queue. The database is a special piece of the infrastructure for sure, but so are the storage nodes, for example.
For the on-call SRE, every event that pages (where an event may be a group of related pages) should have an issue created for it in the production queue. Per the severity definitions, if there is at least visible impact (functional inconvenience to users), then it is by definition an incident, and the Incident template should be used for the issue. This is likely to be the majority of pager events; exceptions are typically obvious, i.e. they impact only us and customers won't even be aware, or they are pre-incident-level alerts that we act on to avoid incidents.
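As a toy illustration of this triage rule (the non-incident label below is a placeholder; only ~incident is defined above):

```python
# Toy encoding of the rule: any page with visible user impact is by
# definition an incident and gets the Incident template; other pages
# are plain production events ("production event" is a placeholder name).
def labels_for_page(visible_user_impact: bool) -> list[str]:
    return ["incident"] if visible_user_impact else ["production event"]

assert labels_for_page(True) == ["incident"]
```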
All direct or indirect changes to the authentication and authorization mechanisms used by GitLab Inc., whether by customers or employees, require additional review and approval by a member of at least one of the following teams:
This process is enforced for the following repositories, where the approval is made mandatory using MR approvals:
Additional repositories may also require this approval and can be evaluated on a case-by-case basis.
When should we loop the security team in on changes? If we are making major changes to any of the following areas:
We use issue labels within the Infrastructure issue tracker to assist in prioritizing and organizing work. Prioritized labels are:
~(perceived) data loss
| Label | Estimate to fix |
|-------|-----------------|
| Urgent Priority | Has to be executed immediately |
| High Priority | Has to be executed within 2 milestones at most |
| Medium Priority | Has to be executed within 6 milestones at most |
| Low Priority | Lower priority to be executed |
| Label | Impact on Functionality | Example |
|-------|--------------------------|---------|
| Blocker | Outage, broken feature with no workaround | Unable to create an issue. Data corruption/loss. Security breach |
| Critical Severity | Broken feature, workaround too complex & unacceptable | Can push commits, but only via the command line |
| Major Severity | Broken feature, workaround acceptable | Can create merge requests only from the Merge Requests page, not through the issue |
| Low Severity | Functionality inconvenience or cosmetic issue | Label colors are incorrect / not being displayed |
Type labels are very important. They define what kind of issue this is. Every issue should have one or more.
| Label | Description |
|-------|-------------|
| ~change | Represents a change to the infrastructure; see details on: Change |
| ~incident | Represents an incident on the infrastructure; see details on: Incident |
| ~access request | Standard access requests |
| | Prioritized to be worked on by the current on-call team members |
| | Label for problems that can get higher priority |
| | Label for problems related to the database |
| | Label for problems related to security |
The services list can be found in the service catalog: https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml
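As a hedged sketch, the catalog can be inspected programmatically; the `services`, `name`, and `criticality` field names below are assumptions about the file's schema, not its documented format:

```python
# Sketch: list services with their criticality from the service catalog.
# Assumes PyYAML; field names are assumptions about the catalog schema.
import yaml

with open("service-catalog.yml") as f:
    catalog = yaml.safe_load(f)

for service in catalog.get("services", []):
    name = service.get("name", "<unnamed>")
    criticality = service.get("criticality", "?")
    print(f"{criticality:4} {name}")
```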
Service criticality labels help us define how critical a service is, and how critical a change to the infrastructure would be, considering how a failure would affect the user experience. For example, ~C1 applies to the PostgreSQL or Redis master. Since most services can reach different levels of criticality, we consider the highest level here. We also have a template of actions for a change, depending on the criticality:
| Label | Description |
|-------|-------------|
| ~C1 | Vital service and a single point of failure; if it is down, the application is down |
| ~C2 | Important service; if it is down, some functionality will not be available in the application |
| ~C3 | If an instance or the service is down, we can have performance degradation |
| ~C4 | Services that could be in maintenance mode or would not affect the performance of the application |
The service redundancy level helps us identify which services have failover available or some other mechanism of redundancy.
| Redundancy level | Example |
|------------------|---------|
| The loss of a single instance will affect all users | An instance of PostgreSQL or Redis |
| The loss of a single instance will affect a subset of users | An instance of Gitaly |
| The loss of a single instance would not affect any user | An instance of Grafana |
~goals are issues that are in a Milestone and that we agreed as a team to do everything in our power to deliver. Goal issues should fit in one Milestone, that is, they are deliverable in a single week's time; if they do not fit in one Milestone, we are probably talking about a
We use some other labels to indicate specific conditions and then measure the impact of these conditions on production or on the production engineering team. This is especially important for understanding where the team invests its time, so that we can reduce toil and reduce the chance of failure caused by accessing production more often than necessary.
Labels that are particularly important for gathering data are:
~toil: Repetitive, boring work that should be automated away.
~unscheduled: An issue that became an interruption to the team and had to be handled within a Milestone. It is unplanned work.
~unblocks others: An issue that allows some other part of the company to deliver something.
~access request: When someone requests access to some part of the infrastructure.
~requires production access: Every time someone with production access has to jump into a console to perform a manual operation, such as running a script in a Rails console or connecting to Redis or the database directly.
We should never stop helping and unblocking team members. To this end, data should always be gathered to assist in highlighting areas for automation and the creation of self-service processes. Creating an issue from the request with the proper labels is the first step. The default should be that the person requesting help makes the issue; but we can help with that step too if needed.
If the issue is urgent for whatever reason, we should label it following the instructions above and add it to the ongoing Milestone.
Ongoing outages, as well as issues that have the ~(perceived) data loss label and are therefore actively being worked on, need a handoff to happen as team members cycle in and out of their timezones and availability. The on-call log can be used to assist with this (see the link to the on-call log at the top).
To ensure 24x7 coverage of emergency issues, we currently have split on-call rotations between EMEA and AMER regions; team members in EMEA regions are on-call from 0400-1600 UTC, and team members in AMER regions are on-call from 1600-0400 UTC. We plan to extend this to include team members from the APAC region in the future, as well. This forms the basis of a follow-the-sun support model, and has the benefit for our team members of reducing (or eliminating) the stress of responding to emergent issues outside of their normal work hours, as well as increasing communication and collaboration within our global team.
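A small helper reflecting the rotation described above (purely illustrative):

```python
# Maps a UTC time to the on-call region per the split rotation above:
# EMEA covers 0400-1600 UTC, AMER covers 1600-0400 UTC.
import datetime

def oncall_region(now_utc: datetime.datetime) -> str:
    return "EMEA" if 4 <= now_utc.hour < 16 else "AMER"

print(oncall_region(datetime.datetime.utcnow()))
```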
When an on-call person is paged, either via the `/pd` command in Slack or by the automated monitoring systems, the SRE will have a 15-minute SLA to acknowledge or escalate the alert.
This is also noted in the On-Call section of the handbook.
Because GitLab is an asynchronous-workflow company, @mentions of on-call individuals in Slack are treated like normal messages, and no response SLA is attached to them. This is also because phone notifications from Slack have no escalation policies, whereas PagerDuty has policies that team members and rotations can configure to make sure an alert is escalated when no one has acknowledged it.
If you need to page a team member from Slack, you can use `/pd "your message to the on-call here"` to send an alert to the currently on-call team members.
Given the number of systems and services that we use, it is very hard, if not impossible, to reach an expert level in all of them. What makes it even harder is the rate of change in our infrastructure. For this reason, the person on-call is not expected to know everything about all of our systems. In addition, incidents are often complex and vague in nature, requiring different perspectives and ideas for solutions.
Reaching out for help is considered good practice and should not be mistaken for incompetence. Asking for help while following the escalation guidelines and checklists can expose information and result in faster resolution of problems. It also improves the knowledge of the team as a whole when for example an undocumented problem is covered in runbooks after an incident or when questions are asked in Slack channels where others can read it. This is true for on-call emergencies as well as project work. You will not be judged on the questions you ask, regardless of how elemental they might be.
The SRE team's primary responsibility is the availability of GitLab.com. For this reason, helping the person on-call should take priority over project work. This doesn't mean that for every single incident the entire SRE team should drop everything and get involved. However, it does mean that those with knowledge and experience in a field relevant to a problem should feel entitled to prioritize helping over project work. Previous experience has shown that as an incident's severity increased or potential causes were ruled out, more and more people from across the company got involved.
There are 2 kinds of production events that we track:
For some incidents, we may figure out that the usage patterns that led to the issues were abuse. There is a process for how we define and handle abuse.