Infrastructure

Communication

Infrastructure roles

The infrastructure team is split between production engineers and performance specialists.

Both roles are closely related and overlap in some areas; for example, both care about the availability and performance of GitLab.com, each from a different perspective.

Both roles also care about building infrastructure and monitoring that can be shipped to our customers.

Production Engineers

Production engineers work on keeping the infrastructure that runs our services fast and reliable. This infrastructure includes GitLab.com, dev.GitLab.org, and GitHost.io.

Production engineers also have a strong focus on enabling developers to ship features as fast and bug-free as possible: providing the monitoring tools that prevent regressions from shipping and affecting our customers, and building automation that lowers the barrier of access to production and allows us to scale.

Responsibilities can be found in the job description.

Production Engineering Resources

Performance Specialists

Performance specialists are developers who focus on improving GitLab.com performance. They work on issues from the GitLab-CE project.

For practical reasons we track work that is in flight in the performance issue tracker by cross-linking, but we keep the discussion in the source issue.

This lets us run quick one-week sprints and iterate faster.

Performance specialists can also focus on critical infrastructure tasks that enable GitLab.com to go faster, increase availability, or generally scale to handle more users with fewer resources.

We have a public monitoring server that shows our most important metrics.

Documentation

The main infrastructure documentation can be found in 2 places:

Runbooks

Runbooks are public, but they are automatically mirrored to our development environment. The reason is availability: if GitLab.com went down, runbooks hosted only there would not be available to help bring it back up.
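
As a minimal sketch of what such a mirror can look like (both repository URLs below are placeholders, and in practice GitLab's built-in repository mirroring can do this without any script):

```python
#!/usr/bin/env python3
"""Minimal mirror sketch: copy a public repo to an internal location.

Both URLs are placeholders, not our real repository locations.
"""
import subprocess
import tempfile

PUBLIC = "https://gitlab.com/gitlab-com/runbooks.git"        # assumed source
INTERNAL = "git@dev.gitlab.org:infrastructure/runbooks.git"  # assumed mirror

def mirror(source, destination):
    with tempfile.TemporaryDirectory() as workdir:
        # A bare --mirror clone copies every ref, not just branches.
        subprocess.run(["git", "clone", "--mirror", source, workdir], check=True)
        # --mirror on push makes the destination an exact copy of the source.
        subprocess.run(["git", "push", "--mirror", destination], cwd=workdir, check=True)

if __name__ == "__main__":
    mirror(PUBLIC, INTERNAL)
```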

These runbooks aim to provide simple solutions for common problems. Our alerting system should point to them, and they should be kept up to date with whatever we learn as we scale GitLab.com, so that our customers can adopt them too.
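
To make the link between alerts and runbooks concrete, here is a hypothetical consistency check; the JSON rule files and the `runbook` field are assumptions about the alert format, not our actual alerting configuration:

```python
#!/usr/bin/env python3
"""Hypothetical check: every alert definition must link to a runbook.

Assumes alert rules are JSON files with "name" and "runbook" fields;
the real alerting system's format may differ.
"""
import json
import sys
from pathlib import Path

def missing_runbooks(rules_dir):
    """Yield the names of alerts that lack a runbook link."""
    for path in sorted(Path(rules_dir).glob("*.json")):
        rule = json.loads(path.read_text())
        if not rule.get("runbook", "").startswith("http"):
            yield rule.get("name", path.name)

if __name__ == "__main__":
    missing = list(missing_runbooks(sys.argv[1] if len(sys.argv) > 1 else "alerts"))
    for name in missing:
        print(f"alert without a runbook: {name}")
    sys.exit(1 if missing else 0)
```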

Runbooks are divided into 2 main sections:

When writing a new runbook, be mindful of what its goal is:

Chef cookbooks

Some basic rules:

Generally our Chef cookbooks live in the open, and they get mirrored back to our internal cookbooks group for availability reasons.

There may be cases where a cookbook could become a security concern; in that case it is fine to keep it in our private GitLab instance. This should be assessed case by case and documented properly.

Internal documentation

Available in the Chef Repo, there is some documentation that is specific to GitLab.com: things that are specific to our infrastructure providers or that would create a security threat for our installation.

Still, this documentation is in the Chef Repo, and we aim to start pulling things out of there into the runbooks until what remains is thin and GitLab.com specific.

GitLab Cloud Images

A detailed process on creating and maintaining GitLab cloud images can be found here.

Production events logging

There are 2 kinds of production events that we track:

Outages and Blameless Post Mortems

Every time there is a production incident, we will create an issue in the infrastructure issue tracker with the outage label.

In this issue we will gather the following information:

These issues should also be tagged with any other label that makes sense; for example, if the issue is related to storage, label it as such.

The responsibility for creating this post mortem falls initially on the person who handled the incident, unless it is explicitly assigned to someone else.
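
As a sketch of how opening such an issue could be automated (the project path and token handling are assumptions; only the issues endpoint and the outage label come from this process):

```python
#!/usr/bin/env python3
"""Sketch: open an outage post mortem issue through the GitLab API.

The project path and GITLAB_TOKEN variable are assumptions; adapt them
to the real infrastructure issue tracker.
"""
import os
import urllib.parse
import urllib.request

API = "https://gitlab.com/api/v4"
PROJECT = urllib.parse.quote("gitlab-com/infrastructure", safe="")  # assumed path

def open_outage_issue(title, description, extra_labels=()):
    """Create an issue labeled "outage" plus any other labels that make sense."""
    data = urllib.parse.urlencode({
        "title": title,
        "description": description,
        "labels": ",".join(("outage",) + tuple(extra_labels)),
    }).encode()
    request = urllib.request.Request(
        f"{API}/projects/{PROJECT}/issues",
        data=data,
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

if __name__ == "__main__":
    open_outage_issue(
        "2016-01-01 GitLab.com degraded performance",
        "Timeline, root cause, and corrective actions go here.",
        extra_labels=["storage"],  # tag with whatever else makes sense
    )
```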

Public by default policy

These blameless post mortems have to be public by default with just a few exceptions:

That's it, there are no more reasons.

If what's blocking us from revealing this information is shame because we made a mistake, that is not a good enough reason.

The post mortem is blameless because our mistakes are not a person's mistakes but the company's. If we made a bad decision because our monitoring failed, we have to fix our monitoring, not blame someone for making a decision based on insufficient data.

On top of this, blameless post mortems help in the following aspects:

Once this post mortem is created, we will tweet from the GitLabStatus account with a link to the issue and a brief explanation of what it is about.

On Call

See the separate on-call page.

Make GitLab.com settings the default

As stated in the production engineer job description, one of the goals is "Making GitLab easier to maintain to administrators all over the world". One of the ways we do this is by making GitLab.com settings the default for all our customers. It is therefore very important that GitLab.com runs GitLab Enterprise Edition with all of its default settings. We don't want users running GitLab at scale to run into any problems.

If it is not possible to use the default settings, the difference should be documented in GitLab.com settings before being applied to GitLab.com.
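
One possible way to keep ourselves honest about this, sketched below, is to diff the live application settings against the differences we have documented. The `/application/settings` endpoint is GitLab's admin settings API; the `documented_settings.json` file is an assumed local snapshot of the settings as we documented them:

```python
#!/usr/bin/env python3
"""Sketch: find GitLab.com settings that drifted from what we documented.

documented_settings.json is an assumed snapshot of the settings as
documented; any live value that differs from it gets reported.
"""
import json
import os
import urllib.request

API = "https://gitlab.com/api/v4"

def live_settings():
    """Fetch the current application settings (requires an admin token)."""
    request = urllib.request.Request(
        f"{API}/application/settings",
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def undocumented(live, documented):
    """Return settings whose live value does not match the documented one."""
    return {k: v for k, v in live.items() if documented.get(k) != v}

if __name__ == "__main__":
    with open("documented_settings.json") as f:
        documented = json.load(f)
    for key, value in sorted(undocumented(live_settings(), documented).items()):
        print(f"{key} = {value!r} is not documented")
```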