Infrastructure

Infrastructure teams

The infrastructure team is made up of engineers who share the responsibility of keeping GitLab.com fast, safe, and available at scale, each with a specific focus.

These teams are:

Production Team

Composed of production engineers.

Production engineers keep the infrastructure that runs our services fast and reliable. This infrastructure includes staging, GitLab.com, and dev.GitLab.org.

Production engineers also have a strong focus on building the right toolsets and automation to enable development to ship features as fast and bug-free as possible, leveraging the tools provided by GitLab.com itself - we must dogfood.

Another part of the job is building monitoring tools that allow quick troubleshooting as a first step, then turning those findings into alerts that notify based on symptoms, and finally fixing the problem or automating the remediation. We can only scale GitLab.com by being smart and using resources effectively, starting with our own time as the main scarce resource.
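As a minimal sketch of this symptom-first approach, the check below probes a health endpoint and falls back to an automated remediation; the URL, timeout, and remediation command are illustrative assumptions, not our actual configuration:

```python
#!/usr/bin/env python3
"""Minimal sketch: alert on a symptom, then attempt automated remediation."""
import subprocess
import sys
import urllib.request

HEALTH_URL = "https://gitlab.example.com/-/health"  # hypothetical probe URL
REMEDIATION = ["sudo", "systemctl", "restart", "gitlab-workhorse"]  # example command only


def symptom_detected() -> bool:
    """Check the symptom users feel (service unreachable), not the root cause."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status != 200
    except OSError:  # URLError/HTTPError: an unreachable service counts as a symptom
        return True


if __name__ == "__main__":
    if symptom_detected():
        print("ALERT: health check failing, attempting remediation", file=sys.stderr)
        subprocess.run(REMEDIATION, check=False)  # the automated first response
        sys.exit(1)
    print("OK")
```

In practice the alerting lives in a monitoring system rather than a standalone script, but the flow is the same: detect the symptom, notify, then automate the fix.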

Production Engineer job description.

Tenets

  1. Security: reduce risk to its minimum, and make the minimum explicit.
  2. Transparency, clarity and directness: public and explicit by default, we work in the open, we strive to get signal over noise.
  3. Efficiency: smart resource usage; we should not fix scalability problems by throwing more resources at them, but by understanding where the waste is happening and then working to make it disappear. We should work hard to reduce toil to a minimum by automating all the boring work out of our way.

Production and Staging Access

Production access is granted to production engineers, security engineers, and (production) on-call heroes.

Staging access is treated at the same level as production access because it contains production data.

No other engineer, lead, or manager at any level has access to production. If information is needed from production, it must be obtained by a production engineer through an issue in the infrastructure issue tracker.

There is one temporary exception: release managers require production access to perform deploys, and they will retain it until production engineering can offer deployment automation that requires neither Chef nor SSH access. This is an ongoing effort.

Production Engineering Resources

Documentation

Runbooks

Runbooks are public, but they are automatically mirrored in our development environment. This is because if GitLab.com is down, the runbooks hosted there would not be available to help bring it back up.
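As a rough illustration of such mirroring, the sketch below copies every ref of the public runbooks repository to a second host; the mirror URL is a hypothetical placeholder, and in practice the mirroring is configured on the Git hosts themselves rather than scripted:

```python
#!/usr/bin/env python3
"""Sketch: mirror the public runbooks repository to a second Git host."""
import subprocess

SOURCE = "https://gitlab.com/gitlab-com/runbooks.git"       # public repo (path assumed)
MIRROR = "git@dev.gitlab.org:example/runbooks-mirror.git"   # hypothetical mirror


def mirror_repo() -> None:
    # --mirror clones and pushes every ref, so the copy stays a full replica.
    subprocess.run(["git", "clone", "--mirror", SOURCE, "runbooks.git"], check=True)
    subprocess.run(
        ["git", "--git-dir", "runbooks.git", "push", "--mirror", MIRROR],
        check=True,
    )


if __name__ == "__main__":
    mirror_repo()
```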

These runbooks aim to provide simple solutions for common problems. They are linked from our alerting system and should be kept up to date with whatever new findings we gather as we learn how to scale GitLab.com, so that these runbooks can also be adopted by our customers.

Runbooks are divided into 2 main sections:

When writing a new runbook, be mindful of what its goal is:

Chef cookbooks

Some basic rules:

Generally our Chef cookbooks live in the open, and they get mirrored back to our internal cookbooks group for availability reasons.

There may be cases where a cookbook could become a security concern, in which case it is OK to keep it in our private GitLab instance. This should be assessed on a case-by-case basis and documented properly.

Internal documentation

Available in the Chef Repo. Some documentation is specific to GitLab.com: things that are specific to our infrastructure providers or that would create a security threat for our installation.

Still, this documentation is in the Chef Repo, and we aim to start pulling things out of there into the runbooks, until what remains is thin and GitLab.com-specific only.

GitLab Cloud Images

A detailed process on creating and maintaining GitLab cloud images can be found here.

Production events logging

There are 2 kinds of production events that we track:

Outages and Blameless Post Mortems

Every time there is a production incident we will create an issue in the infrastructure issue tracker with the outage label.

In this issue we will gather the following information:

These issues should also be tagged with any other label that makes sense; for example, if the issue is related to storage, label it accordingly.

The responsibility of creating this post mortem falls initially on the person who handled the incident, unless it is explicitly assigned to someone else.
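As a sketch of how such an issue could be opened programmatically through the GitLab issues API, the snippet below creates a labeled issue; the project path, token variable, and issue text are assumptions:

```python
#!/usr/bin/env python3
"""Sketch: open an outage post-mortem issue via the GitLab issues API."""
import json
import os
import urllib.parse
import urllib.request

PROJECT = urllib.parse.quote("gitlab-com/infrastructure", safe="")  # assumed tracker path
API_URL = f"https://gitlab.com/api/v4/projects/{PROJECT}/issues"


def open_post_mortem(title: str, description: str) -> dict:
    payload = json.dumps({
        "title": title,
        "description": description,
        "labels": "outage",  # plus any other label that makes sense
    }).encode()
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"],  # assumed env variable
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    issue = open_post_mortem(
        "Post mortem: example database outage",
        "Timeline, root cause analysis, and corrective actions go here.",
    )
    print(issue["web_url"])
```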

Public by default policy

These blameless post mortems have to be public by default with just a few exceptions:

That's it, there are no more reasons.

If what's blocking us from revealing this information is shame because we made a mistake, that is not a good enough reason.

The post mortem is blameless because our mistakes are not a person's mistakes but the company's mistakes. If we made a bad decision because our monitoring failed, we have to fix our monitoring, not blame someone for making a decision based on insufficient data.

On top of this, blameless post mortems help in the following aspects:

Once this post mortem is created, we will tweet from the GitLabStatus account with a link to the issue and a brief explanation of what it is about.

On Call

See the separate on-call page.

Make GitLab.com settings the default

As stated in the production engineer job description, one of the goals is "Making GitLab easier to maintain to administrators all over the world". One of the ways we do this is by making GitLab.com settings the default for all our customers. It is very important that GitLab.com runs GitLab Enterprise Edition with all its default settings. We don't want users running GitLab at scale to run into any problems.

If it is not possible to use the default settings, the difference should be documented in GitLab.com settings before being applied to GitLab.com.
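As a simple sketch of how such differences could be surfaced before they are applied, the snippet below diffs an instance's effective settings against the shipped defaults; the setting names and values are made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch: flag settings that differ from the shipped defaults."""

# Made-up values; in practice the defaults would come from the GitLab source
# and the effective values from the running instance.
SHIPPED_DEFAULTS = {"sidekiq_concurrency": 25, "worker_timeout": 60}
EFFECTIVE_SETTINGS = {"sidekiq_concurrency": 50, "worker_timeout": 60}


def undocumented_differences(defaults: dict, effective: dict) -> dict:
    """Return every setting whose effective value differs from its default."""
    return {
        key: (defaults.get(key), value)
        for key, value in effective.items()
        if defaults.get(key) != value
    }


if __name__ == "__main__":
    for key, (default, actual) in undocumented_differences(
        SHIPPED_DEFAULTS, EFFECTIVE_SETTINGS
    ).items():
        print(f"{key}: default={default} actual={actual} -> document before applying")
```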