On this page

Team responsibility

The Distribution team ensures that the experience of installing and updating GitLab is easy and safe for everyone.

This means the team:

  1. Keeps the installation, update, and upgrade pages complete, correct, helpful, and easy to follow.
  2. Ensures a beginner can install GitLab completely, quickly, and correctly.
  3. Builds the Omnibus packages.
  4. Builds the cloud native installation.
  5. Makes sure that GitLab is easy and safe to install and maintain for self-hosted installations.
  6. Makes sure that GitLab is easy and safe to install and maintain on GitLab.com.
  7. Measures whether our updates work on users' machines by checking that GitLab comes back online.
  8. Limits the number of people affected by problems in our packages.
  9. Makes sure nightly builds are installed on dev.gitlab.org, and follows up on mentions in GitLab CE/EE repository issues with the Distribution label.
  10. Maintains the runners used to make builds.
  11. Triages issues in the omnibus-gitlab issue tracker.
  12. Works with community installation methods around the internet to make sure they are as good as possible and link to relevant materials.


| Name | Location | Description |
| --- | --- | --- |
| Omnibus GitLab | gitlab-org/omnibus-gitlab | Builds Omnibus packages with HA support for LTS versions of all major Linux operating systems such as Ubuntu, Debian, CentOS/RHEL, OpenSUSE, and SLES |
| Docker GitLab image | gitlab-org/omnibus-gitlab/docker | Builds Docker images for GitLab CE/EE based on the omnibus-gitlab package |
| AWS images | AWS marketplace | AWS image based on the omnibus-gitlab package |
| Azure images | Azure marketplace | Azure image based on the omnibus-gitlab package |
| Kubernetes Helm charts | charts/charts.gitlab.io | Official Helm chart application definitions for Kubernetes, based on the omnibus-gitlab package |
| Red Hat OpenShift | OpenShift template | Template for OpenShift Origin based on the omnibus-gitlab package |
| Mesosphere DC/OS package | Universe repository | Package for Mesosphere DC/OS based on the omnibus-gitlab package |
| GitLab PCF tile | gitlab.com/gitlab-pivotal | One-click installation of GitLab in Pivotal Cloud Foundry based on the omnibus-gitlab package |
| GitLab Terraform configuration | gitlab-terraform | Terraform configuration for various cloud providers |
| Omnibus GitLab Builder | GitLab Omnibus Builder | Creates environments containing build dependencies for the omnibus-gitlab package |
| Upgrade time metrics | Upgrade time metrics page on GL Pages | Stores the calculated upgrade times between versions in chart form. Backed by Google Sheets and hosted on GL Pages |

How to work with Distribution

Everything that is done in GitLab will end up in the packages that are shipped to users. While that sounds like the last link in the chain, it is one of the most important ones. This means that informing the Distribution team of a change in an early stage is crucial for releasing your feature. While last minute changes are inevitable and can happen, we should strive to avoid them.

We expect every team to reach out to the Distribution team before scheduling a feature in an upcoming release in the following cases:

To sum up the above list:

If you need to add or change an install, update, make, mkdir, mv, cp, chown, chmod, compilation, or configuration step in any part of the GitLab stack, reach out to the Distribution team for their opinion as early as possible.

This will allow us to appropriately schedule any changes we have to make to the packages.

If a change is reported late in the release cycle, or not reported at all, your feature or change might not ship with the release.

If you have any doubt whether your change will have an impact on the Distribution team, don't hesitate to ping us in your issue and we'll gladly help.

Internal team training

Every Distribution team member is responsible for creating a training session for the rest of the team. These trainings will be recorded and available to the whole team.


The purpose of team training is to introduce the work done to the rest of your team. It also allows the rest of the team to easily transition into new features and projects and prepare them for maintenance.

Training should be:

Training should not be:

Simply put, the training is a summary of the notes taken in issues during development, the programming challenges encountered, and a high-level overview of the written documentation. After taking part in the training, your team members should be able to take over maintenance of, or build on top of, your feature with less effort.

Note: Do not shy away from being technical in your training. Ask yourself: What would have been useful for me when I started working on this task? What would have helped me be more efficient?

Efficiency of the training

To see whether the training is effective, the Distribution lead will rotate team members across projects where training was done. For example, if a feature requires regular releases, the person who gave the training is considered the tutor. A different team member will follow the training and documentation, asking the original maintainer for help. That new person is then responsible for improving the feature, and in turn for training other team members.

Training Listings


Q: Isn't this double work? A: No. The training should be prepared while documenting the task.

Q: Won't this slow me down? A: At the beginning, possibly. However, every hour of training given multiplies its value by the number of team members.

Q: Isn't it more useful to let the team check out the docs and ask questions? A: In an ideal world, possibly. However, everyone has a lot of tasks assigned and might not be able to go through the docs until they need to do something. That might be months later, and you, as the person who would have given the training, might no longer be able to help efficiently.

Public by default

All work carried out by the Distribution team is public. Some exceptions apply:

If you are unsure whether something needs to remain private, check with the team lead.

Working on dev.gitlab.org

Some of the team's work is carried out on our development server at dev.gitlab.org. The infrastructure overview document lists the reasons.

Unless your work is related to security, it is carried out in projects on GitLab.com.


General resources available to developers are listed in the Engineering handbook.

In the Distribution team specifically, everyone should have access to the testground project on Google Cloud Platform. If you don't have access, ask the team lead by creating an issue in the Distribution team issue tracker and labeling it Access Request.

Cloud Images

The process documenting the steps necessary to update the GitLab images available on the various cloud providers is detailed on our Cloud Image Process page.


As part of its tasks, the team is responsible for the following nodes:


Every day at 1:30 UTC, a nightly build gets triggered on dev.gitlab.org. The cron trigger times are currently defined at the scheduled pipeline page on dev.gitlab.org.
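The scheduled pipeline normally handles this on its own, but a build can also be triggered by hand through the GitLab pipeline trigger API. A hedged sketch follows; the project ID, trigger token, and ref here are placeholders, not values from this document.

```shell
# Hypothetical sketch of triggering a pipeline by hand on dev.gitlab.org.
# The numeric project ID and trigger token are placeholders you would
# look up for the actual project.

trigger_nightly() {
  project_id="$1"   # numeric project ID on dev.gitlab.org
  token="$2"        # pipeline trigger token for that project
  curl --request POST \
    "https://dev.gitlab.org/api/v4/projects/${project_id}/trigger/pipeline" \
    --form "token=${token}" \
    --form "ref=master"
}
```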

Every day at 7:20 UTC, the nightly CE packages get automatically deployed on dev.gitlab.org. Any errors in the install process will be logged in Sentry. Slack notifications will appear in #dev-gitlab. The cron task is currently defined in the dev.gitlab.org role.
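The real cron task lives in the dev.gitlab.org role; the sketch below only illustrates the kind of job it runs, including respecting a manual package hold so a broken nightly is not reinstalled. The function name is illustrative, not taken from the role.

```shell
# Hypothetical sketch of the nightly upgrade job on dev.gitlab.org.
# The actual task is defined in the dev.gitlab.org role.

nightly_upgrade() {
  # Respect a manual hold (see "Manually upgrading/downgrading packages")
  # so a known-bad nightly is not reinstalled over a pinned version.
  if apt-mark showhold | grep -qx "gitlab-ce"; then
    echo "gitlab-ce is on hold; skipping nightly upgrade"
    return 0
  fi
  apt-get update
  apt-get install -y gitlab-ce
}
```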

Manually upgrading/downgrading packages

Announce in the #announcements Slack channel before and after upgrading or downgrading the package on dev.gitlab.org. For example:

    Will be manually downgrading the package on dev.gitlab.org to <version> as the latest nightly shipped some bugs.
    Downgrade completed. Also put the package on hold to prevent automatic upgrades.
    Will be removing the package hold and manually upgrading the package on dev.gitlab.org to <version> (or the latest nightly).
    Upgrade completed. dev.gitlab.org now runs <version>.
  1. Upgrading

    Log in to dev and run the following commands:

     $ sudo apt-get update
     $ sudo apt-get install gitlab-ce

    and verify that the latest version of the package was installed. Either visit https://dev.gitlab.org/help and confirm the version string there, or run the following command:

     $ apt-cache policy gitlab-ce | grep "Installed"
  2. Downgrading

    Sometimes a bug introduced in the latest nightly breaks dev.gitlab.org. In such situations, we want to downgrade to a version from before the bug was introduced so that dev.gitlab.org is operational again. Most of the time, we will also want to prevent the package from being updated automatically by our cron job until the bug is fixed. To accomplish this, do the following:

    1. Stop sidekiq and unicorn to be sure that data doesn't get altered during the downgrade.
       $ sudo gitlab-ctl stop sidekiq
       $ sudo gitlab-ctl stop unicorn
    2. Downgrade to a previous version. This installs the package and runs reconfigure automatically.
       $ sudo apt-get install gitlab-ce=<version to be installed>

       For example:

       $ sudo apt-get install gitlab-ce=10.4.0+rnightly.75436.44501791-0
    3. Confirm all the services are up and running.
       $ sudo gitlab-ctl status
    4. Confirm the correct version is deployed by visiting https://dev.gitlab.org/help
    5. Keep the package on hold, so that it doesn't get auto-upgraded.
       $ sudo apt-mark hold gitlab-ce
    6. Verify the hold is in place.
       $ sudo apt-mark showhold
    7. Remember to unhold the package once a version with a fix for the bug is released, so that it can be installed.
       $ sudo apt-mark unhold gitlab-ce
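For convenience, the downgrade steps above can be collected into a single script. This is a sketch, not an official tool: the DRY_RUN guard is added here so the command sequence can be previewed, and the version argument is whichever nightly you want to roll back to.

```shell
#!/bin/sh
# Sketch of the manual downgrade procedure for dev.gitlab.org.
# Run as root; set DRY_RUN=1 to print the commands instead of executing them.

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

downgrade_gitlab_ce() {
  version="$1"
  # Stop services so data is not altered during the downgrade.
  run gitlab-ctl stop sidekiq
  run gitlab-ctl stop unicorn
  # Install the older package; reconfigure runs automatically.
  run apt-get install -y "gitlab-ce=${version}"
  # Confirm services are back, then hold the package so the nightly
  # cron job does not auto-upgrade it before the fix ships.
  run gitlab-ctl status
  run apt-mark hold gitlab-ce
  run apt-mark showhold
}
```

Remember that the hold placed by the last step still needs to be removed manually once a fixed version is released, just as in step 7 above.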

Maintenance tasks


The team's responsibility is to make sure that the GitLab instance on this server is operational. The omnibus-gitlab package on this server is a stock package with the configuration required to keep it operational. Regular omnibus-gitlab commands can be used on this node. If, for some reason, you need to apply a change to /etc/gitlab/gitlab.rb, this change needs to be introduced in the dev-gitlab-org role.

If you do not have access to this repository, but you need to do a hot-patch or configuration testing, the following steps can be performed:


Build Machines

A GitLab CI runner manager is responsible for creating build machines for package builds. This node's configuration is managed by cookbook-gitlab-runner. Configuration values are stored in the vault named the same as the node; see the example.

Currently, the version of the GitLab CI runner is locked. We aim to stay close to the current runner version in order to get the fixes we need without running into issues that could cause a failure. Such failures could prevent a release from going out, so be careful with unnecessary changes on these nodes.

For building official packages we use build-runners.gitlab.org and (soon to be deprecated) omnibus-builder-runners-manager.gitlab.org.

For building packages on GitLab.com as part of trigger packages pipelines, we use a manager machine at build-trigger-runner-manager.gitlab.org.

Both build-runners.gitlab.org and build-trigger-runner-manager.gitlab.org are in the GCP project omnibus-build-runners. Each of these managers spawns machines inside GCP and is configured with the google docker machine driver.

build-runners.gitlab.org is also configured with the scaleway driver, which boots up machines in a Scaleway account for Raspberry Pi (ARM platform) builds. The same manager is configured to create package-promotion machines; these are used only to upload packages, so they are scaled down to save on costs.

omnibus-builder-runners-manager.gitlab.org is currently used as a backup; it is configured with the digitalocean docker machine driver and boots up machines inside DigitalOcean.

Maintenance tasks


When the version of GitLab CI runner needs to be changed:

When you notice that builds are pending on our dev.gitlab.org project, it is possible that the number of failed machines is high. Failed machines prevent the runner manager from starting new machines, which can slow down or even block a release. To resolve this:
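The cleanup usually amounts to removing the failed machine instances so the manager can provision fresh ones. A hedged sketch, assuming docker-machine is on the manager node's PATH and that failed machines show up in the Error state (the function name is illustrative):

```shell
# Sketch: remove docker-machine instances stuck in the Error state so the
# runner manager can provision fresh build machines. Run on the manager node.

clear_failed_machines() {
  for machine in $(docker-machine ls --filter state=Error -q); do
    docker-machine rm -y "$machine"
  done
}
```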


At this moment, the Distribution team is only a user of packages.gitlab.com. Release packages are served to our users and customers from our CI on dev.gitlab.org.

The duties for this server are yet to be defined with the Production team.

Given that the package server is currently deployed on our own infrastructure from an omnibus-type package, if Production requires help the team should make a best effort to help them through any issues.