Infrastructure Environments

Environments

Development

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Development | various | Development | on save | Fixture | individual dev |

Development happens on a local machine, so there is no way to provide any SLA. Access is limited to the individual developer. The instance could be either CE or EE, depending on what the developer is working on.

Demo

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Demo | "GitLab Sales Demo Domains - Internal only" (found on the Google Drive) | Sales | Release | Fixture | Production team |

This should be a fully featured version of the current EE release. The high SLA and tightened access ensure it is always available for sales. It contains no features (feature flags, canary, etc.) that we do not ship.

.org

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| .org | dev.gitlab.org | Tools for GitLab.com | Nightly | Real | Production and build team |

Currently there are two main uses for the .org environment: builds, and repos that are needed in case GitLab.com is offline. This is a critical piece of infrastructure that is always growing in size due to build artifacts. There are discussions about creating a new build server where nightly CE/EE builds can be deployed, or moving the infra repos to a new host that would be a separate (not GitLab.com) EE instance. Although the environment has "dev" in its domain name, don't refer to it as "dev", since that could be confused with a local development environment.

Review Apps

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Review apps | various | Test proposal | on commit | Fixture | Review app owner |

Ephemeral app environments that are created dynamically every time you push a new branch to GitLab, and automatically deleted when the branch is deleted. Each is a single container with limited access.
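In GitLab CI, a review app is typically wired up with a dynamic `environment` and a matching stop job so the environment is torn down when the branch goes away. The sketch below is illustrative only, not our actual pipeline configuration; the job names, deploy scripts, and review domain are assumptions:

```yaml
# Hypothetical .gitlab-ci.yml sketch for per-branch review apps.
deploy_review:
  stage: deploy
  script:
    - ./deploy-review.sh "$CI_COMMIT_REF_SLUG"    # illustrative deploy script
  environment:
    name: review/$CI_COMMIT_REF_SLUG              # one environment per branch
    url: https://$CI_COMMIT_REF_SLUG.example.com  # assumed review domain
    on_stop: stop_review                          # run stop_review when the branch is deleted
  only:
    - branches
  except:
    - master

stop_review:
  stage: deploy
  script:
    - ./teardown-review.sh "$CI_COMMIT_REF_SLUG"  # illustrative teardown script
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```

The `on_stop` keyword is what makes the deletion automatic: GitLab triggers the referenced stop job when the branch is removed.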

One-off

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| one-off | various | Testing specific features in a large environment | Release + patches | User specific | Team developing feature |

This is less a staging environment and more a large-scale development environment. It may be needed because of the number of repos required, or because a full-sized database is required. A version of CE/EE is installed and then patches are applied as work progresses. These should be very limited in number.

Version 1.0

K8s & helm charts (cloud native)

The final version of staging is a multiple-container deployment managed by Kubernetes via Helm charts. This could be mapped to master and re-deployed every time there is a successful merge to master. Work to move us to containers has already started: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/2420
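Re-deploying a Helm release on every successful merge to master could look roughly like the CI job below. This is a sketch under assumptions: the chart, release name, and namespace are hypothetical, and a real deployment would need cluster credentials configured for the runner:

```yaml
# Hypothetical CI job: upgrade (or install) the release on every merge to master.
deploy_master:
  stage: deploy
  script:
    # --install makes the first run create the release; assumed chart/release/namespace names
    - helm upgrade --install gitlab-master gitlab/gitlab --namespace staging --set image.tag="$CI_COMMIT_SHA"
  environment:
    name: staging
  only:
    - master
```

Keying the image tag to `$CI_COMMIT_SHA` ties each deployment to the exact merged commit.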

Ops

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| ops | ops.gitlab.net | GitLab.com Operations | Official EE releases | Fixture | SREs |

The ops environment holds all infrastructure that is critical for managing GitLab.com infrastructure.

At this time it includes:

Production

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Production | gitlab.com | Production | Release Candidate | Production | Production team |

Production will be full scale and size, with the ability to do a canary deploy. Production has limited access. It consists of two stages:

Staging

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Staging | staging.gitlab.com | To test master | Nightly | Pseudonymization of prod | all engineers |

Staging has the same topology as Production: it includes all components that are in Production, which is enforced by sharing the same Terraform configuration.

This deployment can be updated at most nightly* because it requires an omnibus build to install. This is a static environment with a pseudonymized production database. The DB is a snapshot of the production DB, updated only often enough to keep migration times to a minimum.

If you need an account to test QA issues assigned to you on Staging, you may already have one, as Production accounts are brought across to Staging. Otherwise, if you require an account to be created, create an issue in the access-request project and assign it to your manager for review. Requests for access to database and server environments require the approval of your manager as well as that of one of the Infrastructure managers; use the same access-request tracker to request this type of access.

* or however often we can have an omnibus build created.

Self-Managed

| Name | URL | Purpose | Deploy | Database | Terminal access |
| --- | --- | --- | --- | --- | --- |
| Self-Managed | various | Self-hosted versions of CE & EE | User specific | User specific | User specific |

These are environments that are run on-premises by the end user. We have no influence over, access to, or control of these environments.

Nodes

If you work at GitLab, also see the list of nodes managed by Chef to get an idea.