The group of Database Reliability Engineers (DBREs) is part of the Reliability Engineering team that runs GitLab.com. We care most about the database reliability aspects of the infrastructure and of GitLab as a product.
We strive to approach database reliability from a data-driven perspective as much as we can. As such, we start by defining Service Level Objectives below and document what service levels we currently aim to maintain for GitLab.com.
The Database diagram:
The pgbouncer setup for Read Write traffic:
The pgbouncer setup for Read Only traffic:
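Beyond the diagrams, a quick way to see how connections are being pooled on a given pgbouncer node is its built-in admin console. A minimal sketch, assuming local access and the default listen port; host, port, and user are assumptions and need to match the actual setup:

```shell
# Connect to pgbouncer's admin console (the special "pgbouncer" database).
# The user must be listed in admin_users or stats_users in pgbouncer.ini.
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'   # per-pool client/server connections
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'SHOW STATS;'   # request and traffic statistics
```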
We use Service Level Objectives (SLOs) to reason about the performance and reliability aspects of the database. We think of SLOs as "commitments by the architects and operators that guide the design and operations of the system to meet those commitments."[^1]
This list is by no means complete; we are just beginning to define SLOs and document them here. See #147.
In addition to the DBREs, the reliability of our database is supported by OnGres. OnGres provides 24x7 support with engineers in our PagerDuty escalation policy for database support (see https://about.gitlab.com/handbook/on-call/#dbre). Issues can be brought to the attention of OnGres engineers by applying the appropriate labels to Infrastructure issues. Finally, there is also a dedicated Slack channel, #ongres-gitlab, for issues.
In backup and recovery, there are two SLOs:
| SLO | Target | Notes |
|-----|--------|-------|
| DB-DR-TTR | 8 hours | Maximum time to recovery from a full database backup in case of disaster |
| | 7 days | The number of days we keep backups for recovery purposes |
The backup strategy is to take a daily snapshot of the full database (basebackup) and store it in Google Cloud Storage. Additionally, we capture the write-ahead log data in GCS to be able to perform point-in-time recovery (PITR) using one of the basebackups. Read more on Disaster Recovery.
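To make the mechanics concrete, here is a minimal sketch of the two building blocks (a daily basebackup plus WAL archiving). The bucket name is a placeholder, and the actual production backups are driven by dedicated tooling rather than these literal commands:

```shell
# Daily snapshot: stream a basebackup to GCS (bucket name is a placeholder).
pg_basebackup -D - -Ft -X none | gzip | gsutil cp - "gs://BUCKET/basebackup/$(date +%F).tar.gz"

# Continuous WAL archiving for PITR; postgresql.conf would carry something like:
#   archive_mode    = on
#   archive_command = 'gsutil cp %p gs://BUCKET/wal/%f'
```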
For DB-DR-TTR, we need to consider worst-case scenarios with the latest backup being 24 hours old. Hence the recovery time includes the time it takes to perform PITR from the archive to a certain point in time (right before the disaster).
We are able to recover to any point in time within the last 7 days (the backup retention period).
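As an illustration of what a PITR run looks like (paths, bucket, and target time are placeholders, and the syntax shown is the pre-PostgreSQL-12 recovery.conf style):

```shell
# 1. Restore the latest basebackup into an empty data directory.
gsutil cp "gs://BUCKET/basebackup/YYYY-MM-DD.tar.gz" - | gunzip | tar -xf - -C /var/lib/postgresql/data

# 2. Point recovery at the WAL archive and at the moment right before the disaster.
cat > /var/lib/postgresql/data/recovery.conf <<'EOF'
restore_command      = 'gsutil cp gs://BUCKET/wal/%f %p'
recovery_target_time = 'YYYY-MM-DD HH:MM:SS UTC'
EOF

# 3. Start PostgreSQL; it replays archived WAL up to the target time.
```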
For GitLab.com we maintain availability above 99.95%. For the PostgreSQL database, we define the following SLOs:
| SLO | Target | Notes |
|-----|--------|-------|
| DB-HA-UPTIME | 99.9% | General database availability |
| DB-HA-PERF | p99 < 200ms | 99th percentile of database query runtime stays below this level |
| DB-HA-LOSS | 60s | Maximum accepted data loss in the face of a primary failure |
A DB-HA-UPTIME of 99.9% allows for roughly 45 minutes of downtime per month (0.1% of a 30-day month is about 43 minutes). Uptime means the database cluster is available to serve queries from the application while maintaining the other database SLOs.
We allocate a downtime budget of 45 minutes per month for planned downtimes, although we strive to keep downtime as low as possible. The downtime budget can be used to introduce change to the system. If the budget is used up (planned or unplanned), we stop introducing change and focus on availability (similar to SRE error budgets).
For DB-HA-PERF, 99% of queries should finish below 200ms.
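A rough way to spot statements that endanger this target is pg_stat_statements (the authoritative p99 numbers come from monitoring; the extension reports means rather than percentiles, and the column names shown are for PostgreSQL 12 and earlier):

```shell
# Top statements by mean execution time, as reported by pg_stat_statements.
psql -c "
SELECT calls,
       round(mean_time::numeric, 1) AS mean_ms,
       round(max_time::numeric, 1)  AS max_ms,
       left(query, 80)              AS query
  FROM pg_stat_statements
 ORDER BY mean_time DESC
 LIMIT 20;"
```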
For DB-HA-LOSS, we require an upper bound on replication lag. A write on the primary is considered at risk as long as it has not been replicated to a secondary (or to the PITR archive).
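A minimal sketch of how to check where we stand against this bound (function and column names are for PostgreSQL 10+; older versions use the pg_xlog_* equivalents):

```shell
# On the primary: how far behind each standby is, in bytes of WAL.
psql -c "
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
  FROM pg_stat_replication;"

# On a replica: time since the last replayed transaction, to compare against the 60s bound.
psql -c "SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;"
```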
To help you find your way around, you can find a list of useful and important links below.
As a database specialist, you may find the following tools very helpful:
Performance bar: type `pb` in GitLab and a bar with performance metrics will show up at the top of the page. This tool is especially useful for viewing the queries executed and their timings.
Sherlock: provides `EXPLAIN ANALYZE` output for executed queries. Enable it by starting Rails with `env ENABLE_SHERLOCK=1 bundle exec rails s`.
The following (private) Grafana dashboards are important / useful for database specialists:
Basically everything under https://docs.gitlab.com/ee/development/README.html#databases is relevant, but the following guides in particular are important:
For various other development related guides refer to https://docs.gitlab.com/ee/development/README.html.
From "Database Reliability Engineering", O'Reilly Media, 2017 ↩