
PostgreSQL Upgrade - Design



Epic: gl-infra/106

Blueprint: PostgreSQL Upgrade Blueprint


The objective of this design document is to describe the most important technical aspects of the PostgreSQL upgrade to 11.7. Currently, our PostgreSQL cluster is at version 9.6.

We are upgrading to version 11.7 due to business requirements.

The option chosen is to execute a pg_upgrade directly on the live cluster. The main reasons for this approach are:

The maintenance window can last up to 2 hours. Given this time constraint, and the time consumed by stopping the platform and restoring traffic, we need to focus on the main goal, which is the PostgreSQL upgrade itself. Enabling checksums will be addressed in a later iteration.

Considering the deadlines and the business requirements, namely the end of support for PostgreSQL versions 9.x and 10.x and support for the database sharding initiatives, the proposed approach is to keep the project as simple and straightforward as possible. In addition, all the testing and interaction with the teams involved in this upgrade will demand time, concentration and effort from everyone. For more information about the options discussed and the reasoning behind them, see the PostgreSQL Upgrade Blueprint.


Implementation Considerations

All the detailed steps of the change request (which will be tested and applied in staging and then in production) are described in the upgrade plan, reflected in the PostgreSQL Upgrade Template. It includes all the tasks, pre-checks, post-checks, communications and tests, with the person responsible for each item.

The upgrade process will consist of the following steps:


The pg_upgrade approach being implemented is based on the Official PostgreSQL Upgrade Documentation.

pg_upgrade: The following command migrates the data between major versions of PostgreSQL, in our case from 9.6 to 11.7. We will execute pg_upgrade on the primary node, during a maintenance window with no traffic on the database cluster.

pg_upgrade -b oldbindir -B newbindir -d oldconfigdir -D newconfigdir -k

The parameters are:

-b oldbindir: defines the path of the binaries from the old version. In our case the path of the binaries from version 9.6.

-B newbindir: defines the path of the binaries from the new version. In our case the path of the binaries from version 11.7.

-d oldconfigdir: defines the path of the old configuration directory, which in our case is the configuration folder from version 9.6.

-D newconfigdir: defines the path of the new configuration directory. In the new configuration, we will have the optimized setup for PostgreSQL 11.7. The main tests we are executing are aimed at enabling the new query processing parallelism features.

-k: use hard links instead of copying files to the new cluster.
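As a sketch of the invocation above, the command can be assembled and reviewed before anything is executed on the primary. The binary and data directory paths below are placeholders, not our actual layout; the real paths depend on how the packages are installed on the cluster nodes.

```shell
# Hypothetical paths -- the real ones depend on the node's package layout.
OLD_BIN=/usr/lib/postgresql/9.6/bin
NEW_BIN=/usr/lib/postgresql/11/bin
OLD_DATA=/var/opt/gitlab/postgresql/data-9.6
NEW_DATA=/var/opt/gitlab/postgresql/data-11

# Assemble the link-mode invocation so the exact flags can be reviewed
# (and scripted) before running it for real.
UPGRADE_CMD="$NEW_BIN/pg_upgrade -b $OLD_BIN -B $NEW_BIN -d $OLD_DATA -D $NEW_DATA -k"

# pg_upgrade also supports --check, which validates the two clusters
# without modifying them; running it first is a cheap safety net.
echo "$UPGRADE_CMD --check"
echo "$UPGRADE_CMD"
```

Running the `--check` pass first surfaces incompatibilities (extensions, data types, configuration) while both clusters are still intact.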


Following the Official PostgreSQL Upgrade Documentation, the following steps will be executed:

Running this rsync records the links created by pg_upgrade's link mode that connect files in the old and new clusters on the primary server, finds the matching files in the old cluster on the secondary node, and creates links to them in the new secondary cluster. Files that were not linked on the primary are copied from the primary to the secondary. In our tests, the amount of data actually rsync'ed was small. This step is the key to fast secondary upgrades.
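The rsync invocation for this step in the official upgrade documentation looks roughly as follows; the data directories and secondary hostname here are placeholders, not our actual topology.

```shell
# Placeholders -- substitute the real data directories and secondary host.
OLD_DATA=/var/opt/gitlab/postgresql/data-9.6
NEW_DATA=/var/opt/gitlab/postgresql/data-11
SECONDARY=secondary-host:/var/opt/gitlab/postgresql

# --hard-links preserves the links pg_upgrade created in link mode, so only
# files that were NOT linked on the primary are physically copied; --size-only
# skips content comparison, which is what makes this step fast.
RSYNC_CMD="rsync --archive --delete --hard-links --size-only --no-inc-recursive $OLD_DATA $NEW_DATA $SECONDARY"
echo "$RSYNC_CMD"
```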

This process has been tested several times in a test environment, considering:


After executing the upgrade, and still during the downtime, it is necessary to gather database statistics on the new version, since the new database has no statistics at this point. This task will be executed during the downtime with a high number of cores so that it completes quickly.

vacuumdb -j 70 --analyze-only database_name

The parameters are:

-j number_of_jobs: It will execute the task with the specified number of parallel jobs.

--analyze-only: Only calculate statistics for use by the optimizer (no vacuum executed).
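As a hedged sanity check after the analyze completes, the presence of fresh statistics can be confirmed from pg_stat_user_tables before traffic is restored. The database name here is an assumption, and the check is assembled as a string rather than executed against a live cluster.

```shell
# Hypothetical check: last_analyze should be recent for the largest tables,
# confirming the planner has statistics to work with on the new cluster.
DB=gitlabhq_production  # assumed database name
CHECK_SQL="SELECT relname, last_analyze FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 10;"
echo "psql -d $DB -c \"$CHECK_SQL\""
```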


After completing the analyze, we will start executing the VACUUM FREEZE. Executing a VACUUM FREEZE helps to avoid future aggressive vacuums triggered by txid wraparound protection. With this step we are resetting the vacuum counters so they are not carried over from the old cluster. During this execution we can start to restore traffic to the database.

vacuumdb -j 20 -F database_name

The parameters are:

-j number_of_jobs: It will execute the task with the specified number of parallel jobs.

-F: Aggressively “freeze” tuples. Specifying FREEZE is equivalent to performing VACUUM with the vacuum_freeze_min_age and vacuum_freeze_table_age parameters set to zero.
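One way to verify the freeze had the intended effect is to check that the transaction-ID age of the database is now low. This is a sketch under assumptions (the query is assembled as a string, not run against the cluster), not part of the documented runbook.

```shell
# After VACUUM FREEZE, age(datfrozenxid) should be close to zero for the
# upgraded database, showing the wraparound counters were reset rather
# than carried over from the old cluster.
AGE_SQL="SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;"
echo "psql -c \"$AGE_SQL\""
```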


Currently, our database cluster consists of 12 nodes. We will keep 4 nodes on the current version 9.6, to be able to roll back in a short period of time.

As another recovery mechanism, our automated upgrade process takes a snapshot of the data disk while the database is down (to ensure the consistency of the database snapshot). The command to execute the snapshot is:

gcloud --project [PROJECT_NAME] compute disks snapshot [DISK_NAME] --type [TYPE_NAME] --source-snapshot [SOURCE_NAME] --zone [ZONE_NAME]
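A minimal sketch of how this call could be scripted, with all resource names as placeholders (the flag set here uses `--snapshot-names` as an illustrative variant; the real flags should be taken from the runbook above). The command is assembled as a string, since in the actual procedure it must only run after confirming PostgreSQL is stopped, e.g. with pg_isready, so the disk image is consistent.

```shell
# Placeholders -- substitute the real project, disk, snapshot and zone names.
PROJECT=my-gcp-project
DISK=pg-primary-data
SNAPSHOT=pg96-pre-upgrade
ZONE=us-east1-c

# Assemble the snapshot command; in the real runbook this is executed only
# while the database is down, so the snapshot captures a consistent state.
SNAPSHOT_CMD="gcloud --project $PROJECT compute disks snapshot $DISK --snapshot-names $SNAPSHOT --zone $ZONE"
echo "$SNAPSHOT_CMD"
```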

The SRE, DBRE and dev teams will be constantly monitoring the database, application logs, and monitoring alerts.

In the event of any unexpected problem, our rollback strategy during the maintenance is:


The following is a list of risks and mitigations:

Author: Jose Finotto
