The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
Thanks for visiting this category strategy page for GitLab Geo Disaster Recovery. This page belongs to the Geo group.
GitLab is part of the business-critical infrastructure for many of our customers. As such, it is important that our customers are able to rapidly and safely recover their GitLab instance from a disaster. We built Geo Disaster Recovery for exactly this purpose.
Geo Disaster Recovery (DR) helps our customers fulfill their business continuity plans by creating processes that allow the recovery of a GitLab instance following a natural or human-created disaster.
It all starts with a well-designed, well-tested, and well-practiced DR plan that will minimize the business impact of a catastrophic event. Two key metrics for disaster recovery are Recovery Time Objective (RTO) and Recovery Point Objective (RPO). These metrics define, respectively, the maximum acceptable time your application can be offline and the maximum length of time for which data might be lost from your application. Companies strive to keep both of these metrics as small as possible. Geo Disaster Recovery was built to help customers minimize both of them by providing an easily configurable warm standby (Geo site) in an additional region, which can quickly take over in the event the services at the primary site are interrupted.
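To make the RPO metric concrete, the sketch below compares an observed replication lag against an RPO target. The function name, values, and thresholds are illustrative only, not part of any GitLab tooling:

```shell
# Hypothetical helper: classify an observed replication lag against an
# RPO target. Both arguments are in seconds; names are illustrative only.
rpo_status() {
  observed_lag_seconds="$1"
  rpo_target_seconds="$2"
  if [ "$observed_lag_seconds" -le "$rpo_target_seconds" ]; then
    echo "within RPO"
  else
    echo "RPO at risk"
  fi
}

# Example: a 42-second replication lag against a 5-minute (300 s) RPO target.
rpo_status 42 300
```

A lag within the target means a failover at that moment would lose no more data than the business continuity plan allows.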
It is important that customers can recover all of their core GitLab data in the event of a disaster. Therefore, Geo replication and verification support for GitLab-generated data is part of the definition of done. This ensures that new features ship with Geo support and our customers are not exposed to data loss.
You can read more about how we started this journey in our blog post, How we built GitLab Geo.
Please reach out to Sampath Ranasinghe, Product Manager for the Geo group (Email) if you'd like to provide feedback or ask any questions related to this product category.
This strategy is a work in progress, and everyone can contribute. Please comment and contribute in the linked issues and epics on this page. Sharing your feedback directly on GitLab.com is the best way to contribute to our strategy and vision.
We envision a disaster recovery solution that our customers can rely on to restore the full operational capability of GitLab in the event of a catastrophe and one that they can repeatedly exercise without fear of losing their data. A solution that makes the actual experience of failing over to warm standby sites seamless and uneventful both for the systems administrator and the end users.
A robust disaster recovery plan needs to be regularly tested and adjusted as necessary. Testing both the process and the infrastructure is essential to ensuring a successful recovery in the event of a disaster. Regular testing will expose weaknesses that can be addressed under controlled circumstances. Geo Disaster Recovery will allow customers to perform routine controlled failovers without data loss.
The solution will perform an autonomous failover to a designated secondary site under limited circumstances, relying on specific trigger events that indicate a problem with the primary site. The failover will be seamless and transparent to the end users. CI runners will automatically re-register with the new primary, restoring a fully operational GitLab instance. This will reduce the monitoring burden on system administrators by allowing the GitLab solution to self-heal.
System administrators will be notified of irregularities in the replication and verification processes asynchronously via push notifications such as email and Slack messages alerting them to potential data loss if a failover were to happen. System administrators can follow the information in the notifications to investigate and remediate the issues.
The solution will support all current and future GitLab reference architectures.
GitLab customers will be able to scale and tailor the Geo Disaster Recovery solution to best fit their business continuity plans. They will be able to configure the disaster recovery solution to meet a defined RTO and RPO.
All GitLab SaaS services such as GitLab Dedicated and GitLab.com will be protected by Geo Disaster Recovery to ensure that all best practices are followed and that we dogfood our own solutions.
For more information on how we use personas and roles at GitLab, please click here.
It is possible to promote a secondary site to a primary site, either during a planned failover or in a genuine disaster recovery situation. Geo supports promotion for a single-node installation and for a high-availability (HA) configuration. The current promotion process consists of a large number of manual preflight checks, followed by the actual promotion. Promotion is only possible on the command line; there is no UI flow, and for high-availability configurations, modifications to the gitlab.rb file are required on almost all nodes. Given the critical nature of this process, Geo should make it simple to promote a secondary, especially for more complex high-availability configurations.
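As a rough sketch, the command-line promotion today looks like the following on the secondary site. The exact commands and steps vary by GitLab version and reference architecture, so treat this as illustrative rather than a runbook:

```shell
# Illustrative sketch of promoting a Geo secondary from the command line.
# Exact commands depend on GitLab version and architecture; always follow
# the official Geo disaster recovery documentation for your release.

# 1. Run the preflight checks on the secondary before promoting it.
sudo gitlab-ctl promotion-preflight-checks

# 2. Promote the secondary site to become the new primary.
sudo gitlab-ctl geo promote

# 3. Verify the health of the newly promoted site.
sudo gitlab-rake gitlab:geo:check
```

In HA configurations, additional gitlab.rb changes across nodes are still required around these steps, which is the complexity the simplification work aims to remove.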
After a failover, an administrator may want to re-attach the demoted primary site as a secondary site, in order to fail back to the original primary at some point. This is currently possible; however, the process is highly manual and not well documented. After we have simplified the promotion process, we want to simplify demoting a former primary site of any size by reducing the steps required and making the process easily automatable.
GitLab.com is by far the largest GitLab instance and is used by GitLab to dogfood GitLab itself. GitLab.com does not use GitLab Geo for DR purposes. This has many disadvantages and the Geo Team is working with Infrastructure to enable Geo on GitLab.com.
Some of our existing data types, such as Projects, Wikis, and Design repositories do not yet use the self-service framework. We are migrating these data types over to reduce technical debt and so that all data types can benefit from new features that are added to the framework.
Geo relied on provider replication and verification for object storage. Recently, we added native object storage replication, which allows customers to replicate between different providers and supports customers who use their own object storage appliances. We plan to add native object storage verification to ensure these customers also benefit from 100% verification coverage.
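Verification of replicated data generally comes down to comparing checksums of the primary and secondary copies. The sketch below illustrates the idea with hypothetical local files; Geo's actual implementation computes and compares checksums internally:

```shell
# Illustrative only: compare a "primary" copy of an object against its
# replicated "secondary" copy using SHA-256 checksums. File names are
# hypothetical and not part of Geo itself.
printf 'artifact bytes' > primary_copy
printf 'artifact bytes' > secondary_copy

primary_sum=$(sha256sum primary_copy | awk '{print $1}')
secondary_sum=$(sha256sum secondary_copy | awk '{print $1}')

if [ "$primary_sum" = "$secondary_sum" ]; then
  echo "verified"
else
  echo "checksum mismatch"
fi
```

A mismatch indicates the replicated copy is corrupt or stale and should be re-replicated.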
It is important to test disaster recovery mechanisms and procedures regularly to ensure successful recovery after a failover event. One way to achieve this is to promote a Geo secondary site and perform recovery testing against it while keeping the primary site operational and available to the business. Performing testing against a promoted Geo secondary in this fashion will cause it to generate outbound traffic such as emails, webhooks, push mirrors, etc. This is undesirable as it will cause confusion for users and disrupt upstream services. Silent mode will suppress outgoing communication from a GitLab instance.
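Silent mode is exposed through the application settings API, so a DR test harness could enable it on the promoted test site before exercising it. A minimal sketch, assuming a placeholder hostname and an admin token stored in `ADMIN_TOKEN`:

```shell
# Sketch: enable Silent Mode on a promoted test site via the application
# settings API, so DR testing does not emit emails, webhooks, or push
# mirror updates. Hostname and token are placeholders.
curl --request PUT \
  --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.example.com/api/v4/application/settings?silent_mode_enabled=true"
```

Disabling silent mode again after testing restores normal outbound communication.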
Improving the observability of replication and verification operations will allow systems administrators to monitor the health of the warm standby secondary sites. It will help identify any fault conditions in the replication and verification process and aid in the remediation of these faults. Surfacing the underlying errors in the UI will provide easy access to this information and speed up the recovery actions needed on the part of the systems administrators.
GitLab Dedicated provides a GitLab SaaS offering to large enterprises and customers in industries with strict security and compliance requirements. We will make Disaster Recovery available to these customers.
We currently don't plan to replace PostgreSQL with a different database, e.g. CockroachDB.
This category is currently at the complete maturity level, and our next maturity target is lovable (see our definitions of maturity levels).
In order to move this category from complete to lovable we are working on all items in the lovable maturity epic.
We currently track:
GitHub Enterprise Server 2.22 supports a passive replica server that can be used for disaster recovery purposes.