Database Scalability Working Group

Attributes

| Property | Value |
|----------|-------|
| Date Created | February 19, 2021 |
| End Date | September 24, 2021 |
| Slack | #wg_database-scalability (only accessible from within the company) |
| Google Doc | Working Group Agenda (only accessible from within the company) |
| Issue Board | TBD |

Exit Criteria

The charter of this working group is to produce a set of blueprints, designs, and an initial implementation for scaling the database backend's storage, access, separation, synchronization, and lifecycle management across various use cases (see Split patterns below).

Overview

Our current architecture relies, almost exclusively and by design, on a single database to be the sole and absolute manager of data in terms of storage, consistency, and query result collation. Strictly speaking, we use a single logical database implemented across several physical replicas to handle load on the database backend (with the single pseudo-exception of storing diffs on object storage). We are at coordinates [1,0.5,0] in the scale cube. Regular analysis of database load, however, shows that this approach is unsustainable: the primary RW database server is experiencing load peaks that approach the limits of vertical scalability.
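To make the current [1,0.5,0] position concrete, the sketch below (in Python, with purely illustrative class and method names) shows the shape of today's setup: a single logical database in which every write lands on one RW primary and reads are spread across physical replicas. It is a simplification, not a description of our actual connection handling.

```python
import random


class ReplicatedDatabase:
    """One logical database: a single RW primary plus read-only replicas (X-axis cloning)."""

    def __init__(self, primary, replicas):
        self.primary = primary      # sole writer; the vertical-scaling pressure point
        self.replicas = replicas    # physical clones that absorb read traffic

    def write(self, statement, params=()):
        # Every write goes to the one primary, which is where the load peaks appear.
        return self.primary.execute(statement, params)

    def read(self, statement, params=()):
        # Reads can be load-balanced across replicas without changing semantics.
        return random.choice(self.replicas).execute(statement, params)
```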

We explored sharding last year and scoped it to the database layer. We concluded that while there are solutions available in the market, they did not fit our requirements in terms of both cost and product fit, as they would have forced us into a solution that was difficult (if not impossible) to ship as part of the product.

We are now kicking off a new iteration on this problem, expanding the scope from the database layer into the application itself. We recognize that this problem cannot be solved to meet our needs and requirements if we limit ourselves to the database: we must also consider careful changes in the application to make it a reality.

Data management as a discipline

Deferring most, if not all, data management responsibilities to the database has enabled us to offload decisions and rely on PostgreSQL's excellent capabilities to cope with them. While we have run into a few scalability limits, which we have addressed through infradev, the database has, for the most part, held its ground (aided by copious amounts of hardware). This approach has provided specific fixes to very specific problems and has afforded us development speed, but it has blinded us to long-term issues as the database grows. Spot fixes will not be sufficient to sustain growth. This is now reflected in massive technical debt issues such as ci_builds, where the current schema and usage make it significantly difficult to scale. We will therefore need to take a more strategic view, provide a long-term, comprehensive solution, and adopt data management as a full-fledged discipline.

There will be areas of the database where discipline and scalability will mix. In general, we should tackle discipline first and scalability later.

Y-Axis split

We are at the stage of having to embrace a split across the Y-axis, decomposing the database backend by separating data by meaning, function, or usage, but we must do so in ways that enable the application to continue to view the database backends as a single entity. This entails minimizing the introduction of new database engines into the stack while taking on some of the data management responsibilities. Therefore, we aim to fully embrace [1,1,0].
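As a rough illustration of what a Y-axis split could look like from the application's point of view, the hypothetical sketch below maps tables to functional domains and routes each query to the backend owning that domain, while callers keep addressing a single facade. The domain names and mapping are assumptions for illustration, not the working group's actual decomposition.

```python
class FunctionalRouter:
    """Routes queries to per-domain backends while presenting a single entry point."""

    def __init__(self, backends, table_to_domain):
        self.backends = backends                # e.g. {"main": main_conn, "ci": ci_conn}
        self.table_to_domain = table_to_domain  # e.g. {"ci_builds": "ci"}

    def backend_for(self, table):
        # Tables not explicitly mapped stay on the "main" database.
        domain = self.table_to_domain.get(table, "main")
        return self.backends[domain]

    def execute(self, table, statement, params=()):
        # The caller never learns that the data now lives in more than one database.
        return self.backend_for(table).execute(statement, params)
```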

Data access layer

We recognize the advantages of a single application and wish to maintain the view of a single data store as far down the stack as possible. It is imperative that we maintain a high degree of flexibility and low resistance for developers, which translates into a cohesive experience for administrators and users. To that end, we will need to build and introduce a data access layer that the rest of the application can leverage to access backend data stores (which is not a new concept for GitLab, as the success of Gitaly clearly shows).

As we look at scaling the database backend, a global reliance on the database to be the sole executor of data management capabilities is no longer possible. In essence, the scope of responsibility becomes more localized, and the application must accept some of these responsibilities. These cannot be implemented in an ad-hoc fashion; they should be centralized in a data access layer that, on the one hand, understands the various backends supporting data storage and, on the other, can service complex queries and provide the caching capabilities needed to absorb the latencies introduced by scaling the database horizontally, all while hiding the backend details from the rest of the application.

As the database backend is decomposed through the use of the tentatively proposed patterns, the need to implement a data access layer becomes apparent. We can no longer rely on the database to perform all data composition duties. The application itself (or a service on its behalf) must take on those responsibilities.
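The minimal sketch below illustrates the kind of data access layer described above: it sits in front of a routing component that knows which backend holds which table, answers repeated reads from a short-lived cache, and keeps backend topology hidden from callers. The interface and the time-based cache are assumptions used for illustration only.

```python
import time


class DataAccessLayer:
    """Single entry point for data access: routing, caching, and backend hiding."""

    def __init__(self, router, cache_ttl=60):
        self.router = router        # knows which backend holds which table
        self.cache = {}             # {(table, statement, params): (expires_at, rows)}
        self.cache_ttl = cache_ttl

    def query(self, table, statement, params=()):
        key = (table, statement, params)
        cached = self.cache.get(key)
        if cached and cached[0] > time.time():
            return cached[1]        # absorb the latency of a decomposed backend
        rows = self.router.execute(table, statement, params)
        self.cache[key] = (time.time() + self.cache_ttl, rows)
        return rows

    def write(self, table, statement, params=()):
        # Writes go straight through; cache invalidation is out of scope for this sketch.
        return self.router.execute(table, statement, params)
```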

Split patterns

We must ensure we approach database scalability both systematically and holistically. We should not enable disjointed approaches to scaling the database. We therefore propose deciding on a small number of patterns to break down the problem and provide solutions that can apply to a variety of tables, keeping the solution space small and manageable while retaining a degree of flexibility. Once we understand the solutions applicable to these patterns, we can think holistically about the data access layer, enabling us to develop it iteratively while minimizing disruption to the rest of the application.

Read-Mostly Data

There are small pockets of data that are frequently read but seldom written, and that rarely participate in complex queries (such as the license table). They should be offloaded to backends better suited for high-frequency reads, at a minimum for caching purposes (since the only backend we consider durable is the database). While the queries are simple and the amount of data is small, these queries contribute to significant context switching in the database.
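A hedged sketch of how read-mostly data such as the license table could be served: the database remains the only durable copy, and a faster read backend acts purely as a cache. The `database` and `cache` interfaces here are stand-ins, not a specific technology choice.

```python
class ReadMostlyStore:
    """Read-through cache over the durable database for rarely written data."""

    def __init__(self, database, cache, ttl=300):
        self.database = database    # the only backend we treat as durable
        self.cache = cache          # illustrative key/value cache
        self.ttl = ttl

    def get(self, key):
        value = self.cache.get(key)
        if value is None:
            # Cache miss: fall back to the source of truth, then populate the cache.
            value = self.database.fetch_one(
                "SELECT value FROM licenses WHERE key = %s", (key,))
            self.cache.set(key, value, self.ttl)
        return value

    def put(self, key, value):
        # Rare writes hit the database first, then refresh the cached copy.
        self.database.execute(
            "UPDATE licenses SET value = %s WHERE key = %s", (value, key))
        self.cache.set(key, value, self.ttl)
```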

Time-Decay Data

Some datasets are subject to strong time-decay effects, in which recent data is accessed far more frequently than older data. This effect is usually tied to product and/or application semantics, and we can potentially take advantage of this pattern to offload older data elsewhere, keeping recent, often-accessed data in the "main" database. This approach decreases resource demands and improves the overall performance of the database.
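As an illustration only, the sketch below assumes a 90-day cut-off: rows newer than the cut-off stay in the "main" database, older rows are periodically moved to a cheaper archive backend, and reads choose a backend based on the requested time range. The cut-off, table layout, and backend interfaces are all assumptions.

```python
from datetime import datetime, timedelta, timezone

RECENT_WINDOW = timedelta(days=90)   # assumed cut-off, not a decided value


def backend_for_range(requested_from, main_db, archive_db):
    # Queries for recent data stay on the "main" database; older ranges hit the archive.
    cutoff = datetime.now(timezone.utc) - RECENT_WINDOW
    return main_db if requested_from >= cutoff else archive_db


def archive_old_rows(main_db, archive_db, table):
    # Periodic job: move rows past the cut-off out of the "main" database.
    cutoff = datetime.now(timezone.utc) - RECENT_WINDOW
    old_rows = main_db.fetch_all(
        f"SELECT * FROM {table} WHERE created_at < %s", (cutoff,))
    archive_db.insert_many(table, old_rows)
    main_db.execute(f"DELETE FROM {table} WHERE created_at < %s", (cutoff,))
```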

Entity/Service

A popular database scalability methodology entails decomposing concrete entities (e.g., users) into separate data stores. This approach is typically associated with microservices, and while at times it may be useful to encapsulate an entity behind a service, this is not a strict requirement, especially within the context of our application (even though, as mentioned earlier, we have already done this with Gitaly).

This approach shifts consistency concerns to the application, as the database runtime cannot possibly track external dependencies. This strategy works particularly well when a portion of the column data can be streamlined and kept in the "main" database while the rest of the metadata is offloaded to a separate backend.
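A hypothetical sketch of such an entity split: the "main" database keeps only a slim row (the identifier and the columns other tables join against), the bulk of the metadata lives in a separate backend, and keeping the two consistent becomes the application's responsibility. The repository name, schema, and compensation strategy are illustrative assumptions.

```python
class UserRepository:
    """Writes a slim row to the "main" database and the full profile to a separate store."""

    def __init__(self, main_db, metadata_store):
        self.main_db = main_db
        self.metadata_store = metadata_store

    def create(self, user_id, username, profile):
        # The slim row goes in first so references in the "main" database still resolve.
        self.main_db.execute(
            "INSERT INTO users (id, username) VALUES (%s, %s)", (user_id, username))
        try:
            self.metadata_store.put(f"user:{user_id}", profile)
        except Exception:
            # The database cannot track this external dependency, so the application
            # must compensate itself (roll back, retry, or enqueue a repair job).
            self.main_db.execute("DELETE FROM users WHERE id = %s", (user_id,))
            raise
```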

Sharding

One of the most widely used methodologies for database scalability is sharding. It is also one of the hardest to implement, as sharding shifts logical data consistency and query resolution demands into the application, significantly elevating the relevance of the CAP theorem in our design choices. While these are not trivial problems to solve, they are solvable.
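The minimal sketch below, assuming a hash of a shard key such as a project identifier, shows both halves of what sharding shifts into the application: choosing a shard for each key, and resolving queries that span shards as scatter/gather. It is an illustration of the technique, not a proposed design.

```python
import hashlib


def shard_for(project_id, shards):
    # Hash the shard key to pick a shard deterministically.
    digest = hashlib.sha256(str(project_id).encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]


def count_issues_across_shards(shards):
    # Cross-shard queries can no longer be answered by a single database;
    # the application gathers partial results and combines them itself.
    return sum(shard.fetch_value("SELECT COUNT(*) FROM issues") for shard in shards)
```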

Horizontal

TO-DO

Vertical

TO-DO

Asynchronicity and CAP

As we dive into the relevance of the CAP theorem, we should also consider it outside the context of sharding. This is a point @andrewn crystallized in a recent conversation on the topic of vertical database scalability. In this context, we should evaluate the need for full transactional synchronicity and shift, where possible, to an asynchronous mode of operation.
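As a hedged illustration of that shift, the sketch below replaces a derived update that might previously have run in the same transaction with an event processed out-of-band by a worker, accepting eventual consistency in exchange for less contention on the primary. The table names and event shape are assumptions.

```python
import queue

events = queue.Queue()


def create_issue(db, project_id, title):
    db.execute(
        "INSERT INTO issues (project_id, title) VALUES (%s, %s)", (project_id, title))
    # Instead of updating derived statistics synchronously in the same transaction,
    # only record that an update is needed.
    events.put({"type": "issue_created", "project_id": project_id})


def statistics_worker(db):
    # Applies the derived updates asynchronously; the counter is eventually consistent.
    while True:
        event = events.get()
        db.execute(
            "UPDATE project_statistics SET issue_count = issue_count + 1 "
            "WHERE project_id = %s", (event["project_id"],))
        events.task_done()
```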

Plan

  1. Kick-off working group: handbook, agenda, meeting
  2. Determine authoritative list of 3 patterns to comprise the first iteration
    1. A blueprint per pattern should be produced by an assigned DRI, covering:
      1. A description of the pattern
      2. Applicable use cases in the database
      3. An analysis and evaluation of current use cases and their effects on the database
      4. A brief overview of the design
  3. Based on the three patterns, produce a blueprint for the Data Access Layer and caching considerations
  4. First iteration implementation plan
    1. Must include a plan to measure effects on both the database and the application
    2. Must include validation (testing) plan

Roles and Responsibilities

| Working Group Role | Person | Title |
|--------------------|--------|-------|
| Executive Stakeholder | Eric Johnson | Chief Technology Officer |
| Facilitator/DRI | Gerardo "Gerir" Lopez-Fernandez | Engineering Fellow, Infrastructure |
| Functional Lead | Kamil TrzciƄski | Distinguished Engineer, Ops and Enablement (Development) |
| Functional Lead | Jose Finotto | Staff Database Reliability Engineer (Infrastructure) |
| Member | Stan Hu | Engineering Fellow |
| Member | Andreas Brandl | Staff Backend Engineer, Database |
| Member | Grzegorz Bizon | Staff Backend Engineer (Time-decay data: ci_builds) |
| Member | Craig Gomes | Backend Engineering Manager (Database and Memory) |
| Member | Chun Du | Director of Engineering, Enablement |
| Member | Christopher Lefelhocz | Cleaner |
| Member | Nick Nguyen | Fullstack Engineering Manager, Geo |