This document captures the design decisions behind the current implementation of real-time features in GitLab. It is adapted from the description and discussion within issues in this epic.
We want to implement real-time issue boards, but the current approach to real-time updates (polling with ETag caching) would not work well for them. We would have to track many things on an issue board (lists changing, issues within lists changing, and so on), and polling for each of those isn't feasible due to the number of requests required and the resulting load on server nodes and the database. It is possible to use a single polling endpoint, but that makes the code harder to understand.
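To make the scaling problem concrete, here is a back-of-the-envelope sketch. All numbers are hypothetical (interval, resource counts, and viewer count are assumptions for illustration, not measurements):

```ruby
# Rough illustration of why per-resource polling does not scale for an
# issue board: each open board multiplies the request rate by the number
# of resources being tracked.
POLL_INTERVAL_SECONDS = 15            # assumed polling interval
RESOURCES_PER_BOARD   = 1 + 10 + 200  # assumed: board + 10 lists + 200 visible issues
OPEN_BOARDS           = 5_000         # assumed concurrent viewers

requests_per_second = OPEN_BOARDS * RESOURCES_PER_BOARD / POLL_INTERVAL_SECONDS.to_f
puts format('~%d requests/second just to stay fresh', requests_per_second)
```

With these assumed numbers the servers absorb tens of thousands of requests per second merely to keep boards current, whereas a WebSocket needs one connection per viewer and traffic only when something actually changes.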
Polling is also not very real-time unless polling intervals can be dropped substantially.
Even pages that successfully use multiple polling requests, such as the MR page (title and description, notes, widgets, and so on), would benefit from faster updates.
The objective is to implement a real-time solution that satisfies the following criteria:
This objective is an iterative step in the long-term plan to implement real-time collaboration.
We will roll out the use of WebSockets by starting with a small, relatively low-risk feature. When we've identified and solved the problems of maintaining WebSocket connections at scale and captured the lessons of designing a feature for persistent connections, we'll produce documentation that will allow other developers to work on real-time features.
The initial feature is viewing assignees on issues in real-time and the chosen technology is Action Cable.
For the simplest deployments, enabling Action Cable enables the first feature by default.
The feature can also be toggled using two feature flags:
| Feature flag | Description |
|---|---|
| `real_time_issue_sidebar` | Attempts to establish a WebSocket connection when viewing an issue and responds to update signals |
| `broadcast_issue_updates` | Broadcasts a signal when an issue is updated |
Sometimes, the nodes serving Web requests aren't the same ones serving WebSocket connections (see "How to implement it on premise") and so don't have Action Cable enabled. In that case, the feature flags can be used to enable the feature explicitly.
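To make the split between the two flags concrete, here is a self-contained sketch of the signal flow they gate. `FakeCable` and the stream name are invented stand-ins for illustration; the real implementation uses Action Cable's Redis-backed pub/sub, not an in-memory bus:

```ruby
# A tiny in-memory pub/sub standing in for Action Cable's transport.
class FakeCable
  def initialize
    @streams = Hash.new { |hash, key| hash[key] = [] }
  end

  # Client half (real_time_issue_sidebar): register a handler on a stream.
  def subscribe(stream, &handler)
    @streams[stream] << handler
  end

  # Server half (broadcast_issue_updates): signal all subscribers.
  def broadcast(stream, payload)
    @streams[stream].each { |handler| handler.call(payload) }
  end
end

cable = FakeCable.new

# Viewing an issue subscribes to that issue's stream...
received = []
cable.subscribe('issue:42') { |message| received << message }

# ...and updating the issue broadcasts a signal to everyone watching it.
cable.broadcast('issue:42', { 'assignees' => ['alice'] })
```

Disabling either flag breaks the loop: without the broadcast flag no signal is published, and without the sidebar flag no client is listening.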
This diagram shows the current steps involved in establishing an open WebSocket connection for bidirectional communication. This is subject to change as work progresses.
Workhorse proxies WebSocket requests to the address given by its `cableBackend` option, defaulting to the `authBackend` address.
†: This step is especially subject to change as we consider using GraphQL Subscriptions instead.
The feature is currently available for internal team-members to demo on the dev.gitlab.org instance. This is a single-instance deployment of CE.
Performance testing of Action Cable with Puma found no impact on resource usage, but only tested idle connections. In the absence of simulated workloads, the recommendation was to roll the feature out gradually.
An end-to-end test for real-time assignees was added in this MR.
Instance administrators have a number of options for running Action Cable. Admins of single-instance and small cluster deployments may choose to serve WebSocket connections from existing nodes; enabling Action Cable makes the first feature immediately available.
Administrators of larger deployments may wish to proxy WebSocket connections to a separate set of nodes to protect their main Web nodes from saturation. This can be done in one of two ways:
- Using Workhorse's `cableBackend` option to specify a separate address; this defaults to the `authBackend` address.
In both cases, only embedded mode is supported for Action Cable. In the latter case, the separate nodes run full GitLab Web processes that additionally run Action Cable. See the decision to support only embedded mode here.
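As a sketch of the proxying approach, the Workhorse flags might look like the following. This assumes your gitlab-workhorse version supports the `-cableBackend` flag; the hostnames are placeholders, and a real deployment passes many more flags:

```shell
# Route ordinary requests to the main Rails backend, and WebSocket
# (Action Cable) traffic to a dedicated set of nodes. Addresses below
# are placeholders for illustration only.
gitlab-workhorse \
  -authBackend  http://gitlab-web.internal:8080 \
  -cableBackend http://gitlab-cable.internal:8080
```

Omitting `-cableBackend` leaves WebSocket traffic going to the `authBackend` address, which is the single-set-of-nodes setup described above.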
It's important to note that Action Cable channels (similar to controllers) can do anything that can be done in the web context, such as using models or reading from the cache, so these processes must be treated like existing web processes. They should have the same configuration and be able to connect to the database, Redis cache, shared state, Sidekiq, and so on. Although the initial implementation probably only performs permission checks, missing dependencies could become a source of subtle bugs in the future.
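The permission-check case can be sketched as follows. `IssueChannel` and its visibility check are hypothetical stand-ins invented for illustration, not GitLab's actual classes or policy API; the point is that the channel runs the same kind of authorization a controller would before streaming anything:

```ruby
# Simplified stand-in for an Action Cable channel guarding a stream with
# a permission check, mirroring the subscribed/reject flow. In a real
# deployment this check hits the database and cache, which is why cable
# nodes need the same configuration as ordinary web processes.
class IssueChannel
  def initialize(current_user, issue)
    @current_user = current_user
    @issue = issue
  end

  # No read permission, no stream.
  def subscribed
    return :rejected unless can_read_issue?

    :"streaming_issue_#{@issue[:id]}"
  end

  private

  def can_read_issue?
    @issue[:visible_to].include?(@current_user)
  end
end

IssueChannel.new('alice', { id: 42, visible_to: ['alice'] }).subscribed
# => :streaming_issue_42
```

A user without access is rejected at subscription time, so unauthorized clients never receive broadcasts for that issue.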
On gitlab.com, the plan is to proxy WebSocket connections to a dedicated set of `webservice` processes with Action Cable enabled.
Since the nodes serving Web requests on gitlab.com do not have Action Cable enabled, the feature can be controlled using the feature flags `:real_time_issue_sidebar` and `:broadcast_issue_updates`. These will be used to roll out the first feature in a controlled way.
Note: The feature has already been trialled on gitlab.com using the `ACTION_CABLE_IN_APP` environment variable (via the `extraEnv` section exposed by our Helm charts) to proxy WebSocket requests to a dedicated set of pods. This coincided with elevated memory consumption on our Workhorse nodes and was subsequently rolled back.
Action Cable was the first choice because it is included with Rails. Scalability is a known concern, but if it becomes a problem, AnyCable implements the same API, so we could switch to it in the future with minimal to no changes in application code.
Long-polling / Server-sent Events (SSE)
Both long-polling and SSE have the problem detailed above of having to poll or request multiple endpoints. Even if we consolidated them into one endpoint, we'd still have to implement custom backend logic similar to our current ETag caching that checks Redis or something similar. It's not worth it when Action Cable provides the full stack.
The `message_bus` gem implements multiple subscriptions over one polling endpoint. But since we're planning to build real-time collaboration, which needs lower latencies and bidirectional communication, it's better to go with WebSockets directly.
Go / Erlang / Elixir WebSocket servers
These languages handle concurrency better than Ruby, but without booting our Rails app and Ruby libraries we can't reuse the code we already have, such as permission checks. These are complex and easy to get wrong, so we definitely don't want to re-implement them. We could make a separate API call to our Rails backend, but more on that below.
AnyCable provides WebSocket servers in Go and Erlang, and it solves the problem of missing Rails context by using gRPC. The downside is that we'd have to spin up another gRPC server that boots our Rails app. This complicates our infrastructure and would take longer to set up. This option was discussed in this issue.
Since it is easy to switch to this later on if needed, we decided to defer this and start with Action Cable.
Other Ruby WebSocket servers (Faye)
Gitter uses Faye, and we looked into it briefly, but we didn't have a strong reason to choose it over Action Cable.