This document is a work in progress. The Digital Experience team is the DRI for marketing's A/B testing engineering efforts, which use the feature-flag-based tool LaunchDarkly.
An A/B test releases two versions of a page and compares how well they perform (version A against version B). When running an experiment, we test a hypothesis using a control variant and a test variant, much like the scientific method.
Below are some resources to learn more about feature flags. At a high level, a feature flag is an if-else wrapper around code that can be enabled, disabled, or served at a certain percentage to a certain group. This is controlled via a dashboard toggle, changing the production interface on-the-fly without having to wait for a release to change something.
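A minimal sketch of that if-else wrapper, using a plain object as the flag store (in production the value would come from a service such as LaunchDarkly; the flag key and function names here are hypothetical):

```javascript
// Hypothetical flag store: in production this value comes from the
// feature flag service's dashboard, not from hard-coded data.
const flags = { 'new-pricing-banner': true };

// The flag wraps two code paths; toggling the flag switches between
// them without waiting for a release.
function renderBanner(flagStore) {
  if (flagStore['new-pricing-banner']) {
    return 'new banner'; // code path served when the flag is on
  }
  return 'old banner'; // fallback when the flag is off
}
```

Flipping the dashboard toggle changes which branch runs for visitors on the next page load, with no deploy in between.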
Because the marketing website about.gitlab.com has no dynamic server, we needed a performant solution that could be implemented with a JavaScript SDK. In addition, we needed a solution that could attach metrics.
Below are some links with information on the history of the decision:
We currently use LaunchDarkly to control whether or not a test is showing and at what percentage, and to gather metrics about a test's performance.
Our A/B tests include two files, the control and the test variant. Both exist on the page in the HTML DOM at the same time, but are hidden by default on page load. The JavaScript SDK returns which version of the experiment should be shown.
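A sketch of how the SDK's answer could map to the DOM, assuming the variation key returned for the flag is either `control` or `test` (the selector names and helper below are hypothetical, not the actual ones in run-experiment.js):

```javascript
// Hypothetical helper: given the variation returned by the SDK
// (e.g. from client.variation('homepage-test', 'control')),
// pick which of the two pre-rendered, hidden DOM nodes to reveal.
function elementToShow(variation) {
  return variation === 'test' ? '#experiment-test' : '#experiment-control';
}

// In the browser this would be used roughly like:
//   document.querySelector(elementToShow(variation)).hidden = false;
// so only one of the two variants becomes visible after page load.
```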
run-experiments.js
from the www repository: https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/javascripts/run-experiment.js

This can be overridden by optional URL parameters as exhibited in the codepaths section below.
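A sketch of what such an override could look like. The parameter name `variant` and the function below are assumptions for illustration; the actual parameter names live in run-experiment.js and the codepaths section below:

```javascript
// Hypothetical override: a URL parameter (assumed name "variant")
// takes precedence over the variant the SDK served, which is useful
// for QA-ing a specific variant of an experiment.
function resolveVariant(search, sdkVariant) {
  const params = new URLSearchParams(search);
  const forced = params.get('variant');
  // Only accept known values; anything else falls back to the SDK.
  return forced === 'control' || forced === 'test' ? forced : sdkVariant;
}
```

For example, visiting a page with `?variant=test` would force the test variant regardless of the rollout percentage.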
Example merge request for an AB test
Active