Engineering A/B tests

Learn more about how Digital Experience engineers our A/B tests.


Overview

In an A/B test, we release two versions of a page and compare how well they perform (version A against version B). When running an experiment, we test a hypothesis using a control variant and a test variant, much like the scientific method.

We currently use LaunchDarkly to control whether a test is shown and at what percentage, and to gather metrics about a test’s performance. Within LaunchDarkly, you can create events that fire when a user does something; in our case, the most common example is a click. We also push the experiment ID into the Google Analytics dataLayer so we know which version of the page the user viewed.

window.dataLayer = window.dataLayer || [];

dataLayer.push({
  'event': 'launchDarklyExperiment',
  'launchDarklyExperimentName': 'name of experiment',
  'launchDarklyExperimentId': '0 or 1' // 0 = control, 1 = variant
});
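
For context, here is a minimal sketch of how these pieces fit together, assuming the LaunchDarkly JavaScript client-side SDK is loaded on the page as LDClient. The client-side ID, flag key, event key, and CSS selector are placeholders, not the values we actually use.

window.dataLayer = window.dataLayer || [];

// Placeholder client-side ID and an anonymous visitor context.
const client = LDClient.initialize('YOUR-CLIENT-SIDE-ID', { key: 'anonymous-visitor' });

client.on('ready', function () {
  // 0 = control, 1 = variant, matching the dataLayer push above.
  const experimentId = client.variation('example-experiment', 0);

  dataLayer.push({
    'event': 'launchDarklyExperiment',
    'launchDarklyExperimentName': 'example-experiment',
    'launchDarklyExperimentId': String(experimentId)
  });

  // Fire a LaunchDarkly event when the user clicks the element we care about.
  const cta = document.querySelector('.js-example-cta');
  if (cta) {
    cta.addEventListener('click', function () {
      client.track('example-cta-click');
    });
  }
});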

This project is for the ideation and refinement of A/B tests to be conducted on GitLab’s Marketing site. Anyone can contribute an A/B testing issue.

What is a feature flag

Below are some resources to learn more about feature flags. At a high level, a feature flag is an if-else wrapper around code that can be enabled, disabled, or served at a certain percentage to a certain group. This is controlled via a dashboard toggle, so the production interface can change on the fly without waiting for a release.
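
As a hypothetical illustration of that if-else wrapper, assuming an already-initialized LaunchDarkly client; the flag key and the two functions are placeholders.

// 'example-new-banner', showNewBanner, and showCurrentBanner are placeholders.
if (client.variation('example-new-banner', false)) {
  // Flag is on for this visitor (or they fall in the enabled percentage/group).
  showNewBanner();
} else {
  // Flag is off: serve the existing experience.
  showCurrentBanner();
}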

How we run A/B tests

Our A/B tests include two files: the control and the test variant. Both exist in the page’s HTML DOM at the same time but are hidden by default on page load. The LaunchDarkly JavaScript SDK returns which version of the experiment should be shown (see the sketch after the process list below). For each test, we use the following process:

  1. Test candidates are validated by modelling out the potential lift on an annualized basis.
  2. The candidate with the highest annualized lift (scoped specifically to the action/improvement we’re measuring) is run as an A/B test.
  3. Once the A/B test is complete, we analyze the results.
  4. The winner goes live at 100%.
  5. Saves (tests that did not deliver as expected) are documented.
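
Here is a minimal sketch of the reveal step described above the list. It assumes both versions are hidden by default with the hidden attribute and marked with hypothetical data-experiment attributes; the flag key is a placeholder and client is the initialized LaunchDarkly client.

client.on('ready', function () {
  // 0 = control, 1 = test variant.
  const experimentId = client.variation('example-page-experiment', 0);

  // Both versions are in the DOM and hidden by default; reveal only one.
  const selector = experimentId === 1
    ? '[data-experiment="variant"]'
    : '[data-experiment="control"]';
  const section = document.querySelector(selector);
  if (section) {
    section.hidden = false;
  }
});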

A/B Testing Schedule

Planned A/B Tests

| Issue | Test Length | Status |
|---|---|---|
| Homepage Featured Blocks vs Carousel | | scheduled |

Completed A/B Tests & Results

| Issue | Variant A | Variant B | Winner |
|---|---|---|---|
| Remove ‘register’ from the navigation (top right) on about.gitlab.com | Variant A Image | Variant B Image | Variant B |
| Add borders around pricing tiers on /pricing based on DemandBase data | Variant A Image | Variant B Image | Variant B |
| Update Homepage Sub Copy | Variant A Image | Variant B Image | Variant B |
| Pricing Page: Free SaaS Trial + Self Managed Install CTAs | Variant A Image | Variant B Image | Variant B |

How we engineer tests

Running a test on the Buyer’s Experience repository

Running a test on the www repository

The variant that is served can be overridden by optional URL parameters, as exhibited in the codepaths section below.
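
As an illustration only (the real parameter names are documented in the codepaths section), an override might look like the sketch below, with a hypothetical ab query parameter where ?ab=0 forces the control and ?ab=1 forces the variant.

// Hypothetical override: a URL parameter takes precedence over the flag value.
const params = new URLSearchParams(window.location.search);
const experimentId = params.has('ab')
  ? Number(params.get('ab'))                        // forced by ?ab=0 or ?ab=1
  : client.variation('example-page-experiment', 0); // otherwise use LaunchDarkly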

Example merge request for an A/B test

Tutorials for implementing an A/B test in the www-gitlab-com project

Active

Additional Notes

Why did we choose LaunchDarkly

Because the marketing website about.gitlab.com has no dynamic server, we needed a solution that is performant and can be implemented with a JavaScript SDK. We also needed a solution that could attach metrics.

Below are some links with information on the history of the decision:
