This page covers the Growth engineering process for running experiments.
We follow a four-step process for running experiments, as outlined in Andrew Chen's How to build a growth team.
Each week, we provide progress updates and talk about our learnings in our Growth Weekly Meeting.
The duration of each experiment varies depending on how long it takes for the results to reach statistical significance. Because of this, there will be some weeks when several experiments run in parallel.
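Whether an experiment has reached statistical significance is ultimately a data question, but a rough sketch of the kind of check involved may help. The sketch below uses a two-proportion z-test; the conversion counts and the 0.05 threshold are hypothetical examples, not a prescribed GitLab methodology.

```typescript
// Sketch: is the difference in conversion rate between control and candidate
// statistically significant? Uses a two-proportion z-test.

function normalCdf(z: number): number {
  // Abramowitz & Stegun approximation of the standard normal CDF.
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const tail =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - tail : tail;
}

function twoProportionZTest(
  controlConversions: number,
  controlTotal: number,
  candidateConversions: number,
  candidateTotal: number
): number {
  const p1 = controlConversions / controlTotal;
  const p2 = candidateConversions / candidateTotal;
  const pooled = (controlConversions + candidateConversions) / (controlTotal + candidateTotal);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / controlTotal + 1 / candidateTotal));
  const z = (p2 - p1) / se;
  // Two-tailed p-value: how likely a difference this large is under the null hypothesis.
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Hypothetical weekly check: keep the experiment running until the p-value drops below 0.05.
const pValue = twoProportionZTest(120, 4000, 155, 4100);
console.log(pValue < 0.05 ? "significant - consider concluding" : "keep collecting data");
```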
~"experiment idea"
, optionally add: ~"growth experiment"
)
%Awaiting further demand
, %Backlog
, %Next 1-3 releases
, or a specific milestone)workflow::
issues linked to the epic to track the work required to complete the experiment~"experiment::active"
when the experiment is live#production
Slack channelSee also the Growth RADCIE and DRIs for determining DRIs at each stage.
A backlog of experiments is tracked on the Experiment backlog board.
To track the status of an experiment, experiment tracking issues with the ~"experiment-rollout" label and a scoped experiment:: label are collected on experiment rollout boards across the following groups:
| gitlab-org | gitlab-com | all groups |
|---|---|---|
| Experiment rollout | Experiment rollout | Issues List |
This issue acts as the starting point for defining an experiment, including an overview of the experiment, the hypothesis, and some idea of how success will be measured. The [Experiment idea] issue template can be used for this (label added: ~"experiment idea", optionally add: ~"growth experiment").
This epic acts as the single source of truth (SSoT) for the experiment once it has been properly defined according to our Experiment Definition Standards and is deemed worthwhile to run. Once an Experiment Definition Issue is added to this epic, we fill out further details such as the expected rollout plan. We also assign the experiment to a milestone and follow the product development flow for UX & Engineering work. As the experiment design and rollout progress, this epic or parent issue should contain details or links to further information about the experiment, including the tracking events and data points used to determine whether the experiment is a success, as well as links to relevant metrics-reporting dashboards (such as Sisense).
This issue is used to track the experiment progress once deployed. It is similar to a Feature Flag Roll Out issue, with an additional experiment:: scoped label to track the status of the experiment. The [Experiment Tracking] issue template includes an overview of the experiment, who to contact, rollout steps, and a link to the overall experiment issue or epic.
The experiment:: scoped labels are:

- ~"experiment::pending" - The experiment is waiting to be deployed
- ~"experiment::active" - The experiment is active (live)
- ~"experiment::blocked" - The experiment is currently blocked
- ~"experiment::validated" - The experiment has been validated (the success criteria were clearly met)
- ~"experiment::invalidated" - The experiment has been invalidated (the success criteria were clearly unmet)
- ~"experiment::inconclusive" - The experiment was inconclusive (the success criteria were neither clearly met nor clearly unmet)

This issue is used to clean up an experiment after it has been completed. It is created within the project where the cleanup work will be done (e.g. the gitlab-org/gitlab project).
The cleanup work may include completely removing the experiment (in the case of ~"experiment::invalidated" and ~"experiment::inconclusive") or refactoring the experiment feature for the long run (in the case of ~"experiment::validated").
The cleanup issue should be linked to the experiment rollout issue as a reference to ensure the experiment is concluded prior to cleanup.
The Experiment Successful Cleanup issue template can be used for the gitlab-org/gitlab project.
Issue templates referenced on this page live in the following projects:

- gitlab-org/gitlab project
- team-tasks project
Experimentation, like everything at GitLab, should be approached with the GitLab CREDIT values in mind, specifically the values of Iteration, Efficiency, and Results.
The larger an experiment is, the longer it takes to craft a design, implement and review code changes, define and collect the necessary data, organize that data into meaningful tables, graphs, and dashboards, and so on. As we build and improve our experimentation platform and increase our ability to quickly create and run experiments, we should expect a large percentage of all experiments to fail to prove their hypotheses. Given that these invalidated or inconclusive experiments will be rolled back, there is an advantage in keeping experiments as small and iterative as possible.
With this in mind, there are advantages to developing a Minimum Viable Experiment (MVE). Much like the concept of a Minimum Viable Change (MVC), the goal of an MVE is to find the smallest hypothesis to test, the simplest design for testing that hypothesis, the quickest implementation of that design, the least amount of data that needs to be collected to verify the hypothesis, and so on.
So, what might an MVE look like in practice? Matej Latin shares an example of a so-called "painted door" test in his blog post, "Small experiments, significant results and learnings". A simple example of a "painted door" test might be a call-to-action (CTA) button that doesn't really go anywhere – maybe it brings up a simple modal which says "Oops! That feature isn't ready yet," or maybe it takes the user to an existing page in our documentation. Because the design of this type of MVE is intentionally simple, it is easier and faster to develop, deploy, and start gathering data. With a small amount of instrumentation, we can use it as an opportunity to measure initial engagement with that button.
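For illustration only, here is a minimal sketch of what the client side of such a "painted door" CTA might look like. The `trackEvent` helper, the element id, and the event names are hypothetical placeholders, not GitLab's actual tracking API.

```typescript
// Minimal "painted door" CTA sketch. trackEvent, the element id, and the
// event names are hypothetical placeholders.
function trackEvent(category: string, action: string, label: string): void {
  // In a real experiment this would send a structured event to the
  // analytics pipeline; here we just log it.
  console.log(`track: ${category} / ${action} / ${label}`);
}

const ctaButton = document.querySelector<HTMLButtonElement>('#new-feature-cta');

ctaButton?.addEventListener('click', () => {
  // The only data this MVE needs is whether users click at all.
  trackEvent('painted_door_experiment', 'click_cta', 'new_feature_cta');

  // The feature doesn't exist yet, so show a simple "not ready" message
  // instead of navigating anywhere.
  window.alert("Oops! That feature isn't ready yet.");
});
```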
This can be a fairly low-cost way to inform next steps, for example rolling back, developing a larger experiment, or implementing a feature as a follow-up.
Keep in mind that "painted door" tests are not always the best first approach. The main idea is to strive for an iterative approach to experimentation. Ask yourself, "Is there a simpler version of this experiment which is worth deploying and which still gives us enough data to know how to proceed?"
For real-time experiment rollout status, GitLab team members can view the experiments API (docs) (a JSON viewer for your browser is recommended).
The `current_status` will be `on`, `off`, or `conditional`. If conditional, there will be either a `percentage_of_time` or a `percentage_of_actors`. Refer to the note on feature flags in the experiment guide.
There are dashboards in Sisense to indicate whether the experiment flag still exists, but not the current status. These are also only available to GitLab team members.
The experiment rollout board lists rollout issues linked to experiment feature flags used in development and on SaaS.