We test changes at GitLab to verify hypotheses about design or copy. These tests can use a variety of tools and methods, but the goal is the same: to improve the visitor experience and better explain the value of GitLab.
We don't test fixes for typos, link updates, or other structural improvements to the site.
Testing requires a control and a variant within the same time period, while holding all other variables constant. A/B testing tools allow us to build tests that follow testing best practices and gather data about what does or does not encourage people to spend time on our site or convert on a form. With testing, we can make informed decisions about what helps our audience reach their goals and what helps us meet our business goals.
## Active tests

@shanerice and @dmor will appear as approvers on any MRs with active tests.
Always gather relevant heatmaps, analytics, metrics, KPIs (key performance indicators), etc.
We will announce major tests in #whats-happening-at-gitlab on Slack. A major test is anything on a page with more than 10,000 pageviews in the last 30 days. For example, tests on /free-trial/ and its sub-pages, or on the homepage, would all qualify as major tests. The announcement should share the basic details and timeline of the test and link to the test issue for additional context.
The DRI for the test should share the original Slack announcement in appropriate channels (#sales, #marketing, etc.).
We use different tools for different kinds of tests.
- Feature flag tests replace entire pages or large sections of a page with a test.
- WYSIWYG tests replace small elements of a page or update copy.
- Hotjar tests measure interactions on a page for research before and after tests.
## Feature flag tests

This is where we plan to do the bulk of our testing, and we can run several of these tests at the same time for full-page, partial-page, component, and small changes. Right now we use LaunchDarkly for feature flag testing. The tool is administered by @bradon_lyon; please ping him with any questions about it. You can request a test with the AB Test template.
Feature flags should be implemented in code similarly to the includes system. Example: `/source/experiments/1234-control.html.haml`, where `experiments` is the folder name instead of `includes` and `1234` is the ID number of the associated issue. "Control" refers to the baseline measurement you are testing against.
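As a rough illustration, a page could switch between the control and variant files with a conditional. This is a minimal sketch only: `experiment_variant?` is a hypothetical helper standing in for the flag check (in practice the LaunchDarkly SDK supplies that decision), and the `partial` call assumes Middleman's conventions, which the site's actual `experiments` setup may not follow.

```haml
-# Hypothetical sketch: experiment_variant? is an assumed helper, not a
-# documented site function; the flag decision would come from LaunchDarkly.
- if experiment_variant?('1234')
  -# Visitors in the test group see the variant.
  = partial 'experiments/1234-variant'
- else
  -# Everyone else sees the control, the baseline we measure against.
  = partial 'experiments/1234-control'
```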
## WYSIWYG tests

This is an advanced tool meant to test large-scale changes at a systemic level, and for now we plan to run only one of these tests at a time. We use Google Optimize for these tests. Because large-scale tests have performance impacts, we only use this platform to test a handful of CSS elements or copy changes.
To enable Google Optimize on any page, add `google_optimize: true` to the page's frontmatter.
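For example, the top of a page's source file would include the flag in its YAML frontmatter (the `layout` and `title` values below are placeholders, not required values):

```yaml
---
layout: default        # placeholder
title: Example page    # placeholder
google_optimize: true  # enables Google Optimize on this page
---
```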
## Hotjar tests

Use this link to request a heatmap test or results for a page on the site.
New heatmaps record data until the page reaches 2,000 pageviews. An Inbound Marketing Manager will ping you when the results are complete (duration depends on average traffic to the page; some tests can take a month or more).
Note: Every page viewed by a visitor is counted as a single pageview.