
Category Direction - Code Testing and Coverage

Code Testing

Code testing and coverage ensure that individual components built within a pipeline perform as expected, and are an important part of a Continuous Integration framework. Our vision for this category is to make the feedback loop for developers as short as possible, eventually enabling users to go from first commit to code in production in only an hour with confidence.

Interested in joining the conversation for this category? Please join us in the issues where we discuss this topic and can answer any questions you may have. Your contributions are more than welcome.

This page is maintained by the Product Manager for Testing, James Heimbuck (E-mail)

What's Next & Why

Users are excited to have data and a visual indicator of the direction in which code coverage is trending in a project. For users with dozens to thousands of projects, however, an even higher-level view of coverage is often required, especially in the enterprise. The last step to deliver our vision for Code Coverage for groups is to implement a graph of the average coverage across all projects. We welcome feedback about this feature in gitlab#231515.

Maturity Plan

This category is currently at the "Viable" maturity level, and our next maturity target is "Complete" (see our definitions of maturity levels). Key deliverables to achieve this are included in these epics:

We may find through research that only some of the issues in these epics are needed to move this category's maturity forward. The work to advance the maturity is captured and being tracked in this epic.

Competitive Landscape

Many other CI solutions can also consume standard JUnit test output or other formats to display insights, either natively (such as CircleCI) or through a plugin (such as Jenkins).
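GitLab consumes the same JUnit XML format through the `artifacts:reports:junit` keyword, which is what powers test results in the merge request. A minimal sketch of such a job, where the job name, image, and test command are illustrative assumptions:

```yaml
# Sketch of a .gitlab-ci.yml job that publishes JUnit-format test results.
# The job name, image, and test command are illustrative; only the
# artifacts:reports:junit keyword is the GitLab-specific piece.
unit-tests:
  image: node:18
  script:
    - npm ci
    - npm test                    # assumed to write results to junit.xml
  artifacts:
    when: always                  # upload the report even when tests fail
    reports:
      junit: junit.xml            # parsed by GitLab for the MR test summary
```

Because the report is uploaded even on failure, developers see which tests broke directly in the merge request rather than digging through job logs.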

There are new entries in the code testing space utilizing ML/AI tech to optimize test execution like Launchable and even write test cases like Diffblue.

To remain ahead of these competitors, we will continue to make unit test data visible and actionable for developers in the context of the merge request, with unit test reports and historical insights to identify flaky tests in issues like gitlab#33932.

Top Customer Success/Sales Issue(s)

Sales has asked the Testing group for a higher-level view of testing and coverage data for both projects and groups. A first step towards this will be the display of coverage data for groups.

Top Customer Issue(s)

The most popular issues in the Code Testing and Coverage category today are requests to enforce that code coverage increases, or at least holds at a set percentage, before a merge.
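The enforcement these issues request would build on coverage data GitLab can already collect today via the job-level `coverage` keyword, which extracts a percentage from the job log. A sketch, where the test command and regular expression are assumptions about the project's tooling:

```yaml
# Sketch: extract a coverage percentage from the job log so GitLab can
# record it per pipeline. The command and regex are illustrative and
# depend on what the coverage tool actually prints.
test-with-coverage:
  script:
    - pytest --cov=my_package   # assumes output ending in e.g. "TOTAL ... 87%"
  coverage: '/TOTAL.*\s+(\d+%)$/'
```

With the percentage captured per pipeline, a merge-blocking threshold becomes a comparison between the source and target branch values.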

Another popular issue is a request to see the code coverage badge on any branch, which would solve a common problem for users of long-lived branches who today have no view of the test coverage of those branches.
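GitLab's badge endpoint already accepts a ref in its URL path, so a per-branch badge embedded in a README would look something like the following (the project path and branch name are illustrative):

```markdown
<!-- Coverage badge pinned to a specific branch; path and ref are illustrative -->
![coverage](https://gitlab.com/my-group/my-project/badges/my-feature-branch/coverage.svg)
```

The request in the issue is to make this view of long-lived branches a first-class, discoverable experience rather than a hand-built URL.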

Top Internal Customer Issue(s)

The GitLab Quality team has opened an interesting issue, Provide API to retrieve test case durations from a pipeline, that is aimed at solving a problem where they have limited visibility into long test run times that can impact efficiency.

Top Analyst Landscape Items

In 2020, Gartner released the Artificial Intelligence Use Case Prism for Development and Testing on their research website. Several of the use cases are directionally relevant: generating unit tests by analyzing code patterns, using business logic to create API test scenarios, using machine learning to fabricate test data, and correlating testing results back to business metrics to convey meaningful connections like release success or quality.

Top Vision Item(s)

The top vision item is Detect and report on flaky tests, which will start to address the problem of flaky test results that cause developers to distrust test runs or to force unnecessary reruns of tests. Both of those outcomes are undesirable and counter to our goal of minimizing the lead time of changes. The Testing team had a good discussion about this as part of a Think Big session in July 2020. The next steps towards this are implementing test history in the Test Summary widget and the Unit Test Report to learn whether there is value in tracking and displaying test failures.

We are also looking to provide a one-stop place for CI/CD leaders with Director-level CI/CD dashboards. Quality is an important driver for improving our users' ability to confidently track deployments with GitLab, so we are working next on a view of code coverage data over time across a group's projects.

We have started brainstorming some ideas for the vision and captured that as a rough design idea you can see below.

Design for Vision of Code Testing and Coverage data summary

Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license