The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
Code testing and coverage ensure that individual components built within a pipeline perform as expected. This is a core piece of the Ops Section direction's "Smart Feedback Loop" between developers, which we aim to make as fast and reliable as possible, eventually enabling users to go from first commit to code in production in only an hour, with confidence.
Interested in joining the conversation for this category? Please join us in the issues where we discuss this topic and can answer any questions you may have. Your contributions are more than welcome.
This page is maintained by the Product Manager for Testing, James Heimbuck (E-mail)
We hear from users that they want to enforce that test coverage within a project does not decrease as changes are made. It is possible to do this today with scripts that fail the pipeline, but the user experience is poor and the check is easy to circumvent.
This is why we are next starting on a feature that requires a merge request's test coverage to meet or exceed the project's current level before the merge request can be merged.
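The script-based workaround mentioned above can be sketched as a job script that compares the current coverage report against a stored baseline. This is a minimal illustration under assumptions (a Cobertura-format `coverage.xml` and a `.coverage-baseline` file kept by the project are inventions for this sketch); it is not a GitLab feature:

```shell
#!/bin/sh
# Hypothetical sketch of a script-based coverage gate. The baseline file name,
# report location, and rounding behavior are assumptions for illustration.

# For demonstration, create a minimal Cobertura report and a stored baseline.
cat > coverage.xml <<'EOF'
<coverage line-rate="0.85" branch-rate="0.70"></coverage>
EOF
echo 80 > .coverage-baseline

# Extract the overall line rate from the report and convert it to a percentage.
rate=$(sed -n 's/.*line-rate="\([0-9.]*\)".*/\1/p' coverage.xml | head -n 1)
current=$(awk -v r="$rate" 'BEGIN { printf "%.0f", r * 100 }')
baseline=$(cat .coverage-baseline)

# Fail the job if coverage dropped below the recorded baseline.
if [ "$current" -lt "$baseline" ]; then
  echo "Coverage ${current}% fell below baseline ${baseline}%" >&2
  exit 1
fi

# Otherwise record the new baseline so future pipelines compare against it.
echo "$current" > .coverage-baseline
echo "Coverage ${current}% meets or exceeds baseline ${baseline}%"
```

A job running a script like this fails when coverage regresses, but nothing stops a contributor from editing the script or the baseline file, which is why a built-in, enforced check is the better experience.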
Check out our Ops Section Direction "Who's it for?" for an in-depth look at our target personas across Ops. For Code Testing and Coverage, our "What's Next & Why" items target the following personas, ranked by priority for support:
This category is currently at the "Viable" maturity level, and our next maturity target is "Complete" (see our definitions of maturity levels). Key deliverables to achieve this are included in these epics:
We may find through research that only some of the issues in these epics are needed to move this category's maturity forward. The work to advance the maturity is captured and tracked in this epic.
To remain ahead of these competitors, we will continue to make unit test data visible and actionable for developers in the context of the merge request, with unit test reports and historical insights to identify flaky tests in issues like gitlab#33932.
Sales has requested a higher-level view of testing and coverage data for both projects and groups from the Testing Group. Our first step towards this was displaying coverage data for groups, the first iteration of which has shipped. We believe that Test Coverage and Test Execution data are signals that will help Team Leads guide their teams to a higher quality project.
The most popular customer issue is a request to support JaCoCo coverage reports for the test coverage visualization feature. While there is a documented way to transform JaCoCo reports into the supported Cobertura format, running an extra job to do so is contrary to our vision of Speedy, Reliable Pipelines and our product principle of working by default.
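For context, the documented workaround amounts to an extra pipeline job along these lines. The converter image, script path, and report paths below are illustrative assumptions rather than an officially supported configuration:

```yaml
# Illustrative sketch only: the converter image, script name, and report paths
# are assumptions; consult the documentation for the exact configuration.
jacoco-to-cobertura:
  stage: test
  image: jacoco2cobertura:latest   # hypothetical converter image
  script:
    # Transform the JaCoCo XML report into the Cobertura format GitLab can parse.
    - python /opt/cover2cover.py target/site/jacoco/jacoco.xml src/main/java > cobertura.xml
  needs: ["unit-tests"]            # assumes a prior job produced the JaCoCo report
  artifacts:
    reports:
      cobertura: cobertura.xml
```

Every merge request pays the cost of this extra job, which is exactly the kind of friction native JaCoCo support would remove.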
The GitLab Quality team has opened an interesting issue, Provide API to retrieve test case durations from a pipeline, that is aimed at solving a problem where they have limited visibility into long test run times that can impact efficiency.
In 2020, Gartner released the Artificial Intelligence Use Case Prism for Development and Testing on their research website. Several of the use cases are directionally relevant: generating unit tests by analyzing code patterns, using business logic to create API test scenarios, using machine learning to fabricate test data, and correlating testing results back to business metrics to convey meaningful connections like release success or quality.
To realize our long-term vision, we need to add more value not just for users uploading junit.xml and Cobertura reports, but for all users with test and coverage reports. We believe the best way to do this is to make it easy for users to contribute additional parsers so they can access the features the team is building on top of that data. This will enable wider community contributions and aligns with GitLab's Dual Flywheel strategy. A first step towards this could be a GitLab-specific unit test report.
We are also looking to provide a one-stop place for CI/CD leaders with Director-level CI/CD dashboards. Quality is an important driver for improving our users' ability to confidently track deployments with GitLab, so we are next working on a view of test execution data over time within projects.
We have started brainstorming ideas for the vision and captured them as a rough design, which you can see below.