The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
Thanks for visiting this category direction page on Code Testing and Coverage in GitLab. This page belongs to the Pipeline Execution group of the Verify stage and is maintained by Jackie Porter (E-Mail, Twitter).
This direction page is a work in progress, and everyone can contribute.
We apply intelligence to testing to ensure that individual components built within a pipeline perform as expected and as efficiently as possible. We also aim to make testing accessible by making it easier to set up and start testing, driving quality early in the development process.
Our long-term vision is to optimize pipelines to quickly deliver quality code with a high degree of confidence. We will do this by automating testing; reducing the time between development and test cycles; broadening test scope and coverage (e.g. unit, functional, end-to-end); and building dashboards that provide an aggregate view of test quality and observable trends for monitoring product health.
Our offerings in the area of Testing are limited compared to our competitors; in particular, we do not offer test case management features. We are working with the Certify group to build an integrated test case management feature, providing traceability across product requirements and test cases/plans as part of gitlab&9640. Our long-term vision provides not only traceability, but also group-level dashboards that let various stakeholders view both rolled-up and individual project completion status. Quality remains an important driver for improving our users' ability to confidently track deployments with GitLab, and as noted above we are starting on that vision in the Project Quality Summary epic.
Pipeline efficiency has become increasingly important to developers, CI/CD leaders, and executives. As part of achieving our long-term goal of "smarter testing", we are evaluating opportunities to use ML/AI to optimize pipelines, as well as additional opportunities to expand our current offering for Fail Fast Testing. We are also evaluating mechanisms that enable users to select which tests they want to execute or quarantine.
In all features we build, we strive to continuously improve our user experience, including ease of use and automation where possible. We also know users want more insights from their CI/CD pipelines and especially from their tests. We are evaluating gitlab#210250 as a way to provide those insights and further encourage users to upload test report artifacts within their CI/CD pipelines.
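Uploading test report artifacts is the entry point for most of these insights. As a minimal sketch, a project could declare a JUnit-format report in its `.gitlab-ci.yml` so results surface in merge requests (the job name, script command, and report filename below are illustrative assumptions; the `artifacts:reports:junit` keyword is existing GitLab CI syntax):

```yaml
# Hypothetical .gitlab-ci.yml job: run a test suite and upload its
# JUnit-format output so GitLab can display results in the MR widget.
test:
  stage: test
  script:
    # Example command; any runner that emits JUnit XML works here.
    - bundle exec rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    when: always        # upload the report even when tests fail
    reports:
      junit: rspec.xml
```

Declaring the report under `artifacts:reports:junit` (rather than as a plain artifact) is what lets GitLab parse the results for merge request display and historical test insights.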
There are no new planned features in Code Testing and Coverage for 2024. Pipeline Execution will support high priority bug fixes in this category as they arise.
Although we do not have any plans to release new features in the coming year, we welcome community contributions that align with our intelligent testing vision.
BIC (Best In Class) is an indicator of forecasted near-term market performance based on a combination of factors, including analyst views, market news, and feedback from the sales and product teams. It is critical that we understand where GitLab appears in the BIC landscape.
In the 2021 Continuous Software Delivery Forrester Tech Tide, Testing was cited as the number one key to unlocking continuous delivery for organizations. Top areas for investment are a) API test automation, b) continuous functional test suites, and c) shift-left performance testing. Industry leaders are seeking integrated suites over best-of-breed tools for testing and CD. Additionally, API testing is being marketed as a silver bullet that is cheaper, more effective, and more efficient for modernizing the enterprise toolchain. Sample vendors include API Fortress, Broadcom, Eggplant, and others. We are exploring how to expand our market share in this area via product#2516 and by adding a new category in this merge request.
Many other CI solutions can also consume standard JUnit test output or other formats to display insights, either natively (as CircleCI does) or through a plugin (as Jenkins does). Allure is a popular reporting tool for reviewing test executions, and DataDog recently introduced CI Visibility as part of their SaaS offering, including Flaky Test Management.
To remain ahead of these competitors, we will continue to make unit test data visible and actionable for developers in the context of the Merge Request, with unit test reports and historical insights to identify flaky tests via epics like gitlab&3129.
Check out our Ops Section Direction "Who's it for?" for an in-depth look at our target personas across Ops. For Code Testing and Coverage, our "What's Next & Why" items target the following personas, ranked by priority of support:
In 2020, Gartner released the Artificial Intelligence Use Case Prism for Development and Testing on their research website. Directionally, several of the use cases include generating unit tests by analyzing code patterns, using business logic to create API test scenarios, using machine learning to fabricate test data, and correlating testing results back to business metrics to convey meaningful connections such as release success or quality.