The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
GitLab Code Quality helps you keep your source code maintainable and bug-free.
It automatically analyzes your source code for potential mistakes and hard-to-maintain patterns, then surfaces those findings in merge request widgets, reports, and diffs so you can handle them before they land on your default branch.
You can also provide your own custom findings by saving a JSON report as a CI/CD artifact, so you can track organization-specific quality goals.
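As a hedged sketch of that integration point (the job name, script, and sample finding below are illustrative, not from a GitLab template), a pipeline can publish any JSON file in the Code Quality report format through `artifacts:reports:codequality`:

```yaml
# Hypothetical job that publishes a custom Code Quality report.
# GitLab reads the findings from the file declared under
# artifacts:reports:codequality.
custom-code-quality:
  stage: test
  script:
    # Any tool or script can emit findings, as long as the output is a JSON
    # array of objects with a description, fingerprint, severity, and
    # location (path plus begin line).
    - |
      cat > gl-code-quality-report.json <<'EOF'
      [
        {
          "description": "Function exceeds the team's complexity budget.",
          "check_name": "org.complexity.budget",
          "fingerprint": "7815696ecbf1c96e6894b779456d330e",
          "severity": "major",
          "location": {
            "path": "app/models/report.rb",
            "lines": { "begin": 42 }
          }
        }
      ]
      EOF
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```

The `fingerprint` should stay stable for the same underlying issue, since it is what lets GitLab compare findings between pipelines.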
Interested in joining the conversation for this category? We'd love to hear your voice. Please join us in the issues where we discuss this topic and can answer any questions you may have.
This page is maintained by the Product Manager for Static Analysis, Connor Gilbert.
GitLab Code Quality can be broken down into three groups of features:
Code analysis currently uses the CodeClimate open-source scanning tool and its analyzers.
Within the code analysis area, our top priority is removing blockers to Code Quality adoption, particularly the Docker-in-Docker requirement imposed by the CodeClimate scanning engine. Removing Docker-in-Docker will allow us to support runners operating in more contexts, including on OpenShift.
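For context, a simplified sketch of the current constraint is below. The image tags are illustrative and the engine invocation is a placeholder rather than the actual template contents, but the shape of the job shows why privileged Docker-in-Docker runners are needed today:

```yaml
# Simplified illustration of the Docker-in-Docker constraint: the
# CodeClimate-based job needs a Docker daemon because the engine runs
# analyzers as nested containers. Details are illustrative; the real job
# comes from the GitLab-provided Code Quality template.
code_quality:
  image: docker:20.10
  services:
    - docker:20.10-dind        # requires privileged runners
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    # Placeholder for the engine invocation, which pulls per-language
    # analyzer images and runs them inside the dind service.
    - run-code-quality-scan
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```

Runners that cannot provide a privileged Docker-in-Docker service, such as many OpenShift installations, cannot run this kind of job, which is why removing the requirement is our top priority in this area.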
Ultimately, we intend to remove the CodeClimate engine from GitLab. We plan to iterate toward a new user experience that has fewer system requirements, runs faster, and provides better support for the specific configurations development teams use. While we design the longer-term solution, we have investigated options to allow users to switch away from Docker-in-Docker sooner. We're planning user experience research as we prepare to build the new solution.
Code Quality reports are processed so they can be displayed in merge requests and used elsewhere in GitLab.
Within the processing and display area, we plan to invest in:
Today, GitLab Code Quality lacks detailed workflow features. We intend to improve the experience with features such as:
Any tool that is too noisy is quickly ignored. If we want to provide sustainable value and drive actual quality improvements in users' software projects, we need to make sure that developers are shown findings that are:
Our direction follows certain themes:
Check the full list of Code Quality feature announcements for more.
This category is currently at the Minimal maturity level, and our next maturity target is Viable. (See GitLab's definitions of maturity levels.)
To reach Viable maturity, we believe we must solve most of the top issues identified on this page, though research may yield a smaller set of issues.
The work to move this category toward its vision is tracked in this epic, which is currently being reviewed for completeness.
SonarQube is a commonly used static analysis tool that gives users information about quality and security problems in their code. Notable features we hear about from users are the quality gate, which blocks a merge request until issues are resolved, and the letter grade the tool assigns. We understand this letter grade to provide a high-level quality measure that is easy to understand and track, so team leads and directors can see, for example, when a project moves from an F to a C. Many users we talk to want to get this kind of data in GitLab through either the Code Quality feature set or the SonarQube->GitLab integration, but they would prefer to have one fewer tool to manage.
Many development teams adopt open-source linters to check for correctness issues or known bad patterns. These tools are often tightly integrated with the language the team uses, and development teams typically maintain a ruleset or configuration file specifying the exact findings they wish to check for.
Teams often choose to run their linters in CI/CD and fail a job if any sufficiently severe findings are identified. To dismiss a finding and allow the job to pass, teams add exceptions as comments in source code, adjust rule severities, or ignore entire files.
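A minimal sketch of that pattern, assuming a JavaScript project that uses ESLint (the job name, image, and flags are the team's choice, not a GitLab-provided template):

```yaml
# Illustrative linter job: fails when ESLint reports any error-level finding,
# and any warning as well because of --max-warnings 0.
eslint:
  image: node:18
  script:
    - npm ci
    # Per-rule severities and ignored files live in the team's ESLint
    # configuration; CI only decides whether findings fail the job.
    - npx eslint --max-warnings 0 .
```

Individual findings are then dismissed in the source itself, for example with `// eslint-disable-next-line <rule-name>` comments or per-file ignore entries, rather than in the pipeline definition.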
GitLab provides additional value by integrating code quality findings into the merge request view, which helps reviewers and other stakeholders collaborate to understand and resolve areas for improvement.
Azure DevOps does not offer in-product quality testing in the same way we do with CodeClimate, but it does have a number of easy-to-find and easy-to-install plugins, both paid and free, in its marketplace. Its SonarQube plugin appears to be the most popular, though its rating suggests users encounter some challenges.
To remain ahead of Azure DevOps, we should continue to push forward the feature capability of our own open-source integration with CodeClimate. Issues like Code Quality report for default branch move our vision forward and ensure we have a high-quality integration in our product. To be successful here, though, we need to support the formats Microsoft-stack developers use. The current Code Quality scanner has some limited scanning capability, but the issue Support C# code quality results extends this direction to be more in line with the scanning provided for other languages. Because CodeClimate does not yet have deep .NET support, we may need to build something ourselves.
The top customer request right now is to allow multiple code quality reports to be shown in the pipeline report and diff view. We believe customers are running scanners in addition to the one provided by the GitLab template to get around other issues, such as Docker-in-Docker and limitations on pulling from Docker Hub. Not being able to see these reports natively within GitLab may lead them to find another solution for their code quality needs.
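A hedged sketch of the kind of pipeline these customers describe is below; the job names, tools, and converter scripts are hypothetical. Each job publishes its own Code Quality report artifact, and surfacing findings from both in the pipeline report and diff view is the requested capability:

```yaml
# Illustrative pipeline running two quality scanners instead of, or alongside,
# the GitLab-provided template job. The converter scripts are hypothetical
# stand-ins for anything that emits the Code Quality JSON format.
rubocop-quality:
  image: ruby:3.1
  script:
    - gem install rubocop
    - rubocop --format json > rubocop.json || true
    # Hypothetical converter from RuboCop JSON to the Code Quality format.
    - ruby convert-to-code-quality.rb rubocop.json > gl-code-quality-rubocop.json
  artifacts:
    reports:
      codequality: gl-code-quality-rubocop.json

eslint-quality:
  image: node:18
  script:
    - npm ci
    - npx eslint --format json . > eslint.json || true
    # Hypothetical converter from ESLint JSON to the Code Quality format.
    - node convert-to-code-quality.js eslint.json > gl-code-quality-eslint.json
  artifacts:
    reports:
      codequality: gl-code-quality-eslint.json
```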
Another top customer priority is the Code quality report for default branch, which will let developers see code quality issues in the default branch outside of a pipeline or merge request context.
Field feedback tells us that a significant number of customers consider adopting GitLab Code Quality but are blocked by its current scanning architecture. We will iterate to resolve these concerns, beginning with the Docker-in-Docker requirement.
The top missing features are Quality Gates and a Quality Dashboard. We have a good understanding of quality gates and are working to address them by resolving Prevent merge on code quality degradation. We are investigating what outcomes customers want from a Quality Dashboard and will iterate on a solution in our MVC issue.
Recently, GitLab team members have proposed using code quality reports with custom analyzers for design system migration and technical writing. Both use cases would allow teams to collaborate more efficiently, and we are excited to enable them. A currently identified blocker is support for multiple reports in diffs and reports.
A top request from our internal customers is the ability to enforce code quality standards across departments by requiring an approval before code quality can decrease in a merge request.
Our vision for Code Quality is for it to become another rich signal of confidence for users of GitLab. It will be not just a signal of the quality of a change, but one of many inputs, alongside Code Coverage, for viewing a project at a high level and deciding what code needs attention, additional tests, or refactoring to bring it up to the group's quality requirements. This long-term vision is captured in the issues Instance wide code statistics and Code Quality Dashboard, and the team has started brainstorming what this may look like by creating wireframes like the design below.
We are currently evaluating the way that Code Quality scanning will evolve, as discussed in the Code analysis section above. This evaluation may lead us to reshape the way that Code Quality findings are generated and processed.
Last Reviewed: 2022-07-07
Last Updated: 2022-07-07