The following page may contain information related to upcoming products, features, and functionality. Note that this information is presented for informational purposes only; please do not rely on it for purchasing or planning decisions. As with all projects, the items mentioned on this page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.
Automatically run static analysis on your source code to surface issues and see whether quality is improving or getting worse with the latest commit. Our vision for Code Quality is to provide actionable data across an organization, empowering users to make quality visible with every commit and in every release.
Interested in joining the conversation for this category? Please join us in the issues where we discuss this topic and can answer any questions you may have. Your contributions are more than welcome.
This page is maintained by the Product Manager for Testing, James Heimbuck (E-mail)
To move closer to our long-term vision of a Code Quality Dashboard, we first need to make sure that users see only the Code Quality alerts that matter to them, with the context about severity that they need. The next issue that moves the category in this direction provides context for quality violations by showing them in the MR Diff view, which better integrates the data into a developer's day-to-day workflow. After working the code quality data into the developer workflow, we will move to gitlab#238858, which will let teams set a threshold for the minimum severity level of issues to display from code quality scans.
This category is currently at the "Minimal" maturity level, and our next maturity target is "Viable" (see our definitions of maturity levels). Key deliverables to achieve this are:
We may find in research that only some of these issues are needed to move the vision for this category forward. The work to move the vision is captured and being tracked in this epic.
SonarQube is a commonly used static analysis tool that gives users information about quality and security problems in their code. Some of the notable features we hear about from users are the quality gate, which blocks a merge request until issues are resolved, and the letter grade provided by the tooling. We understand the value of this letter grading to be a high-level, easy-to-understand quality measure that team leads and directors can track over time, for example seeing when a project moves from an F to a C. Many users we talk to want to get this kind of data in GitLab, either through the Code Quality feature set or the SonarQube->GitLab integration, but they would prefer to have one fewer tool to manage.
Azure DevOps does not offer in-product quality testing in the same way we do with CodeClimate, but it does have a number of easy-to-find-and-install plugins, both paid and free, in its marketplace. Its SonarQube plugin appears to be the most popular, though its marketplace rating suggests users have some challenges with it.
To remain ahead of Azure DevOps, we should continue to push forward the capability of our own open-source integration with CodeClimate. Issues like Code Quality report for default branch both move our vision forward and ensure we have a high-quality integration in our product. To be successful here, though, we need to support the formats Microsoft-stack developers use. The current Code Quality scanner has some limited scanning capability, but the issue Support C# code quality results would extend this to be more in line with the scanning provided for other languages. Because CodeClimate does not yet have deep .NET support, we may need to build something ourselves.
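For reference, the existing CodeClimate-based scanning described above is enabled by including GitLab's Code Quality CI/CD template in a project's pipeline configuration. A minimal sketch:

```yaml
# .gitlab-ci.yml — minimal sketch enabling the current Code Quality scanning.
# The Code-Quality.gitlab-ci.yml template adds a `code_quality` job that runs
# the CodeClimate-based engine (the engine whose DinD requirement is discussed
# elsewhere on this page).
include:
  - template: Code-Quality.gitlab-ci.yml
```

With this in place, pipelines produce a Code Quality report artifact that feeds the MR widget, so improvements to the scanner's language coverage flow through without further configuration.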
The field teams have told us that the top priorities for customers looking to replace a competitor and get the full value of GitLab are Quality Gates and a Quality Dashboard. We have a good understanding of quality gates and are working to address this in the issue Prevent merge on code quality degradation. We are investigating what outcomes customers want from a Quality Dashboard and will iterate on a solution in our MVC issue.
The top customer request currently is to allow multiple code quality reports to be combined in the full report. We believe customers are running scanners beyond the one provided by the GitLab template to work around other issues, such as DinD requirements and limitations with pulling from DockerHub. Not being able to see these reports natively within GitLab may lead them to find another solution for their code quality needs.
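Today, a single additional scanner can already surface its findings in GitLab by writing a report in the CodeClimate JSON format and declaring it as a `codequality` artifact; the multi-report request above is about combining several such reports. A hedged sketch, where the `my-linter` tool and the report filename are illustrative placeholders rather than real commands:

```yaml
# Sketch: a custom scanner job publishing its own Code Quality report.
# `my-linter` and `gl-custom-quality.json` are hypothetical placeholders;
# `artifacts:reports:codequality` is the real mechanism GitLab uses to
# ingest findings in the CodeClimate JSON format.
custom_code_quality:
  stage: test
  script:
    - my-linter --format codeclimate > gl-custom-quality.json
  artifacts:
    reports:
      codequality: gl-custom-quality.json
```

Jobs like this run alongside the template-provided `code_quality` job, which is why customers want the resulting reports merged into one view.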
Another top customer priority is to be able to see the Code Quality report for default branch, which will let developers get information about code quality issues in the default branch outside of a pipeline or MR context.
A top problem identified by our internal customers is that the code quality MR Widget and Report lack context about whether code quality issues were introduced in the current Merge Request. This makes it hard to incorporate that extra data into a code review. By resolving the issue Show code quality notices on diffs/MRs, we will help developers make better decisions about whether a merge request should be merged or needs further refinement.
Our vision for Code Quality is for it to become another rich signal of confidence for users of GitLab. It will be not just a signal of the quality of a change but one of many inputs, like Code Coverage, for viewing a project at a high level and deciding what code needs attention, additional tests, or refactoring to bring it up to the quality requirements of the group. This long-term vision is captured in the issues Instance wide code statistics and Code Quality Dashboard, and the team has started brainstorming what this may look like by creating wireframes like the design below.
As we think about the future of the Code Quality category, we recognize that there are limitations in what the open-source scanner we have used to date provides. Adding additional scanners, starting with the linters in Super-Linter that are not present in the existing scanner, would be a first step toward this vision and toward dropping support for the current engine, which requires DinD.