The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
This direction is constantly evolving and everyone can contribute:
Code Review is an essential activity of software development. It ensures that contributions to a project maintain and improve code quality, and is an avenue of mentorship and feedback for engineers. It can also be one of the most time consuming activities in the software development process.
GitLab's guiding principle for Code Review is: Reviewing code is an activity that ultimately improves the resulting product, by improving the quality of the code while optimizing for the speed at which that code is delivered.
The Code Review process begins with authors proposing changes to an existing project via a change proposal. Once they've proposed the changes, they need to request feedback from peers (Developers, Designers, Product Managers, etc.) and then respond to that feedback. Ultimately, a merge request needs to be approved and then merged for the Code Review process to be complete for a given changeset.
When an author submits their merge request, the first step is to find an appropriate person to review the changes. As an author, it can be hard to determine who might have subject matter expertise in an area, who has available capacity, and, ultimately, who needs to approve your Merge Request. If the wrong person is selected for a review, the quality of the review and the speed at which that review is completed are impacted.
When an author finds the right reviewer, it's important that the reviewer is able to easily understand the context of the changes and provide constructive, meaningful feedback to the author. A reviewer's feedback should have clear intent and offer actionable comments, free of undocumented opinion.
Merge request reviewers also need to transparently communicate the status of the review at all points in time. This includes tracking for themselves which changes they have reviewed and where they are in the process. They also need to communicate status to the change author, so the author knows when feedback needs to be actioned. Finally, status needs to be communicated to other reviewers and interested parties so that the contribution is delivered efficiently.
After feedback has been provided through the merge request, the author must respond to that feedback and signal to reviewers that it has been actioned either via additional changes or comments. Authors also need to address feedback provided by automated testing, security scanning and quality review tools as part of their contribution.
As a final piece of the review cycle, the merge request needs to be approved. This is an important affirmative signal that the contribution meets the standards of the project and is an improvement to the codebase. In some cases, initial reviewers provide that approval which is why selecting the correct reviewer is so important. In cases where there are second level approvals required, surfacing that information to authors and suggesting appropriate approvers can ensure contributions don't sit stale.
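One common way GitLab projects encode who must approve which paths is a `CODEOWNERS` file combined with approval rules, so that required approvers surface automatically on the merge request. A minimal sketch follows; the group names are hypothetical:

```plaintext
# .gitlab/CODEOWNERS — example only; group names are hypothetical.
# For overlapping patterns, the last matching entry takes precedence.

# Default reviewers for everything in the repository
* @backend-maintainers

# Frontend code needs approval from the frontend group
*.vue @frontend-maintainers
*.js  @frontend-maintainers

# Documentation changes go to technical writers
/doc/ @technical-writers
```

Paired with an approval rule requiring Code Owner approval on protected branches, this gives authors an immediate answer to "who needs to approve this?" for second-level approvals.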
GitLab's vision for code review is a place where:
In GitLab, Code Review takes place in the Merge Request. GitLab should make these tasks efficient and easy, so that velocity and code quality both increase even if the merge request isn't perfect.
The metrics by which we measure the success of the Code Review category are aligned with our goals for code review, specifically ease of use, love-ability, and efficiency.
Our primary metric is: reducing the duration of the Code Review. This is measured as the duration from the first merge request version to merged.
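This duration can be computed directly from merge request timestamps. A minimal sketch in Python, assuming ISO-8601 timestamps like those exposed as `createdAt` and `mergedAt` on a merge request in GitLab's GraphQL API (the sample values below are invented):

```python
from datetime import datetime


def review_duration_hours(created_at: str, merged_at: str) -> float:
    """Duration from the first merge request version to merged, in hours.

    Timestamps are ISO-8601 strings with a trailing "Z", e.g. as
    returned by GitLab's GraphQL API (`createdAt`, `mergedAt`).
    """
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    merged = datetime.fromisoformat(merged_at.replace("Z", "+00:00"))
    return (merged - created).total_seconds() / 3600


# Invented sample data for illustration
print(review_duration_hours("2021-07-01T09:00:00Z", "2021-07-02T15:30:00Z"))  # 30.5
```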
Secondary metrics of success act as support for the primary metric, helping build a more complete picture of how successful the category is.
Right now we're focused on measuring and improving perceived performance: “how fast, responsive, and reliable a website feels to its users. The perception of how well a site is performing can have more impact on the user experience than the actual load and response times.” Perceived performance is not only technical performance (i.e. load and response times), but also user performance (i.e. efficiency in completing tasks), and can be formulated as:
```
perceived performance = f(expected performance, UX, actual performance)
           experience = f(perceived performance, task completion)
```
| Aspect | How we measure | Data |
| --- | --- | --- |
| Expected performance | Primarily by user feedback, and secondarily by the actual performance of competitors. | SaaS user feedback (in progress)<br>Competitor performance (Software Forge Performance Index, maintained by SourceHut)<br>Largest Contentful Paint of SaaS vs GitHub.com for key pages |
| Actual performance | Primarily by the Largest Contentful Paint (LCP) metric, and secondarily by other important metrics. | Test instance (test samples: large MR overview and changes tabs, large MR commits tab)<br>SaaS: other project MR overview tab (test sample) |
| User performance | Estimates of users' execution time of primary tasks through the GOMS approach. We focus on the percentage difference between GitLab and competitors, or between current and proposed designs. | July 2021 estimates |
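GOMS-style estimates can be sketched with the Keystroke-Level Model (KLM), which sums standard per-operator times (values from Card, Moran & Newell). The task breakdown below is invented for illustration, not an actual GitLab measurement:

```python
# Keystroke-Level Model (KLM) operator times in seconds,
# standard values from Card, Moran & Newell.
KLM = {
    "K": 0.2,   # keystroke or key press
    "P": 1.1,   # point with mouse to a target
    "B": 0.1,   # mouse button press or release
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}


def estimate_seconds(ops: str) -> float:
    """Estimate task execution time from a sequence of KLM operators."""
    return round(sum(KLM[op] for op in ops), 2)


# Invented breakdown: approve an MR = think (M), point at the
# button (P), click it (B press + B release)
print(estimate_seconds("MPBB"))  # 2.65
```

Comparing two such estimates (GitLab vs a competitor, or current vs proposed design) as a percentage difference gives the relative-efficiency figure the table describes.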
Code review is used by software engineers and individual contributors of all kinds. Depending on their context, however, the workflow and experience of code review can vary significantly.
There are many code review tools in the market, as well as multiple workflows. Deciding which features and workflows to build into GitLab is important so that users can migrate seamlessly. However, it is not realistic for us to support every feature and workflow out there, so we must identify the most popular, forward-looking features and workflows and support them in GitLab.
Some of the features/workflows we are planning to build into GitLab:
Some of the features/workflows we are currently researching:
The code review process involves at least two roles (author, and reviewer) but may involve many people, who work together to achieve code quality standards and mentor the author. Furthermore, many reviewers are often not Developers. Reviewers may be Developers, Product Designers, Product Managers, Technical Writers, Security Engineers and more.
In support of GitLab's vision for code review, areas of interest and improvement can be organized by the following goals:
The following improvements will help us make significant progress towards the above goals:
In Progress: Smarter merge request diffs using merge refs
Primary functionality for merge refs shipped in GitLab 13.9. We're continuing to refine this area with improvements to merge conflict presentation, so that merge requests don't fall back to the merge base when conflicts are detected.
In Progress: Restructure MR merge widget
The merge widget is a central piece to the merge request experience and how code is actually contributed to projects. The current designs of the area can be confusing and inconsistent with how messages are presented to users and actions required to complete the merge. We'll be focusing on making this more clear and easier for users to interact with.
Merge Request Reviewers allow users to explicitly ask people for a review of their contribution. However, when that user has finished the review, there is no clear signal to the author of the merge request (absent an approval.) There's also not a clear signal to the reviewer that the author has addressed their feedback and a new review is needed before the contribution can be accepted.
Identifying merge requests that are "waiting on you" will simplify the discovery of merge requests that require your attention and help to speed up cycle time so that users aren't wondering who the next person to take action is.
Large code reviews involving multiple files and lots of code require reviewers to accurately keep track of the files and code that have already been reviewed, in order to avoid duplicated effort.
Helping reviewers keep track of the files/code that has already been reviewed will save time, eliminate duplication of work, and yield better code reviews.
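One way to make reviewed state robust across revisions is to key it by file path plus a hash of the file's content, so a file automatically drops back to "unreviewed" when a new push changes it. This is a hypothetical sketch of the idea, not GitLab's implementation:

```python
import hashlib


class ReviewTracker:
    """Track which files a reviewer has marked as viewed.

    Keying by (path, content hash) means a file automatically becomes
    "unreviewed" again when a new revision changes its content.
    """

    def __init__(self):
        self._seen = set()

    def _key(self, path: str, content: str):
        return (path, hashlib.sha1(content.encode()).hexdigest())

    def mark_reviewed(self, path: str, content: str) -> None:
        self._seen.add(self._key(path, content))

    def is_reviewed(self, path: str, content: str) -> bool:
        return self._key(path, content) in self._seen


tracker = ReviewTracker()
tracker.mark_reviewed("app.py", "print('v1')")
print(tracker.is_reviewed("app.py", "print('v1')"))  # True
print(tracker.is_reviewed("app.py", "print('v2')"))  # False: changed in a new revision
```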
In Progress: Better defined mergeability
Mergeability checks are an essential part of the review process: they ensure automated tasks and requirements are completed and approvals have been given prior to merge. Currently, checks can span long-running backend processes and some frontend confirmations, but these are not all implemented consistently, which can result in an improper or non-existent state for the merge button. Possible paths forward have been investigated and discussed, and this work will be the next area of ongoing improvement.
In Progress: Error Budget improvements
The Code Review team currently exceeds its allocated error budget, with areas of the application that are too slow to respond or requests that fail completely. We'll be working to improve these areas so they behave more reliably and improve the experience for users in the merge request.
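Error budgets follow standard SRE arithmetic: an availability target over a window implies a fixed allowance of downtime or failed requests. A small sketch, with an assumed 99.95% target (the target value is illustrative, not GitLab's actual SLO):

```python
def error_budget_minutes(slo: float, window_days: int = 28) -> float:
    """Minutes of allowed unavailability for an SLO over a window."""
    total_minutes = window_days * 24 * 60
    return round((1 - slo) * total_minutes, 1)


def budget_remaining(slo: float, failed: int, total: int) -> float:
    """Fraction of the error budget left given observed request failures."""
    allowed = (1 - slo) * total
    return round(1 - failed / allowed, 3) if allowed else 0.0


# Assumed 99.95% availability target over a 28-day window
print(error_budget_minutes(0.9995))            # 20.2 minutes of budget
print(budget_remaining(0.9995, 100, 1_000_000))  # 0.8 of the budget remains
```

Exceeding the budget, as described above, means the failure rate has consumed more than this allowance, which is the signal to prioritize reliability work.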
This category is currently at the Loveable maturity level (see our definitions of maturity levels).
GitLab competes with both integrated and dedicated code review tools. Because the merge request (the code review interface), and more specifically the merge widget, is the single source of truth about a code change and a critical control point in the GitLab workflow, it is important that merge requests and code review in GitLab are excellent. Our primary source of competition and comparison is dedicated code review tools.
Prospects and new customers who previously used dedicated code review tools typically have high expectations and are accustomed to a high degree of product depth. Given that developers spend a significant portion (perhaps the majority) of their in-application time in merge requests, limitations are quickly noticed and become a source of frustration.
GitLab’s current code review experience is largely modeled after GitHub’s, with most of its pros and cons. Gerrit and Phabricator are frequently mentioned as the best alternatives to the GitHub code review model. See the competitive analysis for a closer look at the user experience and feature set of competitor tools.
Integrated code review packaged with source code management:
Dedicated code review tools:
The highest priority customer requests are for improved application performance, accuracy and efficiency for reviewing merge request diffs of all sizes, small and extremely large.
Other notable requests include:
Investigating: Commit focused code review
Small changes are easier and faster to review, and commits are the smallest unit of change. Some of the largest projects in the world use commit based workflows for this reason.
We are investigating how we can amplify best practices in commit-focused workflows and bring these into GitLab to improve code quality and efficiency.
Investigating: Track unread merge request comments and commits
When reviewing a merge request with multiple commits, a large number of changes, or that requires many revisions, it's hard to know what requires your attention, and what you have previously reviewed. This is a factor in making code review inefficient.