The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
This direction is constantly evolving and everyone can contribute:
Code Review is an essential activity of software development. It ensures that contributions to a project maintain and improve code quality, and it is an avenue of mentorship and feedback for engineers. It can also be one of the most time-consuming activities in the software development process.
GitLab's guiding principle for Code Review is: Reviewing code is an activity that ultimately improves the resulting product, by improving the quality of the code while optimizing for the speed at which that code is delivered.
The Code Review process begins with authors proposing changes to an existing project via a change proposal. Once they've proposed the changes, they need to request feedback from peers (Developers, Designers, Product Managers, etc.) and then respond to that feedback. Ultimately, a merge request needs to be approved and then merged for the Code Review process to be completed for a given changeset.
When an author submits their merge request, the first step is to find an appropriate person to review the changes. As an author, it can be hard to determine who might have subject matter expertise in an area, who has available capacity, and, ultimately, who needs to approve your merge request. If the wrong person is selected for a review, the quality of the review and the speed at which that review is completed are impacted.
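One way to reason about reviewer selection is as a ranking problem that balances subject matter expertise against available capacity. The sketch below is a hypothetical heuristic, not GitLab's actual suggestion algorithm; the field names and weighting are invented for illustration.

```python
# Hypothetical reviewer-suggestion heuristic: rank candidates by recent
# activity in the changed files, discounted by their open review load.
def score(candidate: dict) -> float:
    return candidate["recent_commits_to_changed_files"] - 0.5 * candidate["open_reviews"]

candidates = [
    {"name": "alice", "recent_commits_to_changed_files": 8, "open_reviews": 8},
    {"name": "bala",  "recent_commits_to_changed_files": 5, "open_reviews": 1},
]
best = max(candidates, key=score)
print(best["name"])  # bala: alice knows the files better but is overloaded
```

Even a simple discount for open review load captures the tension described above: the most knowledgeable reviewer is not always the one who can complete the review quickly.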
When an author finds the right reviewer, it's important that the reviewer is able to easily understand the context of the changes and provide constructive, meaningful feedback to the author. The feedback a reviewer provides should convey clear intent through actionable comments that are devoid of undocumented opinion.
Merge request reviewers also need to communicate the status of a review transparently at all points in time. This includes keeping track, for themselves, of which changes have been reviewed and where in the process they are. They also need to communicate status to the change author, so the author knows when feedback needs to be actioned. Finally, status needs to be communicated to other reviewers and interested parties so that the contribution is delivered efficiently.
After feedback has been provided through the merge request, the author must respond to that feedback and signal to reviewers that it has been actioned either via additional changes or comments. Authors also need to address feedback provided by automated testing, security scanning and quality review tools as part of their contribution.
As a final piece of the review cycle, the merge request needs to be approved. This is an important affirmative signal that the contribution meets the standards of the project and is an improvement to the codebase. In some cases, initial reviewers provide that approval which is why selecting the correct reviewer is so important. In cases where there are second level approvals required, surfacing that information to authors and suggesting appropriate approvers can ensure contributions don't sit stale.
GitLab's vision for code review is a place where:
In GitLab, Code Review takes place in the merge request. GitLab should make these tasks efficient and easy, so that velocity and code quality both increase even if the merge request isn't perfect.
The metrics by which we measure the success of the Code Review category are aligned with our goals for code review, specifically ease of use, love-ability, and efficiency.
Our primary metric is: reducing the duration of the Code Review. This is measured as the duration from the first merge request version to merged.
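As a sketch of how this metric could be computed, the snippet below measures hours from a merge request's first version to its merge, then takes the median across a sample. The record shape and timestamps are illustrative, not an actual API response.

```python
from datetime import datetime
from statistics import median

def review_duration_hours(first_version_at: str, merged_at: str) -> float:
    """Hours between the first merge request version and the merge.

    Timestamps are ISO 8601 strings (the common API timestamp format).
    """
    start = datetime.fromisoformat(first_version_at)
    end = datetime.fromisoformat(merged_at)
    return (end - start).total_seconds() / 3600

# Hypothetical sample of merged MRs (field names are illustrative).
merged_mrs = [
    {"first_version_at": "2021-07-01T09:00:00", "merged_at": "2021-07-02T09:00:00"},
    {"first_version_at": "2021-07-03T10:00:00", "merged_at": "2021-07-03T16:00:00"},
    {"first_version_at": "2021-07-05T08:00:00", "merged_at": "2021-07-09T08:00:00"},
]
durations = [review_duration_hours(mr["first_version_at"], mr["merged_at"])
             for mr in merged_mrs]
print(f"median review duration: {median(durations):.1f}h")
```

The median is often preferred over the mean for duration metrics, since a few long-stalled merge requests would otherwise dominate the figure.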
Secondary metrics of success act as support for the primary metric, helping build a more complete picture of how successful the category is.
Once in a while, we conduct UX scorecards to track the user experience through various heuristics — see all UX scorecards for Code Review. At the Create stage level, we conduct usability benchmarking studies.
Right now we're focused on measuring and improving perceived performance: “how fast, responsive, and reliable a website feels to its users. The perception of how well a site is performing can have more impact on the user experience than the actual load and response times.” Perceived performance is not only technical performance (i.e. load and response times), but also user performance (i.e. efficiency in completing tasks), and can be formulated as:
perceived performance = f(expected performance, UX, actual performance)

experience = f(perceived performance, task completion)
| Component | How it's measured | Data sources |
| --------- | ----------------- | ------------ |
| Expected performance | Primarily by user feedback, and secondarily by the actual performance of competitors. | SaaS user feedback (in progress); competitor performance (Software Forge Performance Index, maintained by SourceHut); Largest Contentful Paint of SaaS vs GitHub.com for key pages |
| Actual performance | Primarily by the Largest Contentful Paint (LCP) metric, and secondarily by other important metrics. | Test instance (test samples: large MR overview and changes tabs, large MR commits tab); SaaS: Other project MR overview tab (test sample) |
| User performance | Estimates of users' execution time of primary tasks through the GOMS approach. We focus on the percentage difference between GitLab and competitors, or between current and proposed designs. | July 2021 estimates |
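As a rough illustration of the GOMS approach, the sketch below uses Keystroke-Level Model operator times (a simplified GOMS variant, per Card, Moran & Newell) to compare two task designs. The operator times are the standard published values; the task traces are invented for illustration and do not describe any real interface.

```python
# Keystroke-Level Model operator times in seconds (Card, Moran & Newell).
KLM = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with mouse
    "B": 0.10,  # mouse button press or release
    "M": 1.35,  # mental preparation
}

def execution_time(trace: str) -> float:
    """Sum operator times for a task written as a string of KLM operators."""
    return round(sum(KLM[op] for op in trace), 2)

# Hypothetical traces for "approve a merge request" in two designs.
current_design = "MPBB" + "MPBB"   # find button, click; confirm dialog, click
proposed_design = "MPBB"           # single approve click, no confirmation
t_current = execution_time(current_design)
t_proposed = execution_time(proposed_design)
pct_diff = round((t_current - t_proposed) / t_current * 100, 1)
print(f"{t_current}s vs {t_proposed}s ({pct_diff}% faster)")
```

Working with percentage differences, as the table above describes, keeps the comparison meaningful even though the absolute estimates are coarse.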
The Source Code category of GitLab offers the features where the creative process begins. Here authors will not only consume existing project contents but will also author new content that will eventually move through the DevOps lifecycle. Additionally, many of the features in Source Code are consumed in the Code Review stage of the software development lifecycle. Consider the following examples:
Because of this close relationship, the Source Code Management group must work closely with the Code Review group in order to ensure the developer experience is cohesive and efficient. This experience is core to providing a great first impression for users.
Code review is used by software engineers and individual contributors of all kinds. Depending on their context, however, the workflow and experience of code review can vary significantly.
There are many code review tools on the market, as well as multiple workflows. Deciding which features and workflows to build into GitLab is important so that users can migrate seamlessly. However, it is not realistic for us to support every feature and workflow out there, so we must identify the most popular, forward-looking ones and support them in GitLab.
Some of the features/workflows we are planning to build into GitLab:
The code review process involves at least two roles (author, and reviewer) but may involve many people, who work together to achieve code quality standards and mentor the author. Furthermore, many reviewers are often not Developers. Reviewers may be Developers, Product Designers, Product Managers, Technical Writers, Security Engineers and more.
In support of GitLab's vision for code review, areas of interest and improvement can be organized by the following goals:
In Progress: Merge conflicts in diffs
Building upon the merge refs functionality shipped in GitLab 13.9, we're continuing to refine this area so that merge requests don't fall back to the merge base when conflicts are detected.
In Review: Merge Requests that require my attention
Merge Request Reviewers allow users to explicitly ask people for a review of their contribution. However, when a reviewer has finished their review, there is no clear signal to the author of the merge request (absent an approval). Nor is there a clear signal to the reviewer that the author has addressed their feedback and a new review is needed before the contribution can be accepted.
Identifying merge requests that are "waiting on you" will simplify the discovery of merge requests that require your attention and help to speed up cycle time so that users aren't wondering who the next person to take action is.
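The "waiting on you" idea can be modeled as a small state machine in which each review event hands attention to either the author or the reviewer. The event names and rules below are illustrative assumptions, not GitLab's implementation.

```python
# Minimal sketch of "whose attention is needed" on a merge request.
ATTENTION_RULES = {
    "review_requested": "reviewer",    # author asks for a review
    "changes_requested": "author",     # reviewer leaves feedback
    "feedback_addressed": "reviewer",  # author pushes fixes / resolves threads
    "approved": "author",              # ready for the author to merge
}

def attention_after(events: list[str]) -> str:
    """Return who the MR is waiting on after a sequence of events."""
    needs_attention = "reviewer"  # a fresh review request starts with the reviewer
    for event in events:
        needs_attention = ATTENTION_RULES.get(event, needs_attention)
    return needs_attention

print(attention_after(["review_requested", "changes_requested"]))   # author
print(attention_after(["review_requested", "changes_requested",
                       "feedback_addressed"]))                      # reviewer
```

The key property is that attention is always assigned to exactly one party, so "who acts next" is never ambiguous.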
Large code reviews involving multiple files and lots of code require reviewers to accurately keep track of which files and code have already been reviewed in order to avoid duplicated effort.
Helping reviewers keep track of the files and code that have already been reviewed will save time, eliminate duplicated work, and yield better code reviews.
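One way to make "already reviewed" reliable is to key the viewed state on a digest of the file's content rather than just its path, so that a later commit that changes the file automatically flips it back to needing review. This is an illustrative sketch, not GitLab's implementation.

```python
import hashlib

class ReviewProgress:
    """Track which files a reviewer has seen, keyed by content digest.

    If a later commit changes a file, its digest no longer matches and
    the file shows as needing review again.
    """

    def __init__(self):
        self._seen: dict[str, str] = {}  # path -> digest at review time

    @staticmethod
    def _digest(content: str) -> str:
        return hashlib.sha256(content.encode()).hexdigest()

    def mark_viewed(self, path: str, content: str) -> None:
        self._seen[path] = self._digest(content)

    def is_viewed(self, path: str, content: str) -> bool:
        return self._seen.get(path) == self._digest(content)

progress = ReviewProgress()
progress.mark_viewed("app.py", "def handler(): ...")
print(progress.is_viewed("app.py", "def handler(): ..."))   # True
print(progress.is_viewed("app.py", "def handler(): pass"))  # False: file changed
```

Keying on content rather than the commit ID also means a rebase that leaves a file untouched does not discard the reviewer's progress on it.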
In Progress: Better-defined mergeability
Mergeability checks are an essential part of the review process: they ensure that automated tasks and requirements are complete, and that approvals have been given, before a merge happens. Checks currently span long-running backend processes and some frontend confirmations, but they are not implemented consistently, which can leave the merge button in an incorrect or missing state. We have investigated possible paths forward, and this work is the next area of ongoing improvement.
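A consistent merge button implies folding every check, fast or slow, into one well-defined state. The sketch below shows one way to do that; the check names and the three-state model are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MergeabilityCheck:
    """One check gating the merge button (names are illustrative)."""
    name: str
    passed: Optional[bool]  # None while a long-running check is still pending

def merge_button_state(checks: list[MergeabilityCheck]) -> str:
    """Fold every check into a single, consistent merge button state."""
    if any(c.passed is False for c in checks):
        return "blocked"   # a definitive failure always wins
    if any(c.passed is None for c in checks):
        return "checking"  # something is still running; don't guess
    return "ready"

checks = [
    MergeabilityCheck("pipeline succeeded", True),
    MergeabilityCheck("approvals given", True),
    MergeabilityCheck("no merge conflicts", None),  # still being computed
]
print(merge_button_state(checks))  # checking
```

Treating "still computing" as its own state avoids exactly the problem described above: a button that claims readiness (or blocks) before all checks have actually resolved.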
In Progress: Restructure merge requests
The merge request interface is overwhelming, and it can be challenging to find the information you need to complete the tasks you're working on. We're exploring several areas to restructure information on the page, remove items where appropriate, and increase the focus on completing the important tasks for merge requests.
This category is currently at the Loveable maturity level (see our definitions of maturity levels).
GitLab competes with both integrated and dedicated code review tools. Because the merge request, which is the code review interface, and more specifically the merge widget, is the single source of truth about a code change and a critical control point in the GitLab workflow, it is important that merge requests and code review in GitLab are excellent. Our primary source of competition and comparison is dedicated code review tools.
Prospects and new customers who previously used dedicated code review tools typically have high expectations and are accustomed to a high degree of product depth. Given that developers spend a significant portion, perhaps the majority, of their in-application time in merge requests, limitations are quickly noticed and become a source of frustration.
GitLab's current code review experience is largely modeled after GitHub's, with most of its pros and cons. Gerrit and Phabricator are frequently mentioned as the best alternatives to the GitHub code review model. According to JetBrains' fifth annual Developer Ecosystem survey, GitLab is second only to GitHub in code review solutions. See the competitive analysis for a closer look at the user experience and feature set of competitor tools.
Integrated code review packaged with source code management:
Dedicated code review tools:
The highest-priority customer requests are for improved application performance, accuracy, and efficiency when reviewing merge request diffs of all sizes, from small to extremely large.
Investigating: Track unread merge request comments and commits
When reviewing a merge request with multiple commits, a large number of changes, or that requires many revisions, it's hard to know what requires your attention, and what you have previously reviewed. This is a factor in making code review inefficient.
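A simple model for unread tracking is a per-reviewer "last read" marker that later comments and commits are compared against. The event shape and field names below are assumptions for illustration.

```python
# Illustrative sketch: track the newest comment/commit a reviewer has seen,
# so later activity can be surfaced as unread.
def unread_activity(events: list[dict], last_read_at: str) -> list[dict]:
    """Return comments and commits newer than the reviewer's last-read marker.

    Timestamps are ISO 8601 strings, which compare correctly as plain text.
    """
    return [e for e in events if e["created_at"] > last_read_at]

events = [
    {"type": "comment", "created_at": "2021-07-01T10:00:00"},
    {"type": "commit",  "created_at": "2021-07-01T12:30:00"},
    {"type": "comment", "created_at": "2021-07-02T09:15:00"},
]
fresh = unread_activity(events, last_read_at="2021-07-01T11:00:00")
print([e["type"] for e in fresh])  # ['comment' and 'commit' after 11:00]
```

A single timestamp marker is deliberately coarse; a production design would likely track read state per discussion thread as well, so resolving one thread doesn't mark unrelated new comments as read.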