
Category Direction - Code Review


| Section | Stage | Maturity | Last Reviewed |
| --- | --- | --- | --- |
| Dev | Create | Loveable | 2021-08-04 |

Introduction and how you can help

Thanks for visiting this direction page on Code Review in GitLab. This page belongs to the Code Review group of the Create stage and is maintained by Kai Armstrong (E-Mail).

This direction is constantly evolving and everyone can contribute:


Code Review is an essential activity of software development. It ensures that contributions to a project maintain and improve code quality, and is an avenue of mentorship and feedback for engineers. It can also be one of the most time-consuming activities in the software development process.

GitLab's guiding principle for Code Review is: Reviewing code is an activity that ultimately improves the resulting product, by improving the quality of the code while optimizing for the speed at which that code is delivered.

The Code Review process begins with authors proposing changes to an existing project via a change proposal. Once they've proposed the changes they need to request feedback from peers (Developers, Designers, Product Managers, etc) and then respond to that feedback. Ultimately, a merge request needs to be approved and then merged for the Code Review process to be completed for a given changeset.

When an author submits their merge request, the first step is to find an appropriate person to review the changes. As an author, it can be hard to determine who might have subject matter expertise in an area, who has available capacity, and, ultimately, who needs to approve your Merge Request. If the wrong person is selected for a review, the quality of the review and the speed at which that review is completed are impacted.

When an author finds the right reviewer, it's important that the reviewer can easily understand the context of the changes and provide constructive, meaningful feedback. That feedback should carry clear intent and actionable comments, devoid of undocumented opinion.

Merge request reviewers also need to transparently communicate the status of the review at all points in time. This includes tracking for themselves which changes they have reviewed and where in the process they are. They also need to communicate this to the change author, so the author knows when feedback needs to be actioned. Finally, status needs to be communicated to other reviewers and interested parties so that the contribution is delivered efficiently.

After feedback has been provided through the merge request, the author must respond to that feedback and signal to reviewers that it has been actioned either via additional changes or comments. Authors also need to address feedback provided by automated testing, security scanning and quality review tools as part of their contribution.

As a final piece of the review cycle, the merge request needs to be approved. This is an important affirmative signal that the contribution meets the standards of the project and is an improvement to the codebase. In some cases, initial reviewers provide that approval, which is why selecting the correct reviewer is so important. In cases where second-level approvals are required, surfacing that information to authors and suggesting appropriate approvers helps ensure contributions don't go stale.

GitLab's vision for code review is a place where:

In GitLab, Code Review takes place in the Merge Request. GitLab should make these tasks efficient and easy, so that velocity and code quality both increase even if the merge request isn't perfect.

Metrics of success

The metrics by which we measure the success of the Code Review category are aligned with our goals for code review, specifically ease of use, love-ability, and efficiency.

Primary metric

Our primary metric is: reducing the duration of the Code Review. This is measured as the duration from the first merge request version to merged.
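As a rough sketch of how this metric can be computed: the GitLab merge requests REST API exposes `created_at` and `merged_at` timestamps on each merge request. The example below assumes `created_at` approximates the first merge request version, and uses hypothetical timestamp values for illustration.

```python
from datetime import datetime

def review_duration_hours(created_at: str, merged_at: str) -> float:
    """Hours from the first merge request version to merge.

    Both arguments are ISO 8601 timestamps, in the format returned by
    the GitLab merge requests REST API (created_at / merged_at fields).
    """
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    start = datetime.strptime(created_at, fmt)
    end = datetime.strptime(merged_at, fmt)
    return (end - start).total_seconds() / 3600

# Hypothetical merge request: opened one morning, merged the next afternoon.
print(review_duration_hours("2021-08-01T09:00:00.000+00:00",
                            "2021-08-02T15:30:00.000+00:00"))  # → 30.5
```

Aggregating this value (e.g. as a median) across all merged MRs in a period gives the trend we aim to reduce.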

Secondary metrics

Secondary metrics of success act as support for the primary metric, helping build a more complete picture of how successful the category is.

In the future we plan to conduct quarterly UX scorecards to track the user experience through various heuristics.

Right now we're focused on measuring and improving perceived performance: “how fast, responsive, and reliable a website feels to its users. The perception of how well a site is performing can have more impact on the user experience than the actual load and response times.” Perceived performance is not only technical performance (i.e. load and response times), but also user performance (i.e. efficiency in completing tasks), and can be formulated as:

perceived performance = f(expected performance, UX, actual performance)
experience = f(perceived performance, task completion)
| Aspect | Measured by | Results |
| --- | --- | --- |
| Expected performance and UX | Primarily by user feedback, and secondarily by the actual performance of competitors. | SaaS user feedback (in progress); competitor performance (Software Forge Performance Index, maintained by SourceHut); Largest Contentful Paint of SaaS vs for key pages |
| Actual performance (load and response times) | Primarily by the Largest Contentful Paint (LCP) metric, and secondarily by other important metrics. | Test instance (test samples: large MR overview and changes tabs, large MR commits tab); SaaS: gitlab-foss large MR overview tab (test sample); SaaS: gitlab-foss large MR changes tab (test sample); SaaS: gitlab-foss empty MR overview tab (test sample); SaaS: gitlab large MR overview tab (test sample); SaaS: gitlab small MR overview tab (test sample); SaaS: other project MR overview tab (test sample) |
| Task completion (task times) | Estimates of users' execution time for primary tasks through the GOMS approach. We focus on the percentage difference between GitLab and competitors, or between current and proposed designs. | July 2021 estimates |
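To illustrate how a GOMS-style task-time estimate yields a percentage difference between two designs, the sketch below uses the classic Keystroke-Level Model operator times (from Card, Moran & Newell). The operator values and the "approve an MR" operator sequences are illustrative assumptions, not GitLab's actual estimates.

```python
# Keystroke-Level Model (KLM) operator times in seconds:
# K = keystroke/click, P = point with mouse, H = home hands, M = mental prep.
KLM = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

def klm_estimate(ops: str) -> float:
    """Estimated execution time (seconds) for a sequence of KLM operators."""
    return round(sum(KLM[op] for op in ops), 2)

def pct_difference(current: str, proposed: str) -> float:
    """Percentage change in estimated task time, proposed vs. current."""
    a, b = klm_estimate(current), klm_estimate(proposed)
    return round((b - a) / a * 100, 1)

# Hypothetical task: approve an MR.
current = "MPKPK"   # think, point, click, point, click
proposed = "MPK"    # think, point, click (one fewer step)
print(pct_difference(current, proposed))  # → -32.9 (a ~33% reduction)
```

Comparing such estimates across current and proposed designs (or against competitors) gives the percentage differences reported above.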

Connection between Source Code Management and Code Review

The Source Code category of GitLab offers the features where the creative process begins. Here authors will not only consume existing project contents but will also author new content that will eventually move through the DevOps lifecycle. Additionally, many of the features in Source Code are consumed in the Code Review stage of the software development lifecycle. Consider the following examples:

Because of this close relationship, the Source Code Management group must work closely with the Code Review group in order to ensure the developer experience is cohesive and efficient. This experience is core to providing a great first impression for users.

Target Audience and Experience

Code review is used by software engineers and individual contributors of all kinds. Depending on their context, however, the workflow and experience of code review can vary significantly.

Challenges to address

There are many code review tools on the market, as well as multiple workflows. Deciding which features and workflows to build into GitLab is important so that users can migrate seamlessly. However, it is not realistic for us to support every feature and workflow out there, so we must identify the most popular, forward-looking ones and support them in GitLab.

Where we are headed

The code review process involves at least two roles (author and reviewer) but may involve many people, who work together to achieve code quality standards and mentor the author. Furthermore, reviewers are often not Developers: they may be Product Designers, Product Managers, Technical Writers, Security Engineers, and more.

In support of GitLab's vision for code review, areas of interest and improvement can be organized by the following goals:

The following improvements will help us make significant progress towards the above goals:

What's Next & Why

Feature Enhancements

Performance and Reliability Improvements

What is Not Planned Right Now

Maturity Plan

This category is currently at the Loveable maturity level (see our definitions of maturity levels).

Competitive Landscape

GitLab competes with both integrated and dedicated code review tools. Because the merge request (the code review interface), and more specifically the merge widget, is the single source of truth about a code change and a critical control point in the GitLab workflow, it is important that merge requests and code review in GitLab are excellent. Our primary source of competition and comparison is dedicated code review tools.

Prospects and new customers who previously used dedicated code review tools typically have high expectations and are accustomed to a high degree of product depth. Given that developers spend a significant portion (perhaps the majority) of their in-application time in merge requests, limitations are quickly noticed and become a source of frustration.

GitLab’s current code review experience is largely modeled after GitHub’s, with most of its pros and cons. Gerrit and Phabricator are frequently mentioned as the best alternatives to the GitHub code review model. According to JetBrains' 5th annual Developer Ecosystem survey, GitLab is second only to GitHub among Code Review solutions. See the competitive analysis for a closer look at the user experience and feature set of competitor tools.

Integrated code review packaged with source code management:

Dedicated code review tools:


Analyst Landscape

Top Customer Success/Sales issue(s)

The highest-priority customer requests are for improved application performance, accuracy, and efficiency when reviewing merge request diffs of all sizes, from small to extremely large.

Other notable requests include:

Top user issue(s)

Top dogfooding issues

Top Vision Item(s)

Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license