GitLab Engineering values clear, concise, transparent, asynchronous, and frequent communication. Here are our most important modes of communication:
As part of a fully distributed organization such as GitLab, it is important to stay informed about engineering-led initiatives. We employ multimodal communication, which describes the minimum set of communication channels we'll broadcast to.
For the Engineering department, any important initiative will be announced in:
If you frequently check any of these channels, you can consider yourself informed. It is up to the person sharing to ensure that the same message is shared across all channels. Ideally, this message should be a one sentence summary with a link to an issue to allow for a single source of truth for any feedback.
Please see the Product Management section, which governs how Product prioritizes work and should also guide our technical decision making.
| Priority | Item |
|----------|------|
| 4 | Fixing regressions (things that worked before) |
| 5 | Promised to Customers |
| 8 | Identified for Dogfooding |
| 9 | Velocity of new features, user experience improvements, technical debt, community contributions, and all other improvements |
| 10 | Behaviors that yield higher predictability (because this inevitably slows us down) |
Despite the high priority of velocity to our project and our company, there is one set of things we must prioritize over it: GitLab availability & security. Neither we nor our customers can run an enterprise-grade service if we are willing to risk users' productivity and data.
Our hundreds of Engineers collectively make thousands of independent decisions each day that can impact GitLab.com and our users and customers there. They all need to keep availability and security in mind as we endeavor to be the most productive engineering organization in the world. We can only move as fast as GitLab.com is available and secure. Availability of self-managed GitLab instances is also extremely important to our success, and this needs to happen in partnership with our customers' admins (whereas we are the admins for GitLab.com).
For security, we go further: strict SLAs apply to priority labels on security issues. This reflects a security-first mindset, as these issues take precedence within a given timeframe.
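As a concrete illustration (not the canonical tooling), the sketch below pulls open security issues by priority label through the GitLab REST API and flags any older than an assumed SLA. The label names, SLA durations, and project ID are placeholders rather than the official values:

```python
# Hypothetical sketch: flag open security issues that have exceeded an
# assumed SLA for their priority label. SLA values here are invented.
from datetime import datetime, timezone

import requests

GITLAB_API = "https://gitlab.com/api/v4"
ASSUMED_SLA_DAYS = {"priority::1": 30, "priority::2": 60}  # placeholder SLAs


def overdue_security_issues(project_id, token):
    overdue = []
    for priority, sla_days in ASSUMED_SLA_DAYS.items():
        resp = requests.get(
            f"{GITLAB_API}/projects/{project_id}/issues",
            headers={"PRIVATE-TOKEN": token},
            params={"labels": f"security,{priority}", "state": "opened"},
        )
        resp.raise_for_status()
        for issue in resp.json():
            created = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
            age = (datetime.now(timezone.utc) - created).days
            if age > sla_days:
                overdue.append((issue["iid"], priority, age))
    return overdue
```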
Our velocity should be incremental in nature. It's derived from our MVC, which encourages "delivering the smallest possible solution that offers value to our users". This could be a small new feature, but also includes code improvements, fixing bugs, etc.
To measure this, we count and define the target here: MRs per Development Engineer, which is a goal for managers and not ICs. Historically, we have seen this as high as 14-19 MRs per Product Development Engineer per month.
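For illustration only, here is a rough sketch of how such a count could be pulled from the GitLab REST API. The project ID, token, and date window are placeholders, and bounding the query by `updated_after`/`updated_before` is an approximation of "merged that month":

```python
# Illustrative sketch: count merged MRs per author for one month.
from collections import Counter

import requests

GITLAB_API = "https://gitlab.com/api/v4"


def merged_mrs_per_author(project_id, token):
    counts = Counter()
    page = 1
    while True:
        resp = requests.get(
            f"{GITLAB_API}/projects/{project_id}/merge_requests",
            headers={"PRIVATE-TOKEN": token},
            params={
                "state": "merged",
                "updated_after": "2020-01-01T00:00:00Z",
                "updated_before": "2020-02-01T00:00:00Z",
                "per_page": 100,
                "page": page,
            },
        )
        resp.raise_for_status()
        mrs = resp.json()
        if not mrs:
            break
        for mr in mrs:
            counts[mr["author"]["username"]] += 1
        page += 1
    return counts
```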
Ten MRs per month per Product Development Engineer translates to roughly an MR every one and a half business days, with time for overhead (a typical month has about 21 business days). To attain this, Product Development Engineers are encouraged to:
We optimize for shipping a high volume of user/customer value with each release. We do want to ship multiple major features in every monthly release of GitLab. However, we do not strive for predictability over velocity. As such, we eschew heavyweight processes like detailed story point estimation by the whole team in favor of lightweight measurements of throughput like the number of merge requests that were included or rough estimates by single team members.
There is variance in how much time an issue will take versus what you estimated. This variance causes unpredictability. If you want close to 100% predictability you have to take two measures:
Both measures reduce the overall velocity of shipping features. The way to prevent this is to accept that we don't want perfect predictability. Just like with our OKRs, which are so ambitious that we expect to reach about 70% of the goal, this is also fine for shipping planned features.
Note: This does not mean we place zero value on predictability. We just optimize for velocity first.
When changing an outdated part of our code (e.g. HAML views, jQuery modules), use discretion on whether to refactor or not. For long term maintainability, we are very interested in migrating old code to the consistent and preferred approach (e.g. Vue, GraphQL), but we're also interested in continuously shipping features that our users will love.
Aim to implement new modules or features with the preferred approach, but changing preexisting non-conforming parts is a gray area.
If the weight of refactoring and other constraints (such as time) risk threatening the availability of a feature, then strongly consider refactoring at another time. On the other hand, if the code in question has hurt availability or poses a threat to it, then strongly consider prioritizing refactoring. This is a balancing act, and if you're not sure where your change should go (or whether you should do some refactoring beforehand), reach out to another Engineer or Maintainer.
If it makes sense to refactor before implementing a new feature or a change, then please:
If it is decided not to refactor at this moment, then please:
Team members are welcome to run Folding@home on their company provided computers. Folding@home is a distributed computing network that is searching for therapies for the COVID-19 respiratory illness among other diseases. We recommend running it at night if you have high daily compute workloads. Also keep your computer plugged in. We considered potential security and hardware implications in this issue.
If you would like to join a team with other GitLab team members, there is a GitLab Team Members team for Folding@home. When setting up or changing your Folding@home identity, you can add team 245256. This is not a competition, but simply a way to track how much our team members have contributed overall. You can view our statistics on our team page, and discuss with other GitLab team members in the #folding-at-home Slack channel.
Calendar year 2020 will be a time of slower growth for GitLab Engineering compared to past years. We grew 100% in 2018 and 130% in 2019. We'll grow roughly 20% this year. But this is still fast compared to other companies, which we're grateful for. We can use the expertise and bandwidth we've built in past years to raise our bar even higher. We rely primarily on the judgment of our hiring managers to do this. But we also try to systematize as much as possible so our hiring practices are fair, transparent, and repeatable.
We do not run a single-veto hiring process because this impedes our ability to uplevel our teams. High-performers are more likely to have been the product of a controversial hiring process because they challenge the status quo. But that does not mean every controversial hiring process yields a high performer. An important part of a hiring manager's performance is making these determinations.
The VP of Engineering and their direct reports track our highest priorities in the Engineering Management Issue Board, rather than to do lists, Google Doc action items, or other places. The reasons for this are:
Here are the mechanics of making this work:
Apply the `Engineering Management` label to get it on the board, and the department label to get it in progress (e.g.
If an issue has the `CEO Interest` label, please post it to #ceo
Here is the standard, company-wide process for OKRs. Engineering has some small deviations from (and extensions to) this process.
This process should begin no later than two weeks before the end of the preceding quarter. And kickoff should happen on or before the first day of the new quarter.
FY20-Q2 Organization Type OKR: Objective phrase => 0%
FY20-Q3 Engineering Product OKR: Build our product vision => 0%
* Raise first reply-time SLA for premium from 92% to 95% => 0%
* Department: [Objective phrase](https://placeholder.com/) => 0%
e.g.
* Support: Raise first reply-time SLA for premium from 92% to 95% => 0%
This process should begin on the first day of the subsequent quarter, and complete no later than two weeks after.
Each owner should update their OKR with a final score (e.g. => 70%) and assign it to their direct manager for review.
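Purely as an illustration of the `Objective phrase => NN%` scoring convention above, here is a throwaway parser that averages the scores in a list of OKR lines; the input lines are invented examples:

```python
# Illustrative only: parse "... => NN%" OKR lines and average the scores.
import re

SCORE_RE = re.compile(r"=>\s*(\d+)%\s*$")


def average_score(lines):
    scores = [int(m.group(1)) for m in map(SCORE_RE.search, lines) if m]
    return sum(scores) / len(scores) if scores else 0.0


print(average_score([
    "Support: Raise first reply-time SLA for premium from 92% to 95% => 70%",
    "Department: Objective phrase => 50%",
]))  # 60.0
```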
In GitLab Engineering we are serious about concepts like servant leadership, over-communication, and furthering our company value of transparency. You may have joined GitLab from another organization that did not share the same values or techniques. Perhaps you're accustomed to more corporate politics? You may need to go through a period of "unlearning" to be able to take advantage of our results-focused, people-friendly environment. It takes time to develop trust in a new culture.
Less common, but even more important, is to make certain you don't unintentionally bring any maladaptive behaviors to GitLab from these other environments.
We encourage you to read the engineering section of the handbook as part of your onboarding, ask questions of your peers and managers, and reflect on how you can help us better live our culture:
We dogfood everything. Based on our product principles, it is the Engineering division's responsibility to dogfood features or do the required discovery work to provide feedback to Product. It is Product's responsibility to prioritize improvements or rebuild functionality in GitLab.
An easy antipattern to fall into is to resolve your problem outside of what the product offers. Dogfooding is not:
Follow the dogfooding process described in the Product Handbook when considering building a tool outside of GitLab.
GitLab consists of many subprojects. A curated list of GitLab Repositories can be found at the GitLab Engineering Projects page.
When adding a repository please follow these steps:
Add a `CONTRIBUTING.md` in the repository. It is easiest to simply copy-paste the DCO + License section verbatim from an existing project's `CONTRIBUTING.md`.
When changing the settings in an existing repository, it's important to keep communication in mind. In addition to discussing the change in an issue and announcing it in relevant chat channels (e.g., `#development`), consider announcing the change during the Company Call. This is particularly important for changes to the GitLab repository.
When creating a new project that may stay small, or could eventually become an open-source project that we maintain, add it first to the Sandbox namespace at `gitlab-org/sandbox` following the same steps above. This will ensure that if we ever need to promote a project or share it with a wider audience, it is already in a GitLab namespace.
GitLab consists of many different types of applications and resources.
When you require escalated permissions or privileges on a resource to carry out a task, or need support creating resources with specific endpoints, please submit an issue to the Access Requests Issue Tracker using the template provided.
Below is a short list of supported technologies:
Before the beginning of each fiscal year, and at various check points throughout the year, we plan the size and shape of the Engineering and Product Management functions together to maintain symmetry.
The process should take place in a single artifact (usually a spreadsheet; see the current spreadsheet), and follow these steps:
Note: Support is part of the engineering function but is budgeted as 'cost of sales' instead of research and development. Headcount planning is done separately according to a different model.
The non support related departments within Engineering (Development, Infrastructure, Quality, Security, and UX) have an expense target of 20% as a percentage of revenue.
The Support target is 10% as a percentage of revenue.
A dedicated team needs certain skills and a minimum size to be successful. But that doesn't block us from taking on new work. This is how we iterate our team size and structure as a feature set grows:
New teams may benefit from holding a Fast Boot event to help jump-start the team. During a Fast Boot, the entire team gets together in a physical location to bond and work alongside each other.
All levels of leadership at GitLab could benefit from external mentorship and coaching programs. To validate this hypothesis we are working on a small pilot program for 6 months with PlatoHQ and 7CTOs.
The pilot for PlatoHQ has 5 Engineering Managers participating. The program consists of both self-directed learning via an online portal and 1-1 sessions with a mentor. During the program, participants work on a project together. The goals for the pilot are:
The pilot with 7CTOs is run with 3 senior leaders in Engineering. The program consists of peer mentoring sessions (forums) and effective network building. The goals of the pilot are:
The pilot programs' progress will be evaluated on February 28, 2020, and the final evaluation will take place on May 1, 2020. After the evaluation, there will be a decision on whether to roll this out to all of Engineering.
To maintain our rapid cadence of shipping a new release on the 22nd of every month, we must keep the barrier low to getting things done. Since our team is distributed around the world and therefore working at different times, we need to work in parallel and asynchronously as much as possible.
That also means that if you are implementing a new feature, you should feel empowered to work on the entire stack if it is most efficient for you to do so.
Nevertheless, there are features whose implementation requires knowledge that is outside the expertise of the developer or even the group/stage group. For these situations, we'll require the help of an expert in the feature's domain.
To figure out how best to arrange this help, it is necessary to first evaluate the amount of work the feature will require from the expert.
If the feature only requires the expert's help at an early stage, for example designing and architecting the future solution, the approach will be slightly different. In this case, we require the help of at least two experts in order to reach a consensus about the solution. In addition, they should be kept informed about the development status until the final solution is finished. This way, any discrepancy or architectural issue related to the current solution will be raised early.
We need to maintain code quality and standards. It's very important that you are familiar with the Development Guides in general, and with the ones that relate to your group in particular:
Please remember that the only way to make code flexible is to make it as simple as possible:
A lot of programmers make the mistake of thinking the way you make code flexible is by predicting as many future uses as possible, but this paradoxically leads to *less* flexible code. — Nearby Cats (@BaseCase) January 16, 2019
The only way to achieve flexibility is to make things as simple and easy to change as you can.
It is important to remember that quality is everyone's responsibility. Everything you merge to master should be production ready. Familiarize yourself with the definition of done.
Our releases page describes our two main release channels:
As the first of these is a monthly release, it's tempting to rush to get something into a monthly self-managed release. However, this is an anti-pattern. Most issues don't have strict deadlines; those that do are exceptions, and should be treated as such.
Deadline pressure logically leads to a few outcomes:
Only the last two outcomes are acceptable as a general rule. Missing a 'deadline' in the form of an assigned milestone is often OK as we put velocity above predictability, and missing the monthly self-managed release does not prevent code from reaching GitLab.com.
For these reasons, and others, we intentionally do not define a specific date for code to be merged in order to reach a self-managed monthly release. The earlier it is merged, the better. This also means that:
If it is essential that a merge request make it in a particular release, this must be communicated well in advance to the engineer and any reviewers, to ensure they're able to make that commitment. If a severe bug needs to be fixed with short notice, it is better to revert the change that introduced it than to rush, or even to delay the release until the fix is ready.
In general, there is no need to change any behavior close to the self-managed release.
In most cases, a single engineer and a maintainer review are adequate to handle a P1/S1 issue. However, some issues are highly difficult or complicated. Engineers should treat these issues with a high sense of urgency. For a complicated P1/S1 issue, multiple engineers should be assigned based on the level of complexity, and the issue description should list each team member and their responsibilities.
If we have cases where three or five or X people are needed, Engineering Managers should feel the freedom to execute on a plan quickly.
Following this procedure will:
Each backend and frontend development team is responsible for not exceeding an allocated budget of 15 points each quarter. The severity of issues caused will impact their budget accordingly:
The Infrastructure team will perform attribution as part of the root cause analysis process and record the results in the OKRs page.
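The actual deduction table is the source of truth; purely as a hypothetical sketch of the bookkeeping, with invented severity-to-point values:

```python
# Hypothetical sketch of quarterly error-budget bookkeeping. The point
# values per severity are invented for illustration only.
SEVERITY_POINTS = {"S1": 8, "S2": 4, "S3": 2, "S4": 1}  # assumption
QUARTERLY_BUDGET = 15


def remaining_budget(incident_severities):
    spent = sum(SEVERITY_POINTS[s] for s in incident_severities)
    return QUARTERLY_BUDGET - spent


print(remaining_budget(["S2", "S3"]))  # 9 points left this quarter
```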
Engineering is the primary advocate for the performance, availability, and security of the GitLab project. Product Management prioritizes all initiatives, so everyone in the engineering function should participate in the Product Management prioritization process to ensure that our project stays ahead in these areas. The following list should provide some guidelines around the initiatives that each engineering team should advocate for during their release planning:
`support-fix` label. You can filter on open MRs here.
Part of our engineering culture is to keep shipping so users and customers see significant new value added to GitLab.com or their self-managed instance. To support rapid development, we focus on Rails page views by default. When an area of the application sees significant usage, we typically rewrite those screens as a VueJS single page app backed by our API, in order to maintain the best qualitative experience and quantitative performance.
When adding new functionality, we should use GraphQL where possible on the backend and the frontend. We have a long-term goal to use GraphQL everywhere because it lets us increase development speed, reduces dependencies between frontend and backend engineers, and gives us a single source of truth for application data.
Defaulting to GraphQL for new work means that the distance from that goal doesn't increase over time.
This does not override the importance of velocity: if something is significantly more work to ship using GraphQL, rather than extending an existing implementation (in a Rails controller or the REST API), we should not block ourselves on using GraphQL. Instead, we should ship the feature and create a follow-up issue to move that resource to GraphQL in future. That follow-up issue can be scheduled by the relevant Product Manager, in consultation with Engineering Managers, as with any other engineering proposed initiative.
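To make the default concrete, here is a minimal sketch of querying GitLab's GraphQL endpoint from a script; it reads public data, so no token is needed for this particular query:

```python
# Minimal sketch: one GraphQL endpoint serves both frontend and scripts.
import requests

query = """
query {
  project(fullPath: "gitlab-org/gitlab") {
    name
    description
  }
}
"""

resp = requests.post("https://gitlab.com/api/graphql", json={"query": query})
resp.raise_for_status()
print(resp.json()["data"]["project"]["name"])
```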
Moved to a dedicated page.
GitLab makes use of a 'Canary' stage. Production Canary is a series of servers running GitLab code in a production environment. The Canary stage contains functional code elements such as the web, container registry, and Git servers, while sharing data elements such as Sidekiq, the database, and file storage with production. This allows UX code and most application logic to be exercised by a smaller subset of users under real-world scenarios before being made available to all users on GitLab.com.
The production Canary stage is forcibly enabled for all users visiting GitLab Inc. operated groups:
The Infrastructure department teams can globally disable use of production Canary when necessary. Individuals can also opt out of using production Canary environments; however, opting out does not apply to the groups listed above.
To opt in/out, go to GitLab Version and move the toggle appropriately.
To verify that Canary is enabled, check the header: next to the GitLab logo there will be a 'Next' icon. Alternatively, open the performance bar (type `pb`) in GitLab and look for the Canary icon next to the web server name.
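For scripted requests, a sketch assuming the opt-in toggle works by setting a `gitlab_canary` cookie that request routing honors (the UI checks above remain the authoritative way to verify):

```python
# Sketch, assuming a `gitlab_canary` cookie controls Canary routing.
import requests

resp = requests.get(
    "https://gitlab.com/",
    cookies={"gitlab_canary": "true"},  # assumption: opt-in cookie name
)
print(resp.status_code)
```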
When using any of the resources listed below, some rules apply:
Every team member has access to a common project on Google Cloud Platform. Please see the secure note with the name "Google Cloud Platform" in the shared vault in 1Password for the credentials or further details on how to gain access.
Once in the console, you can spin up VM instances, Kubernetes clusters, etc. Where possible, please prefix the resource name with your name for easy identification (e.g.
Please remove any resources that you are not using, since the company is billed monthly. If you are unable to create a resource due to quota limits, file an issue on the Infrastructure issue tracker.
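Because resources in the shared project are billed monthly, here is a quick sketch using the `google-cloud-compute` client to list your own VMs by the name-prefix convention; the project, zone, and prefix below are placeholders:

```python
# Sketch: find VMs that follow the name-prefix convention so you can
# delete the ones you no longer need. All identifiers are placeholders.
from google.cloud import compute_v1


def my_instances(project, zone, prefix):
    client = compute_v1.InstancesClient()
    return [
        inst.name
        for inst in client.list(project=project, zone=zone)
        if inst.name.startswith(prefix)
    ]


print(my_instances("shared-dev-project", "us-central1-a", "jane-"))
```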
If you encounter the following error when creating a new GKE cluster, this indicates that we cannot create more clusters within that network. Please ask in #kubernetes for team members to delete unused clusters, or alternatively create your cluster in a different network.
```
The network "default" does not have available private IP space in 10.0.0.0/8
```
Every team member has access to the dev-resources project which allows everyone to create and delete machines on demand.
In general, most team members do not have access to AWS accounts. In case you need an AWS resource, file an issue on the Infrastructure issue tracker. Please supply the details on what type of access you need.
There are primarily two Slack channels in which developers may be called upon to assist the production team when something appears to be amiss with GitLab.com:
`#backend`: For backend-related issues (e.g. error 500s, high database load, etc.)
Treat questions or requests from the production team as urgent and give them high priority.