The Create:Source Code BE team focuses on GitLab's suite of Source Code Management (SCM) tools and is responsible for all backend aspects of the product categories that fall under the Source Code group of the Create stage of the DevOps lifecycle. Our Product direction is found on the Category Direction - Source Code Management page.
We interface with the Gitaly and Code Review teams, and work closely with the Create:Source Code Frontend team. The features we work with are listed on the Features by Group Page and technical documentation can be found on the Create: Source Code Backend technical reference page.
The following people are permanent members of the Create:Source Code BE Team:
| Person | Role |
| ------ | ---- |
| Sean Carroll | Backend Engineering Manager, Create:Source Code |
| Gavin Hinfey | Backend Engineer, Create:Source Code |
| Igor Drozdov | Staff Backend Engineer, Create:Source Code, Systems:Gitaly API |
| Jerry Seto | Senior Backend Engineer, Create:Source Code |
| Patrick Cyiza | Backend Engineer, Create:Source Code |
| Joe Woodward | Senior Backend Engineer, Create:Source Code |
| Robert May | Senior Backend Engineer, Create:Source Code |
| Vasilii Iakliushin | Staff Backend Engineer, Create:Source Code, Systems:Gitaly API |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| ------ | ---- |
| Torsten Linz | Senior Product Manager, Create:Source Code |
| Amy Qualls | Senior Technical Writer, Create (Source Code, Code Review), Enablement (Database) |
| André Luís | Frontend Engineering Manager, Create:Source Code, Create:Code Review, Delivery & Scalability |
| Costel Maxim | Senior Security Engineer, Application Security, Plan (Project Management, Product Planning, Certify), Create:Source Code, Growth, Fulfillment:Purchase, Fulfillment:Provision, Fulfillment:Utilization, Systems:Gitaly |
| Denys Mishunov | Staff Frontend Engineer, Create:Source Code |
| Darva Satcher | Director of Engineering, Create |
| Jacques Erasmus | Senior Frontend Engineer, Create:Source Code |
| Nataliia Radina | Frontend Engineer, Create:Source Code |
| Shekhar Patnaik | Principal Fullstack Engineer, Create |
| Sarah Waldner | Group Manager, Product Management, Create |
We have a metrics dashboard to help us stay on track with Development KPIs (make sure you filter by our team at the top!). This dashboard does not include security MRs from dev.gitlab.org, but does include security MRs from production.

We also track our backlog of issues, including past-due security and infradev issues, and the total of open System Usability Scale (SUS) impacting issues and bugs.

MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.

Flaky tests are problematic for many reasons.
We use the standard GitLab engineering workflow. To create an issue for the Create:Source Code BE team, add these labels:
For more urgent items, feel free to use #g_create_source_code on Slack.
Take a look at the features we support per category here.
Weekly calls between the Product Manager and Engineering Managers (frontend and backend) are listed in the "Source Code Group" calendar. Everyone is welcome to join and these calls are used to discuss any roadblocks, concerns, status updates, deliverables, or other thoughts that impact the group.
~"workflow::ready for design".
~"workflow::planning breakdown" will be applied.
~"workflow::refinement" label to signal the next step.
~"workflow::needs issue review".
Note: if an issue receives a weight > 3 after this process, it may indicate that the IC does not have a full understanding of what is needed and that further research is required.
As stated in our direction, we must place special emphasis on our convention over configuration principle. As the feature set within Create:Source Code grows, it may feel natural to solve problems with configuration. To ensure this is not the case, we must intentionally challenge MVC and new feature issues to check for this. Let's consider the following steps for best results:
Once issues have been labeled ~"workflow::needs issue review", the PM will share the proposal with either a peer or their manager, as well as engineering (EM or IC) and a product designer.
Peers in product and engineering who review the issue should look for opportunities to eliminate configuration where possible. If opportunities are identified, the issue is moved back to
If the PM and peers are satisfied with the proposal and it follows our convention over configuration principle as much as possible, those who reviewed the issue indicate their agreement (with either a 👍 or a comment in the issue). Finally, the PM or EM will label the issue ~"workflow::ready for development".
You are encouraged to work as closely as needed with stable counterparts outside of the PM. We specifically include quality engineering and application security counterparts prior to a release kickoff and as-needed during code reviews or issue concerns.
Quality engineering is included in our workflow via the Quad Planning Process.
Application Security will be involved in our workflow at the same time that kickoff emails are sent to the team, so that they are able to review the upcoming milestone work, and notate any concerns or potential risks that we should be aware of.
The weekly Triage Report is generated automatically by the GitLab bot and this report is reviewed by the EM. Here is an example of a previous report.
The Triage Report can be quite long, and it is important to deal with it efficiently. An effective way to approach it is:
The engineering cycle is centered around the GitLab Release Date of the 22nd of the month. This is the only fixed date in the month, and the table below indicates how the other dates can be determined in a given month.
These documents comprise everything that is documented during the release planning and execution.
Create Source Code BE planning takes inputs from the following sources:
Create Source Code UX planning takes inputs from the following sources:
Each month a planning issue is created by the PM, using the Source code template.
The Planning Board is created for each release by the PM, and is a curated list of issues by category. The EM asks engineers to allocate weights to all issues on this board via the Needs Weight issue.
The EM maintains a Google Sheet for calculating team capacity, and the same Spreadsheet is also used to perform the process of assigning issues to the release based on weight and priority.
The EM selects issues from the Planning Board based on:
The EM then applies the ~Deliverable label to each issue in the release and assigns them to engineers. The issues are tracked through the release via the Build Board.
Urgent issues are tentatively assigned to a release to ensure other teams have visibility.
At this point the issues are candidate issues; having a milestone does not mean they will definitely be scheduled. Issues move from candidate to confirmed during the issue selection process.
| Date | Activities |
| ---- | ---------- |
| 10th | PM creates the planning board and pings EMs in the Planning Issue for review & weighting.<br>EMs calculate capacity and add it to the Planning Issue.<br>PM submits RPIs for review. |
| 10th–14th | EMs & ICs add weights to issues on the planning board. |
| 15th | EMs add ~Deliverable labels to issues so that they appear on the Build board as a draft.<br>Release Post: EMs, PMs, and PDs contribute to MRs for Usability, Performance Improvements, and Bug Fixes. |
| 17th | Last day of the milestone.<br>EMs adjust ~Deliverable labels for slippage and make final assignments.<br>PMs review the final plan for the milestone on the Build board.<br>EMs merge RPI MRs for features that have been merged. |
Based on the issues on the Planning Board, the EM will create a Needs Weight issue to request an estimation of work by the engineers. In general no more than 4 issues should be assigned to an engineer for weighting.
If you would like to be assigned to work on this issue in the upcoming release, add a comment and ping the EM.
The weights we use are:
| Weight | Description |
| ------ | ----------- |
| 1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required.<br>Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven't found time for yet. |
| 2: Small | The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required.<br>Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. |
| 3: Medium | Features that are well understood and relatively straightforward. A solution will be outlined, and most edge cases will be considered, but some extra investigation will be required to realize the solution. Some surprises are expected, and coordination with other teams or people may be required.<br>Bugs that are relatively poorly understood and may not yet have a suggested solution. Significant investigation will definitely be required, but the expectation is that once the problem is found, a solution should be relatively straightforward.<br>Examples are regular features, potentially with a backend and frontend component, or most bugs or performance issues. |
| 4: Large | Features that are well understood, but known to be hard. A solution will be outlined, and major edge cases will be considered, but extra investigation will definitely be required to realize the solution. Many surprises are expected, and coordination with other teams or people is likely required.<br>Bugs that are very poorly understood, and will not have a suggested solution. Significant investigation will be required, and once the problem is found, a solution may not be straightforward.<br>Examples are large features with a backend and frontend component, or bugs or performance issues that have seen some initial investigation but have not yet been reproduced or otherwise "figured out". |
| 5: Unknown | An issue with weight 5 will not be scheduled; instead, it should be broken down or a spike scheduled. |
A weight of 5 generally indicates that the problem is not clear, or that the issue should instead be converted to an epic with sub-issues.
If the problem is well-defined but too large (weight 5 or greater), either:
When a spike is scheduled, the engineer researches what needs to be done. On completing the investigation, the engineer either closes the issue or develops a plan for the work, including a weight. In the latter case, a follow-up issue is created with the labels copied from the original issue, and the original issue is then closed.
Security issues are typically weighted one level higher than the table above would suggest, to account for the additional work and backports involved in the security release process.
The Source Code stable counterparts (BE, FE, PM, UX) meet and propose issues to be worked on in the upcoming release. Using the Mural visual collaboration tool, candidate issues are voted on by the group.
Capacity planning is a collaborative effort involving all Source Code team members and stable counterparts from Frontend, UX and Product. An initial list of issues is tracked in the Source Code Group Planning issue example for each month.
Approximately 5-10 business days before the start of a new release, the EM will begin determining how "available" the team will be. Some of the things that will be taken into account when determining availability are:
Availability is a percentage calculated by (work days available / work days in release) * 100.
All individual contributors start with a "weight budget" of 10, meaning they are capable (based on historical data) of completing issues worth a maximum of 10 weight points in total (i.e., 2 issues weighted at 5 each, 10 issues weighted at 1 each, etc.). Then, based on their availability percentage, weight budgets are reduced individually. For example, if you are 80% available, your weight budget becomes 8.
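The availability and weight-budget arithmetic above can be sketched as follows (an illustrative sketch only; the function names are ours, and the team's actual numbers live in the EM's capacity spreadsheet):

```python
# Sketch of the capacity calculation described above (illustrative only).

BASE_WEIGHT_BUDGET = 10  # maximum total issue weight per IC in a release


def availability(work_days_available: int, work_days_in_release: int) -> float:
    """Availability as a percentage of work days in the release."""
    return (work_days_available / work_days_in_release) * 100


def weight_budget(work_days_available: int, work_days_in_release: int) -> float:
    """Scale the base budget of 10 by the IC's availability percentage."""
    pct = availability(work_days_available, work_days_in_release)
    return BASE_WEIGHT_BUDGET * pct / 100


# An IC available for 16 of 20 work days is 80% available,
# so their weight budget drops from 10 to 8.
print(weight_budget(16, 20))  # → 8.0
```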
Product prioritizes issues based on the team's total weight budget. Our planning rotation helps assign weights to the issues product intends to prioritize, to gauge the amount of work prioritized versus the amount we can handle prior to kickoff.
The Source Code issue pipeline is broad; the PM and EM work together throughout the planning process, and the final list is chosen during the Issue Selection meeting. The issue pipeline includes:
On or around the 16th, the PM and EM meet once more to finalize the list of issues in the release. The issue board for that release is then updated, and any issues with a candidate milestone that are not selected will be moved to the Backlog or added as candidates for a future release.
Issues scheduled for the release are then marked ~"workflow::ready for development".
Once availability has been determined, weights have been assigned, and the PM/EM finalize a list of prioritized issues for the upcoming release, kickoff emails will be sent. The intent of this email is to notify you of the work we intend to assign for the upcoming release. This email will be sent before the release begins. The kickoff email will include:
You will begin to collect follow-up issues when you've worked on something in a release but have tasks leftover, such as technical debt, feature flag rollouts or removals, or non-blocking work for the issue. For these, you can address them in at least 2 ways:
You should generally take on follow-up work that is part of our definition of done, preferably in the same milestone as the original work, or the one immediately following. If this represents a substantial amount of work, bring it to your manager's attention, as it may affect scheduling decisions.
If there are many follow-up issues, consider creating an epic.
Many issues require work on both the backend and frontend, but the weight of that work may not be the same. Since an issue can only have a single weight set on it, we use scoped labels instead when this is the case:
The easiest way for engineering managers, product managers, and other stakeholders to get a high-level overview of the status of all issues in the current milestone, or all issues assigned to a specific person, is the Development issue board, which has columns for each of the workflow labels described on the Engineering Workflow handbook page under Updating Issues Throughout Development.
As owners of the issues assigned to them, engineers are expected to keep the workflow labels on their issues up to date, either by manually assigning the new label, or by dragging the issue from one column on the board to the next.
We have 1 regularly scheduled "Per Milestone" retrospective, and can have ad-hoc "Per Project" retrospectives.
The Create:Source Code group conducts monthly retrospectives in GitLab issues. These include the backend team, plus any people from frontend, UX, and PM who have worked with that team during the release being retrospected.
These are confidential during the initial discussion, then made public in time for each month's GitLab retrospective. For more information, see team retrospectives.
If a particular issue, feature, or other sort of project turns into a particularly useful learning experience, we may hold a synchronous or asynchronous retrospective to learn from it. If you feel like something you're working on deserves a retrospective:
All feedback from the retrospective should ultimately end up in the issue for reference purposes.
The groups in the Create stage organize regular Deep Dive sessions to share our domain specific knowledge with others in the stage, the organization, and the wider community. All existing Deep Dives can be found on GitLab Unfiltered with slides and recordings in the descriptions. For Create specific Deep Dives, there is a different playlist. To find more information on upcoming sessions, or to propose a new topic, please see the epic.
Career development conversations in the Create:Source Code BE team are centered around a Career Development Sheet that is based on the Engineering Career Matrix for Individual Contributors. The sheet lists the expected current-level behaviors on the left, the next-level behaviors on the right, and uses colored columns in between to visually represent the extent to which the individual has demonstrated growth from the current level to the next. Columns to the right of a next-level behavior are used to collect specific examples of that behavior, which serve as evidence of the individual's growth.
Both the matrix and the sheet are Works in Progress; the development of the career matrix is tracked in an epic, and as the matrix evolves, the sheet will be updated to match.
The Create:Source Code BE team is responsible for keeping some API endpoints and controller actions performant (e.g. below our target speed index).
Here are some Kibana visualizations that give a quick overview on how they perform:
These tables are filtered by the endpoints and controller actions that the group handles and sorted by P90 (slowest first) for the last 7 days by default.