The Integrations group is part of the Manage stage. We support the product with third-party integrations, REST API and GraphQL foundational code, and webhooks.
This page covers processes and information specific to the Integrations group. See also the Integrations direction page and the features we support per category.
To get in touch with the Integrations group, it's best to create an issue in the relevant project (typically GitLab) and add the ~"group::integrations" label, along with any other appropriate labels. Then, feel free to ping the relevant Product Manager and/or Engineering Manager. For more urgent items, use the Slack channel (internal): #g_manage_integrations.
Person | Role |
---|---|
Bojan Marjanović | Senior Backend Engineer, Manage:Integrations |
Justin Ho | Senior Frontend Engineer, Manage:Integrations |
Luke Duncalfe | Senior Backend Engineer, Manage:Integrations |
Martin Wortschack | Engineering Manager, Manage:Import & Manage:Integrations |
Each quarter we have a series of Objectives and Key Results (OKRs) for our group. To find the current OKRs for this quarter, check the OKR project.
You can find our group metrics in the Manage:Integrations Sisense Dashboard and Integrations Group Engineering Metrics handbook page.
We also track our backlog of issues, including past-due security and infradev issues, and total open System Usability Scale (SUS) impacting issues and bugs.
MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
The Product Manager compiles the list of Deliverable and Stretch issues following the product prioritization process, with input from the team, Engineering Managers, and other stakeholders. The iteration cycle lasts from the 18th of one month until the 17th of the next, and is identified by the GitLab version set to be released on the 22nd.
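As a quick illustration of the cycle boundaries (the dates below are purely illustrative), the mapping from a calendar date to the month whose release carries that cycle's work can be sketched in Ruby:

```ruby
require 'date'

# Returns the month in which a given date's iteration cycle *ends*.
# A cycle runs from the 18th of one month to the 17th of the next,
# so any date on or after the 18th belongs to the next month's cycle.
def cycle_end_month(date)
  date.day >= 18 ? date.next_month : date
end

# Example: 2024-03-20 falls in the cycle that ends on 2024-04-17.
end_month = cycle_end_month(Date.new(2024, 3, 20))

# The release carrying this cycle's work ships on the 22nd of that month.
release_day = Date.new(end_month.year, end_month.month, 22)
```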
Engineers are encouraged to work as closely as needed with stable counterparts including our Product Manager. We should be sure to include documentation, UX, quality engineering, and application security in the planning process.
Quality engineering is included in our workflow via the Quad Planning Process.
Before starting a milestone, the team coordinates using planning issues. We follow this process:
We use Google Apps Script to estimate capacity. For more accurate capacity planning, engineers have to report their time off using PTO by Deel before the beginning of the milestone.
Issues are usually not directly assigned to people, except in cases where there is clearly a most appropriate person to work on them, like in the case of technical debt, bugs related to work someone did recently, or issues someone started on before but hasn't had a chance to finish yet.
Deliverables are considered top priority and are expected to be done by the end of the iteration cycle on the 17th, in time for the release on the 22nd.
The primary source for things to work on is the Integrations issue board for the current iteration cycle, which lists all of the Deliverable and Stretch issues scheduled for this cycle in priority order. They should be picked up starting at the top. When you assign yourself to an issue, you indicate that you are working on it.
If anything is blocking you from getting started with the top issue immediately, like unanswered questions or unclear requirements, you can skip it and consider a lower priority issue, as long as you put your findings and questions in the issue, so that the next engineer who comes around may find it in a better state.
Generally, you work about 75% of the time on Deliverables. The other 25% is set aside for other responsibilities (code review, community merge request coaching, helping people out in Slack, participating in discussions in issues, etc), as well as urgent issues that come up during the month and need someone working on them immediately (regressions, security issues, customer issues, etc).
Many things can happen during a month that result in a Deliverable not being completed by the end of a cycle. This usually indicates that the team was too optimistic in estimating the issue's weight, or that an engineer's other responsibilities took more time than expected, but it should never come as a surprise to the Product Manager or the Engineering Manager.
The sooner this potential outcome is anticipated and communicated, the more time there is to see if anything can be done to prevent it, like reducing the scope of the Deliverable, or finding a different engineer who may have more time to finish a Deliverable that hasn't been started yet. If this outcome cannot be averted and the Deliverable ends up missing the cycle, it will simply be moved to the next cycle to be finished up, and the engineer and Engineering Manager will have a chance to retrospect and learn from what happened.
If there are no more unassigned Deliverables, first offer to assist other engineers with their Deliverables. Next, you can spend the remaining time working on Stretch issues.
These lower-priority issues are not expected to be done by the end of the iteration cycle, but are expected to become Deliverables in the next cycle, so any progress made on them ahead of time is a bonus.
Instead of picking up Stretch issues, you may also choose to spend any spare time working on anything else that you believe will have a significant positive impact on the product or the company in general. As the general guidelines state, "we recognize that inspiration is perishable, so if you’re enthusiastic about something that generates great results in relatively little time feel free to work on that."
We expect people to be managers of one and prefer responsibility over rigidity, so there's no need to ask for permission if you decide to work on something that's not on the issue board, but please keep your other responsibilities in mind, and make sure that there is an issue, you are assigned to it, and consider sharing it in #g_manage_integrations.
In general, we use the standard GitLab engineering workflow.
The easiest way for Engineering Managers, Product Managers, and other stakeholders to get a high-level overview of the status of all issues in the current milestone, or all issues assigned to a specific person, is through the Development issue board, which has columns for each of the workflow labels.
As owners of the issues assigned to them, engineers are expected to keep the workflow labels on their issues up to date, either by manually assigning the new label, or by dragging the issue from one column on the board to the next.
Once an engineer starts working on an issue, they mark it with the ~"workflow::in dev" label as the starting point and continue updating the issue throughout development.
The process primarily follows this guideline: if an issue has kept the same workflow label for a week, the assignee leaves a comment explaining the status of the issue. At minimum, we should write one comment for every week the issue is not moving.
The work for the Integrations group can be tracked on the following issue boards:
Workflow Boards track the workflow progress of issues.
Team Member Boards track ~"group::integrations" labeled issues by the assigned team member.
Deliverable / Stretch Board tracks ~Deliverable and ~Stretch issues.
We use a lightweight system of issue weighting to help with capacity planning. These weights help us ensure that the amount of scheduled work in a cycle is reasonable, both for the team as a whole and for each individual. The "weight budget" for a given cycle is determined based on the team's recent output, as well as the upcoming availability of each engineer.
Since things take longer than you think, it's OK if an issue takes longer than the weight indicates. The weights are intended to be used in aggregate, and what takes one person a day might take another person a week, depending on their level of background knowledge about the issue. That's explicitly OK and expected. We should strive to be accurate, but understand that they are estimates! Change the weight if it is not accurate or if the issue becomes harder than originally expected. Leave a comment indicating why the weight was changed and tag your EM so that we can better understand weighting and continue to improve.
The weights we use are:
Weight | Description |
---|---|
1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven't found time for yet. |
2: Small | The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required. Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. |
3: Medium | Features that are well understood and relatively straightforward. A solution will be outlined, and most edge cases will be considered, but some extra investigation will be required to realize the solution. Some surprises are expected, and coordination with other teams or people may be required. Bugs that are relatively poorly understood and may not yet have a suggested solution. Significant investigation will definitely be required, but the expectation is that once the problem is found, a solution should be relatively straightforward. Examples are regular features, potentially with a backend and frontend component, or most bugs or performance issues. |
5: Large | Features that are well understood, but known to be hard. A solution will be outlined, and major edge cases will be considered, but extra investigation will definitely be required to realize the solution. Many surprises are expected, and coordination with other teams or people is likely required. Bugs that are very poorly understood, and will not have a suggested solution. Significant investigation will be required, and once the problem is found, a solution may not be straightforward. Examples are large features with a backend and frontend component, or bugs or performance issues that have seen some initial investigation but have not yet been reproduced or otherwise "figured out". |
Anything larger than 5 should be broken down if possible.
Security issues are typically weighted one level higher than they would normally appear from the table above. This is to account for the extra rigor of the security release process. In particular, the fix usually needs more-careful consideration, and must also be backported across several releases.
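The weighting scheme above can be sketched as a simple capacity check. The budget and scheduled weights below are hypothetical numbers for illustration, not real team data:

```ruby
# Hypothetical numbers: a cycle's weight budget derived from recent
# throughput, and the weights of the issues scheduled as Deliverables.
weight_budget = 20
scheduled_weights = [1, 2, 3, 3, 5, 2]

# Security issues are weighted one level up the 1-2-3-5 scale to
# account for the extra rigor of the security release process.
SCALE = [1, 2, 3, 5].freeze

def security_weight(weight)
  idx = SCALE.index(weight)
  idx && idx < SCALE.size - 1 ? SCALE[idx + 1] : weight
end

# Weights are used in aggregate: compare the total against the budget.
total = scheduled_weights.sum
overbooked = total > weight_budget
```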
Every week the engineering team completes a backlog refinement process to review upcoming issues. The goal of this effort is for all issues to have a weight so we can more accurately plan each milestone using the estimated capacity for the team and the estimated issue weights.
In addition to this backlog refinement process, engineers on the team can add weights to any issues that are straightforward and do not need backlog refinement.
This process happens in three steps.
The engineering manager will identify issues that need to be refined. On average we will try to refine 3-6 issues per week. If there are issues that are good candidates for the backlog refinement process, please let the engineering manager know in the issue.
When picking issues to refine, we try to have themed refinements to reduce the context switching while the issues are being investigated. Here are some places to look:
Once identified, the engineering manager will apply the ~"ready for next refinement" label, which indicates the issues are ready for refinement.
The engineering manager will use the Refinement Bot to generate an issue with all the issues that have been identified for refinement.
Over the week, each engineer on the team will look at the list of issues selected for backlog refinement (see the current backlog refinement issues). For each issue, team members will provide the following information:
Some considerations:
After engineers have had a chance to provide input, the engineering manager will then:
Remove the ~"ready for next refinement" label.
For any issues that were not discussed and given a weight, work with the engineers to see if we need to get more information from PM or UX.
We have 1 regularly scheduled "Per Milestone" retrospective, and can have ad-hoc "Per Project" retrospectives.
The Integrations group conducts milestone retrospectives in GitLab issues. These include the engineers, UX, PM, and all stable counterparts who have worked with that team during the milestone.
Participation by the Integrations team members is highly encouraged for every milestone.
These are confidential during the initial discussion, then made public in time for each month's GitLab retrospective. For more information, see group retrospectives.
If a particular issue, feature, or other sort of project turns into a particularly useful learning experience, we may hold a synchronous or asynchronous retrospective to learn from it. If you feel like something you're working on deserves a retrospective:
All feedback from the retrospective should ultimately end up in the issue for reference purposes.
When areas of the Integrations codebase are changed, the reviewer roulette will recommend that the merge request is reviewed by an Integrations team member. This will only happen when the merge request is authored by people outside of the Integrations team. See this example of how the review recommendation looks.
The reasoning behind these special recommendations is that other groups have some ownership of certain integrations or webhooks. Reviewing changes made by non-team members allows us to act as owners of foundational code and maintain a better quality of the Integrations codebase.
File paths of changes in a merge request are matched against a list of regular expressions. The roulette uses these hash values to recommend reviewer groups. For example, `:integrations_be` and `:integrations_fe` will recommend Integrations backend and Integrations frontend reviewers respectively. Because the regex matching is first-match-wins rather than cumulative, any other relevant reviewer groups, such as `:backend` or `:frontend`, must also be included in each hash value.
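A minimal, illustrative sketch of such a hash (the paths and entries here are invented for illustration; the real list lives in `tooling/danger/project_helper.rb`):

```ruby
# Illustrative sketch only. Keys are regexes matched against changed
# file paths; values are the reviewer groups to recommend.
CATEGORIES = {
  %r{\Aapp/models/integrations/} => [:integrations_be, :backend],
  %r{\Aapp/assets/javascripts/integrations/} => [:integrations_fe, :frontend],
  %r{\Aapp/models/} => [:backend]
}.freeze

# First match wins, so a path must pick up :backend from the same
# entry that grants :integrations_be -- it never falls through to
# a later, more general entry.
def reviewer_groups(path)
  CATEGORIES.find { |regex, _groups| path.match?(regex) }&.last || []
end
```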
The regex list should be updated to match Integrations code whenever needed. The list matches our commonly namespaced files, so new code in existing namespaces will always match.
To see which files in the GitLab repository produce a match, paste the following in a Rails console:
```ruby
require Rails.root.join('tooling/danger/project_helper.rb')

ALL_FILES = Dir.glob('**/*')

# Build a union of all regexes that map to the given category.
def category_regexes(category)
  matching = Tooling::Danger::ProjectHelper::CATEGORIES.select do |regex, categories|
    next if regex.is_a?(Array) # skip compound keys

    Array.wrap(categories).include?(category)
  end

  Regexp.union(*matching.keys)
end

def print_files(category)
  regex = category_regexes(category)
  puts ALL_FILES.grep(regex).reject { |path| File.directory?(path) }.sort
end

puts "Backend:\n"
print_files(:integrations_be)

puts "Frontend:\n"
print_files(:integrations_fe)
```
This is a collection of links for monitoring Integrations features:

- Sidekiq queues (select the relevant queue from the `queue` dropdown)
- `JiraConnect::SyncMergeRequestWorker` errors
- `JiraConnect::SyncBranchWorker` errors
- `JiraConnect::SyncProjectWorker` errors

GitLab uses error budgets to measure the availability and performance of our features. Each engineering group has its own budget spend. The current 28-day spend for the Integrations team shows in this Grafana dashboard.
Error budget spend happens when either of the following exceeds a certain threshold:

- the time an operation takes to complete (its Apdex)
- the rate of errors for an operation
To determine the highest-priority problems in our Grafana dashboard:
Fixing the top offenders will have the biggest impact on the budget spend.
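As a rough sketch of the arithmetic behind an error budget, assuming a 99.95% availability target over the 28-day window (the target here is an assumption; check the infrastructure handbook for the authoritative number):

```ruby
# Assumed availability target over a 28-day window; the real target
# is defined by the infrastructure team's error budget policy.
window_seconds = 28 * 24 * 60 * 60
target = 0.9995

# The budget is the share of operation time allowed to "fail"
# (slow Apdex or errors) before the group overspends.
budget_seconds = window_seconds * (1 - target)
budget_minutes = budget_seconds / 60.0
# Roughly 20 minutes of failed operation time per 28 days.
```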
Learn more about error budgets with these resources:
Our features are attributed to the group's error budget via code tagged with `feature_category: :integrations`.
Here are some resources team members can use for employee development: