For an understanding of what this team will be working on, take a look at the product vision. This team is responsible for delivering on the following directions:
The Verify:Pipeline Execution group is focused on supporting functionality for the Continuous Integration use case. A key focus for the PE group is delivering features that achieve the outcome we track in our performance indicator.
This team maps to the Verify DevOps stage.
We measure the value we contribute by using Performance Indicators (PI), which we define and use to track progress. The current PI for the Pipeline Execution group is the number of unique users who trigger ci_pipelines. For more details, please check out the Product Team Performance Indicators. To view the latest Verify stage ci_pipeline data, see our Sisense Dashboard.
Based on the AARRR framework (Acquisition, Activation, Retention, Revenue, Referral), this funnel represents the customer journey in using GitLab CI. Each state in the funnel is defined with a metric to measure behavior. Product managers can focus on any of the various states in the funnel to prioritize features that drive a desired action.
Not included in the Pipeline Execution group's domain:
The following people are permanent members of the Verify:Pipeline Execution group:
| Person | Role |
| ------ | ---- |
| Sam Beckham | Frontend Engineering Manager, Verify |
| Jose Ivan Vargas | Senior Frontend Engineer, Verify:Pipeline Execution |
| Payton Burdette | Senior Frontend Engineer, Verify:Pipeline Execution |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| ------ | ---- |
| Veethika Mishra | Senior Product Designer, Verify:Pipeline Execution |
| Evan Read | Senior Technical Writer, Create (Gitaly), Verify (Testing), Manage (Access, Compliance) |
| Marcel Amirault | Technical Writer, Verify (Pipeline Execution, Pipeline Authoring) |
| Suzanne Selhorn | Staff Technical Writer, Verify (Runner), Fulfillment |
| Dominic Couture | Senior Security Engineer, Application Security, Verify (Pipeline Execution, Pipeline Authoring, Runner, Testing), Release (Release) |
| Richard Chong | Senior Software Engineer in Test, Verify:Pipeline Execution |
| Jackie Porter | Group Manager, Product, Verify |
Like most GitLab backend teams, we spend a lot of time working in Rails on the main GitLab CE app, but we also do a lot of work in Go, the language GitLab Runner is written in. Familiarity with Docker and Kubernetes is also useful on our team.
For those new to the team, these links may be helpful in learning more about the product and technology.
Issues are refined and weighted prior to scheduling them into an upcoming milestone. We use `candidate::` scoped labels to help with planning work in future iterations. The additional label allows us to filter the issues we are planning, allowing Product, Engineering, and UX to start async issue refinement on the 4th. Weighting also helps with capacity planning with respect to how issues are scheduled in future milestones.
We create 2 issues as part of our Planning process:
Both issues currently rely on the `candidate::` scoped label to determine which issues are to be investigated for the upcoming milestone(s).
Side note: we prefer the term "refining" over "grooming".
Engineers are expected to allocate approximately 4 hours each milestone to refine and weight issues assigned to them. The purpose of refining an issue is to ensure the problem statement is clear enough to provide a rough effort sizing estimate; the intention is not to provide solution validation during refinement.
Engineering uses the following handbook guidance for determining weights. If an issue needs additional `~frontend`, `~backend`, `~Quality`, `~UX`, or `~documentation` reviews, they are assigned to the respective individual(s).
Anyone on the team can contribute to answering the questions in this checklist, but the final decisions are up to the PM and EMs.
As issues progress, we set the `~workflow::` label to the appropriate status.

We add a weight to issues as a way to estimate the effort needed to complete them. We factor in complexity and any additional coordination needed to work on an issue. We weight issues based on complexity, following the Fibonacci sequence:
| Weight | Description |
| ------ | ----------- |
| 1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address but just haven't found time for yet. |
| 2: Small | The problem is well understood and a solution is outlined, but a little extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required. Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. |
| 3: Medium | Features that are well understood and relatively straightforward. A solution will be outlined, and some edge cases will be considered, but additional investigation may be required to confirm the approach. Some surprises are expected, and coordination with other team members may be necessary. Also bugs that are relatively well understood but may require additional investigation; once the problem is verified and major edge cases have been identified, a solution should be relatively straightforward. Examples are regular features, potentially with backend or frontend dependencies or performance issues. |
| 5: Large | Features that are well understood but have more ambiguity and complexity. A solution will be outlined, and major edge cases will be considered, but additional investigation will likely be required to validate the approach. Surprises with specific edge cases are to be expected, and feedback from multiple engineers and/or teams may be required. Also complex bugs that may be only partially understood and may not have an in-depth solution defined during issue refinement; additional investigation is likely necessary, and once the problem is identified, multiple iterations of a solution may be considered. Examples are large features with backend and frontend dependencies, or performance issues that have a solution outlined but require more in-depth solution validation. |
The maximum weight for an issue is 5, and such an issue may take more than one milestone to complete given additional dependencies and/or complexity. Consider whether an issue weighted 5 can be broken down into smaller iterations.
If an issue requires a feature flag rollout plan, consider increasing the weight by 2, according to the effort involved in rolling out the feature flag and monitoring the new behavior.
To encourage more transparency and collaboration amongst the team, and to align on the Release Posts we publish at the end of each milestone, starting in 13.2 we create a separate issue to highlight the feature flag rollout plan for each feature being released, based on the issue template for feature flag rollouts. The engineer who implements the feature is responsible for creating this separate issue, detailing when and how the feature flag will be toggled, and for subsequently linking it to their feature issue. The product manager tags this issue as a blocker to their release post, so that everyone is aligned on the release plan of the feature.
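As an illustrative sketch of what a feature flag gate does (in the GitLab codebase this role is played by the `Feature.enabled?` helper; the `FeatureFlags` class and the `:new_pipeline_graph` flag below are hypothetical stand-ins, not GitLab's actual implementation):

```ruby
# Minimal stand-in for a feature flag registry. New code paths ship
# "dark": the flag defaults to off until the rollout issue says to
# enable it, and can be toggled back off if monitoring shows problems.
class FeatureFlags
  def initialize
    @enabled = {}
  end

  # Turn a flag on, e.g. as a step in the rollout plan.
  def enable(name)
    @enabled[name] = true
  end

  # Turn a flag off again, e.g. to roll back the new behavior.
  def disable(name)
    @enabled[name] = false
  end

  # Flags that were never set are treated as off.
  def enabled?(name)
    @enabled.fetch(name, false)
  end
end

flags = FeatureFlags.new
puts flags.enabled?(:new_pipeline_graph) # prints "false": off by default

flags.enable(:new_pipeline_graph)        # the toggle tracked in the rollout issue
puts flags.enabled?(:new_pipeline_graph) # prints "true"
```

The point of the separate rollout issue is to make the `enable`/`disable` steps above explicit and visible to the release post process.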
We use the Pipeline Execution Workflow issue board to track what we work on in the current milestone.
Development moves through workflow states in the following order:
workflow::ready for development
`workflow::planning breakdown` is driven by Product, but is a collaborative effort between Product, UX, and Engineering. The steps for planning breakdown typically consist of:
problem validation, as needed
At any point, if an issue becomes blocked, it moves to the `workflow::blocked` status. If there is a blocking issue, it needs to be added to the issue description or linked to the issue with a 'blocked by' relationship.
workflow::ready for development means that an issue has been sufficiently refined and weighted by Engineering, upon request by Product and UX
Closed means that all code changes associated with the issue are fully enabled on gitlab.com. If it is being rolled out behind a feature flag, it means the feature flag is enabled for all users on gitlab.com.
We use a series of labels to indicate the highest priority issues in the milestone.
Once `VerifyP1` issues have been picked up and are in `workflow::in dev` or beyond, we have `VerifyP3` to signal issues that will become `VerifyP1` issues in the following milestones.
Any future product priorities (`VerifyP3`-labelled issues) will typically be in `workflow::ready for development` and have designs ready, and ideally already be weighted, with proposals for implementation. Beyond the product `VerifyPX` priorities, the `ready for development` column will be stack-ranked (or at least reviewed) daily by the Product Manager, so that each team member can pull from the top of the column, expecting that it is already ordered by priority.
When DRIs select issues, they will assign themselves to the issue and also add the milestone in which they believe the issue will most likely ship. This is also a good time to re-evaluate the weight and proposal, in case the DRI picking up the issue is not the same individual who originally weighted and refined it. Assigning a milestone to an issue does not necessarily mean that work starts in that same milestone. Aspirationally, we strive to iterate and want to break down efforts so we ship as much value in the milestone for our users as possible; if you see a more efficient way forward when you start working on a new issue, feel free to raise a comment and update the proposal to deliver more iterative value.
Each member of the team can choose which issues to work on during a milestone by assigning the issue to themselves. When the milestone is well underway and we find ourselves looking for work, we default to working right to left on the issue board by pulling issues in the right-most column. If there is an issue that a team member can help with on the board, they should do so instead of starting new work. This includes conducting code review on issues that the team member may not be assigned to, if they feel that they can add value and help move the issue along to completion.
Specifically, this means our work is prioritized in the following order:
`workflow::in dev`, if applicable
the `workflow::ready for development` column
The goal of this process is to reduce the amount of work in progress (WIP) at any given time. Reducing WIP forces us to "Start less, finish more", and it also reduces cycle time. Engineers should keep in mind that the DRI for a merge request is the author(s); this reflects the importance of teamwork without diluting the notion of a DRI, which is encouraged by our values.
If an issue has several components (e.g. ~frontend, ~backend, or ~documentation) it may be useful to split it up into separate implementation issues. The original issue should hold all the discussion around the feature, with the implementation issues being used to track the work done. Doing this provides several benefits:
When creating implementation issues, we need to link the implementation issue to the original feature issue. You should be able to see from the feature issue that it depends on implementation issues, and what the status of those issues are. It is everyone’s responsibility to keep these issues readable and make sure everything links back to the original feature issue.
Please use implementation issues responsibly. They make sense for larger issues, but can be cumbersome for smaller features.
In order to keep our stakeholders informed of work in progress, we provide updates to issues either by updating the issue's health status and/or adding an async issue update.
For issues in the current milestone, we use the Issue Health Status feature to indicate the probability that an issue will ship in the current milestone. This status is updated by the DRI (directly responsible individual) as soon as they recognize the probability has changed. If there is no change, a comment indicating that the status of the issue has been assessed would be helpful.
The following are definitions of the health status options:
On Track: The issue has no current blockers and is likely to be completed in the current milestone.
Needs Attention: The issue is still likely to be completed in the current milestone, but there are setbacks or time constraints that could cause the issue to miss the release due dates.
At Risk: The issue is highly unlikely to be completed in the current milestone, and will probably miss the release due dates.
Examples of how status updates are added:
If the status changes to At Risk, we recommend that the DRI add a short comment stating the reason for the change in an issue status update.
If the issue remains On Track, the DRI could provide a comment indicating that solutions continue to be implemented and that it's still on track to be delivered in the same milestone.
When the DRI is actively working on an issue (workflow status is `~workflow::in review` or `workflow::verification` in the current milestone), they will add a comment to the issue with a status update, detailing:
There are several benefits to this approach:
Expectations for DRIs when providing updates for work in progress:
As a general rule, any issues being actively worked on have one of the following workflow labels:
`workflow::production` (upon closing the issue)
The Health Status of these issues should be updated to:
Needs Attention: on the 1st of the month.
At Risk: on the 8th of the month.
EMs are responsible for manually updating the Health Status of any inactive issues in the milestone accordingly.
Spikes are time-boxed investigations typically performed in agile software development. We create Spike issues when there is uncertainty on how to proceed on a feature from a technical perspective before a feature is developed.
The Pipeline Execution group supports the product marketing categories described below:
| Category | Issues | MRs | Description |
| -------- | ------ | --- | ----------- |
| | Issues | MRs | Issues related to API endpoints for CI features. |
| | Issues | MRs | Issues related to |
| | Issues | MRs | All issues and MRs related to how we count continuous integration minutes and calculate usage. Formerly |
| | Issues | MRs | Issues related to CI functionality within the Merge Request. |
| | Issues | MRs | Issues related to various forms of notifications related to CI features. |
| | Issues | MRs | Issues related to CI pipeline statistics and dashboards. |
| | Issues | MRs | Issues related to the execution of pipeline jobs, including DAG, child pipelines, and matrix build jobs. |
| | Issues | MRs | Any issues and merge requests related to the CI/CD core domain, either as changes to be made or as observable side effects. |
| | Issues | | Issues that are helpful for someone onboarding as a new team member. |
| | Issues | | Issues that are good for first-time community contributors, and could similarly be labeled for |
To create a high-quality product that is functional and useful, Engineering, PM, and Product Design need to work closely together, combine methodologies, and connect often throughout product development. Product Management and Product Designers aim to work 3 months ahead of Engineering proposals to ensure the problem definition and solution have been adequately validated prior to building.
Product Designers play a critical role in the product development of user-facing issues. They collaborate with Engineering and the Product Manager to design the user experience for features. Once the design solution is proposed, agreed, and validated, the Engineering DRI is assigned to implement that design and functionality during the milestone for which the issue is planned.
Product Designers, PMs, and Engineering use `workflow::design` to discuss possible complexities and challenges and to uncover blockers around the proposed solution. To avoid blocking reviews later in the product development flow, the Product Designer, PM, and Engineering should work collaboratively throughout the feature design and development process and check in often, so that UX approval on merge requests is not required.
A top priority for us is usability, and one way to effectively measure our JTBDs is with periodic UX Scorecards. For especially technical tasks requiring infrastructure support, such as a functional cluster or a provisioned environment with a gitlab-ci.yml file, Product Design and the Product Manager may work with the Engineering Manager and Quality stable counterparts to craft a project based on the scenarios to test against the JTBDs. Some guidelines for working together in this case:
We are also working on building a library of pre-created tasks, contributed by the engineering team, in the CI Sample Projects group. These will be prioritized in the milestone and will help avoid excessive overhead from UX Scorecards and Category Maturity Scorecards in the future.
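For reference, a sample project provisioned for this kind of scorecard testing can start from a `.gitlab-ci.yml` as small as the following sketch (stage and job names are illustrative, not from an actual sample project):

```yaml
# Illustrative minimal .gitlab-ci.yml for a scorecard test project.
# Job names and script contents are made up for the example.
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling the project..."

test-job:
  stage: test
  script:
    - echo "Running tests..."
```

A configuration like this gives the tester a working pipeline to exercise the scenarios without needing to author CI configuration themselves.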
Our process of planning and development relies heavily on overcommunication rather than any approval gates or automated notification mechanisms. We adhere to the proactive mindset and responsibility of everyone involved to make sure every step in the process is as transparent as it can be.
For both planning and building this means direct, cross-functional, and other relevant stakeholders are included early into the process. This makes sure everyone is able to contribute to the best of their capabilities at the right time in the process. This can include, but is not limited to, GitLab objects, Slack, meetings, and daily standups.
Some practical examples of this are:
For issues related to the Merge Request experience, make sure to keep the Code Review group in the loop to avoid any technical or UX debt from occurring. Refer to the collaboration on merge request experience page to learn more about the collaboration framework.
Note: A good practice when only wanting to inform, rather than requesting a direct action from the mentioned stakeholders, is to put `FYI` directly following the @mention handle.
We suggest using the below steps to reach the best results in the shortest time:
We aim to design broadly for an epic or full feature at least one milestone ahead of time, and then break the big solution into smaller issues to pick up in subsequent milestones. If working one milestone ahead to design the big solution is not possible, Engineering and the Product Designer will define the smallest, most technically feasible feature set (MVC) that satisfies early customers, to be implemented in the same milestone.
In the past, we did not require UX reviews on MRs in Pipeline Execution, in order to increase velocity. A lot has changed since this was introduced, so we're taking steps to re-align ourselves with the code review guidelines and ensure all MRs with user-facing changes are reviewed by a product designer.
UX reviews should follow the guidelines as closely as possible to reduce the impact on velocity whilst maintaining quality.
We'll be measuring the impact of this change by comparing the Mean Time To Merge (MTTM) on user-facing MRs before and after the change.
For more details on how to contribute to GitLab generally, please see our documentation.
The Engineering DRI works with the Product Designer throughout the `workflow::in dev` phase to uncover, early enough, problems where the solution exhibits behaviour that differs from what was originally agreed upon. If changes need to be added that weren't agreed upon in the initial issue, a follow-up issue should be created, and the Engineering DRI should work with the Product Manager to schedule that issue in a following milestone. This allows us to focus on cleanup over sign-off, iterate quickly on issues with a low level of shame, and still make sure we accomplish what we've agreed upon. We should be careful not to put off completing these follow-up issues, so that we don't build up a significant amount of UX debt.
If we find that solutions are consistently not matching the agreed upon design, we will hold a retrospective with the DRI, designer, and product manager to discuss where the gaps in communication are so that we can improve. It may be necessary to begin requiring a UX approval for merge requests on certain issues to help the Engineering DRI meet the requirements.
Apply `~workflow::blocked` and change the assignee to the engineering DRIs until the technical discussion is resolved. If the discussion is expected to continue, risking the design solution not being delivered in the intended milestone, consider creating a spike issue for the discussion that blocks the current issue.
Engineers are required to add all relevant tests (unit, component, integration, or E2E) alongside a new feature or bug fix. We recognize, however, that for E2E tests this cannot be applied systematically, for multiple reasons. This section lists some ways we work to deliver tests in a painless and efficient way.
We aim to define needed tests early with Quad-planning. All the testing should be defined before the implementation starts and all parties should agree on:
When writing a new feature, we might need to write new E2E specs. In Pipeline Execution, we prefer to add E2E tests in separate MRs, the same way we prefer frontend and backend MRs to be separate. During Quad-planning, it is essential to determine whether that separate MR is required for the feature to ship or not. Given that we use feature flags for all new features, it is quite easy to work in separate MRs and turn on the flag when the team feels the feature has enough coverage for production use. A typical full stack feature can therefore involve multiple backend MRs, then frontend MRs, and finally E2E test MRs.
Writing tests is a team effort and should never be delegated to a single group. We are all responsible for quality. That being said, we acknowledge that Software Engineers in Tests (SETs) have the most knowledge about writing E2E tests. A benefit of splitting the E2E specs into a separate MR is that it can be assigned to someone other than the DRI of the feature, which lets a more appropriate person write the spec. This can be the SET, backend or frontend, depending on the circumstances.
Given the Ruby expertise required to write E2E tests, SETs and backend engineers should be the primary team members writing them. Frontend engineers may volunteer to write them with the support of a SET if they feel confident, but it is not expected of them.
Whenever possible, backend engineers should help write the E2E tests for a new feature or bug fix. They should feel comfortable pinging the SET on their team as needed (to review the MR, for example). This helps alleviate the SET's workload, so that they are not solely responsible for E2E tests. When an engineer is not comfortable writing the required E2E tests, the plan should be for the SET to lead the effort; the rationale is that SETs have the most context and work with specs daily, so they can write them much faster and, more importantly, write much better specs. DRI engineers should proactively help the SET understand the required tests.
If the SET has too many E2E tests to write, they should check with the team's backend engineers to see whether they can lead the effort on some of them. Because testing is part of the required work, we should account for E2E tests when assigning issues to a milestone.
When we fix a bug, we ideally want to write a test that would catch the bug and prevent it from happening again. However, if the spec we need to write (unit, integration, or E2E) is part of a code change that needs to be merged as soon as possible (e.g. one requiring a time-sensitive resolution), it is preferable to merge the fix first and create an issue to write the spec afterwards, so that it does not block merging of the MR. This should be considered the exception, not the rule.
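A minimal sketch of the "test that pins the bug" idea, with a hypothetical method and bug (not from the GitLab codebase):

```ruby
# Hypothetical example: suppose duration_in_minutes used to truncate
# (seconds / 60) instead of rounding up, under-counting partially used
# minutes. The fix rounds up, so 61 seconds counts as 2 minutes.
def duration_in_minutes(seconds)
  (seconds / 60.0).ceil
end

# The regression test pins the previously buggy edge cases, so the bug
# cannot silently reappear in a future refactor.
raise "regression: partial minute lost" unless duration_in_minutes(61) == 2
raise "regression: exact minute wrong" unless duration_in_minutes(60) == 1

puts "regression tests passed"
```

Writing the assertion against the exact input that exposed the bug is what makes it a regression test rather than a generic spec.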
When creating follow-up issues for tests, we have to ensure that they are not going to sit on a pile in the backlog. Tests are essential and should be prioritized as such. When creating a follow-up issue for required tests:
We track technical debt on the Pipeline Execution Technical Debt issue board, which covers issues in the planning phase.
This board has 2 main sections:
Under `workflow::planning breakdown` we find issues that are currently being refined.
Under `workflow::ready for development` we find issues that are clearly defined and have a weight assigned.
We use `severity::1` through `severity::4` labels to classify the impact of the specific tech debt item. We use the list below as a guideline to grade the impact.
`severity::1`: Blocking or Critical
Note that multiple factors can exist at once. In that case, use your judgment to either bump the impact score up or lower it. For example:
`severity::2` bugs. Then choose
To better understand the risk environment and each risk's causes and consequences, the Pipeline Execution team uses the Risk Map as our risk management tool to prioritize mitigation strategies and increase quality.
The team leverages a monthly async retrospective process as a way to celebrate success and look for ways to improve. The process for these retrospectives aligns with the automated retrospective process used by many teams at GitLab. The process is defined here: https://gitlab.com/gitlab-org/async-retrospectives#how-it-works.
A new retrospective issue is created on the 27th of each month, and remains open until the 26th of the following month. Team members are encouraged to add comments to that issue throughout the month as things arise as a way to capture topics as they happen. The current issue can be found in https://gitlab.com/gl-retrospectives/verify/-/issues.
On the 16th of each month a summary of the milestone will be added to the issue and the team will be notified to add any additional comments to the issue.
As comments are added to the issue, team members are encouraged to upvote any comments they feel are important to call out, and to add any additional discussion points as comments on the original thread.
Around the 26th of the month, or after the discussions have wrapped up, the backend engineering manager will summarize the retrospective and create issues for any follow-up action items that need to be addressed. They will also redact any personal information, customer names, or any other notes the team would like to keep private. Once that is complete, the issue will be made non-confidential and closed.
The team is globally distributed across many time zones. This presents some challenges in how we communicate, since our work days only overlap by a couple of hours. We have decided as a team to embrace asynchronous communication because scheduling meetings is difficult to coordinate. We meet as a team one day per week, on Wednesdays, for a team meeting, and our Engineering Managers schedule regular 1-on-1s.
Daily standup updates are posted to
Feel free to ask us questions directly in this Slack channel and someone will likely get back to you within 1-2 working days. We use the following emojis to respond to posted questions:
`:eyes:` to indicate that one of us has seen it
`:white_check_mark:` to indicate that the question has been answered
The Verify stage has a separate Slack channel, `#s_verify`, which encompasses the other groups of Verify.
Most spontaneous team communication happens in issues and MRs. Our issues have a group label of `~"group:pipeline execution"`. You can also @mention a team member in the issue if you have someone specific to ask.
If you need to call the attention of the entire group, you can tag
Welcome to the team! Whether you're joining GitLab as a new hire, transferring internally or ramping up on the CI domain knowledge to tackle issues in our area, you'll be assigned an onboarding/shadowing buddy so you can have someone to work with as you're getting familiarized with our codebase, our tech stack and general development processes on the Pipeline Execution team.
Read over this page as a starting point, and feel free to set up regular sync or async conversations with your buddy. We recommend setting up weekly touch points at a minimum, and joining our regular team syncs to learn more about how we work. (Reach out to our Engineering Managers for an invite to those recurring meetings). You're also welcome to schedule a few coffee chats to meet some members of our team. There is also a Pipeline Execution Developer onboarding checklist for you to go through if it's helpful, which will have admin tasks to complete (as a new team member, if relevant), and also links to technical documentation, meeting agendas and recordings.
Issues labelled with `~onboarding` are smaller issues to help you get onboarded into the CI feature area. We typically work Kanban-style, so if there aren't any `~onboarding` issues in the current milestone, reach out to the Product Manager and/or Engineering Managers to see which issues you can start on as part of your onboarding period.
In May 2021, we introduced the CI Shadow Program, which we are trialing as a way to onboard existing GitLab team members from other Engineering teams to the CI domain and contribute to CI features.