The following people are permanent members of the Verify:Pipeline Authoring group:
|Mark Nuzzo||Backend Engineering Manager, Verify:Pipeline Authoring & Acting Backend Engineering Manager, Verify:Pipeline Execution|
|Avielle Wolfe||Backend Engineer, Verify:Pipeline Authoring|
|Furkan Ayhan||Senior Backend Engineer, Verify:Pipeline Authoring|
|Laura Montemayor||Backend Engineer, Verify:Pipeline Authoring|
|Sam Beckham||Frontend Engineering Manager, Verify|
|Briley Sandlin||Senior Frontend Engineer, Verify:Pipeline Authoring|
|Frédéric Caplette||Senior Frontend Engineer, Verify:Pipeline Authoring|
|Mireya Andres||Frontend Engineer, Verify:Pipeline Authoring|
The following members of other functional teams are our stable counterparts:
|Dov Hershkovitch||Senior Product Manager, Verify:Pipeline Authoring|
|Jackie Porter||Director of Product Management, Verify & Package and Acting Verify:Testing Product Manager|
|Nadia Sotnikova||Senior Product Designer, Verify:Pipeline Authoring|
|Marcel Amirault||Technical Writer, Verify (Pipeline Execution, Pipeline Authoring)|
|Suzanne Selhorn||Staff Technical Writer, Verify (Runner), Configure, Enablement (Memory, Global Search, Sharding)|
|Cheryl Li||Senior Manager, Engineering, Verify|
|Dominic Couture||Staff Security Engineer, Application Security, Verify (Pipeline Execution, Pipeline Authoring, Runner, Testing), Release (Release)|
|Grzegorz Bizon||Principal Engineer, Verify|
|Tiffany Rea||Software Engineer in Test, Verify:Pipeline Authoring|
(Sisense) We also track our backlog of issues, including past due security and infradev issues, and total open SUS-impacting issues and bugs.
(Sisense) MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
The team uses iteration cadences to manage the team's prioritized work and capacity. Currently, each iteration is set to a 2-week duration. Any issue that has a label of
frontend will be included in the iteration, but any design issue is managed at the associated milestone level. A video was created to summarize the trialing of iteration cadences in Pipeline Authoring.
In February 2022, the team began to trial the use of a template in the
gitlab-org/gitlab project called
Pipeline Authoring Issue Implementation to capture consistent details around implementation efforts. This template should be used when creating
frontend issues that relate to the implementation of a validated problem with a refined solution proposal, or a general
frontend request that isn't related to a feature improvement or addition. This template is not intended to replace or duplicate any original design issues as the single source of truth (SSOT). When creating an issue for a feature change or addition, a usability problem, or a bug, use the
feature request or
bug issue templates.
The team uses the Verify candidate label to denote issues that are good candidates to be worked on by any group in the Verify stage. Further details can be found in the Ops:Verify Shared Issues section.
Issues are refined and weighted prior to assigning them to a milestone. We use
candidate:: scoped labels to help with planning work in future milestones. This label allows us to filter the issues we are planning, allowing Product, Engineering, and UX to asynchronously refine issues that have
workflow::ready for development labels applied. Weighting also helps with capacity planning with respect to how issues are scheduled in future iterations.
We create a planning issue as part of our milestone planning process and the workflow board is the single source of truth (SSOT) for current and upcoming work. Product is the DRI in prioritizing work, with input from Engineering, UX, and Technical Writers. The planning issue is used to discuss questions and team capacity. Prior to the beginning of each iteration, issues identified in the planning issue will be assigned to that iteration and engineers can assign prioritized issues to themselves from the top of the
workflow::ready for development column in the workflow board.
All issues that need refining will have the
~needs weight label applied to them.
You can access all these issues using this issue filter.
It's never too early to refine an issue, but we should prioritize the issues closest to starting.
side note: we prefer using Refining over Grooming
The purpose of refining an issue is to ensure the problem statement is clear enough to provide a rough effort sizing estimate; the intention is not to provide solution validation during refinement.
Engineering uses the following handbook guidance for determining weights. If an issue needs additional
~frontend ~backend ~Quality ~UX ~documentation reviews, it is assigned to the respective individual(s).
Anyone on the team can contribute to answering the questions in this checklist, but the final decisions are up to the PM and EMs.
Update the ~workflow:: label to the appropriate status, e.g.
In April 2022, the team began to trial the use of the
needs weight board, which shows issues that
need a weight for the next milestone. These issues are denoted by having a
needs weight label and the appropriate milestone candidate label applied (e.g.
candidate::15.0). During the first 5 business days of the month (e.g. May 1-5), team members will review the
needs weight board and assign themselves to issues. Issues that can't be weighted in one milestone will be evaluated for future milestones. If an issue has a higher urgency for weighting, a team member might be directly assigned to it for a prioritized review.
We add a
Weight to issues as a way to estimate the effort needed to complete an issue. We factor in complexity and any additional coordination needed to work on an issue. We weight issues based on complexity, following the Fibonacci sequence:
|1: Trivial||The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required.
Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven't found time for yet.
|2: Small||The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required.
Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place.
|3: Medium||Features that are well understood and relatively straightforward. A solution will be outlined, and some edge cases will be considered, but additional investigation may be required to confirm the approach. Some surprises are expected, and coordination with other team members may be necessary.
Bugs that are relatively well understood, but additional investigation may be required. The expectation is that once the problem is verified and major edge cases have been identified, a solution should be relatively straightforward.
Examples are regular features, potentially with backend or frontend dependencies or performance issues.
|5: Large||Features that are well understood, but have more ambiguity and complexity. A solution will be outlined, and major edge cases will be considered, but additional investigation will likely be required to validate the approach. Surprises with specific edge cases are to be expected, and feedback from multiple engineers and/or teams may be required.
Bugs that are complex may be only partially understood, and may not have an in-depth solution defined during issue refinement. Additional investigation is likely necessary, and once the problem is identified, multiple iterations of a solution may be considered.
Examples are large features with backend and frontend dependencies, or performance issues that have a solution outlined but require more in-depth solution validation.
The maximum weight for an issue is a
5. An issue of this weight may take more than one milestone to complete, given additional dependencies and/or complexity. Consider whether an issue weighted with a
5 can be broken down into smaller iterations.
If an issue requires a feature flag rollout plan, consider increasing the weight by
2, according to the effort involved in rolling out the feature flag and monitoring the new behavior.
Our CI syntax keeps evolving. We cannot support all keywords indefinitely, so deprecating and removing keywords is inevitable.
GitLab does not have a versioning system for CI/CD configuration. Therefore, it is critical to over-communicate our deprecation plans to our users and take the necessary precautions to reduce the impact on their projects. Deprecating a keyword is risky because removing it will break all pipelines using it, and in some cases users are not aware of the keywords they use in their pipelines. The deprecation process described below is similar to the deprecating and removing features process, with additional steps to reduce the risks involved with removing a CI/CD keyword.
Deprecation notice - Syntax removal introduces a breaking change. As outlined in our deprecation process, we must notify the community and customers, which means including a deprecation notice in the monthly release post.
Track keyword usage - Tracking keyword usage should begin as early as possible. It is a mandatory step that helps estimate the user impact, timing, and needed effort. The more users use the keyword, the more time it takes to remove it (it took more than four years to move from deprecation to removal for the 'type' keyword).
In-app warning - Provide our users with an in-app notification that we plan to remove a keyword they use in their pipeline. Customers will get a notification in each run of a pipeline that uses the deprecated keyword. The warning will be printed:
This step is optional if keyword usage is relatively low (a rough threshold is ~5% of users impacted).
Keyword removal - The keyword will be removed from our code and schema; this should happen in a major version. Once removed, using the keyword will result in a lint error.
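As a minimal sketch of what such a removal looks like in practice, consider the `type` keyword mentioned above, which was a long-deprecated alias for `stage` (job names here are illustrative):

```yaml
# Before removal: `type` was a deprecated alias for `stage`.
# Pipelines using it received deprecation warnings while the keyword
# was still supported; after removal, this configuration fails CI lint.
build-old:
  type: build
  script:
    - make build

# After migration: the supported `stage` keyword.
build-new:
  stage: build
  script:
    - make build
```

Tracking usage of the keyword across projects (the step above) tells us how many such configurations would break, and the in-app warning gives their authors time to make this one-line change before the removal lands in a major release.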
To encourage more transparency and collaboration amongst the team, and to align on the Release Posts we publish at the end of each milestone, we will be creating a separate issue to highlight a feature flag rollout plan for each feature being released, starting in 13.2, based on the issue template for feature flag rollouts. The engineer who implements the feature will be responsible for creating this separate issue to detail when and how the feature flag will be toggled, and for subsequently linking this issue to their feature issue. The product manager will tag this issue as a blocker to their release post, so that everyone is aligned on the release plan for the feature.
Similar to the Three Amigos process, Quad-planning leverages our "Quad" (Product, UX, Engineering, and Quality) to help inform the planning of work for the team. It also ensures a shared understanding of the scope of work in a milestone. The Quad is invited to provide feedback on a Planning Issue to ensure that issues have been properly refined. Unlike the regular product development workflow that details the Quad-planning process, Engineering is represented not only by Engineering Management but also by the Engineer DRIs. The SET counterpart representing Quality will also provide feedback earlier in the process, following the Quality department's guidelines for Quad-Planning, by working asynchronously and reviewing the issues in the Planning issue.
The Product Manager will label relevant issues from the milestone planning issue with the
quad-planning:ready label. Supplying these distinct viewpoints at this early stage of planning is valuable for informing the feasibility of the effort, as well as refining any unclear requirements or scope. Once the acceptance criteria are agreed upon, the
quad-planning:complete label is applied by the SET on each of the issues.
We use the Pipeline Authoring Workflow issue board to track what we work on in the current iteration for the target milestone.
We follow the product development flow to ensure that the problems we're solving are well understood and the solutions are well defined and validated before the implementation.
We aim to achieve key outcomes in each phase in order to de-risk subsequent phases. However, the product development flow doesn't dictate the order we go through the phases, or the time spent in each. We might skip certain phases if we think that the necessary outcomes for that phase have already been achieved.
Development moves through workflow states as follows:
We use workflow labels to efficiently communicate an issue's state. Using these labels enables collaboration across teams and communicates an issue's current state. The DRIs throughout each phase of the workflow are responsible for keeping the workflow labels up-to-date.
Issue descriptions shall always be maintained as the single source of truth. Issue description accuracy should be maintained by the DRIs throughout each phase. However, all collaborators can and should contribute when they see discrepancies or needed updates.
Each member of the team can choose which issues to work on during an iteration for a milestone by assigning the issue to themselves. When the iteration and milestone are well underway and we find ourselves looking for work, we default to working right to left on the issue board by pulling issues in the right-most column. If there is an issue that a team member can help with on the board, they should do so instead of starting new work. This includes conducting code review on issues that the team member may not be assigned to, if they feel that they can add value and help move the issue along to completion.
Specifically, this means our work is prioritized in the following order:
workflow::in dev, if applicable
workflow::ready for development column
The goal of this process is to reduce the amount of work in progress (WIP) at any given time. Reducing WIP forces us to "Start less, finish more", and it also reduces cycle time. Engineers should keep in mind that the DRI for a merge request is the author(s), to reflect the importance of teamwork without diluting the notion that having a DRI is encouraged by our values.
We use the Issue Health Status feature to indicate the probability that an issue will ship in the assigned milestone. In the last week of the milestone, the DRI (directly responsible individual) should update the health status for any open issue they are assigned to, using the following options:
On Track - The issue has no current blockers and is likely to be completed in the current milestone.
Needs Attention - The DRI of the issue requires collaboration, and planned delivery might be impacted.
At Risk - The issue is unlikely to be completed in the current milestone and will probably miss the release due dates.
Please note that if an issue happens to roll over to the next milestone, the DRI should clear the health status once the next iteration begins to ensure the health status is accurate.
When the DRI is actively working on an issue (workflow status is
workflow::in review or
workflow::verification in the current milestone), they will add a comment into the issue with a status update, detailing:
There are several benefits to this approach:
Expectations for DRIs when providing updates for work in progress:
The product development workflow labels are the SSOT for the status of the issue as it relates to the product development workflow.
Issues going through the validation track should have the appropriate workflow label and a milestone assigned so they show up in the
workflow::design column of the Pipeline Authoring issue board.
As an issue is labeled
workflow::design, we change the title to
Design: [Issue title] to make the issues in
workflow::design easier to differentiate from Frontend and Backend implementation issues which are titled
Frontend: [Issue title] or
Backend: [Issue title] respectively.
Once the team has created a shared understanding about the problem and the solution, and there's no obvious outstanding questions about the next steps, the Product Manager moves the issue into
workflow::planning breakdown. To ensure that design discussions happen during
workflow::design and not during implementation, we should avoid moving issues into implementation prematurely.
If an issue has several components (e.g. ~frontend, ~backend, or ~documentation) we should split it up into separate implementation issues. When these issues are created, the issues should be titled
Frontend: [Issue title] or
Backend: [Issue title], and marked as
blocked by if one blocks the other. The original issue should hold all the discussion around the feature, with the implementation issues being used to track the work done. Splitting issues has several benefits:
When new implementation issues are created, they should always be linked to the initial issue that contains the proposal and relevant discussions.
Sometimes it is useful to discuss a new product opportunity or a technical approach as a team to get additional input from the engineers and the team counterparts. Such discussions aren't always actionable and may or may not result in an actionable issue. These discussions should happen in issues titled
Discussion: [Issue title]. Their main purpose is to engage the team in the discussions around a topic. After this discussion, we might:
The discussion issue should be linked to any related issues that were created as its outcome.
Having these discussions in issues can help us maintain a source of truth for the problems we're solving as a team and make the discussions more inclusive.
Spikes are time-boxed investigations typically performed in agile software development. We create Spike issues when there is uncertainty on how to proceed on a feature from a technical perspective before a feature is developed.
The Pipeline Authoring group supports the product marketing categories described below:
||Issues||MRs||Issues related to supporting different CI targets directly (for example, Java or Mobile).||Pipeline Authoring|
||Issues||MRs||Issues related to Persistence (workspaces, caching). Does not include artifacts, which is its own label|
||Issues||MRs||Issues related to CI rules or linting|
||Issues||MRs||Relates to functionality surrounding pre-defined and user-defined variables available in the Build environment. Formerly
||Issues||MRs||Issues related to visualizing how pipelines start and depend on each other. Includes visualizations for triggering, cross-project pipelines, and child/parent pipelines. For job execution, please use
||Issues||MRs||Issues related to Directed Acyclic Graphs visualization only. For job execution, please use
||Issues||MRs||Issues related to pipeline graphs and visualization|
||Issues||MRs||Issues related to authoring the .gitlab-ci.yml file and CI YAML configuration (https://docs.gitlab.com/ee/ci/yaml/) but excludes issues handled by another label such as "CI rules"|
||Issues||MRs||Any issues and merge requests related to CI/CD core domain, either as changes to be made or as observable side effects.|
||Issues||Issues that are helpful for someone onboarding as a new team member.|
||Issues||Issues that are good for first time community contributors, and could similarly be labeled for
||Issues||Issues we would like the wider community to contribute to, and which may also be labeled
To create a high-quality product that is functional and useful, Engineering, the PM, and the Product Designer need to work closely together, combine methodologies, and often connect throughout product development.
Product Designers play a critical role in the product development of user-facing issues. They collaborate with Engineering and the Product Manager to design the user experience for features. Once the design solution is proposed, agreed upon, and validated, the Engineering DRI is assigned to implement that design and functionality during the milestone for which the issue is planned.
Product Designer, PM, and Engineering use
workflow::design to discuss possible complexities and challenges, and to uncover blockers around the proposed solution. To avoid blocking reviews later in the product development flow, the Product Designer, PM, and Engineering should work collaboratively throughout the feature design and development process and check in often, so that UX approval on merge requests is not required.
The Product Designer drives team discussions during the
workflow::design phase, primarily using issues, during weekly team meetings, or leaving updates in meeting agendas as an async option. Following our communication guidelines, Slack discussions are mostly reserved for informal communication only. We use issues to document design decisions — The issue description should be the single source of truth for any given problem.
The GitLab Design Management feature in issues is preferred for specific mock-up discussions, to make it easier to find the specs for the latest design direction or review the design-related discussions for the solutions that have been explored. Figma prototypes can also be added to the issue description to support quick-access to high-fidelity deliverables.
Our process of planning and development relies heavily on overcommunication rather than on approval gates or automated notification mechanisms. We rely on the proactive mindset and responsibility of everyone involved to make sure every step in the process is as transparent as it can be.
For both planning and building, this means direct, cross-functional, and other relevant stakeholders are included early in the process. This makes sure everyone is able to contribute to the best of their capabilities at the right time. Channels can include, but are not limited to, GitLab objects, Slack, meetings, and daily standups.
Some practical examples of this are:
Note: When you only want to inform the mentioned stakeholders rather than request a direct action, a good practice is to put
FYI directly following the @mention handle.
We suggest using the below steps to reach the best results in the shortest time:
We aim to design broadly for an epic or full feature at least one milestone ahead of time, and then break the big solution into smaller issues to pick up in subsequent milestones. If working one milestone ahead to design the big solution is not possible, Engineering and the Product Designer will define the smallest, most technically feasible feature set (MVC) that satisfies early customers, to be implemented in the same milestone.
In the past, we did not require UX reviews on MRs in Pipeline Authoring, in order to increase velocity. A lot has changed since then, so we're taking steps to re-align ourselves with the code review guidelines and ensure all MRs with user-facing changes are reviewed by a product designer.
UX reviews should follow the guidelines as closely as possible to reduce the impact on velocity whilst maintaining quality.
For more details on how to contribute to GitLab generally, please see our documentation.
The Engineering DRI works with the Product Designer throughout the
workflow::in dev phase to uncover, early enough, any problems where the solution exhibits unexpected behavior compared to what was originally agreed upon. If changes need to be added that weren't agreed upon in the initial issue, a follow-up issue should be created, and the Engineering DRI should work with the Product Manager to schedule that issue in a following iteration. This allows us to focus on cleanup over signoff, iterate quickly on issues with a low level of shame, and still make sure we accomplish what we've agreed upon. We should be careful not to put off completing these follow-up issues, so that we don't build up a significant amount of UX debt.
If we find that solutions are consistently not matching the agreed upon design, we will hold a retrospective with the DRI, designer, and product manager to discuss where the gaps in communication are so that we can improve. It may be necessary to begin requiring a UX approval for merge requests on certain issues to help the Engineering DRI meet the requirements.
In order to develop and maintain the optimal process for collaboration between UX and Engineering, the Product Designer runs a quarterly UX Collaboration Survey to assess the quality of UX collaboration on the team and gather feedback from the engineers, engineering managers and the product manager.
The results of the first UX collaboration survey can be found in the UX Collaboration Survey issue. We are making iterative process changes based on the results of this survey and sync retrospective discussions, and will reassess the quality of our UX collaboration at the end of FY22-Q2.
Engineers are always required to add all relevant tests (unit, component, integration, or E2E) alongside a new feature or a bug fix. We recognize, however, that in the case of E2E tests this cannot be applied systematically, for multiple reasons. This section lists some ways we work to deliver tests in a painless and efficient way.
We aim to define needed tests early with Quad-planning. All the testing should be defined before the implementation starts and all parties should agree on:
When writing a new feature, we might need to write new E2E specs. In Pipeline Authoring, we prefer to add E2E tests in separate MRs, the same way we prefer frontend and backend MRs to be separate. During Quad-planning, it is essential to determine whether that separate MR is required for the feature to ship or not. Given that we use feature flags for all new features, it is quite easy to work in separate MRs and turn on the flag when the team feels the feature has enough coverage for production use. A typical full stack feature can therefore involve multiple backend MRs, then frontend MRs, and finally E2E test MRs.
Writing tests is a team effort and should never be delegated to a single group. We are all responsible for quality. That being said, we acknowledge that Software Engineers in Tests (SETs) have the most knowledge about writing E2E tests. A benefit of splitting the E2E specs into a separate MR is that it can be assigned to someone other than the DRI of the feature, which lets a more appropriate person write the spec. This can be the SET, backend or frontend, depending on the circumstances.
Given the Ruby expertise required to write E2E tests, SETs and backend engineers should be the primary team members writing them. Frontend engineers may volunteer to write them with the support of a SET if they feel confident, but it is not expected of them.
Whenever possible, backend engineers should help write the E2E tests for a new feature or bug fix. They should feel comfortable pinging the SET on their team as needed (to review the MR, for example). This helps alleviate the SET's workload, so that they are not solely responsible for E2E tests. However, when backend engineers are not comfortable writing the required E2E tests, the SET should lead the effort. The rationale is that SETs have the most context and work daily with specs, so they can write them much faster and, more importantly, write much better specs. DRI engineers should proactively help the SET understand the required tests.
If the SET has too many E2E tests to write, they should check whether backend engineers on the team could lead the effort on some of them. Because testing is part of the required work, we should account for E2E tests when assigning issues to an iteration.
When we fix a bug, we ideally want to write a test that would catch and prevent this bug from happening again in the future. However, if the spec we need to write (unit, integration, or E2E test), is not straightforward and is blocking the release of the bug fix, it is preferable to merge the fix first, and create an issue to write the spec afterwards. This is in the spirit of bias for action.
When creating follow-up issues for tests, we have to ensure that they are not going to sit on a pile in the backlog. Tests are essential and should be prioritized as such.
When creating a follow-up issue for required tests:
workflow::ready for development.
When creating a technical debt issue, make sure to label it as such. In addition, add labels based on the following guidelines:
Issue readiness: We add the correct
~workflow::x label in accordance with our product development workflow
Impact breakdown: We use
severity::n labels to classify the impact of the specific tech debt item. These should map to the severity definitions for bugs
Engineering Managers are responsible for collecting feedback from their engineering teams to help the Product Manager decide on the priority of these technical debt issues. Engineers should label these issues with their assessment of severity and weight, and make a recommendation on a technical proposal.
The team uses a monthly async retrospective process as a way to celebrate success and look for ways to improve. The process for these retrospectives aligns with the automated retrospective process used by many teams at GitLab.
A new retrospective issue is created on the 27th of each month and remains open until the 26th of the following month. Team members are encouraged to add comments to that issue throughout the month, as a way to capture topics as they happen.
On the 16th of each month, a summary of the milestone will be added to the issue and the team will be notified to add any additional comments to the issue.
As comments are added to the issue, team members are encouraged to upvote any comments they feel are important to call out, and to add any additional discussion points as comments to the original thread.
Around the 26th of the month, or after the discussions have wrapped up, the backend engineering manager will summarize the retrospective and create issues for any follow-up action items that need to be addressed. They will also redact any personal information, customer names, or any other notes the team would like to keep private. Once that has been completed, the issue will be made non-confidential and closed.
The team is globally distributed and separated by many timezones. This presents some challenges in how we communicate, since our work days only overlap by a couple of hours. We have decided as a team to embrace asynchronous communication because scheduling meetings is difficult to coordinate. We meet as a team one day per week, on Wednesdays, for a team meeting, and our team's Engineering Managers schedule regular 1-on-1s.
Daily standup updates are posted to
Feel free to ask us questions directly in this Slack channel and someone will likely get back to you within 1-2 working days. We will use the following emojis to respond to the posted question:
:eyes: to indicate that one of us has seen it
:white_check_mark: to indicate that the question has been answered
The Verify stage has a separate Slack channel under
#s_verify, which encompasses the other groups of Verify.
Most spontaneous team communication happens in issues and MRs. Our issues have a group label of
~"group::pipeline authoring". You can also tag a team member with
@mention in the issue if you have someone specific to ask.
If you need to call the attention of the entire group, you can tag
The Pipeline Authoring team has a shared calendar that integrates with the
#g_pipeline-authoring Slack channel to inform the team of important dates from PTO Roots. Instructions on how
to integrate with the
Verify:Pipeline Authoring calendar can be found here.
Welcome to the team! Whether you're joining GitLab as a new hire, transferring internally, or ramping up on CI domain knowledge to tackle issues in our area, you'll be assigned an onboarding/shadowing buddy so you have someone to work with as you get familiar with our codebase, our tech stack, and general development processes on the Pipeline Authoring team.
Read over this page as a starting point, and feel free to set up regular sync or async conversations with your buddy. We recommend setting up weekly touch points at a minimum, and joining our regular team syncs to learn more about how we work. (Reach out to our Engineering Managers for an invite to those recurring meetings). You're also welcome to schedule a few coffee chats to meet some members of our team.
Issues labelled with
~onboarding are smaller issues to help you get onboarded into the CI feature area. We typically work Kanban-style, so if there aren't any
~onboarding issues in the current milestone, reach out to the Product Manager and/or Engineering Managers to see which issues you can start on as part of your onboarding period.