The Verify:Pipeline Authoring group is focused on all functionality related to Pipeline Authoring. This team maps to the Verify DevOps stage.
We measure the value we contribute by using Performance Indicators (PIs), which we define and use to track progress. The current PI for the Pipeline Authoring group is the number of unique users who trigger ci_pipelines. For more details, please check out the Product Team Performance Indicators. To view the latest Verify stage ci_pipeline data, see our Sisense Dashboard.
Based on the AARRR framework (Acquisition, Activation, Retention, Revenue, Referral), this funnel represents the customer journey in using GitLab CI. Each state in the funnel is defined with a metric to measure behavior. Product managers can focus on any of the various states in the funnel to prioritize features that drive a desired action.
The following people are permanent members of the Verify:Pipeline Authoring group:
| Person | Role |
| ------ | ---- |
| Mark Nuzzo | Backend Engineering Manager, Verify:Pipeline Authoring |
| Furkan Ayhan | Senior Backend Engineer, Verify:Pipeline Authoring |
| Laura Montemayor | Backend Engineer, Verify:Pipeline Authoring |
| Matija Čupić | Backend Engineer, Verify:Pipeline Authoring |
| Sam Beckham | Frontend Engineering Manager, Verify |
| Frédéric Caplette | Senior Frontend Engineer, Verify:Pipeline Authoring |
| Mireya Andres | Frontend Engineer, Verify:Pipeline Authoring |
| Sarah Groff Hennigh-Palermo | Senior Frontend Engineer, Verify:Pipeline Authoring |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| ------ | ---- |
| Dov Hershkovitch | Senior Product Manager, Verify:Pipeline Authoring |
| Nadia Sotnikova | Product Designer, Verify:Pipeline Authoring |
| Evan Read | Senior Technical Writer, Create (Gitaly), Verify (Testing), Manage (Access, Compliance) |
| Marcel Amirault | Technical Writer, Verify (Pipeline Execution, Pipeline Authoring) |
| Suzanne Selhorn | Staff Technical Writer, Verify (Runner), Fulfillment |
| Cheryl Li | Interim Senior Manager, Engineering, Verify & Backend Engineering Manager, Verify:Pipeline Execution |
| Dominic Couture | Senior Security Engineer, Application Security, Verify (Pipeline Execution, Pipeline Authoring, Runner, Testing), Release (Release) |
| Tiffany Rea | Software Engineer in Test, Verify:Pipeline Authoring |
| Jackie Porter | Group Manager, Product, Verify |
Like most GitLab backend teams, we spend a lot of time working in Rails on the main GitLab CE app, but we also do a lot of work in Go, the language that GitLab Runner is written in. Familiarity with Docker and Kubernetes is also useful on our team.
For those new to the team, these links may be helpful in learning more about the product and technology.
Issues are refined and weighted prior to scheduling them into an upcoming milestone. We use `candidate::` scoped labels to help with planning work in future iterations. The additional label allows us to filter the issues we are planning, so that Product, Engineering, and UX can start async issue refinement on the 4th. Weighting also helps with capacity planning with respect to how issues are scheduled in future milestones.
We create 2 issues as part of our Planning process:
Both issues currently rely on the `candidate::` scoped label to determine which issues are to be investigated for the upcoming milestone(s).
Side note: we prefer using "Refining" over "Grooming".
Engineers are expected to allocate approximately 4 hours each milestone to refine and weight issues assigned to them. The purpose of refining an issue is to ensure the problem statement is clear enough to provide a rough effort sizing estimate; the intention is not to provide solution validation during refinement.
Engineering uses the following handbook guidance for determining weights. If an issue needs any additional ~frontend, ~backend, ~Quality, ~UX, or ~documentation reviews, they are assigned to the respective individual(s).
Anyone on the team can contribute to answering the questions in this checklist, but the final decisions are up to the PM and EMs.
We update the ~workflow:: label to the appropriate status as the issue progresses, e.g.
We add a weight to issues as a way to estimate the effort needed to complete them. We factor in complexity and any additional coordination needed to work on an issue. We weight issues based on complexity, following the Fibonacci sequence:
| Weight | Description |
| ------ | ----------- |
| 1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address but haven't yet found time for. |
| 2: Small | The problem is well understood and a solution is outlined, but a little extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required. Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. |
| 3: Medium | Features that are well understood and relatively straightforward. A solution will be outlined, and some edge cases will be considered, but additional investigation may be required to confirm the approach. Some surprises are expected, and coordination with other team members may be necessary. Bugs at this weight are relatively well understood, though additional investigation may be required; once the problem is verified and major edge cases have been identified, a solution should be relatively straightforward. Examples are regular features, potentially with backend or frontend dependencies, or performance issues. |
| 5: Large | Features that are well understood but have more ambiguity and complexity. A solution will be outlined, and major edge cases will be considered, but additional investigation will likely be required to validate the approach. Surprises with specific edge cases are to be expected, and feedback from multiple engineers and/or teams may be required. Bugs at this weight are complex, may not be fully understood, and may not have an in-depth solution defined during issue refinement. Additional investigation is likely necessary, and once the problem is identified, multiple iterations of a solution may be considered. Examples are large features with backend and frontend dependencies, or performance issues that have a solution outlined but require more in-depth solution validation. |
The maximum weight for an issue is 5; such an issue may take more than one milestone to complete, given additional dependencies and/or complexity. Consider whether an issue weighted 5 can be broken down into smaller iterations.
If an issue requires a feature flag rollout plan, consider increasing the weight by 2, according to the effort involved in rolling out the feature flag and monitoring the new behavior.
To encourage more transparency and collaboration amongst the team, and to align on the Release Posts we publish at the end of each milestone, starting in 13.2 we create a separate issue to highlight a feature flag rollout plan for each feature being released, based on the issue template for feature flag rollouts. The engineer who implements the feature is responsible for creating this separate issue, detailing when and how the feature flag will be toggled, and subsequently linking it to their feature issue. The product manager tags this issue as a blocker to their release post, so that everyone is aligned on the release plan for the feature.
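As a hedged sketch of what such a rollout guards: new behaviour ships dark behind a flag, and the rollout issue tracks when the flag is toggled. The flag name and the `FeatureFlags` registry below are invented stand-ins so the example is self-contained; in the GitLab codebase the check would be `Feature.enabled?`.

```ruby
# Minimal stand-in for a feature flag registry (hypothetical; GitLab
# itself uses Feature.enabled? backed by the database and Redis).
class FeatureFlags
  def initialize
    @enabled = {}
  end

  def enable(name)
    @enabled[name] = true
  end

  def enabled?(name)
    @enabled.fetch(name, false) # flags default to off
  end
end

FLAGS = FeatureFlags.new

# New behaviour is gated: until the rollout issue toggles the flag,
# every user keeps seeing the legacy path.
def pipeline_editor_banner
  if FLAGS.enabled?(:pipeline_editor_banner) # hypothetical flag name
    "new banner"
  else
    "legacy banner"
  end
end
```

The rollout issue then documents when `FLAGS.enable(:pipeline_editor_banner)` (in reality, a flag toggle on gitlab.com) happens and who monitors the new behaviour.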
Similar to the Three Amigos process, Quad-planning leverages our "Quad" (Product, UX, Engineering, and Quality) to help inform planning of work for the team. It also ensures a shared understanding about the scope of work in a milestone. The Quad is invited to provide feedback on a Planning Issue to ensure that issues have been properly refined. Unlike the regular product development workflow that details the Quad-planning process, Engineering is represented not only by Engineering Management but by the Engineer DRIs as well. The SET counterpart representing Quality also provides feedback earlier in the process, following the Quality department's guidelines for Quad-planning, by working asynchronously and reviewing the issues in the Planning issue.
The Product Manager labels relevant issues from the milestone planning issue with the ~"quad-planning:ready" label. Supplying these distinct viewpoints at this early stage of planning is valuable for assessing the feasibility of the effort, as well as refining any unclear requirements or scope. Once the acceptance criteria are agreed upon, the ~"quad-planning:complete" label is applied by the SET on each of the issues.
We use the Pipeline Authoring Workflow issue board to track what we work on in the current milestone.
Development moves through workflow states in the following order:
workflow::ready for development
workflow::planning breakdown is driven by Product, but is a collaborative effort between Product, UX, and Engineering. The steps for planning breakdown typically consist of:
problem validation (as needed)
At any point, if an issue becomes blocked, it would be in the workflow::blocked status. If there is a blocking issue, it needs to be added to the issue description or linked to the issue with a 'blocked by' relationship.
workflow::ready for development means that an issue has been sufficiently refined and weighted by Engineering, upon request by Product and UX
Closed means that all code changes associated with the issue are fully enabled on gitlab.com. If it is being rolled out behind a feature flag, it means the feature flag is enabled for all users on gitlab.com.
Each member of the team can choose which issues to work on during a milestone by assigning the issue to themselves. When the milestone is well underway and we find ourselves looking for work, we default to working right to left on the issue board by pulling issues in the right-most column. If there is an issue that a team member can help with on the board, they should do so instead of starting new work. This includes conducting code review on issues that the team member may not be assigned to, if they feel that they can add value and help move the issue along to completion.
Specifically, this means our work is prioritized in the following order:
workflow::in dev (if applicable)
issues in the workflow::ready for development column
The goal of this process is to reduce the amount of work in progress (WIP) at any given time. Reducing WIP forces us to "Start less, finish more", and it also reduces cycle time. Engineers should keep in mind that the DRI for a merge request is the author(s), to reflect the importance of teamwork without diluting the notion that having a DRI is encouraged by our values.
In order to keep our stakeholders informed of work in progress, we provide updates to issues either by updating the issue's health status and/or adding an async issue update.
We use the Issue Health Status feature to indicate probability that an issue will ship in the assigned milestone. When a DRI (directly responsible individual) picks up an issue to work on, they should add the appropriate health status. This status is also updated by the DRI as soon as they recognize the probability has changed.
The following are definitions of the health status options:
No Status - If the status is left blank, we are unsure of the likelihood of an issue shipping. This usually applies only before an issue has been picked up.
On Track - The issue has no current blockers, and is likely to be completed in the current milestone.
Needs Attention - This signals that the DRI of the issue requires collaboration, as it might impact planned delivery.
At Risk - The issue is unlikely to be completed in the current milestone and will probably miss the release due dates.
Health Statuses can also be used in conjunction with the milestone board to communicate a variety of statuses. Here are some situations and how health status and workflow labels can work together to communicate information on the probability of an issue shipping. They are written from the perspective of the DRI.
Notes on health status:
If an issue had No Status and becomes On Track, the DRI could provide a comment as to why they now think it will make the milestone. This ensures the update is seen by interested parties of the issue.
If an issue becomes At Risk, we recommend that the DRI add a short comment stating the reason for the change in an issue status update.
When the DRI is actively working on an issue (workflow status is workflow::in review or workflow::verification in the current milestone), they will add a comment into the issue with a status update, detailing:
There are several benefits to this approach:
Expectations for DRIs when providing updates for work in progress:
As a general rule, any issues being actively worked on have one of the following workflow labels:
workflow::production (upon closing the issue)
The Health Status of these issues should be updated to:
Needs Attention - on the 1st of the month.
At Risk - on the 8th of the month.
EMs are responsible for manually updating the Health Status of any inactive issues in the milestone accordingly.
We split up design and engineering implementation into separate issues.
This makes it easier to know the status of an active issue at a glance.
Design issues go through the first few stages of our workflow.
Once it's ready for dev, the product manager creates a new issue for the engineering implementation.
This will be in the planning breakdown phase of the workflow until an implementation plan has been added and weighted.
The health status should be enough to know the general status of an issue. If you need more information on what tasks remain, you can check the implementation plan. Every engineering implementation issue needs an implementation plan in its description: a list or a table that outlines all the steps required to complete the feature. The engineer assigned to the issue is responsible for creating this plan and keeping it up to date. Here is an example of an implementation plan in an issue.
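For illustration only, an implementation plan might look like the following in an issue description. The tasks and flag name here are hypothetical, not taken from a real issue:

```markdown
## Implementation plan

- [x] Backend: add validation to the pipeline creation service, behind the `example_feature` flag (hypothetical name)
- [ ] Frontend: surface validation errors in the pipeline editor
- [ ] Add the E2E spec in a separate follow-up MR
- [ ] Enable the feature flag on gitlab.com and verify the new behaviour
```

Keeping the checkboxes current lets anyone glance at the description and see what remains, complementing the health status.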
We will start this process in %14.0 and will retro on it at the end of that milestone.
Sometimes it is useful to discuss a new product opportunity as a team to get additional input from the engineers and the team counterparts, before creating a design issue. These discussions should happen in issues titled Discussion: [Issue title]. Their main purpose is to engage the team in the discussions around the product vision or new feature ideas early in the opportunity validation and design process. After this discussion we might:
Having these discussions in issues can help us maintain a source of truth for the problems we're solving as a team and make the discussions more inclusive.
Spikes are time-boxed investigations typically performed in agile software development. We create Spike issues when there is uncertainty on how to proceed on a feature from a technical perspective before a feature is developed.
The Pipeline Authoring Group uses a lightweight architecture planning process for new features and significant refactoring.
The Pipeline Authoring group supports the product marketing categories described below:
| Label | Issues | MRs | Description |
| ----- | ------ | --- | ----------- |
|  | Issues | MRs | Issues related to supporting different CI targets directly (for example, Java or Mobile). Pipeline Authoring. |
|  | Issues | MRs | Issues related to Persistence (workspaces, caching). Does not include artifacts, which is its own label. |
|  | Issues | MRs | Issues related to CI rules or linting. |
|  | Issues | MRs | Relates to functionality surrounding pre-defined and user-defined variables available in the Build environment. Formerly |
|  | Issues | MRs | Issues related to visualizing how pipelines start and depend on each other. Includes visualizations for triggering, cross-project pipelines, and child/parent pipelines. For job execution, please use |
|  | Issues | MRs | Issues related to Directed Acyclic Graphs visualization only. For job execution, please use |
|  | Issues | MRs | Issues related to pipeline graphs and visualization. |
|  | Issues | MRs | Issues related to authoring the .gitlab-ci.yml file and CI YAML configuration (https://docs.gitlab.com/ee/ci/yaml/), but excludes issues handled by another label such as "CI rules". |
|  | Issues | MRs | Any issues and merge requests related to the CI/CD core domain, either as changes to be made or as observable side effects. |
|  | Issues |  | Issues that are helpful for someone onboarding as a new team member. |
|  | Issues |  | Issues that are good for first time community contributors, and could similarly be labeled for |
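Several of these labels concern the `.gitlab-ci.yml` configuration itself: user-defined variables, CI rules, and parent/child pipeline visualization. A minimal, hypothetical fragment touching those areas (job names and the child pipeline path are invented for illustration):

```yaml
# Hypothetical .gitlab-ci.yml fragment.
variables:
  DEPLOY_ENV: "staging"          # user-defined variable

lint:
  script: echo "Linting $DEPLOY_ENV configuration"
  rules:
    # CI rules: run this job only on merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

generate-child:
  trigger:
    include: path/to/child-pipeline.yml   # hypothetical path
    strategy: depend                       # parent reflects child status
```

The `variables`, `rules`, and `trigger` keywords are documented in the CI YAML reference linked above.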
To create a high-quality product that is functional and useful, Engineering, the PM, and the Product Designer need to work closely together, combine methodologies, and connect often throughout product development.
Product Designers play a critical role in the product development of user-facing issues. They collaborate with Engineering and the Product Manager to design the user experience for the features. Once the design solution is proposed, agreed upon, and validated, the Engineering DRI is assigned to implement that design and functionality during the milestone for which the issue is planned.
The Product Designer, PM, and Engineering use workflow::design to discuss possible complexities and challenges and to uncover blockers around the proposed solution. To avoid blocking reviews later in the product development flow, the Product Designer, PM, and Engineering should work collaboratively throughout the feature design and development process and check in often, so that UX approval on merge requests is not required.
The Product Designer drives team discussions during the workflow::design phase, primarily using issues, during weekly team meetings, or by leaving updates in meeting agendas as an async option. Following our communication guidelines, Slack discussions are mostly reserved for informal communication. We use issues to document design decisions; the issue description should be the single source of truth for any given problem.
The GitLab Design Management feature in issues is preferred for specific mock-up discussions, to make it easier to find the specs for the latest design direction or review the design-related discussions for the solutions that have been explored. Figma prototypes can also be added to the issue description to support quick-access to high-fidelity deliverables.
Our process of planning and development relies heavily on overcommunication rather than on approval gates or automated notification mechanisms. We rely on the proactive mindset and responsibility of everyone involved to make sure every step in the process is as transparent as it can be.
For both planning and building, this means direct, cross-functional, and other relevant stakeholders are included early in the process. This ensures everyone is able to contribute to the best of their capabilities at the right time. Communication can include, but is not limited to, GitLab objects, Slack, meetings, and daily standups.
Some practical examples of this are:
Note: A good practice when you only want to inform, rather than request a direct action from, the mentioned stakeholders is to put FYI directly after the @mention handle.
We suggest using the below steps to reach the best results in the shortest time:
We aim to design broadly for an epic or full feature at least one milestone ahead of time and then break the big solution into smaller issues to pick up in the next milestones. Suppose working one milestone ahead to design the big solution is not possible. In that case, Engineering and Product Designer will define the first most technically feasible and smallest feature set (MVC) to satisfy early customers that will be implemented in the same milestone.
In the past, we did not require UX reviews on MRs in Pipeline Authoring, in order to increase velocity. A lot has changed since this was introduced, so we're taking steps to re-align ourselves with the code review guidelines and ensure all MRs with user-facing changes are reviewed by a product designer.
UX reviews should follow the guidelines as closely as possible to reduce the impact on velocity whilst maintaining quality.
We'll be measuring the impact of this change by comparing the Mean Time To Merge (MTTM) on user-facing MRs before and after the change. If the velocity decreases significantly, we should look at ways to alleviate that.
For more details on how to contribute to GitLab generally, please see our documentation.
The Engineering DRI works with the Product Designer throughout the workflow::in dev phase to uncover, early on, any areas where the solution exhibits unexpected behaviour compared to what was originally agreed upon. If changes need to be added that weren't agreed upon in the initial issue, a follow-up issue should be created, and the Engineering DRI should work with the Product Manager to schedule that issue in a following milestone. This allows us to focus on cleanup over sign-off, iterate quickly on issues with a low level of shame, and still make sure we accomplish what we've agreed upon. We should be careful not to hold off on completing these follow-up issues, so that we don't build up a significant amount of UX debt.
If we find that solutions are consistently not matching the agreed upon design, we will hold a retrospective with the DRI, designer, and product manager to discuss where the gaps in communication are so that we can improve. It may be necessary to begin requiring a UX approval for merge requests on certain issues to help the Engineering DRI meet the requirements.
In order to develop and maintain the optimal process for collaboration between UX and Engineering, the Product Designer runs a quarterly UX Collaboration Survey to assess the quality of UX collaboration on the team and gather feedback from the engineers, engineering managers and the product manager.
The results of the first UX collaboration survey can be found in the UX Collaboration Survey issue. We are making iterative process changes based on the results of this survey and sync retrospective discussions, and will reassess the quality of our UX collaboration at the end of FY22-Q2.
Engineers are always required to add all relevant tests (unit, component, integration, or E2E) alongside a new feature or a bug fix. We recognize, however, that for E2E tests this cannot be applied systematically, for multiple reasons. This section lists some ways we work to deliver tests in a painless and efficient way.
We aim to define needed tests early with Quad-planning. All the testing should be defined before the implementation starts and all parties should agree on:
When writing a new feature, we might need to write new E2E specs. In Pipeline Authoring, we prefer to add E2E tests in separate MRs, the same way we prefer frontend and backend MRs to be separate. During Quad-planning, it is essential to determine whether that separate MR is required for the feature to ship or not. Given that we use feature flags for all new features, it is quite easy to work in separate MRs and turn on the flag when the team feels the feature has enough coverage for production use. A typical full stack feature can therefore involve multiple backend MRs, then frontend MRs, and finally E2E test MRs.
Writing tests is a team effort and should never be delegated to a single group. We are all responsible for quality. That being said, we acknowledge that Software Engineers in Tests (SETs) have the most knowledge about writing E2E tests. A benefit of splitting the E2E specs into a separate MR is that it can be assigned to someone other than the DRI of the feature, which lets a more appropriate person write the spec. This can be the SET, backend or frontend, depending on the circumstances.
Given the Ruby expertise required to write E2E tests, we should have SETs and backend engineers be the primary team members to write them. Frontend engineers may volunteer to write them with the support of a SET if they feel confident, but it is not expected of them.
Whenever possible, backend engineers should help write the E2E tests of a new feature or bug fix. They should feel comfortable pinging the SET on their team as needed (to review the MR, for example). This helps alleviate the SET's workload, so that they are not entirely responsible for E2E tests on their own. However, when they are not comfortable writing the required E2E tests, the plan should be for the SET to lead the effort. The rationale is that SETs have the most context and work daily with specs, so they will be able to write them much faster. More importantly, they can write much better specs. DRI engineers should proactively help the SET understand the required tests.
In the instance where the SET has too many E2E tests to write, they should check with the backend engineers of the team to see if they could lead the effort on some of them. Because testing is part of the required work, we should account for E2E tests when assigning issues to a milestone.
When we fix a bug, we ideally want to write a test that would catch and prevent this bug from happening again in the future. However, if the spec we need to write (unit, integration, or E2E test), is not straightforward and is blocking the release of the bug fix, it is preferable to merge the fix first, and create an issue to write the spec afterwards. This is in the spirit of bias for action.
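As a hedged illustration of this "fix first, pin the behaviour" approach (the formatter and its bug below are invented, not GitLab code), a regression test can be very small:

```ruby
# Hypothetical example: a duration formatter that previously raised
# ZeroDivisionError-style errors on zero-second pipelines. The fix
# ships first; the regression assertions pin the corrected behaviour
# so the bug cannot silently reappear.
def format_duration(seconds)
  return "0s" if seconds.zero? # the fix: handle zero explicitly
  minutes, secs = seconds.divmod(60)
  minutes.positive? ? "#{minutes}m #{secs}s" : "#{secs}s"
end

# Regression checks (plain assertions for brevity; in the GitLab
# codebase this would be an RSpec example in a follow-up MR).
raise "regression!" unless format_duration(0) == "0s"
raise "regression!" unless format_duration(90) == "1m 30s"
```

If writing the spec is not straightforward, the assertions land in the follow-up issue while the fix itself merges first.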
When creating follow-up issues for tests, we have to ensure that they are not going to sit in a pile in the backlog. Tests are essential and should be prioritized as such.
When creating a follow-up issue for required tests:
apply the workflow::ready for development label.
When creating a technical debt issue, make sure to label it as such. In addition, add labels based on the following guidelines:
workflow::planning breakdown issues are currently being refined. These issues require a weight, clarification of any requirements, and ideally a technical proposal before they can be marked as workflow::ready for development.
workflow::ready for development issues are clearly defined after review from an engineer, who applies a weight and a recommended technical approach. These issues are then ready to be scheduled.
We use severity::1 to severity::4 labels to classify the impact of the specific tech debt item. We use the list below as a guideline to grade the impact.
severity::1 - Blocking or Critical
Note that multiple factors can exist at once. In that case, use your judgment to either bump the impact score or lower it. For example:
severity::2 bugs. Then choose
Engineering Managers are responsible for collecting feedback from their engineering teams to help inform the Product Manager to decide on the priority of these technical debt issues. Engineers should label these issues accordingly with their assessment of severity, weight, and make a recommendation on a technical proposal.
The team leverages a monthly async retrospective process as a way to celebrate success and look for ways to improve. The process for these retrospectives aligns with the automated retrospective process used by many teams at GitLab. The process is defined here: https://gitlab.com/gitlab-org/async-retrospectives#how-it-works.
A new retrospective issue is created on the 27th of each month, and remains open until the 26th of the following month. Team members are encouraged to add comments to that issue throughout the month as things arise as a way to capture topics as they happen. The current issue can be found in https://gitlab.com/gl-retrospectives/verify/-/issues.
On the 16th of each month, a summary of the milestone is added to the issue and the team is notified to add any additional comments.
As comments are added to the issue, team members are encouraged to upvote any comments they feel are important to call out, and to add any additional discussion points as comments to the original thread.
Around the 26th of the month, or after the discussions have wrapped up, the backend engineering manager summarizes the retrospective and creates issues for any follow-up action items that need to be addressed. They also redact any personal information, customer names, or any other notes the team would like to keep private. Once that has been completed, the issue is made non-confidential and closed.
The team is globally distributed across many time zones. This presents some challenges in how we communicate, since our work days only overlap by a couple of hours. We have decided as a team to embrace asynchronous communication, because scheduling meetings is difficult to coordinate. We meet as a team one day per week, on Wednesdays, for a team meeting, and our team's Engineering Managers schedule regular 1-on-1s.
Daily standup updates are posted to
Feel free to ask us questions directly in this Slack channel and someone will likely get back to you in 1-2 working days. We use the following emojis to respond to a posted question:
:eyes: to indicate that one of us has seen it
:white_check_mark: to indicate that the question has been answered
The Verify stage has a separate Slack channel, #s_verify, which encompasses the other groups of Verify.
Most spontaneous team communication happens in issues and MRs. Our issues have a group label of ~"group::pipeline authoring". You can also @mention a team member in an issue if you have someone specific to ask.
If you need to call the attention of the entire group, you can tag
The Pipeline Authoring team has a shared calendar that integrates with the #g_pipeline-authoring Slack channel to inform the team of important dates from PTO Roots. Instructions on how to integrate with the Verify:Pipeline Authoring calendar can be found here.
We use feature channels (e.g. #f_awesome_feature) for our larger features. These Slack channels are for internal discussions on ideation, requests for general suggestions, and context. For any product-related updates, the Product Manager is responsible for documenting summaries from Slack discussions directly in the issue. For any other discussions relating to technical details, the Engineering DRI is responsible for documenting summaries from relevant Slack discussions in the issue and/or making changes in the appropriate merge request.
Discussions about definition and implementation should still happen in merge requests or issues. By encapsulating summaries of any discussions outside of these documents (e.g. Slack channels/threads or sync calls), we ensure we keep a single source of truth driving our decisions about an implementation. This ensures the information persists and is visible to a wider audience.
Welcome to the team! Whether you're joining GitLab as a new hire, transferring internally or ramping up on the CI domain knowledge to tackle issues in our area, you'll be assigned an onboarding/shadowing buddy so you can have someone to work with as you're getting familiarized with our codebase, our tech stack and general development processes on the Pipeline Authoring team.
Read over this page as a starting point, and feel free to set up regular sync or async conversations with your buddy. We recommend setting up weekly touch points at a minimum, and joining our regular team syncs to learn more about how we work. (Reach out to our Engineering Managers for an invite to those recurring meetings). You're also welcome to schedule a few coffee chats to meet some members of our team.
Issues labelled with ~onboarding are smaller issues to help you get onboarded into the CI feature area. We typically work Kanban-style, so if there aren't any ~onboarding issues in the current milestone, reach out to the Product Manager and/or Engineering Managers to see which issues you can start on as part of your onboarding period.
In May 2021, we introduced the CI Shadow Program, which we are trialing as a way to onboard existing GitLab team members from other Engineering teams to the CI domain and contribute to CI features.
See dedicated page.