For an understanding of what this team will be working on, take a look at the product vision. This team is responsible for delivering on the following directions:
The Verify:Pipeline Execution group is focused on supporting functionality for the Continuous Integration use case. A key focus for the group is delivering features that achieve the outcome we track in our performance indicator.
This team maps to the Verify DevOps stage.
Our focus is on improving performance, scalability, and developer efficiency through our engineering-driven efforts. We will also continue to address Customer Experience through our quality initiatives. Key aspects of each of these areas are outlined below.
We measure the value we contribute by using Performance Indicators (PI), which we define and use to track progress.
The current PI for the Pipeline Execution group is the number of unique users who trigger `ci_pipelines`. For more details, please check out the Product Team Performance Indicators. To view the latest Verify stage `ci_pipeline` data, see our Sisense Dashboard.
Based on the AARRR framework (Acquisition, Activation, Retention, Revenue, Referral), this funnel represents the customer journey in using GitLab CI. Each state in the funnel is defined with a metric to measure behavior. Product managers can focus on any of the various states in the funnel to prioritize features that drive a desired action.
Domain | Issues |
---|---|
Pipeline processing: processes responsible for transitions of pipelines, stages, and jobs. | ~pipeline processing |
Rails-Runner communication: job queuing, API endpoints, and their underlying functionality related to operations performed by and for Runners. | |
Domain | Issues |
---|---|
Continuous Integration and Deployment Admin Area settings | ~CI/CD Settings |
Repositories analytics for groups | ~CI reports |
GitLab CI/CD artifacts reports types | ~CI reports |
Unit test reports | ~testing::code testing |
Test with GitLab CI/CD and generate reports in merge requests | |
Load Performance Testing | ~testing::load performance |
Metrics Reports | |
Test coverage visualization | ~testing::coverage |
Browser Performance Testing | ~testing::browser performance |
Fail Fast Testing | |
Accessibility testing | ~testing::accessibility |
Usability testing | ~testing::usability |
Review apps | ~testing::review apps |
Visual review tool (deprecated) | ~testing::visual review tool |
Scheduled pipelines | ~pipeline schedules |
Pipeline efficiency | ~ci::scaling |
Building images using Docker | |
External SCM and CI Integration | |
External pipeline validation | |
Rate limits on pipeline creation | ~Category:Continuous Integration + ~Eng-Inter-Dept::Rate Limits |
Job logs | |
Job log artifacts | |
Merge Trains | ~Category:Merge Trains |
Not included in the Pipeline Execution group's domain:
These are our high-level engineering-driven goals for the year. As with any of our goals, they are ambitious and subject to change.
Role | Team Member | Description |
---|---|---|
DRI for milestone refining | Drew Stachon | Helps the EM and PM ensure that issue refinement and weighting is complete prior to the start of a milestone by setting processes and following up with engineers. |
To find our stable counterparts, look at the Pipeline Execution product category listing.
Like most GitLab backend teams, we spend a lot of time working in Rails on the main GitLab CE app, but we also do a lot of work in Go, the language GitLab Runner is written in. Familiarity with Docker and Kubernetes is also useful on our team.
- Group label: `~group::pipeline execution`
- Slack channel: `#g_pipeline-execution`
For those new to the team, these links may be helpful in learning more about the product and technology.
(Sisense↗) We also track our backlog of issues, including past due security and infradev issues, and total open System Usability Scale (SUS) impacting issues and bugs.
(Sisense↗) MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
(Sisense↗) Flaky tests are problematic for many reasons.
The team uses the `#g_pipeline_execution_quad` Slack channel to discuss cross-functional prioritisation, in addition to any other topics that require the quad to collaborate. Additionally, the quad reviews the dashboard that shows the % of MRs that are bugs vs maintenance vs features, to ensure the team's efforts are properly aligned to the prioritisation.
The Pipeline Execution Workflow board is the source of truth for current and upcoming work.
Our planning timeline follows the GitLab Product Development timeline.
- `Verify::P*` labels indicate which issues will be scheduled into upcoming milestones. `Verify::P1` indicates a ~Deliverable for the current milestone, `Verify::P2` a ~Deliverable for the next milestone, and so on.
- A WIP limit is set for the `workflow::ready for development` column based on the average closed weight of past milestones (see the sketch below this list).
- The `~needs weight` board should be updated by the PM for all issues being considered for the next milestone that need weight, applying the `workflow::planning breakdown` label.
- Team members assign themselves `Verify::P*` `~needs weight` issues to apply weight. Issues should be selected in order of `Verify::P1`, `Verify::P2`, then `Verify::P3`.
- The designated DRI monitors the `~needs weight` board to ensure that all issues for the upcoming milestone are being weighted and refined by team members. For `Verify::P*` issues where team members have not already started the refinement process, the designated DRI will comment on the issue asking a specific team member or team members to start the process.
- `~"type::bug"` issues are selected for the upcoming milestone. The weight of these issues plus any `Verify::P1` `type::bug` issues should total approximately 40% of the WIP limit for the `~"workflow::ready for development"` column.
- `~"type::maintenance"` issues are selected for the upcoming milestone. The weight of these issues plus any `Verify::P1` `type::maintenance` issues should total approximately 40% of the WIP limit for the `~"workflow::ready for development"` column.
- `~"type::feature"` issues are selected for the upcoming milestone. The weight of these issues plus any `Verify::P1` `type::feature` issues should total approximately 20% of the WIP limit for the `~"workflow::ready for development"` column.
- Any `~Verify::P1` issues not yet in `workflow::ready for development` are reviewed.
- Remaining `Verify::P*` issues are reviewed and moved to `workflow::ready for development`.
- `~cicd::active` is applied on any issues in `workflow::ready for development` that could be worked on in the upcoming milestone. The total weight for this column on the board will not exceed 100% of the average shipped milestone's weight. For example, if the team's average closed issue weight for a milestone is 30, the column should not exceed 30 weight.
- `~Deliverable` is added to any `~Verify::P1` issues, or they are re-prioritized appropriately.
- `~Verify::P1` issues are assigned to the current milestone.
- Issues remaining in the `%Backlog` may be taken on as `~Stretch` goals.
- Issues in `workflow::planning breakdown` that do not have a due date requiring them to be completed in that milestone can either be moved into a future milestone or the `%Backlog`, depending on team priorities.
- The `workflow::ready for development` column is curated to align with the theme for the next upcoming milestone.

Note: The EM and PM may need to modify the team commitments and schedule work for the upcoming milestone as we focus on Customer Results over what we plan.
Side note: we prefer using Refining over Grooming.
Engineers are expected to allocate approximately 4 hours each milestone to refine and weight issues assigned to them. The purpose of refining an issue is to ensure the problem statement is clear enough to provide a rough effort sizing estimate; the intention is not to provide solution validation during refinement.
Engineering uses the following handbook guidance for determining weights. If an issue needs any additional ~frontend, ~backend, ~Quality, ~UX, or ~documentation reviews, they are assigned to the respective individual(s).
Anyone on the team can contribute to answering the questions in this checklist, but the final decisions are up to the PM and EMs.
Engineers will:

- Set the `~workflow::` label to the appropriate status as work progresses (e.g. `workflow::in dev`).
The team makes use of the `~needs weight` board, which shows issues that need to be weighted for the next milestone. The criteria are denoted by having `~needs weight` and `~cicd::planning` applied. Throughout the month (e.g. May 1-7), team members review the `~needs weight` board and assign issues to themselves. The priority order is determined by column: `Verify::P1`, `Verify::P2`, `Verify::P3`, then 'Open'. If an issue has a higher urgency for weighting, a team member might be directly assigned to it for a prioritized review.
We add a `Weight` to issues as a way to estimate the effort needed to complete an issue. We factor in complexity and any additional coordination needed to work on an issue. We weight issues based on complexity, following the Fibonacci sequence:
Weight | Description |
---|---|
1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven't found time for yet. Around 1 MR to address |
2: Small | The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required. Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. Less than 2 MRs to address |
3: Medium | Features that are well understood and relatively straightforward. A solution will be outlined, and some edge cases will be considered, but additional investigation may be required to confirm the approach. Some surprises are expected, and coordination with other team members may be necessary. Bugs that are relatively well understood, but additional investigation may be required. The expectation is that once the problem is verified and major edge cases have been identified, a solution should be relatively straightforward. Examples are regular features, potentially with backend or frontend dependencies or performance issues. 3 or more MRs to address |
5: Large | Features that are well understood but have more ambiguity and complexity. A solution will be outlined, and major edge cases will be considered, but additional investigation will likely be required to validate the approach. Surprises with specific edge cases are to be expected, and feedback from multiple engineers and/or teams may be required. Bugs are complex, may be only partially understood, and may not have an in-depth solution defined during issue refinement. Additional investigation is likely necessary, and once the problem is identified, multiple iterations of a solution may be considered. Examples are large features with backend and frontend dependencies, or performance issues that have a solution outlined but require more in-depth validation. Unknown number of MRs to address; must be broken down. |
The maximum weight for an issue is `5`; such an issue may take more than one milestone to complete given additional dependencies and/or complexity. Consider how an issue weighted `5` can be broken down into smaller iterations, and do so.

A feature flag roll-out issue should be created while refining/weighting, where one is anticipated. These are given a weight of `1`.
To encourage more transparency and collaboration amongst the team, and to align on the Release Posts we publish at the end of each milestone, we create a separate issue to highlight the feature flag roll-out plan for each feature being released (starting in 13.2), based on the issue template for feature flag roll-outs. The engineer who implements the feature is responsible for creating this separate issue, detailing when and how the feature flag will be toggled, and for linking it to the feature issue. The product manager tags this issue as a blocker for the release post, so that everyone is aligned on the release plan for the feature.
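As a minimal, self-contained sketch of the pattern (the flag name, module, and behavior are illustrative, not actual GitLab source), the feature ships dark behind a flag that the roll-out issue later toggles:

```ruby
# Hypothetical sketch: new code ships disabled behind a feature flag;
# the separate roll-out issue tracks when and how the flag is toggled.
module Feature
  FLAGS = { new_pipeline_behavior: false } # flipped per the roll-out issue

  def self.enabled?(name)
    FLAGS.fetch(name, false)
  end
end

def process_pipeline
  if Feature.enabled?(:new_pipeline_behavior)
    'new behavior (dark-launched until the roll-out issue flips the flag)'
  else
    'legacy behavior'
  end
end

puts process_pipeline # => "legacy behavior" until the flag is enabled
```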
The team makes use of `Verify::P*` labels to indicate the priority order of issues. As part of milestone planning and review, we may decide to change or remove the labels to reflect new priorities. Currently, these labels are applied as follows.
Priority Label | Reason for applying |
---|---|
`Verify::P1` | These are issues that we commit to delivering in the current milestone. |
`Verify::P2` | These are issues that we are likely to commit to delivering in the next milestone. |
`Verify::P3` | These are issues that are likely to be committed to in the milestone following the next. |
We use the Pipeline Execution Workflow issue board to track what we work on in the current milestone.
Development moves through workflow states in the following order:
1. `workflow::design` (if applicable)
2. `workflow::planning breakdown`
3. `workflow::ready for development`
4. `workflow::in dev`
5. `workflow::blocked` (as necessary)
6. `workflow::in review`
7. `workflow::verification`
8. `workflow::awaiting security release` (if applicable)
9. `workflow::feature-flagged` (if applicable)
10. `workflow::complete`
11. `Closed`
`workflow::planning breakdown` is driven by Product, but is a collaborative effort between Product and Engineering. The steps for planning breakdown typically consist of breaking the proposal into implementable issues, with issues sent back to `problem validation` as needed via the `workflow::problem validation` label.

At any point, if an issue becomes blocked, it moves to the `workflow::blocked` status. If there is a blocking issue, it needs to be added to the issue description or linked to the issue with a 'blocked by' relationship. If there is no blocking issue, the reason for being blocked should be clearly communicated in a comment on the issue.
`workflow::ready for development` means that an issue has been sufficiently refined and weighted by Engineering. Issues in this state that are labeled `cicd::active` should be worked on in a milestone. When a developer starts working on an issue, they should set the milestone to the one in which the issue will most likely be completed, rather than the one in which it is started.

`workflow::awaiting security release` is applied by an engineer after a security issue has passed verification; it signals that the change is ready for production but awaiting the next monthly security release. When this label is applied, the issue's milestone should also be updated to the next milestone, to align with when the next security release will happen.

`workflow::feature-flagged` is applied to an issue that is being enabled through a separate feature flag rollout issue. Once the feature is validated, the status is moved to `workflow::complete` and the issue is closed.

`Closed` means that all code changes associated with the issue are fully enabled on GitLab.com. If a change is being rolled out behind a feature flag, it means the feature flag is enabled for all users on GitLab.com.
More detail on the workflow is available on the Product-Development Flow page.
We use a series of labels to indicate the highest priority issues in the milestone.
- Issues are labeled `Verify::P1`, `Deliverable`, and `group::pipeline execution` to align with the Theme and Goals of the milestone.
- Issues labeled `Verify::P1`, `Deliverable`, and `group::pipeline execution` should account for approximately 30% of the average total milestone weight.
- Once `Verify::P1` issues have been picked up and are in `workflow::in dev` or beyond, we have `Verify::P2` and `Verify::P3` to signal issues that are important and will likely become `Verify::P1` issues in later milestones.

Following the `Verify::P*` priorities, the `ready for development` column on the Pipeline Execution Workflow issue board will be curated, so that each team member can pull items from this column as they choose.
When DRIs select issues, they assign themselves to the issue and set the milestone in which they believe the issue will most likely ship, which may not be the current milestone. The DRI may work on issues that are in future milestones. If the milestone is set to a future milestone or not set, and you are sure the issue will ship in the current milestone, reset the milestone to the current one. This is also a good time to re-evaluate the weight and proposal, especially if the DRI picking up the issue is not the individual who originally weighted and refined it. Aspirationally, we strive to iterate and want to break down the effort so we ship as much value in the milestone for our users as possible. This means that if you see a more efficient way forward when you start working on a new issue, you are free to raise a comment and update the proposal to deliver more iterative value.
Each member of the team can choose which issues to work on during a milestone by assigning the issue to themselves. When the milestone is well underway and we find ourselves looking for work, we default to working right to left on the issue board by pulling issues in the right-most column. If there is an issue on the board that a team member can help with, they should do so instead of starting new work. This includes conducting code review on issues that the team member may not be assigned to, if they feel they can add value and help move the issue along to completion. Additionally, prior to picking up the next issue from the top of the `workflow::ready for development` column, team members should check the `~needs weight` board to ensure everything in the `Verify::P*` columns has been weighted.
Specifically, this means our work is prioritized in the following order:

1. Issues in `workflow::verification` or `workflow::production`
2. Issues in `workflow::in review`
3. Issues in `workflow::blocked` or `workflow::in dev`, if applicable
4. Check the `~needs weight` board for any issues needing to be weighted in the current milestone.
5. Pull the next issue from the top of the `workflow::ready for development` column.

The goal of this process is to reduce the amount of work in progress (WIP) at any given time. Reducing WIP forces us to "Start less, finish more", and it also reduces cycle time. Engineers should keep in mind that the DRI for a merge request is the author(s), reflecting the importance of teamwork without diluting the notion that having a DRI is encouraged by our values.
If an issue has several components (e.g. ~frontend, ~backend, or ~documentation) it may be useful to split it up into separate implementation issues. The original issue should hold all the discussion around the feature, with the implementation issues being used to track the work done. If implementation will take more than 1 milestone and/or none of the implementation will be done against the original issue, it should be promoted to an epic. Doing this provides several benefits:
When moving an issue through `workflow::design` to `workflow::planning breakdown` and implementation, use one of these processes:
Frontend? | Backend? | Action |
---|---|---|
Yes | Yes | Original issue is renamed to a Frontend implementation issue and a separate Backend implementation issue is created. |
No | Yes | Original issue is renamed to a Backend implementation issue. |
Yes | No | Original issue is renamed to a Frontend implementation issue. |
If an issue in `workflow::design` is too large in scope to be effectively implemented as one issue, or if the issue is old and too cluttered with discussions, it can be broken down into smaller issues. The Product Designer will work together with the Engineers and the Product Manager during `workflow::design` to understand the possible iteration plan and break down the large design proposal into smaller parts whenever possible.
When new implementation issues are created, they should always be linked to the initial issue that contains the proposal and relevant discussions.
Please use implementation issues responsibly. They make sense for larger issues, but can be cumbersome for smaller features.
In order to keep our stakeholders informed of work in progress, we provide updates to issues either by updating the issue's health status and/or adding an async issue update.
For issues in the current milestone, we use the Issue Health Status feature to indicate the probability that an issue will ship in the current milestone. This status is updated by the DRI (directly responsible individual) as soon as they recognize the probability has changed. If there is no change, a comment indicating that the status of the issue has been assessed is helpful.
The following are definitions of the health status options:
- `On Track` - The issue has no current blockers and is likely to be completed in the current milestone.
- `Needs Attention` - The issue is still likely to be completed in the current milestone, but there are setbacks or time constraints that could cause the issue to miss the release due dates.
- `At Risk` - The issue is highly unlikely to be completed in the current milestone, and will probably miss the release due dates.

Examples of how status updates are added:

- When the status changes from `On Track` to `Needs Attention` or `At Risk`, we recommend that the DRI add a short comment stating the reason for the change in an issue status update.
- When the status remains `On Track`, the DRI could provide a comment to indicate that solutions continue to be implemented and the issue is still on track to be delivered in the same milestone.

When the DRI is actively working on an issue (workflow status is `workflow::in dev`, `workflow::in review`, or `workflow::verification` in the current milestone), they will add a comment into the issue with a status update, detailing:
- the current health status (e.g. `On Track`)

There are several benefits to this approach:
Expectations for DRIs when providing updates for work in progress:
As a general rule, any issues being actively worked on have one of the following workflow labels:
- `workflow::in dev`
- `workflow::in review`
- `workflow::verification`
- `workflow::complete` (upon closing the issue)

The Health Status of these issues should be updated to:

- `Needs Attention` - on the 1st of the month.
- `At Risk` - on the 8th of the month.

EMs are responsible for manually updating the Health Status of any inactive issues in the milestone accordingly.
Spikes are time-boxed investigations typically performed in agile software development. We create Spike issues when there is uncertainty on how to proceed on a feature from a technical perspective before a feature is developed.
The Pipeline Execution group supports the product marketing categories described below:
Label | | | | |
---|---|---|---|---|
`Category:Continuous Integration` | Issues | MRs | Direction | Documentation |
`Category:Merge Trains` | Issues | MRs | Direction | Documentation |
Label | | | Description |
---|---|---|---|
`api` | Issues | MRs | Issues related to API endpoints for CI features. |
`CI permissions` | Issues | MRs | Issues related to CI_JOB_TOKEN and CI authentication. |
`Units of compute` | Issues | MRs | All issues and MRs related to how we count continuous integration minutes and calculate usage. Formerly `~ci minutes`. |
`merge requests` | Issues | MRs | Issues related to CI functionality within the merge request. |
`notifications` | Issues | MRs | Issues related to various forms of notifications for CI features. |
`pipeline analytics` | Issues | MRs | Issues related to CI pipeline statistics and dashboards. |
`pipeline processing` | Issues | MRs | Issues related to the execution of pipeline jobs, including DAG, child pipelines, and matrix build jobs. |
Label | | | Description |
---|---|---|---|
`CI/CD core platform` | Issues | MRs | Any issues and merge requests related to the CI/CD core domain, either as changes to be made or as observable side effects. |
`onboarding` | Issues | | Issues that are helpful for someone onboarding as a new team member. |
`Good for 1st time contributors` | Issues | | Issues that are good for first-time community contributors, and could similarly be labeled for onboarding. |
`Seeking community contributions` | Issues | | Issues we would like the wider community to contribute to; these may also be labeled Seeking community contributions. |
When building features that may have high impact, the team uses established GitLab guidelines for feature flags.
We also ensure we are collaborating with our teammates in customer support and customer success by alerting them to the rollout issue before a feature is enabled.
The feature flags introduced by the team that are still in the code can be found in this table.

You can also search all feature flags through Sam's great tool here (prefiltered for pipeline execution).
To create a high-quality product that is functional and useful, Engineering, the PM, and the Product Designer work closely together, combine methodologies, and often connect throughout the product development process. They use `workflow::design` to discuss possible complexities and challenges, and to uncover blockers around the proposed solution.
Product Designers play a critical role in the product development of user-facing issues. They collaborate with Engineering and the Product Manager to design the user experience for features. Once the design solution is proposed, agreed upon, and validated, the Engineering DRI is assigned to implement that design and functionality during the milestone for which the issue is planned.
Following the code review guidelines means that we should ensure all MRs with user-facing changes are reviewed by a product designer. UX reviews should follow the designer review guidelines as closely as possible to reduce the impact on velocity while maintaining quality.
Our process of planning and development relies heavily on overcommunication rather than any approval gates or automated notification mechanisms. We adhere to the proactive mindset and responsibility of everyone involved to make sure every step in the process is as transparent as it can be. For both planning and building this means direct, cross-functional, and other relevant stakeholders are included early into the process. This ensures everyone is able to contribute to the best of their capabilities, and at the right time in the process. This can include, but is not limited to, GitLab objects, Slack, meetings, and daily stand-ups.
Some practical examples of this are:
Note: For issues related to Merge Request experience, always keep the Code Review group in the loop to avoid any technical or UX debt from occurring. Refer to the collaboration on merge requests experience page to learn more about the collaboration framework.
Note: A good practice when you only want to inform, rather than request a direct action from, the mentioned stakeholders is to put `FYI` directly after the `@mention` handle.
A top priority for us is usability, and one way to effectively evaluate our JTBDs is with periodic UX Scorecards. For technical tasks that require infrastructure support, such as a functional cluster or a provisioned environment with a `.gitlab-ci.yml` file, the Product Designer and the Product Manager can work with the Engineering Manager and Quality stable counterparts to craft a project based on the scenarios to test for the JTBDs. Some guidelines for working together in this case:
We are also working on maintaining a library of pre-created tasks for contribution from the engineering team in the CI Sample Projects group. These will help avoid too much overhead from UX Scorecards and Category Maturity Scorecards in the future.
We follow the steps below to achieve the best results in the shortest time:

- Design work happens in the `workflow::planning breakdown` phase, and the whole team (PM, Engineers, Product Designer, QA, and Technical Writer) is involved in the process. They work closely together to find the most technically feasible and smallest feature set to deliver value to early customers and provide feedback for future product development. Check out iteration strategies for help. We aim to design broadly for an epic or full feature at least one milestone ahead of time and then break the big solution into smaller issues to pick up in the next milestones. If working one milestone ahead to design the big solution is not possible, Engineering and the Product Designer will define the first most technically feasible and smallest feature set (MVC) to satisfy early customers, to be implemented in the same milestone.
- If a design issue is blocked on a technical discussion, mark it `~workflow::blocked` and change the assignee to the engineering DRIs until the technical discussion is resolved. If the discussion is expected to go on longer, reducing the chances of the design solution being delivered in the intended milestone, consider creating a spike issue for the discussion that blocks the current issue.

For more details on how to contribute to GitLab generally, please see our documentation.
The Engineering DRI works with the Product Designer throughout the `workflow::in dev` phase to uncover, early, any problems where the solution behaves differently from what was originally agreed upon. If changes are needed that weren't agreed upon in the initial issue, a follow-up issue should be created, and the Engineering DRI should work with the Product Manager to schedule that issue in a following milestone. This allows us to focus on cleanup over sign-off, iterate quickly on issues with a low level of shame, and still make sure we accomplish what we've agreed upon. We should be careful not to put off completing these follow-up issues, so that we don't build up a significant amount of UX debt.
If we find that solutions are consistently not matching the agreed upon design, we will hold a retrospective with the DRI, designer, and product manager to discuss where the gaps in communication are so that we can improve. It may be necessary to begin requiring a UX approval for merge requests on certain issues to help the Engineering DRI meet the requirements.
We strive to be as collaborative as possible in our work because the scope of our product area touches many other groups, including Create:Source Code and Verify:Runner. To enable collaboration we may work together with other internal stakeholders on a single MR to update or create a feature and documentation. https://gitlab.com/gitlab-org/gitlab/-/merge_requests/76253 is an example of how this can work very well.
Engineers are required to add all relevant tests (unit, component, integration, or E2E) alongside a new feature or bug fix. We recognize, however, that for E2E tests this cannot be applied systematically, for multiple reasons. This section lists some ways we work to deliver tests in a painless and efficient way.
We aim to define needed tests early with Quad-planning. All the testing should be defined before the implementation starts and all parties should agree on:
When writing a new feature, we might need to write new E2E specs. In Pipeline Execution, we prefer to add E2E tests in separate MRs, the same way we prefer frontend and backend MRs to be separate. During Quad-planning, it is essential to determine whether that separate MR is required for the feature to ship or not. Given that we use feature flags for all new features, it is quite easy to work in separate MRs and turn on the flag when the team feels the feature has enough coverage for production use. A typical full stack feature can therefore involve multiple backend MRs, then frontend MRs, and finally E2E test MRs.
Writing tests is a team effort and should never be delegated to a single group. We are all responsible for quality. That being said, we acknowledge that Software Engineers in Tests (SETs) have the most knowledge about writing E2E tests. A benefit of splitting the E2E specs into a separate MR is that it can be assigned to someone other than the DRI of the feature, which lets a more appropriate person write the spec. This can be the SET, backend or frontend, depending on the circumstances.
Given the Ruby expertise required to write E2E tests, SETs and backend engineers should be the primary team members writing them. Frontend engineers may volunteer to write them with the support of a SET if they feel confident, but it is not expected of them.
Whenever possible, backend engineers should help write the E2E tests for a new feature or bug fix. They should feel comfortable pinging the SET on their team as needed (to review the MR, for example). This helps alleviate the SET's workload, so that they are not entirely responsible for E2E tests on their own. However, when a backend engineer is not comfortable writing the required E2E tests, the plan should be for the SET to lead the effort. The rationale is that SETs have the most context and work daily with specs, so they can write them much faster and, more importantly, write much better specs. DRI engineers should proactively help the SET understand the required tests.
When the SET has too many E2E tests to write, they should check with the backend engineers on the team to see whether they could lead the effort on some of them. Because testing is part of the required work, we should account for E2E tests when assigning issues to a milestone.
When we fix a bug, we ideally want to write a test that would catch and prevent this bug from happening again in the future. However, if the spec we need to write (unit, integration, or E2E test) is part of a code change that needs to be merged as soon as possible (e.g. requires a time-sensitive resolution), it is preferable to merge the fix first and create an issue to write the spec afterwards, so that the spec does not block the MR from merging. This should be considered the exception, not the rule.
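As a minimal, self-contained sketch of that practice (the class and the bug are invented for illustration, not GitLab code), a fix would normally land together with a spec that reproduces the original failure:

```ruby
# Hypothetical example: a regression spec merged alongside a bug fix.
require 'rspec/autorun'

class DurationFormatter
  # The (invented) bug: zero-second durations used to raise an error.
  # The early return below is the fix; the spec pins the behavior down.
  def format(seconds)
    return '0s' if seconds.zero?

    "#{seconds / 60}m #{seconds % 60}s"
  end
end

RSpec.describe DurationFormatter do
  it 'formats a zero duration instead of raising' do
    expect(described_class.new.format(0)).to eq('0s')
  end
end
```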
When creating follow-up issues for tests, we have to ensure that they are not going to sit on a pile in the backlog. Tests are essential and should be prioritized as such. When creating a follow-up issue for required tests:
As a group, we strive to meet the Severity Service Level Objective for bugs. We regularly review all bugs, prioritizing issues with a `~missed-SLO` label and those approaching the SLO (Service Level Objective), through our weekly Triage Report. One of the group's goals is to reduce the median age of open S2 bugs, which is tracked by the Quality department as a KPI. To do this, we triage aged bugs each milestone: closing what we can, reducing severity for mis-labeled bugs, asking for more details on issues that cannot be reproduced, and prioritizing those that can be reproduced, focusing on bugs in the identified JTBD.
When we build new features, in line with our Iteration value, we aim for an MVC that covers the core aspect of the feature.
Then we create issues with the `~feature::enhancement` label to iterate on the remaining functionality. The more a new feature is used, the more its missing functionality can be perceived as bugs by users.

We use `severity::1`, `severity::2`, `severity::3` and `severity::4` labels with `~feature::enhancement` to classify the impact of those issues and prevent them from becoming `~type::bug`.
We use the list below as a guideline to grade the impact.
Label | Definition | Description |
---|---|---|
`severity::1` | Blocking or Critical | The enhancement is critical to the core of the feature. The feature is currently incomplete or extra action is required. |
`severity::2` | High | Some important aspects of the feature are missing, but the feature delivers value. However, the missing behavior can make the feature appear buggy. |
`severity::3` | Medium | The feature works, but the enhancement is needed for less common scenarios. |
`severity::4` | Low | The feature works fine, and the improvement aims to increase customer satisfaction. |
We track our technical debt using the Pipeline Execution Technical Debt issue board, where we track issues in the planning phase.

This board has 2 main sections:

- Under `workflow::planning breakdown` we find issues that are currently being refined.
- Under `workflow::ready for development` we find issues that are clearly defined and have a weight assigned.

We use the `severity::1`, `severity::2`, `severity::3` and `severity::4` labels to classify the impact of the specific tech debt item.
We use the list below as a guideline to grade the impact.
- `severity::1` (Blocking or Critical): comparable in impact to `severity::1` bugs
- `severity::2` (High): comparable in impact to `severity::2` bugs
- `severity::3` (Medium): comparable in impact to `severity::3` bugs
- `severity::4` (Low): comparable in impact to `severity::4` bugs

Note that multiple factors can exist at once. In that case, use your judgment to either bump the impact score up or lower it. For example:

- If a tech debt item would otherwise grade lower but regularly leads to `severity::2` bugs, then choose `severity::2`.
- If mitigating factors reduce the impact, consider lowering the grade to, for example, `severity::3`.
.To better understand the risk environment and each risk's causes and consequences, the Pipeline Execution team uses the Risk Map as our risk management tool to prioritise mitigation strategies and increase Quality.
The team uses a monthly async retrospective process as a way to celebrate success and look for ways to improve. The process for these retrospectives aligns with the automated retrospective process used by many teams at GitLab.
A new retrospective issue is created on the 27th of each month, and remains open until the 26th of the following month. Team members are encouraged to add comments to that issue throughout the month as things arise as a way to capture topics as they happen.
On the 16th of each month a summary of the milestone will be added to the issue and the team will be notified to add any additional comments to the issue.
As comments are added to the issue, team members are encouraged to upvote any comments they feel are important to call out, and to add any additional discussion points as comments to the original thread.
Around the 26th of the month, or after the discussions have wrapped up, the backend engineering manager will summarize the retrospective and create issues for any follow-up action items that need to be addressed. They will also redact any personal information, customer names, or other notes the team would like to keep private. Once that has been completed, the issue will be made non-confidential and closed.
The team is globally distributed and separated by many time zones. This presents some challenges in how we communicate, since our work days only overlap by a couple of hours. We have decided as a team to embrace asynchronous communication because scheduling meetings is difficult to coordinate. We meet as a team one day per week, on Wednesdays, for a team meeting, and our team's Engineering Managers schedule regular 1-on-1s.
Daily standup updates are posted to `#g_pipeline-execution`.
Feel free to ask us questions directly in this Slack channel and someone will likely get back to you within 1-2 working days. We will use the following emojis to respond to a posted question:

- `:eyes:` to indicate that one of us has seen it
- `:white_check_mark:` to indicate that the question has been answered

The Verify stage has a separate Slack channel, `#s_verify`, which encompasses the other groups of Verify.
Most spontaneous team communication happens in issues and MRs. Our issues have a group label of `~"group::pipeline execution"`. You can also tag a team member with an `@mention` in the issue if you have someone specific to ask.

If you need to call the attention of the entire group, you can tag `@gitlab-com/pipeline-execution-group`.
Refer to the Developer Onboarding in Verify section.