See FY24Q1 Engineering Productivity OKRs (internal link)
Person | Role |
---|---|
Greg Alfaro | GDK Project Stable Counterpart, Application Security |

Resource | Link |
---|---|
GitLab Team Handle | @gl-quality/eng-prod |
Slack Channel | #g_engineering_productivity |
Team Boards | Team Board & Priority Board |
Issue Tracker | gitlab-org/quality/engineering-productivity/team |
Engineering Productivity holds office hours on the 3rd Wednesday of even months (e.g. February, April) at 3:00 UTC (20:00 PST), open for anyone to add topics or questions to the agenda. Office hours can be found in the GitLab Team Meetings calendar.
The Engineering Productivity team focuses on the following workstreams and their associated epics, each with a workstream-specific vision and objectives.
Tracking Label | Epics |
---|---|
~"ep::pipeline" | GitLab Project Pipeline Improvement |
~"ep::review-apps" | Improve Review Apps reliability & efficiency |
~"ep::triage" | Quality: Triage |
~"ep::workflow" | Reviewer Roulette Improvements |
The team monitors the #master-broken Slack channel as part of pipeline monitoring. Automated triage is built on the gitlab-triage Ruby gem; see that gem and the Triage operations project for examples (a minimal policy sketch is shown below).

Engineering Productivity has an alternating weekly team meeting schedule to allow all team members to collaborate at times that work for them.
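For reference, gitlab-triage policies are plain YAML. The snippet below is a minimal sketch based on my reading of the gem's documented policy format; the rule name, interval, and label are hypothetical and are not taken from the actual Triage operations project.

```yaml
# .triage-policies.yml: minimal sketch of a gitlab-triage policy.
# The rule name, interval, and label below are hypothetical examples.
resource_rules:
  issues:
    rules:
      - name: Label issues with no activity for 12 months
        conditions:
          state: opened
          date:
            attribute: updated_at
            condition: older_than
            interval_type: months
            interval: 12
        actions:
          labels:
            - "awaiting feedback"   # hypothetical label
          comment: |
            This issue has had no activity for 12 months and was labeled
            automatically by a triage policy.
```

Policies like this are typically run from a scheduled pipeline using the gem's command-line interface.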
Showcases are done every two months and will be voted on by the team asynchronously in an issue. The team votes by adding :thumbsup: reactions to the ideas they'd like to hear about.

The Engineering Productivity team uses modified prioritization and planning guidelines for targeting work within a milestone.
The Engineering Productivity team recently reviewed all of our projects and discussed their relative priority. Aligning this with our business goals and priorities is very important. The list below is ordered by those priorities and includes primary domain experts for communication as well as a documentation reference for self-service.
Project | Domain Knowledge | Documentation |
---|---|---|
GitLab CI Pipeline configuration optimization and stability | Jen-Shin, David, Nao | Pipelines for the GitLab project |
Triaging master-broken | Jenn, Nao, Alina | Broken Master |
GitLab Development Kit (GDK) continued development | Ash, Nao | GitLab Development Kit |
Triage operations for issues, merge requests, community contributions | Jenn, Alina | triage-ops |
Review Apps | David | Using review apps in the development of GitLab |
Triage engine, used by GitLab triage operations | ??? | GitLab Triage |
Dangerfiles for shared Danger rules and plugins | ??? | gitlab-dangerfiles Ruby gem for shared Danger rules and plugins |
JiHu | Jen-Shin | JiHu Support |
Development department metrics for measurements of Quality and Productivity | ??? | Development Department Performance Indicators |
RSpec Profiling Statistics for profiling information on RSpec tests in CI | Alina | rspec_profiling_stats |
Styles for shared RuboCop cops | ??? | gitlab-styles Ruby gem for shared RuboCop cops |
Feature flag alert for reporting on GitLab feature flags | ??? | GitLab feature flag alert |
The Engineering Productivity team creates metrics in the following sources to aid in operational reporting.
The Engineering Productivity team makes changes that can create notification spikes or new behavior for GitLab contributors. The team follows these guidelines in the spirit of GitLab's Internal Communication Guidelines.
Pipeline changes that have the potential to have an impact on the GitLab.com infrastructure should follow the Change Management process.
Pipeline changes that meet the following criteria must follow the Criticality 3 process:
- Changes to the `cache-repo` job, as these kinds of changes have led to production issues in the past.
The team will communicate significant pipeline changes to #development in Slack and the Engineering Week in Review.
Pipeline changes that meet the following criteria will be communicated:
Other pipeline changes will be communicated based on the team's discretion.
Be sure to give a heads-up in the #development, #eng-managers, #product, and #ux Slack channels and the Engineering Week in Review when an automation is expected to triage more than 50 notifications or to change policies that a large stakeholder group uses (e.g. the team-triage report).
Communicating progress is important, but status doesn't belong in one-on-ones, as it can be more appropriately communicated to a broader audience using other methods. The "standup" model used by many organizations practicing Scrum assumes a shared time of day for standups to happen; in a timezone-distributed team, there is no "9am" that the team shares. Additionally, dropping context after completing work for the day only to regain it to share a status update is costly context switching. The standup model also assumes the intended audience is just the team, but in GitLab's model, that means folks need to be aware of where the status is being communicated (Slack, issues, other). Since this information isn't available to the intended audience, it needs to be duplicated, which at worst means there's no single source of truth and at a minimum means copy-pasting information.
The proposal is to trial an Asynchronous Issue Update model, similar to what the Package Group uses. This process would replace the existing daily standup update we post in Slack with Geekbot. The time period for the trial would be a milestone or two, depending on feedback cycles.
The async daily update communicates the progress and confidence using an issue comment and the milestone health status using the Health Status field in the issue. A daily update may be skipped if there was no progress. Merge requests that do not have a related issue should be updated directly. It's preferable to update the issue rather than the related merge requests, as those do not provide a view of the overall progress. Where there are blockers or you need support, Slack is the preferred space to ask for that. Being blocked or needing support are more urgent than email notifications allow.
When communicating the health status, the options are:
- `on track` - when the issue is progressing as planned
- `needs attention` - when the issue requires attention or intervention to keep it on schedule
- `at risk` - when there is a risk the issue will not be completed according to schedule

The async update comment should include:
Example:
**Status**: 20% complete, 75% confident
Expecting to go into review tomorrow.
Include one entry for each associated MR
Example:
**Issue status**: 20% complete, 75% confident
Expecting to go into review tomorrow.
**MR statuses**:
- !11111+ - 80% complete, 99% confident - docs update - need to add one more section
- !21212+ - 10% complete, 70% confident - api update - database migrations created, working on creating the rest of the functionality next
Ask yourself: how confident am I that my % of completeness is correct?
For things like bugs or issues with many unknowns, the confidence can help communicate the level of unknowns. For example, if you start a bug with a lot of unknowns on the first day of the milestone, you might have low confidence that you understand your level of progress. Your confidence in the work may go down for whatever reason; it's acceptable to downgrade your confidence. Consideration should be given to retrospecting on why that happened.
A weekly update should be added to epics you're assigned to and/or are actively working on. The update should provide an overview of the progress across the feature. Consider adding an update if the epic is blocked, if there are unexpected competing priorities, and, even when work is not in progress, what the confidence level is for delivering by the expected delivery date. A weekly update may then be skipped until the situation changes. Anyone working on issues assigned to an epic can post weekly updates.
The epic updates communicate a high-level view of progress and status for quarterly goals using an epic comment. They do not need issue- or MR-level granularity because that is part of each issue update.
The weekly update comment should include:
Some good examples of epic updates that cover the above aspects:
As the owner of pipeline configuration for the GitLab project, the Engineering Productivity team has adopted several test intelligence strategies aimed to improve pipeline efficiency with the following benefits:
These strategies include:
Tests that provide coverage to the code changes in each merge request are most likely to fail. As a result, merge request pipelines for the GitLab project run only the predictive set of tests by default.
See https://docs.gitlab.com/ee/development/pipelines/index.html#predictive-test-jobs-before-a-merge-request-is-approved for more information.
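Concretely, the mapping from changed files to predictive tests is driven by the test_file_finder gem, which reads a YAML mapping file. The sketch below illustrates the idea with hypothetical patterns; it is not the actual mapping used by the GitLab project, and the format follows my reading of the gem's README.

```yaml
# tests.yml: hypothetical test_file_finder mapping (not GitLab's actual file).
# Each entry maps a source file pattern to the spec files that cover it;
# capture groups from `source` are substituted into `test` via %s.
mapping:
  - source: 'app/models/(.+)\.rb'
    test: 'spec/models/%s_spec.rb'
  - source: 'lib/gitlab/(.+)\.rb'
    test: 'spec/lib/gitlab/%s_spec.rb'
```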
There is a fail-fast job in each merge request pipeline aimed at running all the RSpec tests that provide coverage for the code changes, and hence are most likely to fail. It uses the same test_file_finder gem for test mapping. The job provides faster feedback by running early and stops the rest of the pipeline right away if any of the fail-fast job's tests fail. Take a look at this YouTube video for details on how GitLab implements the fail-fast job with test_file_finder. Note that the current design only works with low-impact merge requests, which map to only a small set of tests. If a large number of tests are likely to fail for a merge request, putting them in a single job is not feasible and could result in a long-running bottleneck, which defeats the job's purpose.
See https://docs.gitlab.com/ee/development/pipelines/index.html#fail-fast-job-in-merge-request-pipelines for more information.
Premium GitLab customers who wish to incorporate the Fail-Fast job into their Ruby projects can set it up with our Verify/Failfast template.
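For illustration, a fail-fast job wired up this way might look roughly like the sketch below. This is not the actual Verify/Failfast template; the job name, stage layout, and the tff invocation are assumptions.

```yaml
# Sketch of a fail-fast RSpec job driven by test_file_finder (tff).
# Not the actual Verify/Failfast template; details are assumptions.
rspec-fail-fast:
  stage: test
  rules:
    - if: '$CI_MERGE_REQUEST_IID'   # only run in merge request pipelines
  script:
    # Install the mapping tool and map the MR's changed files to the specs covering them.
    - gem install test_file_finder
    - CHANGED_FILES=$(git diff --name-only "$CI_MERGE_REQUEST_DIFF_BASE_SHA"...HEAD)
    - MATCHED_SPECS=$(tff $CHANGED_FILES)
    - |
      if [ -n "$MATCHED_SPECS" ]; then
        # --fail-fast aborts on the first failure for the quickest signal;
        # a failed job here stops later stages from running.
        bundle exec rspec --fail-fast $MATCHED_SPECS
      else
        echo "No mapped specs for these changes; nothing to run."
      fi
```

In practice the job would also need the project's Ruby and Bundler setup, which the real template provides.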
Tests that previously failed in a merge request are likely to fail again, so they provide the most urgent feedback in the next run. To grant these tests the highest priority, the GitLab pipeline prioritizes previously failed tests by re-running them early in a dedicated job, so it will be one of the first jobs to fail if attention is needed.
See https://docs.gitlab.com/ee/development/pipelines/index.html#re-run-previously-failed-tests-in-merge-request-pipelines for more information.
The GitLab pipeline consists of hundreds of jobs, but not all are necessary for each merge request. For example, a merge request with only changes to documentation files does not need to run any backend tests, so we can exclude all backend test jobs from the pipeline. See specify-when-jobs-run-with-rules for how to include/exclude CI jobs based on file changes. Most of the pipeline rules for the GitLab project can be found in https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml.
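For example, a simplified rule of this kind (a sketch, not the GitLab project's actual configuration) adds a backend test job only when Ruby code changes, so docs-only merge requests skip it entirely:

```yaml
# Simplified sketch: only add the backend test job when Ruby files change.
# Docs-only merge requests match no rule, so the job is excluded.
rspec-unit:
  stage: test
  script:
    - bundle exec rspec
  rules:
    - changes:
        - "**/*.rb"
        - "Gemfile.lock"
```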
Developers can add labels to run jobs in addition to the ones selected by the pipeline rules. Those labels start with `pipeline:` and multiple can be applied. A few examples that people commonly use:
~"pipeline:run-all-rspec"
~"pipeline:run-all-jest"
~"pipeline:run-as-if-foss"
~"pipeline:run-as-if-jh"
~"pipeline:run-praefect-with-db"
~"pipeline:run-single-db"
See docs for when to use these pipeline labels.
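Under the hood, these labels can be matched in pipeline rules through the predefined CI_MERGE_REQUEST_LABELS variable. The snippet below is a simplified sketch of that mechanism, not the actual logic in .gitlab/ci/rules.gitlab-ci.yml:

```yaml
# Simplified sketch: force the full RSpec suite when the MR carries the
# ~"pipeline:run-all-rspec" label. CI_MERGE_REQUEST_LABELS is a predefined,
# comma-separated list of the merge request's labels.
rspec-all:
  stage: test
  script:
    - bundle exec rspec
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-all-rspec/'
```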
This is a list of Engineering Productivity experiments where we identify an opportunity, form a hypothesis and experiment to test the hypothesis.
Experiment | Status | Hypothesis | Feedback Issue or Findings |
---|---|---|---|
Automatic issue creation for test failures | In Progress | The goal is to track each failing test in master with an issue, so that we can later automatically quarantine tests. | Feedback issue. |
Always run predictive jobs for fork pipelines | Complete | The goal is to reduce the compute units consumed by fork pipelines. The "full" jobs only run for canonical pipelines (i.e. pipelines started by a member of the project) once the MR is approved. | |
Retry failed specs in a new process after the initial run | Complete | Given that a lot of flaky tests are unreliable due to previous test which are affecting the global state, retrying only the failing specs in a new RSpec process should result in a better overall success rate. | Results show that this is useful. |
Experiment with automatically skipping identified flaky tests | Complete - Reverted | Skipping flaky tests should reduce the number of false broken master incidents and increase the master success rate. | We found out that it can actually break master in some cases, so we reverted the experiment with gitlab-org/gitlab!111217. |
Experiment with running previously failed tests early | Complete | | We have not noticed a significant improvement in feedback time due to other factors impacting our Time to First Failure metric. |
Store/retrieve tests metadata in/from pages instead of artifacts | Complete | We're only interested in the latest state of these files, so using Pages makes sense here. This simplifies the logic to retrieve the reports and reduce the load on GitLab.com's infrastructure. | This has been enabled since 2022-11-09. |
Reduce pipeline cost by reducing number of rspec tests before MR approval | Complete | Reduce the CI cost for GitLab pipelines by running the most applicable rspec tests for changes prior to approval | Improvements needed to identify and resolve selective test gaps as this impacted pipeline stability. |
Enabling developers to run failed specs locally | Complete | Enabling developers to run failed specs locally will lead to fewer pipelines per merge request and improved productivity from being able to fix regressions more quickly | Feedback issue. |
Use dynamic analysis to streamline test execution | Complete | Dynamic analysis can reduce the amount of specs that are needed for MR pipelines without causing significant disruption to master stability | Miss rate of 10% would cause a large impact to master stability. Look to leverage dynamic mapping with local developer tooling. Added documentation from the experiment. |
Using timezone for Reviewer Roulette suggestions | Complete - Reverted | Using timezone in Reviewer Roulette suggestions will lead to a reduction in the mean time to merge | Reviewer Burden was inconsistently applied and specific reviewers were getting too many reviews compared to others. More details in the experiment issue and feedback issue |