| GitLab Team Handle | |
| Team Boards | Team Board & Priority Board |
Engineering Productivity holds monthly office hours on the 3rd Wednesday of the month: at 13:30 UTC (6:30 PDT) in odd months (e.g. January, March) and at 3:00 UTC (20:00 PDT) in even months (e.g. February, April). Anyone is welcome to add topics or questions to the agenda. Office hours can be found in the GitLab Team Meetings calendar.
The Engineering Productivity team increases productivity of GitLab team members and contributors by shortening feedback loops and improving workflow efficiency for GitLab projects. The team uses a quantified approach to identify improvements and measure results of changes.
Enable a frequent and positive experience of Community Contributions from the Wider GitLab Community.
The Engineering Productivity team focuses on the following workstreams and the associated Epics, each with a workstream-specific vision and objectives.
|~"ep::pipeline"||GitLab Project Pipeline Improvement
GitLab Project Selective Test Execution
|~"ep::review-apps"||Improve Review Apps Reliability
Improve Review Apps setup and usefulness
|~"ep::metrics"||Centralized handbook first metrics dashboard|
|~"ep::workflow"||Reviewer Roulette Improvements
| Person | Role |
| ------ | ---- |
| Kyle Wiebers | Backend Engineering Manager, Engineering Productivity |
| Rémy Coutable | Staff Backend Engineer, Engineering Productivity |
| Mark Fletcher | Backend Engineer, Engineering Productivity |
| Jen-Shin Lin | Senior Backend Engineer, Engineering Productivity |
| Albert Salim | Senior Backend Engineer, Engineering Productivity |
The Engineering Productivity team uses modified prioritization and planning guidelines for targeting work within a Milestone.
The Engineering Productivity team creates metrics in the following sources to aid in operational reporting.
The Engineering Productivity team makes changes that can create notification spikes or new behavior for GitLab contributors. The team follows these guidelines in the spirit of GitLab's Internal Communication Guidelines.
Pipeline changes that have the potential to have an impact on the GitLab.com infrastructure should follow the Change Management process.
Pipeline changes that meet the following criteria must follow the Criticality 3 process:
These kinds of changes have led to production issues in the past.
The team will communicate significant pipeline changes to #development in Slack and the Engineering Week in Review.
Pipeline changes that meet the following criteria will be communicated:
Other pipeline changes will be communicated based on the team's discretion.
Be sure to give a heads-up in the #ux Slack channel and the Engineering Week in Review when an automation is expected to triage more than 50 notifications, or to change policies that a large stakeholder group uses (e.g. the team-triage report).
This is a list of Engineering Productivity experiments, where we identify an opportunity, form a hypothesis, and experiment to test the hypothesis.
| Experiment | Status | Hypothesis | Feedback Issue or Findings |
| ---------- | ------ | ---------- | -------------------------- |
| Experiment with running previously failed tests early | Upcoming | Can iteration or cycle time be improved if failed tests are run earlier in the pipeline? | |
| Store/retrieve tests metadata in/from Pages instead of artifacts | In Progress | We're only interested in the latest state of these files, so using Pages makes sense here. This would also simplify the logic to retrieve the reports and reduce the load on GitLab.com's infrastructure. | |
| Reduce pipeline cost by reducing number of RSpec tests before MR approval | In Progress | Reduce the CI cost for GitLab pipelines by running the most applicable RSpec tests for changes prior to approval. | |
| Enabling developers to run failed specs locally | In Progress | Enabling developers to run failed specs locally will lead to fewer pipelines per merge request and improved productivity from being able to fix regressions more quickly. | https://gitlab.com/gitlab-org/gitlab/-/issues/327660 |
| Use dynamic analysis to streamline test execution | Complete | Dynamic analysis can reduce the number of specs that are needed for MR pipelines without causing significant disruption to master stability. | A miss rate of 10% would cause a large impact to master stability. Look to leverage dynamic mapping with local developer tooling. Added documentation from the experiment. |
| Using timezone for Reviewer Roulette suggestions | Complete - Reverted | Using timezone in Reviewer Roulette suggestions will lead to a reduction in the mean time to merge. | Reviewer burden was inconsistently applied, and specific reviewers were getting too many reviews compared to others. More details in the experiment issue and feedback issue. |
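The selective test execution and dynamic analysis experiments above both rest on the same idea: use a mapping from source files to the specs that cover them, and run only the mapped specs for a change. The sketch below shows that selection step under stated assumptions; the mapping format, file paths, and fallback behavior here are illustrative, not GitLab's actual test-map schema.

```ruby
# Illustrative selective test execution: pick the specs most likely to cover
# the changed files. TEST_MAP entries and paths are hypothetical examples.
TEST_MAP = {
  'app/models/user.rb' => ['spec/models/user_spec.rb'],
  'lib/api/users.rb'   => ['spec/requests/api/users_spec.rb', 'spec/models/user_spec.rb']
}.freeze

# When a changed file has no mapping, fall back to the full suite:
# a mapping miss must widen the selection, never silently skip coverage.
FALLBACK_SPECS = ['spec'].freeze

def specs_for(changed_files)
  changed_files.flat_map { |file| TEST_MAP.fetch(file, FALLBACK_SPECS) }.uniq
end

puts specs_for(['app/models/user.rb']).inspect
```

The "miss rate" finding in the table maps directly onto this sketch: every changed file missing from the map either falls back to the full suite (slow but safe, as above) or risks skipping a covering spec, which is the master-stability impact the experiment measured.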