The Release Group is responsible for developing features that relate to the Release stage of the DevOps lifecycle. The Release stage covers all functionality related to Continuous Delivery and release automation.
For an understanding of what this stage encapsulates, take a look at the product vision.
The following people are permanent members of the Release group:

| Person | Role |
| ------ | ---- |
| Chris Balane | Senior Product Manager, Release |
| Andrei Zubov | Senior Frontend Engineer, Release |

The following members of other functional teams are our stable counterparts:

| Person | Role |
| ------ | ---- |
| Dominic Couture | Staff Security Engineer, Application Security, Verify (Pipeline Execution, Pipeline Authoring, Runner, Testing), Release (Release) |
Like most GitLab teams, we spend a lot of time working in Rails and Vue.js on the main GitLab app, but we also do some work in Go, which is used heavily in the GitLab Release CLI. Familiarity with Docker and Kubernetes is also useful on our team.
We use performance indicator dashboards to determine if we are successfully delivering on the Development Department KPIs. We also define quarterly Objectives & Key Results (OKRs) to set and track measurable team goals.
(Sisense) We also track our backlog of issues, including past-due security and infradev issues, and the total number of open System Usability Scale (SUS)-impacting issues and bugs.
(Sisense) MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
Issues that contribute to the Release stage of the DevOps toolchain have the stage label applied.
The stage's primary Slack channel is #s_release. We also support a number of feature channels for discussions or questions about a specific feature area. We no longer support issue-specific channels, as they are easy to lose track of and they fragment the discussion. Supported channels are:
We use the following YouTube playlists:
We hold recurring office hours to give community members a chance to discuss any questions, issues, or merge requests. For details about upcoming office hours, check out the epic or the playlist on GitLab Unfiltered.
We use the following labels to promote issues among community contributors:
~"Seeking community contributions": Sometimes we can write clear implementation instructions, but don't have the capacity to work on the issue ourselves. We mark such issues as ~"Seeking community contributions".
~"quick win": These issues can help community contributors learn the review process without needing a deep understanding of our codebase.
The Release team engineers will work with the Product Manager to regularly triage issues that are viable candidates for community contribution. There are a few ways that team members can assist with contributions, primarily by:
If a merge request becomes a high priority task and the contributor has become less active or stalled, consider adding an explanation comment and finishing the merge request. Always remember to give credit and appreciation to the community contributor in order to encourage future participation.
We are always striving to improve and iterate on our planning process as a team. To maximize our velocity and meet our deliverables, we follow the process outlined on our planning page.
We aim to follow the Product Development Flow as closely as possible to plan and track features as they make their way from idea to production. The Release Team Workflow board can be used to follow features in each stage of the process. This board details the various stages an issue can exist in as it is being developed. It's not necessary for an issue to go through all of these stages, and issues may move back and forth through the workflow as they are iterated on.
Below is a high level description of each stage, its meaning, and any requirements for that stage.
~"workflow::ready for development": Issues in this stage should have the ~"UX Ready" label, as well as either the ~frontend or ~backend label, or both, to indicate which areas will need focus.
The Release Error Budget dashboard is used to identify and prioritize issues that are impacting our customers and infrastructure performance.
To ensure that we are consistently monitoring and addressing these issues, each milestone the engineering manager will review the error budget dashboard to monitor changes and determine whether we're exceeding our budget. If needed, investigation issues will be opened to identify what is contributing to our error budget spend, and the appropriate label will be applied to facilitate tracking and measurement. These issues will be added to the upcoming milestone, and an engineer will be assigned to help determine the root cause. Once a potential solution has been identified, the issue will be triaged based on the regular triage process and scheduled per the regular async planning process.
In some instances, we may determine that a service is functioning properly and that an adjustment needs to be made to our Service Level Indicators (SLIs). If it is determined that the current thresholds are too low for a given service, the Product Manager, who is the DRI for error budget spend, will work with the team to determine an appropriate SLI for that service.
More information about error budgets and how they are calculated can be found in the error budget dashboard documentation.
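For intuition, an error budget for an availability target can be computed with standard SRE arithmetic. The sketch below is purely illustrative; the 99.95% target and 28-day window are assumptions for the example, not the actual values used for GitLab's services:

```python
# Illustrative error-budget arithmetic; the target and window are assumed
# values for this example, not GitLab's actual configuration.
SLO_TARGET = 0.9995   # hypothetical availability target (99.95%)
WINDOW_DAYS = 28      # hypothetical rolling window

window_seconds = WINDOW_DAYS * 24 * 60 * 60
budget_seconds = (1 - SLO_TARGET) * window_seconds  # allowed "bad" time

print(f"{budget_seconds:.0f} seconds (~{budget_seconds / 60:.1f} minutes)")
# → 1210 seconds (~20.2 minutes)
```

In other words, the tighter the availability target, the smaller the pool of "bad" seconds the team can spend in a window before the budget is exceeded.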
To investigate a decrease or a discrepancy in the error budget, an engineer can utilize one of the following dashboards.
The Release error budget dashboard provides a good overview of all the key metrics for the Release team. The Detailed release error budget dashboard is more focused on Service Level Indicators (SLIs). Both of them share an overview of the error budget and the budget spend attribution.
The budget spend attribution is often very helpful in determining where failures might be coming from. Violations are separated by the following types and components, respectively:
- `rails_request`: API requests with violations at the framework (Rails) level.
- `puma`: API requests with violations at the server (Puma) level.
- `rails_request`: Web requests with violations at the framework (Rails) level.
- `puma`: Web requests with violations at the server (Puma) level.
- `sidekiq_execution`: background jobs executed by Sidekiq.
There are also two types of violation, an `apdex` violation or an `error`:

- `apdex`: an operation succeeded, but not within the set threshold.
- `error`: an operation failed, e.g. a failed background job, or a request returning a 500 response.
Note: not all components are implemented, so GraphQL failures, for example, will not add to the budget spend.
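To make the attribution concrete, here is a minimal hypothetical sketch of how spend could be tallied per component and violation type. The component names mirror the list above, but the threshold, record shape, and functions are invented for illustration; this is not how GitLab's dashboards are actually implemented:

```python
# Hypothetical sketch of error-budget spend attribution.
# The 1-second apdex threshold and the record format are assumptions.
from collections import defaultdict

APDEX_THRESHOLD_S = 1.0  # assumed latency target for a "satisfying" operation

def classify(op):
    """Return 'error', 'apdex', or None for a single operation record."""
    if not op["success"]:
        return "error"   # e.g. failed background job, 500 response
    if op["duration_s"] > APDEX_THRESHOLD_S:
        return "apdex"   # succeeded, but not within the set threshold
    return None          # within budget, no violation

def budget_spend(operations):
    """Sum violations per (component, violation_type) pair."""
    spend = defaultdict(int)
    for op in operations:
        violation = classify(op)
        if violation:
            spend[(op["component"], violation)] += 1
    return dict(spend)

ops = [
    {"component": "rails_request", "success": True, "duration_s": 0.2},
    {"component": "rails_request", "success": True, "duration_s": 2.5},
    {"component": "sidekiq_execution", "success": False, "duration_s": 0.1},
]
print(budget_spend(ops))
# → {('rails_request', 'apdex'): 1, ('sidekiq_execution', 'error'): 1}
```

A fast, successful request contributes nothing; a slow success shows up as `apdex` spend and a failure as `error` spend, which matches how the attribution panels break down violations.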
Before you start, keep in mind that sometimes an infrastructure issue or some other underlying problem may affect the error budget negatively (see this issue for instance). In such cases, the steps below might not be very helpful for an investigation. If unsure, it is a good idea to check with the engineering manager or fellow team members. Also look out for discussions in Slack channels such as #f_error_budgets, as similar cases are often discussed there.
Our goal is to ship software at scale without sacrificing quality or velocity. In order to do that, we believe that the quality of our product must be a shared responsibility.
Every member of the Release team contributes to quality through better software design, proper testing practices and bug prevention strategies. We ensure these best practices are followed by:
Using the ~"stuff that should just work" label.
Code reviews follow the standard process of using the reviewer roulette to choose a reviewer and a maintainer. The roulette is optional, so if a merge request contains changes that someone outside our group may not fully understand in depth, we encourage choosing a member of the Release team for the preliminary review, to focus on correctly solving the problem. The intent is to leave this choice to the discretion of the engineer, while raising the idea that fellow Release team members will sometimes be best placed to understand the implications of the features we are implementing. The maintainer review can then focus on quality and code standards.
This approach also creates an environment for requesting an early review on a draft (WIP) merge request, where the solution might be refined through collaboration, and it allows us to share knowledge across the team.
Our daily updates on progress and status will be added to the issues as a comment. A daily update may be skipped if there was no change in progress. It's preferable to update the issue rather than the related merge requests, as those do not provide a view of the overall progress. The status comment should include what percentage complete the work is, the person's confidence that their estimate is correct, and notes on what was done and/or whether review has started. Finally, if there are multiple MRs associated with an issue, please include an entry for each. A couple of suggestions to consider when adding your async updates:
The suggested format is:

    Complete:
    Confidence:
    Notes:
    Concern:

For example:

    Complete: 80%
    Confidence: 90%
    Notes: expecting to go into review tomorrow
    Concern: ~frontend

If there are multiple MRs associated with an issue:

    Issue status: 20% complete, 75% confident
    MR statuses:
    !11111 - 80% complete, 99% confident - docs update - need to add one more section
    !21212 - 10% complete, 70% confident - api update - database migrations created, working on creating the rest of the functionality next
The Weekly Status Update is configured to run at noon on Fridays, and contains three questions:
What progress was made on your deliverables this week? (MRs and demos are good for this)
The goal with this question is to show off the work you did, even if it's only part of a feature. This could be a whole feature, a small part of a larger feature, an API to be used later, or even just a copy change.
What do you plan to work on next week? (think about what you'll be able to merge by the end of the week)
Think about what the next most important thing is to work on, and think about what part of that you can accomplish in one week. If your priorities aren't clear, talk to your manager.
Who will you need help from to accomplish your plan for next week? (tag specific individuals so they know ahead of time)
This helps to ensure that the others on the team know you'll need their help and will surface any issues earlier.
When going out of office (OOO), be sure to clearly communicate it with other people.
Ask someone in #s_release to act as your backup to help distribute the workload.
firstname.lastname@example.org. Read more in the Paid time off page.
To help team members carve out time for their own personal development, the Release team will be piloting an optional Personal Growth Day once per milestone. Team members should consider using the first Friday of every milestone to minimize impact on our deliverables. However, individuals should use their discretion and choose a day that works best for them based on their current priorities.
On our dedicated personal growth day:
The Release team schedules two optional social/gaming calls every month, one in APAC and one in EMEA time zones. Every month we rotate game selection amongst the team, making sure everyone has a chance to participate. Our primary goal is to build better relationships as a team, and more importantly to just have fun! If you'd like to join us, send a request in the #s_release Slack channel to get added to the invite.
The Release team has historically been responsible for dogfooding the releases feature. See the Dogfooding GitLab Releases page for more information.
There is a cheat sheet of useful information for anyone onboarding at GitLab, covering our environments, our practices, and advice for learning how we practice development here at GitLab. See the Onboarding Cheat Sheet for more information.
You can try out the features of the Release stage in test projects. These demonstrable projects are located in the test-group group.
There is no specific format for the test projects, however, keep in mind the following points:
The `README.md` should explain what features or categories are demonstrable.