Because this group works on components of the application that have a far-reaching impact, we take these extra steps in order to reduce our risk of a production incident:
The `#support_gitlab-com` Slack channel will also be notified if necessary.
Approvals are required via the `CODEOWNERS` feature of GitLab.
From time to time, there will be ad hoc work and questions that arise, such as Slack questions, questions in Issues, Error Budget investigations, etc. All Compliance group members are encouraged to watch these mediums and engage.
As first responder, we will acknowledge the ad hoc work / question in the appropriate medium. This is to ensure that the questioner knows that we are on it.
Similar to Spikes, as a rule of thumb, if the work will take longer than 1 hour to investigate and respond, then create an issue. This is to ensure that this work is accounted for, is transparent and has a DRI. This can be done through asking the questioner to create an issue, or taking ownership and creating the issue ourselves.
We try to ensure that the ad hoc work Issue has as much info as possible, asking for more info where required. Before working on an Issue, we make sure to define a clear question to answer or problem to solve.
Next, we take ownership of the Issue, assigning it to ourselves. We also add the :eyes: reaction to the Slack message/Issue comment to indicate that investigation has begun, and ensure correct label hygiene (`~type::`, `~group::`, `~priority::`, `~workflow::`). In the comments, ping the Compliance EM and PM to ensure transparency. Depending on priority, the Issue may need to go through the cross-functional prioritization process to get planned, scheduled, and completed.
Once the investigation is complete, we follow up in the original medium. We also add the :white_check_mark: reaction to the original Slack message/Issue comment to indicate that investigation is complete.
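The assignment and labelling steps above can be applied in a single issue comment using GitLab quick actions. For example (the specific label values here are illustrative):

```plaintext
/assign me
/label ~"type::bug" ~"group::compliance" ~"priority::2" ~"workflow::ready for development"
```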
We use spikes to conduct research, prototype, and investigate in order to gain knowledge, reduce the risk of a technical approach, and better understand a requirement.
When we identify the need for a spike, we create a new issue, clearly label it as such (example), conduct the spike, and document the findings in the spike issue.
Spikes are great for accounting for ad hoc work and questions that come up during a Milestone. As a rule of thumb, if the work will take longer than 1-2 hours, create a spike issue.
Spike issues, like any other issue, go through the cross-functional prioritization process to be planned, scheduled, and completed.
Before working on a spike we make sure to clearly define:
We strongly believe in Iteration and delivering value in small pieces. Iteration can be hard, especially when you lack product context or are working in a particularly risky/complex part of the codebase. If you are struggling to estimate an issue or determine whether it is feasible, it may be appropriate to first create a proof-of-concept MR. The goal of a proof-of-concept MR is to remove any major assumptions during planning and provide early feedback, therefore reducing risk from any future implementation.
The need for a proof-of-concept MR may signal that parts of our codebase or product have become overly complex. It's always worth raising the MR as part of the retrospective so we can discuss how to avoid this step in future.
Everyone at GitLab has the freedom to manage their work as they see fit, because we measure results, not hours. Part of this is the opportunity to work on items that aren't scheduled as part of the regular monthly release. This is mostly a reiteration of items elsewhere in the handbook, and it is here to make those explicit:
When you pick something to work on, please:
We plan in monthly cycles in accordance with our Product Development Timeline. Our typical planning cycle looks like this:
The Product Manager will prioritize ~"type::feature" issues, the Engineering Manager will prioritize ~"type::maintenance" issues, and the Quality Manager will prioritize ~"type::bug" issues.
Issues will be in either the validation phase (~"workflow::solution validation") or the implementation phase (~"workflow::ready for development").
The ~"workflow::ready for development" label can then be added to this work, ready for the Milestone to start.
As a rule of thumb, there should be roughly one issue in validation (~"workflow::solution validation") per person at the start of a Milestone.
Our priorities should follow overall guidance for Product. This should be reflected in the priority label for scheduled issues:
| Priority | Description | Probability of shipping in milestone |
| -------- | ----------- | ------------------------------------ |
| `priority::1` | Urgent: top priority for achieving in the given milestone. These issues are the most important goals for a release and should be worked on first; some may be time-critical or unblock dependencies. | ~100% |
| `priority::2` | High: important issues that have significant positive impact to the business or technical debt. Important, but not time-critical or blocking others. | ~75% |
| `priority::3` | Normal: incremental improvements to existing features. These are important iterations, but deemed non-critical. | ~50% |
| `priority::4` | Low: stretch issues that are acceptable to postpone into a future release. | ~25% |
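Treating the probabilities above as weights gives a rough capacity check during planning. The following is a hypothetical sketch, not an official tool; the label names and percentages come from the table:

```python
# Probability of shipping in a milestone, per priority label (from the table above).
SHIP_PROBABILITY = {
    "priority::1": 1.00,
    "priority::2": 0.75,
    "priority::3": 0.50,
    "priority::4": 0.25,
}

def expected_shipped(priorities):
    """Expected number of issues shipped, given the priority label of each scheduled issue."""
    return sum(SHIP_PROBABILITY[p] for p in priorities)
```

For example, a milestone with one `priority::1`, one `priority::2`, and one `priority::4` issue would be expected to ship about two of the three.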
We generally follow the Product Development Flow:
- `workflow::problem validation` - needs clarity on the problem to solve
- `workflow::design` - needs a clear proposal (and mockups for any visual aspects)
- `workflow::solution validation` - needs refinement and acceptance from engineering
- `workflow::planning breakdown` - needs a Weight estimate
- `workflow::scheduling` - needs a milestone assignment
- `workflow::ready for development`
- `workflow::verification` - code is in production and pending verification by the DRI engineer
- `workflow::complete` - the work has been verified and is complete; the issue should be closed at this stage
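Because the stages above are ordered, simple tooling can sanity-check label transitions. An illustrative sketch, assuming only the stages listed here (the real flow may include additional states):

```python
# Workflow stages in order, from the Product Development Flow list above.
WORKFLOW_STAGES = [
    "workflow::problem validation",
    "workflow::design",
    "workflow::solution validation",
    "workflow::planning breakdown",
    "workflow::scheduling",
    "workflow::ready for development",
    "workflow::verification",
    "workflow::complete",
]

def is_forward_move(current: str, new: str) -> bool:
    """Return True if a label change moves the issue toward completion."""
    return WORKFLOW_STAGES.index(new) > WORKFLOW_STAGES.index(current)
```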
Generally speaking, issues are in one of two states:
Basecamp thinks about these stages in relation to the climb and descent of a hill.
While individual groups are free to use as many stages in the Product Development Flow workflow as they find useful, we should be somewhat prescriptive on how issues transition from discovery/refinement to implementation.
The end goal is defined when all direct stakeholders say "yes, this is ready for development". Some issues get there quickly; some require a few passes back and forth to figure out.
The goal is for engineers to have buy-in and feel connected to the roadmap. By having engineering included earlier on, the process can be much more natural and smooth. To do so, engineering managers, engineers, and designers can be pinged directly from the issue.
To move to the implementation phase, all issues should have an Implementation Plan and a Weight.
Backlog management is very challenging, but we try to manage ours with the use of labels and milestones.
To identify issues that need refinement, use the "Next Up" label.
The purpose of the "Next Up" label is to identify issues that are currently in any workflow stage before ~"workflow::ready for development". By using this "Next Up" label in addition to workflow labels, we're able to see exactly what is being refined, e.g., problem, design, solution. This helps identify which issues are closer to being ready to schedule.
Issues shouldn't receive a milestone for a specific release (e.g. 13.0) until they've received a 👍 from both Product and Engineering. This also means the issue should not be labeled ~"workflow::ready for development".
* Product approval is represented by an issue moving into
* Engineering approval is represented by an issue weight measuring its complexity.
Before work can begin on an issue, we first estimate it after a preliminary investigation.
If the scope of work of a given issue touches several disciplines (docs, design, frontend, backend, etc.) and involves significant complexity across them, consider creating separate issues for each discipline (see an example).
Issues without a weight should be assigned the ~"workflow::planning breakdown" label.
When estimating development work, please assign an issue an appropriate weight:
| Weight | Description |
| ------ | ----------- |
| 1 | The simplest possible change. We are confident there will be no side effects. |
| 2 | A simple change (minimal code changes), where we understand all of the requirements. |
| 3 | A simple change, but the code footprint is bigger (e.g. lots of different files or tests affected). The requirements are clear. |
| 5 | A more complex change that will impact multiple areas of the codebase; there may also be some refactoring involved. Requirements are understood, but you feel there are likely to be some gaps along the way. We should challenge ourselves to break this issue into smaller pieces. |
| 8 | A complex change that will involve much of the codebase or will require lots of input from others to determine the requirements. These issues will often need further investigation or discovery before being scheduled. |
| 13 | A significant change that may have dependencies (other teams or third parties), and we likely still don't understand all of the requirements. It's unlikely we would commit to this in a milestone, and the preference would be to further clarify requirements and/or break it into smaller Issues. |
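The weight table above can be condensed into a small lookup for triage scripts. A hypothetical sketch (the guidance strings are paraphrased from the table and are not an official API):

```python
def weight_guidance(weight: int) -> str:
    """Map an estimated issue weight to the scheduling guidance in the table above."""
    if weight in (1, 2, 3):
        return "schedule"          # simple, well-understood change
    if weight in (5, 8):
        return "consider breaking down"  # complex; gaps or discovery likely
    if weight == 13:
        return "clarify requirements / split before committing"
    raise ValueError(f"not a valid weight: {weight}")
```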
As part of estimation, if you feel the issue is in an appropriate state for an engineer to start working on it, please add the ~"workflow::ready for development" label. Alternatively, if there are still requirements to be defined or questions to be answered that you feel an engineer won't be able to easily resolve, please add the ~"workflow::blocked" label. Issues with the ~"workflow::blocked" label will appear in their own column on our planning board, making it clear that they need further attention. When applying the ~"workflow::blocked" label, please make sure to leave a comment and ping the DRI on the blocked issue and/or link the blocking issue to raise visibility.
For engineers, you may want to create an implementation approach when moving an issue out of ~"workflow::planning breakdown". A proposed implementation approach isn't required to be followed, but is helpful to justify a recorded weight.
As the DRI for ~"workflow::planning breakdown", consider following the example below to signal the end of your watch and the issue's readiness to move into scheduling. While more straightforward issues that have already been broken down may use a shorter format, the plan should (at a minimum) always justify the "why" behind an estimate.
The following is an example of an implementation approach from https://gitlab.com/gitlab-org/gitlab/-/issues/247900#implementation-plan. It illustrates that the issue should likely be broken down into smaller sub-issues for each part of the work:
### Implementation approach

~database

1. Add new `merge_requests_author_approval` column to `namespace_settings` table (The final table is TBD)

~"feature flag"

1. Create new `group_merge_request_approvers_rules` flag for everything to live behind

~backend

1. Add new field to `ee/app/services/ee/groups/update_service.rb:117`
1. Update `ee/app/services/ee/namespace_settings/update_service.rb` to support more than just one setting
1. *(if feature flag enabled)* Update the `Projects::CreateService` and `Groups::CreateService` to update newly created projects and sub-groups with the main groups setting
1. *(if feature flag enabled)* Update the Groups API to show the settings value
1. Tests tests and more tests :muscle:
   - In particular, cover both happy and unhappy paths, and consider tests for scenarios that could result in false positives or negatives

~frontend

1. *(if feature flag enabled)* Add new `Merge request approvals` section to Groups general settings
1. Create new Vue app to render the contents of the section
1. Create new setting and submission process to save the value
1. Tests tests and more tests :muscle:
   - In particular, cover both happy and unhappy paths, and consider tests for scenarios that could result in false positives or negatives

~documentation

1. Update docs page eg https://docs.gitlab.com/ee/administration/audit_events.html
1. Update the GraphQL examples https://gitlab.com/gitlab-org/govern/compliance/graphql-example-requests

~quality

1. Add new group-level end-to-end test based on existing project-level end-to-end test (include the path to the existing test eg `path/to/existing_test`)
The DRI will ping a relevant counterpart (Quality, UX, etc.) and domain expert (database, backend, frontend) before moving the issue to ~"workflow::scheduling". This gives the domain expert the opportunity to approve the implementation plan or raise any potential pitfalls or concerns before work begins.
For domain expert review of an implementation plan: for trivial changes, approval can be solicited from any of the relevant compliance development team members, although do try to find a person who has context around the topic. For non-trivial changes, opinions should be solicited from all of the relevant compliance backend and/or frontend team members by tagging the respective group (for example, @gitlab-org/govern/compliance/backend) in an issue comment. Whether a change is trivial or non-trivial is at the discretion of the DRI and the initial domain expert asked for review.
Once an issue has been estimated, it can then be moved to ~"workflow::scheduling" to be assigned a milestone before finally being marked ~"workflow::ready for development".
Depending on the complexity of an issue, it may be necessary to break down or promote issues. A couple of sample scenarios:
If none of the above applies, then the issue is probably fine as-is! It's likely then that the weight of this issue is quite low, e.g., 1-2.
Issue verification should be done by someone other than the MR author. This decreases the chance of defects reaching production and brings a different perspective to cover more test cases.
Engineers should apply the ~"workflow::verification" label and wait until they receive notification that their work has been deployed to staging via the release issue email.
For ~"type::feature" or big changes, the engineer should verify again once the change is available on .com/production and leave a comment summarizing the testing that was completed. Also provide a link to a project or page, if applicable.
Issues in the ~"workflow::verification" state are assigned randomly by the triage bot, based on the verification policy, to an applicable team engineer. This engineer should then additionally verify the issue.
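The random-assignment behaviour described above can be sketched as follows. This is an illustration only, not the actual triage bot, whose verification policy lives in its own project:

```python
import random

def pick_verifier(engineers, mr_author, rng=random):
    """Pick a random verifier who is not the MR author, per the policy above."""
    candidates = [e for e in engineers if e != mr_author]
    if not candidates:
        raise ValueError("no eligible verifier")
    return rng.choice(candidates)
```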
In cases where verification in staging or production is unfeasible, the staging-ref environment may be used. For complex setups, the DRI for the MR should work with a domain expert to ensure verification steps are clear and correct.
Verifier: the engineer verifying the issue on .com/production (not the MR author)
As part of the ~"workflow::verification" process, we determine whether the Issue requires a demo. If unsure, work with the PM to determine if a demo is required. Demos are great for showcasing progress and help users quickly understand how to use a feature and its benefits. Our process for this is similar to the Single Engineer Groups Demo:
For issues which need to be announced in more detail, a release post can be automatically created using the issue. When working on an issue, either in planning, or during design and development, you can use the release post item generator to have the release post created and notify all the relevant people.
If you do not want an issue to have a release post, make sure that the issue does not have a release notes section, or use a `release post item::` label.
Although we have a bias for asynchronous communication, synchronous meetings are necessary and should adhere to our communication guidelines. Some regular meetings that take place in Compliance are:
| Frequency | Meeting | DRI | Purpose |
| --------- | ------- | --- | ------- |
| Weekly (alternating between APAC/EMEA and AMER) | Group-level meeting | Engineering Manager | Ensure current release is on track by walking the board, unblock specific issues |
| Monthly | Planning meeting | Product Manager | See Planning section |
For one-off, topic specific meetings, please always consider recording these calls and sharing them (or taking notes in a publicly available document).
Agenda documents and recordings can be placed in the shared Google drive (internal only) as a single source of truth.
Meetings that are not 1:1s or covering confidential topics should be added to the Govern Shared calendar.
All meetings should have an agenda prepared at least 12 hours in advance. If this is not the case, you are not obligated to attend the meeting. Consider meetings canceled if they do not have an agenda by the start time of the meeting.
The EM will usually create a general update for the group on what is happening within the company and within the group on a weekly basis. This update currently takes the form of an issue within the compliance update Epic.
The Compliance EM also contributes to issues in the Govern stage weekly updates epic.
The following people are permanent members of the group:
| Person | Role |
| ------ | ---- |
| Derek Ferguson | Senior Product Manager, Govern:Compliance and Secure:Dynamic Analysis |
| Camellia X. Yang | Senior Product Designer, Govern:Compliance and Govern:Security Policies |
| Evan Read | Senior Technical Writer, Govern:Compliance, Manage:Import and Integrate, Systems:Distribution, Systems:Gitaly |
| Aaron Huntsman | Senior Backend Engineer, Govern:Compliance |
| Harsimar Sandhu | Backend Engineer, Govern:Compliance |
| Hitesh Raghuvanshi | Senior Backend Engineer, Govern:Compliance |
| Huzaifa Iftikhar | Senior Backend Engineer, Govern:Compliance |
| Illya Klymov | Senior Frontend Engineer, Govern:Compliance |
| Jay Montal | Fullstack Engineer, Govern:Compliance |
| Mark Lapierre | Senior Software Engineer in Test, Govern:Compliance |
| Nathan Rosandich | Fullstack Engineering Manager, Govern:Compliance |
| Sam Figueroa | Fullstack Engineer, Govern:Compliance |
| Michael Becker | Senior Backend Engineer, Govern:Compliance |
Product performance indicators / North star metrics
(Sisense↗) We also track our backlog of issues, including past due security and infradev issues, and total open System Usability Scale (SUS) impacting issues and bugs.
(Sisense↗) MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
(Sisense↗) Flaky tests are problematic for many reasons.