We provide confidence in software through the delivery of meaningful, actionable automated testing results in GitLab.
The Verify:Testing Group provides automated testing integration into GitLab. We aim to allow software teams to be able to easily integrate many layers of testing into their GitLab CI workflow including:
See all current and planned category maturity on the Maturity page.
We want software teams to feel confident that the changes they introduce into their code are safe and conformant.
We measure the value we contribute by using a Product Performance Indicator. Our current PI for the Testing group is the Paid GMAU. This is a rolling count of unique users who have interacted with any paid Testing feature. We count only users interacting with paid features, as opposed to users on paid plans using any feature, because our team goal is to drive additional value in the Premium and Ultimate tiers within the Verify stage. We are tracking progress in this epic.
This funnel represents the customer journey and the various means a product manager may apply a Performance Indicator metric to drive a desired behavior in the funnel. This framework can be applied to any of the categories being worked on by Verify:Testing. The current priority is to increase Activation, Retention and Revenue within the Code Testing and Coverage Category.
The following people are permanent members of the Verify:Testing group:
This section will list the top three most recent, exciting accomplishments from the team.
Person | Role |
---|---|
Ricky Wiens | Backend Engineering Manager, Verify:Testing |
Miranda Fluharty | Frontend Engineer, Verify:Testing |
Scott Hampton | Senior Frontend Engineer, Verify:Testing |
Drew Cimino | Backend Engineer, Verify:Testing |
Erick Bajao | Senior Backend Engineer, Verify:Testing |
Maxime Orefice | Backend Engineer, Verify:Testing |
The following members of other functional teams are our stable counterparts:
Person | Role |
---|---|
James Heimbuck | Senior Product Manager, Verify:Testing |
Rayana Verissimo | Product Design Manager, CI/CD, Staff Product Designer, Verify:Testing, Staff Product Designer, Verify:Runner |
Marcel Amirault | Technical Writer, Verify (Continuous Integration, Pipeline Authoring, Testing), Growth (Adoption) |
You can view and contribute to our current list of JTBD and job statements here.
Like most GitLab backend teams we spend a lot of time working in Rails on the main GitLab app. Familiarity with Docker and Kubernetes is also useful on our team.
Note: This table will be replaced with an issue filter at a later date.
Flag name | Description |
---|---|
`junit_pipeline_screenshots_view` | Disabled by default until the first front end issue ships. Docs |
`ci_limit_test_reports_size` | |
`anonymous_visual_review_feedback` | Allows comments to be posted to Merge Requests directly from Visual Review Tools without authentication. Docs |
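In the GitLab Rails codebase, the code behind these flags is gated with `Feature.enabled?` checks. The sketch below is a simplified, standalone illustration of that guard pattern, not GitLab's actual `Feature` module: the hard-coded flag states and the `screenshots_payload` helper are hypothetical.

```ruby
# Simplified, standalone sketch of the feature-flag guard pattern.
# GitLab's real Feature module reads flag state from persistent storage;
# this stub hard-codes states purely for illustration.
module Feature
  STATES = {
    junit_pipeline_screenshots_view: false, # disabled by default
    anonymous_visual_review_feedback: true
  }.freeze

  def self.enabled?(name)
    STATES.fetch(name, false)
  end
end

# Hypothetical helper: the new code path is only reachable while the
# flag is enabled, so the change can be turned off instantly in production.
def screenshots_payload(pipeline)
  return {} unless Feature.enabled?(:junit_pipeline_screenshots_view)

  { screenshots: pipeline[:screenshots] }
end

puts screenshots_payload({ screenshots: ["a.png"] }).inspect # flag off → {}
```

Because every new code path sits behind such a guard, rolling a change back is a flag toggle rather than a revert and redeploy.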
Also known as the Three Amigos process, Quad-planning is the development of a shared understanding about the scope of work in an issue and what it means for the work represented in an issue to be considered done. When an SET counterpart is assigned, we will perform Quad-planning earlier in the planning process than described in the product workflow, beginning when the PM develops initial acceptance criteria. Once this occurs, the PM will apply the `quad-planning:ready` label. At this point, asynchronous collaboration will begin between product, engineering, UX, and quality. Supplying these distinct viewpoints at this early stage of planning is valuable to our team. This can take place more than one milestone into the future. Once the acceptance criteria are agreed upon, the `quad-planning:complete` label is applied.
We use a release planning issue to plan our release-level priorities over each milestone. This issue is used to highlight deliverables, capacity, team member holidays, and more. This allows team members and managers to see at a high-level what we are planning on accomplishing for each release, and serves as a central location for collecting information.
Before issues can be moved from the Planning Breakdown stage into the Ready for Development stage, they must have a weight applied. We use this table to decide what weight an issue should have. We are currently trialing an asynchronous refinement process in lieu of our Refinement meeting. This should help us be more Inclusive, and work more asynchronously. Engineers should take time each week to review the issues that are in Planning Breakdown for the next milestone and see what they can do to better refine and weight those issues.
During refinement we may discover that a new feature requires a blueprint, or the team may feel that input from maintainers would help scope down the problem, ensure the feature is performant, and/or reduce future technical debt. When this happens, the team will create a Technical Investigation issue. This issue will be assigned to one team member, who should spend no more than 5 days of work time on the investigation (ideally 1 day). They will answer the specific questions in the Technical Investigation issue before work on the feature is started. This process is analogous to the concept of a Spike.
The Technical Investigation process is an interim process and as such is subject to achieving measurable success before it is confirmed as a long-term process. We will use this process on a trial basis for the next 2 technically challenging issues that the team faces. The measurement for this process will be gauging:
During each milestone, we create a Release Post Checklist issue that is used by team members to help track the progress of the team release posts. We use the checklist issue to associate release post merge requests with links to the implementation issue, links to updated documentation, and list which engineers are working on each issue. The checklist issue provides a single place to see all this information.
Engineering managers apply the `Deliverable` label on issues that meet the following criteria:
We will use the `Stretch` label if we feel the issue meets most of the criteria but either contains some known unknowns, or it's uncertain whether we will have the capacity to complete the work in the current milestone. If an issue with a `Stretch` label is carried over into the next milestone, its label will change to `Deliverable`. The position in the issue board is what confers priority, rather than the presence of a `Deliverable` label. However, most high-priority issues are `Deliverable`.
There may be some issues in a milestone with neither the `Deliverable` nor the `Stretch` label, but we will strive to have the majority of issues labeled with one of these.
The team will trial this process for the 13.5 milestone and re-evaluate this labeling process.
Before the team will accept an issue into a milestone for work it must meet these criteria:
Issues that depend on another issue to be completed before they can be validated on Canary are considered blocked, and should have the `~workflow::blocked` label applied. These issues should also be marked as blocked within the related issues section of the issue.
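The label can be applied directly from an issue comment with GitLab's `/label` quick action, for example:

```plaintext
/label ~"workflow::blocked"
```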
The team has noticed an uptick in issues created and resolved within a milestone and wants to more easily understand what the nature of these issues are to improve our planning and scheduling capabilities.
We are trialling a process wherein issues that are follow-ups to other issues, and are being worked on in the same milestone they are created in, will have a `~follow-up` label added to them. At the end of 13.6 we'll discuss in the Retrospective whether or not the label has been effective in categorizing and tracking new issues, and decide whether to end or continue the process. Examples of items the team may create and label as `~follow-up` include, but are not limited to: feature scope creep, non-blocking requests from code review, additional UI polish, non-blocking refactoring, and Low Priority (P2 or lower) bug fixes.
Unless specifically mentioned below, the Verify:Testing group follows the standard engineering and product workflows.
Verify:Testing team members are encouraged to start looking for work from right to left on the milestone board. This is also known as "Pulling from the right". If there is an issue on the board that a team member can help move along, they should do so instead of starting new work. This includes conducting code review on issues that the team member is not assigned to, if they feel they can add value and help move the issue along the board.
Specifically this means, in order:
1. An item in the `workflow::in review` column
1. An item in the `workflow::in development` column
1. The top item in the `workflow::ready for development` column, OR an item the team member has investigated in order to apply an estimated weight, if they are unfamiliar with the top item

The goal with this process is to reduce WIP. Reducing WIP forces us to "Start less, finish more", and it also reduces cycle time. Engineers should keep in mind that the DRI for a merge request is the author(s); just because we are putting emphasis on the importance of teamwork does not mean we should dilute the fact that having a DRI is encouraged by our values.
Issues in "Planning Breakdown" and "Ready for Development" are in top-to-bottom priority order on the planning board. Issues further to the right on the issue board are not in vertical priority order. Rather, the further to the right an issue is on the board, the higher the priority which follows our "Pull from the right" philosophy of working.
Code reviews follow the standard process of using the reviewer roulette to choose a reviewer and a maintainer. The roulette is optional, so if a merge request contains changes that someone outside our group may not fully understand in depth, it is encouraged that a member of the Verify:Testing team be chosen for the preliminary review to focus on correctly solving the problem. The intent is to leave this choice to the discretion of the engineer but raise the idea that fellow Verify:Testing team members will sometimes be best able to understand the implications of the features we are implementing. The maintainer review will then be more focused on quality and code standards.
We also recommend that team members take some time to review each other's merge requests even if they are not assigned to do so, as described in the GitLab code review process. It is not necessary to assign anyone except the initial domain reviewer to your Merge Request. This process augmentation is intended to encourage team members to review Merge Requests that they are not assigned to. As a new team, reviewing each other's merge requests allows us to build familiarity with our product area, helps reduce the amount of investigation that needs to be done when implementing features and fixes, and increases our lottery factor. The more review we can do ourselves, the less work the maintainer will have to do to get the merge request into good shape.
This tactic also creates an environment to ask for early review on a WIP merge request where the solution might be better refined through collaboration and also allows us to share knowledge across the team.
Engineers are expected to confirm whether each of their issues is still `On Track` within the week, and to include the current health status in their update message.

When an engineer is actively working on an issue (workflow of `~workflow::"In dev"` or further right on the current milestone), they will periodically leave status updates as top-level comments in the issue. The status comment should include the updated health status, any blockers, notes on what was done, whether review has started, and anything else the engineer feels is beneficial. If there are multiple people working on the issue, also note whether the update covers front end or back end work. An update for each MR associated with the issue should be included in the update comment. Engineers should also update the health status of the issue at this time.
This update need not adhere to a particular format. Some ideas for formats:
Health status: (On track|Needs attention|At risk)
Notes: (Share what needs to be shared, especially when the issue needs attention or is at risk)
Health status: (On track|Needs attention|At risk)
What's left to be done:
What's blocking: (probably empty when on track)
## Update <date>
Health status: (On track|Needs attention|At risk)
What's left to be done:
#### MRs
1. !MyMR1
1. !MyMR2
1. !MyMR3
There are several benefits to this approach:
Some notes/suggestions:
In addition to the steps documented for developing with feature flags at GitLab, Verify:Testing engineers monitor their changes' impact on infrastructure using dashboards and logs where possible. Because feature flags give engineers complete control over their code in production, they also enable engineers to take ownership of monitoring the impact their changes have on production infrastructure. To monitor our changes, we use this helpful selection of dashboards, and specifically the Rails controller dashboard (Internal Only), for monitoring our changes in production. Metrics we evaluate include latency, throughput, CPU usage, memory usage, and database calls, depending on our change's expected impact and any considerations called out in the issue.
The goal of this process is to reduce the time that a change could potentially have an impact on production infrastructure to the smallest possible window. A side benefit of this process is to increase engineer familiarity with our monitoring tools, and develop more experience with predicting the outcomes of changes as they relate to infrastructure metrics.
After an engineer has ensured that the Definition of Done is met for an issue, they are the ones responsible for closing it. The engineer responsible for verifying an issue is done is the engineer who is the DRI for that issue, or the DRI for the final merge request that completes the work on an issue.
We meet on a weekly cadence for one synchronous meeting.
This synchronous meeting is to discuss anything that is blocking, or notable from the past week. This meeting acts as a team touchpoint.
We refine and triage the current and next milestone's issues asynchronously. We discuss priority, size issues, estimate the weight of issues, keep track of our milestone bandwidth and remove issues appropriately.
We use geekbot integrated with Slack for our daily async standup. The purpose of the daily standup meeting is to keep the team informed about what everyone is working on, and to surface blockers so we can eliminate them. The standup bot will run at 10am in the team member's local time and ask 2 questions:
We use a GitLab issue in this project for our monthly retrospective. The issue is created automatically towards the end of the current milestone. The purpose of the monthly retrospective issue is to reflect on the milestone and talk about what went well, what didn't go so well, and what we can do better.
We use a GitLab issue in this project for our quarterly issue weighting retrospective. The issue is created manually towards the end of the current quarter. The purpose of the quarterly issue weighting retrospective issue is to reflect on the issues that we did not weight accurately to learn from our experience. That way we can be more deliberate about continuously improving our ability to estimate and increase our velocity of value delivery to customers.
We have a monthly synchronous 30-minute think big meeting, followed the next week by a 30-minute think small meeting on the same topic as the previous think big meeting. This pair of meetings is modeled after the GitLab Product Manager deep dive interview. The purpose of these meetings is to discuss the vision, product roadmap, user research, design, and delivery around the Testing features. The goal is to align the team on our medium to long-term goals and ensure that our short-term goals are leading us in that direction. This meeting is useful for aligning the team with its stable counterparts and ensuring that engineers understand the big picture, so they know how their work fits into the long-term goals of the team.
Issues worked on by the testing group have a group label of ~"group::testing". Issues that contribute to the verify stage of the devops toolchain have the ~"devops::verify" label.
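Issues carrying both labels can be listed through the GitLab REST API's `/issues` endpoint, whose `labels` parameter takes a comma-separated list and matches issues that have all of the given labels. The helper below is a hypothetical sketch; only the endpoint path and parameter names come from the v4 API.

```ruby
require "uri"

# Hypothetical helper: builds a GitLab v4 REST API URL listing open issues
# that carry ALL of the given labels (comma-separated `labels` parameter,
# AND semantics). Scoped labels like group::testing are URL-encoded.
def issues_with_labels_url(base_url, labels)
  query = URI.encode_www_form(labels: labels.join(","), state: "opened")
  "#{base_url}/api/v4/issues?#{query}"
end

puts issues_with_labels_url("https://gitlab.com", ["group::testing", "devops::verify"])
```

A request to the resulting URL (with an appropriate access token) returns only issues labeled with both `~"group::testing"` and `~"devops::verify"`.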