At GitLab, quality is everyone's responsibility. The Quality Department ensures that everyone is aware, empirically, of the quality of the product.
The Quality direction for FY22 is to empower GitLab R&D teams and Contributors to deliver with quality at high velocity.
We aim to increase the focus on our community contributions. Below is a timeline of how we will measure and track this goal.
~"group::ecosystem" to provide feedback to improve contribution tooling (currently GDK).
| Team | GitLab.com handle | Slack channel | Slack handle |
| ---- | ----------------- | ------------- | ------------ |
| Dev QE team | | | |
| Ops & CI/CD QE team | | | |
| Secure & Enablement QE team | | | |
| Growth QE team | | | |
| Engineering Productivity team | | | |
| Name | Role |
| ---- | ---- |
| Mek Stittri | Director of Quality Engineering |
| Ramya Authappan | Quality Engineering Manager, Dev |
| Tanya Pazitny | Quality Engineering Manager, Secure & Enablement |
| Kyle Wiebers | Backend Engineering Manager, Engineering Productivity |
| Joanna Shih | Quality Engineering Manager, Ops |
| Vincy Wilson | Quality Engineering Manager, Growth, Fulfillment & Protect |
| Rémy Coutable | Staff Backend Engineer, Engineering Productivity |
| Mark Fletcher | Backend Engineer, Engineering Productivity |
| Jen-Shin Lin | Senior Backend Engineer, Engineering Productivity |
| Dan Davison | Senior Software Engineer in Test, Fulfillment:License |
| Mark Lapierre | Senior Software Engineer in Test, Create:Source Code |
| Sanad Liaquat | Senior Software Engineer in Test, Manage:Access |
| Tomislav Nikić | Software Engineer in Test, Create:Code Review |
| Zeff Morgan | Senior Software Engineer in Test, Verify:Runner |
| Désirée Chevalier | Software Engineer in Test, Plan:Project Management |
| Grant Young | Staff Software Engineer in Test, Enablement:Memory |
| Jennie Louie | Software Engineer in Test, Enablement:Geo |
| Nailia Iskhakova | Senior Software Engineer in Test, Enablement:Distribution |
| Albert Salim | Senior Backend Engineer, Engineering Productivity |
| Erick Banks | Senior Software Engineer in Test, Enablement:Search |
| Tiffany Rea | Software Engineer in Test, Verify:Continuous Integration |
| Sofia Vistas | Software Engineer in Test, Package:Package |
| Anastasia McDonald | Software Engineer in Test, Create:Editor |
| Will Meek | Senior Software Engineer in Test, Secure:Composition Analysis |
| Nick Westbury | Senior Software Engineer in Test, Enablement:Geo |
| Chloe Liu | Senior Software Engineer in Test, Fulfillment:Purchase |
| Andrejs Cunskis | Senior Software Engineer in Test, Manage:Import |
Every Software Engineer in Test (SET) takes part in building our product as a DRI in GitLab's Product Quad DRIs. They work alongside Development, Product, and UX in the Product Development Workflow. As stable counterparts, SETs should be considered critical members of the core team, alongside Product Designers, Engineering Managers, and Product Managers.
Every Quality Engineering Manager is aligned with an Engineering Director in the Development Department. They work at a higher level and align cross-team efforts that map to a Development Department section. The area a Quality Engineering Manager is responsible for is defined in the Product Stages and Groups and is part of their title in the team org chart. The exception is the Engineering Productivity team, which is based on the span of control.
Full-stack Engineering Productivity Engineers develop internal and external features that improve the efficiency of engineers and development processes. Their work is separate from the regular release kickoff features per area of responsibility.
We staff our department with the following gearing ratios:
Group Conversations (GCs) run on an 8-week cadence. The Quality department has its own, and we also contribute content to the GC presentation slides for our counterpart product sections.
This is a company-wide discussion where we highlight achievements, challenges, and progress of the department.
You can look up the historical prep of our group conversations using the
group-conversation label in our issue tracker.
Quality Engineering Managers are responsible for contributing Quality-focused content to the GC slides for their counterpart product sections. At a minimum, details about our related OKRs should be shared, but other info can be shared as appropriate. In general, aim to keep the slides informative yet brief and few in number, since we do have our own GC during which we can share more details. Following are some ideas for suggested content.
By the end of the week, we populate the Engineering Week-in-Review document with relevant updates from our department. The agenda is internal only; please search Google Drive for 'Engineering Week-in-Review'. Every Monday a reminder is sent to all of Engineering in the #eng-week-in-review Slack channel to read the summarized updates in the Google doc.
We try to have as few meetings as possible. We currently have 3 recurring meetings for the whole department. Everyone in the Department is free to join and the agenda is available to everyone in the company. Every meeting is also recorded.
The Quality team holds an asynchronous retrospective for each release. The process is automated and notes are captured in Quality retrospectives (GITLAB ONLY)
We track work regarding performance indicators in the Engineering Metrics board.
The purpose of the board is to:
The work effort on Engineering Division and Departments' KPIs/RPIs is currently a shared capacity of the following:
This group also maintains the Engineering Metrics page.
The Engineering Metrics board is structured by the tasks and asks within each Engineering Department.
The ownership of the work columns are as follows:
We still need to establish a triage process for open or generic issues.
~"Engineering Metrics" to be added to the Engineering Metrics board.
~KPI. If it is a regular performance indicator, apply
We have top-level boards (at the gitlab-org level) to communicate what is being worked on for all teams in Quality Engineering.
Each board has a cut-line on every column that is owned by an individual. Tasks can be moved vertically to be above or below the cut-line.
The cut-line is used to determine team member capacity; it is assigned to the Backlog milestone. The board itself pulls from any milestone as a catch-all, so we have insights into past, current, and future milestones.
The cut-line also serves as a healthy discussion point between engineers and their manager in their 1:1s. Every task on the board should be sized according to our weight definitions.
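As an illustration of the cut-line mechanic described above, it can be modeled as a running sum of issue weights against a capacity budget: issues whose cumulative weight fits within capacity sit above the line, the rest fall below. This is a hypothetical sketch, not actual board tooling; the issue names, weights, and capacity value are invented.

```python
# Hypothetical model of the cut-line: issues are ordered by priority, and
# the line falls where cumulative weight would exceed the milestone capacity.
# Issue names and weights below are illustrative, not real board data.

def split_at_cutline(issues, capacity):
    """Return (above, below) lists of issue names, given (name, weight) tuples."""
    above, below, total = [], [], 0
    for name, weight in issues:
        total += weight
        (above if total <= capacity else below).append(name)
    return above, below

backlog = [("flaky spec triage", 2), ("geo e2e coverage", 3), ("review app fix", 5)]
above, below = split_at_cutline(backlog, capacity=6)
print(above)  # issues committed to this milestone: first two fit within capacity 6
print(below)  # issues deferred below the cut-line
```

In this model, reordering the backlog moves tasks vertically across the cut-line, which mirrors how issues can be dragged above or below it on the board.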
~"workflow::blocked" to indicate a blocked issue.
The boards serve as a single pane of glass view for each team and help in communicating the overall status broadly, transparently and asynchronously.
Every member of the Quality Department shares the responsibility of analyzing the daily QA tests against
More details can be seen here
Every manager and director in the Quality Department shares the responsibility of monitoring new and existing incidents and responding or mitigating as appropriate. Incidents may require review of test coverage, test planning, or updated procedures, as examples of follow-up work which should be tracked by the DRI.
The Quality Department has a rotation for incident management. The rotation can be seen here.
Please note: Though there is a rotation for DRI, any manager or director within Quality can step in to help in an
urgent situation if the primary DRI is not available. Don't hesitate to reach out in the Slack channel
Below are a few venues of collaboration with the Development department.
To mitigate high priority issues like performance bugs and transient bugs, Quality Engineering will triage and refine those issues for Product Management and Development via a bi-weekly Bug Refinement process.
Quality Engineering will do the following in order to identify the issues to be highlighted in the refinement meeting:
Quality Engineering will track productivity, metric and process automation improvement work items
in the Development-Quality board to service the Development department.
Requirements and requests are to be created with the label
~dev-quality. The head of both departments will review and refine the board on an on-going basis.
Issues will be assigned to and worked on by an Engineer in the Engineering Productivity team and communicated broadly when each work item is completed.
Moved to release documentation.
The Quality department collaborates with the Security department's compliance team to handle requests from customers and prospects.
The Risk and Field Security team maintains the current state of answers to these questions, please follow the process to request completion of assessment questionnaire.
If additional input is needed from the Quality team, the DRI for this is the Director of Quality. Tracking of supplemental requests will be via a confidential issue in the compliance issue tracker. Once the additional inputs have been supplied, this is stored in the Compliance team's domain for efficiency.
| Recurring event | Primary DRI | Backup DRI | Cadence | Format |
| --------------- | ----------- | ---------- | ------- | ------ |
| Engineering Key review | | | Every 4 weeks | Review meeting |
| | | | Every 8 weeks | Group Conversations |
| Product Stage Group Conversation content | | | Every 8 weeks | Quality team OKR slide contributions to the counterpart product section |
| GitLab SaaS Infrastructure Weekly | QEM rotation between | | Weekly | Incident review and corrective action tracking |
| Incident management | Rotates between @jo_shih, @tpazitny, and @vincywilson | All managers | Weekly | Incident monitoring, response, and management as needed to represent Quality |
| Self-managed environment triage | | | Every 2 weeks | Sync stand-up |
| | | | Every 2 weeks | Review meeting |
| Security Vulnerability review | | | Every 4 weeks | Review meeting |
| Quality Engineering Staff | | | | |
| Quality Engineering Bi-Weekly | All managers | | Every 2 weeks | Review meeting |
| Ops section stakeholder review | | | Every 4 weeks | Review meeting |
| Quality Department Social Call | All team members | All team members | Every 2 weeks | Meet and Greet |
Due to the volume of issues, one team cannot handle the triage process alone. We created Triage Reports to scale the triage process horizontally within Engineering.
More on our Triage Operations
The GitLab test automation framework is distributed across two projects:
The Quality Department is committed to ensuring that self-managed customers have performant and scalable configurations. To that end, we are focused on creating a variety of tested and certified Reference Architectures. Additionally, we have developed the GitLab Performance Tool, which provides several tools for measuring the performance of any GitLab instance. We use the tool every day to monitor for potential performance degradations, and it can also be used by GitLab customers to directly test their on-premise instances. More information is available on our Performance and Scalability page.
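To give a flavor of the kind of summary a performance-monitoring tool like this reports, the sketch below computes nearest-rank response-time percentiles from a list of already-collected latencies. It is a minimal illustration, not the GitLab Performance Tool itself; the sample latency values are invented.

```python
# Minimal sketch of per-endpoint latency summarization, in the spirit of the
# numbers a performance tool reports. The sample latencies are invented.
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 95, 240, 110, 400, 130, 105, 98, 150, 220]
print("p50:", percentile(latencies_ms, 50))   # p50: 120
print("p90:", percentile(latencies_ms, 90))   # p90: 240
print("mean:", statistics.mean(latencies_ms)) # mean: 166.8
```

Tracking percentiles rather than just the mean is what makes day-to-day degradation monitoring useful: a regression in the slowest requests shows up in p90 long before it moves the average.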