In FY23 we will focus on innovative test architecture, efficiency, and customer results while delivering impact to the company's bottom line via alignment to the top cross-functional initiatives. Key directional highlights: broaden our lead in ensuring self-managed excellence, improve deployment confidence, drive visibility and actionability of test results, and expand our Quality Architecture focus. In FY23 we anticipate continued momentum on enabling development and deployment at scale, and it's more important than ever for us to deliver results.
Quality Engineering owns three tools that form a trident for Self-Managed Excellence: the GitLab Environment Toolkit (GET), the GitLab Performance Tool (GPT), and the Reference Architectures (RA). Together, these tools support our broader strategy of cementing customer confidence and contributing to their ongoing success by ensuring their instances are built to a rigorously tested standard that performs smoothly at scale.
For more information, please visit our Self-Managed Excellence page.
Given rapidly evolving technologies and our drive to provide a world class experience for GitLab users, Quality Engineering must meet the increasing demands of efficient, intelligent test coverage and confidence at scale. We must test the right things at the right time. To that end, this year we are exploring several new testing types and visibility improvements to increase the actionability, speed, and sophistication of our various test suites.
Quality Engineering has been key to supporting prospect POVs and providing prompt, knowledgeable troubleshooting for external customers, but we have a deep commitment to supporting our internal customers as well. We will expand the Deploy with Confidence foundation we began last year in collaboration with our Infrastructure and Development Departments, and we will seek input on how our processes and tools can be improved.
Quality Engineering is a function under the Quality Department, operating as several teams of Software Engineers in Test, each led by a Quality Engineering Manager reporting to the Quality Engineering Sub-Department Leader.
Quality Engineering is actively hiring! Please view our jobs page to read more and apply.
| Person | Role |
|--------|------|
| Joanna Shih | Manager, Quality Engineering, Ops & Analytics |
| Kassandra Svoboda | Manager, Quality Engineering, Enablement & SaaS Platform |
| Ramya Authappan | Manager, Quality Engineering, Dev |
| Vincy Wilson | Senior Manager, Quality Engineering, Enablement, Fulfillment, Growth, Sec and Data Science |
| Andrejs Cunskis | Senior Software Engineer in Test, Manage:Import |
| Aleksandr Lyubenkov | Senior Software Engineer in Test, Verify:Runner |
| Anastasia McDonald | Senior Software Engineer in Test, Create:Source Code |
| Andy Hohenner | Senior Software Engineer in Test, SaaS Platforms:US Public Sector Services |
| Brittany Wilkerson | Senior Software Engineer in Test, SaaS Platforms:US Public Sector Services |
| Careem Ahamed | Senior Software Engineer in Test, Secure:Static Analysis |
| Carlo Catimbang | Senior Software Engineer in Test, Analytics:Product Intelligence |
| Chloe Liu | Senior Software Engineer in Test, Fulfillment:Purchase |
| Dan Davison | Staff Software Engineer in Test, Fulfillment:Provision |
| Désirée Chevalier | Senior Software Engineer in Test, Plan:Project Management |
| Edgars Brālītis | Senior Software Engineer in Test, Fulfillment:Utilization |
| Erick Banks | Senior Software Engineer in Test, Data Stores:Global Search |
| Grant Young | Staff Software Engineer in Test, Enablement:Distribution |
| Harsha Muralidhar | Senior Software Engineer in Test, Govern:Threat Insights |
| Jay McCure | Senior Software Engineer in Test, Create:Code Review |
| John McDonnell | Senior Software Engineer in Test, Systems:Gitaly |
| Jason Zhang | Senior Software Engineer in Test, Create:Editor |
| Mark Lapierre | Senior Software Engineer in Test, ModelOps:AI Assisted |
| Nailia Iskhakova | Senior Software Engineer in Test, Enablement:Distribution |
| Nick Westbury | Senior Software Engineer in Test, Enablement:Geo |
| Nivetha Prabakaran | Software Engineer in Test, Package:Package Registry |
| Richard Chong | Senior Software Engineer in Test, Verify:Pipeline Execution |
| Sanad Liaquat | Staff Software Engineer in Test, Manage:Authentication and Authorization |
| Sean Gregory | Senior Software Engineer in Test, Manage:Integrations |
| Sofia Vistas | Senior Software Engineer in Test, Package:Container Registry |
| Tiffany Rea | Senior Software Engineer in Test, Verify:Pipeline Authoring |
| Valerie Burton | Software Engineer in Test, Manage:Organization |
| Vishal Patel | Software Engineer in Test, Enablement:Distribution |
| Will Meek | Senior Software Engineer in Test, Secure:Composition Analysis |
| Zeff Morgan | Senior Software Engineer in Test, Verify:Runner |
Feel free to reach out to us by opening an issue on the Quality Team Tasks project or contacting us in one of the Slack channels listed below.
| Team | GitLab.com handle | Slack channel | Slack handle |
|------|-------------------|---------------|--------------|
| Dev QE team | | | |
| Ops & Analytics QE team | | | |
| Enablement & SaaS Platforms QE team | | | |
| Sec & Data Science QE team | | | |
| Fulfillment & Growth QE team | | | |
While Quality Engineering operates as several teams, we emphasize meeting the prioritization and needs of Engineering Leaders via stable counterparts.
Every Software Engineer in Test (SET) takes part in building our product as a DRI in GitLab's Product Quad DRIs. They work alongside Development, Product, and UX in the Product Development Workflow. As stable counterparts, SETs should be considered critical members of the core team between Product Designers, Engineering Managers and Product Managers.
Every Quality Engineering Manager (QEM) is aligned with an Engineering Director in the Development Department. They work at a higher level and align cross-team efforts that map to a Development Department section. The area a QEM is responsible for is defined in the Product Stages and Groups and is part of their title.
We use type labels to track three work types in issues and MRs: feature, maintenance, and bug. UX Leadership are active participants in influencing the prioritization of all three work types.
QEMs meet with their PM, EM, and UX counterparts to discuss the priorities for the upcoming milestone. The purpose is to ensure that everyone understands the requirements and to assess whether there is capacity to complete all of the proposed issues.
For product groups with a SET counterpart, QEMs are encouraged to delegate bug prioritization to the SET as the bug subject matter expert for that group. In these situations, QEMs should provide guidance and oversight as needed by the SET and should still maintain broad awareness of bug prioritization for these delegated groups.
While we follow the product development timeline, it is recommended that you work with your counterparts to discuss upcoming issues in your group's roadmap before they are marked as deliverables for a particular milestone. There will be occasions where priorities shift and changes must be made to milestone deliverables. We should remain flexible and understanding in these situations, while doing our best to ensure these exceptions do not become the rule.
Section-level members of the quad are QEMs, Directors of Development, Directors of Product Management, and Product Design Managers aligned to the same section. These counterparts will review their work type trends on a monthly basis.
We have compiled a number of tips and tricks we have found useful in day-to-day Quality Engineering related tasks.
For more information, please visit our tips and tricks page.
The Quality Engineering Sub-Department has two on-call rotations: pipeline triage (SET-led) and incident management (QEM-led). These are scheduled in advance to share the responsibilities of debugging pipeline failures and representing Quality in incident responses.
For more information, please visit our on-call rotation page.
The Quality Engineering Sub-Department helps facilitate the quad-planning process. This process brings together Product Management, Development, UX, and Quality with the aim of making test planning a topic before the development of any feature.
For more information, please visit our quad planning page.
Reliable tests have met stricter reliability criteria than other tests in our test suite. When a failure is seen in a reliable test, it's less likely to be flakiness and more likely to be a true issue.
For more information, please visit our reliable tests page.
The Quality Engineering Sub-Department helps facilitate the risk mapping process. This requires the participation of Product Management, Development, UX, and the Quality team to develop a strategic approach to risk and mitigation planning.
For more information, please visit our risk mapping page.
The Quality Engineering Sub-Department helps facilitate the test planning process for all things related to Engineering work.
For more information, please visit our test engineering page.
If you need to debug a test failure, please visit our debugging QA pipeline test failures page.
The Quality Engineering Sub-Department maintains ChatOps commands for the Quality Department, which provide quick access to various information in Slack. These commands can be run in any Slack channel that has the GitLab ChatOps bot, such as the #quality and #chat-ops-test channels.
Commands that are currently available are:
| Command | Description |
|---------|-------------|
| | Lists the current schedule for the on-call rotation |
| | Shows current and previous Quality pipeline triage reports |
| | Lists currently active and mitigated incidents |
For more information about these commands, you can run:

```
/chatops run quality --help
```
For test automation changes, it is crucial that every change is reviewed by at least one Senior Software Engineer in Test in the Quality team.
We are currently setting best practices and standards for Page Objects and REST API clients, so the first priority is to have test-automation-related changes reviewed and approved by the team. For changes that touch only test automation, review and merge by the Quality Engineering Sub-Department alone is sufficient.
We use the Fibonacci sequence for weights and cap the highest weight at 8. The definitions are as follows:
| Weight | Description |
|--------|-------------|
| 1 - Trivial | Simple and quick changes (e.g., typo fix, test tag update, trivial documentation additions) |
| 2 - Small | Straightforward changes with no underlying dependencies (e.g., a new test that uses existing factories or page objects) |
| 3 - Medium | Well-understood changes with a few dependencies; few surprises expected (e.g., a new test that needs new factories or page objects/page components) |
| 5 - Large | A task that requires some investigation and research, in addition to the above (e.g., tests that need framework-level changes which can impact other parts of the test suite) |
| 8 - X-Large | A very large task that requires significant investigation and research; approaching initiative level |
| 13 or more | Please break the work down further; we do not use weights higher than 8. |
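As an illustration, the weighting rules above can be expressed mechanically. The sketch below is a hypothetical helper, not part of GitLab's tooling; the function name and messages are assumptions for the example:

```python
# Hypothetical helper illustrating the weight scheme described above;
# not part of GitLab tooling.

ALLOWED_WEIGHTS = {1, 2, 3, 5, 8}  # Fibonacci values, capped at 8


def validate_weight(weight: int) -> str:
    """Return guidance for a proposed issue/MR weight."""
    if weight in ALLOWED_WEIGHTS:
        return f"Weight {weight} is valid."
    if weight > 8:
        # Anything estimated above 8 signals the work should be split up.
        return "Weights above 8 are not used; please break the work down further."
    return f"Weight {weight} is not a Fibonacci weight; choose from {sorted(ALLOWED_WEIGHTS)}."
```

For example, `validate_weight(13)` would advise breaking the work down, matching the "13 or more" row in the table.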
We have compiled a list of learning resources that we've found useful for Software Engineer in Test and Quality Engineering Manager growth.
For more information, please visit our learning resources page.