Our vision is to be a world-class Test Platform sub-department that enables successful development and deployment of GitLab software applications with improved workflow efficiency, reliability, and productivity.
In FY23 we will focus on innovative test architecture, efficiency, and customer results while delivering impact to the company's bottom line through alignment with the top cross-functional initiatives. Key directional highlights: broaden our lead in ensuring self-managed excellence, improve deployment confidence, drive visibility and actionability of test results, and expand our Quality Architecture focus. In FY23 we anticipate continued momentum on enabling development and deployment at scale, and it's more important than ever for us to deliver results.
Test Platform owns several tools which form a 3-pronged trident for Self-Managed Excellence: the GitLab Environment Toolkit (GET), the GitLab Performance Tool (GPT), and the Reference Architectures (RA). Together, these tools support our broader strategy of cementing customer confidence and contributing to their ongoing success by ensuring their instances are built to a rigorously tested standard that performs smoothly at scale.
For more information, please visit our Self-Managed Excellence page.
Given rapidly evolving technologies and our drive to provide a world-class experience for GitLab users, Test Platform must meet the increasing demands of efficient, intelligent test coverage and confidence at scale. We must test the right things at the right time. To that end, this year we are exploring several new testing types and visibility improvements to increase the actionability, speed, and sophistication of our various test suites.
Test Platform has been key to supporting prospect POVs and providing prompt, knowledgeable troubleshooting for external customers, and we have an equally deep commitment to supporting our internal customers. We will expand the Deploy with Confidence foundation we began last year in collaboration with our Infrastructure and Development Departments, and we will seek input on how our processes and tools can be improved.
Objectives and Key Results (OKRs) help align our sub-department around what really matters. They are set quarterly and are based on company OKRs. We follow the OKR process defined here.
Here is an overview of our current Test Platform OKR.
Test Platform is actively hiring! Please view our jobs page to read more and apply.
The Test Platform sub-department has three teams: the Test and Tools Infrastructure team, the Self-Managed Platform team, and the Test Engineering team.
Person | Role |
---|---|
Vincy Wilson | Interim Director, Test Platform |
Abhinaba Ghosh | Engineering Manager, Test Platform, Test and Tools Infrastructure |
Kassandra Svoboda | Manager, Quality Engineering, Core Platform & SaaS Platform |
Ramya Authappan | Manager, Quality Engineering, Dev & Analytics |
The following people are members of the Test and Tools Infrastructure team:
Person | Role |
---|---|
Abhinaba Ghosh | Engineering Manager, Test Platform, Test and Tools Infrastructure |
Andrejs Cunskis | Senior Software Engineer in Test, Test and Tools Infrastructure |
Anastasia McDonald | Senior Software Engineer in Test, Test and Tools Infrastructure |
Chloe Liu | Staff Software Engineer in Test, Test and Tools Infrastructure |
Dan Davison | Staff Software Engineer in Test, Test and Tools Infrastructure |
Ievgen Chernikov | Senior Software Engineer in Test, Test and Tools Infrastructure, Analytics section |
Mark Lapierre | Senior Software Engineer in Test, Test and Tools Infrastructure |
Sanad Liaquat | Staff Software Engineer in Test, Test and Tools Infrastructure |
Sofia Vistas | Senior Software Engineer in Test, Test and Tools Infrastructure |
The following people are members of the Self-Managed Platform team:
Person | Role |
---|---|
Kassandra Svoboda | Manager, Quality Engineering, Core Platform & SaaS Platform |
Andy Hohenner | Senior Software Engineer in Test, SaaS Platforms:US Public Sector Services |
Brittany Wilkerson | Senior Software Engineer in Test, Dedicated:Environment Automation |
Grant Young | Staff Software Engineer in Test, Core Platform:Distribution |
Jim Baumgardner | Software Engineer in Test, SaaS Platforms:US Public Sector Services |
John McDonnell | Senior Software Engineer in Test, Systems:Gitaly |
Nailia Iskhakova | Senior Software Engineer in Test, Self-Managed Platform team |
Nick Westbury | Senior Software Engineer in Test, Core Platform:Geo |
Vishal Patel | Software Engineer in Test, Core Platform:Systems |
The following people are members of the Test Engineering team:
Person | Role |
---|---|
Vincy Wilson | Interim Director, Test Platform |
Harsha Muralidhar | Senior Software Engineer in Test, Govern:Threat Insights |
Joy Roodnick | Software Engineer in Test |
Richard Chong | Senior Software Engineer in Test, Fulfillment:Utilization |
Senior Software Engineer in Test | Senior Software Engineer in Test, CI:Verify |
Valerie Burton | Senior Software Engineer in Test, Test Engineering, Fulfillment section |
Will Meek | Senior Software Engineer in Test, Secure:Composition Analysis |
Feel free to reach out to us by opening an issue on the Quality Team Tasks project or contacting us in one of the Slack channels listed below.
Team | GitLab.com handle | Slack channel | Slack handle |
---|---|---|---|
Test Platform | @gl-quality/tp-sub-dept | #test-platform | None |
Test and Tools Infrastructure team | @gl-quality/tp-test-tools-infrastructure | #test-tools-infrastructure-team | @test-tools-infrastructure |
Enablement & SaaS Platforms QE team | @gl-quality/enablement-qe | #g_qe_enablement_platform | @enablement-saas-platform-qe-team |
Test Engineering team | @gl-quality/tp-test-engineering | #test-engineering-team | @test-engineering-team |
While this sub-department operates as several teams, we emphasize ensuring that the priorities and needs of Engineering Leaders are met via stable counterparts.
Every Software Engineer in Test (SET) takes part in building our product as a DRI in GitLab's Product Quad DRIs. They work alongside Development, Product, and UX in the Product Development Workflow. As stable counterparts, SETs should be considered critical members of the core team alongside Product Designers, Engineering Managers, and Product Managers.
Every Engineering Manager (EM) is aligned with an Engineering Director in the Development Department. They work at a higher level and align cross-team efforts that map to a Development Department section. The area a QEM is responsible for is defined in the Product Stages and Groups and is part of their title.
Milestones (product releases) are one of our planning horizons, where prioritization is a collaboration between Product, Development, UX, and Quality. DRIs for prioritization are based on work type:
We use type labels to track feature, maintenance, and bug issues and MRs. UX Leadership are active participants in influencing the prioritization of all three work types.
QEMs meet with their PM, EM, and UX counterparts to discuss the priorities for the upcoming milestone. The purpose of this is to ensure that everyone understands the requirements and to assess whether or not there is the capacity to complete all of the proposed issues.
For product groups with a SET counterpart, QEMs are encouraged to delegate bug prioritization to the SET as the bug subject matter expert for that group. In these situations, QEMs should provide guidance and oversight as needed by the SET and should still maintain broad awareness of bug prioritization for these delegated groups.
While we follow the product development timeline, it is recommended that you work with your counterparts to discuss upcoming issues in your group's roadmap prior to them being marked as a deliverable for a particular milestone. There will be occasions where priorities shift and changes must be made to milestone deliverables. We should remain flexible and understanding of these situations, while doing our best to make sure these exceptions do not become the rule.
Section-level members of the quad are QEMs, Directors of Development, Directors of Product Management, and Product Design Managers aligned to the same section. These counterparts will review their work type trends on a monthly basis.
We have compiled a number of tips and tricks we have found useful in day-to-day Test Platform related tasks.
For more information, please visit our tips and tricks page.
The Test Platform Sub-Department has two on-call rotations: pipeline triage (SET-led) and incident management (QEM-led). These are scheduled in advance to share the responsibilities of debugging pipeline failures and representing Quality in incident responses.
For more information, please visit our on-call rotation page.
The Test Platform Sub-Department helps facilitate the quad-planning process, which brings Product Management, Development, UX, and Quality together to make test planning a topic before the development of any feature.
For more information, please visit our quad planning page.
A borrow is used when a team member is shifted from one team to another temporarily or assists other teams part-time for an agreed-upon period of time. Currently, we do not have an SET embedded within every product group, so for product groups with no SET counterpart, the process to request one is as follows: open a borrow request and apply the `~SET Borrow` label.
Please note that the borrow request might not guarantee 100% allocation to the requested product group. The temporary allocation will depend upon ongoing priorities.
The list of all SET borrow requests can be seen here.
Reliable tests have met stricter reliability criteria than other tests in our test suite. When a failure is seen in a reliable test, it's less likely to be flakiness and more likely to be a true issue.
For more information, please visit our reliable tests page.
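As a purely illustrative sketch (assuming an RSpec-based suite and a hypothetical `:reliable` metadata tag, not a description of our actual promotion mechanism), a reliable test might be distinguished by metadata that CI tooling can use to treat its failures as true issues rather than flakiness:

```ruby
# Hypothetical sketch only: marking a spec as "reliable" via RSpec metadata
# so tooling can filter it into a stricter bucket. The :reliable tag and
# the spec body are illustrative assumptions, not our documented mechanism.
RSpec.describe 'User login', :reliable do
  it 'signs in with valid credentials' do
    # ... steps that have repeatedly met the stricter reliability criteria ...
  end
end
```

With metadata like this, a runner invocation such as `rspec --tag reliable` could select only the reliable subset.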
The Test Platform Sub-Department helps facilitate the risk mapping process. This requires the participation of Product Management, Development, UX, and the Quality team to develop a strategic approach to risk and mitigation planning.
For more information, please visit our risk mapping page.
The Test Platform Sub-Department helps facilitate the test planning process for all things related to Engineering work.
For more information, please visit our test engineering page.
If you need to debug a test failure, please visit our debugging QA pipeline test failures page.
The Test Platform Sub-Department maintains ChatOps commands for the Quality department, which provide quick access to various information in Slack. These commands can be run in any Slack channel that has the GitLab ChatOps bot, such as the #test-platform and #chat-ops-test channels.
Commands that are currently available are:
Command | Description |
---|---|
`/chatops run quality dri schedule` | Lists the current schedule for the on-call rotation |
`/chatops run quality dri report` | Shows current and previous Quality pipeline triage reports |
`/chatops run quality dri incidents` | Lists currently active and mitigated incidents |
For more information about these commands you can run:
`/chatops run quality --help`
For test automation changes, it is crucial that every change is reviewed by at least one Senior Software Engineer in Test on the Test Platform team.
We are currently setting best practices and standards for Page Objects and REST API clients, so the first priority is to have test automation related changes reviewed and approved by the team. For changes that only touch test automation, review and merge by the Test Platform Sub-Department alone is adequate.
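To illustrate the kind of pattern these standards cover, here is a minimal, hypothetical Page Object sketch in Ruby using Capybara-style helpers; the class name, path, and field identifiers are assumptions for the example and do not represent the actual GitLab QA framework classes:

```ruby
require 'capybara/dsl'

# Hypothetical sketch of a Page Object: one class owns a page's selectors
# and interactions so individual tests never repeat raw UI details.
# The class name, path, and field identifiers below are illustrative assumptions.
module Page
  class SignIn
    include Capybara::DSL

    def visit_page
      visit('/users/sign_in') # illustrative path
    end

    def sign_in(username:, password:)
      fill_in('user_login', with: username)    # illustrative field id
      fill_in('user_password', with: password) # illustrative field id
      click_button('Sign in')
    end
  end
end
```

Keeping interactions behind a page object like this is what makes a shared review standard worthwhile: when the UI changes, reviewers only need to check one class rather than every test that touches that page.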
We use the Fibonacci series for weights and cap the highest weight at 8. The definitions are as follows:
Weight | Description |
---|---|
1 - Trivial | Simple and quick changes (e.g. typo fix, test tag update, trivial documentation additions) |
2 - Small | Straightforward changes with no underlying dependencies (e.g. a new test that uses existing factories or page objects) |
3 - Medium | Well-understood changes with a few dependencies; few surprises are expected (e.g. a new test that needs new factories, page objects, or page components) |
5 - Large | A task that will require some investigation and research beyond the scope of the weights above (e.g. tests that need framework-level changes which can impact other parts of the test suite) |
8 - X-large | A very large task that will require significant investigation and research, approaching initiative-level work |
13 or more | Please break the work down further; we do not use weights higher than 8. |
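As a usage note (an assumption about day-to-day workflow rather than a documented rule), once a weight is agreed during refinement it can be applied with GitLab's `/weight` quick action, for example by commenting `/weight 3` on an issue for a new test that needs a new page object.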
TBA
We have compiled a list of learning resources that we've found useful for Software Engineer in Test and Engineering Manager growth.
For more information, please visit our learning resources page.