Make GitLab the most responsive and performant DevOps Platform.
The Application Performance group's mission is to ensure that GitLab users, both self-managed and SaaS, have a great user experience. Performance is a critical part of that experience.
Our team works to improve the availability, reliability, and performance of the application. We analyze application behavior, identify bottlenecks, and propose changes. We work to make GitLab a responsive and performant DevOps platform that offers a great user experience at any scale.
You can check our direction page for more information on our mission and our short-term and long-term roadmaps.
The following people are permanent members of the Application Performance group:
The following members of other functional teams are our stable counterparts:
| Person | Role |
| --- | --- |
| Roger Woo | Senior Product Manager, Application Performance and Database |
Where we can, we follow the GitLab values and communicate asynchronously. However, there are a few important recurring meetings. Please reach out in the #g_application_performance Slack channel if you'd like to be invited.
We follow the GitLab engineering workflow guidelines. To bring an issue to our attention, please create an issue in the relevant project, or in the Application Performance team project. Add the ~"group::application performance" label along with any other relevant labels. If it is an urgent issue, please reach out to the Product Manager or Engineering Manager listed in the Stable Counterparts section above.
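Labels can be applied when creating the issue, or afterwards with GitLab quick actions typed into the issue description or a comment. A minimal sketch (the second label is a hypothetical example of "any other relevant label"):

```
/label ~"group::application performance" ~"type::bug"
```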
When planning for a milestone, the Application Performance group creates a planning issue to discuss the upcoming milestone asynchronously. We outline the major efforts planned for the milestone along with who is working on each effort. Often there are individual issues that are either operational in nature, or don't belong to an epic. These issues are also called out in the planning issue for prioritization.
We have three main boards for tracking our work (listed below).
**Application Performance by Milestone**
The Milestone board gives us a "big picture" view of issues planned in each milestone.
**Application Performance: Build**
The build board gives an overview of the current state of work for the ~"group::application performance" label. These issues have already gone through validation and are on the Product Development Build Track. Issues are added to this board by adding the ~application_performance::active and ~"group::application performance" labels. Issues in the ~"workflow::ready for development" column are ordered by priority (top down). Team members use this column to select the next item to work on.
**Application Performance: Validation**
The validation board is a queue for incoming issues for the Product Manager to review. A common scenario for the Application Performance group's validation board is an issue that requires further definition before it can be prioritized. The issue typically states a big-picture idea but is not yet detailed enough to take action. The Application Performance group will then go through a refinement process to break the issue down into actionable steps, create exit criteria, and prioritize it against ongoing efforts. If an issue becomes too large, it will be promoted to an epic and small sub-issues will be created.
We use the ~Deliverable label to track our Say/Do ratio. At the beginning of each milestone, during an Application Performance group weekly meeting, we review the issues and determine which we are confident we can deliver within the milestone. Those issues are marked with the ~Deliverable label. At the end of the milestone, the successfully completed ~Deliverable issues are tracked in two places: a Sisense dashboard calculates how many were delivered within the milestone and accounts for issues that were moved, and our milestone retro issue lists all of the ~Deliverable issues shipped along with those that missed the milestone.
The Application Performance group's Roadmap gives a view of what is currently in flight as well as projects that have been prioritized for the next 3+ months.
In Sisense, we also track our backlog of issues, including past-due security and infradev issues, and total open System Usability Scale (SUS) impacting issues and bugs.
MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. A Sisense dashboard shows the trend of MR Types over time and a list of merged MRs.
Flaky tests are problematic for many reasons.