At GitLab, we use gearing ratios as Business Drivers to forecast long-term financial goals by function. The Product Analysis group currently focuses on one gearing ratio: Product Managers per Product Analyst. In the future, we may consider other ratios (ex: Active Experiments per Product Analyst), but for the moment we are focusing on the PM:Product Analyst ratio.
The long-term target for the Product Managers per Product Analyst ratio is 3:1. The ability of PMs to self-serve on day-to-day questions and activities is a critical component to finding success at this ratio, and finding the best tool is a focus of the R&D Fusion Team in FY22 Q3-Q4. In addition, we want to ensure that analysts are not spending more time context switching (changing from one unrelated task to another) and learning the nuances of different data sets than they are actually conducting analysis. We want our product analysts to spend their time answering complex questions, developing or improving metrics, and making business-critical recommendations.
In order to validate our target ratio, we looked at the practices of other large product organizations, including LinkedIn, Intuit, HubSpot, Squarespace, iHeartRadio, and Peloton Digital. We found that most maintained a ratio of 1.5-3 PMs per product analyst, in addition to a self-service tool. As such, we feel comfortable setting a target ratio of 3 PMs to 1 Product Analyst.
The current PM:Product Analyst ratio is ~10:1 - 40 IC product managers (including current openings) and 4 product analysts (3 ICs and 1 IC/Manager hybrid). We plan to hire 4 more analysts by the end of 2022, which would bring the ratio to 5:1 (assuming the PM head count remains the same). As we work to close the gap and move towards the 3:1 target, we encourage PMs to leverage office hours.
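The headcount math above can be sketched as follows (a minimal illustration; the figures are the ones stated on this page and will drift as hiring progresses):

```python
import math

# Headcounts as stated on this page; they will change over time.
PM_HEADCOUNT = 40   # IC product managers, including current openings
ANALYSTS_NOW = 4    # 3 ICs + 1 IC/Manager hybrid
PLANNED_HIRES = 4   # additional analysts planned by end of 2022
TARGET_RATIO = 3    # long-term goal: 3 PMs per Product Analyst

def pm_ratio(pms: int, analysts: int) -> float:
    """PMs per Product Analyst."""
    return pms / analysts

current = pm_ratio(PM_HEADCOUNT, ANALYSTS_NOW)                        # ~10:1
after_hiring = pm_ratio(PM_HEADCOUNT, ANALYSTS_NOW + PLANNED_HIRES)   # 5:1

# Analysts needed to hit the 3:1 target at the same PM headcount:
analysts_needed = math.ceil(PM_HEADCOUNT / TARGET_RATIO)
```

At the current PM headcount, reaching 3:1 would require 14 analysts, which is why self-service tooling matters as much as hiring.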
All issues must have the following:
- A workflow label (new issues start in `workflow::1 - triage`)
As mentioned above, all issues should have a workflow label. These should be kept up-to-date in order to track the current status of an issue on our board. The Product Analysis team uses a subset of the workflow labels used by the Data team.
| Stage (Label) | Description | Completion Criteria |
| --- | --- | --- |
| `workflow::1 - triage` | New issue, being assessed | Requirements are complete and issue is assigned to an analyst |
| | Waiting for scheduling | Issue has an iteration |
| | Waiting for development | Work starts on the issue |
| | Work is in-flight | Issue enters review |
| | Waiting for or in review | Issue meets criteria for closure |
| `workflow::X - blocked` | Issue needs intervention that assignee can't perform | Work is no longer blocked |
When an issue becomes blocked, add the `workflow::X - blocked` label.
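Mechanically, "blocking" an issue just means swapping its `workflow::` scoped label, since scoped labels are mutually exclusive. A minimal sketch of that label update (the helper function is my own illustration, not an existing tool, and the "Product Analysis" label is hypothetical):

```python
def set_workflow_label(labels: list[str], new_label: str) -> list[str]:
    """Return a copy of `labels` with any existing `workflow::` scoped
    label replaced by `new_label` (scoped labels are mutually exclusive)."""
    kept = [label for label in labels if not label.startswith("workflow::")]
    return kept + [new_label]

# "Product Analysis" is a hypothetical team label used for illustration.
labels = ["Product Analysis", "workflow::1 - triage"]
blocked = set_workflow_label(labels, "workflow::X - blocked")
```

The same helper works for any stage transition, not just blocking.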
When we start a new iteration, any open issues from the previous iteration do not automatically roll over. As such, we need to be diligent about updating issues to ensure that they do not fall off the radar before they are completed.
At the end of an iteration, analysts should review any remaining open issues and either move them to the next iteration or close them.
Sometimes high-priority and/or urgent work comes up after an iteration starts. When an unplanned issue is opened mid-iteration, it still needs the standard labels, a weight, and the current iteration before work begins.
Sometimes issues are opened and assigned to analysts outside of the Product Analysis and Data team projects. As such, they are hard to track (since they will not appear on our board) and do not count towards our velocity. In order to capture the work, analysts have the option of opening a placeholder/tracking issue within the Product Analysis project. The placeholder/tracking issue should contain a link to the original issue, along with the standard labels, iteration, weight, etc.
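A placeholder/tracking issue is just a normal issue whose description links back to the original. A sketch of assembling its attributes (the function name, example URL, and example title are all hypothetical; the resulting dict matches the shape an API client such as python-gitlab accepts when creating an issue):

```python
def tracking_issue_payload(original_url: str, title: str,
                           labels: list[str], weight: int) -> dict:
    """Build the attributes for a placeholder/tracking issue that mirrors
    work happening outside the Product Analysis project."""
    return {
        "title": f"Tracking: {title}",
        "description": f"Placeholder for work tracked in {original_url}",
        "labels": labels,   # standard labels, e.g. the workflow label
        "weight": weight,
    }

# Hypothetical original issue, for illustration only.
payload = tracking_issue_payload(
    "https://gitlab.com/example-group/example-project/-/issues/123",
    "Ad-hoc funnel analysis",
    ["workflow::1 - triage"],
    2,
)
```

Because the placeholder lives in the Product Analysis project, it appears on the board and counts toward velocity like any other issue.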
All code and issues should undergo self-review. While it may seem obvious, it is critical to ensuring the team is producing high-quality, trustworthy work.
At a minimum, confirm that after `JOIN`s and other manipulations, the results make sense.
You should ask a peer to review your code and/or findings if the work is complex, high-stakes, or will inform a business-critical recommendation.
Before submitting your code for peer review, please check the following:
- Comments explain the intent behind `JOIN`s, values used in `WHERE` clauses, etc. When in doubt, add a comment.
- Point the reviewer to the most complex parts (ex: "note the `LEFT JOIN`", "these are the two most complex CTEs", etc.).
To request a review, open an MR in the Product Analysis project. Place your query in the `code_reviews/` directory and use the issue number for the file name.
Using MRs for reviews allows for easy feedback and collaboration. However, the code in that directory will become stale quickly (ex: additional changes may be made to a snippet in a different issue), so the queries should not be considered the single source of truth (SSOT).
Use the following checklist before closing an issue:
````
<details markdown=1>
<summary>This is the name of the section</summary>

```
Add your code here
```

</details>
````
The Product Analysis group follows the Data team's SQL Style Guide and best practices.