All issues must have the following:

- `product data analysis` label
- `workflow::1 - triage` label
As mentioned above, all issues should have a workflow label. These should be kept up-to-date in order to track the current status of an issue on our board. The Product Data Insights team uses a subset of the workflow labels used by the Data team.
| Stage (Label) | Description | Completion Criteria |
|---|---|---|
| `workflow::1 - triage` | New issue, being assessed | Requirements are complete and issue is assigned to an analyst |
| | Waiting for scheduling | Issue has an iteration |
| | Waiting for development | Work starts on the issue |
| | Work is in-flight | Issue enters review |
| | Waiting for or in review | Issue meets criteria for closure |
| `workflow::X - blocked` | Issue needs intervention that assignee can't perform | Work is no longer blocked |
When an issue becomes blocked:

- Apply the `workflow::X - blocked` label
When we start a new iteration, any open issues from the previous iteration do not automatically roll over. As such, we need to be diligent about updating issues to ensure that they do not fall off the radar before they are completed.
At the end of an iteration, analysts should review any remaining open issues and:
Sometimes high-priority and/or urgent work comes up after an iteration starts. When an unplanned issue is opened mid-iteration:
Sometimes issues are opened and assigned to analysts outside of the Product Data Insights and Data team projects. As such, they are hard to track (since they will not appear on our board) and do not count towards our velocity. In order to capture the work, analysts have the option of opening a placeholder/tracking issue within the Product Data Insights project. The placeholder/tracking issue should contain a link to the original issue, along with the standard labels, iteration, weight, etc.
All code and issues should undergo self-review. While it may seem obvious, it is critical to ensuring the team is producing high-quality, trustworthy work.
- After `JOIN`s and other manipulations, the results make sense
You should ask a peer to review your code and/or findings if:
Before submitting your code for peer review, please check the following:
- Non-obvious logic is explained: `JOIN`s, values used in `WHERE` clauses, etc. When in doubt, add a comment
- Notes for the reviewer (ex: the `LEFT JOIN`, "these are the two most complex CTEs", etc.)
To request a review, open an MR in the Product Data Insights project.
- Place the file in `code_reviews/` and use the issue number for the name
Using MRs for reviews allows for easy feedback and collaboration. However, the code in that directory will become stale quickly (ex: additional changes may be made to a snippet in a different issue), so the queries should not be considered the single source of truth (SSOT).
Use the following checklist before closing an issue:
<details markdown=1>
<summary>This is the name of the section</summary>

```
Add your code here
```

</details>
The Product Data Insights team uses two different measures of completed work to determine velocity: one based on work done on issues closed during an iteration (Completed Issue Weight), and one based on the total volume of work done, even if the issue was not closed out (Total Issue Weight). In both cases, we use Analyst Working Days as the denominator.
Completed Issue Velocity
This is the more traditional velocity calculation, outlined on our main handbook page. It is tied exclusively to work done on issues closed during an iteration and does not account for work on issues rolling over to the next iteration.
Completed Issue Velocity = Completed Issue Weight / Analyst Working Days
Total Issue Velocity
This is a less traditional adaptation of velocity and is used as an internal team metric. It is intended to capture all work done by analysts during an iteration, even if the issue is not closed out. Given the nature of our work, it is not uncommon for issues to roll over to the next iteration, especially as unplanned work comes up and shifts priorities. This version of velocity controls for those larger projects or work that is started mid-iteration.
Total Issue Velocity = Total Issue Weight / Analyst Working Days
Here are two examples:
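As an illustration of the two formulas, here is a minimal Python sketch. The weights and day counts below are hypothetical numbers chosen for illustration, not real team data:

```python
# Illustrative velocity calculations; all numbers below are hypothetical.

def completed_issue_velocity(completed_weight: float, analyst_days: float) -> float:
    """Completed Issue Weight / Analyst Working Days."""
    return completed_weight / analyst_days

def total_issue_velocity(total_weight: float, analyst_days: float) -> float:
    """Total Issue Weight (including rolled-over work) / Analyst Working Days."""
    return total_weight / analyst_days

# Example: 6 analysts over a 10-working-day iteration = 60 analyst working days.
analyst_days = 60
completed_weight = 45   # weight of issues closed during the iteration
total_weight = 75       # weight of all work done, including still-open issues

print(completed_issue_velocity(completed_weight, analyst_days))  # 0.75
print(total_issue_velocity(total_weight, analyst_days))          # 1.25
```

Note how Total Issue Velocity exceeds Completed Issue Velocity whenever weighted work rolls over to the next iteration.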
The Product Data Insights group follows the Data team's SQL Style Guide and best practices.
At GitLab, we use gearing ratios as Business Drivers to forecast long term financial goals by function. The Product Data Insights group currently focuses on one gearing ratio: Product Managers per Product Analyst. In the future, we may consider other ratios (ex: Active Experiments per Product Analyst), but for the moment we are focusing on the PM:Product Analyst ratio.
The long-term target for the Product Managers per Product Analyst ratio is 3:1. The ability of PMs to self-serve on day-to-day questions and activities is a critical component of finding success at this ratio, and finding the best tool is a focus of the R&D Fusion Team in FY23 Q2-Q3. In addition, we want to ensure that analysts are not spending more time context switching (changing from one unrelated task to another) and learning the nuances of different data sets than they are actually conducting analysis. We want our product analysts to spend their time answering complex questions, developing or improving metrics, and making business-critical recommendations.
In order to validate our target ratio, we looked at the practices of other large product organizations, including LinkedIn, Intuit, HubSpot, Squarespace, iHeartRadio, and Peloton Digital. We found that most maintained a ratio of 1.5-3 PMs per product analyst, in addition to a self-service tool. As such, we feel comfortable setting a target ratio of 3 PMs to 1 product analyst.
The current PM:Product Analyst ratio is ~7:1 (40 IC product managers, including current openings, and 6 product analysts: 5 ICs and 1 IC/Manager hybrid). As we work to close the gap and move toward the 3:1 target, we encourage PMs to leverage office hours.
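The ratio arithmetic above can be sanity-checked with a quick sketch; all inputs are the headcounts stated on this page:

```python
# Headcounts stated above: 40 IC product managers (including current
# openings) and 6 product analysts.
product_managers = 40
product_analysts = 6

# Current ratio: 40 / 6 ~= 6.7, i.e. roughly 7:1 as stated.
current_ratio = product_managers / product_analysts
print(f"current ratio ~ {current_ratio:.1f}:1")

# At the long-term 3:1 target, the same PM count would imply ~13 analysts.
target_ratio = 3
analysts_at_target = product_managers / target_ratio
```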