Product Development Flow Draft
This is the draft version of the Product Development Flow. All changes to this document will be merged into the product-development-flow/index.html.md Handbook page by the 17th of each month*, and announced (including a list of changes) via a release post when larger changes are included. See the Product Development Timeline.
| Version | Changes | Release | Date |
|---------|---------|---------|------|
| 1.0 | Introduce new structure and core content | 13.5 | 2020-10-22 |
| 1.1 | Introduce visuals, optimize content and supporting resources | 13.6 | 2020-11-22 |
| 1.2 | Incorporate feedback from 13.6 GitLab dogfooding and broader GitLab team post launch | 13.7 | 2020-12-22 |
*Version 1.0 will stay on the draft page for dogfooding. Version 1.1 will be the first push to the Handbook page above.
GitLab's product mission is to consistently create products and experiences that users love and value. To deliver on this mission, it's important to have a clearly defined and repeatable flow for turning an idea into something that offers customer value. Note that it's also important to allow open source contributions at any point in the process from the wider GitLab community - these won't necessarily follow this process.
This page is an evolving description of how we expect our cross-functional development teams to work, and reflects the current process being used. All required steps in this development flow are denoted as follows:
Denotes a required aspect of the product development workflow.
Feature development is expected to pass through all required phases, while the rest of the development flow should be considered a set of best practices and tools to aid with completing these phases.
The goal is to have this page be the single source of truth, but it will take time to eliminate duplication elsewhere in the handbook. In the meantime, whenever there are conflicts, this page takes precedence.
Because this page needs to be concise and consistent, be sure to follow the prescribed change process.
No. Although the phases described on this page appear to be independent and linear, they're not; they're presented that way for simplicity and ease of navigation. It's common to iterate through the Validation phases multiple times before moving to Build. During the Build phases, it may be necessary to go back to Validation phases as roadblocks or technical challenges arise.
Workflow labels must be applied for each phase that's used to enable tracking and collaboration across teams.
Issue descriptions shall always be maintained as the single source of truth.
It's not efficient for contributors to need to read every comment in an issue to understand the current state.
For new ideas where the customer problem and solution aren't well understood, Product Managers (PMs) and the User Experience Department (UXers) should work together to validate new opportunities before moving to the Build track. The Validation track is independent of the always-moving Build track. PMs and UXers should work together to get one to two months ahead, so that the Build track always has well-validated product opportunities ready to start. Milestone work should be prioritized with the understanding that some milestones may include more validation efforts than others. Validation cycles may not be necessary for things like bug fixes, well-understood iterative improvements, minor design fixes, and technical debt.
When: When our confidence about the proposed problem or solution isn't high. For example, if we aren't reasonably sure that the problem is important to a significant number of users, or that the solution is easy to understand and use.
Who: Product Manager, Product Designer, UX Research, Engineering Manager
✅ Understand the user problem we are trying to solve.
✅ Identify business goals and key metrics to determine success.
✅ Generate hypotheses and research/experiment/user-test.
✅ Define MVC and potential future iterations.
✅ Minimize risks to value, usability, feasibility, and business viability with qualitative and quantitative analysis.
Outcome: We have confidence that a proposed solution will positively impact one or more Product KPIs. There may be reason for exceptions, so the team would need to be clear in that case, and be able to justify that it's still important without mapping back to our KPIs.
If we don't have confidence in the MVC or what success looks like, we should continue validation cycles before we move to the Build track.
Technical Account Manager
Product Marketing Manager
Other stakeholders as appropriate
The growth of a world-class product is built from a well-maintained backlog. Product Managers are responsible for refining a group's backlog to ensure validation opportunities are scoped and prioritized in line with category direction and stage or section level strategy. The backlog is also the single source of truth for stakeholders to understand and engage with your group. An issue's position in the backlog, along with the description, discussion, and metadata on those issues, are key pieces of data necessary to keep stakeholders up to date.
**Up to date issues and epics:** At GitLab, issues are the single source of truth for any change to the product. Keeping these up to date increases efficiency and transparency by allowing all team members to understand the planned work.

- Create issues in response to a sensing mechanism. Consider using the Problem Validation issue template for new features.
- Review issue discussions and update relevant info in the description.
- Keep metadata (such as labels) up to date.
- Actively respond to stakeholder comments.
- Transfer discussion notes and external information to the issue (as links or discussion/description details).
**Prioritized backlog:** The issue and epic backlog is the primary signal stakeholders use to know what's "up next" for a group. The backlog is also the queue for a group to work from as features progress through the Product Development Flow phases. This queue is kept up to date with milestones and rank ordering on issue boards.

- Regularly review issue prioritization (such as issue board ordering and milestone assignment).
- Align the prioritized backlog to category direction and category maturity state.
- Consider using the RICE formula to help make prioritization tradeoffs.
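The RICE formula mentioned above scores an opportunity as (Reach × Impact × Confidence) / Effort, so a backlog can be rank-ordered by score. A minimal sketch; the opportunity names, field scales, and sample values are illustrative, not taken from any GitLab tool:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    reach: float       # e.g. users affected per quarter
    impact: float      # e.g. 0.25 minimal, 0.5 low, 1 medium, 2 high, 3 massive
    confidence: float  # 0.0 to 1.0
    effort: float      # e.g. person-months

    @property
    def rice_score(self) -> float:
        # RICE: (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog entries, ranked by descending RICE score.
backlog = [
    Opportunity("Bulk edit labels", reach=900, impact=1, confidence=0.8, effort=2),
    Opportunity("New onboarding flow", reach=2000, impact=2, confidence=0.5, effort=4),
]
for opp in sorted(backlog, key=lambda o: o.rice_score, reverse=True):
    print(f"{opp.name}: {opp.rice_score:.0f}")
```

The score is only a tiebreaker for tradeoff discussions, not a replacement for aligning with category direction.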
Other stakeholders as appropriate
If the problem is small and well-understood, it may be possible to quickly move through this phase by documenting the known data about the user problem.
If the problem is nuanced, then it will likely take longer to validate with users properly. This phase's primary outcome is a clear understanding of the problem, along with a simple and clear way to communicate the problem to various stakeholders. Although optional, it is recommended to use an Opportunity Canvas as a tool that helps individuals better understand a problem, and communicate it to various stakeholders.
**Thorough understanding of the problem:** The team understands the problem, who it affects, when and why, and how solving the problem maps to business needs and product strategy.

- Create a new issue using the Problem Validation Template.
- Complete an Opportunity Canvas.
- Open a Problem Validation Research issue and work with a UX Researcher to execute the research study.
- Schedule a review of the Opportunity Canvas for feedback.
**Update issue/epic description:** A well-understood and clearly articulated customer problem is added to the issue, and will lead to successful and efficient design and development phases.

- Ensure your issue is up to date with the latest understanding of the problem.
- Understand and document (in the issue) the goals that people want to accomplish, using the Jobs to be Done (JTBD) framework.
- Leverage your Opportunity Canvas to communicate the problem to your stable counterparts and group stakeholders. Consider scheduling a review to gather feedback and communicate the findings to Product and UX leadership.
Software Engineer in Test
Informed: Other stakeholders as appropriate
After understanding and validating the problem, we can begin or continue to ideate potential solutions through a diverge/converge process.
The Product Designer leads the team (Product Manager, Engineering, UX Researcher, Software Engineers in Test, and Technical Writers, as needed, depending on the item) in ideating potential solutions and exploring different approaches (diverge) before converging on a single solution. Product Managers and Engineers evaluate solutions by determining if they meet customer and business goals, and are technically feasible. The team is encouraged to engage with stakeholders to determine potential flaws, missed use cases, and if the solution has the intended customer impact. After the team converges on the proposed solution or identifies a small set of options to validate, the issue moves into the Solution Validation phase.
To start the Design phase, the Product Designer or Product Manager applies the
workflow::design label to an existing issue or, if needed, creates a new issue with this label.
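Workflow labels like `workflow::design` are mutually exclusive, so moving an issue into a new phase means replacing any existing `workflow::` label rather than stacking a second one. A small sketch of that rule as a pure helper; the label strings in the example are illustrative:

```python
def set_workflow_label(labels: list[str], phase: str) -> list[str]:
    """Return a new label list with exactly one workflow:: label, set to `phase`.

    Workflow labels are mutually exclusive, so any existing workflow:: label
    is dropped before the new phase label is appended.
    """
    kept = [label for label in labels if not label.startswith("workflow::")]
    return kept + [f"workflow::{phase}"]

# Example: an issue moving from problem validation into the Design phase.
labels = ["group::project management", "workflow::problem validation"]
print(set_workflow_label(labels, "design"))
```

With the python-gitlab client this could be applied as (untested sketch): `issue.labels = set_workflow_label(issue.labels, "design"); issue.save()`.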
**Proposed solution(s) identified and documented:** The Product Designer works with the Product Manager and Engineering team to explore solutions and identify the approach(es) that strike the best balance of user experience, customer value, business value, and development cost.

Diverge: explore multiple different approaches as a team. Example activities:

- Think Big session.
- Internal interviews (be sure to document findings in Dovetail).
- Creating user flows or journey maps.

Converge: identify a small set of options to validate. Example activities:

- Think Small session with the team.
- Design reviews with the team.
- Low-fidelity design ideas.
- Update the issue/epic description with the proposed solution. Add a Figma design file link or attach the design to GitLab's Design Management to communicate the solution idea.
- Validate the approach with help from stakeholders. Run user validation using any of the proposed methods and document your findings in Dovetail and the appropriate GitLab issue.
- Draw inspiration from competitive and adjacent offerings.
**Shared understanding in the team of the proposed solution:** The Product Designer leads the broader team through a review of the proposed solution(s).

- Review the proposed solution as a team so that everyone has a chance to contribute, ask questions, raise concerns, and suggest alternatives.
- Review the proposed solution with leadership.
**Confidence in the technical feasibility:** It's important that Engineering understands the technical feasibility of the solution(s) to avoid rework or significant changes when we start the build phase.

- Discuss the technical implications with Engineering to ensure that what is being proposed is possible within the desired timeframe. When sharing design work, use both Figma's collaboration tools and GitLab's design management features; refer to the guidance on which tool to use when.
- Engage engineering peers early and often through Slack messages, pins on issues, or by scheduling sessions to discuss the proposal.
**Updated issue/epic descriptions:** The Product Manager and Product Designer ensure issues and epics are up to date.

- Ensure issues and epics are up to date, so we can continue our work efficiently and asynchronously.
- Experiment definition.
Software Engineers in Test
Other stakeholders as appropriate
After identifying one or more potential solutions that meet business needs and are technically feasible, the Product Manager and Product Designer must ensure that we have confidence that the proposed solution will meet the user's needs and expectations. This confidence can be obtained from work performed during the design phase and supplemented with additional research (including user interviews, usability testing, or solution validation). If necessary, this phase will launch a Solution Validation issue within the GitLab UX Research project, which will walk the team through research to validate their proposed solution(s).
To start the Solution Validation phase, the Product Designer or Product Manager applies the
workflow::solution validation label to an existing issue.
**High confidence in the proposed solution:** Confidence that the jobs to be done outlined within the problem statement can be fulfilled by the proposed solution.

- Gather feedback from relevant stakeholders.
- Follow solution validation guidance to gather feedback.

**Documented solution validation learnings:** The results of the solution validation are communicated to and understood by team members.

- Document solution validation findings as insights in Dovetail.
- Update the Opportunity Canvas (if used) with relevant insights.
- Update the issue or epic description to contain or link to the findings.
The (iteration) Review track is an optional step in the flow that brings peer PMs in to help you hone your skills at iteration, clarity, and strategy. Keeping issues small and iterative is core to how GitLab maintains velocity; writing a "small" issue is often (counterintuitively) more difficult than writing a bigger one; and understanding the entire strategy of how GitLab operates is a herculean task. Having a helping hand with these tasks is important to professional development, and it ensures that our entire Product organization continues to improve.
You should consider requesting a review when:
*Note: If you're a new GitLab team member, you should request reviews of the first 3 issues you create. It will help you familiarize yourself with what we're looking for in an iteration, get more comfortable with our process, and meet your fellow team members. After you've completed a few reviews, this track can be considered optional.
If you would like a peer to review one of your issues (or epics):
- Apply the issue::needs review label to your issue.
- The reviewer applies the issue::reviewed label and lets the original PM know that the review is complete.
You can view all of the work happening in this track on this board.
The build track is where we plan, develop, and deliver value to our users by building MVCs, fixing defects, patching security vulnerabilities, enhancing user experience, and improving performance. DRIs across engineering disciplines, including Design, Backend, Frontend, and Quality, work closely together to implement MVCs in close collaboration with the Product Manager. Decisions are made quickly if challenges arise. We instrument usage and track product performance, so after MVCs are delivered to customers, feedback is captured quickly and learnings refine the next iteration.
When: As we build MVCs according to our product development timeline
Who: Product Manager, Product Designer, Engineers, Software Engineers in Test
✅ Release to a subset or full set of customers as appropriate.
✅ Assess UX, functional, technical performance, and customer impact.
✅ Collect data to measure MVC against success metrics to inform the next iteration.
✅ Iterate until success metrics are achieved and the product experience is optimal.
Outcome: Deliver performant MVCs that improve one or more of our Product KPIs and/or Engineering KPIs. If it fails to do so, honor our Efficiency value (that includes a low level of shame), abandon it, and restart the validation cycle to identify the right solution.
| Label | Description |
|-------|-------------|
| workflow::planning breakdown | Applied by the Product Manager on or before the 4th of the month, signaling an intent to prioritize the issue for the next milestone. |
| workflow::scheduling | Applied to issues that have been broken down (passed workflow::planning breakdown) but not yet scheduled into a milestone. |
| workflow::ready for development | Issue has been broken down and prioritized by the PM for development. The issue also has a milestone assigned at this point. |
| Deliverable | Applied to issues by Engineering Managers, indicating the issue has been accepted into the current milestone. |
Software Engineers in Test
This phase prepares features so they are ready to be built by engineering. Bugs, technical debt, and other similar changes that are not features may enter the process in this phase (or may benefit from entering in earlier phases, when the cost of doing the work is high enough that the full problem should be validated to ensure the work makes sense). Following Validation Phase 4, the feature should already be broken down into the smallest possible iterations that add customer value, and be ready for a more detailed review by engineering (check out iteration strategies for help). During this phase, Product Managers will surface issues they intend to prioritize for a milestone by applying the workflow::planning breakdown label. At this point, Engineering Managers will assign an engineer to further break down and apply weights to that work. This process is a collaboration between the DRI and Collaborators. Tradeoff decisions can be made, and feature issues evolve from validated solutions into clear MVCs that can be delivered in a single milestone. Be sure to document all decisions on issues.
By reviewing and weighing work at the beginning of the Build track, Product Managers are able to make better prioritization tradeoffs, and engineering teams can ensure they've scoped the right amount of work for the milestone. If an issue enters the workflow::planning breakdown state, it doesn't necessarily mean it will be prioritized in the next milestone; a Product Manager may make a tradeoff decision depending on capacity and urgency.
Once work has passed the workflow::planning breakdown step, the workflow::ready for development label, along with an upcoming milestone, is applied to the issue. If an issue has been broken down but is not yet ready to pull into a milestone, apply the workflow::scheduling label. Engineering Managers will apply the Deliverable label to issues that have a milestone and are marked workflow::ready for development, signaling acceptance of the issue for that milestone. This process occurs at the beginning of milestone planning.
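The handoff above amounts to a small decision rule over an issue's state. A sketch of that rule, using the label names from this section (the function and its flags are illustrative, not part of any GitLab tooling):

```python
def next_planning_labels(broken_down: bool, has_milestone: bool,
                         accepted_by_em: bool = False) -> list[str]:
    """Which workflow label(s) an issue should carry after planning breakdown.

    Mirrors the handoff described above:
      - not yet broken down                -> workflow::planning breakdown
      - broken down, no milestone          -> workflow::scheduling
      - broken down + milestone            -> workflow::ready for development
      - EM accepts it for the milestone    -> Deliverable is added on top
    """
    if not broken_down:
        return ["workflow::planning breakdown"]
    if not has_milestone:
        return ["workflow::scheduling"]
    labels = ["workflow::ready for development"]
    if accepted_by_em:
        labels.append("Deliverable")  # applied by the Engineering Manager
    return labels

# An issue that is broken down but not yet pulled into a milestone:
print(next_planning_labels(broken_down=True, has_milestone=False))
```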
To ensure that a Software Engineer in Test (SET) has ample time to contribute to new features, Quad Planning is triggered automatically when an issue is in workflow::ready for development and a milestone is applied. The Quad Planning approach is triggered only in groups where a SET is assigned, as the Quality team grows toward its desired ratio.
**Well-scoped MVC issues:** Issues are the SSOT for all feature development. (DRI: Product Manager)

- Refine issues into something that can be delivered within a single milestone.
- Open follow-on issues to track work that is de-prioritized.
- Promote existing issues to epics and open implementation issues for the upcoming milestone.
- Review feature issues with contributors.
- Consider scheduling a POC or an engineering investigation issue.
- Make scope tradeoffs to reach a right-sized MVC.
**Prioritized milestone:** The team should understand which issues should be delivered during the next milestone. (DRI: Product Manager and Engineering Manager)

- The PM sets the workflow::ready for development label and milestone.
- The EM applies the Deliverable label.
**Defined quality plan:** Involving SETs in this phase ensures they are able to understand and effectively plan their own capacity before engineering is truly underway. (DRI: Software Engineer in Test)

- Quad Planning
- Test planning
| Label | Description |
|-------|-------------|
| workflow::in dev | Applied by the engineer after work (including documentation) has begun on the issue. An MR is typically linked to the issue at this point. |
| workflow::in review | Applied by an engineer, indicating that all MRs required to close the issue are in review. |
| workflow::blocked | Applied if at any time during development the issue is blocked. For example: a technical issue, an open question to the PM or PD, or a cross-group dependency. |
| workflow::verification | Applied after the MRs in the issue have been merged, signaling that the issue needs to be verified in staging or production. |
Software Engineer in Test
The develop and test phase is where we build the features, address bugs or technical debt and test the solutions before launching them. The PM is directly responsible for prioritizing what should be worked on; however, the engineering manager and software engineers are responsible for the implementation of the feature using the engineering workflow. Engineering owns the definition of done and issues are not moved into the next phase until those requirements are met. Keep in mind that many team members are likely to contribute to a single issue and collaboration is key.
This phase begins after work has been broken down, and prioritized in Phase 1. Work is completed in priority order as set at the beginning of the milestone. The Engineering Manager will assign an issue to an engineer who is responsible for building the feature. An engineer can also self-serve and pick up the next priority order issue from the
workflow::ready for development queue on their team's board. That engineer will update its
workflow:: label to indicate its position in the development process.
When an issue is in development the Software Engineer in Test (SET) will ensure the quad planning process is being followed regarding test plans, regression jobs, end to end tests, etc. Coordination is key between the assigned development engineer and the SET during this phase.
Documentation for the work will be developed by the engineer and the Technical Writer, and the Technical Writer should review the documentation as part of the development process. Items discovered during a documentation review should not block issues moving into the next phase, and may drive the creation of follow-on improvement MRs for the documentation, after release.
Note: Work deemed out-of-scope or incomplete by engineering is taken back into the plan phase for refinement and rescheduling for completion.
**Feature is built**

- The Engineering Manager checks that the definition of done is met.
- Provide regular status updates to stakeholders.
- Provide asynchronous updates to avoid status check-ins and synchronous stand-ups.
- Engineers follow the engineering process to implement assigned issues.
**Feature is tested**

- Engineers test the features they implement (see Definition of done).
- The SET sets testing requirements on the issue.
- The SET follows up on any specific test coverage changes necessary as an outcome of Quad Planning.
- Technical Writers complete a review of any developed documentation.
workflow::production (the production label is recommended but not required at this phase, because issues may have a valid reason to close with different labels)
**DRI:**

- Development: Close the issue after it's available in production.
- Product Manager: Initiate release post item creation if they decide it's warranted.
- Product Manager: Initiate the dogfooding process if they decide it's applicable.
- Product Manager: Consider alerting relevant stakeholders in appropriate Slack channels.

**Collaborators:**

- The development team, Quality counterpart, and Product Manager may verify the feature is working as expected in production. (Primary verification is, of course, performed prior to production whenever possible.)
- Technical Writers create any documentation issues or MRs required to address issues identified during review that weren't resolved.

**Informed:**

- Stakeholders for the change (including customers, open-source users, and GitLab team members) will be informed about the feature by the change in the status of the issue or by the release post. GitLab team members may also be informed by posts in relevant Slack channels.
When the change becomes available in production, the issue is closed by the development team so stakeholders know work on it has been completed. Afterward, the Product Manager coordinates the release post and dogfooding process when they apply.
**Feature is available to GitLab.com hosted customers:** After it's deployed to production (and any feature flags for it are enabled), the feature is launched and available to GitLab.com hosted customers. (DRI: Development)

- Code is deployed to production.
- Feature flag(s) are enabled.

**Feature is available to self-managed customers:** The feature will be available in the next scheduled release for self-managed customers to install. (DRI: Development)

- Code is included in the self-managed release (depending upon the cut-off).

**Stakeholders of a feature will know it's available in production**

- After the feature is deployed to production and any needed verification in production is completed, the development team will close the issue.
- Prior to the issue being closed, the development team may set the workflow label to workflow::production.
- The Product Manager may follow up with individual stakeholders to let them know the feature is available.

**Customers will be informed about major changes:** When appropriate for a change, a release post item will be written and merged by the Product Manager. (DRI: Product Manager)

- The Product Manager follows the instructions in the template, which will then cause the item to appear on the GitLab.com releases page and be part of the release post.

**The Product Manager determines if the feature should go through the dogfooding process** to see if the feature is meeting GitLab's own needs. (DRI: Product Manager)

- The Product Manager determines whether the feature should be part of the dogfooding process. If so, the Product Manager initiates this process.

**Experiment results and follow-up issue is created** (DRI: Product Manager)

- For experiments, create a follow-up issue where the results of the test and next steps are tracked.
Other stakeholders as appropriate
After launch, the Product Manager and Product Designer should pay close attention to product usage data. This starts by ensuring your AMAU is instrumented and reporting as you expect. From there, consider how the feature has impacted GMAU and SMAU. At this point you should also solicit customer feedback to guide follow-on iterative improvements, until success metrics are achieved or exceeded and a decision can be made that the product experience is sufficient. To create a combined and ongoing quantitative and qualitative feedback loop, consider the outcomes and potential activities below.
**Understand qualitative feedback:** To know how to improve something, it's important to understand the qualitative feedback that we're hearing from users and team members.

- Create a dedicated feedback issue (optional).
- Continue the dogfooding process.
- Review user feedback in issues.
- Follow up with TAMs and SALs to gather feedback from interested customers.
- Set up follow-up calls with customers to gather more specific feedback.
- Consider running a Category Maturity Scorecard evaluation.
- Consider running a usability survey.
**Measure quantitative impact:** Qualitative data is great, but coupling it with quantitative data can help paint the full picture of what is going on. Set up dashboards in Sisense and review the performance and engagement of your change.

- Update any applicable dashboards in Sisense; if necessary, work with the data team for more complex reporting.
- Review AMAU, GMAU, and SMAU dashboards to understand whether the new feature or improvement has impacted core metrics.
- Consider running a Category Maturity Scorecard evaluation.
**Take action on learnings:** After you understand the qualitative and quantitative impact, you can take action on your learnings by creating new issues or updating existing open issues with more information.

- Open new issues or revise existing open issues for follow-on iterations and improvements.
- Ensure you've captured feedback in issues or as updates to your direction pages.
- If applicable, update your category maturity score and timeline.
- Share learnings with your group and stage.
- Consider sharing learnings with the broader team.
- Coordinate with your PMM to understand if there are any relevant GTM motions you should consider updating.
- Update the experiment follow-up issue with results and specific next steps.
- Potentially create issues or MRs for updates to the documentation site, to provide useful information in advance of potential product updates related to learnings.
All substantive merge requests to this page require cross-functional alignment prior to merging. For updates such as grammatical fixes and typos, you can create an MR and tag the Product Operations DRI for reference. There's no need to wait for feedback on these types of updates.
For updates that affect the overall phases by modifying core definitions, workflow labels or other cross-functionally utilized processes, you can create an issue or MR and assign it to the Product Operations DRI for collaboration and iteration. The Product Operations DRI will ensure alignment happens with the following stakeholders: