GitLab's product mission is to consistently create products and experiences that users love and value. To deliver on this mission, it's important to have a clearly defined and repeatable flow for turning an idea into something that offers customer value. Note that it's also important to allow open source contributions at any point in the process from the wider GitLab community - these will not necessarily follow this process.
This page is an evolving description of how we expect our cross-functional development teams to work, but at the same time it reflects the current process in use. All issues are expected to follow this workflow, though they are not required to have passed every step along the way.
The goal is to have this page be the single source of truth, but it will take time to eliminate duplication elsewhere in the handbook; in the meantime, where there are conflicts this page takes precedence.
Because this page needs to be concise and consistent, please follow the prescribed change process when proposing updates.
| Stage (Label) | Track | Responsible | Completion Criteria | Who Transitions Out |
| ------------- | ----- | ----------- | ------------------- | ------------------- |
| `workflow::validation backlog` | N/A | Product | Item has enough information to enter problem validation. | Product |
| `workflow::problem validation` | Validation | Product, UX Research | Item is validated and defined enough to propose a solution. | Product |
| `workflow::design` | Validation | Product Design | Design work is complete enough for the issue to be validated or implemented. Product and Engineering confirm the proposed solution is viable and feasible. | Product Design |
| `workflow::solution validation` | Validation | Product, Product Design | Product Manager works with Product Designer to validate the solution with users. | Product |
| `issue::needs review` | Review (optional) | Product (Original PM) | Issue needs review by a peer PM to help it become more iterative, clearer, and better aligned with GitLab strategy. | Product (Reviewer PM) |
| `issue::reviewed` | Review (optional) | Product (Reviewer PM) | Issue has been reviewed and is ready to move to Build. | Product (Original PM) |
| `workflow::planning breakdown` | Build | Product, Product Design, Engineering | Issue has backend and/or frontend labels and an estimated weight attached. | Engineering |
| `workflow::ready for development` | Build | Engineering | Issue has a numerical milestone label. | Product/Engineering |
| `workflow::in dev` | Build | Engineering | An engineer has started to work on the issue. | Engineering |
| `workflow::in review` | Build | Engineering | Initial engineering work is complete and the review process has started. | Engineering |
| `workflow::verification` | Build | Engineering | MR(s) are merged. | Engineering |
| `workflow::production` | Build | Engineering | Work is demonstrable on production. | Engineering |
| `workflow::blocked` | N/A | Product/Engineering | Work is no longer blocked. | Engineering |
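The Build-stage progression in the table can be sketched as a simple ordered sequence. This is an illustrative model only: the label names are the workflow labels referenced elsewhere on this page, and `next_stage` is a hypothetical helper, not GitLab tooling.

```python
# Illustrative model of the Build-track label progression from the table
# above. BUILD_STAGES and next_stage are assumptions for this sketch,
# not a GitLab API.

BUILD_STAGES = [
    "workflow::planning breakdown",
    "workflow::ready for development",
    "workflow::in dev",
    "workflow::in review",
    "workflow::verification",
    "workflow::production",
]

def next_stage(current):
    """Return the stage that follows `current`, or None if it is terminal."""
    i = BUILD_STAGES.index(current)
    return BUILD_STAGES[i + 1] if i + 1 < len(BUILD_STAGES) else None

print(next_stage("workflow::in dev"))      # workflow::in review
print(next_stage("workflow::production"))  # None
```

Modeling the stages as an ordered list makes the "who transitions out" handoffs in the table easy to reason about: every issue moves strictly forward, one stage at a time.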
Issue descriptions should always be maintained as the single source of truth. It's not efficient for contributors to need to read through every comment in an issue to understand the current state.
For new ideas where the customer problem and solution are not well understood, Product Managers (PMs) and the User Experience Department (UXers) should work together to validate new opportunities before moving to the Build track. The Validation track is an independent track from the always-moving Build track. PMs and UXers should work together to get 1-2 months ahead, so that the Build track always has well-validated product opportunities ready to start. Milestone work should be prioritized with the understanding that some milestones may include more validation efforts than others. Validation cycles may not be necessary for things like bug fixes, well-understood iterative improvements, minor design fixes, etc.
When: When our confidence about the proposed problem or solution isn't high. For example, if we aren't reasonably sure that the problem is important to a significant number of users, and/or that the solution is easy to understand and use.
Who: Product Manager, Product Designer, UX Research, Engineering Manager
✅ Understand the user problem we are trying to solve
✅ Identify business goals & key metrics to determine success
✅ Generate hypotheses and research/experiment/user-test
✅ Define MVC and potential future iterations
✅ Minimize risks to value, usability, feasibility, and business viability with qualitative and quantitative analysis
Outcome: We have confidence that a proposed solution will positively impact one or more Product KPIs. There may be reason for exceptions, so the team would need to be clear in that case and be able to justify that it is still important without mapping back to our KPIs.
If we don't have confidence in the MVC or what success looks like, we should continue validation cycles before we move to the build track.
One of the primary artifacts of the validation track is the Opportunity Canvas. The Opportunity Canvas introduces a lean product management philosophy to the validation track by quickly iterating on level of confidence, hypotheses, and lessons learned as the document evolves. At completion, it serves as a concise set of knowledge which can be transferred to the relevant issues and epics to aid in understanding user pain, business value, and the constraints to a particular problem statement. Just as valuable as a completed Opportunity Canvas is an incomplete one. The tool is also useful for quickly invalidating ideas. A quickly invalidated problem is often more valuable than a slowly validated one.
Please note that an opportunity canvas is not required for product functionality or problems that already have well-defined jobs to be done (JTBD). For situations where we already have a strong understanding of the problem and its solution, it is appropriate to skip the opportunity canvas and proceed directly to solution validation. It might be worth using the opportunity canvas template for existing features in the product to test assumptions and current thinking, although not required.
Opportunity Canvases are a great assessment for ill-defined or poorly understood problems our customers are experiencing that may result in net new features. As noted previously, opportunity canvases may also be helpful for existing features, which is where the Product-Opportunity-Opportunity-Canvas-Lite issue template delivers. This template offers a lightweight approach to quickly identify the customer problem, business case, and feature plan in a convenient issue. The steps to use the template are outlined in its Instructions section; for clarity, you would create an issue from this template for an existing feature you are interested in expanding. For example, this template is a great fit if you are evaluating the opportunity to add a third or fourth iteration to an MVC. The issue should leverage already available resources and be used to collate details to surface to leadership for review. Once you fill out the template, assign the issue to the parties identified in it; you can always post in the #product channel for visibility.
Every PM should maintain a backlog of potential validation opportunities. Validation opportunities may come from customers, internal stakeholders, product usage insights, support tickets, win/loss data, or other sensing mechanisms. Validation opportunities should be captured as an issue and described in customer problem language, and should avoid jumping ahead to feature/solution language.
Sometimes it can be tricky to identify a good issue for problem validation. The following situations are often good candidates:
Some items will skip the problem validation phase. In these cases, the problem is well understood and has been validated in other ways. When skipping problem validation, ensure the issue description is clear with the rationale and sensing mechanisms used to skip the problem validation phase.
To queue an item in your validation backlog:
Good product development starts with a well understood and clearly articulated customer problem. Once we have this, then generating solutions, developing the product experience, and launching to the market is much more effective. The danger in not starting with the problem is that you might miss out on solutions that come from deeply understanding the customer problem. A poorly defined problem statement can also cause the design and development phases to be inefficient.
Product Managers and Product Designers should refine the validation backlog together. You should pull items from your validation backlog in the problem validation process on a regular cadence to ensure you always have validated problems for your groups to start working on.
To run the problem validation process:
PM creates an issue using the Problem Validation template.
PM applies the ~"workflow::problem validation" label to the associated issue; this automatically removes the ~"workflow::validation backlog" label.
PM fills out an opportunity canvas to the best of their ability. Ensure the problem and persona are well articulated, and add the opportunity canvas to the issue's Designs. It can be helpful to discuss your problem statement, Jobs to be Done (JTBD), and user experience as a Product Manager and Product Designer partnership. Note that you should include content for the solution and go-to-market sections, even if confidence is low; these sections are likely to change, but thinking them through will help clarify your thoughts. PMs are encouraged to reach out to UX Researchers for help.
PM opens a Problem validation research issue using the available template in the UX Research project. Once completed, please assign the issue to the relevant UX Researcher.
Product Manager, Product Designer, and UX Researcher meet to discuss the appropriate research methodology, timescales, and user recruitment needs.
PM finalizes the opportunity canvas with the synthesized feedback and reviews it with the Product Designer.
PM schedules a review of the opportunity canvas with Scott Williamson, Christie Lenneville, and the Product Director for your section. Weekly time blocks will be held. You can contact Kristie 'KT' Thomas to get your review added to one of the weekly time blocks.
PM applies the ~"workflow::design" label to an existing issue or creates a new issue, if needed.
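The automatic label swap in the steps above is how GitLab's scoped labels behave: only one label per scope (here `workflow::`) can be on an issue at a time. A minimal sketch of that rule, assuming a plain list of label strings; the helper name is ours, not a GitLab API:

```python
def apply_scoped_label(labels, new_label):
    """Apply new_label, dropping any existing label in the same scope.

    Scoped labels use the `scope::value` convention; only one label per
    scope may be present at a time. This mirrors GitLab's scoped-label
    behavior as a pure function over a list of label strings.
    """
    scope = new_label.rsplit("::", 1)[0] if "::" in new_label else None
    if scope is not None:
        labels = [l for l in labels if not l.startswith(scope + "::")]
    return labels + [new_label]

labels = ["workflow::validation backlog", "devops::plan"]
labels = apply_scoped_label(labels, "workflow::problem validation")
print(labels)  # ['devops::plan', 'workflow::problem validation']
```

Labels outside the `workflow::` scope (like the `devops::plan` placeholder above) are untouched, which is why an issue can carry one label from each scope simultaneously.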
When there are one or more potential solutions that meet business needs and are technically feasible, then it's time to validate that the solution(s) meet our users' needs. As always, you should continually move issues from the backlog into problem and solution validation to ensure that there are validated problems to deliver.
To run the solution validation process:
Product Designer works with the PM to determine whether solution validation is needed. Solution validation is appropriate when we don't have high confidence that the proposed solution will meet users' expectations.
Note: Solution validation is only needed after designs or solutions have been proposed. If you lack confidence in a specific direction or if there is a high risk in moving forward without user validation, then continue with these steps. If you are uncertain whether to move forward, reach out to your Product Design Manager.
Product Designer creates a new issue using the Solution validation template in the GitLab UX Research project. The issue will automatically apply the ~"workflow::solution validation" label. Link the associated Opportunity Canvas and design-related issues. Assign the new issue to yourself, the PM, and the Product Design Manager.
PM and Product Designer review the goals and research questions to determine the best research method to use. It's critical to determine this early, because the method dictates what kinds of design assets to use, and it influences criteria for the screening survey.
PM and Product Designer discuss user recruitment needs and clarify the research study's goals, research questions, and hypotheses. Once a draft is complete, the Product Design Manager reviews and provides feedback.
Product Designer begins crafting a screening survey in Qualtrics.
Note: It's important to complete the screening survey in a timely manner, so that user recruitment can quickly begin. In most cases, user recruitment should begin before the test plan is complete. Learn more about the screening process to understand what happens once the request is made.
Product Designer creates a recruitment request issue in the GitLab UX Research project using the available issue template. Assign it to the relevant Research Coordinator.
The Research Coordinator will perform a sense check to make sure your screener will catch the people you’ve identified as your target participants. If there are multiple rounds of review, the Coordinator will pause activities until uncertainty about your screening criteria is resolved.
Product Designer drafts the test plan in collaboration with the PM. When a first draft of the test plan is complete, the Product Design Manager and UX Researcher review and provide feedback.
Product Designer prepares the design assets needed for the study. This will likely be a clickthrough wireframe or prototype (low or high-fidelity screenshots, or an interactive UI prototype).
Note: Design reviews should happen prior to preparing for testing. Make sure solutions are viable and include feedback from PM and Engineering.
Product Designer forwards research study session invites to the UX Research calendar (firstname.lastname@example.org) and any other interested parties (Product Designer, PMs, Engineers, etc.).
Product Designer leads (moderates) the usability sessions. PM should observe research study sessions and take note of insights and pain points. It is beneficial to also invite Engineers to shadow the research study; this can help the team broadly understand existing user behaviors. *Recommendation:* Run a pilot session with an internal participant to test for technical issues and comprehension, and to make adjustments before your sessions with participants.
After the research study sessions conclude, the Product Designer updates the recruitment request issue in the GitLab UX Research project. The Research Coordinator will reimburse participants for their time (payment occurs on Tuesdays and Thursdays).
PM and Product Designer work collaboratively to synthesize the data and identify trends in Dovetail, resulting in insights.
Product Design Manager reviews insights and provides feedback, if needed.
Product Designer updates the solution validation issue with links to the insights in Dovetail.
PM updates the opportunity canvas with the insights.
PM articulates success metrics for each opportunity and ensures a plan for product instrumentation and dashboarding is in place.
At this point, we should have a clear direction on how to move forward. If the solution is validated, then the issue is ready to enter the build track. If the solution was not validated, revisit and make appropriate adjustments.
The (iteration) Review track is an optional step in the flow that brings peer PMs in to help you hone your skills in iteration, clarity, and strategy. Keeping issues small and iterative is core to how GitLab maintains velocity; writing a "small" issue is often (counterintuitively) more difficult than writing a bigger one, and understanding the entire strategy of how GitLab operates is a herculean task. Having a helping hand with these tasks is important to professional development, and it ensures that our entire Product organization continues to improve.
You should consider requesting a review when:
*Note:* If you are a new GitLab team member, you should request reviews of the first 3 issues you create. It will help you familiarize yourself with what we're looking for in an iteration, get more comfortable with our process, and meet your fellow team members. Once you've gone through a few reviews, this track can be considered optional.
If you would like a peer to review one of your issues (or epics):
Add the ~"issue::needs review" label to your issue.
The Reviewer PM applies the ~"issue::reviewed" label and lets the original PM know that the review is complete.
You can view all the work happening in this track on this board.
The build track is where we plan, develop, and deliver value to our users by building MVCs, fixing defects, patching security vulnerabilities, enhancing user experience, and improving performance. DRIs across the Design, Backend, Frontend, and Quality engineering disciplines work closely together to implement MVCs in close collaboration with the Product Manager. Decisions are made quickly if challenges arise. We make sure to instrument usage and track product performance, so that once MVCs are delivered into the hands of customers, feedback is captured quickly for learnings to refine the next iteration.
When: As we build MVCs according to our product development timeline
Who: Product Manager, Product Designer, Engineers, Software Engineers in Test
✅ Release to a subset or full set of customers as appropriate
✅ Assess UX, functional, technical performance, and customer impact
✅ Collect data to measure MVC against success metrics to inform the next iteration
✅ Iterate until success metrics are achieved and the product experience is optimal
Outcome: Deliver performant MVCs that improve one or more of our Product KPIs and/or Engineering KPIs. If it fails to do so, honor our Efficiency value (that includes a low level of shame), abandon it, and restart the validation cycle to identify the right solution.
The build track starts with Product Manager (PM), User Experience (UX), Software Engineer in Test (SET), and Engineering Managers (EM) breaking down the opportunities into well-defined issues.
For user-facing deliverables, Product Designers work with Engineering to validate technical feasibility during the workflow::design phase, but it's equally important to validate feasibility for work that users don't see in the UI, such as APIs and other technical features. Communicate these solutions using artifacts such as API docs, workflow diagrams, etc. Involve your Engineering Managers in creating and reviewing these artifacts to gain a shared understanding of the solution and receive input on feasibility.
documentation label and complete other relevant PM documentation responsibilities. For issues requiring new or updated UI text, add the
Availability and Testing section in the Feature Proposal to complete the definition of done. As we grow to reach our desired ratio, we will only have the quad approach in groups where we have an assigned SET in place.
workflow::ready for development around the 16th of each milestone and apply the quad-planning::ready label. If necessary, SET will coordinate with PM/EM to discuss specific issues as needed.
Availability and Testing section, ensuring that the strategy accounts for all test levels and facilitating discussions and feedback with the group.
package-and-qa regression job, this is made clear in the above section.
quad-planning::complete-action label to the issue. If no additional action needs to be taken, the SET applies the quad-planning::complete-no-action label to the issue.
Build Plan that outlines the number of MRs and responsibilities for assigned team members. EM and PM provide a focus on iteration when reviewing these plans.
workflow::scheduling to allow for a buffered priority queue.
workflow::ready for development and deliverable labels during the next phase, in alignment with the PM.
workflow::ready for development, workflow::in dev (along with workflow::ready for review as a queue state while waiting for a maintainer), workflow::verification (sub-states for verification are
The develop and test phase is where we build the features and test them before launch:
workflow::ready for development and apply the deliverable label as they commit to them, in alignment with the PM.
workflow::planning breakdown should be reapplied.
workflow::production. At this point the feature is launched.
If the feature is part of the Dogfooding process:
After launch, the Product Manager and Product Designer should pay close attention to product usage data. This starts by ensuring your AMAU is instrumented and reporting as you expect. From there consider how the feature has impacted GMAU and SMAU. At this point you should also solicit customer feedback to guide follow-on iterative improvements, until success metrics are achieved/exceeded and a decision can be made that the product experience is sufficient. To create a combined and ongoing quantitative and qualitative feedback loop, the following activities are recommended:
| Activity | Recommendations |
| --- | --- |
| Understand Qualitative Feedback | - Continue Dogfooding process<br>- Review user feedback in issues<br>- Follow up with TAMs and SALs to gather feedback from interested customers<br>- Set up follow-up calls with customers to gather more specific feedback<br>- Consider running a Category Maturity Scorecard evaluation<br>- Consider running a survey for usability |
| Measure Quantitative Impact | - Update any applicable dashboards in Sisense; if necessary, work with the data team for more complex reporting<br>- Review AMAU, GMAU, and SMAU dashboards to understand if the new feature or improvement has impacted core metrics<br>- Consider running a Category Maturity Scorecard evaluation |
| Take Action on Learnings | - Open new issues or revise existing issues for follow-on iterations and improvements<br>- Ensure you've captured feedback in issues or as updates to your direction pages<br>- If applicable, update your category maturity score and timeline<br>- Share learnings with your group and stage<br>- Consider sharing learnings with the broader team<br>- Coordinate with your PMM to understand if there are any relevant GTM motions you should consider updating |
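On the quantitative side, a monthly-active-user figure like AMAU is, at its core, a distinct-user count over a month of usage events. A simplified sketch, assuming a hypothetical `(user_id, date)` event shape rather than GitLab's actual Service Ping/Sisense pipeline:

```python
# Illustrative monthly-active-user count. The (user_id, date) event
# shape is an assumption for this sketch, not real product usage data.
from datetime import date

def monthly_active_users(events, year, month):
    """Count distinct users with at least one event in the given month."""
    return len({
        user_id
        for user_id, day in events
        if day.year == year and day.month == month
    })

events = [
    ("alice", date(2024, 5, 3)),
    ("alice", date(2024, 5, 20)),  # same user counted once
    ("bob",   date(2024, 5, 9)),
    ("carol", date(2024, 4, 28)),  # outside the month, excluded
]
print(monthly_active_users(events, 2024, 5))  # 2
```

The key property to preserve when instrumenting a feature is the "distinct" part: repeated events from one user must not inflate the metric, which is why the sketch builds a set before counting.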
Here are several strategies for breaking features down into tiny changes that can be developed and released iteratively. This process will also help you critically evaluate if every facet of the design is actually necessary.
As part of design and discovery, you likely created a minimal user journey that contains the sequential steps a user will take to “use” the feature you are building. Each of these steps can be separated into its own change. You can go further by asking yourself these questions:
View, Create, Update, Remove and Delete are actions users take while interacting with software. These actions naturally provide lines along which you can split functionality into smaller features. By doing this, you prioritize the most important actions first. For example, users will likely need to be able to visually consume information before they can create, update, remove, or delete.
Often, the criteria by which a new feature needs to be built is implicit. It can help to approach this from a test-driven development mindset, meaning you write the tests and the outcomes you need from the software before building the software. Writing these tests can uncover the different criteria you need the development team to meet when building the new feature. Once you’ve outlined these tests, you may be able to use them to continue to break down the feature into smaller parts for each test. Here are a few examples:
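For instance, suppose the new feature is a (hypothetical) issue-weight validator. Writing the acceptance tests first makes each criterion explicit, and each failing test can become its own small issue:

```python
# Hypothetical example: acceptance criteria for an issue-weight field,
# written test-first. Each assertion below could map to its own small
# issue in the breakdown.

def validate_weight(value):
    """Return True if `value` is a valid issue weight (a non-negative int)."""
    return isinstance(value, int) and not isinstance(value, bool) and value >= 0

# Criterion 1: ordinary weights are accepted
assert validate_weight(3)
# Criterion 2: zero is a valid weight
assert validate_weight(0)
# Criterion 3: negative weights are rejected
assert not validate_weight(-1)
# Criterion 4: non-numeric input is rejected
assert not validate_weight("3")
```

Written before any implementation exists, each assertion doubles as both a specification and a natural seam for splitting the work.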
Software often fails and can fail in different ways depending upon how it is architected. It is always best to provide the user with as much information as possible as to why something did not behave as expected. Creating and building different states to handle all possible errors and exceptions can easily be broken down into individual issues. Start by creating a generic error state to display when anything goes wrong, and then add on to handle different cases one by one. Remember to always make error messages useful, and add additional error messages as you identify new error states.
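One common shape for this is a generic fallback message plus specific handlers added case by case. A minimal sketch, with illustrative error types and messages:

```python
# Start with a generic error state, then add specific, more useful
# messages one by one -- each new case can be its own small issue.
# The message strings here are illustrative, not GitLab copy.

SPECIFIC_MESSAGES = {
    TimeoutError:      "The server took too long to respond. Try again.",
    PermissionError:   "You don't have access to this resource.",
    FileNotFoundError: "The requested item no longer exists.",
}

def user_facing_message(error):
    """Map an exception to a helpful message, falling back to a generic one."""
    return SPECIFIC_MESSAGES.get(type(error), "Something went wrong. Please try again.")

print(user_facing_message(TimeoutError()))   # The server took too long to respond. Try again.
print(user_facing_message(ValueError("x")))  # Something went wrong. Please try again.
```

The fallback ships first so every failure has *some* state; each later iteration only adds an entry to the mapping, which keeps the increments small.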
When creating net new features, research efforts are intended to provide GitLab with the best opportunity to deliver customer value while considering business needs, performance expectations, timelines, and other considerations. When delivering new features that interact with existing customer data and workflows, care must be taken to evaluate impact throughout the product development process.
Breaking down a design into pieces that can be released iteratively is going to depend on what you are building. Here are a few helpful questions to guide that process:
Continuously improving the software we write is important. If we don't proactively work through technical debt and UX debt as we progress, we will end up spending more time and moving slower in the long run. However, it is important to strike the right balance between addressing technical and UX debt and iteratively developing features. Here are some questions to consider:
Consider the following to improve iteration:
All substantive merge requests to this page require cross-functional alignment prior to merging. To make updates such as grammatical fixes and typos, you can create an MR and tag the Product Operations DRI for reference. There is no need to wait for feedback on these types of updates.
For updates that affect the overall phases by modifying core definitions, workflow labels or other cross-functionally utilized processes, you can create an issue or MR and assign it to the Product Operations DRI for collaboration and iteration. The Product Operations DRI will make sure alignment happens with the following stakeholders: