GitLab's product mission is to consistently create products and experiences that users love and value. To deliver on this mission, it's important to have a clearly defined and repeatable flow for turning an idea into something that offers customer value. Note that it's also important to allow open source contributions from the wider GitLab community at any point in the process; these will not necessarily follow this workflow.
This page is an evolving description of how we expect our cross-functional development teams to work, but at the same time reflects the current process being used. All issues are expected to follow this workflow, though they are not required to have passed every step along the way.
The goal is to have this page be the single source of truth, but it will take time to eliminate duplication elsewhere in the handbook; in the meantime, where there are conflicts this page takes precedence.
Because this page needs to be concise and consistent, please follow the prescribed change process.
| Stage (Label) | Track | Responsible | Completion Criteria | Who Transitions Out |
| --- | --- | --- | --- | --- |
| ~"workflow::validation backlog" | N/A | Product | Item has enough information to enter problem validation. | Product |
| ~"workflow::problem validation" | Validation | Product | Item is validated and defined enough to propose a solution. | Product |
| ~"workflow::design" | Validation | UX | Design work is complete enough for the issue to be validated or implemented. Product and Engineering confirm the proposed solution is viable and feasible. | UX |
| ~"workflow::solution validation" | Validation | Product, UX | Product Manager works with UX to validate the solution with users. | Product |
| ~"issue::needs review" | Review (optional) | Product (original PM) | Issue needs review by a peer PM to help it become more iterative, clearer, and better aligned with GitLab strategy. | Product (reviewer PM) |
| ~"issue::reviewed" | Review (optional) | Product (reviewer PM) | Issue has been reviewed and is ready to move to Build. | Product (original PM) |
| ~"workflow::planning breakdown" | Build | Product, UX, Engineering | Issue has backend and/or frontend labels and an estimated weight attached. | Engineering |
| ~"workflow::scheduling" | Build | Engineering | Issue has a numerical milestone label. | Product/Engineering |
| ~"workflow::ready for development" | Build | Engineering | An engineer has started to work on the issue. | Engineering |
| ~"workflow::In dev" | Build | Engineering | Initial engineering work is complete and the review process has started. | Engineering |
| ~"workflow::ready for review" | Build | Engineering | MR(s) are merged. | Engineering |
| ~"workflow::verification" | Build | Engineering | Work is demonstrable on production. | Engineering |
| ~"workflow::blocked" | N/A | Product/Engineering | Work is no longer blocked. | Engineering |
For new ideas where the customer problem and solution are not well understood, Product Managers (PMs) and the User Experience Department (UXers) should work together to validate new opportunities before moving to the Build track. The Validation track is independent of the always-moving Build track. PMs and UXers should work together to get 1-2 months ahead, so that the Build track always has well-validated product opportunities ready to start. Milestone work should be prioritized with the understanding that some milestones may include more validation efforts than others. Validation cycles may not be necessary for things like bug fixes, well-understood iterative improvements, minor design fixes, etc.
When: When our confidence about the proposed problem or solution isn't high. For example, if we aren't reasonably sure that the problem is important to a significant number of users, and/or that the solution is easy to understand and use.
Who: Product Manager, Product Designer, UX Research, Engineering Manager
✅ Understand the user problem we are trying to solve
✅ Identify business goals & key metrics to determine success
✅ Generate hypotheses and research/experiment/user-test
✅ Define MVC and potential future iterations
✅ Minimize risks to value, usability, feasibility, and business viability with qualitative and quantitative analysis
Outcome: We have confidence that a proposed solution will positively impact one or more Product KPIs. There may be reason for exceptions, so the team would need to be clear in that case and be able to justify that it is still important without mapping back to our KPIs.
If we don't have confidence in the MVC or what success looks like, we should continue validation cycles before we move to the build track.
One of the primary artifacts of the validation track is the Opportunity Canvas. The Opportunity Canvas introduces a lean product management philosophy to the validation track by quickly iterating on level of confidence, hypotheses, and lessons learned as the document evolves. At completion, it serves as a concise set of knowledge which can be transferred to the relevant issues and epics to aid in understanding user pain, business value, and the constraints to a particular problem statement. Just as valuable as a completed Opportunity Canvas is an incomplete one. The tool is also useful for quickly invalidating ideas. A quickly invalidated problem is often more valuable than a slowly validated one.
Please note that an opportunity canvas is not required for product functionality or problems that already have well-defined jobs to be done (JTBD). For situations where we already have a strong understanding of the problem and its solution, it is appropriate to skip the opportunity canvas and proceed directly to solution validation. It might be worth creating an opportunity canvas for existing features in the product to test assumptions and current thinking, although not required.
Every PM should maintain a backlog of potential validation opportunities. Validation opportunities may come from customers, internal stakeholders, product usage insights, support tickets, win/loss data, or other sensing mechanisms. Validation opportunities should be captured as an issue and described in customer problem language, and should avoid jumping ahead to feature/solution language.
Sometimes it can be tricky to identify a good issue for problem validation. The following situations are often good candidates:
Some items will skip the problem validation phase. In these cases, the problem is well understood and has been validated in other ways. When skipping problem validation, ensure the issue description is clear with the rationale and sensing mechanisms used to skip the problem validation phase.
To queue an item in your validation backlog:
Good product development starts with a well understood and clearly articulated customer problem. Once we have this, then generating solutions, developing the product experience, and launching to the market is much more effective. The danger in not starting with the problem is that you might miss out on solutions that come from deeply understanding the customer problem. A poorly defined problem statement can also cause the design and development phases to be inefficient.
Product Managers and Product Designers should refine the validation backlog together. You should pull items from your validation backlog into the problem validation process on a regular cadence to ensure you always have validated problems for your groups to start working on.
To run the problem validation process:
PM or Product Designer creates an issue using the Problem Validation template.
PM applies the ~"workflow::problem validation" label to the associated issue; this automatically removes the ~"workflow::validation backlog" label.
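This automatic swap works because scoped labels (those containing `::`) are mutually exclusive within the same scope. As an illustration only (the helper below is a hypothetical sketch, not a GitLab API), the exclusivity rule looks like this:

```python
def apply_scoped_label(labels, new_label):
    """Apply new_label, dropping any label that shares its '::' scope.

    Mirrors GitLab's scoped-label behavior: applying a label like
    "workflow::problem validation" removes "workflow::validation backlog".
    """
    scope, sep, _ = new_label.rpartition("::")
    if sep:
        # Drop labels in the same scope (same text before the last "::").
        labels = [l for l in labels if l.rpartition("::")[0] != scope]
    else:
        # Unscoped labels simply accumulate (no duplicates).
        labels = [l for l in labels if l != new_label]
    return labels + [new_label]
```

For example, applying `"workflow::problem validation"` to `["workflow::validation backlog", "bug"]` keeps `"bug"` but leaves only one `workflow::` label.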
PM fills out an opportunity canvas to the best of their ability. Ensure the problem and persona are well articulated, and add the opportunity canvas to the issue's Designs. It can be helpful to discuss your problem statement, Jobs To Be Done (JTBD), and user experience as a Product Manager and Product Designer partnership. Note that you should include content for the solution and go-to-market sections, even if confidence is low; these sections are likely to change, but thinking them through will help clarify your thoughts. PMs are encouraged to reach out to UX Researchers for help.
PM opens a Problem validation research issue using the available template in the UX Research project. Once completed, please assign the issue to the relevant UX Researcher.
Product Manager, Product Designer, and UX Researcher meet to discuss the appropriate research methodology, timescales, and user recruitment needs.
PM finalizes the opportunity canvas with the synthesized feedback and reviews it with the Product Designer.
PM schedules a review of the opportunity canvas with Scott Williamson, Christie Lenneville, and the Product Director for your section. Weekly time blocks will be held. You can contact Kristie 'KT' Thomas to get your review added to one of the weekly time blocks.
Apply the ~"workflow::design" label to an existing issue, or create a new issue if needed.
When there are one or more potential solutions that meet business needs and are technically feasible, then it's time to validate that the solution(s) meet our users' needs. As always, you should be consistently moving issues forward from the backlog into problem and solution validation to ensure that there are validated problems to deliver.
To run the solution validation process:
Product Designer works with the PM (and the Product Design Manager, if needed) to determine whether solution validation is needed. Solution validation is appropriate when we don't have high confidence that the proposed solution will meet users' expectations.
Note: Solution validation is only needed after designs or solutions have been proposed. If you lack confidence in a specific direction or if there is a high risk in moving forward without user validation, then continue with these steps. If you are uncertain whether to move forward, reach out to your Product Design Manager.
Product Designer creates an issue using the Solution validation template in the GitLab UX Research project. The template automatically applies the ~"workflow::solution validation" label. Link the associated Opportunity Canvas and design-related issues, and assign the new issue to yourself, the PM, and the Product Design Manager.
Product Designer begins crafting a screening survey in Qualtrics.
Note: It's important to complete the screening survey in a timely manner, so that user recruitment can quickly begin. In most cases, user recruitment should begin before the usability testing script is complete.
Open a recruitment request issue in the GitLab UX Research project using the available issue template, and assign it to the relevant Research Coordinator.
Product Designer prepares the testing environment. This will likely be a clickthrough wireframe or prototype (low or high-fidelity screenshots, or an interactive User Interface (UI) prototype).
Note: Design reviews should happen prior to preparing for testing. Make sure solutions are viable and include feedback from PM and Engineering.
(email@example.com) and any other interested parties (Product Designer, PMs, Engineers, etc.).
recruitment request issue in the GitLab UX Research project. The Research Coordinator will reimburse participants for their time (payment occurs on Tuesdays and Thursdays).
At this point, we should have a clear direction on how to move forward. If the solution is validated, then the issue is ready to enter the build track. If the solution was not validated, revisit and make appropriate adjustments.
The (iteration) Review track is an optional step in the flow that brings peer PMs in to help you hone your skills at iteration, clarity, and strategy. Keeping issues small and iterative is core to how GitLab maintains velocity; writing a "small" issue is often (counterintuitively) more difficult than writing a bigger one, and understanding the entire strategy of how GitLab operates is a herculean task. Having a helping hand with these tasks is important to professional development, and it ensures that our entire Product organization continues to improve.
You should consider requesting a review when:
Note: If you are a new GitLab team member, you should request reviews of the first 3 issues you create. Reviews will help you become familiar with what we're looking for in an iteration, get more comfortable with our process, and meet your fellow team members. Once you've gone through a few reviews, this track can be considered optional.
If you would like a peer to review one of your issues (or epics):
Apply the ~"issue::needs review" label to your issue.
The reviewer applies the ~"issue::reviewed" label and lets the original PM know that the review is complete.
You can view all the work happening in this track on this board.
The build track is where we plan, develop, and deliver value to our users by building MVCs, fixing defects, patching security vulnerabilities, enhancing user experience, and improving performance. DRIs across engineering disciplines, including Design, Backend, Frontend, and Quality, work closely together to implement MVCs in close collaboration with the Product Manager. Decisions are made quickly if challenges arise. We make sure to instrument usage and performance measurements, so that once MVCs are delivered into the hands of customers, feedback is captured quickly to inform the next iteration.
When: As we build MVCs according to our product development timeline
Who: Product Manager, Product Designer, Engineers, Software Engineers in Test
✅ Release to a subset or full set of customers as appropriate
✅ Assess UX, functional, and technical performance
✅ Collect data to measure MVC against success metrics to inform the next iteration
✅ Iterate until success metrics are achieved and the product experience is optimal
Outcome: Deliver performant MVCs that improve one or more of our Product KPIs and/or Engineering KPIs. If it fails to do so, honor our Efficiency value (that includes a low level of shame), abandon it, and restart the validation cycle to identify the right solution.
The build track starts with Product Manager (PM), User Experience (UX), Software Engineer in Test (SET), and Engineering Managers (EM) breaking down the opportunities into well-defined issues.
For user-facing deliverables, Product Designers work with Engineering to validate technical feasibility during the ~"workflow::design" phase, but it's equally important to validate feasibility for work that users don't see in the UI, such as APIs and other technical features. Communicate these solutions using artifacts such as API docs, workflow diagrams, etc. Involve your Engineering Managers in creating and reviewing these artifacts to gain a shared understanding of the solution and receive input on feasibility.
Apply the ~documentation label and complete other relevant PM documentation responsibilities. For issues requiring new or updated UI text, add the corresponding label.
Fill out the Availability and Testing section in the Feature Proposal to complete the definition of done. As we grow to reach our desired ratio, we will only have the quad approach in groups where we have an assigned SET in place.
The PM applies the ~"quad-planning::ready" label and assigns the issue to the counterpart SET.
The SET reviews the Availability and Testing section, ensuring that the strategy accounts for all test levels and facilitating discussions and feedback with the group.
Where tests run as part of the package-and-qa regression job, this is made clear in the above section.
If action needs to be taken as a result of quad planning, the SET applies the ~"quad-planning::complete-action" label to the issue. If no additional action needs to be taken, the SET applies the ~"quad-planning::complete-no-action" label to the issue.
The team creates a Build Plan that outlines the number of MRs and responsibilities for assigned team members. The EM and PM focus on iteration when reviewing these plans.
The issue then moves to ~"workflow::scheduling" to allow for a buffered priority queue.
The EM applies the ~"workflow::ready for development" and ~deliverable labels during the next phase, in alignment with the PM.
During this phase, issues move through ~"workflow::ready for development", ~"workflow::In dev" (along with ~"workflow::ready for review" as a queue state while waiting for a maintainer), and ~"workflow::verification" (which has its own sub-states).
The develop and test phase is where we build the features and test them before launch:
Engineers pull issues labeled ~"workflow::ready for development" and apply the ~deliverable label as they commit to them, in alignment with the PM.
If further breakdown is needed, ~"workflow::planning breakdown" should be reapplied.
Once verified, the issue moves to ~"workflow::production". At this point, the feature is launched.
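The build-track labels described on this page form a roughly linear sequence from planning breakdown to production. As a sketch only (the state ordering is taken from this page; the checker function itself is hypothetical):

```python
# Build-track workflow labels in their expected order (taken from this page).
BUILD_STATES = [
    "workflow::planning breakdown",
    "workflow::scheduling",
    "workflow::ready for development",
    "workflow::In dev",
    "workflow::ready for review",
    "workflow::verification",
    "workflow::production",
]

def is_forward_transition(current, new):
    """Return True when moving from `current` to `new` advances the issue."""
    return BUILD_STATES.index(new) > BUILD_STATES.index(current)
```

For example, moving an issue from ~"workflow::In dev" to ~"workflow::verification" is a forward transition, while moving back to ~"workflow::planning breakdown" (as described above) is not.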
If the feature is part of the Dogfooding process:
After launch, the PM should pay close attention to product usage data and customer feedback to guide follow-on iterative improvements, until success metrics are achieved or a decision is made that the product experience is sufficient.
Here are several strategies for breaking features down into tiny changes that can be developed and released iteratively. This process will also help you critically evaluate if every facet of the design is actually necessary.
As part of design and discovery, you likely created a minimal user journey that contains the sequential steps a user will take to “use” the feature you are building. Each of these steps should be separated. You can break them down further by asking yourself these questions:
View, Create, Update, Remove and Delete are actions users take while interacting with software. These actions naturally provide lines along which you can split functionality into smaller features. By doing this, you prioritize the most important actions first. For example, users will likely need to be able to visually consume information before they can create, update, remove, or delete.
Often, the criteria by which a new feature needs to be built is implicit. It can help to approach this from a test-driven development mindset, meaning you write the tests and the outcomes you need from the software before building the software. Writing these tests can uncover the different criteria you need the development team to meet when building the new feature. Once you’ve outlined these tests, you may be able to use them to continue to break down the feature into smaller parts for each test. Here are a few examples:
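As a concrete (and entirely hypothetical) illustration of this test-first mindset, imagine a "truncate long issue titles" feature: the acceptance tests are written before the implementation, and each test is a candidate for its own small issue:

```python
def truncate_title(title, limit=255):
    """Minimal implementation written only after the tests below existed."""
    if len(title) <= limit:
        return title
    return title[: limit - 1] + "…"

# Each assertion is one explicit criterion -- and one candidate sub-issue.
assert truncate_title("short title") == "short title"  # short titles untouched
assert len(truncate_title("x" * 300)) == 255           # long titles capped
assert truncate_title("x" * 300).endswith("…")         # truncation is visible
```

Writing the assertions first surfaces criteria (the cap, the visible ellipsis) that an implicit description would likely have missed.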
Software often fails and can fail in different ways depending upon how it is architected. It is always best to provide the user with as much information as possible as to why something did not behave as expected. Creating and building different states to handle all possible errors and exceptions can easily be broken down into individual issues. Start by creating a generic error state to display when anything goes wrong, and then add on to handle different cases one by one. Remember to always make error messages useful, and add additional error messages as you identify new error states.
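The "generic first, specific later" approach to error states can be sketched as follows (the messages and lookup are hypothetical, purely to show the iteration order):

```python
# Iteration 1 shipped only the generic fallback; later iterations added
# specific messages one error case at a time.
SPECIFIC_MESSAGES = {
    "PermissionError": "You don't have permission to perform this action.",
    "TimeoutError": "The request timed out. Please try again.",
}

def user_facing_error(exc):
    """Fall back to a generic message until a specific one has been added."""
    return SPECIFIC_MESSAGES.get(
        type(exc).__name__, "Something went wrong. Please try again."
    )
```

Each new entry in the lookup is a small, independently shippable issue, while the fallback guarantees users always see something actionable.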
Breaking down a design into pieces that can be released iteratively is going to depend on what you are building. Here are a few helpful questions to guide that process:
Continuously improving the software we write is important. If we don't proactively work through technical debt and UX debt as we progress, we will end up spending more time and moving slower in the long run. However, it is important to strike the right balance between paying down technical and UX debt and iteratively developing features. Here are some questions to consider:
Consider the following to improve iteration:
All substantive merge requests to this page require cross-functional alignment prior to merging. For updates such as grammatical fixes and typos, you can create an MR and tag the Product Operations DRI for reference. There is no need to wait for feedback on these types of updates.
For updates that affect the overall phases by modifying core definitions, workflow labels or other cross-functionally utilized processes, you can create an issue or MR and assign it to the Product Operations DRI for collaboration and iteration. The Product Operations DRI will make sure alignment happens with the following stakeholders: