The Fulfillment Sub-department is responsible for the infrastructure connecting the systems that support the user purchasing process, and is the primary caretaker of the CustomersDot and LicenseDot systems.
We frequently collaborate with other teams. For example, if we are making a change that will affect usage ping, we collaborate with the Product Intelligence group. When our work involves or affects any backend enterprise application, we collaborate with the Sales Systems team.
The Fulfillment Engineering team has grown substantially over the previous year, with an influx of new staff through hiring and internal realignments. Our success in FY22 will be determined by how well we can up-skill and motivate these new team members, along with the relationships and trust we build with our stable counterparts in Product, UX, and Quality while we work to deliver our team's roadmap.
In order to support GitLab's continued business growth, the applications we are responsible for need to be able to scale while reducing our customers' need for manual support intervention. To achieve this, our engineering efforts and product roadmap are concentrated on ensuring our Fulfillment process:
We measure our success by our performance indicators and our team members' engagement and wellbeing. We support our individual team members by having regular career conversations, gathering actionable feedback on ways we can improve how we work, and reinforcing our Diversity, Inclusion and Belonging value through ongoing training. We aim to run quarterly team days to help build relationships and foster collaboration.
Responsible for retrieving and managing licenses, overall system architecture, and data integrity. Systems: LicenseDot. Integration: Salesforce.
Responsible for all consumables management, usage reporting, and usage notifications (excluding license utilization). Systems: GitLab.
We plan in monthly cycles in accordance with our Product Development Timeline.
Release scope for an upcoming release should be finalized on or around the 26th, when Product meets with Engineering Managers for a preliminary issue review. Issues are tagged with a milestone and given an initial estimate.
Before work can begin on an issue, it should first be estimated after a preliminary investigation. This normally happens in the monthly planning meeting.
|Weight|Description|
|---|---|
|1|The simplest possible change. We are confident there will be no side effects.|
|2|A simple change (minimal code changes), where we understand all of the requirements.|
|3|A simple change, but the code footprint is bigger (e.g. lots of different files or tests affected). The requirements are clear.|
|5|A more complex change that will impact multiple areas of the codebase; there may also be some refactoring involved. Requirements are understood, but you feel there are likely to be some gaps along the way.|
|8|A complex change that will involve much of the codebase or will require lots of input from others to determine the requirements.|
|13|A significant change that may have dependencies (other teams or third parties), where we likely still don't understand all of the requirements. It's unlikely we would commit to this in a milestone; the preference would be to further clarify requirements and/or break it into smaller issues.|
In planning and estimation, we value velocity over predictability. The main goal of our planning and estimation is to focus on the MVC, uncover blind spots, and help us achieve a baseline level of predictability without over-optimizing. We aim for 70% predictability instead of 90%. We believe that optimizing for velocity (MR throughput) enables our Fulfillment teams to achieve a weekly experimentation cadence.
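The 70% predictability target above can be made concrete as a simple ratio. This is an illustrative sketch only (the helper name and inputs are assumptions, not an actual Fulfillment tool): predictability is the share of issues committed to a milestone that actually ship in that milestone.

```ruby
# Hypothetical sketch: predictability as the fraction of committed
# milestone issues that were actually delivered in that milestone.
def predictability(committed:, delivered:)
  return 0.0 if committed.zero?
  (delivered.to_f / committed).round(2)
end

# A team that ships 7 of 10 committed issues hits the 70% target.
predictability(committed: 10, delivered: 7)
```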
The following is a guiding mental framework for engineers to consider when contributing to estimates on issues.
### Refinement / Weighting

**Ready for Development**: Yes/No
<!-- Is this issue sufficiently small, or could it be broken into smaller issues?
If so, recommend how the issue could be broken up.
Is the issue clear and easy to understand? -->

**Weight**: X

**Reasoning**:
<!-- Add some initial thoughts on how you might break down this issue. A bulleted list is fine.
This will likely require code changes similar to the following:
- replace the hexdriver with a sonic screwdriver
- rewrite backups to magnetic tape
- send up semaphore flags to warn others

Link to previous examples. Discuss prior art. Note examples of the simplicity/complexity in the proposed designs. -->

**MR Count**: Y
<!--
- 1 MR to update the driver worker
- 1 MR to update docs regarding mag tape backups

Call out potential caveats. -->

**Testing considerations**:
<!-- - ensure that rotation speed of sonic screwdriver doesn't exceed rotational limits -->
Engineers can find and open the milestone board for Fulfillment and begin working first on the deliverables. Once the deliverables are done, engineers can pick any of the remaining issues for the milestone. If an engineer has no preference, they can choose the next available issue from the top of the board.
The following table will be used as a guideline for scheduling work within the milestone:
|Type|% of Milestone|Description|
|---|---|---|
|Deliverable|40%|Business priorities (compliance, IACV, efficiency initiatives)|
|Bug|16%|Non-critical bug fixes|
|Other|20%|Engineer picks; critical security/data/availability/regression fixes; urgent business priorities|
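Applying the guideline above to a milestone's estimated capacity can be sketched as follows. This is purely illustrative (the helper name is an assumption, and the remaining capacity not covered by the table's percentages is left unallocated here):

```ruby
# Hypothetical helper applying the scheduling guideline to a milestone's
# capacity (in issues). Percentages come from the table above; the
# remainder of the capacity is left unallocated in this sketch.
def milestone_allocation(capacity)
  {
    deliverable: (capacity * 0.40).round, # business priorities
    bug:         (capacity * 0.16).round, # non-critical bug fixes
    other:       (capacity * 0.20).round  # engineer picks, urgent items
  }
end

milestone_allocation(25)
```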
We generally follow the Product Development Flow:
- ~"workflow::problem validation" - needs clarity on the problem to solve
- ~"workflow::design" - needs a clear proposal (and mockups for any visual aspects)
- ~"workflow::solution validation" - designs need to be evaluated by customers and/or other GitLab team members for usability and feasibility
- ~"workflow::planning breakdown" - needs a weight estimate
- ~"workflow::scheduling" - needs a milestone assignment
- ~"workflow::ready for development"
- ~"workflow::verification" - code is in production and pending verification by the DRI engineer
Generally speaking, issues are in one of two states: discovery/refinement or implementation. Basecamp thinks about these stages in relation to the climb and descent of a hill.
While individual groups are free to use as many stages in the Product Development Flow workflow as they find useful, we should be somewhat prescriptive on how issues transition from discovery/refinement to implementation.
Every Friday, each engineer is expected to provide a quick async issue update by commenting on their assigned issues using the following template:
<!-- Please be sure to update the workflow label of your issue to the one that best describes its status:
- ~"workflow::In dev"
- ~"workflow::In review"
- ~"workflow::verification"
- ~"workflow::blocked"
-->

### Async issue update

1. Please provide a quick summary of the current status (one sentence).
1. When do you predict this feature to be ready for maintainer review?
1. Are there any opportunities to further break the issue or merge request into smaller pieces (if applicable)?
1. Were expectations from the previous update met? If not, please explain why.
We do this to encourage more async collaboration within our team and to let the community and other team members know the progress of issues we are actively working on.
When a Deliverable proposal issue moves into ~"workflow::planning breakdown", SETs own the completion of the Availability and Testing section in the Feature Proposal to complete the definition of done. As we grow toward our desired ratio, we will only use the quad approach in groups that have an assigned SET in place.
- Apply ~"quad-planning::ready" when the feature has been reviewed by the team and is ready to be implemented.
- SETs own the Availability and Testing section, ensuring that the strategy accounts for all test levels and facilitating discussions and feedback with the group.
- Apply ~"quad-planning::complete-action" if there are recommendations (e.g. running a regression job, writing additional tests).
- Apply ~"quad-planning::complete-no-action" if no additional actions are needed.
The Quad Planning Dashboard shows the total issues planned for Quad Planning versus the actual ones for each milestone.
The CustomersDot has different types of tests running:
We use VCR, which mocks external calls to Zuora by default, and a flag that disables this mocking. A daily pipeline runs at 9AM UTC with the flag set so the API calls hit the Zuora sandbox, and we are notified of any failure (due to potential API changes).
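The flag-based switch described above could look roughly like the following. This is a hedged sketch, not CustomersDot's actual configuration: the flag name `HIT_ZUORA_SANDBOX` and the helper are hypothetical, though `:all` and `:none` are real VCR record modes.

```ruby
# Hypothetical sketch of the VCR toggle. The HIT_ZUORA_SANDBOX flag name
# is illustrative; the actual flag used by CustomersDot may differ.
def vcr_record_mode(env = ENV)
  if env["HIT_ZUORA_SANDBOX"] == "true"
    # Daily 9AM UTC pipeline: bypass cassettes so HTTP calls hit the
    # real Zuora sandbox, surfacing any API changes.
    :all
  else
    # Default: play back recorded cassettes, never call Zuora.
    :none
  end
end
```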
Any test failure is reported to #s_fulfillment_status with a link to the pipeline. Pipeline failures will prevent deployments to staging and production.
We use CD (Continuous Deployment) for CustomersDot, and an MR goes through a series of pipeline stages once it is merged. If something goes wrong at the Verification stage, we can create an issue with the ~"production::blocker" label, which will prevent deployment to production. The issue cannot be confidential.
For MRs with significant changes, we should consider using feature flags or creating an issue with the ~"production::blocker" label to pause deployment and allow for longer testing.
We use automatic deployment to staging but manual deployment to production for LicenseDot. Maintainers of the application need to trigger a manual action on the `master` branch in order to deploy to production. The app lives at https://license.gitlab.com, and a staging environment is also available.
In most cases an MR should follow the standard process of review, maintainer review, merge, and deployment as outlined above. When production is broken:
In these cases please ensure:
The feature freeze for Fulfillment occurs at the same time as the rest of the company, normally around the 18th.
|App|Feature freeze (*)|Milestone ends|
|---|---|---|
|GitLab.com|~18th-22nd|Same as the freeze|
|Customers/License|~18th-22nd|Same as the freeze|

(*) The feature freeze may vary according to the auto-deploy document.
Any issues not merged in the current milestone before the feature freeze will need to be moved to the next one (their priority may also change).
One of our main engineering metrics is throughput: the total number of MRs that are completed and in production in a given period of time. We use throughput to encourage small MRs and to practice our value of iteration. Read more about why we adopted this model.
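The throughput definition above can be illustrated with a minimal sketch (the helper name and inputs are assumptions for illustration only): count the MRs whose completion date falls within the period being measured.

```ruby
require "date"

# Hypothetical illustration of the throughput metric: the number of MRs
# completed (merged and in production) within a given period.
def throughput(completion_dates, from:, to:)
  completion_dates.count { |d| d >= from && d <= to }
end

dates = [Date.new(2021, 1, 5), Date.new(2021, 1, 20), Date.new(2021, 2, 2)]
throughput(dates, from: Date.new(2021, 1, 1), to: Date.new(2021, 1, 31))
```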
We hold optional synchronous social meetings weekly where we chat about anything outside work. These meetings happen each Wednesday alternating between 10:00am UTC (EMEA) and 4:00pm (AMER). Check the Fulfillment Google calendar for more information.
On the 8th of each month, the Fulfillment team conducts an asynchronous retrospective. You can find current and past retrospectives for Fulfillment at https://gitlab.com/gl-retrospectives/fulfillment/issues/.