The Fulfillment Sub-department is responsible for the infrastructure connecting the systems that affect the user purchasing process, and serves as the primary caretaker of the CustomersDot and LicenseDot systems.
We frequently collaborate with other teams. For example, if we are making a change that will affect usage ping, we collaborate with the Product Analytics group. When our work involves or affects any backend enterprise application, we collaborate with the Sales Systems team.
The Fulfillment team is currently working on an initiative to improve our sales efficiency, with improvements identified by the Commercial & Licensing Working Group.
We're following the Engineering Demo Process in order to show regular improvements as we iterate on issues in this area.
Our Scorecard (internal only) breaks down the different steps in these flows and links to videos where we step through the process and discuss the issues that aim to improve them.
We plan in monthly cycles in accordance with our Product Development Timeline.
Release scope for an upcoming release should be finalized by the
On or around the 26th: Product meets with Engineering Managers for a preliminary issue review. Issues are tagged with a milestone and given initial estimates.
Before work can begin on an issue, it should first be estimated after a preliminary investigation. This is normally done in the monthly planning meeting.
|Weight|Description|
|------|-----------|
|1|The simplest possible change. We are confident there will be no side effects.|
|2|A simple change (minimal code changes), where we understand all of the requirements.|
|3|A simple change, but the code footprint is bigger (e.g. lots of different files, or tests affected). The requirements are clear.|
|5|A more complex change that will impact multiple areas of the codebase; there may also be some refactoring involved. Requirements are understood, but you feel there are likely to be some gaps along the way.|
|8|A complex change that will involve much of the codebase or will require lots of input from others to determine the requirements.|
|13|A significant change that may have dependencies (other teams or third parties), and we likely still don't understand all of the requirements. It's unlikely we would commit to this in a milestone; the preference would be to further clarify requirements and/or break it into smaller issues.|
In planning and estimation, we value velocity over predictability. The main goal of our planning and estimation is to focus on the MVC, uncover blind spots, and help us achieve a baseline level of predictability without over-optimizing. We aim for 70% predictability instead of 90%. We believe that optimizing for velocity (MR throughput) enables our Growth teams to achieve a weekly experimentation cadence.
Points of weight delivered by the team in recent milestones allow for more accurate estimation of what we can deliver in future milestones. Full chart here.
Engineers can find and open the milestone board for Fulfillment and begin working first on the issues designated as deliverables. Once the deliverables are done, engineers can pick any of the remaining issues for the milestone. If an engineer has no preference, they can choose the next available issue from the top.
The following table is used as a guideline for scheduling work within a milestone:

|Type|% of Milestone|Description|
|----|--------------|-----------|
|Deliverable|40%|Business priorities (compliance, IACV, efficiency initiatives)|
|Bug|16%|Non-critical bug fixes|
|Other|20%|Engineer picks, critical security/data/availability/regression fixes, urgent business priorities|
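As a rough illustration of how these guideline percentages might translate into planned weight for a milestone, here is a minimal sketch. The `allocate` helper and the capacity figure are hypothetical; only the percentages come from the table above.

```ruby
# Guideline percentages from the table above; helper name is an assumption.
ALLOCATION = { deliverable: 0.40, bug: 0.16, other: 0.20 }.freeze

# Split a milestone's planned capacity (in weight points) per the guideline.
def allocate(capacity)
  ALLOCATION.transform_values { |pct| (capacity * pct).round }
end
```

For example, a milestone with 50 points of capacity would reserve roughly 20 points for deliverables, 8 for bugs, and 10 for other work, leaving the remainder unscheduled as slack.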
We generally follow the Product Development Flow:
- `workflow::problem validation` - needs clarity on the problem to solve
- `workflow::design` - needs a clear proposal (and mockups for any visual aspects)
- `workflow::solution validation` - needs refinement and acceptance from engineering
- `workflow::planning breakdown` - needs a Weight estimate
- `workflow::scheduling` - needs a milestone assignment
- `workflow::ready for development`
- `workflow::verification` - code is in production and pending verification by the DRI engineer
Generally speaking, issues are in one of two states: still figuring out the approach, or executing on a known solution. Basecamp thinks about these stages in relation to the climb and descent of a hill.
While individual groups are free to use as many stages in the Product Development Flow workflow as they find useful, we should be somewhat prescriptive about how issues transition from discovery/refinement to implementation.
Every Friday, each engineer is expected to provide a quick async issue update by commenting on their assigned issues using the following template:
```
<!-- Please be sure to update the workflow labels of your issue to one of the
following (whichever best describes the status):
- ~"workflow::In dev"
- ~"workflow::In review"
- ~"workflow::verification"
- ~"workflow::blocked"
-->

### Async issue update

1. Please provide a quick summary of the current status (one sentence).
1. When do you predict this feature to be ready for maintainer review?
1. Are there any opportunities to further break the issue or merge request into smaller pieces (if applicable)?
```
We do this to encourage our team to be more async in collaboration and to allow the community and other team members to know the progress of issues that we are actively working on.
When a Deliverable proposal issue moves into `workflow::planning breakdown`, SETs own the completion of the Availability and Testing section in the Feature Proposal to complete the definition of done. As we grow to reach our desired ratio, we will only follow the quad approach in groups that have an assigned SET in place.
- `quad-planning::ready` when the feature is reviewed by the team and is ready to be implemented.

SETs complete the `Availability and Testing` section, ensuring that the strategy accounts for all test levels and facilitating discussions and feedback with the group. They then apply:

- `quad-planning::complete-action` if there are recommendations (e.g. running a regression job, writing additional tests, etc.).
- `quad-planning::complete-no-action` if no additional actions are needed.
The Quad Planning Dashboard shows the total planned issues for Quad Planning vs. the actual ones for each milestone.
The CustomersDot has different types of tests running:
We also have a flag for VCR, which mocks external calls to Zuora by default. A daily pipeline runs at 9 AM UTC with the flag set, so the API calls hit the Zuora sandbox and we are notified of any failure (due to potential API changes).
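One way such a flag could drive VCR is by switching its record mode: replay cassettes by default, and hit the real Zuora sandbox when the flag is set. This is a hedged sketch only; `HIT_ZUORA_SANDBOX` is an assumed flag name, not the real one.

```ruby
# Hedged sketch: derive VCR's record mode from an environment flag.
# "HIT_ZUORA_SANDBOX" is an assumed flag name, not the real one.
def zuora_record_mode(env = ENV)
  if env["HIT_ZUORA_SANDBOX"] == "true"
    :all   # re-record cassettes: requests hit the real Zuora sandbox
  else
    :none  # replay existing cassettes only; no external calls leave the suite
  end
end

# A spec_helper could then feed this into VCR, e.g.:
#   VCR.configure { |c| c.default_cassette_options = { record: zuora_record_mode } }
```

With `:none`, any request without a matching cassette fails loudly, which is exactly what keeps the regular suite hermetic; the daily scheduled pipeline flips the flag to catch upstream API changes.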
Any test failure is reported to #s_fulfillment_status with a link to the pipeline. Pipeline failures will prevent deployments to staging and production.
We use CD (Continuous Deployment) for CustomersDot, and an MR goes through the following stages once it is merged into the
If something goes wrong at the Verification stage, we can create an issue with the `production::blocker` label, which will prevent deployment to production. The issue cannot be confidential.
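The gate this label implies can be sketched as a simple check over open issues before a production deploy proceeds. This is a hypothetical illustration: the data shape mimics a GitLab API issue payload, and the helper name is an assumption.

```ruby
# Hypothetical sketch of the deployment gate: a deploy is blocked while any
# open issue carries the production::blocker label.
BLOCKER_LABEL = "production::blocker".freeze

# `issues` is a parsed list of issue hashes, as a GitLab API client returns.
def deploy_blocked?(issues)
  issues.any? do |issue|
    issue["state"] == "opened" && issue["labels"].include?(BLOCKER_LABEL)
  end
end
```

Closing the blocker issue would then unblock the next deployment, with no other manual steps.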
For MRs with significant changes, we should consider using feature flags or creating an issue with the `production::blocker` label to pause deployment and allow for longer testing.
We use an automatic deployment to staging, but manual deployment to production for LicenseDot.
Maintainers of the application need to trigger a manual action on the `master` branch in order to deploy to production.
In most cases an MR should follow the standard process of review, maintainer review, merge, and deployment as outlined above. When production is broken:
In these cases please ensure:
The feature freeze for Fulfillment occurs at the same time as the rest of the company, normally around the 18th.
|App|Feature freeze (*)|Milestone ends|
|---|------------------|--------------|
|GitLab.com|~18th-22nd|Same as the freeze|
|Customers/License|~18th-22nd|Same as the freeze|

(*) The feature freeze may vary according to the auto-deploy document.
Any issues not merged in the current milestone after the feature freeze will need to be moved to the next milestone (their priority may also change).
One of our main engineering metrics is throughput: the total number of MRs that are completed and in production in a given period of time. We use throughput to encourage small MRs and to practice our value of iteration. Read more about why we adopted this model.
We aim for 12 MRs per engineer per month, which is tracked using our throughput metrics dashboard.
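The arithmetic behind the metric is straightforward; a minimal sketch, where the helper name and the sample figures are made up and only the target of 12 comes from this page:

```ruby
# Illustrative only: throughput per engineer over one month.
# mrs_in_production: MRs merged and deployed in the period (made-up figure).
def throughput_per_engineer(mrs_in_production, engineers)
  (mrs_in_production.to_f / engineers).round(1)
end
```

For instance, a team of 8 engineers shipping 96 MRs in a month lands exactly on the 12-per-engineer target.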
We also have a general quality dashboard for the whole Fulfillment team.
We optionally join the Growth sync meetings on Wednesdays. See the agenda.
We hold optional synchronous social meetings weekly, every Wednesday at 3:30 PM UTC. In these meetings we chat about anything outside work.
On or around the 8th of each month, the Fulfillment team conducts an asynchronous retrospective. You can find current and past retrospectives for Fulfillment in https://gitlab.com/gl-retrospectives/fulfillment/issues/.