The following people are permanent members of the Create:Source Code FE Team:
| Person | Role |
| ------ | ---- |
| André Luís | Frontend Engineering Manager, Create:Source Code, Create:Code Review, Delivery & Scalability |
| Jacques Erasmus | Senior Frontend Engineer, Create:Source Code |
| Nataliia Radina | Frontend Engineer, Create:Source Code |
| Paulina Sedlak-Jakubowska | Frontend Engineer, Create:Source Code |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| ------ | ---- |
| Amy Qualls | Senior Technical Writer, Create (Source Code, Code Review), Core Platform (Database) |
| Ash McKenzie | Staff Backend Engineer, Create:Source Code |
| Costel Maxim | Senior Security Engineer, Application Security, Plan (Project Management, Product Planning, Certify), Create:Source Code, Growth, Fulfillment:Purchase, Fulfillment:Provision, Fulfillment:Utilization, Systems:Gitaly |
| Darva Satcher | Director of Engineering, Create |
| Gavin Hinfey | Backend Engineer, Create:Source Code |
| Igor Drozdov | Staff Backend Engineer, Create:Source Code, Systems:Gitaly API |
| Senior Backend Engineer | Senior Backend Engineer, Create:Source Code |
| Patrick Cyiza | Backend Engineer, Create:Source Code |
| Joe Woodward | Senior Backend Engineer, Create:Source Code |
| Robert May | Senior Backend Engineer, Create:Source Code |
| Sean Carroll | Backend Engineering Manager, Create:Source Code |
| Shekhar Patnaik | Principal Fullstack Engineer, Create |
| Vasilii Iakliushin | Staff Backend Engineer, Create:Source Code, Systems:Gitaly API |
| Derek Ferguson | Group Manager, Product Management, Create |
We held an Iteration Retrospective in April 2020 to review past work and discuss how we could improve iteration for upcoming efforts.
Some overall conclusions/improvements:
In general, we use the standard GitLab engineering workflow. To get in touch with the Create:Source Code FE team, it's best to create an issue in the relevant project (typically GitLab) and add the ~"group::source code" and ~frontend labels, along with any other appropriate labels (such as ~section::dev). Then, feel free to ping the relevant Product Manager and/or Engineering Manager as listed above.
To ensure we are living our iteration value consistently, we should be intentional in asking ourselves: "Is this in the smallest possible form it could be?" To that end, engineers, designers, EMs, and PMs should work together to find the smallest feature set that delivers value to users and can be used to elicit feedback for future iterations. Once a feasible issue plan comes together, consider the following steps for best results:
1. Apply the ~"workflow::refinement" label to signal the next step.
1. Once refinement is complete, apply the ~"workflow::needs issue review" label.
Note: if an issue receives a weight greater than 3 after this process, it may indicate that the IC does not yet have a full picture of the work involved and that further research is required.
As stated in our direction, we must place special emphasis on our convention over configuration principle. As the feature set within Create:Source Code grows, it may feel natural to solve problems with configuration. To guard against this, we must intentionally challenge MVC and new feature issues to check for it. Consider the following steps for best results:
1. Once issues have been labeled ~"workflow::needs issue review", the PM will share the proposal with either a peer or their manager, as well as engineering (EM or IC) and a product designer.
1. Peers in product and engineering who review the issue should look for opportunities to eliminate configuration where possible. If opportunities are identified, the issue is moved back to ~"workflow::refinement".
1. If the PM and peers are satisfied with the proposal and it follows our convention over configuration principle as much as possible, those who reviewed the issue indicate their agreement (with either a 👍 or a comment in the issue). Finally, the PM or EM will label the issue ~"workflow::ready for development".
Expanding on the concept of a middle-of-milestone check-in:
To gauge how well we are doing against the scheduled and committed set of Deliverables, we calculate the overall level of completeness across all of them.
We do this by tallying up the number of issues, and their total weight, in each workflow state.
We then compile a small report like this:
Done + Verification: 1 (w1) 2.27%
In review: 6 (w15) 34.09%
In dev: 6 (w20) 45.45%
Unstarted: 3 (w8) 18.18%
Progress: 47.73%
Conclusion: ...
Progress is calculated with:
(100% * 2.27) + (80% * 34.09) + (40% * 45.45) + (0% * 18.18)
In the conclusion we write an interpretation of what this means and what we'll be doing to correct course, if needed.
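The calculation above can be sketched in a few lines of Python. This is a hypothetical illustration, not a tool we use: the completion factors per state (100%, 80%, 40%, 0%) come from the formula above, while the function and variable names are invented for the example.

```python
# Completion factors per workflow state, taken from the formula above:
# done issues count fully, "in review" at 80%, "in dev" at 40%, unstarted at 0%.
COMPLETION = {
    "done": 1.0,
    "in_review": 0.8,
    "in_dev": 0.4,
    "unstarted": 0.0,
}

def progress(weights_by_state):
    """weights_by_state maps each workflow state to the total issue weight in it."""
    total = sum(weights_by_state.values())
    return sum(
        COMPLETION[state] * weight / total * 100
        for state, weight in weights_by_state.items()
    )

# The example report above: total weights 1, 15, 20, and 8 give ~47.73% progress.
print(round(progress({"done": 1, "in_review": 15, "in_dev": 20, "unstarted": 8}), 2))
```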
We use weights to forecast the complexity of each given issue aimed at being scheduled into a given milestone. These weights help us ensure that the amount of scheduled work in a cycle is reasonable, both for the team as a whole and for each individual. The "weight budget" for a given cycle is determined based on the team's recent output, as well as the upcoming availability of each engineer.
Before each milestone, the Engineering Manager takes a pass and sets weights on all issues currently targeted at the next milestone by Product and triage processes. On occasion, specific ICs can be brought in to review the assigned weight. This is aimed at helping ICs stay focused on Deliverables while working through the last week of the active milestone.
We understand weights are mere forecasts and we accept the uncertainty that comes with this.
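As a rough illustration of how a "weight budget" could be derived from recent output and availability, consider the sketch below. All names and numbers here are invented assumptions; the handbook does not prescribe a formula.

```python
# Hypothetical sketch: scale recent per-engineer output by each engineer's
# upcoming availability to get a milestone weight budget.

def weight_budget(recent_velocity_per_engineer, availability):
    """recent_velocity_per_engineer: average weight one engineer completes
    per milestone. availability: engineer -> fraction of the milestone
    they are available."""
    return sum(
        recent_velocity_per_engineer * fraction
        for fraction in availability.values()
    )

# Three engineers, one of whom is away for half the milestone:
budget = weight_budget(8, {"alice": 1.0, "bob": 1.0, "carol": 0.5})
print(budget)  # 20.0
```

Since weights are forecasts, a budget like this is only a planning aid, not a commitment.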
These are the broad definitions for each of the weight values. We don't use a Fibonacci-based progression, but we do try to split up large issues, as we understand they are by definition less accurate predictions.
When issues are Frontend only, we use the Weight feature of Issues.
When issues are both Frontend and Backend, we use specific labels to support both weights in the same issue, e.g. ~"frontend-weight::1" through ~"frontend-weight::3". Only weights between 1-3 can be scheduled into a milestone. Higher ones need to be broken down.
Note: since milestone 13.10, we have switched to a Fibonacci-based scale. The reason behind this is how hard it has been to distinguish between issues of weight 3 and 4, or weight 4 and 5. Fibonacci allows for a clearer distinction as weights increase.
| Weight | Description |
| ------ | ----------- |
| 1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. |
| 2: Small | The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. |
| 3: Medium | Features that are well understood and relatively straightforward. Bugs that are relatively poorly understood and may not yet have a suggested solution. |
Anything above weight 3 is unschedulable.
These issues either represent large amounts of work or carry too many unknowns. In that case, break the work down into multiple issues right away, or open an epic to start a discussion on how to define those steps.
Also consider adding the label:
This hard limit helps the team embody the Iteration value.
The easiest way for engineering managers, product managers, and other stakeholders to get a high-level overview of the status of all issues in the current milestone, or all issues assigned to a specific person, is through the Development issue board, which has columns for each of the workflow labels described on the Engineering Workflow handbook page under Updating Issues Throughout Development.
As owners of the issues assigned to them, engineers are expected to keep the workflow labels on their issues up to date, either by manually assigning the new label, or by dragging the issue from one column on the board to the next.
The goal is to support the members of these groups in connecting at a personal level, not to check in on people's progress or replace any existing processes to communicate status or ask for help, and the questions are written with that in mind:
We have one regularly scheduled "Per Milestone" retrospective, and can hold ad-hoc "Per Feature" retrospectives focused on analyzing a specific case, usually looking into the Iteration approach.
The Create:Source Code group conducts monthly retrospectives in GitLab issues. These include the backend team, plus any people from frontend, UX, and PM who have worked with that team during the release being retrospected.
At the start of each milestone we have a synchronous Kickoff session where every IC takes turns presenting their plan for their Deliverables for the new milestone.
This happens at least 2 working days after all Deliverables are assigned, which happens on the first day of the milestone, on the 18th.
During this call, we also do a quick Retrospective review going through the highlights of the discussions in the asynchronous issue mentioned above.