This team is deprecated in favour of Scalability:Observability. Handbook updates to follow.
This team focuses on forecasting & projection systems that enable development engineering to understand system growth (planned and unplanned) for their areas of responsibility. Error Budgets and Stage Group Dashboards are examples of successful projects that have provided development teams information about how their code runs on GitLab.com.
As Dedicated becomes more mature, we will expand our remit to include projection activities for this platform.
We use metrics to gather data to inform our decisions. We contribute to the observability of the system by maintaining metrics that concern saturation and improving observability tools that we can use to help us understand how the system responds to load.
The following people are members of the Scalability:Projections team:
| Person | Role |
| ------ | ---- |
| Rachel Nienaber | Senior Engineering Manager, Scalability:Projections |
| Bob Van Landuyt | Staff Backend Engineer, Scalability |
| Igor Wiedler | Staff Site Reliability Engineer, Scalability |
| Kennedy Wanyangu | Engineering Manager, Scalability:Practices |
| Liam McAndrew | Engineering Manager, Scalability:Frameworks |
| Matt Smiley | Staff Site Reliability Engineer, Scalability |
We are responsible for Capacity Planning, Error Budgets and Infrastructure Cost Data.
The goal of this process is to predict and prevent saturation incidents on GitLab.com.
Issues are kept in the capacity planning issue tracker. When work is needed to improve metrics that support this process, we raise an issue in the Scalability group tracker with the label of
We develop and release Tamland, our saturation forecasting tool for capacity planning. Tamland automatically creates the capacity warning issues.
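Tamland's forecasting models are more sophisticated than a straight line, so the following is only a minimal sketch of the core idea behind saturation forecasting: fit a trend to recent utilisation samples and project when it crosses a saturation threshold. The function name and data shape here are assumptions for illustration, not Tamland's actual implementation.

```python
def days_until_saturation(samples, threshold=0.9):
    """Estimate days until a saturation metric crosses a threshold,
    using a least-squares linear fit over (day_index, utilisation) pairs,
    where utilisation is a 0.0-1.0 fraction of capacity.
    Returns None if the trend is flat or decreasing.
    """
    n = len(samples)
    sum_x = sum(x for x, _ in samples)
    sum_y = sum(y for _, y in samples)
    sum_xy = sum(x * y for x, y in samples)
    sum_xx = sum(x * x for x, _ in samples)
    slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
    intercept = (sum_y - slope * sum_x) / n
    if slope <= 0:
        return None  # no growth trend: saturation not forecast
    return (threshold - intercept) / slope

# e.g. disk utilisation growing ~1% per day from a 60% baseline
samples = [(d, 0.60 + 0.01 * d) for d in range(14)]
print(days_until_saturation(samples))  # ≈ 30 days from day 0
```

A real forecast must also handle seasonality, anomalies, and confidence intervals, which is why a dedicated tool is used rather than a linear fit.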
The triage rotation is maintained in a PagerDuty Schedule: https://gitlab.pagerduty.com/schedules#PRMDCJG
The responsibility for reviewing Tamland reports rotates between all members of the Scalability:Projections team.
The rotation lasts for a minimum of two weeks. There is flexibility in the schedule to allow for OOO and on-call responsibilities. If you need to adjust your shift, please find another team member to take your shift and add the override into the schedule.
The rotation cycle is this length to provide exposure to the wide variety of capacity warnings that occur and to enable each person to gain context on the components that we monitor. The handover day is Thursday to allow for any sync calls needed so that the review of the capacity planning issues can still be completed by the end of Monday.
The triage duties are:

- Work through the issues on the Capacity Planning board (issues carrying the `capacity-planning::workflow` label). The saturation labels can help in choosing which issues to review first, if there are many with the same due date.
Make sure to set aside at least half a work day during each week in your rotation to go through the items in the Capacity Planning board. Consider re-scheduling one of your shifts if it coincides with another rotation (e.g. EOC on-call duties). When your rotation is finished, you need to provide handover notes in the #infra_capacity-planning channel for the incoming person.
Some tips to help you get started on duties:

- To understand how a saturation metric is defined, search for its component (e.g. `component: disk_space`) in the `runbooks` project, where the underlying recording rule can be found.
- If a known anomaly is skewing a forecast, add `ignore_outliers` entries to the forecast parameters. For more details, see Tuning the forecast.
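As a hypothetical illustration of what an outlier exclusion might look like, the snippet below marks a date range to be excluded from the model fit. The exact schema is defined by the capacity planning configuration, so treat the field names here as assumptions and consult Tuning the forecast for the authoritative format.

```yaml
# Hypothetical example: exclude a window where an incident skewed the metric,
# so the forecast is fitted only to representative data.
ignore_outliers:
  - start: "2023-06-01"
    end: "2023-06-07"
```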
We maintain the Error Budgets process that is described in the Engineering Handbook.
Issues are kept in the Scalability group tracker with the label of
We maintain the metrics used to generate the Error Budgets and we ensure that the reports are published on time.
We advocate for improving the SLOs for Stage Groups and we provide support to help them achieve this. Providing the Stage Groups with data about how their feature categories operate on GitLab.com enables them to make good choices about how to efficiently improve the reliability, availability and performance of their feature categories.
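To make the error budget idea concrete, here is a rough sketch of the arithmetic. The 99.95% availability target and 28-day window are assumptions chosen for the example; the Error Budgets process in the Engineering Handbook is authoritative.

```python
def error_budget_minutes(target_availability=0.9995, window_days=28):
    """Minutes of allowed failure in the window for a given target."""
    window_minutes = window_days * 24 * 60
    return window_minutes * (1 - target_availability)

def budget_remaining(failed_minutes, target_availability=0.9995, window_days=28):
    """Fraction of the error budget still unspent (negative when overspent)."""
    budget = error_budget_minutes(target_availability, window_days)
    return 1 - failed_minutes / budget

# A 99.95% target over 28 days allows about 20.16 minutes of failure.
print(error_budget_minutes())               # ≈ 20.16
print(budget_remaining(failed_minutes=10))  # ≈ 0.504, i.e. about half left
```

Framing reliability as a spendable budget gives stage groups a single number to act on: a group deep in the red prioritises reliability work, while one with plenty of budget can ship features more aggressively.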
The Scalability group is an owner of several performance indicators that roll up to the Infrastructure department indicators:
These are combined to enable us to better prioritise team projects.
An overly simplified example of how these indicators might be used, in no particular order:
Between these different signals, we have a relatively (im)precise view into the past, present and future to help us prioritise scaling needs for GitLab.com.