A world-class development team of software engineers and managers who make our customers happy when using our product(s). Our products should offer broad, rich features, high availability, high quality, fast performance, trustworthy security, and reliable operation.
The development department strives to deliver MRs fast. MR delivery is a reflection of:
The department also focuses on career development and processes to make this a preferred destination for high-performing software engineers.
We use data to make decisions. If data doesn't exist, we use anecdotal information. If anecdotal information isn't available, we use first principles.
FY22 starts with much promise and ambition. During FY21, the Development department focused on efficiency and maturation as a team, which has led to strong efficiencies in execution. For the coming year, we will focus on how to best leverage those efficiencies across a variety of activities.
None of our ambition for FY22 would be possible without the people doing the work. As such, we need to make sure the right opportunities are available for the right people at the right time. We will continue the career development focus and commit to having twice-yearly conversations on career development, including a written review following guidance from the career matrices.
We will continue our strong partnership with Product to make GitLab the best, most complete DevSecOps platform on the planet. While we continue adding features to the product, we must also work to identify technical debt and bring it to the prioritization discussion. We expect that Engineering Managers are already addressing group-specific technical debt with their Product Manager. For the coming year, we would like to achieve 4 cross-group technical debt retirements supported by Product prioritization. The benefits of technical debt retirement should include maintaining or increasing feature velocity, increasing developer happiness, increasing community contributions, and reducing cost.
Our Quality team supports us through automation, tooling, metrics, and focus on the community. We will work with them to help our open source and quality initiatives. During FY22, we would like to help drive MRARR and convert at least one severity level from SLO to SLA.
As we move into FY22, we need to continue to support Sales initiatives and enable them through our Fulfillment section. As an example, several pain points were identified in Sales that need to be addressed through the following epic. After completing this work, there will be other opportunities to help accelerate sales in Fulfillment and other sections. We will support these initiatives in a timely fashion.
User experience is a focus area for FY22. We support this effort both in product development and in our architecture. This includes continued conversion of Pajamas components in order to improve the performance experienced by users. It involves completing the original 8 identified components, plus an additional 20 of the 38 components currently listed.
Support for the scaling of our SaaS solution is an important component of our business. We will continue the ongoing efforts to support scaling requirements requested by Infrastructure. Further, we will define an appropriate sharding solution and implement it in the coming fiscal year.
The best way to validate our product is to dogfood it ourselves. As part of our process initiatives for the year, we will work in each sub-department to dogfood additional parts of the product, such as Roadmaps, Requirements Management, Code Quality, Auto DevOps, SAST, DAST, Dependency Scanning, Vulnerability Management, Package Management, Product Analytics, Global Search, etc. Overall, the department will have an FY22 initiative to have at least 2 categories dogfooded across the entire department. The selected categories should be new to at least part of the department, but do not need to be newly introduced categories.
As we continue to scale the GitLab Product and consider additional business opportunities, our architecture will become more important. Architecture is crucial to our success and needs to be part of our thinking on a constant basis. We adopted an architecture workflow in FY21. During FY22 we would like to see this workflow highly leveraged. We will set a goal to have as many workflows completed in FY22 as there are Staff+ representatives in the organization.
The development team is responsible for developing products in the following categories:
The following people are permanent members of the Development Department:
| Person | Role |
| --- | --- |
| Christopher "Leif" Lefelhocz | VP of Development |
| Stan Hu | Engineering Fellow |
| Tim Zallmann | Director of Engineering, Dev |
| Bartek Marnane | Director of Engineering, Fulfillment |
| Chun Du | Director of Engineering, Enablement |
| Todd Stadelhofer | Director of Engineering, Secure |
| Wayne Haber | Director of Engineering, Threat Management and Growth |
| Sam Goldstein | Director of Engineering, Ops |
| Lily Mai | Operations Analyst, Development |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| --- | --- |
| Giuliana Lucchesi | People Business Partner, Development and Product |
This is the breakdown of our department by section and by stage.
This is the stack-up of our engineers, by level.
Welcome to GitLab! We are excited for you to join us. Here are some curated resources to get you started:
Issues that impact code in another team's product stage should be approached collaboratively with the relevant Product and Engineering managers prior to work commencing, and reviewed by the engineers responsible for that stage.
We do this to ensure that the team responsible for that area of the code base is aware of the impact of any changes being made and can influence architecture, maintainability, and approach in a way that meets their stage's roadmap.
At times when cross-functional or cross-departmental architectural collaboration is needed, the GitLab Architecture Evolution Workflow should be followed.
Development's headcount planning follows the Engineering headcount planning and long-term profitability targets. Development headcount is a percentage of overall Engineering headcount. For FY20, the headcount size is 271, or ~58% of overall Engineering headcount.
The following is a non-exhaustive list of daily duties for engineering directors; some items are only applicable at certain times.
In general, OKRs flow top-down and align to the company and upper level organization goals.
For managers and directors, please refer to a good walk-through example of OKR format for developing team OKRs. Consider stubbing out OKRs early in the last month of the current quarter, and getting the OKRs in shape (e.g. fleshing out details and making them SMART) no later than the end of the current quarter.
It is recommended to assess progress weekly.
Below are tips for developing an individual's OKRs:
The GitLab application is built on top of many shared services and components, such as the PostgreSQL database, Redis, Sidekiq, Prometheus, and so on. These services are tightly woven into each feature's Rails code base. Very often, there is a need to identify the DRI when demand arises, be it a feature request, incident escalation, technical debt, or bug fix. Below is a guide to help people quickly locate the best parties who may assist on the subject matter.
There are a few ownership models to choose from, maximizing flexibility so that each shared service and component can use what works best for it.
The shared services and components below are extracted from the GitLab product documentation.
| Service or Component | Sub-Component | Ownership Model | DRI (Centralized Only) | Ownership Group (Centralized Only) | Additional Notes |
| --- | --- | --- | --- | --- | --- |
| Container Registry |  | Centralized with Specific Team | @jhampton | Package |  |
| Email - Inbound |  |  |  |  |  |
| Email - Outbound |  |  |  |  |  |
| GitLab K8S Agent |  | Centralized with Specific Team | @nicholasklick | Configure |  |
| GitLab Pages |  | Centralized with Specific Team | @nicolewilliams | Release |  |
| GitLab Rails |  | Decentralized |  |  | DRI for each controller is determined by the feature category specified in the class. See `app/controllers` and `ee/app/controllers`. |
| HAProxy |  | Centralized with Specific Team | @brentnewton | Infrastructure |  |
| Jaeger |  | Centralized with Specific Team | @AnthonySandoval | Infrastructure:Observability | Observability team made the initial implementation/deployment. |
| LFS |  | Centralized with Specific Team | @sean_carroll | Create |  |
| MinIO |  | Decentralized |  |  | Some issues can be broken down into group-specific issues. Some issues may need more work identifying user or developer impact in order to find a DRI. |
| NGINX |  | Centralized with Specific Team | @mendeni | Distribution |  |
| Object Storage |  | Decentralized |  |  | Some issues can be broken down into group-specific issues. Some issues may need more work identifying user or developer impact in order to find a DRI. |
| Patroni | General, except Geo secondary clusters | Centralized with Specific Team | @mendeni | Distribution |  |
| Patroni | Geo secondary standby clusters | Centralized with Specific Team | @nhxnguyen | Geo |  |
| PostgreSQL | PostgreSQL Framework and Tooling | Centralized with Specific Team | @craig-gomes | Database | Specific to the development portion of PostgreSQL, such as the fundamental architecture, testing utilities, and other productivity tooling. |
| PostgreSQL | GitLab Product Features | Decentralized |  |  | Examples include feature-specific schema changes and/or performance tuning. |
| Puma |  | Centralized with Specific Team | @craig-gomes | Memory |  |
| Sidekiq |  | Decentralized |  |  | DRI for each worker is determined by the feature category specified in the class. See `app/workers` and `ee/app/workers`. |
| Workhorse |  | Centralized with Specific Team | Sean Carroll | Create: Source Code BE | Team does not work on most Workhorse features and has reduced development capacity. |
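For the decentralized GitLab Rails and Sidekiq rows above, ownership is derived from the feature category declared in each controller or worker class. The sketch below is a simplified, self-contained illustration of that mechanism (the `FeatureCategory` module and the category-to-group mapping here are stand-ins, not the real implementation; `feature_category :source_code_management` follows the actual declaration style):

```ruby
# Simplified stand-in for the class-level `feature_category` declaration
# used in GitLab controllers and Sidekiq workers.
module FeatureCategory
  def feature_category(name = nil)
    @feature_category = name if name
    @feature_category
  end
end

# A hypothetical worker declares the feature category it belongs to.
class PostReceiveWorker
  extend FeatureCategory
  feature_category :source_code_management
end

# Finding the DRI group for an incident or bug is then a lookup from the
# declared category to the owning group (illustrative mapping only).
CATEGORY_TO_GROUP = { source_code_management: "Create:Source Code" }.freeze

puts CATEGORY_TO_GROUP[PostReceiveWorker.feature_category]
```

Because the category lives in the class itself, anyone triaging an issue can trace a failing worker or controller directly to its owning group without consulting a separate registry.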
The materials from an earlier Ruby on Rails performance workshop can be found on an internally shared Google Drive.
| Session | Topic | Schedule |
| --- | --- | --- |
| Session - Day 1 | Intro and overview | Monday, Wednesday |
| Session - Day 2 | Tools | Monday, Wednesday |
| Session - Day 3 | SQL and N+1 Troubleshooting | Monday, Wednesday |
| Session - Day 4 | Queueing Theory | Monday, Wednesday |
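As a taste of the queueing-theory session (this example is illustrative, not taken from the workshop materials): in the basic M/M/1 model, mean response time is `W = 1 / (μ - λ)`, which is why latency explodes as utilization approaches 100%:

```ruby
# Mean response time for an M/M/1 queue: W = 1 / (mu - lambda),
# where lambda is the arrival rate and mu is the service rate (req/s).
def mean_response_time(arrival_rate, service_rate)
  raise ArgumentError, "queue is unstable" unless arrival_rate < service_rate

  1.0 / (service_rate - arrival_rate)
end

# A hypothetical server handling up to 100 req/s: note the non-linear blowup.
[50, 90, 99].each do |arrival_rate|
  utilization = arrival_rate / 100.0
  w_ms = mean_response_time(arrival_rate, 100) * 1000
  puts format("utilization %.0f%% -> mean response %.0f ms", utilization * 100, w_ms)
end
```

At 50% utilization the mean response time is 20 ms, but at 99% it is a full second, even though the server is never "overloaded" in the strict sense.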
Frontend Masters allows you to advance your skills with in-depth, modern frontend engineering courses.
GitLab has an account with Frontend Masters. Team members can gain access by creating an Access Request, selecting the best option for your situation (single user, bulk user, etc.), and, once approved by your manager, assigning it to the Access Request Provisioner listed in the Tech Stack for this system. Once your access has been provisioned, you will receive an email to activate your account.
You can also join the #frontendmasters Slack channel for course recommendations and discussion.
We use GraphQL alongside our REST API at GitLab and are increasingly adding new features to the GraphQL API over time.
The GraphQL API can be added to by anyone, including community members. We have a group of self-selected team members who are willing to help with any GraphQL questions you may have. You can get in touch with them by mentioning
@gitlab-org/graphql-experts in any GitLab issue or merge request.
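As a minimal sketch of calling the GraphQL API from Ruby's standard library (the endpoint and the `currentUser` field come from the public GitLab GraphQL API; the token is a placeholder you would replace with your own personal access token):

```ruby
require "json"
require "net/http"
require "uri"

# A simple query against the public GitLab GraphQL API.
query = <<~GRAPHQL
  query {
    currentUser {
      name
    }
  }
GRAPHQL

uri = URI("https://gitlab.com/api/graphql")
request = Net::HTTP::Post.new(uri)
request["Content-Type"] = "application/json"
request["Authorization"] = "Bearer <your-personal-access-token>" # placeholder
request.body = JSON.generate(query: query)

# Uncomment to actually send the request:
# response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
# puts JSON.parse(response.body).dig("data", "currentUser", "name")
```

The same payload shape works for any query or mutation; the interactive GraphiQL explorer on GitLab.com is a convenient place to build queries before wiring them into code.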
You can read more information about GraphQL at GitLab here:
We also run GraphQL office hours, which is a place where GitLab team members can ask questions and chat amongst peers about GraphQL at GitLab.
These meetings are currently run on a weekly cadence, alternating between timezones.
| Time in UTC | Organisers |
| --- | --- |
In late June 2019, we moved from a monthly release cadence to a more continuous delivery model. As a result, issues are no longer concentrated around the deployment but arrive in a more constant flow. With the adoption of continuous delivery, there is an organizational mismatch in cadence between changes that are regularly introduced in the environment and the monthly development cadence.
To reduce this, Infrastructure and Quality will engage Development via SaaS Infrastructure Weekly and Performance Refinement, which surface critical issues from Infrastructure and Quality to be addressed by Development.
Refinement will happen on a weekly basis and involve a member of infrastructure, quality, product management, and development.
Rapid Action is a process we follow when a critical situation arises needing immediate attention from various stakeholders.
Any problem with both high severity and broad impact is a potential Rapid Action. For example, a performance problem that causes latency to spike by 500%, or a security problem that risks exposing customer data.
If the problem only affects one customer, consider a customer escalation process as the alternative.
When a situation is identified as a potential Rapid Action the following actions are recommended:
Optionally, to facilitate communication, you might:
The DRI is responsible for coordinating the effort to resolve the problem, from start to finish. In particular, the DRI is responsible for:
Please note that customers can be stakeholders. The DRI can seek assistance with customer communication in Slack at
The DRI should post a formal update on the epic every day, following this format:
YYYY-MM-DD
- What progress has been made (what, effect): Example: What changes have been deployed to production and how are they impacting the problem?
- What's happening next (what, when, effect): What work is currently in progress (include links to MRs), when do you expect it to be deployed, and what do you expect the effect(s) to be?
- Blockers (optional): Are there any specific obstacles preventing us from making progress? If so, what is needed to overcome them?
- Other notes (optional): Anything else that is relevant.
Once the resolution criteria have been satisfied:
@gitlab-com/gl-security/secops to determine when the epic can be made public.
Available email aliases (a.k.a. Google Groups):
Managers', Directors', and VPs' teams: each alias includes everyone in the respective organization.
email@example.com, examples below -
Teams roll up by the org chart hierarchy -
Note: books in this section can be expensed.