Our mission is to create a secure environment where both source code and dependencies can live, by allowing you to publish, consume, and discover packages across a wide variety of languages and platforms, all in one place.
For more details about the vision for this product area, see the product vision page.
The package group is a Fullstack team which means that all the engineers, Frontend and Backend, report to a single Fullstack Engineering Manager. Using this model increases efficiency and drives results by building a process that supports all team members towards our goals. This means that all engineers participate in conversations in the group and contribute broadly to group process iteration.
The package group is made up of 3 functional teams:
Because these functional teams are small, Package group deliverables are sometimes at greater risk: there are fewer people who can help out.
The following people are permanent members of the Package Group:
| Person | Role |
| ------ | ---- |
| Michelle Torres | Fullstack Engineering Manager, Package |
| David Fernandez | Senior Backend Engineer, Package |
| Hayley Swimelar | Senior Backend Engineer, Package |
| Jaime Martínez | Senior Backend Engineer, Package |
| João Pereira | Staff Backend Engineer, Package |
| Rahul Chanila | Senior Frontend Engineer |
| Steve Abrams | Backend Engineer, Package |
The following members of other functional teams are our stable counterparts:
| Person | Role |
| ------ | ---- |
| Jackie Porter | Director of Product Management, Verify & Package and Acting Verify:Testing Product Manager |
| Tim Rizzi | Principal Product Manager, Package |
| Katie Macoy | Product Designer, Package |
| Sofia Vistas | Senior Software Engineer in Test, Package:Package |
| Vitor Meireles De Sousa | Senior Security Engineer, Application Security, Package (Package), Configure (Configure), Monitor (Monitor) |
In order to better align our effort with our customers' needs, we will use the following methodology to measure our results. We believe that our best measure of success and progress is our product category maturity plan. Progress towards these goals will be measured as follows:
Issues with the `Package:P1` label applied will then be scheduled in upcoming milestones.
The below epics detail the work required to move each respective category to the next maturity level.
We use quarterly Objectives and Key Results as a tool to help us plan and measure how to achieve Key Performance Indicators (KPIs).
Here is the standard, company-wide process for OKRs
We measure the value we contribute by using performance indicator metrics. The primary metric used for the Package group is the number of monthly active users (GMAU). For more details, please check out the Ops section's performance indicators.
We also track our backlog of issues, including past due security and infradev issues, and total open SUS-impacting issues and bugs.
MR Type labels help us report what we're working on to industry analysts in a way that's consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
We monitor our features using different dashboards. It is recommended to check them weekly.
These dashboards are all internal and can only be accessed by GitLab team members.
Error Budgets for stage groups have been established in order to help groups identify and prioritize issues that are impacting customers and infrastructure performance. The Error Budget dashboard is used to identify issues that are contributing to the Package group's error budget spend.
The Package::Package error budget performance indicator is tracked and updated weekly.
The engineering manager will review the error budget dashboard weekly to determine whether we're exceeding our budget, determine what (if anything) is contributing to our error budget spend, and create issues addressing the root cause for product manager prioritization. Issues created to address error budget spend should be created using appropriate labels, as well as the `Error Budget Improvement` label, to facilitate tracking and measurement.
We expect to track the journey of users through the following funnel. You can view the below metrics in the Package usage funnel dashboard.
Follow along with our instrumentation and measurement of Package-related metrics in gitlab-#2289.
As a team, we are committed to understanding our users' needs. We believe the best way to do that is by understanding the reasons they hired GitLab, and how those motivations translate into our area of the product. For that, we apply a research-driven approach to the Jobs to Be Done (JTBD) framework of innovation. This method aims to understand why a customer uses and buys a given solution. We apply the job statement to identify a list of specific, contextual user needs that fulfill their JTBD. In addition, we regularly evaluate the overall user experience of each JTBD with UX Scorecards, to ensure that we are meeting the needs of our users.
You can view and contribute to our current list of JTBD and job statements here.
The GitLab Container and Package Registry currently handle hundreds of millions of events per week. However, when onboarding a large, enterprise customer it will be helpful for GitLab and the customer to understand their expected use case and workflows to ensure that the product scales reliably. When onboarding a new, large customer, it's helpful to follow the below steps:
Our team emphasises ownership by people who have the information required. This means, for example, in the event of some discussion about UX considerations, our Product Designer will have ownership. When we're building features, the Product Manager owns the decision on whether this is a feature that meets our customer needs. Our Engineers own the technical solutions being implemented.
The process of making sure that there are issues to evaluate and break down is the responsibility of our Product Manager. It is the responsibility of the engineering team to evaluate each issue and make sure it's ready for development (using the `workflow::ready for development` label). It is the responsibility of our Product Designer to evaluate user experience and score our product maturity based on user research. This process will take some time to complete each time we achieve a new maturity stage. MR Rate will be used as an objective measure of our efficiency, not of alignment with our customers' needs or our organizational goals.
Issues for Package group can be found in the following projects:
To better plan, visualize, and organize our work, we use the following issue boards:
`Package:Cross-Group Assignments` - like the `Package:Assignments` board, but for Cross-Group Dependencies.
We have created a collection of Tips and Tricks for folks working with/around the Package Stage. You can view them on our Wiki Page.
| Meeting | Purpose |
| ------- | ------- |
| Weekly sync | Share news and information and provide an opportunity for people on the team to escalate concerns. In addition, the weekly sync includes a rotating agenda of Product, UX, and Engineering specific topics. |
| Retrospective (weekly) | Discuss not only what went well or not, but also how we did things and what we can do to improve for next week. |
| Think BIG (monthly) | Discuss the vision, product roadmap, user research, design, and delivery around the Package solution. |
The weekly sync is the Package group's first touchpoint of the week. In addition to an opportunity to share any relevant updates or concerns, the team cycles through a rotating (weekly) agenda. For example:
When issues that we commit to delivering (those with the `Deliverable` label) are not delivered in the milestone we committed to, we will hold an asynchronous retrospective on the miss to determine the root cause, following the guidelines outlined in the handbook. In instances of a single issue, these retrospectives may be quite brief; in scenarios where we miss a larger effort, the root cause analysis will be more detailed. These should be conducted within the first week following the determination that we'll miss the deliverable.
The purpose of the daily standup is to allow team members to have visibility into what everyone else is doing, provide a platform for asking for and offering help, and provide a starting point for some social conversations. We use Geekbot integrated with Slack.
While participating in the full variety of daily questions is encouraged, it is completely acceptable to skip questions.
The Geekbot asynchronous standup will be reserved for blocking items and merge announcements (merge parrot!).
The purpose of daily updates is to inspect progress and adapt upcoming planned work as necessary. In an all-remote culture, we keep the updates asynchronous and put them directly in the issues.
The async daily update communicates the progress and confidence using an issue comment and the milestone health status using the Health Status field in the issue. A daily update may be skipped if there was no progress. It's preferable to update the issue rather than the related merge requests, as those do not provide a view of the overall progress.
A weekly async update should be added to epics, providing an overview of the progress across related issues.
When communicating the health status, the options are:
- `on track` - when the issue is progressing as planned
- `needs attention` - when the issue requires attention or intervention to keep it on schedule
- `at risk` - when there is a risk the issue will not be completed according to schedule
The async update comment should include:
```
Complete: 80% Confidence: 90% Notes: expecting to go into review tomorrow Concern: ~frontend
```
Include one entry for each associated MR
```
Issue status: 20% complete, 75% confident
MR statuses:
!11111 - 80% complete, 99% confident - docs update - need to add one more section
!21212 - 10% complete, 70% confident - api update - database migrations created, working on creating the rest of the functionality next
```
Ask yourself: how confident am I that my percentage of completeness is correct?
For things like bugs or issues with many unknowns, the confidence can help communicate the level of unknowns. For example, if you start a bug with a lot of unknowns on the first day of the milestone you might have low confidence that you understand what your level of progress is.
We generally follow the Product Development Flow:
- `workflow::problem validation` - needs clarity on the problem to solve. Our Product Manager owns the problem validation backlog and problem validation process as outlined in the Product Development Workflow.
- `workflow::design` - needs a clear proposal (and mockups for any visual aspects).
- `workflow::solution validation` - designs need to be evaluated by customers and/or other GitLab team members for usability and feasibility. Our Product Designer owns the solution validation process. You can view all items and their current state in the Package: Validation Track issue board.
- `workflow::refinement` - needs a weight estimate and clarification to ensure an issue is ready for development. At the end of this process, the issues will be ready for the build track, which is owned by the Engineers and lives in the Package:Workflow issue board.
- `workflow::ready for development` - the issue has been refined and weighted and can be picked up for development.
- `workflow::verification` - code is merged and pending verification by the DRI engineer.
- `workflow::staging` - code is in staging and has been verified.
- `workflow::production` - code is in production and has been verified. Ideally, when an issue is closed, it has this label.
The Product Manager owns the process of populating the current milestone's work following the prioritization guidelines. Once the planned work has been exhausted, engineers are empowered to prioritize issues that will deliver customer value, preferring smaller issues over larger ones.
Issues that we're expecting to work on in the milestone will have the `workflow::ready for development` label added to them. Once labeled, they'll appear in the Package:Workflow board. As engineers begin working on an issue, they'll assign the `workflow::in dev` label.
Additional issues are labeled `Package:P2` according to their priority. Our prioritization model can be found below in the section Priorities.
We commit to `Package:P1` work in the milestone by having an engineer add the `workflow::ready for development` label and then having the engineering manager add the `Deliverable` label. We measure our predictability and commitments with Say/Do ratios.
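For illustration, a Say/Do ratio is simply the share of committed Deliverables that actually shipped in the milestone. The sketch below is a hypothetical helper, not an official GitLab tool:

```python
def say_do_ratio(committed, delivered):
    """Say/Do ratio: share of committed Deliverables actually delivered.

    Hypothetical helper for illustration only.
    """
    if committed == 0:
        return None  # no commitments were made this milestone
    return delivered / committed

# Example: a milestone with 8 committed Deliverables, 6 delivered.
print(say_do_ratio(8, 6))  # 0.75
```

A ratio consistently below 1.0 signals over-commitment and feeds back into the next milestone's planning.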
After we complete problem and solution validation, the next step is refinement, where engineers break down requirements and present a high-level design and feasible estimated solution.
During refinement, engineers can create more issues if necessary. A single engineer can do the refinement. They can ask other team members for a review or seek input to understand the domain knowledge better when in doubt.
Issues needing refinement get the `workflow::refinement` label and are added to the milestone planning issue by the Product Manager. At the end of refinement, engineers apply the `workflow::ready for development` label.
Throughout the workflow, issues should be addressed in the following priority order:
`Corrective Action` and `ci-decomposition::phase*` issues will be at the top of our Package:Milestones Board.
- `Package:P1` label: Used to identify high-priority issues that should be committed to in a given milestone or scheduled in an upcoming milestone.
- `Community Contribution` label: During milestone planning, this identifies community contributions we committed to delivering in a given milestone.
- `Package:P2` label: Used for stretch goals in a given milestone.
- `workflow::refinement` label: Issues that require weighting, feedback, and scheduling before being moved to `workflow::ready for development`.
| Weight | Description | Examples | Confidence |
| ------ | ----------- | -------- | ---------- |
| 1: Trivial | The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required. | Documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address but just haven't found time for yet. | Greater than or equal to 90% |
| 2: Small | The problem is well understood and a solution is outlined, but a little extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required. | Simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place. | Greater than or equal to 75% |
| 3: Medium | Features that are well understood and relatively straightforward: a solution will be outlined, and most edge cases will be considered, but some extra investigation will be required to realize the solution. Some surprises are expected, and coordination with other teams or people may be required. Also bugs that are relatively poorly understood and may not yet have a suggested solution: significant investigation will definitely be required, but the expectation is that once the problem is found, a solution should be relatively straightforward. | Regular features, potentially with a backend and frontend component, or most bugs or performance issues. | Greater than or equal to 60% |
| Larger: resize | Features that are well understood but known to be hard: a solution will be outlined, and major edge cases will be considered, but extra investigation will definitely be required to realize the solution. Many surprises are expected, and coordination with other teams or people is likely required. Also bugs that are very poorly understood and will not have a suggested solution: significant investigation will be required, and once the problem is found, a solution may not be straightforward. | Large features with a backend and frontend component, or bugs or performance issues that have seen some initial investigation but have not yet been reproduced or otherwise "figured out". | Greater than or equal to 50% |
Anything larger than 3 should be broken down. Anything with a confidence percentage lower than 50% should be investigated prior to finalising the issue weight.
Our intention is to break up issues that have a weight greater than 3, either by converting the issue to an epic with sub issues or just separating the work into related issues. An issue weight of 3 should describe something that would take no more than 2 weeks to complete.
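The weight-to-confidence mapping above can be restated as a small helper. This is purely illustrative (the names and code are hypothetical, the thresholds come from the table above):

```python
# Hypothetical lookup mirroring the weight table; not an official GitLab tool.
WEIGHT_TO_MIN_CONFIDENCE = {
    1: 0.90,  # Trivial
    2: 0.75,  # Small
    3: 0.60,  # Medium
}

def minimum_confidence(weight):
    """Minimum confidence implied by an issue weight; anything above 3 maps to 50%."""
    return WEIGHT_TO_MIN_CONFIDENCE.get(weight, 0.50)

def should_break_down(weight, confidence):
    """Issues heavier than 3, or below 50% confidence, need splitting or more investigation."""
    return weight > 3 or confidence < 0.50
```

For example, a weight-2 issue estimated at 40% confidence fails the threshold and should be investigated before the weight is finalised.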
When starting work on an MR that involves unfamiliar tools or libraries, be sure to update the estimated weight, depending on who picks up the issue, to reflect the additional time that may be spent learning. For example, a developer who has never worked with GraphQL may need to spend additional time learning it compared with a developer who already has GraphQL experience. If the first developer picks up the issue, they should consider raising the weight to reflect that it may take them longer to deliver it.
When working on an MR for a Deliverable, don't lose track of the aim: release the Deliverable in time. That doesn't mean that refactorings can't happen or that we can't take time to investigate side subjects. It means that we need to limit the time allocated for this type of work.
When considering a refactoring, even a heavy one, work iteratively. A refactoring can be implemented and refined many times, but consider releasing a good-enough first version so that dependent work is not delayed or blocked. For an example of how we can work iteratively, please see how we worked through lifting the npm naming convention.
A bug investigation is a two-part process:
At the end of this process, the engineer should be able to weight the issue.
The whole process can take anywhere from a few minutes to several hours (or even days). The assigned engineer should timebox this process: investing too much time in it without communicating and coordinating with the EM and PM hinders milestone planning. We suggest that anything that goes above half a day should be coordinated with the team.
If a bug investigation takes more time than intended, it's better to:
When a bug is detected on staging, engineers should evaluate how severe it will be in production. You can use Kibana or other tools to estimate the number of requests impacted. If the severity is high, appropriate actions should be taken.
In particular, when the most used package registries (npm, Maven) are impacted negatively, consider the bug a higher severity. If the bug disrupts the expected behavior of those package registries, consider blocking the next production deployment with the appropriate actions above.
To best understand how users use the GitLab package registry, when building and testing features, it is beneficial to test using projects that resemble real use-case scenarios. A Hello-World package is not going to simulate the same functionality that a large open source library or enterprise customer is going to experience. Depending on the feature that is being built, it is recommended during the development phase to test locally using a real package. Additionally, consider reviewing existing data to determine a good range of test cases. The package group has created an ad-hoc test projects group to store larger projects that can be used to test against. This group may contain copies of open source projects or projects specifically designed to test certain aspects of the GitLab package registry. It is not meant to be a static collection of projects, so the projects may be replaced, updated, or removed as seen fit.
The package features regularly deal with file uploads. When testing these features locally using an environment like GDK, it is recommended to test changes using the default local storage configuration, but also using a cloud service for object storage. GCP is recommended when trying to best recreate the environment GitLab.com is running. For highest confidence in features working with uploads, testing using local storage, Minio, GCP, AWS S3, and Azure is recommended. The GDK docs have instructions on how to configure for each of these providers.
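As a minimal sketch of the MinIO path described above, object storage in GDK is enabled through `gdk.yml`. The keys below are our reading of the GDK object storage how-to and should be verified against the current docs before use:

```yaml
# gdk.yml: route uploads through MinIO-backed object storage
# (keys assumed from the GDK object storage how-to; confirm against the docs)
object_store:
  enabled: true
  port: 9000   # port MinIO is expected to listen on locally
```

After editing, run `gdk reconfigure` to apply the change. Pointing at GCP, AWS S3, or Azure instead requires provider credentials in the object storage connection settings; the GDK docs cover each provider.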
Code reviews follow the standard process of using the reviewer roulette to choose a reviewer and a maintainer. The roulette is optional, so if a merge request contains changes that someone outside our group may not fully understand in depth, people are encouraged to ask for a preliminary review from members of the package group to focus on correctly solving the problem. The intent is to leave this choice to the discretion of the engineer but raise the idea that fellow package group members will sometimes be best able to understand the implications of the features we are implementing. The maintainer review will then be more focused on quality and code standards.
This tactic also creates an environment to ask for early review on a WIP merge request where the solution might be better refined through collaboration and also allows us to share knowledge across the team.
When a merge request needs to be reviewed for the experience or for the copy in the user interface, there are a few suggestions to ensure that the review process is quick and efficient:
The Package team has a goal of shipping enterprise grade software with a focus on Quality. The team accomplishes this goal with the following practices:
Following GitLab's Culture of Quality with a focus on being champions for better software design.
Partnering with our Software Engineer in Test stable counterparts.
Frequently reviewing the code coverage across our functional areas (Go, Ruby, Frontend) and addressing low-scoring areas as needed.
Actively reviewing Triage reports and working with our Product Manager to prioritize bugs or regressions.
To better understand the risk environment and each risk's causes and consequences, the Package team uses the Risk Mapping Tool as their risk management tool to strategically prioritise mitigation and increase Quality.
If a community contributor wants to pick up an issue, or create an issue and a follow-up merge request for it, please ping `@gitlab-org/ci-cd/package-stage` or an individual team member on the issue itself before starting the work. This ensures that:
Additionally, the Package team can help set realistic review/merge times based on the scope of the work.
Ultimately, the aim is to enable community contributors to deliver meaningful work with the least amount of back and forth, minimising the risk of stumbling on a show-stopper.
A merge request with the following properties:
A Package group member will adopt the community contribution with the following tasks:
- Apply the `workflow::in dev` label to signal that the issue is already in development.
- Use the `/copy_metadata` quick action to copy the labels from the issue.
- For `Package:P1` issues, we should invest the time necessary to make sure the author is able to contribute.
Given the number of community contributions submitted (thank you!), the Package team will include them in Milestone Planning issues. We'll schedule time for team members to assist with the various community contributions as part of our milestone plan. You can view guidelines for merge requests and a definition of done here.
The Package team will add review weight labels to community contributions to try to help understand the required effort and plan capacity. The intention is to help the team better plan for the support of community contributions among other priorities. We'll start with labels for weights of 1, 2, 3, and 5, similar to the weights we use for our issues. The only difference is that a `package-review-weight::5` won't be replaced with an investigation. We will analyse our community contribution capacity in milestone
Other points to consider for the Package group member:
It can be overwhelming to come back to work after taking time off. Remember to review the returning from PTO section of our time-off policy, especially the key points:
Cross-group dependencies may exist as pre-requisites to deliver Package features or bug fixes. Issues to deliver such dependencies are owned by groups that Package depends on, such as Delivery or Distribution.
For discoverability, issues that represent cross-group dependencies should be labeled with `package:cross-group-dependency`. If working on one of these issues, Package engineers should ensure that they are labeled correctly. For visibility, these issues are shown in the Package:Cross-Group Assignments issue board.
The product manager should include cross-group dependencies in the milestone planning issue for review, discussion and prioritization.
When requiring attention from all the team members, use any of the following options or mix them.
Ping `@package-combined-team` only once in an issue, when there will be multiple interactions and actions required. Consider this the way to bring the issue to team members' attention; for example, when a milestone planning issue is ready for team members to review and give feedback, and multiple actions could come up later in the conversations.
For any other communication tailored to only certain members, we ping them individually on issues.
There are times during the development lifecycle that changes need to be communicated to the Infrastructure teams. For example:
GitLab Unfiltered to view