
Package Group

The Package group works on the part of GitLab concerning the Package stage, which integrates with GitLab's CI/CD product. Our mission is to create a secure environment where both source code and dependencies can live, allowing you to publish, consume, and discover packages for a wide variety of languages and platforms, all in one place.

The package group is using the integrated group model which means that all the engineers, front end and back end, report to a single Engineering Manager. The intention of using this model is to increase efficiency and drive results by building a process that supports the whole group's effort towards our goals. Primarily, this means that all engineers participate in conversations in the group and contribute broadly to group process iteration.

The package group is made up of 3 functional teams:

Because these functional teams are small, Package group deliverables are sometimes at greater risk, since fewer people are available to help out.

For more details about the vision for this area of the product, see the product vision page.

Team Members

The following people are permanent members of the Package Group:

Person Role
Daniel Croft Engineering Manager, Package
Nick Kipling Senior Frontend Engineer, Package
Steve Abrams Backend Engineer, Package
David Fernandez Senior Backend Engineer, Package
Giorgenes Gelatti Backend Engineer, Package
Hayley Swimelar Backend Engineer, Package
João Pereira Senior Backend Engineer, Package
Nicolò Maria Mezzopera Senior Frontend Engineer, Package

Stable Counterparts

The following members of other functional teams are our stable counterparts:

Person Role
Achilleas Pipinellis Senior Technical Writer, Create, Package, Monitor, Secure, Defend
Justin Mandell UX Manager, Package, Configure, and Monitor
Jason Yavorska Director of Product, CI, Runner Progressive Delivery & Package
Tim Rizzi Senior Product Manager, Package
Iain Camacho Senior Product Designer, Package

Issue boards

Demos & Speedruns

Package Registry

Container Registry

Dependency Proxy

Think BIG planning

The purpose of this meeting is to discuss the vision, product roadmap, user research, design, and delivery around the Package solution.


The goal of this meeting will be to align the team on our medium to long-term goals and ensure that our short-term goals are leading us in that direction.

How it works

We have a ThinkBIG meeting on the second Wednesday of the month at 1:30PM UTC. The agenda document is available per communication guidelines, and the video of the meeting is shared on our GitLab Unfiltered YouTube channel to enable asynchronous collaboration. Action items from the meeting will be created with the label ~Package:ThinkBIG. We evaluate the cadence of the meeting to make sure that it's valuable and in adherence to meeting guidelines.

How to contribute

If possible, join the synchronous meeting and discussion on Wednesdays
Add discussion items to the agenda document
Read through the active epics, and leave feedback and questions
Read through the discussion topic issues, and leave feedback/questions

How we work

Roles and responsibilities

Our team emphasizes ownership by the people who have the information required. This means, for example, that in a discussion about UX considerations, our Product Designer has ownership. When we're building features, the Product Manager owns the decision on whether a feature meets our customer needs. Our Product Developers own the technical solutions being implemented.

Understanding our users

As a team, we are committed to understanding our users' needs. We believe the best way to do that is by understanding the reason they hired GitLab, and how those motivations translate into our area of the product. For that, we apply a research-driven approach to Clayton M. Christensen's Jobs to be Done (JTBD) theory of innovation, which aims to understand why a customer bought a given product. We then utilize the job story to identify a list of specific, contextual user needs that fulfill their JTBD. In addition, we regularly evaluate the overall user experience of each JTBD to ensure that we are meeting the needs of our users. You can view and contribute to our current list of JTBD and job stories here. All of the above is used to drive our validation and development workflows.


Our Product Manager owns the problem validation backlog and problem validation process as outlined in the Product Development Workflow. Our Product Designer then owns the solution validation process. You can view all items and their current state in the Package: Validation Track issue board.

At the end of this process, the issues will be ready for the build track which is owned by the Product Developers and lives in the Package:Workflow issue board.

Once an issue receives the workflow::scheduling label, engineers will give the issue a weight and identify any gaps in the description, eventually moving the issue to the workflow:ready for development state (with the matching label).

Our Product Manager, Engineering Manager, and Frontend Engineering Manager will ensure that the package:active label is applied to enough issues for the team to have work items. Product Developers are empowered to move items into this state as well.

The Product Manager owns the process of populating the current milestone with feature work. This feature work takes priority but is limited to 3 to 4 items per milestone. Once feature work has been exhausted, Product Developers are empowered to prioritize issues that will quickly deliver customer value, preferring smaller issues over larger ones.

Issues that we're actually expecting to work on will have the package:active label added to them at which point they'll appear in the Package:Workflow board. As product developers begin working on the issue, they'll assign the workflow:in dev label.


Throughout the workflow, issues should be addressed in the following priority order:

  1. Security issues: These will be at the top of our Package:Milestones Board and identified with either a Planning Priority or Package:P1 label
  2. Planning Priority label: Organizational level priorities that span multiple customers and prospects.
  3. Package:P1 label: Used to identify high priority issues that should be committed to in a given milestone or scheduled in an upcoming milestone.
  4. Package:P2 label: Used to identify security issues, bugs, and feature requests that, although they may not be in a milestone, should be worked on ahead of any other work.
  5. Package:Triage label: Cross-functional dependencies required to resolve important issues for our team.
  6. package:active label: Worked in order of least effort to largest.
  7. workflow::scheduling label: These are issues that require weighting, feedback and scheduling before being moved to package:active
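The priority order above can be encoded as a simple sort key. The sketch below is illustrative only: label names come from the list above, but the issue representation (a dict with a set of label names) is an assumption, not how GitLab stores issues.

```python
# Label priority for Package group issues, highest first (see the list above).
# Security issues surface through the Planning Priority / Package:P1 labels.
LABEL_PRIORITY = [
    "Planning Priority",    # org-level priorities spanning multiple customers
    "Package:P1",           # high priority, committed to a milestone
    "Package:P2",           # worked ahead of any other (unlabeled) work
    "Package:Triage",       # cross-functional dependencies
    "package:active",       # active work, least effort first
    "workflow::scheduling", # needs weighting and feedback before activation
]

def priority_rank(labels):
    """Rank of the highest-priority label present (lower = work on it sooner);
    issues carrying none of these labels sort last."""
    ranks = [LABEL_PRIORITY.index(name) for name in labels if name in LABEL_PRIORITY]
    return min(ranks) if ranks else len(LABEL_PRIORITY)

# Hypothetical issues, represented as their internal id (iid) and label set.
issues = [
    {"iid": 1, "labels": {"package:active"}},
    {"iid": 2, "labels": {"Planning Priority", "Package:P1"}},
    {"iid": 3, "labels": {"workflow::scheduling"}},
]
ordered = sorted(issues, key=lambda issue: priority_rank(issue["labels"]))
```

With these inputs, issue 2 sorts first because Planning Priority outranks every other label.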

How we measure results

In order to better align our effort with our customers' needs, we will use the following methodology to measure our results. We believe that our best measure of success and progress is our product category maturity plan. Progress towards these goals will be measured as follows:

  1. The long-term product category maturity goals will be split into each stage: minimal, viable, complete, and lovable
  2. For each category's next maturity stage, we'll break down each feature into small iterations and give them issue weights
  3. These weighted issues will have the ~Package:P1 label applied then be scheduled in upcoming milestones
  4. We'll measure our delivery by the percentage of issues completed out of the total with a goal of 100% completion and reevaluate our ability to deliver on our long term goals in each iteration
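The delivery measure in step 4 is a plain completion ratio. A minimal sketch, assuming each scheduled issue is reduced to its state; the data below is hypothetical:

```python
def delivery_percentage(issues):
    """Percentage of the milestone's scheduled issues that were closed
    (the goal is 100%); an empty milestone counts as 0%."""
    if not issues:
        return 0.0
    closed = sum(1 for issue in issues if issue["state"] == "closed")
    return 100.0 * closed / len(issues)

# Hypothetical milestone: four weighted issues, three delivered.
milestone_issues = [
    {"iid": 10, "weight": 1, "state": "closed"},
    {"iid": 11, "weight": 3, "state": "closed"},
    {"iid": 12, "weight": 2, "state": "opened"},
    {"iid": 13, "weight": 5, "state": "closed"},
]
```

Here `delivery_percentage(milestone_issues)` yields 75%, which would prompt a reevaluation of the next iteration's commitments.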

The process of making sure that there are issues to evaluate and break down is the responsibility of our Product Manager. It is the responsibility of the engineering team to evaluate each issue and make sure it's ready for development (using the ~"workflow::ready for development" label). It is the responsibility of our Product Designer to evaluate user experience and score our product maturity based on user research. This process will take some time to complete each time we achieve a new maturity stage. MR Rate will be used as an objective measure of our efficiency, not of alignment with our customer's needs or our organizational goals.

Product maturity goals

The epics below detail the work required to move each respective category to the next maturity level.

Code Review

Code reviews follow the standard process of using the reviewer roulette to choose a reviewer and a maintainer. The roulette is optional, so if a merge request contains changes that someone outside our group may not fully understand in depth, people are encouraged to ask for a preliminary review from members of the package group to focus on correctly solving the problem. The intent is to leave this choice to the discretion of the engineer but raise the idea that fellow package group members will sometimes be best able to understand the implications of the features we are implementing. The maintainer review will then be more focused on quality and code standards.

This tactic also creates an environment to ask for early review on a WIP merge request where the solution might be better refined through collaboration and also allows us to share knowledge across the team.

UI or Technical Writing Review

When a merge request needs to be reviewed for the experience or for the copy in the user interface, there are a few suggestions to ensure that the review process is quick and efficient:

Community Contributions


A community contribution is a merge request with the following properties:

  1. It impacts features or issues managed by the Package group. This means it has the ~"devops::package" label.
  2. It is authored by anyone in the wider community or at GitLab who isn't part of the Package group.


A Package group member will adopt the community contribution with the following tasks:


Other points to consider for the Package group member:

Sync Design Review

Design Feedback Round Robin is an effective tool to help enable teams by creating a structured feedback conversation. The primary focus of this exercise is to ensure everyone in the room has a voice, enabling us to capture a large quantity of precise and focused feedback. This conversation is strictly timeboxed to 15 minutes, so remember to be concise and have fun with it! If you ever need inspiration for feedback, consider taking a few different hats for a spin!

The Set-Up

The setup is very simple. Before the sync session, prepare the agenda by pasting a link to these rules. As the meeting starts, look at the attending members' names and form a randomly ordered list. This will be the order for participants to go in. Make sure to put this ordering into the agenda.

The Process

The designer will kick off the process by quickly reviewing the rules and starting the 15-minute timer. After the timer has started, the activity goes as follows:

  1. The designer should present the design clearly communicating what areas they're looking for feedback on. For more genuine reactions and feedback, keep the explanation as short as possible.
  2. Following the order pasted into the agenda, participants take turns asking relevant questions and providing a single piece of feedback to the design. Each "turn" should be limited to about 1 minute.
  3. Repeat this turn-based process until time runs out or all the participants "pass".

The Turns

As a participant, you can do a few different things on your turn. Try to be quick, as each turn should only last around 1 minute. During your turn, you can do a few things (in order of priority):

  1. Ask questions to the designer.
  2. +1 or -1 somebody else's piece of feedback.
  3. Provide one (1) piece of feedback.
  4. "Pass" - you can skip your turn.
  5. You officially end your turn by calling out the name of who is next.

Notes about the feedback options:

Remember, the goal is to capture a quantity of specific feedback. While it may be tempting to start discussions around the design choices, this activity isn't the right forum for them. The designer will follow up asynchronously in the issue to start discussions and conversations around the feedback.

Missed deliverables retrospectives

When issues that we commit to delivering (those with the ~Deliverable label) are not delivered in the milestone we committed to, we will hold an asynchronous retrospective on the miss to determine the root cause, following the guidelines outlined in the handbook. For a single issue, these retrospectives may be quite brief; when we miss a larger effort, the root cause analysis will be more detailed. These should be conducted within the first week following the determination that we'll miss the deliverable.

Async Daily Standups

The purpose of the daily standup is to give team members visibility into what everyone else is doing, provide a platform for asking for and offering help, and provide a starting point for some social conversations. We use Geekbot integrated with Slack.

While it is encouraged to participate in the full variety of daily questions, it is completely acceptable to skip questions by entering -.

The Geekbot asynchronous standup will be reserved for blocking items and merge announcements (merge parrot!). Our normal updates on progress and status will be added to the issue as a comment. The status comment should include what percentage complete the work is, the person's confidence that their estimate is correct, and notes on what was done and/or whether review has started. If multiple people are working on the issue, it's helpful to note whether the update covers frontend or backend work. Finally, please include an entry for each associated MR.


Complete: 80%
Confidence: 90%
Notes: expecting to go into review tomorrow
Concern: ~frontend 
Issue status: 20% complete, 75% confident

MR statuses: 
!11111 - 80% complete, 99% confident - docs update - need to add one more section
!21212 - 10% complete, 70% confident - api update - database migrations created, working on creating the rest of the functionality next
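To illustrate the shape of such a status comment, here is a small formatter sketch. The field names follow the example above; the function name and the data passed to it are hypothetical:

```python
def format_status(complete, confidence, notes, mrs):
    """Render an issue status comment in the shape shown above:
    issue-level fields first, then one line per associated MR."""
    lines = [
        f"Complete: {complete}%",
        f"Confidence: {confidence}%",
        f"Notes: {notes}",
        "",
        "MR statuses:",
    ]
    for mr in mrs:
        lines.append(
            f"!{mr['iid']} - {mr['complete']}% complete, "
            f"{mr['confidence']}% confident - {mr['note']}"
        )
    return "\n".join(lines)

# Hypothetical update covering one MR.
comment = format_status(
    80, 90, "expecting to go into review tomorrow",
    [{"iid": 11111, "complete": 80, "confidence": 99,
      "note": "docs update - need to add one more section"}],
)
```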

Weekly Retrospective

Our weekly retrospective is intended to provide the team an opportunity to retrospect on the week's effort. The discussion takes the usual GitLab asynchronous-enabled synchronous meeting format: it has an agenda Google document and we upload the video to GitLab Unfiltered. We currently have our retro every week on Friday morning (UTC-7), and every 3rd week the meeting is held on Thursday afternoon (UTC-7) to support people in APAC time zones. The retrospective is a 25-minute meeting.

The document is an ongoing list of retrospectives with a date heading as well as a link to the video after it has been uploaded. Each retrospective is divided into what went not so well and what went well in that order - so we can end the meeting on a positive note. In asynchronous style, we add our items prior to the meeting and read them out during the meeting. We read people's items when they aren't able to attend.

We roll up some of our retro thoughts into our monthly, milestone-linked, async retrospective. Ideally we will be able to address concerns in the retro. Action items are described during the meeting.

Examples of our retrospectives can be found here:

Issue Weighting

1: Trivial
The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required.

Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven't found time for yet.

This will map to a confidence greater than or equal to 90%.
2: Small
The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required.

Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place.

This will map to a confidence greater than or equal to 75%.
3: Medium
Features that are well understood and relatively straightforward. A solution will be outlined, and most edge cases will be considered, but some extra investigation will be required to realize the solution. Some surprises are expected, and coordination with other teams or people may be required.

Bugs that are relatively poorly understood and may not yet have a suggested solution. Significant investigation will definitely be required, but the expectation is that once the problem is found, a solution should be relatively straightforward.

Examples are regular features, potentially with a backend and frontend component, or most bugs or performance issues.

This will map to a confidence greater than or equal to 60%.
5: Large
Features that are well understood, but known to be hard. A solution will be outlined, and major edge cases will be considered, but extra investigation will definitely be required to realize the solution. Many surprises are expected, and coordination with other teams or people is likely required.

Bugs that are very poorly understood, and will not have a suggested solution. Significant investigation will be required, and once the problem is found, a solution may not be straightforward.

Examples are large features with a backend and frontend component, or bugs or performance issues that have seen some initial investigation but have not yet been reproduced or otherwise "figured out".

This will map to a confidence greater than or equal to 50%.

Anything larger than 5 should be broken down. Anything with a confidence percentage lower than 50% should be investigated prior to finalizing the issue weight.
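The weight-to-confidence mapping above reduces to a small lookup. A sketch for sanity-checking estimates follows; the thresholds come from the descriptions above, while the function name is illustrative:

```python
# Minimum estimate confidence implied by each issue weight (from the table above).
MIN_CONFIDENCE = {1: 90, 2: 75, 3: 60, 5: 50}

def weight_is_consistent(weight, confidence):
    """True if a weight/confidence pair fits the guidelines:
    weights above 5 (or non-standard weights) must be broken down,
    and anything under 50% confidence needs more investigation
    before the weight is finalized."""
    if weight not in MIN_CONFIDENCE:
        return False  # break the issue down instead of weighting it
    if confidence < 50:
        return False  # investigate before finalizing the weight
    return confidence >= MIN_CONFIDENCE[weight]
```

For example, a weight of 2 with 80% confidence is consistent (the floor is 75%), while a weight of 1 with 70% confidence is not, since trivial issues imply at least 90% confidence.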