The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
By calendar year 2024, our vision for GitLab Runner Core is to provide developers with a zero-friction experience. The goals supporting that vision include automated installation and configuration of GitLab Runner on the market-leading compute architectures and operating systems, and zero-friction runner autoscaling both on public cloud provider-hosted virtual machine stacks and on public cloud or on-premise Kubernetes clusters.
While enabling that vision is the critical pillar of the Runner Core investment strategy through calendar year 2024, a core guiding principle is that GitLab Runner's performance remains best in class and that it can be hosted in environments with the most stringent security and compliance requirements.
Check out our Ops Section Direction "Who's it for?" for an in-depth look at our target personas across Ops. For Runner, the "What's Next & Why" items below target the following personas, ranked by priority for support:
|Operating Systems|Compute Architectures|
|---|---|
|Linux|x86_64, ARM32, ARM64, ppc64le, s390x|
|Container Orchestration|Kubernetes, Red Hat OpenShift, AWS Fargate, AWS EKS, GCP ECS|
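As a concrete illustration of the Linux support listed above, installing GitLab Runner on a Debian/Ubuntu host currently looks like the sketch below. The commands follow the official package repository instructions; adjust the package manager steps for your distribution.

```shell
# Add the official GitLab Runner package repository (Debian/Ubuntu).
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash

# Install the runner package; the same repository serves multiple architectures.
sudo apt-get install gitlab-runner

# Verify the installed version.
gitlab-runner --version
```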
The table below represents the current strategic priorities for Runner Core. This list will change with each monthly revision of this direction page.
|Theme/Category|Item|Why?|Target delivery quarter|
|---|---|---|---|
|Security and Compliance|GitLab Runner Token Architecture Evolution|A GitLab Runner can be described as a worker process that executes the CI/CD pipeline jobs you define for your project. A runner has access to the source code in your project repository, so you must follow best practices for securely running your CI/CD jobs. One of our goals in FY23 is to enhance the security architecture of the runner token mechanism to further reduce security risks and simplify compliance management.|FY23 Q2|
|Secure Software Supply Chain|Add software attestations (metadata) to enable SLSA 2 in GitLab CI (MVC)|"The software of complex systems is often built from many discrete software modules that perform distinct functions. Modern software can be rapidly or even automatically assembled. In this respect, software development increasingly resembles manufacturing processes." Secure software supply chain management, and the related Software Bill of Materials (SBOM), refers to ensuring the security and provenance of everything that goes into the software you build and ultimately deploy to a production environment. Supply chain Levels for Software Artifacts (SLSA) is an emergent security framework for ensuring a secure software supply chain. Enabling the initial goal of supporting SLSA level 2 requires adding capabilities in GitLab Runner to generate provenance, a type of software attestation, during the build and packaging stage of the software supply chain.|FY23 Q2|
|Technical Debt|Burn down of past-due severity 2 bugs|At last check, for Runner Core, we have ~28 severity 2, priority 2 bugs with missed resolution service level objectives in the gitlab-runner project backlog. A critical goal is to resolve all aged severity 2 bugs. The level of investment to resolve some of these bugs will likely be high, especially for those that are complex to reproduce or fix. However, the burden from a technical debt perspective, and the impact on users and customers affected by a specific bug, are too significant, which is why we continue to invest a high percentage of engineering resources in this area.|FY23 Q3|
|Platform Enablement|GitLab Runner autoscaling plugins for public cloud providers|The Next Runner Autoscaling Architecture is the architectural blueprint that serves as the foundation for replacing the Docker Machine-based runner autoscaler for public cloud virtual machines. The goal is to design a new abstraction layer and migrate the current architecture to a plugin model. Once that is complete, we will provide GitLab-maintained plugins for the major cloud providers: AWS, Google Cloud Platform, and Azure.|FY23 Q3|
|Security and Compliance|GitLab Runner Token Architecture Evolution|The GitLab Runner Token Architecture Evolution aims to introduce new runner registration and authentication mechanisms to simplify operational management and automation. Beyond the token architecture changes, the other area of evolution is the runner type model. The current GitLab Runner model, in which a runner is strictly coupled with a type, has worked well. However, several use cases, especially around addressing regulatory and compliance requirements, will require migrating from the concept of runner types to a runner ownership model. Once implemented, such a model will enable customers to quickly implement configurations such as limiting access to a runner to specified users or groups.|FY23 Q4|
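For context on the token architecture rows above, the sketch below shows today's registration-token flow using the documented `gitlab-runner register` command. The URL and token values are illustrative placeholders, and the Token Architecture Evolution is expected to change this mechanism, so treat this as a snapshot of the current behavior rather than the target design.

```shell
# Register a runner non-interactively against a GitLab instance using
# the current registration-token mechanism (placeholder values shown).
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "docker-runner"
```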
Refer to the Runner Core roadmap board for a more in-depth view of prioritized roadmap features.
The summary list below includes a few popular items that we have decided not to prioritize.
|Item|Why not prioritized?|
|---|---|
|Local runner execution MVC|There is significant value to our users if a fully-featured solution can validate that a pipeline is functional without committing the pipeline changes to a GitLab instance. However, while this seems like a simple feature on the surface, implementing CI job debugging in a local runner is quite complex: we would need to duplicate the CI logic handled in the Rails application. Given the level of effort and complexity, other investments offer a higher return for Runner Core. In the meantime, the Verify Pipeline Authoring team is exploring an MVC feature that aims to validate a pipeline's syntax and logic.|
|Sticky Runners MVC|Users need to improve CI job performance in scenarios where each job can generate intermediate build elements hundreds of GBs in size. In the current GitLab CI model, a significant amount of pipeline execution time is spent uploading and downloading intermediate build elements between jobs in a pipeline. Given the current runner executor implementation (several executor types are supported out of the box: shell, docker, Kubernetes), changing the CI job execution paradigm in GitLab is a significant architectural change. One option on the table is to restrict this feature to runners using the shell executor. The Sticky Runners MVC is not prioritized for roadmap delivery due to competing architectural investments in the runner code base.|
Viable - Helm Chart and Operator-based installation capabilities are available, but install and configuration are not yet 100% automated. The transition to an Operator-only install model is planned to complete by GitLab 16.0.
|Amazon Web Services EC2|Google Cloud Compute Engine|Azure Virtual Machines|
|---|---|---|
|Viable - available today, but the foundation is legacy Docker Machine technology.|Viable - available today, but the foundation is legacy Docker Machine technology.|Viable - available today, but the foundation is legacy Docker Machine technology.|
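For reference, the legacy Docker Machine foundation mentioned above is driven by the runner's `config.toml`. A minimal sketch for AWS EC2 follows; the URL, token, and machine options are illustrative placeholders, not recommended production values.

```toml
concurrent = 10

[[runners]]
  name = "aws-autoscaler"                  # illustrative runner name
  url = "https://gitlab.example.com/"      # placeholder instance URL
  token = "RUNNER_TOKEN"                   # placeholder runner token
  executor = "docker+machine"              # the legacy Docker Machine executor
  [runners.docker]
    image = "alpine:latest"
  [runners.machine]
    IdleCount = 2                          # VMs kept warm for incoming jobs
    IdleTime = 1800                        # seconds before an idle VM is removed
    MachineDriver = "amazonec2"
    MachineName = "gitlab-runner-%s"
    MachineOptions = [
      "amazonec2-instance-type=m5.large",
      "amazonec2-region=us-east-1",
    ]
```

The planned autoscaling plugins described in the priorities table are intended to replace this `docker+machine` layer with a per-cloud plugin abstraction.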
Runner Core comprises various components, features, and capabilities. This section aims to clarify the Runner Core architecture direction at a more fine-grained level.
|Component|Direction|
|---|---|
|Shells|The current philosophy behind GitLab CI/CD job execution is that everything is a shell script. The use of shell scripts for job execution has benefits, but there are also significant drawbacks in maintenance costs and complexity, which in some cases have negatively impacted our ability to deliver new features quickly. In the Manager/Taskrunner design issue, which is currently confidential, we are discussing the architectural underpinnings of the runner. The result of those discussions will guide the evolution of the core GitLab Runner CI job execution mechanism.|
|Helm Chart|The Helm Chart has been the traditional method of installing GitLab Runner on Kubernetes. However, with the release of the GitLab Runner Operator and the GitLab Kubernetes agent, we need to carefully define our long-term maintenance and development strategy for the Helm Chart and the Operator. The current thinking is to bring the Operator to full feature parity with the Helm Chart, and then to deprecate the Helm Chart install option for GitLab Runner in 16.0. Follow along with the discussion in this issue.|
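For context on the Helm Chart versus Operator discussion above, a typical chart-based install today looks like the sketch below. The namespace and release names are illustrative, and `values.yaml` must supply at least the instance URL and registration token as described in the chart's documentation.

```shell
# Add the GitLab Helm repository and refresh the local chart index.
helm repo add gitlab https://charts.gitlab.io
helm repo update

# Install the runner chart; values.yaml provides gitlabUrl and the
# registration token (names here are illustrative).
helm install --namespace gitlab-runner --create-namespace \
  gitlab-runner -f values.yaml gitlab/gitlab-runner
```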
In conjunction with the development work required to deliver the strategic priorities listed above, the Runner Core team will devote up to 60% of available developer capacity in each milestone to the categories listed below. Development of features, capabilities, and bug fixes for the Kubernetes executor continues to be a significant investment area through FY23. The continued adoption of Kubernetes in the market, and the demand we are seeing from large customers who self-manage runner fleets, are the key drivers of this investment.
When you run a continuous integration pipeline job or workflow, the code in that pipeline must execute on some computing platform to complete your software's building, testing, and deployment. Terms used to describe the software that handles the pipeline code execution include worker, agent, or runner.
So while the basic functionality of pipeline code execution is table stakes in the industry, the ability to efficiently build software on multiple compute platforms with low operational maintenance overhead is critical to the value proposition for self-managed GitLab.
For customers who need to run CI/CD workloads in environments that they manage (self-managed), GitLab Runner includes a wide array of features and capabilities positioned competitively in the marketplace.
|Solution|CI/CD agent naming convention/brand|Self-managed option availability|Notes|
|---|---|---|---|
|GitHub Actions|Runners|Available|GitHub released self-hosted runners in late 2019. Since then, GitHub has continued to invest in features and capabilities. As GitHub continues to target market segments requiring a self-managed platform, we notice similar themes. For example, on the ease-of-use and enterprise management theme, an improved runner management experience was released in September 2021. For security and compliance, the "Limit self-hosted runners to specific workflows" feature shipped in March 2022.|
|Jenkins|Agent|Available|A Jenkins agent is an executable residing on a node, whether virtual, bare-metal, or a container, that the Jenkins controller tasks to run a job. While installing the Jenkins agent on a target platform does require Java, the agent capability enables distributed builds in Jenkins and is flexible from a deployment standpoint. The Jenkins agent architecture is scalable; however, there will be ongoing maintenance overhead for organizations that self-manage large-scale Jenkins installations.|
|Harness.io|Harness Delegate|Available|Harness currently provides the following types of Delegate: Kubernetes, Shell Script, AWS ECS, Helm, Docker. Though the Delegates perform a similar essential function to GitLab Runner, i.e., executing tasks provided by the Harness Manager, the Delegates' primary purpose is to deploy software to the target platform. In this regard, the value proposition of the GitLab Agent for Kubernetes is a critical consideration when evaluating capabilities in GitLab for frictionless cloud-native deployment.|
|Codefresh|Codefresh Runner|Available*|The Codefresh Runner, which handles getting tasks from the Codefresh SaaS platform and executing them, is available only for Kubernetes.|
|CircleCI|CircleCI Runner|Available|The CircleCI self-hosted runner, released in November 2020, is supported on Linux, Windows, macOS, and Kubernetes but is only available to customers on CircleCI's Scale Plan. In the near term, CircleCI is adding support for additional platforms. Extending platform support is an expected and necessary by-product of targeting customers who cannot run CI/CD workloads on a SaaS solution.|
|Bitbucket|Runners|Available|Users can self-host Bitbucket Runners on Linux x64, Windows 2K19, or macOS Catalina. On Windows and macOS, a prerequisite to using the runner is OpenJDK 11.|
The pace of change and innovation in DevOps is high. New entrants will likely challenge current paradigms and disrupt the market. An example of that is onedev, an open-source project that relies solely on Kubernetes to execute CI jobs, with support for Linux and Windows containers. The long-term potential here is clear. Kubernetes continues to be the leading container orchestration platform. Assuming that continues, and that organizations develop a deep bench of expertise to manage Kubernetes at scale, we can make the following hypothesis: having a CI/CD runner solution that is easy to install, maintain, and operate on Kubernetes, coupled with predictive DevOps capabilities, will be critical to long-term market success.
So, as we head into FY23 and beyond, we will continue to focus on adding key features to Runner Core to maintain our pace of innovation and competitive position.
The near-term features highlighted here represent just a subset of the features and capabilities requested by the community and customers. If you have questions about a specific runner feature request, or have a requirement that's not yet in our backlog, you can provide feedback or open an issue in the GitLab Runner repository.
This direction page was revised on: 2022-05-06