We hear you: Managing cloud accounts is risky, tedious, and time-consuming, but also a must-have in many situations. You might run your Kubernetes clusters with one of the hyperscalers, and your engineers need access to at least the non-production clusters to troubleshoot issues quickly and efficiently. Sometimes, you also need to give engineers special, temporary access to a production cluster.
You have also told us that access requests might not come very often, but when they do, they are urgent, and given the high security requirements around the process, they can take close to a week to fulfill.
By giving access to your cloud infrastructure, you automatically expose yourself to risks. As a result, it's a best practice to restrict each user's access to only the resources they actually need. However, cloud identity and access management (IAM) is complex by nature.
If you are using Kubernetes and you need to give access specifically to your clusters only, GitLab can help. Your users authenticate with the cluster through GitLab, so you can configure Kubernetes role-based access control (RBAC) to restrict their access within the cluster. With GitLab, and specifically the GitLab agent for Kubernetes, you can skip the cloud IAM work entirely and focus only on the RBAC aspect.
What is the GitLab agent for Kubernetes?
The GitLab agent for Kubernetes is a set of GitLab components that maintains a permanent, bi-directional streaming channel between your GitLab instance and your Kubernetes cluster (one agent per cluster). Once the agent connection is configured, you can share it across projects and groups within your GitLab instance, allowing a single agent to serve all the access needs of a cluster.
Currently, the agent has several features to simplify your Kubernetes management tasks:
- Integrates with GitLab CI/CD for push-based deployments or recurring cluster management jobs. The integration exposes a Kubernetes context per available agent in the runner environment, and any tool that accepts a context as input (e.g., kubectl or the helm CLI) can reach your cluster from CI/CD jobs (see the sketch after this list).
- Integrates with the GitLab GUI, specifically the environment pages. Users can configure an environment to show the Kubernetes resources available in a specific namespace, and even set up a Flux resource to track the reconciliation of your applications.
- Lets users connect to the cluster from their local machines through the GitLab-managed channel, without giving them cloud-specific Kubernetes access tokens.
- Supports Flux GitRepository reconciliations by triggering a reconciliation automatically on new commits in repositories the agent can access.
- Runs operational container scans and shows the reports in the GitLab UI.
- Enables you to enrich the remote development offering with workspaces.
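As an illustration of the CI/CD integration, here is a minimal sketch of a `.gitlab-ci.yml` job. It assumes an agent named `dev-cluster`, registered in a `platform-group/clusters-project` project (the example setup used later in this post), that is shared with the project running the pipeline through a `ci_access` configuration; the job name, image, and namespace are made up for this example:

```yaml
check-workloads:
  image:
    name: bitnami/kubectl:latest  # any image that ships kubectl works
    entrypoint: ['']
  script:
    # the integration exposes one context per available agent,
    # named after the agent's configuration project and the agent itself
    - kubectl config use-context platform-group/clusters-project:dev-cluster
    - kubectl get pods --namespace team-a
```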
The agent and access management
The GitLab agent for Kubernetes, which is available for GitLab Ultimate and Premium, impersonates various GitLab-specific users when it acts on behalf of GitLab in the cluster.
- For the GitLab CI/CD integration, the agent impersonates the CI job as the user and enriches that user with group-specific metadata describing the project and the group.
- For the environment and local connections, the agent impersonates the GitLab user using the connection and, similarly to the CI/CD integration, enriches the impersonated Kubernetes user with group-specific metadata, like roles in the configured groups.
As this article is about using the agent instead of cloud accounts for cluster access, let’s focus on the environment and local connections setup.
An example setup
To offer a realistic setup, let’s assume that in our GitLab instance we have the following groups and projects:
```
/app-dev-group/team-a/service-1
/app-dev-group/team-a/service-2
/app-dev-group/team-b/service-3
/platform-group/clusters-project
```
In the above setup, the agents are registered against the `clusters-project` project and, in addition to other code, the project contains the agent configuration files:

```
.gitlab/agents/dev-cluster/config.yaml
.gitlab/agents/prod-cluster/config.yaml
```

The `dev-cluster` and `prod-cluster` directory names are also the agent names, and the registered agents and related events can be seen under the project's "Operations/Kubernetes clusters" menu item. The agent offers some minimal features by default, without a configuration file. To benefit from the user access features and to share the agent connection across projects and groups, a configuration file is required.
Let’s assume that we want to configure the agents in the following way:
- For the development cluster connection:
  - Everyone with at least the developer role in `team-a` should have read-write access to their team-specific namespace `team-a` only.
  - Everyone with the owner role in the `team-a` group should have namespace admin rights on the `team-a` namespace only.
  - Members of `team-b` should not be able to access the cluster.
- For the production cluster connection:
  - Everyone with at least the developer role in `team-a` should have read-only access to their team-specific namespace `team-a` only.
  - Members of `team-b` should not be able to access the cluster.
For the development cluster, the above setup requires an agent configuration file at `.gitlab/agents/dev-cluster/config.yaml` as follows:
```yaml
user_access:
  access_as:
    user: {}
  groups:
    - id: app-dev-group/team-a # group_id=1
    - id: app-dev-group/team-b # group_id=2
```
In this code snippet, we added the group IDs of the specific groups as comments. We will need these IDs in the following Kubernetes RBAC definitions:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-can-edit
  namespace: team-a
roleRef:
  name: edit
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:group_role:1:developer
    kind: Group
```
and...
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-owner-can-admin
  namespace: team-a
roleRef:
  name: admin
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:group_role:1:owner
    kind: Group
```
The above two code snippets can be applied to the cluster with the GitLab Flux integration or manually via `kubectl`. They describe role bindings for the `team-a` group members. It's important to note that only the groups and projects listed in the agent configuration file can be targeted as RBAC groups. Therefore, the following RBAC definition will not work, because the impersonated user's identity doesn't know about the referenced project:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-can-edit
  namespace: team-a
roleRef:
  name: edit
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:project_role:3:developer # app-dev-group/team-a/service-1 project ID is 3
    kind: Group
```
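If you do want to grant access based on a project role, the project itself must be listed in the agent configuration. Here is a minimal sketch of such a configuration, reusing the illustrative project ID from the comment above:

```yaml
user_access:
  access_as:
    user: {}
  projects:
    - id: app-dev-group/team-a/service-1 # project_id=3
  groups:
    - id: app-dev-group/team-a # group_id=1
    - id: app-dev-group/team-b # group_id=2
```

With this configuration, the impersonated users carry the project metadata, and `gitlab:project_role:3:developer` becomes a valid RBAC group target.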
For the production cluster, we need the same agent configuration under `.gitlab/agents/prod-cluster/config.yaml` and the following RBAC definition:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-can-read
  namespace: team-a
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:group_role:1:developer
    kind: Group
```
These configurations allow project owners to set up the environment pages so that members of `team-a` can see the status of their cluster workloads in real time, and members can access the cluster from their local computers using their favorite Kubernetes tools.
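Local access works by pointing your Kubernetes tooling at the GitLab-side component of the agent, which proxies requests into the cluster. The kubeconfig below is a minimal sketch assuming GitLab.com and a personal access token with the `k8s_proxy` scope; the agent ID is shown on the agent's page in the GitLab UI, the token format is our assumption based on the documented setup, and self-managed instances expose their own proxy endpoint, so check the GitLab documentation for the exact values:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: gitlab-dev-cluster
    cluster:
      # Kubernetes API proxy endpoint of the GitLab-side agent component on GitLab.com
      server: https://kas.gitlab.com/k8s-proxy/
contexts:
  - name: gitlab-dev-cluster
    context:
      cluster: gitlab-dev-cluster
      user: gitlab-user
current-context: gitlab-dev-cluster
users:
  - name: gitlab-user
    user:
      # assumed token format: a personal access token prefixed with the agent ID
      token: pat:<agent-id>:<personal-access-token>
```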
Explaining the magic
In the previous section, you learned how to set up role bindings for group members with specific roles. In this section, let's dive into the impersonated user and their attributes.
While Kubernetes does not have User or Group resources, its authentication and authorization scheme behaves as if it did: users have a username, can belong to groups, and can have other extra attributes.
The impersonated GitLab user carries the username `gitlab:username:<username>` in the cluster. For example, if our imaginary user Béla has the GitLab username `bela`, then in the cluster the impersonated user will be called `gitlab:username:bela`. This allows targeting a specific user in the cluster.
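For example, a hypothetical role binding that grants Béla, and only him, read access in the `team-a` namespace could look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bela-can-view
  namespace: team-a
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  # targets the single impersonated user rather than a role-based group
  - name: gitlab:username:bela
    kind: User
```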
Every impersonated user belongs to the `gitlab:user` group. Moreover, for every project and group listed in the agent configuration, we check the current user's role and add it as a group. This is more easily understood through an example, so let's slightly modify the agent configuration we used above.
```yaml
user_access:
  access_as:
    user: {}
  projects:
    - id: platform-group/clusters-project # project_id=1
  groups:
    - id: app-dev-group/team-a # group_id=1
    - id: app-dev-group/team-b # group_id=2
```
For the sake of example, let's assume the contrived setup that our user Béla is a maintainer in the `platform-group/clusters-project` project, a developer in the `app-dev-group/team-a` group, and an owner of the `app-dev-group/team-a/service-1` project. In this case, the impersonated Kubernetes user `gitlab:username:bela` will belong to the following groups:
```
gitlab:user
gitlab:project_role:1:developer
gitlab:project_role:1:maintainer
gitlab:group_role:1:developer
```
What happens is that we check Béla's role in every project and group listed in the agent configuration and add a group for every role Béla has there. As Béla is a maintainer in `platform-group/clusters-project` (project ID 1), we add him to both the `gitlab:project_role:1:developer` and `gitlab:project_role:1:maintainer` groups. Note as well that we did not add any groups for the `app-dev-group/team-a/service-1` project, only for its parent group that appears in the agent configuration.
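The `gitlab:user` group is useful for very broad bindings, too. As an illustration, the hypothetical binding below would let everyone connecting through the agent view resources in a shared namespace (the `shared-tools` namespace is made up for this example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: all-gitlab-users-can-view
  namespace: shared-tools
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  # every user impersonated by the agent belongs to this group
  - name: gitlab:user
    kind: Group
```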
Simplifying cluster management
Setting up the agent and configuring the cluster as shown above is everything you need to model these access requirements in the cluster. You don't have to manage cloud accounts or add in-cluster account management tools like Dex. The agent for Kubernetes and its user impersonation features can simplify your infrastructure management work.
When new people join your company, they get access to the clusters as configured above the moment they become members of the `team-a` group. Similarly, when someone leaves your company, you just have to remove them from the group and their access is disabled. As we mentioned, the agent supports local access to the clusters, too. Because that local access runs through the GitLab-side agent component, it is disabled as well when users are removed from the `team-a` group.
Setting up the agent takes around two to five minutes per cluster. Setting up the required RBAC might take another five minutes. In about 10 minutes, users can get controlled access to a cluster, saving days of work and decreasing the risks associated with cloud accounts.
Get started today
If you want to try this approach and give your colleagues access to some of your clusters without managing cloud accounts, the following documentation pages should help you get started:
- On self-managed GitLab instances, you might need to configure the GitLab-side component of the agent for Kubernetes (called KAS) first.
- You can learn more about all the Kubernetes management features here, or you can immediately dive in by installing an agent and granting users access to Kubernetes.
- You'll likely want to configure a Kubernetes dashboard for your deployed application.
Try simplifying your cloud account management for Kubernetes access today with a free trial of GitLab Ultimate.