
Category Direction - Logging

Stage: Monitor
Maturity: Minimal

Introduction and how you can help

Thanks for visiting this category strategy page on Logging in GitLab. This category belongs to and is maintained by the APM group of the Monitor stage.

This strategy is a work in progress, and everyone can contribute.

The Challenge

A fundamental requirement for running applications is a centralized location to manage and review logs. While manually reviewing logs may work for a single-node application server, once a deployment scales beyond one node you need a solution that can aggregate and centralize logs for review. Given the distributed nature of cloud-native applications, it is critical to collect logs across multiple services and pieces of infrastructure and present them in an aggregated view, so users can quickly search through logs that originate from multiple pods and containers.

Target Audience and Experience

The ability to capture and review logs is an important tool for all users across the DevOps spectrum: from developers who need to troubleshoot their application while it is running in a staging or review environment, to operators who are responsible for keeping production services online.

The target workflow includes a few important use cases:

  1. Aggregating logs from multiple pods and containers from all namespaces
  2. Filtering by host, container, service, timespan, regex, and other criteria. These filtering options should align with the filter options and tags/labels of our other monitoring tools, like metrics.
  3. Creating log-based alerts that trigger under specific, user-defined conditions.
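To make the filtering use case concrete, here is a minimal sketch of the filtering semantics described above (pod, container, timespan, and regex criteria combined with AND logic). This is illustrative only, not GitLab's implementation; the `LogRecord` and `filter_logs` names are hypothetical.

```python
import re
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LogRecord:
    # A single aggregated log line, tagged with the labels used for filtering.
    timestamp: datetime
    host: str
    namespace: str
    pod: str
    container: str
    message: str


def filter_logs(records, *, pod=None, container=None,
                since=None, until=None, pattern=None):
    """Return only the records that match every supplied criterion."""
    regex = re.compile(pattern) if pattern else None
    matched = []
    for record in records:
        if pod and record.pod != pod:
            continue
        if container and record.container != container:
            continue
        if since and record.timestamp < since:
            continue
        if until and record.timestamp > until:
            continue
        if regex and not regex.search(record.message):
            continue
        matched.append(record)
    return matched
```

In a real aggregation backend these filters would be pushed down into the query engine rather than applied in application code, but the matching semantics are the same.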

Log aggregation use case in GitLab

Before the 12.8 release, Monitor stage users could already view pod logs directly within the GitLab UI. However, this was done only through the available Kubernetes APIs, which limits the experience to tailing the logs of one specific pod at a time across environments.

With the 12.8 release, any user can deploy the Elastic Stack, a specific flavor of Elasticsearch, to their Kubernetes cluster with the push of a button (similar to the way we deploy Prometheus). Once deployed, it automatically starts collecting all logs coming from the cluster and applications across the available environments (production, staging, testing, etc.), and they surface in the Log Explorer within the GitLab UI. From there, users can choose whether to view logs from all pods or from an individual pod, conduct a full-text search, or go back in time. In addition, users can navigate directly from a metric chart to the Log Explorer while preserving the context.
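The "full-text search plus time travel" experience maps naturally onto an Elasticsearch bool query that combines a `match` clause with `term` and `range` filters. The sketch below builds such a query body; the field names (`message`, `kubernetes.pod.name`, `@timestamp`) follow common Filebeat-style mappings and are assumptions, not GitLab's exact schema, and `build_log_query` is a hypothetical helper.

```python
def build_log_query(search_text=None, pod=None, start=None, end=None):
    """Build an Elasticsearch query body combining full-text search,
    an optional pod filter, and an optional time range."""
    must, filters = [], []
    if search_text:
        # Full-text search over the log line itself.
        must.append({"match": {"message": search_text}})
    if pod:
        # Exact match on the pod label (view one pod vs. all pods).
        filters.append({"term": {"kubernetes.pod.name": pod}})
    time_range = {}
    if start:
        time_range["gte"] = start
    if end:
        time_range["lte"] = end
    if time_range:
        # "Go back in time" by constraining the timestamp.
        filters.append({"range": {"@timestamp": time_range}})
    return {
        "query": {
            "bool": {
                "must": must or [{"match_all": {}}],
                "filter": filters,
            }
        },
        "sort": [{"@timestamp": "desc"}],
    }
```

The resulting dictionary could be sent as the body of an Elasticsearch `_search` request; filters are kept in the `filter` context so they are cacheable and do not affect relevance scoring.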

What's Next & Why

Enhancing our filtering capabilities by implementing filtered search will allow us to provide a scalable solution that enables our users to search for anything they'd like within their Kubernetes cluster.

Epics

Competitive Landscape

Splunk and Elastic are the top two competitors in this space.