The following page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features or functionality remain at the sole discretion of GitLab Inc.
UPDATE: As of 2020-09-01, the Logging category at GitLab has been deprioritized and we are not actively progressing this category. The vision on this page represents the direction we will pursue if it becomes a priority again in the future.
Thanks for visiting this category strategy page on Logging in GitLab. This category belongs to and is maintained by the Health group of the Monitor stage.
This page is maintained by Kevin Chu, group product manager. You can connect with him via Zoom or email. If you're a GitLab user and have direct knowledge of your Logging usage, we'd especially love to hear about your use case(s).
A fundamental requirement for running applications is having a centralized location to manage and review logs. While manually reviewing logs can work for a single-node app server, once a deployment scales beyond one node you need a solution that can aggregate and centralize logs for review. Given the distributed nature of cloud-native applications, it is critical to collect logs across multiple services and pieces of infrastructure and present them in an aggregated view, so users can quickly search through logs that originate from multiple pods and containers.
Being able to capture and review logs is important for users across the DevOps spectrum: from developers who need to troubleshoot their application while it runs in a staging or review environment, to operators who are responsible for keeping production services online.
The target workflow includes a few important use cases:
Before the 12.8 release, Monitor stage users could already view pod logs directly within the GitLab UI. However, this was done only through the available Kubernetes APIs, which limits the experience to tailing the logs of a specific pod, selected from the available environments.
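For context, the sketch below shows roughly what log access through the Kubernetes APIs amounts to: logs are fetched or tailed one pod at a time, with no aggregation or search across pods. It assumes the official `kubernetes` Python client, and the pod and namespace names are hypothetical.

```python
# Minimal sketch of per-pod log access via the Kubernetes API
# (not GitLab's implementation; pod and namespace names are hypothetical).
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when running in a cluster
core_v1 = client.CoreV1Api()

# Roughly equivalent to `kubectl logs --tail=100 my-app-pod -n review-env`
logs = core_v1.read_namespaced_pod_log(
    name="my-app-pod",             # hypothetical pod name
    namespace="review-env",        # hypothetical environment namespace
    tail_lines=100,
)
print(logs)
```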
With the 12.8 release, any user can deploy the Elastic stack - a specific flavor of Elasticsearch - to their Kubernetes cluster with the push of a button (similar to the way we deploy Prometheus). Once deployed, it automatically starts collecting all logs coming from the cluster and applications across the available environments (production, staging, testing, etc.) and surfaces them in the Log explorer within the GitLab UI. There, the user can choose to view logs from all pods or from individual pods, conduct a full-text search, or go back in time. In addition, users can navigate directly from a metric chart to the Log explorer while preserving context.
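As an illustration of what full-text search over aggregated logs can look like, here is a minimal sketch that assumes logs shipped by Filebeat into Elasticsearch. The endpoint, index pattern, and field names are assumptions based on common Filebeat conventions, not GitLab's actual implementation.

```python
# Sketch of a full-text search across all pods' logs stored in Elasticsearch.
# Index pattern and field names follow common Filebeat conventions (assumptions).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.internal:9200")  # hypothetical endpoint

response = es.search(
    index="filebeat-*",                      # assumed Filebeat index pattern
    body={
        "query": {
            "bool": {
                "must": [{"match": {"message": "connection refused"}}],  # full-text search
                "filter": [
                    {"term": {"kubernetes.namespace": "production"}},    # assumed field name
                    {"range": {"@timestamp": {"gte": "now-1h"}}},        # "go back in time"
                ],
            }
        },
        "sort": [{"@timestamp": "desc"}],
    },
)

for hit in response["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("kubernetes", {}).get("pod", {}).get("name"), src.get("message"))
```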
Filtered search - Enhancing our filtering capabilities by implementing filtered search will allow us to provide a scalable solution that enables our users to search for anything they'd like within their Kubernetes cluster.
Show terminated pod logs - Once filtered search is in place, we will provide a curated experience for our users to surface terminated pod logs, as sketched below.
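The reason aggregation makes terminated pod logs recoverable is that log lines persist in Elasticsearch rather than living only in the pod, so a filtered query by pod name and time range can still return them after the pod is gone. The sketch below reuses the assumed Filebeat indices and field names from the previous example; the pod name is hypothetical.

```python
# Sketch: retrieving logs of a pod that has already terminated,
# by filtering the persisted log documents instead of reading from the live pod.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.internal:9200")  # hypothetical endpoint

response = es.search(
    index="filebeat-*",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"kubernetes.pod.name": "worker-7f9c-crashed"}},  # hypothetical pod
                    {"range": {"@timestamp": {"gte": "now-24h"}}},
                ]
            }
        },
        "sort": [{"@timestamp": "asc"}],
        "size": 500,
    },
)
```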
GitLab's log aggregation solution works only when an environment is defined on a managed Kubernetes cluster (into which Elasticsearch and Filebeat are deployed). Surfacing logs from an external Elasticsearch instance in the GitLab UI is out of scope for now.
Splunk and Elastic are the top two competitors in this space.