Published on: December 16, 2019

We're moving our observability suite to Core

Our gift to you for 2020: metrics, logging, tracing, and alerting are coming soon to Core!


Happy New Year to our developer community! We're moving a big portion of our observability features – custom metrics, logging, tracing, and alerting – from our proprietary codebase to our open source codebase in 2020. We aim to complete this migration by early next year, and you can follow along with our progress in this Epic. While we're giving you the gift of 20/20 vision into your production environment as a thank you for all you've contributed, there are also three practical reasons why we're moving our observability suite to Core.

Why we're moving observability to Core

It's part of our stewardship model

The first reason is our stewardship mandate. Our product is open-core and our pricing model is transparent and buyer-based. A buyer-based pricing model means we think about which type of buyer will get the most value out of a feature when determining whether it belongs in our open source Core product or in our paid versions of GitLab.

"If it's a feature for a single developer who might be working on his or her own individual project, we want that to be in Core because it invites more usage of those tools and we get great feedback in the form of new feature requests and developer contributions," says Kenny Johnston, director of product, Ops at GitLab. "It's an important part of our product philosophy to ensure we keep developer focused features in our Core product."

Observability belongs in Core

Our mission is to provide an end-to-end DevOps solution for developers that is also open source, and we were falling a bit short on the Ops side of things by keeping essential observability tools in a proprietary codebase.

"Before this move, If you were using Gitlab's open source version, you could attach a Kubernetes cluster and deploy applications to it, but then your ability to observe how your users are interacting with it in production was limited," says Kenny. "Now, you can get out-of-the-box metrics, create customized ones, get access to log tailing and searching and see traces – all within GitLab. Those were all non-existent in Core previously."

The fact is, the three pillars of observability – custom metrics, logging, and tracing – along with alerting, are fundamental to the complete DevOps lifecycle, even for single developers working on their own projects. That means they belong in our Core product.

We want your input on monitoring

The third reason is that we value your contributions. We're hoping that by making our observability tools open source, you'll make valuable improvements to the code so that other developers can benefit from your insight. This is the gift you offer us every day, and now we have a wish list for you.

The three pillars of observability are on our wish list

Custom metrics

GitLab has a strong integration with Prometheus that allows users like you to monitor key metrics for applications deployed on Kubernetes – or tracked by your own Prometheus server – without ever leaving our interface. Commonly reported metrics include system metrics such as memory consumption, as well as error and latency rates. GitLab automatically detects certain metrics from our metrics library, and you can customize these metrics based on your needs.
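To make "customize" concrete, here's a minimal sketch of a custom metrics dashboard defined in a project's repository. The file name, panel titles, and PromQL query are illustrative examples, not defaults; the exact schema is documented in GitLab's metrics documentation and may evolve as these features move to Core.

```yaml
# .gitlab/dashboards/app-performance.yml -- illustrative file name.
# A minimal custom metrics dashboard sketch; panel and metric names
# are examples, so consult the GitLab docs for the current schema.
dashboard: 'Application performance'
panel_groups:
  - group: 'HTTP'
    panels:
      - type: area-chart
        title: 'Requests per second'
        y_label: 'req/sec'
        metrics:
          - id: http_requests_rate          # illustrative metric ID
            query_range: 'rate(http_requests_total[5m])'  # standard PromQL
            label: 'Method: {{method}}'
            unit: 'req/sec'
```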

But there is always room for improvement. If you see something that needs improvement in metrics – or in any of our observability features – please open an issue, or go a step further and contribute the change yourself in a merge request against our open source codebase.

Logging

You can see logs of running pods on your Kubernetes clusters without the hassle of toggling between applications, since logging is integrated within GitLab. But our current logging capabilities are best described as log tailing: users see what is essentially a live stream of their logs within GitLab. Is log tailing providing enough observability into the health of your deployed Kubernetes clusters? We're hoping you can help us find new ways to make our logging tools more valuable.

"I would love to have more insight into how users want to interact with [logging], if log tailing is sufficient, how much they want to move back and forth," says Kenny. "Some of those contributions can come in the form of commentary or issues being created, but people could also take it upon themselves to adjust that view so that is better suited to their needs when tailing a log."

Tracing and alerting

While there are certain metrics that are commonly reported about a deployed application – such as how much CPU is being consumed or how quickly requests are processed – tracing allows you to monitor deployed applications in more depth and be alerted to any issues with the performance or health of the application. But, like logging, our tracing and alerting capabilities are in their earliest stages.

"Today, our tracing is fairly minimal," says Kenny. "We have an embedded UI for Jaeger, but we'd love to see contribution from members of the Jaeger community for more deep integration into GitLab. Maybe developers and operators who use GitLab would like to see more of the Jaeger UI experience directly in GitLab."

Our alerting capabilities are also a bit clunky: alerts have to be defined directly in the UI as well as in code configuration. By uniting our Jaeger tracing integration with our alerting capabilities, we could create a more cohesive user experience.
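For context on what the code-configuration side of alerting looks like today, GitLab's Prometheus integration ultimately drives standard Prometheus alerting rules. Below is a minimal hand-written rule file in the upstream Prometheus format; the metric name and the 5% threshold are illustrative, not GitLab defaults.

```yaml
# prometheus-alerts.yml -- a standard Prometheus alerting rule file.
# The metric name and 5% threshold are illustrative examples.
groups:
  - name: app-alerts
    rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests return a 5xx status
        # sustained over ten minutes.
        expr: >
          sum(rate(http_requests_total{status=~"5.."}[5m]))
          / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: 'More than 5% of requests are failing'
```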

Closing the DevOps loop

In order for GitLab to function as an end-to-end DevOps solution, our users must be able to apply our ticketing system all the way from the initial issue to production.

"I'm really interested in the use case where people are creating issues for alerting when something goes wrong with their production environments, and then how they interact with observability information in the incident management issue itself," explains Kenny.

Perhaps you need an issue template for incidents that will show a particular log line. Or there might be a custom metric that is so commonly used, it ought to be added to our metrics library.
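On that last point, the metrics library itself is just YAML shipped inside the GitLab codebase (at the time of writing, `config/prometheus/common_metrics.yml`), so proposing a commonly used metric can be a very small merge request. The sketch below shows roughly what a new entry might look like; the group name, priority, and query are illustrative, and the real schema in the repository is the source of truth.

```yaml
# A sketch of a new entry for GitLab's common metrics library
# (config/prometheus/common_metrics.yml at the time of writing).
# Group name, priority, and query are illustrative examples.
- group: Response metrics (custom)
  priority: 5
  metrics:
    - title: '5xx error rate'
      y_label: 'Errors / sec'
      required_metrics:
        - http_requests_total
      weight: 1
      queries:
        - query_range: 'sum(rate(http_requests_total{status=~"5.."}[5m]))'
          label: '5xx errors'
          unit: 'errors / sec'
```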

"If you don't like the way that your alerting is set up, or you don't like the way that your log system is aggregated we'd love your contributions. If you don't like how metric charts, logs or traces are displayed in fire-fighting issues we'd love your contributions. GitLab is open source. You can contribute improvements to your observability tool just like you can the rest of your developer platform," says Kenny.

So go for it!

The three pillars of observability on GitLab are ripe for iteration, and there is still so much creative potential for each of these tools. We look forward to seeing what you come up with in 2020!

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
