This document explains what Kibana is, how to search it and interpret its results, and offers tips and tricks for extracting specific information from it.
Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Support Engineering uses Kibana both to search for error events on GitLab.com and to detect when, and by which user, specific changes were made to it.
Note: Kibana does not retain logs older than 7 days.
Knowing where to search in Kibana is paramount to getting the proper results. The area of the application (GitLab.com) that you're searching is known as the Index in Kibana. The default index used for searching is pubsub-rails-inf-gprd-*, but you can change this by clicking the (change) button:

Indexes generally correlate with our log structure. Some other frequently used indexes are:
- pubsub-gitaly-inf-gprd-*
- pubsub-pages-inf-gprd-*
- pubsub-runner-inf-gprd-*

For example, if you're trying to track down failed logins, you would search the index pubsub-rails-inf-gprd-*. To search for 500 errors involving a controller, you'd also search pubsub-rails-inf-gprd-*, the default index.
Along with the index, it's important to know when the error or event you're trying to track down occurred, and to keep in mind that Kibana logs on GitLab.com persist for seven days. Kibana allows you to choose relative and absolute time ranges for search results, which can be changed by manipulating the date range:

Note: As of March 2022, using Filters is the recommended way for searching Kibana. Using the general search box is discouraged as it may generate several errors in the form of X of Y Shards Failed. See Infra Issue.
Each log entry comprises a number of Fields in which the specific information about the entry is displayed. Knowing which fields to apply to your search results and how to filter on them is just as important as knowing where and when to search. The most important fields are:
- json.method
- json.action
- json.controller - for more on this, see the controller definitions in the GitLab source code
- json.status
- json.path

All available fields are displayed in the menu along the left-hand side, and you can add them to your search results by hovering over each and clicking the add button.

If you don't filter on specific fields, it can be difficult to find the log entries you need when a search query returns a large number of results.
For example, suppose we're trying to locate any log events generated by the GitLab.com user tristan that returned a 404 status code within the last 15 minutes. We can start by searching the pubsub-rails-inf-gprd-* index for json.username : tristan within that time range, and we'd get results similar to the following once we click add next to the json.status field along the left-hand sidebar:

The majority of results are entries that returned 200, which aren't in the scope of what we're looking for. To find results where 404 was returned, we can click + Add Filter underneath the search bar and place a positive filter on json.status is 404, which will give us these results, exactly what we're looking for.
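For reference, the same search could be expressed as a single KQL query in the search bar, though per the note above, using Filters is the preferred approach:

```
json.username : "tristan" and json.status : 404
```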

This section details how you can find very specific pieces of information in Kibana by searching for and filtering out specific fields. Each tip includes a link to the group, subgroup, or project that was used in the example for reference.
Example group: gitlab-bronze
We can determine if the GitLab Runner registration token was reset for a group or project and see which user reset it and when. For this example the Runner registration token was reset at the group-level in gitlab-bronze. To find the log entry:
1. Set the date range to when the token was reset, or Last 7 days if you're unsure.
2. Filter json.path for the path of the group, which is just gitlab-bronze in this example.
3. Filter json.action for reset_registration_token.
4. Observe the json.username field of the result; this is the user who reset the token.
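As a rough KQL equivalent of these filters, using the example group above (the exact json.path value may differ depending on the request, so treat this as a sketch):

```
json.path : "gitlab-bronze" and json.action : "reset_registration_token"
```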
Kibana can be used to determine who triggered the deletion of a group, subgroup, or project on GitLab.com. To find the log entry:

1. Set the date range to when the deletion occurred, or Last 7 days if you're unsure.
2. Filter json.path for the path of the project, including the group and subgroup, if applicable. This is gitlab-silver/test-project-to-delete in this example.
3. Filter json.method for DELETE.
4. Observe the json.username field of the result; this is the user who triggered the deletion.
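The equivalent filters, sketched as a single KQL query using this example's project path:

```
json.path : "gitlab-silver/test-project-to-delete" and json.method : "DELETE"
```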
In some cases a Let's Encrypt certificate will fail to be issued for one or more reasons. To determine the exact reason, we can look this up in Kibana:

1. Filter json.message for "Failed to obtain Let's Encrypt certificate".
2. Filter json.pages_domain for "sll-error.shushlin.dev".
3. Observe the json.acme_error.detail field, which should display the relevant error message. In this case, we can see the error "No valid IP addresses found for sll-error.shushlin.dev".
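Sketched as a single KQL query for this example's domain:

```
json.message : "Failed to obtain Let's Encrypt certificate" and json.pages_domain : "sll-error.shushlin.dev"
```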
Kibana can be used to find out if and when an account on GitLab.com was deleted, if it occurred within the last seven days at the time of searching. First, set the date range to when the deletion occurred, or Last 7 days if you're unsure.

If the account was self-deleted, try searching with these filters:
- json.username for the username of the user, if you have it.
- json.controller for RegistrationsController.

If the account was deleted by an admin, try searching with these filters:
- json.params.value for the username of the user.
- json.method for DELETE.

Observe the results. There should be only one result if the account that was filtered for was deleted within the specified timeframe.
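As a sketch, the two cases above could be expressed as KQL queries, where some-username is a placeholder for the user you're looking up. For a self-deleted account:

```
json.username : "some-username" and json.controller : "RegistrationsController"
```

For an account deleted by an admin:

```
json.params.value : "some-username" and json.method : "DELETE"
```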
Kibana can be used to find out which admin disabled 2FA on a GitLab.com account. To see the log entries:
1. Set the date range to when 2FA was disabled, or Last 7 days if you're unsure.
2. Filter json.username for the admin's username, if known.
3. Filter json.action for disable_two_factor.
4. Observe the json.location and json.username fields of the results.
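Sketched as KQL, where admin-username is a placeholder for the admin's username, if known:

```
json.username : "admin-username" and json.action : "disable_two_factor"
```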
Kibana can be used to find out if and when SSO Enforcement was enabled or disabled on a group on GitLab.com, and which user did so. To find the log entries:

1. Set the date range to when the change occurred, or Last 7 days if you're unsure.
2. Filter json.controller for Groups::SamlProvidersController.
3. Filter json.action for update.
4. Filter json.method for PATCH.
5. Filter json.path for the path of the group, gitlab-silver in this example case.
6. Observe the json.params field. If "enforced_sso"=>"1" is present, that entry was logged when SSO Enforcement was enabled by the user in the json.username field. If "enforced_sso"=>"0" is present, that entry was logged when SSO Enforcement was disabled by the user in the json.username field.
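The filter set above, sketched as a single KQL query for this example's group (then check the json.params field of the results for the enforced_sso value):

```
json.controller : "Groups::SamlProvidersController" and json.action : "update" and json.method : "PATCH" and json.path : "gitlab-silver"
```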
Kibana can be used to determine whether a container registry tag was deleted, when, and who triggered it, if the deletion happened in the last 7 days. To find the log entry in pubsub-rails-inf-gprd-*:
1. Set the date range to when the deletion occurred, or Last 7 days if you're unsure.
2. Filter json.graphql.variables for *ContainerRepository/1842896.*.
3. Filter json.graphql.operation_name for destroyContainerRepositoryTags.
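Sketched as KQL using this example's repository ID (the wildcard mirrors the variables filter above):

```
json.graphql.variables : *ContainerRepository/1842896* and json.graphql.operation_name : "destroyContainerRepositoryTags"
```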
As of 14.7, a Correlation ID is provided on the 500 error page when using the interface. You can ask the customer to supply this and use Kibana to filter by json.correlation_id.

Kibana is not typically used to locate 5XX errors, but there are times when they can't easily be found in Sentry and searching Kibana first is beneficial. To perform a general search in Kibana:
Search for the gitlab-ee project and any mention of error using the following query:

```
"gitlab-ee" AND "error"
```
It's recommended to apply a Negative Filter to the gitlab_error.log and gitlab_access.log log files. These two generate a large amount of noise and may not be relevant to your search.
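As a sketch, a similar exclusion can be expressed in KQL. Note that json.subcomponent is an assumption here about which field records the source log file; check the field list in your index before relying on it:

```
not json.subcomponent : ("gitlab_error" or "gitlab_access")
```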
See the 500 errors workflow for more information on searching and finding errors on GitLab.com.
Not sure what to look for? Consider using a Self-Managed instance to replicate the bug/action you're investigating. This will allow you to confirm whether or not an issue is specific to GitLab.com, while also providing easily accessible logs to reference while searching through Kibana.
Support Engineers looking to configure a Self-Managed instance should review our Sandbox Cloud page for a list of available (company provided) hosting options.
Customers will sometimes give us an IP Range of their resources such as their Kubernetes cluster or other external servers that may need to access GitLab. You can search a range by using Elasticsearch Query DSL.
+ Add Filter > Edit as Query DSL

```
{
  "query": {
    "term": {
      "json.remote_ip": {
        "value": "192.168.0.1/20"
      }
    }
  }
}
```
Note that depending on the range, this operation may be expensive, so it's best to narrow down your date range first.
Most timeout-related imports end up as a partial import with very few or zero issues or merge requests. Where the difference is relatively small (10% or less), there are most likely errors with those specific issues or merge requests.
Anytime there is an error, ensure that the export originated from a compatible version of GitLab.
Here are some tips for searching for import errors in Kibana:
In Kibana, search for:

- path/to/project
- INFO)
- done)
- RepositoryImportWorker
- Projects::ImportsController with error status

In Sentry, search for:

- path/to/project
- Projects::ImportService::Error; make sure to remove the is:unresolved filter.

If there is an error, search for an existing issue. If the metadata is throwing an error and no issue exists, consider creating one from Sentry.
If no error is found and the import is partial, it is most likely a timeout issue.
If you found a Correlation ID that is relevant to your troubleshooting, you can use the correlation dashboard to quickly view all related components across indices without having to search each individual index. You can get to the correlation dashboard by navigating to Analytics > Dashboard in Kibana and typing in correlation dashboard. Once you view the dashboard, simply click on the json.correlation_id filter and enter the correlation ID you found. It will then search across the web, workhorse, sidekiq, and gitaly indices.
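For reference, the dashboard's filter amounts to a query like the following, where the correlation ID shown is a made-up placeholder:

```
json.correlation_id : "01FXYZEXAMPLE"
```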