GitLab Workhorse
Description
The Splunk Distribution of OpenTelemetry Collector provides this integration as the gitlab-workhorse
monitor type by using the Smart Agent Receiver.
This monitor is for GitLab Workhorse, the GitLab service that handles slow HTTP requests. Workhorse includes a built-in Prometheus exporter that this monitor scrapes to gather metrics. By default, the exporter runs on port 9229.
This monitor is available on Kubernetes, Linux, and Windows.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this Smart Agent monitor with the Collector, include the smartagent
receiver and service pipeline in your configuration file. The Smart Agent receiver is fully supported only on x86_64/amd64 platforms.
Read more in Use Smart Agent monitors with the Collector.
Learn about config options in Collector default configuration.
See the examples below for more details.
receivers:
  smartagent/gitlab-workhorse:
    type: gitlab-workhorse
    ...  # Additional config
To monitor Workhorse using the Prometheus exporter, use a configuration similar to the following:
receivers:
  smartagent/gitlab-workhorse:
    type: gitlab-workhorse
    # Add other expressions to the rule to avoid false positives on the port alone.
    discoveryRule: port == 9229
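Because the port check alone can match unrelated services, the discovery rule can combine several expressions. The following is a sketch only; the container_image pattern assumes Workhorse runs in a container whose image name contains gitlab-workhorse:

receivers:
  smartagent/gitlab-workhorse:
    type: gitlab-workhorse
    # Match both the container image and the default exporter port so that
    # other services listening on 9229 are not picked up.
    discoveryRule: container_image =~ "gitlab-workhorse" && port == 9229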
To complete the integration, include the Smart Agent receiver using this monitor in a metrics
pipeline. To do this, add the receiver to the service > pipelines > metrics > receivers
section of your configuration file. For example:
service:
  pipelines:
    metrics:
      receivers: [smartagent/gitlab-workhorse]
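For reference, the receiver and the pipeline entry live in the same Collector configuration file. The following is a minimal end-to-end sketch; the signalfx exporter settings, the SPLUNK_ACCESS_TOKEN environment variable, and the host and port values are illustrative assumptions, not requirements of this monitor:

receivers:
  smartagent/gitlab-workhorse:
    type: gitlab-workhorse
    host: localhost   # assumed: exporter reachable on the local host
    port: 9229        # default Workhorse exporter port
exporters:
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}   # assumed environment variable
    realm: us0                             # assumed realm
service:
  pipelines:
    metrics:
      receivers: [smartagent/gitlab-workhorse]
      exporters: [signalfx]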
Configuration settings
The following table shows the configuration options for this monitor:
Option | Required | Type | Description
---|---|---|---
httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. This should be a duration string that is accepted by ParseDuration. The default value is 10s.
username | no | string | Basic Auth username to use on each request, if any.
password | no | string | Basic Auth password to use on each request, if any.
useHTTPS | no | bool | If true, the agent connects to the server using HTTPS instead of plain HTTP. The default value is false.
httpHeaders | no | map of strings | A map of HTTP header names to values. Comma-separated multiple values for the same message-header are supported.
skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter's TLS cert is not verified. The default value is false.
sniServerName | no | string | If useHTTPS is true and skipVerify is true, this value is used to verify the hostname on the returned certificates. It is also included in the client's handshake to support virtual hosting, unless it is an IP address.
caCertPath | no | string | Path to the CA cert that has signed the TLS cert, unnecessary if skipVerify is set to false.
clientCertPath | no | string | Path to the client TLS cert to use for TLS required connections.
clientKeyPath | no | string | Path to the client TLS key to use for TLS required connections.
host | yes | string | Host of the exporter.
port | yes | integer | Port of the exporter.
useServiceAccount | no | bool | Use the pod service account to authenticate. The default value is false.
metricPath | no | string | Path to the metrics endpoint on the exporter server, usually /metrics (the default). The default value is /metrics.
sendAllMetrics | no | bool | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the Prometheus exporter monitor directly, since there is no built-in filtering; it only applies when the exporter is embedded in other monitors. The default value is false.
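To show how the options in this table fit into the receiver configuration, the following snippet combines a few of them. It is a sketch only; the host name, certificate path, and timeout are placeholder assumptions:

receivers:
  smartagent/gitlab-workhorse:
    type: gitlab-workhorse
    host: gitlab.example.internal              # assumed host running the exporter
    port: 9229                                 # default Workhorse exporter port
    metricPath: /metrics
    useHTTPS: true
    skipVerify: false
    caCertPath: /etc/ssl/certs/gitlab-ca.pem   # assumed certificate path
    httpTimeout: 20s                           # accepts Go duration strings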
Metrics
These are the metrics available for this integration:
Get help
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Support resource | Availability
---|---
Submit a case in the Splunk Support Portal | Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers | Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To learn how to join, see Get Started with Splunk Community - Chat groups | Available to Splunk Observability Cloud customers and free trial users
To learn about even more support options, see Splunk Customer Success.