systemd 🔗
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the collectd/systemd
monitor type to collect metrics about the state of configured systemd services.
This integration is available on Kubernetes and Linux.
Benefits 🔗
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata catalog.
Requirements 🔗
This integration reads the status of systemd services from /var/run/dbus/system_bus_socket. You must mount this host location into the container in which the Collector or the Smart Agent is running, and the agent container must also run in privileged mode. The following example shows an excerpt of the docker run command:
docker run ...\
--privileged \
-v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro \
...
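If you run the Collector as a Kubernetes DaemonSet, the same requirement applies to the agent pod. The following is a minimal sketch of the relevant pod template fields, assuming hypothetical container and volume names; adapt it to your own deployment or Helm chart values:
# Hypothetical excerpt of the Collector agent pod template;
# only the fields relevant to this monitor are shown.
spec:
  containers:
    - name: otel-collector          # assumed container name
      securityContext:
        privileged: true            # the agent container must run privileged
      volumeMounts:
        - name: system-bus-socket
          mountPath: /var/run/dbus/system_bus_socket
          readOnly: true
  volumes:
    - name: system-bus-socket
      hostPath:
        path: /var/run/dbus/system_bus_socket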
Installation 🔗
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration 🔗
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
Read more on how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
Learn about config options in Collector default configuration.
Example 🔗
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/systemd]
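For context, a working configuration also needs an exporter in the same pipeline. The following is a minimal end-to-end sketch, assuming the signalfx exporter from the Collector default configuration and placeholder values for the access token and realm:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    services:
      - docker

exporters:
  # Placeholder token and realm; substitute your own values.
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "us0"

service:
  pipelines:
    metrics:
      receivers: [smartagent/systemd]
      exporters: [signalfx]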
Advanced examples 🔗
The following is an excerpt of a YAML configuration for monitoring the state of the docker and ubuntu-fan services:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    intervalSeconds: 10
    services:
      - docker
      - ubuntu-fan
By default, the only metric reported is gauge.substate.running, which indicates whether a service is running. Configure additional metrics using the sendActiveState, sendSubState, and sendLoadState configuration flags, as shown in the following example:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    intervalSeconds: 10
    services:
      - docker
      - ubuntu-fan
    sendActiveState: true
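To report all three groups of state metrics at once, set all three flags together, as in this sketch:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    intervalSeconds: 10
    services:
      - docker
      - ubuntu-fan
    sendActiveState: true   # active-state metrics
    sendSubState: true      # more detailed substate metrics
    sendLoadState: true     # load-state metrics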
Configuration settings 🔗
The following table shows the configuration options for this monitor:
Option | Required | Type | Description
---|---|---|---
services | Yes | list of strings | Services to report on.
sendActiveState | No | bool | Flag for sending metrics about the state of systemd services. The default value is false.
sendSubState | No | bool | Flag for sending more detailed metrics about the state of systemd services. The default value is false.
sendLoadState | No | bool | Flag for sending metrics about the load state of systemd services. The default value is false.
A service is in the state that a metric represents if the metric value is 1, and not in that state if the metric value is 0. The integration assigns the name of each monitored service to the systemd_service dimension.
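For example, you can filter on this dimension at the monitor level. The following sketch uses the Smart Agent datapointsToExclude option, which this page does not otherwise cover, to drop all datapoints for one of the monitored services; treat it as an illustration rather than required configuration:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    services:
      - docker
      - ubuntu-fan
    # Hypothetical filter: exclude datapoints whose
    # systemd_service dimension is ubuntu-fan.
    datapointsToExclude:
      - dimensions:
          systemd_service: ubuntu-fan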
Metrics 🔗
The following metrics are available for this integration:
Notes 🔗
To learn more about the metric types available in Observability Cloud, see Metric types.
In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
In MTS-based subscription plans, all metrics are custom.
To add additional metrics, see how to configure extraMetrics in Add additional metrics.
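As a sketch, the following configuration turns on extra metrics for this monitor. The metric-name glob is an assumption for illustration, based on the gauge.substate.running metric mentioned earlier; match it to the actual metric names listed for this monitor:
receivers:
  smartagent/systemd:
    type: collectd/systemd
    services:
      - docker
    # Assumed metric-name glob for illustration only.
    extraMetrics:
      - gauge.substate.*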
Troubleshooting 🔗
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Submit a case in the Splunk Support Portal.
Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers.
Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide.
Available to Splunk Observability Cloud customers and free trial users
To learn how to join, see Get Started with Splunk Community - Chat groups.
To learn about even more support options, see Splunk Customer Success.