Exec Input
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the Exec Input monitor type, an embedded form of the Telegraf Exec plugin, to receive metrics or logs from the output of commands that the monitor runs.
Benefits
After you configure the integration, you can access these features:

- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:

1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
2. Configure the monitor, as described in the Configuration section.
3. Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:

1. Include the Smart Agent receiver in your configuration file.
2. Add the monitor type to the Collector configuration, both in the receivers and pipelines sections.

See also:

- Read more on how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- Learn about config options in Collector default configuration.
Example
To activate this integration, add the following to your Collector configuration:

```yaml
receivers:
  smartagent/exec:
    type: telegraf/exec
    ... # Additional config
```
Next, include the monitor in a logs pipeline that uses an exporter that makes the event submission requests. Use a Resource Detection processor to ensure that host identity and other useful information is made available as event dimensions. For example:

```yaml
service:
  pipelines:
    logs:
      receivers:
        - smartagent/exec
      processors:
        - resourcedetection
      exporters:
        - signalfx
```
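The `... # Additional config` placeholder stands for the monitor's own settings. As a hedged sketch only, assuming the embedded Telegraf Exec plugin's `commands`, `timeout`, and `telegrafParser` option names carry through to the Smart Agent receiver, a fuller receiver entry might look like this (the script path is hypothetical):

```yaml
receivers:
  smartagent/exec:
    type: telegraf/exec
    # Assumed option names based on the embedded Telegraf Exec plugin;
    # verify them against your Collector version before use.
    commands:
      - /opt/scripts/emit_metrics.sh  # hypothetical script that prints metrics
    timeout: 5s
    telegrafParser:
      dataFormat: influx              # parse the script output as influx line protocol
```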
Configuration settings
The following tables show the configuration options for this monitor type:

| Option | Required | Type | Description |
| --- | --- | --- | --- |
|  | no |  |  |
|  | no |  |  |
|  | no |  | The default value is |
|  | no |  |  |
| `signalFxCumulativeCounters` | no |  | A list of metric names typed as "cumulative counters" in Observability Cloud. The Telegraf Exec plugin only emits |
The nested `telegrafParser` configuration object has the following fields:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
| `dataFormat` | no |  | `dataFormat` specifies a data format to parse: |
| `defaultTags` | no |  | `defaultTags` are tags added to all metrics. ( |
| `metricName` | no |  | `metricName` applies to ( |
| `dataType` | no |  | `dataType` specifies the value type to parse the value to: |
|  | no |  | A list of tag names to fetch from JSON data. ( |
|  | no |  | A list of fields in JSON to extract and use as string fields. (json only) |
|  | no |  | A path used to extract the metric name in JSON data. ( |
|  | no |  | A gjson path for the JSON parser. ( |
|  | no |  | The name of the timestamp key. ( |
|  | no |  | Specifies the timestamp format. ( |
|  | no |  | Separator for Graphite data. ( |
|  | no |  | A list of templates for Graphite data. ( |
|  | no |  | The path to the collectd authentication file. ( |
|  | no |  | Specifies the security level: |
|  | no |  | A list of paths to collectd TypesDB files. ( |
|  | no |  | Indicates whether to separate or join multivalue metrics. ( |
|  | no |  | An optional gjson path used to locate a metric registry inside of JSON data. The default behavior is to consider the entire JSON document. ( |
|  | no |  | An optional gjson path used to identify the Dropwizard metric timestamp. ( |
|  | no |  | The format used for parsing the Dropwizard metric timestamp. The default format is |
|  | no |  | An optional gjson path used to locate Dropwizard tags. ( |
|  | no |  | A map of gjson tag names and gjson paths used to extract tag values from the JSON document. This is only used if |
|  | no |  | A list of patterns to match. ( |
|  | no |  | A list of named grok patterns to match. ( |
|  | no |  | Custom grok patterns. ( |
|  | no |  | A list of paths to custom grok pattern files. ( |
|  | no |  | Specifies the timezone. The default is UTC time. Other options are |
|  | no |  | The delimiter used between fields in the CSV. ( |
|  | no |  | The character used to mark rows as comments. ( |
|  | no |  | Indicates whether to trim leading whitespace from fields. ( |
|  | no |  | A list of custom column names. All columns must have names; unnamed columns are ignored. This configuration must be set when |
|  | no |  | A list of types to assign to columns. Acceptable values are |
|  | no |  | A list of columns added as tags. Unspecified columns are added as fields. ( |
|  | no |  | The name of the column to extract the metric name from. ( |
|  | no |  | The name of the column to extract the metric timestamp from. |
|  | no |  | The format to use for extracting timestamps. ( |
|  | no |  | The number of rows that are headers. By default, no rows are treated as headers. ( |
|  | no |  | The number of rows to ignore before looking for headers. ( |
|  | no |  | The number of columns to ignore before parsing data on a given row. ( |
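To illustrate the nested parser object, here is a hedged sketch of a `telegrafParser` block for JSON output. The `dataFormat` field is named in the table above and `JSONTagKeys` is referenced in the Metrics section of this page; the tag names shown are illustrative assumptions, not required values:

```yaml
receivers:
  smartagent/exec:
    type: telegraf/exec
    # ... command settings omitted
    telegrafParser:
      dataFormat: json      # parse the script output as JSON
      # The tag names below are illustrative assumptions.
      JSONTagKeys:
        - region
        - signalfx_type
```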
Metrics
The agent does not do any built-in filtering of metrics coming out of this monitor.
By default, all metrics are emitted as gauges. If you have cumulative counter metrics that you want properly typed in Splunk Observability Cloud, use one of the following options:
- Set the configuration option `signalFxCumulativeCounters` to the list of metric names to be considered as counters. Note that these names are the full names that are sent to Observability Cloud (for example, `<metric>.<field>`).
- Set a tag named `signalfx_type` on the metric emitted by the exec script to `cumulative`. All other values are ignored. Note that you must allow this tag value through in your parser configuration if the parser ignores certain fields. For example, the JSON parser requires adding `signalfx_type` to the `JSONTagKeys` configuration option.
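As a sketch of the first option, assuming a script whose output produces a metric named `requests.count` (a hypothetical `<metric>.<field>` name), the receiver entry might set `signalFxCumulativeCounters` like this:

```yaml
receivers:
  smartagent/exec:
    type: telegraf/exec
    # ... command and parser settings omitted
    signalFxCumulativeCounters:
      - requests.count  # hypothetical metric name to type as a cumulative counter
```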
Troubleshooting
If you are not able to see your data in Splunk Observability Cloud, try these tips:

- Submit a case in the Splunk Support Portal. Available to Splunk Observability Cloud customers.
- Ask a question and get answers through community support at Splunk Answers. Available to Splunk Observability Cloud customers and free trial users.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. Available to Splunk Observability Cloud customers and free trial users. To learn how to join, see Get Started with Splunk Community - Chat groups.

To learn about even more support options, see Splunk Customer Success.