Logstash
Description
The Splunk Distribution of OpenTelemetry Collector provides these integrations as the logstash and logstash-tcp monitor types using the Smart Agent Receiver.
The logstash monitor
The logstash monitor tracks the health and performance of Logstash deployments through the Logstash Monitoring APIs.
The logstash-tcp monitor
The logstash-tcp monitor fetches events from the Logstash tcp output plugin, operating in either server or client mode, and converts them to data points. The logstash-tcp monitor is meant to be used in conjunction with the Logstash Metrics filter plugin, which turns events into metrics.
You can only use autodiscovery when this monitor is in client mode.
Installation
These monitors are available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector.
To install these integrations:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitors, as described in the next section.
Configuration
The Splunk Distribution of OpenTelemetry Collector allows embedding a Smart Agent monitor configuration in an associated Smart Agent Receiver instance.
The logstash monitor
Note: To use this integration, you must include a logstash monitor entry in your Collector or Smart Agent (deprecated) configuration. Use the appropriate form for your agent type.
Splunk Distribution of OpenTelemetry Collector
To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your agent configuration:
receivers:
  smartagent/logstash:
    type: logstash
    ...  # Additional config
To complete the monitor activation, you must also include the smartagent/logstash receiver item in a metrics pipeline. To do this, add the receiver item to the service > pipelines > metrics > receivers section of your configuration file. For example:
service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash]
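Putting the two fragments together, a minimal end-to-end Collector configuration might look like the following sketch. The signalfx exporter and the host and port values are illustrative assumptions, not required settings:

```yaml
receivers:
  smartagent/logstash:
    type: logstash
    host: 127.0.0.1   # assumed Logstash monitoring API host
    port: 9600        # assumed Logstash monitoring API port

exporters:
  signalfx:           # assumed exporter; substitute your own
    access_token: ${SFX_TOKEN}
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash]
      exporters: [signalfx]
```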
Smart Agent
To activate this monitor in the Smart Agent, add the following to your agent configuration:
monitors:  # All monitor config goes under this key
  - type: logstash
    ...  # Additional config
See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.
Configuration settings
The following table shows the configuration options for this monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| `host` | no | `string` | The hostname of the Logstash monitoring API (default: `127.0.0.1`) |
| `port` | no | `integer` | The port number of the Logstash monitoring API (default: `9600`) |
| `useHTTPS` | no | `bool` | If `true`, the monitor connects to the Logstash monitoring API over HTTPS instead of plain HTTP (default: `false`) |
| `timeoutSeconds` | no | `integer` | The maximum amount of time to wait for API requests (default: `5`) |
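As an illustration, a logstash monitor entry with these options set explicitly might look like the following sketch. The values shown mirror the defaults listed above and are assumptions, not requirements:

```yaml
monitors:
  - type: logstash
    host: 127.0.0.1     # Logstash monitoring API host
    port: 9600          # Logstash monitoring API port
    useHTTPS: false     # plain HTTP
    timeoutSeconds: 5   # API request timeout
```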
The logstash-tcp monitor
Note: To use this integration, you must include a logstash-tcp monitor entry in your Smart Agent or Collector configuration. Use the appropriate form for your agent type.
To activate this monitor in the Smart Agent, add the following to your agent configuration:
monitors:  # All monitor config goes under this key
  - type: logstash-tcp
    ...  # Additional config
See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.
To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your agent configuration and include the smartagent/logstash-tcp receiver item in the metrics pipeline:
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    ...  # Additional config
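As with the logstash monitor, the receiver must then be referenced in the service > pipelines > metrics > receivers section of the configuration; for example:

```yaml
service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash-tcp]
```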
The following table shows the configuration options for this monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| `host` | yes | `string` | If `mode: server`, the local IP address to listen on; if `mode: client`, the hostname or IP address of the Logstash host to connect to |
| `port` | no | `integer` | If `mode: server`, the local port to listen on; if `mode: client`, the port of the Logstash TCP output plugin (default: `0`) |
| `mode` | no | `string` | Whether to act as a `server` or `client`. The corresponding setting in the Logstash tcp output plugin should be set to the opposite of this (default: `client`) |
| `desiredTimerFields` | no | `list of strings` | The fields of timer metrics to report (default: `[mean, max, p99, count]`) |
| `reconnectDelay` | no | `integer` | How long to wait before reconnecting if the TCP connection cannot be made or after it gets broken (default: `5`) |
| `debugEvents` | no | `bool` | If `true`, events received from Logstash are dumped to the agent's stdout in deserialized form (default: `false`) |
The following is a contrived example configuration that shows the use of both timer and meter metrics from the Logstash Metrics filter plugin:
input {
  file {
    path => "/var/log/auth.log"
    start_position => "beginning"
    tags => ["auth_log"]
  }
  # A contrived file that contains timing messages
  file {
    path => "/var/log/durations.log"
    tags => ["duration_log"]
    start_position => "beginning"
  }
}

filter {
  if "duration_log" in [tags] {
    dissect {
      mapping => {
        "message" => "Processing took %{duration} seconds"
      }
      convert_datatype => {
        "duration" => "float"
      }
    }
    if "_dissectfailure" not in [tags] {  # Filter out bad events
      metrics {
        timer => { "process_time" => "%{duration}" }
        flush_interval => 10
        # This makes the timing stats pertain to only the previous 5 minutes
        # instead of since Logstash last started.
        clear_interval => 300
        add_field => {"type" => "processing"}
        add_tag => "metric"
      }
    }
  }
  # Count the number of logins using SSH from /var/log/auth.log
  if "auth_log" in [tags] and [message] =~ /sshd.*session opened/ {
    metrics {
      # This determines how often metric events will be sent to the agent, and
      # thus how often data points will be emitted.
      flush_interval => 10
      # The name of the meter will be used to construct the name of the metric
      # in Splunk Infrastructure Monitoring. For this example, a data point
      # called `logins.count` would be generated.
      meter => "logins"
      add_tag => "metric"
    }
  }
}

output {
  # This can be helpful to debug
  stdout { codec => rubydebug }
  if "metric" in [tags] {
    tcp {
      port => 8900
      # The agent will connect to Logstash
      mode => "server"
      # Needs to be "0.0.0.0" if running in a container.
      host => "127.0.0.1"
    }
  }
}
Once Logstash is configured as shown above, the logstash-tcp monitor collects logins.count and process_time.<timer_field>.
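To consume these metrics, a smartagent/logstash-tcp receiver could be pointed at the Logstash tcp output shown above. This is a sketch: mode client on the monitor side pairs with mode server on the Logstash side, and the host and port values simply mirror the example:

```yaml
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    mode: client      # connect to Logstash, which runs its tcp output in "server" mode
    host: 127.0.0.1   # matches the host in the Logstash tcp output
    port: 8900        # matches the port in the Logstash tcp output
```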
Metrics
The following metrics are available for this integration:
Get help
If you are not able to see your data in Splunk Observability Cloud, try these tips:
- Submit a case in the Splunk Support Portal. Available to Splunk Observability Cloud customers.
- Ask a question and get answers through community support at Splunk Answers. Available to Splunk Observability Cloud customers and free trial users.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. Available to Splunk Observability Cloud customers and free trial users. To learn how to join, see Get Started with Splunk Community - Chat groups.
To learn about even more support options, see Splunk Customer Success.