Logstash

Description

The Splunk Distribution of OpenTelemetry Collector provides these integrations as the logstash and logstash-tcp monitor types using the Smart Agent Receiver.

The logstash monitor

The logstash monitor monitors the health and performance of Logstash deployments through Logstash Monitoring APIs.

The logstash-tcp monitor

The logstash-tcp monitor fetches events from the Logstash tcp output plugin, operating in either server or client mode, and converts them to data points. It is meant to be used in conjunction with the Logstash Metrics filter plugin, which turns events into metrics.

You can only use autodiscovery when this monitor is in client mode.

Installation

These monitors are available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector.

To install these integrations:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitors, as described in the next section.

Configuration

The Splunk Distribution of OpenTelemetry Collector allows embedding a Smart Agent monitor configuration in an associated Smart Agent Receiver instance.

The logstash monitor

Note: To use this monitor, you must include a logstash monitor entry in your Collector or Smart Agent (deprecated) configuration. Use the form appropriate for your agent type.

Splunk Distribution of OpenTelemetry Collector

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your agent configuration:

receivers:
  smartagent/logstash:
    type: logstash
    ...  # Additional config

To complete the monitor activation, you must also include the smartagent/logstash receiver item in a metrics pipeline. To do this, add the receiver item to the service > pipelines > metrics > receivers section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash]

Smart Agent

To activate this monitor in the Smart Agent, add the following to your agent configuration:

monitors:  # All monitor config goes under this key
  - type: logstash
    ...  # Additional config

See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.

Configuration settings

The following table shows the configuration options for this monitor:

| Option | Required | Type | Description |
| --- | --- | --- | --- |
| host | no | string | The hostname of the Logstash monitoring API (default: 127.0.0.1) |
| port | no | integer | The port number of the Logstash monitoring API (default: 9600) |
| useHTTPS | no | bool | If true, the agent connects to the host using HTTPS instead of plain HTTP. (default: false) |
| timeoutSeconds | no | integer | The maximum amount of time, in seconds, to wait for API requests (default: 5) |
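
For example, a receiver entry that sets several of these options might look like the following sketch. The values shown are illustrative; point host and port at your Logstash monitoring API endpoint.

receivers:
  smartagent/logstash:
    type: logstash
    host: 127.0.0.1       # Hostname of the Logstash monitoring API
    port: 9600            # Port of the Logstash monitoring API
    useHTTPS: false       # Set to true to connect over HTTPS instead of plain HTTP
    timeoutSeconds: 5     # Maximum time, in seconds, to wait for API requests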

The logstash-tcp monitor

Note: To use this monitor, you must include a logstash-tcp monitor entry in your Smart Agent or Collector configuration. Use the form appropriate for your agent type.

To activate this monitor in the Smart Agent, add the following to your agent configuration:

monitors:  # All monitor config goes under this key
  - type: logstash-tcp
    ...  # Additional config

See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your agent configuration and include the smartagent/logstash-tcp receiver item in the metrics pipeline:

receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    ...  # Additional config
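
As with the logstash monitor, complete the activation by adding the receiver to the service > pipelines > metrics > receivers section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash-tcp]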

The following table shows the configuration options for this monitor:

| Option | Required | Type | Description |
| --- | --- | --- | --- |
| host | yes | string | If mode: server, the local IP address to listen on. If mode: client, the Logstash host/IP to connect to. |
| port | no | integer | If mode: server, the local port to listen on. If mode: client, the port of the Logstash TCP output plugin. If port is 0, a random listening port is assigned by the kernel. (default: 0) |
| mode | no | string | Whether to act as a server or client. The corresponding setting in the Logstash tcp output plugin should be set to the opposite of this. (default: client) |
| desiredTimerFields | no | list of strings | (default: [mean, max, p99, count]) |
| reconnectDelay | no | integer | How long to wait before reconnecting if the TCP connection cannot be made or after it gets broken. (default: 5s) |
| debugEvents | no | bool | If true, events received from Logstash are dumped to the agent's stdout in deserialized form (default: false) |
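
For example, a receiver entry that makes the agent listen for connections from Logstash (server mode) might look like the following sketch. The address and port are illustrative, and the corresponding tcp output plugin in Logstash would then use mode => "client".

receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    mode: server                 # Listen for connections from the Logstash tcp output plugin
    host: 0.0.0.0                # Local IP address to listen on
    port: 8900                   # Local port to listen on
    desiredTimerFields: [mean, max, p99, count]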

The following is a contrived example Logstash configuration that shows the use of both timer and meter metrics from the Logstash Metrics filter plugin:

input {
  file {
    path => "/var/log/auth.log"
    start_position => "beginning"
    tags => ["auth_log"]
  }

  # A contrived file that contains timing messages
  file {
    path => "/var/log/durations.log"
    tags => ["duration_log"]
    start_position => "beginning"
  }
}

filter {
  if "duration_log" in [tags] {
    dissect {
      mapping => {
        "message" => "Processing took %{duration} seconds"
      }
      convert_datatype => {
        "duration" => "float"
      }
    }
    if "_dissectfailure" not in [tags] { # Filter out bad events
      metrics {
        timer => { "process_time" => "%{duration}" }
        flush_interval => 10
        # This makes the timing stats pertain to only the previous 5 minutes
        # instead of since Logstash last started.
        clear_interval => 300
        add_field => {"type" => "processing"}
        add_tag => "metric"
      }
    }
  }
  # Count the number of logins using SSH from /var/log/auth.log
  if "auth_log" in [tags] and [message] =~ /sshd.*session opened/ {
    metrics {
      # This determines how often metric events will be sent to the agent, and
      # thus how often data points will be emitted.
      flush_interval => 10
      # The name of the meter will be used to construct the name of the metric
      # in Splunk Infrastructure Monitoring. For this example, a data point called `logins.count` would
      # be generated.
      meter => "logins"
      add_tag => "metric"
    }
  }
}

output {
  # This can be helpful to debug
  stdout { codec => rubydebug }

  if "metric" in [tags] {
    tcp {
      port => 8900
      # The agent will connect to Logstash
      mode => "server"
      # Needs to be "0.0.0.0" if running in a container.
      host => "127.0.0.1"
    }
  }
}

Once Logstash is configured as shown above, the logstash-tcp monitor collects the logins.count and process_time.<timer_field> metrics.
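
Because the tcp output plugin in this example runs in server mode, the matching receiver runs in the default client mode and connects to Logstash. A minimal sketch of that receiver entry, assuming the agent runs on the same host as Logstash, might look like this:

receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    mode: client        # Connect to the Logstash tcp output plugin, which acts as the server
    host: 127.0.0.1     # Host where the Logstash tcp output listens
    port: 8900          # Port configured in the tcp output plugin above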

Metrics

The following metrics are available for this integration:

Get help

If you are not able to see your data in Splunk Observability Cloud, try these tips:

To learn about even more support options, see Splunk Customer Success.