JMX
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the jmx
monitor type to run an arbitrary Groovy script to convert JMX MBeans fetched from a remote Java application to SignalFx data points. This is a more flexible alternative to the GenericJMX monitor.
You can use the following utility helpers in the Groovy script through the util
variable, which is set in the script's context:
util.queryJMX(String objectName)
: Queries the pre-configured JMX application for the given objectName, which can include wildcards. In any case, the return value is a List of zero or more GroovyMBean objects. GroovyMBean is a convenience wrapper that Groovy provides to make accessing attributes on the MBean simple. See http://groovy-lang.org/jmx.html for more information about the GroovyMBean object. You can call the Groovy first() method on the returned list to access the first MBean if you are only expecting one.
util.makeGauge(String name, double val, Map<String, String> dimensions)
: A convenience function that creates a SignalFx gauge data point. This creates a DataPoint instance that can be fed to output.sendDatapoint[s]. It does not send the data point; it only creates it.
util.makeCumulative(String name, double val, Map<String, String> dimensions)
: A convenience function that creates a SignalFx cumulative counter data point. This creates a DataPoint instance that can be fed to output.sendDatapoint[s]. It does not send the data point; it only creates it.
The output
instance available in the script context is used to send data to Observability Cloud. It contains the following methods:
output.sendDatapoint(DataPoint dp)
: Emits the given data point to SignalFx. Use the util.make[Gauge|Cumulative] helpers to create the DataPoint instance.
output.sendDatapoints(List<DataPoint> dp)
: Emits the given data points to SignalFx. Use the util.make[Gauge|Cumulative] helpers to create the DataPoint instances. It's slightly more efficient to send multiple data points at once, but this matters only when you're sending very high volumes of data.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
Read more on how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
Learn about config options in Collector default configuration.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
smartagent/jmx:
type: jmx
... # Additional config
Next, add the monitor to the service > pipelines > metrics > receivers
section of your configuration file:
service:
pipelines:
metrics:
receivers: [smartagent/jmx]
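For reference, a fuller receiver configuration with an inline script might look like the following. The host, port, metric name, and script body are placeholders for your environment:

```yaml
receivers:
  smartagent/jmx:
    type: jmx
    host: localhost   # placeholder: your JMX host
    port: 7199        # placeholder: your JMX port
    groovyScript: |
      // Illustrative script: emit JVM heap usage from the standard Memory MBean.
      def mem = util.queryJMX("java.lang:type=Memory").first()
      output.sendDatapoint(
        util.makeGauge("jvm.heap.used", mem.HeapMemoryUsage.get("used"), [:]))
```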
Configuration settings
The following table shows the configuration options for this integration:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
| `host` | no | `string` | Host will be filled in by auto-discovery if this monitor has a discovery rule. |
| `port` | no | `integer` | Port will be filled in by auto-discovery if this monitor has a discovery rule. (default: `0`) |
| `serviceURL` | no | `string` | The service URL for the JMX RMI/JMXMP endpoint. If empty, it will be filled in with values from `host` and `port`. |
| `groovyScript` | yes | `string` | A literal Groovy script that generates data points from JMX MBeans. See the top-level description of this monitor for details on writing the script. |
| `username` | no | `string` | Username for JMX authentication, if applicable. |
| `password` | no | `string` | Password for JMX authentication, if applicable. |
| `keyStorePath` | no | `string` | The key store path. Required if client authentication is activated on the target JVM. |
| `keyStorePassword` | no | `string` | The key store file password, if required. |
| `keyStoreType` | no | `string` | The key store type. (default: `jks`) |
| `trustStorePath` | no | `string` | The trust store path, if the TLS profile is required. |
| `trustStorePassword` | no | `string` | The trust store file password, if required. |
| `jmxRemoteProfiles` | no | `string` | Supported JMX remote profiles are TLS in combination with SASL profiles: SASL/PLAIN, SASL/DIGEST-MD5, and SASL/CRAM-MD5. Thus valid values are: `SASL/PLAIN`, `SASL/DIGEST-MD5`, `SASL/CRAM-MD5`, `TLS SASL/PLAIN`, `TLS SASL/DIGEST-MD5`, and `TLS SASL/CRAM-MD5`. |
| `realm` | no | `string` | The realm, required by the SASL/DIGEST-MD5 profile. |
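As an illustration of the TLS and SASL options above, a connection using the TLS SASL/PLAIN remote profile could be configured along these lines. All host names, credentials, and paths shown are placeholders:

```yaml
receivers:
  smartagent/jmx:
    type: jmx
    serviceURL: service:jmx:rmi:///jndi/rmi://jmx.example.com:7199/jmxrmi  # placeholder
    jmxRemoteProfiles: TLS SASL/PLAIN
    username: monitorUser                         # placeholder
    password: secret                              # placeholder
    trustStorePath: /etc/ssl/jmx-truststore.jks   # placeholder
    trustStorePassword: secret                    # placeholder
    groovyScript: |
      // Your metric-generating script goes here.
```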
The following is an example Groovy script that replicates some of the data presented by the Cassandra nodetool status
utility:
// Query the JMX endpoint for a single MBean.
ss = util.queryJMX("org.apache.cassandra.db:type=StorageService").first()
// Copied and modified from https://github.com/apache/cassandra
def parseFileSize(String value) {
if (!value.matches("\\d+(\\.\\d+)? (GiB|KiB|MiB|TiB|bytes)")) {
throw new IllegalArgumentException(
String.format("value %s is not a valid human-readable file size", value));
}
if (value.endsWith(" TiB")) {
return Math.round(Double.valueOf(value.replace(" TiB", "")) * 1e12);
}
else if (value.endsWith(" GiB")) {
return Math.round(Double.valueOf(value.replace(" GiB", "")) * 1e9);
}
else if (value.endsWith(" KiB")) {
return Math.round(Double.valueOf(value.replace(" KiB", "")) * 1e3);
}
else if (value.endsWith(" MiB")) {
return Math.round(Double.valueOf(value.replace(" MiB", "")) * 1e6);
}
else if (value.endsWith(" bytes")) {
return Math.round(Double.valueOf(value.replace(" bytes", "")));
}
else {
throw new IllegalStateException(String.format("FileUtils.parseFileSize() reached an illegal state parsing %s", value));
}
}
localEndpoint = ss.HostIdToEndpoint.get(ss.LocalHostId)
dims = [host_id: ss.LocalHostId, cluster_name: ss.ClusterName]
output.sendDatapoints([
// Equivalent of "Up/Down" in the `nodetool status` output.
// 1 = Live; 0 = Dead; -1 = Unknown
util.makeGauge(
"cassandra.status",
ss.LiveNodes.contains(localEndpoint) ? 1 : (ss.DeadNodes.contains(localEndpoint) ? 0 : -1),
dims),
util.makeGauge(
"cassandra.state",
ss.JoiningNodes.contains(localEndpoint) ? 3 : (ss.LeavingNodes.contains(localEndpoint) ? 2 : 1),
dims),
util.makeGauge(
"cassandra.load",
parseFileSize(ss.LoadString),
dims),
util.makeGauge(
"cassandra.ownership",
ss.Ownership.get(InetAddress.getByName(localEndpoint)),
dims)
])
Make sure that your script is carefully tested before using it to monitor a production JMX service. In general, scripts should only read attributes, but nothing enforces that.
Metrics
There are no metrics available for this integration.
Troubleshooting
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Submit a case in the Splunk Support Portal
Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers
Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
Available to Splunk Observability Cloud customers and free trial users
To learn how to join, see Get Started with Splunk Community - Chat groups
To learn about even more support options, see Splunk Customer Success.