
Deploy a SignalFx Smart Agent for Splunk APM

Important

The original µAPM product, released in 2019, is now called µAPM Previous Generation (µAPM PG). Wherever you see a reference to µAPM, it refers to the product released on March 31, 2020.

If you’re using µAPM Previous Generation (µAPM PG), see Overview of SignalFx Microservices APM Previous Generation (µAPM PG).

The recommended deployment model for Splunk APM involves running the SignalFx Smart Agent on each host that runs traced applications. The traced applications on that host should report their trace spans to the Smart Agent, which will forward them to either the OpenTelemetry Collector or the APM ingest endpoint.

Sending trace spans through the Smart Agent gives the Smart Agent an opportunity to add tags to the spans that identify the host on which they were generated (for example, the host tag containing the hostname, and cloud-provider-specific tags such as AWSUniqueId when running on EC2). While you could add these as global tags in the tracer configuration, letting the Smart Agent add them keeps the span tags consistent with the host metadata attached to metrics.

Important

APM requires Smart Agent version 5.0.4 or higher. If you do not already have this version or later running on your target hosts, see Smart Agent Quick Install to install the agent on those hosts using the method of your choice, then continue with the steps below.

Once you have the agent running on your hosts, set a few configuration options in your agent.yaml config file to enable trace forwarding, as shown below. This example assumes you are collecting traces only. If you also want to collect metrics for Splunk Infrastructure Monitoring, see Use the Smart Agent for additional configuration information.

Note about realms

A realm is a self-contained deployment of Splunk Infrastructure Monitoring or APM in which your organization is hosted. Different realms have different API endpoints. For example, the endpoint for sending data in the us1 realm is https://ingest.us1.signalfx.com, while the endpoint for sending data in the eu0 realm is https://ingest.eu0.signalfx.com.

Where you see a placeholder that refers to realms, such as <REALM> or <YOUR_REALM>, replace it with the actual name of your realm. You can find this name on your profile page in the user interface. If you don’t include the realm name when specifying an endpoint, Infrastructure Monitoring defaults to the us0 realm.

# Optional: Your realm
# Default: us0
#signalFxRealm: "${SFX_REALM}"

# Required: Parameter must exist, but can be empty
signalFxAccessToken: "${SFX_TOKEN}"

# Optional: Where to send dimension updates
#apiUrl: "http://ot-collector:6060"

# Required: Where to send trace data
# Default: <ingest URL>/v1/trace (note: the path must be changed to /v2/trace)
#traceEndpointUrl: "http://ot-collector:7276/v2/trace"
traceEndpointUrl: "https://ingest.${SFX_REALM}.signalfx.com/v2/trace"

# Optional: The host's hostname
# Default: Reverse lookup of the machine's IP address
#hostname: ${SFX_HOSTNAME}

# If you are only collecting traces then you should
# disable all metric collection capabilities
collectd:
  disableCollectd: true
  configDir: /tmp/collectd

# Required: What data to accept
# Note: On Docker/Kubernetes you may need to use 0.0.0.0 instead of 127.0.0.1
monitors:
  # If using auto instrumentation with default settings
  - type: signalfx-forwarder
    listenAddress: 0.0.0.0:9080
    # Used to add a tag to spans that are missing it
    defaultSpanTags:
      # Set the environment filter to monitor each environment separately.
      # The environment span tag also enables host correlation with
      # default Infrastructure Monitoring dashboards.
      environment: "${SFX_ENVIRONMENT}"
    # Used to add or override a tag on a span
    #extraSpanTags:
    #  SPAN_TAG_KEY: "SPAN_TAG_VALUE"

# Required: What format to send data in
writer:
  traceExportFormat: sapm

Once the agent is configured, configure your application’s tracer to send spans to the listenAddress specified in the signalfx-forwarder monitor configuration. The signalfx-forwarder monitor accepts spans in a variety of formats. To configure your tracer, see Instrument applications for Splunk APM. Every instrumentation library defaults to sending to a locally running Smart Agent listening for trace spans on port 9080.
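
For example, if the traced application and the Smart Agent run in separate containers, the tracer cannot rely on the localhost default and has to be pointed at the agent's listenAddress explicitly. The following docker-compose sketch illustrates this wiring; the service names, the image tag placeholder, and the SIGNALFX_ENDPOINT_URL variable name are assumptions for illustration only, so check your instrumentation library's documentation for the exact endpoint setting.

# Sketch only: service names, image tag, and the SIGNALFX_ENDPOINT_URL
# variable name are illustrative, not definitive.
version: "3"
services:
  smart-agent:
    image: quay.io/signalfx/signalfx-agent:<VERSION>
    volumes:
      # The agent reads its configuration from /etc/signalfx/agent.yaml by default
      - ./agent.yaml:/etc/signalfx/agent.yaml:ro
    environment:
      SFX_TOKEN: "${SFX_TOKEN}"
      SFX_REALM: "${SFX_REALM}"
      SFX_ENVIRONMENT: "${SFX_ENVIRONMENT}"
  myapp:
    image: my-image
    environment:
      # Point the tracer at the signalfx-forwarder listenAddress (port 9080);
      # the exact variable name and path depend on your instrumentation library.
      SIGNALFX_ENDPOINT_URL: "http://smart-agent:9080/v1/trace"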

In this minimal configuration, the impact of the Smart Agent on the host system is small. Its CPU utilization is directly related to the volume of trace spans passing through it; it can comfortably handle over 100,000 spans per second. Its memory utilization should remain low during normal operation, but it may increase if the configured destination is unavailable and the Smart Agent needs to buffer trace spans. If the downstream trace endpoint is slow, the Smart Agent buffers up to 100,000 spans by default before dropping data to prevent unbounded memory consumption.

Deploy the Smart Agent in Kubernetes

When deploying the Smart Agent in a Kubernetes cluster, you have two options:

  • Deploy as a DaemonSet via Helm. Follow the standard installation instructions, but be aware that changes to the default values.yaml are required. These changes include making sure traceEndpointUrl is set correctly and explicitly setting the agentConfig (see the YAML example above and the sketch following this list).
  • Deploy as a DaemonSet manually. Follow the advanced installation instructions, but be aware that changes to remove metric collection are required. The configmap can be constructed based on the YAML example above.
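
For the Helm route, the overrides could look something like the following values.yaml sketch. The traceEndpointUrl and agentConfig keys come from the bullet above, but the exact structure of the chart's values varies by chart version, so verify this against the chart's own default values.yaml rather than treating it as a definitive configuration.

# Hypothetical values.yaml overrides for a traces-only deployment; verify
# key names and structure against the chart version you are using.
signalFxAccessToken: "YOUR_ACCESS_TOKEN"
traceEndpointUrl: "https://ingest.YOUR_REALM.signalfx.com/v2/trace"
agentConfig:
  # Mirror the traces-only agent.yaml example above
  collectd:
    disableCollectd: true
    configDir: /tmp/collectd
  monitors:
    - type: signalfx-forwarder
      listenAddress: 0.0.0.0:9080
      defaultSpanTags:
        environment: "YOUR_ENVIRONMENT"
  writer:
    traceExportFormat: sapm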

In addition, the Kubernetes Downward API provides a means of dynamically setting an environment variable in your pod. Traced applications can use this variable to locate the Smart Agent running on the node. For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - env:
        - name: "SIGNALFX_AGENT_HOST"
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: my-image
        name: myapp

You will have to configure your tracer to make use of that SIGNALFX_AGENT_HOST environment variable (see the documentation for each language and tracer).
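
How the tracer consumes SIGNALFX_AGENT_HOST varies by library. One approach, sketched below under the assumption that your library accepts a full endpoint URL through an environment variable (SIGNALFX_ENDPOINT_URL and the /v1/trace path are used here only as example values), is to compose the URL directly in the pod spec using Kubernetes dependent environment variables:

        env:
        - name: "SIGNALFX_AGENT_HOST"
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        # Kubernetes expands $(SIGNALFX_AGENT_HOST) because the variable is
        # defined earlier in the same env list. The variable name and the
        # /v1/trace path below are examples; verify them for your library.
        - name: "SIGNALFX_ENDPOINT_URL"
          value: "http://$(SIGNALFX_AGENT_HOST):9080/v1/trace"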