
First steps to set up Splunk APM 🔗

Important

The original µAPM product, released in 2019, is now called µAPM Previous Generation (µAPM PG). Wherever you see a reference to µAPM, it now refers to the product released on March 31, 2020.

If you’re using µAPM Previous Generation (µAPM PG), see Overview of SignalFx Microservices APM Previous Generation (µAPM PG).

Configure your environment for Splunk APM and instrument your applications to export trace data to APM. Use trace data to analyze application performance, alert on metrics generated from trace data, and correlate trace data with logs and other resources.

The following steps describe how to export trace data to APM and analyze application performance.

1. Prepare your environment 🔗

Deploy the SignalFx Smart Agent on each instance that hosts an application you want to instrument. Use the Smart Agent to include additional context in trace data and enable infrastructure correlation. Add context to trace data by including tags on individual spans in a trace. You can search for these span tags and break down performance by them.
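For example, if your application is instrumented with the OpenTelemetry API (covered in the next step), span tags correspond to span attributes. The following minimal sketch assumes an application that is already instrumented with the OpenTelemetry Python API; the tag names and the order object are hypothetical placeholders for values meaningful to your own services.

```python
# Minimal sketch: adding span tags to the active span so you can search for them
# and break down performance by them in APM. Assumes the application is already
# instrumented with the OpenTelemetry Python API; the tag names are hypothetical.
from opentelemetry import trace

def process_order(order):
    span = trace.get_current_span()
    # These tags travel with the span and become searchable attributes in APM.
    span.set_attribute("order.payment_method", order.payment_method)
    span.set_attribute("order.item_count", len(order.items))
    # ... business logic ...
```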

APM provides special functionality for the environment span tag. When you set the environment span tag, you can filter services by environment from the service menu bar throughout the application, which makes it easy to monitor multiple environments separately. If you add the environment span tag in your Smart Agent configuration or at the instrumentation level, APM also correlates services with default Infrastructure Monitoring dashboards. If you set the environment span tag in an OpenTelemetry Collector configuration instead, you can still filter services to monitor each environment separately, but this doesn’t enable host correlation for Infrastructure Monitoring dashboards.

Configure the Smart Agent to send trace data directly to an ingest endpoint or to an OpenTelemetry Collector. If you deploy an OpenTelemetry Collector, you can centrally manage trace data from multiple Smart Agents: include span tags, modify or obfuscate trace attributes, and monitor trace data network volume.

To learn about APM deployment architectures, see Splunk APM architecture overview.

To configure the Smart Agent for APM, see Deploy a SignalFx Smart Agent for Splunk APM.

To configure an OpenTelemetry Collector for APM, see Deploy an OpenTelemetry Collector for Splunk APM.

2. Instrument applications 🔗

Use an instrumentation library to automatically instrument code and deploy an OpenTracing-compatible tracer to capture and report trace data. You can automatically instrument many popular libraries and frameworks for these languages:

You can also manually instrument code and deploy an OpenTelemetry tracer to capture and report trace spans. For more information, see the OpenTelemetry API on GitHub.
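As a rough sketch of what manual instrumentation can look like with the OpenTelemetry Python SDK: the example below assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed and that a Collector is listening for OTLP gRPC at the endpoint shown. Your deployment may use a different exporter, endpoint, or span names; treat all of them as placeholders.

```python
# Minimal sketch of manual instrumentation with the OpenTelemetry Python SDK.
# Assumes an OTLP gRPC receiver (for example, an OpenTelemetry Collector) at the
# endpoint shown; adjust the exporter and endpoint for your own deployment.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_request(request):
    # Each span captures one unit of work; nested spans form the trace.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("http.method", request.method)
        with tracer.start_as_current_span("query_inventory"):
            pass  # ... call the database ...
```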

APM also supports receiving traces from these sources:

If you don’t set the environment span tag in your Smart Agent or OpenTelemetry Collector configuration, set this span tag when you’re configuring instrumentation for your application. Setting this span tag at the instrumentation level enables you to monitor each environment separately in APM, and to populate default dashboards for Infrastructure Monitoring.
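One way to set the environment span tag at the instrumentation level with the OpenTelemetry Python SDK is to attach it as a resource attribute, so every span the service emits carries it. The attribute key and values below are assumptions; confirm the exact tag name your APM configuration expects.

```python
# Minimal sketch: setting the environment span tag at the instrumentation level
# by attaching it as a resource attribute. The attribute key ("environment") and
# the values are assumptions to adapt to your own configuration.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "service.name": "checkout-service",  # hypothetical service name
    "environment": "production",         # the environment span tag
})
trace.set_tracer_provider(TracerProvider(resource=resource))
```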

3. Break down service performance by indexing span tags 🔗

Index span tags to generate custom request, error, and duration (RED) metrics for services. RED metrics for indexed span tags are known as Troubleshooting MetricSets. Troubleshooting MetricSets enable you to analyze service performance by specific characteristics and attributes of each service.

For more information, see Analyze service performance.

4. Create detectors to alert on application performance 🔗

Configure detectors to alert on service metrics with APM Metrics Alert Rules. Detectors help keep you aware of performance degradation or other issues with specific services. Alert on latency or error rate for specific endpoints associated with a service. Detectors can include runbooks or tips for recipients to act on.
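APM Metrics Alert Rules are configured in the APM UI. As a rough illustration of what such an alert amounts to, the following sketch creates a latency detector through the SignalFx REST API with Python. The realm, access token, metric name, dimension name, and threshold are all assumptions to replace with your own values; the alert rules you create in the UI generate the equivalent detector for you.

```python
# Rough sketch: creating a detector programmatically through the SignalFx REST API.
# The realm, access token, metric name, dimension, and threshold are placeholders.
import requests

REALM = "us1"              # placeholder realm
ACCESS_TOKEN = "<token>"   # org access token with API permissions

# Hypothetical SignalFlow program alerting when p90 latency stays high for 5 minutes.
program_text = (
    "latency = data('service.request.duration.ns.p90', "
    "filter=filter('sf_service', 'checkout-service'))\n"
    "detect(when(latency > 500000000, lasting='5m')).publish('High p90 latency')"
)

detector = {
    "name": "checkout-service high latency",
    "programText": program_text,
    "rules": [
        {
            "detectLabel": "High p90 latency",
            "severity": "Critical",
            "notifications": [],  # add notification targets as needed
        }
    ],
}

response = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": ACCESS_TOKEN, "Content-Type": "application/json"},
    json=detector,
)
response.raise_for_status()
```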

For more information, see Configure detectors and alerts in Splunk APM.

For an example that shows you how to identify root causes with APM, see Find the root cause of a problem with Splunk APM.

5. Correlate trace data with logs and other resources 🔗

Use global data links to associate APM services, traces, and spans with Infrastructure Monitoring dashboards, logging tools, and custom URLs.

For more information, see Connect Splunk APM to logs and other resources.