
SignalFx Analytics Overview

SignalFlow programs

Conceptually, a SignalFlow program consists of several computational blocks, each of which accepts some input, performs some computation (for example, sum, mean, or max), and generates some output. The blocks are connected in a directed graph so that the output of one block flows as input to other blocks, resulting in a cascading series of computations that calculates the desired results.

[Figure: a SignalFlow program visualized as a set of connected analytics pipelines]

In practice, individual SignalFlow programs are the computational backbone for charts in SignalFx, and are visualized in the SignalFx application as an interlinked set of analytics pipelines.

The initial input into a SignalFlow program is typically a set of one or more time series.
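
For example, a minimal SignalFlow program might look like the following sketch, in which data() is the input block, mean() a computation block, and publish() the output block. The metric name cpu.utilization is a hypothetical placeholder.

    # Minimal SignalFlow sketch (hypothetical metric name):
    # input block -> computation block -> output block
    data('cpu.utilization').mean().publish()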

Rollup policies

Each set of time series in a plot line should have a common metric type: gauge, counter, or cumulative counter. The metric type determines the default visualization rollup applied to the time series data. The defaults in each case are chosen so that displayed values remain accurate and stable across different chart resolutions.

Take as an example a gauge that reports every 30 seconds. In a chart with a time range of 5 minutes, each reported value can be shown on the chart, as there is typically enough screen real estate to show the data at its native resolution, i.e. the 10 datapoints sent during a 5‑minute period. If the time range is changed to 1 week, however, SignalFx automatically switches to a coarser chart resolution to match.

In this case, SignalFx uses the Average rollup to calculate the average value of the gauge over each time interval at the coarser chart resolution. With one week’s worth of data, each visible datapoint is the average of the values sent during the chosen interval. SignalFx then plots those average values instead of, say, sampled values. In general, this provides a more accurate representation of the data, but it also has the side effect of averaging out peaks and valleys, which may not be desirable, depending on the actual metric.

Note

If you prefer to see sampled values, you can select the Latest rollup, or if you prefer to see the peaks and valleys, you can select the Max or Min rollups, respectively.
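
In SignalFlow, the rollup can be selected explicitly with the rollup argument to data(). The sketch below assumes a hypothetical gauge named memory.utilization; the rollup names mirror the options described above.

    # Override the default Average rollup for a gauge
    # (hypothetical metric name):
    data('memory.utilization', rollup='latest').publish(label='sampled')
    data('memory.utilization', rollup='max').publish(label='peaks')
    data('memory.utilization', rollup='min').publish(label='valleys')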

For a counter or cumulative counter, the chosen rollup affects not only the accuracy, but more generally how the chart behaves as you change the time range. For example, let’s say we have a counter, sent as a high-resolution metric, that tells us how many responses a server handled per 1‑second interval. If you leave the default rollup of Rate/sec in place, then in a chart with a time range small enough to show the time series at its native resolution, values are reported as follows:

  • For a counter, each reported Rate/sec value is normalized by the reporting interval: the number of responses during each 1‑second interval divided by 1 (for data arriving every second), the number of responses during each 5‑second interval divided by 5 (for data arriving every 5 seconds), and so on.
  • For a cumulative counter, each reported Rate/sec value is the delta from the previous datapoint, normalized by the same interval: divided by 1 for data arriving every second, divided by 5 for data arriving every 5 seconds, and so on.
[Figure: a counter plotted with the default Rate/sec rollup at native resolution]
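
A SignalFlow sketch of this setup might look like the following, with the default Rate/sec rollup made explicit; the metric name server.responses is hypothetical.

    # Counter with the Rate/sec rollup stated explicitly
    # (hypothetical metric name):
    data('server.responses', rollup='rate').publish(label='responses/sec')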

If you then widen the time range so that each plotted datapoint represents, say, a 4‑minute interval, values are reported as follows (a worked example appears after this list):

  • For a counter, each datapoint is the sum of all the responses during that 4‑minute interval, divided by 240 (the number of seconds in that interval).
  • For a cumulative counter, each datapoint is the sum of delta rollups over the interval (delta rollups are the differences between successive datapoints), divided by 240 (the number of seconds in the interval).
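
For example, suppose the counter reports a steady 100 responses per second. Over a 4‑minute interval the reported values sum to 100 × 240 = 24,000, and dividing by 240 yields 100, the same rate per second shown at native resolution. This is why the Rate/sec rollup keeps the chart stable as the resolution becomes coarser.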

The effect is similar to that of the Average rollup for a gauge: the chart provides an accurate representation of the data, visualized in a way that aligns with how line or area charts are typically used.

[Figure: the same counter plotted with the Rate/sec rollup at a coarser resolution]

In contrast, if you choose a different rollup, such as Sum, then the behavior of the chart changes with the chart resolution. In a chart with a time range small enough to show the time series at its native resolution, each reported value is the same as in the rate-per-second case, because the sum is taken over a single 1‑second interval. In a chart where each datapoint represents a 4‑minute interval, however, the values shown are the sum of all values during those 240 seconds. As you might expect, this likely produces values significantly higher than the normalized rate-per-second rollup; depending on the nature of your metric, that may be exactly what you are looking for.
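
Continuing the worked example above: with a steady 100 responses per second, the Sum rollup shows 100 at native resolution (the sum over each 1‑second interval) but 24,000 at a 4‑minute resolution, because no normalization is applied. A sketch, again using the hypothetical server.responses metric:

    # Sum rollup: plotted values grow as the chart resolution
    # coarsens, since each datapoint sums the whole interval
    # (hypothetical metric name):
    data('server.responses', rollup='sum').publish(label='responses/interval')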

For more information on the interaction of rollups, resolutions, and analytics, see Chart Resolution and Rollups.

How SignalFlow handles metadata

SignalFlow computations often involve both data and corresponding metadata: dimensions, properties, or tags. For example, when a mean is computed across CPU utilization metrics received from server instances spread across multiple regions or availability zones, you often want to group by region or availability zone, so that you can discern whether one is running hotter than another at the aggregate level.
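
Such a grouped aggregation might look like the sketch below; the metric and dimension names (cpu.utilization, aws_availability_zone) are hypothetical.

    # Mean CPU utilization grouped by availability zone; the grouping
    # dimension is carried through as metadata on each output series.
    data('cpu.utilization').mean(by=['aws_availability_zone']).publish()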

To ensure that calculations throughout a SignalFlow program can make use of this metadata, time series data is ingested into a SignalFlow computation together with its corresponding metadata. Subsequent computations on data include corresponding computations on metadata, so the result includes both data and metadata components, enabling further downstream processing and re-aggregation as necessary.

Computations that output a single summary result from a collection of time series, such as a sum or mean, use only the metadata that shares the same name and values across the collection. In contrast, computations that select values from a collection, such as maximum or minimum, simply carry over the metadata of the selected values as is.

Aggregations and transformations

An analytic computation is a mathematical function that is applied to a collection of datapoints. For example, a mean is computed over a collection of datapoints by dividing the sum of the collection by the number of datapoints in the collection. In the context of time series calculations, an analytic computation is applied either as an aggregation or a transformation. For more information, see Aggregations and transformations.
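
As a rough sketch of the distinction, assuming SignalFlow's by and over keyword arguments and a hypothetical metric name:

    # Aggregation: one mean across all input time series at each instant.
    data('cpu.utilization').mean().publish(label='fleet mean')

    # Transformation: a moving mean over a trailing 10-minute window,
    # computed separately for each time series.
    data('cpu.utilization').mean(over='10m').publish(label='10m moving mean')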