Rollups and Resolutions

Rollups

A rollup is a simple mathematical function that takes all of the datapoints received over a period of time and produces a single output datapoint for that period. Rollups are a way to summarize data, and they enable SignalFx to render charts or perform computations for longer time ranges quickly, without compromising the accuracy of the results.

When SignalFx receives datapoints for a given time series, it accumulates them for specific periods of time (1 second, 1 minute, 5 minutes and 1 hour) and stores five rollups for each interval, as follows:

  • Sum - The sum of all the values of the datapoints received during each interval
  • Min - The lowest value from among the datapoints received during each interval
  • Max - The highest value from among the datapoints received during each interval
  • Count - The number of datapoints received during each interval
  • Latest - The value of the most recent datapoint received during each interval

For example, if SignalFx receives the datapoint values 40, 50, 30, 10, and 20 (in that order) for a given time series in a 5-minute window, the 5-minute rollups will be stored as shown in the following table.

Rollup type   Value
Sum           150
Count         5
Min           10
Max           50
Latest        20
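The table above can be sketched in a few lines of code. This is an illustrative summary of how the five stored rollups relate to a window of datapoints; the function and variable names are hypothetical, not part of the SignalFx API.

```python
def compute_rollups(datapoints):
    """Summarize one interval's datapoints into the five stored rollups.

    `datapoints` is a list of values in the order they were received.
    """
    return {
        "sum": sum(datapoints),
        "count": len(datapoints),
        "min": min(datapoints),
        "max": max(datapoints),
        "latest": datapoints[-1],  # most recent value received in the interval
    }

# The example 5-minute window from the table above:
window = [40, 50, 30, 10, 20]
print(compute_rollups(window))
# -> {'sum': 150, 'count': 5, 'min': 10, 'max': 50, 'latest': 20}
```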

In addition to the five rollups described above, additional rollup options are available for use when displaying chart data:

  • Average - Calculates the mean of datapoint values received during each interval.
  • Lag - Represents the difference between the time the measurement was taken (timestamp on the datapoint) and the time SignalFx received it, in milliseconds.
  • Delta - For cumulative counters, summarizes data by calculating the difference between datapoints at successive intervals. Equivalent to Sum for counters.
  • Rate per second - For cumulative counters, divides delta by the length of the interval. For counters, divides sum by length of the interval. Useful for showing running totals (e.g. error count) as a rate (e.g. number of errors per second).
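Delta and Rate per second for a cumulative counter can be sketched as follows. This assumes one counter reading per interval and a fixed interval length; the function names are illustrative, not a SignalFx API.

```python
def delta(readings):
    """Difference between successive cumulative-counter readings."""
    return [b - a for a, b in zip(readings, readings[1:])]

def rate_per_second(readings, interval_seconds):
    """Delta divided by the interval length, giving a per-second rate."""
    return [d / interval_seconds for d in delta(readings)]

# A cumulative error count sampled once per 10-second interval:
readings = [100, 130, 190, 190, 250]
print(delta(readings))                # [30, 60, 0, 60]
print(rate_per_second(readings, 10))  # [3.0, 6.0, 0.0, 6.0]
```

This is why Rate per second is useful for running totals: a monotonically growing error count becomes a chartable errors-per-second series.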

For visualizing data in charts, SignalFx chooses a default rollup (displayed as “Auto”) that is appropriate to how you typically want to view the data; the default depends on the metric type, as shown in the following table.

Metric type          Default rollup     Other rollups available*
Gauge                Average            None
Counter              Rate per second    None
Cumulative counter   Rate per second    Delta

* In addition to the rollups that are available for all metric types: Sum, Count, Min, Max, Latest, Average, and Lag.

In the chart builder, you can use the plot configuration panel to change the rollup applied to a metric and to choose whether to display the rollup time on the plot line.

In some cases, you may see “Multiple” as the default rollup. If so, it is because your selected metric has more than one metric type, either because of a wildcard query, or because it is an AWS CloudWatch metric where the stat dimension has not been specified.

You can view and change the metric type in the Catalog; see Signals in Using the Catalog to Find Metrics.

Resolution

In SignalFx, the term “resolution” can refer to native data collection intervals (data resolution) or intervals at which datapoints are displayed on a chart (chart resolution).

Data resolution

Datapoints are typically sent to SignalFx at a regular interval, e.g. once every 10 seconds. We refer to this interval as the native data resolution of a time series. Detectors run at native data resolution.

A variety of factors can affect data resolution; for a more detailed discussion, see How SignalFx Chooses Data Resolution.

Chart resolution

When rendering charts, SignalFx defaults to a resolution that is based on the time range of the chart. In general, the longer the time range, the coarser the resolution, and the greater the likelihood that this chart resolution will differ from the native data resolution. SignalFx ensures accuracy for that chart through the use of rollups (discussed above).

For example, let’s say you are sending in CPU utilization measurements for a cluster of hosts every 10 seconds, but you want to chart the average CPU utilization across the cluster over a two-week period. In this case, it doesn’t make sense to use the native data resolution, as the sheer number of 10-second intervals in a two-week period (14 x 24 x 60 x 6 = 120,960) would not be viewable on most physical displays.
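The reason rollups preserve accuracy at coarser chart resolutions is that fine-grained rollups can be merged exactly into coarser ones: sums add, counts add, and the true average over the coarse interval is the total sum divided by the total count (not an average of averages). The sketch below illustrates this; the names are hypothetical, not part of the SignalFx API.

```python
def combine(rollups):
    """Merge per-interval rollups into one rollup for a coarser interval."""
    return {
        "sum": sum(r["sum"] for r in rollups),      # sums add exactly
        "count": sum(r["count"] for r in rollups),  # counts add exactly
        "min": min(r["min"] for r in rollups),      # min of mins
        "max": max(r["max"] for r in rollups),      # max of maxes
    }

# Two fine-grained intervals rolled up into one coarser interval:
fine = [
    {"sum": 150, "count": 5, "min": 10, "max": 50},
    {"sum": 90,  "count": 3, "min": 20, "max": 40},
]
coarse = combine(fine)
print(coarse["sum"] / coarse["count"])  # exact average over both intervals: 30.0
```

Note that a naive average of the two interval averages (30 and 30 here happen to agree, but generally would not when counts differ) can be wrong; dividing the combined sum by the combined count is always exact.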

For information on how to set chart resolution, see Specifying a chart data display resolution.