
Kubernetes Monitors and Observers 🔗

Monitors 🔗

Monitors collect metrics from the environment or services. See Kubernetes metrics for the metrics collected in a Kubernetes environment, and see Monitor Configuration for more information on the specific monitors that we support. All of these work the same way in a Kubernetes environment.

These monitors are relevant to Kubernetes environments:

  • Kubernetes Cluster or OpenShift Cluster - Gets cluster-level metrics from the K8s API
  • cAdvisor - Gets container metrics directly from cAdvisor exposed on the same node (may not work in newer K8s versions that don’t expose the cAdvisor port in the Kubelet)
  • Kubelet Stats - Gets cAdvisor metrics through the Kubelet /stats endpoint. This is much more robust, as it uses the same interface that Heapster uses.
  • Prometheus Exporter - Gets Prometheus metrics directly from exporters. This is especially useful if you already have exporters deployed in your cluster because you currently use Prometheus.
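
As a sketch, a minimal agent `monitors` section combining the cluster-level and Kubelet monitors might look like the following; the `kubeletAPI` settings are an assumption that depends on how your cluster's Kubelet authenticates requests:

```yaml
monitors:
  # Cluster-level metrics from the K8s API
  - type: kubernetes-cluster
  # Per-container metrics via the Kubelet /stats endpoint
  - type: kubelet-stats
    kubeletAPI:
      # Assumes the agent's service account token is accepted by the Kubelet
      authType: serviceAccount
```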

If you want to pull metrics from kube-state-metrics you can use a config similar to the following. This assumes that one kube-state-metrics instance is running in your cluster in a container using an image whose name contains the string “kube-state-metrics”.

- type: prometheus-exporter
  discoveryRule: container_image =~ "kube-state-metrics"
  disableHostDimensions: true
  disableEndpointDimensions: true
  dimensionTransformations:
    pod: kubernetes_pod_name
    namespace: kubernetes_namespace
    container: container_spec_name
    node: kubernetes_node
  extraDimensions:
    metric_source: kube-state-metrics

This uses the prometheus-exporter monitor to pull metrics from the service. It also disables a number of host- and endpoint-specific dimensions that are irrelevant to cluster-level metrics. The dimensionTransformations option renames the dimensions emitted by kube-state-metrics to match the dimensions generated by the other K8s monitors. Note that many of the metrics exposed by kube-state-metrics overlap with the SignalFx kubernetes-cluster monitor, so you probably don’t want to enable both unless you are using heavy filtering.

Configuring the Observer using K8s Annotations 🔗

When using the k8s-api observer, you can use Kubernetes pod annotations to tell the agent how to monitor your services. There are several annotations that the k8s-api observer recognizes:

  • agent.signalfx.com/monitorType.<port>: "<monitor type>" - Specifies the monitor type to use when monitoring the specified port. If this value is present, any agent config will be ignored, so you must fully specify any non-default config values you want to use in annotations. If this annotation is missing for a port but other config is present, you must have discovery rules or manually configured endpoints in your agent config to monitor this port; the other annotation config values will be merged into the agent config.
  • agent.signalfx.com/config.<port>.<configKey>: "<configValue>" - Specifies a config option for the monitor that will monitor this endpoint. The options are the same as specified in the monitor config reference. Lists may be specified with the syntax [a, b, c] (YAML flow sequence), which will be deserialized and provided to the monitor as a list. Boolean values are the annotation string values true or false. Integers must also be given as annotation string values, but they will be interpreted as integers if they don’t contain any non-numeric characters.
  • agent.signalfx.com/configFromEnv.<port>.<configKey>: "<env var name>" - Specifies a config option that will be pulled from an environment variable on the same container as the port being monitored.
  • agent.signalfx.com/configFromSecret.<port>.<configKey>: "<secretName>/<secretKey>" - Maps the value of a secret to a config option. The <secretKey> is the key of the secret value within the data object of the actual K8s Secret resource. Note that this requires the agent’s service account to have the correct permissions to read the specified secret.

In all of the above, the <port> field can be either the port number of the endpoint you want to monitor or the assigned name. The config is specific to a single port, which allows you to monitor multiple ports in a single pod and container by just specifying annotations with different ports.
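
For example, assuming a Secret named db-creds with a key password and a port named db (all hypothetical names chosen for illustration), pulling a monitor's password option from that secret could look like this sketch:

```yaml
metadata:
  annotations:
    # Monitor the port named "db" with the collectd/mysql monitor
    agent.signalfx.com/monitorType.db: "collectd/mysql"
    # Map the "password" key of the hypothetical db-creds Secret
    # to the monitor's password config option
    agent.signalfx.com/configFromSecret.db.password: "db-creds/password"
```

Remember that the agent's service account must be allowed to read db-creds for this to work.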

Example 🔗

The following K8s pod spec and agent YAML configuration accomplish the same thing.

K8s pod spec:

metadata:
  annotations:
    agent.signalfx.com/monitorType.jmx: "collectd/cassandra"
    agent.signalfx.com/config.jmx.intervalSeconds: "20"
    agent.signalfx.com/config.jmx.mBeansToCollect: "[cassandra-client-read-latency, threading]"
  labels:
    app: my-app
spec:
  containers:
    - name: cassandra
      ports:
        - containerPort: 7199
          name: jmx
          protocol: TCP
      ...

Agent config:

monitors:
- type: collectd/cassandra
  intervalSeconds: 20
  mBeansToCollect:
    - cassandra-client-read-latency
    - threading

If a pod has the agent.signalfx.com/monitorType.* annotation on it, that pod will be excluded from the auto-discovery mechanism and will be monitored only with the given annotation configuration. If you want to merge configuration from the annotations with agent configuration, you must omit the monitorType annotation and rely on auto discovery to find this endpoint. When the endpoint is discovered, configuration from both sources will be merged together, with pod annotation configuration taking precedence.
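
To sketch the merge behavior: a pod annotated only with config options (no monitorType annotation) can combine with a discovery rule in the agent config. The discovery rule and monitor type below are assumptions chosen for illustration:

```yaml
# Pod annotation (no monitorType, so auto discovery still applies):
#   agent.signalfx.com/config.jmx.intervalSeconds: "20"

# Agent config with a discovery rule matching the pod's jmx port:
monitors:
  - type: collectd/cassandra
    discoveryRule: container_image =~ "cassandra" && port == 7199
    # The annotation's intervalSeconds: 20 takes precedence over this value
    intervalSeconds: 10
```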