Kubernetes Cluster receiver

The Kubernetes Cluster receiver, k8s_cluster, collects cluster-level metrics from the Kubernetes API server. The receiver uses the Kubernetes API to listen for updates. A single instance of this receiver is sufficient to monitor an entire cluster.

This receiver is a native OpenTelemetry receiver and replaces the kubernetes-cluster SignalFx Smart Agent monitor.
Note
This receiver is in beta and configuration fields are subject to change.
Installation

Follow these steps to configure and enable the component:

1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
2. Configure the receiver as described in the next section.
3. Restart the Collector.
Note
Kubernetes version 1.21 and higher is compatible with the Kubernetes navigator. Lower versions of Kubernetes are not fully supported for this receiver and might result in the navigator not displaying all clusters.
Configuration
Use the following example configuration to activate this receiver in the Collector:
receivers:
  k8s_cluster:
    auth_type: kubeConfig
    collection_interval: 30s
    node_conditions_to_report: ["Ready", "MemoryPressure"]
    allocatable_types_to_report: ["cpu", "memory"]
    metadata_exporters: [signalfx]
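The snippet above only declares the receiver; for data to flow, the receiver must also appear in a metrics pipeline. A minimal sketch, assuming the SignalFx exporter (the access token and realm are placeholders):

```yaml
receivers:
  k8s_cluster:
    auth_type: serviceAccount   # default when running in-cluster
    collection_interval: 30s

exporters:
  signalfx:
    access_token: <SIGNALFX_TOKEN>   # placeholder
    realm: <SIGNALFX_REALM>          # placeholder

service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      exporters: [signalfx]
```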
The following table shows the required and optional settings:
metadata_exporters

Sync the receiver with the metadata exporters you want to use to collect metadata. Exporters specified in this list need to implement the following interface. If an exporter doesn't implement the interface, startup fails.
type MetadataExporter interface {
    ConsumeMetadata(metadata []*MetadataUpdate) error
}

type MetadataUpdate struct {
    ResourceIDKey string
    ResourceID    ResourceID
    MetadataDelta
}

type MetadataDelta struct {
    MetadataToAdd    map[string]string
    MetadataToRemove map[string]string
    MetadataToUpdate map[string]string
}
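As a sketch, a minimal exporter satisfying this interface could look like the following. The types are mirrored locally so the example compiles standalone, and logExporter is a hypothetical name, not part of the Collector codebase:

```go
package main

import "fmt"

// Types mirrored from the receiver's metadata API so this sketch is self-contained.
type ResourceID string

type MetadataDelta struct {
	MetadataToAdd    map[string]string
	MetadataToRemove map[string]string
	MetadataToUpdate map[string]string
}

type MetadataUpdate struct {
	ResourceIDKey string
	ResourceID    ResourceID
	MetadataDelta
}

// logExporter is a hypothetical exporter that satisfies the MetadataExporter
// interface by printing a summary of each metadata update it receives.
type logExporter struct{}

func (e *logExporter) ConsumeMetadata(metadata []*MetadataUpdate) error {
	for _, upd := range metadata {
		fmt.Printf("%s=%s add=%d remove=%d update=%d\n",
			upd.ResourceIDKey, upd.ResourceID,
			len(upd.MetadataToAdd), len(upd.MetadataToRemove), len(upd.MetadataToUpdate))
	}
	return nil
}

func main() {
	exp := &logExporter{}
	_ = exp.ConsumeMetadata([]*MetadataUpdate{{
		ResourceIDKey: "k8s.pod.uid",
		ResourceID:    "abc-123",
		MetadataDelta: MetadataDelta{MetadataToAdd: map[string]string{"app": "demo"}},
	}})
}
```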
node_conditions_to_report

Use the following configuration to have the k8s_cluster receiver emit two metrics, k8s.node.condition_ready and k8s.node.condition_memory_pressure, one for each condition in the configuration. The value is 1 if the ConditionStatus for the corresponding Condition is True, 0 if it is False, and -1 if it is Unknown.
# ...
k8s_cluster:
  node_conditions_to_report:
    - Ready
    - MemoryPressure
# ...
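To make the value mapping concrete, here is a small Go sketch of the translation described above. The function name is illustrative and not part of the receiver's API:

```go
package main

import "fmt"

// conditionValue maps a Kubernetes ConditionStatus string to the value the
// receiver reports: 1 for True, 0 for False, and -1 for Unknown.
func conditionValue(status string) int {
	switch status {
	case "True":
		return 1
	case "False":
		return 0
	default: // "Unknown" or any other status
		return -1
	}
}

func main() {
	for _, s := range []string{"True", "False", "Unknown"} {
		fmt.Printf("%s -> %d\n", s, conditionValue(s))
	}
	// Output:
	// True -> 1
	// False -> 0
	// Unknown -> -1
}
```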
To learn more, search for "Conditions" on the Kubernetes documentation site.
Configure with the SignalFx Exporter

The following example shows a deployment of the Collector that sets up the k8s_cluster receiver along with the SignalFx Metrics Exporter.
This example shows how to set up the following Kubernetes resources that are required for the deployment:
ConfigMap
Service account
RBAC
Deployment
ConfigMap

Create a ConfigMap with the configuration for otelcontribcol. Replace SIGNALFX_TOKEN and SIGNALFX_REALM with valid values.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
data:
  config.yaml: |
    receivers:
      k8s_cluster:
        collection_interval: 10s
        metadata_exporters: [signalfx]
    exporters:
      signalfx:
        access_token: <SIGNALFX_TOKEN>
        realm: <SIGNALFX_REALM>
    service:
      pipelines:
        metrics:
          receivers: [k8s_cluster]
          exporters: [signalfx]
EOF
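Rather than hardcoding the token into the ConfigMap, the Collector also supports environment-variable expansion in its configuration. A hedged sketch, assuming a SIGNALFX_TOKEN environment variable is set on the Collector container (for example, from a Kubernetes Secret):

```yaml
exporters:
  signalfx:
    # The Collector expands ${SIGNALFX_TOKEN} from the container's environment,
    # so the secret does not have to live in the ConfigMap itself.
    access_token: ${SIGNALFX_TOKEN}
    realm: us0   # example realm; replace with your own
```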
Service account
Create a service account for the Collector:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
EOF
Role-based access control (RBAC)

Create a ClusterRole with the required permissions:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - pods
  - pods/status
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
EOF
Create a ClusterRoleBinding to grant the role to the service account created in the service account example:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
- kind: ServiceAccount
  name: otelcontribcol
  namespace: default
EOF
Deployment

Create a deployment for the Collector:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otelcontribcol
  template:
    metadata:
      labels:
        app: otelcontribcol
    spec:
      serviceAccountName: otelcontribcol
      containers:
      - name: otelcontribcol
        image: otelcontribcol:latest # specify image
        args: ["--config", "/etc/config/config.yaml"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
        imagePullPolicy: IfNotPresent
      volumes:
      - name: config
        configMap:
          name: otelcontribcol
EOF
Metrics
The following table shows the legacy metrics that are available for this integration. See OpenTelemetry values and their legacy equivalents for the Splunk Distribution of OpenTelemetry Collector equivalents.
Get help
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Call Splunk Customer Support.
Available to customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Get Started with Splunk Community - Chat groups.
To learn about even more support options, see Splunk Customer Success.