Kubernetes scheduler
Description
The Splunk Distribution of OpenTelemetry Collector provides the kubernetes-scheduler
monitor type by using the Splunk Observability Cloud Smart Agent Receiver.
This monitor type exports Prometheus metrics from the kube-scheduler.
This monitor type is available on Kubernetes, Linux, and Windows.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform. For Kubernetes, see the Helm sketch after these steps.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
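If you deploy the Collector on Kubernetes, one common option is the Splunk OpenTelemetry Collector Helm chart. The following commands are only a sketch: the realm, access token, and cluster name are placeholders, and the chart values can vary between chart versions, so confirm them against the chart documentation before running the commands.
# Add the Splunk OpenTelemetry Collector chart repository and install the chart
# with placeholder values for realm, access token, and cluster name.
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm install splunk-otel-collector \
  --set="splunkObservability.realm=<REALM>" \
  --set="splunkObservability.accessToken=<ACCESS_TOKEN>" \
  --set="clusterName=<CLUSTER_NAME>" \
  splunk-otel-collector-chart/splunk-otel-collector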
Configuration
To use this Smart Agent monitor with the Collector, include the smartagent
receiver and service pipeline in your configuration file. The Smart Agent receiver is fully supported only on x86_64/amd64 platforms.
Read more in Use Smart Agent monitors with the Collector.
Learn about config options in Collector default configuration.
See the examples below for more details.
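For example, a minimal smartagent receiver entry for this monitor might look like the following sketch. The host and port values are placeholders; use the address and port where your kube-scheduler exposes its Prometheus metrics endpoint. The extraDimensions block is optional.
receivers:
  smartagent/kubernetes-scheduler:
    type: kubernetes-scheduler
    host: localhost        # placeholder: address of the kube-scheduler metrics endpoint
    port: 10251            # placeholder: port of the kube-scheduler metrics endpoint
    extraDimensions:
      metric_source: kubernetes-scheduler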
To complete the monitor type activation, you must also include it in a metrics pipeline. To do this, add the monitor type to the service > pipelines > metrics > receivers section of your configuration file. For example:
service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-scheduler]
Configuration settings
The following table shows the configuration options for this monitor type:
Config option | Required | Type | Description
---|---|---|---
httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. Must be a duration string accepted by https://golang.org/pkg/time/#ParseDuration. The default value is 10s.
username | no | string | Basic authentication username to use on each request, if any.
password | no | string | Basic authentication password to use on each request, if any.
useHTTPS | no | bool | If true, the agent connects to the server using HTTPS instead of plain HTTP. The default value is false.
httpHeaders | no | map of strings | A map of HTTP header names to values. Comma-separated multiple values for the same message header are supported.
skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter's TLS certificate is not verified. The default value is false.
sniServerName | no | string | If useHTTPS is true and skipVerify is true, sniServerName is used to verify the host name on the returned certificates. It is also included in the client's handshake to support virtual hosting, unless it is an IP address.
caCertPath | no | string | Path to the CA certificate that has signed the TLS certificate. Not needed if skipVerify is set to false.
clientCertPath | no | string | Path to the client TLS certificate to use for TLS required connections.
clientKeyPath | no | string | Path to the client TLS key to use for TLS required connections.
host | yes | string | Host of the exporter.
port | yes | integer | Port of the exporter.
useServiceAccount | no | bool | Whether to use the pod service account to authenticate. The default value is false.
metricPath | no | string | Path to the metrics endpoint on the exporter server. The default value is /metrics.
sendAllMetrics | no | bool | If true, sends all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the prometheus-exporter monitor directly, since there is no built-in filtering; it only applies when this monitor is embedded in other monitors. The default value is false.
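As an illustration of how several of these options combine, the following sketch scrapes the scheduler over HTTPS and authenticates with the pod service account. The host name and port are placeholders for your environment:
receivers:
  smartagent/kubernetes-scheduler:
    type: kubernetes-scheduler
    host: localhost          # placeholder: address of the kube-scheduler
    port: 10259              # placeholder: secure metrics port in your environment
    useHTTPS: true           # connect over HTTPS instead of plain HTTP
    skipVerify: true         # skip TLS certificate verification
    useServiceAccount: true  # authenticate with the pod service account
    metricPath: /metrics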
Metrics
The following metrics are available for this integration:
Non-default metrics (version 4.7.0+)
To emit metrics that are not default, you can add those metrics in the generic monitor-level extraMetrics config option. Metrics that are derived from specific configuration options and that do not appear in the above list of metrics do not need to be added to extraMetrics.
To see a list of the metrics that will be emitted, run agent-status monitors after configuring this monitor in a running agent instance.
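For example, a sketch of adding non-default metrics through extraMetrics; the metric name shown is a placeholder, so replace it with names reported by agent-status monitors for your scheduler version:
receivers:
  smartagent/kubernetes-scheduler:
    type: kubernetes-scheduler
    host: localhost   # placeholder
    port: 10251       # placeholder
    extraMetrics:
      - scheduler_schedule_attempts_total   # placeholder metric name; confirm with agent-status monitors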
Get help
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Submit a case in the Splunk Support Portal
Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers
Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
Available to Splunk Observability Cloud customers and free trial users
To learn how to join, see Get Started with Splunk Community - Chat groups
To learn about even more support options, see Splunk Customer Success.