Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions

@@ -0,0 +1,185 @@
# Kubernetes Input Plugin
This plugin gathers metrics about running pods and containers of a
[Kubernetes][kubernetes] instance via the Kubelet API.
> [!NOTE]
> This plugin has to run as part of a `daemonset` within a Kubernetes
> installation, i.e. Telegraf is running on every node within the cluster.
> You should configure this plugin to talk to its locally running kubelet.

> [!CRITICAL]
> This plugin produces high-cardinality data, which, if left unfiltered, can
> cause high load on your database. Please make sure to [filter][filtering] the
> produced metrics (a brief sketch follows below) or configure your database to
> avoid cardinality issues!
⭐ Telegraf v1.1.0
🏷️ containers
💻 all
[kubernetes]: https://kubernetes.io/
[filtering]: /docs/CONFIGURATION.md#metric-filtering
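
As a minimal, illustrative sketch (the measurement and field names are listed
in the Metrics section below; `fieldpass` and `namedrop` are the standard
Telegraf selectors described under [metric filtering][filtering]), such a
configuration might look like:

```toml
[[inputs.kubernetes]]
  url = "http://127.0.0.1:10255"
  ## keep pod labels out of the tags entirely
  label_exclude = ["*"]
  ## only pass the fields you actually use downstream
  fieldpass = ["cpu_usage_nanocores", "memory_working_set_bytes"]
  ## drop the per-volume measurement if you do not need it
  namedrop = ["kubernetes_pod_volume"]
```
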
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
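
For instance, a hedged sketch of a few of these global options applied to this
plugin (the values are purely illustrative):

```toml
[[inputs.kubernetes]]
  ## gather less frequently than the agent-wide default
  interval = "60s"
  ## name this plugin instance for easier identification in logs
  alias = "kubelet-local"
  ## attach a static tag to every metric this instance produces
  [inputs.kubernetes.tags]
    environment = "staging"
```
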
## Configuration
```toml @sample.conf
# Read metrics from the kubernetes kubelet api
[[inputs.kubernetes]]
  ## URL for the kubelet, if empty read metrics from all nodes in the cluster
  url = "http://127.0.0.1:10255"

  ## Use bearer token for authorization. ('bearer_token' takes priority)
  ## If both of these are empty, we'll use the default service account token
  ## at /var/run/secrets/kubernetes.io/serviceaccount/token.
  ##
  ## To re-read the token at each interval, please use a file with the
  ## bearer_token option. If given a string, Telegraf will always use that
  ## token.
  # bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  ## OR
  # bearer_token_string = "abc_123"

  ## Kubernetes Node Metric Name
  ## The default Kubernetes node metric name (i.e. kubernetes_node) is the same
  ## for the kubernetes and kube_inventory plugins. To avoid conflicts, set this
  ## option to a different value.
  # node_metric_name = "kubernetes_node"

  ## Pod labels to be added as tags. An empty array for both include and
  ## exclude will include all labels.
  # label_include = []
  # label_exclude = ["*"]

  ## Set response_timeout (default 5 seconds)
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/path/to/cafile"
  # tls_cert = "/path/to/certfile"
  # tls_key = "/path/to/keyfile"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```
### Host IP
To find the IP address of the host you are running on, you can issue a command
like the following:
```sh
curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME \
--header "Authorization: Bearer $TOKEN" \
--insecure | jq -r '.status.hostIP'
```
This example uses the downward API to pass in `$POD_NAMESPACE`; `$HOSTNAME` is
the hostname of the pod, which is set by the Kubernetes API.
See the [Kubernetes documentation][Kubernetes_docs] for a full example of
generating a bearer token to explore the Kubernetes API.
[Kubernetes_docs]: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy
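
The resulting address can then be fed into the plugin configuration, for
example through an environment variable (the `HOST_IP` name is illustrative;
Telegraf substitutes `${VAR}` references when it loads its configuration file):

```toml
[[inputs.kubernetes]]
  ## talk to the local kubelet's authenticated port
  url = "https://${HOST_IP}:10250"
  bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  insecure_skip_verify = true
```
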
### Daemon-set
For recommendations on running Telegraf as a daemon-set, see the
[Monitoring Kubernetes Architecture blog post][k8s_telegraf_blog] or check the
following Helm charts:
- [Telegraf][helm_telegraf]
- [InfluxDB][helm_influxdb]
- [Chronograf][helm_chronograf]
- [Kapacitor][helm_kapacitor]
[k8s_telegraf_blog]: https://www.influxdata.com/blog/monitoring-kubernetes-architecture/
[helm_telegraf]: https://github.com/helm/charts/tree/master/stable/telegraf
[helm_influxdb]: https://github.com/helm/charts/tree/master/stable/influxdb
[helm_chronograf]: https://github.com/helm/charts/tree/master/stable/chronograf
[helm_kapacitor]: https://github.com/helm/charts/tree/master/stable/kapacitor
## Metrics
- kubernetes_node
  - tags:
    - node_name
  - fields:
    - cpu_usage_nanocores
    - cpu_usage_core_nanoseconds
    - memory_available_bytes
    - memory_usage_bytes
    - memory_working_set_bytes
    - memory_rss_bytes
    - memory_page_faults
    - memory_major_page_faults
    - network_rx_bytes
    - network_rx_errors
    - network_tx_bytes
    - network_tx_errors
    - fs_available_bytes
    - fs_capacity_bytes
    - fs_used_bytes
    - runtime_image_fs_available_bytes
    - runtime_image_fs_capacity_bytes
    - runtime_image_fs_used_bytes
- kubernetes_pod_container
  - tags:
    - container_name
    - namespace
    - node_name
    - pod_name
  - fields:
    - cpu_usage_nanocores
    - cpu_usage_core_nanoseconds
    - memory_usage_bytes
    - memory_working_set_bytes
    - memory_rss_bytes
    - memory_page_faults
    - memory_major_page_faults
    - rootfs_available_bytes
    - rootfs_capacity_bytes
    - rootfs_used_bytes
    - logsfs_available_bytes
    - logsfs_capacity_bytes
    - logsfs_used_bytes
- kubernetes_pod_volume
  - tags:
    - volume_name
    - namespace
    - node_name
    - pod_name
  - fields:
    - available_bytes
    - capacity_bytes
    - used_bytes
- kubernetes_pod_network
  - tags:
    - namespace
    - node_name
    - pod_name
  - fields:
    - rx_bytes
    - rx_errors
    - tx_bytes
    - tx_errors
## Example Output
```text
kubernetes_node
kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_available_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
kubernetes_pod_network,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr rx_bytes=120671099i,rx_errors=0i,tx_bytes=102451983i,tx_errors=0i 1476477530000000000
kubernetes_pod_volume,volume_name=default-token-f7wts,namespace=default,node_name=ip-172-17-0-1.internal,pod_name=storage-7 available_bytes=8415240192i,capacity_bytes=8415252480i,used_bytes=12288i 1546910783000000000
kubernetes_system_container
```

@@ -0,0 +1,369 @@
//go:generate ../../../tools/readme_config_includer/generator
package kubernetes
import (
"context"
_ "embed"
"encoding/json"
"fmt"
"net/http"
"os"
"strings"
"sync"
"time"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/plugins/common/tls"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
const (
defaultServiceAccountPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"
)
// Kubernetes represents the config object for the plugin
type Kubernetes struct {
URL string `toml:"url"`
BearerToken string `toml:"bearer_token"`
BearerTokenString string `toml:"bearer_token_string" deprecated:"1.24.0;1.35.0;use 'BearerToken' with a file instead"`
NodeMetricName string `toml:"node_metric_name"`
LabelInclude []string `toml:"label_include"`
LabelExclude []string `toml:"label_exclude"`
ResponseTimeout config.Duration `toml:"response_timeout"`
Log telegraf.Logger `toml:"-"`
tls.ClientConfig
labelFilter filter.Filter
httpClient *http.Client
}
func (*Kubernetes) SampleConfig() string {
return sampleConfig
}
func (k *Kubernetes) Init() error {
// If neither are provided, use the default service account.
if k.BearerToken == "" && k.BearerTokenString == "" {
k.BearerToken = defaultServiceAccountPath
}
labelFilter, err := filter.NewIncludeExcludeFilter(k.LabelInclude, k.LabelExclude)
if err != nil {
return err
}
k.labelFilter = labelFilter
if k.URL == "" {
k.InsecureSkipVerify = true
}
if k.NodeMetricName == "" {
k.NodeMetricName = "kubernetes_node"
}
return nil
}
func (k *Kubernetes) Gather(acc telegraf.Accumulator) error {
if k.URL != "" {
acc.AddError(k.gatherSummary(k.URL, acc))
return nil
}
var wg sync.WaitGroup
nodeBaseURLs, err := getNodeURLs(k.Log)
if err != nil {
return err
}
for _, url := range nodeBaseURLs {
wg.Add(1)
go func(url string) {
defer wg.Done()
acc.AddError(k.gatherSummary(url, acc))
}(url)
}
wg.Wait()
return nil
}
func getNodeURLs(log telegraf.Logger) ([]string, error) {
cfg, err := rest.InClusterConfig()
if err != nil {
return nil, err
}
client, err := kubernetes.NewForConfig(cfg)
if err != nil {
return nil, err
}
nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
if err != nil {
return nil, err
}
nodeUrls := make([]string, 0, len(nodes.Items))
for i := range nodes.Items {
n := &nodes.Items[i]
address := getNodeAddress(n.Status.Addresses)
if address == "" {
log.Warnf("Unable to node addresses for Node %q", n.Name)
continue
}
nodeUrls = append(nodeUrls, "https://"+address+":10250")
}
return nodeUrls, nil
}
// Prefer internal addresses; if none are found, fall back to the first other address (e.g. the external IP)
func getNodeAddress(addresses []v1.NodeAddress) string {
extAddresses := make([]string, 0)
for _, addr := range addresses {
if addr.Type == v1.NodeInternalIP {
return addr.Address
}
extAddresses = append(extAddresses, addr.Address)
}
if len(extAddresses) > 0 {
return extAddresses[0]
}
return ""
}
func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) error {
summaryMetrics := &summaryMetrics{}
err := k.loadJSON(baseURL+"/stats/summary", summaryMetrics)
if err != nil {
return err
}
podInfos, err := k.gatherPodInfo(baseURL)
if err != nil {
return err
}
buildSystemContainerMetrics(summaryMetrics, acc)
buildNodeMetrics(summaryMetrics, acc, k.NodeMetricName)
buildPodMetrics(summaryMetrics, podInfos, k.labelFilter, acc)
return nil
}
func buildSystemContainerMetrics(summaryMetrics *summaryMetrics, acc telegraf.Accumulator) {
for _, container := range summaryMetrics.Node.SystemContainers {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"container_name": container.Name,
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = container.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = container.CPU.UsageCoreNanoSeconds
fields["memory_usage_bytes"] = container.Memory.UsageBytes
fields["memory_working_set_bytes"] = container.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = container.Memory.RSSBytes
fields["memory_page_faults"] = container.Memory.PageFaults
fields["memory_major_page_faults"] = container.Memory.MajorPageFaults
fields["rootfs_available_bytes"] = container.RootFS.AvailableBytes
fields["rootfs_capacity_bytes"] = container.RootFS.CapacityBytes
fields["logsfs_available_bytes"] = container.LogsFS.AvailableBytes
fields["logsfs_capacity_bytes"] = container.LogsFS.CapacityBytes
acc.AddFields("kubernetes_system_container", fields, tags)
}
}
func buildNodeMetrics(summaryMetrics *summaryMetrics, acc telegraf.Accumulator, metricName string) {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = summaryMetrics.Node.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = summaryMetrics.Node.CPU.UsageCoreNanoSeconds
fields["memory_available_bytes"] = summaryMetrics.Node.Memory.AvailableBytes
fields["memory_usage_bytes"] = summaryMetrics.Node.Memory.UsageBytes
fields["memory_working_set_bytes"] = summaryMetrics.Node.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = summaryMetrics.Node.Memory.RSSBytes
fields["memory_page_faults"] = summaryMetrics.Node.Memory.PageFaults
fields["memory_major_page_faults"] = summaryMetrics.Node.Memory.MajorPageFaults
fields["network_rx_bytes"] = summaryMetrics.Node.Network.RXBytes
fields["network_rx_errors"] = summaryMetrics.Node.Network.RXErrors
fields["network_tx_bytes"] = summaryMetrics.Node.Network.TXBytes
fields["network_tx_errors"] = summaryMetrics.Node.Network.TXErrors
fields["fs_available_bytes"] = summaryMetrics.Node.FileSystem.AvailableBytes
fields["fs_capacity_bytes"] = summaryMetrics.Node.FileSystem.CapacityBytes
fields["fs_used_bytes"] = summaryMetrics.Node.FileSystem.UsedBytes
fields["runtime_image_fs_available_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.AvailableBytes
fields["runtime_image_fs_capacity_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.CapacityBytes
fields["runtime_image_fs_used_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.UsedBytes
acc.AddFields(metricName, fields, tags)
}
func (k *Kubernetes) gatherPodInfo(baseURL string) ([]item, error) {
var podAPI pods
err := k.loadJSON(baseURL+"/pods", &podAPI)
if err != nil {
return nil, err
}
podInfos := make([]item, 0, len(podAPI.Items))
podInfos = append(podInfos, podAPI.Items...)
return podInfos, nil
}
func (k *Kubernetes) loadJSON(url string, v interface{}) error {
var req, err = http.NewRequest("GET", url, nil)
if err != nil {
return err
}
var resp *http.Response
tlsCfg, err := k.ClientConfig.TLSConfig()
if err != nil {
return err
}
if k.httpClient == nil {
if k.ResponseTimeout < config.Duration(time.Second) {
k.ResponseTimeout = config.Duration(time.Second * 5)
}
k.httpClient = &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsCfg,
},
CheckRedirect: func(*http.Request, []*http.Request) error {
return http.ErrUseLastResponse
},
Timeout: time.Duration(k.ResponseTimeout),
}
}
if k.BearerToken != "" {
token, err := os.ReadFile(k.BearerToken)
if err != nil {
return err
}
k.BearerTokenString = strings.TrimSpace(string(token))
}
req.Header.Set("Authorization", "Bearer "+k.BearerTokenString)
req.Header.Add("Accept", "application/json")
resp, err = k.httpClient.Do(req)
if err != nil {
return fmt.Errorf("error making HTTP request to %q: %w", url, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s returned HTTP status %s", url, resp.Status)
}
err = json.NewDecoder(resp.Body).Decode(v)
if err != nil {
return fmt.Errorf("error parsing response: %w", err)
}
return nil
}
func buildPodMetrics(summaryMetrics *summaryMetrics, podInfo []item, labelFilter filter.Filter, acc telegraf.Accumulator) {
for _, pod := range summaryMetrics.Pods {
podLabels := make(map[string]string)
containerImages := make(map[string]string)
for _, info := range podInfo {
if info.Metadata.Name == pod.PodRef.Name && info.Metadata.Namespace == pod.PodRef.Namespace {
for _, v := range info.Spec.Containers {
containerImages[v.Name] = v.Image
}
for k, v := range info.Metadata.Labels {
if labelFilter.Match(k) {
podLabels[k] = v
}
}
}
}
for _, container := range pod.Containers {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"namespace": pod.PodRef.Namespace,
"container_name": container.Name,
"pod_name": pod.PodRef.Name,
}
for k, v := range containerImages {
if k == container.Name {
tags["image"] = v
tok := strings.Split(v, ":")
if len(tok) == 2 {
tags["version"] = tok[1]
}
}
}
for k, v := range podLabels {
tags[k] = v
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = container.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = container.CPU.UsageCoreNanoSeconds
fields["memory_usage_bytes"] = container.Memory.UsageBytes
fields["memory_working_set_bytes"] = container.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = container.Memory.RSSBytes
fields["memory_page_faults"] = container.Memory.PageFaults
fields["memory_major_page_faults"] = container.Memory.MajorPageFaults
fields["rootfs_available_bytes"] = container.RootFS.AvailableBytes
fields["rootfs_capacity_bytes"] = container.RootFS.CapacityBytes
fields["rootfs_used_bytes"] = container.RootFS.UsedBytes
fields["logsfs_available_bytes"] = container.LogsFS.AvailableBytes
fields["logsfs_capacity_bytes"] = container.LogsFS.CapacityBytes
fields["logsfs_used_bytes"] = container.LogsFS.UsedBytes
acc.AddFields("kubernetes_pod_container", fields, tags)
}
for _, volume := range pod.Volumes {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"pod_name": pod.PodRef.Name,
"namespace": pod.PodRef.Namespace,
"volume_name": volume.Name,
}
for k, v := range podLabels {
tags[k] = v
}
fields := make(map[string]interface{})
fields["available_bytes"] = volume.AvailableBytes
fields["capacity_bytes"] = volume.CapacityBytes
fields["used_bytes"] = volume.UsedBytes
acc.AddFields("kubernetes_pod_volume", fields, tags)
}
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"pod_name": pod.PodRef.Name,
"namespace": pod.PodRef.Namespace,
}
for k, v := range podLabels {
tags[k] = v
}
fields := make(map[string]interface{})
fields["rx_bytes"] = pod.Network.RXBytes
fields["rx_errors"] = pod.Network.RXErrors
fields["tx_bytes"] = pod.Network.TXBytes
fields["tx_errors"] = pod.Network.TXErrors
acc.AddFields("kubernetes_pod_network", fields, tags)
}
}
func init() {
inputs.Add("kubernetes", func() telegraf.Input {
return &Kubernetes{
LabelExclude: []string{"*"},
}
})
}

@@ -0,0 +1,93 @@
package kubernetes
import "time"
// summaryMetrics represents all the summary data about a particular node retrieved from a kubelet
type summaryMetrics struct {
Node nodeMetrics `json:"node"`
Pods []podMetrics `json:"pods"`
}
// nodeMetrics represents detailed information about a node
type nodeMetrics struct {
NodeName string `json:"nodeName"`
SystemContainers []containerMetrics `json:"systemContainers"`
StartTime time.Time `json:"startTime"`
CPU cpuMetrics `json:"cpu"`
Memory memoryMetrics `json:"memory"`
Network networkMetrics `json:"network"`
FileSystem fileSystemMetrics `json:"fs"`
Runtime runtimeMetrics `json:"runtime"`
}
// containerMetrics represents the metric data collected about a container from the kubelet
type containerMetrics struct {
Name string `json:"name"`
StartTime time.Time `json:"startTime"`
CPU cpuMetrics `json:"cpu"`
Memory memoryMetrics `json:"memory"`
RootFS fileSystemMetrics `json:"rootfs"`
LogsFS fileSystemMetrics `json:"logs"`
}
// runtimeMetrics contains metric data on the runtime of the system
type runtimeMetrics struct {
ImageFileSystem fileSystemMetrics `json:"imageFs"`
}
// cpuMetrics represents the cpu usage data of a pod or node
type cpuMetrics struct {
Time time.Time `json:"time"`
UsageNanoCores int64 `json:"usageNanoCores"`
UsageCoreNanoSeconds int64 `json:"usageCoreNanoSeconds"`
}
// podMetrics contains metric data on a given pod
type podMetrics struct {
PodRef podReference `json:"podRef"`
StartTime *time.Time `json:"startTime"`
Containers []containerMetrics `json:"containers"`
Network networkMetrics `json:"network"`
Volumes []volumeMetrics `json:"volume"`
}
// podReference is how a pod is identified
type podReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
// memoryMetrics represents the memory metrics for a pod or node
type memoryMetrics struct {
Time time.Time `json:"time"`
AvailableBytes int64 `json:"availableBytes"`
UsageBytes int64 `json:"usageBytes"`
WorkingSetBytes int64 `json:"workingSetBytes"`
RSSBytes int64 `json:"rssBytes"`
PageFaults int64 `json:"pageFaults"`
MajorPageFaults int64 `json:"majorPageFaults"`
}
// fileSystemMetrics represents disk usage metrics for a pod or node
type fileSystemMetrics struct {
AvailableBytes int64 `json:"availableBytes"`
CapacityBytes int64 `json:"capacityBytes"`
UsedBytes int64 `json:"usedBytes"`
}
// networkMetrics represents network usage data for a pod or node
type networkMetrics struct {
Time time.Time `json:"time"`
RXBytes int64 `json:"rxBytes"`
RXErrors int64 `json:"rxErrors"`
TXBytes int64 `json:"txBytes"`
TXErrors int64 `json:"txErrors"`
}
// volumeMetrics represents the disk usage data for a given volume
type volumeMetrics struct {
Name string `json:"name"`
AvailableBytes int64 `json:"availableBytes"`
CapacityBytes int64 `json:"capacityBytes"`
UsedBytes int64 `json:"usedBytes"`
}

@@ -0,0 +1,27 @@
package kubernetes
type pods struct {
Kind string `json:"kind"`
APIVersion string `json:"apiVersion"`
Items []item `json:"items"`
}
type item struct {
Metadata metadata `json:"metadata"`
Spec spec `json:"spec"`
}
type metadata struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
Labels map[string]string `json:"labels"`
}
type spec struct {
Containers []container `json:"containers"`
}
type container struct {
Name string `json:"name"`
Image string `json:"image"`
}

@@ -0,0 +1,391 @@
package kubernetes
import (
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/testutil"
)
func TestKubernetesStats(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.RequestURI == "/stats/summary" {
w.WriteHeader(http.StatusOK)
if _, err := fmt.Fprintln(w, responseStatsSummary); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
return
}
}
if r.RequestURI == "/pods" {
w.WriteHeader(http.StatusOK)
if _, err := fmt.Fprintln(w, responsePods); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
return
}
}
}))
defer ts.Close()
labelFilter, err := filter.NewIncludeExcludeFilter([]string{"app", "superkey"}, nil)
require.NoError(t, err)
k := &Kubernetes{
URL: ts.URL,
labelFilter: labelFilter,
NodeMetricName: "kubernetes_node",
}
var acc testutil.Accumulator
err = acc.GatherError(k.Gather)
require.NoError(t, err)
fields := map[string]interface{}{
"cpu_usage_nanocores": int64(56652446),
"cpu_usage_core_nanoseconds": int64(101437561712262),
"memory_usage_bytes": int64(62529536),
"memory_working_set_bytes": int64(62349312),
"memory_rss_bytes": int64(47509504),
"memory_page_faults": int64(4769397409),
"memory_major_page_faults": int64(13),
"rootfs_available_bytes": int64(84379979776),
"rootfs_capacity_bytes": int64(105553100800),
"logsfs_available_bytes": int64(84379979776),
"logsfs_capacity_bytes": int64(105553100800),
}
tags := map[string]string{
"node_name": "node1",
"container_name": "kubelet",
}
acc.AssertContainsTaggedFields(t, "kubernetes_system_container", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(576996212),
"cpu_usage_core_nanoseconds": int64(774129887054161),
"memory_usage_bytes": int64(12313182208),
"memory_working_set_bytes": int64(5081538560),
"memory_rss_bytes": int64(35586048),
"memory_page_faults": int64(351742),
"memory_major_page_faults": int64(1236),
"memory_available_bytes": int64(10726387712),
"network_rx_bytes": int64(213281337459),
"network_rx_errors": int64(0),
"network_tx_bytes": int64(292869995684),
"network_tx_errors": int64(0),
"fs_available_bytes": int64(84379979776),
"fs_capacity_bytes": int64(105553100800),
"fs_used_bytes": int64(16754286592),
"runtime_image_fs_available_bytes": int64(84379979776),
"runtime_image_fs_capacity_bytes": int64(105553100800),
"runtime_image_fs_used_bytes": int64(5809371475),
}
tags = map[string]string{
"node_name": "node1",
}
acc.AssertContainsTaggedFields(t, "kubernetes_node", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(846503),
"cpu_usage_core_nanoseconds": int64(56507553554),
"memory_usage_bytes": int64(30789632),
"memory_working_set_bytes": int64(30789632),
"memory_rss_bytes": int64(30695424),
"memory_page_faults": int64(10761),
"memory_major_page_faults": int64(0),
"rootfs_available_bytes": int64(84379979776),
"rootfs_capacity_bytes": int64(105553100800),
"rootfs_used_bytes": int64(57344),
"logsfs_available_bytes": int64(84379979776),
"logsfs_capacity_bytes": int64(105553100800),
"logsfs_used_bytes": int64(24576),
}
tags = map[string]string{
"node_name": "node1",
"container_name": "foocontainer",
"namespace": "foons",
"pod_name": "foopod",
"app": "foo",
"superkey": "foobar",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_container", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(846503),
"cpu_usage_core_nanoseconds": int64(56507553554),
"memory_usage_bytes": int64(0),
"memory_working_set_bytes": int64(0),
"memory_rss_bytes": int64(0),
"memory_page_faults": int64(0),
"memory_major_page_faults": int64(0),
"rootfs_available_bytes": int64(0),
"rootfs_capacity_bytes": int64(0),
"rootfs_used_bytes": int64(0),
"logsfs_available_bytes": int64(0),
"logsfs_capacity_bytes": int64(0),
"logsfs_used_bytes": int64(0),
}
tags = map[string]string{
"node_name": "node1",
"container_name": "stopped-container",
"namespace": "foons",
"pod_name": "stopped-pod",
"app": "foo-stop",
"superkey": "superfoo",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_container", fields, tags)
fields = map[string]interface{}{
"available_bytes": int64(7903948800),
"capacity_bytes": int64(7903961088),
"used_bytes": int64(12288),
}
tags = map[string]string{
"node_name": "node1",
"volume_name": "volume1",
"namespace": "foons",
"pod_name": "foopod",
"app": "foo",
"superkey": "foobar",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_volume", fields, tags)
fields = map[string]interface{}{
"rx_bytes": int64(70749124),
"rx_errors": int64(0),
"tx_bytes": int64(47813506),
"tx_errors": int64(0),
}
tags = map[string]string{
"node_name": "node1",
"namespace": "foons",
"pod_name": "foopod",
"app": "foo",
"superkey": "foobar",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_network", fields, tags)
}
var responsePods = `
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"metadata": {
"name": "foopod",
"namespace": "foons",
"labels": {
"superkey": "foobar",
"app": "foo",
"exclude": "exclude0"
}
}
},
{
"metadata": {
"name": "stopped-pod",
"namespace": "foons",
"labels": {
"superkey": "superfoo",
"app": "foo-stop",
"exclude": "exclude1"
}
}
}
]
}
`
var responseStatsSummary = `
{
"node": {
"nodeName": "node1",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
},
{
"name": "bar",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
}
],
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:41Z",
"usageNanoCores": 576996212,
"usageCoreNanoSeconds": 774129887054161
},
"memory": {
"time": "2016-09-27T16:57:41Z",
"availableBytes": 10726387712,
"usageBytes": 12313182208,
"workingSetBytes": 5081538560,
"rssBytes": 35586048,
"pageFaults": 351742,
"majorPageFaults": 1236
},
"network": {
"time": "2016-09-27T16:57:41Z",
"rxBytes": 213281337459,
"rxErrors": 0,
"txBytes": 292869995684,
"txErrors": 0
},
"fs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 16754286592
},
"runtime": {
"imageFs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 5809371475
}
}
},
"pods": [
{
"podRef": {
"name": "foopod",
"namespace": "foons",
"uid": "6d305b06-8419-11e6-825c-42010af000ae"
},
"startTime": "2016-09-26T18:45:42Z",
"containers": [
{
"name": "foocontainer",
"startTime": "2016-09-26T18:46:43Z",
"cpu": {
"time": "2016-09-27T16:57:32Z",
"usageNanoCores": 846503,
"usageCoreNanoSeconds": 56507553554
},
"memory": {
"time": "2016-09-27T16:57:32Z",
"usageBytes": 30789632,
"workingSetBytes": 30789632,
"rssBytes": 30695424,
"pageFaults": 10761,
"majorPageFaults": 0
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 57344
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 24576
},
"userDefinedMetrics": null
}
],
"network": {
"time": "2016-09-27T16:57:34Z",
"rxBytes": 70749124,
"rxErrors": 0,
"txBytes": 47813506,
"txErrors": 0
},
"volume": [
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume1"
},
{
"availableBytes": 7903956992,
"capacityBytes": 7903961088,
"usedBytes": 4096,
"name": "volume2"
},
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume3"
},
{
"availableBytes": 7903952896,
"capacityBytes": 7903961088,
"usedBytes": 8192,
"name": "volume4"
}
]
},
{
"podRef": {
"name": "stopped-pod",
"namespace": "foons",
"uid": "da7c1865-d67d-4688-b679-c485ed44b2aa"
},
"startTime": null,
"containers": [
{
"name": "stopped-container",
"startTime": "2016-09-26T18:46:43Z",
"cpu": {
"time": "2016-09-27T16:57:32Z",
"usageNanoCores": 846503,
"usageCoreNanoSeconds": 56507553554
}
}
]
}
]
}`

@@ -0,0 +1,36 @@
# Read metrics from the kubernetes kubelet api
[[inputs.kubernetes]]
  ## URL for the kubelet, if empty read metrics from all nodes in the cluster
  url = "http://127.0.0.1:10255"

  ## Use bearer token for authorization. ('bearer_token' takes priority)
  ## If both of these are empty, we'll use the default service account token
  ## at /var/run/secrets/kubernetes.io/serviceaccount/token.
  ##
  ## To re-read the token at each interval, please use a file with the
  ## bearer_token option. If given a string, Telegraf will always use that
  ## token.
  # bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  ## OR
  # bearer_token_string = "abc_123"

  ## Kubernetes Node Metric Name
  ## The default Kubernetes node metric name (i.e. kubernetes_node) is the same
  ## for the kubernetes and kube_inventory plugins. To avoid conflicts, set this
  ## option to a different value.
  # node_metric_name = "kubernetes_node"

  ## Pod labels to be added as tags. An empty array for both include and
  ## exclude will include all labels.
  # label_include = []
  # label_exclude = ["*"]

  ## Set response_timeout (default 5 seconds)
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/path/to/cafile"
  # tls_cert = "/path/to/certfile"
  # tls_key = "/path/to/keyfile"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false