
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions


@@ -0,0 +1,135 @@
# Amazon CloudWatch Output Plugin
This plugin writes metrics to the [Amazon CloudWatch][cloudwatch] service.
⭐ Telegraf v0.10.1
🏷️ cloud
💻 all
[cloudwatch]: https://aws.amazon.com/cloudwatch
## Amazon Authentication
This plugin uses a credential chain to authenticate with the CloudWatch API
endpoint. The plugin attempts to authenticate in the following order:
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]
If you are using credentials from a web identity provider, you can specify the
session name using `role_session_name`. If left empty, the current timestamp is
used.

The IAM user needs only the `cloudwatch:PutMetricData` permission. A minimal
credential configuration sketch follows the reference links below.
[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
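
This sketch illustrates the web identity flow (rule 1 above); the role ARN and
token path are hypothetical placeholders, not values shipped with the plugin:

```toml
[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"

  ## Hypothetical role and token file, shown only to illustrate the
  ## STS web identity flow; substitute your own values.
  role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch"
  web_identity_token_file = "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
  role_session_name = "telegraf"
```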
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Configuration for AWS CloudWatch output.
[[outputs.cloudwatch]]
## Amazon REGION
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
# access_key = ""
# secret_key = ""
# token = ""
# role_arn = ""
# web_identity_token_file = ""
# role_session_name = ""
# profile = ""
# shared_credential_file = ""
## Endpoint to make requests against; the correct endpoint is automatically
## determined, so this option should only be set if you wish to override the
## default.
## ex: endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## Set http_proxy
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"
## If you have a large number of metrics, you should consider sending
## statistic values instead of raw metrics, which can not only improve
## performance but also save on AWS API cost. If this flag is enabled, the
## plugin parses the required CloudWatch statistic fields (count, min, max,
## and sum) and sends them to CloudWatch. You can use the basicstats
## aggregator to calculate those fields. If not all statistic fields are
## available, all fields are still sent as raw metrics.
# write_statistics = false
## Enable high resolution metrics with 1 second precision (if not enabled,
## standard resolution metrics with 60 seconds precision are used)
# high_resolution_metrics = false
```
For this output plugin to function correctly, the following options must be
configured; a minimal example follows the list.
* region
* namespace
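
A minimal sketch that sets only these two required options, using the values
from the sample configuration above:

```toml
[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"
```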
### region
The region is the Amazon region that you wish to connect to. Examples include
but are not limited to:
* us-west-1
* us-west-2
* us-east-1
* ap-southeast-1
* ap-southeast-2
### namespace
The namespace used for AWS CloudWatch metrics.
### write_statistics
If you have a large number of metrics, you should consider sending statistic
values instead of raw metrics, which can not only improve performance but also
save on AWS API cost. If this flag is enabled, the plugin parses the required
[CloudWatch statistic fields][statistic fields] (count, min, max, and sum) and
sends them to CloudWatch. You can use the `basicstats` aggregator to calculate
those fields. If not all statistic fields are available, all fields are still
sent as raw metrics. A configuration sketch pairing this output with the
`basicstats` aggregator is shown below.
[statistic fields]: https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet
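
This sketch assumes the aggregator's default `_min`/`_max`/`_sum`/`_count`
field suffixes; the period shown is an arbitrary choice:

```toml
[[aggregators.basicstats]]
  ## Aggregate raw values over each period, emitting field_min, field_max,
  ## field_sum and field_count in place of the original field.
  period = "60s"
  drop_original = true
  stats = ["min", "max", "sum", "count"]

[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"
  write_statistics = true
```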
### high_resolution_metrics
Enable high resolution metrics (1 second precision) instead of standard ones
(60 seconds precision).


@@ -0,0 +1,425 @@
//go:generate ../../../tools/readme_config_includer/generator
package cloudwatch
import (
"context"
_ "embed"
"math"
"net/http"
"sort"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/cloudwatch"
"github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
"github.com/influxdata/telegraf"
common_aws "github.com/influxdata/telegraf/plugins/common/aws"
common_http "github.com/influxdata/telegraf/plugins/common/http"
"github.com/influxdata/telegraf/plugins/outputs"
)
//go:embed sample.conf
var sampleConfig string
type CloudWatch struct {
	Namespace             string          `toml:"namespace"` // CloudWatch Metrics Namespace
	HighResolutionMetrics bool            `toml:"high_resolution_metrics"`
	WriteStatistics       bool            `toml:"write_statistics"`
	Log                   telegraf.Logger `toml:"-"`

	common_aws.CredentialConfig
	common_http.HTTPClientConfig

	svc    *cloudwatch.Client
	client *http.Client
}
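// statisticType identifies which CloudWatch statistic (min, max, sum, count) a
// field value contributes to; statisticTypeNone marks a plain value field.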
type statisticType int
const (
statisticTypeNone statisticType = iota
statisticTypeMax
statisticTypeMin
statisticTypeSum
statisticTypeCount
)
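// cloudwatchField accumulates field values from a telegraf.Metric and builds
// the corresponding CloudWatch MetricDatums.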
type cloudwatchField interface {
addValue(sType statisticType, value float64)
buildDatum() []types.MetricDatum
}
type statisticField struct {
metricName string
fieldName string
tags map[string]string
values map[statisticType]float64
timestamp time.Time
storageResolution int64
}
func (f *statisticField) addValue(sType statisticType, value float64) {
if sType != statisticTypeNone {
f.values[sType] = value
}
}
func (f *statisticField) buildDatum() []types.MetricDatum {
var datums []types.MetricDatum
if f.hasAllFields() {
// If we have all required fields, we build datum with StatisticValues
vmin := f.values[statisticTypeMin]
vmax := f.values[statisticTypeMax]
vsum := f.values[statisticTypeSum]
vcount := f.values[statisticTypeCount]
datum := types.MetricDatum{
MetricName: aws.String(strings.Join([]string{f.metricName, f.fieldName}, "_")),
Dimensions: BuildDimensions(f.tags),
Timestamp: aws.Time(f.timestamp),
StatisticValues: &types.StatisticSet{
Minimum: aws.Float64(vmin),
Maximum: aws.Float64(vmax),
Sum: aws.Float64(vsum),
SampleCount: aws.Float64(vcount),
},
StorageResolution: aws.Int32(int32(f.storageResolution)),
}
datums = append(datums, datum)
} else {
// If we don't have all required fields, we build each field as independent datum
for sType, value := range f.values {
datum := types.MetricDatum{
Value: aws.Float64(value),
Dimensions: BuildDimensions(f.tags),
Timestamp: aws.Time(f.timestamp),
}
switch sType {
case statisticTypeMin:
datum.MetricName = aws.String(strings.Join([]string{f.metricName, f.fieldName, "min"}, "_"))
case statisticTypeMax:
datum.MetricName = aws.String(strings.Join([]string{f.metricName, f.fieldName, "max"}, "_"))
case statisticTypeSum:
datum.MetricName = aws.String(strings.Join([]string{f.metricName, f.fieldName, "sum"}, "_"))
case statisticTypeCount:
datum.MetricName = aws.String(strings.Join([]string{f.metricName, f.fieldName, "count"}, "_"))
default:
// should not be here
continue
}
datums = append(datums, datum)
}
}
return datums
}
func (f *statisticField) hasAllFields() bool {
_, hasMin := f.values[statisticTypeMin]
_, hasMax := f.values[statisticTypeMax]
_, hasSum := f.values[statisticTypeSum]
_, hasCount := f.values[statisticTypeCount]
return hasMin && hasMax && hasSum && hasCount
}
type valueField struct {
metricName string
fieldName string
tags map[string]string
value float64
timestamp time.Time
storageResolution int64
}
func (f *valueField) addValue(sType statisticType, value float64) {
if sType == statisticTypeNone {
f.value = value
}
}
func (f *valueField) buildDatum() []types.MetricDatum {
return []types.MetricDatum{
{
MetricName: aws.String(strings.Join([]string{f.metricName, f.fieldName}, "_")),
Value: aws.Float64(f.value),
Dimensions: BuildDimensions(f.tags),
Timestamp: aws.Time(f.timestamp),
StorageResolution: aws.Int32(int32(f.storageResolution)),
},
}
}
func (*CloudWatch) SampleConfig() string {
return sampleConfig
}
func (c *CloudWatch) Connect() error {
cfg, err := c.CredentialConfig.Credentials()
if err != nil {
return err
}
ctx := context.Background()
client, err := c.HTTPClientConfig.CreateClient(ctx, c.Log)
if err != nil {
return err
}
c.client = client
c.svc = cloudwatch.NewFromConfig(cfg, func(options *cloudwatch.Options) {
options.HTTPClient = c.client
})
return nil
}
func (c *CloudWatch) Close() error {
if c.client != nil {
c.client.CloseIdleConnections()
}
return nil
}
func (c *CloudWatch) Write(metrics []telegraf.Metric) error {
var datums []types.MetricDatum
for _, m := range metrics {
d := BuildMetricDatum(c.WriteStatistics, c.HighResolutionMetrics, m)
datums = append(datums, d...)
}
	// PutMetricData only supports up to 1000 metric datums per call
// https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html
const maxDatumsPerCall = 1000
for _, partition := range PartitionDatums(maxDatumsPerCall, datums) {
err := c.WriteToCloudWatch(partition)
if err != nil {
return err
}
}
return nil
}
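// WriteToCloudWatch sends a single batch of datums to CloudWatch using the
// PutMetricData API.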
func (c *CloudWatch) WriteToCloudWatch(datums []types.MetricDatum) error {
params := &cloudwatch.PutMetricDataInput{
MetricData: datums,
Namespace: aws.String(c.Namespace),
}
_, err := c.svc.PutMetricData(context.Background(), params)
if err != nil {
		c.Log.Errorf("Unable to write to CloudWatch: %v", err)
}
return err
}
// PartitionDatums partitions the MetricDatums into smaller slices of at most the given size
// so that each slice stays under the limit for AWS API calls.
func PartitionDatums(size int, datums []types.MetricDatum) [][]types.MetricDatum {
numberOfPartitions := len(datums) / size
if len(datums)%size != 0 {
numberOfPartitions++
}
partitions := make([][]types.MetricDatum, numberOfPartitions)
for i := 0; i < numberOfPartitions; i++ {
start := size * i
end := size * (i + 1)
if end > len(datums) {
end = len(datums)
}
partitions[i] = datums[start:end]
}
return partitions
}
// BuildMetricDatum makes MetricDatums from a telegraf.Metric. It checks whether all required
// fields of cloudwatch.StatisticSet are available. If so, it builds a single MetricDatum from
// the statistic values. Otherwise, each field is built as an independent datum.
func BuildMetricDatum(buildStatistic, highResolutionMetrics bool, point telegraf.Metric) []types.MetricDatum {
fields := make(map[string]cloudwatchField)
tags := point.Tags()
storageResolution := int64(60)
if highResolutionMetrics {
storageResolution = 1
}
for k, v := range point.Fields() {
val, ok := convert(v)
if !ok {
// Only fields with values that can be converted to float64 (and within CloudWatch boundary) are supported.
// Non-supported fields are skipped.
continue
}
sType, fieldName := getStatisticType(k)
// If statistic metric is not enabled or non-statistic type, just take current field as a value field.
if !buildStatistic || sType == statisticTypeNone {
fields[k] = &valueField{
metricName: point.Name(),
fieldName: k,
tags: tags,
timestamp: point.Time(),
value: val,
storageResolution: storageResolution,
}
continue
}
// Otherwise, it shall be a statistic field.
if _, ok := fields[fieldName]; !ok {
// Hit an uncached field, create statisticField for first time
fields[fieldName] = &statisticField{
metricName: point.Name(),
fieldName: fieldName,
tags: tags,
timestamp: point.Time(),
values: map[statisticType]float64{
sType: val,
},
storageResolution: storageResolution,
}
} else {
// Add new statistic value to this field
fields[fieldName].addValue(sType, val)
}
}
var datums []types.MetricDatum
for _, f := range fields {
d := f.buildDatum()
datums = append(datums, d...)
}
return datums
}
// BuildDimensions makes a list of Dimensions by using a Point's tags. CloudWatch supports up to
// 10 dimensions per metric, so we only keep up to the first 10 alphabetically.
// This always includes the "host" tag if it exists.
func BuildDimensions(mTags map[string]string) []types.Dimension {
const maxDimensions = 10
dimensions := make([]types.Dimension, 0, maxDimensions)
// This is pretty ugly, but we always want to include the "host" tag if it exists.
if host, ok := mTags["host"]; ok {
dimensions = append(dimensions, types.Dimension{
Name: aws.String("host"),
Value: aws.String(host),
})
}
var keys []string
for k := range mTags {
if k != "host" {
keys = append(keys, k)
}
}
sort.Strings(keys)
for _, k := range keys {
if len(dimensions) >= maxDimensions {
break
}
		value := mTags[k]
		if value == "" {
			continue
		}
		dimensions = append(dimensions, types.Dimension{
			Name:  aws.String(k),
			Value: aws.String(value),
		})
}
return dimensions
}
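// getStatisticType splits a field name into the statistic type encoded by a
// _max, _min, _sum or _count suffix and the remaining base field name.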
func getStatisticType(name string) (sType statisticType, fieldName string) {
switch {
case strings.HasSuffix(name, "_max"):
sType = statisticTypeMax
fieldName = strings.TrimSuffix(name, "_max")
case strings.HasSuffix(name, "_min"):
sType = statisticTypeMin
fieldName = strings.TrimSuffix(name, "_min")
case strings.HasSuffix(name, "_sum"):
sType = statisticTypeSum
fieldName = strings.TrimSuffix(name, "_sum")
case strings.HasSuffix(name, "_count"):
sType = statisticTypeCount
fieldName = strings.TrimSuffix(name, "_count")
default:
sType = statisticTypeNone
fieldName = name
}
return sType, fieldName
}
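// convert coerces a supported field value to float64, returning ok=false for
// unsupported types and for values outside the range CloudWatch accepts.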
func convert(v interface{}) (value float64, ok bool) {
ok = true
switch t := v.(type) {
case int:
value = float64(t)
case int32:
value = float64(t)
case int64:
value = float64(t)
case uint64:
value = float64(t)
case float64:
value = t
case bool:
if t {
value = 1
} else {
value = 0
}
case time.Time:
value = float64(t.Unix())
default:
// Skip unsupported type.
ok = false
return value, ok
}
// Do CloudWatch boundary checking
// Constraints at: http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html
switch {
case math.IsNaN(value):
return 0, false
case math.IsInf(value, 0):
return 0, false
case value > 0 && value < float64(8.515920e-109):
return 0, false
case value > float64(1.174271e+108):
return 0, false
}
return value, ok
}
func init() {
outputs.Add("cloudwatch", func() telegraf.Output {
return &CloudWatch{}
})
}


@@ -0,0 +1,158 @@
package cloudwatch
import (
"math"
"sort"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/testutil"
)
// Test that each tag becomes one dimension
func TestBuildDimensions(t *testing.T) {
const maxDimensions = 10
testPoint := testutil.TestMetric(1)
dimensions := BuildDimensions(testPoint.Tags())
tagKeys := make([]string, 0, len(testPoint.Tags()))
for k := range testPoint.Tags() {
tagKeys = append(tagKeys, k)
}
sort.Strings(tagKeys)
if len(testPoint.Tags()) >= maxDimensions {
		require.Len(t, dimensions, maxDimensions, "Number of dimensions should be capped at maxDimensions")
} else {
require.Len(t, dimensions, len(testPoint.Tags()), "Number of dimensions should be equal to number of tags")
}
for i, key := range tagKeys {
		if i >= maxDimensions {
break
}
require.Equal(t, key, *dimensions[i].Name, "Key should be equal")
require.Equal(t, testPoint.Tags()[key], *dimensions[i].Value, "Value should be equal")
}
}
// Test that metrics with valid values have a MetricDatum created whereas
// invalid ones do not. Skips "time.Time" type as something is converting the
// value to string.
func TestBuildMetricDatums(t *testing.T) {
zero := 0.0
validMetrics := []telegraf.Metric{
testutil.TestMetric(1),
testutil.TestMetric(int32(1)),
testutil.TestMetric(int64(1)),
testutil.TestMetric(float64(1)),
testutil.TestMetric(float64(0)),
testutil.TestMetric(math.Copysign(zero, -1)), // the CW documentation does not call out -0 as rejected
testutil.TestMetric(float64(8.515920e-109)),
testutil.TestMetric(float64(1.174271e+108)), // largest should be 1.174271e+108
testutil.TestMetric(true),
}
invalidMetrics := []telegraf.Metric{
testutil.TestMetric("Foo"),
testutil.TestMetric(math.Log(-1.0)),
testutil.TestMetric(float64(8.515919e-109)), // smallest should be 8.515920e-109
testutil.TestMetric(float64(1.174272e+108)), // largest should be 1.174271e+108
}
for _, point := range validMetrics {
datums := BuildMetricDatum(false, false, point)
require.Lenf(t, datums, 1, "Valid point should create a Datum {value: %v}", point)
}
for _, point := range invalidMetrics {
datums := BuildMetricDatum(false, false, point)
		require.Emptyf(t, datums, "Invalid point should not create a Datum {value: %v}", point)
}
statisticMetric := metric.New(
"test1",
map[string]string{"tag1": "value1"},
map[string]interface{}{"value_max": float64(10), "value_min": float64(0), "value_sum": float64(100), "value_count": float64(20)},
time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC),
)
datums := BuildMetricDatum(true, false, statisticMetric)
require.Lenf(t, datums, 1, "Valid point should create a Datum {value: %v}", statisticMetric)
multiFieldsMetric := metric.New(
"test1",
map[string]string{"tag1": "value1"},
map[string]interface{}{"valueA": float64(10), "valueB": float64(0), "valueC": float64(100), "valueD": float64(20)},
time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC),
)
datums = BuildMetricDatum(true, false, multiFieldsMetric)
require.Lenf(t, datums, 4, "Each field should create a Datum {value: %v}", multiFieldsMetric)
multiStatisticMetric := metric.New(
"test1",
map[string]string{"tag1": "value1"},
map[string]interface{}{
"valueA_max": float64(10), "valueA_min": float64(0), "valueA_sum": float64(100), "valueA_count": float64(20),
"valueB_max": float64(10), "valueB_min": float64(0), "valueB_sum": float64(100), "valueB_count": float64(20),
"valueC_max": float64(10), "valueC_min": float64(0), "valueC_sum": float64(100),
"valueD": float64(10), "valueE": float64(0),
},
time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC),
)
datums = BuildMetricDatum(true, false, multiStatisticMetric)
require.Lenf(t, datums, 7, "Valid point should create a Datum {value: %v}", multiStatisticMetric)
}
func TestMetricDatumResolution(t *testing.T) {
const expectedStandardResolutionValue = int32(60)
const expectedHighResolutionValue = int32(1)
m := testutil.TestMetric(1)
standardResolutionDatum := BuildMetricDatum(false, false, m)
actualStandardResolutionValue := *standardResolutionDatum[0].StorageResolution
require.Equal(t, expectedStandardResolutionValue, actualStandardResolutionValue)
highResolutionDatum := BuildMetricDatum(false, true, m)
actualHighResolutionValue := *highResolutionDatum[0].StorageResolution
require.Equal(t, expectedHighResolutionValue, actualHighResolutionValue)
}
func TestBuildMetricDatums_SkipEmptyTags(t *testing.T) {
input := testutil.MustMetric(
"cpu",
map[string]string{
"host": "example.org",
"foo": "",
},
map[string]interface{}{
"value": int64(42),
},
time.Unix(0, 0),
)
datums := BuildMetricDatum(true, false, input)
require.Len(t, datums[0].Dimensions, 1)
}
func TestPartitionDatums(t *testing.T) {
testDatum := types.MetricDatum{
MetricName: aws.String("Foo"),
Value: aws.Float64(1),
}
zeroDatum := make([]types.MetricDatum, 0)
oneDatum := []types.MetricDatum{testDatum}
twoDatum := []types.MetricDatum{testDatum, testDatum}
threeDatum := []types.MetricDatum{testDatum, testDatum, testDatum}
require.Empty(t, PartitionDatums(2, zeroDatum))
require.Equal(t, [][]types.MetricDatum{oneDatum}, PartitionDatums(2, oneDatum))
require.Equal(t, [][]types.MetricDatum{twoDatum}, PartitionDatums(2, twoDatum))
require.Equal(t, [][]types.MetricDatum{twoDatum, oneDatum}, PartitionDatums(2, threeDatum))
}


@@ -0,0 +1,49 @@
# Configuration for AWS CloudWatch output.
[[outputs.cloudwatch]]
## Amazon REGION
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
# access_key = ""
# secret_key = ""
# token = ""
# role_arn = ""
# web_identity_token_file = ""
# role_session_name = ""
# profile = ""
# shared_credential_file = ""
## Endpoint to make requests against; the correct endpoint is automatically
## determined, so this option should only be set if you wish to override the
## default.
## ex: endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## Set http_proxy
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"
## If you have a large number of metrics, you should consider sending
## statistic values instead of raw metrics, which can not only improve
## performance but also save on AWS API cost. If this flag is enabled, the
## plugin parses the required CloudWatch statistic fields (count, min, max,
## and sum) and sends them to CloudWatch. You can use the basicstats
## aggregator to calculate those fields. If not all statistic fields are
## available, all fields are still sent as raw metrics.
# write_statistics = false
## Enable high resolution metrics with 1 second precision (if not enabled,
## standard resolution metrics with 60 seconds precision are used)
# high_resolution_metrics = false