
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions

plugins/outputs/cloudwatch_logs/README.md
@@ -0,0 +1,114 @@
# Amazon CloudWatch Logs Output Plugin
This plugin writes metrics as log events to the [Amazon CloudWatch][cloudwatch]
Logs service.
⭐ Telegraf v1.19.0
🏷️ cloud, logging
💻 all
[cloudwatch]: https://aws.amazon.com/cloudwatch
## Amazon Authentication
This plugin uses a credential chain for authentication with the CloudWatch Logs
API endpoint. The plugin attempts to authenticate in the following order (a
configuration sketch follows the list):
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if the `role_arn` attribute is specified (source
credentials are evaluated from the subsequent rules). The `endpoint_url`
attribute is used only for the CloudWatch Logs service; the STS global
endpoint is used when fetching credentials.
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]
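As a sketch, assuming authentication through a web identity token (the role
ARN, token file path, and session name below are illustrative placeholders,
not values prescribed by this plugin), the relevant options combine like this:
```toml
[[outputs.cloudwatch_logs]]
region = "us-east-1"
log_group = "my-group-name"
log_stream = "tag:location"
log_data_metric_name = "docker_log"
log_data_source = "field:message"
## Placeholders: replace with your own role ARN and token file path.
role_arn = "arn:aws:iam::123456789012:role/telegraf-logs-writer"
web_identity_token_file = "/var/run/secrets/tokens/telegraf-token"
role_session_name = "telegraf"
```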
The IAM user needs the following permissions; a minimal policy sketch follows
the list (see this [reference][4] for more details):
- `logs:DescribeLogGroups` - required to check whether the configured log group
exists.
- `logs:DescribeLogStreams` - required to view all log streams associated with a
log group.
- `logs:CreateLogStream` - required to create a new log stream in a log group.
- `logs:PutLogEvents` - required to upload a batch of log events into a log
stream.
[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
[4]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html
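A minimal IAM policy granting these permissions might look like the sketch
below; the account ID and log group name in the resource ARN are placeholders
and should be narrowed to your own log group:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:123456789012:log-group:my-group-name*"
    }
  ]
}
```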
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Configuration for AWS CloudWatchLogs output.
[[outputs.cloudwatch_logs]]
## The region is the Amazon region that you wish to connect to.
## Examples include but are not limited to:
## - us-west-1
## - us-west-2
## - us-east-1
## - ap-southeast-1
## - ap-southeast-2
## ...
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
#access_key = ""
#secret_key = ""
#token = ""
#role_arn = ""
#web_identity_token_file = ""
#role_session_name = ""
#profile = ""
#shared_credential_file = ""
## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default, e.g endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"
## Log stream in log group
## Either a log stream name or a reference to a metric attribute from which
## the name can be parsed: tag:<TAG_NAME> or field:<FIELD_NAME>. If the log
## stream does not exist, it will be created. Since AWS does not automatically
## delete log streams whose log entries have expired (i.e. empty log streams),
## you need to put appropriate house-keeping in place
## (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"
## Source of log data - metric name
## Specify the name of the metric from which the log data should be
## retrieved. E.g., if you are using the docker_log plugin to stream logs from
## containers, then specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"
## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## E.g., if you are using the docker_log plugin to stream logs from containers,
## then specify log_data_source = "field:message"
log_data_source = "field:message"
```
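As a concrete illustration of the settings above (assuming the `docker_log`
input plugin as the metric source): with `log_stream = "tag:location"` and
`log_data_source = "field:message"`, a metric such as
```text
docker_log,location=us-west message="starting container"
```
is written to the log stream `us-west` of the log group `my-group-name`, with
`starting container` as the event text and the metric timestamp, truncated to
milliseconds, as the event timestamp.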

plugins/outputs/cloudwatch_logs/cloudwatch_logs.go
@@ -0,0 +1,405 @@
//go:generate ../../../tools/readme_config_includer/generator
package cloudwatch_logs
import (
"context"
_ "embed"
"errors"
"fmt"
"sort"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs"
"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs/types"
"github.com/influxdata/telegraf"
common_aws "github.com/influxdata/telegraf/plugins/common/aws"
"github.com/influxdata/telegraf/plugins/outputs"
)
//go:embed sample.conf
var sampleConfig string
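// messageBatch accumulates log events destined for a single PutLogEvents call
// and tracks the event count against the AWS per-batch limits.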
type messageBatch struct {
logEvents []types.InputLogEvent
messageCount int
}
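// logStreamContainer holds the pending message batches and the upload
// sequence token for a single log stream.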
type logStreamContainer struct {
currentBatchSizeBytes int
currentBatchIndex int
messageBatches []messageBatch
sequenceToken string
}
// CloudWatch Logs service interface
type cloudWatchLogs interface {
DescribeLogGroups(
context.Context,
*cloudwatchlogs.DescribeLogGroupsInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.DescribeLogGroupsOutput, error)
DescribeLogStreams(
context.Context,
*cloudwatchlogs.DescribeLogStreamsInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.DescribeLogStreamsOutput, error)
CreateLogStream(
context.Context,
*cloudwatchlogs.CreateLogStreamInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.CreateLogStreamOutput, error)
PutLogEvents(context.Context, *cloudwatchlogs.PutLogEventsInput, ...func(options *cloudwatchlogs.Options)) (*cloudwatchlogs.PutLogEventsOutput, error)
}
// CloudWatchLogs plugin object definition
type CloudWatchLogs struct {
LogGroup string `toml:"log_group"`
lg *types.LogGroup // log group data
LogStream string `toml:"log_stream"`
lsKey string // log stream source: tag or field
lsSource string // log stream source tag or field name
ls map[string]*logStreamContainer // log stream info
LDMetricName string `toml:"log_data_metric_name"`
LDSource string `toml:"log_data_source"`
logDatKey string // log data source (tag or field)
logDataSource string // log data source tag or field name
svc cloudWatchLogs // cloudwatch logs service
Log telegraf.Logger `toml:"-"`
common_aws.CredentialConfig
}
const (
// Log events must comply with the following
// (https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/#CloudWatchLogs.PutLogEvents):
maxLogMessageLength = 262144 - awsOverheadPerLogMessageBytes // In bytes
maxBatchSizeBytes = 1048576 // The sum of all event messages in UTF-8, plus 26 bytes for each log event
awsOverheadPerLogMessageBytes = 26
maxFutureLogEventTimeOffset = time.Hour * 2 // None of the log events in the batch can be more than 2 hours in the future.
maxPastLogEventTimeOffset = time.Hour * 24 * 14 // None of the log events in the batch can be older than 14 days or older
// than the retention period of the log group.
maxItemsInBatch = 10000 // The maximum number of log events in a batch is 10,000.
// maxTimeSpanInBatch = time.Hour * 24 // A batch of log events in a single request cannot span more than 24 hours.
// Otherwise, the operation fails.
)
func (*CloudWatchLogs) SampleConfig() string {
return sampleConfig
}
// Init validates the plugin configuration parameters
func (c *CloudWatchLogs) Init() error {
if c.LogGroup == "" {
return errors.New("log group is not set")
}
if c.LogStream == "" {
return errors.New("log stream is not set")
}
if c.LDMetricName == "" {
return errors.New("log data metrics name is not set")
}
if c.LDSource == "" {
return errors.New("log data source is not set")
}
lsSplitArray := strings.Split(c.LDSource, ":")
if len(lsSplitArray) != 2 {
return errors.New("log data source is not properly formatted, ':' is missing.\n" +
"Should be 'tag:<tag_name>' or 'field:<field_name>'")
}
if lsSplitArray[0] != "tag" && lsSplitArray[0] != "field" {
return errors.New("log data source is not properly formatted.\n" +
"Should be 'tag:<tag_name>' or 'field:<field_name>'")
}
c.logDatKey = lsSplitArray[0]
c.logDataSource = lsSplitArray[1]
c.Log.Debugf("Log data: key %q, source %q...", c.logDatKey, c.logDataSource)
if c.lsSource == "" {
c.lsSource = c.LogStream
c.Log.Debugf("Log stream %q...", c.lsSource)
}
return nil
}
// Connect establishes the connection to the CloudWatch Logs service
func (c *CloudWatchLogs) Connect() error {
var queryToken *string
var dummyToken = "dummy"
var logGroupsOutput = &cloudwatchlogs.DescribeLogGroupsOutput{NextToken: &dummyToken}
var err error
awsCreds, awsErr := c.CredentialConfig.Credentials()
if awsErr != nil {
return awsErr
}
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
return err
}
cfg.Credentials = awsCreds.Credentials
if c.CredentialConfig.EndpointURL != "" && c.CredentialConfig.Region != "" {
c.svc = cloudwatchlogs.NewFromConfig(cfg, func(o *cloudwatchlogs.Options) {
o.Region = c.CredentialConfig.Region
o.BaseEndpoint = &c.CredentialConfig.EndpointURL
})
} else {
c.svc = cloudwatchlogs.NewFromConfig(cfg)
}
// Find log group with name 'c.LogGroup'
if c.lg == nil { // Only on the first connection attempt; skipped when the connection is retried
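// Page through the DescribeLogGroups results until the configured log
// group is found or the pages are exhausted.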
for logGroupsOutput.NextToken != nil {
logGroupsOutput, err = c.svc.DescribeLogGroups(
context.Background(),
&cloudwatchlogs.DescribeLogGroupsInput{
LogGroupNamePrefix: &c.LogGroup,
NextToken: queryToken})
if err != nil {
return err
}
queryToken = logGroupsOutput.NextToken
for _, logGroup := range logGroupsOutput.LogGroups {
lg := logGroup
if *(lg.LogGroupName) == c.LogGroup {
c.Log.Debugf("Found log group %q", c.LogGroup)
c.lg = &lg
}
}
}
if c.lg == nil {
return fmt.Errorf("can't find log group %q", c.LogGroup)
}
lsSplitArray := strings.Split(c.LogStream, ":")
if len(lsSplitArray) > 1 {
if lsSplitArray[0] == "tag" || lsSplitArray[0] == "field" {
c.lsKey = lsSplitArray[0]
c.lsSource = lsSplitArray[1]
c.Log.Debugf("Log stream: key %q, source %q...", c.lsKey, c.lsSource)
}
}
if c.lsSource == "" {
c.lsSource = c.LogStream
c.Log.Debugf("Log stream %q...", c.lsSource)
}
c.ls = make(map[string]*logStreamContainer)
}
return nil
}
// Close closes the plugin's connection to the remote receiver
func (*CloudWatchLogs) Close() error {
return nil
}
// Write sends the given metrics as log events to CloudWatch Logs
func (c *CloudWatchLogs) Write(metrics []telegraf.Metric) error {
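// Compute the acceptable event time window: CloudWatch Logs rejects events
// older than the log group's retention period (or 14 days by default) and
// events more than 2 hours in the future.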
minTime := time.Now()
if c.lg.RetentionInDays != nil {
minTime = minTime.Add(-time.Hour * 24 * time.Duration(*c.lg.RetentionInDays))
} else {
minTime = minTime.Add(-maxPastLogEventTimeOffset)
}
maxTime := time.Now().Add(maxFutureLogEventTimeOffset)
for _, m := range metrics {
// Filtering metrics
if m.Name() != c.LDMetricName {
continue
}
if m.Time().After(maxTime) || m.Time().Before(minTime) {
c.Log.Debugf("Processing metric '%v': Metric is filtered based on TS!", m)
continue
}
tags := m.Tags()
fields := m.Fields()
logStream := ""
logData := ""
lsContainer := &logStreamContainer{
currentBatchSizeBytes: 0,
currentBatchIndex: 0,
messageBatches: []messageBatch{
{},
},
}
switch c.lsKey {
case "tag":
logStream = tags[c.lsSource]
case "field":
if fields[c.lsSource] != nil {
logStream = fields[c.lsSource].(string)
}
default:
logStream = c.lsSource
}
if logStream == "" {
c.Log.Errorf("Processing metric '%v': log stream: key %q, source %q, not found!", m, c.lsKey, c.lsSource)
continue
}
switch c.logDatKey {
case "tag":
logData = tags[c.logDataSource]
case "field":
if fields[c.logDataSource] != nil {
logData = fields[c.logDataSource].(string)
}
}
if logData == "" {
c.Log.Errorf("Processing metric '%v': log data: key %q, source %q, not found!", m, c.logDatKey, c.logDataSource)
continue
}
// Skip messages that do not fit into the maximum AWS log message size
if len(logData) > maxLogMessageLength {
metricStr := fmt.Sprintf("%v", m)
c.Log.Errorf(
"Processing metric '%s...', message is too large to fit to aws max log message size: %d (bytes) !",
metricStr[0:maxLogMessageLength/1000],
maxLogMessageLength,
)
continue
}
// Batching log messages
// awsOverheadPerLogMessageBytes is the mandatory AWS overhead per log message
messageSizeInBytesForAWS := len(logData) + awsOverheadPerLogMessageBytes
// Pick up existing or prepare new log stream container.
// Log stream container stores logs per log stream in
// the AWS Cloudwatch logs API friendly structure
if val, ok := c.ls[logStream]; ok {
lsContainer = val
} else {
lsContainer.messageBatches[0].messageCount = 0
lsContainer.messageBatches[0].logEvents = make([]types.InputLogEvent, 0)
c.ls[logStream] = lsContainer
}
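// Roll over to a new batch when adding this message would exceed the AWS
// batch size limit (1 MiB) or the per-batch event count limit (10,000).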
if lsContainer.currentBatchSizeBytes+messageSizeInBytesForAWS > maxBatchSizeBytes ||
lsContainer.messageBatches[lsContainer.currentBatchIndex].messageCount >= maxItemsInBatch {
// Need to start a new batch and reset the counters
lsContainer.currentBatchIndex++
lsContainer.messageBatches = append(lsContainer.messageBatches,
messageBatch{
// The current message becomes the first event of the new batch
messageCount: 1,
},
)
lsContainer.currentBatchSizeBytes = messageSizeInBytesForAWS
} else {
lsContainer.currentBatchSizeBytes += messageSizeInBytesForAWS
lsContainer.messageBatches[lsContainer.currentBatchIndex].messageCount++
}
// AWS needs the time in milliseconds. time.UnixNano() returns the time in nanoseconds since the epoch.
// We store the timestamp with nanosecond precision here to get proper ordering; it is reduced to milliseconds later
metricTime := m.Time().UnixNano()
// Adding the metric to the batch
lsContainer.messageBatches[lsContainer.currentBatchIndex].logEvents =
append(lsContainer.messageBatches[lsContainer.currentBatchIndex].logEvents,
types.InputLogEvent{
Message: &logData,
Timestamp: &metricTime})
}
// Sort log events by timestamp and send them to CloudWatch Logs
for logStream, elem := range c.ls {
for index, batch := range elem.messageBatches {
if len(batch.logEvents) == 0 {
continue
}
// Sorting
sort.Slice(batch.logEvents, func(i, j int) bool {
return *batch.logEvents[i].Timestamp < *batch.logEvents[j].Timestamp
})
putLogEvents := cloudwatchlogs.PutLogEventsInput{LogGroupName: &c.LogGroup, LogStreamName: &logStream}
if elem.sequenceToken == "" {
// This is the first attempt to write to log stream,
// need to check log stream existence and create it if necessary
describeLogStreamOutput, err := c.svc.DescribeLogStreams(context.Background(), &cloudwatchlogs.DescribeLogStreamsInput{
LogGroupName: &c.LogGroup,
LogStreamNamePrefix: &logStream})
if err == nil && len(describeLogStreamOutput.LogStreams) == 0 {
_, err := c.svc.CreateLogStream(context.Background(), &cloudwatchlogs.CreateLogStreamInput{
LogGroupName: &c.LogGroup,
LogStreamName: &logStream})
if err != nil {
c.Log.Errorf("Can't create log stream %q in log group. Reason: %v %q.", logStream, c.LogGroup, err)
continue
}
putLogEvents.SequenceToken = nil
} else if err == nil && len(describeLogStreamOutput.LogStreams) == 1 {
putLogEvents.SequenceToken = describeLogStreamOutput.LogStreams[0].UploadSequenceToken
} else if err == nil && len(describeLogStreamOutput.LogStreams) > 1 { // Ambiguity
c.Log.Errorf("More than 1 log stream found with prefix %q in log group %q.", logStream, c.LogGroup)
continue
} else {
c.Log.Errorf("Error describing log streams in log group %q. Reason: %v", c.LogGroup, err)
continue
}
} else {
putLogEvents.SequenceToken = &c.ls[logStream].sequenceToken
}
// Upload log events
// Adjust timestamps to milliseconds, as required by CloudWatch Logs
for _, event := range batch.logEvents {
*event.Timestamp = *event.Timestamp / 1000000
}
putLogEvents.LogEvents = batch.logEvents
// There is a quota of 5 requests per second per log stream. Additional
// requests are throttled. This quota can't be changed.
putLogEventsOutput, err := c.svc.PutLogEvents(context.Background(), &putLogEvents)
if err != nil {
c.Log.Errorf("Can't push logs batch to AWS. Reason: %v", err)
continue
}
// Cleanup batch
elem.messageBatches[index] = messageBatch{
messageCount: 0,
}
elem.sequenceToken = *putLogEventsOutput.NextSequenceToken
}
}
return nil
}
func init() {
outputs.Add("cloudwatch_logs", func() telegraf.Output {
return &CloudWatchLogs{}
})
}

plugins/outputs/cloudwatch_logs/cloudwatch_logs_test.go
@@ -0,0 +1,584 @@
package cloudwatch_logs
import (
"context"
"fmt"
"math/rand"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs"
"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs/types"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/common/aws"
"github.com/influxdata/telegraf/testutil"
)
type mockCloudWatchLogs struct {
logStreamName string
pushedLogEvents []types.InputLogEvent
}
func (c *mockCloudWatchLogs) Init(lsName string) {
c.logStreamName = lsName
c.pushedLogEvents = make([]types.InputLogEvent, 0)
}
func (*mockCloudWatchLogs) DescribeLogGroups(
context.Context,
*cloudwatchlogs.DescribeLogGroupsInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.DescribeLogGroupsOutput, error) {
return nil, nil
}
func (c *mockCloudWatchLogs) DescribeLogStreams(
context.Context,
*cloudwatchlogs.DescribeLogStreamsInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.DescribeLogStreamsOutput, error) {
arn := "arn"
creationTime := time.Now().Unix()
sequenceToken := "arbitraryToken"
output := &cloudwatchlogs.DescribeLogStreamsOutput{
LogStreams: []types.LogStream{
{
Arn: &arn,
CreationTime: &creationTime,
FirstEventTimestamp: &creationTime,
LastEventTimestamp: &creationTime,
LastIngestionTime: &creationTime,
LogStreamName: &c.logStreamName,
UploadSequenceToken: &sequenceToken,
}},
NextToken: &sequenceToken,
}
return output, nil
}
func (*mockCloudWatchLogs) CreateLogStream(
context.Context,
*cloudwatchlogs.CreateLogStreamInput,
...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.CreateLogStreamOutput, error) {
return nil, nil
}
func (c *mockCloudWatchLogs) PutLogEvents(
_ context.Context,
input *cloudwatchlogs.PutLogEventsInput,
_ ...func(options *cloudwatchlogs.Options),
) (*cloudwatchlogs.PutLogEventsOutput, error) {
sequenceToken := "arbitraryToken"
output := &cloudwatchlogs.PutLogEventsOutput{NextSequenceToken: &sequenceToken}
// Saving messages
c.pushedLogEvents = append(c.pushedLogEvents, input.LogEvents...)
return output, nil
}
// Ensure mockCloudWatchLogs implements the cloudWatchLogs interface
var _ cloudWatchLogs = (*mockCloudWatchLogs)(nil)
func RandStringBytes(n int) string {
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
b := make([]byte, n)
for i := range b {
b[i] = letterBytes[rand.Intn(len(letterBytes))]
}
return string(b)
}
func TestInit(t *testing.T) {
tests := []struct {
name string
expectedErrorString string
plugin *CloudWatchLogs
}{
{
name: "log group is not set",
expectedErrorString: "log group is not set",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "field:message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "log stream is not set",
expectedErrorString: "log stream is not set",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "",
LDMetricName: "docker_log",
LDSource: "field:message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "log data metrics name is not set",
expectedErrorString: "log data metrics name is not set",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "",
LDSource: "field:message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "log data source is not set",
expectedErrorString: "log data source is not set",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "log data source is not properly formatted (no divider)",
expectedErrorString: "log data source is not properly formatted, ':' is missing.\n" +
"Should be 'tag:<tag_name>' or 'field:<field_name>'",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "field_message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "log data source is not properly formatted (inappropriate fields)",
expectedErrorString: "log data source is not properly formatted.\n" +
"Should be 'tag:<tag_name>' or 'field:<field_name>'",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "bla:bla",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "valid config",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "tag:location",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
{
name: "valid config with EndpointURL",
plugin: &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
EndpointURL: "https://test.com",
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "tag:location",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if tt.expectedErrorString != "" {
require.EqualError(t, tt.plugin.Init(), tt.expectedErrorString)
} else {
require.NoError(t, tt.plugin.Init())
}
})
}
}
func TestConnect(t *testing.T) {
// Mock CloudWatch Logs endpoint, used only by plugin.Connect
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
fmt.Fprintln(w,
`{
"logGroups": [
{
"arn": "string",
"creationTime": 123456789,
"kmsKeyId": "string",
"logGroupName": "TestLogGroup",
"metricFilterCount": 1,
"retentionInDays": 10,
"storedBytes": 0
}
]
}`)
}))
defer ts.Close()
plugin := &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
EndpointURL: ts.URL,
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "field:message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
}
require.NoError(t, plugin.Init())
require.NoError(t, plugin.Connect())
}
func TestWrite(t *testing.T) {
// Mock CloudWatch Logs endpoint, used only by plugin.Connect
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
fmt.Fprintln(w,
`{
"logGroups": [
{
"arn": "string",
"creationTime": 123456789,
"kmsKeyId": "string",
"logGroupName": "TestLogGroup",
"metricFilterCount": 1,
"retentionInDays": 1,
"storedBytes": 0
}
]
}`)
}))
defer ts.Close()
plugin := &CloudWatchLogs{
CredentialConfig: aws.CredentialConfig{
Region: "eu-central-1",
AccessKey: "dummy",
SecretKey: "dummy",
EndpointURL: ts.URL,
},
LogGroup: "TestLogGroup",
LogStream: "tag:source",
LDMetricName: "docker_log",
LDSource: "field:message",
Log: testutil.Logger{
Name: "outputs.cloudwatch_logs",
},
}
require.NoError(t, plugin.Init())
require.NoError(t, plugin.Connect())
tests := []struct {
name string
logStreamName string
metrics []telegraf.Metric
expectedMetricsOrder map[int]int // map[<index of pushed log event>]<index of corresponding metric>
expectedMetricsCount int
}{
{
name: "Sorted by timestamp log entries",
logStreamName: "deadbeef",
expectedMetricsOrder: map[int]int{0: 0, 1: 1},
expectedMetricsCount: 2,
metrics: []telegraf.Metric{
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "Sorted: message #1",
},
time.Now().Add(-time.Minute),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "Sorted: message #2",
},
time.Now(),
),
},
},
{
name: "Unsorted log entries",
logStreamName: "deadbeef",
expectedMetricsOrder: map[int]int{0: 1, 1: 0},
expectedMetricsCount: 2,
metrics: []telegraf.Metric{
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "Unsorted: message #1",
},
time.Now(),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "Unsorted: message #2",
},
time.Now().Add(-time.Minute),
),
},
},
{
name: "Too old log entry & log entry in the future",
logStreamName: "deadbeef",
expectedMetricsCount: 0,
metrics: []telegraf.Metric{
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "message #1",
},
time.Now().Add(-maxPastLogEventTimeOffset).Add(-time.Hour),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "message #2",
},
time.Now().Add(maxFutureLogEventTimeOffset).Add(time.Hour),
),
},
},
{
name: "Oversized log entry",
logStreamName: "deadbeef",
expectedMetricsCount: 0,
metrics: []telegraf.Metric{
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
// A very long message
"message": RandStringBytes(maxLogMessageLength + 1),
},
time.Now().Add(-time.Minute),
),
},
},
{
name: "Batching log entries",
logStreamName: "deadbeef",
expectedMetricsOrder: map[int]int{0: 0, 1: 1, 2: 2, 3: 3, 4: 4},
expectedMetricsCount: 5,
metrics: []telegraf.Metric{
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
// Very long message to force batch rollover
"message": "batch1 message1:" + RandStringBytes(maxLogMessageLength-16),
},
time.Now().Add(-4*time.Minute),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
// Very long message to force batch rollover
"message": "batch1 message2:" + RandStringBytes(maxLogMessageLength-16),
},
time.Now().Add(-3*time.Minute),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
// Very long message to force batch rollover
"message": "batch1 message3:" + RandStringBytes(maxLogMessageLength-16),
},
time.Now().Add(-2*time.Minute),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
// Very long message to force batch rollover
"message": "batch1 message4:" + RandStringBytes(maxLogMessageLength-16),
},
time.Now().Add(-time.Minute),
),
testutil.MustMetric(
"docker_log",
map[string]string{
"container_name": "telegraf",
"container_image": "influxdata/telegraf",
"container_version": "1.11.0",
"stream": "tty",
"source": "deadbeef",
},
map[string]interface{}{
"container_id": "deadbeef",
"message": "batch2 message1",
},
time.Now(),
),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Replace the CloudWatch Logs client with the mock
mockCwl := &mockCloudWatchLogs{}
mockCwl.Init(tt.logStreamName)
plugin.svc = mockCwl
require.NoError(t, plugin.Write(tt.metrics))
require.Len(t, mockCwl.pushedLogEvents, tt.expectedMetricsCount)
for index, elem := range mockCwl.pushedLogEvents {
require.Equal(t, *elem.Message, tt.metrics[tt.expectedMetricsOrder[index]].Fields()["message"])
require.Equal(t, *elem.Timestamp, tt.metrics[tt.expectedMetricsOrder[index]].Time().UnixNano()/1000000)
}
})
}
}

plugins/outputs/cloudwatch_logs/sample.conf
@@ -0,0 +1,60 @@
# Configuration for AWS CloudWatchLogs output.
[[outputs.cloudwatch_logs]]
## The region is the Amazon region that you wish to connect to.
## Examples include but are not limited to:
## - us-west-1
## - us-west-2
## - us-east-1
## - ap-southeast-1
## - ap-southeast-2
## ...
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
#access_key = ""
#secret_key = ""
#token = ""
#role_arn = ""
#web_identity_token_file = ""
#role_session_name = ""
#profile = ""
#shared_credential_file = ""
## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default, e.g endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"
## Log stream in log group
## Either a log stream name or a reference to a metric attribute from which
## the name can be parsed: tag:<TAG_NAME> or field:<FIELD_NAME>. If the log
## stream does not exist, it will be created. Since AWS does not automatically
## delete log streams whose log entries have expired (i.e. empty log streams),
## you need to put appropriate house-keeping in place
## (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"
## Source of log data - metric name
## Specify the name of the metric from which the log data should be
## retrieved. E.g., if you are using the docker_log plugin to stream logs from
## containers, then specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"
## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## E.g., if you are using the docker_log plugin to stream logs from containers,
## then specify log_data_source = "field:message"
log_data_source = "field:message"