Adding upstream version 1.34.4.
Signed-off-by: Daniel Baumann <daniel@debian.org>

parent e393c3af3f
commit 4978089aab
4963 changed files with 677545 additions and 0 deletions
169 plugins/inputs/cloudwatch_metric_streams/README.md Normal file

@ -0,0 +1,169 @@
# Amazon CloudWatch Metric Streams Input Plugin

This plugin listens for metrics sent via HTTP by
[CloudWatch metric streams][metric_streams] implementing the required
[response specifications][response_specs].

> [!IMPORTANT]
> Using this plugin can incur costs, see the _Metric Streams example_ in
> [CloudWatch pricing][pricing].

⭐ Telegraf v1.24.0
🏷️ cloud
💻 all

[pricing]: https://aws.amazon.com/cloudwatch/pricing
[metric_streams]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
[response_specs]: https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering, etc.
See [CONFIGURATION.md][CONFIGURATION.md] for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration

```toml @sample.conf
# AWS Metric Streams listener
[[inputs.cloudwatch_metric_streams]]
  ## Address and port to host HTTP listener on
  service_address = ":443"

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## maximum duration before timing out read of the request
  # read_timeout = "10s"

  ## maximum duration before timing out write of the response
  # write_timeout = "10s"

  ## Maximum allowed http request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Optional access key for Firehose security.
  # access_key = "test-key"

  ## An optional flag to keep Metric Streams metrics compatible with
  ## CloudWatch's API naming
  # api_compatability = false

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
```
## Troubleshooting

The plugin has its own internal metrics for troubleshooting:

* Requests Received
  * The number of requests received by the listener.
* Writes Served
  * The number of writes served by the listener.
* Bad Requests
  * The number of bad requests, separated by the error code as a tag.
* Request Time
  * The duration of the request measured in ns.
* Age Max
  * The maximum age of a metric in this interval. This is useful for offsetting
    any lag or latency measurements in a metrics pipeline that measures based
    on the timestamp.
* Age Min
  * The minimum age of a metric in this interval.

Specific errors will be logged and an error will be returned to AWS.

For additional help check the [Firehose Troubleshooting][firehose_troubleshoot]
page.

[firehose_troubleshoot]: https://docs.aws.amazon.com/firehose/latest/dev/http_troubleshooting.html
## Metrics

Metrics sent by AWS are Base64 encoded blocks of JSON data.
The JSON block below is the Base64 decoded data in the `data`
field of a `record`.
There can be multiple blocks of JSON for each `data` field
in each `record` and there can be multiple `record` fields in
a request.

The metric when decoded may look like this:

```json
{
    "metric_stream_name": "sandbox-dev-cloudwatch-metric-stream",
    "account_id": "541737779709",
    "region": "us-west-2",
    "namespace": "AWS/EC2",
    "metric_name": "CPUUtilization",
    "dimensions": {
        "InstanceId": "i-0efc7ghy09c123428"
    },
    "timestamp": 1651679580000,
    "value": {
        "max": 10.011666666666667,
        "min": 10.011666666666667,
        "sum": 10.011666666666667,
        "count": 1
    },
    "unit": "Percent"
}
```

### Tags

All tags in the `dimensions` list are added as tags to the metric.

The `account_id` and `region` tags are added to each metric as well.

### Measurements and Fields

The metric name is a combination of `namespace` and `metric_name`,
separated by `_` and lowercased.

Each aggregate in the `value` list becomes a field.

These fields are optionally renamed to match the CloudWatch API for
easier transition from the API to Metric Streams. This relies on
setting the `api_compatability` flag in the configuration.

The timestamp applied is the timestamp from the metric,
typically 3-5 minutes older than the time processed due
to CloudWatch delays.

## Example Output

Example output based on the above JSON, without and with the
`api_compatability` flag:

**Standard Metric Streams format:**

```text
aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 max=10.011666666666667,min=10.011666666666667,sum=10.011666666666667,count=1 1651679580000
```

**API Compatability format:**

```text
aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 maximum=10.011666666666667,minimum=10.011666666666667,sum=10.011666666666667,samplecount=1 1651679580000
```

@ -0,0 +1,424 @@
//go:generate ../../../tools/readme_config_includer/generator
package cloudwatch_metric_streams

import (
	"compress/gzip"
	"crypto/tls"
	_ "embed"
	"encoding/base64"
	"encoding/json"
	"errors"
	"math"
	"net"
	"net/http"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/config"
	"github.com/influxdata/telegraf/internal/choice"
	common_tls "github.com/influxdata/telegraf/plugins/common/tls"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/selfstat"
)

//go:embed sample.conf
var sampleConfig string

// defaultMaxBodySize is the default maximum request body size, in bytes (500 MB).
// If the request body is over this size, we will return an HTTP 413 error.
const defaultMaxBodySize = 500 * 1024 * 1024

type CloudWatchMetricStreams struct {
	ServiceAddress   string          `toml:"service_address"`
	Paths            []string        `toml:"paths"`
	MaxBodySize      config.Size     `toml:"max_body_size"`
	ReadTimeout      config.Duration `toml:"read_timeout"`
	WriteTimeout     config.Duration `toml:"write_timeout"`
	AccessKey        string          `toml:"access_key"`
	APICompatability bool            `toml:"api_compatability"`

	requestsReceived selfstat.Stat
	writesServed     selfstat.Stat
	requestTime      selfstat.Stat
	ageMax           selfstat.Stat
	ageMin           selfstat.Stat

	Log telegraf.Logger
	common_tls.ServerConfig
	wg       sync.WaitGroup
	close    chan struct{}
	listener net.Listener
	acc      telegraf.Accumulator
}

type request struct {
	RequestID string `json:"requestId"`
	Timestamp int64  `json:"timestamp"`
	Records   []struct {
		Data string `json:"data"`
	} `json:"records"`
}

type data struct {
	MetricStreamName string             `json:"metric_stream_name"`
	AccountID        string             `json:"account_id"`
	Region           string             `json:"region"`
	Namespace        string             `json:"namespace"`
	MetricName       string             `json:"metric_name"`
	Dimensions       map[string]string  `json:"dimensions"`
	Timestamp        int64              `json:"timestamp"`
	Value            map[string]float64 `json:"value"`
	Unit             string             `json:"unit"`
}

type response struct {
	RequestID string `json:"requestId"`
	Timestamp int64  `json:"timestamp"`
}

type age struct {
	max time.Duration
	min time.Duration
}

func (*CloudWatchMetricStreams) SampleConfig() string {
	return sampleConfig
}

func (cms *CloudWatchMetricStreams) Init() error {
	tags := map[string]string{
		"address": cms.ServiceAddress,
	}
	cms.requestsReceived = selfstat.Register("cloudwatch_metric_streams", "requests_received", tags)
	cms.writesServed = selfstat.Register("cloudwatch_metric_streams", "writes_served", tags)
	cms.requestTime = selfstat.Register("cloudwatch_metric_streams", "request_time", tags)
	cms.ageMax = selfstat.Register("cloudwatch_metric_streams", "age_max", tags)
	cms.ageMin = selfstat.Register("cloudwatch_metric_streams", "age_min", tags)

	if cms.MaxBodySize == 0 {
		cms.MaxBodySize = config.Size(defaultMaxBodySize)
	}

	if cms.ReadTimeout < config.Duration(time.Second) {
		cms.ReadTimeout = config.Duration(time.Second * 10)
	}

	if cms.WriteTimeout < config.Duration(time.Second) {
		cms.WriteTimeout = config.Duration(time.Second * 10)
	}

	return nil
}

// Start starts the http listener service.
func (cms *CloudWatchMetricStreams) Start(acc telegraf.Accumulator) error {
	cms.acc = acc
	server := cms.createHTTPServer()

	var err error
	server.TLSConfig, err = cms.ServerConfig.TLSConfig()
	if err != nil {
		return err
	}
	if server.TLSConfig != nil {
		cms.listener, err = tls.Listen("tcp", cms.ServiceAddress, server.TLSConfig)
	} else {
		cms.listener, err = net.Listen("tcp", cms.ServiceAddress)
	}
	if err != nil {
		return err
	}

	cms.wg.Add(1)
	go func() {
		defer cms.wg.Done()
		if err := server.Serve(cms.listener); err != nil {
			if !errors.Is(err, net.ErrClosed) {
				cms.Log.Errorf("Serve failed: %v", err)
			}
			close(cms.close)
		}
	}()

	cms.Log.Infof("Listening on %s", cms.listener.Addr().String())

	return nil
}

func (*CloudWatchMetricStreams) Gather(telegraf.Accumulator) error {
	return nil
}

func (cms *CloudWatchMetricStreams) Stop() {
	if cms.listener != nil {
		cms.listener.Close()
	}
	cms.wg.Wait()
}

func (cms *CloudWatchMetricStreams) ServeHTTP(res http.ResponseWriter, req *http.Request) {
	cms.requestsReceived.Incr(1)
	start := time.Now()
	defer cms.recordRequestTime(start)

	handler := cms.serveWrite

	if !choice.Contains(req.URL.Path, cms.Paths) {
		handler = http.NotFound
	}

	cms.authenticateIfSet(handler, res, req)
}

func (a *age) record(t time.Duration) {
	if t > a.max {
		a.max = t
	}

	if t < a.min {
		a.min = t
	}
}

func (a *age) submitMax(stat selfstat.Stat) {
	stat.Incr(a.max.Nanoseconds())
}

func (a *age) submitMin(stat selfstat.Stat) {
	stat.Incr(a.min.Nanoseconds())
}

func (cms *CloudWatchMetricStreams) createHTTPServer() *http.Server {
	return &http.Server{
		Addr:         cms.ServiceAddress,
		Handler:      cms,
		ReadTimeout:  time.Duration(cms.ReadTimeout),
		WriteTimeout: time.Duration(cms.WriteTimeout),
	}
}

func (cms *CloudWatchMetricStreams) recordRequestTime(start time.Time) {
	elapsed := time.Since(start)
	cms.requestTime.Incr(elapsed.Nanoseconds())
}

func (cms *CloudWatchMetricStreams) serveWrite(res http.ResponseWriter, req *http.Request) {
	select {
	case <-cms.close:
		res.WriteHeader(http.StatusGone)
		return
	default:
	}

	defer cms.writesServed.Incr(1)

	// Check that the content length is not too large for us to handle.
	if req.ContentLength > int64(cms.MaxBodySize) {
		cms.Log.Errorf("content length exceeded maximum body size")
		if err := tooLarge(res); err != nil {
			cms.Log.Debugf("error in too-large: %v", err)
		}
		return
	}

	// Check that the method is a POST
	if req.Method != "POST" {
		cms.Log.Errorf("incompatible request method")
		if err := methodNotAllowed(res); err != nil {
			cms.Log.Debugf("error in method-not-allowed: %v", err)
		}
		return
	}

	// Decode GZIP
	var body = req.Body
	encoding := req.Header.Get("Content-Encoding")

	if encoding == "gzip" {
		reader, err := gzip.NewReader(req.Body)
		if err != nil {
			cms.Log.Errorf("unable to uncompress metric-streams data: %v", err)
			if err := badRequest(res); err != nil {
				cms.Log.Debugf("error in bad-request: %v", err)
			}
			return
		}
		body = reader
		defer reader.Close()
	}

	// Decode the request
	var r request
	err := json.NewDecoder(body).Decode(&r)
	if err != nil {
		cms.Log.Errorf("unable to decode metric-streams request: %v", err)
		if err := badRequest(res); err != nil {
			cms.Log.Debugf("error in bad-request: %v", err)
		}
		return
	}

	agesInRequest := &age{max: 0, min: math.MaxInt32}
	defer agesInRequest.submitMax(cms.ageMax)
	defer agesInRequest.submitMin(cms.ageMin)

	// For each record, decode the base64 data and store it in a data struct.
	// Metrics from Metric Streams are Base64 encoded JSON:
	// https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html
	for _, record := range r.Records {
		b, err := base64.StdEncoding.DecodeString(record.Data)
		if err != nil {
			cms.Log.Errorf("unable to base64 decode metric-streams data: %v", err)
			if err := badRequest(res); err != nil {
				cms.Log.Debugf("error in bad-request: %v", err)
			}
			return
		}

		list := strings.Split(string(b), "\n")

		// If the last element is empty, remove it to avoid unexpected JSON
		if len(list) > 0 {
			if list[len(list)-1] == "" {
				list = list[:len(list)-1]
			}
		}

		for _, js := range list {
			var d data
			err = json.Unmarshal([]byte(js), &d)
			if err != nil {
				cms.Log.Errorf("unable to unmarshal metric-streams data: %v", err)
				if err := badRequest(res); err != nil {
					cms.Log.Debugf("error in bad-request: %v", err)
				}
				return
			}
			cms.composeMetrics(d)
			agesInRequest.record(time.Since(time.Unix(d.Timestamp/1000, 0)))
		}
	}

	// Compose the response to AWS using the request's requestId
	// https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html#responseformat
	response := response{
		RequestID: r.RequestID,
		Timestamp: time.Now().UnixNano() / 1000000,
	}

	marshalled, err := json.Marshal(response)
	if err != nil {
		cms.Log.Errorf("unable to compose response: %v", err)
		if err := badRequest(res); err != nil {
			cms.Log.Debugf("error in bad-request: %v", err)
		}
		return
	}

	res.Header().Set("Content-Type", "application/json")
	res.WriteHeader(http.StatusOK)
	_, err = res.Write(marshalled)
	if err != nil {
		cms.Log.Debugf("Error writing response to AWS: %s", err.Error())
		return
	}
}

func (cms *CloudWatchMetricStreams) composeMetrics(data data) {
	fields := make(map[string]interface{})
	tags := make(map[string]string)
	timestamp := time.Unix(data.Timestamp/1000, 0)

	namespace := strings.Replace(data.Namespace, "/", "_", -1)
	measurement := strings.ToLower(namespace + "_" + data.MetricName)

	for field, value := range data.Value {
		fields[field] = value
	}

	// Rename statistics to match the CloudWatch API if in API Compatability mode
	if cms.APICompatability {
		if v, ok := fields["max"]; ok {
			fields["maximum"] = v
			delete(fields, "max")
		}

		if v, ok := fields["min"]; ok {
			fields["minimum"] = v
			delete(fields, "min")
		}

		if v, ok := fields["count"]; ok {
			fields["samplecount"] = v
			delete(fields, "count")
		}
	}

	tags["accountId"] = data.AccountID
	tags["region"] = data.Region

	for dimension, value := range data.Dimensions {
		tags[dimension] = value
	}

	cms.acc.AddFields(measurement, fields, tags, timestamp)
}

func tooLarge(res http.ResponseWriter) error {
	tags := map[string]string{
		"status_code": strconv.Itoa(http.StatusRequestEntityTooLarge),
	}
	selfstat.Register("cloudwatch_metric_streams", "bad_requests", tags).Incr(1)
	res.Header().Set("Content-Type", "application/json")
	res.WriteHeader(http.StatusRequestEntityTooLarge)
	_, err := res.Write([]byte(`{"error":"http: request body too large"}`))
	return err
}

func methodNotAllowed(res http.ResponseWriter) error {
	tags := map[string]string{
		"status_code": strconv.Itoa(http.StatusMethodNotAllowed),
	}
	selfstat.Register("cloudwatch_metric_streams", "bad_requests", tags).Incr(1)
	res.Header().Set("Content-Type", "application/json")
	res.WriteHeader(http.StatusMethodNotAllowed)
	_, err := res.Write([]byte(`{"error":"http: method not allowed"}`))
	return err
}

func badRequest(res http.ResponseWriter) error {
	tags := map[string]string{
		"status_code": strconv.Itoa(http.StatusBadRequest),
	}
	selfstat.Register("cloudwatch_metric_streams", "bad_requests", tags).Incr(1)
	res.Header().Set("Content-Type", "application/json")
	res.WriteHeader(http.StatusBadRequest)
	_, err := res.Write([]byte(`{"error":"http: bad request"}`))
	return err
}

func (cms *CloudWatchMetricStreams) authenticateIfSet(handler http.HandlerFunc, res http.ResponseWriter, req *http.Request) {
	if cms.AccessKey != "" {
		auth := req.Header.Get("X-Amz-Firehose-Access-Key")
		if auth == "" || auth != cms.AccessKey {
			http.Error(res, "Unauthorized.", http.StatusUnauthorized)
			return
		}
		handler(res, req)
	} else {
		handler(res, req)
	}
}

func init() {
	inputs.Add("cloudwatch_metric_streams", func() telegraf.Input {
		return &CloudWatchMetricStreams{
			ServiceAddress: ":443",
			Paths:          []string{"/telegraf"},
		}
	})
}

@ -0,0 +1,382 @@
package cloudwatch_metric_streams
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/influxdata/telegraf/config"
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
)
|
||||
|
||||
const (
|
||||
badMsg = "blahblahblah: 42\n"
|
||||
emptyMsg = ""
|
||||
accessKey = "super-secure-password!"
|
||||
badAccessKey = "super-insecure-password!"
|
||||
maxBodySize = 524288000
|
||||
)
|
||||
|
||||
var (
|
||||
pki = testutil.NewPKI("../../../testutil/pki")
|
||||
)
|
||||
|
||||
func newTestCloudWatchMetricStreams() *CloudWatchMetricStreams {
|
||||
metricStream := &CloudWatchMetricStreams{
|
||||
Log: testutil.Logger{},
|
||||
ServiceAddress: "localhost:8080",
|
||||
Paths: []string{"/write"},
|
||||
MaxBodySize: config.Size(maxBodySize),
|
||||
close: make(chan struct{}),
|
||||
}
|
||||
return metricStream
|
||||
}
|
||||
|
||||
func newTestMetricStreamAuth() *CloudWatchMetricStreams {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
metricStream.AccessKey = accessKey
|
||||
return metricStream
|
||||
}
|
||||
|
||||
func newTestMetricStreamHTTPS() *CloudWatchMetricStreams {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
metricStream.ServerConfig = *pki.TLSServerConfig()
|
||||
|
||||
return metricStream
|
||||
}
|
||||
|
||||
func newTestCompatibleCloudWatchMetricStreams() *CloudWatchMetricStreams {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
metricStream.APICompatability = true
|
||||
return metricStream
|
||||
}
|
||||
|
||||
func getHTTPSClient() *http.Client {
|
||||
tlsConfig, err := pki.TLSClientConfig().TLSConfig()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return &http.Client{
|
||||
Transport: &http.Transport{
|
||||
TLSClientConfig: tlsConfig,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func createURL(scheme, path string) string {
|
||||
u := url.URL{
|
||||
Scheme: scheme,
|
||||
Host: "localhost:8080",
|
||||
Path: path,
|
||||
RawQuery: "",
|
||||
}
|
||||
return u.String()
|
||||
}
|
||||
|
||||
func readJSON(t *testing.T, jsonFilePath string) []byte {
|
||||
data, err := os.ReadFile(jsonFilePath)
|
||||
require.NoErrorf(t, err, "could not read from data file %s", jsonFilePath)
|
||||
|
||||
return data
|
||||
}
|
||||
|
||||
func TestInvalidListenerConfig(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
metricStream.ServiceAddress = "address_without_port"
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.Error(t, metricStream.Start(acc))
|
||||
|
||||
// Stop is called when any ServiceInput fails to start; it must succeed regardless of state
|
||||
metricStream.Stop()
|
||||
}
|
||||
|
||||
func TestWriteHTTPSNoClientAuth(t *testing.T) {
|
||||
metricStream := newTestMetricStreamHTTPS()
|
||||
metricStream.TLSAllowedCACerts = nil
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
cas := x509.NewCertPool()
|
||||
cas.AppendCertsFromPEM([]byte(pki.ReadServerCert()))
|
||||
noClientAuthClient := &http.Client{
|
||||
Transport: &http.Transport{
|
||||
TLSClientConfig: &tls.Config{
|
||||
RootCAs: cas,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// post single message to the metric stream listener
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
resp, err := noClientAuthClient.Post(createURL("https", "/write"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 200, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPSWithClientAuth(t *testing.T) {
|
||||
metricStream := newTestMetricStreamHTTPS()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post single message to the metric stream listener
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
resp, err := getHTTPSClient().Post(createURL("https", "/write"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 200, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPSuccessfulAuth(t *testing.T) {
|
||||
metricStream := newTestMetricStreamAuth()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
client := &http.Client{}
|
||||
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
req, err := http.NewRequest("POST", createURL("http", "/write"), bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
req.Header.Set("X-Amz-Firehose-Access-Key", accessKey)
|
||||
|
||||
// post single message to the metric stream listener
|
||||
resp, err := client.Do(req)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, http.StatusOK, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPFailedAuth(t *testing.T) {
|
||||
metricStream := newTestMetricStreamAuth()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
client := &http.Client{}
|
||||
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
req, err := http.NewRequest("POST", createURL("http", "/write"), bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
req.Header.Set("X-Amz-Firehose-Access-Key", badAccessKey)
|
||||
|
||||
// post single message to the metric stream listener
|
||||
resp, err := client.Do(req)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, http.StatusUnauthorized, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTP(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post single message to the metric stream listener
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 200, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPMultipleRecords(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post multiple records to the metric stream listener
|
||||
records := readJSON(t, "testdata/records.json")
|
||||
resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBuffer(records))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 200, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPExactMaxBodySize(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
metricStream.MaxBodySize = config.Size(len(record))
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post single message to the metric stream listener
|
||||
resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 200, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPVerySmallMaxBody(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
metricStream.MaxBodySize = config.Size(512)
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post single message to the metric stream listener
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 413, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestReceive404ForInvalidEndpoint(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
require.NoError(t, metricStream.Init())
|
||||
require.NoError(t, metricStream.Start(acc))
|
||||
defer metricStream.Stop()
|
||||
|
||||
// post single message to the metric stream listener
|
||||
record := readJSON(t, "testdata/record.json")
|
||||
resp, err := http.Post(createURL("http", "/foobar"), "", bytes.NewBuffer(record))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, resp.Body.Close())
|
||||
require.EqualValues(t, 404, resp.StatusCode)
|
||||
}
|
||||
|
||||
func TestWriteHTTPInvalid(t *testing.T) {
|
||||
metricStream := newTestCloudWatchMetricStreams()
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
	require.NoError(t, metricStream.Init())
	require.NoError(t, metricStream.Start(acc))
	defer metricStream.Stop()

	// post a badly formatted message to the metric stream listener
	resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBufferString(badMsg))
	require.NoError(t, err)
	require.NoError(t, resp.Body.Close())
	require.EqualValues(t, 400, resp.StatusCode)
}

func TestWriteHTTPEmpty(t *testing.T) {
	metricStream := newTestCloudWatchMetricStreams()

	acc := &testutil.Accumulator{}
	require.NoError(t, metricStream.Init())
	require.NoError(t, metricStream.Start(acc))
	defer metricStream.Stop()

	// post an empty message to the metric stream listener
	resp, err := http.Post(createURL("http", "/write"), "", bytes.NewBufferString(emptyMsg))
	require.NoError(t, err)
	require.NoError(t, resp.Body.Close())
	require.EqualValues(t, 400, resp.StatusCode)
}

func TestComposeMetrics(t *testing.T) {
	metricStream := newTestCloudWatchMetricStreams()

	acc := &testutil.Accumulator{}
	require.NoError(t, metricStream.Init())
	require.NoError(t, metricStream.Start(acc))
	defer metricStream.Stop()

	// compose a data object for writing
	data := data{
		MetricStreamName: "cloudwatch-metric-stream",
		AccountID:        "546734499701",
		Region:           "us-west-2",
		Namespace:        "AWS/EC2",
		MetricName:       "CPUUtilization",
		Dimensions:       map[string]string{"AutoScalingGroupName": "test-autoscaling-group"},
		Timestamp:        1651679400000,
		Value:            map[string]float64{"max": 0.4366666666666666, "min": 0.3683333333333333, "sum": 1.9399999999999997, "count": 5.0},
		Unit:             "Percent",
	}

	// compose the metrics from data
	metricStream.composeMetrics(data)

	acc.Wait(1)
	acc.AssertContainsTaggedFields(t, "aws_ec2_cpuutilization",
		map[string]interface{}{"max": 0.4366666666666666, "min": 0.3683333333333333, "sum": 1.9399999999999997, "count": 5.0},
		map[string]string{"AutoScalingGroupName": "test-autoscaling-group", "accountId": "546734499701", "region": "us-west-2"},
	)
}
func TestComposeAPICompatibleMetrics(t *testing.T) {
	metricStream := newTestCompatibleCloudWatchMetricStreams()

	acc := &testutil.Accumulator{}
	require.NoError(t, metricStream.Init())
	require.NoError(t, metricStream.Start(acc))
	defer metricStream.Stop()

	// compose a data object for writing
	data := data{
		MetricStreamName: "cloudwatch-metric-stream",
		AccountID:        "546734499701",
		Region:           "us-west-2",
		Namespace:        "AWS/EC2",
		MetricName:       "CPUUtilization",
		Dimensions:       map[string]string{"AutoScalingGroupName": "test-autoscaling-group"},
		Timestamp:        1651679400000,
		Value:            map[string]float64{"max": 0.4366666666666666, "min": 0.3683333333333333, "sum": 1.9399999999999997, "count": 5.0},
		Unit:             "Percent",
	}

	// compose the metrics from data
	metricStream.composeMetrics(data)

	acc.Wait(1)
	acc.AssertContainsTaggedFields(t, "aws_ec2_cpuutilization",
		map[string]interface{}{"maximum": 0.4366666666666666, "minimum": 0.3683333333333333, "sum": 1.9399999999999997, "samplecount": 5.0},
		map[string]string{"AutoScalingGroupName": "test-autoscaling-group", "accountId": "546734499701", "region": "us-west-2"},
	)
}

// post GZIP-encoded data to the metric stream listener
func TestWriteHTTPGzippedData(t *testing.T) {
	metricStream := newTestCloudWatchMetricStreams()

	acc := &testutil.Accumulator{}
	require.NoError(t, metricStream.Init())
	require.NoError(t, metricStream.Start(acc))
	defer metricStream.Stop()

	data, err := os.ReadFile("./testdata/records.gz")
	require.NoError(t, err)

	req, err := http.NewRequest("POST", createURL("http", "/write"), bytes.NewBuffer(data))
	require.NoError(t, err)
	req.Header.Set("Content-Encoding", "gzip")

	client := &http.Client{}
	resp, err := client.Do(req)
	require.NoError(t, err)
	require.NoError(t, resp.Body.Close())
	require.EqualValues(t, 200, resp.StatusCode)
}
|
32
plugins/inputs/cloudwatch_metric_streams/sample.conf
Normal file
@ -0,0 +1,32 @@
# AWS Metric Streams listener
[[inputs.cloudwatch_metric_streams]]
  ## Address and port to host HTTP listener on
  service_address = ":443"

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## Maximum duration before timing out read of the request
  # read_timeout = "10s"

  ## Maximum duration before timing out write of the response
  # write_timeout = "10s"

  ## Maximum allowed HTTP request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Optional access key for Firehose security.
  # access_key = "test-key"

  ## An optional flag to keep Metric Streams metrics compatible with
  ## CloudWatch's API naming
  # api_compatability = false

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
9
plugins/inputs/cloudwatch_metric_streams/testdata/record.json
vendored
Normal file
@ -0,0 +1,9 @@
{
  "requestId": "c8291d2e-8c46-4f2a-a8df-2562550287ad",
  "timestamp": 1651679861072,
  "records": [
    {
      "data": "eyJtZXRyaWNfc3RyZWFtX25hbWUiOiJncnBuLXNhbmRib3gtZGV2LWNsb3Vkd2F0Y2gtbWV0cmljLXN0cmVhbSIsImFjY291bnRfaWQiOiI1NDk3MzQzOTk3MDkiLCJyZWdpb24iOiJ1cy13ZXN0LTIiLCJuYW1lc3BhY2UiOiJBV1MvRUMyIiwibWV0cmljX25hbWUiOiJDUFVVdGlsaXphdGlvbiIsImRpbWVuc2lvbnMiOnsiSW5zdGFuY2VJZCI6ImktMGVmYzdmZGYwOWMxMjM0MjgifSwidGltZXN0YW1wIjoxNjUxNjc5NTgwMDAwLCJ2YWx1ZSI6eyJtYXgiOjEwLjAxMTY2NjY2NjY2NjY2NywibWluIjoxMC4wMTE2NjY2NjY2NjY2NjcsInN1bSI6MTAuMDExNjY2NjY2NjY2NjY3LCJjb3VudCI6MS4wfSwidW5pdCI6IlBlcmNlbnQifQ=="
    }
  ]
}
BIN
plugins/inputs/cloudwatch_metric_streams/testdata/records.gz
vendored
Normal file
Binary file not shown.
18
plugins/inputs/cloudwatch_metric_streams/testdata/records.json
vendored
Normal file
File diff suppressed because one or more lines are too long